\begin{document} \tolerance = 9999 \title{Bogomolov-Sommese vanishing and liftability for surface pairs in positive characteristic} \markboth{Tatsuro Kawakami}{Bogomolov-Sommese vanishing and liftability for surface pairs} \begin{abstract} We show that the Bogomolov-Sommese vanishing theorem holds for a log canonical projective surface $(X, B)$ in large characteristic unless the Iitaka dimension of $K_X+\lfloor B \rfloor$ is equal to two. As an application, we prove that a log resolution of a pair of a normal projective surface and a reduced divisor in large characteristic lifts to the ring of Witt vectors when the Iitaka dimension of the log canonical divisor is less than or equal to zero. Moreover, we give explicit and optimal bounds on the characteristic except when the Iitaka dimension is equal to zero. \end{abstract} \tableofcontents \section{Introduction} Vanishing theorems involving differential sheaves play a significant role in the analysis of algebraic varieties. The Bogomolov-Sommese vanishing theorem, which was originally proved in \cite{Bog}, is one of the most important tools of this kind and has been studied by many authors (see \cite{Gra15}, \cite{GKK}, \cite{GKKP}, \cite{JK}, \cite{SS85} for example). \begin{thm}[\textup{Bogomolov-Sommese vanishing theorem, \cite[Corollary 1.3]{Gra15}}]\label{BSVoverC} Let $(X, B)$ be a log canonical (lc, for short) projective pair over the field of complex numbers $\mathbb{C}$. Then \[ H^0(X, (\Omega_X^{[i]}(\operatorname{log}\,\lfloor B\rfloor)\otimes \mathcal{O}_X(-D))^{**})=0 \] for every $\mathbb{Z}$-divisor $D$ on $X$ satisfying $\kappa(X, D)>i$. \end{thm} In Theorem \ref{BSVoverC}, $\kappa(X, D)$ denotes the Iitaka dimension of a $\mathbb{Z}$-divisor $D$, $(-)^{**}$ denotes the double dual, and $\Omega_X^{[i]}(\operatorname{log}\,\lfloor B\rfloor)$ denotes the sheaf of $i$-th logarithmic reflexive differential forms of the pair $(X, \lfloor B\rfloor)$, where $\lfloor B\rfloor$ is the round-down of $B$.
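To illustrate the statement in its simplest case: when $X$ is a smooth projective surface and $B=0$, the case $i=1$ of Theorem \ref{BSVoverC} asserts that
\[
H^0(X, \Omega_X^{1}\otimes \mathcal{O}_X(-D))=0 \quad \text{for every big $\mathbb{Z}$-divisor } D,
\]
that is, $\Omega_X^{1}$ contains no big invertible subsheaf; this recovers Bogomolov's original theorem \cite{Bog}.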
We refer to Definition \ref{Definition:Iitaka dim} and \textit{Notation} for details. The logarithmic extension theorem for $(n+1)$-dimensional lc pairs can be deduced from the Bogomolov-Sommese vanishing theorem for $n$-dimensional log Calabi-Yau pairs (see \cite[Section 9]{Gra}). The vanishing theorem is also applied to show the vanishing of the second cohomology of the tangent sheaf of lc projective surfaces with big anti-canonical divisors so that they have no local-to-global obstruction (see \cite[Proposition 3.1]{HP}). In this paper, we discuss an analog of Theorem \ref{BSVoverC} when the pair $(X, B)$ is defined over an algebraically closed field of positive characteristic and $\dim\,X=2$. It is well-known that the Bogomolov-Sommese vanishing theorem fails when the canonical divisor $K_X$ is big. For example, it is not difficult to see that the sheaf of first differential forms of Raynaud's surface \cite{Ray} contains an ample invertible sheaf. Moreover, Langer \cite[Section 8]{Lan} constructed a pair $(S, F)$ of a smooth rational surface $S$ and a disjoint union of smooth rational curves $F$ such that $\Omega_S(\operatorname{log}\,F)$ contains a big invertible sheaf in every characteristic (see also \cite[Section 11]{Langer19}). In other words, the Bogomolov-Sommese vanishing theorem fails even if $X$ is a smooth rational surface. On the other hand, we can observe that the log canonical divisor $K_S+F$ is big except when the characteristic is equal to two (see Example \ref{Example:Langer's surface} for the details). Therefore, it is natural to ask whether the Bogomolov-Sommese vanishing theorem holds when the log canonical divisor is not big and the characteristic is sufficiently large. We give an affirmative answer to this question. \begin{thm}\label{BSV, Intro} There exists a positive integer $p_0$ with the following property. Let $(X, B)$ be an lc projective surface pair over an algebraically closed field of characteristic $p>p_0$. 
If $\kappa(X, K_X+\lfloor B \rfloor)\neq 2$, then \[ H^0(X, (\Omega_X^{[i]}(\operatorname{log}\,\lfloor B\rfloor)\otimes \mathcal{O}_X(-D))^{**})=0 \] for every $\mathbb{Z}$-divisor $D$ on $X$ satisfying $\kappa(X, D)>i$. Moreover, if $\kappa(X, K_X+\lfloor B \rfloor)=-\infty$ (resp.~$\kappa(X, K_X+\lfloor B \rfloor)=1$), then we can take $p_0=5$ (resp.~$p_0=3$) as an optimal bound. If $\kappa(X, K_X+\lfloor B \rfloor)=0$, then we can take $p_0$ to be the maximum of the Gorenstein indices of klt Calabi-Yau surfaces over all algebraically closed fields. \end{thm} In Theorem \ref{BSV, Intro}, a klt Calabi-Yau surface means a klt projective surface whose canonical divisor is numerically trivial. If the base field is an algebraically closed field of characteristic zero, then the Gorenstein index of a klt Calabi-Yau surface is less than or equal to $21$ by \cite[Theorem C (a)]{Bla}. In general, there exists a uniform bound on the Gorenstein index independent of the choice of the algebraically closed base field (see Lemma \ref{max Gor index}), but its explicit value is not known. It is worth pointing out that Theorem \ref{BSV, Intro} is new even when $B=0$ if $X$ is singular. In particular, Theorem \ref{BSV, Intro} shows that lc projective surfaces whose canonical divisors have negative Iitaka dimension have no local-to-global obstruction when the characteristic is greater than five (see Proposition \ref{tangent} for the details). The key step of the proof of Theorem \ref{BSV, Intro} is Proposition \ref{ext thm}, which is a generalization of Graf's logarithmic extension theorem \cite[Theorem 1.2]{Gra} for lc surfaces in positive characteristic. The Bogomolov-Sommese vanishing theorem in characteristic zero (Theorem \ref{BSVoverC}) is reduced to the case where $(X, B)$ is log smooth by the logarithmic extension theorem (see \cite{Gra} and \cite[7.C. Proof of Theorem 7.2]{GKKP}).
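This reduction makes essential use of index one covers; we briefly recall the standard construction to fix ideas (notation local to this paragraph). If $D$ is a $\mathbb{Z}$-divisor with $mD\sim 0$ for the minimal such $m\in\mathbb{Z}_{>0}$, then a trivialization $\mathcal{O}_X(mD)\cong\mathcal{O}_X$ equips $\bigoplus_{i=0}^{m-1}\mathcal{O}_X(-iD)$ with the structure of an $\mathcal{O}_X$-algebra, and the index one cover is the finite morphism
\[
\pi\colon \widetilde{X}\coloneqq \operatorname{Spec}_X\Bigl(\bigoplus_{i=0}^{m-1}\mathcal{O}_X(-iD)\Bigr)\longrightarrow X,
\]
which exhibits $X$ as a quotient of $\widetilde{X}$ by the group scheme $\mu_m$. Over $\mathbb{C}$, or more generally whenever the characteristic does not divide $m$, the group scheme $\mu_m$ is \'etale, and $\pi$ is \'etale in codimension one.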
In this reduction process, we need the fact that an index one cover of $D$ is \'etale in codimension one. However, this fact is not necessarily true in characteristic $p>0$ when the Cartier index of $D$ is divisible by $p$. Therefore, we cannot apply Graf's logarithmic extension theorem directly to reduce Theorem \ref{BSV, Intro} to the case where $(X, B)$ is log smooth. In order to overcome this issue, we prove Proposition \ref{ext thm}. Moreover, reducing Theorem \ref{BSV, Intro} to the case of log smooth pairs is not sufficient because the Bogomolov-Sommese vanishing theorem is not known even for such pairs in positive characteristic. Proposition \ref{ext thm} enables us to apply the logarithmic Akizuki-Nakano vanishing theorem for $W_2$-liftable pairs, which was proved by Hara \cite{Har}, to our log smooth pairs (see Remark \ref{Remark:EXT2} for details). Next, we fix a log smooth surface pair $(Y, B_Y)$ defined over an algebraically closed field $k$ of characteristic $p>0$. Let us consider the liftability of $(Y, B_Y)$ to the ring $W(k)$ of Witt vectors. When $\kappa(Y, K_Y)\leqslant 0$, $B_Y=0$, and $p>3$, it is well-known that $Y$ lifts to $W(k)$ (see \cite[Proposition 2.6]{Ito}, \cite[Section 11]{Liedtke(Book)}, and \cite[Proposition 11.1]{Oort} for example). However, when $B_Y\neq 0$, the pair $(Y, B_Y)$ does not have such a lifting in general even when $\kappa(Y, K_Y)=-\infty$. Indeed, the counterexample $(S, F)$ to the Bogomolov-Sommese vanishing theorem constructed by Langer does not lift to $W_2(k)$, which means that $(Y, B_Y)$ does not always lift to $W(k)$ even when $Y$ is a smooth rational surface. On the other hand, Arvidsson-Bernasconi-Lacini \cite[Theorem 1.2]{ABL} showed that $(Y, B_Y)$ lifts to $W(k)$ when $(Y, B_Y)$ can be realized as some log resolution of a klt projective surface over $k$ of characteristic $p>5$ such that the canonical divisor has negative Iitaka dimension, and in particular, $\kappa(Y, K_Y+B_Y)=-\infty$.
Therefore, it is natural to ask whether $(Y, B_Y)$ lifts to $W(k)$ when $\kappa(Y, K_Y+B_Y)\leqslant 0$ and the characteristic $p$ is sufficiently large. We give a complete answer to this question. \begin{thm}\label{lift, Intro} There exists a positive integer $p_0$ with the following property. Let $X$ be a normal projective surface over an algebraically closed field $k$ of characteristic $p>p_0$, $B$ a reduced divisor on $X$, and $\pi\colon Y\to X$ a log resolution of $(X, B)$. Suppose that one of the following holds. \begin{enumerate} \item[\textup{(1)}] $\kappa(X, K_X+B)=-\infty$. \item[\textup{(2)}] $K_X+B\equiv 0$ and $B\neq0$. \item[\textup{(3)}] $\kappa(X, K_X+B)=0$. \end{enumerate} Then $(Y, \pi_{*}^{-1}B+\operatorname{Exc}(\pi))$ lifts to the ring $W(k)$ of Witt vectors. Moreover, when condition (1) or (2) holds, we can take $p_0=5$ as an optimal bound. \end{thm} In Theorem \ref{lift, Intro} (3), $p_0$ should be at least $19$ by Example \ref{Example : sharpness for Kodaira dim 0}, but it is not clear whether we can take $p_0$ as the maximum Gorenstein index of klt Calabi-Yau surfaces. In the proof of Theorem \ref{lift, Intro} (1) and (2), we apply Theorem \ref{BSV, Intro} to obtain the vanishing of $H^2(Y, T_Y(-\operatorname{log}\,(\pi_{*}^{-1}B+\operatorname{Exc}(\pi))))$, where the obstruction to the lifting lives. In (3), such a vanishing does not always hold. Therefore, using an argument of Cascini-Tanaka-Witaszek \cite{CTW}, we show the boundedness of some $\varepsilon$-klt log Calabi-Yau surfaces, from which we deduce the desired liftability (see Lemma \ref{Lift of log CY MFS} and Proposition \ref{Lift of strictly klt CY}). As an application of Theorems \ref{BSV, Intro} and \ref{lift, Intro}, we obtain a Kawamata-Viehweg type vanishing for $\mathbb{Z}$-divisors on a normal projective surface. \begin{thm}\label{KVV, Intro} There exists a positive integer $p_0$ with the following property.
Let $X$ be a normal projective surface over an algebraically closed field of characteristic $p>p_0$ and $D$ a nef and big $\mathbb{Z}$-divisor on $X$. Suppose that one of the following holds. \begin{enumerate} \item[\textup{(1)}] $\kappa(X, K_X)\leqslant 0$. \item[\textup{(2)}] $\kappa(X, K_X)=1$ and $X$ is lc. \end{enumerate} Then $H^i(X, \mathcal{O}_X(K_X+D))=0$ for all $i>0$. Moreover, if $\kappa(X, K_X)=-\infty$ (resp.~(2) holds), then we can take $p_0=5$ (resp.~$p_0=3$) as an optimal bound. \end{thm} \begin{notation} A \textit{variety} means an integral separated scheme of finite type over an algebraically closed field. A \textit{curve} (resp.~\textit{surface}) means a variety of dimension one (resp.~two). A \textit{pair} $(X, B)$ consists of a normal variety $X$ and an effective $\mathbb{Q}$-divisor $B$ with coefficients in $[0,1]\cap\mathbb{Q}$ such that the log canonical divisor $K_X+B$ is $\mathbb{Q}$-Cartier. We say a pair $(X,B)$ is \textit{log smooth} when $X$ is smooth and $B$ has simple normal crossing support. We refer to \cite[Section 2.3]{KM98} for definitions of singularities appearing in the minimal model program (e.g.~klt, dlt, \ldots). Throughout this paper, we use the following notation: \begin{itemize} \item $\operatorname{Exc}(f)$: the reduced exceptional divisor of a birational morphism $f$. \item $\lfloor D \rfloor$ (resp.~ $\lceil D \rceil$): the \textit{round-down} (resp.~ \textit{round-up}) of a $\mathbb{Q}$-divisor $D$. \item $\mathcal{F}^{*}$: the dual of a coherent sheaf $\mathcal{F}$. \item $\Omega_X^{[i]}(\operatorname{log}\,B)$: the sheaf of \textit{$i$-th logarithmic reflexive differential forms} $j_{*}\Omega_U^{i}(\operatorname{log}\,B)$, where $X$ is a normal variety, $B$ is a reduced divisor on $X$, $U$ is the log smooth locus of $(X, B)$, and $j\colon U\hookrightarrow X$ is the natural inclusion morphism.
\item $T_X(-\operatorname{log}\,B)\coloneqq (\Omega_X^{[1]}(\operatorname{log}\,B))^{*}$: the \textit{logarithmic tangent sheaf}, where $X$ is a normal variety and $B$ is a reduced divisor on $X$. \item $W(k)$ (resp.~$W_n(k)$): the ring of Witt vectors (resp.~the ring of Witt vectors of length $n$), where $k$ is an algebraically closed field of positive characteristic. \end{itemize} \end{notation} \section{Preliminaries}\label{sec:pre} \subsection{The Iitaka dimension for \texorpdfstring{$\mathbb{Z}$}{Z}-divisors} In this subsection, we recall the definition and basic properties of the Iitaka dimension of $\mathbb{Z}$-divisors. \begin{defn}[\textup{\cite[Definition 2.18]{GKKP}}]\label{Definition:Iitaka dim} Let $X$ be a normal projective variety over an algebraically closed field $k$ and $D$ a $\mathbb{Z}$-divisor on $X$. We define the \textit{Iitaka dimension} $\kappa(X,D)\in\{-\infty,0,\ldots,\dim X\}$ of $D$ as follows. If $H^0(X, \mathcal{O}_X(mD)) = 0$ for all $m \in \mathbb{Z}_{>0}$, then we say that $D$ has \textit{Iitaka dimension $\kappa(X, D)\coloneqq-\infty$}. Otherwise, set \[ M \coloneqq \bigl\{ m\in \mathbb{Z}_{>0} \;\big|\; \dim_{k} H^0(X, \mathcal{O}_X(mD)) > 0 \bigr\}, \] and consider the natural rational mappings \[ \varphi_m : X \dasharrow \mathbb P\bigl(H^0(X, \mathcal{O}_X(mD))^*\bigr) \quad \text{ for each } m \in M. \] Note that we can consider the rational map as above since $\mathcal{O}_X(mD)$ is invertible on the regular locus of $X$. The Iitaka dimension of $D$ is then defined as \[ \kappa(X, D) \coloneqq \max_{m \in M} \bigl\{ \dim \overline{\varphi_m(X)} \bigr\}. \] When $D$ is a $\mathbb{Q}$-divisor, we define $\kappa(X, D)$ as $\kappa(X, mD)$, where $m$ is any positive integer such that $mD$ is a $\mathbb{Z}$-divisor. We say a $\mathbb{Q}$-divisor $D$ is \emph{big} if $\kappa(X, D) = \dim X$.
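For example (an elementary illustration): let $E$ be an elliptic curve and $D$ a $\mathbb{Z}$-divisor of degree zero on $E$. Then $\dim_k H^0(E, \mathcal{O}_E(mD))=1$ if $mD\sim 0$ and $0$ otherwise, so
\[
\kappa(E, D)=
\begin{cases}
0 & \text{if $D$ is torsion in $\operatorname{Pic}^0(E)$,}\\
-\infty & \text{otherwise.}
\end{cases}
\]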
Note that if $D$ is $\mathbb{Q}$-Cartier, then the above definition coincides with the usual definition (\cite[Definition 2.13]{Lazarsfeld}). \end{defn} \begin{defn} Let $D$ be a $\mathbb{Q}$-divisor on a normal projective surface $X$. We say $D$ is \textit{nef} if $D\cdot C\geqslant 0$ for every curve $C$ on $X$. We refer to \cite[Chapter 14, 14.24]{Bad} for the definition of the intersection number of $\mathbb{Z}$-divisors on a normal projective surface. \end{defn} \begin{rem}\label{nefness and bigness}\, \begin{itemize} \item Let $f\colon Y\to X$ be a projective birational morphism of normal surfaces. We recall the \textit{Mumford pullback} by $f$. If $Y$ is smooth, then we refer to \cite[Chapter 14, 14.24]{Bad}. When $Y$ is not smooth, we take a resolution $\pi\colon \widetilde{Y}\to Y$. Then $\operatorname{Supp}(\pi^{*}\operatorname{Exc}(f))\subset \operatorname{Exc}(f\circ \pi)$, and thus the intersection matrix of $\operatorname{Exc}(f)$ is negative definite. Now, we can define the Mumford pullback by $f$ in a similar way to \cite[Chapter 14, 14.24]{Bad}. \item The Mumford pullback preserves the Iitaka dimension by the projection formula. In addition, the Mumford pullback preserves nefness by definition. \end{itemize} \end{rem} In the rest of the paper, we simply refer to the Mumford pullback as pullback. \subsection{Liftability of pairs to the ring of Witt vectors} In this subsection, we recall the fundamental facts about liftability of pairs. \begin{defn}\label{def:snc} Let $T$ be a Noetherian scheme. Let $X$ be a smooth scheme over $T$ of relative dimension $d$ and $B_1,\ldots, B_n$ irreducible closed subschemes. We say that $B\coloneqq \sum_{r=1}^{n} B_r$ is \textit{simple normal crossing over $T$} (\textit{snc over $T$}, for short) if the scheme-theoretic intersection $\bigcap_{r \in J} B_r$ is smooth over $T$ of relative dimension $d-|J|$ for any subset $J \subseteq \{1, \ldots, n\}$ such that $\bigcap_{r \in J} B_r\neq \emptyset$.
When $T$ is the spectrum of an algebraically closed field, we simply say that $B$ is \textit{snc}. \end{defn} \begin{rem} We follow the notation of Definition \ref{def:snc} and suppose that $B$ is snc over $T$. By \cite[Theorem 2.5.8]{MR2791606}, for any $x \in \bigcap_{r \in J} B_r$, there exist an open neighbourhood $U_x\subset X$ of $x$ and an \'etale morphism $\varphi_x \colon U_x \to \mathbb A^d_T = T \times_{\operatorname{Spec} \mathbb{Z}} \operatorname{Spec} \mathbb{Z}[t_1, \ldots, t_d]$ such that $(\bigcap_{r \in J} B_r)|_{U_x} = V(\prod_{r\in J}\varphi_{x}^{\#}(t_r))$. \end{rem} \begin{defn}\label{d-liftable} Let $X$ be a smooth projective variety over an algebraically closed field $k$ and $B$ an snc divisor on $X$. Let $B = \sum_{r=1}^n B_r$ be the irreducible decomposition. Let $R$ be a Noetherian local ring with residue field $k$. We say that a pair $(X,B)$ \textit{lifts to $R$} if there exist \begin{itemize} \item a scheme $\mathcal{X}$ smooth and projective over $R$ with a closed immersion $i\colon X\hookrightarrow \mathcal{X}$ and \item irreducible closed subschemes $\mathcal{B}_1,\ldots, \mathcal{B}_n$ such that $\sum_{r=1}^{n}\mathcal{B}_{r}$ is snc over $R$ \end{itemize} such that the induced morphism $i\times_{R}k \colon X\to \mathcal{X}\times_{R} k$ is an isomorphism and $(i\times_{R}k)(B_r)= \mathcal{B}_r\times_{R} k$ for every $1 \leqslant r\leqslant n$. \end{defn} \begin{rem}\label{Remark:lifting as log smooth pairs} In the setting of Definition \ref{d-liftable}, we further assume that $R$ is regular. In this case, if $\sum_{r=1}^{n}\mathcal{B}_{r}$ is flat over $R$, then $\sum_{r=1}^{n}\mathcal{B}_{r}$ is snc over $R$ as follows. Since $\mathcal{B}_r$ is flat over $\operatorname{Spec}\,R$, it is smooth of relative dimension $d-1$ by \cite[Th\'eor\`eme 12.2.4 (iii)]{EGAIV3}. We fix a closed point $x\in \bigcap_{r \in J} \mathcal{B}_r$.
Since $\mathcal{X}$ is regular, each $\mathcal{B}_r$ is Cartier, and thus we obtain \begin{align*} &\dim\mathcal{O}_{\mathcal{X},x}-|J|\leqslant \dim\,\mathcal{O}_{\bigcap_{r \in J} \mathcal{B}_r, x} \leqslant\dim\,\mathcal{O}_{\bigcap_{r \in J} B_r, x}+\dim\,R\\ =&\dim\,\mathcal{O}_{X,x}-|J|+\dim\,R =\dim\mathcal{O}_{\mathcal{X},x}-|J| \end{align*} and hence $\mathcal{O}_{\bigcap_{r \in J} \mathcal{B}_r, x}$ is Cohen-Macaulay and \[\dim\,\mathcal{O}_{\bigcap_{r \in J} \mathcal{B}_r, x}=\dim\,\mathcal{O}_{\bigcap_{r \in J} B_r, x}+\dim\,R\] holds. Then, by \cite[Theorem 23.1]{Matsumura}, it follows that $\bigcap_{r \in J} \mathcal{B}_r\to \operatorname{Spec}\,R$ is flat at $x$. By \cite[Th\'eor\`eme 12.2.4 (iii)]{EGAIV3} again, we conclude that the closed subscheme $\bigcap_{r \in J} \mathcal{B}_r$ is smooth over $R$ and hence $\sum_{r=1}^{n}\mathcal{B}_{r}$ is snc over $R$. \end{rem} \begin{lem}\label{log lift of blow-up} Let $X$ be a smooth projective surface and $B$ an snc divisor on $X$. Let $\mathrm{Bl}_{x}\colon Y\to X$ be the blow-up at a closed point $x\in X$. Suppose that $(X, B)$ lifts to a complete regular local ring $R$. Then $(Y, (\mathrm{Bl}_{x})_{*}^{-1}B+\operatorname{Exc}(\mathrm{Bl}_{x}))$ lifts to $R$. \end{lem} \begin{proof} Let $(\mathcal{X}, \mathcal{B})$ be a lifting of $(X,B)$ to $R$. Since $R$ is henselian, \cite[Proposition 2.8.13]{MR2791606} shows that there exists a lifting $\widetilde{x}$ of $x$ to $R$, which is compatible with the snc structure in the sense of \cite[Definition 2.7]{ABL}.
By \cite[Theorem 2.5.8]{MR2791606}, there exists an open neighborhood $\mathcal{U}_{\widetilde{x}}\subset \mathcal{X}$ of $\widetilde{x}$ and an \'etale morphism $\widetilde{\varphi}\colon\mathcal{U}_{\widetilde{x}}\to \operatorname{Spec}\,R[t_1,t_2]$ such that \begin{itemize} \item $\mathcal{B}_r\cap \mathcal{U}_{\widetilde{x}}=V(\widetilde{\varphi}^{*}t_r)$ for each irreducible component $\mathcal{B}_r$ of $\mathcal{B}$ containing $\widetilde{x}$ and \item $\widetilde{x}=V(\widetilde{\varphi}^{*}t_1,\widetilde{\varphi}^{*}t_2)$. \end{itemize} We define an \'etale morphism $\varphi\colon U_x\coloneqq \mathcal{U}_{\widetilde{x}}\otimes_R k \to \operatorname{Spec}\,k[t_1,t_2]$ as $\varphi\coloneqq\widetilde{\varphi}\otimes_R k$. Then $x=V(\varphi^{*}t_1, \varphi^{*}t_2)$. Let $\mathrm{Bl}_{\widetilde{x}}\colon \mathcal{Y}\to \mathcal{X}$ denote the blow-up at $\widetilde{x}$. Now, the argument after the Claim in the proof of \cite[Lemma 4.4]{Ard} shows that $(\mathcal{Y}, (\mathrm{Bl}_{\widetilde{x}})_{*}^{-1}\mathcal{B}+\operatorname{Exc}(\mathrm{Bl}_{\widetilde{x}}))$ is a lifting of $(Y, (\mathrm{Bl}_{x})_{*}^{-1}B+\operatorname{Exc}(\mathrm{Bl}_{x}))$. \end{proof} \begin{lem}\label{every lift=some lift} Let $X$ be a normal projective surface and $B$ a reduced divisor on $X$. Suppose that there exists a log resolution $f\colon Y\to X$ of $(X, B)$ such that $H^2(Y, \mathcal{O}_Y)=0$ and $(Y, f_{*}^{-1}B+\operatorname{Exc}(f))$ lifts to a complete regular local ring $R$. Then, for every log resolution $g\colon Z\to X$ of $(X, B)$, the pair $(Z, g_{*}^{-1}B+\operatorname{Exc}(g))$ lifts to $R$. \end{lem} \begin{proof} We take a log resolution $g\colon Z\to X$ of $(X, B)$ and show the liftability of $(Z, g_{*}^{-1}B+\operatorname{Exc}(g))$. We can take a log resolution $h\colon W\to X$ of $(X, B)$ that factors through both $f$ and $g$. Since $W\to Y$ is a composition of blow-ups at smooth points, the pair $(W, h_{*}^{-1}B+\operatorname{Exc}(h))$ lifts to $R$ by Lemma \ref{log lift of blow-up}.
Since $W\to Z$ is also a composition of blow-ups at smooth points, it follows from \cite[Proposition 4.3 (1)]{AZ} that $(Z, g_{*}^{-1}B+\operatorname{Exc}(g))$ formally lifts to $R$. By assumption, we have $H^2(Z, \mathcal{O}_Z)=H^2(Y, \mathcal{O}_Y)=0$, and hence $(Z, g_{*}^{-1}B+\operatorname{Exc}(g))$ lifts to $R$ as a scheme. \end{proof} \begin{thm}\label{log smooth lift criterion} Let $X$ be a smooth projective surface over an algebraically closed field $k$ and $B$ an snc divisor on $X$. Let $R$ be a Noetherian complete local ring with residue field $k$. Suppose that $H^2(X, T_{X}(-\operatorname{log}\,B))=0$. Then $(X, B)$ lifts to $R$ as a formal scheme. Moreover, if $H^2(X, \mathcal{O}_X)=0$, then $(X,B)$ lifts to $R$ as a scheme. \end{thm} \begin{proof} We refer to \cite[Theorem 2.3]{KN} for the proof. \end{proof} Hara \cite[Corollary 3.8]{Hara98} showed the logarithmic Akizuki-Nakano vanishing theorem for $W_2$-liftable pairs $(X, B)$. In Theorem \ref{Hara's vanishing}, we slightly generalize this theorem to the vanishing for nef and big divisors when $\dim\,X=2$. \begin{thm}[\textup{cf.~\cite[Corollary 3.8]{Hara98}}]\label{Hara's vanishing} Let $X$ be a smooth projective surface over an algebraically closed field $k$ of characteristic $p>0$ and $B$ an snc divisor on $X$. Suppose that $(X, B)$ lifts to $W_2(k)$. Let $D$ be a nef and big $\mathbb{Q}$-divisor on $X$ such that $\operatorname{Supp}(\lceil D\rceil-D)$ is contained in $B$. Then \[ H^{j}(X, \Omega_X^{i}(\operatorname{log} \, B)\otimes \mathcal{O}_X(-\lceil D \rceil))=0 \] for $i,j\in \mathbb{Z}_{\geqslant 0}$ such that $i+j<2$. \end{thm} \begin{rem} Langer \cite[Example 1]{Lan15} showed that Theorem \ref{Hara's vanishing} does not hold when $D$ is only big. In other words, the Bogomolov-Sommese vanishing theorem can fail on $W_2(k)$-liftable pairs.
\end{rem} \begin{proof} By the Serre duality and essentially the same argument as in \cite[Corollary 3.8]{Hara98}, we can reduce the assertion to \[ H^j(X, \Omega_X^i(\operatorname{log} \, B)\otimes \mathcal{O}_X(-B+\lceil p^eD \rceil))=0 \] for all $i+j>2$ and some $e>0$. We remark that the assumption that $p>\dim\,X$ in \cite[Corollary 3.8]{Hara98} is relaxed to $p\geqslant \dim\,X$. Indeed, in the proof of \cite[Corollary 3.8]{Hara98}, the assumption that $p>\dim\,X$ is only used for the quasi-isomorphism $\bigoplus_{i}\Omega_X^i(\operatorname{log}\, B)[-i]\cong F_{*}\Omega_X^{\bullet}(\operatorname{log} B)$, and this quasi-isomorphism holds even when $p=\dim\,X$ by \cite[10.19 Proposition]{EV}. We take $m, n\in \mathbb{Z}_{>0}$ such that $p^m(p^n-1)D$ is Cartier. Then we obtain \begin{align*} &H^j(X, \Omega_X^i(\operatorname{log} \, B)\otimes \mathcal{O}_X(-B+\lceil p^{m+ln}D \rceil))\\ =&H^j(X, \Omega_X^i(\operatorname{log} \, B)\otimes\mathcal{O}_X(-B+\lceil p^mD \rceil+(\sum_{s=0}^{l-1}p^{ns})p^m(p^n-1)D)). \end{align*} When $j=2$, the last term vanishes for sufficiently large $l$ by \cite[Proposition 2.3]{Tan15}. Moreover, when $(i, j)=(2, 1)$, the last term vanishes for sufficiently large $l$ by \cite[Theorem 2.6]{Tan15}. Therefore, we obtain the desired vanishing. \end{proof} \begin{lem}\label{Lem:KVV} Let $X$ be a normal projective surface over an algebraically closed field $k$ of positive characteristic and $D$ a nef and big $\mathbb{Z}$-divisor on $X$. Suppose that there exists a log resolution $\pi\colon Y\to X$ such that $(Y, \operatorname{Exc}(\pi))$ lifts to $W_2(k)$. Then $H^i(X, \mathcal{O}_X(K_X+D))=0$ for all $i>0$. \end{lem} \begin{proof} By the Serre duality for Cohen-Macaulay sheaves (\cite[Theorem 5.71]{KM98}), it suffices to show that $H^i(X, \mathcal{O}_X(-D))=0$ for all $i<2$. When $i=0$, the vanishing follows from the bigness of $D$. Thus we assume that $i=1$.
By the spectral sequence \[ E_2^{p,q}=H^{p}(X,R^{q}\pi_{*}\mathcal{O}_{Y}(-\lceil \pi^{*}D\rceil ))\Rightarrow E^{p+q}=H^{p+q}(Y,\mathcal{O}_{Y}(-\lceil \pi^{*}D \rceil)), \] we obtain an injective morphism \[ H^{1}(X,\pi_{*}\mathcal{O}_{Y}(-\lceil \pi^{*}D\rceil))\hookrightarrow H^1(Y, \mathcal{O}_{Y}(-\lceil \pi^{*}D\rceil)).\] By the projection formula, we have $\pi_{*}\mathcal{O}_{Y}(-\lceil \pi^{*}D\rceil )=\pi_{*}\mathcal{O}_{Y}(\lfloor -\pi^{*}D\rfloor )=\mathcal{O}_{X}(-D)$ and thus it suffices to show that $H^1(Y, \mathcal{O}_{Y}(-\lceil\pi^{*}D\rceil))=0$. Since $\operatorname{Supp}(\lceil\pi^{*}D\rceil-\pi^{*}D)\subset\operatorname{Exc}(\pi)$ and $\pi^{*}D$ is nef and big (see Remark \ref{nefness and bigness}), we obtain the desired vanishing by Theorem \ref{Hara's vanishing}. \end{proof} \section{Klt Calabi-Yau surfaces}\label{sec:klt CY surfaces} In this section, we prove the liftability of a log resolution of a klt Calabi-Yau surface in large characteristic (Propositions \ref{Prop : Lift of canonical CY} and \ref{Lift of strictly klt CY}). We also show that there exists a bound on the Gorenstein index for every klt Calabi-Yau surface over every algebraically closed field (Lemma \ref{max Gor index}). \begin{defn} We fix a real number $\varepsilon\in\mathbb{R}_{>0}$. We say a pair $(X, B)$ is \textit{$\varepsilon$-klt} if, for every proper birational morphism $f\colon Z\to X$ from a normal variety $Z$, all the coefficients of $f^{*}(K_X+B)-K_Z$ are less than $1-\varepsilon$. \end{defn} \begin{defn} We say that a normal projective variety $X$ is \textit{canonical (resp.~klt) Calabi-Yau} if $X$ has only canonical (resp.~klt) singularities and $K_X\equiv0$. Moreover, if $X$ is klt Calabi-Yau but not canonical Calabi-Yau, then we say that $X$ is \textit{strictly klt Calabi-Yau}. We say a projective pair $(X,B)$ is \textit{log Calabi-Yau} if $(X, B)$ is lc and $K_X+B\equiv 0$.
We say a normal projective variety $X$ is \textit{of Calabi-Yau type} if there exists an effective $\mathbb{Q}$-divisor $B$ such that $(X, B)$ is log Calabi-Yau. \end{defn} First, we show the liftability of a log resolution of a canonical Calabi-Yau surface. \begin{prop}\label{Prop : Lift of canonical CY} Let $X$ be a canonical Calabi-Yau surface over an algebraically closed field $k$ of characteristic $p>19$. Then, for every log resolution $f\colon Z\to X$ of $X$, the pair $(Z, \operatorname{Exc}(f))$ lifts to $W(k)$. \end{prop} \begin{proof} Let $\pi\colon Y\to X$ be the minimal resolution. By Lemma \ref{log lift of blow-up}, it suffices to show the liftability of $(Y, E\coloneqq\operatorname{Exc}(\pi))$. Since $K_Y=\pi^{*}K_X=0$, it follows that $Y$ is an abelian surface, a hyperelliptic surface, a K3 surface, or an Enriques surface. If $Y$ is an abelian surface, then $Y=X$ and $Y$ lifts to $W(k)$ by \cite[Proposition 11.1]{Oort}. Next, we assume that $Y$ is a hyperelliptic surface. In this case, $Y=X$ and $Y$ is a quotient of a product $C_1\times C_2$ of elliptic curves by an action of some group scheme $G$. We recall that a smooth projective curve, together with an automorphism whose order is not divisible by $p$, lifts to $W(k)$ (\cite[Theorem 1.5 and Remark 1.11]{Obus}). Since $p\neq 2,3$, comparing with the list of actions of $G$ on $C_1\times C_2$ in \cite[List 10.27]{Bad}, we can take a $W(k)$-lifting $\mathcal{C}_i$ of $C_i$ and $\mathcal{G}$ of $G$ such that $\mathcal{G}$ acts on $\mathcal{C}_1\times \mathcal{C}_2$ compatibly with the action of $G$ on $C_1\times C_2$. Then $(\mathcal{C}_1\times \mathcal{C}_2)/\mathcal{G}$ gives a lifting of $Y$. Next, we assume that $Y$ is a K3 surface or an Enriques surface. We show that the determinant $d$ of the intersection matrix of $E$ is not divisible by $p$. For the sake of contradiction, we assume that $d$ is divisible by $p$.
Since the determinant of the intersection matrix of a rational double point of type $A_n$ (resp.~$D_n$, $E_6$, $E_7$, $E_8$) is equal to $(-1)^n(n+1)$ (resp.~$(-1)^n4,~3,-2,1$), it follows from the assumption $p>19$ that $X$ has an $A_{np-1}$-singularity for some $n\in\mathbb{Z}_{>0}$. Hence we have $\rho(Y)\geqslant np\geqslant 23$. This is a contradiction because the Picard rank of a K3 surface is at most $22$ by \cite[Chapter 17, 2.4]{K3book} and that of an Enriques surface is at most $10$ by \cite[Section 3]{BMIII}. Thus $d$ is not divisible by $p$ and \cite[Theorems 1.2 and 1.3]{Gra} show that $\pi_{*}\Omega_Y=\Omega^{[1]}_{X}$. Then we obtain \begin{align*} H^2(Y, T_Y(-\operatorname{log}\,E))\hookrightarrow H^2(X, T_X)\cong& H^0(X, \Omega_X^{[1]}\otimes \mathcal{O}_X(K_X))\\=&H^0(Y, \Omega_Y\otimes \mathcal{O}_Y(K_Y)). \end{align*} For the first injection, we refer to Remark \ref{Remark:tangent}. We assume that $Y$ is a K3 surface. Then we have $H^0(Y, \Omega_Y\otimes \mathcal{O}_Y(K_Y))=H^0(Y, \Omega_Y)=0$, and $(Y, E)$ formally lifts to $W(k)$ by Theorem \ref{log smooth lift criterion}. Moreover, $(Y, E)$ is algebraizable by \cite[Proposition 2.6]{Ito}. Finally, we assume that $Y$ is an Enriques surface. Then we have an \'etale morphism $\tau \colon \widetilde{Y}\to Y$ from a K3 surface $\widetilde{Y}$ since $p\neq 2$. Thus we obtain \[ H^0(Y, \Omega_Y\otimes \mathcal{O}_Y(K_Y))\hookrightarrow H^0(\widetilde{Y}, \Omega_{\widetilde{Y}}\otimes \mathcal{O}_{\widetilde{Y}}(K_{\widetilde{Y}}))=0. \] Moreover, since $p\neq 2$, we have $K_Y\neq 0$, and in particular, $H^2(Y, \mathcal{O}_Y)\cong H^0(Y, \mathcal{O}_Y(K_Y))= 0$. Therefore, the pair $(Y, E)$ lifts to $W(k)$ by Theorem \ref{log smooth lift criterion}. \end{proof} \begin{rem}\label{Rem : Lift of canonical CY} In Proposition \ref{Prop : Lift of canonical CY}, we cannot drop the assumption $p>19$ (see Example \ref{Example : sharpness for Kodaira dim 0}).
On the other hand, when the minimal resolution $Y$ is a K3 surface that is not supersingular, the pair $(Y,E)$ lifts to $W(k)$ even if $p\leqslant 19$ as follows. First, by \cite[Corollary 4.2]{LM18}, $Y$ itself lifts to $W(k)$. Moreover, by \cite[Lemma 2.3 and Corollary 4.2]{LM18}, each irreducible component of $E$ lifts to $W(k)$. Then we obtain the desired liftability by Remark \ref{Remark:lifting as log smooth pairs}. \end{rem} From now on, we focus on strictly klt Calabi-Yau surfaces. We first prove that the Gorenstein index of a klt Calabi-Yau surface is bounded from above. \begin{lem}\label{Cartier index divides det} Let $X$ be a klt surface and $\pi\colon Y\to X$ a resolution. Then the Cartier index of any $\mathbb{Z}$-divisor on $X$ divides the determinant of the intersection matrix of $\operatorname{Exc}(\pi)$. \end{lem} \begin{proof} Let $d$ be the determinant of the intersection matrix of $\operatorname{Exc}(\pi)$. We take a $\mathbb{Z}$-divisor $D$ on $X$ and write $\pi^{*}D=\pi_{*}^{-1}D+\sum d_iE_i$ for some $d_i\in \mathbb{Q}$. Then it follows that $dd_i\in \mathbb{Z}$ for each $i$, and in particular, $\pi^{*}dD$ is Cartier. Now, we can conclude that $dD$ is Cartier by \cite[Lemma 2.1]{CTW}. \end{proof} \begin{lem}\label{Q-fac index of epsilon CY} We fix a real number $\varepsilon\in (0, \frac{1}{\sqrt{3}})$. Then there exists $m\coloneqq m(\varepsilon)\in \mathbb{Z}_{>0}$ with the following property. For every $\varepsilon$-klt surface $X$ of Calabi-Yau type over every algebraically closed field and every $\mathbb{Z}$-divisor $D$ on $X$, the divisor $mD$ is Cartier. \end{lem} \begin{proof} Let $\pi\colon Y\to X$ be the minimal resolution and $\operatorname{Exc}(\pi)\coloneqq \sum_i E_i$ the irreducible decomposition. Then $Y$ is $\varepsilon$-klt and of Calabi-Yau type. By \cite[Lemma 1.2 and Theorem 1.8]{AM}, we have $-\frac{2}{\varepsilon}\leqslant E_i^2\leqslant -2$ and $\rho(Y)\leqslant \frac{128}{\varepsilon^5}$.
In addition, we have $E_i\cdot E_j=0$ or $1$ for $i\neq j$ since $X$ is klt. Thus there are only finitely many possibilities for the intersection matrix of $\operatorname{Exc}(\pi)$. We take $m$ to be the product of the absolute values of all the possible determinants of the intersection matrices of $\operatorname{Exc}(\pi)$. Now, Lemma \ref{Cartier index divides det} shows that $m$ is the desired integer. \end{proof} \begin{lem}\label{Boundedness of Vol of epsilon CY} We fix a real number $\varepsilon\in (0, \frac{1}{\sqrt{3}})$. For every $\varepsilon$-klt surface $X$ of Calabi-Yau type over every algebraically closed field, there are only finitely many possibilities for $K_X^2$. \end{lem} \begin{proof} Let $\pi\colon Y\to X$ be the minimal resolution. We can write $K_Y+\sum_i a_iE_i=\pi^{*}K_X$ for some $a_i\in \mathbb{Q}_{>0}$, where $E_i$ is a $\pi$-exceptional prime divisor. As in the proof of Lemma \ref{Q-fac index of epsilon CY}, we have $\rho(Y)\leqslant \frac{128}{\varepsilon^5}$ and there are only finitely many possibilities for the intersection matrix of $\operatorname{Exc}(\pi)$. We fix a positive integer $m\coloneqq m(\varepsilon)\in\mathbb{Z}_{>0}$ as in Lemma \ref{Q-fac index of epsilon CY}. Then we have $a_i\in\{\frac{1}{m},\cdots,\frac{m-1}{m}\}$ for each $i$. If $Y$ is rational, then $Y$ is obtained from $\mathbb{P}^2_k$ or a Hirzebruch surface by a sequence of at most $\lfloor\frac{128}{\varepsilon^5}\rfloor-1$ blow-ups, and in particular, $K_Y^2\in \mathbb{Z}\cap (9-\lfloor\frac{128}{\varepsilon^5}\rfloor,9]$. If $Y$ is not rational, then $K_Y^2=0$ by \cite[Lemma 1.4]{AM}. Now, we can conclude that there are only finitely many possibilities for \[ K_X^2=K_Y^2+\sum_i a_i(K_Y\cdot E_i)=K_Y^2+\sum_i a_i(-E_i^2-2) \] and obtain the assertion. \end{proof} \begin{lem}[\textup{cf.~\cite[Proposition 11.7]{Bir}}]\label{global ACC} Let $\Lambda\subset [0,1]\cap \mathbb{Q}$ be a DCC set.
Then there exists a finite subset $\Gamma\subset \Lambda$ with the following property: for every projective morphism $\pi\colon X\to Z$ over every algebraically closed field and every $\mathbb{Q}$-divisor $B$ on $X$ satisfying \begin{itemize} \item $(X,B)$ is an lc surface, \item the coefficients of $B$ are in $\Lambda$, \item $K_X+B$ is numerically trivial over $Z$, and \item $\dim\,X>\dim\,Z$, \end{itemize} all the $\pi$-horizontal coefficients of $B$ are contained in $\Gamma$. \end{lem} \begin{proof} The assertion has been proved in \cite[Proposition 11.7]{Bir} when we fix the base field. We remark that the same proof works without fixing the base field. We note that, in Step~4 of the proof of \cite[Proposition 11.7]{Bir}, we use \cite[Theorem 6.9]{Ale}, which requires us to fix the base field. However, \cite[Theorem 6.9]{Ale} is applied only to show the boundedness of the Gorenstein index and of the self-intersection number of the canonical divisor of an $\varepsilon$-klt del Pezzo surface, both of which do not depend on the base field by Lemmas \ref{Q-fac index of epsilon CY} and \ref{Boundedness of Vol of epsilon CY}. \end{proof} \begin{lem}\label{epsilon-klt=klt} There exists a positive real number $\varepsilon \in \mathbb{R}_{>0}$ such that every klt Calabi-Yau surface over every algebraically closed field is $\varepsilon$-klt. \end{lem} \begin{proof} First, we extract an exceptional divisor with minimal log discrepancy. We take a klt Calabi-Yau surface $X$ as in the lemma. Let $\pi\colon Y\to X$ be the minimal resolution and write \[ K_{Y}+\sum_{i} a_{X,i}E_{i}=\pi^{*}K_X \] for some $a_{X, i}\in \mathbb{Q}_{>0}$. We may assume that $a_{X,1}\geqslant a_{X,i}$ for all $i$. We run a $(K_Y+a_{X,1}E_{1}+\sum_{i\geqslant 2} E_{i})$-MMP over $X$ to obtain a birational contraction $\varphi\colon Y\to Y'$.
Since $K_Y+a_{X,1}E_{1}+\sum_{i\geqslant 2} E_{i}\equiv_{X}\sum_{i\geqslant 2}(1-a_{X,i})E_{i}$, it follows that $\varphi_{*}E_1\neq 0$ and $\sum_{i\geqslant 2}(1-a_{X,i})\varphi_{*}E_{i}$ is nef over $X$. The negativity lemma shows that $\varphi_{*}E_{i}=0$ for each $i\geqslant 2$ and hence \[ K_{Y'}+a_{X,1}\varphi_{*}E_{1}\equiv \varphi_{*}(K_{Y}+\sum_{i} a_{X,i}E_{i})\equiv 0. \] Now, we prove the assertion. For the sake of contradiction, we assume that there exists a sequence of klt Calabi-Yau surfaces $\{X_m\}_{m\in\mathbb{Z}_{>0}}$ such that $\{{a_{X_m,1}}\}_{m\in\mathbb{Z}_{>0}}$ is a strictly increasing sequence. Since $\{a_{X_m,1}\mid m\in\mathbb{Z}_{>0}\}$ is a DCC set, we can derive a contradiction by Lemma \ref{global ACC}. \end{proof} \begin{lem}\label{max Gor index} There exists a minimum positive integer $n\in\mathbb{Z}_{>0}$ such that, for every klt Calabi-Yau surface $X$ over every algebraically closed field, the Gorenstein index of $X$ is less than or equal to $n$. \end{lem} \begin{proof} The assertion follows from Lemmas \ref{Q-fac index of epsilon CY} and \ref{epsilon-klt=klt}. \end{proof} \begin{rem}\label{Rem:max Gor index} There exists a klt Calabi-Yau surface over $\mathbb{C}$ whose Gorenstein index is $19$ by \cite[Theorem C (a)]{Bla}. Thus we have $n\geqslant 19$ in Lemma \ref{max Gor index}. Moreover, \cite[Theorem C (a)]{Bla} also shows that we can take $n=21$ when the base field of $X$ is an algebraically closed field of characteristic zero. \end{rem} \begin{lem}\label{Cartier index} Let $X$ be a strictly klt Calabi-Yau surface and $n$ the Gorenstein index of $X$. Then $n$ is the minimum positive integer such that $nK_X=0$. \end{lem} \begin{proof} By the abundance theorem (\cite[Theorem 1.2]{Tan12}), we can take the minimum positive integer $l$ such that $lK_X=0$. By definition, we have $n \leqslant l$. We show that $l \leqslant n$. Let $\pi\colon Y \to X$ be the minimal resolution of $X$.
Then we have $nK_Y+E= \pi^{*}nK_X\equiv 0$ for some effective Cartier divisor $E$. Since $X$ is strictly klt Calabi-Yau, it follows from the proof of \cite[Lemma 1.4]{AM} that $Y$ is a rational surface. Thus numerically trivial Cartier divisors on $Y$ are linearly trivial, and in particular, $nK_Y+E=0$. Now we obtain $nK_X=\pi_{*}(nK_Y+E)=0$ and hence $l\leqslant n$. \end{proof} Lemmas \ref{max Gor index} and \ref{Cartier index} show that a global cyclic cover associated to the canonical divisor of a strictly klt Calabi-Yau surface is \'etale in codimension one in large characteristic. Finally, we prove the liftability of a log resolution of a strictly klt Calabi-Yau surface in large characteristic. \begin{defn}\label{MFS} Let $(X, B)$ be a pair and $f \colon X \to Z$ a projective surjective morphism to a normal variety $Z$. We say that $f \colon X \to Z$ is a $(K_X+B)$-\emph{Mori fiber space} if \begin{itemize} \item $-(K_X+B)$ is $f$-ample, \item $f_{*}\mathcal{O}_X=\mathcal{O}_Z$ and $\dim \, X>\dim \, Z$, and \item the relative Picard rank $\rho(X/Z)=1$. \end{itemize} \end{defn} \begin{lem}\label{Lift of log CY MFS} We fix a finite set $I\subset [0,1)\cap \mathbb{Q}$ and a positive real number $\varepsilon\in (0, \frac{1}{\sqrt{3}})$. There exists a positive integer $p(I, \varepsilon)\in\mathbb{Z}_{>0}$ with the following property. Let $(X, B)$ be an $\varepsilon$-klt log Calabi-Yau surface over an algebraically closed field $k$ of characteristic bigger than $p(I, \varepsilon)$. Suppose that $X$ admits a $K_X$-Mori fiber space structure $f\colon X\to Z$ and that all the coefficients of $B$ are contained in $I$. Then, for every log resolution $g\colon W\to X$ of $(X,B)$, the pair $(W, g_{*}^{-1}(\operatorname{Supp}(B))+\operatorname{Exc}(g))$ lifts to $W(k)$.
\end{lem} \begin{proof} By \cite[Proposition 2.5]{ABL} and Lemma \ref{every lift=some lift}, it suffices to show the following liftability: there exists a log resolution $g\colon W\to X$ of $(X, B)$ such that the pair $(W, g_{*}^{-1}(\operatorname{Supp}(B))+\operatorname{Exc}(g))$ lifts to characteristic zero over a smooth base in the sense of \cite[Definition 2.15]{CTW}. When $\dim\,Z=0$, by replacing $B$ with $\frac{1}{2}B$, we can assume that $(X, B)$ is an $\varepsilon$-klt log del Pezzo surface, and the assertion follows from \cite[Proposition 3.2]{CTW}. Therefore, we may assume that $\dim\,Z=1$. We show the following claim. \begin{cl} There exists a flat family $(\mathcal{X}, \mathcal{B})\to T$ over a reduced quasi-projective scheme $T$ over $\operatorname{Spec}\,\mathbb{Z}$ such that every log Calabi-Yau surface $(X, B)$ over every algebraically closed field of characteristic bigger than five satisfying \begin{itemize} \item $(X,B)$ is $\varepsilon$-klt, \item $X$ has a $K_X$-Mori fiber space structure $f\colon X\to Z$ over a curve $Z$, and \item all the coefficients of $B$ are contained in $I$, \end{itemize} is a geometric fiber of $(\mathcal{X}, \mathcal{B})\to T$. \end{cl} \begin{clproof} As in the proof of \cite[Lemma 3.1]{CTW}, it suffices to show the following: there exists a positive integer $m\in\mathbb{Z}_{>0}$ not depending on $X$ and a very ample divisor $H_X$ on $X$ such that \begin{itemize} \item $mB$ is Cartier and \item there are only finitely many possibilities for $\dim_{k} H^0(X, \mathcal{O}_X(H_X))$, $H_X^2$, $H_X\cdot K_X$, $H_X\cdot B$, $K_X\cdot B$, and $B^2$ \end{itemize} for every log Calabi-Yau surface $(X,B)$ as in the claim. We take a positive integer $m=m(\varepsilon)$ as in Lemma \ref{Q-fac index of epsilon CY}. Since all the coefficients of $B$ are contained in the finite set $I$, after replacing $m$ with a suitable multiple, we may assume that $mB$ is Cartier, and thus the first assertion holds. We show the latter assertion.
Since $B\equiv -K_X$, by Lemma \ref{Boundedness of Vol of epsilon CY}, it suffices to check the values of $\dim_{k} H^0(X, \mathcal{O}_X(H_X))$, $H_X^2$, and $H_X\cdot K_X$. We first show that $A_X\coloneqq -K_X+(\lceil \frac{2}{\varepsilon}\rceil-1)F$ is an ample Cartier divisor on $X$ such that there are only finitely many possibilities for $A_X^2$ and $A_X\cdot K_X$, where $F$ is a fiber of $X\to Z$. Let $C$ be an irreducible curve whose numerical class spans an extremal ray of $\overline{NE}(X)$ that is not spanned by the numerical class of $F$. If $C^2\geqslant 0$, then we have \[ (-K_{X}+(\lceil \frac{2}{\varepsilon}\rceil-1)F)\cdot C=(B+(\lceil \frac{2}{\varepsilon}\rceil-1)F)\cdot C \geqslant \lceil \frac{2}{\varepsilon}\rceil-1>0, \] and thus $-K_X+(\lceil \frac{2}{\varepsilon}\rceil-1)F$ is ample by Kleiman's ampleness criterion. We next assume that $C^{2}<0$. Let $\pi\colon Y\to X$ be the minimal resolution. Then $Y$ is $\varepsilon$-klt and of Calabi-Yau type, and thus \cite[Lemma 1.2]{AM} shows that $-\frac{2}{\varepsilon}\leqslant (\pi_{*}^{-1}C)^2$. In particular, $-\frac{2}{\varepsilon}\leqslant C^2$. Now, we have \begin{align*} (-K_{X}+(\lceil \frac{2}{\varepsilon}\rceil-1)F)\cdot C =(B+(\lceil\frac{2}{\varepsilon}\rceil-1)F)\cdot C >& (1-\varepsilon)C^2+\lceil \frac{2}{\varepsilon}\rceil-1\\ \geqslant&-\frac{2}{\varepsilon}+2+\lceil\frac{2}{\varepsilon}\rceil-1>0, \end{align*} and hence $-K_X+(\lceil\frac{2}{\varepsilon}\rceil-1)F$ is ample. Together with $F^2=0$, $K_{X}\cdot F=-2$, and Lemma \ref{Boundedness of Vol of epsilon CY}, we can see that $A_X$ is the desired ample Cartier divisor. Now, by \cite[Theorem 1.2]{Wit}, it follows that $13mK_{X}+45mA_X$ is very ample. Moreover, we can see that $(13m-3)K_X+(45m-14)A_X$ is nef and hence $H^i(X, \mathcal{O}_{X}(13mK_{X}+45mA_X))=0$ for all $i>0$ by \cite[Proposition 6.5]{Wit}. We set $H_X\coloneqq 13mK_{X}+45mA_X$.
Then there are only finitely many possibilities for $H_X^2$ and $H_X\cdot K_X$. Moreover, by the Riemann-Roch theorem, we have \begin{align*} &\dim_{k} H^0(X, \mathcal{O}_X(H_X))=\chi(\mathcal{O}_{X}(H_X)) =\chi(\mathcal{O}_{W}(f^{*}H_X))\\ =&\frac{(f^{*}H_X)^2}{2}+\frac{f^{*}H_X\cdot (-K_{W})}{2}+1 =\frac{H_X^2}{2}+\frac{H_X\cdot (-K_{X})}{2}+1, \end{align*} where $f\colon W\to X$ is a resolution and we used the fact that $X$ has only rational singularities for the second equality. Therefore, there are only finitely many possibilities for $\dim_{k} H^0(X, \mathcal{O}_X(H_X))$, and we finish the proof of the claim. \end{clproof} Now, by the above claim and the proof of \cite[Proposition 3.2]{CTW}, we can find the desired positive integer $p(I,\varepsilon)$. \end{proof} \begin{prop}\label{Lift of strictly klt CY} There exists a positive integer $p_0$ with the following property. Let $X$ be a strictly klt Calabi-Yau surface over an algebraically closed field $k$ of characteristic $p>p_0$. Then, for every log resolution $g \colon W\to X$, the pair $(W, \operatorname{Exc}(g))$ lifts to $W(k)$. \end{prop} \begin{proof} By Lemma \ref{epsilon-klt=klt}, there exists a positive real number $\varepsilon\in (0, \frac{1}{\sqrt{3}})$ such that every klt Calabi-Yau surface is $\varepsilon$-klt. We take $m=m(\varepsilon)$ as in Lemma \ref{Q-fac index of epsilon CY} and define a finite set $I\coloneqq \{\frac{1}{m},\cdots,\frac{m-1}{m}\}$. We take $p_0\coloneqq p(I,\varepsilon)$ as in Lemma \ref{Lift of log CY MFS}. Let $X$ be a strictly klt Calabi-Yau surface over an algebraically closed field $k$ of characteristic $p>p_0$. As in Lemma \ref{epsilon-klt=klt}, we can take an extraction $f \colon Y\to X$ of an exceptional prime divisor $E_1$ such that $a_1\coloneqq\mathrm{coeff}_{E_1}(f^{*}K_X-K_{Y})\in I$.
Since $K_Y\equiv -a_1E_1$ is not pseudo-effective, we can run a $K_{Y}$-MMP to obtain a birational contraction $\varphi\colon Y\to Y'$ and a $K_{Y'}$-Mori fiber space structure on $Y'$. Since $K_Y+a_1E_1\equiv 0$, the negativity lemma shows that $K_Y+a_1E_1=\varphi^{*}(K_{Y'}+a_1E'_1)$ and hence $(Y', a_1E'_1)$ is $\varepsilon$-klt and log Calabi-Yau, where $E'_1\coloneqq \varphi_{*}E_1$. Then, by Lemma \ref{Lift of log CY MFS} and the definition of $p_0$, we can take a log resolution $\mu\colon Z\to Y'$ of $(Y', a_1E'_1)$ that factors through $\varphi$ and such that $(Z, \mu_{*}^{-1}E'_1+\operatorname{Exc}(\mu))$ lifts to $W(k)$. We now have the following diagram: \[ \xymatrix{ (Z, \mu_{*}^{-1}E'_1+\operatorname{Exc}(\mu))\ar[rd]_\mu\ar[r]_-h & (Y, E_1) \ar[r]_{f}\ar[d]^{\varphi}& X \\ & (Y', E'_1) &. \\ } \] Since $\operatorname{Exc}(f\circ h)\subset \mu_{*}^{-1}E'_1+\operatorname{Exc}(\mu)$, the pair $(Z, \operatorname{Exc}(f\circ h))$ lifts to $W(k)$, and the assertion holds by Lemma \ref{every lift=some lift}. \end{proof} \section{The Bogomolov-Sommese vanishing theorem}\label{sec:BSV} \subsection{An extension type theorem for lc surfaces} In this subsection, we show an extension type theorem for lc surfaces (Proposition \ref{ext thm}), which plays an essential role in the proof of Theorem \ref{BSV, Intro}. \begin{lem}[\textup{cf.~\cite[Lemma 2.2]{Kaw1}}]\label{push} Let $f\colon Y\to X$ be a projective birational morphism of normal surfaces. Let $B_Y$ be a reduced $\mathbb{Z}$-divisor and $D_Y$ a $\mathbb{Z}$-divisor on $Y$. We set $B\coloneqq f_{*}B_Y$ and $D\coloneqq f_{*}D_Y$. Then the following hold. \begin{enumerate} \item[\textup{(1)}] The natural restriction morphism \[f_{*}(\Omega_{Y}^{[1]}(\operatorname{log}\,B_Y)\otimes \mathcal{O}_{Y}(-D_Y))^{**}\to (\Omega_X^{[1]}(\operatorname{log}\, B)\otimes \mathcal{O}_X(-D))^{**} \] is injective. \item[\textup{(2)}] Suppose that $X$ and $Y$ are projective. Then $\kappa(Y, D_Y)\leqslant \kappa(X, D)$ holds.
\end{enumerate} \end{lem} \begin{proof} The same proof as \cite[Lemma 2.2]{Kaw1} works. Note that (1) is local on $X$. \end{proof} \begin{rem}\label{Remark:tangent} In the setting of Lemma \ref{push} (2), we have \begin{align*} H^2(Y, T_Y(-\operatorname{log}\, B_Y))\cong&\mathrm{Hom}_{\mathcal{O}_Y}(T_Y(-\operatorname{log}\, B_Y), \mathcal{O}_Y(K_Y))\\ \cong &H^0(Y, (\Omega_Y^{[1]}(\operatorname{log}\,B_Y)\otimes \mathcal{O}_Y(K_Y))^{**}) \end{align*} by Serre duality. Then, by Lemma \ref{push} (1), we obtain an injective morphism \[ H^2(Y, T_Y(-\operatorname{log}\, B_Y))\hookrightarrow H^2(X, T_X(-\operatorname{log}\, B)). \] We will use this fact in Section \ref{sec : Lift of a surface pair}. \end{rem} \begin{defn}\label{Definition:dlt blow-up} Let $X$ be a normal surface and $B$ a $\mathbb{Q}$-divisor with coefficients in $[0,1]$. We say that a morphism $h\colon W\to X$ is a \textit{dlt blow-up of $(X, B)$} if \begin{enumerate} \item[\textup{(1)}] $h$ is a projective birational morphism, \item[\textup{(2)}] $(W, h^{-1}_{*}B+\operatorname{Exc}(h))$ is dlt, and \item[\textup{(3)}] $K_W+h^{-1}_{*}B+\operatorname{Exc}(h)+F=h^{*}(K_X+B)$ for some effective $\mathbb{Q}$-divisor $F$. \end{enumerate} \end{defn} \begin{lem}\label{Lemma:dlt blow-up} Let $X$ be a normal surface and $B$ a $\mathbb{Q}$-divisor with coefficients in $[0,1]$. Then the following hold. \begin{enumerate} \item[\textup{(1)}] Any log resolution $\pi\colon Y\to X$ of $(X, B)$ decomposes into a projective birational morphism $Y\to W$ and a dlt blow-up $W\to X$. \item[\textup{(2)}] $F=0$ if and only if $(X, B)$ is lc. \end{enumerate} \end{lem} \begin{proof} We refer to \cite[Theorem 4.7 and Remark 4.8]{Tanaka(exc)} for the proof. Note that, using the Mumford pullback, \cite[Remark 4.8 (1)]{Tanaka(exc)} holds without the assumption that $K_X+B$ is $\mathbb{R}$-Cartier.
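For the reader's convenience, we recall how the Mumford pullback is computed; the numerical example below is only an illustration and is not used elsewhere. For a resolution $\pi\colon Y\to X$ with $\pi$-exceptional prime divisors $E_1,\dots,E_r$, one sets $\pi^{*}D\coloneqq \pi_{*}^{-1}D+\sum_i d_iE_i$, where $(d_1,\dots,d_r)\in\mathbb{Q}^r$ is the unique solution of the linear system \[ \Big(\pi_{*}^{-1}D+\sum_j d_jE_j\Big)\cdot E_i=0 \quad \text{for all } i\in\{1,\dots,r\}, \] which exists since the intersection matrix $(E_i\cdot E_j)_{i,j}$ is negative definite; no ($\mathbb{R}$-)Cartier assumption on $D$ is needed. For example, if $X$ has an $A_1$-singularity resolved by a single $(-2)$-curve $E_1$ with $\pi_{*}^{-1}D\cdot E_1=1$, then $1-2d_1=0$, that is, $d_1=\frac{1}{2}$.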
\end{proof} \begin{defn} Let $(X, B)$ be a dlt pair over an algebraically closed field of characteristic $p>0$ such that $B$ is reduced. We say that $(X, B)$ is \textit{tamely dlt} if the Cartier index of $K_X+B$ is not divisible by $p$. \end{defn} \begin{defn}\label{def:tame decomp} Let $(X,B)$ be a pair over an algebraically closed field of positive characteristic such that $B$ is reduced. Let $\pi\colon Y\to X$ be a log resolution of $(X,B)$. We say that $\pi\colon Y\to X$ admits a \textit{tame decomposition} if there exists a decomposition \[ \pi\colon Y_0\coloneqq Y\overset{f_0}{\to} Y_1 \to \cdots \overset{f_{m-1}}{\to} Y_m\coloneqq X \] of $\pi$ such that \begin{itemize} \item $(Y_i, B_{Y_i})$ is tamely dlt and \item $-(K_{Y_i}+B_{Y_i})$ is $f_i$-nef, \end{itemize} for all $i\in\{0,\dots,m-1\}$, where $B_{Y_0}=B_Y\coloneqq \pi_{*}^{-1}B+\operatorname{Exc}(\pi)$ and $B_{Y_{i}}$ is the pushforward of $B_Y$ to $Y_i$. We remark that a tame resolution \cite[Definition 7.1]{Gra} admits a tame decomposition. \end{defn} \begin{lem}[cf.~\textup{\cite[7.B. Proof of Theorem 1.2]{Gra}}]\label{Lemma:tame decomposition} Let $(X, B)$ be an lc surface over an algebraically closed field of characteristic $p>5$ and $\pi\colon Y\to X$ a log resolution of $(X,B)$. Then $\pi$ admits a tame decomposition. \end{lem} \begin{proof} We follow the notation of Definition \ref{def:tame decomp}. We note that the minimal resolutions of lc surface singularities with reduced boundary are classified as in (7.8.1)--(7.8.7) in \cite[7.B. Proof of Theorem 1.2]{Gra}. By their dual graphs (\cite[Figures 2--9]{Gra}), we can see that every $(-1)$-curve $F$ on $Y$ intersects at most two components of $\operatorname{Supp}(B_Y)$. Then \[(K_Y+B_Y)\cdot F=(K_Y+F)\cdot F + (B_Y-F)\cdot F\leqslant 0,\] and by taking $f_0$ as the contraction of all $(-1)$-curves on $Y$, we can assume that $\pi\colon Y\to X$ is the minimal resolution.
For the cases of (7.8.1)--(7.8.4), (7.8.6) with non-zero boundary, and (7.8.7) in \cite[7.B. Proof of Theorem 1.2]{Gra}, it has already been proved that $\pi$ is a tame resolution, and in particular, admits a tame decomposition. Here the assumption $p>5$ is used essentially. For the case of (7.8.5), $-(K_Y+B_Y)$ is $\pi$-nef since the dual graph of the $\pi$-exceptional divisor is a chain, and thus $\pi$ itself is a tame decomposition. Finally, we discuss the case (7.8.6) with zero boundary. Let $f_0 \colon Y\to Y_1$ be the contraction of the two $(-2)$-curves $G_1$ and $G_2$ which intersect the fork. Then $(K_Y+B_Y)\cdot G_i=-1$ and $Y_1$ is tamely dlt since $p\neq 2$. Next, let $f_1\colon Y_1\to X$ be the contraction of all the remaining $\pi$-exceptional divisors, whose dual graph is a chain. Then we can see that $f_{1}\circ f_{0}$ is a tame decomposition of $\pi$. \end{proof} \begin{prop}[An extension type theorem for lc surfaces]\label{ext thm} Let $(X, B)$ be an lc surface over an algebraically closed field of characteristic $p>5$ and $D$ a $\mathbb{Z}$-divisor on $X$. Let $f\colon Y\to X$ be a projective birational morphism from a normal surface $Y$ and $B_Y\coloneqq f_{*}^{-1}B+\operatorname{Exc}(f)$. Then the natural restriction morphism \[ \Phi\colon f_{*}(\Omega_{Y}^{[1]}(\operatorname{log}\,\lfloor B_Y\rfloor)\otimes \mathcal{O}_{Y}(-\lceil f^{*}D \rceil))^{**}\to (\Omega_X^{[1]}(\operatorname{log}\,\lfloor B\rfloor)\otimes \mathcal{O}_X(-D))^{**} \] is an isomorphism. \end{prop} \begin{rem}\label{Remark:EXT} Proposition \ref{ext thm} is equivalent to saying that \[ f_{*}(\Omega_{Y}^{[1]}(\operatorname{log}\,\lfloor B_Y\rfloor)\otimes \mathcal{O}_{Y}(-\lceil f^{*}D \rceil))^{**} \] is reflexive. \end{rem} \begin{rem}\label{Remark:EXT2} If we take $D=0$ in the proposition, then it is nothing but Graf's logarithmic extension theorem (\cite[Theorem 1.2]{Gra}).
Let us see why it is necessary to generalize Graf's logarithmic extension theorem to Proposition \ref{ext thm} in order to prove Theorem \ref{BSV, Intro}. First, we consider the case when the base field is an algebraically closed field of characteristic zero and follow the notation of Theorem \ref{BSVoverC}. Let $\pi\colon Y\to X$ be a log resolution and $B_Y\coloneqq \pi^{-1}_{*}B+\operatorname{Exc}(\pi)$. Suppose that there exist a $\mathbb{Z}$-divisor $D$ and an injective morphism $\mathcal{O}_X(D)\hookrightarrow \Omega^{[i]}_X(\operatorname{log}\,B)$. For simplicity, we assume that $D$ is $\mathbb{Q}$-Cartier. Then, by applying the logarithmic extension theorem in characteristic zero \cite[Theorem 1.5]{GKKP}, we can construct a $\mathbb{Z}$-divisor $D_Y$ on $Y$ such that there exists an injective morphism $\mathcal{O}_Y(D_Y)\hookrightarrow \Omega^{[i]}_Y(\operatorname{log}\,B_Y)$ and $\kappa(X, D)=\kappa(Y, D_Y)$. This means that the Bogomolov-Sommese vanishing theorem can be reduced to the case of log smooth pairs by the logarithmic extension theorem (see \cite[7.C. Proof of Theorem 7.2.]{GKKP} for the detailed argument). In the construction of $D_Y$, we use the fact that an index one cover of $D$ is \'etale in codimension one. However, when we work in characteristic $p>0$ and the Cartier index of $D$ is divisible by $p$, this fact is not always true. Therefore, we cannot apply Graf's logarithmic extension theorem directly to reduce Theorem \ref{BSV, Intro} to the case where $(X, B)$ is log smooth. Moreover, in positive characteristic, reducing to the case of log smooth surfaces is not enough because the Bogomolov-Sommese vanishing theorem is not known even for such pairs. Proposition \ref{ext thm} asserts that $D_Y$ can be taken to be $\lceil \pi^{*}D \rceil$, and this enables us to apply the Akizuki-Nakano vanishing theorem (Theorem \ref{Hara's vanishing}) when $D$ is ample.
\end{rem} \begin{proof}[Proof of Proposition \ref{ext thm}] \textbf{Step~0.} Throughout the proof of this proposition, $\Omega_W^{[1]}(\operatorname{log}\,B_W)(-D_W)$ denotes $(\Omega_W^{[1]}(\operatorname{log}\,B_W)\otimes \mathcal{O}_W(-D_W))^{**}$ for every surface pair $(W, B_W)$ and every $\mathbb{Z}$-divisor $D_W$. By Lemma \ref{push} (1), $\Phi$ is injective. Since $(X, \lfloor B \rfloor)$ is lc (see \cite[Proposition 7.2]{Gra}), by replacing $B$ with $\lfloor B \rfloor$, we may assume that $B$ is reduced. Moreover, since the assertion of the proposition is local on $X$, we may assume that $X$ is affine. Therefore, it suffices to show that \[ \Phi\colon H^0(Y, \Omega_{Y}^{[1]}(\operatorname{log}\,B_Y)(-\lceil f^{*}D \rceil))\hookrightarrow H^0(X, \Omega_X^{[1]}(\operatorname{log}\,B)(-D)) \] is surjective. \textbf{Step~1.} First, we prove the following claim. \begin{cl}\label{step} Suppose that \begin{itemize} \item $(Y, B_Y)$ is tamely dlt and \item $-(K_Y+B_Y)$ is $f$-nef. \end{itemize} Then $\Phi$ is surjective. \end{cl} \begin{clproof} We take $s\in H^0(X, \Omega_X^{[1]}(\operatorname{log}\, B)(-D))$. We construct a section of $H^0(Y, \Omega_{Y}^{[1]}(\operatorname{log}\, B_Y)(-\lceil f^{*}D \rceil))$ that maps to $s$ under $\Phi$. We may assume that $s$ is non-zero, and thus $s$ can be regarded as an injective $\mathcal{O}_X$-module homomorphism $s\colon \mathcal{O}_X(D) \hookrightarrow \Omega_X^{[1]}(\operatorname{log}\,B)$. By \cite[Theorem 6.1]{Gra}, the natural restriction morphism $f_{*}\Omega_Y^{[1]}(\operatorname{log}\,B_Y)\to \Omega_X^{[1]}(\operatorname{log}\,B)$ is an isomorphism. Then we have a generically injective $\mathcal{O}_Y$-module homomorphism \[ f^{*}\mathcal{O}_X(D) \overset{f^{*}s}{\to} f^{*}\Omega_X^{[1]}(\operatorname{log}\,B)\cong f^{*}f_{*}\Omega_{Y}^{[1]}(\operatorname{log}\,B_Y) \to \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y).
\] By taking double duals, we obtain an injective $\mathcal{O}_Y$-module homomorphism \[ s_Y\colon f^{[*]}\mathcal{O}_X(D) \hookrightarrow \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y), \] where $f^{[*]}\mathcal{O}_X(D)\coloneqq (f^{*}\mathcal{O}_X(D))^{**}$. We take a $\mathbb{Z}$-divisor $D_Y$ on $Y$ such that $\mathcal{O}_{Y}(D_Y)= f^{[*]}\mathcal{O}_X(D)$. Since $\mathcal{O}_X(f_*D_Y)=(f_{*}\mathcal{O}_Y(D_Y))^{**}=(f_{*}f^{[*]}\mathcal{O}_X(D))^{**}=\mathcal{O}_X(D)$, it follows that $f_{*}D_Y$ is linearly equivalent to $D$. By replacing $D_Y$ with $D_Y+f^{*}(D-f_{*}D_Y)$, we may assume that $f_*D_Y=D$. In particular, $D_Y-f^{*}D$ is $f$-exceptional. Now, we replace $D_Y$ so that $D_Y-f^{*}D$ is effective. We assume that $D_Y-f^{*}D$ is not effective. By applying the negativity lemma to the negative coefficients part of $D_Y-f^{*}D$, we can take an $f$-exceptional prime divisor $E_1$ such that $\mathrm{mult}_{E_1}(D_Y-f^{*}D)<0$ and $D_Y\cdot E_1>0$. Then we can show that $s_Y$ factors through an injective $\mathcal{O}_Y$-module homomorphism $\mathcal{O}_{Y}(D_Y+E_1)\hookrightarrow \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y)$. This follows from essentially the same argument as in \cite[Theorem 6.1]{Gra}, but we provide the proof here for completeness.
Since $(Y, B_Y)$ is tamely dlt, we have the following commutative diagram \begin{equation*} \xymatrix{ & & \mathcal{O}_{Y}(D_Y) \ar@{.>}[ld] \ar[d]^-{s_Y} \ar[rd]^-{t} &\\ 0\ar[r] &\Omega^{[1]}_{Y}(\operatorname{log}\,B_Y-E_1)\ar[r] & \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y) \ar[r]^{\operatorname{res}_{E_1}} & \mathcal{O}_{E_1} \ar[r] & 0,} \end{equation*} and a surjective morphism \[ \operatorname{res}^m_{E_1}\colon \operatorname{Sym}^{[m]}\Omega_Y^{[1]}(\operatorname{log}\,B_Y)\coloneqq (\operatorname{Sym}^{m}\Omega_Y^{[1]}(\operatorname{log}\,B_Y))^{**}\to \mathcal{O}_{E_1} \] for each $m>0$ that coincides with $\operatorname{Sym}^{m}(\operatorname{res}_{E_1})$ at the generic point of $E_1$ by \cite[Theorem 1.4 (1.4.1)]{Gra}. We show that $t$ is zero. For the sake of contradiction, we assume that $t$ is not zero. Since $\operatorname{Im}(t)\subset \mathcal{O}_{E_1}$ is a torsion-free $\mathcal{O}_{E_1}$-module, it follows that $t$ is non-zero at the generic point of $E_1$ and so is $\operatorname{Sym}^{m}(t)\colon\mathcal{O}_Y(D_Y)^{\otimes m}\to \mathcal{O}_{E_1}$. Since $\operatorname{Sym}^{m}(s_Y)$ and $\operatorname{Sym}^{m}(\operatorname{res}_{E_1})$ coincide with $\operatorname{Sym}^{[m]}(s_Y)\coloneqq (\operatorname{Sym}^m(s_Y))^{**}$ and $\operatorname{res}^m_{E_1}$ at the generic point of $E_1$, respectively, the composition $\operatorname{res}^m_{E_1}\circ\operatorname{Sym}^{[m]}(s_Y)$ coincides with $\operatorname{Sym}^m(t)=\operatorname{Sym}^m(\operatorname{res}_{E_1})\circ\operatorname{Sym}^m(s_Y)$, and in particular, is non-zero at the generic point of $E_1$. Now, we fix $m>0$ such that $mD_Y$ is Cartier. Note that $Y$ is $\mathbb{Q}$-factorial since $(Y, B_Y)$ is dlt.
By restricting $\operatorname{res}^m_{E_1}\circ\operatorname{Sym}^{[m]}(s_Y)$ to $E_1$, we obtain an injective $\mathcal{O}_{E_1}$-module homomorphism $\mathcal{O}_{E_1}(mD_Y)\hookrightarrow \mathcal{O}_{E_1}$ and hence $0<mD_Y\cdot E_1=\deg(\mathcal{O}_{E_1}(mD_Y))\leqslant 0$, a contradiction. Therefore $t$ is zero and the morphism $s_Y$ factors through $\mathcal{O}_{Y}(D_Y) \to \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y-E_1)$. Then, by \cite[Theorem 1.5 (1.5.1)]{Gra}, we obtain the following commutative diagram \begin{equation*} \xymatrix{ & & \mathcal{O}_{Y}(D_Y) \ar@{.>}[ld] \ar[d]^{s_Y} \ar[rd]^{v} &\\ 0\ar[r] &\Omega^{[1]}_{Y}(\operatorname{log}\,B_Y)(-E_1)\ar[r] & \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y-E_1) \ar[r]^-{\mathrm{restr}_{E_1}} & \omega_{E_1}(\lfloor E_1^c \rfloor) \ar[r] & 0,} \end{equation*} and a surjective morphism \[ \mathrm{restr}^{m}_{E_1}\colon \operatorname{Sym}^{[m]}\Omega^{[1]}_{Y}(\operatorname{log}\,(B_Y-E_1))\to \mathcal{O}_{E_1}(mK_{E_1}+\lfloor mE_1^c \rfloor) \] that coincides with $\operatorname{Sym}^{m}(\mathrm{restr}_{E_1})$ at the generic point of $E_1$. Here, $E_1^c$ denotes the different $\mathrm{Diff}_{E_1}(B_Y-E_1)$ (see \cite[Definition 4.2]{Kol13} for the definition). Since $-(K_Y+B_Y)$ is $f$-nef, it follows that \[ \deg(\mathcal{O}_{E_1}(mK_{E_1}+\lfloor mE_1^c \rfloor))\leqslant (mK_Y+mB_Y)\cdot E_1\leqslant 0 \] for all $m>0$ and hence an argument similar to the one above shows that $v=0$ and $s_Y$ factors through $\mathcal{O}_{Y}(D_Y) \hookrightarrow \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y)(-E_1)$. In particular, we obtain an injective $\mathcal{O}_Y$-module homomorphism $\mathcal{O}_{Y}(D_Y+E_1)\hookrightarrow \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y)$ that coincides with $s_Y$ on $Y\setminus\operatorname{Exc}(f)$. By replacing $D_Y$ with $D_Y+E_1$ and repeating the above procedure, we can assume that $D_Y-f^{*}D$ is effective.
Now, we obtain a $\mathbb{Z}$-divisor $D_Y$ on $Y$ such that $D_Y-f^{*}D\geqslant 0$ and a section $s_Y \in H^0(Y, \Omega_{Y}^{[1]}(\operatorname{log}\, B_Y)(-D_Y))$, which maps to $s$ under the natural restriction morphism \[ \Phi' \colon H^0(Y, \Omega_{Y}^{[1]}(\operatorname{log}\, B_Y)(-D_Y))\to H^0(X, \Omega_X^{[1]}(\operatorname{log}\,B)(-D)). \] Since $\lceil f^{*}D \rceil\leqslant \lceil D_Y \rceil=D_Y$, it follows that $\Phi'$ decomposes into the natural injective morphism \[ \Theta \colon H^0(Y, \Omega_{Y}^{[1]}(\operatorname{log}\, B_Y)(-D_Y))\hookrightarrow H^0(Y, \Omega_{Y}^{[1]}(\operatorname{log}\, B_Y)(-\lceil f^{*}D \rceil)) \] and the morphism \[ \Phi\colon H^0(Y, \Omega_{Y}^{[1]}(\operatorname{log}\,B_Y)(-\lceil f^{*}D \rceil))\hookrightarrow H^0(X, \Omega_X^{[1]}(\operatorname{log}\,B)(-D)). \] Now we have $\Phi(\Theta(s_Y))=\Phi'(s_Y)=s$ and hence $\Phi$ is surjective. Thus we finish the proof of the claim. \end{clproof} \textbf{Step~2.} Next, we confirm that we may assume that $f\colon Y\to X$ is a log resolution of $(X, B)$. Let $\widetilde{f}\colon Z\to X$ be a log resolution of $(X,B)$ that factors through $f$. Suppose that the natural restriction morphism \[ \Phi_{Z,X} \colon H^0(Z, \Omega_{Z}(\operatorname{log}\,B_Z)(-\lceil \widetilde{f}^{*}D \rceil))\hookrightarrow H^0(X, \Omega_X^{[1]}(\operatorname{log}\,B)(-D)) \] is surjective, where $B_Z\coloneqq \widetilde{f}^{-1}_{*}B+\operatorname{Exc}(\widetilde{f})$. Since $\Phi_{Z,X}$ factors through the natural restriction morphism \[ \Phi_{Y,X} \colon H^0(Y, \Omega^{[1]}_{Y}(\operatorname{log}\,B_Y)(-\lceil f^{*}D \rceil))\hookrightarrow H^0(X, \Omega_X^{[1]}(\operatorname{log}\,B)(-D)), \] the morphism $\Phi_{Y,X}$ is also surjective. Thus we may assume that $f\colon Y\to X$ is a log resolution of $(X, B)$. \textbf{Step~3.} Finally, we show the surjectivity of $\Phi$ and finish the proof of the proposition.
By Lemma \ref{Lemma:tame decomposition}, the log resolution $f$ admits a tame decomposition \[ f\colon Y_0\coloneqq Y\overset{f_0}{\to} Y_1 \to \cdots \overset{f_{m-1}}{\to} Y_m\coloneqq X. \] By the claim in Step~1, the natural restriction morphisms \begin{align*} \Phi_{m-1,m} \colon &H^0(Y_{m-1}, \Omega_{Y_{m-1}}^{[1]}(\operatorname{log}\,B_{Y_{m-1}})(-\lceil f_{m-1}^{*}D \rceil))\cong H^0(X, \Omega_X^{[1]}(\operatorname{log}\,B)(-D)),\\ \Phi_{m-2,m-1} \colon &H^0(Y_{m-2}, \Omega_{Y_{m-2}}^{[1]}(\operatorname{log}\,B_{Y_{m-2}})(-\lceil f_{m-2}^{*}\lceil f_{m-1}^{*}D \rceil \rceil))\\\cong &H^0(Y_{m-1}, \Omega_{Y_{m-1}}^{[1]}(\operatorname{log}\,B_{Y_{m-1}})(-\lceil f_{m-1}^{*}D \rceil)) \end{align*} are isomorphisms. Then $\Phi_{m-1,m}\circ\Phi_{m-2,m-1}$ factors through the natural restriction morphism \begin{align*} \Phi_{m-2,m}\colon &H^0(Y_{m-2}, \Omega_{Y_{m-2}}^{[1]}(\operatorname{log}\,B_{Y_{m-2}})(-\lceil f_{m-2}^{*}f_{m-1}^{*}D\rceil))\\ \hookrightarrow &H^0(X, \Omega_X^{[1]}(\operatorname{log}\,B)(-D)), \end{align*} and hence $\Phi_{m-2,m}$ is an isomorphism. By repeating this procedure, we can conclude that $\Phi$ is an isomorphism. \end{proof} \subsection{Proof of Theorem \ref{BSV, Intro}} In this subsection, we prove Theorem \ref{BSV, Intro}. First, we show the Bogomolov-Sommese vanishing theorem on a surface admitting a fibration structure, which includes the case of a Mori fiber space and that of an lc trivial fibration. \begin{lem}\label{LCTF} Let $X$ be a normal surface over an algebraically closed field $k$ of characteristic $p>3$ and $B$ a reduced divisor on $X$. Let $f\colon X\to Z$ be a projective surjective morphism such that $\dim\,Z=1$, $f_{*}\mathcal{O}_X=\mathcal{O}_Z$, and $-(K_X+B)$ is $f$-nef. Then \[ f_{*}(\Omega_X^{[1]}(\operatorname{log}\,B)\otimes \mathcal{O}_X(-D))^{**}=0 \] for every $\mathbb{Z}$-divisor $D$ satisfying $D\cdot F>0$ for a general fiber $F$ of $f$.
\end{lem} \begin{proof} Since $f_{*}(\Omega_X^{[1]}(\operatorname{log}\,B)\otimes \mathcal{O}_X(-D))^{**}$ is torsion-free, it suffices to show that the rank of this sheaf is zero, and in particular, we may shrink $Z$ for the proof. First, we prove the following claim. \begin{cl} By shrinking $Z$, we may assume that $B$ is snc over $Z$. \end{cl} \begin{clproof} We note that a general fiber $F$ is reduced and irreducible since $\dim\,Z=1$ and $f_{*}\mathcal{O}_X=\mathcal{O}_Z$. By shrinking $Z$, we may assume that all irreducible components of $B$ dominate $Z$. Let $n\in\mathbb{Z}_{\geqslant0}$ be the number of irreducible components of $B$. Then we have \[ \deg(K_F)=K_X\cdot F\leqslant K_X\cdot F+n\leqslant (K_X+B)\cdot F\leqslant0 \] and hence $(\deg(K_F), n)=(0, 0), (-2, 0), (-2, 1)$, or $(-2, 2)$. If $(\deg(K_F), n)=(0, 0)$, then $B=0$ and $F$ is an elliptic curve since $p>3$. Similarly, if $(\deg(K_F), n)=(-2, 0)$, then $B=0$ and $F\cong \mathbb{P}_k^1$. Next, if $(\deg(K_F), n)=(-2, 1)$, then $F\cong \mathbb{P}_k^1$ and $B\cdot F=1$ or $2$. In the case where $B\cdot F=1$, it follows that $B$ and $F$ intersect transversally. In the case where $B\cdot F=2$, the restricted morphism $f|_{B}\colon B\to Z$ is generically \'etale since $p\neq 2$. Finally, if $(\deg(K_F), n)=(-2, 2)$, then $B_1\cdot F=B_2\cdot F=1$ and thus both $B_1$ and $B_2$ intersect transversally with $F$, where $B_1$ and $B_2$ are the irreducible components of $B$. Therefore, in each case, we can assume that $B$ is snc over $Z$ by shrinking $Z$, and we finish the proof of the claim. \end{clproof} Now, we show the assertion of the lemma. We shrink $Z$ so that $Z$ is affine and $B$ is snc over $Z$. Note that $(X, B)$ is log smooth in this case. For the sake of contradiction, we assume that \[ H^0(X, \Omega_X(\operatorname{log}\,B)\otimes \mathcal{O}_X(-D))\neq 0 \] for some $\mathbb{Z}$-divisor $D$ satisfying $D\cdot F>0$.
Then there exists an injective $\mathcal{O}_X$-module homomorphism $s\colon \mathcal{O}_X(D)\hookrightarrow \Omega_X(\operatorname{log}\,B)$. Since $B$ is snc over $Z$, we have the following diagram: \begin{equation*} \xymatrix{ & & \mathcal{O}_{X}(D) \ar@{.>}[ld] \ar[d]^{s} \ar[rd]^{t} &\\ 0\ar[r] &\mathcal{O}_X(f^{*}K_Z)\ar[r] & \Omega_X(\operatorname{log}\,B) \ar[r] & \Omega_{X/Z}(\operatorname{log}\, B) \ar[r] & 0.} \end{equation*} In the above diagram, when $B\neq 0$, we define $\Omega_X(\operatorname{log}\,B) \to \Omega_{X/Z}(\operatorname{log}\, B)$ by $d(f^{*}{z})\longmapsto 0$ and $dx/x\longmapsto dx/x$, where $z$ is a coordinate on $Z$ and $x$ is a local equation of $B$. Note that $f^{*}{z}$ and $x$ form local coordinates on $X$ since $B$ is snc over $Z$. When $B=0$, this is the usual relative differential sequence for $f$ (\cite[II Proposition 8.11]{Har}). Suppose that $t$ is non-zero. Then, by restricting $t$ to $F$, we obtain an injective $\mathcal{O}_F$-module homomorphism $t|_{F}\colon \mathcal{O}_F(D|_F) \hookrightarrow \Omega_{F}(\operatorname{log}\, B|_F)=\mathcal{O}_F(K_F+B|_F)$, where the injectivity follows from the generality of $F$. This shows that \[ 0<\deg (D|_F)\leqslant\deg(K_F+B|_F)=(K_X+B)\cdot F\leqslant0, \] a contradiction. Thus $t$ is zero and the morphism $s$ factors through $\mathcal{O}_X(D)\hookrightarrow \mathcal{O}_X(f^{*}K_Z)$. Then by considering the restriction to $F$, we obtain \[ 0<\deg (D|_F)\leqslant\deg(f^*K_Z|_F)=0, \] a contradiction. Hence we conclude that $H^0(X, \Omega_X(\operatorname{log}\,B)\otimes \mathcal{O}_X(-D))=0$. \end{proof} Now, we prove Theorem \ref{BSV, Intro}. \begin{proof}[Proof of Theorem \ref{BSV, Intro}] \textbf{Step~0.} By replacing $B$ with $\lfloor B\rfloor$, we may assume that $B$ is reduced.
Since the assertion is obvious when $i=0$ or $2$, it suffices to show that \begin{equation} H^0(X, (\Omega_{X}^{[1]}(\operatorname{log}\,B)\otimes \mathcal{O}_{X}(-D))^{**})=0 \tag{a} \end{equation} for every big $\mathbb{Z}$-divisor $D$. Let $h\colon (W, B_W\coloneqq h^{-1}_*B+\operatorname{Exc}(h))\to (X,B)$ be a dlt blow-up. Then $\kappa(W, K_W+B_W)=\kappa(X, K_X+B)$, and when $p>5$, the vanishing (a) is equivalent to the vanishing \begin{equation} H^0(W, (\Omega_{W}^{[1]}(\operatorname{log}\,B_W)\otimes \mathcal{O}_{W}(-\lceil h^{*}D\rceil))^{**})=0 \tag{b} \end{equation} by Proposition \ref{ext thm}. We set $D_W\coloneqq \lceil h^{*}D\rceil$. By Remark \ref{nefness and bigness}, $D_W$ is big. \textbf{Step~1.} First, we assume that $\kappa(X, K_X+B)=-\infty$ and $p>5$. We show the vanishing (b). In this case, $K_W+B_W$ is not pseudo-effective by the abundance theorem (\cite[Theorem 1.2]{Tan12}). By Lemma \ref{push} (1) and (2), we can replace $W$ with an output of a $(K_W+B_W)$-MMP and assume that $W$ admits a $(K_W+B_W)$-Mori fiber space structure $f\colon W\to Z$. If $\dim\,Z=1$, then the assertion follows from Lemma \ref{LCTF}. Thus we assume that $\dim\,Z=0$. In this case, $W$ is a klt del Pezzo surface of Picard rank one and $D_W$ is an ample $\mathbb{Q}$-Cartier $\mathbb{Z}$-divisor. Let $\pi\colon Y\to W$ be a log resolution of $(W, B_W)$, $B'\coloneqq \pi^{-1}_{*}B_W$, $E\coloneqq \operatorname{Exc}(\pi)$, and $B_Y\coloneqq B'+E$. Then by Proposition \ref{ext thm}, it suffices to show that \[ H^0(Y, \Omega_{Y}(\operatorname{log}\,B_Y)\otimes \mathcal{O}_{Y}(-\lceil \pi^{*}D_W \rceil))=0. \] For the sake of contradiction, we assume that there exists an injective $\mathcal{O}_Y$-module homomorphism $s\colon \mathcal{O}_Y(\lceil \pi^{*}D_W \rceil)\hookrightarrow \Omega_{Y}(\operatorname{log}\,B_Y)$. We show that $s$ factors through $\mathcal{O}_Y(\lceil \pi^{*}D_W \rceil)\hookrightarrow \Omega_{Y}(\operatorname{log}\,E)$.
Let $B'_1$ be an irreducible component of $B'$. Since $(Y, B_Y)$ is log smooth, we obtain the following diagram \begin{equation*} \xymatrix{ & & \mathcal{O}_{Y}(\lceil \pi^{*}D_W\rceil) \ar@{.>}[ld] \ar[d]^{s} \ar[rd]^{t} &\\ 0\ar[r] &\Omega_{Y}(\operatorname{log}\,B_Y-B'_1)\ar[r] & \Omega_{Y}(\operatorname{log}\,B_Y) \ar[r] & \mathcal{O}_{B'_1} \ar[r] & 0.} \end{equation*} Since $B'_1$ is not $\pi$-exceptional and $D_W$ is an ample $\mathbb{Q}$-Cartier $\mathbb{Z}$-divisor, it follows that \[ \lceil \pi^{*}D_W\rceil \cdot B'_1\geqslant \pi^{*}D_W\cdot B'_1= D_W\cdot \pi_{*}B'_1>0, \] and hence $t$ is zero. Thus the morphism $s$ factors through $\mathcal{O}_Y(\lceil \pi^{*}D_W\rceil)\hookrightarrow \Omega_{Y}(\operatorname{log}\,B_Y-B'_1)$. By repeating this procedure, we can show that $s$ factors through $\mathcal{O}_Y(\lceil \pi^{*}D_W\rceil)\hookrightarrow \Omega_{Y}(\operatorname{log}\,E)$. By \cite[Theorem 1.4]{Lac} and Lemma \ref{every lift=some lift} (1), it follows that $(Y, E)$ lifts to $W(k)$. Now, since $\pi^{*}D_W$ is a nef and big $\mathbb{Q}$-divisor the support of whose fractional part is contained in $E$, Theorem \ref{Hara's vanishing} shows that $0\neq s\in H^0(Y, \Omega_Y(\operatorname{log}\,E)\otimes \mathcal{O}_Y(-\lceil \pi^{*}D_W \rceil))=0$, a contradiction. \textbf{Step~2.} Next, we assume that $\kappa(X, K_X+B)=0$ and prove the vanishing (b). We can replace $(W, B_W)$ with the $(K_W+B_W)$-minimal model by Lemma \ref{push} and hence assume that $K_W+B_W\equiv 0$. \textbf{Step~2-1.} First, we assume that $B_W\neq 0$ and $p>5$. In this case, $K_W$ is not pseudo-effective and we can run a $K_W$-MMP to obtain a birational contraction $\varphi\colon W\to W'$ and a $K_{W'}$-Mori fiber space $f\colon W'\to Z$. Since $K_W+B_W\equiv 0$, the negativity lemma shows that $K_W+B_W=\varphi^{*}(K_{W'}+B_{W'})$, where $B_{W'}\coloneqq \varphi_{*}B_W$. Thus $(W', B_{W'})$ is log Calabi-Yau and $W'$ is klt.
By Lemma \ref{push}, we can replace $(W, B_W)$ with $(W', B_{W'})$. If $\dim\,Z=1$, then the assertion follows from Lemma \ref{LCTF}. Thus we may assume that $\dim\,Z=0$. In this case, $W$ is a klt del Pezzo surface of Picard rank one and $D_W$ is an ample $\mathbb{Q}$-Cartier $\mathbb{Z}$-divisor. Let $\pi\colon Y\to W$ be a log resolution of $(W, B_W)$, $B'\coloneqq \pi^{-1}_{*}B_W$, $E\coloneqq \operatorname{Exc}(\pi)$, and $B_Y\coloneqq B'+E$. As in Step~1, we derive a contradiction assuming that there exists an injective $\mathcal{O}_Y$-module homomorphism $s \colon \mathcal{O}_Y(\lceil \pi^{*}D_W\rceil) \hookrightarrow \Omega_Y(\operatorname{log}\,B_Y)$. Since $B'\neq 0$, we can take an irreducible component $B'_1$ of $B'$. Since $B'_1$ is not $\pi$-exceptional and $D_W$ is an ample $\mathbb{Q}$-Cartier $\mathbb{Z}$-divisor, the argument in Step~1 shows that the morphism $s$ factors through $\mathcal{O}_Y(\lceil \pi^{*}D_W \rceil) \hookrightarrow \Omega_Y(\operatorname{log}\, B_Y-B'_1)$. Since $K_W+B_W\equiv 0$, we have $\kappa(Y, K_Y+B_Y-B'_1)=-\infty$. Now, we obtain a contradiction by Step~1. \textbf{Step~2-2.} Next, we assume that $B_W=0$. In this case, $W$ is a klt Calabi-Yau surface. We take a positive integer $n$ as in Lemma \ref{max Gor index} and assume $p>n$. We show that we may assume that $D_W$ is nef and big. Let $D_W\equiv P+N$ be the Zariski decomposition. Note that we can take the Zariski decomposition even when $W$ is singular (\cite[Theorem 3.1]{Eno}). We take a rational number $0<\varepsilon \ll 1$ such that $(W, \varepsilon N)$ is klt. Since $K_W$ is torsion by the abundance theorem (\cite[Theorem 1.2]{Tan12}) and $N$ is negative definite, it follows that $\kappa(W, K_W+\varepsilon N)=\kappa(W, N)=0$. We run a $(K_W+\varepsilon N)$-MMP to obtain a birational contraction $\varphi\colon W\to W'$ to a $(K_W+\varepsilon N)$-minimal model $W'$. Then $K_{W'}=\varphi_{*}K_W\equiv0$, and in particular, $W'$ is klt Calabi-Yau.
Moreover, $\varphi_{*}\varepsilon N\equiv K_{W'}+\varphi_{*}\varepsilon N\equiv 0$, and hence $\varphi_{*}D_W\equiv\varphi_{*}P$ is nef and big. By Lemma \ref{push}, we can replace $W$ with $W'$ and assume that $D_W$ is nef and big. We next reduce to the case where $W$ is canonical Calabi-Yau. We assume that $W$ is a strictly klt Calabi-Yau surface. By Lemma \ref{Cartier index}, the positive integer $n$ is the minimum integer such that $nK_W=0$. Then we can take a cyclic cover $\tau\colon \widetilde{W}\to W$ associated to a non-zero global section of $nK_W=0$. Since $n$ is not divisible by $p$, it follows that $\tau$ is \'etale in codimension one, and hence we obtain an injective morphism \[ H^0(W, (\Omega_W^{[1]}\otimes \mathcal{O}_W(-D_W))^{**})\hookrightarrow H^0(\widetilde{W}, (\Omega_{\widetilde{W}}^{[1]}\otimes \mathcal{O}_{\widetilde{W}}(-\tau^{*}D_W))^{**}) \] and $\tau^{*}D_W$ is nef and big. By replacing $W$ with $\widetilde{W}$, we may assume that $W$ has only canonical singularities. Now, we show the vanishing (b). Let $\pi\colon Y\to W$ be the minimal resolution and $E\coloneqq \operatorname{Exc}(\pi)$. By Proposition \ref{ext thm}, it suffices to show that \[ H^0(Y, \Omega_{Y}(\operatorname{log}\,E)\otimes \mathcal{O}_{Y}(-\lceil \pi^{*}D_W \rceil))=0. \] Since $p>n\geqslant 19$ by Remark \ref{Rem:max Gor index}, the pair $(Y, E)$ lifts to $W_2(k)$ by Proposition \ref{Prop : Lift of canonical CY}. Thus we conclude the desired vanishing by Theorem \ref{Hara's vanishing}. \textbf{Step~3.} Finally, we assume that $\kappa(X, K_X+B)=1$ and $p>3$. We prove the vanishing (a) directly. In this case, by replacing $(X, B)$ with its $(K_X+B)$-minimal model, we may assume that $K_X+B$ is semiample and $\kappa(X, K_X+B)=1$. Then there exists a projective morphism $f\colon X\to Z$ such that $\dim\,Z=1$, $f_{*}\mathcal{O}_X=\mathcal{O}_Z$, and $K_X+B$ is numerically trivial over $Z$. Now, by Lemma \ref{LCTF}, we obtain the assertion.
We will check the sharpness of the explicit bounds on $p_0$ in Example \ref{sharpness of char}. \end{proof} We recall the definition of a \textit{globally sharply $F$-split pair}, which is a positive characteristic analog of a log Calabi-Yau pair in characteristic zero. \begin{defn}[\textup{\cite[Definition 3.1]{SS10}}]\label{defition:GFS} Let $(X, B)$ be a pair over an algebraically closed field of characteristic $p>0$. We say that $(X, B)$ is \textit{globally sharply $F$-split} if there exists a positive integer $e\in \mathbb{Z}_{>0}$ such that the composite map \[ \mathcal{O}_X \to F^e_*\mathcal{O}_X \hookrightarrow F^e_*\mathcal{O}_X(\lceil (p^e-1)B\rceil) \] of the $e$-times iterated Frobenius morphism $\mathcal{O}_X \to F^e_*\mathcal{O}_X$ and the natural inclusion $F^e_*\mathcal{O}_X \hookrightarrow F^e_*\mathcal{O}_X(\lceil (p^e-1)B\rceil)$ splits as an $\mathcal{O}_X$-module homomorphism. \end{defn} By an argument similar to the proof of Theorem \ref{BSV, Intro}, we can show the Bogomolov-Sommese vanishing theorem for a globally sharply $F$-split surface pair. \begin{prop}\label{BSV for GSFS} Let $(X, B)$ be a globally sharply $F$-split surface pair over an algebraically closed field of characteristic $p>5$. Then \[ H^0(X, (\Omega_X^{[i]}(\operatorname{log}\,\lfloor B\rfloor)\otimes \mathcal{O}_X(-D))^{**})=0 \] for every $\mathbb{Z}$-divisor $D$ on $X$ satisfying $\kappa(X, D)>i$. \end{prop} \begin{proof} By \cite[Theorem 4.4 (ii) and Theorem 4.3 (ii)]{SS10}, it follows that $(X, B)$ is lc and $-(K_X+B)$ is effective. If $\kappa(X, K_X+\lfloor B\rfloor)=-\infty$, then the assertion follows from Theorem \ref{BSV, Intro}. Thus we may assume that $K_X+\lfloor B\rfloor\equiv 0$. First, we assume that $(X, \lfloor B\rfloor)$ is not klt. By Proposition \ref{ext thm}, we can replace $(X, \lfloor B\rfloor)$ with its dlt blow-up. In this case, the boundary of the dlt pair is non-zero since $(X, \lfloor B\rfloor)$ is not klt.
Then the assertion follows from Step~2-1 of the proof of Theorem \ref{BSV, Intro}. Now, we assume that $X$ is klt Calabi-Yau and $B=0$. As in Step~2-2 of the proof of Theorem \ref{BSV, Intro}, by considering the Zariski decomposition, we can assume that $D$ is nef and big. Note that the globally $F$-split property is preserved under a birational contraction (\cite[1.1.9 Lemma]{fbook}). Next, a splitting morphism $F_{*}\mathcal{O}_X\to \mathcal{O}_X$ gives a non-zero section of $\operatorname{Hom}_{\mathcal{O}_X}(F_{*}\mathcal{O}_X, \mathcal{O}_X)\cong H^0(X, \mathcal{O}_X((1-p)K_X))$, and together with $K_X\equiv0$, we obtain $(1-p)K_X=0$. In particular, the minimum positive integer $n$ such that $nK_X=0$ is not divisible by $p$. We recall that the globally $F$-split property is preserved under a finite cover which is \'etale in codimension one (\cite[Lemma 11.1]{PZ19}). Thus, by taking a cyclic cover associated to a non-zero global section of $nK_X$, we may assume that $X$ is a canonical Calabi-Yau surface such that $K_X=0$. If $X$ is an abelian surface, then the same argument as in Step~2-2 of the proof of Theorem \ref{BSV, Intro} works. Thus we may assume that the minimal resolution $Y$ of $X$ is a K3 surface. Now, by \cite[1.3.13 Lemma]{fbook} and \cite[5.1 Theorem]{GK}, the K3 surface $Y$ is not supersingular, and the argument of Step~2-2 of the proof of Theorem \ref{BSV, Intro} and Remark \ref{Rem : Lift of canonical CY} show the desired vanishing. \end{proof} \begin{rem} In the proof of Proposition \ref{BSV for GSFS}, we do not need the assumption that $p>5$ in some places (see \cite[Lemma 4.17]{BBKW} and \cite[Proposition 4.18]{BBKW}). On the other hand, it is still open whether Proposition \ref{ext thm} holds for $F$-pure surface singularities in characteristic $p\leqslant5$. This is the reason why we need the assumption that $p>5$ in Proposition \ref{BSV for GSFS}.
\end{rem} \section{Liftability of surface pairs}\label{sec : Lift of a surface pair} In this section, we prove Theorems \ref{lift, Intro} and \ref{KVV, Intro}. We also discuss deformations of an lc projective surface whose canonical divisor has negative Iitaka dimension (Proposition \ref{tangent}). First, we focus on the vanishing of the second cohomology of the logarithmic tangent sheaf. \begin{defn}\label{Definition:Q-ableian} Let $X$ be a normal projective variety. We say that $X$ is \textit{Q-abelian} if there exists a finite surjective morphism $\tau\colon \widetilde{X}\to X$ such that $\widetilde{X}$ is an abelian variety and $\tau$ is \'etale in codimension one. \end{defn} \begin{prop}\label{van of tan} Let $(X, B)$ be an lc projective surface pair over an algebraically closed field of characteristic $p>0$ such that $B$ is reduced. When $\kappa(X, K_X+B)=0$, let $(X', B')$ be the $(K_X+B)$-minimal model of $(X, B)$, where $B'$ is the pushforward of $B$. Suppose that one of the following holds. \begin{enumerate} \item[\textup{(1)}] $\kappa(X, K_X+B)=-\infty$ and $p>5$. \item[\textup{(2)}] $\kappa(X, K_X+B)=0$ and one of the following holds. \begin{enumerate} \item[\textup{(i)}] $B'\neq 0$ and $p>5$, \item[\textup{(ii)}] $B'=0$, $X'$ is klt, the Gorenstein index of $X'$ is not divisible by $p$, $X'$ is not Q-abelian, and $p>19$. \end{enumerate} \end{enumerate} Then $H^2(X, T_X(-\operatorname{log}\,B))=0$. \end{prop} \begin{rem} All the assumptions on $p$ are sharp (see Examples \ref{sharpness of char}, \ref{Example : num triv pair}, and \ref{Example : sharpness for Kodaira dim 0}). \end{rem} \begin{proof} First, we assume that the condition (1) holds. We can reduce the desired vanishing to an output of a $(K_X+B)$-MMP by Remark \ref{Remark:tangent}, and hence we may assume that $X$ admits a $(K_X+B)$-Mori fiber space structure $f\colon X\to Z$. If $\dim\,Z=1$, then the assertion follows from Lemma \ref{LCTF} since $-K_X$ is $f$-ample. We next assume that $\dim\,Z=0$.
Note that $-K_X$ is $\mathbb{Q}$-Cartier by \cite[Theorem 5.4]{Tan12}. Then it follows from $\rho(X)=1$ that $-K_{X}$ is an ample $\mathbb{Q}$-Cartier $\mathbb{Z}$-divisor, and the assertion follows from Theorem \ref{BSV, Intro}. Next, we assume that the condition (2)-(i) holds. It suffices to show that $H^2(X', T_{X'}(-\operatorname{log}\,B'))=0$. Since $K_{X'}+B'\equiv 0$ and $B'\neq 0$, it follows that $K_{X'}$ is not pseudo-effective. Then we can run a $K_{X'}$-MMP to obtain a birational contraction $\varphi\colon X'\to \overline{X}$ to a $K_{\overline{X}}$-Mori fiber space $f\colon \overline{X}\to Z$. We set $\overline{B}\coloneqq \varphi_{*}B'$. It suffices to show that $H^2(\overline{X}, T_{\overline{X}}(-\operatorname{log}\,\overline{B}))=0$. Since $K_{X'}+B'\equiv 0$, the negativity lemma shows that $K_{X'}+B'=\varphi^{*}(K_{\overline{X}}+\overline{B})$, and hence $(\overline{X}, \overline{B})$ is log Calabi-Yau. If $\dim\,Z=1$, then the assertion follows from Lemma \ref{LCTF} since $-K_{\overline{X}}$ is $f$-ample. If $\dim\,Z=0$, then the assertion follows from Step~2-1 of the proof of Theorem \ref{BSV, Intro} since $-K_{\overline{X}}$ is ample and $\mathbb{Q}$-Cartier by \cite[Theorem 5.4]{Tan12}. Finally, we assume that the condition (2)-(ii) holds. It suffices to show that $H^2(X', T_{X'})=0$. We first assume that $X'$ is strictly klt Calabi-Yau. Let $n$ be the minimum positive integer such that $nK_{X'}=0$. Then $n$ is equal to the Gorenstein index by Lemma \ref{Cartier index}, and hence $n$ is not divisible by $p$ by assumption. Let $\tau\colon \widetilde{X}\to X'$ be a cyclic cover associated to a non-zero global section of $nK_{X'}=0$.
Since $\tau$ is \'etale in codimension one, we have \begin{align*} H^2(X', T_{X'})\cong H^0(X', (\Omega_{X'}^{[1]}\otimes \mathcal{O}_{X'}(K_{X'}))^{**})\hookrightarrow& H^0(\widetilde{X}, (\Omega_{\widetilde{X}}^{[1]}\otimes \mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}}))^{**})\\ \cong&H^2(\widetilde{X}, T_{\widetilde{X}}). \end{align*} Thus we may assume that $X'$ is canonical Calabi-Yau. By the assumption that $X'$ is not Q-abelian, the minimal resolution of $X'$ is a K3 surface or an Enriques surface. In these cases, we have already shown that $H^2(X', T_{X'})=0$ in the proof of Proposition \ref{Prop : Lift of canonical CY}. \end{proof} Now, we prove Theorems \ref{lift, Intro} and \ref{KVV, Intro}. \begin{proof}[Proof of Theorem \ref{lift, Intro}] Set $B_Y\coloneqq \pi_{*}^{-1}B+\operatorname{Exc}(\pi)$. Suppose that the condition (1) holds and $p>5$. Then $\kappa(Y,K_Y+B_Y)=-\infty$ by Lemma \ref{push} (2), and hence $(Y, B_Y)$ lifts to $W(k)$ by Proposition \ref{van of tan} (1) and Theorem \ref{log smooth lift criterion}. Next, we assume that the condition (2) holds and $p>5$. By Lemma \ref{Lemma:dlt blow-up}, we can decompose $\pi\colon Y\to X$ into a birational morphism $Y\to W$ and a dlt blow-up $h\colon W\to X$. Then there exists an effective $\mathbb{Q}$-divisor $F$ such that $K_W+B_W+F\equiv h^{*}(K_X+B)\equiv 0$, where $B_W\coloneqq h^{-1}_{*}B+\operatorname{Exc}(h)$. By assumption, we have $B_W\neq 0$ and hence $H^2(Y, T_Y(-\operatorname{log}\,B_Y))\hookrightarrow H^2(W, T_W(-\operatorname{log}\,B_W))=0$ by Proposition \ref{van of tan} (1) and (2)-(i). Moreover, since $-K_X\equiv B$ is a non-zero effective divisor, it follows that $H^2(Y, \mathcal{O}_Y)=0$. Now, we conclude that $(Y, B_Y)$ lifts to $W(k)$ by Theorem \ref{log smooth lift criterion}. Finally, we assume that the condition (3) holds. In this case, $\kappa(Y,K_Y+B_Y)\leqslant 0$ by Lemma \ref{push} (2). If $\kappa(Y,K_Y+B_Y)=-\infty$ and $p>5$, then $(Y, B_Y)$ lifts to $W(k)$ by (1).
Thus we may assume that $\kappa(Y,K_Y+B_Y)=0$. By Propositions \ref{Prop : Lift of canonical CY} and \ref{Lift of strictly klt CY}, we can take a positive integer $p_{0}>19$ with the following property: for every klt Calabi-Yau surface $Z$ over an algebraically closed field of characteristic greater than $p_0$ and every log resolution $f\colon \widetilde{Z}\to Z$, the pair $(\widetilde{Z}, \operatorname{Exc}(f))$ lifts to $W(k)$. We fix such a $p_0$ and assume that $p>p_0$. We run a $(K_Y+B_Y)$-MMP to obtain a birational contraction $\varphi\colon Y\to Y'$ to the $(K_Y+B_Y)$-minimal model $(Y',B_{Y'}\coloneqq \varphi_{*}B_Y)$, which is dlt and log Calabi-Yau. If $B_{Y'}\neq0$, then $(Y, B_Y)$ lifts to $W(k)$ by (2). If $B_{Y'}=0$, then we obtain the desired liftability by the choice of $p_0$. We will check the sharpness of the explicit bound on $p_0$ in Examples \ref{sharpness of char} and \ref{Example : num triv pair}. \end{proof} \begin{proof}[Proof of Theorem \ref{KVV, Intro}] If $\kappa(X, K_X)=-\infty$ and $p>5$, then we obtain the desired vanishing by Theorem \ref{lift, Intro} and Lemma \ref{Lem:KVV}. Similarly, if we take a positive integer $p_0$ as in Theorem \ref{lift, Intro}, $\kappa(X, K_X)=0$, and $p>p_0$, then we obtain the remaining case of (1). Now, we assume that $\kappa(X, K_X)=1$, $X$ is lc, and $p>3$. In this case, we have $H^0(X, (\Omega_X^{[1]}\otimes\mathcal{O}_X(-p^eD))^{**})=0$ for all $e\in \mathbb{Z}_{>0}$ by Theorem \ref{BSV, Intro}. Then, by the proof of \cite[Lemma 2.5]{Kaw2}, we have an injective morphism $H^1(X, \mathcal{O}_X(-D))\hookrightarrow H^1(X, \mathcal{O}_X(-p^{e}D))$ arising from the $e$-th iterated Frobenius morphism. Let $\pi\colon Y\to X$ be a log resolution. By the proof of Lemma \ref{Lem:KVV}, it suffices to show that $H^1(Y, \mathcal{O}_{Y}(-\lceil p^e\pi^{*}D\rceil))=0$ for $e\gg0$. We take $m, n\in \mathbb{Z}_{>0}$ such that $p^m(p^n-1)\pi^{*}D$ is Cartier.
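Before computing, we record the elementary exponent identity underlying the next display (a routine verification, stated here only for the reader's convenience): since $\sum_{i=0}^{l-1}p^{ni}=(p^{nl}-1)/(p^{n}-1)$, we have \[ p^{m+nl}=p^{m}+\Big(\sum_{i=0}^{l-1}p^{ni}\Big)p^{m}(p^{n}-1) \quad\text{for every } l\in\mathbb{Z}_{>0}, \] so $p^{m+nl}\pi^{*}D$ differs from $p^{m}\pi^{*}D$ by an integral multiple of the Cartier divisor $p^{m}(p^{n}-1)\pi^{*}D$.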
Then we obtain \[ H^1(Y, \mathcal{O}_Y(-\lceil p^{m+nl}\pi^{*}D \rceil))=H^1\Big(Y, \mathcal{O}_Y\Big(-\lceil p^m\pi^{*}D \rceil-\Big(\sum_{i=0}^{l-1}p^{ni}\Big)p^m(p^n-1)\pi^{*}D\Big)\Big) \] and the last cohomology vanishes for $l\gg0$ by \cite[Theorem 2.6]{Tan15}. We will check the sharpness of the explicit bounds on $p_0$ in Example \ref{sharpness of char}. \end{proof} Finally, we apply Proposition \ref{van of tan} to show the vanishing of the local-to-global obstruction (Definition \ref{def:local-to-global}). In particular, we prove Proposition \ref{tangent} (3), which is a positive characteristic analog of \cite[Proposition 3.1]{HP}. We first recall the definition of the local-to-global obstruction, whose vanishing means that deformations of the singular points extend to a global deformation. \begin{defn}\label{def:local-to-global} Let $X$ be a normal projective variety over an algebraically closed field $k$ with only isolated singularities. Let $\Lambda$ be either $k$ or a complete discrete valuation ring with residue field $k$. Let $T$ be a Noetherian scheme of finite type over $\Lambda$ and $o \colon \operatorname{Spec} k \to T$ a morphism. For every singular point $x\in X$, we suppose that there exist an \'etale morphism $\varphi_{x}\colon U_x\to X$ and a closed point $x'\in U_x$ such that \begin{itemize} \item $\varphi_{x}(x')=x$, \item $U_x$ is smooth outside $x'$, and \item there exists a flat morphism $\mathcal{U}_x\to T$ such that the base change of $\mathcal{U}_x$ by $o \colon \operatorname{Spec} k \to T$ is isomorphic to $U_x$.
\end{itemize} We say that $X$ has \textit{no local-to-global obstruction} if there exist \begin{enumerate} \item an \'etale morphism $\iota\colon T'\to T$ from a Noetherian scheme $T'$ with a morphism $o' \colon\operatorname{Spec}\,k\to T'$ such that $\iota\circ o'=o$, and \item a flat projective morphism $\mathcal{X}\to T'$ such that the base change of $\mathcal{X}$ by $o'\colon\operatorname{Spec}\,k\to T'$ is isomorphic to $X$, \end{enumerate} such that the formal completions of $\mathcal{O}_{\mathcal{X},x}$ and $\mathcal{O}_{\mathcal{U}_x,x'}$ are isomorphic for every singular point $x\in X$. \end{defn} \begin{thm}\label{Thm:local-to-global} Let $X$ be a normal projective variety over an algebraically closed field with only isolated singularities. Suppose that $H^2(X, T_X)=H^2(X, \mathcal{O}_X)=0$. Then $X$ has no local-to-global obstruction. \end{thm} \begin{proof} This is \cite[Theorem 4.14 and Remark 4.15]{LN}. \end{proof} \begin{defn} Let $X$ be a normal projective variety. We say that $X$ admits a \textit{$\mathbb{Q}$-Gorenstein smoothing} if there exists a flat projective morphism $\mathcal{X}\to T$ from a normal $\mathbb{Q}$-Gorenstein scheme to a smooth curve $T$ with a closed point $o\in T$ such that the fiber over $o$ is isomorphic to $X$ and $\mathcal{X}\to T$ is smooth over $T\setminus \{o\}$. \end{defn} \begin{defn} Let $R$ be an $F$-finite ring of positive characteristic. We say that $R$ is \textit{$F$-pure} if the Frobenius map $F\colon R\to F_{*}R$ splits as an $R$-module homomorphism. We say that a variety $X$ is \textit{$F$-pure} if $\mathcal{O}_{X,x}$ is $F$-pure for every closed point $x\in X$. \end{defn} \begin{prop}\label{tangent} Let $X$ be an lc projective surface over an algebraically closed field $k$ of characteristic $p>5$ with $\kappa(X, K_X)=-\infty$. Then $X$ has no local-to-global obstruction. In particular, the following hold. \begin{enumerate} \item[\textup{(1)}] If $X$ is $F$-pure, then $X$ lifts to $W_2(k)$.
\item[\textup{(2)}] If $X$ is l.c.i., then $X$ lifts to $W(k)$. \item[\textup{(3)}] If $X$ has only rational double points or toric singularities of class $T$ (see \cite[Definition 3.4]{LN} for the definition), then $X$ admits a $\mathbb{Q}$-Gorenstein smoothing. \end{enumerate} \end{prop} \begin{proof} By taking $B=0$ in Proposition \ref{van of tan} (1), we have $H^2(X, T_X)=0$. Together with $H^2(X, \mathcal{O}_X)=0$, it follows from Theorem \ref{Thm:local-to-global} that $X$ has no local-to-global obstruction. First, we show (1). By \cite[Corollary 8]{Lan15}, the spectrum of an $F$-pure ring lifts to $W_2(k)$. Since $H^2(X, T_X)=0$, it follows that $X$ lifts to $W_2(k)$ by \cite[Theorem 4.13]{LN}. Next, we show (2). By applying \cite[Theorem 9.2]{Har2} repeatedly, we can see that an l.c.i. affine variety lifts to $W(k)$. Then it follows that $X$ lifts to a scheme $T'$ \'etale over $W(k)$ since $X$ has no local-to-global obstruction. Since $T'$ is smooth over $\mathbb{Z}$, we can conclude that $X$ lifts to $W(k)$ by the proof of \cite[Proposition 2.5]{ABL}. Finally, (3) follows from \cite[Theorem 5.3]{LN}. \end{proof} \section{Sharpness of Theorems \ref{BSV, Intro}, \ref{lift, Intro}, and \ref{KVV, Intro}}\label{sec:examples} In this section, we observe the failure of Theorems \ref{BSV, Intro}, \ref{lift, Intro}, and \ref{KVV, Intro} in low characteristic or for a surface pair whose log canonical divisor is big. First, we focus on the characteristic. \begin{eg}\label{sharpness of char} By \cite[Theorem 1.7 (3)]{KN}, \cite[Theorem 1.1]{Ber}, and \cite[Proposition 5.2]{ABL}, we can take a klt del Pezzo surface $X$ in each characteristic $p\in \{2,3,5\}$ with more than four singularities and an ample $\mathbb{Q}$-Cartier $\mathbb{Z}$-divisor $D$ on $X$ such that $H^1(X, \mathcal{O}_X(K_X+D))\neq 0$. Let $\pi\colon Y\to X$ be the minimal resolution with $E\coloneqq \operatorname{Exc}(\pi)$. Then $-K_Y$ is big and $\kappa(Y, K_Y+E)=-\infty$.
Firstly, $(Y, E)$ does not lift to any Noetherian local domain with fraction field of characteristic zero because there are no klt del Pezzo surfaces with more than four singularities in characteristic zero by \cite[Theorem 1.1]{Bel}. In addition, $(Y, E)$ does not lift to $W_2(k)$ by Lemma \ref{Lem:KVV}, and it follows from Theorem \ref{log smooth lift criterion} that $0\neq H^2(Y, T_Y(-\operatorname{log}\,E))\hookrightarrow H^2(X, T_X)$. Therefore, the explicit bound $p_0=5$ in Theorems \ref{BSV, Intro}, \ref{lift, Intro} (1), and \ref{KVV, Intro} (1) is optimal. These examples also show the sharpness of the assumption $p>5$ in Proposition \ref{van of tan} (1). By \cite[Section 3.1]{Ray}, we can take a smooth projective surface $X$ in each characteristic $p\in \{2,3\}$ with $\kappa(X, K_X)=1$ and an ample Cartier divisor $D$ on $X$ such that $H^1(X, \mathcal{O}_X(K_X+D))\neq 0$. Then \cite[Lemma 2.5]{Kaw2} shows that there exists $n>0$ such that $H^0(X, \Omega_X\otimes \mathcal{O}_X(-p^nD))\neq 0$. Therefore, the explicit bound $p_0=3$ in Theorems \ref{BSV, Intro} and \ref{KVV, Intro} (2) and the assumption that $p>3$ in Lemma \ref{LCTF} are optimal. \end{eg} \begin{eg}\label{Example : num triv pair} We show that there exist a klt del Pezzo surface $X$ in each characteristic $p\in\{2,3,5\}$ and a non-zero reduced divisor $B$ on $X$ such that $K_X+B\equiv 0$, but $(Y, f_{*}^{-1}B+\operatorname{Exc}(f))$ does not lift to any Noetherian local domain with fraction field of characteristic zero for some log resolution $f\colon Y\to X$ of $(X, B)$. We first assume $p=5$. We take a del Pezzo surface $X$ with two $A_4$-singularities and a cuspidal rational curve $B$ in the smooth locus of $X$ as in \cite[Example 7.6]{Lac}. Then we have $K_X+B\equiv 0$ by the adjunction formula. We take the three-times iterated blow-up $f\colon Y\to X$ at the cusp of $B$.
Then there exists a contraction $\pi\colon Y\to Z$ to a klt del Pezzo surface $Z$ with five singularities and $\operatorname{Exc}(\pi)\subset f_{*}^{-1}B+\operatorname{Exc}(f)$ (see \cite[Example 7.6]{Lac} for details). Then $(Y, \operatorname{Exc}(\pi))$ does not lift to any Noetherian local domain with fraction field of characteristic zero by \cite[Theorem 1.1]{Bel}, and neither does $(Y, f_{*}^{-1}B+\operatorname{Exc}(f))$. When $p=3$, we can take $X=\mathbb{P}_k^2$ and a curve $B$ as in \cite[Example 7.5]{Lac} to show the assertion. When $p=2$, we can take a del Pezzo surface $X$ as in \cite[Theorem 1.7 (2)]{KN} and take $B$ to be a general anti-canonical member, which is integral. Therefore, the explicit bound on $p_0$ in Theorem \ref{lift, Intro} (2) and the assumption $p>5$ in Proposition \ref{van of tan} (2)-(i) are optimal. \end{eg} The following example was communicated to the author by Fabio Bernasconi, Iacopo Brivio, and Jakub Witaszek. \begin{eg}\label{Example : sharpness for Kodaira dim 0} By \cite[Corollary 1.2]{Shi}, there exists a canonical Calabi-Yau surface $X$ in each characteristic $p\leqslant 19$ such that the minimal resolution $Y$ of $X$ is a supersingular K3 surface and $E\coloneqq \operatorname{Exc}(\pi)$ consists of $21$ $(-2)$-curves, where $\pi\colon Y\to X$ is the minimal resolution. Then $(Y, E)$ does not lift to any Noetherian local domain $R$ with fraction field $K$ of characteristic zero. For the sake of contradiction, we assume that there exists a lifting $(\mathcal{Y}, \mathcal{E})$ of $(Y,E)$ to such an $R$. Let $Y_{\overline{K}}$ and $E_{\overline{K}}$ be the geometric generic fibers of $\mathcal{Y}\to R$ and $\mathcal{E}\to R$, respectively. We show that $Y_{\overline{K}}$ is a K3 surface. Since $H^1(Y, \mathcal{O}_Y)=0$, a lifting of each invertible sheaf is unique by \cite[Corollary 8.5.5]{FAG}. Then $\omega_{\mathcal{Y}}|_Y=\omega_Y=\mathcal{O}_Y=\mathcal{O}_{\mathcal{Y}}|_Y$ shows that $\omega_{\mathcal{Y}}=\mathcal{O}_{\mathcal{Y}}$.
Together with $\chi(Y_{\overline{K}}, \mathcal{O}_{Y_{\overline{K}}})=\chi(Y, \mathcal{O}_Y)=2$, we conclude that $Y_{\overline{K}}$ is a K3 surface. Since $Y_{\overline{K}}$ contains the $21$ $(-2)$-curves of $E_{\overline{K}}$, whose intersection matrix is negative definite, we obtain $\rho(Y_{\overline{K}})\geqslant 22$, which contradicts the fact that the Picard rank of a K3 surface in characteristic zero is at most $20$ (see \cite[Chapter 17, 1.1]{K3book} for example). Finally, by the proof of Proposition \ref{Prop : Lift of canonical CY}, we obtain $0\neq H^2(Y, T_Y(-\operatorname{log}\,E))\hookrightarrow H^2(X, T_X)$. Therefore, $p_0$ in Theorem \ref{lift, Intro} (3) should be at least $19$. Moreover, the assumption that $p>19$ in Propositions \ref{Prop : Lift of canonical CY} and \ref{van of tan} (2)-(ii) is sharp. \end{eg} Finally, we close the paper by discussing the assumptions on the Iitaka dimensions of log canonical divisors in Theorems \ref{BSV, Intro}, \ref{lift, Intro}, and \ref{KVV, Intro}. By Raynaud's counterexample (\cite{Ray}) to the Kodaira vanishing theorem on a smooth projective surface with a big canonical divisor, we can see that Theorems \ref{BSV, Intro}, \ref{lift, Intro}, and \ref{KVV, Intro} do not hold for a surface with a big canonical divisor in any characteristic. In the next example, we will see that Langer's surface pair \cite{Lan15} shows that Theorems \ref{BSV, Intro} and \ref{lift, Intro} do not hold on a surface pair whose log canonical divisor is big even when the surface itself is rational. \begin{eg}\label{Example:Langer's surface} We first recall the construction of Langer's surface pair \cite[Section 8]{Lan15}. Let $h\colon X\to \mathbb{P}_k^2$ be the blow-up of all the $\mathbb{F}_p$-rational points and let $L_1,\ldots, L_{p^2+p+1}$ be the strict transforms of all the $\mathbb{F}_p$-lines. Then $X$ is a smooth rational surface and $L_1,\ldots,L_{p^2+p+1}$ are pairwise disjoint smooth rational curves.
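Both of these facts follow from a direct count, which we record for the reader's convenience: an $\mathbb{F}_p$-line $\overline{L}_i\subset \mathbb{P}_k^2$ contains exactly $p+1$ of the $p^2+p+1$ $\mathbb{F}_p$-rational points, and blowing up a point on a curve drops the self-intersection of its strict transform by one, so that
\[
L_i^2=\overline{L}_i^2-(p+1)=1-(p+1)=-p.
\]
Moreover, two distinct $\mathbb{F}_p$-lines meet in exactly one point, which is $\mathbb{F}_p$-rational and hence blown up by $h$, so their strict transforms are disjoint.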
By \cite[Theorem 3.1]{CT}, there exists a nef and big $\mathbb{Q}$-divisor $D$ such that $H^1(X, \mathcal{O}_X(K_X+\lceil D \rceil))\neq 0$ and $\operatorname{Supp}(\{D\})=\sum_{i=1}^{p^2+p+1}L_i$. Thus $(X, \sum_{i=1}^{p^2+p+1}L_i)$ does not lift to $W_2(k)$ by Theorem \ref{Hara's vanishing}, and $H^2(X, T_X(-\operatorname{log} \sum_{i=1}^{p^2+p+1}L_i))\neq 0$ by Theorem \ref{log smooth lift criterion}. Finally, there exists a big divisor $M$ such that $\mathcal{O}_X(M)$ is contained in $\Omega_X(\operatorname{log}\,\sum_{i=1}^{p^2+p+1}L_i)$ by \cite[Proposition 11.1]{Langer19}. Now, we check that $K_X+\sum_{i=1}^{p^2+p+1}L_i$ is big except when $p=2$. Since $L_i^2=-p$ for each $i$ and $L_1,\ldots,L_{p^2+p+1}$ are pairwise disjoint, we can take the contraction $f\colon X\to Z$ of $\sum_{i=1}^{p^2+p+1}L_i$. By the proof of \cite[Lemma 2.4 (i)]{CT}, we have $K_X+(1-\frac{2}{p})(\sum_{i=1}^{p^2+p+1}L_i)=f^{*}K_Z$ and hence $K_X+\sum_{i=1}^{p^2+p+1}L_i=\lceil f^{*}K_Z \rceil$. If $p\neq 2$, then $K_Z$ is ample by \cite[Lemma 2.4 (iv)]{CT} and hence $K_X+\sum_{i=1}^{p^2+p+1}L_i$ is big. Note that if $p=2$, then $\kappa(X, K_X+\sum_{i=1}^{p^2+p+1}L_i)=-\infty$ since $f_{*}(K_X+\sum_{i=1}^{p^2+p+1}L_i)=K_Z$ is anti-ample. Therefore, Theorems \ref{BSV, Intro}, \ref{lift, Intro} and Proposition \ref{van of tan} do not hold on a surface pair whose log canonical divisor is big even when the surface itself is rational. \end{eg} \begin{rem}\label{Remark:log lift} For a singular surface, it is often more useful to consider the liftability of a log resolution than that of the surface itself (see \cite{ABL}, \cite{CTW}, and \cite{KN} for example). In Example \ref{Example:Langer's surface}, we constructed the pathological example from a log resolution of the pair consisting of $\mathbb{P}_{k}^2$ and all the $\mathbb{F}_p$-lines $\sum_{i=1}^{p^2+p+1}\overline{L}_i$. However, the pair $(\mathbb{P}_{k}^2, \sum_{i=1}^{p^2+p+1}\overline{L}_i)$ clearly lifts to $W(k)$.
For this reason, when we discuss the lifting of a non-log smooth pair, it is more suitable to consider the liftability of a log resolution of the pair in order to capture pathologies in positive characteristic. \end{rem} \begin{rem} Example \ref{Example:Langer's surface} gives a slight generalization of \cite[Corollary 3.3]{CT}. Indeed, we can drop the assumption $p\geqslant 3$ and replace $\sum_{i=1}^{q^2+q+1}E_i+\sum_{i=1}^{q^2+q+1}L'_i$ with $\sum_{i=1}^{q^2+q+1}L'_i$ in \cite[Corollary 3.3]{CT}. On the other hand, this fact also follows from \cite[Proposition 4.1]{Lan} and \cite[Proposition 11.1]{Langer19}. \end{rem} \end{document}
\begin{document} \title{On Mixed Concatenations of Fibonacci and Lucas Numbers Which are Fibonacci Numbers} \author{Alaa ALTASSAN$^{1}$ and Murat ALAN$^{2}$ \\ $^{1}$King Abdulaziz University, Department of Mathematics,\\ P.O. Box 80203, Jeddah 21589, Saudi Arabia\\ e-mail: [email protected]\\ $^{2}$Yildiz Technical University\\ Mathematics Department, 34210, Istanbul, Turkey.\\ e-mail: [email protected] } \maketitle \begin{abstract} Let $(F_n)_{n\geq 0}$ and $(L_n)_{n\geq 0}$ be the Fibonacci and Lucas sequences, respectively. In this paper we determine all Fibonacci numbers which are mixed concatenations of a Fibonacci number and a Lucas number. By mixed concatenations of $ a $ and $ b $, we mean both concatenations $\overline{ab}$ and $\overline{ba}$ together, where $ a $ and $ b $ are any two non-negative integers. So, the mathematical formulation of this problem leads us to search for the solutions of the two Diophantine equations $ F_n=10^d F_m +L_k $ and $ F_n=10^d L_m+F_k $ in non-negative integers $ (n,m,k), $ where $ d $ denotes the number of digits of $ L_k $ and $ F_k $, respectively. We use lower bounds for linear forms in logarithms and a reduction method in Diophantine approximation to get the results. \end{abstract} \section{Introduction} Let $(F_n)_{n\geq 0}$ and $(L_n)_{n\geq 0}$ be the Fibonacci and Lucas sequences given by $F_0=0$, $F_1=1$, $L_0=2$, $L_1=1$, $F_{n+2}=F_{n+1}+F_{n}$ and $L_{n+2}=L_{n+1}+L_{n}$ for $n\geq 0,$ respectively. In recent years, the numbers in the Fibonacci, Lucas and some similar sequences which are concatenations of two or more repdigits have been investigated in a series of papers \cite{Alahmadi2,Dam1,Dam2, EK2,Qu,Trojovsky}. In the case of concatenations of binary recurrent sequences, there is a general result due to Banks and Luca \cite{Banks}.
In \cite{Banks}, they proved that if $ (u_n) $ is any binary recurrent sequence of integers, then, under some mild hypotheses on $ (u_n), $ only finitely many terms of the sequence can be written as concatenations of two or more terms of the same sequence. In particular, they proved that 13, 21 and 55 are the only Fibonacci numbers which are nontrivial concatenations of two Fibonacci numbers. In \cite{Alan}, Fibonacci and Lucas numbers which can be written as concatenations of two terms of the other sequence are also investigated, and it is shown that 13, 21 and 34 (respectively 1, 2, 3, 11, 18 and 521) are the only Fibonacci (respectively Lucas) numbers which are concatenations of two Lucas (respectively Fibonacci) numbers. In this paper, we study the mixed concatenations of these two famous sequences which form a Fibonacci number. By mixed concatenations of $ a $ and $ b $, for any two non-negative integers $ a $ and $ b, $ we mean both concatenations $\overline{ab} $ and $ \overline{ba} $ together. So, we search for all Fibonacci numbers of the form $\overline{F_mL_k}$, that is, the concatenation of $ F_m $ and $ L_k $, as well as those of the form $\overline{L_mF_k},$ that is, the concatenation of $L_m $ and $ F_k.$ In other words, expressing this problem mathematically, we solve the Diophantine equations \begin{equation} F_n=10^d F_m +L_k \label{FcFL} \end{equation} and \begin{equation} F_n=10^d L_m+F_k \label{FcLF} \end{equation} in non-negative integers $ (n,m,k), $ where $ d $ denotes the number of digits of $ L_k $ and $ F_k $, respectively, and we get the following results. \begin{theorem} \label{main1} The only Fibonacci numbers which are concatenations of a Fibonacci number and a Lucas number are 1, 2, 3, 13, 21 and 34. \end{theorem} \begin{theorem} \label{main2} The only Fibonacci numbers which are concatenations of a Lucas number and a Fibonacci number are 13 and 21.
\end{theorem} In the next section, we give some details of the methods used in this study to prove the above theorems. In fact, we mainly use two powerful tools. The first one is the theory of nonzero linear forms in logarithms of algebraic numbers, due to Matveev \cite{Matveev}, and the second one is a reduction method based on the theory of continued fractions given in \cite{DP}, which is a version of the Baker-Davenport lemma \cite{Baker-Davenport}. In the third section, we give the proofs of the above theorems. All calculations and computations are made with the help of the software \textsf{Maple}. \section{Preliminaries} Let $F_n$ and $L_n$ be the Fibonacci and Lucas numbers, respectively. The Binet formulas for the Fibonacci and Lucas numbers are given by $$ F_n = \dfrac{\alpha^n-\beta^n}{\sqrt{5}}, \qquad L_n=\alpha^n+\beta^n \quad \text{for } n\geq 0, $$ where \[ \alpha=\frac{1+\sqrt{5}}{2} \quad \text{and} \quad \beta=\frac{1-\sqrt{5}}{2} \] are the roots of the equation $x^2-x-1=0.$ By using the Binet formulas of these sequences, one can see, by induction, that \begin{equation} \label{F1} \alpha^{n-2} \leq F_n \leq \alpha^{n-1} \end{equation} and \begin{equation} \label{L1} \alpha^{n-1}\leq L_n \leq 2\alpha^{n} \end{equation} hold for all $n\geq 1$ and $n\geq 0,$ respectively. Let $\eta$ be an algebraic number of degree $d$ with minimal polynomial \[ a_0x^d+a_1x^{d-1}+\cdots+a_d=a_0\prod_{i=1}^{d}(x-\eta^{(i)}), \] where the $a_i$'s are relatively prime integers with $a_0>0$ and the $\eta^{(i)}$'s are the conjugates of $\eta$. Recall that the logarithmic height of $\eta$ is defined by \[ h(\eta)=\frac{1}{d}\left(\log a_0+\sum_{i=1}^{d}\log\left(\max\{|\eta^{(i)}|,1\}\right)\right). \] In particular, for a rational number $p/q$ with $\gcd(p,q)=1$ and $q>0$, $h(p/q)=\log \max \{|p|,q\}$. The logarithmic height $ h(\eta) $ has the following properties: \begin{itemize} \item[$ \bullet $] $h(\eta\pm\gamma)\leq h(\eta) + h(\gamma)+\log 2$.
\item[$ \bullet $] $h(\eta\gamma^{\pm 1})\leq h(\eta)+h(\gamma)$. \item[$ \bullet $] $h(\eta^{s})=|s|h(\eta),$ $ s \in \mathbb{Z} $. \end{itemize} \begin{theorem}[Matveev's Theorem] \label{Matveev} Assume that $\eta_1, \ldots, \eta_t$ are positive real algebraic numbers in a real algebraic number field $\mathbb{K}$ of degree $ d_\mathbb{K} $, $b_1,\ldots,b_t$ are rational integers, and \[ \Lambda:=\eta_1^{b_1} \cdots \eta_t^{b_t}-1 \] is not zero. Then \[ |\Lambda|>\exp\left(-1.4\cdot 30^{t+3}\cdot t^{4.5}\cdot d_\mathbb{K}^2(1+\log d_\mathbb{K})(1+\log B)A_1\cdots A_t\right), \] where $B\geq \max\{|b_1|,\ldots,|b_t|\},$ and $A_i\geq \max\{d_\mathbb{K}h(\eta_i),|\log \eta_i|, 0.16\},$ for all $i=1,\ldots,t.$ \end{theorem} We cite the following lemma from \cite{DP}, which is a version of the reduction method based on the Baker-Davenport lemma \cite{Baker-Davenport}, and we use it to reduce some upper bounds on the variables. Recall that, for a real number $ \theta, $ we put $ ||\theta||=\min\{ |\theta -n | : n \in\mathbb{Z} \}, $ the distance from $ \theta $ to the nearest integer. \begin{lemma} \label{reduction} Let $M$ be a positive integer, $p/q$ be a convergent of the continued fraction of the irrational $\gamma$ such that $q>6M$, and let $A,B,\mu$ be some real numbers with $A>0$ and $B>1$. If $\epsilon:=||\mu q||-M||\gamma q|| >0$, then there is no solution to the inequality \[ 0< | u\gamma-v+\mu | <AB^{-w}, \] in positive integers $u,v$ and $w$ with \[ u\leq M \quad\text{and}\quad w\geq \frac{\log(Aq/\epsilon)}{\log B}. \] \end{lemma} \section{Proofs of Theorems \ref{main1} and \ref{main2}} \textbf{Proof of Theorem \ref{main1}:} Assume that the equation \eqref{FcFL} holds. We will need the relations among the variables $ n, m, k $ and $ d $ throughout this section.
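The finite searches invoked below (over the range $0\leq m,k<200$) are easy to reproduce; the following Python sketch is one possible implementation (an illustration only — the original computations were done in \textsf{Maple}), and it also checks the Binet bounds \eqref{F1} and \eqref{L1} numerically for small $n$.

```python
# Search for all Fibonacci numbers of the form 10^d * F_m + L_k (Theorem 1)
# and 10^d * L_m + F_k (Theorem 2) with 0 <= m, k < 200, where d is the
# number of decimal digits of the trailing block.
N = 200
F, L = [0, 1], [2, 1]
while len(F) < N:
    F.append(F[-1] + F[-2])
    L.append(L[-1] + L[-2])

# Numerical sanity check of the Binet bounds (3) and (4) for small indices.
alpha = (1 + 5 ** 0.5) / 2
assert all(alpha ** (n - 2) <= F[n] <= alpha ** (n - 1) for n in range(1, 41))
assert all(alpha ** (n - 1) <= L[n] <= 2 * alpha ** n for n in range(41))

# Fibonacci numbers up to the largest possible candidate value.
limit = 10 ** 43 * max(F[-1], L[-1])
fib = list(F)
while fib[-1] <= limit:
    fib.append(fib[-1] + fib[-2])
fib_set = set(fib)

def concat(a, b):
    """Decimal concatenation of a and b, i.e. 10^(digits of b) * a + b."""
    return 10 ** len(str(b)) * a + b

sols_FL = sorted({concat(F[m], L[k]) for m in range(N) for k in range(N)} & fib_set)
sols_LF = sorted({concat(L[m], F[k]) for m in range(N) for k in range(N)} & fib_set)
print(sols_FL)  # the values in Theorem 1: [1, 2, 3, 13, 21, 34]
print(sols_LF)  # the values in Theorem 2: [13, 21]
```

The membership test against `fib_set` replaces solving for $n$ explicitly: a candidate concatenation is a solution exactly when it occurs in the Fibonacci sequence.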
Note that we may write the number of digits of $ L_k $ as $ d=\lfloor \log_{10}L_k \rfloor +1, $ where $ \lfloor \theta\rfloor $ denotes the floor function of $ \theta, $ that is, the greatest integer less than or equal to $ \theta .$ Thus, \begin{align*} d=\lfloor \log_{10}L_k \rfloor +1 \leq 1+\log_{10}L_k & \leq 1+ \log_{10}{ (2\alpha^{k} )}= 1+ {k}\log_{10}{ \alpha } +\log_{10}{ 2 } \end{align*} and $$ d=\lfloor \log_{10}L_k \rfloor +1 > \log_{10}L_k \geq \log_{10}{ \alpha^{k-1} } \geq {(k-1)} \log_{10}{ \alpha } .$$ From the above relations we may get more explicit bounds for $ d $ as \begin{equation} \label{d} \frac{k-1}{5} < d < \frac{k+6}{4}, \end{equation} by using the facts that $ (1/5)< \log_{10}{ \alpha } \approx 0.208\ldots <(1/4) $ and $ \log_{10}{ 2 } < 0.31 .$ In particular, $$ L_k = 10^{ \log_{10}L_k } <10^d < 10^{1+ \log_{10}L_k } <10 L_k .$$ From the last inequality together with \eqref{FcFL} we write $$\alpha^{n-2} \leq F_n=10^d F_m +L_k \leq 10 L_k F_m +L_k <11 F_mL_k < 22 \alpha^{m+k-1} \leq \alpha^{m+k+6} $$ and $$ \alpha^{n-1} \geq F_n=F_m 10^d +L_k > F_m L_k+L_k > F_m L_k \geq \alpha^{m+k-3}.$$ Hence, we have that \begin{equation} \label{n} m+k-2 < n < m+k+8. \end{equation} Before further calculations, we wrote a short computer program to search for the variables $ n,m $ and $ k $ satisfying \eqref{FcFL} in the range $ 0\leq m,k <200 $ and we found only the Fibonacci numbers given in Theorem \ref{main1}. So from now on we may assume that $ \max \{m,k\} \geq 200. $ Note that we may assume $ n-k \geq 4 .$ Indeed, using the well-known fact $ L_k=F_{k+1}+F_{k-1} $, see for example \cite{Koshy}, we may write equation \eqref{FcFL} as $$ F_n= 10^d F_m+F_{k+1}+F_{k-1} .$$ Then, clearly $ n \neq k $ and $ n \neq k+1 .$ If $ m=0 ,$ then the case $ F_n=L_k $ is possible only for $ L_k \in \{ 1,2,3 \}, $ that is, $ \max \{m,k\} < 3, $ a contradiction.
So $ m \neq 0 $ and hence from the inequality $$ F_n=10^d F_m +L_k \geq 10^d +L_k > 2L_k = 2 (F_{k+1}+F_{k-1}) ,$$ we see that the cases $ n = k+2 $ and $ n = k+3 $ are also not possible. So we get that \begin{equation} \label{nk4} n-k \geq 4. \end{equation} Using the Binet formulas for the Fibonacci and Lucas sequences, we rewrite equation \eqref{FcFL} as $$ \dfrac{\alpha^{n} -\beta^{n}}{\sqrt{5}} = \dfrac{\alpha^{m}}{\sqrt{5}} 10^d - \dfrac{\beta^{m}}{\sqrt{5}} 10^d +L_k,$$ $$ \dfrac{\alpha^{n}}{\sqrt{5}} - \dfrac{ 10^d \alpha^{m}}{\sqrt{5}} = \dfrac{\beta^{n}}{\sqrt{5}} - \dfrac{ 10^d \beta^{m}}{\sqrt{5}} + L_k. $$ Multiplying both sides of the above equation by $ {\sqrt{5}}/{\alpha^{n}} $ and taking absolute values, we get that $$ \left| 1- \dfrac{10^{d} }{ \alpha^{n-m}} \right| \leq \dfrac{ |\beta^{n}| }{\alpha^n} + \dfrac{ 10^d |\beta^{m}|}{ \alpha^n } + \dfrac{L_k \sqrt{5} }{\alpha^n} $$ $$ \qquad \qquad \qquad \leq \dfrac{ 1}{\alpha^{2n}} + \dfrac{ 10 L_k }{ \alpha^{n+m} } + \dfrac{2 \alpha^k \sqrt{5} }{\alpha^n} $$ $$ \qquad \qquad \qquad < \dfrac{ 1}{\alpha^{2n}} + \dfrac{ 20 \alpha^k }{ \alpha^{n+m} } + \dfrac{2 \sqrt{5} }{\alpha^{n-k}}. $$ Since $ 20 < \alpha^{7} ,$ we find that \begin{equation} \label{1ineq} \Lambda_1 := \left| 1- \dfrac{10^{d} }{ \alpha^{n-m}} \right| < \dfrac{ 3}{ \alpha^{n-k-7}}. \end{equation} Let $(\eta_1, b_1)=(10, d)$ and $(\eta_2, b_2)=(\alpha, -(n-m)),$ where $ \eta_1, \eta_2 \in \mathbb{K}=\mathbb{Q}(\sqrt{5}).$ We take $ d_\mathbb{K}=2 $, the degree of the real number field $ \mathbb{K}.$ Since $ h(\eta_1)=\log{10} $ and $h(\eta_2)=\dfrac{1}{2} \log {\alpha} ,$ we take $ A_1 = 2\log{10} $ and $A_2= \log {\alpha}.$ Suppose that $ n-m < d.$ Then from the two relations \eqref{d} and \eqref{n}, we get that $$ k-2<n-m<d<(k+6)/4,$$ which implies $ k \leq 4 .$ Since $ L_4=7 ,$ we have $ d=1 .$ Hence $ n=m,$ that is, $ F_n=10 F_n+L_k ,$ which is clearly false.
So we take $$ B: = \max\{ |b_i | \} = \max\{ d, n-m\} = n-m. $$ In \eqref{1ineq}, $ \Lambda_1 \neq 0 .$ Indeed, if $ \Lambda_1=0 ,$ then we get that $ \alpha^{n-m}= 10^d \in \mathbb{Q} $, which is possible only for $ n=m ,$ and we know that this is not the case. So $ \Lambda_1 \neq 0.$ Now we apply Theorem \ref{Matveev} to $ \Lambda_1 $ and we get that $$ \log{( \Lambda_1 ) } > -1.4 \cdot 30^5 \cdot 2^{4.5} \cdot 2^2 (1+\log 2)(1+\log {(n-m)}) \cdot 2 \log{10}\cdot \log \alpha . $$ On the other hand, taking the logarithm of both sides of \eqref{1ineq}, we also get $$ \log{( \Lambda_1 ) } < \log 3 - (n-k-7) \log \alpha. $$ Combining the last two inequalities, we get that \begin{equation} \label{n-k} n-k-7 < 2.41 \cdot 10^{10}\cdot (1+\log {(n-m)}). \end{equation} We will return to \eqref{n-k} later. Now we rewrite \eqref{FcFL} as $$ \dfrac{\alpha^{n}}{\sqrt{5}} - \alpha^k - 10^d F_m = \dfrac{\beta^{n}}{\sqrt{5}}+\beta^k ,$$ $$ \dfrac{ \alpha^n}{\sqrt{5}} \left( 1- \alpha^{k-n} \sqrt{5} \right) - 10^d F_m = \dfrac{\beta^{n}}{\sqrt{5}}+\beta^k. $$ Note that, by \eqref{nk4}, $ 1- \alpha^{k-n} \sqrt{5} >0 $ and $ 1< \dfrac{1}{1 - \alpha^{k-n} \sqrt{5} } <2 $ for $ n-k \geq 4 .$ So, we may divide, and then take the absolute value of both sides of the last equality to get that $$ \left| 1- \dfrac{ F_m 10^d \sqrt{5} }{ { \alpha^n} \left( 1- \alpha^{k-n} \sqrt{5} \right) } \right| < \left( \dfrac{1}{1 - \alpha^{k-n} \sqrt{5} } \right) \left( \dfrac{ |\beta^n | }{\alpha^n} + \dfrac{ |\beta^k | \sqrt{5} }{\alpha^n } \right) $$ $$ \qquad \qquad < 2 \left( \dfrac{ 1 }{\alpha^{2n}} + \dfrac{ \sqrt{5} }{\alpha^{n+k} } \right). $$ So \begin{equation} \label{2ineq} \left| \Lambda_2 \right | := \left| 1- \dfrac{ 10^d F_m \sqrt{5} }{ { \alpha^n} \left( 1- \alpha^{k-n} \sqrt{5} \right) } \right| < \dfrac{7}{\alpha^{2k}}.
\end{equation} Let $(\eta_1, b_1)=(10, d)$, $(\eta_2, b_2)=(\alpha, -n)$ and $(\eta_3, b_3)= \left( \dfrac{ F_m \sqrt{5} }{ \left( 1- \alpha^{k-n} \sqrt{5} \right) } , 1 \right) $. Again, $ \eta_1, \eta_2$ and $ \eta_3 $ all belong to the real quadratic number field $ \mathbb{K}=\mathbb{Q}(\sqrt{5}) .$ So we take $ d_\mathbb{K}=2 $, the degree of $ \mathbb{K}, $ and $h(\eta_1)= \log{10} $, $h(\eta_2)=(1/2) \log(\alpha) $, while for $ h(\eta_3) $ we need the properties of the logarithmic height given in the Preliminaries, so that \begin{align*} h(\eta_3)= h \left( \dfrac{ F_m \sqrt{5} }{ 1- \alpha^{k-n} \sqrt{5} } \right) & \leq h( F_m ) +h( \sqrt{5}) +h( 1- \alpha^{k-n} \sqrt{5} ) \\ & \leq \log (F_m)+ h( \sqrt{5}) + h(\alpha^{k-n} \sqrt{5} ) +\log 2 \\ & \leq (m-1) \log(\alpha) + \dfrac{ |k-n| }{2} \log \alpha + 1. \end{align*} Since $ m-1<n-k+1 $ from \eqref{n}, we write $ h(\eta_3)< 1+\dfrac{3(n-k)+2}{2} \log(\alpha).$ So we take $$ A_1 = 2\log {10} ,\quad A_2= \log \alpha \quad \text{and} \quad A_3= 2+ {(3(n-k)+2)} \log(\alpha).$$ By \eqref{d}, if $ n<d<(k+6)/4 ,$ then we find that $ 4n-6<k $ and hence $ m+3n<8 $ from \eqref{n}, a contradiction since $ \max\{m, k \} \geq 200. $ So $$ B:=\max\{ |b_i | \} = \max\{ d, n, 1 \}= n.$$ We show that $ \Lambda_2 \neq 0. $ For this purpose, assume that $ \Lambda_2 = 0. $ Then we get that $$ \alpha^n- \alpha^{k} \sqrt{5} = 10^d F_m \sqrt{5}. $$ Conjugating this expression in $ \mathbb{K} ,$ we obtain $$ \beta^n+ \beta^{k} \sqrt{5} = - 10^d F_m \sqrt{5}. $$ From these last two equalities, we find that $$ 10 \sqrt{5} \leq 10^d F_m \sqrt{5} = | \beta^n+ \beta^{k} \sqrt{5} | \leq 2 \max \{ |\beta|^n, |\beta|^{k} \sqrt{5} \} \leq 2 \sqrt{5}, $$ a contradiction. So we conclude that $ \Lambda_2 \neq 0.
$ Thus, we apply Theorem \ref{Matveev} to $ \Lambda_2 $ given in \eqref{2ineq} and we get that \begin{equation} \label{23} \log{( \Lambda_2 ) } > -1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 2^2 (1+\log 2)(1+\log n)\cdot 2\log{10} \cdot \log \alpha \cdot [ 2+ {(3(n-k)+2)} \log(\alpha)]. \end{equation} On the other hand, from \eqref{2ineq}, we know that \begin{equation} \label{24} \log{( \Lambda_2 ) } < \log{7} -2k \log \alpha. \end{equation} Combining \eqref{23} and \eqref{24}, we get that \begin{equation} \label{2k} k < 2.24 \cdot 10^{12} \cdot (1+\log n) [2+ (3(n-k)+2) \log(\alpha) ] . \end{equation} Now, we focus on the two inequalities \eqref{2k} and \eqref{n-k} to get a bound for $ n $ by examining the cases $ k \leq m $ and $ m <k $ separately. First, assume that $ k \leq m. $ Then $ n-m \leq n-k $ and therefore, from \eqref{n-k}, we find $$ n-k-7 < 2.41 \cdot 10^{10}\cdot (1+\log {(n-k)}), $$ which means that $ n-k < 8 \cdot 10^{11}.$ So it follows that \begin{equation*} n< 1.7 \cdot 10^{12}, \end{equation*} since from the left side of \eqref{n} we know that $ m-2<n-k $ and from the right side of \eqref{n}, $ n<2m+8.$ Now let $ m \leq k .$ Then $ n<2k+8. $ From \eqref{n-k}, in particular, we have that \begin{equation} \label{n-k2} n-k-7 < 2.41 \cdot 10^{10}\cdot (1+\log {n}). \end{equation} Substituting \eqref{n-k2} into \eqref{2k} and using the fact that $ (n-8)/2<k ,$ we find that \begin{equation} \label{n2} n< 7 \cdot 10^{26}. \end{equation} So, whether $ m \leq k $ or not, we have that $ n< 7 \cdot 10^{26}. $ Now, we reduce this upper bound on $ n. $ \begin{lemma} \label{mbound} If the equation \eqref{FcFL} holds, then $ m \leq 150.
$ In particular, if $ k \leq m ,$ then the equation \eqref{FcFL} has a solution only for $ F_n \in \{ 1, 2, 3, 13, 21, 34 \} .$ \end{lemma} \begin{proof} Suppose that $ m>150.$ Let $\Gamma_1 :=d \log {10} -(n-m) \log \alpha.$ Since $ 141< m-9 < n-k-7,$ from \eqref{1ineq} we have $$\left| \Lambda_1 \right|:=\left| \exp ({\Gamma_1}) -1 \right| <\dfrac{3}{\alpha^{n-k-7}} <\dfrac{1}{2}.$$ Recall that, when $ v <\dfrac{1}{2} ,$ the inequality $$ \left| \exp (u) -1 \right| < v \quad \text{implies that} \quad \left| u \right| < 2 v.$$ So, it follows that $ \left| \Gamma_1 \right| < \dfrac{6}{\alpha^{n-k-7}}.$ Moreover, $\Gamma_1 \neq 0,$ since $ \Lambda_1 \neq 0 .$ Thus, we write \begin{equation} \label{r1} 0< \left| \dfrac{ d }{n-m } - \dfrac{ \log \alpha }{\log 10 } \right | < \dfrac{6}{ (n-m) \alpha^{n-k-7} \log 10}. \end{equation} Note that we have $$ \dfrac{6}{ (n-m) \alpha^{n-k-7} \log 10} < \dfrac{1}{2(n-m)^2}, $$ for otherwise $$ \dfrac{6}{ (n-m) \alpha^{n-k-7} \log 10} \geq \dfrac{1}{2(n-m)^2} $$ implies that $$ n>n-m> \dfrac{ \alpha^{n-k-7} \log 10 }{ 12 } > \dfrac{ \alpha^{141} \log 10 }{ 12 } > 5.6 \cdot 10^{28}, $$ which is a contradiction because of \eqref{n2}. Then we have that $$ 0< \left| \dfrac{ \log \alpha }{\log 10 } - \dfrac{ d }{n-m } \right | < \dfrac{1}{2(n-m)^2}, $$ which implies that $ \dfrac{d}{n-m} $ is a convergent of the continued fraction of $ \dfrac{ \log \alpha }{\log 10 } $, say $ \dfrac{ d }{n-m } = \dfrac{p_i}{q_i} .$ Since $\gcd(p_i, q_i)=1$, we have $ q_i \leq n-m \leq n <7 \cdot 10^{26} .$ Let $[a_1,a_2,a_3,a_4,a_5,\ldots]=[0, 4, 1, 3, 1, \ldots ]$ be the continued fraction expansion of $ \log \alpha / \log 10 $. With the help of Maple, we find that $ i < 54 $ and that $ \max\{a_i : i=1,2, \ldots , 54\} =a_{37}=106.
$ So, from the well-known property of continued fractions, see for example \cite[Theorem 1.1.(iv)]{Hen}, we get that $$ \dfrac{1}{108 (n-m)^2} \leq \dfrac{1}{(a_i +2) (n-m)^2} < \left| \dfrac{ \log \alpha }{\log 10 } - \dfrac{ d }{n-m } \right | < \dfrac{6}{ (n-m) \alpha^{n-k-7} \log 10}, $$ which means $$ n> n-m > \dfrac{ \alpha^{n-k-7} \log 10 }{6 \cdot 108} > 10^{27}. $$ But this is also a contradiction, since $ n < 7 \cdot 10^{26} .$ Therefore, we conclude that $ m \leq 150. $ \end{proof} By Lemma \ref{mbound}, from now on, we assume that $ m \leq k $ and hence $ n<2k+8 .$ Moreover, we also write $ n-k<m+8 \leq 158.$ By substituting this upper bound for $ n-k $ into \eqref{2k}, we get a better bound for $ n $ as \begin{equation} \label{2k2} n-8 < 2k < 2 \cdot 2.24 \cdot 10^{12} \cdot (1+\log n) ( 476 \log(\alpha) + 2 ) . \end{equation} So from \eqref{2k2}, it follows that \begin{equation*} n< 4.5 \cdot 10^{16} . \end{equation*} Let \begin{equation} \label{gama2} \Gamma_2 :=d \log {10} -n \log \alpha + \log \left( { \dfrac{ F_m \sqrt{5}}{ 1- \alpha^{k-n} \sqrt{5} } } \right) \end{equation} so that $ \left | \Lambda_2 \right |:=\left | \exp{(\Gamma_2)}-1 \right | <\dfrac{7}{\alpha^{2k}}.$ Then $ \left | \Gamma_2 \right | < \dfrac{14} {\alpha^{2k}},$ since $ \dfrac{7}{\alpha^{2k}} < \dfrac{1}{2}.$ So, from \eqref{gama2}, \begin{equation*} 0 < \left | d \dfrac{\log {10}}{\log \alpha} -n + \dfrac{ \log \left( { \dfrac{ F_m \sqrt{5}}{ 1- \alpha^{k-n} \sqrt{5} } } \right) }{\log \alpha} \right | < \dfrac{14}{\alpha^{2k} \log \alpha }. \end{equation*} Now, we take $$ M:=4.5 \cdot 10^{16} > n > d \qquad \text{and} \qquad \tau:=\dfrac{ \log{ 10}}{\log \alpha}. $$ Then, in the continued fraction expansion of the irrational $ \tau$, we take $ q_{60} ,$ the denominator of the $60$th convergent of $ \tau,$ which exceeds $ 6M.
$ Now, with the help of Maple, we calculate $$ \epsilon_{m, n-k} := ||\mu_{m, n-k} q_{60} || -M || \tau q_{60} || $$ for each $ 1 \leq m \leq 150 $ and $ 4 \leq n-k <m+8, $ where $$ \mu_{m, n-k} := \dfrac{ \log \left( { \dfrac{ F_m \sqrt{5} }{ 1- \alpha^{k-n} \sqrt{5} } } \right) }{\log \alpha}, \qquad q_{60}=2568762252997982327345614176552, $$ and we see that $$ 0.00034 < \epsilon_{50, 20} \leq \epsilon_{m, n-k}, \qquad \text{for all} \quad m, n-k . $$ Let $ A:= \dfrac{14}{\log \alpha} ,$ $ B:=\alpha $ and $ \omega :=2k .$ Thus, from Lemma \ref{reduction}, we find that $$ 2k < \dfrac{ \log{ \left( Aq_{60} /{0.00034} \right) }}{\log B} < 170 .$$ So we get that $ k < 85 ,$ which is a contradiction, since $ k \geq 200 $ in this case. Thus, we conclude that the numbers 1, 2, 3, 13, 21 and 34 are the only Fibonacci numbers which are expressible as a concatenation of a Fibonacci number and a Lucas number. \textbf{Proof of Theorem \ref{main2}:} Assume that the equation \eqref{FcLF} holds. As $ d=\lfloor \log_{10}F_k \rfloor +1 $ is the number of digits of $ F_k ,$ we have that $$ d=\lfloor \log_{10}F_k \rfloor +1 \leq 1+\log_{10}F_k \leq 1+ \log_{10}{ \alpha^{k-1} }= 1+ {(k-1)} \log_{10}{ \alpha }< \frac{k+3}{4} $$ and $$ d=\lfloor \log_{10}F_k \rfloor +1 > \log_{10}F_k \geq \log_{10}{ \alpha^{k-2} } = {(k-2)} \log_{10}{ \alpha }> \frac{k-2}{5} $$ since $ (1/5)< \log_{10}{ \alpha } \approx 0.208\ldots <(1/4) .$ Hence, we have that \begin{equation} \label{bd2} \frac{k-2}{5} < d < \frac{k+3}{4}, \end{equation} and $$ F_k < 10^d = 10^{\lfloor \log_{10}F_k \rfloor +1} \leq 10 F_k .$$ From the last inequality together with \eqref{FcLF}, we can find a range of $ n $ depending on $ m $ and $ k. $ More precisely, $$\alpha^{n-1} \leq F_n=L_m 10^d +F_k \leq L_m 10 F_k+F_k <11 L_m F_k < 22 \alpha^{m+k-1} < \alpha^{m+k+7} $$ and $$ \alpha^{n-2} \geq F_n=L_m 10^d +F_k \geq L_m F_k+F_k > L_m F_k > \alpha^{m+k-3}.$$ Hence \begin{equation} \label{bn2} m+k-1 < n < m+k+7.
\end{equation} When $ k,m \leq 200 $, by a quick computation, we see that the only solutions of \eqref{FcLF} are those given in Theorem \ref{main2}. So from now on, we assume that $ \max\{m,k \} >200. $ Using the Binet formulas for the Fibonacci and Lucas sequences, we rewrite equation \eqref{FcLF} as $$ \dfrac{\alpha^{n}}{\sqrt{5}} - \dfrac{\beta^{n}}{\sqrt{5}} = \left( {\alpha^{m} + \beta^{m}} \right) 10^d +F_k,$$ $$ \dfrac{\alpha^{n}}{\sqrt{5}} - \alpha^{m} 10^d = \dfrac{\beta^{n}}{\sqrt{5}} + \beta^{m} 10^d +F_k.$$ Now, we multiply both sides of the above equation by $ {\sqrt{5}}/{\alpha^{n}} $ and take absolute values. Thus, we get \begin{align*} \left| 1- \dfrac{10^{d} }{ \alpha^{n-m} / \sqrt{5} } \right| & \leq \dfrac{\left| \beta \right |^{n}}{\alpha^n} + \dfrac{\left| \beta \right |^{m}10^d \sqrt{5} }{\alpha^n} + \dfrac{F_k \sqrt{5} }{\alpha^n} \\ & < \dfrac{ 1}{ \alpha^{2n}} + \dfrac{ 10 F_k \sqrt{5} }{ \alpha^{n+m}}+ \dfrac{\alpha^{k-1} \alpha^{2}}{\alpha^n} \\ & < \dfrac{ 1}{ \alpha^{2n}} + \dfrac{ \alpha^{5} \alpha^{k-1} \alpha^{2} }{ \alpha^{n+m}}+ \dfrac{1}{\alpha^{n-k-1}} \\ & < \dfrac{ 3}{ \alpha^{n-k-6}}. \end{align*} So, we have that \begin{equation} \label{3ineq} \Lambda_3 := \left| 1- \dfrac{10^{d} \sqrt{5} }{ \alpha^{n-m} } \right| < \dfrac{ 3}{ \alpha^{n-k-6}}. \end{equation} Now, we may apply Theorem \ref{Matveev} to the left side of the inequality \eqref{3ineq} with $(\eta_1, b_1)=(10, d)$, $(\eta_2, b_2)=(\alpha, -(n-m))$ and $(\eta_3, b_3)=(\sqrt{5} , 1)$. Since $ \eta_1,$ $\eta_2$ and $ \eta_3 $ belong to the real quadratic number field $ \mathbb{K}=\mathbb{Q}(\sqrt{5}) $, we take $ d_\mathbb{K}=2 $, the degree of the number field $ \mathbb{K}.$ Since $ h(\eta_1)=\log{10},$ $h(\eta_2)=\dfrac{1}{2} \log {\alpha} $ and $h(\eta_3)=\log \sqrt{5},$ we take $ A_1 = 2\log{10},$ $ A_2= \log {\alpha}$ and $A_3= \log 5.$ Assume that $ \Lambda_3=0.
$ Then, we get that $ \alpha^{n-m}= 10^d \sqrt{5},$ that is, $ \alpha^{2(n-m)} = 5 \cdot 10^{2d} \in \mathbb{Q} ,$ but this is false for $ n-m \neq 0 ,$ and one can see from \eqref{FcLF} that the case $ n-m=0 $ is not possible. So $ \Lambda_3 \neq 0.$ Now, we claim that $ \max\{d, n-m \}=n-m .$ Indeed, if $ n-m < d$ then from the inequalities \eqref{bd2} and \eqref{bn2}, we get that $ k-1<n-m<d<(k+3)/4.$ Thus $ k<3 ,$ which means that $ d=1 $ and hence $ n-m<1, $ that is, $ n=m. $ But from the identity $L_i=F_{i-1}+F_{i+1}, $ we see that the case $ n=m $ is not possible. So $$ B: = \max\{ |b_i | \} = \max\{ d, n-m, 1 \} = n-m. $$ Now, we are ready to apply Theorem \ref{Matveev} to $ \Lambda_3 $ given in \eqref{3ineq}, and we get that $$ \log{( \Lambda_3 ) } > -1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 2^2 (1+\log 2)(1+\log {(n-m)}) \cdot 2 \log{10} \cdot \log \alpha \cdot \log 5. $$ Combining this inequality with the one directly obtained from \eqref{3ineq}, namely $ \log( \Lambda_3 )< \log 3 -(n-k-6)\log \alpha ,$ we get that \begin{equation} \label{bn-k2} n-k-6 < 7.19 \cdot 10^{12} (1+\log {(n-m)}). \end{equation} Now, we rewrite \eqref{FcLF} as $$\dfrac{\alpha^{n}}{\sqrt{5}} - L_m 10^d - \dfrac{\alpha^{k}}{\sqrt{5}} = \dfrac{\beta^{n}}{\sqrt{5}} + \dfrac{\beta^{k}}{\sqrt{5}} ,$$ $$\dfrac{\alpha^{n}}{\sqrt{5}} \left( 1- \alpha^{k-n} \right) - L_m 10^d = \dfrac{\beta^{n}}{\sqrt{5}} + \dfrac{\beta^{k}}{\sqrt{5}} .$$ After dividing both sides of the last equation by $ \dfrac{\alpha^{n}}{\sqrt{5}} \left( 1- \alpha^{k-n} \right) $ and taking absolute values, we get that $$ \left| 1- \dfrac{ L_m 10^d \sqrt{5} }{ \alpha^n ( 1-\alpha^{k-n} ) } \right| \leq \left( \dfrac{1}{ 1-\alpha^{k-n} } \right) \left( \dfrac{ |\beta^n | }{\alpha^n } + \dfrac{ |\beta^k |}{\alpha^n } \right).
$$ Taking into account that $ 1< \dfrac{1}{ 1-\alpha^{k-n} } <2 $ for $ n-k \geq 2,$ we get that \begin{equation} \label{4ineq} \left| \Lambda_4 \right | := \left| 1- \dfrac{ L_m 10^d \sqrt{5} }{ \alpha^n \left( 1- \alpha^{k-n} \right) } \right| < \dfrac{4}{\alpha^{2k}}. \end{equation} Let $(\eta_1, b_1)=(10, d)$, $(\eta_2, b_2)=(\alpha, -n)$ and $(\eta_3, b_3)=\left( \dfrac{ L_m \sqrt{5} }{ 1- \alpha^{k-n} } , 1 \right) .$ All of $ \eta_1, \eta_2$ and $ \eta_3 $ belong to the real quadratic number field $ \mathbb{K}=\mathbb{Q}(\sqrt{5}) $, which has degree $ d_\mathbb{K}=2 .$ We have $h(\eta_1)= \log{10}$, $h(\eta_2)=(1/2)\log(\alpha)$ and $$ h(\eta_3) \leq h( L_m ) + h( \sqrt{5} ) + h( 1- \alpha^{k-n} ) \leq \log(L_m) + \log(\sqrt{5}) + h( \alpha^{k-n}) +\log 2 $$ $$ < \log (2\alpha^m) + \log(\sqrt{5}) + \dfrac{|k-n|}{2} \log \alpha +\log 2, $$ so that $$ h(\eta_3) < \dfrac{3(n-k)+2}{2} \log(\alpha) + \log(4\sqrt{5}), $$ where we used the fact that $ m<n-k+1 $ from \eqref{bn2}. So we take $ A_1 = 2\log {10},$ $A_2= \log \alpha $ and $A_3= (3(n-k)+2) \log(\alpha) + \log(80).$ Clearly, $ B:=\max\{ |b_i | \} = \max\{ d, n, 1 \}= n.$ Also, $ \Lambda_4 \neq 0.$ To show this, assume that $ \Lambda_4=0. $ Then, we get that $\alpha^{n} - \alpha^{k} = 10^d L_m \sqrt{5}. $ Conjugating in $ \mathbb{K} ,$ we find $\beta^{n} - \beta^{k} = - 10^d L_m \sqrt{5}. $ Adding the last two equalities side by side, we obtain that $ L_n-L_k=0, $ that is, $ n=k, $ a contradiction. So $ \Lambda_4 \neq 0.$ Thus, we apply Theorem \ref{Matveev} to $ \Lambda_4 $ given in \eqref{4ineq}, and we get that \begin{equation} \label{b232} \log{( \Lambda_4 ) } > -1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 2^2 (1+\log 2)(1+\log n) \cdot A_3 \cdot 2\log{10} \cdot \log \alpha. \end{equation} On the other hand, from \eqref{4ineq}, taking into account that $\log{( \Lambda_4 ) } < \log4 -2k \log \alpha ,$ we get that \begin{equation} \label{b2k} k < 2.24 \cdot 10^{12} \cdot (1+\log n) [ (3(n-k)+2) \log(\alpha) + \log(80) ] .
\end{equation} Now, we use the two inequalities \eqref{bn-k2} and \eqref{b2k} to get an initial bound on the variable $n$. To do this, first assume that $ k \leq m. $ Then $ n-m \leq n-k $ and hence, from \eqref{bn-k2}, we write $$ n-k-6 < 7.19 \cdot 10^{12} (1+\log {(n-k)}), $$ which implies that $ n-k < 3 \cdot 10^{14}.$ Since, from \eqref{bn2}, we know that $ n<2m+4$ and $ m<n-k+1,$ it follows that \begin{equation*} n< 10^{15}. \end{equation*} Now let $ m \leq k .$ From \eqref{bn-k2}, in particular, we have that \begin{equation} \label{bn-k23} n-k-6 < 7.19 \cdot 10^{12}(1+\log {n}). \end{equation} Substituting \eqref{bn-k23} into \eqref{b2k} and using the fact that $ n<2k+7 ,$ we find that \begin{equation} \label{bn23} n< 2.5 \cdot 10^{29}. \end{equation} So, whether $ m \leq k $ or not, we have that $ n< 2.5 \cdot 10^{29}. $ \begin{lemma} \label{bmbound} If the equation \eqref{FcLF} holds, then $ m \leq 168. $ In particular, if $ k \leq m ,$ then the equation has solutions only for $ F_n \in \{ 13, 21 \} .$ \end{lemma} \begin{proof} Suppose that $ m>168.$ Let \begin{equation} \label{gama3} \Gamma_3 :=d \log {10} -(n-m) \log \alpha + \log \sqrt{5} . \end{equation} Since $ 161<m-7< n-k-6,$ we have $ \left| \Lambda_3 \right|:=\left| \exp {(\Gamma_3)} -1 \right| <\dfrac{3}{\alpha^{n-k-6}} < \dfrac{1}{2}.$ Hence, we get that $ \left| \Gamma_3 \right| < \dfrac{6}{\alpha^{n-k-6}}.$ Thus, from \eqref{gama3}, we write \begin{equation*} 0< \left| d \dfrac{ \log {10} }{\log \alpha } - (n-m) + \dfrac{\log \sqrt{5}}{\log \alpha} \right | < \dfrac{6}{ \alpha^{n-k-6} \log \alpha}.
\end{equation*} Let $ M:= 2.5 \cdot 10^{29} > n > d$ and $\tau = \dfrac{\log {10}}{\log \alpha}.$ Then, in the continued fraction expansion of the irrational number $ \tau$, we see that $ q_{60},$ the denominator of the $ 60 $th convergent of $ \tau,$ exceeds $ 6M.$ With the help of Maple, we calculate $$ \epsilon := ||\mu q_{60} || -M || \tau q_{60} || $$ where $\mu = \dfrac{\log \sqrt{5}}{\log \alpha},$ and we find that $0.017775 < \epsilon .$ Let $ A:= \dfrac{6}{\log \alpha} ,$ $ B:=\alpha $ and $ \omega :=n-k-6 .$ From Lemma \ref{reduction}, we find that $$ n-k-6 < \dfrac{ \log{ \left( Aq_{60}/ \epsilon \right) } }{\log B} < 160 .$$ But this contradicts the fact that $ 161<m-7< n-k-6.$ So, we conclude that $ m \leq 168 .$ \end{proof} By Lemma \ref{bmbound}, from now on, we deal only with the case $ m<k. $ Since $ n-k<m+7 \leq 175 ,$ by substituting this upper bound for $ n-k $ into \eqref{b2k}, we get that \begin{equation} \label{b2k2} n <2k +7 < 2 \cdot 2.24 \cdot 10^{12} \cdot (1+\log n) [ 527 \log(\alpha) + \log(80) ] +7 . \end{equation} So from \eqref{b2k2}, it follows that \begin{equation*} n< 4.6 \cdot 10^{16} . \end{equation*} Let \begin{equation} \label{gama4} \Gamma_4 :=d \log {10} -n \log \alpha + \log { \left( \dfrac{ L_m \sqrt{5}}{ 1- \alpha^{k-n}} \right) }. \end{equation} So $$ \left | \Lambda_4 \right |:=\left | \exp{(\Gamma_4)}-1 \right | <\dfrac{4}{\alpha^{2k}}. $$ Then $\left | \Gamma_4 \right | < \dfrac{8}{\alpha^{2k}}, $ since $ \dfrac{4}{\alpha^{2k}} < \dfrac{1}{2}.$ Thus, from \eqref{gama4}, \begin{equation} \label{br2} 0 < \left | \dfrac{\Gamma_4}{\log \alpha} \right | < \dfrac{8}{\alpha^{2k} \log \alpha }. \end{equation} Now, we take $ M:=4.6 \cdot 10^{16} > n > d $ and $\tau:=\dfrac{ \log{ 10}}{\log \alpha},$ which is irrational. Then, in the continued fraction expansion of $ \tau$, we take $ q_{91} ,$ the denominator of the $ 91 $st convergent of $ \tau,$ which exceeds $ 6M.
$ Now, with the help of Maple, we calculate $$ \epsilon_{m, n-k} := ||\mu_{m, n-k} q_{91} || -M || \tau q_{91} || $$ for each $ 0 \leq m \leq 168 $ and $ 3 \leq n-k <m+7 ,$ where $$ \mu_{m, n-k} := \dfrac{ \log { \left( \dfrac{ L_m \sqrt{5}}{ 1- \alpha^{k-n}} \right) } }{\log \alpha}, $$ except for $ n-k=4$ and $ n-k=8 .$ For these two values, $ \epsilon_{m, 4}<0 $ and $ \epsilon_{m, 8} <0 .$ To overcome this problem, we use the periodicity of the Fibonacci sequence. An easy computation shows that no positive integer $ t $ in the range $ 1 \leq t \leq 20 $ satisfies either $ F_t \equiv F_{t+4} \pmod 5 $ or $ F_t \equiv F_{t+8} \pmod 5 .$ Since the period of the Fibonacci sequence modulo 5 is 20 \cite[Theorem 35.6]{Koshy}, we conclude that no positive integer satisfies either of these two congruences. So, from \eqref{FcLF}, we deduce that $ n-k \neq 4 $ and $ n-k \neq 8. $ Let $ A:= \dfrac{8}{\log \alpha} ,$ $ B:=\alpha $ and $ \omega :=2k .$ Thus, from Lemma \ref{reduction}, we get that the inequality \eqref{br2} has solutions only for $$ 2k \leq \dfrac{ \log{ \left( Aq_{91} /{0.000109} \right) }}{\log B} \leq 233 .$$ So, we get that $m< k < 117, $ which contradicts the bound on $ k .$ Thus, we conclude that the equation \eqref{FcLF} has no solution when $ F_n \not\in \{ 13, 21 \} .$ This completes the proof. \textbf{Discussion:} In this paper, we aimed to study mixed concatenations of numbers belonging to two different sequences, in the particular case of the Fibonacci and Lucas sequences, and we found that there are only a handful of Fibonacci numbers that can be written as a mixed concatenation of a Fibonacci number and a Lucas number.
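The computational claims in this proof — the congruence obstruction modulo $5$, the size of the convergent denominators of $\tau$, and the numerical constant appearing in \eqref{bn-k2} — can be checked independently. The following is a small Python sketch of such a check using only the standard library (the paper's own computations used Maple; the indexing of convergents below may differ from the paper's by an offset, so we only verify that some denominator exceeds $6M$):

```python
# Independent sanity checks for three computational ingredients of the proof.
from decimal import Decimal, getcontext
import math

getcontext().prec = 250  # high precision for the continued-fraction expansion

sqrt5 = Decimal(5).sqrt()
alpha = (1 + sqrt5) / 2  # golden ratio

# (i) The Pisano period of the Fibonacci sequence modulo 5 is 20, and no t
#     with 1 <= t <= 20 satisfies F_t = F_{t+4} or F_t = F_{t+8} (mod 5).
fib_mod5 = [0, 1]
for _ in range(60):
    fib_mod5.append((fib_mod5[-1] + fib_mod5[-2]) % 5)
assert fib_mod5[1:21] == fib_mod5[21:41]                     # 20 is a period
assert all(fib_mod5[1:21] != fib_mod5[1 + s:21 + s] for s in range(1, 20))
bad4 = [t for t in range(1, 21) if fib_mod5[t] == fib_mod5[t + 4]]
bad8 = [t for t in range(1, 21) if fib_mod5[t] == fib_mod5[t + 8]]
assert bad4 == [] and bad8 == []

# (ii) Continued fraction of tau = log 10 / log alpha: some convergent
#      denominator exceeds 6M for M = 2.5 * 10^29.
tau = Decimal(10).ln() / alpha.ln()
x, terms = tau, []
for _ in range(80):
    a = int(x)
    terms.append(a)
    x = 1 / (x - a)
p_prev, p = 1, terms[0]
q_prev, q = 0, 1
dens = [q]
for a in terms[1:]:
    p_prev, p = p, a * p + p_prev
    q_prev, q = q, a * q + q_prev
    dens.append(q)
M = 25 * 10**28
assert any(d > 6 * M for d in dens)

# (iii) The Baker-type constant in (bn-k2):
#       1.4*30^6*3^4.5*2^2*(1+log 2)*2 log 10*log alpha*log 5 / log alpha.
la = math.log((1 + math.sqrt(5)) / 2)
C = (1.4 * 30**6 * 3**4.5 * 2**2 * (1 + math.log(2))
     * 2 * math.log(10) * la * math.log(5)) / la
assert 7.1e12 < C < 7.19e12  # consistent with the stated 7.19e12
```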
The results of this paper, together with those of \cite{Alan, Banks}, bring to mind the question of whether there could exist a nondegenerate \emph{binary} recurrence sequence containing infinitely many terms that are expressible as a mixed concatenation of, say, a Fibonacci number and a Lucas number. Let us define a sequence by $ u_n = 10F_n+1 .$ Then, clearly, every term of this sequence is a concatenation of a Fibonacci number and $ 1. $ So, in the above question, we cannot omit the expression "nondegenerate \emph{binary}". As a final remark, it is also worth noting that, using only some properties and identities of Fibonacci and Lucas numbers, we could obtain just a few partial results; for this reason we used the method of this paper, namely the effective combination of Baker's method with the reduction method. \end{document}
\begin{document} \def\uppercasenonmath#1{} \let\MakeUppercase\relax \title[\uppercase{Compromise, Don't Optimize}]{\larger Compromise, Don't Optimize:\\ Generalizing Perfect Bayesian Equilibrium\\ to Allow for Ambiguity} \author[\uppercase{Schlag and Zapechelnyuk}]{ \larger \textsc{Karl H.~Schlag and Andriy Zapechelnyuk}} \date{\today} \thanks{ \ \\ \textit{Schlag}: Department of Economics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria. \emph{E-mail:} [email protected]. \\ \textit{Zapechelnyuk}: School of Economics and Finance, University of St Andrews, Castlecliffe, the Scores, St Andrews KY16 9AR, UK. {\it E-mail:} {[email protected].} \\ \ \\ We are grateful to Pierpaolo Battigalli, Simon Grant, and Clara Ponsati for their comments. } \begin{abstract} We introduce a solution concept for extensive-form games of incomplete information in which players need not assign likelihoods to what they do not know about the game. This is embedded in a model in which players can hold multiple priors. Players make choices by looking for compromises that yield a good performance under each of their updated priors. Our solution concept is called perfect compromise equilibrium. It generalizes perfect Bayesian equilibrium. We show how it deals with ambiguity in Cournot and Bertrand markets, public good provision, Spence's job market signaling, bilateral trade with common value, and forecasting. \noindent\emph{JEL\ Classification:}\ D81, D83\newline \noindent\emph{Keywords:} compromise, multiple priors, loss, robustness, perfect Bayesian equilibrium, perfect compromise equilibrium, solution concept \end{abstract} \maketitle \section{Introduction} Modeling lack of information is at the center stage of economics. Agents might not know the previous choice of someone else. Or they might not know the type of their opponent. Or they might not know their own payoffs. 
The concept of {\it perfect Bayesian equilibrium} (PBE) deals with uncertainty by forcing players to specify priors that describe precise likelihoods of the possible states of the world. However, being uncertain seems to contradict any ability to assign such probabilities. Players might be ambiguous and not willing or able to specify probabilities. They might only be able to identify which states are possible. In this paper we introduce a solution concept that does not force players to formulate priors. Our solution concept is called {\it perfect compromise equilibrium} (PCE). It applies to extensive-form games of incomplete information. Players are allowed to be ambiguous about what they do not know about the game. Ambiguity is modeled by allowing each player to hold a set of priors. A player without probability assessments is one that has a set of degenerate priors. A standard Bayesian player is one that has a unique prior. PCE includes PBE as a special case when each player in the game is endowed with a single prior. Our solution concept is readily defined once we have resolved the following two issues. How to learn from the past? How to model decision making under ambiguity? Learning from the past is modeled as follows. Each player starts a game with a set of priors. These priors are then updated, prior by prior, using Bayes' rule whenever possible \cite[known as full Bayesian updating,][]{Pires}. This determines the player's posterior beliefs at each of her information sets. As in PBE, there are no restrictions on how a prior is updated at information sets that it does not reach. Decision making under ambiguity is modeled as follows. A player makes a decision at each of her information sets by choosing a {\it best compromise} given the set of her posterior beliefs at that information set. This is a decision that balances the loss of not making the optimal decision for each of her beliefs.
This criterion collapses to expected utility maximization if there is only one belief or if there is a dominant action at the given information set. The concept of best compromise follows the tradition of minimax regret and is founded on many pillars. It has an axiomatic foundation. It is similar to classic expected utility maximization when there is little ambiguity, in the sense that all beliefs are close to each other, or when the loss of not making the optimal decision is small for each of the beliefs. Best compromises can be used to justify behavior in front of people with different preferences. They formalize the everyday notion of making a compromise. We assume that a player's choice at each of her information sets is made given the equilibrium behavior of herself and others at all other information sets. In particular, the player anticipates her own choices at subsequent information sets, and hence follows {\it consistent planning} \citep{Strotz56,Siniscalchi2011}. Formally, our solution concept, PCE, specifies for each player a strategy and a belief mapping. The strategy identifies the action the player chooses at each of her information sets. The belief mapping maps each prior of the player to a belief over decision nodes in each of her information sets. We show that a PCE exists in finite games. We illustrate the PCE concept in a simple game that involves a market of lemons with quality inspections. This illustration demonstrates how multiple priors are updated and highlights differences in the reasoning as compared to PBE. We are particularly interested in modeling players who have difficulty forming priors, or who are extremely ambiguous and only focus on which states are possible, without assessing their likelihoods. For instance, it seems unlikely that firms conjecture a specific probability distribution when they think about what demand they will be facing. Yet it seems plausible that they put bounds on the uncertain demand. 
These bounds can come from the most optimistic and pessimistic scenarios provided by expertise. Situations like this can be modeled within our framework by letting the set of priors consist only of degenerate priors. We call this {\it genuine ambiguity}. This way of modeling incomplete information without using priors comes with numerous advantages in comparison to PBE. Solutions are often easier to obtain. They are more parsimonious as they do not change with a prior. Solutions can be more intuitive as they are simple and depend on observables and not on fictitious distributions. These advantages are demonstrated in our examples. We investigate six salient economic examples. We consider Cournot competition with unknown demand, where firms postulate bounds on the true demand. We consider Bertrand competition where firms assess lower and upper bounds on the marginal costs of their rivals. We consider public good provision where beneficiaries of a public good do not know each others' values and hypothesize an interval where these values can be. We consider Spence's job market where employers are uncertain about the cost of education and the productivity of workers, and conjecture bounds on these parameters. We consider bilateral trade under common value where each party knows an interval that contains the true value. Finally, we consider forecasting of a random variable with unknown distribution. These examples highlight the value of the PCE concept in terms of realism, tractability, and new insights. They are arguably more realistic than those found in the literature, as we do not have to confine ourselves to parametric models of uncertainty or to models with two states (high and low). These examples involve strategic decision making under rich uncertainty where the PBE analysis is intractable. New insights appear. 
We find that replacing priors by bounds on uncertain parameters has little impact on profits in Cournot and Bertrand competition settings where compromise values are small. In these contexts it makes little sense to think in more detail about which state is really the true one, as payoffs would only be slightly higher in some states but could be substantially lower in other states. Yet loosening these bounds causes firms to react differently. They become more competitive under Cournot competition and less competitive under Bertrand competition. In the public good game, we show the ease of comparing policies and the simplicity of the beneficiaries' contribution rules. In the separating equilibrium of Spence's job market signaling game, better educated workers are not necessarily more productive, unlike in the classic model with two types \citep{Spence73}. In bilateral trade with common value, we find that trade is possible, as opposed to the famous no-trade theorem for PBE \citep{MilgromStokey82}. The possibility that the trading partners have different valuations leads to trade with positive probability in a PCE, as ignoring this possibility generates losses that the traders want to minimize. Finally, when forecasting a random variable with a known mean and unknown distribution based on a noisy signal, the best-compromise forecast is a weighted average of the mean and the signal. \noindent \textbf{Related Literature.} Our paper contributes to the literature on robustness and ambiguity in games of incomplete information. A paper that at a glance may seem very similar to ours is \cite{Hanany2018}. They also consider general extensive form games with incomplete information. Their players have smooth ambiguous preferences \cite[see also][]{Klibanoff2005}. Specifically, a player combines or aggregates different possible priors into a single belief using a distribution over these priors and a concave aggregator function. 
This aggregated belief is updated over time in a dynamically consistent fashion. Thus, a player has a very detailed understanding of how the different priors should be weighted. In contrast, the different priors in our model remain conceptually separated. The inability or unwillingness to combine priors is at the heart of our approach. Compromises are chosen as a way to resolve the conflict of having different possible understandings of the environment. In fact, one of the emphases of our paper is that it offers a means to get away from probability assessments. Our approach can handle a player who wishes to capture uncertainty by assessing a set of possible states, without any use of priors. An important ingredient of our solution concept is the use of compromise for making choices when the true state is unknown. A popular alternative approach in the literature on ambiguity is maximin preferences \citep{Wald50,Gilboa89}. These preferences have been brought to simultaneous-move games with incomplete information and multiple priors by \cite{Epstein96}, \cite{Kajii97}, \cite{Kajii2005}, and \cite{Azrieli2011}. While the maximin approach can be suitable in applications where players are pessimistic and care about the worst possible payoffs, it leads to unintuitive results in our examples. For instance, in Bertrand duopoly with ambiguity about the rival's cost, maximin utility leads firms to shut down. To obtain nontrivial results, additional structural assumptions need to be added, such as assuming knowledge of the mean state. Another approach found in the literature is Knightian uncertainty with incomplete preferences. This has been used by Chiesa et al.~(\citeyear{Chiesa}) to model bidding in auctions. Our idea of best compromise has origins in minimax regret \citep{Savage51} and connects to approximate optimality.
Our optimization criterion differs from minimax regret as evaluation occurs at each information set, while minimax regret traditionally evaluates regret ex-post. Furthermore, PCE retains the strategic reasoning of PBE, as players have certainty about each others' strategies. For an investigation of minimax regret under strategic uncertainty see \cite{Linhart1989}, and under partial strategic uncertainty see \cite{RenouSchlag2010}. In simultaneous-move games, PCE can be considered as a generalization of ex-post Nash equilibrium \citep{Cremer1985}. It can be thought of as an $\varepsilon $-ex-post Nash equilibrium in which the smallest possible value of $\varepsilon $ is chosen for each player. In the context of $ \varepsilon $-Nash equilibrium \citep{Radner80} the value of $\varepsilon$ is interpreted as a minimal level of improvement necessary to trigger a deviation. Our interpretation is different. The value of $\varepsilon $ measures the compromise needed to accommodate all beliefs. In particular, the threshold $\varepsilon $ is endogenous in a PCE. PCE can be interpreted as a robust version of PBE where robustness in the sense of \cite{Huber1965} means making choices that also perform well if the model is slightly misspecified. Being a compromise, our suggested strategies perform well under each prior given how others make their choices, never doing too badly relative to what could be achieved under that prior. \cite{Stauber} analyzes the local robustness of PBE to small degrees of ambiguity about players' beliefs. In particular, players do not adjust their play to this ambiguity, unlike in our paper. We proceed as follows. In Section \ref{s:pce} we introduce our solution concept, prove existence, and demonstrate it in a simple game. In Section \ref{s:examples} we illustrate PCE in six self-contained economic examples. Section \ref{s:concl} concludes. All proofs are in Appendix A. An alternative model of the forecasting example is in Appendix B.
\section{Perfect Compromise Equilibrium}\label{s:pce} We introduce a solution concept called {\it perfect compromise equilibrium (PCE)}. The concept is formally defined in Section \ref{s:setup}. It is further discussed in Section \ref{s:disc-1} and illustrated by a simple example in Section \ref{s:ex-lemons}. A reader who wishes to be spared the formalities and seeks to understand the essence of PCE and its applicability can jump to Section \ref{s:examples}, which presents self-contained economic examples. \subsection{Formal Setting}\label{s:setup} \ Consider a finite extensive-form game described by $(N,\mathcal G,$ $\Omega,(\Pi_1,\ldots,\Pi_n),(u_1,\ldots,u_n))$, where $N=\{1,\ldots,n\}$ is a set of players, $\mathcal G$ is a finite game tree, $\Omega$ is a finite set of states, $\Pi_i\subset\Delta(\Omega)$ is a finite set of priors of player $i$, and $u_i$ is a payoff function of player $i$. Note that we allow players to have different sets of priors. The game tree $\mathcal G$ describes the order of players' moves, their information sets, and the actions that are available at each information set. It is defined by a set of linked decision nodes and terminal nodes that form a tree. Each decision node is assigned three elements: a player $i$, an information set $\phi_i$, and a set of actions available to player $i$ at that information set. Information set $\phi_i$ is the set of all the decision nodes that player $i$ cannot distinguish. Information sets and action sets satisfy the standard assumptions of games with perfect recall. Let $\phi_0$ be the initial decision node of the game, let $\Phi_i$ be the set of all information sets of player $i$ for each $i\in N$, and let $\mathcal{T}$ be the set of terminal nodes of the game. Let $A_{\phi_i}$ be a finite set of actions available at an information set $\phi_i$, and let $\mathscr A_{\phi_i}=\Delta(A_{\phi_i})$ be the corresponding set of mixed actions.
In the spirit of \cite{Harsanyi67}, all incomplete information is captured by a move of nature at the beginning of the game. Nature moves only once, at the initial decision node $\phi_0$. An action of nature $\omega$ is called a {\it state} and is chosen from the set of states $\Omega$. The game terminates after finitely many moves at some terminal node, and players obtain payoffs. A payoff function of each player $i\in N$ specifies the payoff $u_i(\tau)$ of player $i$ at each terminal node $\tau\in \mathcal{T}$. A strategy of player $i\in N$ prescribes a mixed action $s_{\phi_i}\in \mathscr A_{\phi_i}$ for each information set $\phi_i\in\Phi_i$. A strategy profile $s$ describes the behavior of all players throughout the game. As in Bayesian games, we also specify posterior beliefs of the players in their information sets. Unlike in Bayesian games, each player may have multiple beliefs in each of her information sets. These beliefs are derived from the set of priors, prior by prior, following Bayes' rule whenever possible. This procedure is known as full Bayesian updating \citep{Pires}. We refer to Section \ref{s:ex-lemons} for an example that illustrates this updating. Formally, for each player $i$ and each information set $\phi_i\in\Phi_i$, let $\beta_{\phi_i}:\Pi_i\to\Delta(\phi_i)$ be a belief mapping that associates each prior $\pi_i\in \Pi_i$ of player $i$ with a posterior probability distribution $\beta_{\phi_i}(\pi_i)$ over the decision nodes in $\phi_i$. Thus, in the information set $\phi_i$, player $i$ faces a set $B_{\phi_i}$ of posterior beliefs derived from the set of priors $\Pi_i$, where \[ B_{\phi_i}=\left\{\beta_{\phi_i}(\pi_i):\pi_i\in\Pi_i\right\}. \] We will refer to $B_{\phi_i}$ as the set of {\it beliefs} at $\phi_i$, and to the profile $\beta=(\beta_{\phi_i})_{\phi_i\in\Phi_i,i\in N}$ as the {\it belief system}. As in PBE, we will require consistency of beliefs.
\begin{definition}\label{def:consistency} A belief mapping $\beta_{\phi_i}$ is called {\it consistent under a strategy profile $s$} if for each prior $\pi_i\in\Pi_i$ such that the information set $\phi_i$ is reached with a strictly positive probability under strategy profile $s$, the belief $\beta_{\phi_i}(\pi_i)$ is derived by Bayes rule from $\pi_i$. A belief system $\beta$ is {\it consistent under a strategy profile $s$} if for each $i\in N$ and each $\phi_i\in\Phi_i$ the belief mapping $\beta_{\phi_i}$ is consistent under $s$. \end{definition} Note that our definition of consistency does not impose any discipline on the out-of-equilibrium beliefs. If an information set $\phi_i$ cannot be reached under a given prior $\pi_i$ and a given strategy profile $s$, then every belief $\beta_{\phi_i}(\pi_i)\in\Delta(\phi_i)$ is consistent under $s$. Of course, not every choice of out-of-equilibrium beliefs can be sensible in applications. This is the very same problem that emerged in the context of PBE and gave rise to a vast literature on PBE refinements. This problem is of equally high importance for PCE. However, addressing this problem would take us away from the main messages of this paper. Neither the idea of PCE, nor its properties in the examples considered in this paper change if additional assumptions about out-of-equilibrium beliefs are made. So, we leave this question for future research. Next we define how decisions are made at an information set $\phi_i$. We fix a strategy profile $s$ and determine how to make a choice at $\phi_i$, while keeping choices at all other information sets fixed. The difficulty of making a decision at $\phi_i$ is that the player does not know which belief in the set of beliefs $B_{\phi_i}$ should be used to evaluate the expected payoff. We resolve this issue by assuming the player chooses a best compromise. This is an action that is never too far from the best action under each belief in $B_{\phi_i}$. 
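As a concrete illustration of prior-by-prior (full Bayesian) updating and of the consistency requirement in Definition \ref{def:consistency}, the following minimal Python sketch updates a set of priors at a two-node information set. All names and numbers here are ours, chosen only for illustration:

```python
# Minimal sketch of full Bayesian updating (prior-by-prior) at one
# information set whose decision nodes are indexed by states.

def update(prior, reach_prob):
    """Update a single prior over states by Bayes' rule, given the
    probability that play reaches the information set in each state.
    Returns None if the set is unreachable under this prior: the
    posterior is then unrestricted, as in the consistency definition."""
    joint = {w: prior[w] * reach_prob[w] for w in prior}
    total = sum(joint.values())
    if total == 0:
        return None  # out-of-equilibrium belief: any choice is consistent
    return {w: joint[w] / total for w in joint}

def belief_set(priors, reach_prob):
    """Full Bayesian updating: update every prior separately."""
    return [update(p, reach_prob) for p in priors]

# Example: two states; the information set is reached with probability 1
# in state H and probability 1/2 in state L (the L-type randomizes).
priors = [{"H": 0.0, "L": 1.0}, {"H": 0.5, "L": 0.5}, {"H": 1.0, "L": 0.0}]
reach = {"H": 1.0, "L": 0.5}
print(belief_set(priors, reach))
# degenerate priors stay degenerate; the uniform prior tilts toward H
```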
Formally, consider a pair $(s,\beta)$. Denote by $\bar u_i(s_{\phi_i}|\phi_i,s,b_i)$ the expected payoff of player $i$ from choosing a mixed action $s_{\phi_i}\in \mathscr A_{\phi_i}$ in an information set $\phi_i$ under the belief $b_i$ over the decision nodes in $\phi_i$, assuming that the play is given by $s$ elsewhere in the game. The payoff difference \[ \sup_{x_i\in \mathscr A_{\phi_i}} \bar u_i(x_i|\phi_i,s,b_i)-\bar u_i(s_{\phi_i}|\phi_i,s,b_i) \] is called player $i$'s {\it loss} from choosing mixed action $s_{\phi_i}$ at information set $\phi_i$ given belief $b_i$. It describes how much better off player $i$ could have been at this information set given this belief if, instead of choosing $s_{\phi_i}$, she had chosen the best action, assuming that the actions in all other information sets are prescribed by $s$. The {\it maximum loss} of player $i$ from choosing a mixed action $s_{\phi_i}$ in an information set $\phi_i$ under $(s,\beta)$ is given by \[ l(s_{\phi_i}|\phi_i,s,\beta)=\max_{b_i \in B_{\phi_i}}\left( \sup_{x_i\in \mathscr A_{\phi_i}} \bar u_i(x_i|\phi_i,s,b_i)-\bar u_i(s_{\phi_i}|\phi_i,s,b_i)\right). \] So the maximum is evaluated over all beliefs of player $i$ at $\phi_i$. Player $i$ makes a decision that minimizes the maximum loss. Such a choice is called a {\it best compromise}. Formally she chooses an element of \begin{equation}\label{e-pce-opt} \argmin_{s_{\phi_i}\in\mathscr A_{\phi_i}} l(s_{\phi_i}|\phi_i,s,\beta) \end{equation} at each of her information sets $\phi_i$. In equilibrium $s^*$, this means that she chooses $s^*_{\phi_i}\in\argmin_{s_{\phi_i}\in\mathscr A_{\phi_i}} l(s_{\phi_i}|\phi_i,s^*,\beta)$. Hence, when computing the maximum loss and finding the best compromise, each player assumes that the behavior is given by $s^*$ at all other information sets, including her own. Thus the players anticipate their own choices at subsequent information sets, which is known as {\it consistent planning} \citep{Strotz56,Siniscalchi2011}. 
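The maximum-loss and best-compromise computations above can be sketched in code for a single information set, restricted to pure actions for simplicity (cf.\ Remark \ref{rem-1}); the payoff numbers below are ours and purely illustrative:

```python
# Sketch of the best-compromise choice at one information set over pure
# actions: u[a][w] is the expected continuation payoff of action a at
# node/state w, and each belief is a distribution over the nodes.

def expected(u_a, belief):
    return sum(belief[w] * u_a[w] for w in belief)

def max_loss(u, a, beliefs):
    """Maximum, over beliefs, of the shortfall of action a from the
    belief-wise optimal action."""
    return max(
        max(expected(u[x], b) for x in u) - expected(u[a], b)
        for b in beliefs
    )

def best_compromise(u, beliefs):
    """Action minimizing the maximum loss over the set of beliefs."""
    return min(u, key=lambda a: max_loss(u, a, beliefs))

# Toy example: two actions, three beliefs over states {H, L}.
u = {
    "risky": {"H": 4.0, "L": -6.0},  # best if H is likely
    "safe":  {"H": 2.0, "L": -2.0},  # best if L is likely
}
beliefs = [{"H": 0.0, "L": 1.0}, {"H": 0.5, "L": 0.5}, {"H": 1.0, "L": 0.0}]
print(best_compromise(u, beliefs))  # "safe": max loss 2 instead of 4
```

Note that with a single belief the criterion reduces to expected utility maximization: `best_compromise(u, [{"H": 1.0, "L": 0.0}])` returns the best response `"risky"`.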
This leads to our equilibrium concept that is based on the ideas of best compromises and consistent beliefs. \begin{definition}\label{def:pce} A pair $(s^*,\beta^*)$ is called a {\it perfect compromise equilibrium} if \noindent (a) each player chooses a best compromise in each of her information sets; \noindent (b) the belief system $\beta^*$ is consistent under the strategy profile $s^*$. \end{definition} We begin by establishing the existence of PCE. \begin{theorem}\label{p:exist} A perfect compromise equilibrium exists. \end{theorem} The proof is in Appendix \ref{s:pt}. \begin{remark} In some applications there can be a continuum of strategies, states, and priors over states. The definition of PCE readily extends to such settings, but some additional assumptions have to be made to ensure its existence. \end{remark} \begin{remark}\label{rem-1} In some applications it can be unrealistic to assume that players choose mixed actions. Our definition of PCE can be easily adjusted if players are only allowed to choose pure actions. In this case, a best compromise means to minimize one's maximal loss among the available pure actions. Formally, we set $\mathscr A_{\phi_i}=A_{\phi_i}$ for each player $i$ and each information set $\phi_i$, and define the concept of PCE as above. \end{remark} In the remainder of this section, we discuss some properties of the PCE and provide a simple example. This example illustrates the PCE concept and outlines the difference from a PBE where each player has a single prior. \subsection{Discussion}\label{s:disc-1} We highlight some properties of PCE. \noindent{\bf Best Compromise.} Our decision making criterion for how to make choices at a given information set captures the intuitive notion of making a compromise. As a compromise, the performance should be satisfactory in all potential situations, as opposed to being best under some and possibly very bad under others. 
The concept of best compromise identifies the smallest maximal distance from first best as a measure of how large the compromise has to be. Compromises are valuable when decisions have to be justified in front of others who have heterogeneous perceptions about the environment. The concept of a best compromise follows the tradition of decision making under minimax regret, thus having an axiomatic underpinning \citep{Milnor,Stoye2011}. Traditionally, minimax regret is evaluated ex-post, after all uncertainty is resolved. In contrast, to model a compromise in the face of several beliefs, we consider the loss attained at the interim (at a given information set) for a given belief. Stoye's (2011) axioms continue to hold from this interim viewpoint. Furthermore, our concept retains the strategic reasoning of PBE, as players know each others' strategies. This is unlike \cite{Linhart1989}, who reduce the game to an individual decision problem, where the behavior of the others is treated as a move of nature. Clearly, instead of best compromise, any other decision making criterion under ambiguity could be used for determining choices at information sets. For instance, the maximin utility criterion can be used to model pessimism or cautiousness, a world in which the player always anticipates the worst outcome. \noindent{\bf PCE vs PBE.} Our definition of PCE generalizes the concept of PBE to games where some players may be ambiguous about what they do not know. When there is no ambiguity, so that there is a single belief at each information set, our setting describes a standard game of incomplete information. In this case, the loss minimization objective, as described in \eqref{e-pce-opt}, reduces to the standard utility maximization objective. So, an action minimizes the maximum loss of a player if and only if it is a best response.
Moreover, whenever there is only a single belief, the consistency requirement introduced in Definition \ref{def:consistency} reduces to the standard Bayesian consistency of beliefs. Hence, PCE becomes PBE \citep[in the sense of ][]{FT}. The difference between PCE and PBE emerges in models where some players are ambiguous about the state of the world. The standard PBE approach forces players to quantify the uncertainty by specifying a unique belief at each information set, and then assuming that the players optimize with respect to these beliefs. Our approach sidesteps this issue by letting the players have multiple beliefs at each information set and find compromises with respect to these beliefs. \noindent{\bf Ex-post Nash equilibrium.} In simultaneous move games PCE is related to ex-post Nash equilibrium. Ex-post Nash equilibria are profiles that are Nash equilibria in the game in which the state is observed by all players at the outset of the game. This means that the maximum loss of each player at her single information set is equal to zero. Consequently, any ex-post Nash equilibrium is also a PCE. Note, however, that ex-post Nash equilibria often do not exist. \noindent{\bf Dominance.} A PCE survives the elimination of strictly dominated strategies, as we now demonstrate. We say that an action $a_i\in \mathscr A_{\phi_i}$ at an information set $\phi_i$ is {\it strictly dominated} for player $i$ if there exists another action $x_i\in\mathscr A_{\phi_i}$ such that player $i$'s payoff from choosing $a_i$ is strictly worse than that from choosing $x_i$, regardless of the state $\omega\in\Omega$ and of the choices of other players at any of their information sets. Iterated dominance is defined as usual. After having excluded actions that were strictly dominated in previous rounds, one checks the dominance condition w.r.t.~the remaining actions of each player. 
Now observe that if an action $a_i$ at some information set $\phi_i$ is strictly dominated, then it cannot be a best compromise at this information set. This is because the (mixed) action that strictly dominates $a_i$ will achieve a strictly lower loss for each belief, and hence its maximal loss will be strictly smaller. Thus, a strictly dominated action cannot be a part of a PCE. This argument can be iterated, so any iterated strictly dominated action cannot be a part of a PCE. \begin{figure} \caption{Lemon Market with Quality Inspections} \label{F:A} \end{figure} \subsection{Example}\label{s:ex-lemons} The following example illustrates the PCE concept and highlights the difference from PBE. A seller has a car whose quality is either low ($\theta_L$) or high ($\theta_H$). She observes the quality of the car and decides whether to offer it for sale at a fixed price $p$ or to keep it. If the car is offered for sale, a potential buyer decides whether to buy it without observing its quality, or to demand a costly inspection that reveals the quality. The cost of the inspection is $c_S$ for the seller and $c_B$ for the buyer. The low-type car is worth zero and the high-type car is worth $v$ to the buyer. The seller's value of either type of the car is zero. The payoff parameters $v, p, c_S$, and $c_B$ are commonly known and assumed to satisfy \begin{equation}\label{e-assum-lemon} v-c_B\ge p>c_S\ge 2c_B>0. \end{equation} Consequently, the buyer either decides to purchase the car without demanding the inspection, or to ask for the inspection, in which case he buys if it is high quality and does not buy otherwise. So the only nontrivial choice of the buyer is whether or not to inspect the car. The game is summarized in Figure \ref{F:A}, where $N$, $S$, and $B$ denote Nature, the seller, and the buyer, respectively, and $I$ and $NI$ denote the buyer's choice to inspect or not to inspect the car, respectively. 
Suppose that the buyer is uncertain about the quality of the car and holds multiple priors about whether the quality is low or high. For convenience, beliefs are summarized by the probability that the car quality is high. Multiple priors can arise in many different ways. The buyer might come up with different competing scenarios for explaining what car is being offered, each scenario possibly leading to a different prior. In this case the buyer's set of priors is $\Pi_B=\{\pi_0,\pi_1,\ldots,\pi_K\}$. The buyer might have some vague understanding of the likelihoods, for instance that a high quality car is more likely than a low quality car. In this case $\Pi_B=\{\pi: \pi\ge 1/2\}$. The buyer might wish to base his purchasing behavior on the possibility that the quality might be high and that it might be low, without relying on any likelihoods. This motivates setting $\Pi_B=\{0,1\}$. In what follows we analyze the case $\Pi_B=\{\pi_0,\pi_1,\ldots,\pi_K\}$ where $0=\pi_0<\pi_1<...<\pi_{K-1}<\pi_K=1$. In particular, we are including the two degenerate priors where the quality is low and high with certainty. The seller has two information sets with single decision nodes, so the seller's beliefs are trivial in these information sets. The buyer has a single information set denoted by $\phi_B$ that contains two decision nodes. The buyer updates each of his priors to obtain a set of beliefs $\{\beta_{\phi_B}(\pi_k)\}_{k=0,1,...,K}$ at this information set, where $\beta_{\phi_B}$ is the buyer's belief mapping. We now characterize the PCE of this game. A PCE is summarized by a triple $(\sigma^*_S,\sigma^*_B, \beta^*_{\phi_B})$, where $\sigma^*_S(\theta)$ is the probability that the seller sells the car conditional on its type $\theta\in\{\theta_L,\theta_H\}$, $\sigma^*_B$ is the probability that the buyer inspects the car, and $\beta^*_{\phi_B}$ is the buyer's belief mapping.
Because the seller has no ambiguity, the best compromise for the seller is her best response. When the quality is high, the seller prefers to sell the car, as this is a strictly dominant strategy, so $\sigma_S^*(\theta_H)=1$. When the quality is low, the seller prefers to sell the car if the probability of inspection $\sigma^*_B$ is low enough, and to keep the car otherwise, specifically, \begin{equation}\label{e-BR-S-lemon} \sigma_S^*(\theta_L)\in\begin{cases} \{1\} & \text{if $\sigma^*_B<p/(p+c_S),$}\\ [0,1] & \text{if $\sigma^*_B=p/(p+c_S),$}\\ \{0\} & \text{if $\sigma^*_B>p/(p+c_S).$} \end{cases} \end{equation} The buyer can be ambiguous, because he can have multiple beliefs in his information set. In order to be a part of a PCE, the buyer's belief mapping must be {\it consistent} with the strategy $\sigma_S^*$ of the seller. Specifically, each prior $\pi_k\in\Pi_B$ is transformed into a belief $\beta_{\phi_B}^*(\pi_k)$ using Bayes' rule whenever possible. Given $\sigma_S^*(\theta_L)\in[0,1]$ and $\sigma^*_S(\theta_H)=1$, the consistent belief mapping is \begin{equation}\label{e:Bayes} \beta_{\phi_B}^*(\pi_k)=\frac{\pi_k}{\pi_k+(1-\pi_k)\sigma^*_S(\theta_L)} \end{equation} for each $\pi_k\in \Pi_B$, except for the case of $\pi_k=0$ and $\sigma^*_S(\theta_L)=0$, where the above Bayes' posterior is undefined. In this case, any belief $\beta_{\phi_B}^*(0)\in[0,1]$ is consistent with the seller's strategy. The set of the buyer's beliefs in his information set $\phi_B$ is given by \[ B_{\phi_B}=\{\beta^*_{\phi_B}(\pi_k)\}_{k=0,1,...,K}. \] For each belief $b$, which denotes the probability that the car has high quality, the buyer's optimal choice is to inspect when $b<1-c_B/p$, and not to inspect when $b> 1-c_B/p$ (being indifferent when $b=1-c_B/p$). The buyer's loss for a given belief $b$ from a strategy $\sigma_B$ describes how much more payoff the buyer could have obtained if he had optimized his choice under this belief.
For $b\le 1-c_B/p$ this loss is given by \begin{align*} \big[b(v-p)-c_B\big]&-\big[(b(v-p)-c_B)\sigma_B+(bv-p)(1-\sigma_B)\big]\\ &=((1-b)p-c_B)(1-\sigma_B). \end{align*} For $b\ge 1-c_B/p$ this loss is given by \begin{align*} \big[bv-p\big]&-\big[(b(v-p)-c_B)\sigma_B+(bv-p)(1-\sigma_B)\big]\\ &=(c_B-(1-b)p)\sigma_B. \end{align*} The buyer's {\it maximum loss} among his different beliefs of choosing the inspection strategy $\sigma_B$ is thus \begin{multline} l_B(\sigma_B|\phi_B,\sigma_S^*,\beta^*_{\phi_B})=\\\max_{b\in B_{\phi_B}} \max\Big\{((1-b)p-c_B)(1-\sigma_B),(c_B-(1-b)p)\sigma_B\Big\}.\label{e:loss-lemon} \end{multline} Intuitively, the buyer who has multiple beliefs and anticipates the seller to follow her equilibrium strategy worries about two possible situations. It could be that the probability of high quality is high, so the buyer loses payoff by inspecting. The greatest such loss occurs when the belief is the highest, $b=\beta^*_{\phi_B}(1)$. Alternatively, it could be that the probability of high quality is low, so the buyer loses payoff by not inspecting. The greatest such loss occurs when the belief is the lowest, $b=\beta^*_{\phi_B}(0)$. The buyer thus chooses the best-compromise inspection strategy $\sigma_B^*$ that balances these two losses. \begin{proposition}\label{p:lemon} A profile $(\sigma^*_S,\sigma^*_B,\beta^*_{\phi_B})$ is a perfect compromise equilibrium if and only if the seller's and buyer's strategies $\sigma^*_S$ and $\sigma^*_B$ are given by \begin{align*} \sigma^*_S(\theta_L)&=0, \ \ \sigma^*_S(\theta_H)=1, \ \ \sigma^*_B=1-\frac{c_B}{(1-b_0)p}, \end{align*} and the buyer's belief mapping $\beta^*_{\phi_B}$ is given by \[ \beta^*_{\phi_B}(0)=b_0, \ \ \beta^*_{\phi_B}(\pi_k)=1 \ \text{for each $k=1,...,K$}, \] where $b_0$ satisfies \[ b_0\in\left[0,1-\frac{c_B}{c_S}-\frac{c_B}{p}\right]. \] \end{proposition} The proof is in Appendix \ref{s:lemon-proof}. Our prediction based on the PCE is as follows.
High type cars are sold and low type cars are not. While a rational buyer would infer in this situation that the quality of the car is high, under a PCE the buyer inspects the car with a positive probability. This happens because the buyer remains doubtful about the quality and needs to find a compromise given his pair of beliefs $\{b_0,1\}$. The specific probability of inspection is the one that yields the best compromise. Note that the buyer also inspects a car with a positive probability under a PBE with a nondegenerate prior. Under a PBE the inspection probability is the one that makes the low-type seller indifferent between selling or not selling. To see why low type cars are not sold in a PCE, observe that if they were sold with some positive probability, then the buyer's set of beliefs would include the two degenerate beliefs, $\beta^*_{\phi_B}(0)=0$ and $\beta^*_{\phi_B}(1)=1$. The buyer would then choose the best compromise strategy when facing these two extreme cases. This would then lead to an inspection probability so large that the low type seller would prefer not to sell the car. But this would be incompatible with low type cars being sold with positive probability. To see why the buyer must have more than a single belief, thus remaining doubtful, consider the following arguments. If there is a single belief, it must be that the car is almost certainly of high quality, as only high quality cars are sold. In this case, the buyer's best compromise is not to inspect the car. But then, the seller would have strictly preferred to sell the car of low quality. Note that the belief $b_0=\beta^*_{\phi_B}(0)$ obtained under the degenerate prior $\pi_0=0$, where the car has low quality with certainty, is not pinned down by Bayes' rule. This is because it is formed in an out-of-equilibrium event, in which the low-type car is offered for sale.
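Proposition \ref{p:lemon} admits a quick numerical sanity check (a sketch; the payoff parameters below are illustrative values chosen to satisfy \eqref{e-assum-lemon}): the best-compromise inspection probability equalizes the buyer's two worst-case losses, and it keeps the low-type seller out of the market exactly when $b_0$ lies in the stated range.

```python
# Numeric sanity check of Proposition p:lemon (sketch; v, p, c_S, c_B
# are illustrative values satisfying v - c_B >= p > c_S >= 2 c_B > 0).
v, p, c_S, c_B = 10.0, 6.0, 4.0, 1.0
assert v - c_B >= p > c_S >= 2 * c_B > 0

b0_max = 1 - c_B / c_S - c_B / p     # upper end of the admissible b_0 range
b0 = 0.5 * b0_max                    # an admissible out-of-equilibrium belief
sigma_B = 1 - c_B / ((1 - b0) * p)   # best-compromise inspection probability

# The two losses in (e:loss-lemon) are balanced at sigma_B: the loss of
# not inspecting under belief b0 equals the loss of inspecting under belief 1.
loss_not_inspect = ((1 - b0) * p - c_B) * (1 - sigma_B)
loss_inspect = c_B * sigma_B
assert abs(loss_not_inspect - loss_inspect) < 1e-12

# By (e-BR-S-lemon), the low type stays out iff sigma_B >= p/(p + c_S);
# this holds exactly when b0 <= b0_max.
assert sigma_B >= p / (p + c_S)
assert 1 - c_B / ((1 - (b0_max + 0.05)) * p) < p / (p + c_S)
```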
As long as $b_0$ is not too high, the buyer's best-compromise inspection probability is high enough to keep the low type seller out of the market. \section{Examples}\label{s:examples} We illustrate our solution concept in a few economic examples that are prominent in the literature. We consider Cournot and Bertrand duopoly, public good provision, Spence's job market signaling, bilateral trade with common value, and forecasting. The examples presented in this section are self-contained as they do not require knowledge of the formalities presented in Section \ref{s:pce}. We are particularly interested in understanding strategic play under uncertainty when the players cannot or are unwilling to assess the likelihood of different states of the world at the beginning of the game. Formally, players can only have degenerate priors that put probability one on a single state of the world. We call this {\it genuine ambiguity}. Apart from forecasting, the examples presented below deal with genuine ambiguity. Therein, ambiguity is specified in terms of bounds on what the players do not know. Probability distributions do not play a role. Players do not have beliefs. Instead, they speculate about which state is true or about what decision node within an information set they are at. In addition, we assume that players do not use mixed strategies. They search among their pure strategies for a best compromise. Thus we perform a strategic analysis without using probabilities. The section concludes with an example of forecasting that demonstrates the interplay between ambiguity and noise, where multiple priors over one parameter meet a single prior over another parameter. \subsection{Cournot Duopoly with Unknown Demand}\label{s:cournot} We investigate how two firms compete in quantities when neither firm knows the demand. There are two firms that produce a homogeneous good. For clarity of exposition, we assume that there are no costs of production. 
Each firm $i=1,2$ chooses a number of units $q_i\ge 0$ to produce. Choices are made simultaneously. The firms face an inverse demand function $P(q_1+q_2)$. Firm $i$'s profit is given by \[ u_i(q_i,q_{-i}; P)= P(q_i+q_{-i})q_i, \ \ i=1,2. \] Neither firm knows the inverse demand $P$, but they know that it belongs to a set $\mathcal P$ given as follows. Let \[ \begin{split} &\underaccent{\bar} P(q)=\underaccent{\bar} a-\underaccent{\bar} b q \ \ \text{and} \ \ \bar P(q)=\bar a-\bar b q, \ \ \text{where} \ \ \bar a\ge\underaccent{\bar} a>0 \ \ \text{and} \ \ \bar a/\bar b\ge \underaccent{\bar} a/\underaccent{\bar} b>0. \end{split} \] Let $\mathcal P$ be the set of inverse demand functions that satisfy \begin{equation}\label{e:demand} \begin{split} &\text{$P(q)$ is continuously differentiable in $q$,}\\ & \underaccent{\bar} P(q) \le P(q)\le \bar P(q) \quad\text{and}\quad \underaccent{\bar} P'(q)\le P'(q)\le \bar P'(q). \end{split} \end{equation} A firm $i$'s {\it maximum loss} of choosing quantity $q_i$ when the other firm chooses quantity $q_{-i}$ is given by \[ l_i(q_i,q_{-i})=\sup_{P\in \mathcal P} \left(\sup_{q'_i\ge 0} u_i(q'_i,q_{-i}; P)-u_i(q_i,q_{-i}; P)\right). \] The maximum loss describes how much more profit firm $i$ could have obtained if it had known the inverse demand $P$ when anticipating that the other firm produces $q_{-i}$. Firm $i$'s {\it best compromise} given a choice $q^*_{-i}$ of the other firm is a quantity $q^*_i$ that achieves the lowest maximum loss, so \[ q^*_i\in\argmin_{q_i\ge 0} l_i(q_i,q^*_{-i}). \] A strategy profile $(q^*_1,q^*_2)$ is a {\it perfect compromise equilibrium} if each firm chooses a best compromise given the choice of the other firm. \begin{proposition}\label{p:cournot} There exists a unique perfect compromise equilibrium.
In this PCE, the strategy profile $(q^*_1,q^*_2)$ is given by \begin{equation}\label{e-cournot-eq} q_i^*=\frac{1}{3 \left(\sqrt{\underaccent{\bar} b}+\sqrt{\bar b}\right)}\left(\frac{\underaccent{\bar} a}{\sqrt{\underaccent{\bar} b}}+\frac{\bar a}{\sqrt{\bar b}}\right), \ \ i=1,2. \end{equation} The associated maximum losses are \begin{equation}\label{e-cournot-com} l_i(q^*_i,q^*_{-i})=\frac{(\underaccent{\bar} a\bar b-\bar a\underaccent{\bar} b)^2}{4 \underaccent{\bar} b\bar b \left(\sqrt{\underaccent{\bar} b}+\sqrt{\bar b}\right)^2}, \ \ i=1,2. \end{equation} \end{proposition} The proof is in Appendix \ref{s:p1}. Let us discuss the strategic concerns underlying the PCE in this game. Each firm $i$, when facing unknown inverse demand and deciding about the quantity to produce, worries about two possible situations. It could be that the inverse demand is actually very high, so the firm is losing profit by producing too little. The greatest such loss occurs when the inverse demand is the highest, so $ P=\bar P$. Alternatively, it could be that the inverse demand is actually very low, so the firm is losing profit by producing too much. The greatest such loss occurs when the inverse demand is the lowest, so $ P=\underaccent{\bar} P$. The firm thus chooses the best compromise $q_i^*$ that balances these two losses, assuming that the other firm follows its equilibrium strategy $q_{-i}^*$. \begin{remark} It is generally intractable to find a PBE in this game with such a rich set of possible inverse demand functions. It can only be done under very specific priors about the inverse demand. For example, PBE can be found if a prior describes the uncertainty about the parameters of the linear inverse demand function $ P(q)=a-bq$ \citep{Vives}. \end{remark} \begin{remark} Our equilibrium analysis can shed light on how the firms' behavior changes in response to increasing uncertainty. 
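The closed forms in \eqref{e-cournot-eq} and \eqref{e-cournot-com} admit a quick numerical sanity check (a sketch; all parameter values are illustrative): in the no-uncertainty limit $\underaccent{\bar} P=\bar P$ the PCE quantity collapses to the classical Cournot-Nash quantity $a/(3b)$ and the maximum loss vanishes.

```python
import math

# Sketch with illustrative parameters: pce_quantity and pce_max_loss
# transcribe (e-cournot-eq) and (e-cournot-com); a_lo, b_lo (resp.
# a_hi, b_hi) are intercept and slope of the lower (resp. upper)
# linear demand bound.
def pce_quantity(a_lo, b_lo, a_hi, b_hi):
    return (a_lo / math.sqrt(b_lo) + a_hi / math.sqrt(b_hi)) \
        / (3 * (math.sqrt(b_lo) + math.sqrt(b_hi)))

def pce_max_loss(a_lo, b_lo, a_hi, b_hi):
    return (a_lo * b_hi - a_hi * b_lo) ** 2 \
        / (4 * b_lo * b_hi * (math.sqrt(b_lo) + math.sqrt(b_hi)) ** 2)

# No uncertainty: the PCE reduces to the Cournot-Nash outcome a/(3b).
a, b = 10.0, 2.0
assert abs(pce_quantity(a, b, a, b) - a / (3 * b)) < 1e-12
assert pce_max_loss(a, b, a, b) == 0.0

# A +/-5% band around the benchmark a0=2, b0=1 (monopoly profit 1):
# the maximum loss is only about 0.01, i.e., roughly 1% of profit.
assert abs(pce_max_loss(1.9, 1.05, 2.1, 0.95) - 0.01) < 1e-3
```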
For comparative statics, let us consider as a benchmark a linear inverse demand function $P_0(q)=a_0-b_0q$. We normalize constants $a_0$ and $b_0$ so that the monopoly profit is equal to 1, that is, \[ \sup_{q\ge 0} (a_0-b_0 q)q=\frac{a^2_0}{4b_0}=1. \] Suppose that there is a small uncertainty. Specifically, for $\varepsilon>0$ let $P(q)$ satisfy \eqref{e:demand} where \[ \underaccent{\bar} P(q)=\left(1-\frac{\varepsilon}2\right)a_0-\left(1+\frac{\varepsilon}2\right)b_0q \quad\text{and}\quad\bar P(q)=\left(1+\frac{\varepsilon}2\right)a_0-\left(1-\frac{\varepsilon}2\right)b_0q. \] Denote by $q^\varepsilon=(q^\varepsilon_1,q^\varepsilon_2)$ the strategies of the PCE as given by Proposition \ref{p:cournot}. We then obtain \[ \frac{\mathrm{d} q^\varepsilon_i}{\mathrm{d} \varepsilon} =\frac{2\varepsilon}{3a_0}+O(\varepsilon^3)>0. \] So the firms optimally respond to a growing uncertainty about the demand by increasing their output, and do so at an increasing rate as $\varepsilon$ grows. Next, consider the associated maximum losses as shown in \eqref{e-cournot-com}. Then \[ l_i(q^\varepsilon_i,q^\varepsilon_{-i})=\varepsilon^2+O(\varepsilon^4), \ \ i=1,2. \] So the maximum losses in the PCE increase very slowly as uncertainty increases. Moreover, if $\varepsilon=0.1$, then $l_i(q^\varepsilon_i,q^\varepsilon_{-i})\approx 0.01$. So the firms lose no more than about 1\% of the maximum profit due to not knowing the demand. \end{remark} \subsection{Bertrand Duopoly with Private Costs} We now consider how two firms compete in prices when the cost of the rival firm is unknown. There are two firms that produce a homogeneous good. Each firm $i=1,2$ chooses a price $p_i$. Choices are made simultaneously. The consumers only buy from the firm that offers a lower price. 
In particular, the quantity that firm $i$ sells is given by \[ q_i(p_i,p_{-i})=\begin{cases} Q(p_i), & \text{if $p_i<p_{-i}$},\\ Q(p_i)/2, & \text{if $p_i=p_{-i}$},\\ 0, & \text{if $p_i>p_{-i}$}, \end{cases} \] where $Q(p)$ is the demand function. For clarity of exposition we assume that the demand function is given by \[ Q(p)=\max\left\{\frac{a-p}b,0\right\}. \] The cost of producing $q_i$ units is $ c_i q_i$. Each firm $i$'s profit is given by \[ u_i(p_i,p_{-i}; c_i)=(p_i- c_i)q_i(p_i,p_{-i}), \ \ i=1,2. \] Each firm $i$ knows its own marginal cost but not that of the other firm, and it is common knowledge that \[ c_1, c_2\in [\underaccent{\bar} c,\bar c], \ \ \text{where $0\le \underaccent{\bar} c\le\bar c\le a/2$.} \] A firm $i$'s pricing strategy $s_i(c_i)$ describes its choice of the price given its marginal cost $c_i$. For each marginal cost $c_i$, firm $i$'s {\it maximum loss} of choosing a price $p_i$ when facing pricing strategy $s_{-i}$ of the other firm is given by \[ l_i(p_i,s_{-i};c_i)=\sup_{c_{-i}\in [\underaccent{\bar} c,\bar c]} \left(\sup_{p'_i\ge 0} u_i(p'_i,s_{-i}(c_{-i}); c_i)-u_i(p_i,s_{-i}(c_{-i}); c_i)\right). \] The maximum loss describes how much more profit firm $i$ could have obtained if it had known the other firm's marginal cost $c_{-i}$, anticipating the other firm to follow the pricing strategy $s_{-i}$. Firm $i$'s {\it best compromise} given $c_i$ is a pricing strategy $s^*_i(c_i)$ that achieves the lowest maximum loss for a given strategy $s^*_{-i}$ of the other firm: \[ s^*_i(c_i)\in\argmin_{p_i\ge 0} l_i(p_i,s^*_{-i};c_i). \] A strategy profile $(s^*_1,s^*_2)$ is a {\it perfect compromise equilibrium} if each firm $i$ chooses a best compromise given its marginal cost $c_i$ when facing the strategy $s^*_{-i}$ of the other firm. \begin{proposition}\label{p:bertrand} There exists a unique perfect compromise equilibrium.
In this PCE, the pricing strategies are given by \begin{equation}\label{e-bertrand-eq} s_i^*( c_i)=\frac{1}{2}\left(a+ c_i-\sqrt{(a-\bar c)^2+(\bar c- c_i)^2}\right), \ \ i=1,2. \end{equation} The associated maximum losses are \begin{equation}\label{e-bertrand-com} l_i(s^*_i(c_i),s^*_{-i},c_i)=\frac{(a-\bar c)(\bar c-c_i)}{2}\le \frac{(a-\bar c)(\bar c-\underaccent{\bar} c)}{2}, \ \ i=1,2. \end{equation} The proof is in Appendix \ref{s:p2}. Let us discuss the strategic concerns underlying the PCE in this game. Each firm $i$, when deciding about the price $p_i>c_i$ and facing an unknown cost of the other firm, worries about two possible situations. It could be that the other firm chooses a weakly lower price $p_{-i}\le p_i$. Thus, firm $i$ could have obtained more profit by undercutting $p_{-i}$. The greatest such loss occurs when the other firm's price marginally undercuts $p_i$. Alternatively, it could be that the other firm chooses a higher price, $p_{-i}>p_i$. Thus, unless $p_i$ is the profit-maximizing price for the monopoly, firm $i$ is losing profit by charging too little. The greatest such loss occurs when the other firm's cost is the highest possible, $\bar c$. The firm thus chooses the best compromise $s_i^*( c_i)$ that balances these two losses, assuming that the other firm follows its equilibrium strategy. We find that the PCE price $s_i^*( c_i)$ is strictly increasing in $c_i$ and lies strictly above the marginal cost $c_i$ whenever $c_i<\bar c$. Moreover, $s_i^*(\bar c)=\bar c$. So, any sale with cost below $\bar c$ leads to a positive profit. The fact that the equilibrium price cannot lie above $\bar c$ is intuitive. It is common knowledge that the costs are at most $\bar c$. So if a firm charges a price above $\bar c$, the other firm would undercut it. Note also that the largest equilibrium price cannot lie below $\bar c$. This is because a firm with cost $\bar c$ will never charge a price below $\bar c$.
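These properties of \eqref{e-bertrand-eq} can be verified numerically (a sketch; the parameter values are illustrative and assumed to satisfy $0\le \underaccent{\bar} c\le\bar c\le a/2$):

```python
import math

# Sketch with illustrative parameters: s_star transcribes the Bertrand
# PCE price (e-bertrand-eq); it equals the cost at c = c_hi, exceeds the
# cost below c_hi, and is strictly increasing in the own cost.
a, c_lo, c_hi = 1.0, 0.0, 0.4      # assumed: 0 <= c_lo <= c_hi <= a/2
def s_star(c):
    return 0.5 * (a + c - math.sqrt((a - c_hi) ** 2 + (c_hi - c) ** 2))

assert abs(s_star(c_hi) - c_hi) < 1e-12          # s*(c_hi) = c_hi
grid = [c_lo + k * (c_hi - c_lo) / 100 for k in range(101)]
assert all(s_star(c) > c for c in grid[:-1])     # markup below c_hi
assert all(s_star(x) < s_star(y)                 # strictly increasing
           for x, y in zip(grid, grid[1:]))
assert s_star(c_lo) > 0                          # lowest price is positive
```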
Note that the lowest equilibrium price $s_i^*(\underaccent{\bar} c)$ is strictly positive, even if $\underaccent{\bar} c=0$. This is because when the price is very low, then the potential loss due to not undercutting the other firm is small, while the potential loss due to not setting a price much higher is large. This has an upward effect on prices. \begin{remark} It is generally intractable to find a PBE in this application under any reasonable prior, even in this simplest setting with linear demand and constant marginal costs. The PBE strategy profile for this simplest setting is implicitly defined by a differential equation with no closed form solution \citep[see][]{Spulber1995bertrand}. \end{remark} \begin{remark} As in Section \ref{s:cournot}, our equilibrium analysis can shed light on how the firms' behavior changes in response to increasing uncertainty. For comparative statics, let us consider as a benchmark marginal cost $ c_0=a/4$ (recall that we require $0\le c_i\le a/2$, so $ c_0=a/4$ is the midpoint). We normalize the constants $a$ and $b$ of the demand function $Q(p)=(a-p)/b$ so that the monopoly profit is equal to 1, that is, \[ \sup_{p\ge 0} (p- c_0)\frac{a-p}b=\frac{(a- c_0)^2}{4b}=1. \] Suppose that there is a small uncertainty. Specifically, for $0<\varepsilon<1$ let $ c_i\in[\underaccent{\bar} c,\bar c]$, $i=1,2$, where \[ \underaccent{\bar} c=\left(1-\frac{\varepsilon}2\right) c_0\quad\text{and}\quad\bar c=\left(1+\frac{\varepsilon}2\right) c_0. \] Denote by $s^\varepsilon=(s^\varepsilon_1,s^\varepsilon_2)$ the PCE strategy profile as given by Proposition \ref{p:bertrand}. We then obtain \[ \frac{\mathrm{d} s^\varepsilon_i(c_i)}{\mathrm{d} \varepsilon} =\frac{(a+c_i-2\bar c)c_0}{4\sqrt{(a-\bar c)^2+(\bar c-c_i)^2}}>0, \] because, using our assumptions on the parameters, \[ a+c_i-2\bar c\ge a-2\bar c=a-2\left(1+\frac{\varepsilon}2\right) c_0=\frac a 4(2-\varepsilon)>0.
\] So the firms optimally respond to the growing uncertainty about the other firm's cost by increasing their prices. They become less competitive. Next, consider the associated maximum losses as shown in \eqref{e-bertrand-com}. Then \[ l_i(s^\varepsilon_i(c_i),s^\varepsilon_{-i},c_i)\le \frac {3\varepsilon}{32}-\frac{\varepsilon^2}{64}, \ \ i=1,2. \] So the maximum losses are small. For example, if $\varepsilon=0.1$, then the maximum losses are bounded by $0.01$. So the firms lose no more than about 1\% of the maximum profit due to not knowing the cost of the other firm. \end{remark} \subsection{Public Good Provision}\label{s:public} Here we consider a problem of public good provision where the public good is funded by contributions of its beneficiaries. We compare three different mechanisms that regulate the beneficiaries' payments. For each of these mechanisms, we investigate how the beneficiaries find compromises about how much to contribute towards the public good provision. Note that in this setting the Vickrey-Clarke-Groves mechanism is not feasible, because it requires the public good to be externally subsidized. There are $n$ agents, each of whom has a private value $v_{i}\in [0,\bar v]$ for a public good. Agents know their own values of the good, but not those of the others. Each agent $i$ chooses how much to contribute for the public good provision. Let $x_{i}\in [0,\bar v]$ be agent $i$'s contribution. The agents make their choices simultaneously. A commonly known cost of providing the public good is $c>0$. To avoid considering multiple cases, we assume that this cost is not too high, specifically, \begin{equation}\label{E:A1} \frac{c}{n-1}\le \frac{\bar v}2. \end{equation} The payoffs are as follows. If the sum of the contributions does not cover the cost, so $\sum_{i=1}^{n}x_{i}< c$, then the public good is not provided and the agents' contributions are returned to them. In this case each agent $i$ obtains zero payoff.
Otherwise, if $\sum_{i=1}^{n}x_{i}\ge c$, then the public good is provided, and each agent obtains the value of the good net of the contribution. In addition, the agents may be refunded the excess contribution, $\sum_{i=1}^{n}x_{i}-c$. The payoff of each agent $i$ is \[ v_i-x_i+r_i(x), \] where $r_i(x)$ is a refund to agent $i$ that depends on the profile of contributions $x=(x_1,...,x_n)$. For all $x$ such that $\sum_{i=1}^n x_i\ge c$, the refunds $r_i(x)$ must satisfy: (a) $r_i(x)\ge 0$ for each $i$, so agents do not pay more than their contributions; (b) $\sum_{i=1}^n (x_i-r_i(x))\ge c$, so the net payments cover the cost of the public good; (c) $r=(r_1,...,r_n)$ is symmetric, so the agents are treated ex-ante equally. We compare three simple refund rules. \noindent (i) {\it No-refunds rule.} The excess contribution is not refunded to the agents, so \begin{equation}\label{t-simple} r_i(x)=0, \ \ i=1,...,n. \end{equation} (ii) {\it Equal-split rule.} The excess contribution is divided equally among the agents, so \begin{equation}\label{t-additive} r_i(x)=\frac 1 n\left(\sum\nolimits_{j=1}^n x_j-c\right), \ \ i=1,...,n. \end{equation} (iii) {\it Proportional rule.} The excess contribution is divided proportionally to the agents' individual contributions, so \begin{equation}\label{t-prop} r_i(x)=\left(1-\frac{c}{\sum_{j=1}^n x_j}\right)x_i, \ \ i=1,...,n. \end{equation} Let $s_i(v_i)$ be a strategy of agent $i$, so $x_i=s_i(v_i)$ specifies the contribution of agent $i$ whose private value is $v_i$. We restrict attention to strategies that are symmetric and undominated. Specifically, we assume that \begin{equation} \text{$s_i(v)=s_j(v)$ and $s_i(v)\le v$ for all $v\in[0,\bar v]$ and all $i,j\in\{1,...,n\}$.}\label{e-A2} \end{equation} The assumption that the strategies are symmetric is substantive, as we rule out potential asymmetric equilibria.
The assumption that the strategies are undominated is inconsequential for the results and is introduced for notational convenience. An agent $i$'s {\it maximum loss} of choosing contribution $x_i$ when the other agents choose a profile of contributions $s_{-i}(v_{-i})$ describes how much more payoff agent $i$ could have obtained if she had known the true values of everybody else, anticipating that they follow their strategies. To determine the maximum loss, observe that agent $i$ worries about two possible situations. It could be that the total contribution is marginally below $c$, so $x_i+\sum_{j\ne i} s_j(v_j)=c-\varepsilon$ for a small $\varepsilon>0$. The good is not provided, but had $i$ contributed $\varepsilon$ more it would have been provided. As $\varepsilon\to 0$, agent $i$'s loss is $v_i-x_i$. Alternatively, it could be that all other agents contribute enough to cover $c$, so $\sum_{j\ne i} s_j(v_j) \ge c$. Thus the agent could have contributed nothing and still received the good. In this case the loss is the amount of contribution net of the refund, $x_i-r_i(x_i,s_{-i}(v_{-i}))$. Agent $i$'s maximum loss is thus given by \[ l_i(x_i,s_{-i};v_i)=\sup_{v_{-i}\in[0,\bar v]^{n-1}}\max\left\{v_i-x_i,x_i-r_i(x_i,s_{-i}(v_{-i}))\right\}. \] Agent $i$'s {\it best compromise} given $v_i$ is a strategy $s^*_i(v_i)$ that achieves the lowest maximum loss for a given strategy profile $s_{-i}$ of the other agents: \[ s_i^*(v_i)\in \argmin_{x_i\in[0,v_i]} l_i(x_i,s_{-i};v_i). \] A strategy profile $s^*=(s_1^*,...,s_n^*)$ is a {\it perfect compromise equilibrium} if each agent $i$ chooses a best compromise given her value $v_i$ when facing the strategy profile $s^*_{-i}$ of the other agents. In this application we are interested in how the agents' equilibrium behavior and total efficiency (welfare) change in the PCE induced by different refund rules. We measure the efficiency of a strategy profile $s$ by the maximum welfare loss as compared to the complete information case.
Because $s_i(v)\le v$ by assumption \eqref{e-A2}, the welfare loss only emerges in the case of $\sum_i s_i(v_i)<c\le \sum_i v_i$ where the good is not provided when it is efficient to do so. Our inefficiency measure is denoted by $L(s)$ and is given by \begin{equation}\label{e:WLoss} \begin{split} &L(s)=\sup_{(v_1,...,v_n)\in[0,\bar v]^n} \sum\nolimits_{i=1}^n v_i-c \\ &\text{subject to $\sum\nolimits_{i=1}^n s_i(v_i)<c\le \sum\nolimits_{i=1}^n v_i$.} \end{split} \end{equation} We now characterize the PCE and the associated welfare losses for each of the three refund rules. \begin{proposition}\label{C:PG} For each of the three refund rules there is a unique PCE strategy profile $s^*=(s^*_1,...,s^*_n)$ that satisfies assumption \eqref{e-A2}. For each $i=1,...,n$ and each $v_i\in[0,\bar v]$, (i) if $r_i(x)$ is the no-refunds rule, then \[ s^*_i(v_i)=\frac{v_i}{2} \quad \text{and} \quad L(s^*)=c; \] (ii) if $r_i(x)$ is the equal-split rule, then \[ s^*_i(v_i)=\frac{n}{2n-1}v_i \quad \text{and} \quad L(s^*)=\frac{n-1}{n}c; \] (iii) if $r_i(x)$ is the proportional rule, then \[ s^*_i(v_i)=\frac{v_i}{2}-c+\frac{1}{2}\sqrt{v_i^2+4 c^2} \quad \text{and} \quad L(s^*)=\frac{n}{n+1}c. \] \end{proposition} The proof is in Appendix \ref{s:C:PG}. Note that \[ c>\frac{n}{n+1}c>\frac{n-1}{n}c. \] So, the equal-split rule is more efficient than the other two according to our efficiency measure. This raises a question of optimal design. Among all feasible refund rules, which one is the most efficient? Also note that, unlike the equal-split rule, the proportional rule leads to equilibrium behavior that is independent of the number of agents. So, it is robust to the agents' knowledge of how many of them there are. This raises another question. Suppose that the agents are ambiguous not only about the others' values, but also about how many agents there are. How does this change their best compromises, and which refund rule is the most efficient in this case?
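The closed forms in Proposition \ref{C:PG} can be sanity-checked numerically (a sketch; $n$, $c$, and $\bar v$ below are illustrative values satisfying \eqref{E:A1}): each PCE contribution is a valid undominated strategy, $0\le s^*(v)\le v$, and the welfare losses are ranked as stated.

```python
import math

# Sketch with illustrative parameters: the three PCE strategies from
# Proposition C:PG satisfy 0 <= s*(v) <= v, and the welfare-loss ranking
# is c > n/(n+1)*c > (n-1)/n*c (equal-split rule most efficient).
n, c, v_bar = 4, 1.0, 1.0
assert c / (n - 1) <= v_bar / 2         # assumption (E:A1)

def s_no_refund(v):    return v / 2
def s_equal_split(v):  return n * v / (2 * n - 1)
def s_proportional(v): return v / 2 - c + 0.5 * math.sqrt(v * v + 4 * c * c)

grid = [k * v_bar / 100 for k in range(101)]
for s in (s_no_refund, s_equal_split, s_proportional):
    assert all(-1e-12 <= s(v) <= v + 1e-12 for v in grid)

losses = {"no_refund": c, "proportional": n / (n + 1) * c,
          "equal_split": (n - 1) / n * c}
assert losses["no_refund"] > losses["proportional"] > losses["equal_split"]
```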
We leave these questions for future research. \subsection{Job Market Signaling} Here we investigate Spence's job market signaling \citep{Spence73} when the worker's productivity and cost of education are unknown to the firms. There is a single worker and two firms. The worker has productivity $\theta$ with $\theta\in[0,1]$. The worker publicly chooses a level of education $e$, either low ($e_L$) or high ($e_H$), to signal her productivity to the firms. The cost of low education is zero. The cost of high education is $c$ with $c\ge 0$. The firms observe the worker's education level $e$ and simultaneously offer wages $w_1$ and $w_2$. The worker chooses the better of the two wages. Her payoff is given by \[ v(w_1,w_{2},e;\theta,c)=\max\{w_1,w_2\}-\begin{cases} 0, & \text{if $e=e_L$},\\ c, & \text{if $e=e_H$}. \end{cases} \] Each firm $i$'s payoff is given by \[ u_i(w_i,w_{-i};\theta)=\begin{cases} \theta-w_i, & \text{if $w_i>w_{-i}$},\\ (\theta-w_i)/2, & \text{if $w_i=w_{-i}$},\\ 0, & \text{if $w_i<w_{-i}$}. \end{cases} \] The worker knows her productivity type $\theta$ and her cost of high education $c$. The firms know neither. They only know that the worker can have any productivity $\theta$ in $[0,1]$ and that her cost of high education $c$ lies between two linearly decreasing functions of $\theta$. Specifically, $c$ is between $1-b\theta$ and $1-b\theta+\delta$, where $b$ and $\delta$ are parameters that satisfy $0\le \delta\le b\le 1$. Formally, the firms know that $(\theta,c)$ belongs to the set $\Omega$ given by \begin{equation}\label{e:spence-types} \Omega=\left\{(\theta,c): \theta\in[0,1] \ \text{and} \ c\in[1-b\theta, 1-b\theta+\delta] \right\}. \end{equation} The worker's strategy $e^*(\theta,c)$ describes her choice of the education level for each pair $(\theta,c)\in\Omega$. Each firm $i$'s strategy $w^*_i(e)$ describes its wage offer conditional on each education level $e\in \{e_L,e_H\}$.
Consider how a firm makes inference from the observed level of education of the worker. This is formalized with the notion of {\it speculated states}. Formally, these are the firms' degenerate beliefs that put probability one on specific states. Speculated states are the pairs $(\theta,c)$ that a firm thinks are possible after observing the education level of the worker. The set of speculated states is denoted by $S_i(e)$. This set is {\it consistent} with the worker's equilibrium strategy $e^*$ if it includes all pairs $(\theta,c)$ under which the worker chooses $e\in\{e_L,e_H\}$, so $(\theta,c)\in S_i(e)$ if $e^*(\theta,c)=e$. For each education level $e$, firm $i$'s {\it maximum loss} of choosing wage $w_i$ when the other firm chooses the wage according to its strategy $w^*_{-i}$ is given by \[ l_i(w_i,w^*_{-i};e)=\sup_{(\theta,c)\in S_i(e)} \left(\sup_{w'_i\ge 0} u_i(w'_i,w^*_{-i}(e); \theta)-u_i(w_i,w^*_{-i}(e); \theta)\right). \] The maximum loss describes how much more profit firm $i$ could have obtained if it had known the true productivity and cost of education of the worker, anticipating that the other firm follows its strategy $w^*_{-i}$. Firm $i$'s {\it best compromise} given $e$ is a wage $w^*_i(e)$ that achieves the lowest maximum loss for a given strategy $w^*_{-i}$ of the other firm: \begin{equation}\label{e-spence-s1} w^*_i(e)\in\argmin_{w_i\ge 0} l_i(w_i,w^*_{-i};e). \end{equation} Observe that the worker has complete information. There is no need for a compromise. So, the worker simply chooses a best-response: \begin{equation}\label{e-spence-s2} e^*(\theta,c)\in\argmax_{e\in\{e_L,e_H\}} v(w^*_1(e),w^*_{2}(e),e;\theta,c). \end{equation} A profile $(e^*,w^*_1,w^*_2,S_1,S_2)$ of strategies and speculated states is a {\it perfect compromise equilibrium} (PCE) if two conditions hold. 
First, the strategies satisfy \eqref{e-spence-s1} and \eqref{e-spence-s2}, so each firm $i$ chooses a best compromise, and the worker chooses a best response to the strategies of the others. Second, the firms' sets of speculated states are consistent with the worker's strategy $e^*$. A PCE is {\it pooling} if the worker chooses the same level of education for all $(\theta,c)\in\Omega$. A PCE is {\it separating} if the set $\Omega$ can be partitioned into two subsets such that worker types belonging to the same subset choose the same level of education, but these levels differ between the two subsets. \begin{proposition}\label{p:spence} (i) There exists a pooling PCE in which the worker chooses low education, so \[ e^*(\theta,c)=e_L \ \ \text{for all $(\theta,c)\in\Omega$}, \] and the firms' wages are given by \[ w^*_i(e_H)=w^*_i(e_L)=\frac{1}2, \ \ i=1,2. \] After each observed education level $e$, each firm $i$'s set of speculated states $S_i(e)$ contains all states. \noindent (ii) If $\delta\ge 2 b^2-b$, then a separating PCE does not exist. \noindent (iii) If $\delta<2 b^2-b$, then there exists a separating PCE in which the worker chooses high education if and only if her cost $c$ is at most $ \frac1{2b}(b-\delta)$, so for all $(\theta,c)\in\Omega$ \[ e^*(\theta,c)=\begin{cases} e_H, &\text{if $c\le \frac1{2b}(b-\delta)$},\\ e_L, &\text{if $c>\frac1{2b}(b-\delta)$}, \end{cases} \] and the firms' wages are given by \begin{equation}\label{e-spence-w0} w^*_i(e_H)=\frac{1}2+\frac{b+\delta}{4b^2} \ \ \text{and} \ \ w^*_i(e_L)=\frac{\delta}{2b}+\frac{b+\delta}{4b^2}, \ \ i=1,2. 
\end{equation} After each observed education level $e$, each firm $i$'s set of speculated states $S_i(e)$ contains each state $(\theta,c)\in\Omega$ that satisfies \begin{align}\label{e-spence-pb} &\theta\in\left[0,\frac{b+\delta}{2b^2}+\frac\delta b\right] \ \ \text{if $e=e_L$}, \quad\text{and} \quad \theta\in\left[\frac{b+\delta}{2b^2},1\right] \ \ \text{if $e=e_H$.} \end{align} \end{proposition} The proof is in Appendix \ref{s:p3}. Let us discuss the strategic concerns underlying these PCE. Each firm $i$, when facing unknown productivity of the worker and deciding about the wage offer $w_i$, worries about two possible situations. It could be that the productivity is high, in which case the firm forgoes profit by not offering a wage marginally greater than the competitor's. The greatest such loss occurs when the productivity is the highest possible. Alternatively, it could be that the productivity is low, in which case the firm incurs a loss that it could have eliminated by offering a wage below the competitor's. The greatest such loss occurs when the productivity is the lowest possible. The firm thus offers the best compromise wage that balances these two losses, assuming that the other firm follows its equilibrium strategy. In equilibrium, both firms offer the same wage, so each of them has probability $1/2$ of hiring the worker. This is the best compromise between not hiring a productive worker and hiring an unproductive worker at the specified wage. An essential detail in the above considerations is that the greatest and smallest productivities are now endogenous and can depend on the level of education $e$ that the worker chooses. In the pooling equilibrium, $e=e_L$ does not provide any useful information, so all productivity types are possible. However, in the separating equilibrium, the firms believe that the productivity belongs to a different interval when observing a different level of education.
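The closed-form objects in Proposition \ref{p:spence} can be checked numerically. The following sketch (the parameter choice $b=1$, $\delta=1/4$ is ours, for illustration only) verifies the education threshold, the wages in \eqref{e-spence-w0}, and the interval endpoints in \eqref{e-spence-pb}:

```python
# Numerical check of the separating PCE in Proposition p:spence.
# The parameter choice b = 1, delta = 1/4 is ours, for illustration.
b, delta = 1.0, 0.25
assert delta < 2 * b**2 - b  # condition (iii): a separating PCE exists

# education threshold: the worker chooses e_H iff c <= c_bar
c_bar = (b - delta) / (2 * b)

# wages from (e-spence-w0)
w_H = 0.5 + (b + delta) / (4 * b**2)
w_L = delta / (2 * b) + (b + delta) / (4 * b**2)

# the wage premium for high education equals the threshold cost exactly
assert abs((w_H - w_L) - c_bar) < 1e-12

# speculated productivity intervals from (e-spence-pb)
theta_H_lo = (b + delta) / (2 * b**2)              # lower end after e_H
theta_L_hi = (b + delta) / (2 * b**2) + delta / b  # upper end after e_L
assert abs(theta_H_lo - 5 / 8) < 1e-12 and abs(theta_L_hi - 7 / 8) < 1e-12
```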
For example, if $b=1$ and $\delta=1/4$, then the firms believe that $\theta\in[0,7/8]$ if the education is low, and that $\theta\in[5/8,1]$ if the education is high. Observe that, among the workers with productivity $\theta\in[5/8,7/8]$, some choose low education, while others choose high education. This overlap is due to the richness of the state space. The same productivity type $\theta$ can have different costs of education $c$ that can fall below or above the threshold at which high education is profitable. Clearly, this result cannot emerge in the traditional setting where the workers are differentiated only by their productivity. The parameter $\delta$ captures the firms' uncertainty about the worker's cost of high education given her productivity type. As $\delta$ goes up, this range of costs increases. When $\delta$ is sufficiently large, education signaling is not very informative. A costly signal cannot be used to differentiate high and low productivity types, and the separating PCE does not exist. \subsection{Bilateral Trade with Common Value}\label{s:trade} We now examine bilateral trade with common value. In this example we show that trade can occur when traders follow a PCE. This is in stark contrast to the no-trade theorem under common values as predicted by PBE \citep{MilgromStokey82}. A seller wants to sell an indivisible good to a buyer. The value $v$ of the good is the same for each of them. If the good is traded at some price $p$, then the buyer obtains $v-p$ and the seller obtains $p-v$. If the good is not traded, then both traders obtain zero.\footnote{The same analysis applies if the seller obtains $p$ when the good is sold and $v$ when the good is not sold.} Neither trader knows $v$. Before the trade takes place, the traders privately consult independent experts to obtain some information about $v$. Each expert provides an interval of possible values, from the most pessimistic to the most optimistic assessment of the true value.
Specifically, the seller privately learns that $v\in[x_0,x_1]$ and the buyer privately learns that $v\in[y_0,y_1]$. The traders commonly know the lower and upper bounds of the value $v$. These bounds are normalized to be 0 and 1, so $v\in[0,1]$. In addition, the traders commonly know that the experts cannot be wrong, so \begin{equation}\label{BT-cons} v\in[x_0,x_1]\cap[y_0,y_1]. \end{equation} We do not impose constraints on how precise or imprecise the experts' information is. We allow $[x_0,x_1]$ and $[y_0,y_1]$ to be arbitrary intervals contained in $[0,1]$ that satisfy \eqref{BT-cons}. We consider a take-it-or-leave-it protocol in which the seller is the proposer. The protocol is as follows. First, the traders observe their private information $[x_0,x_1]$ and $[y_0,y_1]$. Then the seller asks a price $p\in [0,1]$. Finally, the buyer decides whether to accept or to reject the seller's asked price. Let us describe the traders' strategies. Let $p^*(x_0,x_1)$ be the seller's asked price given her information $[x_0,x_1]$. Let $\alpha^*(p,y_0,y_1)$ be the buyer's decision whether to accept or to reject the asked price $p$ given the buyer's private information $[y_0,y_1]$, where $\alpha^*(p,y_0,y_1)=1$ means to buy, and $\alpha^*(p,y_0,y_1)=0$ means not to buy. Next we describe how the buyer makes inference from the price asked by the seller. This is formalized with the concept of speculated values. These are values for $v$ that the buyer thinks are possible after he observes the price asked by the seller. Let $V_b(p,y_0,y_1)$ be the buyer's set of speculated values when the seller asks price $p$. Clearly, the buyer rules out the values outside of $[y_0,y_1]$, so $V_b(p,y_0,y_1)\subset[y_0,y_1]$. But some values in $[y_0,y_1]$ may be ruled out too, because $p=p^*(x_0,x_1)$ depends on $x_0$ and $x_1$, and the buyer knows that $v\in[x_0,x_1]\cap[y_0,y_1]$. 
The buyer's maximum loss from his choice $\alpha\in\{0,1\}$, given the asked price $p$ and his set of speculated values $V_b(p,y_0,y_1)$, is \[ l_b(\alpha;p,y_0,y_1)=\sup_{v\in V_b(p,y_0,y_1)}\big(\max\left\{v-p,0\right\}-\left(v-p\right)\alpha\big). \] It describes how much more the buyer could have obtained if he knew the true value $v$. The seller's maximum loss of asking price $p$, given the buyer's acceptance strategy $\alpha^*$, is \[ l_s(p;x_0,x_1)=\sup_{\substack{(v,y_0,y_1)\in[0,1]^3:\\ v\in[x_0,x_1]\cap[y_0,y_1]}}\left(\sup_{p'\in [0,1]} \left(p'-v\right)\alpha^*(p',y_0,y_1)-\left(p-v\right)\alpha^*(p,y_0,y_1) \right). \] It describes how much more the seller could have obtained if she knew both $v$ and the buyer's private information $[y_0,y_1]$, anticipating that the buyer would follow his strategy $\alpha^*$. Each trader's {\it best compromise} is a choice that achieves the lowest maximum loss for a given strategy of the other trader. A strategy profile $(p^*,\alpha^*)$ is a {\it perfect compromise equilibrium} (PCE) if each trader chooses a best compromise given the strategy of the other trader. \begin{proposition}\label{p:trade-b} A perfect compromise equilibrium is given as follows. The seller asks \begin{equation}\label{e-p4-1} p^*(x_0,x_1)=\max\left\{\frac{x_0+x_1}{2}+ \frac{1-x_1}{4},\frac 1 2\right\}. \end{equation} If the seller asks $p\ge \tfrac 1 2$, then the buyer speculates that $v\in [\max\{y_0,2p-1\},y_1]$ and accepts this price if and only if \[ p\le \frac{ y_0+y_1}{2}. \] If the seller asks $p<\tfrac 1 2$, then the buyer speculates that $v\in \{y_0\}$ and accepts this price if and only if $p\le y_0$. \end{proposition} The formal proof is in Appendix \ref{s:p4}. Let us discuss the strategic concerns underlying this PCE. Consider first how the buyer makes his choice when the seller asks $p$. To build the intuition, let us first assume that the buyer makes no inference from the value of the asked price. 
So the buyer speculates that $v\in[y_0,y_1]$ and compares his maximal losses when buying and not buying the good. The maximal loss of buying is attained when $v=y_0$, giving the loss of $p-y_0$. The maximal loss of not buying is attained when $v=y_1$, giving the loss of $y_1-p$. The best compromise between these two situations is for the buyer to buy if and only if $p\le (y_0+y_1)/2$. Now consider the inference about $v$ that the buyer makes from the asked price $p$. When $p\ge 1/2$, the buyer concludes that $v$ cannot be below $2p-1$. This weakly increases the lower bound on $v$ to $\max\{y_0,2p-1\}$. When $2p-1\le y_0$, the inference from observing $p$ is not useful. So the buyer behaves as described above. When $2p-1>y_0$, the maximal loss from buying is larger than that from not buying. So the buyer does not buy. Notice that $2p-1>y_0$ implies $p>(y_0+y_1)/2$, and hence the rule described above continues to apply. In summary, the buyer behaves as if he ignores how the seller chooses the price when $p\ge 1/2$. Alternatively, suppose that $p<1/2$. This cannot happen in equilibrium, so the buyer can have any beliefs. Assume that the buyer speculates that $x_0=x_1=y_0$. So, the buyer speculates that $v=y_0$. Clearly, it is then best to buy the good if and only if $p\le y_0$. Consider now how the seller chooses the price when anticipating the buyer's equilibrium behavior. Observe that $p$ should be at least $1/2$. This is because if $p<1/2$, then the buyer accepts $p$ if and only if $p\le y_0$. So the good will be purchased at $p<1/2$ only if its value is at least the asked price. Thus, choosing $p<1/2$ is dominated by not selling the good at all, which is achieved by choosing $p=1$. To understand how a price $p\ge 1/2$ should be chosen, consider briefly an alternative setting where it is common knowledge that $v\in [x_0,x_1]=[y_0,y_1]$. So the buyer has the same information as the seller.
Then the seller will ask $p=(x_0+x_1)/2$, as this is the highest price that the buyer is willing to accept, and any higher price leads to no sale with the same maximal loss. Now return to our model. Assume that the buyer does not buy at price $p$. The seller's maximal loss is attained when the buyer would have bought at a marginally lower price, and moreover when the value of the good is the lowest, $v=x_0$. So the maximal loss equals $p-x_0$. Now assume that the buyer buys at price $p$. The seller's maximal loss is attained when the buyer is extremely optimistic and believes that $v\in [y_0,y_1]=[x_1,1]$. This buyer will accept any price up to $(x_1+1)/2$. So the maximal loss equals $(x_1+1)/2-p$. The seller chooses a best compromise price that balances these two losses, and hence sets $p=\frac 1 2(x_0+x_1)+ \frac 1 4 (1-x_1)$. Note that the price asked by the seller lies above the midpoint of the seller's interval $[x_0,x_1]$, due to the possibility of the extremely optimistic buyer. Proposition \ref{p:trade-b} stands in contrast to the no-trade theorem under common values as predicted by PBE \citep{MilgromStokey82}. We observe that trade occurs in our PCE whenever the median assessment $\frac 1 2 (y_0+y_1)$ of the buyer is at least the price $p^*(x_0,x_1)$. The equilibrium price can be seen as an exaggeration of the seller's median assessment, because $p^*(x_0,x_1)>\frac 1 2 (x_0+x_1)$ unless $x_1=1$. Trade is possible because the traders cannot rule out the possibility of two opposing situations: winning and losing from trade. They do not want to miss a winning opportunity, but they also do not want to lose from trade. They compromise by choosing their decision thresholds so that they do not lose too much either way. We hasten to point out that the PCE presented in Proposition \ref{p:trade-b} is not unique. For example, there is a no-trade PCE, where the seller always asks $p=1$, and the buyer agrees to buy the good at price $p$ if and only if $p<y_0$.
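The loss-balancing logic above can be sketched numerically. In the following, the grid search and the sample intervals are our illustrative choices; the point is only that balancing the two losses reproduces the closed form \eqref{e-p4-1} and the buyer's acceptance rule of Proposition \ref{p:trade-b}:

```python
# Sketch of the loss-balancing logic behind Proposition p:trade-b.
# The grid search and the sample intervals are our illustrative choices.

def buyer_accepts(p, y0, y1):
    # speculated values after seeing p >= 1/2: v in [max(y0, 2p-1), y1]
    lo = max(y0, 2 * p - 1) if p >= 0.5 else y0
    loss_buy = p - lo          # worst case for buying: v at the bottom
    loss_not = max(y1 - p, 0)  # worst case for not buying: v at the top
    return loss_buy <= loss_not

def seller_price(x0, x1, grid=20000):
    # balance the no-sale loss p - x0 against the sale loss (x1+1)/2 - p
    best_p, best_l = 0.5, float("inf")
    for k in range(grid + 1):
        p = 0.5 + 0.5 * k / grid  # prices below 1/2 are dominated
        l = max(p - x0, (x1 + 1) / 2 - p)
        if l < best_l:
            best_p, best_l = p, l
    return best_p

# the grid minimizer matches the closed form (e-p4-1)
for (x0, x1) in [(0.2, 0.6), (0.0, 1.0), (0.7, 0.9)]:
    closed_form = max((x0 + x1) / 2 + (1 - x1) / 4, 0.5)
    assert abs(seller_price(x0, x1) - closed_form) < 1e-4
```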
The no-trade equilibrium relies on a specific out-of-equilibrium belief of the buyer that $v=x_0=x_1=y_0$ whenever $p$ is different from 1. So, if the seller deviates to some price $p<1$, either the buyer rejects it, or the seller makes a loss. \subsection{Forecasting}\label{s:forec} We conclude the list of our examples with a forecasting problem. Here we consider a single agent who has to forecast a random variable under multiple priors. This forecast is based on a noisy signal with a known distribution. In this example we illustrate how noise influences learning when the agent makes best compromise choices. In Appendix B we consider an alternative setting, where the agent knows the distribution of the random variable but she is ambiguous about the noisy signal. We also deal with the case where the agent is ambiguous about both aspects (Remark \ref{R:F}). Consider an agent who has to forecast a random variable $\theta$ that belongs to $[0,1]$ and has a distribution $F$ with a density $f$. The agent's payoff is the quadratic loss given by \[ u(a,\theta)=-(a-\theta)^2, \] where $a\in[0,1]$ denotes a forecast. The agent can condition her forecast on a noisy signal $z$ about $\theta$. The signal-generating process is given by a conditional probability distribution $G_\varepsilon(z|\theta)$ with a parameter $\varepsilon\in[0,1]$ specified as follows. Signal $z$ reveals the true value $\theta$ with probability $1-\varepsilon$ and is drawn uniformly from $[0,1]$ with probability $\varepsilon$, so \begin{equation}\label{e-g-def} G_\varepsilon(z|\theta)=\begin{cases} \varepsilon z, &\text{if $z<\theta$,}\\ 1-\varepsilon+\varepsilon z, &\text{if $z\ge \theta$.} \end{cases} \end{equation} The agent is ambiguous about the distribution $F$ of $\theta$. She knows that $F$ has mean $\theta_0$ and admits a density $f$ such that $\delta\le f(\theta)\le 1/\delta$ for some $\delta\in(0,1)$. This assumption on the density excludes holes in the support and point masses.
The parameter $\delta$ can be interpreted as a lower bound on the degree of dispersion of $\theta$. The set of such distributions is \[ \mathcal F_{\delta}=\left\{F\in\Delta([0,1]):\mathbb{E}_F[\theta]=\theta_0 \ \text{and} \ \delta\le f(\theta)\le 1/\delta \ \text{for all $\theta\in[0,1]$}\right\}. \] For each distribution $F\in \mathcal F_{\delta}$ the agent forms a posterior belief about $\theta$ conditional on the signal $z$, leading to a set of beliefs given $z$. Let $\mathbb{E}_{F,G_{\varepsilon}}[\cdot|z]$ denote the expectation conditional on the signal $z$ when the agent speculates that $\theta$ is distributed according to $F$. The {\it maximum loss} of a forecast $a\in[0,1]$ given a signal $z\in[0,1]$ is \[ l(a;z)=\sup_{F\in\mathcal F_{\delta}} \left(\sup_{a'\in[0,1]} \mathbb{E}_{F,G_{\varepsilon}}[-(a'-\theta)^2|z]-\mathbb{E}_{F,G_{\varepsilon}}[-(a-\theta)^2|z]\right). \] It describes how much more the agent could have obtained if she knew the distribution $F$. A {\it best compromise} is a forecast $a^*(z)$ that achieves the smallest maximum loss, \[ a^*(z)\in\argmin_{a\in[0,1]}l(a;z). \] \begin{proposition}\label{p:for1} The agent's best compromise is \[ a^*(z)=(1-\lambda)z+\lambda\theta_0, \] where \[ \lambda=\frac{\varepsilon}{2}\left(\frac{\delta}{1-\varepsilon(1-\delta)}+\frac{1}{\delta+\varepsilon(1-\delta)}\right). \] \end{proposition} The proof is in Appendix \ref{s:P-F}. Let us present some intuition behind Proposition \ref{p:for1}. Due to the quadratic penalty for making inaccurate forecasts, the loss of a forecast equals its squared distance from the conditional mean of $\theta$ given the signal. The forecaster is worried about two possible situations, namely, when this conditional mean is high and when it is low. Consequently, the best compromise involves a forecast at the midpoint of these two extreme conditional means. Solving for this midpoint yields the formulae given in the statement of the proposition.
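As a quick numerical illustration of the closed form for $\lambda$ (the sample parameter values below are ours), the following sketch evaluates the best compromise forecast in the limiting regimes discussed below:

```python
# Numerical illustration of Proposition p:for1; parameter values are ours.
def lam(eps, delta):
    return (eps / 2) * (delta / (1 - eps * (1 - delta))
                        + 1 / (delta + eps * (1 - delta)))

def forecast(z, theta0, eps, delta):
    l = lam(eps, delta)
    return (1 - l) * z + l * theta0

# nearly noiseless signal: the forecast tracks the signal z
assert abs(forecast(0.3, 0.7, 1e-6, 0.2) - 0.3) < 1e-4
# pure-noise signal: the forecast collapses to the ex-ante mean theta0
assert abs(forecast(0.3, 0.7, 1 - 1e-9, 0.2) - 0.7) < 1e-4
# unrestricted dispersion (delta near 0): the forecast nears the midpoint
assert abs(forecast(0.3, 0.7, 0.5, 1e-9) - 0.5) < 1e-4
```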
In particular, the best compromise forecast lies between the ex-ante mean $\theta_0$ and the signal $z$. Note that the agent's best compromise forecast depends on the noise level $\varepsilon$ of her signal and on the degree of dispersion $\delta$ of the variable of interest. We show how each of these two parameters independently influences the best compromise forecast. Fix the degree of dispersion $\delta$. When the agent's signal is not very noisy, her forecast is close to the signal. This is because $a^*$ is continuous in $\varepsilon$ and $\lim_{\varepsilon\to 0} a^*(z)=z$. When the signal is very noisy, her prediction is close to the ex-ante mean, as $\lim_{\varepsilon\to1} a^*(z)=\theta_0$. Now we fix the noise level $\varepsilon$ and vary the bound $\delta$ on the degree of dispersion of $\theta$. As we relax the constraints on $F$ imposed by $\delta$, the forecast approaches the midpoint between $\theta_0$ and $z$. Formally, $\lim_{\delta\to 0}a^*(z)=(\theta_0+z)/2$. This is because the best compromise balances two extreme situations. It could be that $F$ has very high dispersion, thus making the signal extremely valuable. On the other hand, it could be that $F$ has very low dispersion, in which case the signal has very little value. The forecast seeks a best compromise between these two situations and selects the midpoint. Note that the above analysis and discussion reveal a discontinuity in the forecast $a^*$ at $\varepsilon=\delta=0$. \section{Conclusion}\label{s:concl} We introduce a formal methodology to better understand how players deal with uncertainty in dynamic strategic contexts. We are particularly interested in modeling players who have an intuitive understanding of uncertainty that can be expressed in terms of bounds. The general setting looks at players who have ambiguous preferences that are modeled as multiple priors. Learning occurs by updating prior by prior using Bayes' rule whenever possible.
Decisions are made under ambiguity by finding best compromises. Our objective is to present a solution concept that is as close as possible to perfect Bayesian equilibrium. The idea is to facilitate the understanding and acceptance of PCE and simplify the interpretation of new insights. This design objective also allows us to build on the discipline underlying the concept of a PBE. We identify at least six reasons that motivated us to create this new solution concept, each of them arising in contexts where PBE is not adequate. These reasons are robustness, ambiguity, non-probabilistic reasoning, parsimony, tractability, and accessibility. We explain each of these in more detail. {\it Robustness.} The PCE can be used to investigate the robustness of a PBE to the priors of the players in a context where each of the players seeks a strategy that also performs well for very similar priors. Similarly, it can be used to analyze how the play changes for a given PBE when players only have an approximate understanding of some game elements. {\it Ambiguity.} Ambiguous preferences have become popular. Our concept allows us to include players with such preferences. The formalism we introduce is not limited to the use of best compromises as the solution concept. We could have also inserted any alternative concept for decision making under ambiguity. The most prominent alternative is maximin utility preferences, which lead to a pessimistic mindset. We prefer the flavor of finding compromises. Compromises seem necessary in a globalizing world where decisions are made in front of growing audiences and when there is less willingness to base decisions on specific distributional assumptions. {\it Non-probabilistic reasoning.} Uncertainty per se seems to mean that details are hard to describe. And yet traditional models focus on two types of workers, high and low, or capture the uncertainty by a small number of parameters.
Uncertainty seems to preclude players from agreeing on the likelihoods of events, and yet this is assumed in PBE. We introduce PCE to open the door to understanding more realistic uncertainty. {\it Parsimony.} The traditional PBE framework reveals a different solution for each prior. Such flexibility can be useful to fit data. But flexibility in terms of a multitude of different answers gives little guidance to those who need to make choices. One easily loses the big picture if there are many details that determine what happens. To achieve clear and transparent results, one often gives up realism and adopts a simplistic model of uncertainty with only a few types for each player. In contrast, the PCE concept under genuine ambiguity is by design very parsimonious. Making best compromises across many different situations allows one to abstract from many details. {\it Tractability.} The usefulness of our solution concept is demonstrated in relevant economic examples where uncertainty is rich. This richness limits a tractable analysis of PBE. PCE yields tractable results with simple proofs as players focus on extreme situations, allowing them to ignore intermediate constellations. {\it Accessibility.} The PCE concept under genuine ambiguity is undemanding and easy to teach. Uncertainty can be described with bounds. There is no need for probabilities, and Bayes' rule can be put back on the shelf. The common acceptance of priors is dwindling. The literature on decision making and game playing under uncertainty has now developed alternative concepts. We hope to add to this literature. Numerous paths for future research open up, both in the search for new insights and in a clearer exposition of our existing understanding of economic and strategic principles. \section*{Appendix A. Proofs.} \renewcommand{\thesubsection}{A.\arabic{subsection}} \subsection{Proof of Theorem \ref{p:exist}}\label{s:pt} Consider a game $\Gamma=(N,\mathcal G,\Omega,(\Pi_1,...,\Pi_n),(u_1,...,u_n))$.
Let $\Phi$ be the set of information sets excluding the initial node $\phi_0$, so $\Phi=\bigcup\nolimits_{i\in N} \Phi_i$. Recall that $A_\phi$ is the set of pure actions of the player who moves at information set $\phi\in\Phi$. A strategy profile $s$ associates with each information set $\phi$ a mixed action $s_\phi\in\mathscr A_{\phi}=\Delta(A_\phi)$ at $\phi$. We now define an $\varepsilon$-perturbed game. Let $\varepsilon$ be a small enough positive number. Let $\Delta_\varepsilon(A(\phi))$ be the set of mixed actions at information set $\phi$ such that each pure action in $A(\phi)$ is played with probability at least $\varepsilon$. Let $\mathcal S_\varepsilon$ be the set of strategy profiles such that $s_\phi\in\Delta_\varepsilon(A(\phi))$ for each $\phi\in\Phi$. So the strategies in $\mathcal S_\varepsilon$ are completely mixed. An {\it $\varepsilon$-perturbed game $\Gamma_\varepsilon$} is the original game $\Gamma$ where the players' strategies are confined to $\mathcal S_\varepsilon$. Consider a strategy profile $s\in\mathcal S_\varepsilon$. Because $s$ is fully mixed, the belief system that is consistent with $s$ is uniquely defined by Bayes' rule. Denote this belief system by $\beta(s)$, and let $\beta_{\phi}(\pi;s)$ be the posterior probability distribution over the decision nodes in the information set $\phi$ derived from a prior $\pi$. Let \[ B_{\phi_i}(s)=\left\{\beta_{\phi_i}(\pi_i;s):\pi_i\in\Pi_{i}\right\} \] be the set of beliefs at each $\phi_i\in\Phi_i$ for each player $i\in N$. Let $U_{\phi_i}(s)$ be the negative of player $i$'s maximum loss at $\phi_i\in\Phi_i$ when player $i$ follows her strategy $s_{\phi_i}$, so \begin{align} U_{\phi_i}(s)&=-l(s_{\phi_i}|s,\beta(s),\phi_i)\notag\\ &=\inf_{b_i\in B_{\phi_i}(s)}\left(\bar u_i(s_{\phi_i}|s,\phi_i,b_i)- \sup_{a_i\in A(\phi_i)} \bar u_i(a_i|s,\phi_i,b_i)\right).\label{e-U-phi} \end{align} Two observations are in order.
First, $U_{\phi_i}(s)=U_{\phi_i}(s_{\phi_i},s_{-\phi_i})$ is continuous in $s_{\phi_i}$. This is because $\bar u$ is continuous, and the set $B_{\phi_i}(s)$ of beliefs at $\phi_i$ is independent of $s_{\phi_i}$ (it only depends on the choices in the information sets preceding $\phi_i$). Second, $U_{\phi_i}(s_{\phi_i},s_{-\phi_i})$ is also continuous in $s_{-\phi_i}$ when $s\in\mathcal S_\varepsilon$, that is, when the strategies are fully mixed. This is because $B_{\phi_i}(s)$ is a continuous correspondence w.r.t.~$s\in\mathcal S_\varepsilon$, as it is derived by Bayes' rule from the set of priors pointwise, and Bayes' rule is a well-defined and continuous operator for $s\in\mathcal S_\varepsilon$. In addition, both $B_{\phi_i}(s)$ and $A(\phi_i)$ are compact. The continuity of $U_{\phi_i}(s_{\phi_i},s_{-\phi_i})$ in $s_{-\phi_i}$ then follows from the Maximum Theorem \citep{Berge}. We now construct an augmented game $(\Phi,\mathcal G,\Omega,\pi_0,U)$ as follows. Let each information set $\phi\in\Phi$ be associated with a different player, so the set of players is the set of information sets $\Phi$. The game tree $\mathcal G$ and the set of states $\Omega$ remain unchanged. Let $\pi_0$ be a common prior over the states, and assume that $\pi_0$ has full support over $\Omega$. Nature moves first by choosing a state $\omega\in\Omega$ according to the prior $\pi_0$. Each player $\phi\in\Phi$ moves only once, at her information set $\phi$, by choosing a mixed action from the set $\Delta_\varepsilon(A(\phi))$. The interim payoff of each player $\phi\in\Phi$ at the information set $\phi$ is given by $U_\phi(s)$. Let $U=(U_\phi)_{\phi\in\Phi}$. The augmented game $(\Phi,\mathcal G,\Omega,\pi_0,U)$ can be seen as a game of incomplete information with a nonstandard specification of the players' payoffs.
While in a standard game the payoffs are specified ex-post at each terminal node, in this augmented game the payoff $U_\phi$ of each player $\phi\in\Phi$ is specified in the interim, at the information set where the player makes a move. Because each player moves only once, the specification of the interim payoffs is sufficient to apply the concept of PBE or sequential equilibrium to the augmented game. Another nonstandard feature of the augmented game is that each player's interim payoff $U_\phi(s)$ depends on the set of beliefs $B_\phi(s)$ at $\phi$, but it is independent of the state $\omega$ itself. So, the prior $\pi_0$ does not affect the best-response actions by the players; it only affects the likelihood of reaching different information sets in the game tree. Let $(s'_{\phi},s_{-\phi})\in\mathcal S_\varepsilon$ denote the strategy profile where $s'_{\phi}$ is played by player $\phi$ and $s_{-\phi}$ is the profile of strategies of all other players. Observe that maximizing $U_{\phi}(s'_\phi,s_{-\phi})$ with respect to player $\phi$'s own decision $s'_\phi\in \Delta_\varepsilon(A(\phi))$ is the same as minimizing the maximum loss at $\phi$ in the perturbed game $\Gamma_\varepsilon$. Consequently, if $\bar s$ is a strategy profile in a sequential equilibrium of the augmented game, then $(\bar s,\beta(\bar s))$ is a PCE of $\Gamma_\varepsilon$. The existence of a PCE follows from the existence of a sequential equilibrium for finite games. We refer the reader to \cite{Chakrabarti2016} for the backward-induction proof of existence of sequential equilibrium that uses interim payoffs at information sets to determine players' best-response correspondences. Thus we have shown the existence of a PCE in every perturbed game $\Gamma_\varepsilon$. It remains to show the existence of a PCE in the original, unperturbed game $\Gamma$. Consider a sequence $(\varepsilon_k)_{k=1}^\infty$ such that $\lim_{k\to\infty} \varepsilon_k=0$.
Let $(s^k,\beta^k)$ be a PCE for the perturbed game $\Gamma_{\varepsilon_k}$. By the Bolzano-Weierstrass theorem there exists a subsequence $(k_t)_{t=1}^\infty$ such that $(s^{k_t},\beta^{k_t})$ converges to some $(s^*,\beta^*)$ as $t\to\infty$. Observe that the belief system $\beta^*$ is consistent with $s^*$. This is because for each player $i$, each information set $\phi_i\in\Phi_i$, and each prior $\pi_i\in\Pi_i$, either $\beta^*_{\phi_i}(\pi_i)$ is derived by Bayes' rule, which is continuous as $(s^{k_t},\beta^{k_t})$ approaches $(s^*,\beta^*)$, or Bayes' rule is undefined in the limit, in which case $\beta^*_{\phi_i}(\pi_i)$ is also consistent by definition. Next, for all $\varepsilon>0$, all $t$ such that $\varepsilon\ge \varepsilon_{k_t}$, and all $s'_\phi\in \Delta_\varepsilon(A_{\phi})$ we have \begin{align*} 0\le &U_{\phi}(s^{k_t}_{\phi},s^{k_t}_{-\phi})-U_{\phi}(s'_\phi,s^{k_t}_{-\phi})=-l(s^{k_t}_{\phi}|s^{k_t},\beta^{k_t},\phi)+l(s'_\phi|s^{k_t},\beta^{k_t},\phi)\xrightarrow{t\rightarrow\infty} \\ &-l(s^*_{\phi}|s^*,\beta^*,\phi)+ l(s'_\phi|s^*,\beta^*,\phi)=U_{\phi}(s^*_{\phi},s^*_{-\phi})-U_{\phi}(s'_\phi,s^*_{-\phi}), \end{align*} where the inequality is by $s^{k_t}_{\phi}$ being a best response in the augmented game, the first equality is by \eqref{e-U-phi}, the limit is by the continuity of $l(s_{\phi}|s,\beta,\phi)$ in $s$ and $\beta$, and the second equality is because the set $B_{\phi}(s^{k_t})$ of beliefs at $\phi$ is independent of the mixed action $s^{k_t}_\phi$ at $\phi$. It follows that $s^*_{\phi}$ is a best response to $s^*_{-\phi}$. So $s^*$ is a best compromise strategy profile in the unperturbed game $\Gamma$. We thus have shown that $(s^*,\beta^*)$ is a PCE of $\Gamma$. \qed \subsection{Proof of Proposition \ref{p:lemon}}\label{s:lemon-proof} Each of the seller's two information sets (one for each type, low and high) contains a single decision node.
Hence the seller's beliefs at these decision nodes are trivial, and the seller's best compromises are simply her best responses. In the high information set ($\theta=\theta_H$), choosing $\sigma^*_S(\theta_H)=1$ is the strictly dominant strategy. We now consider two possibilities for the seller's choice in the low information set: $\sigma^*_S(\theta_L)>0$ and $\sigma^*_S(\theta_L)=0$. The buyer has a single information set $\phi_B$ and forms a set of beliefs $B_{\phi_B}$. Each belief $b\in B_{\phi_B}$ is given by $b=\beta_{\phi_B}(\pi_k)$, $k=0,1,...,K$, and denotes the probability of being in the decision node where the state is high, $\theta=\theta_H$. First, suppose that $\sigma^*_S(\theta_L)>0$. Then, for each prior $\pi_k\in\Pi_B$, the buyer's belief $b\in B_{\phi_B}$ is consistent with $\sigma^*_S$ if it is given by Bayes' rule \eqref{e:Bayes}. In particular, $\beta_{\phi_B}^*(0)=0$ and $\beta_{\phi_B}^*(1)=1$. From \eqref{e:loss-lemon} it is evident that the maximum loss for the buyer is determined at the extreme beliefs, 0 and 1. Substituting these into \eqref{e:loss-lemon} yields \[ l_B(\sigma_B|\phi_B,\sigma_S^*,\beta^*_{\phi_B})=\max\big\{(p-c_B)(1-\sigma_B),c_B\sigma_B\big\}, \] which is minimized by $\sigma_B^*=(p-c_B)/p$. However, using assumption \eqref{e-assum-lemon}, we obtain \[ \frac{p-c_B}{p}>\frac{p}{p+c_S}. \] Consequently, by \eqref{e-BR-S-lemon}, the unique best response of the low type seller to the buyer's strategy $\sigma_B^*=(p-c_B)/p$ is $\sigma^*_S(\theta_L)=0$, which contradicts the initial assumption that $\sigma^*_S(\theta_L)>0$. We thus conclude that there is no PCE where $\sigma^*_S(\theta_L)>0$. Alternatively, suppose that $\sigma^*_S(\theta_L)=0$. Then, for each prior $\pi_k\in\Pi_B$ with $\pi_k>0$, Bayes' rule \eqref{e:Bayes} yields the belief $b=1$. However, for the prior $\pi_0=0$, Bayes' rule does not apply, so every belief $b_0\in [0,1]$ is consistent with $\sigma^*_S$. We thus obtain $B_{\phi_B}=\{b_0,1\}$ for some $b_0\in[0,1]$.
Substituting these beliefs into \eqref{e:loss-lemon} yields \[ l_B(\sigma_B|\phi_B,\sigma_S^*,\beta^*_{\phi_B})=\max\big\{((1-b_0)p-c_B)(1-\sigma_B),c_B\sigma_B\big\}, \] which is minimized by \[ \sigma^*_B=\begin{cases} 0, & \text{if $b_0\in[\frac{p-c_B}{p},1]$,}\\ \frac{(1-b_0)p-c_B}{(1-b_0)p}, & \text{if $b_0\in[0,\frac{p-c_B}{p})$}. \end{cases} \] By \eqref{e-BR-S-lemon}, if $\sigma^*_B<p/(p+c_S)$ (in particular, if $\sigma^*_B=0$), then the unique best response of the low type seller is $\sigma^*_S(\theta_L)=1$, which contradicts the initial assumption that $\sigma^*_S(\theta_L)=0$. However, for each $b_0$ that satisfies \begin{equation}\label{e:cond:sigma} \sigma^*_B=\frac{(1-b_0)p-c_B}{(1-b_0)p}\in\left[\frac{p}{p+c_S},1\right], \end{equation} the strategy $\sigma^*_S(\theta_L)=0$ is a best response for the seller. Thus, $(\sigma^*_S,\sigma^*_B)$ with $b_0$ that satisfies \eqref{e:cond:sigma} is a PCE pair of strategies. Finally, observe that \eqref{e:cond:sigma} holds if and only if $(1-b_0)p\ge c_B(p+c_S)/c_S$, that is, if and only if $b_0\le 1-c_B/c_S-c_B/p$. Hence the interval of $b_0$ that satisfies \eqref{e:cond:sigma} is given by $\left[0,1-c_B/c_S-c_B/p\right]$. \qed \subsection{Proof of Proposition \ref{p:cournot}}\label{s:p1} To prove the existence of a unique PCE, we find a unique profile of best-compromise strategies and a unique profile of beliefs that satisfy Definition \ref{def:consistency}. First, we find the beliefs. The firms have genuine ambiguity, so the set of priors $\Pi_i$ of firm $i$ is equal to the set of degenerate beliefs over $\mathcal P$. By Definition \ref{def:consistency} and the consistency requirement in PCE, the set $B_i(\phi_i)$ of beliefs of firm $i$ at its unique information set $\phi_i$ must be equal to the set of priors, so $B_i(\phi_i)=\Pi_i$. Next, we find each firm's equilibrium quantity. For derivations, we assume that the quantities and the price are always nonnegative, and then we verify that this is indeed the case in equilibrium.
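As a quick numerical check of condition \eqref{e:cond:sigma} from the lemons proof above (a sketch with illustrative parameter values chosen by us, not taken from the paper: at the right endpoint of the $b_0$ interval, the buyer's acceptance probability should hit $p/(p+c_S)$ exactly):

```python
# Numerical check of the belief interval in the lemons proof.
# Parameters are illustrative; they satisfy c_B/c_S + c_B/p < 1, which is
# what assumption (e-assum-lemon) amounts to.
p, c_B, c_S = 1.0, 0.2, 0.5

b0_max = 1 - c_B / c_S - c_B / p                          # claimed endpoint of the b_0 interval
sigma_B = ((1 - b0_max) * p - c_B) / ((1 - b0_max) * p)   # buyer's acceptance probability

# At b0 = b0_max, condition (e:cond:sigma) should bind: sigma_B = p/(p + c_S).
assert abs(sigma_B - p / (p + c_S)) < 1e-12
assert 0 < b0_max < 1
```

Any $b_0$ below this endpoint leaves the seller's strategy $\sigma^*_S(\theta_L)=0$ a best response, consistent with the family of PCE described in the proof.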
Let $x^*_i(q_{-i}, P)$ be a best response strategy of firm $i$ given the knowledge of $q_{-i}$ and the inverse demand function $ P$. The loss of firm $i$ from choosing quantity $q_i$, given $q_{-i}$ and $P$, is denoted by $\Delta u_i(q_i,q_{-i}; P)$ and given by \[ \Delta u_i(q_i,q_{-i}; P)= P(x_i^*(q_{-i}, P)+q_{-i})x_i^*(q_{-i}, P)- P(q_i+q_{-i})q_i. \] By \eqref{e:demand}, the marginal revenue of firm $i$ satisfies \[ \underaccent{\bar} P(q_i+q_{-i})+\underaccent{\bar} P'(q_i+q_{-i})q_i\le P(q_i+q_{-i})+ P'(q_i+q_{-i})q_i\le \bar P(q_i+q_{-i})+\bar P'(q_i+q_{-i})q_i. \] Therefore, for given $q_{-i}$ and $ P$, the best-response quantity $x^*_i(q_{-i}, P)$ of firm $i$ always lies between $x^*_i(q_{-i},\underaccent{\bar} P)$ and $x^*_i(q_{-i},\bar P)$. While the profit function need not be concave in general, it is concave when $ P=\underaccent{\bar} P$ or when $ P=\bar P$. So the highest loss will always be attained in one of these two extreme cases: \[ l_i(q_i,q_{-i})=\sup_{ P} \Delta u_i(q_i,q_{-i}; P) =\max\{ \Delta u_i(q_i,q_{-i};\underaccent{\bar} P), \Delta u_i(q_i,q_{-i};\bar P)\}. \] It is easy to see that the maximum loss is minimized by balancing the two expressions under the maximum: \[ \Delta u_i(q_i,q_{-i};\bar P)=\Delta u_i(q_i, q_{-i};\underaccent{\bar} P). \] Substituting $\underaccent{\bar} P$ and $\bar P$ and simplifying the expressions yields the equation \begin{equation}\label{e-maxmin-cournot} \frac{(\bar a-\bar b q_{-i})^2}{4\bar b}-(\bar a-\bar b(q_i+ q_{-i}))q_i=\frac{(\underaccent{\bar} a-\underaccent{\bar} b q_{-i})^2}{4\underaccent{\bar} b}-(\underaccent{\bar} a-\underaccent{\bar} b(q_i+ q_{-i}))q_i. \end{equation} Solving for $q_i$ yields the unique best compromise quantity: \[ q^*_i=\frac{\underaccent{\bar} a\sqrt{\bar b}+\bar a\sqrt{\underaccent{\bar} b}}{2(\underaccent{\bar} b\sqrt{\bar b}+\bar b\sqrt{\underaccent{\bar} b})}-\frac{q_{-i}}2, \ \ i=1,2.
\] Solving this pair of equations for $(q^*_1,q^*_2)$, we find \eqref{e-cournot-eq}. It is easy to verify that under our assumptions, $q^*_i>0$, and moreover, $ P(q^*_1+q^*_2)\ge\underaccent{\bar} P(q^*_1+q^*_2)>0$. Substituting the solution into \eqref{e-maxmin-cournot} yields the maximum loss of each firm \eqref{e-cournot-com}. \qed \subsection{Proof of Proposition \ref{p:bertrand}}\label{s:p2} Similarly to the proof of Proposition \ref{p:cournot}, to prove the existence of a unique PCE, we find a unique profile of best-compromise strategies and a unique profile of beliefs that satisfy Definition \ref{def:consistency}. First, we find the beliefs. The firms have genuine ambiguity, so the set of priors $\Pi_i$ of firm $i$ is equal to the set of degenerate beliefs over $[\underaccent{\bar} c,\bar c]^2$. By Definition \ref{def:consistency} and the consistency requirement in PCE, firm $i$ with cost $c_i$ must have the set $B_i(c_i)$ of beliefs equal to the set of priors, so $B_i(c_i)=\Pi_i$. Next, we find each firm's equilibrium price. For derivations, we assume that each firm prices at or above marginal cost, and then we verify that this is indeed the case in equilibrium. Consider firm $i$ with type $c_i\in[\underaccent{\bar} c,\bar c]$. Let $s^m(c_i)$ be the profit-maximizing pricing strategy if firm $i$ were a monopolist, so $s^m(c_i)=(a+ c_i)/2$. Since we have assumed that $\bar c\le a/2$, this means that $s^m(c_i)\ge\bar c$ for all $c_i$. The monopoly profit is $(a-c_i)^2/(4b)$. Fix the other firm's strategy $s^*_{-i}(c_{-i})$ and let $\bar p$ be the maximum price of the other firm, so $\bar p=\sup\nolimits_{c_{-i}\in [\underaccent{\bar} c,\bar c]} s^*_{-i}(c_{-i})$.
Given the other firm's cost $c_{-i}$, and thus the price $p_{-i}=s^*_{-i}(c_{-i})$, firm $i$'s maximum profit is \begin{align*} u^*_i(p_{-i};c_i)=\sup_{x_i\ge 0}u_i(x_i, p_{-i};c_i)&= \begin{cases} 0, &\text{if $p_{-i}\le c_i$},\\ (p_{-i}-c_i)\frac{a-p_{-i}}{b}, &\text{if $c_i<p_{-i}\le s^m(c_i)$},\\ \frac{(a-c_i)^2}{4b}, &\text{if $p_{-i}> s^m(c_i)$} \end{cases}\\ &=\max\left\{0,(p_{-i}-c_i)\frac{a-p_{-i}}{b},\frac{(a-c_i)^2}{4b}\right\}. \end{align*} Let $p_i$ be a price of firm $i$. We now find the maximum loss of firm $i$ from choosing $p_i$, given its marginal cost $c_i$ and the strategy $s^*_{-i}$ of the other firm. There are three cases. First, suppose that $p_{-i}\le c_i\le p_i$. Then firm $i$ cannot make positive profit, so $p_i$ is a best response. Thus, firm $i$ behaves optimally in this case, so the loss is zero. Second, suppose that $c_i<p_{-i}\le p_i$. Then firm $i$ could have been better off by marginally undercutting $p_{-i}$. Maximizing the loss over $p_{-i}\in(c_i,p_i)$, we obtain \begin{equation}\label{e-loss-1} \sup_{p_{-i}\in(c_i,p_i)}\left(u^*_i(p_{-i};c_i)-u_i(p_i,p_{-i};c_i)\right)= \begin{cases} (p_{i}-c_i)\frac{a-p_{i}}{b}, &\text{if $p_{i}\le s^m(c_i)$,}\\ \frac{(a-c_i)^2}{4b}, &\text{if $p_{i}>s^m(c_i)$}. \end{cases} \end{equation} Third, suppose that $p_i<p_{-i}$. Then firm $i$ could have made more profit by increasing its price, so its maximum loss is \begin{multline}\label{e-loss-2} \sup_{p_{-i}\in(p_i,\bar p]}\left(u^*_i(p_{-i};c_i)-u_i(p_i,p_{-i};c_i)\right)=u^*_i(\bar p;c_i)-u_i(p_i,\bar p;c_i)\\ =-(p_{i}-c_i)\frac{a-p_{i}}{b}+\begin{cases} (\bar p-c_i)\frac{a-\bar p}{b}, &\text{if $p_i\le s^m(c_i)$,}\\ \frac{(a-c_i)^2}{4b}, &\text{if $p_i>s^m(c_i)$}. \end{cases} \end{multline} To minimize the maximum loss, we need to minimize the greater of the expressions in \eqref{e-loss-1} and \eqref{e-loss-2}.
Observe that, by the definition of $s^m(c_i)$, the right-hand side in \eqref{e-loss-1} is constant and the right-hand side in \eqref{e-loss-2} is strictly increasing in $p_i$ for $p_i>s^m(c_i)$. So we only need to consider $p_i\le s^m(c_i)$. Under this assumption, the greater of the expressions in \eqref{e-loss-1} and \eqref{e-loss-2} can be simplified to \[ l_i(p_i,s^*_{-i};c_i)=\max\left\{(p_{i}-c_i)\frac{a-p_{i}}{b},(\bar p-c_i)\frac{a-\bar p}{b}-(p_{i}-c_i)\frac{a-p_{i}}{b}\right\}. \] Because one expression is increasing and the other is decreasing in $p_i$ for $p_i\le s^m(c_i)$, the maximum loss is minimized at the solution of \begin{equation}\label{e-bertrand-loss} (p_{i}-c_i)\frac{a-p_{i}}{b}=(\bar p-c_i)\frac{a-\bar p}{b}-(p_{i}-c_i)\frac{a-p_{i}}{b}. \end{equation} Solving the above for $p_i$ and assigning $s^*_i(c_i)=p_i$, we obtain \eqref{e-bertrand-eq}. To see that $s_i^*(c_i)\ge c_i$, observe that \[ s_i^*(c_i)-c_i=\frac{1}{2}\left(a-c_i-\sqrt{(a-\bar c)^2+(\bar c- c_i)^2}\right)\ge 0 \] by the triangle inequality and $a>\bar c\ge c_i$. Moreover, $s_i^*(c_i)>c_i$ when $c_i<\bar c$, and $s^*_i(\bar c)=\bar c$. Finally, substituting $s^*_i(c_i)$ into the maximum loss expression in \eqref{e-bertrand-loss} yields \eqref{e-bertrand-com}. \qed \subsection{Proof of Proposition \ref{C:PG}}\label{s:C:PG} We prove only part (iii) of Proposition \ref{C:PG} for the proportional rule given by \eqref{t-prop}. The proof of parts (i) and (ii) for the other two rules is analogous but easier, and thus omitted. Let the refunds $r_i$ be given by the proportional rule \eqref{t-prop}. We first derive an agent $i$'s best compromise strategy $s^*_i$. Agent $i$ who chooses $x_i$ worries about two possible situations. It could be that the total contribution is marginally below $c$, so $x_i+\sum_{j\ne i} s_j(v_j)=c-\varepsilon$ for a small $\varepsilon>0$. The good is not provided, but had $i$ contributed $\varepsilon$ more it would have been provided. 
As $\varepsilon\to 0$, agent $i$'s loss is $v_i-x_i$. Alternatively, it could be that all other agents contribute enough to cover $c$, so $\sum_{j\ne i} s_j(v_j)\ge c$. Thus the agent could have contributed nothing and still received the good. In this case the loss is the amount of contribution net of the refund, $x_i-r_i(x)$. This loss is maximized when the other agents' contributions exactly equal the cost, $\sum_{j\ne i} s_j(v_j)=c$, since by \eqref{t-prop} we have \[ x_i-r_i(x)=\frac{cx_i}{x_i+\sum_{j\ne i} s_j(v_j)}\le \frac{cx_i}{x_i+c}. \] The loss in the first case is weakly decreasing and the loss in the second case is strictly increasing in $x_i$. To find the $x_i$ that minimizes the maximum loss, we solve the equation \[ v_i-x_i=\frac{cx_i}{x_i+c} \] for $x_i$. Denote the solution by $s^*(v_i)$. It is easy to verify that it is as given in part (iii) of the statement of Proposition \ref{C:PG}. Note that it is symmetric across the players, so we drop the subscript $i$. The above argument requires that there exist values $v_j\in[0,\bar v]$ such that $\sum_{j\ne i} s^*(v_j)=c$. Observe that $s^*(0)=0$ and $s^*(v_i)$ is increasing in $v_i$. So, we only need to verify that $\sum_{j\ne i}s^*(\bar v)\ge c$, which holds under condition \eqref{E:A1}. It remains to determine the maximum welfare loss $L(s^*)$ as defined in \eqref{e:WLoss}. As $s^*(v_i)$ is increasing in $v_i$, the constraint $\sum\nolimits_{i=1}^n s^*(v_i)<c$ must be binding. Moreover, it is easy to verify that $s^*(v_i)$ is convex in $v_i$. Thus, by Jensen's inequality we have \[ \sum\nolimits_{j=1}^n s^*(v_j)\ge n s^*\left(\frac{1}{n}\sum\nolimits_{j=1}^n v_j\right). \] Hence the maximum is attained for $v_1=...=v_n=z$ for $z\in[0,\bar v]$ such that $n s^*(z)=c$. Solving the equation \[ n\left(\frac{z}{2}-c+\frac{1}{2}\sqrt{z^2+4 c^2}\right)=c \] for $z$ yields \[ z=\frac{2n+1}{n(n+1)}c.
\] We thus obtain \[ L(s^*)=nz-c=\frac{2n+1}{n+1}c-c=\frac{n}{n+1}c.\tag*{\qed} \] \subsection{Proof of Proposition \ref{p:spence}}\label{s:p3} First, we find the equilibrium wages $w^H$ and $w^L$ offered after the worker's education levels $e_H$ and $e_L$, respectively. For each $j=L,H$, each firm $i$ has the set of speculated states $S_i(e_j)\subset \Omega$. Let this set be the same for each firm. Denote this set by $S(e_j)$, so $S(e_j)=S_1(e_j)=S_2(e_j)$. Let $\underaccent{\bar}\theta_j$ and $\bar\theta_j$ be the lowest and highest productivity levels given $e_j$, so \begin{equation}\label{e-theta} \underaccent{\bar}\theta_j=\inf\{\theta:(\theta,c)\in S(e_j)\} \quad\text{and}\quad \bar\theta_{j}=\sup\{\theta:(\theta,c)\in S(e_j)\}, \ \ j=L,H. \end{equation} Consider a firm $i$, some wages $w_i$ and $w_{-i}$, and a state $(\theta,c)$. Firm $i$'s maximum profit $u^*_i(w_{-i};\theta)$ is obtained by marginally outbidding $w_{-i}$ when it is below $\theta$, and by choosing a wage below $w_{-i}$, thus giving up the worker, if $\theta\le w_{-i}$, so \[ u^*_i(w_{-i};\theta)=\sup_{w_i\ge 0 } u_i(w_i,w_{-i};\theta)=\max\{\theta-w_{-i},0\}. \] Observe that we only need to consider $w_i$ and $w_{-i}$ in $[\underaccent{\bar}\theta_j,\bar\theta_j]$. A wage above $\bar\theta_j$ is dominated and cannot be a best compromise; a wage below $\underaccent{\bar}\theta_j$ will always be overbid by the rival's wage, as there is common knowledge that $\theta\ge \underaccent{\bar}\theta_j$. Suppose that $w_i<w_{-i}$, so $u_i(w_i,w_{-i};\theta)=0$. Then the largest loss is obtained when $\theta$ is the greatest: \[ \sup_{\theta:(\theta,c)\in S(e_j)} (u_i^*(w_{-i};\theta)-u_i(w_i,w_{-i};\theta))\le \max\{\bar \theta_{j}-w_{-i},0\}. \] Next, suppose that $w_i>w_{-i}$, so $u_i(w_i,w_{-i};\theta)=\theta-w_i$.
Then the largest loss is obtained when $\theta$ is the smallest: \[ \sup_{\theta:(\theta,c)\in S(e_j)} (u_i^*(w_{-i};\theta)-u_i(w_i,w_{-i};\theta))=\sup_{\theta:(\theta,c)\in S(e_j)}\left(\max\{\theta-w_{-i},0\}-(\theta-w_i)\right)\le w_i-\underaccent{\bar} \theta_{j}. \] Finally, suppose that $w_i=w_{-i}$, so $u_i(w_i,w_{-i};\theta)=(\theta-w_i)/2$. Then \begin{multline*} \sup_{\theta:(\theta,c)\in S(e_j)} (u_i^*(w_{-i};\theta)-u_i(w_i,w_{-i};\theta))=\sup_{\theta:(\theta,c)\in S(e_j)}\left(\max\{\theta-w_{-i},0\}-\frac{\theta-w_i}2\right)\\ \le \max\{0,\bar \theta_{j}-w_{-i}, (w_i-\underaccent{\bar} \theta_{j})/2\}. \end{multline*} The maximum loss $l_i(w_i,w_{-i})$ is given by the greatest of the three expressions, so \[ l_i(w_i,w_{-i})=\max\{0,\bar \theta_{j}-w_{-i},w_i-\underaccent{\bar} \theta_{j}\}. \] The wage $w_i$ that minimizes the maximum loss satisfies \[ w_i=\bar \theta_{j}+\underaccent{\bar} \theta_{j}-w_{-i}, \ \ i=1,2. \] So, we have obtained two equations, one for each $i=1,2$. Solving this pair of equations for $w_1$ and $w_2$ yields the best compromise $w^*_i(e_j)$ for each firm $i$, where \begin{equation}\label{e-spence-w} w_i^*(e_j)=\frac{\bar \theta_{j}+\underaccent{\bar} \theta_{j}}{2}, \ \ i=1,2. \end{equation} The associated maximum losses are \begin{equation}\label{e-spence-c} l_i(w^*_i(e_j),w^*_{-i}(e_j))=w^*_i(e_j)-\underaccent{\bar} \theta_{j}. \end{equation} Next, observe that the worker operates under complete information. Given each choice of $e_j$, she anticipates the wages $w^j=w^*_1(e_j)=w^*_2(e_j)$, $j\in\{L,H\}$. So, given a state $(\theta,c)$, the worker chooses $e=e_H$ if and only if\footnote{The tie breaking is arbitrary, because the set of types is a continuum.} \[ w^H-c(\theta)\ge w^L. \] Recall that $c(\theta)$ is strictly decreasing, and denote by $c^{-1}$ its inverse. Then, the worker chooses $e=e_H$ if and only if her type $\theta$ satisfies \[ \theta\ge c^{-1}(w^H-w^L). \] {\it Pooling PCE.} If $w^H\le w^L$, then every type chooses the low level of education $e_L$, so the equilibrium is pooling.
After observing $e=e_L$, the consistent set of speculated states $S(e_L)$ is thus the entire set of states, so $S(e_L)=\Omega$. By \eqref{e:spence-types}, the highest and lowest $\theta$ in $S(e_L)$ are $\bar\theta_{L}=1$ and $\underaccent{\bar}\theta_{L}=0$. By \eqref{e-spence-w}, we obtain the equilibrium wages $w^*_i(e_L)=1/2$. After observing an out-of-equilibrium education $e=e_H$, the set of speculated states $S(e_H)$ must induce the wage $w^*_i(e_H)\le w^*_i(e_L)$. In particular, we can assume $S(e_H)=\Omega$, and thus $w^*_i(e_H)=1/2$. Substituting the wage $w_i^*(e)=1/2$ and the lower bound productivity $\underaccent{\bar} \theta_{L}=0$ into \eqref{e-spence-c}, we obtain the maximum loss for each firm $i$, \[ l_i(w^*_i(e_j),w^*_{-i}(e_j);e_j)=\frac 1 2, \ \ i=1,2, \ \ j=L,H. \] {\it Separating PCE.} Consider now $w^H>w^L$, so that the worker with cost $c\le w^H-w^L$ chooses high education. Let \[ S(e_L)=\{(\theta,c)\in\Omega: c>w^H-w^L\} \quad\text{and}\quad S(e_H)=\{(\theta,c)\in\Omega: c\le w^H-w^L\} \] be the sets of speculated states of each firm when the level of education is $e_L$ and $e_H$, respectively. So, $S(e_L)$ and $S(e_H)$ contain all pairs $(\theta,c)$ such that low and high education is chosen, respectively. These sets thus satisfy the consistency requirement (Definition \ref{def:consistency}). By \eqref{e:spence-types} and \eqref{e-theta}, the highest and lowest $\theta$ in $S(e_H)$ are given by \begin{equation}\label{e-spence-1} \bar\theta_{H}=1 \quad\text{and}\quad \underaccent{\bar}\theta_{H}=\frac{1-w^H+w^L}b. \end{equation} Similarly, the highest and lowest $\theta$ in $S(e_L)$ are given by \begin{equation}\label{e-spence-2} \bar\theta_{L}=\frac{1+\delta-w^H+w^L}b \quad\text{and}\quad \underaccent{\bar}\theta_{L}=0. \end{equation} From \eqref{e-spence-w}, we have \begin{equation}\label{e-spence-3} w^H=\frac{\bar \theta_{H}+\underaccent{\bar} \theta_{H}}{2}\quad\text{and}\quad w^L=\frac{\bar \theta_{L}+\underaccent{\bar} \theta_{L}}{2}.
\end{equation} Solving the system of six equations in \eqref{e-spence-1}, \eqref{e-spence-2}, and \eqref{e-spence-3}, with six unknowns ($w^H$, $w^L$, $\bar\theta_{H}$, $\underaccent{\bar}\theta_{H}$, $\bar\theta_{L}$, and $\underaccent{\bar}\theta_{L}$), we obtain the equilibrium wages and the bounds on the productivity types as shown in \eqref{e-spence-w0} and \eqref{e-spence-pb}. Observe that the lowest possible cost of high education is $\inf \{c:(\theta,c)\in\Omega\}=1-b$. Therefore, there exist states $(\theta,c)$ where high education $e_H$ is chosen if and only if $w^H-w^L>1-b$. Substituting our solution for $w^H$ and $w^L$ given by \eqref{e-spence-w0}, we obtain that $w^H-w^L>1-b$ if and only if \[ \delta<2b^2-b. \] This condition is thus necessary and sufficient for the existence of a separating PCE. Finally, substituting the wage $w^H$ and the productivity lower bound $\underaccent{\bar}\theta_{H}$ into \eqref{e-spence-c}, we obtain firm $i$'s maximum loss when $e=e_H$, \[ l_i(w^*_i(e_H),w^*_{-i}(e_H);e_H)=w^H-\underaccent{\bar}\theta_{H}=\frac 1 2-\frac{b+\delta}{4b^2}. \] Substituting the wage $w^L$ and the productivity lower bound $\underaccent{\bar}\theta_{L}$ into \eqref{e-spence-c}, we obtain the maximum loss when $e=e_L$, \[ l_i(w^*_i(e_L),w^*_{-i}(e_L);e_L)=w^L-\underaccent{\bar}\theta_{L}=\frac{\delta}{2b}+\frac{b+\delta}{4b^2}.\tag*{\qed} \] \subsection{Proof of Proposition \ref{p:trade-b}}\label{s:p4} Consider how a buyer who knows that $v$ is in $[y_0,y_1]$ reacts when the seller asks $p$. Suppose that $p<1/2$. Then the buyer speculates that $v\in\{y_0\}$. This is consistent with the strategy of the seller, as $p<1/2$ is out of equilibrium. Given this speculation, accepting $p$ if and only if $p\le y_0$ is a best compromise. Now suppose that $p\ge 1/2$. The largest interval $[x_0,x_1]\subset [0,1]$ that satisfies \eqref{e-p4-1} is $[2p-1,1]$. So the buyer concludes that \[ v\in V_b(p,y_0,y_1)=[y_0,y_1]\cap [2p-1,1]=[\max\{y_0,2p-1\},y_1].
\] Given this information about the set of possible values, the buyer now compares her maximum losses when accepting ($\alpha=1$) and rejecting ($\alpha=0$) the price $p$. The maximum loss from rejecting $p$ is \[ l_b(0;p,y_0,y_1)=\sup_{v\in [\max\{y_0,2p-1\},y_1]}(v-p)=y_1-p. \] The maximum loss from accepting $p$ is \[ l_b(1;p,y_0,y_1)=\sup_{v\in [\max\{y_0,2p-1\},y_1]}(p-v)=\min\{p-y_0,1-p\}. \] Because $y_1\le 1$, it is easy to verify that $l_b(0;p,y_0,y_1)\ge l_b(1;p,y_0,y_1)$ if and only if $p\le\frac 1 2(y_0+y_1)$. Thus, it is a best compromise to buy the good when $p\le\frac 1 2(y_0+y_1)$ and not to buy it otherwise. Let us consider the first stage of the game. Anticipating the buyer's equilibrium behavior $\alpha^*$, the seller chooses a price that minimizes his maximal loss. Observe that choosing a price $p<1/2$ is dominated by $p=1/2$. This is because when $p<1/2$, the buyer accepts $p$ if and only if the value $v$ is guaranteed to be at least as high as the price $p$. In this case, the seller's payoff cannot be positive. Let $p\ge 1/2$. Suppose first that $p>\frac 1 2(y_0+y_1)>v$. So $p$ is rejected, but it would be optimal to reduce the price so that the buyer accepts it, specifically, to ask $p'=(y_0+y_1)/2$, and thus gain $p'-v$. The supremum of this loss is given by \[ \sup_{\substack{(v,y_0,y_1):\, p>\frac 1 2(y_0+y_1)>v,\\ v\in[x_0,x_1]\cap[y_0,y_1]}} \left(\frac{y_0+y_1} 2-v\right)=p-x_0. \] Second, suppose that $p\le \frac 1 2(y_0+y_1)<v$. So $p$ is accepted, but it would be optimal not to sell, and thus gain $v-p$. The supremum of this loss is given by \[ \sup_{\substack{(v,y_0,y_1):\, p\le \frac 1 2(y_0+y_1)<v,\\ v\in[x_0,x_1]\cap[y_0,y_1]}} \left(v-p\right)=x_1-p. \] Third, suppose that $p\le \frac 1 2(y_0+y_1)$ and $v\le \frac 1 2(y_0+y_1)$. So $p$ is accepted, but it would be optimal to sell at a higher price, specifically, at $p'=\frac 1 2(y_0+y_1)$, and thus gain $p'-p$. 
The supremum of this loss is given by \[ \sup_{\substack{(v,y_0,y_1):\, p,v\le \frac 1 2(y_0+y_1),\\ v\in[x_0,x_1]\cap[y_0,y_1]}} \left(\frac{y_0+y_1} 2-p\right)=\frac{x_1+1}{2}-p. \] Finally, suppose that $p>\frac 1 2(y_0+y_1)$ and $v\ge \frac 1 2(y_0+y_1)$. So, $p$ is rejected, but any price $p'>v$ would have been rejected too, so the loss is zero in this case. The maximum loss associated with the price $p\ge 1/2$ is the largest of the four losses computed above, so \[ l_s(p;x_0,x_1)=\max\left\{p-x_0,x_1-p,\frac{x_1+1}{2}-p,0\right\}=\max\left\{p-x_0,\frac{x_1+1}{2}-p\right\}. \] A best compromise price minimizes the maximum loss $l_s(p;x_0,x_1)$ among all prices $p\ge 1/2$, leading to the seller's equilibrium strategy \eqref{e-p4-1}. \qed \subsection{Proof of Proposition \ref{p:for1}}\label{s:P-F} Before proving Proposition \ref{p:for1}, we present a simple lemma on how the loss of a forecast is computed. \begin{lemma}\label{l:quadratic} $ l(a;z)=\sup_{F\in\mathcal F_{\delta}}(a-\mathbb{E}_{F,G_{\varepsilon}}[\theta|z])^2$. \end{lemma} The intuition is as follows. The variance of $\theta$ conditional on a signal $z$ enters the payoffs additively, and thus cancels out when computing the loss. As a result, the maximum loss $l(a;z)$ is simply the maximum quadratic distance between a forecast $a$ and the mean value of $\theta$ conditional on $z$. \begin{proof}[Proof of Lemma \ref{l:quadratic}] Fix $G_{\varepsilon}$. Let $\bar a_F(z)=\mathbb{E}_{F,G_{\varepsilon}}[\theta|z]$. Observe that \begin{equation}\label{e:l-f} \bar a_F(z)\in\argmax\limits_{a'\in[0,1]} \mathbb{E}_{F,G_{\varepsilon}}[-(a'-\theta)^2|z]. 
\end{equation} So, we have \begin{multline*} \sup\limits_{a'\in[0,1]} \mathbb{E}_{F,G_{\varepsilon}}[-(a'-\theta)^2|z]-\mathbb{E}_{F,G_{\varepsilon}}[-(a-\theta)^2|z]= \mathbb{E}_{F,G_{\varepsilon}}[-(\bar a_F(z)-\theta)^2+(a-\theta)^2|z]\\ =\mathbb{E}_{F,G_{\varepsilon}}[(a-\bar a_F(z))(a+\bar a_F(z)-2\theta)|z]=(a-\bar a_F(z))^2, \end{multline*} where the first equality is by \eqref{e:l-f} and the last equality is by $\mathbb{E}_{F,G_{\varepsilon}}[\theta|z]=\bar a_F(z)$. Thus, \[ l(a;z)=\sup_{F\in\mathcal F_{\delta}} (a-\bar a_F(z))^2=\sup_{F\in\mathcal F_{\delta}} (a-\mathbb{E}_{F,G_{\varepsilon}}[\theta|z])^2.\qedhere \] \end{proof} We now prove Proposition \ref{p:for1}. Different distributions $F\in\mathcal F_{\delta}$ induce different conditional means $\mathbb{E}_{F,G_{\varepsilon}}[\theta|z]$. Let $H(z)$ and $L(z)$ be the highest and lowest conditional means, respectively, so \begin{equation}\label{e:ex5-0} H(z)=\sup_{F\in\mathcal F_{\delta}} \mathbb{E}_{F,G_{\varepsilon}}[\theta|z] \quad\text{and}\quad L(z)=\inf_{F\in\mathcal F_{\delta}} \mathbb{E}_{F,G_{\varepsilon}}[\theta|z]. \end{equation} The loss of a forecast $a$ given a signal $z$ is \[ l(a;z)=\sup_{F\in\mathcal F_{\delta}} (a-\mathbb{E}_{F,G_{\varepsilon}}[\theta|z])^2=\max\left\{(a-H(z))^2,(a-L(z))^2\right\}, \] where the first equality is by Lemma \ref{l:quadratic}, and the last equality is by the convexity of the expression. Thus, the best compromise forecast, which minimizes $l(a;z)$ over $a\in[0,1]$, is the midpoint between the highest and lowest conditional means, so \[ a^*(z)=\frac{1}{2}\left(H(z)+L(z)\right). \] It remains to find $H(z)$ and $L(z)$. Suppose that $z\ge\theta_0$. Observe that \[ \mathbb{E}_{F,G_{\varepsilon}}[\theta|z]=\frac{(1-\varepsilon)f(z)z+\varepsilon\int_0^1\theta f(\theta)\mathrm{d}\theta}{(1-\varepsilon)f(z)+\varepsilon\int_0^1 f(\theta)\mathrm{d}\theta}=\frac{(1-\varepsilon)f(z)z+\varepsilon\theta_0}{(1-\varepsilon)f(z)+\varepsilon} \] is increasing in $f(z)$.
Using the assumption that $f(z)\le 1/\delta$, we have \[ H(z)=\sup_{F\in\mathcal F_{\delta}}\frac{(1-\varepsilon)f(z)z+\varepsilon\theta_0}{(1-\varepsilon)f(z)+\varepsilon}=\left.\frac{(1-\varepsilon)f(z)z+\varepsilon\theta_0}{(1-\varepsilon)f(z)+\varepsilon}\right|_{f(z)=1/\delta}=\frac{(1-\varepsilon)z+\varepsilon\delta\theta_0}{1-\varepsilon+\varepsilon\delta}. \] Using the assumption that $f(z)\ge \delta$, we have \[ L(z)=\inf_{F\in\mathcal F_{\delta}}\frac{(1-\varepsilon)f(z)z+\varepsilon\theta_0}{(1-\varepsilon)f(z)+\varepsilon}=\left.\frac{(1-\varepsilon)f(z)z+\varepsilon\theta_0}{(1-\varepsilon)f(z)+\varepsilon}\right|_{f(z)=\delta}=\frac{(1-\varepsilon)\delta z+\varepsilon\theta_0}{(1-\varepsilon)\delta+\varepsilon}. \] Analogously, for $z\le\theta_0$ we obtain $H(z)=\frac{(1-\varepsilon)\delta z+\varepsilon\theta_0}{(1-\varepsilon)\delta+\varepsilon}$ and $L(z)=\frac{(1-\varepsilon)z+\varepsilon\delta\theta_0}{1-\varepsilon+\varepsilon\delta}$. Thus we obtain \[ a^*(z)=\frac{1}{2}\left(H(z)+L(z)\right)=\frac{1}{2}\left(\frac{(1-\varepsilon)z+\varepsilon\delta\theta_0}{1-\varepsilon+\varepsilon\delta}+\frac{(1-\varepsilon)\delta z+\varepsilon\theta_0}{(1-\varepsilon)\delta+\varepsilon}\right).\tag*{\qed} \] \section*{Appendix B. Alternative Model of Forecasting}\label{s:F-2} \renewcommand{\thesection}{B} This section considers a variation of the forecasting model presented in Section \ref{s:forec}. Here we are interested in how to forecast a random variable with a known distribution after receiving a noisy signal that has an unknown distribution. Suppose that the agent knows the distribution $F$ of $\theta$, but is uncertain about how the noisy signal $z$ is generated. The following assumptions are made about this signal. The signal $z$ is known to be not too far from the true value of $\theta$, where a parameter $\delta>0$ describes the maximal distance. So $\delta$ can also be interpreted as the precision of the signal.
Let $y=z-\theta$ be called the noise. So it is known that $|y|\le \delta$. The distribution of the noise $y$ has a certain and an uncertain component. Let $\varepsilon\in[0,1]$ be a known parameter. With probability $1-\varepsilon$ the noise $y$ is drawn from a known distribution $G_0$ and with probability $\varepsilon$ it is drawn from an unknown distribution $G_1$. So $\varepsilon$ measures how uncertain the agent is about how the noise is generated. Given the support restrictions on $y$, it follows that $G_0$ and $G_1$ both have support contained in $[-\delta, \delta]$. Let $\mathcal G_\delta$ be the set of all distributions of $y$ that satisfy the above description. Let $\mathbb{E}_{F,G_\delta,\varepsilon}[\cdot|z]$ denote the conditional mean of $\theta$ given $z$ for $G_\delta\in \mathcal G_\delta$. The maximum loss associated with a forecast $a\in[0,1]$ given a signal $z\in[0,1]$ is \[ l(a;z)=\sup_{G_\delta\in\mathcal G_{\delta}} \left(\sup_{a'\in[0,1]} \mathbb{E}_{F,G_\delta,\varepsilon}[-(a'-\theta)^2|z]-\mathbb{E}_{F,G_\delta,\varepsilon}[-(a-\theta)^2|z]\right). \] Let $H(z)$ and $L(z)$ be the highest and lowest conditional means, so \[ H(z)=\sup_{G_\delta\in\mathcal G_{\delta}} \mathbb{E}_{F,G_\delta,\varepsilon}[\theta|z] \quad\text{and}\quad L(z)=\inf_{G_\delta\in\mathcal G_{\delta}}\mathbb{E}_{F,G_\delta,\varepsilon}[\theta|z]. \] It is straightforward to verify that \[ H(z)=\sup_{x\in[-\delta,\delta]} \frac{\varepsilon f(z-x)(z-x)+(1-\varepsilon)\int_{-\delta}^{\delta}(z-y) f(z-y)\mathrm{d} G_0(y)}{\varepsilon f(z-x)+(1-\varepsilon)\int_{-\delta}^{\delta}f(z-y)\mathrm{d} G_0(y)}, \] with an analogous expression for $L(z)$. We obtain the following result. \begin{proposition} The agent's best compromise is \[ a^*(z)=\frac 1 2\left(H(z)+L(z)\right). \] \end{proposition} The proof is analogous to that of Proposition \ref{p:for1} and thus omitted. The best compromise is the midpoint between the highest and lowest conditional means.
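As a numerical illustration of these expressions (a sketch under simplifying assumptions of ours, not made in the paper: $\theta$ is uniform on $[0,1]$, so $f\equiv 1$, and the benchmark $G_0$ is uniform on $[-\delta,\delta]$; then for interior $z$ the integrals in $H(z)$ reduce to $z$ and $1$, and the conditional mean induced by a point mass of $G_1$ at $x$ is $z-\varepsilon x$):

```python
# Grid search for the extreme conditional means H(z) and L(z) under the
# simplifying assumptions stated above (theta uniform on [0,1], G_0 uniform
# on [-delta, delta]); the worst cases are point masses of G_1 at -delta and delta.
def extreme_means(z, delta, eps, n=2001):
    xs = [-delta + 2 * delta * i / (n - 1) for i in range(n)]  # grid over [-delta, delta]
    means = [eps * (z - x) + (1 - eps) * z for x in xs]        # mean for G_1 = point mass at x
    return max(means), min(means)

z, delta, eps = 0.5, 0.1, 0.3
H, L = extreme_means(z, delta, eps)
a_star = (H + L) / 2

assert abs(H - (z + eps * delta)) < 1e-9   # G_1 concentrated at -delta
assert abs(L - (z - eps * delta)) < 1e-9   # G_1 concentrated at +delta
assert abs(a_star - z) < 1e-9              # midpoint forecast equals the signal here
```

With these symmetric choices the midpoint forecast coincides with the signal itself; for an asymmetric $f$ or $G_0$ the two worst cases need not be symmetric around $z$.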
The agent's best compromise forecast depends on the precision $\delta$ of her signal, as well as on the degree $\varepsilon$ of her uncertainty. We show how each of these two parameters independently influences the best compromise forecast. Fix the degree of uncertainty $\varepsilon$. If the signal is very precise in the sense that $\delta$ is very small, then each of the two extreme conditional means is close to $z$. Hence, the best compromise forecast will also be close to $z$. Formally, $\lim_{\delta\to 0} a^*(z)=z$. Fix the precision $\delta$ of the signal. As the degree of uncertainty $\varepsilon$ vanishes, both extreme conditional means converge to the conditional mean under the benchmark distribution $G_0$. Formally, $\lim_{\varepsilon\to 0} a^*(z)= \mathbb{E}_{F,G_0,0}[\theta|z]$. For instance, if $G_0$ is the uniform distribution, then the best compromise forecast converges to the expected value of $\theta$ conditional on $\theta$ being within $\delta$ of the signal. As the degree of uncertainty $\varepsilon$ becomes large, the role of the benchmark $G_0$ diminishes and almost any noise within $[-\delta,\delta]$ becomes possible. When $\varepsilon=1$, it could be that $G_1$ puts all mass on $-\delta$, in which case $\mathbb{E}_{F,G_\delta,\varepsilon}[\theta|z]=z+\delta$. This is the highest conditional mean given $z$, so $H(z)=z+\delta$. It could also be that $G_1$ puts all mass on $\delta$, in which case $\mathbb{E}_{F,G_\delta,\varepsilon}[\theta|z]=z-\delta$. This is the lowest conditional mean given $z$, so $L(z)=z-\delta$. Consequently, the best compromise forecast is close to the signal $z$ when the agent is very uncertain about how $z$ is generated. Formally, $a^*(z)\to z$ as $\varepsilon\to 1$. \begin{remark}\label{R:F} Note that the distribution $F$ of the underlying variable of interest plays no role when the degree of uncertainty is extreme, i.e., when $\varepsilon=1$.
Consequently, if the agent knows neither $F$ nor the distribution of the noise, then her best compromise forecast is the signal itself. \end{remark} \end{document}
\begin{document} \title{Numerical Scheme for Dynkin Games under Model Uncertainty} \thanks{} \author{Yan Dolinsky and Benjamin~Gottesman } \date{\today} \subjclass[2010]{91A15, 91G20, 91G60} \keywords{Dynkin Games, Model Uncertainty, Skorokhod Embedding} \maketitle \markboth{}{} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \pagenumbering{arabic} \begin{abstract}\noindent We introduce an efficient numerical scheme for continuous time Dynkin games under model uncertainty. We use the Skorokhod embedding in order to construct recombining tree approximations. This technique allows us to determine convergence rates and to construct numerically optimal stopping strategies. We apply our method to several examples of game options. \end{abstract} \section{Introduction}\label{sec:1}\setcounter{equation}{0} In this paper, we propose an efficient numerical scheme for computing the values of Dynkin games under volatility uncertainty. We consider a finite maturity, continuous--time robust Dynkin game with respect to a nondominated set of mutually singular probabilities on the canonical space of continuous paths. In this game, Player 1, who conservatively assumes that nature is also against him, makes the following payment to Player 2 if the two players choose stopping strategies $\gamma$ and $\tau$, respectively: \begin{equation}\label{1.1} H(\gamma,\tau):=\mathbb{I}_{\gamma<\tau}X_{\gamma}+\mathbb{I}_{\tau\leq\gamma}Y_{\tau}+\int_{0}^{\gamma\wedge\tau}Z_u du. \end{equation} We model uncertainty by assuming that the stochastic processes $X,Y,Z$ are path--independent functions of an underlying asset $S$ which is an exponential martingale with volatility in a given interval. Thus, our setup can be viewed as a Dynkin game variant of Peng's G--expectation (see \cite{P}). 
For finite maturity optimal stopping problems/games, there are no explicit solutions even in the relatively simple framework where the probabilistic setup is given and the payoffs are path--independent functions of the standard Brownian motion. Hence, numerical schemes come naturally into the picture. In \cite{ALP}, the authors presented recombining trinomial tree approximations for what is now known as a $G$--expectation in the sense of Peng (\cite{P}). However, they did not provide a rigorous proof for the convergence of their scheme and did not obtain error estimates. Moreover, a priori, it is not clear whether the tree approximations from \cite{ALP} can be applied to optimal stopping problems/games. In this paper, we slightly modify the trinomial trees from \cite{ALP}. For the modified (recombining) trees we construct a discrete time version of the Dynkin game given by (\ref{1.1}). The main idea is to apply the Skorokhod embedding technique in order to prove the existence of an exact scheme along stopping times with the required properties. More precisely, for any exponential martingale with volatility in a given interval we prove that there exists a sequence of stopping times such that the ratio of the martingale between two consecutive times belongs to some fixed set of the form $\left\{\exp\left(-\bar\sigma\sqrt\frac{T}{n}\right),1,\exp\left(\bar\sigma\sqrt\frac{T}{n}\right)\right\}$ and the expectation of the difference between two consecutive times is approximately equal to $\frac{T}{n}$. Here $\bar\sigma>0$ is the right endpoint of the volatility uncertainty interval, $n$ is the number of time steps and $T$ is the maturity date. This machinery also allows us to go in the reverse direction, namely for a given distribution on the trinomial tree we can find a ``close'' distribution on the canonical space which lies in our set of model uncertainty. We prove the convergence of the discrete time approximations to the original control problem. 
Moreover, we provide error estimates of order $O(n^{-1/4})$. The recombining structure of the trinomial trees allows us to compute the corresponding value with complexity $O(n^2)$, where $n$ is the number of time steps. The idea of using the Skorokhod embedding technique in order to obtain an exact sequence along stopping times was also employed in the recent work \cite{BDG}, where the authors approximated a one dimensional time--homogeneous diffusion by recombining trinomial trees (and obtained the same error of order $O(n^{-1/4})$). In \cite{BDG}, the authors were able to construct the stopping times explicitly. The construction relies heavily on the well-established theory of exit times of one dimensional time--homogeneous diffusion processes. This theory cannot be applied in the present work, since the martingales in the volatility uncertainty setup are not necessarily diffusions, or even Markov processes. Thus, the case of model uncertainty requires additional machinery, which we develop in Section \ref{sec:3}. Moreover, since the martingales may not be Markovian we cannot provide an explicit construction of the stopping times (as done in \cite{BDG}), but only prove their existence. Let us remark that the multidimensional version of the above described result is an open question which requires a completely different approach. In particular, it is not clear how to derive recombining tree models which approximate volatility uncertainty in the multidimensional setup. We leave this challenging question for future research. Since their introduction in \cite{Dyn}, Dynkin games have been analyzed in discrete and continuous time models for decades (see, for instance, \cite{BF,BS,LM,N,O}). In Mathematical Finance, the theory of Dynkin games can be applied to pricing and hedging game options and their derivatives, see \cite{D,HZ,K,KK,MC} and the references in the survey paper \cite{Ki}. 
In particular, the nondominated version of the optional decomposition theorem developed in \cite{Nu} provides a direct link (as we will see rigorously) between Dynkin games and pricing game options in the model uncertainty framework. In general, Dynkin games are a central topic in stochastic control. In \cite{CK}, the authors connected Dynkin games to backward stochastic differential equations (BSDEs) with two reflecting barriers. This link inspired very active research on Dynkin games in a Brownian framework, see e.g. \cite{BL,BY1,H,HH1,HH,X}. Motivated by Knightian uncertainty, there has recently also been growing interest in Dynkin games under model uncertainty, see \cite{BY,D,HH,Y}. In \cite{BY} the authors analyzed a robust version of the Dynkin game over a set of mutually singular probabilities. They proved that the game admits a value. Moreover, they established submartingale properties of the value process. These results will be essential in the present work. The rest of the paper is organized as follows. In the next section we formulate our main result (Theorem \ref{thm2.1}). In Section 3, we introduce our main tool, which is a Skorokhod embedding under model uncertainty. In Section 4, we complete the proof of the main result. Section 5 is devoted to some auxiliary estimates which are used in the proof of Theorem \ref{thm2.1}. In Section 6, we provide numerical analysis for several examples of game options. Moreover, we rigorously establish the link between Dynkin games and pricing of game options in the current setup of model uncertainty. \section{Preliminaries and Main Result}\label{sec:2}\setcounter{equation}{0} Let $\Omega:=C(\mathbb R_{+}, \mathbb R)$ be the space of continuous paths equipped with the topology of locally uniform convergence and the Borel $\sigma$--field $\mathcal F:=\mathcal B(\Omega)$. 
We denote by $B=(B_t)_{t\geq 0}$ the canonical process $ B_t(\omega):=\omega_t$ and by $(\mathcal F_t)_{t\geq 0}$ the natural filtration generated by $B$. For any $t$, $\mathcal T_t$ denotes the set of all stopping times with values in $[0,t]$. We denote by $\mathcal T$ the set of all stopping times (we allow the stopping times to take the value $\infty$). For a closed interval $I=[\underline{\sigma},\overline{\sigma}]\subset {\mathbb R}_{+}$ and $s>0$ let $\mathcal P^{(I)}_s$ be the set of all probability measures $P$ on $\Omega$ under which the canonical process $B$ is a strictly positive martingale such that $B_0=s$ $P$--a.s., the quadratic variation $\langle B\rangle $ is absolutely continuous $dt\otimes P$ a.s. and $B^{-1}_t\sqrt\frac{d\langle B\rangle _t}{dt}\in I$ $dt\otimes P$ a.s. Observe that if we define the local martingale $M_t:=\int_{0}^t \frac{d B_u}{B_u}$, then from the It\^{o} isometry we get $\sqrt\frac{d\langle M\rangle_t}{dt}=B^{-1}_t\sqrt\frac{d\langle B\rangle _t}{dt}\in I$. Thus $M$ is a true martingale and $B_t=\exp(M_t-\langle M\rangle _t/2)$, $t\geq 0$ is the Dol\'{e}ans--Dade exponential of $M$. In other words, the set $\mathcal P^{(I)}_s$ is the set of all probability measures (on the canonical space) such that the canonical process (which starts in $s$) is a Dol\'{e}ans--Dade exponential of a true martingale with volatility in the interval $I$. From a mathematical finance point of view, the set $\mathcal P^{(I)}_s$ describes the set of all possible distributions of the (discounted) stock price process. We assume that $I$ is a finite interval, i.e. $\overline{\sigma}<\infty$. This implies that the set $\mathcal P^{(I)}_s$ is weakly compact, and so we can apply the results from \cite{BY} related to the existence of the optimal strategy of the Dynkin game. Moreover, the assumption $\overline{\sigma}<\infty$ is essential for constructing an appropriate sequence of trinomial models. 
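As a sanity check on this characterization, the following Python sketch (ours, purely illustrative and not part of the paper's argument) simulates a Dol\'{e}ans--Dade exponential $B_t=\exp(\sigma W_t-\sigma^2 t/2)$ with constant volatility $\sigma$ and recovers $\sigma$ from the realized quadratic variation, verifying numerically that $d\langle B\rangle_t\approx\sigma^2 B^2_t\,dt$, as membership in $\mathcal P^{(I)}_s$ requires.

```python
import math
import random

# Simulation check (ours): for B_t = exp(sigma*W_t - sigma^2*t/2), a
# Doleans-Dade exponential with constant volatility sigma, the realized
# quadratic variation satisfies d<B>_t ~ sigma^2 * B_t^2 dt, so
# B_t^{-1} * sqrt(d<B>_t / dt) recovers sigma.

def realized_vol(sigma, T=1.0, steps=200_000, seed=1):
    random.seed(seed)
    dt = T / steps
    w = 0.0          # Brownian path
    qv = 0.0         # realized quadratic variation of B
    int_b2 = 0.0     # integral of B_t^2 dt
    b_prev = 1.0
    for k in range(1, steps + 1):
        w += random.gauss(0.0, math.sqrt(dt))
        b = math.exp(sigma * w - 0.5 * sigma ** 2 * k * dt)
        qv += (b - b_prev) ** 2
        int_b2 += b_prev ** 2 * dt
        b_prev = b
    return math.sqrt(qv / int_b2)   # estimate of sigma
```

On a fine grid the estimate matches the input volatility up to Monte Carlo and discretization error.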
In addition, we assume that $\underline{\sigma}>0$; in other words, the model uncertainty setup is ``noisy enough''. This assumption is technical and will be needed for obtaining uniform bounds on the expectation of the hitting times related to the canonical process. We consider a Dynkin game with maturity date $T<\infty$ and a payoff given by (\ref{1.1}) with $X_t=g(t,B_t)$, $Y_t=f(t,B_t)$, $Z_t=h(t,B_t)$ where $g,f,h:[0,T]\times\mathbb R_{+}\rightarrow \mathbb R$ satisfy $g\geq f$ and the following Lipschitz condition \begin{eqnarray}\label{2.function} &|f(t_1,x_1)-f(t_2,x_2)|+|g(t_1,x_1)-g(t_2,x_2)|+|h(t_1,x_1)-h(t_2,x_2)|\leq \\ &L \left((1+|x_1|)|t_2-t_1|+|x_2-x_1|\right), \ t_1,t_2\in [0,T], \ x_1,x_2\in \mathbb R_{+} \nonumber \end{eqnarray} for some constant $L$. For any $(t,x)\in [0,T]\times\mathbb R_{+}$ define the lower value and the upper value of the game at time $t$, given that the canonical process satisfies $B_t=x$, by \begin{eqnarray*} &\underline{V}^{(I)}(t,x):= \sup_{P\in \mathcal{P}^{(I)}_x}\sup_{\tau\in\mathcal T_{T-t}}\inf_{\gamma\in\mathcal T_{T-t}}E_{P}[g(\gamma+t, B_{\gamma}){\mathbb I}_{\gamma<\tau}+\\ &f(\tau+t,B_{\tau})\mathbb{I}_{\tau\leq\gamma} +\int_{0}^{\gamma\wedge\tau}h(u+t, B_{u})du ] \end{eqnarray*} and \begin{eqnarray*} &\overline{V}^{(I)}(t,x):=\inf_{\gamma\in\mathcal T_{T-t}}\sup_{ P\in \mathcal{P}^{(I)}_x}\sup_{\tau\in\mathcal T_{T-t}}E_{P} [g(\gamma+t,B_{\gamma})\mathbb{I}_{\gamma<\tau}+\nonumber\\ &+f(\tau+t,B_{\tau})\mathbb{I}_{\tau\leq\gamma}+\int_{0}^{\gamma\wedge\tau}h(u+t,B_{u})du]. \nonumber \end{eqnarray*} From Theorem 4.1 in \cite{BY} it follows that the lower value and the upper value coincide and thus the game has a value \begin{equation}\label{2.2-} V^{(I)}(t,x):=\overline{V}^{(I)}(t,x)=\underline{V}^{(I)}(t,x), \ \ \forall(t,x)\in [0,T]\times\mathbb R_{+}. \end{equation} Our goal is to calculate numerically the value $V^{(I)}(0,s)$. 
Moreover, from Theorem 4.1 in \cite{BY} it follows that the stopping time $\gamma^{*}:=T\wedge\inf\{t: g(t,B_t)=V^{(I)}(t,B_t)\}$ is an optimal exercise time for Player 1. In Section 6, we use this formula for the numerical computation of Player 1's optimal strategy. \begin{rem}\label{rem2.1} Our setup is slightly different from the one considered in \cite{BY}. In our notation, the control problem studied in \cite{BY} is \begin{equation}\label{BY1} \inf_{P\in \mathcal{P}^{(I)}_x}\inf_{\gamma\in\mathcal T_{T}}\sup_{\tau\in\mathcal T_{T}} E_P\left[\mathbb{I}_{\gamma<\tau}X_{\gamma}+\mathbb{I}_{\tau\leq\gamma}Y_{\tau}+\int_{0}^{\gamma\wedge\tau}Z_u du\right]. \end{equation} Theorem 4.1 in \cite{BY} shows that the above infimum and supremum can be exchanged. Furthermore, the authors showed that $\tau^{*}:=T\wedge\inf\{t: Y_t=V^{(I)}(t,B_t)\}$ is an optimal stopping time for Player 2, who can be viewed as the holder of the corresponding game option. The term given in (\ref{BY1}) is the lowest arbitrage free price of the corresponding game option. Clearly, if we replace $X,Y,Z$ by $-Y,-X,-Z$ and exchange $\gamma\leftrightarrow\tau$, then the above control problem is equivalent to \begin{equation}\label{BY2} \sup_{P\in \mathcal{P}^{(I)}_x}\sup_{\tau\in\mathcal T_{T}}\inf_{\gamma\in\mathcal T_{T}} E_P\left[\mathbb{I}_{\gamma\leq \tau}X_{\gamma}+\mathbb{I}_{\tau<\gamma}Y_{\tau}+\int_{0}^{\gamma\wedge\tau}Z_u du\right]. \end{equation} This is almost the same control problem as the one we consider, up to the following change. In our setup, on the event $\{\gamma=\tau\}$ Player 1 pays the low payoff $Y_{\tau}+\int_{0}^{\tau} Z_u du$, while in (\ref{BY2}) Player 1 pays the high payoff $X_{\gamma}+\int_{0}^{\gamma} Z_u du$. Still, Theorem 4.1 in \cite{BY} can be extended to this setup as well by following the same proof. Furthermore, analogously, the optimal exercise time for Player 1 is given by $\gamma^{*}:=T\wedge\inf\{t: X_t=V^{(I)}(t,B_t)\}$. 
Namely, Theorem 4.1 in \cite{BY} provides an optimal exercise time for the player who plays against nature. In our setup, this is Player 1, who can be seen as the seller of the game option. The term given by (\ref{2.2-}) is the highest arbitrage free price of the game option. \end{rem} Next, we describe the trinomial models and the main result. Fix $n\in\mathbb N$. Let $\xi^{(n)}_1,...,\xi^{(n)}_n$ be random variables with values in the set $\{-1,0,1\}$ and let $\mathcal F^{(n)}=\{\mathcal F^{(n)}_k\}_{k=0}^n$ be the filtration generated by $\xi^{(n)}_1,...,\xi^{(n)}_n$ (with $\mathcal F^{(n)}_0$ trivial). Denote by $\mathcal T_n$ the set of all stopping times (with respect to the filtration $\mathcal F^{(n)}$) with values in the set $\{0,1,...,n\}$. For a given $t\in [0,T]$ and $s\geq 0$ consider the geometric random walk \[S^{t,s,n}_k:=s\exp\left(\overline{\sigma}\sqrt\frac{T-t}{n}\sum_{i=1}^k \xi^{(n)}_i\right) \ \ k=0,1,...,n.\] Clearly, the process $\{S^{t,s,n}_k\}_{k=0}^n$ lies on the grid $s\exp\left(\overline{\sigma}\sqrt\frac{T-t}{n} i\right)$, $i=-n,1-n,...,0,1,...,n$. Denote by $\mathcal{P}^{I,t,n}$ the set of all probability measures on $\mathcal{F}^{(n)}_n$ such that for any $k=1,...,n$ \begin{eqnarray}\label{2.1} &P(\xi^{(n)}_k=1|\mathcal F^{(n)}_{k-1})\in \frac{1}{1+\exp(\overline{\sigma}\sqrt{\frac{T-t}{n}})}\left[\exp\left(-4\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)\underline{\sigma}^2/\overline{\sigma}^2,1\right]\\ &P(\xi^{(n)}_k=-1|\mathcal F^{(n)}_{k-1})=\exp(\overline{\sigma}\sqrt{\frac{T-t}{n}})P(\xi^{(n)}_k=1|\mathcal F^{(n)}_{k-1})\label{2.1+}\\ &P(\xi^{(n)}_k=0|\mathcal F^{(n)}_{k-1})=1- P(\xi^{(n)}_k=1|\mathcal F^{(n)}_{k-1})-P(\xi^{(n)}_k=-1|\mathcal F^{(n)}_{k-1}).\label{2.1++} \end{eqnarray} Let us explain the intuition behind the definition of the set $\mathcal{P}^{I,t,n}$. First, we observe that for any $P\in \mathcal{P}^{I,t,n}$ and $k\geq 1$, $P(\xi^{(n)}_k=0|\mathcal F^{(n)}_{k-1})\geq 0$ (since, by (\ref{2.1})--(\ref{2.1+}), $P(\xi^{(n)}_k=1|\mathcal F^{(n)}_{k-1})+P(\xi^{(n)}_k=-1|\mathcal F^{(n)}_{k-1})\leq 1$), i.e. $P$ is indeed a probability measure. 
Moreover, from (\ref{2.1+})--(\ref{2.1++}) it follows that for any $k\geq 1$ \begin{eqnarray*} &E_P\left(\frac{S^{t,s,n}_{k}}{S^{t,s,n}_{k-1}}\big|\mathcal F^{(n)}_{k-1}\right)= \exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)P(\xi^{(n)}_k=1|\mathcal F^{(n)}_{k-1})+\\ &\exp\left(-\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)P(\xi^{(n)}_k=-1|\mathcal F^{(n)}_{k-1})+P(\xi^{(n)}_k=0|\mathcal F^{(n)}_{k-1})=1. \end{eqnarray*} Hence, $\{S^{t,s,n}_k\}_{k=0}^n$ is a martingale with respect to any probability measure $P\in\mathcal {P}^{I,t,n}$. Finally, from (\ref{2.1})--(\ref{2.1+}) we have that for any $P\in\mathcal {P}^{I,t,n}$ and $k\geq 1$ the conditional expectation of the ratio between the squared log-return and the time step satisfies \begin{eqnarray*} &\frac{n}{T-t}E_P \left(\left(\ln{S^{t,s,n}_{k}}-\ln{S^{t,s,n}_{k-1}}\right)^2\big|\mathcal F^{(n)}_{k-1}\right)=\\ &\overline{\sigma}^2 \left(P(\xi^{(n)}_k=1|\mathcal F^{(n)}_{k-1})+P(\xi^{(n)}_k=-1|\mathcal F^{(n)}_{k-1})\right) =\\ &\overline{\sigma}^2\left(1+\exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)\right)P(\xi^{(n)}_k=1|\mathcal F^{(n)}_{k-1}) \in \overline{\sigma}^2\left[\exp\left(-4\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)\underline{\sigma}^2/\overline{\sigma}^2,1\right]\\ &=\left[\underline{\sigma}^2,\overline{\sigma}^2\right]\bigcup \underline{\sigma}^2\left[\exp\left(-4\overline{\sigma}\sqrt{\frac{T-t}{n}}\right),1\right]. \end{eqnarray*} In the above union of intervals, the first interval is exactly the square of the model uncertainty interval $I$, and the second interval shrinks to the point $\underline{\sigma}^2$ as $n\rightarrow\infty$. This is why we expect the set $\mathcal{P}^{I,t,n}$ to be a good approximation of the set $\mathcal P^{(I)}_s$ restricted to the interval $[0,T-t]$. We emphasize that although the interval $\left[\exp\left(-4\overline{\sigma}\sqrt{\frac{T-t}{n}}\right),1\right]$ shrinks to a point, it will be essential for the Skorokhod embedding procedure. 
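The two identities just verified are easy to check numerically. The following Python sketch (ours, with illustrative parameter values) picks an admissible one-step probability from the interval in (\ref{2.1}) and confirms that the step ratio has conditional mean one and that the normalized squared log-return lies in the union of intervals above.

```python
import math

# Numerical check (ours) of the one-step properties of the trinomial model:
# for any admissible p_up = P(xi_k = 1 | F_{k-1}) the step ratio has mean one
# (martingale property), and sigma_hi^2 * (p_up + p_down) lies between
# sigma_lo^2 * exp(-4a) and sigma_hi^2, where a = sigma_hi*sqrt((T-t)/n).

def one_step(sigma_lo, sigma_hi, T_minus_t, n, frac):
    a = sigma_hi * math.sqrt(T_minus_t / n)
    lo = math.exp(-4 * a) * sigma_lo ** 2 / sigma_hi ** 2 / (1 + math.exp(a))
    hi = 1.0 / (1 + math.exp(a))
    p_up = lo + frac * (hi - lo)        # any point of the interval (2.1)
    p_down = math.exp(a) * p_up         # relation (2.1+)
    p_mid = 1.0 - p_up - p_down         # relation (2.1++)
    mean_ratio = p_up * math.exp(a) + p_down * math.exp(-a) + p_mid
    norm_sq_return = sigma_hi ** 2 * (p_up + p_down)
    return mean_ratio, norm_sq_return
```

With `frac = 1` the normalized squared log-return equals $\overline{\sigma}^2$, and with `frac = 0` it equals $\underline{\sigma}^2\exp(-4\overline{\sigma}\sqrt{(T-t)/n})$, the two endpoints of the union above.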
Next, we define the corresponding Dynkin game under model uncertainty. Introduce the lower value and the upper value of the game \begin{eqnarray*} &\underline{V}^{I,n}(t,s):=\\ &\sup_{ P\in \mathcal{P}^{I,t,n}}\max_{\eta\in\mathcal T_n} \min_{\zeta\in\mathcal T_n} E_{P}[g(t+\zeta (T-t)/n,S^{t,s,n}_{\zeta})\mathbb{I}_{\zeta<\eta}\nonumber\\ &+f(t+\eta (T-t)/n,S^{t,s,n}_{\eta})\mathbb{I}_{\eta\leq\zeta}+\frac{T-t}{n}\sum_{k=0}^{\zeta\wedge\eta-1} h(t+k(T-t)/n,S^{t,s,n}_{k})] \end{eqnarray*} and \begin{eqnarray*} &\overline{V}^{I,n}(t,s):=\min_{\zeta\in\mathcal T_n}\sup_{ P\in \mathcal{P}^{I,t,n}}\max_{\eta\in\mathcal T_n} E_{P}[g(t+\zeta (T-t)/n,S^{t,s,n}_{\zeta})\mathbb{I}_{\zeta<\eta}\nonumber\\ &+f(t+\eta (T-t)/n,S^{t,s,n}_{\eta})\mathbb{I}_{\eta\leq\zeta}+\frac{T-t}{n}\sum_{k=0}^{\zeta\wedge\eta-1} h(t+k(T-t)/n,S^{t,s,n}_{k})].\nonumber \end{eqnarray*} We argue that the above two values coincide. In \cite{KK}, the authors proved a similar statement for the setup where the set of probability measures is the set of equivalent martingale measures. However, the only property that was used in their proof is that there exists a reference measure, namely a measure $Q$ such that all the probability measures in the model uncertainty set are absolutely continuous with respect to $Q$. In our case the probability measures in $\mathcal{P}^{I,t,n}$ are defined on a finite sample space which supports the random variables $\xi^{(n)}_1,...,\xi^{(n)}_n$. Thus, there exists a reference measure $Q$ for the set $\mathcal{P}^{I,t,n}$. For instance, take $Q$ to be the probability measure under which $\xi^{(n)}_1,...,\xi^{(n)}_n$ are i.i.d. and take the values $-1,0,1$, each with probability $1/3$. 
Following the proof of Theorem 2.2 in \cite{KK} we conclude that the lower value and the upper value coincide, and so the game has a value $${V}^{I,n}(t,s):=\overline{V}^{I,n}(t,s)=\underline{V}^{I,n}(t,s) \ \ \forall t,s.$$ Moreover, by using standard dynamic programming for Dynkin games (see \cite{O}) we can calculate $V^{I,n}(t,s)$ by the following backward recursion. Define the functions ${J}^{I,t,s,n}_k:\{-k,1-k,...,0,1,...,k\}\rightarrow\mathbb R, \ \ k=0,1,...,n.$ \begin{equation}\label{recursion1} {J}^{I,t,s,n}_n(z):=f\left(T,s \exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}z\right)\right). \end{equation} For $k=0,1,...,n-1$ \begin{eqnarray}\label{recursion2} &{J}^{I,t,s,n}_k(z):=\max\bigg(f\left(t+k (T-t)/n,s \exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}z\right)\right), \\ &\min \bigg(g\left(t+k(T-t)/n,s \exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}z\right)\right),\frac{T-t}{n}h\left(t+k(T-t)/n,s \exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}z\right)\right)+\nonumber\\ &\sup_{p\in \left[\exp\left(-4\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)\underline{\sigma}^2/\overline{\sigma}^2,1\right]}\left((1-p)J^{I,t,s,n}_{k+1}(z)+ \frac{p}{1+\exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)}J^{I,t,s,n}_{k+1}(z+1)\right.\nonumber \\ &\left. + \frac{p\exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)}{1+\exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)}J^{I,t,s,n}_{k+1}(z-1)\right)\bigg)\bigg)\nonumber\\ &=\max\bigg(f\left(t+k (T-t)/n,s \exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}z\right)\right), \nonumber\\ &\min \bigg(g\left(t+k(T-t)/n,s \exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}z\right)\right),\frac{T-t}{n}h\left(t+k(T-t)/n,s \exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}z\right)\right)+\nonumber\\ &\max_{p\in \left\{\exp\left(-4\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)\underline{\sigma}^2/\overline{\sigma}^2,1\right\}} \left((1-p)J^{I,t,s,n}_{k+1}(z)+\frac{p}{1+\exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)}J^{I,t,s,n}_{k+1}(z+1)\right. \nonumber\\ &\left. 
+ \frac{p\exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)}{1+\exp\left(\overline{\sigma}\sqrt{\frac{T-t}{n}}\right)}J^{I,t,s,n}_{k+1}(z-1)\right)\bigg)\bigg),\nonumber \end{eqnarray} where the last equality follows from the fact that the supremum (maximum) of a linear function of $p$ over an interval is achieved at the endpoints. We get that \begin{equation}\label{recursion3} V^{I,n}(t,s)=J^{I,t,s,n}_0(0). \end{equation} Hence, we see that the computation of $V^{I,n}$ is very simple and its complexity is $O(n^2)$. Next, we formulate our main result. \begin{thm}\label{thm2.1} There exists a constant $C>0$ such that for all $(t,s)\in [0,T]\times\mathbb R_{+}$, \[|V^{I,n}(t,s)-V^{(I)}(t,s)|\leq C(1+s) n^{-1/4}.\] \end{thm} From (\ref{recursion1})--(\ref{recursion2}) and backward induction it follows that for a fixed $n$ the function $J^{I,\cdot,\cdot,n}_0:[0,T]\times\mathbb R_{+}\rightarrow\mathbb R$ is continuous. This together with (\ref{recursion3}) and Theorem \ref{thm2.1} immediately gives the following corollary.\\ \begin{cor}\label{cor2.3} The function $V^{(I)}:[0,T]\times\mathbb R_{+}\rightarrow\mathbb R$ is continuous. \end{cor} \section{Skorokhod Embedding under Model Uncertainty}\label{sec:3}\setcounter{equation}{0} In this section we fix an arbitrary $n\in\mathbb N$. 
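Before turning to the embedding, we note that the backward recursion (\ref{recursion1})--(\ref{recursion3}) of Section 2 is straightforward to implement. The following Python sketch (ours; the payoff functions are placeholders to be supplied by the user) computes $V^{I,n}(t,s)$ with $O(n^2)$ operations.

```python
import math

# Sketch (ours) of the backward recursion computing V^{I,n}(t,s):
# J_n(z) = f(T, x(z));  J_k(z) = max(f, min(g, dt*h + best one-step
# expectation)), where the maximizing total mass p is one of the two
# endpoints {p_min, 1}.  Assumes g >= f, as in the Lipschitz setup.

def value_trinomial(f, g, h, t, s, T, n, sigma_lo, sigma_hi):
    a = sigma_hi * math.sqrt((T - t) / n)
    dt = (T - t) / n
    p_min = math.exp(-4 * a) * sigma_lo ** 2 / sigma_hi ** 2
    q_up = 1.0 / (1 + math.exp(a))       # weight of an up-move given mass p
    q_down = math.exp(a) * q_up          # weight of a down-move given mass p
    x = lambda z: s * math.exp(a * z)    # grid point at log-level z
    J = [f(T, x(z)) for z in range(-n, n + 1)]   # terminal values, z = -n..n
    for k in range(n - 1, -1, -1):
        tk = t + k * dt
        J_new = list(J)
        for z in range(-k, k + 1):
            i = z + n                    # array index of level z
            cont = max((1 - p) * J[i] + p * (q_up * J[i + 1] + q_down * J[i - 1])
                       for p in (p_min, 1.0))
            J_new[i] = max(f(tk, x(z)), min(g(tk, x(z)), dt * h(tk, x(z)) + cont))
        J = J_new
    return J[n]                          # J_0(0) = V^{I,n}(t,s)
```

As a quick consistency check, for $f=g$ and $h=0$ the max/min collapse and the value equals $f(t,s)$, reflecting the martingale property of the one-step transition.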
For any $A\in (0, \overline{\sigma}\sqrt{T/n}]$ and stopping time $\theta\in\mathcal T$ (recall that $\mathcal T$ is the set of all stopping times with respect to the canonical filtration) consider the stopping times \begin{eqnarray}\label{3.0} &\rho^{(\theta)}_A:=\inf\{t\geq \theta: |\ln B_t-\ln B_{\theta}|=A\} \ \ \mbox{and}\\ &\kappa^{(\theta)}_A:=\infty\mathbb I_{\rho^{(\theta)}_A=\infty}+\sum_{i=1}^2 \mathbb I_{\ln B_{\rho^{(\theta)}_A}=\ln B_{\theta}+(-1)^i A}\times\nonumber\\ &\inf\left\{t\geq \rho^{(\theta)}_A: \ln B_t=\ln B_{\theta} \ \mbox{or} \ \ln B_t=\ln B_{\theta}+(-1)^i \overline{\sigma}\sqrt{T/n}\right\},\nonumber \end{eqnarray} where the infimum over an empty set is equal to $\infty$. Set \[z:=z(n)=\exp(-2\overline{\sigma}\sqrt{T/n})\overline{\sigma}^{-2} \frac{\exp(2\overline{\sigma}\sqrt{T/n})+\exp(-2\overline{\sigma}\sqrt{T/n})-2}{2+\exp(\overline{\sigma}\sqrt{T/n})+\exp(-\overline{\sigma}\sqrt{T/n})}.\] Observe that $z=T/n+O(n^{-3/2})$. As usual, we use the convention $O(x)$ to denote a random variable ($z(n)$ is deterministic) that is uniformly (in time and space) bounded after dividing by $x$. We start with the following lemma. \begin{lem}\label{lem3.1} Let $P\in \mathcal P^{(I)}_s$ and let $\theta\in\mathcal T$ satisfy $E_P[\theta]<\infty$. There exists a stopping time $\mathcal T\ni\hat\theta\geq\theta$ such that $P$ a.s. we have $\hat\theta<\infty$ and $\frac{B_{\hat\theta}}{B_{\theta}}\in \left \{\exp(-\overline{\sigma}\sqrt{T/n}),1,\exp(\overline{\sigma}\sqrt{T/n}) \right\}$. 
Furthermore, $E_{P}(\hat\theta-\theta|\mathcal F_{\theta})=z$ and \begin{eqnarray}\label{3.1} &P\left(\frac{B_{\hat\theta}}{ B_{\theta}}=\exp(\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta}\right) \in \frac{1}{1+\exp(\overline\sigma\sqrt{T/n})}\left[\exp\left(-4\overline\sigma\sqrt{T/n}\right)\underline{\sigma}^2/\overline{\sigma}^2,1\right],\\ & P\left(\frac{ B_{\hat\theta}}{ B_{\theta}}=\exp(-\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta}\right)=\exp(\overline{\sigma}\sqrt {T/n}) P\left(\frac{ B_{\hat\theta}}{ B_{\theta}}=\exp(\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta}\right)\label{3.2},\\ &P\left({ B_{\hat\theta}}={ B_{\theta}}|\mathcal F_{\theta}\right)=1- P\left(\frac{B_{\hat\theta}}{B_{\theta}}=\exp(\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta}\right)\label{3.3}\\ &-P\left(\frac{B_{\hat\theta}}{B_{\theta}}=\exp(-\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta}\right).\nonumber \end{eqnarray} Notice the resemblance to the formulas (\ref{2.1})--(\ref{2.1++}). In particular, (\ref{3.1}) gives the technical reason for the definition given by (\ref{2.1}). \end{lem} \begin{proof} Denote $\rho:=\rho^{(\theta)}_{\overline{\sigma}\sqrt{T/n}}$. From the fact that $B$ is a $P$--martingale with volatility bounded away from zero, it follows that $E_P[\rho]<\infty$. Thus, $\frac{B_{\rho}}{B_{\theta}}=\exp(\pm \overline{\sigma}\sqrt{T/n})$, $P$--a.s., and from the martingale property we have \begin{equation*} P\left( B_{\rho}= B_{\theta}\exp(\pm\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta}\right)= \frac{1}{1+\exp(\pm \overline{\sigma}\sqrt{T/n})}. \end{equation*} Hence, \begin{equation}\label{3.1000} E_{P}\left((B_{\rho}-B_{\theta})^2|\mathcal F_{\theta}\right)= \frac{\exp(2\overline{\sigma}\sqrt{T/n})+ \exp(-2\overline{\sigma}\sqrt{T/n})-2}{2+\exp(\overline{\sigma}\sqrt{T/n})+ \exp(-\overline{\sigma}\sqrt{T/n})}B^2_{\theta}. 
\end{equation} From the It\^{o} isometry and the fact that under $P$, the process $B$ is an exponential martingale with volatility less than or equal to $\overline{\sigma}$ we obtain $$E_{P}\left((B_{\rho}-B_{\theta})^2|\mathcal F_{\theta}\right)\leq E_P\left[\int_{\theta}^{\rho} B^2_t\overline{\sigma}^2 dt|\mathcal F_{\theta}\right]\leq \overline{\sigma}^2\exp( 2\overline{\sigma}\sqrt{T/n}) B^2_{\theta}E_{P}(\rho-\theta|\mathcal F_{\theta}),$$ where the last inequality follows from the fact that $B_t\leq \exp(\overline{\sigma}\sqrt{T/n}) B_{\theta}$ for $t\in [\theta,\rho]$. This together with (\ref{3.1000}) yields \begin{equation}\label{3.5} E_{P}(\kappa^{(\theta)}_{\overline{\sigma}\sqrt{T/n}}-\theta|\mathcal F_{\theta})=E_{P}(\rho-\theta|\mathcal F_{\theta})\geq z. \end{equation} Next, we notice that for $A_2>A_1$ we have $\kappa^{(\theta)}_{A_2}>\kappa^{(\theta)}_{A_1}$, $P$ a.s. Moreover, if $A_m\uparrow A$ then $\kappa^{(\theta)}_{A_m}\uparrow \kappa^{(\theta)}_A$ $P$ a.s. Hence, from the Monotone Convergence Theorem \begin{equation}\label{3.6} A_m\uparrow A \ \Rightarrow \ E_{P}(\kappa^{(\theta)}_A-\theta|\mathcal F_{\theta})=\lim_{m\rightarrow\infty}E_{P}(\kappa^{(\theta)}_{A_m}-\theta|\mathcal F_{\theta}). \end{equation} Let $\mathbb Q$ be the set of rational numbers. Define the random variable \[\mathcal Z:=\sup\{q\in\mathbb Q\cap (0, \overline{\sigma}\sqrt{T/n}]:E_{P}(\kappa^{(\theta)}_q-\theta|\mathcal F_{\theta})\leq z \}.\] Clearly, $\mathcal Z$ is $\mathcal F_{\theta}$--measurable. 
Moreover, from the monotonicity property of $\kappa^{(\theta)}_A$ and (\ref{3.5})--(\ref{3.6}), we obtain for the stopping time $\hat\theta:=\kappa^{(\theta)}_{\mathcal Z}$ that $E_{P}(\hat\theta-\theta|\mathcal F_{\theta})=z.$ Finally, from the fact that $\frac{B_{\hat\theta}}{B_{\theta}}\in \left \{\exp(-\overline{\sigma}\sqrt{T/n}),1,\exp(\overline{\sigma}\sqrt{T/n}) \right\}$ and $E_{P}\left(\frac{B_{\hat\theta}}{B_{\theta}}|\mathcal F_{\theta}\right)=1$ we conclude that (\ref{3.2})--(\ref{3.3}) hold true. Thus, \begin{eqnarray}\label{3.1001} &E_{P}\left(B^2_{\hat\theta}/B^2_{\theta}-1|\mathcal F_{\theta}\right)= \left(\exp(2\overline{\sigma}\sqrt{T/n})+\exp(-\overline{\sigma}\sqrt{T/n})\right)\times\\ &P\left(\frac{B_{\hat\theta}}{ B_{\theta}}=\exp(\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta}\right)- \left(1+\exp(\overline{\sigma}\sqrt{T/n})\right)P\left(\frac{B_{\hat\theta}}{B_{\theta}}=\exp(\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta}\right).\nonumber \end{eqnarray} By applying the It\^{o} isometry, we obtain $$ E_P\left[\int_{\theta}^{\hat\theta} B^2_t\underline{\sigma}^2 dt|\mathcal F_{\theta}\right]\leq E_{P}\left(B^2_{\hat\theta}-B^2_{\theta}|\mathcal F_{\theta}\right)\leq E_P\left[\int_{\theta}^{\hat\theta} B^2_t\overline{\sigma}^2 dt|\mathcal F_{\theta}\right].$$ This together with the equality $E_{P}(\hat\theta-\theta|\mathcal F_{\theta})=z$ and the inequality $\exp(-\overline{\sigma}\sqrt{T/n}) B_{\theta}\leq B_t\leq \exp(\overline{\sigma}\sqrt{T/n})B_{\theta}$ gives $$E_{P}\left(B^2_{\hat\theta}/B^2_{\theta}-1|\mathcal F_{\theta}\right)\in z[\underline{\sigma}^{2}\exp(-2\overline{\sigma}\sqrt{T/n}),\overline{\sigma}^{2}\exp(2\overline{\sigma}\sqrt{T/n})].$$ Hence, from (\ref{3.1001}) and the definition of $z$ we conclude (\ref{3.1}), which completes the proof. \end{proof} Next, for a given initial stock price $s>0$, we construct an embedding of probability measures $\Psi_n:\mathcal P^{I,0,n}\rightarrow \mathcal P^{(I)}_s$. 
Choose $P\in \mathcal P^{I,0,n}$. There exist functions \[\phi_i:\{-1,0,1\}^i\rightarrow \frac{1}{1+\exp(\overline{\sigma}\sqrt{T/n})}\left[\exp\left(-4\overline{\sigma}\sqrt{T/n}\right)\underline{\sigma}^2/\overline{\sigma}^2,1\right], \ \ i=0,1,...,n-1\] such that (\ref{2.1}) holds true with \begin{equation*} P(\xi^{(n)}_k=1|\mathcal F^{(n)}_{k-1})= \phi_{k-1}(\xi^{(n)}_1,...,\xi^{(n)}_{k-1}), \ \ k=1,...,n. \end{equation*} Recall the canonical space $\Omega=C(\mathbb R_{+},\mathbb R)$. On this sample space we define a sequence of random variables $A_0,...,A_{n},\theta_0,...,\theta_{n}$ by the following recursion. Let $\theta_0:=0$ and let $A_0\in (0,\overline{\sigma}\sqrt{T/n}]$ be the unique solution of the equation \begin{equation*} \frac{\exp(x)-1}{(1+\exp(x))(\exp(\overline{\sigma}\sqrt{T/n})-1)}=\phi_0. \end{equation*} Recall the definition given by (\ref{3.0}). For $k=1,...,n$ set $\theta_k:=\kappa^{(\theta_{k-1})}_{A_{k-1}}$, and on the event $\{\theta_k<\infty\}$ define $A_{k}\in (0,\overline{\sigma}\sqrt{T/n}]$ to be the unique solution of the equation \begin{eqnarray*} &\frac{\exp(x)-1}{(1+\exp(x))(\exp(\overline{\sigma}\sqrt{T/n})-1)}= \\ &\phi_{k}\left(\overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_1}- \ln B_{\theta_0}),..., \overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_{k}}-\ln B_{\theta_{k-1}}) \right). \end{eqnarray*} On the event $\{\theta_k=\infty\}$ we set $A_k=0$. Define the random variables $\sigma_0,...,\sigma_{n-1}$ by \begin{eqnarray}\label{3.50} &\sigma_{k}:=\mathbb{I}_{\theta_k<\infty}\max\bigg(\underline{\sigma},\overline\sigma \sqrt{1+\exp(\overline{\sigma}\sqrt{T/n})}\times\\ &\left(\phi_{k}\left(\overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_1}- \ln B_{\theta_0}),..., \overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_{k}}-\ln B_{\theta_{k-1}})\right)\right)^{1/2} \bigg).\nonumber \end{eqnarray} Observe that on the event $\{\theta_k<\infty\}$ we have $\sigma_k\in I$. 
Thus, the fact that the volatility interval $I$ is bounded away from zero implies that there exists a unique probability measure $\hat P:=\Psi_n(P)\in \mathcal P^{(I)}_s$ such that $E_{\hat P}[\theta_n]<\infty$, and that for any $k<n$, $B^{-1}_t\sqrt\frac{d\langle B\rangle _t}{dt}\equiv \sigma_k$ on the random interval $[\theta_k,\theta_{k+1})$ $\hat P$ a.s. \begin{lem}\label{lem3.2} The joint distribution of $\ln B_{\theta_1}- \ln B_{\theta_0},..., \ln B_{\theta_{n}}-\ln B_{\theta_{n-1}}$ under $\hat P$ is equal to the joint distribution of $\overline{\sigma}\sqrt {T/n}\xi^{(n)}_1,...,\overline{\sigma}\sqrt {T/n}\xi^{(n)}_{n}$ under $P$. Moreover, for any $k<n$, $\hat P(B_{\theta_{k+1}}|\mathcal F_{\theta_k})=\hat P(B_{\theta_{k+1}}|B_{\theta_1},...,B_{\theta_k})$ and $E_{\hat P}(\theta_{k+1}-\theta_{k}|\mathcal F_{\theta_k})=T/n+O(n^{-3/2}).$ \end{lem} \begin{proof} For any $k$ we have $\frac{B_{\theta_{k+1}}}{B_{\theta_k}}\in \left \{\exp(-\overline{\sigma}\sqrt{T/n}),1,\exp(\overline{\sigma}\sqrt{T/n}) \right\}$ and $E_{\hat P}\left(\frac{B_{\theta_{k+1}}}{B_{\theta_k}}|\mathcal F_{\theta_k}\right)=1$. Fix $k<n$. 
We argue that \begin{eqnarray}\label{3.yan} &\hat P\left(\frac{B_{\theta_{k+1}}}{B_{\theta_k}}=\exp(\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta_k}\right)=\\ &\phi_{k}\left(\overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_1}- \ln B_{\theta_0}),..., \overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_{k}}-\ln B_{\theta_{k-1}})\right).\nonumber \end{eqnarray} Indeed, from (\ref{3.0}), the definition of $A_k$ and the martingale property of $B$ we get \begin{eqnarray*} &\hat P\left(\frac{B_{\theta_{k+1}}}{B_{\theta_k}}=\exp(\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta_k}\right)=\\ &\hat P\left(B_{\rho^{(A_k)}_{\theta_k}}=\exp(A_k) B_{\theta_k}|\mathcal F_{\theta_k}\right)\times\\ &\hat P\left(B_{\theta_{k+1}}=\exp(\overline{\sigma}\sqrt{T/n})B_{\theta_{k}}|B_{\rho^{(A_k)}_{\theta_k}}=\exp(A_k) B_{\theta_k},\mathcal F_{\theta_k}\right)=\\ &\frac{1}{1+\exp(A_k)}\frac{\exp(A_k)-1}{\exp(\overline\sigma\sqrt{T/n})-1}=\\ &\phi_{k}\left(\overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_1}- \ln B_{\theta_0}),..., \overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_{k}}-\ln B_{\theta_{k-1}})\right) \end{eqnarray*} as required. In particular $\hat P(B_{\theta_{k+1}}|\mathcal F_{\theta_k})=\hat P(B_{\theta_{k+1}}|B_{\theta_1},...,B_{\theta_k})$. Furthermore, from the definition of $\phi_k$, $k=0,1,...,n-1$ we conclude that the joint distribution of $\ln B_{\theta_1}- \ln B_{\theta_0},..., \ln B_{\theta_{n}}-\ln B_{\theta_{n-1}}$ is equal to the joint distribution of $\overline{\sigma}\sqrt {T/n}\xi^{(n)}_1,...,\overline{\sigma}\sqrt {T/n}\xi^{(n)}_{n}$. Finally, we estimate $E_{\hat P}(\theta_{k+1}-\theta_{k}|\mathcal F_{\theta_k})$. 
From (\ref{3.50}) and the inequality \[\phi_k\geq \frac{1}{1+\exp(\overline{\sigma}\sqrt{T/n})}\exp\left(-4\overline{\sigma}\sqrt{T/n}\right)\underline{\sigma}^2/\overline{\sigma}^2\] we get \begin{eqnarray*} &\phi_{k}\left(\overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_1}- \ln B_{\theta_0}),..., \overline{\sigma}^{-1}(T/n)^{-1/2}(\ln B_{\theta_{k}}-\ln B_{\theta_{k-1}}) \right)=\\ &\sigma^2_k\left(\frac{1}{\overline\sigma^2(1+\exp(\overline\sigma\sqrt{T/n}))}+O(\sqrt{T/n})\right). \end{eqnarray*} This together with (\ref{3.2})--(\ref{3.3}) and (\ref{3.yan}) yields \begin{eqnarray}\label{3.yann} & E_{\hat P}\left( B^2_{\theta_{k+1}}/B^2_{\theta_k}-1|\mathcal F_{\theta_k}\right)\\ &=\left(\exp(2\overline{\sigma}\sqrt{T/n})+\exp(-\overline{\sigma}\sqrt{T/n})-1-\exp(\overline{\sigma}\sqrt{T/n})\right)\times\nonumber\\ &\sigma^2_k\left(\frac{1}{\overline\sigma^2(1+\exp(\overline\sigma\sqrt{T/n}))}+O(\sqrt{T/n})\right)=\sigma^2_k\left(\frac{T}{n}+O(n^{-3/2})\right).\nonumber \end{eqnarray} From the It\^{o} isometry and the fact that (under the probability measure $\hat P$) the volatility of the canonical process $B$ is constant (equal to $\sigma_k$) on the interval $[\theta_k,\theta_{k+1})$ we obtain $$E_{\hat P}\left( B^2_{\theta_{k+1}}/B^2_{\theta_k}-1|\mathcal F_{\theta_k}\right)\in \sigma^2_k E_{\hat P}(\theta_{k+1}-\theta_{k}|\mathcal F_{\theta_k})[\exp(-2\overline\sigma\sqrt{T/n}),\exp(2\overline\sigma\sqrt{T/n})].$$ Thus, from (\ref{3.yann}) it follows that $E_{\hat P}(\theta_{k+1}-\theta_{k}|\mathcal F_{\theta_k})=(1+O(1/\sqrt n))\frac{T}{n}$, and the proof is completed. \end{proof} \section{Proof of Theorem \ref{thm2.1}}\label{sec:4}\setcounter{equation}{0} For simplicity, we assume that the starting time is $t=0$. For a general $t\in [0,T]$ the proof is done in the same way. Denote by $s>0$ the initial stock price. \subsection{Proof of the inequality $V^{(I)}(0,s)\leq V^{I,n}(0,s)+C(1+s) n^{-1/4}$} \begin{proof} Fix $n\in\mathbb N$ and choose $\epsilon>0$.
There exists a probability measure $P^*\in \mathcal P^{(I)}_s$ and a stopping time $\tau^*\in\mathcal T_T$ such that \begin{equation}\label{4.1} V^{(I)}(0,s)\leq \epsilon+ \inf_{\gamma\in\mathcal T_{T}}E_{P^*} \left[g(\gamma, B_{\gamma}){\mathbb I}_{\gamma<\tau^*}+ f(\tau^*,B_{\tau^*})\mathbb{I}_{\tau^*\leq\gamma}+\int_{0}^{\gamma\wedge\tau^*}h(u, B_{u})du\right]. \end{equation} From Lemma \ref{lem3.1} it follows that we can choose a sequence of stopping times $0=\theta_0<\theta_1<\theta_2<...<\theta_n$ such that $P^*$ a.s., for any $i=1,...,n$ \[ \frac{B_{\theta_i}}{B_{\theta_{i-1}}}\in \left \{\exp(-\overline{\sigma}\sqrt{T/n}),1,\exp(\overline{\sigma}\sqrt{T/n}) \right\},\] \begin{eqnarray*} &P^*\left(\frac{B_{\theta_i}}{B_{\theta_{i-1}}}=\exp(\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i-1}}\right) \in \frac{1}{1+\exp(\overline{\sigma}\sqrt{T/n})}\left[\exp\left(-4\overline{\sigma}\sqrt{T/n}\right)\underline{\sigma}^2/\overline{\sigma}^2,1\right],\\ & P^*\left(\frac{B_{\theta_i}}{B_{\theta_{i-1}}}=\exp(-\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta_{i-1}}\right)=\exp(\overline{\sigma}\sqrt{T/n}) P^*\left(\frac{ B_{\theta_i}}{ B_{\theta_{i-1}}}=\exp(\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i-1}}\right)\label{4.2},\\ & P^*\left({B_{\theta_i}}={ B_{\theta_{i-1}}}|\mathcal F_{\theta_{i-1}}\right)=1- P^*\left(\frac{B_{\theta_i}}{B_{\theta_{i-1}}}=\exp(\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i-1}}\right)\label{4.3}\\ &- P^*\left(\frac{B_{\theta_i}}{B_{\theta_{i-1}}}=\exp(-\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i-1}}\right),\nonumber \end{eqnarray*} and $E_{P^*}(\theta_i-\theta_{i-1}|\mathcal F_{\theta_{i-1}})=z$ where $z=z(n)$ is given before Lemma \ref{lem3.1}.
In words, we apply the Skorokhod embedding technique given by Lemma \ref{lem3.1} in order to construct a sequence of stopping times such that the ratio of $B$ between two consecutive times belongs to $\left\{\exp\left(-\bar\sigma\sqrt\frac{T}{n}\right),1,\exp\left(\bar\sigma\sqrt\frac{T}{n}\right)\right\}$. Moreover, the expectation of the difference between two consecutive times is $\frac{T}{n}+O(n^{-3/2})$. The last fact will be used via the Auxiliary Lemmas \ref{lem5.3}--\ref{stoppingtimes}. Now comes the main idea of the proof. Recall the geometric random walk $\{S^{0,s,n}_k\}_{k=0}^n$ and the trinomial models given by the set of probability measures $\mathcal P^{I,0,n}$. From (\ref{2.1})--(\ref{2.1++}) and the above properties of the probability measure $P^*$ it follows that there exists a probability measure $\tilde P\in \mathcal P^{I,0,n}$ such that the distribution of $\{B_{\theta_i}\}_{i=0}^n$ under $P^{*}$ equals the distribution of $\{S^{0,s,n}_k\}_{k=0}^n$ under $\tilde P$. Moreover, using similar arguments as in Lemma \ref{lem3.2} we obtain that for any $k<n$, $P^{*}(B_{\theta_{k+1}}|\mathcal F_{\theta_k})=P^{*}(B_{\theta_{k+1}}|B_{\theta_1},...,B_{\theta_k}).$ The above two properties give \begin{eqnarray*} &\max_{\eta\in\mathcal T_n} \min_{\zeta\in\mathcal T_n} E_{\tilde P}[g(\zeta T/n,S^{0,s,n}_{\zeta})\mathbb{I}_{\zeta<\eta}\nonumber\\ &+f(\eta T/n,S^{0,s,n}_{\eta})\mathbb{I}_{\eta\leq\zeta}+\frac{T}{n}\sum_{k=0}^{\zeta\wedge\eta-1} h(kT/n,S^{0,s,n}_{k})]=\\ &\sup_{\eta\in\mathcal S_n}\inf_{\zeta\in\mathcal S_n}E_{P^{*}}[g(\zeta T/n,B_{\theta_{\zeta}})\mathbb{I}_{\zeta<\eta}\\ &+f(\eta T/n,B_{\theta_{\eta}})\mathbb{I}_{\eta\leq\zeta}+\frac{T}{n}\sum_{k=0}^{\zeta\wedge\eta-1} h(k T/n,B_{\theta_k})].
\end{eqnarray*} Hence, we conclude \begin{eqnarray}\label{4.4} &V^{I,n}(0,s)\geq\sup_{\eta\in\mathcal S_n} \inf_{\zeta\in\mathcal S_n}E_{P^{*}}[g(\zeta T/n,B_{\theta_{\zeta}})\mathbb{I}_{\zeta<\eta}\\ &+f(\eta T/n,B_{\theta_{\eta}})\mathbb{I}_{\eta\leq\zeta}+\frac{T}{n}\sum_{k=0}^{\zeta\wedge\eta-1} h(k T/n,B_{\theta_k})].\nonumber \end{eqnarray} The final step is technical. We are using (\ref{4.1})--(\ref{4.4}) in order to bound from above the difference $V^{(I)}(0,s)-V^{I,n}(0,s)$. Introduce the stopping time $\eta^*:=n\wedge\min\{k:\theta_k\geq\tau^*\}\in\mathcal S_n$. In view of (\ref{4.4}) there exists a stopping time $\zeta^*\in\mathcal S_n$ such that \begin{eqnarray}\label{4.5} &V^{I,n}(0,s)\geq\\ &E_{P^*}\left[g(\zeta^* T/n,B_{\theta_{\zeta^*}})\mathbb{I}_{\zeta^*<\eta^*} +f(\eta^* T/n,B_{\theta_{\eta^*}})\mathbb{I}_{\eta^*\leq\zeta^*}+\frac{T}{n}\sum_{k=0}^{\zeta^*\wedge\eta^*-1} h(k T/n,B_{\theta_k})\right]-\epsilon.\nonumber \end{eqnarray} Define the stopping time $\gamma^*:=(T\wedge\theta_{\zeta^*})\mathbb{I}_{\zeta^*<n}+T\mathbb{I}_{\zeta^*=n}\in\mathcal T_T$. From (\ref{4.1}) and (\ref{4.5}) we obtain that \begin{eqnarray}\label{4.6} &V^{(I)}(0,s)\leq V^{I,n}(0,s)+2\epsilon+\\ & E_{P^*}[g(\gamma^*,B_{\gamma^*})\mathbb{I}_{\gamma^*<\tau^*}-g(\zeta^* T/n,B_{\theta_{\zeta^*}})\mathbb{I}_{\zeta^*<\eta^*}]\nonumber\\ &+E_{P^*}[f(\tau^*,B_{\tau^*})\mathbb{I}_{\tau^*\leq\gamma^*} -f(\eta^* T/n,B_{\theta_{\eta^*}})\mathbb{I}_{\eta^*\leq\zeta^*}]\nonumber\\ &+\nonumber E_{P^*}[\int_{0}^{\gamma^*\wedge\tau^*}h(u, B_{u})du-\frac{T}{n}\sum_{k=0}^{\zeta^*\wedge\eta^*-1} h(k T/n,B_{\theta_k})].\nonumber \end{eqnarray} For technical reasons we extend the function $h$ to the domain $\mathbb R^2$ by $h(t,x):=h(t\wedge T,x)$. Clearly, the extended $h$ satisfies the Lipschitz condition given by (\ref{2.function}) on the domain $\mathbb R^2$. We observe that if $\gamma^*<\tau^*$, then $\zeta^*<\eta^*$.
This together with (\ref{2.function}), which in particular implies that $h(t,x)=O(1)(1+|x|)(1+t)$, and (\ref{4.6}) gives \begin{eqnarray}\label{4.7} &V^{(I)}(0,s)\leq V^{I,n}(0,s)+2\epsilon+O(1)E_{P^*}|B_{\gamma^*\wedge\tau^*}-B_{\theta_{\zeta^*\wedge\eta^*}}|+\\ &O(1)E_{P^*}\left[(1+\sup_{0\leq t\leq\theta_n\vee T} B_t)(1+\theta_n\vee T)\times\right.\nonumber\\ &\left.(|\gamma^*\wedge\tau^*-\zeta^*\wedge\eta^*\frac{T}{n}|+|\gamma^*\wedge\tau^*-\theta_{\zeta^*\wedge\eta^*}|)\right]\nonumber\\ &+E_{P^*}\left(\max_{1\leq k\leq n}\left|\int_{0}^{\theta_k} h(t,B_t)dt-\frac{T}{n}\sum_{i=0}^{k-1} h(i T/n, B_{\theta_i})\right|\right).\nonumber \end{eqnarray} From the definition of the stopping times $\eta^*$ and $\gamma^*$ it follows that $|\gamma^*\wedge\tau^*-\zeta^*\wedge\eta^* \frac{T}{n}|\leq \max_{1\leq k\leq n}|\theta_k-k T/n|+T/n$ and \[|\gamma^*\wedge\tau^*-\theta_{\zeta^*\wedge\eta^*}|\leq |T-\theta_n|+\max_{1\leq k\leq n}(\theta_k-\theta_{k-1})\leq 3\max_{1\leq k\leq n}|\theta_k-k T/n|+T/n.\] Hence, from the Cauchy--Schwarz inequality, the Jensen inequality, Lemma \ref{moments} and Lemma \ref{lem5.3} it follows that \begin{eqnarray}\label{4.8} &E_{P^*}\left[(1+\sup_{0\leq t\leq\theta_n\vee T} B_t)(1+\theta_n\vee T)\times\right.\\ &\left.(|\gamma^*\wedge\tau^*-\zeta^*\wedge\eta^*\frac{T}{n}|+|\gamma^*\wedge\tau^*-\theta_{\zeta^*\wedge\eta^*}|)\right]\leq\nonumber\\ &\left(E_{P^*}((1+\sup_{0\leq t\leq\theta_n\vee T} B_t)^4)\right)^{1/4} \left(E_{P^*}((1+\theta_n\vee T)^4)\right)^{1/4}\times\nonumber\\ &\left(E_{P^*}((4 \max_{1\leq k\leq n}|\theta_k-k T/n|+ 2 T/n)^2)\right)^{1/2}=O((1+s) n^{-1/2}).\nonumber \end{eqnarray} Similarly, from the It\^{o} isometry we obtain \[E_{P^*}((B_{\gamma^*\wedge\tau^*}-B_{\theta_{\zeta^*\wedge\eta^*}})^2)\leq E_{P^*}[\overline\sigma^2\max_{0\leq t\leq\theta_n\vee T}B^2_t |\gamma^*\wedge\tau^*-\theta_{\zeta^*\wedge\eta^*}|]=O(s^2 n^{-1/2}).\] This together with the Jensen inequality, (\ref{4.7})--(\ref{4.8}) and Lemma \ref{stoppingtimes}
gives that $$V^{(I)}(0,s)\leq V^{I,n}(0,s)+2\epsilon+O((1+s) n^{-1/4})$$ and by letting $\epsilon\downarrow 0$ we complete the proof. \end{proof} \subsection{Proof of the inequality $V^{I,n}(0,s)\leq V^{(I)}(0,s)+C(1+s) n^{-1/4}$} \begin{proof} The proof is very similar to the proof of the first inequality. Fix $n\in\mathbb N$ and choose $\epsilon>0$. We abuse notation and denote by $P^{*}$ a probability measure in $\mathcal{P}^{I,0,n}$ which satisfies \begin{eqnarray}\label{4.9} &V^{I,n}(0,s)\leq \epsilon+ \max_{\eta\in\mathcal T_n} \min_{\zeta\in\mathcal T_n} E_{P^*}[g(\zeta T/n,S^{0,s,n}_{\zeta})\mathbb{I}_{\zeta<\eta}\\ &+f(\eta T/n,S^{0,s,n}_{\eta})\mathbb{I}_{\eta\leq\zeta}+\frac{T}{n}\sum_{k=0}^{\zeta\wedge\eta-1} h(kT/n,S^{0,s,n}_{k})].\nonumber \end{eqnarray} Recall the definition of $\hat P^{*}:=\Psi_n(P^{*})$ and the stopping times $0=\theta_0<\theta_1<...<\theta_n$ given before Lemma \ref{lem3.2}. Denote by $\mathcal S_n$ the set of all stopping times with respect to the filtration $\{\mathcal F_{\theta_i}\}_{i=0}^n$ with values in the set $\{0,1,...,n\}$. By applying Lemma \ref{lem3.2} and the same arguments as before (\ref{4.4}) it follows that \begin{eqnarray}\label{4.10} &\max_{\eta\in\mathcal T_n} \min_{\zeta\in\mathcal T_n} E_{P^{*}}[g(\zeta T/n,S^{0,s,n}_{\zeta})\mathbb{I}_{\zeta<\eta}\\ &+f(\eta T/n,S^{0,s,n}_{\eta})\mathbb{I}_{\eta\leq\zeta}+\frac{T}{n}\sum_{k=0}^{\zeta\wedge\eta-1} h(kT/n,S^{0,s,n}_{k})]=\nonumber\\ &\sup_{\eta\in\mathcal S_n} \inf_{\zeta\in\mathcal S_n} E_{\hat P^{*}}[g(\zeta T/n,B_{\theta_{\zeta}})\mathbb{I}_{\zeta<\eta}\nonumber\\ &+f(\eta T/n,B_{\theta_{\eta}})\mathbb{I}_{\eta\leq\zeta}+\frac{T}{n}\sum_{k=0}^{\zeta\wedge\eta-1} h(kT/n,B_{\theta_k})].\nonumber \end{eqnarray} The equality (\ref{4.10}) is the cornerstone of the proof. The remaining part is technical, and we use (\ref{4.9})--(\ref{4.10}) to estimate from above the difference $V^{I,n}(0,s)-V^{(I)}(0,s)$.
Indeed, from (\ref{4.9})--(\ref{4.10}) it follows that there exists $\eta^{*}\in\mathcal S_n$ (again we abuse notation) such that \begin{eqnarray}\label{4.11} &V^{I,n}(0,s)\leq 2\epsilon+ \inf_{\zeta\in\mathcal S_n} E_{\hat P^*}[g(\zeta T/n,B_{\theta_{\zeta}})\mathbb{I}_{\zeta<\eta^*}\\ &+f(\eta^* T/n,B_{\theta_{\eta^*}})\mathbb{I}_{\eta^*\leq\zeta}+\frac{T}{n}\sum_{k=0}^{\zeta\wedge\eta^*-1} h(kT/n,B_{\theta_k})].\nonumber \end{eqnarray} Define the stopping time $\tau^*:=\theta_{\eta^*}\wedge T\in\mathcal T_T$. Clearly, there exists a stopping time $\gamma^{*}\in\mathcal T_T$ such that \begin{equation*}\label{4.12} V^{(I)}(0,s)\geq E_{\hat P^*}\left[g(\gamma^*,B_{\gamma^*}){\mathbb I}_{\gamma^*<\tau^*}+ f(\tau^*,B_{\tau^*})\mathbb{I}_{\tau^*\leq\gamma^*}+\int_{0}^{\gamma^*\wedge\tau^*}h(u, B_{u})du\right]-\epsilon. \end{equation*} Next, introduce the stopping time $\zeta^*:=(n\wedge\min\{k:\theta_k\geq\gamma^*\})\mathbb{I}_{\gamma^*<T}+n\mathbb{I}_{\gamma^*=T}\in\mathcal S_n$. We observe that if $\zeta^*<\eta^*$ then $\gamma^*<\tau^*$. Thus, similarly to (\ref{4.7}) we get \begin{eqnarray*} &V^{I,n}(0,s)\leq V^{(I)}(0,s)+3\epsilon+O(1)E_{\hat P^*}|B_{\gamma^*\wedge\tau^*}-B_{\theta_{\zeta^*\wedge\eta^*}}|+\\ & O(1) E_{\hat P^*}\left[(1+\sup_{0\leq t\leq\theta_n\vee T} B_t)(1+\theta_n\vee T)(|\gamma^*\wedge\tau^*-\zeta^*\wedge\eta^*\frac{T}{n}|+|\gamma^*\wedge\tau^*-\theta_{\zeta^*\wedge\eta^*}|)\right]\nonumber\\ &+E_{\hat P^*}\left(\max_{1\leq k\leq n}\left|\int_{0}^{\theta_k} h(t,B_t)dt-\frac{T}{n}\sum_{i=0}^{k-1} h(i T/n, B_{\theta_i})\right|\right).\nonumber \end{eqnarray*} Finally, by using the same estimates as in Section 4.1, we obtain that $$V^{I,n}(0,s)\leq V^{(I)}(0,s)+3\epsilon+O((1+s) n^{-1/4})$$ and by letting $\epsilon\downarrow 0$ we complete the proof. \end{proof} \begin{rem} Let us notice that in the present setup of model uncertainty we get the same error estimates as in the case with no uncertainty, which was studied in \cite{BDG}.
The main reason is that Lemma \ref{lem5.3}, which is essential for the proof, cannot be improved even for the most simple case where the canonical process is a geometric Brownian motion with constant volatility. Namely, the Skorokhod embedding technique cannot provide error estimates of order better than $O(n^{-1/4})$ even for the approximations of American or game options in the Black--Scholes model. For details, see \cite{Ki1}. Fortunately, the same estimates can be obtained for the volatility uncertainty setup. \end{rem} \section{Auxiliary Lemmas}\label{sec:5}\setcounter{equation}{0} In this section we derive the estimates that we used in Section \ref{sec:4}. We fix $n\in\mathbb N$ and a probability measure $P\in \mathcal P^{(I)}_s$. Furthermore, we fix a sequence of stopping times $0=\theta_0<\theta_1<...<\theta_n$ for which we assume that for any $i=1,...,n$, $ \frac{B_{\theta_i}}{B_{\theta_{i-1}}}\in \left \{\exp(-\overline{\sigma}\sqrt{T/n}),1,\exp(\overline{\sigma}\sqrt{T/n}) \right\}$ $P$--a.s., \begin{eqnarray*} &P\left(\frac{B_{\theta_i}}{B_{\theta_{i-1}}}=\exp(\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i-1}}\right) \in \frac{1}{1+\exp(\overline{\sigma}\sqrt{T/n})}\left[\exp\left(-4\overline{\sigma}\sqrt{T/n}\right)\underline{\sigma}^2/\overline{\sigma}^2,1\right],\\ & P\left(\frac{B_{\theta_i}}{B_{\theta_{i-1}}}=\exp(-\overline{\sigma}\sqrt{T/n})|\mathcal F_{\theta_{i-1}}\right)=\exp(\overline{\sigma}\sqrt{T/n}) P\left(\frac{ B_{\theta_i}}{ B_{\theta_{i-1}}}=\exp(\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i-1}}\right)\label{4.2},\\ & P\left({B_{\theta_i}}={ B_{\theta_{i-1}}}|\mathcal F_{\theta_{i-1}}\right)=1- P\left(\frac{B_{\theta_i}}{B_{\theta_{i-1}}}=\exp(\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i-1}}\right)\label{4.3}\\ &- P\left(\frac{B_{\theta_i}}{B_{\theta_{i-1}}}=\exp(-\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i-1}}\right),\nonumber \end{eqnarray*} and that for any $i<n$, $E_P(\theta_{i+1}-\theta_i|\mathcal F_{\theta_i})=T/n+O(n^{-3/2})$.
Observe that the stopping times $0=\theta_0<\theta_1<...<\theta_n$ from both Section 4.1 and Section 4.2 satisfy the above conditions. We start by proving the following bound. \begin{lem}\label{moments} \[E_P\left(\sup_{0\leq t\leq T\vee\theta_n}B^4_t\right)=O(1)s^4.\] \end{lem} \begin{proof} Clearly, for any $i<n$, \begin{eqnarray*} &E_P(B^4_{\theta_{i+1}}-B^4_{\theta_{i}}|\mathcal F_{\theta_i})= B^4_{\theta_{i}} P\left(\frac{B_{\theta_{i+1}}}{B_{\theta_{i}}}=\exp(\overline{\sigma}\sqrt {T/n})|\mathcal F_{\theta_{i}}\right)\times\\ &\left(\exp(4\overline{\sigma}\sqrt{T/n})-1+\exp(\overline{\sigma}\sqrt{T/n}) (\exp(-4\overline{\sigma}\sqrt{T/n})-1)\right)\leq B^4_{\theta_{i}} O(1/n). \end{eqnarray*} Hence, $E_P(B^4_{\theta_{n}})\leq s^4(1+O(1/n))^n= O(1)s^4$. This together with the Doob inequality gives that \begin{equation}\label{5.1} E_P\left(\sup_{0\leq t\leq \theta_n}B^4_t\right)=O(1)s^4. \end{equation} Next, we notice that the inequality $B^{-1}_t\sqrt\frac{d\langle B\rangle _t}{dt}\leq\overline\sigma$ together with the It\^{o} formula implies that $\exp(-6\overline\sigma^2 t)B^4_t$, $t\geq 0$ is a super--martingale. In particular, $E_P B^4_T\leq \exp(6\overline\sigma^2 T)s^4$. Thus, from the Doob inequality and (\ref{5.1}) we obtain \[E_P\left(\sup_{0\leq t\leq T\vee\theta_n}B^4_t\right)\leq E_P\left(\sup_{0\leq t\leq T}B^4_t\right)+E_P\left(\sup_{0\leq t\leq \theta_n}B^4_t\right)= O(1)s^4\] and the proof is completed. \end{proof} Next, we prove the following. \begin{lem}\label{lem5.2} For any $i=0,1,...,n-1$, $E_P((\theta_{i+1}-\theta_i)^4|\mathcal F_{\theta_i})=O(n^{-4})$. \end{lem} \begin{proof} Choose $i<n$.
From the Burkholder--Davis--Gundy inequality, the inequality $\frac{d\langle B\rangle_t}{dt}\geq\underline\sigma^2 B^2_t$ and the fact that $\frac{B_t}{B_{\theta_i}}\in [\exp(-\overline\sigma\sqrt{T/n}),\exp(\overline\sigma\sqrt{T/n})]$ for $t\in [\theta_i,\theta_{i+1}]$ it follows that \begin{eqnarray*} &\underline\sigma^8\exp(-8\overline\sigma\sqrt{T/n})B^8_{\theta_i}E_P((\theta_{i+1}-\theta_i)^4|\mathcal F_{\theta_i})\leq\\ & E_P \left((\langle B\rangle_{\theta_{i+1}}-\langle B\rangle_{\theta_{i}})^4|\mathcal F_{\theta_i}\right)=O(1) E_P((B_{\theta_{i+1}}- B_{\theta_i})^8|\mathcal F_{\theta_i})=O(n^{-4}) B^8_{\theta_i} \end{eqnarray*} and the result follows. \end{proof} We arrive at our next estimate. \begin{lem}\label{lem5.3} $E_P\left(\max_{0\leq k\leq n}|\theta_k- kT/n|^4\right)= O(n^{-2}).$ \end{lem} \begin{proof} Set $Z_i:=\theta_{i}-\theta_{i-1}-E_P(\theta_{i}-\theta_{i-1}|\mathcal F_{\theta_{i-1}})$, $i=1,...,n$. We use the fact that the expectation of the difference between two consecutive times is approximately equal to the time step. Formally, for any $i$, we have $E_P(\theta_{i}-\theta_{i-1}-T/n|\mathcal F_{\theta_{i-1}})=O(n^{-3/2})$. Hence, \[\max_{0\leq k\leq n}|\theta_k- kT/n|= O(n^{-1/2})+\max_{1\leq k\leq n}|\sum_{i=1}^{k}Z_i|.\] In view of the inequality $(a+b)^4\leq 8(a^4+b^4)$, $a,b\geq 0$ it remains to prove that $E_P \left(\left(\max_{1\leq k\leq n}|\sum_{i=1}^{k}Z_i|\right)^4\right)=O(n^{-2})$. From the Jensen inequality and Lemma \ref{lem5.2} it follows that $E_P\left((E_P(\theta_{i}-\theta_{i-1}|\mathcal F_{\theta_{i-1}}))^4\right)=O(n^{-4})$ for all $i$. This together with the inequality $(a-b)^4\leq a^4+b^4,$ $a,b\geq 0$ implies that $E_P[Z^4_i]=O(n^{-4})$ for all $i$.
Thus, from the Burkholder--Davis--Gundy inequality applied to the martingale $\sum_{i=1}^k Z_i$, $k=1,...,n$, and the inequality $\left(\sum_{i=1}^n a_i\right)^2\leq n\left(\sum_{i=1}^n a^2_i\right)$, $a_1,...,a_n\geq 0$, we obtain \[E_P \left(\left(\max_{1\leq k\leq n}|\sum_{i=1}^{k}Z_i|\right)^4\right)=O(1)E_P\left(\left(\sum_{i=1}^n Z^2_i\right)^2\right)=O(n)\sum_{i=1}^n E_P Z^4_i=O(n^{-2})\] as required. \end{proof} We end this section by proving the next estimate. \begin{lem}\label{stoppingtimes} \[E_P\left(\max_{0\leq k\leq n}\left|\int_{0}^{\theta_k} h(t,B_t) dt-\frac{T}{n}\sum_{i=0}^{k-1} h(i T/n, B_{\theta_i})\right|\right)= O((1+s) n^{-1/2}).\] \end{lem} \begin{proof} Clearly, \[\max_{0\leq k\leq n}\left|\int_{0}^{\theta_k} h(t,B_t) dt-\frac{T}{n}\sum_{i=0}^{k-1} h(i T/n, B_{\theta_i})\right|\leq J_1+J_2+\theta_n J_3\] where \begin{eqnarray*} &J_1:=\max_{1\leq k\leq n}\left|\sum_{i=0}^{k-1}h(iT/n, B_{\theta_i}) \left(E_P(\theta_{i+1}-\theta_i|\mathcal F_{\theta_i})-T/n\right)\right|,\\ &J_2:= \max_{1\leq k\leq n}\left|\sum_{i=0}^{k-1} h(iT/n, B_{\theta_i}) \left(\theta_{i+1}-\theta_i-E_P(\theta_{i+1}-\theta_i|\mathcal F_{\theta_i})\right)\right|,\\ &\mbox{and} \ J_3:=\max_{0\leq k\leq n-1}\sup_{\theta_k\leq t\leq\theta_{k+1}}|h(t,B_t)-h(k T/n, B_{\theta_k})|. \end{eqnarray*} We have $E_P(\theta_{i+1}-\theta_i|\mathcal F_{\theta_i})=T/n+O(n^{-3/2})$. Hence, from the bound $h(t,x)=O(1)(1+|x|)(1+t)$, Lemma \ref{moments} and the Jensen inequality it follows that \[E_P[J_1]= O(n^{-1/2})E_P(1+\max_{0\leq k\leq n-1}B_{\theta_k}) =O((1+s)n^{-1/2}).\] Next, we estimate $J_2$. We observe that the stochastic process \[\sum_{i=0}^{k-1} h(iT/n, B_{\theta_i}) \left(\theta_{i+1}-\theta_i-E_P(\theta_{i+1}-\theta_i|\mathcal F_{\theta_i})\right), \ \ k=1,...,n\] is a martingale.
Thus, from the Doob inequality, the Cauchy--Schwarz inequality, Lemmas \ref{moments}--\ref{lem5.2} and the above bound on $h$ we obtain \begin{eqnarray*} &E_P[J^2_2]= O(1)\sum_{i=0}^{n-1} E_P\left(h^2(i T/n, B_{\theta_i})\left(\theta_{i+1}-\theta_i-E_P(\theta_{i+1}-\theta_i|\mathcal F_{\theta_i})\right)^2 \right) \\ &=O(1)\sum_{i=0}^{n-1}\left(E_P\left(h^4(i T/n, B_{\theta_i})\right)\right)^{1/2} \left(E_P\left(\left(\theta_{i+1}-\theta_i-E_P(\theta_{i+1}-\theta_i|\mathcal F_{\theta_i})\right)^4\right)\right)^{1/2}\\ &=O((1+s)^2 n^{-1}). \end{eqnarray*} From the Jensen inequality we conclude that $E_P[J_2]=O((1+s) n^{-1/2})$. Finally, we estimate $E_P[\theta_n J_3].$ From (\ref{2.function}) and the fact that $\frac{B_t}{B_{\theta_k}}=1+O(1/\sqrt n)$ for $t\in [\theta_k,\theta_{k+1}]$ it follows that \[J_3\leq O(n^{-1/2})\max_{0\leq k\leq n-1}B_{\theta_k}+O(1)\max_{0\leq k\leq n-1}[(1+B_{\theta_k})\sup_{\theta_k\leq t\leq\theta_{k+1}}|t-kT/n|].\] Observe that $\max_{0\leq k\leq n-1}\sup_{\theta_k\leq t\leq\theta_{k+1}}|t-kT/n|\leq T/n+\max_{1\leq k\leq n}|\theta_k-k T/n|$. This together with the Cauchy--Schwarz inequality, Lemma \ref{moments} and Lemma \ref{lem5.3} gives \begin{eqnarray*} &E_P[\theta_n J_3]=O(n^{-1/2})\left(E_P\left[\theta^2_n\right]\right)^{1/2} \left(E_P\left(\max_{0\leq k\leq n-1}B^2_{\theta_k}\right)\right)^{1/2}+\\ &O(1)\frac{T}{n}\left(E_P\left[\theta^2_n\right]\right)^{1/2}\left(E_P\left(\max_{0\leq k\leq n-1}(1+B_{\theta_k})^2\right)\right)^{1/2}+\\ &O(1)\left(E_P\left[\theta^2_n\right]\right)^{1/2}\left(E_P\left(\max_{0\leq k\leq n-1}(1+B_{\theta_k})^4\right)\right)^{1/4} \left(E_P\left(\max_{1\leq k\leq n}|\theta_k-k T/n|^4\right)\right)^{1/4}\\ &=O\left((1+s)n^{-1/2}\right) \end{eqnarray*} and the proof is completed.
\end{proof} \section{Game Options and Numerical Results}\label{sec:6}\setcounter{equation}{0} In this section we apply Theorem \ref{thm2.1} and provide numerical analysis for path--independent game options with the payoffs $Y_t=f(t,B_t)$ and $X_t=g(t,B_t)$, $t\in [0,T]$, and we set $Z\equiv 0$. First (for the above payoffs), we establish the connection between the super--hedging price of game options and Dynkin games, in the model uncertainty setup. \subsection{Game Options} A game contingent claim (GCC) or game option, which was introduced in \cite{K}, is defined as a contract between the seller and the buyer of the option such that both have the right to exercise it at any time up to a maturity date (horizon) $T$. We consider the following GCC with Markovian payoffs. If the buyer exercises the contract at time $t$ then he receives the payment $Y_t=f(t,B_t)$, but if the seller exercises (cancels) the contract before the buyer then the latter receives $X_t=g(t,B_t)$. The difference $X_t-Y_t$ is the penalty which the seller pays to the buyer for the contract cancellation. In short, if the seller exercises at a stopping time $\gamma\leq{T}$ and the buyer at a stopping time $\tau\leq{T}$, then the former pays to the latter the amount $H(\gamma,\tau)$ given by (\ref{1.1}). Next, we introduce the setup of super--hedging for the seller (the buyer setup is symmetrical). Recall the natural filtration $\mathcal F_t$, $t\geq 0$. We denote by $L(B,\mathcal P^{(I)}_s)$ the set of all $\mathcal F$--predictable processes $\Delta=\{\Delta_t\}_{t=0}^T$ such that for any $P\in\mathcal P^{(I)}_s$, the stochastic (It\^{o}) integral $\int_{0}^t \Delta_u dB_u$, $t\in [0,T]$ is well defined and a super--martingale with respect to $\mathcal F$.
We define a hedge for the seller as a triplet $(x,\Delta,\gamma)\in \mathbb R\times L(B,\mathcal P^{(I)}_s)\times\mathcal T_T$ which consists of an initial capital $x$, a trading strategy $\Delta=\{\Delta_t\}_{t=0}^T$ and a stopping time $\gamma$. A hedge $(x,\Delta,\gamma)$ is perfect if for any stopping time (for the buyer) $\tau\in\mathcal T_T$ we have the inequality $$x+\int_{0}^{\gamma\wedge\tau} \Delta_u dB_u\geq H(\gamma,\tau) \ \ P-\mbox{a.s.} \ \mbox{for} \ \mbox{all} \ \ P\in\mathcal P^{(I)}_s.$$ The super--hedging price is defined by $$\mathbf V:=\inf\{x\in\mathbb R: \ \exists (\Delta,\gamma) \ \mbox{such} \ \mbox{that} \ (x,\Delta,\gamma) \ \mbox{is} \ \mbox{a} \ \mbox{perfect} \ \mbox{hedge}\}. $$ \begin{lem}\label{lem6.1} The super--hedging price is given by $\mathbf V=V^{(I)}(0,s).$ Moreover, there exists a perfect hedge with initial capital $V^{(I)}(0,s)$. \end{lem} \begin{proof} As usual, the inequality $\mathbf V\geq V^{(I)}(0,s)$ is immediate. Indeed, if $(x,\Delta,\gamma)$ is a perfect hedge then from the super--martingale property of $\int_{0}^t \Delta_u dB_u$, $t\in [0,T]$ we obtain that for any $\tau\in \mathcal T_T$ and $P\in\mathcal P^{(I)}_s$ $$x\geq E_P\left[x+\int_{0}^{\gamma\wedge\tau} \Delta_u dB_u\right]\geq E_P [H(\gamma,\tau)].$$ Thus $x\geq V^{(I)}(0,s)$ as required. It remains to show that there exists a perfect hedge with initial capital $V^{(I)}(0,s)$. We apply Theorem 4.1 in \cite{BY} which not only gives the optimal stopping time for the player who plays against nature but also a sub--martingale property up to the optimal time. Once again taking Remark \ref{rem2.1} into account, for our setup the sub--martingale property becomes a super--martingale property. More precisely, Theorem 4.1 in \cite{BY} implies that for the stopping time $\gamma^{*}:=T\wedge\inf\{t: X_t=V^{(I)}(t,B_t)\}$ we have the following property.
For any $P\in\mathcal P^{(I)}_s$, the process $V^{(I)}(t\wedge\gamma^{*},B_{t\wedge\gamma^{*}})$, $t\in [0,T]$ is a $P$--super--martingale with respect to the natural filtration $\mathcal F_t$, $t\geq 0$. We apply the nondominated version of the optional decomposition theorem. Since quadratic variation can be defined in a pathwise form, the condition $B^{-1}_t\sqrt\frac{d\langle B\rangle _t}{dt}\in I$ is invariant under equivalent change of measure. Hence the set $\mathcal P^{(I)}_s$ is a saturated set (using \cite{Nu} terminology) of martingale measures. Namely, if $P\in \mathcal P^{(I)}_s$ and $Q\sim P$ is a martingale measure on the canonical space then $Q\in \mathcal P^{(I)}_s$. Thus, from Theorem 2.4 in \cite{Nu} it follows that there exists a process $\Delta^{*}\in L(B,\mathcal P^{(I)}_s)$ such that for any probability measure $P\in\mathcal P^{(I)}_s$ \begin{equation}\label{6.1-} P\left(V^{(I)}(0,s)+\int_{0}^{t}\Delta^{*}_u dB_u- V^{(I)}(t\wedge\gamma^{*},B_{t\wedge\gamma^{*}})\geq 0, \ \ \forall t\in [0,T]\right)=1. \end{equation} We claim that $(V^{(I)}(0,s),\Delta^{*},\gamma^{*})$ is a perfect hedge. Indeed, let $\tau\in\mathcal T_T$ be a stopping time for the buyer and $P\in\mathcal P^{(I)}_s$. First consider the event $\{\gamma^{*}<\tau\}$. On this event we have (recall the definition of $\gamma^{*}$) $V^{(I)}(\tau\wedge\gamma^{*},B_{\tau\wedge\gamma^{*}})=X_{\gamma^{*}}=H(\gamma^{*},\tau)$ and so from (\ref{6.1-}) \begin{equation*} V^{(I)}(0,s)+\int_{0}^{\tau\wedge\gamma^{*}}\Delta^{*}_u dB_u\geq H(\gamma^{*},\tau) \ \ P-\mbox{a.s.} \end{equation*} Finally, we consider the event $\{\gamma^{*}\geq\tau\}$. Applying (\ref{6.1-}) and the trivial inequality $V^{(I)}(t,x)\geq f(t,x)$ for all $t,x$ we obtain \begin{equation*} V^{(I)}(0,s)+\int_{0}^{\tau\wedge\gamma^{*}}\Delta^{*}_u dB_u\geq Y_{\tau}=H(\gamma^{*},\tau) \ \ P-\mbox{a.s.} \end{equation*} and the proof is completed.
\end{proof} \begin{rem} It seems that by applying Theorem 4.1 in \cite{BY} and the optional decomposition Theorem 2.4 in \cite{Nu}, Lemma \ref{lem6.1} can be extended to path dependent options as long as the regularity assumptions from \cite{BY} are satisfied. Since we are motivated by numerical applications, for simplicity we considered path--independent payoffs. Still, a challenging open question is whether Lemma \ref{lem6.1} can be obtained under weaker (than Lipschitz or uniform type of continuity) regularity conditions. \end{rem} \subsection{Numerical Results} In view of Lemma \ref{lem6.1} we use Theorem \ref{thm2.1} and provide a numerical analysis for the super--hedging price of path--independent game options. We assume that the interest rate in the market is a constant $r>0$, and so the stock price before discounting is given by $S_t=e^{rt} B_t$, where, recall, $B$ is the canonical process. The payoffs before discounting are of the form $\hat X_t=\hat g(S_t)$, $\hat Y_t=\hat f(S_t)$ where $\hat g\geq \hat f$. In order to compute the game option price we need to consider the discounted payoffs and so throughout this section we put $g(t,x):=e^{-rt}\hat g(e^{rt} x)$, $f(t,x):=e^{-rt}\hat f(e^{rt} x)$ and $h\equiv 0$. In \cite{E} (see Section 4), the author proved that for game options (with finite or infinite maturity) with continuous path--independent payoffs $\hat g,\hat f$ satisfying \begin{equation}\label{6.1} \frac{\hat g(x)}{x},\frac{\hat f(x)}{x} \ \mbox{are} \ \mbox{non--increasing} \ \mbox{for} \ x>0 \end{equation} the price is non--decreasing in the volatility. Thus, (if the above assumption is satisfied) the price under volatility uncertainty which is given by the interval $I=[\underline{\sigma},\overline{\sigma}]$ is the same as the price in the complete Black--Scholes market with a constant volatility $\overline{\sigma}$. The latter value can be approximated by the standard binomial models (see \cite{Ki1}).
In particular, this is the case for game put options given by $$\hat g(x)=C(K-x)^{+}+\delta \ \ \mbox{and} \ \ \hat f(x)=(K-x)^{+}, \ \ C\geq 1,\ K,\delta>0.$$ In Table 1, we test numerically the above statement from \cite{E} for game put options. This is done by comparing our numerical results with previous numerical results obtained in \cite{KKS} for game put options in the Black--Scholes model. \begin{table} \begin{center} \caption{ In this table we take the parameters $r = 0.06$, $T = 0.5$, $K=100$, $\delta = 5$ and provide numerical results for game put options under model uncertainty given by the interval $I=[0,0.4]$. We compare our results to previous numerical results (see \cite{KKS}) for game put options in the Black--Scholes model with volatility $\sigma=0.4$.} \begin{tabular}{lccccc} \hline \multicolumn{6}{c}{Values obtained with }\\ \cline{2-5} $S_0$ & n = 200 & n = 400 & n = 700 & n = 1200 & Black--Scholes with $\sigma=\overline{\sigma}$ \\ \hline 80 & 20.7003 & 20.6719 & 20.6593 & 20.6532 & 20.6 \\ 90 & 12.4932 & 12.4787 & 12.4938 & 12.4683 & 12.4 \\ 100 & 5.00 & 5.00 & 5.00 & 5.00 & 5.00 \\ 110 & 3.7609 & 3.7240 & 3.6862 & 3.6916 & 3.64 \\ 120 & 2.6169 & 2.5897 & 2.5822 & 2.5729 & 2.54 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection*{Game call options} Next, we deal with game call options given by $$\hat g(x)=C(x-K)^{+}+\delta \ \ \mbox{and} \ \ \hat f(x)=(x-K)^{+}, \ \ C\geq 1,\ K,\delta>0.$$ We observe that in this case (\ref{6.1}) is not satisfied and so we expect that the price for the model uncertainty interval $I=[\underline{\sigma},\overline{\sigma}]$ will be strictly bigger than the game call option price in the Black--Scholes model with volatility $\overline{\sigma}$. We take $C=1$, namely we consider game call options with constant penalty. First, we compare (Table 2) the option prices under model uncertainty with the prices in the Black--Scholes model (with the highest volatility).
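Before turning to the call-option comparison, we note that put values of the kind reported in Table 1 can be checked against the reduction described above: since (\ref{6.1}) holds for put payoffs, a plain Dynkin-game backward induction on a standard binomial (CRR) tree at the single volatility $\overline{\sigma}=0.4$ already prices the whole uncertainty interval. The following is a minimal sketch only (not the implementation used for our tables; the function name and discretization choices are ours):

```python
import math

def game_put_binomial(s0, K=100.0, delta=5.0, r=0.06, T=0.5, sigma=0.4, n=200):
    """Dynkin-game backward induction on a CRR binomial tree.

    At each node: value = min(cancellation payoff g, max(exercise payoff f,
    discounted risk-neutral continuation)).  By the monotonicity result of
    [E], for put payoffs this single volatility sigma = sigma_bar also
    prices the uncertainty interval I = [sigma_low, sigma_bar]."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability

    f = lambda s: max(K - s, 0.0)             # buyer's exercise payoff
    g = lambda s: f(s) + delta                # seller's cancellation payoff

    vals = [f(s0 * u ** (2 * j - n)) for j in range(n + 1)]  # at maturity
    for k in range(n - 1, -1, -1):
        vals = [min(g(s0 * u ** (2 * j - k)),
                    max(f(s0 * u ** (2 * j - k)),
                        disc * (p * vals[j + 1] + (1.0 - p) * vals[j])))
                for j in range(k + 1)]
    return vals[0]
```

The recursion keeps every node value between the exercise and cancellation payoffs; at the money the cancellation cap binds and the value equals the penalty $\delta$, consistent with the $S_0=100$ row of Table 1.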
Since we could not find previous numerical results for finite maturity game call options in the Black--Scholes model, we compute them by applying the binomial trees from \cite{Ki1}. These trees are ``almost'' the same as our trees for the case where the volatility uncertainty interval $I$ contains only one point. We observe that for call options the two prices in general need not coincide. \begin{table} \begin{center} \caption{ We take the same parameters as in Table 1 and provide numerical results for game call options under model uncertainty given by the interval $I=[0,0.4]$. We compare our results to binomial approximations for the Black--Scholes model with $\sigma=0.4$.} \begin{tabular}{lcccc} \hline \multicolumn{5}{c}{Values obtained under model uncertainty }\\ \cline{2-5} $S_0$ & n = 200 & n = 400 & n = 700 & n = 1200 \\ \hline 80 & 2.0805 & 2.0893 & 2.0847 & 2.0948 \\ 85 & 2.8138 & 2.7964 & 2.8055 & 2.8018 \\ 90 & 3.6553 & 3.5966 & 3.6241 & 3.6064 \\ 95 & 4.5827 & 4.4682 & 4.5050 & 4.4874 \\ 105 & 5.00 & 5.00 & 5.00 & 5.00 \\ 110 & 10.00 & 10.00 & 10.00 & 10.00 \\ 115 & 15.00 & 15.00 & 15.00 & 15.00 \\ 120 & 20.00 & 20.00 & 20.00 & 20.00 \\ \hline \multicolumn{5}{c}{Values obtained for Black--Scholes }\\ \cline{2-5} $S_0$ & n = 200 & n = 400 & n = 700 & n = 1200 \\ \hline 80 & 2.0625 & 2.0359 & 2.0244& 2.0210 \\ 85 & 2.7706 & 2.7301 & 2.7274 & 2.7143 \\ 90 & 3.5066 & 3.4889 & 3.4968 & 3.4798 \\ 95 & 4.3497 & 4.3124 & 4.3056 & 4.2481 \\ 105 & 5.00 & 5.00 & 5.00 & 5.00 \\ 110 & 10.00 & 10.00 & 10.00 & 10.00 \\ 115 & 14.9355 & 14.9304 & 14.9275 & 14.9260 \\ 120 & 19.7812 & 19.7735 & 19.7691 & 19.7669 \\ \hline \end{tabular} \end{center} \end{table} Finally, we calculate numerically the stopping regions. We observe that the discounted payoff $f(t,B_t)=(B_t-Ke^{-rt})^{+}$, $t\geq 0$ is a sub--martingale with respect to any probability measure in the set $\mathcal P^{(I)}_s$. Thus, the buyer's optimal stopping time is just $\tau\equiv T$.
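The submartingale property invoked here has a one-line justification (our addition): for any martingale measure $P$ for $B$ and $s\le t$,

```latex
E_P\big[(B_t-Ke^{-rt})^{+}\,\big|\,\mathcal F_s\big]
\;\ge\;\big(E_P[B_t\,|\,\mathcal F_s]-Ke^{-rt}\big)^{+}
\;=\;(B_s-Ke^{-rt})^{+}
\;\ge\;(B_s-Ke^{-rs})^{+},
```

where the first inequality is conditional Jensen applied to the convex map $x\mapsto (x-Ke^{-rt})^{+}$ and the last one uses $e^{-rt}\le e^{-rs}$. Hence stopping before the maturity date is never profitable for the buyer.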
For the seller, the optimal stopping time is (see Theorem 4.1 in \cite{BY}) $$\gamma^{*}=T\wedge\inf\{t: g(t,B_t)=V^{(I)}(t,B_t)\}.$$ Introduce the function $$\tilde V(u,x):=\sup_{P\in \mathcal{P}^{(I)}_x}\sup_{\tau\in\mathcal T_{u}}\inf_{\gamma\in\mathcal T_{u}} E_{P}\left[e^{-r(\tau\wedge\gamma)}\left((S_{\tau\wedge\gamma}-K)^{+}+\delta\mathbb{I}_{\gamma<\tau}\right)\right], $$ where as before $S_t=e^{rt} B_t$, $t\geq 0$ is the stock price. The term $\tilde V(u,x)$ is the price of a game call option with maturity date $u$ and initial stock price $S_0=x$. We observe that $\gamma^{*}=T\wedge\inf\{t: S_t\in D\}$, where $D=D(T)$ is the stopping region (of course it depends on the maturity date $T$) given by $$D=\{(t,x): \tilde V(T-t,x)=(x-K)^{+}+\delta\}.$$ In \cite{YYZ}, the authors studied the structure of the stopping region $D$ for game call options in the complete Black--Scholes market. They proved (see Theorem 4.2 there) that the stopping region $D$ is of the form $$D=\{(t,x): t\in [0,T_1], \ K\leq x\leq b(t)\}\bigcup \left([T_1,T_2]\times\{K\}\right),$$ where $T_1<T_2<T$ and $b:[0,T_1]\rightarrow [K,\infty)$ can be computed numerically. In Figure 1 we calculate numerically the stopping regions (for the seller) for game call options, both in the model uncertainty setup given by the interval $I=[0,0.4]$ and in the complete Black--Scholes model with volatility $\sigma=0.4$. We find that the structure from \cite{YYZ} is valid for the model uncertainty case as well. Furthermore, $T_2$ is the same in both cases, while $T_1$ and $b$ are different. To date, there are no theoretical results on the explicit structure of stopping regions for game options under model uncertainty. \begin{figure} \caption{We consider a game call option with maturity date $T=2$, a constant penalty $\delta=12$ and a strike price $K=100$. As before, the interest rate is $r=0.06$. We take $n=1200$ and compute numerically the stopping regions for the seller.
For the model uncertainty given by the interval $I=[0,0.4]$ we get that for $t\in [0,1.3]$ the seller should exercise at the first moment when the stock price is between the strike price and the value given by the blue curve. For $t\in [1.3,1.5]$ the seller stops at the first moment the stock price equals the strike price. After the time $t=1.5$ the investor should not exercise (before the maturity date). For the Black--Scholes model with volatility $\sigma=0.4$ we get that for $t\in [0,0.9]$ the seller should exercise at the first moment when the stock price is between the strike price and the value given by the green curve. For $t\in [0.9,1.5]$ the seller stops at the first moment the stock price equals the strike price. After the time $t=1.5$ the investor should not exercise (before the maturity date).} \end{figure} \end{document}
\begin{document} \theoremstyle{plain} \newtheorem{lemma}{Lemma}[section] \theoremstyle{remark} \newtheorem{remark}{Remark}[section] \theoremstyle{definition} \newtheorem{example}{Example}[section] { \begin{center} \textbf{\Large Full information maximum likelihood estimation in factor analysis with a lot of missing values} \end{center} \begin{center} {\large Kei Hirose, Sunyong Kim, Yutaka Kano, Miyuki Imada, Manabu Yoshida and Masato Matsuo} \end{center} \begin{center} {\it {\small $^1$ Division of Mathematical Science, Graduate School of Engineering Science, Osaka University,\\ 1-3, Machikaneyama-cho, Toyonaka, Osaka, 560-8531, Japan.\\ $^2$ NTT Network Innovation Laboratories, 1--1, Hikarinooka, Yokosuka-shi, Kanagawa, 239--0847, Japan.\\ $^3$ NTT Network Innovation Laboratories, 3--9--11, Midori-cho, Musashino-shi, Tokyo, 180--8585, Japan.\\ }} {\it {\small E-mail: [email protected], }} \end{center} \begin{abstract} We consider the problem of full information maximum likelihood (FIML) estimation in a factor analysis model when a majority of the data values are missing. The expectation--maximization (EM) algorithm is often used to find the FIML estimates, in which the missing values on observed variables are included in the complete data. However, the EM algorithm has an extremely high computational cost when the number of observations is large and/or many missing values are involved. In this paper, we propose a new algorithm that is based on the EM algorithm but that efficiently computes the FIML estimates. A significant improvement in the computational speed is realized by not treating the missing values on observed variables as a part of the complete data. Our algorithm is applied to a real data set collected from a Web questionnaire that asks about first impressions of people; almost 90\% of the data values are missing.
When there are many missing data values, it is not clear if the FIML procedure can achieve good estimation accuracy even if the number of observations is large. In order to investigate this, we conduct Monte Carlo simulations under a wide variety of sample sizes. \end{abstract} \noindent {\bf Key Words}: EM algorithm, Factor analysis, Full Information Maximum Likelihood \section{Introduction} Factor analysis provides a practical tool for exploring the covariance structure among a set of observed random variables by constructing a smaller number of unobserved variables called common factors. Successful applications have been reported in various fields of research, including the social and behavioral sciences. In practical situations, a majority of the data values are often missing or unknown. For example, when a questionnaire asks a research participant about feelings toward another person, a number of questions are needed to investigate their impressions, using a wide variety of personal-assessment measures. However, answering all of the questions may cause participant fatigue and inattention, resulting in inaccurate answers. In order to gather high-quality data, the participants may be asked to select just a few of the questions; this leads to a large number of missing values. In the presence of missing values, the factor analysis model can be estimated by the full information maximum likelihood (FIML) procedure. It is well known that the FIML method yields a consistent estimator under the assumption of missing at random (MAR, e.g., \citealp{little1987statistical}), that is, the missingness depends only on the variables that are observed and not on the missing values. There are two crucial issues for the FIML procedure with large rates of missing values. The first issue is the computational speed. Conventionally, FIML estimates have been obtained by Newton-type algorithms.
For example, \citet{finkbeiner1979estimation} applied a quasi-Newton method to the factor analysis model, and \citet{lee1986estimation} considered an estimation of the general covariance structure via the reweighted Gauss--Newton algorithm. However, Newton-type methods can be slow and unstable when the number of variables is large. Another popular estimation algorithm is the expectation--maximization (EM) algorithm and its extensions; in this approach, the common factors and the missing values on observed variables are included in the complete data (e.g., \citealp{dempster1977maximum,rubin1982algorithms,little1987statistical,jamshidian1993conjugate,jamshidian1997algorithm,liu1998maximum}). However, the ordinary EM algorithm also has a high computational cost when a majority of the data values are missing, because a large number of missing values must be imputed during the expectation (E) step. In this paper, we propose a new algorithm that is based on the EM algorithm but that efficiently computes the FIML estimates. We include the common factors in the complete data, as is the case with the ordinary EM algorithm, but we do not include the missing values on observed variables in the complete data. Because of this, there is no need to impute the missing values in the E step. The proposed algorithm is applied to a real data set collected from a Web questionnaire that asks about first impressions of people; almost 90\% of the data values are missing. Although the ordinary EM algorithm takes hours to run, our algorithm provides precise estimates in several tens of seconds. The second issue is the estimation accuracy of the FIML method: with a rate of missing values as large as 90\%, it is not clear whether the FIML procedure can yield a good estimator even if the number of observations is large, such as $N=2000$.
Although several researchers have discussed the effectiveness of the FIML estimator from both theoretical and numerical points of view (e.g., \citealp{finkbeiner1979estimation,lee1986estimation,enders2001relative,enders2001primer}), the rates of missing values they considered were not very large (typically, about 30\%). In order to investigate how well the FIML method performs when the majority of data values are missing, we conducted Monte Carlo simulations under a wide variety of sample sizes. The remainder of this paper is organized as follows: Section 2 defines the factor analysis model and notation, and briefly describes the FIML estimation procedure. In Section 3, we present the ordinary EM algorithm for FIML estimation, and we then modify the algorithm to improve the computational speed. Section 4 presents an application of the proposed algorithm to data from a Web-based questionnaire. In Section 5, a Monte Carlo simulation is conducted to investigate the effectiveness of the FIML procedure. Some concluding remarks are given in Section 6. \section{FIML estimation in factor analysis} Let $\bm{X}=(X_1,\dots,X_p)^T$ be a $p$-dimensional random vector with mean vector $\bm{\mu}$ and variance-covariance matrix $\bm{\Sigma}$. The factor analysis model (e.g., \citealp{mulaik2010foundations}) is \begin{equation*} \bm{X} =\bm{\mu} + \bm{\Lambda} \bm{F}+\bm{\varepsilon}, \label{model1} \end{equation*} where $\bm{\Lambda} =(\lambda_{ij})$ is a $p \times m$ matrix of factor loadings, and $\bm{F} = (F_1,\cdots,F_m)^T$ and $\bm{\varepsilon} = (\varepsilon_1,\cdots, \varepsilon_p)^T$ are unobservable random vectors. The elements of $\bm{F}$ and $\bm{\varepsilon}$ are called common factors and unique factors, respectively.
It is assumed that the common factors $\bm{F}$ and the unique factors $\bm{\varepsilon}$ are multivariate-normally distributed with ${E}(\bm{F})=\bm{0}$, ${E}(\bm{\varepsilon})=\bm{0}$, ${E}(\bm{F}\bm{F}^T)=\bm{I}_m$, ${E}(\bm{\varepsilon} \bm{\varepsilon}^T)=\bm{\Psi}$, and are independent (i.e., ${E}(\bm{F} \bm{\varepsilon}^T)=\bm{O}$), where $\bm{I}_m$ is the identity matrix of order $m$, and $\bm{\Psi}$ is a $p \times p$ diagonal matrix in which the $i$-th diagonal element is $\psi_i$, which is called a unique variance. Under these assumptions, the random vector $\bm{X}$ is multivariate-normally distributed with mean vector $\bm{\mu}$ and variance-covariance matrix $\bm{\Sigma} = \bm{\Lambda} \bm{\Lambda}^T+\bm{\Psi}$. Note that the factor loadings have a rotational indeterminacy, because both $\bm{\Lambda}$ and $\bm{\Lambda} \mathbf{T}$ generate the same covariance matrix $\bm{\Sigma}$, where $\mathbf{T}$ is an arbitrary orthogonal matrix. We consider the case where the data values are partially observed. Let $\bm{x}_1,\cdots,\bm{x}_N$ be $N$ sets of ``complete'' data drawn from $N_p(\bm{\mu},\bm{\Sigma})$ with $\bm{\Sigma} = \bm{\Lambda} \bm{\Lambda}^T+\bm{\Psi}$, which would occur in the absence of missing values. The complete data vector $\bm{x}_n$ can be expressed as $\bm{x}_n=(\bm{x}_{[n]},\bm{x}_{-[n]})$, where $\bm{x}_{[n]}$ (resp.\ $\bm{x}_{-[n]}$) denotes the observed (resp.\ missing) values for case $n$. Let $\bm{\mu}_{[n]}, \bm{\Lambda}_{[n]}$, and $\bm{\Psi}_{[n]}$ denote the model parameters based only on the variables that are observed for case $n$. The mean vector and covariance matrix based on the observed values $\bm{x}_{[n]}$ can then be written as $\bm{\mu}_{[n]}$ and $\mathbf{\Sigma}_{[n]}=\bm{\Lambda}_{[n]} \bm{\Lambda}_{[n]}^T+\bm{\Psi}_{[n]}$, respectively.
The full information log likelihood function is then given by \begin{equation} \ell(\bm{\mu},\bm{\Lambda},\bm{\Psi})= -\frac{1}{2}\sum_{n=1}^N \bigg\{ p_{[n]}\log(2\pi)+\log |\mathbf{\Sigma}_{[n]}| + (\bm{x}_{[n]}-\bm{\mu}_{[n]})^T\mathbf{\Sigma}_{[n]}^{-1} (\bm{x}_{[n]}-\bm{\mu}_{[n]}) \bigg\}, \label{taisuuyuudo} \end{equation} where $p_{[n]}$ denotes the number of variables observed for case $n$. The FIML estimates of $\bm{\mu}$, ${\bm{\Lambda}}$, and ${\bm{\Psi}}$ are given as the solutions of $\partial \ell / \partial \bm{\mu} = \bm{0}$, $\partial \ell / \partial \bm{\Lambda} = \bm{O}$, and $\partial \ell / \partial \bm{\Psi} = \bm{O}$, respectively. Since the solutions cannot be expressed in a closed form, we need to use an iterative algorithm, such as a quasi-Newton method or an EM algorithm. \section{EM algorithms for FIML estimation} In this section, we describe the ordinary EM algorithm for FIML estimation (e.g., \citealp{little1987statistical,jamshidian1997algorithm,liu1998maximum}); in this approach, both the common factors and the missing values on observed variables are included in the complete data. In practical situations, however, the ordinary EM algorithm can be slow when the number of missing values on observed variables is large. In order to handle this problem, we propose a much more efficient algorithm. A significant improvement in the computational speed is realized by not treating the missing values on observed variables as a part of the complete data. We call this approach the modified EM algorithm, and the details of the algorithm are given in Section \ref{sec:modified EM}. In Section \ref{sec:Comparison of computational cost}, we then discuss the computational complexity of matrix operations in order to compare the computational loads of the ordinary and the modified EM algorithms.
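The objective function that all of the algorithms compared below maximize is cheap to evaluate case by case: each case contributes a Gaussian log density over its observed coordinates. A minimal pure-Python sketch (all names are ours; a small Cholesky routine stands in for a linear-algebra library, and each case's constant term uses the dimension of its observed subvector):

```python
import math

def chol(A):
    """Cholesky factor L (lower triangular) of a symmetric positive definite matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def loglik(mu, Lam, Psi, cases):
    """Full information log likelihood; `cases` is a list of
    (observed indices, observed values) pairs, one pair per case."""
    m = len(Lam[0])
    total = 0.0
    for idx, x in cases:
        # restrict Sigma = Lam Lam^T + Psi to the observed coordinates
        S = [[sum(Lam[a][k] * Lam[b][k] for k in range(m)) +
              (Psi[a] if a == b else 0.0) for b in idx] for a in idx]
        L = chol(S)
        # forward substitution: solve L y = x - mu_obs
        r = [x[t] - mu[idx[t]] for t in range(len(idx))]
        y = []
        for t in range(len(idx)):
            y.append((r[t] - sum(L[t][k] * y[k] for k in range(t))) / L[t][t])
        logdet = 2.0 * sum(math.log(L[t][t]) for t in range(len(idx)))
        quad = sum(v * v for v in y)  # = r^T Sigma_obs^{-1} r
        total += -0.5 * (len(idx) * math.log(2.0 * math.pi) + logdet + quad)
    return total
```

Here `Lam` is the $p\times m$ loading matrix as nested lists and `Psi` is the diagonal of $\bm{\Psi}$.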
\subsection{Ordinary EM algorithm}\label{sec:ordinary EM} The complete data log likelihood function $l_{\rho}^{C} (\bm{\mu},\bm{\Lambda},\bm{\Psi})$ is expressed as \begin{equation*} l_{\rho}^{C} (\bm{\mu},\bm{\Lambda},\bm{\Psi}) = \sum_{n=1}^N \log f(\bm{x}_n,\bm{f}_n), \end{equation*} where the density function $f(\bm{x}_n,\bm{f}_n)$ is defined by \begin{equation*} f(\bm{x}_n,\bm{f}_n) = \prod_{i=1}^p \left[ (2\pi\psi_i)^{-1/2} \exp \left\{ - \frac{ (x_{ni}-\mu_{i} - \bm{\lambda}_i^T\bm{f}_n )^2}{2\psi_i} \right\} \right] (2\pi)^{-m/2}\exp \left( - \frac{\| \bm{f}_n \|^2}{2} \right). \end{equation*} Then, we have \begin{eqnarray*} l_{\rho}^{C} &=& - \frac{N}{2} \sum_{i=1}^p \log \psi_i - \frac{1}{2} {\rm tr}\left\{ \bm{\Psi}^{-1} \sum_{n=1}^N(\bm{x}_n - \bm{\mu}-\bm{\Lambda}\bm{f}_n)(\bm{x}_n - \bm{\mu}-\bm{\Lambda}\bm{f}_n)^T \right\} + C \\ &=& - \frac{N}{2} \sum_{i=1}^p \log \psi_i - \frac{1}{2} {\rm tr}\left[ \bm{\Psi}^{-1} \sum_{n=1}^N \left\{ \bm{x}_n - ( \bm{\mu},\bm{\Lambda}) \begin{pmatrix} 1\\ \bm{f}_n \end{pmatrix} \right\} \left\{ \bm{x}_n - ( \bm{\mu},\bm{\Lambda}) \begin{pmatrix} 1\\ \bm{f}_n \end{pmatrix} \right\}^T \right] +C \\ &=& - \frac{N}{2} \sum_{i=1}^p \log \psi_i - \frac{1}{2} {\rm tr}\left[ \bm{\Psi}^{-1} \left\{ \bm{S}_{\bm{x}\bm{x}} -2 ( \bm{\mu},\bm{\Lambda}) \bm{S}_{\bm{f}^*\bm{x}} +( \bm{\mu},\bm{\Lambda}) \bm{S}_{\bm{f}^*\bm{f}^*} \begin{pmatrix} \bm{\mu}^T\\ \bm{\Lambda}^T \end{pmatrix} \right\} \right] +C, \end{eqnarray*} where $C$ is a constant and $$ \bm{S}_{\bm{x}\bm{x}} = \sum_{n=1}^N\bm{x}_n\bm{x}_n^T, \quad \bm{S}_{\bm{f}^*\bm{x}} = \sum_{n=1}^N \begin{pmatrix} 1\\ \bm{f}_n \end{pmatrix} \bm{x}_n^T, \quad \bm{S}_{\bm{f}^*\bm{f}^*} = \sum_{n=1}^N \begin{pmatrix} 1\\ \bm{f}_n \end{pmatrix} (1,\bm{f}_n^T).
$$ \subsubsection*{E step} We compute the expectations of the sufficient statistics $\hat{\bm{S}}_{\bm{x}\bm{x}}=E[\bm{S}_{\bm{x}\bm{x}}|\bm{x}_{[1]},\dots, \bm{x}_{[N]},\hat{\bm{\theta}}]$, $\hat{\bm{S}}_{\bm{f}^*\bm{x}}=E[\bm{S}_{\bm{f}^*\bm{x}}|\bm{x}_{[1]},\dots, \bm{x}_{[N]},\hat{\bm{\theta}}]$, and $\hat{\bm{S}}_{\bm{f}^*\bm{f}^*}=E[\bm{S}_{\bm{f}^*\bm{f}^*}|\bm{x}_{[1]},\dots, \bm{x}_{[N]},\hat{\bm{\theta}}]$ from the joint distribution of $(\bm{x}_n,\bm{f}_n)$ given $\bm{\theta}$: \begin{equation} \begin{pmatrix} \bm{x}_n\\ \bm{f}_n \end{pmatrix} \bigg|\bm{\theta} \sim N_{p+m}\left( \begin{pmatrix} \bm{\mu}\\ \bm{0} \end{pmatrix}, \begin{bmatrix} \bm{\Lambda} \bm{\Lambda}^T + \bm{\Psi} & \bm{\Lambda}\\ \bm{\Lambda}^T & \bm{I} \end{bmatrix} \right). \label{xn_fn_distribution} \end{equation} The joint distribution of $(\bm{x}_n,\bm{f}_n)$, given the observed values $\bm{x}_{[n]}$, can be obtained by using the standard methodology of the conditional Gaussian distribution. Let $\bm{z}_n = (\bm{x}_{-[n]}^T,\bm{f}_n^T)^T$. We set \begin{equation*} \begin{pmatrix} \bm{z}_n\\ \bm{x}_{[n]} \end{pmatrix} \sim N \left( \begin{pmatrix} \bm{\mu}_{-[n]}\\ \bm{\mu}_{[n]} \end{pmatrix}, \begin{pmatrix} \bm{\Sigma}_{-[n],-[n]}&\bm{\Sigma}_{-[n],[n]}\\ \bm{\Sigma}_{[n],-[n]}&\bm{\Sigma}_{[n],[n]}\\ \end{pmatrix} \right). \end{equation*} The conditional distribution of $\bm{z}_n$ is given by \begin{eqnarray*} \bm{z}_n|\bm{x}_{[n]} &\sim& N(\bm{\mu}_{-[n]|[n]},\bm{\Omega}_{-[n],-[n]}^{-1}),\\ \bm{\mu}_{-[n]|[n]} &=& \bm{\mu}_{-[n]} - \bm{\Omega}_{-[n],-[n]}^{-1}\bm{\Omega}_{-[n],[n]}(\bm{x}_{[n]} - \bm{\mu}_{[n]}),\\ \begin{pmatrix} \bm{\Omega}_{-[n],-[n]}&\bm{\Omega}_{-[n],[n]}\\ \bm{\Omega}_{[n],-[n]}&\bm{\Omega}_{[n],[n]}\\ \end{pmatrix} &=& \begin{pmatrix} \bm{\Sigma}_{-[n],-[n]}&\bm{\Sigma}_{-[n],[n]}\\ \bm{\Sigma}_{[n],-[n]}&\bm{\Sigma}_{[n],[n]}\\ \end{pmatrix}^{-1}. \end{eqnarray*} On the other hand, $\bm{x}_{[n]}|\bm{x}_{[n]} \sim N(\bm{x}_{[n]},\bm{O})$.
Then, we can compute the conditional distribution \begin{eqnarray} \begin{pmatrix} \bm{x}_n\\ \bm{f}_n \end{pmatrix} \bigg| \bm{x}_{[n]} \sim N\left( \begin{pmatrix} \hat{\bm{x}}_n\\ \hat{\bm{f}}_n\\ \end{pmatrix}, \begin{pmatrix} \hat{\bm{V}}_{\bm{x}_n\bm{x}_n^T}&\hat{\bm{V}}_{\bm{x}_n\bm{f}_n^T}\\ \hat{\bm{V}}_{\bm{f}_n\bm{x}_n^T}&\hat{\bm{V}}_{\bm{f}_n\bm{f}_n^T}\\ \end{pmatrix} \right). \label{posteriorall} \end{eqnarray} The sufficient statistics $\hat{\bm{S}}_{\bm{x}\bm{x}}$, $\hat{\bm{S}}_{\bm{f}^*\bm{x}}$, and $\hat{\bm{S}}_{\bm{f}^*\bm{f}^*}$ are expressed as \begin{eqnarray} \hat{\bm{S}}_{\bm{x}\bm{x}}&=&\sum_{n=1}^N(\hat{\bm{x}}_n\hat{\bm{x}}_n^T + \hat{\bm{V}}_{\bm{x}_n\bm{x}_n^T}), \quad \hat{\bm{S}}_{\bm{f}^*\bm{x}}=\sum_{n=1}^N \begin{pmatrix} \hat{\bm{x}}_n^T\\ \hat{\bm{f}}_n\hat{\bm{x}}_n^T + \hat{\bm{V}}_{\bm{f}_n\bm{x}_n^T} \end{pmatrix}, \cr \hat{\bm{S}}_{\bm{f}^*\bm{f}^*}&=&\sum_{n=1}^N \begin{pmatrix} 1 & \hat{\bm{f}}_n^T\\ \hat{\bm{f}}_n & \widehat{\bm{f}_n \bm{f}_n^T} \end{pmatrix}, \nonumber \end{eqnarray} where $\widehat{\bm{f}_n \bm{f}_n^T} = \hat{\bm{f}}_n\hat{\bm{f}}_n^T + \hat{\bm{V}}_{\bm{f}_n\bm{f}_n^T}$. \subsubsection*{M step} In the maximization (M) step, we maximize the expected complete data log likelihood function. Taking the derivatives with respect to $(\bm{\mu},\bm{\Lambda})$ and $\bm{\Psi}$, we have \begin{eqnarray*} \frac{\partial E[l_{\rho}^{C}]}{\partial (\bm{\mu},\bm{\Lambda})} &=& -\frac{1}{2} (-2\bm{\Psi}^{-1}\hat{\bm{S}}_{\bm{f}^*\bm{x}}^T + 2 \bm{\Psi}^{-1}(\bm{\mu},\bm{\Lambda})\hat{\bm{S}}_{\bm{f}^*\bm{f}^*}^T),\\ \frac{\partial E[l_{\rho}^{C}]}{\partial \bm{\Psi}^{-1}} &=& \frac{N}{2} {\rm diag}(\bm{\Psi}) - \frac{1}{2} {\rm diag} \left[\hat{\bm{S}}_{\bm{x}\bm{x}} -2 ( \bm{\mu},\bm{\Lambda}) \hat{\bm{S}}_{\bm{f}^*\bm{x}} +( \bm{\mu},\bm{\Lambda}) \hat{\bm{S}}_{\bm{f}^*\bm{f}^*} \begin{pmatrix} \bm{\mu}^T\\ \bm{\Lambda}^T \end{pmatrix} \right].
\end{eqnarray*} The solution is given by \begin{eqnarray*} (\bm{\mu},\bm{\Lambda}) &=& \hat{\bm{S}}_{\bm{f}^*\bm{x}}^T \hat{\bm{S}}_{\bm{f}^*\bm{f}^*}^{-1},\\ \bm{\Psi} &=& \frac{1}{N} {\rm diag} \left[ \hat{\bm{S}}_{\bm{x}\bm{x}} -2 ( \bm{\mu},\bm{\Lambda}) \hat{\bm{S}}_{\bm{f}^*\bm{x}} +( \bm{\mu},\bm{\Lambda}) \hat{\bm{S}}_{\bm{f}^*\bm{f}^*} \begin{pmatrix} \bm{\mu}^T\\ \bm{\Lambda}^T \end{pmatrix} \right]. \end{eqnarray*} \subsection{Modified EM algorithm}\label{sec:modified EM} When the number of missing values is very large, the ordinary EM algorithm in Section \ref{sec:ordinary EM} becomes inefficient, because we must impute a large number of missing values in the E step. In order to overcome this problem, we introduce a modified algorithm. An important point of our algorithm is that the missing values on observed variables $\bm{x}_{-[n]}$ are {\it not} included in the complete data. In this case, the complete data log likelihood function is given by \begin{eqnarray*} l_{\rho}^{C^*} &=& -\frac{1}{2} \sum_{i=1}^p \sum_{n \in n{\rm obs}(i)} \log \psi_i \\ &&- \frac{1}{2} \sum_{i=1}^p \sum_{n \in n{\rm obs}(i)} \frac{(x_{ni}-\mu_{i})^2 - 2(x_{ni}-\mu_i)\bm{\lambda}_i^T\bm{f}_n+ \bm{\lambda}_i^T \bm{f}_n\bm{f}_n^T\bm{\lambda}_i}{\psi_i}, \end{eqnarray*} where $n{\rm obs}(i) =\{n \in \{1,\dots, N\} \mid \mbox{the $i$-th variable is observed}\}$. \subsubsection*{E step} We need to compute the expected values of only the common factors given the observed data, i.e., $\hat{\bm{f}}_n$ and $\hat{\bm{V}}_{\bm{f}_n\bm{f}_n^T}$ in (\ref{posteriorall}).
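Concretely, the conditional-Gaussian formulas give $\hat{\bm{f}}_n=(\bm{I}+\bm{\Lambda}_{[n]}^T\bm{\Psi}_{[n]}^{-1}\bm{\Lambda}_{[n]})^{-1}\bm{\Lambda}_{[n]}^T\bm{\Psi}_{[n]}^{-1}(\bm{x}_{[n]}-\bm{\mu}_{[n]})$ and $\hat{\bm{V}}_{\bm{f}_n\bm{f}_n^T}=(\bm{I}+\bm{\Lambda}_{[n]}^T\bm{\Psi}_{[n]}^{-1}\bm{\Lambda}_{[n]})^{-1}$, so only an $m\times m$ matrix has to be inverted per case. A sketch of this E step for a single case (pure Python; all names are ours):

```python
def matinv(A):
    """Gauss-Jordan inverse of a small matrix (sufficient for m x m, m small)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        d = M[c][c]
        M[c] = [v / d for v in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]

def e_step_case(Lam_o, Psi_o, z):
    """Posterior mean f_hat and covariance V of the factors for one case,
    given its centered observed values z; Lam_o is p_obs x m, Psi_o is the
    diagonal of Psi restricted to the observed variables."""
    m = len(Lam_o[0])
    # M_o = I + Lam_o^T Psi_o^{-1} Lam_o  (m x m)
    Mo = [[(1.0 if a == b else 0.0) +
           sum(Lam_o[i][a] * Lam_o[i][b] / Psi_o[i] for i in range(len(z)))
           for b in range(m)] for a in range(m)]
    V = matinv(Mo)                      # = Cov(f | x_obs)
    t = [sum(Lam_o[i][a] * z[i] / Psi_o[i] for i in range(len(z)))
         for a in range(m)]
    f_hat = [sum(V[a][b] * t[b] for b in range(m)) for a in range(m)]
    return f_hat, V
```

Note that the cost per case is $O(p_{\rm obs}m^2)$, in line with the complexity discussion of Section \ref{sec:Comparison of computational cost}.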
\subsubsection*{M step} We take the derivatives with respect to $\bm{\mu}$, $\bm{\Lambda}$, and $\bm{\Psi}$, which are written as \begin{eqnarray*} \frac{\partial E[l_{\rho}^{C^*}]}{\partial \mu_i} &=& -\frac{1}{2 \psi_i} \sum_{n \in n{\rm obs}(i)} \left\{ -2(x_{ni}-\mu_i) +2 \bm{\lambda}_i^T\hat{\bm{f}}_n \right\},\\ \frac{\partial E[l_{\rho}^{C^*}]}{\partial \bm{\lambda}_i} &=& -\frac{1}{2 \psi_i} \sum_{n \in n{\rm obs}(i)} \left\{ -2(x_{ni}-\mu_i) \hat{\bm{f}}_n + 2 \widehat{\bm{f}_n \bm{f}_n^T}\bm{\lambda}_i \right\},\\ \frac{\partial E[l_{\rho}^{C^*}]}{\partial \psi_i^{-1}} &=& \frac{\# n{\rm obs}(i)}{2} \psi_i - \frac{1}{2} \sum_{n \in n{\rm obs}(i)} \left\{ (x_{ni}-\mu_{i})^2 - 2(x_{ni}-\mu_i)\bm{\lambda}_i^T\hat{\bm{f}}_n+ \bm{\lambda}_i^T \widehat{\bm{f}_n \bm{f}_n^T}\bm{\lambda}_i \right\}. \end{eqnarray*} The solutions are \begin{eqnarray*} \mu_i&=& \frac{1}{\# n{\rm obs}(i)} \sum_{n \in n{\rm obs}(i)} (x_{ni} - \bm{\lambda}_i^T\hat{\bm{f}}_n),\\ \bm{\lambda}_i &=& \left\{ \sum_{n \in n{\rm obs}(i)} \widehat{\bm{f}_n \bm{f}_n^T} \right\}^{-1} \left\{ \sum_{n \in n{\rm obs}(i)} (x_{ni}-\mu_i) \hat{\bm{f}}_n \right\},\\ \psi_i &=& \frac{1}{\# n{\rm obs}(i)} \sum_{n \in n{\rm obs}(i)}\left\{ (x_{ni}-\mu_{i})^2 - 2(x_{ni}-\mu_i)\bm{\lambda}_i^T\hat{\bm{f}}_n+ \bm{\lambda}_i^T \widehat{\bm{f}_n \bm{f}_n^T}\bm{\lambda}_i \right\}. \end{eqnarray*} \subsection{Computational complexity of matrix operations}\label{sec:Comparison of computational cost} In this section, we discuss the computational complexity of the matrix operations in each algorithm. For ease of comprehension, we assume that the number of missing (resp.\ observed) variables, say $p_{\rm mis}$ (resp.\ $p_{\rm obs}$), is constant across observations. The computational complexity without this assumption can be discussed in the same manner.
Assume that a massive amount of data is missing, i.e., $p_{\rm mis} \approx p$, and that $m$ is sufficiently small. In the E step of the ordinary EM algorithm, the computation of $\bm{\Omega}_{-[n],-[n]}^{-1}$ requires almost $O(p^2)$ operations. To show this, we first calculate the inverse of the covariance matrix of the joint distribution of $(\bm{x}_n^T,\bm{f}_{n}^T)^T$ in (\ref{xn_fn_distribution}): \begin{equation*} \begin{bmatrix} \bm{\Lambda} \bm{\Lambda}^T + \bm{\Psi} & \bm{\Lambda}\\ \bm{\Lambda}^T & \bm{I} \end{bmatrix}^{-1} = \begin{bmatrix} \bm{\Psi}^{-1} & -\bm{\Psi}^{-1} \bm{\Lambda}\\ -\bm{\Lambda}^T\bm{\Psi}^{-1} & \bm{M} \end{bmatrix}, \end{equation*} where $\bm{M} = \bm{\Lambda}^T\bm{\Psi}^{-1}\bm{\Lambda} + \bm{I}$. Thus, we have \begin{equation*} \bm{\Omega}_{-[n],-[n]}^{-1} = \bm{\Lambda}_{-[n]} (\bm{M} - \bm{\Lambda}_{-[n]}^{T} \bm{\Psi}_{-[n]}^{-1}\bm{\Lambda}_{-[n]})^{-1} \bm{\Lambda}_{-[n]}^{T}, \end{equation*} which requires $O(p^2)$ operations when $m$ is small. The computational complexity of the E step is then $O(Np^2)$. The computational complexity of the M step is $O(p)$, which is negligible compared with that of the E step. On the other hand, the modified EM algorithm is much more efficient: the corresponding computation needs only $O(Np_{\rm obs}^2)$ operations. Furthermore, with a large rate of missing values, we found that the number of iterations in the modified EM algorithm tends to be much smaller than that of the ordinary EM algorithm, as shown in the simulation study in Section \ref{sec.timing}. However, we do not yet have mathematical support for this claim; we leave it as a future research topic. \section{Analysis of data from a Web-based questionnaire on first impressions}\label{sec:realdata} We now explore the underlying factor structure of personal assessments of first impressions, based on data from a Web-based questionnaire.
The respondents were asked to evaluate four virtual people based on several paired adjectives (e.g., pleasant -- unpleasant) on a scale of 1 to 5. In order to use a wide variety of personal-assessment measures for investigating the underlying structure of first impressions, we prepared 94 measures. Answering all 94 items is a heavy load, so the following procedure was carried out: \begin{enumerate} \item Before the four virtual people were displayed, the participants selected four assessment measures (selective measures) that they used in their daily life. \item The participants evaluated the four virtual people based on the four selective measures and an additional six assessment measures that were assigned to all participants (common measures). The six common measures are as follows: ``pleasant -- unpleasant'', ``friendly -- unfriendly'', ``careful -- hasty'', ``sensible -- insensible'', ``active -- passive'', and ``confident -- unconfident''. \end{enumerate} Each participant thus answered only $10$ $(=4+6)$ items out of the 94 items, so that almost 90\% of the data values were missing. Because 8544 participants appropriately completed the questionnaire for four virtual people, the number of observations is $8544 \times 4=34176$. The number of factors was set to $m=3$, because \citet{rosenberg1968multidimensional} described personality impressions as being based on a three-dimensional configuration. First, the computational time of the ordinary EM algorithm described in Section \ref{sec:ordinary EM} was compared with that of the modified EM algorithm described in Section \ref{sec:modified EM}. A quasi-Newton method was also compared; the inverse of the Hessian matrix was approximated by the Broyden--Fletcher--Goldfarb--Shanno (BFGS) algorithm.
The quasi-Newton method uses the full information likelihood function in (\ref{taisuuyuudo}) and its first derivatives given by \begin{eqnarray*} \dfrac{\partial \ell(\bm{\mu},\bm{\Lambda},\bm{\Psi})}{\partial \bm{\mu}}&=& \sum_{n=1}^N \mathbf{\Sigma}_{[n]}^{-1}(\bm{x}_{[n]} - \bm{\mu}_{[n]}),\\ \dfrac{\partial \ell(\bm{\mu},\bm{\Lambda},\bm{\Psi})}{\partial \bm{\Lambda}}&=& \sum_{n=1}^N \left(\mathbf{\Sigma}_{[n]}^{-1}(\bm{x}_{[n]}-\bm{\mu}_{[n]})(\bm{x}_{[n]}-\bm{\mu}_{[n]})^T\mathbf{\Sigma}_{[n]}^{-1} - \mathbf{\Sigma}_{[n]}^{-1}\right)\bm{\Lambda}_{[n]},\\ \dfrac{\partial \ell(\bm{\mu},\bm{\Lambda},\bm{\Psi})}{\partial \bm{\Psi}} &=& \frac{1}{2} \sum_{n=1}^N \mathrm{diag}\left(\mathbf{\Sigma}_{[n]}^{-1}(\bm{x}_{[n]}-\bm{\mu}_{[n]})(\bm{x}_{[n]}-\bm{\mu}_{[n]})^T\mathbf{\Sigma}_{[n]}^{-1} - \mathbf{\Sigma}_{[n]}^{-1}\right). \end{eqnarray*} Note that the quasi-Newton algorithm can be inefficient when the number of observations is very large, because the covariance matrix $\mathbf{\Sigma}_{[n]}$ and its inverse must be computed for each case $n$. We computed the average time for 10 runs using different initial values. All computations were carried out on Windows 8 with an Intel Core i7 3.4 GHz processor. The program was written in {\tt R} using {\tt C}. For the quasi-Newton method via BFGS optimization, we used the {\tt vmmin} function called by the {\tt optim} function in {\tt R}. The results were: \begin{itemize} \item modified EM algorithm: \quad 7.81 seconds, \item ordinary EM algorithm: \quad 9.00 hours, \item quasi-Newton method: \quad \ \ 25.8 minutes. \end{itemize} Our algorithm was considerably faster than the two existing methods. Note that the EM algorithm converged to the FIML estimates for all 10 initial values, whereas the quasi-Newton algorithm diverged for 2 out of 10 initial values. Thus, the quasi-Newton algorithm may be unstable compared with the EM algorithm.
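The per-case cost of forming $\mathbf{\Sigma}_{[n]}^{-1}$ is exactly where the low-rank structure $\mathbf{\Sigma}=\bm{\Lambda}\bm{\Lambda}^T+\bm{\Psi}$ helps: by the Woodbury identity, $\mathbf{\Sigma}^{-1}=\bm{\Psi}^{-1}-\bm{\Psi}^{-1}\bm{\Lambda}(\bm{I}+\bm{\Lambda}^T\bm{\Psi}^{-1}\bm{\Lambda})^{-1}\bm{\Lambda}^T\bm{\Psi}^{-1}$, so only an $m\times m$ system has to be solved. A quick numerical check of the identity on a $2\times 2$, one-factor example (ours):

```python
# p = 2 observed variables, m = 1 factor: Sigma = Lam Lam^T + Psi
Lam = [0.8, 0.6]                 # p x 1 loading "matrix" stored as a vector
Psi = [0.36, 0.64]               # diagonal of Psi

# direct inverse of the 2 x 2 matrix Sigma
S = [[Lam[a] * Lam[b] + (Psi[a] if a == b else 0.0) for b in range(2)]
     for a in range(2)]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
direct = [[S[1][1] / det, -S[0][1] / det],
          [-S[1][0] / det, S[0][0] / det]]

# Woodbury: Psi^{-1} - Psi^{-1} Lam (1 + Lam^T Psi^{-1} Lam)^{-1} Lam^T Psi^{-1}
M = 1.0 + sum(Lam[i] ** 2 / Psi[i] for i in range(2))   # here a 1 x 1 "matrix"
woodbury = [[(1.0 / Psi[a] if a == b else 0.0)
             - (Lam[a] / Psi[a]) * (Lam[b] / Psi[b]) / M
             for b in range(2)] for a in range(2)]
```

For general $m$ the inner scalar $M$ becomes the $m\times m$ matrix $\bm{I}+\bm{\Lambda}^T\bm{\Psi}^{-1}\bm{\Lambda}$, and the two inverses agree entry by entry.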
Next, the loading matrix was rotated by the promax method \citep{hendrickson1964promax} to interpret the estimated common factors. The estimated factor loadings and unique variances are shown in Appendix A. The results show that the FIML procedure was able to produce the following three interpretable common factors: {\it personality}, {\it intelligence}, and {\it activeness}. Although the estimated model is interpretable, it is not yet clear whether the FIML can achieve good estimation accuracy when the missing value rate is as large as 90\%, even if the number of observations is as large as $N=30000$. In order to investigate how well the FIML method performs when the majority of the data values are missing, we conduct Monte Carlo simulations under a wide variety of sample sizes, as shown in the next section. \section{Monte Carlo Simulations}\label{sec:simulation} In the simulations, we used the following loading matrix and unique variances: \begin{eqnarray*} \bm{\Lambda} &=& (\underbrace{0.8 \bm{I}_3, \ 0.8 \bm{I}_3, \ \dots, \ 0.8 \bm{I}_3}_{30})^T, \quad \bm{\Psi} = {\rm diag}(\bm{I} - \bm{\Lambda} \bm{\Lambda}^T ). \end{eqnarray*} In this case, $p=90$ and $m=3$. The model was estimated by the maximum likelihood method under the rotational restriction that the upper triangular part of the loading matrix is zero, i.e., $\lambda_{ij}=0$ ($j>i$) (e.g., \citealp{anderson1956statistical}). The aim of this simulation study is to (i) investigate how well the FIML method performs when the majority of the data are missing, and (ii) compare the computation times of the quasi-Newton method, the ordinary EM algorithm, and the modified EM algorithm. \subsection{Investigation of the performance of the FIML estimation}\label{sec:performance} First, we investigated the performance of the FIML procedure when a large number of data values were missing. The number of observations took twenty values decreasing on the log scale from $N=40000$ to $N=200$.
We first generated the common factors and unique factors by using $\bm{f}_n \sim N(\bm{0}, \bm{I}_3)$ and $\bm{\varepsilon}_n \sim N(\bm{0},\bm{\Psi})$, and then the complete data were created by $\bm{x}_n = \bm{\Lambda} \bm{f}_n + \bm{\varepsilon}_n $ $(n=1,\dots,N)$. At each observation, we chose (approximately) $q$ variables and eliminated them. The mechanism for choosing which values to eliminate was assumed to be either missing completely at random (MCAR) or not missing at random (NMAR), as follows:
\begin{description}
\item[MCAR:] We randomly chose $q$ variables and set these as the missing values.
\item[NMAR:] For the $i$-th variable of the $n$-th subject, we calculated the value of the logistic function $p_{in} = 1/(1+\exp(-\alpha \bm{\lambda}_i^T\bm{f}_n))$, and then the missing indicator for $x_{in}$ was generated from the Bernoulli distribution with probability $p_{in}$. The value of $\alpha$ was chosen so that the mean value of the $p_{in}$ approximates the missing rate, i.e., $\sum_{i,n}p_{in}/(Np) \approx q/p$.
\end{description}
Note that the MCAR assumption is a special case of MAR, so the FIML procedure produces a consistent estimator under the assumption of MCAR. On the other hand, the NMAR assumption leads to an inconsistent estimator. In each case, the first six items were assumed to be ``common measures" that were not allowed to be missing (i.e., all subjects must answer these six questions). To investigate the effectiveness of the common measures, we also estimated the model without the common measures, i.e., the common measures were eliminated and the model was estimated by using the remaining 84 ($=90-6$) variables. This procedure was repeated 1000 times.
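The data generation and the NMAR mechanism above can be sketched as follows. This is our illustration; $\alpha=1$ is a placeholder rather than the tuned value used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, m = 500, 90, 3
Lam = np.vstack([0.8 * np.eye(m)] * 30)              # loading matrix of the simulations
Psi = np.diag(np.eye(p) - Lam @ Lam.T)               # unique variances
F = rng.normal(size=(N, m))                          # f_n ~ N(0, I_3)
X = F @ Lam.T + rng.normal(size=(N, p)) * np.sqrt(Psi)   # x_n = Lam f_n + eps_n

# NMAR: the indicator for x_in is Bernoulli(p_in) with the logistic
# probability p_in = 1/(1 + exp(-alpha * lambda_i' f_n)).  alpha = 1 is a
# placeholder; in the paper it is tuned so that mean(p_in) matches q/p.
alpha = 1.0
P = 1.0 / (1.0 + np.exp(-alpha * (F @ Lam.T)))
miss = rng.random(size=(N, p)) < P
miss[:, :6] = False                                  # six common measures never missing
X_obs = np.where(miss, np.nan, X)
```

Because $p_{in}$ depends on the factor scores $\bm{f}_n$ (and hence on the unobserved side of the data), this mechanism is not MAR, which is what makes the FIML estimator biased in this scenario.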
Figure \ref{fig:simulation_msebias} shows the square root of the mean squared error (${\rm sqrtMSE}$) and the bias (${\rm sqrtBIAS}$) of the estimator $\hat{\bm{\Lambda}}$ defined by
\begin{eqnarray*}
{\rm sqrtMSE} &=&\sqrt{ \frac{1}{1000r} \sum_{i=7}^p \sum_{j=1}^{\min(i,m)} \sum_{s=1}^{1000} ( \hat{\lambda}_{ij}^{(s)} - \lambda_{ij} )^2 }, \\
{\rm sqrtBIAS}& =&\sqrt{ \frac{1}{r} \sum_{i=7}^p \sum_{j=1}^{\min(i,m)} ( \bar{\lambda}_{ij} - \lambda_{ij})^2 },
\end{eqnarray*}
where $\hat{\lambda}_{ij}^{(s)}$ is the maximum likelihood estimate for the $s$-th dataset, $\bar{\lambda}_{ij} = \sum_{s}\hat{\lambda}_{ij}^{(s)}/1000$, and $r$ is the number of parameters in the last 84 rows of $\bm{\Lambda}$, given by $r=(p-6)m-m(m-1)/2$. Note that the first six rows of the loading matrix were not used to compute the sqrtBIAS and sqrtMSE; this is because we would like to investigate whether the common measures yield a good estimate of the parameters that correspond to the other 84 variables.
\begin{figure*}
\caption{Square root of the mean squared error (${\rm sqrtMSE}$) and bias (${\rm sqrtBIAS}$) of the estimator $\hat{\bm{\Lambda}}$.}\label{fig:simulation_msebias}
\end{figure*}
The index $j$ runs up to $\min(i,m)$ because the upper triangular part of the loading matrix is fixed at zero. We provide a detailed description and discussion of Figure \ref{fig:simulation_msebias}:
\begin{itemize}
\item The upper left panel shows the sqrtMSE for MCAR with $q=0$ and $q=80$. When $q = 80$, the missing value rate was about 90\%, which is similar to the setting used for the Web-based questionnaire data analysis described in Section \ref{sec:realdata}. The horizontal line shows ${\rm sqrtMSE}=0.05$, which may be small enough to correctly interpret the estimated model if the observed variables are scaled to have unit variance. The sqrtMSE for $q=80$ was much larger than that for $q=0$ when $N<10000$. We may need a large number of observations, such as $N=10000$, to obtain an accurate estimate when a massive amount of data is missing.
\item The upper right panel depicts $\sqrt{N \cdot {\rm MSE}}$. The maximum likelihood estimator is $\sqrt{N}$-consistent, so $\sqrt{N \cdot {\rm MSE}}$ should be roughly constant for large values of $N$. We can see that this asymptotic regime may be reached when $N>20000$. When $N=20000$, the sqrtMSE was approximately $0.02$, which is small enough to correctly interpret the estimated model. As a result, we may need $N>20000$ to produce an accurate estimate.
\item The lower left panel shows the sqrtMSE with six common measures and with no common measures. This shows that the common measures play an important role in making the value of the sqrtMSE smaller.
\item The lower right panel shows the sqrtBIAS for MCAR and NMAR. This was done to investigate how well the FIML performs when the true missing mechanism is NMAR. When the missing mechanism is MCAR, the sqrtBIAS converges to zero, which means the FIML produces a consistent estimator. On the other hand, the FIML estimates in the NMAR case are biased, so that the sqrtBIAS seems to converge to some small positive value as $N \rightarrow \infty$. However, the sqrtBIAS was approximately $0.01$, which may be sufficiently small compared with the sqrtMSE depicted in the upper left panel.
\end{itemize}
We also computed the minimum number of observations required to satisfy ${\rm sqrtMSE}<0.05$ and ${\rm sqrtMSE}<0.025$ for various $q$; these are shown in Table \ref{table:sqrtMSE}. For example, when $q = 80$, we need at least $N = 20056$ observations to satisfy ${\rm sqrtMSE}<0.025$. In the data from the Web-based questionnaire, as described in Section \ref{sec:realdata}, the number of observations was $N=34176$. Therefore, the value of the ${\rm sqrtMSE}$ might be less than $0.025$, which is sufficiently small to correctly interpret the estimated model.
\begin{table}[t]
\caption{Minimum number of observations required to achieve each ${\rm sqrtMSE}$ threshold, for various $q$.
}\label{table:sqrtMSE}
\begin{center}
\begin{tabular}{rrrrrrr}
\hline
sqrtMSE & $q=0$ & $q=20$ & $q=40$ & $q=60$ & $q=70$ & $q=80$ \\ \hline
0.025 & 1279 & 1460 & 1869 & 3134 & 5281 & 20056 \\
0.05 & 321 & 385 & 516 & 605 & 1363 & 5329 \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Comparison of computation times}\label{sec.timing}
We computed the computation time and the number of iterations for MCAR with common measures when $q=0,10,20,\dots,80$ and $N=2000$. The other settings were the same as for the comparison of computation times in the analysis of actual data, as discussed in Section \ref{sec:realdata}. Figure \ref{fig:simulation_timing} shows the computation times and the numbers of iterations, each averaged over 10 runs, for each of the three algorithms (quasi-Newton method, ordinary EM, and modified EM). Note that these algorithms converged to the same solutions when starting from the same initial values. From the results presented in Figure \ref{fig:simulation_timing}, we can see that
\begin{itemize}
\item The modified EM algorithm was the fastest of the three algorithms when $q \ge 10$. In particular, when $q=80$ (i.e., the majority of the data values were missing), the modified EM algorithm was 247 times faster than the ordinary EM algorithm, and 128 times faster than the quasi-Newton method.
\item The number of iterations of the ordinary EM algorithm increased as the number of missing variables $q$ increased. This shows that the ordinary EM algorithm may be inefficient when the number of missing values is very large. On the other hand, for both the quasi-Newton method and the modified EM algorithm, the number of iterations decreased as the number of missing values increased.
\end{itemize}
\begin{figure*}
\caption{Comparison of computation times. The left panel shows the speed-up ratio; the baseline is the quasi-Newton method without missing data.
The right panel depicts the number of iterations for each method.}\label{fig:simulation_timing}
\end{figure*}
\section{Concluding remarks}
We presented a new FIML estimation algorithm that improves the computational speed of the ordinary EM algorithm. In the analysis of actual data, the proposed algorithm was considerably faster than the ordinary EM algorithm.
We also conducted Monte Carlo simulations to investigate the performance of the FIML procedure. The results showed that several tens of thousands of observations may be necessary in order to obtain an accurate estimate when the rate of missing values is 90\%. Although the FIML procedure performed well even when the true missing data were NMAR based on the logistic function, various other NMAR cases were not explored (e.g., \citealp{yuan2009identifying,kano2011analysis}). As a future research topic, it would be interesting to explore the performance of FIML estimation and to determine algorithms that would be efficient for various NMAR cases. Another topic would be to determine a much faster algorithm for high-dimensional sparse data, such as the Netflix Prize dataset \citep{bennett2007netflix}, which consists of $(N,p)=(480189,17700)$ with 99\% of the data missing.
\appendix
\section{Estimates of factor loadings for the analysis of the Web-based questionnaire data}
The FIML estimates of the factor loadings and unique variances are shown in Tables \ref{tab:factor1}, \ref{tab:factor23}, and \ref{tab:unique}. The estimates of the factor loadings were rotated by the promax method \citep{hendrickson1964promax}. Table \ref{tab:factor1} shows the adjective pairs related to factor 1, and Table \ref{tab:factor23} presents the items related to factors 2 and 3. From the 94 personality traits, the following three common factors were found: personality (Factor 1), intelligence (Factor 2), and activeness (Factor 3). Table \ref{tab:unique} shows the adjective pairs that possess large unique variances, which means that these items are not closely related to these three factors.
\begin{table}[!htbp]
\caption{Factor loadings for the 28 items that have large absolute loadings on Factor 1.
The absolute values of factor loadings that are larger than 0.4 are in bold.}\label{tab:factor1}
\begin{center}
\begin{tabular}{lrrrr}
\hline
Adjective Pairs & Factor 1 & Factor 2 & Factor 3 & Uniquenesses \\ \hline
Pleasant $-$ Unpleasant & {\bf 0.706} & 0.154 & 0.061 & 0.262 \\ Friendly $-$ Unfriendly & {\bf 0.791} & $-$0.022 & 0.150 & 0.266 \\ Casual $-$ Formal & {\bf 0.691} & $-$0.221 & 0.172 & 0.575 \\ Dishonest $-$ Honest & {\bf $-$0.479} & $-$0.333 & 0.238 & 0.536 \\ Bad Feeling $-$ Good Feeling & {\bf $-$0.579} & $-$0.036 & $-$0.107 & 0.422 \\ Obedient $-$ Disobedient & {\bf 0.729} & 0.101 & $-$0.160 & 0.459 \\ Skeptical $-$ Credulous & {\bf $-$0.632} & 0.396 & $-$0.046 & 0.827 \\ Honest $-$ Liar & {\bf 0.515} & 0.289 & $-$0.087 & 0.475 \\ Modest $-$ Immodest & {\bf 0.527} & 0.323 & {\bf $-$0.462} & 0.507 \\ Frank $-$ Formal & {\bf 0.624} & $-$0.169 & 0.345 & 0.383 \\ Mild $-$ Intense & {\bf 0.681} & 0.240 & $-$0.396 & 0.420 \\ Kind $-$ Unkind & {\bf 0.672} & 0.179 & $-$0.016 & 0.269 \\ Sympathetic $-$ Unsympathetic & {\bf 0.683} & 0.250 & $-$0.143 & 0.377 \\ Warm $-$ Cold & {\bf 0.810} & 0.092 & $-$0.043 & 0.281 \\ Acid $-$ Round & {\bf $-$0.643} & 0.012 & 0.305 & 0.559 \\ Patient $-$ Impatient & {\bf 0.595} & 0.111 & $-$0.272 & 0.520 \\ Soft $-$ Hard & {\bf 0.812} & $-$0.081 & $-$0.002 & 0.427 \\ Tough $-$ Gentle & {\bf $-$0.546} & 0.222 & 0.384 & 0.615 \\ Mean $-$ Nice & {\bf $-$0.547} & $-$0.052 & 0.195 & 0.454 \\ Laid-back $-$ Rash & {\bf 0.723} & $-$0.058 & $-$0.339 & 0.529 \\ Interesting $-$ Boring & {\bf 0.440} & $-$0.113 & {\bf 0.438} & 0.515 \\ Cheerful $-$ Depressing & {\bf 0.571} & $-$0.030 & 0.239 & 0.346 \\ (Agree $-$ Disagree) with Each Other & {\bf 0.666} & 0.056 & 0.090 & 0.448 \\ (Same $-$ Different) Ways of Thinking & {\bf 0.589} & 0.177 & 0.004 & 0.425 \\ Empathetic $-$ Lack Empathy & {\bf 0.644} & 0.230 & 0.031 & 0.370 \\ Feel at Ease $-$ Frustrating & {\bf 0.716} & 0.144 & $-$0.072 & 0.341 \\ Safe $-$ Dangerous & {\bf 0.656} & 0.302 &
$-$0.099 & 0.378 \\ Friend $-$ Enemy & {\bf 0.656} & 0.114 & 0.029 & 0.317 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
\caption{Factor loadings for the 29 items that have large absolute loadings on Factors 2 and 3. The absolute values of factor loadings that are larger than 0.4 are in bold.}\label{tab:factor23}
\begin{center}
\begin{tabular}{lrrrr}
\hline
Adjective Pairs & Factor 1 & Factor 2 & Factor 3 & Uniquenesses \\ \hline
Careful $-$ Hasty & 0.193 & {\bf 0.531} & $-$0.169 & 0.345 \\ Sensible $-$ Insensible & 0.313 & {\bf 0.516} & $-$0.017 & 0.295 \\ Stable $-$ Unstable & 0.142 & {\bf 0.517} & 0.096 & 0.436 \\ Neat $-$ Untidy & 0.195 & {\bf 0.606} & 0.041 & 0.400 \\ Serious $-$ Frivolous & 0.398 & {\bf 0.518} & $-$0.128 & 0.341 \\ Responsible $-$ Irresponsible & 0.171 & {\bf 0.621} & 0.028 & 0.417 \\ Careful $-$ Careless & $-$0.060 & {\bf 0.667} & $-$0.060 & 0.391 \\ Intellectual $-$ Sensuous & 0.004 & {\bf 0.638} & 0.039 & 0.468 \\ Mature $-$ Childish & 0.061 & {\bf 0.578} & 0.048 & 0.452 \\ Calm $-$ Passionate & 0.081 & {\bf 0.549} & $-$0.227 & 0.531 \\ Logical $-$ Emotional & $-$0.158 & {\bf 0.715} & $-$0.025 & 0.506 \\ Respected $-$ Disrespectful & 0.382 & {\bf 0.418} & 0.082 & 0.378 \\ Active $-$ Passive & $-$0.166 & 0.103 & {\bf 0.779} & 0.231 \\ Confident $-$ Unconfident & $-$0.220 & 0.208 & {\bf 0.725} & 0.256 \\ Sober $-$ Flashy & 0.339 & 0.371 & {\bf $-$0.495} & 0.578 \\ Healthy $-$ Sickly & 0.150 & $-$0.036 & {\bf 0.638} & 0.299 \\ Strong $-$ Weak & $-$0.255 & 0.102 & {\bf 0.784} & 0.365 \\ Reliable $-$ Unreliable & 0.020 & 0.299 & {\bf 0.548} & 0.413 \\ Bold $-$ Timid & $-$0.019 & $-$0.061 & {\bf 0.731} & 0.481 \\ Clear $-$ Vague & $-$0.193 & 0.236 & {\bf 0.660} & 0.437 \\ Loud $-$ Quiet & 0.105 & $-$0.140 & {\bf 0.724} & 0.425 \\ Extrovert $-$ Introvert & $-$0.039 & $-$0.029 & {\bf 0.771} & 0.347 \\ Talkative $-$ Taciturn & 0.084 & $-$0.161 & {\bf 0.712} & 0.480 \\ Inner $-$ Outward & 0.270 & 0.318 & {\bf $-$0.552} &
0.606 \\ Exhibitionist $-$ Quiet & $-$0.235 & $-$0.100 & {\bf 0.810} & 0.476 \\ Bright $-$ Dark & 0.382 & $-$0.120 & {\bf 0.553} & 0.347 \\ Cheerful $-$ Dismal & 0.439 & $-$0.134 & {\bf 0.504} & 0.297 \\ Rich $-$ Poor & 0.123 & 0.337 & 0.397 & 0.402 \\ Superior $-$ Inferior & $-$0.094 & {\bf 0.422} & {\bf 0.452} & 0.342 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
\caption{Factor loadings for the 37 items that have large unique variances. The absolute values of factor loadings that are larger than 0.4 are in bold.}\label{tab:unique}
\begin{center}
\begin{tabular}{lrrrr}
\hline
Adjective Pairs & Factor 1 & Factor 2 & Factor 3 & Uniquenesses \\ \hline
Neat $-$ Scruffy & 0.127 & {\bf 0.491} & 0.145 & 0.502 \\ Filthy $-$ Clean & $-$0.149 & $-$0.174 & $-$0.194 & 0.666 \\ Disgusting $-$ Delightful & $-$0.326 & $-$0.145 & $-$0.156 & 0.622 \\ Beautiful $-$ Ugly & 0.286 & 0.166 & 0.266 & 0.689 \\ Cool $-$ Youthful & $-$0.383 & 0.374 & $-$0.007 & 0.811 \\ Sophisticated $-$ Na\"ive & 0.092 & 0.299 & 0.355 & 0.731 \\ (Long $-$ Short) Hair & {\bf 0.410} & 0.071 & 0.010 & 0.970 \\ (White $-$ Brown) Skin & {\bf 0.432} & 0.126 & 0.139 & 0.667 \\ Short $-$ Tall & 0.089 & $-$0.194 & $-$0.115 & 0.722 \\ Weak $-$ Strong & 0.018 & $-$0.175 & $-$0.265 & 0.786 \\ Wan $-$ Robust & 0.311 & $-$0.065 & $-$0.254 & 1.018 \\ Cautious $-$ Brave & 0.204 & 0.040 & $-$0.291 & 0.599 \\ Unambitious $-$ Ambitious & 0.150 & $-$0.025 & {\bf $-$0.518} & 0.641 \\ Masculine $-$ Feminine & {\bf $-$0.422} & 0.277 & {\bf 0.438} & 0.775 \\ Fulfilling $-$ Empty & 0.119 & 0.263 & 0.473 & 0.492 \\ Happy $-$ Unhappy & 0.382 & 0.337 & 0.103 & 0.477 \\ Soft $-$ Firm & 0.158 & $-$0.250 & 0.288 & 0.759 \\ Elegant $-$ Ungracious & 0.318 & 0.379 & $-$0.041 & 0.544 \\ Lazy $-$ Hardworking & $-$0.035 & {\bf $-$0.441} & $-$0.194 & 0.758 \\ Incorrect $-$ Correct & $-$0.010 & $-$0.199 & 0.057 & 0.849 \\ Sensitive $-$ Insensitive & 0.051 & {\bf 0.480} & 0.188 & 0.622 \\ Simple $-$ Complex
& 0.292 & $-$0.080 & $-$0.062 & 0.856 \\ New $-$ Old & 0.024 & $-$0.029 & {\bf 0.566} & 0.606 \\ Disorganized $-$ Organized & $-$0.067 & {\bf $-$0.553} & 0.178 & 0.529 \\ Stubborn $-$ Flexible & {\bf $-$0.419} & 0.227 & 0.142 & 0.763 \\ Closed $-$ Open-Minded & $-$0.230 & 0.209 & {\bf $-$0.488} & 0.751 \\ Unsocial $-$ Social & $-$0.246 & 0.300 & $-$0.337 & 0.869 \\ Unfriendly $-$ Friendly & $-$0.073 & 0.233 & 0.216 & 0.808 \\ Emotional $-$ Intelligent & 0.004 & $-$0.372 & 0.345 & 0.755 \\ Forgetful $-$ Long-Memoried & 0.171 & $-$0.159 & $-$0.245 & 0.805 \\ Incompetent $-$ Competent & 0.151 & $-$0.300 & $-$0.328 & 0.609 \\ Individual $-$ Characterless & 0.017 & 0.100 & {\bf 0.564} & 0.653 \\ Walking Dictionary $-$ Ignorant & $-$0.127 & {\bf 0.412} & 0.332 & 0.457 \\ Deep $-$ Shallow & 0.088 & 0.399 & 0.198 & 0.634 \\ Popular $-$ Unpopular & 0.351 & 0.338 & 0.156 & 0.436 \\ (Similar to $-$ Different from) Myself & {\bf 0.576} & 0.049 & $-$0.076 & 0.769 \\ Worthy $-$ Unworthy & 0.226 & 0.382 & 0.207 & 0.400 \\ \hline
\end{tabular}
\end{center}
\end{table}
}
{ \baselineskip=6mm
\end{document}
\begin{document}
\title{\huge \bf Finite time extinction for a diffusion equation with spatially inhomogeneous strong absorption}
\author{
\Large Razvan Gabriel Iagar\,\footnote{Departamento de Matem\'{a}tica Aplicada, Ciencia e Ingenieria de los Materiales y Tecnologia Electr\'onica, Universidad Rey Juan Carlos, M\'{o}stoles, 28933, Madrid, Spain, \textit{e-mail:} [email protected]} \\[4pt]
\Large Philippe Lauren\c{c}ot\,\footnote{Institut de Math\'ematiques de Toulouse, CNRS UMR~5219, Universit\'e Paul Sabatier, F--31062 Toulouse Cedex 9, France. \textit{e-mail:} [email protected]}\\ [4pt]
}
\date{}
\maketitle
\begin{abstract}
The phenomenon of finite time extinction of bounded and non-negative solutions to the diffusion equation with strong absorption
$$ \partial_t u-\Delta u^m+|x|^{\sigma}u^q=0, \qquad (t,x)\in(0,\infty)\times\mathbb{R}^N, $$
with $m\geq1$, $q\in(0,1)$ and $\sigma>0$, is addressed. Introducing the critical exponent $\sigma^* := 2(1-q)/(m-1)$ for $m>1$ and $\sigma^*=\infty$ for $m=1$, extinction in finite time is known to take place for $\sigma\in [0,\sigma^*)$, and an alternative proof of this fact is provided here. When $m>1$ and $\sigma\ge \sigma^*$, the occurrence of finite time extinction is proved for a specific class of initial conditions, thereby supplementing results on non-extinction that are available in that range of $\sigma$ and showing their sharpness.
\end{abstract}
\bigskip
\noindent {\bf AMS Subject Classification 2010:} 35B33, 35B40, 35K55, 35K65.
\smallskip
\noindent {\bf Keywords and phrases:} porous medium equation, strong absorption, finite time extinction, inhomogeneous absorption.
\section{Introduction}
The aim of this short note is to address the question of finite time extinction of bounded and non-negative solutions to the following diffusion equation with spatially inhomogeneous strong absorption
\begin{equation}\label{eq1}
\partial_tu=\Delta u^m-|x|^{\sigma}u^q, \qquad (t,x)\in(0,\infty)\times\mathbb{R}^N,
\end{equation}
with initial condition
\begin{equation}\label{init.cond}
u(0)=u_0\in L_+^{\infty}(\mathbb{R}^N) := \big\{ z\in L^\infty(\mathbb{R}^N)\ :\ z(x)\ge 0 \;\text{ a.e. in }\; \mathbb{R}^N \big\},
\end{equation}
where
\begin{equation}\label{range.exp}
m\geq1, \qquad 0<q<1, \qquad \sigma>0.
\end{equation}
The dynamics of Eq.~\eqref{eq1} features a double competition: on the one hand, the diffusion, which spreads the mass of the solutions as time advances, competes with the absorption term, which triggers a loss of mass, possibly leading to vanishing in finite time. On the other hand, the weight on the absorption term is likely to bring into play significant imbalances between the properties of the solutions in a neighborhood of the origin (where, at least formally, the absorption term is very small) and in outer regions, at uniform distance from the origin, where the weight $|x|^{\sigma}$ becomes very strong. With respect to the first of the two competitions described above, the spatially homogeneous counterpart of Eq.~\eqref{eq1} with $\sigma=0$, that is,
\begin{equation}\label{eq1.hom}
\partial_tu-\Delta u^m+u^q=0, \qquad m\geq1,
\end{equation}
has a well understood dynamics, at least in some ranges of exponents. Indeed, the critical values for the absorption exponent $q$ are $q=m$ and $q=1$, and the solutions have very different properties according to whether $q>m$, $1<q\leq m$ or $0<q<1$.
The first of these three ranges sees the diffusion term being sufficiently strong either to govern the dynamics or to balance the effects of the absorption, giving rise to profiles known as very singular solutions. Properties such as decay estimates for the solutions, construction of singular (or very singular) self-similar solutions and large time behavior of solutions, in the sense of convergence to such self-similar profiles as $t\to\infty$, are established in a number of works, see for example \cite{KP86, PT86, KU87, KV88, KPV89, Le97, Kwak98} and the references therein. While for $q>m$ all compactly supported solutions have an algebraic time decay as $t\to\infty$ and their supports expand, reaching the whole space in the limit, a different situation occurs for $m>1$ and $q\in(1,m)$. In the latter range, the effect of the absorption is dominant and the expansion of the positivity region of the compactly supported solutions is limited, leading to the \emph{localization of supports}; that is, there exists a large radius $R>0$ not depending on time such that ${\rm supp}\,u(t)\subseteq B(0,R)$ for any $t>0$. Solutions still present an algebraic decay as $t\to\infty$, but self-similar solutions might become unbounded, presenting a specific growth at infinity \cite{MPV91, CVW97}, and delicate descriptions of the large time behavior, in the form of matched asymptotics between ``flat" solutions of the form $K_*t^{-1/(q-1)}$ for some explicit constant $K_*>0$ and boundary layers appearing near the boundary of the localization ball $B(0,R)$, have been established, see \cite{CV99}. Working with spatially inhomogeneous absorption with general weights, Peletier and Tesei established in dimension $N=1$ in \cite{PT85,PT86b} positivity of supports for $q>m$, conditions for localization of supports of the solutions for $1<q<m$ and the existence of stationary compactly supported solutions, also for $1<q<m$.
The dynamics of both Eq.~\eqref{eq1.hom} and Eq.~\eqref{eq1} seems to be by far more involved in the range $m\geq1$ and $0<q<1$, also known as the \emph{strong absorption range} due to the fact that the absorption term prevails. This dominance gives rise to two new mathematical phenomena not present for $q\geq1$. On the one hand, \emph{finite time extinction} of (non-negative bounded) solutions occurs. This means that there exists a time $T_e\in(0,\infty)$ such that $u(t)\not\equiv0$ for $t\in(0,T_e)$ but $u(T_e)\equiv0$, $T_e$ being thus called the extinction time of $u$. The finite time extinction stems from the ordinary differential equation $\partial_t u=-u^q$ obtained by neglecting the diffusion, emphasizing thus the strength of the absorption, see \cite{Ka75, Ka84}. On the other hand, \emph{instantaneous shrinking of supports} of solutions to Eq.~\eqref{eq1.hom} with bounded initial condition $u_0$ such that $u_0(x)\to0$ as $|x|\to\infty$ takes place; that is, for any non-negative initial condition $u_0\in L^{\infty}(\mathbb{R}^N)$ such that $u_0(x)\to0$ as $|x|\to\infty$ and any $\tau>0$, there is $R(\tau)>0$ such that ${\rm supp}\,u(t)\subseteq B(0,R(\tau))$ for all $t\ge\tau$. This is once more due to the strength of the absorption term, which involves a very quick loss of mass, and has been proved in \cite{EK79, Ka84} in the semilinear case $m=1$ and in \cite{Abd98} for $m>1$. Finer properties of the dynamics of Eq.~\eqref{eq1.hom} in this range are still lacking in general. For example, a description of the extinction rates and of the behavior near the extinction time of the solutions to Eq.~\eqref{eq1.hom} seems to be only available when $m+q=2$ in \cite{GV94}, revealing a case of asymptotic simplification, and appears to be a complicated problem if $m+q\neq2$.
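For the reader's convenience, the extinction mechanism of this ordinary differential equation can be made explicit by a standard separation of variables (a routine computation, added here):

```latex
% Solving \partial_t u = -u^q with u(0) = u_0 > 0 and 0 < q < 1:
\frac{d}{dt}\, u^{1-q} = (1-q)\, u^{-q}\, \partial_t u = -(1-q),
\qquad\text{hence}\qquad
u(t) = \big( u_0^{1-q} - (1-q)\,t \big)_+^{1/(1-q)},
```

so that $u$ vanishes at the finite time $T_e = u_0^{1-q}/(1-q)$; for $q\geq1$ the same computation gives $u(t)>0$ for all $t>0$, which is why $0<q<1$ is called the strong absorption range.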
Let us now turn our attention to Eq.~\eqref{eq1} with exponents as in \eqref{range.exp} and introduce the critical exponent
\begin{equation}\label{crit.exp}
\sigma^*:=\left\{\begin{array}{ll}
\displaystyle{\frac{2(1-q)}{m-1}}>0, & {\rm if} \ m>1, \\ & \\
\infty, & {\rm if} \ m=1.\end{array}\right.
\end{equation}
It is established in \cite{BHV01, KV97} ($m=1$) and in \cite{Belaud01,BeSh07} ($m>1$) that any solution to Eq.~\eqref{eq1} posed in a bounded domain $\Omega\subset\mathbb{R}^N$ with homogeneous Neumann boundary condition vanishes in finite time provided $0<\sigma<\sigma^*$ (the analysis performed in the above mentioned references actually deals with more general weights instead of $|x|^\sigma$). A similar result is shown in \cite{BHV01, BD10} ($m=1$) and in \cite{BeSh22} ($m>1$) for homogeneous Dirichlet boundary conditions. A direct consequence of the latter and the localization of supports established in \cite[Theorem~1.1]{ILS22} is that bounded and non-negative weak solutions to~\eqref{eq1}-\eqref{init.cond} vanish identically after a finite time when $\sigma\in [0,\sigma^*)$. We shall recall this result in Theorem~\ref{th.1} below and provide an alternative proof, relying on integral inequalities and the well-known $L^1$--$L^\infty$ regularizing effect of the heat equation or the porous medium equation in $\mathbb{R}^N$. In the complementary range $m>1$ and $\sigma\ge \sigma^*$, fewer results seem to be available, but finite time extinction is not a generic feature.
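The role of $\sigma^*$ can be seen from a quick balance computation on power-law profiles $u(x)=A|x|^a$ (our computation, added for orientation):

```latex
% For u(x) = A|x|^a one has
%   \Delta u^m \sim |x|^{am-2}, \qquad |x|^\sigma u^q \sim |x|^{aq+\sigma},
% and the two exponents match when a = (\sigma+2)/(m-q).  At \sigma = \sigma^*:
\frac{\sigma^*+2}{m-q}
  = \frac{2(1-q)+2(m-1)}{(m-1)(m-q)}
  = \frac{2}{m-1}
  = \frac{\sigma^*}{1-q}.
```

Thus, exactly at $\sigma=\sigma^*$, the exponent balancing diffusion against absorption coincides with the threshold $\sigma/(1-q)$ governing the flatness condition on $u_0$ in Theorem~\ref{th.2} below.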
Indeed, we recently proved in \cite{ILS22} that, when $m>1$, $\sigma>\sigma^*$, and the initial condition $u_0$ is positive in a neighborhood of the origin, the positivity set
\begin{equation*}
\mathcal{P}(t) := \{ x\in\mathbb{R}^N\ :\ u(t,x)>0\}
\end{equation*}
of the solution $u$ to \eqref{eq1}-\eqref{init.cond} is non-empty and contains the origin for all $t\ge 0$. Moreover, $\mathcal{P}(t)$ shrinks to $\{0\}$ as $t\to\infty$. We also identify in \cite{ILS22} a class of initial conditions $u_0$ for which $0\not\in\mathcal{P}(t)$ for all $t\ge 0$; that is, $u(t)$ vanishes at the origin, where the absorption is the weakest. It is then tempting to figure out whether finite time extinction could occur, and the purpose of this note is to answer this question in the affirmative.
\medskip
\noindent \textbf{Main results}. We deal in this note with the Cauchy problem~\eqref{eq1}-\eqref{init.cond} and first make precise the notion of solution we are using in the present work.
\begin{definition}\label{def.wp}
A non-negative weak solution to the Cauchy problem \eqref{eq1}-\eqref{init.cond} is a function
\begin{subequations}\label{wp4}
\begin{equation}
u \in L_+^\infty((0,\infty)\times\mathbb{R}^N) \label{wp4a}
\end{equation}
such that, for all $T>0$,
\begin{equation}
u^m \in L^2\big((0,T),H^1_{{\rm loc}}(\mathbb{R}^N)\big) \label{wp4b}
\end{equation}
and
\begin{equation}
\int_0^T \int_{\mathbb{R}^N} \Big[ (u_0-u) \partial_t \zeta + \nabla u^m \cdot \nabla\zeta + |x|^\sigma u^q \zeta \Big]\ dxds = 0 \label{wp4c}
\end{equation}
for all $\zeta\in C_c^1([0,T)\times\mathbb{R}^N)$.
\end{subequations}
\end{definition}
Basic results on the well-posedness of the Cauchy problem and on the instantaneous shrinking and localization of supports of solutions in the framework of Definition~\ref{def.wp} are established in \cite[Theorem~1.1]{ILS22}, the latter being restated as a preliminary fact at the beginning of Section~\ref{sec.th1}. Let us next recall that finite time extinction \emph{always occurs} if $\sigma<\sigma^*$, see \cite[Theorem~3.1]{BHV01} and \cite[Theorem~3.16]{BD10} ($m=1$) and \cite[Theorem~2.1]{BeSh22} ($m>1$).
\begin{theorem}\label{th.1}
If $m$ and $q$ are as in \eqref{range.exp}, $u_0$ is as in \eqref{init.cond} and $\sigma<\sigma^*$, then the solution $u$ to the Cauchy problem \eqref{eq1}-\eqref{init.cond} vanishes in finite time.
\end{theorem}
As already pointed out, Theorem~\ref{th.1} follows from the above mentioned references and the property of localization of supports of any solution to the Cauchy problem~\eqref{eq1}-\eqref{init.cond}. Nevertheless, we give a short and much simpler proof, based on estimates of the $L^1$ norm of the solution at different positive times, which also works for the spatially homogeneous equation~\eqref{eq1.hom}. Let us also stress here that, in the semilinear case $m=1$, $\sigma^*=\infty$ and all bounded and non-negative solutions vanish in finite time. However, things are more complex in the range $m>1$ and $\sigma\geq\sigma^*$. Indeed, for $\sigma>\sigma^*$, it is proved in \cite[Theorem~1.3]{ILS22} that solutions to the Cauchy problem~\eqref{eq1}-\eqref{init.cond} with an initial condition $u_0$ which is positive in a small ball $B(0,r)$ converge as $t\to\infty$ to a unique non-zero self-similar solution and, in particular, no longer vanish in finite time.
We thus infer that we need to \emph{restrict the class of initial conditions} $u_0$ for finite time extinction to hold true. This is made precise in the following statement, which is the main result of this note.
\begin{theorem}\label{th.2}
Let $m>1$, $0<q<1$ and $\sigma\geq\sigma^*$. Consider $a$, $A$ and $R>0$ such that $a>\sigma/(1-q)$ and
\begin{equation}\label{interm1}
A^{m-q}R^{a(m-q)-\sigma-2}\leq\frac{1}{am(am+N-2)}.
\end{equation}
Let $u_0\in L_+^{\infty}(\mathbb{R}^N)$ be such that
\begin{subequations}\label{cond.u0}
\begin{equation}
u_0(x)\leq A|x|^{a}, \qquad {\rm for \ any} \ x\in B(0,R), \label{cond.u0a}
\end{equation}
and
\begin{equation}
\|u_0\|_{\infty}\leq AR^a. \label{cond.u0b}
\end{equation}
\end{subequations}
Then the solution $u$ to \eqref{eq1}-\eqref{init.cond} vanishes in finite time.
\end{theorem}
According to our previous results in \cite{ILS22}, the flatness condition~\eqref{cond.u0a} as $|x|\to0$ is required in order to have finite time extinction. Moreover, in a forthcoming work \cite{ILS22b} we will show that a limitation on $\|u_0\|_{\infty}$ such as the one in \eqref{cond.u0b} is also needed, as a self-similar solution with exponential time decay as $t\to\infty$ and a dead core (that is, support located in an annulus far away from the origin) can be constructed for $\sigma=\sigma^*$. We may thus conclude that our results are qualitatively sharp. A corollary of Theorem~\ref{th.2}, which provides examples of initial conditions $u_0$ to which it applies, is given in Section~\ref{sec.th2} after its proof.
\section{Proof of Theorem \ref{th.1}}\label{sec.th1}
This section is dedicated to the proof of Theorem~\ref{th.1}.
Before starting the proof, we recall here as a preliminary the precise statement of the well-posedness theorem for Eq.~\eqref{eq1}, which can be found in \cite[Theorem~1.1]{ILS22}. It includes properties such as instantaneous shrinking, localization of supports of bounded solutions and the comparison principle, which will be used in the sequel.

\begin{theorem}\label{th.wp}
For any $m>1$, $q\in(0,1)$ and $\sigma>0$, there is a unique non-negative weak solution to the Cauchy problem~\eqref{eq1}, \eqref{init.cond} which satisfies
\begin{equation}
\|u(t)\|_\infty \le \|u_0\|_\infty\,, \qquad t\ge 0. \label{wp0}
\end{equation}
In addition, it enjoys the properties of \emph{instantaneous shrinking} and \emph{localization} of the support; that is, for any $t>0$, $u(t)$ has compact support and, given $\tau>0$, there exists $R=R(\tau)>0$, depending on $u_0$ and $\tau$ but not on $t\in [\tau,\infty)$, such that
$$
{\rm supp}\,u(t)\subseteq B(0,R(\tau)), \qquad {\rm for \ any} \ t\geq\tau.
$$
Also, the following \emph{comparison principle} holds true: given $u_{0,i}\in L_+^\infty(\mathbb{R}^N)$, $i=1,2$, such that $u_{0,1}\le u_{0,2}$ in $\mathbb{R}^N$, the corresponding non-negative weak solutions $u_1$ and $u_2$ to~\eqref{eq1}, \eqref{init.cond} satisfy $u_1\le u_2$ in $(0,\infty)\times\mathbb{R}^N$.
\end{theorem}

For a proof, we refer the reader to \cite[Sections~2 and~3]{ILS22}, while similar results for the semilinear case $m=1$ follow by a simple adaptation of the proofs. With these preparations, we are ready to prove our first main result. This is done by adapting an argument from \cite{BLS02}.
\begin{proof}[Proof of Theorem~\ref{th.1}]
According to Theorem~\ref{th.wp}, we may assume without loss of generality that $u_0$ is compactly supported and that there exists $R>0$ such that
\begin{equation}\label{loc}
{\rm supp}\,u(t)\subset B(0,R), \qquad {\rm for \ any} \ t>0.
\end{equation}
Owing to the non-negativity of $u$, it follows from Eq.~\eqref{eq1} that for any $T>0$ and $t\in(0,T)$ we have
\begin{equation}\label{interm2}
\int_{t}^T\int_{\mathbb{R}^N}|x|^{\sigma}u^q(s,x)\,dx\,ds\leq \|u(T)\|_{1}+\int_{t}^T\int_{\mathbb{R}^N}|x|^{\sigma}u^q(s,x)\,dx\,ds\leq \|u(t)\|_1.
\end{equation}
Pick now $b\in\mathbb{R}$ such that
\begin{equation}\label{interm3}
\frac{\sigma}{N}+1-q<b<\frac{\sigma^*}{N}+1-q.
\end{equation}
We then infer from \eqref{interm2} that
\begin{equation}\label{interm4}
\begin{split}
\|u(t)\|_1&\geq\int_t^T\|u(s)\|_{\infty}^{-b}\int_{\mathbb{R}^N}|x|^{\sigma}\|u(s)\|_{\infty}^bu^q(s,x)\,dx\,ds\\
&\geq\int_t^T\|u(s)\|_{\infty}^{-b}\int_{\mathbb{R}^N}|x|^{\sigma}u^{b+q}(s,x)\,dx\,ds.
\end{split}
\end{equation}
Next, on the one hand, we deduce from \eqref{loc}, H\"{o}lder's inequality and the non-negativity of $u$ that, for any $s\in(t,T)$,
\begin{equation*}
\begin{split}
\|u(s)\|_1&=\int_{B(0,R)}|x|^{\sigma/(b+q)}u(s,x)|x|^{-\sigma/(b+q)}\,dx\\
&\leq\left(\int_{B(0,R)}|x|^{\sigma}u^{b+q}(s,x)\,dx\right)^{1/(b+q)}\left(\int_{B(0,R)}|x|^{-\sigma/(b+q-1)}\,dx\right)^{(b+q-1)/(b+q)}\\
&\leq C\left(\int_{\mathbb{R}^N}|x|^{\sigma}u^{b+q}(s,x)\,dx\right)^{1/(b+q)},
\end{split}
\end{equation*}
since \eqref{interm3} guarantees that $\sigma/(b+q-1)<N$.
We thus find that
\begin{equation}\label{interm5}
\|u(s)\|_1^{b+q}\leq C\left(\int_{\mathbb{R}^N}|x|^{\sigma}u^{b+q}(s,x)\,dx\right).
\end{equation}
On the other hand, since $u$ is a subsolution to the porous medium equation if $m>1$ or to the heat equation if $m=1$, it follows from the well-known regularizing effect of the porous medium equation (see for example \cite[Theorem~2.1]{VazSmooth}) if $m>1$, or from the standard representation formula for the heat equation if $m=1$, that
$$
\|u(s)\|_{\infty}\leq C(s-t)^{-\theta}\|u(t)\|_1^{2\theta/N}, \qquad s\geq t,
$$
where $\theta=N/(N(m-1)+2)$. Consequently, taking into account that $b>0$, we have
\begin{equation}\label{interm6}
\|u(s)\|_{\infty}^{-b}\geq C(s-t)^{b\theta}\|u(t)\|_1^{-2b\theta/N}.
\end{equation}
Combining the estimates~\eqref{interm4}, \eqref{interm5} and~\eqref{interm6} gives, for any $s\in(t,T)$, that
$$
\|u(t)\|_1\geq C\int_t^{T}(s-t)^{b\theta}\|u(t)\|_{1}^{-2b\theta/N}\|u(s)\|_1^{b+q}\,ds,
$$
or equivalently
\begin{equation}\label{interm7}
\int_t^{T}(s-t)^{b\theta}\|u(s)\|_1^{b+q}\,ds\leq C\|u(t)\|_1^{(N+2b\theta)/N}.
\end{equation}
Since $\|u(s)\|_1\geq\|u(T)\|_1$ for $s\in(t,T)$ by~\eqref{eq1}, we further infer from \eqref{interm7} that
$$
C\|u(t)\|_1^{(N+2b\theta)/N}\geq\|u(T)\|_1^{b+q}\int_t^T(s-t)^{b\theta}\,ds=\frac{1}{b\theta+1}(T-t)^{b\theta+1}\|u(T)\|_1^{b+q},
$$
which can be written in the equivalent form
\begin{equation}\label{interm8}
\|u(T)\|_1\leq C(T-t)^{-(b\theta+1)/(b+q)}\|u(t)\|_1^{(N+2b\theta)/N(b+q)},
\end{equation}
which holds true for any $t\in(0,T)$.
Let us notice that, when $m>1$,
\begin{equation*}
\begin{split}
N+2b\theta-N(b+q)&=N(1-q)-\frac{N^2(m-1)}{N(m-1)+2}b\\
&=\frac{N}{N(m-1)+2}\left[(1-q)(N(m-1)+2)-N(m-1)b\right]>0,
\end{split}
\end{equation*}
since we deduce from \eqref{interm3} that
$$
b<1-q+\frac{\sigma^*}{N}=\frac{(1-q)(N(m-1)+2)}{N(m-1)}.
$$
The adaptation for $m=1$ is immediate, as then $\theta=N/2$ and the terms involving $b$ just cancel in the previous calculation. We thus have $N+2b\theta>N(b+q)$ and it follows from \cite[Lemme~4.1]{St65} that $u$ vanishes in finite time.
\end{proof}

\section{Proof of Theorem \ref{th.2}}\label{sec.th2}

Throughout this section, we fix $m>1$, $q\in(0,1)$ and $\sigma\geq\sigma^*$. Before starting the proof of Theorem~\ref{th.2}, we need a preparatory result on the availability of suitable stationary supersolutions to Eq.~\eqref{eq1}.

\begin{lemma}\label{lem.super}
Given $a\geq(\sigma+2)/(m-q)$ and $A>0$, $R>0$ satisfying~\eqref{interm1}, the function
$$
S_{a,A}(x):=A|x|^a, \qquad x\in\mathbb{R}^N,
$$
is a supersolution to Eq.~\eqref{eq1} on $(0,\infty)\times B(0,R)$.
\end{lemma}

\begin{proof}
Setting $l:=a(m-q)-\sigma-2\geq0$, and fixing some $x\in B(0,R)$, we obtain by direct calculation that
\begin{equation*}
\begin{split}
\partial_t S_{a,A}(x) - \Delta S_{a,A}^m(x) + |x|^{\sigma}S_{a,A}^q(x) &=-am(am+N-2)A^m|x|^{am-2}+A^q|x|^{\sigma+aq}\\
&=A^q|x|^{\sigma+aq}\left[1-am(am+N-2)A^{m-q}|x|^{l}\right]\\
&\geq A^q|x|^{\sigma+aq}\left[1-am(am+N-2)A^{m-q}R^l\right]\geq0,
\end{split}
\end{equation*}
where the last inequality follows from~\eqref{interm1}.
\end{proof}

We are now in a position to complete the proof of Theorem~\ref{th.2}.
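Before doing so, let us illustrate Lemma~\ref{lem.super} with a hypothetical choice of parameters, not taken from the references and meant only as a sanity check of condition~\eqref{interm1}: for $m=2$, $q=1/2$, $N=3$ and $\sigma=2$, the borderline exponent $a=(\sigma+2)/(m-q)=8/3$ gives $l=a(m-q)-\sigma-2=0$, so that \eqref{interm1} reduces to a condition on $A$ alone, namely
$$
A^{m-q}=A^{3/2}\leq\frac{1}{am(am+N-2)}=\frac{9}{304},
$$
and $S_{a,A}$ is then a supersolution on $(0,\infty)\times B(0,R)$ for every $R>0$. When $l>0$, condition~\eqref{interm1} instead couples $A$ and $R$: the larger the ball, the flatter the barrier $S_{a,A}$ has to be.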
\begin{proof}[Proof of Theorem~\ref{th.2}]
It follows from \eqref{cond.u0a} that
$$
u_0(x)\leq S_{a,A}(x)=A|x|^a, \qquad {\rm for \ any} \ x\in B(0,R),
$$
and from \eqref{cond.u0b} and \eqref{wp0} that
$$
u(t,x)\leq\|u_0\|_{\infty}\leq S_{a,A}(x), \qquad (t,x)\in(0,\infty)\times\partial B(0,R).
$$
Noticing that
$$
\frac{\sigma}{1-q}\geq\frac{\sigma+2}{m-q}, \qquad {\rm for} \ \sigma\geq\sigma^*
$$
(indeed, this inequality is equivalent to $\sigma(m-1)\geq 2(1-q)$, that is, to $\sigma\geq\sigma^*$), so that $a\geq(\sigma+2)/(m-q)$, and recalling that $A$ and $R$ satisfy~\eqref{interm1}, Lemma~\ref{lem.super} and the comparison principle in Theorem~\ref{th.wp} entail that
$$
u(t,x)\leq S_{a,A}(x) = A|x|^a, \qquad (t,x)\in(0,\infty)\times B(0,R).
$$
On the one hand, the previous inequality implies that, for $(t,x)\in (0,\infty)\times B(0,R)$,
\begin{equation}\label{th2.int}
|x|^{\sigma}u^q(t,x)\geq\left(\frac{u(t,x)}{A}\right)^{\sigma/a}u^q(t,x)=A^{-\sigma/a}u^{(aq+\sigma)/a}(t,x).
\end{equation}
On the other hand, for $(t,x)\in (0,\infty)\times(\mathbb{R}^N\setminus B(0,R))$, we have $|x|\geq R$, which gives, along with~\eqref{wp0},
\begin{equation}\label{th2.ext}
|x|^{\sigma}u^q(t,x)=|x|^{\sigma}\frac{\|u_0\|_{\infty}^{\sigma/a}}{\|u_0\|_{\infty}^{\sigma/a}}u^q(t,x)\geq\frac{R^{\sigma}}{\|u_0\|_{\infty}^{\sigma/a}}u^{(aq+\sigma)/a}(t,x).
\end{equation}
Introducing
$$
B:=\min\{A^{-\sigma/a},R^{\sigma}\|u_0\|_{\infty}^{-\sigma/a}\},
$$
we obtain from~\eqref{th2.int} and~\eqref{th2.ext} that
$$
|x|^{\sigma}u^q(t,x)\geq Bu^{(aq+\sigma)/a}(t,x), \qquad (t,x)\in(0,\infty)\times\mathbb{R}^N.
$$
Therefore, the comparison principle gives that $u\leq v$ in $(0,\infty)\times\mathbb{R}^N$, where $v$ is the solution to
$$
\partial_t v-\Delta v^m+Bv^{(aq+\sigma)/a}=0, \qquad {\rm in} \ (0,\infty)\times\mathbb{R}^N,
$$
with the same initial condition $v(0)=u_0$ in $\mathbb{R}^N$. Since $(aq+\sigma)/a<1$ by the choice of $a$, $v$ vanishes in finite time and so does $u$.
\end{proof}

We provide more precise examples of initial data for which finite time extinction holds true when $\sigma\ge \sigma^*$ in the following consequence of Theorem~\ref{th.2}.

\begin{corollary}\label{cor.2}
Let $m>1$, $0<q<1$, $\sigma\geq\sigma^*$ and consider $a>\sigma/(1-q)$ and $u_0\in L^{\infty}_{+}(\mathbb{R}^N)$ such that
\begin{equation}
u_0(x)\leq A_0|x|^a, \qquad x\in\mathbb{R}^N, \label{zz}
\end{equation}
for some $A_0>0$. Then there exists $M>0$ depending only on $m$, $a$, $N$, $q$, $\sigma$ and $A_0$ such that, if $\|u_0\|_{\infty}\leq M$, then the solution $u$ to the Cauchy problem~\eqref{eq1}-\eqref{init.cond} vanishes in finite time.
\end{corollary}

\begin{proof}
We set
$$
K:=\left[am(am+N-2)\right]^{1/(m-q)}, \qquad l:=a(m-q)-\sigma-2>0,
$$
and choose
$$
A=A_0, \qquad R=(KA_0)^{-(m-q)/l}, \qquad M=A_0R^a.
$$
Pick $u_0\in L_+^\infty(\mathbb{R}^N)$ satisfying~\eqref{zz} and $\|u_0\|_\infty\le M$. Then $A$ and $R$ satisfy~\eqref{interm1} and $u_0$ satisfies~\eqref{cond.u0}, so that the hypotheses of Theorem~\ref{th.2} are fulfilled. An application of this theorem ends the proof.
\end{proof}

\bigskip

\noindent \textbf{Acknowledgements.} The authors are partially supported by the Spanish project PID2020-115273GB-I00. R. G. I.
wishes to thank the Institut de Math\'ematiques de Toulouse, where part of this work was done, for its hospitality and support.

\bibliographystyle{plain}
\begin{thebibliography}{1}

\bibitem{Abd98} U. G. Abdullaev, \emph{Instantaneous shrinking of the support of a solution of a nonlinear degenerate parabolic equation}, Mat. Zametki, \textbf{63} (1998), no.~3, 323--331 (Russian). Translation in Math. Notes, \textbf{63} (1998), no.~3-4, 285--292.

\bibitem{Belaud01} Y. Belaud, \emph{Time-vanishing properties of solutions of some degenerate parabolic equations with strong absorption}, Adv. Nonlinear Stud., \textbf{1} (2001), no.~2, 117--152.

\bibitem{BD10} Y. Belaud and J. I. Diaz, \emph{Abstract results on the finite extinction time property: application to a singular parabolic equation}, J. Convex Anal., \textbf{17} (2010), no.~3-4, 827--860.

\bibitem{BHV01} Y. Belaud, B. Helffer and L. V\'eron, \emph{Long-time vanishing properties of solutions of some semilinear parabolic equations}, Ann. Inst. H. Poincar\'e C Anal. Non Lin\'eaire, \textbf{18} (2001), no.~1, 43--68.

\bibitem{BeSh07} Y. Belaud and A. Shishkov, \emph{Long-time extinction of solutions of some semilinear parabolic equations}, J. Differential Equations, \textbf{238} (2007), no.~1, 64--86.

\bibitem{BeSh22} Y. Belaud and A. Shishkov, \emph{Extinction in a finite time for solutions of a class of quasilinear parabolic equations}, Asymptot. Anal., \textbf{127} (2022), no.~1-2, 97--119.

\bibitem{BLS02} S. Benachour, Ph. Lauren\c{c}ot and D. Schmitt, \emph{Extinction and decay estimates for viscous Hamilton-Jacobi equations in $\mathbb{R}^N$}, Proc. Amer. Math. Soc., \textbf{130} (2002), no.~4, 1103--1111.

\bibitem{CV99} M. Chaves and J. L. V\'azquez, \emph{Free boundary layer formation in nonlinear heat propagation}, Comm.
Partial Differential Equations, \textbf{24} (1999), no.~11-12, 1945--1965.

\bibitem{CVW97} M. Chaves, J. L. V\'azquez and M. Walias, \emph{Optimal existence and uniqueness in a nonlinear diffusion-absorption equation with critical exponents}, Proc. Roy. Soc. Edinburgh Sect. A, \textbf{127} (1997), no.~2, 217--242.

\bibitem{EK79} L. C. Evans and B. F. Knerr, \emph{Instantaneous shrinking of the support of nonnegative solutions to certain nonlinear parabolic equations and variational inequalities}, Illinois J. Math., \textbf{23} (1979), no.~1, 153--166.

\bibitem{GV94} V. A. Galaktionov and J. L. V\'azquez, \emph{Extinction for a quasilinear heat equation with absorption I. Technique of intersection comparison}, Comm. Partial Differential Equations, \textbf{19} (1994), no.~7-8, 1075--1106.

\bibitem{ILS22} R. G. Iagar, Ph. Lauren\c{c}ot and A. S\'anchez, \emph{Self-similar shrinking of supports and non-extinction for a nonlinear diffusion equation with strong nonhomogeneous absorption}, Submitted (2022), Preprint ArXiv no.~2204.09307.

\bibitem{ILS22b} R. G. Iagar, Ph. Lauren\c{c}ot and A. S\'anchez, \emph{Eternal solutions for a quasilinear diffusion equation with strong nonhomogeneous absorption}, work in preparation.

\bibitem{Ka75} A. S. Kalasnikov, \emph{The propagation of disturbances in problems of non-linear heat conduction with absorption}, U.S.S.R. Comput. Math. Math. Phys., \textbf{14} (1975), no.~4, 70--85.

\bibitem{Ka84} A. S. Kalashnikov, \emph{Dependence of properties of solutions of parabolic equations in unbounded domains on the behavior of coefficients at infinity}, Mat. Sb., \textbf{125 (167)} (1984), no.~3, 398--409 (Russian). Translated as Math. USSR Sb., \textbf{53} (1986), no.~2, 399--410.

\bibitem{KP86} S. Kamin and L. A. Peletier, \emph{Large time behavior of solutions of the porous media equation with absorption}, Israel J.
Math., \textbf{55} (1986), no.~2, 129--146.

\bibitem{KPV89} S. Kamin, L. A. Peletier and J. L. V\'azquez, \emph{Classification of singular solutions of a nonlinear heat equation}, Duke Math. J., \textbf{58} (1989), no.~3, 601--615.

\bibitem{KU87} S. Kamin and M. Ughi, \emph{On the behavior as $t\to\infty$ of the solutions of the Cauchy problem for certain nonlinear parabolic equations}, J. Math. Anal. Appl., \textbf{128} (1987), no.~2, 456--469.

\bibitem{KV88} S. Kamin and L. V\'eron, \emph{Existence and uniqueness of the very singular solution of the porous media equation with absorption}, J. Analyse Math., \textbf{51} (1988), 245--258.

\bibitem{KV97} V. A. Kondratiev and L. V\'eron, \emph{Asymptotic behaviour of solutions of some nonlinear parabolic or elliptic equations}, Asymptot. Anal., \textbf{14} (1997), no.~2, 117--156.

\bibitem{Kwak98} M. Kwak, \emph{A porous media equation with absorption. I. Long time behavior}, J. Math. Anal. Appl., \textbf{223} (1998), no.~1, 96--110.

\bibitem{Le97} G. Leoni, \emph{On very singular self-similar solutions for the porous media equation with absorption}, Differential Integral Equations, \textbf{10} (1997), no.~6, 1123--1140.

\bibitem{MPV91} J. B. McLeod, L. A. Peletier and J. L. V\'azquez, \emph{Solutions of a nonlinear ODE appearing in the theory of diffusion with absorption}, Differential Integral Equations, \textbf{4} (1991), no.~1, 1--14.

\bibitem{PT86} L. A. Peletier and D. Terman, \emph{A very singular solution of the porous media equation with absorption}, J. Differential Equations, \textbf{65} (1986), no.~3, 396--410.

\bibitem{PT85} L. A. Peletier and A. Tesei, \emph{Diffusion in inhomogeneous media: localization and positivity}, Ann. Mat. Pura Appl., \textbf{141} (1985), no.~1, 307--330.

\bibitem{PT86b} L. A. Peletier and A.
Tesei, \emph{Global bifurcation and attractivity of stationary solutions of a degenerate diffusion equation}, Adv. in Appl. Math., \textbf{7} (1986), no.~4, 435--454.

\bibitem{St65} G. Stampacchia, \emph{Le probl\`eme de Dirichlet pour les \'equations elliptiques du second ordre \`a coefficients discontinus} (French), Ann. Inst. Fourier (Grenoble), \textbf{15} (1965), fasc.~1, 189--258.

\bibitem{VazSmooth} J. L. V\'azquez, \emph{Smoothing and Decay Estimates for Nonlinear Diffusion Equations. Equations of Porous Medium Type}, Oxford Lecture Series in Mathematics and its Applications 33, Oxford University Press, 2006.

\end{thebibliography}

\end{document}
\begin{document}

\author{Olaf M\"uller\footnote{Institut f\"ur Mathematik, Humboldt-Universit\"at zu Berlin, Unter den Linden 6, D-10099 Berlin, \texttt{Email: [email protected]}}}
\date{\today}
\title{Dimensions of ordered spaces and Lorentzian length spaces}

\begin{abstract}
\noindent After calculating the Dushnik-Miller dimension of Minkowski spaces (of manifold dimension larger than 2) to be countably infinite, we define a novel notion of dimension for ordered spaces recovering the correct manifold dimension and obtain a corresponding obstruction to the existence of injective monotonous maps between Lorentzian length spaces. Furthermore, we induce metrics on Cauchy subsets, prove the existence of rushing Cauchy functions with a given Cauchy zero locus, relate the respective Hausdorff dimensions and consider collapse phenomena in this setting.
\end{abstract}

\section{Introduction}

Motivated by the definition of the Hausdorff dimension for Lorentzian length spaces \cite{rMcS}, the purpose of this article is to explore different notions of dimension in the setting of (almost) Lorentzian length spaces. In the first part the accent is on notions of dimension using only the order relation. In Sec.~\ref{Dim}, after a short discussion of the notion of dimension in different categories, we calculate $\dim_{DM}(\mathbb{R}^{1,n}) = \aleph_0$ for all $n \geq 2$, where $\dim_{DM}$ denotes the classical Dushnik-Miller dimension of an ordered set and $\mathbb{R}^{1,n}$ the $(n+1)$-dimensional Minkowski spacetime. For all other previously defined order dimensions (see \cite{HBG}) we show a lower estimate by $\aleph_0$ for all $n \geq 2$. To remedy this incompatibility between the manifold dimension and the known order dimensions, we present a novel notion of dimension for ordered sets that takes the expected value $n+1$ on $\mathbb{R}^{1,n}$.
We also show how this fact leads to an obstruction to the existence of an order embedding of an order product into $\mathbb{R}^{1,n}$. An underlying question is how large the space of isomorphism classes of $n$-dimensional globally hyperbolic spacetimes is (w.r.t. the Lorentzian Gromov-Hausdorff metric) within the space of isomorphism classes of $n$-dimensional compact globally hyperbolic Lorentzian length spaces satisfying some bound on curvature and Lorentzian injectivity radius. An answer to this question could provide insights into the extent to which the functors between the categories of globally hyperbolic compact ordered measure spaces and g.h. compact Lorentzian length spaces from \cite{rMcS} and \cite{oM:LorFunct} are mutually inverse on subsets with bounds on curvature and injectivity radius.

\section{Dimensions} \label{Dim}

Notions of dimension are ubiquitous in the different branches of mathematics. All of them relate to cross products and reflect the existence of left or right invertible morphisms between objects. The author suggests the following general definition of dimension: Let $C$ be a category; a {\bf dimension} in $C$ is a map from ${\rm Obj} (C)$ into a totally ordered set\footnote{Often the totally ordered set is one of cardinalities. One can remain in the domain of set theory by just bounding the cardinality of the objects. Of course, for this we need that the cardinality of the product of two infinite nonempty sets $A,B$ is equal to $\max \{ \# A, \# B \}$.} $D$ such that for all $A,B \in {\rm Obj} (C)$ we have
$$\dim (A) < \dim(B) \Rightarrow \not \exists {\rm \ injective \ morphism \ } B \rightarrow A .$$
We call a dimension $\dim$ {\bf strong} iff it satisfies $\dim (A) < \dim(B) \Rightarrow \not \exists {\rm \ surjective \ morphism \ } A \rightarrow B$.
If $C$ has a forgetful functor to the monoidal category of sets and maps with the Kuratowski cross product and $\dim (A \times B) = \dim (A) + \dim (B)$, we call $\dim$ {\bf monoidal}. As is to be expected from the form of the definition, dimensions are intimately related to Cantor-Bernstein theorems in the respective categories. Of course, the prime example is $C_0$, the category of sets with the usual Kuratowski product and $\dim = \#$. Other examples (many of which are strong and monoidal) include:
\begin{enumerate}
\item $C^K_1$, the category of $K$-vector spaces, with $\dim_1^K (X)$ the cardinality of a Hamel basis of $X$,
\item $C_2^K$, the category of $K$-Hilbert spaces, with $\dim_2^K (X)$ the cardinality of a Hilbert basis of $X$,
\item $C_3$, the category of topological spaces, with $\dim_3$ being the small inductive, the large inductive, or the covering dimension (all coinciding on the separable metrizable spaces),
\item $C_4$, the category of metric spaces and Lipschitz maps, with the Hausdorff dimension,
\item $C_5$, the category of manifolds and embeddings, with the usual manifold dimension,
\item $C_6$, the category of ordered sets and increasing maps with the product order on the cross product, e.g. with the Dushnik-Miller dimension.
\end{enumerate}
In the last example, the {\bf product order} on an arbitrary Cartesian product $\Pi_{i \in I} C_i$ is defined by $x \leq y : \Leftrightarrow x_i \leq y_i \ \forall i \in I$. A map $f$ between ordered spaces $X$ and $Y$ is called an {\bf order embedding} iff $a \leq b \Leftrightarrow f(a) \leq f(b)$.
Most of the above notions of dimension have the virtue of being compatible, i.e., of coinciding on the intersections of the categories, with only two exceptions: firstly, the well-known inequality between the Hamel dimension $\dim^K_1$ and the Hilbert dimension $\dim^K_2$ in infinite dimensions, and secondly the inequality between the Dushnik-Miller dimension on one hand and all other dimensions on the other hand, as we will see below. It is interesting to observe that, with the exception of the Hausdorff dimension in $C_4$, the dimensions take values in the set\footnote{Again, put some upper bound on the cardinality.} of cardinalities; actually, they factor through some cardinality, and they refine cardinality in the sense that $\#A < \# B$ implies $\dim (A) < \dim (B)$ for any two objects $A,B$.

Let $(X, \leq)$ be an ordered set. The {\bf Dushnik-Miller dimension $\dim_{DM} (X)$ of $X$} is defined as
$$ \dim_{DM} (X) := \inf \{ \# A \ \vert \ A \subset P( X \times X ), \ \forall a \in A: a {\rm \ a \ total \ order \ on \ } X, {\rm \ and \ } \leq \, = \bigcap A \} .$$
Dushnik and Miller \cite{bDeM} showed that $\dim_{DM} (X)$ is well-defined and $> - \infty$, as there are subsets $A$ satisfying the conditions, and Ore \cite{oO} showed that $\dim_{DM} (X)$ is the smallest cardinal $\# I$ such that there is an order embedding $f$ from $X$ into a Cartesian product $(\Pi_{i \in I} C_i, \leq_I)$, where the $C_i$ are totally ordered sets and $\leq_I$ is the product order\footnote{For the nontrivial direction, use the well-ordering theorem to choose a well-ordering $\leq$ on $I$ and consider, for each $i \in I$, the bijection $B_i$ commuting $i$ and the minimal element $0$ and leaving everything else unchanged; then let $\leq_j$ be the lexicographical order on $\Pi_{i \in I } C_i$ w.r.t. the total order $B_j^* \leq$ on $I$, which is again total, and the product order is the intersection $\bigcap_{j \in I} \leq_j$.}.
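To illustrate Ore's characterization with a classical worked example (added here for the reader's convenience): the product order on $\mathbb{R}^2$ is the intersection of the two lexicographical orders
$$ (x_1,x_2) \leq_{1} (y_1,y_2) : \Leftrightarrow x_1 < y_1 \ \lor \ (x_1 = y_1 \land x_2 \leq y_2), $$
$$ (x_1,x_2) \leq_{2} (y_1,y_2) : \Leftrightarrow x_2 < y_2 \ \lor \ (x_2 = y_2 \land x_1 \leq y_1). $$
Indeed, $x \leq_1 y$ and $x \leq_2 y$ together imply $x_1 \leq y_1$ and $x_2 \leq y_2$, and conversely. As both $\leq_1$ and $\leq_2$ are total, this shows $\dim_{DM}(\mathbb{R}^2_{prod}) \leq 2$, and equality holds since the product order is not total (consider $(0,1)$ and $(1,0)$), so a single total order cannot suffice.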
It can easily be seen that $\mathbb{R}^n_{prod}$, which denotes $\mathbb{R}^n$ with the product order, has Dushnik-Miller dimension $n$; in particular $\dim_{DM}(\mathbb{R}^{1,1}) = 2$. Moreover, the power set $P(S)$ of a set $S$, equipped with the order $\leq \, = \, \subset$, has Dushnik-Miller dimension $\# S$.

The authors of \cite{HBG} defined two other notions of dimension for ordered spaces. To give an account of those, let us first define some other notions. A {\bf utility on $X$} is a map $f: X \rightarrow \mathbb{R}^I$ for a set $I$ such that $\forall x ,y \in X: x \leq y \Leftrightarrow f_i(x) \leq f_i(y) \ \forall i \in I$. The cardinality $\# I$ of $I$ is called the {\bf size of $f$}. In \cite{HBG}, the {\bf Debreu dimension $\dim_D(X)$ of $X$} is defined by
$$ \dim_{D} (X) := \inf \{ \# A \ \vert \ A \subset P( X \times X ), \ \forall a \in A: a {\rm \ a \ total \ Debreu \ separable \ order \ on \ } X, {\rm \ and \ } \leq \, = \bigcap A \} ,$$
\noindent where an order on $X$ is called {\bf Debreu separable} iff there is a countable subset $N$ of $X$ such that $\forall x , y \in X: x \leq y \Rightarrow (\exists n \in N: x \leq n \leq y)$. Debreu \cite{gD} proved that a total order on $X$ is Debreu separable iff there is a utility of size $1$ on $X$.

\bigskip

Furthermore, \cite{HBG} defines the {\bf Hack-Braun-Gottwald (HBG) dimension} $\dim_{\rm HBG}$ {\bf of $X$} as
$$ \dim_{\rm HBG} (X) := \inf \{ \# I \ \vert \ \exists f: X \rightarrow \mathbb{R}^I {\rm \ utility \ on \ } X \} ,$$
in other words, the HBG dimension is the minimal size of a utility on $X$. Ore's description of $\dim_{DM}$ shows that $\dim_{\rm HBG} \geq \dim_{DM}$, as the totally ordered spaces $C_i$ are restricted to be order equivalent to $\mathbb{R}$ in the HBG dimension. Of course, by definition $\dim_{\rm HBG} \leq \dim_D$, as the chains in the HBG dimension do not have to define total orders on $X$.
That is, all in all we have $\dim_{DM} \leq \dim_{\rm HBG} \leq \dim_D$. As shown in \cite{HBG}, the lexicographical $\mathbb{R}^2$ provides an example of Dushnik-Miller dimension $1$ and uncountable HBG dimension.\footnote{Indeed, $X:= \mathbb{R}^2_{{\rm lex}}$ does not admit a utility $f$ of size $1$, as $f$ would map each $\{ x \} \times \mathbb{R}$ to a proper interval $I_x$, and the uncountably many $I_x$ would be pairwise disjoint. Now, if there were a utility $\{ f_n \ \vert \ n \in \mathbb{N} \}$ of countable size on $X$, then we could define $F:= \sum_{n=1}^\infty 2^{-n} \arctan \circ f_n : X \rightarrow \mathbb{R}$, which is strictly increasing: for $x \leq y$ we have $f_n(x) \leq f_n(y) \ \forall n \in \mathbb{N}$; thus, if not $F(x) < F(y)$, we get $f_n(x) = f_n (y) \ \forall n \in \mathbb{N}$, implying $x=y$. And finally, each strictly increasing function on a totally ordered set is a utility.} Furthermore, in that article, the authors give an example of an ordered set $X$ with $\dim_{\rm HBG} (X) = 2$ and $\dim_D(X) = \aleph_0$.

\bigskip

In any case, we never leave the realm of sets: in the Dushnik-Miller and Debreu cases, by definition, $\# A \leq \# P(P(X))$, whereas in the case of the HBG dimension w.l.o.g. $\# I \leq \# P(X)$.

\bigskip

The Dushnik-Miller dimension can be localized, e.g. by infimizing over order intervals containing a given point $p$; it is then denoted by $\dim_{DM,l} (M,p)$. By definition, each localized dimension $\dim_{\cdot , l}$ is less than or equal to $\dim_{\cdot}$.
Still, the Dushnik-Miller dimension, the HBG dimension and the Debreu dimension, whether localized or not, all fail to recover the manifold dimension even in simple cases, as on Minkowski space of dimension $\geq 3$ all of them take at least the value $\aleph_0$:

\begin{Theorem}
$\forall n \in \mathbb{N} \setminus \{ 0,1\} : \dim_{DM} (\mathbb{R}^{1,n}) = \dim_{\rm HBG} (\mathbb{R}^{1,n}) = \aleph_0$,
$\forall n \in \mathbb{N} \setminus \{ 0,1\} \ \forall p,q \in \mathbb{R}^{1,n} : \dim_{DM} (J(p,q)) = \dim_{\rm HBG} (J(p,q)) = \aleph_0$, and
$\dim_{D} (\mathbb{R}^{1,n}) , \dim_{D} (J(p,q)) \geq \aleph_0$.
\end{Theorem}

\noindent{\bf Proof.} For $\dim_{DM} (\mathbb{R}^{1,n}) \geq \aleph_0$, we show that the dimension cannot be finite, as then there would be $p \in \partial J^+(0)$ such that $(\partial J^+ (0)) \cap (\partial J^+ (p))$ is not totally ordered. More precisely, for $X=\mathbb{R}^{1,n}$, the images of $X$ in all chains $C_i$ of a minimal Ore product have to be order equivalent to real intervals: Each two points in $X$ can be joined via a piecewise causal curve $c$ (this is even true in any connected Lorentzian manifold). Now $f_i \circ c$ is piecewise monotonous and continuous in the interval topology on $X$ (which just coincides with the usual product topology) and thus on $C_i$. Thus every chain is connected (complete). It is furthermore separable: the image $f_i(S)$ is dense for each separator $S$ in $X$. Thus $f_i(X)$ is order-isomorphic to a real interval. Now assume the dimension $m$ is finite. Then $f$ is an injective continuous map from $\mathbb{R}^{n+1}$ to $\mathbb{R}^m$. Take a point $p \in \partial J^+(0) \setminus \{ 0\}$ such that $d:= \# \{ i \in \mathbb{N}_m^* \ \vert \ f_i(p) \neq 0 \}\geq n >1$; such a point exists due to invariance of the domain. So let $F$ be a hyperplane of dimension $d \in [n, m)$ containing an open neighborhood $U$ of $f(p)$. Then $D_U := f(\partial J^+(0) \cap U) = f(U) \cap F$.
The totally ordered subsets in $\mathbb{R}^m$, and thus in $F$, are just the rays from $0$. But $f(D_U)$ cannot be contained in the ray through $f(p)$, again by invariance of the domain. On the other hand, $\partial J^+(p) \subset \partial J^+ (0)$ by the push-up property. Thus $\partial J^+ (p) \cap \partial J^+ (0) \cap U$ is not totally ordered, a contradiction. Then use the classical fact (only using the countable axiom of choice) that the cardinality of $\mathbb{N}$ is smaller than the cardinality of every other infinite set. For $\dim_{\rm HBG} (\mathbb{R}^{1,n}) \leq \aleph_0$, we easily show that $\leq \, = \bigcap_{v \in \mathbb{S}^{n-1} \cap \mathbb{Q}^{n}} (e_0 + v )^{-1} (\leq_\mathbb{R})$. $\blacksquare$

\bigskip

In particular, this means that there is no order embedding of $\mathbb{R}^{1,n}$ into $\mathbb{R}^m_{prod}$ for any $n \geq 2$ and $m \in \mathbb{N}$. The $\mathbb{R}^m_{prod}$ (which satisfy $\mathbb{R}^m_{prod} \times \mathbb{R}^n_{prod} = \mathbb{R}^{m+n}_{prod}$) arise in a very elementary manner as the causal structures of Lorentzian length spaces, as follows. Let us describe causal cylinders over metric spaces, which are g.h. Lorentzian pre-length spaces generalizing standard static spacetimes: Let $(M,d)$ be a metric space. Then the {\bf causal cylinder $C(M,d)$ over $M$} is defined as $X:= \mathbb{R}\times M$ with the Lorentzian distance $\sigma_d : X \times X \rightarrow \mathbb{R}$ given by
$$ \sigma_d ((t,p), (s,q)) := \sqrt{(s-t)^2 - d^2(p,q)} \ {\rm \ for \ \ } s - t \geq d(p,q)$$
and $- \infty$ otherwise. If $(M,d)$ is a length space, then $C(M,d)$ is a Lorentzian length space. (This definition coincides with the notion of generalized cone as in \cite{sAmGmKcS} specialized to the case $f =1$.) For a real interval $I$ we denote the corresponding subset $I \times M$ of $C(M,d)$ by $C_I(M,d)$.
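As a quick sanity check in the lowest-dimensional case (a short verification added for illustration): in $C(\mathbb{R}, |\cdot|)$ we have $(t,p) \leq (s,q)$ iff $s-t \geq |q-p|$, i.e., iff both
$$ s - q \geq t - p \qquad {\rm and} \qquad s + q \geq t + p, $$
so that the null coordinates $(t,x) \mapsto (t+x, t-x)$ define an order isomorphism onto $\mathbb{R}^2_{prod}$.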
Then we can identify $\mathbb{R}^n_{prod} = C ((\mathbb{R}^{n-1}, d_{\ell^\infty}))$, another way to show that $\mathbb{R}^n_{prod}$ is a Lorentzian length space. Now let us try to define another notion of dimension. Of course one could consider the Lorentzian Hausdorff dimension as defined\footnote{During the preparation of this article, the author learned from Sumati Surya that this notion actually appears under the name of "midpoint scaling dimension" in works of Sorkin, cf. \cite{rS}, p. 11, however without a precise statement about it.} in \cite{rMcS} or \cite{oM:LorFunct}, but let us explore the possibility to assign a dimension to ordered spaces without a specified measure.

\bigskip

Let $(X, \leq)$ be an ordered space and $p \in X$. First, we can define a chronological relation $\ll$ on $X$ by $\beta$ or $\gamma$ as described below: Firstly, following a suggestion by Minguzzi and S\'anchez \cite{eMmS}, we define a binary relation $\beta (\leq)$ via
$$(x,y) \in \beta (\leq) : \Leftrightarrow \big( x \leq y \land ( \exists u,v \in X: x < u < v < y \land J^+(u) \cap J^-(v) {\rm \ not \ totally \ ordered} ) \big).$$
Secondly, we define
$$ p \, (\gamma(\leq)) \, q : \Leftrightarrow (\forall a > p \ \exists\, a > b > p : b \leq q) \land (\forall c < q \ \exists\, c < d < q : p \leq d).$$
Both functors recover the chronological relation on causally simple spacetimes.
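To make the relation $\beta(\leq)$ concrete, here is a minimal sketch (my own illustration, not from the cited sources) evaluating it on a finite product order, where it behaves as expected: wide diamonds contain incomparable pairs and are chronologically related, while chain edges are not.

```python
from itertools import product

def leq(x, y):
    """Product (causal) order on Z^2: componentwise comparison."""
    return x[0] <= y[0] and x[1] <= y[1]

def interval(P, u, v):
    """Causal diamond J(u, v) = J^+(u) ∩ J^-(v) inside the finite order P."""
    return [w for w in P if leq(u, w) and leq(w, v)]

def totally_ordered(A):
    return all(leq(a, b) or leq(b, a) for a in A for b in A)

def beta(P, x, y):
    """Minguzzi-Sanchez style chronology: x << y iff x <= y and some
    subinterval J(u, v) with x < u < v < y is not totally ordered."""
    lt = lambda a, b: leq(a, b) and a != b
    return leq(x, y) and any(
        lt(x, u) and lt(u, v) and lt(v, y) and not totally_ordered(interval(P, u, v))
        for u in P for v in P)

P = list(product(range(5), range(5)))   # a 5x5 causal grid
assert beta(P, (0, 0), (4, 4))          # wide diamonds contain incomparable pairs
assert not beta(P, (0, 0), (4, 0))      # a chain edge: every subinterval is a chain
```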
For $p \in X$ we define the {\bf catcher dimension $\dim_c(X,p)$ of $X$ at $p$} by
$$\dim_{c} (X,p) := \inf \{ \# A \ \big\vert \ A \subset X \setminus J(p) \ \land \ J^-(J^+(A)) = X = J^+(J^-(A)) \},$$
and its localized version, the {\bf local catcher dimension $\dim_{c,l}(X,p)$ of $X$ at $p$}, by
$$ \dim_{c,l} (X,p) := \limsup_{q^\pm \rightarrow p} \{ \dim_c(J(q^-,q^+),p) \vert p \in I(q^-,q^+) \}.$$
Another definition of dimension is the following: Once we have a chronological relation, we can apply the functor $\tau_+$ as in \cite{oM:Compl}, \cite{oM:LorFunct} or else the Alexandrov (chronological interval) topology to induce a natural topology on $X$. The {\bf local compactness dimension $\dim_{k,l} (X,p)$ of $X$ at $p$} is defined as follows: For $p \in X$ and an open neighborhood $U$ of $p$ we define
$$\dim_k(X,p,U) := \inf \{ \# A \vert A \subset U \setminus J(p) \land (J(p) \setminus I(A)) {\rm \ compact} \}, {\rm \ and} $$
$$ \dim_{k,l} (X,p) := \limsup_{U \rightarrow \{p\}} \{ \dim_k(X,p,U) \vert U {\rm \ open \ neighborhood \ of \ } p \}.$$
\noindent From the above we obtain:

\begin{Theorem}
$\dim_c = \dim_k$ and $\dim_{c,l} = \dim_{k,l}$ on causally simple almost Lorentzian length spaces.
\end{Theorem}

\noindent{\bf Proof.} The catcher condition on some $A$ implies precompactness of the connected component of $J(q^-, q^+) \setminus J(A)$ containing $p$ in the interval topology and is also equivalent to precompactness of $J(p, J(p^-, p^+)) \setminus J(A)$. The converse statement follows easily from causal continuity. $\blacksquare$

\bigskip

Whereas by general compactness arguments the unlocalized dimensions above are stable under Lorentzian Gromov-Hausdorff limits, the same is not true for the localized dimensions.
\bigskip

Another idea to define dimension is the {\bf ml dimension} $\dim_m$, defined as the minimal monoidal local dimension with $\dim_m (\mathbb{R}) = 1$, i.e., for $N_p$ being the neighborhood system of $p$,
$$\dim_m (X,p) := \sup \{\# I \vert \exists V \in N_p \ \exists U \subset \mathbb{R}^I {\rm \ open \ } \exists f: U \rightarrow V {\rm \ injective \ increasing} \}.$$
The ml dimension is not stable under Gromov-Hausdorff limits. It is, however, stable if we require uniform sizes of the neighborhoods $V$ along the sequence. Here is a last notion of dimension, taken from a preprint of Stoica \cite{cS}: We define the horismotic relations $E^\pm$ as usual by $E^\pm := J^\pm \setminus I^\pm$. Then the {\bf horismotic} or {\bf Stoica dimension $\dim_S$ of $X$ at $p$} is defined by
$$ \dim_S (X,p) := \inf \{ \# A \vert A \subset X \land \bigcap_{a \in A} E^\pm(a) = \{p \} \}.$$
The horismotic dimension, as defined via subsets of empty interior, is not stable under Lorentzian Gromov-Hausdorff limits. This can easily be seen by looking at its metric counterpart: By writing down the causal cones explicitly, we immediately see that a standard static Lorentzian length space $\mathbb{R} \times S$ (where $S$ is a complete length space) has dimension $n+1$ if and only if $\dim_t (S) = n$, where $\dim_t$ is the {\bf triangulation dimension of $S$} defined by
$$ \dim_t (M,p) := \inf \{ \# A \vert A \subset M \land \forall q \in M : \big( d(q,a) = d(p,a) \ \forall a \in A \big) \Rightarrow p = q \},$$
and it is easy to write down examples of GH-convergent sequences where the dimension drops in the limit.
In contrast to Dushnik-Miller, HBG, or Debreu dimension, the last four dimensions {\em do} recover the manifold dimension (however, as is to be expected, only those that are GH-unstable):

\begin{Theorem}
Each $m$-dimensional spacetime satisfies $\dim_{c,l}(M,p) = \dim_m (M,p) = \dim_{S} (M,p) = m \ \forall p \in M$.
\end{Theorem}

\noindent{\bf Proof.} By considering small normal neighborhoods and the openness of the conditions, the statement is reduced to the case of Minkowski space. Let us first consider the catcher dimension. Let $x_0$ be the $0$-th coordinate function on Minkowski space and let $S_a := x_0^{-1} (\{ a\})$. We consider $J^+(0)$. For $p_i \in S_0 \setminus \{ 0\} \subset \mathbb{R}^{1,n} \setminus J^+(0)$, $J^+(p_i) \cap S_1$ is a disc of radius $1$. But indeed the $1$-ball around a standard $n$-simplex $S$ of diameter $1$ with midpoint $0$ contains the unit ball $B$. This follows from the fact that the union of the half-spaces
$$ (p_i^\perp)^+ := \{ x \in \mathbb{R}^n : \langle x, p_i \rangle > 0 \} = \{ x \in \mathbb{R}^n : d(x, p_i) < d(x, -p_i) \}$$
is all of $\mathbb{R}^n$, as every point in the interior of $S$ is a positive linear combination of the vertices (in barycentric coordinates) and a union of half-spaces contains with a vector $v$ also $\mathbb{R}^+ v$. This also holds for perturbations of $S$: there is a $\delta > 0$ such that the $1$-ball around each $\tilde{S}$ with $d_H(S, \tilde{S}) < \delta$ contains $B$. Moreover, the question is scaling invariant. The other direction of the dimension estimate is as follows: Given $k < n$ points $q_1, ..., q_k$ in $J(q^-,q^+) \setminus J(p)$, then for their orthogonal projections $v_1, ..., v_k$ onto $e_0^\perp$ there is a vector $w$ at $p$ with $\langle w, v_j \rangle > 0$ for all $j \in \mathbb{N}_k^*$.
Small null segments with spatial part pointing in the direction of $w$ stay outside of each of the lightcones from the $q_j$, which concludes the proof for the catcher (or compactness) dimension. For the ml dimension, look for a product cone fitting with its closure into the interior of the Euclidean cone at $0$ and extend a bit by continuity. For the horismotic dimension, for $q_i$ in the injectivity radius neighborhood $U$ of $p$, $f_i := \sigma^2 (q_i, \cdot)$ is differentiable in $p$ (however not $C^1$) if $q_i \in \partial J^+(p)$, and then ${\rm grad}_p f_i = v_i := c_i'(0)$ where $c_i$ is the unique null geodesic in $U$ with $c_i(0) = p$, $c_i(1) = q_i$. Therefore, for $n = \dim (M)$ points in general position, those vectors span the whole space, such that $p$ is the unique point in the intersection of all horismos. $\blacksquare$

\bigskip

\noindent{\bf Remark.} The result should be compared with the "catcher theorem" from \cite{oM:Hor}: Let $(M,g)$ be a spacetime with noncompact Cauchy surfaces. Then from any point $p \in M$ and any point $q \in I^+(p)$ there is a continuously inextendible future timelike curve $c$ from $p$ not intersecting $J^+(q)$. This is even true if we replace $q$ with a compact subset $Q \subset J^+(p)$, via the same proof.

\bigskip

Here is a warning example about the unlocalized catcher dimension: Let $n \in \mathbb{N} \setminus \{0\}$ and $X := \mathbb{R} \times \mathbb{R} \times \mathbb{S}^n$ with the g.h. Lorentzian metric $-dt^2 + ds^2 + g_{\mathbb{S}^n}$. Then its catcher dimension is $2$ and not equal to its manifold dimension $2+n$.

\bigskip

As one may expect, dimension is key to embedding problems. It is an obstruction for order embeddings: the dimension of the source serves as a lower bound for the dimension of the target.
This obstruction holds even for discrete ordered sets, which is interesting, because finding causal (i.e., order) embeddings and monotonous embeddings of causal sets into, e.g., Minkowski spacetimes is one of the classical problems in causal set theory. First, it is straightforward to see that not every discrete (w.r.t. the interval topology) ordered set of compactness dimension $n+1$ admits a monotonous injective map into $\mathbb{R}^{1,n}$ --- just consider discrete subsets of two-dimensional de Sitter space containing two antipodal points whose futures and pasts are disjoint, or an Einstein universe, where the complement of the time cone of every point is compact.

\begin{Theorem}
Let $X,Y$ be ordered sets with $\dim_c (X) \geq m$ and let $\dim_c (Y) < m$ (resp. $\dim_{c,l} (Y,p) < m$ for all $p \in Y$). Then there is no injective increasing (resp. continuous injective increasing) map $f: X \rightarrow Y$.
\end{Theorem}

\noindent{\bf Proof.} For a subset $A$ as in the definition of the catcher dimension, its preimage $f^{-1}(A)$ has the same cardinality and satisfies the same conditions w.r.t. $p' := f^{-1}(p)$. $\blacksquare$

\section{Metrics on Cauchy surfaces and collapse}

We first establish a link between the convergence of causal cylinders and the convergence of their bases:

\begin{Theorem} \label{FromCentersToCylinders}
Let $(M_n, d_n) \rightarrow^{GH}_{n \rightarrow \infty} (M_\infty, d_{\infty})$ be a convergent sequence of compact metric spaces. Let $I \subset \mathbb{R}$ be a bounded interval and let $X_n := C_I (M_n)$. Then $X_n \rightarrow_{n \rightarrow \infty} X_\infty$ in $d^-_{GH}$ and $d^\times_{GH}$.
\end{Theorem}

\noindent{\bf Proof.} Let $n \mapsto \big( C_n : M_n \rightarrow M_\infty \big)$ be a sequence of correspondences with ${\rm dist}(C_n) \rightarrow_{n \rightarrow \infty} 0$.
Then for $\widehat{C}_n := \mathrm{Id}_\mathbb{R} \times C_n = \{ ((r,m), (r,s)) \vert (m,s) \in C_n \}$ we calculate, for large $n$,
$$ {\rm dist} (\widehat{C}_n) = \sup \{ \vert \sqrt{(t-s)^2 - d_n^2 (p,q)} - \sqrt{(t-s)^2 - d_\infty^2 (p,q)} \vert : (p,q) \in C_n, \ t,s \in I \} \rightarrow_{n \rightarrow \infty} 0 $$
(as the case $s - t \leq d_{\infty} (p,q)$ has a zero contribution and the condition $s - t > d_{\infty} (p,q)$ is open). This concludes the proof for $d^-_{GH}$. For $d^{\times}_{GH}$ we first enlarge $I$ to an $\varepsilon$-neighborhood $I_\varepsilon$, show $d^-_{GH}$-convergence of $I_\varepsilon \times M_n$ and then apply Theorem 4 of \cite{oM:GHLLS}. $\blacksquare$

\bigskip

The (local at $p$) compactness dimension on a cylinder $\mathbb{R} \times C$ satisfies
\begin{eqnarray}
\label{LebesgueToCompactness}
\dim_c (\mathbb{R} \times C) = \dim_M (C) + 1
\end{eqnarray}
\noindent if Minkowski's (local at $x := {\rm pr}_2(p)$) covering dimension $\dim_M(C,x)$ of $C$ exists, defined as
$$ \dim_M (C,x) := \lim_{\delta \rightarrow 0} \log_{1/\delta} (N(C,\delta)) $$
where $N(C, \delta)$ is the minimal number of $\delta$-balls covering $C$, resp. its local analogon. Equation \ref{LebesgueToCompactness} is easy to see: For an open neighborhood $U$ of $p$ we get that $U_0 := U \cap (\{ {\rm pr}_1 (p)\} \times C)$ is an open neighborhood of ${\rm pr}_2 (p)$. If we can cover every $U_0 \setminus \{ p \}$ by $n$ balls of radius $\delta$, the compactness dimension of $[- \delta; \delta] \times C$ is less or equal $n$, and $\dim_{H} (\mathbb{R} \times C) = \dim_H(C) + 1$.

\bigskip

From this and Th. \ref{FromCentersToCylinders} we conclude that the dimension of a Gromov-Hausdorff limit of $n$-dimensional spacetimes can be strictly less than $n$, even if all Cauchy surfaces have curvature bounded below: We just take a collapsing sequence and form the product of each member with the unit interval.
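The covering-number definition of $\dim_M$ can be estimated numerically. The sketch below is my own illustration (it uses cells of a $\delta$-grid instead of $\delta$-balls, which changes $N(C,\delta)$ only by a bounded factor and hence not the limit); it recovers $\dim_M = 2$ for the unit square:

```python
import math

def box_count(points, delta):
    """Number of delta-grid cells needed to cover a finite sample."""
    return len({tuple(math.floor(c / delta) for c in p) for p in points})

def minkowski_dim_estimate(points, delta):
    """Estimate dim_M = log N(C, delta) / log(1/delta) at a fixed scale delta."""
    return math.log(box_count(points, delta)) / math.log(1 / delta)

# dense sample of the unit square C = [0,1]^2; expected dim_M = 2
m = 400
square = [(i / m, j / m) for i in range(m) for j in range(m)]
est = minkowski_dim_estimate(square, delta=0.05)
assert abs(est - 2.0) < 0.1   # 400 cells at scale 1/20: log(400)/log(20) = 2
```

In practice one would fit the slope of $\log N(C,\delta)$ against $\log(1/\delta)$ over several scales rather than evaluate at a single $\delta$.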
Without a curvature condition, the dimension of a Gromov-Hausdorff limit of $n$-dimensional spacetimes can even be strictly larger than $n$: We can GH-approximate any compact manifold by compact surfaces of increasing genus, and then we take the sequence of the products with the unit interval. Thus, this kind of collapse effect can occur in Lorentzian Gromov-Hausdorff convergence.

\bigskip

Fixing $p \in [1; \infty)$, we define a metric on each temporally compact Lorentzian length space $X$ by
$$ \check D_p (x,y) := \vert \sigma_x^p - \sigma_y^p \vert_\infty $$
\noindent for $x,y \in X$. We find (for geodesicness adapt Prop. 2.5.19 and 2.5.22 from \cite{BBI})

\begin{Theorem}
If $X$ is a connected g.h. manifold-with-boundary and $p \geq 2$ then $(X, D_p := \lambda (\check D_p))$ is a connected geodesic length space. $\blacksquare$
\end{Theorem}

A small calculation shows that the lengthification $D_p$ of $\check D_p$ on the strip $[-a;a] \times \mathbb{R}^n \subset \mathbb{R}^{1,n}$ equals the Euclidean metric w.r.t. the given coordinates. We call $\check D_{p}$ the {\bf Noldus$^p$-metric} and $D_p$ the {\bf Noldus$^p$-length metric}.

\begin{Definition}
Let $t$ be a real function on a Lorentzian length space $X$ (whose causal relation is denoted by $J$) carrying a metric $D$.
\begin{enumerate}
\item $t$ is called {\bf rushing} iff $t(y) - t(x) \geq \sigma (x,y)$ for all $(x,y) \in J$.
\item $t$ is called {\bf $D$-coercive} iff $t(y) - t(x) \geq D(x,y)$ for all $(x,y) \in J$.
\item $t$ is called {\bf generalized Cauchy} iff there are $A_-, A_+ \in \mathbb{R} \cup \{ \pm \infty\}$ such that for every causal curve $c: (a_-;a_+) \rightarrow X$ not continuously extendible to a larger open interval we have $t(c(s)) \rightarrow_{s \rightarrow a_\pm} A_\pm$.
\end{enumerate}
\end{Definition}

{\bf Remark.} The notion of "rushing" has been defined by E.
Minguzzi in \cite{eM}. Obviously, any rushing function is a time function. As $\sigma (x,y) \leq D_{p} (x,y)$ for all $(x,y) \in J$ by the conditional inverse triangle inequality, any $D_{p}$-coercive time function is rushing. Obvious examples of $D_{p}$-coercive time functions on open subsets are distances $\Delta_S$ (w.r.t. $D_{p}$) from a given Cauchy set $S$.

\begin{Theorem} \label{RCF}
Let $X$ be a Lorentzian length space, $D \in \{ \check D_p, D_p \}$, and let $S$ be a Cauchy subset of $X$. Then there is a $D$-coercive generalized Cauchy time function $T$ on $X$ with $S = T^{-1} (\{ 0\})$.
\end{Theorem}

{\bf Proof.} We perform the proof only for the case that $I^+(x) \neq \emptyset \neq I^-(x)$ for all $x \in X$ and construct a Cauchy function; by an obvious generalization of the construction one obtains a coercive generalized Cauchy function in the case that there is a past and/or future boundary. The proof is done similarly to the proof of existence of a steep Cauchy function as in \cite{oMmS}. We use the following building blocks:
\begin{enumerate}
\item A Cauchy time function $t$ on $X$, whose existence is ensured by the result in \cite{aBlG},
\item a "fat cone covering": a well-behaved covering of each Cauchy surface by forward cones, defined below,
\item for each $(p,q) \in J$ a continuous function $\tau_{pq}: X \rightarrow [0; \infty)$ supported in $J^+(p)$ and coercive on $J^+(q)$, defined below,
\item the facts that sums of coercive functions and time functions are coercive and that for any continuous function $f$ on $X$, any coercive function $s$ and any compact $K \subset X$ there is $C > 0$ with $f + Cs$ coercive on $K$ (proven as in \cite{oMmS}).
\end{enumerate}
Let $S$, $S'$ be two Cauchy surfaces of $X$ with $S \subset I^+(S')$.
A {\bf fat cone covering of $X$ above $S'$} is a set of tuples of points $\{ (p_i,q_i) \vert i \in I \} \subset I^+ (S') \times I^+ (S')$ with $p_i \ll q_i \ \forall i \in I$ such that $\{ I^+ (q_i) \cap S \vert i \in I \}$ resp. $\{ J^+ (p_i) \cap S \vert i \in I \}$ is a covering resp. a locally finite covering of $S$. The existence of a fat cone covering is proven in \cite{oMmS} using only facts available in the synthetic setting as well, essentially future compactness of $J^-(S)$ and continuity of $J^+$ and $I^+$. For fixed $p \in X$, we define $\tau_{pq} (x) := u (\sigma (p,x)) \cdot \sup \{ D(y,x) \vert y \in J(p,x) \}$ for a continuous increasing $u : [0; \infty) \rightarrow [0; \infty)$ with $u(0) = 0$ and $u ([\sigma(p,q); \infty)) = \{ 1/\sigma (p,q)\}$; then it is easy to see that $\tau_{pq}$ satisfies all the requirements of Item 3 above, recalling $\sigma(p,x) \geq \sigma(p,q) \ \forall x \in J^+(q)$. Now, for two Cauchy subsets $Y,Z$ with $Y \leq Z$, given a fat cone covering of $Z$ above $Y$, denoting $\tau_i := \tau_{p_iq_i}$, we can construct $\check \theta^+_{Y,Z} := \sum_i c_i \tau_i : X \rightarrow \mathbb{R}^+$ for appropriate $c_i \in (0; \infty)$, with support in $I^+(Y)$, a time function on its support and coercive on $J^+(Z)$ with $\check \theta^+_{Y,Z} > 1$ on $J^+(Z)$. Analogously we define by time-duality $\check\theta^-_{Y,Z}$, a time function on its support contained in $I^-(Z)$ and coercive on $J^-(Y)$ with $\check \theta^-_{Y,Z} < -1$ on $J^-(Y)$. Now we define $\theta^+_{Y,Z} := \rho \circ \check\theta^+_{Y,Z}$ for $\rho: \mathbb{R} \rightarrow \mathbb{R}$ continuous increasing with $\rho = 0$ on $(- \infty; 0]$ and $\rho = 1$ on $[1; \infty)$. Then $\theta^+_{Y,Z}$ is a time function on $(\theta^{+}_{Y,Z})^{-1} ((0;1))$ with $\theta^+_{Y,Z} = 0$ on $J^-(Y)$ and $\theta^+_{Y,Z} = 1$ on $J^+(Z)$.
Next we choose Cauchy subsets $S^{--}, S^-, S^+, S^{++}$ with $S^{--} \leq S^- \leq S \leq S^+ \leq S^{++}$ and $I^+(p) \neq \emptyset \neq I^-(q)$ for all $p \in S^{++}$ and all $q \in S^{--}$, and put $\theta^- := \theta^+_{S^{--}, S^-}$, $\theta^+ := \theta^+_{S^+, S^{++}}$ and
$$ \theta := 2\, \frac{(\Delta_S + 1)\, \theta^-}{(\Delta_S + 1) - \theta^+ + 1} - 1. $$
Then $\theta$ is a time function on $J(S^{--}, S^{++})$, coercive on $J(S^-,S^+)$, with $\theta = -1$ on $J^-(S^{--})$ and $\theta = 1$ on $J^+(S^{++})$. Our final definition is
$$ T := \check\theta^-_{S^-,S} + \theta + \check\theta^+_{S, S^+}, $$
which eventually satisfies all the requirements of the theorem. $\blacksquare$

\bigskip

Of course, there is in general no continuous projection onto the level sets of a rushing time function, nor is there, in general, some related product decomposition.

\bigskip

Let $(X,d)$ be a metric space and $S \subset X$. Then the "lengthification of $d$ along $S$", $\lambda_S(d)$, is the induced generalized length metric on $S$ (admitting the value $\infty$). We need semi-continuity of $\lambda_S$:

\begin{Theorem} \label{LengthizationSemicont}
$\lambda_S$ is lower semi-continuous w.r.t. the Gromov-Hausdorff metric on both sides, but in general not upper semi-continuous.
\end{Theorem}

{\bf Proof.} Let us first show pointwise convergence, i.e., we fix two points $x,y$. Let $X_n \rightarrow_{n \rightarrow \infty} X_\infty$ in the Gromov-Hausdorff metric and let $\ell(c) \in (\lambda (d_\infty)(x,y) - \varepsilon; \lambda (d_\infty)(x,y))$.
Then there is a polygonal arc $P$ with break points $P_1, ..., P_N$ with
$$ \sum_{k=1}^{N-1} d_\infty (P_k, P_{k+1}) = \ell(P) \in (\lambda (d_\infty)(x,y) - 2 \varepsilon; \lambda (d_\infty)(x,y)).$$
Furthermore, let $n \in \mathbb{N}$ such that there is a correspondence $C: X_n \rightarrow X_\infty$ with ${\rm dist}(C) < \varepsilon/N$ and let $Q_k \in C^{-1}(P_k)$, $X \in C^{-1}(x)$, $Y \in C^{-1}(y)$. Then $\vert d_\infty(P_k, P_{k+1}) - d_n(Q_k, Q_{k+1}) \vert < \varepsilon/N$ for all $k \in \mathbb{N}_N$ and thus $d_n(X,Y) > d_\infty (x,y) - 3 \varepsilon$. Compactness ensures that this is uniform: Let $N$ be a finite $\varepsilon$-net in $(X_\infty, \lambda (d_\infty))$ of cardinality $N$, say; then we only need to perform the previous step $N(N-1)$ times to show semi-continuity. That $\lambda_S$ is in general not upper semi-continuous can be seen by applying $\lambda_S$ to the quotient metrics of $g_p (x,y) := |x - y|^p$ on $\mathbb{S}^1 = \mathbb{R} / \mathbb{Z}$, obtaining the discrete generalized metric (taking the values $0$ and $\infty$) for $p \in (0;1)$, whereas $g_p \rightarrow_{p \rightarrow 1} g_1$, the standard metric. $\blacksquare$

\bigskip

If we are given a Cauchy set in a spacetime which we consider as an almost Lorentzian length space, the Noldus$^p$ metric on it is in general very different from the induced Riemannian metric. But there is a metric that coincides with the one induced from the Riemannian one in the case of a spacetime: we define the {\bf diamond distance between $x$ and $y$}, $\check{\delta}: X \times X \rightarrow \mathbb{R} \cup \{ \infty \}$, by $\check{\delta} (x,y) := \inf \{ \sigma (p,q) \vert x,y \in J(p,q) \}$.
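For intuition, the diamond distance can be minimized numerically in $1+1$ Minkowski space. In the sketch below (my own illustration, with my own naming), for two points on the Cauchy surface $\{t = 0\}$ the infimum is attained at $p = (-1,0)$, $q = (1,0)$ and equals their Euclidean distance, consistent with the behavior on spacelike hypersurfaces discussed next.

```python
import math

def causal(x, y):
    """x <= y in R^{1,1}: y lies in the causal future of x = (t, z)."""
    return y[0] - x[0] >= abs(y[1] - x[1])

def tau(x, y):
    """Time separation (Lorentzian distance) in R^{1,1}; -inf if not causal."""
    return math.sqrt((y[0] - x[0]) ** 2 - (y[1] - x[1]) ** 2) if causal(x, y) else -math.inf

def diamond_distance(x, y, grid):
    """delta(x, y) = inf { tau(p, q) : x, y in J(p, q) }, minimized over a grid."""
    best = math.inf
    for p in grid:
        if not (causal(p, x) and causal(p, y)):
            continue
        for q in grid:
            if causal(x, q) and causal(y, q):
                best = min(best, tau(p, q))
    return best

step = 0.25
grid = [(i * step, j * step) for i in range(-8, 9) for j in range(-8, 9)]
x, y = (0.0, -1.0), (0.0, 1.0)
d = diamond_distance(x, y, grid)
assert abs(d - 2.0) < 1e-9   # optimum at p = (-1, 0), q = (1, 0): tau = 2
```

That $2$ is indeed optimal follows from $\tau(p,q)^2 \geq (2 + |z_p| + |z_q|)^2 - (|z_p| + |z_q|)^2 \geq 4$ for any diamond containing both points.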
For $(x,y) \in J$ we get $\check{\delta}(x,y) = \sigma (x,y)$ (we can choose $(p,q) = (x,y)$), but:

\begin{Theorem} \label{FactsOnDelta}
\begin{enumerate}
\item If $X$ is future and past compact, or if $X$ is a spacetime, the restriction $\check{\delta}_S$ of $\check{\delta}$ to any acausal subset induces a length metric space $(S, \delta_{S,p} = \lambda_S (\check{\delta}_{S,p}))$.
\item If $X$ is a spacetime and $S$ is a differentiable spacelike hypersurface, then $\delta_S := \lambda_S(\check{\delta}_S)$ coincides with the metric induced by the induced Riemannian metric on $S$.
\item If $S$ is a Cauchy subset of $X$ then $\delta_{S,p} \leq 2 \lambda_S (\check D_p)$; thus, if $\lambda_S(\check D_p)$ is a metric, so is $\check{\delta}_{S,p}$.
\end{enumerate}
\end{Theorem}

\noindent{\bf Proof.} For Item (1) note that in either case we have compactness of $\partial J^\pm (x) \cap \partial J^\pm (y)$ and that $\sigma(p,q) = 0$ means that $J(p,q)$ is totally ordered. For Item (2) we note that a length space metric $\lambda (d)$ is uniquely determined by the datum of $d$ in arbitrarily small neighborhoods. We consider Fermi coordinates around $S$, in which the metric is $-dt^2 + g_t$, and use local boundedness of $\dot{g}$. For Item (3) we first show that the subset $M_X$ resp. $M_S$ of those points of $X$ resp. of $S$ that are in the image of the interior of the domain of definition of a longest curve is dense in $X$ resp. in $S$ (this is obvious by the fact that the manifold topology is the Alexandrov topology, whose basis consists of causal diamonds, and by geodesic connectedness of globally hyperbolic spacetimes). Now let $y \in M_S$ and choose $q_0 \in \partial J^+(x) \cap I^+(y)$ such that the maximal extension of the longest curve $c_+: y \leadsto q_0$ intersects $\partial J^-(x)$ in a point $p_0$ (this is possible for $x$ close to $y$ by continuity of $J^\pm$).
Then $J(p_0, q_0)$ contains $x,y$, and
$$ \sigma (p_0, q_0) = \ell (c_+) + \ell (c_-) = \sigma (y, q_0) + \sigma (p_0, y) = |\sigma(y, q_0) - \sigma (x, q_0)| + |\sigma (p_0, y) - \sigma (p_0, x)| \leq 2 \check{D}(x,y),$$
thus, for $p \geq 1$, $\check{\delta}_p (x,y) \leq 2^{p} \check{D}_p (x,y)$, which concludes the proof by continuity and denseness. $\blacksquare$

\begin{Theorem}
Let $p \in [1;\infty)$, let $X$ be an almost Lorentzian length space and let $t$ be a $\check D_{p}$-coercive generalized Cauchy time function on $X$ with $I := t(X)$. Then the map $s \mapsto (t^{-1} (\{ s\}), d_s := \delta_{t^{-1}(s),p})$ is continuous when we apply the Hausdorff distance w.r.t. $\check D_p$ (i) and also for the Gromov-Hausdorff metric w.r.t. the restrictions of $\check D_p$ (ii). It is lower semi-continuous if we consider the Hausdorff metric w.r.t. $D_p = \lambda_X (\check D_p)$ (iii) on the right-hand side, or the Gromov-Hausdorff metric w.r.t. $D_p = \lambda_X(\check D_p)$ (iv), or the one-parameter family of metrics $\{ \lambda_{S_a}(\check D_p) \vert a \in I \}$ (v).
\end{Theorem}

\noindent{\bf Proof.} (i) is obvious due to continuity of $\sigma$ and compactness of the Cauchy surfaces, and this implies (ii). Item (i) implies with Th. \ref{LengthizationSemicont} Item (iii), which in turn implies (iv). Item (ii) implies with Th. \ref{LengthizationSemicont} Item (v). $\blacksquare$

\bigskip

If $c$ is an acausal curve then the $\delta_{p}$-length of the image of $c$ is the Lorentzian analogue of the $p$-packing measure, and it is well-known (cf. \cite{Falconer}) that the packing dimension is greater or equal to the Hausdorff dimension. Furthermore, we have a nice dimensional relationship:

\begin{Theorem}
Let an almost Lorentzian length space $(X, \sigma)$ have constant Lorentzian Hausdorff dimension $n$ and let $S$ be a Cauchy set of $X$. Then
\begin{enumerate}
\item $\dim_H (S, d_{S,p}) \leq n$.
\item $S$ can be approximated in the Hausdorff metric w.r.t. $D_{p}$ by Cauchy sets of Hausdorff dimension $\geq n - 1$.
\item If there is $K > 0$ with $\check D_{p} (x,y) \leq K \cdot \sigma (x,y)$ for all $(x,y) \in J$, then $\dim_{H} (S, d_S) \leq n-1$. In particular, this is satisfied if $X$ is the limit of Cauchy slabs with uniform bounds on timelike sectional curvature and timelike diameter.
\end{enumerate}
\end{Theorem}

\noindent{\bf Proof.} For $d \in [0; \infty)$, let $\alpha(d) := \frac{\pi^{d/2}}{\Gamma (d/2+1)}$ (the volume of the $d$-dimensional unit ball for $d \in \mathbb{N}^*$) and $\omega(d) := \frac{\pi^{\frac{d-1}{2}}}{d \cdot \Gamma (\frac{d+1}{2}) \cdot 2^{d-1}}$. Recall
$$\lambda_N^+ (U) := \alpha (N) \cdot \sum_i \Big(\frac{{\rm diam}(U_i)}{2}\Big)^N $$
\noindent for $U \in CC^+_\delta (A)$ and $\lambda_N^- (U) := \omega (N) \cdot \sum_i (\sigma (p_i,q_i))^N$ for $U \in CC^-_\delta (A)$ (where $CC^+_\delta (A)$ is the set of open coverings of $A$ with open sets of diameter $\leq \delta$; for $CC_\delta^-$ we additionally require each set to be a causal diamond $J(p_i, q_i)$). On the other hand, we have
$$\mu^\pm_{N, \delta} (A) := \inf \{ \lambda_N^\pm (U) \vert U \in CC^\pm_\delta (A) \} $$
\noindent and $\mu_N^\pm (A) := \lim_{\delta \rightarrow 0} \mu^\pm_{N, \delta} (A)$. Then $\dim_H(A) = \sup \{ r \in [0; \infty) \vert \mu^+_r (A) = \infty \}$, and analogously for the Lorentzian Hausdorff dimension with $\mu^-$. Now each $U \in CC^-_\delta (A)$ induces $U \cap \Sigma \in CC^+_\delta (A \cap \Sigma)$; more precisely, for each $p_\pm \in J^\pm (\Sigma)$ we get ${\rm diam}_{\check D_2} (J(p_-, p_+)) \geq \sigma (p_-, p_+) \geq {\rm diam}_d (J(p_-, p_+) \cap \Sigma)$. This settles the first claim. For the second assertion, let $(p,q)(n) \in CC^-_{1/n} (A)$ and $P(s,n) := \# \{ (p_k^{(n)}, q_k^{(n)}) \vert t(p_k^{(n)}) < s < t(q_k^{(n)}) \} / \# (p,q)(n)$. For generic $s$ there are $c, C > 0$ with $cn \leq P(s,n) \leq Cn$.
We consider a $D_p$-coercive Cauchy function $T$ on $X$ with $S = T^{-1} (\{ 0\})$, whose existence has been shown in Th. \ref{RCF}, and $A := T^{-1} ((- \varepsilon; \varepsilon))$. Then we apply Eilenberg's coarea theorem (\cite{BZ}, Ch. 5) to the $1$-Lipschitz function $T$, obtaining that
$$ {\rm vol}_n (A) = \int_I {\rm vol}_{n-1} (T^{-1} (\{ s\}))\, ds.$$
That is, ${\rm vol}_{n-1} (T^{-1} (\{ s\}))$ is finite for almost all $s$. Furthermore, every such $(-\varepsilon; \varepsilon)$ contains $s$ s.t. $T^{-1}(\{s\})$ has open subsets of nonzero (but finite) $(n-1)$-dimensional $D_p$-Hausdorff volume, which consequently are of finite $(n-1)$-dimensional $D_p$-Hausdorff volume as well due to Th. \ref{FactsOnDelta}, Item 3 (however, the $(n-1)$-dimensional $\delta_p$-Hausdorff volume of open subsets of $T^{-1} (\{s\})$ may be zero as long as we do not have an estimate between $\delta_p$ and $D_p$ in the other direction, as in the last item of the present theorem). This settles the second claim. For the last assertion we only have to show that $\mu^+_r (S) = 0$ for all $r > n-1$. Then we take $U_n \in CC_{1/n}^- (X)$ and observe that by the supposed estimates the part $W_n$ of the sum $\Sigma_n$ corresponding to subsets intersecting $S$ nontrivially is in $O(1/n) \cdot \Sigma_n$; thus for $r > n-1$ we have a constant $K > 0$ with $W_n \leq K \cdot \frac{1}{n} \cdot \Sigma_n \leq \alpha (N) \cdot \sum_i \big(\frac{{\rm diam}(U_i)}{2}\big)^{r+1}$, which converges to $0$. $\blacksquare$

\bigskip

Our last question is whether we can find a good replacement for the projection onto Cauchy hypersurfaces and the product decomposition for g.h. spacetimes. Let $S$ be a subset of a Lorentzian length space $X$. Then we call a timelike curve $c: [0;1] \rightarrow X$ starting at $S$ {\bf orthogonal to $S$}, in symbols $c \perp S$, iff $\sigma (c(0), c(1)) = \sigma (S, c(1))$. We can define a relation $r_{ut}: S_u \rightarrow S_t$ by the orthogonal curves on $S_u$, which is obviously right-total.
To the author, it is not clear whether it is also left-total, in other words, whether at every point of $S_u$ an orthogonal curve starts. The graph of the function $x \mapsto \sqrt{|x|}$ provides an example of a Hölder continuous hypersurface $S$ of $\mathbb{R}^2$ containing a cusp $(0,0)$ that is not the initial point of any $S$-minimal geodesic to one of its sides. But at least for Cauchy hypersurfaces in spacetimes with a metric of regularity $C^{1,1}$ and $|u-t|$ small, left-totality is satisfied:

\begin{Theorem}
Let $P$ be a Cauchy surface in a globally hyperbolic spacetime with a metric of regularity $C^{1,1}$. Then each point $q \in P$ is the initial point of a locally $P$-maximizing future and past geodesic.
\end{Theorem}

\noindent{\bf Proof.} Every semi-Riemannian submanifold $P \subset M$ has a normal neighborhood $V$ in $M$, i.e., there is an open neighborhood $U$ of the zero section $Z$ over $P$ s.t. for the normal exponential map $\exp^\perp$ the restriction $E := \exp^\perp \vert_U$ is a diffeomorphism onto $V$ (see e.g. \cite{oN}, Prop. 7.26). In particular, $E^{-1}(P) = Z$, and in local coordinates $\{ x_i \vert i \in \mathbb{N}_n \}$ around $q \in P$, the curve $t \mapsto (t, x_1(q), ..., x_n(q))$ is a geodesic, and $g_{0i}(q) = 0 \ \forall i \in \mathbb{N}_n^*$ (as $d_0 \exp^\perp$ is the canonical identification $T_0T_qM \rightarrow T_qM$). We will need regularity at least $C^1$ in order to make sure that geodesics are defined. Like in the smooth case, where we would argue with the first variational formula, for general $C^1$ metrics we still get that each $P$-maximal geodesic is initially orthogonal to $P$ (\cite{mG}, Cor. 2.5). For $C^{1,1}$ metrics, the connection is of Lipschitz regularity, which implies that geodesics do not split and that locally maximal curves are geodesics \cite{eM:Sprays}.
This, together with the fact that the exponential map is a diffeomorphism, implies that every $q \in P$ is the initial point of a $P$-maximal curve. $\blacksquare$

\bigskip

\begin{Theorem}
Let $t$ be a $D_p$-coercive Cauchy function on a Lorentzian length space $X$. Then the map $a \mapsto (t^{-1}(a), \check{d}_{t^{-1}(a)})$ is continuous in the Gromov-Hausdorff metric, and the map $a \mapsto (t^{-1}(a), d_{t^{-1}(a)})$ is lower semi-continuous in the Gromov-Hausdorff metric.
\end{Theorem}

\noindent{\bf Proof.} Let $\varepsilon > 0$. By compactness of $S_a$, there is $\rho > 0$ such that for every $x \in S_a$ there are $x^\pm \in J^\pm (x)$ with $\sigma (x^-, x^+) > \rho$. Let $M_a := d^{-1} ([0, \rho]) \subset S_a \times S_a$. For all $(x,y) \in M_a$ we choose $p^\pm (x,y)$ with $x,y \in D(x,y) := I(p^-(x,y), p^+(x,y))$ and $\sigma (p^-(x,y), p^+(x,y)) \leq \check{\delta} (x,y) + \varepsilon$. The set $\{ D(x,y) \vert (x,y) \in M_a \}$ is an open covering of $M_a$ and has a finite subcovering $D := \{ D_1, ..., D_N \}$ due to compactness of $M_a$. Then by causal continuity there is a $\delta > 0$ such that $D$ is an open covering of $M_{a+\delta}$. We define a correspondence between $S_a$ and $S_{a+\delta}$ by being contained in a same $D_k$; this correspondence is easily seen to have distortion $< \varepsilon$. $\blacksquare$

\bigskip

\noindent{\bf Remark.} If we want to show global hyperbolicity of a Gromov-Hausdorff limit $X$ of globally hyperbolic Lorentzian length spaces then, by the a priori restriction to compact Lorentzian length spaces, we only have to show causal simplicity (closedness of $J^\pm(x)$ for all $x \in X$), which is automatic by the choice of the topology $\tau_+$ as in \cite{oM:Compl}.

\begin{thebibliography}{99}

\bibitem{sAmGmKcS} Stephanie B.
Alexander, Melanie Graf, Michael Kunzinger, Clemens Sämann: {\em Generalized cones as Lorentzian length spaces: Causality, curvature, and singularity theorems}. arXiv:1909.09575 \bibitem{BBI} Dmitri Burago, Yuri Burago, and Sergei Ivanov: {\em A course in metric geometry}, volume 33 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2001. \bibitem{BZ} Burago, Zalgaller: {\em Geometric Inequalities}. Springer-Verlag (1988) \bibitem{aBlG} Annegret Burtscher, Leonardo García-Heveling: {\em Time functions on Lorentzian length spaces}. arXiv:2108.02693 \bibitem{gD} Gerard Debreu: {\em Representation of a preference ordering by a numerical function.} Decision processes, 3:159--165 (1954). \bibitem{bDeM} Ben Dushnik, E.W. Miller: {\em Partially ordered sets.} American Journal of Mathematics, 63:600--610 (1941) \bibitem{Falconer} Falconer, K.J. (1997): {\em Fractal Geometry - Mathematical Foundations and Applications}. John Wiley. \bibitem{mG} Melanie Graf: {\em Singularity theorems for $C^1$-Lorentzian metrics}. Commun. Math. Phys. 378 (2020), 1417--1450. arXiv:1910.13915 \bibitem{HBG} Pedro Hack, Daniel A. Braun, Sebastian Gottwald: {\em On a geometrical notion of dimension for partially ordered sets}, arXiv:2203.16272v3 \bibitem{mKcS} Michael Kunzinger, Clemens Sämann: {\em Lorentzian length spaces}. Ann. Global Anal. Geom. 54, no. 3, 399--447 (2018). arXiv:1711.08990 \bibitem{aL} Alexander Lytchak: {\em On the geometry of subsets of positive reach.} manuscripta math. 115, 199--205 (2004) \bibitem{aLkN} Alexander Lytchak, Koichi Nagano: {\em Topological regularity of spaces with an upper curvature bound}, JEMS vol. 24, no. 1, 137--165. arXiv:1809.06183 \bibitem{rMcS} Robert McCann, Clemens Sämann: {\em Lorentzian Hausdorff dimensions and measure}.
arXiv:2110.04386 \bibitem{eM:Sprays} Ettore Minguzzi: {\em Convex neighborhoods for Lipschitz connections and sprays}, Monatsh. Math. 177, 569--625 (2015). arXiv:1308.6675 \bibitem{eM} E. Minguzzi: {\em Lorentzian causality theory}. Living Rev. Relativ., 22:3, 2019. https://doi.org/10.1007/s41114-019-0019-x. \bibitem{eMmS} Ettore Minguzzi, Miguel S\'anchez: {\em The causal hierarchy of spacetimes}. In H. Baum and D. Alekseevsky (eds.), Recent developments in pseudo-Riemannian geometry, ESI Lect. Math. Phys., (Eur. Math. Soc. Publ. House, Zurich, 2008), p. 299--358, ISBN 978-3-03719-051-7. arXiv:gr-qc/0609119 \bibitem{oM:Hor} Olaf M\"uller: {\em Horizons}. Advances in Theoretical and Mathematical Physics vol. 19, no. 4, pp. 747--760 (2015). arXiv:1111.4571 \bibitem{oM:LorFin} Olaf M\"uller: {\em Lorentzian Gromov-Hausdorff theory and finiteness results}, General Relativity and Gravitation 54:117 (2022). arXiv:1912.00988 \bibitem{oM:Compl} Olaf M\"uller: {\em Topologies on the future causal completion}, arXiv:1909.03797 \bibitem{oM:LorFunct} Olaf M\"uller: {\em Functors in Lorentzian geometry: Three variations on a theme}. General Relativity and Gravitation, volume 55, Article number 39 (2023). arXiv:2205.01617 \bibitem{oM:GHLLS} Olaf M\"uller: {\em Gromov-Hausdorff distances for Lorentzian length spaces}. arXiv:2209.12736v2 \bibitem{oMmS} Olaf M\"uller, Miguel S\'anchez: {\em Lorentzian manifolds isometrically embeddable in $L^N$}, Trans. Amer. Math. Soc. 363 (2011), 5367--5379. arXiv:0812.4439 \bibitem{kN} Koichi Nagano: {\em A volume convergence theorem for Alexandrov spaces with curvature bounded above}, Mathematische Zeitschrift vol.
241, 127--163 (2002) \bibitem{oN} Barrett O'Neill: {\em Semi-Riemannian Geometry With Applications to Relativity}, Elsevier (1983) \bibitem{oO} Oystein Ore: {\em Theory of graphs}, American Mathematical Soc. vol. 38 (1987) \bibitem{rS} R.D. Sorkin: {\em Causal Sets: Discrete Gravity (Notes for the Valdivia Summer School)}, in: Proceedings of the Valdivia Summer School, edited by A. Gomberoff and D. Marolf (2003). arXiv:0309009 \bibitem{cS} O.C. Stoica: {\em Spacetime causal structure and dimension from horismotic relation}. arXiv:1504.03265v2 \end{thebibliography} \end{document}
\begin{document} \begin{center} {\bf \large Tests of Non-Equivalence among Absolutely Nonsingular Tensors through Geometric Invariants\\} \end{center} \begin{center} Sakata, T.$^1$, Maehara, K.$^2$, Sasaki, T.$^3$, Sumi, T.$^{1}$, Miyazaki, M.$^{4}$ and Watanabe, Y.$^5$ \\ \vspace*{2mm} \end{center} Department of Design Human Science, Kyushu University$^{1}$\\ School of Design, Kyushu University$^{2}$\\ Department of Mathematics, Kobe University$^{3}$\\ Department of Mathematics, Kyoto University of Education$^{4}$\\ Research Institute of Information Technology, Kyushu University$^{5}$ \section{Introduction} Tensor data analysis has developed successfully in various application fields and is useful for capturing multi-factor dependence. An $n \times n \times p$ tensor is a multi-array datum $T=(T_{ijk}),$ where $1 \leq i,j \leq n$ and $1 \leq k \leq p.$ A tensor $T$ of type $n \times n \times p$ is denoted by $T=(A_{1};A_{2};\cdots;A_{p}),$ where the $A_{i}$ denote $n \times n$ matrices. An $n \times n \times p$ tensor $T$ is said to be of rank $1$ if there are vectors $\bm{a}=(a_{1},a_{2},...,a_{n}),$ $\bm{b}=(b_{1},b_{2},...,b_{n})$, and $\bm{c}=(c_{1},c_{2},...,c_{p})$ such that $T_{ijk}=a_{i}b_{j}c_{k}$ for all $i,j,k,$ and the rank of a tensor $T$ is defined as the minimum integer $r$ such that $T$ can be expressed as the sum of $r$ rank-one tensors. The maximal rank over all tensors of type $n \times n \times p$ is also defined in the obvious fashion and denoted by ${\rm maxrank}(n,n,p)$. The rank of a tensor describes the complexity of a tensorial datum and the maximal rank describes the model complexity of a class of tensors of a given type, so they are very important concepts in both applied and theoretical fields. Therefore, the rank and maximal rank determination problems have attracted the interest of many researchers, for example, Kruskal \cite{Kruskal}, ten Berge \cite{tenBerge}, and Comon et al. \cite{Common}, etc.,
and are now being investigated intensively (for a comprehensive survey, see Kolda et al. \cite{Kolda}). Atkinson et al. (\cite{Atkinson1} and \cite{Atkinson2}) claimed that ${\rm maxrank}(n,n,3) \leq 2n-1$. Here we introduce an important class of tensors. \begin{definition} A real tensor $T=(A_{1};A_{2};\dots;A_{p})$ is said to be absolutely nonsingular if $x_{1}A_{1}+x_{2}A_{2}+\cdots +x_{p}A_{p}$ is nonsingular for all $(x_{1},x_{2},...,x_{p}) \neq (0,0,...,0).$ \end{definition} \begin{remark} We called absolutely nonsingular tensors exceptional tensors in Sakata et al. {\rm \cite{Sakata1},\cite{Sakata2}}. \end{remark} Sumi et al. \cite{Sumi1} proved the claim of Atkinson et al. over the complex number field $\mathbb{C}$ without any assumption, and proved it over the real number field $\mathbb{R}$ except for the class of absolutely nonsingular tensors. Thus, to prove the claim of Atkinson et al. over the real number field $\mathbb{R},$ the first step is to determine all absolutely nonsingular tensors. Absolutely nonsingular tensors are characterized by the determinant polynomial defined below. The search for absolutely nonsingular tensors in this direction was pursued in Sakata et al. {\rm \cite{Sakata1},\cite{Sakata2}}. Besides the search for absolutely nonsingular tensors, the equivalence among them under the rank-preserving transformations defined below is also important. Note that this equivalence relation is also related to the SLOCC equivalence of entangled states in quantum communication (for example, see Chen et al. \cite{Chen}). \begin{definition} For an $n\times n \times p$ tensor $T=(A_{1};A_{2};\dots;A_{p}),$ the homogeneous polynomial in $x_{1},...,x_{p}$ of degree $n$ \begin{equation} f_{T}(x_{1},x_{2},...,x_{p})=\det(x_{1}A_{1}+x_{2}A_{2}+\cdots +x_{p}A_{p}) \end{equation} is called the determinant polynomial of the tensor $T$. \end{definition} Then we have the following important characterization.
\begin{thm}\label{positivity} If $T=(A_{1};A_{2};\dots;A_{p})$ is absolutely nonsingular, its determinant polynomial $f_{T}(x_{1},x_{2},...,x_{p})$ is a positive definite or negative definite homogeneous polynomial. \end{thm} \begin{proof} Let $T=(A_{1};A_{2};\dots;A_{p})$ be absolutely nonsingular, and assume that there are two points $\bm{x}_{0}$ and $\bm{x}_{1}$ such that $f_{T}(\bm{x}_{0})>0$ and $f_{T}(\bm{x}_{1})<0.$ The line $\ell$ connecting the two points $\bm{x}_{0}$ and $\bm{x}_{1}$ must pass through the origin $\bm{0}$: in the segment $[\bm{x}_{0},\bm{x}_{1}]$ there must be a point $\bm{x}'$ with $f_{T}(\bm{x}')=0$, and it must be $\bm{0}$ because $T$ is absolutely nonsingular. Take another point $\bm{x}_{2}$ which is not on the line $\ell$. Then the line passing through $\bm{x}_{0}$ and $\bm{x}_{2}$ does not pass through the origin, so $f(\bm{x}_{0}) f(\bm{x}_{2})<0$ is impossible by the same reasoning as in the previous sentence; hence $f(\bm{x}_{0}) f(\bm{x}_{2})>0.$ Next, consider the line passing through $\bm{x}_{1}$ and $\bm{x}_{2}$, which also does not pass through the origin, yet $f(\bm{x}_{1}) f(\bm{x}_{2})<0$. This is a contradiction. Hence there do not exist points $\bm{x}_{0}$ and $\bm{x}_{1}$ such that $f(\bm{x}_{0})>0$ and $f(\bm{x}_{1})<0.$ This proves Theorem \ref{positivity}. \end{proof} It is well known that tensor rank is invariant under the typical matrix transformations, namely the $p$-, $q$-, and $r$-transformations defined below. Hence, if two tensors are equivalent in this sense, they have the same rank. Thus, the study of equivalence among tensors is of some importance for rank determination.
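The constant sign asserted by Theorem \ref{positivity} can be probed numerically. The following is a minimal NumPy sketch (the helper name \texttt{det\_poly} is ours, not the paper's); it uses the quaternion-type tensor $T_{1}$ from the example in the section on rough $SL(3)$ invariants, whose determinant polynomial is $(x^2+y^2+z^2)^2$, and samples it on the unit sphere.

```python
import numpy as np

# Slices of the quaternion-type 4 x 4 x 3 tensor from the text,
# whose determinant polynomial is (x^2 + y^2 + z^2)^2.
A1 = np.array([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)
A2 = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]], float)
A3 = np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, 1, 0, 0], [-1, 0, 0, 0]], float)

def det_poly(x, y, z):
    """Evaluate f_T(x, y, z) = det(x*A1 + y*A2 + z*A3)."""
    return np.linalg.det(x * A1 + y * A2 + z * A3)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points on the unit sphere
values = np.array([det_poly(*q) for q in pts])
assert np.all(values > 0)   # f_T keeps a constant (here: positive) sign
```

By homogeneity, a constant sign on the unit sphere already implies definiteness on all of $\mathbb{R}^{3}\setminus\{\bm{0}\}$, so sampling the sphere suffices.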
\begin{definition} For an $n \times n \times p$ tensor $T=(A_{1};A_{2};\cdots;A_{p}),$ the following transformations \begin{itemize} \item[(1)] $T=(A_{1};A_{2};\cdots;A_{p}) \rightarrow T'=(PA_{1};PA_{2};\cdots;PA_{p})$ by an $n \times n$ matrix $P \in GL(n),$ \item[(2)] $T=(A_{1};A_{2};\cdots;A_{p}) \rightarrow T'=(A_{1}Q;A_{2}Q;\cdots;A_{p}Q)$ by an $n \times n$ matrix $Q \in GL(n),$ \item[(3)] $T=(A_{1};A_{2};\cdots;A_{p}) \rightarrow T'=(\sum_{j=1}^{p}R_{1j}A_{j}; \sum_{j=1}^{p}R_{2j}A_{j};\cdots;\sum_{j=1}^{p}R_{pj}A_{j})$ by a $p \times p$ matrix $R=(R_{ij}) \in GL(p)$ \end{itemize} are called the $p$-, $q$-, and $r$-transformations and denoted by $T \rightarrow_{p} T'$, $T \rightarrow_{q} T'$, and $T \rightarrow_{r} T'$, respectively. Further, if $T_{1} \rightarrow_{p} T_{2}$, then $T_{1}$ and $T_{2}$ are said to be $p$-equivalent; $q$- and $r$-equivalence are defined analogously. \end{definition} \begin{definition} Let $T_{1}=(A_{1};A_{2};\cdots;A_{p})$ and $T_{2}=(B_{1};B_{2};\cdots;B_{p})$ be two $n \times n \times p$ tensors. If there is a sequence of tensors starting from $T_{1}$ and ending at $T_{2}$ in which consecutive tensors are related by $p$-, $q$-, or $r$-equivalence, then $T_{1}$ and $T_{2}$ are said to be equivalent. \end{definition} Now we can reduce the equivalence relation to a simpler one by the following lemma. \begin{lemma}\label{comutative} The $p$-, $q$- and $r$-transformations are mutually commutative. \end{lemma} \begin{proof} For simplicity, we give the proof for $p=3$; the proof for general $p$ is similar. First we prove the commutativity of the $p$-transformation and the $r$-transformation.
Let $$ T_{1} \rightarrow_{p} T_{2} \rightarrow_{r} T_{3} \ \ {\rm and} \ \ T_{1} \rightarrow_{r}T^{'}_{2} \rightarrow_{p}T^{'}_{3}. $$ We will show that $T_{3}=T^{'}_{3}.$ Let $T_{1}=(A_{1};A_{2};A_{3})$, $P=(p_{ij})$, and $R=(r_{ij}).$ Then, $$ T_{2}=(PA_{1};PA_{2};PA_{3}) $$ and $$ T_{3}=(r_{11}PA_{1}+r_{12}PA_{2}+r_{13}PA_{3}; r_{21}PA_{1}+r_{22}PA_{2}+r_{23}PA_{3}; r_{31}PA_{1}+r_{32}PA_{2}+r_{33}PA_{3}). $$ On the other hand, $$ T^{'}_{2}=(r_{11}A_{1}+r_{12}A_{2}+r_{13}A_{3}; r_{21}A_{1}+r_{22}A_{2}+r_{23}A_{3}; r_{31}A_{1}+r_{32}A_{2}+r_{33}A_{3}) $$ and $$ T^{'}_{3}=(r_{11}PA_{1}+r_{12}PA_{2}+r_{13}PA_{3}; r_{21}PA_{1}+r_{22}PA_{2}+r_{23}PA_{3}; r_{31}PA_{1}+r_{32}PA_{2}+r_{33}PA_{3}). $$ Thus $T_{3}=T^{'}_{3},$ which means the commutativity of the $p$- and $r$-transformations. The commutativity of the $q$- and $r$-transformations is proved similarly, and the $p$- and $q$-transformations are obviously commutative. This proves Lemma \ref{comutative}. \end{proof} Note that in this paper we consider three cases: (1) $P, Q \in GL(n)$ and $R \in GL(p)$; (2) $P, Q \in GL(n)$ and $R \in SL(p)$; and (3) $P, Q \in SL(n)$ and $R \in SL(p).$ The first is called $GL(p)$-equivalence or simply equivalence, the second is called $SL(p)$-equivalence, and the third is called, in full, $SL(n) \times SL(n) \times SL(p)$-equivalence. Lemma \ref{comutative} implies the following theorem. \begin{thm} $T_{1}$ and $T_{2}$ are $GL(p)$-equivalent if and only if there are a $p$-transformation, a $q$-transformation, and an $r$-transformation such that $$T_{1} \rightarrow_{p} T^{'} \rightarrow _{q} T^{''} \rightarrow_{r} T_{2}.$$ \end{thm} Thus, the equivalence problem for tensors $T_{1}=(A^{(1)}_{1};\cdots;A^{(1)}_{p})$ and $T_{2}=(A^{(2)}_{1};\cdots;A^{(2)}_{p})$ is reduced to the problem of whether the following system of algebraic equations for $P$, $Q$ and $R$ has a solution or not.
\begin{equation} A^{(2)}_{i}=P\left( \sum_{j=1}^{p}r_{ij}A^{(1)}_{j}\right)Q, \ \ i=1,2,...,p. \end{equation} These algebraic equations have too many variables to solve, even when the size of the matrices $A_{i}$ is moderate. So, in this paper, we propose to view the problem through the determinant polynomial. Although we thereby have to discard the sufficiency part of the problem, the problem becomes a concise and tractable one by the following proposition. \begin{prop}\label{propequivalence} If $T_{1}$ and $T_{2}$ are $GL(p)$-equivalent, then there are a constant $c \in \mathbb{R}$ and a $p \times p$ nonsingular matrix $R \in GL(p)$ such that \begin{equation}\label{equivalentequation} f_{T_{2}}(\bm{x})=cf_{T_{1}}(\bm{x}R). \end{equation} \end{prop} So, we can say that \begin{prop} For two tensors, if the equation (\ref{equivalentequation}) does not hold for any constant $c \in \mathbb{R}$ and any matrix $R\in GL(p)$, they are not $GL(p)$-equivalent. \end{prop} The reduced equation (\ref{equivalentequation}) happens to be solvable algebraically in some cases. In general, however, it is still hard to solve a system of algebraic equations with so many variables: we need to decide whether a system of $ \frac{(n+1)(n+2)}{2}$ homogeneous equations of degree $n$ with $(p^2+1)$ variables has a solution or not. So, in this paper, we avoid solving the problem algebraically and propose to attack it from a geometric viewpoint; that is, we test non-equivalence by checking whether the constant surfaces of the determinant polynomials of $T_{1}$ and $T_{2}$ have the same geometric invariants or not. Here, multi-linear algebra and differential geometry intersect through the window of determinant polynomials. \\ The first aim of this paper is to show theoretically that differential geometric invariants are useful as testers of non-equivalence among absolutely nonsingular tensors.
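Both Lemma \ref{comutative} and Proposition \ref{propequivalence} are easy to check numerically. The following is a minimal NumPy sketch (helper names such as \texttt{r\_transform} are ours): it builds $B_{i}=P(\sum_{j}r_{ij}A_{j})Q$ from a random $4 \times 4 \times 3$ tensor and verifies that $f_{T_{2}}(\bm{x})=cf_{T_{1}}(\bm{x}R)$ with $c=\det(P)\det(Q)$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4, 4))                       # slices A_1, A_2, A_3 of T1
P, Q = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
R = rng.normal(size=(3, 3))

def r_transform(T, R):
    # r-transformation: new slice i is sum_j R[i, j] * (old slice j)
    return np.einsum('ij,jkl->ikl', R, T)

def f(T, x):
    # determinant polynomial f_T(x) = det(x_1 A_1 + x_2 A_2 + x_3 A_3)
    return np.linalg.det(np.einsum('k,kij->ij', x, T))

# p- and r-transformations commute (Lemma "comutative")
pr = r_transform(np.array([P @ Ai for Ai in A]), R)
rp = np.array([P @ Bi for Bi in r_transform(A, R)])
assert np.allclose(pr, rp)

# Proposition "propequivalence": equivalent tensors have determinant
# polynomials related by a constant c and the matrix R.
B = np.array([P @ Bi @ Q for Bi in r_transform(A, R)])
c = np.linalg.det(P) * np.linalg.det(Q)
x = rng.normal(size=3)
assert np.isclose(f(B, x), c * f(A, x @ R))
```

Here $(\bm{x}R)_{j}=\sum_{i}x_{i}r_{ij}$, matching the multilinearity of the determinant in the slices.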
The second aim is to show that we can calculate the values of the invariants with sufficient accuracy. Third, we compare the values of the invariants calculated by the lattice method and by the t-design method, and show that the lattice point method gives more stable values than the t-design method. As $SL(p)$-invariants, we consider first the volume enclosed by the constant surface, then the affine surface area, and thirdly the $L_{p}$ affine surface area of a convex body. Affine surface area was studied by Blaschke \cite{BL} and extended to the $L_{p}$ affine surface area by Lutwak \cite{Lut} (see also Leichtweiss \cite{Leicht}). For the valuation theory of the $L_{p}$ affine surface area, see the recent papers by Ludwig \cite{Ludwig2} and Ludwig and Reitzner \cite{LudwigReit}. Finally, as a general reference on affine differential geometry, see K. Nomizu and T. Sasaki \cite{NS}. \\ This paper is organized as follows. In Section 2, we show how to parametrize the constant surface of a determinant polynomial, and in Section 3, we briefly review some definitions from differential geometry. In Section 4, we discuss rough $SL(p)$-invariants. In Section 5, we deal with $SL(p)$-invariants: in the first subsection, we introduce the valuation theory for the set of convex bodies; in the second subsection, we discuss the volume of the region enclosed by a constant surface as an $SL(p)$-invariant; and in the third subsection, we discuss the affine surface area as an $SL(p)$-invariant. In Section 6, we consider the generalized affine surface area, that is, the $L_{p}$ affine surface area, especially the centro-affine surface area, as a $GL(p)$-invariant. In Section 7, we briefly review the theory of spherical t-designs and give a theorem important for the approximate calculation of our proposed invariants. In Section 8, we give numerical values of the invariants calculated by the lattice method and the t-design method.
It is shown numerically that the proposed invariants are useful for discriminating non-equivalence. In Section 9, the conclusion is given. Finally, note that in the following we consider mainly the case of $n=4$ and $p=3,$ though some statements are given for general $n$ and $p.$ One reason is that absolutely nonsingular tensors are not so easy to obtain in general, and the second reason is that it is easy to see that our method is also available for general cases. The study of much higher values of $n$ and $p$ will be given in future work. \section{Parametrization of constant surface} The determinant polynomial of a $4 \times 4 \times 3$ tensor $T=(A_{1};A_{2};A_{3})$, $f_{T}(x,y,z)=\det(xA_{1}+yA_{2}+zA_{3}),$ is a homogeneous polynomial of three variables of degree 4. We are concerned with the integral invariants of the constant surface $\partial\Omega_{T}=\{(x,y,z)|f_{T}(x,y,z)=1\}$ for the special linear group $SL(3)$ and the general linear group $GL(3)$. To get such invariants, we need to parametrize this surface by the usual spherical coordinates, \begin{eqnarray} x&=&r\sin s\cos t=r\Phi_{x}(s,t) \\ y&=&r\sin s\sin t=r\Phi_{y}(s,t) \\ z&=&r\cos s=r\Phi_{z}(s,t) , \end{eqnarray} where $0<s<\pi, 0<t<2\pi.$ Let $\bm{x}$ denote the point $(x,y,z)$ on the surface. Putting these into the equation $f_{T}(x,y,z)=1,$ we have \begin{equation} r^{4}=\frac{1}{p(s,t)}, \end{equation} where \begin{equation} p(s,t)=f_{T}(\Phi_{x}(s,t),\Phi_{y}(s,t), \Phi_{z}(s,t)). \end{equation} Hence, \begin{equation} \label{parametric} \bm{x}=\frac{1}{p(s,t)^{1/4}} \left( \Phi_{x}(s,t),\Phi_{y}(s,t), \Phi_{z}(s,t) \right). \end{equation} This equation (\ref{parametric}) gives a parametric representation of the constant surface $\partial \Omega_{T}$. Then, the following is the starting point of this research on the constant surface.
\begin{thm}\label{compact} The constant surface of the determinant polynomial of an absolutely nonsingular tensor is a compact set in $\mathbb{R}^{3}$ without self-intersection. \end{thm} \begin{proof} Without loss of generality, we assume that $f_{T}(\bm{x})$ is positive definite. If $\bm{x} \in \partial\Omega_{T}$, then $r\bm{x} \notin \partial \Omega_{T}$ for any $r>0$ with $r \neq 1$; that is, the constant surface is star-shaped. This proves that the surface has no self-intersection. Since $p(s,t)$ is continuous on the unit sphere, it attains a positive minimum and a positive maximum. So, $\bm{x}$ in the equation (\ref{parametric}) is bounded, which implies the compactness of the constant surface. This completes the proof of Theorem \ref{compact}. \end{proof} The following 8 figures are examples of the constant surfaces of $ 4 \times4 \times 3$ absolutely nonsingular tensors. \begin{figure} \caption{Constant surfaces of the determinant polynomials of tensors No.\,1, 3, 10, 20, 99, 119, 207, 237.} \end{figure} Note that the numbering of tensors is based on the list of nonsingular tensors with elements consisting only of $-1, 0, 1$ found by us. Each figure corresponds to the following tensor and its determinant polynomial, respectively. $$ T_{1}=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} -1 & 0 & 1 & 1 \\ 1 & -1 & 1 & -1\\ -1 & -1 & -1 & -1 \\ 0 & 1 & 1 & 0\\ \end{array}\right) : \left( \begin{array}{cccc} 0 & 1 & -1 & 1 \\ 0 & -1 & 1 & 1\\ 0 & -1 & 0 & -1\\ -1 & -1 & 1 & 1\\ \end{array}\right), $$ \begin{eqnarray*} f_{T_{1}}(x,y,z)&=&x^4+6y^4+2z^4-3x^3y-8xy^3-3xz^3+5z^3y+\\ && 7x^2y^2+3x^2z^2+8z^2y^2-2xy^2z-8xyz^2.
\end{eqnarray*} $$ T_{3}=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 1\\ 1 & 0 & - & 0 \\ -1 & 1 & -1 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ -1 & 0 & 0 & -1\\ 0 & -1 & 0 & 0\\ -1 & 1 & -1 & 1\\ \end{array}\right), $$ \begin{eqnarray*} f_{T_{3}}(x,y,z)&=&x^4+3y^4+6z^4-3x^3z+2xy^3+4y^3z\\ &&-7xz^3-6z^3y+x^2y^2+5x^2z^2-5z^2y^2+4xy^2z-x^2yz+2xyz^2. \end{eqnarray*} $$ T_{10}=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 1\\ 1 & 0 & 1 & 0\\ 0 & 0 & -1 & -1\\ \end{array}\right) : \left( \begin{array}{cccc} 1 & 1 & 1 & 1\\ -1 & 0 & 0 & -1\\ 0 & -1 & 0 & 0\\ -1 & 1 & -1 & 1\\ \end{array}\right), $$ \begin{eqnarray*} && f_{T_{10}}(x,y,z)=x^4+y^4+2z^4-x^3y+2x^3z+xy^3+2xz^3\\ &&-z^3y-x^2y^2+4x^2z^2-2xy^2z-2x^2yz-xyz^2.\\ \end{eqnarray*} $$ T_{20}=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} -1 & 0 & 1 & -1\\ 0 & 0 & 0 & 1\\ 1 & -1 & 0& 0\\ 1 & 0 & 0 & 0\\ \end{array}\right) : \left( \begin{array}{cccc} 1 & 0 & 1 & 1\\ 1 & 1 & 1 & -1\\ 0 & 0 & 0 & 1\\ 1 & -1 & -1 & -1\\ \end{array}\right), $$ \begin{eqnarray*} && f_{T_{20}}(x,y,z)= x^4+y^4+2z^4-x^3y+x^3z+y^3z-xz^3+2z^3y+\\ &&-x^2z^2+6z^2y^2+xy^2z+x^2yz+2xyz^2. 
\end{eqnarray*} $$ T_{99}=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} 0 & -1 & - 1 & -1\\ 1 & 1 & 1 & - 1\\ 1 & -1 & 1& 0\\ -1 & 1 & -1 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} 0 & 1 & 0 & 0\\ 0 & -1 & 1 & 1\\ 1 & 0 & 1 & -1\\ 1 & 0 & 1 & 1\\ \end{array}\right), $$ \begin{eqnarray*} && f_{T_{99}}(x,y,z)=x^4+2y^4+2z^4+3x^3y+x^3z+3xy^3+\\ &&y^3z -3z^3y+6x^2y^2+z^2y^2+8xy^2z+x^2yz-5xyz^2 \end{eqnarray*} $$ T_{119}=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} 0 & 0 & 0& -1\\ - 1 & 1 & 1 & - 1\\ 0 & -1 & 0& 0\\ 1 & 0 & 0 & 0\\ \end{array}\right) : \left( \begin{array}{cccc} 1 & -1 & -1 & 1\\ 1 & -1 & 1 & 0\\ 0 & 1 & 1 & 1\\ 1 & 1 & 0 & - 1\\ \end{array}\right), $$ \begin{eqnarray*} && f_{T_{119}}(x,y,z)=x^4+y^4+9z^4+x^3y+xy^3+2y^3z-2z^3y+\\ && 2x^2y^2-3x^2z^2-z^2y^2+xy^2z+x^2yz+xyz^2. \end{eqnarray*} $$ T_{207}=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} -1& -1 & -1& 1\\ 0 & 1 & 1 & 0\\ -1 & 0 & 0& -1\\ -1 & 1 & 0 & -1\\ \end{array}\right) : \left( \begin{array}{cccc} -1 & 1 & -1 & 1\\ -1 & 0 & 1 & 1\\ -1 & 0 & -1 & 1\\ 0 & 0 & -1 & - 1\\ \end{array}\right), $$ \begin{eqnarray*} &&f_{T_{207}}(x,y,z)=x^4+2y^4+2z^4-x^3y-3x^3z+xy^3+6y^3z\\ &&-3xz^3+2z^3y-x^2y^2+4x^2z^2+6z^2y^2+8xy^2z-3x^2yz+7xyz^2. \end{eqnarray*} $$ T_{237}=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} 0& -1 & 1& - 1\\ 0 & - 1 & - 1 & 0\\ 0 & 1 & -1& 0\\ 1 & 1 & 0 & 1\\ \end{array}\right) : \left( \begin{array}{cccc} 0 & -1 & -1 & -1\\ -1 & -1 & -1 & 0\\ 0 & -1 & 0 & 1\\ 1 & -1 & 1 & 1\\ \end{array}\right), $$ \begin{eqnarray*} &&f_{T_{237}}(x,y,z)=x^4+2y^4+3z^4-x^3y+4y^3z +3z^3y+\\ &&x^2y^2-x^2z^2+6z^2y^2+x^2yz. 
\end{eqnarray*} \section{Notations from differential geometry} For use in the following sections, we review some basic notions of differential geometry. For more details, see, for example, Nomizu and Sasaki \cite{NS}. When we denote the parametrized point on the surface by $x(s,t),$ we denote its partial derivatives by \begin{eqnarray} x_{s}(s,t)&=&\frac{ \partial x(s,t)}{\partial s},\\ x_{t}(s,t)&=&\frac{ \partial x(s,t)}{\partial t}. \end{eqnarray} \begin{definition} \begin{equation} E=\langle x_{s},x_{s} \rangle, \ \ F=\langle x_{s},x_{t} \rangle, \ \ G=\langle x_{t},x_{t} \rangle, \end{equation} are called the first fundamental coefficients. Putting $dx=x_{s}(s,t)ds+x_{t}(s,t)dt$, the form \begin{equation} I=\langle dx,dx \rangle =E\,ds^2+2F\,ds\,dt+G\,dt^2 \end{equation} is called the first fundamental form of the surface. \end{definition} \begin{definition} The vector \begin{equation} \bm{n}=\frac{x_{s} \times x_{t} } {||x_{s} \times x_{t} ||} \end{equation} is called the unit normal vector at the point $x(s,t).$ \end{definition} \begin{definition} Putting \begin{equation} x_{s s}=\frac{ \partial ^2 x(s,t)}{\partial s^2}, \ \ x_{s t}=\frac{ \partial ^2 x(s,t)}{\partial s \partial t}, \ \ x_{t t}=\frac{ \partial ^2 x(s,t)}{\partial t^2}, \end{equation} the scalar functions \begin{equation} L=\langle x_{s s },\bm{n} \rangle \ \ M=\langle x_{s t },\bm{n} \rangle \ \ \ \ N=\langle x_{t t },\bm{n} \rangle \end{equation} are called the second fundamental coefficients. The form \begin{equation} II=-\langle dx,d\bm{n}\rangle=Lds^2+2Mdsdt+Ndt^2 \end{equation} is called the second fundamental form of the surface. \end{definition} \begin{definition} At a point $P$ on the surface, let $k_{1}$ and $k_{2}$ be the maximum and minimum of the curvatures of the curves generated by the intersection of the surface with the planes spanned by the normal vector and a tangent vector. Then $H=(k_{1}+k_{2})/2$ is called the mean curvature and $K=k_{1}k_{2}$ is called the Gaussian curvature.
These are calculated by \begin{equation} H=\frac{EN-2FM+GL}{2(EG-F^2)} \ \ {\rm and} \ \ K=\frac{LN-M^2}{EG-F^2}. \end{equation} \end{definition} \section{Rough $SL(3)$ invariants} For checking $GL(3)$-equivalence between two tensors $T_{1}$ and $T_{2}$, we need to test the equation $$ f_{T_{2}}(\bm{x})=cf_{T_{1}}(\bm{x}R), \ \ c \in \mathbb{R} \ \ {\rm and} \ \ R \in GL(3). $$ Further, putting $S=R/|R|^{1/3} \in SL(3),$ we have \begin{equation} cf_{T_{1}}(\bm{x}R)= c|R|^{4/3} f_{T_{1}}(\bm{x} R/|R|^{1/3})=c'f_{T_{1}}(\bm{x}S) \ \ {\rm with} \ \ c' \in \mathbb{R} \ \ {\rm and} \ \ S \in SL(3). \end{equation} Thus, $GL(3)$-equivalence reduces to $SL(3)$-equivalence, and the following theorem holds. \begin{thm}\label{SLGL} For two tensors $T_{1}$ and $T_{2}$, assume that $f_{T_{2}}(\bm{x})=cf_{T_{1}}(\bm{x}S)$ does not hold for any choice of $c \in \mathbb{R}$ and any $S \in SL(3)$. Then $T_{1}$ and $T_{2}$ are not $GL(3)$-equivalent. \end{thm} This justifies studying $SL(3)$-equivalence among absolutely nonsingular tensors in order to investigate $GL(3)$-equivalence. \begin{remark}\label{positivec} If $c$ is negative, then by considering $T_{3}=PT_{2}$ with $|P|<0,$ we have $$ f_{T_{3}}(\bm{x})=cf_{T_{1}}(\bm{x}R), \ \ c \in \mathbb{R}^{+} \ \ {\rm and} \ \ R \in GL(3), $$ where $T_{3}$ is equivalent to $T_{2}.$ Thus, renaming $T_{3}$ as $T_{2}$, we may assume $$ f_{T_{2}}(\bm{x})=cf_{T_{1}}(\bm{x}R), \ \ c \in \mathbb{R}^{+} \ \ {\rm and} \ \ R \in GL(3). $$ \end{remark} The following rough $SL(3)$ invariants are useful. \begin{thm} A convex surface is transformed into a convex surface by an $SL(3)$ linear transformation, and so a tensor whose determinant polynomial has a convex constant surface is not equivalent to a tensor whose determinant polynomial has a non-convex constant surface. \end{thm} Among the 8 surfaces in Figure 1, only that of tensor No.\,1 is convex, and so tensor No.\,1 is not $SL(3)$-equivalent to any other tensor in Figure 1.
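The reduction from $GL(3)$- to $SL(3)$-equivalence at the start of this section is easy to check numerically. The following is a minimal NumPy sketch (helper names are ours; $|R|$ is read as $\det R$, and $f$ is the degree-4 determinant polynomial of a random $4 \times 4 \times 3$ tensor).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4, 4))                       # slices A_1, A_2, A_3

def f(x):
    # degree-4 determinant polynomial f(x) = det(x_1 A_1 + x_2 A_2 + x_3 A_3)
    return np.linalg.det(np.einsum('k,kij->ij', x, A))

R = rng.normal(size=(3, 3))
d = np.linalg.det(R)
S = R / np.cbrt(d)                                   # S = R / |R|^{1/3}
assert np.isclose(np.linalg.det(S), 1.0)             # S lies in SL(3)

# homogeneity of degree 4 gives f(x R) = |R|^{4/3} f(x S)
x = rng.normal(size=3)
assert np.isclose(f(x @ R), np.cbrt(d) ** 4 * f(x @ S))
```

The identity follows from $\bm{x}R = |R|^{1/3}(\bm{x}S)$ and the degree-4 homogeneity of $f$; the real cube root handles $\det R < 0$ as well.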
\begin{definition} A point on the surface is called a singular point if the normal vector at the point cannot be defined. \end{definition} \begin{thm} If the constant surface of a tensor $T_{1}$ has a singular point and the constant surface of a tensor $T_{2}$ has no singular point, they are not $SL(3)$- (nor $GL(3)$-) equivalent. \end{thm} \begin{example} Let $$ T_{1}=\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{array} \right); \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{array} \right); \left( \begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{array} \right) $$ and \[T_{2}= \left ( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right); \left ( \begin{array}{cccc} 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} \right); \left ( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ \end{array} \right). \] Then the determinant polynomials of $T_{1}$ and $T_{2}$ are given by $$ f_{T_{1}}(x,y,z)=(x^2+y^{2}+z^{2})^2 $$ and $$ f_{T_{2}}(x,y,z)=(x^2+y^{2})^2+z^{4}, $$ respectively. Note that both of them are positive definite, that is, both $T_{1}$ and $T_{2}$ are absolutely nonsingular. It is clear that the constant surface of $f_{T_{1}}(\bm{x})$ is a sphere and has no singular point, while the constant surface of $f_{T_{2}}(\bm{x})$ has a singular point. Hence, $T_{1}$ and $T_{2}$ are not equivalent. \end{example} \begin{definition} When we consider a mesh of the parameter space, it produces a lattice of points on the constant surface. Let $K_{+}$ be the number of lattice points at which the Gaussian curvature is positive, and let $K_{-}$ and $K_{0}$ be defined in the same way.
\\ \end{definition} Then we have \begin{thm} The triplet $(K_{+},K_{-}, K_{0})$ is an $SL(3)$-invariant. \end{thm} \section{$SL(3)$ integral invariants} Figures 2 and 3 show the convex bodies that are enclosed by constant surfaces. The numbering of the figures corresponds to that in our list of absolutely nonsingular tensors (Maehara \cite{Maehara}). \begin{figure} \caption{The constant surfaces for the No.\,1, 19, 22, 23, 42, 60, 61, 65 absolutely nonsingular tensors} \label{positive} \end{figure} \begin{figure} \caption{The constant surfaces for the No.\,72, 74, 76, 83, 85, 87, 95, 103 absolutely nonsingular tensors} \label{positivek} \end{figure} In this section, we want to find $SL(3)$-invariants for such convex bodies. For this purpose, the following valuation theory is quite useful. Here, we give a brief summary of the valuation theory from Ludwig \cite{Ludwig2} and \cite{LudwigReit}. The definitions are stated for a general $p$. \begin{definition} Let $\mathcal{K}$ denote the set of all convex bodies in $\mathbb{R}^{p}$. A functional $t(\cdot)$ from $\mathcal{K}$ to $\mathbb{R}$ is called a valuation if it satisfies \begin{equation} t(K )+t(L)=t(K \cup L ) +t(K \cap L), \ \ K, L \in \mathcal{K}. \end{equation} \end{definition} The next theorem is a starting point for the characterization of invariant valuations. \begin{thm} \label{Hadwiger}(Hadwiger \cite{Hadwiger}).
A continuous valuation $t(\cdot)$ from $\mathcal{K}$ to $\mathbb{R}$ is invariant with respect to rigid motions if and only if there are constants $c_{0},c_{1},...,c_{p}$ such that \begin{equation} t(K)=c_{0}V_{0}(K)+c_{1}V_{1}(K)+\cdots+c_{p}V_{p}(K), \end{equation} where $V_{0}(K),V_{1}(K),\cdots,$ and $V_{p}(K)$ are the intrinsic volumes of $K.$ We remark that the volumes $V_{k}$ are called the quermassintegrals of $K$ in \cite{Leicht}, and that $V_{0}$ is the Euler index, $V_{p-1}$ the affine volume of $\partial K$, and $V_{p}$ the volume of the convex body $K.$ In the following, we simply denote the affine volume by $a(K)$ and call it the affine surface area, following \cite{LudwigReit}, and denote by $V(K)$ the volume $V_{p}(K)$. \end{thm} \begin{definition} A functional $t(\cdot)$ on $\mathcal{K}$ is said to be equi-affine invariant if it is $SL(p)$-invariant and translation invariant. \end{definition} The following is essential for us. \begin{thm} \label{Ludwig}(Ludwig \cite{Ludwig0}, \cite{Ludwig1}, \cite{Ludwig2} and \cite{LudwigReit}). An upper semi-continuous valuation $t(K)$ from $\mathcal{K}$ to $\mathbb{R}$ is equi-affine invariant if and only if there are constants $c_{0},c_{1} \in \mathbb{R}$ and $c_{2}\geq 0 $ such that \begin{equation} t(K)=c_{0}V_{0}(K)+c_{1}V_{p}(K)+c_{2}a(K), \end{equation} where $a(K)$ denotes the affine surface area. \end{thm} In short, an upper semi-continuous equi-affine invariant valuation is exactly a weighted sum of the Euler index, the volume $V_{p}(K)$, and the affine surface area $a(K)$. This means the following. \begin{prop}\label{invariant} $V(K)$ and $a(K)$ are $SL(p)$-invariants. \end{prop} So, we adopt the volume $V(K)$ and the affine surface area $a(K)$ as indexes of $SL(p)$-equivalence. Further, the next proposition by Lutwak \cite{Lut} is very useful for us. \begin{prop}\label{homogenous} When $p=3$, $a(K)$ is homogeneous of degree $3/2$; that is, \begin{equation} a(dK)=d^{3/2}a(K), \ \ K \in \mathcal{K}_{0}.
\end{equation} \end{prop} \subsection{Volume as an $SL(3)$-invariant} We are considering the equivalence relation among absolutely nonsingular tensors. As shown in Theorem \ref{compact}, for such tensors the constant surfaces are compact. Note that, by Proposition \ref{invariant}, the volume of the region enclosed by the constant surface is an $SL(3)$-invariant. By the following divergence theorem of Gauss, we can calculate the volume from the parametric representation given by equation (\ref{parametric}). \begin{thm}(Gauss's formula)\\ Let $\Omega$ be the region enclosed by a closed surface $\partial \Omega$, and let $f\,dy \wedge dz+ g\,dz \wedge dx + h\,dx \wedge dy$ be a differential form of degree two. Then \begin{equation}\label{Gauss} \int_{ \partial \Omega} f\,dy \wedge dz+ g\,dz \wedge dx + h\,dx \wedge dy=\int_{ \Omega}\left(\frac{\partial f}{\partial x}+\frac{\partial g}{\partial y}+\frac{\partial h}{\partial z}\right) dx \wedge dy \wedge dz. \end{equation} \end{thm} We denote by $V(\Omega)$ the volume of the region $\Omega$. Taking $(f,g,h)=(x,y,z)/3$ in this formula, we have \begin{equation} V(\Omega)=\frac{1}{3}\int_{ \partial \Omega} x\,dy \wedge dz+ y\,dz \wedge dx + z\,dx \wedge dy. \end{equation} For the present case, using the spherical coordinates $(s,t)$, a point of the boundary $\partial \Omega$ is parametrized as $\bm{x}=r(s,t)(\Phi_{x}(s,t),\Phi_{y}(s,t),\Phi_{z}(s,t))$ with $r(s,t)=p(s,t)^{-1/4}$. Hence, we have \begin{eqnarray} dy &=& \frac{\partial y}{\partial s}ds+\frac{\partial y}{\partial t}dt\\ \nonumber & =& \left(-\frac{1}{4}\frac{dp/ds}{p^{5/4}}\Phi_{y}+\frac{d\Phi_{y}/ds }{p^{1/4}}\right) ds + \left (-\frac{1}{4}\frac{dp/dt}{p^{5/4}}\Phi_{y}+\frac{d\Phi_{y}/dt}{p^{1/4}}\right) dt, \\ \nonumber dz &=& \frac{\partial z}{\partial s}ds+\frac{\partial z}{\partial t}dt \\ \nonumber &=&\left(-\frac{1}{4}\frac{dp/ds}{p^{5/4}}\Phi_{z}+\frac{d\Phi_{z}/ds }{p^{1/4}}\right) ds + \left (-\frac{1}{4}\frac{dp/dt}{p^{5/4}}\Phi_{z}+\frac{d\Phi_{z}/dt}{p^{1/4}}\right) dt.
\\ \end{eqnarray} Therefore, \begin{eqnarray*} dy \wedge dz& =& \left(-\frac{1}{4}\frac{dp/ds}{p^{5/4}}\Phi_{y}+\frac{d\Phi_{y}/ds }{p^{1/4}}\right) \left (-\frac{1}{4}\frac{dp/dt}{p^{5/4}}\Phi_{z}+\frac{d\Phi_{z}/dt}{p^{1/4}} \right)ds \wedge dt \\ && -\left (-\frac{1}{4}\frac{dp/dt}{p^{5/4}}\Phi_{y}+\frac{d\Phi_{y}/dt}{p^{1/4}}\right) \left(-\frac{1}{4}\frac{dp/ds}{p^{5/4}}\Phi_{z}+\frac{d\Phi_{z}/ds }{p^{1/4}} \right) ds \wedge dt. \end{eqnarray*} Similarly, \begin{eqnarray*} dz \wedge dx&=& \left(-\frac{1}{4}\frac{dp/ds}{p^{5/4}}\Phi_{z}+\frac{d\Phi_{z}/ds }{p^{1/4}}\right) \left (-\frac{1}{4}\frac{dp/dt}{p^{5/4}}\Phi_{x}+\frac{d\Phi_{x}/dt}{p^{1/4}}\right)ds \wedge dt \\ && -\left (-\frac{1}{4}\frac{dp/dt}{p^{5/4}}\Phi_{z}+\frac{d\Phi_{z}/dt}{p^{1/4}}\right) \left(-\frac{1}{4}\frac{dp/ds}{p^{5/4}}\Phi_{x}+\frac{d\Phi_{x}/ds }{p^{1/4}}\right) ds \wedge dt, \end{eqnarray*} and \begin{eqnarray*} dx \wedge dy&=&\left(-\frac{1}{4}\frac{dp/ds}{p^{5/4}}\Phi_{x}+\frac{d\Phi_{x}/ds }{p^{1/4}}\right) \left (-\frac{1}{4}\frac{dp/dt}{p^{5/4}}\Phi_{y}+\frac{d\Phi_{y}/dt}{p^{1/4}}\right)ds \wedge dt \\ && -\left (-\frac{1}{4}\frac{dp/dt}{p^{5/4}}\Phi_{x}+\frac{d\Phi_{x}/dt}{p^{1/4}}\right) \left(-\frac{1}{4}\frac{dp/ds}{p^{5/4}}\Phi_{y}+\frac{d\Phi_{y}/ds }{p^{1/4}}\right) ds \wedge dt. \end{eqnarray*} Using these, we can calculate the volume of the region enclosed by the constant surface of the determinant polynomial. Let $T_{1}$ and $T_{2}$ be two $ n \times n \times 3 $ tensors, let $\Omega_{i}$ denote the region $\{\bm{x}\,|\,f_{T_{i}}(\bm{x}) \leq 1\}$ enclosed by the surface $\{\bm{x}\,|\, f_{T_{i}}(\bm{x}) = 1\}$, and write $V_{i}=V(\Omega_{i})$. Then, by the $SL(3)$-invariance of volumes, we have \begin{thm} \label{SLequivalentthm} If $V_{1} \neq V_{2},$ then $f_{T_{2}}(\bm{x}) \neq f_{T_{1}}(\bm{x}R)$ for any $R \in SL(3)$; namely, $T_{1}$ and $T_{2}$ are not $SL(n) \times SL(n) \times SL(3)$ equivalent. \end{thm} For $GL$ invariance, the next lemma is helpful.
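The $SL(3)$-invariance of the volume underlying Theorem \ref{SLequivalentthm} can also be illustrated numerically. The following Python sketch (a Monte Carlo stand-in for the surface-integral computation above, which was carried out in Mathematica; the quartic $f$ below is an assumed toy example in place of an actual determinant polynomial) checks that the volume of $\{\bm{x}\,|\,f(\bm{x})\leq 1\}$ is unchanged when $f(\bm{x})$ is replaced by $f(\bm{x}R)$ for a shear $R \in SL(3)$:

```python
import numpy as np

def f(x):
    # A positive quartic standing in for a determinant polynomial f_T;
    # {f <= 1} is the unit ball, of volume 4*pi/3.
    return (x[..., 0]**2 + x[..., 1]**2 + x[..., 2]**2)**2

def mc_volume(g, box=2.0, n=400_000, seed=0):
    """Monte Carlo volume of {x in R^3 : g(x) <= 1} inside [-box, box]^3."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-box, box, size=(n, 3))
    return (2 * box)**3 * np.mean(g(pts) <= 1.0)

R = np.array([[1.0, 0.7, 0.0],   # a shear: det R = 1, so R lies in SL(3)
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])

V1 = mc_volume(f)                      # volume of {f(x) <= 1}
V2 = mc_volume(lambda x: f(x @ R))     # volume of {f(xR) <= 1}
print(V1, V2, 4 * np.pi / 3)           # all three agree up to Monte Carlo error
```

Up to the Monte Carlo error, both estimates recover $4\pi/3$, as the $SL(3)$-invariance predicts.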
\begin{lemma}\label{dvolume} For a determinant polynomial $f(\bm{x})$, let $V(c)$ be the volume of $\Omega_{c}=\{\bm{x}\,|\,cf(\bm{x})\leq 1\}$. Then $V(c)=c^{-3/4}V(1)$ in the $4 \times 4 \times 3$ case. \end{lemma} \begin{proof} Replacing the polynomial $f(\bm{x})$ by its constant multiple $cf(\bm{x})$ changes the coordinates $\bm{x}(s,t)$ on the constant surface to $(\frac{1}{c})^{1/4}\bm{x}(s,t)$. Hence, the integral \begin{equation} \frac{1}{3}\int_{ \partial \Omega} x\,dy \wedge dz+ y\,dz \wedge dx + z\,dx \wedge dy \end{equation} is multiplied by $c^{-3/4}$. This proves the assertion of Lemma \ref{dvolume}. \end{proof} \begin{thm}\label{volume} Assume that $T_{1}$ and $T_{2}$ are $SL(3)$ equivalent, so that there is a relation between their determinant polynomials, \begin{equation} \label{SLequivalent} f_{T_{2}}(\bm{x})=cf_{T_{1}}(\bm{x}R), \end{equation} where $c \in \mathbb{R}$ and $R \in SL(3)$. Let $V_{1}$ and $V_{2}$ denote the volumes of $\Omega_{1}=\{\bm{x}\,|\,f_{T_{1}}(\bm{x})\leq 1\}$ and $\Omega_{2}=\{\bm{x}\,|\,f_{T_{2}}(\bm{x})\leq 1\}$, respectively. Then it holds that \begin{equation} c=(V_{1}/V_{2})^{4/3}. \end{equation} \end{thm} \begin{proof} The claim follows immediately from Lemma \ref{dvolume} and the $SL(3)$-invariance of volume, so the proof is omitted. \end{proof} By Theorem \ref{volume}, we can recover the constant $c$ in equation (\ref{SLequivalent}). In the next section, it will be made clear that this expression is helpful for establishing $GL(3)$-equivalence. \subsection{Affine surface area as an $SL(3)$-invariant} In this section, for testing $SL(3)$-equivalence, we propose to use the affine surface area, which is an $SL(3)$-invariant by Proposition \ref{invariant}. When $p=3$, the affine surface area has the following integral expression.
\begin{definition} For a smooth convex body $K \subset \mathbb{R}^{3}$, the affine surface area is given by \begin{equation} a(K)=\int_{\partial K}\kappa(K,\bm{x})^{\frac{1}{4}}\sqrt{EG-F^2}\,dsdt, \end{equation} where $\kappa(K,\bm{x})$ is the Gaussian curvature and $E$, $F$ and $G$ denote the first fundamental coefficients. \end{definition} Next, we show that the affine surface area is useful even as a tester of $GL(3)$-equivalence. Assume that, by Theorem \ref{volume}, we know the constant $c$ in the relation $f_{T_{2}}(\bm{x})=cf_{T_{1}}(\bm{x}R)$ with $c \in \mathbb{R}^{+}$ and $R \in SL(3)$. Then, up to an $SL(3)$ transformation, which leaves both the volume and the affine surface area unchanged, \begin{eqnarray} \Omega_{2}&=&\{\bm{x}\,|\, f_{T_{2}}(\bm{x}) \leq 1 \}\\ \nonumber &=& \{ \bm{x}\,|\,c f_{T_{1}}(\bm{x}) \leq 1 \}\\ \nonumber &=& \{ \bm{x}\,|\,f_{T_{1}}(c^{1/4}\bm{x}) \leq 1 \}\\ \nonumber &=& c^{-1/4} \{\bm{x}\,|\, f_{T_{1}}(\bm{x}) \leq 1 \}\\ \nonumber &=& c^{-1/4} \Omega_{1}. \end{eqnarray} From Proposition \ref{homogenous}, we have \begin{equation} a(\Omega_{2})=c^{-3/8} a(\Omega_{1}). \end{equation} Thus, we have the following. \begin{thm} \label{SLGL1} Let $T_{1}$ and $T_{2}$ be absolutely nonsingular tensors. Noting Remark \ref{positivec}, by Theorem \ref{volume} we can obtain an estimate of $c \in \mathbb{R}^{+}$ under the assumption that their determinant polynomials satisfy the relation $f_{T_{2}}(\bm{x})=cf_{T_{1}}(\bm{x}R)$ for some unknown constant $c \in \mathbb{R}^{+}$ and an unknown matrix $R \in SL(3)$. Then, if $a(\Omega_{2}) \neq c^{-3/8} a( \Omega_{1})$, $T_{1}$ and $T_{2}$ are not $GL(3)$ equivalent. \end{thm} By Theorem \ref{volume}, this is rephrased as follows. \begin{thm}\label{SLGL2} Let $T_{1}$ and $T_{2}$ be absolutely nonsingular tensors. Then, if \begin{equation} a(\Omega_{2}) \neq \left( \frac{V(\Omega_{2})}{V(\Omega_{1})} \right) ^{1/2} a(\Omega_{1}), \end{equation} $T_{1}$ and $T_{2}$ are not $GL(3)$-equivalent, where $V(\Omega_{1})$ and $V(\Omega_{2})$ denote the volumes of $\Omega_{1}$ and $\Omega_{2}$, respectively.
\end{thm} \section{Integral $GL(3)$-invariant} In the latter half of the previous section, we presented a procedure to test non-$GL(3)$-equivalence; however, it is somewhat indirect because we need to estimate the constant $c$ before starting the procedure. In this section, we consider a direct method for handling non-equivalence by using a generalized affine surface area. That is, we consider the $L_{q}$ affine surface area, which is an extension of the affine surface area developed by Lutwak \cite{Lut}. Hug \cite{Hug} gave an equivalent definition, which is the one we use. \begin{definition} \begin{equation} L_{q}a(K)= \int_{\partial K} \kappa_{0}(K,\bm{x})^{ \frac{q}{p+q}} d\sigma_{K}(\bm{x}), \end{equation} where \begin{equation} \kappa_{0}(K,\bm{x})=\frac{\kappa(K,\bm{x})}{ \langle \bm{x},\bm{n}(K,\bm{x})\rangle^{p+1}}, \end{equation} $d\sigma_{K}(\bm{x})$ is the cone measure defined by \begin{equation} d\sigma_{K}(\bm{x})=\langle \bm{x},\bm{n}(K,\bm{x})\rangle d\bm{x}, \end{equation} and $\bm{n}(K,\bm{x})$ denotes the outer normal at $\bm{x}$ on $\partial K$. \end{definition} When $q=1$, $L_{q}a(K)$ becomes the affine surface area $a(K)$, and when $q=p$, it becomes the classical centro-affine surface area $a_{c}(K)$, defined as \begin{equation} a_{c}(K)=\int _{\partial K}\kappa_{0}(K,\bm{x})^{1/2}d\sigma_{K}(\bm{x}), \end{equation} which is known to be $GL(p)$-invariant. The characterization of general $GL(p)$-invariant functionals is given below. \begin{thm}({\rm Ludwig and Reitzner \cite{LudwigReit}}) Let $\mathcal{K}_{0}$ be the space of convex bodies that contain the origin in their interiors. An upper semi-continuous functional $t(\cdot)$ from $\mathcal{K}_{0}$ to $\mathbb{R}$ is $GL(p)$-invariant if and only if there are nonnegative constants $c_{0}$ and $c_{1}$ such that \begin{equation} t(K)=c_{0}V_{0}(K)+c_{1}a_{c}(K).
\end{equation} \end{thm} \section{Spherical design} According to our experiments, the numerical integrations of the invariants must be accurate to at least two decimal places, so the calculations of the invariants are somewhat heavy. In this section, we consider the $t$-design method as a substitute for these numerical integrations. The spherical design was initiated by Delsarte et al. \cite{DGS} and has been studied by several researchers; see, for example, Bannai and Bannai \cite{BB}. It is defined as follows. \subsection{An overview of spherical design} \begin{definition} A finite set $X$ on the sphere is called a spherical $t$-design if, for any polynomial $f(x,y,z)$ of degree less than or equal to $t$, \begin{eqnarray}\label{tdesignintegral} \frac{1}{|S^{2}|}\int_{S^{2}}f(x,y,z)d\sigma=\frac{1}{|X|}\sum_{(x,y,z) \in X} f(x,y,z), \end{eqnarray} where $S^{2}$ denotes the unit sphere of $\mathbb{R}^{3}$, $d\sigma$ denotes the surface element of the sphere, and $|S^{2}|$ denotes the surface area of the sphere. \end{definition} A parametrized form of equation (\ref{tdesignintegral}) is given by \begin{equation} \int f(s,t) \sin(s)\, dsdt =\frac{4\pi}{N} \sum_{i=1}^{N}f(s_{i},t_{i}), \end{equation} where $(s_{i},t_{i})$, $i=1,2,\ldots,N$, are the parameters corresponding to the design points in $X$. One obstacle for our purpose is that we need to integrate some nonlinear functions that are not polynomials, and hence we cannot use a $t$-design directly. However, we can rely on the next theorem to overcome this point. \begin{thm} Let $f(x,y,z)$ be a continuous function on the unit sphere and let $\epsilon_{t}$ be a positive number such that $$|f(x,y,z)-p(x,y,z)|<\epsilon_{t}$$ uniformly for some polynomial $p(x,y,z)$ of degree less than or equal to $t$.
Then, for a spherical $t$-design $\{x_{i}\}_{i=1}^{N}$, it holds that \begin{equation} \left|\int_{S^{2}} f(x,y,z)\, dS - \frac{4\pi}{N}\sum_{i=1}^{N}f(x_{i})\right|<8\pi \epsilon_{t}. \end{equation} \end{thm} \begin{proof} \begin{eqnarray} &&\left| \int_{S^{2}} f(x,y,z)\, dS - \frac{4\pi}{N}\sum_{i=1}^{N}f(x_{i}) \right| \\ \nonumber &\leq& \left| \int_{S^{2}} f(x,y,z)\, dS - \frac{4\pi}{N}\sum_{i=1}^{N}p(x_{i})\right|+\frac{4\pi}{N}\left| \sum_{i=1}^{N}\bigl(p(x_{i})- f(x_{i})\bigr) \right|\\ \nonumber &=&\left| \int_{S^{2}} f(x,y,z)\, dS - \int_{S^{2}} p(x,y,z)\,dS \right|+\frac{4\pi}{N} \sum_{i=1}^{N}\left|p(x_{i})- f(x_{i})\right| \\ \nonumber &\leq & \int_{S^{2}} \epsilon_{t}\, dS+ 4\pi \epsilon_{t}\\ \nonumber &=& 8 \pi \epsilon_{t}. \end{eqnarray} \end{proof} \begin{remark} By the above theorem, we do not need to know the best approximating polynomial explicitly in order to obtain an approximate value of the integral; it is enough to evaluate $f(x,y,z)$ itself at the design points. Moreover, the error of the approximation is bounded from above by $8\pi\epsilon_{t}$. For a quantitative evaluation of the approximation we would need to know $\epsilon_{t}$; this problem is interesting, but it is a somewhat heavy task at present, and so it is postponed to future work. \end{remark} \subsection{Calculation of integral invariants by a 20-design} Using the result of the previous subsection, we consider the integral \begin{equation} a(K)=\int \kappa(s,t)^{1/4}\sqrt{EG-F^2}\,dsdt, \end{equation} where $(s,t)$ ranges over $0<s<\pi$, $0<t<2\pi$. This can be regarded as an integral over the unit sphere: \begin{eqnarray} a(K)&=&\int_{(s,t) \in [0,\pi]\times[0,2\pi]} \kappa(s,t)^{1/4}\sqrt{EG-F^2}\,dsdt\\ \nonumber &=&\int_{(s,t) \in [0,\pi]\times[0,2\pi]} \kappa(s,t)^{1/4}\frac{\sqrt{EG-F^2}}{\sin s}\sin s\, dsdt\\ \nonumber &=&\int_{S^{2}} \kappa(s,t)^{1/4}\frac{\sqrt{EG-F^2}}{\sin s}\,dS, \end{eqnarray} where $dS=\sin s\, ds\, dt$ is the surface element of the unit sphere.
Hence, \begin{equation} p(x,y,z)=\kappa(s,t)^{1/4}\frac{\sqrt{EG-F^2}}{\sin s} \end{equation} can be regarded as a function on the unit sphere, and so the integral invariant can be approximated by the right-hand side of the equation below: \begin{equation} \label{approximation} \int_{S^{2}} \kappa(s,t)^{1/4}\frac{\sqrt{EG-F^2}}{\sin s}\,dS \sim \frac{4\pi}{N}\sum_{i=1}^{N}p(x_{i},y_{i},z_{i}). \end{equation} The values of the invariants calculated by the lattice method and by the 20-design method will be given in the next section. The 20-design method shows very good approximations in some cases but not in others; that is, for our integrals of invariants, the spherical design method unfortunately does not give stable values. This might suggest that we need to use designs of degree higher than 20. \section{Effectiveness of the invariants as testers of non-equivalence} In this section, we show the effectiveness of the numerical values of the invariants as testers of non-equivalence. We numerically calculated the volume $V(\Omega)$, the affine surface area $a(\Omega)$, and the centro-affine surface area $a_{c}(\Omega)$ of the region $\Omega=\{\bm{x}\,|\,f_{T}(\bm{x})\leq 1\}$ defined by the determinant polynomial $f_{T}(\bm{x})$. As examples, we calculate them for the 16 tensors in $\mathcal{K}_{0}$ whose constant surfaces are shown in Figures 2 and 3 in Section 5. The numerical calculations are performed in two ways, by the lattice method and by the $t$-design method, and the results are compared. As the $t$-design, we use the 20-design named des.3.216.20 in \cite{Slone}, which has 216 points. In the tables below, M1-P2-G5, M6-P2-G5, M1-P2-G7 and 20-design denote the globally adaptive integration with accuracy of 5 digits, pseudo-Monte Carlo integration, the globally adaptive integration with accuracy of 7 digits using 64-digit arithmetic, and the 20-design method using IEEE 754 arithmetic, respectively.
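The design-style quadrature of equation (\ref{approximation}) can be sketched in Python as follows; the uniformly random points below are an assumed stand-in for the 216-point 20-design des.3.216.20, whose coordinates are not reproduced here. The first check integrates a polynomial with a known integral; the second evaluates the affine surface area of a round sphere of radius $r$, for which $\kappa=1/r^{2}$ and $\sqrt{EG-F^2}=r^{2}\sin s$, so the integrand $\kappa^{1/4}\sqrt{EG-F^2}/\sin s$ is the constant $r^{3/2}$ and $a(K)=4\pi r^{3/2}$, in accordance with the degree-$3/2$ homogeneity of Proposition \ref{homogenous}:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
# Uniform points on the unit sphere (assumed stand-in for a spherical design).
pts = rng.normal(size=(N, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

def sphere_integral(g):
    # Approximate the integral of g over the unit sphere S^2
    # by (4*pi/N) * sum g(x_i), as in the approximation formula.
    return 4 * np.pi * np.mean(g(pts))

# Check on a polynomial with known integral: the integral of z^4 over S^2 is 4*pi/5.
I = sphere_integral(lambda x: x[:, 2]**4)

# Affine surface area of a round sphere of radius r: the integrand is the
# constant r**1.5, so the quadrature returns 4*pi*r**1.5 exactly.
r = 2.0
a = sphere_integral(lambda x: np.full(len(x), r**1.5))
print(I, 4 * np.pi / 5, a, 4 * np.pi * r**1.5)
```

The polynomial check is exact for a genuine 20-design; with the random stand-in it agrees only up to Monte Carlo error.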
All calculations were done in Mathematica. \begin{table}[!htbp]\label{SLinvariant1} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Tensor & V0 & V1 & V2 & V3 \\ \hline \hline T001 & 2.9197794095194 & 2.9197794099529 & 2.9197794089308 & 2.9197794061274 \\ \hline T019 & 4.0314824331814 & 4.0314824340674 & 4.0314824332515 & 4.0314824319603 \\ \hline T022 & 3.6306602017309 & 3.6306602004447 &3.6306602054741 &3.6306602016552 \\ \hline T023 & 3.4355628950802 &3.4355628819358 &3.4355628878857& 3.4355628897838 \\ \hline T042 & 3.7515624235646 & 3.7515624142272 & 3.7515624197586&3.7515624152774 \\ \hline T060 & 2.1440485535226 & 2.1440485507771& 2.1440485551454&2.1440485550215 \\ \hline T061 & 2.8594583429857 & 2.8594583441125 &2.8594583445567 & 2.8594583445567 \\ \hline T065 & 3.1084258968340 & 3.1084258946417 &3.1084258957994&3.1084258984271 \\ \hline T072 & 4.6861403575076 &4.6861403597489 & 4.6861403542060 &4.6861403560079\\ \hline T074 & 3.6302252513670 &3.6302253269919&3.63022533280632 & 3.6302253350968 \\ \hline \end{tabular} \end{center} \caption{Volumes by M1-P2-G7: $Tn$, where $n=001, 019, 022, 023, 042, 060, 061, 065, 072$ and $074$. For each tensor, the column V0 lists the value for the original tensor and the columns V$i$, $i=1,2,3$, list the values for the transforms of $Tn$ by randomly chosen matrices of $SL(3)$. } \end{table} \begin{table}[!htbp]\label{SLinvariant12} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Tensor & M1-P2-G5 & M6-P2-G5 & M1-P2-G7 & 20-design \\ \hline \hline T001-0 & 9.961493457 & 9.962796404 & 9.961471493 & 9.90317 \\ \hline T001-1 & 9.961470358 & 9.961249135 & 9.961471489 & 8.73023 \\ \hline T001-2 & 9.961471133 & 9.971266750 & 9.961471486 & 9.96328 \\ \hline T001-3 & 9.961470327 & 9.959509709 & 9.961471474 & 9.79057 \\ \hline T001-4 & 9.961471186 & 9.979456186 & 9.961471478 & 10.88367 \\ \hline T001-5 & 9.961474220 & 9.997180989 & 9.961471490 & 10.99278 \\ \hline \hline T019-0 &11.560007113 & 11.560055546 & 11.560007991 & 11.87277\\ \hline T019-1 &11.560007742 & 11.552017302 &11.560007993 & 11.69239\\ \hline T019-2 &11.560008558 &11.559866424 & 11.560007993 & 11.51344\\ \hline T019-3 &11.560007692 &11.501609971 & 11.560007989 &13.36769\\ \hline T019-4 &11.560008494 & 11.545260017 & 11.560007991 & 10.40203\\ \hline T019-5 &11.560001924 & 11.558176522 &11.560007996 & 11.49393\\ \hline \hline T022-0 &11.020675684& 11.024551464 & 11.020674135& 11.05202\\ \hline T022-1 &11.020673831 & 11.016947424 & 11.020674138 &11.45345\\ \hline T022-2& 11.020676195 & 11.027525350 & 11.020674147 & 11.54386\\ \hline T022-3 &11.020673214 & 11.016006939& 11.020674140 & 11.07524\\ \hline T022-4 &11.020675399 & 11.031596952& 11.020674133 & 11.37096\\ \hline T022-5 &11.020674431 & 11.022431931 & 11.020674135 & 10.74089\\ \hline \hline T023-0 &10.771760482 & 10.773095422 & 10.771760351 & 10.73293\\ \hline T023-1 &10.771758801 &10.774759865 &10.771760349 & 9.26881\\ \hline T023-2 &10.771759301 &10.725843291 &10.771760352 & 10.94587\\ \hline T023-3 &10.771759730 & 10.766135806 & 10.771760352 & 13.23848\\ \hline T023-4 &10.771757516 & 10.773059224 & 10.771760350 & 10.78732\\ \hline T023-5 &10.771759533 & 10.773749461 &10.771760351 &10.88149\\ \hline \hline T042-0 &11.136697741 & 11.136755128 & 11.136697332 & 10.99424\\ \hline T042-1 &11.136695637 & 11.140565725 &
11.136697314 & 11.28015\\ \hline T042-2 &11.136699257 & 11.140835239 & 11.136697323 & 11.89052\\ \hline T042-3 &11.136721676 & 11.203272583 &11.136697308 &12.13058\\ \hline T042-4 &11.136696147 & 11.106329119 &11.136697270 & 9.81007\\ \hline T042-5 &11.136697731 & 11.107102150 & 11.136697313 & 13.99432\\ \hline \hline \end{tabular} \end{center} \caption{Affine surface area:$Tn$, where $n=001, 019, 022, 023$ and $042$ Each line denoted as $T$n-0 lists the value of the original tensor and the lines $T$n-i,$i=1,2,...,5$ list the values for the transformed tensors of Tn by a randomly chosen matrix of $SL(3)$. } \end{table} \begin{table}[!htbp]\label{SLinvariant2} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Tensor & M1-P2-G5 & M6-P2-G5 & M1-P2-G7 & 20-design \\ \hline \hline T060-0 & 8.704587985 & 8.705126156 & 8.704588101 & 8.74300\\ \hline T060-1 & 8.704590255 & 8.781085267 & 8.704588109 & 8.50058\\ \hline T060-2 & 8.704596276 & 8.705498658 & 8.704588101 & 8.73210\\ \hline T060-3 & 8.704588380 & 8.711001740 & 8.704588104 & 8.80910\\ \hline T060-4 & 8.704586669 & 8.703029024 & 8.704587984 & 8.85973\\ \hline T060-5 & 8.704588143 & 8.705901809 & 8.704588100 & 8.56701\\ \hline \hline T061-0 & 9.759043314 & 9.759635865 & 9.759045706 & 9.741275\\ \hline T061-1 & 9.759036154 & 9.759076500 & 9.759045704 & 9.72403\\ \hline T061-2& 9.759050068 & 9.748685352 & 9.759045707& 9.56041\\ \hline T061-3 & 9.759044653 & 9.734392909 & 9.759045710 & 9.76040\\ \hline T061-4 & 9.759058206 & 9.745677922 & 9.759045694 & 10.37056\\ \hline T061-5& 9.759046974 & 9.755984062 &9.759045716 & 8.72501\\ \hline \hline T065-0 &10.273389075 & 10.274251947 & 10.273389369 & 10.33927\\ \hline T065-1 &10.273387633 &10.260497042 & 10.273389360 & 10.76367\\ \hline T065-2 &10.273389342 & 10.277789370 & 10.273389368 & 10.26620\\ \hline T065-3 &10.273388029& 10.249526640& 10.273389366& 10.42661\\ \hline T065-4& 10.273389939 & 10.276245030 & 10.273389370 & 10.06295\\ \hline T065-5& 10.273397527 & 
10.279052599 &10.273389365 & 10.33636\\ \hline \hline T072-0 &12.483701912& 12.483843205 & 12.483691274& 12.67586\\ \hline T072-1 &12.483689616 & 12.484388161 & 12.483691282 & 12.38965\\ \hline T072-2& 12.483690234 & 12.481034408& 12.483691282 & 10.30731\\ \hline T072-3 &12.483690116 & 12.498107747 & 12.483691264 & 11.96858\\ \hline T072-4 &12.483698348 & 12.435166726 & 12.483691276 & 11.219584\\ \hline T072-5 &12.483686195 & 12.508162438& 12.483691276 & 10.183837\\ \hline \hline T074-0& 10.732327625& 10.732623087 & 10.732332110 & 10.80078\\ \hline T074-1 &10.732332283 & 10.726775221 & 10.732332117 &10.55820\\ \hline T074-2 &10.732337739 & 10.734008328& 10.732332113& 11.20578\\ \hline T074-3& 10.732332889 & 10.724453313 & 10.732332112 &10.87848\\ \hline T074-4& 10.732331733 & 10.729148909 & 10.732332214 & 10.95073\\ \hline T074-5 &10.732329467 & 10.727707310 & 10.732332111 & 10.57017\\ \hline \hline \end{tabular} \end{center} \caption{Affine surface area: $Tn$, where $n=060, 061, 065, 072$ and $074$. Each line $Tn$-0 lists the value for the original tensor, and the lines $Tn$-$i$, $i=1,2,\ldots,5$, list the values for the transforms of $Tn$ by a randomly chosen matrix of $SL(3)$. } \end{table} Table 1 shows that the $SL$-invariance of the volumes of the regions enclosed by the constant surfaces is clearly seen numerically for every chosen absolutely nonsingular tensor. Tables 2 and 3, for the affine surface area, show that the affine surface area is $SL$-invariant and that none of the tensors considered are mutually $SL(4) \times SL(4) \times SL(3)$ equivalent. Combining this with the volume data, by Theorem \ref{SLGL2} we also conclude that they are not $GL$ equivalent. This last fact is also derived by a direct use of the centro-affine surface area data shown in Tables 4 and 5.
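The tester of Theorem \ref{SLGL2} applied to the tabulated values can be sketched as follows. The Python snippet (an assumed re-implementation; the numbers are copied from the V0 column of Table 1 and the M1-P2-G7 column of Table 2) flags T001 and T019 as non-$GL(3)$-equivalent:

```python
import math

def not_gl_equivalent(V1, a1, V2, a2, rtol=1e-3):
    # Theorem SLGL2: if a(Omega_2) != sqrt(V(Omega_2)/V(Omega_1)) * a(Omega_1),
    # then T1 and T2 are not GL(3)-equivalent. This is a necessary condition
    # only, so False means "not ruled out", not "equivalent".
    predicted = math.sqrt(V2 / V1) * a1
    return not math.isclose(a2, predicted, rel_tol=rtol)

# Volumes (Table 1, column V0) and affine surface areas (Table 2, M1-P2-G7)
# for the original tensors T001 and T019.
V_T001, a_T001 = 2.9197794095194, 9.961471493
V_T019, a_T019 = 4.0314824331814, 11.560007991

print(not_gl_equivalent(V_T001, a_T001, V_T019, a_T019))  # True: ruled out
print(not_gl_equivalent(V_T001, a_T001, V_T001, a_T001))  # False: not ruled out
```

Here $\sqrt{V_{T019}/V_{T001}}\,a(\Omega_{T001}) \approx 11.705$, which differs from $a(\Omega_{T019}) \approx 11.560$ by about $1.2\%$, well beyond the numerical accuracy of the tabulated values.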
\begin{table}[!htbp]\label{GLinvariant1} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Tensor & M1-P2-G7 \ & M6-P2-G5 & M1-P2-G5 & 20-design \\ \hline \hline T001-0 & 11.690150892617500 & 11.687899476332365 & 11.687898336789288 & 11.59968 \\ \hline T001-1 & 11.751922920525157 & 11.687898955213611 & 11.687898343722015 & 8.421025\\ \hline T001-2 &11.689694693901319 & 11.687898370365255 &11.687898355824357 & 11.68469 \\ \hline T001-3 &11.721355953709315 &11.687894829195568 &11.687898343333765 & 11.29880 \\ \hline T001-4 &11.692877418652227 & 11.687897242831659 &11.687897875631138 & 10.59083 \\ \hline T001-5 &11.679997753276740 & 11.687897167430656 &11.687898359334835& 11.46900 \\ \hline \hline T019-0 &11.509733354093680 & 11.509334536488204 &11.509333804897551 & 11.81248 \\ \hline T019-1 &11.472821548199051 &11.509332873485376 &11.509333807975230& 12.65290 \\ \hline T019-2 &11.509963231209824 & 11.509337447098934 &11.509333799194381&11.32764 \\ \hline T019-3 &11.552527050941017 & 11.509334159193976 &11.509333800434498& 11.38444 \\ \hline T019-4 &11.495864684062547 & 11.509335287391230 &11.509333801132503 & 11.37034 \\ \hline T019-5 &11.522546264759133 & 11.509333512497239 & 11.509333798526679 & 22.50564 \\ \hline \hline T022-0 & 11.574282949497377 & 11.568790730790213 & 11.568790156808308&11.59771 \\ \hline T022-1 & 11.570887655990263 &11.568785645774059 & 11.568790144452251& 11.63356 \\ \hline T022-2 & 11.568787229271756 & 11.568788230429048 &11.568790347476747 & 11.90185 \\ \hline T022-3 & 11.567926875696718 & 11.568789765657722 & 11.568790134358534 &11.57677 \\ \hline T022-4 & 11.597381830882211 & 11.568789372237021 & 11.568790132199215 & 13.39389 \\ \hline T022-5 &11.561310960443544 &11.568790290463560 & 11.568790451975676 &11.44911 \\ \hline \hline T023-0 & 11.631078897689606 & 11.626439742081966 & 11.626439153758515 & 11.57877\\ \hline T023-1 &11.619997976153462 & 11.626439518693340 &11.626439146934238 &11.39835\\ \hline T023-2 &11.611266008293132 
&11.626440231360507 & 11.626439154231153 &11.43501 \\ \hline T023-3 &11.647477652583963 &11.626433368429471 & 11.626439151521914 &13.20628 \\ \hline T023-4 & 11.607565993095791 & 11.626439544654837 & 11.626439154062155 &10.72795 \\ \hline T023-5 & 11.620421522261536 & 11.626439658214246 &11.626439152653352 &11.74869 \\ \hline \hline T042-0 &11.502624357421948 & 11.504755263366923 & 11.504752079092657& 11.30545 \\ \hline T042-1 & 11.507105268508006 & 11.504753311150279 & 11.504752086650519& 11.07124 \\ \hline T042-2 &11.519424921951189 &11.504753899401661 &11.504752079220612& 9.41924 \\ \hline T042-3 &11.501095106227783 &11.504764044950799 &11.504752085646500& 12.16150 \\ \hline T042-4 & 11.530791206130419 & 11.504752412140897 & 11.504752073761618& 10.11515\\ \hline T042-5 & 11.499503647742464 &11.504752956532382 &11.504752076792938 &10.64605 \\ \hline \hline \end{tabular} \end{center} \caption{Centro-affine surface area:$Tn$, where $n=001, 019, 022, 023$ and $042$ Each line denoted as $T$n-0 lists the value of the original tensor and the lines $T$n-i,$i=1,2,...,5$ list the values for the transformed tensors of Tn by a randomly chosen matrix of $GL(3)$. 
} \end{table} \begin{table}[!htbp]\label{GLinvariant2} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Tensor & M6-P2-G5 & M1-P2-G5 & M1-P2-G7 & 20-design\\ \hline \hline T060-0 & 11.989971644000403 & 11.989476119401702 & 11.989477685702977 & 12.05017 \\ \hline T060-1 & 11.990418566344240 & 11.989479611584825 & 11.989477723348062& 12.01483 \\ \hline T060-2 & 11.976852295964006 & 11.989478648296963 &11.989477738864300& 11.80414 \\ \hline T060-3 & 11.994778478025750 & 11.989477993940143 &11.989477740049253& 11.64148\\ \hline T060-4 & 11.989039776987073 & 11.989478217916079 &11.989477724638414 & 12.09719\\ \hline T060-5 & 12.046738095770048 &11.989477160425962 & 11.989477721024015 & 12.11083 \\ \hline \hline T061-0 & 11.519673795399891 & 11.518135424201142 & 11.518135117247486 & 11.48023\\ \hline T061-1 &11.518661618867961 & 11.518134427948641 & 11.518135113069415& 11.45852 \\ \hline T061-2 & 11.519323023325257 & 11.518135433182912 &11.518135109619174&11.45517\\ \hline T061-3 & 11.517878708284445 & 11.518130456996306 & 11.518135109891727 & 11.66299\\ \hline T061-4 & 11.517606307852915 & 11.518129721219993 & 11.518135107795112&11.47192 \\ \hline T061-5 & 11.531441863084249 &11.518134741001642 & 11.518135110921667& 10.31802 \\ \hline \hline T065-0 & 11.660951650838694 & 11.660077139783777 & 11.660146606151409& 11.77330\\ \hline T065-1 & 11.657097464096733 &11.660146835636129 & 11.660146583158155& 11.64542 \\ \hline T065-2 & 11.662671583310311 & 11.660135616613492 &11.660146602669996&11.69570\\ \hline T065-3 &11.657074111599187 & 11.660148534680922 & 11.660146593520388&11.58150\\ \hline T065-4 &11.661605706427124 & 11.660146860738219 & 11.660146601870989& 12.45337 \\ \hline T065-5 & 11.668831112582931 &11.660147254608734 &11.660146596730511 &11.76209\\ \hline \hline T072-0 & 11.545589518947570 & 11.545142097769179 & 11.545141226929544 & 11.76647 \\ \hline T072-1 &11.545716622630585 & 11.545139592226001 & 11.545141221483210 & 11.52675 \\ \hline T072-2 & 
11.562200209044268 & 11.545142319259361 & 11.545141224116661& 8.50885 \\ \hline T072-3 & 11.575704218963165 & 11.545144648175810 & 11.545141226744587& 10.30769 \\ \hline T072-4 & 11.535076590703719 & 11.545140326506765 & 11.545141235723655&12.22675 \\ \hline T072-5 &11.545910129113601 &11.545141536682674 &11.545141208615009 &11.19618 \\ \hline \hline T074-0 & 11.116314213623787 & 11.116088526600165 & 11.116090556639371& 11.20632\\ \hline T074-1 & 11.109183432623630 &11.116086365050504 & 11.116090553382580 &11.09112 \\ \hline T074-2 & 11.121063605466493 & 11.116094135711593 & 11.116090554260284 & 9.58706 \\ \hline T074-3 & 11.090134159234779 & 11.116089713933696 &11.116090550551305& 10.19594 \\ \hline T074-4 & 11.135697898280837 & 11.116091869446610 & 11.116090554811293& 11.15140\\ \hline T074-5 & 11.117481923868303 & 11.116094503240262 & 11.116090554606608&11.08749 \\ \hline \hline \end{tabular} \end{center} \caption{Centro-affine surface area: $Tn$, where $n=060, 061, 065, 072$ and $074$. Each line $Tn$-0 lists the value for the original tensor, and the lines $Tn$-$i$, $i=1,2,\ldots,5$, list the values for the transforms of $Tn$ by a randomly chosen matrix of $GL(3)$. } \end{table} Tables 4 and 5 show that the centro-affine surface area is indeed $GL$-invariant, and that three-decimal accuracy is sufficient to detect non-$GL(3)$-equivalence between $4 \times 4 \times 3$ absolutely nonsingular tensors whose elements consist only of $-1$, $0$, $1$. The M1-P2-G7 method is clearly the best for discriminating the tensors with respect to $GL$ non-equivalence. \section{Conclusion} We treated the $SL(4) \times SL(4) \times SL(3)$, $GL(4) \times GL(4) \times SL(3)$ and $GL(4) \times GL(4) \times GL(3)$ non-equivalence problems for $4 \times 4 \times 3$ absolutely nonsingular tensors. We proposed a method to address these problems through the determinant polynomials.
Furthermore, we proposed to solve the problem via differential-geometric $SL(3)$- or $GL(3)$-invariants of the constant surfaces of the determinant polynomials. From the numerical analysis in Mathematica, it was shown that stable values of the invariants are obtainable numerically, and also that the affine surface area and the centro-affine surface area are useful for detecting non-equivalence. This means that an algebraic problem, namely whether a system of algebraic equations in many variables can have real solutions or not, can be resolved by differential-geometric methods. It is a nice link between algebra and differential geometry. Second, we investigated the spherical design method for calculating the invariants. At present, we think that the values given by the adaptive lattice methods are more reliable than those given by the spherical design method. In future work, we expect to extend the results to higher-dimensional tensors and to understand why the spherical design method does not give stable values of the invariants. \end{document}
\begin{document} \title{Independence testing for inhomogeneous random graphs} \begin{abstract} Testing for independence between graphs is a problem that arises naturally in social network analysis and neuroscience. In this paper, we address independence testing for inhomogeneous Erd\H{o}s-R\'{e}nyi random graphs on the same vertex set. We first formulate a notion of pairwise correlations between the edges of these graphs and derive a necessary condition for their detectability. We next show that the problem can exhibit a statistical vs. computational tradeoff, i.e., there are regimes for which the correlations are statistically detectable but may require algorithms whose running time is exponential in $n$, the number of vertices. Finally, we consider a special case of correlation testing when the graphs are sampled from a latent space model (graphon) and propose an asymptotically valid and consistent test procedure that also runs in time polynomial in $n$. \end{abstract} \section{Introduction} Detecting correlation and testing for independence between Euclidean vectors is one of the most classical and widely studied inference problems in multivariate statistics. See \citet[Chapter 11]{anderson} and \citet{kendall} for standard references when the data are low-dimensional and \citet{leung_drton,gretton_hsic,correlation_distances} for some examples of recent results in the high-dimensional setting. In contrast to the above mentioned literature, independence testing for graphs is much less studied. Part of the difficulty lies in choosing an appropriate notion of correlation for graph-valued data. In this paper, we are motivated by the problem of, given two graphs on the same vertex set, determining whether or not the presence of an edge between any pair of vertices in the first graph is {\em stochastically} independent of the presence of the corresponding edge in the second graph. 
Two graphs are thus said to be independent if all edges in the first are {\em pairwise} independent of the corresponding edges in the second, and are said to be correlated otherwise. The above notion of pairwise edge correlation, while rather simple, does appear in many real data applications. For example, two networks on different social platforms are likely to share a common subset of users whose induced sub-networks are correlated, in that two users are more likely to be linked on one social platform if they are already linked on the other. As another example, the graphs constructed from the wiring diagrams of the mushroom body for the left and right hemispheres of the {\em Drosophila} larva are highly correlated, as a pair of neurons are more likely to send signals to each other in the left hemisphere if their correspondences also send signals to each other in the right hemisphere \citep{eichler2017complete,winding_science}. Finally, two knowledge graphs constructed using documents on the same topics but written in different languages are also highly correlated \citep{priebe2009fusion,haghighi}. In this paper, we consider independence testing in the setting where the graphs are, {\em marginally}, inhomogeneous Erd\H{o}s-R\'{e}nyi random graphs while the pairwise edge correlations are described by the entries of a symmetric matrix $R$ (see Definition~\ref{def_1}). Two graphs are then independent if $R_{ij} \equiv 0$ for all $\{i,j\}$ and are correlated if $|R_{ij}| > 0$ for {\em some} $\{i,j\}$. Equivalently, we are interested in testing the null hypothesis $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against the alternative hypothesis $\mathbb{H}_{A} \colon \|R\|_{F} > 0$. Our contributions are as follows.
We first show that there exists a procedure for deciding between $\mathbb{H}_0$ and $\mathbb{H}_A$ with asymptotically {\em vanishing} type-I and type-II errors only if $\|R\|_{F} \rightarrow \infty$ as $n \rightarrow \infty$ (see Section \ref{sec:statistical_limit}). We also describe two related examples where the condition $\|R\|_{F} \rightarrow \infty$ is sufficient for one example but not sufficient for the other. We next show that the problem exhibits a statistical vs. computational tradeoff, i.e., there are regimes with $\|R\|_{F} \rightarrow \infty$ that are statistically detectable but may require running time which scales exponentially with $n$. We achieve this through a polynomial time reduction of the planted clique problem, which is well-known to be computationally hard \citep{alon1998finding,barak2019nearly}, to our correlation testing problem (see Theorem~\ref{thm:gap}). Finally, we consider a special case of our correlation testing problem in which the graphs are sampled from the graphon or latent space model \citep{hoff2002latent,lovasz12:_large,bollobas2007phase} and propose an asymptotically valid and consistent test procedure that also runs in time polynomial in $n$ (see Section~\ref{graphon Model}). We evaluate the performance of the proposed test procedures through simulations and show that they exhibit power even for moderate values of $n$ and small values of $n^{-1}\|R\|_{F}$. We then apply these procedures to two real data experiments. In the first experiment, we analyze the electrical and chemical connectomes for the {\em C. elegans} worm and show that there are significant correlations between the two connectomes; this confirms the observation made in \cite{chen2016joint} wherein the authors showed that, for their vertex nomination tasks, using both connectomes leads to better accuracy.
In the second experiment, we analyze two Wikipedia hyperlink graphs constructed using documents on the same topics but in different languages and show that, by estimating the correlations between the graphs, we obtain more accurate link prediction. \subsection{Related Works} Existing research on independence testing for a pair of graphs is quite limited. Among the current literature, only \citet{xiong2019graph} considered independence testing where the notion of correlation is similar to that described here, but the setting in \citet{xiong2019graph} is much more restrictive as they assumed that the pairwise correlations are the same for all edges, i.e., $R_{ij} \equiv c$ for some constant $c$, and furthermore that each observed graph is, {\em marginally}, distributed according to a stochastic blockmodel \citep{holland1983stochastic}. Their results are recovered as a special case of Theorem~\ref{thm:SBM} in this paper. Detecting pairwise edge correlations between two graphs is also a central focus of many graph matching algorithms. More specifically, given a pair of {\em unlabeled} adjacency matrices $A$ and $B$, graph matching aims to find a mapping between their vertex sets that best preserves their common structures, for example by minimizing the Frobenius norm error $\|\Pi A \Pi^{\top} - B\|_{F}$ over all {\em permutation} matrices $\Pi$. Recent results show that many graph matching algorithms have {\em average} running times that are polynomial in $n$, the number of vertices, whenever the graphs are sufficiently correlated; see e.g., \citep{wu2020testing,lyzinski2019matchability,lyzinski2016graph,pedarsani2011privacy,onaran2016optimal,korula2014efficient} and the references therein. In contrast, the worst case running time for these algorithms could be exponential in $n$ if the graphs are independent.
Because the correspondence between the vertex sets is assumed {\em unknown} in graph matching but is assumed known in the context of our paper, our technical challenges and results are quite different from those for graph matching. Indeed, a common assumption in graph matching (see e.g., \cite{wu2020testing,lyzinski2016graph,pedarsani2011privacy}) is that $R_{ij} \equiv c$ for some constant $c$, and furthermore that the observed graphs are, {\em marginally}, Erd\H{o}s-R\'{e}nyi with the same edge probability $p$. \section{Background and Setting} \label{sec:background} We now formally introduce the hypothesis testing problem considered in this paper. We begin by describing the notion of edge correlated inhomogeneous Erd\H{o}s-R\'{e}nyi graphs. Note that all graphs considered in this paper are simple, undirected graphs, i.e., there are no self-loops and no multiple edges. \begin{mydef}\label{def_1} Let $n \in \mathbb{N}$. Let $P\in [0,1]^{n\times n}$ and $R\in[{-1},1]^{n\times n}$ be {\em symmetric} matrices where $R$ satisfies the constraint \begin{equation} \label{eq:r_ij_bound1} R_{ij}\geq-\min\Bigl\{\frac{1-P_{ij}}{P_{ij}},\frac{P_{ij}}{1-P_{ij}}\Bigr\}, \quad \text{for all $i,j$}. \end{equation} We say that $(A,B)$ are $R$-correlated inhomogeneous Erd\H{o}s-R\'{e}nyi graphs on $n$ vertices with probability matrix $P$ and correlation matrix $R$, denoted by $(A,B)\sim R$-$\mathrm{ER}(P)$, if \begin{enumerate} \item $A$ is the adjacency matrix for an inhomogeneous Erd\H{o}s-R\'{e}nyi graph on $n$ vertices with probability matrix $P$, that is, $A$ is an $n \times n$ symmetric binary matrix whose (upper triangular) entries are independent Bernoulli random variables with success probabilities $\{P_{ij}\}_{i < j}$. \item $B$ is the adjacency matrix for another inhomogeneous Erd\H{o}s-R\'{e}nyi graph on $n$ vertices with probability matrix $P$. \item The pairs $\{(A_{ij},B_{ij})\}_{i < j}$ are {\em mutually independent} bivariate random variables.
Here $A_{ij}$ and $B_{ij}$ denote the $ij$-th entries of $A$ and $B$, respectively. \item For any $i < j$, $A_{ij}$ and $B_{ij}$ are correlated with Pearson correlation $R_{ij}$. \end{enumerate} \end{mydef} \begin{remark}\label{remark_ext} Definition \ref{def_1} assumes that $A$ and $B$ have the same marginal distribution. In Section \ref{different} we will consider a slight extension of this model where we allow $A$ and $B$ to have different marginal distributions. More specifically, we say that $(A, B)$ are generated from the $R$-ER$(P,Q)$ model if conditions 1 and 2 in Definition \ref{def_1} are replaced by the assumption that $A$ is marginally $P$ and $B$ is marginally $Q$, respectively. The remaining conditions 3 and 4 remain unchanged. Let $\phi(x) = \sqrt{x/(1-x)}$ for $x \in (0,1)$. We then note that for the correlation $R_{ij}$ to be valid in the $R$-ER$(P,Q)$ model, we require \begin{equation} \begin{split} \label{eq:r_ij_p_q_bound} -\min\Bigl\{\phi(P_{ij})\phi(Q_{ij}),\frac{1}{\phi(P_{ij})\phi(Q_{ij})}\Bigr\} \leq R_{ij} \leq\min\Bigl\{\frac{\phi(P_{ij})}{\phi(Q_{ij})},\frac{\phi(Q_{ij})}{\phi(P_{ij})}\Bigr\} \end{split} \end{equation} as a replacement for the condition $R_{ij}\geq-\min\{\tfrac{1-P_{ij}}{P_{ij}},\tfrac{P_{ij}}{1-P_{ij}}\}$ in Definition \ref{def_1}. \end{remark} Correlated inhomogeneous ER graphs are widely studied in the graph matching literature, see e.g., \citet{fishkind2019alignment,sussman2018matched,lyzinski2019matchability} and the references therein. However, as we alluded to in the introduction, for graph matching the correlation matrix $R$ is usually assumed to be a constant matrix, i.e., $R_{ij} \equiv c$ for all $\{i,j\}$. Let $\mathbb{A}_n$ denote the set of $n \times n$ hollow symmetric binary matrices.
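As an aside, the model in Definition~\ref{def_1} is straightforward to simulate one edge pair at a time. The following Python sketch is purely illustrative (the function name and the parameter values are ours; the four-outcome probabilities used below follow from conditions 1--4):

```python
import numpy as np

def sample_r_er(P, R, rng=None):
    """Sample (A, B) ~ R-ER(P): symmetric hollow adjacency matrices whose
    edge pairs (A_ij, B_ij) are independent across pairs, with the stated
    Bernoulli(P_ij) marginals and Pearson correlation R_ij."""
    rng = np.random.default_rng(rng)
    n = P.shape[0]
    A = np.zeros((n, n), dtype=int)
    B = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            p, r = P[i, j], R[i, j]
            q = p * (1 - p)
            # probabilities of the outcomes (1,1), (0,0), (1,0), (0,1)
            probs = [p**2 + q * r, (1 - p)**2 + q * r, q * (1 - r), q * (1 - r)]
            k = rng.choice(4, p=probs)
            a, b = [(1, 1), (0, 0), (1, 0), (0, 1)][k]
            A[i, j] = A[j, i] = a
            B[i, j] = B[j, i] = b
    return A, B

# empirical check of the marginal and the correlation for a single edge
P = np.full((2, 2), 0.3)
R = np.full((2, 2), 0.5)
draws = np.array([sample_r_er(P, R, rng=s) for s in range(4000)])
a, b = draws[:, 0, 0, 1], draws[:, 1, 0, 1]
```

Averaging over the repeated draws, the empirical edge frequency of `a` and `b` should be close to $P_{12} = 0.3$ and their empirical Pearson correlation close to $R_{12} = 0.5$.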
Given a pair of $R$-ER$(P)$ graphs and symmetric matrices $[a_{ij}]$, $[b_{ij}] \in \mathbb{A}_n$, the joint likelihood for the adjacency matrices $A=[A_{ij}]$ and $B=[B_{ij}]$ is \begin{gather*} \mathbb{P}(A=[a_{ij}],B=[b_{ij}]) = \prod_{i < j} \mathbb{P}(A_{ij}=a_{ij},B_{ij}=b_{ij}) \end{gather*} where $\mathbb{P}(A_{ij} = a, B_{ij} = b)$ for $1 \leq i < j \leq n$ is given by \begin{equation}\label{distribution P_n} \mathbb{P}(A_{ij}=a,B_{ij}=b)= \begin{cases} P_{ij}^2 + P_{ij}(1-P_{ij})R_{ij}, & a = b = 1, \\ (1-P_{ij})^2 + P_{ij}(1-P_{ij})R_{ij}, & a = b = 0, \\ P_{ij}(1-P_{ij})(1-R_{ij}), & a \not = b. \end{cases} \end{equation} Let $(A,B)$ be a pair of $R$-ER$(P)$ graphs with unknown correlation matrix $R$ and edge probabilities matrix $P$. We want to test the hypotheses $\mathbb{H}_0 \colon R = 0$ against $\mathbb{H}_A \colon R \not = 0$. This is equivalent to testing \begin{equation} \label{eq:test_formulation} \mathbb{H}_0:\|R\|_{\ast} =0 \quad \text{against} \quad \mathbb{H}_A:\|R\|_{\ast}>0 \end{equation} for any choice of matrix norm $\|\cdot\|_{\ast}$. We will use the Frobenius norm as it is one of the simplest and most widely used norms while also yielding useful and interesting theoretical results in the setting of the current paper; see in particular Theorem~\ref{thm:main1} below. We expect that other norms, such as the spectral or infinity norms, will lead to results that are related but distinct from those presented here and we leave this for future work. Let $T$ be any test procedure for the hypothesis in Eq.~\eqref{eq:test_formulation}. A desirable property for $T$ is consistency, that is, as the number of vertices $n$ approaches infinity, we want $T$ to have {\em both} vanishing Type-I error and vanishing Type-II error.
More specifically, along a sequence of edge probabilities matrices and correlation matrices $(P_n,R_n)$, a test procedure is consistent if its error rate converges to $0$ as $n\to\infty$, that is \begin{equation*} \lim_{n\to\infty} \Bigl(1 - \mathbb{P}(\text{reject} \,\, \mathbb{H}_0 \mid \|R_n\|_{F} > 0) + \mathbb{P}(\text{reject}\,\, \mathbb{H}_0 \mid \|R_n\|_{F} = 0)\Bigr)=0. \end{equation*} \section{Statistical Limits} \label{sec:statistical_limit} \subsection{Detectability Threshold} \label{sec:detectability} We first derive a necessary condition for the hypothesis testing problem in Eq.~\eqref{eq:test_formulation} to be statistically detectable. Our approach is based on the second moment method applied to the ratio of the likelihood when $\|R\|_{F} > 0$ to the likelihood when $\|R\|_{F} = 0$. See \citet{wu2018statistical} for an elegant survey of the second moment method and its application in deriving detectability thresholds for statistical problems with planted structures. The second moment method is motivated by the notion of contiguity between distributions. Contiguity is an asymptotic generalization of absolute continuity, characterized as follows. \begin{mydef} Let $\{\mathcal{P}_n\}_{n \geq 1}$ and $\{\mathcal{Q}_n\}_{n \geq 1}$ be two sequences of probability measures. We say that $\{\mathcal{P}_n\}$ is contiguous with respect to $\{\mathcal{Q}_n\}$ if $\mathcal{Q}_n(S_n)\rightarrow 0$ implies $\mathcal{P}_n(S_n)\rightarrow 0$ for every sequence of measurable sets $S_n$. \end{mydef} If the probability distribution under the alternative hypothesis is contiguous with respect to the distribution under the null hypothesis, then there does not exist a valid and consistent test procedure. Indeed, due to contiguity, any rejection region with vanishing type I error under the null hypothesis will necessarily have vanishing power under the alternative hypothesis. For more on the definition of contiguity and its properties, see \citet{van2000asymptotic}.
Let $\{\mathcal{P}_n\}_{n \geq 1}$ and $\{\mathcal{Q}_n\}_{n \geq 1}$ be the sequences of {\em joint distributions} for pairs of adjacency matrices $(A_n,B_n)_{n \geq 1}$ from the $R$-ER($P$) random graph model with $\|R\|_{F} > 0$ (for $\mathcal{P}_n$) and $\|R\|_{F} = 0$ (for $\mathcal{Q}_n$), respectively. We emphasize that, for simplicity of notation, we have dropped the index $n$ from the matrices $R$ and $P$. It is well known that $\{\mathcal{P}_n\}$ is contiguous with respect to $\{\mathcal{Q}_n\}$ (see e.g., Eq.~(13.3) of \cite{wu2018statistical}) if the second moment of the likelihood ratio between $\mathcal{P}_n$ and $\mathcal{Q}_n$ is bounded, i.e., that $$ \limsup_{n \rightarrow \infty} \mathbb{E}_{(A_n,B_n) \sim\mathcal{Q}_n}\Bigl[\Bigl(\frac{\mathcal{P}_n(A_n,B_n)}{\mathcal{Q}_n(A_n,B_n)}\Bigr)^2\Bigr] < \infty.$$ We then have the following result (see the appendix for a proof). \begin{mythm} \label{thm:main1} Let $\{\mathcal{P}_n\}_{n \geq 1}$ be the sequence of {\em joint distributions} for pairs of adjacency matrices $(A_n,B_n)_{n \geq 1 }$ from the $R$-$\mathrm{ER}(P)$ random graph model with $\|R\|_{F} > 0$. Similarly, let $\{\mathcal{Q}_n\}$ be the joint distribution of the $(A_n,B_{n})_{n \geq 1}$ when $\|R\|_{F} = 0$. Then \begin{equation*} \begin{split} \mathbb{E}_{(A_n,B_n) \sim\mathcal{Q}_n}\Bigl[\Bigl(\frac{\mathcal{P}_n(A_n,B_n)}{\mathcal{Q}_n(A_n,B_n)}\Bigr)^2\Bigr]&=\prod_{i<j}(1+R_{ij}^2). \end{split} \end{equation*} Therefore if $\limsup_{n \rightarrow \infty} \|R\|_{F} < \infty$ then $\mathcal{P}_n$ is contiguous with respect to $\mathcal{Q}_n$. \end{mythm} Theorem~\ref{thm:main1} shows that the condition $\limsup_{n \rightarrow \infty} \|R\|_{F} = \infty$ is necessary for the existence of a consistent test procedure for testing Eq.~\eqref{eq:test_formulation}. We now present an example of a hypothesis testing problem for which $\limsup_{n \rightarrow \infty} \|R\|_{F} = \infty$ is also sufficient.
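For small $n$ the second-moment identity in Theorem~\ref{thm:main1} can be verified by brute-force enumeration of all outcomes; a Python sketch, using the joint pmf in Eq.~\eqref{distribution P_n} with arbitrary illustrative values for the $P_{ij}$ and $R_{ij}$:

```python
import itertools
import math

def edge_pmf(a, b, p, r):
    """Joint pmf of (A_ij, B_ij) for one edge of an R-ER(P) pair."""
    q = p * (1 - p)
    if a == 1 and b == 1:
        return p**2 + q * r
    if a == 0 and b == 0:
        return (1 - p)**2 + q * r
    return q * (1 - r)

# three edges (a triangle on n = 3 vertices) with heterogeneous parameters
ps = [0.3, 0.5, 0.7]
rs = [0.4, -0.2, 0.1]

# E_Q[(P/Q)^2] = sum over all outcomes of P(outcome)^2 / Q(outcome)
second_moment = 0.0
for outcome in itertools.product([0, 1], repeat=6):
    aa, bb = outcome[:3], outcome[3:]
    lik_P = math.prod(edge_pmf(a, b, p, r) for a, b, p, r in zip(aa, bb, ps, rs))
    lik_Q = math.prod(edge_pmf(a, b, p, 0.0) for a, b, p in zip(aa, bb, ps))
    second_moment += lik_P**2 / lik_Q

closed_form = math.prod(1 + r**2 for r in rs)  # the product in Theorem 1
```

The enumerated second moment and the closed-form product $\prod_{i<j}(1+R_{ij}^2)$ agree up to floating-point error.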
Consider an $R$-ER$(P)$ model with $P_{ij} \equiv p$, i.e., the marginal distribution for the observed graphs is Erd\H{o}s-R\'{e}nyi. Suppose also that $p \geq c_0$ for some constant $c_0 > 0$ not depending on $n$. Next, suppose that $n$ is even and let $\sigma = (\sigma_1, \dots, \sigma_n)$ be such that $\sigma_{i} \in \{-1,1\}$ for all $i$ and $\sum_{i=1}^{n} \sigma_i = 0$. Given $\sigma$, the correlation matrix $R$ has entries \begin{equation} \label{eq:rn_example1} R_{ij} = \begin{cases} r, & \text{if $\sigma_i=\sigma_j$} \\ 0, & \text{otherwise}. \end{cases} \end{equation} We are interested in testing the hypotheses \begin{align*} \mathbb{H}_0 \colon r=0 \quad \text{against} \quad \mathbb{H}_A \colon r\neq0 \end{align*} and this is a special case of Eq.~\eqref{eq:test_formulation} where $$ \frac{(n^2-2n)}{4}r^2 \leq \|R\|_F^2 = \sum_{i\neq j} r^2 1_{\{\sigma_i=\sigma_j\}} \leq n^2r^2.$$ Now consider the matrix $S = A \circ B$ with elements $S_{ij} = A_{ij}B_{ij}$. The $S_{ij}$ are then {\em independent} Bernoulli random variables with \begin{equation*} S_{ij} \sim \begin{cases} \text{Bernoulli}(p^2 + rp(1-p)), & \sigma_i=\sigma_j \\ \text{Bernoulli}(p^2), & \sigma_i\neq\sigma_j. \end{cases} \end{equation*} In other words $S$ is the adjacency matrix of an Erd\H{o}s-R\'{e}nyi graph with edge probability $p^2$ under $\mathbb{H}_0$ and is the adjacency matrix of a $2$-block stochastic blockmodel graph under $\mathbb{H}_A$. Let $\lambda_1(S)$ be the largest eigenvalue of $S$. Then by \cite{furedi1981eigenvalues} and \cite{tang2018eigenvalues}, we have \begin{gather} \lambda_1(S) - np^2 \longrightarrow N(1 - p^2, 2p^2(1 - p^2)) \quad \text{under $\mathbb{H}_0$}, \\ \lambda_1(S) - np^2 - \frac{1}{2}nrp(1-p) \longrightarrow N(\eta, \gamma) \quad \text{under $\mathbb{H}_A$}. \end{gather} Here $\eta$ and $\gamma$ are bounded constants.
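To illustrate the separation between the two limits above, the following Python sketch simulates this example (the values $n = 200$, $p = 0.5$, $r = 0.4$ are illustrative; for $p = 1/2$, an edge of $B$ can be drawn to equal the corresponding edge of $A$ with probability $(1+r)/2$ within the correlated blocks):

```python
import numpy as np

def spectral_stat(A, B):
    """|lambda_1(A o B) - n * p_hat^2| with p_hat pooled from both graphs."""
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)
    p_hat = (A[iu].sum() + B[iu].sum()) / (n * (n - 1))
    lam1 = np.linalg.eigvalsh((A * B).astype(float))[-1]
    return abs(lam1 - n * p_hat**2)

def simulate_pair(n, p, r, rng):
    """(A, B) with R as in Eq. (eq:rn_example1): R_ij = r iff sigma_i = sigma_j."""
    sigma = np.repeat([1, -1], n // 2)
    same = np.equal.outer(sigma, sigma)
    A = np.triu(rng.random((n, n)) < p, 1).astype(int)
    A = A + A.T
    indep = (rng.random((n, n)) < p).astype(int)   # fresh Bern(p) draws
    agree = rng.random((n, n)) < (1 + r) / 2       # P(B_ij = A_ij); valid for p = 1/2
    Bu = np.where(same, np.where(agree, A, 1 - A), indep)
    Bu = np.triu(Bu, 1)
    return A, Bu + Bu.T

rng = np.random.default_rng(0)
n, p = 200, 0.5
A0, B0 = simulate_pair(n, p, 0.0, rng)   # null: independent graphs
A1, B1 = simulate_pair(n, p, 0.4, rng)   # alternative: within-block correlation
```

Under $\mathbb{H}_0$ the statistic concentrates near $1 - p^2$, while under $\mathbb{H}_A$ it is shifted by roughly $\tfrac{1}{2}nrp(1-p) = 10$ for these parameter values.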
Therefore $\lambda_1(S)$ yields a consistent test procedure for testing $\mathbb{H}_0 \colon r = 0$ against $\mathbb{H}_A \colon r \not = 0$ whenever $\|R\|_{F} \asymp |nr| \rightarrow \infty$. More specifically, as both $A$ and $B$ are {\em marginally} Erd\H{o}s-R\'{e}nyi graphs, first estimate $p$ via $$\hat{p} = \frac{1}{n(n-1)}\sum_{i < j} (a_{ij} + b_{ij}).$$ Next, define $T(A, B) = |\lambda_1(S) - n \hat{p}^2|$. Then $T(A, B)$ is bounded in probability under $\mathbb{H}_0$ and $T(A, B)$ diverges under $\mathbb{H}_A$ if $\|R\|_{F} \rightarrow \infty$. We summarize the above discussion in the following result. \begin{mypro} \label{pro:1} Let $(A, B)$ be a pair of $R$-correlated Erd\H{o}s-R\'{e}nyi graphs with marginal edge probability $p$ where $p > 0$ does not depend on $n$ and suppose that $R$ is of the form in Eq.~\eqref{eq:rn_example1}. Then there exists a consistent test procedure for testing $\mathbb{H}_0 \colon r = 0$ against $\mathbb{H}_A \colon r \not = 0$ if and only if $\|R\|_{F} \rightarrow \infty$ as $n \rightarrow \infty$. \end{mypro} \begin{remark} \label{rem:not_sufficient} We end this subsection with an example of a correlation matrix $R$ for which the condition $\|R\|_{F} \rightarrow \infty$ is {\em not} sufficient to guarantee the existence of a consistent test procedure. Fix a constant $\epsilon \geq 0$. Let $(A, B)$ be $R$-correlated Erd\H{o}s-R\'{e}nyi graphs with marginal edge probability $p = 0.5$ where $R$ is a symmetric matrix whose (upper triangular) entries are iid random variables with $\mathbb{P}[R_{ij} = \epsilon] = \mathbb{P}[R_{ij} = - \epsilon] = 0.5$. Given $(A, B)$, detecting correlation between $A$ and $B$ is equivalent to testing $\mathbb{H}_0 \colon \epsilon = 0$ against $\mathbb{H}_A \colon \epsilon > 0$. However, as $R$ is unknown and we only observe $(A, B)$, the probabilities of a given pair $(A,B)$ under either $\mathbb{H}_0$ or $\mathbb{H}_A$ are the same.
More specifically, under $\mathbb{H}_0$ we have \begin{equation} \mathbb{P}_{\mathbb{H}_0}(A_{ij}=a,B_{ij}=b) = \frac{1}{4}, \quad \text{for all $(a,b) \in \{(0,0),(0,1),(1,0),(1,1)\}$}. \end{equation} Meanwhile under $\mathbb{H}_A$, if we first condition on $R_{ij}$ then \begin{equation} \mathbb{P}(A_{ij}=a,B_{ij}=b|R_{ij}) = \begin{cases} \frac{1 + R_{ij} }{4}, &\text{if $(a,b) \in \{(0,0), (1,1)\}$}\\ \frac{1 - R_{ij}}{4}, &\text{if $(a,b) \in \{(1,0),(0,1)\}$} \end{cases} \end{equation} and hence, as $R_{ij}$ is unobserved, \begin{equation*} \begin{split} \mathbb{P}_{\mathbb{H}_A}(A_{ij} = a, B_{ij} = b) &= \frac{1}{2} \mathbb{P}(A_{ij} = a, B_{ij} = b \mid R_{ij} = \epsilon) + \frac{1}{2} \mathbb{P}(A_{ij} = a, B_{ij} = b \mid R_{ij} = -\epsilon) \\ &= \frac{1}{4}. \end{split} \end{equation*} There is thus no consistent test procedure for $\mathbb{H}_0 \colon \epsilon = 0$ against $\mathbb{H}_A \colon \epsilon \not = 0$ under this assumed correlation structure for $R$ even though $\|R\|_{F} = n \epsilon$ for {\em any} realization of $R$ (so that $\|R\|_{F} \rightarrow \infty$ as $n\rightarrow \infty$ for any fixed $\epsilon > 0$). The condition $\|R\|_{F} \rightarrow \infty$ is therefore not sufficient. \end{remark} \subsection{Computational Feasibility} \label{sec:computational} The results in Section~\ref{sec:detectability} provide a necessary condition for statistical detectability, i.e., for the existence of a consistent test procedure for testing $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_{A} \colon \|R\|_{F} > 0$. However, there are numerous problems that are statistically feasible with parameter regimes for which there are no known {\em computationally efficient} procedures for solving them. Examples include community detection \citep{banks2016information}, sparse PCA \citep{sparse_pca} and estimation in spiked tensor models \citep{statistical_limit_tensor}. We now present an example of this phenomenon in the context of independence testing.
In particular, we show the presence of a statistical vs. computational gap via a polynomial time reduction from the following well-known planted clique problem in theoretical computer science to our independence testing problem. Let $A$ be an Erd\H{o}s-R\'{e}nyi graph on $n$ vertices with common edge probability $p$. Next, given an integer $s_0 \geq 0$, select a subset of $s_0$ vertices of $A$ and form a clique between these $s_0$ vertices. Suppose we are now given a graph $A$ generated according to the above planted clique model with {\em unknown} $s_0$ and a positive integer $k \geq 1$. The $\mathrm{PlantedClique}$ problem seeks to determine if $A$ contains a clique of size at least $k$. It is conjectured that there is no polynomial time algorithm for the $\mathrm{PlantedClique}$ problem that works for {\em all} values of $k \in \{1,2,\dots,n\}$ \citep{braverman2017eth}. In particular, if $\log n \ll k \ll n^{1/2}$, then all known algorithms require quasi-polynomial time of $n^{O(\log n)}$ \citep{barak2019nearly,alon1998finding}. Let $(A, B)$ be $R$-correlated Erd\H{o}s-R\'{e}nyi graphs with marginal edge probability $p = \tfrac{1}{2}$ and correlation matrix $R$ constructed as follows. First, select a subset $\mathcal{C}$ of vertices with $|\mathcal{C}| = s_0$. Then define \begin{equation} \label{eq:planted} R_{ij} = \begin{cases} -1, & \text{if $i \in \mathcal{C}$ and $j \in \mathcal{C}$},\\ 0, & \text{otherwise}. \end{cases} \end{equation} Note that $\|R\|_{F} = s_0$ and furthermore, $(A_{ij}, B_{ij}) \in \{(0,1), (1,0)\}$ whenever $i,j \in \mathcal{C}$ and $i \neq j$. Now consider testing $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$. Assume $s_0$ is unknown with $\log n \ll s_0 \ll n^{1/2}$ and suppose that there exists a polynomial time consistent test procedure for testing $\mathbb{H}_0$ against $\mathbb{H}_A$.
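The construction in Eq.~\eqref{eq:planted} is straightforward to simulate. In the Python sketch below ($n$ and $s_0$ are small illustrative values, far from the asymptotic regime $\log n \ll s_0 \ll n^{1/2}$), the matrix $S = |A - B|$ formed at the end is exactly the planted clique instance used in the argument that follows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, s0 = 60, 12
clique = rng.choice(n, size=s0, replace=False)   # the hidden vertex subset C

# Edges outside C x C: independent Bern(1/2) in both graphs (R_ij = 0).
Au = np.triu(rng.integers(0, 2, (n, n)), 1)
Bu = np.triu(rng.integers(0, 2, (n, n)), 1)

# Edges inside C x C: R_ij = -1 forces (A_ij, B_ij) into {(0,1), (1,0)},
# i.e., B_ij = 1 - A_ij, while A_ij remains marginally Bern(1/2).
mask = np.zeros((n, n), dtype=bool)
mask[np.ix_(clique, clique)] = True
Bu = np.where(np.triu(mask, 1), 1 - Au, Bu)

A, B = Au + Au.T, Bu + Bu.T
S = np.abs(A - B)   # on C x C: a clique; elsewhere: Bern(1/2) edges
```

Every off-diagonal entry of $S$ restricted to $\mathcal{C} \times \mathcal{C}$ equals one, while the remaining entries of $S$ are Bernoulli$(1/2)$, since $\mathbb{P}(A_{ij} \neq B_{ij}) = 2p(1-p) = 1/2$ for independent edges.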
Let $S = |A - B|$ where the absolute value is taken elementwise, i.e., $S_{ij} = |A_{ij} - B_{ij}|$. Note that $S$ is the adjacency matrix of a random graph generated from the planted clique model with clique size $s_0$ and edge probability $p = \tfrac{1}{2}$. Then a consistent test procedure for testing $\mathbb{H}_0$ against $\mathbb{H}_A$ will also yield a polynomial time algorithm for deciding whether or not $S$ has a clique of size at least $s_0$, thereby contradicting the claim of the Planted Clique conjecture; see the appendix for a more formal argument. In summary, we have the following example of a statistical versus computational gap for independence testing. \begin{mythm}\label{thm:gap} Let $(A, B)$ be a pair of $R$-correlated Erd\H{o}s-R\'{e}nyi graphs on $n$ vertices where $R$ is of the form in Eq.~\eqref{eq:planted} for some $\mathcal{C}$ with $\log n \ll |\mathcal{C}| \ll n^{1/2}$. Then, assuming the Planted Clique conjecture holds, there is no polynomial time consistent test procedure for testing $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$ even though $\|R\|_{F} \rightarrow \infty$ as $n \rightarrow \infty$. \end{mythm} \section{Independence testing in the graphon model} \label{graphon Model} \subsection{Same Marginal Distribution}\label{general} We now describe independence testing when $A$ and $B$ are, {\em marginally}, generated from the class of latent positions models or graphons \citep{hoff2002latent,bollobas2007phase,lovasz12:_large}. In particular, we will derive an asymptotically valid and consistent test procedure that also runs in time polynomial in $n$. We first define the notion of a pair of $R$-correlated latent position graphs where the correlation matrix $R$ is also generated from a collection of latent positions. This is a natural extension of the latent positions model for a single graph to the setting of two graphs sharing a common vertex set with edges that are possibly pairwise correlated.
\begin{mydef}\label{def_graphon} Let $\{X_1, \dots, X_n\} \subset U \subset \mathbb{R}^{d}$ and $\{Y_1, \dots, Y_n\} \subset V \subset \mathbb{R}^{d'}$ be two collections of {\em latent positions}. Now let $h$ be a symmetric bivariate function from $\mathbb{R}^{d} \times \mathbb{R}^{d}$ to $[0,1]$ and $g$ be a symmetric bivariate function from $\mathbb{R}^{d'} \times \mathbb{R}^{d'}$ to $[-1,1]$; assume also that $h$ and $g$ do not depend on $n$. Let $\rho_{n} \in [0,1]$ and $\gamma_n \in [0,1]$ and define the $n \times n$ matrices $P$ and $R$ by \begin{equation*} P_{ij} = \rho_{n} \cdot h(X_i, X_j), \quad R_{ij} = \gamma_{n} \cdot g(Y_i, Y_j) \end{equation*} where we have implicitly assumed that $\gamma_n$ and $g$ are chosen so that $R_{ij}$ satisfies the constraint in Eq.~\eqref{eq:r_ij_bound1} for all $i,j$. Given $P$ and $R$, we generate $(A, B)$ as in Definition~\ref{def_1}. We then say that $(A, B)$ is a pair of correlated latent position graphs with correlation matrix $R$ and edge probabilities matrix $P$. \end{mydef} \begin{remark} In Definition~\ref{def_graphon}, the factor $\rho_n$ controls the {\em sparsity} of the observed graphs $A$ and $B$ while the factor $\gamma_n$ controls the magnitudes of the pairwise correlations $\{R_{ij}\}$. Note that the average degrees of both $A$ and $B$ are $\Theta(n \rho_n)$; in the following presentation, we will allow for the possibility that $\rho_n \rightarrow 0$ and $\gamma_n \rightarrow 0$ as $n \rightarrow \infty$.
\end{remark} Let $(A, B)$ be a pair of graphs generated according to Definition~\ref{def_graphon} and $C$ be the matrix with entries \begin{equation} \label{eq:Cmatrix} C_{ij} = \begin{cases} 1 & \text{if $A_{ij} + B_{ij} > 0$} \\ 0 & \text{otherwise.} \end{cases} \end{equation} Denote by $H$ the matrix whose entries are $H_{ij} = \mathbb{E}[C_{ij}]$, i.e., \begin{equation} \label{eq:graphon_H} \begin{split} H_{ij} &= 2P_{ij} - P_{ij}^2 - R_{ij} P_{ij}(1 - P_{ij}) \\ &= \rho_n h(X_i, X_j) + (1 - \gamma_n g(Y_i, Y_j)) \rho_n h(X_i, X_j) (1 - \rho_n h(X_i, X_j)). \end{split} \end{equation} We now consider testing $\mathbb{H}_{0} \colon \|R\|_{F} = 0$ against $\mathbb{H}_{A} \colon \|R\|_{F} > 0$. Our test statistic is based on estimating $P$ and $H$ using universal singular value thresholding (USVT). More specifically, we first compute an estimate $\hat{P}_1$ (resp. $\hat{P}_2$) of $P$ by truncating the singular value decomposition of $A$ (resp. $B$) to keep only the $k_A$ (resp. $k_B$) largest singular values. Here $k_A := \max\{k \colon \sigma_k(A) \geq c_0 \sqrt{n \rho_n}\}$, the $\sigma_1(A) \geq \sigma_2(A) \geq \dots \geq 0$ are the singular values of $A$, and $c_0 > 4$ is a universal constant; the value $k_B$ is defined analogously. See \cite{chatterjee2015matrix,xu2017rates} for more details. We also apply the same operations to $C$ and obtain an estimate $\hat{H}$ of $H$. Let $\circ$ denote the Hadamard product for matrices. Then under $\mathbb{H}_0$ we have $\|H - 2P + P \circ P\|_{F} = 0$ and thus we consider a test statistic based on $\|\hat{H} - 2 \hat{P} + \hat{P} \circ \hat{P}\|_{F}$ where $\hat{P} = (\hat{P}_1 + \hat{P}_2)/2$. Leveraging recent results on the estimation error of USVT for graphon estimation \citep{xu2017rates,chatterjee2015matrix}, we obtain the following consistency guarantee for our test procedure.
\begin{mythm}\label{theorem 1} Let $(A, B)$ be a pair of graphs generated according to the model in Definition~\ref{def_graphon}, where $g$ and $h$ are fixed functions and do not vary with $n$. Assume that both $g$ and $h$ are at least $s$ times differentiable for some $s \geq 1$, where $s$ is assumed known, and that $n \rho_n = \omega(\log n)$ as $n$ increases. Let $\alpha = \tfrac{s + d + d'}{2s + d + d'}$ and define $$T(A, B) = \frac{\|\hat{H} - 2 \hat{P} + \hat{P} \circ \hat{P}\|_{F}}{\Delta^{\alpha} \log^{1/2}{n}}$$ where $\Delta$ is the average of the maximum degrees of $A$ and $B$. Let $\mathcal{R} = \{T \colon T > 1\}$. The test statistic $T(A, B)$ with rejection region $\mathcal{R}$ yields an asymptotically valid test procedure for testing $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$ and furthermore, $T$ is consistent whenever $\|R \circ (P - P \circ P)\|_{F} = \Omega((n \rho_n)^{\alpha'})$ for any $\alpha' > \alpha$ as $n \rightarrow \infty$. \end{mythm} We note that the assumption $n \rho_n = \omega(\log n)$ in Theorem~\ref{theorem 1} guarantees that the average degrees of the observed graphs grow slightly faster than $\log n$. This then allows the singular value thresholding step to yield reasonably accurate estimates $\hat{P}$ and $\hat{H}$. See for example the condition in Eq.~3 of \citet{xu2017rates}. Furthermore, the consistency regime for $T$ in Theorem~\ref{theorem 1} is stated in terms of $\|R \circ (P - P \circ P)\|_{F}$ as opposed to $\|R\|_{F}$. This is expected as the entries of $R \circ (P - P \circ P)$, which are $R_{ij} P_{ij} (1 - P_{ij})$, correspond to the difference between the edge probabilities matrices of $(A,B)$ under $\mathbb{H}_0$ and $\mathbb{H}_A$.
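As an illustration of the estimation step, the following Python sketch implements a simplified version of the truncated-SVD estimator described above; the plug-in estimate of $\rho_n$ from the edge density and the smaller threshold constant $c_0 = 3$ are our own simplifications of the USVT procedure of \citet{chatterjee2015matrix}:

```python
import numpy as np

def svt_estimate(M, c0=3.0):
    """Keep singular values of M above c0 * sqrt(n * rho_hat), then clip to [0, 1].
    The theory above takes c0 > 4; smaller constants often work better in practice."""
    n = M.shape[0]
    rho_hat = max(M.mean(), 1.0 / n)   # crude plug-in for the sparsity rho_n
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = max(1, int((s >= c0 * np.sqrt(n * rho_hat)).sum()))
    return np.clip((U[:, :k] * s[:k]) @ Vt[:k], 0.0, 1.0)

# sanity check on a dense Erdos-Renyi graph, where P is the constant matrix p
rng = np.random.default_rng(2)
n, p = 300, 0.3
Au = np.triu((rng.random((n, n)) < p).astype(float), 1)
P_hat = svt_estimate(Au + Au.T)
```

For a rank-one $P$ as here, a single singular value survives the threshold and the resulting estimate is close to the constant matrix $p$ in an entrywise average sense.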
If we assume that the link function $h$ satisfies $h(x,x') > 0$ for all $(x, x') \in U \times U$ then $P_{ij} = \Omega(\rho_n)$ for all $i,j$, and the condition $\|R \circ (P - P \circ P)\|_{F} = \Omega((n \rho_n)^{\alpha'} )$ simplifies to $\rho_n \|R\|_{F} = \Omega((n \rho_n)^{\alpha'} )$, or equivalently that $\|R\|_{F} = \Omega(n^{\alpha'} \rho_n^{\alpha' - 1})$. As $\alpha$ decreases when $s$ increases, we see that smoother $h$ and $g$ lead to a sharper consistency threshold for $\|R\|_{F}$. As a special case of Theorem~\ref{theorem 1}, suppose that both $g$ and $h$ are {\em infinitely} differentiable. We then have $\alpha \leq 1/2 + \epsilon$ for any $\epsilon > 0$ and our test procedure is consistent whenever $\|R \circ (P - P \circ P)\|_{F} = \Omega((n\rho_n)^{1/2 + \epsilon})$ for any $\epsilon > 0$. More specifically we have the following corollary. \begin{corollary} \label{cor:lpg} Consider the setting in Theorem~\ref{theorem 1} and suppose that $g$ and $h$ are both infinitely differentiable. Now choose an arbitrary $\epsilon > 0$ not depending on $n$ and define $$T(A, B) = \frac{\|\hat{H} - 2 \hat{P} + \hat{P} \circ \hat{P}\|_{F}}{\Delta^{1/2 + \epsilon/2}}.$$ Let the rejection region be given by $\mathcal{R} = \{T \colon T > 1\}$. Then the test statistic $T(A, B)$ with rejection region $\mathcal{R}$ yields an asymptotically valid test procedure for testing $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$ and furthermore, $T$ is consistent whenever $\|R \circ (P - P \circ P)\|_{F} = \Omega((n \rho_n)^{1/2 + \epsilon})$ as $n \rightarrow \infty$. 
\end{corollary} \begin{remark} If we suppose that (1) $h(x,x') > 0$ for all $x,x' \in U$ and (2) $g(y, y') > 0$ for all $y, y' \in V$ then $\|R \circ (P - P \circ P)\|_{F} = \Theta(n \gamma_n \rho_n )$ and the condition $\|R \circ (P - P \circ P)\|_{F} = \Omega((n \rho_n)^{1/2 + \epsilon})$ in Corollary~\ref{cor:lpg} is equivalent to the condition $\gamma_n = \Omega((n \rho_n)^{-1/2 + \epsilon})$, i.e., the test procedure is consistent whenever the pairwise correlations decay to $0$ slower than the reciprocal of the square root of the average degree. \end{remark} \begin{small} \begin{algorithm}[tp] \caption{Bootstrap procedure for graphons} \label{Graphon_Model_Testing_Simulation_2} \begin{algorithmic} \Require Adjacency matrices $A$ and $B$, both of size $n \times n$, significance level $\alpha \in (0,1)$, number of bootstrap samples $m$. \State (A) Compute the matrix $C$ whose entries are $C_{ij} = 1$ if $A_{ij} + B_{ij} > 0$ and $C_{ij} = 0$ otherwise. \State (B) Compute $\hat{P}_1, \hat{P}_2$ and $\hat{H}$ by applying universal singular value thresholding (USVT) on $A$, $B$, and $C$, respectively. \State (C) Let $\hat{P}=\frac{1}{2}(\hat{P}_1+\hat{P}_2)$ and calculate the test statistic $T = \|\hat{H} - 2 \hat{P} + \hat{P} \circ \hat{P}\|_{F}$. \For{$s=1$ to $m$} \State (i) Generate adjacency matrices $(A^{(s)}, B^{(s)})$ according to Definition~\ref{def_1} with $R = 0$ and marginal edge probabilities matrix $\hat{P}$. \State (ii) Compute $\hat{P}_1^{(s)}$ and $\hat{P}_2^{(s)}$ by applying USVT on $A^{(s)}$ and $B^{(s)}$, respectively. \State (iii) Calculate $T^{(s)} = \|\hat{H}^{(s)}- 2 \hat{P}^{(s)} + \hat{P}^{(s)} \circ \hat{P}^{(s)}\|_{F}$, where $\hat{P}^{(s)}=\frac{1}{2}(\hat{P}_1^{(s)}+\hat{P}_2^{(s)})$ and $\hat{H}^{(s)}$ is obtained by applying USVT on $C^{(s)}$, the matrix computed from $(A^{(s)}, B^{(s)})$ as in step (A).
\EndFor \State (D) Set $c_{\alpha}$ to be the $(1 - \alpha) \times 100\%$ percentile of the $\{T^{(s)}\}_{s=1}^{m}$. \State \textbf{Output} If $T>c_{\alpha}$ then reject $\mathbb{H}_0$; otherwise fail to reject $\mathbb{H}_0$. \end{algorithmic} \end{algorithm} \end{small} If $g$ and $h$ are not infinitely differentiable then the test statistic in Theorem~\ref{theorem 1} depends on knowing (1) a lower bound for the smoothness $s$ of the functions $g$ and $h$ and (2) upper bounds for the dimensions $d$ and $d'$ of the latent positions $\{X_i\}$ and $\{Y_i\}$. These values are most likely unknown in practice. Furthermore, even when $s, d$ and $d'$ are known, for finite samples the rejection region in Theorem~\ref{theorem 1} is likely to be overly conservative. This is a common issue in many graph testing problems whose test statistics have no known non-degenerate limiting distribution; see for example the test statistics in \citet{tang2017semiparametric}, \citet{ghoshdastidar2020two}, and \citet{gretton2012kernel}. In light of these limitations, for this paper we will instead consider the {\em unnormalized} test statistic $\tilde{T}(A, B) = \|\hat{H} - 2 \hat{P} + \hat{P} \circ \hat{P}\|_{F}$ and use a bootstrap procedure to determine the rejection region for $\tilde{T}$. More specifically the bootstrap procedure generates additional pairs of graphs $\{(A^{(s)}, B^{(s)})\}_{s=1}^{m}$ from the estimated edge probabilities matrix $\hat{P}$ with $R = 0$ and then computes the test statistics for these bootstrap pairs to obtain an empirical distribution for the test statistics under the null hypothesis. See Algorithm~\ref{Graphon_Model_Testing_Simulation_2} for more details. Note that $\tilde{T}(A, B)$ differs from $T(A, B)$ only by the normalizing term $\Delta^{\alpha} \log^{1/2}{n}$. This factor guarantees that $T(A, B) < 1$ when $\|R\|_{F} = 0$ and $T(A, B) \rightarrow \infty$ when $\|R\|_{F} = \Omega((n \rho_n)^{\alpha'})$.
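For concreteness, the unnormalized statistic $\tilde{T}$ and its bootstrap calibration can be sketched in Python as follows. This is a minimal illustration, not the implementation used in the paper: the USVT threshold constant $2.02$ is a conventional but illustrative choice, and all helper names are ours.

```python
import numpy as np

def usvt(M, c=2.02):
    """Universal singular value thresholding sketch: keep singular values
    above c * sqrt(n) and clip the reconstructed entries to [0, 1]."""
    n = M.shape[0]
    U, s, Vt = np.linalg.svd(M)
    keep = s >= c * np.sqrt(n)
    return np.clip((U[:, keep] * s[keep]) @ Vt[keep, :], 0.0, 1.0)

def sample_graph(P, rng):
    """One symmetric adjacency matrix with independent Bernoulli(P_ij) edges."""
    n = P.shape[0]
    u = np.triu(rng.random((n, n)), 1)
    u = u + u.T + np.eye(n)   # symmetric uniforms; diagonal never below P
    return (u < P).astype(float)

def unnormalized_stat(A, B):
    """tilde{T}(A, B) = ||H^ - 2 P^ + P^ o P^||_F with P^ = (P^_1 + P^_2)/2."""
    C = ((A + B) > 0).astype(float)   # C_ij = 1 iff A_ij + B_ij > 0
    P = 0.5 * (usvt(A) + usvt(B))
    H = usvt(C)
    return np.linalg.norm(H - 2.0 * P + P * P, ord="fro")

def bootstrap_pvalue(A, B, m=200, seed=None):
    """Resample independent pairs from the fitted P^ (i.e., under R = 0)
    and return a bootstrap p-value for the observed tilde{T}(A, B)."""
    rng = np.random.default_rng(seed)
    P = 0.5 * (usvt(A) + usvt(B))
    T_obs = unnormalized_stat(A, B)
    T_null = np.empty(m)
    for s in range(m):
        T_null[s] = unnormalized_stat(sample_graph(P, rng),
                                      sample_graph(P, rng))
    return (1 + np.sum(T_null >= T_obs)) / (m + 1)
```

Rejecting when the p-value falls below the significance level mirrors the comparison of $T$ with the bootstrap percentile $c_\alpha$ in Algorithm~\ref{Graphon_Model_Testing_Simulation_2}.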
Hence, by using $\tilde{T}(A, B)$ and bootstrapping, we circumvent the need to know/estimate $\alpha$ (a possibly non-trivial if not impossible task). \subsection{Detection thresholds for stochastic blockmodels} \label{sec:SBM} We now consider the special case of independence testing when the graphs $A$ and $B$ are, {\em marginally}, stochastic blockmodel graphs with a common block structure. We begin by formulating an extension of the stochastic blockmodel \citep{holland1983stochastic} for a single graph to the case of a pair of graphs whose edge correlations also exhibit a block structure. This extension has appeared previously in the literature in the context of graph matching (see e.g., \cite{onaran2016optimal,racz2021correlated,lyzinski2014seededER}). \begin{mydef} \label{def:sbm} Let $A$ and $B$ be graphs on $n$ vertices. We say that $(A, B)$ is a pair of $K$-block correlated stochastic blockmodel graphs with common community assignments $\tau$, block probabilities matrices $\Theta_P$ and $\Theta_Q$, sparsity factor $\rho_n$ and block correlations matrix $\Theta_R$ if $(A,B)$ are generated from the $R$-$\mathrm{ER}(P,Q)$ model where \begin{enumerate} \item $\Theta_P$ and $\Theta_Q$ are $K \times K$ {\em symmetric} matrices with entries in $[0,1]$. \item Marginally, the edge probabilities matrices for $A$ and $B$ are $P = \rho_n Z \Theta_P Z^{\top}$ and $Q = \rho_n Z \Theta_Q Z^{\top}$ where $Z$ is an $n \times K$ matrix such that, for any $k \in \{1,2,\dots,K\}$, we have $Z_{ik} = 1$ if $\tau_i = k$ and $Z_{ik} = 0$ otherwise. \item The correlation matrix $R$ is $R = Z \Theta_R Z^{\top}$. \end{enumerate} \end{mydef} Let $(A, B)$ be generated from the model in Definition~\ref{def:sbm}.
We now describe a test statistic $T$ for testing $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$ with the following properties: (1) $T$ has a central $\chi^2$ limiting distribution under the null hypothesis and (2) $T$ is consistent under the alternative hypothesis provided that $\|R\|_{F} \rightarrow \infty$. First, compute the matrix $C$ as defined in Eq.~\eqref{eq:Cmatrix} and cluster the vertices of $C$ into $K$ communities using a community detection algorithm that guarantees exact recovery (see e.g., \cite{abbe2017community,gao2017achieving,lyzinski2014perfect}) where the value of $K$ can be chosen using model selection procedures such as those described in \cite{li_network,wang_bickel_llr,lei2014}. Let $\hat{\tau}$ be the resulting estimated community assignment. Next, compute, for $1 \leq k \leq \ell \leq K$, the Pearson sample correlation $\hat{\rho}_{k \ell}$ between the edges $\{A_{ij}, B_{ij}\}$ in the $(k,\ell)$th block, i.e., $$\hat{\rho}_{k \ell} = \mathrm{cor}\Bigl(\{A_{ij} \colon i < j, \hat{\tau}_i = k, \hat{\tau}_j = \ell\}, \{B_{ij} \colon i < j, \hat{\tau}_i = k, \hat{\tau}_j = \ell\}\Bigr).$$ Let $\hat{n}_k = |\{ i \colon \hat{\tau}_i = k\}|$ for all $k$ and define the test statistic $$T(A, B) = \sum_{k \leq \ell} \hat{n}_{k \ell} \hat{\rho}_{k \ell}^{2}$$ where $\hat{n}_{k \ell} = \tbinom{\hat{n}_k}{2}$ if $k = \ell$ and $\hat{n}_{k \ell} = \hat{n}_k \hat{n}_{\ell}$ if $k \not = \ell$. We then have the following result. \begin{mythm} \label{thm:SBM} Let $(A,B)$ be a pair of graphs on $n$ vertices generated according to Definition~\ref{def:sbm} where $\Theta_P$ and $\Theta_Q$ are fixed $K \times K$ matrices not depending on $n$. Suppose that, as $n$ increases, we have $n_k = |\{i \colon \tau_i = k\}| = \Theta(n)$ for all $k \in\{1,2,\dots,K\}$ and the sparsity $\rho_n$ satisfies $n \rho_n = \omega(\log n)$.
Then under $\mathbb{H}_0 \colon \|R\|_{F} = 0$ we have $T(A, B) \rightsquigarrow \chi^2_{K(K+1)/2}$ as $n \rightarrow \infty$. Furthermore, let $\mu > 0$ be a finite constant and suppose that $\Theta_R$ satisfies \begin{equation} \label{eq:local_alternative} \sum_{k \leq \ell} n_{k \ell} \Theta_R^2(k,\ell) \rightarrow \mu \end{equation} as $n \rightarrow \infty$ (here $\Theta_{R}(k,\ell)$ denotes the $k,\ell$th entry of $\Theta_R$). We then have $$T(A, B) \rightsquigarrow \chi^2_{K(K+1)/2}(\mu)$$ as $n \rightarrow \infty$. Here $\chi^2_{K(K+1)/2}(\mu)$ is a non-central $\chi^2$ with $K(K+1)/2$ degrees of freedom and non-centrality parameter $\mu$. \end{mythm} Comparing Theorem~\ref{theorem 1} (and its associated Corollary~\ref{cor:lpg}) to Theorem~\ref{thm:SBM}, we see that by assuming a more specialized structure on $R$ we obtain a much sharper detection threshold. Indeed, the local alternative in Theorem~\ref{thm:SBM} implies that our test statistic achieves power converging to $1$ whenever $\Theta_R$ satisfies the condition $\sum_{k \leq \ell} n_{k \ell} \Theta_R^2(k,\ell) \rightarrow \infty$. This is equivalent to the condition that $\|R\|_{F} \rightarrow \infty$ and is thus, by Theorem~\ref{thm:main1}, both necessary and sufficient. Theorem~\ref{thm:SBM} also assumes that the block structure for $R$ is the same as that for $P$ and $Q$, and this is done purely for ease of exposition as it allows us to more easily aggregate the edges of $A$ and $B$ into blocks and estimate the pairwise correlations between the edges in each block. Extending Theorem~\ref{thm:SBM} to the case where the SBM structure for $R$ differs from that of $P$ and $Q$ is straightforward but tedious, and we leave it to the interested reader. Finally, \cite{xiong2019graph} considered a special case of Definition~\ref{def:sbm} with $R_{ij} \equiv c$ for all $\{i,j\}$, but they did not derive a non-degenerate limiting distribution for their test statistic.
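A minimal sketch of the block-correlation statistic is given below, assuming for simplicity that the community assignment $\tau$ is given rather than estimated (in practice it would be recovered by a community detection algorithm as described above); the function names are ours.

```python
import numpy as np
from scipy.stats import chi2

def sbm_corr_stat(A, B, tau, K):
    """T = sum_{k <= l} n_kl * rho_kl^2, where rho_kl is the Pearson sample
    correlation between the upper-triangular edges of A and B lying in
    block (k, l).  Under H0, T is approximately chi^2 with K(K+1)/2 df."""
    n = A.shape[0]
    ii, jj = np.triu_indices(n, 1)
    a, b, ti, tj = A[ii, jj], B[ii, jj], tau[ii], tau[jj]
    T = 0.0
    for k in range(K):
        for l in range(k, K):
            mask = ((ti == k) & (tj == l)) | ((ti == l) & (tj == k))
            rho = np.corrcoef(a[mask], b[mask])[0, 1]
            T += mask.sum() * rho ** 2
    return T

def sbm_corr_pvalue(T, K):
    """Upper-tail p-value from the central chi^2 reference distribution."""
    return chi2.sf(T, K * (K + 1) // 2)
```

Note that $\sum_{k \leq \ell} n_{k\ell}$ counts every vertex pair once, so under perfect edge correlation ($B = A$) the statistic equals $\binom{n}{2}$ and the test rejects overwhelmingly.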
\subsection{Different Marginal Distributions}\label{different} We now discuss how the previous model and results can be extended to the case where the graphs $A$ and $B$ have different {\em marginal} distributions. We first define a model that generalizes the $R$-$\mathrm{ER}(P)$ model in Definition~\ref{def_1} (see also Remark~\ref{remark_ext}). \begin{mydef} Let $n \in \mathbb{N}$. Let $P \in [0,1]^{n\times n}$, $Q \in [0,1]^{n \times n}$, and $R\in[{-1},1]^{n\times n}$ be symmetric matrices where $R$ satisfies the constraint \begin{equation} \label{eq:constraint_different} -\min\Bigl\{\tfrac{P_{ij}Q_{ij}}{(1-P_{ij})(1-Q_{ij})},\tfrac{(1-P_{ij})(1-Q_{ij})}{P_{ij}Q_{ij}}\Bigr\}^{1/2} \leq R_{ij}\leq\min\Bigl\{\tfrac{P_{ij}(1-Q_{ij})}{Q_{ij}(1-P_{ij})},\tfrac{Q_{ij}(1-P_{ij})}{P_{ij}(1-Q_{ij})}\Bigr\}^{1/2}, \end{equation} for all $i,j$. We say that $(A,B)$ are $R$-correlated heterogeneous Erd\H{o}s-R\'{e}nyi graphs on $n$ vertices with {\em marginal} edge probabilities $(P,Q)$ and correlations $R$, denoted by $(A,B)\sim R$-$\mathrm{ER}(P,Q)$, if \begin{enumerate} \item $A$ is the adjacency matrix for an inhomogeneous Erd\H{o}s-R\'{e}nyi graph on $n$ vertices with probability matrix $P$. \item $B$ is the adjacency matrix for another inhomogeneous Erd\H{o}s-R\'{e}nyi graph on $n$ vertices with probability matrix $Q$. \item The pairs of entries $\{(A_{ij},B_{ij})\}_{1\leq i<j\leq n}$ are independent bivariate random vectors. \item For $i < j$, $A_{ij}$ and $B_{ij}$ are correlated with Pearson correlation $R_{ij}$. \end{enumerate} \end{mydef} Following the proof of Theorem~\ref{thm:main1}, we can show that the condition $\|R\|_{F} \rightarrow \infty$ as $n \rightarrow \infty$ is also necessary for the existence of a consistent test procedure for detecting the correlation between $R$-$\mathrm{ER}(P,Q)$ graphs.
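A sampler for this model can be sketched by working with the four joint cell probabilities implied by the marginals and the Pearson correlation. The helper below is illustrative (its name is ours) and assumes $P$, $Q$, $R$ are symmetric and satisfy the feasibility constraint in Eq.~\eqref{eq:constraint_different}, which guarantees all four cell probabilities are nonnegative.

```python
import numpy as np

def sample_correlated_pair(P, Q, R, seed=None):
    """Sample (A, B) ~ R-ER(P, Q): A_ij ~ Bernoulli(P_ij),
    B_ij ~ Bernoulli(Q_ij), Pearson correlation R_ij, with pairs
    (A_ij, B_ij) independent across i < j."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    # Joint cell probabilities implied by the marginals and correlation:
    p11 = P * Q + R * np.sqrt(P * (1 - P) * Q * (1 - Q))  # P[A=1, B=1]
    p10 = P - p11                                         # P[A=1, B=0]
    p01 = Q - p11                                         # P[A=0, B=1]
    u = np.triu(rng.random((n, n)), 1)
    u = u + u.T + np.eye(n)   # symmetric uniforms; diagonal = 1 (no loops)
    # Partition [0, 1) into the cells {11}, {10}, {01}, {00}:
    A = (u < p11 + p10).astype(float)
    B = ((u < p11) | ((u >= p11 + p10) & (u < p11 + p10 + p01))).astype(float)
    return A, B
```

Since $\mathrm{Cov}(A_{ij}, B_{ij}) = p_{11} - P_{ij}Q_{ij} = R_{ij}\sqrt{P_{ij}(1-P_{ij})Q_{ij}(1-Q_{ij})}$, the construction reproduces the required Pearson correlation.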
A simple generative model for $R$-$\mathrm{ER}(P,Q)$ graphs is the following variant of the graphon model described in Definition~\ref{def_graphon} where, instead of only having a single $h$, we have two link functions $h_1$ and $h_2$ from $U \times U \mapsto [0,1]$ and define $P$ and $Q$ via $P_{ij} = \rho_n h_1(X_i, X_j)$ and $Q_{ij} = \rho_n h_2(X_i, X_j)$ for all $i \leq j$. The function $g$ is also chosen so that the correlation matrix $R$ with entries $R_{ij} = \gamma_n g(Y_i, Y_j)$ satisfies the constraint in Eq.~\eqref{eq:constraint_different}. Note that for ease of exposition we have assumed that the sparsity factors for $P$ and $Q$ are the same. The case where $P$ and $Q$ have different sparsity factors involves more tedious book-keeping but is, otherwise, conceptually identical and leads to similar results as those described below. Given $(A, B)$ sampled from the above variant of the $R$-$\mathrm{ER}(P,Q)$ model we once again let $C$ be the matrix with entries $C_{ij} = 1$ if $A_{ij} + B_{ij} > 0$ and $C_{ij} = 0$ otherwise. Denote $H = \mathbb{E}[C]$. We then have \begin{equation} \label{eq:h_form} H_{ij} = P_{ij} + Q_{ij} - P_{ij} Q_{ij} - R_{ij}\sqrt{P_{ij}(1 - P_{ij})Q_{ij}(1 - Q_{ij})} \end{equation} and thus we can construct a test statistic for $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$ by first applying USVT to $A$ and $B$ to obtain estimates $\hat{P}$ of $P$ and $\hat{Q}$ of $Q$, then applying USVT to $C$ to obtain an estimate $\hat{H}$ of $H$, and finally computing $\tilde{T}(A, B) = \|\hat{H} - \hat{P} - \hat{Q} + \hat{P} \circ \hat{Q}\|_{F}.$ The following result is derived using an identical argument to that for Theorem~\ref{theorem 1} and shows that rejecting $\mathbb{H}_0$ for large values of $\tilde{T}$ yields a consistent test procedure.
\begin{mythm}\label{theorem 4} Let $(A, B)$ be a pair of graphs generated from the above $R$-$\mathrm{ER}(P,Q)$ model where $g$, $h_1$ and $h_2$ are fixed functions and do not vary with $n$. Assume that all of $g, h_1, h_2$ are at least $s$ times continuously differentiable for some $s \geq 1$, where $s$ is assumed known, and that $n \rho_n = \omega(\log n)$ as $n$ increases. Let $\alpha = \tfrac{s + d + d'}{2s + d + d'}$ and define $$T(A, B) = \frac{\|\hat{H} - \hat{P} - \hat{Q} + \hat{P} \circ \hat{Q}\|_{F}}{\Delta^{\alpha} \log^{1/2}{n}}$$ where $\Delta$ is the average of the maximum degrees of $A$ and $B$. Let $\mathcal{R} = \{T \colon T > 1\}$. The test statistic $T(A, B)$ with rejection region $\mathcal{R}$ yields an asymptotically valid test procedure for testing $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$ and furthermore, $T$ is consistent whenever $\|R \circ \Xi\|_{F} = \Omega((n \rho_n)^{\alpha'})$ for any $\alpha' > \alpha$ as $n \rightarrow \infty$. Here $\Xi$ is the $n \times n$ matrix with entries $\Xi_{ij} = \bigl(P_{ij}(1 - P_{ij}) Q_{ij}(1 - Q_{ij})\bigr)^{1/2}.$ \end{mythm} Similar to our discussion after Corollary~\ref{cor:lpg}, if the link functions $g$, $h_1$ and $h_2$ are not infinitely differentiable then the test statistic in Theorem~\ref{theorem 4} depends on knowing a lower bound for the smoothness $s$ and upper bounds for the dimensions $d$ and $d'$ of the latent positions $\{X_i\}$ and $\{Y_i\}$. These values are once again unknown or, even if known, the rejection region in Theorem~\ref{theorem 4} is still overly conservative for finite-sample inference. We will thus also use bootstrap resampling to calibrate the rejection region for the above test statistic $T(A, B)$; see Algorithm~\ref{Analysis_for_real_data} in Section~\ref{Real_Data} for more details.
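The identity in Eq.~\eqref{eq:h_form}, on which the statistic in Theorem~\ref{theorem 4} is based, can be verified numerically for a single correlated Bernoulli pair; the values of $p$, $q$, $r$ below are illustrative.

```python
import numpy as np

# Monte Carlo sanity check of Eq. (eq:h_form): for one pair (A, B) with
# marginals Bernoulli(p), Bernoulli(q) and Pearson correlation r,
#   P[A + B > 0] = p + q - p*q - r * sqrt(p(1-p)q(1-q)).
rng = np.random.default_rng(0)
p, q, r = 0.3, 0.2, 0.4

# Joint cell probabilities implied by the marginals and the correlation:
p11 = p * q + r * np.sqrt(p * (1 - p) * q * (1 - q))  # P[A=1, B=1]
p10, p01 = p - p11, q - p11                           # P[A=1,B=0], P[A=0,B=1]

# Sample (A, B) by partitioning [0, 1) into the four cells:
u = rng.random(200_000)
A = u < p11 + p10
B = (u < p11) | ((u >= p11 + p10) & (u < p11 + p10 + p01))

h_mc = np.mean(A | B)                                  # empirical P[A + B > 0]
h_formula = p + q - p * q - r * np.sqrt(p * (1 - p) * q * (1 - q))
```

With $2 \times 10^5$ draws the Monte Carlo estimate agrees with the closed form to within a few thousandths, as expected from the binomial standard error.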
\section{Simulation Results} We now conduct simulation experiments to evaluate the performance of our test procedures for testing the hypothesis in Section~\ref{graphon Model}. For our first experiment, we generate $\{X_i\}_{i=1}^{n}$ as an iid sample from a bivariate normal with mean $\bm{0}$ and identity covariance matrix. We then consider two different choices of link functions for $P$. The first is the cosine similarity \begin{equation} \label{eq:cosine} P_{ij} = \frac{|X_i^{\top}X_j|}{2\|X_i\|\|X_j\|} \end{equation} and the second is the Gaussian kernel $P_{ij} = \tfrac{1}{2}\exp\big(-\|X_i-X_j\|^2\big)$. Note that the rank of $P$ is $2$ and $n$ for the cosine and Gaussian similarity, respectively. Given $P$ we set $R_{ij} \equiv r$ for some value $r$ to be specified later. We then generate a pair of graphs $(A, B) \sim R$-$\mathrm{ER}(P)$ on $n$ vertices and apply the test statistic $T$ in Corollary~\ref{cor:lpg} to test $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$ where the rejection region is calibrated via bootstrapping with significance level $0.05$ (see Algorithm~\ref{Graphon_Model_Testing_Simulation_2}). We repeat the above steps for $m = 100$ Monte Carlo replicates to obtain an empirical estimate of the type I error (when $r = 0$) and type II error (when $r > 0$) for $T$. The results are presented in Table \ref{Table:same cosine} for combinations of $n \in \{100,200,500,1000,2000\}$ and $r \in \{0,0.1,0.3,0.5\}$. We observe that the type I error of the test statistic is well-controlled and that the test statistic also exhibits power even for small values of $r$ and moderate values of $n$. Note that we fixed $k = 2$ when $P$ is the cosine similarity; here $k$ is the number of singular values used to construct $T$. In contrast, we chose $k$ via USVT (see Section~\ref{graphon Model}) when $P$ is the Gaussian similarity.
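The cosine-similarity construction in Eq.~\eqref{eq:cosine} can be sketched as follows; the value of $n$ is illustrative, and the diagonal is zeroed out since the graphs have no self-loops.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))              # iid bivariate standard normal
norms = np.linalg.norm(X, axis=1)
# P_ij = |X_i' X_j| / (2 ||X_i|| ||X_j||); by Cauchy-Schwarz every
# entry lies in [0, 1/2], so P is a valid edge probability matrix.
P = np.abs(X @ X.T) / (2.0 * np.outer(norms, norms))
np.fill_diagonal(P, 0.0)                 # no self-loops
```

The matrix $P$ can then be fed to a correlated-pair sampler together with the constant correlation matrix $R_{ij} \equiv r$ to reproduce the first experiment.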
\begin{table}[tp] \centering \begin{tabular}{lcccc} \hline $n \setminus r$ &$r=0$&$r=0.1$&$r=0.3$&$r=0.5$ \\ \hline $n=100$ & 0.06 & 0.17 & 0.85 & 1\\ $n=200$ & 0.07 & 0.98 & 1 & 1 \\ $n=500$ & 0.04 & 1 & 1 & 1\\ $n=1000$ & 0.02 & 1 & 1 & 1\\ $n=2000$ & 0.01 & 1 & 1 & 1\\ \hline \end{tabular} \caption{Empirical estimates (based on $m = 100$ Monte Carlo replicates) for the type I and type II error when $P_{ij}$ is from the cosine similarity. The given values correspond to the type I error when $r = 0$ and to the power (i.e., one minus the type II error) when $r > 0$.} \label{Table:same cosine} \end{table} \begin{table}[tp] \centering \begin{tabular}{lcccc} \hline $n \setminus r$ &$r=0$&$r=0.1$&$r=0.3$&$r=0.5$ \\ \hline $n=100$ & 0 & 0 & 0.06 & 0.74\\ $n=200$ & 0 & 0 & 0.26 & 1\\ $n=500$ & 0 & 0 & 1 &1\\ $n=1000$ & 0 & 0 & 1 & 1\\ $n=2000$ & 0 & 0.02 & 1 & 1 \\ \hline \end{tabular} \caption{Empirical estimates (based on $m = 100$ Monte Carlo replicates) for the type I and type II error when $P_{ij}$ is from the Gaussian kernel. The given values correspond to the type I error when $r = 0$ and to the power (i.e., one minus the type II error) when $r > 0$.} \label{Table: exponential} \end{table} For our second experiment we consider the case where the graphs $A$ and $B$ have different marginal distributions (see Section~\ref{different}). In particular we generate $\{X_i\}_{i=1}^{n}$ and $\{Y_i\}_{i=1}^{n}$ as iid samples from a bivariate normal with mean $\bm{0}$ and identity covariance matrix and then set $P$ and $Q$ to have entries \begin{gather*} P_{ij} = \frac{|X_i^{\top} X_j|}{2 \|X_i\|\|X_j\|}, \quad \text{and} \quad Q_{ij} = \frac{|Y_i^{\top} Y_j|}{4\|Y_i\|\|Y_j\|}. \end{gather*} Given $P$ and $Q$ we once again set $R_{ij} \equiv r$ and generate a pair of graphs $(A, B)$ on $n$ vertices from the $R$-$\mathrm{ER}(P,Q)$ model.
We apply the test statistic in Theorem~\ref{theorem 4} to test $\mathbb{H}_0 \colon \|R\|_{F} = 0$ against $\mathbb{H}_A \colon \|R\|_{F} > 0$; the rejection region is also calibrated via bootstrapping with significance level $0.05$ using Algorithm~\ref{Analysis_for_real_data}. Empirical estimates of the type I and type II errors (based on $m = 100$ Monte Carlo replicates) are presented in Table \ref{Table:Different} for combinations of $n \in \{100,200,500,1000,2000\}$ and $r \in \{0,0.1,0.3,0.5\}$. Once again, the type I error is well-controlled and the test also exhibits power even for small values of $r$ and moderate values of $n$. \begin{table}[tp] \centering \begin{tabular}{lcccc} \hline $n \setminus r$ &$r=0$&$r=0.1$&$r=0.3$&$r=0.5$ \\ \hline $n=100$ & 0 & 0.35 & 1 & 1\\ $n=200$ & 0 & 0.83 & 1 & 1 \\ $n=500$ & 0.11 & 1 & 1 & 1\\ $n=1000$ & 0.01 & 1 & 1 & 1\\ $n=2000$ & 0.02 & 1 & 1 & 1\\ \hline \end{tabular} \caption{Empirical estimates (based on $m = 100$ Monte Carlo replicates) for the type I and type II error when $P_{ij}$ and $Q_{ij}$ are from the cosine similarity with $P \not = Q$. The given values correspond to the type I error when $r = 0$ and to the power (i.e., one minus the type II error) when $r > 0$.} \label{Table:Different} \end{table} For our last experiment, we consider correlation testing when $A$ and $B$ are {\em marginally} stochastic blockmodel graphs (c.f. Section~\ref{sec:SBM}). More specifically, we assume that $A$ is a $K$-block SBM where each vertex of $A$ is assigned to a block $k \in \{1,\dots,K\}$ with probability $1/K$ and the marginal edge probabilities between vertices in block $k$ and vertices in block $\ell$ are given by $0.45 - |k - \ell|/(2K)$ for any $k, \ell \in \{1,2,\dots,K\}$. Similarly, $B$ is a $K$-block SBM with the same membership assignment as $A$ and marginal edge probabilities $0.4 - |k - \ell|/(2K+2)$.
We set $R$ to have the same block structure as both $A$ and $B$ and entries of the form $r( 1 - |k - \ell|/K)$ for values of $r$ that are specified later. We then sample a pair $(A,B)$ from the model in Definition~\ref{def:sbm} with parameters given above and test the hypothesis that $\|R\|_{F} = 0$ against $\|R\|_{F} > 0$ using the test statistic in Theorem~\ref{thm:SBM} where the rejection region is based on the $95\%$ percentile of the $\chi^2_{K(K+1)/2}$ distribution. We repeat these steps for $m = 1000$ Monte Carlo replicates to obtain empirical estimates of the type I and type II error of our test statistic. The results are presented in Table~\ref{Table: SBM1} for various combinations of $n \in \{200,500, \dots, 3000\}$, $K \in \{2,5,7\}$ and $r \in \{0,0.001,0.005,0.01\}$. For comparison we also include the limiting theoretical power given by $1 - F_{\mu}(c_*)$ where $c_*$ is the $95\%$ percentile of the (central) $\chi^2_{K(K+1)/2}$ distribution and $F_{\mu}$ is the cumulative distribution function of a {\em non-central} $\chi_{K(K+1)/2}^2$ with non-centrality parameter $\mu = r^2(\tfrac{K^2+1}{4K^2} n^2 - \tfrac{n}{2})$. We observe that the empirical type I errors (when $r = 0$) and empirical power (when $r > 0$) are close to their limiting theoretical counterparts provided that $n$ is sufficiently large compared to $K$; indeed, as $K$ increases we generally need larger values of $n$ to achieve accurate recovery of the latent community assignments.
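The limiting theoretical power $1 - F_{\mu}(c_*)$ can be computed directly from the non-central $\chi^2$ distribution; a sketch using \texttt{scipy} (the function name is ours):

```python
from scipy.stats import chi2, ncx2

def limiting_power(n, K, r, level=0.05):
    """Limiting power of the level-`level` chi^2 test against the local
    alternative: 1 - F_mu(c*), where c* is the (1 - level) quantile of the
    central chi^2 with K(K+1)/2 df and F_mu is the CDF of the non-central
    chi^2 with the same df and mu = r^2 ((K^2+1)/(4K^2) n^2 - n/2)."""
    df = K * (K + 1) // 2
    c_star = chi2.ppf(1 - level, df)
    mu = r ** 2 * ((K ** 2 + 1) / (4 * K ** 2) * n ** 2 - n / 2)
    return ncx2.sf(c_star, df, mu)   # survival function = 1 - CDF
```

For example, at $K = 2$, $n = 1000$, $r = 0.005$ the non-centrality is $\mu \approx 7.8$, giving a limiting power of roughly $0.64$, in line with the corresponding entry of Table~\ref{Table: SBM1}.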
\begin{table}[htp] \centering \begin{tabular}{lcccc} \hline $K = 2$ &$r=0$&$r=0.001$&$r=0.005$&$r=0.01$ \\ \hline $n=200$ & 0.044/0.050 & 0.045/0.051 & 0.056/0.069 & 0.147/0.133 \\ $n=500$ & 0.047/0.050 & 0.055/0.055 & 0.177/0.188 & 0.621/0.641\\ $n=1000$ & 0.056/0.050 & 0.070/0.069 & 0.638/0.642 & 1/0.999\\ $n=2000$ & 0.048/0.050 & 0.139/0.134 & 0.998/0.999 & 1/1\\ $n=3000$& 0.040/0.050 & 0.236/0.259 & 1/1& 1/1\\ \hline \end{tabular} \begin{tabular}{lcccc} \hline $K=5$&$r=0$&$r=0.001$&$r=0.005$&$r=0.01$ \\ \hline $n=200$ & 0.864/0.050 & 0.755/0.050 & 0.795/0.056 & 0.888/0.076 \\ $n=500$ & 0.055/0.050 & 0.053/0.051 & 0.103/0.093 & 0.311/0.289\\ $n=1000$ & 0.051/0.050 & 0.067/0.056 & 0.280/0.290 & 0.940/0.931\\ $n=2000$ & 0.054/0.050 & 0.070/0.076 & 0.936/0.932 & 1/1\\ $n=3000$ & 0.041/0.050 & 0.111/0.116 & 1/1 & 1/1\\ \hline \end{tabular} \begin{tabular}{lcccc} \hline $K=7$&$r=0$&$r=0.001$&$r=0.005$&$r=0.01$ \\ \hline $n=200$ & 0.954/0.050 & 0.912/0.050 & 0.955/0.054 & 0.973/0.067 \\ $n=500$ & 0.617/0.050 & 0.560/0.051 & 0.656/0.079 & 0.826/0.208\\ $n=1000$ & 0.058/0.050 & 0.062/0.054 & 0.218/0.209 & 0.859/0.833\\ $n=2000$& 0.056/0.050 & 0.074/0.068 & 0.823/0.834 & 1/1\\ $n=3000$& 0.049/0.050 & 0.104/0.094 & 1/0.999 & 1/1\\ \hline \end{tabular} \caption{Empirical estimates (based on $m = 1000$ Monte Carlo replicates) for the type I and type II error compared to the theoretical (limiting) value. Here $A$ and $B$ are $R$-correlated SBM graphs. The first (resp. second) entry in each cell corresponds to the empirical estimate (resp. theoretical value) of the type I error when $r = 0$ and to the power (i.e., one minus the type II error) when $r > 0$. The theoretical values are based on the non-central $\chi^2$ distribution with non-centrality parameter $\mu=r^2(\frac{K^2+1}{4K^2}n^2-\frac{n}{2})$.} \label{Table: SBM1} \end{table} \section{Real Data Experiments}\label{Real_Data} \subsection{Analysis of C.
elegans Data} \label{sec:c_elegans} We now apply the test statistics in Section~\ref{graphon Model} to the connectomes of the {\em C. elegans} roundworm. More specifically, we used the wiring diagram formed by the somatic nervous system, which consists of $279$ neurons; these neurons are classified into one of three categories, namely motor neurons, sensory neurons, and interneurons. There are two types of connections between the neurons, i.e., either via chemical synapses or electrical gap junctions. This results in two related but distinct networks, namely a chemical synapse network $A_c$ with $6394$ edges and a gap junction network $A_g$ with $1777$ edges. See \cite{varshney2011structural} for a more detailed description of the construction of these connectomes. We first consider testing the null hypothesis that $A_c$ and $A_g$ are independent against the alternative hypothesis that they are correlated. As the two graphs have quite different edge densities, we suppose that $A_c$ and $A_g$ are generated from the $R$-$\mathrm{ER}(P,Q)$ model (see Section~\ref{different}) and use the test statistic $T$ in Theorem~\ref{theorem 4} with $k = 3$ as the rank for the estimates $\hat{P}$ and $\hat{Q}$; the choice $k = 3$ is motivated by the fact that there are three categories of neurons. This results in an observed value of $T = 8.313$. We calibrate our test statistic using the bootstrapping procedure in Algorithm~\ref{Analysis_for_real_data} with $m = 10000$ Monte Carlo replicates and obtain an approximate $p$-value of $5 \times 10^{-5}$. There is thus strong evidence to reject the null hypothesis in favor of the alternative hypothesis that the two connectomes are correlated. This conclusion, while biologically relevant, is also certainly expected. We now quantify the degree of correlation between the edges of the two graphs.
Recalling Eq.~\eqref{eq:h_form} we first compute an estimate of the correlation $R_{ij}$ between the edges of $A_c$ and $A_g$ via \begin{equation} \label{eq:estimate_R} \hat{R}_{ij}:= \begin{cases} 0, & \text{if $\hat{P}_{ij}$ or $\hat{Q}_{ij}\in\{0,1\}$}\\ \max\Big\{\min\Big\{\tfrac{\hat{P}_{ij}+\hat{Q}_{ij} - \hat{P}_{ij}\hat{Q}_{ij} - \hat{H}_{ij}}{\bigl(\hat{P}_{ij}(1-\hat{P}_{ij})\hat{Q}_{ij}(1-\hat{Q}_{ij})\bigr)^{1/2}},1\Big\},-1\Big\}, & \text{otherwise.} \end{cases} \end{equation} where $\hat{P}, \hat{Q}$ and $\hat{H}$ are obtained by applying USVT to the adjacency matrices $A_c$, $A_g$, and $C$, respectively; recall that $C_{ij} = 1$ if $A_c(i,j) + A_g(i,j) > 0$ and $C_{ij} = 0$ otherwise. We then compute the average correlations for edges connecting vertices from the same category as well as edges connecting vertices from different categories, e.g., we calculate the sample mean of the $\{\hat{R}_{ij}\}$ when $i$ and $j$ are both motor neurons as well as the sample mean of the $\{\hat{R}_{ij}\}$ when $i$ is a motor neuron and $j$ is a sensory neuron. The results are presented in Table \ref{Correlation_Matrix_Graphon_c.elegans} for all possible pairs of neuron categories; these correlations are all positive and quite large. \begin{table}[tp] \centering \begin{tabular}{|l|c|c|c|} \hline & motor & inter & sensory\\ \hline motor & 0.144 & 0.111 & 0.153\\ inter & 0.111 & 0.088 & 0.137\\ sensory & 0.153 & 0.137 & 0.193\\ \hline \end{tabular} \caption{Sample means of the estimated correlations $\{\hat{R}_{ij}\}$ for different combinations of neuron types for $i$ and $j$.} \label{Correlation_Matrix_Graphon_c.elegans} \end{table} As a sanity check, we also compute the sample Pearson correlation based on the binary entries of $A_c$ and $A_g$ directly, i.e., we compute $\mathrm{Cor}(\{A_c(i,j), A_g(i,j)\})$ where $i$ ranges over all neurons of type $k$ and $j$ ranges over all neurons of type $\ell$ (with $k$ possibly being the same as $\ell$).
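The clipped estimator in Eq.~\eqref{eq:estimate_R} is straightforward to implement entrywise; a sketch (the function name is ours):

```python
import numpy as np

def estimate_R(P_hat, Q_hat, H_hat):
    """Entrywise correlation estimate of Eq. (eq:estimate_R): invert
    Eq. (eq:h_form) for R, clip to [-1, 1], and return 0 wherever the
    estimated marginals are degenerate (P_hat or Q_hat in {0, 1})."""
    denom = np.sqrt(P_hat * (1 - P_hat) * Q_hat * (1 - Q_hat))
    with np.errstate(divide="ignore", invalid="ignore"):
        R_hat = (P_hat + Q_hat - P_hat * Q_hat - H_hat) / denom
    R_hat = np.where(denom > 0, R_hat, 0.0)  # degenerate entries -> 0
    return np.clip(R_hat, -1.0, 1.0)
```

Feeding in the exact $H$ implied by Eq.~\eqref{eq:h_form} recovers the true correlation at every non-degenerate entry, so any estimation error in $\hat{R}$ comes from the USVT estimates of $P$, $Q$, and $H$.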
The results are presented in Table~\ref{Correlation_Matrix_SBM_c.elegans}. We see that these sample Pearson correlations exhibit the same general pattern as that for the $\{\hat{R}_{ij}\}$ in Table~\ref{Correlation_Matrix_Graphon_c.elegans}. Indeed, the differences between the entries in Table~\ref{Correlation_Matrix_Graphon_c.elegans} and Table~\ref{Correlation_Matrix_SBM_c.elegans} are all less than $0.1$ and can be as small as $0.01$ or $0.03$. \begin{table}[tp] \centering \begin{tabular}{|l|c|c|c|} \hline & motor & inter & sensory\\ \hline motor & 0.153 & 0.203 & 0.129\\ inter & 0.203 & 0.145 &0.147\\ sensory & 0.129 & 0.147 & 0.171\\ \hline \end{tabular} \caption{Pearson correlations between the edges in $A_c$ and $A_g$ for different combinations of neuron types.} \label{Correlation_Matrix_SBM_c.elegans} \end{table} Finally, we discuss the use of the $\{\hat{R}_{ij}\}$ to help improve link predictions for the edges of $A_g$. More specifically, we evaluate the accuracy for link prediction using only the estimated edge probabilities matrix $\hat{P}$ against the accuracy when using $\hat{P}$ in conjunction with $\hat{R}$. For both approaches, we first form a subsampled matrix $A_g^{(\mathrm{sub})}$ from $A_g$ by setting $10\%$ of the entries of $A_g$ to $0$. We next apply USVT to $A_g^{(\mathrm{sub})}$ to obtain an estimate $\hat{P}^{(\mathrm{sub})}$ of $P$. Now let $\mathcal{E}$ be the set of entries in $A_g$ that are set to $0$ in $A_g^{(\mathrm{sub})}$. We then threshold the entries $\hat{P}^{(\mathrm{sub})}_{ij}$ for all $(i,j) \in \mathcal{E}$, i.e., for $(i,j) \in \mathcal{E}$ we predict the presence of a link if $\hat{P}^{(\mathrm{sub})}_{ij} > t$ and an absence of a link otherwise. By varying $t \in [0,1]$ we obtain a ROC curve and an associated AUC for link prediction using only the estimated $\hat{P}^{(\mathrm{sub})}$.
A similar approach has also been used in \cite{yuan_zhang,gao2015rate,rubin2017statistical} when the graphs are assumed to be generated from a latent space or graphon model. Link prediction using both $A_c$ and $A_g$ also follows a similar procedure, but this time we use both $A_g^{(\mathrm{sub})}$ and $A_c^{(\mathrm{sub})}$ to estimate the marginal edge probabilities $\hat{P}^{(\mathrm{sub})}$ for $A_g$ and $\hat{Q}^{(\mathrm{sub})}$ for $A_c$ as well as the estimated correlations $\hat{R}^{(\mathrm{sub})}$; here $A_c^{(\mathrm{sub})}$ is obtained by setting the entries of $A_c$ indexed by $\mathcal{E}$ to $0$ and $\hat{R}^{(\mathrm{sub})}$ is calculated using a similar expression as that in Eq.~\eqref{eq:estimate_R} but with $\hat{P}$ and $\hat{Q}$ replaced by $\hat{P}^{(\mathrm{sub})}$ and $\hat{Q}^{(\mathrm{sub})}$, respectively. Given the $\hat{P}^{(\mathrm{sub})}$ and $\hat{R}^{(\mathrm{sub})}$ we then threshold the entries of $\hat{P}^{(\mathrm{sub})}_{ij} + \hat{R}_{ij}^{(\mathrm{sub})}( A_c(i,j) - \hat{P}_{ij}^{(\mathrm{sub})})$ for all $(i, j) \in \mathcal{E}$; this choice is motivated by the fact that if $(A, B) \sim R$-$\mathrm{ER}(P,Q)$ then $\mathbb{P}[A_{ij} = 1 \mid B_{ij}] = P_{ij} + R_{ij}(B_{ij} - P_{ij})$. By varying the threshold $t \in [0,1]$ we also obtain a ROC curve and associated AUC for link prediction using both $\hat{P}^{(\mathrm{sub})}$ and $\hat{R}^{(\mathrm{sub})}$. We perform the above AUC calculations $100$ times, each time choosing a random subset of entries $\mathcal{E}$ to set to $0$. ROC curves for a random realization of $\mathcal{E}$ are shown in Figure \ref{C.elegans_roc}. The average AUC when using only $\hat{P}^{(\mathrm{sub})}$ is $0.575$ with a standard error of $0.002$; in contrast, the average AUC when using both $\hat{P}^{(\mathrm{sub})}$ and $\hat{R}^{(\mathrm{sub})}$ is $0.705$ with a standard error of $0.003$. The use of $\hat{R}^{(\mathrm{sub})}$ thus leads to a significant increase in accuracy.
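The correlation-assisted link score $P_{ij} + R_{ij}(B_{ij} - P_{ij})$ and a rank-based (Mann-Whitney) AUC evaluation can be sketched as follows; both helper names are ours.

```python
import numpy as np
from scipy.stats import rankdata

def auc(scores, labels):
    """Rank-based (Mann-Whitney) AUC; `labels` is a 0/1 array.  Equivalent
    to sweeping a threshold over `scores` and integrating the ROC curve."""
    ranks = rankdata(scores)              # average ranks, handles ties
    n1 = labels.sum()
    n0 = len(labels) - n1
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def joint_scores(P_sub, R_sub, B_obs, idx):
    """Scores for the held-out index pairs idx = (rows, cols) using the
    other graph's observed edges: P + R * (B - P), i.e., the conditional
    probability P[A_ij = 1 | B_ij] under the R-ER(P, Q) model."""
    i, j = idx
    return P_sub[i, j] + R_sub[i, j] * (B_obs[i, j] - P_sub[i, j])
```

Comparing `auc` on `joint_scores` against `auc` on the marginal scores `P_sub[i, j]` alone reproduces the two-column AUC comparison described above.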
\begin{figure} \caption{ROC curves for link prediction on a randomly selected set of entries $\mathcal{E}$.} \label{C.elegans_roc} \end{figure} \begin{algorithm}[tp] \caption{Bootstrap procedure for graphons with possibly $P \not = Q$.} \label{Analysis_for_real_data} \begin{algorithmic} \Require Adjacency matrices $A$ and $B$, both of size $n \times n$, significance level $\alpha \in (0,1)$, number of bootstrap samples $m$. \State (A) Compute the matrix $C$ whose entries are $C_{ij} = 1$ if $A_{ij} + B_{ij} > 0$ and $C_{ij} = 0$ otherwise. \State (B) Compute $\hat{P}, \hat{Q}$ and $\hat{H}$ by applying universal singular value thresholding (USVT) on $A$, $B$, and $C$, respectively. \State (C) Calculate the test statistic $T = \|\hat{H} - \hat{P} -\hat{Q} + \hat{P} \circ \hat{Q}\|_{F}$. \For{$s=1$ to $m$} \State (i) Generate adjacency matrices $(A^{(s)}, B^{(s)})$ according to Definition 5 with $R = 0$ and marginal edge probabilities matrices $\hat{P}$ and $\hat{Q}$. \State (ii) Compute $\hat{P}^{(s)}$ and $\hat{Q}^{(s)}$ as the universal singular value thresholding of $A^{(s)}$ and $B^{(s)}$, respectively. \State (iii) Calculate $T^{(s)} = \|\hat{H}^{(s)}- \hat{P}^{(s)}-\hat{Q}^{(s)} + \hat{P}^{(s)} \circ \hat{Q}^{(s)}\|_{F}$, where $\hat{H}^{(s)}$ is the universal singular value thresholding of $A^{(s)} + B^{(s)} - A^{(s)} \circ B^{(s)}$. \EndFor \State (D) Find the smallest number $t$ such that $T>T_t$, where $T_t$ is the $t$-th largest element in $\{T^{(s)}\}_{s=1}^{m}$. \State (E) Set the p-value to $(t-0.5)/m$. \State \textbf{Output} p-value. \end{algorithmic} \end{algorithm} \subsection{Wikipedia Data} We now analyze two networks formed by a collection of Wikipedia articles. The first network, denoted as $A_e$, consists of $1382$ vertices and $37714$ edges. Each vertex in $A_e$ represents an article in the English Wikipedia on topics related to Algebraic Geometry, and two given vertices are connected if there is a hyperlink between them in the English Wikipedia.
The second network, denoted as $A_f$, consists of $1382$ vertices and $29946$ edges corresponding to the same Wikipedia articles as that in $A_e$ but the hyperlinks are now for the French Wikipedia. See \cite{priebe2009fusion} for a more detailed description of these networks. We now follow the same analysis as that described in Section~\ref{sec:c_elegans} for the {\em C. elegans} data. In particular, we first test the null hypothesis that $A_e$ and $A_f$ are independent. As $A_e$ and $A_f$ are both quite sparse (their edge densities are $0.02$ and $0.016$, respectively), we apply the test statistic in Theorem~\ref{theorem 1} to their complements $\bar{A}_e$ and $\bar{A}_f$, i.e., $\bar{A}_e = 11^{\top} - A_e$ and $\bar{A}_f = 11^{\top} - A_f$ where $11^{\top}$ is the $1382 \times 1382$ matrix of all ones. Note that, by Eq.~\eqref{distribution P_n}, if $(A, B) \sim R$-$\mathrm{ER}(P)$ then $(\bar{A}, \bar{B}) \sim R$-$\mathrm{ER}(11^{\top} - P)$ and hence, assuming the model in Section~\ref{graphon Model} is appropriate, inference based on $T(A, B)$ and $T(\bar{A}, \bar{B})$ are theoretically equivalent. This yields an observed test statistic of $T(\bar{A}_e, \bar{A}_f) = 21.514$ and, using the bootstrapping procedure in Algorithm~\ref{Analysis_for_real_data} with $m = 10000$, an approximate $p$-value of $5 \times 10^{-5}$. We thus reject the null hypothesis in favor of the alternative hypothesis that the English and French Wikipedia networks are correlated. We next quantify the degree of correlation between the edges of $A_e$ and $A_f$. The articles in $A_e$ and $A_f$ can be grouped into six classes, namely (1) people, (2) places, (3) dates, (4) math things (articles about math topics that are neither people, places, nor dates), (5) things (articles about non-math topics that are neither people, places, nor dates), and (6) categories (a special type of Wikipedia article).
We then calculate the sample means of the estimated correlations $\hat{R}$ for edges within the same categories and between different categories (see the descriptions of Table~\ref{Correlation_Matrix_Graphon_c.elegans} and Table~\ref{Correlation_Matrix_SBM_c.elegans} in Section~\ref{sec:c_elegans} for more details). The results are presented in Table~\ref{Correlation_Matrix_Graphon_Wiki} and Table~\ref{Correlation_Matrix_SBM_Wiki}; once again we see that the entries in the two tables are highly similar, and they both indicate that the correlations between the edges of $A_e$ and $A_f$ are positive and quite large. \begin{table}[h] \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline & people & places & dates & things & math things & categories\\ \hline people & .501 & .419 &.336 &.381 &.375 &.417\\ places & .419 & .318 &.269 &.296 &.272 &.307\\ dates & .336 & .269 &.278 &.227 &.148 &.159\\ things & .381 & .296 &.227 &.267 &.238 &.282\\ math things& .375 & .272 &.148 &.238 &.192 &.231\\ categories & .417 & .307 &.159 &.282 &.231 &.273\\ \hline \end{tabular} \caption{Sample means of the estimated correlations $\{\hat{R}_{ij}\}$ for different combinations of Wikipedia article types for $i$ and $j$.} \label{Correlation_Matrix_Graphon_Wiki} \end{table} \begin{table}[h] \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline & people & places & dates & things & math things & categories\\ \hline people & .570 & .501 &.460 &.399 &.499 &.751\\ places &.501 & .489 & .376 & .412 & .295 & .607\\ dates & .460 & .376 & .589 & .292 & .100 & NA\\ things & .399 & .412 & .292 &.348 &.233 & .452\\ math things& .499 & .295 & .100 &.233 & .283 & .507\\ categories & .751 & .607 & NA & .452 &.507 &.462\\ \hline \end{tabular} \caption{Pearson correlations between the edges of $A_e$ and $A_f$ for different combinations of Wikipedia article types.
The value \texttt{NA} for the pair \texttt{dates} and \texttt{categories} is because there are no edges between any vertices in \texttt{dates} and any vertices in \texttt{categories} for both graphs.} \label{Correlation_Matrix_SBM_Wiki} \end{table} Finally, we consider link prediction for the English Wikipedia network $A_e$. We follow the procedure described in Section~\ref{Analysis_for_real_data} wherein we set $10\%$ of the entries of $A_e$ to $0$ and then compare the AUC for link prediction from $\hat{P}^{(\mathrm{sub})}$ alone against that of $\hat{P}^{(\mathrm{sub})}$ and $\hat{R}^{(\mathrm{sub})}$. The sample mean of the AUCs based on $100$ randomly selected $\mathcal{E}$ is $0.863$ (standard error $= 0.0006$) for $\hat{P}^{(\mathrm{sub})}$ only and improves to $0.956$ (standard error $= 0.0005$) when we also include $\hat{R}^{(\mathrm{sub})}$. ROC curves for a random realization of $\mathcal{E}$ are shown in Figure \ref{wiki_roc}. \begin{figure} \caption{ROC curves for link prediction on a randomly selected set of entries $\mathcal{E}$.} \label{wiki_roc} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we formulated independence testing between graphs as, given a pair of inhomogeneous Erd\H{o}s-R\'{e}nyi graphs with edge-correlation $R$, deciding between $\mathbb{H}_0 \colon \|R\|_{F} = 0$ and $\mathbb{H}_A \colon \|R\|_{F} > 0$. We showed that an asymptotically valid and consistent test procedure exists only if $\|R\|_{F}\to\infty$ as the number of vertices $n$ diverges. When the graphs and their pairwise correlations are generated from a latent position model, we proposed an asymptotically valid and consistent test procedure that also runs in time polynomial in $n$. We now mention two directions for future research.
Comparing the theoretical results in Theorem~\ref{theorem 1} and Theorem~\ref{thm:SBM} against either Remark~\ref{rem:not_sufficient} or Theorem~\ref{thm:gap}, we see that while $\|R\|_{F} \rightarrow \infty$ for all of these examples, it is nevertheless easier, both statistically and computationally, to detect $R \not = 0$ when it has some structure. Indeed, for both Theorem~\ref{theorem 1} and Theorem~\ref{thm:SBM} we have $R_{ij} = f(m_{ij})$ where $f$ is a smooth function and the matrix $M = (m_{ij})$ is low-rank. In contrast, the matrix $R$ in Remark~\ref{rem:not_sufficient} and Theorem~\ref{thm:gap} is either completely random or has no low-rank structure. Therefore, while $R \not = 0$ if and only if $\|R\|_{F} > 0$, the magnitude of $\|R\|_{F}$ itself is not sufficiently refined to distinguish between the simple and the more difficult settings for $R \not = 0$. Determining the right measure of the correlation between graphs is thus of both theoretical and practical interest, especially if this measure also leads to thresholds that are both necessary and sufficient for our independence testing problem. Continuing on the above theme, the critical regions for our test statistics in Section~\ref{general} and Section~\ref{different} are based on bootstrapping graphs from the estimated edge probability matrices (see e.g., Algorithm~\ref{Graphon_Model_Testing_Simulation_2}). The validity of these resampling techniques is justified by the empirical simulation studies as well as real data analysis. However, bootstrap sampling of a graph on $n$ vertices generally requires $O(n^2)$ time and $O(n^2)$ memory, which can be prohibitive if $n$ is large. Our test procedures could therefore be made more robust and computationally efficient if we were able to derive the limiting distributions of the test statistics in Theorem~\ref{theorem 1} and Theorem~\ref{theorem 4} and thereby obtain approximate critical values.
We surmise, however, that this will be a quite technical and challenging problem, as it requires substantial refinement of all existing results for USVT, since these exclusively focus on upper bounds for the estimation error in Frobenius norm. Finally, it will also be useful to study other formulations of independence testing for graphs, e.g., by not assuming that they are marginally inhomogeneous Erd\H{o}s-R\'{e}nyi graphs, or by considering more complex correlation structures. A natural and interesting example of this latter type of problem is when we have three or more graphs, since their joint distributions cannot be specified using only the marginal distributions and pairwise edge correlations. \appendix \section*{Proofs of Stated Results} \label{sec:proofs_stated_results} {\bf Proof of Theorem~\ref{thm:main1}.} Let $\mathcal{S} = \{(0,0),(0,1),(1,0),(1,1)\}$. Let $t = n(n-1)$. Now denote by $$\mathcal{S}^{t} = \{(s_1, s_2, \dots, s_t) \colon s_i \in \mathcal{S}\}$$ the set of tuples of length $t$ whose elements are from $\mathcal{S}$. We can then view any realization $(A_{n}, B_n)$ from the $R$-ER$(P)$ model as corresponding to some element of $\mathcal{S}^{t}$. Let $A_{ij}$ and $B_{ij}$ denote the $ij$th element of $A_{n}$ and $B_{n}$, respectively; note that, for ease of exposition, we drop the index $n$ from these notations.
The second moment for the likelihood ratio between $\mathcal{P}_n$ and $\mathcal{Q}_n$ is then given by \begin{equation*} \begin{split} \mathbb{E}_{\mathcal{Q}_n}\Big[\Big(\frac{\mathcal{P}_n(A_n, B_n)}{\mathcal{Q}_n(A_n,B_n)}\Big)^2\Big] &=\sum_{ (A_{n},B_{n}) \in \mathcal{S}^{t}} \frac{\mathcal{P}_n(A_n,B_n)^2}{\mathcal{Q}_n(A_n,B_n)} \\ &=\sum_{(A_{n},B_{n}) \in \mathcal{S}^{t}} \prod_{i< j}\frac{\mathbb{P}_{ij}(A_{ij},B_{ij})^2}{\mathbb{Q}_{ij}(A_{ij}, B_{ij})}\\ &= \prod_{i < j}\Big(\frac{\mathbb{P}_{ij}(1,1)^2}{\mathbb{Q}_{ij}(1,1)}+ \frac{2\mathbb{P}_{ij}(1,0)^2}{\mathbb{Q}_{ij}(1,0)}+\frac{\mathbb{P}_{ij}(0,0)^2}{\mathbb{Q}_{ij}(0,0)}\Big) \\ &=\prod_{i < j}(1+R_{ij}^2). \end{split} \end{equation*} The last equality in the above display is derived as follows (see also Eq.~\eqref{distribution P_n}): \begin{equation*} \begin{split} \frac{\mathbb{P}_{ij}(1,1)^2}{\mathbb{Q}_{ij}(1,1)} + \frac{2\mathbb{P}_{ij}(1,0)^2}{\mathbb{Q}_{ij}(1,0)} + \frac{\mathbb{P}_{ij}(0,0)^2}{\mathbb{Q}_{ij}(0,0)} &= P_{ij}^2 +2 P_{ij}(1-P_{ij})R_{ij} +(1-P_{ij})^2R_{ij}^2\\ &+2P_{ij}(1-P_{ij}) - 4 P_{ij}(1-P_{ij})R_{ij} \\ &+2P_{ij}(1-P_{ij})R_{ij}^2 \\ &+(1-P_{ij})^2 +2P_{ij}(1-P_{ij})R_{ij} +P_{ij}^2R_{ij}^2\\ &=1+R_{ij}^2. \end{split} \end{equation*} We thus obtain \begin{equation} \label{eq:2nd_moment} \begin{split} \mathbb{E}_{\mathcal{Q}_n}\Big[\Big(\frac{\mathcal{P}_n(A_n,B_n)}{\mathcal{Q}_n(A_n,B_n)}\Big)^2\Big] =\prod_{i < j}(1+R_{ij}^2). \end{split} \end{equation} Theorem~\ref{thm:main1} follows directly from Eq.~\eqref{eq:2nd_moment} and the following technical lemma. \begin{mylem} \label{lem:technical} $\limsup \prod_{i < j} (1 + R_{ij}^2) < \infty$ iff $\limsup \|R\|_F^2 < \infty$. \end{mylem} \begin{proof} First suppose that $\limsup_{n \rightarrow \infty} \|R\|_F^2\leq C$ for some finite constant $C>0$. Denote $N = n(n-1)/2$.
Then by Jensen's inequality we have, for all but a finite number of $n$, that \begin{equation*} \begin{split} \log \Bigl(\prod_{i < j}(1+R_{ij}^2)\Bigr) &= \sum_{i < j}\log(1+R_{ij}^2) \\ &\leq N\log\Bigl(1+\frac{1}{N}\sum_{i < j}R_{ij}^2\Bigr)\\ &\leq \log\Big[\Bigl(1+\frac{1}{2N}\|R\|_F^2\Bigr)^{N}\Big] \leq\log\Big[\Bigl(1+\frac{C}{2N}\Bigr)^{N}\Big] \leq C/2. \end{split} \end{equation*} Conversely, suppose $\limsup_{n \rightarrow \infty} \sum_{i < j}\log(1+R_{ij}^2)\leq C$ for some finite constant $C>0$. Then, as $R_{ij}^2 \leq 1$ for all $\{i,j\}$ and $R_{ii} = 0$ for all $i$, we have \begin{equation*} \begin{split} \frac{1}{2}\|R\|_F^2 & \leq \sum_{i < j}\Bigl[e^{\log(1+R_{ij}^2)}-1\Bigr]\\ &=\sum_{k=1}^\infty\sum_{i < j}\frac{\log^k(1+R_{ij}^2)}{k!} \\ & \leq \sum_{k=1}^\infty\sum_{i < j}\frac{\log(1+R_{ij}^2) \times \log^{k-1} 2}{k!} \\ & =\Bigl(\sum_{i < j}\log(1+R_{ij}^2)\Bigr) \sum_{k=1}^\infty\frac{\log^{k-1} 2}{k!} \leq \frac{C}{\log 2}, \end{split} \end{equation*} as desired. \end{proof} {\bf Proof of Theorem~\ref{thm:gap}.} Suppose we are given an adjacency matrix $S$ sampled from a planted clique model with edge probability $p = \tfrac{1}{2}$ and {\em unknown} clique size $s_0$. Let us generate a pair of {\em undirected} random graphs $(A, B)$ as follows. The collection $\{(A_{ij}, B_{ij})\}$ for $i < j$ are {\em independent} bivariate random variables and furthermore, for any pair $i < j$, \begin{gather*} \mathbb{P}(A_{ij} = B_{ij} = 1 \mid S_{ij} = 0) = \mathbb{P}(A_{ij} = B_{ij} = 0 \mid S_{ij} = 0) = 0.5,\\ \mathbb{P}(A_{ij} = 1, B_{ij} = 0 \mid S_{ij} = 1) = \mathbb{P}(A_{ij} = 0, B_{ij} = 1 \mid S_{ij} = 1) = 0.5. \end{gather*} Let $\xi_{ij} = 1$ if vertices $i$ and $j$ are both part of the planted clique in $S$ and $\xi_{ij} = 0$ otherwise. Note that we can view the $\xi_{ij}$ as deterministic quantities by assuming that the vertices forming the planted clique are chosen prior to adding the random edges in $S$.
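Before continuing, both the single-pair identity behind the second-moment computation and the two bounds in the lemma's proof can be checked numerically. The sketch below assumes the correlated-Bernoulli joint law $\mathbb{P}_{ij}(1,1)=p^2+Rp(1-p)$, $\mathbb{P}_{ij}(1,0)=p(1-p)(1-R)$, $\mathbb{P}_{ij}(0,0)=(1-p)^2+Rp(1-p)$, which is our reading of Eq.~\eqref{distribution P_n} (that equation is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# per-pair identity: the weighted sum of squared likelihood ratios is 1 + R^2
for _ in range(200):
    p, Rij = rng.uniform(0.05, 0.95), rng.uniform(0.0, 1.0)
    P11 = p*p + Rij*p*(1 - p)
    P10 = p*(1 - p)*(1 - Rij)
    P00 = (1 - p)**2 + Rij*p*(1 - p)
    lhs = P11**2/(p*p) + 2*P10**2/(p*(1 - p)) + P00**2/((1 - p)**2)
    assert np.isclose(lhs, 1 + Rij*Rij)

# the lemma's bounds: log prod(1+R_ij^2) <= ||R||_F^2/2 <= (log prod)/log 2
n = 40
R = np.triu(rng.uniform(-1, 1, (n, n)), 1)
R = R + R.T                                  # symmetric, zero diagonal
log_prod = np.sum(np.log1p(R[np.triu_indices(n, 1)]**2))
half_fro2 = np.linalg.norm(R)**2 / 2         # equals sum over i < j of R_ij^2
assert log_prod <= half_fro2 + 1e-9
assert half_fro2 <= log_prod / np.log(2) + 1e-9
```

The forward bound also follows directly from $\log(1+x)\le x$; the reverse bound uses $x \le \log(1+x)/\log 2$ for $x \in [0,1]$, which is the per-term content of the chain of inequalities in the proof.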
In particular, $S_{ij} = 1$ whenever $\xi_{ij} = 1$ and $S_{ij} \sim \mathrm{Bernoulli}(0.5)$ otherwise. We therefore have \begin{equation*} \begin{split} \mathbb{P}(A_{ij} = 1, B_{ij} = 1) &= \mathbb{P}(A_{ij} = B_{ij} = 1 \mid S_{ij} = 0) \times \mathbb{P}(S_{ij} = 0) \\ &= 0.5 \times \bigl(\mathbb{P}(S_{ij} = 0, \xi_{ij} = 0) + \mathbb{P}(S_{ij} = 0, \xi_{ij} = 1)\bigr) \\ &= \tfrac{1}{4} \times \bm{1}\{\xi_{ij} = 0\}. \end{split} \end{equation*} Similar reasoning yields \begin{gather*} \mathbb{P}(A_{ij} = 0, B_{ij} = 0) = \mathbb{P}(A_{ij} = B_{ij} = 0 \mid S_{ij} = 0) \times \mathbb{P}(S_{ij} = 0) = \tfrac{1}{4} \times \bm{1}\{\xi_{ij} = 0\}, \\ \mathbb{P}(A_{ij} = 0, B_{ij} = 1) = 0.5 \times \mathbb{P}(S_{ij} = 1) = \tfrac{1}{4} \times \bm{1}\{\xi_{ij} = 0\} + \tfrac{1}{2} \times \bm{1}\{\xi_{ij} = 1\}, \\ \mathbb{P}(A_{ij} = 1, B_{ij} = 0) = \tfrac{1}{4} \times \bm{1}\{\xi_{ij} = 0\} + \tfrac{1}{2} \times \bm{1} \{\xi_{ij} = 1\}, \end{gather*} and hence $(A, B)$ is a realization of an $R$-correlated Erd\H{o}s-R\'{e}nyi graph pair with edge probability $p = \tfrac{1}{2}$ and $R_{ij} = -\xi_{ij}$ for all $\{i,j\}$ (see Eq.~\eqref{distribution P_n}). Now suppose that either $s_0 = 0$ or $s_0 = n^{1/4}$. Then, given $S$ and the pair $(A, B)$ randomly generated from $S$, we have \begin{align*} \mathbb{H}_0^{(1)} \colon \|R\|_F = 0 & \Longleftrightarrow \mathbb{H}_0^{(2)} \colon \text{$S$ has no planted clique}\\ \mathbb{H}_A^{(1)} \colon\|R\|_F = n^{1/4} & \Longleftrightarrow \mathbb{H}_A^{(2)} \colon \text{$S$ has a planted clique of size at least $n^{1/4}$.} \end{align*} Therefore, for any given instance $S \sim \text{PlantedClique}(n, 1/2, s_0)$, there exists an instance $(A, B)$ from $R$-$\mathrm{ER}(1/2)$ where $R$ is such that (1) the pair $(A, B)$ is generated in polynomial time and (2) the planted clique problem on $S$ is equivalent to deciding between the null and alternative hypotheses for $\|R\|_F$.
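The joint probabilities above, the preservation of the Bernoulli($1/2$) marginals, and the claimed correlation $R_{ij} = -\xi_{ij}$ can all be verified mechanically (a sketch; the helper name is ours):

```python
import numpy as np

def joint_pmf(xi):
    # (A_ij, B_ij) generated from the planted-clique instance S:
    # S_ij = 1 when xi = 1, else S_ij ~ Bernoulli(1/2);
    # given S_ij = 0: (1,1) or (0,0) with probability 1/2 each;
    # given S_ij = 1: (1,0) or (0,1) with probability 1/2 each.
    pS0 = 0.0 if xi == 1 else 0.5
    return {(1, 1): 0.5*pS0, (0, 0): 0.5*pS0,
            (1, 0): 0.5*(1 - pS0), (0, 1): 0.5*(1 - pS0)}

for xi in (0, 1):
    pmf = joint_pmf(xi)
    assert np.isclose(sum(pmf.values()), 1.0)
    pA = pmf[(1, 1)] + pmf[(1, 0)]        # marginal edge probability of A
    pB = pmf[(1, 1)] + pmf[(0, 1)]
    assert pA == pB == 0.5                # marginals stay Erdos-Renyi(1/2)
    corr = (pmf[(1, 1)] - pA*pB) / (pA*(1 - pA))
    assert np.isclose(corr, -xi)          # Pearson correlation is -xi
```

In particular, the marginal laws of $A$ and $B$ carry no information about the clique; only the pairwise correlation structure does.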
Thus, assuming the Planted Clique conjecture holds, i.e., the PlantedClique problem requires quasi-polynomial time, there is no efficient algorithm for deciding between $\|R\|_F = 0$ versus $\|R\|_F > 0$ in this setting. {\bf Proof of Theorem~\ref{theorem 1}.} Let $h_n = \rho_n h$ and $g_n = \gamma_n g$. Now recall Eq.~\eqref{eq:graphon_H}. Then $C$ corresponds to the adjacency matrix of a latent position graph with latent positions $\{(X_i, Y_i)\}_{i=1}^{n}$ and link function $f_n \colon \mathbb{R}^{d + d'} \times \mathbb{R}^{d + d'} \to [0,1]$ given by \begin{equation}\label{eq:graphon_Ha} f_n((x,y),(x',y')) = h_n(x,x') + (1 - g_n(y,y')) h_n(x,x')(1 - h_n(x,x')). \end{equation} Next suppose that $\|R\|_{F} = 0$. Then by Theorem~3 in \cite{xu2017rates}, for all $c>0$ there exists a constant $C$ such that with probability at least $1 - n^{-c}$ \begin{eqnarray} \label{eq:theorem1_xu1} \| \hat{P} - P \|_F \leq C (n \rho_n)^{\tfrac{s + d}{2s + d}}, \quad \text{and} \quad \| \hat{H} - 2P + P \circ P \|_F \leq C (n \rho_n)^{\tfrac{s+d}{2s+d}} \end{eqnarray} simultaneously. Similarly, suppose instead that $\|R\|_F > 0$ holds. Once again by Theorem~3 in \cite{xu2017rates}, there exists a constant $C'$ such that with probability at least $1 - n^{-c}$, \begin{eqnarray} \label{eq:theorem1_xu2} \| \hat{P} - P \|_F \leq C' (n \rho_n)^{\tfrac{s + d}{2s + d}}, \quad \text{and} \quad \| \hat{H} - H \|_F \leq C' (n \rho_n)^{\tfrac{s+d+d'}{2s + d + d'}} \end{eqnarray} simultaneously. We note that the upper bound for $\|\hat{H} - H\|_{F}$ in Eq.~\eqref{eq:theorem1_xu2} is larger than that in Eq.~\eqref{eq:theorem1_xu1}, and this is due mainly to the fact that if $\|R\|_{F} > 0$ then we will be using the latent positions $Z_i = (X_i, Y_i) \in \mathbb{R}^{d+d'}$ together with a link function $f_n$ that is also at least $s$ times continuously differentiable.
We therefore have, with probability at least $1 - n^{-c}$, that \begin{eqnarray*} \| \hat{P}\circ\hat{P} - P\circ P \|_F^2 = \sum_{i, j} (\hat{P}_{ij} + P_{ij})^2(\hat{P}_{ij} - P_{ij})^2 \leq 4\| \hat{P} - P \|_F^2 \leq 4C^2 (n\rho_n)^{\tfrac{2s+2d}{2s + d}}, \end{eqnarray*} and hence, for $\|R\|_F = 0$ we have \begin{eqnarray*} \|\hat{H} - 2 \hat{P} + \hat{P}\circ\hat{P}\|_F &\leq& \|\hat{H} - 2P + P \circ P \|_F + 2\|\hat{P} - P\|_F + \|\hat{P} \circ \hat{P} - P \circ P\|_{F} \\ &\leq& 4C(n\rho_n)^{\tfrac{s+d}{2s+d}} \end{eqnarray*} with probability at least $1 - n^{-c}$. Similarly, for $\|R\|_F > 0$ we have \begin{equation*} \begin{split} \|\hat{H} - 2 \hat{P} + \hat{P} \circ \hat{P}\|_F &\geq \|H - 2P + P\circ P\|_F - \bigl(\|\hat{H} - H\|_F + 2 \|\hat{P} - P\|_{F} + \|\hat{P}\circ\hat{P} - P\circ P\|_F\bigr) \\ &\geq \|H - 2P + P\circ P\|_F - 4C'(n \rho_n)^{\tfrac{s+d+d'}{2s+d+d'}} \\ &= \|R \circ (P - P \circ P)\|_{F} - 4C'(n \rho_n)^{\tfrac{s+d+d'}{2s+d+d'}} \end{split} \end{equation*} Now let $\alpha = \tfrac{s + d + d'}{2s + d + d'}$ and note that $\frac{s+d}{2s+d} \leq \alpha$ for any choice of $s \geq 0, d \geq 0$ and $d' \geq 0$. Define $T(A,B)$ as the test statistic $$T(A, B) = \frac{\|\hat{H} - 2 \hat{P} + \hat{P} \circ \hat{P}\|_{F}}{(n \rho_n)^{\alpha} \log^{1/2}{n}}.$$ If $\|R\|_{F} = 0$ then $T(A, B) \rightarrow 0$ as $n \rightarrow \infty$. Furthermore, if $\|R \circ (P - P \circ P)\|_{F} = \Omega((n \rho_n)^{\alpha'})$ for any $\alpha' > \alpha$ then $T(A, B) \rightarrow \infty$ as $n \rightarrow \infty$. Thus rejecting $\mathbb{H}_0$ for large values of $T(A, B)$ leads to an asymptotically valid and consistent test.
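The two norm manipulations used above, namely the entrywise bound on the Hadamard squares and the triangle-inequality chain, can be sanity-checked on arbitrary matrices with entries in $[0,1]$ (a sketch; the perturbation scale and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
fro = np.linalg.norm
n = 30
P = rng.uniform(0, 1, (n, n))
P_hat = np.clip(P + rng.normal(0, 0.05, (n, n)), 0, 1)   # a noisy estimate
H = rng.uniform(0, 1, (n, n))
H_hat = np.clip(H + rng.normal(0, 0.05, (n, n)), 0, 1)

# since each entry of (P_hat + P) lies in [0, 2], the Hadamard squares obey
# ||P_hat o P_hat - P o P||_F <= 2 ||P_hat - P||_F
assert fro(P_hat*P_hat - P*P) <= 2*fro(P_hat - P) + 1e-9

# the triangle-inequality chain behind the upper bound when ||R||_F = 0
lhs = fro(H_hat - 2*P_hat + P_hat*P_hat)
rhs = fro(H_hat - 2*P + P*P) + 2*fro(P_hat - P) + fro(P_hat*P_hat - P*P)
assert lhs <= rhs + 1e-9
```

Both inequalities hold deterministically for any matrices with entries in $[0,1]$; the randomness here only supplies a generic test case.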
{\bf Proof of Corollary~\ref{cor:lpg}.} If $f$ and $g$ are infinitely differentiable then, in place of Eq.~\eqref{eq:theorem1_xu1} and Eq.~\eqref{eq:theorem1_xu2}, we have $$\|\hat{P} - P\|_{F} = O((n \rho_n)^{1/2} \log^{d/2}(n \rho_n)), \quad \|\hat{H} - 2P + P \circ P\|_{F} = O((n \rho_n)^{1/2} \log^{d/2}(n \rho_n))$$ under $\mathbb{H}_0$ and $$\|\hat{P} - P\|_{F} = O((n \rho_n)^{1/2} \log^{d/2}(n \rho_n)), \quad \|\hat{H} - H\|_{F} = O((n \rho_n)^{1/2} \log^{(d+d')/2}(n \rho_n))$$ under $\mathbb{H}_A$. See Theorem~4 in \cite{xu2017rates} for a statement of these bounds. The remaining steps follow the same argument as that presented in the proof of Theorem~\ref{theorem 1}. We omit the details. {\bf Proof of Theorem~\ref{thm:SBM}.} Recall that the vertices of $C$ are clustered using a community detection algorithm which guarantees exact recovery (see e.g., \cite{abbe2017community,gao2017achieving,lyzinski2014perfect}). We therefore have $\hat{\tau} = \tau$ asymptotically almost surely. Let us now condition on the event that $\hat{\tau} = \tau$. Then for any $k, \ell \in \{1,2,\dots,K\}$, the collection $\{(A_{ij}, B_{ij}) \colon \tau_i = k, \tau_{j} = \ell\}$ are iid bivariate random vectors with Pearson correlation $\rho_{k \ell}$. The central limit theorem then implies $$\sqrt{n_{k \ell}} (\hat{\rho}_{k\ell}-{\rho}_{k\ell}) \overset{d}{\to}\mathcal{N}(0,(1 - \rho_{k \ell}^2)^2).$$ Furthermore, as $\hat{\rho}_{k \ell}$ depends only on the edges from vertices in the $k$th block to vertices in the $\ell$th block, the $\{\hat{\rho}_{k \ell}\}$ are {\em mutually} independent. Now suppose that the null hypothesis is true. Then $\rho_{k \ell} \equiv 0$ and the $\sqrt{n_{k \ell}} \hat{\rho}_{k \ell}$ are asymptotically iid standard normal. In other words we have $$\sum_{k \leq \ell} n_{k \ell} \hat{\rho}^{2}_{k \ell} \overset{d}{\rightarrow} \chi^2_{K(K+1)/2}$$ under $\mathbb{H}_0$.
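The block-wise statistic $\sum_{k \leq \ell} n_{k\ell}\hat{\rho}_{k\ell}^{\,2}$ is straightforward to compute once the communities are known. A sketch on toy data, with the exact-recovery clustering step replaced by the true labels $\tau$ (the graph sizes, edge density, and names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def block_statistic(A, B, tau, K):
    # sum over blocks k <= l of n_kl times the squared Pearson correlation
    # of the edge indicators {(A_ij, B_ij) : tau_i = k, tau_j = l, i < j}
    stat = 0.0
    for k in range(K):
        for l in range(k, K):
            pairs = [(i, j) for i in np.flatnonzero(tau == k)
                     for j in np.flatnonzero(tau == l) if i < j]
            a = np.array([A[i, j] for i, j in pairs])
            b = np.array([B[i, j] for i, j in pairs])
            rho = np.corrcoef(a, b)[0, 1]
            stat += len(pairs) * rho**2
    return stat

n, K = 100, 2
tau = np.repeat([0, 1], n // 2)
A = np.triu(rng.random((n, n)) < 0.4, 1).astype(float); A += A.T
B = np.triu(rng.random((n, n)) < 0.4, 1).astype(float); B += B.T

stat_null = block_statistic(A, B, tau, K)   # approximately chi^2, K(K+1)/2 df
stat_corr = block_statistic(A, A, tau, K)   # rho_kl = 1 gives sum_kl n_kl
```

Under independence the statistic is approximately $\chi^2$ with $K(K+1)/2 = 3$ degrees of freedom here, while for identical graphs every $\hat{\rho}_{k\ell}$ equals one, so the statistic equals the total number of vertex pairs.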
Next suppose that the alternative hypothesis is true and that there exists a constant $\mu > 0$ such that $$\sum_{k \leq \ell} n_{k \ell} \rho_{k \ell}^2 \rightarrow \mu.$$ Then for any $k, \ell$, the term $\sqrt{n_{k \ell}} \rho_{k \ell}$ is bounded and thus, by Slutsky's theorem, we have $$\sqrt{n_{k \ell}}(\hat{\rho}_{k \ell} - \rho_{k \ell}) \overset{d}{\rightarrow} \mathcal{N}(0,1)$$ for all $k, \ell$. We can then follow the same arguments as those in \cite{guenther1964another} and show that the limiting distribution of $\sum_{k \leq \ell} n_{k \ell} \hat{\rho}_{k \ell}^2$ depends on the $\{\rho_{k \ell}\}$ only through the quantity $\sum_{k \leq \ell} n_{k \ell} \rho_{k \ell}^2$. Hence, as $\sum_{k \leq \ell} n_{k \ell} \rho_{k \ell}^2 \rightarrow \mu$ we have by Slutsky's theorem that $$\sum_{k\leq \ell}n_{k\ell}\hat{\rho}_{k\ell}^2 \overset{d}{\to} \chi^2_{K(K+1)/2}(\mu)$$ as desired. \end{document}
\begin{document} \title{The Varieties of Minimal Tomographically Complete Measurements} \author[$\dag\star$]{John B.\ DeBrota} \author[$\dag\star$]{Christopher A.\ Fuchs} \author[$\dag$]{Blake C.\ Stacey} \affil[$\dag$]{\small \href{http://www.physics.umb.edu/Research/QBism}{QBism Group}, Physics Department, University of Massachusetts Boston, \par 100 Morrissey Boulevard, Boston MA 02125, USA} \affil[$\star$]{\href{http://stias.ac.za/events/workshop-on-participatory-realism/}{Stellenbosch Institute for Advanced Study} (STIAS), Wallenberg Research Center at Stellenbosch University, Marais Street, Stellenbosch 7600, South Africa} \date{\today} \maketitle \begin{abstract} Minimal Informationally Complete quantum measurements, or MICs, illuminate the structure of quantum theory and how it departs from the classical. Central to this capacity is their role as tomographically complete measurements with the fewest possible outcomes for a given finite dimension. Despite their advantages, little is known about them. We establish general properties of MICs, explore constructions of several classes of them, and develop the theory of MIC Gram matrices further. These Gram matrices turn out to be a rich subject of inquiry, relating linear algebra, number theory and probability. Among our results are some equivalent conditions for unbiased MICs, a characterization of rank-1 MICs through the Hadamard product, several ways in which immediate properties of MICs capture the abandonment of classical phase space intuitions, and a numerical study of MIC Gram matrix spectra. We also present, to our knowledge, the first example of an unbiased rank-1 MIC which is not group covariant. This work provides further context to the discovery that the symmetric informationally complete quantum measurements (SICs) are in many ways optimal among MICs. In a deep sense, the ideal measurements of quantum physics are not orthogonal bases.
\end{abstract} \section{Introduction} \label{sec:intro} \normalsize A significant part of science is the pursuit of measurements that are as informative as possible. Attempts to provide an elementary explanation of ``the scientific method'' sometimes convey the notion that an ideal measurement is one which is exactly reproducible, always yielding the same answer when applied in succession. But this notion has fairly obvious problems, for example, when the system being measured is dynamical. When the experiment's sought outcome is the position of Mars at midnight, the numbers will not be the same from one night to the next, and yet Kepler could run a scientific revolution on that data. A more refined standard would be that an ideal measurement is one that provides enough information to project the complete dynamical trajectory of the measured system through phase space. Quantum physics frustrates this ambition by denying the phase space: Quantum uncertainties are not uncertainties about the values of properties that pre-exist the act of measurement. Yet the ideal of a sufficiently informative measurement, the expectations for which fully fix the expectations for any other, can still be translated from classical thought to quantum, and doing so illuminates the nature of quantum theory itself. Let $\mathcal{H}_d$ be a $d$-dimensional complex Hilbert space, and let $\{E_i\}$ be a set of positive semidefinite operators on that space which sum to the identity: \begin{equation} \sum_{i=1}^N E_i = I. \end{equation} The set $\{E_i\}$ is a \emph{positive-operator-valued measure} (POVM), which is the mathematical representation of a measurement process in quantum theory. Each element in the set --- called an \emph{effect} --- stands for a possible outcome of the measurement~\cite[\S 2.2.6]{Nielsen:2010}. 
A POVM is said to be \emph{informationally complete} (IC) if the operators $\{E_i\}$ span $\mathcal{L}(\mathcal{H}_d)$, the space of Hermitian operators on $\mathcal{H}_d$, and an IC POVM is said to be \emph{minimal} if it contains exactly $d^2$ elements. For brevity, we can call a minimal IC POVM a MIC. A matrix which captures many important properties of a MIC is its Gram matrix, that is, the matrix $G$ whose entries are given by \begin{equation} [G]_{ij} := {\rm tr}\, E_i E_j\;. \end{equation} Of particular note among MICs are those which enjoy the symmetry property \begin{equation} [G]_{ij} = [G_{\rm SIC}]_{ij} := \frac{1}{d^2} \frac{d\delta_{ij} + 1}{d+1}\;. \end{equation} These are known as \emph{symmetric} informationally complete POVMs, or SICs for short~\cite{Zauner:1999, Renes:2004, Scott:2010a, Fuchs:2017a}. In addition to their purely mathematical properties, SICs are of central interest to the technical side of QBism, a research program in the foundations of quantum mechanics~\cite{Fuchs:2014b, Fuchs:2013, Healey:2016, Fuchs:2016a}. Investigations motivated by foundational concerns led to the discovery that SICs are in many ways optimal among MICs~\cite{Appleby:2014b, Appleby:2015, DeBrota:2018}. In this paper, we elaborate upon some of those results and explore the conceptual context of MICs more broadly. MICs provide a new way of understanding the Born Rule, a key step in how one uses quantum physics to calculate probabilities. The common way of presenting the Born Rule suggests that it fixes probabilities in terms of more fundamental quantities, namely quantum states and measurement operators. MICs, however, suggest a change of viewpoint. From this new perspective, the Born Rule should be thought of as a \emph{consistency condition between the probabilities assigned in diverse scenarios} --- for instance, probabilities assigned to the outcomes of complementary experiments. 
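Returning to the Gram matrices defined above: for a concrete instance in $d=2$, the standard qubit SIC built from a regular tetrahedron on the Bloch sphere reproduces $[G_{\rm SIC}]_{ij}$ exactly. The sketch below is ours, and it also verifies numerically two facts about unbiased MICs established later in the paper, namely that $dG$ is doubly stochastic and that the largest eigenvalue of $G$ is $1/d$:

```python
import numpy as np

# the standard qubit SIC: E_i = (1/4)(I + n_i . sigma), with n_i the
# vertices of a regular tetrahedron inscribed in the Bloch sphere
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
r = 1 / np.sqrt(3)
verts = [(r, r, r), (r, -r, -r), (-r, r, -r), (-r, -r, r)]
E = [(I2 + x*sx + y*sy + z*sz) / 4 for x, y, z in verts]

assert np.allclose(sum(E), I2)                           # the effects sum to I
assert all(np.linalg.matrix_rank(Ei) == 1 for Ei in E)   # a rank-1 POVM

G = np.array([[np.trace(Ei @ Ej).real for Ej in E] for Ei in E])
d = 2
G_sic = (d * np.eye(d**2) + 1) / (d**2 * (d + 1))        # [G_SIC]_ij above
assert np.allclose(G, G_sic)

# dG is doubly stochastic and lambda_max(G) = 1/d (an unbiased MIC)
assert np.allclose((d * G).sum(axis=0), 1.0)
assert np.allclose((d * G).sum(axis=1), 1.0)
assert np.isclose(np.linalg.eigvalsh(G)[-1], 1 / d)
```

Here each effect has trace $1/2 = 1/d$, the diagonal of $G$ is $1/4$, and the off-diagonal entries are $1/12$, matching $\frac{1}{d^2}\frac{d\delta_{ij}+1}{d+1}$ for $d=2$.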
The bare axioms of probability theory do not themselves impose relations between probabilities given different conditionals: In the abstract, nothing ties together $P(E|C_1)$ and $P(E|C_2)$. Classical intuition suggests one way to fit together probability assignments for different experiments, and quantum physics implies another. The discrepancy between these standards encapsulates how quantum theory departs from classical expectations~\cite{Fuchs:2017b, Stacey:2018b}. MICs provide the key to addressing this discrepancy; any MIC may play the role of a reference measurement through which the quantum consistency condition may be understood. To understand MICs is to understand how quantum probability is like, and differs from, classical. In the next section, we introduce the fundamentals of quantum information theory and the necessary concepts from linear algebra to prove a few basic results about MICs and comment on their conceptual meaning. Among the results included are a characterization of unbiased MICs, a condition in terms of matrix rank for when a set of vectors in~$\mathbb{C}^d$ can be fashioned into a MIC, and an explicit example of an unbiased MIC which is not group covariant. In Section~\ref{sec:constructions}, we show how to construct several classes of MICs explicitly and note some properties of their Gram matrices. In Section~\ref{sec:optimal}, we explore several ways in which SICs are optimal among MICs for the project of differentiating the quantum from the classical, a topic complementing one of our recent papers~\cite{DeBrota:2018}. To conclude, in Section~\ref{sec:numerics}, we conduct an initial numerical study of the Gram matrix eigenvalue spectra of randomly-chosen MICs of four different types. The empirical eigenvalue distributions we find have intriguing features, not all of which have been explained yet. 
\section{Basic Properties of MICs} \label{sec:basics} We begin by briefly establishing the necessary notions from quantum information theory on which this paper is grounded. In quantum physics, each physical system is associated with a complex Hilbert space. Often, in quantum information theory, the Hilbert space of interest is taken to be finite-dimensional. We will denote the dimension throughout by $d$. A \emph{quantum state} is a positive semidefinite operator of unit trace. The extreme points in the space of quantum states are the rank-1 projection operators: \begin{equation} \rho = \ketbra{\psi}{\psi}. \end{equation} These are idempotent operators; that is, they all satisfy $\rho^2 = \rho$. If an experimenter ascribes the quantum state $\rho$ to a system, then she finds her probability for the $i^{\rm th}$ outcome of the measurement modeled by the POVM $\{E_i\}$ via the Hilbert--Schmidt inner product: \begin{equation} p(E_i) = {\rm tr}\,\rho E_i. \end{equation} This formula is a standard presentation of the Born Rule. The condition that the $\{E_i\}$ sum to the identity ensures that the resulting probabilities are properly normalized. If the operators $\{E_i\}$ span the space of Hermitian operators, then the operator $\rho$ can be reconstructed from its inner products with them. In other words, the state $\rho$ can be calculated from the probabilities $\{p(E_i)\}$, meaning that the measurement is ``informationally complete'' and the state $\rho$ can, in principle, be dispensed with. Any MIC can thus be considered a ``Bureau of Standards'' measurement, that is, a reference measurement in terms of which all states and processes can be understood~\cite{Fuchs:2002}. Writing a quantum state $\rho$ is often thought of as specifying the ``preparation'' of a system, though this terminology is overly restrictive, and the theory applies just as well to physical systems that were not processed on a laboratory workbench~\cite{Fuchs:2011c}. 
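A minimal sketch of this reconstruction in $d=2$, using the standard qubit SIC (a Bloch-sphere tetrahedron) as the reference measurement: we compute Born-rule probabilities for a state and then recover the state by inverting the linear map, whose inverse is furnished by what the next passage calls the dual basis. The particular state and all names are our choices.

```python
import numpy as np

# reference measurement: the standard qubit SIC from a Bloch tetrahedron
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
r = 1 / np.sqrt(3)
E = [(I2 + x*sx + y*sy + z*sz) / 4
     for x, y, z in [(r, r, r), (r, -r, -r), (-r, r, -r), (-r, -r, r)]]

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])  # some quantum state
p = np.array([np.trace(rho @ Ei).real for Ei in E])     # Born-rule probabilities
assert np.isclose(p.sum(), 1.0)

# dual operators: with vec(E_i) as the columns of M, the conjugated rows of
# M^{-1} vectorize operators E~_i satisfying tr(E~_i E_j) = delta_ij
M = np.column_stack([Ei.reshape(-1) for Ei in E])
Etil = [row.conj().reshape(2, 2) for row in np.linalg.inv(M)]
rho_rec = sum(pi * Eti for pi, Eti in zip(p, Etil))     # rho = sum_i p_i E~_i
assert np.allclose(rho_rec, rho)

# each dual operator is indefinite: eigenvalues of both signs, a property
# established for all MICs below
for Eti in Etil:
    ev = np.linalg.eigvalsh(Eti)
    assert ev[0] < 0 < ev[-1]
```

For this SIC the dual operators work out to $\widetilde{E}_i = 6E_i - I$, with eigenvalues $\{-1, 2\}$, so the reconstruction coefficients are genuinely signed even though the probabilities are not.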
Given any POVM $\{E_i\}$, we can always write its elements as unit-trace positive semidefinite operators with appropriate scaling factors we call \textit{weights}: \begin{equation} E_i := e_i \rho_i, \hbox{ where } e_i = {\rm tr}\, E_i. \end{equation} If the operators $\rho_i$ are all rank-1 projectors, we will refer to the set $\{E_i\}$ as a \emph{rank-1 POVM}. We will call a POVM \emph{unbiased} when the weights $e_i$ are all equal. Such operator sets represent quantum measurements that have no intrinsic bias: Under the Born Rule they map the ``garbage state'' $(1/d)I$ to a flat probability distribution. For an unbiased MIC, the condition that the elements sum to the identity then fixes $e_i = 1/d$. A \textit{column (row) stochastic matrix} is a real matrix with nonnegative entries whose columns (rows) sum to $1$. If a matrix is both column and row stochastic we say it is \textit{doubly stochastic}. The following theorem allows us to identify an unbiased MIC from a glance at its Gram matrix or Gram matrix spectrum. \begin{theorem}\label{unbiased} Let $\{E_i\}$ be a MIC and $\lambda_{\rm max}(G)$ be the maximal eigenvalue of its Gram matrix $G$. The following are equivalent: \begin{enumerate} \item $\{E_i\}$ is unbiased. \item $dG$ is doubly stochastic. \item $\lambda_{\rm max}(G)=1/d$. \end{enumerate} \end{theorem} \begin{proof} The equivalence of the first two conditions is readily shown. We show (2)$\iff$(3). Let $\ket{v}:=\frac{1}{d}(1,\ldots,1)^{\rm T}$ be the normalized $d^2$ element uniform vector of $1$s. If $dG$ is doubly stochastic, $\ket{v}$ is an eigenvector of $dG$ with eigenvalue $1$, and the Gershgorin disc theorem \cite{Leinster:2016} ensures $\lambda_{\rm max}(G)=1/d$. For any MIC, \begin{equation}\label{minmaxeval} \lambda_{\rm max}(G)\geq\bra{v}G\ket{v}=\frac{1}{d}\;, \end{equation} with equality iff $\ket{v}$ is an eigenvector of $G$ with eigenvalue $1/d$. 
Since $G\ket{v}=\frac{1}{d}(e_1,\ldots,e_{d^2})^{\rm T}$, $\ket{v}$ is an eigenvector of $G$ iff $e_i=1/d$ for all $i$. \end{proof} Given a basis for an inner product space, the \textit{dual basis} is defined by the condition that the inner products of a vector with the elements of the dual basis provide the coefficients in the expansion of that vector in terms of the original basis. In our case, let $\{\widetilde{E}_i\}$ denote the basis dual to $\{E_i\}$ so that, for any vector $A\in\mathcal{L}(\mathcal{H}_d)$, \begin{equation}\label{dualdef} A=\sum_j({\rm tr}\, A\widetilde{E}_j)E_j\;. \end{equation} One consequence of this definition is that if we expand the original basis in terms of itself, \begin{equation} E_i = \sum_j ({\rm tr}\, E_i \widetilde{E}_j) E_j\;, \end{equation} linear independence of the $\{E_i\}$ implies that \begin{equation} {\rm tr}\, E_i \widetilde{E}_j = \delta_{ij}\;, \end{equation} from which one may easily see that the original basis is the dual of the dual basis, \begin{equation} A=\sum_j({\rm tr}\, A E_j)\widetilde{E}_j\;. \end{equation} In the familiar case when the original basis is orthonormal, the dual basis coincides with it: When we write a vector $\mathbf{v}$ as an expansion over the unit vectors $(\mathbf{\hat{x}}, \mathbf{\hat{y}}, \mathbf{\hat{z}})$, the coefficient of $\mathbf{\hat{x}}$ is simply the inner product of $\mathbf{\hat{x}}$ with $\mathbf{v}$. A MIC is a positive semidefinite operator basis. For positive semidefinite operators $A$ and $B$, ${\rm tr}\, AB=0$ iff $AB=0$. Recall that a Hermitian matrix which is neither positive semidefinite nor negative semidefinite is known as an \textit{indefinite} matrix. \begin{theorem} The dual basis of a MIC is composed entirely of indefinite matrices. \end{theorem} \begin{proof} Suppose $\widetilde{E}_1\geq 0$. The definition of a dual basis tells us ${\rm tr}\,\widetilde{E}_1E_k=0$ for all $k\neq1$. Because they are both positive semidefinite, $\widetilde{E}_1E_k=0$ for all $k\neq1$.
This means the $d^2-1$ MIC elements other than $E_1$ are operators on a $d-\text{rank}(\widetilde{E}_1)$ dimensional subspace. But \begin{equation} \text{dim}\left[\mathcal{L}\left(\mathcal{H}_{d-\text{rank}(\widetilde{E}_1)}\right)\right]\leq(d-1)^2<d^2-1, \end{equation} so they cannot be linearly independent. If $\widetilde{E}_1\leq0$, $-\widetilde{E}_1$ is positive semidefinite and the same logic holds. \end{proof} \begin{corollary}\label{noprops} No element in a MIC can be proportional to an element of the MIC's dual basis. \end{corollary} \begin{corollary}\label{noortho} No MIC can form an orthogonal basis. \end{corollary} \begin{proof} Suppose $\{E_i\}$ is a MIC which forms an orthogonal basis, that is, ${\rm tr}\, E_iE_j=c_i\delta_{ij}$ for some constants $c_i$. Summing this over $i$ reveals $c_j=e_j$, the weights of the MIC. Thus the dual basis is given by $\widetilde{E}_j=E_j/e_j=\rho_j$ which is a violation of Corollary \ref{noprops}. \end{proof} \begin{corollary} No MIC outcome can ever be assigned probability $1$. \end{corollary} \begin{proof} MIC probabilities provide the expansion coefficients for a state in the dual basis. If $P(E_i)=1$ for some $i$, then, because the probabilities are nonnegative and sum to $1$, all the other coefficients vanish and the state would equal the dual basis element $\widetilde{E}_i$; but a state must be positive semidefinite, while $\widetilde{E}_i$ is indefinite. \end{proof} \begin{corollary}\label{noprojs} No effect of a MIC can be an unscaled projector. \end{corollary} \begin{proof} Suppose $E_1$ were equal to an unscaled projector $P$. Then any eigenvector of $P$ is a pure state which would imply probability 1 for the MIC outcome $E_1$, which is impossible. \end{proof} Corollary~\ref{noprops} and the subsequent corollaries have physical meaning. In classical probability theory, we grow accustomed to orthonormal bases. For example, imagine an object that can be in any one of $N$ distinct configurations.
When we write a probability distribution over these $N$ alternatives, we are encoding our expectations about which of these configurations is physically present --- about the ``physical condition'' of the object, as Einstein would say~\cite{Stacey:2018}, or in more modern terminology, about the object's ``ontic state''~\cite{Spekkens:2007}. We can learn everything there is to know about the object by measuring its ``physical condition'', and any implementation of such an ideal measurement is represented by conditional probabilities that are 1 in a single entry and 0 elsewhere. In other words, the map from the object's physical configuration to the reading on the measurement device is, at its most complicated, a permutation of labels. Without loss of generality, we can take the vectors that define the ideal measurement to be the vertices of the probability simplex: The measurement basis is identical with its dual, and the dual-basis elements simply label the possible ``physical conditions'' of the object which the measurement reads off. In quantum theory, by contrast, no element of a MIC may be proportional to an element in the dual. This stymies the identification of the dual-basis elements as intrinsic ``physical conditions'' ready for a measurement to read. \begin{theorem}\label{nopovm} No elementwise rescaling of a proper subset of a MIC may form a POVM. \end{theorem} \begin{proof} Since a MIC is a linearly independent set, the identity element is uniquely formed by the defining expression \begin{equation} {I}=\sum_{i=1}^{d^2}E_i. \label{povmdef} \end{equation} If a linear combination of a proper subset $\Omega$ of the MIC elements could be made to also sum to the identity, \begin{equation} {I}=\sum_{i\in \Omega}\alpha_i E_i, \label{lincomb} \end{equation} then subtracting \eqref{lincomb} from \eqref{povmdef} implies \begin{equation} 0=\sum_{i\in\Omega}(1-\alpha_i)E_i+\sum_{i\notin\Omega}E_i \end{equation} which is a violation of linear independence. 
\end{proof} \begin{corollary}\label{d2ortho} No two elements in a $d=2$ MIC may be orthogonal under the Hilbert--Schmidt inner product. \end{corollary} \begin{proof} An orthogonal pair of elements in dimension $2$ may be rescaled such that they sum to the identity element. Therefore, by Theorem \ref{nopovm}, they cannot be elements of a MIC. \end{proof} These results also have physical implications. For much of the history of quantum mechanics, one type of POVM had special status: the \emph{von Neumann measurements,} which consist of $d$ elements given by the projectors onto the vectors of an orthonormal basis of $\mathbb{C}^d$. Indeed, in older books, these are the only quantum measurements that are considered (often being defined as the eigenbases of Hermitian operators called ``observables''). We can now see that, from the standpoint of informational completeness, the von Neumann measurements are rather pathological: There is no way to build a MIC by augmenting a von Neumann measurement with additional outcomes. Another holdover from the early days of quantum theory concerns the process of updating a quantum state in response to a measurement outcome. If one restricts attention to von Neumann measurements, one may feel tempted to grant special importance to the post-measurement state being one of the eigenvectors of an ``observable''. This type of updating is a special case of the more general theory developed as quantum mechanics was understood more fully. The \emph{L\"uders Rule} \cite{Busch:2009,Barnum:2002} states that the post-measurement state upon obtaining the outcome associated with effect $E_i$ for a POVM $\{E_i\}$ is \begin{equation} \rho_i':=\frac{\sqrt{E_i}\rho\sqrt{E_i}}{{\rm tr}\, \rho E_i}\;. \end{equation} In the special case of a von Neumann measurement, this reduces to replacing the state for the system with the eigenprojector corresponding to the measurement outcome.
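As a small numerical illustration of the L\"uders Rule (a sketch of our own; the state and effect are arbitrary choices, not taken from the references), note that for a rank-1 effect the update always returns the corresponding projector:

```python
import numpy as np

def psd_sqrt(A):
    """Matrix square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)  # discard tiny negative eigenvalues from roundoff
    return (V * np.sqrt(w)) @ V.conj().T

def luders_update(rho, E):
    """Post-measurement state sqrt(E) rho sqrt(E) / tr(rho E)."""
    s = psd_sqrt(E)
    return s @ rho @ s / np.trace(rho @ E).real

# A rank-1 qubit effect (half a projector) applied to the maximally mixed state:
pi = np.array([1.0, 0.0])
E1 = 0.5 * np.outer(pi, pi.conj())
rho = 0.5 * np.eye(2)
rho_post = luders_update(rho, E1)
# The update returns the pure state |pi><pi|, independent of the input state:
assert np.allclose(rho_post, np.outer(pi, pi.conj()))
```

For a von Neumann measurement, where each effect is itself a projector, the same code reproduces the textbook projection update.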
A physicist who plans to follow that procedure and then repeat the measurement immediately afterward would expect to obtain the same outcome twice in succession. Some authors regard this possibility as the essential point of contact with classical mechanics and attempt to build an understanding of quantum theory around such ``ideal'' measurements~\cite{Cabello:2019}. But, as we said in the introduction, obtaining the same outcome twice in succession is not a good notion of a ``classical ideal''. Especially in view of the arbitrariness of von Neumann measurements from our perspective, we regard this possibility as conceptually downstream from the phenomenon of informationally complete measurements. Corollary \ref{d2ortho} prompts a question: May any elements of a MIC in arbitrary dimension be orthogonal? In other words, can any entry in a $G$ matrix equal zero? We answer this question in the affirmative with an explicit example of a rank-$1$ MIC in dimension $3$ with $7$ orthogonal pairs. \begin{example}\label{7orthopairs} When multiplied by $1/3$, the following is a rank-1 unbiased MIC in dimension $3$ with $7$ orthogonal pairs. \renewcommand\arraystretch{1.2} \begin{equation} \begin{split} &\left\{\begin{bmatrix} 1 & 0 & 0 \\ 0&0&0\\ 0&0&0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0&1&0\\ 0&0&0 \end{bmatrix}, \begin{bmatrix} \frac{1}{2} & 0 & \frac{1}{2} \\ 0&0&0\\ \frac{1}{2}&0&\frac{1}{2} \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0&\frac{1}{2}&\frac{1}{2}\\ 0&\frac{1}{2}&\frac{1}{2} \end{bmatrix}, \begin{bmatrix} \frac{1}{2} & 0 & \frac{i}{2} \\ 0&0&0\\ -\frac{i}{2}&0&\frac{1}{2} \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0&\frac{1}{2}&\frac{i}{2}\\ 0&-\frac{i}{2}&\frac{1}{2} \end{bmatrix}, \begin{bmatrix} \frac{1}{3} & \frac{i}{3} & -\frac{i}{3} \\ -\frac{i}{3}&\frac{1}{3}&-\frac{1}{3}\\ \frac{i}{3}&-\frac{1}{3}&\frac{1}{3} \end{bmatrix},\right.\\ &\left. 
\begin{bmatrix} \frac{5}{8} & -\frac{1}{8}-\frac{i}{4} & -\frac{3}{8}-\frac{i}{8} \\ -\frac{1}{8}+\frac{i}{4}&\frac{1}{8}&\frac{1}{8}-\frac{i}{8}\\ -\frac{3}{8}+\frac{i}{8}&\frac{1}{8}+\frac{i}{8}&\frac{1}{4} \end{bmatrix}, \begin{bmatrix} \frac{1}{24} & \frac{1}{8}-\frac{i}{12} & -\frac{1}{8}-\frac{i}{24} \\ \frac{1}{8}+\frac{i}{12}&\frac{13}{24}&-\frac{7}{24}-\frac{3i}{8}\\ -\frac{1}{8}+\frac{i}{24}&-\frac{7}{24}+\frac{3i}{8}&\frac{5}{12} \end{bmatrix}\right\}. \end{split} \end{equation} These are projectors onto the following vectors in $\mathcal{H}_d$: \begin{equation} \begin{split} &\left\{(1,0,0),(0,1,0),\frac{1}{\sqrt{2}}(1,0,1),\frac{1}{\sqrt{2}}(0,1,1),\frac{1}{\sqrt{2}}(1,0,-i),\frac{1}{\sqrt{2}}(0,1,-i),\frac{1}{\sqrt{3}}(1,-i,i),\right.\\ &\left.\frac{1}{\sqrt{40}}(5,-1+2i,-3+i),\frac{1}{\sqrt{24}}(1,3+2i,-3+i)\right\}. \end{split} \end{equation} The Gram matrix of the MIC elements is \renewcommand\arraystretch{1.2} \begin{equation} \begin{bmatrix} \frac{1}{9}&0&\frac{1}{18}&0&\frac{1}{18}&0&\frac{1}{27}&\frac{5}{72}&\frac{1}{216}\\ 0&\frac{1}{9}&0&\frac{1}{18}&0&\frac{1}{18}&\frac{1}{27}&\frac{1}{72}&\frac{13}{216}\\ \frac{1}{18}&0&\frac{1}{9}&\frac{1}{36}&\frac{1}{18}&\frac{1}{36}&\frac{1}{27}&\frac{1}{144}&\frac{5}{432}\\ 0&\frac{1}{18}&\frac{1}{36}&\frac{1}{9}&\frac{1}{36}&\frac{1}{18}&0&\frac{5}{144}&\frac{1}{48}\\ \frac{1}{18}&0&\frac{1}{18}&\frac{1}{36}&\frac{1}{9}&\frac{1}{36}&0&\frac{5}{144}&\frac{1}{48}\\ 0&\frac{1}{18}&\frac{1}{36}&\frac{1}{18}&\frac{1}{36}&\frac{1}{9}&\frac{1}{27}&\frac{1}{144}&\frac{5}{432}\\ \frac{1}{27}&\frac{1}{27}&\frac{1}{27}&0&0&\frac{1}{27}&\frac{1}{9}&\frac{1}{54}&\frac{1}{18}\\ \frac{5}{72}&\frac{1}{72}&\frac{1}{144}&\frac{5}{144}&\frac{5}{144}&\frac{1}{144}&\frac{1}{54}&\frac{1}{9}&\frac{1}{27}\\ \frac{1}{216}&\frac{13}{216}&\frac{5}{432}&\frac{1}{48}&\frac{1}{48}&\frac{5}{432}&\frac{1}{18}&\frac{1}{27}&\frac{1}{9}\\ \end{bmatrix}.
\end{equation} \end{example} \noindent The numerical search resulting in this example led us to formulate the following: \begin{conjecture} A rank-1 MIC in dimension 3 can have no more than 7 pairs of orthogonal elements. \end{conjecture} Our next result characterizes when it is possible to build a rank-1 POVM out of a set of vectors and specifies the additional conditions which must be met in order for it to form a MIC. We make use of the \textit{Hadamard product}~\cite{Horn:1994}, denoted $\circ$, which is elementwise multiplication of matrices. \begin{theorem}\label{r1povmtightframe} Consider a set of $N$ normalized vectors $\ket{\phi_i}$ in $\mathcal{H}_d$ and real numbers $0 \leq e_i\leq 1$. The following are equivalent: \begin{enumerate} \item $E_i:= e_i\ketbra{\phi_i}{\phi_i}$ forms a rank-1 POVM. \item The Gram matrix $g$ of the rescaled vectors $\sqrt{e_i}\ket{\phi_i}$ is a rank-$d$ projector. \end{enumerate} Furthermore, if $N=d^2$ and ${\rm rank}(g\circ g^*)=d^2$, $\{E_i\}$ forms a rank-1 MIC. \end{theorem} \begin{proof} Suppose $E_i$ forms a rank-1 POVM, that is, \begin{equation}\label{rank1povm} \sum_ie_i\ketbra{\phi_i}{\phi_i}=I\;. \end{equation} It is easy to see that this is only possible if the set $\{\sqrt{e_i}\ket{\phi_i}\}$ spans $\mathcal{H}_d$, and, consequently, $N\geq d$. It now follows that $g$ is a rank-$d$ projector because the left hand side of \eqref{rank1povm} is a matrix that has the same nonzero spectrum as $g$~\cite{Waldron:2018}. On the other hand, if $g$ is a rank-$d$ projector, $N\geq d$ and $\{\sqrt{e_i}\ket{\phi_i}\}$ spans a $d$-dimensional space because the rank of a Gram matrix is equal to the dimension of the space spanned by the vectors. Using again the fact that the left hand side of \eqref{rank1povm} has the same nonzero spectrum as the Gram matrix, it must equal the identity and thus the rank-1 POVM condition holds. To be a MIC, $N$ must equal $d^2$.
The remaining condition on $\{E_i\}$ for it to form a rank-1 MIC is that its elements be linearly independent. This is equivalent to the condition that its Gram matrix $G$ is full rank. The relation between $g$ and $G$ is given by the Hadamard product of $g$ with its conjugate, \begin{equation} g \circ g^*=G\;, \end{equation} and so, if $N=d^2$ and ${\rm rank}(g\circ g^*)=d^2$, $\{E_i\}$ forms a rank-1 MIC. \end{proof} \noindent For any two matrices $A$ and $B$, the Hadamard product satisfies the rank inequality \begin{equation}\label{rank} \text{rank}(A\circ B) \leq \text{rank}(A)\;\text{rank}(B)\;, \end{equation} so a rank-1 MIC is produced when $\text{rank}(g\circ g^*)$ achieves its maximal value with the minimal number of effects. Perhaps this criterion will lead to a way to conceptualize rank-1 MICs directly in terms of the vectors in $\mathcal{H}_d$ from which they can be constructed. As a brief illustration, all rank-$d$ projectors are unitarily equivalent so the specification of $g$ for the rescaled vectors of any rank-1 MIC is obtainable from any rank-$d$ projector by conjugating it with the right unitary. Specifying $g$ specifies the MIC: $g$ is equal to its own square root, so its columns are the vectors up to unitary equivalence which form this Gram matrix. To obtain these vectors as elements of $\mathcal{H}_d$, one can simply write them in the basis provided by the eigenvectors of $g$ with nonzero eigenvalues. From a fixed starting projector, then, finding a rank-1 MIC is equivalent to choosing a unitary in $U(d^2)$ which maximizes $\text{rank}(g\circ g^*)$. Numerically this maximization appears to be typical, but we are not aware of an explicit characterization. A further question to ask is whether there are special classes of unitaries which give particular types of MICs. We finish this section with a very brief discussion of the geometry of MIC space. 
For this purpose it is not necessary to distinguish between MICs which differ only in permutations of their effects. Our discussion is also loose in the sense that we have not chosen any particular metric on the space. The full sets of $N$-outcome POVMs are in general convex manifolds~\cite{DAriano:2005}, but the requirement of linear independence prevents this from being true for MICs --- it is possible for a convex mixture of MICs to introduce a linear dependence and thus step outside of the set. There do, however, exist infinite sequences and curves lying entirely within the set of MICs. In these terms one can see that the space of MICs lacks much of its boundary, that is, one can construct infinite sequences of MICs for which the limit point is not a MIC. The simplest such limit point is the POVM consisting of the identity and $d^2-1$ zero matrices. Similarly there are MICs arbitrarily close to any POVM with fewer than $d^2$ elements which has been padded by zero matrices. Among unbiased MICs, another limit point lying outside of the set is the trivial POVM consisting of $d^2$ identical matrices $E_i=\frac{1}{d^2}I$. Provided they exist, SICs are limit points, at least among equiangular MICs (see section \ref{equiangularMICs}), which are contained within the set. \section{Explicit Constructions of MICs} \label{sec:constructions} \subsection{SICs} The MICs that have attracted the most interest are the SICs, which in many ways are the optimal MICs~\cite{Appleby:2014b, Appleby:2015, DeBrota:2018, Fuchs:2003, Scott:2006}. SICs were studied as mathematical objects (under the name ``complex equiangular lines'') before their importance for quantum information was recognized~\cite{Delsarte:1975, Hoggar:1981, Coxeter:1991, Hoggar:1998}. Prior to SICs becoming a physics problem, constructions were known for dimensions $d = 2$, 3 and 8~\cite{Konig:2001}.
Exact solutions for SICs are now known in 79 dimensions: \begin{equation} \begin{split} d = 2&\hbox{--}28,30,31,35,37\hbox{--}39,42,43,48,49,52,53,57,61\hbox{--}63,67,73,74,78,79,84,91,93,\\ & 95,97\hbox{--}99,103,109,111,120,124,127,129,134,143,146,147,168,172,195,199,\\ & 228,259,292,323,327,399,489,844,1299. \end{split} \end{equation} The expressions for these solutions grow complicated quickly, but there is hope that they can be substantially simplified~\cite{Appleby:2018}. Numerical solutions have also been extracted, to high precision, in the following dimensions: \begin{equation} d= 2\hbox{--}193,204,224,255,288,528,725,1155,2208. \end{equation} Both the numerical and the exact solutions have been found in irregular order and by various methods. Many entries in these lists are due to A.\ J.\ Scott and M.\ Grassl~\cite{Scott:2010a, Scott:2017, Grassl:2017}; other explorers in this territory include M.\ Appleby, I.\ Bengtsson, T.-Y.\ Chien, S.\ T.\ Flammia, G.\ S.\ Kopp and S.\ Waldron. Together, these results have created the community sentiment that SICs \emph{should} exist for every finite value of~$d$. To date, however, a general proof is lacking. The current frontier of SIC research extends into algebraic number theory~\cite{Appleby:2013, Appleby:2016, Bengtsson:2016, Appleby:2017b, Kopp:2018}, which among other things has led to a method for uplifting numerical solutions to exact ones~\cite{Appleby:2017}. The topic has begun to enter the textbooks for physicists~\cite{Bengtsson:2017} and for mathematicians~\cite{Waldron:2018}. The effects of a SIC are given by \begin{equation} E_i = \frac{1}{d}\Pi_i, \hbox{ where } \Pi_i = \ketbra{\pi_i}{\pi_i}\;, \end{equation} where we will take the liberty of calling any of the sets $\{E_i\}$, $\{\Pi_i\}$, and $\{\ket{\pi_i}\}$ SICs. It is difficult to find a meaningful visualization of structures in high-dimensional complex vector space. However, for the $d = 2$ case, an image is available.
Any quantum state for a 2-dimensional system can be written as an expansion over the Pauli matrices: \begin{equation} \rho = \frac{1}{2}\left(I + x\sigma_x + y\sigma_y + z\sigma_z\right). \end{equation} The coefficients $(x,y,z)$ are then the coordinates for $\rho$ in the \emph{Bloch ball}. The surface of this ball, the \emph{Bloch sphere,} lives at radius 1 and is the set of pure states. In this picture, the quantum states $\{\Pi_i\}$ comprising a SIC form a regular tetrahedron; for example, \begin{equation} \Pi_{s,s'} = \frac{1}{2}\left(I + \frac{1}{\sqrt{3}}\left(s\sigma_x + s'\sigma_y + ss'\sigma_z\right)\right), \end{equation} where $s$ and $s'$ take the values $\pm 1$. The matrix $G_{\rm SIC}$ has the spectrum \begin{equation} \lambda(G_{\rm SIC}) = \left( \frac{1}{d}, \frac{1}{d(d+1)}, \ldots, \frac{1}{d(d+1)} \right). \end{equation} The flatness of this spectrum will turn out to be significant; we will investigate this point in depth in the next section. \subsection{MICs from Random Bases}\label{sec:micfrombases} It is possible to construct a MIC for any dimension $d$. Let $\{A_i\}$ be any basis of positive semidefinite operators in $\mathcal{L}(\mathcal{H}_d)$ and define $\Omega:=\sum_i A_i$. Then \begin{equation}\label{arbitraryMIC} E_i:=\Omega^{-1/2}A_i\Omega^{-1/2} \end{equation} forms a MIC. If $\{A_i\}$ consists entirely of rank-1 matrices, we obtain a rank-1 MIC.\footnote{In the rank-1 case, this procedure is equivalent to forming what is called the \textit{canonical tight frame} associated with the frame of vectors in $\mathcal{H}_d$ whose outer products form the $A_i$ matrices. For more information on this, see \cite{Waldron:2018}.} If $\{A_i\}$ is already a MIC, $\Omega=I$ and the transformation is trivial; MICs are the fixed points of this mapping from one positive semidefinite operator basis to another. 
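The construction of equation \eqref{arbitraryMIC} is straightforward to carry out numerically. The following sketch (our own, with an arbitrarily chosen random seed) builds a rank-1 MIC in $d=3$ from random rank-1 operators:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# A random basis {A_i} of rank-1 positive semidefinite operators
# (d^2 generic rank-1 operators are linearly independent with probability 1).
A = []
for _ in range(d * d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    A.append(np.outer(v, v.conj()))

Omega = sum(A)
w, V = np.linalg.eigh(Omega)
Om_inv_sqrt = (V / np.sqrt(w)) @ V.conj().T  # Omega^{-1/2}

E = [Om_inv_sqrt @ Ai @ Om_inv_sqrt for Ai in A]

# The rescaled operators sum to the identity ...
assert np.allclose(sum(E), np.eye(d))
# ... and remain linearly independent, so {E_i} is a rank-1 MIC.
M = np.column_stack([Ei.reshape(-1) for Ei in E])
assert np.linalg.matrix_rank(M) == d * d
```

When the input operators $\{A_i\}$ already form a MIC, $\Omega=I$ and the map returns them unchanged, consistent with the fixed-point property noted above.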
Thanks to this property, this method can produce \textit{any} MIC if the initial basis is drawn from the full space of positive semidefinite operators. This procedure was used by Caves, Fuchs and Schack in the course of proving a quantum version of the de Finetti theorem~\cite{Caves:2002c}. (For background on this theorem, a key result in probability theory, see~\cite[\S 5.3]{Stacey:2015} and~\cite{Diaconis:2017}.) We refer to the particular MICs they constructed as the \emph{orthocross MICs.} As the orthocross MICs are of historical importance, we explicitly detail their construction and provide some first properties and conjectures about them in the remainder of this subsection. To construct an orthocross MIC in dimension $d$, first pick an orthonormal basis $\{\ket{j}\}$. This is a set of $d$ objects, and we want a set of $d^2$, so our first step is to take all possible combinations: \begin{equation} \Gamma_{jk} := \ketbra{j}{k}. \end{equation} The orthocross MIC will be built from a set of $d^2$ rank-1 projectors $\{\Pi_\alpha\}$, the first $d$ of which are given by \begin{equation} \Pi_\alpha = \Gamma_{\alpha\alpha}. \end{equation} Then, for $\alpha = d+1, \ldots, \frac{1}{2}d(d+1)$, we take all the quantities of the form \begin{equation} \frac{1}{2}\left(\ket{j} + \ket{k}\right) \left(\bra{j} + \bra{k}\right) = \frac{1}{2} (\Gamma_{jj} + \Gamma_{kk} + \Gamma_{jk} + \Gamma_{kj}), \end{equation} where $j < k$. We construct the rest of the $\{\Pi_\alpha\}$ similarly, by taking all quantities of the form \begin{equation} \frac{1}{2}\left(\ket{j} + i\ket{k}\right) \left(\bra{j} - i\bra{k}\right) = \frac{1}{2} (\Gamma_{jj} + \Gamma_{kk} - i\Gamma_{jk} + i\Gamma_{kj}), \end{equation} where again the indices satisfy $j < k$. That is, the set $\{\Pi_\alpha\}$ contains the projectors onto the original orthonormal basis, as well as projectors built from the ``cross terms''.
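The set $\{\Pi_\alpha\}$ just described can be generated mechanically; the following sketch (our own code; the ordering of the cross terms is immaterial) also checks that the projectors are linearly independent:

```python
import numpy as np

def orthocross_projectors(d):
    """The d^2 rank-1 projectors underlying the orthocross construction."""
    e = np.eye(d)
    Pis = [np.outer(e[j], e[j]) for j in range(d)]  # projectors |j><j|
    for j in range(d):
        for k in range(j + 1, d):
            u = (e[j] + e[k]) / np.sqrt(2)          # real cross terms
            Pis.append(np.outer(u, u.conj()))
            w = (e[j] + 1j * e[k]) / np.sqrt(2)     # imaginary cross terms
            Pis.append(np.outer(w, w.conj()))
    return Pis

d = 3
Pis = orthocross_projectors(d)
assert len(Pis) == d * d
# The projectors are linearly independent, hence a basis of L(H_d):
M = np.column_stack([P.reshape(-1) for P in Pis])
assert np.linalg.matrix_rank(M) == d * d
# The diagonal of Omega = sum_a Pi_a is constant, equal to d:
Omega = sum(Pis)
assert np.allclose(np.diag(Omega).real, d)
```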
The operators $\{\Pi_\alpha\}$ form a positive semidefinite operator basis which can be plugged into the procedure described above. Explicitly, \begin{equation} \Omega = \sum_{\alpha=1}^{d^2} \Pi_\alpha\;, \end{equation} and the orthocross MIC elements are given by \begin{equation} E_\alpha := \Omega^{-1/2} \Pi_\alpha \Omega^{-1/2}\;. \end{equation} The operator $\Omega$ for the initial set of vectors has a comparatively simple matrix representation: The elements along the diagonal are all equal to~$d$, the elements above the diagonal are all equal to $\frac{1}{2}(1-i)$, and the rest are $\frac{1}{2}(1+i)$, as required by $\Omega = \Omega^\dag$. The matrix $\Omega$ is not quite a circulant matrix, thanks to that change of sign, but it can be turned into one by conjugating with a diagonal unitary matrix. Consequently, the eigenvalues of~$\Omega$ can be found explicitly via discrete Fourier transformation. The result is that, for $m = 0,\ldots,d-1$, \begin{equation} \lambda_m = d + \frac{1}{2}\left(\cot \frac{\pi(4m+1)}{4d} - 1 \right). \end{equation} This mathematical result has a physical implication~\cite{Fuchs:2002}. \begin{theorem} The probability of any outcome $E_\alpha$ of an orthocross MIC, given any quantum state $\rho$, is bounded above by \begin{equation} P(E_\alpha) \leq \left[d - \frac{1}{2}\left(1 + \cot \frac{3\pi}{4d}\right)\right]^{-1} < 1. \end{equation} \end{theorem} \begin{proof} The maximum of ${\rm tr}\,(\rho E_\alpha)$ over all $\rho$ is bounded above by the maximum of ${\rm tr}\,(\Pi E_\alpha)$, where $\Pi$ ranges over the rank-1 projectors. In turn, this is bounded above by the maximum eigenvalue of~$E_\alpha$. We then invoke that \begin{equation} \lambda_{\rm max}(E_\alpha) = \lambda_{\rm max}(\Omega^{-1/2}\Pi_\alpha \Omega^{-1/2}) = \lambda_{\rm max}(\Pi_\alpha \Omega^{-1} \Pi_\alpha) \leq \lambda_{\rm max} (\Omega^{-1}). \end{equation} The desired bound then follows. 
\end{proof} Note that all the entries in the matrix $2\Omega$ are Gaussian integers, that is, numbers whose real and imaginary parts are integers. Consequently, all the coefficients in the characteristic polynomial of~$2\Omega$ will be Gaussian integers, and so the eigenvalues of~$2\Omega$ will be roots of a monic polynomial with Gaussian-integer coefficients. This is an example of how, in the study of MICs, number theory becomes relevant to physically meaningful quantities --- in this case, a bound on the maximum probability of a reference-measurement outcome. Number theory has also turned out to be very important for SICs, in a much more sophisticated way~\cite{Appleby:2013, Appleby:2016, Bengtsson:2016, Appleby:2017b, Kopp:2018}. The following conjectures about orthocross MICs have been motivated by numerical investigations. We suspect that their proofs will be relatively straightforward, but so far they have eluded us. \begin{conjecture} The entries in $G$ for orthocross MICs can become arbitrarily small with increasing $d$, but no two elements of an orthocross MIC can be exactly orthogonal. \end{conjecture} \begin{conjecture} For any orthocross MIC, the entries in $G^{-1}$ are integers or half-integers. \end{conjecture} \subsection{Group Covariant MICs}\label{sec:groupcovariant} The method discussed in the previous subsection allows us to make fully arbitrary MICs, but it is also possible to construct MICs with much more built-in structure. The MICs which have received the most attention in the literature to date are the \textit{group covariant} MICs --- those whose elements are the orbit of a group of unitary matrices acting by conjugation. For additional discussion of group covariant IC POVMs, see \cite{DAriano:2004}. The Gram matrix of a group covariant MIC is very simple. 
Suppose $\{E_i\}$ is a group covariant MIC, so $E_i=U_iE_0U_i^\dag$ where $E_0$ is the first element of the MIC and the index $i$ gives the element of the unitary representation of the group sending this element to the $i$th element. Then all distinct elements of the Gram matrix are present in the first row because \begin{equation} [G]_{ij}={\rm tr}\, E_iE_j={\rm tr}\, U_iE_0U_i^\dag U_jE_0U_j^\dag={\rm tr}\, E_0U_kE_0U_k^\dag\;, \end{equation} for some $k$ determined by the group. Another way to say this is that every row of the Gram matrix of a group covariant MIC is some permutation of the first row. Note that any group covariant MIC is unbiased because conjugation by a unitary cannot change the trace of a matrix, but the converse is not true; the simplest example we have encountered of an unbiased MIC which is not group covariant is the one given in Example \ref{7orthopairs}. The most studied and likely most important group covariant MICs are the \textit{Weyl--Heisenberg} MICs (WH MICs), which are covariant with respect to the Weyl--Heisenberg group. Part of the intuition for the importance of this group comes from the fact that its generators form finite dimensional analogs of the position and momentum operators. Perhaps more tellingly, every known SIC is group covariant and in all cases but one that group is the Weyl--Heisenberg group~\cite{Fuchs:2017a}. The Weyl--Heisenberg group is constructed as follows. Let $\{\ket{j}: j = 0,\ldots,d-1\}$ be an orthonormal basis, and define $\omega = e^{2\pi i/d}$. Then the operator \begin{equation} X\ket{j} = \ket{j+1}, \end{equation} where addition is interpreted modulo $d$, effects a cyclic shift of the basis vectors. The Fourier transform of the $X$ operator is \begin{equation} Z \ket{j} = \omega^j \ket{j}, \end{equation} and together these operators satisfy the Weyl commutation relation \begin{equation} ZX = \omega XZ.
\end{equation} The Weyl--Heisenberg displacement operators are \begin{equation} D_{k,l} := (-e^{\pi i/d})^{kl} X^k Z^l, \end{equation} and together they satisfy the conditions \begin{equation} D_{k,l}^\dag = D_{-k,-l},\quad\ D_{k,l}D_{m,n} = (-e^{\pi i/d})^{lm-kn} D_{k+m,l+n}. \end{equation} Each $D_{k,l}$ is unitary and a $d^{\rm th}$ root of the identity. The Weyl--Heisenberg group is the set of all operators $(-e^{\pi i/d})^m D_{k,l}$ for arbitrary integers $m$, and it is projectively equivalent to $\mathbb{Z}_d \times \mathbb{Z}_d$. Then, for any density matrix $\rho$ such that \begin{equation} {\rm tr}\, \!\!\left(D_{k,l}^\dag\rho\right)\neq 0,\quad \forall (k,l)\in \mathbb{Z}_d\times\mathbb{Z}_d\;, \end{equation} the set \begin{equation} E_{k,l}:=\frac{1}{d}D_{k,l}\rho D_{k,l}^\dag \end{equation} forms a WH MIC. \subsection{Equiangular MICs}\label{equiangularMICs} An \textit{equiangular}\footnote{Appleby and Graydon introduced the term \emph{SIM} for an equiangular MIC of arbitrary rank; a rank-1 SIM is a SIC~\cite{Graydon:2016, Graydon:2016a}. } MIC is one for which the Gram matrix takes the form \begin{equation} [G]_{ij}=\alpha\delta_{ij}+\zeta\;. \end{equation} Equiangular MICs are unbiased (see Corollary 3 in \cite{Appleby:2015}) and, because $\sum_{ij}[G]_{ij}=d$, it is easy to see that $\alpha=1/d-d^2\zeta$ and that \begin{equation} \frac{1}{d^2(d+1)}\leq\zeta<\frac{1}{d^3}\;. \end{equation} SICs are rank-1 equiangular MICs for which $\zeta$ achieves the minimum allowed value. The upper bound $\zeta$ value is approached by MICs arbitrarily close to $E_i=\frac{1}{d^2}I$ for all $i$. Armed with a SIC in a given dimension, one can construct an equiangular MIC for any allowed $\zeta$ value by mixing in some of the identity to each element: \begin{equation}\label{remixedSIC} E_i=\frac{\beta}{d}\Pi_i+\frac{1-\beta}{d^2}I\;,\quad \frac{-1}{d-1}\leq(\beta\neq0)\leq1\;. 
\end{equation} Even if a SIC is not known, it is generally much easier to construct equiangular MICs when the elements are not required to be rank-1. One way to do this which always works for any $\beta\leq\frac{1}{d+1}$ is by replacing the SIC projector in equation \eqref{remixedSIC} with a quasi-SIC.\footnote{See Appendix A of \cite{DeBrota:2019} for the definition and a construction of a quasi-SIC.} Depending on the quasi-SIC, higher values of $\beta$ may also work. Another construction in odd dimensions is given by the \emph{Appleby MICs}~\cite{Appleby:2007a}. The Appleby MICs are WH covariant and constructed as follows. Let the operator $B$ be constructed as \begin{equation} B := \frac{1}{\sqrt{d+1}} \sum_{\{k,l\}\neq\{0,0\}} D_{k,l}, \end{equation} and define $B_{k,l}$ to be its conjugate under a Weyl--Heisenberg displacement operator: \begin{equation} B_{k,l} := D_{k,l} B D_{k,l}^\dag. \end{equation} The elements of the Appleby MIC have rank $(d+1)/2$, and are defined by \begin{equation} E_{k,l} := \frac{1}{d^2} \left(I + \frac{1}{\sqrt{d+1}} B_{k,l}\right). \end{equation} For any quantum state $\rho$, the quantities \begin{equation} W_{k,l} := (d+1) {\rm tr}\,(E_{k,l} \rho) - \frac{1}{d} \end{equation} are \emph{quasiprobabilities}: They can be negative, but the sum over all of them is unity. The quasiprobability function $\{W_{k,l}\}$ is known as the \emph{Wigner function} of the quantum state $\rho$. This is an example of a relation we study much more generally in a companion paper~\cite{DeBrota:2019b}. In particular, we speculate there that the Appleby MIC and other MICs may inherit special significance from a related Wigner function. \subsection{Tensorhedron MICs} So far, we have not imposed any additional structure upon our Hilbert space. However, in practical applications, one might have additional structure in mind, such as a preferred factorization into a tensor product of smaller Hilbert spaces.
For example, a register in a quantum computer might be a set of $N$ physically separate qubits, yielding a joint Hilbert space of dimension $d = 2^N$. In such a case, a natural course of action is to construct a MIC for the joint system by taking the tensor product of multiple copies of a MIC defined on the component system: \begin{equation} E_{j_1,j_2,\ldots,j_N} := E_{j_1} \otimes E_{j_2} \otimes \cdots \otimes E_{j_N}. \end{equation} Since a collection of $N$ qubits is a natural type of system to consider for quantum computation, we define the $N$-qubit \emph{tensorhedron MIC} to be the tensor product of $N$ individual qubit SICs. These have appeared in the setting of quantum cryptography \cite{Englert:2005} as well as quantum tomography \cite{Zhu:2010}, where it was proven that tensor product SICs (and thus $N$-qubit tensorhedron MICs) are optimal among product measurements in the same way that SICs are across all measurements. \begin{theorem} The Gram matrix of an $N$-qubit tensorhedron MIC is the tensor product of $N$ copies of the Gram matrix for the qubit SIC out of which the tensorhedron is constructed. \end{theorem} \begin{proof} Consider the two-qubit tensorhedron MIC, whose elements are given by \begin{equation} E_{d(j-1)+j'} := \frac{1}{4}\Pi_j \otimes \Pi_{j'}, \end{equation} with $\{\Pi_j\}$ being a qubit SIC. The Gram matrix for the tensorhedron MIC has entries \begin{equation} [G]_{d(j-1)+j',d(k-1)+k'} = \frac{1}{16} {\rm tr}\,[(\Pi_j\otimes\Pi_{j'}) (\Pi_k\otimes\Pi_{k'})]. \end{equation} We can group together the projectors that act on the same subspace: \begin{equation} [G]_{d(j-1)+j',d(k-1)+k'} = \frac{1}{16} {\rm tr}\,(\Pi_j \Pi_k \otimes \Pi_{j'}\Pi_{k'}).
\end{equation} Now, we distribute the trace over the tensor product, obtaining \begin{equation} [G]_{d(j-1)+j',d(k-1)+k'} = \frac{1}{16} \frac{2\delta_{jk}+1}{3} \frac{2\delta_{j'k'} + 1}{3} = [G_{\rm SIC}]_{jk} [G_{\rm SIC}]_{j'k'}, \end{equation} which is just the definition of the tensor product: \begin{equation} G = G_{\rm SIC} \otimes G_{\rm SIC}. \end{equation} This extends in the same fashion to more qubits. \end{proof} \begin{corollary} The spectrum of the Gram matrix for an $N$-qubit tensorhedron MIC contains only the values \begin{equation} \lambda = \frac{1}{2^N}\frac{1}{3^m},\ m = 0,\ldots,N. \end{equation} \end{corollary} \begin{proof} This follows readily from the linear-algebra fact that the spectrum of a tensor product is the set of products $\{\lambda_i \mu_j\}$, where $\{\lambda_i\}$ and $\{\mu_j\}$ are the spectra of the factors. \end{proof} We can also deduce properties of MICs made by taking tensor products of MICs that have orthogonal elements. Let $\{E_j\}$ be a $d$-dimensional MIC with Gram matrix $G$, and suppose that exactly $N$ elements of $G$ are equal to zero. The tensor products $\{E_j \otimes E_{j'}\}$ form a $d^2$-dimensional MIC, the entries in whose Gram matrix have the form $[G]_{jk} [G]_{j'k'}$, as above. This product will equal zero when either factor does, meaning that the Gram matrix of the tensor-product MIC will contain $2d^4 N - N^2$ zero-valued entries. It seems plausible that in prime dimensions, where tensor-product MICs cannot exist, the possible number of zeros is more tightly bounded, but this remains unexplored territory. \section{SICs are Minimally Nonclassical Reference Measurements} \label{sec:optimal} What might it mean for a MIC to be the best among all MICs? Naturally, it depends on what qualities are valued, in light of which one MIC may be judged superior to another. As mentioned in the introduction, for a large number of metrics, SICs are optimal.
The authors of this paper particularly value the capacity of MICs to index probabilistic representations of the Born Rule. For this use, the best MIC is the one which provides the most useful probabilistic representation, adopting some quantitative ideal that a representation should approach. One codification of such an ideal is as follows. In essence, we want to find a MIC that furnishes a probabilistic representation of quantum theory which looks as close to classical probability as is mathematically possible. The residuum that remains --- the unavoidable discrepancy that even the most clever choice of MIC cannot eliminate --- is a signal of what is truly \emph{quantum} about quantum mechanics. In a recent paper it was shown that SICs are strongly optimal for this project~\cite{DeBrota:2018}. To see why, consider the following scenario. An agent has a physical system of interest, and she plans to carry out either one of two different, mutually exclusive procedures on it. In the first procedure, she will drop the system directly into a measuring apparatus and thereby obtain an outcome. In the second procedure, she will cascade her measurements, sending the system through a reference measurement and then, in the next stage, feeding it into the device from the first procedure. Probability theory unadorned by physical assumptions provides no constraints binding her expectations for these two different courses of action. Let $P$ denote her probability assignments for the consequences of following the two-step procedure and $Q$ those for the single-step procedure. Then, writing $\{H_i\}$ for the possible outcomes of the reference measurement and $\{D_j\}$ for those of the other, \begin{equation} P(D_j) = \sum_i P(H_i) P(D_j|H_i). \end{equation} This equation is a consequence of Dutch-book coherence~\cite{Fuchs:2013, Diaconis:2017} known as the Law of Total Probability (LTP). 
But the claim that \begin{equation} Q(D_j) = P(D_j) \end{equation} is an assertion of \emph{physics,} not entailed by the rules of probability theory alone. This assertion codifies in probabilistic language the classical ideal that a reference measurement simply reads off the system's ``physical condition'' or ``ontic state''. We know this classical ideal is not met in quantum theory; that is, in general $Q(D_j)\neq P(D_j)$. Instead, as detailed in reference \cite{DeBrota:2018}, $Q(D_j)$ is related to $P(H_i)$ and $P(D_j|H_i)$ in a different way. To write the necessary equations compactly, we introduce a vector notation in which the LTP takes the form \begin{equation} P(D) = P(D|H)P(H)\;. \end{equation} To set up the quantum version of the above scenario, let $\{H_i\}$ be a MIC and $\{D_j\}$ be an arbitrary POVM. Furthermore, let $\{\sigma_i\}$ denote a set of post-measurement states for the reference measurement; that is, if the agent experiences outcome $H_i$, her new state for the system will be $\sigma_i$. In this notation, the Born Rule becomes \begin{equation}\label{ltpanalog} Q(D) = P(D|H)\Phi P(H)\;, \hbox{ with } [\Phi^{-1}]_{ij} := {\rm tr}\, H_i \sigma_j\;. \end{equation} The matrix $\Phi$ depends upon the MIC and the post-measurement states, but it is always a column quasistochastic matrix, meaning its columns sum to one but may contain negative elements~\cite{DeBrota:2018}. In fact, $\Phi$ \emph{must} contain negative entries; this follows from basic structural properties of quantum theory~\cite{Ferrie:2011}. Now, the classical intuition we mentioned above would be expressed by $\Phi=I$. However, no choice of MIC and set of post-measurement states can achieve this. The MICs and post-measurement sets which give a $\Phi$ matrix closest to the identity therefore supply the ideal representation we seek.
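These relations are easy to exercise in the smallest case. The following sketch (our own illustration in Python with NumPy; the tetrahedron qubit SIC, the random state, and the computational-basis POVM are our choices, not taken from the text) builds $\Phi$ for a qubit SIC reference measurement with the SIC projectors themselves as post-measurement states, and checks that $Q(D) = P(D|H)\Phi P(H)$ reproduces the Born-rule probabilities:

```python
import numpy as np

# Tetrahedron qubit SIC (a standard construction; every qubit SIC is
# unitarily equivalent to it).  The MIC elements are H_i = Pi_i / d.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Pi = [0.5 * (I2 + n[0]*sx + n[1]*sy + n[2]*sz) for n in dirs]
H = [0.5 * P for P in Pi]
assert np.allclose(sum(H), I2)                # resolution of the identity

# Post-measurement states sigma_j = Pi_j, and [Phi^{-1}]_{ij} = tr(H_i sigma_j).
Phi_inv = np.array([[np.trace(Hi @ Pj).real for Pj in Pi] for Hi in H])
Phi = np.linalg.inv(Phi_inv)
assert np.allclose(Phi, 3*np.eye(4) - 0.5*np.ones((4, 4)))   # (d+1)I - J/d for d=2

# Quasi-LTP versus the Born rule, for a random state and the basis measurement.
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real
D = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
PH = np.array([np.trace(Hi @ rho).real for Hi in H])               # P(H_i)
PDH = np.array([[np.trace(Dj @ P).real for P in Pi] for Dj in D])  # P(D_j|H_i)
Q = PDH @ Phi @ PH
Q_born = np.array([np.trace(Dj @ rho).real for Dj in D])
assert np.allclose(Q, Q_born)
```

In this instance $\Phi$ comes out to $3I-\frac{1}{2}J$, where $J$ is the matrix of all 1s.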
Theorem 1 in reference \cite{DeBrota:2018} proves that the distance between $\Phi$ and the identity with respect to any unitarily invariant norm is minimized when both the MIC and the post-measurement states are proportional to a SIC. Unitarily invariant norms include the Frobenius norm, the trace norm, the operator norm, and all the other Schatten $p$-norms, as well as the Ky Fan $k$-norms. Although this theorem was proven for foundational reasons, a special case of the result turns out to answer in the affirmative a conjecture regarding a practical matter of quantum computation~\cite[\S VII.A]{Veitia:2018}. What ended up being important for the optimality proof in \cite{DeBrota:2018} was that both the MIC and the post-measurement states be proportional to SICs, but not necessarily that they be proportional to the \textit{same} SIC. Although the measures considered there were not sensitive to this distinction, the same SIC case has obvious conceptual and mathematical advantages. From a conceptual standpoint, when the post-measurement states are simply the projectors $\Pi_i$ corresponding to the SIC outcome just obtained, our ``throw away and reprepare'' process is equivalent to L\"uders rule updating, which there are independent reasons for preferring~\cite{Barnum:2002}. When the post-measurement states are the same SIC as the reference measurement, $\Phi$ takes the uniquely simple form \begin{equation} \Phi_{\rm SIC}=(d+1)I-\frac{1}{d}J\;, \end{equation} where $J$ is the Hadamard identity, that is, the matrix of all 1s. Inserted into \eqref{ltpanalog} and written in index form, this produces the expression \begin{equation} Q(D_j)=\sum_i\left[(d+1)P(H_i)-\frac{1}{d}\right]P(D_j|H_i)\;, \label{urgleichung} \end{equation} having the advantage that for each conditional probability given an outcome $H_i$, only the $i^{\rm th}$ reference probability figures into that term in the sum. 
This is not so for two arbitrarily chosen SICs, and, as such, that case would result in a messier probabilistic representation. This is not the only path by which to arrive at the conclusion that SICs furnish a minimally nonclassical reference measurement. Recall the close association of classicality and orthogonality noted in section~\ref{sec:basics}. From this standpoint, one might claim that the most ``classical'' or least ``quantum'' reference measurement is one that is closest to an orthogonal measurement. While we know from Corollary \ref{noortho} that a MIC cannot be an orthogonal basis, how close can one get? One way to quantify this closeness is via an operator distance between the Gramians of an orthogonal basis and a MIC. From the proof of Corollary \ref{noortho}, we know that if a MIC could be orthogonal its Gram matrix would be $[G]_{ij}=e_i\delta_{ij}$. With no further restrictions, we can get arbitrarily close to this ideal, for instance, with a MIC constructed as follows. Consider a set of $d^2$ matrices $\{A_i\}$ where the first $d$ of them are the eigenprojectors of a Hermitian matrix and the remaining $d^2-d$ are the zero matrix. Then, for an arbitrary\footnote{As long as a linear dependence does not develop.} MIC $\{B_j\}$, we may form a new MIC, indexed by a real number $0<t<1$, \begin{equation} E^t_i:=t A_i+(1-t)B_i\;. \end{equation} One may see that the Gram matrix of $\{E^t_i\}$ approaches the orthogonal Gram matrix in the limit $t\rightarrow 1$. But at such an extreme, the usefulness of a MIC is completely destroyed. In the above scenario when $t$ is close to $1$, the informational completeness is all but gone, as one has to reckon with vanishingly small probabilities when dealing with a MIC close to the limit point. Such a MIC fails miserably at being anything like a reasonable reference measurement.
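This degeneration is easy to exhibit numerically. In the sketch below (our own, in Python with NumPy; the $\sigma_z$ eigenprojectors and a tetrahedron qubit SIC stand in for $\{A_i\}$ and $\{B_j\}$), the Gram matrix of $\{E^t_i\}$ approaches the orthogonal ideal as $t\rightarrow 1$, while the probabilities of two of the outcomes, computed in the maximally mixed state $I/2$, collapse toward zero:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
B = [0.25 * (I2 + n[0]*sx + n[1]*sy + n[2]*sz) for n in dirs]  # qubit SIC MIC
A = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex),
     np.zeros((2, 2), dtype=complex), np.zeros((2, 2), dtype=complex)]

def gram(E):
    return np.array([[np.trace(Ei @ Ej).real for Ej in E] for Ei in E])

def dist_to_orthogonal_ideal(t):
    E = [t*Ai + (1 - t)*Bi for Ai, Bi in zip(A, B)]
    assert np.allclose(sum(E), I2)        # still resolves the identity
    G = gram(E)
    ideal = np.diag([np.trace(Ei).real for Ei in E])   # [G]_ij = e_i delta_ij
    return np.linalg.norm(G - ideal), E   # Frobenius distance to the ideal

d0, _ = dist_to_orthogonal_ideal(0.0)
d5, _ = dist_to_orthogonal_ideal(0.5)
d99, E99 = dist_to_orthogonal_ideal(0.99)
assert d99 < d5 < d0                      # Gram approaches the orthogonal ideal

# ...but two outcome probabilities, evaluated in I/2, become vanishingly small.
p = [np.trace(Ei @ (I2 / 2)).real for Ei in E99]
assert p[2] < 0.01 and p[3] < 0.01
```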
Although formally capable of being a reference measurement, a biased MIC deprives us of an even-handed treatment of indifference; the garbage state, which is poised in Hilbert space to capture pure state preparation indifference, would be represented by a non-flat probability distribution. Worse, for any sufficiently biased MIC, i.e., one with any weight less than $1/d^2$, the flat probability distribution is not reached by any density matrix. Consequently, what we're really after is an unbiased reference measurement which is as close to an orthogonal measurement as possible. With this additional constraint, the following theorem demonstrates that SICs are the optimal choice. \begin{theorem}\label{closetoortho} The closest an unbiased MIC can be to an orthogonal basis, as measured by the Frobenius distance between their Gramians, is when the MIC is a SIC.\footnote{An earlier paper by one of us (CAF) and a collaborator~\cite{Fuchs:2013} made the claim that the condition of being unbiased could be derived by minimizing the squared Frobenius distance; this is erroneous as the unequally weighted example with $\{E^t_i\}$ shows. For the purposes of that earlier paper, it is sufficient to impose by hand the requirement that the MIC be unbiased, since this is a naturally desirable property for a standard reference measurement.
Having made this extra proviso, the conceptual conclusions of that work are unchanged.} \end{theorem} \begin{proof} We lower bound the square of the Frobenius distance: \begin{equation} \begin{split} \sum_{ij}\left(\frac{1}{d}\delta_{ij}-{\rm tr}\, E_iE_j\right)^{\!2}&=\sum_i\left(\frac{1}{d}-{\rm tr}\, E_i^2\right)^{\!2}+\sum_{i\neq j}({\rm tr}\, E_iE_j)^2\\ &\geq \frac{1}{d^2}\left(\sum_i\left(\frac{1}{d}-{\rm tr}\, E_i^2\right)\right)^{\!2}+\frac{1}{d^4-d^2}\left(\sum_{i\neq j}{\rm tr}\, E_iE_j\right)^{\!2}\\ &=\frac{1}{d^2}\left(d-\sum_i{\rm tr}\, E_i^2\right)^{\!2}+\frac{1}{d^4-d^2}\left(d-\sum_i{\rm tr}\, E_i^2\right)^{\!2}\\ &=\frac{1}{d^2-1}\left(d-\sum_i{\rm tr}\, E_i^2\right)^{\!2}\geq\frac{(d-1)^2}{d^2-1}=\frac{d-1}{d+1}\;. \end{split} \end{equation} The first inequality follows from two invocations of the Cauchy--Schwarz inequality and achieves equality iff ${\rm tr}\, E_i^2$ and ${\rm tr}\, E_iE_j$, for $i\neq j$, are constants, that is, iff the MIC is an equiangular MIC. The third line is easy to derive from the fact that for any MIC, $\sum_{ij}[G]_{ij}=d$. The final inequality comes from noting that $\sum_i{\rm tr}\, E_i^2=\frac{1}{d^2}\sum_i{\rm tr}\,\rho_i^2\leq1$ with equality iff the MIC is rank-1. Thus the lower bound is saturated iff the equal weight MIC is rank-1 and equiangular, that is, iff it is a SIC. \end{proof} Theorem \ref{closetoortho} concerned the Gramian of a MIC. We can, in fact, show a stronger result on the inverse of the Gram matrix. \begin{theorem}\label{U-norm} Let $G$ be the Gram matrix of an unbiased MIC, and let $\norm{\cdot}$ be any unitarily invariant norm (i.e., any norm where $\norm{A} = \norm{UAV}$ for arbitrary unitaries $U$ and $V$). Then \begin{equation} \left\|{I-\frac{1}{d}G^{-1}}\right\|\geq \left\|{I - \frac{1}{d}G_{\rm SIC}^{-1}}\right\|\;, \end{equation} with equality if and only if the MIC is a SIC. \end{theorem} \begin{proof} This is a special case of Theorem 1 in~\cite{DeBrota:2018}. 
\end{proof} As with the theorems we proved above about MICs in general, this mathematical result has physical meaning. Classically speaking, the ``ideal of the detached observer'' (as Pauli phrased it~\cite{Fuchs:2017b}) is a measurement that reads off the system's point in phase space, call it $\lambda_i$, without disturbance. A state of maximal certainty is one where an agent is absolutely certain which $\lambda_i$ exists. An agent having maximal certainty about each of a pair of identically prepared systems implies that she expects to obtain the same outcome for a reference measurement on each system. In other words, her ``collision probability'' is unity: \begin{equation} \sum_i p(\lambda_i)^2 = 1. \label{eq:cprob-condition} \end{equation} There is also a quantum condition on states of maximal certainty. As before, we can approach the question, ``What is the unavoidable residuum that separates quantum from classical?''\ by finding the form of this quantum condition that brings it as close as possible to the classical version. \begin{lemma} Given a MIC $\{E_i\}$ with Gramian $G$, a quantum state is pure if and only if its probabilistic representation satisfies \begin{equation} \sum_{ij} p(E_i) p(E_j) [G^{-1}]_{ij} = 1. \end{equation} \end{lemma} \begin{proof} Let $\{E_i\}$ be a MIC. The expansion of any quantum state $\rho$ in the dual basis is \begin{equation} \rho = \sum_i ({\rm tr}\, E_i \rho) \widetilde{E}_i\;. \end{equation} By the Born Rule, the coefficients are probabilities: \begin{equation} \rho = \sum_i p(E_i) \widetilde{E}_i\;. \end{equation} Now, recall that while ${\rm tr}\, \rho = 1$ holds for any quantum state, ${\rm tr}\, \rho^2 = 1$ holds if and only if that operator is a pure state, i.e., a rank-1 projector. These operators are the extreme points of quantum state space; all other quantum states are convex combinations of them. 
In terms of the MIC's dual basis, the pure-state condition is \begin{equation} \sum_{ij} p(E_i)p(E_j) {\rm tr}\, \widetilde{E}_i \widetilde{E}_j = 1\;, \end{equation} and so, because the Gramian of the dual basis is the inverse of the MIC Gram matrix, \begin{equation} \sum_{ij} p(E_i) p(E_j) [G^{-1}]_{ij} = 1\;, \label{eq:QM-condition} \end{equation} as desired. \end{proof} Equation \eqref{eq:QM-condition} closely resembles the collision probability, \eqref{eq:cprob-condition}. If $G^{-1}$ were the identity, they would be identical. On the face of it, it looks as though we should see how close $G^{-1}$ can get to the identity. One minor wrinkle is that we should actually compare $G^{-1}$ with $dI$ instead of just with $I$, because an unbiased, orthogonal MIC (if one could exist) would have the Gram matrix $\frac{1}{d}I$. So, how close can we bring $G^{-1}$ to $dI$, by choosing an appropriate unbiased MIC? We know the answer to this from Theorem \ref{U-norm}: The best choice is a SIC. \section{Computational Overview of MIC Gramians} \label{sec:numerics} In order to explore the realm of MICs more broadly, and to connect them with other areas of mathematical interest, it is worthwhile to generate MICs \emph{randomly} and study the typical properties which result. In this section we focus on the Gram matrix spectra of four MIC varieties whose constructions are described in section \ref{sec:constructions}. These types are: \begin{enumerate} \item Generic MICs: a MIC generated from an arbitrary positive semidefinite basis \item Generic Rank-1 MICs: a MIC generated from an arbitrary rank-1 positive semidefinite basis \item WH MICs: a MIC obtained from the WH orbit of an arbitrary density matrix \item Rank-1 WH MICs: a MIC obtained from the WH orbit of an arbitrary pure state density matrix. \end{enumerate} In Hilbert space dimensions $2$ through $5$ we generated $10^5$ MICs with the following methodologies. 
We constructed the generic MICs as in section \ref{sec:micfrombases} and the WH MICs as in section \ref{sec:groupcovariant}. Each generic MIC was obtained from a basis of positive semidefinite operators and each WH MIC was obtained from the orbit of an initial density matrix. In the generic rank-1 case, the pure states defining the basis of projectors were sampled uniformly from the Haar measure. Likewise, in the rank-1 WH case, the initial vector was also sampled uniformly from the Haar measure. The positive semidefinite bases for the arbitrary-rank generic MICs and the initial states for the arbitrary-rank WH MICs were constructed as follows. First, Hermitian matrices $M$ were sampled from the Gaussian unitary ensemble, and, for each of these, the positive semidefinite matrix $M^\dag M$ was formed. A set of $d^2$ of these generically forms a positive semidefinite basis, and a trace-normalized instance served as the initial state for the WH MICs. For each MIC, we constructed its Gram matrix and computed the eigenvalues. Figures \ref{fig:d2}, \ref{fig:d3}, \ref{fig:d4}, and \ref{fig:d5} are histograms of the eigenvalue distributions for dimensions $2$, $3$, $4$, and $5$, respectively. We note some expected and unexpected features of these distributions. In accordance with Theorem \ref{unbiased}, both group covariant types, being unbiased, always have the maximal eigenvalue $1/d$, while this is the lower bound for the maximal eigenvalue for the other two types. Particularly in the unbiased cases, because the eigenvalues must sum to $1$, not all of them can be too large, so it is perhaps not surprising that there are few eigenvalues approaching $1/d$ and that all families show exponential decay until that value. However, the spectra of rank-1 MICs, especially in dimensions 2 and 3 (Figures \ref{fig:d2} and \ref{fig:d3}), display a richness of features for which we have no explanation.
Most surprising of all is the small eigenvalue plateau in Figure \ref{fig:d3} for the $d=3$ rank-1 WH MICs. Further scrutiny has revealed that the plateau ends precisely at $1/12$, the average value for the non-maximal eigenvalues of an unbiased $d=3$ MIC Gram matrix. The Gram matrix for a $d=3$ SIC has the spectrum \begin{equation} \left(\frac{1}{3},\frac{1}{12},\frac{1}{12},\frac{1}{12},\frac{1}{12},\frac{1}{12},\frac{1}{12},\frac{1}{12},\frac{1}{12}\right)\;, \end{equation} which has the maximal amount of degeneracy allowed. Dimension $3$ is also exceptional in the study of SICs: It is the only known dimension for which there is a continuous family of unitarily inequivalent SICs~\cite{Tabia:2013, Hughston:2016}. Because the Gram matrix spectra for $d=3$ rank-1 WH MICs also behave unlike those of the other dimensions we have checked, and because the plateau appears to be connected with the value $1/12$, we conjecture that the eigenvalue plateau and the continuous family of SICs may be related. \begin{conjecture} The plateau in the eigenvalue distribution for $d = 3$, seen in Figure~\ref{fig:d3}, is related to the existence of a continuous family of unitarily inequivalent SICs in that dimension. \end{conjecture} \begin{figure} \caption{$d=2$ random MIC Gram matrix spectra, $N=10^5$, bin size $1/200$.} \label{fig:d2} \end{figure} \begin{figure} \caption{$d=3$ random MIC Gram matrix spectra, $N=10^5$, bin size $1/198$.} \label{fig:d3} \end{figure} \begin{figure} \caption{$d=4$ random MIC Gram matrix spectra, $N=10^5$, bin size $1/200$.} \label{fig:d4} \end{figure} \begin{figure} \caption{$d=5$ random MIC Gram matrix spectra, $N=10^5$, bin size $1/200$.} \label{fig:d5} \end{figure} \section{Conclusions} We have argued that informational completeness provides the right perspective from which to compare the quantum and the classical.
The structure of Minimal Informationally Complete quantum measurements and especially how and to what degree this structure requires the abandonment of classical intuitions therefore deserves explicit study. We have surveyed the domain of MICs and derived some initial results regarding their departure from such classical intuitions as orthogonality, repeatability, and the possibility of certainty. Central to understanding MICs are their Gram matrices; it is through properties of these matrices that we were able to derive many of our results. We have only just scratched the surface of this topic, as our conjectures and unexplained numerical features of Gram matrix spectra can attest. In a sequel, we will explore another application of Gram matrices. They hold a central role in the construction of \emph{Wigner functions} from MICs~\cite{Stacey:2016c, Zhu:2016a, DeBrota:2017}, and Wigner functions are a topic pertinent to quantum computation~\cite{Wootters:1987, Gibbons:2004, Veitch:2012, Veitch:2014, Howard:2016}. Many properties of MIC Gram matrices remain unknown. Numerical investigations have, in some cases, outstripped the proving of theorems, resulting in the conjectures we have enumerated. Another avenue for potential future exploration is the application of Shannon theory to MICs. Importing the notions of information theory into quantum mechanics has proved quite useful over the years at illuminating strange or surprising features of the physics~\cite{Braunstein:1990, Fuchs:1998, Buck:2000}. One promising avenue of inquiry is studying the probabilistic representations of quantum states using entropic measures. In the case of SICs, this has already yielded intriguing connections among information theory, group theory and geometry~\cite{Stacey:2016c, Stacey:2016b, Slomczynski:2014, Szymusiak:2014, Szymusiak:2015}. The analogous questions for other classes of MICs remain open for investigation. 
\section*{Acknowledgments} We thank Marcus Appleby, Lane Hughston, Peter Johnson, Steven van Enk and Huangjun Zhu for discussions. This research was supported in part by the John E.\ Fetzer Memorial Trust, the John Templeton Foundation, and grants FQXi-RFP-1612 and FQXi-RFP-1811B of the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. \end{document}
\begin{document} \title{The Cascading Haar Wavelet algorithm for computing the Walsh-Hadamard Transform} \name{Andrew Thompson} \address{Mathematical Institute\\University of Oxford\\United Kingdom} \newtheorem{thm}{Theorem} \newtheorem{lem}{Lemma} \maketitle \begin{abstract} We propose a novel algorithm for computing the Walsh-Hadamard Transform (WHT) which consists entirely of Haar wavelet transforms. We prove that the algorithm, which we call the Cascading Haar Wavelet (CHW) algorithm, shares precisely the same serial complexity as the popular divide-and-conquer algorithm for the WHT. We also propose a natural way of parallelizing the algorithm which has a number of attractive features. \end{abstract} \begin{keywords} Walsh-Hadamard Transform, Haar wavelet transform, complexity analysis. \end{keywords} \section{Introduction} The Walsh-Hadamard Transform (WHT) is a staple of the digital signal processing world, and is used extensively in communication systems, image processing, and in general as a proxy for the Fast Fourier Transform (FFT)~\cite{walsh_paley}. Like the FFT, the WHT of a signal of length $n$, where $n$ is a power of $2$, can be computed with $\mathcal{O}(n\log n)$ complexity. There exist well-established algorithms for computing the WHT based on divide-and-conquer principles, which exploit the recursive properties of the transform, namely that any WHT of size $2^m$ can be broken down into two WHTs of size $2^{m-1}$. Various orderings of WHT coefficients are possible, most notably natural, dyadic and sequency orderings, and classical WHT algorithms essentially differ depending upon the desired ordering. See~\cite{walsh_paley} for background on the various WHT orderings and their corresponding algorithms.
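For concreteness, a divide-and-conquer WHT of this kind can be implemented in a few lines (our own illustrative Python sketch; unnormalized, and returning coefficients in the natural, i.e.\ Sylvester, ordering). Each level of the recursion performs $n$ additions, giving $n\log_2 n$ operations in total:

```python
# Minimal recursive WHT (unnormalized, natural/Sylvester ordering).
# Splitting into sums and differences reduces a size-n transform to
# two size-n/2 transforms: the classic divide-and-conquer scheme.
def wht(x):
    n = len(x)
    if n == 1:
        return list(x)
    half = n // 2
    a = [x[i] + x[i + half] for i in range(half)]   # sums -> first half
    b = [x[i] - x[i + half] for i in range(half)]   # differences -> second half
    return wht(a) + wht(b)

# Example: the WHT of a delta signal is a row of all +1s.
print(wht([1, 0, 0, 0]))   # [1, 1, 1, 1]
```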
These fundamental algorithms have been known for many years, and more recent work has focused on practical considerations, such as how to incorporate these algorithms into parallel architectures and FPGAs; see for example~\cite{performance,FPGA}. It has been previously noted that there exist interesting relationships between the WHT with dyadic ordering and the oldest and simplest discrete wavelet transform, the Haar wavelet transform. It was observed in~\cite{walsh_relations} that computing the WHT of a signal has a striking interpretation in terms of Haar wavelet coefficients: it is equivalent to applying WHTs of different sizes independently to the coefficients within each scale of the Haar wavelet transform. Based on this observation, the authors propose a \emph{Haar-Walsh} transform, which transforms Haar wavelet coefficients into WHT coefficients, thereby giving an alternative approach to computing the WHT: via a detour into the Haar wavelet domain. This approach was shown to match the complexity of the standard algorithms, and it has the additional appeal of computing the Haar wavelet transform for free in the process. Nonetheless, it appears that the approach never became a popular alternative to the standard WHT algorithms. We also note that, while a single Haar wavelet transform is computed at the start, the algorithm proceeds using the standard divide-and-conquer approach within each scale of the Haar wavelet transform thereafter. In this paper, we propose a novel algorithm for computing the WHT with coefficients (in dyadic order) which is inspired by some of the connections between WHTs and Haar wavelet transforms, but which is fundamentally different from all preceding algorithms. Its marked difference is apparent from the fact that it consists entirely of Haar wavelet transforms. 
We show that the algorithm, which we call the Cascading Haar Wavelet (CHW) algorithm, matches the serial complexity of the standard algorithms for either the natural or dyadic orderings, requiring precisely $n\log_2 n$ addition operations for its computation. Furthermore, we propose a natural way of parallelizing the algorithm in such a way that each of the nodes in the parallel architecture performs a single fixed task, namely a Haar wavelet transform of a given size. \section{Description of the algorithm} Given $m\geq 0$, the $2^m\times 2^m$ Hadamard matrix with columns in dyadic (Paley) order~\cite{walsh_relations,walsh_paley}, $H_m$, is defined by the recursion \begin{equation}\label{walsh_dyadic} H_0:=1;\;\;\;\;H_{m+1}=\frac{1}{\sqrt{2}}\begin{bmatrix}\begin{array}{c}H_m\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\ H_m\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{array}\end{bmatrix}\;\mbox{for}\;m\geq 0, \end{equation} where $\otimes$ denotes the Kronecker product. Given $m\geq 0$, the $2^m\times 2^m$ Haar matrix~\cite{walsh_paley}, $\Psi_m$, may be defined by the recursion \begin{equation}\label{haar} \Psi_0:=1;\;\;\;\;\Psi_{m+1}=\frac{1}{\sqrt{2}}\begin{bmatrix}\begin{array}{c}\Psi_m\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\I_m\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{array}\end{bmatrix}\;\mbox{for}\;m\geq 0, \end{equation} where we write $I_m$ for the $2^m\times 2^m$ identity matrix. The CHW algorithm is based on a particular decomposition of a Hadamard matrix in terms of Haar wavelet transform matrices. We use the notation $\displaystyle\prod_{r=1}^p M_r=M_p\cdots M_2 M_1$ for a $p$-fold matrix product. \begin{thm}\label{decomp} \begin{equation}\label{decomp_eqn} H_m=\left\{\prod_{r=1}^{m-1}I_{r-1}\otimes\begin{bmatrix}I_{m-r}&0\\0&\Psi_{m-r}\end{bmatrix}\right\}\Psi_m,\;\mbox{for}\;m\geq 1. \end{equation} \end{thm} A proof of Theorem~\ref{decomp} is given in Section~\ref{derivation}. 
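Theorem~\ref{decomp} can also be checked numerically. The following Python sketch (our own, using NumPy) builds $H_m$ and $\Psi_m$ from the recursions (\ref{walsh_dyadic}) and (\ref{haar}) and verifies the decomposition (\ref{decomp_eqn}) for small $m$:

```python
import numpy as np

def hadamard_dyadic(m):
    # H_m with columns in dyadic (Paley) order, built from its recursion.
    H = np.array([[1.0]])
    for _ in range(m):
        H = np.vstack([np.kron(H, [[1, 1]]), np.kron(H, [[1, -1]])]) / np.sqrt(2)
    return H

def haar(m):
    # The Haar matrix Psi_m, built from its recursion.
    P = np.array([[1.0]])
    for k in range(m):
        P = np.vstack([np.kron(P, [[1, 1]]),
                       np.kron(np.eye(2**k), [[1, -1]])]) / np.sqrt(2)
    return P

def chw(m):
    # Right-hand side of the decomposition: the cascade of block Haar transforms.
    M = haar(m)
    for r in range(1, m):
        n = 2**(m - r)
        blk = np.block([[np.eye(n), np.zeros((n, n))],
                        [np.zeros((n, n)), haar(m - r)]])
        M = np.kron(np.eye(2**(r - 1)), blk) @ M
    return M

for m in range(1, 7):
    assert np.allclose(chw(m), hadamard_dyadic(m))
```

The same builders also confirm the orthogonality of $\Psi_m$, which the proof in Section~\ref{derivation} uses implicitly through $\Psi^T_{m-1}\Psi_{m-1}=I$.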
Expanding the product in (\ref{decomp_eqn}), we have $$H_m=\left\{\begin{bmatrix} I_1&&&&&&\\ &\Psi_1&&&&&\\ &&I_1&&&&\\ &&&\Psi_1&&&\\ &&&&\ddots&&\\ &&&&&I_1&\\ &&&&&&\Psi_1\end{bmatrix}\cdots\right.$$ $$\left.\cdots\begin{bmatrix}I_{m-2}&&&\\&\Psi_{m-2}&&\\&&I_{m-2}&\\&&&\Psi_{m-2}\end{bmatrix}\begin{bmatrix}I_{m-1}&\\&\Psi_{m-1}\end{bmatrix}\right\}\Psi_m,$$ which shows that the WHT can be computed by first computing the Haar wavelet transform, and then employing a divide-and-conquer approach also consisting of Haar wavelet transforms, as illustrated in Figure~\ref{flow_diagram}. \begin{figure} \caption{An illustration of the Cascading Haar Wavelet algorithm.} \label{flow_diagram} \end{figure} Figure~\ref{flow_diagram} is potentially misleading, in that the identity transforms do not actually need to be performed! We analyze the complexity of the CHW algorithm in Section~\ref{complexity}, where we show that the algorithm requires $n\log_2 n$ summations, where $n=2^m$ -- exactly the same as the standard WHT algorithms~\cite{walsh_paley}. \section{Complexity analysis}\label{complexity} We have proposed a method for computing the WHT which is built up entirely of Haar wavelet transforms. To analyze its complexity, we therefore need a complexity result for the Haar wavelet transform. \begin{lem}[{\cite[Section 7.3.3]{orthogonal_transforms}}]\label{haar_lemma} The Haar wavelet transform corresponding to multiplication by $\Psi_m$ can be computed in $2(2^m-1)$ operations. \end{lem} Equipped with this result, we can determine the complexity of the CHW algorithm. \begin{thm}\label{CHW_complexity} The CHW algorithm can be implemented in $m\cdot 2^m$ operations. \end{thm} \textbf{Proof:} From Figure~\ref{flow_diagram} we see that the CHW algorithm requires a single Haar wavelet transform of size $2^m$, and $2^{m-1-r}$ Haar wavelet transforms of size $2^r$, for $r=1,2,\ldots,m-1$. 
By Lemma~\ref{haar_lemma}, the total number of operations is therefore \begin{eqnarray} &&2(2^m-1)+\sum_{r=1}^{m-1}\left\{2^{m-1-r}\cdot 2(2^r-1)\right\}\nonumber\\ &=&2^{m+1}-2+\sum_{r=1}^{m-1}2^m-\sum_{r=1}^{m-1}2^{m-r}\nonumber\\ &=&2^{m+1}-2+2^m(m-1)-2(2^{m-1}-1),\nonumber \end{eqnarray} which simplifies to $m\cdot 2^m$. $\Box$ We have shown that the CHW algorithm has precisely the same serial complexity as the popular divide-and-conquer algorithms for the WHT. In the next section, we propose a natural way of parallelizing the CHW which has a number of attractive features. \section{A proposal for a parallel implementation} Given the WHT's importance in signal processing, it is not surprising that there already exists a body of work addressing the question of how to efficiently parallelize it; see~\cite{performance} for an example. In this section, we make the observation that there is a very natural way to parallelize the CHW algorithm, which possesses a number of attractive features. In the CHW algorithm, a signal of length $2^m$ is cascaded through a succession of Haar wavelet transforms. It is possible therefore to consider a parallel architecture in which each of $m-1$ nodes is devoted to the task of performing Haar wavelet transforms of a certain size. A scheduling chart illustrating this procedure for $m=4$ is shown in Figure~\ref{parallel_flow}. In this case, we have three nodes, each devoted to the task of performing the Haar wavelet transforms $\Psi_1$, $\Psi_2$ and $\Psi_3$. A full Haar wavelet transform $\Psi_4$ must first be performed (by one of the three nodes, or by an extra one), and thereafter each node is occupied for approximately half of the total running time. The output is the WHT coefficients in dyadic order. Note the attractive properties of this scheme: each node need only be programmed to perform a single task, and communication of the output from any given node follows fixed and straightforward rules. 
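The counting argument in the proof of Theorem~\ref{CHW_complexity} can also be checked mechanically, using the cost $2(2^r-1)$ from Lemma~\ref{haar_lemma} (a quick Python tally of our own):

```python
# Tally the operation count of the CHW algorithm: one full Haar transform of
# size 2^m, plus 2^(m-1-r) Haar transforms of size 2^r for r = 1, ..., m-1.
def chw_operations(m):
    ops = 2 * (2**m - 1)                        # the initial full Haar transform
    for r in range(1, m):
        ops += 2**(m - 1 - r) * 2 * (2**r - 1)  # the cascade stages
    return ops

# The closed form of Theorem 2: m * 2^m operations.
for m in range(1, 21):
    assert chw_operations(m) == m * 2**m
```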
\begin{figure} \caption{An illustration of the proposed parallel implementation of the CHW algorithm.} \label{parallel_flow} \end{figure} \section{Proof of Theorem~\ref{decomp}}\label{derivation} We proceed by induction. The result holds trivially for $m=1$. Assume (\ref{decomp_eqn}) holds for $m-1$. Then \begin{eqnarray} &&\sqrt{2}\left\{\prod_{r=1}^{m-1}I_{r-1}\otimes\begin{bmatrix}I_{m-r}&0\\0&\Psi_{m-r}\end{bmatrix}\right\}\Psi_m\nonumber\\ &=&\left\{\prod_{r=1}^{m-1}I_{r-1}\otimes\begin{bmatrix}I_{m-r}&0\\0&\Psi_{m-r}\end{bmatrix}\right\}\begin{bmatrix}\begin{array}{c}\Psi_{m-1}\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\I_{m-1}\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{array}\end{bmatrix}\nonumber \end{eqnarray} \begin{eqnarray} &=&\left\{\prod_{r=2}^{m-1}I_{r-1}\otimes\begin{bmatrix}I_{m-r}&0\\0&\Psi_{m-r}\end{bmatrix}\right\}\nonumber\\ &&\;\;\;\;\cdot\begin{bmatrix}I_{m-1}&0\\0&\Psi_{m-1}\end{bmatrix}\begin{bmatrix}\begin{array}{c}\Psi_{m-1}\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\I_{m-1}\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{array}\end{bmatrix}\nonumber\\ &=&\left\{\prod_{r=2}^{m-1}I_{r-1}\otimes\begin{bmatrix}I_{m-r}&0\\0&\Psi_{m-r}\end{bmatrix}\right\}\begin{bmatrix}\begin{array}{c}\Psi_{m-1}\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\\Psi_{m-1}\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{array}\end{bmatrix}\nonumber\\ &=&I_1\otimes\left\{\prod_{r=2}^{m-1}I_{r-2}\otimes\begin{bmatrix}I_{m-r}&0\\0&\Psi_{m-r}\end{bmatrix}\right\}\nonumber\\ &&\;\;\;\;\cdot\begin{bmatrix}\begin{array}{c}\Psi_{m-1}\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\\Psi_{m-1}\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{array}\end{bmatrix}\nonumber\\ &=&\begin{bmatrix}\begin{array}{c}\displaystyle\prod_{r=1}^{m-2}I_{r-1}\otimes\begin{bmatrix}I_{m-1-r}&0\\0&\Psi_{m-1-r}\end{bmatrix}\Psi_{m-1}\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\
\displaystyle\prod_{r=1}^{m-2}I_{r-1}\otimes\begin{bmatrix}I_{m-1-r}&0\\0&\Psi_{m-1-r}\end{bmatrix}\Psi_{m-1}\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{array}\end{bmatrix},\nonumber \end{eqnarray} which, by the inductive hypothesis, is equal to \begin{eqnarray} &=&\begin{bmatrix}H_{m-1}\Psi^T_{m-1}\Psi_{m-1}\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\H_{m-1}\Psi^T_{m-1}\Psi_{m-1}\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{bmatrix}\nonumber\\ &=&\begin{bmatrix}H_{m-1}\otimes\left(\begin{array}{ll}1&1\end{array}\right)\\ H_{m-1}\otimes\left(\begin{array}{ll}1&-1\end{array}\right)\end{bmatrix}\nonumber\\ &=&\sqrt{2}H_m.\nonumber \end{eqnarray} Dividing both sides by $\sqrt{2}$ completes the induction. $\Box$ \section{Relation to prior work} The author is aware of two papers in particular in which the relationship between the WHT and the Haar wavelet transform has been explored. In~\cite{walsh_relations}, the authors consider a \emph{Haar-Walsh} transform which transforms Haar wavelet coefficients into WHT coefficients (in dyadic order), which they observe to be equivalent to multiplication by the matrix \begin{equation}\label{decomp2} H_m\Psi_m^T=\begin{bmatrix} 1&&&&\\ &H_0&&&\\ &&H_1&&\\ &&&\ddots&\\ &&&&H_{m-1}\end{bmatrix}.\end{equation} The Haar-Walsh transform can therefore be computed by taking separate WHTs of the Haar wavelet coefficients at each scale. See also~\cite{multilevel} by the current author in which the implications of this decomposition are explored for multilevel compressive sensing. The possibility of recursively decomposing Hadamard matrices using (\ref{decomp2}) appears to have been noted in the concluding remarks of~\cite{fino}, and indeed it is possible to derive the CHW algorithm from repeated application of (\ref{decomp2}). Closely related though the ideas in~\cite{fino} are, to the author's best knowledge, there is no mention in the literature of a WHT consisting entirely of Haar wavelet transforms, nor any statement of the decomposition result given in Theorem~\ref{decomp}.
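The identity (\ref{decomp2}) is easy to verify numerically for small sizes; the following standalone Python check (not from \cite{walsh_relations}) confirms it for $m=2$, using the normalized Haar matrix and the normalized Hadamard matrix with rows in dyadic order.

```python
from math import sqrt, isclose

r2, half = 1 / sqrt(2), 0.5

# Normalized Haar matrix Psi_2 and dyadic-ordered Hadamard matrix H_2 (m = 2).
Psi2 = [[half, half, half, half],
        [half, half, -half, -half],
        [r2, -r2, 0.0, 0.0],
        [0.0, 0.0, r2, -r2]]
H2 = [[half, half, half, half],
      [half, half, -half, -half],
      [half, -half, half, -half],
      [half, -half, -half, half]]

# Compute H_2 * Psi_2^T.
prod = [[sum(H2[i][k] * Psi2[j][k] for k in range(4)) for j in range(4)]
        for i in range(4)]

# Expected block diagonal diag(1, H_0, H_1), with H_0 = [1] and H_1 the
# 2x2 normalized Hadamard matrix.
expected = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, r2, r2],
            [0, 0, r2, -r2]]
assert all(isclose(prod[i][j], expected[i][j], abs_tol=1e-12)
           for i in range(4) for j in range(4))
```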
\section{Concluding remarks} We have proposed the novel Cascading Haar Wavelet (CHW) algorithm for computing the WHT. We have also proposed a parallelization scheme, and it remains to comprehensively understand the practical implementation advantages that the CHW might have over other approaches to parallelization of the WHT. \end{document}
\begin{document} \title{The uniqueness of $\PSU_3(8)$ in the Monster} \begin{abstract} As a contribution to an eventual solution of the problem of the determination of the maximal subgroups of the Monster we show that there is a unique conjugacy class of subgroups isomorphic to $\mathrm{PSU}_3(8)$. The argument depends on some computations in various subgroups, but not on computations in the Monster itself. \end{abstract} \section{Introduction} The maximal subgroup problem for almost simple groups became a major focus for research in group theory in the 1980s, and remains so today. In the case of the sporadic groups, a systematic attack on the problem began earlier, with Livingstone and his students in the 1960s. The problem was solved in the 20th century for $25$ of the $26$ sporadic simple groups, and their automorphism groups, but one case, namely the Fischer--Griess Monster group $\mathbb{M}$, remains outstanding. A great deal of work on this case has already been done. The maximal $p$-local subgroups were classified in \cite{oddlocals,MSh,Meier}, and some theoretical work on non-local subgroups was accomplished in \cite {Anatomy1,Anatomy2}. Following successful computer constructions of the Monster \cite{3loccon,2loccon} other techniques became available, and more progress was made \cite{post,A5subs,S4subs,L241,L227,L213B,Sz8A}, including discovery of five previously unknown maximal subgroups, isomorphic to \begin{itemize} \item $\mathrm{PSL}_2(71)$, $\mathrm{PSL}_2(59)$, $\mathrm{PSL}_2(41)$, $\mathrm{PGL}_2(29)$, $\mathrm{PGL}_2(19)$. \end{itemize} The cases left open by this published work are possible maximal subgroups with socle isomorphic to one of the following simple groups: \begin{itemize} \item $\mathrm{PSL}_2(8)$, $\mathrm{PSL}_2(13)$, $\mathrm{PSL}_2(16)$, $\mathrm{PSU}_3(4)$, $\mathrm{PSU}_3(8)$. \end{itemize} Of these, $\mathrm{PSL}_2(8)$ and $\mathrm{PSL}_2(16)$ have been classified in unpublished work of P. E. 
Holmes, although the results seem not to be publicly available. In this paper we deal with the case $\mathrm{PSU}_3(8)$. Specifically, we show that, up to conjugacy, there is a unique subgroup $\mathrm{PSU}_3(8)$ in the Monster. Its normalizer is the already known maximal subgroup $(A_5\times \mathrm{PSU}_3(8){:}3){:}2$. Notation follows \cite{Atlas,FSG}, where required background information can be found. \section{Existence} Exactly one conjugacy class of subgroups of $\mathbb M$ isomorphic to $\mathrm{PSU}_3(8)$ is contained in the known maximal subgroups. The normalizer of such a group is $(A_5\times\mathrm{PSU}_3(8){:}3){:}2$, itself a maximal subgroup of $\mathbb M$. For details, see \cite{Anatomy1}. \section{Strategy for proving uniqueness} The group $\mathrm{PSU}_3(8)$ can be generated from a group $\mathrm{PSL}_2(8)\times 3$ by extending $D_{18}\times 3$ to $(9\times 3){:}S_3$. Note that $9\times 3$ contains three cyclic subgroups of order $9$, which are permuted by the $S_3$. Similarly, there are three complements of order $3$, which are also permuted by the $S_3$. Hence it is sufficient to extend $9\times 3$ to $D_{18}\times 3$ normalizing one of the other two cyclic subgroups of order $9$. We note in particular that all cyclic groups of order $9$ in $\mathrm{PSU}_3(8)$ are conjugate, and hence we need only consider subgroups $\mathrm{PSL}_2(8)\times 3$ in which the diagonal elements of order $9$ are conjugate in the Monster to the elements of order $9$ inside $\mathrm{PSL}_2(8)$. We shall show that there is only one class of $\mathrm{PSL}_2(8)\times 3$ in the Monster that satisfies this condition. Moreover, the cyclic group of order $9$ extends to a unique $D_{18}$ in $\mathrm{PSU}_3(8)$. Hence the $D_{18}\times 3$ we wish to construct is conjugate in the Monster to the one inside $\mathrm{PSL}_2(8)\times 3$.
\section{The subgroup $3\times\mathrm{PSL}_2(8)$} Since $\mathrm{PSL}_2(8)$ contains elements of order $9$, the elements of order $3$ fuse to $\mathbb M$-class $3B$. Since it contains a pure $2^3$, the involutions are in $\mathbb M$-class $2B$. In \cite{Anatomy1} Norton accounts for many of the structure constants of type $(2,3,7)$ in the Monster. In particular he shows that there is no $3\times\mathrm{PSL}_2(8)$ in which the $\mathrm{PSL}_2(8)$ is of type $(2B,3B,7B)$. He also shows that there are three classes of $\mathrm{PSL}_2(8)$ of type $(2B,3B,7A)$, just two of which centralize elements of order $3$. The respective normalizers are: \begin{enumerate} \item $\mathrm{PSL}_2(8){:}3\times 3S_6$. Here the central $3$ in $3A_6$ is in Monster class $3A$, as are the $3$-cycles. The elements mapping to fixed-point-free $3$-elements in $A_6$ are in Monster class $3C$. \item $\mathrm{PSL}_2(8)\times 2\times S_4$. Here, the elements of order $3$ in $S_4$ are in Monster class $3A$. \end{enumerate} Hence there are exactly four classes of $\mathrm{PSL}_2(8)\times 3$ in the Monster. \section{Fusion of elements of order $9$} Consider first the case where the central elements of $3\times \mathrm{PSL}_2(8)$ are in class $3C$ in the Monster. We restrict the character of degree $196883$ to $S_3\times \mathrm{Th}$. Using the character values on $3C$ and $2A$, we obtain a decomposition as $$2\otimes 65628 + 1^+\otimes 34999 + 1^-\otimes 30628$$ where the first factor denotes the representation of $S_3$. 
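The dimension bookkeeping in this restriction, and in the decompositions into $\mathrm{Th}$-irreducibles used below, can be checked mechanically; a trivial Python sanity check, with the degrees taken from the text:

```python
# Restriction of the 196883-dimensional representation to S_3 x Th:
# 2 (x) 65628 + 1+ (x) 34999 + 1- (x) 30628.
assert 2 * 65628 + 34999 + 30628 == 196883

# Decompositions of the Th-constituents into irreducibles of Th
# (degrees 1, 248, 4123, 30628, 30875, 61256 appear below).
assert 30875 + 4123 + 1 == 34999
assert 61256 + 4123 + 248 + 1 == 65628
```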
The values on classes $2A$ and $7A$ of $\mathrm{Th}$ are easily computed: $$\begin{array}{rrr} 1A & 2A & 7A\cr 34999 & 183 & 13\cr 30628 & -92 & 3\cr 65628 & 92 & 17 \end{array}$$ from which it is easy to see that the decomposition into irreducibles of $\mathrm{Th}$ is given by \begin{eqnarray*} 34999 &=& 30875+4123+1\\ 30628&=& 30628\\ 65628&=& 61256+4123+248+1 \end{eqnarray*} It then follows that the values of the character of degree $196883$ on elements of $\mathrm{Th}$-class $9A$, $9B$, and $9C$ are respectively $-1$, $-1$, $26$, while the values on the corresponding diagonal elements in $3\times\mathrm{Th}$ are $26$, $26$, and $-1$ respectively. In other words, the diagonal elements are always in a different conjugacy class from the elements in $\mathrm{Th}$. Hence this case is eliminated. (In fact, in this case the $\mathrm{PSL}_2(8)$ contains elements of $\mathrm{Th}$ class $9C$, that is, Monster class $9A$.) The remaining three classes of $\mathrm{PSL}_2(8)\times 3$, namely the ones with a central $3A$-element, are contained in the double cover of the Baby Monster. The work in \cite{maxB} then shows that in these cases the elements of order $9$ in $\mathrm{PSL}_2(8)$ are in Baby Monster class $9B$, so Monster class $9A$. Moreover, in two of the three cases, the diagonal elements of order $9$ are in Baby Monster class $9A$, so Monster class $9B$. But in $\mathrm{PSU}_3(8)$ these two classes of elements of order $9$ are fused. Hence these cases cannot extend to $\mathrm{PSU}_3(8)$ in the Monster. The remaining case therefore is a $3A$-type, with normalizer $\mathrm{PSL}_2(8){:}3\times 3S_6$. We know there exists such a subgroup $\mathrm{PSU}_3(8)$ in the Monster, so all elements of order $9$ fuse to $9A$. \section{The centralizer of a $9A$ element} From \cite{oddlocals}, the centralizer of a $9A$-element in the Monster has shape $[3^7].\mathrm{PSU}_4(2)$. 
Looking more closely, we see that the structure is the central product of the cyclic group of order $9$ with a group of shape $3^6{}^{\textstyle .} \Omega_5(3)$, in which the action of $\Omega_5(3)$ on $3^6$ is uniserial, with a trivial submodule and a natural module as quotient. Moreover, since this group contains $9 \times 3{}^{\textstyle .} S_6$, the extension is non-split, in the sense that $C(9)/9\cong 3^5{}^{\textstyle .} \Omega_5(3)$. These facts can be checked computationally, using the construction of the subgroup $3^{1+12}{}^{\textstyle .}2{}^{\textstyle .}\mathrm{Suz}{:}2$ described in \cite{3loccon}. But in fact the proof below does not depend on any of the subtleties, so the sceptical reader can ignore them. \section{The centralizer of $9\times 3$} Centralizing the additional element of order $3$ reduces the group from $9\circ3^6{}^{\textstyle .} \Omega_5(3)$ to $9\circ3^6{}^{\textstyle .} A_6$. The structure of the latter group is very subtle, and in particular it contains several conjugacy classes of $3{}^{\textstyle .} A_6$, and it is not obvious which one centralizes $\mathrm{PSL}_2(8)$. In any case, the group of elements which either centralize $9\times 3$ or extend it to $D_{18}\times 3$ is of shape $(9\circ 3^6){}^{\textstyle .}(A_6\times 2)= (9\times 3).3^4.(2\times A_6)$. We must adjoin an involution in the conjugacy class which maps to the central involution in the quotient $A_6\times 2$. But this conjugacy class contains only $3^6=729$ elements, while the group of symmetries is $9\times 3A_6$, of order $9720$. Hence every group generated in the prescribed fashion has non-trivial centralizer in the Monster. Indeed, this counting argument implies that such a centralizer has order at least $9720/729=13\frac13$. \section{Proof of the theorem} The centralizer of an element of order $19$ is $19\times A_5$, containing elements of classes $2A$, $3C$ and $5A$. The only subgroup of $A_5$ with order at least $14$ is $A_5$ itself. 
Hence every $\mathrm{PSU}_3(8)$ in the Monster has centralizer conjugate to this $A_5$. As a corollary we obtain new proofs of the uniqueness of $\mathrm{PSU}_3(8)$ as a subgroup of the Baby Monster, the Thompson group, and the Harada--Norton group. \end{document}
\begin{document} \title{Noncontextuality Inequalities from Antidistinguishability} \author{Matthew Leifer} \affiliation{Schmid College of Science and Technology, Chapman University, One University Dr., Orange, CA 92866, USA} \affiliation{Institute for Quantum Studies, Chapman University, One University Dr., Orange, CA 92866, USA} \author{Cristhiano Duarte} \affiliation{Schmid College of Science and Technology, Chapman University, One University Dr., Orange, CA 92866, USA} \email[Corresponding author: ]{[email protected]} \date{\today} \begin{abstract} Noncontextuality inequalities are usually derived from the distinguishability properties of quantum states, i.e.\ their orthogonality. Here, we show that \emph{anti}distinguishability can also be used to derive noncontextuality inequalities. The Yu-Oh 13 ray noncontextuality inequality can be re-derived and generalized as an instance of our antidistinguishability method. For some sets of states, the antidistinguishability method gives tighter bounds on noncontextual models than just considering orthogonality, and the Hadamard states provide an example of this. We also derive noncontextuality inequalities based on mutually unbiased bases and symmetric informationally complete POVMs. Antidistinguishability-based inequalities were initially discovered as overlap bounds for the reality of the quantum state. Our main contribution here is to show that they are also noncontextuality inequalities. \end{abstract} \maketitle \section{Introduction} \label{Intro} Quantum contextuality has its origins in work of Bell \cite{Bell_ProblemHiddenVariables_1966}, and Kochen and Specker \cite{Kochen_ProblemHiddenVariables_1967}, where they proved a no-go theorem ruling out deterministic hidden variable theories in which the value assigned to an observable is independent of how you measure it.
In recent years, contextuality has attracted increasing attention for its role in quantum information processing advantages \cite{Spekkens_2009, Kleinmann_2011, Grudka_2014, Chailloux_2016, Abramsky_2017, Schmid_2018, Duarte_ResourceTheoryContextuality_2018, Ghorai_OptimalQuantumPreparation_2018} and in explaining the power of quantum computation \cite{Galvao_2005, Cormick_2006, Anders_2009, Howard_ContextualitySuppliesMagic_2014, Hoban_2014, Karanjai_2018, Abramsky_2017, Frembs_ContextualityResourceMeasurementbased_2018, Raussendorf_2019, Catani_2019}. For these purposes, it is useful to find new classes of noncontextuality inequalities and to find the tightest possible bounds on them. Noncontextuality inequalities are usually based on the orthogonality properties of sets of quantum states, or, equivalently, they are based on our ability to perfectly distinguish sets of quantum states. A powerful method for deriving bounds on noncontextuality inequalities from the orthogonality graphs of events has been developed by Cabello, Severini and Winter (CSW) \cite{Cabello_NonContextualityPhysical_2010, Cabello_GraphTheoreticApproachQuantum_2014}. A similar method, also exploiting our ability to perfectly distinguish between objects, has been applied to Bell inequalities to provide tighter bounds \cite{RDLTC14}. In this paper, we show that the \emph{anti}distinguishability properties \cite{Leifer_QuantumStateReal_2014} \footnote{Antidistinguishability also goes by the names \emph{PP-incompatibility} \cite{Caves_ConditionsCompatibilityQuantumstate_2002} and \emph{conclusive exclusion of quantum states} \cite{Bandyopadhyay_ConclusiveExclusionQuantum_2014}.} of quantum states can also be used to derive noncontextuality inequalities. Our method reproduces the inequality used in the Yu-Oh 13 ray proof of contextuality \cite{Yu_StateIndependentProofKochenSpecker_2012}, giving more intuition behind its structure and allowing us to propose several generalizations.
In some cases, when we apply both the CSW method and our method to the same set of states, we get a much tighter bound on the noncontextuality inequality. The concept of antidistinguishability was first proposed in \cite{Caves_ConditionsCompatibilityQuantumstate_2002}, and played a key role in the proof of the Pusey, Barrett and Rudolph (PBR) theorem \cite{Pusey_RealityQuantumState_2012}. The aim of the PBR theorem was to address the question of whether the quantum state is a state of reality, akin to a point in phase space for a classical particle (known as the $\psi$-ontic view of quantum states), or a state of knowledge, more akin to a probability distribution over phase space (known as the $\psi$-epistemic view). The $\psi$-epistemic view has a lot of advantages, as many otherwise puzzling phenomena, including the indistinguishability of non-orthogonal quantum states and the no-cloning theorem, are easily explained by the fact that the probability distributions representing non-orthogonal quantum states can overlap in a $\psi$-epistemic model \cite{Spekkens_EvidenceEpistemicView_2007, Leifer_QuantumStateReal_2014, Jennings_2015}. The PBR theorem showed that, within a standard framework for realist models, known as the ontological models framework \cite{Harrigan_EinsteinIncompletenessEpistemic_2010}, only $\psi$-ontic models are possible. However, the PBR theorem is based on additional assumptions beyond the bare ontological models framework, and these assumptions have attracted criticism \cite{Hall_GeneralisationsRecentPuseyBarrettRudolph_2011,Emerson_WholeGreaterSum_2013,Schlosshauer_ImplicationsPuseyBarrettRudolphQuantum_2012}. Subsequently, there was an effort to determine what could be proved about the reality of the quantum state without such additional assumptions. It was shown that $\psi$-epistemic models exist in all finite Hilbert space dimensions \cite{Lewis_2012, Aaronson_PsiEpistemicTheoriesRole_2013}. 
This led to the definition of \emph{maximally} $\psi$-epistemic models \cite{Leifer_MaximallyEpistemicInterpretations_2013,Maroney_HowStatisticalAre_2012,Harrigan_OntologicalModelsInterpretation_2007} \footnote{Non maximally $\psi$-epistemic models were originally defined under the name \emph{deficient} models in \cite{Harrigan_OntologicalModelsInterpretation_2007}.} and the study of overlap bounds for probability distributions in ontological models \cite{Barrett_2014, Leifer_PsEpistemicModelsAre_2014, Branciard_HowPsepistemicModels_2014, Ringbauer_2015, Knee_2017}. In order for the $\psi$-epistemic explanations of quantum phenomena to work, it is not enough that there is just some amount of overlap of probability distributions, but the overlap should be comparable to the degree of indistinguishability of the quantum states. This was ruled out by showing that it would imply that the ontological model is noncontextual \cite{Leifer_MaximallyEpistemicInterpretations_2013, Leifer_QuantumStateReal_2014}, which is ruled out by existing contextuality proofs. Noncontextuality inequalities can then be used to bound the degree of overlap in an ontological model, and one class of overlap bounds is based on doing exactly this with CSW inequalities \cite{Leifer_PsEpistemicModelsAre_2014}. However, another class of overlap inequalities was proposed in the literature based on the antidistinguishability of quantum states \cite{Barrett_2014, Branciard_HowPsepistemicModels_2014, Ringbauer_2015, Knee_2017} and it was not obvious whether these have anything to do with contextuality. Our main result is to re-derive these inequalities as noncontextuality inequalities, which means that all the antidistinguishability overlap bounds in the literature can now be reinterpreted as noncontextuality inequalities. 
We also re-derive and generalize some other noncontextuality inequalities that have appeared in the literature \cite{Yu_StateIndependentProofKochenSpecker_2012, Bengtsson_2012} by showing that they are examples of the antidistinguishability-based construction. The rest of this paper is organized as follows. In \S\ref{ConSce} we review the mathematical framework of \emph{contextuality scenarios} as developed in \cite{Acin_2015}, slightly generalized to allow for both measurements with a fully specified set of outcomes and those with an under-specified set. This is the framework in which we prove our results. In \S\ref{Anti}, we give a definition of antidistinguishability for contextuality scenarios that generalizes the existing definition for quantum states. \S\ref{Ineq} contains our main results. It introduces the notions of \emph{strong and weak pairwise antisets}, which are sets of outcomes in a contextuality scenario such that any pair of them together with another outcome in a specified set is antidistinguishable. Our main result shows that there is a noncontextuality inequality associated with any pairwise antiset. \S\ref{Exa} gives examples of pairwise antisets in quantum theory and their associated noncontextuality inequalities, showing how existing inequalities can be re-derived and generalized in this approach. The proof of our main results is given in \S\ref{Proof} and \S\ref{Conc} concludes with a summary and outlook. \section{Contextuality Scenarios} \label{ConSce} This section reviews a slightly generalized version of the contextuality scenario framework developed in \cite{Acin_2015}. After introducing the basic definitions, we review the concepts of \emph{value functions} in \S\ref{Cont:Value} and \emph{quantum models} in \S\ref{Cont:Quant}. 
These describe the possible noncontextual and quantum realizations of contextuality scenarios respectively. \S\ref{Cont:States} reviews the concept of \emph{states} on a contextuality scenario, which describe the observable probabilities in noncontextual, quantum, and more general models. The aim is to arrive at a general framework for discussing \emph{noncontextuality inequalities}, which are inequalities satisfied by noncontextual states, but not necessarily quantum or more general states. \begin{definition} A \emph{contextuality scenario} $\mathfrak{C}$ is a structure $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ where \begin{itemize} \item $X$ is a set of \emph{outcomes}. \item $\mathcal{M}$ is a set of subsets of $X$ such that if $M,M' \in \mathcal{M}$ then $M'\not\subset M$. An $M \in \mathcal{M}$ is called a \emph{(measurement) context}. \item $\mathcal{N}$ is a set of subsets of $X$ such that if $M \in \mathcal{M}$ then $M \not\in \mathcal{N}$ and if $N,N'\in \mathcal{N}$ then $N' \not\subset N$. An $N \in \mathcal{N}$ is called a \emph{maximal partial (measurement) context}. \end{itemize} Finally, a contextuality scenario is \emph{finite} if $X$ is a finite set. \end{definition} The idea of a contextuality scenario is that you have a system on which you can perform several different measurements. $X$ is the set of all possible measurement outcomes. A context $M\in \mathcal{M}$ is the full set of distinct outcomes that can occur in some possible measurement. Note that the condition that $\mathcal{M}$ contains no sets that are subsets of other sets in $\mathcal{M}$ is not usually imposed in the literature, but is true of all the interesting examples. A maximal partial context $N \in \mathcal{N}$ is a set of outcomes that can occur as the outcome of some possible measurement, but not necessarily the full set. We allow for the set of outcomes of some measurements to be incompletely specified.
For example, a failure to detect the system at all could count as an unspecified outcome. In this respect, our definition of a contextuality scenario is slightly more general than that of \cite{Acin_2015}, which only has $\mathcal{M}$. Note that all the contextuality scenarios we use in this paper are finite, so we will assume this going forward without further comment. A contextuality scenario with no maximal partial contexts is a specific type of hypergraph, and, in general, a contextuality scenario can be seen as a generalization of a hypergraph with two kinds of hyperedges \footnote{Instead of generalizing the concept of a hypergraph, we could simply have considered a colouring process on the hyperedges of such a hypergraph. Assigning different colours to different kinds of hyperedges, we would end up drawing essentially the same graphs as shown in \cref{fig:classical}.}. We can draw diagrams of them by denoting contexts with solid lines and maximal partial contexts with dashed lines, as in the following examples. \begin{example} \label{exa:class} A \emph{classical} contextuality scenario has a finite set $X$ of outcomes, $\mathcal{M} = \{X\}$, and $\mathcal{N} = \emptyset$. A \emph{partial classical} contextuality scenario has a finite set $X$ of outcomes, $\mathcal{M} = \emptyset$, and $\mathcal{N} = \{X\}$. In words, every set of outcomes can, and indeed does, occur together in a single realization of a measurement. These scenarios are depicted in \cref{fig:classical}. \end{example} \begin{figure} \caption{Examples of classical contextuality scenarios with 5 outcomes.} \label{fig:classical} \end{figure} \begin{example} \label{exa:Speck} The \emph{Specker Triangle} \cite{Specker_1960} is the contextuality scenario with $X = \{a,b,c\}$, $\mathcal{M} = \{\{a,b\},\{b,c\},\{c,a\}\}$, and $\mathcal{N} = \emptyset$, as shown in \cref{fig:specker}.
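The defining conditions of a contextuality scenario, and the Specker Triangle in particular, can be encoded directly; a minimal Python sketch (the helper names are ours, not from the text):

```python
def is_antichain(family):
    """No member of the family is a proper subset of another."""
    return not any(s < t for s in family for t in family)


def is_contextuality_scenario(X, M, N):
    """Check the conditions of the definition: M and N are antichains of
    subsets of X, and no context is also a maximal partial context."""
    return (all(m <= X for m in M) and all(n <= X for n in N)
            and is_antichain(M) and is_antichain(N)
            and not any(m in N for m in M))


# The Specker Triangle is a valid scenario with no maximal partial contexts.
X = frozenset('abc')
M = {frozenset('ab'), frozenset('bc'), frozenset('ca')}
assert is_contextuality_scenario(X, M, set())
```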
\end{example} \begin{figure} \caption{The Specker Triangle} \label{fig:specker} \end{figure} \begin{example} \label{exa:anti} The following is an example of an \emph{antidistinguishability} scenario that we will make use of later. It has both contexts and maximal partial contexts. Set $X = \{a_1,a_2,a_3,a^{\perp}_1,a^{\perp}_2,a^{\perp}_3\}$, $\mathcal{M} = \{\{a^{\perp}_1,a^{\perp}_2,a^{\perp}_3\}\}$, and $\mathcal{N} = \{\{a_1,a^{\perp}_1\},\{a_2,a^{\perp}_2\},\{a_3,a^{\perp}_3\}\}$. This is shown in \cref{fig:antidistinguish}. \end{example} \begin{figure} \caption{An antidistinguishability scenario} \label{fig:antidistinguish} \end{figure} \begin{example} \label{exa:qanti} A \emph{quantum} contextuality scenario is constructed as follows. Let $X$ be a set of pure states (unit vectors with vectors differing by a global phase identified) in a Hilbert space $\mathcal{H}$. A subset $M \subseteq X$ is in $\mathcal{M}$ iff $M$ is an orthonormal basis. A subset $N \subseteq X$ is in $\mathcal{N}$ iff the states it contains are pairwise orthogonal, it is not a basis (i.e.\ it is incomplete), and it is not a subset of any other $M \in \mathcal{M}$ or $N \in \mathcal{N}$. As an example, consider the six states \begin{align} \ket{a_1} &= \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \ket{a_2} = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \ket{a_3} = \frac{1}{\sqrt{3}} \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} \label{Eq:ExampleAnti} \\ \ket{a_1^\perp} &= \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \ket{a_2^\perp} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \ket{a_3^\perp} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \end{align} Inspection of the orthogonality relations shows that the quantum contextuality scenario generated by these states is the antidistinguishability scenario of \cref{exa:anti}.
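The claimed orthogonality relations can be confirmed directly; a short standalone Python check of the states in \cref{Eq:ExampleAnti}:

```python
from itertools import combinations
from math import isclose, sqrt

s3, s2 = 1 / sqrt(3), 1 / sqrt(2)
states = {
    'a1': (1, 0, 0), 'a2': (s3, s3, s3), 'a3': (-s3, s3, s3),
    'a1p': (0, 1, 0), 'a2p': (s2, 0, -s2), 'a3p': (s2, 0, s2),
}


def inner(u, v):
    return sum(x * y for x, y in zip(u, v))


# All six states are normalized.
assert all(isclose(inner(v, v), 1.0) for v in states.values())

orthogonal_pairs = {
    frozenset(pair) for pair in combinations(states, 2)
    if isclose(inner(*(states[k] for k in pair)), 0, abs_tol=1e-12)
}

# Exactly the three pairs within the basis {a1p, a2p, a3p} and the three
# pairs {ai, aip} are orthogonal, reproducing the scenario of the previous
# example.
expected = {frozenset(p) for p in
            [('a1', 'a1p'), ('a2', 'a2p'), ('a3', 'a3p'),
             ('a1p', 'a2p'), ('a1p', 'a3p'), ('a2p', 'a3p')]}
assert orthogonal_pairs == expected
```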
\end{example} \subsection{Value Functions} \label{Cont:Value} \begin{definition} A \emph{value function} $v:X \rightarrow \{0,1\}$ on a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ is a function that assigns a value $0$ or $1$ to every outcome such that \begin{itemize} \item For every $M \in \mathcal{M}$, $v(a) = 1$ for exactly one $a\in M$. \item For every $N \in \mathcal{N}$, $v(a) = 1$ for at most one $a \in N$. \end{itemize} The set of all value functions on $\mathfrak{C}$ is denoted $V_{\mathfrak{C}}$. \end{definition} \begin{definition} For a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ and an outcome $a \in X$, an $a$-\emph{definite} value function is a value function such that $v(a) = 1$. The set of $a$-definite value functions is denoted $V_a$. \end{definition} The idea of a value function is that it is a deterministic assignment of outcomes to every measurement. For every context, one of the outcomes must occur because the context contains the full set of possible outcomes of that measurement, so the chosen outcome is assigned the value $1$. For partial contexts, one of the unspecified outcomes may be the actual outcome of the measurement, so we only demand that at most one outcome is assigned the value $1$. Value functions are noncontextual because they are defined directly on $X$. A given $a\in X$ may occur in more than one (maximal partial) context, as in the Specker triangle, but the value assigned to the outcome is not allowed to depend on which context is being measured. Note that not all contextuality scenarios have value functions. For example, in the Specker triangle, we would have to assign value $1$ to exactly one of each pair $\{a,b\}$, $\{b,c\}$ and $\{a,c\}$. By symmetry, we can start by assigning $1$ to any of the three outcomes, so let's choose $a$. Then we must assign $0$ to $b$ because of the pair $\{a,b\}$ and $0$ to $c$ because of the pair $\{a,c\}$. 
But then neither $b$ nor $c$ is assigned the value $1$, which contradicts the requirement that exactly one of the pair $\{b,c\}$ is assigned the value $1$. There are also quantum contextuality scenarios that have no value functions. This is the content of the Bell-Kochen-Specker theorem \cite{Bell_ProblemHiddenVariables_1966, Kochen_ProblemHiddenVariables_1967}. \subsection{Quantum Models} \label{Cont:Quant} \begin{definition} A \emph{quantum model} of a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ consists of \begin{itemize} \item A choice of Hilbert space $\mathcal{H}$. \item For every $a\in X$, a projection operator $P_a$ onto a closed subspace of $\mathcal{H}$ such that: \begin{itemize} \item For every $M \in \mathcal{M}$, $\sum_{a \in M} P_a=I$, where $I$ is the identity operator. \item For every $N \in \mathcal{N}$, $a,b\in N$ and $a\neq b$, $P_aP_b=0$. \end{itemize} \end{itemize} \end{definition} A quantum model represents every context by a projective quantum measurement and every maximal partial context by a subset of the projectors in such a measurement. Not all contextuality scenarios have a quantum model. The Specker triangle is again an example. The context $\{a,b\}$ implies that $P_a + P_b = I$, so $P_b = I - P_a$, and $\{a,c\}$ that $P_c = I-P_a$. Then, $\{b,c\}$ implies that $P_b+P_c = I$, and substituting the previous two equations into this gives $P_a = I/2$, which is not a projection operator. Clearly, if we start with a quantum contextuality scenario then it has a quantum model, i.e.\ the projectors onto the states that define the model, but it also has other quantum models. For example, applying a unitary transformation to all the states preserves their orthogonality structure, so it gives us another quantum model. The Bell-Kochen-Specker theorem implies that there are contextuality scenarios that have a quantum model, but no value functions. However, whenever there is a value function there is a quantum model. 
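The exhaustive case analysis for the Specker triangle can also be checked by brute-force enumeration; a small Python sketch (the enumerator is ours, not from the text):

```python
from itertools import product


def value_functions(X, M, N):
    """Enumerate all 0/1 assignments on X with exactly one 1 in each
    context and at most one 1 in each maximal partial context."""
    found = []
    for bits in product((0, 1), repeat=len(X)):
        v = dict(zip(X, bits))
        if (all(sum(v[x] for x in m) == 1 for m in M)
                and all(sum(v[x] for x in n) <= 1 for n in N)):
            found.append(v)
    return found


# The Specker triangle admits no value function...
assert value_functions(('a', 'b', 'c'),
                       [('a', 'b'), ('b', 'c'), ('c', 'a')], []) == []

# ...while a classical scenario on three outcomes admits exactly three.
assert len(value_functions(('a', 'b', 'c'), [('a', 'b', 'c')], [])) == 3
```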
\begin{proposition} \label{prop:classinquant} If a contextuality scenario $\mathfrak{C}=(X,\mathcal{M},\mathcal{N})$ has a value function then it also has a quantum model. \end{proposition} \begin{proof} Let $\mathcal{H} = \mathcal{H}_{V_{\mathfrak{C}}}$, i.e.\ the Hilbert space with orthonormal basis vectors labeled by the elements of $V_{\mathfrak{C}}$. For every $a\in X$, define the projector \[P_a = \sum_{v\in V_a} \ket{v}\bra{v}. \] This defines a quantum model. To see this, let $M \in \mathcal{M}$. Notice that the sets $V_a$ for $a \in M$ are disjoint because each value function assigns value $1$ to exactly one element of $M$. They also cover the whole set $V_{\mathfrak{C}}$ because every value function assigns value $1$ to some element of $M$. Thus, \begin{align*} \sum_{a \in M} P_a & = \sum_{a\in M} \sum_{v\in V_a} \ket{v}\bra{v} \\ & = \sum_{v\in V_{\mathfrak{C}}} \ket{v}\bra{v} = I. \end{align*} Now let $N \in \mathcal{N}$ and consider $a,b\in N$, $a\neq b$. We have \[ P_aP_b = \sum_{v\in V_a}\sum_{w \in V_b} \ket{v}\braket{v | w}\bra{w} = 0, \] because $V_a$ and $V_b$ are disjoint. \end{proof} \subsection{States} \label{Cont:States} \begin{definition} A \emph{state} $\omega: X \rightarrow [0,1]$ on a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ is a function that assigns a probability to every outcome such that \begin{itemize} \item For all $M \in \mathcal{M}$, \[\sum_{a\in M} \omega(a) = 1.\] \item For all $N \in \mathcal{N}$, \[\sum_{a\in N} \omega(a) \leq 1.\] \end{itemize} The set of states on $\mathfrak{C}$ is denoted $S_{\mathfrak{C}}$. \end{definition} A state is an assignment of probabilities to outcomes that is compatible with every (maximal partial) context having a well-defined probability distribution. For the maximal partial contexts, we only demand that the probabilities add up to something less than or equal to $1$ because it is possible to put probability weight on the unspecified outcomes. 
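To see the definition in action, the state conditions for the Specker triangle (with each pair taken as a context, as in the discussion of value functions above) form a small linear system that can be solved directly; a minimal numerical sketch using numpy:

```python
import numpy as np

# State conditions for the Specker triangle, taking each pair
# {a,b}, {b,c}, {a,c} as a context: the probabilities assigned
# to each context must sum to exactly 1.
A = np.array([[1.0, 1.0, 0.0],   # omega(a) + omega(b) = 1
              [0.0, 1.0, 1.0],   # omega(b) + omega(c) = 1
              [1.0, 0.0, 1.0]])  # omega(a) + omega(c) = 1
omega = np.linalg.solve(A, np.ones(3))
print(omega)  # [0.5 0.5 0.5]
```

The system is invertible and its unique solution lies in $[0,1]^3$, so this scenario admits exactly one state.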
For a classical scenario, the states are exactly the probability distributions on $X$ and for a partial classical scenario, they are the sub-normalized probability distributions on $X$. The Specker triangle has exactly one state: $\omega(a) = \omega(b) = \omega(c) = \frac{1}{2}$, which can be obtained by solving the equations defining the state space. There are also scenarios with no states, the simplest being $X = \{a_1,a_2,a_3,b_1,b_2,b_3\}$, $\mathcal{M} = \{\{a_1,a_2,a_3\},\{b_1,b_2,b_3\},\{a_1,b_1\},\{a_2,b_2\},\{a_3,b_3\}\}$ and $\mathcal{N} = \emptyset$. The first two contexts require $\omega(a_1) + \omega(a_2) + \omega(a_3) = 1$ and $\omega(b_1) + \omega(b_2) + \omega(b_3) = 1$, so that \[\sum_{j=1}^3 \left [ \omega(a_j) + \omega(b_j) \right ] = 2.\] However, the last three contexts require $\omega(a_j) + \omega(b_j) = 1$ for $j=1,2,3$, and hence \[\sum_{j=1}^3 \left [ \omega(a_j) + \omega(b_j) \right ] = 3,\] which is a contradiction. We can represent a state by a vector in the space $\mathbb{R}^X$ where, for each $a \in X$, $\omega(a)$ is the component of the vector in the direction corresponding to $a$. In this representation, the state space is a convex polytope because it is defined by a finite set of linear equations and inequalities and every component is bounded between $0$ and $1$. \begin{definition} A \emph{Kochen-Specker (KS) noncontextual} state on a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ is a state $\omega$ such that \[\omega(a) = \sum_{v\in V_{\mathfrak{C}}} p_v v(a),\] where $p_v$ is a probability distribution on $V_{\mathfrak{C}}$, i.e.\ $0 \leq p_v \leq 1$ and $\sum_{v \in V_{\mathfrak{C}}} p_v = 1$. The set of KS noncontextual states on $\mathfrak{C}$ is denoted $C_{\mathfrak{C}}$. A state $\omega$ that is not contained in $C_{\mathfrak{C}}$ is called a \emph{contextual} state.
\end{definition} Viewed as a subset of $\mathbb{R}^X$, $C_{\mathfrak{C}}$ is also a convex polytope because there are finitely many value functions, which define its vertices. If we observe probabilities in an experiment that agree with a KS noncontextual state then we can imagine that there is always a definite noncontextual outcome for each measurement, and the observation of probabilities that differ from $0$ or $1$ is just due to our ignorance of which value function holds in each particular run of the experiment. On the other hand, contextual states cannot be understood in this way. \begin{definition} A \emph{quantum state} on a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ is a state $\omega$ such that there exists a quantum model and a density operator $\rho$ on $\mathcal{H}$ (the Hilbert space of the model) for which \[\omega(a) = \Tr(P_a \rho).\] The set of quantum states on $\mathfrak{C}$ is denoted $Q_{\mathfrak{C}}$. \end{definition} The set of quantum states is the set of observable probabilities for a contextuality scenario that is realized as a quantum experiment. If we find a contextual quantum state then this is a proof that quantum mechanics is contextual. The set of quantum states is a compact convex set, but not necessarily a polytope \cite{Brunner13}. \begin{definition} A \emph{state independent noncontextuality inequality} for a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ is a linear inequality of the form \begin{equation} \label{eq:ineq} \sum_{a\in X} c_a \omega(a) \leq \gamma_c, \end{equation} where $c_a,\gamma_c\in\mathbb{R}$, which is satisfied for all $\omega \in C_{\mathfrak{C}}$. A \emph{state dependent noncontextuality inequality} is an inequality of the form of \cref{eq:ineq} that is satisfied for all $\omega \in C_{\mathfrak{C}}$ that also satisfy some additional set of constraints.
\end{definition} If, having derived a state independent noncontextuality inequality, we find a state $\omega$ such that $\sum_{a\in X}c_a \omega(a) > \gamma_c$, then this is a proof that $\omega$ is contextual. Typical additional constraints imposed in a state dependent inequality include $\omega(a) = 0$ for some specified outcome. In this case, if we find a state such that $\sum_{a\in X}c_a \omega(a) > \gamma_c$ that also satisfies the additional constraints, then this is a proof that $\omega$ is contextual. Note that the inequalities we derive in this paper have $c_a \in \{0,1\}$ for all $a \in X$, but more general inequalities are possible. The terminology state independent/dependent \emph{inequality} that we have introduced here should be contrasted with the notions of state independent/dependent \emph{proofs} of contextuality, which are common in the literature \cite{Yu_StateIndependentProofKochenSpecker_2012}. In a state independent proof, once a quantum model is fixed for a contextuality scenario, we find that $\sum_{a\in X} c_a \omega(a)$ is completely independent of the quantum state $\omega$ chosen, so all quantum states are contextual in that model. In a state dependent proof, the value varies with $\omega$, so whether the inequality is violated, and by how much it is violated, depends on the state chosen. A state independent inequality can be the basis of either a state independent or dependent proof, depending on the details of the quantum model chosen, but a state dependent inequality necessarily leads to a state dependent proof, since the inequality does not hold for all choices of state. \begin{example}[Klyachko Inequality \cite{Klyachko_2002, Klyachko_2008}] Consider the Klyachko contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ with $X = \{0,1,2,3,4\}$, $\mathcal{M} = \emptyset$ and $\mathcal{N} = \{\{0,1\},\{1,2\},\{2,3\},\{3,4\},\{4,0\}\}$ as depicted in \cref{fig:Klayatchko}.
Then, \[\sum_{a\in X}\omega(a) \leq 2,\] is a state independent noncontextuality inequality. \begin{figure} \caption{The Klyachko contextuality scenario} \label{fig:Klayatchko} \end{figure} To see this, note that, for a KS noncontextual state of the form $\omega(a) = \sum_{v\in V_{\mathfrak{C}}} p_v v(a)$, we have \begin{align*} \sum_{a \in X} \omega(a) & = \sum_{a\in X}\sum_{v\in V_{\mathfrak{C}}} p_v v(a) \\ & = \sum_{v\in V_{\mathfrak{C}}} p_v \sum_{a\in X} v(a) \\ & \leq \max_{v\in V_{\mathfrak{C}}} \left [ \sum_{a \in X} v(a) \right ], \end{align*} where the last line follows from convexity. It is easy to see that, for any $v \in V_{\mathfrak{C}}$, $v(0) + v(1) + v(2) + v(3) + v(4) \leq 2$. By symmetry, we can start by assigning $v(0) = 1$, which implies that $v(1) = v(4) = 0$. Then we could assign $v(2) = 1$, which requires $v(3) = 0$, or $v(3) = 1$, which requires $v(2) = 0$. Either way, we get an upper bound of $2$ for the sum. \end{example} \begin{proposition} For any contextuality scenario $\mathfrak{C}$, $C_\mathfrak{C} \subseteq Q_{\mathfrak{C}} \subseteq S_{\mathfrak{C}}$. There exist contextuality scenarios in which both inclusions are strict. \end{proposition} \begin{proof} The inclusion of $C_{\mathfrak{C}}$ and $Q_{\mathfrak{C}}$ in $S_{\mathfrak{C}}$ is trivial, since both are defined as subsets of states, so we only have to prove $C_\mathfrak{C} \subseteq Q_{\mathfrak{C}}$. \Cref{prop:classinquant} shows how to construct a quantum model from the set of value functions. If we have a KS noncontextual state of the form $\omega(a) = \sum_{v\in V_{\mathfrak{C}}}p_v v(a)$ then we can construct a density operator $\rho = \sum_{v\in V_{\mathfrak{C}}} p_v \ket{v}\bra{v}$ on the Hilbert space of the corresponding model. It is straightforward to show that this yields the same probabilities.
For the strictness, consider a noncontextuality inequality $\sum_{a\in X} c_a \omega(a) \leq \gamma_c$ and let $\gamma_q$ be the largest value of $\sum_{a\in X} c_a \omega(a)$ obtainable from a quantum state. If $\gamma_q > \gamma_c$ and there exists a state with $\sum_{a\in X} c_a \omega(a) > \gamma_q$ then the inclusions are strict. The Klyachko scenario and inequality are an example of this. It can be shown that $\gamma_q = \sqrt{5} > 2 = \gamma_c$ for this scenario \cite{Cabello_NonContextualityPhysical_2010, Liang_2011, Cabello_GraphTheoreticApproachQuantum_2014}. However, $\omega(0) = \omega(1) = \omega(2) = \omega(3) = \omega(4) = 1/2$ is a valid state and this has $\sum_{a\in X}\omega(a) = 5/2 > \sqrt{5}$. \end{proof} \section{Antidistinguishability} \label{Anti} In this section, we review the concept of \emph{antidistinguishability}, which was originally introduced under the name \emph{PP-incompatibility} in \cite{Caves_ConditionsCompatibilityQuantumstate_2002} and re-branded as antidistinguishability in \cite{Leifer_QuantumStateReal_2014}. Although antidistinguishability is usually discussed for sets of quantum states, here we define it for sets of outcomes in a contextuality scenario. In a quantum contextuality scenario, the outcomes, which are elements of orthonormal bases, can also be regarded as pure quantum states. Therefore, in a quantum contextuality scenario, antidistinguishability of outcomes and of pure quantum states amounts to the same thing. In a general contextuality scenario, where there need not be a self-duality between states and measurement outcomes, this would not be the case. Although the concept of antidistinguishability of states is more natural, antidistinguishability of outcomes is what we need to prove noncontextuality inequalities. 
We start this section by giving our general definition, then explain how it reduces to the usual definition for quantum contextuality scenarios, and then state a useful theorem from \cite{Caves_ConditionsCompatibilityQuantumstate_2002} that characterizes antidistinguishability for sets of three pure quantum states. We will use this to establish examples of antidistinguishability-based noncontextuality inequalities in \S\ref{Exa}. \begin{definition} \label{def:Anti} In a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$, a set of outcomes $\{a_1,a_2,\cdots,a_n\} \subseteq X$ is \emph{antidistinguishable} if there exist outcomes $a_1^{\perp},a_2^{\perp},\cdots,a_n^{\perp} \in X$ such that \begin{itemize} \item There exists a context $M \in \mathcal{M}$ with $\{a_1^{\perp},a_2^{\perp},\cdots,a_n^{\perp}\} \subseteq M$. \item For each $j \in [n]$, there exists a context or a maximal partial context $N_j$ such that $\{a_j,a_j^{\perp}\} \subseteq N_j$. \item For each outcome $a \in M \backslash \{a_1^{\perp},a_2^{\perp},\cdots,a_n^{\perp}\}$ and each $a_j$, there exists a context or maximal partial context $N$ such that $\{a,a_j\}\subseteq N$. \end{itemize} \end{definition} \Cref{exa:anti} is a simple example of a set of three antidistinguishable outcomes. To understand this better, it is useful to look at how \cref{def:Anti} applies to the quantum case in more detail. \begin{example} A set $\{\ket{a_1},\cdots,\ket{a_n}\}$, $n \leq d$ of states in $\mathbb{C}^d$ is antidistinguishable if there exists an orthonormal basis $\{ \ket{a_{1}^{\perp}},\cdots,\ket{a_{n}^{\perp}},\cdots,\ket{a_d^{\perp}} \}$ such that \begin{equation} \braket{a_j^{\perp} | a_{j}} = 0, \,\, \forall \,\, j \in [n] \label{Eq:DefAnti1} \end{equation} and \begin{equation} \braket{a_k^{\perp} | a_{j}} = 0, \,\, \forall \,\, j \in [n], k\in [n+1,d].
\label{Eq:DefAnti2} \end{equation} \label{Def:AntiDist} \end{example} The idea of antidistinguishability for states is that if one of the states $\ket{a_1},\cdots,\ket{a_n}$ is prepared and you do not know which, then there exists a measurement that allows you to definitively rule out one of the states. It should be contrasted with distinguishability, in which there exists a measurement that allows you to tell exactly which state was prepared. Antidistinguishability is weaker than distinguishability. \Cref{Eq:DefAnti2} states that the vectors $\ket{a_j}$ are in the subspace spanned by $\ket{a_k^{\perp}}$ for $k\in [n]$. This rules out the trivial case where we choose all these $\ket{a_k^{\perp}}$ to be orthogonal to every $\ket{a_j}$ for every $j$. This is also the reason for the third clause in \cref{def:Anti}. The following theorem from \cite{Caves_ConditionsCompatibilityQuantumstate_2002} provides a useful characterization of antidistinguishability for sets of three pure states, as it avoids the need to construct the antidistinguishing measurement explicitly. \begin{thm} Consider a set $\mathcal{A} = \{\ket{a_1},\ket{a_2},\ket{a_3}\}$ of three states and let $x_1 = \vert \braket{a_2 | a_3} \vert^2$, $x_2 = \vert \braket{a_1 | a_3} \vert^2$, $x_3 = \vert \braket{a_1 | a_2} \vert^2$. Then, $\mathcal{A}$ is antidistinguishable iff \begin{align} x_1 + x_2 + x_3 & < 1 \label{Eq:Anti1} \\ (x_1 + x_2 + x_3 - 1)^2 & \geq 4 x_1x_2x_3. \label{Eq:Anti2} \end{align} \label{Thm:Anti3} \end{thm} The following corollary, as stated in \cite{Havlicek_2019}, gives a simpler sufficient condition for antidistinguishability that is easier to check. It follows by substitution into \cref{Eq:Anti1} and \cref{Eq:Anti2}. \begin{corollary} \label{Cor:Anti} Consider a set $\mathcal{A} = \{\ket{a_1},\ket{a_2},\ket{a_3}\}$ of three states and let $x_1 = \vert \braket{a_2 | a_3} \vert^2$, $x_2 = \vert \braket{a_1 | a_3} \vert^2$, $x_3 = \vert \braket{a_1 | a_2} \vert^2$.
Then, $\mathcal{A}$ is antidistinguishable if \begin{equation} x_1,x_2,x_3 \leq \frac{1}{4}. \end{equation} \end{corollary} Additional criteria for antidistinguishability have been proved for more general cases \cite{Bandyopadhyay_ConclusiveExclusionQuantum_2014,Heinosaari_AntidistinguishabilityPureQuantum_2018}, but we shall not need them here. \section{Noncontextuality Inequalities from Antidistinguishability} \label{Ineq} This section describes our main results. We can use the concept of antidistinguishability to derive noncontextuality inequalities based on \emph{pairwise antisets}. These come in two versions---strong and weak---which are used to derive state independent and state dependent inequalities respectively. The notion of a weak pairwise antiset, applied to states rather than outcomes and not explicitly named, was used in \cite{Barrett_2014} to derive overlap bounds on the reality of the quantum state. Other examples of this construction were given in \cite{Branciard_HowPsepistemicModels_2014, Knee_2017}. In light of our results, these bounds can now be reinterpreted as state dependent noncontextuality inequalities. The notion of a strong pairwise antiset is novel to this work, and allows us to show that some of these inequalities are actually state independent. After defining pairwise antisets and stating our main results, \S\ref{Exa} gives examples of our construction for quantum contextuality scenarios. The proof of our main results is given in \S\ref{Proof}. \begin{definition}\label{Def:StrongPairwiseAS} A \emph{strong pairwise antiset} $W$ in a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ is a set of outcomes for which there exists a context $M \in \mathcal{M}$ such that, for every pair of distinct $a,b \in W$ and every $c \in M$, the triple $\{a,b,c\}$ is antidistinguishable. The context $M$ is called a \emph{principal context} for the pairwise antiset $W$.
\end{definition} \begin{definition} A \emph{weak pairwise antiset} $W$ in a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ is a set of outcomes for which there exists another outcome $c \in X$ such that, for every pair of distinct $a,b \in W$, the triple $\{a,b,c\}$ is antidistinguishable. The outcome $c$ is called a \emph{principal outcome} for the pairwise antiset $W$. \end{definition} We are now in a position to state our main results. \begin{restatable}{thm}{mainresult} \label{Thm:MainResult} Let $W$ be a pairwise antiset in a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$. If $W$ is strong then any state $\omega \in C_{\mathfrak{C}}$ satisfies \begin{equation} \label{eq:mainresult} \sum_{a \in W} \omega(a) \leq 1. \end{equation} If $W$ is weak then any $\omega \in C_{\mathfrak{C}}$ that also satisfies $\omega(c) = 1$ for a principal outcome $c$ satisfies \cref{eq:mainresult}. \end{restatable} \section{Examples} \label{Exa} Before proving \cref{Thm:MainResult}, here are some interesting examples of pairwise antisets that occur in quantum contextuality scenarios and the noncontextuality inequalities that arise from them. \subsection{Strong Pairwise Antisets} In this section, we give examples of strong pairwise antisets and state independent inequalities. \begin{example}[The Yu-Oh inequality] As a first example, we re-derive a noncontextuality inequality first given in \cite{Yu_StateIndependentProofKochenSpecker_2012} using \cref{Thm:MainResult}. Consider the following four vectors in $\mathbb{C}^3$ \begin{align} \ket{a_1}= \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \,\, \ket{a_2}= \frac{1}{\sqrt{3}}\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}, \nonumber \\ \ket{a_3}= \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \,\, \ket{a_4}= \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}.
\label{Eq:YuOhRays} \end{align} These form a strong pairwise antiset with principal basis \begin{align} \ket{c_1}= \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \,\, \ket{c_2}= \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \,\, \ket{c_3}= \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. \label{Eq:YuOhBasis} \end{align} The triple $\{\ket{c_1},\ket{a_1},\ket{a_2}\}$ was shown to be antidistinguishable in \cref{exa:qanti}. The other triples $\{\ket{c_j},\ket{a_k},\ket{a_m}\}$ for $k\neq m$ are antidistinguishable because they have the same inner products so they satisfy the conditions of \cref{Thm:Anti3}. \Cref{Thm:MainResult} thus implies the noncontextuality inequality \begin{equation} \label{Eq:YuOhIneq} \sum_{j=1}^4 \omega(a_j) \leq 1. \end{equation} However, the four states $\ket{a_j}$ satisfy \[\sum_{j=1}^4 \ket{a_j}\bra{a_j} = \frac{4}{3} I,\] where $I$ is the identity operator. This implies that for any quantum state $\omega$, the quantum predictions are \begin{equation} \sum_{j=1}^4 \omega(a_j) = \frac{4}{3} > 1. \end{equation} \end{example} In \cite{Yu_StateIndependentProofKochenSpecker_2012}, the inequality of \cref{Eq:YuOhIneq} was derived by applying an exhaustive search over noncontextual assignments to the orthogonality graph of $13$ rays in $\mathbb{C}^3$ \footnote{See \cite{Cabello_ProposedExperimentsQutrit_2012} for more details.}. Here we only used $7$ rays, but the other rays used in \cite{Yu_StateIndependentProofKochenSpecker_2012} are just the elements of the orthonormal bases that are required to antidistinguish the triples used in our argument. Re-deriving the inequality using \cref{Thm:MainResult} shows that it was based on antidistinguishability all along, and this allows us to easily generalize the example. \begin{example}[Hadamard States] The Yu-Oh construction can be generalized as follows. 
Consider the following vectors in $\mathbb{C}^d$: \begin{equation} \ket{a_{\bm{x}}}= \frac{1}{\sqrt{d}}\begin{pmatrix} (-1)^{x_1} \\ (-1)^{x_2} \\ (-1)^{x_3} \\ \vdots \\ (-1)^{x_d} \end{pmatrix}, \end{equation} where $\bm{x}=(x_1,\cdots,x_d)$ is a binary vector in $\{0,1\}^{d}$. This means that, ignoring normalization for the moment, the components of $\ket{a_{\bm{x}}}$ are all either $+1$ or $-1$ and as we run through the possible vectors $\bm{x}$ we get all possible combinations of $\pm 1$ components. There are $2^d$ such vectors. These vectors are called \emph{Hadamard states} because they can be thought of as the possible columns of Hadamard matrices. In addition, let $\{\ket{0},\ket{1},\cdots,\ket{d-1}\}$ be the standard orthonormal basis for $\mathbb{C}^d$, which we will use as the principal basis (in the sense of \cref{Def:StrongPairwiseAS}). Now, obviously, not all triples $\{\ket{j},\ket{a_{\bm{x}}},\ket{a_{\bm{x}'}}\}$ are antidistinguishable because some pairs $\ket{a_{\bm{x}}},\ket{a_{\bm{x'}}}$ only differ by a phase, i.e.\ $\ket{a_{\bm{x}'}} = -\ket{a_{\bm{x}}}$. In this case, $\left | \braket{a_{\bm{x}} | a_{\bm{x}'}} \right |^2 = 1$ and so \cref{Eq:Anti1} of \cref{Thm:Anti3} is not satisfied. We can eliminate such cases by only considering binary vectors $\bm{x}$ that begin with a $0$. Denote this set $B^d_0$ and the set of binary strings that begin with a $1$ by $B^d_1$. Both sets contain $2^{d-1}$ vectors. Restricting to $B^d_0$, the triples $\{\ket{j},\ket{a_{\bm{x}}},\ket{a_{\bm{x'}}}\}$ satisfy the conditions of \cref{Thm:Anti3} for $\bm{x} \neq \bm{x}'$ and so \cref{Thm:MainResult} implies that noncontextual states satisfy \begin{equation} \label{Eq:HadamardIneq1} \sum_{\bm{x} \in B_0^d} \omega(a_{\bm{x}}) \leq 1. \end{equation} Since the vectors in $B_1^d$ represent the same set of rays, we can run the same argument and obtain \begin{equation} \label{Eq:HadamardIneq2} \sum_{\bm{x} \in B_1^d} \omega(a_{\bm{x}}) \leq 1.
\end{equation} Adding the two inequalities gives \begin{equation} \label{Eq:HadamardIneq3} \sum_{\bm{x} \in \{0,1\}^d} \omega(a_{\bm{x}}) \leq 2. \end{equation} Although it is not necessary to add the inequalities like this, it is a bit cleaner to work with the full set of vectors of size $2^d$ rather than two sets of size $2^{d-1}$. For the quantum probabilities we note that \[\left ( \sum_{\bm{x} \in \{0,1\}^d} \ket{a_{\bm{x}}}\bra{a_{\bm{x}}} \right )_{jk} = \frac{1}{d} \sum_{\bm{x} \in \{0,1\}^d} (-1)^{x_j + x_k}.\] For $j=k$, each term in the sum is $+1$, so the diagonal components are all $2^d/d$. For $j\neq k$, the off-diagonal components are all $0$ because there are as many vectors in which $x_j = x_k$ as there are in which $x_j \neq x_k$ so there are an equal number of $+1$'s and $-1$'s in the sum. Thus, we have \[\sum_{\bm{x} \in \{0,1\}^d} \ket{a_{\bm{x}}}\bra{a_{\bm{x}}} = \frac{2^d}{d} I,\] so the probabilities for any quantum state $\omega$ are \begin{equation} \sum_{\bm{x}\in\{0,1\}^d} \omega(a_{\bm{x}}) = \frac{2^d}{d}. \end{equation} This is larger than $2$ whenever $d \geq 3$, which yields another state independent contextuality proof. \end{example} Hadamard states, combined with the Frankl-R\"{o}dl theorem \cite{Frankl_1987}, have previously been used to prove noncontextuality inequalities and to bound quantum information protocols \cite{Buhrman_1998, Brassard_1999, Mancinska_2013}. From a modern perspective, this amounts to considering the orthogonality properties of Hadamard states instead of their antidistinguishability, and applying the CSW formalism \cite{Cabello_NonContextualityPhysical_2010,Cabello_GraphTheoreticApproachQuantum_2014}. From this, we find that there exists an $\epsilon > 0$ such that \begin{equation} \label{Eq:HadamardIneq4} \sum_{\bm{x} \in \{0,1\}^d} \omega(a_{\bm{x}}) \leq (2 - \epsilon)^d, \end{equation} for every $\omega \in C_{\mathfrak{C}}$.
While this also proves contextuality for sufficiently large $d$, the bound is a lot larger than that of \cref{Eq:HadamardIneq3}, which shows the benefit of considering antidistinguishability. In \cite{Leifer_PsEpistemicModelsAre_2014}, one of the authors of the present paper used the noncontextuality inequality of \cref{Eq:HadamardIneq4} to derive an overlap bound constraining $\psi$-epistemic models. It was subsequently pointed out by Maroney \cite{Maroney_2014} and Branciard \cite{Branciard_HowPsepistemicModels_2014} that the overlap bound could be tightened along the lines of \cref{Eq:HadamardIneq3} using antidistinguishability. The innovation here is to recognize that \cref{Eq:HadamardIneq3} is also a noncontextuality inequality. The next example was also first proposed as an overlap bound in \cite{Barrett_2014}, which we can now recognize as a noncontextuality inequality. \begin{example}[Mutually Unbiased Bases (MUBs)] Two orthonormal bases $\{\ket{e_j}\}_{j=1}^d$ and $\{\ket{f_j}\}_{j=1}^d$ in $\mathbb{C}^d$ are \emph{mutually unbiased} if $\left | \braket{e_j | f_k} \right |^2 = 1/d$ for all $j$ and $k$. When $d$ is a prime power, $d+1$ mutually unbiased bases are known to exist \cite{Bengtsson_2007}. Let $\{\ket{a_{jk}}\}$ be the set of all vectors that appear in one of these bases, where $j$ runs over the choice of basis from $1$ to $d+1$ and $k$ runs over the vectors within a basis from $1$ to $d$. We remove one basis, say $\{\ket{a_{1k}}\}_{k=1}^d$, to serve as our principal basis, so there are $d^2$ vectors left in the set. We have $\left | \braket{a_{jk}|a_{j'k'}} \right |^2 = \delta_{jj'}\delta_{kk'} + (1-\delta_{jj'})\frac{1}{d}$ and, for $d\geq 4$, \cref{Cor:Anti} implies that $\{\ket{a_{1k}},\ket{a_{j'k'}},\ket{a_{j''k''}}\}$ is antidistinguishable whenever $j'k'$ is distinct from $j''k''$ and $j',j'' \neq 1$.
Thus, we have a strong pairwise antiset so \cref{Thm:MainResult} implies that \begin{equation} \left [ \sum_{j=1}^{d+1} \sum_{k=1}^d \omega(a_{jk}) \right ] - \sum_{k=1}^d \omega(a_{1k}) \leq 1, \end{equation} for any state $\omega \in C_{\mathfrak{C}}$. In fact, since $\{a_{1k}\}_{k=1}^d$ is a context, we have $\sum_{k=1}^d \omega(a_{1k}) = 1$ for any state $\omega$, so we have \begin{equation} \label{Eq:MUBIneq} \sum_{j=1}^{d+1} \sum_{k=1}^d \omega(a_{jk}) \leq 2. \end{equation} Since $\{\ket{a_{jk}}\}_{k=1}^d$ is an orthonormal basis, we have \[\sum_{j=1}^{d+1} \sum_{k=1}^d \ket{a_{jk}}\bra{a_{jk}} = (d+1) I,\] so the quantum probabilities are \begin{equation} \sum_{j=1}^{d+1} \sum_{k=1}^d \omega(a_{jk}) = d + 1, \end{equation} for any quantum state $\omega \in Q_{\mathfrak{C}}$. This violates \cref{Eq:MUBIneq} for $d \geq 3$, but recall that the antidistinguishability conditions only hold for $d \geq 4$, so this is a contextuality proof for prime power $d \geq 4$. \end{example} \subsection{Weak Pairwise Antisets} In this section, we give examples of state dependent noncontextuality inequalities arising from weak pairwise antisets. The following simple example is due to Owen Maroney \cite{Maroney_2014}. \begin{example}[Maroney States] Consider the following vectors in $\mathbb{C}^d$ \begin{equation} \ket{a_j} = \frac{1}{\sqrt{3}} \ket{0} + \sqrt{\frac{2}{3}} \ket{j}, \end{equation} where $j$ runs from $1$ to $d-1$ and we denote the standard orthonormal basis vectors as $\ket{0},\ket{1},\cdots,\ket{d-1}$. We also set $\ket{c} = \ket{0}$. Using \cref{Thm:Anti3}, we can easily check that $\{\ket{c},\ket{a_j},\ket{a_k}\}$ is antidistinguishable for $j \neq k$, so we have a weak pairwise antiset $W = \{\ket{a_j}\}_{j=1}^{d-1}$ and principal outcome $\ket{c}$. \Cref{Thm:MainResult} then gives \begin{equation} \sum_{j=1}^{d-1} \omega(a_j) \leq 1, \end{equation} for any noncontextual state $\omega$ such that $\omega(c)=1$. 
The quantum state $\omega$ corresponding to the vector $\ket{c} = \ket{0}$ obviously satisfies $\omega (c) = 1$ and it has $\omega(a_j) = \left | \braket{a_j|c}\right |^2 = 1/3$ for all $j$ so we get \begin{equation} \sum_{j=1}^{d-1} \omega(a_j) = \frac{d-1}{3}. \end{equation} This proves that $\omega$ is contextual in this scenario for $d \geq 5$. \end{example} \begin{example}[Symmetric Informationally Complete (SIC) POVMs] A SIC-POVM, or SIC for short, is a set of positive semidefinite operators $\{E_j\}_{j=1}^{d^2}$ on $\mathbb{C}^d$ that satisfy \begin{equation} \label{eq:SICnorm} \sum_{j=1}^{d^2} E_j = I, \end{equation} and are of the form $E_j = \frac{1}{d} \ket{a_j}\bra{a_j}$ where \begin{equation} \label{eq:SICdef} \left | \braket{a_j|a_k} \right |^2 = \frac{1}{d+1}, \end{equation} for $j \neq k$. SICs are conjectured to exist in all finite Hilbert space dimensions. They have been shown to exist in all dimensions up to $d=151$ and in several larger dimensions up to $d=844$ \cite{Fuchs_2017}. For a SIC, let $\ket{c} = \ket{a_1}$ and $W = \{\ket{a_j}\}_{j=2}^{d^2}$. \Cref{Cor:Anti} implies that, for $d \geq 3$, the triples $\{\ket{c},\ket{a_j},\ket{a_k}\}$ are all antidistinguishable for $j\neq k$ and $j,k \neq 1$ so we have a weak pairwise antiset. Thus, \cref{Thm:MainResult} implies that \begin{equation} \left [ \sum_{j=1}^{d^2} \omega(a_j) \right ] - \omega(a_1) \leq 1, \end{equation} for any noncontextual state $\omega$ such that $\omega(c)=1$. Since $c = a_1$, we obviously also have $\omega(a_1) = 1$, so \begin{equation} \label{eq:SICInequality} \sum_{j=1}^{d^2} \omega(a_j) \leq 2, \end{equation} for any noncontextual state $\omega$ such that $\omega(c)=1$. Now consider any quantum state $\omega$.
From \cref{eq:SICnorm}, we have \[\sum_{j=1}^{d^2} \ket{a_j}\bra{a_j} = d I,\] so the quantum predictions are \begin{equation} \sum_{j=1}^{d^2} \omega(a_j) = d. \end{equation} If we also have $\omega(c)=1$, which is the case for the quantum state corresponding to $\ket{a_1}$ for example, then this state is contextual for $d \geq 3$. \end{example} For $d=3$, the inequality of \cref{eq:SICInequality} was derived as a \emph{state independent} contextuality inequality in \cite{Bengtsson_2012} based on a special relationship between MUBs and SICs that only occurs in that dimension. They considered the orthogonality graph of $21$ vectors in $\mathbb{C}^3$ consisting of the vectors that appear in a SIC and those that appear in a related set of $4$ MUBs. From our perspective, the special relationship is that, in $d=3$, MUBs can be chosen that antidistinguish each of the triples $\{\ket{c},\ket{a_j},\ket{a_k}\}$ used in our proof. Our generalization follows from the fact that these antidistinguishability relations still hold in higher dimensions, but the antidistinguishing measurements are no longer necessarily MUBs. Unfortunately, our generalization is only a state dependent inequality, as we did not find a way of generating a principal context from a SIC\@. This indicates that other methods of generating noncontextuality inequalities from antidistinguishability might exist. \section{Proof of Theorem~\ref{Thm:MainResult}} \label{Proof} \mainresult* The proof is based on one lemma and the Bonferroni inequalities \cite{Rohatgi_2001}. \begin{lem} Let $A$ be a set of antidistinguishable outcomes in a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$. Then, there are no value functions that are $a$-definite for every $a\in A$, i.e. \begin{equation} \bigcap_{a \in A} V_a = \emptyset.
\end{equation} \label{Lem:AntiImpliesEmpty} \end{lem} \begin{proof} Suppose $A = \{a_1,a_2,\cdots,a_n\}$ has $n$ outcomes and that $M = \{a_1^{\perp},a_2^{\perp},\cdots,a_m^{\perp}\}$ (with $m \geq n$) is a context that antidistinguishes them, i.e. \begin{itemize} \item For all $j \in [n]$, there exists a context or maximal partial context $N_j$ such that $a_j,a_j^{\perp} \in N_j$. \item For all $j \in [n]$ and $k \in [n+1,m]$, there exists a context or maximal partial context $N_{jk}$ such that $a_j,a_k^{\perp} \in N_{jk}$. \end{itemize} Since $\{a_1^{\perp},a_2^{\perp},\cdots,a_m^{\perp}\}$ is a context, every value function must assign the value $1$ to exactly one outcome in this set. Consider a value function $v \in V_{a_1}$. Since, for every $k \in [n+1,m]$, there is a (maximal partial) context that contains both $a_1$ and $a_k^{\perp}$, $v$ must assign value $0$ to every $a_k^{\perp}$ for $k \in [n+1,m]$. This means that it must assign value $1$ to one of $a_1^{\perp}, a_2^{\perp},\cdots,a_n^{\perp}$. Now suppose that $v \in \bigcap_{a \in A} V_a$ so that it assigns value $1$ to every $a_j$ for $j \in [n]$. It cannot assign $v(a_j^{\perp}) = 1$ for any $j \in [n]$ because, for every such $j$, there is always a (maximal partial) context $N_j$ such that $a_j,a_j^{\perp} \in N_j$ and we already have $v(a_j) = 1$. This means that the value function must assign value $0$ to every $a_j^{\perp}$ for $j \in [m]$, contradicting the requirement that it assign value $1$ to exactly one outcome in every context. Therefore, no such value function exists, so $\bigcap_{a \in A} V_a = \emptyset$.
\end{proof} \begin{proof}[Proof of \Cref{Thm:MainResult}] Given a contextuality scenario $\mathfrak{C} = (X,\mathcal{M},\mathcal{N})$ and a noncontextual state $\omega \in C_{\mathfrak{C}}$, which is necessarily of the form \[\omega(a) = \sum_{v\in V_{\mathfrak{C}}} p_v v(a),\] we can define a probability space $(V_{\mathfrak{C}},2^{V_{\mathfrak{C}}},P)$ over the value functions via \[P(V) = \sum_{v\in V} p_v.\] Now consider the quantity \begin{equation} \sum_{a \in W} \omega(a) = \sum_{a \in W} \sum_{v\in V_{\mathfrak{C}}} p_v v(a). \end{equation} Because $v(a) = 1$ iff $v \in V_a$ and $v(a) = 0$ otherwise, we can rewrite this as \begin{equation} \label{eq:OmegaP} \sum_{a \in W} \omega(a) = \sum_{a \in W} P(V_a). \end{equation} Next, we make use of the Bonferroni inequalities \cite{Rohatgi_2001}. Recall that the Bonferroni inequalities are a generalization of the inclusion-exclusion principle to probability spaces. For a probability space $(\Omega,\Sigma,P)$, let $\Omega_1,\Omega_2,\cdots,\Omega_n \in \Sigma$ be measurable sets. Then, we have the sequence of inequalities: \begin{align} P\left ( \bigcup_{j=1}^n \Omega_j \right ) & \leq \sum_{j = 1}^n P \left ( \Omega_j \right ) \\ P\left ( \bigcup_{j=1}^n \Omega_j \right ) & \geq \sum_{j = 1}^n P \left ( \Omega_j \right ) - \sum_{j < k} P \left ( \Omega_j \cap \Omega_k \right ) \label{eq:Bonf} \\ P\left ( \bigcup_{j=1}^n \Omega_j \right ) & \leq \sum_{j = 1}^n P \left ( \Omega_j \right ) - \sum_{j < k} P \left ( \Omega_j \cap \Omega_k \right ) \nonumber \\ & + \sum_{j < k < l} P \left ( \Omega_j \cap \Omega_k \cap \Omega_l \right ) \\ \vdots & \nonumber \end{align} The pattern continues with alternating signs of the additional terms and alternating directions of the inequalities. Here, we will make use of the second Bonferroni inequality given in \cref{eq:Bonf}.
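As an independent numerical sanity check (not part of the proof), the second Bonferroni inequality can be verified on randomly generated finite probability spaces; a minimal sketch:

```python
import itertools
import random

random.seed(0)
omega = list(range(8))  # an 8-point sample space

def prob(P, A):
    # probability of event A under the mass function P
    return sum(P[x] for x in A)

for _ in range(200):
    # random probability mass function on the sample space
    w = [random.random() for _ in omega]
    Z = sum(w)
    P = {x: wx / Z for x, wx in zip(omega, w)}
    # three random events Omega_1, Omega_2, Omega_3
    events = [set(random.sample(omega, random.randint(1, 8)))
              for _ in range(3)]
    lhs = prob(P, set().union(*events))
    rhs = (sum(prob(P, E) for E in events)
           - sum(prob(P, Ej & Ek)
                 for Ej, Ek in itertools.combinations(events, 2)))
    # second Bonferroni inequality: P(union) >= sum P - sum pairwise P
    assert lhs >= rhs - 1e-12

print("second Bonferroni inequality holds in all trials")
```

The same loop with the first-order bound $P(\bigcup_j \Omega_j) \leq \sum_j P(\Omega_j)$ checks the union bound used implicitly via $P(\bigcup_j V_{a_j}) \leq 1$.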
Suppose that the pairwise antiset $W$ has $n$ outcomes $W = \{a_1,a_2,\ldots,a_n\}$ and consider the corresponding sets of $a_j$-definite value functions $V_{a_1},V_{a_2},\cdots,V_{a_n}$. By the second Bonferroni inequality we have \begin{equation} P\left ( \bigcup_{j=1}^n V_{a_j} \right ) \geq \sum_{j = 1}^n P \left ( V_{a_j} \right ) - \sum_{j < k} P \left ( V_{a_j} \cap V_{a_k} \right ). \end{equation} Combining this with \cref{eq:OmegaP} and rearranging gives \begin{equation} \sum_{j=1}^n \omega(a_j) \leq P\left ( \bigcup_{j=1}^n V_{a_j} \right ) + \sum_{j < k} P \left ( V_{a_j} \cap V_{a_k} \right ). \end{equation} Because $P$ is a probability measure, we have $P\left ( \bigcup_{j=1}^n V_{a_j} \right ) \leq 1$, so \begin{equation} \sum_{j=1}^n \omega(a_j) \leq 1 + \sum_{j < k} P \left ( V_{a_j} \cap V_{a_k} \right ). \end{equation} Therefore, the theorem follows if we can show that $P \left ( V_{a_j} \cap V_{a_k} \right ) = 0$ for every $j\neq k$. To do this, we consider the cases of strong and weak pairwise antisets separately. In the strong case, consider a principal context $M$. Since it is a context the sets $V_c$ for $c \in M$ are disjoint and form a partition of $V_{\mathfrak{C}}$, so $V_{\mathfrak{C}} = \bigcup_{c \in M} V_c$. Also, $P(V_{\mathfrak{C}}) = 1$, so we have \begin{align} P \left ( V_{a_j} \cap V_{a_k} \right ) & = P \left ( V_{\mathfrak{C}} \cap \left [ V_{a_j} \cap V_{a_k} \right ] \right )\\ & = P \left ( \left [ \bigcup_{c \in M} V_c \right ] \cap \left [ V_{a_j} \cap V_{a_k} \right ] \right ) \\ & = \sum_{c \in M} P \left ( V_c \cap V_{a_j} \cap V_{a_k} \right ), \end{align} where the last line follows from disjointness of the $V_c$'s. Since each triple $\{c,a_j,a_k\}$ is antidistinguishable, \cref{Lem:AntiImpliesEmpty} implies that $V_c \cap V_{a_j} \cap V_{a_k} = \emptyset$, and hence $P(V_c \cap V_{a_j} \cap V_{a_k}) = 0$. Now consider the weak case. Let $c$ be a principal outcome and suppose that $\omega(c) = 1$. 
Since $\omega(c) = P(V_c)$ it follows that $P(V_c) = 1$. Thus, we can write \begin{align} P \left ( V_{a_j} \cap V_{a_k} \right ) = P \left ( V_c \cap V_{a_j} \cap V_{a_k} \right ), \end{align} and since each triple $\{c,a_j,a_k\}$ is antidistinguishable, \cref{Lem:AntiImpliesEmpty} implies that $V_c \cap V_{a_j} \cap V_{a_k} = \emptyset$, and hence $P(V_c \cap V_{a_j} \cap V_{a_k}) = 0$. \end{proof} \section{Conclusions} \label{Conc} In this paper, we have shown that the antidistinguishability properties of sets of quantum states, and more abstractly outcomes in a contextuality scenario, can be used to derive noncontextuality inequalities. Our method can be used to re-derive some known inequalities, such as the Yu-Oh inequality \cite{Yu_StateIndependentProofKochenSpecker_2012}, in a simple way that uncovers the previously hidden antidistinguishability structure of the proof. It can also be used to generalize known inequalities to higher dimensions, such as in the Hadamard and SIC examples, and derive new classes of noncontextuality inequalities, such as the example based on MUBs. In some cases, we get much tighter bounds on the inequalities than we would get from considering the distinguishability properties alone, such as in the Hadamard example. Our method is not necessarily the only way of deriving noncontextuality inequalities from antidistinguishability, and we think there is much to be gained from considering antidistinguishability structures further, particularly given their role in some recently proposed quantum information protocols \cite{Perry_CommunicationTasksInfinite_2015, Havlicek_2019}. In principle, our noncontextuality inequalities could be made robust to noise and tested experimentally using the techniques described in \cite{Kunjwal_2018, ADO18}. However, in order to do so, one would have to experimentally test that the antidistinguishabilities used in the proofs hold approximately in the lab. 
This would involve constructing the bases that antidistinguish the states in our pairwise antisets, increasing the number of vectors needed to establish the proof. It would then essentially reduce to a proof based on the orthogonality properties of the states. From a theoretical point of view, one of the virtues of our method is that you do not have to explicitly construct the antidistinguishing measurements, so we can derive our inequalities using a smaller number of vectors than would be needed in methods based on orthogonality. This advantage would be lost in the experimental tests. Thus, we think the main use of our method will be in theoretical work, where contextuality inequalities can be used to prove things about quantum computation and quantum information protocols. As an example of this, the amount of memory needed to classically simulate stabilizer quantum computations was recently bounded using contextuality proofs based on antidistinguishability \cite{Karanjai_2018}. We expect that having a general method of constructing inequalities based on antidistinguishability could be used to prove similar and more general results for other classes of quantum computation. Our work also has implications for thinking about overlap bounds on the reality of the quantum state. One known class of bounds is based on CSW noncontextuality inequalities, but the other class---based on antidistinguishability---did not previously have a known connection to contextuality. In this paper, we have shown that this second class of bounds are also noncontextuality inequalities. It has been shown that a maximally $\psi$-epistemic model (one in which the quantum and classical overlaps are equal) must be noncontextual \cite{Leifer_MaximallyEpistemicInterpretations_2013, Leifer_QuantumStateReal_2014}, which explains why contextuality proofs provide overlap bounds. However, the converse is not necessarily true. 
This indicates that better overlap bounds than those currently known might be obtainable by considering the constraints on maximally $\psi$-epistemic models that are not implied by noncontextuality. \begin{acknowledgments} Matthew Leifer wishes to thank Owen Maroney and Matt Pusey for useful discussions. This research was supported in part by the Fetzer Franklin Fund of the John E.\ Fetzer Memorial Trust and by grant number FQXi-RFP-IPW-1905 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Alley Community Foundation. Matthew Leifer is grateful for the hospitality of Perimeter Institute where part of this work was carried out. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. Cristhiano Duarte was supported by a fellowship from the Grand Challenges Initiative at Chapman University. \end{acknowledgments} \end{document}
\begin{document} \def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1} \if00 { \title{\bf Propensity Process: a Balancing Functional} \author{Pallavi S. Mishra-Kalyani\\ Department of Biostatistics and Bioinformatics\\ Emory University \\ Brent A. Johnson\\ Department of Biostatistics and Computational Biology\\ University of Rochester\\ and \\ Qi Long\footnotemark[1]\\ Department of Biostatistics, Epidemiology, and Informatics\\ University of Pennsylvania} \date{} \maketitle } \fi \if10 { \begin{center} {\LARGE\bf Propensity Process: a Balancing Functional} \end{center} } \fi \begin{abstract} In observational clinic registries, time to treatment is often of interest, but treatment can be given at any time during follow-up and there is no structure or intervention to ensure regular clinic visits for data collection. To address these challenges, we introduce the time-dependent propensity process as a generalization of the propensity score. We show that the propensity process balances the entire time-varying covariate history which cannot be achieved by existing propensity score methods and that treatment assignment is strongly ignorable conditional on the propensity process. We develop methods for estimating the propensity process using observed data and for matching based on the propensity process. We illustrate the propensity process method using the Emory Amyotrophic Lateral Sclerosis (ALS) Registry data. \end{abstract} \noindent {\it Keywords:} Balancing Score; Generalized Propensity Score; Propensity Process; Propensity Score; Observational Registry; Time-Varying Covariates \spacingset{1.45} \section{Introduction} \label{sec:intro} Amyotrophic lateral sclerosis (ALS) is a rare progressive neurological disorder resulting in the degeneration of both upper motor neurons of the cerebral cortex and lower motor neurons of the spinal cord and peripheral nervous system, with a very poor prognosis. 
Currently, there is no cure for ALS and clinical care is generally limited to treating secondary infections and palliative care, such as surgically inserting a percutaneous endogastrostomy (PEG) tube to provide enteral nutrition for individuals having difficulty swallowing \citep{procaccini2008}. Our objective is to assess the effect of inserting a PEG feeding tube on preventing weight loss. PEG insertion is an individual decision and one that must be made while the individual is strong enough to proceed with surgery. Hence, a randomized controlled trial to study the effect of PEG would be implausible. We develop new methods to evaluate PEG using data from the Emory ALS Clinic registry. Let $T$ denote the continuously-defined time of PEG insertion for a randomly selected patient from the population. The observed outcome $Y$ is collected at or just after a fixed point at time $L$, which consequently restricts the time of PEG insertion. If subjects were randomly assigned to receive PEG prior to $L$ and randomly assigned to treatment times, then both treatment effect and dose-response curve could be estimated using standard methods. However, treatment assignment depends on patient characteristics and confounds the effect of treatment on outcome. To remove confounding associated with covariate imbalance among treatment levels, we rely on the general concept of the propensity score \citep{rosenbaumandrubin1983,rubin1996}. When treatment assignment is binary, the propensity score \citep{rosenbaumandrubin1983} is defined as probability of receiving a treatment given a set of observed variables. Generalizations of the propensity score as a balancing score have been investigated in various settings \citep{hirano2004,imaiandvandyk2004,hansen2008prognostic,allen2011control,hu2014estimation}. 
For continuously-defined treatment levels, \citet{hirano2004} proposed a direct translation of the propensity score by replacing the conditional probability mass function with the conditional density function of treatment assignment given covariates, known as a generalized propensity score (GPS); while this approach leads to as many propensity scores as there are levels of the treatment, it uses only a single score at a time. Although \citet{imaiandvandyk2004} similarly found that the conditional density function of treatment assignment given covariates could serve as a propensity score, they noted potential limitations of this approach and suggested instead using the linear predictor in regression models or other summary statistics that are of finite dimension. When treatment assignment occurs over time, as in the case where an individual chooses to receive PEG insertion or not, we must allow for the possibility of time-dependent confounding. To this end, let $X_t$ denote a set of $p$-dimensional time-dependent covariates at time $t$ and ${\ensuremath{\mathcal X}}_t=\left\{X_s,~0\le s\le t\right\}$ denote the history of covariates up to time $t$. Then, the probability of treatment assignment at time $t$ given the covariate history up to time $t$ is \begin{equation}\label{eq:density.tz} f(t\mid{\ensuremath{\mathcal X}}_t) = \lim_{\epsilon\rightarrow 0} \epsilon^{-1} P\left(t\le T < t+\epsilon \mid {\ensuremath{\mathcal X}}_t\right), \end{equation} where $f(t\mid {\ensuremath{\mathcal X}}_t) = h(t\mid X_t)\exp\left\{-\int_0^t h(s\mid X_s)\, ds\right\}$ and the hazard function is \begin{equation}\label{eq:haz.tz} h(t\mid X_t) = \lim_{\epsilon\rightarrow 0} \epsilon^{-1} P\left(t\le T < t+\epsilon \mid T\ge t, X_t\right).
\end{equation} Because $h(t\mid X_t)$ uniquely parameterises $f(t\mid {\ensuremath{\mathcal X}}_t)$, either model~\eqref{eq:density.tz} or model~\eqref{eq:haz.tz} may be regarded as a legitimate treatment assignment model for continuous treatment with time-independent or time-dependent confounding \citep{li2001,lu2005}. Of note, $f(t\mid {\ensuremath{\mathcal X}}_t)$ is a function of the entire covariate history ${\ensuremath{\mathcal X}}_t$, whereas the hazard function $h(t\mid X_t)$ is a function of $X_t$ only. This subtle, yet important difference can lead to difficulties when extending methods proposed by \citet{imaiandvandyk2004} and \citet{hirano2004} to time-dependent confounding via standard hazard modeling. In addition, both \citet{li2001} and \citet{lu2005} used the hazard function $h(t\mid X_t)$ as a GPS for matching, which allows for balancing $X_t$ at the time of treatment in a matched set. However, they did not establish the strong ignorability of treatment assignment given their time-dependent GPS; this property does not hold if $Y$ is associated with ${\ensuremath{\mathcal X}}_t$ rather than just $X_t$, in which case their proposed procedures may not lead to valid causal inference. Additionally, their proposed methods are only applicable to studies with data routinely collected at regular intervals, which is often not true in clinical registries. We propose the propensity process to correct for confounding in observational studies by balancing the covariate history ${\ensuremath{\mathcal X}}_t$. After the propensity process is estimated, bias-corrected data analyses can be achieved through matching or stratification \citep{rosenbaumandrubin1983}. Establishing formally the theoretical properties of the propensity process for time-independent confounding requires different arguments than those presented in \citet{imaiandvandyk2004}. \section{Methods}\label{sec:PS} \subsection{Notation and Assumptions} Our framework is constructed through potential outcomes \citep{rubin2005causal}.
For $t\in [0,L)$, we define $U_t=T\wedge t$ as the treatment time restricted to time $t$ and $U=T\wedge L$ as the treatment time restricted to time $L$, where $a \wedge b$ denotes the minimum of $a$ and $b$. Let ${\ensuremath{\mathcal T}}_t=\left\{[0,t),t+\right\}$ define the set of potential treatment times restricted to $t$, $t\in[0,L)$, where $t+$ means that a patient did not receive PEG treatment before $t$. Let $Y^*_t$ be the potential outcome if a subject received PEG treatment at time $t$, $t\in[0,L)$, and $Y^*_{t+}$ the potential outcome if a subject did not receive PEG treatment in the interval $[0,t)$. It follows that $Y^*_{L+}$ denotes the potential outcome if a subject did not receive PEG treatment in the interval $[0,L)$. We also define the treatment-free potential covariate process ${\ensuremath{\mathcal X}}^*_t,~t\le L$. Then, the set of potential outcomes and treatment-free potential covariate process for a randomly selected subject from the population is $\{Y^*_s, {\ensuremath{\mathcal X}}^*_s,~s\in {\ensuremath{\mathcal T}}_t\}$ when treatment time is restricted at $t$, $t\in[0,L)$. In contrast, the observed data are $(Y, U, {\ensuremath{\mathcal X}}_{U})$, where the observed outcome $Y=Y^{*}_{U}$, and the observed covariate history ${\ensuremath{\mathcal X}}_{U}={\ensuremath{\mathcal X}}^*_{U}$. Given $\theta_t=h(t\mid X^*_t)$, we define the propensity process as the sample path of the hazard function from baseline to time $t$, i.e., \begin{equation}\label{eq:pp} \Theta_t = \left\{ \theta_s=h(s\mid X^*_s),~0\le s\le t \right\}, \end{equation} noting that $\Theta_t$ is dependent on ${\ensuremath{\mathcal X}}^*_t$. As ${\ensuremath{\mathcal X}}^*_t$ is observable only up to $U$, $\Theta_t$ is estimable only up to $U$. 
While this concept seems similar to the propensity function \citep{imaiandvandyk2004}, the distinguishing factor of the propensity process is that $\Theta_t$ depends on $t$, is of infinite dimension, and $\Theta_L$ cannot be fully estimated for subjects receiving PEG before $L$, whereas the propensity function in \citet{imaiandvandyk2004} only allows for incorporation of time-independent covariates and can be estimated for all subjects. In our framework, we make two assumptions. \begin{assump}[Stable unit treatment value assumption] The distributions of potential outcomes for different subjects are independent of one another. \end{assump} \begin{assump}[Strong Ignorability] For every $t\in [0, L)$, $\mbox{pr}(U_t\in{\ensuremath{\mathcal A}}\mid Y^*_s,{\ensuremath{\mathcal X}}^*_t)=\mbox{pr}(U_t\in{\ensuremath{\mathcal A}}\mid {\ensuremath{\mathcal X}}^*_t)$ and $\mbox{pr}\left(U_t\in{\ensuremath{\mathcal A}}\mid {\ensuremath{\mathcal X}}^*_t\right)>0$ for all $s\in {\ensuremath{\mathcal T}}_t$, ${\ensuremath{\mathcal X}}^*_t$, and ${\ensuremath{\mathcal A}}\subseteq{\ensuremath{\mathcal T}}_t$. \end{assump} Assumption~1 is a common assumption in causal inference. However, our Assumption~2 is defined for each time point $t$ and differs from the standard strong ignorability of treatment assignment assumption used in earlier work for balancing scores. One implication of Assumption~2 is that, conditional on the treatment-free history ${\ensuremath{\mathcal X}}^*_t$, receiving treatment at $t$ or not is independent of the set of potential outcomes, allowing us to model treatment assignment without conditioning on potential outcomes.
\begin{prop} \label{prop1} $U$ is conditionally independent of treatment-free covariate history ${\ensuremath{\mathcal X}}^*_L$ given $\Theta_L$, where ${\ensuremath{\mathcal X}}^*_L$ and $\Theta_L$ are the entire treatment-free covariate history and propensity process, respectively. \end{prop} Proposition 1 establishes $\Theta_L$ as a balancing functional that balances the entire covariate history. Proposition 1 requires that $\Theta_L$ is known or can be estimated in the entire domain $[0,L)$. In practice, however, we can only observe the covariate process ${\ensuremath{\mathcal X}}^*_{U}$ and hence estimate $\Theta_{U}$. Proposition 2 establishes the balancing property for every given time point $t$ in $[0,L)$. \begin{prop}\label{prop2} For every $t\in[0,L)$, $U_t$ is conditionally independent of treatment-free covariate history ${\ensuremath{\mathcal X}}^*_t$ given $\Theta_t$, where ${\ensuremath{\mathcal X}}^*_t$ and $\Theta_t$ are the treatment-free covariate history and propensity process through time $t$, respectively. \end{prop} When $t=U$ in Proposition~\ref{prop2}, we have that $U$ is independent of treatment-free covariate history ${\ensuremath{\mathcal X}}^*_U$ given $\Theta_U$, where ${\ensuremath{\mathcal X}}^*_U={\ensuremath{\mathcal X}}_U$ is observable and hence $\Theta_U$ is estimable. \begin{thm}\label{Thm1} For every $t\in [0, L)$, $\mbox{pr} \left(U_t\in{\ensuremath{\mathcal A}}\mid Y^*_s,\Theta_t\right)=\mbox{pr}(U_t\in{\ensuremath{\mathcal A}}\mid \Theta_t)$ for all $s\in {\ensuremath{\mathcal T}}_t$, $\Theta_t$, and ${\ensuremath{\mathcal A}}\subseteq{\ensuremath{\mathcal T}}_t$. \end{thm} When $t=U$ in Theorem~\ref{Thm1}, we have that $U$ is independent of potential outcomes given $\Theta_U$, where $\Theta_U$ is estimable. Several remarks are in order. 
First, in \S~\ref{ssec:pp} we suggest modeling the hazard function in~\eqref{eq:haz.tz} through the proportional hazards model ~\eqref{eq:PH}; one could also use other model formulations for~\eqref{eq:haz.tz} and the results in Propositions~1--2 and Theorem~1 would still apply. Second, Proposition~\ref{prop2} and Theorem~\ref{Thm1} provide justifications for matching a subject treated at $t$ with an eligible control subject untreated at $t$ based on the propensity process up to $t$. It follows that each matched pair would have the same distribution for the covariate process up to $t$ and their potential outcomes are independent of their treatment assignments, allowing for valid causal inference. Third, our Proposition 2 is similar in spirit to Proposition 1 in \citet{lu2005} but is more general in the sense that the propensity process balances the entire covariate history up to $t$ not just the covariates measured at $t$. In addition, \citet{lu2005} did not establish the strong ignorability of treatment assignment given propensity scores similar to our Theorem 1. Proofs for Propositions~1--2 and Theorem~1 are given in the Appendix. \section{Implementation and Practical Considerations} \label{methods} \subsection{Interpolated Propensity Processes}\label{ssec:pp} In practice, the propensity process $\Theta_{U}$ must be estimated from the observed data. The challenge for estimating the propensity process is that we may not observe the complete treatment-free covariate process ${\ensuremath{\mathcal X}}^*_U$ on $[0,U]$; rather, we only get to observe the covariate process at a coarse set of discrete time points as is the case in the motivating ALS study. Here, we propose to borrow strength across subjects in the study sample by modeling each time-dependent covariate as a random curve over time via nonlinear mixed effects models. This allows a predictive curve to be estimated for the entire treatment-free covariate process for each subject. 
First, suppose we parameterize the hazard function in \eqref{eq:haz.tz} through Cox's proportional hazards model and define the propensity process through the linear predictor, \begin{align}\label{eq:PH} &h(t\mid X_t;\beta)=h_{0}(t)\exp(\beta^{\rm T}X_t), &&\Theta_t = \left\{ \theta_s=\beta^{\rm T}X^*_{s},~0\le s\le t \right\}, \end{align} where $h_{0}(t)$ is the unspecified baseline hazard function. Next, write the observed treatment-free covariate history for the $i$-th subject and $k$-th covariate as ${\ensuremath{\mathcal X}}_{ik}=\left(X_{i1k},\ldots,X_{im_ik}\right)$, with time-dependent covariate $X_{ijk}$ measured at time $t_{ij}$. We note that the observation times $(t_{ij},~j=1,\ldots,m_i)$ may be different for each subject but are assumed to be the same for all covariates within a subject. Then, for each time-dependent covariate, we fit the model, \begin{eqnarray} X_{ijk} &=& b_k^{\rm T}(t_{ij})\gamma_k + b_k^{\rm T}(t_{ij})\alpha_{ik} + \epsilon_{ijk},~(i=1,\ldots,n;~j=1,\ldots,m_i;k=1,\ldots,p),\label{eq:mm} \end{eqnarray} where $\epsilon_{ijk}$ are independent, mean-zero random errors. To provide greater flexibility in modeling the covariate process over time, we use spline-type models \citep{ruppert2003semiparametric} in \eqref{eq:mm} where $b(\cdot)$ denotes a set of basis functions and $\gamma_k$ and $\alpha_{ik}$ are regression coefficients corresponding to the basis functions for the fixed and random effects, respectively. The interpolated treatment-free $\widehat{{\ensuremath{\mathcal X}}}_t$ can be obtained from model~\eqref{eq:mm} by replacing regression coefficients $\gamma_k$ and $\alpha_{ik}$ with their estimates $\widehat\gamma_k$ and $\widehat\alpha_{ik}$, respectively. 
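As an illustration of this interpolation step, the following sketch fits a pooled polynomial trend plus ridge-penalised subject-specific deviations to simulated sparse visit data. This is a deliberately simplified stand-in for the mixed model~\eqref{eq:mm}: the polynomial basis, the ridge approximation to the random effects, and all simulated quantities are illustrative assumptions, not the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def basis(t, deg=2):
    # simple polynomial basis b(t) = (1, t, ..., t^deg);
    # the paper uses spline bases, polynomials stand in here
    return np.vander(np.atleast_1d(t), deg + 1, increasing=True)

# simulated sparse longitudinal data: n subjects, irregular visit times
n = 20
subjects = []
for i in range(n):
    m_i = rng.integers(3, 7)                      # number of visits
    t = np.sort(rng.uniform(0, 18, m_i))          # visit times in months
    x = 25 - 0.2 * t + rng.normal(0, 0.5, m_i)    # a BMI-like covariate
    subjects.append((t, x))

# pooled fixed effect (gamma) by least squares over all subjects
T = np.concatenate([t for t, _ in subjects])
X = np.concatenate([x for _, x in subjects])
gamma, *_ = np.linalg.lstsq(basis(T), X, rcond=None)

def interpolate(i, t_new, lam=1.0):
    # subject-specific deviation (alpha_i) by ridge-penalised least
    # squares on subject i's residuals: a crude random-effects stand-in
    t, x = subjects[i]
    B = basis(t)
    resid = x - B @ gamma
    alpha = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ resid)
    return basis(t_new) @ (gamma + alpha)

# interpolated covariate value for subject 0 at an unobserved time
val = interpolate(0, 9.0)[0]
print(round(val, 2))
```

The interpolated curves would then be evaluated on a fine time grid and fed into the linear predictor $\beta^{\rm T}X_t$ of~\eqref{eq:PH} to obtain $\widehat\Theta_U$.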
Then the estimated propensity process $\widehat\Theta_{U}$ can be obtained from~\eqref{eq:PH} by plugging in the interpolated $\widehat{{\ensuremath{\mathcal X}}}_U$ and $\widehat\beta$, where $\widehat\beta$ is the estimated regression coefficient vector in the Cox proportional hazards model. \subsection{Matching}\label{subsec:matching} The use of matched analyses based on propensity scores for testing causal null hypotheses has been advocated by several other authors; for example, see \citet{rosenbaumandrubin1983}, \citet{li2001} and \citet{lu2005} and references therein. Matching can be performed by minimizing the integrated squared error between the estimated propensity process $\widehat\Theta_{t}$ of a subject who received PEG treatment at time $t$ and that of each eligible control with $U>t$. To accomplish this task, we implement a sequential matching algorithm. We start by ordering subjects chronologically according to their time of PEG treatment or censoring, namely $U$. Set the matched pair counter to $m=1$ and select the subject with the smallest time to PEG treatment, say subject $i_1$. Define the integrated squared difference in interpolated propensity processes between $i_1$ and $l$ as $Q(i_1,l) = I(T_{i_1}\le L) \int_0^{T_{i_{1}}} (\widehat\theta_{i_{1},t}-\widehat\theta_{l,t})^2\, dt,$ for all subjects $l$ in the set of $n-1$ eligible controls $\mathcal{C}_1=\{l\mid l=1,\ldots,n,~l\ne i_1\}$. The matched control for $i_1$ is the nearest neighbor in interpolated propensity processes among eligible controls, i.e., $\mbox{argmin}_{l\in\mathcal{C}_1} Q(i_1,l)$. Increment the matched pair counter by one to $m=2$ and select the subject with the smallest time to PEG treatment, say $i_2$, excluding the two subjects in the first matched pair. Therefore, the set of eligible controls, say $\mathcal{C}_2$, contains $n-3$ subjects: all $n$ subjects less the two subjects in the first matched pair and $i_2$.
The matched control for $i_2$ is the nearest neighbor in interpolated propensity processes among the set of eligible controls, $\mbox{argmin}_{l\in\mathcal{C}_2} Q(i_2,l)$. Increment the matched pair counter by one and continue until all treated individuals are matched or until there are no suitable controls available for matching. \section{Analysis of the ALS Registry Data} \label{results} Using a data set from the Emory ALS Registry, we assess the association of PEG treatment with the change in body mass index (BMI) from baseline to 18 months, i.e., $L$ = 18 months. The data set includes 240 patients who survived past $L$ and had at least one clinic visit between baseline and $L$. The patients who received PEG did so after their first clinic visit. The timing of recommending PEG by the physician involved many factors and the final decision to have PEG was made by each patient. We model treatment assignment through the proportional hazards model \eqref{eq:PH} including the following covariates. The baseline risk factors are age at diagnosis, sex, site of onset of disease, negative inspiratory force, and time from diagnosis to the first clinic visit. Two time-varying covariates are forced vital capacity and body mass index, which may not be measured at every clinic visit for every patient. Each time-varying covariate is modeled over time using the mixed model~\eqref{eq:mm}, where polynomial spline basis functions are used. The estimated curves are used to interpolate the covariate values needed for estimating the propensity process based on \eqref{eq:PH}. We compare three alternative approaches to the proposed propensity process. First, a na\"{i}ve analysis compares all treated individuals to those who are untreated prior to $L$. The second approach is the propensity function \citep{imaiandvandyk2004} that uses baseline risk factors $X_0$ only in the treatment assignment model~\eqref{eq:PH}, where $\theta_0=\beta^{\rm T}X_0$ defines the propensity function. 
The third approach is the interpolated generalized propensity score, which uses the interpolated treatment-free $\widehat{X}_t$ defined in \S~\ref{ssec:pp} to obtain the GPS for each subject in the spirit of \citet{lu2005}, noting that $X_t$ may not be observed at time $U$ for a subject and its eligible controls as defined in \S~\ref{subsec:matching}. The same sequential matching algorithm in \S~\ref{subsec:matching} is used for all propensity score methods. Our matching algorithm resulted in $M=74$ pairs for the analysis using the propensity function and $M=76$ pairs for both analyses using the generalized propensity score and propensity process. Following \citet{li2001} and \citet{lu2005}, we assess balance of covariates by examining Type I errors from a log-rank test of the effect of the covariate on time to treatment, one covariate at a time. In the matched analyses, this model is stratified by the $M$ matched pairs. As shown in Table 1, prior to matching, balance is not achieved. While other methods improve covariate balance, they do not balance all covariates. However, matching using the propensity process results in balance across all covariates. This indicates that the propensity process outperforms the baseline propensity function or interpolated GPS in terms of balancing covariates and there may be residual confounding after matching by the other propensity score methods. 
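The sequential matching algorithm of \S~\ref{subsec:matching} amounts to repeated nearest-neighbour search under the integrated squared difference $Q$. The following sketch runs it on simulated, discretised propensity-process paths; all data here are illustrative, not from the registry:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
grid = np.linspace(0, 18, 181)   # time grid on [0, L], L = 18 months
dt = grid[1] - grid[0]

# simulated linear-predictor paths theta_{i,t} = beta^T X_{i,t} on the
# grid, plus each subject's PEG-or-censoring time U_i and treatment flag
theta = np.cumsum(rng.normal(0.0, 0.05, (n, grid.size)), axis=1)
U = rng.uniform(2.0, 18.0, n)
treated = rng.random(n) < 0.5

def Q(i, l):
    # integrated squared difference of the two paths up to U_i
    # (Riemann-sum approximation of the integral in Q(i, l))
    mask = grid <= U[i]
    return np.sum((theta[i, mask] - theta[l, mask]) ** 2) * dt

pairs, used = [], set()
# process treated subjects in chronological order of treatment time
for i in sorted(np.flatnonzero(treated), key=lambda j: U[j]):
    if i in used:
        continue  # already matched earlier as someone's control
    # eligible controls: unmatched subjects still untreated at U_i
    elig = [l for l in range(n) if l not in used and l != i and U[l] > U[i]]
    if not elig:
        break
    match = min(elig, key=lambda l: Q(i, l))
    pairs.append((i, match))
    used.update({i, match})

print(len(pairs), "matched pairs")
```

Each pair consists of a treated subject and its nearest neighbour in propensity-process distance among subjects still untreated at the matching time, mirroring the construction of $\mathcal{C}_1, \mathcal{C}_2, \ldots$ above.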
\begin{table}[h] \centering \caption{Covariate balance before and after matching}\label{tab:balance} \begin{tabular}{lcccc} \hline & Prior to & Propensity & Generalized & Propensity \\ Covariate & Matching & Function & Propensity Score & Process \\\hline Body mass index & 0.277 & 0.245 & 0.986 & 0.991 \\ Forced vital capacity & 0.764 & 0.539 & 0.201 & 0.317 \\ Negative inspiratory force & 0.151 & 0.022 & 0.016 & 0.704 \\ Age & 0.162 & 0.718 & 0.378 & 0.195 \\ Sex & 0.577 & 0.695 & 0.002 & 0.706 \\ Site & 0.001 & 0.003 & 1.000 & 0.341 \\ Time from diagnosis & 0.676 & 0.633 & 0.033 & 0.854\\ \hline \end{tabular} \end{table} After matching, we test the causal null hypothesis that the mean potential outcome is the same whether a patient received PEG treatment at time $t$ versus PEG treatment at some time after $t$ or remained untreated by $L$, which can be written as $H_0: E(Y^*_t)=E(Y^*_s)$ for all $t<s\leq L$. We test this hypothesis by a Wilcoxon signed rank test on matched pairs for all the matched analyses. The Wilcoxon rank sum test is used for hypothesis testing in the na\"ive analysis. Table 2 presents the median difference in BMI change at 18 months and the p-value of the Wilcoxon test for each approach. The propensity process matched analysis suggests a protective effect of PEG on BMI, whereas the other three methods all show effects that are attenuated towards 0 and are not statistically significant. \begin{table}[h] \centering \caption{Results of the data analysis}\label{tab:outcome} \begin{tabular}{lcc} \hline & Median Difference& P-value\\ \hline Na\"ive & 0.035 & 0.673 \\ Propensity Function& 0.030 & 0.466 \\ Generalized Propensity Score& 0.360 & 0.453 \\ Propensity Process& 0.830 & 0.022 \\ \hline \end{tabular} \end{table} \section{Discussion} \label{discussion} Compared to the existing propensity score methods, the propensity process offers the advantage of balancing time-varying covariate history from baseline to time of treatment.
A key component of this approach is the interpolation of covariate curves. We propose to use nonlinear mixed models to provide flexibility for modeling covariate history, though there must be enough individual longitudinal data collected to estimate these curves, which is a potential limitation in settings with sparsely collected longitudinal data. However, data interpolation may not be needed in settings such as critical care in intensive care units, where time series data including heart rate and blood pressure are continuously recorded \citep{lehman2013tracking}. In our data analysis, we use a straightforward approach for hypothesis testing after matching. Future extensions may include conditional likelihood methods for estimating treatment effects based on matched pairs/sets and methods for stratification and covariate adjustment using the propensity process. Additionally, our analysis excludes individuals who died prior to $L$ in order to avoid complications due to censoring by death \citep{rubin2006,zhang2003estimation}; future extensions could handle censoring by death directly so that no such exclusion is necessary. \section*{Appendix: Proofs of Propositions 1, 2, and Theorem 1} \noindent\textit{Proof of Propositions 1 and 2:} We prove Propositions 1 and 2 based on the treatment assignment model defined in \eqref{eq:density.tz} and \eqref{eq:haz.tz}. Given $\theta_t=h(t\mid X^*_t)$, \begin{eqnarray} f(t\mid {\ensuremath{\mathcal X}}^*_t,\Theta_t)&=&f(t\mid {\ensuremath{\mathcal X}}^*_t)\nonumber\\ &=& h(t\mid X^*_t)\exp\left\{-\int_0^t h(s\mid X^*_s)\, ds\right\}\nonumber\\ &=& \theta_t\exp\left\{-\int_0^t \theta_s\, ds\right\}\label{eq:f}\\ &=& f(t\mid \Theta_t), \mbox{ for all $t \in [0,L)$},\nonumber \end{eqnarray} where the first equality is due to the fact that $\Theta_t$ is redundant given ${\ensuremath{\mathcal X}}^*_t$.
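The identity \eqref{eq:f} can be checked numerically: for any hazard, the density $\theta_t\exp(-\int_0^t\theta_s\,ds)$ integrated over $[0,L]$, together with the survival probability $\exp(-\int_0^L\theta_s\,ds)$, must account for all the probability mass. The Python check below uses a hypothetical hazard $\theta_t=0.1+0.05t$, not one estimated from the registry data.

```python
import numpy as np

# hypothetical hazard theta_t = 0.1 + 0.05 t on [0, L], with L = 18 months
L = 18.0
t = np.linspace(0.0, L, 20001)
theta = 0.1 + 0.05 * t
dt = np.diff(t)
# trapezoidal cumulative integral of the hazard, int_0^t theta_s ds
cum = np.concatenate(([0.0], np.cumsum(0.5 * (theta[1:] + theta[:-1]) * dt)))
density = theta * np.exp(-cum)                      # density in (eq:f)
mass_before_L = np.sum(0.5 * (density[1:] + density[:-1]) * dt)
surv_at_L = np.exp(-cum[-1])                        # pr(T >= L)
total = mass_before_L + surv_at_L                   # ~ 1 up to quadrature error
```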
It follows from integrating both sides in $[0,L]$ that $\mbox{pr}(T\geq L\mid {\ensuremath{\mathcal X}}^*_L,\Theta_L)=\mbox{pr}(T\geq L\mid \Theta_L)$. The result in Proposition~1 follows immediately, i.e., $U$ is conditionally independent of ${\ensuremath{\mathcal X}}^*_L$ given $\Theta_L$. Along similar lines, we can prove the result in Proposition~2, i.e., $U_t$ is conditionally independent of ${\ensuremath{\mathcal X}}^*_t$ given $\Theta_t$ for all $t \in [0,L)$. \noindent\textit{Proof of Theorem 1:} For every $t \in [0,L)$, all $s\in {\ensuremath{\mathcal T}}_t$, $\Theta_t$, and ${\ensuremath{\mathcal A}}\subseteq{\ensuremath{\mathcal T}}_t$, \begin{eqnarray} \mbox{pr}\left(U_t\in{\ensuremath{\mathcal A}}\mid Y^*_s,\Theta_t\right)&=&E\left\{\mbox{pr}\left(U_t\in{\ensuremath{\mathcal A}}\mid Y^*_s,{\ensuremath{\mathcal X}}^*_t\right)\mid Y^*_s,\Theta_t\right\}\label{eq:prop3.1} \\ &=&E\left\{\mbox{pr}\left(U_t\in{\ensuremath{\mathcal A}}\mid {\ensuremath{\mathcal X}}^*_t\right)\mid Y^*_s,\Theta_t\right\}\label{eq:prop3.2}\\ &=&E\left\{\mbox{pr}\left(U_t\in{\ensuremath{\mathcal A}}\mid {\ensuremath{\mathcal X}}^*_t,\Theta_t\right)\mid Y^*_s,\Theta_t\right\}\label{eq:prop3.3}\\ &=&E\left\{\mbox{pr}\left(U_t\in{\ensuremath{\mathcal A}}\mid \Theta_t\right)\mid Y^*_s,\Theta_t\right\}\label{eq:prop3.4}\\ &=&\mbox{pr}\left(U_t\in{\ensuremath{\mathcal A}}\mid \Theta_t\right). \label{eq:prop3.5} \end{eqnarray} Let $\sigma(Y^*_s,\Theta_t)$ and $\sigma(Y^*_s,{\ensuremath{\mathcal X}}_t)$ denote the $\sigma$-field generated by $(Y^*_s,\Theta_t)$ and $(Y^*_s,{\ensuremath{\mathcal X}}_t)$, respectively. By the definition of $\Theta_t$ in~\eqref{eq:pp}, we have $\sigma(Y^*_s,\Theta_t)\subseteq\sigma(Y^*_s,{\ensuremath{\mathcal X}}_t)$ and then \eqref{eq:prop3.1} follows immediately \citep[cf.][Theorem 34.4]{billingsley2008probability}. 
Equality \eqref{eq:prop3.2} is due to Assumption~2, while \eqref{eq:prop3.3} follows from the fact that $\Theta_t$ is redundant given ${\ensuremath{\mathcal X}}^*_t$. The fourth expression \eqref{eq:prop3.4} is due to Proposition~2, while \eqref{eq:prop3.5} follows from~\eqref{eq:f}, i.e., $\mbox{pr}\left(U_t\in{\ensuremath{\mathcal A}}\mid \Theta_t\right)$ does not depend on $Y^*_s$. \end{document}
\begin{document} \begin{abstract} We consider an exit-time minimum problem with a running cost $l\geq 0$ and unbounded controls. The occurrence of points where $l=0$ can be regarded as a loss of transversality. Furthermore, since the controls range over unbounded sets, the family of admissible trajectories may lack important compactness properties. In the first part of the paper we show that the existence of a {\it $\p$-Minimum Restraint Function} provides not only global asymptotic controllability (despite the lack of transversality) but also a state-dependent upper bound for the value function (provided $p_0>0$). This extends to unbounded dynamics a former result which relied heavily on the compactness of the control set. In the second part of the paper we apply the general result to the case when the system is {\it polynomial} in the control variable. Some elementary algebraic properties of the convex hulls of the ranges of vector-valued polynomials allow some simplifications of the main result, in terms of either near-control-affine systems or reduction to weak subsystems of the original dynamics.
\end{abstract} \maketitle \section{Introduction} Mainly motivated by the case when the dynamics is polynomial in the control, we deal with optimal control problems of the form \begin{align} &\dot x=f(x,u), \qquad x(0) = z \label{Eintro}, \\&(x(t),u(t)) \in (\Omega\backslash{\bf C})\times U , \quad \lim_{t\to {T}_{x}^-} \d (x(t),\mathbf C)=0,\\ & \displaystyle{\I(x,u):= \int_ 0^{T_{x}} l(x(t),u(t))\, dt,} \qquad \displaystyle{V(z):= \inf_{(x,u)} \I(x,u)},\label{minprobintro} \end{align} where: i) for given positive integers $n,m$, the {\it state space} $\Omega$ is an open subset of $ \RR^n $, the {\it controls} $u$ range over a (possibly unbounded) set $U\subseteq\R^m$, and ${\bf C}\subset\Omega$ is a closed {\it target} with compact boundary; ii) the {\it running cost} $l(x,u)$ is $\geq 0$ for all $ (x,u) \in (\Omega\backslash{\bf C})\times U$; iii) $T_{{x}} \in [0,+\infty]$ is the infimum of times needed for the trajectory $x(\cdot)$ to approach the target $\mathbf C$; and iv) $\mathbf d(x,{\bf C})$ denotes the usual (Euclidean) distance of the point $x$ from the subset $\bf C$. We focus on a particular kind of Lyapunov function, called a {\it $p_0$-Minimum Restraint Function} ($\p\geq 0$). This notion was introduced in \cite{MR13} under the extra hypothesis that the controls range over a bounded set. The existence of a $p_0$-Minimum Restraint Function, besides implying global asymptotic controllability to $\mathbf C$, was shown to provide a continuous upper estimate for the value function $V$. Such an estimate is not trivial, in that the problem (here, as well as in \cite{MR13}) lacks what in first-order PDEs is called {\it transversality}, which would correspond to the assumption $l(x,u) \neq 0 $ for all $(x,u)$ (as in the minimal time problem, where $l=1$)\footnote{But here the exit time can well be infinite.}. Here, we extend the concept of $p_0$-Minimum Restraint Function to unbounded dynamics $f$.
Notice that the unboundedness of $f$ (and $l$) cannot be neglected, since no {\it coercivity} hypothesis --roughly speaking, the requirement that $u\mapsto l(x,u)$ grow suitably faster than $u\mapsto f(x,u)$-- is assumed to rule out the need for larger and larger velocities in a minimizing sequence. Precisely, for any $\p\geq 0$ we call a \emph{$\p$-Minimum Restraint Function} every continuous function $$W:\Omega\setminus\overset{\circ}{\mathbf C}\to[0,+\infty[$$ whose restriction to $\Omega\setminus\mathbf C$ is locally semiconcave, positive definite, and proper\footnote{See Definition \ref{defMRF}, where, as soon as $\Omega\subsetneq \R^n$, one also posits $W_0\in \R\cup \{+\infty\}$ such that $W (\Omega\setminus\mathbf C)< W_0$ \text{and} $\lim_{x\to x_0} W(x)=W_0$, for every $x_0\in\partial\Omega$.}, and which verifies \begin{equation}\label{MRH1intro} H_{l,f}(x,\p,D^*W(x))<0 \quad \forall x\in \Omega\setminus\mathbf C, \end{equation} where the Hamiltonian $H_{l,\F}$ is defined by \begin{equation}\label{Hamintro} H_{l,f}(x,p_0,p):= \inf_{u\in U}\Big\{ \langle p ,f(x,u) \rangle+p_0\,l(x,u)\Big\} . \end{equation} The inequality \eqref{MRH1intro} has to be interpreted as $H_{l,\F}(x,\p, p )<0$ $\forall p\in D^*W(x)$---which includes the case $H_{l,\F}(x,\p, p )=-\infty$. The following hypothesis will be crucial: \vskip0.3truecm {\bf Hypothesis A}: {\it For every compact subset ${\mathcal K}\subset\Omega\backslash\mathbf C$ the function \begin{equation}\label{ipointro} (\bar l,\bar{\F})(x,u) := \frac{(l,{\F})}{1 +|(l,\F)(x,u)|} (x,u)\end{equation} is uniformly continuous on ${\mathcal K}\times U$.} \vskip0.3truecm Observe that Hypothesis {\bf A} allows for a vast class of cost-dynamics pairs $(l,f)(x,u)$\footnote{See Remark \ref{A'} for a slightly stronger hypothesis.}, including ($x$-dependent) polynomials in $u_1,\cdots,u_m$, $|u_1|,\cdots,|u_m|$, $|u|$, and compositions of polynomials with exponential and Lipschitz continuous functions.
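The normalization in Hypothesis {\bf A} forces the rescaled pair $(\bar l,\bar\F)$ into the open unit ball even when $(l,f)$ is an unbounded polynomial in $u$. The short Python check below illustrates this for an assumed scalar example, $l=u^2$ and $f(x,u)=x+u^3$, which is ours and not taken from the paper.

```python
import numpy as np

def rescaled(x, u):
    # the normalized pair (l, f)/(1 + |(l, f)|) of Hypothesis A,
    # for the assumed scalar example l = u**2, f(x, u) = x + u**3
    v = np.array([u**2, x + u**3])
    return v / (1.0 + np.linalg.norm(v))

# sweep a compact set of states and a large range of controls
max_norm = max(np.linalg.norm(rescaled(x, u))
               for x in np.linspace(-5.0, 5.0, 11)
               for u in np.linspace(-100.0, 100.0, 201))
# (l, f) blows up as |u| grows, yet the rescaled pair stays in the unit ball
```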
Let us bring forward the statement of our main result: \begin{theorem}\label{stimaenergiaintro} Assume Hypothesis {\bf A} and let $W$ be a $\p$-Minimum Restraint Function for the problem $(l,f,\mathbf C)$, for some $\p\ge0$. Then \begin{itemize} \item[{\bf (i)}] system {\rm\eqref{Eintro}} is globally asymptotically controllable to $\mathbf C$. \end{itemize} Furthermore, \begin{itemize}\item[{\bf (ii)}] if $\p>0$, then \begin{equation} {{V}}(z)\le \frac{W(z)}{{{\p}}}\, \qquad\forall z\in \Omega\setminus \mathbf C. \end{equation} \end{itemize} \end{theorem} The proof of the theorem relies on a state-based time rescaling of the problem, which in turn is made possible by Hypothesis {\bf A}. The controls of the rescaled problem (see Section \ref{generalsec}) still range in the (possibly unbounded) set $U$. Yet, some compactness properties of the rescaled dynamics are of crucial importance in the construction of trajectories reaching the target at least asymptotically. An application to the gyroscope (see Subsection \ref{gyroex}) concludes Section \ref{generalsec}: an explicit $\p$-Minimum Restraint Function is provided for a minimum problem where the control is identified with the pair formed by the precession and spin velocities, while the state corresponds to the pair formed by the nutation angle and its time derivative. \vskip0.5truecm The remaining part of the paper is devoted to problems whose dynamics can be parameterized by a $u$-polynomial: \begin{equation}\label{polintro} \dot x = \displaystyle f (x,u):= f_0(x)+\sum_{i=1}^d\left(\sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=i} u_1^{\alpha_1}\cdots u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\right)\,. \end{equation} Among the applications for which the polynomial dependence is relevant, let us mention Lagrangian mechanical systems, possibly with friction forces, in which the inputs are identified with the derivatives of some of the Lagrangian coordinates.
In this case $d=2$\footnote{This is clearly a consequence of the fact that the kinetic energy is a quadratic form of the velocity (see, besides Subsection \ref{gyroex}, \cite{aldobressan} and \cite{BR10}).}. We point out also that, in connection with the investigation of uniqueness and regularity of solutions for Hamilton-Jacobi equations, dynamics and running costs with unbounded controls and {polynomial growth} have already been addressed in \cite{mot04}, \cite{MS14}, by embedding the problem in a space-time problem through techniques of graph reparameterization -- see e.g. \cite{BR88,BR10,Dyk90,Gur72,Mil96,Ris65,RS00,VP88}. With similar arguments (see also \cite{MR03}), necessary conditions for the existence of (possibly impulsive) minima of input-polynomial optimal control problems have been studied in \cite{chez}. Furthermore, the interplay between convexity and polynomial dependence of both the dynamics and the running cost has also been investigated in \cite{PT09}, in connection with problems of existence of optimal solutions. A careful investigation of elementary, algebraic properties of the convex hull $co~f(x,\R^m)$ proves essential for the application of Theorem \ref{stimaenergiaintro} to the polynomial case \eqref{polintro}. For instance, we consider {\it near-control-affine} control systems, a class of control-polynomial systems where the convex hull of the dynamics can be parameterized as a control-affine system with controls in a neighborhood of the origin\footnote{Once the convex hull of the dynamics is so nicely parameterized, relaxation arguments allow applying several well-established results for control-affine systems.}. This property clearly fails, for instance, for the system $ \dot x = f_0(x) +uf_1(x)+u^2 f_2(x), $ $u\in\R,$ because the origin $(0,0)$ does not belong to the interior of the convex hull of the curve $(u,u^2)$.
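The obstruction for the curve $(u,u^2)$ can be seen directly: every convex combination of points on the curve has second coordinate $\sum_i w_i u_i^2\geq 0$, so the origin can lie only on the boundary of the convex hull, never in its interior. An illustrative numeric check (Python, random samples of our own choosing):

```python
import numpy as np

# Sample convex combinations of points (u, u**2) on the curve and record
# the smallest second coordinate encountered; it should never be negative.
rng = np.random.default_rng(1)
min_second = np.inf
for _ in range(1000):
    u = rng.uniform(-10.0, 10.0, size=5)    # parameters of 5 curve points
    w = rng.dirichlet(np.ones(5))           # convex weights (sum to 1)
    point = w @ np.column_stack((u, u**2))  # convex combination in R^2
    min_second = min(min_second, point[1])
```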
Instead, in view of Theorem \ref{cnear-control-affine}, the convex hull of the image of \begin{multline*} f_0(x) +u_1 f_{1,0,0,0,0,0,0}(x) + u_1u^5_3f_{1,0,5,0,0,0,0}(x) +u_2^3u_6^3 f_{0,3,0,0,0, 3,0}(x) \\+u_1u_3^5u_7^9 f_{1,0,5,0,0,0,9}(x), \end{multline*} $(u_1,\dots,u_7) \in\R^7$ does coincide with the range of $$ f_0(x) +w_1f_{1,0,0,0,0,0,0}(x) + w_2 f_{1,0,5,0,0,0,0}(x) +w_3 f_{0,3,0,0,0, 3,0}(x) +w_4 f_{1,0,5,0,0,0,9}(x), $$ $(w_1,w_2,w_3,w_4)\in\R^4.$ \vskip0.3truecm When the system is not near-control-affine (and $U=\RR^m$), one can try to exploit {\it weak subsystems}: the latter are selections of the set-valued function $x\mapsto co~f(x,\R^m)$. In particular, we consider the \emph{maximal degree subsystem} and, for any $\lambda$ in the $m$-dimensional simplex, the \emph{$\lambda$-diagonal subsystems} (see Definition \ref{diagdef} and Subsection \ref{secmaximal}, respectively). The idea of utilizing {\it subsystems} might look counterproductive with respect to the task of finding a $\p$-Minimum Restraint Function: indeed, for such a purpose, having a sufficiently large supply of available directions plays a crucial role. However, from a practical perspective, a diminished complexity in the dynamics might ease the guess of a $\p$-Minimum Restraint Function, which would automatically be a $\p$-Minimum Restraint Function for the original polynomial problem. To give the flavour of this viewpoint, let us anticipate a result (see Theorem \ref{maximalth} for details) concerning maximal degree subsystems. \begin{theorem}\label{thselectionintro} Let the growth assumption specified in {Hypothesis} {\bf A$_{max}$} below (Section \ref{secmaximal}) be verified.
If $W$ is a $\p$-Minimum Restraint Function for the {\rm maximal degree subsystem} $$f^{max}(x,u):=f_0(x)+\sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d} u_1^{\alpha_1}\cdots u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x),$$ then $W$ is also a $\p$-Minimum Restraint Function for the original control-polynomial system $$\displaystyle f (x,u):= f_0(x)+\sum_{i=1}^d\left(\sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=i} u_1^{\alpha_1}\cdots u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\right)\,.$$ \end{theorem} \vskip0.3truecm The paper is organized as follows. In the remaining part of the present section we provide some preliminary definitions and notation. In Section \ref{generalsec} we prove Theorem \ref{stimaenergiaintro} and exhibit a $\p$-Minimum Restraint Function for the gyroscope (see Subsection \ref{gyroex}). Section \ref{rescsec} is entirely devoted to the proof of Theorem \ref{nuovo}, which deals with a suitably rescaled problem. In Section \ref{balsect} we focus on the case when the system is polynomial in the control variable. An Appendix with a technical proof concludes the paper. \vskip0.3truecm \subsection{Preliminary concepts and notation}\label{subpb} $\,$ \vskip0.3truecm Let us gather some notational conventions as well as some basic concepts and results which will be used throughout the paper. \vv We are given an open set $\Omega\subset\RR^n$ and a {\it target }$\mathbf C\subset \Omega$, which we assume to have compact boundary $\partial\mathbf C$. For brevity, we use the notation $\d(x)$ in place of $\d(x,\mathbf C)$. \begin{definition}\label{xadm} We say that a path $x:[0,T_x[\to\Omega$ is {\rm admissible} if \begin{itemize} \item[i)] $0< T_x\leq+\infty$, \item[ii)] $x\in AC_{loc}([0,T_x[,\Omega)$, \item[iii)] $x([0,T_x[)\subset \Omega\backslash \mathbf C$, \item[iv)] $\displaystyle \lim_{t\to T^-_{ x}} {\bf d}(x(t))=0$. \end{itemize} We call $T_x$ the {\rm exit time of $x$ from $\Omega\setminus \mathbf C$}.
\end{definition} Notice that the limit of $x(\cdot)$ as $t\to T_x^-$ need not exist, even when $T_x<+\infty$. Of course, if the limit exists, then it belongs to the target $\mathbf C$. \begin{definition}\label{adm} Let $g:\Omega\times U\to\R^n$ be a continuous function. For every $z\in\Omega\setminus\mathbf C$, we will say that $(x,u)$ is an {\rm admissible trajectory-control pair} from $z$ for the control system \begin{equation}\label{Egen} \dot x=g(x,u), \quad x(0)=z \end{equation} if \begin{itemize} \item[i)] $x:[0,T_x[\to \Omega\setminus\mathbf C$ is an admissible path, \item[ii)] $u(\cdot)\in L^\infty_{loc}([0,T_x[,U)$, \item[iii)] $x(\cdot)$ is a Carath\'eodory solution\footnote{Notice that such a solution might not be unique.} of {\rm\eqref{Egen}} corresponding to the input $u$. \end{itemize} We shall use ${\mathcal{ A}}_g({ z})$ to denote the family of admissible trajectory-control pairs from $z$ for the control system {\rm\eqref{Egen}}. \end{definition} \vskip0.3truecm As customary, we shall use ${\mathcal KL}$ to denote the set of all continuous functions $$\beta:[0,+\infty[\times[0,+\infty[\to[0,+\infty[$$ such that: (1)\, $\beta(0,t)=0$ and $\beta(\cdot,t)$ is strictly increasing and unbounded for each $t\ge0$; (2)\, $\beta(r,\cdot)$ is decreasing for each $r\ge0$; (3)\, $\beta(r,t)\to0$ as $t\to+\infty$ for each $r\ge0$.
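A standard concrete instance of a class-${\mathcal KL}$ function is $\beta(r,t)=r\,e^{-t}$. The short Python check below (purely illustrative, not part of the development) verifies properties (1)-(3) at sample points.

```python
import math

# beta(r, t) = r * exp(-t) is a class-KL function:
# (1) beta(0, t) = 0 and r -> beta(r, t) is strictly increasing, unbounded;
# (2) t -> beta(r, t) is decreasing;
# (3) beta(r, t) -> 0 as t -> +infinity for each fixed r.
beta = lambda r, t: r * math.exp(-t)

assert beta(0.0, 3.0) == 0.0                 # property (1), vanishing at r = 0
assert beta(1.0, 2.0) < beta(2.0, 2.0)       # property (1), increasing in r
assert beta(5.0, 1.0) > beta(5.0, 4.0)       # property (2), decreasing in t
assert beta(7.0, 50.0) < 1e-20               # property (3), vanishing in t
```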
\begin{definition}\label{(GAC)} The system (\ref{Egen}) is {\em globally asymptotically controllable to $\mathbf C$} -- shortly, (\ref{Egen}) is {\em GAC to $\mathbf C$} -- provided there is a function $\beta\in{\mathcal KL}$ such that, for each initial state $z\in\Omega\setminus \mathbf C$, there exists an admissible trajectory-control pair $(x,u)\in {\mathcal{ A}}_g({ z})$ that verifies \begin{equation}\label{bbound} {\bf d}(x(t))\le \beta({\bf d}(z),t) \qquad \forall t\in[0,+\infty[.\,\, \footnote{ By convention, we fix an arbitrary $\bar z\in\partial\mathbf C$ and formally establish that, if $T_{x}<+\infty$, the trajectory $x(\cdot)$ is prolonged to $[0,+\infty[$ by setting $x(t)=\bar z$ for all $t\geq T_{x}$.} \end{equation} \end{definition} \vv \begin{definition}[Positive definite and proper functions] Let ${\bf E}$, $\Theta\subset \RR^n$ be, respectively, a closed and an open set with ${\bf E}\subset \Theta$, and let $F:\Theta\setminus\overset{\circ}{\bf E}\to\R$ be a continuous function. Then $F$ is {\em positive definite on $\Theta\setminus {\bf E}$} if $F(x)>0$ for all $ x\in\Theta\setminus {\bf E}$ and $F(x)=0$ for all $ x\in\partial{\bf E}$. \\ The function $F$ is called {\em proper on $\Theta\setminus {\bf E}$} if the pre-image $F^{-1}(K)$ of any compact set $K\subset[0,+\infty)$ is compact. \end{definition} \begin{definition}[Semiconcave functions]\label{sconc} Let $\Theta\subseteq\R^n$. A continuous function $F:\Theta\to\R$ is said to be {\rm semiconcave on $\Theta$} if there exists $\rho\ge0$ such that $$ F(z_1)+F(z_2)-2F\left(\frac{z_1+z_2}{2}\right)\le \rho|z_1-z_2|^2 $$ for all $z_1$, $z_2\in \Theta$ such that $[z_1,z_2]\subseteq\Theta$. $F$ is said to be {\rm locally semiconcave on $\Theta$} if it is semiconcave on every compact subset of $\Theta$. \end{definition} We remind the reader that locally semiconcave functions are locally Lipschitz continuous.
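For a $C^2$ function whose second derivative is bounded by $M$, a Taylor expansion around the midpoint shows that the inequality of Definition \ref{sconc} holds with $\rho=M/4$. The following numeric spot check (illustrative Python, with the example $F=\sin$, so $M=1$ and $\rho=1/4$) confirms this over random pairs.

```python
import numpy as np

# Midpoint semiconcavity estimate for F = sin with rho = 1/4:
# F(z1) + F(z2) - 2 F((z1+z2)/2) <= rho * |z1 - z2|**2.
rng = np.random.default_rng(2)
rho = 0.25
z1 = rng.uniform(-10.0, 10.0, size=10_000)
z2 = rng.uniform(-10.0, 10.0, size=10_000)
excess = (np.sin(z1) + np.sin(z2) - 2.0 * np.sin(0.5 * (z1 + z2))
          - rho * (z1 - z2) ** 2)
max_excess = float(excess.max())  # should be <= 0 (up to rounding)
```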
\begin{definition}[{\rm Limiting gradient}]\label{D*} Let $\Theta\subset\R^n$ be an open set and let $F:\Theta\to\R$ be a locally Lipschitz function. For every $x\in \Theta$ we set $$ D^*{F}(x) := \Big\{ w\in\R^n\mid \ \ w=\lim_{k}\nabla {F}(x_k), \ \ x_k\in DIFF(F)\setminus\{x\}, \ \ \lim_k x_k=x\Big\} $$ where $\nabla$ denotes the classical gradient operator and $DIFF(F)$ is the set of differentiability points of $F$. $D^*{F}(x)$ is called the \emph{set of limiting gradients} of $F$ at $x$. \end{definition} \begin{remark} {\rm The set-valued map $x\mapsto D^*F(x)$ is upper semicontinuous on $\Theta$, with non-empty, compact values. Notice that $D^*{F}(x)$ need not be convex. When $F$ is a locally semiconcave function, $D^*{F}$ coincides with the limiting subdifferential $\partial_LF$, namely, $$ D^*F(x)=\partial_LF(x) := \{\lim \, p_i: \ p_i\in \partial_PF(x_i), \ \lim\, x_i=x\} \quad \forall x\in\Theta, $$ where $\partial_PF$ denotes the proximal subdifferential, largely used in the literature on Lyapunov functions. } \end{remark} Basic properties of semiconcave functions imply the following fact: \begin{lemma}\label{Lscv} Let $\Theta\subset\R^n$ be an open set and let $F:\Theta \to\R$ be a locally semiconcave function. Then for any compact set $\mathcal{K}\subset \Theta$ there exist positive constants $L$ and $\rho$ such that, for any $x\in \mathcal{K}$ \footnote{The inequality (\ref{scvintro}) is usually formulated with the proximal superdifferential $\partial^P F$. However, this does not make a difference here since $\partial^P F=\partial_C F=co D^* F$ as soon as $F$ is locally semiconcave. Hence (\ref{scvintro}) is true in particular for $D^*F$.
}, \begin{equation}\label{scvintro} \begin{array}{l} F(\hat x)-F(x)\le \langle p,\hat x-x \rangle+\rho|\hat x-x|^2, \\ \, \\ |p|\le L \quad \forall p\in D^*F(x), \end{array} \end{equation} for any point $\hat x\in\mathcal{K}$ such that $[x,\hat x]\subset \mathcal{K}$. \end{lemma} \section{$\p$-Minimum restraint functions}\label{generalsec} \subsection{The main result} $\,$ \vskip0.3truecm Let us begin with a precise formulation of the minimum problem. For every initial condition $z\in\Omega\setminus\mathbf C$, we consider the control system \begin{equation}\label{E} \dot x=f(x,u), \quad x(0)=z, \end{equation} and, for any admissible trajectory-control pair $(x,u)\in{\mathcal A}_f(z)$ (see Definition \ref{adm}), let us introduce the payoff \begin{equation}\label{1.2} \I(x,u):= \int_0^{T_x} l(x(t),u(t))\, dt \qquad (T_x\in]0,+\infty]). \end{equation} The corresponding {\it value function} is given by \begin{equation}\label{minprob} V(z):= \inf_{(x,u)\in{\mathcal{ A}}_f({z})}\I( x, u ) \quad(\le+\infty). \end{equation} \vskip0.3truecm Recall our principal hypothesis: \vskip0.3truecm {\bf Hypothesis A}: {\it For every compact subset ${\mathcal K}\subset\Omega\backslash\mathbf C$ the function \begin{equation}\label{risc} (\bar l,\bar{\F})(x,u) := \frac{(l,{\F})}{1 +|(l,\F)(x,u)|} (x,u) \end{equation} is uniformly continuous on ${\mathcal K}\times U$.} \vskip0.3truecm \begin{remark}\label{A'} {\rm As observed in the Introduction, this hypothesis allows for a wide set of {\it unbounded} dynamics and running costs. Furthermore, it is easy to check that the following condition {\it is sufficient for Hypothesis {\bf A}} to hold true: $\,$ \begin{itemize} \item[]{\it The map ~$(l,\F)$ is continuous with respect to the state variable $x$ and locally Lipschitz with respect to the control variable $u$, and $$ \left|\frac{D_u(l,\F)}{(1+|(l,\F)|)^2}\right|(x,u) \leq \eta(x) \qquad \text{ for a.e.
} (x,u) \in (\Omega\backslash\mathbf C)\times U, $$ for some continuous function $\eta:~\Omega\backslash\mathbf C\to~[0,+\infty[ $.} \end{itemize}} \end{remark} \vv \vskip0.3truecm Let us extend the definition of $\p$-Minimum Restraint Function (\cite{MR13}) to the case of unbounded control sets. \begin{definition}\label{defMRF} Let $W:\Omega\setminus\overset{\circ}{\mathbf C}\to[0,+\infty[$ be a continuous function, and let us assume that $W$ is locally semiconcave, positive definite, and proper on $\Omega\setminus\mathbf C$. Given $\p\ge0$, we say that $W$ is a \emph{$p_0$-Minimum Restraint Function --in short, $\p$-MRF-- for $( l, \F,\mathbf C)$ in $\Omega$} if \begin{equation}\label{MRH1} H_{l,\F}(x,\p, D^*W(x) )<0 \quad \forall x\in {{\Omega}\setminus\mathbf C} \quad\footnote{This means that $H_{l,\F}(x,\p, p )<0$ for every $p\in D^*W(x)$.} \end{equation} and, moreover, there exists $W_0\in [0,+\infty]$ such that $$W(\Omega\setminus {\mathbf C})< W_0\quad \text{and} \quad \lim_{x\to x_0,\ x\in \Omega} W(x)=W_0$$ for every $x_0\in\partial\Omega$. \end{definition} \vskip0.3truecm We can now state our main result: \vskip0.2truecm \noindent{\bf Theorem 1.1.}\label{MRFth} {\it Assume Hypothesis {\bf A} and let $W$ be a $\p$-Minimum Restraint Function for the problem $(l,f,\mathbf C)$, for some $\p\ge0$. Then: \begin{itemize} \item[{\bf (i)}] system {\rm \eqref{E}} is globally asymptotically controllable to $\mathbf C$; \item[{\bf (ii)}] if $\p>0$, then \begin{equation}\label{Wprop} {{V}}(z)\le \frac{W(z)}{{{\p}}}\, \qquad\forall z\in \Omega\setminus \mathbf C. \end{equation} \end{itemize}} \vv \begin{proof} We begin with a state-based rescaling procedure.
Precisely, we consider the optimal control problem \begin{equation}\label{peoblemrep} \begin{array}{l} \displaystyle y'(s) = \bar \F(y,v) \quad y(0) = z;\\\,\\ \displaystyle \I_{\bar l,\bar\F}(y,v) := \int_0^{S_y}\bar l(y(s),v(s)) ds, \qquad \bar V(z) :=\displaystyle\inf_{(y,v)\in {\mathcal{ A}}_{\bar f}({z})} \I_{\bar l,\bar\F}(y,v), \end{array}\end{equation} where $\bar l$, $\bar{\F}$ are defined in \eqref{risc}, the prime denotes differentiation with respect to the parameter $s$, and $S_y\le+\infty$ is the exit time of the admissible trajectory $y(\cdot)$ (in the time parameter $s$). The connection between the original optimal control problem and the rescaled one is established by the following result. \vskip0.2truecm \begin{Claim}\label{cres} The pair $(y,v)$ is an admissible trajectory-control pair for {\rm \eqref{peoblemrep}} if and only if, setting \begin{align*} &t(s):=\int_0^s (1+|(l,\F)(y(\eta),v(\eta))|)^{-1} d\eta \qquad\forall s\in [0,S_{y }[\\ &x(t):= y \circ s(t)\qquad u(t):= v\circ s(t)\qquad\ \forall t\in[0,T_x[, \quad T_x:=t(S_{y }), \end{align*} the pair $(x,u)$ is an admissible trajectory-control pair for {\rm \eqref{E}}--{\rm \eqref{minprob}}. Furthermore, $$ \displaystyle\int_0^{S_{y }} \bar l(y(s),v(s)) ds = \displaystyle\int_0^{T_x} l(x(t),u(t)) dt. $$ In particular, one has $$V(z)=\bar V(z)$$ for all $z\in \Omega\setminus\mathbf C$. \end{Claim} \vskip0.2truecm Indeed, since $t=t(s)$ is absolutely continuous and $t'(s)>0$ almost everywhere, the inverse map $s(\cdot)= t^{-1}(\cdot)$ is absolutely continuous (see e.g. \cite[Theorem 4, page 253]{Nat57} or, for a more general statement, \cite[Theorem 2.10.13, page 177]{Fed69}). In particular, $x=y\circ s$ is absolutely continuous, and $u= v\circ s$ turns out to be Borel measurable as well. Hence the claim follows by a standard application of the chain rule\footnote{ Notice that the solutions to $\dot x=f$ or $\dot y =\bar\F$ are not necessarily unique.}.
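The equality of the two cost integrals in the Claim comes from $dt = ds/(1+|(l,\F)|)$, so that $\bar l\, ds = l\, dt$ pointwise. It can be illustrated on a toy case of our own choosing with constant data ($l=u^2$, $f=u$, constant control $u=2$, hypothetical exit time $T=3$), where $t(s)=s/(1+|(l,f)|)$ is linear:

```python
import math

# Constant-data check of the rescaling identity int l-bar ds = int l dt.
u = 2.0
l_val, f_val = u**2, u                      # l = u**2 = 4, f = u = 2
norm = math.hypot(l_val, f_val)             # |(l, f)| = sqrt(20)
T = 3.0                                     # exit time in original time t
S = T * (1.0 + norm)                        # exit time in rescaled time s
cost_original = l_val * T                   # int_0^T  l  dt
cost_rescaled = (l_val / (1.0 + norm)) * S  # int_0^S  l-bar  ds
# the two running costs coincide, as stated in the Claim
```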
\vskip0.2truecm The Hamiltonian $H_{\bar l,\bar \F}$ associated to $\bar l$, $\bar \F$, $$ H_{\bar l,\bar \F}(x,p_0,p):= \inf_{u\in U}\Big\{ \langle p ,\bar \F(x,u) \rangle+p_0\,\bar l(x,u)\Big\} $$ for all $(x,p_0,p)\in (\Omega\backslash\mathbf C)\times \R^{1+n},$ is continuous and sublinear in $(p_0,p)$, uniformly with respect to $x$. Furthermore, it is also trivial to check that, for every $(x,p_0,p)\in(\Omega\setminus\mathbf C)\times\RR^{1+n}$, \begin{equation}\label{HH} H_{\bar l,\bar{\F}}(x,p_0,p)<0 \quad\iff\quad H_{l,\F }(x,p_0,p)< 0. \end{equation} In particular, for every $\p\ge0$ $W$ is a $\p$-MRF for $(l,f,\mathbf C)$ if and only if $W$ is a $\p$-MRF for $(\bar l,\bar{f},\mathbf C)$. Moreover, because of Hypothesis {\bf A}, the problem $(\bar l,\bar\F,\mathbf C)$ meets the hypotheses of Theorem \ref{nuovo} below. Therefore: \vskip0.2truecm {\it \begin{itemize} \item[(i)] if there exists a $\p$-MRF $W$ for $(l,f,\mathbf C)$, then the rescaled system in {\rm \eqref{peoblemrep}} is GAC to $\mathbf C$, i.e. there exists a function $\beta\in{\mathcal K\mathcal L}$ such that for any $z\in\Omega\setminus\mathbf C$ there is an admissible trajectory-control pair $(y,v)\in{\mathcal A}_{\bar f}(z)$ that verifies \begin{equation}\label{bbound1} {\bf d}(y(s))\le \beta({\bf d}(z),s) \qquad \forall s\in[0,+\infty[; \end{equation} \item[(ii)] moreover, if $\p>0$, then \begin{equation}\label{Wprop1} {{\bar V}}(z)\le \frac{W(z)}{{{\p}}}. \end{equation} \end{itemize}} \vskip0.2truecm If $x(\cdot)$ is the trajectory defined in Claim \ref{cres}, one then obtains \begin{equation}\label{bbound2} {\bf d}(x(t))\le\beta({\bf d}(z),s(t)) \qquad \forall t\in[0,+\infty[ \end{equation} and, if $\p>0$, \begin{equation}\label{Wprop2} {{V}}(z)\le \frac{W(z)}{{{\p}}}\, \qquad\forall z\in \Omega\setminus \mathbf C. \end{equation} Notice that $t(s)\leq s$ for all $s$, so that $t\leq s(t)$ for all $t$. 
Since the map $\beta({\bf d}(z),\cdot)$ is decreasing, one gets $$ \beta({\bf d}(z),s(t))\leq \beta({\bf d}(z),t)$$ for all $t$. It follows by \eqref{bbound1} that \begin{equation}\label{bbound3} {\bf d}(x(t))\le \beta({\bf d}(z),t) \qquad \forall t\in[0,+\infty[, \end{equation} so the theorem is proved. \end{proof} We conclude this section with an application of Theorem \ref{stimaenergiaintro} to Mechanics. \vskip0.3truecm \subsection{The gyroscope: controlling the nutation through precession and spin}\label{gyroex} $\,$ \vskip0.3truecm A gyroscope can be represented as a mechanism composed of a rotor --in our setting a spinning disk-- and two gimbals. The spin axis of the rotor is fixed to the inner gimbal, whose spin axis is in turn fixed to the outer gimbal (see Figure \ref{gyro2}). \begin{figure} \caption{The gyroscope: rotor, gimbals, and the Euler angles parameterizing its motion.}\label{gyro2} \end{figure} Besides an inertial reference frame $OXYZ$ we consider a reference frame $oxyz$ fixed to the rotor. In particular, we choose the latter frame so that the centre of mass of the rotor has coordinates $(0,0,z_G)$. The motion of the rotor can be parametrized by Euler angles as depicted in Figure \ref{gyro2}: the outer gimbal's position is represented by the \emph{precession} angle $\phi$, the inner gimbal's position is given by the \emph{nutation} angle $\theta$, and the rotor's position is measured by the \emph{spin} angle $\psi$. The kinetic energy (in the inertial frame) is then given by $$ {\mathcal T}=\frac{1}{2}I_0(\dot \phi^2\sin^2 \theta+ \dot \theta^2)+\frac{1}{2} I(\dot \phi \cos\theta + \dot\psi)^2, $$ where $I_0$ is the moment of inertia of the rotor with respect to any axis through $o$ and orthogonal to $z$ \footnote{All these moments coincide because of the symmetry of the rotor.} and $I$ is the moment of inertia of the rotor about its spin axis $oz$. We have tacitly assumed that the rotor's mass $M$ is the only non-negligible mass of the system. For simplicity, we also suppose $I_0=I$.
If $g$ denotes the gravitational acceleration, the potential energy $\mathcal V $ is given by $$ \mathcal V(\theta):=M gz_G \cos\theta \quad \forall \theta\in[-\pi/2,\pi/2]. $$ We will regard the precession velocity $\dot \phi$ and the spin velocity $\dot \psi$ as {\it controls} belonging to $U=\RR^2$. Considering the predetermination of $\phi(\cdot) $ and $\psi(\cdot)$ as a holonomic constraint, we assume the classical D'Alembert hypothesis (see \cite{aldobressan}). The resulting control mechanical system is \begin{equation}\label{gyrosys} \begin{cases} \displaystyle{\dot\theta=\frac{1}{I}\pi_{\theta}}\\ \displaystyle{\dot \pi_{\theta}=Mgz_G\sin\theta-I\sin\theta \dot\phi\dot\psi}, \end{cases} \end{equation} where $\pi_\theta$ is the conjugate momentum $\pi_\theta:=\frac{\partial ({\mathcal T}-\mathcal V)}{\partial\dot\theta}=I\,\dot\theta$. If we set $u:=(\dot \phi,\dot \psi)$, $x=(x_1,x_2)^{tr} :=(\theta,\pi_\theta)$, $f_0(x)=(I^{-1}x_2,Mgz_G\sin x_1)^{tr}$, and $f_{11}(x)=(0,-I\sin x_1)^{tr}$ we obtain the control-quadratic control system \begin{equation}\label{balgyro} \dot x=f(x,u) :=f_0(x)+u_1u_2 f_{11}(x) , \end{equation} with $(u_1,u_2)\in\RR^2$. The state space of the control system \eqref{balgyro} is the open set $\Omega=]-\pi/2,\pi/2[\times \R$ and we choose $\mathbf C=\{(0,0)\}$ as a target and $l(x_1,x_2)=x_2^2$ as a running cost. \vskip0.4truecm Let us set $$W(x_1,x_2):=W_1(x_1,x_2)(2-|W_2(x_1,x_2)|),$$ where \begin{align*} &W_1(x_1,x_2):=\tan^2x_1+x_2^2,\\ &W_2(x_1,x_2):=\begin{cases} \sin\left(2\text{arctan}\left(\frac{-\tan x_1+\sqrt{3} x_2}{\sqrt{3}\tan x_1+x_2}\right)\right)&\text{ if } x_2\not=-\sqrt{3}\tan x_1\\ 0&\text{ otherwise }. \end{cases} \end{align*} With some computation, one proves that \begin{Claim}\label{claimgyro} For any $\p< \min\{1/I,8\sqrt{3}/3\}$, the function $W$ is a $\p$-MRF for the problem $(l,f,\mathbf C)$.
\end{Claim} Therefore, by Theorem \ref{stimaenergiaintro} we can conclude that the control system for the nutation $\theta$ and its conjugate momentum $\pi_\theta$ is \emph{GAC} to the origin. In addition, the optimal value $V$ of the minimum problem with running cost equal to $\pi_\theta^2$ \ $(=I^2\dot\theta^2)$ verifies $$V(\bar \theta, \bar\pi_\theta)\leq \frac{W(\bar\theta, \bar\pi_\theta)}{\p} $$ for all initial data $(\bar \theta, \bar\pi_\theta)$ and $\p<\min \{1/I,8\sqrt{3}/3\}$. Notice that, as might be expected, the larger the moment of inertia $I$, the larger the bound provided for $V$. \vskip0.3truecm \section{The rescaled problem}\label{rescsec} The main step of the proof of Theorem \ref{stimaenergiaintro} is based on Theorem \ref{nuovo} below, which concerns GAC and optimization for a cost-dynamics pair $({\bf l},{\mathbf{f}})$ verifying the following boundedness and uniform continuity hypothesis: \vskip0.3truecm {\bf Hypothesis A$_{UC}$}\emph{ The map $({\bf l},{\mathbf{f}})$ is continuous on $( \Omega \backslash \mathbf C)\times U$ and, for every compact subset ${\mathcal K}\subset \Omega \backslash \mathbf C$, it is bounded and uniformly continuous on ${\mathcal K}\times U$.} \vskip0.3truecm We point out that the control set $U$ is still allowed to be unbounded. \vskip0.3truecm Let us consider the exit time optimal control problem \begin{equation}\label{sis1} y' = {\mathbf{f}}(y,v), \quad y(0) = z, \end{equation} \begin{equation}\label{cost1} {\bf V} (z) :=\inf_{(y,v)\in \mathcal{ A}_{{\bf f}}(z) }\int_0^{T_y}\mathbf{l}(y(t),v(t)) dt. \end{equation} \begin{theorem}\label{nuovo} Let us assume Hypothesis {\rm \bf A$_{UC}$}, and let $W$ be a $\p$-Minimum Restraint Function for the problem $({\bf l},{\mathbf{f}},\mathbf C)$.
Then: \begin{itemize} \item[{\bf (i)}] system {\rm \eqref{sis1}} is GAC to $\mathbf C$; \item[{\bf (ii)}] moreover, if $\p>0$, \begin{equation}\label{Wprop} {\bf V} (z) \leq \frac{W(z)}{{{\p}}}\, \qquad\forall z\in \Omega\setminus \mathbf C. \end{equation} \end{itemize} \end{theorem} \vskip0.3truecm \subsection{Preliminary results} $\,$ \vskip0.3truecm The proof of Theorem \ref{nuovo} relies on Propositions \ref{claim1}, \ref{cB}, and \ref{claim2bis} below. Hypothesis {\bf A$_{UC}$} is used throughout the whole subsection. \begin{proposition}\label{claim1} For every $\sigma>0$ there exists a continuous, increasing map $\gamma:]0,2\sigma]\to ]0,+\infty[$ such that, for every $r\in]0,2\sigma]$, \begin{equation}\label{c0} H_{{\bf l},{\mathbf{f}}} (x,\p,p) <-\gamma(r) \qquad \forall x\in W^{-1}([r,2\sigma])\, \text{ and } \, p\in D^*W(x). \end{equation} \end{proposition} This result is a consequence of the upper semicontinuity of the set-valued map $x\mapsto D^*W(x)$ together with the continuity of the map $(x,p)\mapsto H_{{\bf l},{\mathbf{f}}}(x,\p,p)$, when the latter is restricted to the sets $W^{-1}([r,2\sigma])\times \R^n$ (for the details, see \cite[Proposition 3.1]{MR13}). \vv \begin{proposition}\label{cB} For a given $\sigma>0$, let $\gamma(\cdot)$ be a map as in Proposition \ref{claim1}. Then there exists a continuous, decreasing function $N: ]0,2\sigma]\to ]0,+\infty[$ such that, setting $$ H_{{\bf l},{\mathbf{f}}, N(r)}(x,p_0,p):=\min_{u\in U\cap B(0,N(r))}\Big\{\langle p, {\mathbf{f}}(x,u)\rangle+p_0\, {\bf l}(x,u) \Big\} \quad \forall r\in ]0,2\sigma], $$ we get \begin{equation}\label{c2'} H_{{\bf l},{\mathbf{f}}, N(W(x))} (x,\p,p)< -\gamma(W(x)) \qquad \forall x\in W^{-1}(]0,2\sigma])\, \text{ and } \, p\in D^*W(x). \end{equation} \end{proposition} \begin{proof} Given $r\in]0,2\sigma]$, let us first show that there exists some $N(r)$ such that \begin{equation}\label{c0'} H_{{\bf l},{\mathbf{f}}, N(r)} (x,\p,p)<-\gamma(r)<0 \qquad \forall x\in W^{-1}([r,2\sigma])\, \text{ and } \, p\in D^*W(x).
\end{equation} Assume by contradiction that for any integer $k$ there is some pair $(x_k,p_k)$ with $x_k\in W^{-1}([r,2\sigma])$ and $p_k\in D^*W(x_k)$ such that \begin{equation}\label{fk} \Big( u \in U: \quad \langle p_k,{\mathbf{f}}(x_k,u )\rangle+\p {\mathbf{l}}(x_k,u ) <-\gamma(r)<0\Big) \ \Longrightarrow \ |u|>k \end{equation} (by Proposition \ref{claim1}, controls verifying the inequality surely exist). Because of the compactness of $W^{-1}([r,2\sigma])$ and of the upper semicontinuity of the set-valued map $D^*W(\cdot)$, there is a subsequence, which we still denote $(x_k,p_k)$, converging to some $(\bar x,\bar p)$ such that $\bar x\in W^{-1}([r,2\sigma])$ and $\bar p\in D^*W(\bar x)$. Since $W$ verifies (\ref{c0}), there is some $\bar u\in U$ such that $$ \alpha:=\langle \bar p, {\mathbf{f}}(\bar x, \bar u)\rangle+\p{\mathbf{l}}(\bar x, \bar u)<-\gamma(r)<0. $$ Thus, the uniform continuity of the maps ${\bf l}$, ${\mathbf{f}}$ on $W^{-1}([r,2\sigma])\times U$ implies that $$ \langle p_k, {\mathbf{f}}(x_k,\bar u)\rangle+\p {\mathbf{l}}(x_k,\bar u)+\gamma(r)<\frac{\alpha+\gamma(r)}{2}<0 \qquad \forall k\ge \bar k, $$ for some integer $\bar k$; this contradicts \eqref{fk} as soon as $k\ge\bar k$ and $k>|\bar u|$. \noindent Moreover, for every $r_1,r_2\in ]0,2\sigma]$, $r_1< r_2$, one clearly has $N(r_1)\ge N(r_2)$ and, enlarging $N(r)$ if necessary, one can assume the map $r\mapsto N(r)$ to be continuous. Therefore, for any $x\in W^{-1}(]0,2\sigma])$, the thesis (\ref{c2'}) follows from (\ref{c0'}) by choosing $r= W(x)$. \end{proof} \vv Let us introduce the following definition, useful in the sequel. \begin{definition}\label{FDBK} Let $\sigma>0$ and fix a selection $p(x)\in D^*W(x)$ for any $x\in W^{-1}(]0,2\sigma])$. Let $\gamma(\cdot)$, $N(\cdot)$ be the same as in Proposition \ref{cB}.
We call a {\rm feedback} on $W^{-1}(]0,2\sigma])$ a map $$ x\mapsto {\bf u}(x)\in U\cap B(0,N(W(x))) $$ verifying \begin{equation}\label{feed1} \langle p(x), {\mathbf{f}}(x,{\bf u}(x))\rangle+\p {\mathbf{l}}(x,{\bf u}(x)) <-\gamma(W(x)) \end{equation} for every $x\in W^{-1}(]0,2\sigma])$. \end{definition} Moreover, for any $\mu>0$ and any continuous path $\tilde y:[\tau, +\infty[\to\R^n$ such that $W(\tilde y(\tau))>\mu$, we define the time to reach the enlarged target $W^{-1}([0,\mu])$ as \begin{equation}\label{Tz} {\mathcal T}_{\tilde y}^\mu\, :=\inf\{r\ge\tau: \ W(\tilde y(r))\le \mu\} \end{equation} (in particular, ${\mathcal T}_{\tilde y}^\mu=+\infty$ if $W(\tilde y(r))> \mu$ for all $r\ge\tau$). \vv \begin{proposition}\label{claim2bis} Fix $\sigma\in]0,W_0[$, and let $\gamma(\cdot)$, $N(\cdot)$ be as in Propositions \ref{claim1}, \ref{cB}. Moreover, let $\varepsilon$, $\bar\mu$, $\hat\mu$ verify $\varepsilon>0$ and $0<\hat\mu<\bar\mu\le\sigma$. Then there exists some $\delta>0$ such that, for every partition $\pi=(t^j)$ of $[0,+\infty[$ with diam$(\pi)\le\delta$\footnote{ A {\it partition} of $[0,+\infty[$ is a sequence $\pi=(t^j) $ such that $t^0=0, \quad t^{j-1}<t^j$ \, $\forall j\ge 1$, and $\lim_{j\to+\infty}t^j=+\infty$. The number diam$(\pi)\doteq\sup(t^{j }-t^{j-1})$ is called the {\rm diameter} of the sequence $\pi$.} and for each $x\in\Omega\setminus\mathbf C$ satisfying $W(x)=\bar\mu$, there are a piecewise constant control $v:[0,\hat t]\to U\cap B(0,N(\hat\mu))$ and a solution $y:[0,\hat t]\to W^{-1}([\hat\mu,\bar\mu])$ to the Cauchy problem $$ y'= {\mathbf{f}}(y,v), \qquad y(0)=x, $$ enjoying the following properties: \begin{itemize} \item[\bf (a)] $\hat t:={\mathcal T}^{\hat\mu}_y<+\infty$ and $\bar n:=\sup\{j\ge1: t^{j-1}< {\mathcal T}^{\hat\mu}_y\}<+\infty$.
\item[\bf (b)] for every $t\in[0,\hat t[$ and $j\ge1$ such that $t\in[t^{j-1}, t^j[$, \begin{equation}\label{dainserire} W(y(t))-W(y(t^{j-1}))+\p \int_{t^{j-1}}^t \mathbf{l}(y(\tau),v(\tau))\,d\tau \le -\frac{\gamma(W(y(t^{j-1}))) }{\varepsilon+1}(t -t^{j-1}). \end{equation} \end{itemize} \end{proposition} \begin{proof} Let $p(\cdot)$ be a selection of $D^*W$ on $W^{-1}({[{\hat\mu/4},{2\sigma}]})$ and let us consider a feedback ${\bf u}$ as in Definition \ref{FDBK}. Let $M$ denote the sup-norm of ${\mathbf{f}}$ on $W^{-1}({[{\hat\mu/4},{2\sigma}]})\times U$, and let $\omega_{{\bf l}}(\cdot)$ be the modulus of continuity of ${\bf l}$ on $W^{-1}({[{\hat\mu/4},{2\sigma}]})\times U$. By the local semiconcavity and the properness of $W$, Lemma \ref{Lscv} implies that there exist $\rho$, $L>0$ such that, for any $x$ belonging to the compact set $W^{-1}({[{\hat\mu/4},{2\sigma}]})$, one has \footnote{The inequality (\ref{scv}) is usually formulated with the proximal superdifferential $\partial^P F$ instead of $\partial_C F$. However, this does not make a difference here since $\partial^P F=\partial_C F$ as soon as $F$ is locally semiconcave.} \begin{equation}\label{scv} W(\hat x)-W(x)\le \langle p,\hat x-x \rangle+\rho\,|\hat x-x|^2 \qquad \forall p\in D^*W(x), \end{equation} for every $\hat x$ such that the segment $[x,\hat x]\subset W^{-1}({[{\hat\mu/4},{2\sigma}]})$, and \begin{equation}\label{Lip} |p|\le L \qquad \forall p\in D^*W(x). \end{equation} Let $\psi:\R^n\to[0,1]$ be a $C^{\infty}$ (cut-off) map such that \begin{equation}\label{psi} \psi = 1 \quad\hbox{on}\quad W^{-1}([{\hat\mu/2}, \sigma]) , \qquad \psi = 0 \quad\hbox{on}\,\,\, \R^n\backslash W^{-1}([{\hat\mu/4}, {2\sigma}])\,. \end{equation} Let $\omega$ denote the modulus of continuity of the product $(\psi\,{\mathbf{f}})$ on $\R^n\times U$. 
\noindent We set \begin{equation}\label{E1} \delta:=\min\left\{ \frac{\hat\mu}{2LM}, \delta_2\right\}, \end{equation} where $\delta_2>0$ verifies \begin{equation}\label{E2} \frac{L\,\omega \left( M\, \delta_2\right) + \rho\, M^2 \, \delta_2 +\p\,\omega_{{\bf l} } \left( M\,\delta_2\right)}{\gamma(\hat\mu/4)}= \frac{\varepsilon}{\varepsilon+1}. \end{equation} Let $ \pi=(t^j)$ be an arbitrary partition of $[0,+\infty[$ such that diam$(\pi)\le \delta$. For each $x\in\Omega\setminus\mathbf C$ verifying $W(x)=\bar\mu$, define recursively a sequence of trajectory-control pairs $(y^j,v^j):[t^{j-1},t^j]\to \Omega\times U$, $j\ge1$, as follows: \begin{itemize} \item $y^1(t^0):= x^1:= x\, , \ \ v^1:= {\bf u}(x^1);$ \item for every $j> 1$, $$ y^{j }(t^{j-1}):= y^{j-1}(t^{j-1}):= x^j\,, \quad v^j:= {\bf u}(x^j); $$ \item for every $j\ge 1$, $y^j:[t^{j-1},t^j]\to\R^n$ is a solution of the Cauchy problem $$ y'(t) = \psi(y)\,{\mathbf{f}}(y,v^j), \qquad y (t^{j-1}) = x^j. $$ \end{itemize} Notice that, by the continuity of the vector field and because of the cut-off factor $\psi$, any trajectory $y^j(\cdot)$ exists globally and cannot exit the compact subset $W^{-1}({[{\hat\mu/4},{2\sigma}]})$. Let us set $$ (y(t), v(t)):=(y^j(t), v^j) \ \ \forall t\in[t^{j-1},t^j[, \quad \text{for every $j\ge1$.} $$ In view of the $L$-Lipschitz continuity of $W$ on $W^{-1}({[{\hat\mu/4},{2\sigma}]})$, the condition $\delta\le \hat\mu/(2LM)$ in (\ref{E1}) implies that $ |W(y^j(t))- W(x^j)|\le L|y^j(t)- x^j|\le \hat\mu/2,$ so that $$ W(y^j(t))\ge \hat\mu/2 \quad \forall t\in [t^{j-1}, t^j], \quad \text{for every $j\ge1$,} $$ as soon as $W(x^j)\ge \hat\mu$.
\noindent Recalling that $|\psi|\le 1$ and $\psi( x^j)=1$ when $x^j\in W^{-1}([\hat\mu/2,2\sigma])$, (\ref{feed1}) and (\ref{scv}) imply that, for every $j\ge1$ such that $t^{j-1}< {\mathcal T}^{\hat\mu}_y$ (see \eqref{Tz}), one has, $\forall t\in [t^{j-1}, t^j]$, \begin{align*} &W(y^j(t))-W(x^j) +\p\int_{t^{j-1}}^t \mathbf{l}(y^j(\tau),v^j)\,d\tau\le \langle p(x^j),y^j(t)- x^j\rangle+\rho|y^j(t)- x^j|^2 +\\ & \p\int_{t^{j-1}}^t \left[ \mathbf{l}(y^j(\tau),v^j)- \mathbf{l}(x^j,v^j)\right]\,d\tau+ \p\, {\bf l} (x^j,v^j)(t-t^{j-1})\\ &\le \left\langle p(x^j),\int_{t^{j-1}}^{t }\left[\psi(y^j(\tau)) \, {\mathbf{f}}(y^j(\tau),v^j)-{\mathbf{f}}(x^{j},v^j )\right]\,d\tau\right\rangle \\&+ \rho\left( \int_{t^{j-1}}^{t}\left|\psi(y^j(\tau)){\mathbf{f}}(y^j(\tau),v^j)\right|\,d\tau\right) ^2 +\p\, \omega_{{\bf l} } \left(M \, (t^j-t^{j-1})\right)\, (t-t^{j-1})\\ &+ \left\langle p(x^j),{\mathbf{f}} (x^{j},v^j) \right\rangle\, (t-t^{j-1}) +\p\, \mathbf{l}(x^j,v^j)(t-t^{j-1})\\ \le &~L\,\omega \left( M \, (t^j-t^{j-1})\right)\, (t-t^{j-1})+ \rho\, M^2 \, (t-t^{j-1})^2 \\ &+\p\, \omega_{{\bf l} } \left(M \, (t^j-t^{j-1})\right)\, (t-t^{j-1})- \gamma(W(x^j))(t-t^{j-1}) \\ \le &\left[\frac{L\,\omega \left( M \, (t^j-t^{j-1})\right) + \rho\,M^2 \, (t^j-t^{j-1})+ \p\,\omega_{{\bf l} } \left( M \, (t^j-t^{j-1})\right)}{ \gamma(W(x^j))}-1\right] \\ &\cdot \gamma(W(x^j))( t-t^{j-1}).
\end{align*} Since $ \forall t\in[t^{j-1},t^j]$, $t-t^{j-1}\le \delta \le \delta_2$, by (\ref{E2}) it follows that \begin{equation}\label{stimaE} W(y^j(t))-W(x^j)+\p\int_{t^{j-1}}^t \mathbf{l}(y^j(\tau),v^j)\,d\tau\le -\frac{ \gamma(W(x^j))}{\varepsilon+1}(t -t^{j-1}), \end{equation} which implies, also recalling the definition $x^j= y^{j-1}(t^{j-1})$, \begin{equation}\label{stimaE2} \begin{split} W(y(t))-W(x)&+\p\int_0^t \mathbf{l}(y(\tau),v(\tau))\,d\tau\\ &=[W(y^j(t))-W(x^j)]+\dots + [W(y^1(t^1))-W(x)] \\ & + \p\int_{t^{j-1}}^t \mathbf{l}(y^j(\tau),v^j)\,d\tau +\dots +\p\int_{0}^{t^1} \mathbf{l}(y^1(\tau),v^1)\,d\tau\\ & \le -\frac{\gamma(W(x^j))(t-t^{j-1})+ \sum_{i=1}^{j-1} \gamma(W(x^i))(t^i-t^{i-1})}{\varepsilon+1}. \end{split} \end{equation} In particular, (\ref{stimaE2}) yields that $W(y(t))\le W(x)=\bar\mu$ for all $t\in[0,t^j]$. \vv Notice that ${\mathcal T}^{\hat\mu}_y<+\infty$. Indeed, if by contradiction ${\mathcal T}^{\hat\mu}_y=+\infty$, (\ref{stimaE2}) would hold true for all $t\in[0,t^j]$ with $j$ arbitrarily large, i.e. (since $(t^j)$ is a partition of $[0,+\infty[$), for all $t\ge0$. Therefore, recalling that $\gamma(W(x^i))\ge \gamma(\hat\mu/4)>0$ for all $i=1,\dots,j$, one would have $\lim_{t\to+\infty} W(y(t))= 0$, which is not allowed, since, by the definition of ${\mathcal T}^{\hat\mu}_y$, \begin{equation}\label{fuori}W(y(t))>\hat\mu\qquad \forall t\in[0,{\mathcal T}^{\hat\mu}_y[ .\end{equation} Let us set $$ \hat t:= {\mathcal T}^{\hat\mu}_y (<+\infty), $$ so that $\bar n$ reads $$ \quad \bar n=\sup\{j\ge1: t^{j-1}< \hat t\}. $$ Let us observe that $\bar n<+\infty$. Finally, notice that, because of (\ref{fuori}), $\psi(y(t) )= 1$ for every $t\in[0, t^{\bar n}]$. Hence, for any $j\in\{1,\dots,\bar n\}$, $y^j(\cdot)$ is a solution of $$ \frac{dy}{dt} = {\mathbf{f}}(y,v^j) \ \ \forall t\in[t^{j-1},t^j], \quad y (t^{j-1}) = x^j. $$ It follows that conditions {\bf (a)}--{\bf (b)} are satisfied.
\end{proof} \vv \subsection{Proof of Theorem \ref{nuovo}} $\,$ \vskip0.3truecm Let $\sigma\in]0, W_0[$ and let $\gamma(\cdot)$, $N(\cdot)$ be defined as in Proposition \ref{cB}. Fix $\varepsilon>0$ and let $(\nu_k)\subset]0,1]$ be a sequence such that $1=\nu_0>\nu_1>\nu_2>\dots$ and $\lim_{k\to\infty}\nu_k=0$. Assume that $z\in W^{-1}(]0,\sigma])$ and set $$ \mu_k:= \nu_k W(z) \quad \forall k\ge 0. $$ We are going to exploit Proposition \ref{claim2bis} in order to build a trajectory-control pair $$ (y,v):[0,\bar t[\to (\Omega\setminus\mathbf C)\times U $$ by concatenation $$ (y(t),v(t)) = (y_k(t),v_k(t)) \quad \forall t\in [t_{k-1},t_k[, \quad \forall k\ge1, $$ where the pairs $ (y_k(t),v_k(t))$ are described by induction as follows. \vv {\it The case $k=1$.} Let us begin by constructing $(y_1,v_1)$. Let us set $\bar\mu=\mu_0$, $\hat\mu=\mu_1$, and let us build a trajectory-control pair $$ (y_1,v_1):[0,\hat t]\to W^{-1}([\mu_1,\mu_0])\times U\cap B(0,N(\mu_1)), \qquad y_1(0)=z, $$ according to Proposition \ref{claim2bis}. We set $t_0:= 0$ and $t_1:= \hat t$ and observe that, in view of {\bf (a)} in Proposition \ref{claim2bis}, $t_1= {\mathcal T}_{y_1}^{\mu_1}$. {\it The case $k>1$.} Let us define $(y_k,v_k)$ for $k> 1$. Let us set $\bar\mu=\mu_{k-1 }$, $\hat\mu=\mu_{k }$, and construct $$ (\hat y_k,\hat v_k):[0,\hat t]\to W^{-1}([\mu_k,\mu_{k-1}])\times U\cap B(0,N(\mu_k)), \qquad \hat y_k(0)=y_{k-1}(t_{k-1}), $$ still according to Proposition \ref{claim2bis}. We set $t_k:= t_{k-1}+\hat t$ and $(y_k,v_k)(t)=(\hat y_k,\hat v_k)(t-t_{k-1})$ $\forall t\in[t_{k-1},t_k]$. We observe that $t_k= {\mathcal T}_{y_k}^{\mu_k}$. The concatenation procedure is concluded as soon as we set $\bar t := \lim_{k\to \infty} t_k$. Notice that it may well happen that $\bar t=+\infty$. We claim that \begin{equation}\label{raggiunge} \lim_{t\to\bar t^-} {\bf d}(y(t)) = 0. 
\end{equation} Indeed, for every $k\ge1$, Proposition \ref{claim2bis} yields the existence of a finite partition $\pi_k=\{\hat t^0_k,\dots,\hat t^{\bar n_k}_k\}$ of $[0,t_k- t_{k-1}]$ such that, setting $$t_k^j:= t_{k-1}+\hat t^j_k \qquad\forall j\in\{0,\dots, \bar n_k\}, $$ one has $y(0)\,(=y_1(0))=z$, and, for every $k\ge1$: \begin{itemize} \item[\bf (a)$_k$] $y_{k+1}(t_{k}) = y_{k}(t_{k})$, \, $W(y_k(t_{k-1})) = \mu_{k-1 }$; and \newline $W(y_k (t_k))<W(y_k(t))\le W(y_k(t_{k-1}))\le W(z)$ \, $\forall t\in[t_{k-1} ,t_k[$; \item[\bf (b)$_k$] for all $j\in\{1,\dots, \bar n_k\}$, \newline $W(y_k (t))-W(y_k(t_k^{j-1})) +\p\int_{t_k^{j-1}}^t \mathbf{l}(y_k(\tau),v_k(\tau))\,d\tau \le$ \newline \hphantom{mmmmmmmmmmmmmm} $ -\frac{1}{\varepsilon+1} \gamma(W(y_k(t_k^{j-1})))(t-t_k^{j-1}) $ \, $\forall t\in[t_k^{j-1}, t_k^j[$. \end{itemize} In particular, by {\bf (a)$_k$}, claim (\ref{raggiunge}) is equivalent to \begin{equation}\label{discrete} \lim_{k\to\infty} {\bf d}(y_k(t_{k}))=0. \end{equation} Since $W$ is proper and positive definite, (\ref{discrete}) is a straightforward consequence of $$\lim_{k\to\infty} W(y_k(t_k)) = \lim_{k\to\infty} \nu_{k }\,W(z) = 0,$$ so (\ref{raggiunge}) is verified as well. \vv We now need precise estimates of both the decreasing rate of $W$ and the cost gain along $(y,v)$. Let us consider $t$, $k$, $j$ such that $t<\bar t$ and $t\in[t_{k}^{j-1}, t_k^j[$.
Notice that {\bf (b)$_k$} implies \begin{equation}\label{sigma1} W(y(t))\le W(y_k(t_{k}^{j-1}))\le W(y(t_{k-1}))\le\dots\le W(y(t_1))\le W(z)\le\sigma, \end{equation} and, in view of the definition of $(y_k,v_k)$, also \begin{align*} &W(y_k(t))-W(y_k(t_{k-1})) +\p\int_{t_{k-1}}^t \mathbf{l}(y_k(\tau),v_k(\tau))\,d\tau = \\ &[W(y_k(t))-W(y_k(t_k^{j-1}))]+ [W(y_k(t_k^{j-1}))-W(y_k(t_k^{j-2}))] +\dots +[W(y_k(t_k^{1}))-W(y_k(t_k^{0}))]\\ &+\p\int_{t_k^{j-1}}^t \mathbf{l}(y_k(\tau),v_k(\tau))\,d\tau+\dots+\p\int_{t_k^{0}}^{t_k^{1}} \mathbf{l}(y_k(\tau),v_k(\tau))\,d\tau\\ & \le-\frac{1}{\varepsilon+1}\left[\gamma(W(y_k(t_k^{j-1})))(t-t_k^{j-1})+\sum_{i=1}^{j-1} \gamma(W(y_k(t_k^{i-1})))(t_k^{i }-t_k^{i-1})\right] . \end{align*} By the monotonicity of $\gamma$ one has $\gamma(W(y_k(t_k^{j-1})))\le \gamma(W(y_k(t_k^{i-1})))$ for any $i=1,\dots,j-1$, which implies $$ W(y_k(t))-W(y_k(t_{k-1})) +\p\int_{t_{k-1}}^t \mathbf{l}(y_k(\tau),v_k(\tau))\,d\tau\le -\frac{1}{\varepsilon+1} \gamma(W(y_k(t_k^{j-1})))(t-t_{k-1}). $$ Hence, recalling the definition of $(y,v)$, we have \begin{align*} &W(y(t))-W(z) +\p\int_{0}^t \mathbf{l}(y(\tau),v(\tau))\,d\tau = \\ &[W(y(t))-W(y(t_{k-1}))]+[W(y(t_{k-1}))-W(y(t_{k-2}))] +\dots +[W(y(t_1))-W(y(0))]\\ &+\p\int_{t_{k-1}}^t \mathbf{l}(y_k(\tau),v_k(\tau))\,d\tau+\dots+\p\int_{0}^{t_1} \mathbf{l}(y_1(\tau),v_1(\tau))\,d\tau, \end{align*} so, by using (\ref{sigma1}), we finally obtain \begin{equation}\label{EP} W(y(t))-W(z) +\p\int_{0}^t \mathbf{l}(y(\tau),v(\tau))\,d\tau\le -\frac{1}{\varepsilon+1} \gamma(W(y_k(t_k^{j-1})))t. \end{equation} This is the key inequality for proving both claim {\bf (i)} and claim {\bf (ii)} of the theorem. \vv As for claim {\bf (i)} --stating that the system is (GAC) to $\mathbf C$--, we have to establish the existence of a ${\mathcal KL}$ function $\beta$ as in Definition \ref{(GAC)}. Let $t$ belong to $[0,\bar t[$. Then $t\in[t_k^{j-1}, t_k^j[$ for some $k\ge1$ and some $j\in\{0,\dots,{\bar n_k}\}$.
Since $l\ge0$, by \eqref{EP} we get \begin{equation}\label{stg} W(y(\tau))+\frac{ \gamma(W(y(t_k^{j-1})))\,\tau}{\varepsilon+1}\le W(z) \qquad \forall \tau\in[t_k^{j-1},t_k^j]. \end{equation} Observe that the function $\tilde \gamma:[0,+\infty[\to[0,+\infty[$ defined by $\tilde \gamma(r):= \min\{r,\gamma(r)\}$ for all $r\in[0,+\infty[$ is continuous, strictly increasing, and $\tilde \gamma(r)>0$ \, $\forall r>0$, $\tilde \gamma(0)=0$. Then, taking $\tau=t_k^{j-1}$ in (\ref{stg}), one has $$ \tilde \gamma(W(y(t_k^{j-1})))\left[1+\frac{ t_k^{j-1}}{\varepsilon+1}\right]\le W(z), $$ so that $$ W(y(t))\le W(y(t_k^{j-1}))\le \tilde \gamma^{-1}\left(\frac{\varepsilon+1}{\varepsilon+1+t_k^{j-1}}\,W(z)\right). $$ By Proposition \ref{claim2bis} it is not restrictive to assume diam$(\pi_k)\le1/2$. Therefore we get $$ W(y(t))\le \tilde \gamma^{-1}\left(\frac{2( \varepsilon+1)}{\varepsilon+1+t }\,W(z)\right). $$ Proceeding as usual in the construction of the function $\beta$, we set \begin{equation}\label{sigma} \sigma_-(r):=\min\{r\,,\,\min\{{\bf d}(x): \ W(x)\ge r\}\}, \quad \sigma^+(r):=\max\{{\bf d}(x): \ W(x)\le r\}. \end{equation} Clearly, $\sigma_-$, $\sigma^+:[0,+\infty[\to\R$ are continuous, strictly increasing, unbounded functions such that $\sigma_-(0)=\sigma^+(0)=0$ and $$ \forall x\in{W^{-1}([0,\sigma])}: \quad \sigma_-(W(x))\le {\bf d}(x)\le\sigma^+(W(x)). $$ We now define $\beta:[0,+\infty[\times[0,+\infty[\to[0,+\infty[$ by setting \begin{equation}\label{defbeta} \beta(r,t):= \sigma^+\circ\tilde \gamma^{-1}\left( \sigma_-^{-1}(r) \,\frac{2(\varepsilon+1)}{\varepsilon+1+t} \right), \end{equation} so, by straightforward calculations, it follows that ($T_y=\bar t$ and) $$ {\bf d}(y(t ))\le \beta({\bf d}(z), t ) \qquad \forall t\in[0,T_y[. $$ By the arbitrariness of $\sigma>0$, this concludes the proof of claim {\bf (i)} of the theorem.
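The ${\mathcal K\mathcal L}$ bound \eqref{defbeta} can be illustrated numerically. In the sketch below the maps $\gamma$, $\sigma_-$, $\sigma^+$ are toy choices of ours (not derived from any specific problem data); the checks confirm that the resulting $\beta$ decreases in $t$, increases in $r$, and vanishes as $t\to+\infty$:

```python
import numpy as np

# toy comparison functions (illustrative choices, not from the problem data)
eps = 0.5
gamma       = lambda r: r**2        # rate function gamma(.)
sigma_minus = lambda r: 0.5*r       # sigma_-(W(x)) <= d(x)
sigma_plus  = lambda r: 2.0*r       # d(x) <= sigma^+(W(x))

def gamma_tilde_inv(s):
    # inverse of gamma_tilde(r) = min{r, gamma(r)}: r^2 on [0,1], r beyond
    return np.sqrt(s) if s <= 1.0 else s

def sigma_minus_inv(r):
    return 2.0*r

def beta(r, t):                     # the KL function of (defbeta)
    return sigma_plus(gamma_tilde_inv(
        sigma_minus_inv(r)*2.0*(eps + 1.0)/(eps + 1.0 + t)))

print(beta(0.3, 0.0) > beta(0.3, 10.0) > beta(0.3, 100.0))  # True: decreasing in t
print(beta(0.1, 1.0) < beta(0.2, 1.0))                      # True: increasing in r
print(beta(0.3, 1e6) < 1e-2)                                # True: decays to 0
```

Any other strictly increasing, unbounded choices of $\sigma_-$, $\sigma^+$ with $\sigma_-(0)=\sigma^+(0)=0$ would produce the same qualitative behaviour.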
\vv As for claim {\bf (ii)}, we now observe that inequality (\ref{EP}) implies also $$ \int_0^{\bar t}\mathbf{l}(y(t),v(t))\,dt =\lim_{k\to+\infty}\int_0^{t_k}\mathbf{l}(y(t),v(t))\,dt \le \lim_{k\to+\infty}\frac{W(z)-W(y(t_k))}{\p}= \frac{W(z)}{\p}\,, $$ from which (\ref{Wprop}) follows. \,\qed \section{Control-polynomial systems}\label{balsect}$\,$ In this section and in the next one we will assume the dynamics $\F$ to be a polynomial of degree $d\geq 0$ in the control variable $u$: \begin{equation}\label{polymomium-est} \begin{array}{c} \displaystyle \dot x=\P(x,u):= f_0(x)+\sum_{i=1}^d\left(\sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=i} u_1^{\alpha_1}\cdots u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\right) , \quad x(0)=z, \\ \, \\ V(z) :=\displaystyle\inf_{(x,u)\in {\mathcal{ A}}_{f}({z})} \int_0^{T_x} l(x(t),u(t))\, dt. \end{array}\end{equation} We assume the vector fields $f_0, f_{\alpha_1,\dots,\alpha_m}$ to be continuous and the controls to range on the set $$U_r:= [-r, r]^m,$$ for some $r$, $0< r\leq +\infty$ (if $r=+\infty$ we mean $U_r:=\R^m$). On the one hand, such a polynomial structure is of obvious interest for applications. For instance, in the example of the gyroscope (Section \ref{gyroex}) the dynamics is quadratic in the controls, namely the precession and spin velocities. Also the impressive behaviour of the Kapitza pendulum --where a fast oscillation of the pivot turns an unstable (or even a non-equilibrium) point into a stable point-- can be explained by saying that the square of the pivot velocity --regarded as a control-- prevails over gravity. Many other mechanical systems, possibly non-holonomic, can be thought of as control systems with quadratic dependence on the inputs, see e.g. \cite{BR10}.
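A control-polynomial field of the form \eqref{polymomium-est} is simply a sum of monomial-weighted vector fields, and is easy to evaluate programmatically. The following numpy sketch (function names and numerical constants are ours) instantiates it on the control-quadratic gyroscope field of Section \ref{gyroex}:

```python
import numpy as np

def poly_field(f0, terms, x, u):
    """Evaluate f0(x) + sum over alpha of u^alpha * f_alpha(x),
    where `terms` maps multi-indices alpha to vector fields."""
    value = np.asarray(f0(x), dtype=float)
    for alpha, f_alpha in terms.items():
        monomial = np.prod([ui**ai for ui, ai in zip(u, alpha)])
        value = value + monomial*np.asarray(f_alpha(x), dtype=float)
    return value

# the gyroscope field: dx = f0(x) + u1*u2*f11(x)  (d = 2, m = 2),
# with illustrative constants I = M = g = z_G = 1
I = M = g = zG = 1.0
f0  = lambda x: (x[1]/I, M*g*zG*np.sin(x[0]))
f11 = lambda x: (0.0, -I*np.sin(x[0]))

x = np.array([0.3, 0.1])
u = (2.0, -1.0)
print(poly_field(f0, {(1, 1): f11}, x, u))  # components (0.1, 3*sin(0.3))
```

The same helper accepts any dictionary of multi-indices, so higher-degree fields fit without modification.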
On the other hand, it is natural to try to exploit the control polynomial dependence for a careful study of the vectogram's convex hull \footnote{In some classical literature, as well as in some recent papers, objects akin to the convex hull of the image of the vector valued function that maps $u\in\R^m$ into the (suitably ordered) sequence of all monomials of $u$ up to the degree $d$, are referred to as {\it spaces of moments}, see e.g. \cite{Akh65,Ego02,Mez04,PT09,Sho50}.}. \subsection{Near-control-affine systems} \,\,$\,$ In this subsection we address the task of representing a control-polynomial system -- actually, its convexification -- by means of a control-affine dynamics like $$ {\F}_{\text{\it aff}}(x,w):= f_0(x)+\sum_{i=1}^d\left(\sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=i} w_{\alpha_1,\dots,\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\right).$$ Such a representation in general does not exist, as is clear when $\F(x,u)= uf_1(x)+u^2 f_{2}(x)$, \, $u\in\R$. However, an affine representation is achievable in the case of {\it near-control-affine } systems, where the only non-zero terms are those corresponding to control monomials such that each component $u_i$ ($i=1,\dots,m$) has an exponent equal either to $0$ or to a fixed odd positive number $K_i$. To state precisely the main result, let us give some definitions. \vv For every $\alpha\in\N^m$, let us set $c(\alpha):=\#\{\alpha_i\ne0; \ i=1,\dots,m\}$. \begin{definition}[Near-control-affine systems]\label{defbal} We say that the control-polynomial dynamics $f(x,u)$ in {\normalfont(\ref{polymomium-est})} is {\em near-control-affine} if there exist an $m-$tuple $K=(K_1,\dots,K_m)$ of positive odd numbers and a positive integer $\dbar \leq m$ such that $$\F(x,u):=f_0(x)+\sum_{i=1}^{\dbar}\left( \sum_{\alpha\in\N^m: \ c(\alpha)=i, \ \alpha_1\in\{0, K_1\}, \dots, \alpha_m\in\{0,K_m\}} u_1^{\alpha_1}\cdots u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\right).
$$ \end{definition} \vv \begin{remark}\label{rmknormd}{\rm If the near-control-affine system {\normalfont(\ref{polymomium-est})} is of degree $d$, one obviously has $\dbar\le d$. Moreover, when $\dbar=m$, the number $M$ of non-drift terms of a near-control-affine system $\F$ verifies $M\le \sum_{k=1}^m \binom{m}{k}=2^m-1$. Indeed, for every $k\leq m$, the maximum number of non-zero terms of the form $u_1^{\alpha_1}\cdots u_m^{\alpha_m} f_{\alpha_1\cdots\alpha_m}$ with $k$ coefficients $\alpha_i\ne0$ is equal to $\binom{m}{k}$. } \end{remark} \vskip 0.3truecm For every $r\in]0,+\infty[$ we set \begin{equation}\label{rbaldef} \bar r:=\frac{1}{M}\min\{r^{j K_i}\mid i=1,\dots,m; ~j=1,\dots,\dbar\} \end{equation} and $$ \Wbalr:=[-\bar r,\bar r]^M. $$ In addition, we set $$ \bar U_{+\infty} :=\RR^M.$$ \vskip0.3truecm Theorem \ref{cnear-control-affine}, where we assume Hypothesis {\bf A}$_{b}$ below, establishes that near-control-affine systems can be regarded as control-affine systems with independent control variables. \vskip0.3truecm \noindent {\bf Hypothesis A}$_{b}:$ \begin{enumerate} \item $f$ is near-control-affine; \item for every $x\in \Omega\backslash\mathbf C$, the map $l(x,\cdot): U_r \to\R$ is bounded; \item let us define the (non-negative, continuous) function $$\ell(x):=\sup_{u\in U_r} l(x,u).$$ The control set for the minimum problems $(\ell,{\F}_{\text{\it aff}},\mathbf C)$ coincides with $\Wbalr$. \end{enumerate} \vskip0.3truecm \begin{theorem}\label{cnear-control-affine} Let us assume Hypothesis {\bf A}$_{b}$ and let $W$ be a $\p$-MRF for the affine problem $( \ell,{\F}_{\text{\it aff}},\mathbf C)$ for some $\p\ge0$. Then the map $W$ is a $\p$-MRF for the original (non-affine) problem $(l,\F,\mathbf C)$ as well. In particular, the control system in {\rm \eqref{polymomium-est}} is \emph{GAC} to $\mathbf C$ and, if $\p>0$, $$V(z)\leq \frac{W(z)}{\p} \qquad\forall z\in \Omega\backslash \mathbf C.$$ \end{theorem} \begin{proof} Let $x\in \Omega\setminus\mathbf C$.
By assumption one has $$ \inf_{w\in \Wbalr} \Big\{\Big\langle p\,,\, {\F}_{\text{\it aff}}(x,w) \Big\rangle\Big\}+\p\ell(x) < 0 \qquad \text{for all }p\in D^*W(x). $$ By Lemma \ref{lnear-control-affine} below, ${\F}_{\text{\it aff}}(x,\Wbalr)\subseteq co~\F(x,\Ubalr)$, which implies \begin{equation}\label{fco} \inf_{u\in \Ubalr} \Big\{\Big\langle p\,,\, \F(x,u) \Big\rangle\Big\}+\p\ell(x) < 0 \qquad \text{for all }p\in D^*W(x). \end{equation} This concludes the proof, since \eqref{fco} yields $$ \inf_{u\in \Ubalr} \Big\{\Big\langle p\,,\,\F(x,u) \Big\rangle+\p l(x,u)\Big\} < 0 \qquad \text{for all }p\in D^*W(x). $$ \end{proof} \begin{lemma}\label{lnear-control-affine} For every $r\in ]0,+\infty]$ \begin{equation}\label{comb2} {\F}_{\text{\it aff}}(x,\Wbalr)\subset co~\F(x,\Ubalr)\quad \forall x\in\Omega\setminus\mathbf C. \end{equation} \end{lemma} This result will be proved in Appendix \ref{proofsec}. \begin{remark}\label{rmkbal}{ Besides implying Theorem \ref{cnear-control-affine}, Lemma \ref{lnear-control-affine} gives access to classical results on control-affine systems for the study of local controllability of near-control-affine systems. For instance, consider the driftless, near-control-affine system (with $d=8$, $K=(1,3,5)$ and $\dbar =2$) \begin{equation}\label{balsyst}\dot x=\F(x,u)=u_1u_2^3 f_{1,3,0}(x) +u_1u_3^5 f_{1,0,5}(x) + u_2^3u_3^5 f_{0,3,5}(x),\end{equation} with $x=(x_1,x_2,x_3,x_4)\in \RR^4$, $u=(u_1,u_2,u_3)\in \RR^3$ and \begin{equation*} f_{1,3,0}(x) =(1,0,x_2,0)^{tr};\quad f_{1,0,5}(x)=(0,1,-x_1,0)^{tr};\quad f_{0,3,5}(x)=(0,0,0,1)^{tr}. \end{equation*} Notice that $\{(u_1u_2^3,u_1u_3^5,u_2^3u_3^5)\mid (u_1,u_2,u_3)\in\RR^3\}\subsetneq \RR^3$: for instance, $$(0,1,1)\notin \{(u_1u_2^3,u_1u_3^5,u_2^3u_3^5)\mid (u_1,u_2,u_3)\in\RR^3\},$$ so $\F$ cannot be parameterized as a control-linear vector field with controls in $\R^3$.
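The fact that $(0,1,1)$ does not belong to the image of the monomial map $(u_1,u_2,u_3)\mapsto(u_1u_2^3,u_1u_3^5,u_2^3u_3^5)$ can also be confirmed symbolically; a small sympy sketch:

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')

# Is there any u with (u1*u2^3, u1*u3^5, u2^3*u3^5) = (0, 1, 1)?
# The second equation forces u1, u3 != 0, hence the first forces u2 = 0,
# which contradicts the third: the system is inconsistent.
system = [u1*u2**3, u1*u3**5 - 1, u2**3*u3**5 - 1]
print(sp.solve(system, [u1, u2, u3]))  # [] : no solutions, real or complex
```

The polynomial ideal generated by the three equations is in fact the unit ideal, so even complex solutions are excluded.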
However, by Lemma \ref{lnear-control-affine} the control-linear vector field $${\F}_{\text{\it aff}}(x,w)= w_{1,3,0} f_{1,3,0}(x) +w_{1,0,5} f_{1,0,5}(x) +w_{0,3,5} f_{0,3,5}(x) \quad (w_{1,3,0},w_{1,0,5},w_{0,3,5})\in\RR^3$$ satisfies $${\F}_{\text{\it aff}}(x,\Wbalr)\subset co(\F(x,\Ubalr)) \quad \forall x\in\RR^4;~\forall r>0.$$ For example, we have that $f_{1,0,5}(x) + f_{0,3,5}(x) \notin \F(x,\Ubalr)$, while $$ f_{1,0,5}(x) + f_{0,3,5}(x) = \frac 1 2 \F(x,(1,0,2^{1/5})) + \frac 1 2 \F(x,(0,1,2^{1/5})).$$} \end{remark} \begin{remark}{\rm Let us see a simple use of the affine representability of ${\F}_{\text{\it aff}}$ for system \eqref{balsyst}. Observe that the latter verifies the so-called Lie algebra rank condition, $$Lie_x\{f_{1,3,0},f_{1,0,5},f_{0,3,5}\}=\RR^4 \qquad \forall x\in\RR^4.$$ Indeed, the Lie bracket $[f_{1,3,0},f_{1,0,5}]$ coincides with the vector field constantly equal to $(0,0,2,0)^{tr}$, so that $$span\{ f_{1,3,0},f_{1,0,5},f_{0,3,5},[f_{1,3,0},f_{1,0,5}]\} = \R^4$$ at every point. Therefore, by Chow-Rashevsky's Theorem the system $\dot x= {\F}_{\text{\it aff}}(x,w)$ turns out to be small time locally controllable. Now, by Lemma \ref{lnear-control-affine} $${\F}_{\text{\it aff}}(x,\Wbalr)\subset co(\F(x,\Ubalr))\quad \forall x\in\Omega\setminus\mathbf C.$$ Consequently, by a standard relaxation argument, we can deduce that the system $\dot x= \F(x,u)$ is small time locally controllable as well. } \end{remark} \vskip1truecm \subsection{Maximal degree weak subsystems}\label{secmaximal} $\,$ \vskip0.3truecm In this subsection and the next one, we assume $r=+\infty$, i.e. $U_r=\RR^m$, and look for {\it weak subsystems}, namely set-valued selections of the convex-valued multifunction $x\mapsto co\,\,\F(x,\R^m). $ We begin with a class of weak subsystems which we call {\it maximal degree} subsystems. Theorem \ref{maximalth} below extends in several directions a result contained in \cite{BR10} and valid for the case $d=2$.
It states that in order to test if a function $W$ is a $\p$-MRF for problem \eqref{polymomium-est}, it is sufficient to test $W$ on the (simpler) {\it maximal degree} problem \begin{equation}\label{max} \begin{array}{l} \dot x=\E(x,u), \qquad x(0)=z, \\ \, \\ \displaystyle\inf_{(x,u)\in{\mathcal A}_{\E}(z) }\displaystyle \int_0^{T_{x}} l(x(t),u(t)) dt, \end{array} \end{equation} where the \emph{maximal degree} control-polynomial vector field $\E$ is defined by $$\E(x,u):= f_0(x)+\sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d} u_1^{\alpha_1}\cdots u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x).$$ We shall assume the following additional hypothesis on the running cost: \vskip0.5truecm \noindent {\bf Hypothesis A}$_{max}${\it : There exist non-negative continuous functions $M_0=M_0(x)$, $M_1= M_1(x,u)$ such that \begin{equation}\label{lmax} l(x,u) =M_0(x) + M_1(x,u), \end{equation} with $M_1$ verifying $$ M_1(x,0)=0 ,\qquad M_1(x,ku)\leq k^d M_1(x,u) \qquad \forall k\geq 1,~x\in\Omega\setminus\mathbf C,~u\in\RR^m.$$} Notice that running costs of the form $$ l(x,u) = l_0(x) + l_1(x)|u| +\dots+ l_d(x)|u|^d, $$ where the maps $l_i(\cdot)$ are continuous and non-negative, verify Hypothesis {\bf A$_{max}$}. \begin{theorem}\label{maximalth} Let us assume Hypothesis {\bf A}$_{max}$, and let $W$ be a $\p$-MRF for the maximal degree problem $(l,\E,\mathbf C)$, for some $\p\ge0$. Then the map $W$ is a $\p$-MRF for the original problem $(l,\F,\mathbf C)$.
In particular, the control system in {\rm\eqref{polymomium-est}} is {\rm GAC} to $\mathbf C$ and, if $\p>0$, $$V(z) \leq \frac{W(z)}{\p} \qquad\forall z\in \Omega\backslash \mathbf C.$$ \end{theorem} \begin{proof} Assume by contradiction that there exist $x\in\Omega\backslash\mathbf C$ and $p\in D^*W(x)$ such that \begin{equation}\label{MRFmax1} \p l(x,u) + \langle p, f_0(x)\rangle +\sum_{i=1}^d\left(\sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=i} \langle p, u_1^{\alpha_1}\cdots u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle \right)\ge 0 \end{equation} for all $u\in \R^m$. By taking $u=0$ we obtain \begin{equation}\label{MRFmax3} \p M_0(x) + \langle p, f_0(x)\rangle \ge 0. \end{equation} Since $W$ is a $\p$-MRF for the maximal degree problem, there exist $\tilde u\in\R^m$ and $\eta >0$ such that \begin{equation}\label{MRFmax2} \p\, l(x,\tilde u) + \langle p~,~f_0(x)\rangle + \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle = -\eta . \end{equation} Moreover, since $l=M_0+M_1$, subtracting \eqref{MRFmax3} from \eqref{MRFmax2} and multiplying by $k^d$ yields \begin{equation}\label{MRFmax3bis} \p k^dM_1(x,\tilde u)+ k^d \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle \leq -\eta k^d \end{equation} for any $k\geq 0$.
Hence, for every $k\geq 1$ \begin{align*} \p l(x,k\tilde u) + \langle p~,~ & f_0(x)\rangle + k \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=1} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle+\\ & \dots+k^{d-1} \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d-1} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle\\ & + k^d \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle\leq \\ \p k^d M_1(x,\tilde u) + \p M_0(x) &+ \langle p~,~ f_0(x)\rangle + k \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=1} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle+\\ & \dots+k^{d-1} \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d-1} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle\\ & + k^d \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle\leq\\ \p M_0(x) + \langle p~,~ f_0(x)\rangle&+ k \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=1} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle+\\ & \dots+k^{d-1} \sum_{\alpha\in\N^m, \, \alpha_1+\dots+\alpha_m=d-1} \langle p, \tilde u_1^{\alpha_1}\cdots \tilde u_m^{\alpha_m} f_{\alpha_1,\dots,\alpha_m}(x)\rangle -\eta k^d . \end{align*} If $k$ is sufficiently large the last term is negative, which contradicts \eqref{MRFmax1}. \end{proof} \begin{remark}{\rm The thesis of Theorem \ref{maximalth} cannot be extended to the case of bounded control sets. 
For instance, if $d=3$, $n=m=1$, $U=[-1,1]$, $\mathbf C=\{0\}$, $l\equiv0$, and $\F(x,u)=(u^2+u^3)x$, one has $\dot x=\F(x,u)\geq 0$ for $x\geq 0$, so the system is not GAC to $\mathbf C$ and no control Lyapunov function \footnote{When $l\equiv0$ the notion of $\p$-MRF coincides with that of control Lyapunov function.} exists. However, $W(x)=x^2$ is a control Lyapunov function for $(l,\E)$, so that the system $\dot x=\E(x,u)$ is GAC to $\mathbf C$. Nevertheless, some symmetry arguments may allow the extension of Theorem \ref{maximalth} to some special classes of polynomial control systems with {\it bounded} control sets. This might be the case when $d=2$, $U$ is a (compact) symmetric control set (i.e. $u\in U$ implies $-u\in U$) and, for all $x\in\Omega\setminus\mathbf C$, $l(x,\cdot)$ is an even function. For example, consider the system $$ \dot x =\F(x,u), \quad x(0)=z, \qquad u\in U := [-1,1]^2$$ where \, $$\F(x,u):= f_0(x) +u_1 f_{1,0}(x)+u_2 f_{0,1}(x)+ u_1^2 f_{2,0} (x)+ u_2^2 f_{0,2} (x) + u_1u_2 f_{1,1} (x),$$ together with the minimum problem $$ \displaystyle \inf_{(x,u)\in{\mathcal A}_{\F}(z)}\int_0^{T_{x}} (|u| + x^2u^2)dt. $$ Notice that $$(l,\E)(x,u)=\frac{1}{2}(l,\F)(x,u)+\frac{1}{2}(l,\F)(x,-u)\in co(l,\F)(x,U)\qquad \forall x\in\Omega\setminus\mathbf C,~ u\in U,$$ since $l(x,\cdot)$ is even and the odd-degree terms of $\F(x,\cdot)$ cancel in the average. Therefore, for every $(x,(p_0,p))\in (\Omega\setminus\mathbf C)\times\RR^{1+n}$, one has $$H_{l,\E}(x,p_0,p)<0\quad \Rightarrow \quad H_{l,\F}(x,p_0,p)<0.$$ Consequently, if a map $W$ is a {\rm $\p$-MRF} for $(l,\E,\mathbf C)$ for some $\p\ge0$, then $W$ is a {\rm $\p$-MRF} for $(l,\F,\mathbf C)$. Then Theorem \ref{thselectionintro} applies and, consequently, Theorem \ref{maximalth} can be extended to this case.} \end{remark} \vskip0.3cm \subsection{Diagonal weak subsystems}$\,$\label{secdiagonal} Another class of weak subsystems is given by the {\it diagonal subsystems} described below. We still assume $U=\R^m$.
Let us denote by $\mathbf e_1,\dots,\mathbf e_m$ the canonical basis of $\R^m$ and let us set $\mathbf e_0:=0$. \begin{definition}\label{diagdef} For every $\lambda$ belonging to the simplex $\Lambda:=\{\lambda\in\RR^m\mid \sum_{i=1}^m \lambda_i\leq 1;~\lambda_i\geq 0\}$, \begin{equation}\label{diaginconc} \F^{diag}_{\lambda}(x,u) := \sum_{i=0}^m \lambda_i \F(x,{\lambda_i}^{-\frac 1 d}{u_i}\mathbf e_i) , \end{equation} where $\lambda_0:=1-\sum_{i=1}^m \lambda_i$, will be called the {\em $\lambda$-diagonal} control vector field corresponding to $\F$ and $\lambda$. \end{definition} For instance, setting $f_{\alpha_1,\dots,\alpha_m}:=f_\alpha$ for every $\alpha\in\N^m$, when $d=2$ and $d=3$ one has $$ \F^{diag}_{\lambda}(x,u)= f_0(x)+\sum_{i=1}^m \lambda_i^{\frac 1 2}u_i f_{\mathbf e_i}(x)+\sum_{i=1}^m u_i^2 f_{2\mathbf e_i}(x) $$ and $$ \F^{diag}_{\lambda}(x,u)= f_0(x)+\sum_{i=1}^m \lambda_i^{\frac 2 3}u_i f_{\mathbf e_i}(x)+\sum_{i=1}^m \lambda_i^{\frac 1 3} u_i^2 f_{2\mathbf e_i}(x)+\sum_{i=1}^m u_i^3 f_{3\mathbf e_i}(x), $$ respectively. \begin{remark}\label{remdiag}{\rm Since $\sum_{i=0}^m \lambda_i=1$, every $\F^{diag}_\lambda(x,u)$ is a convex combination of elements of $\F(x,\R^m)$, which implies that \begin{equation}\label{Fdiag} \F^{diag}_\lambda(x,\R^m) \subseteq co~\F(x,\R^m). \end{equation} }\end{remark} \vskip0.2truecm We shall assume the following hypothesis on the running cost: \vskip0.2truecm \noindent {\bf Hypothesis A}$_{diag}${\it : There exists a real number $\M\geq0$ such that, for every $\lambda\in\Lambda$ verifying $\lambda_i>0$, $i=1,\dots,m$, one has } \begin{equation}\label{HIP} l(x,0) + \sum_{i=1}^m \lambda_i l(x,\frac{u_i}{\sqrt[d]{\lambda_i}}\mathbf e_i)\leq \M \,l(x,u) \qquad \forall u\in\R^m.
\end{equation} \vskip0.5truecm \begin{remark}{\rm Notice that for every $q\geq 1$, the particular running cost \begin{equation}\label{pollagrang} l(x,u) := l_0(x) + l_{1}(x) |u|+\cdots+ l_q(x)|u|^q \end{equation} does verify Hypothesis {\bf A}$_{diag}$ (with $\M=\sqrt{m}$)\footnote{This is due to the elementary inequalities $$|u_1|+\cdots+|u_m|\leq \sqrt{m} |u|\qquad (|u_1|^q+\cdots+|u_m|^q)^\frac{1}{q}\leq |u|\qquad \forall q\geq 2 .$$}. As a simple model case, one could consider $ l(x,u) = |u|^q$, $q\geq d$, so that the functional to be minimized would be nothing but the $q$-th power of the $L^q$-norm of $u$.}\end{remark} \begin{theorem}\label{diagonalth} Assume that Hypothesis {\bf A}$_{diag}$ holds true with a suitable constant $\M\geq 0$, and let $W$ be a $\p$-MRF for the $\lambda$-diagonal problem $(l,\F^{diag}_{\lambda},\mathbf C)$, for some $\p\ge0$. Then the map $W$ is a $\bar\p$-MRF for the original problem $(l,\F,\mathbf C)$, where $\bar\p:= \frac{\p}{\M}$ if $\M>0$, while, if $\M=0$, $\bar\p$ is allowed to be any positive real number. \\ In particular, the control system in {\rm\eqref{polymomium-est}} is {\rm GAC} to $\mathbf C$ and, if $\p>0$, \begin{equation}\label{Wdiag} V(z) \leq \frac{\M W(z)}{\p}\qquad\forall z\in \Omega\backslash \mathbf C. \end{equation} \end{theorem} \begin{proof} Set $\lambda_0=1-\sum_{i=1}^m \lambda_i$ and $\mathbf e_0 = 0$. First assume $\M >0$.
Then for every $i=0,\dots,m$, every $(x,u)\in (\Omega\backslash \mathbf C)\times\R^m$ and every $p\in D^*W(x)$, one has $$ \lambda_i H_{l,\F}(x, \frac{\p}{\M},p) \leq \lambda_i \left< (\frac{\p}{\M},p)~,~( l, \F)(x,\lambda_i^{-\frac 1 d}{u_i}\mathbf e_i)\right> $$ that, summing up for $i=0, \dots,m$, yields \begin{equation}\label{stimadiag} \begin{array}{c} \displaystyle{H_{l,\F}(x, \frac{\p}{\M},p) \leq \sum_{i=0}^m \lambda_i \left<(\frac{\p}{\M},p)~,~( l,\F)(x,\lambda_i^{-\frac 1 d}{u_i}\mathbf e_i)\right>\leq}\\\,\\ \displaystyle{\frac{\p}{\M } \M l(x,u) + \left<p~,~ \F^{diag}_\lambda(x,u)\right > = \p l(x,u) + \left<p~,~ \F^{diag}_\lambda(x,u)\right > .} \end{array} \end{equation} Since by hypothesis $ \max_{p\in D^*W(x)}H_{l,\F^{diag}_\lambda}(x,\p,p) <0, $ there exists $\tilde u$ such that $$ \p l(x,\tilde u) + \left< p,\F^{diag}_\lambda(x,\tilde u)\right ><0 \quad \forall p\in D^*W(x).$$ This, together with \eqref{stimadiag}, implies $$ H_{l,\F}(x, \frac{\p}{\M },p) <0 \quad \forall p\in D^*W(x), $$ which indeed is the thesis of the theorem. Assume otherwise $\M =0$. Then $l\equiv 0$, consequently $V(z)\equiv 0$ and (\ref{Wdiag}) is trivially verified. Since $W$ is a $\p$-MRF for $(l,\F_\lambda^{diag},\mathbf C)$ and since $l\equiv 0$, for every $x\in\Omega\setminus\mathbf C$ there exists $\tilde u\in\RR^m$ such that $\langle p,\F^{diag}_\lambda(x,\tilde u)\rangle<0$ for all $p\in D^*W(x)$. Consequently, for every $\bar\p\in\RR$ and for every $p\in D^*W(x)$ \begin{align*} H_{l,\F}(x,\bar\p,p)&=\inf_{u\in\RR^m}\langle p,~\F(x,u)\rangle\leq \sum_{i=0}^m\lambda_i\langle p,~ \F(x,\lambda_i^{-\frac 1 d}\tilde u_i\mathbf e_i)\rangle\\ &=\langle p,~ \F^{diag}_\lambda(x,\tilde u)\rangle<0. \end{align*} This gives the thesis in the case $\M =0$ and completes the proof.
\end{proof} \begin{example}\label{diagexample} {\rm Let $\mathbf C:=\{0\}$, $u\in\R^2$ and let us consider in $\R^2$ the exit-time problem \begin{equation}\label{exdiag} \begin{array}{l}\dot x = \F(x,u):=x +u_1u_2 (|x|^{-1},1)^{tr} -u_1^2 (1,0)^{tr} -u_2^2 (0,1)^{tr} + 3u_1^2u_2^2 x,\quad x(0)= z; \\\,\\ V(z) := \displaystyle \inf_{(x,u)\in{\mathcal A}(z)} \int_0^{T_{x}} x^2 |u|^2\,dt. \end{array} \end{equation} Let $\Phi:[0,+\infty[\to \R$ be a smooth convex function such that $\Phi(0) = 0$, $\Phi'(0)\geq 1$. In order to verify that a function of the form $$ W(x)=\Phi(|x|^2) $$ is a $\p$-MRF for some $\p >0$, let us begin with observing that {\it the maximal degree subsystem $$ \dot x = \E(x,u) =x + 3u_1^2u_2^2 x$$ does not give any useful information}. Indeed \begin{align*} H_{l,\E}(x,\p,\nabla W(x))&=\inf_u \left\{\Big\langle \nabla W(x)~ ,~ \E(x,u)\Big\rangle + \p x^2 |u|^2\right\}\\ &=\inf_u \Big\{2 \Phi'(|x|^2) |x|^2 (1+ 3u_1^2u_2^2) + \p x^2 |u|^2 \Big\} \geq 0 \end{align*} for all $x\in\R^2\backslash \{0\}$ and $\p\geq 0$. On the other hand, by considering the diagonal subsystem $$ \dot x = \F^{diag}_{(\frac 1 2 ,\frac 1 2)}(x,u) = x -u_1^2(1/\sqrt{2},0)^{tr} -u_2^2(0, 1/\sqrt{2})^{tr}, $$ if $\p <1$ \, $ (\leq \Phi'(|x|^2) \text{ for all } x\in \RR^2)$, we get, for all $x\in \RR^2\setminus\{0\}$, $$\begin{array}{c} H_{l,{ \F^{diag}_{(\frac 1 2,\frac 1 2)}}}(x,\p,\nabla W(x)) \leq \inf_u \Big\{ |x|^2 \Big( \Phi'(|x|^2)(2-u^2) + \p u^2\Big)\Big\} = -\infty, \end{array}$$ i.e., $W$ is a $\p$-MRF for the problem $(l,{ \F^{diag}_{(\frac 1 2,\frac 1 2)}})$. Therefore, in view of Theorem \ref{diagonalth}, $W$ is a $\p$-MRF for the problem \eqref{exdiag} as well.
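For completeness, let us also spell out a check left implicit above, namely that the running cost of \eqref{exdiag} verifies Hypothesis {\bf A}$_{diag}$, so that Theorem \ref{diagonalth} indeed applies. Here $d=4$ and $l(x,u)=x^2|u|^2$, so for every $\lambda\in\Lambda$ with $\lambda_1,\lambda_2>0$ one has
$$
l(x,0)+\sum_{i=1}^2 \lambda_i\, l\Big(x,\frac{u_i}{\sqrt[4]{\lambda_i}}\mathbf e_i\Big)= x^2\sum_{i=1}^2 \lambda_i^{\frac 1 2}u_i^2\leq x^2|u|^2 = l(x,u)\qquad \forall u\in\R^2,
$$
since $\lambda_i^{\frac 1 2}\leq 1$; hence \eqref{HIP} holds with $\M=1$.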
} \end{example} \appendix \section{Proof of Lemma \ref{lnear-control-affine}\label{proofsec}} For the reader's convenience, let us recall the statement of Lemma \ref{lnear-control-affine}: \vskip2truemm {\it For every $r\in [0,+\infty]$ \begin{equation}\label{comb} {\F}_{\text{\it aff}}(x,\Wbalr)\subset co~\F(x,\Ubalr)\quad \forall x\in\Omega\setminus\mathbf C. \end{equation} } \vskip2truemm We prove this result in the case in which all components of the $m$-tuple $K$ are equal to $1$, i.e., $K=(1,\dots,1)$ (this assumption implies $\bar d=m=d$, see Remark \ref{rmknormd}). Indeed, to prove the lemma when $K$ is a general $m$-tuple of odd numbers it is sufficient to apply the result to the rescaled control-polynomial vector field $$\hat\F(x,u):=\F(x,u_1^{\frac{1}{K_1}},\dots,u_m^{\frac{1}{K_m}}).$$ Fix $k\in\NN$ and denote by $\{-1,1\}^k$ the set of $k$-tuples $(s_1,\dots,s_k)$ with $s_j\in\{-1,1\}$. Denote by $P(S)$ the power set of a set $S$ and consider the set-valued map $S_k:\{-1,1\}\to P(\{-1,1\}^k)$ defined by $$S_k(s)=\left\{(s_1,\dots,s_k)\in\{-1,1\}^k\mid s_1\cdots s_k=s\right\}.$$ Let us begin with a combinatorial result: \vskip0.4truecm {\it Claim A:} {\it Let $k,d\in\NN$, $k< d$. For every $i_1,\dots, i_k\in\N$, $1\leq i_1<\cdots<i_k\leq d$, and for every $s\in \{-1,1\}$ \begin{equation}\label{sum} \sum_{(s_1,\dots,s_d)\in S_d(s)} s_{i_1}\cdots s_{i_k} =0. \end{equation}} To prove {\it Claim A}, notice that \begin{equation}\label{zerosum} \sum_{(s_1,\dots,s_k)\in \{-1,1\}^k}s_1 s_2\cdots s_k=0. \end{equation} Now, fix $i_1,\dots, i_k\in\N$, $1\leq i_1<\cdots<i_k\leq d$ and an auxiliary $k$-tuple $\bar{\mathbf s}=(\bar s_1,\dots,\bar s_k)\in \{-1,1\}^k$. One has \begin{equation*} \#\left\{(s_1,\dots,s_d)\in \{-1,1\}^d\mid s_{i_h}=\bar s_{h};~h=1,\dots,k\right\}=2^{d-k}. \end{equation*} Therefore, by a symmetry argument, \begin{equation}\label{card} \#\left\{(s_1,\dots,s_d)\in S_d(s)\mid s_{i_h}=\bar s_{h};~h=1,\dots,k\right\}=2^{d-k-1} \quad \forall s\in\{-1,1\}.
\end{equation} In view of (\ref{zerosum}) and of (\ref{card}), for every $s\in \{-1,1\}$ \begin{equation*} \sum_{(s_1,\dots,s_d)\in S_d(s)} s_{i_1}\cdots s_{i_k} = 2^{d-k-1}\left(\sum_{(s_{i_1},\dots,s_{i_k})\in \{-1,1\}^k} s_{i_1}\cdots s_{i_k}\right)=0. \end{equation*} This concludes the proof of {\it Claim A}. \vskip0.4truecm \noindent We continue the proof of Lemma \ref{lnear-control-affine} by proving {\it Claim B} below, which concerns the convex hull $ co~ \F(x,U_r)$. For every integer $j\ge 1$, let us set $$I_{r,j}:=\begin{cases} [-r^j,r^j]&\text{ if } r<+\infty\\ \RR &\text{ if } r=+\infty.\\ \end{cases}$$ \vskip0.4truecm {\it Claim B:} {\it Let $d\leq m$. For every $k\leq d$, $i_1,\dots, i_k\in\N$, $1\leq i_1<\cdots<i_k\leq d$, and $w\in I_{r,k}$, one has \begin{equation}\label{component} f_0(x)+w f_{\alpha_1,\dots,\alpha_m}(x)\in co~\F(x,U_r), \end{equation} where $\alpha_{j}=1$ for $j\in\{i_1,\dots,i_k\}$ and $\alpha_j=0$ otherwise.} \vskip0.4truecm To prove {\it Claim B}, denote by $s(w)$ the sign of $w$ and select from $I_{r,1}$ a set of $k$ real numbers $u_{i_1},\dots,u_{i_k}$ such that $u_{i_1}\cdots u_{i_k} =w$. Define $$u^{(\mathbf s)}:= \sum_{j=1}^k s_{j} |u_{i_j}| \mathbf e_{i_j} \quad \text{for every } \mathbf s:=(s_{1},\dots,s_{k}) \in S_k(s(w)).$$ By construction one has $u^{(\mathbf s)}\in [-r,r]^m=U_r$ and $$u^{(\mathbf s)}_{i_1}\cdots u^{(\mathbf s)}_{i_k}=w.$$ By {\it Claim A}, for every $h<k$ and every increasing subsequence $i_{j_1}<\cdots<i_{j_h}$ of $i_1,\dots, i_k$, one has $$\sum_{\mathbf s\in S_k(s(w))} u^{(\mathbf s)}_{i_{j_1}}\cdots u^{(\mathbf s)}_{i_{j_{h}}}=|u_{i_{j_1}}|\cdots |u_{i_{j_{h}}}| \sum_{(s_{1},\dots,s_{k})\in S_k(s(w))} s_{j_1}\cdots s_{j_{h}}=0.$$ Notice that $2^{k-1}$ is the cardinality of $S_k(s(w))$.
Hence, by the definition of a near-control-affine system, it easily follows that \begin{align*} \displaystyle \sum_{\mathbf s\in S_k(s(w))}& \frac{1}{2^{k-1}}\F(x,u^{(\mathbf s)})= f_0(x) + \\\,\\ &\sum_{h=1}^{k} \frac{1}{2^{k-1}}\left( \sum_{ i_{1}\leq i_{j_1}<\cdots<i_{j_h}\leq i_k } \left( \sum_{\mathbf s\in S_k(s(w))} u^{(\mathbf s)}_{i_{j_1}}\cdots u^{(\mathbf s)}_{i_{j_{h}}} \right)f_{{\bf{e}}_{i_{j_1}}+\dots+{\bf{e}}_{i_{j_h}}}(x)\right)\\\,\\ \displaystyle =&f_0(x)+\frac{1}{2^{k-1}}\left( \sum_{\mathbf s\in S_k(s(w))} u^{(\mathbf s)}_{i_{1}}\cdots u^{(\mathbf s)}_{i_{{k}}} \right)f_{{\bf{e}}_{i_{1}}+\dots+{\bf{e}}_{i_{k}}}(x) \\\,\\ \displaystyle =&f_0(x)+ w~f_{\alpha_1,\dots,\alpha_m}(x), \end{align*} which concludes the proof of {\it Claim B}. \vskip0.4truecm To end the proof of Lemma \ref{lnear-control-affine} in the case $K=(1,\dots,1)$, it suffices to remark that, for every $k=1,\dots,d$, by the definition of $\bar r$ given in \eqref{rbaldef} one has $M[-\bar r, \bar r]\subseteq [-r^k,r^k]=I_{r,k}$, so that $Mw_{\mathbf e_{i_1}+\dots+\mathbf e_{i_k}}\in I_{r,k}$ whenever $|w_{\mathbf e_{i_1}+\dots+\mathbf e_{i_k}}|\leq\bar r$. Therefore {\it Claim B} implies that for every $$w=(w_{\mathbf e_1},\dots,w_{\mathbf e_d},w_{\mathbf e_1+\mathbf e_2},w_{\mathbf e_1+\mathbf e_3},\dots,w_{\mathbf e_1+\cdots+\mathbf e_d})\in [-\bar r,\bar r]^M=\Wbalr$$ \begin{align*} {\F}_{\text{\it aff}}(x,w)&= \sum_{k=1}^d \sum_{i_1<\dots<i_k} \frac{1}{M}(f_0(x)+ M w_{\mathbf e_{i_1}+\dots+\mathbf e_{i_k}} f_{\mathbf e_{i_1}+\dots+\mathbf e_{i_k}}(x)) \in co~\F(x,U_r). \end{align*} \end{document}
\begin{document} \keywords{quandle, twisted Alexander matrix, surface knot, quandle homology group} \subjclass[2020]{57K10, 57K12.} \maketitle \abstract{Ishii and Oshiro introduced the notion of an $f$-twisted Alexander matrix, which is a quandle version of a twisted Alexander matrix, and defined an invariant of finitely presented quandles. In this paper, we study $f$-twisted Alexander matrices of certain quandles with the Alexander pair obtained from a quandle $2$-cocycle. We show that the 0-th elementary ideal of the $f$-twisted Alexander matrix of the knot quandle of a surface knot with the Alexander pair obtained from a quandle $2$-cocycle can be described in terms of the Carter-Saito-Satoh invariant. We also discuss a relationship between $f$-twisted Alexander matrices of connected quandles with the Alexander pair obtained from a quandle $2$-cocycle and quandle homology groups.} \section{Introduction} The Alexander polynomial \cite{Alexander1928topo} is one of the most important invariants of a knot. We can compute the Alexander polynomial from the Alexander matrix. Using Fox free derivatives \cite{Fox1953free}, the Alexander matrix is obtained from a presentation of the knot group. In the 1990s, X. S. Lin \cite{Lin2001repr} defined the twisted Alexander polynomial associated with a linear representation, and M. Wada \cite{Wada1994twisted} generalized this notion to finitely presented groups using Fox free derivatives. A quandle \cite{Joyce1982quandle, Matveev1982distributive} is a set with a binary operation whose axioms correspond to the Reidemeister moves. Given an oriented knot of any dimension, we have a quandle associated with the knot, which is called the knot quandle. It is known that the knot quandle is more useful than the knot group for distinguishing knots. In \cite{Ishii2022twisted}, A. Ishii and K. Oshiro introduced derivatives for quandles and the notion of an $f$-twisted Alexander matrix.
Using an $f$-twisted Alexander matrix, we can obtain an invariant for finitely presented quandles. Furthermore, they showed that the (twisted) Alexander polynomial of a knot can be recovered from an $f$-twisted Alexander matrix by a suitable choice of the setting (see also \cite{Ishii2022quandle}). To obtain an $f$-twisted Alexander matrix, we need to fix an Alexander pair, which is a pair of maps satisfying certain conditions. In \cite{Taniguchitwisted}, the author pointed out that an Alexander pair can be obtained from a quandle 2-cocycle, and showed that the 0-th elementary ideal of the $f$-twisted Alexander matrix of the knot quandle of a classical knot with the Alexander pair obtained from a quandle $2$-cocycle is determined by the quandle cocycle invariant \cite{Carter2003quandle}. Furthermore, using $f$-twisted Alexander matrices with a certain Alexander pair, he distinguished two classical knots which cannot be distinguished by the (twisted) Alexander polynomial. The purpose of this paper is to study $f$-twisted Alexander matrices of certain quandles with the Alexander pair obtained from a quandle $2$-cocycle. First, we focus on $f$-twisted Alexander matrices of knot quandles of surface knots with the Alexander pair obtained from a quandle 2-cocycle. We show that the 0-th elementary ideal of the $f$-twisted Alexander matrix of the knot quandle of a surface knot with the Alexander pair obtained from a quandle 2-cocycle is determined by the Carter-Saito-Satoh invariant \cite{Carter2006ribbon} (Theorem \ref{theo:main_1}). By definition, the Carter-Saito-Satoh invariant of a 2-knot, which is a 2-sphere embedded in $\mathbb{R}^4$, is trivial. This implies that the 0-th elementary ideal of the $f$-twisted Alexander matrix of the knot quandle of a 2-knot with the Alexander pair obtained from a quandle 2-cocycle is trivial (Corollary \ref{cor:elementary_ideal_surface_knot}).
Second, we discuss $f$-twisted Alexander matrices of connected quandles with the Alexander pair obtained from a quandle $2$-cocycle. We prove that the 0-th elementary ideal of the $f$-twisted Alexander matrix of a connected quandle with the Alexander pair obtained from a quandle 2-cocycle can be realized by the second quandle homology group of the quandle (Theorem \ref{theo:main_2}). Combining our results, we see that the second quandle homology group of the knot quandle of a 2-knot is trivial (Corollary \ref{cor:2-knot_homology}). This paper is organized as follows: In section \ref{sect:qdle_presentation}, we recall the definition of a quandle and presentations of quandles. In section \ref{sect:def_twisted_Alexander}, we review the definition of an $f$-twisted Alexander matrix. In section \ref{sect:Alexander_matrix_surface_knot}, we discuss $f$-twisted Alexander matrices of knot quandles of surface knots using the Alexander pair associated with a quandle 2-cocycle. In section \ref{sect:Alexander_matrix_connected_qdle}, we study $f$-twisted Alexander matrices of connected quandles using the Alexander pair associated with a quandle 2-cocycle and determine the second quandle homology group of the knot quandle of a 2-knot. \section{Quandles and quandle presentations} \label{sect:qdle_presentation} A {\it quandle}~\cite{Joyce1982quandle, Matveev1982distributive} is a non-empty set $X$ with a binary operation $X^2\to X;(x,y)\mapsto x^y$ satisfying the following axioms: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \item[(Q1)] For any $x\in X$, we have $x^x=x$. \item[(Q2)] For any $x,y\in X$, there exists a unique element $z\in X$ such that $z^y=x$. \item[(Q3)] For any $x,y,z\in X$, we have $(x^y)^z=(x^z)^{(y^z)}$. \end{itemize} We denote the element $z$ that appears in axiom (Q2) by $x^{y^{-1}}$. In this paper, $(x^y)^z$ is denoted by $x^{yz}$. Let $X$ and $Y$ be quandles.
A map $f:X\to Y$ is a {\it quandle homomorphism} if $f(x^y)=f(x)^{f(y)}$ for any $x,y\in X$. A quandle homomorphism $f:X\to Y$ is a {\it quandle isomorphism} if $f$ is a bijection. We denote by ${\rm Hom}(X,Y)$ the set of all homomorphisms from $X$ to $Y$. \begin{exam} Let $\mathcal{K}$ be an oriented, connected, closed $n$-dimensional submanifold in $\mathbb{R}^{n+2}$. Let $Q(\mathcal{K})$ be the set of all homotopy classes, $x=[(D,\alpha)]$, of pairs $(D,\alpha)$, where $D$ is a meridian disk of $\mathcal{K}$ and $\alpha$ is a path starting at a point in $\partial D$ and ending at a fixed base point $\ast\in \mathbb{R}^{n+2}\backslash\mathcal{K}$. Then, $Q(\mathcal{K})$ is a quandle with an operation defined by \[ [(D_1,\alpha)]^{[(D_2,\beta)]}=[(D_1,\alpha\cdot\beta^{-1}\cdot\partial D_2\cdot \beta)], \] where $\partial D_2$ is a meridian loop starting from the initial point of $\beta$ and going along $\partial D_2$ in the positive direction. We call this quandle $Q(\mathcal{K})$ the {\it knot quandle} of $\mathcal{K}$. \end{exam} Next, we review the definition of presentations of quandles. Let $S$ be a non-empty set. The {\it free quandle} on $S$, denoted by $FQ(S)$, is a quandle with a map $\mu:S\to FQ(S)$ such that for any quandle $X$ and any map $f:S\to X$, there exists a unique quandle homomorphism $f_{\#}:FQ(S)\to X$ such that $f=f_{\#}\circ\mu$. For $R\subset FQ(S)^2$, let us define the congruence $\sim_R$ on $FQ(S)$ to be the smallest congruence containing $R$. Then, the set $\langle S\mid R\rangle:=FQ(S)/\sim_R$ has the quandle operation inherited from $FQ(S)$. We call the elements of $S$ the {\it generators} of $\langle S\mid R\rangle$ and the elements of $R$ the {\it relators} of $\langle S\mid R\rangle$. If a quandle $X$ is isomorphic to $\langle S\mid R\rangle$, we say that $X$ has a presentation $\langle S\mid R\rangle$. A presentation $\langle S\mid R\rangle$ is {\it finite} if $S$ and $R$ are finite sets.
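Since the quandle axioms (Q1)-(Q3) are finitely many equational conditions, they can be checked mechanically for any finite quandle. As an illustrative sketch (not part of the text), the following snippet verifies the axioms for the standard dihedral quandle $R_n$, whose operation is $x^y = 2y - x \pmod n$; the helper names `make_dihedral_quandle` and `is_quandle` are ours, not from the paper.

```python
# Illustrative sketch: verify the quandle axioms (Q1)-(Q3)
# for the dihedral quandle R_n with operation x^y = 2y - x (mod n).

def make_dihedral_quandle(n):
    """Return the binary operation (x, y) -> x^y of the dihedral quandle R_n."""
    return lambda x, y: (2 * y - x) % n

def is_quandle(op, elements):
    """Check the quandle axioms for a binary operation on a finite set."""
    elements = list(elements)
    for x in elements:
        if op(x, x) != x:                      # (Q1) idempotency: x^x = x
            return False
    for y in elements:
        images = {op(z, y) for z in elements}  # (Q2) z -> z^y is a bijection
        if images != set(elements):
            return False
    for x in elements:
        for y in elements:
            for z in elements:                 # (Q3) right self-distributivity
                if op(op(x, y), z) != op(op(x, z), op(y, z)):
                    return False
    return True

for n in (3, 4, 5):
    assert is_quandle(make_dihedral_quandle(n), range(n))
```

The same brute-force check applies verbatim to any finite operation table, e.g. to test whether a candidate presentation quotient is actually a quandle.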
Refer to \cite{Fenn1992racks, Kamada2017surface} for details. R. Fenn and C. Rourke showed \cite{Fenn1992racks} that $\langle S_1\mid R_1\rangle$ and $\langle S_2\mid R_2\rangle$ are isomorphic if and only if these presentations are related by a finite sequence of the following operations: \begin{enumerate} \setlength{\leftskip}{0.5cm} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \item[(T1-1)] $\langle S\mid R\rangle\leftrightarrow\langle S\mid R\cup\{(x,x)\}\rangle$ $(x\in FQ(S))$. \item[(T1-2)] $\langle S\mid R\cup\{(x,y)\}\rangle\leftrightarrow\langle S\mid R\cup\{(x,y), (y,x)\}\rangle$. \item[(T1-3)] $\langle S\mid R\cup\{(x,y), (y,z)\}\rangle\leftrightarrow\langle S\mid R\cup\{(x,y), (y,z),(x,z)\}\rangle$. \item[(T1-4)] $\langle S\mid R\cup\{(x,y)\}\rangle\leftrightarrow\langle S\mid R\cup\{(x,y),(x^{z^{\varepsilon}},y^{z^{\varepsilon}})\}\rangle$ $(z\in S,\varepsilon\in\{\pm 1\})$. \item[(T1-5)] $\langle S\mid R\cup\{(x,y)\}\rangle\leftrightarrow\langle S\mid R\cup\{(x,y),(z^x,z^y)\}\rangle$ $(z\in FQ(S))$. \item[(T2)] $\langle S\mid R\rangle\leftrightarrow\langle S\cup\{y\}\mid R\cup\{(y,w_y)\}\rangle$ $(y\notin S,w_y\in FQ(S))$. \end{enumerate} These operations are called {\it Tietze's moves}. \begin{exam} {\rm Let $D$ be a diagram of an oriented classical knot $K$ in $\mathbb{R}^3$ and ${\rm Arc}(D)$ the set of arcs of $D$. For each crossing $\chi$, we define a relator $r_\chi:=(x_i^{x_j},x_k)$, where $x_i,x_j,x_k$ are the arcs around $\chi$ such that the normal orientation of the over arc $x_j$ points from $x_i$ to $x_k$ (see Figure \ref{crossing_condition}). \begin{figure} \caption{The arcs around the crossing $\chi$} \label{crossing_condition} \end{figure} Then, the knot quandle $Q(K)$ has the following presentation: \[ \langle {\rm Arc}(D)\mid \{ r_\chi\mid \chi:\textrm{ a crossing of }D\}\rangle. \] } \end{exam} \section{$f$-twisted Alexander matrices} \label{sect:def_twisted_Alexander} Let $X$ be a quandle and $R$ a ring with unity $1$.
A pair $(f_1,f_2)$ of maps $f_1,f_2:X^2\to R$ is an {\it Alexander pair} \cite{Ishii2022twisted} if $f_1$ and $f_2$ satisfy the following axioms: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \item For any $x\in X$, we have $f_1(x,x)+f_2(x,x)=1$. \item For any $x,y\in X$, $f_1(x,y)$ is a unit of $R$. \item For any $x,y,z\in X$, we have \begin{align*} &f_1(x^y,z)f_1(x,y)=f_1(x^z,y^z)f_1(x,z),\\ &f_1(x^y,z)f_2(x,y)=f_2(x^z,y^z)f_1(y,z), {\rm and}\\ &f_2(x^y,z)=f_1(x^z,y^z)f_2(x,z)+f_2(x^z,y^z)f_2(y,z). \end{align*} \end{itemize} In \cite{Andruskiewitsch2003racks}, N. Andruskiewitsch and M. Gra\~{n}a studied extensions of quandles and introduced a {\it dynamical cocycle}. An Alexander pair is a dynamical cocycle corresponding to a {\it quandle module}. \begin{exam} \label{exam:Alexander_pair_1} Let $X$ be a quandle and $\mathbb{Z}[t^{\pm 1}]$ the ring of Laurent polynomials with integer coefficients. Let us define maps $f_1,f_2:X^2\to \mathbb{Z}[t^{\pm 1}]$ by \[ f_1(x,y):=t,\quad f_2(x,y):=1-t. \] Then, the pair $(f_1,f_2)$ is an Alexander pair. \end{exam} \begin{exam} \label{exam:Alexander_pair_2} Let $X$ be a quandle and $A$ an abelian group. A map $\theta:X^2\to A$ is a {\it quandle $2$-cocycle} \cite{Carter2003quandle} if $\theta$ satisfies the following conditions: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \item For any $x\in X$, we have $\theta(x,x)=0_A$, where $0_A$ is the identity element. \item For any $x,y,z\in X$, we have $\theta(x,y)+\theta(x^y,z)=\theta(x,z)+\theta(x^z,y^z)$. \end{itemize} Let $\mathbb{Z}[A]$ be the group ring. For a quandle $2$-cocycle $\theta:X^2\to A$, we define the maps $f_{\theta},0:X^2\to\mathbb{Z}[A]$ by \[ f_{\theta}(x,y):=1\cdot\theta(x,y),\quad 0(x,y):=0. \] By a direct calculation, we see that $f_{\theta}=(f_{\theta},0)$ is an Alexander pair, which is called the {\it Alexander pair associated with the quandle $2$-cocycle} \cite{Taniguchitwisted}.
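Let us record the direct calculation. Writing the group ring $\mathbb{Z}[A]$ multiplicatively, for all $x,y,z\in X$ we have
\begin{align*}
f_{\theta}(x,x)+0(x,x)&=1\cdot\theta(x,x)=1\cdot 0_A=1,\\
f_{\theta}(x^y,z)f_{\theta}(x,y)&=1\cdot(\theta(x,y)+\theta(x^y,z))=1\cdot(\theta(x,z)+\theta(x^z,y^z))=f_{\theta}(x^z,y^z)f_{\theta}(x,z),
\end{align*}
where the second line uses the quandle $2$-cocycle condition. Moreover, each $f_{\theta}(x,y)=1\cdot\theta(x,y)$ is a unit of $\mathbb{Z}[A]$ with inverse $1\cdot(-\theta(x,y))$, and the remaining two axioms hold trivially since every term involving the second component $0$ vanishes.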
\end{exam} Next, we will review the definition of $f$-twisted Alexander matrices introduced in \cite{Ishii2022twisted}. Refer to \cite{Ishii2022twisted} for details. Let $S=\{ x_1,\ldots,x_n\}$ be a finite set and $Q$ a quandle with a finite presentation $\langle x_1,\ldots,x_n\mid r_1,\ldots,r_m\rangle$. Let ${\rm pr}_{\rm qdle}:FQ(S)\to Q$ be the canonical projection. In this paper, we often omit ${\rm pr}_{\rm qdle}$ and simply write $a$ for ${\rm pr}_{\rm qdle}(a)$. Let $R$ be a ring and $f=(f_1,f_2)$ an Alexander pair of maps $f_1,f_2:Q\times Q\to R$. For $j\in\{1,\ldots,n\}$, let us define a map $\frac{\partial_{f}}{\partial{x_j}}:FQ(S)\to R$ by the following rules: \begin{itemize} \item For any $x,y\in FQ(S)$, we have $\frac{\partial_{f}}{\partial{x_j}}(x^y)=f_1(x,y)\frac{\partial_{f}}{\partial{x_j}}(x)+f_2(x,y)\frac{\partial_{f}}{\partial{x_j}}(y)$. \item For each $i\in\{1,\ldots,n\}$, we have $\frac{\partial_{f}}{\partial{x_j}}(x_i)=\begin{cases} 1\ (i=j)\\ 0\ (i\neq j) \end{cases}$. \end{itemize} The map $\frac{\partial_{f}}{\partial{x_j}}:FQ(S)\to R$ is called the {\it $f$-derivative with respect to $x_j$} \cite{Ishii2022twisted}. Using the first rule, we see that \[ \frac{\partial_{f}}{\partial{x_j}}(x^{y^{-1}})=f_1(x^{y^{-1}},y)^{-1}\frac{\partial_{f}}{\partial{x_j}}(x)-f_1(x^{y^{-1}},y)^{-1}f_2(x^{y^{-1}},y)\frac{\partial_{f}}{\partial{x_j}}(y) \] for any $x,y\in FQ(S)$.\\ For a relator $r=(r_1,r_2)$, we define $\frac{\partial_{f}}{\partial{x_j}}(r):=\frac{\partial_{f}}{\partial{x_j}}(r_1)-\frac{\partial_{f}}{\partial{x_j}}(r_2)$. \begin{prop} \label{prop:calculation} Let $Q$ be a quandle with a finite presentation $\langle S\mid R\rangle$, $X$ a quandle, $A$ an abelian group and $\theta:X^2\to A$ a quandle $2$-cocycle.
For any element $x^{y_{1}^{\varepsilon_1}\cdots y_{n}^{\varepsilon_n}}\in FQ(S)$ and $z\in S$, we have \[ \frac{\partial_{f_{\theta}}}{\partial{z}}(x^{y_{1}^{\varepsilon_1}\cdots y_{n}^{\varepsilon_n}})=1\cdot\left(\sum^{n}_{i=1}\varepsilon_i\theta(x^{y_{1}^{\varepsilon_1}\cdots y_{i-1}^{\varepsilon_{i-1}}y_i^{\frac{\varepsilon_i-1}{2}}},y_i)\right)\frac{\partial_{f_{\theta}}}{\partial{z}}(x). \] \end{prop} \begin{proof} By the definition of the $f$-derivative, it holds that \[ \frac{\partial_{f_{\theta}}}{\partial z}(x^y)=f_\theta(x,y)\frac{\partial_{f_{\theta}}}{\partial z}(x)+0(x,y)\frac{\partial_{f_{\theta}}}{\partial z}(y)=1\cdot\theta(x,y)\frac{\partial_{f_{\theta}}}{\partial z}(x) \] and \begin{eqnarray*} \frac{\partial_{f_{\theta}}}{\partial z}(x^{y^{-1}})&=&f_\theta(x^{y^{-1}},y)^{-1}\frac{\partial_{f_{\theta}}}{\partial z}(x)-f_{\theta}(x^{y^{-1}},y)^{-1}0(x^{y^{-1}},y)\frac{\partial_{f_{\theta}}}{\partial z}(y)\\ &=&1\cdot(-\theta(x^{y^{-1}},y))\frac{\partial_{f_{\theta}}}{\partial z}(x) \end{eqnarray*} for any $x,y\in FQ(S)$. Hence, we have \[ \frac{\partial_{f_{\theta}}}{\partial{z}}(x^{y_{1}^{\varepsilon_1}\cdots y_{n}^{\varepsilon_n}})=1\cdot(\varepsilon_n\theta(x^{y_{1}^{\varepsilon_1}\cdots y_{n-1}^{\varepsilon_{n-1}}y_{n}^{\frac{\varepsilon_n-1}{2}}},y_{n}))\frac{\partial_{f_{\theta}}}{\partial z}(x^{y_{1}^{\varepsilon_1}\cdots y_{n-1}^{\varepsilon_{n-1}}}). \] Repeating this procedure, we see that \[ \frac{\partial_{f_{\theta}}}{\partial{z}}(x^{y_{1}^{\varepsilon_1}\cdots y_{n}^{\varepsilon_n}})=1\cdot\left(\sum^{n}_{i=1}\varepsilon_i\theta(x^{y_{1}^{\varepsilon_1}\cdots y_{i-1}^{\varepsilon_{i-1}}y_{i}^{\frac{\varepsilon_i-1}{2}}},y_{i})\right)\frac{\partial_{f_{\theta}}}{\partial z}(x). \] \end{proof} Let $A$ be an $m\times n$ matrix over a commutative ring $R$.
The {\it $d$-th elementary ideal} of $A$, which is denoted by $E_d(A)$, is the ideal generated by all $(n-d)$-minors of $A$ if $n-m\leq d<n$, and \[ E_d(A)=\begin{cases} 0\quad\ \textrm{if }d<n-m,\\ R\quad\textrm{if }n\leq d. \end{cases} \] Let $Q$ be a quandle with a finite presentation $\langle x_1,\ldots,x_n\mid r_1,\ldots,r_m\rangle$, $X$ a quandle and $\rho:Q\to X$ a quandle homomorphism. Let $R$ be a ring and $f=(f_1,f_2)$ an Alexander pair of maps $f_1,f_2:X\times X\to R$. We remark that the pair $f\circ\rho^2=(f_1\circ(\rho\times\rho),f_2\circ(\rho\times\rho))$ is also an Alexander pair (see \cite{Ishii2022twisted}). Let us define the $m\times n$ matrix $A(Q,\rho;f_1,f_2)$ by \[ A(Q,\rho;f_1,f_2)=\left( \begin{array}{ccc} \frac{\partial_{f\circ\rho^2}}{\partial{x_1}}(r_1) & \cdots & \frac{\partial_{f\circ\rho^2}}{\partial{x_n}}(r_1) \\ \vdots & \ddots & \vdots \\ \frac{\partial_{f\circ\rho^2}}{\partial{x_1}}(r_m) & \cdots & \frac{\partial_{f\circ\rho^2}}{\partial{x_n}}(r_m) \end{array} \right). \] We call this matrix $A(Q,\rho;f_1,f_2)$ the {\it Alexander matrix of the finite presentation $\langle x_1,\ldots,x_n\mid r_1,\ldots,r_m\rangle$ associated to the quandle homomorphism $\rho$} or the {\it $f$-twisted Alexander matrix} of $(Q,\rho)$ with respect to the quandle presentation $\langle x_1,\ldots,x_n\mid r_1,\ldots,r_m\rangle$ \cite{Ishii2022twisted}. Let $Q^{\prime}$ be a quandle with a finite presentation and let $\rho^{\prime}:Q^{\prime}\to X$ be a quandle homomorphism. In \cite{Ishii2022twisted}, A. Ishii and K.
Oshiro showed that if there exists a quandle isomorphism $\varphi:Q\to Q^{\prime}$ such that $\rho=\rho^{\prime}\circ\varphi$, then $A(Q,\rho;f_1,f_2)$ and $A(Q^{\prime},\rho^{\prime};f_1,f_2)$ are related by a finite sequence of the following transformations $({\rm M1})\sim({\rm M8})$: \begin{align*} &({\rm M1})\ ({\bm a_1},\ldots,{\bm a_i},\ldots,{\bm a_j},\ldots,{\bm a_n})\leftrightarrow({\bm a_1},\ldots,{\bm a_i}+{\bm a_j}r,\ldots,{\bm a_j},\ldots,{\bm a_n})\ (r\in R),\\ &({\rm M2})\ \left( \begin{array}{c} {\bm a_1} \\ \vdots \\ {\bm a_i}\\ \vdots \\ {\bm a_j} \\ \vdots \\ {\bm a_n} \end{array} \right)\leftrightarrow\left( \begin{array}{c} {\bm a_1} \\ \vdots \\ {\bm a_i}+r{\bm a_j} \\ \vdots \\ {\bm a_j} \\ \vdots \\ {\bm a_n} \end{array} \right)\ (r\in R),\\ &({\rm M3})\ A\leftrightarrow\left( \begin{array}{c} A\\ {\bm 0} \end{array} \right),\ \ ({\rm M4})\ A\leftrightarrow\left( \begin{array}{cc} A & {\bm 0}\\ {\bm 0} & 1 \end{array} \right).\\ & ({\rm M5}) ({\bm a_1},\ldots,{\bm a_i},\ldots,{\bm a_j},\ldots,{\bm a_n})\leftrightarrow({\bm a_1},\ldots,{\bm a_j},\ldots,{\bm a_i},\ldots,{\bm a_n}),\\ & ({\rm M6}) ({\bm a_1},\ldots,{\bm a_i},\ldots,{\bm a_n})\leftrightarrow({\bm a_1},\ldots,{\bm a_i}u,\ldots,{\bm a_n})\ (u:\textrm{a unit of }R),\\ &({\rm M7})\ \left( \begin{array}{c} {\bm a_1} \\ \vdots \\ {\bm a_i}\\ \vdots \\ {\bm a_j} \\ \vdots \\ {\bm a_n} \end{array} \right)\leftrightarrow\left( \begin{array}{c} {\bm a_1} \\ \vdots \\ {\bm a_j} \\ \vdots \\ {\bm a_i} \\ \vdots \\ {\bm a_n} \end{array} \right),\ \ ({\rm M8})\ \left( \begin{array}{c} {\bm a_1} \\ \vdots \\ {\bm a_i}\\ \vdots \\ {\bm a_n} \end{array} \right)\leftrightarrow\left( \begin{array}{c} {\bm a_1} \\ \vdots \\ u{\bm a_i}\\ \vdots \\ {\bm a_n} \end{array} \right)\ (u:\textrm{a unit of }R). \end{align*} Furthermore, if $R$ is a commutative ring, we have \[ E_d(A(Q,\rho;f_1,f_2))=E_d(A(Q^{\prime},\rho^{\prime};f_1,f_2)). 
\] \begin{comment} \begin{exam}[\cite{Ishii2022twisted}] Let $K$ be an oriented classical knot, $X$ be a quandle and $(f_1,f_2)$ be the Alexander pair of maps $f_1,f_2:X^2\to\mathbb{Z}[t^{\pm 1}]$ in Example \ref{exam:Alexander_pair_1}. In this setting, for any quandle homomorphism $\rho:Q(K)\to X$, the $f$-twisted Alexander matrix $A(Q(K),\rho;f_1,f_2)$ is an {\it Alexander matrix} (cf. \cite{Burde2014knots}). Thus, we see that ${\rm Coker}(A(Q(K),\rho;f_1,f_2))$ is isomorphic to the direct sum of the {\it Alexander module} of $K$ and $\mathbb{Z}$ and it holds that $E_d(A(Q(K),\rho;f_1,f_2))$ is equal to the {\it $d-1$-th elementary ideal} of $K$ (cf. \cite{Burde2014knots}). \end{exam} \end{comment} \section{Surface knot invariants obtained from a quandle $2$-cocycle} \label{sect:Alexander_matrix_surface_knot} In this section, we will discuss the relationship between the $f$-twisted Alexander matrices of the knot quandles of surface knots, defined using the Alexander pair in Example \ref{exam:Alexander_pair_2}, and the Carter-Saito-Satoh invariant \cite{Carter2006ribbon}. First, we will review the definitions of surface knots and their diagrams. Refer to \cite{Carter2004surfaces,Kamada2017surface} for details. A {\it surface knot} is a connected oriented closed surface smoothly embedded in $4$-space $\mathbb{R}^4$, considered up to ambient isotopy. When the surface is homeomorphic to a $2$-sphere, it is called a {\it 2-knot}. We fix a projection ${\rm pr}:\mathbb{R}^4\to \mathbb{R}^3$. Every surface knot $F$ can be perturbed slightly in $\mathbb{R}^4$ so that the singularity set of ${\rm pr}(F)$ consists of double points, isolated triple points, and isolated branched points as illustrated in Figure \ref{singular_set}. Each connected component of the set of all double points of ${\rm pr}(F)$ is a curve in $\mathbb{R}^3$, which is called a {\it double point curve}. For each double point curve $\delta$, the preimage ${\rm pr}^{-1}(\delta)$ is a union of two curves in $F$.
We call the curve that lies under the other the {\it lower decker curve}. \begin{figure} \caption{The singular points} \label{singular_set} \end{figure} The crossing information is indicated in ${\rm pr}(F)$ as follows: at each double point curve, two sheets intersect locally, one of which is under the other relative to the projection direction of ${\rm pr}$. Then, the lower sheet is broken by the upper sheet. A {\it diagram} of $F$ is the image ${\rm pr}(F)$ with such crossing information. We can regard a diagram as a union of disjoint compact connected surfaces. For a diagram $D$, we denote by $S(D)$ the set of such connected surfaces of $D$. We call an element of $S(D)$ a {\it sheet}. Since the surface is oriented, we take normal vectors $\vec{n}$ to ${\rm pr}(F)$ such that the triple $(\vec{v_1},\vec{v_2},\vec{n})$ represents the orientation of $\mathbb{R}^3$, where $(\vec{v_1},\vec{v_2})$ defines the orientation of ${\rm pr}(F)$. Such normal vectors are defined on ${\rm pr}(F)$ at all points other than the isolated branched points. The normal orientation represented by $\vec{n}$ is called the {\it normal orientation} determined by the orientation of $F$. In this paper, we indicate the orientations of sheets by the normal orientation. Let $D$ be a diagram of a surface knot $F$. For each double point curve $\delta$, let $x_i,x_j,x_k$ be the sheets around the double point curve $\delta$ such that the normal orientation of the upper sheet $x_j$ points from $x_i$ to $x_k$ as shown in Figure \ref{coloring_rule}. \begin{figure} \caption{The sheets around double point curve $\delta$} \label{coloring_rule} \end{figure} The sheet $x_i$ is called the {\it source sheet} of $\delta$. Then, we define the relator $r_\delta$ by $(x_i\ast x_j,x_k)$. The knot quandle of a surface knot $F$ has the following presentation: \[ \langle S(D)\mid \{ r_{\delta}\mid \delta:\textrm{ a double point curve of }D\}\rangle.
\] This presentation is called the {\it Wirtinger presentation} of $Q(F)$ with respect to the diagram $D$. Next, we will introduce the Carter-Saito-Satoh invariant \cite{Carter2006ribbon}. Let $X$ be a quandle. A map $c:S(D)\to X$ is an {\it $X$-coloring of $D$} if it satisfies the following condition near each double point curve: if $x_i,x_j,x_k$ are the sheets as illustrated in Figure \ref{coloring_rule}, then it holds that $c(x_i)\ast c(x_j)=c(x_k)$. The element $c(x)$ assigned to the sheet $x$ is called the {\it color} of $x$. We denote the set of all $X$-colorings of $D$ by ${\rm Col}_X(D)$. Let $\rho:Q(F)\to X$ be a quandle homomorphism. Using the Wirtinger presentation of $Q(F)$ with respect to the diagram $D$, we can regard each sheet as an element of $Q(F)$. Hence, let us define the map $c_{\rho}:S(D)\to X$ by $c_{\rho}(x)=\rho(x)$ for any $x\in S(D)$. Then, the map $c_{\rho}$ is an $X$-coloring of $D$. In this paper, we call $c_{\rho}$ the {\it $X$-coloring of $D$ corresponding to $\rho$}. It is known that the map ${\rm Hom}(Q(F),X)\to{\rm Col}_X(D);\rho\mapsto c_\rho$ is bijective. Let $c:S(D)\to X$ be an $X$-coloring of $D$, $A$ an abelian group and $\theta:X^2\to A$ a quandle $2$-cocycle. We consider an oriented immersed circle $L$ on $F$. We denote ${\rm pr}(L)$ by $L^D$. We assume that $L^D$ intersects the double point curves transversely, and misses triple points and branched points. Let $d_1,\ldots,d_n$ be the points on $F$ at which $L$ intersects the lower decker curves. In this paper, we denote ${\rm pr}(x)$ by $x^D$ for $x\in F$. For each $d_l$, we assign the sign $\varepsilon(d_{l})\in\{\pm 1\}$ such that $\varepsilon(d_{l})=+1$ if and only if the orientation of $L^D$ agrees with the normal orientation of the upper sheet around $d_{l}^D$. Then, let us define an element $W_{\theta}(d_l,c)$ as follows: let $x_i,x_j,x_k$ be the sheets around $d_l^D$ such that $x_j$ is the upper sheet and $x_i$ is the source sheet as shown in Figure \ref{weight_surface_knot}.
We set $W_{\theta}(d_l,c):=\varepsilon(d_{l})\theta(c(x_i),c(x_j))$. Moreover, we put $W_{\theta}(L,c):=\sum^n_{l=1}W_{\theta}(d_l,c)$. \begin{figure} \caption{The weight at $d_l$} \label{weight_surface_knot} \end{figure} \begin{lemm}[\cite{Carter2006ribbon}] If $L$ and $L^{\prime}$ are homologous, then we have $W_{\theta}(L,c)=W_{\theta}(L^{\prime},c)$. \end{lemm} Hence, for each $\lambda\in H_1(F)$, we can define $W_{\theta}(\lambda,c):=W_{\theta}(L_\lambda,c)$, where $L_\lambda$ is a representative curve of $\lambda$. Then, we denote the multi-set $\{W_{\theta}(\lambda,c)\mid c\in{\rm Col}_{X}(D)\}$ by $\Omega_{\theta}(\lambda)$. Furthermore, we define a family of multi-sets of $A$ by $\Omega_{\theta}(F)=\{\Omega_{\theta}(\lambda)\mid\lambda\in H_1(F)\}$. \begin{prop}[\cite{Carter2006ribbon}] The family $\Omega_{\theta}(F)$ does not depend on the choice of a diagram $D$ of $F$. \end{prop} Thus, $\Omega_{\theta}(F)$ is an invariant of surface knots. The aim of this section is to show the following theorem. \begin{theo} \label{theo:main_1} Let $D$ be a diagram of a surface knot $F$, $X$ a quandle, $A$ an abelian group and $\theta:X^2\to A$ a quandle 2-cocycle. Then, for any quandle homomorphism $\rho:Q(F)\to X$, it holds that \[ (\{1\cdot W_{\theta}(\lambda,c_{\rho})-1\cdot 0_A\mid \lambda\in H_1(F)\})=E_0(A(Q(F),\rho;f_{\theta},0)), \] where $c_{\rho}$ is the $X$-coloring of $D$ corresponding to $\rho$. \end{theo} \begin{remark} \label{remark:ideal} {\rm Let $\{ b_1,\ldots,b_{2g}\}$ be a basis of $H_1(F)$. Since $W_{\theta}(\lambda_1+\lambda_2,c)$ is equal to $W_{\theta}(\lambda_1,c)+W_{\theta}(\lambda_2,c)$ for any $\lambda_1,\lambda_2\in H_1(F)$, we have \begin{eqnarray*} 1\cdot W_{\theta}(\lambda_1+\lambda_2,c)-1\cdot 0_A&=&1\cdot( W_{\theta}(\lambda_1,c)+W_{\theta}(\lambda_2,c))-1\cdot 0_A\\ &=&1\cdot W_{\theta}(\lambda_1,c)(1\cdot W_{\theta}(\lambda_2,c)-1\cdot 0_A)\\ &&+1\cdot W_{\theta}(\lambda_1,c)-1\cdot 0_A.
\end{eqnarray*} Hence, the ideal generated by $\{1\cdot W_{\theta}(\lambda,c)-1\cdot 0_A\mid \lambda\in H_1(F)\}$ coincides with the ideal $(1\cdot W_{\theta}(b_1,c)-1\cdot 0_A,\ldots,1\cdot W_{\theta}(b_{2g},c)-1\cdot 0_A)$. } \end{remark} \begin{proof}[Proof of Theorem \ref{theo:main_1}] Let $D$ be a diagram of a surface knot $F$, $S(D)=\{ x_1,\ldots,x_n\}$ the set of all sheets of $D$, ${\rm Ldeck}(D)$ the set of all lower decker curves and $q$ a point in $x_n$. For each $i\in\{1,\ldots,n-1\}$, we take a path $\gamma_i$ in $F$ starting from ${\rm pr}^{-1}(q)$ and terminating at a point of ${\rm pr}^{-1}(x_i)$ as shown in Figure \ref{path_gamma}. We assume that ${\rm pr}(\gamma_i)$ intersects the double point curves transversely, and misses triple points and branched points. We denote by $\gamma_n$ the constant path $[0,1]\to F$ at ${\rm pr}^{-1}(q)$. For each $i\in\{1,\ldots,n\}$, let $m_i$ be the cardinality of the set $\gamma_i\cap{\rm Ldeck}(D)$ and $d_{i1},\ldots,d_{im_i}$ the points at which $\gamma_i$ intersects ${\rm Ldeck}(D)$, in order along $\gamma_i$. We denote the upper sheet at $d_{ij}^D$ by $u_{ij}$. Here, $u_{ij}$ is an element of $S(D)$. Then, let us define the relator $r_i$ by $(x_n^{u_{i1}^{\varepsilon(d_{i1})}\cdots u_{im_i}^{\varepsilon(d_{im_i})}},x_i)$. \begin{figure} \caption{The path $\gamma_i$} \label{path_gamma} \end{figure} We denote the upper sheet around a double point curve $\delta$ by $u_\delta$. For each double point curve $\delta$, let $s(\delta)$ be an element of $\{1,\ldots,n\}$ satisfying that $x_{s(\delta)}$ is the source sheet of $\delta$, and $t(\delta)$ an element of $\{1,\ldots,n\}$ satisfying that the normal orientation of the upper sheet $u_\delta$ points to $x_{t(\delta)}$.
Then, we define the relator $r_\delta$ by \[ r_\delta=\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-\varepsilon\left(d_{t(\delta)1}\right)}},x_n\right). \] We see that $\left\langle x_1,\ldots, x_n~ \begin{array}{|c} r_1,\ldots,r_{n-1}\\ r_\delta \ (\delta:\textrm{a double point curve}) \end{array} \right\rangle $ and the Wirtinger presentation of $D$ are related by a finite sequence of Tietze moves. Thus, $\left\langle x_1,\ldots, x_n~ \begin{array}{|c} r_1,\ldots,r_{n-1}\\ r_\delta \ (\delta:\textrm{a double point curve}) \end{array} \right\rangle $ is also a presentation of the knot quandle $Q(F)$. We fix this presentation. Let $\rho:Q(F)\to X$ be a quandle homomorphism and $c_{\rho}:S(D)\to X$ the $X$-coloring of $D$ corresponding to $\rho$. Then, we consider the $f$-twisted Alexander matrix of $(Q(F),\rho)$ with respect to the fixed presentation. First, let us consider $\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(r_i)$. By Proposition \ref{prop:calculation}, for any $i\in\{1,\ldots,n-1\}$ and $j\in\{1,\ldots,n\}$, there is an element $a_i\in A$ such that \[ \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left(x_n^{u_{i1}^{\varepsilon(d_{i1})}\cdots u_{im_i}^{\varepsilon(d_{im_i})}}\right)=1\cdot a_i\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_n).
\] Hence, we have the following equalities: \begin{eqnarray*} \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(r_i)&=&\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left(x_n^{u_{i1}^{\varepsilon(d_{i1})}\cdots u_{im_i}^{\varepsilon(d_{im_i})}}\right)-\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_i)\\ &=&1\cdot a_i\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_n)-\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_i)\\ &=&\begin{cases} 1\cdot a_i\hspace{5.5mm} (j=n)\\ -1\cdot 0_A\ (j=i)\\ 0\hspace{11.5mm} (j\neq i,n). \end{cases} \end{eqnarray*} Next, we will discuss $\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(r_\delta)$ for any double point curve $\delta$ and $j\in\{1,\ldots,n\}$. For each double point curve $\delta$, we define the immersed curve $L_\delta$ on $F$ by $L_\delta=\gamma_{s(\delta)}\cdot\gamma_\delta\cdot\gamma_{t(\delta)}^{-1}$, where $\gamma_\delta$ is a path from the terminal point of $\gamma_{s(\delta)}$ to the terminal point of $\gamma_{t(\delta)}$ which intersects the lower decker curves only at one point $d_\delta$, as illustrated in Figure \ref{path_alpha}. \begin{figure} \caption{The path $\gamma_\delta$} \label{path_alpha} \end{figure} We remark that $L_\delta\cap {\rm Ldeck}(D)=\{d_{s({\delta})1},\ldots,d_{s({\delta})m_{s(\delta)}},d_\delta,d_{t(\delta)1},\ldots,d_{t(\delta)m_{t(\delta)}}\}$.
If $\varepsilon(d_{t(\delta)1})=+1$, it holds that \begin{eqnarray*} &&\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-\varepsilon\left(d_{t(\delta)1}\right)}}\right)\\ &=&\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-1}}\right)\\ &=&1\cdot\left(-\theta\left(\rho\left(x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-1}}\right),\rho(u_{t(\delta)1})\right)\right)\\ &&\times\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)2}^{-\varepsilon\left(d_{t(\delta)2}\right)}}\right).
\end{eqnarray*} By the definition of $c_{\rho}$, we remark that the following conditions are satisfied: \begin{itemize} \item The element $\rho(u_{t(\delta)1})$ equals the color assigned by $c_\rho$ to the upper sheet around $d_{t(\delta)1}^D$. \item The element $\rho\left(x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-1}}\right)$ equals the color assigned by $c_{\rho}$ to the source sheet around $d_{t(\delta)1}^D$. \end{itemize} Thus, it holds that \[ -\theta\left(\rho\left(x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-1}}\right),\rho(u_{t(\delta)1})\right)=W_{\theta}(d_{t(\delta)1},c_{\rho}).\] Hence, we have the following equality: \begin{eqnarray*} &&\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-\varepsilon\left(d_{t(\delta)1}\right)}}\right)\\ &=&1\cdot W_{\theta}(d_{t(\delta)1},c_{\rho})\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)2}^{-\varepsilon\left(d_{t(\delta)2}\right)}}\right).
\end{eqnarray*} If $\varepsilon(d_{t(\delta)1})=-1$, we see that \begin{eqnarray*} &&\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-\varepsilon\left(d_{t(\delta)1}\right)}}\right)\\ &=&\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}}\right)\\ &=&1\cdot\left(\theta\left(\rho\left(x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)2}^{-\varepsilon\left(d_{t(\delta)2}\right)}}\right),\rho(u_{t(\delta)1})\right)\right)\\ &&\times\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)2}^{-\varepsilon\left(d_{t(\delta)2}\right)}}\right).
\end{eqnarray*} As in the case of $\varepsilon(d_{t(\delta)1})=+1$, it holds that \begin{eqnarray*} &&\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)1}^{-\varepsilon\left(d_{t(\delta)1}\right)}}\right)\\ &=&1\cdot W_{\theta}(d_{t(\delta)1},c_{\rho})\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left( x_n^{u_{s(\delta)1}^{\varepsilon\left(d_{s(\delta)1}\right)}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon\left(d_{s(\delta)m_{s(\delta)}}\right)}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon\left(d_{t(\delta)m_{t(\delta)}}\right)}\cdots u_{t(\delta)2}^{-\varepsilon\left(d_{t(\delta)2}\right)}}\right). \end{eqnarray*} Repeating this procedure, we have \begin{eqnarray*} &&\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_n^{u_{s(\delta)1}^{\varepsilon(d_{s(\delta)1})}\cdots u_{s(\delta)m_{s(\delta)}}^{\varepsilon(d_{s(\delta)m_{s(\delta)}})}u_\delta u_{t(\delta)m_{t(\delta)}}^{-\varepsilon(d_{t(\delta)m_{t(\delta)}})}\cdots u_{t(\delta)1}^{-\varepsilon(d_{t(\delta)1})}})\\ &=&1\cdot\left(\sum_{d\in L_\delta\cap {\rm Ldeck}(D)}W_{\theta}(d,c_{\rho})\right)\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_n)\\ &=&\begin{cases} 1\cdot W_{\theta}(L_\delta,c_{\rho})\ \ (j=n)\\ 0\quad\quad\quad\quad\quad\quad(j\neq n) \end{cases}. \end{eqnarray*} Hence, it holds that \[ \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(r_\delta)=\begin{cases} 1\cdot W_{\theta}(L_\delta,c_{\rho})-1\cdot0_A\ (j=n)\\ 0\hspace{35.5mm}(j\neq n).
\end{cases} \] By the definition of $f$-twisted Alexander matrices and the previous discussions, we have \[ A(Q(F),\rho;f_{\theta},0)=\left( \begin{array}{ccccc} -1\cdot 0_A & 0 & \cdots & 0 & 1\cdot a_1 \\ 0 & -1\cdot 0_A & & & 1\cdot a_2\\ & & \ddots & & \vdots\\ \vdots& & & -1\cdot 0_A & 1\cdot a_{n-1}\\ 0 & &\cdots & 0 & U \end{array} \right), \] where $U=\left( 1\cdot W_{\theta}(L_\delta,c_{\rho})-1\cdot 0_A\right)_{\delta}$ is the column vector with $k$ rows whose entries are indexed by the $k$ double point curves $\delta$ of $D$. We can see that $A(Q(F),\rho;f_{\theta},0)$ and $U$ are related by a finite sequence of the transformations (M1) $\sim$ (M8), which implies that $E_0(A(Q(F),\rho;f_{\theta},0))$ is the ideal generated by $\{1\cdot W_{\theta}(L_\delta,c_{\rho})-1\cdot0_A\mid \delta:\textrm{a double point curve}\}$. Since each $L_\delta$ is a loop on $F$ and hence represents an element of $H_1(F)$, it holds that $E_0(A(Q(F),\rho;f_{\theta},0))\subset(\{1\cdot W_{\theta}(\lambda,c_{\rho})-1\cdot 0_A\mid \lambda\in H_1(F)\})$. Let $\lambda$ be an element of $H_1(F)$. We take an oriented immersed curve $L_{\lambda}$ on $F$ based at ${\rm pr}^{-1}(q)$ which represents $\lambda$ and fix it. We may assume that $L^D_\lambda$ intersects the double point curves transversely, and misses triple points and branched points. We set $l_\lambda:=|L_\lambda\cap {\rm Ldeck}(D)|$. Let $e_{1}$ be the first point at which $L_\lambda$ intersects the lower decker curves ${\rm Ldeck}(D)$ after departing from ${\rm pr}^{-1}(q)$, and let $e_{2},\ldots,e_{l_\lambda}$ be the points at which $L_\lambda$ intersects ${\rm Ldeck}(D)$, in order along the orientation of $L_\lambda$. For each $e_{i}$, we denote the upper sheet near the point $e_{i}^D$ by $v_{i}$. Then, the element $x_n^{v_{1}^{\varepsilon(e_{1})}\cdots v_{l_\lambda}^{\varepsilon(e_{l_\lambda})}}$ is equal to $x_n$ in the knot quandle $Q(F)$.
\] Hence, we see that the presentation \[ \left\langle x_1,\ldots, x_n~ \begin{array}{|c} r_1,\ldots,r_{n-1}\\ r_\delta \ (\delta:\textrm{a double point curve})\\ (x_n^{v_{1}^{\varepsilon(e_{1})}\cdots v_{l_\lambda}^{\varepsilon(e_{l_\lambda})}},x_n) \end{array} \right\rangle \] is also a presentation of $Q(F)$. As in the previous discussion, we have \begin{eqnarray*} \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left(x_n^{v_{1}^{\varepsilon(e_{1})}\cdots v_{l_\lambda}^{\varepsilon(e_{l_\lambda})}}\right)&=&1\cdot\left(\sum^{l_\lambda}_{k=1}W_{\theta}(e_{k},c_{\rho})\right)\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_n)\\ &=&1\cdot W_{\theta}(L_\lambda,c_{\rho})\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_n)\\ &=&1\cdot W_{\theta}(\lambda,c_{\rho})\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}(x_n). \end{eqnarray*} Thus, it holds that \[ \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial{x_j}}\left(\left(x_n^{v_{1}^{\varepsilon(e_{1})}\cdots v_{l_\lambda}^{\varepsilon(e_{l_\lambda})}},x_n\right)\right)=\begin{cases} 1\cdot W_{\theta}(\lambda,c_{\rho})-1\cdot0_A\ (j=n)\\ 0\hspace{36mm}(j\neq n). \end{cases} \] This implies that $1\cdot W_{\theta}(\lambda,c_{\rho})-1\cdot 0_A$ is an element of the ideal $E_0(A(Q(F),\rho;f_{\theta},0))$. Hence, the ideal $(\{1\cdot W_{\theta}(\lambda,c_{\rho})-1\cdot 0_A\mid \lambda\in H_1(F)\})$ is contained in $E_{0}(A(Q(F),\rho;f_{\theta},0))$. This implies the assertion. \end{proof} In the case of 2-knots, we have the following corollary. \begin{cor} \label{cor:elementary_ideal_surface_knot} Let $F$ be a 2-knot, $X$ a quandle, $A$ an abelian group and $\theta:X^2\to A$ a quandle 2-cocycle. For any quandle homomorphism $\rho:Q(F)\to X$, we have $E_0(A(Q(F),\rho;f_{\theta},0))=(0)$.
\end{cor} \begin{proof} By the definition of the Carter-Saito-Satoh invariant, if $\lambda$ is the zero element of $H_1(F)$, we have $W_{\theta}(\lambda,c)=0_A$ for any $X$-coloring $c:S(D)\to X$ and any quandle 2-cocycle $\theta:X^2\to A$. Thus, if $F$ is a 2-knot, then $H_1(F)$ is trivial, and it holds that \[ E_0(A(Q(F),\rho;f_{\theta},0))=(\{1\cdot W_{\theta}(\lambda,c_{\rho})-1\cdot 0_A\mid \lambda\in H_1(F)\})=(0), \] where the first equality follows from Theorem \ref{theo:main_1}. \end{proof} \section{$f$-twisted Alexander matrices for connected quandles} \label{sect:Alexander_matrix_connected_qdle} In this section, we will discuss the $f$-twisted Alexander matrices of connected quandles with the Alexander pair obtained from a quandle $2$-cocycle. First, we recall the definition of the (co)homology groups of quandles \cite{Carter2003quandle}. Let $X$ be a quandle. For each positive integer $n$, we denote the free abelian group with basis $X^n$ by $C^R_n(X)$. For $n\leq 0$, we set $C^R_n(X)=0$. For each element $(x_1,\ldots,x_n)\in X^n$, we define an element $\partial(x_1,\ldots,x_n)$ of $C^R_{n-1}(X)$ by \begin{eqnarray*} \partial(x_1,\ldots,x_n)&=&\sum^n_{i=2}(-1)^i(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)\\ &&-\sum^n_{i=2}(-1)^i(x_1^{x_i},\ldots,x_{i-1}^{x_i},x_{i+1},\ldots,x_n). \end{eqnarray*} Using this, for $n\ge 2$, we obtain a homomorphism $\partial_n:C^R_n(X)\to C^R_{n-1}(X)$. For $n\leq 1$, we define $\partial_n:C^R_n(X)\to C^R_{n-1}(X)$ to be the zero map. Then, $(C^R_n(X),\partial_n)$ is a chain complex. Let $C^D_n(X)$ be the subgroup of $C^R_n(X)$ generated by the elements of \[ \{ (x_1,\ldots,x_n)\in X^n\mid x_i=x_{i+1}\ \textrm{for some }i\}. \] We can verify that $(C^D_n(X),\partial_n)$ is a subcomplex of $(C^R_n(X),\partial_n)$. Thus, we obtain the chain complex $(C^Q_n(X)=C^R_n(X)/C^D_n(X),\partial_n)$.
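For concreteness, specializing the boundary formula to $n=2$ and $n=3$ gives
\[
\partial_2(x_1,x_2)=(x_1)-(x_1^{x_2}),
\]
\[
\partial_3(x_1,x_2,x_3)=(x_1,x_3)-(x_1,x_2)-(x_1^{x_2},x_3)+(x_1^{x_3},x_2^{x_3}).
\]
In particular, a homomorphism $\theta:C^R_2(X)\to A$ descends to $C^Q_2(X)$ and satisfies $\theta\circ\partial_3=0$ exactly when $\theta(x,x)=0_A$ for all $x\in X$ and
\[
\theta(x_1,x_2)+\theta(x_1^{x_2},x_3)=\theta(x_1,x_3)+\theta(x_1^{x_3},x_2^{x_3})
\]
for all $x_1,x_2,x_3\in X$, which is the usual quandle $2$-cocycle condition of \cite{Carter2003quandle}.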
The homology group of the chain complex $(C^Q_n(X),\partial_n)$ is called the {\it quandle homology group}~\cite{Carter2003quandle}, which is denoted by $H^Q_n(X)$. Let $A$ be an abelian group. For $n\in\mathbb{Z}$, let us define $C^n_Q(X;A)$ to be the set of all group homomorphisms from $C^Q_n(X)$ to $A$ and $\delta_n:C^n_Q(X;A)\to C^{n+1}_Q(X;A)$ by $\delta_n(f)=f\circ\partial_{n+1}$. Then, we obtain a cochain complex $(C^n_Q(X;A),\delta_n)$, which is called the {\it quandle cochain complex}~\cite{Carter2003quandle}. Let $\theta:X^2\to A$ be a quandle $2$-cocycle. We denote the linear extension of $\theta$ by the same symbol $\theta:\mathbb{Z}[X^2]\to A$. We see that the linear extension $\theta$ is a $2$-cocycle of the quandle cochain complex. Thus, given a quandle $2$-cocycle $\theta:X^2\to A$, we obtain a group homomorphism from $H^Q_2(X)$ to $A$. We also denote this group homomorphism by $\theta:H^Q_2(X)\to A$. A quandle $X$ is {\it connected} if for every $x,y\in X$, it holds that $x^{z_1^{\varepsilon_1}\cdots z_n^{\varepsilon_n}}=y$ for some $z_1,\ldots,z_n\in X$ and $\varepsilon_1,\ldots,\varepsilon_n\in\{\pm 1\}$. \begin{theo} \label{theo:main_2} Let $Q$ be a connected quandle with a finite presentation $\langle x_1,\ldots,x_n\mid r_1,\ldots,r_m\rangle$, $X$ a quandle, $A$ an abelian group and $\theta:X^2\to A$ a quandle 2-cocycle. For any quandle homomorphism $\rho:Q\to X$, we have \[ E_0(A(Q,\rho;f_{\theta},0))=(\{1\cdot a-1\cdot 0_A\mid a\in{\rm Im}(\theta\circ\rho_{\ast})\}), \] where $\rho_{\ast}:H^Q_2(Q)\to H^Q_2(X)$ is the group homomorphism induced by $\rho$. \end{theo} \begin{proof} Let $\langle x_1,\ldots,x_n\mid r_1,\ldots, r_m\rangle$ be a presentation of $Q$. Since $Q$ is connected, for each $1\leq i\leq n-1$, there exist $y_{i1},\ldots,y_{il_i}\in Q$ and $\varepsilon_{i1},\ldots,\varepsilon_{il_i}\in \{\pm 1\}$ such that $x_i^{y_{i1}^{\varepsilon_{i1}}\cdots y_{il_i}^{\varepsilon_{il_i}}}=x_n$.
Thus, using Tietze moves, we may assume that \[ r_i=\begin{cases} (x_i^{y_{i1}^{\varepsilon_{i1}}\cdots y_{il_i}^{\varepsilon_{il_i}}},x_n)\ (1\leq i\leq n-1)\\ (x_n^{y_{i1}^{\varepsilon_{i1}}\cdots y_{il_i}^{\varepsilon_{il_i}}},x_n)\ (n\leq i\leq m). \end{cases} \] We fix this presentation. \begin{comment} \begin{eqnarray*} \frac{\partial_{f_{\theta}}}{\partial x_j}(r_i)&=&\frac{\partial_{f_{\theta}}}{\partial x_j}(x_i\ast^{\varepsilon_{i1}}a_{i1}\cdots x_{i-1m_{i-1}}\ast^{\varepsilon_{il_i}}a_{il_i})-\frac{\partial_{f_{\theta}}}{\partial x_j}(x_n)\\ &=&\theta(x_i\ast^{\varepsilon_{i1}}a_{i1}\cdots x_{i-1m_{i-1}}\ast^{\frac{-1+\varepsilon_{il_i}}{2}}a_{il_i},a_{il_i})^{\varepsilon_{il_i}}\frac{\partial_{f_{\theta}}}{\partial x_j}(x_i\ast^{\varepsilon_{i1}}a_{i1}\cdots x_{i-1m_{i-1}})\\ &&-\frac{\partial_{f_{\theta}}}{\partial x_j}(x_n) \end{eqnarray*} \end{comment} For any $1\leq j\leq n$, by Proposition \ref{prop:calculation}, it holds that \[ \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_i^{y_{i1}^{\varepsilon_{i1}}\cdots y_{il_i}^{\varepsilon_{il_i}}})=1\cdot\left(\sum^{l_i}_{k=1}\varepsilon_{ik}\theta(\rho(x_i^{y_{i1}^{\varepsilon_{i1}}\cdots y_{ik-1}^{\varepsilon_{ik-1}}y_{ik}^{\frac{\varepsilon_{ik}-1}{2}}}),\rho(y_{ik}))\right) \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_i) \] if $1\leq i\leq n-1$, and \[ \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_n^{y_{i1}^{\varepsilon_{i1}}\cdots y_{il_i}^{\varepsilon_{il_i}}})=1\cdot\left(\sum^{l_i}_{k=1}\varepsilon_{ik}\theta(\rho(x_n^{y_{i1}^{\varepsilon_{i1}}\cdots y_{ik-1}^{\varepsilon_{ik-1}}y_{ik}^{\frac{\varepsilon_{ik}-1}{2}}}),\rho(y_{ik}))\right) \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_n) \] if $n\leq i\leq m$.
We put \[ a_i:=\begin{cases} \sum^{l_i}_{k=1}\varepsilon_{ik}\theta(\rho(x_i^{y_{i1}^{\varepsilon_{i1}}\cdots y_{ik-1}^{\varepsilon_{ik-1}}y_{ik}^{\frac{\varepsilon_{ik}-1}{2}}}),\rho(y_{ik}))\ (1\leq i\leq n-1)\\ \sum^{l_i}_{k=1}\varepsilon_{ik}\theta(\rho(x_n^{y_{i1}^{\varepsilon_{i1}}\cdots y_{ik-1}^{\varepsilon_{ik-1}}y_{ik}^{\frac{\varepsilon_{ik}-1}{2}}}),\rho(y_{ik}))\ (n\leq i\leq m). \end{cases} \] Then, we have \[ \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(r_i)=\begin{cases} 1\cdot a_i\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_i)-1\cdot 0_A\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_n)\ (1\leq i\leq n-1)\\ (1\cdot a_i-1\cdot 0_A)\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_n)\hspace{14mm} (n\leq i\leq m). \end{cases} \] By the definition of the $f$-twisted Alexander matrix, it holds that \[ A(Q,\rho;f_{\theta},0)=\left( \begin{array}{ccccc} 1\cdot a_1 & 0 & \cdots & 0 & -1\cdot 0_A \\ 0 & 1\cdot a_2 & & & -1\cdot 0_A\\ & & \ddots & & \vdots\\ \vdots& & & 1\cdot a_{n-1} & -1\cdot 0_A\\ & & & & 1\cdot a_n-1\cdot 0_A \\ & & & & \vdots \\ 0 & & \cdots & 0 & 1\cdot a_m-1\cdot 0_A \\ \end{array} \right). 
\] As in the proof of Theorem \ref{theo:main_1}, we see that $A(Q,\rho;f_{\theta},0)$ and the matrix $\left(\begin{array}{c} 1\cdot a_n-1\cdot 0_A \\ \vdots \\ 1\cdot a_m-1\cdot 0_A \end{array}\right)$ are related by a finite sequence of the transformations (M1) $\sim$ (M8).
Hence, the 0-th elementary ideal $E_0(A(Q,\rho;f_{\theta},0))$ is the ideal generated by $1\cdot a_n-1\cdot 0_A,\ldots,1\cdot a_m-1\cdot 0_A$. Next, we will show that the elements $a_n,\ldots,a_m$ are elements of ${\rm Im}(\theta\circ\rho_\ast)$. Let us consider the $2$-dimensional complex $\tilde{\Gamma}$ discussed in Section 8.3 of \cite{Eisermann2014quandle} (see also \cite{Fenn1995trunks}). The complex $\tilde{\Gamma}$ is defined as follows: Let $\Gamma$ be the oriented graph with vertices $q\in Q$ and edges $(p\xrightarrow{q} r)$ for each triple $p,q,r\in Q$ with $p^q=r$. We regard $\Gamma$ as the $1$-skeleton and glue in a $2$-cell for each of the relations $p^p=p$, $p^{qq^{-1}}=p^{q^{-1}q}=p$ and $p^{qr}=p^{rq^r}$. For convenience, the inverse path of $(p^{q^{-1}}\xrightarrow{q}p)$ is denoted by $(p\xrightarrow{q^{-1}}p^{q^{-1}})$. By Theorem 9.9 of \cite{Eisermann2014quandle}, the map $\varphi:C_1(\tilde{\Gamma})\to C^Q_2(Q)$ defined by mapping each edge $(p\xrightarrow{q}p^q)$ to $(p,q)$ induces an isomorphism $\varphi:H_1(\tilde{\Gamma})\to H^Q_2(Q)$. We note that $\varphi((p\xrightarrow{q^{-1}} p^{q^{-1}}))=-(p^{q^{-1}},q)$ for any $p,q\in Q$. We take a path $(p_0\xrightarrow{q_1^{\varepsilon_1}}\cdots\xrightarrow{q_l^{\varepsilon_l}}p_l)$ in $\Gamma$ and regard it as a path in $\tilde{\Gamma}$. Then, it holds that \[ \varphi\left(\sum^l_{i=1}(p_{i-1}\xrightarrow{q_i^{\varepsilon_i}}p_i)\right)=\sum^{l}_{i=1}\varepsilon_i(p_{i-1}^{q_{i}^{\frac{\varepsilon_i-1}{2}}},q_{i}), \] where $p_i=p_0^{q_1^{\varepsilon_1}\cdots q_i^{\varepsilon_{i}}}$. Assume that $p_0,q_1,\ldots,q_l$ are elements of $S=\{ x_1,\ldots,x_n\}$.
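For instance, for a path of length two this formula can be checked directly from the definition of $\varphi$ and the convention for inverse edges: if $\varepsilon_1=1$ and $\varepsilon_2=-1$, then \[ \varphi\left((p_0\xrightarrow{q_1}p_1)+(p_1\xrightarrow{q_2^{-1}}p_2)\right)=(p_0,q_1)-(p_1^{q_2^{-1}},q_2)=\sum^{2}_{i=1}\varepsilon_i(p_{i-1}^{q_{i}^{\frac{\varepsilon_i-1}{2}}},q_{i}), \] since $p_0^{q_1^{0}}=p_0$ and $p_1^{q_2^{-1}}=p_2$.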
By Proposition \ref{prop:calculation}, we have \[ \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(p_0^{q_1^{\varepsilon_1}\cdots q_l^{\varepsilon_l}})=1\cdot\left(\sum^{l}_{i=1}\varepsilon_i\theta(\rho(p_{i-1}^{q_{i}^{\frac{\varepsilon_i-1}{2}}}),\rho(q_{i}))\right)\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(p_0) \] for any $1\leq j\leq n$. If $p_l=p_0$, the 2-chain $\sum^{l}_{i=1}\varepsilon_i(p_{i-1}^{q_{i}^{\frac{\varepsilon_i-1}{2}}},q_{i})$ is a $2$-cycle. This implies that the element $\sum^{l}_{i=1}\varepsilon_i\theta(\rho(p_{i-1}^{q_{i}^{\frac{\varepsilon_i-1}{2}}}),\rho(q_{i}))$ is an element of ${\rm Im}(\theta\circ\rho_{\ast})$. Since $x_n^{y_{i1}^{\varepsilon_{i1}}\cdots y_{il_i}^{\varepsilon_{il_i}}}=x_n$ in $Q$ for $n\leq i\leq m$, the element $a_i$ which satisfies $\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_n^{y_{i1}^{\varepsilon_{i1}}\cdots y_{il_i}^{\varepsilon_{il_i}}})=1\cdot a_{i}\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_n)$ is an element of the image of $\theta\circ\rho_{\ast}$ for any $n\leq i\leq m$. Thus, we see that $E_{0}(A(Q,\rho;f_{\theta},0))$ is contained in the ideal $(\{1\cdot a-1\cdot 0_A\mid a\in{\rm Im}(\theta\circ\rho_{\ast})\})$. For any $g\in H^Q_2(Q)$, there is a loop $(x_n\xrightarrow{y_1^{\varepsilon_1}}\cdots\xrightarrow{y_l^{\varepsilon_l}}x_n)$ in $\Gamma$ such that \[ \varphi\left(\sum^{l}_{i=1}(x_n^{y_1^{\varepsilon_1}\cdots y_{i-1}^{\varepsilon_{i-1}}}\xrightarrow{y_i^{\varepsilon_i}}x_n^{y_1^{\varepsilon_1}\cdots y_{i}^{\varepsilon_{i}}})\right)=g. \] Using the relation $p^{qr}=p^{rq^r}$, we may assume that $y_1,\ldots,y_l$ are elements of $S$. By the above calculation, it holds that \[ \frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_n^{y_1^{\varepsilon_1}\cdots y_l^{\varepsilon_l}})=1\cdot \theta\circ\rho_{\ast}(g)\frac{\partial_{f_{\theta}\circ\rho^2}}{\partial x_j}(x_n) \] for any $1\leq j\leq n$.
It is obvious that the presentation \[ \langle x_1,\ldots,x_n\mid r_1,\ldots,r_m,(x_n^{y_1^{\varepsilon_1}\cdots y_l^{\varepsilon_l}},x_n)\rangle \] is also a presentation of $Q$. Hence, the matrix $A(Q,\rho;f_{\theta},0)$ with respect to the presentation $\langle x_1,\ldots,x_n\mid r_1,\ldots,r_m,(x_n^{y_1^{\varepsilon_1}\cdots y_l^{\varepsilon_l}},x_n)\rangle$ is related, by a finite sequence of the transformations (M1) $\sim$ (M8), to the matrix $\left( \begin{array}{c} 1\cdot a_n-1\cdot 0_A \\ \vdots \\ 1\cdot a_m-1\cdot 0_A \\ 1\cdot \theta\circ\rho_{\ast}(g)-1\cdot 0_A \end{array} \right)$, which implies that $1\cdot \theta\circ\rho_{\ast}(g)-1\cdot 0_A$ is an element of $E_0(A(Q,\rho;f_{\theta},0))$. Thus, the ideal generated by $\{1\cdot a-1\cdot0_A\mid a\in{\rm Im}(\theta\circ\rho_{\ast})\}$ is contained in $E_{0}(A(Q,\rho;f_{\theta},0))$. \end{proof} It is known that the knot quandle of a surface knot is connected. Hence, by Corollary \ref{cor:elementary_ideal_surface_knot} and Theorem \ref{theo:main_2}, we have the following corollary. \begin{cor} \label{cor:2-knot_homology} For any $2$-knot $F$, $H^Q_2(Q(F))$ is trivial. \end{cor} \begin{proof} Let $F$ be a 2-knot. Assume that $H^Q_2(Q(F))$ is non-trivial. We set $A=H^Q_2(Q(F))$ and take a non-trivial element $a$ in $A$. Let $\theta:Q(F)^2\to A$ be a quandle 2-cocycle which corresponds to the identity map $H^Q_2(Q(F))\to A=H^Q_2(Q(F))$. Let $\rho$ be the identity map ${\rm id}:Q(F)\to Q(F)$. Since the composite map $\theta\circ\rho_{\ast}:H^Q_2(Q(F))\to A=H^Q_2(Q(F))$ is the identity map and the knot quandle $Q(F)$ is a connected quandle, $1\cdot a-1\cdot 0_A$ is contained in $E_0(A(Q(F),\rho;f_{\theta},0))$. This implies that $E_0(A(Q(F),\rho;f_{\theta},0))$ is not the zero ideal. On the other hand, by Corollary \ref{cor:elementary_ideal_surface_knot}, $E_0(A(Q(F),\rho;f_{\theta},0))$ is the zero ideal. This is a contradiction.
\end{proof} \begin{remark} {\rm By Artin's spinning construction \cite{Artin1925zur}, we see that for any classical knot $K$, there exists a 2-knot $F$ such that $\pi_1(\mathbb{R}^3\backslash K)$ is isomorphic to $\pi_1(\mathbb{R}^4\backslash F)$. In other words, it holds that \[ \{\pi_1(\mathbb{R}^3\backslash K)\mid K:\textrm{ a classical knot}\}\subset\{\pi_1(\mathbb{R}^4\backslash F)\mid F:\textrm{ a 2-knot}\}. \] It is known that for any non-trivial classical knot $K$, the second quandle homology group $H^Q_2(Q(K))$ is the infinite cyclic group \cite{Eisermann2003unknot}. Hence, by Corollary \ref{cor:2-knot_homology}, we see that for any non-trivial knot $K$, the knot quandle $Q(K)$ cannot be realized as the knot quandle of a 2-knot, that is, it holds that \[ \{Q( K)\mid K:\textrm{ a non-trivial classical knot}\}\cap\{Q(F)\mid F:\textrm{ a 2-knot}\}=\emptyset. \] } \end{remark} \end{document}
\begin{document} \title[Lebesgue decomposition]{Some Implications of Lebesgue Decomposition} \myself \date \today \subjclass[2000]{Primary 28A33, Secondary 46E27.} \keywords{Lebesgue decomposition, Riesz representation, Weak compactness, Halmos Savage Theorem, Komlos Lemma.} \maketitle \begin{abstract} Based on a generalization of Lebesgue decomposition we obtain a characterization of weak compactness in the space $ba(\mathscr{A})$, a representation of its dual space and some results on the structure of finitely additive measures. \end{abstract} \section{Introduction and Notation} Throughout the paper $\Omega$ will be an arbitrary set, $\mathscr{A}$ an algebra of its subsets, $\lambda$ a bounded, finitely additive set function on $\mathscr{A}$ (i.e. $\lambda \in ba(\mathscr{A})$) and $\mathscr M\subset ba(\mathscr{A})$. Among the well known facts of measure theory is the Lebesgue decomposition: given $\mu\in ba(\mathscr{A})$, each $\lambda\in ba(\mathscr{A})$ admits a unique decomposition $\lambda=\lambda_\mu^c+\lambda_\mu^\perp$ where $\lambda_\mu^c\ll\mu$ and $\lambda_\mu^\perp\perp\mu$. In section \ref{sec lebesgue} we prove a slight generalization of this classical result and use it to obtain implications on the properties of relatively weakly compact subsets of the space $ba(\mathscr{A})$, section \ref{sec compact}, on the representation of the corresponding dual space, section \ref{sec riesz}, and to explore some further implications, section \ref{sec implications}. Finally, in section \ref{sec halmos savage} we exploit Lebesgue decomposition to investigate some properties of dominated families of finitely additive measures. The main, simple idea is to treat the orthogonality condition implicit in Lebesgue decomposition as a separating condition for subsets of $ba(\mathscr{A})$ and to investigate its implications in the presence of some form of compactness. A classical result associates relative weak compactness with uniform absolute continuity.
In Theorems \ref{th compact} and \ref{th compact 2}, we obtain new necessary and sufficient conditions for relative weak compactness of subsets of $ba(\mathscr{A})$, each of which states that a corresponding measure-theoretic property has to hold uniformly. Building on these, we then obtain, in Theorem \ref{th riesz}, a complete characterization of the dual space of $ba(\mathscr{A})$ in terms of bounded Cauchy nets. The Riesz representation we propose is unfortunately not as handy as that emerging from the Riesz-Nagy Theorem for Lebesgue spaces. Nevertheless it is helpful in some problems such as those treated in Corollary \ref{cor sep}. We also exploit it to establish a partial analogue of the Koml\'os Lemma under finite additivity. Likewise, the absolute continuity property implicit in Lebesgue decomposition is exploited in section \ref{sec halmos savage} to investigate some properties of dominated sets of measures. We obtain the finitely additive versions of two classical results, due to Halmos and Savage and to Yan, respectively. Somewhat surprisingly, these two Theorems, whose original proofs use countable additivity in an extensive way, carry through unchanged to finite additivity. It is also shown, see Theorem \ref{th hs}, that dominated families of set functions have an implicit, desirable property which allows one to replace arbitrary families of measurable sets with countable subfamilies. For the theory of finitely additive measures and integrals we mainly follow the notation and terminology introduced by Dunford and Schwartz \cite{bible}, although we prefer the symbol $\abs\lambda$ to denote the total variation measure generated by $\lambda$. $\mathscr{S}(\mathscr{A})$ and $\mathfrak{B}(\mathscr{A})$ designate the family of $\mathscr{A}$-simple functions, endowed with the supremum norm, and its closure, respectively.
If $f\in L^1(\lambda)$ we denote its integral interchangeably as $\int fd\lambda$ or $\lambda(f)$ although, when regarded as a set function, we will always use the symbol $\lambda_f\in ba(\mathscr{A})$. We prefer, however, $\lambda_B$ to $\lambda_{\set B}$ when $B\in\mathscr{A}$. We define the following families: $ba(\mathscr{A},\lambda)=\{\mu\in ba(\mathscr{A}):\mu\ll\lambda\}$, $ba_1(\mathscr{A},\lambda)=\{\lambda_f:f\in L^1(\lambda)\}$ and $ba_\infty(\mathscr{A},\lambda)= \{\mu\in ba(\mathscr{A}):\abs\mu\le c\abs\lambda\text{ for some }c>0\}$ while $\mathbb{P}_{ba}(\mathscr{A})$ will denote the collection of finitely additive probabilities. The closure of $\mathscr M$ in the strong, weak and weak$^*$ topology of $ba(\mathscr{A})$ is denoted by $\overline\mathscr M$, $\cl\mathscr M w$ and $\cl\mathscr M*$, respectively. We attribute to $\mathscr M$ the properties that hold for each of its elements and use the corresponding symbols accordingly. Thus, we write $\lambda\gg\mathscr M$ (resp. $\lambda\perp\mathscr M$) whenever $\lambda\gg\mu$ (resp. $\lambda\perp\mu$) for every $\mu\in\mathscr M$. $\lambda\gg\mathscr M$ is sometimes referred to by saying that $\mathscr M$ is dominated by $\lambda$. \section{Lebesgue Decomposition} \label{sec lebesgue} Associated with $\mathscr M$ is the collection \begin{equation} \label{AM} \mathbf A(\M)=\left\{\sum_n\alpha_n\frac{\abs{\mu_n}}{1\vee\norm{\mu_n}}:\ \mu_n\in\mathscr M,\ \alpha_n\ge0\text{ for }n=1,2,\ldots,\sum_n\alpha_n=1\right\} \end{equation} as well as the set function \begin{equation} \label{psi} \Psi_\mathscr M(A)=\sup_{\mu\in\mathscr M}\abs{\mu}(A)\qquad A\in\mathscr{A} \end{equation} It is at times convenient to investigate the properties of $\mathbf A(\M)$ rather than $\mathscr M$ and we note to this end that $\lambda\gg\mathscr M$ (resp. $\lambda\perp\mathscr M$) is equivalent to $\lambda\gg\mathbf A(\M)$ (resp. $\lambda\perp\mathbf A(\M)$). We say that $\mathscr M$ is uniformly absolutely continuous (resp.
uniformly orthogonal) with respect to $\lambda$, in symbols $\lambda\gg_u\mathscr M$ (resp. $\mathscr M\perp_u\lambda$) whenever $\lim_{\abs\lambda(A)\to0}\Psi_\mathscr M(A)=0$ (resp. when for each $\varepsilon>0$ there exists $A\in\mathscr{A}$ such that $\Psi_\mathscr M(A)+\abs\lambda(A^c)<\varepsilon$). One easily verifies that either of these uniform properties extends from $\mathbf A(\M)$ to $\mathscr M$ if and only if $\mathscr M$ is norm bounded. \begin{lemma} \label{lemma lebesgue} There exists a unique way of writing \begin{equation} \label{lebesgue} \lambda=\lambda_\M^c+\lambda_\M^\perp \end{equation} where $\lambda_\M^c,\lambda_\M^\perp\in ba(\mathscr{A})$ are such that (i) $m\gg\lambda_\M^c$ for some $m\in\mathbf A(\M)$ and (ii) $\lambda_\M^\perp\perp\mathscr M$. If $\lambda$ is positive or countably additive then so are $\lambda_\M^\perp,\lambda_\M^c$. \end{lemma} \begin{proof} Take an increasing net $\neta\nu$ in \begin{equation} \label{L(M)} \mathcal L(\mathscr M)=\{\nu\in ba(\mathscr{A}):\nu\ll m\text{ for some }m\in\mathbf A(\M)\} \end{equation} with $\nu=\lim_\alpha\nu_\alpha\in ba(\mathscr{A})$. Extract a sequence $\sseq {\nu_{\alpha_n}}n$ such that $\norm{\nu-\nu_{\alpha_n}}=(\nu-\nu_{\alpha_n})(\Omega)<2^{-n-1}$, choose $m_n\in\mathbf A(\M)$ such that $m_n\gg\nu_{\alpha_n}$ and define $m=\sum_n2^{-n}m_n\in\mathbf A(\M)$. Since $m\gg\nu_{\alpha_n}$ for each $n\in\mathbb{N}$, there is $\delta_n>0$ such that $m(A)<\delta_n$ implies $\abs{\nu_{\alpha_n}}(A)<2^{-n-1}$ and, therefore, $\abs\nu(A)\le\abs{\nu_{\alpha_n}}(A)+2^{-n-1}\le2^{-n}$; thus $m\gg\nu$. It follows that $\mathcal L(\mathscr M)$ is a normal sublattice of $ba(\mathscr{A})$ and \eqref{lebesgue} is the Riesz decomposition of $\lambda$ with $\lambda_\M^c\in\mathcal L(\mathscr M)$ and $\lambda_\M^\perp\perp\mathcal L(\mathscr M)$.
\end{proof} Of course a different way of stating the same result is the following: \begin{corollary} Define $\mathcal L(\mathscr M)$ as in \eqref{L(M)}. Then, $\mathcal L(\mathscr M)=(\mathscr M^\perp)^\perp$. \end{corollary} Decomposition \eqref{lebesgue} gains a special interest when combined with some form of compactness. \begin{lemma} \label{lemma sep} Let $\mathscr M\subset ba(\mathscr{A})_+$ be convex and weak$^*$ compact. $\lambda\perp\mathscr M$ if and only if $\lambda\perp_u\mathscr M$. \end{lemma} \begin{proof} Fix $\varepsilon>0$ and consider the set \begin{align} \label{Ke} \mathcal K=\left\{f\in\mathscr{S}(\mathscr{A}):1\geq f\geq0,\ \abs\lambda(1-f)<\frac{\varepsilon}{4}\right\} \end{align} If $\lambda\perp\mathscr M$, then $\sup_{\mu\in\mathscr M}\inf_{f\in\mathcal K}\mu(f)<\varepsilon/4$. Endow $ba(\mathscr{A})$ and $\mathscr{S}(\mathscr{A})$ with the weak$^*$ and the uniform topology, respectively. Then, both $\mathscr M$ and $\mathcal K$ are convex, the former is compact and the function $\phi(\mu,f)=\mu(f):ba(\mathscr{A})\times\mathscr{S}(\mathscr{A})\to\mathbb{R}$ is separately linear and continuous. By a standard application of Sion's minimax Theorem \cite[Corollary 3.3]{sion}, there exists $f\in\mathcal K$ such that $\sup_{\mu\in\mathscr M}\mu(f)<\varepsilon/4$. Let $A=\{1-f<1/2\}\in\mathscr{A}$. Then Tchebycheff's inequality gives $\abs\lambda(A^c)\le2\abs\lambda(1-f)<\varepsilon/2$ and $\mu(A)\le2\mu(f)<\varepsilon/2$, so that $\abs\lambda(A^c)+\mu(A)<\varepsilon$ for all $\mu\in\mathscr M$. The converse is obvious. \end{proof} It is of course possible and perhaps instructive to rephrase the preceding Lemma as a separating condition.
\begin{corollary} \label{cor sep} Either one of the following mutually exclusive conditions holds: (i) $m\gg\lambda$ for some $m\in\mathbf A(\M)$ or (ii) there exists $\eta>0$ such that for each $\mathscr M_0\subset\mathscr M$ with $\mathbf A(\mathscr M_0)$ weak$^*$ closed and each $k>0$ there exists $A\in\mathscr{A}$ for which \begin{equation} \label{sep} \abs\lambda(A)>\eta>k\Psi_{\mathbf A(\mathscr M_0)}(A) \end{equation} If $\mathscr{A}$ is a $\sigma$-algebra and $\lambda\in ca(\mathscr{A})$ then \eqref{sep} rewrites as $\abs\lambda(A)>0=\Psi_{\mathbf A(\mathscr M_0)}(A)$ for some $A\in\mathscr{A}$. \end{corollary} Convex, weak$^*$ compact subsets of $ba(\mathscr{A})$ are often encountered in separation problems, where a family $\mathcal K$ of $\mathscr{A}$ measurable functions is given and $\mathscr M$ is the set $\left\{m\in\mathbb{P}_{ba}(\mathscr{A}):\sup_{k\in\mathcal K}m(k)\le1\right\}$ of separating probabilities. In this special case we learn that $\lambda$ and $\mathscr M$ may be strictly separated by a set in $\mathscr{A}$. \section{The Weak Topology} \label{sec compact} Decomposition \eqref{lebesgue} provides some useful insight in the study of weakly compact subsets of $ba(\mathscr{A})$. An exact characterization is the following: \begin{theorem} \label{th compact} Let $\mathscr M$ be norm bounded. Then the following conditions (i)--(v) are mutually equivalent and imply (vi). If $\mathscr{A}$ is a $\sigma$-algebra and $\mathscr M\subset ca(\mathscr{A})$, then (vi) implies (iii). \begin{enumerate}[(i)] \item\label{uac} $m\gg_u\mathscr M$ for some $m\in\mathbf A(\M)$; \item\label{comp} $\mathscr M$ is relatively weakly compact; \item\label{umc} the set $\{\abs\mu:\mu\in\mathscr M\}$ is uniformly monotone continuous, i.e.
if $\seqn A$ is a monotone sequence in $\mathscr{A}$, the limit $\lim_n\abs\mu(A_n)$ exists uniformly in $\mathscr M$; \item\label{tail} for each $\mathscr M_0\subset\mathscr M$ and each sequence $\seqn A$ in $\mathscr{A}$ such that \begin{equation} \label{M sequence} \lim_j\lim_k\abs\mu\left(\bigcup_{n=j}^{j+k}A_n\right)=0\qquad\mu\in\mathscr M_0 \end{equation} $\mu(A_n)$ converges to $0$ uniformly with respect to $\mu\in\mathscr M_0$; \item\label{uacp} $\mathscr M$ possesses the uniform absolute continuity property, i.e. $\mathscr M_0\subset\mathscr M$ and $\lambda\gg\mathscr M_0$ imply $\lambda\gg_u\mathscr M_0$; \item\label{uop} $\mathscr M$ possesses the uniform orthogonality property, i.e. $\mathscr M_0\subset\mathscr M$ and $\lambda\perp\mathscr M_0$ imply $\lambda\perp_u\mathscr M_0$. \end{enumerate} \end{theorem} \begin{proof} \imply{uac}{comp}. This is just \cite[IV.9.12]{bible}. \imply{comp}{umc}. Let $\seqn A$ be a decreasing sequence in $\mathscr{A}$ and define $\phi_n:ba(\mathscr{A})\to\mathbb{R}$ by letting $\phi_n(\mu)=\lim_k\abs\mu(A_n\cap A_k^c)$. Then, $\phi_n$ is continuous and decreases to $0$ on the weak closure of $\mathscr M$ which, under \iref{comp}, is compact. By Dini's Theorem, convergence is uniform. \imply{umc}{uop}. Suppose $\lambda\perp\mathscr M$ and let $\mathscr M_1=\clt\mathbf A(\M)$. With no loss of generality we can assume $\lambda\ge0$. We claim that $\lambda\perp\mathscr M_1$. If not, then there is $m\in\mathscr M_1$ such that for some $\eta>0$ and all $A\in\mathscr{A}$, the inequality $4\eta<m(A)+\lambda(A^c)$ obtains. Fix $m_1\in\mathbf A(\M)$ such that $\abs{(m-m_1)(\Omega)}<\eta/2$ and $A_1\in\mathscr{A}$ such that $m_1(A_1)+\lambda(A_1^c)<\eta$.
Assume that $m_1,\ldots,m_{n-1}\in\mathbf A(\M)$ and $A_1,\ldots,A_{n-1}\in\mathscr{A}$ have been chosen such that \begin{equation} \label{seq} m_i(A_i)+\sum_{j\le i}\lambda(A_j^c)<\eta \quad\text{and}\quad \dabs{(m_i-m)\left(\bigcap_{j<i}A_j\right)}<\eta2^{-i} \qquad i=1,\ldots,n-1 \end{equation} Then pick $m_n\in\mathbf A(\M)$ such that $\abs{(m_n-m)(\bigcap_{j<n}A_j)}<\eta2^{-n}$ and, by orthogonality, $A_n\in\mathscr{A}$ such that $m_n(A_n)+\lambda(A_n^c)<\eta-\sum_{k=1}^{n-1}\lambda(A_k^c)$. This proves, by induction, that it is possible to construct two sequences $\seqn m$ in $\mathbf A(\M)$ and $\seqn A$ in $\mathscr{A}$ that satisfy property \eqref{seq} for each $n\in\mathbb{N}$. It is then implicit that for all $n,p\in\mathbb{N}$ \begin{align*} m_n\left(\bigcap_{i=1}^{n+p}A_i\right)+\lambda\left(\bigcup_{i=1}^{n+p}A_i^c\right) &\le m_n\left(A_n\right)+\sum_{i=1}^n\lambda(A_i^c)+\sum_{i>n}\lambda(A_i^c)<2\eta \end{align*} and so $(m-m_n)\left(\bigcap_{i=1}^{n+p}A_i\right)>2\eta$. Observe that, under \iref{umc}, $\mathbf A(\M)$ is uniformly monotone continuous and so one may fix $k$ sufficiently large so that $\inf_n(m-m_n)\left(\bigcap_{i=1}^kA_i\right)>\eta$, contradicting \eqref{seq}. Thus $\lambda\perp\mathscr M_1$ and, by Lemma \ref{lemma sep}, $\lambda\perp_u\mathscr M_1$ so that $\lambda\perp_u\mathscr M$. Since property \iref{umc} extends from $\mathscr M$ to each of its subsets, so does the conclusion just obtained and \iref{uop} is proved. \imply{umc}{tail}. Let $\mathscr M_0$ and $\seqn A$ be as in \iref{tail}. Suppose that, up to the choice of a subsequence, there are $\varepsilon>0$ and a sequence $\seqn\mu$ in $\mathscr M_0$ such that $\abs{\mu_n}(A_n)>\varepsilon$.
By \iref{umc}, for each $n$ there exists $k_n>n$ such that $$ \sup_{\{\mu\in\mathscr M_0,\ p\in\mathbb{N}\}}\abs\mu\left(\bigcup_{i=n}^{k_n+p}A_i\right) -\abs\mu\left(\bigcup_{i=n}^{k_n}A_i\right)<\varepsilon/2 $$ Define $\gamma\in ba(\mathscr{A})$ implicitly by setting \begin{equation} \label{gamma} \gamma(A)=\LIM_n\abs{\mu_n}(A_n\cap A)\qquad A\in\mathscr{A} \end{equation} where $\LIM$ denotes a Banach limit. If $B_j=\bigcup_{i=j}^{k_j}A_i$, one easily concludes \begin{align*} \gamma(B_j)&=\LIM_{n>j}\abs{\mu_n}(A_n\cap B_j) >\LIM_{n>j}\abs{\mu_n}\left(A_n\cap \bigcup_{i=j}^{k_j+n}A_i\right)-\varepsilon/2 =\LIM_{n>j}\abs{\mu_n}(A_n)-\varepsilon/2 \ge\varepsilon/2 \end{align*} while, under \eqref{M sequence}, $\lim_j\abs\mu(B_j)=0$. By Lemma \ref{lemma lebesgue}, $\gamma_{\mathscr M_0}^\perp\ne0$ and, by (\textit{\ref{uop}}), $\gamma_{\mathscr M_0}^\perp\perp_u\mathscr M_0$, contradicting the definition \eqref{gamma}. \imply{tail}{uacp}. Let $\mathscr M_0\subset\mathscr M$ and $\lambda\gg\mathscr M_0$. For each $n\in\mathbb{N}$ let $A_n\in\mathscr{A}$ be such that $\abs\lambda(A_n)<2^{-n}$. Then $\sup_k\abs\lambda(\bigcup_{i=j}^kA_i)\le2^{-(j-1)}$ so that \eqref{M sequence} holds for each $\mu\in\mathscr M_0$ and, by \iref{tail}, $\abs\mu(A_n)$ converges to $0$ uniformly in $\mathscr M_0$. \imply{uacp}{uac}. For each $m\in\mathbf A(\M)$, let $$ \chi(m)=\sup_{\mu\in\mathscr M}\norm{\mu_m^\perp}\qquad\text{and} \qquad \chi(\mathscr M)=\inf_{m\in\mathbf A(\M)}\chi(m) $$ If $\seqn m$ is a sequence in $\mathbf A(\M)$ such that $\chi(m_n)<\chi(\mathscr M)+2^{-n}$ and if we define $m=\sum_n2^{-n}m_n\in\mathbf A(\M)$, then from $m\gg m_n$ we conclude $\chi(m)=\chi(\mathscr M)$. Fix $\gamma_1=m$ and let $\mu_1\in\mathscr M$ and $A_1\in\mathscr{A}$ be such that $\gamma_1(A_1)<2^{-2}$ and $\abs{\mu_1}(A_1)\ge\chi(\mathscr M)/2$.
Assume that $\mu_1,\ldots,\mu_{n-1}\in\mathscr M$ and $A_1,\ldots,A_{n-1}\in\mathscr{A}$ have been chosen so that, letting $\gamma_i=\frac{1}{i}(m+\abs{\mu_1}+\ldots+\abs{\mu_{i-1}})$, \begin{equation} \gamma_{n-1}(A_{n-1})<2^{-2(n-1)}\qquad \abs{\mu_{n-1}}(A_{n-1})\ge\chi(\mathscr M)/2 \end{equation} Since $\gamma_n\gg m$, $\chi(\gamma_n)=\chi(\mathscr M)$. There exists then $\mu_n\in\mathscr M$ such that $\norm{(\mu_n)_{\gamma_n}^\perp}>\chi(\mathscr M)/2$ and thus a set $A_n\in\mathscr{A}$ such that $\abs{\mu_n}(A_n)\ge\chi(\mathscr M)/2$ while $\gamma_n(A_n)<2^{-2n}$. It follows by induction that there are sequences $\seqn\mu$ and $\seqn A$ such that for all $n$ \begin{align*} \sup_{i< n}\abs{\mu_i}(A_{n-1})<2^{-n} \quad\text{and}\quad \abs{\mu_n}(A_n)\ge\chi(\mathscr M)/2 \end{align*} Let $\mu=\sum_n2^{-n}\abs{\mu_n}$. \iref{uacp} implies that the sequence $\seqn\mu$ is uniformly absolutely continuous with respect to $\mu$; on the other hand, \begin{equation*} \mu(A_k)=\sum_n2^{-n}\abs{\mu_n}(A_k)\le \sum_{n=1}^k2^{-n}\abs{\mu_n}(A_k)+2^{-k} \le2^{-(k-1)} \end{equation*} so that $\chi(\mathscr M)\le2\lim_k\abs{\mu_k}(A_k)=0$. But then $\chi(m)=0$, i.e. $m\gg\mathscr M$ and, by (\textit{v}), $m\gg_u\mathscr M$. \imply{uop}{umc}. Let $\mathscr{A}$ be a $\sigma$-algebra and $\mathscr M\subset ca(\mathscr{A})$. Consider a decreasing sequence $\seqn B$ in $\mathscr{A}$ and let $A_n=B_n\backslash \bigcap_kB_k$. If there exist $\varepsilon>0$ and a sequence $\seqn\mu$ in $\mathscr M$ such that $\lim_n\abs{\mu_n}(A_n)>\varepsilon$, define $\gamma\in ba(\mathscr{A})$ as in \eqref{gamma}. It is obvious that $\gamma(A_n)>\varepsilon$ so that $\gamma$ is not countably additive, i.e. its purely finitely additive part, $\gamma^\perp$, is non-zero. However, $\gamma^\perp\perp\mathscr M$ while, by construction, $\gamma\le\Psi_\mathscr M$, contradicting \iref{uop}.
\end{proof} We also conclude \begin{corollary} \label{cor compact} Let $\mathscr M\subset ba(\mathscr{A})$ be relatively weakly compact. Then, (i) $\lambda\perp\mathscr M$ if and only if $\lambda\perp_u\clt\mathbf A(\M)$ and (ii) $m\in\clt\mathbf A(\M)$ implies $m_\mathscr M^\perp=0$. \end{corollary} \begin{proof} In the proof of the implication \imply{umc}{uop} of Theorem \ref{th compact} we showed that $\lambda\perp\mathscr M$ if and only if $\lambda\perp\clt\mathbf A(\M)$. (\textit{i}) then follows from Corollary \ref{cor sep}; the second from (\textit{i}) and Lemma \ref{lemma lebesgue}. \end{proof} Theorem \ref{th compact} has a number of implications which help clarify the relationship with other well known criteria for relative weak compactness. For example, $\mathscr M$ is relatively weakly compact if and only if $\{\abs\mu:\mu\in\mathscr M\}$ is so. Moreover, all disjoint sequences of sets satisfy condition \eqref{M sequence} (by boundedness) so that if $\mathscr M$ is relatively weakly compact then necessarily $\mu(A_n)$ converges to $0$ uniformly in $\mathscr M$ for every disjoint sequence, a property of weakly convergent sequences already outlined in \cite[Theorem 8.7.3]{rao}. Another immediate consequence of Theorem \ref{th compact} is that a subset of $ca(\mathscr{A})$ is relatively weakly compact if and only if it is norm bounded and uniformly countably additive or, equivalently, uniformly absolutely continuous with respect to some $\lambda\in ca(\mathscr{A})$, see \cite[IV.9.1 and IV.9.2]{bible}. Another characterization of weak compactness is given in the following Theorem \ref{th compact 2}. A sequence $\seqn f$ in $\mathscr{S}(\mathscr{A})$ is said to be uniformly bounded whenever $\sup_n\norm{f_n}<\infty$.
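To see why every disjoint sequence $\seqn A$ satisfies \eqref{M sequence}, note that finite additivity of $\abs\mu$ gives \[ \lim_j\lim_k\abs\mu\left(\bigcup_{n=j}^{j+k}A_n\right)=\lim_j\lim_k\sum_{n=j}^{j+k}\abs\mu(A_n)=0, \] because $\sum_n\abs\mu(A_n)\le\norm\mu<\infty$, so that the tails of the series vanish.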
\begin{theorem} \label{th compact 2} In the following, conditions \iref{rwc}--\iref{un cauchy} are equivalent and imply \iref{lim}: \begin{enumerate}[(i)] \item\label{rwc} $\mathscr M$ is relatively weakly compact; \item\label{un cauchy} $\mathscr M$ is bounded and possesses the uniform Cauchy property, i.e. if $\mathscr M_0\subset\mathscr M$ and $\seqn f$ is a uniformly bounded sequence in $\mathscr{S}(\mathscr{A})$ which is Cauchy in $L^1(\mu)$ for all $\mu\in\mathscr M_0$, then \begin{equation} \label{unCa} \lim_n\sup_{\mu\in\mathscr M_0}\sup_{p,q}\abs\mu(\abs{f_{n+p}-f_{n+q}})=0 \end{equation} \item\label{lim} $\mathscr M$ is bounded and for each sequence $\seqn f$ as in \iref{un cauchy} and each sequence $\seq\mu k$ in $\mathscr M$ \begin{equation} \label{limits} \LIM_k\lim_n\mu_k(f_n)=\lim_n\LIM_k\mu_k(f_n) \end{equation} \end{enumerate} \end{theorem} \begin{proof} \imply{rwc}{un cauchy} If $\mathscr M$ is relatively weakly compact, it is bounded and uniformly absolutely continuous with respect to some $m\in\mathbf A(\M)$. If $\seqn f$ is uniformly bounded and Cauchy in $L^1(\mu)$ for all $\mu\in\mathscr M$, then it is Cauchy in $L^1(m)$ too. Moreover, given that $$ \abs\mu(\dabs{f_{k+p}-f_{k+q}})\le 2\sup_n\norm{f_n}\ \abs\mu^*(\dabs{f_{k+p}-f_{k+q}}\ge c) +c\sup_{\mu\in\mathscr M}\norm\mu $$ \eqref{unCa} follows from uniform absolute continuity. \imply{un cauchy}{lim} It follows from the inequality \begin{align*} \dabs{\LIM_k\lim_n\mu_k(f_n)-\lim_i\LIM_k\mu_k(f_i)}\le \lim_n\sup_k\sup_{p,q}\abs{\mu_k}(\abs{f_{n+p}-f_{n+q}}) \end{align*} \imply{un cauchy}{rwc} Choose the sequence $\seqn f$ in \iref{un cauchy} to consist of indicators of a decreasing sequence $\seqn A$ of $\mathscr{A}$ measurable sets. Then \eqref{unCa} implies that $\{\abs\mu:\mu\in\mathscr M\}$ is uniformly monotone continuous.
\end{proof} \section{The Representation of Continuous Linear Functionals on $ba(\mathscr{A})$} \label{sec riesz} The class of sequences introduced in Theorem \ref{th compact 2} will serve in this section as the basis for obtaining a rather precise representation of continuous linear functionals on $ba(\mathscr{A})$. To this end we need some additional notation. $f\in\mathfrak{B}(\mathscr{A})$ and $\mu\in ba(\mathscr{A})$ admit the Stone space representation as $\tilde f\in\mathcal C(\tilde\mathscr{A})$ and $\tilde\mu\in ca(\sigma\tilde\mathscr{A})$ where $\tilde\mathscr{A}$ is the algebra of all clopen sets of a compact, Hausdorff, totally disconnected space $\tilde\Omega$ such that $\mu(f)=\tilde\mu(\tilde f)$, \cite{bible}. In the following we also use $\mathscr L(\mathscr{A})$ for the space of continuous linear operators $T:ba(\mathscr{A})\to ba(\mathscr{A})$ and $\mathscr L_*(\mathscr{A})$ for the subspace of those $T\in\mathscr L(\mathscr{A})$ possessing the additional property \begin{equation} \label{T} T(\mu_f)=T(\mu)_f \qquad f\in L^1(\mu),\mu\in ba(\mathscr{A}) \end{equation} Remark that if $A,A_1,\ldots,A_N\in\mathscr{A}$ with $A_n\cap A_m=\varnothing$ for $n\ne m$, then \eqref{T} implies \begin{align*} \sum_{n=1}^N\abs{T(\mu)(A\cap A_n)}=\sum_{n=1}^N\abs{T(\mu_{A\cap A_n})(\Omega)} \le\norm T\sum_{n=1}^N\norm{\mu_{A\cap A_n}} =\norm T\sum_{n=1}^N\abs{\mu}(A\cap A_n) \le\norm T\abs\mu(A) \end{align*} so that $\abs{T(\mu)}\le\norm T\abs\mu$, i.e. $T(\mu)\in ba_\infty(\mathscr{A},\mu)$. Finally, if $T\in\mathscr L(\mathscr{A})$, let $T_\lambda$ denote its restriction to $ba(\mathscr{A},\lambda)$.
\begin{proposition} \label{pro riesz} $ba(\mathscr{A})^*$ is isometrically isomorphic to the space $\mathscr L_*(\mathscr{A})$ and the corresponding elements are related via the identity \begin{equation} \label{riesz rep} \phi(\mu)=T(\mu)(\Omega)\qquad \mu\in ba(\mathscr{A}) \end{equation} Moreover, for each $\lambda\in ba(\mathscr{A})$ there is a sequence $\seqn {f^\lambda}$ in $\mathscr{S}(\mathscr{A})$ uniformly bounded by $\norm{T_\lambda}$ which is Cauchy in $L^1(\mu)$ for all $\mu\in ba(\mathscr{A},\lambda)$ and such that \begin{equation} \label{riesz seq} \limsup_n\norm{f^\lambda_n}=\norm{T_\lambda} \quad\text{and}\quad T(\mu)=\lim_n\mu(f^\lambda_n) \qquad \mu\in ba(\mathscr{A},\lambda) \end{equation} If $T$ is positive then $\seqn{f^\lambda}$ can be chosen to be positive. \end{proposition} \begin{proof} If $T\in\mathscr L_*(\mathscr{A})$ it is obvious that the right hand side of \eqref{riesz rep} implicitly defines a continuous linear functional on $ba(\mathscr{A})$ and that $\norm\phi\le\norm T$. Conversely, let $\phi\in ba(\mathscr{A})^*$, fix $\mu\in ba(\mathscr{A})$ and define the set function $T(\mu)$ on $\mathscr{A}$ implicitly by letting \begin{equation} \label{restr} T(\mu)(A)=\phi(\mu_A)\qquad A\in\mathscr{A} \end{equation} $T(\mu)$ is additive by the linearity of $\phi$. Moreover, if $A_1,\ldots,A_N\in\mathscr{A}$ are disjoint then \begin{align*} \sum_{n=1}^N\abs{T(\mu)(A\cap A_n)} =\sum_{n=1}^N\abs{\phi(\mu_{A\cap A_n})} \le\sum_{n=1}^N\norm\phi\norm{\mu_{A\cap A_n}} =\sum_{n=1}^N\norm\phi\abs\mu(A\cap A_n) \le\norm\phi\abs\mu(A) \end{align*} so that $\abs{T(\mu)}\le\norm\phi\abs\mu$. It follows that $T(\mu)\in ba_\infty(\mathscr{A},\mu)$ and $\norm T\le\norm\phi$. Since $(\mu_A)_B=\mu_{A\cap B}$, we conclude from \eqref{restr} that $T(\mu)(A\cap B)=T(\mu_A)(B)$ so that $T(\mu_A)=T(\mu)_A$ for all $A\in\mathscr{A}$. This conclusion extends by linearity to $\mathscr{S}(\mathscr{A})$.
If $\seqn f$ is a fundamental sequence for $f\in L^1(\mu)\subset L^1(T(\mu))$, then by continuity \begin{align*} T(\mu_f)=\lim_nT(\mu_{f_n})=\lim_nT(\mu)_{f_n}=T(\mu)_f \end{align*} and we conclude that $T\in\mathscr L_*(\mathscr{A})$. The identity $\phi(\mu)=T(\mu)(\Omega)$ thus defines a linear isometry of $\mathscr L_*(\mathscr{A})$ onto $ba(\mathscr{A})^*$. To conclude that this is an isomorphism let $T_1,T_2\in\mathscr L_*(\mathscr{A})$ and let $\phi_1,\phi_2$ be the associated elements of $ba(\mathscr{A})^*$. If $T_1\ne T_2$ then $T_1(\mu)\ne T_2(\mu)$ for some $\mu\in ba(\mathscr{A})$ and thus, by \eqref{restr}, $\phi_1(\mu_A)=T_1(\mu)(A)\ne T_2(\mu)(A)= \phi_2(\mu_A)$ for some $A\in\mathscr{A}$. To prove \eqref{riesz seq}, denote by $\sigma:ba(\mathscr{A})\to ca(\sigma\tilde\mathscr{A})$ the Stone isomorphism. Then, if $T\in\mathscr L_*(\mathscr{A})$ and $\tilde T=\sigma T\sigma^{-1}$ one immediately concludes that $\tilde T:ca(\sigma\tilde\mathscr{A})\to ca(\sigma\tilde\mathscr{A})$ and that \begin{align*} \tilde T(\tilde\mu_{\tilde f}) &= \lim_n\tilde T(\tilde\mu_{\tilde f_n}) = \lim_n\sigma \left(T(\mu_{f_n})\right) = \lim_n\sigma\left( T(\mu)_{f_n}\right) = \lim_n\sigma\left( T(\mu)\right)_{\tilde f_n} = \lim_n\tilde T(\tilde\mu)_{\tilde f_n} = \tilde T(\tilde\mu)_{\tilde f} \end{align*} so that $\tilde T\in\mathscr L_*(\sigma\tilde\mathscr{A})$. Exploiting the existence of Radon--Nikodym derivatives, we conclude that when $\mu\in ba(\mathscr{A},\lambda)$, \begin{align*} \tilde T(\tilde\mu)=\tilde T(\tilde\lambda)_{\tilde f^\mu} =\int\tilde f^\mu\tilde f^\lambda d\abs{\tilde\lambda} =\int\tilde f^\lambda d\tilde\mu \end{align*} with $\tilde f^\mu\in L^1(\tilde\lambda)$, $\tilde f^\lambda\in L^\infty(\tilde\lambda)$ and $\norm{\tilde f^\lambda}_{L^\infty}\le\norm{T_\lambda}$.
Let, as usual, \begin{equation*} \tilde f^\lambda_n=\sum_{i=-2^n}^{2^n}i2^{-n}\norm{T_\lambda} \sset{i2^{-n}\norm{T_\lambda}\le\tilde f^\lambda<(i+1)2^{-n}\norm{T_\lambda}} \end{equation*} The sequence $\seqn{\tilde f^\lambda}$ in $\mathscr{S}(\sigma\tilde\mathscr{A})$ is increasing, converges uniformly to $\tilde f^\lambda$ and is positive if $T_\lambda$ is so. Replacing each $\sigma\tilde\mathscr{A}$ measurable set in the support of $\tilde f^\lambda_n$ with a corresponding $\tilde\mathscr{A}$ measurable set arbitrarily close to it in $\tilde\lambda$ measure, we obtain a sequence $\seqn{\hat f^\lambda}$ in $\mathscr{S}(\tilde\mathscr{A})$ such that (\textit{i}) $\norm{\hat f^\lambda_n}\le\norm{T_\lambda}$, (\textit{ii}) $\hat f^\lambda_n$ is positive if $T_\lambda$ is so and (\textit{iii}) $\seqn{\hat f^\lambda}$ converges to $\tilde f^\lambda$ in $L^1(\tilde\mu)$ for each $\mu\in ba(\mathscr{A},\lambda)$, by \cite[III.3.6]{bible}. Let $f_n^\lambda=\sigma^{-1}\left(\hat f^\lambda_n\right)\in\mathscr{S}(\mathscr{A})$. Then, \begin{align} \label{conv} T(\mu)=\sigma^{-1}\left(\tilde T(\tilde\mu)\right) = \lim_n\sigma^{-1}\left(\int\hat f^\lambda_nd\tilde\mu\right) = \lim_n\int\sigma^{-1}\left(\hat f^\lambda_n\right)d\mu = \lim_n\int f^\lambda_nd\mu \end{align} so that $\norm{T_\lambda}\le\limsup_n\norm{f^\lambda_n}$. Properties (\textit{i}) and (\textit{ii}) carry over to the sequence $\seqn{f^\lambda}$, by the properties of the Stone isomorphism, and therefore $\norm{T_\lambda}=\limsup_n\norm{f^\lambda_n}$. Moreover, \begin{equation*} \lim_n\sup_{p,q}\abs\mu\left(\dabs{f^\lambda_{n+p}-f^\lambda_{n+q}}\right) = \lim_n\sup_{p,q}\abs{\tilde\mu}\left(\dabs{\hat f^\lambda_{n+p}-\hat f^\lambda_{n+q}}\right) = 0 \end{equation*} so that the sequence is Cauchy in $L^1(\mu)$ for all $\mu\in ba(\mathscr{A},\lambda)$. \end{proof} Implicit in Proposition \ref{pro riesz} is a simple proof of the following important result.
\begin{corollary}[Berti and Rigo] \label{cor berti rigo} The dual space of $L^1(\lambda)$ is isomorphic to $ba_\infty(\mathscr{A},\lambda)$ and the corresponding elements are related via the identity \begin{equation} \label{berti rigo} \varphi(f)=\mu(f)\qquad f\in L^1(\lambda) \end{equation} \end{corollary} \begin{proof} By the isometric isomorphism between $L^1(\lambda)$ and $ba_1(\mathscr{A},\lambda)$ and Proposition \ref{pro riesz}, each continuous linear functional $\varphi$ on $L^1(\lambda)$ corresponds isometrically to some $T\in\mathscr L_*(\mathscr{A})$ via the identity $\varphi(f)=T(\lambda)(f)$. Write $\mu=T(\lambda)$. Conversely, if $\mu\in ba_\infty(\mathscr{A},\lambda)$ then it is obvious that the right hand side of \eqref{berti rigo} defines a continuous linear functional on $L^1(\lambda)$. \end{proof} Another interesting conclusion is \begin{corollary} \label{cor riesz 2} For every uniformly bounded net $\neta h$ in $\mathfrak{B}(\mathscr{A})$ there exists a uniformly bounded sequence $\seqn f$ in $\mathscr{S}(\mathscr{A})$ which is Cauchy in $L^1(\mu)$ for all $\mu\in ba(\mathscr{A},\lambda)$ and such that \begin{equation} \label{net} \LIM_\alpha\mu(h_\alpha\set A)=\lim_n\mu(f_n\set A)\qquad A\in\mathscr{A},\ \mu\in ba(\mathscr{A},\lambda) \end{equation} If $\neta h$ is increasing then $\seqn f$ can be chosen to be increasing too. \end{corollary} \begin{proof} The existence claim follows from Proposition \ref{pro riesz} upon noting that the left hand side of \eqref{net} indeed defines a continuous linear functional on $ba(\mathscr{A})$. \end{proof} Corollary \ref{cor riesz 2} suggests that dominated families of measures admit an implicit, denumerable structure. This intuition will be made precise in the next section. An exact integral representation of the form $\phi(\mu)=\mu(f)$ for elements of $ba(\mathscr{A})^*$ will not be possible in general, see \cite[9.2.1]{rao}.
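It is perhaps worth noting that, when $\lambda$ is countably additive on a $\sigma$-algebra, Corollary \ref{cor berti rigo} reduces to the classical identification $L^1(\lambda)^*=L^\infty(\lambda)$: each $\mu\in ba_\infty(\mathscr{A},\lambda)$ is then itself countably additive and therefore, by the Radon--Nikodym theorem, of the form
\begin{equation*}
\mu(f)=\lambda(fg)\qquad f\in L^1(\lambda)
\end{equation*}
for some $g\in L^\infty(\lambda)$.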
On the other hand, the representation \eqref{riesz seq} may seem unsatisfactory inasmuch as the intervening sequence depends on the choice of $\lambda$. This last remark also applies to $ca(\mathscr{A})$, a space for which, despite the characterization of weak compactness, a representation of continuous linear functionals is missing. The following result provides an answer. \begin{theorem} \label{th riesz} A linear functional $\phi$ on $ba(\mathscr{A})$ is continuous if and only if it admits the representation \begin{equation} \label{riesz net} \phi(\mu)=\lim_\alpha\mu(f_\alpha)\qquad\mu\in ba(\mathscr{A}) \end{equation} where $\neta f$ is a uniformly bounded net in $\mathscr{S}(\mathscr{A})$ with $\limsup_{\alpha\in\mathfrak A}\norm{f_\alpha}=\norm\phi$, which is Cauchy in $L^1(\mu)$ for all $\mu\in ba(\mathscr{A})$. \end{theorem} \begin{proof} It is easily seen that if the net $\neta f$ is as in the claim, then the right hand side of \eqref{riesz net} indeed defines a continuous linear functional on $ba(\mathscr{A})$ and that $\norm\phi\le\limsup_\alpha\norm{f_\alpha}$. For the converse, passing to the Stone space representation and given completeness of $ca(\sigma\tilde\mathscr{A})$, \eqref{riesz seq} becomes \begin{equation} \label{rep} T(\mu)(A)=\tilde\mu(\set{\tilde A}\tilde f^\lambda)\qquad A\in\mathscr{A},\ \mu\in ba(\mathscr{A},\lambda) \end{equation} for some $\tilde f^\lambda\in L^\infty(\tilde\lambda)$ with $\abs{\tilde f^\lambda}\le\norm{T_\lambda}$. Let $\mathfrak A$ be the collection of all finite subsets of $ba(\mathscr{A})$ directed by inclusion. For each $\alpha\in\mathfrak A$ choose $\lambda_\alpha\in ba(\mathscr{A})$ such that $\lambda_\alpha\gg\alpha$. Of course, for each $\mu\in ba(\mathscr{A})$ there exists $\alpha\in\mathfrak A$ such that $\lambda_\alpha\gg\mu$.
We then get the representation \begin{equation} \label{rep2} T(\mu)(A)=\tilde\mu(\tilde f^{\lambda_\alpha}\set{\tilde A}) \qquad A\in\mathscr{A},\ \mu\in\alpha,\ \alpha\in\mathfrak A \end{equation} with $\norm{\tilde f^{\lambda_\alpha}}\le\norm{T_{\lambda_\alpha}}$. Fix $\tilde f_\alpha\in\mathscr{S}(\tilde\mathscr{A})$ such that $\norm{\tilde f_\alpha}\le\norm{\tilde f^{\lambda_\alpha}}$ and $$ \sup_{\mu\in\alpha} \tilde\mu\left(\dabs{\tilde f^{\lambda_\alpha}-\tilde f_\alpha}\right)\le2^{-\abs\alpha-1} $$ and let $f_\alpha\in\mathscr{S}(\mathscr{A})$ correspond to $\tilde f_\alpha$ under the Stone isomorphism. Then, \begin{align*} \lim_\alpha\mu( f_\alpha\set A)&=\lim_{\alpha}\tilde\mu(\tilde f_\alpha\set{\tilde A}) =\lim_{\alpha}\tilde\mu(\tilde f^{\lambda_\alpha}\set{\tilde A}) =T(\mu)(A) \qquad A\in\mathscr{A},\ \mu\in ba(\mathscr{A}) \end{align*} which, together with the identity $\phi(\mu)=T(\mu)(\Omega)$, proves the existence of the representation \eqref{riesz net} and of the inequality $\limsup_\alpha\norm{f_\alpha}\le\lim_\alpha\norm{T_{\lambda_\alpha}} \le\norm T=\norm\phi$. Moreover, if $\alpha_1,\alpha_2,\alpha\in\mathfrak A$ and $\mu\in\alpha\subset\alpha_1,\alpha_2$, then, with $h(\alpha_1,\alpha_2)\in\mathscr{S}(\mathscr{A})$ denoting the sign of $f_{\alpha_1}-f_{\alpha_2}$, \begin{align*} \mu(\dabs{f_{\alpha_1}-f_{\alpha_2}}) &=\mu\left(h(\alpha_1,\alpha_2)(f_{\alpha_1}-f_{\alpha_2})\right)\\ &\le2^{-\abs\alpha}+\tilde\mu\left(\tilde h(\alpha_1,\alpha_2)(\tilde f^{\lambda_{\alpha_1}} -\tilde f^{\lambda_{\alpha_2}})\right)\\ &=2^{-\abs\alpha} \end{align*} the third line following from \eqref{rep2}. But then $\neta f$ is indeed a Cauchy net in $L^1(\mu)$ for all $\mu\in ba(\mathscr{A})$.
\end{proof} The space of uniformly bounded nets in $\mathscr{S}(\mathscr{A})$ becomes a linear space if, for two such nets $\tilde f=\neta f$ and $\tilde g=\net g\delta{\mathfrak D}$, we endow $\mathfrak A\times\mathfrak D$ with the product order obtained by letting $(\alpha_1,\delta_1)\ge(\alpha_2,\delta_2)$ whenever $\alpha_1\ge\alpha_2$ and $\delta_1\ge\delta_2$, and write $\tilde f+\tilde g$ as $\nnet{f_\alpha+g_\delta}{(\alpha,\delta)}{\mathfrak A\times\mathfrak D}$. Theorem \ref{th riesz} suggests defining a seminorm on this space by letting \begin{equation} \label{norm} \norm{F}=\limsup_\alpha\norm{f_\alpha} \qquad\text{whenever}\qquad F=\neta f \end{equation} We denote by $\mathfrak C(\mathscr{A})$ the linear space of equivalence classes of uniformly bounded nets in $\mathscr{S}(\mathscr{A})$ which are Cauchy in $L^1(\mu)$ for all $\mu\in ba(\mathscr{A})$. \begin{theorem} The identity \eqref{riesz net} defines an isometric isomorphism between $ba(\mathscr{A})^*$ and $\mathfrak C(\mathscr{A})$. \end{theorem} \begin{proof} The right hand side of \eqref{riesz net} is invariant upon replacing the net $F=\neta f$ with $G=\net g\delta{\mathfrak D}$ whenever $\norm{F-G}=0$. \end{proof} \section{Some Implications} \label{sec implications} The characterization so obtained is admittedly not an easy one, due to the intrinsic difficulty of identifying explicitly the net associated with each continuous functional. It has, nonetheless, a number of interesting implications. We illustrate some with no claim of completeness. \begin{corollary} \label{cor sep} Let $\mathscr M$ and $\mathscr N$ be convex, weakly compact subsets of $ba(\mathscr{A})$.
Then, \begin{enumerate}[(i)] \item $\mathscr M\cap\mathscr N=\varnothing$ if and only if there exists $f\in\mathscr{S}(\mathscr{A})$ such that $\inf_{\nu\in\mathscr N}\nu(f)>\sup_{\mu\in\mathscr M}\mu(f)$; \item there exists $\mathcal K\subset\mathscr{S}(\mathscr{A})$ and a subset $\mathscr M_0$ of extreme points of $\mathscr M$ such that \begin{equation} \label{support} \mathscr M=\left\{m\in ba(\mathscr{A}):m(k)\le\max_{\mu\in\mathscr M_0}\mu(k)\text{ for all }k\in\mathcal K\right\} \end{equation} \end{enumerate} \end{corollary} \begin{proof} (\textit{i}). The weak topology is a locally convex linear topology. There is then a continuous linear functional $\phi$ on $ba(\mathscr{A})$ and constants $a>b$ such that $\inf_{\nu\in\mathscr N}\phi(\nu)>a>b>\sup_{\mu\in\mathscr M}\phi(\mu)$. By compactness, $\mathscr M$ and $\mathscr N$ are dominated so that, by Proposition \ref{pro riesz}, $\phi$ is associated with a uniformly bounded, Cauchy sequence $\seqn f$ in $\mathscr{S}(\mathscr{A})$, as in \eqref{riesz seq}. We also know from Theorem \ref{th compact 2} that for all $\varepsilon>0$ there exists $n$ sufficiently large so that $\inf_{\nu\in\mathscr N}\nu(f_n)>a-\varepsilon$ and $b+\varepsilon>\sup_{\mu\in\mathscr M}\mu(f_n)$. Choosing $\varepsilon<(a-b)/2$ we get $\inf_{\nu\in\mathscr N}\nu(f_n)>\frac{a+b}{2}>\sup_{\mu\in\mathscr M}\mu(f_n)$. (\textit{ii}). By (\textit{i}) applied to the pair $\mathscr M$, $\{m\}$, for each $m\notin\mathscr M$ there is $k_m\in\mathscr{S}(\mathscr{A})$ such that $\sup_{\mu\in\mathscr M}\mu(k_m)<m(k_m)$. Let $\mathcal K=\{k_m:m\notin\mathscr M\}$. For each $k\in\mathcal K$ choose one extreme point $\mu_k\in\mathscr M$ in the corresponding supporting set of $\mathscr M$ and let $\mathscr M_0=\{\mu_k:k\in\mathcal K\}$. By construction, each $k\in\mathcal K$, when considered as a function on $\mathscr M$, attains its maximum on $\mathscr M_0$, so that the right hand side of \eqref{support} contains $\mathscr M$.
For each $m\notin\mathscr M$ there is $k\in\mathcal K$ such that $m(k)>\sup_{\mu\in\mathscr M}\mu(k)$ so that the right hand side is included in $\mathscr M$. \end{proof} It is well known that, by combining the theorems of Eberlein--\v{S}mulian and of Mazur and taking convex combinations, one may transform a weakly convergent sequence in a Banach space into a norm convergent one. The following theorem establishes a weak form of this fundamental fact, one which holds even in the absence of weak convergence. The proof exploits some of the ideas introduced by Koml\'{o}s \cite{komlos}. \begin{theorem} \label{th komlos} Let $\seqn\mu$ be a norm bounded sequence in $ba(\mathscr{A})_+$ and define \begin{equation} \Gamma(n)=\co(\mu_n,\mu_{n+1},\ldots) \quad\text{and}\quad \lambda=\sum_n2^{-n}\mu_n \end{equation} There exists $\xi\in ba(\mathscr{A},\lambda)_+$ and a sequence $\seqn m$ with $m_n\in\Gamma(n)$ for each $n\in\mathbb{N}$ such that \begin{equation} \label{fatou} \lim_n\norm{(\xi\wedge k\lambda)-(m_n\wedge k\lambda)}=0 \quad\text{and}\quad \xi(A)\le\liminf_nm_n(A) \qquad k\in\mathbb{N},\ A\in\mathscr{A} \end{equation} \end{theorem} \begin{proof} Define the families \begin{align*} \mathscr C(n) = \{\nu\in ba(\mathscr{A})_+:\nu\le m\text{ for some }m\in\Gamma(n)\} \quad\text{and}\quad \mathscr C=\bigcap_n\cls{\mathscr C(n)}{ } \end{align*} and the set functions \begin{align} \label{nu} \nu_k=\LIM_n(\mu_n\wedge k\lambda) \qquad k\in\mathbb{N} \end{align} We notice that the sequence $\tilde\nu=\seq{\nu}{k}$ so obtained satisfies the following properties for all $k\in\mathbb{N}$: (\textit{a}) $\nu_{k-1}\le\nu_k\le k\lambda$ and (\textit{b}) $\nu_k\in\mathscr C$ as, by Theorem \ref{th exchange}, \begin{align*} \nu_k \in \bigcap_n\cco{}(\mu_n\wedge k\lambda,\mu_{n+1}\wedge k\lambda,\ldots) \end{align*} The family $\Xi$ of sequences possessing properties (\textit{a}) and (\textit{b}) may be partially ordered upon letting $\tilde\nu\ge\tilde\nu'$ whenever $\nu_k\ge\nu'_k$ for all $k\in\mathbb{N}$.
If $\{\tilde\nu^a:a\in\mathfrak A\}$ is a chain in $\Xi$ we may let $\nu_k=\lim_a\nu^a_k$ and $\tilde\nu=\seq{\nu}{k}$. It is easily seen that $\tilde\nu$ is increasing and that $\nu_k\le k\lambda$. Moreover, since $\mathscr C$ is closed and norm bounded, $\nu_k^a$ converges to $\nu_k$ in norm, so that $\nu_k\in\mathscr C$. Let $\tilde\xi$ be a maximal element in $\Xi$ and define the set function $\xi$ implicitly by letting $\xi(A)=\lim_k\xi_k(A)$ for each $A\in\mathscr{A}$. Observe that $\xi(\Omega)\le\sup_n\norm{\mu_n}<\infty$ so that $\lim_k\norm{\xi-\xi_k}=0$ and $\xi\in\mathscr C$. There exist then sequences $\seqn m$, with $m_n\in\Gamma(n)$, and $\seqn\delta$ in $ba(\mathscr{A})_+$ such that $\kappa_n=m_n-\delta_n\in\mathscr C(n)$ and that $\lim_n\norm{\kappa_n-\xi}=0$. But then \begin{align} \label{dom} \xi_k \le \xi\wedge k\lambda = \lim_n(\kappa_n\wedge k\lambda) \le \LIM_n(m_n\wedge k\lambda) \equiv \xi'_k \qquad k\in\mathbb{N} \end{align} where the intervening limits refer to setwise convergence. Observe that $\tilde\xi'=\seq{\xi'}k\in\Xi$ and $\tilde\xi'\ge\tilde\xi$ which is contradictory unless $\xi_k=\xi\wedge k\lambda=\LIM_n(m_n\wedge k\lambda)$. This clearly implies $\xi(A)\le\liminf_nm_n(A)$ for each $A\in\mathscr{A}$. Moreover, \eqref{dom} remains true if we replace the sequence $\seqn m$ with any of its subsequences. This implies that $(m_n\wedge k\lambda)(A)$ converges to $\xi_k(A)$ for each $A\in\mathscr{A}$ so that $\LIM_n(m_n\wedge k\lambda)=\lim_n(m_n\wedge k\lambda)$. 
From the inequality $ \delta_n \le \abs{\kappa_n-\xi}+m_n-\xi_k $ we deduce \begin{align*} \delta_n\wedge k\lambda \le \abs{\kappa_n-\xi}+((m_n-\xi_k)\wedge k\lambda) \le \abs{\kappa_n-\xi}+(m_n\wedge2k\lambda)-\xi_k \end{align*} and therefore, since $\delta_n\wedge k\lambda\le\delta_n\wedge j\lambda$ whenever $j\ge k$, \begin{align*} \limsup_n\norm{\delta_n\wedge k\lambda} &\le \lim_j\limsup_n(\delta_n\wedge j\lambda)(\Omega)\\ &\le \lim_j\{\limsup_n(m_n\wedge2j\lambda)(\Omega)-\xi_j(\Omega)\}\\ &\le \lim_j(\xi_{2j}-\xi_j)(\Omega)\\ &=0 \end{align*} But then we conclude \begin{align*} \limsup_n\norm{(m_n\wedge k\lambda)-(\xi\wedge k\lambda)} = \limsup_n\norm{(m_n\wedge k\lambda)-(\kappa_n\wedge k\lambda)} \le \limsup_n\norm{\delta_n\wedge k\lambda} = 0 \end{align*} The second assertion in \eqref{fatou} is a clear consequence of \eqref{dom} and of the fact that $\xi\ll\lambda$. \end{proof} It is clear from the proof that the condition $\liminf_n\norm{\mu_n}<\infty$ may be replaced with the inequality $\lim_k\lim_n\sup_{\mu\in\Gamma(n,k)}\norm\mu<\infty$, which is more general but less perspicuous. One should also remark that if the sequence $\seqn\mu$ in Theorem \ref{th komlos} is weakly convergent, then, by the uniform absolute continuity property, $\mu_n\wedge k\lambda$ converges (in norm) to $\mu_n$ uniformly in $n\in\mathbb{N}$ and thus the sequence $\seqn\mu$ converges strongly to $\xi$. Theorem \ref{th komlos} is then indeed a generalization of more classical results. Some implications of Theorem \ref{th komlos} are developed in \cite{BK}. \section{The Halmos-Savage Theorem and its Implications} \label{sec halmos savage} The results of the preceding section mainly develop the orthogonality implications of Lemma \ref{lemma lebesgue}. We may also deduce interesting conclusions concerning absolute continuity, among which the following finitely additive version of the Lemma of Halmos and Savage \cite[Lemma 7, p. 232]{halmos savage}.
\begin{theorem}[Halmos and Savage] \label{th halmos savage} $\mathscr M\subset ba(\mathscr{A},\lambda)$ if and only if $\mathscr M\subset ba(\mathscr{A},m)$ for some $m\in\mathscr{A}v\mathscr M$. \end{theorem} \begin{proof} $\lambda$ dominates $\mathscr M$ if and only if $\lambda_\M^c$ does. The claim follows from Lemma \ref{lemma lebesgue}. \end{proof} As is well known, Halmos and Savage provided applications of this result to the theory of sufficient statistics. Another possible development is the following finitely additive version of a well-known theorem of Yan \cite[Theorem 2, p. 220]{yan}: \begin{corollary}[Yan] \label{cor yan} Let $\mathcal K\subset L^1(\lambda)$ be convex with $0\in\mathcal K$, $\mathcal C=\mathcal K-\mathscr{S}(\mathscr{A})_+$ and denote by $\overline{\mathcal C}$ the closure of $\mathcal C$ in $L^1(\lambda)$. The following are equivalent: \begin{enumerate}[(i)] \item\label{eta f} for each $f\in L^1(\lambda)_+$ with $\abs\lambda(f)>0$ there exists $\eta>0$ such that $\eta f\notin\overline{\mathcal C}$; \item\label{d A} for each $A\in\mathscr{A}$ with $\abs\lambda(A)>0$ there exists $d>0$ such that $d\set A\notin\overline{\mathcal C}$; \item\label{m} there exists $m\in\mathbb{P}_{ba}(\mathscr{A})$ such that (a) $\mathcal K\subset L^1(m)$ and $\sup_{k\in\mathcal K}m(k)<\infty$, (b) $m\in ba_\infty(\mathscr{A},\lambda)$ and (c) $m(A)=0$ if and only if $\abs\lambda(A)=0$. \end{enumerate} \end{corollary} \begin{proof} The implication \imply{eta f}{d A} is obvious. If $A$ and $d$ are as in \iref{d A} there exists a continuous linear functional $\phi^A$ on $L^1(\lambda)$ separating $\{d\set A\}$ and $\overline{\mathcal C}$, and $\phi^A$ admits the representation $\phi^A(f)=\mu^A(f)$ for some $\mu^A\in ba_\infty(\mathscr{A},\lambda)$ such that $\mu^A\le c^A\abs\lambda$, Corollary \ref{cor berti rigo}. Thus $\sup_{h\in\mathcal C}\mu^A(h)\le a<b<d\mu^A(A)$.
The inclusion $0\in\mathcal C$ implies $a\ge0$ so that $\mu^A(A)>0$; moreover, $\mu^A\ge0$ as $-\mathscr{S}(\mathscr{A})_+\subset\mathcal C$. By normalization we can assume $\norm{\mu^A}\vee c^A\vee a\le1$. The collection $\mathscr M=\{\mu^A:A\in\mathscr{A},\ \abs\lambda(A)>0\}$ so obtained is dominated by $\lambda$ and therefore by some $m\in\mathscr{A}v\mathscr M$, by Theorem \ref{th halmos savage}. Thus $m\le\abs\lambda$, $\norm m\le1$ and $\sup_{h\in\mathcal C}m(h)\le1$. If $A\in\mathscr{A}$ and $\abs\lambda(A)>0$ then $m\gg\mu^A$ implies $m(A)>0$. By normalization we can take $m\in\mathbb{P}_{ba}(\mathscr{A})$. Let $m$ be as in \iref{m} so that $L^1(\lambda)\subset L^1(m)$. If $f\in L^1(\lambda)_+$ and $\abs\lambda(f)>0$ then $f\wedge n$ converges to $f$ in $L^1(\lambda)$ \cite[III.3.6]{bible} so that we can assume that $f$ is bounded. Then, by \cite[4.5.7 and 4.5.8]{rao} there exists an increasing sequence $\seqn f$ in $\mathscr{S}(\mathscr{A})$ with $0\le f_n\le f$ such that $f_n$ converges to $f$ in $L^1(\lambda)$ and therefore in $L^1(m)$ too. For $n$ large enough, then, $\abs\lambda(f_n)>0$ and, $f_n$ being positive and simple, $m(f_n)>0$. But then $m(f)=\lim_nm(f_n)>0$ and, since $\sup_{h\in\overline{\mathcal C}}m(h)<\infty$, $\eta f\notin\overline{\mathcal C}$ for $\eta$ large enough. \end{proof} An application of Corollary \ref{cor yan} is obtained in \cite{BK}. One may also draw from Theorem \ref{th halmos savage} some implications on the structure of a finitely additive set function. \begin{theorem} \label{th hs} Let $\mathscr M\subset ba(\mathscr{A},\lambda)$ and let $\mathscr H_0\subset\mathscr{A}$ generate the ring $\mathscr H$.
There exist $H_1,H_2,\ldots\in\mathscr H_0$ such that, letting $G_n=H_n\backslash\bigcup_{k<n}H_k$ and $G=\bigcap_nH_n^c$, the following holds: \begin{align} \label{disintegrate H} \abs\mu^*(H\cap G)=0\quad\text{and}\quad \mu(A\cap H)=\sum_n\mu\left(A\cap H\cap G_n\right) \qquad\mu\in\mathscr M,\ A\in\mathscr{A},\ H\in\mathscr H \end{align} Moreover: (i) if $\mu\in\mathscr M$ is $\mathscr H_0$-inner regular then \begin{equation} \label{disintegrate A} \mu(A)=\sum_n\mu\left(A\cap G_n\right) \qquad A\in\mathscr{A} \end{equation} (ii) if $\mathscr H_0$ is closed with respect to countable unions then \begin{equation} \label{disintegrate A0} \mu(A)=\mu(A\cap G)+\sum_n\mu(A\cap G_n)\qquad \mu\in\mathscr M,\ A\in\mathscr{A} \end{equation} \end{theorem} \begin{proof} With no loss of generality, let $\lambda\ge0$ and consider the family $\mathscr N=\{\lambda_H:H\in\mathscr H_0\}$. By Theorem \ref{th halmos savage}, choose $m_0=\sum_n\alpha_n\lambda_{H_n}\in\mathscr{A}v\mathscr N$ to be such that $m_0\gg\mathscr N$. Let $G$ and $G_n$ be as in the statement and define $m=\sum_n\lambda_{G_n}$. Observe that $m\ge m_0$ and that, by construction, $\lim_km(\bigcap_{n<k}H_n^c)=0$. But then, for each $H\in\mathscr H_0$ we conclude $\lim_k\lambda_H(\bigcap_{n<k}H_n^c) =\lim_k\lambda(H\cap\bigcap_{n<k}H_n^c)=0$ and, by absolute continuity, $\abs\mu^*(H\cap G)\le\lim_k\abs\mu(H\cap\bigcap_{n<k}H_n^c)=0$ for all $\mu\in\mathscr M$. Consequently, if $A\in\mathscr{A}$ and $H\in\mathscr H_0$ \begin{align*} \mu(A\cap H) &=\mu\left(A\cap H\cap\left(\bigcup_{n< k}G_n\cup\bigcap_{n< k}G^c_n\right)\right)\\ &=\lim_k\mu\left(A\cap H\cap\bigcup_{n< k}G_n\right)\\ &=\sum_n\mu(A\cap H\cap G_n) \end{align*} The set function $\sum_n\mu_{G_n}$ agrees with $\mu$ on the ring $\mathscr R$ consisting of all finite, disjoint unions of sets of the form $A\cap H$ with $A\in\mathscr{A}$ and $H\in\mathscr H_0$.
The collection $\mathscr J= \{H\in\mathscr H:A\cap H\in\mathscr R\ \text{ for all }A\in\mathscr{A}\}$ is again a ring which contains $\mathscr H_0$ and therefore coincides with $\mathscr H$. Thus, $\{H\cap A:H\in\mathscr H,\ A\in\mathscr{A}\}\subset\mathscr R$ which proves \eqref{disintegrate H}. If $\mu\in\mathscr M$ is $\mathscr H_0$-inner regular, then, \begin{align*} \mu^+(A)=\sup_{\{H\in\mathscr H_0:H\subset A\}}\mu(H) =\sup_{\{H\in\mathscr H_0:H\subset A\}}\ \sum_n\mu(H\cap G_n) \le\sum_n\mu^+(A\cap G_n) \le\mu^+(A) \end{align*} the last inequality following from additivity. Exchanging $\mu$ with $-\mu$ proves \eqref{disintegrate A}. Finally, if $\mathscr H_0$ is closed with respect to countable unions, then $\bigcup_{n>k}G_n\in\mathscr H$ and, by \eqref{disintegrate H}, $\mu\left(\bigcup_{n>k}G_n\right)=\sum_{n>k}\mu(G_n)$ from which \eqref{disintegrate A0} readily follows. \end{proof} The following Corollary \ref{cor borel} illustrates a special case. \begin{corollary} \label{cor borel} Let $\Omega$ be a separable metric space, $\mathscr{A}$ its Borel $\sigma$-algebra and $\mathscr M\subset ca(\mathscr{A},\lambda)$. If $\pi$ is a partition of $\Omega$ into open sets then there exist $H_1,H_2,\ldots\in\pi$ such that \begin{equation} \label{disintegrate borel} \mu(A)=\sum_n\mu(A\cap H_n)\qquad A\in\mathscr{A},\ \mu\in\mathscr M \end{equation} \end{corollary} \begin{proof} Under the current assumptions, for each increasing net $\neta O$ of open sets we have $\lambda(\bigcup_\alpha O_\alpha)=\lim_\alpha\lambda(O_\alpha)$, \cite[Proposition 7.2.2]{bogachev}. Let $\mathscr H_0=\pi$, extract $H_1,H_2,\ldots\in\pi$ as in Theorem \ref{th hs} and observe that, $\pi$ being a partition, $G_n=H_n$ for $n=1,2,\ldots$; moreover $G=\bigcup_{H\in\pi,H\subset G}H$ and so $\lambda(G)=0$. We conclude that \eqref{disintegrate borel} holds. \end{proof} To motivate further our interest in the preceding conclusions, assume that $\pi$ is a partition of $\Omega$ into $\mathscr{A}$ measurable sets and that $\lambda$ is $\pi$-inner regular.
Then for each $H\in\pi$ and $A\in\mathscr{A}$ one may define $\cond\sigma A H=\lambda(A\cap H_n)/\lambda(H_n)$ if $H=H_n$ and $\lambda(H_n)\ne0$, and $\cond\sigma A H=m_H(A)$ otherwise, where $m_H\in\mathbb{P}_{ba}(\mathscr{A})$ is any element with $m_H(H)=1$. Write $\cond\sigma A \pi=\sum_{H\in\pi}\cond\sigma A H \set H$. Then, \begin{equation} \lambda(A)=\int\cond\sigma A\pi d\lambda\qquad A\in\mathscr{A} \end{equation} This follows from $\int\cond\sigma A\pi d\lambda= \sum_n\cond\sigma A {G_n}\lambda(G_n)+\int_{G}\cond\sigma A\pi d\lambda =\sum_n\lambda(A\cap G_n) =\lambda(A) $, the integral over $G$ vanishing because \eqref{disintegrate A} implies $\lambda(G)=0$. In the terminology introduced by Dubins \cite{dubins}, $\lambda$ is then strategic along any partition relative to which it is inner regular. \end{document}
\begin{document} \title[On the action of the Steenrod-Milnor operations]{On the action of the Steenrod-Milnor operations\\ on the invariants of the general linear groups } \author{Nguyen Thai Hoa, Pham Thi Kim Minh, Nguyen Sum} \vskip2.5cm \maketitle \section{Introduction} Let $p$ be an odd prime number. Denote by $GL_n = GL(n,\mathbb F_p)$ the general linear group over the prime field $\mathbb F_p$. Each subgroup of $GL_n$ acts on the algebra $P_n=E(x_1,\ldots,x_n)\otimes \mathbb F_p(y_1,\ldots,y_n)$ in the usual manner. Here and in what follows, $E(.,\ldots,. )$ and $\mathbb F_p(.,\ldots,.)$ are the exterior and polynomial algebras over $\mathbb F_p$ generated by the indicated variables. We grade $P_n$ by assigning $\dim x_i=1$ and $\dim y_i=2.$ Dickson showed in \cite{1} that the invariant algebra $\mathbb F_p(y_1,\ldots,y_n)^{GL_n}$ is a polynomial algebra generated by the Dickson invariants $Q_{n,s},\ 0\le s<n$. Hu\`ynh M\`ui \cite{2,3} computed the invariant algebra $P_n^{GL_n}$. He proved that $P_n^{GL_n}$ is generated by $Q_{n, s},\ 0 \le s < n,\ R_{n, s_1, \ldots, s_k},\ 0 \le s_1 < \ldots < s_k < n.$ Here $R_{n, s_1, \ldots, s_k}$ are M\`ui invariants and $Q_{n,s}$ are Dickson invariants (see Section 2). Let $\mathcal A_p$ be the mod $p$ Steenrod algebra and let $\tau_s, \xi_i$ be the Milnor elements of dimensions $2p^s-1,\ 2p^i-2$ respectively in the dual algebra $\mathcal A_p^*$ of $\mathcal A_p$. In \cite{7}, Milnor showed that as an algebra, $$\mathcal A_p^* = E(\tau_0,\tau_1,\ldots )\ \otimes\ \mathbb F_p(\xi_1,\xi_2,\ldots ). $$ Then $\mathcal A_p^*$ has a basis consisting of all monomials $$\tau_S\xi^R \ =\ \tau_{s_1}\ldots \tau_{s_k}\xi_1^{r_1}\ldots \xi_m^{r_m},$$ with $S = (s_1,\ldots ,s_k),\ 0 \le s_1 <\ldots <s_k$, $R = (r_1,\ldots ,r_m),\ r_i \ge 0$. Let $St^{S,R} \in \mathcal A_p$ denote the dual of $\tau_S\xi^R$ with respect to that basis. Then $\mathcal A_p$ has a basis consisting of all operations $St^{S,R}$.
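Since the dual basis pairing preserves dimensions, the operation $St^{S,R}$ has the dimension of the monomial $\tau_S\xi^R$, namely
$$\dim St^{S,R}=\sum_{i=1}^{k}(2p^{s_i}-1)+\sum_{j=1}^{m}r_j(2p^j-2).$$
For instance, $St^{(0),(0)}$, being dual to $\tau_0$, has dimension $1$ and coincides with the Bockstein homomorphism.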
For $S=\emptyset, R=(r)$, $St^{\emptyset, (r)}$ is nothing but the Steenrod operation $P^r$. So, we call $St^{S,R}$ the Steenrod-Milnor operation of type $(S,R)$. We have the Cartan formula $$St^{S,R}(uv) =\sum_{\overset{\scriptstyle{S_1\cup S_2=S}}{R_1+R_2=R}}St^{S_1,R_1}(u)St^{S_2,R_2}(v),$$ where $ R_1=(r_{1i}),\ R_2=(r_{2i}),\ R_1+R_2=(r_{1i}+r_{2i}), S_1\cap S_2=\emptyset, u,v\in P_n$ (see M\`ui \cite{3}). We denote $ St_u=St^{(u),(0)},\ St^{\Delta_i}=St^{\emptyset,\Delta_i}$, where $\Delta_i=(0,\ldots,$ $1,\ldots,0)$ with 1 at the $i$-th place. In \cite{3}, Hu\`ynh M\`ui proved that as a coalgebra, $$\mathcal A_p =\Lambda(St_0,St_1,\ldots)\otimes \Gamma(St^{\Delta_1},St^{\Delta_2},\ldots ).$$ Here, $\Lambda(St_0,St_1,\ldots)$ (resp. $\Gamma(St^{\Delta_1},St^{\Delta_2},\ldots )$) denotes the exterior (resp. polynomial) Hopf algebra with divided powers generated by the primitive Steenrod-Milnor operations $St_0,\ \!St_1,\ldots$ (resp. $St^{\Delta_1}, St^{\Delta_2},\ldots )$. The Steenrod algebra $\mathcal A_p$ acts on $P_n$ by means of the Cartan formula together with the relations \begin{align*} St^{S,R}x_k&=\begin{cases} x_k, & S=\emptyset,\ R=(0),\\ y_k^{p^u}, &S=(u),\ R=(0), \\ 0, &\text{otherwise}, \end{cases}\\ St^{S,R}y_k&=\begin{cases} y_k, & S=\emptyset ,\ R=(0),\\ y_k^{p^i},&S=\emptyset,\ R=\Delta_i, \\ 0, &\text{otherwise}, \end{cases} \end{align*} for $k=1,\ \! 2, \ldots, n$ (see Steenrod-Epstein \cite{13}, Sum \cite{11}). Since this action commutes with the action of $GL_n$, it induces an action of $\mathcal A_p$ on $P_n^{GL_n}$. The action of $St^{S,R}$ on the modular invariants of subgroups of the general linear group has been partially studied by many authors. This action for $S=\emptyset,\ \! R=(r)$ was explicitly determined by Smith-Switzer \cite{12}, Hung-Minh \cite{5}, Kechagias \cite{4}, Sum \cite{11,20,21,22}. Smith-Switzer \cite{12}, Wilkerson \cite{14} have studied the action of $St^{\Delta_i}$ on the Dickson invariants.
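As a simple illustration of how the Cartan formula combines with these relations, one has, for any $1\le j,k\le n$,
$$St^{\Delta_1}(y_jy_k)=St^{\Delta_1}(y_j)\,y_k+y_j\,St^{\Delta_1}(y_k)=y_j^py_k+y_jy_k^p,$$
since $S=\emptyset$ forces $S_1=S_2=\emptyset$ and the only decompositions of $R=\Delta_1$ are $\Delta_1+(0)$ and $(0)+\Delta_1$.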
The purpose of the paper is to compute the action of the Steenrod-Milnor operations on the generators of $P_2^{GL_2}$. More precisely, we explicitly determine the action of $St^{(i,j)}$ on the Dickson invariants $Q_{2,0}$ and $Q_{2,1}$. The analogous results for $p = 2$ have been announced in \cite{9}. The rest of the paper contains two sections. In Section 2, we recall some needed facts from invariant theory. In Section 3, we compute the action of the Steenrod-Milnor operations on the Dickson invariants. \section{Preliminaries} \begin{defn} Let $(e_{k+1},\ldots,e_n),\ 0 \leq k < n$, be a sequence of non-negative integers. Following Dickson \cite{1} and M\`ui \cite{2}, we define $$ [k;e_{k+1}, \ldots, e_n] = \frac 1{k!} \begin{vmatrix} x_1&\cdots &x_n\\ \vdots&\cdots &\vdots\\ x_1&\cdots &x_n\\ y_1^{p^{e_{k+1}}}&\cdots &y_n^{p^{e_{k+1}}}\\ \vdots&\cdots &\vdots\\ y_1^{p^{e_n}} & \cdots & y_n^{p^{e_n}} \end{vmatrix}, $$ in which there are exactly $k$ rows of $(x_1 \ldots x_n)$. (See M\`ui \cite{2} for the precise meaning of determinants in a commutative graded algebra.) For $k=0$, we write $$[0;e_{1}, \ldots, e_n] = [e_{1}, \ldots, e_n] =\det(y_i^{p^{e_j}}).$$ In particular, we set \begin{align*} L_{n,s}&=[0,1, \ldots,\hat s,\ldots, n],\ \! 0\le s \le n,\\ L_n &=L_{n,n}=[0,1,\ldots,n-1]. \end{align*} Each $[k;e_{k+1}, \ldots, e_n]$ is an invariant of the special linear group $SL_n$, and $[e_{1}, \ldots, e_n]$ is divisible by $L_n$. The Dickson invariants $Q_{n,s},\ 0 \le s < n$, are then defined by $$ Q_{n,s}=L_{n,s}/L_n.$$ Here, by convention, $L_0=[\emptyset] =1.$ \end{defn} Now we recall the following, which will be used in the next section.
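The construction can be checked by machine for small parameters. The sketch below, our own code for $p=3$ and $n=2$, forms the brackets $[e_1,e_2]$, divides out $L_2$ to obtain $Q_{2,0}$ and $Q_{2,1}$, and verifies invariance under generators of $GL_2(\mathbb F_3)$, as well as the classical identity $Q_{n,0}=L_n^{p-1}$:

```python
# Machine check (our sketch, p = 3, n = 2): build L_2, L_{2,0}, L_{2,1} from
# [e_1, e_2] = det(y_i^{p^{e_j}}), form Q_{2,s} = L_{2,s}/L_2, and verify
# GL_2(F_p)-invariance on a generating set of matrices.
from sympy import symbols, Poly

p = 3
y1, y2 = symbols('y1 y2')

def bracket(e1, e2):
    # [e_1, e_2] = determinant of the 2x2 matrix (y_i^{p^{e_j}})
    expr = y1**(p**e1) * y2**(p**e2) - y1**(p**e2) * y2**(p**e1)
    return Poly(expr, y1, y2, modulus=p)

L2  = bracket(0, 1)              # L_2 = [0, 1]
Q20 = bracket(1, 2).exquo(L2)    # Q_{2,0} = L_{2,0}/L_2 = [1, 2]/[0, 1]
Q21 = bracket(0, 2).exquo(L2)    # Q_{2,1} = L_{2,1}/L_2 = [0, 2]/[0, 1]

def transformed(Q, a, b, c, d):
    # image of Q under the linear substitution y1 -> a*y1 + c*y2, y2 -> b*y1 + d*y2
    img = Q.as_expr().xreplace({y1: a * y1 + c * y2, y2: b * y1 + d * y2})
    return Poly(img, y1, y2, modulus=p)
```

The exact divisions succeed (so $L_2$ indeed divides $[1,2]$ and $[0,2]$), and the resulting polynomials have degrees $p^2-1$ and $p^2-p$, as expected.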
\begin{lem}[Sum \cite{20}] The actions of $St^R$ on $L_2,L_{2,0},L_{2,1}$ for $\ell(R)=2$ are respectively given in the following tables: {\begin{tabular}{cllcl} $(i,j)$ &\vdvh\ $St^{(i,j)}L_2 $ &\vdvh\ & $(i,j)$ &\vdvh\ $St^{(i,j)}L_2 $\cr \hline $(0,0)$ &\vdvh\ $L_2 $ &\vdvh\ & $(0,1) $&\vdvh\ $-L_2Q_{2,0}$ \cr $(0,p) $&\vdvh\ $L_2(Q_{2,1}^{p+1}-Q_{2,0}^p)$ & \vdvh\ & $(0,p+1)$ &\vdvh\ $L_2Q_{2,0}^{p+1}$\cr $(1,p) $&\vdvh\ $L_2Q_{2,0}Q_{2,1}^p$ &\vdvh\ & $(p,0)$ &\vdvh\ $L_2Q_{2,1}$\cr $(p+1,0)$ &\vdvh\ $L_2Q_{2,0}$ &\vdvh\ &\text{otherwise}&\vdvh\ 0,\cr \end{tabular}} {\begin{tabular}{cllcl} $(i,j)$ &\vdvh\ $St^{(i,j)}L_{2,0} $ &\vdvh\ & $(i,j)$ &\vdvh\ $St^{(i,j)}L_{2,0} $\cr \hline $(0,0)$ &\vdvh\ $L_2Q_{2,0}$ &\vdvh\ &$(0,p)$ &\vdvh\ $-L_2Q_{2,0}^{p+1} $ \cr $(0,p^2)$ &\vdvh\ $L_2(Q_{2,0}Q_{2,1}^{p^2+p}-Q_{2,0}^{p^2+1})$ &\vdvh\ & $(0,p^2+p)$ &\vdvh\ $L_2Q_{2,0}^{p^2+p+1}$ \cr $(p,p^2) $&\vdvh\ $L_2Q_{2,0}^{p+1}Q_{2,1}^{p^2}$ &\vdvh\ & $(p^2,0)$ &\vdvh\ $L_2Q_{2,0}Q_{2,1}^p$ \cr $(p^2+p,0)$ &\vdvh\ $L_2Q_{2,0}^{p+1}$ &\vdvh\ & \text{otherwise} &\vdvh\ 0,\cr \end{tabular}} {\begin{tabular}{cl} $(i,j)$ &\vdvh\ $St^{(i,j)}L_{2,1}$ \cr \hline $(0,0) $&\vdvh\ $L_2Q_{2,1}$ \cr $(0,p^2)$ &\vdvh\ $L_2(Q_{2,1}^{p^2+p+1}-Q_{2,0}^pQ_{2,1}^{p^2}-Q_{2,0}^{p^2}Q_{2,1})$\cr $(0,p^2+1)$ &\vdvh\ $L_2Q_{2,0}^{p+1}Q_{2,1}^{p^2}$ \cr $(1,0)$ &\vdvh\ $L_2Q_{2,0}$\cr $(1,p^2)$ &\vdvh\ $L_2(Q_{2,0}Q_{2,1}^{p^2+p}-Q_{2,0}^{p^2+1})$ \cr $(p^2,0)$ &\vdvh\ $L_2(Q_{2,1}^{p+1}-Q_{2,0}^p)$ \cr $(p^2,1)$ &\vdvh\ $L_2Q_{2,0}^{p+1}$ \cr $(p^2+1,0)$ &\vdvh\ $L_2Q_{2,0}Q_{2,1}^{p}$\cr \text{otherwise} &\vdvh\ 0.\cr \end{tabular}} \end{lem} \eject \section{Main Results} First of all, we recall the following. \begin{prop}[Hung-Minh \cite{5}, Sum \cite {11}] Let $i$ be a nonnegative integer. 
We have \begin{align*} &St^{(i,0)}Q_{2,0}= \begin{cases} Q_{2,0}^p, &i=p^2-1,\\ (-1)^k\binom krQ_{2,0}^{r+1}Q_{2,1}^{k-r},& i = kp +r ,\ 0\le r \le k < p,\\ 0, &\text{ otherwise},\end{cases}\\ &St^{(i,0)}Q_{2,1}= \begin{cases} Q_{2,1}^p, &i=p^2-p,\\ (-1)^k\binom {k+1}rQ_{2,0}^{r}Q_{2,1}^{k+1-r},& i = kp +r,\ r \le k+1\le p,\\ 0, &\text{ otherwise}.\end{cases} \end{align*} \end{prop} \begin{thm}[Sum \cite{20}] Let $j$ be a nonnegative integer and $s=0,1$. Then we have $$St^{(0,j)}Q_{2,s}= \begin{cases} Q_{2,0}^{r}Q_{2,s}\sum_{i=0}^k(-1)^i \binom{k+s}{i +s}\binom {r+i}iQ_{2,0}^{(k-i)p}Q_{2,1}^{i(p+1)},\\ \hskip2.2cm \text{ if } j=kp+r \text{ with } 0\le k,\ r <p,\\ 0, \hskip2cm \text{otherwise}.\end{cases}$$ \end{thm} Our new results are the actions of $St^{(i,j)}$ on $Q_{2,0}$ and $Q_{2,1}$ with $i>0.$ \begin{thm}\label{dl33} Let $i, j$ be nonnegative integers. We have {\rm(i)} If $0\le k<i<p$ and $0\le r <p$ then $$St^{(i,kp+r)}Q_{2,0}=0.$$ {\rm (ii)} If $0\le i <p$ then $$St^{(i,ip)}Q_{2,0}=(-1)^iQ_{2,0}^{i+1}Q_{2,1}^{ip}.$$ {\rm (iii)} $$ St^{(1,j)}Q_{2,0}= \begin{cases} -Q_{2,0}^{r+2}Q_{2,1}^p\sum_{i=0}^{k-1}(-1)^i \big[ k \binom {k-1}i \binom {r+i+1}{i+1}\\ \hskip2.5cm - \binom{k-2}{i-1}\binom{r+i}{i+1}\big] Q_{2,0}^{(k-1-i)p}Q_{2,1}^{i(p+1)},\\ \hskip 2.2cm\text{ if } j = kp+r \text{ with } 0\le k,r<p,\\ 0, \hskip2cm \text{otherwise}.\end{cases}$$ {\rm(iv)} If $k+1<i<p $ and $0\le r<p$ then $$St^{(i,kp+r)}Q_{2,1}=0.$$ {\rm(v)} If $0\le k <p$ then $$St^{(k+1,kp)}Q_{2,1}= (-1)^kQ_{2,0}^{k+1}Q_{2,1}^{kp}.$$ {\rm(vi)} $$St^{(1,j)}Q_{2,1}= \begin{cases} (k+1)Q_{2,0}^{r+1}\sum_{i=0}^k(-1)^i \binom {k}{i} \binom {r+i}iQ_{2,0}^{(k-i)p}Q_{2,1}^{i(p+1)}\\ \hskip2.3cm \text{if $j=kp+r$ with $0\le k,\ r<p$,}\\ 0, \hskip2cm \text{otherwise}.\end{cases}$$ \end{thm} \begin{proof} We prove i) by induction on $k$. Let $k=0$. 
For $r=0$, using the Cartan formula we have $St^{(i,0)}Q_{2,0}=0.$ For $r>0$, applying the Cartan formula and the inductive hypothesis, one gets $$ St^{(i,r)}Q_{2,0}= Q_{2,0}St^{(i,r-1)}Q_{2,0}=0.$$ Part i) of the theorem holds for $k=0$. Suppose it is true for $0\le k<p-1$. For $i > k+1$, we have \begin{align*} St^{(i,(k+1)p)}Q_{2,0}&=Q_{2,0} St^{(i,kp+p-1)}Q_{2,0} +(Q_{2,0}^p-Q_{2,1}^{p+1})St^{(i,kp)}Q_{2,0}\\ &\qquad -Q_{2,0}^{p+1}St^{(i,(k-1)p+p-1)}Q_{2,0} -Q_{2,0}Q_{2,1}^pSt^{(i-1,kp)}Q_{2,0}\\ &=0.\end{align*} Part i) of the theorem holds for $i>k+1$ and $r=0$. Suppose it is true for $i > k+1$ and $0\le r<p-1$. Then using the inductive hypothesis we have \begin{align*} St^{(i,(k+1)p+r+1)}Q_{2,0}&=Q_{2,0} St^{(i,(k+1)p+r)}Q_{2,0} +(Q_{2,0}^p-Q_{2,1}^{p+1})St^{(i,kp+r+1)}Q_{2,0}\\ &\qquad -Q_{2,0}^{p+1}St^{(i,kp+r)}Q_{2,0} -Q_{2,0}Q_{2,1}^pSt^{(i-1,kp+r+1)}Q_{2,0}\\ &=0.\end{align*} Part i) of the theorem is proved. Now we prove Part ii). Obviously, it is true for $i=0$. Suppose $i>0$ and Part ii) holds for $i-1$. Using the Cartan formula and the inductive hypothesis, we have \begin{align*} St^{(i,ip)}Q_{2,0} &= - Q_{2,0}Q_{2,1}^p St^{(i-1, (i-1)p)}Q_{2,0}\\ &= -(-1)^{i-1} Q_{2,0}Q_{2,1}^p Q_{2,0}^i Q_{2,1}^{(i-1)p} \\ &=(-1)^{i} Q_{2,0}^{i+1}Q_{2,1}^{ip}.\end{align*} Now we prove Part iii). It is well known that the Steenrod-Milnor operations are stable; that means $St^{(1,j)}x = 0$ for any homogeneous $x \in P_2$ with $\deg x < j$ (see M\`ui \cite{2}). Since $\deg Q_{2,0} = p^2-1 <p^2$, we have $St^{(1,j)}Q_{2,0} = 0$ for $j \geq p^2$. Suppose that $j < p^2$. Then using the $p$-adic expansion of $j$, we have $$ j= kp + r \ \text{ for } 0 \le k, r <p.$$ Now we prove the theorem by double induction on $(k,r)$. If $k=0$, then using Part i) of Theorem \ref{dl33} we get $St^{(1,r)}Q_{2,0}=0$, so the theorem is true for $k=0$.
If $r=p-1$, then $ \binom {r+i+1}{i+1}= \binom{r+i}{i+1} =0$ in $\mathbb F_p$, so we obtain $$St^{(1,kp+p-1)}Q_{2,0}=0.$$ If $r=0$, then the formula in the theorem becomes $$St^{(1,kp)}Q_{2,0}= -k Q_{2,0}^2Q_{2,1}^p(Q_{2,0}^p-Q_{2,1}^{p+1})^{k-1}.$$ We prove this formula by induction on $k$. Suppose this formula holds for $0\le k<p-1$. For $r=0$, we have \begin{align*} St^{(1,(k+1)p)}Q_{2,0}&= (Q_{2,0}^p-Q_{2,1}^{p+1}) St^{(1,kp)}Q_{2,0}-Q_{2,0}Q_{2,1}^pSt^{(0,kp)}Q_{2,0}\\ &=-kQ_{2,0}^2Q_{2,1}^p(Q_{2,0}^p-Q_{2,1}^{p+1})^{k} -Q_{2,0}^2Q_{2,1}^p(Q_{2,0}^p-Q_{2,1}^{p+1})^{k}\\ &=-(k+1)Q_{2,0}^2Q_{2,1}^p(Q_{2,0}^p-Q_{2,1}^{p+1})^{k} . \end{align*} So the formula holds for $k+1$ and $r=0$. Suppose $0\le r<p-1$ and the formula holds for $j=(k+1)p+r$. Applying the Cartan formula and the inductive hypothesis, one gets \begin{align*} S&t^{(1,(k+1)p+r+1)}Q_{2,0}= Q_{2,0}St^{(1,(k+1)p+r)}Q_{2,0}+ (Q_{2,0}^p-Q_{2,1}^{p+1}) St^{(1,kp+r+1)}Q_{2,0}\\ &\quad - Q_{2,0}^{p+1} St^{(1,kp+r)}Q_{2,0} -Q_{2,0}Q_{2,1}^pSt^{(0,kp+r+1)}Q_{2,0} \end{align*} \begin{align*} &=-Q_{2,0}^{r+3}Q_{2,1}^p\Big(k\sum_{i=0}^{k}(-1)^i \big[ \binom {k}i \binom {r+i+1}{i+1}- \binom{k-1}{i}\binom{r+i+2}{i+1}\\ &\quad + \binom {k-1}{i-1} \binom {r+i+1}{i}- \binom{k-1}{i}\binom{r+i+1}{i+1} \big] Q_{2,0}^{(k-i)p}Q_{2,1}^{i(p+1)}\\ &\quad+ \sum_{i=0}^{k}(-1)^i \binom ki \big[ \binom {r+i+1}{i+1}-\binom{r+i+1}{i}\big] Q_{2,0}^{(k-i)p}Q_{2,1}^{i(p+1)}\\ & \quad-\sum_{i=0}^{k}(-1)^i \big[ \binom {k-1}{i-1} \binom {r+i}{i+1}+\binom{k-2}{i-1}\binom{r+i+1}{i+1}\\ &\quad + \binom {k-2}{i-2} \binom {r+i}{i}- \binom{k-2}{i-1}\binom{r+i}{i+1} \big] Q_{2,0}^{(k-i)p}Q_{2,1}^{i(p+1)}\Big)\\ &=-Q_{2,0}^{r+3}Q_{2,1}^p\sum_{i=0}^{k}(-1)^i \big[(k+1) \binom {k}i \binom {r+i+2}{i+1}\\ &\hskip3cm- \binom{k-1}{i-1}\binom{r+i+1}{i+1} \big] Q_{2,0}^{(k-i)p}Q_{2,1}^{i(p+1)}.\end{align*} Parts iv), v) and vi) are proved by the same argument as the previous one.
\end{proof} This work was supported in part by a grant of the Quy Nhon University Research Project.

Department of Mathematics, Quy Nhon University

170 An Duong Vuong, Quy Nhon, Binh Dinh, Viet Nam \end{document}
\begin{document} \title{Occupation time fluctuation limits of infinite variance equilibrium branching systems} \begin{abstract} We establish limit theorems for the fluctuations of the rescaled occupation time of a $(d,\alpha,\beta)$-branching particle system. It consists of particles moving according to a symmetric $\alpha$-stable motion in $\mathbb{R}^d$. The branching law is in the domain of attraction of a (1+$\beta$)-stable law and the initial condition is an equilibrium random measure for the system (defined below). In the paper we treat separately the cases of intermediate $\alpha/\beta<d<(1+\beta)\alpha/\beta$, critical $d=(1+\beta)\alpha/\beta$ and large $d>(1+\beta)\alpha/\beta $ dimensions. In the most interesting case of intermediate dimensions we obtain a version of a fractional stable motion. The long-range dependence structure of this process is also studied. Contrary to this case, limit processes in critical and large dimensions have independent increments. \end{abstract} AMS subject classification: primary 60F17, 60J80, secondary 60G18, 60G52 \\ Key words: Functional central limit theorem; Occupation time fluctuations; Branching particles systems; Fractional stable motion; Equilibrium measure \section{Introduction} \subsection{Branching system and occupation time fluctuations} The aim of this paper is to present some (functional) limit theorems for the occupation time fluctuation process of a branching particle system. We call a $(d, \alpha, \beta)$-branching particle system (denoted in the sequel by $N$) a set of particles moving independently according to the spherically symmetric $\alpha$-stable L\'evy motion ($0<\alpha\leq 2$) in $\mathbb{R}^d$ and splitting after exponential time (with intensity $V$) with branching law \[ p_k=\left\{ \begin{array}{cc} 0 & k=1\\ \frac{1}{1+\beta} \binom{{1+\beta}}{k}(-1)^k & k = 0,2,3,\ldots \end{array} \right. \] $( 0<\beta < 1 )$. 
This is an example of a law in the domain of attraction of a $(1+\beta)$-stable law. It has infinite variance and is critical. For $\beta=1$ it reduces to binary critical branching, which was treated in a series of papers mentioned below. The generating function of this law is \begin{equation} F(s) = s + \frac{1}{1+\beta} (1-s)^{{1+\beta}}, \quad s\in(0,1). \label{def:generating_beta} \end{equation} The particle system will be represented by an empirical measure process $(N_t)_{t \geq 0}$, i.e. for a Borel set $A$, $N_t(A)$ is the (random) number of particles in $A$ at time $t$. The initial particle distribution will be specified below. The most natural choice is a Poisson random field with homogeneous intensity, i.e. the Lebesgue measure $\lambda$. This case, which was studied in \cite{Bojdecki:2007aa} and \cite{bojdecki-2005}, is a starting point and reference for our investigation. It is known \cite{Gorostiza:1991aa} that for $\alpha/\beta<d$ such a system (denoted by $N^{Poiss}$) converges to an equilibrium distribution \begin{equation} N^{Poiss} \Rightarrow Eq \label{def:Eq} \end{equation} where $\Rightarrow$ denotes weak convergence in the space of point measures.
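As a quick sanity check (ours, not from the paper), one can confirm numerically that the weights $p_k$ above define a critical probability law with infinite variance; $\beta=0.5$ below is an arbitrary test value.

```python
# Numerical sanity check (our sketch, beta = 0.5): the weights
#   p_0 = 1/(1+beta),  p_1 = 0,  p_k = binom(1+beta, k) (-1)^k / (1+beta), k >= 2,
# are non-negative, sum to 1 (F(1) = 1), and have mean offspring 1 (criticality),
# while the truncated second factorial moment keeps growing (infinite variance).
beta = 0.5
K = 100_000                      # truncation level; p_k ~ C * k^{-(2+beta)}

probs = [1.0 / (1 + beta)]       # p_0
b = 1.0                          # running generalized binomial binom(1+beta, k)
for k in range(1, K):
    b *= (1 + beta - (k - 1)) / k
    pk = 0.0 if k == 1 else ((-1) ** k) * b / (1 + beta)
    probs.append(pk)

total = sum(probs)                                    # ~ 1
mean = sum(k * q for k, q in enumerate(probs))        # ~ 1 (critical)
m2 = sum(k * (k - 1) * q for k, q in enumerate(probs))            # grows with K
m2_small = sum(k * (k - 1) * q for k, q in enumerate(probs[:K // 10]))
```

The second factorial moment computed at truncation $K$ is markedly larger than at $K/10$, reflecting the divergence $\sum_k k(k-1)p_k=\infty$.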
The Laplace functional of the equilibrium distribution is given by \begin{equation} \mathbb{E}\exp\left\{ -\left\langle Eq,\varphi \right\rangle \right\} =\exp\left\{ \left\langle \lambda,e^{-\varphi}-1\right\rangle +V\int_{0}^{\infty}\left\langle \lambda,H\left(j\left(\cdot,s\right)\right)\right\rangle ds\right\} ,\label{eq: laplace_equilibrum}\end{equation} where \begin{equation} j\left(x,l\right):=\mathbb{E}\exp\left(-\left\langle N_{l}^{x},\varphi \right\rangle \right) \label{def: h} \end{equation} $N^x$ is the empirical process of the system starting from $x\in \mathbb{R}^d$, $H(s)=F\left(s\right)-s$, $\varphi:\mathbb{R}^d\rightarrow\mathbb{R}_{+}$, $\varphi \in \mathcal{L}^1(\mathbb{R}^d)\cap C(\mathbb{R}^d)$, and $j$ satisfies the integral equation\[ j\left(x,l\right)=\mathcal{T}_{l}e^{-\varphi}\left(x\right)+V\int_{0}^{l}\mathcal{T}_{l-s}H\left(j\left(\cdot,s\right)\right)\left(x\right)ds.\] This equation can be obtained in the same way as \cite[(2.4)]{Gorostiza:1991aa}. In this paper we consider a system $N$ starting off from $Eq$ and compare the obtained results to the ones in \cite{Bojdecki:2007aa} and \cite{bojdecki-2005}. For the process $(N_t)_{t\geq 0}$ we define the rescaled occupation time fluctuation process by \begin{equation} X_T(t) = \frac{1}{F_T} \int_0^{Tt} (N_s - \lambda) ds, \label{def:occupation_fluct} \end{equation} where $F_T$ is an appropriate normalization and $T$ is a scaling parameter which accelerates time. The object of our investigation is the limit of $X_T$ as $T$ tends to $+\infty$: \begin{equation} X_T \Rightarrow X. \label{lim:main} \end{equation} For the time being we are not very rigorous and do not specify the type of convergence.
\subsection{Results and proof techniques} In the proofs we will rely on methods presented in \cite{bojdecki-2005}, \cite{Bojdecki:2007aa} and \cite{Milos:2007aa}.\\ Although the process $X_T$ is signed-measure-valued, it is convenient to regard it as a process with values in the space $\mathcal{S}'(\mathbb{R}^d)$ of tempered distributions, which is dual to the space $\mathcal{S}(\mathbb{R}^d)$ of smooth and rapidly decreasing functions. We denote the duality in this space by $\ddp{\cdot}{\cdot}$. In this space one may employ the space-time method introduced in \cite{Bojdecki:1986aa}, which together with Mitoma's theorem constitutes a powerful technique for proving weak functional convergence.\\ Three kinds of convergence are used. The convergence of finite-dimensional distributions is denoted by $\Rightarrow_f$. For a continuous $\mathcal{S}'(\mathbb{R}^d)$-valued process $X = (X_t)_{t\geq 0}$ and any $\tau>0$ one can define an $\mathcal{S}'(\mathbb{R}^{d+1})$-valued random variable \begin{equation} \ddp{\tilde{X}}{\Phi} = \intc{\tau} \ddp{X_s}{\Phi(\cdot, s)} ds, \: \Phi \in \mathcal{S}(\mathbb{R}^{d+1}). \label{def: space-time} \end{equation} If $\tilde{X}_n\rightarrow \tilde{X}$ in distribution for any $\tau>0$, we say that convergence in the space-time sense holds and denote this fact by $\Rightarrow_i$. Finally, we consider the functional weak convergence denoted by $X_n\Rightarrow_c X$. It holds if for any $\tau>0$ the processes $X_n = (X_n(t))_{t\in [0, \tau]}$ converge to $X = (X(t))_{t\in [0, \tau]}$ weakly in $C([0, \tau], \mathcal{S}'(\mathbb{R}^d))$ (in the sequel, without loss of generality, we assume $\tau = 1$). It is known that $\Rightarrow_i$ and $\Rightarrow_f$ do not imply each other, but either of them together with tightness implies $\Rightarrow_c$ \cite{Bojdecki:1986aa}. Conversely, $\Rightarrow_c$ implies both $\Rightarrow_i$ and $\Rightarrow_f$.
\\ The presentation of the results naturally splits into parts, corresponding to intermediate dimensions \[ \frac{\alpha}{\beta} < d < \frac{\alpha ({1+\beta})}{\beta}, \] the critical dimension \[ d = \frac{\alpha ({1+\beta})}{\beta}, \] and large dimensions \[ d > \frac{\alpha ({1+\beta})}{\beta}, \] respectively.\\ In the first case of intermediate dimensions we obtain weak functional convergence to a process $X$ of the form $X=K\eta\lambda$, where $K$ is a constant and $\eta$ is a stable process which is in a sense a stable (non-Gaussian) analogue of fractional Brownian motion. So we see that in this case the limit has a very simple spatial structure, whereas its temporal structure is complicated. It is also worthwhile to point out that $\eta$ has stationary increments (unlike the corresponding process in \cite{Bojdecki:2007aa}) and is a heavy-tailed process with long-range dependence. This dependence is described in Section \ref{sec:results} in terms of the dependence exponent; roughly speaking, it means that the ``dependence'' decays polynomially. The cases of critical and large dimensions differ substantially; one can prove only finite-dimensional distributions and space-time convergences. In both cases we obtain processes with independent increments, and the limit for large dimensions is truly $\mathcal{S}'(\mathbb{R}^d)$-valued. These processes are not continuous, hence it is not possible to obtain functional convergence. \subsection{A survey of results} This paper is a part of a larger programme carried out by Bojdecki et al. and recently by Milos. It seems useful to present it here in a compact way. This small survey is meant to be neither exhaustive nor very strict; it aims only to give the reader a glimpse of the whole picture. In the tables below we gather the limits of occupation time fluctuations under time rescaling (as defined by (\ref{lim:main})), the type of convergence and the normalizing factor in different settings.
The structure of the tables reflects the dependence on the dimension of the space and on the starting distribution.\\ \begin{center} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{Table 1. Systems with finite variance branching law}\\ \hline & Poisson & Equilibrium \tabularnewline \hline $\begin{array}{cc} \alpha <d <2\alpha\\ \textit{intermediate}\end{array}$ & $\begin{array}{ccc} K\cdot\textit{sub-frac-BM}\cdot\lambda \\ \text{functional } \: T^{\frac{3-d/\alpha}{2}} \\ \text{\cite{Bojdecki:2006ab} and \cite{Milos:2007aa}} \end{array}$ & $\begin{array}{ccc} K\cdot\textit{frac-BM}\cdot\lambda \\ \text{functional } \: T^{\frac{3-d/\alpha}{2}} \\ \text{\cite{Milos:2007aa}} \end{array}$ \tabularnewline \hline $\begin{array}{cc} d = 2\alpha\\ \textit{critical}\end{array}$& $\begin{array}{ccc} K\cdot\textit{BM}\cdot\lambda \\ \text{functional } \: (T \log T)^{\frac{1}{2}} \\ \text{\cite{Bojdecki:2006aa} and \cite{Milos:2007ab}} \end{array}$ & $\begin{array}{cc} \textit{the same} \\ \text{\cite{Milos:2007ab}} \end{array}$ \tabularnewline \hline $\begin{array}{cc} d > 2\alpha\\ \textit{large}\end{array}$& $\begin{array}{ccc} \mathcal{S}'(\mathbb{R}^d)\textit{-BM} \\ \text{functional } \: T^{\frac{1}{2}} \\ \text{\cite{Bojdecki:2006aa} and \cite{Milos:2007ab}} \end{array}$ & $\begin{array}{cc} \textit{the same*} \\ \text{\cite{Milos:2007ab}} \end{array}$ \tabularnewline \hline \end{tabular} \end{center} * - due to technical difficulties, functional convergence is proved only for branching laws with a finite fourth moment.\\ $K$ denotes a generic constant \\ \textit{BM} - standard real-valued Brownian motion\\ \textit{sub-frac-BM} - sub-fractional Brownian motion (i.e. centered Gaussian process with covariance function $s^h + t^h -\frac{1}{2}[(s+t)^h + |s-t|^h]$) \\ \textit{frac-BM} - fractional Brownian motion (i.e.
centered Gaussian process with covariance function $\frac{1}{2}[s^h + t^h - |s-t|^h]$)\\ $\mathcal{S}'(\mathbb{R}^d)\textit{-BM}$ - centered Gaussian $\mathcal{S}'(\mathbb{R}^d)$-valued process with covariance functional \begin{equation} Cov\left( \ddp{X_s}{\varphi_1}, \ddp{X_t}{\varphi_2} \right) = (s\wedge t) \frac{1}{2\pi} \int_{\mathbb{R}^d} \left( \frac{2}{|z|^\alpha} + \frac{Vm}{2|z|^{2\alpha}} \right) \widehat{\varphi_1}(z) \overline{\widehat{\varphi_2}(z)} dz \nonumber, \end{equation} where $\varphi_1,\varphi_2 \in\mathcal{S}\left(\mathbb{R}^{d}\right)$ and $m$ depends on the branching law. Papers \cite{Bojdecki:2006ab} and \cite{Bojdecki:2006aa} also contain results for systems without branching. \begin{center} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{Table 2. Systems with infinite variance branching law - generating function (\ref{def:generating_beta})}\\ \hline & Poisson & Equilibrium \tabularnewline \hline $\begin{array}{cc} \frac{\alpha}{\beta} <d < \frac{\alpha({1+\beta})}{\beta}\\ \textit{intermediate}\end{array}$ & $\begin{array}{cccc} K\cdot\textit{sub-frac-SM}\cdot\lambda \\ \text{functional } \\ F_{T} = T^{(2+\beta - \frac{d}{\alpha}\beta)/({1+\beta})} \\ \text{\cite{Bojdecki:2007aa}} \end{array}$ & $\begin{array}{cccc} K\cdot\textit{frac-SM}\cdot\lambda \\ \text{functional } \\ F_{T} = T^{(2+\beta - \frac{d}{\alpha}\beta)/({1+\beta})} \\ \text{this paper} \end{array}$ \tabularnewline \hline $\begin{array}{cc} d = \frac{\alpha({1+\beta})}{\beta}\\ \textit{critical}\end{array}$& $\begin{array}{cccc} K\cdot\textit{SM}\cdot\lambda \\ \text{fin-dims and space-time } \\ (T \log T)^{\frac{1}{{1+\beta}}} \\ \text{\cite{bojdecki-2005}} \end{array}$ & $\begin{array}{cc} \textit{the same} \\ \text{this paper} \end{array}$ \tabularnewline \hline $\begin{array}{cc} d > \frac{\alpha({1+\beta})}{\beta} \\ \textit{large}\end{array}$& $\begin{array}{cccc} \mathcal{S}'(\mathbb{R}^d)\textit{-SM} \\ \text{fin-dims and space-time } \\ T^{\frac{1}{{1+\beta}}} \\
\text{\cite{bojdecki-2005}} \end{array}$ & $\begin{array}{cc} \textit{the same} \\ \text{this paper} \end{array}$ \tabularnewline \hline \end{tabular} \end{center} Here,\\ \textit{sub-frac-SM} - ``sub-fractional'' stable motion, defined by (\ref{def:eta1})\\ \textit{frac-SM} - ``fractional'' stable motion, defined by (\ref{def:eta})\\ \textit{SM} - stable motion with independent increments, with finite-dimensional distributions given by (\ref{def:lim_critical})\\ $\mathcal{S}'(\mathbb{R}^d)$\textit{-SM} - $\mathcal{S}'(\mathbb{R}^d)$-valued stable motion with finite-dimensional distributions given by (\ref{def:lim_big}).\\ Let us first notice that the results for the cases of finite and infinite variance are in a sense similar. The processes in Table 1 are Gaussian counterparts of the stable processes in Table 2. Informally speaking, the finite variance branching law is a limit of the laws given by (\ref{def:generating_beta}), hence one can observe similar phenomena in both cases. The case of intermediate dimensions is the most interesting. The limits have a similar spatial structure and a complicated temporal one with the long-range dependence property. The dependence on the starting distribution is intriguing, since the $Eq$ measure is the limit for a Poisson-starting system (\ref{def:Eq}) and has so far not acquired any intuitive explanation. Both limits have the long-range dependence property, but for the equilibrium-starting system this dependence is stronger.\\ A remarkable feature of the limit process in the equilibrium case is that it can be decomposed into a sum of two independent $({1+\beta})$-stable processes, one of them being exactly the limit process in the Poisson case. This decomposition is an analogue of the one studied in \cite{Dzhaparidze:2004aa} for fractional Brownian motion. It should be noted that the process obtained in this case is a stable analogue of fractional Brownian motion: it is self-similar and has stationary increments.
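As a numeric aside (ours, not from the paper): the standard fBm kernel $\frac{1}{2}(s^{h}+t^{h}-|s-t|^{h})$ and the sub-fBm kernel $s^{h}+t^{h}-\frac{1}{2}((s+t)^{h}+|s-t|^{h})$ can be checked on a grid for positive semidefiniteness, and for the fact that only the former has stationary increments; the exponent $h$ below is a hypothetical value in $(0,2)$.

```python
# Numeric cross-check (our sketch): on a time grid, both Gaussian kernels are
# positive semidefinite; fBm increments are stationary with
# Var(B_t - B_s) = |t - s|^h, while sub-fBm increments are not stationary.
import numpy as np

h = 1.4                                   # hypothetical exponent, 0 < h < 2
t = np.linspace(0.1, 2.0, 40)
S, T = np.meshgrid(t, t, indexing='ij')

cov_fbm = 0.5 * (S**h + T**h - np.abs(S - T)**h)
cov_sub = S**h + T**h - 0.5 * ((S + T)**h + np.abs(S - T)**h)

def min_eig(C):
    # smallest eigenvalue; ~0 up to rounding for a PSD kernel matrix
    return np.linalg.eigvalsh(C).min()

def incr_var(C, i, j):
    # Var(X_{t_j} - X_{t_i}) read off the covariance matrix
    return C[j, j] - 2.0 * C[i, j] + C[i, i]
```

Comparing `incr_var` for two pairs with the same lag shows the stationarity of fBm increments and its failure for sub-fBm.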
Processes with these properties were discussed in \cite[Chapter 7]{Samorodnitsky:1994aa} (see also Remark \ref{rem:Hssi}). This fact makes the analogies between the infinite-variance and finite-variance cases even stronger (recall that the limit process in the case of the finite variance branching law is fractional Brownian motion). The cases of critical and large dimensions are less complicated. The limits have independent increments and, for large dimensions, a complicated spatial structure going beyond the space of measures. The qualitative change of the type of the limits with the dimension can be, partially, explained by the recurrence and transience properties of the underlying $\alpha$-stable L\'evy motion. Further results on the fluctuations of the occupation time can be found in \cite{bojdecki-2006}, \cite{bojdecki-2007-12}, \cite{Bojdecki:2007ab}, where high-density limits and systems with inhomogeneous starting distributions are studied. One should also mention \cite{Birkner:aa} and \cite{Birkner:2005aa}, where similar problems are considered in a discrete setting (the lattice $\mathbb{Z}^d$).\\ Although the proofs in this paper rely mostly on the schema and methods used in \cite{Bojdecki:2007aa}, \cite{bojdecki-2005} and \cite{Milos:2007aa}, we had to overcome some new technical difficulties which emerged in the study of the additional terms arising in the analysis of the equilibrium-starting system. \section{Results} \label{sec:results} By $\mathcal{T}$ we denote the semigroup of the $\alpha$-stable motion and by $p_t$ its transition density, i.e., \begin{equation} \mathcal{T}_t f(x) = (p_t \ast f)(x). \label{def:T} \end{equation} Let $M$ be an independently scattered random $({1+\beta})$-stable measure on $\mathbb{R}^d \times \mathbb{R}_+$ with the Lebesgue control measure.
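The $(1+\beta)$-stable scaling behind such a measure can be made concrete (our check, not from the paper, using the sign convention of the characteristic function (\ref{def:lim_critical})): at the level of characteristic functions, summing $n$ i.i.d. copies amounts to rescaling the argument by $n^{1/(1+\beta)}$.

```python
# Quick check (ours): for the totally skewed (1+beta)-stable characteristic
# function phi(z) = exp(-|z|^a (1 - i sgn(z) tan(pi a / 2))), a = 1 + beta,
# stability reads phi(z)^n == phi(n^{1/a} z); also |phi| <= 1, as it must be.
import cmath
import math

beta = 0.5
a = 1 + beta

def phi(z):
    if z == 0:
        return 1.0 + 0.0j
    sgn = 1.0 if z > 0 else -1.0
    return cmath.exp(-abs(z)**a * (1 - 1j * sgn * math.tan(math.pi * a / 2)))

# pairs (z, n) on which to test the scaling identity
checks = [(z, n) for z in (-2.0, -0.3, 0.7, 1.9) for n in (2, 5, 10)]
```

The same identity written for the limit process gives its $1/(1+\beta)$-self-similarity in the independent-increment cases.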
More precisely, for a Borel set $A$, $M(A)$ is a $({1+\beta})$-stable variable with characteristic function \[ \exp \left\lbrace -\lambda(A) |z|^{{1+\beta}} \left( 1 - i \text{sgn}(z) \tan \frac{\pi}{2}({1+\beta})\right)\right\rbrace, \] variables on disjoint sets are independent, and $M$ is $\sigma$-additive a.s.\\ We define two stable processes: \begin{equation} \eta^{1}_{t} = \int_{\mathbb{R}^{d+1}} \rbr{\mathbf{1}_{[0,t]}(r) \int_{r}^{t}p_{u-r}(x)du} M(dx, dr) \label{def:eta1} \end{equation} and \begin{equation} \eta^{2}_{t} =\int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{t} p_{s+l}(x) ds \right) M(dx,dl). \label{def:eta2} \end{equation} It can be checked that for intermediate dimensions both processes are well-defined in the sense given in \cite[Chapter 3]{Samorodnitsky:1994aa}.\\ Assume now that $\eta^1$ and $\eta^2$ are independent, and define \begin{equation} \eta := \eta^1 + \eta^2. \label{def:eta} \end{equation} This process plays a fundamental role in this paper. A detailed presentation of its properties is postponed to Theorem \ref{thm:eta_hssi} and Proposition \ref{prep:eta_kappa}. Now we give a series of three theorems which are the main results of the paper.\\ In these theorems $X_{T}$ is the rescaled occupation time fluctuation process defined by (\ref{def:occupation_fluct}) for a system $N$ starting from the equilibrium distribution (\ref{eq: laplace_equilibrum}). \begin{thm} \label{thm:intermediate} Assume $\frac{\alpha}{\beta} < d < \frac{\alpha ({1+\beta})}{\beta}$ and $F_{T} = T^{(2+\beta - \frac{d}{\alpha}\beta)/({1+\beta})}$. Then \[ X_{T} \Rightarrow_c K\eta\lambda , \] where \[ K = \rbr{-\frac{V}{{1+\beta}}\cos \frac{\pi}{2}({1+\beta})}^{1+\beta}. \] \end{thm} \begin{rem} As announced in the Introduction, the process $\eta$ consists of two independent summands (\ref{def:eta}), where $\eta^1$ is the process that occurs in the limit for the Poisson system (see \cite{Bojdecki:2007aa}).
An explanation of the reason for this structure of $\eta$, as well as the interpretation of $\eta^2$, requires further study. If $\beta=1$ and $\eta$ is a fractional Brownian motion on the whole line, then $\eta^1 = \frac{\eta_t + \eta_{-t}}{2}$ (sub-fractional Brownian motion) and $\eta^2 = \frac{\eta_t - \eta_{-t}}{2}$ (see \cite{Dzhaparidze:2004aa}). \end{rem} \begin{thm}\label{thm:critical} Assume $d = \frac{\alpha ({1+\beta})}{\beta}$ and $F_{T} = (T \log T)^{\frac{1}{{1+\beta}}}$. Then \[ X_T \Rightarrow_i K \lambda \xi \text{ and } X_T \Rightarrow_f K \lambda \xi \text{ as } T \rightarrow +\infty, \] where $\xi$ is a $({1+\beta})$-stable process with stationary independent increments and characteristic function \begin{equation} \ev{\exp(iz\xi_t)} = \exp\left\lbrace -t|z|^{{1+\beta}} \left( 1-i \text{sgn}(z) \tan\frac{\pi}{2}({1+\beta})\right)\right\rbrace, \quad z\in \mathbb{R},\ t\geq 0, \label{def:lim_critical} \end{equation} and \[ K = \left(-V \cos\frac{\pi}{2}({1+\beta})\int_{\mathbb{R}^d} \left( \int_0^1 p_r(x) dr\right)^\beta p_1(x) dx \right)^{\frac{1}{{1+\beta}}}. \] \end{thm} Before presenting the last theorem we introduce the potential operator corresponding to the $\alpha$-stable motion: \[ \mathcal{G}f(x) = \int_0^{+\infty}\mathcal{T}_t f(x) dt. \] \begin{thm}\label{thm:big} Assume $d > \frac{\alpha ({1+\beta})}{\beta}$ and $F_{T} = T^{\frac{1}{{1+\beta}}}$.
Then \[ X_T \Rightarrow_i X \text{ and } X_T \Rightarrow_f X \text{ as } T \rightarrow +\infty, \] where $X$ is an $\mathcal{S}'(\mathbb{R}^d)$-valued $({1+\beta})$-stable process with stationary independent increments and characteristic function \begin{eqnarray} \ev{\exp(i\ddp{X(t)}{\phi})} = \exp\left\lbrace -K^{{1+\beta}}t\int_{\mathbb{R}^d} |\mathcal{G}\phi(x)|^{{1+\beta}} \left(1 - i\,\text{sgn}(\mathcal{G}\phi(x)) \tan \frac{\pi}{2}({1+\beta}) \right) dx \right \rbrace, \nonumber \\ \phi \in \mathcal{S}(\mathbb{R}^d),\ t\geq 0, \: \label{def:lim_big} \end{eqnarray} where \[ K = \left( - \frac{V}{{1+\beta}} \cos\frac{\pi}{2}({1+\beta}) \right)^{1+\beta}. \] \end{thm} \begin{rem} The limits in the last two theorems have independent increments and are non-Gaussian, hence by \cite[Theorem 13.4]{Kallenberg:2002aa} they are not continuous. This is somewhat unexpected, since the processes $X_T$ are clearly continuous. This is also the reason why we cannot obtain functional convergence in those cases. \end{rem} The results below summarize the basic properties of $\eta$ defined by (\ref{def:eta}). \begin{thm} \label{thm:eta_hssi} $\eta$ is a continuous $({1+\beta})$-stable process. It is self-similar with exponent $H = (2+\beta - \frac{d \beta}{\alpha} )/(1+\beta)$ and has stationary increments. \end{thm} The self-similarity can be proved by a simple calculation using the characteristic function of the finite-dimensional distributions of (\ref{def:eta}), obtained using \cite[(3.2.2)]{Samorodnitsky:1994aa}. The processes $X_T$ have stationary increments, which follows directly from the fact that $N$ is stationary (since it is a Markov process starting from a stationary distribution). From Theorem \ref{thm:intermediate} we know that the process $\eta$ is a limit of $X_T$, hence it also has stationary increments. \begin{rem} \label{rem:Hssi} Processes of this type were discussed in \cite[Chapter 7]{Samorodnitsky:1994aa}. In the notation used there, $\eta$ is an $H$-ssi stable process.
Contrary to the Gaussian case, where for a given $H$ there is a unique (up to a constant) $H$-ssi process (fractional Brownian motion with Hurst parameter $H$), there are plenty of stable $H$-ssi processes. It would be interesting to check whether $\eta$ is one of the already known processes (this could draw analogies to other problems) or is a new process. Unfortunately, we do not know the answer to this question. \end{rem} We now introduce a general notation to investigate long-range dependence in the case of stable processes (see \cite{Bojdecki:2007aa}). \begin{definition} \label{def:dep-exp} Let $\eta$ be a real infinitely divisible process. For $0\leq u< v<s<t,\ T>0,\ z_1,z_2 \in \mathbb{R}$ define \begin{eqnarray} D_T(z_1,z_2;u,v,s,t)&=& |\log \ev{e^{iz_1(\eta_v - \eta_u) + iz_2(\eta_{T+t} - \eta_{T+s})}} \nonumber \\ &&- \log\ev{e^{iz_1(\eta_v - \eta_u)}} - \log\ev{e^{iz_2(\eta_{T+t} - \eta_{T+s})}} |. \end{eqnarray} The dependence exponent $\kappa$ is defined by \[ \kappa = \inf_{z_1, z_2 \in \mathbb{R}} \inf_{0\leq u<v<s<t} \sup\lbrace \gamma>0: D_T(z_1,z_2;u,v,s,t) = o(T^{-\gamma}) \text{ as } T\rightarrow +\infty \rbrace. \] \end{definition} By \cite[Theorem 2.7]{Bojdecki:2007aa} the dependence exponent $\tilde{\kappa}$ of $\eta^1$ (denoted by $\xi$ therein) is \[ \tilde{\kappa} = \left \lbrace \begin{array}{cc} \frac{d}{\alpha} & \beta > \frac{d}{d+\alpha}\\ \frac{d}{\alpha}\left( 1+\beta - \frac{d}{\alpha+d}\right) & \beta \leq \frac{d}{d+\alpha}. \end{array} \right . \] Conducting computations similar to those in \cite[Proof of Theorem 2.7]{Bojdecki:2007aa}, it can be checked that the dependence exponent of $\eta^2$ is $\kappa = \frac{d}{\alpha}-1$. It is a straightforward consequence of Definition \ref{def:dep-exp} that the dependence exponent of a sum of independent processes is the minimum of the exponents of the summands, hence we obtain \begin{prep} \label{prep:eta_kappa} Process $\eta$ has dependence exponent $\kappa = \frac{d}{\alpha}-1$.
\end{prep} \begin{rem} It is interesting to notice that the addition of the independent term $\eta_2$ arising in the limit for the equilibrium-starting system (recall Theorem \ref{thm:intermediate}) increases the long-range dependence, and the dependence exponent no longer depends on $\beta$. \end{rem} \section{Proofs} For the sake of brevity the proofs of Theorems \ref{thm:critical} and \ref{thm:big} are omitted. They are a direct combination of the methods of \cite{bojdecki-2005} and the argument employed in the proof of Theorem \ref{thm:intermediate}. The scheme below is quite general and could easily be adapted for those proofs. \subsection{Scheme of the proof} \label{sec:proof_schema} To make the proof clearer we present a general scheme. Detailed calculations are deferred to a separate section. We treat measure-valued processes as $\mathcal{S}'(\mathbb{R}^d)$-valued ones. This enables the usage of the space-time method from \cite{Bojdecki:1986aa}. Let $\tilde{X}_T$ denote the $\mathcal{S}'(\mathbb{R}^{d+1})$-valued random variable defined by (\ref{def: space-time}) corresponding to the process $X_T$. In order to prove weak convergence in $\mathcal{C}([0,1], \mathcal{S}'(\mathbb{R}^d))$ to $X$ it suffices to prove weak convergence of $\tilde{X}_T$ \begin{equation} \tilde{X}_T \Rightarrow \tilde{X} \label{lim:main2} \end{equation} and tightness of $\lbrace X_T \rbrace_{T\geq 1}$. In order to obtain (\ref{lim:main2}) it suffices to verify that \begin{equation} \ev{e^{-\ddp{\tilde{X}_T}{\Phi}}} \rightarrow \ev{e^{-\ddp{\tilde{X}}{\Phi}}} \label{lim:laplace} \end{equation} for any non-negative $\Phi\in \mathcal{S}(\mathbb{R}^{d+1})$. The tightness can be proven utilizing the Mitoma theorem \cite{Mitoma:1983aa}, which states that tightness of $\lbrace X_T\rbrace_{T \geq 1} $ in $\mathcal{C}([0,\tau], \mathcal{S}'(\mathbb{R}^d))$ is equivalent to tightness of $\ddp{X_T}{\phi}$ in $\mathcal{C}([0,\tau], \mathbb{R})$ for any $\phi \in \mathcal{S}(\mathbb{R}^d)$.
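For the reader's convenience we recall the shape of the space-time random variable; we assume here that (\ref{def: space-time}), given earlier in the paper, has the standard form used in \cite{Bojdecki:1986aa}:
\[
\ddp{\tilde{X}_T}{\Phi}=\intc{1}\ddp{X_T(t)}{\Phi(\cdot,t)}\,dt,\qquad \Phi\in\mathcal{S}(\mathbb{R}^{d+1}),
\]
so that the convergence (\ref{lim:main2}) captures the convergence of the finite-dimensional distributions of $X_T$.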
\subsubsection{Space-time convergence} \label{sec:space-time} The purpose of this subsection is the calculation of the Laplace transform and the gathering of facts used to show convergence (\ref{lim:laplace}). The scheme described below generally follows the lines of the schemes presented in \cite{Bojdecki:2007aa}, \cite{bojdecki-2005} and \cite{Milos:2007ab}, hence we omit some details.\\ To make the proof shorter we will consider $\Phi$ of the special form: \[ \Phi(x,t) = \varphi(x) \psi(t),\:\:\varphi\in \mathcal{S}(\mathbb{R}^d),\: \psi\in \mathcal{S}(\mathbb{R}^+),\: \varphi\geq 0,\: \psi \geq 0. \] We also denote \begin{equation} \varphi_T = \frac{1}{F_T}\varphi, \: \chi(t) = \int_t^1 \psi(s) ds, \: \chi_T(t) = \chi(\frac{t}{T}). \label{def: abbrev} \end{equation} We write \begin{equation} \Psi(x,t) = \varphi(x) \chi(t), \label{def: Psi_bez_T} \end{equation} \begin{equation} \Psi_T(x,t) = \varphi_T(x) \chi_T(t), \label{def: Psi} \end{equation} and note that $\Psi$ and $\Psi_T$ are positive functions. For the generating function $F$ we define $G(s) = F(1-s) - (1-s)$, so in our case \[ G(s) = \frac{s^{{1+\beta}}}{{1+\beta}}. \] The behavior of the system starting from a single particle at $x$ is described by the function \begin{equation} v_{\Psi}\left(x,r,t\right)=1-\mathbb{E}\exp\left\{ -\int_{0}^{t}\left\langle N_{s}^{x},\Psi\left(\cdot,r+s\right)\right\rangle ds\right\} \label{def: v} \end{equation} where $N_{s}^{x}$ denotes the empirical measure of the particle system with the initial condition $N_{0}^{x}=\delta_{x}$. $v_{\Psi}$ satisfies the equation \begin{equation} v_{\Psi}\left(x,r,t\right)=\int_{0}^{t}\mathcal{T}_{t-s}\left[\Psi\left(\cdot,r+t-s\right)\left(1-v_{\Psi}\left(\cdot,r+t-s,s\right)\right)-VG\left(v_{\Psi}\left(\cdot,r+t-s,s\right)\right)\right]\left(x\right)ds.\label{eq: v} \end{equation} This equation can be derived using the Feynman-Kac formula in the same way as \cite[Lemma 3.4]{Milos:2007ab}.
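The explicit form of $G$ above can be verified directly. Assuming (as is standard for critical $(1+\beta)$-branching, and consistent with the formula for $G$ obtained above) that the offspring generating function is
\[ F(s)=s+\frac{1}{1+\beta}(1-s)^{1+\beta}, \]
we indeed get
\[ G(s)=F(1-s)-(1-s)=(1-s)+\frac{s^{1+\beta}}{1+\beta}-(1-s)=\frac{s^{1+\beta}}{1+\beta}. \]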
We also define \begin{equation} n_{\Psi}\left(x,r,t\right)=\int_{0}^{t}\mathcal{T}_{t-s}\Psi\left(\cdot,r+t-s\right)\left(x\right)ds.\label{def: n} \end{equation} Since we consider only positive $\Psi$, equations (\ref{def: v}) and (\ref{eq: v}) yield \begin{equation} 0\leq v_T(x,r,t) \leq n_T(x,r,t) \label{ineq: n<v}, \end{equation} \begin{equation} v_T(x,r,t) \leq 1 \label{ineq: v<1}. \end{equation} In the sequel, for simplicity of notation, we write \begin{equation} v_{T}\left(x,r,t\right)=v_{\Psi_{T}}\left(x,r,t\right),\label{eq:v_T notacja} \end{equation} \begin{equation} n_{T}\left(x,r,t\right)=n_{\Psi_{T}}\left(x,r,t\right),\label{eq:n_T notacja} \end{equation} \begin{equation} v_{T}\left(x\right)=v_{T}\left(x,0,T\right),\label{eq: v_T_x notacja} \end{equation} \begin{equation} n_{T}\left(x\right)=n_{T}\left(x,0,T\right)\label{eq: n_T_x notacja} \end{equation} when no confusion arises.\\ \begin{fact} $n_{T}\left(x,T-s,s\right)\rightarrow0$ uniformly in $x\in\mathbb{R}^{d}$, $s\in\left[0,T\right]$ as $T\rightarrow+\infty$. \label{fact: uniformaly} \end{fact} This fact was proved in \cite[Fact 3.7]{Milos:2007ab}. From the proof therein we also obtain the inequality \begin{equation} n_T \leq \frac{c}{F_T}. \label{ineq:n_T<1/F_T} \end{equation} Following the lines of \cite[Section 3.2.2]{Milos:2007ab} we introduce the function $V_T$ \begin{equation} V_T(x,l) = 1 - \ev{\exp\rbr{\ddp{N_l^x}{\ln (1-v_T)}}} \label{def:V} \end{equation} which satisfies the equation (see \cite[(3.20)]{Milos:2007ab}) \begin{equation} V_{T}\left(x,l\right)=\mathcal{T}_{l}v_{T}\left(x\right)-V\int_{0}^{l}\mathcal{T}_{l-s}G\left(V_{T}\left(\cdot,s\right)\right)\left(x\right)ds.\label{main_equation} \end{equation} A trivial verification using (\ref{def:V}), (\ref{ineq: v<1}) and (\ref{main_equation}) provides us with \begin{equation} 0\leq V_{T}\left(x,l\right)\leq\mathcal{T}_{l}v_{T}\left(x\right),\,\forall_{x\in\mathbb{R}^{d},l\geq0}.
\label{ineq: VT<TvT} \end{equation} Next we write the Laplace transform of the occupation time fluctuation process (\ref{def:occupation_fluct}) for the system $N$ starting from the equilibrium distribution \begin{equation} \ev{e^{-\ddp{\tilde{X}_T}{\Phi}}} = e^{A(T) + B(T)}, \label{eq:laplace-eq} \end{equation} where \begin{equation} A\left(T\right)= \int_{\mathbb{R}^{d}}\int_{0}^{T}\Psi_{T}\left(x,T-s\right)v_{T}\left(x,T-s,s\right)+VG\left(v_{T}\left(x,T-s,s\right)\right)dsdx, \label{def: A} \end{equation} \begin{equation} B\left(T\right)= V\int_{0}^{+\infty}\int_{\mathbb{R}^{d}}G\left(V_{T}\left(x,t\right)\right)dxdt. \label{def: B} \end{equation} The derivation of this formula can be found in \cite[Section 3.2.2]{Milos:2007ab}. To show (\ref{lim:laplace}) we need to calculate the limits of $A(T)$ and $B(T)$. Let us notice here that $\exp(A(T))$ is the same as the right-hand side of \cite[(3.7)]{Bojdecki:2007aa}, so it is the Laplace transform of a Poisson-starting system, and therefore \[ A(T) \rightarrow \frac{V}{{1+\beta}} \int_{\mathbb{R}^d} \intc{1} \left[\int_{\mathbb{R}^d} \int_r^1 \varphi(y) \psi(s) \int_r^s p_{u-r}(x) du ds dy\right]^{{1+\beta}} dr dx. \] In Section \ref{sec:calculations} we shall prove \begin{equation} B(T) \rightarrow \frac{V}{{1+\beta}} \int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{1} \int_{\mathbb{R}^d} \varphi(y) p_{l+s}\left(x\right)\chi(s) ds dy \right)^{1+\beta} dx dl. \label{eq:goal} \end{equation} \\Finally, we are in a position to interpret the result. By the properties of the Laplace transform, the limit of (\ref{eq:laplace-eq}) splits into two independent parts, corresponding to $A(T)$ and $B(T)$ respectively. The first was investigated in \cite{Bojdecki:2007aa} and corresponds to the process $\eta^1$ defined by (\ref{def:eta1}).\\ Let us now investigate the second one. Denote the limit in (\ref{eq:goal}) by $B$.
It can be handled in the following way \[ B = \frac{V}{{1+\beta}} \int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \int_{\mathbb{R}^d} \intc{1} p_{l+s}(x) \int_s^1 \Phi(y, u) du ds dy\right)^ {1+\beta} dx dl. \] We write \[ F(x, l) = \int_{\mathbb{R}^d} \intc{1} p_{l+s}(x) \int_s^1 \Phi(y, u) du ds dy, \] then \[ B = \frac{V}{{1+\beta}} \int_{0}^{+\infty} \int_{\mathbb{R}^d} F(x,l)^{1+\beta} dx dl. \] An argument as in \cite[Corollary 3.5]{Bojdecki:2007aa} implies that the characteristic function of $\ddp{\tilde{X}}{\Phi}$ is \begin{equation} \exp\left\{-K \int_{0}^{+\infty} \int_{\mathbb{R}^d} |F(x,l)|^{1+\beta} \left(1-i\:\text{sgn} (F(x,l))\tan\frac{\pi}{2}({1+\beta}) \right)dx dl\right\}, \label{eq:tmp:char} \end{equation} where $K = \frac{V}{{1+\beta}} \cos\frac{\pi}{2}({1+\beta})$.\\ Following the lines of reasoning in \cite[end of Section 3]{Bojdecki:2007aa}, we can obtain the characteristic function of the finite-dimensional distributions (by passing to the limit with an appropriate sequence approximating $ \Phi(y, s) = \sum_j z_j \varphi_j \delta_{t_j}(s)$). It is of the form (\ref{eq:tmp:char}) with \[ F(x,l) = \sum_j z_j \ddp{\lambda}{\varphi_j} \intc{t_j} p_{l+s}(x) ds. \] Using \cite[Proposition 3.4.2]{Samorodnitsky:1994aa} one can easily infer that $\eta^2$ (recall (\ref{def:eta2})) has the same finite-dimensional distributions. \subsubsection{Tightness} \label{sec:tightness_schema} It has already been mentioned that to prove tightness it suffices to show tightness of the real-valued processes $\ddp{X_T}{\varphi}$ for any $\varphi \in \mathcal{S}(\mathbb{R}^d)$. We apply below a scheme presented in \cite{Bojdecki:2007aa}.
By \cite[Theorem 12.3]{Billingsley:1968aa} it is enough to show that there exist constants $\chi >0 $ and $\nu\geq 0$ such that \begin{equation} \mathbb{P}(|\ddp{X_T(t_2)}{\varphi} - \ddp{X_T(t_1)}{\varphi}| \geq \delta) \leq \frac{C(\varphi)}{\delta^\nu}(t_2 - t_1)^{1+\chi} \label{ineq:bill} \end{equation} holds for all $t_1,t_2\in[0,1]$, $t_1<t_2$, all $T\geq1$, and all $\delta>0$. A lemma in \cite[Section 3]{Bojdecki:2006ab} shows that each $\varphi\in \mathcal{S}(\mathbb{R}^d)$ can be decomposed as $\varphi = \varphi_1 - \varphi_2$ with $\varphi_1,\varphi_2 \in \mathcal{S}(\mathbb{R}^d)$ and $\varphi_1,\varphi_2 \geq 0$; hence from now on we will assume that $\varphi \geq 0$. The tail probability can be estimated using the inequality \cite[(3.39)]{Bojdecki:2006ab} \begin{equation} \mathbb{P}(|\ddp{\tilde{X}_T}{\varphi \otimes \phi}| \geq \delta) \leq C \delta \intc{1/\delta}(1-Re(\ev{\exp(-i \theta \ddp{\tilde{X}_T}{\varphi \otimes \phi} )})) d \theta. \label{ineq:char} \end{equation} Now an analysis similar to that in \cite{Bojdecki:2006aa} yields (\ref{ineq:bill}). Indeed, we approximate $\delta_{t_2} - \delta_{t_1}$ by $\phi \in \mathcal{S}(\mathbb{R})$ such that $\chi(t) = \int_t^1 \phi(s) ds$ fulfils \[ 0\leq \chi \leq \mathbf{1}_{[t_1,t_2]}. \] Notice that $|\ddp{\tilde{X}_T}{\varphi \otimes \phi}|$ approximates $|\ddp{X_T(t_2)}{\varphi} - \ddp{X_T(t_1)}{\varphi}|$.\\ Suppose that we know that \begin{equation} \delta \intc{1/\delta}(1-Re(\ev{\exp(-i \theta \ddp{\tilde{X}_T}{\varphi \otimes \phi} )})) d \theta \leq \frac{C(\varphi)}{\delta^\nu}(t_2 - t_1)^{1+\chi}. \label{ineq:tightness_main} \end{equation} Then a passage to the limit using Fatou's lemma implies (\ref{ineq:bill}). The task of proving the last inequality is delegated to Section \ref{sec:tightness_calculations}. The proof there will be conducted by estimating the characteristic function appearing on the right-hand side of (\ref{ineq:char}).
Analogously to (\ref{eq:laplace-eq}) we have \begin{equation} \ev{\exp\left(-i \ddp{\tilde{X}_T}{\varphi \otimes \phi} \right)} = \exp\left( A(T) + B(T)\right), \label{eq:decomp_complex} \end{equation} where $A(T)$ and $B(T)$ are given by (\ref{def: A}) and (\ref{def: B}) with $v_T$ and $V_T$ being the complex counterparts of the functions $v$ and $V$ from Section \ref{sec:space-time}. It is easy to check that they fulfil equation (\ref{def:V}) and \[ v_T(x,t) = \intc{t} \mathcal{T}_{t-s}\left[i \varphi_T(\cdot)\chi_T(T-s)(1-v_T(\cdot,s)) - G(v_T(\cdot,s)) \right](x)ds \] (cf. (\ref{eq: v})). \subsubsection{Auxiliary facts} Before proceeding to the calculations we gather a few additional facts. Let $p_{t}$ denote the transition density of the $\alpha$-stable motion. We have \begin{equation} p_t(x) = t^{-\frac{d}{\alpha}}p_1(xt^{-\frac{1}{\alpha}}), \label{eq:p_t p_1} \end{equation} and hence \begin{equation} \Vert p_t \Vert _q ^q = t^{\frac{d}{\alpha}(1-q)} \Vert p_1 \Vert _q ^q,\: q> \frac{d}{d+\alpha}. \label{eq:norm_p} \end{equation} We will need a few straightforward inequalities: \begin{equation} (a+b)^{1+\beta} \leq 2^{1+\beta}(a^{1+\beta} + b^{1+\beta}),\: a,b\geq 0, \label{ineq:a+b pb<} \end{equation} \begin{equation} (a+b)^{1+\beta} - a^{1+\beta} - b^{1+\beta} \geq \beta b^\beta a,\: b\geq a\geq 0, \label{ineq: a+b >} \end{equation} \begin{equation} (a+b)^{1+\beta} - a^{1+\beta} - b^{1+\beta} \leq ({1+\beta})a^\delta b^{1+\beta-\delta},\: \beta\leq \delta \leq 1,\: a,b\geq 0. \label{ineq:a+b<} \end{equation} Moreover, we will use the generalized Minkowski inequality \begin{equation} \norm{\int \! f}{p}{} \leq \int\! \norm{f}{p}{},\: p\geq 1,\label{ineq:Minkowski} \end{equation} and Young's inequality \begin{equation} \norm{f \ast g}{q}{} \leq \norm{f}{p_1}{}\norm{g}{p_2}{}, \: \frac{1}{q} = \frac{1}{p_1} + \frac{1}{p_2} -1.
\label{ineq:Young} \end{equation} \section{Calculations for the proof of Theorem \ref{thm:intermediate}} \label{sec:calculations} The general scheme presented in Section \ref{sec:proof_schema} left the main technical difficulty of the proof, namely the convergence (\ref{eq:goal}), untouched. We decompose $B(T)$ in the following way \[ B(T) = \frac{V}{{1+\beta}} \left(B_3(T) - B_2(T) - B_1(T) \right), \] where \[ B_1(T) = \int_{0}^{+\infty} \int_{\mathbb{R}^d} (\T{l}v_T(x))^{1+\beta} - (V_T(x,l))^{1+\beta} dx dl, \] \[ B_2(T) = \int_{0}^{+\infty} \int_{\mathbb{R}^d} (\T{l}n_T(x))^{1+\beta} - (\T{l}v_T(x))^{1+\beta} dx dl, \] \[ B_3(T) = \int_{0}^{+\infty} \int_{\mathbb{R}^d} (\T{l}n_T(x))^{1+\beta} dx dl. \] Convergence (\ref{eq:goal}) will be established once we obtain \begin{equation} B_1(T) \rightarrow 0 \label{lim_B1}, \end{equation} \begin{equation} B_2(T) \rightarrow 0, \label{lim_B2} \end{equation} \begin{equation} B_3(T) \rightarrow \norm{\varphi}{}{1+\beta}\int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{1} p_{l+s}\left(x\right)\chi(s) ds\right)^{1+\beta} dx dl. \label{lim_B3} \end{equation} The proof of (\ref{lim_B2}) is omitted since it is similar to that of (\ref{lim_B1}), but simpler. \subsection{Convergence of $B_3$} By the definition of $n_{T}$ (see (\ref{def: n})) we obtain \[ B_3(T) = \int_{0}^{+\infty} \int_{\mathbb{R}^d} \left(\T{l} \intc{T} \T{s}\varphi_T(x)\chi_T(s) ds\right)^{1+\beta} dx dl.
\] Changing the integration variable $s \rightarrow Ts$ and using the definition of $\varphi_T$ (recall (\ref{def: abbrev})) we get \[ B_3(T) = T^{\frac{d}{\alpha}\beta - 1}\int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{1} \T{l+Ts}\varphi(x)\chi(s) ds \right)^{1+\beta} dx dl.\] Using the definition of the semigroup $\mathcal{T}$ (see (\ref{def:T})) yields \[ B_3(T) = T^{\frac{d}{\alpha}\beta - 1}\int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{1} \int_{\mathbb{R}^d} p_{l+Ts}(x-y) \varphi (y)\chi(s) dy ds\right)^{1+\beta} dx dl.\] By (\ref{eq:p_t p_1}) we have \[ B_3(T) = T^{\frac{d}{\alpha}\beta - 1}\int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{1} \int_{\mathbb{R}^d} T^{-\frac{d}{\alpha}} p_{l/T + s}\left(T^{-\frac{1}{\alpha}}(x-y)\right) \varphi(y)\chi(s) dy ds\right)^ {1+\beta} dx dl ,\] and, after the substitutions $x \rightarrow T^{\frac{1}{\alpha}}x$ and $l \rightarrow Tl$, \[ B_3(T) = \int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{1} \int_{\mathbb{R}^d} p_{l+s}\left(x-T^{-\frac{1}{\alpha}}y\right) \varphi (y)\chi(s) dy ds\right)^{1+\beta} dx dl.\] This can be written as \[ B_3(T) = \int_{0}^{+\infty} \int_{\mathbb{R}^d} \rbr{f_l \ast g_T}^{1+\beta} dx dl, \] where $f_l(x) = \int_0^1 p_{s+l}(x) \chi(s) ds$ and $g_T(x) = T^{\frac{d}{\alpha}}\varphi(xT^{\frac{1}{\alpha}})$. \\ Firstly, since $0\leq\chi\leq c$, using the Jensen inequality we easily check \[ \norm{f_l}{{1+\beta}}{{1+\beta}} \leq c\norm{\intc{1} p_{l+s}(x) ds}{{1+\beta}}{{1+\beta}} \leq c\int_{\mathbb{R}^d} \intc{1}(p_{l+s}(x))^{1+\beta} dx ds = c\intc{1} \norm{p_{l+s}}{{1+\beta}}{{1+\beta}} ds = \] (by (\ref{eq:norm_p})) \[ c_1 \intc{1} (l+s)^{-\frac{d}{\alpha}\beta} ds \leq c_2 l^{-\frac{d}{\alpha}\beta}. \] Secondly, using (\ref{ineq:Minkowski}) and (\ref{eq:norm_p}) we get \[ \norm{f_l}{{1+\beta}}{{1+\beta}} \leq c\norm{\intc{1}\! p_{l+s}(x) ds}{{1+\beta}}{{1+\beta}} \leq c\rbr{\intc{1} \!\norm{p_{l+s}}{{1+\beta}}{} ds}^{1+\beta} \!\!\!
= c_1\rbr{\intc{1} (l+s)^{-\frac{d}{\alpha} \frac{\beta}{{1+\beta}}} ds}^{1+\beta} \!\!\!\leq c_2. \] Combining the last two estimates we get $\norm{f_l}{{1+\beta}}{{1+\beta}} \leq c (1\wedge l^{-\frac{d}{\alpha}\beta})$. In this way we have proved that $f_l$ is $({1+\beta})$-integrable jointly in $x$ and $l$, since $\frac{d}{\alpha}\beta > 1 $. Taking into account the form of $g_T$ (informally speaking, $g_T$ converges to $\delta_0 \cdot\norm{\varphi}{}{}$) we obtain the $\mathcal{L}^{{1+\beta}}$ convergence \[ f_l\ast g_T \rightarrow f_l\cdot\norm{\varphi}{}{}, \] which is exactly (\ref{lim_B3}). \subsection{Convergence of $B_1$} We prove (\ref{lim_B1}). Applying (\ref{ineq:a+b<}) with $\delta = \frac{1+\beta}{2}$ we write \[ B_1(T) \leq B_{11}(T) + B_{12}(T), \] where \[ B_{11}(T) = \int_{0}^{+\infty} \int_{\mathbb{R}^d} (1+\beta) [ \T{l}v_T(x) - V_T(x,l)]^{\frac{1+\beta}{2}} [\T{l}v_T(x)]^{\frac{1+\beta}{2}} dx dl, \] \[ B_{12}(T) = \int_{0}^{+\infty} \int_{\mathbb{R}^d} [\T{l}v_T(x) - V_T(x,l)]^{1+\beta} dx dl. \] Using (\ref{main_equation}), (\ref{ineq: VT<TvT}) and (\ref{ineq: n<v}) we get \[ B_{12}(T) \leq c \int_{0}^{+\infty} \int_{\mathbb{R}^d} \rbr{\intc{l} \T{l-s} (\T{s} n_T(x))^{1+\beta} ds}^{1+\beta} dx dl. \] By the definition of $n_T$ (see (\ref{def: n})) \[ B_{12}(T) \leq c \int_{0}^{+\infty} \int_{\mathbb{R}^d} \rbr{\intc{l} \T{l-s} \rbr{\T{s} \intc{T} \T{u} \varphi_T(x) du}^{1+\beta} ds}^{1+\beta} dx dl. \] Combining this with the definition of $\mathcal{T}$ (see (\ref{def:T})) and substituting $u \rightarrow Tu$ we can rewrite this as \[ T^{(1+\beta)(\frac{d}{\alpha}\beta-1)} \int_{0}^{+\infty} \int_{\mathbb{R}^d} \rbr{\intc{l} \int_{\mathbb{R}^d} p_{l-s}(x-y) \rbr{\intc {1} \int_{\mathbb{R}^d} p_{Tu+s}(y-z) \varphi(z) dz du}^{1+\beta} dy ds}^{1+\beta} dx dl. \] Application of (\ref{eq:p_t p_1}) and the substitutions $y \rightarrow T^{-\frac{1}{\alpha}}y$ and $x \rightarrow T^{\frac{1}{\alpha}}x$ yields \[ T^{C_T} \int_{0}^{+\infty} \int_{\mathbb{R}^d} \rbr{\intc{l} \int_{\mathbb{R}^d}
p_{l-s}\rbr{T^{\frac{1}{\alpha}}(x-y)} \rbr {\intc{1} \int_{\mathbb{R}^d} p_{u+s/T}\rbr{y-T^{-\frac{1}{\alpha}}z} \varphi(z) dz du}^{1+\beta} dy ds}^{1+\beta} dx dl, \] where $C_T=-({1+\beta})^2+\frac{d}{\alpha}(1+\beta^2 + \beta^3)$. Applying (\ref{eq:p_t p_1}) and substituting $s \rightarrow Ts$ and $l \rightarrow Tl$ we obtain \[ T^{1-\frac{d}{\alpha}\beta} \int_{0}^{+\infty} \int_{\mathbb{R}^d} \rbr{\intc{l} \int_{\mathbb{R}^d} p_{l-s}\rbr{x-y} \rbr{\intc{1} \int_{\mathbb{R}^d} p_{u+s}\rbr {y-T^{-\frac{1}{\alpha}}z} \varphi(z) dz du}^{1+\beta} dy ds}^{1+\beta} dx dl. \] Let us denote \[ f_T(s, y) = \rbr{\intc{1} \int_{\mathbb{R}^d} p_{u+s}\rbr{y-T^{-\frac{1}{\alpha}}z} \varphi(z) dz du}^{1+\beta} = \rbr{\Phi_T \ast \intc{1} p_{u+s} du}^{1+\beta} (y), \] where $\Phi_T(x) = T^{\frac{d}{\alpha}}\varphi(T^{\frac{1}{\alpha}}x)$. We obtain \begin{equation} B_{12}(T) \leq T^{1-\frac{d}{\alpha}\beta} \int_{0}^{+\infty} H(l) dl, \label{integral_to_be_proved} \end{equation} where \[ H(l) = \norm{\intc{l} p_{l-s} \ast f_T(s, \cdot) ds}{{1+\beta}}{{1+\beta}}. \] Applying (\ref{ineq:Minkowski}) we get \[ H(l) \leq \rbr{\intc{l} \norm{p_{l-s} \ast f_T(s, \cdot)}{{1+\beta}}{}ds}^{1+\beta}. \] Utilizing Young's inequality (\ref{ineq:Young}) we write \[ H(l) \leq \rbr{\intc{l} \norm{p_{l-s}}{{1+\beta}}{} \norm{f_T(s, \cdot)}{1}{} ds}^{1+\beta}. \] By (\ref{eq:norm_p}) and the definition of $f_T$ we obtain \[ H(l) \leq \rbr{\intc{l} (l-s)^{-\frac{d\beta}{\alpha ({1+\beta})}} \norm{\Phi_T \ast \intc {1} p_{u+s} du}{{1+\beta}}{{1+\beta}} ds}^{1+\beta}. \] Using (\ref{ineq:Young}) once again we have \[ H(l) \leq \rbr{\intc{l} (l-s)^{-\frac{d\beta}{\alpha ({1+\beta})}} \norm{\Phi_T}{1}{{1+\beta}} \norm{\intc{1} p_{u+s} du}{{1+\beta}}{{1+\beta}} ds}^{1+\beta}. \] Hence, by (\ref{ineq:Minkowski}) and (\ref{eq:norm_p}), \[ H(l) \leq c \rbr{\intc{l} (l-s)^{-\frac{d\beta}{\alpha ({1+\beta})}} \rbr{\intc{1} (u+s)^{- \frac{d\beta}{ \alpha ({1+\beta})}} du}^{1+\beta} ds}^{1+\beta}.
\] For intermediate dimensions we have $\frac{d}{\alpha}\frac{\beta}{{1+\beta}}<1$, hence the inner integral can be estimated by a constant independent of $s$, so \[ H(l) \leq c \rbr{\intc{l} (l-s)^{-\frac{d\beta}{\alpha ({1+\beta})}} ds}^{1+\beta}. \] Assume $l\leq 1$. Then \begin{equation} H(l)<c. \label{H_small} \end{equation} Now we derive an estimate that works for ``large'' $l$. By (\ref{ineq:Minkowski}) we have \[ H(l) \leq \rbr{\intc{l} \norm{p_{l-s} \ast f_T(\cdot, s)}{{1+\beta}}{} ds}^{1+\beta}. \] Young's inequality (\ref{ineq:Young}) yields \[ H(l) \leq \rbr{\intc{l} \norm{p_{l-s}}{{1+\beta}}{} \norm{f_T(\cdot, s)}{1}{} ds}^{1+\beta}. \] We estimate the $\mathcal{L}^1$ norm of $f_T$ using (\ref{ineq:Young}): \[ \norm{f_T(\cdot, s)}{1}{} = \norm{\Phi_T \ast \intc{1} p_{u+s} du}{{1+\beta}}{{1+\beta}} \leq \norm{\Phi_T}{1}{{1+\beta}} \norm{\intc{1} p_{u+s} du}{{1+\beta}}{{1+\beta}}.\] We use the trivial fact that $\norm{\Phi_T}{1}{}=c$ and (\ref{ineq:Young}) ($p,q\in [1,{1+\beta}]$ are yet to be specified): \[ \norm{f_T(\cdot, s)}{1}{} \leq c \norm{p_s \ast \intc{1} p_{u} du}{{1+\beta}}{{1+\beta}} \leq \norm{p_s}{p}{{1+\beta}} \norm{\intc{1} p_{u} du}{q}{{1+\beta}}.\] We have \[ \norm{\intc{1} p_{u} du}{q}{{1+\beta}} \leq \rbr{\intc{1} \norm{p_{u}}{q}{} du}^ {{1+\beta}} \leq c\rbr{\intc{1} u^{\frac{d}{\alpha}(\frac{1}{q}-1)} du}^{{1+\beta}} \leq c_1, \] since for any $q\in [1,{1+\beta}]$ we have $\frac{d}{\alpha}(\frac{1}{q}-1)>-1$ in the intermediate regime. Finally, \[ \norm{f_T(\cdot, s)}{1}{} \leq c s^{\frac{d}{\alpha}\rbr{\frac{1}{p} - 1}\rbr{{1+\beta}}}. \] One can adjust $p$ to make the exponent $A={\frac{d}{\alpha}\rbr{\frac{1}{p} - 1} \rbr{{1+\beta}}}$ arbitrarily close to $-1$ (note that for $p={1+\beta}$ we have $A=-\frac{d}{\alpha}\beta<-1$, while for $p=1$ we have $A=0$). Hence, going back to the estimation of $H(l)$, \[ H(l) \leq c\rbr{\intc{l} (l-s)^B s^A ds}^{1+\beta}, \] where $B=\frac{d}{\alpha}\rbr{\frac{1}{{1+\beta}}-1}$.
Substituting $s\rightarrow ls$ we obtain \[ H(l) \leq c \rbr{l^{A+B+1}\intc{1} (1-s)^B s^A ds}^{1+\beta} = c l^{(A+B+1)({1+\beta})}\rbr{\intc{1} (1-s)^B s^A ds}^{1+\beta}. \] Notice that $B>-1$ and \[(A+B+1)({1+\beta}) = (A+1)({1+\beta}) - \frac{d}{\alpha}\beta. \] Since $-\frac{d}{\alpha}\beta<-1$ and $A+1$ can be made arbitrarily close to $0$, we obtain \begin{equation} H(l) \leq c\, l^W, \label{H_big} \end{equation} where $W<-1$. Combining the estimates (\ref{H_small}) and (\ref{H_big}) for $H$ we conclude that the integral in (\ref{integral_to_be_proved}) is finite and $B_{12}(T)\rightarrow 0$. \subsection{Tightness calculations} \label{sec:tightness_calculations} Following the scheme in Section \ref{sec:tightness_schema} it remains to prove (\ref{ineq:tightness_main}). Slightly abusing notation, we will use an additional argument $\theta$ to indicate that a function is computed for $\theta \Phi$ instead of $\Phi$ (e.g.\ $A(T,\theta)$). In this section we deal with the complex-valued functions defined in Section \ref{sec:tightness_schema}, which are not to be confused with the functions in the sections devoted to space-time convergence. \\ By (\ref{ineq:tightness_main}) we have to estimate \begin{equation*} L = 1-Re\rbr{\ev{\exp\rbr{-i \theta \ddp{\tilde{X}_T}{\varphi \otimes \phi} }}}. \end{equation*} We use (\ref{eq:decomp_complex}) and then estimate $A(T,\theta)$ in the same way as in the proof of tightness in \cite{Bojdecki:2007aa}, thus \[L \leq \ev{\left| 1-\exp\rbr{-i \theta \ddp{\tilde{X}_T}{\varphi \otimes \phi} } \right|} \leq |I| + |II| + |III|, \] where \[ I = i \theta \int_{\mathbb{R}^d} \intc{T} \varphi_T(x) \chi_T(T-s) v_{T,\theta}(x,s) dx ds, \] \[ II = \frac{V}{{1+\beta}} \int_{\mathbb{R}^d} \intc{T} v_{T,\theta}^{1+\beta}(x,s) dx ds, \] \[ III = V \int_{0}^{+\infty} \int_{\mathbb{R}^d} \rbr{V_{T,\theta}(x,t)}^{1+\beta} dx dt. \] The terms $I$ and $II$ are the same as in \cite{Bojdecki:2007aa}. Hence we only have to deal with $|III|$.
Before that, we show an estimate which holds for $T$ large enough. Firstly, recall the definition of $V_T$ (see (\ref{def:V})): \[ |V_{T,\theta}(x, t)| = |1 - \ev{e^{\ddp{N^x_t}{ \ln w_{T,\theta} }}}| \leq \ev{|1 - e^{\ddp{N^x_t} { \ln w_{T,\theta} }}|}, \] where \begin{equation*} w_{T,\theta}(x,r,t) = \mathbb{E}\exp\left\{ -i\theta\int_{0}^{t}\left\langle N_{s}^{x},\Psi\left(\cdot,r+s\right)\right\rangle ds\right\}. \end{equation*} We know that $|w_{T,\theta}| \leq 1$, which implies $Re \ln w_{T,\theta} \leq 0$ and consequently $|e^{\ddp{N^x_t}{ \ln w_{T,\theta} }}| \leq 1$. Finally, for $Re\, z \leq 0$ we can use the inequality $|1-e^z|\leq 2 |z|$. Hence \[ |V_{T,\theta}(x, t)| \leq 2\ev{|\ddp{N^x_t}{ \ln w_{T,\theta} }|} \leq 2\ev{\ddp{N^x_t}{ |\ln w_{T,\theta}| }} = 2\T{t} |\ln w_{T,\theta}| \leq 2\T{t} n_{T,\theta} \] (note that $n_{T, \theta}$ is a real function).\\ Therefore we have to estimate \[ |B_{T,\theta}| \leq \int_{0}^{+\infty} \int_{\mathbb{R}^d} (\T{l} n_{T,\theta}(x))^{1+\beta} dx dl. \] Let us notice that this integral is the same as $B_3$ from Section \ref{sec:calculations}. For $T$ large enough we have \[ |B_{T,\theta}| \leq C \norm{\theta \varphi}{}{1+\beta}\int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{1} p_ {l+s}\left(x\right)\chi(s) ds\right)^{1+\beta} dx dl. \] According to the argument in Section \ref{sec:tightness_schema} we choose $\chi \simeq 1_{[t_1, t_2]}$, so \[ |B_{T,\theta}| \leq C\,\theta^{1+\beta}\int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \int_{t_1}^{t_2} p_{l+s}\left(x\right) ds\right)^{1+\beta} dx dl. \] Denote $\Delta := t_2 - t_1$. Then \[ |B_{T,\theta}| \leq C\,\theta^{1+\beta} \int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{\Delta} p_{l+s}\left(x\right) ds\right)^{1+\beta} dx dl. \] After obvious substitutions and using (\ref{eq:p_t p_1}) we obtain \[ |B_{T,\theta}| \leq C\,\theta^{1+\beta}\, \Delta^{(2+\beta)} \Delta^{-\frac{d}{\alpha}\beta} \int_{0}^{+\infty} \int_{\mathbb{R}^d} \left( \intc{1} p_{l + s}\left( x\right) ds\right)^{1+\beta} dx dl.
\] It is easy to check that $2+\beta-\frac{d}{\alpha}\beta>1$; reasoning along the lines of the proof in \cite{Bojdecki:2007aa} completes the proof.\\ \end{document}
\begin{document} \title{\textbf{Bringing Order to Special Cases of \\ Klee's Measure Problem}} \author{ Karl Bringmann } \date{} \institute{Max Planck Institute for Informatics} \maketitle \begin{abstract} Klee's Measure Problem (KMP) asks for the volume of the union of $n$ axis-aligned boxes in $\mathbb{R}^d$. Omitting logarithmic factors, the best algorithm has runtime $\mathcal{O}^*(n^{d/2})$ [Overmars,Yap'91]. There are faster algorithms known for several special cases: \textsc{Cube-KMP}\ (where all boxes are cubes), \textsc{Unitcube-KMP}\ (where all boxes are cubes of equal side length), \textsc{Hypervolume}\ (where all boxes share a vertex), and \GRND{k} (where the projection onto the first $k$ dimensions is a \textsc{Hypervolume}\ instance). In this paper we bring some order to these special cases by providing reductions among them. In addition to the trivial inclusions, we establish \textsc{Hypervolume}\ as the easiest of these special cases, and show that the runtimes of \textsc{Unitcube-KMP}\ and \textsc{Cube-KMP}\ are polynomially related. More importantly, we show that any algorithm for one of the special cases with runtime $T(n,d)$ implies an algorithm for the general case with runtime $T(n,2d)$, yielding the first non-trivial relation between KMP and its special cases. This allows us to transfer \ensuremath{\textup{W}[1]}-hardness of \textsc{KMP}\ to all special cases, proving that no $n^{o(d)}$ algorithm exists for any of the special cases assuming the Exponential Time Hypothesis. Furthermore, assuming that there is no \emph{improved} algorithm for the general case of KMP (i.e., no algorithm with runtime $\mathcal{O}(n^{d/2 - \ensuremath{\varepsilon}})$), this reduction shows that there is no algorithm with runtime $\mathcal{O}(n^{\lfloor d/2 \rfloor /2 - \ensuremath{\varepsilon}})$ for any of the special cases. Under the same assumption we show a tight lower bound for a recent algorithm for \GRND{2} [Y{\i}ld{\i}z,Suri'12].
\end{abstract} \definecolor{light}{rgb}{0.85 0.85 0.85} \definecolor{dark}{rgb}{0.65 0.65 0.65} \section{Introduction} \textbf{Klee's measure problem (\textsc{KMP})} asks for the volume of the union of $n$ axis-aligned boxes in $\ensuremath{\mathbb{R}^d}$, where $d$ is considered to be a constant. This is a classic problem with a long history~\cite{Klee77,Bentley77,LeeuwenW81,OvermarsY91,Chan09,FredmanW78}. The fastest algorithm has runtime $\mathcal{O}(n^{d/2} \log n)$ for $d \geqslant 2$, given by Overmars and Yap~\cite{OvermarsY91}, which was slightly improved to $n^{d/2} 2^{\mathcal{O}(\log^*n)}$ by Chan~\cite{Chan09}. Thus, for over twenty years there has been no improvement over the runtime bound $n^{d/2}$. As already expressed in~\cite{Chan09}, one might conjecture that no \emph{improved} algorithm for \textsc{KMP}\ exists, i.e., no algorithm with runtime $\mathcal{O}(n^{d/2 - \ensuremath{\varepsilon}})$ for some $\ensuremath{\varepsilon} > 0$. However, no matching lower bound is known, not even under reasonable complexity-theoretic assumptions. The best unconditional lower bound is $\Omega(n \log n)$ for any dimension $d$~\cite{FredmanW78}. Chan~\cite{Chan09} proved that \textsc{KMP}\ is \ensuremath{\textup{W}[1]}-hard by giving a reduction from the $k$-Clique problem. Since his reduction has $k = d/2$, we can transfer runtime lower bounds from $k$-Clique to \textsc{KMP}, implying that there is no $n^{o(d)}$ algorithm for \textsc{KMP}\ assuming the Exponential Time Hypothesis (see~\cite{fptsurvey}). However, this does not determine the correct constant in the exponent. Moreover, Chan argues that since no ``purely combinatorial'' algorithm with runtime $\mathcal{O}(n^{k-\ensuremath{\varepsilon}})$ is known for Clique, it might be that there is no such algorithm with runtime $\mathcal{O}(n^{d/2-\ensuremath{\varepsilon}})$ for \textsc{KMP}; however, this does not rule out faster algorithms using, e.g., fast matrix multiplication techniques.
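To make the problem definition concrete, here is a small illustrative implementation (ours, not from the literature discussed here): a naive coordinate-compression method that computes the measure exactly using roughly $(2n)^d$ elementary cells. This is far from the $n^{d/2}$ bound, but enough to experiment with small instances. The function name and box representation are our own choices.

```python
from itertools import product

def klee_volume(boxes):
    """Volume of the union of axis-aligned boxes.

    Each box is a pair (lo, hi) of d-tuples with lo[i] <= hi[i].
    Naive coordinate-compression method: the box faces partition space
    into elementary cells, and no cell straddles a box boundary, so a
    cell is covered iff it is entirely contained in some input box.
    """
    if not boxes:
        return 0.0
    d = len(boxes[0][0])
    # Per dimension, the sorted distinct coordinates of all box faces.
    grids = [sorted({b[0][i] for b in boxes} | {b[1][i] for b in boxes})
             for i in range(d)]
    vol = 0.0
    # Enumerate elementary cells by their index in each dimension's grid.
    for idx in product(*(range(len(g) - 1) for g in grids)):
        lo = tuple(grids[i][idx[i]] for i in range(d))
        hi = tuple(grids[i][idx[i] + 1] for i in range(d))
        # The cell is covered iff some box contains it.
        if any(all(b[0][i] <= lo[i] and hi[i] <= b[1][i] for i in range(d))
               for b in boxes):
            cell = 1.0
            for i in range(d):
                cell *= hi[i] - lo[i]
            vol += cell
    return vol

# Two overlapping unit-scaled squares in the plane: 4 + 4 - 1 = 7.
print(klee_volume([((0, 0), (2, 2)), ((1, 1), (3, 3))]))  # 7.0
```

The exponential dependence on $d$ of this brute-force method is exactly what the algorithms discussed above improve upon.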
Since no further progress was made for \textsc{KMP}\ for a long time, research turned to the study of special cases. Over the years, the following special cases have been investigated. For each one we list the asymptotically fastest results. \begin{itemize} \item \textbf{\textsc{Cube-KMP}:} Here the given boxes are cubes, not necessarily all with the same side length. This case can be solved in time $\mathcal{O}(n^{(d+2)/3})$ for $d \geqslant 2$~\cite{Bringmann10}. In dimension $d=3$ this has been improved to $\mathcal{O}(n \log^4 n)$ by Agarwal~\cite{Agarwal10}. In dimensions $d \leqslant 2$ even the general case can be solved in time $\mathcal{O}(n \log n)$, so the same bound clearly applies to this special case. As described in~\cite{Bringmann10}, there are simple reductions showing that the case of cubes is roughly the same as the case of ``$\alpha$-fat boxes'', where all side lengths of a box differ by at most a constant factor $\alpha$. \item \textbf{\textsc{Unitcube-KMP}:} Here the given boxes are cubes, all of the same side length. This is a specialization of \textsc{Cube-KMP}, so all algorithms from above apply. The combinatorial complexity of a union of unit cubes is $\mathcal{O}(n^{\lfloor d/2 \rfloor})$~\cite{Boissonnat98}. Using this, there are algorithms with runtime $\mathcal{O}(n^{\lfloor d/2 \rfloor} \operatorname{polylog} n)$~\cite{KaplanRSV07} and $\mathcal{O}(n^{\lceil d/2 \rceil - 1 + \frac{1}{\lceil d/2 \rceil}} \operatorname{polylog} n)$~\cite{Chan03}. Again, there is a generalization to ``$\alpha$-fat boxes of roughly equal size'', and any algorithm for \textsc{Unitcube-KMP}\ can be adapted to an algorithm for this generalization~\cite{Bringmann10}. \item \textbf{\textsc{Hypervolume}:} Here all boxes have a common vertex. Without loss of generality, we can assume that they share the vertex $(0,\ldots,0) \in \ensuremath{\mathbb{R}^d}$ and lie in the positive orthant~$\mathbb{R}_{\geqslant0}^d$.
This special case is of particular interest for practice, as it is used as an indicator of the quality of a set of points in the field of Evolutionary Multi-Objective Optimization~\cite{ZitzlerT99,BeumeNE07,ZitzlerK04,Igel06}. Improving upon the general case of \textsc{KMP}, there is an algorithm with runtime $\mathcal{O}(n \log n)$ for $d=3$~\cite{BeumeFLIPV}. The same paper also shows an unconditional lower bound of $\Omega(n \log n)$ for $d > 1$, while $\#P$-hardness in the number of dimensions was shown in~\cite{BF09b}. Recently, an algorithm with runtime $\mathcal{O}(n^{(d-1)/2} \log n)$ for $d \geqslant 3$ was presented in~\cite{YildizS12}. \item \textbf{\GRND{k}:} Here the projection of the input boxes to the first $k$ dimensions is a \textsc{Hypervolume}\ instance, where $0 \leqslant k \leqslant d$; the other coordinates are arbitrary. This rather novel special case appeared in~\cite{YildizS12}, where an algorithm with runtime $\mathcal{O}(n^{(d-1)/2} \log^2 n)$ for $d \geqslant 3$ was given for \GRND{2}. \end{itemize} Note that for none of these special cases is \ensuremath{\textup{W}[1]}-hardness known, so there is no lower bound larger than $\Omega(n \log n)$ (for constant or slowly growing $d$), not even under reasonable complexity-theoretic assumptions. Also note that there are trivial inclusions of some of these special cases: Each special case can be seen as a subset of all instances of the general case. As such subsets, the following inclusions hold. \begin{itemize} \item $\textsc{Unitcube-KMP} \subseteq \textsc{Cube-KMP} \subseteq \textsc{KMP}$. \item $\GRND{(k+1)} \subseteq \GRND{k}$ for all $k$. \item $\GRND{d} = \textsc{Hypervolume}$ and $\GRND{0} = \textsc{KMP}$. \end{itemize} This allows us to transfer some of the results listed above to other special cases. E.g., the \textsc{Cube-KMP}\ algorithm with runtime $\mathcal{O}(n^{(d+2)/3})$ also applies to \textsc{Unitcube-KMP}. 
\subsection{Our results} We present several reductions among the above four special cases and the general case of \textsc{KMP}. They provide bounds on the runtimes needed for these variants and, thus, yield some order among the special cases. Our first reduction relates \textsc{Hypervolume}\ and \textsc{Unitcube-KMP}. \begin{theorem} \label{thm:hypuckmp} If there is an algorithm for \textsc{Unitcube-KMP}\ with runtime $\ensuremath{T_\UCKMP}(n,d)$, then there is an algorithm for \textsc{Hypervolume}\ with runtime \begin{align*} \ensuremath{T_\HYP}(n,d) \leqslant \mathcal{O}(\ensuremath{T_\UCKMP}(n,d)). \end{align*} \end{theorem} Note that if \textsc{Hypervolume}\ were a subset of \textsc{Unitcube-KMP}, then the same statement would hold, with the constant hidden by the $\mathcal{O}$-notation being~1. Hence, this reduction can nearly be seen as an inclusion. Moreover, together with the trivial inclusions this reduction establishes \textsc{Hypervolume}\ as the easiest of all studied special cases. \begin{corollary} \label{cor:nlogn} For all studied special cases, \textsc{Hypervolume}, \textsc{Unitcube-KMP}, \textsc{Cube-KMP}, and \GRND{k} (for any $0 \leqslant k \leqslant d$), we have the unconditional lower bound $\Omega(n \log n)$ for any $d > 1$. \end{corollary} One can find contradicting statements regarding the feasibility of a reduction as in \thmref{hypuckmp} in the literature. On the one hand, existence of such a reduction has been mentioned in~\cite{YildizS12}. On the other hand, a newer paper~\cite{GuerreiroFE12} contains this sentence: ``Better bounds have been obtained for the KMP on unit cubes ..., but reducing the hypervolume indicator to such problems is not possible in general.'' In any case, to the best of our knowledge a proof of such a statement cannot be found anywhere in the literature. 
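The construction behind \thmref{hypuckmp}, carried out in detail in \secref{hypuckmp}, is simple enough to sketch in code. The following Python sketch is illustrative only: the helper \texttt{union\_volume} is a brute-force coordinate-compression routine standing in for an actual \textsc{Unitcube-KMP}\ algorithm, and all function names are ours.

```python
from itertools import product

def union_volume(boxes):
    # Brute-force union volume by coordinate compression (reference only):
    # every elementary grid cell is either inside some box or misses the union.
    d = len(boxes[0])
    grids = [sorted({c for B in boxes for c in B[i]}) for i in range(d)]
    total = 0.0
    for cell in product(*(range(len(g) - 1) for g in grids)):
        lo = [grids[i][j] for i, j in enumerate(cell)]
        hi = [grids[i][j + 1] for i, j in enumerate(cell)]
        if any(all(B[i][0] <= lo[i] and hi[i] <= B[i][1] for i in range(d))
               for B in boxes):
            v = 1.0
            for l, h in zip(lo, hi):
                v *= h - l
            total += v
    return total

def hypervolume_via_unitcubes(corners):
    # corners[j] is the upper corner b of the grounded box [0,b_1] x ... x [0,b_d].
    d = len(corners[0])
    delta = max(max(b) for b in corners)
    # Extend each grounded box to a cube of side delta; since b_i <= delta,
    # the added parts lie outside the positive orthant.
    cubes = [[(b[i] - delta, b[i]) for i in range(d)] for b in corners]
    C = [(0, delta)] * d
    # Solve the unit-cube instance once with and once without C = [0,delta]^d:
    # the difference is vol(C minus the cubes) = delta^d - hypervolume.
    return delta ** d - (union_volume(cubes + [C]) - union_volume(cubes))
```

On the instance with upper corners $(1,3)$, $(2,2)$, $(3,1)$ (so $\Delta = 3$), both the direct union volume and the reduction return $6$.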
Our second reduction substantiates the intuition that the special cases \textsc{Cube-KMP}\ and \textsc{Unitcube-KMP}\ are very similar, by showing that their runtimes differ by at most a factor of $\mathcal{O}(n)$. Recall that $\textsc{Unitcube-KMP} \subseteq \textsc{Cube-KMP}$ was one of the trivial inclusions. We prove an inequality in the other direction in the following theorem. \begin{theorem} \label{thm:uckmpckmp} If there is an algorithm for \textsc{Unitcube-KMP}\ with runtime $\ensuremath{T_\UCKMP}(n,d)$, then there is an algorithm for \textsc{Cube-KMP}\ with runtime \begin{align*} \ensuremath{T_\CKMP}(n,d) \leqslant \mathcal{O}( n \cdot \ensuremath{T_\UCKMP}(n,d) ). \end{align*} \end{theorem} Our third and last reduction finally allows us to show lower bounds for all special cases. We show an inequality between the general case of \textsc{KMP}\ and \GRND{2k}, in the opposite direction to the trivial inclusions. For this, we have to increase the dimension in which we consider \GRND{2k}. \begin{theorem} \label{thm:thereduction} If there is an algorithm for \GRND{2k} in dimension $d+k$ with runtime $\TGRND{2k}(n,d+k)$, then there is an algorithm for \textsc{KMP}\ in dimension $d$ with runtime \begin{align*} \ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2k}(n,d+k)). \end{align*} \end{theorem} Note that, if we set $k = d$, the special case \GRND{2k} in $d+k$ dimensions becomes \textsc{Hypervolume}\ in $2d$ dimensions. Since we established \textsc{Hypervolume}\ as the easiest variant, the above reduction allows us to transfer \ensuremath{\textup{W}[1]}-hardness from the general case to all special cases. Since the dimension is increased only by a constant factor, even the tight lower bound on the runtime can be transferred to all special cases. \begin{corollary} \label{cor:Wone} There is no $n^{o(d)}$ algorithm for any of the special cases \textsc{Hypervolume}, \textsc{Unitcube-KMP}, \textsc{Cube-KMP}, and \GRND{k}, assuming the Exponential Time Hypothesis. 
\end{corollary} We immediately get more precise lower bounds if we assume that no improved algorithm exists for \textsc{KMP}\ (no algorithm with runtime $\mathcal{O}(n^{d/2 - \ensuremath{\varepsilon}})$). \begin{corollary} If there is no improved algorithm for \textsc{KMP}, then there is no algorithm with runtime $\mathcal{O}(n^{\lfloor d/2 \rfloor /2 - \ensuremath{\varepsilon}})$ for any of \textsc{Hypervolume}, \textsc{Unitcube-KMP}, \textsc{Cube-KMP}, and \GRND{k}, for any $\ensuremath{\varepsilon} > 0$. \end{corollary} This shows the first lower bound for all studied special cases that is larger than $\Omega(n \log n)$. Note that there is, however, a wide gap to the best known upper bound of $\mathcal{O}(n^{(d+2)/3})$ for \textsc{Hypervolume}, \textsc{Unitcube-KMP}, and \textsc{Cube-KMP}. Furthermore, setting $k=1$, \thmref{thereduction} immediately implies that the recent algorithm for \GRND{2} with runtime $\mathcal{O}(n^{(d-1)/2} \log^2 n)$~\cite{YildizS12} is optimal (apart from logarithmic factors and if there is no improved algorithm for \textsc{KMP}). \begin{corollary} \label{cor:grnd2} If there is no improved algorithm for \textsc{KMP}, then there is no algorithm for \GRND{2} with runtime $\mathcal{O}(n^{(d-1)/2 - \ensuremath{\varepsilon}})$ for any $\ensuremath{\varepsilon} > 0$. \end{corollary} To simplify our runtime bounds, in some proofs we use the following technical lemma. Informally, it states that for any \GRND{k} algorithm with runtime $T(n,d)$ we have $T(\mathcal{O}(n),d) \leqslant \mathcal{O}(T(n,d))$. Note that in this paper we hide by the $\mathcal{O}$-notation any functions depending solely on $d$. \begin{lemma} \label{lem:technical} Fix $0 \leqslant k \leqslant d$ and $c > 1$. If there is an algorithm for \GRND{k} with runtime $\TGRND{k}(n,d)$ then there is another algorithm for \GRND{k} with runtime $\TGRND{k}'(n,d)$ satisfying \begin{align*} \TGRND{k}'(c n,d) \leqslant \mathcal{O}(\TGRND{k}(n,d)). 
\end{align*} \end{lemma} \subsection{Notation and Organization} A \emph{box} is a set of the form $B = [a_1,b_1]\times \ldots \times [a_d,b_d] \subset \ensuremath{\mathbb{R}^d}$, $a_i,b_i \in \mathbb{R}$, $a_i \leqslant b_i$. A \emph{cube} is a box with all side lengths equal, i.e., $|b_1-a_1| = \ldots = |b_d - a_d|$. Moreover, a \emph{\textsc{KMP}\ instance} is simply a set $M$ of $n$ boxes. In \textsc{Cube-KMP}\ all these boxes are cubes, and in \textsc{Unitcube-KMP}\ all these boxes are cubes of common side length. In \textsc{Hypervolume}, all input boxes share the vertex $(0,\ldots,0) \in \ensuremath{\mathbb{R}^d}$, i.e., each input box is of the form $B = [0,b_1] \times \ldots \times [0,b_d]$. In \GRND{k}, the projection of each input box to the first $k$ dimensions is a \textsc{Hypervolume}\ box, meaning that each input box is of the form $B = [a_1,b_1] \times \ldots \times [a_d,b_d]$ with $a_1 = \ldots = a_k = 0$. We write the usual Lebesgue measure of a set $A \subseteq \ensuremath{\mathbb{R}^d}$ as $\textsc{vol}(A)$. For sets $R,A \subseteq \ensuremath{\mathbb{R}^d}$ we write $\textsc{vol}_R(A) := \textsc{vol}(R \cap A)$, the volume of $A$ restricted to $R$. For a \textsc{KMP}\ instance $M$ we let $\mathcal{U}(M) := \bigcup_{B \in M} B$. To shorten notation we write $\textsc{vol}(M) := \textsc{vol}(\mathcal{U}(M))$ and $\textsc{vol}_R(M) := \textsc{vol}(R \cap \mathcal{U}(M))$. In the next section we present the proof of \thmref{hypuckmp}. In \secref{uckmpckmp} we prove \thmref{uckmpckmp}. The proof of \thmref{thereduction} is split into \secref{thereductionOne} and \secref{thereductionTwo}: We first give the reduction for \GRND{2} and then generalize this result to $\GRND{2k}$, $k > 1$. We prove the technical \lemref{technical} in \secref{technical} and close with an extensive list of open problems in \secref{conclusions}. 
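To pin down this notation, the following Python sketch computes $\textsc{vol}(M)$ and $\textsc{vol}_R(M)$ by brute force over the cells of the coordinate grid. It visits $\mathcal{O}(n^d)$ cells, so it is a correctness reference for the definitions only, nowhere near the runtimes discussed in this paper; the function names are ours.

```python
from itertools import product

def vol_union(boxes):
    """vol(M): measure of U(M) for boxes given as [(a_1,b_1),...,(a_d,b_d)].
    Coordinate compression: each elementary grid cell is either contained
    in some box or disjoint from the union (up to measure zero)."""
    if not boxes:
        return 0.0
    d = len(boxes[0])
    grids = [sorted({c for B in boxes for c in B[i]}) for i in range(d)]
    total = 0.0
    for cell in product(*(range(len(g) - 1) for g in grids)):
        lo = [grids[i][j] for i, j in enumerate(cell)]
        hi = [grids[i][j + 1] for i, j in enumerate(cell)]
        if any(all(B[i][0] <= lo[i] and hi[i] <= B[i][1] for i in range(d))
               for B in boxes):
            v = 1.0
            for l, h in zip(lo, hi):
                v *= h - l
            total += v
    return total

def vol_restricted(R, boxes):
    """vol_R(M) = vol(R intersected with U(M)): clip every box to R first."""
    d = len(R)
    clipped = [[(max(B[i][0], R[i][0]), min(B[i][1], R[i][1])) for i in range(d)]
               for B in boxes]
    return vol_union([C for C in clipped if all(a < b for a, b in C)])
```

For example, two overlapping unit-side-2 squares $[0,2]^2$ and $[1,3]^2$ have union volume $4+4-1 = 7$, and restricted to $R = [0,1]^2$ the union has volume $1$.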
\section{\textsc{Hypervolume}\ $\leqslant$ \textsc{Unitcube-KMP}} \label{sec:hypuckmp} In this section we prove \thmref{hypuckmp} by giving a reduction from \textsc{Hypervolume}\ to \textsc{Unitcube-KMP}. Given an instance of \textsc{Hypervolume}, let $\Delta$ be the largest coordinate of any box. We extend all boxes to cubes of side length $\Delta$, yielding a \textsc{Unitcube-KMP}\ instance. In this process, we make sure that the new parts of each box will not lie in the positive orthant $\mathbb{R}_{\geqslant 0}^d$, but in the other orthants, as depicted in \figref{hypuckmp}. This means that the volume of the newly constructed cubes, restricted to $\mathbb{R}_{\geqslant 0}^d$, is the same as the volume of the input boxes. To compute this restricted volume, we compute the volume of the constructed \textsc{Unitcube-KMP}\ instance once with and once without an additional cube $C = [0,\Delta]^d$. From this we can infer the volume of the input \textsc{Hypervolume}\ instance. \begin{figure*} \centering \caption{Extending each \textsc{Hypervolume}\ box to a cube of side length $\Delta$; the added parts lie outside the positive orthant $\mathbb{R}_{\geqslant 0}^d$.} \label{fig:hypuckmp} \end{figure*} \section{\textsc{Unitcube-KMP}\ $\geqslant$ \textsc{Cube-KMP}} \label{sec:uckmpckmp} In this section we prove \thmref{uckmpckmp} by giving a reduction from \textsc{Cube-KMP}\ to \textsc{Unitcube-KMP}. Given a \textsc{Cube-KMP}\ instance, let $C$ be the cube with the smallest side length. We will compute the \emph{contribution} $v$ of $C$, i.e., the volume of space that is contained in~$C$ but no other cube. Having this, we can delete $C$ and recurse on the remaining boxes. Adding up yields the total volume of the input instance. To compute $v$, we modify each cube such that it becomes a cube of $C$'s side length and its restriction to $C$ stays the same, as depicted in \figref{uckmpckmp}. Applying this construction to all input boxes, we get a \textsc{Unitcube-KMP}\ instance that, inside $C$, looks the same as the input \textsc{Cube-KMP}\ instance. 
Computing the volume of this new instance once with and once without $C$ allows us to infer $v$. \begin{figure*} \centering \caption{Growing each cube to the side length of the smallest cube $C$ while keeping its restriction to $C$ unchanged.} \label{fig:uckmpckmp} \end{figure*} \section{2-Grounded $\geqslant$ \textsc{KMP}} \label{sec:thereductionOne} We first show the reduction of \thmref{thereduction} for \GRND{2}, i.e., we show $\ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2}(n,d+1))$ by giving a reduction from \textsc{KMP}\ to \GRND{2}. This already implies \corref{grnd2} and lays the foundations for the complete reduction given in the next section. We begin by showing the reduction for $d=1$. As a second step we show how to generalize this to larger dimensions. \subsection{Dimension $d=1$} \label{sec:grndTwoDOne} We want to give a reduction from \textsc{KMP}\ in 1 dimension to \GRND{2} in 2 dimensions. Note that the latter is the same as \textsc{Hypervolume}\ in 2 dimensions. Let $M$ be an instance of \textsc{KMP}\ in 1 dimension, i.e., a set of $n$ intervals in $\mathbb{R}$. We will reduce the computation of $\textsc{vol}(M)$ to two instances of \GRND{2}. Denote by $x_1 < \ldots < x_{m}$ the endpoints of all intervals in $M$ (if all endpoints are distinct then $m=2n$). We can assume that $x_1 = 0$ after translation. Consider the boxes \begin{align*} A_i := [m-i-1,m-i] \times [x_i,x_{i+1}] \end{align*} in $\mathbb{R}^2$ for $1 \leqslant i \leqslant m-1$, as depicted in \figref{grndkmp}. Denote the union of these boxes by~$A$. Note that the volume of box $A_i$ is the same as the length of the interval $[x_i,x_{i+1}]$. This means that we took the chain of intervals $\{[x_i,x_{i+1}]\}$ and made it into a staircase of boxes $\{A_i\}$, where each box has the same volume as the corresponding interval. \begin{figure*} \centering \caption{The staircase boxes $A_i$ and a box $C_I$ in the reduction from \textsc{KMP}\ in 1 dimension to \GRND{2} in 2 dimensions.} \label{fig:grndkmp} \end{figure*} Now consider an interval $I = [x_j,x_k] \in M$. We construct the box \begin{align*} C_I := [0,m-j] \times [0,x_k], \end{align*} also shown in \figref{grndkmp}. 
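The construction so far can be checked numerically. The following Python sketch (0-indexed, with our own naming) builds the boxes $A_i$ and $C_I$ for a small set of intervals and verifies that each $A_i$ has width 1, hence the volume of its elementary interval, and that $A_i$ lies in some $C_I$ exactly when $[x_i,x_{i+1}]$ lies in some input interval:

```python
def staircase(intervals):
    # Shift so that the smallest endpoint is x_1 = 0.
    shift = min(a for a, b in intervals)
    intervals = [(a - shift, b - shift) for a, b in intervals]
    xs = sorted({c for I in intervals for c in I})   # x_1 < ... < x_m
    m = len(xs)
    # Paper, 1-indexed: A_i = [m-i-1, m-i] x [x_i, x_{i+1}]; here i = 0,...,m-2.
    A = [((m - i - 2, m - i - 1), (xs[i], xs[i + 1])) for i in range(m - 1)]
    # C_I = [0, m-j] x [0, x_k] for an interval I = [x_j, x_k] (j 1-indexed).
    pos = {x: i for i, x in enumerate(xs)}
    C = [((0, m - 1 - pos[a]), (0, b)) for a, b in intervals]
    return A, C, intervals

def contains(outer, inner):
    return all(o[0] <= i[0] and i[1] <= o[1] for o, i in zip(outer, inner))

intervals = [(2, 5), (4, 9), (11, 12)]
A, C, shifted = staircase(intervals)
# Each A_i has width 1, so vol(A_i) = x_{i+1} - x_i.
assert all((b1 - a1) == 1 for ((a1, b1), _) in A)
# A_i lies in some C_I iff the elementary interval lies in some input interval.
covered = [any(contains(c, a) for c in C) for a in A]
in_union = [any(lo >= p and hi <= q for p, q in shifted) for (_, (lo, hi)) in A]
assert covered == in_union
# Hence vol_A(C_M) = sum of the covered staircase volumes = vol(M).
vol_M = sum(hi - lo for ((_, (lo, hi)), t) in zip(A, covered) if t)
```

For the hypothetical instance $\{[2,5],[4,9],[11,12]\}$ this yields $\textsc{vol}(M) = 8$, the length of $[2,9] \cup [11,12]$.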
Then $C_I$ contains the boxes $A_i$ with $j \leqslant i < k$ and (its interior) has no common intersection with any other box $A_i$. This is easily seen as $A_i \subseteq C_I$ iff $m-i \leqslant m-j$ and $x_{i+1} \leqslant x_k$. Hence, for any interval $I \in M$ we constructed a box $C_I$ that contains exactly those boxes $A_i$ whose corresponding interval $[x_i,x_{i+1}]$ is contained in $I$, or in other words \begin{align*} [x_i,x_{i+1}] \subseteq I &\Leftrightarrow A_i \subseteq C_I, \\ \textsc{vol}([x_i,x_{i+1}] \cap I) &= \textsc{vol}(A_i \cap C_I). \end{align*} From these properties it follows that the volume of $C_I$ restricted to $A$ is the same as the length of~$I$, i.e., \begin{align*} \textsc{vol}_A(C_I) = \textsc{vol}(I). \end{align*} Furthermore, considering the whole set $M$ of intervals, the interval $[x_i,x_{i+1}]$ is contained in some interval in~$M$ iff the box $A_i$ is contained in some box in $C_M := \{C_I \mid I \in M \}$. This yields \begin{align*} \textsc{vol}(M) = \textsc{vol}_A(C_M). \end{align*} It remains to reduce the computation of $\textsc{vol}_A(C_M)$ to two \GRND{2} instances. For this we consider \begin{align*} T_0 := \bigcup_{1 \leqslant i \leqslant m} C_{[x_i,x_i]}. \end{align*} Informally speaking, $T_0$ consists of all points ``below'' $A$, as depicted in \figref{grndkmp}. Note that no set $A_j$ is contained in $T_0$. Moreover, we consider the set $T_1 := T_0 \cup A$. Observe that we can write \begin{align*} T_1 = \bigcup_{1 \leqslant i \leqslant m-1} C_{[x_i,x_{i+1}]}, \end{align*} since $A_i \subseteq C_{[x_i,x_{i+1}]}$. Note that both sets $T_0$ and $T_1$ are unions of $\mathcal{O}(n)$ \GRND{2} boxes. Informally, $T_0$ is the maximum \GRND{2} instance that has $\textsc{vol}_A(T_0) = 0$, and $T_1$ is the minimum \GRND{2} instance with $\textsc{vol}_A(T_1) = \textsc{vol}(A)$. Now, we can compute $\textsc{vol}_A(C_M)$ as follows. 
\begin{lemma} In the above situation we have \begin{align*} \textsc{vol}_A(C_M) = \textsc{vol}(A) + \textsc{vol}(T_0 \cup \mathcal{U}(C_M)) - \textsc{vol}(T_1 \cup \mathcal{U}(C_M)). \end{align*} \end{lemma} \begin{proof} Set $U := \mathcal{U}(C_M)$. Using $T_0 \subseteq T_1$ and $A = T_1 \setminus T_0$ in a sequence of simple transformations, we get \begin{align*} \textsc{vol}(T_1 \cup U) - \textsc{vol}(T_0 \cup U) &= \textsc{vol}((T_1 \cup U) \setminus (T_0 \cup U)) \\ &= \textsc{vol}((T_1 \setminus T_0) \setminus U) \\ &= \textsc{vol}(A \setminus U) \\ &= \textsc{vol}(A) - \textsc{vol}(A \cap U) \\ &= \textsc{vol}(A) - \textsc{vol}_A(C_M), \end{align*} which proves the claim. \qed \end{proof} Note that $\textsc{vol}(A) = \sum_i \textsc{vol}(A_i) = \sum_i |x_{i+1}-x_i| = |x_m - x_1|$ is trivial. Also note that both sets $T_0$ and $T_1$ are the union of $\mathcal{O}(n)$ \GRND{2} boxes, so that $\textsc{vol}(T_b \cup \mathcal{U}(C_M))$ can be seen as a \GRND{2} instance of size $\mathcal{O}(n)$, for both $b \in \{0,1\}$. Hence, we reduced the computation of the input instance's volume $\textsc{vol}(M)$ to $\textsc{vol}_A(C_M)$ and further to the \GRND{2} instances $\textsc{vol}(T_0 \cup \mathcal{U}(C_M))$ and $\textsc{vol}(T_1 \cup \mathcal{U}(C_M))$. As we have to sort the given intervals first, we get \begin{align*} \ensuremath{T_\KMP}(n,1) \leqslant \mathcal{O}(\TGRND{2}(\mathcal{O}(n),2) + n \log n). \end{align*} Note that this inequality alone gives no new information, as already Klee~\cite{Klee77} showed that $\ensuremath{T_\KMP}(n,1) \leqslant \mathcal{O}(n \log n)$. However, we get interesting results when we generalize this reduction to higher dimensions in the next section. \subsection{Larger Dimensions} In this section we show how the reduction from the last section carries over to larger dimensions, yielding a reduction from \textsc{KMP}\ in $d$ dimensions to \GRND{2} in $d+1$ dimensions. 
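The volume identity in the lemma of the previous subsection uses nothing about boxes, only $T_0 \subseteq T_1$, $A = T_1 \setminus T_0$, and additivity of the measure, so it can be sanity-checked with counting measure on a toy grid. The following Python sketch (names ours) does exactly that for random set families:

```python
import random
from itertools import product

def check_volume_identity(trials=200, seed=0):
    """Check vol_A(U) = vol(A) + vol(T0 | U) - vol(T1 | U) for T0 <= T1
    and A = T1 - T0, using counting measure on a 4x4 grid of cells."""
    rng = random.Random(seed)
    cells = list(product(range(4), repeat=2))
    for _ in range(trials):
        T0 = set(rng.sample(cells, 5))
        T1 = T0 | set(rng.sample(cells, 5))   # enforces T0 subset of T1
        A = T1 - T0                           # A = T1 \ T0
        U = set(rng.sample(cells, 6))         # stands in for U = union(C_M)
        if len(A & U) != len(A) + len(T0 | U) - len(T1 | U):
            return False
    return True
```

Such a check catches exactly the kind of sign error that is easy to make when rearranging $\textsc{vol}(T_1 \cup U) - \textsc{vol}(T_0 \cup U) = \textsc{vol}(A) - \textsc{vol}_A(U)$.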
This implies $\ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2}(n,d+1))$. Assume we are given a \textsc{KMP}\ instance $M$ in dimension $d$. The idea is that we use the dimension doubling reduction from the last section on the first dimension and leave all other dimensions untouched. More precisely, for a box $B \in M$ let $\pi_1(B)$ be its projection onto the first dimension and let $\pi_*(B)$ be its projection onto the last $d-1$ dimensions, so that $B = \pi_1(B) \times \pi_*(B)$. Now follow the reduction from the last section on the instance $M' := \{\pi_1(B) \mid B \in M\}$. This yields sets $A$, $T_0$, $T_1$, and a box $C_I$ for each $I \in M'$. We set $C_B := C_{\pi_1(B)} \times \pi_*(B)$ and $C_M = \{C_B \mid B \in M\}$. A possible way of generalizing $A$ would be to set $A'' := A \times \mathbb{R}^{d-1}$. Then we would be interested in $\textsc{vol}_{A''}(C_M)$, which can be seen to be exactly $\textsc{vol}(M)$. This definition of $A''$ is, however, not simple enough, as it is not a difference of \GRND{2} instances (unlike $A = T_1 \setminus T_0$). To give a different definition, assume (after translation) that all coordinates of the input instance are non-negative and let $\Delta$ be the maximal coordinate in any dimension. We set $A' := A \times [0,\Delta]^{d-1}$ and still get the same volume $\textsc{vol}_{A'}(C_M) = \textsc{vol}(M)$. This allows to generalize $T_0$ and $T_1$ to $T_0' := T_0 \times [0,\Delta]^{d-1}$ and $T_1' := T_1 \times [0,\Delta]^{d-1}$, while still having \begin{align*} \textsc{vol}_{A'}(C_M) = \textsc{vol}(A') + \textsc{vol}(T_0' \cup \mathcal{U}(C_M)) - \textsc{vol}(T_1' \cup \mathcal{U}(C_M)). \end{align*} Note that $T_0'$ and $T_1'$ are also a union of $\mathcal{O}(n)$ \GRND{2} boxes, so a volume such as $\textsc{vol}(T_0' \cup \mathcal{U}(C_M))$ can be seen as a \GRND{2} instance. 
This completes the reduction and yields the time bound \begin{align*} \ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2}(\mathcal{O}(n),d+1) + n \log n). \end{align*} Using the lower bound $\Omega(n \log n)$ of \corref{nlogn} we can hide the additional $n \log n$ in the first summand. Moreover, first using the technical \lemref{technical} we can finally simplify this to the statement of \corref{grnd2}, \begin{align*} \ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2}(n,d+1)). \end{align*} \section{2k-Grounded $\geqslant$ \textsc{KMP}} \label{sec:thereductionTwo} In this section we prove \thmref{thereduction} in its full generality, building on the reductions from the last section. Recall that we want to give a reduction from \textsc{KMP}\ in dimension $d$ to \GRND{2k} in dimension $d+k$. Let $M$ be a \textsc{KMP}\ instance. After translation we can assume that in every dimension the minimal coordinate among all boxes in $M$ is 0. Denoting the largest coordinate of any box by $\Delta$ we thus have $B \subseteq [0,\Delta]^d = \Omega^d$ for all $B \in M$, where $\Omega := [0,\Delta]$. The first steps of generalizing the reduction from the last section to the case $k > 1$ are straightforward. We want to use the dimension doubling reduction from \secref{grndTwoDOne} on each one of the first $k$ dimensions. For any box $B \subseteq \mathbb{R}^d$ denote its projection onto the $i$-th dimension by $\pi_i(B)$, $1 \leqslant i \leqslant k$, and its projection onto dimensions $k+1,\ldots,d$ by $\pi_*(B)$. We use the reduction from \secref{grndTwoDOne} on each dimension $1 \leqslant i \leqslant k$, i.e., on each instance $M^{(i)} := \{\pi_i(B) \mid B \in M\}$, yielding sets $A^{(i)}, T_0^{(i)}, T_1^{(i)}$, and a box $C_I^{(i)}$ for each $I \in M^{(i)}$. We assume that all these sets are contained in $[0,\Delta]^2 = \Omega^2$, meaning that all coordinates are upper bounded by $\Delta$ (this holds after possibly increasing the $\Delta$ we chose before). 
For a box $B \in M$ we now define \begin{align*} C_B := C^{(1)}_{\pi_1(B)} \times \ldots \times C^{(k)}_{\pi_k(B)} \times \pi_*(B). \end{align*} This is a box in $\mathbb{R}^{d+k}$; it is even a \GRND{2k} box, as its projection onto the first $2k$ coordinates has the vertex $(0,\ldots,0)$. Set $C_M := \{C_B \mid B \in M\}$. As shown by the following lemma, we want to determine the volume $\textsc{vol}_{A}(C_M)$ in \begin{align*} A := A^{(1)}\times \ldots \times A^{(k)} \times [0,\Delta]^{d-k}. \end{align*} \begin{lemma} \label{lem:VolACM} In the above situation we have \begin{align*} \textsc{vol}(M) = \textsc{vol}_A(C_M). \end{align*} \end{lemma} \begin{proof} Denote by $x_1^{(i)} < \ldots < x_{m_i}^{(i)}$ the coordinates of all boxes in $M$ in the $i$-th dimension. We can express $\textsc{vol}(M)$ in terms of the boxes \begin{align*} E_{j_1,\ldots,j_d} := [x_{j_1}^{(1)}, x_{j_1+1}^{(1)}] \times \ldots \times [x_{j_d}^{(d)}, x_{j_d+1}^{(d)}], \end{align*} for $1 \leqslant j_i < m_i$. Since each such box is either completely included in some box in $M$ or does not contribute to $\textsc{vol}(M)$, we have \begin{align*} \textsc{vol}(M) = \sum_{j_1,\ldots,j_d} [ E_{j_1,\ldots,j_d} \subseteq \mathcal{U}(M) ] \cdot \textsc{vol}(E_{j_1,\ldots,j_d}), \end{align*} where $[X]$ is 1 if $X$ is true, and 0 otherwise. Recall from the reduction in \secref{grndTwoDOne} that $A^{(i)} = A_1^{(i)} \cup \ldots \cup A^{(i)}_{m_i - 1}$, and there is a one-to-one correspondence between intervals $[x_j^{(i)},x_{j+1}^{(i)}]$ and 2-dimensional boxes $A_j^{(i)}$, in particular both have the same volume. This carries over to a one-to-one correspondence between $d$-dimensional boxes $E_{j_1,\ldots,j_d}$ and $(d+k)$-dimensional boxes \begin{align*} E'_{j_1,\ldots,j_d} := A_{j_1}^{(1)} \times \ldots \times A_{j_k}^{(k)} \times [x_{j_{k+1}}^{(k+1)}, x_{j_{k+1}+1}^{(k+1)}] \times \ldots \times [x_{j_d}^{(d)}, x_{j_d+1}^{(d)}], \end{align*} in particular both have the same volume. 
Additionally, recall that an interval $I \in M^{(i)}$ includes $[x_j^{(i)},x_{j+1}^{(i)}]$ if and only if the 2-dimensional box $C_I^{(i)}$ contains $A_j^{(i)}$. If $I$ does not include $[x_j^{(i)},x_{j+1}^{(i)}]$, then $\textsc{vol}(I \cap [x_j^{(i)},x_{j+1}^{(i)}]) = 0$, and we also have $\textsc{vol}(C_I^{(i)} \cap A_j^{(i)}) = 0$. Hence, we have for any $B \in M$ that $E_{j_1,\ldots,j_d} \subseteq B$ if and only if $E'_{j_1,\ldots,j_d} \subseteq C_B$, which implies that $E_{j_1,\ldots,j_d} \subseteq \mathcal{U}(M)$ if and only if $E'_{j_1,\ldots,j_d} \subseteq \mathcal{U}(C_M)$. Furthermore, $E'_{j_1,\ldots,j_d}$ is either included in $\mathcal{U}(C_M)$ or does not contribute to $\textsc{vol}(C_M)$. In total, we get \begin{align*} \textsc{vol}(M) &= \sum_{j_1,\ldots,j_d} [ E_{j_1,\ldots,j_d} \subseteq \mathcal{U}(M) ] \cdot \textsc{vol}(E_{j_1,\ldots,j_d}) \\ &= \sum_{j_1,\ldots,j_d} [ E'_{j_1,\ldots,j_d} \subseteq \mathcal{U}(C_M) ] \cdot \textsc{vol}(E'_{j_1,\ldots,j_d}) = \textsc{vol}_A(C_M), \end{align*} which completes the proof. \qed \end{proof} Unfortunately, the set $A$ is not simply a difference of two \GRND{2k} instances. Thus, the hard part is to reduce the computation of $\textsc{vol}_{A}(C_M)$ to \GRND{2k} instances, which we will do in the remainder of this section. For $1 \leqslant i \leqslant k$ and $b \in \{0,1\}$ we set \begin{align*} \tilde{T}_b^{(i)} := \Omega^{2(i-1)} \times T_b^{(i)} \times \Omega^{d+k-2i}. \end{align*} This set in $\Omega^{d+k}$ consists of all points $x$ whose projection to dimensions $2i-1$ and $2i$ is contained in $T_b^{(i)}$. Note that each set $\tilde{T}_b^{(i)}$ can be written as the union of $\mathcal{O}(n)$ \GRND{2k} boxes, since $T_b^{(i)}$ is the union of $\ell = \mathcal{O}(n)$ \GRND{2} boxes in $\mathbb{R}^2$, i.e., $T_b^{(i)} = \bigcup_{j=1}^{\ell} C_j$, so that we may write $\tilde{T}_b^{(i)} = \bigcup_{j=1}^{\ell} \Omega^{2(i-1)} \times C_j \times \Omega^{d+k-2i}$. 
Thus, we can use an algorithm for \GRND{2k} to compute any volume of the form $\textsc{vol}(\tilde{T}_b^{(i)} \cup V)$, where~$V$ is a union of $\mathcal{O}(n)$ \GRND{2k} boxes. Furthermore, define for $S \subseteq [k]$ \begin{align*} D_S := \bigg( \bigcup_{i \in S} \tilde{T}_1^{(i)} \bigg) \cup \bigcup_{i \in [k] \setminus S} \tilde{T}_0^{(i)}. \end{align*} Note that $D_S \subseteq D_{S'}$ holds for $S \subseteq S'$. We can express $A$ using the sets $D_S$ as shown by the following lemma. \begin{lemma} \label{lem:AcapDS} In the above situation we have \begin{align*} A = \bigcap_{1 \leqslant i \leqslant k} D_{\{i\}} \setminus D_\emptyset. \end{align*} \end{lemma} \begin{proof} We have $D_\emptyset = \bigcup_{i \in [k]} \tilde{T}_0^{(i)}$ and $\tilde{T}_0^{(i)} \subseteq \tilde{T}_1^{(i)}$ for all $i$, implying \begin{align*} D_{\{i\}} = \tilde{T}_1^{(i)} \cup D_\emptyset. \end{align*} This yields \begin{align*} \bigcap_{1 \leqslant i \leqslant k} D_{\{i\}} \setminus D_\emptyset &= \bigg( \bigcap_{1 \leqslant i \leqslant k} \tilde{T}_1^{(i)} \bigg) \setminus \bigcup_{i \in [k]} \tilde{T}_0^{(i)}. \end{align*} A point $x = (x_1,\ldots,x_{d+k}) \in \mathbb{R}^{d+k}$ is in the set on the right hand side if and only if it has the following three properties: \begin{itemize} \item $x_i \in [0,\Delta]$ for all $1\leqslant i \leqslant d+k$, \item for each $1 \leqslant i \leqslant k$, the point $(x_{2i-1},x_{2i})$ lies in $T_1^{(i)}$, \item for each $1 \leqslant i \leqslant k$, the point $(x_{2i-1},x_{2i})$ does not lie in $T_0^{(i)}$. \end{itemize} Since $A^{(i)} = T_1^{(i)} \setminus T_0^{(i)}$, this description captures exactly $A^{(1)}\times \ldots \times A^{(k)} \times [0,\Delta]^{d-k} = A$, finishing the proof. \qed \end{proof} Moreover, each $D_S$ can be written as the union of $\mathcal{O}(n)$ \GRND{2k} boxes, since the same was true for the sets $\tilde{T}_b^{(i)}$. 
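Both \lemref{AcapDS} and the inclusion-exclusion computation that follows rely only on $T_0^{(i)} \subseteq T_1^{(i)}$ and elementary set identities, so they can be sanity-checked with counting measure on a toy universe. In the following Python sketch (all names ours), \texttt{D(S)} plays the role of $D_S$ and set cardinality plays the role of volume:

```python
import random
from itertools import chain, combinations, product

def check_inclusion_exclusion(k=3, trials=100, seed=1):
    rng = random.Random(seed)
    cells = list(product(range(4), repeat=2))        # toy universe of 16 cells
    power = list(chain.from_iterable(combinations(range(k), r)
                                     for r in range(k + 1)))
    for _ in range(trials):
        T0 = [set(rng.sample(cells, 4)) for _ in range(k)]
        T1 = [t | set(rng.sample(cells, 4)) for t in T0]   # T0^(i) <= T1^(i)
        U = set(rng.sample(cells, 6))

        def D(S):  # D_S: take T1^(i) for i in S and T0^(i) otherwise
            return set().union(*(T1[i] if i in S else T0[i] for i in range(k)))

        # A = intersection of the D_{i} minus D_empty (= inter T1 minus union T0):
        A = set.intersection(*(D((i,)) for i in range(k))) - D(())
        if A != set.intersection(*T1) - set().union(*T0):
            return False
        # vol_A(U) = vol(A) + sum over S of (-1)^|S| * vol(D_S union U):
        rhs = len(A) + sum((-1) ** len(S) * len(D(S) | U) for S in power)
        if len(A & U) != rhs:
            return False
    return True
```

For $k=1$ this specializes to the two-term identity used in \secref{grndTwoDOne}.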
Hence, we can use an algorithm for \GRND{2k} to compute the volume \begin{align*} H_S := \textsc{vol}(D_S \cup \mathcal{U}(C_M)). \end{align*} Next we show that we can compute $\textsc{vol}_A(C_M)$ from the $H_S$ by an interesting usage of the inclusion-exclusion principle. \begin{lemma} \label{lem:last} In the above situation we have \begin{align*} \textsc{vol}_A(C_M) = \textsc{vol}(A) + \sum_{S \subseteq [k]} (-1)^{|S|} H_S. \end{align*} \end{lemma} \begin{proof} In this proof we write for short $U := \mathcal{U}(C_M)$. We clearly have \begin{align} \label{eq:firststep} \textsc{vol}(A) - \textsc{vol}_A(U) = \textsc{vol}(A \setminus U). \end{align} We first show \begin{align} \label{eq:toshow} \textsc{vol}(A \setminus U) = \sum_{\emptyset \ne S \subseteq [k]} (-1)^{|S|+1} (H_S - H_\emptyset), \end{align} and simplify the right hand side later. Using $A = \bigcap_{1 \leqslant i \leqslant k} D_{\{i\}} \setminus D_\emptyset$ (\lemref{AcapDS}) and the inclusion-exclusion principle we arrive at \begin{align*} \textsc{vol}(A \setminus U) &= \textsc{vol}\Big( \bigcap_{1 \leqslant i \leqslant k} D_{\{i\}} \setminus (D_\emptyset \cup U) \Big) \\ &= \sum_{\emptyset \ne S \subseteq [k]} (-1)^{|S|+1} \textsc{vol}\bigg( \bigcup_{i \in S} D_{\{i\}} \setminus (D_\emptyset \cup U) \bigg). \end{align*} Note that $D_S = \bigcup_{i \in S} D_{\{i\}}$, so that the above equation simplifies to \begin{align} \label{eq:undnocheine} \textsc{vol}(A \setminus U) &= \sum_{\emptyset \ne S \subseteq [k]} (-1)^{|S|+1} \textsc{vol}\bigg( D_S \setminus (D_\emptyset \cup U) \bigg). \end{align} Using the definition of $H_S$ and $D_\emptyset \subseteq D_S$ for any $S \subseteq [k]$, we get \begin{align*} H_S - H_\emptyset &= \textsc{vol}(D_S \cup U) - \textsc{vol}(D_\emptyset \cup U) \\ &= \textsc{vol}((D_S \cup U) \setminus (D_\emptyset \cup U)) \\ &= \textsc{vol}( D_S \setminus (D_\emptyset \cup U) ). \end{align*} Plugging this into (\ref{eq:undnocheine}) yields (\ref{eq:toshow}). 
Observe that \begin{align*} \sum_{\emptyset \ne S \subseteq [k]} (-1)^{|S|+1} (- H_\emptyset) &= - H_\emptyset. \end{align*} This allows us to further simplify (\ref{eq:toshow}) to \begin{align*} \textsc{vol}(A \setminus U) = \sum_{S \subseteq [k]} (-1)^{|S|+1} H_S. \end{align*} Plugging this into (\ref{eq:firststep}) yields the desired equation. \qed \end{proof} As $\textsc{vol}(A) = \Delta^{d-k} \cdot \prod_{1 \leqslant i \leqslant k} \textsc{vol}(A^{(i)})$ is trivial, we have reduced the computation of $\textsc{vol}_A(C_M)$ to $2^k = \mathcal{O}(1)$ instances of \GRND{2k}, each consisting of $\mathcal{O}(n)$ boxes. During the construction of these instances we need to sort the coordinates, so that we need additional time $\mathcal{O}(n \log n)$. This yields \begin{align*} \ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2k}(\mathcal{O}(n),d+k) + n \log n). \end{align*} Because of the lower bound from \corref{nlogn}, we have $\TGRND{2k}(\mathcal{O}(n),d+k) = \Omega(n \log n)$, so we can hide the second summand in the first, \begin{align*} \ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2k}(\mathcal{O}(n),d+k)). \end{align*} We may use the technical \lemref{technical} to get rid of the inner $\mathcal{O}$: This lemma guarantees an algorithm with runtime $\TGRND{2k}'(n,d)$ such that we get \begin{align*} \ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2k}'(\mathcal{O}(n),d+k)) \leqslant \mathcal{O}(\TGRND{2k}(n,d+k)). \end{align*} This finishes the proof. \section{Proof of \lemref{technical}} \label{sec:technical} In this section we prove the technical \lemref{technical}. \begingroup \def\thelemma{\ref{lem:technical}} \begin{lemma} Fix $0 \leqslant k \leqslant d$ and $c > 1$. If there is an algorithm for \GRND{k} with runtime $\TGRND{k}(n,d)$ then there is another algorithm for \GRND{k} with runtime $\TGRND{k}'(n,d)$ satisfying \begin{align*} \TGRND{k}'(c n,d) \leqslant \mathcal{O}(\TGRND{k}(n,d)). 
\end{align*} \end{lemma} \endgroup \begin{proof} Let $M$ be an instance of \GRND{k} of size $|M| = n$. Denote by $z_1\leqslant \ldots \leqslant z_{2n}$ the coordinates in the first dimension of all boxes in $M$. Let $a_1 := z_{(1-\alpha)n}$ and $b_1 := z_{(1+\alpha)n}$, where $\alpha := 1/(3d)$. Denote by $\{x_1 \leqslant a_1\}$ the set of all points $x=(x_1,\ldots,x_d) \in \ensuremath{\mathbb{R}^d}$ with $x_1 \leqslant a_1$, similarly for $\{a_1 \leqslant x_1 \leqslant b_1\}$ and $\{x_1 \geqslant b_1\}$. We consider the three \textsc{KMP}\ instances \begin{align*} M_a &:= \{ B \cap \{x_1 \leqslant a_1\} \mid B \in M\}, \\ M_b &:= \{ B \cap \{x_1 \geqslant b_1\} \mid B \in M\}, \\ M' &:= \{ B \cap \{a_1 \leqslant x_1 \leqslant b_1\} \mid B \in M\}. \end{align*} Note that all three instances can be seen as \GRND{k} instances: Projected onto the first $k$ dimensions, all boxes in $M_a$ share the vertex $(0,\ldots,0)$, all boxes in $M_b$ share the vertex $(b_1,0,\ldots,0)$, and all boxes in $M'$ share the vertex $(a_1,0,\ldots,0)$, so after translation they all share the vertex $(0,\ldots,0)$. Moreover, each of $\{x_1 < a_1\}$ and $\{x_1 > b_1\}$ contains at most $(1-\alpha)n$ coordinates of boxes in $M$ in the first dimension. Hence, there are at most $(1-\alpha)n$ boxes intersecting $\{x_1 < a_1\}$, so after deleting boxes with volume 0 we get $|M_a| \leqslant (1-\alpha)n$, similarly for $M_b$. This reasoning does not work for $M'$: it might even be that all $n$ boxes are present in $M'$. If a box has left coordinate smaller than $a_1$ and right coordinate larger than $b_1$, then none of its coordinates is seen in $\{a_1 \leqslant x_1 \leqslant b_1\}$, although it has non-empty intersection with $\{a_1 \leqslant x_1 \leqslant b_1\}$. However, such a box in $M'$ is trivial in the first dimension: its coordinates in the first dimension are simply $[a_1,b_1]$. If all boxes in $M'$ were trivial in the first dimension, then $M'$ would clearly be simpler than the input instance. 
Although this is not the case, we can bound the number of boxes in $M'$ that are non-trivial in the first dimension: Since there are at most $2 \alpha n$ coordinates~$z_i$ in $[a_1,b_1]$, all but at most $2 \alpha n$ boxes in $M'$ are trivial in the first dimension. Thus, $M'$ is also easier than the input instance $M$, in a certain sense. Note that $\textsc{vol}(M) = \textsc{vol}(M_a) + \textsc{vol}(M_b) + \textsc{vol}(M')$ and $M_a,M_b$ are strictly easier than $M$, as they contain at most $(1-\alpha)n$ boxes. We have to simplify $M'$ further. For this, we use the same construction as above (on $M$ and dimension 1) on $M'$ and dimension 2, i.e., we split by coordinates in dimension 2 at $a_2$ and $b_2$. This yields three \GRND{k} instances. Two of them contain at most $(1-\alpha)n$ boxes. The third one, $M''$, may contain up to $n$ boxes. However, all but at most $4 \alpha n$ of these boxes are trivial in the first and second dimension, meaning that their projection onto the first 2 dimensions is $[a_1,b_1]\times [a_2,b_2]$. Iterating this reduction $d$ times yields $2d$ instances of \GRND{k} containing at most $(1-\alpha)n$ boxes and one instance $M^*$ that may contain up to $n$ boxes. However, all but at most $2d \, \alpha n$ of these boxes are trivial in all $d$ dimensions, meaning that they are equal to $[a_1,b_1] \times \ldots \times [a_d,b_d]$. Since all boxes in $M^*$ are contained in $[a_1,b_1] \times \ldots \times [a_d,b_d]$, if any such trivial box exists, then it covers the union of all boxes in $M^*$ and the volume of $M^*$ is trivial to compute. Otherwise $M^*$ contains at most $2d \, \alpha n = \frac23 n \leqslant (1-1/(3d))n = (1-\alpha)n$ boxes. Thus, we have reduced the computation of $\textsc{vol}(M)$ to $2d+1$ instances of \GRND{k} with at most $(1-\alpha)n$ boxes each. The reduction itself can be made to run in $\mathcal{O}(n)$ time.
Hence, if we solve the reduced problems by an algorithm with runtime $\TGRND{k}(n,d)$, then we get an algorithm with runtime $\TGRND{k}'(n,d)$ satisfying \begin{align*} \TGRND{k}'(n,d) \leqslant (2d+1) \TGRND{k}((1-\alpha)n,d) + \mathcal{O}(n). \end{align*} As every algorithm for \GRND{k} has to at least read its whole input, we can hide the $\mathcal{O}(n)$ term in the first summand, \begin{align*} \TGRND{k}'(n,d) \leqslant \mathcal{O}( \TGRND{k}((1-\alpha)n,d) ), \end{align*} or, \begin{align*} \TGRND{k}'(n/(1-\alpha),d) \leqslant \mathcal{O}( \TGRND{k}(n,d) ). \end{align*} Repeating this construction an appropriate number of times, we can increase the constant $1/(1-\alpha)$ to any constant $c > 1$, while the factor on the right-hand side is still bounded by a constant. This finally yields an algorithm with runtime $\TGRND{k}''(n,d)$ satisfying \begin{align*} \TGRND{k}''(cn,d) \leqslant \mathcal{O}(\TGRND{k}(n,d)). \end{align*} \qed \end{proof} \section{Conclusion} \label{sec:conclusions} We presented reductions between the special cases \textsc{Cube-KMP}, \textsc{Unitcube-KMP}, \textsc{Hypervolume}, and \GRND{k} of Klee's measure problem. These reductions imply statements about the runtime needed for these problem variants. We established \textsc{Hypervolume}\ as the easiest among all studied special cases, and showed that the variants \textsc{Cube-KMP}\ and \textsc{Unitcube-KMP}\ have polynomially related runtimes. Moreover, we presented a reduction from the general case of \textsc{KMP}\ to \GRND{2k}. This allows us to transfer \ensuremath{\textup{W}[1]}-hardness from \textsc{KMP}\ to all special cases, proving that no $n^{o(d)}$ algorithm exists for any of the special cases assuming the Exponential Time Hypothesis. Moreover, assuming that no improved algorithm exists for \textsc{KMP}, we get a tight lower bound for a recent algorithm for \GRND{2}, and a lower bound of roughly $n^{(d-1)/4}$ for all other special cases.
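The boosting step at the end of the proof can be made concrete (a sketch under the paper's choice $\alpha = 1/(3d)$; the function names are ours): repeating the reduction $t$ times allows input size $n/(1-\alpha)^t$, so reaching a given constant $c > 1$ needs $t = \lceil \ln c / \ln\frac{1}{1-\alpha} \rceil$ rounds, at a constant-factor cost of $(2d+1)^t$ for fixed $c$ and $d$.

```python
import math

# Hedged sketch: number of repetitions of the reduction needed to boost the
# per-round size gain 1/(1-alpha), alpha = 1/(3d), up to a target constant c,
# and the resulting constant-factor overhead (2d+1)^t for fixed c and d.
def rounds_needed(c, d):
    alpha = 1 / (3 * d)
    return math.ceil(math.log(c) / math.log(1 / (1 - alpha)))

def overhead(c, d):
    return (2 * d + 1) ** rounds_needed(c, d)
```

For fixed $c$ and $d$ both quantities are constants, which is all the argument requires; note that the number of rounds, and hence the hidden constant, grows with $d$.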
Thus, we established some order among the special cases of Klee's measure problem. Our results lead to a number of open problems, asking for both new upper and new lower bounds: \begin{itemize} \item Is there a polynomial relation between $\textsc{Hypervolume}$ and $\textsc{Unitcube-KMP}$, similar to the one between $\textsc{Cube-KMP}$ and $\textsc{Unitcube-KMP}$, or do both problems have significantly different runtimes? \item Show that no improved algorithm exists for \textsc{KMP}, e.g., assuming the Strong Exponential Time Hypothesis, as has been done for the Dominating Set problem, see~\cite{fptsurvey}. Or give an improved algorithm. \item Assuming that no improved algorithm for \textsc{KMP}\ exists, we know that the optimal runtimes of \textsc{Hypervolume}\ and \textsc{Cube-KMP}/\textsc{Unitcube-KMP}\ are of the form $n^{c_d\cdot d \pm \mathcal{O}(1)}$, with $c_d \in [1/4,1/3]$. Determine the correct value of~$c_d$. \item Generalize the $\mathcal{O}(n^{(d-1)/2} \log^2 n)$ algorithm for \GRND{2}~\cite{YildizS12} to an $\mathcal{O}(n^{(d-k)/2 + o(1)})$ algorithm for \GRND{2k}. This would again be optimal by \thmref{thereduction}. \item We showed the relation $\ensuremath{T_\KMP}(n,d) \leqslant \mathcal{O}(\TGRND{2k}(n,d+k))$. Show an inequality in the opposite direction, i.e., a statement of the form $\TGRND{k}(n,d) \leqslant \mathcal{O}(\ensuremath{T_\KMP}(n,d'))$ with $d' < d$. \end{itemize} \end{document}
\begin{document} \def \Z{\mathbb Z} \def \C{\mathbb C} \def \R{\mathbb R} \def \Q{\mathbb Q} \def \N{\mathbb N} \def \A{{\mathcal{A}}} \def \D{{\mathcal{D}}} \def \E{{\mathcal{E}}} \def \H{\mathcal{H}} \def \S{{\mathcal{S}}} \def \wt{{\rm wt}} \def \tr{{\rm tr}} \def \span{{\rm span}} \def \Res{{\rm Res}} \def \Der{{\rm Der}} \def \End{{\rm End}} \def \Ind {{\rm Ind}} \def \Irr {{\rm Irr}} \def \Aut{{\rm Aut}} \def \GL{{\rm GL}} \def \Hom{{\rm Hom}} \def \mod{{\rm mod}} \def \ann{{\rm Ann}} \def \ad{{\rm ad}} \def \rank{{\rm rank}\;} \def \<{\langle} \def \>{\rangle} \def \g{{\frak{g}}} \def \h{{\hbar}} \def \k{{\frak{k}}} \def \sl{{\frak{sl}}} \def \gl{{\frak{gl}}} \def \be{\begin{equation}\label} \def \ee{\end{equation}} \def \bex{\begin{example}\label} \def \eex{\end{example}} \def \bl{\begin{lem}\label} \def \el{\end{lem}} \def \bt{\begin{thm}\label} \def \et{\end{thm}} \def \bp{\begin{prop}\label} \def \ep{\end{prop}} \def \br{\begin{rem}\label} \def \er{\end{rem}} \def \bc{\begin{coro}\label} \def \ec{\end{coro}} \def \bd{\begin{de}\label} \def \ed{\end{de}} \newcommand{\m}{{\bf m}} \newcommand{\n}{{\bf n}} \newcommand{\nno}{\nonumber} \newcommand{\nord}{\mbox{\scriptsize ${\circ\atop\circ}$}} \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{coro}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{example}[thm]{Example} \newtheorem{lem}[thm]{Lemma} \newtheorem{rem}[thm]{Remark} \newtheorem{de}[thm]{Definition} \newtheorem{hy}[thm]{Hypothesis} \makeatletter \@addtoreset{equation}{section} \def\theequation{\thesection.\arabic{equation}} \makeatother \begin{center} {\Large \bf Regular representations and $A_{n}(V)$-$A_{m}(V)$ bimodules} \end{center} \begin{center} {Haisheng Li\\ Department of Mathematical Sciences\\ Rutgers University, Camden, NJ 08102} \end{center}
\begin{abstract} The purpose of this paper is to establish a natural connection between regular representations for a vertex operator algebra $V$ and the $A_{n}(V)$-$A_{m}(V)$ bimodules of Dong and Jiang. Let $W$ be a weak $V$-module and let $(m,n)$ be a pair of nonnegative integers. We study two quotient spaces $A_{n,m}^{\dagger}(W)$ and $A^{\diamond}_{n,m}(W)$ of $W$. It is proved that the dual space $A^{\dagger}_{n,m}(W)^{*}$, viewed as a subspace of $W^*$, coincides with the level-$(m,n)$ vacuum subspace of the regular representation module $\mathfrak{D}_{(-1)}(W)$. By making use of this connection, we obtain an $A_{n}(V)$-$A_m(V)$ bimodule structure on both $A_{n,m}^{\dagger}(W)$ and $A^{\diamond}_{n,m}(W)$. Furthermore, we obtain an $\N$-graded weak $V$-module structure together with a commuting right $A_m(V)$-module structure on $A^{\diamond}_{\Box,m}(W):=\oplus_{n\in \N}A^{\diamond}_{n,m}(W)$. Consequently, we recover the corresponding results and partially confirm a conjecture of Dong and Jiang. \end{abstract} \section{Introduction} For a vertex operator algebra $V$, what was called the regular representation (module), associated to a weak $V$-module $W$ and a nonzero complex number $z$, is a weak $V\otimes V$-module $\mathfrak{D}_{(z)}(W)$ which was constructed canonically inside (the full dual space) $W^*$ (see \cite{Li-reg}). In case $W=V$ (the adjoint module), it was proved that the socle of the weak $V\otimes V$-module $\mathfrak{D}_{(z)}(V)$ admits a Peter-Weyl type decomposition. It turns out that the regular representation has deep connections with various theories in the field of vertex operator algebras. First of all, it has an intrinsic connection with Huang and Lepowsky's tensor product theory (see \cite{hl-1, hl-3}, \cite{huang-4}).
In this very theory, for any $V$-modules $W_1$, $W_2$ and for any nonzero complex number $z$, a $V$-module is constructed inside the full dual space $(W_1\otimes W_2)^{*}$ and its contragredient module is defined to be the tensor product of $W_1$ and $W_2$. A result of \cite{Li-reg} states that for $V$-modules $W_1, W_2$ and $W$, a linear map $F: W_1\otimes W_2\rightarrow W^{*}$ is a $P(z)$-intertwining map in the sense of Huang and Lepowsky if and only if $F(W_1\otimes W_2)\subset \mathfrak{D}_{(z)}(W)$ and $F$ is a $V\otimes V$-module homomorphism from $W_1\otimes W_2$ to $\mathfrak{D}_{(z)}(W)$. (It was proved in \cite{hl-2} that a $P(z)$-intertwining map amounts to an intertwining operator in the sense of \cite{fhl}.) The relationship between the regular representation and the Huang-Lepowsky tensor functor was explored further in \cite{Li-reg-hl}. The regular representation also has natural connections with Zhu's $A(V)$-theory (see \cite{zhu1, zhu2}) and its generalization $A_n(V)$-theory (see \cite{dlm-anv}). The essence of Zhu's $A(V)$-theory is that an associative algebra $A(V)$ is associated to each vertex operator algebra $V$ and a natural bijection is established between the set of equivalence classes of irreducible $\N$-graded, namely admissible, $V$-modules and the set of equivalence classes of irreducible $A(V)$-modules. Also in this theory, Frenkel-Zhu's fusion rule theorem gives a way to determine fusion rules by using $A(V)$-bimodules and $A(V)$-module homomorphisms (see \cite{fz}; cf. \cite{li-fusion}). Zhu's $A(V)$-theory was generalized in \cite{dlm-anv}, where a sequence of associative algebras $A_n(V)$ for $n\ge 0$ was introduced with $A(V)=A_0(V)$. This sequence of associative algebras is an important tool in the study of vertex operator algebra representations, and one of the results therein is that a vertex operator algebra $V$ is rational if and only if $A_n(V)$ for all $n\ge 0$ are (finite-dimensional) semisimple.
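For orientation, we recall (a standard formula from Zhu's theory \cite{zhu1}, included here for the reader's convenience rather than taken from the works under discussion) that the subspace $O(V)=O_0(V)$ defining $A(V)=V/O(V)$ is, in Zhu's original formulation, linearly spanned by the vectors
\begin{eqnarray*}
v\circ u=\Res_{z}\frac{(1+z)^{\wt v}}{z^{2}}Y(v,z)u
\end{eqnarray*}
for homogeneous $v\in V$ and $u\in V$; the case $n=0$ of the operations $\circ_{n}$ recalled in Section 2 reduces to exactly this.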
Connections of the regular representation with Zhu's $A(V)$-theory and $A_n(V)$-theory were studied in \cite{Li-reg-fusion, Li-Anv}, where new proofs for several known theorems were obtained. One of the main results states that for any $V$-module $W$, the dual of the Frenkel-Zhu $A(V)$-bimodule $A(W)$, as a subspace of $W^{*}$, coincides with the vacuum space of $\mathfrak{D}_{(-1)}(W)$. This gives a direct connection of $A(W)$ with (weak) $V$-modules and intertwining operators, which leads to more conceptual and (arguably) easier proofs. (See for example \cite{Li-reg-fusion} for a different proof of the Frenkel-Zhu's fusion rule theorem; cf. \cite{li-fusion}.) In this current work, our main concerns are $A_{n}(V)$-$A_{m}(V)$ bimodules which were introduced by Dong and Jiang. Let $V$ be a vertex operator algebra. For $m,n\in \N$, Dong and Jiang in \cite{dj1} defined a quotient space $A_{n,m}(V)$ of $V$ and proved that $A_{n,m}(V)$ has a natural (left-right) $A_{n}(V)$-$A_{m}(V)$ bimodule structure. Among the main results, it was proved that for any $A_m(V)$-module $U$ with $m$ fixed, $\bigoplus_{n\in \N}A_{n,m}(V)\otimes_{A_m(V)}U$ has a canonical $V$-module structure with $A_{m,m}(V)\otimes_{A_m(V)}U=U$, which satisfies a certain universal property. This theory generalizes the $A_n(V)$-theory in a natural way with new perspectives. The main goal of this paper is to establish a natural connection between regular representations and Dong-Jiang's $A_{n}(V)$-$A_{m}(V)$-bimodules for a vertex operator algebra $V$. For any weak $V$-module $W$ and for any pair $(m,n)$ of nonnegative integers, we study two quotient spaces of $W$, denoted by $A_{n,m}^{\dagger}(W)$ and $A^{\diamond}_{n,m}(W)$. It is proved that the dual space $A^{\dagger}_{n,m}(W)^{*}$ viewed as a subspace of $W^*$ coincides with the level-$(m,n)$ vacuum subspace of the regular representation module $\mathfrak{D}_{(-1)}(W)$.
By making use of this connection, we obtain an $A_{n}(V)$-$A_m(V)$ bimodule structure on both $A_{n,m}^{\dagger}(W)$ and $A^{\diamond}_{n,m}(W)$. Furthermore, for any fixed $m\in \N$ we obtain an $\N$-graded weak $V$-module structure together with a commuting right $A_m(V)$-module structure on $A^{\diamond}_{\Box,m}(W):=\oplus_{n\in \N}A^{\diamond}_{n,m}(W)$. Using this we recover the aforementioned main result of \cite{dj1}. We continue to describe the contents of this paper in some detail. Let $V$ be a vertex operator algebra and let $n\in \N$. From \cite{dlm-anv} we have an associative algebra $A_n(V)$ with an order $2$ anti-automorphism $\theta$. For any weak $V$-module $W$, set $$\Omega_n(W)=\{ w\in W\ |\ x^{n}Y(x^{L(0)}v,x)w\in W[[x]]\ \ \text{ for }v\in V\}.$$ Then $\Omega_n(W)$ is naturally an $A_n(V)$-module. Now, let $M$ be a weak $V\otimes V$-module. On $M$, we have two commuting $V$-module structures, denoted by $Y^{L}(\cdot,x)$ and $Y^R(\cdot,x)$, through the natural identifications $V=V\otimes \C{\bf 1}$ and $V=\C{\bf 1}\otimes V$. For a pair $(m,n)$ of nonnegative integers, define the level-$(m,n)$ vacuum space $$\Omega_{m,n}(M)=\Omega_{m}(M,Y^L)\cap \Omega_{n}(M,Y^R), $$ which is naturally an $A_m(V)\otimes A_n(V)$-module. Let $W$ be a weak $V$-module and let $m,n\in \N$. As the main objects of this paper, we study two quotient spaces $A_{n,m}^{\dagger}(W):=W/O^{\dagger}_{n,m}(W)$ and $A_{n,m}^{\diamond}(W):=W/O'_{n,m}(W)$, where $O^{\dagger}_{n,m}(W)$ is the subspace of $W$, linearly spanned by vectors \begin{eqnarray*} v\circ_{m}^{n}w:=\Res_{z}\frac{(1+z)^{m}}{z^{m+n+2}}Y((1+z)^{L(0)}v,z)w \end{eqnarray*} for $v\in V,\ w\in W$, and where \begin{eqnarray*} O'_{n,m}(W)=O^{\dagger}_{n,m}(W)+(L(-1)+L(0)+m-n)W, \end{eqnarray*} which is the same as in \cite{dj1}. As the first key result, we show that as subspaces of $W^{*}$, \begin{eqnarray} \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))=(A^{\dagger}_{n,m}(W))^{*}.
\end{eqnarray} View $(A^{\diamond}_{n,m}(W))^{*}$ naturally as a subspace of $(A^{\dagger}_{n,m}(W))^{*}$ and define \begin{eqnarray} \Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))=(A^{\diamond}_{n,m}(W))^{*}. \end{eqnarray} It is proved that $\Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))$ is an $A_m(V)\otimes A_n(V)$-submodule of $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$. (More precisely, we use a deformed $A_m(V)\otimes A_n(V)$-module structure on $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$, which corresponds to a particularly deformed $V\otimes V$ structure on $\mathfrak{D}_{(-1)}(W)$.) Using this and the algebra anti-automorphism $\theta$, we then obtain an $A_n(V)$-$A_m(V)$-bimodule structure on both $A^{\dagger}_{n,m}(W)$ and $A^{\diamond}_{n,m}(W)$. In \cite{dj1}, a subspace $O_{n,m}(V)$ of $V$ was introduced, which contains $O'_{n,m}(V)$, and it was proved that $A_{n,m}(V):=V/O_{n,m}(V)$ is an $A_n(V)$-$A_m(V)$-bimodule. It was conjectured therein that $O_{n,m}(V)=O'_{n,m}(V)$ and this conjecture was confirmed in case $m=n$. While the actions of $A_n(V)$ in \cite{dj1} and in this current paper are the same, the right actions of $A_m(V)$ appear to be different. Let $m\in \N$ be fixed. Consider the subspace $\Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W)):= \sum_{n\in \N} \Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))$ of $\mathfrak{D}_{(-1)}(W)$. It is shown that \begin{eqnarray} \Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W))=\bigoplus_{n\in \N} \Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W)), \end{eqnarray} which is naturally an $\N$-graded weak $V$-module with a commuting (left) $A_m(V)$-module structure. On the other hand, following \cite{dj1}, we define an $\N$-graded vector space \begin{eqnarray} A^{\diamond}_{\Box,m}(W)=\bigoplus_{n\in \N}A^{\diamond}_{n,m}(W), \end{eqnarray} whose graded dual space coincides with $\Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W))$.
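For a homogeneous vector $v\in V$, the spanning vectors of $O^{\dagger}_{n,m}(W)$ can be written out in components (a routine expansion of the residue, recorded here for the reader's convenience): writing $Y(v,z)=\sum_{j\in \Z}v_{j}z^{-j-1}$, we have
\begin{eqnarray*}
v\circ_{m}^{n}w=\Res_{z}\frac{(1+z)^{\wt v+m}}{z^{m+n+2}}Y(v,z)w =\sum_{i\ge 0}\binom{\wt v+m}{i}v_{i-m-n-2}w.
\end{eqnarray*}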
We then obtain an $\N$-graded weak $V$-module structure together with a commuting right $A_m(V)$-module structure on $A^{\diamond}_{\Box,m}(W)$, with $\Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W))$ as its contragredient dual $V$-module. This recovers a theorem of \cite{dj1}. To a certain extent, this confirms Dong and Jiang's conjecture. The canonical connection established in the current work reconfirms from a different direction that $A_{n}(V)$-$A_{m}(V)$-bimodules are natural and important objects to study. This paper is organized as follows: In Section 2, we review the associative algebras $A_n(V)$, the level-$n$ vacuum space $\Omega_n(W)$, and the spaces $A^{\dagger}_{n,m}(W)$, $A^{\diamond}_{n,m}(W)$. In Section 3, we recall some basic results on regular representations and establish a canonical connection between $A_{n,m}^{\dagger}(W)$ and $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$. In Section 4, we study the connection between $\Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))$ and $A^{\diamond}_{n,m}(W)$, and study the $\N$-graded $V$-modules $\Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W))$ and $A^{\diamond}_{\Box,m}(W)$. {\bf Acknowledgments:} We thank Chongying Dong for extensive discussions. \section{Associative algebra $A_n(V)$ and level-$n$ vacuum space $\Omega_n(W)$} This section is preliminary; we recall from \cite{dlm-anv} the sequence of associative algebras $A_n(V)$ and the level-$n$ vacuum space $\Omega_n(W)$ of a weak $V$-module $W$. Following Dong-Jiang \cite{dj1}, we introduce subspaces $O^{\dagger}_{n,m}(W)$ and $O'_{n,m}(W)$ of $W$ for any pair $(m,n)$ of nonnegative integers and for any weak $V$-module $W$. We begin by reviewing some basics on vertex operator algebras and modules (see \cite{flm}, \cite{fhl}). Let $V$ be a vertex operator algebra. The following are some of the main ingredients: $V=\bigoplus_{n\in \Z}V_{(n)}$ is a $\Z$-graded vector space with a distinguished vector $\omega\in V_{(2)}$, called the {\em conformal vector}.
Write $$Y(\omega,x)\ \left(=\sum_{n\in \Z}\omega_nx^{-n-1}\right) =\sum_{n\in \Z}L(n)x^{-n-2}.$$ The following are the basic properties: $$[L(m),L(n)]=(m-n)L(m+n)+\frac{1}{12}(m^3-m)\delta_{m+n,0}c$$ for $m,n\in \Z$, where $c\in \C$ is called the {\em central charge,} and for $n\in \Z$, \begin{eqnarray*} &&V_{(n)}=\{ v\in V\ |\ L(0)v=nv\},\\ &&Y(L(-1)v,x)=\frac{d}{dx}Y(v,x)\quad \text{ for }v\in V. \end{eqnarray*} A vector $v\in V_{(n)}$ with $n\in \Z$ is said to be {\em homogeneous} of {\em conformal weight $n$} and we write $\wt v=n$. The {\em two grading restrictions} state \begin{eqnarray} && \dim V_{(n)}<\infty\quad \text{ for all }n\in \Z,\\ &&V_{(n)}=0\quad \text{ for all sufficiently negative integers }n. \end{eqnarray} A $V$-module $W$ as a vector space is $\C$-graded with $W=\bigoplus_{\lambda \in \C}W_{(\lambda)}$, where $$W_{(\lambda)}=\{ w\in W\ |\ L(0)w=\lambda w\},$$ satisfying the two grading restrictions that for $\lambda\in \C$, $\dim W_{(\lambda)}<\infty$, and for any fixed $\lambda\in \C$, $W_{(\lambda+n)}=0$ for all sufficiently negative integers $n$. By definition, a {\em weak $V$-module} is a module for $V$ viewed as a vertex algebra. Any weak $V$-module is naturally a module for the Virasoro algebra (of the same central charge), though the action of $L(0)$ might be nonsemisimple. The following are some basic facts from \cite{fhl} we shall need: \begin{eqnarray} &&x_0^{L(0)}Y(u,x)=Y(x_0^{L(0)}u,x_0x)x_0^{L(0)}, \label{L(0)-conjugation} \\ &&e^{-x_0L(1)}Y(u,x)= Y\left(e^{-x_0(1+x_0x)L(1)}(1+x_0x)^{-2L(0)}u, \frac{x}{1+x_0x}\right)e^{-x_0L(1)}. \quad \label{L(1)-conjugation} \end{eqnarray} Also from \cite{fhl}, we have \begin{eqnarray} x_1^{-L(0)}e^{xL(1)}x_1^{L(0)}&=&e^{xx_1L(1)},\\ (1+z_0x)^{-2L(0)}e^{z_0(1-z_0x)L(1)}&=&e^{z_0(1+z_0x)L(1)}(1-z_0x)^{2L(0)}.
\end{eqnarray} Introduce a bijective linear operator on $V$ \begin{eqnarray}\label{def-theta} \theta: \ V\rightarrow V; \ \ v\mapsto e^{L(1)}(-1)^{L(0)}v, \end{eqnarray} which plays an important role (see \cite{fhl}, \cite{zhu1}). We have $\theta^2=1$ as $$(-1)^{-L(0)}e^{L(1)}(-1)^{L(0)}=e^{-L(1)}\ \text{ and }\ (-1)^{-L(0)}=(-1)^{L(0)}\ \text{ on }V.$$ Next, we recall the sequence of associative algebras $A_n(V)$ from \cite{dlm-anv}. \bd{def-O-n-W} {\em Let $W$ be a weak $V$-module and let $n$ be a nonnegative integer. For $v\in V, \ w\in W$, set \begin{eqnarray} v*_{n}w&=&\Res_{z}\sum_{i=0}^{n}\binom{-n-1}{i}\frac{(1+z)^{n}}{z^{n+1+i}}Y\left((1+z)^{L(0)}v,z\right)w,\label{*n-product}\\ v\circ_{n}w&=&\Res_{z}\frac{(1+z)^{n}}{z^{2n+2}}Y\left((1+z)^{L(0)}v,z\right)w. \end{eqnarray} Furthermore, define $O_{n}(W)$ to be the subspace of $W$, linearly spanned by vectors $$(L(-1)+L(0))w,\ \ v\circ_{n}w \quad \text{(for all }v\in V,\ w\in W).$$} \ed The following is a result of \cite{dlm-anv}: \bt{AnV-algebra} Let $n$ be any nonnegative integer. For the nonassociative algebra $(V,*_n)$ with the operation $*_n$ defined by (\ref{*n-product}), $O_n(V)$ is a two-sided ideal and the quotient algebra $V/O_n(V)$ is an associative algebra, denoted by $A_n(V)$. Furthermore, the linear operator $\theta=e^{L(1)}(-1)^{L(0)}$ on $V$ reduces to an anti-automorphism of $A_n(V)$ with $\theta^2=1$. On the other hand, the identity operator on $V$ gives rise to an algebra epimorphism $\psi_n: A_{n+1}(V)\rightarrow A_n(V)$ for every $n\in \N$. \et \bd{def-Omega-n} {\em Let $(W,Y_W)$ be a weak $V$-module, $n$ a nonnegative integer. Set \begin{eqnarray} \Omega_n(W)=\{ w\in W\ |\ x^nY_W(x^{L(0)}v,x)w\in W[[x]]\ \ \text{ for } v\in V \}. \end{eqnarray}} \ed The following was obtained in \cite{dlm-anv}, generalizing a result of Zhu (see \cite{zhu1}): \bp{prop-Omega-n-W} Let $(W,Y_W)$ be any weak $V$-module.
For $v\in V,\ w\in W$, define \begin{eqnarray} v\cdot w=\Res_x x^{-1}Y_W(x^{L(0)}v,x)w\in W. \end{eqnarray} Then $$u\cdot (v\cdot w)=(u*_nv)\cdot w \quad \text{for }u,v\in V,\ w\in \Omega_n(W),$$ and $O_n(V)\cdot \Omega_n(W)=0$. Furthermore, $\Omega_n(W)$ is naturally an $A_n(V)$-module. \ep Recall the following technical result from \cite{Li-Anv} (Lemma 3.8\footnote{There is a typo in the two strict inequalities, which is fixed here.}): \bl{lem3.8-anv} Let $W$ be a weak $V$-module, $n$ a nonnegative integer. Then for any (finitely many) homogeneous vectors $v^{(1)},v^{(2)},\dots, v^{(r)}\in V$ and for any integers $m_i\in \Z$, \begin{eqnarray} v^{(1)}_{m_1}v^{(2)}_{m_2}\cdots v^{(r)}_{m_r}\cdot \Omega_n(W)=0 \end{eqnarray} whenever $\wt \left(v^{(1)}_{m_1}v^{(2)}_{m_2}\cdots v^{(r)}_{m_r}\right)< -n$, i.e., $$m_1+\cdots +m_r> \wt v^{(1)}+\cdots +\wt v^{(r)}-r+n.$$ In particular, for any homogeneous vector $v\in V$, $(v_m)^{n+1}\Omega_n(W)=0$ if $m\ge \wt v$. \el Note that for any weak $V$-module $W$, the subspaces $\Omega_n(W)$ for $n\in \N$ form an ascending sequence. Using Lemma \ref{lem3.8-anv}, we immediately have: \bl{lemma-omega-union} Let $W$ be a weak $V$-module. Set $\Omega_n(W)=0$ for $n<0$. Then \begin{eqnarray} u_{m}\cdot \Omega_n(W)\subset \Omega_{n+\wt (u_m)} (W) \end{eqnarray} for any homogeneous vector $u\in V$ and for any $m,n\in \Z$, where $\wt (u_m)=\wt u-m-1$. \el The following result was obtained in loc. cit. (Lemma 3.5 and Corollary 3.9): \bp{prop-union} Let $W$ be any weak $V$-module. Set \begin{eqnarray} \Omega_{\infty}(W)=\bigcup_{n\in \N}\Omega_n(W)\subset W. \end{eqnarray} Then $\Omega_{\infty}(W)$ is a $V$-submodule of $W$. Furthermore, for any homogeneous vector $v\in V$ and for any integer $k\ge \wt v$, $v_k$ is locally nilpotent on $\Omega_{\infty}(W)$. In particular, $L(1)$ is locally nilpotent on $\Omega_{\infty}(W)$. \ep The following is straightforward: \bp{filter-omega-n} Let $W$ be a weak $V$-module such that $W=\Omega_{\infty}(W)$.
Form an $\N$-graded vector space \begin{eqnarray} {\rm gr}(W)=\bigoplus_{n\in \N}\left(\Omega_{n}(W)/\Omega_{n-1}(W)\right). \end{eqnarray} Then ${\rm gr}(W)$ is naturally an $\N$-graded weak $V$-module, and for $n\in \Z$, $$\Omega_n({\rm gr}(W))=\bigoplus_{k\le n}\left(\Omega_{k}(W)/\Omega_{k-1}(W)\right).$$ \ep \br{rem-O-nW-trivial} {\em Note that if $W$ is an irreducible weak $V$-module such that $\Omega_n(W)\ne 0$ for some $n\in \N$, then $W=\Omega_{\infty}(W)$. Equivalently, if $W$ is an irreducible weak $V$-module such that $W\ne \Omega_{\infty}(W)$, then $\Omega_n(W)= 0$ for all $n\in \N$.} \er The following definition is due to Dong and Jiang (see \cite{dj1}): \bd{def-O'-m-n} {\em Let $W$ be a weak $V$-module and let $m,n\in \N$. Denote by $O^{\dagger}_{n,m}(W)$ the subspace of $W$, linearly spanned by vectors \begin{eqnarray} v\circ_{m}^{n}w:=\Res_{z}\frac{(1+z)^{m}}{z^{m+n+2}}Y((1+z)^{L(0)}v,z)w \end{eqnarray} for $v\in V,\ w\in W$. Furthermore, set \begin{eqnarray} O'_{n,m}(W)=O^{\dagger}_{n,m}(W)+(L(-1)+L(0)+m-n)W. \end{eqnarray}} \ed Using Zhu's argument in \cite{zhu1} (cf. \cite{dlm-anv}, \cite{dj1}) we get \begin{eqnarray}\label{general-O-n-m} \Res_{z}\frac{(1+z)^{\wt v+m+s}}{z^{m+n+2+k}}Y(v,z)w\in O^{\dagger}_{n,m}(W) \end{eqnarray} for $v\in V,\ w\in W$ with $v$ homogeneous and for $s,k\in \N$ with $s\le k$. \br{rem-inclusion} {\em Let $m,n,p,q\in \N$. Then \begin{eqnarray} O^{\dagger}_{q,m}(W)\subset O^{\dagger}_{n,m}(W) && \text{ if }q\ge n,\\ O^{\dagger}_{n,p}(W)\subset O^{\dagger}_{n,m}(W) && \text{ if } p\ge m. \end{eqnarray} On the other hand, we have \begin{eqnarray} O_{n}(W)=O'_{n,n}(W). \end{eqnarray}} \er Furthermore, we form two vector spaces, which will play a key role in this paper. \bd{def-A-dagger-prime} Let $W$ be a weak $V$-module and let $m,n\in \N$. Define vector spaces \begin{eqnarray} &&A^{\dagger}_{n,m}(W)=W/O^{\dagger}_{n,m}(W),\\ &&A^{\diamond}_{n,m}(W)=W/O'_{n,m}(W).
\end{eqnarray} \ed \br{rem-DJ} {\em In \cite{dj1}, Dong and Jiang introduced two more subspaces $O''_{n,m}(V)$, $O'''_{n,m}(V)$ of $V$, and defined \begin{eqnarray*} &&O_{n,m}(V)=O'_{n,m}(V)+O''_{n,m}(V)+O'''_{n,m}(V),\\ &&A_{n,m}(V):=V/O_{n,m}(V). \end{eqnarray*} It was proved therein that $A_{n,m}(V)$ has an $A_n(V)$-$A_m(V)$ bimodule structure in the sense that $A_{n,m}(V)$ has a left $A_{n}(V)$-module structure together with a commuting right $A_{m}(V)$-module structure. It was conjectured that $O_{n,m}(V)=O'_{n,m}(V)$, where the conjecture was confirmed in case $m=n$. Since we use a different right $A_m(V)$-module action in this paper, we skip the detailed definitions and precise theorems. } \er To end this section, we formulate a linear algebra duality result which we shall use. Let $(A,*)$ be a non-associative algebra with an order-$2$ bijective linear operator $\theta$. Assume that $W$ is a vector space equipped with a bilinear map $A\times W\rightarrow W;\ (a,u)\mapsto au$. Define a bilinear map $$A\times W^*\rightarrow W^*; \quad (a,f)\mapsto af$$ by $$\< af,w\>=\<f,\theta(a)w\>\ \ \ \ \mbox{ for }a\in A, \ f\in W^{*},\ w\in W.$$ \bl{l-classical} Let $W_0$ be a subspace of $W$ and view $(W/W_0)^{*}$ as a subspace of $W^{*}$ naturally through the quotient map $W\rightarrow W/W_0$. Then $(W/W_0)^{*}$ is an $A$-stable subspace of $W^{*}$ if and only if $W_0$ is an $A$-stable subspace of $W$. \el Furthermore, assume that $J$ is an ideal of $A$ with $\theta(J)=J$ such that $A/J$ is an associative algebra and $\theta$ is an algebra anti-automorphism. We have: \bl{l-classical-2} Let $W_0$ be a subspace of $W$ such that $(W/W_0)^{*}$ is an $A$-stable subspace of $W^{*}$ with $J\cdot (W/W_0)^{*}=0$. Then $J\cdot (W/W_0)=0$. Furthermore, $(W/W_0)^{*}$ is a module for $A/J$ viewed as an associative algebra if and only if $W/W_0$ is an $A/J$-module.
\el \bl{l-classical-3} Let $U$ be a vector space, let $\psi\in \End U$, and let $U_1$, $U_2$ be subspaces of $U$. Denote by $\psi^{*}$ the dual operator of $\psi$ on $U^{*}$. Then $\psi^{*}((U/U_1)^{*})\subset (U/U_2)^{*}$ if and only if $\psi(U_2)\subset U_1$. \el \section{The regular representation on $\mathfrak{D}_{(-1)}(W)$ and $A^{\dagger}_{n,m}(W)$} In this section, we first recall the basic results on regular representations from \cite{Li-reg}, and we then introduce the level-$(m,n)$ vacuum space $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$ of the regular representation module $\mathfrak{D}_{(-1)}(W)$ and give a canonical identification of $(A^{\dagger}_{n,m}(W))^{*}$ with $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$. By making use of this identification, we recover some of Dong-Jiang's results in \cite{dj1}. Let $V$ be a vertex operator algebra and let $(W,Y_{W})$ be a weak $V$-module, both of which are fixed throughout this section. Define a linear map $$Y_{W}^{*}(\cdot,x):\ V\rightarrow (\End W^{*})[[x,x^{-1}]]$$ as in \cite{fhl} by the condition \begin{eqnarray} \< Y_{W}^{*}(v,x)f,w\>=\<f,Y_{W}(e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w\> \end{eqnarray} for $v\in V,\ f\in W^{*}, \ w\in W$. \bd{dDz0} {\em Let $z\in \C^{\times}$. Define $\mathfrak{D}_{(z)}(W)$ to consist of every $f\in W^{*}$ satisfying the condition that for any $v\in V$, there exists a nonnegative integer $k$ such that \begin{eqnarray}\label{erational-prop} (x-z)^{k}Y_{W}^{*}(v,x)f\in W^{*}((x)), \end{eqnarray} i.e., for any $v\in V$, there exist nonnegative integers $k$ and $l$ such that \begin{eqnarray}\label{erational-prop-2} x^l(x-z)^{k}Y_{W}^{*}(v,x)f\in W^{*}[[x]]. \end{eqnarray}} \ed Let $v\in V,\ f\in \mathfrak{D}_{(z)}(W)$.
Define \begin{eqnarray}\label{eYR} Y_{(z)}^{R}(v,x)f=(-z+x)^{-k}\left[(x-z)^{k}Y_{W}^{*}(v,x)f\right]\in W^{*}((x)), \end{eqnarray} where $k$ is any nonnegative integer such that (\ref{erational-prop}) holds and where as a convention $$(-z+x)^{-k}=\sum_{j\ge 0}\binom{-k}{j}(-z)^{-k-j}x^{j}\in \C[[x]].$$ Note that (\ref{erational-prop}) implies that there is a nonnegative integer $l$ such that \begin{eqnarray}\label{erational-prop-z0} (x+z)^{l}Y_{W}^{*}(v,x+z)f\in W^{*}((x)). \end{eqnarray} (In fact, it was proved that (\ref{erational-prop}) and (\ref{erational-prop-z0}) are equivalent.) Then define \begin{eqnarray}\label{eYL-z0} Y_{(z)}^{L}(v,x)f=(z+x)^{-l}\left[(x+z)^{l}Y_{W}^{*}(v,x+z)f\right]\in W^{*}((x)). \end{eqnarray} The following is a key result in \cite{Li-reg}: \bt{thm-reg} Let $z\in \C^{\times}$. Then both pairs $(\mathfrak{D}_{(z)}(W),Y_{(z)}^{L})$ and $(\mathfrak{D}_{(z)}(W),Y_{(z)}^{R})$ carry the structure of a weak $V$-module. Furthermore, the actions of $V$ under $Y_{(z)}^{R}(\cdot,x)$ and $Y_{(z)}^{L}(\cdot,x)$ commute, and $\mathfrak{D}_{(z)}(W)$ is naturally a weak module for the tensor product vertex operator algebra $V\otimes V$. \et The following was also obtained therein (Lemma 3.23): \bl{tri-relation} Let $v\in V,\ \alpha\in \mathfrak{D}_{(z)}(W)$. Then \begin{eqnarray}\label{tri-relation-alpha} &&x_0^{-1}\delta\left(\frac{x-z}{x_0}\right)Y_W^{*}(v,x)\alpha-x_0^{-1}\delta\left(\frac{z-x}{-x_0}\right)Y^{R}_{(z)}(v,x)\alpha \nonumber\\ &&\hspace{2cm} =z^{-1}\delta\left(\frac{x-x_0}{z}\right)Y^{L}_{(z)}(v,x_0)\alpha. \end{eqnarray} \el For simplicity, we shall simply write $Y^R$ and $Y^L$ for $Y^R_{(z)}$ and $Y^L_{(z)}$ whenever it is clear from the context. \bd{def-omega-m-n-M} {\em Let $M$ be a weak $V\otimes V$-module.
For $v\in V$, set $$Y^{1}(v,x)=Y(v\otimes {\bf 1},x),\ \ \ \ Y^{2}(v,x)=Y({\bf 1}\otimes v,x).$$ For $m,n\in \N$, let $\Omega_{m,n}(M)$ consist of all vectors $w\in M$ such that \begin{eqnarray} x^{\wt v+m}Y^{1}(v,x)w,\quad x^{\wt v+n}Y^{2}(v,x)w \in M[[x]] \end{eqnarray} for every homogeneous vector $v\in V$.} \ed By definition, we have \begin{eqnarray} \Omega_{m,n}(M)=\Omega_m(M, Y^{1}(\cdot,x))\cap \Omega_n(M, Y^{2}(\cdot,x)). \end{eqnarray} As the actions of $V$ under $Y^{1}(\cdot,x)$ and $Y^{2}(\cdot,x)$ on $M$ commute, from Proposition \ref{prop-Omega-n-W}, $\Omega_{m,n}(M)$ is naturally an $A_{m}(V)$-module and an $A_{n}(V)$-module. Furthermore, $\Omega_{m,n}(M)$ is a (left) $A_{m}(V)\otimes A_{n}(V)$-module. As the first result of this paper, we have: \bp{p-A-D-space} Let $W$ be a weak $V$-module and let $m,n\in \N$. Set \begin{eqnarray} A^{\dagger}_{n,m}(W)=W/O^{\dagger}_{n,m}(W), \end{eqnarray} a vector space. Let $f\in W^{*}$. Then $f\in (A^{\dagger}_{n,m}(W))^{*}$ if and only if \begin{eqnarray}\label{2.16} x^{\wt v+n}(x+1)^{\wt v+m}Y_{W}^{*}(v,x)f\in W^{*}[[x]] \end{eqnarray} for every homogeneous vector $v\in V$. Furthermore, we have \begin{eqnarray} (A^{\dagger}_{n,m}(W))^{*}= \Omega_{m,n}(\mathfrak{D}_{(-1)}(W)) \end{eqnarray} as subspaces of $W^{*}$. \ep \begin{proof} Suppose $f\in (A^{\dagger}_{n,m}(W))^{*}\subset W^{*}$, i.e., $f\in W^{*}$ such that $f|_{O^{\dagger}_{n,m}(W)}=0$. Let $v\in V$ be homogeneous.
For every $w\in W$ and for any $k\in \N$, using (\ref{general-O-n-m}) we get \begin{eqnarray*} &&\Res_{x}x^{\wt v+n+k}(x+1)^{\wt v+m}\<Y_{W}^{*}(v,x)f,w\>\nonumber\\ &=&\Res_{x}x^{\wt v+n+k}(x+1)^{\wt v+m}\<f,Y_{W}(e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w\>\nonumber\\ &=&(-1)^{\wt v}\Res_{x}x^{n+k-\wt v} (x+1)^{\wt v+m} \<f,Y_{W}(e^{xL(1)}v,x^{-1})w\>\nonumber\\ &=&(-1)^{\wt v}\Res_{z}\frac{(1+z)^{\wt v+m}}{z^{m+n+k+2}} \<f,Y_{W}(e^{z^{-1}L(1)}v,z)w\>\nonumber\\ &=&(-1)^{\wt v}\sum_{i\ge 0}\frac{1}{i!}\Res_{z}\frac{(1+z)^{\wt (L(1)^{i}v)+m+i}}{z^{m+n+k+2+i}} \<f,Y_{W}(L(1)^{i}v,z)w\>\nonumber\\ &=&0, \end{eqnarray*} which implies (\ref{2.16}). Conversely, assume that $f\in W^{*}$ such that (\ref{2.16}) holds for every homogeneous vector $v\in V$. Let $u\in V$ be homogeneous. Then for any $w\in W$ we have \begin{eqnarray*} &&\Res_z\frac{(1+z)^{\wt u +m}}{z^{m+n+2}} \<f, Y_W(u,z)w\>\nonumber\\ &=&\Res_z\frac{(1+z)^{\wt u +m}}{z^{m+n+2}}\<Y_W^{*}(e^{zL(1)}(-z^{-2})^{L(0)}u,z^{-1})f,w\>\nonumber\\ &=&(-1)^{\wt u}\Res_x x^{\wt u +n}(x+1)^{\wt u +m} \<Y_W^{*}(e^{x^{-1}L(1)}u,x)f,w\>\nonumber\\ &=&(-1)^{\wt u}\sum_{i\ge 0}\frac{1}{i!}\Res_xx^{\wt u+n-i}(x+1)^{\wt u+m}\<Y_W^{*}(L(1)^{i}u,x)f,w\>\nonumber\\ &=&(-1)^{\wt u}\sum_{i\ge 0}\frac{1}{i!}\Res_xx^{\wt (L(1)^iu)+n}(x+1)^{\wt (L(1)^iu)+m+i}\<Y_W^{*}(L(1)^{i}u,x)f,w\>\nonumber\\ &=&0. \end{eqnarray*} This proves $f|_{O^{\dagger}_{n,m}(W)}=0$. That is, $f\in (A^{\dagger}_{n,m}(W))^{*}$. This proves the first assertion. Now, let $f\in W^{*}$ such that (\ref{2.16}) holds for every homogeneous vector $v\in V$. Note that (\ref{2.16}) implies $f\in \mathfrak{D}_{(-1)}(W)$. Furthermore, from the definition of $Y^{R}(v,x)f$ we have \begin{eqnarray}\label{eYR=Y*} x^{\wt v+n}(1+x)^{\wt v+m}Y^{R}(v,x)f=x^{\wt v+n}(x+1)^{\wt v+m}Y_{W}^{*}(v,x)f.
\end{eqnarray} Combining this with (\ref{2.16}) we get $$x^{\wt v+n}(1+x)^{\wt v+m}Y^{R}(v,x)f\in \mathfrak{D}_{(-1)}(W)[[x]],$$ which implies \begin{eqnarray}\label{YRcondition} x^{\wt v+n}Y^{R}(v,x)f\in \mathfrak{D}_{(-1)}(W)[[x]]. \end{eqnarray} On the other hand, from (\ref{2.16}) we get \begin{eqnarray*} (x-1)^{\wt v+n}x^{\wt v+m}Y_{W}^{*}(v,x-1)f\in W^{*}[[x]]. \end{eqnarray*} From the definition of $Y^{L}(v,x)f$ we have \begin{eqnarray}\label{YL=Y*} (-1+x)^{\wt v+n}x^{\wt v+m}Y^{L}(v,x)f=(x-1)^{\wt v+n}x^{\wt v+m}Y_{W}^{*}(v,x-1)f. \end{eqnarray} By the same reasoning, we get \begin{eqnarray}\label{YLcondition} x^{\wt v+m}Y^{L}(v,x)f\in \mathfrak{D}_{(-1)}(W)[[x]]. \end{eqnarray} With (\ref{YRcondition}) and (\ref{YLcondition}) we conclude $f\in \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$. This proves $(A^{\dagger}_{n,m}(W))^{*}\subset \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$. On the other hand, let $f\in \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$. For any homogeneous vector $v\in V$, using (\ref{tri-relation-alpha}), (\ref{YRcondition}) and (\ref{YLcondition}) we see that (\ref{2.16}) holds (multiplying both sides by $x^{\wt v+n}x_0^{\wt v+m}$ and then applying $\Res_{x_0}$). Then $f\in (A^{\dagger}_{n,m}(W))^{*}$. This proves $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))\subset (A^{\dagger}_{n,m}(W))^{*}$, concluding the proof. \end{proof} Next, we refine Proposition \ref{p-A-D-space} to get an identification of $A_{m}(V)\otimes A_n(V)$-modules. To this end, we shall need the following two results from \cite{Li-Anv} (cf. \cite{Li-reg}, Remark 2.10): \bl{lemma-e-L(1)} Let $(E,Y_E)$ be a weak $V$-module and let $z_0\in \C$. For $v\in V$, define \begin{eqnarray}\label{deformed-module} Y_E^{[z_0]}(v,x)=Y_E(e^{-z_0(1+z_0x)L(1)}(1+z_0x)^{-2L(0)}v,x/(1+z_0x)).
\end{eqnarray} Then the pair $(E, Y_E^{[z_0]})$ carries the structure of a weak $V$-module, and for homogeneous vector $v\in V$ and for $m\in \Z$, \begin{eqnarray}\label{degree-0-formula} \Res_x x^mY_E^{[z_0]}(v,x)=\Res_x x^m(1-z_0x)^{2\wt v-m-2}Y_E(e^{-z_0(1-z_0x)^{-1}L(1)}v,x). \end{eqnarray} Furthermore, if $L(1)$ is locally nilpotent on $E$, $e^{-z_0L(1)}$ is a $V$-module isomorphism from $(E,Y_E)$ to $(E,Y_E^{[z_0]})$. \el \bp{p-anv-omega} Let $(E,Y_E)$ be a weak $V$-module and let $z_0\in \C$. Then for every $n\in \N$, \begin{eqnarray} \Omega_n(E,Y_E)= \Omega_n(E,Y_E^{[z_0]}). \end{eqnarray} Furthermore, $e^{-z_0L(1)}$ is an $A_n(V)$-module isomorphism from $\Omega_n(E,Y_E)$ to $\Omega_n(E,Y_E^{[z_0]})$. \ep As an immediate consequence of Lemma \ref{lemma-e-L(1)} and Proposition \ref{p-anv-omega}, we have: \bc{z-0-anv-module} Let $(E,Y_E)$ be a weak $V$-module and let $z_0\in \C$. Define a bilinear map from $V\times E$ to $E$ by \begin{eqnarray} v\bullet_{(z_0)} w=\Res_x x^{\wt v-1}(1-z_0x)^{\wt v-1}Y_E(e^{-z_0(1-z_0x)^{-1}L(1)}v,x)w \end{eqnarray} for $v\in V,\ w\in E$ with $v$ homogeneous. Then this bilinear map gives rise to an $A_n(V)$-module structure on $\Omega_n(E)$ for every $n\in \N$. We denote this $A_n(V)$-module by $\Omega_n^{[z_0]}(E)$. \ec \bd{Am-action} {\em Define a bilinear map $\bar{*}_{m,n}: V\times W\rightarrow W; \ (v,w)\mapsto v\bar{*}_{m,n}w$ by \begin{eqnarray} v\bar{*}_{m,n}w=\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{n+i}\frac{(1+x)^{i-1}}{x^{n+i+1}}Y_W\left((1+x)^{L(0)}v,x\right)w \end{eqnarray} for $v\in V,\ w\in W$.} \ed \bl{Y-L-DJ} Let $v\in V,\ f\in A^{\dagger}_{n,m}(W)^{*}\subset W^*,\ w\in W$ with $v$ homogeneous. Then \begin{eqnarray} \Res_{x}x^{\wt v-1}\<(Y^{L})^{[1]}(v,x)f,w\>=\<f, v\bar{*}_{m,n}w\>. \end{eqnarray} \el \begin{proof} Note that $x^{\wt u+m}Y^{L}(u,x)f\in \mathfrak{D}_{(-1)}(W)[[x]]$ for any homogeneous vector $u\in V$.
As $(-1+x)^{p}\in \C[[x]]$ and $\wt (L(1)^rv)=\wt v-r$ for any $p\in \Z,\ r\in \N$, we have $$\Res_xx^{\wt v+j-1} (-1+x)^{\wt v+n}Y^{L}(e^{(-1+x)^{-1}L(1)}v,x)f=0 $$ for all $j>m$. From (\ref{YL=Y*}), we have $$(-1+x)^{\wt u+n}Y^{L}(u,x)f=(x-1)^{\wt u+n}Y_W^{*}(u,x-1)f$$ for any homogeneous vector $u\in V$, which yields $$(-1+x)^{\wt v+n}Y^{L}(e^{(-1+x)^{-1}L(1)}v,x)f=(x-1)^{\wt v+n}Y_W^{*}(e^{(x-1)^{-1}L(1)}v,x-1)f.$$ Then using (\ref{degree-0-formula}) we get \begin{eqnarray*} &&\Res_{x}x^{\wt v-1}\<(Y^{L})^{[1]}(v,x)f,w\>\\ &=&\Res_x (-1)^{\wt v-1}x^{\wt v-1} (-1+x)^{\wt v-1}\<Y^{L}(e^{(-1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{\infty}\binom{-n-1}{i} (-1)^{\wt v+n-i}x^{\wt v+i-1} (-1+x)^{\wt v+n}\<Y^{L}(e^{(-1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{\wt v+n-i}x^{\wt v+i-1} (-1+x)^{\wt v+n}\<Y^{L}(e^{(-1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{\wt v+n-i}x^{\wt v+i-1} (x-1)^{\wt v+n}\<Y_W^{*}(e^{(x-1)^{-1}L(1)}v,x-1)f,w\>\\ &=&\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{\wt v+n-i}(x+1)^{\wt v+i-1} x^{\wt v+n}\<Y_W^{*}(e^{x^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{\wt v+n-i}(x+1)^{\wt v+i-1} x^{\wt v+n}\\ &&\quad\quad \times \<f, Y_W(e^{xL(1)}(-x^{-2})^{L(0)}e^{x^{-1}L(1)}v,x^{-1})w\>\\ &=&\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{\wt v+n-i}(x+1)^{\wt v+i-1} x^{\wt v+n}\<f, Y_W((-x^{2})^{-L(0)}v,x^{-1})w\>\\ &=&\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{n-i}(x+1)^{\wt v+i-1} x^{-\wt v+n}\<f, Y_W(v,x^{-1})w\>\\ &=&\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{n-i}(x^{-1}+1)^{\wt v+i-1} x^{\wt v-n-2}\<f, Y_W(v,x)w\>\\ &=&\Res_x \sum_{i=0}^{m}\binom{-n-1}{i} (-1)^{n+i}\frac{(1+x)^{\wt v+i-1}}{x^{n+i+1}} \<f, Y_W(v,x)w\>\\ &=&\<f, v\bar{*}_{m,n}w\>, \end{eqnarray*} as desired. 
\end{proof} Combining Lemma \ref{Y-L-DJ} with Lemmas \ref{l-classical} and \ref{l-classical-2}, we immediately obtain: \bp{cor-L-connection} The bilinear map $V\times W\rightarrow W;\ (v,w)\mapsto v\cdot w:=v\bar{*}_{m,n}w$ reduces to a bilinear map from $A_m(V)\times A^{\dagger}_{n,m}(W)$ to $A^{\dagger}_{n,m}(W)$ such that $A^{\dagger}_{n,m}(W)$ is a right $A_m(V)$-module. \ep Recall the following bilinear map from \cite{dj1}: \bd{An-action} {\em Define a bilinear map $\bar{*}_{m}^{n}: V\times W\rightarrow W; \ (v,w)\mapsto v\bar{*}_{m}^{n}w$ by \begin{eqnarray} v\bar{*}_{m}^{n}w=\sum_{i=0}^{n}\binom{-m-1}{i}\Res_{z}\frac{(1+z)^{m}}{z^{m+i+1}}Y((1+z)^{L(0)}v,z)w \end{eqnarray} for $v\in V,\ w\in W$.} \ed The following is a connection between the product $v\bar{*}_{m,n}w$ and the product $v\bar{*}_{m}^nw$ for $v\in V,\ w\in W$: \bl{left-right-diff} For any homogeneous vector $v\in V$ and for any $w\in W$, we have \begin{eqnarray} v\bar{*}_{m,n}w=v\bar{*}_{m}^nw-\Res_x (1+x)^{\wt v-1}Y(v,x)w. \end{eqnarray} \el \begin{proof} Recall the following identity from \cite{dj1} (Proposition 5.1): \begin{eqnarray}\label{DJ-formula} \sum_{i=0}^m\binom{-n-1}{i}\frac{(1+x)^{n+1}}{x^{n+i+1}}-\sum_{i=0}^n\binom{-m-1}{i}(-1)^{m+i}\frac{(1+x)^i}{x^{m+i+1}}=1. \end{eqnarray} Using this (with $m$ and $n$ switched) we have \begin{eqnarray} && \sum_{i=0}^m\binom{-n-1}{i}(-1)^{n+i}\frac{(1+x)^{\wt v+i-1}}{x^{n+i+1}}\nonumber\\ &=&(1+x)^{\wt v-1}\left[-1+\sum_{i=0}^n\binom{-m-1}{i}\frac{(1+x)^{m+1}}{x^{m+i+1}}\right]\nonumber\\ &=&-(1+x)^{\wt v-1}+\sum_{i=0}^n\binom{-m-1}{i}\frac{(1+x)^{\wt v+m}}{x^{m+i+1}}. \end{eqnarray} Then by the definitions of $v\bar{*}_{m,n}w$ and $v\bar{*}_{m}^nw$ we obtain \begin{eqnarray*} v\bar{*}_{m,n}w=v\bar{*}_{m}^nw-\Res_x (1+x)^{\wt v-1}Y(v,x)w, \end{eqnarray*} as desired. \end{proof} We have the following analogue of Lemma \ref{Y-L-DJ}: \bl{R-dj-relation} Let $v\in V,\ f\in A^{\dagger}_{n,m}(W)^{*}\subset W^*,\ w\in W$ with $v$ homogeneous.
Then \begin{eqnarray}\label{R-dj-relation-formula} \Res_{x}x^{\wt v-1}\<(Y^{R})^{[-1]}(v,x)f,w\>= \<f, \theta(v)\bar{*}_{m}^{n}w\>. \end{eqnarray} \el \begin{proof} Recall that for any homogeneous vector $u\in V$, we have \begin{eqnarray*} &&\quad \quad x^{\wt u+n}Y^R(u,x)f\in \mathfrak{D}_{(-1)}(W)[[x]],\\ &&(1+x)^{\wt u+m}Y^{R}(u,x)f=(x+1)^{\wt u+m}Y^{*}(u,x)f. \end{eqnarray*} Note that $(1+x)^{p}\in \C[[x]]$ for $p\in \Z$ and $\wt (L(1)^rv)=\wt v-r$ for $r\in \N$. Using all of these facts and (\ref{degree-0-formula}), we obtain \begin{eqnarray*} &&\Res_{x}x^{\wt v-1}\<(Y^{R})^{[-1]}(v,x)f,w\>\\ &=&\Res_x x^{\wt v-1} (1+x)^{\wt v-1}\<Y^{R}(e^{(1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{\infty}\binom{-m-1}{i} x^{\wt v+i-1} (1+x)^{\wt v+m}\<Y^{R}(e^{(1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{n}\binom{-m-1}{i} x^{\wt v+i-1} (1+x)^{\wt v+m}\<Y^{R}(e^{(1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{n}\binom{-m-1}{i} x^{\wt v+i-1} (x+1)^{\wt v+m}\<Y_W^{*}(e^{(x+1)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{n}\binom{-m-1}{i} x^{\wt v+i-1} (x+1)^{\wt v+m}\\ &&\quad\quad \times \<f, Y_W(e^{xL(1)}(-x^{-2})^{L(0)}e^{(x+1)^{-1}L(1)}v,x^{-1})w\>\\ &=&\Res_x \sum_{i=0}^{n}\binom{-m-1}{i} x^{\wt v+i-1} (x+1)^{\wt v+m} \<f, Y_W(e^{x(x+1)^{-1}L(1)} (-x^{-2})^{L(0)}v,x^{-1})w\>\\ &=&\Res_x \sum_{i=0}^{n}\binom{-m-1}{i} \frac{(1+x)^{\wt v+m}}{x^{m+i+1}} \<f, Y_W(e^{(1+x)^{-1}L(1)}(-1)^{L(0)}v,x)w\>\\ &=&\sum_{r\ge 0}\frac{1}{r!}\Res_x \sum_{i=0}^{n}\binom{-m-1}{i} \frac{(1+x)^{\wt (L(1)^rv)+m}} {x^{m+i+1}} \<f, Y_W(L(1)^r(-1)^{L(0)}v,x)w\>\\ &=& \<f, \theta(v)\bar{*}_{m}^{n}w\>, \end{eqnarray*} as desired.
\end{proof} By invoking the linear algebra duality again, we immediately recover the following result of \cite{dj1}: \bp{left-module-bar} The bilinear map $V\times W\rightarrow W; \ (v,w)\mapsto v\bar{*}_{m}^{n}w$ reduces to a bilinear map from $A_n(V)\times A^{\dagger}_{n,m}(W)$ to $A^{\dagger}_{n,m}(W)$ such that $A^{\dagger}_{n,m}(W)$ becomes a (left) $A_n(V)$-module. \ep As the actions of $V$ on $\mathfrak{D}_{(-1)}(W)$ under $Y^{L}$ and $Y^{R}$ commute, it follows from (\ref{deformed-module}) that the actions of $V$ on $\mathfrak{D}_{(-1)}(W)$ under $(Y^{L})^{[1]}$ and $(Y^{R})^{[-1]}$ commute. Then using Lemmas \ref{Y-L-DJ} and \ref{R-dj-relation} we immediately have: \bp{prop-A-dagger} Let $m,n\in \N$. Then on the space $A^{\dagger}_{n,m}(W)$ the left module action of $A_n(V)$ established in Proposition \ref{left-module-bar} and the right module action of $A_m(V)$ established in Proposition \ref{cor-L-connection} commute. \ep To summarize, as the main result of this section we have: \bt{thm-Amn} Let $W$ be a weak $V$-module and let $m,n\in \N$. Then $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$ is an $A_{m}(V)\otimes A_{n}(V)$-module with the actions of $A_m(V)$ and $A_n(V)$ given respectively by \begin{eqnarray} &&v\bullet_L f=\Res_x x^{\wt v-1}(1-x)^{\wt v-1}Y^{L}(e^{(-1+x)^{-1}L(1)}v,x)f,\label{vLf}\\ &&v\bullet_R f=\Res_x x^{\wt v-1}(1+x)^{\wt v-1}Y^{R}(e^{(1+x)^{-1}L(1)}v,x)f\label{vRf} \end{eqnarray} for $v\in V,\ f\in \mathfrak{D}_{(-1)}(W)$. Denote this $A_{m}(V)\otimes A_{n}(V)$-module by $\Omega_{m,n}^{[1,-1]}(\mathfrak{D}_{(-1)}(W))$. On the other hand, the subspace $A^{\dagger}_{n,m}(W)^{*}$ of $W^{*}$ is an $A_{m}(V)\otimes A_{n}(V)$-module with the actions of $A_m(V)$ and $A_n(V)$ given respectively by \begin{eqnarray} &&\<v\cdot_L f,w\>=\< f, v\bar{*}_{m,n}w\>,\label{v-dot-L}\\ &&\<v\cdot_R f,w\>=\< f, \theta(v)\bar{*}_{m}^{n}w\>\label{v-dot-R} \end{eqnarray} for $v\in V,\ f\in A^{\dagger}_{n,m}(W)^{*},\ w\in W$.
Furthermore, we have \begin{eqnarray} \Omega_{m,n}^{[1,-1]}(\mathfrak{D}_{(-1)}(W))=(A^{\dagger}_{n,m}(W))^{*} \end{eqnarray} as $A_{m}(V)\otimes A_{n}(V)$-modules. \et \begin{proof} From definition, we have \begin{eqnarray*} \Omega_{m,n}(\mathfrak{D}_{(-1)}(W)) =\Omega_{m}(\mathfrak{D}_{(-1)}(W),Y^L)\cap \Omega_{n}(\mathfrak{D}_{(-1)}(W),Y^R) \end{eqnarray*} as vector spaces. In view of Corollary \ref{z-0-anv-module}, $\Omega_{m}(\mathfrak{D}_{(-1)}(W),Y^L)$ is an $A_m(V)$-module with the action given by (\ref{vLf}), and $\Omega_{n}(\mathfrak{D}_{(-1)}(W),Y^R)$ is an $A_n(V)$-module with the action given by (\ref{vRf}). Recall that the actions of $V$ under $Y^R$ and $Y^L$ on $\mathfrak{D}_{(-1)}(W)$ commute. Then it follows that $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$ is an $A_m(V)\otimes A_n(V)$-module with the actions of $A_m(V)$ and $A_n(V)$ given respectively by (\ref{vLf}) and (\ref{vRf}). This proves the first assertion. As $\theta$ gives rise to an anti-automorphism of $A_n(V)$, with Proposition \ref{prop-A-dagger}, we conclude that $A^{\dagger}_{n,m}(W)^{*}$ is an $A_m(V)\otimes A_n(V)$-module with the actions of $A_m(V)$ and $A_n(V)$ given respectively by (\ref{v-dot-L}) and (\ref{v-dot-R}). The furthermore assertion follows from Lemmas \ref{Y-L-DJ} and \ref{R-dj-relation}. \end{proof} \br{Omega-direct-limit} {\em Let $W$ be a weak $V$-module. Set \begin{eqnarray} \Omega_{\infty}(\mathfrak{D}_{(-1)}(W))=\cup_{m,n\in \N}\Omega_{m,n}(\mathfrak{D}_{(-1)}(W)), \end{eqnarray} which is a $V\otimes V$-submodule of $\mathfrak{D}_{(-1)}(W)$. Equip $\N\times \N$ with the partial order $\le$, where $(m,n)\le (p,q)$ if and only if $m\le p$ and $n\le q$. Then $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$ for $m,n\in \N$ form a direct system over $\N\times \N$, for which $\Omega_{\infty}(\mathfrak{D}_{(-1)}(W))$ is a direct limit.
} \er \br{rem-A(W)} {\em For $n\in \N$, as $O^{\dagger}_n(W)=O^{\dagger}_{n,n}(W)$, we have $A^{\dagger}_n(W)=A^{\dagger}_{n,n}(W)$. Then \begin{eqnarray} (A^{\dagger}_n(W))^{*}=(A^{\dagger}_{n,n}(W))^{*}=\Omega_{n,n}(\mathfrak{D}_{(-1)}(W)). \end{eqnarray} Consequently, we have \begin{eqnarray} \Omega_{\infty}(\mathfrak{D}_{(-1)}(W))=\cup_{n\in \N}(A^{\dagger}_n(W))^{*}. \end{eqnarray}} \er Recall the weak $V$-module structure $(Y^R)^{[-1]}(\cdot,x)$ on $\mathfrak{D}_{(-1)}(W)$, where \begin{eqnarray} \Res_xx^k(Y^R)^{[-1]}(v,x)=\Res_xx^k(1+x)^{2\wt v-k-2}Y^R(e^{(1+x)^{-1}L(1)}v,x) \end{eqnarray} for homogeneous vector $v\in V$ and for $k\in \Z$. For any $u\in V$, write \begin{eqnarray} (Y^R)^{[-1]}(u,x)=\sum_{k\in\Z}u^{R[-1]}_kx^{-k-1}. \end{eqnarray} Using Lemma \ref{lemma-omega-union} and Proposition \ref{p-anv-omega} we immediately have: \bl{lem-YR[-1]} Let $m,n\in \N,\ p\in \Z$. Then for any homogeneous vector $v\in V$, \begin{eqnarray} v_{\wt v-1+p}^{R[-1]}\cdot \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))\subset \Omega_{m,n-p}(\mathfrak{D}_{(-1)}(W)), \end{eqnarray} where by convention $\Omega_{m,k}(\mathfrak{D}_{(-1)}(W))=0$ for $k<0$. \el The following is a slight variation of Dong and Jiang's notion $u*_{m,p}^{n}v$ (see \cite{dj1}): \bd{def-general-product} {\em Let $m,n\in \N,\ p\in \Z$. For $u\in V,\ w\in W$, define \begin{eqnarray}\label{deform-dj} u[p]\bar{*}_{m}^{n}w=\Res_x \sum_{i=0}^{n+|p|}\binom{-m-p-1}{i} \frac{(1+x)^{m}} {x^{p+m+i+1}}Y_W((1+x)^{L(0)}u,x)w\in W. \end{eqnarray}} \ed We have the following generalization of Lemma \ref{R-dj-relation}: \bl{lem-general-YR} Let $f\in \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))=(A_{n,m}^{\dagger}(W))^{*}$ with $m,n\in \N$. Then for any $v\in V, \ p\in \Z,\ w\in W$, we have \begin{eqnarray} \Res_x x^{p-1}\<(Y^{R})^{[-1]}(x^{L(0)}v,x)f,w\>= \<f, \theta(v)[p]\bar{*}_{m}^{n}w\>. \end{eqnarray} \el \begin{proof} Let $v\in V,\ p\in \Z,\ w\in W$ with $v$ homogeneous.
Note that \begin{eqnarray*} &&x^{\wt v-1+p+i} (1+x)^{\wt v+m}Y^{R}(e^{(1+x)^{-1}L(1)}v,x)f\\ &=&\sum_{r\ge 0}\frac{1}{r!}x^{\wt (L(1)^rv)-1+r+p+i} (1+x)^{\wt (L(1)^rv)+m}Y^{R}(L(1)^{r}v,x)f\\ &\in & W^{*}[[x]] \end{eqnarray*} for $i>n+|p|$. Then using (\ref{degree-0-formula}) we get \begin{eqnarray*} &&\Res_{x}x^{p-1}\<(Y^{R})^{[-1]}(x^{L(0)}v,x)f,w\>\\ &=&\Res_x x^{\wt v-1+p} (1+x)^{\wt v-1-p}\<Y^{R}(e^{(1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{\infty}\binom{-m-p-1}{i} x^{\wt v-1+p+i} (1+x)^{\wt v+m}\<Y^{R}(e^{(1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{n+|p|}\binom{-m-p-1}{i} x^{\wt v-1+p+i} (1+x)^{\wt v+m}\<Y^{R}(e^{(1+x)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{n+|p|}\binom{-m-p-1}{i} x^{p+\wt v-1+i} (x+1)^{\wt v+m}\<Y_W^{*}(e^{(x+1)^{-1}L(1)}v,x)f,w\>\\ &=&\Res_x \sum_{i=0}^{n+|p|}\binom{-m-p-1}{i} x^{\wt v-1+p+i} (x+1)^{\wt v+m}\\ &&\quad\quad \times \<f, Y_W(e^{xL(1)}(-x^{-2})^{L(0)}e^{(x+1)^{-1}L(1)}v,x^{-1})w\>\\ &=&\Res_x \sum_{i=0}^{n+|p|}\binom{-m-p-1}{i} x^{\wt v-1+p+i} (x+1)^{\wt v+m} \<f, Y_W(e^{x(x+1)^{-1}L(1)} (-x^{-2})^{L(0)}v,x^{-1})w\>\\ &=&\Res_x \sum_{i=0}^{n+|p|}\binom{-m-p-1}{i} \frac{(1+x)^{\wt v+m}}{x^{p+m+i+1}} \<f, Y_W(e^{(1+x)^{-1}L(1)}(-1)^{L(0)}v,x)w\>\\ &=&\sum_{r\ge 0}\frac{1}{r!}\Res_x \sum_{i=0}^{n+|p|}\binom{-m-p-1}{i} \frac{(1+x)^{\wt (L(1)^rv)+m}} {x^{p+m+i+1}} \<f, Y_W(L(1)^r(-1)^{L(0)}v,x)w\>\\ &=& \<f, \theta(v)[p]\bar{*}_{m}^{n}w\>, \end{eqnarray*} as desired. \end{proof} Note that for homogeneous vector $v\in V$, as $\theta^2=1$ we also have \begin{eqnarray}\label{other-direction} \<f, v[p]\bar{*}_{m}^{n}w\>&=&\Res_x x^{p-1}\< (Y^R)^{[-1]}(x^{L(0)}\theta(v),x)f,w\>\nonumber\\ &=&\sum_{r\ge 0}\frac{1}{r!}\< (L(1)^r(-1)^{L(0)}v)^{R[-1]}_{\wt v-r-1+p}f,w\>. \end{eqnarray} Using Lemmas \ref{lem-YR[-1]}, \ref{lem-general-YR} and \ref{l-classical-3} we immediately recover the following result of \cite{dj1}: \bc{coro-O-m-n} Let $m,n\in \N,\ p\in \Z$, and let $u\in V$.
Then \begin{eqnarray} u[p]\bar{*}_{m}^{n} O^{\dagger}_{n,m}(W)\subset O^{\dagger}_{n+p,m}(W), \end{eqnarray} where by definition $O^{\dagger}_{k,m}(W)=W$ for $k<0$ (so that $A_{k,m}^{\dagger}(W)^{*}=0=\Omega_{m,k}(\mathfrak{D}_{(-1)}(W))$). \ec \section{$A^{\diamond}_{n,m}(W)$ and $\Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))$} Here, we continue to identify $(A^{\diamond}_{n,m}(W))^{*}$ with a particular subspace $\Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))$ of $\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$. For each fixed $m\in \N$, the sum of subspaces $\Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))$ for $n\in \N$ is shown to be a direct sum and an $\N$-graded weak $V$-module. Using this connection, we obtain an $\N$-graded weak $V$-module structure on $A^{\diamond}_{\Box,m}(W):=\bigoplus_{n\in \N}A^{\diamond}_{n,m}(W)$, which recovers a result of \cite{dj1}. Let $(W,Y_W)$ be a weak $V$-module as before. Write $$Y^{*}_W(\omega,x)=\sum_{k\in \Z}L^{*}(k)x^{-k-2}$$ on $W^{*}$, where $\omega$ is the conformal vector of $V$. We have $$\< L^{*}(k)\alpha,u\>=\<\alpha, L(-k)u\>\quad \text{ for }\alpha\in W^{*},\ u\in W.$$ Recall the weak $V$-module structures $(Y^L)^{[1]}(\cdot,x)$ and $(Y^R)^{[-1]}(\cdot,x)$ on $\mathfrak{D}_{(-1)}(W)$. Write \begin{eqnarray} (Y^L)^{[1]}(\omega,x)=\sum_{k\in \Z}L_l^{[1]}(k)x^{-k-2},\quad (Y^R)^{[-1]}(\omega,x)=\sum_{k\in \Z}L_r^{[-1]}(k)x^{-k-2}. \end{eqnarray} Recall that $O'_{n,m}(W)=O^{\dagger}_{n,m}(W)+(L(-1)+L(0)+m-n)W$ for $m,n\in \N$. We have: \bp{p-L(-1)+L(0)} The following relation holds for any $f\in \mathfrak{D}_{(-1)}(W)$: \begin{eqnarray}\label{L*=Lr=Ll} (L^{*}(1)+L^{*}(0))f=L_r^{[-1]}(0)f-L_l^{[1]}(0)f, \end{eqnarray} which is equivalent to \begin{eqnarray} \<(L_r^{[-1]}(0)-L_l^{[1]}(0))f,w\>=\<f, (L(-1)+L(0))w\> \end{eqnarray} for $w\in W$. Furthermore, let $f\in \mathfrak{D}_{(-1)}(W)\subset W^{*}$.
Then $f\in (W/O'_{n,m}(W))^{*}$ with $m,n\in \N$ if and only if $f\in \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$ and \begin{eqnarray} L_r^{[-1]}(0)f-L_l^{[1]}(0)f=(n-m)f. \end{eqnarray} \ep \begin{proof} Recall from Lemma \ref{tri-relation} that for $v\in V,\ f\in \mathfrak{D}_{(-1)}(W)$, we have \begin{eqnarray*} x_0^{-1}\delta\left(\frac{x+1}{x_0}\right)Y_W^{*}(v,x)f-x_0^{-1}\delta\left(\frac{1+x}{x_0}\right)Y^{R}(v,x)f =x^{-1}\delta\left(\frac{-1+x_0}{x}\right)Y^{L}(v,x_0)f.\ \ \end{eqnarray*} Specializing $v$ to the conformal vector $\omega$ and applying $\Res_{x_0}(x+x^2)$, we get \begin{eqnarray*} (x+x^2)Y_W^{*}(\omega,x)f- (x+x^2)Y^R(\omega,x)f=\Res_{x_0}x^{-1}\delta\left(\frac{-1+x_0}{x}\right)(x+x^2)Y^L(\omega,x_0)f, \end{eqnarray*} which by applying $\Res_x$ to both sides yields \begin{eqnarray} (L^{*}(0)+L^{*}(1))f- \Res_x (x+x^2)Y^R(\omega,x)f =-\Res_{x_0}x_0(1-x_0)Y^L(\omega,x_0)f. \end{eqnarray} As $L(1)\omega=0$, from (\ref{degree-0-formula}) we have \begin{eqnarray} &&L_r^{[-1]}(0)f=\Res_x x(1+x)Y^R(\omega,x)f,\\ &&L_l^{[1]}(0)f=\Res_x x(1-x)Y^L(\omega,x)f. \end{eqnarray} Then we immediately get (\ref{L*=Lr=Ll}). Let $g\in W^{*}$. Then $g=0$ on $(L(-1)+L(0)+m-n)W$ if and only if \begin{eqnarray} (L^{*}(1)+L^{*}(0))g=(n-m)g\ \text{ in } W^*. \end{eqnarray} Furthermore, by Proposition \ref{p-A-D-space} and the first assertion we see that $g=0$ on $O'_{n,m}(W)$ if and only if $g\in\Omega_{m,n}(\mathfrak{D}_{(-1)}(W))$ and $L_r^{[-1]}(0)g-L_l^{[1]}(0)g=(n-m)g$. This proves the second assertion. \end{proof} With $O^{\dagger}_{n,m}(W)\subset O'_{n,m}(W)$, the identity operator on $W$ yields a linear epimorphism from $A^{\dagger}_{n,m}(W)$ to $A^{\diamond}_{n,m}(W)$, and hence $(A^{\diamond}_{n,m}(W))^{*}$ is naturally a subspace of $(A^{\dagger}_{n,m}(W))^{*}$. \bc{coro-O-m-n-prime} Let $m,n\in \N$.
Then $(A^{\diamond}_{n,m}(W))^{*}$ is a submodule of the $A_m(V)\otimes A_n(V)$-module $(A^{\dagger}_{n,m}(W))^{*}$ defined in Theorem \ref{thm-Amn}. On the other hand, the image of $(L(-1)+L(0)+m-n)W$ in $A^{\dagger}_{n,m}(W)$ is a submodule of the $A_n(V)$-$A_m(V)$ bimodule $A^{\dagger}_{n,m}(W)$ and $A^{\diamond}_{n,m}(W)$ is naturally an $A_n(V)$-$A_m(V)$ bimodule. \ec \begin{proof} Recall that for any $k\in \N$, the image $[\omega]$ of the conformal vector $\omega$ in $A_k(V)$ is a central element and $[\omega]$ acts on $\Omega_k(M)$ as $L(0)$ for any weak $V$-module $M$. We see that $L_r^{[-1]}(0)-L_l^{[1]}(0)$ commutes with the action of $A_m(V)\otimes A_n(V)$ on $\Omega_{m,n}^{[1,-1]}(\mathfrak{D}_{(-1)}(W))$. It then follows immediately from the second part of Proposition \ref{p-L(-1)+L(0)} that $(A^{\diamond}_{n,m}(W))^{*}$ is an $A_m(V)\otimes A_n(V)$-submodule of $\Omega_{m,n}^{[1,-1]}(\mathfrak{D}_{(-1)}(W))$. Then making use of the linear algebra duality again we conclude that the image of $(L(-1)+L(0)+m-n)W$ in $A^{\dagger}_{n,m}(W)$ is an $A_n(V)$-$A_m(V)$ submodule, so that the corresponding quotient module gives an $A_n(V)$-$A_m(V)$ bimodule structure on $A^{\diamond}_{n,m}(W)$. \end{proof} \bd{def-Omega'} {\em For $m,n\in \N$, set \begin{eqnarray} \Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))=\{ f\in \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))\ |\ (L_r^{[-1]}(0)-L_l^{[1]}(0))f=(n-m)f\}. \end{eqnarray}} \ed By Proposition \ref{p-L(-1)+L(0)}, we have \begin{eqnarray}\label{O-diamond-A-diamond} \Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))=(A^{\diamond}_{n,m}(W))^{*}. \end{eqnarray} From Definition \ref{def-Omega'}, for any fixed $m\in \N$, the sum $\sum_{n\in \N}\Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))$ in $\mathfrak{D}_{(-1)}(W)$ is a direct sum.
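Indeed, the directness can be seen as follows (a brief elaboration, using only (\ref{L*=Lr=Ll})): by (\ref{L*=Lr=Ll}), we may rewrite \begin{eqnarray*} \Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))=\{ f\in \Omega_{m,n}(\mathfrak{D}_{(-1)}(W))\ |\ (L^{*}(1)+L^{*}(0))f=(n-m)f\}, \end{eqnarray*} so that for fixed $m$, the subspaces $\Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))$ with $n\in \N$ lie in eigenspaces of $L^{*}(1)+L^{*}(0)$ with pairwise distinct eigenvalues $n-m$.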
For $m\in \N$, set \begin{eqnarray} \Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W)) =\bigoplus_{n\in \N}\Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W)) \subset \mathfrak{D}_{(-1)}(W). \end{eqnarray} Note that for homogeneous vector $u\in V$ and for $p\in \Z$, $$[L_l^{[1]}(0), u^{R[-1]}_{\wt u-1+p}]=0\ \ \text{and }\ [L_r^{[-1]}(0), u^{R[-1]}_{\wt u-1+p}]=-pu^{R[-1]}_{\wt u-1+p}.$$ Combining this with Lemma \ref{lemma-omega-union}, we immediately have: \bp{lem-O-diamond-mBox} The subspace $\Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W))$ of $\mathfrak{D}_{(-1)}(W)$ is an $\N$-graded weak (sub)module of $(\mathfrak{D}_{(-1)}(W), (Y^R)^{[-1]})$ (and $(\mathfrak{D}_{(-1)}(W), Y^{R})$). Furthermore, we have \begin{eqnarray} u^{R[-1]}_{\wt u-1+p}\cdot \Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))\subset \Omega^{\diamond}_{m,n-p}(\mathfrak{D}_{(-1)}(W)) \end{eqnarray} for any homogeneous vector $u\in V$ and for any $p,n\in \Z$. \ep Next, we consider the dual case. Using (\ref{other-direction}), (\ref{O-diamond-A-diamond}), and Proposition \ref{lem-O-diamond-mBox}, by a straightforward argument we have the following result of \cite{dj1} (cf. Corollary \ref{coro-O-m-n}): \bl{v[p]-O'} Let $v\in V,\ p\in \Z,\ m,n\in \N$. Then \begin{eqnarray} v[p]\bar{*}_{m}^{n}O'_{n,m}(W)\subset O'_{n+p,m}(W), \end{eqnarray} where by definition $O'_{k,m}(W)=W$ for $k<0$ (so that $A_{k,m}^{\diamond}(W)^{*}=0=\Omega_{m,k}^{\diamond}(\mathfrak{D}_{(-1)}(W))$). \el The following definition is due to \cite{dj1}: \bd{def-A-diamond} {\em Let $m\in \N$. Form an $\N$-graded vector space \begin{eqnarray} A^{\diamond}_{\Box,m}(W)=\bigoplus_{n\in \N}A^{\diamond}_{n,m}(W), \end{eqnarray} which is naturally a right $A_m(V)$-module.} \ed From (\ref{O-diamond-A-diamond}), $\Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W))$ is the graded dual space of $A^{\diamond}_{\Box,m}(W)$.
In the following we equip $A^{\diamond}_{\Box,m}(W)$ with an $\N$-graded weak $V$-module structure, realizing it as the contragredient module. \bd{def-v[p]-A-diamond} {\em Let $m\in \N$. For $v\in V,\ p\in \Z$, define a homogeneous operator $v[p]$ of degree $p$ on $A^{\diamond}_{\Box,m}(W)$ by \begin{eqnarray} v[p]\cdot (w+O'_{n,m}(W))=v[p]\bar{*}_{m}^{n}w +O'_{n+p,m}(W)\in A^{\diamond}_{n+p,m}(W) \end{eqnarray} for $n\in \Z,\ w\in W$. (Recall Lemma \ref{v[p]-O'}).} \ed Then we have the following result of \cite{dj1} (Remark 4.15) with a different proof: \bt{thm-last} Let $W$ be a weak $V$-module and let $m\in \N$. Define a linear map $$Y^{\diamond}(\cdot,x):\ V\rightarrow (\End A^{\diamond}_{\Box,m}(W))[[x,x^{-1}]]$$ by the condition that for homogeneous vector $v\in V$, \begin{eqnarray} Y^{\diamond}(v,x)=\sum_{p\in \Z}v[p]x^{p-\wt v}. \end{eqnarray} Then $(A^{\diamond}_{\Box,m}(W), Y^{\diamond})$ is a weak $V$-module, which coincides with the contragredient module of $\Omega^{\diamond}_{m,\Box}(\mathfrak{D}_{(-1)}(W))$. Furthermore, the action of $V$ commutes with the action of $A_m(V)$. \et \begin{proof} Let $f\in \Omega^{\diamond}_{m,n}(\mathfrak{D}_{(-1)}(W))=(A^{\diamond}_{n,m}(W))^{*}$ with $m,n\in \N$ and let $v\in V$. We claim \begin{eqnarray}\label{claim} \< (Y^R)^{[-1]}(v,x)f,w\>=\<f, Y^{\diamond}(e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w\> \end{eqnarray} for $w\in W$. Assume that $v$ is homogeneous.
Using Lemma \ref{lem-general-YR} we get \begin{eqnarray*} &&\<f,Y^{\diamond}(e^{zL(1)}(-z^{-2})^{L(0)}v,z^{-1})w\>\\ &=&\sum_{r\ge 0}\frac{1}{r!}z^{r-2\wt v}\<f,Y^{\diamond}(L(1)^r(-1)^{L(0)}v,z^{-1})w\>\\ &=&\sum_{r\ge 0}\frac{1}{r!}z^{r-2\wt v}\sum_{p\in \Z}\<f,(L(1)^r(-1)^{L(0)}v)[p]\bar{*}_{m}^nw\> z^{\wt (L(1)^rv)-p} \nonumber\\ &=&\sum_{r\ge 0}\sum_{p\in \Z}\sum_{i=0}^{n+|p|}\frac{1}{r!}z^{r-2\wt v}\binom{-m-p-1}{i}\nonumber\\ &&\hspace{0.5cm} \times \Res_x \frac{(1+x)^{\wt (L(1)^rv)+m}} {x^{p+m+i+1}} \<f, Y_W(L(1)^r(-1)^{L(0)}v,x)w\> z^{\wt (L(1)^rv)-p}\\ &=&\sum_{p\in \Z}\<f, \theta(v)[p]\bar{*}_{m}^nw\>z^{-\wt v-p} \\ &=&\sum_{p\in \Z}\<v^{R[-1]}_{\wt v-1+p}f, w\>z^{-\wt v-p} \\ &=&\< (Y^R)^{[-1]}(v,z)f,w\>. \end{eqnarray*} This proves (\ref{claim}). The theorem then follows from the result of \cite{fhl} formulated as Lemma \ref{lemma-graded-dual} below. \end{proof} The following is a variation of Theorem 5.2.1 and Proposition 5.3.1 \cite{fhl}: \bl{lemma-graded-dual} Let $V$ be a vertex operator algebra and let $E=\bigoplus_{n\in \N}E(n)$ be an $\N$-graded vector space equipped with a linear map $$Y_E(\cdot,x): V\rightarrow (\End E)[[x,x^{-1}]];\quad v\mapsto Y_E(v,x)=\sum_{p\in \Z}v_px^{-p-1}.$$ Let $E'=\bigoplus_{n\in \N}E(n)^{*}$ be the graded dual space of $E$ and let $$Y_E^{*}(\cdot,x): \ V\rightarrow (\End E')[[x,x^{-1}]];\ \ v\mapsto Y_E^{*}(v,x)$$ be a linear map such that $$\<Y_E^{*}(v,x)f,w\>=\<f, Y_E(e^{xL(1)}(-x^{-2})^{L(0)}v,x^{-1})w\>$$ for $v\in V,\ f\in E',\ w\in E$. Then $(E', Y_E^*)$ is an $\N$-graded weak $V$-module if and only if $(E,Y_E)$ is an $\N$-graded weak $V$-module. \el In the following, we discuss one of the main theorems in \cite{dj1} in the revised setup. Let $m\in \N$ and let $U$ be an $A_m(V)$-module. Follow \cite{dj1} to define \begin{eqnarray} A^{\diamond}_{\Box,m}(V)\otimes _{A_m(V)}U=\bigoplus_{n\in \N}A^{\diamond}_{n,m}(V)\otimes _{A_m(V)}U, \end{eqnarray} which is an $\N$-graded weak $V$-module.
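We shall also use the following standard Chu--Vandermonde identity (recorded here for the reader's convenience; it is valid for any $a\in \Z$ and $n,r\in \N$, as both sides are polynomial in $a$): \begin{eqnarray*} \sum_{j,k\ge 0,\; j+k=r}\binom{a}{j}\binom{n-a}{k}=\binom{n}{r}, \end{eqnarray*} which equals $\delta_{r,n}$ whenever $r\ge n$, since $\binom{n}{r}=0$ for $0\le n<r$.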
The following is part of Theorem 4.13 in \cite{dj1} with a suitably modified proof: \bl{lemma-U} We have $A^{\diamond}_{m,m}(V)\otimes_{A_m(V)}U=U$ as (left) $A_m(V)$-modules. Furthermore, $U$ generates $A^{\diamond}_{\Box,m}(V)\otimes _{A_m(V)}U$ as a weak $V$-module. \el \begin{proof} From definition, we see that $O'_{m,m}(V)=O_m(V)$ and hence $A^{\diamond}_{m,m}(V)=A_m(V)$ as vector spaces. Let $u,v\in V$ be homogeneous vectors. By Lemma \ref{left-right-diff} we have \begin{eqnarray*} u\bar{*}_{m}^nv-u\bar{*}_{m,n}v=\Res_x (1+x)^{\wt u-1}Y(u,x)v. \end{eqnarray*} On the other hand, recall from \cite{dlm-anv} that $$ u*_{m}v-v*_{m}u-\Res_x (1+x)^{\wt u-1}Y(u,x)v\in O_m(V).$$ From definition, $u\bar{*}_{m}^mv=u*_mv$, so $u\bar{*}_{m,m}v-v*_mu\in O_m(V)$. Thus $A^{\diamond}_{m,m}(V)=A_m(V)$ as $A_m(V)$-bimodules. Therefore, we have $$A^{\diamond}_{m,m}(V)\otimes_{A_m(V)}U=A_{m}(V)\otimes_{A_m(V)}U=U$$ as (left) $A_m(V)$-modules. Let $n\in \N$ and let $u\in V$ be any homogeneous vector. From (\ref{deform-dj}) we have \begin{eqnarray*} u[n-m]\bar{*}_{m}^{m}{\bf 1}&=&\Res_x \sum_{i=0}^{m+|n-m|}\binom{-n-1}{i} \frac{(1+x)^{m+\wt u}} {x^{n+i+1}}Y(u,x){\bf 1}\\ &=&\Res_x \sum_{i=0}^{m+|n-m|}\binom{-n-1}{i} \frac{(1+x)^{m+\wt u}} {x^{n+i+1}}e^{xL(-1)}u\\ &=&\Res_x \sum_{i=0}^{m+|n-m|}\sum_{j,k\ge 0}\binom{-n-1}{i} \binom{m+\wt u}{j} \frac{x^{j+k}}{x^{n+i+1}}\frac{1}{k!}L(-1)^{k}u. \end{eqnarray*} Noticing that for any homogeneous vector $v\in V$, $$L(-1)v\equiv (n-m-\wt v)v\ \ {\rm mod}\; O'_{n,m}(V),$$ we have \begin{eqnarray*} \frac{1}{k!}L(-1)^{k}u&\equiv &\frac{1}{k!}(n-m-(\wt u+k-1))\cdots (n-m-\wt u)u\ \ {\rm mod}\; O'_{n,m}(V)\\ &=&\binom{n-m-\wt u}{k}u \end{eqnarray*} for $k\ge 0$. Then \begin{eqnarray*} u[n-m]\bar{*}_{m}^{m}{\bf 1}\equiv \sum_{i=0}^{m+|n-m|}\sum_{j,k\ge 0}\binom{-n-1}{i}\binom{m+\wt u}{j}\binom{n-m-\wt u}{k}u \ \ {\rm mod}\; O'_{n,m}(V), \end{eqnarray*} where $n+i=j+k$ is assumed.
Note that for $r\ge n$, we have $$\sum_{j,k\ge 0,\; j+k=r}\binom{m+\wt u}{j}\binom{n-m-\wt u}{k}=\binom{n}{r}=\delta_{r,n}.$$ Thus $u[n-m]\bar{*}_{m}^{m}{\bf 1}\equiv u\ \ {\rm mod}\; O'_{n,m}(V)$. That is, $u[n-m]({\bf 1}+O'_{m,m}(V))=u+O'_{n,m}(V)$. This shows that $U$ generates $A^{\diamond}_{\Box,m}(V)\otimes_{A_m(V)}U$ as a weak $V$-module. \end{proof} The following is Theorem 4.13 in \cite{dj1} with a suitably modified proof: \bt{A-nm-W} Let $m\in \N$ and let $U$ be an $A_m(V)$-module. Suppose that $W$ is a weak $V$-module and $\psi: U\rightarrow \Omega_m(W)$ is an $A_m(V)$-module morphism. For $n\in \N$, define a bilinear map $F_{n,m}:\ V\times U\rightarrow W$ by \begin{eqnarray} F_{n,m}(v,w)=\Res_x x^{m-n-1} Y(x^{L(0)}v,x)\psi(w) \end{eqnarray} for $v\in V,\ w\in U$. Then $F_{n,m}(v,w)\in \Omega_n(W)$, $F_{n,m}(O'_{n,m}(V)\times U)=0$, and $F_{n,m}$ reduces to an $A_n(V)$-module morphism $F_{n,m}^{\diamond}:\ A^{\diamond}_{n,m}(V)\otimes _{A_m(V)}U\rightarrow \Omega_n(W)$. Define a linear map $$\tilde{\psi}:\ A^{\diamond}_{\Box,m}(V)\otimes _{A_m(V)}U\rightarrow W$$ by $\tilde{\psi}|_{A^{\diamond}_{n,m}(V)\otimes _{A_m(V)}U}=F^{\diamond}_{n,m}$ for $n\in \N$. Then $\tilde{\psi}$ is a $V$-module morphism which is uniquely determined by the condition $\tilde{\psi}|_U=\psi$. \et \begin{proof} For $v\in V,\ w\in U$, as $\psi(w)\in \Omega_m(W)$, by Lemma \ref{lemma-omega-union}, we have $F_{n,m}(v,w)\in \Omega_n(W)$. Let $u,v\in V$ be homogeneous vectors and let $w\in U$. Recall $$u\circ_{m}^nv=\Res_z \frac{(1+z)^{\wt u+m}}{z^{m+n+2}}Y(u,z)v.$$ Noticing that $x^{\wt u+m}Y(u,x)\psi(w)\in W[[x]]$, from the Jacobi identity, we have \begin{eqnarray}\label{weak-assoc-Y0} (x_0+x_2)^{\wt u+m}Y(u,x_0+x_2)Y(v,x_2)\psi(w)=(x_2+x_0)^{\wt u+m}Y(Y(u,x_0)v,x_2)\psi(w).
\end{eqnarray} Using this relation we get \begin{eqnarray*} &&F_{n,m}(u\circ_{m}^nv,w)\\ &=&\Res_x\Res_z x^{m-n-1}\frac{(1+z)^{\wt u+m}}{z^{m+n+2}} Y\left(x^{L(0)}Y(u,z)v,x\right)\psi(w)\\ &=&\Res_x\Res_z x^{m-n-1+\wt u+\wt v}\frac{(1+z)^{\wt u+m}}{z^{m+n+2}} Y\left(Y(u,xz)v,x\right)\psi(w)\\ &=&\Res_{z_0}\Res_{x}x^{\wt v+m}\frac{(x+z_0)^{\wt u+m}}{z_0^{m+n+2}}Y(Y(u,z_0)v,x)\psi(w)\\ &=&\Res_{z_0}\Res_{x}x^{\wt v+m}\frac{(z_0+x)^{\wt u+m}}{z_0^{m+n+2}}Y(u,z_0+x)Y(v,x)\psi(w)\\ &=&0 \end{eqnarray*} as $x^{\wt v+m}Y(v,x)\psi(w)\in W[[x]]$. On the other hand, we have \begin{eqnarray*} &&F_{n,m}((L(-1)+L(0)+m-n)v,w)\\ &=&\Res_x x^{m-n-1}Y\left(x^{L(0)}(L(-1)+L(0)+m-n)v,x\right)\psi(w)\\ &=&\Res_x x^{m-n+\wt v}Y(L(-1)v,x)\psi(w) +\Res_x x^{m-n-1+\wt v}Y((L(0)+m-n)v,x)\psi(w)\\ &=&\Res_x x^{m-n+\wt v}\frac{d}{dx}Y(v,x)\psi(w)+\Res_x (\wt v+m-n)x^{m-n-1+\wt v}Y(v,x)\psi(w)\\ &=&\Res_x \frac{d}{dx}\left(x^{m-n+\wt v}Y(v,x)\psi(w)\right)\\ &=&0. \end{eqnarray*} This shows that $F_{n,m}$ vanishes on $O'_{n,m}(V)\times U$.
Noticing that as $\psi(w)\in \Omega_m(W)$, $$x^{m-n-1+\wt v+i}Y(v,x)\psi(w)\in W[[x]]$$ for $i>n$, using Lemma \ref{left-right-diff} we have \begin{eqnarray*} &&F_{n,m}(u\bar{*}_{m,n}v,w)\\ &=&F_{n,m}(u\bar{*}_{m}^{n}v,w)-\Res_z(1+z)^{\wt u-1}F_{n,m}(Y(u,z)v,w)\\ &=&\Res_{x}\Res_{z}\sum_{i=0}^{n}\binom{-m-1}{i}\frac{(1+z)^{\wt u+m}}{z^{m+i+1}}x^{m-n-1}Y(x^{L(0)}Y(u,z)v,x)\psi(w)\\ &&-\Res_{x}\Res_{z}(1+z)^{\wt u-1}x^{m-n-1}Y(x^{L(0)}Y(u,z)v,x)\psi(w)\\ &=&\Res_{x}\Res_{z}\sum_{i=0}^{n}\binom{-m-1}{i}\frac{(1+z)^{\wt u+m}}{z^{m+i+1}}x^{m-n-1+\wt u+\wt v}Y(Y(u,xz)v,x)\psi(w)\\ &&-\Res_{x}\Res_{z}x^{m-n-1+\wt u+\wt v}(1+z)^{\wt u-1}Y(Y(u,xz)v,x)\psi(w) \end{eqnarray*} and \begin{eqnarray*} &&F_{n,m}(v,u\cdot w)\\ &=&\Res_{x_1}\Res_{x}x_1^{\wt u-1} x^{m-n-1+\wt v}Y(v,x)Y(u,x_1)\psi(w)\\ &=&\Res_{x_1}\Res_{x}x_1^{\wt u-1} x^{m-n-1+\wt v}Y(u,x_1)Y(v,x)\psi(w)\\ &&-\Res_{x_0}\Res_{x} x^{m-n-1+\wt v}(x+x_0)^{\wt u-1} Y(Y(u,x_0)v,x)\psi(w)\\ &=&\Res_{x_0}\Res_{x}x^{m-n-1+\wt v}(x_0+x)^{\wt u-1} Y(u,x_0+x)Y(v,x)\psi(w)\\ &&-\Res_{z}\Res_{x}x^{m-n-1+\wt u+\wt v}(1+z)^{\wt u-1} Y(Y(u,xz)v,x)\psi(w)\\ &=&\sum_{i=0}^n\binom{-m-1}{i}\Res_{x_0}\Res_{x} x_0^{-m-1-i}x^{m-n-1+\wt v+i}\\ &&\quad \cdot [(x_0+x)^{\wt u+m}Y(u,x_0+x)Y(v,x)\psi(w)]\\ &&-\Res_{z}\Res_{x}x^{m-n-1+\wt u+\wt v}(1+z)^{\wt u-1} Y(Y(u,xz)v,x)\psi(w)\\ &=&\sum_{i=0}^n\binom{-m-1}{i}\Res_{x_0}\Res_{x} x_0^{-m-1-i}x^{m-n-1+\wt v+i}(x+x_0)^{\wt u+m}Y(Y(u,x_0)v,x)\psi(w)\\ &&-\Res_{z}\Res_{x}x^{m-n-1+\wt u+\wt v}(1+z)^{\wt u-1} Y(Y(u,xz)v,x)\psi(w)\\ &=&\sum_{i=0}^n\binom{-m-1}{i}\Res_{z}\Res_{x} x^{m-n-1+\wt u+\wt v}\frac{(1+z)^{\wt u+m}}{z^{m+i+1}}Y(Y(u,xz)v,x)\psi(w)\\ &&-\Res_{z}\Res_{x}x^{m-n-1+\wt u+\wt v}(1+z)^{\wt u-1} Y(Y(u,xz)v,x)\psi(w). \end{eqnarray*} Consequently, we get \begin{eqnarray}\label{left-right-comm} F_{n,m}(u\bar{*}_{m,n}v,w)=F_{n,m}(v,u\cdot w). 
\end{eqnarray} On the other hand, we have \begin{eqnarray}\label{4terms-relation} &&F_{n,m}(v,u\cdot w)\nonumber\\ &=&\Res_{x_1}\Res_{x}x_1^{\wt u-1} x^{m-n-1+\wt v}Y(v,x)Y(u,x_1)\psi(w)\nonumber\\ &=&\Res_{x_1}\Res_{x}x_1^{\wt u-1} x^{m-n-1+\wt v}Y(u,x_1)Y(v,x)\psi(w)\nonumber\\ &&-\Res_{x_0}\Res_{x} x^{m-n-1+\wt v}(x+x_0)^{\wt u-1}Y(Y(u,x_0)v,x)\psi(w)\nonumber\\ &=&u\cdot F_{n,m}(v,w) -\Res_{z}\Res_{x} x^{m-n-1+\wt u+\wt v}(1+z)^{\wt u-1}Y(Y(u,xz)v,x)\psi(w)\nonumber\\ &=&u\cdot F_{n,m}(v,w) -\Res_{z}\Res_{x} x^{m-n-1}(1+z)^{\wt u-1} Y(x^{L(0)}Y(u,z)v,x)\psi(w)\nonumber\\ &=&u\cdot F_{n,m}(v,w)+F_{n,m}(u\bar{*}_{m,n}v,w)-F_{n,m}(u\bar{*}_{m}^nv,w). \end{eqnarray} Combining this with (\ref{left-right-comm}) we get \begin{eqnarray} u\cdot F_{n,m}(v,w)=F_{n,m}(u\bar{*}_{m}^nv,w). \end{eqnarray} Therefore, $F_{n,m}$ reduces to an $A_n(V)$-module morphism $F_{n,m}^{\diamond}:\ A^{\diamond}_{n,m}(V)\otimes _{A_m(V)}U\rightarrow \Omega_n(W)$. Now, using $F^{\diamond}_{n,m}$ for all $n\in \N$ we get a linear map $$\tilde{\psi}:\ A^{\diamond}_{\Box,m}(V)\otimes _{A_m(V)}U\rightarrow W.$$ Let $u,v\in V$ be homogeneous and let $w\in U,\ p\in \Z$.
Noticing that $$x^{m-n-1+i}Y(x^{L(0)}v,x)\psi(w)\in W[[x]]$$ for all $i>n+|p|\ (\ge n)$, we have \begin{eqnarray*} &&\Res_{x_1}x_1^{\wt u-1-p}Y(u,x_1)F_{n,m}(v,w)\\ &=&\Res_{x_1}\Res_{x}x_1^{\wt u-1-p}x^{m-n-1}Y(u,x_1)Y(x^{L(0)}v,x)\psi(w)\\ &=&\Res_{x_0}\Res_{x}(x_0+x)^{\wt u-1-p}x^{m-n-1}Y(u,x_0+x)Y(x^{L(0)}v,x)\psi(w)\\ &=&\Res_{x_0}\Res_{x}\sum_{i=0}^{n+|p|}\binom{-m-p-1}{i}x_0^{-m-p-1-i}x^{m-n-1+i}\\ &&\quad \quad \cdot (x_0+x)^{\wt u+m}Y(u,x_0+x)Y(x^{L(0)}v,x)\psi(w)\\ &=&\Res_{x_0}\Res_{x}\sum_{i=0}^{n+|p|}\binom{-m-p-1}{i}x_0^{-m-p-1-i}x^{m-n-1+i}\\ &&\quad \quad \cdot (x+x_0)^{\wt u+m}Y(Y(u,x_0)x^{L(0)}v,x)\psi(w)\\ &=&\Res_{x_0}\Res_{x}\sum_{i=0}^{n+|p|}\binom{-m-p-1}{i}x_0^{-m-p-1-i}x^{m-n-1+i}\\ &&\quad \quad \cdot (x+x_0)^{\wt u+m}Y(x^{L(0)}Y(x^{-L(0)}u,x_0/x)v,x)\psi(w)\\ &=&\Res_{z}\Res_{x}\sum_{i=0}^{n+|p|}\binom{-m-p-1}{i}x^{m-n-1-p}\frac{(1+z)^{\wt u+m}}{z^{m+p+1+i}}Y(x^{L(0)}Y(u,z)v,x)\psi(w)\\ &=&\Res_{x}x^{m-n-1-p}Y(x^{L(0)}(u[p]\bar{*}_m^nv),x)\psi(w)\\ &=&\Res_{x_1}x_1^{\wt u-1-p}F_{n+p,m}\left(Y^{\diamond}(u,x_1)(v+O'_{n,m}(V)),w\right). \end{eqnarray*} This proves that $\tilde{\psi}$ is a $V$-module homomorphism. The uniqueness is clear. \end{proof} \end{document}
\begin{document} \maketitle \begin{abstract} We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm previously shown to outperform a number of algorithms for this task. Unlike many previous algorithms for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. We show that our algorithm benefits from a solid theoretical foundation and more favorable learning bounds than discrepancy minimization. We present a detailed description of our algorithm and give several efficient solutions for solving its optimization problem. We also report the results of several experiments showing that it outperforms discrepancy minimization. \end{abstract} \section{Introduction} A common problem arising in a variety of applications such as natural language processing and computer vision is that of \emph{domain adaptation} \citep{Dredze07Frustratingly, Blitzer07Biographies,jiang-zhai07, LegetterWoodlang,Rosenfeld96}: quite often little or no labeled data from the \emph{target domain} is at one's disposal, but labeled data from a \emph{source domain} somewhat similar to the target, as well as a relatively large amount of unlabeled data from the target domain, are available. The problem then consists of using the source labeled and target unlabeled data, and possibly a small amount of labeled data from the target, to derive a hypothesis performing well on the target domain. This problem is challenging both from the theoretical and algorithmic point of view since its scenario does not match the standard assumption of a fixed distribution for training and test points adopted in much of learning theory and algorithmic design. A theoretical analysis of the problem of adaptation has been developed over the last few years.
This includes generalization bounds based on a notion of \emph{discrepancy}, or \emph{$d_A$-distance} in the special case of a binary classification loss, which emerges as the natural measure of the difference of distributions for adaptation \citep{MansourMohriRostamizadeh2009,BenDavidBlitzerCrammerPereira2006,blitzer,CortesMohri2011}. The notion of discrepancy has also been shown to be relevant in the analysis of the related problem of drifting distributions \citep{drift}. Tighter bounds than those of \citet{MansourMohriRostamizadeh2009} are given by \citet{drift} via the use of the ${\mathcal Y}$-discrepancy, a finer notion of discrepancy that depends on the labels and which therefore cannot be estimated from unlabeled data. The same quantity was also later used by \citet{ZhangZhangYe2012} under the name of \emph{integral probability metric} for the analysis of domain adaptation and multitask learning. A PAC-Bayesian study of domain adaptation was also recently presented by \citet{germain2013pac} based on a weighted version of the discrepancy. Several negative results have also been given for the problem of adaptation \citep{shai2010,BenDavidUrner2012}. These results give worst-case lower bounds on the sample size of domain adaptation: as stated by the authors, the problem becomes intractable when the hypothesis set does not contain any candidate achieving a good performance on the training set. In particular, for the counterexample presented by \citet{BenDavidUrner2012}, the best-in-class classification error with respect to the source distribution is only one half. It should be clear that adaptation cannot be successful in such cases since the only information available to the learner about the labeling function is through the training data. These results suggest that, as expected, adaptation cannot always be successful. Nevertheless, there are various favorable conditions under which an adaptation algorithm can succeed.
In particular, recently, a \emph{discrepancy minimization} (DM) algorithm was introduced by \citet*{MansourMohriRostamizadeh2009} and further studied and enhanced by \citet{CortesMohri2011,CortesMohri2013}, which was shown both to perform well in a number of adaptation and sample bias correction tasks and to match or exceed the performance of several algorithms, including KLIEP \citep{SugiyamaNakajimaKashimaVonBunauKawanabe2008}, KMM \citep{HuangSmolaGrettonBorgwardtScholkopf2006} and a two-stage algorithm of \citet{BickelBrucknerScheffer2009}. In addition to its favorable empirical performance, the DM algorithm benefits from a series of pointwise loss guarantees for the general class of kernel-based regularization algorithms in terms of the empirical discrepancy and a term that depends on the closeness of the labeling function to the hypothesis set over the samples \citep{CortesMohri2013}. One critical advantage of the DM algorithm over previous algorithms is that the reweighting of the losses on the training points takes into account both the loss function and the hypothesis set, which are both ignored in the design of other methods. One shortcoming of the DM algorithm, however, is that it seeks to reweight the loss on the training samples to minimize a quantity defined as a maximum over \emph{all} pairs of hypotheses, including hypotheses that the learning algorithm might not consider as candidates. Thus, the algorithm tends to be too conservative. We present an alternative, theoretically well-founded algorithm for domain adaptation that is based on minimizing a finer quantity, the \emph{generalized discrepancy}, and that seeks to improve upon DM. Unlike many previous algorithms for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the weights assigned to training sample losses vary as a function of the hypothesis $h$.
This helps us ensure that, for every hypothesis $h$, the empirical loss on the source distribution is as close as possible to the empirical loss on the target distribution for that particular $h$. We describe the learning scenario considered (Section~\ref{sec:scenario}), then present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem (Section~\ref{sec:algorithm}). Next, we analyze the theoretical properties of our algorithm and show that it benefits from more favorable learning guarantees than the DM algorithm (Section~\ref{sec:guarantees}). This includes a study of the scenario in which some small amount of labeled data from the target domain is available, which may in fact be the most realistic setting for adaptation. In Section~\ref{sec:optimization}, we analyze the optimization problem defining our algorithm and derive an equivalent form that can be handled by a standard convex optimization solver. In Section~\ref{sec:experiments}, we report the results of experiments demonstrating that our algorithm outperforms the DM algorithm in several tasks. \section{Learning scenario} \label{sec:scenario} This section defines the learning scenario of domain adaptation we consider, which coincides with that of \citet{blitzer}, \citet{MansourMohriRostamizadeh2009} and \citet{CortesMohri2013}, and introduces the definitions and concepts needed for the following sections. For the most part, we follow the definitions and notation of \citet{CortesMohri2013}. Let ${\mathcal X}$ denote the input space and ${\mathcal Y} \subseteq \Rset$ the output space. We define a \emph{domain} as a pair formed by a distribution over ${\mathcal X}$ and a target labeling function mapping from ${\mathcal X}$ to ${\mathcal Y}$.
Throughout the paper, $(Q, f_Q)$ denotes the \emph{source domain} and $(P, f_P)$ the \emph{target domain}, with $Q$ the source and $P$ the target distribution over ${\mathcal X}$, while $f_Q, f_P \colon {\mathcal X} \to {\mathcal Y}$ are the source and target labeling functions, respectively. In the scenario of \emph{domain adaptation} we consider, the learner receives two samples: a labeled sample of $m$ points ${\mathcal S} = ((x_1, y_1), \ldots, (x_m, y_m)) \in ({\mathcal X} \times {\mathcal Y})^m$ from the source domain with $x_1, \ldots, x_m$ drawn i.i.d.\ according to $Q$ and $y_i = f_Q(x_i)$ for $i \in [1, m]$; and an unlabeled sample ${\mathcal T} = (x'_1, \ldots, x'_n)\in {\mathcal X}^n$ of size $n$ drawn i.i.d.\ according to the target distribution $P$. We denote by $\widehat Q$ the empirical distribution corresponding to $x_1, \ldots, x_m$ and by $\widehat P$ the empirical distribution corresponding to ${\mathcal T}$. We will also analyze a common scenario where, in addition to these two samples, the learner receives a small amount of labeled data from the target domain ${\mathcal T}' = ((x''_1, y''_1), \ldots, (x''_{s}, y''_{s})) \in ({\mathcal X} \times {\mathcal Y})^{s}$. We consider a loss function $L\colon {\mathcal Y} \times {\mathcal Y} \to \Rset_+$ jointly convex in its two arguments. The $L_p$ losses commonly used in regression and defined by $L_p(y, y') = |y' - y|^p$ for $p \geq 1$ are special instances of this definition. For any two functions $h, h'\colon {\mathcal X} \to {\mathcal Y}$ and any distribution $D$ over ${\mathcal X}$, we denote by ${\mathcal L}_D(h, h')$ the expected loss of $h(x)$ and $h'(x)$: ${\mathcal L}_D(h, h') = \E_{x \sim D} [L(h(x), h'(x))]$.
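As a concrete illustration of the quantity ${\mathcal L}_D(h, h')$ just defined, the following minimal sketch computes the expected loss for the uniform empirical distribution on a finite sample. The labeling function, hypothesis, and sample points below are made-up toy choices, not taken from the paper.

```python
# Toy illustration (not from the paper) of the empirical version of
# L_D(h, h') when D is the uniform empirical distribution on a sample.

def L2(y, y_prime):
    # squared loss: the L_p loss of the text with p = 2
    return (y - y_prime) ** 2

def empirical_loss(sample, h, h_prime, loss=L2):
    # L_D(h, h') = E_{x ~ D}[L(h(x), h'(x))] with D uniform on `sample`
    return sum(loss(h(x), h_prime(x)) for x in sample) / len(sample)

f_Q = lambda x: 2.0 * x          # hypothetical source labeling function
h = lambda x: 2.0 * x + 1.0      # hypothetical hypothesis, off by 1 everywhere
xs = [0.0, 1.0, 2.0]             # hypothetical sample of source points

print(empirical_loss(xs, h, f_Q))  # 1.0, since L(h(x), f_Q(x)) = 1 at each x
```

The same helper evaluates ${\mathcal L}_{\widehat Q}$ or ${\mathcal L}_{\widehat P}$, depending on which sample is passed in.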
The learning problem consists of selecting a hypothesis $h \in H$ out of a hypothesis set $H$ with a small expected loss ${\mathcal L}_P(h, f_P)$ with respect to the target domain. We further extend this notation to arbitrary functions ${\mathsf q}\colon {\mathcal X} \to \Rset$ with a finite support as follows: ${\mathcal L}_{\mathsf q}(h, h') = \sum_{x \in {\mathcal X}} {\mathsf q}(x) L(h(x), h'(x))$. \section{Algorithm} \label{sec:algorithm} In this section, we introduce our adaptation algorithm by first reviewing related previous work, next presenting the key idea behind the algorithm and deriving its general form, and finally by formulating it as a convex optimization problem. \subsection{Previous work} \label{subsec:discmin} It was shown by \citet{MansourMohriRostamizadeh2009} and \citet{CortesMohri2011} (see also the \emph{$d_A$-distance} \citep{BenDavidBlitzerCrammerPereira2006} in the case of a binary loss for classification) that a key measure of the difference of two distributions in the context of adaptation is the \emph{discrepancy}. Given a hypothesis set $H$, the discrepancy $\mathrm{disc}$ between two distributions $P$ and $Q$ over ${\mathcal X}$ is defined by: \begin{equation} \mathrm{disc}(P, Q) = \max_{h, h' \in H} \big| {\mathcal L}_{P}(h', h) - {\mathcal L}_{Q}(h', h) \big|. \end{equation} The discrepancy has several advantages over a measure such as the $L_1$ or total variation distance \citep{CortesMohri2013}: it is a finer measure than the $L_1$ distance, it takes into account the loss function and the hypothesis set, it can be accurately estimated from finite samples for common hypothesis sets such as kernel-based ones, and it is symmetric and verifies the triangle inequality. It further defines a distance in the case of an $L_p$ loss used with a universal kernel such as a Gaussian kernel.
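To make the definition of the discrepancy concrete, here is a small sketch that evaluates its empirical version by brute force over a tiny finite hypothesis set of threshold functions. The samples and hypotheses are invented for illustration; the paper works with kernel-based hypothesis sets, for which this maximum is computed very differently.

```python
# Brute-force empirical discrepancy over a tiny finite hypothesis set
# (toy data, not from the paper).
from itertools import product

def L1(y, y_prime):
    # absolute loss, the L_p loss with p = 1
    return abs(y - y_prime)

def emp_loss(sample, h, h_prime, loss=L1):
    # empirical L_D(h, h') for the uniform distribution on `sample`
    return sum(loss(h(x), h_prime(x)) for x in sample) / len(sample)

def discrepancy(sample_P, sample_Q, H, loss=L1):
    # disc(P_hat, Q_hat) = max over pairs (h, h') of |L_P(h, h') - L_Q(h, h')|
    return max(
        abs(emp_loss(sample_P, h, hp, loss) - emp_loss(sample_Q, h, hp, loss))
        for h, hp in product(H, repeat=2)
    )

def thresh(t):
    # hypothetical threshold hypothesis: predicts 1 for x >= t, else 0
    return lambda x: 1.0 if x >= t else 0.0

H = [thresh(0.5), thresh(1.5)]
P_sample = [1.0]   # made-up unlabeled target point
Q_sample = [2.0]   # made-up source point

# The two thresholds disagree at x = 1.0 but agree at x = 2.0,
# so the empirical discrepancy is 1.0.
print(discrepancy(P_sample, Q_sample, H))
```

Note that the maximum ranges over all pairs of hypotheses, which is precisely the point criticized later: the maximizing pair need not be among the candidates the learner actually considers.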
Several generalization bounds for adaptation in terms of the discrepancy have been given in the past \citep{BenDavidBlitzerCrammerPereira2006,MansourMohriRostamizadeh2009,CortesMohri2011,CortesMohri2013}, including pointwise guarantees in the case of kernel-based regularization algorithms, a class which includes algorithms such as support vector machines (SVM), kernel ridge regression, and support vector regression (SVR). The bounds given in \citep{MansourMohriRostamizadeh2009} motivated a \emph{discrepancy minimization} algorithm. Given a positive semi-definite (PSD) kernel $K$, the hypothesis returned by the algorithm is the solution of the following optimization problem: \begin{equation} \label{eq:qmin-opt} \min_{h \in \mathbb{H}} \quad \lambda \| h \|_K^2 + {\mathcal L}_{\qmin} (h, f_Q), \end{equation} where $\| \cdot \|_K $ is the norm on the reproducing kernel Hilbert space $\mathbb{H}$ induced by the kernel $K$ and $\qmin$ is a distribution over the support of $\widehat Q$ such that $\qmin = \argmin_{{\mathsf q} \in {\mathcal Q}} \mathrm{disc}({\mathsf q}, \widehat P)$, where ${\mathcal Q}$ is the set of all distributions defined over the support of $\widehat Q$. Using $\qmin$ instead of $\widehat Q$ amounts to reweighting the loss on the training samples to minimize the discrepancy between the empirical distribution and $\widehat P$. Besides its theoretical motivation, this algorithm has been shown to outperform several other algorithms in a series of experiments carried out by \citet{CortesMohri2013}.
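The two-step DM pipeline just described can be sketched end to end in one dimension. The sketch below is only a schematic stand-in for the actual algorithm: it uses linear hypotheses $h_w(x) = wx$ with the squared loss, for which the discrepancy collapses to a difference of second moments, finds $\qmin$ by grid search rather than by the solvers used in practice, and then solves the reweighted ridge problem in closed form. All data and the slope grid are invented.

```python
# Schematic DM sketch (not the paper's implementation): 1-D linear
# hypotheses h_w(x) = w*x, squared loss, hypothetical data.

def disc_q(q, xs_src, xs_tgt, slopes):
    # With h_w(x) = w*x and the squared loss, L(h_w(x), h_w'(x)) = (w-w')^2 x^2,
    # so disc(q, P_hat) = max_(w,w') (w-w')^2 * |E_q[x^2] - E_{P_hat}[x^2]|.
    m2_q = sum(qi * x * x for qi, x in zip(q, xs_src))
    m2_p = sum(x * x for x in xs_tgt) / len(xs_tgt)
    spread = max((w - wp) ** 2 for w in slopes for wp in slopes)
    return spread * abs(m2_q - m2_p)

xs_src, xs_tgt = [1.0, 3.0], [2.0]      # made-up source/target points
ys_src = [x for x in xs_src]            # made-up source labels, f_Q(x) = x
slopes = [-1.0, 0.0, 1.0]               # tiny hypothesis set of slopes

# Step 1: qmin = argmin_q disc(q, P_hat), here by grid search over (a, 1-a).
grid = [i / 8 for i in range(9)]
qmin = min(((a, 1 - a) for a in grid),
           key=lambda q: disc_q(q, xs_src, xs_tgt, slopes))

# Step 2: reweighted ridge min_w lam*w^2 + sum_i q_i (w*x_i - y_i)^2,
# with closed-form solution w = sum q_i x_i y_i / (lam + sum q_i x_i^2).
lam = 1.0
w_dm = (sum(qi * x * y for qi, x, y in zip(qmin, xs_src, ys_src))
        / (lam + sum(qi * x * x for qi, x in zip(qmin, xs_src))))

print(qmin, w_dm)   # qmin matches the target second moment: E_q[x^2] = 4
```

Here $\qmin = (0.625, 0.375)$ exactly cancels the second-moment gap between source and target samples, driving the empirical discrepancy to zero before the regularized fit is performed.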
Observe that, by definition, the solution $\qmin$ of discrepancy minimization is obtained by minimizing a maximum over all pairs of hypotheses, that is $\max_{h, h' \in H} |{\mathcal L}_{\widehat P}(h, h') - {\mathcal L}_{\qmin}(h, h')|$. But the maximizing pair of hypotheses may not be among the candidates considered by the learning algorithm. Thus, a learning algorithm based on discrepancy minimization tends to be too conservative. \subsection{Main idea} Assume, as in several previous studies \citep{MansourMohriRostamizadeh2009,CortesMohri2013}, that the standard algorithm selected by the learner is regularized risk minimization over the Hilbert space $\mathbb{H}$ induced by a PSD kernel $K$. This covers a broad family of algorithms frequently used in applications. Ideally, that is, in the absence of a domain adaptation problem, the learner would have access to the labels of the points in ${\mathcal T}$. Therefore, he would return the hypothesis $h^*$ solution of the optimization problem $\min_{h \in \mathbb{H}} F(h)$, where $F$ is the convex function defined for all $h \in \mathbb{H}$ by \begin{equation} \label{eq:Pmin} F(h) = \lambda \| h \|_K^2 + {\mathcal L}_{\widehat P} (h, f_P), \end{equation} where $\lambda \geq 0$ is a regularization parameter. Thus, $h^*$ can be viewed as the \emph{ideal hypothesis}. In view of that, we can formulate our objective, in the \emph{presence} of a domain adaptation problem, as that of finding a hypothesis $h$ whose loss ${\mathcal L}_P(h, f_P)$ with respect to the target domain is as close as possible to ${\mathcal L}_P(h^*, f_P)$. To do so, we will in fact seek a hypothesis $h$ that is as close as possible to $h^*$, which would imply the closeness of the losses with respect to the target domain.
We do not have access to $f_P$ and can only access the labels of the training sample ${\mathcal S}$. Thus, we must resort to using in our objective function, instead of ${\mathcal L}_{\widehat P} (h, f_P)$, a reweighted empirical loss over the training sample ${\mathcal S}$. The main idea behind our algorithm is to define, for any $h \in \mathbb{H}$, a reweighting function ${\mathsf Q}_h\colon {\mathcal S}_{\mathcal X} = \set{x_1, \ldots, x_m} \to \Rset$ such that the objective function $G$ defined for all $h \in \mathbb{H}$ by \begin{equation} \label{eq:qhmin} G(h) = \lambda \| h \|_K^2 + {\mathcal L}_{{\mathsf Q}_h} (h, f_Q) \end{equation} is uniformly close to $F$, thereby resulting in close minimizers. Since the first terms of \eqref{eq:Pmin} and \eqref{eq:qhmin} coincide, the idea consists equivalently of seeking ${\mathsf Q}_h$ such that ${\mathcal L}_{{\mathsf Q}_h}(h, f_Q) $ and ${\mathcal L}_{\widehat P}(h, f_P)$ be as close as possible. Observe that this departs from the standard reweighting methods: instead of reweighting the training sample with some fixed set of weights, we allow the weights to vary as a function of the hypothesis $h$. Note that we have further relaxed the condition commonly adopted by reweighting techniques that the weights must be non-negative and sum to one. Allowing the weights to be in a richer space than the space of probabilities over ${\mathcal S}_{\mathcal X}$ could raise over-fitting concerns, but we will later see that this in fact does not affect our learning guarantees and leads to excellent empirical results.
Of course, searching for ${\mathsf Q}_h$ to directly minimize $|{\mathcal L}_{{\mathsf Q}_h}(h, f_Q) - {\mathcal L}_{\widehat P}(h, f_P)|$ is in general not possible since we do not have access to $f_P$, but it is instructive to consider the imaginary case where the average loss ${\mathcal L}_{\widehat P}(h, f_P)$ is known to us for any $h \in \mathbb{H}$. ${\mathsf Q}_h$ could then be determined via \begin{equation} \label{eq:qh} {\mathsf Q}_h = \argmin_{{\mathsf q} \in {\mathcal F}({\mathcal S}_{\mathcal X}, \Rset)} | {\mathcal L}_{\mathsf q}(h, f_Q) - {\mathcal L}_{\widehat P}(h, f_P)|, \end{equation} where ${\mathcal F}({\mathcal S}_{\mathcal X}, \Rset)$ is the set of real-valued functions defined over ${\mathcal S}_{\mathcal X}$. For any $h$, we can in fact select ${\mathsf Q}_h$ such that ${\mathcal L}_{{\mathsf Q}_h}(h, f_Q) = {\mathcal L}_{\widehat P}(h, f_P)$ since ${\mathcal L}_{\mathsf q}(h, f_Q)$ is a linear function of ${\mathsf q}$ and thus the optimization problem \eqref{eq:qh} reduces to solving a simple linear equation. With this choice of ${\mathsf Q}_h$, the objective functions $F$ and $G$ coincide and by minimizing $G$ we can recover the ideal solution $h^*$. Note that, in general, the DM algorithm could not recover that ideal solution.
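The remark that this reduces to a linear equation can be illustrated directly: since ${\mathcal L}_{\mathsf q}(h, f_Q)$ is linear in ${\mathsf q}$, rescaling any weights that reach a nonzero loss hits a prescribed target value exactly. The sketch below does this by rescaling uniform weights; the hypothesis, labels, and the "known" value of ${\mathcal L}_{\widehat P}(h, f_P)$ are all invented for the imaginary case discussed in the text, and, as noted there, the resulting ${\mathsf Q}_h$ need not be a probability distribution.

```python
# Toy illustration of solving L_q(h, f_Q) = l for q by rescaling
# uniform weights (imaginary case where l = L_{P_hat}(h, f_P) is known).

def sq(y, yp):
    return (y - yp) ** 2

def weighted_loss(q, xs, h, f, loss=sq):
    # L_q(h, f) = sum_i q_i * L(h(x_i), f(x_i)); q need not sum to one
    return sum(qi * loss(h(x), f(x)) for qi, x in zip(q, xs))

f_Q = lambda x: x          # hypothetical source labeling function
h = lambda x: x + 2.0      # a fixed hypothesis (loss 4 at every point)
xs = [0.0, 1.0]            # made-up source points

target_loss = 1.0          # imaginary known value of L_{P_hat}(h, f_P)
base = weighted_loss([1 / len(xs)] * len(xs), xs, h, f_Q)   # = 4.0
q_h = [(target_loss / base) / len(xs)] * len(xs)            # rescaled weights

print(weighted_loss(q_h, xs, h, f_Q))  # 1.0, matching target_loss exactly
```

The rescaling requires the base loss to be nonzero; when it vanishes, any weights already solve the equation for a zero target.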
Even a finer discrepancy minimization algorithm exploiting the knowledge of $ {\mathcal L}_{\widehat P}(h, f_P)$ for all $h$ and seeking a distribution ${\mathsf q}'_\text{min}$ minimizing $\max_{h \in H} | {\mathcal L}_{\mathsf q}(h, f_Q) - {\mathcal L}_{\widehat P}(h, f_P)|$ could not, in general, recover the ideal solution, since we could not have ${\mathcal L}_{{\mathsf q}'_\text{min}}(h, f_Q) = {\mathcal L}_{\widehat P}(h, f_P)$ for all $h \in \mathbb{H}$. Of course, in practice access to ${\mathcal L}_{\widehat P}(h, f_P)$ is infeasible since the sample ${\mathcal T}$ is unlabeled. Instead, we will consider a non-empty convex set of candidate hypotheses $H'' \subseteq H$ that could contain a good approximation of $f_P$. Using $H''$ as a set of surrogate labeling functions leads to the following definition of ${\mathsf Q}_h$ instead of \eqref{eq:qh}: \begin{equation} \label{eq:agnosqh} {\mathsf Q}_h = \argmin_{{\mathsf q} \in {\mathcal F}({\mathcal S}_{\mathcal X}, \Rset)} \max_{h'' \in H''}| {\mathcal L}_{\mathsf q}(h, f_Q) - {\mathcal L}_{\widehat P}(h, h'') |. \end{equation} The choice of the subset $H''$ is of course key. A detailed analysis of this choice is presented in Section~\ref{sec:guarantees}. We present the formulation of the optimization problem for an arbitrary choice of the convex subset $H''$. \subsection{Formulation of the optimization problem} \label{sec:formulation-optimization} The following result gives a more explicit expression for ${\mathcal L}_{{\mathsf Q}_h}(h, f_Q)$, leading to a simpler formulation of the optimization problem defining our algorithm.
\begin{proposition} \label{prop:maxmin} For any $h \in \mathbb{H}$, let ${\mathsf Q}_h$ be defined by \eqref{eq:agnosqh}. Then, the following identity holds for any $h \in \mathbb{H}$: \begin{equation*} {\mathcal L}_{{\mathsf Q}_h}(h, f_Q) = \frac{1}{2} \Big(\max_{h'' \in H''} {\mathcal L}_{\widehat P}(h, h'') + \min_{h'' \in H''}{\mathcal L}_{\widehat P}(h, h'') \Big). \end{equation*} \end{proposition} \begin{proof} For any $h \in \mathbb{H}$, the equation ${\mathcal L}_{\mathsf q}(h, f_Q) = l$ with $l \in \Rset$ admits a solution ${\mathsf q} \in {\mathcal F}({\mathcal S}_{\mathcal X}, \Rset)$. Thus, for any $h \in \mathbb{H}$, we can write \begin{align*} {\mathcal L}_{{\mathsf Q}_h}(h, f_Q) & = \argmin_{\substack{l \in \set{{\mathcal L}_{\mathsf q}(h, f_Q): {\mathsf q} \in {\mathcal F}({\mathcal S}_{\mathcal X}, \Rset)}}} \max_{h'' \in H''}| l - {\mathcal L}_{\widehat P}(h, h'')| \\ & = \argmin_{l \in \Rset} \max_{h'' \in H''}| l - {\mathcal L}_{\widehat P}(h, h'') | \\ & = \argmin_{l \in \Rset} \max_{h''\in H''} \max \Big \{ {\mathcal L}_{\widehat P}(h, h'') - l, l - {\mathcal L}_{\widehat P}(h, h'') \Big \}\\ & = \argmin_{l \in \Rset} \max \Big \{ \max_{h''\in H''} {\mathcal L}_{\widehat P}(h, h'') - l, l - \min_{h''\in H'' } {\mathcal L}_{\widehat P}(h, h'') \Big \}\\ & = \frac{1}{2} \Big(\max_{h'' \in H''} {\mathcal L}_{\widehat P}(h, h'') + \min_{h'' \in
H''}{\mathcal L}_{\widehat P}(h, h'') \Big), \end{align*} since the minimizing $l$ is obtained for $\max_{h''\in H''} {\mathcal L}_{\widehat P}(h, h'') - l = l - \min_{h''\in H'' }{\mathcal L}_{\widehat P}(h, h'')$. \end{proof} In view of this proposition, with our choice of ${\mathsf Q}_h$ based on \eqref{eq:agnosqh}, the objective function $G$ of our algorithm \eqref{eq:qhmin} can be equivalently written for all $h \in \mathbb{H}$ as follows: \begin{equation} \label{eq:optmaxmin} G(h) = \lambda\| h \|_K^2 + \frac{1}{2} \Big(\max_{h'' \in H''} {\mathcal L}_{\widehat P}(h, h'') + \min_{h'' \in H''}{\mathcal L}_{\widehat P}(h, h'')\Big). \end{equation} The function $h \mapsto \max_{h'' \in H''} {\mathcal L}_{\widehat P}(h, h'')$ is convex as a pointwise maximum of the convex functions $h \mapsto {\mathcal L}_{\widehat P}(h, h'')$. Since the loss function $L$ is jointly convex, so is ${\mathcal L}_{\widehat P}$; therefore, the function obtained by partial minimization over the non-empty convex set $H''$ in one of the arguments, $h \mapsto \min_{h'' \in H''} {\mathcal L}_{\widehat P}(h, h'')$, also defines a convex function \citep{BoydVandenberghe2004}. Thus, $G$ is a convex function as a sum of convex functions. \section{Learning guarantees} \label{sec:guarantees} In this section, we present pointwise learning guarantees for our algorithm and show that they compare favorably to the previous guarantees given for the DM algorithm. More formally, we prove that there exists a family of convex sets with an element $H''$ yielding provably better guarantees for our algorithm.
Moreover, this family is parametrized by a single variable, therefore making the search for $H''$ tractable. As in previous work, we assume that the loss function $L$ is \emph{$\mu$-admissible}: there exists $\mu > 0$ such that \begin{equation} \label{eq:mu-admissible} |L(h(x), y) - L(h'(x), y)| \leq \mu |h(x) - h'(x)| \end{equation} holds for all $(x, y) \in {\mathcal X} \times {\mathcal Y}$ and $h', h \in H$, a condition that is somewhat weaker than $\mu$-Lipschitzness with respect to the first argument. The $L_p$ losses commonly used in regression, $p \geq 1$, verify this condition (see Appendix~\ref{app:muadmissible}). \subsection{Learning bounds and comparisons} The existing pointwise guarantees for the DM algorithm are directly derived from a bound on the norm of the difference of the ideal hypothesis $h^*$ and the hypothesis obtained after reweighting the sample losses using a distribution ${\mathsf q}$. The bound is expressed in terms of the discrepancy and a term $\eta_H(f_P, f_Q)$ measuring the difference of the source and target labeling functions, defined by \begin{equation} \eta_H(f_P, f_Q) = \min_{h_0 \in H} \Big(\max_{x \in \supp(\widehat P)} |f_P(x) - h_0(x)| + \max_{x \in \supp(\widehat Q)} |f_Q(x) - h_0(x)| \Big), \end{equation} and is given by the following theorem. \begin{theorem}[\citep{CortesMohri2013}] \label{th:disc} Let ${\mathsf q}$ be an arbitrary distribution over ${\mathcal S}_{\mathcal X}$ and let $h^*$ and $h_{\mathsf q}$ be the hypotheses minimizing $\lambda \norm{h}{K}^2 + {\mathcal L}_{\widehat P}(h, f_P)$ and $\lambda \norm{h}{K}^2 + {\mathcal L}_{{\mathsf q}}(h, f_Q)$ respectively.
Then, the following inequality holds: \begin{equation} \label{eq:disc} \lambda \norm{h^* - h_{\mathsf q}}{K}^2 \leq \mu \, \eta_H(f_P, f_Q) + \mathrm{disc}(\widehat P, {\mathsf q}). \end{equation} \end{theorem} The DM algorithm is defined by selecting the distribution ${\mathsf q}$ minimizing the right-hand side of the bound \eqref{eq:disc}, that is $\mathrm{disc}(\widehat P, {\mathsf q})$. We will show a result of the same nature for our hypothesis-dependent reweighting ${\mathsf Q}_h$ by showing that its choice also coincides with that of minimizing an upper bound on $\lambda \norm{h^* - h'}{K}^2$. Let ${\mathcal A}(H)$ be the set of all functions ${\mathsf U}\colon h \mapsto {\mathsf U}_h$ mapping $H$ to ${\mathcal F}({\mathcal S}_{\mathcal X}, \Rset)$ such that $h \mapsto {\mathcal L}_{{\mathsf U}_h}(h, f_Q)$ is a convex function. ${\mathcal A}(H)$ contains all constant functions ${\mathsf U}$ such that ${\mathsf U}_h = {\mathsf q}$ for all $h \in H$, where ${\mathsf q}$ is a distribution over ${\mathcal S}_{\mathcal X}$. By Proposition~\ref{prop:maxmin}, ${\mathcal A}(H)$ also includes the function ${\mathsf Q}\colon h \mapsto {\mathsf Q}_h$ used by our algorithm.
\begin{definition}[generalized discrepancy]
For any ${\mathsf U} \in {\mathcal A}(H)$, we define the \emph{generalized discrepancy} between $\widehat P$ and ${\mathsf U}$ as the quantity $\mathrm{DISC}(\widehat P, {\mathsf U})$ defined by
\begin{equation}
\label{eq:DIS}
\mathrm{DISC}(\widehat P, {\mathsf U}) = \max_{h \in H, h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf U}_h}(h, f_Q) |.
\end{equation}
\end{definition}

We also denote by $d_\infty^{\widehat P}(f_P, H'')$ the following distance of $f_P$ to $H''$ over the support of $\widehat P$:
\begin{equation}
\label{eq:delta}
d_\infty^{\widehat P}(f_P, H'') = \min_{h_0 \in H''} \max_{x \in \supp(\widehat P)} |h_0(x) - f_P(x)|.
\end{equation}
The following theorem gives an upper bound on the norm of the difference of the minimizing hypotheses in terms of the generalized discrepancy and $d_\infty^{\widehat P}(f_P, H'')$.

\begin{theorem}
\label{th:qhbound}
Let ${\mathsf U}$ be an arbitrary element of ${\mathcal A}(H)$ and let $h^*$ and $h_{\mathsf U}$ be the hypotheses minimizing $\lambda \norm{h}{K}^2 + {\mathcal L}_{\widehat P}(h, f_P)$ and $\lambda \norm{h}{K}^2 + {\mathcal L}_{{\mathsf U}_h}(h, f_Q)$ respectively. Then, the following inequality holds for any convex set $H'' \subseteq H$:
\begin{equation}
\label{eq:qhbound}
\lambda \norm{h^* - h_{\mathsf U}}{K}^2 \leq \mu \, d_\infty^{\widehat P}(f_P, H'') + \mathrm{DISC}(\widehat P, {\mathsf U}).
\end{equation}
\end{theorem}

\begin{proof}
Fix ${\mathsf U} \in {\mathcal A}(H)$ and let $G_{\widehat P}$ denote the function $h \mapsto {\mathcal L}_{\widehat P}(h, f_P)$ and $G_{\mathsf U}$ the function $h \mapsto {\mathcal L}_{{\mathsf U}_h}(h, f_Q)$. Since $h \mapsto \lambda \norm{h}{K}^2 + G_{\widehat P}(h)$ is convex and differentiable and since $h^*$ is its minimizer, its gradient vanishes at $h^*$, that is $2 \lambda h^* = -\nabla G_{\widehat P}(h^*)$. Similarly, since $h \mapsto \lambda \norm{h}{K}^2 + G_{\mathsf U}(h)$ is convex, it admits a sub-differential at any $h \in \mathbb{H}$. Since $h_{\mathsf U}$ is a minimizer, its sub-differential at $h_{\mathsf U}$ must contain $0$. Thus, there exists a sub-gradient $g_0 \in \partial G_{\mathsf U}(h_{\mathsf U})$ such that $2 \lambda h_{\mathsf U} = -g_0$, where $\partial G_{\mathsf U}(h_{\mathsf U})$ denotes the sub-differential of $G_{\mathsf U}$ at $h_{\mathsf U}$.
Using these two equalities, we can write
\begin{align*}
2 \lambda \norm{h^* - h_{\mathsf U}}{K}^2 & = \langle h^* - h_{\mathsf U}, g_0 - \nabla G_{\widehat P}(h^*) \rangle\\
& = \langle g_0, h^* - h_{\mathsf U} \rangle - \langle \nabla G_{\widehat P}(h^*), h^* - h_{\mathsf U} \rangle \\
& \leq G_{\mathsf U}(h^*) - G_{\mathsf U}(h_{\mathsf U}) + G_{\widehat P}(h_{\mathsf U}) - G_{\widehat P}(h^*) \\
& = {\mathcal L}_{\widehat P}(h_{\mathsf U}, f_P) - {\mathcal L}_{{\mathsf U}_{h_{\mathsf U}}}(h_{\mathsf U}, f_Q) + {\mathcal L}_{{\mathsf U}_{h^*}}(h^*, f_Q) - {\mathcal L}_{\widehat P}(h^*, f_P) \\
& \leq 2 \max_{h \in H} |{\mathcal L}_{\widehat P}(h, f_P) - {\mathcal L}_{{\mathsf U}_h}(h, f_Q)|,
\end{align*}
where we used, for the first inequality, the convexity of $G_{\mathsf U}$ combined with the sub-gradient property of $g_0 \in \partial G_{\mathsf U}(h_{\mathsf U})$, and the convexity of $G_{\widehat P}$.
For any $h \in H$, using the $\mu$-admissibility of the loss, we can upper bound the operand of the $\max$ operator as follows:
\begin{align*}
|{\mathcal L}_{\widehat P}(h, f_P) - {\mathcal L}_{{\mathsf U}_h}(h, f_Q)| & \leq |{\mathcal L}_{\widehat P}(h, f_P) - {\mathcal L}_{\widehat P}(h, h_0)| + |{\mathcal L}_{\widehat P}(h, h_0) - {\mathcal L}_{{\mathsf U}_h}(h, f_Q)| \\
& \leq \mu \E_{x \sim \widehat P}|f_P(x) - h_0(x)| + \max_{h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf U}_h}(h, f_Q)| \\
& \leq \mu \max_{x \in \supp(\widehat P)} | f_P(x) - h_0(x) | + \max_{h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf U}_h}(h, f_Q)|,
\end{align*}
where $h_0$ is an arbitrary element of $H''$. Since this bound holds for all $h_0 \in H''$, it follows immediately that
\begin{equation*}
\lambda \norm{h^* - h_{\mathsf U}}{K}^2 \leq \mu \min_{h_0 \in H''} \max_{x \in \supp(\widehat P)} | f_P(x) - h_0(x) | + \max_{h \in H} \max_{h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf U}_h}(h, f_Q) |,
\end{equation*}
which concludes the proof.
\end{proof}

Our algorithm is strongly motivated by the previous bound. Indeed, for a fixed set $H''$, our choice of ${\mathsf Q}$ precisely coincides with the choice of ${\mathsf U}_h$ minimizing the right-hand side of \eqref{eq:qhbound}, or equivalently the second term of the bound, since the first one does not vary with $h$ or ${\mathsf U}_h$.
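As a sanity check, the chain of inequalities in the proof can be verified numerically in a one-dimensional instance with linear hypotheses, the $L_2$ loss, and a constant reweighting ${\mathsf U}_h = {\mathsf q}$, for which both regularized minimizers have closed forms. All data below are synthetic and hypothetical, and the RKHS norm of $h(x) = wx$ is taken to be $|w|$ (linear kernel):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D instance: linear hypotheses h(x) = w*x, L2 loss,
# constant reweighting U_h = q, and the labeling map f(x) = -x + x^3.
f = lambda x: -x + x ** 3
x_tgt = rng.uniform(0.0, 0.25, 15); y_tgt = f(x_tgt)   # target sample (f_P)
x_src = rng.uniform(0.2, 1.0, 20);  y_src = f(x_src)   # source sample (f_Q)
p = np.full(x_tgt.size, 1 / x_tgt.size)
q = np.full(x_src.size, 1 / x_src.size)
lam = 0.1

def ridge_w(x, y, wts):
    """Closed-form minimizer of lam * w^2 + sum_i wts_i * (w * x_i - y_i)^2."""
    return np.dot(wts, x * y) / (lam + np.dot(wts, x * x))

w_star = ridge_w(x_tgt, y_tgt, p)   # h^*: trained on (P-hat, f_P)
w_q = ridge_w(x_src, y_src, q)      # h_q: trained on (q, f_Q)

def gap(w):
    """|L_{P-hat}(h, f_P) - L_q(h, f_Q)| for h(x) = w * x."""
    return abs(np.dot(p, (w * x_tgt - y_tgt) ** 2) - np.dot(q, (w * x_src - y_src) ** 2))

# From the proof: lambda * ||h^* - h_q||^2 <= max |L_{P-hat}(h, f_P) - L_q(h, f_Q)|,
# where the max needs only to be taken over {h^*, h_q} in this verification.
lhs = lam * (w_star - w_q) ** 2
rhs = max(gap(w_star), gap(w_q))
assert lhs <= rhs
```

Note that the intermediate step of the proof bounds the sum of the two bracketed differences by twice the maximum over $\{h^*, h_{\mathsf U}\}$ alone, which is why evaluating the gap at the two minimizers suffices for this check.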
This, however, does not by itself imply a better performance of our algorithm over DM. A natural question is therefore whether there exists a choice of $H''$ for which \eqref{eq:qhbound} is a uniformly tighter upper bound than \eqref{eq:disc}. The following theorem shows that, when using an $L_p$ loss, there exists a simple family of sets for which this property holds. The result is expressed in terms of the \emph{local discrepancy} defined by
\begin{equation*}
\label{eq:localdiscrepancy}
\mathrm{disc}_{H''}(\widehat P , {\mathsf q}) = \max_{h \in H, h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf q}}(h, h'')|,
\end{equation*}
which is a finer measure than the standard discrepancy, for which the $\max$ is taken over pairs of hypotheses \emph{both} in $H \supseteq H''$.

\begin{theorem}
\label{th:betterbound}
Let $L$ be the $L_p$ loss for some $p \geq 1$ and let $h_0^*$ be the minimizer in the definition of $\eta_H(f_P, f_Q)$: $h_0^* = \argmin_{h_0 \in H} \big(\max_{x \in \supp(\widehat P)} |f_P(x) - h_0(x)| + \max_{x \in \supp(\widehat Q)} |f_Q(x) - h_0(x)| \big)$. Define $r \geq 0$ by $r = \max_{x \in \supp(\widehat Q)} |f_Q(x) - h_0^*(x)|$. Let ${\mathsf q}$ be a distribution over ${\mathcal S}_{\mathcal X}$ and let $H''$ be defined by $H'' = \{h'' \in H \mid {\mathcal L}_{{\mathsf q}}(h'', f_Q) \leq r^p\}$.
Then, $h_0^* \in H''$ and the following inequality holds:
\begin{equation}
\label{eq:boundcomp}
\mu \, d_\infty^{\widehat P}(f_P, H'') + \max_{h \in H, h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf q}}(h, f_Q) | \leq \mu \, \eta_H(f_P, f_Q) + \mathrm{disc}_{H''}(\widehat P, {\mathsf q}).
\end{equation}
\end{theorem}

\begin{proof}
The fact that $h_0^* \in H''$ follows from
\begin{equation*}
{\mathcal L}_{{\mathsf q}}(h_0^*, f_Q) = \E_{x \sim {\mathsf q}}\big[ |h_0^*(x) - f_Q(x)|^p \big] \leq \max_{x \in \supp(\widehat Q)} |h_0^*(x) - f_Q(x)|^p \leq r^p.
\end{equation*}
By Lemma~\ref{lemma:holder}, for all $h, h'' \in H$, $| {\mathcal L}_{{\mathsf q}}(h, h'') - {\mathcal L}_{{\mathsf q}}(h, f_Q)| \leq \mu [{\mathcal L}_{{\mathsf q}}(h'', f_Q)]^{\frac{1}{p}}$.
In view of this inequality, we can write:
\begin{multline*}
\max_{h \in H, h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf q}}(h, f_Q) |\\
\begin{aligned}
& \leq \max_{h \in H, h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf q}}(h, h'') | + \max_{h \in H, h'' \in H''} | {\mathcal L}_{{\mathsf q}}(h, h'') - {\mathcal L}_{{\mathsf q}}(h, f_Q)| \\
& \leq \mathrm{disc}_{H''}(\widehat P, {\mathsf q}) + \max_{h'' \in H''} \mu [{\mathcal L}_{{\mathsf q}}(h'', f_Q)]^{\frac{1}{p}} \\
& \leq \mathrm{disc}_{H''}(\widehat P, {\mathsf q}) + \mu r \\
& = \mathrm{disc}_{H''}(\widehat P, {\mathsf q}) + \mu \max_{x \in \supp(\widehat Q)} |f_Q(x) - h_0^*(x)|.
\end{aligned}
\end{multline*}
Using this inequality and the fact that $h_0^* \in H''$, we can write
\begin{align*}
& \mspace{-45mu} \mu \, d_\infty^{\widehat P}(f_P, H'') + \max_{h \in H, h'' \in H''} |{\mathcal L}_{\widehat P}(h, h'') - {\mathcal L}_{{\mathsf q}}(h, f_Q) | \\
& \leq \mu \min_{h_0 \in H''} \max_{x \in \supp(\widehat P)} |f_P(x) - h_0(x)| + \mathrm{disc}_{H''}(\widehat P, {\mathsf q}) + \mu \max_{x \in \supp(\widehat Q)} |f_Q(x) - h_0^*(x)| \\
& \leq \mu \big(\max_{x \in \supp(\widehat P)} |f_P(x) - h^*_0(x)| + \max_{x \in \supp(\widehat Q)} |f_Q(x) - h^*_0(x)| \big) + \mathrm{disc}_{H''}(\widehat P, {\mathsf q})\\
& = \mu \min_{h_0 \in H} \big(\max_{x \in \supp(\widehat P)} |f_P(x) - h_0(x)| + \max_{x \in \supp(\widehat Q)} |f_Q(x) - h_0(x)| \big) + \mathrm{disc}_{H''}(\widehat P, {\mathsf q}) \\
& = \mu \, \eta_H(f_P, f_Q) + \mathrm{disc}_{H''}(\widehat P, {\mathsf q}),
\end{align*}
which concludes the proof.
\end{proof}

The theorem shows that, for that choice of $H''$ and for any constant function ${\mathsf U} \in {\mathcal A}(H)$ with ${\mathsf U}_h = {\mathsf q}$ for some fixed distribution ${\mathsf q}$ over ${\mathcal S}_{\mathcal X}$, the right-hand side of the bound of Theorem~\ref{th:disc} is lower bounded by the right-hand side of the bound of Theorem~\ref{th:qhbound}, since the local discrepancy is a finer quantity than the discrepancy: $\mathrm{disc}_{H''}(\widehat P , {\mathsf q}) \leq \mathrm{disc}(\widehat P , {\mathsf q})$. Thus, our algorithm benefits from a more favorable guarantee than the DM algorithm for this particular choice of $H''$, all the more so since our choice of ${\mathsf Q}$ is based on a minimization over all elements of ${\mathcal A}(H)$ and not just the subset of constant functions mapping to a distribution. The following corollary gives pointwise guarantees for the solution $h_{\mathsf Q}$ returned by our algorithm.

\begin{corollary}
\label{coro:pointwise}
Let $h^*$ be a minimizer of $\lambda \norm{h}{K}^2 + {\mathcal L}_{\widehat P}(h, f_P)$ and $h_{\mathsf Q}$ a minimizer of $\lambda \norm{h}{K}^2 + {\mathcal L}_{{\mathsf Q}_h}(h, f_Q)$. Then, the following holds for any convex set $H'' \subseteq H$:
\begin{equation}
\forall x \in {\mathcal X}, y \in {\mathcal Y}, \quad |L(h_{\mathsf Q}(x), y) - L(h^*(x), y)| \leq \mu R \sqrt{\frac{\mu \, d_\infty^{\widehat P}(f_P, H'') + \mathrm{DISC}(\widehat P, {\mathsf Q})}{\lambda}},
\end{equation}
where $R^2 = \sup_{x \in {\mathcal X}} K(x,x)$.
If further $L$ is an $L_p$ loss for some $p \geq 1$ and $H''$ is defined as in Theorem~\ref{th:betterbound}, then the following holds:
\begin{equation}
\forall x \in {\mathcal X}, y \in {\mathcal Y}, \quad |L(h_{\mathsf Q}(x), y) - L(h^*(x), y)| \leq \mu R \sqrt{\frac{\mu \, \eta_H(f_P, f_Q) + \mathrm{disc}_{H''}(\widehat P, \qmin)}{\lambda}}.
\end{equation}
\end{corollary}

\begin{proof}
By the $\mu$-admissibility of the loss, the reproducing property of $\mathbb{H}$, and the Cauchy-Schwarz inequality, the following holds for all $x \in {\mathcal X}$ and $y \in {\mathcal Y}$:
\begin{multline*}
|L(h_{\mathsf Q}(x), y) - L(h^*(x), y)| \leq \mu |h_{\mathsf Q}(x) - h^*(x)| = \mu |\langle h_{\mathsf Q} - h^*, K(x, \cdot) \rangle_K| \\
\leq \mu \|h_{\mathsf Q} - h^*\|_K \sqrt{K(x, x)} \leq \mu R \|h_{\mathsf Q} - h^*\|_K.
\end{multline*}
Upper bounding $\|h_{\mathsf Q} - h^*\|_K$ using the bound of Theorem~\ref{th:qhbound} and using the fact that ${\mathsf Q}$ is a minimizer of that bound over all choices of ${\mathsf U} \in {\mathcal A}(H)$ yields the desired result.
\end{proof}

The pointwise loss guarantees just presented can be directly used to bound the difference of the expected losses of $h^*$ and $h_{\mathsf Q}$ in terms of the same upper bounds, e.g.,
\begin{equation}
\label{eq:genbound}
{\mathcal L}_P(h_{\mathsf Q}, f_P) \leq {\mathcal L}_P(h^*, f_P) + \mu R \sqrt{\frac{\mu \, d_\infty^{\widehat P}(f_P, H'') + \mathrm{DISC}(\widehat P, {\mathsf Q})}{\lambda}}.
\end{equation}
The results presented in this section suggest selecting $H''$ to minimize the right-hand side of \eqref{eq:genbound}.
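For concreteness, once its ingredients have been estimated, the additive term on the right-hand side of \eqref{eq:genbound} is straightforward to evaluate; a minimal sketch, with all numerical values hypothetical:

```python
import math

def gdm_bound_term(mu, R, lam, d_inf, disc):
    """Additive term mu * R * sqrt((mu * d_inf + DISC) / lambda) of the bound."""
    return mu * R * math.sqrt((mu * d_inf + disc) / lam)

# Hypothetical values: mu = 2 for an L2 loss on bounded outputs, a normalized kernel (R = 1).
term = gdm_bound_term(mu=2.0, R=1.0, lam=0.1, d_inf=0.05, disc=0.02)
```

The term decreases as the generalized discrepancy or $d_\infty^{\widehat P}(f_P, H'')$ decrease, and grows as $\lambda^{-1/2}$, which is the trade-off driving the selection of $H''$.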
The space over which $H''$ is searched is the family of all balls centered at $f_Q$ defined in terms of ${\mathcal L}_{\qmin}$, which is parametrized only by the radius $r$. This is motivated by Theorem~\ref{th:betterbound}, which shows that this family contains choices of $H''$ with provably more favorable guarantees than that of the DM algorithm. Given a small amount of labeled data from the target domain (which is often available in practice), it can be used as a validation set to select the value of $r$ minimizing the bound of Corollary~\ref{coro:pointwise}.

\subsection{Scenario of additional labeled data}

Here, we consider a scenario that is rather common in practice where, in addition to the labeled sample ${\mathcal S}$ drawn from the source domain and the unlabeled sample ${\mathcal T}$ from the target domain, the learner receives a small amount of labeled data from the target domain, ${\mathcal T}' = ((x''_1, y''_1), \ldots, (x''_{s}, y''_{s})) \in ({\mathcal X} \times {\mathcal Y})^{s}$. This sample is typically too small to train an algorithm on its own and achieve good performance. However, it can be useful in at least two ways, which we discuss here. One important benefit of ${\mathcal T}'$ is to serve as a validation set to determine the parameter $r$ that defines the convex set $H''$ used by our algorithm. Another use of ${\mathcal T}'$ is to augment our algorithm to exploit the additional source of information it provides. Our learning guarantees can be extended to cover this case. Let $\widehat P'$ denote the empirical distribution associated to ${\mathcal T}'$.
To take advantage of ${\mathcal T}'$, our algorithm can be trained on the sample of size $(m + s)$ obtained by combining ${\mathcal S}$ and ${\mathcal T}'$, which corresponds to the new empirical distribution $\widehat Q' = \frac{m}{m + s} \widehat Q + \frac{s}{m + s} \widehat P'$. Note that for large values of $s$, $\widehat Q'$ essentially ignores the points from the source distribution $Q$, which corresponds to the standard supervised learning scenario in the absence of adaptation. Let $\qpmin$ denote the discrepancy minimization solution when using $\widehat Q'$. Since $\supp(\widehat Q') \supseteq \supp(\widehat Q)$, the local discrepancy using $\qpmin$ is a lower bound on the local discrepancy using $\qmin$:
\begin{equation*}
\mathrm{disc}_{H''}(\widehat P, \qpmin) = \min_{\supp({\mathsf q}) \subseteq \supp(\widehat Q')} \mathrm{disc}_{H''}(\widehat P, {\mathsf q}) \leq \min_{\supp({\mathsf q}) \subseteq \supp(\widehat Q)} \mathrm{disc}_{H''}(\widehat P, {\mathsf q}) = \mathrm{disc}_{H''}(\widehat P, \qmin).
\end{equation*}
Thus, in view of Corollary~\ref{coro:pointwise}, for an appropriate choice of $H''$, the learning guarantee for our algorithm is more favorable when using $\widehat Q'$, which suggests that using the limited amount of labeled points from the target distribution can improve the performance of our algorithm.

\section{Optimization solution}
\label{sec:optimization}

As shown in Section~\ref{sec:formulation-optimization}, the function $G$ defining our algorithm is convex and the problem of minimizing the expression \eqref{eq:optmaxmin} is a convex optimization problem.
Nevertheless, the problem is not straightforward to solve, in particular because evaluating the term $\max_{h'' \in H''} {\mathcal L}_{\widehat P}(h, h'')$ that it contains requires solving a non-convex optimization problem. We present two solutions for the problem in the case of the $L_2$ loss: an exact solution obtained by solving a semi-definite programming (SDP) problem, which we prove is equivalent to the original optimization problem for a broad family of convex sets $H''$; and an approximate solution for an arbitrary convex set $H''$ based on sampling and solving a quadratic programming (QP) problem.

\subsection{SDP formulation}
\label{sec:sdp}

As discussed in Section~\ref{sec:guarantees}, the choice of $H''$ is a key component of our algorithm. In view of Corollary~\ref{coro:pointwise}, we will consider the set $H'' = \set{ h'' \,|\, {\mathcal L}_{\qmin}(h'', f_Q) \leq r^2 }$, for a fixed value of $r$. Define $W$ by $W = \operatorname{span}(K(x_1, \cdot), \ldots, K(x_m, \cdot))$ and denote by $W^\bot$ its orthogonal complement. By the reproducing property of $\mathbb{H}$, for every $h^\bot \in W^\bot$ we have $h^\bot(x_i) = \langle h^\bot, K(x_i, \cdot) \rangle_K = 0$. Thus, the equality ${\mathcal L}_{\qmin}(h'', f_Q) = {\mathcal L}_{\qmin}(h'' + h^\bot, f_Q)$ holds for any function $h''$. We will therefore consider only hypotheses in the subspace $W$ and identify $H''$ with the set $\{ \mat{a} \in \Rset^m \mid \sum_{j=1}^m \qmin(x_j)(\sum_{i=1}^m a_i \qmin(x_i)^{1/2}K(x_i, x_j) - y_j)^2 \leq r^2\}$. Similarly, by the representer theorem, we know that the solution to \eqref{eq:optmaxmin} will be of the form $h = n^{-1/2}\sum_{i=1}^n b_i K(x_i', \cdot)$.
We define the \emph{normalized} kernel matrices $\mat{K}_t$, $\mat{K}_s$, and $\mat{K}_{st}$ respectively by $\mat{K}_t^{ij} = n^{-1} K(x_i', x_j')$, $\mat{K}_s^{ij} = \qmin(x_i)^{1/2} \qmin(x_j)^{1/2} K(x_i, x_j)$, and $\mat{K}_{st}^{ij} = n^{-1/2} \qmin(x_j)^{1/2} K(x_i', x_j)$. For our choice of the convex set $H''$, problem \eqref{eq:optmaxmin} is then equivalent to
\begin{equation}
\label{eq:kmaxmin}
\min_{\mat{b} \in \Rset^n} \lambda \mat{b}^\top \mat{K}_t \mat{b} + \frac{1}{2} \left(\max_{\substack{\mat{a} \in \Rset^m \\ \|\mat{K}_s \mat{a} - \mat{y}\|^2 \leq r^2}} \|\mat{K}_{st} \mat{a} - \mat{K}_t \mat{b}\|^2 + \min_{\substack{\mat{a} \in \Rset^m \\ \|\mat{K}_s \mat{a} - \mat{y}\|^2 \leq r^2}} \|\mat{K}_{st} \mat{a} - \mat{K}_t \mat{b}\|^2 \right),
\end{equation}
where $\mat{y} = (\qmin(x_1)^{1/2}y_1, \ldots, \qmin(x_m)^{1/2} y_m)$ is the vector of normalized labels.

\begin{lemma}
\label{lemma:sdpdual}
The Lagrangian dual of the problem $\max_{\substack{\mat{a} \in \Rset^m \\ \|\mat{K}_s \mat{a} - \mat{y}\|^2 \leq r^2}} \ \frac{1}{2}\|\mat{K}_{st} \mat{a}\|^2 - \mat{b}^\top \mat{K}_t \mat{K}_{st} \mat{a}$ is given by
\vspace{-.75cm}
\begin{align*}
\min_{\eta \geq 0, \gamma} & \ \gamma \\
\text{s.t.} & \ \left( \def\arraystretch{1.3}
\begin{array}{cc}
-\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} + \eta \mat{K}_s^2 & \frac{1}{2}\mat{K}_{st}^\top \mat{K}_t \mat{b} - \eta \mat{K}_s\mat{y} \\
\frac{1}{2} \mat{b}^\top \mat{K}_t \mat{K}_{st} - \eta \mat{y}^\top \mat{K}_s & \eta (\|\mat{y}\|^2 - r^2) + \gamma
\end{array} \right) \succeq 0.
\end{align*}
Furthermore, the duality gap for these problems is zero.
\end{lemma}

The proof of the lemma is given in Appendix~\ref{app:sdpdual}. The lemma helps us derive the following equivalent SDP formulation of our original optimization problem. Its solution can be found in polynomial time using standard convex optimization solvers.

\begin{proposition}
\label{prop:cone}
The optimization problem \eqref{eq:kmaxmin} is equivalent to the following SDP:
\begin{align*}
\max_{\alpha, \beta, \nu, \mat{Z}, \mat{z}} & \ \frac{1}{2} \Tr(\mat{K}_{st}^\top \mat{K}_{st} \mat{Z}) - \beta - \alpha\\
\text{s.t.} & \ \left( \def\arraystretch{1.3}
\begin{array}{cc}
\nu \mat{K}_s^2 + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4} \widetilde{\mat K} & \nu \mat{K}_s \mat{y} + \frac{1}{4}\widetilde{\mat K} \mat{z} \\
\nu \mat{y}^\top \mat{K}_s + \frac{1}{4} \mat{z}^\top \widetilde{\mat K} & \alpha + \nu (\|\mat{y}\|^2 - r^2)
\end{array} \right) \succeq 0
\quad \wedge \quad
\left( \begin{array}{cc} \mat{Z} & \mat{z} \\ \mat{z}^\top & 1 \end{array} \right) \succeq 0 \\
& \ \left( \def\arraystretch{1.3}
\begin{array}{cc}
\lambda \mat{K}_t + \mat{K}_t^2 & \frac{1}{2} \mat{K}_t \mat{K}_{st} \mat{z} \\
\frac{1}{2} \mat{z}^\top \mat{K}_{st}^\top \mat{K}_t & \beta
\end{array} \right) \succeq 0
\quad \wedge \quad
\Tr(\mat{K}_s^2 \mat{Z}) - 2\mat{y}^\top \mat{K}_s \mat{z} + \|\mat{y}\|^2 \leq r^2
\quad \wedge \quad \nu \geq 0,
\end{align*}
where $\widetilde{\mat K} = \mat{K}_{st}^\top \mat{K}_t (\lambda \mat{K}_t + \mat{K}_t^2)^\dag \mat{K}_t \mat{K}_{st}$.
\end{proposition}

\subsection{QP formulation}
\label{sec:optimizationqp}

The SDP formulation described in the previous section is applicable for a specific choice of $H''$. In this section, we present an analysis that holds for an arbitrary convex set $H''$.
First, notice that the problem of minimizing $G$ (expression \eqref{eq:optmaxmin}) is related to the minimum enclosing ball (MEB) problem. For a set $D \subseteq \Rset^d$, the MEB problem is defined as follows:
\begin{equation*}
\min_{\mat{u} \in \Rset^d} \max_{\mat{v} \in D} \|\mat{u} - \mat{v}\|^2.
\end{equation*}
Omitting the regularization and the $\min$ term from \eqref{eq:optmaxmin} leads to a problem similar to the MEB. Thus, we could benefit from the extensive literature and algorithmic study available for this problem \citep{welz1991,Kumar03,schonherr,fischer2003fast,Yildirim2008}. However, to the best of our knowledge, no solution is currently available for the case of an infinite set $D$, which arises in our problem. Instead, we present a solution for solving an approximation of \eqref{eq:optmaxmin} based on sampling. Let $\{h_1, \ldots, h_k\}$ be a set of hypotheses in $\partial H''$ and let $\mathcal{C} = \mathcal{C}(h_1, \ldots, h_k)$ denote their convex hull. The sampling-based approximation of \eqref{eq:optmaxmin} that we consider is
\begin{equation}
\label{eq:optmaxapp}
\min_{h \in \mathbb{H}} \lambda \norm{h}{K}^2 + \frac{1}{2} \max_{i=1,\ldots,k} {\mathcal L}_{\widehat P}(h, h_i) + \frac{1}{2} \min_{h' \in \mathcal{C}} {\mathcal L}_{\widehat P}(h, h').
\end{equation}

\begin{proposition}
\label{prop:dual}
Let $\mat Y = (Y_{ij}) \in \Rset^{n \times k}$ be the matrix defined by $Y_{ij} = n^{-1/2} h_j(x_i')$ and $\mat{y}' = (y'_1, \ldots, y'_k)^\top \in \Rset^k$ the vector defined by $y'_i = n^{-1} \sum_{j=1}^n h_i(x'_j)^2$.
Then, the dual problem of \eqref{eq:optmaxapp} is given by
\begin{align}
\label{eq:dualapp}
\max_{\bm \alpha, \bm \gamma, \beta} & \ -\Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2} \Big)^\top \mat{K}_t \Big(\lambda \mat{I} + \frac{1}{2}\mat{K}_t\Big)^{-1} \Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2}\Big) - \frac{1}{2} \bm \gamma^\top \mat{K}_t \mat{K}_t^\dag \bm \gamma + \bm \alpha^\top \mat{y}' - \beta\\
\text{s.t.} & \ \mat{1}^\top \bm \alpha = \frac{1}{2}, \quad \mat{1} \beta \geq -\mat Y^\top \bm \gamma, \quad \bm \alpha \geq 0, \nonumber
\end{align}
where $\mat{1}$ is the vector in $\Rset^k$ with all components equal to $1$. Furthermore, the solution $h$ of \eqref{eq:optmaxapp} can be recovered from a solution $(\bm \alpha, \bm \gamma, \beta)$ of \eqref{eq:dualapp} by $h(x) = \sum_{i = 1}^n a_i K(x_i', x)$ for all $x$, where $\bm a = \big(\lambda \mat{I} + \frac{1}{2}\mat{K}_t\big)^{-1}\big(\mat Y \bm \alpha + \frac{1}{2}\bm \gamma\big)$.
\end{proposition}

The proof of the proposition is given in Appendix~\ref{app:qpformula}. The result shows that, given a finite sample $h_1, \ldots, h_k$ on the boundary of $H''$, \eqref{eq:optmaxapp} is in fact equivalent to a standard QP. Hence, a solution can be found efficiently with one of the many off-the-shelf algorithms for quadratic programming. We now describe the process of sampling from the boundary of the set $H''$, which is a necessary step for defining problem \eqref{eq:optmaxapp}.
We consider compact sets of the form $H'' := \{h'' \in \mathbb{H} \mid g_i(h'') \leq 0\}$, where the functions $g_i$ are continuous and convex. For instance, we could consider the set $H''$ defined in the previous section. More generally, we can consider the family of sets $H''_p = \{h'' \in H \mid \sum_{i=1}^m \qmin(x_i)|h''(x_i) - y_i|^p \leq r^p\}$. Assume that there exists $h_0$ satisfying $g_i(h_0) < 0$ for all $i$. Our sampling process is illustrated by Figure~\ref{fig:hsampling} and works as follows: pick a random direction $\widehat h$ and define $\lambda_i$ to be the minimal solution of the system
\begin{equation*}
(\lambda \geq 0) \wedge (g_i(h_0 + \lambda \widehat{h}) = 0).
\end{equation*}
Set $\lambda_i = \infty$ if no solution exists and define $\lambda^* = \min_i \lambda_i$. Notice that the compactness of $H''$ guarantees that $\lambda^* < \infty$. The hypothesis $h = h_0 + \lambda^* \widehat h$ satisfies $h \in H''$ and $g_j(h) = 0$ for every $j$ such that $\lambda_j = \lambda^*$. The latter is straightforward. To verify the former, assume that $g_i(h_0 + \lambda^* \widehat h) > 0$ for some $i$. The continuity of $g_i$ would then imply the existence of $\lambda_i'$ with $0 < \lambda'_i < \lambda^* \leq \lambda_i$ such that $g_i(h_0 + \lambda_i' \widehat h) = 0$, contradicting the choice of $\lambda_i$; thus, the inequality $g_i(h_0 + \lambda^* \widehat h) \leq 0$ must hold for all $i$. Since a point $h_0$ with $g_i(h_0) < 0$ can be obtained by solving a convex program, and since solving the equations defining the $\lambda_i$ is in general simple, the process described provides an efficient way of sampling points from the boundary of the convex set $H''$.
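The boundary-sampling procedure just described can be implemented with a simple root search along the random direction, since, by convexity, the feasible values of $\lambda$ along a ray from an interior point form an interval. A minimal sketch for a single constraint $g \leq 0$ defining a Euclidean ball, a hypothetical stand-in for the set $H''$ (the actual constraints involve the weighted losses):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_boundary(g, h0, dim, lam_max=1e6, tol=1e-10):
    """Sample a point on {g = 0} from an interior point h0 with g(h0) < 0:
    pick a random direction and bisect for the smallest lam >= 0 with g(h0 + lam*d) = 0."""
    d = rng.standard_normal(dim)
    d /= np.linalg.norm(d)
    lo, hi = 0.0, 1.0
    while g(h0 + hi * d) < 0:        # expand until the constraint is reached or violated
        hi *= 2.0
        if hi > lam_max:
            return None              # direction never leaves the set (non-compact case)
    while hi - lo > tol:             # bisection invariant: g < 0 at lo, g >= 0 at hi
        mid = 0.5 * (lo + hi)
        if g(h0 + mid * d) < 0:
            lo = mid
        else:
            hi = mid
    return h0 + hi * d

# Hypothetical example: H'' is the unit ball, interior point at the origin.
g = lambda h: np.dot(h, h) - 1.0
h_boundary = sample_boundary(g, np.zeros(3), dim=3)
# h_boundary lies (numerically) on the unit sphere.
```

With several constraints $g_i$, one would run the same search per constraint and keep the smallest root, exactly as in the definition of $\lambda^*$ above.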
\begin{figure}[t]
\centering
\includegraphics[scale=.4]{hsampling.pdf}
\vskip -.4cm
\caption{Illustration of the sampling process on the set $H''$.}
\label{fig:hsampling}
\end{figure}

In the next section, we report the results of our experiments with our algorithm on several tasks in which it outperforms the DM algorithm.

\section{Experiments}
\label{sec:experiments}

In this section, we present the results of extensive comparisons between GDM and several other adaptation algorithms, with favorable results for our algorithm.

\begin{figure}[t]
\centering
\begin{tabular}{cc}
(a) \includegraphics[scale=.32]{oned-crop.pdf} & (b) \includegraphics[scale=.32]{losses-crop.pdf}
\end{tabular}
\caption{(a) Linear hypotheses obtained by training on the source (green circles), target (red triangles) and by using the DM (solid blue) and GDM (dashed blue) algorithms. (b) Objective functions associated with training on the source distribution, target distribution, as well as the GDM and DM algorithms. The hypothesis set $H$ and surrogate hypothesis set $H''$ are shown at the bottom of the plot.}
\label{fig:classifiers}
\end{figure}

\subsection{Synthetic data set}

To illustrate the differences between the GDM and DM algorithms, we generated the following synthetic task, similar to the one considered by \cite{HuangSmolaGrettonBorgwardtScholkopf2006}: source examples are sampled from the uniform distribution over the interval $[.2, 1]$ and target data is sampled uniformly over $[0, .25]$. The labels are given by the map $x \mapsto -x + x^3 + \xi$, where $\xi$ is a Gaussian random variable with mean $0$ and standard deviation $0.1$. As hypothesis set we use linear functions without an offset. Figure~\ref{fig:classifiers}(a) shows the regression hypotheses obtained by training the DM and GDM algorithms as well as by training on the source and target distributions.
The ideal hypothesis is shown in red. Notice how the GDM solution approximates the ideal solution better than DM does. In order to better understand the difference between the solutions of these algorithms, Figure~\ref{fig:classifiers}(b) depicts the objective function minimized by each algorithm as a function of the slope $w$ of the linear function, the only variable of the hypothesis. The vertical lines show the value of the minimizing hypothesis for each loss. Keeping in mind that the regularization parameter $\lambda$ used in ridge regression corresponds to a Lagrange multiplier for the constraint $w^2 \leq \Lambda^2$ for some $\Lambda$ \citep[Lemma 1]{CortesMohri2013}, the hypothesis set $H = \{ w \; | \; |w| \leq \Lambda\}$ is depicted at the bottom of this plot. The shaded region represents the set $H'' = H \cap \{h'' \; | \; {\mathcal L}_{q_{\min}}(h'') \leq r \}$. It is clear from this plot that DM helps approximate the target loss function. Nevertheless, only GDM seems to uniformly approach it. This should come as no surprise since our algorithm was designed precisely for this purpose.
\subsection{Adaptation data sets}
We now present the results of evaluating our algorithm against several other adaptation algorithms. GDM is compared against DM and against training on the uniform distribution. The following baselines were also considered:
\begin{enumerate}
\itemsep 0em
\item The KMM algorithm, which reweights examples from the source distribution in an attempt to match the means of the source and target data in a feature space induced by a universal kernel. The hyper-parameters of this algorithm were set to the recommended values of $B = 1000$ and $\epsilon = \frac{\sqrt{m}}{\sqrt{m} - 1}$.
\item KLIEP. This algorithm attempts to estimate the importance ratio of the source and target distributions by modeling this ratio as a mixture of basis functions and learning the mixture coefficients from the data.
Gaussian kernels were used as basis functions, where the bandwidth of the kernel was selected to be the best performer on the \emph{test} set.
\item FE. This simple algorithm maps source and target data into a common high-dimensional feature space where the difference between the distributions is expected to be reduced.
\end{enumerate}
Unless explicitly stated, our hypothesis set will be a subset of the RKHS induced by a Gaussian kernel. The learning algorithm used for all tasks will be kernel ridge regression, and the reported risk will be the mean squared error. We follow the setup of \cite{CortesMohri2011} and select the regularization parameter $\lambda$ and the Gaussian kernel bandwidth $\sigma$ via 10-fold cross-validation over the training data, by doing a grid search for $\lambda \in \{2^{-25}, \ldots, 2^{-5} \}$ and $\sigma \in \{k d \; | \; k = 2^{-10}, \ldots, 1 \}$, where $d$ is the dimensionality of the data. Finally, in view of Section~\ref{sec:guarantees}, the surrogate set $H''$ was selected from the family $\mathscr{H} := \{H'' \; | \; H'' = \{h'' \; | \; {\mathcal L}_{q_{\min}}(h'') \leq r \vee {\mathcal L}_{\widehat Q}(h'') \leq r \}, \; r \in [0, \frac{1}{m} \sum_{i=1}^m y_i^2] \}$ through validation on a small amount of data from the target distribution. For our comparisons to be fair, all algorithms were allowed to use this small amount of labeled data too. Since, with the exception of FE, the other baselines do not propose a way of dealing with labeled data from the target distribution, we simply added this data to the training set and ran the algorithms on the extended source data.

The first task we consider is given by the 4 {\tt kin-8xy} Delve data sets \citep{Delve}. These data sets are all variations of the same model: a realistic simulation of the forward dynamics of an 8 link all-revolute robot arm. The task in all data sets is to predict the distance of the end-effector from a target.
The data sets differ by the degree of non-linearity (fairly linear, {\tt \small x=f}, or non-linear, {\tt \small x=n}) and the amount of noise in the output (moderate, {\tt \small y=m}, or high, {\tt \small y=h}). The data set defines 4 different domains, that is, 12 pairs of different distributions and labeling functions. A sample of 200 points from each domain was used, and 10 labeled points from the target distribution were used to select $H''$. The experiment was carried out 10 times, and the results of testing on a sample of $400$ points from the target domain are reported in Figure~\ref{fig:kin}. The bars represent the median performance of each algorithm, and the error bars the low and high quartiles, respectively. All results are normalized in such a way that the median performance of training on the target is equal to 1. Since the source labeling function for this task is fairly linear, our hypotheses consist of vectors $\mat{w} \in \mathbb{R}^8$. Notice that the performance of all algorithms is comparable when adapting to {\tt kin8-fm}, since both labeling functions are fairly linear, yet only GDM is able to reasonably adapt to the two data sets with different labeling functions.
\begin{figure}[t]
\centering
\includegraphics[height=2.5in, width=3.2in]{kin-crop.pdf}
\caption{MSE performance for different adaptation algorithms when adapting from {\tt kin-8fh} to the three other {\tt kin-8xy} domains.}
\label{fig:kin}
\end{figure}
\begin{figure}[t]
\begin{tabular}{cc}
(a) \includegraphics[height=2in, width=2.9in]{books-adapt.pdf} & (b) \includegraphics[height=2in,width=2.9in]{caltechadapt.pdf}
\end{tabular}
\caption{(a) Performance for the sentiment adaptation task from the {\tt books} domain to all others.
(b) MSE of different algorithms adapting from the {\tt caltech256} data set to all others.}
\label{fig:realworldsets}
\end{figure}
For our next experiment, we consider the cross-domain sentiment analysis data set of \cite{Blitzer07Biographies}. This data set consists of consumer reviews from 4 different domains: {\tt books, kitchen, electronics} and {\tt dvds}. We used the top 5000 unigrams and bigrams as the features for this task. For each pair of adaptation tasks, we sample $700$ points from the source distribution and $700$ unlabeled points from the target. Only $50$ labeled points from the target distribution are used to tune the parameter $r$ of our algorithm. The final evaluation is done on a test set of $1000$ points. Figure~\ref{fig:realworldsets}(a) shows the MSE of all algorithms when adapting from {\tt books} to all other domains.

Finally, we consider a novel domain adaptation task \citep{TommasiTC14} paramount in the computer vision community. The domains correspond to 4 well-known collections of images: {\tt bing, caltech256, sun} and {\tt imagenet}. These data sets have been standardized so that they all share the same feature representation and labeling function \citep{TommasiTC14}. We use the data from the first 5 shared classes and sample 800 labeled points from the source distribution and 800 unlabeled points from the target distribution, as well as 50 labeled target points to be used for validation of $r$. The results of testing on $1000$ points from the target domain, with {\tt caltech256} as the source, are depicted in Figure~\ref{fig:realworldsets}(b). The results of all possible adaptation problems for the sentiment task as well as for the image task are shown in Appendix~\ref{app:experiments}. The results of this section show that GDM was the only algorithm that could consistently perform better than or on par with the DM algorithm, and it consistently outperformed the other algorithms.
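The gap between source-trained and target-trained hypotheses on the synthetic task of Figure~\ref{fig:classifiers} is easy to reproduce. The sketch below is illustrative only (it fits plain least squares on source and target separately, not the GDM or DM objectives): source $x \sim U[.2, 1]$, target $x \sim U[0, .25]$, labels $y = -x + x^3 + \xi$ with $\xi \sim N(0, 0.1^2)$, and linear hypotheses $h(x) = wx$ without offset.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(lo, hi, n):
    # x uniform on [lo, hi]; labels from the map x -> -x + x^3 plus noise
    x = rng.uniform(lo, hi, n)
    y = -x + x**3 + rng.normal(0.0, 0.1, n)
    return x, y

def fit_slope(x, y):
    # least-squares slope for h(x) = w x (no offset): w = <x, y> / <x, x>
    return float(x @ y / (x @ x))

xs, ys = make_data(0.2, 1.0, 5000)    # source sample
xt, yt = make_data(0.0, 0.25, 5000)   # target sample
w_src, w_tgt = fit_slope(xs, ys), fit_slope(xt, yt)
# On the target interval the cubic term is negligible, so the target slope
# is close to -1, while the source slope is much flatter; an adaptation
# algorithm trained on source data must bridge exactly this gap.
```

The two slopes differ substantially (the ideal target slope is near $-1$, the source slope near $-0.4$), which is the regime where Figure~\ref{fig:classifiers}(b) shows GDM tracking the target objective more closely than DM.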
\section{Conclusion}
We presented a new, theoretically well-founded domain adaptation algorithm seeking to make the empirical loss closer to an ideal one for \emph{each} hypothesis. This departs from the existing paradigm of a fixed reweighting of the training losses and leads to a new theoretical analysis of adaptation. We presented both an SDP solution for a specific convex set and a more general sampling-based QP solution for solving the corresponding optimization problem. Our empirical results show that our algorithm significantly outperforms the state-of-the-art DM algorithm.
\newpage
\bibliography{da}
\newpage
\appendix
\section{Supplementary material}
Here we set the value of $\Lambda$ that defines our hypothesis set $H$: $\Lambda = \sqrt{\frac{\mu R}{\lambda}}$, that is, $H = \{h \in \mathbb{H} \colon \| h \|_K \leq \sqrt{\frac{\mu R}{\lambda}}\}$. This does not impose any additional constraint on the minimization \eqref{eq:Pmin}, as shown by the following lemma.
\begin{lemma}
Let $h \in \mathbb{H}$ be a solution of the minimization \eqref{eq:Pmin} for some training sample ${\mathcal S}$. Then $h$ satisfies the inequality $\| h \|_K \leq \sqrt{\frac{\mu R}{\lambda}}$, where $K(x, x) \leq R$ for all $x \in {\mathcal X}$.
\end{lemma}
\begin{proof}
Since $0$ is an element of $\mathbb{H}$, the value of the objective function for the minimizer $h$ is upper bounded by the one for $0$:
\begin{equation}
\frac{1}{m} \sum_{i = 1}^m L(h(x_i), y_i) + \lambda \| h \|_K^2 \leq \frac{1}{m} \sum_{i = 1}^m L(0, y_i).
\end{equation}
By the $\mu$-admissibility of the loss, we can then write
\begin{align*}
\lambda \| h \|_K^2 \leq \frac{1}{m} \sum_{i = 1}^m L(0, y_i) - \frac{1}{m} \sum_{i = 1}^m L(h(x_i), y_i) \leq \frac{\mu}{m} \sum_{i = 1}^m |0 - h(x_i)| \leq \frac{\mu}{m} \sum_{i = 1}^m \| h \|_K K(x_i, x_i) \leq \mu R \| h \|_K,
\end{align*}
which implies $\lambda \| h \|_K \leq \mu R$ and concludes the proof.
\end{proof}
\section{SDP formulation}
\label{app:sdpdual}
\begin{replemma}{lemma:sdpdual}
The Lagrangian dual of the problem
\begin{align}
\label{eq:maxprob2}
\max_{\substack{\mat{a} \in \mathbb{R}^m \\ \|\mat{K}_s \mat{a} - \mat{y}\|^2 \leq r^2}} & \ \frac{1}{2}\|\mat{K}_{st} \mat{a}\|^2 - \mat{b}^\top \mat{K}_{t} \mat{K}_{st} \mat{a},
\end{align}
is given by
\begin{align*}
\min_{\eta \geq 0, \gamma} & \ \gamma \\
\text{s.t.} & \ \left( \def\arraystretch{1.3} \begin{array}{cc} -\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} + \eta \mat{K}_s^2 & \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{t} \mat{b} - \eta \mat{K}_s\mat{y} \\ \frac{1}{2} \mat{b}^\top \mat{K}_{t} \mat{K}_{st} - \eta \mat{y}^\top \mat{K}_s & \eta (\|\mat{y}\|^2 - r^2) + \gamma \end{array} \right) \succeq 0.
\end{align*}
Furthermore, the duality gap for these problems is zero.
\end{replemma}
\begin{proof}
For $\eta \geq 0$, the Lagrangian of \eqref{eq:maxprob2} is given by
\begin{align*}
L(\mat{a}, \eta) & = \frac{1}{2}\|\mat{K}_{st} \mat{a}\|^2 - \mat{b}^\top \mat{K}_{t} \mat{K}_{st} \mat{a} - \eta( \|\mat{K}_s \mat{a} - \mat{y}\|^2 - r^2) \\
& = \mat{a}^\top \Big(\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} - \eta \mat{K}_s^2 \Big) \mat{a} + (2 \eta \mat{K}_s \mat{y} - \mat{K}_{st}^\top \mat{K}_{t} \mat{b} )^\top \mat{a} - \eta (\|\mat{y}\|^2 - r^2).
\end{align*}
Since the Lagrangian is a quadratic function of $\mat{a}$ and the conjugate function of a quadratic can be expressed in terms of the pseudo-inverse, the dual is given by
\begin{align*}
\min_{\eta \geq 0} & \ \frac{1}{4}(2 \eta \mat{K}_s \mat{y} - \mat{K}_{st}^\top \mat{K}_{t} \mat{b})^\top \Big(\eta \mat{K}_s^2 - \frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} \Big)^{\dag}(2 \eta \mat{K}_s \mat{y} - \mat{K}_{st}^\top \mat{K}_{t} \mat{b}) - \eta(\|\mat{y}\|^2 - r^2)\\
\text{s.t. } & \ \eta \mat{K}_s^2 - \frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} \succeq 0.
\end{align*}
Introducing the variable $\gamma$ to replace the objective function yields the equivalent problem
\begin{flalign*}
\min_{\eta \geq 0, \gamma} & \ \gamma \\
\text{s.t.
} & \ \eta \mat{K}_s^2 - \frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} \succeq 0 \\
& \ \gamma - \frac{1}{4} (2 \eta \mat{K}_s \mat{y} - \mat{K}_{st}^\top \mat{K}_{t} \mat{b})^\top \Big(\eta \mat{K}_s^2 - \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{st}\Big)^{\dag} (2 \eta \mat{K}_s \mat{y} - \mat{K}_{st}^\top \mat{K}_{t} \mat{b}) + \eta(\|\mat{y}\|^2 - r^2) \geq 0.
\end{flalign*}
Finally, by the properties of the Schur complement \citep{BoydVandenberghe2004}, the two constraints above are equivalent to
\begin{equation*}
\left( \begin{array}{cc} -\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} + \eta \mat{K}_s^2 & \frac{1}{2} \mat{K}_{st}^\top \mat{K}_{t} \mat{b} - \eta\mat{K}_s \mat{y} \\ \Big(\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{t} \mat{b} - \eta\mat{K}_s \mat{y}\Big)^\top & \eta (\|\mat{y}\|^2 - r^2) + \gamma \end{array} \right) \succeq 0.
\end{equation*}
Since strong duality holds for a general QCQP with a single constraint \citep[Appendix B]{BoydVandenberghe2004}, the duality gap between these problems is $0$.
\end{proof}
\begin{repproposition}{prop:cone}
The optimization problem \eqref{eq:kmaxmin} is equivalent to the following SDP:
\begin{align*}
\max_{\alpha, \beta, \nu, \mat{Z}, \mat{z}} & \ \frac{1}{2} \Tr(\mat{K}_{st}^\top \mat{K}_{st} \mat{Z}) - \beta - \alpha\\
\text{s.
t.} & \ \left( \def\arraystretch{1.3} \begin{array}{cc} \nu \mat{K}_s^2 + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4} \widetilde{\mat K} & \nu \mat{K}_s \mat{y} + \frac{1}{4}\widetilde{\mat K} \mat{z} \\ \nu \mat{y}^\top \mat{K}_s + \frac{1}{4} \mat{z}^\top \widetilde{\mat K} & \alpha + \nu (\|\mat{y}\|^2 - r^2) \end{array} \right) \succeq 0 \quad \wedge \quad \left( \begin{array}{cc} \mat{Z} & \mat{z} \\ \mat{z}^\top & 1 \end{array} \right) \succeq 0 \\
& \ \left( \def\arraystretch{1.3} \begin{array}{cc} \lambda \mat{K}_{t} + \mat{K}_{t}^2 & \frac{1}{2} \mat{K}_{t} \mat{K}_{st} \mat{z} \\ \frac{1}{2} \mat{z}^\top \mat{K}_{st}^\top \mat{K}_{t} & \beta \end{array} \right) \succeq 0 \quad \wedge \quad \Tr(\mat{K}_s^2 \mat{Z}) - 2\mat{y}^\top \mat{K}_s \mat{z} + \|\mat{y}\|^2 \leq r^2 \quad \wedge \quad \nu \geq 0,
\end{align*}
where $\widetilde{\mat K} = \mat{K}_{st}^\top \mat{K}_{t} (\lambda \mat{K}_{t} + \mat{K}_{t}^2)^\dag \mat{K}_{t} \mat{K}_{st}$.
\end{repproposition}
\begin{proof}
By Lemma~\ref{lemma:sdpdual}, we may rewrite \eqref{eq:kmaxmin} as
\begin{align}
\label{eq:firstequiv}
\min_{\mat{a}, \gamma, \eta, \mat{b}} & \ \mat{b}^\top (\lambda \mat{K}_{t} + \mat{K}_{t}^2) \mat{b} + \frac{1}{2} \mat{a}^\top \mat{K}_{st}^\top \mat{K}_{st} \mat{a} - \mat{a}^\top \mat{K}_{st}^\top \mat{K}_{t} \mat{b} + \gamma \\
\text{s.t. } & \ \left( \def\arraystretch{1.3} \begin{array}{cc} -\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} + \eta \mat{K}_s^2 & \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{t} \mat{b} - \eta \mat{K}_s \mat{y} \\ \frac{1}{2} \mat{b}^\top \mat{K}_{t} \mat{K}_{st} - \eta \mat{y}^\top \mat{K}_s & \eta (\|\mat{y}\|^2 - r^2) + \gamma \end{array} \right) \succeq 0 \quad \wedge \quad \eta \geq 0 \nonumber \\
& \ \|\mat{K}_s \mat{a} - \mat{y}\|^2 \leq r^2. \nonumber
\end{align}
Let us apply the change of variables $\mat{b} = \frac{1}{2}(\lambda \mat{K}_{t} + \mat{K}_{t}^2)^{\dag} \mat{K}_{t} \mat{K}_{st} \mat{a} + \mat{v}$. The following equalities can be easily verified.
\begin{align*}
\mat{b}^\top (\lambda \mat{K}_{t} + \mat{K}_{t}^2) \mat{b} & = \frac{1}{4} \mat{a}^\top \mat{K}_{st}^\top \mat{K}_{t} (\lambda \mat{K}_{t} + \mat{K}_{t}^2)^\dag \mat{K}_{t} \mat{K}_{st} \mat{a} + \mat{v}^\top \mat{K}_{t} \mat{K}_{st} \mat{a} + \mat{v}^\top (\lambda \mat{K}_{t} + \mat{K}_{t}^2) \mat{v}, \\
\mat{a}^\top \mat{K}_{st}^\top \mat{K}_{t} \mat{b} & = \frac{1}{2} \mat{a}^\top \mat{K}_{st}^\top \mat{K}_{t} (\lambda \mat{K}_{t} + \mat{K}_{t}^2)^\dag \mat{K}_{t} \mat{K}_{st} \mat{a} + \mat{v}^\top \mat{K}_{t} \mat{K}_{st} \mat{a}.
\end{align*}
Thus, replacing $\mat{b}$ in \eqref{eq:firstequiv} yields
\begin{align*}
\min_{\mat{a}, \mat{v}, \gamma, \eta} & \ \mat{v}^\top (\lambda \mat{K}_{t} + \mat{K}_{t}^2) \mat{v} + \mat{a}^\top \Big(\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4}\widetilde{\mat K} \Big)\mat{a} + \gamma\\
\text{s.t. } & \ \left( \def\arraystretch{1.3} \begin{array}{cc} -\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} + \eta \mat{K}_s^2 & \frac{1}{4} \widetilde{\mat K}\mat{a} + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{t} \mat{v} - \eta \mat{K}_s \mat{y} \\ \frac{1}{4} \mat{a}^\top \widetilde{\mat K} + \frac{1}{2}\mat{v}^\top \mat{K}_{t} \mat{K}_{st} - \eta \mat{y}^\top \mat{K}_s & \eta (\|\mat{y}\|^2 - r^2) + \gamma \end{array} \right) \succeq 0 \quad \wedge \quad \eta \geq 0 \\
& \ \|\mat{K}_s \mat{a} - \mat{y}\|^2 \leq r^2.
\end{align*}
Introducing the scalar multipliers $\mu, \nu \geq 0$ and the matrix
\begin{equation*}
\left( \begin{array}{cc} \mat{Z} & \mat{z} \\ \mat{z}^\top & \widetilde z \end{array} \right) \succeq 0
\end{equation*}
as a multiplier for the matrix constraint, we can form the Lagrangian:
\begin{multline*}
\mathfrak{L} := \mat{v}^\top (\lambda \mat{K}_{t} + \mat{K}_{t}^2) \mat{v} + \mat{a}^\top \Big(\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4}\widetilde{\mat K} \Big)\mat{a} + \gamma - \mu \eta + \nu (\|\mat{K}_s \mat{a} - \mat{y}\|^2 - r^2) \\
- \Tr \left( \left( \begin{array}{cc} \mat{Z} & \mat{z} \\ \mat{z}^\top & \widetilde z \end{array} \right) \left( \def\arraystretch{1.3} \begin{array}{cc} -\frac{1}{2} \mat{K}_{st}^\top \mat{K}_{st} + \eta \mat{K}_s^2 & \frac{1}{4} \widetilde{\mat K}\mat{a} + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{t} \mat{v} - \eta \mat{K}_s \mat{y} \\ \frac{1}{4} \mat{a}^\top \widetilde{\mat K} + \frac{1}{2}\mat{v}^\top \mat{K}_{t} \mat{K}_{st} - \eta \mat{y}^\top \mat{K}_s & \eta (\|\mat{y}\|^2 - r^2) + \gamma \end{array} \right) \right).
\end{multline*}
The KKT conditions $\frac{\partial \mathfrak{L}}{\partial \eta} = \frac{\partial \mathfrak{L}}{\partial \gamma} = 0$ trivially imply $\widetilde z = 1$ and $\Tr(\mat{K}_s^2 \mat{Z}) - 2\mat{y}^\top \mat{K}_s \mat{z} + \|\mat{y}\|^2 - r^2 + \mu = 0$.
These constraints on the dual variables guarantee that the primal variables $\eta$ and $\gamma$ vanish from the Lagrangian, thus yielding
\begin{multline*}
\mathfrak{L} = \frac{1}{2} \Tr(\mat{K}_{st}^\top \mat{K}_{st} \mat{Z}) + \nu(\|\mat{y}\|^2 - r^2) + \mat{v}^\top (\lambda \mat{K}_{t} + \mat{K}_{t}^2) \mat{v} - \mat{z}^\top \mat{K}_{st}^\top \mat{K}_{t} \mat{v}\\
+ \mat{a}^\top\Big(\nu \mat{K}_s^2 + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4} \widetilde{\mat K}\Big) \mat{a} - \Big(2 \nu \mat{K}_s \mat{y} + \frac{1}{2}\widetilde{\mat K} \mat{z}\Big)^\top \mat{a}.
\end{multline*}
This is a quadratic function of the primal variables $\mat{a}$ and $\mat{v}$, with minimizing solutions
\begin{equation*}
\mat{a} = \frac{1}{2} \Big(\nu \mat{K}_s^2 + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4} \widetilde{\mat K}\Big)^\dag \Big(2 \nu \mat{K}_s \mat{y} + \frac{1}{2}\widetilde{\mat K} \mat{z}\Big) \qquad \text{and} \qquad \mat{v} = \frac{1}{2}(\lambda \mat{K}_{t} + \mat{K}_{t}^2)^{\dag}\mat{K}_{t} \mat{K}_{st} \mat{z},
\end{equation*}
and optimal value equal to the objective of the Lagrangian dual:
\begin{multline*}
\frac{1}{2} \Tr(\mat{K}_{st}^\top \mat{K}_{st} \mat{Z}) + \nu(\|\mat{y}\|^2 - r^2) - \frac{1}{4} \mat{z}^\top \widetilde{\mat K} \mat{z} \\
- \frac{1}{4} \Big(2 \nu \mat{K}_s \mat{y} + \frac{1}{2}\widetilde{\mat K} \mat{z}\Big)^\top
\Big(\nu \mat{K}_s^2 + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4} \widetilde{\mat K}\Big)^\dag \Big(2 \nu \mat{K}_s \mat{y} + \frac{1}{2}\widetilde{\mat K} \mat{z}\Big).
\end{multline*}
As in Lemma~\ref{lemma:sdpdual}, we apply the properties of the Schur complement to show that the dual is given by
\begin{align*}
\max_{\alpha, \beta, \nu, \mat{Z}, \mat{z}} & \ \frac{1}{2} \Tr(\mat{K}_{st}^\top \mat{K}_{st} \mat{Z}) - \beta - \alpha\\
\text{s.t.} & \ \left( \def\arraystretch{1.3} \begin{array}{cc} \nu \mat{K}_s^2 + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4} \widetilde{\mat K} & \nu \mat{K}_s \mat{y} + \frac{1}{4}\widetilde{\mat K} \mat{z} \\ \nu \mat{y}^\top \mat{K}_s + \frac{1}{4} \mat{z}^\top \widetilde{\mat K} & \alpha + \nu (\|\mat{y}\|^2 - r^2) \end{array} \right) \succeq 0 \quad \wedge \quad \left( \begin{array}{cc} \mat{Z} & \mat{z} \\ \mat{z}^\top & 1 \end{array} \right) \succeq 0 \\
& \ \Tr(\mat{K}_s^2 \mat{Z}) - 2\mat{y}^\top \mat{K}_s \mat{z} + \|\mat{y}\|^2 \leq r^2 \quad \wedge \quad \beta \geq \frac{1}{4} \mat{z}^\top \widetilde{\mat K} \mat{z} \quad \wedge \quad \nu \geq 0.
\end{align*}
Finally, recalling the definition of $\widetilde{\mat K}$ and using the Schur complement one more time,
we arrive at the final SDP formulation:
\begin{align*}
\max_{\alpha, \beta, \nu, \mat{Z}, \mat{z}} & \ \frac{1}{2} \Tr(\mat{K}_{st}^\top \mat{K}_{st} \mat{Z}) - \beta - \alpha\\
\text{s.t.} & \ \left( \def\arraystretch{1.3} \begin{array}{cc} \nu \mat{K}_s^2 + \frac{1}{2}\mat{K}_{st}^\top \mat{K}_{st} - \frac{1}{4} \widetilde{\mat K} & \nu \mat{K}_s \mat{y} + \frac{1}{4}\widetilde{\mat K} \mat{z} \\ \nu \mat{y}^\top \mat{K}_s + \frac{1}{4} \mat{z}^\top \widetilde{\mat K} & \alpha + \nu (\|\mat{y}\|^2 - r^2) \end{array} \right) \succeq 0 \quad \wedge \quad \left( \begin{array}{cc} \mat{Z} & \mat{z} \\ \mat{z}^\top & 1 \end{array} \right) \succeq 0 \\
& \ \left( \def\arraystretch{1.3} \begin{array}{cc} \lambda \mat{K}_{t} + \mat{K}_{t}^2 & \frac{1}{2} \mat{K}_{t} \mat{K}_{st} \mat{z} \\ \frac{1}{2} \mat{z}^\top \mat{K}_{st}^\top \mat{K}_{t} & \beta \end{array} \right) \succeq 0 \quad \wedge \quad \Tr(\mat{K}_s^2 \mat{Z}) - 2\mat{y}^\top \mat{K}_s \mat{z} + \|\mat{y}\|^2 \leq r^2 \quad \wedge \quad \nu \geq 0.
\end{align*}
\end{proof}
\section{QP formulation}
\label{app:qpformula}
\begin{repproposition}{prop:dual}
Let $\mat Y = (Y_{ij}) \in \mathbb{R}^{n \times k}$ be the matrix defined by $Y_{ij} = n^{-1/2} h_j(x_i')$ and $\mat{y}' = (y'_1, \ldots, y'_k)^\top \in \mathbb{R}^k$ the vector defined by $y'_i = n^{-1} \sum_{j=1}^n h_i(x'_j)^2$. Then, the dual problem of \eqref{eq:optmaxapp} is given by
\begin{align}
\label{eq:dual}
\max_{\bm \alpha, \bm \gamma, \beta} & \ -\Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2} \Big)^\top \mat{K}_{t}\Big(\lambda \mat{I} + \frac{1}{2}\mat{K}_{t}\Big)^{-1} \Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2}\Big) - \frac{1}{2} \bm \gamma^\top \mat{K}_{t} \mat{K}_{t}^\dag \bm \gamma + \bm \alpha^\top \mat{y}' - \beta\\
\text{s.t.} & \ \mat{1}^\top \bm \alpha = \frac{1}{2}, \qquad \mat{1} \beta \geq -\mat Y^\top \bm \gamma, \qquad \bm \alpha \geq 0, \nonumber
\end{align}
where $\mat{1}$ is the vector in $\mathbb{R}^k$ with all components equal to $1$. Furthermore, the solution $h$ of \eqref{eq:optmaxapp} can be recovered from a solution $(\bm \alpha, \bm \gamma, \beta)$ of \eqref{eq:dual} by $\forall x, \; h(x) = \sum_{i = 1}^n a_i K(x_i, x)$, where $\bm a = \big(\lambda \mat{I} + \frac{1}{2}\mat{K}_{t}\big)^{-1}\big(\mat Y \bm \alpha + \frac{1}{2}\bm \gamma\big)$.
\end{repproposition}
We will first prove a simplified version of the proposition for the case of linear hypotheses, i.e.
we can represent hypotheses in $\mathbb{H}$ and elements of ${\mathcal X}$ as vectors $\mat{w}, \mat{x} \in \mathbb{R}^d$, respectively. Define $\mat{X}' = n^{-1/2} (\mat{x}_1', \ldots, \mat{x}_n')$ to be the matrix whose columns are the normalized sample points from the target distribution. Let also $\{\mat{w}_1, \ldots, \mat{w}_k\}$ be a sample taken from $\partial H''$ and define $\bm W := (\mat{w}_1, \ldots, \mat{w}_k) \in \mathbb{R}^{d \times k}$. Under this notation, problem \eqref{eq:optmaxapp} may be rewritten as
\begin{equation}
\label{eq:linoptimization}
\min_{\mat{w} \in \mathbb{R}^d} \lambda \|\mat{w}\|^2 + \frac{1}{2} \max_{i = 1, \ldots, k} \|\mat{X}'^\top(\mat{w} - \mat{w}_i)\|^2 + \frac{1}{2} \min_{\mat{w}' \in \mathcal C} \|\mat{X}'^\top (\mat{w} - \mat{w}')\|^2.
\end{equation}
\begin{lemma}
The Lagrange dual of problem \eqref{eq:linoptimization} is given by
\begin{align*}
\max_{\bm \alpha, \bm \gamma, \beta} & \ -\Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2}\Big)^\top \mat{X}'^\top \Big(\lambda \mat{I} + \frac{\mat{X}' \mat{X}'^\top}{2}\Big)^{-1}\mat{X}' \Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2}\Big) - \frac{1}{2}\bm \gamma^\top \mat{X}'^\top (\mat{X}' \mat{X}'^\top)^\dag \mat{X}' \bm \gamma + \bm \alpha^\top \mat{y}' - \beta\\
\text{s.
t.} & \ \mat{1}^\top \bm \alpha = \frac{1}{2}, \qquad \mat{1} \beta \geq -\mat Y^\top \bm \gamma, \qquad \bm \alpha \geq 0,
\end{align*}
where $\mat Y = \mat{X}'^\top \bm W$ and $\mat{y}'_i = \|\mat{X}'^\top \mat{w}_i\|^2$.
\end{lemma}
\begin{proof}
By applying the change of variable $\mat{u} = \mat{w}' - \mat{w}$, problem \eqref{eq:linoptimization} can be shown to be equivalent to
\begin{equation*}
\min_{\mat{w} \in \mathbb{R}^d, \, \mat{u} \in \mathcal{C} - \mat{w}} \lambda \|\mat{w}\|^2 + \frac{1}{2} \|\mat{X}'^\top \mat{w}\|^2 + \frac{1}{2}\|\mat{X}'^\top \mat{u}\|^2 + \frac{1}{2}\max_{i = 1, \ldots, k} \|\mat{X}'^\top \mat{w}_i\|^2 - 2 \mat{w}_i^\top \mat{X}' \mat{X}'^\top \mat{w}.
\end{equation*}
By making the constraints on $\mat{u}$ explicit and replacing the maximization term with the variable $r$, the above problem becomes
\begin{align*}
\min_{\mat{w}, \mat{u}, r, \bm \mu} & \quad \lambda \|\mat{w}\|^2 + \frac{1}{2} \|\mat{X}'^\top \mat{w}\|^2 + \frac{1}{2}\|\mat{X}'^\top \mat{u}\|^2 + \frac{1}{2} r \\
\text{s.t.} & \quad \mat{1} r \geq \mat{y}' - 2 \mat Y^\top \mat{X}'^\top \mat{w} \\
& \quad \mat{1}^\top \bm \mu = 1, \qquad \bm \mu \geq 0, \qquad \bm W \bm \mu - \mat{w} = \mat{u}.
\end{align*}
For $\bm \alpha, \bm \delta \geq 0$, the Lagrangian of this problem is defined as
\begin{align*}
\mathfrak{L}(\mat{w}, \mat{u}, \bm \mu, r, \bm \alpha, \beta, \bm \delta, \bm \gamma') & = \lambda \|\mat{w}\|^2 + \frac{1}{2} \|\mat{X}'^\top \mat{w}\|^2 + \frac{1}{2} \|\mat{X}'^\top \mat{u}\|^2 + \frac{1}{2} r + \bm \alpha^\top (\mat{y}' - 2 (\mat{X}' \mat Y)^\top \mat{w} - \mat{1} r) \\
& \mspace{40mu} + \beta (\mat{1}^\top \bm \mu - 1) - \bm \delta^\top \bm \mu + \bm \gamma'^\top (\bm W \bm \mu - \mat{w} - \mat{u}).
\end{align*}
Minimizing with respect to the primal variables yields the following KKT conditions:
\begin{align}
\mat{1}^\top \bm \alpha = \frac{1}{2} & \quad \quad \mat{1} \beta = \bm \delta - \bm W^\top \bm \gamma', \label{eq:linear}\\
\mat{X}' \mat{X}'^\top \mat{u} = \bm \gamma' & \quad \quad 2 \left(\lambda \mat{I} + \frac{\mat{X}' \mat{X}'^\top}{2}\right) \mat{w} = 2 (\mat{X}' \mat Y) \bm \alpha + \bm \gamma'. \label{eq:quad}
\end{align}
Condition \eqref{eq:linear} implies that the terms involving $r$ and $\bm \mu$ vanish from the Lagrangian. Furthermore, the first equation in \eqref{eq:quad} implies that any feasible $\bm \gamma'$ must satisfy $\bm \gamma' = \mat{X}' \bm \gamma$ for some $\bm \gamma \in \Rset^n$.
Finally, it is immediate that $\bm \gamma'^\top \mat{u} = \mat{u}^\top \mat{X}' \mat{X}'^\top \mat{u}$ and $2 \mat{w}^\top \left(\lambda \mat{I} + \frac{\mat{X}' \mat{X}'^\top}{2}\right) \mat{w} = 2 \bm \alpha^\top (\mat{X}' \mat Y)^\top \mat{w} + \bm \gamma'^\top \mat{w}$. Thus, at the optimal point, the Lagrangian becomes
\begin{align*}
& \quad -\mat{w}^\top \Big(\lambda \mat{I} + \frac{1}{2} \mat{X}' \mat{X}'^\top\Big) \mat{w} - \frac{1}{2} \mat{u}^\top \mat{X}' \mat{X}'^\top \mat{u} + \bm \alpha^\top \mat{y}' - \beta \\
\text{s.t.} & \quad \mat{1}^\top \bm \alpha = \frac{1}{2} \quad \quad \mat{1} \beta = \bm \delta - \bm W^\top \bm \gamma' \quad \quad \bm \alpha \geq 0 \wedge \bm \delta \geq 0.
\end{align*}
The non-negativity of $\bm \delta$ implies that $\mat{1} \beta \geq -\bm W^\top \bm \gamma'$.
Solving for $\mat{w}$ and $\mat{u}$ in \eqref{eq:quad} and applying the change of variable $\mat{X}' \bm \gamma = \bm \gamma'$, we obtain the final expression for the dual problem:
\begin{align*}
\max_{\bm \alpha, \bm \gamma, \beta} & \ -\Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2}\Big)^\top \mat{X}'^\top \Big(\lambda \mat{I} + \frac{\mat{X}' \mat{X}'^\top}{2}\Big)^{-1} \mat{X}' \Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2}\Big) - \frac{1}{2} \bm \gamma^\top \mat{X}'^\top (\mat{X}' \mat{X}'^\top)^\dag \mat{X}' \bm \gamma + \bm \alpha^\top \mat{y}' - \beta\\
\text{s.t.} & \ \mat{1}^\top \bm \alpha = \frac{1}{2} \quad \quad \mat{1} \beta \geq -\mat Y^\top \bm \gamma \quad \quad \bm \alpha \geq 0,
\end{align*}
where we have used the fact that $\bm W^\top \bm \gamma' = \bm W^\top \mat{X}' \bm \gamma = \mat Y^\top \bm \gamma$ to simplify the constraints.
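The matrix algebra in this derivation is easy to sanity-check numerically. The following sketch (an illustration only, assuming numpy; the dimensions and the value of $\lambda$ are arbitrary) verifies the push-through identity $\mat{X}'(\lambda \mat{I} + \mat{X}'^\top \mat{X}')^{-1} = (\lambda \mat{I} + \mat{X}' \mat{X}'^\top)^{-1} \mat{X}'$ and the pseudo-inverse identity $\mat{X}'^\top \mat{X}' (\mat{X}'^\top \mat{X}')^\dag = \mat{X}'^\top (\mat{X}' \mat{X}'^\top)^\dag \mat{X}'$ that allow the dual to be expressed through the Gram matrix alone:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 6, 4, 0.5
Xp = rng.standard_normal((d, n))  # plays the role of X' (columns = normalized samples)

# Push-through identity: X'(lam I_n + X'^T X')^{-1} = (lam I_d + X' X'^T)^{-1} X'
lhs = Xp @ np.linalg.inv(lam * np.eye(n) + Xp.T @ Xp)
rhs = np.linalg.inv(lam * np.eye(d) + Xp @ Xp.T) @ Xp
assert np.allclose(lhs, rhs)

# Pseudo-inverse identity: X'^T X' (X'^T X')^+ = X'^T (X' X'^T)^+ X'
# (both sides are the orthogonal projector onto the row space of X')
lhs2 = Xp.T @ Xp @ np.linalg.pinv(Xp.T @ Xp)
rhs2 = Xp.T @ np.linalg.pinv(Xp @ Xp.T) @ Xp
assert np.allclose(lhs2, rhs2)
```

Both identities hold for any matrix and any $\lambda > 0$, which is why the kernelized form of the dual is exactly equivalent to the primal-space form.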
Notice also that we can recover the solution $\mat{w}$ of problem \eqref{eq:linoptimization} as $\mat{w} = (\lambda \mat{I} + \frac{1}{2} \mat{X}' \mat{X}'^\top)^{-1} \mat{X}' (\mat Y \bm \alpha + \frac{1}{2} \bm \gamma)$.
\end{proof}
Using the matrix identities $\mat{X}' (\lambda \mat{I} + \mat{X}'^\top \mat{X}')^{-1} = (\lambda \mat{I} + \mat{X}' \mat{X}'^\top)^{-1} \mat{X}'$ and $\mat{X}'^\top \mat{X}' (\mat{X}'^\top \mat{X}')^\dag = \mat{X}'^\top (\mat{X}' \mat{X}'^\top)^\dag \mat{X}'$, the proof of Proposition~\ref{prop:dual} is now immediate.\\
\begin{proof}[Proof of Proposition~\ref{prop:dual}]
We can rewrite the dual objective of the previous lemma in terms of the Gram matrix $\mat{X}'^\top \mat{X}'$ alone as follows:
\begin{align*}
\max_{\bm \alpha, \bm \gamma, \beta} & \ -\Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2}\Big)^\top \mat{X}'^\top \mat{X}' \Big(\lambda \mat{I} + \frac{\mat{X}'^\top \mat{X}'}{2}\Big)^{-1} \Big(\mat Y \bm \alpha + \frac{\bm \gamma}{2}\Big) - \frac{1}{2} \bm \gamma^\top \mat{X}'^\top \mat{X}' (\mat{X}'^\top \mat{X}')^\dag \bm \gamma + \bm \alpha^\top \mat{y}' - \beta\\
\text{s.t.} & \ \mat{1}^\top \bm \alpha = \frac{1}{2} \quad \quad \mat{1} \beta \geq -\mat Y^\top \bm \gamma \quad \quad \bm \alpha \geq 0.
\end{align*}
By replacing $\mat{X}'^\top \mat{X}'$ with the more general kernel matrix $\mat{K}_{t}$ (which corresponds to the Gram matrix in the feature space), we obtain the desired expression for the dual. Additionally, the same matrix identities applied to condition \eqref{eq:quad} imply that the optimal hypothesis $h$ is given by $h(x) = \sum_{i=1}^n a_i K(x_i', x)$, where $\bm a = (\lambda \mat{I} + \frac{1}{2} \mat{K}_{t})^{-1} (\mat Y \bm \alpha + \frac{\bm \gamma}{2})$.
\end{proof}
\section{$\mu$-admissibility}
\label{app:muadmissible}
\begin{lemma}[Relaxed triangle inequality]
\label{lemma:relaxed}
For any $p \geq 1$, let $L_p$ be the loss defined over $\Rset^N$ by $L_p(\mat{x}, \mat{y}) = \|\mat{y} - \mat{x}\|^p$ for all $\mat{x}, \mat{y} \in \Rset^N$. Then, the following inequality holds for all $\mat{x}, \mat{y}, \mat{z} \in \Rset^N$:
\begin{equation*}
L_p(\mat{x}, \mat{z}) \leq 2^{p - 1} [ L_p(\mat{x}, \mat{y}) + L_p(\mat{y}, \mat{z}) ].
\end{equation*}
\end{lemma}
\begin{proof}
Observe that
\begin{equation*}
L_p(\mat{x}, \mat{z}) = 2^p \Big\| \frac{\mat{x} - \mat{y}}{2} + \frac{\mat{y} - \mat{z}}{2} \Big\|^p.
\end{equation*}
For $p \geq 1$, $x \mapsto x^p$ is convex; thus,
\begin{equation*}
L_p(\mat{x}, \mat{z}) \leq 2^p \cdot \frac{1}{2} \Big[ \|\mat{x} - \mat{y}\|^p + \|\mat{y} - \mat{z}\|^p \Big] = 2^{p - 1} [ L_p(\mat{x}, \mat{y}) + L_p(\mat{y}, \mat{z}) ],
\end{equation*}
which concludes the proof.
\end{proof}
\begin{lemma}
\label{lemma:muadmissible}
Assume that $L_p(h(x), y) \leq M$ for all $x \in {\mathcal X}$ and $y \in {\mathcal Y}$. Then $L_p$ is $\mu$-admissible with $\mu = p M^{p - 1}$.
\end{lemma}
\begin{proof}
Since $x \mapsto x^p$ is $p$-Lipschitz over $[0,1]$, we can write
\begin{align*}
| L(h(x),y) - L(h'(x), y) | & = M^p \bigg| \Big(\frac{|h(x) - y|}{M}\Big)^p - \Big(\frac{|h'(x) - y|}{M}\Big)^p \bigg|\\
& \leq p M^{p-1} |h(x) - y + y - h'(x)| \\
& = p M^{p-1} |h(x) - h'(x)|,
\end{align*}
which concludes the proof.
\end{proof}
\begin{lemma}
\label{lemma:holder}
Let $L$ be the $L_p$ loss for some $p \geq 1$ and let $h, h', h''$ be functions satisfying $L_p(h(x), h'(x)) \leq M$ and $L_p(h''(x), h'(x)) \leq M$ for all $x \in {\mathcal X}$, for some $M \geq 0$. Then, for any distribution ${\mathcal D}$ over ${\mathcal X}$, the following inequality holds:
\begin{equation}
| {\mathcal L}_{\mathcal D}(h, h') - {\mathcal L}_{\mathcal D}(h'', h') | \leq p M^{p-1} [{\mathcal L}_{\mathcal D}(h, h'')]^{\frac{1}{p}}.
\end{equation}
\end{lemma}
\begin{proof}
Proceeding as in the proof of Lemma~\ref{lemma:muadmissible}, we obtain
\begin{align*}
| {\mathcal L}_{\mathcal D}(h, h') - {\mathcal L}_{\mathcal D}(h'', h') | & = \big| \E_{x \in {\mathcal D}} \big[ L_p(h(x), h'(x)) - L_p(h''(x), h'(x)) \big] \big| \\
& \leq p M^{p-1} \E_{x \in {\mathcal D}} \big[ |h(x) - h''(x)| \big].
\end{align*}
Since $p \geq 1$, by Jensen's inequality, we can write $\E_{x \in {\mathcal D}} \big[ |h(x) - h''(x)| \big] \leq \E_{x \in {\mathcal D}} \big[ |h(x) - h''(x)|^p \big]^{1/p} = [{\mathcal L}_{\mathcal D}(h, h'')]^{\frac{1}{p}}$.
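The inequalities of this appendix are straightforward to probe numerically. The sketch below (an informal check only, assuming numpy; the random data, exponents, and the bound $M$ are arbitrary choices) tests the relaxed triangle inequality and the scalar Lipschitz bound $|a^p - b^p| \leq p M^{p-1} |a - b|$ for $a, b \in [0, M]$ that drives the last two lemmas:

```python
import numpy as np

rng = np.random.default_rng(1)
for p in (1.0, 1.5, 2.0, 3.0):
    x, y, z = rng.standard_normal((3, 8))
    Lp = lambda a, b: np.linalg.norm(b - a) ** p

    # Relaxed triangle inequality: L_p(x,z) <= 2^{p-1} [L_p(x,y) + L_p(y,z)]
    assert Lp(x, z) <= 2 ** (p - 1) * (Lp(x, y) + Lp(y, z)) + 1e-12

    # Lipschitz bound behind mu-admissibility: |a^p - b^p| <= p M^{p-1} |a - b| on [0, M]
    M = 2.0
    a, b = rng.uniform(0, M, size=2)
    assert abs(a ** p - b ** p) <= p * M ** (p - 1) * abs(a - b) + 1e-12
```

Such spot checks do not replace the proofs, but they catch sign and exponent slips in constants like $2^{p-1}$ and $pM^{p-1}$ quickly.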
\end{proof}
\section{Experiments}
\label{app:experiments}
Here we report the results of all pairs of adaptation problems for the image task and the sentiment task. Each row of the plot corresponds to a different source domain and each column corresponds to a different target domain. The results reported here are the mean performance of the same experiment repeated 10 times; the error bars represent one standard deviation.
\begin{figure}[t]
\centering
\includegraphics[scale=.9]{images-plot.pdf}
\caption{Adaptation results for the image data set.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=.9]{sentiment-plots.pdf}
\caption{Adaptation results for the sentiment data set.}
\end{figure}
\end{document}
\begin{document} \title{A non-residually finite group acting uniformly properly on a hyperbolic space} \author{R. Coulon\thanks{The first author acknowledges the support of the ANR grant DAGGER ANR-16-CE40-0006-01. He is also grateful to the \emph{Centre Henri Lebesgue} ANR-11-LABX-0020-01 for creating an attractive mathematical environment.}, D. Osin\thanks{The work of the second author has been supported by the NSF grant DMS-1612473.}} \date{} \maketitle \begin{abstract} In this article we produce an example of a non-residually finite group which admits a uniformly proper action on a Gromov hyperbolic space. \end{abstract} \section{Introduction} By default, all actions of groups on metric spaces considered in this paper are by isometries. Recall that a group is \emph{hyperbolic} if and only if it acts properly and cocompactly on a hyperbolic metric space. It is natural to ask what kind of groups we get if we remove the requirement of cocompactness from this definition. However, it turns out that every countable group admits a proper action on a hyperbolic space, namely the parabolic action on a combinatorial horoball \cite{Groves:2008ip}. Thus to obtain an interesting class of groups we have to strengthen our properness assumptions. In this paper we propose to study the class of groups that admit a uniformly proper action on a hyperbolic length space. We denote this class of groups by $\mathcal P$. Recall that an action of a group $G$ on a metric space $X$ is \emph{uniformly proper} if for every $r\in \mathbb R_+$, there exists $N\in \mathbb N$ such that for all $x \in X$, \begin{displaymath} \card{\set{g\in G}{\dist[X] x{gx}\leq r}}\leq N. \end{displaymath} Having a uniformly proper action on a hyperbolic space is a rather restrictive condition.
For instance, \cite[Theorem 1.2]{Osi16} implies that every group $G\in \mathcal P$ (as well as each of its subgroups) is either virtually cyclic or acylindrically hyperbolic, which imposes strong restrictions on the algebraic structure of $G$. Hyperbolic groups and their subgroups obviously belong to $\mathcal P$ and, in general, groups from the class $\mathcal P$ have many properties similar to those of hyperbolic groups. In fact, we do not know the answer to the following question: \emph{Does $\mathcal P$ coincide with the class of all subgroups of hyperbolic groups?} Although the affirmative answer seems unlikely, we are not aware of any obvious counterexamples. This paper is inspired by the well-known open problem of whether every hyperbolic group is residually finite. Our main result shows that the answer to this question is negative if one replaces the class of hyperbolic groups with the class $\mathcal P$. \begin{thm}\label{main} There exists a finitely generated non-trivial group $G$ acting uniformly properly on a hyperbolic graph of bounded valence such that every amenable quotient of $G$ is trivial. In particular, $G\in \mathcal P$ and $G$ is not residually finite. \end{thm} In the process of constructing such a group $G$, we show that a subclass of $\mathcal P$ is closed under taking certain small cancellation quotients (see Section~\ref{sec: sc}). This result seems to be of independent interest and can potentially be used to construct other interesting examples of groups from the class $\mathcal P$. The proof of the second claim of Theorem \ref{main} can be illustrated as follows. We first use a variant of the Rips construction suggested in \cite{BO} to construct a subgroup $N$ of a torsion-free hyperbolic group $H$ and two elements $a,b\in N$ which are ``sufficiently independent'' in $N$ (more precisely, non-commensurable; see Section~\ref{sec: hyp geom} for the definition) but are conjugate in every finite quotient of $N$.
The fact that these elements are ``sufficiently independent'' together with the result about small cancellation quotients mentioned above implies that the quotient group $G=N/\langle\hspace{-.7mm}\langle a^p, b^q\rangle\hspace{-.7mm}\rangle$ belongs to $\mathcal P$ for some (in fact, all sufficiently large) primes $p$ and $q$. If $p\ne q$, the images of $a$ and $b$ are clearly trivial in every finite quotient of $G$. In particular, $G$ is not residually finite. A slightly more elaborate version of this idea involving Kazhdan's property (T) leads to the proof of the first claim of the theorem. \paragraph{Acknowledgments.} We are grateful to Ashot Minasyan for useful comments and suggestions, which allowed us to simplify the original proof of Theorem \ref{main}. \section{A short review of hyperbolic geometry} \label{sec: hyp geom} In this section we recall a few notations and definitions regarding hyperbolic spaces in the sense of Gromov. For more details, we refer the reader to Gromov's original article \cite{Gro87} or to \cite{CooDelPap90,Ghys:1990ki}. \paragraph{The four point inequality.} Let $(X,d)$ be a length space. Recall that the \emph{Gromov product} of three points $x,y,z \in X$ is defined by \begin{displaymath} \gro xyz = \frac 12 \left\{ \dist xz + \dist yz - \dist xy \right\}. \end{displaymath} In the remainder of this section, we assume that $X$ is \emph{$\delta$-hyperbolic}, i.e. for every $x,y,z,t \in X$, \begin{equation} \gro xzt \geq \min\left\{ \gro xyt, \gro yzt \right\} - \delta. \end{equation} We denote by $\partial X$ the boundary at infinity of $X$, see \cite[Chapitre 2]{CooDelPap90}. \paragraph{Quasi-convex subsets.} Let $Y$ be a subset of $X$. Recall that $Y$ is \emph{$\alpha$-quasi-convex} if for every $x \in X$ and all $y,y' \in Y$, we have $d(x,Y) \leq \gro y{y'}x + \alpha$. If $Y$ is path-connected, we denote by $\distV[Y]$ the length pseudo-metric on $Y$ induced by the restriction of $\distV[X]$ to $Y$.
The set $Y$ is \emph{strongly quasi-convex} if $Y$ is $2\delta$-quasi-convex and for every $y,y' \in Y$ we have \begin{displaymath} \dist[X] y{y'} \leq \dist[Y] y{y'} \leq \dist[X] y{y'} + 8\delta. \end{displaymath} We denote by $Y^{+\alpha}$ the \emph{$\alpha$-neighborhood} of $Y$, i.e. the set of points $x \in X$ such that $d(x, Y) \leq \alpha$. \paragraph{Group action.} Let $G$ be a group acting uniformly properly on $X$. An element $g \in G$ is either \emph{elliptic} (it has bounded orbits, hence finite order) or \emph{loxodromic} (it has exactly two accumulation points in $\partial X$) \cite[Lemma~2.2]{Bowditch:2008bj}. A subgroup of $G$ is either \emph{elementary} (it is virtually cyclic) or contains a copy of the free group $\mathbb F_2$ \cite[Paragraph~8.2]{Gro87}. In order to measure the action of $g$ on $X$, we use the translation length defined as follows: \begin{equation*} \norm[X] g = \inf_{x \in X} \dist {gx}x. \end{equation*} If there is no ambiguity, we omit the space $X$ in the notation. A loxodromic element $g \in G$ fixes exactly two points $g^-$ and $g^+$ in $\partial X$. We denote by $E(g)$ the stabilizer of $\left\{ g^-, g^+\right\}$. It is the maximal elementary subgroup containing $g$. Moreover, $\langle g \rangle$ has finite index in $E(g)$ \cite[Lemma 6.5]{Dahmani:2017ef}. Given a loxodromic element $g \in G$, there exists a $g$-invariant strongly quasi-convex subset $Y_g$ of $X$ which is quasi-isometric to a line; its stabilizer is $E(g)$ and the quotient $Y_g/E(g)$ is bounded \cite[Definition~3.12 and Lemma~3.13]{Coulon:2016if}. We call this set $Y_g$ the \emph{cylinder} of $g$. We say that two elements $g,h \in G$ are \emph{commensurable} if there exist $n,m \in \mathbb Z^*$ and $u \in G$ such that $g^n = uh^mu^{-1}$. Every loxodromic element is contained in a unique maximal elementary subgroup \cite[Lemma~3.28]{Coulon:2016if}.
Hence two loxodromic elements $g$ and $h$ are commensurable if and only if there exists $u \in G$ such that $g$ and $uhu^{-1}$ generate an elementary subgroup. \begin{lem} \label{res: fellow travel conjugate} Let $S$ be a finite collection of pairwise non-commensurable loxodromic elements of $G$. There exists $\Delta \in \mathbb R_+$ with the following property. For every $g,g' \in S$ and every $u \in G$, if \begin{displaymath} \operatorname{diam} \left(Y_g^{+5\delta} \cap uY_{g'}^{+5\delta}\right) > \Delta, \end{displaymath} then $g = g'$ and $u \in E(g)$. \end{lem} \begin{proof} The action of $G$ on $X$ being uniformly proper, it is also acylindrical. According to \cite[Proposition~3.44 and Lemma~6.14]{Coulon:2016if}, there exist constants $A,B >0$ with the following property: if $h,h' \in G$ are two loxodromic elements generating a non-elementary subgroup, then \begin{displaymath} \operatorname{diam} \left(Y_h^{+5\delta} \cap Y_{h'}^{+5\delta}\right) \leq A \max\{ \norm h, \norm {h'}\} + B. \end{displaymath} We now let \begin{displaymath} \Delta = A\max_{g \in S} \norm g + B. \end{displaymath} Let $g,g' \in S$ and $u \in G$ be such that \begin{displaymath} \operatorname{diam} \left(Y_g^{+5\delta} \cap uY_{g'}^{+5\delta}\right) > \Delta. \end{displaymath} Recall that $uY_{g'}$ is the cylinder of $ug'u^{-1}$. It follows from our choice of $\Delta$ that $g$ and $ug'u^{-1}$ generate an elementary subgroup. Since the elements of $S$ are pairwise non-commensurable, this forces $g = g'$ and $u \in E(g)$. \end{proof} \section{An auxiliary class $\mathcal P_0$} To prove our main result we will make use of an auxiliary class $\mathcal P_0$. \begin{defn} \label{def: bounded geometry} A subset $S$ of a metric space $X$ is \emph{$r$-separated} if for every pair of distinct points $s,s' \in S$, $\dist s{s'} \geq r$.
Given a subset $Y$ of $X$ and $r >0$, we define the \emph{$r$-capacity} of $Y$, denoted by $C_r(Y)$, as the maximal number of points in an $r$-separated subset of $Y$. We say that $X$ has \emph{$r$-bounded geometry} if for every $R>0$, there is an integer $N$ bounding from above the $r$-capacity of every ball of radius $R$. If there exists $r>0$ such that $X$ has $r$-bounded geometry, we simply say that $X$ has \emph{bounded geometry}. \end{defn} The class $\mathcal P_0$ we are interested in consists of all groups admitting a uniformly proper action on a hyperbolic length space with bounded geometry. It is clear that $\mathcal P_0\subseteq \mathcal P$. We will show that the class $\mathcal P_0$ is closed under certain small cancellation quotients. Before we discuss the precise statements and proofs, a few remarks are in order. First, we do not know whether $\mathcal P_0$ is indeed a proper subclass of $\mathcal P$. Second, it is possible to prove the results of the next section for the whole class $\mathcal P$. Nevertheless, the proofs become much easier for $\mathcal P_0$. Therefore we restrict our attention to this subclass, which is sufficient for the proof of our main theorem. We start with a few equivalent characterizations of the class $\mathcal P_0$. A graph $\Gamma = (V,E)$ is understood here in the sense of Serre \cite{Serre:1977wy}. Observe that a graph $\Gamma$ has bounded geometry whenever it has \emph{uniformly bounded valence}, i.e. there exists $d \in \mathbb N$ such that the valence of any vertex $v \in V$ is at most $d$. \begin{rem} The converse statement is false. Indeed, consider the real line, which we think of as a graph with the vertex set $\mathbb Z$ and the obvious edges; to each vertex, attach infinitely many edges of length $1$. The resulting graph has $3$-bounded geometry while some vertices have infinite valence.
\end{rem} If $\Gamma$ is a graph with uniformly bounded valence, the action of a group $G$ on $\Gamma$ is uniformly proper if and only if there exists $N \in \mathbb N$ such that the stabilizer of any vertex contains at most $N$ elements. \begin{prop} \label{res: characterization P0} Let $G$ be a group. The following assertions are equivalent. \begin{enumerate} \item \label{enu: characterization P0 - def} $G$ belongs to $\mathcal P_0$. \item \label{enu: characterization P0 - action w/o inversion} $G$ acts uniformly properly without inversion on a hyperbolic graph $\Gamma$ with uniformly bounded valence. \item \label{enu: characterization P0 - free action} $G$ acts on a hyperbolic graph $\Gamma$ with uniformly bounded valence such that the action of $G$ is free when restricted to the vertex set of $\Gamma$. \end{enumerate} \end{prop} \begin{proof} To show that (\ref{enu: characterization P0 - free action}) $\Rightarrow$ (\ref{enu: characterization P0 - action w/o inversion}) one simply takes the barycentric subdivision of the graph. The implication (\ref{enu: characterization P0 - action w/o inversion}) $\Rightarrow$ (\ref{enu: characterization P0 - def}) directly follows from the definition. We now focus on (\ref{enu: characterization P0 - def}) $\Rightarrow$ (\ref{enu: characterization P0 - free action}). By definition there exists $r \in \mathbb R_+^*$ such that $G$ acts uniformly properly on a hyperbolic length space $X$ with $r$-bounded geometry. Using Zorn's Lemma we choose an $r$-separated subset $\bar S$ of $\bar X = X/G$ which is maximal for this property. We denote by $S$ the pre-image of $\bar S$ in $X$. We fix $S_0 \subset S$ to be a set of representatives for the action of $G$ on $S$. Let $R = 2r + 1$. We now define a graph $\Gamma = (V,E)$ as follows. Its vertex set is $V = G \times S_0$. The edge set $E$ is the set of pairs $((u,s),(u',s')) \in V \times V$ such that $\dist[X]{us}{u's'} \leq R$.
The initial and terminal vertices of such an edge are $(u,s)$ and $(u',s')$ respectively. The group $G$ acts freely on $V$ as follows: for every $g \in G$, for every $(u,s) \in V$, we have $g \cdot (u,s) = (gu,s)$. This action induces an action by isometries of $G$ on $\Gamma$. Recall that $R > 2r$. This allows us to perform a variation on the Milnor-Svar\v c Lemma and prove that the map $V \to X$ sending $(u,s)$ to $us$ induces a ($G$-equivariant) quasi-isometry from $\Gamma$ to $X$. In particular, $\Gamma$ is hyperbolic. We are left to prove that $\Gamma$ has uniformly bounded valence. Since $X$ has $r$-bounded geometry, there exists $N_1 \in \mathbb N$ such that the $r$-capacity of any ball of radius $R$ in $X$ is at most $N_1$. Since the group $G$ acts uniformly properly on $X$, there exists $N_2 \in \mathbb N$ such that for every $x \in X$, the cardinality of the set \begin{displaymath} U(x) = \left\{ g \in G \mid \dist[X] x{gx}\leq 2R \right\} \end{displaymath} is bounded above by $N_2$. We now fix a vertex $v_0 = (u_0,s_0)$ of $\Gamma$. We fix a subset $S_1$ of $B(u_0s_0,R)$ such that any $G$-orbit of $S$ intersecting $B(u_0s_0,R)$ contains exactly one point in $S_1$. It follows from our choice of $S$ that if $s, s' \in S$ belong to distinct $G$-orbits, then $\dist[X]s{s'} \geq r$. Consequently the cardinality of $S_1$ is bounded above by the $r$-capacity of this ball, i.e. $N_1$. By construction, for every $s \in S_1$, there exists $u_s \in G$ such that $u_ss$ belongs to $S_0$. It follows from the definition of $\Gamma$ combined with the triangle inequality that any neighbor of $v_0$ belongs to the set \begin{displaymath} \left\{ (uu_s^{-1},u_ss) \mid s \in S_1, u \in U(s) \right\}. \end{displaymath} The cardinality of this set is bounded above by $d = N_1N_2$, which does not depend on $v_0$; hence $\Gamma$ has uniformly bounded valence.
\end{proof} \section{Stability of the class $\mathcal P_0$} \label{sec: sc} We now explain how $\mathcal P_0$ behaves under small cancellation. To that end we first review the geometric theory of small cancellation as it has been introduced by M.~Gromov \cite{Gro01b} and further developed in \cite{DelGro08,Coulon:il,Dahmani:2017ef}. For a detailed exposition we refer the reader to \cite[Sections~4-6]{Coulon:2014fr}. \paragraph{Settings.} Let $X$ be a $\delta$-hyperbolic length space and $G$ a group acting on $X$. Let $\mathcal Q$ be a family of pairs $(H,Y)$ such that $Y$ is a strongly quasi-convex subset of $X$ and $H$ a subgroup of $\stab Y$. We assume that $\mathcal Q$ is closed under the following action of $G$: for every $(H,Y) \in \mathcal Q$, for every $g \in G$, $g(H,Y) = (gHg^{-1},gY)$. In addition we require that $\mathcal Q/G$ is finite. We denote by $K$ the (normal) subgroup generated by the subgroups $H$ where $(H,Y) \in \mathcal Q$. The goal is to study the quotient $\bar G = G/K$ and the corresponding projection $\pi \colon G \to \bar G$. To that end we define the following two small cancellation parameters: \begin{eqnarray*} \Delta (\mathcal Q,X) & = & \sup \set{\operatorname{diam}\left(Y_1^{+5\delta}\cap Y_2^{+5\delta}\right)}{ (H_1,Y_1) \neq (H_2,Y_2) \in \mathcal Q}, \\ \inj[X]{\mathcal Q} & = & \inf \set{\norm h}{h \in H\setminus\{1\},\; (H,Y) \in \mathcal Q}. \end{eqnarray*} They play the role of the length of the longest piece and of the shortest relation, respectively. We now fix a number $\rho>0$. Its value will be made precise later. It should be thought of as a very large parameter. \paragraph{Cones.} Let $(H,Y) \in \mathcal Q$. The \emph{cone of radius $\rho$ over $Y$}, denoted by $Z(Y)$, is the quotient of $Y\times [0,\rho]$ by the equivalence relation that identifies all the points of the form $(y,0)$. The equivalence class of $(y,0)$, denoted by $a$, is called the \emph{apex} of the cone.
By abuse of notation, we still write $(y,r)$ for the equivalence class of $(y,r)$. The map $\iota \colon Y \rightarrow Z(Y)$ that sends $y$ to $(y,\rho)$ provides a natural embedding from $Y$ into $Z(Y)$. This space can be endowed with a metric as described below. For the geometric interpretation of the distance see \cite[Section~4.1]{Coulon:2014fr}. \begin{prop}{\rm \cite[Chapter I.5, Proposition 5.9]{BriHae99}} \quad \label{res: def distance cone} The cone $Z(Y)$ is endowed with a metric characterized in the following way. Let $x=(y,r)$ and $x'=(y',r')$ be two points of $Z(Y)$. Then \begin{displaymath} \cosh \dist[Z(Y)] x{x'} = \cosh r \cosh r' - \sinh r\sinh r' \cos \theta(y,y'), \end{displaymath} where $\theta(y,y')$ is the \emph{angle at the apex} defined by $\theta(y,y') = \min \left\{ \pi , {\dist[Y]y{y'}}/\sinh \rho\right\}$. \end{prop} \paragraph{Cone-off over a metric space.} The \textit{cone-off of radius $\rho$ over $X$ relative to $\mathcal Q$}, denoted by $\dot X_\rho(\mathcal Q)$ (or simply $\dot X$), is obtained by attaching, for every $(H,Y) \in \mathcal Q$, the cone $Z(Y)$ to $X$ along $Y$ according to $\iota$. We endow $\dot X$ with the largest pseudo-metric $\distV[\dot X]$ for which all the maps $X \to \dot X$ and $Z(Y) \to \dot X$ -- when $(H,Y)$ runs over $\mathcal Q$ -- are $1$-Lipschitz. It turns out that this pseudo-distance is actually a distance on $\dot X$ \cite[Proposition 5.10]{Coulon:2014fr}. Moreover, $(\dot X, \distV[\dot X])$ is a length space. The action of $G$ on $X$ naturally extends to an action by isometries on $\dot X$ as follows. Let $(H,Y) \in \mathcal Q$. For every $x = (y,r) \in Z(Y)$ and every $g \in G$, $gx$ is the point of $Z(gY)$ defined by $gx = (gy,r)$. The space $\bar X_\rho(\mathcal Q)$ (or simply $\bar X$) is the quotient $\bar X = \dot X/K$. The metric on $\dot X$ induces a pseudo-metric on $\bar X$. We write $\zeta \colon \dot X \rightarrow \bar X$ for the canonical projection from $\dot X$ to $\bar X$.
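To build some intuition for the cone metric above, the following numerical sketch (illustrative only, assuming numpy; the value of $\rho$ and the sampled radii are arbitrary) checks two expected features of the formula: any two points of $Z(Y)$ are at distance at most $2\rho$ (one can always travel through the apex), and, for instance, two points at radius $\rho$ are no farther apart in $Z(Y)$ than in $Y$, consistent with the map $\iota$ being $1$-Lipschitz:

```python
import numpy as np

def cone_dist(r, rp, dY, rho):
    """Distance in the cone Z(Y) between (y, r) and (y', rp), where dY = d_Y(y, y')."""
    theta = min(np.pi, dY / np.sinh(rho))          # angle at the apex
    c = np.cosh(r) * np.cosh(rp) - np.sinh(r) * np.sinh(rp) * np.cos(theta)
    return float(np.arccosh(max(c, 1.0)))          # guard against rounding below 1

rho = 5.0
rng = np.random.default_rng(0)
for _ in range(1000):
    r, rp = rng.uniform(0, rho, size=2)
    dY = rng.uniform(0, 10 * np.sinh(rho))
    assert cone_dist(r, rp, dY, rho) <= 2 * rho + 1e-9   # path through the apex has length r + rp <= 2 rho

# with theta = 0 the formula collapses to |r - rp|, as expected
assert abs(cone_dist(1.0, 2.0, 0.0, rho) - 1.0) < 1e-9
# at radius rho, the cone distance does not exceed the distance measured in Y (here d_Y = 3)
assert cone_dist(rho, rho, 3.0, rho) <= 3.0
```

The $2\rho$ bound is what makes every $Z(Y)$ a bounded piece of the cone-off, and it is the reason each attached subset $Y$ becomes bounded in $\dot X$ regardless of its diameter in $X$.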
The quotient $\bar G$ naturally acts by isometries on $\bar X$. \begin{prop} \label{res: qi quotient spaces} Assume that for every $(H,Y) \in \mathcal Q$, the space $Y/H$ is bounded. Then the spaces $\bar X$ and $X/K$ are quasi-isometric. \end{prop} \begin{proof} Recall that the embedding $X \to \dot X$ is $1$-Lipschitz. Hence it induces a $1$-Lipschitz embedding $X/K \to \bar X$. We claim that the map $X/K \to \bar X$ is actually bi-Lipschitz. For simplicity, we implicitly identify $X/K$ with its image in $\bar X$. Recall that $\mathcal Q/G$ is finite. It follows from our assumption that there exists $D \in \mathbb R_+$ such that for every $(H,Y) \in \mathcal Q$, the image of $Y$ in $X/K$ has diameter at most $D$. Let $\bar x, \bar x' \in X/K$. Let $\eta \in \mathbb R_+^*$. There exist $x,x' \in X$, respective pre-images of $\bar x$ and $\bar x'$, such that $\dist[\dot X] x{x'} < \dist[\bar X] {\bar x}{\bar x'} + \eta$. Following the construction of the metric on $\dot X$ -- see for instance \cite[Section~5.1]{Coulon:2014fr} -- we observe that there exists a sequence of points $(x_0,y_0,x_1, y_1, \dots, x_m,y_m)$ which approximates the distance between $x$ and $x'$ in the following sense: \begin{enumerate} \item $x_0 =x$ and $y_m = x'$; \item For every $i \in \{ 0, \dots, m-1\}$, there exists $(H_i,Y_i) \in \mathcal Q$ such that $y_i,x_{i+1} \in Y_i$; \item \begin{equation} \label{eqn: qi quotient spaces - approx dot X} \sum_{i = 0}^m \dist[X] {x_i}{y_i} + \sum_{i = 0}^{m-1} \dist[Z(Y_i)]{y_i}{x_{i+1}} < \dist[\dot X] x{x'} + \eta. \end{equation} \end{enumerate} For every $i \in \{0, \dots, m\}$, we write $\bar x_i$ and $\bar y_i$ for the images in $X/K$ of $x_i$ and $y_i$ respectively.
It follows from the triangle inequality that \begin{equation} \label{eqn: qi quotient spaces - approx bar X} \dist[X/K]{\bar x}{\bar x'} \leq \sum_{i = 0}^m \dist[X/K] {\bar x_i}{\bar y_i} + \sum_{i = 0}^{m-1} \dist[X/K]{\bar y_i}{\bar x_{i+1}}. \end{equation} We are going to compare the terms of the latter inequality with the ones of (\ref{eqn: qi quotient spaces - approx dot X}). Note first that for every $i \in \{0, \dots, m\}$, we have \begin{equation} \label{eqn: qi quotient spaces - easy comparison} \dist[X/K] {\bar x_i}{\bar y_i} \leq \dist[X] {x_i}{y_i}. \end{equation} Let $i \in \{0, \dots, m-1\}$. In order to estimate $\dist[X/K]{\bar y_i}{\bar x_{i+1}}$, we distinguish two cases. Assume first that $\dist[Y_i]{y_i}{x_{i+1}} \leq \pi \sinh \rho$. It follows from the definition of the metric on $Z(Y_i)$ that \begin{equation} \label{eqn: qi quotient spaces - hard comparison - case 1} \dist[X/K]{\bar y_i}{\bar x_{i+1}} \leq \dist[X]{y_i}{x_{i+1}} \leq \dist[Y_i]{y_i}{x_{i+1}} \leq \frac {\pi \sinh \rho}{2\rho} \dist[Z(Y_i)]{y_i}{x_{i+1}}. \end{equation} Assume now that $\dist[Y_i]{y_i}{x_{i+1}} > \pi \sinh \rho$. In particular, $\dist[Z(Y_i)]{y_i}{x_{i+1}} = 2 \rho$. Recall that the diameter of the image of $Y_i$ in $X/K$ is at most $D$. Hence \begin{equation} \label{eqn: qi quotient spaces - hard comparison - case 2} \dist[X/K]{\bar y_i}{\bar x_{i+1}} \leq \frac D{2\rho} \dist[Z(Y_i)]{y_i}{x_{i+1}}. \end{equation} Combining (\ref{eqn: qi quotient spaces - approx dot X}) - (\ref{eqn: qi quotient spaces - hard comparison - case 2}), we get that \begin{displaymath} \dist[X/K]{\bar x}{\bar x'} \leq \lambda \left(\dist[\dot X]x{x'} + \eta\right) \leq \lambda \left(\dist[\bar X]{\bar x}{\bar x'} + 2 \eta \right), \end{displaymath} where \begin{displaymath} \lambda = \max \left\{ 1, \frac {\pi \sinh\rho}{2\rho}, \frac D{2\rho} \right\}.
\end{displaymath} The previous inequality holds for every $\eta \in \mathbb R_+^*$, hence $X/K \to \bar X$ is bi-Lipschitz, which completes the proof of our claim. Note that the cones attached to $X$ to form the cone-off space $\dot X$ have diameter at most $2\rho$. Hence any point of $\bar X$ is at distance at most $2\rho$ from a point of $X/K$. Consequently the map $X/K \to \bar X$ is a quasi-isometry. \end{proof} \paragraph{Small cancellation theorem.} The small cancellation theorem recalled below is a compilation of Proposition~6.7, Corollary~3.12, and Proposition~6.12 from \cite{Coulon:2014fr}. \begin{thm} \label{res: small cancellation theorem} There exist positive constants $\delta_0$, $\delta_1$, $\Delta_0$ and $\rho_0$ satisfying the following property. Let $X$ be a $\delta$-hyperbolic length space and $G$ a group acting by isometries on $X$. Let $\mathcal Q$ be a $G$-invariant family of pairs $(H,Y)$ where $Y$ is a strongly quasi-convex subset of $X$ and $H$ a subgroup of $G$ stabilizing $Y$. We assume that $\mathcal Q/G$ is finite. Let $\rho \geq \rho_0$. If $\delta \leq \delta_0$, $\Delta(\mathcal Q,X) \leq \Delta_0$ and $\inj[X]{\mathcal Q} \geq 2 \pi \sinh \rho$, then the following holds. \begin{enumerate} \item The space $\bar X = \bar X_\rho(\mathcal Q)$ is a $\delta_1$-hyperbolic length space. \item Let $(H,Y) \in \mathcal Q$. Let $a$ be the apex of $Z(Y)$ and $\bar a$ its image in $\bar X$. The projection $\pi \colon G \twoheadrightarrow \bar G$ induces an isomorphism from $\stab Y/H$ onto $\stab{\bar a}$. \item For every $x \in X$, the projection $\pi \colon G \to \bar G$ induces a bijection from the set $\set{g \in G}{\dist{gx}x \leq \rho/100}$ onto its image. \item Let $\bar F$ be an elliptic subgroup of $\bar G$.
Either there exists an elliptic subgroup $F$ of $G$ such that the projection $\pi \colon G \to \bar G$ induces an isomorphism from $F$ onto $\bar F$, or there exists $(H,Y) \in \mathcal Q$ such that $\bar F$ is contained in $\stab{\bar a}$, where $\bar a$ stands for the image in $\bar X$ of the apex $a$ of the cone $Z(Y)$. \end{enumerate} \end{thm} We are now in a position to prove the following statement. \begin{prop} \label{SCQ} Let $G$ be a group acting uniformly properly without inversion on a hyperbolic graph $\Gamma$ with uniformly bounded valence. Let $\{g_1, \dots, g_m\}$ be a finite subset of $G$ whose elements are loxodromic (with respect to the action of $G$ on $\Gamma$) and pairwise non-commensurable. In addition, we assume that for every $i\in\{1, \dots, m\}$, the group $\langle g_i \rangle$ is normal in $E(g_i)$. Then for every finite subset $U\subseteq G$ there exists $N \in \mathbb N$ with the following property. Let $n_1, \dots, n_m \in \mathbb N$, all bounded below by $N$. Let $K$ be the normal closure of $\{g_1^{n_1}, \dots, g_m^{n_m}\}$ in $G$. Then the quotient $\bar G = G/K$ belongs to $\mathcal P_0$. Moreover, we have the following. \begin{enumerate} \item \label{enu: sc quotient w/ bounded geometry - cone point stab} For every $i\in\{1, \dots, m\}$, the natural homomorphism $\pi \colon G \to \bar G$ induces an embedding of $E(g_i) / \langle g_i \rangle$ into $\bar G$. \item \label{enu: sc quotient w/ bounded geometry - local one-to-one} The projection $\pi$ is injective when restricted to $U$. \item \label{enu: sc quotient w/ bounded geometry - elliptic} Let $\bar F$ be a finite subgroup of $\bar G$. Then either there exists a finite subgroup $F$ of $G$ such that $\pi (F)=\bar F$ or $\bar F$ is conjugate to a subgroup of $\pi(E(g_i))$ for some $i\in\{1, \dots, m\}$.
\end{enumerate} \end{prop} \begin{proof} The constants $\delta_0$, $\delta_1$, $\Delta_0$, and $\rho_0$ are the ones given by Theorem~\ref{res: small cancellation theorem}. We choose an arbitrary $\rho \geq \rho_0$. We write $\delta$ for the hyperbolicity constant of $\Gamma$. According to Lemma~\ref{res: fellow travel conjugate} there exists a constant $\Delta$ such that for every $u \in G$, for every $i, j$ in $\{1, \dots, m\}$, if \begin{displaymath} \operatorname{diam}\left(Y_{g_i}^{+5\delta}\cap uY_{g_j}^{+5\delta}\right) > \Delta, \end{displaymath} then $i = j$ and $u$ belongs to $E(g_i)$. Up to replacing $\Gamma$ by a rescaled version of $\Gamma$, which we denote by $X$, we may assume that the following holds: \begin{itemize} \item $\delta \leq \delta_0$ and $\Delta \leq \Delta_0$, \item there exists $x \in X$ such that for every $u \in U$ we have $\dist[X]{ux}x \leq \rho /100$. \end{itemize} Since the $g_i$'s are loxodromic, there exists $N \in \mathbb N$ such that for every $n \geq N$, for every $i \in \{1, \dots, m\}$, we have $\norm[X]{g_i^n} \geq 2\pi \sinh \rho$. Let $n_1, \dots, n_m \in \mathbb N$, all bounded below by $N$. Let $K$ be the normal closure of $\{g_1^{n_1}, \dots, g_m^{n_m}\}$ and $\bar G$ be the quotient $\bar G = G/K$. Since $G$ acts without inversion on $\Gamma$, the quotient $\bar \Gamma = \Gamma /K$ is a graph endowed with an action without inversion of $\bar G$. According to our assumptions there exist $d, M \in \mathbb N$ such that given any vertex $v$ of $\Gamma$, its valence is at most $d$ and the cardinality of its stabilizer is bounded above by $M$. Observe that the same holds for the vertices of $\bar \Gamma$. To prove that $\bar G$ belongs to $\mathcal P_0$, it suffices to show that $\bar \Gamma$ is hyperbolic. To that end, we use small cancellation theory. Let $\mathcal Q$ be the following collection \begin{equation*} \mathcal Q = \set{\left(\langle ug_i^{n_i}u^{-1}\rangle , uY_{g_i}\right)}{u \in G,\ 1 \leq i \leq m}.
\end{equation*} By construction $\Delta(\mathcal Q,X) \leq \Delta_0$ and $\inj[X]{\mathcal Q} \geq 2\pi \sinh \rho$. The cone-off space $\dot X = \dot X_\rho(\mathcal Q)$ and the quotient $\bar X = \dot X/K$ are built as above. The parameters have been chosen in such a way that the family $\mathcal Q$ satisfies the assumptions of Theorem~\ref{res: small cancellation theorem}. It follows that $\bar X$ is a hyperbolic length space. Note that for every $(H,Y) \in \mathcal Q$, the quotient $Y/H$ is bounded, hence $\bar X$ is quasi-isometric to $X/K$ (Proposition~\ref{res: qi quotient spaces}). Moreover, $X/K$ is just a rescaled copy of $\bar \Gamma$. Thus $\bar \Gamma$ is quasi-isometric to $\bar X$, and therefore hyperbolic. Points~(\ref{enu: sc quotient w/ bounded geometry - cone point stab})--(\ref{enu: sc quotient w/ bounded geometry - elliptic}) follow directly from Theorem~\ref{res: small cancellation theorem}. \end{proof} \section{Proof of the main theorem} We begin with an auxiliary result, which is similar to \cite[Proposition 4.2]{MM}. \begin{lem} \label{res: rips} Let $Q$ be a finitely presented infinite simple group and let $H$ be a torsion-free hyperbolic group splitting as \begin{displaymath} 1\to N\to H\to Q\to 1, \end{displaymath} where the subgroup $N$ is finitely generated. Let $a\in N\setminus \{ 1\}$. Then there exists $b \in N\setminus\{1\}$ such that $a$ and $b$ are not commensurable in $N$ but are conjugate in every finite quotient of $N$. \end{lem} \begin{proof} Let $C=\langle c\rangle $ be the maximal cyclic subgroup of $H$ containing $a$ and let $h\in H\setminus CN$ (note that our assumptions imply that $CN\ne H$). Let $b=h^{-1}ah$ and write $a=c^n$ for some $n\in \mathbb Z\setminus \{ 0\}$. If $a$ and $b$ are commensurable in $N$, then there exist $t\in N$ and $k, \ell\in \mathbb Z\setminus \{ 0\}$ such that $c^{kn}=t^{-1}h^{-1}c^{\ell n}ht$.
Since $H$ is torsion-free we have $k=\ell$ and by the uniqueness of roots in a torsion-free hyperbolic group we obtain $c=t^{-1}h^{-1}cht$. It follows that $ht\in C$ and consequently $h\in CN$, which contradicts our assumption. Thus $a$ and $b$ are not commensurable in $N$. Assume now that there exists a finite index normal subgroup $K$ of $N$ such that the images of $a$ and $b$ are not conjugate in $N/K$. Since $N$ is finitely generated, there are only finitely many subgroups of any finite index in $N$. Replacing $K$ with the intersection of all subgroups of $N$ of index $[N:K]$ if necessary, we can assume that $K$ is normal in $H$. The natural action of the group $H$ on the finite set $\Omega$ of conjugacy classes of $N/K$ is non-trivial; indeed, the element $h$ acts non-trivially as the images of $a$ and $b$ are not conjugate in $N/K$. Since every element of $N$ acts on $\Omega $ trivially, the action of $H$ on $\Omega$ gives rise to a non-trivial homomorphism $\epsilon\colon Q\to \mathrm{Sym}(\Omega)$, which is impossible: an infinite simple group admits no non-trivial homomorphism to a finite group. \end{proof} \begin{thm}\label{thm-main} There exists a finitely generated group $G\in \mathcal P_0$ such that every amenable quotient of $G$ is trivial. \end{thm} \begin{proof} Let $H_1$ be a torsion-free hyperbolic group with property (T) of Kazhdan and \begin{equation*} H_2=\left\langle x,y\mid y=x\left(y^{-1}xy\right)x^2\left(y^{-1}xy\right) \cdots x^{10} \left(y^{-1}xy\right)\right\rangle. \end{equation*} It is easy to see that $H_2$ satisfies the $C^\prime (1/6)$ small cancellation condition and hence is hyperbolic. Moreover it is generated by some conjugates of $x$. Any two non-cyclic torsion-free hyperbolic groups have a common non-cyclic torsion-free hyperbolic quotient group \cite[Theorem~2]{Ols93}. Let $H_0$ denote a common non-cyclic torsion-free hyperbolic quotient of $H_1$ and $H_2$.
By \cite[Corollary 1.2]{BO}, there exists a short exact sequence \begin{displaymath} 1\to N\to H\to Q\to 1 \end{displaymath} such that $H$ is torsion-free hyperbolic, $N$ is a quotient of $H_0$, and $Q$ is a finitely presented infinite simple group. Clearly $N$ inherits property (T) from $H_1$. As a subgroup of a hyperbolic group, $N$ belongs to the class $\mathcal P_0$. Let $a$ denote the image of $x\in H_2$ in $N$. Since $N$ is a quotient group of $H_2$, it is generated by conjugates of $a$ (in $N$). According to the previous lemma, there exists $b\in N$ such that $a$ and $b$ are not commensurable in $N$ but are conjugate in every finite quotient of $N$. By Proposition \ref{SCQ}, there exist distinct primes $p$ and $q$ such that $G=N/\normal{a^p, b^q}$ belongs to $\mathcal P_0$, and the images of $a$ and $b$ in $G$ have orders $p$ and $q$, respectively. Let $A$ be an amenable quotient of $G$. Being a quotient group of $N$, $A$ has property (T) and, therefore, is finite. It follows that the images of $a$ and $b$ in $A$, denoted by $\bar a$ and $\bar b$, are conjugate. Conjugate elements have the same order; since $\bar a^p=\bar b ^q=1$, this common order divides $\gcd(p,q)=1$, and hence $\bar a= \bar b =1$. Since $N$ is generated by conjugates of $a$, $A$ is generated by conjugates of $\bar a$, which implies $A=\{ 1\}$. \end{proof} \noindent \emph{R\'emi Coulon} \\ Univ Rennes, CNRS \\ IRMAR - UMR 6625 \\ F-35000 Rennes, France\\ \texttt{[email protected]} \\ \texttt{http://rcoulon.perso.math.cnrs.fr} \noindent \emph{Denis Osin} \\ Department of Mathematics \\ Vanderbilt University \\ Nashville, TN 37240, U.S.A.\\ \texttt{[email protected]} \\ \texttt{https://as.vanderbilt.edu/math/bio/denis-osin} \end{document}
\begin{document} \title{Forcing the $\Sigma^1_3$-separation property} \begin{abstract} We generically construct a model in which the $\bf{\Sigma^1_3}$-separation property is true, i.e. every pair of disjoint $\bf{\Sigma^1_3}$-sets can be separated by a $\bf{\Delta^1_3}$-definable set. This answers an old question from the problem list ``Surrealist landscape with figures'' by A. Mathias from 1968. We also construct a model in which the (lightface) $\Sigma^1_3$-separation property is true. \end{abstract} \section{Introduction} The separation property, together with the reduction property and the uniformization property, are three classical notions which were first introduced and studied by Polish and Russian descriptive set theorists in the 1920's and 1930's. \begin{definition} Let $\Gamma$ be a (lightface or boldface) projective pointclass, and let $\check{\Gamma}=\{X \, : \, \omega^{\omega} \backslash X \in \Gamma \}$ denote the dual pointclass of $\Gamma$. \begin{itemize} \item We say that $\Gamma$ has the separation property iff every pair $A_1$ and $A_2$ of disjoint elements of $\Gamma$ has a separating set $C \in \Gamma \cap \check{\Gamma}$, where $C$ separates $A_1$ and $A_2$ if $A_1 \subset C$ and $A_2 \subset \omega^{\omega} \backslash C$. \item $\Gamma$ has the reduction property if for any pair $A_1$ and $A_2$ in $\Gamma$, there are disjoint sets $B_1 \subset A_1$ and $B_2 \subset A_2$, both in $\Gamma$, such that $A_1 \cup A_2= B_1 \cup B_2$. \item $\Gamma$ has the uniformization property if for every $A \subset \omega^{\omega} \times \omega^{\omega}$ in $\Gamma$ there is a uniformizing function $f_A$ whose graph is in $\Gamma$, where we say that $f_A$ is a uniformizing function of $A$ if $\operatorname{dom} f_A = \mathrm{pr}_1(A)=\{ x \in \omega^{\omega} \, : \, \exists y ((x,y) \in A) \}$ and $f_A \subset A$. \end{itemize} \end{definition} It is rather straightforward to see that the uniformization property for $\Gamma$ implies the reduction property for $\Gamma$.
A classical result due to Novikov shows that the reduction property cannot hold simultaneously at both $\Gamma$ and $\check{\Gamma}$. Passing to complements immediately yields that the reduction property for $\Gamma$ implies that the dual $\check{\Gamma}$ has the separation property (see e.g. Y. Moschovakis' book \cite{Moschovakis} for much more information on the origin and history of these notions). Consequently, $\bf{\Sigma^1_1}$- and $\bf{\Pi^1_2}$-sets have the separation property due to M. Kondo's theorem that $\bf{\Pi^1_1}$, and hence also $\bf{\Sigma^1_2}$, has the uniformization property. The fact that the $\bf{\Sigma^1_1}$-separation property is true was proved by N. Lusin already in 1927. This is as much as $\mathsf{ZFC}$ can prove about the separation property. In G\"odel's constructible universe $L$ there is a good $\Sigma^1_2$-definable wellorder of the reals, hence the $\bf{\Sigma^1_n}$-uniformization property holds for $n\ge 3$, so $\bf{\Pi^1_n}$-separation must hold as well. On the other hand, by the celebrated results of Y. Moschovakis, $\bf{\Delta}^1_{2n}$-determinacy implies the $\bf{\Pi}^1_{2n+1}$-uniformization property, so in particular under $\bf{\Delta}^1_2$-determinacy $\bf{\Sigma^1_3}$-separation holds. Note here that, due to H. Woodin, ${\Delta^1_2}$-determinacy together with $\bf{\Pi}^1_1$-determinacy already implies that $M_1^{\#}$ exists and is $\omega_1$-iterable (see \cite{MSW}, Theorem 1.22), which in turn implies the existence of an inner model with a Woodin cardinal. On the other hand, in the presence of ``every real has a sharp'', if the $\bf{\Sigma}^1_3$-separation property holds, then it does so because already $\bf{\Delta}^1_2$-determinacy holds. The above follows from Steel's and Woodin's solution to the fourth Delfino problem.
Steel showed that, in the presence of ``every real has a sharp'', the $\bf{\Sigma}^1_3$-separation property implies the existence of an inner model with a Woodin cardinal as well (see \cite{Steel}, Theorem 0.7); more precisely, under the stated assumptions, for any real $y$, there is a proper class model $M$ with $y \in M$, and an ordinal $\delta$ such that $V^M_{\delta+1}$ is countable and $\delta$ is a Woodin cardinal in $M$. By results of Woodin, which were later reproved by I. Neeman using different methods (see \cite{Neeman}, Corollary 2.3), the latter assertion implies that $\bf{\Delta}^1_2$-determinacy must hold in $V$. It is natural to ask whether one can get a model of the $\bf{\Sigma^1_3}$-separation property from just assuming the consistency of $\mathsf{ZFC}$. Indeed, this question has been asked long before the connection between determinacy assumptions and large cardinals had been uncovered; it appears as Problem 3029 in A. Mathias's list of open problems compiled in 1968 (see \cite{Mathias}, or \cite{Kanovei1}, where the problem is stated again). The problem itself seems to have a nontrivial history of attempted solutions (see \cite{Kanovei2} for an account). Put in a wider context, this paper can be seen as following a tradition of establishing consequences of (local forms of) projective determinacy using the methods of forcing. There is an extensive list of results which deal with forcing statements concerning the Lebesgue measurability and the Baire property of certain levels of the projective hierarchy. For the separation property, L. Harrington, in unpublished notes dating back to 1974, constructed a model in which the separation property fails for both $\bf{\Sigma}^1_3$- and $\bf{\Pi}^1_3$-sets. In the same set of handwritten notes, he outlines how his proof can be altered to work for arbitrary $n \ge 3$. Very recently, using different methods, V. Kanovei and V.
Lyubetsky devised a forcing which, given an arbitrary $n \ge 3$, produces a universe in which the $\bf{\Sigma}^1_n$- and the $\bf{\Pi}^1_n$-separation properties fail (see \cite{Kanovei1}). Yet tools for producing models which deal with the separation property, the reduction property or the uniformization property in a positive way were non-existent. The goal of this paper is to show that the $\bf{\Sigma^1_3}$-separation property has no large cardinal strength, which answers Mathias' question. \begin{theorem*} Starting with $L$ as the ground model, one can produce a set-generic extension $L[G]$ in which the $\bf{\Sigma^1_3}$-separation property holds. \end{theorem*} The proof method also allows us to tackle the lightface $\Sigma^1_3$-separation property: \begin{theorem*} Starting with $L$ as the ground model, one can produce a set-generic extension $L[G]$ in which the ${\Sigma^1_3}$-separation property holds. \end{theorem*} As always, the flexibility of the forcing method can be exploited to produce effects which cannot be inferred from projective determinacy assumptions alone. An example would be that the above proofs lift without pain to statements about the $\Sigma^1_1$-separation property in the generalized Baire space $\omega_1^{\omega_1}$. Another example, though speculative, involves lifting the above results to inner models with finitely many Woodin cardinals. We strongly believe that the proofs of the theorems above can serve as a blueprint to obtain models where the $\Sigma^1_{n+3}$-separation property holds, for arbitrary $n \in \omega$, while working over the canonical inner model with $n$ Woodin cardinals $M_n$ instead of $L$, as we do in this article. Note here that for even $n$ this would produce models which display a behaviour of the separation property contradicting the one implied by $\mathsf{PD}$. We finish the introduction with a short summary of the present article.
In section two we briefly introduce the forcings which will be used in order to prove the two main theorems. In the third section we shall construct a mild generic extension of $L$, denoted by $W$, which is for our needs the right ground model to work with. In the third subsection of section three, we prove an auxiliary result whose purpose is to highlight several important ideas in an easier setting. Our hope is that this way the reader obtains a better understanding of the proofs of the later main theorems. In section four we prove the boldface separation property and in the fifth section we prove the lightface separation property. The latter relies on several arguments from the boldface case and cannot be read separately. In the sixth section we discuss some interesting and open questions. \section{Preliminaries} \subsection{Notation} The notation we use will be mostly standard, we hope. We write $\mathbb{P}=(\mathbb{P}_{\alpha} \, : \, \alpha < \gamma)$ for a forcing iteration of length $\gamma$ with initial segments $\mathbb{P}_{\alpha}$. The $\alpha$-th factor of the iteration will be denoted with $\mathbb{P}(\alpha)$. Note here that we drop the dot on $\mathbb{P}(\alpha)$, even though $\mathbb{P}(\alpha)$ is in fact a $\mathbb{P}_{\alpha}$-name of a partial order. If $\alpha' < \alpha < \gamma$, then we write $\mathbb{P}_{\alpha' \alpha}$ to denote the intermediate forcing of $\mathbb{P}$ which happens in the interval $[\alpha',\alpha)$, i.e. $\mathbb{P}_{\alpha' \alpha}$ is such that $\mathbb{P} \cong \mathbb{P}_{\alpha'} \ast \mathbb{P}_{\alpha' \alpha}$. We write $\mathbb{P} \Vdash \varphi$ whenever every condition in $\mathbb{P}$ forces $\varphi$, and make deliberate use of restricting partial orders below conditions, that is, if $p \in \mathbb{P} $ is such that $p \Vdash \varphi$, we let $\mathbb{P}':= \mathbb{P}_{\le p}:=\{ q \in \mathbb{P} \, : \, q \le p\}$ and use $\mathbb{P}'$ instead of $\mathbb{P}$.
This is supposed to reduce the notational load of some definitions and arguments. We also sometimes write $V[\mathbb{P}]\models \varphi$ to indicate that for every $\mathbb{P}$-generic filter $G$ over $V$, $V[G] \models \varphi$. \subsection{The forcings which are used} The forcings which we will use in the construction are all well-known. We nevertheless briefly introduce them and their main properties. \begin{definition} For a stationary $S \subset \omega_1$ the club-shooting forcing with finite conditions for $S$, denoted by $\mathbb{P}_S$, consists of conditions $p$ which are finite partial functions from $\omega_1$ to $S$ and for which there exists a normal function $f: \omega_1 \rightarrow \omega_1$ such that $p \subset f$. $\mathbb{P}_S$ is ordered by end-extension. \end{definition} The club shooting forcing $\mathbb{P}_S$ is the paradigmatic example of an $S$-\emph{proper forcing}, where we say that a forcing $\mathbb{P}$ is $S$-proper if and only if for every condition $p \in \mathbb{P}$, every sufficiently large $\theta$ and every countable $M \prec H(\theta)$ such that $M \cap \omega_1 \in S$ and $p, \mathbb{P} \in M$, there is a $q<p$ which is $(M, \mathbb{P})$-generic. \begin{lemma} The club-shooting forcing $\mathbb{P}_S$ generically adds a club through the stationary set $S \subset \omega_1$, while being $S$-proper and hence $\omega_1$-preserving. Moreover, stationary subsets $T$ of $S$ remain stationary in the generic extension. \end{lemma} We will choose a family of $S_{\beta}$'s so that we can shoot an arbitrary pattern of clubs through its elements such that this pattern can be read off from the stationarity of the $S_{\beta}$'s in the generic extension. For that it is crucial to recall that $S$-proper posets can be iterated with countable support and always yield an $S$-proper forcing again. This is proved exactly as in the well-known case for plain proper forcings (see \cite{Goldstern}, 3.19. for a proof).
\begin{fact} Let $(\mathbb{P}_{\alpha} \,:\, \alpha< \gamma)$ be a countable support iteration, and assume that at every stage $\alpha$, $\mathbb{P}_{\alpha} \Vdash ``\mathbb{P}(\alpha)$ is $S$-proper$"$. Then the iteration is an $S$-proper notion of forcing again. \end{fact} Once we decide to shoot a club through a stationary, co-stationary subset of $\omega_1$, this club will belong to all $\omega_1$-preserving outer models. This hands us a robust method of coding arbitrary information into a suitably chosen sequence of sets. Let $(S_{ \alpha} \, : \, \alpha < \omega_1)$ be a sequence of stationary, co-stationary subsets of $\omega_1$ such that $\forall \alpha \ne \beta < \omega_1 \, (S_{\alpha} \cap S_{\beta} \in \hbox{NS}_{\omega_1})$, and let $S\subset \omega_1$ be stationary and such that $S \cap S_{\alpha} \in \hbox{NS}_{\omega_1}$ for every $\alpha < \omega_1$. Note that we can always assume that these objects exist. The following coding method has been used several times already (see \cite{SyVera}). \begin{lemma} Let $r \in 2^{\omega_1}$ be arbitrary, and let $\mathbb{P}$ be a countable support iteration $(\mathbb{P}_{\alpha} \, : \, \alpha < \omega_1)$, inductively defined via \[\mathbb{P}(\alpha) := \mathbb{P}_{\omega_1 \backslash S_{2 \cdot \alpha}} \text{ if } r(\alpha)=1 \] and \[\mathbb{P}({\alpha}) := \mathbb{P}_{\omega_1 \backslash S_{(2 \cdot\alpha) +1}} \text{ if } r(\alpha)=0.\] Then in the resulting generic extension $V[\mathbb{P}]$, we have that $\forall \alpha < \omega_1:$ \[ r(\alpha)=1 \text{ if and only if } S_{2 \cdot \alpha} \text{ is nonstationary, }\] and \[ r(\alpha)=0 \text{ iff } S_{(2 \cdot \alpha)+1} \text{ is nonstationary.} \] \end{lemma} \begin{proof} Note first that the iteration will be $S$-proper, hence $\omega_1$-preserving. Assume that $r(\alpha)=1$. Then by definition of the iteration we must have shot a club through the complement of $S_{2 \cdot \alpha}$, thus it is nonstationary in $V[{\mathbb{P}}]$.
On the other hand, if $S_{2 \cdot \alpha}$ is nonstationary in $V[{\mathbb{P}}]$, then, as for every $\beta \ne 2 \cdot \alpha$ the forcing $\mathbb{P}_{\omega_1 \backslash S_{\beta}}$ is $S_{2 \cdot \alpha}$-proper, we can iterate with countable support and preserve $S_{2 \cdot \alpha}$-properness, thus the stationarity of $S_{2 \cdot \alpha}$. So if $S_{2 \cdot \alpha}$ is nonstationary in $V[{\mathbb{P}}]$, we must have used $\mathbb{P}_{\omega_1 \backslash S_{2 \cdot \alpha}}$ in the iteration, so $r(\alpha)= 1$. \end{proof} The second forcing we use is the almost disjoint coding forcing due to R. Jensen and R. Solovay. We will identify subsets of $\omega$ with their characteristic functions and will use the word reals for elements of $2^{\omega}$ and subsets of $\omega$ respectively. Let $D=\{d_{\alpha} \, : \, \alpha < \aleph_1 \}$ be a family of almost disjoint subsets of $\omega$, i.e. a family such that if $d, d' \in D$ then $d \cap d'$ is finite. Let $X\subset \kappa$ for $\kappa \le 2^{\aleph_0}$ be a set of ordinals. Then there is a ccc forcing, the almost disjoint coding $\mathbb{A}_D(X)$, which adds a new real $x$ which codes $X$ relative to the family $D$ in the following way: $$\alpha \in X \text{ if and only if } x \cap d_{\alpha} \text{ is finite.}$$ \begin{definition} The almost disjoint coding $\mathbb{A}_D(X)$ relative to an almost disjoint family $D$ consists of conditions $(r, R) \in \omega^{<\omega} \times D^{<\omega}$ and $(s,S) < (r,R)$ holds if and only if \begin{enumerate} \item $r \subset s$ and $R \subset S$. \item If $\alpha \in X$ and $d_{\alpha} \in R$ then $r \cap d_{\alpha} = s \cap d_{\alpha}$. \end{enumerate} \end{definition} For the rest of this paper we let $D \in L$ be the definable almost disjoint family of reals one obtains when recursively adding the $<_L$-least real to the family which is almost disjoint from all the previously picked reals.
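The decoding mechanism $\alpha \in X$ iff $x \cap d_{\alpha}$ is finite can be simulated in a purely illustrative toy model, where finite truncations stand in for the infinite reals; the family built from prefix codes of binary branches, the depth bound and the size threshold are toy choices and not from the paper:

```python
# Toy model of almost disjoint coding (illustrative only).
# An almost disjoint family is obtained from distinct binary branches:
# two distinct branches share only finitely many initial segments,
# so their sets of prefix codes have finite intersection.

def prefix_codes(branch, depth):
    """The initial segments of a 0/1 branch, coded as tuples."""
    return {tuple(branch[:k]) for k in range(1, depth + 1)}

DEPTH = 200
# branch i starts with the binary digits of i, then continues with 0's
branches = [[(i >> j) & 1 if j < 3 else 0 for j in range(DEPTH)]
            for i in range(4)]
D = [prefix_codes(b, DEPTH) for b in branches]

X = {1, 3}  # the set of indices we want to code

# Build x so that x meets d_alpha in a *finite* set exactly when
# alpha is in X: dump (a truncation of) d_alpha into x for alpha not in X.
x = set().union(*(D[a] for a in range(4) if a not in X))

# Decoding: "finite" becomes "small" in this truncated picture.
decoded = {a for a in range(4) if len(x & D[a]) < 10}
print(sorted(decoded))  # [1, 3]
```

In the actual forcing the real $x$ is generic and the equivalence holds by a density argument; the toy only illustrates why almost disjointness makes the finiteness pattern readable.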
Whenever we use almost disjoint coding forcing, we assume that we code relative to this fixed almost disjoint family $D$. The last two forcings we briefly discuss are Jech's forcing for adding a Suslin tree with countable conditions and, given a Suslin tree $T$, the associated forcing which adds a cofinal branch through $T$. Recall that a set theoretic tree $(T, <)$ is a Suslin tree if it is a normal tree of height $\omega_1$ and has no uncountable antichain. As a result, forcing with a Suslin tree $S$, where conditions are just nodes in $S$, and which we always denote with $S$ again, is a ccc forcing of size $\aleph_1$. Jech's forcing to generically add a Suslin tree is defined as follows. \begin{definition} Let $\mathbb{P}_J$ be the forcing whose conditions are countable, normal trees ordered by end-extension, i.e. $T_1 < T_2$ if and only if $\exists \alpha < \text{height}(T_1) \, T_2= \{ t \upharpoonright \alpha \, : \, t \in T_1 \}.$ \end{definition} It is well known that $\mathbb{P}_J$ is $\sigma$-closed and adds a Suslin tree. In fact, more is true: the generically added tree $T$ has the additional property that for any Suslin tree $S$ in the ground model, $S \times T$ will be a Suslin tree in the generic extension. This can be used to obtain a robust coding method (see also \cite{Ho} for more applications). \begin{lemma}\label{oneSuslintreepreservation} Let $V$ be a universe and let $S \in V$ be a Suslin tree. If $\mathbb{P}_J$ is Jech's forcing for adding a Suslin tree, $g \subset \mathbb{P}_J$ is a generic filter and $T=\bigcup g$ is the generic tree, then $$V[g][T] \models S \text{ is Suslin.}$$ \end{lemma} \begin{proof} Let $\dot{T}$ be the $\mathbb{P}_J$-name for the generic Suslin tree. We claim that $\mathbb{P}_J \ast \dot{T}$ has a dense subset which is $\sigma$-closed. As $\sigma$-closed forcings will always preserve ground model Suslin trees, this is sufficient.
To see why the claim is true consider the following set: $$\{ (p, \check{q}) \, : \, p \in \mathbb{P}_J \land \exists \alpha \, (\text{height}(p)= \alpha+1 \land \check{q} \text{ is a node of $p$ of level } \alpha) \}.$$ It is easy to check that this set is dense and $\sigma$-closed in $\mathbb{P}_J \ast \dot{T}$. \end{proof} A similar observation shows that we can add an $\omega_1$-sequence of such Suslin trees with a countably supported product. \begin{lemma}\label{ManySuslinTrees} Let $S$ be a Suslin tree in $V$ and let $\mathbb{P}$ be a countably supported product of length $\omega_1$ of forcings $\mathbb{P}_J$. Then in the generic extension $V[G]$ there is an $\omega_1$-sequence of Suslin trees $\vec{T}=(T_{\alpha} \, : \, \alpha \in \omega_1)$ such that for any finite $e \subset \omega_1$ the tree $S \times \prod_{i \in e} T_i$ will be a Suslin tree in $V[G]$. \end{lemma} These sequences of Suslin trees will be used for coding in our proof and get a name. \begin{definition} Let $\vec{T} = (T_{\alpha} \, : \, \alpha < \kappa)$ be a sequence of Suslin trees. We say that the sequence is an independent family of Suslin trees if for every finite set $e= \{e_0, e_1,...,e_n\} \subset \kappa$ the product $T_{e_0} \times T_{e_1} \times \cdots \times T_{e_n}$ is a Suslin tree again. \end{definition} The upshot of being an independent sequence is that we can pick our favourite subset of indices and decide to shoot a branch through every tree whose index belongs to the set, while guaranteeing that no other Suslin tree from the sequence is destroyed. The following fact can be easily seen via induction on $\kappa$. \begin{fact} Let $\vec{T} = (T_{\alpha} \, : \, \alpha < \kappa)$ be independent and let $I \subset \kappa$ be arbitrary. If we form the finitely supported product of forcings $\mathbb{P}:=\prod_{\alpha \in I} T_{\alpha}$, then for every $\beta \notin I$, $V[\mathbb{P}] \models ``T_{\beta}$ is a Suslin tree$"$.
\end{fact} Thus independent Suslin trees are suitable for encoding information, as soon as we can make the independent sequence definable. \section{A first step towards the proof of the boldface separation property} \subsection{The ground model $W$ of the iteration} We first have to create a suitable ground model $W$ over which the actual iteration will take place. $W$ will be a generic extension of $L$ which satisfies $\mathsf{CH}$ and, as stated earlier, has the property that it contains two $\omega_1$-sequences $\vec{S}=\vec{S^1} \cup \vec{S^2}$ of mutually independent Suslin trees. The goal is to add the trees generically, and in a second forcing, use an $L$-definable sequence of stationary subsets of $\omega_1$ to code up the trees. The resulting universe will have the feature that any further outer universe, which preserves stationary subsets, can decode the information written into the $L$-stationary subsets in a $\Sigma_1(\omega_1)$-definable way, and hence has access to the sequence of independent Suslin trees $\vec{S}$. This property can be used to create two $\Sigma_1(\omega_1)$-predicates which are empty at first and which can be filled with arbitrary reals $x$, using $\aleph_1$-sized forcings with the countable chain condition. These forcings have the crucial feature that they will be independent of the ground model they live in, a feature we will exploit heavily later on. We start with G\"odel's constructible universe $L$ as our ground model. Next we fix an appropriate sequence of stationary subsets of $\omega_1$. Recall that $\diamondsuit$ holds in our ground model $L$, i.e. there is a $\Sigma_1$-definable sequence $(a_{\alpha} \, : \, \alpha < \omega_1)$ of countable subsets of $\omega_1$ such that any set $A \subset \omega_1$ is guessed stationarily often by the $a_{\alpha}$'s, i.e. $\{ \alpha < \omega_1 \, : \, a_{\alpha}= A \cap \alpha \}$ is a stationary subset of $\omega_1$.
The $\diamondsuit$-sequence can be used to produce an easily definable sequence of stationary subsets: using a definable bijection between $\omega_1$ and $\omega_1 \cdot \omega_1$, we list the reals in $L$ in an $\omega_1 \cdot \omega_1$-sequence $(r_{\beta} \, : \, \beta < \omega_1 \cdot \omega_1)$ and define for every $\beta < \omega_1 \cdot \omega_1$ a stationary set in the following way: $$R_{\beta} := \{ \alpha < \omega_1 \, : \, a_{\alpha}= r_{\beta} \cap \alpha \},$$ and we let $\vec{R}= (R_{\beta} \, : \, \beta < \omega_1 \cdot \omega_1)$ denote the resulting sequence. We proceed with adding an $\omega_1$-sequence of Suslin trees with a countably supported product of Jech's forcing $ \mathbb{P}_J$. We let \[\mathbb{R} := \prod_{\beta < \omega_1} \mathbb{P}_J \] using countable support. This is a $\sigma$-closed, hence proper notion of forcing. We denote the generic sequence of Suslin trees added by $\mathbb{R}$ by $\vec{S}=(S_{\alpha} \, : \, \alpha < \omega_1)$ and note that whenever $I \subset \omega_1$ is a set of indices, then for every $j \notin I$ the Suslin tree $S_j$ will remain a Suslin tree in the universe $L[\vec{S}][g]$, where $g \subset \prod_{i \in I} S_i$ denotes the generic filter for the forcing with the finitely supported product of the trees $S_i$, $i \in I$ (see \cite{Ho} for a proof of this fact). We fix a definable bijection between $[\omega_1]^{\omega}$ and $\omega_1$ and identify the trees in $(S_{\alpha }\, : \, \alpha < \omega_1)$ with their images under this bijection, so the trees will always be subsets of $\omega_1$ from now on. In a second step, we destroy each element of $\vec{S}$ by generically adding branches. That is, we let the second forcing be \[ \mathbb{R}':= \prod_{\alpha < \omega_1} S_{\alpha}, \] using countable support.
Note that by the argument from the proof of Lemma \ref{oneSuslintreepreservation}, $\mathbb{R} \ast \mathbb{R}'$ has a dense subset which is $\sigma$-closed, hence $L[\mathbb{R} \ast \mathbb{R}']$ is a $\sigma$-closed generic extension of $L$. In a third step we code the trees from $\vec{S}$ into the sequence of $L$-stationary subsets $\vec{R}$ we produced earlier, using club shooting forcing. It is important to note that the forcing we are about to define does preserve Suslin trees, a fact we will show later. The forcing used in this third step will be denoted by $\mathbb{S}$. Fix $\alpha< \omega_1$ and consider the $\omega_1$-tree $S_{\alpha} \subset \omega_1$. We let $\mathbb{R}_{\alpha}$ be the countable support product which codes the characteristic function of $S_{\alpha}$ into the $\alpha$-th $\omega_1$-block of the $R_{\beta}$'s: $$\mathbb{R}_{\alpha} = \prod_{\gamma \in S_{\alpha}} \mathbb{P}_{\omega_1 \backslash R_{\omega_1 \cdot \alpha + 2 \cdot \gamma}} \times \prod_{\gamma \notin S_{\alpha}} \mathbb{P}_{\omega_1 \backslash R_{\omega_1 \cdot \alpha + 2 \cdot \gamma +1}}. $$ Recall that for a stationary, co-stationary $R \subset \omega_1$, $\mathbb{P}_{R}$ denotes the club shooting forcing which shoots a club through $R$; thus $\mathbb{R}_{\alpha}$ codes up the tree $S_{\alpha}$ by writing the $0,1$-pattern of the characteristic function of $S_{\alpha}$ into the $\alpha$-th $\omega_1$-block of $\vec{R}$. If we let $R$ be some stationary subset of $\omega_1$ which is disjoint from all the $R_{\beta}$'s, whose existence is guaranteed by $\diamondsuit$, then for every $\alpha < \omega_1$, $\mathbb{R}_{\alpha}$ is an $R$-proper forcing which additionally is $\omega$-distributive. Then we let $\mathbb{S}$ be the countably supported iteration $$\mathbb{S}:=\bigstar_{\alpha< \omega_1} \mathbb{R}_{\alpha},$$ which is again $R$-proper and $\omega$-distributive.
This way we can turn the generically added sequence of $\omega_1$-trees $\vec{S}$ into a definable sequence of $\omega_1$-trees. Indeed, if we work in $L[\vec{S}\ast G]$, where $\vec{S} \ast G$ is $\mathbb{R} \ast \mathbb{S}$-generic over $L$, then \begin{align*} \forall \alpha, \gamma < \omega_1 (&\gamma \in S_{\alpha} \Leftrightarrow R_{\omega_1 \cdot \alpha + 2 \cdot \gamma} \text{ is not stationary and} \\ & \gamma \notin S_{\alpha} \Leftrightarrow R_{\omega_1 \cdot \alpha + 2 \cdot \gamma +1} \text{ is not stationary}). \end{align*} Note here that the above formula can be written in a $\Sigma_1(\omega_1)$-way, as it reflects down to $\aleph_1$-sized, transitive models of $\mathsf{ZF}P$ which contain a club through the complement of exactly one element of every pair $\{(R_{\alpha}, R_{\alpha+1}) \, : \, \alpha < \omega_1\}$. Finally we partition $\vec{S}$ into its even and its odd members and let \[ \vec{S^1}:= \{ S_{\alpha} \in \vec{S} \, : \, \alpha \text{ is even} \} \] and \[ \vec{S^2}:= \{ S_{\beta} \in \vec{S} \, : \, \beta \text{ is odd} \}. \] Again, both sequences $\vec{S^1}$ and $\vec{S^2}$ are $\Sigma_1(\omega_1)$-definable in $W$ and in all stationary set preserving outer models of $W$. Our goal is to use $\vec{S^1}$ and $\vec{S^2}$ for coding again. For this it is essential that both sequences remain independent in the inner model $L[\mathbb{R}][\mathbb{S}]$, i.e.\ after forcing with $\mathbb{S}$. The following line of reasoning is similar to \cite{Ho}. Recall that for a forcing $\mathbb{P}$ and $M \prec H(\theta)$, a condition $q \in \mathbb{P}$ is $(M,\mathbb{P})$-generic iff for every maximal antichain $A \subset \mathbb{P}$ with $A \in M$, the set $ A \cap M$ is predense below $q$. The key fact is the following (see \cite{Miyamoto2} for the case where $\mathbb{P}$ is proper). \begin{lemma}\label{preservation of Suslin trees} Let $T$ be a Suslin tree, $S \subset \omega_1$ stationary and $\mathbb{P}$ an $S$-proper poset. Let $\theta$ be a sufficiently large cardinal.
Then the following are equivalent: \begin{enumerate} \item $\Vdash_{\mathbb{P}} T$ is Suslin \item if $M \prec H_{\theta}$ is countable, $\eta = M \cap \omega_1 \in S$, and $\mathbb{P}$ and $T$ are in $M$, further if $p \in \mathbb{P} \cap M$, then there is a condition $q<p$ such that for every condition $t \in T_{\eta}$, $(q,t)$ is $(M, \mathbb{P} \times T)$-generic. \end{enumerate} \end{lemma} \begin{proof} For the direction from left to right note first that $\Vdash_{\mathbb{P}} T$ is Suslin implies $\Vdash_{\mathbb{P}} T$ is ccc, and in particular it is true that for any countable elementary submodel $N[\dot{G}_{\mathbb{P}}] \prec H(\theta)^{V[\dot{G}_{\mathbb{P}}]}$, $\Vdash_{\mathbb{P}} \forall t \in T \, (t$ is $(N[\dot{G}_{\mathbb{P}}],T)$-generic$)$. Now if $M \prec H(\theta)$ and $M \cap \omega_1 = \eta \in S$ and $\mathbb{P},T \in M$ and $p \in \mathbb{P} \cap M$, then there is a $q<p$ such that $q$ is $(M,\mathbb{P})$-generic. So $q \Vdash \forall t \in T \, (t$ is $(M[\dot{G}_{\mathbb{P}}], T)$-generic$)$, and this in particular implies that $(q,t)$ is $(M, \mathbb{P} \times T)$-generic for all $t \in T_{\eta}$. For the direction from right to left assume that $\Vdash \dot{A} \subset T$ is a maximal antichain. Let $B=\{(x,s) \in \mathbb{P} \times T \, : \, x \Vdash_{\mathbb{P}} \check{s} \in \dot{A} \}$; then $B$ is a predense subset of $\mathbb{P} \times T$. Let $\theta$ be a sufficiently large regular cardinal and let $M \prec H(\theta)$ be countable such that $M \cap \omega_1=\eta \in S$ and $\mathbb{P}, B,p,T \in M$. By our assumption there is a $q <_{\mathbb{P}} p$ such that $\forall t \in T_{\eta} \, ((q,t)$ is $(M, \mathbb{P} \times T)$-generic$)$. So $B \cap M$ is predense below $(q,t)$ for every $t \in T_{\eta}$, which yields that $q \Vdash_{\mathbb{P}} \forall t \in T_{\eta} \exists s<_{T} t \, (s \in \dot{A})$ and hence $q \Vdash \dot{A} \subset T \upharpoonright \eta$. As such a $q$ can be found below every condition, it follows that $\Vdash_{\mathbb{P}} T$ is Suslin.
\end{proof} In a similar way, one can show that Theorem 1.3 of \cite{Miyamoto2} holds true if we replace proper by $S$-proper for $S \subset \omega_1$ a stationary subset. \begin{theorem} Let $(\mathbb{P}_{\alpha})_{\alpha < \eta}$ be a countable support iteration of length $\eta$, let $S \subset \omega_1$ be stationary and suppose that for every $\alpha < \eta$, for the $\alpha$-th factor of the iteration $\dot{\mathbb{P}}(\alpha)$ it holds that $\Vdash_{\alpha} ``\dot{\mathbb{P}}(\alpha)$ is $S$-proper and preserves every Suslin tree.$"$ Then $\mathbb{P}_{\eta}$ is $S$-proper and preserves every Suslin tree. \end{theorem} So in order to argue that our forcing $\mathbb{S}$ preserves Suslin trees when used over $L[\mathbb{R}]$, it is sufficient to show that every factor preserves Suslin trees. This is indeed the case. \begin{lemma} Let $S \subset \omega_1$ be stationary, co-stationary. Then the club shooting forcing $\mathbb{P}_S$ preserves Suslin trees. \end{lemma} \begin{proof} Because of Lemma \ref{preservation of Suslin trees}, it is enough to show that for every Suslin tree $T$, any regular and sufficiently large $\theta$, every $M \prec H_{\theta}$ with $M \cap \omega_1 = \eta \in S$, and every $p \in \mathbb{P}_S \cap M$ there is a $q<p$ such that for every $t \in T_{\eta}$, $(q,t)$ is $(M,\mathbb{P}_S \times T)$-generic. Note first that as $T$ is Suslin, every node $t \in T_{\eta}$ is an $(M,T)$-generic condition. Further, as forcing with a Suslin tree is $\omega$-distributive, $M[t]$ contains no new countable sequences of elements of $M$. It is not hard to see that if $M\prec H(\theta)$ is such that $M \cap \omega_1 \in S$, then an $\omega$-length descending sequence of $\mathbb{P}_S$-conditions in $M$ whose domains converge to $M \cap \omega_1$ has a lower bound. We construct an $\omega$-sequence of elements of $\mathbb{P}_S$ which has a lower bound which will be the desired condition.
We list the nodes on $T_{\eta}$ as $(t_i \, : \, i \in \omega)$ and consider the corresponding generic extensions $M[t_i]$. In every $M[t_i]$ we list the dense subsets of $\mathbb{P}_S$ which lie in $M[t_i]$, $(D^{t_i}_n \, : \, n \in \omega)$, write the so listed dense subsets as an $\omega \times \omega$-matrix and enumerate this matrix in an $\omega$-length sequence of dense sets $(D_i \, : \, i \in \omega)$. If $p=p_0 \in \mathbb{P}_S \cap M$ is arbitrary we can find, using the fact that $\forall i \, (\mathbb{P}_S \cap M[t_i] = M \cap \mathbb{P}_S)$, an $\omega$-length, descending sequence of conditions below $p_0$ in $\mathbb{P}_S \cap M$, $(p_i \, : \, i \in \omega)$, such that $p_{i+1} \in M \cap \mathbb{P}_S$ is in $D_i$. We can also demand that the domains of the conditions $p_i$ converge to $M \cap \omega_1$. Then the $p_i$'s have a lower bound $p_{\omega} \in \mathbb{P}_S$ and $(t, p_{\omega})$ is an $(M, T \times \mathbb{P}_S)$-generic condition for every $t \in T_{\eta}$, as any $t \in T_{\eta}$ is $(M,T)$-generic and every such $t$ forces that $p_{\omega}$ is $(M[t], \mathbb{P}_S)$-generic; moreover $p_{\omega} < p$ as desired. \end{proof} Putting things together we obtain: \begin{theorem} The forcing $\mathbb{S}$ defined above preserves Suslin trees. \end{theorem} Let us set $W:= L[\mathbb{R} \ast (\mathbb{R}' \times \mathbb{S}) ]$, which will serve as our ground model for a second iteration of length $\omega_1$. Note that $W$ is an $\omega$-distributive generic extension of $L$. We end with a straightforward lemma which is used later in coding arguments. \begin{lemma}\label{a.d.coding preserves Suslin trees} Let $T$ be a Suslin tree and let $\mathbb{A}_F(X)$ be the almost disjoint coding which codes a subset $X$ of $\omega_1$ into a real with the help of an almost disjoint family of reals of size $\aleph_1$. Then $$\Vdash_{\mathbb{A}_{F}(X)} T \text{ is Suslin }$$ holds.
\end{lemma} \begin{proof} This is clear as $\mathbb{A}_{F}(X)$ has the Knaster property, thus the product $\mathbb{A}_{F}(X) \times T$ is ccc and $T$ must be Suslin in $V^{\mathbb{A}_{F}(X)}$. \end{proof} \subsection{Coding reals into Suslin trees} We introduced the model $W$ for one specific purpose: the possibility of coding reals into the sequence of definable Suslin trees $\vec{S^1}$ or $\vec{S^2}$ using a method which is not sensitive to its ground model. For the following, we let $W$ be our ground model, though the definitions will work over, and will be used for, suitable outer models of $W$ as well. We will encounter this situation as ultimately we will iterate the coding forcings we are about to define. Let $x \in W$ be an arbitrary real, let $m,k \in \omega$, let $(x,m,k)$ denote the real which codes the triple consisting of $x,m$ and $k$ in some fixed recursive way, and let $i \in \{ 1,2\}$. Then we shall define the forcing $\hbox{Code}((x,m,k),i)$, which codes the real $(x,m,k)$ into $\aleph_1$-many $\omega$-blocks of $\vec{S^i}$, as a two step iteration: \[ \hbox{Code}((x,m,k),i):= \mathbb{C} (\omega_1)^L \ast \dot{\mathbb{A}} (\dot{Y}_{(x,m,k),i}) \] where the first factor is ordinary $\omega_1$-Cohen forcing, but defined in $L$, and the second factor codes a specific subset of $\omega_1$ denoted by $Y_{(x,m,k),i}$ into a real, using almost disjoint coding forcing relative to the canonical, constructible almost disjoint family of reals $D$. We emphasize that in iterations of coding forcings, we still fall back to force with $(\mathbb{C} (\omega_1))^L$ as our first factor, that is, we never use the $\omega_1$-Cohen forcing of the current universe. Thus, iterating the coding forcings is in fact a hybrid of a product (namely the coordinates where we use $(\mathbb{C}(\omega_1))^L$) and a finite support iteration (the coordinates where we use the almost disjoint coding forcing). We shall discuss this later in more detail.
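For orientation, we recall the shape of almost disjoint coding forcing in the form we use it. The following is a sketch in our notation (the standard Jensen--Solovay style definition, not a verbatim quote; in particular the presentation of conditions as pairs is one of several equivalent choices):

```latex
% Sketch of almost disjoint coding (standard form; notation is ours).
% Given an almost disjoint family D = (d_i : i < \omega_1) of infinite subsets
% of \omega and a set Y \subset \omega_1 to be coded, conditions are pairs
% (s, F) consisting of a finite approximation s to the coding real and a
% finite set F of indices already committed to Y:
\[
\mathbb{A}_D(Y) := \{ (s,F) \, : \, s \in [\omega]^{<\omega}, \ F \in [Y]^{<\omega} \},
\]
\[
(t,G) \leq (s,F) \ :\Longleftrightarrow \ t \supseteq s, \ G \supseteq F,
\ \text{and} \ (t \setminus s) \cap d_i = \emptyset \ \text{for all } i \in F.
\]
% For a generic filter H, the generic real r := \bigcup \{ s : (s,F) \in H \}
% satisfies, by a standard density argument,
\[
i \in Y \ \Longleftrightarrow \ r \cap d_i \ \text{is finite},
\]
% so Y becomes definable from r together with the (constructible) family D.
```

Since the family $D$ is fixed in $L$ and the conditions only mention $Y$ and finite objects, this definition visibly depends on nothing but $Y$ and the value of $\omega_1$, which is the independence from the surrounding universe used above.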
We let $g \subset \omega_1$ be a $\mathbb{C} (\omega_1)^L$-generic filter over $W$, and let $\rho: [\omega_1]^{\omega} \rightarrow \omega_1$ be some canonically definable, constructible bijection between these two sets. We use $\rho$ and $g$ to define the set $h \subset \omega_1$, which eventually shall be the set of indices of the $\omega$-blocks of $\vec{S}^i$ where we code up the characteristic function of the real $(x,m,k)$. Let $h:= \{\rho( g \cap \alpha) \,: \, \alpha < \omega_1 \}$ and let $X \subset \omega_1$ be the $<$-least set (in some previously fixed well-order of $H(\omega_2)^{W[g]}$) which codes the following objects: \begin{itemize} \item The $<$-least set of $\omega_1$-branches in $W$ through elements of $\vec{S}^i$ which code $(x,m,k)$ at $\omega$-blocks which start at values in $h$, that is, we collect $\{ b_{\beta} \subset S^i_{\beta} \, : \, \beta= \omega \gamma + 2n, \gamma \in h \land n \in \omega \land n \notin (x,m,k) \}$ and $\{ b_{\beta} \subset S^i_{\beta} \, : \, \beta= \omega \gamma + 2n+1, \gamma \in h \land n \in \omega \land n \in (x,m,k) \}$. \item The $<$-least set of $\omega_1 \cdot \omega \cdot \omega_1$-many club subsets of $\omega_1$ through the complements of elements of $\vec{R}$, our $\Sigma_1 (\omega_1)$-definable sequence of $L$-stationary subsets of $\omega_1$ from the last section, which are necessary to compute every tree $S_{\beta} \in \vec{S}$ which shows up in the above item, using the $\Sigma_1 (\omega_1)$-formula from the previous section before Lemma 2.10. \end{itemize} Note that, when working in $L[X]$ and if $\gamma \in h$, then we can read off $(x,m,k)$ via looking at the $\omega$-block of $\vec{S^i}$-trees starting at $\gamma$ and determining which tree has an $\omega_1$-branch in $L[X]$: \begin{itemize} \item[$(\ast)$] $n \in (x,m,k)$ if and only if $S^i_{\omega \cdot \gamma +2n+1}$ has an $\omega_1$-branch, and $n \notin (x,m,k)$ if and only if $S^i_{\omega \cdot \gamma +2n}$ has an $\omega_1$-branch.
\end{itemize} Note that $(\ast)$ is actually a formula $(\ast)((x,m,k),\gamma)$ with two parameters $(x,m,k)$ and $\gamma$, but we will suppress them, as the parameters usually are clear from the context. Indeed, if $n \notin (x,m,k)$ then we added a branch through $S^i_{\omega \cdot \gamma+ 2n}$. If on the other hand $S^i_{\omega \cdot\gamma +2n}$ is Suslin in $L[X]$, then we must have added an $\omega_1$-branch through $S^i_{\omega \cdot \gamma +2n+1}$, as we always add an $\omega_1$-branch through either $S^i_{\omega \cdot \gamma +2n+1}$ or $S^i_{\omega \cdot \gamma +2n}$, and adding branches through some $S^i_{\alpha}$'s will not affect whether some other $S^i_{\beta}$ remains Suslin in $L[X]$, as $\vec{S}$ is independent. We note that we can apply an argument resembling David's trick in this situation. We rewrite the information of $X \subset \omega_1$ as a subset $Y \subset \omega_1$ using the following line of reasoning. It is clear that any transitive, $\aleph_1$-sized model $M$ of $\mathsf{ZF}P$ which contains $X$ will be able to correctly decode out of $X$ all the information.
Consequently, if we code the model $(M,\in)$ which contains $X$ as a set $X_M \subset \omega_1$, then for any uncountable ordinal $\beta$ such that $L_{\beta}[X_M] \models \mathsf{ZF}P$ and $X_M \in L_{\beta}[X_M]$: \[L_{\beta}[X_M] \models \text{``The model decoded out of }X_M \text{ satisfies $(\ast)$ for every $\gamma \in h \subset \omega_1$''.} \] In particular there will be an $\aleph_1$-sized ordinal $\beta$ as above, and we can fix a club $C \subset \omega_1$ and a sequence $(M_{\alpha} \, : \, \alpha \in C)$ of countable elementary submodels such that \[\forall \alpha \in C \, (M_{\alpha} \prec L_{\beta}[X_M] \land M_{\alpha} \cap \omega_1 = \alpha).\] Now let the set $Y\subset \omega_1$ code the pair $(C, X_M)$ such that the odd entries of $Y$ code $X_M$, and such that, letting $E(Y)$ denote the set of even entries of $Y$ and $\{c_{\alpha} \, : \, \alpha < \omega_1\}$ the increasing enumeration of $C$: \begin{enumerate} \item $E(Y) \cap \omega$ codes a well-ordering of type $c_0$. \item $E(Y) \cap [\omega, c_0) = \emptyset$. \item For all $\beta$, $E(Y) \cap [c_{\beta}, c_{\beta} + \omega)$ codes a well-ordering of type $c_{\beta+1}$. \item For all $\beta$, $E(Y) \cap [c_{\beta}+\omega, c_{\beta+1})= \emptyset$. \end{enumerate} We obtain \begin{itemize} \item[$({\ast}{\ast})$] For any countable transitive model $M$ of $\mathsf{ZF}P$ such that $\omega_1^M=(\omega_1^L)^M$ and $ Y \cap \omega_1^M \in M$, $M$ can construct its version of the universe $L[Y \cap \omega_1^M]$, and the latter will see that there is an $\aleph_1^M$-sized transitive model $N \in L[Y \cap \omega_1^M]$ which models $(\ast)$ for $(x,m,k)$ and every $\gamma \in h \subset \omega_1^M$. \end{itemize} Thus we have a local version of the property $(\ast)$.
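The role of conditions (1)--(4) is to make the club $C$, and hence the reflection argument, readable from proper initial segments of $Y$. The following is a sketch of the mechanism in our notation (a paraphrase of the standard argument, not a quote):

```latex
% Sketch: why (1)-(4) localize the statement. From E(Y) \cap \alpha one can
% recursively reconstruct the enumeration of C below \alpha: the well-order
% coded into E(Y) \cap \omega yields c_0, and the well-order coded into
% E(Y) \cap [c_\beta, c_\beta + \omega) yields c_{\beta+1}. Hence for M as in
% (**) with \alpha := \omega_1^M, the set C \cap \alpha is computable inside M
% and \alpha is a limit point of C. Since M_\alpha \prec L_\beta[X_M] and
% M_\alpha \cap \omega_1 = \alpha, the transitive collapse of M_\alpha has
% the form
\[
\bar{M}_{\alpha} \,=\, L_{\bar{\beta}}[X_M \cap \alpha]
\quad \text{for some } \bar{\beta},
\]
% and by elementarity \bar{M}_\alpha still believes that the model decoded
% out of X_M \cap \alpha satisfies (*) for every \gamma \in h \cap \alpha,
% which is exactly the local statement asserted in (**).
```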
In the next step, working in $W[g]$ for $g\subset (\mathbb{C} (\omega_1))^L$ generic over $W$, we use almost disjoint forcing $\mathbb{A}_D(Y)$ relative to the $<_L$-least almost disjoint family of reals $D \in L $ to code the set $Y$ into one real $r$. This forcing is well-known, has the ccc, and its definition only depends on the subset of $\omega_1$ we code; thus the almost disjoint coding forcing $\mathbb{A}_D(Y)$ will be independent of the surrounding universe in which we define it, as long as that universe has the right $\omega_1$ and contains the set $Y$. We finally obtain a real $r$ such that \begin{itemize} \item[$({\ast}{\ast}{\ast})$] For any countable, transitive model $M$ of $\mathsf{ZF}P$ such that $\omega_1^M=(\omega_1^L)^M$ and $ r \in M$, $M$ can construct its version of $L[r]$ which in turn thinks that there is a transitive $\mathsf{ZF}P$-model $N$ of size $\aleph_1^M$ such that $N$ believes $(\ast)$ for $(x,m,k)$ and every $\gamma \in h$. \end{itemize} Note that the above is a $\Pi^1_2(r)$-statement. We say in this situation that the real $(x,m,k)$ \emph{is written into $\vec{S}^i$}, or that $(x,m,k)$ \emph{is coded into} $\vec{S^i}$. If $(x,m,k)$ is coded into $\vec{S}^i$ and $r$ is a real witnessing this, then the set $h$, which is equal to $\{ \gamma < \omega_1 \, : \, \gamma \text{ is a starting point of an $\omega$-block where $(\ast)$ holds for } (x,m,k) \}$, is dubbed (following \cite{FS}) the coding area of $(x,m,k)$ with respect to $r$. We want to iterate these coding forcings. As the first factor of a coding forcing will always be $(\mathbb{C}(\omega_1))^L$, an iteration of the coding forcings is in fact a hybrid of a (countably supported) product (namely the coordinates where we use $(\mathbb{C}(\omega_1))^L$) and an actual finite support iteration (the coordinates where we use almost disjoint coding forcing).
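To see why $({\ast}{\ast}{\ast})$ is a $\Pi^1_2(r)$-statement, one can render it as a universal statement over reals coding countable well-founded models. The following shape is a sketch in our notation; the official counting of quantifiers is the standard one:

```latex
% Sketch of the \Pi^1_2(r) rendering of (***): quantify over reals e coding
% countable structures (\omega, E_e); well-foundedness of E_e is \Pi^1_1(e).
\[
\forall e \, \Big( \big( E_e \text{ is well-founded} \;\wedge\;
(\omega,E_e) \models \mathsf{ZF}P \;\wedge\; r \text{ is in } (\omega,E_e)
\;\wedge\; \omega_1^{(\omega,E_e)} = (\omega_1^L)^{(\omega,E_e)} \big)
\;\rightarrow\; (\omega,E_e) \models \varphi(r) \Big),
\]
% where \varphi(r) expresses that the model's version of L[r] contains an
% \aleph_1-sized transitive N believing (*). The first conjunct of the
% hypothesis is \Pi^1_1(e), the remaining clauses and the conclusion are
% arithmetic in e and r, so the whole statement is \Pi^1_2(r).
```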
\begin{definition} A mixed support iteration $\mathbb{P}=(\mathbb{P}_{\beta}\,:\, {\beta< \alpha})$ is called legal if $\alpha < \omega_1$ and there exists a bookkeeping function $F: \alpha \rightarrow H(\omega_2)^2$ such that $\mathbb{P}$ is defined inductively using $F$ as follows: \begin{itemize} \item If $F(0)=(x,i)$, where $x$ is a real and $i\in \{1,2\}$, then $\mathbb{P}_0= \hbox{Code}(x,i)$. Otherwise $\mathbb{P}_0$ is the trivial forcing. \item If $\beta>0$ and $\mathbb{P}_{\beta}$ is defined, $G_{\beta} \subset \mathbb{P}_{\beta}$ is a generic filter over $W$, $F(\beta)=(\dot{x}, i)$, where $\dot{x}$ is a $\mathbb{P}_{\beta}$-name of a real, $i \in \{1,2\}$ and $\dot{x}^{G_{\beta}}=x$, then, working in $W[G_{\beta}]$, we let $\mathbb{P}(\beta):= \hbox{Code}(x,i)$, that is, we code $x$ into $\vec{S}^i$, using our coding forcing. We shall use full (i.e.\ countable) support on the $(\mathbb{C}(\omega_1))^L$-coordinates and finite support on the coordinates where we use almost disjoint coding forcing. \end{itemize} \end{definition} Informally speaking, a legal forcing just decides to code the reals which the bookkeeping function $F$ provides into either $\vec{S^1}$ or $\vec{S^2}$. Note further that the notion of legal can be defined in exactly the same way over any $W[G]$, where $G$ is a $\mathbb{P}$-generic filter over $W$ for a legal forcing $\mathbb{P}$.
Finally note that instead of creating $\omega$-blocks of Suslin trees using $\mathbb{C} (\omega_1)^L$ where we code the branches every single time we code a real, we could also have defined an altered ground model $W'$ as $W[g]$, where $g \subset \prod \mathbb{C} (\omega_1)$ is generic for the countably supported product of $\aleph_1$-many copies of $\omega_1$-Cohen forcing, and then worked over $W'$ using exclusively almost disjoint coding forcings which first pick one coordinate $g_{\alpha}$, $\alpha< \omega_1$, of $g$ in an injective way, and then code the $\aleph_1$-many branches along $g_{\alpha}$ using almost disjoint coding forcings as described above. The difference between these approaches is only of a symbolic nature; we opted for the one we chose because of a slightly neater presentation. We obtain the following first properties of legal forcings: \begin{lemma} \begin{enumerate} \item If $\mathbb{P}=(\mathbb{P}(\beta) \, : \, \beta < \delta) \in W$ is legal then for every $\beta < \delta$, $\mathbb{P}_{\beta} \Vdash |\mathbb{P}(\beta)| = \aleph_1$, thus every factor of $\mathbb{P}$ is forced to have size $\aleph_1$. \item Every legal forcing over $W$ preserves $\aleph_1$ and $\mathsf{CH}$. \item The product of two legal forcings is legal again. \end{enumerate} \end{lemma} \begin{proof} The first assertion follows immediately from the definition. To see the second item we exploit some symmetry. Indeed, every legal $\mathbb{P} = \bigstar_{\beta < \delta} \mathbb{P}(\beta)= \bigstar_{\beta < \delta} \big( (\mathbb{C} (\omega_1))^L \ast \dot{\mathbb{A}} (\dot{Y}_{\beta} ) \big)$ can be rewritten as \[(\prod_{\beta < \delta} (\mathbb{C} (\omega_1))^L )\ast \bigstar_{\beta < \delta} \dot{\mathbb{A}}_D (\dot{Y}_{\beta} )\] (again with mixed support).
The latter representation is easily seen to be of the form $\mathbb{Q} \ast \bigstar_{\beta < \delta} \dot{\mathbb{A}}_D(\dot{Y}_{\beta} )$, where $\mathbb{Q}$ is $\sigma$-closed and the second part is a finite support iteration of ccc forcings, hence $\aleph_1$ is preserved. That $\mathsf{CH}$ holds is standard. To see that the third item is true, we recall that the definition of $\hbox{Code}(x,i)$ is independent of the surrounding universe as long as it contains the real $x$; thus we see that a two step iteration $\mathbb{P}_1 \ast \mathbb{P}_2$ of two legal $\mathbb{P}_1, \mathbb{P}_2 \in W$ is in fact a product. As the iteration of two legal forcings (in fact the iteration of countably many legal forcings) is legal as well, the proof is done. \end{proof} The second assertion of the last lemma immediately gives us the following: \begin{corollary} Let $\mathbb{P}= (\mathbb{P}(\beta) \, : \, \beta < \delta) \in W$ be a legal forcing over $W$. Then $W[\mathbb{P}] \models \mathsf{CH}$. Further, if $\mathbb{P}= (\mathbb{P}(\alpha) \, : \, \alpha < \omega_1) \in W$ is an $\omega_1$-length iteration such that each initial segment of the iteration is legal over $W$, then $W[\mathbb{P}] \models \mathsf{CH}$. \end{corollary} In an iteration of coding forcings we do not add any unwanted or accidental solutions to our $\Sigma^1_3$-predicate given by $({\ast} {\ast} {\ast})$, as we shall show now. We call the set of triples of (names of) reals enumerated by the bookkeeping function $F \in W$ which comes along with a legal $\mathbb{P} = (\mathbb{P}(\beta) \, : \, \beta < \delta)$ the set of reals coded by $\mathbb{P}$.
That is, if \[ \mathbb{P}(\beta)= (\mathbb{C}(\omega_1))^L \ast \dot{\mathbb{A}}_D (\dot{Y}_{(\dot{x}_{\beta}, \dot{m}_{\beta}, \dot{k}_{\beta} ) } ) \] and $G \subset \mathbb{P}$ is a generic filter and if we let for every $\beta < \delta$, $ \dot{x}_{\beta}^G =:x_{\beta}$, $\dot{m}_{\beta}^G =:m_{\beta}$, $\dot{k}_{\beta}^G =:k_{\beta}$, then $\{ (x_{\beta},m_{\beta},k_{\beta} ) \, : \, \beta < \delta \}$ is the set of reals coded by $\mathbb{P}$ and $G$ (though we will suppress the $G$). \begin{lemma} Suppose that $\mathbb{P}=(\mathbb{P}_{\beta} \, : \, \beta < \delta) \in W$ is legal, $G \subset \mathbb{P}$ is generic over $W$ and $\{ (x_{\beta},m_{\beta},k_{\beta} ) \, : \, \beta < \delta\}$ is the set of (triples of) reals coded by $\mathbb{P}$. Let \[A:= \{ (x,m,k) \in W[G] \, : \, \exists r \, ( ({\ast}{\ast}{\ast} )\text{ holds for $r$ and $(x,m,k)$}) \}. \] Then in $W[G]$, the set of reals which belong to $A$ is exactly $\{ (x_{\beta},m_{\beta},k_{\beta} ) \, : \, \beta < \delta\}$, that is, we do not code any unwanted information accidentally. \end{lemma} \begin{proof} Let $G$ be $\mathbb{P}$-generic over $W$. Let $g= (g_{\beta} \, : \, {\beta} < \delta)$ be the sequence of the $\delta$-many subsets of $\omega_1$ added by the $(\mathbb{C} (\omega_1))^L$-parts of the factors of $\mathbb{P}$. We let $\rho : ([\omega_1]^{\omega})^L \rightarrow \omega_1$ be our fixed, constructible bijection and let $h_{\beta}= \{ \rho (g_{\beta} \cap \alpha) \, : \, \alpha < \omega_1\}$. Note that the family $\{h_{\beta} \,: \, \beta < \delta \}$ forms an almost disjoint family of subsets of $\omega_1$. Thus there is $\alpha < \omega_1$ such that $\alpha> \sup(h_{\beta_1}\cap h_{\beta_2})$ for all $\beta_1 \ne \beta_2 < \delta$ and, additionally, $\alpha$ is an index not used by the iterated coding forcing $\mathbb{P}$, where we say that an index $i$ of $\vec{S}$ is used by $\mathbb{P}$ whenever an $\omega_1$-branch through $S_i$ is coded by a factor of $\mathbb{P}$.
We fix such an $\alpha$ and $S_{\alpha} \in \vec{S}$. We claim that there is no real $r$ in $W[G]$ such that $W[G] \models L[r] \models ``S_{\alpha}$ has an $\omega_1$-branch$"$. We show this by pulling the forcing $S_{\alpha}$ out of $\mathbb{P}$. Indeed, if we consider $W[\mathbb{P}]=L[\mathbb{Q}^0] [\mathbb{Q}^1][\mathbb{Q}^2][\mathbb{P}]$ (where $\mathbb{Q}^0=\mathbb{R}$, $\mathbb{Q}^1=\mathbb{R}'$ and $\mathbb{Q}^2=\mathbb{S}$), and if $S_{\alpha}$ is as described above, we can rearrange this to $W[\mathbb{P}]= L [\mathbb{Q}^0] [\mathbb{Q}'^1 \times S_{\alpha} ] [ \mathbb{Q}^2] [\mathbb{P}] = L[\mathbb{P}'] [S_{\alpha} ]$, where $\mathbb{Q}'^1$ is $\prod_{\beta \ne \alpha} S_{\beta}$ and $\mathbb{P}'$ is $\mathbb{Q}^0 \ast \mathbb{Q}'^1 \ast \mathbb{Q}^2 \ast \mathbb{P}$. Note now that, as forcing with $S_{\alpha}$ is $\omega$-distributive, $2^{\omega} \cap W[\mathbb{P}] = 2^{\omega} \cap L[\mathbb{P}']$; indeed, $S_{\alpha}$ is still a Suslin tree in $L[\mathbb{P}']$ by the fact that $\vec{S}$ is independent and no factor of $\mathbb{P}'$ besides the trees from $\vec{S}$ used in $\mathbb{P}'$ destroys Suslin trees. But this implies that \[L[\mathbb{P}'] \models \lnot \exists r \, L[r] \models `` S_{\alpha} \text{ has an $\omega_1$-branch}" \] as the existence of an $\omega_1$-branch through $S_{\alpha}$ in the inner model $L[r]$ would imply the existence of such a branch in $L[\mathbb{P}']$. Further, as no new reals appear when passing to $W[\mathbb{P}]$, we also get \[W[\mathbb{P}] \models \lnot \exists r \, L[r] \models `` S_{\alpha} \text{ has an $\omega_1$-branch}". \] On the other hand any unwanted information, i.e.\ any $(x,m,k) \notin \{(x_{\beta}, m_{\beta},k_{\beta}) \, : \, \beta < \delta \}$ such that $W[G] \models (x,m,k) \in A$, will also witness that \[L[r] \models `` S_{\alpha} \text{ has an $\omega_1$-branch}"\] for unboundedly many $\alpha$'s which are not in any of the $h_{\beta}$'s from above.
Indeed, if $r$ witnesses $({\ast} {\ast} {\ast})$ for $(x,m,k)$, then there must also be an uncountable, transitive $M$, $r \in M$, $M \models \mathsf{ZF}P$, whose local version of $L[r]$ believes $(\ast)$ for every $\gamma \in h$, as otherwise we could find $r, x \in M_0 \prec M$, $M_0$ countable, such that the transitive collapse $\bar{M}_0$ of $M_0$ is a counterexample to the truth of $({\ast} {\ast} {\ast})$, which is a contradiction. If $M$ is an uncountable, transitive $\mathsf{ZF}P$-model as above, then $L[r]^M \models ``S_{\alpha}$ has an $\omega_1$-branch$"$, and as the trees from $\vec{S}$ are $\Sigma_1(\omega_1)$-definable, and as the existence of an $\omega_1$-branch is again a $\Sigma_1(\omega_1)$-statement, we obtain by upwards absoluteness that $L[r] \models ``S_{\alpha}$ has an $\omega_1$-branch$"$, as claimed. In particular, as $(x,m,k ) \in A$, $r$ will satisfy \[n \in (x,m,k) \rightarrow L[r] \models ``S_{\omega \gamma+2n+1} \text{ has an $\omega_1$-branch}" \] and \[ n \notin (x,m,k) \rightarrow L[r] \models ``S_{\omega \gamma+2n} \text{ has an $\omega_1$-branch}" \] for $\omega_1$-many $\gamma$'s. But by the argument above, only trees which we used in one of the factors of $\mathbb{P}$ have this property, so there cannot be unwanted codes. \end{proof} \subsection{An auxiliary result} We proceed by first proving the following auxiliary theorem, whose proof introduces some of the key ideas and will serve as a simplified blueprint for the proof of the main results. \begin{theorem} There is a generic extension $L[G]$ of $L$ in which there is a real $R_0$ such that every pair of disjoint (lightface) $\Sigma^1_3$-sets can be separated by a $\Delta^1_3(R_0)$-formula.
\end{theorem} For its proof, we will use the two easily definable $\omega_1$-sequences of Suslin trees on $\omega_1$, $\vec{S}=\vec{S}^1 \cup \vec{S}^2$, and branch shooting forcings to create for every pair $(A_m,A_k)$ of disjoint $\Sigma^1_3$-definable sets of reals a $\Delta^1_3 (R_0)$-definable separating set $D_{m,k} \supset A_m$. Using a bookkeeping function we list all the triples $(x,m,k)$ where $x$ is a real and $m, k\in \omega$, and decide for every such triple whether we code it into $\vec{S^1}$, which is equivalent to putting it into $D_{m,k}$, or code it into $\vec{S^2}$, which eventually should become the complement $D_{m,k}^c$. Using coding arguments the sets $D_{m,k}$ and their complements will be $\Sigma^1_3(R_0)$-definable. The fact that we have to decide at every stage where to put the current real $x$ before the iteration is actually finished seems to be somewhat daring, as the evaluation of the $\Pi^1_3$- and $\Sigma^1_3$-sets varies as we generically enlarge our surrounding universe along the iteration. Additionally one has to deal with possible degenerate cases which stem from a certain amount of self-referentiality in the way we set up things. Indeed it could happen that forcing a triple $(x,m,k)$ into one side, $D_{m,k}$ say, could simultaneously force that $x$ will become a member of $A_k$ in the generic extension, thus preventing $D_{m,k}$ from actually separating $A_m$ and $A_k$. A careful case distinction will show that this problem can be overcome though. \subsection{Definition of the iteration over $W$} For $n \in \omega$ let \[\varphi_n(v_0)= \exists v_1 \psi_n(v_0,v_1)\] be the $n$-th formula in an enumeration of the $\Sigma^1_3$-formulas with one free variable. Let \[A_n:= \{ x \in 2^{\omega} \, : \, \varphi_n(x) \text{ is true} \},\] so $A_n$ is the set of reals whose definition uses the $n$-th $\Sigma^1_3$-formula in our enumeration.
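Spelled out in this notation, the property the construction is aiming for is the following (a sketch of the target statement in our notation, not a quote from the theorem):

```latex
% Target of the construction (sketch): for all indices m, k of disjoint
% lightface Sigma^1_3 sets, a Delta^1_3(R_0) set separates them.
\[
\forall m,k \in \omega \ \Big( A_m \cap A_k = \emptyset \ \rightarrow \
\exists D \subseteq 2^{\omega} \ \big( D \text{ is } \Delta^1_3(R_0)
\ \wedge\ A_m \subseteq D \ \wedge\ A_k \subseteq 2^{\omega} \setminus D \big) \Big).
\]
% In the construction below, the role of the separating set D will be played
% by (a variant of) the set D^1_{m,k}(R_0) defined later.
```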
We force with an $\omega_1$-length mixed support iteration of legal forcings which all have size $\aleph_1$, and use a definable, surjective bookkeeping function \[ F: \omega_1 \rightarrow \omega_1 \times \omega_1 \times \omega \times \omega\] to determine the iteration. We demand that every $\alpha < \omega_1$ is always strictly bigger than the first projection of $F(\alpha)$. We also assume that every quadruple $(\beta, \gamma, m,k)$ in $\omega_1 \times \omega_1 \times \omega \times \omega$ is hit unboundedly often by $F$. The purpose of $F$ is to list all triples of the form $(x,m,k)$, where $x$ is a real in some intermediate universe of our iteration and $m,k \in \omega$ correspond to a pair $(\varphi_m,\varphi_k)$ of $\Sigma^1_3$-formulas. The iteration will be defined in such a way that, at every stage $\beta$ of the iteration, whenever some triple $(x,m,k)$ is considered by $F$, we must decide immediately whether to code (a real coding) the triple $(x,m,k)$ somewhere into the $\vec{S^1}$ or $\vec{S^2}$-sequence. The set of codes written into $\vec{S}^1$ which contain $m,k$ will result in the $\Sigma^1_3$-set $D^1_{m,k}$, which is a superset of $A_m$; the set of codes containing $m,k$ which are written into $\vec{S^2}$ shall result in the $\Sigma^1_3$-superset $D^2_{m,k}$ of $A_k$. The real $R_0$ will be used to indicate, in a uniform way for all $m,k$, the set of $\aleph_1$-many $\omega$-blocks of $\vec{S}$ which represent insecure data we should not use for our separating sets. The reader should think of $R_0$ as an error term, modulo which the separating sets will work. That is, $R_0 \in 2^{\omega}$ is such that it codes $\aleph_1$-many $\omega$-blocks of $\vec{S}$, and $D^1_{m,k}$ and $D^2_{m,k}$ should be the set of $(x,m,k)$'s which are coded into $\vec{S}^1$ and $\vec{S}^2$ respectively whose coding areas are almost disjoint from the set of ordinals coded by $R_0$, in that their intersection is bounded below $\omega_1$.
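To spell out the almost-disjointness requirement: writing $C(R_0) \subseteq \omega_1$ for the set of indices coded by $R_0$ (a piece of notation used only in this remark), the condition on the coding area $h \subseteq \omega_1$ of a triple $(x,m,k)$ reads
\[
\sup \big( h \cap C(R_0) \big) < \omega_1,
\]
i.e. $h$ meets $C(R_0)$ only in a set which is bounded below $\omega_1$.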
More precisely let \begin{align*} D^1_{m,k} (R_0):= \{ (x,m,k) \, : &\, x \in 2^{\omega} \land (x,m,k) \text{ is coded into } \vec{S}^1 \\&\text{and its coding area is almost disjoint} \\&\text{ from the indices coded by $R_0$} \} \end{align*} and let $D^2_{m,k}(R_0)$ be defined similarly. Our goal is to have $D^1_{m,k}(R_0) \cap D^2_{m,k}(R_0) = \emptyset$ for every $m,k \in \omega$; then we have found our separating sets for $A_m$ and $A_k$. We proceed with the details of the inductive construction of the forcing iteration. Assume that we are at some stage $\alpha < \omega_1$ of our iteration, let $\mathbb{P}_{\alpha}$ denote the partial order we have defined so far, and let $G_{\alpha}$ denote a generic filter for $\mathbb{P}_{\alpha}$. We inductively assume in addition that we have created a $\mathbb{P}_{\alpha}$-name of a set $\dot{b}_{\alpha}$ which is forced to be a set of countably many $\omega$-blocks of $\vec{S^1}$ and $\vec{S^2}$. Our goal is to define the next forcing $\dot{\mathbb{Q}}_{\alpha}$ which we shall use. As will become clear after finishing the definition of the iteration, we can assume that $\mathbb{P}_{\alpha}$ is a legal notion of forcing. We look at the value $F(\alpha)$ and define the forcing $\dot{\mathbb{Q}}_{\alpha}$ according to $F(\alpha)$ by cases as follows. \subsubsection{Case a} For the first case we assume that $F(\alpha) = ( \beta, \gamma, m,k)$, and that the $\gamma$-th (in some wellorder of $W$) name of a real of $W^{\mathbb{P}_{\beta}}$ is $\dot{x}$. We ask whether there exists a forcing $\mathbb{P}$ such that \[W[G_{\alpha}] \models \mathbb{P} \text{ is legal and } \mathbb{P} \Vdash \exists z (\varphi_m(z) \land\varphi_k(z)) .\] If there is such a legal $\mathbb{P}$, then we use it, i.e. we fix the $<$-least such forcing and let $\mathbb{P}(\alpha):= \mathbb{P}$, let $\mathbb{P}_{\alpha+1}=\mathbb{P}_{\alpha} \ast \mathbb{P}(\alpha)$, and let $G_{\alpha+1}$ be $\mathbb{P}_{\alpha+1}$-generic over $W$.
\subsubsection{Case b} We assume again that $F(\alpha) = ( \beta, \gamma, m,k)$, and that the $\gamma$-th (in some wellorder of $W$) name of a real of $W^{\mathbb{P}_{\beta}}$ is $\dot{x}$. Let $\dot{x}^{G_{\alpha}}=x$. Now we assume that case a fails, i.e. in $W[G_{\alpha}]$, there is no legal $\mathbb{P}$ such that $\mathbb{P} \Vdash \exists z (\varphi_m(z) \land \varphi_k(z) )$. We shall distinguish three subcases. \begin{itemize} \item[(i)] First assume that there is a legal forcing $\mathbb{Q}$ such that \begin{align*} W[G_{\alpha}] \models &\, \mathbb{Q} \Vdash \varphi_m(x). \end{align*} In this situation, we will code $(x,m,k)$ into the $\vec{S}^1$-sequence, i.e. we let \[ \mathbb{P}(\alpha):= \hbox{Code} ((x,m,k),1) \] and set $\mathbb{P}_{\alpha+1}:=\mathbb{P}_{\alpha} \ast \mathbb{P}(\alpha)$. The upshot of the arguments above is the following: \begin{flushleft} \textbf{Claim:} Let $G_{\alpha+1}$ be a $\mathbb{P}_{\alpha+1}$-generic filter over $W$ and let $\mathbb{P}$ be an arbitrary legal forcing in $W[G_{\alpha+1}]$. Then \end{flushleft} \[ \cancel{\Vdash}_{\mathbb{P}} \, x \in A_k.\] \begin{proof} Indeed, if not, pick $\mathbb{P} \in W[G_{\alpha+1}]$ such that there is a $p \in \mathbb{P}$ with $p \Vdash_{\mathbb{P}} x \in A_k$. If we consider $\mathbb{P}$ below the condition $p$, we obtain a legal forcing $\mathbb{P}_{ \le p}$ again, and $\mathbb{Q} \times \mathbb{P}_{\le p} \Vdash x \in A_m \cap A_k$, because $\mathbb{Q}$ introduces a real $r_m$ which witnesses that $\varphi_m(x)$ holds true. In particular $\varphi_m(x)$ is true in all outer models of $W[G_{\alpha+1}] [\mathbb{Q}]$ by upwards absoluteness of $\Sigma^1_3$-statements, which follows from Shoenfield absoluteness. Likewise, $\mathbb{P}_{\le p}$ shows that $W[G_{\alpha+1}] [\mathbb{P}_{ \le p}] \models \varphi_k (x)$, thus $\mathbb{Q} \times \mathbb{P}_{\le p}$ is a legal forcing which forces $x \in A_m \cap A_k$, which is a contradiction.
\end{proof} \item[(ii)] The second subcase is symmetric to the first one. We assume that there is no legal forcing $\mathbb{Q}$ for which $\Vdash_{\mathbb{Q}} \varphi_m(x)$ is true, but there is a legal $\mathbb{Q}$ for which it is true that \begin{align*} W[G_{\alpha}] \models \mathbb{Q} \Vdash \varphi_k(x). \end{align*} Then we code $x$ into the $\vec{S^2}$-sequence with the usual coding. Note that by the symmetric argument from above, no further legal extension will ever satisfy that $x \in A_m$. \item[(iii)] In the final subcase, there is no legal forcing which forces $x \in A_m \cup A_k$, and we are free to choose where to code $x$. In that situation we settle on coding $(x,m,k)$ into $\vec{S^1}$. \end{itemize} This ends the inductive definition of the iteration. \subsubsection{Discussion of case b} We pause here to discuss briefly the crucial case b of the iteration. At first glance it seems promising, in the first subcase of case b, to actually use the legal forcing $\mathbb{Q}$, which is granted to exist by assumption, in order to obtain $x \in A_m$. After all, we know that case a does not hold here, so if we can force $x \in A_m$ with a legal forcing, we can conclude that in all future legal extensions, $x$ will not belong to $A_k$, which seems to fully settle the problem of where to place the particular triple $(x,m,k)$. The just described strategy will fail, however, for reasons having to do with the already mentioned self-referentiality of the set-up. Indeed, one can easily produce $\Sigma^1_3$-predicates $\varphi_m$ and $\varphi_k$ such that case b will apply for all reals $x$, and such that whenever we decide to code $(x,m,k)$ into $\vec{S^1}$, in the resulting generic extension $\varphi_k(x)$ will become true. And vice versa, whenever we decide to code $(x,m,k)$ into $\vec{S^2}$, $\varphi_m(x)$ will hold in the resulting generic extension.
Thus, for these particular $m,k$, we are in case b throughout the iteration, and find a legal $\mathbb{Q}$ for every real $x$ we encounter, with $\mathbb{Q} \Vdash x \in A_k$. But the forcings $\mathbb{Q}$, when applied, always add pathological and unwanted situations, namely $\mathbb{Q} \Vdash x \in A_k$, yet $x$ is coded into $\vec{S^1}$. And as we have to place all the reals, we will produce these problems cofinally often throughout the whole iteration, which ruins our attempts to prove the theorem. Our definition of the iteration circumvents these problems by noting that the possibility to actually use the forcing $\mathbb{Q}$ is sufficient to rule out a pathological situation, by the closure of legal forcings under products. Thus the mere existence of such a legal $\mathbb{Q}$ is sufficient to not run into any problems when coding $(x,m,k)$ into $\vec{S}^1$. We emphasize that this line of reasoning takes advantage of the specific coding method we decided to use and justifies the construction of our ground model $W$. \subsection{Discussion of the resulting W[G]} We let $G$ be a generic filter for the $\omega_1$-length iteration which we just described using mixed support. First we note that the iteration is proper, hence it preserves $\aleph_1$. Consequently there will be no new reals added at stage $\omega_1$, so $\omega^{\omega} \cap W[G] = \bigcup_{\alpha< \omega_1} \omega^{\omega} \cap W[G_{\alpha}]$; in particular $\mathsf{CH}$ is true in $W[G]$. A second useful observation is that for every pair of stages $\alpha <\beta < \omega_1$, the quotient forcing which we use to pass from $\mathbb{P}_{\alpha}$ to $\mathbb{P}_{\beta}$ is a legal forcing as seen from the intermediate model $W[G_{\alpha}]$. Our goal is now to define a real $R_0$ and, given a pair of disjoint $\Sigma^1_3$-definable sets $A_m,A_k$, a $\Delta^1_3(R_0)$-definable separating set, i.e.
a set $D^1_{m,k}$ such that $A_m \subset D^1_{m,k}$ and $A_k \subset D_{m,k}^2$, where $D_{m,k}^2=(D^1_{m,k})^c$. We want our set $D^1_{m,k} (R_0)$ to consist of the codes written into $\vec{S}^1$ beyond a certain stage $\beta_0$ (to be defined below) which themselves contain a code for the pair $(m,k)$, and dually $D_{m,k}^2$ to consist of all the codes on the $\vec{S}^2$-side which contain a code for $(m,k)$ and whose coding areas are almost disjoint from the subset of $\omega_1$ coded by the real $R_0$. What should the real $R_0$ be? It is clear from the definition of the iteration $\mathbb{P}$ that there are stages in the iteration where case a applies. There, we just blindly use legal forcings. In particular, nothing prevents these legal forcings from coding up $(x,m,k)$ into, say, $\vec{S^1}$ while $\varphi_k(x)$ is true, thus adding a problem. Note however that such degenerate situations can only happen once for every pair $(A_m,A_k)$. As we only have countably many such pairs, as our iteration has length $\omega_1$, and as we visit every triple $(x,m,k)$ uncountably often with our bookkeeping function, there will be a stage $\beta_0 < \omega_1$ such that from $\beta_0$ on all the codes we have written into $\vec{S}^1$ and $\vec{S}^2$ are intended ones, i.e. the codes really define a separating set $D_{m,k}$ for $A_m$ and $A_k$. Thus, in order to define $R_0$, we first let $\beta_0 < \omega_1$ be the last stage in the iteration $\mathbb{P}$ where case a is applied. Then, working in $W[G_{\beta_0}]$, we let $R_0$ code the collection of all indices of all the trees from $\vec{S^1}$ and $\vec{S^2}$ which were used for coding in $W[G_{\beta_0}]$.
Note here that this collection is characterized by the countable set $\{ r_{\beta} \, : \, \beta < \beta_0\}$, where $r_{\beta}$ is the real which is added with almost disjoint coding at stage $\beta$ and which witnesses that $( {\ast} {\ast} {\ast} )$ holds for $(x_{\beta},m_{\beta},k_{\beta})$ and each $\gamma \in h_{\beta}$ (where $h_{\beta}$ just denotes the coding area given by the real $r_{\beta}$). This countable set of reals can itself be coded by a real, and this real is $R_0$. We define: \begin{align*} x \in D_{m,k}^1(R_0) \Leftrightarrow &\exists r \in 2^{\omega} L[r] \models ``\exists M (\text{$M$ witnesses that }(\ast) \text{ holds for $(x,m,k)$, $\vec{S}^1$}\\ & \text{and every $\gamma$ in its coding area $h \subset \omega_1"$}. \\& \text{ Further } L[r,R_0] \models ``h \text{ is almost disjoint} \\& \qquad \qquad \text{ from the set of indices coded by $R_0"$ )} \end{align*} and \begin{align*} x \in D_{m,k}^2(R_0) \Leftrightarrow &\exists r \in 2^{\omega} L[r] \models ``\exists M (\text{$M$ witnesses that }(\ast) \text{ holds for $(x,m,k)$, $\vec{S}^2$}\\ & \text{and every $\gamma$ in its coding area $h \subset \omega_1"$}. \\& \text{ Further } L[r,R_0] \models ``h \text{ is almost disjoint} \\& \qquad \qquad \text{ from the set of indices coded by $R_0"$ )} \end{align*} We shall show now that these sets work as intended. \begin{lemma} In $W[G]$, for every pair $m \ne k \in {\omega}$, the sets $D^1_{m,k}(R_0)$ and $D_{m,k}^2(R_0)$ union up to the set of all reals. \end{lemma} \begin{proof} Immediate from the definitions. \end{proof} \begin{lemma} In $W[G]$, for every pair $(m,k)$, if the $\Sigma^1_3$-sets $A_m$ and $A_k$ are disjoint, then $D^1_{m,k}(R_0)$ separates $A_m$ from $A_k$, i.e. $A_m \subset D^1_{m,k}(R_0)$ and $A_k \cap D^1_{m,k}(R_0)=\emptyset$. Likewise $D^2_{m,k}$ separates $A_k$ from $A_m$. Consequently, for every $m,k$ such that $A_m \cap A_k = \emptyset$, $D^1_{m,k}(R_0) \cap D^2_{m,k}(R_0)=\emptyset$.
\end{lemma} \begin{proof} We will only prove the first assertion; the second one is proved exactly as the first one with the roles of $m,k$ switched. Assume that $A_m$ and $A_k$ are disjoint and let $x \in W[G] \cap 2^{\omega}$ be arbitrary such that $(x,m,k)$ is coded into $\vec{S^1}$ and its coding area is almost disjoint from the set of ordinals coded by $R_0$, i.e. $x \in D^1_{m,k}(R_0)$. There is a least stage $\alpha$ with $\beta_0<\alpha< \omega_1$ such that $F(\alpha)= (\dot{x},m,k)$, where $\dot{x}$ is a name for $x$. According to the definition of the iteration and the assumption that $A_m \cap A_k=\emptyset$, we can rule out case a. Thus case b remains, and hence the first or the third subcase did apply at stage $\alpha$. Suppose, without loss of generality, that we were in the first subcase of case b. Assume for a contradiction that in $W[G]$, $x \in A_k$. Then there would be a stage $\alpha'$ of the iteration, $\alpha'> \alpha> \beta_0$, such that $W[G_{\alpha'}] \models x \in A_k$ and the part of the iteration $\mathbb{P}$ between stage $\alpha$ and $\alpha'$, denoted by $\mathbb{P}_{\alpha \alpha'}$, is a legal forcing which forces $x \in A_k$. But, as at stage $\alpha$ the first subcase of b applied, there is a legal forcing $\mathbb{Q}$ such that $\mathbb{Q} \Vdash x \in A_m$; hence, at $\alpha$, there is a legal forcing which forces $x \in A_m \cap A_k$, namely $\mathbb{P}_{\alpha \alpha'} \times \mathbb{Q}$, which is a contradiction. \end{proof} \begin{lemma}\label{Sigma13} In $W[G]$, for every $m,k \in \omega$, $D^1_{m,k}$ and $D^2_{m,k}$ are $\Sigma^1_3(R_0)$-definable. Thus $W[G]$ satisfies that every pair of disjoint $\Sigma^1_3$-sets can be separated by a $\Delta^1_3 (R_0)$-set.
\end{lemma} \begin{proof} We claim that for an arbitrary pair $(m,k) \in \omega \times \omega$, $D^1_{m,k}$ and $D^2_{m,k}$ have the following definitions in $W[G]$: \begin{align*} x \in D_{m,k}^1 \Leftrightarrow & \exists r \forall M (r, R_0 \in M \land \omega_1^M=(\omega_1^L)^M \land M \text{ transitive } \rightarrow \\ &M \models L[r] \models ``\exists N ( N \models \mathsf{ZF}P \land |N| = \aleph_1^M \land N \text{ is transitive } \land \\ &N \text{ believes $(\ast)$ for $(x,m,k)$ and $\vec{S}^1$ and every $\gamma \in h"$} \\&\text{ and $L[r,R_0] \models``$the coding area $h$ of $(x,m,k)$ is almost disjoint} \\& \text{from the set of indices coded by $R_0"$} )). \end{align*} and \begin{align*} x \in D_{m,k}^2 \Leftrightarrow & \exists r \forall M (r, R_0 \in M \land \omega_1^M=(\omega_1^L)^M \land M \text{ transitive } \rightarrow \\ &M \models L[r] \models ``\exists N ( N \models \mathsf{ZF}P \land |N| = \aleph_1^M \land N \text{ is transitive } \land \\ &N \text{ believes $(\ast)$ for $(x,m,k)$ and $\vec{S}^2$ and every $\gamma \in h"$} \\&\text{ and $L[r,R_0] \models``$the coding area $h$ of $(x,m,k)$ is almost disjoint} \\& \text{from the set of indices coded by $R_0"$} )). \end{align*} Counting quantifiers yields that both formulas are of the form $\exists \forall (\Sigma^1_2 \rightarrow \Delta^1_2)$ and hence $\Sigma^1_3$. We will only show the result for $D_{m,k}^1$. To show the direction from left to right, note that if $x \in D_{m,k}^1$, then there was a stage $\alpha>\beta_0$ in our iteration at which we coded $x$ into the $\vec{S}^1$-sequence. In particular we added a real $r_{\alpha}$ for which property $({\ast}{\ast}{\ast} )$ is true; hence $r_{\alpha}$ witnesses that the right hand side is true in $W[G]$. For the other direction assume that the right hand side is true. This in particular means that the assertion is true for transitive models containing $r$ of arbitrary size.
Indeed, if there were a transitive $M$ which contains $r$, is of size $\ge \aleph_1$, and fails the assertion, then there would be a countable $M_0 \prec M$ which contains $r$. The transitive collapse of $M_0$ would form a counterexample to the assertion of the right hand side, which is a contradiction to our assumption. But if the right hand side is true for models of arbitrary size, by reflection it must be true for $W[G]$ itself, thus $x \in D_{m,k}^1$, and we are done. \end{proof} \section{Boldface Separation} \subsection{Preliminary Considerations} We turn our attention to boldface separation. The goal of this section is to prove the first main theorem. \begin{theorem} There is an $\omega_1$-preserving, generic extension of $L$ in which every pair of disjoint $\bf{\Sigma^1_3}$-sets $A_m$ and $A_k$ can be separated by a $\bf{\Delta^1_3}$-set. \end{theorem} Its proof uses the proof of our auxiliary theorem as the base case of an inductive construction. The main idea for keeping control is to replace the notion of legal forcing with a dynamic variant which keeps changing along the iteration. To motivate the following, we first consider a more fine-tuned approach to the definition of the iteration from the proof of the last theorem. Let us assume that $(m,k)$ is the first pair such that case b in the definition of the iteration applies. Recall that in the discussion of case b, we showed that, given a pair $(A_m, A_k)$ of $\Sigma^1_3$-sets for which there does not exist a legal forcing $\mathbb{Q}$ such that $\Vdash_{\mathbb{Q}} \exists z (\varphi_m(z) \land \varphi_k(z))$ becomes true, we can always assign to an arbitrary real $x$ a side $\vec{S}^1$ or $\vec{S}^2$ such that in all future legal extensions there will never occur a pathological situation, i.e. from that stage on we never run into the problem of having coded the triple $(x,m,k)$ into, say, $\vec{S^1}$, yet $\varphi_k(x)$ becomes true in some future extension of our iteration (or vice versa).
Note here that the arguments in the discussion of case b were uniform for all reals $x$ which appear in a legal extension. So it is reasonable to define for the pair $(m,k)$ a stronger notion of legality, called 1-legal with respect to $(0,m,k)$ (the 0 indicates the base case of an inductive construction we define later), as follows: Let $F: \gamma \rightarrow H(\omega_2)^4$ be a bookkeeping function and let $E:=\{ (0,m,k) \}$. We let $\mathbb{P}$ be a mixed support iteration of length $ \gamma$. Then we say that $(\mathbb{P}_{\beta} \, : \, \beta < \gamma ) $ is 1-legal with respect to $E$ and $F$ if \begin{itemize} \item $\mathbb{P}$ is a legal forcing relative to $F$. \item Whenever $\beta < \gamma$ is a stage such that $F(\beta)=(\dot{x},m,k,i)$, where $\dot{x}$ is a $\mathbb{P}_{\beta}$-name of a real and $i \in \{1,2 \}$, we split into three subcases: \begin{itemize} \item[(i)] First we assume that in $W[G_{\beta}]$, there is a legal forcing $\mathbb{Q}$ such that $\mathbb{Q} \Vdash x (=\dot{x}^{G_{\beta}}) \in A_m$. Then the $\beta$-th forcing of $\mathbb{P}$, $\mathbb{P}(\beta)$, must be $\hbox{Code} ((x,m,k), 1)$. \item[(ii)] Assume that (i) fails but the dual situation is true for $A_k$. That is, there is a legal forcing $\mathbb{Q}$ such that $\mathbb{Q} \Vdash x (=\dot{x}^{G_{\beta}}) \in A_k$. Then the $\beta$-th forcing of $\mathbb{P}$, $\mathbb{P}(\beta)$, must be $\hbox{Code} ((x,m,k), 2)$. \item[(iii)] In the third case we assume that neither (i) nor (ii) is true. In that situation we code $x$ into either the $A_m$- or the $A_k$-side, according to what the bookkeeping tells us. \end{itemize} If the iteration $(\mathbb{P}_{\beta} \, : \, \beta < \gamma)$ obeys the above rules, then we say that it is 1-legal with respect to $E$ and bookkeeping function $F$.
\end{itemize} From earlier considerations it is clear that if we drop the notion of legal from now on and replace it with 1-legal relative to $E=\{ (0,m,k)\}$ in our iteration, we can ensure that, at least for the pair $(m,k)$, no new pathological situations will arise anymore. This process can be iterated. If we run into a new pair $(m',k')$ where the modified case b applies, i.e. there is no 1-legal forcing which forces $\exists z (\varphi_{m'}(z) \land\varphi_{k'} (z))$, then we can introduce the new notion of 2-legal with respect to $\{(0,m,k) , (1,m',k')\}$. If chosen the right way, this new notion will hand us a condition that guarantees that no new pathological situations arise for the two pairs $(\varphi_m,\varphi_k)$ and $(\varphi_{m'}, \varphi_{k'})$. So our strategy for producing a model where the $\bf{\Sigma^1_3}$-separation property holds is as follows: we list all possible reals $x$, parameters $y$ and pairs of $\Sigma^1_3$-formulas $(\varphi_m(\cdot,y), \varphi_k(\cdot,y))$, while simultaneously defining stronger and stronger versions of legality, which take care of placing the reals we encounter along the iteration in a non-pathological way. \subsection{$\alpha$-legal forcings} This section shall give a precise recursive definition of the process sketched above. The notions of 0- and 1-legality will form the base cases of an inductive definition. Let $\alpha \ge 1$ be an ordinal and assume we have already defined the notion of $\alpha$-legality. Then we can inductively define the notion of $\alpha+1$-legality as follows. Suppose that $\gamma < \omega_1$, $F$ is a bookkeeping function, \[F: \gamma \rightarrow H(\omega_2)^5 \] and \[\mathbb{P}=(\mathbb{P}_{\beta} \, : \, \beta < \gamma)\] is a legal forcing relative to $F$ (in fact relative to some bookkeeping $F'$ determined by $F$ in a unique way; the difference here is not relevant).
Suppose that \[E= \{(\delta, \dot{y}_{\delta}, m_{\delta} ,k_{\delta}) \, : \, \delta \le \alpha\}\] where $m_{\delta},k_{\delta} \in \omega$, every $\dot{y}_{\delta}$ is a $\mathbb{P}$-name of a real, and for every two distinct ordinals $\eta_1, \eta_2 \le \alpha$, $\mathbb{P} \Vdash (\dot{y}_{\eta_1}, m_{\eta_1},k_{\eta_1}) \ne (\dot{y}_{\eta_2},m_{\eta_2},k_{\eta_2})$. Suppose that for every $\delta \le \alpha$, $(\mathbb{P}_{\beta} \, : \, \beta < \gamma )$ is $\delta$-legal with respect to $E \upharpoonright \delta = \{ (\eta, \dot{y}_{\eta},m_{\eta},k_{\eta}) \in E \, : \, \eta < \delta \}$ and $F$. Finally assume that $\dot{y}_{\alpha+1}$ is a $\mathbb{P}$-name for a real and $m_{\alpha+1},k_{\alpha+1} \in \omega$ such that $\mathbb{P} \Vdash \forall \delta \le \alpha ((\dot{y}_{\delta},m_{\delta},k_{\delta}) \ne (\dot{y}_{\alpha+1},m_{\alpha+1},k_{\alpha+1}))$. Then we say that $(\mathbb{P}_{\beta} \, : \, \beta < \gamma )$ is $\alpha+1$-legal with respect to $E \cup \{(\alpha+1, \dot{y}_{\alpha+1},m_{\alpha+1},k_{\alpha+1})\}$ and $F$ if it obeys the following rules. \begin{enumerate} \item Whenever $\beta < \gamma $ is such that there is a $\mathbb{P}_{\beta}$-name $\dot{x}$ of a real and an integer $ i\in\{1,2\}$ such that \[F(\beta)= (\dot{x},\dot{y}_{\alpha+1},m_{\alpha+1},k_{\alpha+1},i)\] and $\dot{y}_{\alpha+1}$ is in fact a $\mathbb{P}_{\beta}$-name, and for $G_{\beta}$ a $\mathbb{P}_{\beta}$-generic filter over $W$, $W[G_{\beta}]$ thinks that \begin{align*} \exists \mathbb{Q} (&\mathbb{Q} \text{ is } \alpha \text{-legal with respect to } E \, \land \\& \mathbb{Q} \Vdash x \in A_m({y}_{\alpha+1})), \end{align*} where $x=\dot{x}^{G_{\beta}}$ and $y_{\alpha+1}=\dot{y}_{\alpha+1}^{G_{\beta}}$. Then, continuing to argue in $W[G_{\beta}]$, we let \[\mathbb{P}(\beta)= \hbox{Code}((x,y,m,k),1). \] Note that we confuse here the quadruple $(x,y,m,k)$ with one real $w$ which codes this quadruple.
\item Whenever $\beta < \gamma $ is such that there is a $\mathbb{P}_{\beta}$-name $\dot{x}$ of a real and an integer $ i \in \{1, 2\}$ such that \[F(\beta)= (\dot{x},\dot{y}_{\alpha+1},m_{\alpha+1},k_{\alpha+1}, i)\] and for $G_{\beta}$ which is $\mathbb{P}_{\beta}$-generic over $W$, $W[G_{\beta}]$ thinks that \begin{align*} \forall \mathbb{Q}_1 (&\mathbb{Q}_1 \text{ is } \alpha \text{-legal with respect to } E \\& \rightarrow \, \lnot(\mathbb{Q}_1 \Vdash x \in A_m(\dot{y}_{\alpha+1}))) \end{align*} but there is a forcing $\mathbb{Q}_2$ such that $W[G_{\beta}]$ thinks that \begin{align*} \mathbb{Q}_{2} \text{ is } \alpha &\text{-legal with respect to $E$ and } \\ & {\mathbb{Q}_2} \Vdash x \in A_k( \dot{y}_{\alpha+1} ). \end{align*} Then, continuing to argue in $W[G_{\beta}]$, we force with \[\mathbb{P}(\beta):= \hbox{Code}((x,y,m,k),2).\] Note that we confuse here again the quadruple $(x,y,m,k)$ with one real $w$ which codes this quadruple. \item If neither 1 nor 2 is true, then either \[ \mathbb{P}(\beta)=\hbox{Code}((x,y,m,k),1)\] or \[ \mathbb{P}(\beta)=\hbox{Code}((x,y,m,k),2),\] depending on whether $ i$ in $F(\beta)$ was 1 or 2. \item If $F(\beta) = (\dot{x},\dot{y},m,k,i)$ and for our $\mathbb{P}_{\beta}$-generic filter $G_{\beta}$, $W[G_{\beta}] \models \forall \delta \le \alpha+1 ((\delta,\dot{y}^{G_{\beta}},m,k) \notin E^{G_{\beta}})$, then, working over $W[G_{\beta}]$, we let \[ \mathbb{P}(\beta)=\hbox{Code}((x,y,m,k),i).\] \end{enumerate} This ends the definition for the successor step $\alpha \rightarrow \alpha+1$. For limit ordinals $\alpha$, we say that a legal forcing $\mathbb{P}$ is $\alpha$-legal with respect to $E$ and $F$ if for every $\eta < \alpha$, $(\mathbb{P}_{\beta} \,: \, \beta < \gamma)$ is $\eta$-legal with respect to $E \upharpoonright \eta$ and some $F'$. \par We add a couple of remarks concerning the last definition.
\begin{itemize} \item By definition, if $\delta_2 < \delta_1$ and $\mathbb{P}_1$ is $\delta_1$-legal with respect to $E= \{(\beta, \dot{y}_{\beta}, m_{\beta} ,k_{\beta}) \, : \, \beta \le \delta_1\}$ and some $F_1$, then $\mathbb{P}_1$ is also $\delta_2$-legal with respect to $E \upharpoonright \delta_2 = \{(\beta, \dot{y}_{\beta}, m_{\beta} ,k_{\beta}) \, : \, \beta \le \delta_2\}$ and an altered bookkeeping function $F'$. \item The notion of $\alpha$-legal can be defined in a uniform way over any legal extension $W'$ of $W$. \item We will often just say that some iteration $\mathbb{P}$ is $\alpha$-legal, by which we mean that there is a set $E$ and a bookkeeping function $F$ such that $\mathbb{P} $ is $\alpha$-legal with respect to $E$ and $F$. \end{itemize} \begin{lemma}\label{productlegal} Let $\alpha \ge 1$, assume that $W'$ is some $\alpha$-legal generic extension of $W$, and that $\mathbb{P}_1=(\mathbb{P}^1_{\beta} \,: \,\beta < \delta_1)$ and $\mathbb{P}_2=(\mathbb{P}^2_{\beta} \,: \, \beta < \delta_2) $ are two $\alpha$-legal forcings over $W'$ with respect to a common set $E=\{(\delta,\dot{y}_{\delta},m_{\delta}, k_{\delta}) \,: \, \delta < \alpha\}$ and bookkeeping functions $F_1$ and $F_2$ respectively. Then there is a bookkeeping function $F$ such that $\mathbb{P}_1 \times \mathbb{P}_2$ is $\alpha$-legal over $W'$ with respect to $E$ and $F$. \end{lemma} \begin{proof} We define $F \upharpoonright \delta_1$ to be $F_1$. For values $\delta_1+\beta$ with $\beta < \delta_2$ we let $F(\delta_1+\beta)$ be such that its value on the first four coordinates equals the first four coordinates of $F_2(\beta)$, i.e. $F(\delta_1+\beta)=(\dot{x},\dot{y},m,k,i)$ for some $i \in \{1,2\}$ where $F_2(\beta)=(\dot{x},\dot{y},m,k,i')$. We claim now that we can define the remaining coordinate of $F(\delta_1+\beta)$ in such a way that the lemma is true. This is shown by induction on $\beta< \delta_2$. Let $(\mathbb{P}_2)_{\beta}$ be the iteration of $\mathbb{P}_2$ up to stage $\beta < \delta_2$.
Assume that $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$ is in fact an $\alpha$-legal forcing relative to $E$ and $F$. Then we have that $F(\delta_1+\beta) \upharpoonright 4=F_2(\beta) \upharpoonright 4 =(\dot{x},\dot{y},m,k)$, and we claim that at that stage the following holds. \begin{claim} If we should apply case 1, 2 or 3 when considering the forcing $\mathbb{P}_1 \times \mathbb{P}_2$ as an $\alpha$-legal forcing relative to $E$ over the model $W'$, then we must apply the same case when considering $\mathbb{P}_2$ as an $\alpha$-legal forcing over the model $W'$ relative to $E$. \end{claim} Once the claim is shown, the lemma can be proven as follows by induction on $\beta < \delta_2$: we work in the model $W'[\mathbb{P}_1][(\mathbb{P}_2)_{\beta}]$, consider $F(\delta_1+\beta) \upharpoonright 4 = F_2(\beta) \upharpoonright 4$, and ask which of the four cases has to be applied. By the claim, it will be the same case as when considering $\mathbb{P}_2$ over $W'$ as an $\alpha$-legal forcing relative to $E$ and $F_2$. In particular the forcing $\mathbb{P}_2(\beta)$ we define at stage $\beta$ will be a choice obeying the rules of $\alpha$-legality, even when working over the model $W'[\mathbb{P}_1][(\mathbb{P}_2)_{\beta}]$. This shows that $\mathbb{P}_1 \times \mathbb{P}_2$ is an $\alpha$-legal forcing relative to $E$ and some $F$ over $W'$. The proof of the claim is via induction on $\alpha$. If $\alpha=1$ and both $\mathbb{P}_1$ and $\mathbb{P}_2$ are 1-legal with respect to $E$, which must be of the form $ \{(0,\dot{y},m,k)\}$, then we shall show that there is a bookkeeping function $F$ such that $((\mathbb{P}_2)_{\beta}\, : \, \beta < \delta_2)$ is still 1-legal with respect to $E$, even when considered in the universe $W'[\mathbb{P}_1]$. We assume first that at stage $\delta_1+\beta$ of $\mathbb{P}_1 \times \mathbb{P}_2$ case 1 in the definition of 1-legal applies, when working in the model $W'[\mathbb{P}_1][(\mathbb{P}_2)_{\beta}]$ relative to $E$ and $F$.
Thus \[ F(\delta_1+\beta) \upharpoonright 4= (\dot{x},\dot{y},m,k) \] and $(0,\dot{y},m,k) \in E$ and for any $G_1 \times G_{\beta}$ which is $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$-generic over $W'$, if $\dot{x}^{G_{\beta}}=x$ and $\dot{y}^{G_{\beta}}=y$, the universe $W'[G_1 \times G_{\beta}]$ thinks that \begin{align*} \exists \mathbb{Q} (&\mathbb{Q} \text{ is } 0 \text{-legal with respect to } E \text{ and some } F' \, \land \\& \, \mathbb{Q} \Vdash {x} \in A_m({y})). \end{align*} Thus, if we work over $W'[G_{\beta}]$ instead, it will think \begin{align*} \exists (\mathbb{P}_1 \times \mathbb{Q}) (&\mathbb{P}_1 \times \mathbb{Q} \text{ is } 0 \text{-legal } \, \land \\& \, \mathbb{P}_1 \times \mathbb{Q} \Vdash {x} \in A_m({y})). \end{align*} Thus, at stage $\beta$, we are in case 1 as well when considering $\mathbb{P}_2$ as a 1-legal forcing over $W'$ relative to $E$. If, at stage $\beta$, case 2 applies when considering $\mathbb{P}_1 \times \mathbb{P}_2$ as a 1-legal forcing with respect to $E$ over $W'$, then we argue first that case 1 is impossible when considering $\mathbb{P}_2$ as a $1$-legal forcing over $W'$. Indeed, assume for a contradiction that case 1 must be applied; then, by assumption, $\mathbb{P}_2(\beta)$ will force that $x \in A_m(y)$. Yet, by Shoenfield absoluteness, $\mathbb{P}_2(\beta)$ would witness that we are in case 1 at stage $\beta$ when considering $\mathbb{P}_1 \times\mathbb{P}_2$ as 1-legal with respect to $E$ over $W'$, which is a contradiction. Thus we cannot be in case 1, and we shall show that we are indeed in case 2, i.e. that there is a 0-legal forcing $\mathbb{Q}$ such that $\mathbb{Q} \Vdash x \in A_k(y)$; but such a $\mathbb{Q}$ exists, namely $\mathbb{P}_2(\beta)$. Finally, if at stage $\beta$ case 3 applies when considering $\mathbb{P}_2$ as a 1-legal forcing with respect to $E$ over $W'[\mathbb{P}_1]$, we claim that we must be in case 3 as well when considering $\mathbb{P}_2$ over just $W'$.
If not, then we would be in case 1 or 2 at $\beta$. Assume without loss of generality that we were in case 1. Then, as by assumption $\mathbb{P}_2$ is 1-legal over $W'$, $\mathbb{P}_2(\beta)$ will force $x \in A_m(y)$. But this is a contradiction, so we must be in case 3 as well. This finishes the proof of the claim for $\alpha=1$. We shall argue now that the claim is true for $\alpha+1$-legal forcings provided we know that it is true for $\alpha$-legal forcings. Again we shall show the claim via induction on $\beta$. So assume that $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$ is $\alpha+1$-legal with respect to $E=E \upharpoonright \alpha \cup \{(\alpha, \dot{y},m_{\alpha},k_{\alpha})\}$ and an $F$ whose domain is $\delta_1 +\beta$. We look at \[F(\delta_1+ \beta) \upharpoonright 5=F_2(\beta) \upharpoonright 5= (\dot{x},\dot{y},m_{\alpha},k_{\alpha}).\] We concentrate on the case where $\beta$ is such that case 2 applies when considering $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$ over $W'$. The rest follows similarly. Our goal is to show that case 2 must apply when considering the $\beta$-th stage of the forcing using $F_2$ and $E$ over $W'[(\mathbb{P}_2)_{\beta}]$ as well. Assume first for a contradiction that, when working over $W'[(\mathbb{P}_2)_{\beta}]$, at stage $\beta$, case 1 applies. Then, for any $(\mathbb{P}_2)_{\beta}$-generic filter $ G_{\beta}$ over $W'$, \begin{align*} W'[G_{\beta}] \models \exists \mathbb{Q} (&\mathbb{Q} \text{ is $\alpha$-legal with respect to $E \upharpoonright \alpha$ and some $F'$ and} \\ & \mathbb{Q} \Vdash x \in A_m(y)). \end{align*} Now, as $\mathbb{P}_2$ is $\alpha$-legal, we know that $\mathbb{P}_2(\beta)$ is such that $\mathbb{P}_2(\beta) \Vdash x \in A_m(y)$.
Thus, using the upwards-absoluteness of $\Sigma^1_3$-formulas, at stage $\beta$ of the $\alpha+1$-legal forcing determined by $F$ and $E$, there is an $\alpha$-legal forcing $\mathbb{Q}$ with respect to $E \upharpoonright \alpha$ which forces $x \in A_m(y)$, namely $\mathbb{P}_2(\beta)$. But this is a contradiction, as we assumed that when considering $\mathbb{P}_1 \times (\mathbb{P}_2)_{\beta}$ over $W'$ at stage $\beta$, case 1 does not apply, hence such an $\alpha$-legal forcing should not exist. So case 1 does not apply. We shall show now that case 2 must apply at stage $\beta$ when considering $\mathbb{P}_2$ over the universe $W'$. By assumption we know that \begin{align*} W'[\mathbb{P}_1][(\mathbb{P}_2)_{\beta}] \models&\exists \mathbb{Q}_2 (\mathbb{Q}_2 \text{ is $\alpha$-legal with respect to $E \upharpoonright \alpha$ and } \\& \, \, \mathbb{Q}_2 \Vdash x \in A_k(y)). \end{align*} As $\mathbb{P}_1$ is $\alpha+1$-legal with respect to $E$ and $F_1$, it is also $\alpha$-legal with respect to $E \upharpoonright \alpha$ and some altered $F'_1$; thus, as a consequence of the induction hypothesis, we obtain that \[W'[(\mathbb{P}_2)_{\beta}] \models \mathbb{P}_1 \times \mathbb{Q}_2 \text{ is $\alpha$-legal and } \mathbb{P}_1 \times \mathbb{Q}_2 \Vdash x \in A_k(y).\] But then, $\mathbb{P}_1 \times \mathbb{Q}_2$ witnesses that we are in case 2 as well at stage $\beta$ of $\mathbb{P}_2$ over $W'$. This ends the proof of the claim and so we have shown the lemma. \end{proof} \subsection{Proof of the first Main Theorem} We are finally in the position to prove that the $\bf{\Sigma^1_3}$-separation property can be forced over $W$. The iteration we are about to define inductively will be a legal iteration whose tails are $\alpha$-legal, where $\alpha$ increases along the iteration. We start with fixing a bookkeeping function \[ F: \omega_1 \rightarrow H(\omega_1)^4 \] which visits every element cofinally often.
The role of $F$ is to list, cofinally often, all the quadruples of the form $(\dot{x}, \dot{y},m,k)$, where $\dot{x}, \dot{y}$ are names of reals in the forcing we already defined, and $m$ and $k$ are natural numbers which represent $\Sigma^1_3$-formulas with two free variables. Assume that we are at stage $\beta < \omega_1$ of our iteration. By induction we will already have constructed the following list of objects. \begin{itemize} \item An ordinal $\alpha_{\beta} \le \beta$ and a set $E_{\alpha_{\beta}}$ which is of the form $\{(\eta ,\dot{y}_{\eta}, m_{\eta},k_{\eta}) \, : \, \eta < \alpha_{\beta} \}$, where $\dot{y}_{\eta}$ is a $\mathbb{P}_{\beta}$-name of a real, and $m_\eta, k_{\eta}$ are natural numbers. As a consequence, for every bookkeeping function $F'$, we do have a notion of $\eta$-legality relative to $E_{\alpha_{\beta}}$ and $F'$ over $W[G_{\beta}]$. \item We assume by induction that for every $\eta < \alpha_{\beta}$, if $\beta_{\eta}< \beta$ is the $\eta$-th stage in $\mathbb{P}_{\beta}$ where we add a new member to $E_{\alpha_{\beta}}$, then $W[G_{\beta_{\eta}}]$ thinks that the factor forcing $\mathbb{P}_{\beta_{\eta} \beta}$ is $\eta$-legal with respect to $E_{\alpha_{\beta}} \upharpoonright \eta$. \item If $(\eta, \dot{y}_{\eta},m_{\eta},k_{\eta}) \in E_{\alpha_{\beta}}$, then we set again $\beta_{\eta}$ to be the $\eta$-th stage in $\mathbb{P}_{\beta}$ such that a new member of $E_{\alpha_{\beta}}$ is added. In the model $W[G_{\beta_{\eta}}]$, we can form the set of reals $R_{\eta}$ which were added so far by the use of a coding forcing in the iteration up to stage $\beta_{\eta}$, and which witness that $({\ast} {\ast} {\ast})$ holds for some $(x,y,m,k)$. Note that $R_{\eta}$ is a countable set of reals and can therefore be identified with a real itself, which we will do. The real $R_{\eta}$ indicates the set of places we must avoid when looking for correct codes, at least for the codes which contain $\dot{y}_{\eta},m_{\eta}$ and $k_{\eta}$.
\end{itemize} Assume that $F(\beta)= (\dot{x},\dot{y},m,k)$, assume that $\dot{x}$, $\dot{y}$ are $\mathbb{P}_{\beta}$-names for reals, and $m,k\in \omega$ correspond to the $\Sigma^1_3$-formulas $\varphi_m(v_0,v_1)$ and $\varphi_k(v_0,v_1)$. Assume that $G_{\beta}$ is a $\mathbb{P}_{\beta}$-generic filter over $W$. Let $\dot{x}^{G_{\beta}}=x$ and $\dot{y}^{G_{\beta}}=y$. We turn to the forcing $\mathbb{P}(\beta)$ we want to define at stage $\beta$ in our iteration. Again we distinguish several cases. \begin{itemize} \item[(A)] Assume that $W[G_{\beta}]$ thinks that there is an $\alpha_{\beta}$-legal forcing $\mathbb{Q}$ relative to $E_{\alpha_{\beta}}$ and some $F'$ such that \begin{align*} \mathbb{Q} \Vdash \exists z (z\in A_m(y) \cap A_k(y)). \end{align*} Then we pick the $<$-least such forcing, where $<$ is some previously fixed wellorder. We denote this forcing with $\mathbb{Q}_1$ and use \[\mathbb{P}(\beta):= \mathbb{Q}_1.\] We do not change $R_{\beta}$ at such a stage. \item[(B)] Assume that (A) is not true. \begin{itemize} \item[(i)] Assume however that there is an $\alpha_{\beta}$-legal forcing $\mathbb{Q}$ in $W[G_{\beta}]$ with respect to $E_{\alpha_{\beta}}$ and some $F'$ such that \begin{align*} \mathbb{Q} \Vdash x \in A_m(y). \end{align*} Then we set \[\mathbb{P}(\beta):= \hbox{Code} (({x}, {y}, m,k), 1).\] In that situation, we enlarge the $E$-set as follows. We let $(\alpha_{\beta}, \dot{y}, m, k)=: (\alpha_{\beta}, \dot{y}_{\alpha_{\beta}}, m_{\alpha_{\beta}}, k_{\alpha_{\beta}})$ and \[E_{\alpha_{\beta}+1}:= E_{\alpha_{\beta}} \cup \{ (\alpha_{\beta}, \dot{y}, m, k) \} .\] Further, let $r_{\eta}$ be the real which is added by $\hbox{Code} (({x}, {y}, m,k), 1)$ at stage $\eta$ of the iteration and which witnesses $({\ast} {\ast} {\ast})$ for some quadruple $(x_{\eta},y_{\eta},m_{\eta},k_{\eta})$.
Then we collect all the countably many such reals we have added so far in our iteration up to stage $\beta$, put them into one set $R$, and let \[ R_{\alpha_{\beta}+1 }:= R . \] \item[(ii)] Assume that (i) is wrong, but there is an $\alpha_{\beta}$-legal forcing $\mathbb{Q}$ with respect to $E_{\alpha_{\beta}}$ and some $F'$ in $W[G_{\beta}]$ such that \begin{align*} \mathbb{Q} \Vdash x \in A_k(y). \end{align*} Then we set \[\mathbb{P}(\beta):= \hbox{Code} (({x}, {y}, m,k), 2).\] In that situation, we enlarge the $E$-set as follows. We let the new $E$ value $(\alpha_{\beta}, \dot{y}_{\alpha_{\beta}}, m_{\alpha_{\beta}}, k_{\alpha_{\beta}})$ be $ (\alpha_{\beta}, \dot{y}, m, k)$ and \[E_{\alpha_{\beta}+1}:= E_{\alpha_{\beta}} \cup \{ (\alpha_{\beta}, \dot{y}, m, k) \}.\] Further, let $r_{\eta}$ be the real which is added by $\hbox{Code} (({x}, {y}, m,k), 2)$ at stage $\eta$ of the iteration and which witnesses $({\ast} {\ast} {\ast})$ for some quadruple $(x_{\eta},y_{\eta},m_{\eta},k_{\eta})$. Then we collect all the countably many such reals we have added so far in our iteration up to stage $\beta$, put them into one set $R$, and let \[ R_{\alpha_{\beta}+1 }:= R . \] \item[(iii)] If neither (i) nor (ii) is true, then there is no $\alpha_{\beta}$-legal forcing $\mathbb{Q}$ with respect to $E_{\alpha_{\beta}}$ which forces $x \in A_m(y)$ or $x \in A_k(y)$, and we set \[ \mathbb{P}(\beta):=\hbox{Code} (({x}, {y}, m,k), 1).\] Further, let $r_{\eta}$ be the real which is added by $\hbox{Code} (({x}, {y}, m,k), 1)$ at stage $\eta$ of the iteration and which witnesses $({\ast} {\ast} {\ast})$ for some quadruple $(x_{\eta},y_{\eta},m_{\eta},k_{\eta})$. Then we collect all the countably many such reals we have added so far in our iteration up to stage $\beta$, put them into one set $R$, and let \[ R_{\alpha_{\beta}+1 }:= R . \] In all remaining cases we force with the trivial forcing.
\end{itemize} \end{itemize} At limit stages $\beta$, we let $\mathbb{P}_{\beta}$ be the inverse limit of the $\mathbb{P}_{\eta}$'s, $\eta < \beta$, and set $E_{\alpha_{\beta}}= \bigcup_{\eta < \beta} E_{\alpha_{\eta}}$. This ends the definition of $\mathbb{P}_{\omega_1}$. \subsection{Discussion of the resulting universe} We let $G_{\omega_1}$ be a $\mathbb{P}_{\omega_1}$-generic filter over $W$. As $W[G_{\omega_1}]$ is an extension of $W$ by a proper forcing, $\omega_1$ is preserved. Moreover $\mathsf{CH}$ remains true. A second observation is that for every stage $\beta$ of our iteration and every $\eta > \beta$, the intermediate forcing $\mathbb{P}_{[\beta, \eta)}$, defined as the factor forcing of $\mathbb{P}_{\beta}$ and $\mathbb{P}_{\eta}$, is always an $\alpha_{\beta}$-legal forcing relative to $E_{\alpha_{\beta}}$ and some bookkeeping. This is clear, as by the definition of the iteration we force at every stage $\beta$ with an $\alpha_{\beta}$-legal forcing relative to $E_{\alpha_{\beta}}$, and $\alpha_{\beta}$-legality becomes a stronger notion as we increase $\alpha_{\beta}$. We shall define the separating sets now. For a pair of disjoint $\Sigma^1_3(y)$ sets $A_m(y)$ and $A_k(y)$ we consider the least stage $\beta$ such that there is a $\mathbb{P}_{\beta}$-name $\dot{z}$ such that $\dot{z}^{G_{\beta}}=z$ and $(z,y,m,k)$ is considered by $F$ at stage $\beta$. Let $R_{\beta}$ be the set of all reals which were added by the coding forcing up to stage $\beta$ and which witness $({\ast}{\ast}{\ast})$ for some $(x,y,m,k)$. Then for any real $x \in W[G_{\omega_1}]$: \begin{align*} x \in D^1_{y,m,k} (R_{\beta})\Leftrightarrow \exists r \notin R_{\beta} (&L[r] \models (x,y,m,k) \text{ can be read off from a code} \\& \text{ written on } \omega_1\text{-many $\omega$-blocks of elements of } \\& \vec{S^1} ).
\end{align*} and \begin{align*} x \in D^2_{y,m,k} (R_{\beta})\Leftrightarrow \exists r \notin R_{\beta} (&L[r] \models (x,y,m,k) \text{ can be read off from a code} \\& \text{ written on } \omega_1\text{-many $\omega$-blocks of elements of } \\& \vec{S^2}). \end{align*} It is clear from the definition of the iteration that for any real parameter $y$ and any $m,k \in \omega$, $D^1_{y,m,k} \cup D^2_{y,m,k} = 2^{\omega}$. The next lemma establishes that the sets are indeed separating. \begin{lemma} In $W[G_{\omega_1}]$, let $y$ be a real and let $m,k \in \omega$ be such that $A_m(y) \cap A_k(y)=\emptyset.$ Then there is a real $R$ such that the sets $D^1_{y,m,k}(R)$ and $ D^2_{y,m,k}(R)$ partition the reals. \end{lemma} \begin{proof} Let $\beta$ be the least stage such that there is a real $x$ such that $F(\beta) \upharpoonright 4=(\dot{x}, \dot{y},m,k)$ with $\dot{x}^{G_{\beta}}=x$, $\dot{y}^{G_{\beta}}=y$. Let $R$ be the set $R_{\beta}$ as defined above. Then, as $A_m(y)$ and $A_k(y)$ are disjoint in $W[G_{\omega_1}]$, by the rules of the iteration, case B must apply at $\beta$. Assume now for a contradiction that $D^1_{y,m,k}(R)$ and $ D^2_{y,m,k}(R)$ have non-empty intersection in $W[G_{\omega_1}]$. Let $z \in D^1_{y,m,k}(R) \cap D^2_{y,m,k}(R)$ and let $\beta' > \beta$ be the first stage of the iteration which sees that $z$ is in the intersection. Then, by the rules of the iteration and without loss of generality, we must have used case B(i) at $\beta$, and case B(ii) at stage $\beta'$. But this would imply that at stage $\beta$, there is an $\alpha_{\beta}$-legal forcing with respect to $E_{\alpha_{\beta}}$ which forces $x \in A_m(y) \cap A_k(y)$, namely the intermediate forcing $\mathbb{P}_{\beta \beta'}$. This is a contradiction.
\end{proof} \begin{lemma} In $W[G_{\omega_1}]$, for every pair $m,k$ and every parameter $y \in 2^{\omega}$ such that $A_m(y) \cap A_k(y) = \emptyset$ there is a real $R$ such that \[ A_m(y) \subset D^1_{y,m,k}(R) \land A_k(y) \subset D^2_{y,m,k}(R).\] \end{lemma} \begin{proof} The proof is by contradiction. Assume that there is a real $x$ such that $x \in A_m(y) \cap D^2_{y,m,k}(R)$ for every $R$. We consider the smallest ordinal $\beta < \omega_1$ such that $F(\beta)\upharpoonright 4$ considers a quadruple of the form $(x,y,m,k)$ and let $R= R_{\beta}$. As $A_m(y)$ and $A_k(y)$ are disjoint, we know that at stage $\beta$ we were in case B. As $x$ is coded into $\vec{S^2}$ after stage $\beta$, and by the last lemma, case B(i) is impossible at $\beta$. Hence, without loss of generality we may assume that case B(ii) applies at $\beta$. As a consequence, there is a forcing $\mathbb{Q}_2$ which is $\alpha_{\beta}$-legal with respect to $E_{\alpha_{\beta}}$ such that $\mathbb{Q}_2 \Vdash x \in A_k(y)$. Note that in that case we collect all the reals which witness $( {\ast} {\ast} {\ast})$ for some quadruple to form the set $R_{\beta}$. As $x \in A_m(y) \cap D^2_{y,m,k} (R)$, we let $\beta'> \beta$ be the first stage such that $W[G_{\beta'}] \models x \in A_m(y)$. By Lemma \ref{productlegal}, $W[G_{\beta}]$ thinks that $\mathbb{Q}_2 \times \mathbb{P}_{\beta \beta'}$ is $\alpha_{\beta}$-legal with respect to $E_{\alpha_{\beta}}$, yet $\mathbb{Q}_2 \times \mathbb{P}_{\beta \beta'} \Vdash x \in A_m(y) \cap A_k(y)$. This is a contradiction. \end{proof} The next lemma will finish the proof of our theorem: \begin{lemma} In $W[G_{\omega_1}]$, if $y \in 2^{\omega}$ is an arbitrary parameter, $R$ a real and $m,k$ natural numbers, then the sets $D^1_{y,m,k}(R)$ and $D^2_{y,m,k}(R)$ are $\Sigma^1_3(R)$-definable. \end{lemma} \begin{proof} The proof is almost identical to the proof of Lemma \ref{Sigma13}; the only difference is that the real $R$ is added as a parameter.
\end{proof} \section{Forcing the lightface $\Sigma^1_3$-separation property} The techniques developed in the previous sections can be used to force a model where the (lightface) $\Sigma^1_3$-separation property is true. In what follows, we heavily use ideas and notation from earlier sections, so the upcoming proof cannot be read independently. \begin{theorem} Starting with $L$ as the ground model, one can produce a set-generic extension $L[G]$ which satisfies $\mathsf{CH}$ and in which the ${\Sigma^1_3}$-separation property holds. \end{theorem} \begin{proof} For the proof to come, we will redefine the notion of legal forcings. \begin{definition} A mixed support iteration is called (0-)legal if it is defined as in Definition 4.2, with the only difference that we code (reals that code) quadruples of the form $(x,0,m,k)$ and $(x,1,m,k)$, where $m,k \in \omega$ and $x$ is (the name of) a real, into $\vec{S}$. \end{definition} Similar to before, we take 0-legal forcings as the base set of forcings and define gradually smaller families of forcings, which we call $n$-legal forcings with respect to a set $E$ and a bookkeeping function $F$. Assume that $E$ is a finite list of pairwise distinct pairs of natural numbers $(m_l, k_l)$, $l \le n$, and that $F$ is a bookkeeping function. Assume that for every $l \le n$ we have a notion of $l$-legality with respect to $E \upharpoonright l$, and let $(m_{n+1}, k_{n+1})$ be a new pair of natural numbers, distinct from the previous ones. Then we say that $\mathbb{P}$ is $n+1$-legal with respect to $E \cup \{ (m_{n+1}, k_{n+1} ) \}$ and $F$ if for every $l \le n$, $\mathbb{P} $ is $l$-legal with respect to $E \upharpoonright l$ and $F$ and it obeys the following rules: \begin{enumerate} \item $\mathbb{P}$ never codes a quadruple of the form $(x,1,m_{n+1},k_{n+1})$ into $\vec{S}$.
\item Whenever $\beta < \gamma $, where $\gamma$ is the length of the iteration $\mathbb{P}$, is such that there is a $\mathbb{P}_{\beta}$-name $\dot{x}$ of a real and an integer $ i\in\{1,2\}$ such that \[F(\beta)= (\dot{x},m_{n+1},k_{n+1},i)\] and for $G$ which is $\mathbb{P}_{\beta}$-generic over $W$, $W[G]$ thinks that \begin{align*} \exists \mathbb{Q} (&\mathbb{Q} \text{ is } n \text{-legal with respect to } E \, \land \\& \mathbb{Q} \Vdash x \in A_{m_{n+1}}), \end{align*} where $x=\dot{x}^G$. Then, continuing to argue in $W[G]$, we let \[\mathbb{P}(\beta)= \hbox{Code}((x,0,m_{n+1},k_{n+1}),1).\] \item Whenever $\beta < \gamma $ is such that there is a $\mathbb{P}_{\beta}$-name $\dot{x}$ of a real and an integer $ i\in\{1,2\}$ such that \[F(\beta)= (\dot{x},m_{n+1},k_{n+1},i)\] and for $G_{\beta}$ which is $\mathbb{P}_{\beta}$-generic over $W$, $W[G_{\beta}]$ thinks that \begin{align*} \forall \mathbb{Q}_1 (&\mathbb{Q}_1 \text{ is } n \text{-legal with respect to } E \\& \rightarrow \, \lnot(\mathbb{Q}_1 \Vdash x \in A_{m_{n+1}})), \end{align*} but there is a forcing $\mathbb{Q}_2$ such that $W[G_{\beta}]$ thinks that \begin{align*} \mathbb{Q}_{2} \text{ is } n \text{-legal with respect to $E$ and } \\ {\mathbb{Q}_2} \Vdash x \in A_{k_{n+1}}, \end{align*} then, continuing to argue in $W[G_{\beta}]$, we force with \[\mathbb{P}(\beta):= \hbox{Code}((x,0,m_{n+1},k_{n+1}),2).\] \item If neither of the two previous cases applies, then either \[ \mathbb{P}(\beta)=\hbox{Code}((x,0,m_{n+1},k_{n+1}),2)\] or \[ \mathbb{P}(\beta)=\hbox{Code}((x,0,m_{n+1},k_{n+1}),1),\] depending on whether the entry $ i\in\{1,2\}$ of $F(\beta)$ was 2 or 1. Otherwise $\mathbb{P}$ uses the trivial forcing at that stage. \item If $F(\beta) = (\dot{x},m,k,i)$ and for every $\mathbb{P}_{\beta}$-generic filter $G$, $W[G] \models \forall l \le n+1 ((m_l,k_l) \ne (m,k))$, then let \[ \mathbb{P}(\beta)=\hbox{Code}((x,0,m,k),i),\] where $ i\in\{1,2\}$ is read off from $F(\beta)$.
\end{enumerate} This ends the definition for the successor step $n \rightarrow n+1$. With this new notion of $n$-legality, we start the proof of the theorem. The ground model over which we form an iteration is the universe $W$ again, which was defined earlier. Over $W$ we will perform first an $\omega$-length, finitely supported iteration $(\mathbb{P}_n)_{n \in \omega}$ of legal posets, and then a second legal iteration of length $\omega_1$. The codes of the form $(x,0,m,k)$ shall eventually define the separating sets for $\varphi_m$ and $\varphi_k$; codes of the form $(x,1,m,k)$ shall correspond to countable sets of reals (i.e.\ reals themselves) which indicate the correctness of certain codes of the form $(x,0,m,k)$ which avoid the coding areas coded by these reals. We let $\{(\varphi_{m_n}, \varphi_{k_n}) \, : \, n \in \omega \}$ be an enumeration of all pairs of $\Sigma^1_3$-formulas. Assume that $(\varphi_{m_0}, \varphi_{k_0})$ is such that there is no legal forcing $\mathbb{Q}$ such that $W[\mathbb{Q}] \models \exists z (\varphi_{m_0}(z) \land \varphi_{k_0}(z))$. Repeating the arguments from before, we set $E_0:=\{(m_0,k_0) \}$ and define the notion of 1-legal with respect to $E_0$. As will become clear in a second, every step of the iteration $(\mathbb{P}_n \, : \, {n \in\omega})$ will either use a legal forcing or define a new and gradually stronger notion of legality. We let $l_n \in \omega$ denote the degree of legality we have already defined at stage $n \in \omega$ of our iteration and define $l_0$ to be $0$ (where 0-legal should just be legal) and $l_1$ to be 1 for the base case of our induction; likewise $\mathbb{P}_0$ is set to be the trivial forcing. The forcing $\mathbb{P}_{\omega}$ is the limit of the iteration $(\mathbb{P}_n \, : \,n \in \omega)$, which we will define inductively.
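Before turning to the iteration itself, it may help to record the successor-step rules just given in compact form. The following schematic merely restates cases 2--4 of the definition above (with the bookkeeping side conditions suppressed); here $F(\beta)=(\dot{x},m_{n+1},k_{n+1},i)$ and $x=\dot{x}^{G_{\beta}}$:
\[
\mathbb{P}(\beta)=
\begin{cases}
\hbox{Code}((x,0,m_{n+1},k_{n+1}),1) & \text{if some } n\text{-legal } \mathbb{Q} \Vdash x \in A_{m_{n+1}},\\
\hbox{Code}((x,0,m_{n+1},k_{n+1}),2) & \text{else, if some } n\text{-legal } \mathbb{Q} \Vdash x \in A_{k_{n+1}},\\
\hbox{Code}((x,0,m_{n+1},k_{n+1}),i) & \text{else.}
\end{cases}
\]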
Assume we are at stage $n \ge 1$ of the iteration and we have defined already the following list of objects and notions: \begin{enumerate} \item $\mathbb{P}_{n-1}$ and the generic filter $G_{n-1}$. \item A natural number $l_n \le n$ and a notion of $l_n$-legal relative to $E_{l_n}= \{ (m'_0,k'_0),...,(m'_{l_n-1},k'_{l_n-1}) \} \subset \{ (m_0,k_0),...,(m_{n-1},k_{n-1}) \} $, which is a strengthening of 1-legal relative to $E_0$. \item A finite set of reals $\{R_0<...<R_{n - (l_n -1)} \}$, where each real $R_i$ codes a countable set of reals. The choice of the indices will become clear later. \end{enumerate} Consider now the $n+1$-th pair $(\varphi_{m_n}, \varphi_{k_n})$ and split into cases. \begin{enumerate} \item[$(a)$] There is an $l_n$-legal forcing $\mathbb{Q}$ such that \[W[G_{n-1}] \models \mathbb{Q} \Vdash \exists z (\varphi_{m_n} (z) \land \varphi_{k_n} (z)), \] in which case we use the forcing $\mathbb{Q}$. We collect all the reals we have added so far generically which witness $({\ast} {\ast} {\ast})$ for a triple $(x,m,k)$ and call the set $R_{n-l_n}$. In a second step, we use the usual method to code the quadruple $(R_{{n-l_n}}, 1,m'_{l_{n-1}},k'_{l_{n-1}} )$ into $\vec{S}^1$. \item[$(b)$] In the second case there is no $l_{n}$-legal forcing $\mathbb{Q}$ relative to $E_{l_n}$ which forces $\varphi_{m_n}$ and $\varphi_{k_n}$ to have non-empty intersection. In that case we force with the trivial forcing and define the notion of $l_{n+1}$-legal. We first let $(m'_{l_n},k'_{l_n})=(m_n,k_n)$ and $E_{l_n +1}:= E_{l_n} \cup \{(m_n,k_n)\}$, and define $l_{n+1}$-legal relative to $E_{l_n+1}$ just as above. We do not define a new $R_{n-l_n}$. \end{enumerate} We let $\mathbb{P}_{\omega}$ be the inverse limit of the forcings $\mathbb{P}_n$ and consider the universe $W[\mathbb{P}_{\omega}]$. We shall assume from now on that in $\mathbb{P}_{\omega}$, case $(b)$ is applied infinitely many times.
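As a reading aid, the stage-$n$ dichotomy just described can be summarized as follows; this is only a restatement of cases $(a)$ and $(b)$, with the bookkeeping suppressed:
\[
\mathbb{P}(n)=
\begin{cases}
\mathbb{Q} \ast \hbox{Code}((R_{n-l_n},1,m'_{l_{n-1}},k'_{l_{n-1}}),1) & \text{if some } l_n\text{-legal } \mathbb{Q} \Vdash \exists z\,(\varphi_{m_n}(z)\land\varphi_{k_n}(z)),\\
\text{the trivial forcing, with } E_{l_n+1}:=E_{l_n}\cup\{(m_n,k_n)\} & \text{else.}
\end{cases}
\]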
\begin{lemma} For every $n \in \omega$, the tail of the iteration $(\mathbb{P}_m \, :\, m \ge n)$ is at least an $l_n-1$-legal iteration relative to $E_{l_n}$ as seen from $W[G_n]$. \end{lemma} \begin{proof} This can be seen directly from the definition of the iteration. At stage $n$ we either follow case $(a)$, which gives an $l_n -1$-legal forcing, as we use the $l_n$-legal $\mathbb{Q}$ and additionally code $(R_{{n-l_n}}, 1,m'_{l_{n-1}},k'_{l_{n-1}} )$, which results in an $l_n-1$-legal forcing; or we apply case $(b)$, thus defining $l_{n+1}$-legality, and every further factor of the iteration must be $l_n$-legal. As mixed support iterations of $l_{n}-1$-legal forcings yield an $l_n -1$-legal forcing, this ends the proof. \end{proof} The arguments which are about to follow depend heavily on the actual form of the iteration $\mathbb{P}_{\omega}$, which in turn depends on how the enumeration of the $\Sigma^1_3$-formulas behaves. The theorem will of course be true no matter what $\mathbb{P}_{\omega}$ looks like. To facilitate the arguments and notation, however, we assume without loss of generality from now on that the sequence $(\varphi_{m_n}, \varphi_{k_n})$ is chosen so that in the definition of $\mathbb{P}_{\omega}$ the cases $(a)$ and $(b)$ alternate. Thus whenever $n$ is even, $(\varphi_{m_n}, \varphi_{k_n})$ is such that case $(b)$ has to be applied, and whenever $n$ is odd, case $(a)$ is the one which should be applied for $(\varphi_{m_n}, \varphi_{k_n})$. By changing the order of the $(\varphi_{m_n}, \varphi_{k_n})$, this can always be achieved.
Consequently, at even stages $2n$ of the iteration, we define the new notion of $n+1$-legal while forcing with the trivial forcing; at odd stages $2n+1$, we use an $n+1$-legal forcing to force that $\varphi_{m_{2n+1}}$ and $\varphi_{k_{2n+1}}$ have non-empty intersection, then form the real $R_{2n+1 -n} =R_{n+1}$ and code the quadruple $(R_{n+1},1,m_{2n},k_{2n})$ into $\vec{S^1}$. A consequence of the last lemma (and our assumption on the form of $\mathbb{P}_{\omega}$) is that for every even natural number $2n$, there is a final stage in $\mathbb{P}_{\omega}$ where we create new codes of the form $(R,1,m_{2n},k_{2n})$. Indeed, by the definition of $l_{n+1}$-legal, no codes of the form $(R,1,m_{2n},k_{2n})$ are added by $l_{n+1}$-legal forcings, and we have that \begin{align*} R_{n}:= \{ R \, : \, (R, 1,m_{2n},k_{2n}) \text{ is coded into } \vec{S^1} \}. \end{align*} To introduce a useful notion, we say that a real $x$, which is coded into $\vec{S^1}$ or $\vec{S^2}$, has coding area almost disjoint from the real $R$, if $R$ codes an $\omega_1$-sized subset of $\omega_1$ and the $\omega$-blocks where $x$ is coded are almost disjoint from the set of ordinals coded by $R$, in that their intersection is countable. \begin{lemma} In $W[\mathbb{P}_{\omega}]$, for every $n \in \omega \setminus \{0\}$, there is only one real $R$, namely $R_{n}$, which has a code of the form $(R_n,1,m_{2n},k_{2n})$ written into $\omega_1$-many $\omega$-blocks of elements of $\vec{S^1}$ almost disjoint from $R_{n-1}$. \end{lemma} \begin{proof} This is just a straightforward consequence of the definition of the iteration and of our assumption on the form of $\mathbb{P}_{\omega}$. For $n=1$, note that $R_0$ is the unique real for which $(R_0,1,m_0,k_0)$ is coded into $\vec{S^1}$.
Then the next forcing $\mathbb{P}(2)$ is trivial while 2-legal is defined, and the forcing $\mathbb{P}(3)$ first forces $\varphi_{m_3}$ and $\varphi_{k_3}$ to intersect without creating any code of the form $(R,1,m_0,k_0)$ or $(R,1,m_2,k_2)$, then forms $R_1$ and writes $(R_1,1,m_2,k_2)$ into $\vec{S}^1$ with coding area almost disjoint from $R_0$. As all later factors of the iteration are 2-legal, there will be no new codes of the form $(R,1,m_2,k_2)$, hence $R_1$ is the unique real for which $(R_1,1,m_2,k_2)$ is written into $\vec{S^1}$ with coding area almost disjoint from $R_0$. The argument for arbitrary $n$ works exactly the same way with the obvious changes of indices. \end{proof} \begin{lemma} In $W[\mathbb{P}_{\omega}]$, every real $R_{n}$ is $\Sigma^1_3$-definable. \end{lemma} \begin{proof} This is by induction on $n$. For $n=0$, $R_0$ is the unique real for which $(R,1,m_0,k_0)$ is coded into $\vec{S^1}$. This can be written in a $\Sigma^1_3$-way: \begin{align*} x= R_0 \Leftrightarrow \exists r \forall M &(|M|=\aleph_0 \land M \text{ is transitive } \land r \in M \land \omega_1^M=(\omega_1^L)^M \rightarrow \\&M \text{ sees with the help of the coded information in } r \text{ that }\\& x \text{ is the unique real such that } (x,1,m_0,k_0) \text{ is coded in} \\& \text{ a block of } \vec{S^1}). \end{align*} Now assume that there is a $\Sigma^1_3$-formula which uniquely defines $R_{n}$; then, by the last lemma, $R_{n+1}$ is the unique real which has a code of the form $(R,1,m_{2(n+1)},k_{2(n+1)})$ written into $\aleph_1$-many $\omega$-blocks of elements of $\vec{S^1}$ almost disjoint from the set of ordinals coded by $R_n$.
Let $\psi$ be the $\Sigma^1_3$-formula which defines $R_{n}$, then \begin{align*} x= R_{n+1} \Leftrightarrow &\psi(R_{n}) \text{ and } \\ &\exists r \forall M (|M|=\aleph_0 \land M \text{ is transitive } \land r \in M \land \omega_1^M=(\omega_1^L)^M \rightarrow \\&M \text{ sees with the help of the coded information in } r \text{ that }\\& x \text{ is the unique real such that } (x,1,m_{2(n+1)},k_{2(n+1)}) \\& \text{is coded in} \text{ $\aleph_1$-many blocks of } \vec{S^1} \\& \text{almost disjoint from the ordinals coded by } R_{n}). \end{align*} \end{proof} The reals $R_{n}$ indicate the set of places we need to exclude in order to obtain correct codes of the form $(x,0,m_{2n},k_{2n})$ which are written into $\vec{S}$. Indeed the iteration $\mathbb{P}_{\omega}$, after the stage where we coded the quadruple $(R_{n},1,m_{2n},k_{2n})$ into $\vec{S^1}$, will be $2n-1$-legal, just by the definition of the iteration, which means that the tail of $\mathbb{P}_{\omega}$ will never produce a pathological situation. Finally we can form our desired universe of the $\Sigma^1_3$-separation property. We let $E_{\omega} = \{(m_{2n},k_{2n}) \,: \, n \in \omega\}$ and force over $W[\mathbb{P}_{\omega}]$ as our ground model. We use a countably supported iteration of length $\omega_1$ where we code every quadruple of the form $(x,0,m_{2n},k_{2n})$, $x \in 2^{\omega}$, into either $\vec{S^1}$ or $\vec{S^2}$ with coding area almost disjoint from $ R_{n}$, according to whether case 2, 3 or 4 is true in the definition of $n$-legal. Note that, by assumption, all pairs $(\varphi_{m_n}, \varphi_{k_n})$, $n$ odd, have non-empty intersection. Let $W_1$ denote the universe we obtain this way. As a consequence we can define in $W_1$ the desired separating sets as follows. For a pair $(\varphi_{m_{2n}},\varphi_{k_{2n}})$ we let $\psi_{n}$ be the $\Sigma^1_3$-formula which defines $R_{n}$.
Now for any real $x$ we let \begin{align*} x \in D_{m_{2n},k_{2n}} \Leftrightarrow \, &\psi_{n}(R_{n}) \text{ and } \\& \exists r \forall M (|M|=\aleph_0 \land M \text{ is transitive } \land r \in M \land \omega_1^M=(\omega_1^L)^M \\&\rightarrow M \text{ sees with the help of the coded information in } r \text{ that }\\& (x,0,m_{2n},k_{2n}) \text{ is coded into $\aleph_1$-many blocks of } \vec{S^1} \\& \text{almost disjoint from the ordinals coded by } R_{n}) \end{align*} and \begin{align*} x \notin D_{m_{2n},k_{2n}} \Leftrightarrow \, &\psi_{n}(R_{n}) \text{ and } \\& \exists r \forall M (|M|=\aleph_0 \land M \text{ is transitive } \land r \in M \land \omega_1^M=(\omega_1^L)^M \\&\rightarrow M \text{ sees with the help of the coded information in } r \\& \text{that } (x,0,m_{2n},k_{2n}) \text{ is coded into $\aleph_1$-many blocks of } \vec{S^2} \\&\text{almost disjoint from the ordinals coded by } R_{n}). \end{align*} Both formulas are $\Sigma^1_3$, hence the $\Sigma^1_3$-separation property holds in $W_1$. \end{proof} \section{Possible further applications and open problems} The method which was used to prove the consistency of $\bf{\Sigma^1_{3}}$-separation can be applied to the generalized Baire space as well, as we will sketch briefly. Let $BS(\omega_1)$ be defined as $\omega_1^{\omega_1}$ equipped with the usual product topology, i.e.\ basic open sets are of the form $O_{\sigma} :=\{ f \in \omega_1^{\omega_1} \, : \, f \supset \sigma \}$ for $\sigma \in \omega_1^{<\omega_1}$. The projective hierarchy of $BS(\omega_1)$ is formed just as in the classical setting via projections and complements. The $\bf{\Sigma^1_{1}}$-sets are projections of closed sets, the $\bf{\Pi^1_{1}}$-sets are the complements of the $\bf{\Sigma^1_{1}}$-sets and so on. The corresponding separation problem in $BS(\omega_1)$ is the following: does there exist a set-generic extension of $L$ where $\bf{\Sigma^1_1}$-sets can be separated by $\bf{\Delta^1_1}$-sets?
Our above proof can be applied here as well. All we have to do is to lengthen the sequence of stationary sets we will use to code. We start with $L$ as our ground model and fix our definable sequence of pairwise almost disjoint, $L$-stationary subsets of $\omega_1$, $(R_{\alpha} \, : \, \alpha < \omega_2)$. We again split $\vec{R}$ into $\vec{R}^1$ and $\vec{R}^2$, add $\omega_2$-many Suslin trees $\vec{S}$ generically and use $\vec{R}$ to code up $\vec{S}$ just as we did in the construction of the universe $W$, but leave out the almost disjoint coding forcings, as we quantify over $H(\omega_2)$ anyway. Next we list the $\bf{\Sigma^1_1}$-formulas $\varphi_n$ and start an $\omega_2$-length iteration where we add branches of members of the definable $(S_{\alpha} \, : \, \alpha < \omega_2)$ whenever our bookkeeping function $F$ hands us a triple $(x,m,k)$, just as in the situation of the usual Baire space. As there, we distinguish the several cases and restrict ourselves to \emph{legal} forcings, where legal is the straightforward adjustment of legal in the $\omega$-case. The separating sets $D_{m,k}$ are defined using $\aleph_1$-sized, transitive models which witness the wanted patterns on $\vec{S}^1$ and $\vec{S}^2$. The sequence of the fixed $W$-Suslin trees $(S_{\alpha} \, : \, \alpha < \omega_2)$ is $\Sigma_1 (\omega_1)$-definable, thus the codes we write into them are $\Sigma_1(\omega_1)$-definable as well. We do not have to add almost disjoint coding forcings, as we quantify over subsets of $\omega_1$ in this setting anyway. All the factors will have the ccc, thus an iteration of length $\omega_2$ is sufficient to argue just as above that in the resulting generic extension $L[G]$, every pair of disjoint $\bf{\Sigma^1_1}$-sets is separated by the corresponding $D_{m,k}$. The just sketched method is not limited to the case $\omega_1$. Indeed, if $\kappa$ is a successor cardinal in $L$, then we can lift the argument to $\kappa$ as well.
The proof will rely on a different kind of preservation result for iterated forcing constructions, as we cannot use Shelah's theory of iterations of $S$-proper forcings anymore. Also, the choice of the definable sequence of $L$-stationary subsets of $\kappa$ has to be altered slightly, as we cannot shoot clubs in a nice way through arbitrary stationary subsets of $\kappa$. How to solve some of the problems just posed is worked out in \cite{SyLL}. What remains an interesting open problem is the following: \begin{question} Can one force the $\Sigma^1_{1}$-separation property for $BS(\kappa)$ where $\kappa$ is inaccessible? What if $\kappa$ is weakly compact? \end{question} Another possible further direction is, as mentioned already in the introduction, to replace the ground model $L$ with inner models with large cardinals. We expect that a modification of the ideas of this article can be carried out in that context. In particular we expect that, given any natural number $n \ge 1$, over the canonical inner model with $n$-many Woodin cardinals $M_n$ one can force a model for which the $\Sigma^1_{n+3}$-separation property is true. Note here that this would produce, for the first time, universes where the $\Sigma^1_{2n}$-separation property is true. These considerations rely on large cardinals, however, and it is interesting to ask whether one can get by without them. \begin{question} For $n \ge 4$, can one force the $\Sigma^1_n$-separation property over $L$? \end{question} Last, note that the technique presented in this article seems to work only locally for one fixed $\Sigma^1_n$-pointclass. It would be very interesting to produce a model where we force a global behaviour of the $\Sigma^1_n$-separation property. \begin{question} For $n,m \in \omega$, is it possible to force the $\Sigma^1_n$ and the $\Sigma^1_m$-separation property simultaneously? If $E \subset \omega$, is it possible to force a universe in which the $\Sigma^1_n$-separation property is true for every $n \in E$?
\end{question} Finally, it is tempting to analyse how many consequences of $\Delta^1_2$-determinacy can be forced to hold simultaneously. A first test question in that direction would be \begin{question} Is it possible to force over $L$ a universe in which the $\Sigma^1_3$-separation property holds and every $\Sigma^1_3$-set has the Baire property? \end{question} \end{document}
\begin{document} \title{Some asymptotic expansions for a semilinear reaction-diffusion problem in a sector\thanks{This publication has emanated from research conducted with the financial support of Science Foundation Ireland under the Basic Research Grant Programme 2004; Grants 04/BR/M0055 and 04/BR/M0055s1.}} \begin{abstract} The semilinear reaction-diffusion equation $-\e^2\D u+b(x,u)=0$ with Dirichlet boundary conditions is considered in a convex unbounded sector. The diffusion parameter $\eps^2$ is arbitrarily small, and the ``reduced equation'' $b(x,u_0(x))=0$ may have multiple solutions. A formal asymptotic expansion for a possible solution $u$ is constructed that involves boundary and corner layer functions. For this asymptotic expansion, we establish certain inequalities that are used in \cite{KK0} to construct sharp sub- and super-solutions and then establish the existence of a solution to a similar nonlinear elliptic problem in a convex polygon. \end{abstract} \section{Introduction} In this note we consider the singularly perturbed semilinear reaction-diffusion boundary-value problem \begin{subequations}\label{1.1} \beqa F u \equiv -\eps^2 \triangle u+ b(x,u) = 0,& \qquad& x=(x_1,x_2)\in S\subset\mathbb{R}^2, \label{1.1a}\\ u(x)=g(x),&\qquad& x\in \partial S, \label{1.1b} \eeqa \end{subequations} in a convex sector $S$ with vertex $O$ and sides $\G$ and $\G^-$. Our purpose in this note is to establish some asymptotic expansions and related inequalities for a possible solution to the problem. These are needed in \cite{KK0} to construct sharp sub- and super-solutions and then establish the existence of a solution to a similar nonlinear elliptic problem in a convex polygon. The proofs involve lengthy formal calculations; carrying them out is the purpose of this paper. The ``reduced problem'' associated with (\ref{1.1}) is defined by formally setting $\varepsilon=0$ in (\ref{1.1a}), i.e. \begin{equation} \label{red} b(x,u_0(x))=0 \quad \mbox{for } x\in \bar S.
\end{equation} It is assumed that \eqref{red} has a smooth solution $u_0$ that is stable in a sense to be described below. The hypotheses on $b$ are such as to include the possibility of multiple solutions to (\ref{red}) and therefore to (\ref{1.1}). Since it may happen that $u_0 \ne g$ on $\pd S $, the solutions may exhibit boundary layer behavior near $\pd S$. We shall assume that the function $b$ is smooth and that $g$ is smooth on each $\G$ and $\G^-$ and continuous at the vertex $O$. Furthermore, we assume that $u_0(x)$, $g(x)$, and, for each fixed $s$, the function $b(x,s)$, as well as their derivatives, are bounded as $|x|\rightarrow\infty$. In addition we make the following assumptions. \begin{description} \item[A1] ( {\em stable reduced solution}) There is a number $\g>0$ such that $$ b_u(x,u_0(x))>\gamma^2>0 \quad \mbox{for all } x\in S. $$ \item[A2] ({\em boundary condition}) The boundary data $g(x)$ from (\ref{1.1b}) satisfy $$ \int^{v}_{u_0(x)} \!b(x,s)\,ds>0 \qquad\mbox{for all}\;\; v \in \bigl(u_0(x), g(x)\bigr]',\qquad x\in\partial S. $$ Here the notation $(a,b]'$ is defined to be $(a,b]$ when $a<b$ and $[b, a)$ when $a>b$, while $(a,b]'=\emptyset$ when $a=b$. \item[A3] ({\em corner condition}) If $g(O)\neq u_0(O)$, then $$ \frac{b(O,g(O))}{g(O)-u_0(O)}> 0. $$ \item[A4] Only to simplify our presentation, we make a further assumption that $$ u_0(x) < g(x)\qquad\mbox{for all} \;\;x \in \pd S. $$ Using A4, we can simplify A3 to $b( O , g( O ) ) > 0$. \end{description} \noindent Note that if $g(x)\approx u_0(x)$, then A2 follows from A1 combined with (\ref{red}), while if $g(x)=u_0(x)$ at some point $x\in\partial S$, then A2 does not impose any restriction on $g$ at this point. Similarly, if $g(O)\approx u_0(O)$, then A3 follows from A1 combined with (\ref{red}), while if $g(O)=u_0(O)$ at some vertex $O$, then A3 does not impose any restriction on $g$ at this point. 
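To make the assumptions concrete, consider the model nonlinearity $b(x,u)=u^{3}-u$ with constant boundary data $g\equiv 2$; this example is ours and is not taken from \cite{KK0}. The reduced equation $u_0^{3}-u_0=0$ has the three constant solutions $u_0\equiv -1,\,0,\,1$, so multiple reduced solutions do occur. Choosing $u_0\equiv 1$, the assumptions read:

```latex
% Model data (hypothetical example, not from the paper): b(x,u)=u^3-u, u_0 = 1, g = 2.
\begin{align*}
\mbox{A1:}&\quad b_u(x,u_0)=3u_0^2-1=2>\gamma^2 \quad\mbox{for, say, } \gamma=1;\\
\mbox{A2:}&\quad \int_{1}^{v} (s^3-s)\,ds>0 \quad\mbox{for } v\in(1,2],
   \ \mbox{ since } s^3-s>0 \mbox{ for } s>1;\\
\mbox{A3:}&\quad b(O,g(O))=2^3-2=6>0;\qquad
\mbox{A4:}\quad u_0\equiv 1<2\equiv g \ \mbox{ on } \pd S.
\end{align*}
```

The remaining constant solution $u_0\equiv 0$ violates A1, since $b_u(x,0)=-1<0$; it is not a stable reduced solution in the sense of A1.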
Assumption A1 is local and permits the construction of multiple solutions to (\ref{red}) and therefore to (\ref{1.1}). Assumptions A2 and A3 guarantee the existence of boundary and corner layer ingredients, respectively, in an asymptotic expansion for problem (\ref{1.1}). The note is organized as follows. Section~\ref{sec2} defines some boundary layer functions associated with each side of the sector $S$ and some corner layer functions associated with the vertex of $S$. The boundary layer functions are defined as solutions of some ordinary differential equations in a stretched independent variable. The corner layer functions are solutions of some elliptic partial differential equations in stretched independent variables. In Sections~\ref{sec3} and~\ref{sec_beta}, these boundary and corner functions are assembled into a formal first-order asymptotic expansion and a perturbed asymptotic expansion, respectively, and then certain properties of the unperturbed and perturbed asymptotic expansions are established that are used in \cite{KK0}. The proofs involve much computation; carrying it out is the purpose of this note. \noindent {\it Notation.} Throughout the paper we let $C$, $\bar C$, $c$, $c'$ denote generic positive constants that may take different values in different formulas, but are always independent of $\eps$ ($\bar C$ is usually used for a sufficiently large constant). A subscripted $C$ (e.g., $C_1$) denotes a positive constant that is independent of $\eps$ and takes a fixed value. For any two quantities $w_1$ and $w_2$, the notation $w_1=O(w_2)$ means $|w_1|\le C|w_2|$. \section{ Boundary and corner layer functions }\label{sec2} This section defines some boundary layer functions associated with each side of the sector $S$ and some corner layer functions associated with the vertex of $S$. The boundary layer functions are defined as solutions of some ordinary differential equations in a stretched independent variable.
The corner layer functions are solutions of some elliptic partial differential equations in stretched independent variables. The existence and properties of the corner layer functions are established in \cite[Section~3]{KK0}. We use the functions \beq\label{B_tB} B(x,t)=b(x,u_0(x)+t),\qquad \Bt(x,t;p)=b(x,u_0(x)+t)-p\,t. \eeq The perturbed version $\Bt$ of the function $B$ is used, with $|p|$ sufficiently small, in the construction of sub- and super-solutions. In the constructions that follow, a tilde will always denote a perturbed function. The perturbed functions always depend on the parameter $p$, but we will sometimes not show the explicit dependence. Thus, we will sometimes write $\Bt(x,t)$ for $\Bt(x,t;p)$. We need a notation for the derivatives of $\Bt$. For derivatives with respect to the first argument, we write $\nab_x\Bt$, $\nab_x^2\Bt$, etc., for the vector, matrix of second derivatives, etc., with respect to $x$. We write $\Bt_t$, $\Bt_{tt}$, etc., for derivatives with respect to $t$. Note also that $\Bt(x,0)=0$, so $\nab_x^k\Bt(x,0)=0$ for $k=1,2,\cdots$; hence \begin{equation}\label{DxB} |\nab_x^k\Bt(x,t)| \le C|t| \qquad\rmfor k=0,1,2,\cdots. \end{equation} We will occasionally use, for any function $f$, the notations \begin{equation}\label{fabc} f\big|^b_a=f(b)-f(a),\qquad f\big|^c_{a;\,b}=f(c)-f(b)-f(a). \end{equation} Since $f\big|^{a+b}_{a;\,b}+f(0)=ab\,f''(t^*)$ for some intermediate point $t^*$, we see that $f(0)=0$ implies $f\big|^{a+b}_{a;\,b}=O(|ab|)$ and therefore $f\big|^{a+b+c}_{a;\,b}=O(|c|+|ab|)$. In view of (\ref{DxB}), we thus have \beq\label{DxB_abc} \nab_x^k\Bt(x,\cdot)\Bigr|^{c+a+b}_{a;\,b}=O(|c|+|ab|). \eeq We shall now define functions needed to assemble a first-order asymptotic expansion and its perturbed version. The following two subsections deal respectively with the side $\G$ of $S$ and with the vertex $O$ of $S$. \subsection{Solution near a side}\label{ssec2_1} In this subsection we construct boundary layer functions associated with the side $\G$ of $\pd S$.
An analogous construction can be made for the side~$\G^-$. Throughout the subsection, $\G$ denotes the line that extends the ray~$\G$. Extend $u_0$ and $b$ to smooth functions, also denoted $u_0$ and $b$, on $\mathbb{R}^2$ and $\mathbb{R}^2\times \mathbb{R}$, respectively, so that (\ref{red}) and A1 hold true for all $x\in\mathbb{R}^2$. Furthermore, extend $g$ defined on the ray $\G$ to a smooth function, also denoted $g$, on the line $\G$, which satisfies the extended form of A2 and A4 for all $x \in \G$. Let $\ev_s$ denote the unit vector pointing in the direction of $\G$. Let $\ev_r$ be the unit vector perpendicular to $\ev_s$ and oriented to point into $S$. Let $s$ denote the signed distance along $\G$ with $s=0$ at $O$ and $s>0$ on the ray $\G$. For $x \in \Rbb^2$ write $x=O+s\ev_s+r\ev_r$. Then $\xb=O+s\ev_s$ is the point on $\G$ which is closest to $x$ and $r$ is the signed distance from $\xb$ to $x$, with $r>0$ if $x \in S$. ($\ev_s$, $\ev_r$, $x$ and $\bar x$ are shown in Figure~\ref{fig_fig1}). Let $\vt_0(\xi,s;p)$ be the solution to the nonlinear autonomous two point boundary value problem \begin{subequations}\label{v0} \beqa \label{v0a} &\displaystyle-\frac{\partial^2 \vt_0}{\partial \xi^2}+\Bt(\bar x,\vt_0;p)=0,\\ \displaystyle\label{v0b} &\vt_0(0,s;p)=g(\bar x)-u_0(\bar x), \qquad \vt_0(\infty,s;p)=0. \eeqa \end{subequations} The geometric meaning of the variable $\x$ is given by the formula $\x=r/\e$. The variables $p$ and $s$ appear as parameters in the problem \eqref{v0}. The parameter $p$ satisfies $|p|<\g^2$ and in general will be close to zero. We sometimes omit the explicit dependence of $\vt_0$ on $p$ and write $\vt_0(\x,s)=\vt_0(\x,s;p)$. We set $v_0(\x,s)=\vt_0(\x,s;0)$. The function $v_0$ appears in the asymptotic expansion of the solution near the side $\G$.
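As a sanity check on the layer problem \eqref{v0}, consider the model data $b(x,u)=u^{3}-u$, $u_0\equiv1$, $g\equiv2$, $p=0$ (a hypothetical illustration of ours, not part of the construction in \cite{KK0}). Then $B(t)=b(x,1+t)=t^{3}+3t^{2}+2t$, and the first integral $\tfrac12(v_0')^2=\int_0^{v_0}B(s)\,ds=v_0^{2}(1+v_0/2)^{2}$ reduces \eqref{v0} to $v_0'=-\sqrt2\,v_0(1+v_0/2)$, which integrates to $v_0(\xi)=w/(1-w/2)$ with $w=\tfrac23 e^{-\sqrt2\,\xi}$. The following sketch verifies this profile numerically:

```python
# Hypothetical model example (not from the paper): b(x,u) = u^3 - u, u_0 = 1,
# g = 2, p = 0, so A = g - u_0 = 1 and B(t) = b(x, 1 + t) = t^3 + 3t^2 + 2t,
# with B_t(0) = 2 = gamma^2.  Closed-form layer profile: v0(xi) = w/(1 - w/2),
# where w = (2/3) exp(-sqrt(2) xi).
import math

def v0(xi):
    w = (2.0 / 3.0) * math.exp(-math.sqrt(2.0) * xi)
    return w / (1.0 - w / 2.0)

def B(t):
    return t**3 + 3.0 * t**2 + 2.0 * t

def ode_residual(xi, h=1e-4):
    # residual of -v'' + B(v) = 0, with v'' from central finite differences
    vpp = (v0(xi - h) - 2.0 * v0(xi) + v0(xi + h)) / h**2
    return -vpp + B(v0(xi))

# boundary condition v0(0) = g - u_0 = 1, and the ODE holds along the layer
assert abs(v0(0.0) - 1.0) < 1e-12
assert all(abs(ode_residual(xi)) < 1e-5 for xi in (0.5, 1.0, 2.0, 5.0))
# decay rate gamma = sqrt(2): v0(xi+1)/v0(xi) approaches exp(-sqrt(2))
assert abs(v0(9.0) / v0(8.0) - math.exp(-math.sqrt(2.0))) < 1e-5
```

The first integral used here is obtained by multiplying \eqref{v0a} by $\partial\vt_0/\partial\xi$ and integrating over $(\xi,\infty)$; positivity of $\int_0^t B(x,s)\,ds$, which is what A2 provides, is exactly what makes the decaying branch well defined.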
With $v_0$ defined, we define a function $v_1(\x,s)$ to be the solution to the linear two point boundary value problem \begin{subequations}\label{v1} \beqa &\displaystyle -\frac{\partial^2v_1}{\partial \xi^2}+v_1 B_t(\bar x,v_0) =-\x\,\ev_r\cdot\nab_xB(\xb,v_0), \label{v1a}\\ \displaystyle & v_1(0,s)=v_1(\infty,s)=0.\label{v1b} \eeqa \end{subequations} Note that $v_1$ is not a perturbed function as it does not depend on $p$. We also define \begin{equation}\label{v} \begin{array}{c} \vto_0(\x;p)=\vt_0(\x,0;p),\qquad \vo_0(\x) = v_0(\x,0),\qquad \vo_1(\x)=v_1(\x,0), \\ \vt = \vt_0+\e v_1,\quad\; v=v_0+\e v_1, \quad\; \vto=\vto_0+\e \vo_1, \quad\; \vo=\vo_0+\e \vo_1. \\ \end{array} \end{equation} In our notation, a small circle above a function name indicates that in the argument of the function we have set $s=0$. For the solvability and properties of problems \eqref{v0} and \eqref{v1} we cite a result from \cite[Lemma~2.1]{KK0}. \begin{lemma}\label{l-v} There is $p_0\in(0,\gamma^2)$ such that for all $|p|\le p_0$ there exist functions $\vt_0$ and $v_1$ that satisfy \eqref{v0},\,\eqref{v1}. For the function $\vt_0=\vt_0(\xi,s;p)$ we have \beq\label{vt_monotone} \vt_0\ge 0, \qquad\quad \frac{\partial \vt_0}{\partial p}\ge 0. \eeq Furthermore, for any $k \ge 0$ and arbitrarily small but fixed $\d>0$, there is a $C>0$ such that for $0\le \xi<\infty$ and $s \in \Rbb$, \begin{equation*} \Bigl|\frac{\partial^k \vt_0}{\partial \xi^k} \Bigr| +\Bigl|\frac{\partial^k \vt_0}{\partial s^k}\Bigr| +\Bigl|\frac{\partial^k v_1}{\partial \xi^k} \Bigr| +\Bigl|\frac{\partial^k v_1}{\partial s^k}\Bigr| +\Bigl|\frac{\partial \vt_0}{\partial p}\Bigr| +\Bigl|\frac{\partial^2 \vt_0}{\partial p\,\partial s}\Bigr| \le C e^{-(\gamma-\sqrt{|p|}-\delta)\xi}. \end{equation*} \end{lemma} For later purposes we shall now obtain an estimate for $\vt_0-v_0$.
\begin{lemma}\label{vt0-v0} We have, for $|p|$ sufficiently small, \begin{equation}\label{Dw0ineq} -\e^2\D (\vt_0-v_0) =-B(x,\cdot)\Bigl|^{\vt}_{v} +p v_0+O(\e^2+p^2). \end{equation} \end{lemma} \begin{proof} It follows from Lemma~\ref{l-v} that $|\vt_0-v_0| \le C|p|e^{-c\x}$ and $|\frac{\partial^2}{\partial s^2}\vt_0|\le C$. The latter estimate yields $\e^2\D (\vt_0-v_0)={\textstyle\frac{\partial^2}{\partial\xi^2}}(\vt_0-v_0)+O(\e^2)$. Furthermore, invoking (\ref{v0a}),\,(\ref{B_tB}) and the estimate for $\vt_0-v_0$, we get $$ -{\textstyle\frac{\partial^2}{\partial\xi^2}}(\vt_0-v_0) =-B(\xb,\cdot)\Bigl|^{\vt_0}_{v_0}+p\vt_0 =-\Gc(\xb,0)+pv_0+O(p^2), $$ where we use the auxiliary function $\Gc(x,t)=B(x,\cdot)\Bigl|^{\vt_0+t}_{v_0+t}$. Noting that $B(x,\cdot)\Bigl|^{\vt}_{v} =B(x,\cdot)\Bigl|^{\vt_0+\eps v_1}_{v_0+\eps v_1}=\Gc(x,\eps v_1)$, it remains to establish the estimate $\Gc(x,\eps v_1)-\Gc(\xb,0)=O(\e^2+p^2)$. Indeed, we have \beq\label{last} \Gc(x,\eps v_1)-\Gc(\xb,0) =(x-\xb)\cdot\nab_x\Gc^*+\e v_1\Gc^*_t =O(\e^2+p^2), \eeq where the asterisks indicate that the derivatives are evaluated at an intermediate point. In the last step in (\ref{last}) we combined $|\Gc_t^*|=|\Gc_t(x^*,t^*)|=|B_t(x^*,\vt_0+t^*)-B_t(x^*,v_0+t^*)| \le C|\vt_0-v_0|$, and a similar estimate $|\nab_x\Gc^*| \le C|\vt_0-v_0|$ with $|x-\xb|=\e\x$, $|v_1| \le C$ and $|\vt_0-v_0| \le C|p|e^{-c\x}$. \end{proof} \subsection{Solution near a vertex}\label{ssec2_2} In this subsection we construct corner layer functions associated with the vertex $O$. Some notation is required for the constructions. Let $s$ denote the distance along $\G$, measured from $O$, and let $r$ denote the perpendicular distance to a point $x \in S$. Thus, $x \to (s,r)$ is a linear orthogonal map. We also let $\ev_s$ and $\ev_r$ denote the unit vectors along $\G$ and orthogonal to $\G$ respectively, so $x=r\ev_r+s\ev_s$. We denote by $\xb=s\ev_s$ the point of $\G$ that is closest to $x$.
In a similar manner, we define variables $(s^-,r^-)$, so $x=r^-\ev_{r^-}+s^-\ev_{s^-}$, and $\xb^-=s^-\ev_{s^-}$ associated with the side $\G^-$. The variable $s^-$ denotes the distance along $\G^-$, measured from $O$. We will also need stretched variables. We set $\y=x/\e$, $\x=r/\e$, $\s=s/\e$, $\x^-=r^-/\e$, $\s^-=s^-/\e$. These variables are shown in Figure~\ref{fig_fig1}. \begin{figure} \caption{Geometry of the sector $S$} \label{fig_fig1} \end{figure} Using these notations, Section~\ref{ssec2_1} gives functions $\vt_0(\x,s;p)$ and $v_1(\x,s)$ associated with the side $\G$ and functions $\vt_0^-(\x^-\!\!,s^-\!;p)$ and $v_1^-(\x^-\!\!,s^-)$ associated with the side $\G^-$. We also recall the notations in \eqref{v} and use corresponding notations for the side $\G^-$. The function $\vt$ matches the disparity between the boundary conditions of (\ref{1.1b}) and the value of $u_0$ on $\G$, but leaves a rapidly decaying boundary value on $\G^-$. The function $\vt^-$ has a similar behavior, with a rapidly decaying boundary value on $\G$. To deal with these rapidly decaying boundary values we construct functions $\zt_{0}(\y;p)$ and $z_{1}(\y)$, defined in terms of the stretched variable $\y$. The function $\zt_{0}$ is defined to be a bounded solution of the autonomous nonlinear elliptic boundary value problem \begin{equation}\begin{array}{rcl}\label{z0} -\D_\y \zt_{0}+\Bt(O,\zt_{0};p)=0&\quad& \rmin S, \\ \zt_{0}=A:=g(O)-u_0(O) &&\rmon \pd S. \end{array}\end{equation} Here we have $A>0$, by our assumption A4 at the point $O$. We also set $z_{0}(\y)=\zt_{0}(\y;0)$. The existence and properties of $z_{0}$ are given in the following theorem; see \cite[Theorem~2.2]{KK0}.
\def \dist{{\rm dist}} \begin{theorem}\label{T-z0} There is a positive constant $p^*$ such that if $|p| \le p^*$, the problem \eqref{z0} has, for each $p$, a solution $\zt_{0}$ which satisfies $\zt_{0}\le A$ and \begin{equation}\label{T-z0-a} 0<\max\{\vto_0,\vto_0^-\} \le \zt_{0}(\y;p) \le \max\{\vto_0,\vto_0^-\}+C|\eta|^{-1}, \end{equation} and which is an increasing function of $p$. Also, $|\nab\zt_0| $ is bounded in $S$. Finally, there is a constant $C>0$ such that \begin{equation}\label{T-z0-b} \zt_{0}(\y) \le C\LP e^{-\g\x}+e^{-\g\x^-} \RP . \end{equation} \end{theorem} We also consider a function $z_1(\y)$ which satisfies the linear elliptic boundary value problem \beq\label{z1} \begin{array}{c} -\D_\y z_{1}+z_{1}B_t(O,z_{0}) =-\eta\cdot\nabla_x B(O,z_{0}) \;\;\rmin S, \\ z_{1}=\s\, {\textstyle\frac{\pd}{\pd s}}(g-u_0)\bigr|_{x=O} \;\rmon \G,\quad\quad z_{1}=\s^- {\textstyle\frac{\pd}{\,\pd s^-\!}}(g^--u_0)\bigr|_{x=O} \;\rmon \G^-. \end{array} \eeq The functions $\zt_0$ and $z_1$ form a correction $\zt_0+\eps z_1$ to the reduced solution $u_0$ in close proximity to the vertex $O$. To extend it further away from $O$, the corrections $\vt_0+\e v_1$ and $\vt^-_0+\e v_1^-$ to $u_0$ near the sides $\G$ and $\G^-$ are to be invoked as follows. We use the corner functions $\zt_0$ and $z_1$ together with the boundary functions $\vt_0$, $v_1$, $\vt^-_0$, $v_1^-$ to define a related pair of corner functions $\tilde q_0$ and $q_1$, which, rather than $\zt_0$ and $z_1$, will appear in a formal asymptotic expansion of the solution of \eqref{1.1} in the entire $S$; see Sections~\ref{sec3},\,\ref{sec_beta} below. We shall use the following notation. Pick a point $\y \in S$. Having chosen $\y$, the formulas \begin{equation}\label{yrs} \y=\x\ev_r+\s\ev_s = \x^-\ev_{r^-}+\s^-\ev_{s^-} \end{equation} determine numbers $\x,\s,\x^-,\s^-$; see Figure~\ref{fig_fig1}.
With this notation, and using the functions $\zt_{0}$, $z_{1}$ and $\vto_0$, $\vto_0^-$, $\vo_1$, $\vo^-_1$ of \eqref{v0},\,\eqref{v1},\eqref{v}, we define \begin{subequations}\label{qdef} \begin{align} \qt_{0}(\y;p) &= \zt_{0}(\y;p)-\vto_0(\x;p)-\vto_0^-(\x^-;p), \label{qdefa} \\ q_{1}(\y) &= z_{1}(\y) -[\vo_1(\x)+ \s\vo_{0,s}(\x)] -[\vo_1^-(\x^-) + \s^- \vo^-_{0,s^-}(\x^-)], \label{qdefb} \end{align} and furthermore, \beq\label{qdefc} \! \qt(\y;p) = \qt_{0}(\y;p)+\e q_{1}(\y), \;\;\; q_{0}(\y)= \qt_{0}(\y;0),\;\;\; q(\y)=q_{0}(\y)+\e q_{1}(\y). \eeq In these formulas, following the notational conventions of \eqref{v}, we mean \beq\label{qdefd} \vo_{0,s}(\x)={\textstyle\frac{\pd}{\pd s}}v_0(\x,s)\bigr|_{s=0}, \qquad \vo^-_{0,s^-}={\textstyle\frac{\pd}{\,\pd s^-\!}}v^-_0(\x^-\!\!,s^-)\bigr|_{s^-=0}. \eeq \end{subequations} Under this notation, the boundary conditions in (\ref{z1}) become \beq\label{z1_bc} z_{1}=\s\,\vo_{0,s}\quad \rmon \G,\qquad\quad z_{1}=\s^- \vo^-_{0,s^-} \quad \rmon \G^-. \eeq From the above formulas, noting that $\D_\y\qt_0=\D_\y\zt_0-\frac{\partial^2}{\partial\xi^2}\vto_{0}-\frac{\partial^2}{(\partial\xi^-)^2}\vto^-_{0}$, and using \eqref{v0},\,\eqref{z0}, we derive a nonlinear boundary value problem satisfied by $\qt_0$: \begin{subequations}\label{qt0} \begin{align} \D_\y\qt_0 &= \Bt(O,\qt_0+\vto_0+\vto^-_0)-\Bt(O,\vto_0)- \Bt(O,\vto_0^-) , \label{qt0a} \\ \qt_0 &= -\vto_0^- \rmon \G,\qquad\quad \qt_0 = -\vto_0 \rmon \G^-. 
\label{qt0b} \end{align} \end{subequations} Similarly (see Lemma~\ref{bvps-q} below for details), using \eqref{v0}, \eqref{v1} and \eqref{z1}, we formally derive a linear boundary value problem satisfied by $q_1$: \begin{subequations}\label{q1bvp} \begin{align} -\D_\y q_1&+q_{1}B_t(O,z_{0}) =-\y\cdot\nab_x B(O,\cdot)\Bigr|_{\vo_0;\ \vo_0^-}^{z_{0}} \notag\\ &-(\vo_1+\s\vo_{0,s}) \ B_t(O,\cdot)\Bigr|_{\vo_0}^{z_{0}} -(\vo_1^- +\s^-\vo^-_{0,s^-})\ B_t(O,\cdot)\Bigr|_{\vo_0^-}^{z_{0}}, \label{q1bvpa}\\ q_1 = &-(\vo_1^-+\s^-\vo_{0,s^-}^-) \rmon \G, \qquad q_1 = -(\vo_1+\s\vo_{0,s}) \rmon \G^-. \label{q1bvpb} \end{align} \end{subequations} Here we used the notation~\eqref{fabc}. Finally, by formally differentiating relation \eqref{qdefa} and problem \eqref{z0} (or the equivalent problem \eqref{qt0}) with respect to $p$ and invoking (\ref{B_tB}), we formally derive a boundary value problem that is satisfied by $\qt_{0,p}$: \beq\label{qt0pbvp} \begin{array}{c} -\D_\y\qt_{0,p}+\qt_{0,p}\Bt_t(O,\zt_0) = \qt_0 -\vto_{0,p}\ \Bt_t(O,\cdot)\Bigr|^{\zt_0}_{\vto_0} -\vto^-_{0,p}\ \Bt_t(O,\cdot)\Bigr|^{\zt_0}_{\vto^-_0}\ , \\ \qt_{0,p} = -\vto^-_{0,p} \rmon \G,\qquad\quad \qt_{0,p} = -\vto_{0,p} \rmon \G^-. \end{array} \eeq It is shown in \cite[Lemmas~3.6,\ 3.16,\ 3.17]{KK0} that the functions $\qt_0$, $q_1$ and $\qt_{0,p}$ exist and are exponentially decaying in $S$, i.e. there are constants $C_1$ and $c_1$ such that \begin{equation}\label{q_decay} |\qt_0|+|q_1|+|\qt_{0,p}| \le C_1e^{-c_1|\y|} \qquad\rmin S. \end{equation} In view of \eqref{qdefb}, the existence of $q_1$ immediately implies the existence of $z_1$. Similarly, once the existence of a solution to \eqref{qt0pbvp} is proved, an integration shows that this solution is in fact the derivative of $\qt_0$ with respect to $p$. We now derive the boundary value problem satisfied by the function $q_1$. \begin{lemma}\label{bvps-q} The function $q_1$ defined by (\ref{qdefb}) satisfies problem (\ref{q1bvp}).
\end{lemma} \begin{proof} To prove \eqref{q1bvpa}, note that \beq\label{r1} \D_\y q_1 = \D_\y z_1 -(\D_\y \vo_1+ \s\,\D_\y\vo_{0,s}) -(\D_\y \vo^-_1 + \s^-\D_\y \vo^-_{0,s^-}). \eeq Next, using (\ref{qdefd}) and then (\ref{v0a}), we calculate $$ \D_\y\vo_{0,s}={\textstyle\frac{\partial}{\partial s}} v_{0,\xi\xi}\bigr|_{s=0} ={\textstyle\frac{\partial}{\partial s}} B(s\ev_s,v_0)\bigr|_{s=0} =\ev_s\cdot\nab_xB(O,\vo_0)+ \vo_{0,s}B_t(O,\vo_0). $$ Combining this with \eqref{v1a}, yields \beqa \D_\y \vo_1+ \s\,\D_\y\vo_{0,s}&=& [\vo_1B_t(O,\vo_0)+\x\ev_r\cdot \nab_xB(O,\vo_0)]\nonumber\\\nonumber &&\;\;{}+\s[\ev_s\cdot\nab_xB(O,\vo_0)+ \vo_{0,s}B_t(O,\vo_0)]\\ &=&(\vo_1+ \s\vo_{0,s})B_t(O,\vo_0) +\eta\cdot \nab_xB(O,\vo_0), \label{r2} \eeqa where we used $\x\ev_r+\s\ev_s=\y$ from (\ref{yrs}). Similarly, one gets \beq\label{r3} \D_\y \vo^-_1+ \s^-\D_\y\vo^-_{0,s^-}=(\vo^-_1+ \s^-\vo^-_{0,s^-})B_t(O,\vo^-_0) +\eta\cdot \nab_xB(O,\vo^-_0). \eeq Recalling that, by (\ref{z1}), we have $\D_\y z_1=z_1 B_t(O,z_0 ) + \y \cdot \nab_x B(O,z_0)$, where in the right-hand side $z_1$ is replaced by $q_1+(\vo_1+ \s\vo_{0,s})+(\vo^-_1 + \s^- \vo^-_{0,s^-})$, and combining this with (\ref{r1}),\,(\ref{r2}),\,(\ref{r3}), yields \eqref{q1bvpa}. Finally, noting that $\vo_1=0$ on $\G$ and $\vo_1^-=0$ on $\G^-$, and then comparing (\ref{qdefb}) and (\ref{z1_bc}), we immediately get \eqref{q1bvpb}. \end{proof} \section{Asymptotic expansion}\label{sec3} In Section~\ref{ssec2_1} we have defined boundary layer functions $\vt=\vt_{0}+\eps v_{1}$ and $\vt^-=\vt^-_{0}+\eps v_{1}^-$ associated, respectively, with the sides $\G$ and $\G^-$ of $S$, and in Section~\ref{ssec2_2} we have defined corner layer functions $\qt=\qt_{0}+\e q_{1}$ associated with the vertex $O$ on $S$. In the present and next sections, these functions are used to assemble a formal first-order asymptotic expansion and then a perturbed asymptotic expansion for the problem \eqref{1.1}. 
We establish certain properties of the unperturbed and perturbed asymptotic expansions that are used in \cite{KK0}. The proofs involve lengthy formal calculations; carrying them out is the purpose of this paper. The asymptotic expansion $\uasS$ is defined as follows: \beqa \uasS(x) & =&u_0(x)+v(\x,s)+v^-(\x^-,s^-)+q(\y). \label{uas} \eeqa The next lemma shows that the differential operator $F$ applied to this asymptotic expansion leaves a residual of $O(\e^2)$. \begin{lemma}\label{Fuas} For the asymptotic expansion $\uasS(x)$ of (\ref{uas}) one has \begin{subequations}\label{resid} \begin{align} F\uasS&=O(\e^2), \label{resida}\\ \uasS(x)&=g(x)+O(\eps^2) \rmfor x \in \pd S. \label{residb} \end{align} \end{subequations} \end{lemma} \begin{proof} (i) We start by establishing \beq\label{B(x,v)} -\eps^2\D v+B(x,v)=O(\eps^2), \qquad -\eps^2\D v^-+B(x,v^-)=O(\eps^2). \eeq The first bound here is obtained by estimating $B(x,v)$ as follows. Fix $\xi$ and $s$; then $B(x,v)$ is a function of $\eps$, i.e. $B(x,v)=\Gc(\e)$, where \begin{equation*} \begin{aligned} \Gc(\e)=B\big(\xb+\e\x\ev_r,v_0 +\e v_1\big) \quad\mbox{with} \; \xb=s\ev_s,\;v_0=v_0(\x,s),\;v_1=v_1(\x,s). \end{aligned} \end{equation*} Expand $\Gc$ in a Taylor series around $\e=0$ to obtain \beq\label{G_eps} \Gc(\e)=\Gc(0)+\e \Gc'(0)+{\textstyle\half}\e^2\Gc''(\e^*) \eeq with $0<\e^*<\e$. A calculation shows that \begin{equation*} \begin{aligned} \Gc'(\e)&=\x \ev_r\cdot\nab_x B(\xb+\e\x\ev_r,v_0 +\e v_1) + v_1B_t(\xb+\e\x\ev_r,v_0 +\e v_1), \\ \Gc''(\e)&=\x^2 \ev_r^T[\nab_x^2B(\xb+\e\x\ev_r,v_0+\e v_1)]\ev_r \\ &\quad + 2v_1\x \ev_r\cdot\nab_x B_t(\xb+\e\x\ev_r,v_0+\e v_1) + v_1^2B_{tt}(\xb+\e\x\ev_r,v_0 +\e v_1).
\\ \end{aligned} \end{equation*} Hence \begin{equation}\label{G_eps_0} \begin{aligned} \Gc(0) &= B(\xb,v_0)={\textstyle\frac{\partial^2}{\partial\x^2}}v_{0}, \\ \Gc'(0)&=\x \ev_r\cdot\nab_x B(\xb,v_0 ) + v_1B_t(\xb,v_0 ) ={\textstyle\frac{\partial^2}{\partial\x^2}}v_{1}, \end{aligned} \end{equation} where we also used \eqref{v0a} and \eqref{v1a}. Applying \eqref{DxB}, we get \begin{equation*} |\nab_x^2B(\xb+\e\x\ev_r,v_0+\e v_1)| \le C|v_0+\e v_1|. \end{equation*} By Lemma~\ref{l-v}, $v_0$ and $v_1$ are exponentially decaying in $\x$, which yields $\x^2|v_0+\e v_1| \le C$. Hence the first term in the formula for $\Gc''(\e)$ is bounded. The other 2 terms are bounded for a similar reason, so $|\Gc''(\e)| \le C$, where $C$ is independent of $\xi$ and $s$. Combining this with (\ref{G_eps}) and (\ref{G_eps_0}), yields \begin{equation*} B(x,v)=\Gc(\e) = {\textstyle\frac{\partial^2}{\partial\x^2}}(v_{0} + \e v_1) +O(\e^2) ={\textstyle\frac{\partial^2}{\partial\x^2}}v +O(\e^2). \end{equation*} As $\eps^2\D v={\textstyle\frac{\partial^2}{\partial\x^2}}v+O(\eps^2)$, the first bound in (\ref{B(x,v)}) is established. The second bound is obtained similarly. (ii) To show \eqref{resida}, we calculate \beqa F\uasS &=& -\e^2\D[u_0+v+v^-+q]+b(x,u_0+v+v^-+q) \nonumber\\\label{ineqsb1} &=& B(x,\cdot)\Big|^{v+v^-+q}_{v;\ v^-} -\D_\y q + O(\e^2), \eeqa where we used (\ref{B_tB}) and \eqref{B(x,v)}, and also the notation (\ref{fabc}). Fix a point $\y \in S$; then the first term in \eqref{ineqsb1} is a function of $\eps$, which we denote $\Fc(\e)$. To be more precise, having chosen $\y$, the formulas \eqref{yrs} determine fixed numbers $\x,\s,\x^-,\s^-$. 
With the understanding that \begin{subequations}\label{underst} \begin{equation}\label{underst_a} \begin{aligned} &v =v(\e)= v_0(\x,\e\s)+\e v_1(\x,\e\s), \\ &v^- =v^-(\e)= v^-_0(\x^-,\e\s^-)+\e v^-_1(\x^-,\e\s^-), \\ &q =q(\e)= q_0(\y)+\e q_1(\y), \end{aligned} \end{equation} we define a function $\Fc(\e)$ by \begin{equation}\label{underst_b} \Fc(\e)=B(\e\y,\cdot)\Bigr|_{v;\ v^-}^{v+v^-+q}. \end{equation} \end{subequations} In view of \eqref{ineqsb1}, to prove \eqref{resida}, we need to show that \begin{equation*} \begin{aligned} \Fc(\e)-\D_\y [q_0+\e q_1] = O(\e^2) \qquad\rmin S. \end{aligned} \end{equation*} Thus we must show that there is a number $C$, independent of $\y$, such that \begin{subequations}\label{ineqsb2} \begin{align} \Fc(0) &= \D_\y q_0,\label{ineqsb2a} \\ \Fc'(0) &= \D_\y q_1,\label{ineqsb2b} \\ |\Fc''(\e)| &\le C. \label{ineqsb2c} \end{align} \end{subequations} From the definition (\ref{underst}) of $\Fc$, we have $v\bigr|_{\eps=0}=\vo_0$, $v^-\bigr|_{\eps=0}=\vo^-_0$, and hence \begin{equation*} \begin{aligned} \Fc(0)=B(O,\vo_0+\vo_0^-+q_0)-B(O,\vo_0)-B(O,\vo^-_0), \end{aligned} \end{equation*} so \eqref{qt0a} gives \eqref{ineqsb2a}. A calculation using (\ref{underst_a}) yields \begin{equation}\label{dv_de} \frac{dv}{d\e}=v_1 +\s v_{0,s}+\e\s v_{1,s}\,, \qquad \frac{d^2v}{d\e^2}=2\s v_{1,s} +\s^2 v_{0,ss}+\e\s^2 v_{1,ss}\,, \end{equation} similar relations for $v^-$, and also $\frac{dq}{d\e}=q_1$, $\frac{d^2q}{d\e^2}=0$. As $|\sigma|\le |\eta|$ and $|\sigma^-|\le|\eta|$, invoking Lemma~\ref{l-v} and (\ref{q_decay}), for $k=0,1,2$ we get \begin{equation}\label{dv_de_est} \bigl|\frac{d^kv}{d\e^k}\bigr|\le C(1+|\eta|^k)e^{-c\xi},\; \bigl|\frac{d^kv^-}{d\e^k}\bigr|\le C(1+|\eta|^k)e^{-c\xi^-},\; \bigl|\frac{d^kq}{d\e^k}\bigr|\le Ce^{-c|\eta|}.
\end{equation} We now calculate \begin{equation*} \begin{aligned} \Fc'(\e) &=\y\cdot\nab_x B(\e\y,\cdot)\Bigr|_{v;\ v^-}^{v+v^-+q}\\ &+\frac{dq}{d\e}B_t(\e\y,\cdot)\Bigr|_{v+v^-+q} +\frac{dv}{d\e}B_t(\e\y,\cdot)\Bigr|_{v}^{v+v^-+q} +\frac{dv^-\!\!\!}{d\e}B_t(\e\y,\cdot)\Bigr|_{v^-}^{v+v^-+q}. \end{aligned} \end{equation*} Hence, using the first relation in (\ref{dv_de}) and its analogue for $v^-$, we get \begin{equation*} \begin{aligned} \Fc'(0)&=\y\cdot\nab_x B(O,\cdot)\Bigr|_{\vo_0;\ \vo_0^-}^{\vo_0+\vo_0^-+q_0}\\ &+q_1B_t(O,\vo_0+\vo_0^-+q_0)\\ &+(\vo_1+\s\vo_{0,s})B_t(O,\cdot)\Bigr|_{\vo_0}^{\vo_0+\vo_0^-+q_0} +(\vo^-_1+\s^-\vo^-_{0,s})B_t(O,\cdot)\Bigr|_{\vo_0^-}^{\vo_0+\vo_0^-+q_0}. \end{aligned} \end{equation*} Recalling that $\vo_0+\vo_0^-+q_0=z_0$ and inspecting \eqref{q1bvpa} we see that (\ref{ineqsb2}b) holds. A formula for the quantity $\Fc''(\e)$ is obtained by a lengthy but straightforward computation, which gives \begin{equation*} \begin{aligned} &\Fc''(\e) =\y^T \Bigl(\nab^2_xB(\e\y,\cdot)\Bigr|_{v;\ v^-}^{v+v^-+q}\Bigr)\y\\ &+\y\cdot\Bigl(\frac{dq}{d\e}\nab_xB_t(\e\y,\cdot)\Bigr|_{v+v^-+q} +\frac{dv}{d\e}\nab_xB_t(\e\y,\cdot)\Bigr|_{v}^{v+v^-+q} \!\!+\frac{dv^-\!\!\!}{d\e}\nab_xB_t(\e\y,\cdot)\Bigr|_{v^-}^{v+v^-+q}\Bigr)\\ &+\frac{dq}{d\e} \Bigl(\y\cdot\nab_x B_t(\e\y,\cdot)\Bigr|_{v+v^-+q} +\frac{d(v+v^-+q)}{d\e}B_{tt}(\e\y,\cdot)\Bigr|_{v+v^-+q}\Bigr)\\ &+\frac{dv}{d\e} \Bigl(\y\cdot\nab_x B_t(\e\y,\cdot)\Bigr|_{v}^{v+v^-+q} \!\!+\frac{dv}{d\e}B_{tt}(\e\y,\cdot)\Bigr|_{v}^{v+v^-+q} \!\!+\frac{d(v^-+q)}{d\e}B_{tt}(\e\y,\cdot)\Bigr|_{v+v^-+q}\Bigr)\\ &+\frac{dv^-\!\!\!}{d\e} \Bigl(\y\cdot\nab_x B_t(\e\y,\cdot)\Bigr|_{v^-}^{v+v^-+q} \!\!+\frac{dv^-\!\!\!}{d\e}B_{tt}(\e\y,\cdot)\Bigr|_{v^-}^{v+v^-+q} \!\!+\frac{d(v+q)}{d\e}B_{tt}(\e\y,\cdot)\Bigr|_{v+v^-+q}\Bigr)\\ &+\frac{d^2v}{d\e^2}B_t(\e\y,\cdot)\Bigr|_{v}^{v+v^-+q} +\frac{d^2v^-\!\!\!}{d\e^2}B_t(\e\y,\cdot)\Bigr|_{v^-}^{v+v^-+q}. 
\end{aligned} \end{equation*} From inspection of this formula it is seen that each term in the formula is of one of three types, which we refer to as type I, type II, or type III. We shall invoke (\ref{dv_de_est}) to estimate them. The only term of type I is in the first line of this formula and is clearly $|\eta|^2O(|q|+|vv^-|)$, by (\ref{DxB_abc}), and thus $O(1)$ by (\ref{dv_de_est}). The terms of type II involve, for $l=0,1$ and $k=1,2$, the quantities \begin{equation*} \begin{aligned} \nab_x^l\, {\textstyle\frac{\partial^k}{\partial t^k}}\,B(\e\y,\cdot)\Bigr|_{v}^{v+v^-+q} &=O(v^-+q)=O(e^{-c\xi^-}),\\ \nab_x^l\, {\textstyle\frac{\partial^k}{\partial t^k}}\,B(\e\y,\cdot)\Bigr|_{v^-}^{v+v^-+q} &=O(v+q)=O(e^{-c\xi}), \end{aligned} \end{equation*} which are always multiplied by $(1+|\eta|^2)O( e^{-c\xi})$ or $(1+|\eta|^2)O( e^{-c\xi^-})$, respectively. Thus the terms of type II are $O(1)$. Finally, the terms of type III involve, for $l=0,1$ and $k=1,2$, the quantity $$ \nab_x^l\, {\textstyle\frac{\partial^k}{\partial t^k}}\,B(\e\y,\cdot)\Bigr|_{v+v^-+q} =O(1), $$ which is always multiplied by $(1+|\eta|^2)O( e^{-c|\eta|}+e^{-c\xi}e^{-c\xi^-})$. As above, one sees that terms of type III are bounded. This completes the proof of (\ref{ineqsb2c}), and therefore (\ref{resida}). (iii) It is sufficient to prove \eqref{residb} for $x=\xb\in\G$, as the other case of $x\in\G^-$ is similar. Let $\xb \in \G$ be given. Define $s,\x^-$ and $s^-$ by the formulas \begin{equation*} \xb=s\ev_s=\e\x^-\ev_{r^-}+s^-\ev_{s^-}. \end{equation*} By (\ref{v0b}),\,(\ref{v1b}), we have $v_0(0,s)=g(\xb)-u_0(\xb)$ and $v_1(0,s)=0$; therefore $$ (u_0+v)\bigr|_{\xb}=u_0(\xb)+[v_0(0,s)+\e v_1(0,s)] =g(\xb). $$ Thus it remains to show that $(v^-+q)\bigr|_{\xb}=O(\eps^2)$. Indeed, by (\ref{qt0b}),\,(\ref{q1bvpb}), we have $$ v^-+q =[v_0^--\vo_0^-] +\e [v_1^--(\vo_1^- +\s^- \vo_{0,s^-}^-)]=O(\eps^2).
$$ In the last step here we have invoked the formulas \begin{equation*} \begin{aligned} |v_0^--\vo_0^- -\e \s^- \vo_{0,s^-}^-|&= |v_0^-(\x^-,s^-)-v_0^-(\x^-,0) -\e \s^-v_{0,s^-}^-(\x^-,0)| \\ &= {\textstyle\half} \e^2 (\s^-)^2 |v_{0,s^-s^-}^-(\x^-,\hat s^-)|=O(\eps^2), \\ |v_1^--\vo_1^-|&= |v_1^-(\x^-,s^-) - v_1^-(\x^-,0)|\\ &= \e\s^-| v_{1,s^-}^-(\x^-,\hat s^-)| =O(\eps), \end{aligned} \end{equation*} which are obtained using the exponential decay of $v_0^-$ and $v_1^-$ in $\xi^-$ and noting that $\sigma^-=(\cot\omega)\xi^-$ on the side $\G$, where $\omega$ is the angle at the apex. \end{proof} \section{Perturbed asymptotic expansion}\label{sec_beta} The perturbed version $\b_S$ of the asymptotic expansion $\uasS$ of (\ref{uas}) is defined as follows: \beqa \bS(x;p)=u_0(x)+\vt(\x,s;p)+\vt^-(\x^-,s^-;p)+\qt(\y;p)+ \th p,\label{beta} \eeqa where a value for the positive parameter $\theta$ and a range of values for $p$ will be chosen below. Comparing (\ref{beta}) with (\ref{uas}) yields $\bS(x;0)=\uasS(x)$ and, furthermore, an alternative equivalent representation \begin{subequations}\label{beta_VQ} \beq\label{beta_VQ_a} \bS(x;p)=\uasS(x)+V(\xi,s;p)+V^-(\xi^-\!\!,s^-\!;p)+Q(\eta;p) + \th p, \eeq where $V=\vt-v$, $V^-=\vt^--v^-$, $Q=\qt-q$, and therefore \beq\label{VQ} V=\vt_0-v_0,\qquad V^-=\vt_0^--v_0^-,\qquad Q=\qt_0-q_0. \eeq \end{subequations} Note that for $V$, $V^-$ and $Q$ here, by the exponential-decay estimates for $\frac{\partial}{\partial p}\vt_0$ and $\frac{\partial}{\partial p}\qt_0$ from Lemma~\ref{l-v} and (\ref{q_decay}), we have \beq\label{VQ_decay} \hspace{-1pt} (1+\xi)|V|\le Cp,\quad (1+\xi^-)|V^-|\le Cp,\quad (1+|\eta|)|Q|\le Cp e^{-c|\eta|} \le Cp.
\eeq Furthermore, since $|\eta|\le C(\x+\x^-)$, invoking the exponential-decay estimates for $\frac{\partial}{\partial p}\vt_0$ and $\frac{\partial^2}{\partial p\,\partial s}\vt_0$ from Lemma~\ref{l-v} yields a more elaborate estimate \beq\label{VQ_decay_b} (1+|\eta|e^{-c\xi^-})\,(|V|+|\textstyle\frac{\partial V}{\partial s}|)\le Cp, \eeq and a similar estimate involving $V^-$. In the remainder of this section we establish some inequalities that involve the perturbed asymptotic expansion $\b_S$. In particular, the inequalities of Lemmas~\ref{lem_monotone} and~\ref{lem_Fbeta} are used in \cite{KK0} to construct sub- and super-solutions to our nonlinear boundary value problem. \begin{lemma}\label{lem_monotone} For the function $\bS$ of (\ref{beta}) we have $\bS=\uasS+O(p)$. Furthermore, for some sufficiently small $\eps^*>0$, if $p\ge0$ and $\eps\le\eps^*$, then for all $x\in S$ we have \beq\label{monotone} \bS(x;-p)\le \uasS(x)-{\textstyle\frac12}\theta p,\qquad \uasS(x)+{\textstyle\frac12}\theta p \le \bS(x;p). \eeq \end{lemma} \begin{proof} The assertion $\bS=\uasS+O(p)$ immediately follows from (\ref{beta_VQ}). Furthermore, by (\ref{beta_VQ}), the bound for $\bS(x;p)$ in the remaining assertion (\ref{monotone}) can be rewritten as \beq\label{monotone_aux} V+V^-+Q+{\textstyle\frac12}\theta p\ge 0\qquad\mbox{for}\;\;p\ge 0. \eeq By (\ref{VQ_decay}), there is a sufficiently large $\bar C=\bar C(\theta)$ such that if $|\eta|\ge\bar C$, then $|Q|\le p\,C e^{-c|\eta|}\le \half\theta p$. Combining this with $V\ge0$ and $V^-\ge 0$, which follow from the monotonicity of $\vt_0$ and $\vt_0^-$ in $p$, established in (\ref{vt_monotone}), we get (\ref{monotone_aux}). Now let $|\eta|<\bar C$. Then $|s|,\, |s^-|<\eps\bar C$.
Invoking (\ref{qdefa}), we have $$ Q=\qt_{0}-q_0 = (\zt_{0}-z_0)-(\vto_0-\vo_0)-(\vto_0^--\vo_0^-) =(\zt_{0}-z_0)-\mathring{V}-\mathring{V}^-, $$ and therefore $ V+V^-+Q=(\zt_{0}-z_0)+I+I^-, $ where $I=V-\mathring{V}$ and $I^-=V^--\mathring{V}^-$, and, as usual, a small circle above a function name indicates that in the argument of the function we have set $s=0$. For $I$, using (\ref{VQ_decay_b}), we get $|I|=|s\frac{\partial V}{\partial s}|\le C|s|p\le C\bar C\eps p\le \frac14 \theta p$. Similarly, $|I^-|\le \frac14 \theta p$. Since, by Theorem~\ref{T-z0}, we also have $\zt_{0}-z_0\ge 0$, we again get $V+V^-+Q\ge -\half \theta p$. Thus we have obtained (\ref{monotone_aux}), and therefore the bound for $\bS(x;p)$ in (\ref{monotone}). The bound for $\bS(x;-p)$ in (\ref{monotone}) is obtained similarly. \end{proof} Next, to estimate $F\b_S$, we prepare two lemmas. \begin{lemma} For $Q=Q(\eta;p)=\qt_0-q_0$ we have \begin{equation}\label{t2} \e^2\triangle Q= B(x,\cdot)\bigr|^{\qt+\vt+\vt^-}_{\vt;\ \vt^-} -B(x,\cdot)\bigr|^{q+v+v^-}_{v;\ v^-} -p q_0 +O(\e^2+p^2). \end{equation} \end{lemma} \begin{proof} From (\ref{qt0a}), also using (\ref{B_tB}) and $\qt_0=q_0+Q=q_0+O(p)$, we get \beqa \hspace{-0.5cm}\e^2\D Q&=&\D_\eta (\qt_0-q_0)\nonumber\\ &=&B(O,\cdot)\bigr| ^{\qt_0+\vto_0+\vto^-_0 }_{\vto_0;\ \vto_0^- } -B(O,\cdot)\bigr|^{q_0+\vo_0+\vo^-_0}_{ \vo_0;\ \vo_0^-} -p q_0+O(p^2). \label{t1} \eeqa Recalling the definitions (\ref{VQ}), we introduce the function $$ {\cal H}(\eps,\t):=B(\e \eta,\cdot)\bigr|^{q_0+v_0+v^-_0 +\eps[q_1+v_1+v_1^-]+\t [Q+ V+V^-]} _{ v_0+\eps v_1+\t V;\;\; v^-_0+\eps v_1^-+\t V^-}, $$ in which we write $v_{k}=v_{k}(\xi,\e\s)$ and $v_{k}^-=v_{k}^-(\xi^-\!\!,\e\s^-)$ for $k=0,1$, and also $V=V(\xi,\e\s)$, $V^-=V^-(\xi^-\!\!,\e\s^-)$, $Q=Q(\y)$.
The function ${\cal H}$ is defined so that, using \eqref{t1}, \begin{equation}\label{t3} \e^2\D Q={\cal H}(0,1)-{\cal H}(0,0)-p q_0+O(p^2), \end{equation} and the assertion \eqref{t2} may be written as \begin{equation}\label{t4} \e^2\D Q={\cal H}(\eps,1)-{\cal H}(\eps,0)-p q_0+O(\eps^2+p^2). \end{equation} To check the formulas \eqref{t3} and \eqref{t4} we calculate \begin{equation*} \begin{aligned} {\cal H}(0,0)&=B(O,\cdot)\bigr|^{q_0+\vo_0+\vo^-_0} _{\vo_0;\;\; \vo^-_0}, \\ {\cal H}(0,1)&=B(O,\cdot)\bigr|^{q_0+\vo_0+\vo^-_0 + [Q+ \mathring{V}+\mathring{V}^-]}_{\vo_0+\mathring{V}\,;\;\; \vo^-_0+\mathring{V}^-} =B(O,\cdot)\bigr|^{\qt_0+\vto_0+\vto^-_0}_{ \vto_0;\;\; \vto^-_0} ,\\ {\cal H}(\e,0)&=B(\e\y,\cdot)\bigr|^{q_0+v_0+v^-_0 +\e[q_1+v_1+v_1^-]}_{v_0+ \e v_1;\;\; v^-_0+ \e v_1^-} =B(\e\y,\cdot)\bigr|^{q+v+v^-}_{ v;\;\; v^-}, \\ {\cal H}(\e,1)&=B(\e\y,\cdot)\bigr|^{q_0+v_0+v_0^- +\e[q_1+v_1+v_1^-]+[Q+V+V^-]} _{v_0+ \e v_1+V;\;\; v^-_0+ \e v_1^-+V^-} =B(\e\y,\cdot)\bigr|^{\qt+\vt+\vt^-}_{ \vt;\;\; \vt^-} . \end{aligned} \end{equation*} Here, as usual in our notation, a small circle above a function name indicates that in the argument of the function we have set $s=0$; in particular, $\mathring{V}=\vto_0-\vo_0$ and $\mathring{V}^-=\vto^-_0-\vo^-_0$. Hence \eqref{t3} is indeed equivalent to \eqref{t1} and \eqref{t4} is equivalent to \eqref{t2}. To show that \eqref{t3} implies \eqref{t4} we use the mean value theorem for the second difference and write the discrepancy between these two formulas as \begin{equation*} \begin{aligned} \Hc(\e,1)-\Hc(\e,0)-\Hc(0,1)+\Hc(0,0) = \e{\pd^2 \Hc \over \pd\e\,\pd\t}(\e^*,\t^*). \end{aligned} \end{equation*} Now it suffices to show that $|{\pd^2 \Hc \over \pd\e\,\pd\t}|\le Cp$. Then the discrepancy between the two formulas for $\e^2\triangle Q$ is bounded by $C\e|p|$, which yields \eqref{t4}, and therefore \eqref{t2}. 
To get the desired estimate for ${\pd^2 \Hc \over \pd\e\,\pd\t}$, we first evaluate $$ {\pd\Hc \over \pd\t}(\eps,\t)= Q\,{\cal A} +V\,{\cal B}+V^-{\cal B}^-, $$ where \beqann {\cal A}&=&B_t(\e\y,\cdot)\bigr|^{q_0+v_0+v^-_0+\eps[q_1+v_1+v_1^-]+\t[Q+V+V^-]}, \\ {\cal B}&=&B_t(\e\y,\cdot)\bigr|^{q_0+v_0+v^-_0 +\eps[q_1+v_1+v_1^-]+\t[Q+V+V^-]}_{ v_0+\eps v_1+\t V}, \\ {\cal B}^-\!\!\!\!&=&B_t(\e\y,\cdot)\bigr|^{q_0+v_0+v^-_0 +\eps[q_1+v_1+v_1^-]+\t[Q+V+V^-]}_{v^-_0+\eps v_1^-+\t V^-}. \eeqann To estimate ${\pd^2\Hc \over \pd\e\pd\t}$, we show that each of ${\pd(Q\,{\cal A}) \over \pd\e}$, ${\pd(V\,{\cal B}) \over \pd\e}$ and ${\pd(V^-{\cal B}^-) \over \pd\e}$ is $O(p)$. For ${\pd(Q\,{\cal A}) \over \pd\e}$, since $Q=Q(\y)$ does not depend on $\e$, we have ${\pd(Q\,{\cal A}) \over \pd\e}=Q{\pd{\cal A} \over \pd\e}$. A calculation then shows that $$ \begin{array}{l} \displaystyle {\pd{\cal A} \over \pd\e}= \y\cdot\nab_x B_t\\ \displaystyle\quad{}+\big\{[q_1+v_1+v_1^-] +\s {\textstyle\frac{\partial}{\partial s}}(v_{0}+\e v_{1}+\t V) +\s^-\!{\textstyle\frac{\partial}{\partial s^-\!\!}}(v^-_{0}+\e v^-_{1}+\t V^-)\big\}B_{tt}, \end{array} $$ where the terms $\nab_x B_t$ and $B_{tt}$ are computed at the point $(\e\y,{q_0+v_0+v^-_0}+\eps[q_1+v_1+v_1^-]+\t[Q+V+V^-])$. Recalling that $|\sigma|\le |\eta|$ and $|\s^-|\le |\eta|$, we get $|{\pd(Q\,{\cal A}) \over \pd\e}| \le C(1+|\eta|)|Q|\le Cp$, where we also used (\ref{VQ_decay}).
To estimate ${\pd(V\,{\cal B}) \over \pd\e}$, another tedious calculation gives $$ \begin{array}{l}\displaystyle {\pd{\cal B} \over \pd\e}= \y\cdot\nab_x B_t(\e\y,\cdot)\bigr|^{q_0+v_0+v^-_0 +\eps[q_1+v_1+v_1^-]+\t[Q+V+V^-]}_{ v_0+\eps v_1+\t V}\\\displaystyle {}+\big\{v_1 +\s {\textstyle\frac{\partial}{\partial s}}(v_{0}+\e v_{1}+\t V) \big\}B_{tt}(\e\y,\cdot)\bigr|^{q_0+v_0+v^-_0 +\eps[q_1+v_1+v_1^-]+\t[Q+V+V^-]}_{ v_0+\eps v_1+\t V}\\\displaystyle {}+\big\{[q_1+v_1^-]\\\displaystyle \quad\;\;{}+\s^-\!{\textstyle\frac{\partial}{\partial s^-\!\!}}(v^-_{0}+\e v^-_{1}+\t V^-)\big\} B_{tt} (\e\y,\cdot)\bigr|^{q_0+v_0+v^-_0 +\eps[q_1+v_1+v_1^-]+\t[Q+V+V^-]}. \end{array} $$ Now invoking Lemma~\ref{l-v} and (\ref{q_decay}), we observe that $|{\cal B}|\le Ce^{-c\xi^-}$ and $|{\pd{\cal B} \over \pd\e}| \le C(1+|\y|)e^{-c\xi^-}$. Combining this with ${\pd V \over \pd\e}=\s {\textstyle\frac{\partial V}{\partial s}}$, where $|\s|\le|\eta|$, we have $|{\pd(V\,{\cal B}) \over \pd\e}|\le C(1+|\y|)e^{-c\xi^-}(|V|+|\frac{\partial}{\partial s} V|)$. By (\ref{VQ_decay_b}), this yields the desired estimate $|{\pd(V\,{\cal B}) \over \pd\e}|\le Cp$. A similar argument gives $|{\pd(V^-{\cal B}^-) \over \pd\e}|\le C p$. Thus, we have shown that each of the three components in ${\pd^2\Hc \over \pd\e\pd\t}$ is bounded by $Cp$, which completes the proof. \end{proof} For $F\bS$ we get the following preliminary result. \begin{lemma}\label{L=Fbet} For the function $\bS$ of (\ref{beta}) we have $$ F\bS= \theta p\, b_u(x,u_0)+p \,[1+\theta\lambda(x)]\,(v_0+v_0^-+q_0)+O(\eps^2+p^2), $$ where $\lambda(x):=b_{uu}(x,u_0+\vartheta [v_0+v_0^-+q_0])$ with some $\vartheta=\vartheta(x)\in(0,1)$.
\end{lemma} \begin{proof} As from Lemma~\ref{Fuas} we have $F\uasS=O(\e^2)$, in view of (\ref{beta_VQ}),\,(\ref{beta}) and (\ref{uas}), we get \begin{equation*} \begin{aligned} F\bS &= F\bS-F\uasS+O(\e^2) \\& = -\e^2\D(\bS-\uasS)+b(x,\cdot)\Bigr|^{\bS}_{\uasS}+O(\e^2)\\& = -\e^2\D(V+V^-+Q)+B(x,\cdot)\Bigr|^{\vt+\vt^-+\qt+\theta p}_{v+v^-+q}+O(\e^2). \end{aligned} \end{equation*} By (\ref{Dw0ineq}) and its analogue for $v^-$, we readily have $$ -\e^2\D (V+V^-) = -B(x,\cdot)\Bigl|^{\vt}_{v} \!-B(x,\cdot)\Bigl|^{\vt^-}_{v^-} +p (v_0+v_0^-)+O(\e^2+p^2). $$ Since (\ref{t2}) can be rewritten as $$ -\e^2\triangle Q= -B(x,\cdot)\bigr|^{\qt+\vt+\vt^-}_{q+v+v^-} +B(x,\cdot)\Bigl|^{\vt}_{v} +B(x,\cdot)\Bigl|^{\vt^-}_{v^-} +p q_0 +O(\e^2+p^2), $$ we now arrive at \beqann F\bS &\!\!\!\!=\!\!\!& -B(x,\cdot)\bigr|^{\qt+\vt+\vt^-}_{q+v+v^-}\!+p(v_0+v_0^-+q_0) +B(x,\cdot)\Bigr|^{\vt+\vt^-+\qt+\theta p}_{v+v^-+q}\!\!+O(\e^2+p^2)\\ &\!\!\!\!=\!\!\!& B(x,\cdot)\Bigr|^{\vt+\vt^-+\qt+\theta p}_{\vt+\vt^-+\qt}+p(v_0+v_0^-+q_0) +O(\e^2+p^2). \eeqann Note that (\ref{v}),\,(\ref{VQ_decay}) imply that $\vt+\vt^-+\qt=v_0+v_0^-+q_0+O(\eps+p)$. Hence \beqann B(x,\cdot)\Bigr|^{\vt+\vt^-+\qt+\theta p}_{\vt+\vt^-+\qt}&=& \theta p\,[B_t(x,v_0+v_0^-+q_0)+O(\eps+p)]\\ &=&\theta p\,[B_t(x,0)+\lambda(x)\,(v_0+v_0^-+q_0)+O(\eps+p)]. \eeqann Here, by (\ref{B_tB}), one has $B_t(x,0)=b_u(x,u_0)$, and $\lambda(x)={B_{tt}(x,\vartheta[v_0+v_0^-+q_0])}$ ${{}=b_{uu}(x,u_0+\vartheta [v_0+v_0^-+q_0])}$, as in the statement of this lemma. Combining these formulas, we complete the proof. \end{proof} We are now prepared to establish our main result for $F\bS$. 
\begin{lemma}\label{lem_Fbeta} There are positive numbers $\theta$, $\eps^*$, $p^*$ and $c_1$ such that with $\eps\le\eps^*$ and $|p| \le p^*$, for the function $\bS$ of (\ref{beta}) one has \beqann F\b_{S} &\ge& \;\;\,\,{\textstyle\half}\theta\g^2\,p\,\,-c_1\e^2 \quad\qquad\;\;\ \!\!\rmfor p>0, \\ F\b_{S} &\le& -{\textstyle\half}\theta\g^2|p|+c_1\e^2 \quad\qquad\;\;\rmfor p<0. \eeqann \end{lemma} \begin{proof} By (\ref{q_decay}), one has $|q_0| \le Ce^{-c_1|\y|}$. Since $v_0 \ge 0$ and $v_0^- \ge 0$, it then follows that $v_0+v_0^-+q_0 \ge -|q_0|\ge -Ce^{-c_1|\y|}$ and therefore \begin{equation}\label{****} v_0+v_0^-+q_0 \ge -C\i\e|\ln\e|, \end{equation} provided that $|\y| \ge c_1\i|\ln\e|$ and $\e^*<e\i$ so that $|\ln\e|>1$. Furthermore, \eqref{****} also holds, with possibly a different constant $C$, when $|\y| \le c_1\i|\ln\e|$. Indeed, by (\ref{qdefa}),\,(\ref{T-z0-a}), we have $v_0+v_0^-+q_0 =z_0+(v_0-\vo_0)+(v_0^--\vo_0^-) $, where $z_0\ge0$ and, by (\ref{v}), $|v_0-\vo_0|\le C|s|$, $|v^-_0-\vo^-_0|\le C|s^-|$. Combining these observations with $|s|+|s^-|=\eps(|\s|+|\s^-|)\le 2\eps|\eta|$, we obtain (\ref{****}) for $|\y| \le c_1\i|\ln\e|$. Thus we have (\ref{****}) everywhere in $S$. Next, choose the parameter $\theta$ in the definition (\ref{beta}) of $\bS$ sufficiently small so that $0<\theta\le|\lambda(x)|^{-1}$, where $\lambda(x)$ is from Lemma~\ref{L=Fbet}, and thus $[1+\theta\lambda]\ge0$. Now from Lemma~\ref{L=Fbet} and \eqref{****} we obtain, for $p>0$ and some constants $C'$ and $C''$, $$ F\bS\ge \theta p \,b_u(x,u_0)-C'p\,\eps|\ln\eps|-C''(\eps^2+p^2). $$ Since from our assumption A1 we have $b_u(x,u_0)\ge \gamma^2>0$, by choosing $\e^*$ and $p^*$ sufficiently small we get the assertion of the lemma in the case $p>0$. The case $p<0$ is similar. \end{proof} \section*{Conclusion} In this note we have established four results, Lemmas~\ref{bvps-q}, \ref{Fuas}, \ref{lem_monotone} and~\ref{lem_Fbeta}, whose proofs involve lengthy calculations.
These results are used in \cite{KK0} to construct sub- and super-solutions to a nonlinear boundary value problem of type (\ref{1.1}) posed in a polygonal domain. \end{document}
\begin{document} \title{LeHDC\xspace: Learning-Based Hyperdimensional Computing Classifier} \author{Shijin Duan} \email{[email protected]} \affiliation{ \institution{Northeastern University} \city{Boston} \state{MA} \country{USA} } \author{Yejia Liu} \email{[email protected]} \affiliation{ \institution{UC Riverside} \city{Riverside} \state{CA} \country{USA} } \author{Shaolei Ren} \email{[email protected]} \affiliation{ \institution{UC Riverside} \city{Riverside} \state{CA} \country{USA} } \author{Xiaolin Xu} \email{[email protected]} \affiliation{ \institution{Northeastern University} \city{Boston} \state{MA} \country{USA} } \begin{abstract} Thanks to its tiny storage and efficient execution, hyperdimensional computing (HDC) is emerging as a lightweight learning framework on resource-constrained hardware. Nonetheless, the existing HDC training relies on various heuristic methods, significantly limiting the inference accuracy. In this paper, we propose a new HDC framework, called LeHDC\xspace, which leverages a principled learning approach to improve the model accuracy. Concretely, LeHDC\xspace\ maps the existing HDC framework into an equivalent Binary Neural Network architecture, and employs a corresponding training strategy to minimize the training loss. Experimental validation shows that LeHDC\xspace\ outperforms previous HDC training strategies and improves the inference accuracy by over 15\% on average compared to the baseline HDC. \end{abstract} \maketitle \section{Introduction} \label{sec:introduction} Brain-inspired hyperdimensional computing (HDC) represents samples by projecting them onto extremely high-dimensional vectors, i.e., \textit{hypervectors} \cite{kanerva2009hyperdimensional}. As an emerging method, HDC is a promising alternative to conventional machine learning models such as deep neural networks (DNNs), with less storage usage and higher efficiency.
Although HDC is not meant to replace DNNs on all complex classification tasks, it indeed shows impressive performance on lightweight tasks and fits well in highly resource-limited Internet-of-Things (IoT) devices. Given these characteristics, studies on HDC have been proliferating quickly, including energy efficiency improvements \cite{imani2019quanthd, imani2019searchd} and applications on tiny devices \cite{imani2018hdna, thapa2021spamhd}. Meanwhile, HDC models have also been deployed on various acceleration platforms, such as FPGAs \cite{imani2019quanthd}, GPUs \cite{kim2020geniehd}, and in-memory computing \cite{karunaratne2020memory}, thanks to their high parallelism capacity. Depending on the format of hypervectors, HDC can be broadly divided into binary HDC and non-binary HDC: the hypervectors and computations in binary HDC are all binarized, while binarization is not used in non-binary HDC. Naturally, non-binary HDC carries richer information in its hypervectors, but it also costs more computing resources than binary HDC. On the other hand, binary HDC consumes less energy and fewer resources and is friendlier to hardware implementation. More recently, some heuristic approaches, such as retraining (i.e., fine-tuning the class hypervectors after initial training) \cite{imani2019quanthd}, have been proposed to improve the inference accuracy, making binary HDC achieve a competitive accuracy performance compared to its non-binary counterpart. Nonetheless, the initial training process for an HDC model, binary or non-binary, still heavily depends on a simple strategy of \emph{averaging} the sample hypervectors to obtain class hypervectors. In other words, there have been no principled approaches to optimally train an HDC model and rigorously learn the class hypervectors that provide the best possible accuracy performance for HDC.
In this paper, we demonstrate for the first time that a binary HDC classifier is equivalent to a wide single-layer binary neural network (BNN).\footnote{Our result also applies to non-binary HDC models by changing the BNN to a wide single-layer neural network with non-binary weights.} Specifically, the binary weights in the BNN can be viewed as the class hypervectors in binary HDC, and the Hamming distance between the encoded hypervector and the class hypervectors in binary HDC can be linearly transformed into the multiplication of the encoded hypervector and the binary weights. By viewing the class hypervector training process from the BNN perspective, we reveal the key limitation of the current HDC training strategy: it relies heavily on heuristic approaches to search for class hypervectors. To address this limitation, we propose a learning-based HDC training strategy, namely LeHDC\xspace. Specifically, LeHDC\xspace takes the sample hypervector as input, assigns \textit{one-hot} labels, and optimizes the BNN weights in the training process. The binary weights in the BNN are trained with state-of-the-art learning algorithms, and consequently, these binary weights can be directly converted to class hypervectors for binary HDC classification. The binary HDC with the obtained class hypervectors can achieve significantly higher accuracy than the current retraining strategies. Importantly, LeHDC\xspace introduces a completely new training process, but does not modify the encoding or inference processes used in the existing HDC. As a result, LeHDC\xspace can be integrated into any existing HDC framework to improve the accuracy performance, yet without any extra resource or execution overhead during inference.
The main contributions of this work are as follows: \begin{itemize} \item We transform the existing binary HDC classifier into an equivalent BNN, and then reveal the key limitations in the current HDC training process that prevent it from obtaining the optimal class hypervectors and achieving the best accuracy. \item We propose a new training strategy, LeHDC\xspace, for binary HDC classification, by leveraging state-of-the-art BNN learning algorithms. To the best of our knowledge, this is the first work using learning-based methods to train HDC classifiers in a principled manner. \item We show empirically that LeHDC\xspace can significantly outperform current HDC models and provide over $15\%$ accuracy improvement against the baseline HDC, while introducing zero resource and time overhead during inference. \end{itemize} The remainder of this paper is organized as follows: In Sec. \ref{sec:background}, we briefly discuss HDC classification and current training strategies. Sec. \ref{sec:defects} reveals the limitations of the current HDC training process and equivalently expresses the HDC model as a wide single-layer BNN structure. Sec. \ref{sec:proposal} illustrates the proposed training strategy in LeHDC\xspace, and Sec. \ref{sec:validation} validates the feasibility of our strategy and compares its performance with other training strategies. The conclusion and discussion of our work are presented in Sec. \ref{sec:conclusion}. \section{HDC Classification Tasks} \label{sec:background} Binary HDC has been emerging as a novel paradigm that represents attributes with hyperdimensional bipolar vectors $\{1, -1\}^D$ \cite{kanerva2009hyperdimensional}. For a specific sample $\textbf{F} = \{f_1, f_2, ..., f_N\}$, where $f_i$ is the value of the $i$-th feature for $i=1,\cdots,N$, its feature positions and feature values are represented by randomly generated hypervectors, whose dimension (e.g., $D = 10,000$) is much larger than the number of features/values.
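Because the dimension $D$ is so much larger than the number of features, two independently drawn bipolar hypervectors are almost surely nearly dissimilar: their normalized Hamming distance concentrates tightly around 0.5. A quick NumPy check (a toy sketch; all variable names are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimension, matching the paper's example

# Two independently drawn bipolar hypervectors.
h1 = rng.choice([-1, 1], size=D)
h2 = rng.choice([-1, 1], size=D)

def hamm(a, b):
    """Normalized Hamming distance between two bipolar hypervectors."""
    return float(np.mean(a != b))

# Each coordinate differs with probability 1/2, so for large D the
# normalized Hamming distance concentrates tightly around 0.5.
print(f"Hamm(h1, h2) = {hamm(h1, h2):.3f}")
```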
In a typical HDC model \cite{imani2019quanthd}, feature position hypervectors ($\mathcal{F}$) are orthogonal to identify an individual feature, i.e., the normalized Hamming distance $Hamm(\mathcal{F}_i, \mathcal{F}_j) \approx 0.5, i,j\in \{1,2,...,N\}$. In contrast, feature value hypervectors ($\mathcal{V}$) are correlated to reflect the correlations in real values, i.e., $Hamm(\mathcal{V}_{f_i}, \mathcal{V}_{f_j}) \propto\frac{|f_i - f_j|}{max - min}$, where $f_i$ and $f_j$ are two values in the range $[min, max]$. \subsection{Binary HDC} Binary HDC consumes much less power and fewer computational resources, and is also the mainstream HDC framework \cite{imani2019quanthd}. By binding feature position hypervectors and value hypervectors, a sample can be described as a new hypervector ($\mathcal{H}\in\{1,-1\}^D$): \begin{equation} \mathcal{H} = sgn\left(\sum_{i=1}^N \mathcal{F}_i\circ \mathcal{V}_{f_i}\right) \label{eq:encoding} \end{equation} where this sample has $N$ features, and $\circ$ denotes the Hadamard product, which multiplies two hypervectors element-wise. $sgn(\cdot)$ is the sign function that binarizes the sum of hypervectors; here we assume $sgn(0)$ is randomly assigned 1 or -1. In general, an HDC classifier can use record-based or $N$-gram-based encoders \cite{ge2020classification}. While our training approach applies to any encoding method (including advanced ones \cite{HDC_Cornell_arXiv_2022} based on sophisticated feature extractions), \footnote{Per the DAC’22 policy, we are not allowed to make significant non-editorial changes to papers once accepted.
Thus, as Ref.~\cite{HDC_Cornell_arXiv_2022} (arXiv date 2/10/2022) was not available or cited at the time of our DAC’22 submission on 11/22/2021, it will not be included in the final camera-ready version of this paper.} for a concrete case study, we adopt the commonly-used \textit{record-based encoding} which, as shown in Eq.~\ref{eq:encoding}, achieves higher accuracy than the $N$-gram-based method for many applications \cite{ge2020classification}. Note that LeHDC\xspace does not modify the encoding process, and hence can work with any encoder. \begin{figure} \caption{Binary HDC classification framework.} \label{fig:binaryHDC} \end{figure} \textbf{Training.} The basic training strategy in HDC is to simply accumulate all the sample hypervectors belonging to each class, in order to obtain the class hypervectors $\mathcal{C}$: \begin{equation} c_k = sgn\left(\sum_{\mathcal{H}\in \Omega_k} \mathcal{H}\right) \label{eq:initial_training} \end{equation} where $c_k$ denotes the $k$-th class hypervector in $\mathcal{C}$, and $\Omega_k$ is the set of sample hypervectors belonging to class $k$. \textbf{Inference.} A query sample is first encoded using Eq. \ref{eq:encoding}. Then, the similarities between the query hypervector and the class hypervectors in $\mathcal{C}$, measured in terms of the Hamming distance, are calculated. The most similar one, i.e., the class with the lowest Hamming distance, is labeled as the predicted class. To ease understanding of the HDC flow, we show the scheme of binary HDC classification in Fig. \ref{fig:binaryHDC}. This procedure is similar to the nearest centroid classification in machine learning \cite{levner2005feature}, which searches for an optimal centroid for each class. \subsection{Training Enhancement} Various training strategies have been proposed to increase accuracy. Here, we introduce a state-of-the-art approach: retraining \cite{imani2019quanthd}. Eq. \ref{eq:initial_training} gives the initial training results for the class hypervectors.
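The basic pipeline just described — record-based encoding (Eq.~\ref{eq:encoding}), initial training by accumulation (Eq.~\ref{eq:initial_training}), and Hamming-distance inference — can be sketched in a few lines of NumPy. This is only a toy illustration with i.i.d. random value hypervectors (the paper uses correlated level hypervectors) and made-up sizes; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N, K, L = 10_000, 16, 3, 8  # dimension, features, classes, levels (toy sizes)

F = rng.choice([-1, 1], size=(N, D))  # feature-position hypervectors
V = rng.choice([-1, 1], size=(L, D))  # value hypervectors (i.i.d. here for brevity)

def encode(levels):
    """Record-based encoding: bind position to value, bundle, then binarize."""
    s = (F * V[levels]).sum(axis=0)
    s[s == 0] = 1                      # sgn(0) is assigned arbitrarily
    return np.sign(s).astype(int)

# Initial training: accumulate each class's sample hypervectors and binarize.
X = rng.integers(0, L, size=(30, N))   # toy samples, already quantized into levels
y = rng.integers(0, K, size=30)
H = np.stack([encode(x) for x in X])
C = np.sign(np.stack([H[y == k].sum(axis=0) for k in range(K)])).astype(int)
C[C == 0] = 1

def predict(h):
    """Inference: the class hypervector with the smallest Hamming distance wins."""
    return int(np.argmin([float((h != c).mean()) for c in C]))
```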
The retraining strategy \cite{imani2019quanthd} further fine-tunes the initial class hypervectors, in which both non-binary and binary class hypervectors are used for the training. Specifically, the binary class hypervectors are utilized for validation and the non-binary ones are used for updating. As shown in Fig. \ref{fig:retraining_model}, in each retraining iteration, training samples are classified based on the current binary class hypervectors. If a training sample is misclassified, then the non-binary class hypervectors will be updated with respect to the encoded hypervector ($\mathcal{H}$): \begin{equation} \begin{split} c_{nb}^+ &= c_{nb}^+ + \alpha \mathcal{H}\\ c_{nb}^- &= c_{nb}^- - \alpha \mathcal{H} \end{split} \label{eq:retraining} \end{equation} where $c_{nb}^+$ and $c_{nb}^-$ are the correct and misclassified non-binary class hypervectors, respectively, and $\alpha$ is referred to as the learning rate. This retraining step intends to increase the influence of the misclassified sample on the correct class while reducing it on the misclassified class. The retraining stops when the updates to the class hypervectors become negligible. \begin{figure} \caption{Retraining strategy to adjust class hypervectors against misclassified samples.} \label{fig:retraining_model} \end{figure} In addition, an ensemble approach (e.g., multi-model HDC where multiple models collectively classify each sample \cite{imani2019searchd}) can also increase the accuracy, but the storage size will grow when the number of ensembled HDC models increases. \section{Inner Mechanism of HDC Classifiers} \label{sec:defects} In this section, we demonstrate the equivalence of a binary HDC classifier to a corresponding BNN for inference. Importantly, we highlight that the existing training strategies for HDC are mostly heuristic and hence not optimal. \subsection{From Binary HDC to BNN} Considering a binary HDC classifier, we denote the input feature of a sample as $x\in\mathbb{R}^N$.
The encoder in binary HDC transforms the real-valued input feature $x$ into a binary hypervector: $En(x): \mathbb{R}^{N} \mapsto \{-1, 1\}^{D}$, where $D$ stands for the dimension of each hypervector, i.e., projecting the sample input from a low-dimensional space to a much higher dimension. Assuming there are $K$ classes for this HDC classifier, a trained class hypervector set is $\mathcal{C} = \{c_1, c_2, ..., c_K\} \in \{-1, 1\}^{D\times K}$, where $c_k$ is the $k$-th class hypervector. The predicted label for the sample $x$ is \begin{equation} k^\star = \underset{k}{argmin}\ Hamm(En(x), c_k), \end{equation} where $Hamm(\mathcal{H}_1, \mathcal{H}_2) = \frac{|\mathcal{H}_1\neq \mathcal{H}_2|}{D}$ represents the normalized Hamming distance operator between any two hypervectors $\mathcal{H}_1$ and $\mathcal{H}_2$, and $|\mathcal{H}_1\neq \mathcal{H}_2|$ denotes the number of different bits in $\mathcal{H}_1$ and $\mathcal{H}_2$. A key property is that the Hamming distance can be equivalently projected to the cosine similarity: $cosine(\mathcal{H}_1, \mathcal{H}_2) = 1 - 2\cdot Hamm(\mathcal{H}_1, \mathcal{H}_2)$. To see this point more concretely, we write the cosine similarity of two binary hypervectors $\mathcal{H}_1$ and $\mathcal{H}_2$ as \begin{equation} \begin{split} cosine(\mathcal{H}_1, \mathcal{H}_2) = \frac{\mathcal{H}_1 ^T \mathcal{H}_2}{\|\mathcal{H}_1\|\ \|\mathcal{H}_2\|} \end{split} \end{equation} where $\|\mathcal{H}_1\|$ and $\|\mathcal{H}_2\|$ denote the $l_2$ norms of $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively. Due to the bipolar values $\{1, -1\}$ in hypervectors, we have $\mathcal{H}_1^T \mathcal{H}_2 = (|\mathcal{H}_1 = \mathcal{H}_2| - |\mathcal{H}_1 \neq \mathcal{H}_2|)$. Together with the facts that $\|\mathcal{H}_1\|\ \|\mathcal{H}_2\| = D$ and $(|\mathcal{H}_1\neq \mathcal{H}_2| + |\mathcal{H}_1= \mathcal{H}_2|) = D$, we can conclude the equivalence between the Hamming distance and cosine similarity.
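The identity $cosine(\mathcal{H}_1, \mathcal{H}_2) = 1 - 2\cdot Hamm(\mathcal{H}_1, \mathcal{H}_2)$ is exact for bipolar vectors and easy to verify numerically (an illustrative sketch; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000
h1 = rng.choice([-1, 1], size=D)
h2 = rng.choice([-1, 1], size=D)

hamm = float(np.mean(h1 != h2))  # normalized Hamming distance
cos = (h1 @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))

# H1^T H2 = D - 2|H1 != H2| and ||H1|| ||H2|| = D for bipolar vectors,
# hence cosine(H1, H2) = 1 - 2 * Hamm(H1, H2) holds exactly.
assert np.isclose(cos, 1 - 2 * hamm)
```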
Therefore, the predicted label $k^\star$ can be equivalently represented as \begin{equation}\label{eq:equal_to_BNN} \begin{split} k^\star &=\underset{k}{argmin}\ Hamm(En(x), c_k)\\ &= \underset{k}{argmax}\ cosine(En(x), c_k)\\ &= \underset{k}{argmax}\ En(x)^T c_k \end{split} \end{equation} \textbf{Remark.} From Eq. \ref{eq:equal_to_BNN}, we argue that the computation $En(x)^T c_k$ is actually the same as forward propagation in a BNN. Specifically, as illustrated in Fig.~\ref{fig:bnnNetwork}, the single-layer BNN takes $En(x)$ as its input and has $K$ output neurons that represent the $K$ classes. The binary connection weights between the input $En(x)\in \{-1,1\}^D$ and the $k$-th output neuron are $c_k\in\{-1,1\}^D$, while the $k$-th output is $En(x)^T c_k$ and non-binary. While binary HDC is the mainstream choice for HDC classifiers, our analysis also applies to non-binary HDC, where the encoded hypervector $En(x)$ and the class hypervector $c_k$ can both take non-binary values. In this case, cosine similarity is directly used as the measure between $En(x)$ and $c_k$ for classification, and a non-binary HDC can be equivalently viewed as a simple single-layer neural network (i.e., perceptron). \subsection{Limitations in Current HDC Training} By establishing the equivalence between a binary HDC model and a BNN, we can see that the HDC training process (i.e., finding class hypervectors $\{c_1,\cdots,c_K\}$) is essentially the same as training the BNN weights $c_1,\cdots,c_K$. In the basic HDC training process, each class hypervector $c_k$ is obtained by simply averaging the sample hypervectors $En(x)$ of all the samples belonging to that class. Clearly, this naive approach does not optimize the BNN weight $c_k$ at all. Next, we also highlight the key limitations in the state-of-the-art retraining strategy.
\textbf{(1)} \textit{Retraining only updates the non-binary class hypervectors that correspond to the misclassified class and the true class, while the other class hypervectors stay unchanged.} Intuitively, when a training sample $x$ is misclassified, the retraining step can partially mitigate the impact of $En(x)$ on the misclassified class hypervector while enhancing its impact on the true one; when a training sample is correctly classified, no action is taken. However, this strategy neglects two scenarios that could occur during the retraining phase: \circled{1} If a training sample $x$ is misclassified while there are multiple wrong labels with high similarity, only the class hypervector corresponding to the wrong label with the highest similarity is updated. Hence, the other wrong class hypervectors can only be updated in future iterations, or may never get updated. \circled{2} If a training sample $x$ is correctly classified, even though the correct label has only a slightly higher similarity than other classes (i.e., the other class hypervectors also have high similarities to this sample), no class hypervectors are updated. In this case, albeit correctly classified, this sample is very close to the classification border. If more samples of this kind occur during the training process, \textit{over-fitting} is likely to happen, thus weakening the generalization of the HDC model. In contrast, there are various mechanisms, such as dropout and regularization, which mitigate over-fitting in BNNs. Thus, by training the equivalent BNN, we can systematically improve the testing accuracy of HDC models. \textbf{(2)} \textit{All the weights of the class hypervector being updated have to be changed with a fixed step size.} In the retraining strategy \cite{imani2019quanthd}, the updating scale is only determined by the encoded hypervector $En(x)$ and a fixed learning rate $\alpha$, as shown in Eq.\ref{eq:retraining}.
On the other hand, the general updating rule for a (single-layer) DNN is: \begin{equation} c^{t+1}_{k,j} = c^t_{k,j} - \alpha \frac{\partial \mathcal{L}}{\partial o_k} x_j \label{eq:updating} \end{equation} where $c_{k,j}^t$ is one weight parameter at iteration $t$, $\mathcal{L}$ denotes the loss function, $o_k = En(x)^T c_k$ is the $k$-th output, $\alpha$ is the learning rate, and $x$ stands for the sample input, equivalent to $En(x)$ in the HDC model; by the chain rule, $\frac{\partial \mathcal{L}}{\partial c_{k,j}} = \frac{\partial \mathcal{L}}{\partial o_k} x_j$. By comparing Eq.\ref{eq:retraining} and Eq.\ref{eq:updating}, we can directly observe that the derivative of the loss $\frac{\partial \mathcal{L}}{\partial o_k}$ is overlooked in the retraining strategy. For this single-layer network, the derivative of the loss function reflects the similarity between the input $En(x)$ and the class hypervectors $c_k, k\in\{1,2,...,K\}$. However, the retraining strategy does not consider this similarity during the updating. As an improved version, an adaptive learning rate is proposed in \cite{imani2019adapthd}, but the adaptability is still determined by the validation error rate or by the difference between the similarities $cosine(En(x),c_{correct})$ and $cosine(En(x),c_{wrong})$, not by the similarities to all class hypervectors themselves. Consequently, the state-of-the-art retraining strategy converges very slowly, since it updates with incomplete information. \subsection{Case Study} We now demonstrate in practice how these limitations can affect HDC training. For the case study, we use the Fashion-MNIST dataset \cite{xiao2017/online} to illustrate the inner mechanism of HDC learning. Fashion-MNIST consists of $L=60,000$ training images classified into $K=10$ classes, and we set the hypervector dimension to $D=10,000$. In Fig. \ref{fig:comparison}, we compare the performance of the enhanced retraining against that of the default retraining. We make modifications on top of the existing retraining strategy to enhance the retraining process.
Specifically, once a training sample is misclassified, all the class hypervectors that have higher similarities than the correct class hypervector are updated, instead of only the one with the highest similarity. During the updating, we additionally account for the similarity: we specify that the ideal (normalized) Hamming distance between the training sample and the correct/wrong class hypervector is 0/0.5, calculate the difference between the actual Hamming distance and the ideal one, and use it as a scaling factor for updating a class hypervector during retraining. This is equivalent to Eq.~\ref{eq:updating} when the loss function is the squared error. \begin{figure} \caption{Iteration comparison between the basic retraining strategy \cite{imani2019quanthd} and the enhanced retraining strategy, for both training and testing accuracy.} \label{fig:retraining_training} \label{fig:retraining_testing} \label{fig:comparison} \end{figure} The results show that, for both the training and testing procedures, the enhanced retraining strategy starts with and converges at a higher accuracy, confirming that the discussed limitations indeed hold back the retraining strategy. On the other hand, the basic retraining strategy starts to oscillate after the initial convergence. In contrast, the enhanced retraining makes the training/testing procedure more stable, due to the introduction of the similarity metric for scaling the updating steps. Nonetheless, the enhancements we make are still heuristic, lacking principled guidance to optimize the class hypervectors in HDC. \section{LeHDC\xspace: the Learning-Based HDC} \label{sec:proposal} We now present LeHDC\xspace as an alternative and principled approach to train the class hypervectors in an HDC classifier. Based on the discovered equivalence between an HDC model and a single-layer BNN, LeHDC\xspace leverages state-of-the-art principled learning algorithms to train the BNN weights. Compared to non-binary neural networks, a BNN is more challenging to train, as its weights and output values are all binary.
For instance, a large learning rate may successfully flip the binary weights but introduce severe oscillation at the same time, while a small learning rate may not be powerful enough to flip the binary bits, leaving the update trapped in local optima. The BNN model for binary HDC learning is shown in Fig. \ref{fig:bnnNetwork}. Here, $En(x)$ is an encoded sample hypervector and the input to the BNN, $\mathcal{C}$ represents the class hypervectors, which are the BNN weights, and the output $\textbf{o}=(En(x)^Tc_1,\cdots, En(x)^Tc_K)$ equivalently measures the similarities between the input and each class. In this paper, we adopt the state-of-the-art BNN training strategy in \cite{liu2021adam}, and propose the following approach to obtaining the optimal class hypervectors. \begin{figure} \caption{The equivalent BNN model for binary HDC.} \label{fig:bnnNetwork} \end{figure} Unlike other BNN models, our single-layer BNN corresponding to the binary HDC model does not require a binary activation function at each output neuron, since the non-binary BNN outputs (i.e., $En(x)^T c_k$, for $k=1,\cdots,K$) are directly used to determine the classification result. For the binary weights $\mathcal{C}\in \{-1, 1\}^{D\times K}$ (i.e., the class hypervectors), both binary ($\mathcal{C}$) and non-binary ($\mathcal{C}_{nb}$) forms are stored during training. The non-binary hypervectors are utilized to accumulate small gradients, and they are updated during backpropagation. The binary hypervectors are utilized for the feed-forward pass and are updated, element-wise, after each iteration as: \begin{equation} \mathcal{C} = sgn(\mathcal{C}_{nb}) = \begin{cases} -1\ \ \text{if } \mathcal{C}_{nb} < 0\\ +1\ \ \text{otherwise.} \end{cases} \end{equation} For each sample $x$, the true label ${y}$ is \textit{one-hot} encoded at the output layer. During training, the \textit{softmax} function is applied to the output, and the \textit{cross entropy} is used as the training loss function.
Thus, the loss for the output $\textbf{o} = En(x)^{T} \mathcal{C}$ can be written as \begin{equation} \text{Loss} = \text{CrossEntropy}(\text{softmax}(\textbf{o}), {y})\text{.} \end{equation} In addition, \textit{weight decay} is an important ingredient of BNN training. Weight decay behaves as an $L2$-norm penalty that prevents the weights from growing too large, which is an effective strategy to mitigate over-fitting during training. Combined with the small gradients accumulated on the non-binary class hypervectors, weight decay makes $\mathcal{C}_{nb}$ more sensitive to the input patterns and less dependent on the weight initialization. Hence, the final empirical loss is given as \begin{equation} \mathcal{L} = \sum_{i}\text{CrossEntropy}(\text{softmax}(En({x}_i)^T \mathcal{C}), {y}_i) + \frac{\lambda}{2} \left \| \mathcal{C}_{nb} \right \| ^2 \end{equation} where $({x}_i, y_i)$ is the $i$-th training sample and $\lambda$ is a regularization weight. As for the training configuration, $Adam$ is selected as the optimizer: according to the evaluations in \cite{liu2021adam}, $Adam$ outperforms $SGD$-based algorithms on BNN optimization. Moreover, the \textit{dropout} strategy also plays an indispensable role in training the equivalent single-layer BNN. Since updating all weights is likely to introduce over-fitting, dropout was proposed to prevent it \cite{srivastava2013improving}. Although LeHDC\xspace has only one layer and no complex architecture, its width is large, and all the $D$ values of each class hypervector are directly updated based on the gradient of the loss. This may force the class hypervectors to over-adapt to the training samples, leading to over-fitting. Hence, dropout is necessary to obtain better performance on the HDC classification task. With the equivalent BNN model, we propose LeHDC\xspace for binary HDC classification.
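The training procedure described above (binary forward pass, gradients accumulated on the non-binary hypervectors, weight decay, and input dropout) can be sketched in plain Python. The toy dimensions, synthetic data generator, and hyper-parameter values below are our own illustrative choices, not the configuration used for the reported experiments:

```python
import math
import random

random.seed(1)
D, K, LR, WD, DROP = 16, 3, 0.05, 0.01, 0.25  # toy sizes and hyper-parameters

# Non-binary "latent" class hypervectors; the binary model is sgn(C_nb).
C_nb = [[random.uniform(-0.1, 0.1) for _ in range(D)] for _ in range(K)]
sgn = lambda v: 1 if v >= 0 else -1

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Synthetic training set: samples of class k are noisy copies of a template.
templates = [[random.choice((-1, 1)) for _ in range(D)] for _ in range(K)]
def sample(k):
    return [b if random.random() < 0.9 else -b for b in templates[k]]
data = [(sample(k), k) for k in range(K) for _ in range(30)]

def train_step(x, y):
    x = [0 if random.random() < DROP else b for b in x]   # input dropout
    C = [[sgn(w) for w in row] for row in C_nb]           # binary forward pass
    o = [dot(row, x) / D for row in C]
    m = max(o)                                            # stable softmax
    e = [math.exp(oi - m) for oi in o]
    Z = sum(e)
    p = [ei / Z for ei in e]
    # Cross-entropy gradient dL/do_k = p_k - 1[k == y]; small gradients are
    # accumulated on the non-binary C_nb, together with L2 weight decay.
    for k in range(K):
        g = p[k] - (1 if k == y else 0)
        for j in range(D):
            C_nb[k][j] -= LR * (g * x[j] / D + WD * C_nb[k][j])

for _ in range(30):                                       # training epochs
    random.shuffle(data)
    for x, y in data:
        train_step(x, y)

C = [[sgn(w) for w in row] for row in C_nb]               # final binary model
acc = sum(max(range(K), key=lambda k: dot(C[k], x)) == y
          for x, y in data) / len(data)
```

After training, only the binarized matrix $\mathcal{C}$ is kept, so inference is unchanged from a standard binary HDC classifier.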
Our training method addresses the aforementioned limitations in current HDC training strategies and mitigates the over-fitting issue in a principled manner, providing better generalization ability. The cross-entropy loss, together with the weight decay and dropout strategies, is only used for training the equivalent BNN. After training, the weight matrix $\mathcal{C} = sgn(\mathcal{C}_{nb})$ can be directly used as the class hypervectors. The HDC inference process remains unchanged, without requiring extra resources. Hence, LeHDC\xspace\ induces zero resource and time overhead during inference. Further, our method is inspired by modern BNN training techniques, which provide principled support for approaching the optimum of HDC training, rather than relying on the heuristic training strategies of existing HDC models. \begin{table*}[h] \captionsetup{width=\linewidth} \caption{Inference accuracy (\%) comparison between LeHDC and other strategies. Data are shown in the format $mean^{\pm std}$.} \centering \begin{tabular}{lccccccc} \toprule & MNIST & Fashion-MNIST & CIFAR-10 & UCIHAR & ISOLET & PAMAP & \textbf{Avg Increment} \\ \midrule Baseline Binary HDC & $80.36^{\pm 0.11}$ & $68.04^{\pm 0.17}$ & $29.55^{\pm 0.35}$ & $82.46^{\pm 0.11}$ & $87.42^{\pm 0.15}$ & $77.66^{\pm 0.01}$ & $-$ \\ Multi-Model \cite{imani2019searchd} & $84.43^{\pm 0.5}$ & $74.05^{\pm 0.5}$ & $22.66^{\pm 0.59}$ & $82.31^{\pm 0.89}$ & $83.47^{\pm 0.43}$ & $91.87^{\pm 0.85}$ & $+2.22$ \\ Retraining \cite{imani2019quanthd} & $89.28^{\pm 0.07}$ & $80.26^{\pm 0.27}$ & $28.42^{\pm 1.46}$ & $91.25^{\pm 0.21}$ & $92.70^{\pm 0.12}$ & $95.64^{\pm 0.03}$ & $+8.67$ \\ \midrule \textbf{LeHDC} & $\mathbf{ 94.74^{\pm 0.18}}$ & $\mathbf{87.11^{\pm 0.08}}$ & $\mathbf{46.10^{\pm 0.20}}$ & $\mathbf{95.23^{\pm 0.16}}$ & $\mathbf{94.89^{\pm 0.17}}$ & $\mathbf{99.55^{\pm 0.05}}$ & \textbf{+15.32}\\ \bottomrule \end{tabular} \label{tab:framecomparison} \end{table*} \section{Experiments} \label{sec:validation} We evaluate the proposed
LeHDC\xspace on several selected benchmarks: CV classification tasks (MNIST \cite{726791}, Fashion-MNIST \cite{xiao2017/online}, and CIFAR-10 \cite{cifar10}) and datasets used in the original retraining work \cite{imani2019quanthd} (UCIHAR \cite{ucihar}, ISOLET \cite{isolet}, and PAMAP \cite{pamap}). Our goal is to highlight the advantages of LeHDC\xspace over the existing HDC training processes, and hence we mainly compare LeHDC\xspace against the existing HDC models. Note that the pros and cons between a general HDC model and conventional machine learning models have been extensively studied in the literature \cite{imani2019framework}, and are thus not the focus of this work. Unless otherwise stated, we adopt the following configurations in our evaluation.\footnote{Since the existing HDC models in \cite{imani2019quanthd,imani2019searchd} are not open-sourced, we build the retraining and multi-model HDC frameworks ourselves, and the actual numerical values might differ.} For the retraining strategy, the learning rate is $\alpha = 0.05$, with $\alpha = 1.5$ in the first iteration. We run 150 iterations to ensure that the retraining has converged. For the multi-model strategy, we follow the approach in \cite{imani2019searchd} and choose 64 hypervectors per class. For our proposed BNN-training strategy, the hyper-parameters are shown in Table \ref{tab:parameter}. As a baseline reference, we test the benchmarks on binary HDC without any retraining. All the experiments are evaluated with Python on a 3.60GHz Intel i7-9700K CPU with 16GB memory and a Tesla P100 GPU with 16GB memory. \subsection{Model Evaluation} First, we validate the significance of weight decay and dropout in LeHDC\xspace. In Fig. \ref{fig:casestudy}, we show the training and testing trajectories over the iterations on the CIFAR-10 dataset. When weight decay and dropout are considered during training, the testing accuracy increases.
An interesting observation is that, when both weight decay and dropout are applied, the training accuracy decreases; however, the testing accuracy in this case is the highest. This is because over-fitting occurs when either weight decay or dropout is omitted; with both included, the trained class hypervectors generalize better. \begin{figure} \caption{The training and testing accuracy on CIFAR-10 over the iterations. We consider the cases with weight decay, with dropout, and with both.} \label{fig:casestudy} \end{figure} Further, we evaluate the scalability of LeHDC\xspace. We show the accuracy degradation under dimension reduction across different training strategies in Fig. \ref{fig:scalability}. We can see that LeHDC\xspace\ always outperforms the other training strategies. Additionally, at $D=2,000$ it achieves the same accuracy as the retraining strategy does at a much higher dimension, $D=10,000$. Another observation is that the multi-model strategy may sometimes perform even worse than the baseline binary HDC, such as on the ISOLET dataset. \begin{figure} \caption{The change of inference accuracy under dimension reduction on the Fashion-MNIST and ISOLET datasets.} \label{fig:scalability} \end{figure} Moreover, we discuss the computational resources required by the different binary HDC frameworks. Since LeHDC\xspace\ only optimizes the training procedure, without inducing extra computation during inference, it has the same time consumption and resource occupation as the baseline and retraining binary HDC. However, the multi-model strategy costs more storage due to its multiple hypervectors per class. Also, hardware acceleration based on FPGAs and in-memory computing has been explored to support inference in microseconds \cite{imani2019quanthd, imani2019searchd}. Thus, LeHDC\xspace improves the accuracy, with the same energy, latency, and size during inference.
\subsection{Accuracy Improvement} We evaluate the accuracy of LeHDC\xspace\ against the other HDC training strategies. We fine-tune the training configuration for each dataset, as shown in Table \ref{tab:parameter}. Note that we still use $D=10,000$ for the evaluation, in order to make the comparison with the other strategies fair. The learning rate decays during training if an increase in the training loss is detected. \begin{table}[h] \caption{Hyper-parameters used in the LeHDC\xspace\ configurations.} \resizebox{\linewidth}{!}{ \begin{tabular}{lccccc} \toprule \multirow{2}{*}{Dataset} & \multicolumn{5}{c}{Parameters} \\\cline{2-6} & \textit{WD}$^{\mathrm{1}}$ & \textit{LR}$^{\mathrm{2}}$ & \textit{B}$^{\mathrm{3}}$ & \textit{DR}$^{\mathrm{4}}$ & \textit{Epochs}\\ \hline MNIST & $0.05$ & $0.01$ & $64$ & $0.5$ & $100$\\ Fashion-MNIST & $0.03$ & $0.1$ & $256$ & $0.3$ & $200$\\ CIFAR-10 & $0.03$ & $0.001$ & $512$ & $0.3$ & $200$ \\ UCIHAR, ISOLET, PAMAP & $0.05$ & $0.01$ & $64$ & $0.5$ & $100$\\ \bottomrule \multicolumn{6}{l}{$^{\mathrm{1}}$\textit{WD} = Weight Decay $^{\mathrm{2}}$\textit{LR} = Learning Rate $^{\mathrm{3}}$\textit{B} = Batch Size $^{\mathrm{4}}$\textit{DR} = Dropout Rate} \end{tabular} } \label{tab:parameter} \end{table} The inference accuracy comparison is shown in Table \ref{tab:framecomparison}. As the results show, the baseline HDC performs the worst on most benchmarks. However, the multi-model strategy sometimes performs even worse than the baseline HDC, such as on the CIFAR-10 and ISOLET datasets. By inspecting the characteristics of these benchmarks, we find that these two datasets have a large number of features or classes but relatively few training samples; thus, the multi-model strategy cannot handle a complicated HDC classification task without sufficient training samples. Meanwhile, the retraining strategy enhances the training procedure and achieves a good accuracy improvement over the baseline HDC.
On the other hand, our proposed LeHDC\xspace\ further improves the inference accuracy over the retraining strategy. Hence, LeHDC\xspace\ can bring the HDC classification closer to an optimum with zero resource and time overhead during inference. \section{Conclusion and Discussion} \label{sec:conclusion} In this work, we investigate the limitations of current HDC training strategies and construct a BNN that is equivalent to the binary HDC model. Accordingly, we propose LeHDC\xspace\ to train the class hypervectors on the BNN structure in a principled manner. The evaluation shows that the learning-based BNN strategy outperforms the other HDC training strategies, achieving close-to-optimal accuracy while introducing zero resource and time overhead during HDC inference. Although we only fine-tune the explicit hyper-parameters, the LeHDC\xspace\ strategy outperforms the other training strategies on the selected benchmarks; we note, however, that there are other implicit hyper-parameters, such as the ratio of the validation set and the learning rate decay along the training. Moreover, since the HDC model can be equivalently represented as a neural network model, we expect that, along with the advances in training BNNs, the HDC model performance can be further improved by training an equivalent BNN. Although LeHDC\xspace significantly improves the inference accuracy with the same energy consumption and latency, we admit that HDC-based inference is still not as powerful as modern DNN frameworks. For example, a Convolutional Neural Network (CNN) can easily achieve over 90\% accuracy on CIFAR-10. This is mainly due to the fundamental limitations of the existing HDC framework, which is essentially a simple single-layer BNN. \input{ref.bbl} \end{document}
\begin{document} \title{\textbf{Bures and Sj\"{o}qvist Metrics over Thermal State Manifolds for Spin Qubits and Superconducting Flux Qubits}} \author{\textbf{Carlo Cafaro}$^{1}$ and \textbf{Paul M.\ Alsing}$^{2}$} \affiliation{$^{1}$SUNY Polytechnic Institute, 12203 Albany, New York, USA} \affiliation{$^{2}$Air Force Research Laboratory, Information Directorate, 13441 Rome, New York, USA} \begin{abstract} The interplay among differential geometry, statistical physics, and quantum information science has been increasingly gaining theoretical interest in recent years. In this paper, we present an explicit analysis of the Bures and Sj\"{o}qvist metrics over the manifolds of thermal states for specific spin qubit and superconducting flux qubit Hamiltonian models. While the two metrics both reduce to the Fubini-Study metric in the asymptotic limiting case of the inverse temperature approaching infinity for both Hamiltonian models, we observe that the two metrics are generally different when departing from the zero-temperature limit. In particular, we discuss this discrepancy in the case of the superconducting flux Hamiltonian model. We conclude that the two metrics differ in the presence of nonclassical behavior, specified by the noncommutativity of neighboring mixed quantum states. Such noncommutativity, in turn, is quantified by the two metrics in different manners. Finally, we briefly discuss possible observable consequences of this discrepancy between the two metrics when using them to predict critical and/or complex behavior of physical systems of interest in quantum information science.
\end{abstract} \pacs{Quantum Computation (03.67.Lx), Quantum Information (03.67.Ac), Quantum Mechanics (03.65.-w), Riemannian Geometry (02.40.Ky), Statistical Mechanics (05.20.-y).} \maketitle \fancyhead[R]{\ifnum\value{page}<2\relax\else\thepage\fi} \thispagestyle{fancy} \section{Introduction} Geometry plays a special role in the description and, to a certain extent, in the understanding of various physical phenomena \cite{pettini07,karol06}. The concepts of length, area, and volume are ubiquitous in physics and their meaning can prove quite helpful in explaining physical phenomena from a more intuitive perspective \cite{cafaroprd22,cafaropre22}. The notions of \textquotedblleft\emph{longer}\textquotedblright\ and \textquotedblleft \emph{shorter}\textquotedblright\ are extensively used in virtually all disciplines \cite{cafarophysicaa22}. Indeed, geometric formulations of classical and quantum evolutions along with geometric descriptions of classical and quantum mechanical aspects of thermal phenomena are becoming increasingly important in science. Concepts, such as thermodynamic length, area law, and statistical volumes are omnipresent in geometric thermodynamics, general relativity, and statistical physics, respectively. The concept of entropy finds application in essentially any realm of science, from classical thermodynamics to quantum information science. The notions of \textquotedblleft\emph{hotter}\textquotedblright\ and \textquotedblleft \emph{cooler}\textquotedblright\ are widely used in many fields. Entropy can be used to provide measures of distinguishability of classical probability distributions, as well as pure and mixed quantum states. It can also be used to propose measures of complexity for classical motion, quantum evolution, and entropic motion on curved statistical manifolds underlying the entropic dynamics of physical systems for which only partial knowledge of relevant information can be obtained \cite{cafaroPhD,cafaroCSF,felice18}. 
Furthermore, entropy can also be used to express the degree of entanglement in a quantum state specifying a composite quantum system. For instance, concepts such as Shannon entropy, von Neumann entropy, and Umegaki relative entropy are ubiquitous in classical information science, quantum information theory, and information geometric formulations of mixed quantum state evolutions \cite{amari}, respectively. In this paper, inspired by the increasing theoretical interest in the interplay among differential geometry, statistical physics, and quantum information science \cite{zanardiprl07,zanardi07,pessoa21,silva21,silva21B,mera22}, we present an explicit analysis of the Bures \cite{bures69,uhlman76,hubner92} and Sj\"{o}qvist \cite{erik20} metrics over the manifolds of thermal states for the spin qubit and the superconducting flux qubit Hamiltonian models. From a chronological standpoint, the first physical application of the Sj\"{o}qvist interferometric metric appears in the original paper by Sj\"{o}qvist himself, Ref. \cite{erik20}. Here, the author considered his newly proposed interferometric metric to quantify changes in the behavior of a magnetic system in a thermal state under modifications of temperature and magnetic field intensity. When the temperature is kept constant while the externally applied magnetic field is varied, the Sj\"{o}qvist interferometric metric was shown to be physically linked to the magnetic susceptibility. This quantity, in turn, quantifies how much a material becomes magnetized when immersed in a magnetic field. A second application of the Sj\"{o}qvist interferometric metric appears in Refs. \cite{silva21,silva21B}. Here, this metric is used to characterize finite-temperature phase transitions in the framework of band insulators. In particular, the authors considered the massive Dirac Hamiltonian model for a band insulator in two spatial dimensions.
The corresponding Sj\"{o}qvist interferometric metric was calculated and expressed in terms of two physical parameters, the temperature and the hopping parameter. Furthermore, the Sj\"{o}qvist interferometric metric was physically regarded as an interferometric susceptibility. Interestingly, a dramatic difference between the Sj\"{o}qvist interferometric metric and the Bures metric was observed in Refs. \cite{silva21,silva21B} when studying topological phase transitions in these types of systems. Specifically, while the topological phase transition is captured at all temperatures in the case of the Sj\"{o}qvist interferometric metric, it is captured only at zero temperature in the case of the Bures metric. The authors of Refs. \cite{silva21,silva21B} leave the experimental observation of the singular behavior of the Sj\"{o}qvist interferometric metric in actual laboratory experiments as an open question. A third interesting work is the one presented in Ref. \cite{mera22}. Here the authors focus on the zero-temperature aspects of certain quantum systems in their (pure) ground quantum state. They consider two systems. The first system is described by the $XY$ anisotropic spin-$1/2$ chain with $N$ sites on a circle in the presence of an external magnetic field. The second is the Haldane model, a two-dimensional condensed-matter lattice model \cite{haldane88}. In the first case, the two parameters that determine both the Hamiltonian and the parameter manifold are the anisotropy degree and the magnetic field intensity. In the second case, instead, the two key parameters are the on-site energy and the phase in the model being considered. Expressing the so-called quantum metric in terms of these tunable parameters, they study the thermodynamical limit of this metric along critical submanifolds of the whole parameter manifold.
They observe a singular (regular) behavior of the metric along normal (tangent) directions to the critical submanifolds. Therefore, they conclude that tangent directions to critical manifolds are special. Finally, the authors also point out that it would be interesting to understand how their findings generalize to the finite-temperature case where states become mixed. Interestingly, without reporting any explicit analysis, the authors state that they expect the Bures and the Sj\"{o}qvist metrics to assume different functional forms. In this paper, inspired by the previously mentioned relevance of comprehending the physical significance of choosing one metric over another in such geometric characterizations of physical aspects of quantum systems, we report a complete and straightforward analysis of the link between the Sj\"{o}qvist interferometric metric and the Bures metric for two special classes of nondegenerate mixed quantum states. Specifically, focusing on manifolds of thermal states for the spin qubit and the superconducting flux qubit Hamiltonian models, we observe that while the two metrics both reduce to the Fubini-Study metric \cite{provost80,wootters81,braunstein94} in the zero-temperature asymptotic limiting case of the inverse temperature $\beta\overset{\text{def}}{=}\left( k_{B}T\right) ^{-1}$ (with $k_{B}$ being the Boltzmann constant) approaching infinity for both Hamiltonian models, the two metrics are generally different. Furthermore, we observe this different behavior in the case of the superconducting flux Hamiltonian model. More generally, we note that the two metrics seem to differ when nonclassical behavior is present since, in this case, the metrics quantify the noncommutativity of neighboring mixed quantum states in different manners. Finally, we briefly discuss the possible observable consequences of this discrepancy between the two metrics when using them to predict critical and/or complex behavior of physical systems of interest.
We acknowledge that, despite the fact that most of the preliminary background results presented in this paper are partially known in the literature, they appear in a scattered fashion throughout several papers written by researchers working in distinct fields of physics who may not necessarily be aware of each other's findings. For this reason, we present here an explicit and unified comparative formulation of the Bures and Sj\"{o}qvist metrics for simple quantum systems in mixed states. In particular, as mentioned earlier, we illustrate our results on the examples of thermal state manifolds for spin qubits and superconducting flux qubits. These applications are original and, to the best of our knowledge, do not appear anywhere in the literature. The layout of the rest of this paper is as follows. In Section II, we present with explicit derivations the expressions of the Bures metric in H\"{u}bner's \cite{hubner92} and Zanardi's \cite{zanardi07} forms. In Section III, we present an explicit derivation of the Sj\"{o}qvist interferometric metric \cite{erik20} between two neighboring nondegenerate density matrices. In Section IV, we present two Hamiltonian models. The first Hamiltonian model describes a spin-$1/2$ particle in a uniform and time-independent external magnetic field oriented along the $z$-axis. The second Hamiltonian model, instead, specifies a superconducting flux qubit. Bringing these two systems into thermal equilibrium with a reservoir at finite and non-zero temperature $T$, we construct the two corresponding parametric families of thermal states. In Section V, we present an explicit calculation of both the Sj\"{o}qvist and the Bures metrics for each of the two distinct families of parametric thermal states. From our comparative analysis, we find that the two metrics coincide for the first Hamiltonian model (electron in a constant magnetic field along the $z$-direction), while they differ for the second Hamiltonian model (superconducting flux qubit).
In Section VI, we discuss the effects that arise from the comparative analysis carried out in Section V concerning the Bures and Sj\"{o}qvist metrics for the spin qubit and superconducting flux qubit Hamiltonian models introduced in Section IV. Finally, we conclude with our final remarks along with a summary of our main findings in Section VII. \section{The Bures metric} In this section, we present two explicit calculations. In the first calculation, we carry out a detailed derivation of the Bures metric by following the original work presented by H\"{u}bner in Ref. \cite{hubner92}. In the second calculation, we recast the expression of the Bures metric obtained by H\"{u}bner in a way that is more suitable in the framework of geometric analyses on thermal state manifolds. Here, we follow the original work presented by Zanardi and collaborators in Ref. \cite{zanardi07}. \subsection{The explicit derivation of H\"{u}bner's general expression} We begin by carrying out an explicit derivation of the Bures metric inspired by H\"{u}bner \cite{hubner92}. Recall that the squared Bures distance between two density matrices infinitesimally far apart is given by \begin{equation} \left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+d\rho\right) \right] ^{2}=2-2\mathrm{tr}\left[ \rho^{1/2}\left( \rho+d\rho\right) \rho ^{1/2}\right] ^{1/2}\text{.} \label{buri1} \end{equation} To find a useful expression for $\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+d\rho\right) \right] ^{2}$, we follow the line of reasoning used by H\"{u}bner in Ref. \cite{hubner92}. Consider a Hermitian matrix $A\left( t\right) $ with $t\in \mathbb{R} $ defined as \begin{equation} A\left( t\right) \overset{\text{def}}{=}\left[ \rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}\right] ^{1/2}\text{,} \label{buri2} \end{equation} with $A\left( t\right) A\left( t\right) =\rho^{1/2}\left( \rho +td\rho\right) \rho^{1/2}$. Note that $A\left( 0\right) =\rho$ and, for later use, we assume $\rho$ to be invertible.
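As a simple sanity check of Eq. (\ref{buri1}), consider the commuting case where both $\rho=\mathrm{diag}\left( \lambda_{1}\text{, }\lambda_{2}\right) $ and $d\rho=\mathrm{diag}\left( d\lambda_{1}\text{, }d\lambda_{2}\right) $ are diagonal, with $\mathrm{tr}\rho=1$ and $\mathrm{tr}\left( d\rho\right) =0$; this check is ours and is not part of H\"{u}bner's derivation. The matrix square roots then reduce to scalar ones, and expanding to second order in $d\lambda_{i}$ one recovers the expected classical (Fisher-type) expression:

```latex
% Commuting sanity check: rho = diag(lambda_1, lambda_2) with tr(rho) = 1,
% drho = diag(dlambda_1, dlambda_2) with dlambda_1 + dlambda_2 = 0.
\begin{align*}
\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+d\rho\right) \right]^{2}
  &= 2-2\sum_{i}\sqrt{\lambda_{i}\left( \lambda_{i}+d\lambda_{i}\right) }\\
  &= 2-2\sum_{i}\lambda_{i}\left( 1+\frac{d\lambda_{i}}{2\lambda_{i}}
     -\frac{d\lambda_{i}^{2}}{8\lambda_{i}^{2}}+\ldots\right)\\
  &= \frac{1}{4}\sum_{i}\frac{d\lambda_{i}^{2}}{\lambda_{i}}
     +O\left( d\lambda^{3}\right) \text{,}
\end{align*}
```

where we used $\sum_{i}\lambda_{i}=1$ and $\sum_{i}d\lambda_{i}=0$. This is the classical contribution that the full Bures metric must reproduce whenever $\left[ \rho\text{, }d\rho\right] =0$.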
At this point we observe that knowledge of the metric tensor $g_{ij}\left( \rho\right) $ at $\rho$ requires knowing $\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho +td\rho\right) \right] ^{2}$ up to the second order in $t$. H\"{u}bner's ansatz (i.e., educated guess) is \begin{equation} \left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+td\rho\right) \right] ^{2}=t^{2}g_{ij}\left( \rho\right) d\rho^{i}d\rho^{j}\text{,} \label{buri3} \end{equation} with $\left\{ \rho^{i}\right\} $ denoting a given set of coordinates on the manifold of density matrices. From Eq. (\ref{buri3}), we note that \begin{equation} g_{ij}\left( \rho\right) d\rho^{i}d\rho^{j}=\frac{1}{2}\left( \frac{d^{2} }{dt^{2}}\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+td\rho\right) \right] ^{2}\right) _{t=0}\text{.} \label{buri4} \end{equation} Observe that using Eq. (\ref{buri2}), the RHS\ of Eq. (\ref{buri4}) becomes \begin{align} \frac{1}{2}\left( \frac{d^{2}}{dt^{2}}\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+td\rho\right) \right] ^{2}\right) _{t=0} & =\frac{1} {2}\frac{d^{2}}{dt^{2}}\left\{ 2-2\mathrm{tr}\left[ \rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}\right] ^{1/2}\right\} _{t=0}\nonumber\\ & =\frac{1}{2}\frac{d^{2}}{dt^{2}}\left\{ 2-2\mathrm{tr}\left[ A\left( t\right) \right] \right\} _{t=0}\nonumber\\ & =-\mathrm{tr}\left[ \ddot{A}\left( t\right) \right] _{t=0}\text{,} \end{align} that is, \begin{equation} \frac{1}{2}\left( \frac{d^{2}}{dt^{2}}\left[ d_{\mathrm{Bures}}\left( \rho\text{, }\rho+td\rho\right) \right] ^{2}\right) _{t=0}=-\mathrm{tr} \left[ \ddot{A}\left( t\right) \right] _{t=0}\text{.} \label{buri5} \end{equation} From Eqs. 
(\ref{buri4}) and (\ref{buri5}), we get \begin{equation} g_{ij}\left( \rho\right) d\rho^{i}d\rho^{j}=-\mathrm{tr}\left[ \ddot {A}\left( t\right) \right] _{t=0}\text{.} \label{buri6} \end{equation} Differentiating two times the relation $A\left( t\right) A\left( t\right) =\rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}$, setting $t=0$ and, finally, assuming $\rho$ diagonalized in the form \begin{equation} \rho=\sum_{i}\lambda_{i}\left\vert i\right\rangle \left\langle i\right\vert \text{,} \end{equation} we have \begin{equation} \left\{ \frac{d^{2}}{dt^{2}}\left[ A\left( t\right) A\left( t\right) \right] \right\} _{t=0}=\left\{ \frac{d^{2}}{dt^{2}}\left[ \rho ^{1/2}\left( \rho+td\rho\right) \rho^{1/2}\right] \right\} _{t=0}\text{.} \end{equation} More explicitly, we notice that \begin{equation} \frac{d}{dt}\left[ A\left( t\right) A\left( t\right) \right] =\dot {A}\left( t\right) A\left( t\right) +A\left( t\right) \dot{A}\left( t\right) \label{buri7} \end{equation} and, \begin{equation} \frac{d}{dt}\left[ \rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}\right] =\rho^{1/2}d\rho\rho^{1/2}\text{.} \label{buri8} \end{equation} Setting $t=0$, from Eqs. (\ref{buri7}) and (\ref{buri8}) we obtain \begin{equation} \dot{A}\left( 0\right) A\left( 0\right) +A\left( 0\right) \dot{A}\left( 0\right) =\rho^{1/2}d\rho\rho^{1/2}\text{.} \label{buri8a} \end{equation} After the second differentiation of $A\left( t\right) A\left( t\right) $ and $\rho^{1/2}\left( \rho+td\rho\right) \rho^{1/2}$, we get \begin{equation} \ddot{A}\left( 0\right) A\left( 0\right) +2\text{ }\dot{A}\left( 0\right) \dot{A}\left( 0\right) +A\left( 0\right) \ddot{A}\left( 0\right) =0\text{.} \label{pauli} \end{equation} Multiplying both sides of Eq. 
(\ref{pauli}) by $A^{-1}\left( 0\right) $ from the right and using the cyclicity of the trace operation, we get \begin{equation} \mathrm{tr}\left[ \ddot{A}\left( 0\right) \right] =-\mathrm{tr}\left[ A^{-1}\left( 0\right) \dot{A}\left( 0\right) ^{2}\right] \text{.} \label{buri9} \end{equation} From Eqs. (\ref{buri6}) and (\ref{buri9}), the Bures metric $ds_{\mathrm{Bures}}^{2}$ becomes \begin{align} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) & =g_{ij}\left( \rho\right) d\rho^{i}d\rho^{j}\nonumber\\ & =-\mathrm{tr}\left[ \ddot{A}\left( t\right) \right] _{t=0}\nonumber\\ & =\mathrm{tr}\left[ A^{-1}\left( 0\right) \dot{A}\left( 0\right) ^{2}\right] \nonumber\\ & =\sum_{i}\left\langle i\left\vert A^{-1}\left( 0\right) \dot{A}\left( 0\right) ^{2}\right\vert i\right\rangle \text{,} \end{align} that is, \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac{1} {2}\sum_{i,k,l}\left[ \left\langle i\left\vert A^{-1}\left( 0\right) \right\vert k\right\rangle \left\langle k\left\vert \dot{A}\left( 0\right) \right\vert l\right\rangle \left\langle l\left\vert \dot{A}\left( 0\right) \right\vert i\right\rangle +\left\langle i\left\vert A^{-1}\left( 0\right) \right\vert l\right\rangle \left\langle l\left\vert \dot{A}\left( 0\right) \right\vert k\right\rangle \left\langle k\left\vert \dot{A}\left( 0\right) \right\vert i\right\rangle \right] \text{.} \label{buri9a} \end{equation} Observe that $A\left( 0\right) \left\vert k\right\rangle =\rho\left\vert k\right\rangle =\lambda_{k}\left\vert k\right\rangle $ and, therefore, $A^{-1}\left( 0\right) \left\vert k\right\rangle =\rho^{-1}\left\vert k\right\rangle =\lambda_{k}^{-1}\left\vert k\right\rangle $ with $\lambda _{k}\neq0$ for any $k$. We need to find an expression for $\left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle $. From Eq. 
(\ref{buri8a}), we get \begin{equation} \left\langle i\left\vert \dot{A}\left( 0\right) A\left( 0\right) +A\left( 0\right) \dot{A}\left( 0\right) \right\vert j\right\rangle =\left\langle i\left\vert \rho^{1/2}d\rho\rho^{1/2}\right\vert j\right\rangle \text{.} \label{buri10} \end{equation} We note that \begin{align} \left\langle i\left\vert \dot{A}\left( 0\right) A\left( 0\right) +A\left( 0\right) \dot{A}\left( 0\right) \right\vert j\right\rangle & =\left\langle i\left\vert \dot{A}\left( 0\right) \rho+\rho\dot{A}\left( 0\right) \right\vert j\right\rangle \nonumber\\ & = {\displaystyle\sum\limits_{k}} \lambda_{k}\left\langle i\left\vert \dot{A}\left( 0\right) \right\vert k\right\rangle \left\langle k\left\vert j\right. \right\rangle + {\displaystyle\sum\limits_{k}} \lambda_{k}\left\langle i\left\vert k\right. \right\rangle \left\langle k\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle \nonumber\\ & =\lambda_{j}\left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle +\lambda_{i}\left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle \nonumber\\ & =\left( \lambda_{i}+\lambda_{j}\right) \left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle \text{,} \end{align} that is, \begin{equation} \left\langle i\left\vert \dot{A}\left( 0\right) A\left( 0\right) +A\left( 0\right) \dot{A}\left( 0\right) \right\vert j\right\rangle =\left( \lambda_{i}+\lambda_{j}\right) \left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle \text{.} \label{buri11} \end{equation} Moreover, we observe that \begin{align} \left\langle i\left\vert \rho^{1/2}d\rho\rho^{1/2}\right\vert j\right\rangle & =\left\langle i\left\vert \left( \sum_{k}\sqrt{\lambda_{k}}\left\vert k\right\rangle \left\langle k\right\vert \right) d\rho\left( \sum_{m} \sqrt{\lambda_{m}}\left\vert m\right\rangle \left\langle m\right\vert \right) \right\vert j\right\rangle \nonumber\\ & =\sum_{k,m}\sqrt{\lambda_{k}}\sqrt{\lambda_{m}}\left\langle
i\left\vert k\right. \right\rangle \left\langle k\left\vert d\rho\right\vert m\right\rangle \left\langle m\left\vert j\right. \right\rangle \nonumber\\ & =\sum_{k,m}\sqrt{\lambda_{k}}\sqrt{\lambda_{m}}\delta_{ik}\delta _{mj}\left\langle k\left\vert d\rho\right\vert m\right\rangle \nonumber\\ & =\sqrt{\lambda_{i}}\sqrt{\lambda_{j}}\left\langle i\left\vert d\rho\right\vert j\right\rangle \text{,} \end{align} that is, \begin{equation} \left\langle i\left\vert \rho^{1/2}d\rho\rho^{1/2}\right\vert j\right\rangle =\sqrt{\lambda_{i}}\sqrt{\lambda_{j}}\left\langle i\left\vert d\rho\right\vert j\right\rangle \text{.} \label{buri12} \end{equation} Using Eqs. (\ref{buri10}), (\ref{buri11}), and (\ref{buri12}), we have \begin{equation} \left\langle i\left\vert \dot{A}\left( 0\right) \right\vert j\right\rangle =\frac{\sqrt{\lambda_{i}}\sqrt{\lambda_{j}}}{\lambda_{i}+\lambda_{j} }\left\langle i\left\vert d\rho\right\vert j\right\rangle \text{.} \label{buri13} \end{equation} Finally, using Eq. (\ref{buri13}), $ds_{\mathrm{Bures}}^{2}$ in Eq. (\ref{buri9a}) becomes \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac{1} {2}\sum_{k,l}\frac{\lambda_{l}+\lambda_{k}}{\left( \lambda_{l}+\lambda _{k}\right) ^{2}}\left\vert \left\langle k\left\vert d\rho\right\vert l\right\rangle \right\vert ^{2}\text{,} \end{equation} that is, relabelling the dummy indices (i.e., $k\rightarrow i$ and $l\rightarrow j$), \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac{1} {2}\sum_{i,j}\frac{\left\vert \left\langle i\left\vert d\rho\right\vert j\right\rangle \right\vert ^{2}}{\lambda_{i}+\lambda_{j}}\text{.} \label{buri14} \end{equation} The derivation of Eq. (\ref{buri14}) ends our revisitation of H\"{u}bner's original analysis presented in Ref. \cite{hubner92}. Note that in obtaining Eq. (\ref{buri14}), there is no need to introduce any Hamiltonian \textrm{H} that might be responsible for the changes from $\rho$ to $\rho+d\rho$. 
For this reason, the expression of the Bures metric in Eq. (\ref{buri14}) is said to be general. \subsection{The explicit derivation of Zanardi's general expression} In Ref. \cite{zanardi07}, Zanardi and collaborators provided an alternative expression for the Bures metric in Eq. (\ref{buri14}). Interestingly, this alternative expression decomposes the Bures metric into its classical and nonclassical parts. To begin, in view of future geometric investigations in statistical physics, let us use a different notation and rewrite the Bures metric in Eq. (\ref{buri14}) as \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) \overset{\text{def}}{=}\frac{1}{2}\sum_{n\text{, }m}\frac{\left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2}}{p_{m}+p_{n}}\text{,} \label{Bures} \end{equation} with $1\leq m$, $n\leq N$. Let us assume that the quantities $\rho$ and $d\rho$ in Eq. (\ref{Bures}) are given by, \begin{equation} \rho\overset{\text{def}}{=}\sum_{n}p_{n}\left\vert n\right\rangle \left\langle n\right\vert \text{,} \end{equation} and \begin{equation} d\rho\overset{\text{def}}{=}\sum_{n}\left[ dp_{n}\left\vert n\right\rangle \left\langle n\right\vert +p_{n}\left\vert dn\right\rangle \left\langle n\right\vert +p_{n}\left\vert n\right\rangle \left\langle dn\right\vert \right] \text{,} \label{dro} \end{equation} respectively, with $\left\langle n\left\vert m\right. \right\rangle =\delta_{n,m}$. Let us use Eqs. (\ref{dro}) and (\ref{Bures}) to find a more explicit expression for the Bures metric $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $.
Observe that the quantity $\left\langle i\left\vert d\rho\right\vert j\right\rangle $ can be recast as, \begin{align} \left\langle i\left\vert d\rho\right\vert j\right\rangle & =\left\langle i\left\vert \left( \sum_{n}\left[ dp_{n}\left\vert n\right\rangle \left\langle n\right\vert +p_{n}\left\vert dn\right\rangle \left\langle n\right\vert +p_{n}\left\vert n\right\rangle \left\langle dn\right\vert \right] \right) \right\vert j\right\rangle \nonumber\\ & =\sum_{n}dp_{n}\left\langle i|n\right\rangle \left\langle n|j\right\rangle +p_{n}\left\langle i|dn\right\rangle \left\langle n|j\right\rangle +p_{n}\left\langle i|n\right\rangle \left\langle dn|j\right\rangle \nonumber\\ & =\sum_{n}dp_{n}\delta_{in}\delta_{nj}+p_{n}\left\langle i|dn\right\rangle \delta_{nj}+p_{n}\delta_{in}\left\langle dn|j\right\rangle \nonumber\\ & =dp_{i}\delta_{ij}+p_{j}\left\langle i|dj\right\rangle +p_{i}\left\langle di|j\right\rangle \text{,} \end{align} that is, \begin{equation} \left\langle i\left\vert d\rho\right\vert j\right\rangle =dp_{i}\delta _{ij}+p_{j}\left\langle i|dj\right\rangle +p_{i}\left\langle di|j\right\rangle \text{.} \label{1} \end{equation} Note that the orthonormality condition $\left\langle i|j\right\rangle =\delta_{ij}$ implies $\left\langle di|j\right\rangle +\left\langle i|dj\right\rangle =0$, that is \begin{equation} \left\langle di|j\right\rangle =-\left\langle i|dj\right\rangle \text{.} \label{2} \end{equation} Using Eq. (\ref{2}), $\left\langle i\left\vert d\rho\right\vert j\right\rangle $ in Eq. 
(\ref{1}) becomes \begin{equation} \left\langle i\left\vert d\rho\right\vert j\right\rangle =dp_{i}\delta _{ij}+p_{j}\left\langle i|dj\right\rangle -p_{i}\left\langle i|dj\right\rangle =\delta_{ij}dp_{i}+\left( p_{j}-p_{i}\right) \left\langle i|dj\right\rangle \text{,}\nonumber \end{equation} that is, \begin{equation} \left\langle i\left\vert d\rho\right\vert j\right\rangle =\delta_{ij} dp_{i}+\left( p_{j}-p_{i}\right) \left\langle i|dj\right\rangle \text{.} \label{3} \end{equation} Making use of Eq. (\ref{3}), we can now find an explicit expression of the quantity $\left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2}$ in Eq. (\ref{Bures}). Indeed, from Eq. (\ref{3}) we have \begin{align} \left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2} & =\left\langle m|d\rho|n\right\rangle \left\langle m|d\rho|n\right\rangle ^{\ast}\nonumber\\ & =\left\langle m|d\rho|n\right\rangle \left\langle n|d\rho|m\right\rangle \nonumber\\ & =\left[ \delta_{mn}dp_{m}+\left( p_{n}-p_{m}\right) \left\langle m|dn\right\rangle \right] \left[ \delta_{nm}dp_{n}+\left( p_{m} -p_{n}\right) \left\langle n|dm\right\rangle \right] \nonumber\\ & =\delta_{mn}dp_{m}\delta_{nm}dp_{n}+\delta_{mn}dp_{m}\left( p_{m} -p_{n}\right) \left\langle n|dm\right\rangle +\left( p_{n}-p_{m}\right) \left\langle m|dn\right\rangle \delta_{nm}dp_{n}+\nonumber\\ & +\left( p_{n}-p_{m}\right) \left\langle m|dn\right\rangle \left( p_{m}-p_{n}\right) \left\langle n|dm\right\rangle \nonumber\\ & =\delta_{nm}dp_{n}^{2}-\left( p_{n}-p_{m}\right) ^{2}\left\langle m|dn\right\rangle \left\langle n|dm\right\rangle \text{,} \label{4} \end{align} that is, \begin{equation} \left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2}=\delta _{nm}dp_{n}^{2}-\left( p_{n}-p_{m}\right) ^{2}\left\langle m|dn\right\rangle \left\langle n|dm\right\rangle \text{.} \label{5} \end{equation} Using Eq. (\ref{2}), Eq. 
(\ref{5}) reduces to \begin{align} \left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2} & =\delta_{nm}dp_{n}^{2}+\left( p_{n}-p_{m}\right) ^{2}\left\langle dm|n\right\rangle \left\langle n|dm\right\rangle \nonumber\\ & =\delta_{nm}dp_{n}^{2}+\left( p_{n}-p_{m}\right) ^{2}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2}\text{,} \end{align} that is, \begin{equation} \left\vert \left\langle m|d\rho|n\right\rangle \right\vert ^{2}=\delta _{nm}dp_{n}^{2}+\left( p_{n}-p_{m}\right) ^{2}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2}\text{.} \label{6} \end{equation} Finally, substituting Eq. (\ref{6}) into Eq. (\ref{Bures}), $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ becomes \begin{align} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) & =\frac {1}{2}\sum_{n\text{, }m}\frac{\delta_{nm}dp_{n}^{2}+\left( p_{n} -p_{m}\right) ^{2}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2} }{p_{m}+p_{n}}\nonumber\\ & =\frac{1}{4}\sum_{n}\frac{dp_{n}^{2}}{p_{n}}+\frac{1}{2}\sum_{n\neq m}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2}\frac{\left( p_{n}-p_{m}\right) ^{2}}{p_{m}+p_{n}}\text{,} \end{align} that is, \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac{1} {4}\sum_{n}\frac{dp_{n}^{2}}{p_{n}}+\frac{1}{2}\sum_{n\neq m}\left\vert \left\langle n|dm\right\rangle \right\vert ^{2}\frac{\left( p_{n} -p_{m}\right) ^{2}}{p_{m}+p_{n}}\text{.} \label{7} \end{equation} Eq. (\ref{7}) is the explicit expression of $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ we were searching for. As a side remark, note that if both $\left\vert n\right\rangle $ and $\left\vert m\right\rangle \in\ker\left( \rho\right) $, we have that $\left\langle n|d\rho |m\right\rangle =0$. Indeed, from $\rho\left\vert m\right\rangle =\left\vert 0\right\rangle $, we have $d\rho\left\vert m\right\rangle +\rho\left\vert dm\right\rangle =\left\vert 0\right\rangle $. 
Therefore, we have $\left\langle n|d\rho|m\right\rangle +\left\langle n|\rho|dm\right\rangle =\left\langle n|0\right\rangle $, that is, $\left\langle n|d\rho|m\right\rangle =0$ since $\left\langle n|0\right\rangle =0$ and $\left\langle n|\rho|dm\right\rangle =0$. The expression of the Bures metric in Eq. (\ref{7}) can be regarded as given by two contributions: a classical and a nonclassical term. The first term in $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}) is the classical one and is represented by the classical Fisher-Rao information metric between the two probability distributions $\left\{ p_{n}\right\} _{1\leq n\leq N}$ and $\left\{ p_{n}+dp_{n}\right\} _{1\leq n\leq N}$. The second term is the nonclassical one and emerges from the noncommutativity of the density matrices $\rho$ and $\rho+d\rho$ (i.e., $\left[ \rho\text{, }\rho+d\rho\right] =\left[ \rho\text{, }d\rho\right] \neq0$, in general). When $\left[ \rho\text{, }\rho+d\rho\right] =0$, the problem becomes classical and the Bures metric reduces to the classical Fisher-Rao metric. \subsection{The explicit derivation of Zanardi's expression for thermal states} In what follows, we specialize to the functional form of $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}) for thermal states. Specifically, let us focus on mixed quantum states $\rho\left( \beta\text{, }\lambda\right) $ of the form, \begin{equation} \rho\left( \beta\text{, }\lambda\right) \overset{\text{def}}{=} \frac{e^{-\beta\mathrm{H}\left( \lambda\right) }}{\mathcal{Z}} =\frac{e^{-\beta\mathrm{H}\left( \lambda\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}\left( \lambda\right) }\right) }\text{,} \label{ro} \end{equation} with $\mathcal{Z}\overset{\text{def}}{=}\mathrm{tr}\left( e^{-\beta \mathrm{H}\left( \lambda\right) }\right) $ denoting the partition function of the system. The Hamiltonian $\mathrm{H}$ in Eq.
(\ref{ro}) depends on a set of parameters $\left\{ \lambda\right\} $ and is such that $\mathrm{H} \left\vert n\right\rangle =E_{n}\left\vert n\right\rangle $ or, equivalently \begin{equation} \mathrm{H}=\sum_{n}E_{n}\left\vert n\right\rangle \left\langle n\right\vert \text{,} \label{spectral} \end{equation} with $1\leq n\leq N$. Using the spectral decomposition of $\mathrm{H}$ in Eq. (\ref{spectral}), $\rho\left( \beta\text{, }\lambda\right) $ in Eq. (\ref{ro}) can be recast as \begin{equation} \rho\left( \beta\text{, }\lambda\right) =\sum_{n}p_{n}\left\vert n\right\rangle \left\langle n\right\vert =\sum_{n}\frac{e^{-\beta E_{n}} }{\mathcal{Z}}\left\vert n\right\rangle \left\langle n\right\vert \text{,} \label{rouse} \end{equation} with $p_{n}\overset{\text{def}}{=}e^{-\beta E_{n}}/\mathcal{Z}$. Note that the $\lambda$-dependence of $\rho$ in Eq. (\ref{rouse}) appears, in general, in both $E_{n}=E_{n}\left( \lambda\right) $ and $\left\vert n\right\rangle =\left\vert n\left( \lambda\right) \right\rangle $. Before presenting the general case where both $\beta$ and the set of $\left\{ \lambda\right\} $ can change, we focus on the sub-case where $\beta$ is kept constant while $\left\{ \lambda\right\} $ is allowed to change. \subsubsection{Case: $\beta$-constant and $\lambda$-nonconstant} In what follows, assuming that $\beta$ is fixed, we wish to find the expression of $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}) when $\rho$ is given as in Eq. (\ref{rouse}). \ Clearly, we note that the two key quantities that we need to find are $dp_{i}^{2}$ and $\left\langle i|dj\right\rangle $. Let us start with the latter. 
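As a side check, the thermal-state setup of Eqs. (\ref{ro}) and (\ref{rouse}) is straightforward to realize numerically: exponentiating $-\beta\mathrm{H}$ and normalizing by the partition function yields a density matrix that is diagonal in the eigenbasis of $\mathrm{H}$ with populations $p_{n}=e^{-\beta E_{n}}/\mathcal{Z}$. The following is a minimal sketch assuming NumPy/SciPy; the Hamiltonian $\mathrm{H}(h)=h\sigma_{z}+\sigma_{x}$ is a hypothetical illustrative choice, not one of the models discussed in the text.

```python
import numpy as np
from scipy.linalg import expm

def thermal_state(beta, h):
    # Eq. (ro): rho = e^{-beta H}/Z, with the illustrative choice H(h) = h*sigma_z + sigma_x.
    H = np.array([[h, 1.0], [1.0, -h]])
    rho = expm(-beta * H)
    return rho / np.trace(rho), H

beta, h = 1.3, 0.7
rho, H = thermal_state(beta, h)
E, V = np.linalg.eigh(H)
p = np.exp(-beta * E) / np.exp(-beta * E).sum()   # p_n = e^{-beta E_n}/Z, Eq. (rouse)
rho_eig = V.conj().T @ rho @ V                    # rho in the eigenbasis of H
```

In the eigenbasis of $\mathrm{H}$, the state is diagonal with entries $p_{n}$, exactly as in Eq. (\ref{rouse}).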
From $\mathrm{H}\left\vert j\right\rangle =E_{j}\left\vert j\right\rangle $, we have $d\mathrm{H}\left\vert j\right\rangle +\mathrm{H}\left\vert dj\right\rangle =dE_{j}\left\vert j\right\rangle +E_{j}\left\vert dj\right\rangle $.\ Assuming $i\neq j$, we have \begin{equation} \left\langle i\right\vert d\mathrm{H}\left\vert j\right\rangle +\left\langle i\right\vert \mathrm{H}\left\vert dj\right\rangle =\left\langle i\right\vert dE_{j}\left\vert j\right\rangle +\left\langle i\right\vert E_{j}\left\vert dj\right\rangle =dE_{j}\delta_{ij}+E_{j}\left\langle i|dj\right\rangle =E_{j}\left\langle i|dj\right\rangle \text{,} \end{equation} that is, \begin{equation} \left\langle i\right\vert d\mathrm{H}\left\vert j\right\rangle +\left\langle i\right\vert \mathrm{H}\left\vert dj\right\rangle =E_{j}\left\langle i|dj\right\rangle \text{.} \label{8} \end{equation} Observe that, \begin{equation} \left\langle i\right\vert \mathrm{H}\left\vert dj\right\rangle =\left\langle dj\right\vert \mathrm{H}^{\dagger}\left\vert i\right\rangle ^{\ast }=\left\langle dj\right\vert \mathrm{H}\left\vert i\right\rangle ^{\ast} =E_{i}\left\langle dj|i\right\rangle ^{\ast}=E_{i}\left\langle i|dj\right\rangle \text{.} \label{9} \end{equation} Substituting Eq. (\ref{9}) into Eq. (\ref{8}), we get \begin{equation} \left\langle i\right\vert d\mathrm{H}\left\vert j\right\rangle +E_{i} \left\langle i|dj\right\rangle =E_{j}\left\langle i|dj\right\rangle \text{,} \end{equation} that is, \begin{equation} \left\langle i|dj\right\rangle =\frac{\left\langle i\right\vert d\mathrm{H} \left\vert j\right\rangle }{E_{j}-E_{i}}\text{.} \label{10} \end{equation} Eq. (\ref{10}) is the first piece of relevant information we were looking for. Let us now focus on calculating $dp_{i}^{2}$.
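The first-order formula in Eq. (\ref{10}) can be sanity-checked against exact diagonalization at two nearby parameter values: the overlap $\left\vert \left\langle i(h)|j(h+\epsilon)\right\rangle \right\vert $ should match $\epsilon\left\vert \left\langle i|\partial_{h}\mathrm{H}|j\right\rangle \right\vert /\left\vert E_{j}-E_{i}\right\vert $ to leading order in $\epsilon$. The following is a minimal sketch assuming NumPy; the Hamiltonian $\mathrm{H}(h)=h\sigma_{z}+\sigma_{x}$ is a hypothetical illustrative choice, and comparing magnitudes sidesteps eigenvector phase ambiguities.

```python
import numpy as np

def Hmat(h):
    # Hypothetical illustrative Hamiltonian H(h) = h*sigma_z + sigma_x.
    return np.array([[h, 1.0], [1.0, -h]])

h, eps = 0.4, 1e-6
E, V = np.linalg.eigh(Hmat(h))
E2, V2 = np.linalg.eigh(Hmat(h + eps))
dH = (Hmat(h + eps) - Hmat(h)) / eps     # partial_h H (here exactly sigma_z)

i, j = 0, 1
# Eq. (10): <i|dj> = <i|dH|j>/(E_j - E_i), compared via overlap magnitudes.
pred = eps * abs(V[:, i] @ dH @ V[:, j]) / abs(E[j] - E[i])
overlap = abs(V[:, i] @ V2[:, j])
```

The agreement is limited only by the $O(\epsilon^{2})$ remainder of the first-order expansion.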
From $p_{i}\overset{\text{def} }{=}e^{-\beta E_{i}}/\mathcal{Z}$, we get \begin{align} dp_{i} & =d\left( \frac{e^{-\beta E_{i}}}{\mathcal{Z}}\right) \nonumber\\ & =\frac{1}{\mathcal{Z}}d\left( e^{-\beta E_{i}}\right) +e^{-\beta E_{i} }d\left( \frac{1}{\mathcal{Z}}\right) \nonumber\\ & =\frac{1}{\mathcal{Z}}\frac{d}{dE_{i}}\left( e^{-\beta E_{i}}\right) dE_{i}+e^{-\beta E_{i}}\left( -\frac{1}{\mathcal{Z}^{2}}d\mathcal{Z}\right) \nonumber\\ & =-\beta\frac{e^{-\beta E_{i}}}{\mathcal{Z}}dE_{i}-\frac{e^{-\beta E_{i}} }{\mathcal{Z}}\frac{d\mathcal{Z}}{\mathcal{Z}}\nonumber\\ & =-\beta\frac{e^{-\beta E_{i}}}{\mathcal{Z}}dE_{i}-\frac{e^{-\beta E_{i}} }{\mathcal{Z}}\sum_{j}\left( \frac{d\mathcal{Z}}{dE_{j}}\frac{dE_{j} }{\mathcal{Z}}\right) \text{,} \end{align} that is, \begin{equation} dp_{i}=-\beta\frac{e^{-\beta E_{i}}}{\mathcal{Z}}dE_{i}-\frac{e^{-\beta E_{i} }}{\mathcal{Z}}\sum_{j}\left( \frac{d\mathcal{Z}}{dE_{j}}\frac{dE_{j} }{\mathcal{Z}}\right) \text{.} \label{11} \end{equation} At this point, note that \begin{align} \sum_{j}\frac{d\mathcal{Z}}{dE_{j}}\frac{dE_{j}}{\mathcal{Z}} & =\sum _{j}\frac{d}{dE_{j}}\left( \sum_{k}e^{-\beta E_{k}}\right) \frac{dE_{j} }{\mathcal{Z}}\nonumber\\ & =-\beta\sum_{j}\frac{e^{-\beta E_{j}}}{\mathcal{Z}}dE_{j}\nonumber\\ & =-\beta\sum_{j}p_{j}dE_{j}\text{.} \label{12} \end{align} Substituting Eq. (\ref{12}) into Eq. (\ref{11}), we get \begin{equation} dp_{i}=-\beta p_{i}dE_{i}+\beta p_{i}\sum_{j}p_{j}dE_{j}=-\beta p_{i}\left[ dE_{i}-\sum_{j}p_{j}dE_{j}\right] \text{.} \label{13} \end{equation} Eq. (\ref{13}) is the second piece of relevant information we were looking for. We can now calculate $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, } \rho+d\rho\right) $ in Eq. (\ref{7}) by means of Eqs. (\ref{10}) and (\ref{13}). 
We obtain, \begin{align} \frac{1}{4}\sum_{i}\frac{dp_{i}^{2}}{p_{i}} & =\frac{1}{4}\sum_{i}\beta ^{2}\frac{p_{i}^{2}}{p_{i}}\left[ dE_{i}-\sum_{j}p_{j}dE_{j}\right] ^{2}\nonumber\\ & =\frac{\beta^{2}}{4}\sum_{i}p_{i}\left[ dE_{i}-\left\langle dE\right\rangle _{\beta}\right] ^{2}\nonumber\\ & =\frac{\beta^{2}}{4}\left( \left\langle dE^{2}\right\rangle _{\beta }-\left\langle dE\right\rangle _{\beta}^{2}\right) \nonumber\\ & =\frac{\beta^{2}}{4}\left( \left\langle d\mathrm{H}_{d}^{2}\right\rangle _{\beta}-\left\langle d\mathrm{H}_{d}\right\rangle _{\beta}^{2}\right) \text{,} \end{align} that is, \begin{equation} \frac{1}{4}\sum_{i}\frac{dp_{i}^{2}}{p_{i}}=\frac{\beta^{2}}{4}\left( \left\langle d\mathrm{H}_{d}^{2}\right\rangle _{\beta}-\left\langle d\mathrm{H}_{d}\right\rangle _{\beta}^{2}\right) \text{.} \label{14} \end{equation} The quantity $d\mathrm{H}_{d}$ in Eq. (\ref{14}) is defined as \begin{equation} d\mathrm{H}_{d}\overset{\text{def}}{=}\sum_{j}dE_{j}\left\vert j\right\rangle \left\langle j\right\vert \text{,} \end{equation} and is different from $d\mathrm{H}$. For clarity, we also observe that \begin{equation} \left\langle d\mathrm{H}_{d}\right\rangle _{\beta}\overset{\text{def}}{=} \sum_{i}p_{i}dE_{i}\text{, and }\left\langle d\mathrm{H}_{d}^{2}\right\rangle _{\beta}\overset{\text{def}}{=}\sum_{i}p_{i}dE_{i}^{2}\text{.} \end{equation} Finally, using Eqs. 
(\ref{10}) and (\ref{14}), we get \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =\frac {\beta^{2}}{4}\left( \left\langle d\mathrm{H}_{d}^{2}\right\rangle _{\beta }-\left\langle d\mathrm{H}_{d}\right\rangle _{\beta}^{2}\right) +\frac{1} {2}\sum_{n\neq m}\left\vert \frac{\left\langle n|d\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m} }\right) ^{2}}{\mathcal{Z}\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) }\text{.} \label{bures1} \end{equation} The Bures metric $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho +d\rho\right) $ in Eq. (\ref{bures1}) is the Bures metric in Eq. (\ref{7}) between two mixed thermal states $\rho\left( \beta\text{, }\lambda\right) $ and $\left( \rho+d\rho\right) \left( \beta\text{, }\lambda\right) $ when only changes in $\lambda$ are permitted. \subsubsection{Case: $\beta$-nonconstant and $\lambda$-nonconstant} In what follows, we consider the general case where both $\beta$ and the set of $\left\{ \lambda\right\} $ can change. The sub-case where $\beta$ changes while the set of $\left\{ \lambda\right\} $ is kept constant is then obtained as a special case. For simplicity, let us assume we have two parameters, $\beta$ and a single parameter $\lambda$ that we denote with $h$ (a magnetic field intensity, for instance). In this two-dimensional parametric case, we generally have that \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \beta\text{, }h\right) =\left( \begin{array} [c]{cc} d\beta & dh \end{array} \right) \left( \begin{array} [c]{cc} g_{\beta\beta} & g_{\beta h}\\ g_{h\beta} & g_{hh} \end{array} \right) \left( \begin{array} [c]{c} d\beta\\ dh \end{array} \right) =g_{\beta\beta}d\beta^{2}+g_{hh}dh^{2}+2g_{\beta h}d\beta dh\text{,} \label{15} \end{equation} where we used the fact that $g_{h\beta}=g_{\beta h}$. From Eq. 
(\ref{15}), we note that \begin{equation} ds_{\mathrm{Bures}}^{2}\left( \beta\text{, }h\right) =\left\{ \begin{array} [c]{c} g_{\beta\beta}\left( \beta\text{, }h\right) d\beta^{2}\text{, if }h=\text{\textrm{const}.}\\ g_{hh}\left( \beta\text{, }h\right) dh^{2}\text{, if }\beta =\text{\textrm{const}.}\\ g_{\beta\beta}\left( \beta\text{, }h\right) d\beta^{2}+g_{hh}\left( \beta\text{, }h\right) dh^{2}+2g_{\beta h}\left( \beta\text{, }h\right) d\beta dh\text{, if }\beta\neq\text{\textrm{const}. and }h\neq \text{\textrm{const}.} \end{array} \right. \text{.} \end{equation} Recalling $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}), we start by calculating the expression of $dp_{n}$ with $p_{n}=p_{n}\left( h\text{, }\beta\right) \overset{\text{def}}{=}e^{-\beta E_{n}}/\mathcal{Z}$. We observe that $dp_{n}$ can be written as, \begin{equation} dp_{n}=\frac{\partial p_{n}}{\partial h}dh+\frac{\partial p_{n}}{\partial \beta}d\beta\text{,} \label{util1} \end{equation} where $\partial p_{n}/\partial\beta$ is given by, \begin{align} \frac{\partial p_{n}}{\partial\beta} & =\frac{\partial}{\partial\beta }\left( \frac{e^{-\beta E_{n}}}{\mathcal{Z}}\right) \nonumber\\ & =\frac{1}{\mathcal{Z}}\frac{\partial}{\partial\beta}\left( e^{-\beta E_{n}}\right) +e^{-\beta E_{n}}\frac{\partial}{\partial\beta}\left( \frac {1}{\mathcal{Z}}\right) \nonumber\\ & =-\frac{E_{n}}{\mathcal{Z}}e^{-\beta E_{n}}+e^{-\beta E_{n}}\frac{\partial }{\partial\mathcal{Z}}\left( \frac{1}{\mathcal{Z}}\right) \frac {\partial\mathcal{Z}}{\partial\beta}\nonumber\\ & =-p_{n}E_{n}-\frac{e^{-\beta E_{n}}}{\mathcal{Z}}\frac{1}{\mathcal{Z}} \frac{\partial\mathcal{Z}}{\partial\beta}\nonumber\\ & =-p_{n}E_{n}-p_{n}\frac{\partial\ln\mathcal{Z}}{\partial\beta}\nonumber\\ & =-p_{n}E_{n}+p_{n}\frac{1}{\mathcal{Z}}\sum_{m}E_{m}e^{-\beta E_{m}}\nonumber\\ & =-p_{n}E_{n}+p_{n}\sum_{m}p_{m}E_{m}\nonumber\\ & =-p_{n}E_{n}+p_{n}\left\langle \mathrm{H}\right\rangle \text{,} \end{align} that is,
\begin{equation} \frac{\partial p_{n}}{\partial\beta}d\beta=p_{n}\left[ \left\langle \mathrm{H}\right\rangle -E_{n}\right] d\beta\text{.} \label{util2} \end{equation} Note that the expectation value $\left\langle \mathrm{H}\right\rangle $ in Eq. (\ref{util2}) is defined as $\left\langle \mathrm{H}\right\rangle $ $\overset{\text{def}}{=}\sum_{n}p_{n}E_{n}$. From Eq. (\ref{13}), we also have \begin{equation} \frac{\partial p_{n}}{\partial h}dh=-\beta p_{n}\left[ \frac{\partial E_{n} }{\partial h}-\sum_{j}p_{j}\frac{\partial E_{j}}{\partial h}\right] dh=\beta p_{n}\left[ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\partial_{h}E_{n}\right] dh\text{,} \label{util3} \end{equation} where $\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle $ is defined as \begin{equation} \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \overset{\text{def}}{=}\sum_{j}p_{j}\partial_{h}E_{j}\text{.} \label{masini} \end{equation} Using Eqs. (\ref{util1}), (\ref{util2}), and (\ref{util3}), we wish to calculate the term $\left( 1/4\right) \sum_{n}dp_{n}^{2}/p_{n}$ in $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. (\ref{7}). 
Let us begin by observing that \begin{equation} dp_{n}^{2}=\left( \frac{\partial p_{n}}{\partial h}dh+\frac{\partial p_{n} }{\partial\beta}d\beta\right) ^{2}=\left( \partial_{h}p_{n}dh\right) ^{2}+\left( \partial_{\beta}p_{n}d\beta\right) ^{2}+2\partial_{\beta} p_{n}\partial_{h}p_{n}d\beta dh\text{.} \end{equation} Therefore, we get \begin{align} \frac{1}{4}\sum_{n}\frac{dp_{n}^{2}}{p_{n}} & =\frac{1}{4}\sum_{n} \frac{\left( \partial_{h}p_{n}dh\right) ^{2}+\left( \partial_{\beta} p_{n}d\beta\right) ^{2}+2\partial_{\beta}p_{n}\partial_{h}p_{n}d\beta dh}{p_{n}}\nonumber\\ & =\frac{1}{4}\sum_{n}\frac{\left( \partial_{h}p_{n}\right) ^{2}}{p_{n} }dh^{2}+\frac{1}{4}\sum_{n}\frac{\left( \partial_{\beta}p_{n}\right) ^{2} }{p_{n}}d\beta^{2}+\frac{1}{4}\sum_{n}\frac{2\partial_{\beta}p_{n}\partial _{h}p_{n}}{p_{n}}d\beta dh\text{.} \end{align} First, note that \begin{equation} \frac{1}{4}\sum_{n}\frac{\left( \partial_{\beta}p_{n}\right) ^{2}}{p_{n} }d\beta^{2}=\frac{1}{4}\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}\text{,} \label{A} \end{equation} where $\left\langle \mathrm{H}\right\rangle $ and $\left\langle \mathrm{H} ^{2}\right\rangle $ are defined as \begin{equation} \left\langle \mathrm{H}\right\rangle \overset{\text{def}}{=}\sum_{i}p_{i} E_{i}\text{, and }\left\langle \mathrm{H}^{2}\right\rangle \overset{\text{def} }{=}\sum_{i}p_{i}E_{i}^{2}\text{,} \end{equation} respectively. Indeed, using Eq. 
(\ref{util2}), we have \begin{align} \sum_{n}\frac{\left( \partial_{\beta}p_{n}\right) ^{2}}{p_{n}} & =\sum _{n}\frac{p_{n}^{2}\left[ \left\langle \mathrm{H}\right\rangle -E_{n}\right] ^{2}}{p_{n}}\nonumber\\ & =\sum_{n}\frac{p_{n}^{2}\left\langle \mathrm{H}\right\rangle ^{2}+p_{n} ^{2}E_{n}^{2}-2\left\langle \mathrm{H}\right\rangle p_{n}^{2}E_{n}}{p_{n} }\nonumber\\ & =\left\langle \mathrm{H}\right\rangle ^{2}+\left\langle \mathrm{H} ^{2}\right\rangle -2\left\langle \mathrm{H}\right\rangle ^{2}\nonumber\\ & =\left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H} \right\rangle ^{2}\text{.} \end{align} Second, observe that \begin{equation} \frac{1}{4}\sum_{n}\frac{\left( \partial_{h}p_{n}\right) ^{2}}{p_{n}} dh^{2}=\frac{1}{4}\beta^{2}\left\{ \left\langle \left[ \left( \partial _{h}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} dh^{2}\text{,} \label{B} \end{equation} where $\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle $ is given in Eq. (\ref{masini}) and $\left\langle \left[ \left( \partial _{h}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle $ is defined as \begin{equation} \left\langle \left[ \left( \partial_{h}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle \overset{\text{def}}{=}\sum_{i}p_{i}\left( \partial _{h}E_{i}\right) ^{2}\text{.} \end{equation} Indeed, using Eq. 
(\ref{util3}), we have \begin{align} \sum_{n}\frac{\left( \partial_{h}p_{n}\right) ^{2}}{p_{n}} & =\sum _{n}\frac{\left( \beta p_{n}\right) ^{2}\left[ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\partial_{h}E_{n}\right] ^{2}}{p_{n}}\nonumber\\ & =\beta^{2}\sum_{n}\left[ p_{n}\left\langle \left( \partial_{h} \mathrm{H}\right) _{d}\right\rangle ^{2}+p_{n}\left( \partial_{h} E_{n}\right) ^{2}-2p_{n}\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \partial_{h}E_{n}\right] \nonumber\\ & =\beta^{2}\left\{ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle ^{2}+\left\langle \left[ \left( \partial_{h} \mathrm{H}\right) _{d}\right] ^{2}\right\rangle -2\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} \nonumber\\ & =\beta^{2}\left\{ \left\langle \left[ \left( \partial_{h}\mathrm{H} \right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial _{h}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} \text{.} \end{align} Third, we note that \begin{equation} \frac{1}{4}\sum_{n}\frac{2\partial_{\beta}p_{n}\partial_{h}p_{n}}{p_{n}}d\beta dh=\frac{1}{4}2\beta\left[ \left\langle \mathrm{H}\left( \partial _{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \right] d\beta dh\text{.} \label{C} \end{equation} Indeed, using Eqs. 
(\ref{util2}) and (\ref{util3}), we get \begin{align} \sum_{n}\frac{2\partial_{\beta}p_{n}\partial_{h}p_{n}}{p_{n}} & =\sum _{n}\frac{2p_{n}\left[ \left\langle \mathrm{H}\right\rangle -E_{n}\right] \beta p_{n}\left[ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\partial_{h}E_{n}\right] }{p_{n}}\nonumber\\ & =\sum_{n}2\beta\left[ \left\langle \mathrm{H}\right\rangle -E_{n}\right] \left[ \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\partial_{h}E_{n}\right] p_{n}\nonumber\\ & =\sum_{n}2\beta\left[ \left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \partial_{h}E_{n}-E_{n}\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle +E_{n}\partial_{h} E_{n}\right] p_{n}\nonumber\\ & =2\beta\left[ \left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H} \right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle +\left\langle \mathrm{H} \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \right] \nonumber\\ & =2\beta\left[ \left\langle \mathrm{H}\left( \partial_{h}\mathrm{H} \right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \right] \text{,} \end{align} where $\left\langle \mathrm{H}\left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle $ is defined as \begin{equation} \left\langle \mathrm{H}\left( \partial_{h}\mathrm{H}\right) _{d} \right\rangle \overset{\text{def}}{=}\sum_{i}p_{i}E_{i}\partial_{h} E_{i}\text{.} \end{equation} Finally, employing Eqs. (\ref{A}), (\ref{B}), and (\ref{C}), the most general expression of $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ in Eq. 
(\ref{7}) between two mixed thermal states $\rho\left( \beta\text{, }h\right) $ and $\left( \rho+d\rho\right) \left( \beta\text{, }h\right) $, when changes in either of the parameters $\beta$ and $h$ are allowed, becomes \begin{align} ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) & =\frac{1}{4}\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}\nonumber\\ & +\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{h}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{h}\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m}}\right) ^{2}}{\mathcal{Z}\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) }\right\} dh^{2}\nonumber\\ & +\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{h}\mathrm{H}\right) _{d}\right\rangle \right] \right\} d\beta dh\text{.} \label{general} \end{align} Note that $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) $ is the sum of two contributions, the classical Fisher-Rao information metric contribution and the non-classical metric contribution expressed in the summation term on the right-hand side of Eq. (\ref{general}). For later convenience, we also remark that the quadratic term $\left\vert \left\langle n|\partial_{h}\mathrm{H}|m\right\rangle \right\vert ^{2}$ in the summation term on the right-hand side of Eq. (\ref{general}) is invariant under a change of sign of the Hamiltonian of the system. Clearly, from Eq.
(\ref{general}) we find that $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, }\rho+d\rho\right) =(1/4)\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}$ when $h=$\textrm{const}. and only $\beta$ can change. If $\beta=$\textrm{const}., $ds_{\mathrm{Bures}}^{2}\left( \rho\text{, } \rho+d\rho\right) $ in Eq. (\ref{general}) reduces to Eq. (\ref{bures1}). The explicit derivation of Eq. (\ref{general}) ends our calculation of the Bures metric between neighboring thermal states undergoing temperature and/or magnetic field intensity changes as originally presented by Zanardi and collaborators in Ref. \cite{zanardi07}. \section{The Sj\"{o}qvist metric} In this section, we introduce the Sj\"{o}qvist metric \cite{erik20} for nondegenerate mixed states with an explicit derivation. Consider two rank-$N$ neighboring nondegenerate density operators $\rho\left( t\right) $ and $\rho\left( t+dt\right) $ linked by means of a smooth path $t\mapsto \rho\left( t\right) $ specifying the evolution of a given quantum system. The nondegeneracy property implies that the phase of the eigenvectors represents the gauge freedom in the spectral decomposition of the density operators. As a consequence, there exists a one-to-one correspondence between the set of orthogonal rays $\left\{ e^{i\phi_{k}\left( t\right) }\left\vert e_{k}\left( t\right) \right\rangle :0\leq\phi_{k}\left( t\right) <2\pi\right\} _{1\leq k\leq N}$ that specifies the spectral decomposition along the path $t\mapsto\rho\left( t\right) $ and the rank-$N$ nondegenerate density operator $\rho\left( t\right) $. Obviously, if some nonzero eigenvalue of $\rho\left( t\right) $ were degenerate, this correspondence would no longer exist. We present next the explicit derivation of the Sj\"{o}qvist metric.
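This gauge freedom can be illustrated numerically: rephasing each eigenvector changes the rays $e^{i\phi_{k}}\left\vert e_{k}\right\rangle $ but leaves the density operator $\rho$ itself unchanged. The following minimal sketch (the density matrix and the phases are illustrative choices, not taken from the text) makes this point concrete:

```python
import numpy as np

# Illustrative rank-2 nondegenerate density matrix rho = sum_k p_k |e_k><e_k|.
p = np.array([0.7, 0.3])
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)  # columns are |e_k>
rho = (U * p) @ U.conj().T

# Rephase each eigenvector: |e_k> -> e^{i phi_k} |e_k>, 0 <= phi_k < 2*pi.
phi = np.array([1.1, 5.2])
U_rephased = U * np.exp(1j * phi)
rho_rephased = (U_rephased * p) @ U_rephased.conj().T

# The density operator is unchanged: the phases are pure gauge.
assert np.allclose(rho_rephased, rho)
```

For a nondegenerate spectrum the phases are the only residual freedom, which is why fixing them (as done below via the minimization) is meaningful.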
\subsection{The explicit derivation} Consider two neighboring states $\rho\left( t\right) $ and $\rho\left( t+dt\right) $ with spectral decompositions given by $\mathcal{B}\left( t\right) =\left\{ \sqrt{p_{k}\left( t\right) }e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle \right\} _{1\leq k\leq N}$ and $\mathcal{B}\left( t+dt\right) =\left\{ \sqrt{p_{k}\left( t+dt\right) }e^{if_{k}\left( t+dt\right) }\left\vert n_{k}\left( t+dt\right) \right\rangle \right\} _{1\leq k\leq N}$, respectively. The quantity $N$ denotes the rank of the nondegenerate density operator $\rho\left( t\right) $. Consider the infinitesimal distance $d^{2}\left( t\text{, }t+dt\right) $ between $\rho\left( t\right) $ and $\rho\left( t+dt\right) $ defined as \begin{equation} d^{2}\left( t\text{, }t+dt\right) \overset{\text{def}}{=}\sum_{k=1} ^{N}\left\Vert \sqrt{p_{k}\left( t\right) }e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle -\sqrt{p_{k}\left( t+dt\right) }e^{if_{k}\left( t+dt\right) }\left\vert n_{k}\left( t+dt\right) \right\rangle \right\Vert ^{2}\text{.} \label{distb} \end{equation} The Sj\"{o}qvist metric is defined as the minimum of $d^{2}\left( t\text{, }t+dt\right) $ in Eq. (\ref{distb}). 
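Since the distance in Eq. (\ref{distb}) depends on the arbitrary phases $f_{k}$, different phase conventions give different distances, and the Sj\"{o}qvist metric emerges only after minimizing over them. A small numerical sketch (with hypothetical probabilities, phases, and basis vectors) illustrates this phase dependence:

```python
import numpy as np

def d2(p1, f1, V1, p2, f2, V2):
    """Squared ensemble distance of Eq. (distb); the columns of V1, V2 are the |n_k>."""
    total = 0.0
    for k in range(len(p1)):
        u = np.sqrt(p1[k]) * np.exp(1j * f1[k]) * V1[:, k]
        v = np.sqrt(p2[k]) * np.exp(1j * f2[k]) * V2[:, k]
        total += np.linalg.norm(u - v) ** 2
    return total

p = np.array([0.6, 0.4])        # illustrative eigenvalues
V = np.eye(2, dtype=complex)    # illustrative eigenvectors
zero = np.zeros(2)

# Identical ensembles with identical phases: zero distance.
print(d2(p, zero, V, p, zero, V))                      # 0.0
# Same state, different phase convention on |n_0>: nonzero distance.
print(d2(p, zero, V, p, np.array([np.pi, 0.0]), V))    # approx 2.4 (= 4 * 0.6)
```

The second call shows that, without the minimization carried out below, $d^{2}$ would assign a nonzero distance to two decompositions of the very same state.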
Note that the squared norm term $\left\Vert \sqrt{p_{k}\left( t\right) }e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle -\sqrt{p_{k}\left( t+dt\right) }e^{if_{k}\left( t+dt\right) }\left\vert n_{k}\left( t+dt\right) \right\rangle \right\Vert ^{2}$ can be written as \begin{align} & \left( \sqrt{p_{k}\left( t\right) }e^{-if_{k}\left( t\right) }\left\langle n_{k}\left( t\right) \right\vert -\sqrt{p_{k}\left( t+dt\right) }e^{-if_{k}\left( t+dt\right) }\left\langle n_{k}\left( t+dt\right) \right\vert \right) \nonumber\\ & \left( \sqrt{p_{k}\left( t\right) }e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle -\sqrt{p_{k}\left( t+dt\right) }e^{if_{k}\left( t+dt\right) }\left\vert n_{k}\left( t+dt\right) \right\rangle \right) \nonumber\\ & =p_{k}\left( t\right) +p_{k}\left( t+dt\right) -\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle +\nonumber\\ & -\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }e^{-i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle \text{,} \end{align} with $e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle +e^{-i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle $ equal to \begin{align} & 2\operatorname{Re}\left\{ e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\} \nonumber\\ & =2\operatorname{Re}\left\{ e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert e^{i\arg\left( \left\langle n_{k}\left( t\right) 
|n_{k}\left( t+dt\right) \right\rangle \right) }\right\} \nonumber\\ & =2\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert \operatorname{Re}\left\{ e^{i\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) \right] }e^{i\arg\left( \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right) }\right\} \nonumber\\ & =2\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert \cos\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) +\arg\left( \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right) \right] \text{.} \end{align} Note that $f_{k}\left( t+dt\right) =f_{k}\left( t\right) +\dot{f} _{k}\left( t\right) dt+O\left( dt^{2}\right) $ and $\left\vert n_{k}\left( t+dt\right) \right\rangle =\left\vert n_{k}\left( t\right) \right\rangle +\left\vert \dot{n}_{k}\left( t\right) \right\rangle dt+O\left( dt^{2}\right) $. Therefore, we have \begin{equation} \cos\left[ f_{k}\left( t+dt\right) -f_{k}\left( t\right) +\arg\left( \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right) \right] =\cos\left\{ \dot{f}_{k}\left( t\right) dt+\arg\left[ 1+\left\langle n_{k}\left( t\right) |\dot{n}_{k}\left( t\right) \right\rangle dt\right] +O\left( dt^{2}\right) \right\} \text{.} \end{equation} Setting $\dot{f}_{k}\left( t\right) dt+\arg\left[ 1+\left\langle n_{k}\left( t\right) |\dot{n}_{k}\left( t\right) \right\rangle dt\right] +O\left( dt^{2}\right) \overset{\text{def}}{=}\lambda_{k}\left( t\text{, }t+dt\right) $, the infinitesimal distance $d^{2}\left( t\text{, }t+dt\right) $ becomes \begin{equation} d^{2}\left( t\text{, }t+dt\right) =2-2\sum_{k=1}^{N}\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert \cos\left[ \lambda_{k}\left( t\text{, }t+dt\right) \right] \text{.} \end{equation} Then, the Sj\"{o}qvist metric 
$ds_{\mathrm{Sj\ddot{o}qvist}}^{2}$ is the minimum of $d^{2}\left( t\text{, }t+dt\right) $, $d_{\min}^{2}\left( t\text{, }t+dt\right) $, and is obtained when $\lambda_{k}\left( t\text{, }t+dt\right) $ equals zero for any $1\leq k\leq N$. Its expression is given by, \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=2-2\sum_{k=1}^{N}\sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert \text{.} \label{zero1} \end{equation} It is worthwhile emphasizing that the minimum of $d^{2}\left( t\text{, }t+dt\right) $ is achieved by selecting phases $\left\{ f_{k}\left( t\right) \text{, }f_{k}\left( t+dt\right) \right\} $ such that \begin{equation} \dot{f}_{k}\left( t\right) dt+\arg\left[ 1+\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt+O\left( dt^{2}\right) \right] =0 \label{condo1} \end{equation} Observing that $e^{\left\langle n_{k}\left( t\right) \left\vert \dot{n} _{k}\left( t\right) \right. \right\rangle dt}=1+\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt+O\left( dt^{2}\right) $ is such that $\arg\left[ e^{\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt}\right] =-i\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle dt$, Eq. (\ref{condo1}) can be rewritten to the first order in $dt$ as \begin{equation} \dot{f}_{k}\left( t\right) -i\left\langle n_{k}\left( t\right) \left\vert \dot{n}_{k}\left( t\right) \right. \right\rangle =0\text{.} \label{condo2} \end{equation} Eq. (\ref{condo2}) denotes the parallel transport condition $\left\langle \psi_{k}\left( t\right) \left\vert \psi_{k}\left( t\right) \right. 
\right\rangle =0$ where $\left\vert \psi_{k}\left( t\right) \right\rangle \overset{\text{def}}{=}e^{if_{k}\left( t\right) }\left\vert n_{k}\left( t\right) \right\rangle $ is associated with individual pure state paths in the chosen ensemble that defines the mixed state $\rho\left( t\right) $ \cite{aharonov87}. To find a more useful expression of $ds_{\mathrm{Sj\ddot {o}qvist}}^{2}$, let us start by observing that, \begin{equation} \sqrt{p_{k}\left( t\right) p_{k}\left( t+dt\right) }=p_{k}\left( t\right) \sqrt{1+\frac{dp_{k}\left( t\right) }{p_{k}\left( t\right) } }=p_{k}+\frac{1}{2}\dot{p}_{k}dt-\frac{1}{8}\frac{\dot{p}_{k}^{2}}{p_{k} }dt^{2}+O\left( dt^{2}\right) \text{.} \label{zero2} \end{equation} Furthermore, to the second order in $dt$, the state $\left\vert n_{k}\left( t+dt\right) \right\rangle $ can be written as \begin{equation} \left\vert n_{k}\left( t+dt\right) \right\rangle =\left\vert n_{k}\left( t\right) \right\rangle +\left\vert \dot{n}_{k}\left( t\right) \right\rangle dt+\frac{1}{2}\left\vert \ddot{n}_{k}\left( t\right) \right\rangle dt^{2}+O\left( dt^{2}\right) \text{.} \label{oro1} \end{equation} Therefore, to the second order in $dt$, the quantum overlap $\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle $ becomes \begin{equation} \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle =\left\langle n_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle +\left\langle n_{k}\left( t\right) |\dot{n}_{k}\left( t\right) \right\rangle dt+\frac{1}{2}\left\langle n_{k}\left( t\right) |\ddot{n} _{k}\left( t\right) \right\rangle dt^{2}+O\left( dt^{2}\right) \end{equation} Let us focus now on calculating $\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert =\sqrt {\left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \right\vert ^{2}}$, where \begin{equation} \left\vert \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle 
\right\vert ^{2}=\left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle \text{.} \label{oro2} \end{equation} Using Eq. (\ref{oro1}), Eq. (\ref{oro2}) becomes \begin{align} \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle & \approx\left[ \left\langle n_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle +\left\langle n_{k}\left( t\right) |\dot{n}_{k}\left( t\right) \right\rangle dt+\frac{1}{2}\left\langle n_{k}\left( t\right) |\ddot{n}_{k}\left( t\right) \right\rangle dt^{2}\right] \cdot\nonumber\\ & \cdot\left[ \left\langle n_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle +\left\langle \dot{n}_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle dt+\frac{1}{2}\left\langle \ddot{n}_{k}\left( t\right) |n_{k}\left( t\right) \right\rangle dt^{2}\right] \nonumber\\ & =\left[ 1+\left\langle n_{k}|\dot{n}_{k}\right\rangle dt+\frac{1} {2}\left\langle n_{k}|\ddot{n}_{k}\right\rangle dt^{2}\right] \left[ 1+\left\langle \dot{n}_{k}|n_{k}\right\rangle dt+\frac{1}{2}\left\langle \ddot{n}_{k}|n_{k}\right\rangle dt^{2}\right] \nonumber\\ & \approx1+\left\langle \dot{n}_{k}|n_{k}\right\rangle dt+\frac{1} {2}\left\langle \ddot{n}_{k}|n_{k}\right\rangle dt^{2}+\left\langle n_{k} |\dot{n}_{k}\right\rangle dt+\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2}+\frac{1}{2}\left\langle n_{k}|\ddot{n}_{k}\right\rangle dt^{2}\nonumber\\ & =1+\left[ \left\langle \dot{n}_{k}|n_{k}\right\rangle +\left\langle n_{k}|\dot{n}_{k}\right\rangle \right] dt+\left\langle n_{k}|\dot{n} _{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2}+\frac {1}{2}\left[ \left\langle n_{k}|\ddot{n}_{k}\right\rangle +\left\langle \ddot{n}_{k}|n_{k}\right\rangle \right] dt^{2}\nonumber\\ & =1+\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n} 
_{k}|n_{k}\right\rangle dt^{2}-\left\langle \dot{n}_{k}|\dot{n}_{k} \right\rangle dt^{2}\text{,} \end{align} that is, \begin{equation} \left\langle n_{k}\left( t\right) |n_{k}\left( t+dt\right) \right\rangle \left\langle n_{k}\left( t+dt\right) |n_{k}\left( t\right) \right\rangle =1+\left\langle n_{k}|\dot{n}_{k}\right\rangle \left\langle \dot{n}_{k} |n_{k}\right\rangle dt^{2}-\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle dt^{2}+O\left( dt^{2}\right) \text{,} \label{zero3} \end{equation} since $\left\langle n_{k}|n_{k}\right\rangle =1$ implies $\left\langle \dot {n}_{k}|n_{k}\right\rangle +\left\langle n_{k}|\dot{n}_{k}\right\rangle =0$ and $\left\langle n_{k}|\ddot{n}_{k}\right\rangle +\left\langle \ddot{n} _{k}|n_{k}\right\rangle =-2\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle $. Finally, using Eqs. (\ref{zero2}) and (\ref{zero3}) along with noting that $\sum_{k}\dot{p}_{k}=0$, the Sj\"{o}qvist metric $ds_{\mathrm{Sj\ddot{o} qvist}}^{2}$ in Eq. (\ref{zero1}) becomes \begin{align} ds_{\mathrm{Sj\ddot{o}qvist}}^{2} & \approx2-2\sum_{k=1}^{N}\left( p_{k}+\frac{1}{2}\dot{p}_{k}dt-\frac{1}{8}\frac{\dot{p}_{k}^{2}}{p_{k}} dt^{2}\right) \left( 1+\frac{1}{2}\left\langle n_{k}|\dot{n}_{k} \right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2}-\frac{1} {2}\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle dt^{2}\right) \nonumber\\ & \approx2-2\sum_{k=1}^{N}p_{k}-\sum_{k=1}^{N}p_{k}\left\langle n_{k}|\dot {n}_{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle dt^{2} +\sum_{k=1}^{N}p_{k}\left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle dt^{2}+\frac{1}{4}\sum_{k=1}^{N}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}\text{,} \end{align} that is, \begin{align} ds_{\mathrm{Sj\ddot{o}qvist}}^{2} & \approx\frac{1}{4}\sum_{k=1}^{N} \frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}+\sum_{k=1}^{N}p_{k}\left[ \left\langle \dot{n}_{k}|\dot{n}_{k}\right\rangle -\left\langle n_{k}|\dot{n} _{k}\right\rangle \left\langle \dot{n}_{k}|n_{k}\right\rangle \right] dt^{2}\nonumber\\ & 
\approx\frac{1}{4}\sum_{k=1}^{N}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}+\sum_{k=1}^{N}p_{k}\left[ \left\langle \dot{n}_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |\dot{n}_{k}\right\rangle \right] dt^{2}\text{,} \label{89} \end{align} where \textrm{I} in Eq. (\ref{89}) denotes the identity operator on the $N$-dimensional Hilbert space. Finally, neglecting terms that are smaller than $O\left( dt^{2}\right) $ in Eq. (\ref{zero1}) and defining $ds_{k}^{2}\overset{\text{def}}{=}\left[ \left\langle \dot{n}_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |\dot{n}_{k}\right\rangle \right] dt^{2}$, the expression of the Sj\"{o}qvist metric will be formally taken to be \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\overset{\text{def}}{=}\frac{1}{4}\sum_{k=1}^{N}\frac{\dot{p}_{k}^{2}}{p_{k}}dt^{2}+\sum_{k=1}^{N}p_{k}ds_{k}^{2}\text{.} \label{vetta} \end{equation} The derivation of Eq. (\ref{vetta}) concludes our explicit calculation of the Sj\"{o}qvist metric for nondegenerate mixed states. Interestingly, note that $ds_{k}^{2}\overset{\text{def}}{=}\left\langle \dot{n}_{k}\left\vert \left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) \right\vert \dot{n}_{k}\right\rangle dt^{2}$ in Eq. (\ref{vetta}) can be written as $ds_{k}^{2}=\left\langle \nabla n_{k}\left\vert \nabla n_{k}\right. \right\rangle $ with $\left\vert \nabla n_{k}\right\rangle \overset{\text{def}}{=}\mathrm{P}_{\bot}^{\left( k\right) }\left\vert \dot{n}_{k}\right\rangle $ being the covariant derivative of $\left\vert n_{k}\right\rangle $ and $\mathrm{P}_{\bot}^{\left( k\right) }\overset{\text{def}}{=}\mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert $ denoting the projector onto states perpendicular to $\left\vert n_{k}\right\rangle $. In analogy to the Bures metric case (see the comment right below Eq.
(\ref{general})), we stress for later convenience that the quadratic term $ds_{k}^{2}$ does not change under a change of sign of the Hamiltonian of the system. The expression of the Sj\"{o}qvist metric in Eq. (\ref{vetta}) can be viewed as the sum of two contributions, a classical and a nonclassical term. The first term in Eq. (\ref{vetta}) is the classical one and is represented by the classical Fisher-Rao information metric between the two probability distributions $\left\{ p_{k}\right\} _{1\leq k\leq N}$ and $\left\{ p_{k}+dp_{k}\right\} _{1\leq k\leq N}$. The second term is the nonclassical one and is represented by a weighted average of pure state Fubini-Study metrics along directions specified by the state vectors $\left\{ \left\vert n_{k}\right\rangle \right\} _{1\leq k\leq N}$. We are now ready to introduce our Hamiltonian models. \section{The Hamiltonian Models} In this section, we present two Hamiltonian models. The first Hamiltonian model specifies a spin-$1/2$ particle in a uniform and time-independent external magnetic field oriented along the $z$-axis. The second Hamiltonian model, instead, describes a superconducting flux qubit. Finally, we construct the two corresponding parametric families of thermal states by bringing these two systems into thermal equilibrium with a reservoir at finite and non-zero temperature $T$.\begin{figure} \caption{Schematic depiction of a spin qubit (a) and a superconducting flux qubit (b) in thermal equilibrium with a reservoir at non-zero temperature T. The spin qubit in (a) has opposite orientations of the spin along the quantization axis as its two states. The superconducting flux qubit in (b), instead, has circulating currents of opposite sign as its two states.} \end{figure} \subsection{Spin-1/2 qubit Hamiltonian} Consider a spin-$1/2$ particle represented by an electron of mass $m$ and charge $-e$ with $e\geq0$ immersed in an external magnetic field $\vec{B}\left( t\right) $.
From a quantum-mechanical perspective, the Hamiltonian of this system can be described by the Hermitian operator $\mathrm{H}\left( t\right) $ given by $\mathrm{H}\left( t\right) \overset{\text{def}}{=}-\vec{\mu}\mathbf{\cdot}\vec{B}\left( t\right) $ \cite{sakurai}, with $\vec{\mu}$ denoting the electron magnetic moment operator. The quantity $\vec{\mu}$ is defined as $\vec{\mu}\overset{\text{def}}{=}-\left( e/m\right) \vec{s}$ with $\vec{s}\overset{\text{def}}{=}\left( \hslash/2\right) \vec{\sigma}$ being the spin operator. Clearly, $\hslash\overset{\text{def}}{=}h/(2\pi)$ is the reduced Planck constant and $\vec{\sigma}\overset{\text{def}}{=}\left( \sigma_{x}\text{, }\sigma_{y}\text{, }\sigma_{z}\right) $ is the usual Pauli spin vector operator. Assuming a time-independent magnetic field along the $z$-direction given by $\vec{B}\left( t\right) =B_{0}\hat{z}$ and introducing the frequency $\omega\overset{\text{def}}{=}(e/m)B_{0}$, the spin-$1/2$ qubit (SQ) Hamiltonian becomes \begin{equation} \mathrm{H}_{\mathrm{SQ}}\left( \omega\right) \overset{\text{def}}{=}\frac{\hslash\omega}{2}\sigma_{z}\text{,} \label{spinH} \end{equation} where $\sigma_{z}\overset{\text{def}}{=}\left\vert \uparrow\right\rangle \left\langle \uparrow\right\vert -\left\vert \downarrow\right\rangle \left\langle \downarrow\right\vert $ with $\left\vert \uparrow\right\rangle $ and $\left\vert \downarrow\right\rangle $ denoting the spin-up and the spin-down quantum states, respectively. Observe that with the sign convention used for $\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) $ in Eq.
(\ref{spinH}), we have that $\left\vert \downarrow\right\rangle $ ($\left\vert \uparrow\right\rangle $) denotes the ground (excited) state of the system with energy $-\hslash\omega/2$ ($+\hslash\omega/2$). \subsection{Superconducting flux qubit Hamiltonian} It is known that a qubit is a two-level (or two-state) quantum system and, moreover, it is possible to realize the two levels in a number of ways. For example, the two levels can be regarded as the spin-up and spin-down of an electron, or as the vertical and horizontal polarization of a single photon. Interestingly, the two levels of a qubit can also be realized as the supercurrent flowing in the anti-clockwise and clockwise directions in a superconducting loop \cite{clarke08,devoret13}. A flux qubit is a superconducting loop interrupted by one or three Josephson junctions (i.e., a dissipationless device with a nonlinear inductance). An arbitrary flux qubit can be described as a superposition of two persistent current basis states. The two quantum states are total magnetic flux $\Phi$ pointing up $\left\vert \uparrow\right\rangle $ and $\Phi$ pointing down $\left\vert \downarrow\right\rangle $. Alternatively, as previously mentioned, the two levels of the quantum system can be described as the supercurrent $I_{q}$ circulating in the loop anti-clockwise and $I_{q}$ circulating clockwise. The Hamiltonian of a superconducting flux qubit (\textrm{SFQ}) in the persistent current basis $\left\{ \left\vert \uparrow\right\rangle \text{, }\left\vert \downarrow\right\rangle \right\} $ is given by \cite{chiorescu03,pekola07,paauw09,pekola16}, \begin{equation} \mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) \overset{\text{def}}{=}-\frac{\hslash}{2}\left( \Delta\sigma_{x}+\epsilon\sigma_{z}\right) \text{.} \label{superH} \end{equation} In Eq.
(\ref{superH}), $\hslash\overset{\text{def}}{=}h/\left( 2\pi\right) $ is the reduced Planck constant, while $\sigma_{x}$ and $\sigma_{z}$ are Pauli matrices. Furthermore, $\hslash\epsilon\overset{\text{def}}{=}2I_{q}\left( \Phi_{e}-\frac{\Phi_{0}}{2}\right) $ is the magnetic energy bias defined in terms of the supercurrent $I_{q}$, the externally applied magnetic flux $\Phi_{e}$, and the magnetic flux quantum $\Phi_{0}\overset{\text{def} }{=}h/\left( 2e\right) $ with $e$ being the absolute value of the electron charge. Finally, $\hslash\Delta$ is the energy gap at the degeneracy point specified by the relation $\Phi_{e}=\Phi_{0}/2$ (i.e., $\epsilon=0$) and represents the minimum splitting of the energy levels of the ground state $\left\vert g\right\rangle $ and the first excited state $\left\vert e\right\rangle $ of the superconducting qubit. At the gap, the coherence properties of the qubit are optimal. Away from the degeneracy point, $\epsilon\neq0$ and the energy-level splitting becomes $\hslash\nu \overset{\text{def}}{=}\hslash\sqrt{\epsilon^{2}+\Delta^{2}}$, with $\nu$ being the transition angular frequency of the qubit. The energy level splitting $\hslash\Delta$ depends on the critical current of the three Josephson junctions and their capacitance \cite{paauw09}. For flux qubits one has $\Delta\sim E_{C}/E_{J}$ with $E_{C}$ and $E_{J}$ denoting the Cooper pair charging energy and the Josephson coupling energy \cite{pekola16}, respectively. In summary, a flux qubit can be represented by a double-well potential whose shape (symmetrical versus asymmetrical) can be tuned with the externally applied magnetic flux $\Phi_{e}$. 
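The energy-level splitting $\hslash\nu=\hslash\sqrt{\epsilon^{2}+\Delta^{2}}$ quoted above can be checked by direct diagonalization of $\mathrm{H}_{\mathrm{SFQ}}$ in Eq. (\ref{superH}); the following sketch works in units with $\hslash=1$ and uses illustrative parameter values:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

Delta, eps = 0.8, 0.6   # illustrative gap and bias parameters (hbar = 1)

H_sfq = -0.5 * (Delta * sigma_x + eps * sigma_z)
E = np.linalg.eigvalsh(H_sfq)        # eigenvalues in ascending order

nu = np.sqrt(eps**2 + Delta**2)      # transition angular frequency
assert np.isclose(E[1] - E[0], nu)   # splitting equals hbar * nu
```

With these values $\nu=\sqrt{0.36+0.64}=1$, so the two levels sit at $\mp 1/2$.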
When $\Phi_{e}=\Phi_{0}/2$, the double-well is symmetric, the energy eigenstates (i.e., the ground state and first excited state $\left\vert g\right\rangle $ and $\left\vert e\right\rangle $, respectively) are symmetric (i.e., $\left\vert g\right\rangle \overset{\text{def}}{=}\left[ \left\vert \uparrow\right\rangle +\left\vert \downarrow\right\rangle \right] /\sqrt{2}$) and antisymmetric (i.e., $\left\vert e\right\rangle \overset{\text{def}}{=}\left[ \left\vert \uparrow\right\rangle -\left\vert \downarrow\right\rangle \right] /\sqrt{2}$) superpositions of the two states $\left\vert \uparrow\right\rangle $ and $\left\vert \downarrow\right\rangle $ and, finally, the splitting of the energy levels of $\left\vert g\right\rangle $ and $\left\vert e\right\rangle $ is $\hslash\Delta$. Instead, when $\Phi_{e}\neq\Phi_{0}/2$, the double-well is not symmetric, the energy eigenstates are arbitrary superpositions of the basis states $\left\vert \uparrow\right\rangle $ and $\left\vert \downarrow\right\rangle $ (i.e., $\alpha\left\vert \uparrow\right\rangle \pm\beta\left\vert \downarrow\right\rangle $ with $\left\vert \alpha\right\vert ^{2}+\left\vert \beta\right\vert ^{2}=1$) and, finally, the energy gap becomes $\hslash\nu\overset{\text{def}}{=}\hslash\sqrt{\epsilon^{2}+\Delta^{2}}$. For more details on the theory underlying superconducting flux qubits, we refer to Ref. \cite{clarke08}. The transition from (isolated) physical systems specified by pure states evolving according to the Hamiltonians in Eqs. (\ref{spinH}) and (\ref{superH}) to the same (open) physical systems described by mixed quantum states can be explained as follows. Assume a quantum system specified by a Hamiltonian $\mathrm{H}$ is in thermal equilibrium with a reservoir at non-zero temperature $T$.
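At the degeneracy point ($\epsilon=0$), the symmetric/antisymmetric structure of the eigenstates described above is easy to verify numerically (again with $\hslash=1$ and an illustrative value of $\Delta$):

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
Delta = 0.8                      # illustrative gap parameter (hbar = 1)

H = -0.5 * Delta * sigma_x       # eps = 0: symmetric double well
E, V = np.linalg.eigh(H)         # ascending eigenvalues; columns of V are eigenvectors

g = np.array([1, 1]) / np.sqrt(2)    # (|up> + |down>)/sqrt(2)
e = np.array([1, -1]) / np.sqrt(2)   # (|up> - |down>)/sqrt(2)

assert np.isclose(abs(np.vdot(g, V[:, 0])), 1.0)   # ground state is symmetric
assert np.isclose(abs(np.vdot(e, V[:, 1])), 1.0)   # excited state is antisymmetric
assert np.isclose(E[1] - E[0], Delta)              # splitting is hbar * Delta
```

The overlaps are taken in absolute value because each eigenvector is only defined up to the phase (gauge) freedom discussed in the previous section.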
Then, following the principles of quantum statistical mechanics \cite{huang87}, the system has temperature $T$ and its state is described by a thermal state \cite{strocchi08} specified by a density matrix $\rho$ given by, \begin{equation} \rho\overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}}}{\mathrm{tr}\left( e^{-\beta\mathrm{H}}\right) }\text{.} \label{densityma} \end{equation} In Eq. (\ref{densityma}), $\beta\overset{\text{def}}{=}\left( k_{B}T\right) ^{-1}$ denotes the so-called inverse temperature, while $k_{B}$ is the Boltzmann constant. In what follows, we shall consider two families of mixed quantum thermal states given by \begin{equation} \rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) }\right) }\text{ and }\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) \overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \epsilon\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \epsilon\right) }\right) }\text{.} \label{densita} \end{equation} Note that in $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{densita}), we assume that the parameter $\Delta$ is fixed. For a work on how to tune the energy gap $\Delta$ in a flux qubit from an experimental standpoint, we refer to Ref. \cite{paauw09}. In Fig. $1$, we present a schematic depiction of a spin qubit and a superconducting flux qubit in thermal equilibrium with a reservoir at non-zero temperature $T$. \section{Applications} In this section, we calculate both the Sj\"{o}qvist and the Bures metrics for each one of the two distinct families of parametric thermal states mentioned in the previous section.
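Thermal states of the form in Eq. (\ref{densita}) can be constructed numerically from the eigendecomposition of the Hamiltonian, avoiding an explicit matrix exponential. This is a minimal sketch with illustrative parameter values, in units with $\hslash=k_{B}=1$:

```python
import numpy as np

def thermal_state(H, beta):
    """Gibbs state rho = exp(-beta H) / tr[exp(-beta H)] via eigendecomposition."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * E)
    return (V * (w / w.sum())) @ V.conj().T   # sum_n p_n |n><n|

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
beta, omega = 0.9, 1.3                        # illustrative inverse temperature and frequency
rho_sq = thermal_state(0.5 * omega * sigma_z, beta)

# rho_sq is a valid state: Hermitian with unit trace and nonnegative spectrum.
assert np.isclose(np.trace(rho_sq).real, 1.0)
assert np.allclose(rho_sq, rho_sq.conj().T)
```

The same helper applies verbatim to $\mathrm{H}_{\mathrm{SFQ}}$, where the Hamiltonian is no longer diagonal in the persistent current basis.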
From our comparative investigation, we find that the two metrics coincide for the first Hamiltonian model (electron in a constant magnetic field along the $z$-direction), while they differ for the second Hamiltonian model (superconducting flux qubit). \subsection{Spin qubits} Let us consider a system with a Hamiltonian described by $\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) \overset{\text{def}}{=}\left( \hslash\omega/2\right) \sigma_{z}$ in Eq. (\ref{spinH}). Observe that $\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) $ can be recast as \begin{equation} \mathrm{H}_{\mathrm{SQ}}\left( \omega\right) =\sum_{n=0}^{1}E_{n}\left\vert n\right\rangle \left\langle n\right\vert =\frac{\hslash\omega}{2}\left\vert 0\right\rangle \left\langle 0\right\vert -\frac{\hslash\omega}{2}\left\vert 1\right\rangle \left\langle 1\right\vert \text{,} \label{h1} \end{equation} where $E_{0}\overset{\text{def}}{=}\hslash\omega/2$, $E_{1}\overset{\text{def}}{=}-\hslash\omega/2$, and $\left\{ \left\vert n\right\rangle \right\} \overset{\text{def}}{=}\left\{ \left\vert 0\right\rangle =\left\vert \uparrow\right\rangle \text{, }\left\vert 1\right\rangle =\left\vert \downarrow\right\rangle \right\} $. For clarity, note that $\left\vert 1\right\rangle =\left\vert \downarrow\right\rangle $ ($\left\vert 0\right\rangle =\left\vert \uparrow\right\rangle $) denotes here the ground (excited) state corresponding to the lowest (highest) energy level with $E_{1}\overset{\text{def}}{=}-\hslash\omega/2$ ($E_{0}\overset{\text{def}}{=}\hslash\omega/2$). Observe that the thermal state $\rho_{\mathrm{SQ}}$ emerging from the Hamiltonian $\mathrm{H}_{\mathrm{SQ}}$ in Eq.
(\ref{h1}) can be written as \begin{equation} \rho_{\mathrm{SQ}}=\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=}\frac{e^{-\beta\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) }\right) }\text{.} \label{mike1} \end{equation} The thermal state $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{mike1}) can be rewritten as, \begin{align} \rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) & =\frac {e^{-\beta\frac{\hslash\omega}{2}\sigma_{z}}}{\mathrm{tr}\left( e^{-\beta\frac{\hslash\omega}{2}\sigma_{z}}\right) }\nonumber\\ & =\frac{\left( \begin{array} [c]{cc} e^{-\beta\frac{\hslash\omega}{2}} & 0\\ 0 & e^{\beta\frac{\hslash\omega}{2}} \end{array} \right) }{e^{-\beta\frac{\hslash\omega}{2}}+e^{\beta\frac{\hslash\omega}{2}} }\nonumber\\ & =\frac{\left( \begin{array} [c]{cc} \cosh\left( \beta\frac{\hslash\omega}{2}\right) -\sinh\left( \beta \frac{\hslash\omega}{2}\right) & 0\\ 0 & \cosh\left( \beta\frac{\hslash\omega}{2}\right) +\sinh\left( \beta \frac{\hslash\omega}{2}\right) \end{array} \right) }{2\cosh\left( \beta\frac{\hslash\omega}{2}\right) }\nonumber\\ & =\frac{1}{2}\left( \begin{array} [c]{cc} 1-\tanh\left( \beta\frac{\hslash\omega}{2}\right) & 0\\ 0 & 1+\tanh\left( \beta\frac{\hslash\omega}{2}\right) \end{array} \right) \nonumber\\ & =\frac{1}{2}\left[ \mathrm{I}-\tanh\left( \beta\frac{\hslash\omega} {2}\right) \sigma_{z}\right] \text{,} \end{align} that is, \begin{equation} \rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) =\frac{1}{2}\left[ \mathrm{I}-\tanh\left( \beta\frac{\hslash\omega}{2}\right) \sigma _{z}\right] \text{.} \label{ro1} \end{equation} In what follows, we shall use $\rho_{\mathrm{SQ}}\left( \beta\text{, } \omega\right) $ in Eq. (\ref{ro1}) to calculate the Bures and the Sj\"{o}qvist metrics. \subsubsection{The Bures metric} We begin by noticing that $ds_{\mathrm{Bures}}^{2}$ in Eq. 
(\ref{general}) becomes in our case \begin{align} ds_{\mathrm{Bures}}^{2} & =\frac{1}{4}\left[ \left\langle \mathrm{H} ^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}\nonumber\\ & +\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial _{\omega}\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) }\right\} d\omega^{2}+\nonumber\\ & +\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\omega}\mathrm{H} \right) _{d}\right\rangle \right] \right\} d\beta d\omega\text{,} \label{general1} \end{align} where, for simplicity of notation, we denote $\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) $ in Eq. (\ref{h1}) with $\mathrm{H}$. To calculate $ds_{\mathrm{Bures}}^{2}$ in Eq. (\ref{general}), we perform three distinct calculations. 
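Before carrying out the three sub-calculations analytically, it may help to see Eq. (\ref{general1}) assembled numerically. The following numpy sketch (our own illustration, not from the paper; the function name is ours, $\hslash=1$, and the spectrum is assumed non-degenerate) computes $g_{\beta\beta}$, $g_{\beta\omega}$, and $g_{\omega\omega}$ from the eigendecomposition of a Hermitian $\mathrm{H}$ and its parameter derivative $\partial_{\omega}\mathrm{H}$:

```python
import numpy as np

def bures_components(H, dH, beta):
    """Evaluate g_bb, g_bw, g_ww of Eq. (general1); dH = dH/domega, spectrum non-degenerate."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * E)
    Z = w.sum()
    p = w / Z                                  # Boltzmann probabilities
    dH_e = V.conj().T @ dH @ V                 # dH in the energy eigenbasis
    dE = np.real(np.diag(dH_e))                # diagonal part (dH)_d
    avg = lambda x: np.sum(p * x)

    g_bb = 0.25 * (avg(E**2) - avg(E)**2)
    g_bw = 0.25 * beta * (avg(E * dE) - avg(E) * avg(dE))
    g_ww = 0.25 * beta**2 * (avg(dE**2) - avg(dE)**2)
    for n in range(len(E)):                    # non-classical, off-diagonal sum
        for m in range(len(E)):
            if n != m:
                g_ww += 0.5 * abs(dH_e[n, m] / (E[n] - E[m]))**2 \
                        * (w[n] - w[m])**2 / (Z * (w[n] + w[m]))
    return g_bb, g_bw, g_ww

# Spin-qubit example: H = (omega/2) sigma_z, dH/domega = sigma_z / 2
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
beta, omega = 0.8, 1.3
g_bb, g_bw, g_ww = bures_components(0.5 * omega * sz, 0.5 * sz, beta)
```

For this diagonal model the off-diagonal sum vanishes, since $\partial_{\omega}\mathrm{H}$ is diagonal in the energy eigenbasis, and the returned values agree with the three sub-calculations performed next.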
Specifically, we compute the metric tensor components $g_{\beta\beta}$, $2g_{\beta\omega}$, and $g_{\omega\omega}$ defined as \begin{equation} g_{\beta\beta}\left( \beta\text{, }\omega\right) \overset{\text{def} }{=}\frac{1}{4}\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] \text{, }2g_{\beta\omega}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=}\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H} \right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle \right] \right\} \text{, } \label{secondo} \end{equation} and, \begin{equation} g_{\omega\omega}\left( \beta\text{, }\omega\right) \overset{\text{def} }{=}\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial _{\omega}\mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) }\right\} \text{,} \label{terzo} \end{equation} respectively. \paragraph{First sub-calculation} Let us begin with calculating $(1/4)\left[ \left\langle \mathrm{H} ^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}$. 
Observe that the expectation value $\left\langle \mathrm{H} ^{2}\right\rangle $ of $\mathrm{H}^{2}$ is given by, \begin{align} \left\langle \mathrm{H}^{2}\right\rangle & =\mathrm{tr}\left( \mathrm{H}^{2}\rho\right) =\sum_{i=0}^{1}p_{i}E_{i}^{2}=p_{0}E_{0}^{2}+p_{1} E_{1}^{2}\nonumber\\ & =\frac{e^{-\beta E_{0}}}{\mathcal{Z}}\left( \frac{\hslash\omega} {2}\right) ^{2}+\frac{e^{-\beta E_{1}}}{\mathcal{Z}}\left( -\frac {\hslash\omega}{2}\right) ^{2}\nonumber\\ & =\frac{\hslash^{2}\omega^{2}}{4}\left( \frac{e^{-\beta\frac{\hslash\omega }{2}}}{\mathcal{Z}}+\frac{e^{\beta\frac{\hslash\omega}{2}}}{\mathcal{Z} }\right) \nonumber\\ & =\frac{\hslash^{2}\omega^{2}}{4}\text{,} \end{align} that is, \begin{equation} \left\langle \mathrm{H}^{2}\right\rangle =\frac{\hslash^{2}\omega^{2}} {4}\text{,} \label{a} \end{equation} where the partition function is $\mathcal{Z}\overset{\text{def}}{=} e^{-\beta\frac{\hslash\omega}{2}}+e^{\beta\frac{\hslash\omega}{2}} =2\cosh\left( \beta\frac{\hslash\omega}{2}\right) $.
Furthermore, we note that the expectation value $\left\langle \mathrm{H}\right\rangle $ of the Hamiltonian is \begin{align} \left\langle \mathrm{H}\right\rangle & =\mathrm{tr}\left( \rho \mathrm{H}\right) =\sum_{i=0}^{1}p_{i}E_{i}=p_{0}E_{0}+p_{1}E_{1} =\frac{e^{-\beta E_{0}}}{\mathcal{Z}}\frac{\hslash\omega}{2}-\frac{e^{-\beta E_{1}}}{\mathcal{Z}}\frac{\hslash\omega}{2}\nonumber\\ & =\frac{e^{-\beta E_{0}}-e^{-\beta E_{1}}}{\mathcal{Z}}\frac{\hslash\omega }{2}=\frac{e^{-\beta\frac{\hslash\omega}{2}}-e^{\beta\frac{\hslash\omega}{2}} }{e^{-\beta\frac{\hslash\omega}{2}}+e^{\beta\frac{\hslash\omega}{2}}} \frac{\hslash\omega}{2}=-\frac{2\sinh\left( \beta\frac{\hslash\omega} {2}\right) }{2\cosh\left( \beta\frac{\hslash\omega}{2}\right) } \frac{\hslash\omega}{2}=-\frac{\hslash\omega}{2}\tanh\left( \beta \frac{\hslash\omega}{2}\right) \text{,} \end{align} that is, \begin{equation} \left\langle \mathrm{H}\right\rangle =-\frac{\hslash\omega}{2}\tanh\left( \beta\frac{\hslash\omega}{2}\right) \text{.} \label{b} \end{equation} Therefore, using Eqs. (\ref{a}) and (\ref{b}), $(1/4)\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}$ becomes \begin{equation} g_{\beta\beta}\left( \beta\text{, }\omega\right) d\beta^{2} \overset{\text{def}}{=}\frac{1}{4}\left[ \left\langle \mathrm{H} ^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}=\frac{\hslash^{2}}{16}\omega^{2}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta^{2}\text{.} \label{A1} \end{equation} For completeness, we remark that\textbf{ }$1-\tanh^{2}\left[ \beta\left( \hslash\omega/2\right) \right] $\textbf{ }in Eq. (\ref{A1}) can also be expressed as\textbf{ }$1$\textbf{/}$\cosh^{2}\left[ \beta\left( \hslash\omega/2\right) \right] $. The calculation of $g_{\beta\beta}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{A1}) ends our first sub-calculation. 
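A short numpy check of this first sub-calculation (our own sketch, not part of the paper; $\hslash=1$ and arbitrary values of $\beta$ and $\omega$) confirms Eqs. (\ref{a}), (\ref{b}), and (\ref{A1}):

```python
import numpy as np

beta, omega = 0.8, 1.3
E = np.array([omega / 2, -omega / 2])            # E_0 and E_1 from Eq. (h1), hbar = 1
p = np.exp(-beta * E) / np.exp(-beta * E).sum()  # Boltzmann probabilities p_0, p_1

H_avg = np.sum(p * E)                            # <H>
H2_avg = np.sum(p * E**2)                        # <H^2>
g_bb = (H2_avg - H_avg**2) / 4                   # g_{beta beta}

t = np.tanh(beta * omega / 2)
assert np.isclose(H2_avg, omega**2 / 4)                # Eq. (a)
assert np.isclose(H_avg, -(omega / 2) * t)             # Eq. (b)
assert np.isclose(g_bb, (omega**2 / 16) * (1 - t**2))  # Eq. (A1)
```

The same bookkeeping also verifies the remark that $1-\tanh^{2}\left[ \beta\left( \hslash\omega/2\right) \right] =1/\cosh^{2}\left[ \beta\left( \hslash\omega/2\right) \right]$.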
\paragraph{Second sub-calculation} Let us focus on the second term in Eq. (\ref{secondo}). We start by noting that $\left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle $ is given by \begin{align} \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle & =\sum_{i=0}^{1}p_{i}E_{i}\partial_{\omega}E_{i} =p_{0}E_{0}\partial_{\omega}E_{0}+p_{1}E_{1}\partial_{\omega}E_{1}\nonumber\\ & =p_{0}\frac{\hslash\omega}{2}\partial_{\omega}\left( \frac{\hslash\omega }{2}\right) +p_{1}\left( -\frac{\hslash\omega}{2}\right) \partial_{\omega }\left( -\frac{\hslash\omega}{2}\right) \nonumber\\ & =\frac{\hslash^{2}}{4}\omega p_{0}+\frac{\hslash^{2}}{4}\omega p_{1} =\frac{\hslash^{2}}{4}\omega\left( p_{0}+p_{1}\right) =\frac{\hslash^{2}} {4}\omega\text{,} \end{align} that is, \begin{equation} \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle =\frac{\hslash^{2}}{4}\omega\text{.} \label{c} \end{equation} Moreover, $\left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle $ can be expressed as \begin{align} \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle & =\sum_{i=0}^{1}p_{i}\partial_{\omega}E_{i}=p_{0}\partial_{\omega}E_{0} +p_{1}\partial_{\omega}E_{1}=\frac{\hslash}{2}p_{0}-\frac{\hslash}{2} p_{1}\nonumber\\ & =\frac{\hslash}{2}\frac{e^{-\beta E_{0}}-e^{-\beta E_{1}}}{\mathcal{Z} }=\frac{\hslash}{2}\frac{e^{-\beta\frac{\hslash\omega}{2}}-e^{\beta \frac{\hslash\omega}{2}}}{e^{-\beta\frac{\hslash\omega}{2}}+e^{\beta \frac{\hslash\omega}{2}}}=-\frac{\hslash}{2}\frac{2\sinh\left( \beta \frac{\hslash\omega}{2}\right) }{2\cosh\left( \beta\frac{\hslash\omega} {2}\right) }\nonumber\\ & =-\frac{\hslash}{2}\tanh\left( \beta\frac{\hslash\omega}{2}\right) \text{,} \end{align} that is, \begin{equation} \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle =-\frac{\hslash}{2}\tanh\left( \beta\frac{\hslash\omega}{2}\right) \text{.} \label{d} 
\end{equation} Therefore, using Eqs. (\ref{b}), (\ref{c}), and (\ref{d}), we obtain \begin{equation} 2g_{\beta\omega}\left( \beta\text{, }\omega\right) d\beta d\omega \overset{\text{def}}{=}\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\omega }\mathrm{H}\right) _{d}\right\rangle \right] \right\} d\beta d\omega =\frac{\hslash^{2}}{8}\beta\omega\left[ 1-\tanh^{2}\left( \beta\frac {\hslash\omega}{2}\right) \right] d\beta d\omega\text{.} \label{B1} \end{equation} The calculation of $2g_{\beta\omega}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{B1}) ends our second sub-calculation. \paragraph{Third sub-calculation} Let us now calculate the term in Eq. (\ref{terzo}). Recall from Eq. (\ref{d}) that $\left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d} \right\rangle =-\frac{\hslash}{2}\tanh\left( \beta\frac{\hslash\omega} {2}\right) $. Therefore, we have \begin{equation} \left\langle \left( \partial_{\omega}\mathrm{H}\right) _{d}\right\rangle ^{2}=\frac{\hslash^{2}}{4}\tanh^{2}\left( \beta\frac{\hslash\omega} {2}\right) \text{.} \label{f} \end{equation} Moreover, we note that $\left\langle \left[ \left( \partial_{\omega }\mathrm{H}\right) _{d}\right] ^{2}\right\rangle $ can be rewritten as \begin{align} \left\langle \left[ \left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle & =\sum_{i=0}^{1}p_{i}\left( \partial_{\omega} E_{i}\right) ^{2}=p_{0}\left( \partial_{\omega}E_{0}\right) ^{2} +p_{1}\left( \partial_{\omega}E_{1}\right) ^{2}\nonumber\\ & =p_{0}\left[ \partial_{\omega}\left( \frac{\hslash\omega}{2}\right) \right] ^{2}+p_{1}\left[ \partial_{\omega}\left( -\frac{\hslash\omega} {2}\right) \right] ^{2}\nonumber\\ & =\frac{\hslash^{2}}{4}p_{0}+\frac{\hslash^{2}}{4}p_{1}=\frac{\hslash^{2} }{4}\left( p_{0}+p_{1}\right) =\frac{\hslash^{2}}{4}\text{,} \end{align} that is, \begin{equation} \left\langle \left[
\left( \partial_{\omega}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle =\frac{\hslash^{2}}{4}\text{.} \label{g} \end{equation} Finally, note that \begin{align} 2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\omega} \mathrm{H}|m\right\rangle }{E_{n}-E_{m}}\right\vert ^{2}\frac{\left( e^{-\beta E_{n}}-e^{-\beta E_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta E_{n}}+e^{-\beta E_{m}}\right) } & =\frac{2}{\mathcal{Z} }\left\vert \frac{\left\langle 0|\partial_{\omega}\mathrm{H}|1\right\rangle }{E_{0}-E_{1}}\right\vert ^{2}\frac{e^{-\beta E_{0}}-e^{-\beta E_{1}} }{e^{-\beta E_{0}}+e^{-\beta E_{1}}}+\frac{2}{\mathcal{Z}}\left\vert \frac{\left\langle 1|\partial_{\omega}\mathrm{H}|0\right\rangle }{E_{1}-E_{0} }\right\vert ^{2}\frac{e^{-\beta E_{1}}-e^{-\beta E_{0}}}{e^{-\beta E_{1} }+e^{-\beta E_{0}}}\nonumber\\ & =0\text{,} \label{h} \end{align} since $\left\langle 0|\partial_{\omega}\mathrm{H}|1\right\rangle =\left\langle 1|\partial_{\omega}\mathrm{H}|0\right\rangle =0$ as a consequence of the fact that $\mathrm{H}=\mathrm{H}_{\mathrm{SQ}}\left( \omega\right) $ in Eq. (\ref{spinH}) is diagonal. Therefore, using Eqs. (\ref{f}), (\ref{g}), and (\ref{h}), we finally get that Eq. (\ref{terzo}) becomes \begin{equation} g_{\omega\omega}\left( \beta\text{, }\omega\right) d\omega^{2}=\frac {\hslash^{2}}{16}\beta^{2}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega }{2}\right) \right] d\omega^{2}\text{.} \label{C1} \end{equation} The calculation of $g_{\omega\omega}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{C1}) ends our third sub-calculation. In conclusion, exploiting Eqs. (\ref{A1}), (\ref{B1}), and (\ref{C1}), the Bures metric $ds_{\mathrm{Bures}}^{2}=g_{\beta\beta}\left( \beta\text{, }\omega\right) d\beta^{2}+g_{\omega\omega}\left( \beta\text{, } \omega\right) d\omega^{2}+2g_{\beta\omega}\left( \beta\text{, } \omega\right) d\beta d\omega$ in Eq. 
(\ref{general1}) becomes \begin{equation} ds_{\mathrm{Bures}}^{2}=\frac{\hslash^{2}\omega^{2}}{16}\left[ 1-\tanh ^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta^{2} +\frac{\hslash^{2}\beta^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac {\hslash\omega}{2}\right) \right] d\omega^{2}+\frac{\hslash^{2}\beta\omega }{8}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta d\omega\text{.} \label{07} \end{equation} Using Einstein's summation convention, $ds_{\mathrm{Bures}}^{2}=g_{ij} ^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) d\theta^{i}d\theta^{j}$ with $\theta^{1}\overset{\text{def}}{=}\beta$ and $\theta^{2}\overset{\text{def}}{=}\omega$. Finally, using Eq. (\ref{07}), the Bures metric tensor $g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) $ becomes \begin{equation} g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega} {2}\right) \right] \left( \begin{array} [c]{cc} \omega^{2} & \beta\omega\\ \beta\omega & \beta^{2} \end{array} \right) \text{,} \label{f2} \end{equation} with $1\leq i$, $j\leq2$. Note that $g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) $ in Eq. (\ref{f2}) equals the classical Fisher-Rao metric since there is no non-classical contribution in this case. The derivation of $g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\omega\right) $ in Eq. (\ref{f2}) ends our calculation of the Bures metric tensor for spin qubits. Interestingly, we observe that setting $k_{B}=1$, $\beta=t^{-1}$, and $\omega=t$, our Eq. (\ref{f2}) reduces to the last relation obtained by Zanardi and collaborators in Ref. \cite{zanardi07}. \subsubsection{The Sj\"{o}qvist metric} Given the expression of $\rho_{\mathrm{SQ}}\left( \beta\text{, } \omega\right) $ in Eq.
(\ref{ro1}), we can proceed with the calculation of the Sj\"{o}qvist metric given by \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=\frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2} }{p_{k}}+\sum_{k=0}^{1}p_{k}\left\langle dn_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |dn_{k}\right\rangle \text{.} \label{smetric} \end{equation} In our case, we note that the probabilities $p_{0}$ and $p_{1}$ are given by \begin{equation} p_{0}=p_{0}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=} \frac{1-\tanh\left( \beta\frac{\hslash\omega}{2}\right) }{2}\text{, and }p_{1}=p_{1}\left( \beta\text{, }\omega\right) \overset{\text{def}}{=} \frac{1+\tanh\left( \beta\frac{\hslash\omega}{2}\right) }{2}\text{,} \label{prob} \end{equation} respectively. Furthermore, the states $\left\vert n_{0}\right\rangle $ and $\left\vert n_{1}\right\rangle $ are \begin{equation} \left\vert n_{0}\right\rangle \overset{\text{def}}{=}\left\vert 0\right\rangle \text{, and }\left\vert n_{1}\right\rangle \overset{\text{def}}{=}\left\vert 1\right\rangle \text{.} \label{stati} \end{equation} Observe that, in general, one has $n_{k}=n_{k}\left( \beta\text{, }\omega\right) $ and $dn_{k}\overset{\text{def}}{=}\frac{\partial n_{k}}{\partial\beta} d\beta+\frac{\partial n_{k}}{\partial\omega}d\omega$. In our case, however, the eigenstates in Eq. (\ref{stati}) do not depend on $\beta$ or $\omega$, so that $\left\vert dn_{k}\right\rangle =0$. From Eq.
(\ref{smetric}), $ds_{\mathrm{Sj\ddot{o}qvist}} ^{2}$ reduces to \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=\frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2} }{p_{k}}=\frac{1}{4}\left( \frac{dp_{0}^{2}}{p_{0}}+\frac{dp_{1}^{2}}{p_{1} }\right) \text{,} \label{n2} \end{equation} where the differentials $dp_{0}$ and $dp_{1}$ are given by \begin{equation} dp_{0}\overset{\text{def}}{=}\frac{\partial p_{0}}{\partial\beta}d\beta +\frac{\partial p_{0}}{\partial\omega}d\omega\text{, and }dp_{1} \overset{\text{def}}{=}\frac{\partial p_{1}}{\partial\beta}d\beta +\frac{\partial p_{1}}{\partial\omega}d\omega\text{,} \label{n1} \end{equation} respectively. Therefore, substituting Eq. (\ref{n1}) into Eq. (\ref{n2}), we get \begin{align} ds_{\mathrm{Sj\ddot{o}qvist}}^{2} & =\frac{1}{4}\frac{\left( \partial _{\beta}p_{0}d\beta+\partial_{\omega}p_{0}d\omega\right) ^{2}}{p_{0}} +\frac{1}{4}\frac{\left( \partial_{\beta}p_{1}d\beta+\partial_{\omega} p_{1}d\omega\right) ^{2}}{p_{1}}\nonumber\\ & =\frac{\left( \partial_{\beta}p_{0}\right) ^{2}d\beta^{2}+\left( \partial_{\omega}p_{0}\right) ^{2}d\omega^{2}+2\partial_{\beta}p_{0} \partial_{\omega}p_{0}d\beta d\omega}{4p_{0}}+\frac{\left( \partial_{\beta }p_{1}\right) ^{2}d\beta^{2}+\left( \partial_{\omega}p_{1}\right) ^{2}d\omega^{2}+2\partial_{\beta}p_{1}\partial_{\omega}p_{1}d\beta d\omega }{4p_{1}}\nonumber\\ & =\left[ \frac{\left( \partial_{\beta}p_{0}\right) ^{2}}{4p_{0}} +\frac{\left( \partial_{\beta}p_{1}\right) ^{2}}{4p_{1}}\right] d\beta ^{2}+\left[ \frac{\left( \partial_{\omega}p_{0}\right) ^{2}}{4p_{0}} +\frac{\left( \partial_{\omega}p_{1}\right) ^{2}}{4p_{1}}\right] d\omega^{2}+\left[ \frac{2\partial_{\beta}p_{0}\partial_{\omega}p_{0}} {4p_{0}}+\frac{2\partial_{\beta}p_{1}\partial_{\omega}p_{1}}{4p_{1}}\right] d\beta d\omega\text{,} \end{align} that is, \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=\left[ \frac{\left( \partial_{\beta} p_{0}\right) ^{2}}{4p_{0}}+\frac{\left( \partial_{\beta}p_{1}\right) ^{2} 
}{4p_{1}}\right] d\beta^{2}+\left[ \frac{\left( \partial_{\omega} p_{0}\right) ^{2}}{4p_{0}}+\frac{\left( \partial_{\omega}p_{1}\right) ^{2} }{4p_{1}}\right] d\omega^{2}+\left[ \frac{2\partial_{\beta}p_{0} \partial_{\omega}p_{0}}{4p_{0}}+\frac{2\partial_{\beta}p_{1}\partial_{\omega }p_{1}}{4p_{1}}\right] d\beta d\omega\text{.} \label{chi2} \end{equation} From Eq. (\ref{prob}), we observe that \begin{align} \partial_{\beta}p_{0} & =-\frac{\hslash\omega}{4}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] \text{, }\partial_{\omega} p_{0}=-\frac{\hslash\beta}{4}\left[ 1-\tanh^{2}\left( \beta\frac {\hslash\omega}{2}\right) \right] \text{,}\nonumber\\ & \nonumber\\ \partial_{\beta}p_{1} & =\frac{\hslash\omega}{4}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] \text{, }\partial_{\omega} p_{1}=\frac{\hslash\beta}{4}\left[ 1-\tanh^{2}\left( \beta\frac {\hslash\omega}{2}\right) \right] \text{.} \label{chi1} \end{align} Finally, substituting Eq. (\ref{chi1}) into Eq. (\ref{chi2}), we obtain \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}=\frac{\hslash^{2}\omega^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta ^{2}+\frac{\hslash^{2}\beta^{2}}{16}\left[ 1-\tanh^{2}\left( \beta \frac{\hslash\omega}{2}\right) \right] d\omega^{2}+\frac{\hslash^{2} \beta\omega}{8}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\omega}{2}\right) \right] d\beta d\omega\text{.} \label{f0} \end{equation} Using Einstein's summation convention, $ds_{\mathrm{Sj\ddot{o}qvist}} ^{2}=g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\omega\right) d\theta^{i}d\theta^{j}$ with $\theta^{1}\overset{\text{def} }{=}\beta$ and $\theta^{2}\overset{\text{def}}{=}\omega$. Finally, using Eq. 
(\ref{f0}), the Sj\"{o}qvist metric tensor becomes \begin{equation} g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, } \omega\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta \frac{\hslash\omega}{2}\right) \right] \left( \begin{array} [c]{cc} \omega^{2} & \beta\omega\\ \beta\omega & \beta^{2} \end{array} \right) \text{,} \label{f1} \end{equation} with $1\leq i$, $j\leq2$. Note that $g_{ij}^{\left( \mathrm{Sj\ddot{o} qvist}\right) }\left( \beta\text{, }\omega\right) $ in Eq. (\ref{f1}) is equal to the classical Fisher-Rao metric since the non-classical contribution is absent in this case. The derivation of $g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\omega\right) $ in Eq. (\ref{f1}) ends our calculation of the Sj\"{o}qvist metric tensor for spin qubits. Recalling the general expressions of the Bures and Sj\"{o}qvist metrics in Eqs. (\ref{7}) and (\ref{vetta}) and, moreover, from our first set of explicit calculations, a few remarks are in order. First, both metrics have a classical and a non-classical contribution. Second, the classical Fisher-Rao metric contribution is related to changes $dp_{n}=\partial_{\beta} p_{n}d\beta+\partial_{h}p_{n}dh$ in the probabilities $p_{n}\left( \beta\text{, }h\right) \propto e^{-\beta E_{n}\left( h\right) }$ with $\left\{ E_{n}\left( h\right) \right\} $ being the eigenvalues of the Hamiltonian. Finally, the non-classical contribution in the two metrics is linked to changes $\left\vert dn\right\rangle =\partial_{h}\left\vert n\right\rangle dh=\left\vert \partial_{h}n\right\rangle dh$ in the eigenvectors $\left\{ \left\vert n\left( h\right) \right\rangle \right\} $ of the Hamiltonian. In our first Hamiltonian model, $\mathrm{H}\propto\sigma_{z}$ is diagonal and, thus, its eigenvectors do not depend on any parameter.
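The coincidence of the two metrics with the classical Fisher-Rao metric can also be checked without further analytic work. Since nearby thermal states commute here, the fidelity reduces to the classical overlap $F=\sum_{k}\sqrt{p_{k}q_{k}}$, and the Bures line element is $ds^{2}=2\left( 1-F\right)$; a finite-difference numpy sketch (our own consistency check, not from the paper; $\hslash=1$) then reproduces the diagonal entries of Eqs. (\ref{f2}) and (\ref{f1}):

```python
import numpy as np

def probs(beta, omega):
    # Eigenvalues of rho_SQ, Eq. (prob), with hbar = 1
    t = np.tanh(beta * omega / 2)
    return np.array([(1 - t) / 2, (1 + t) / 2])

beta, omega, d = 0.8, 1.3, 1e-4
sech2 = 1 - np.tanh(beta * omega / 2)**2

# ds^2 = 2*(1 - F), with F the classical (Bhattacharyya) overlap of nearby states
F_beta = np.sum(np.sqrt(probs(beta, omega) * probs(beta + d, omega)))
F_omega = np.sum(np.sqrt(probs(beta, omega) * probs(beta, omega + d)))

g_bb = 2 * (1 - F_beta) / d**2    # beta-beta component
g_ww = 2 * (1 - F_omega) / d**2   # omega-omega component
assert np.isclose(g_bb, (omega**2 / 16) * sech2, rtol=1e-3)
assert np.isclose(g_ww, (beta**2 / 16) * sech2, rtol=1e-3)
```

The off-diagonal entry can be recovered analogously by varying $\beta$ and $\omega$ jointly.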
Therefore, we found that both the Bures and Sj\"{o}qvist metrics reduce to the classical Fisher-Rao metric. However, one expects that if $\mathrm{H}$ is not proportional to the Pauli operator $\sigma_{z}$, the non-classical contributions no longer vanish and the two metrics may yield different quantum (i.e., non-classical) metric contributions. Indeed, if one considers a spin qubit Hamiltonian specified by a magnetic field with an orientation that is not constrained to be along the $z$-axis, the Bures and Sj\"{o}qvist metrics happen to be different. In particular, for a time-independent and uniform magnetic field given by $\vec{B}=B_{x}\hat{x}+B_{z}\hat{z}$, the spin qubit Hamiltonian becomes \textrm{H}$_{\mathrm{SQ}}\left( \omega_{x}\text{, } \omega_{z}\right) \overset{\text{def}}{=}\left( \hslash/2\right) (\omega_{x}\sigma_{x}+\omega_{z}\sigma_{z})$. Assuming a fixed $\omega_{x}\neq0$, tuning only the parameters $\beta$ and $\omega_{z}$, and repeating our metric calculations, it can be shown that the Bures and Sj\"{o}qvist metric tensor components $g_{ij}^{\mathrm{Bures}}\left( \beta\text{, }\omega_{z}\right) $ and $g_{ij}^{\mathrm{Sj\ddot{o}qvist} }\left( \beta\text{, }\omega_{z}\right) $ are \begin{equation} g_{ij}^{\mathrm{Bures}}\left( \beta\text{, }\omega_{z}\right) =\frac {\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega _{x}^{2}+\omega_{z}^{2}}}{2}\right) \right] \left( \begin{array} [c]{cc} \omega_{x}^{2}+\omega_{z}^{2} & \beta\omega_{z}\\ \beta\omega_{z} & \beta^{2}\frac{\omega_{z}^{2}}{\omega_{x}^{2}+\omega_{z} ^{2}}+\frac{4}{\hslash^{2}}\frac{\omega_{x}^{2}}{\left( \omega_{x}^{2} +\omega_{z}^{2}\right) ^{2}}\frac{\tanh^{2}\left( \beta\frac{\hslash \sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) }{1-\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) } \end{array} \right) \text{,} \label{G1A} \end{equation} and, \begin{equation}
g_{ij}^{\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\omega_{z}\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\frac{\hslash \sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) \right] \left( \begin{array} [c]{cc} \omega_{x}^{2}+\omega_{z}^{2} & \beta\omega_{z}\\ \beta\omega_{z} & \beta^{2}\frac{\omega_{z}^{2}}{\omega_{x}^{2}+\omega_{z} ^{2}}+\frac{4}{\hslash^{2}}\frac{\omega_{x}^{2}}{\left( \omega_{x}^{2} +\omega_{z}^{2}\right) ^{2}}\frac{1}{1-\tanh^{2}\left( \beta\frac {\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) } \end{array} \right) \text{,} \label{G1B} \end{equation} respectively. For completeness, we remark that the calculation techniques needed to arrive at expressions such as those in Eqs. (\ref{G1A}) and (\ref{G1B}) will be presented in the next subsection, where \textrm{H}$_{\mathrm{SQ}}\left( \omega_{x}\text{, }\omega_{z}\right) $ will be replaced by the superconducting flux qubit Hamiltonian $\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) \overset{\text{def}}{=}\left( -\hslash/2\right) \left( \Delta\sigma_{x}+\epsilon\sigma_{z}\right) $. Returning to our considerations, recall that for any $x\in\mathbb{R}$, we have \begin{equation} \frac{\tanh^{2}\left( x\right) }{1-\tanh^{2}\left( x\right) }=\sinh ^{2}\left( x\right) \text{, and }\frac{1}{1-\tanh^{2}\left( x\right) }=\cosh^{2}\left( x\right) \text{.} \end{equation} Then, using Eqs.
(\ref{G1A}) and (\ref{G1B}), we obtain \begin{equation} 0\leq\frac{g_{\omega_{z}\omega_{z}}^{\mathrm{nc}\text{, \textrm{Bures}} }\left( \beta\text{, }\omega_{z}\right) }{g_{\omega_{z}\omega_{z} }^{\mathrm{nc}\text{, }\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, } \omega_{z}\right) }=\frac{\sinh^{2}\left( \beta\frac{\hslash\sqrt{\omega _{x}^{2}+\omega_{z}^{2}}}{2}\right) }{\cosh^{2}\left( \beta\frac {\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) }=\tanh^{2}\left( \beta\frac{\hslash\sqrt{\omega_{x}^{2}+\omega_{z}^{2}}}{2}\right) \leq1\text{,} \label{G1C} \end{equation} with $g_{\omega_{z}\omega_{z}}^{\mathrm{nc}\text{, \textrm{Bures}} }\left( \beta\text{, }\omega_{z}\right) $ and $g_{\omega_{z}\omega_{z} }^{\mathrm{nc}\text{, }\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, } \omega_{z}\right) $ denoting the non-classical contributions in the Bures and Sj\"{o}qvist metric cases, respectively. From Eqs. (\ref{G1A}) and (\ref{G1B}), we conclude that the introduction of a nonvanishing component of the magnetic field along the $x$-direction generates a visible non-commutative probabilistic structure in the quantum mechanics of the system, i.e., a non-classical scenario with $\left[ \rho\text{, }\rho+d\rho\right] \neq0$. In such a case, the Bures and the Sj\"{o}qvist metrics exhibit a different behavior, as is evident from their non-classical metric tensor components (i.e., $g_{\omega_{z}\omega_{z}}^{\mathrm{nc}}\left( \beta\text{, }\omega_{z}\right) $) in Eq. (\ref{G1C}). \subsection{Superconducting flux qubits} Let us consider a system with a Hamiltonian described by $\mathrm{H} _{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) \overset{\text{def} }{=}\left( -\hslash/2\right) \left( \Delta\sigma_{x}+\epsilon\sigma _{z}\right) $ in Eq. (\ref{superH}).
The thermal state $\rho_{\mathrm{SFQ} }\left( \beta\text{, }\epsilon\right) $ corresponding to $\mathrm{H} _{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) $ with $\Delta$ assumed to be constant is given by \begin{equation} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) \overset{\text{def} }{=}\frac{e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, } \epsilon\right) }}{\mathrm{tr}\left( e^{-\beta\mathrm{H}_{\mathrm{SFQ} }\left( \Delta\text{, }\epsilon\right) }\right) }\text{.} \label{anto1} \end{equation} Observe that $\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) $ is diagonalizable and can be recast as $\mathrm{H}_{\mathrm{SFQ} }=M_{\mathrm{H}_{\mathrm{SFQ}}}\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}$ where\textbf{ }$M_{\mathrm{H}_{\mathrm{SFQ}}}$\textbf{ }and\textbf{ }$M_{\mathrm{H} _{\mathrm{SFQ}}}^{-1}$\textbf{ }are the eigenvector matrix and its inverse, respectively. Therefore, after some algebra, $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. 
(\ref{anto1}) can be rewritten as \begin{align} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) & =\frac {e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) } }{\mathrm{tr}(e^{-\beta\mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) })}=\frac{e^{-\beta M_{\mathrm{H}_{\mathrm{SFQ}}} \mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }M_{\mathrm{H} _{\mathrm{SFQ}}}^{-1}}}{\mathrm{tr}(e^{-\beta M_{\mathrm{H}_{\mathrm{SFQ}} }\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }M_{\mathrm{H} _{\mathrm{SFQ}}}^{-1}})}\nonumber\\ & =\frac{M_{\mathrm{H}_{\mathrm{SFQ}}}e^{-\beta\mathrm{H}_{\mathrm{SFQ} }^{\left( \mathrm{diagonal}\right) }}M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1} }{\mathrm{tr}(M_{\mathrm{H}_{\mathrm{SFQ}}}e^{-\beta\mathrm{H}_{\mathrm{SFQ} }^{\left( \mathrm{diagonal}\right) }}M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1} )}\nonumber\\ & =M_{\mathrm{H}_{\mathrm{SFQ}}}\frac{e^{-\beta\mathrm{H}_{\mathrm{SFQ} }^{\left( \mathrm{diagonal}\right) }}}{\mathrm{tr}(e^{-\beta\mathrm{H} _{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }})}M_{\mathrm{H} _{\mathrm{SFQ}}}^{-1}\text{,} \end{align} that is, \begin{equation} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) =M_{\mathrm{H} _{\mathrm{SFQ}}}\frac{e^{-\beta\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }}}{\mathrm{tr}(e^{-\beta\mathrm{H}_{\mathrm{SFQ} }^{\left( \mathrm{diagonal}\right) }})}M_{\mathrm{H}_{\mathrm{SFQ}}} ^{-1}=M_{\mathrm{H}_{\mathrm{SFQ}}}\rho_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }\left( \beta\text{, }\epsilon\right) M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}\text{.} \label{anto0} \end{equation} The quantity $\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }$ in Eq. 
(\ref{anto0}) is defined as, \begin{equation} \mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) } \overset{\text{def}}{=}E_{0}\left\vert n_{1}\right\rangle \left\langle n_{1}\right\vert +E_{1}\left\vert n_{0}\right\rangle \left\langle n_{0}\right\vert \text{.} \label{anto2} \end{equation} The eigenvalues $E_{0}$ and $E_{1}$ are given by $E_{0}\overset{\text{def} }{=}-\left( \hslash/2\right) \nu$ and $E_{1}\overset{\text{def}}{=}+\left( \hslash/2\right) \nu$, respectively, with $\nu\overset{\text{def}}{=} \sqrt{\Delta^{2}+\epsilon^{2}}$. For later use, it is convenient to introduce the notation $\tilde{E}_{0}\overset{\text{def}}{=}E_{1}$ and $\tilde{E} _{1}\overset{\text{def}}{=}E_{0}$ so that $\mathrm{H}_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }\overset{\text{def}}{=}\tilde{E}_{0}\left\vert n_{0}\right\rangle \left\langle n_{0}\right\vert +\tilde{E}_{1}\left\vert n_{1}\right\rangle \left\langle n_{1}\right\vert $. The two orthonormal eigenvectors corresponding to $E_{0}$ and $E_{1}$ are $\left\vert n_{1}\right\rangle $ and $\left\vert n_{0}\right\rangle $, respectively. They are given by \begin{equation} \left\vert n_{0}\right\rangle \overset{\text{def}}{=}\frac{1}{\sqrt{2}}\left( \begin{array} [c]{c} \frac{\epsilon-\sqrt{\epsilon^{2}+\Delta^{2}}}{\sqrt{\epsilon^{2}+\Delta ^{2}-\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}}}\\ \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}-\epsilon\sqrt{\epsilon^{2} +\Delta^{2}}}} \end{array} \right) \text{ and, }\left\vert n_{1}\right\rangle \overset{\text{def} }{=}\frac{1}{\sqrt{2}}\left( \begin{array} [c]{c} \frac{\epsilon+\sqrt{\epsilon^{2}+\Delta^{2}}}{\sqrt{\epsilon^{2}+\Delta ^{2}+\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}}}\\ \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}+\epsilon\sqrt{\epsilon^{2} +\Delta^{2}}}} \end{array} \right) \text{,} \label{anto444} \end{equation} respectively.
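The spectral data quoted above are easy to verify numerically. The following numpy sketch (our own check, not from the paper; $\hslash=1$ and arbitrary $\Delta$, $\epsilon$) confirms that $\left\vert n_{0}\right\rangle $ and $\left\vert n_{1}\right\rangle $ in Eq. (\ref{anto444}) are orthonormal eigenvectors of $\mathrm{H}_{\mathrm{SFQ}}$ with eigenvalues $E_{1}=+\left( \hslash/2\right) \nu$ and $E_{0}=-\left( \hslash/2\right) \nu$, respectively:

```python
import numpy as np

Delta, eps = 0.6, 0.9
nu = np.sqrt(Delta**2 + eps**2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -0.5 * (Delta * sx + eps * sz)           # H_SFQ with hbar = 1

# Eigenvectors of Eq. (anto444); note sqrt(eps^2 + Delta^2 -/+ eps*nu) = sqrt(nu^2 -/+ eps*nu)
n0 = np.array([eps - nu, Delta]) / np.sqrt(2 * (nu**2 - eps * nu))
n1 = np.array([eps + nu, Delta]) / np.sqrt(2 * (nu**2 + eps * nu))

assert np.allclose(H @ n0, +0.5 * nu * n0)   # |n_0> pairs with E_1 = +nu/2
assert np.allclose(H @ n1, -0.5 * nu * n1)   # |n_1> pairs with E_0 = -nu/2
assert np.isclose(n0 @ n0, 1.0) and np.isclose(n1 @ n1, 1.0) and np.isclose(n0 @ n1, 0.0)
```

Note that the normalization factors become singular for $\Delta=0$, so the sketch (like the closed forms above) implicitly assumes $\Delta\neq0$.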
A suitable choice for the eigenvector matrix $M_{\mathrm{H} _{\mathrm{SFQ}}}$ and its inverse $M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}$ in Eq. (\ref{anto0}) can be expressed as \begin{equation} M_{\mathrm{H}_{\mathrm{SFQ}}}\overset{\text{def}}{=}\left( \begin{array} [c]{cc} \frac{\epsilon+\nu}{\Delta} & \frac{\epsilon-\nu}{\Delta}\\ 1 & 1 \end{array} \right) \text{ and, }M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}\overset{\text{def} }{=}\left( \begin{array} [c]{cc} \frac{\Delta}{2\nu} & \frac{\nu-\epsilon}{2\nu}\\ -\frac{\Delta}{2\nu} & \frac{\nu+\epsilon}{2\nu} \end{array} \right) \text{,} \label{anto44} \end{equation} respectively. Using Eqs. (\ref{anto2}) and (\ref{anto44}), $\rho _{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto0}) becomes \begin{equation} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) =\frac{1}{2}\left( \begin{array} [c]{cc} 1+\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) & \frac{\Delta} {\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta\hslash\frac{\sqrt {\epsilon^{2}+\Delta^{2}}}{2}\right) \\ \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta\hslash \frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) & 1-\frac{\epsilon} {\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta\hslash\frac{\sqrt {\epsilon^{2}+\Delta^{2}}}{2}\right) \end{array} \right) \text{,} \end{equation} that is, \begin{equation} \rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) =\frac{1}{2}\left[ \mathrm{I}+\left( \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}}}\sigma _{x}+\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}}\sigma_{z}\right) \tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \text{.} \label{anto5} \end{equation} For completeness, we note here that the spectral decomposition of $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) =M_{\mathrm{H} _{\mathrm{SFQ}}}\rho_{\mathrm{SFQ}}^{\left( \mathrm{diagonal}\right) }\left( \beta\text{, 
}\epsilon\right) M_{\mathrm{H}_{\mathrm{SFQ}}}^{-1}$ in Eq. (\ref{anto5}) is given by $\rho_{\mathrm{SFQ}}\overset{\text{def}}{=} p_{0}\left\vert n_{0}\right\rangle \left\langle n_{0}\right\vert +p_{1}\left\vert n_{1}\right\rangle \left\langle n_{1}\right\vert $. The probabilities $p_{0}$ and $p_{1}$ are \begin{equation} p_{0}\overset{\text{def}}{=}\frac{e^{-\beta\tilde{E}_{0}}}{\mathcal{Z}} =\frac{1}{2}\left[ 1-\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2} +\Delta^{2}}}{2}\right) \right] \text{, and }p_{1}\overset{\text{def} }{=}\frac{e^{-\beta\tilde{E}_{1}}}{\mathcal{Z}}=\frac{1}{2}\left[ 1+\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \text{,} \label{sorry} \end{equation} respectively, with $\mathcal{Z}\overset{\text{def}}{\mathcal{=}}e^{-\beta E_{0}}+e^{-\beta E_{1}}=e^{-\beta\tilde{E}_{0}}+e^{-\beta\tilde{E}_{1}} =2\cosh(\beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2})$ denoting the partition function of the system. In what follows, we shall use $\rho _{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto5}) to calculate the Bures and the Sj\"{o}qvist metrics. \subsubsection{The Bures metric} For simplicity of notation, we replace $\mathrm{H}_{\mathrm{SFQ}}$ with $\mathrm{H}$ in the forthcoming calculation. We begin by noting that, in our case, the general expression of the Bures metric $ds_{\mathrm{Bures}}^{2}$ in Eq. 
(\ref{general}) becomes \begin{align} ds_{\mathrm{Bures}}^{2} & =\frac{1}{4}\left[ \left\langle \mathrm{H} ^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}\nonumber\\ & +\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial _{\epsilon}\mathrm{H}|m\right\rangle }{\tilde{E}_{n}-\tilde{E}_{m}}\right\vert ^{2}\frac{\left( e^{-\beta\tilde{E}_{n}}-e^{-\beta\tilde{E}_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta\tilde{E}_{n}}+e^{-\beta\tilde{E}_{m} }\right) }\right\} d\epsilon^{2}+\nonumber\\ & +\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\epsilon} \mathrm{H}\right) _{d}\right\rangle \right] \right\} d\beta d\epsilon \text{.} \label{zarrillo} \end{align} As previously pointed out in this manuscript, $ds_{\mathrm{Bures}}^{2}$ is the sum of two contributions: the classical Fisher-Rao information metric contribution and the non-classical metric contribution described by the summation term on the right-hand side of Eq. (\ref{zarrillo}). In what follows, we shall show that the presence of nonvanishing terms $\left\vert \left\langle n|\partial_{\epsilon}\mathrm{H}|m\right\rangle \right\vert ^{2}$ leads to the existence of a non-classical contribution in $ds_{\mathrm{Bures}}^{2}$. Following our previous line of reasoning, we partition our calculation into three parts. 
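Before turning to the three sub-calculations, the closed form of $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto5}) can itself be checked numerically. The sketch below is our addition and assumes $\hslash=1$ together with $\mathrm{H}_{\mathrm{SFQ}}=-(\hslash/2)\left( \Delta\sigma_{x}+\epsilon\sigma_{z}\right) $.

```python
import numpy as np

# Check of the Gibbs state of Eq. (anto5) and of the populations of Eq. (sorry);
# hbar = 1 and H = -(hbar/2)*(Delta*sigma_x + eps*sigma_z) are our assumptions.
hbar, beta, Delta, eps = 1.0, 1.3, 0.7, 0.3
nu = np.sqrt(Delta**2 + eps**2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -(hbar / 2) * (Delta * sx + eps * sz)

# rho = exp(-beta*H)/tr[exp(-beta*H)] via the spectral decomposition of H
w, V = np.linalg.eigh(H)                      # ascending eigenvalues
rho = V @ np.diag(np.exp(-beta * w)) @ V.T
rho /= np.trace(rho)

t = np.tanh(beta * hbar * nu / 2)
rho_closed = 0.5 * (np.eye(2) + (Delta * sx + eps * sz) / nu * t)  # Eq. (anto5)
assert np.allclose(rho, rho_closed)

# populations: ground level carries (1 + t)/2, excited level (1 - t)/2
p = np.exp(-beta * w) / np.sum(np.exp(-beta * w))
assert np.allclose(p, [(1 + t) / 2, (1 - t) / 2])
```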
In particular, since $ds_{\mathrm{Bures}}^{2}=g_{\beta\beta}\left( \beta\text{, } \epsilon\right) d\beta^{2}+g_{\epsilon\epsilon}\left( \beta\text{, } \epsilon\right) d\epsilon^{2}+2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon$, we focus on computing $g_{\beta\beta }\left( \beta\text{, }\epsilon\right) $, $2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) $, and $g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) $. \subsubsection{First sub-calculation} We proceed here with the first sub-calculation. We recall that $g_{\beta\beta }\left( \beta\text{, }\epsilon\right) d\beta^{2}=(1/4)\left[ \left\langle \mathrm{H}^{2}\right\rangle -\left\langle \mathrm{H}\right\rangle ^{2}\right] d\beta^{2}$. Note that $\left\langle \mathrm{H}^{2}\right\rangle $ and $\left\langle \mathrm{H}\right\rangle ^{2}$ are given by \begin{equation} \left\langle \mathrm{H}^{2}\right\rangle =\mathrm{tr}\left( \mathrm{H} ^{2}\rho\right) =\frac{\hslash^{2}}{4}\left( \epsilon^{2}+\Delta^{2}\right) \text{,} \label{jo2} \end{equation} and, \begin{equation} \left\langle \mathrm{H}\right\rangle ^{2}=\left[ \mathrm{tr}\left( \mathrm{H}\rho\right) \right] ^{2}=\frac{\hslash^{2}}{4}\left( \epsilon ^{2}+\Delta^{2}\right) \tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon ^{2}+\Delta^{2}}}{2}\right) \text{,} \label{jo3} \end{equation} respectively. Therefore, using Eqs. (\ref{jo2}) and (\ref{jo3}), $g_{\beta\beta}\left( \beta\text{, }\epsilon\right) d\beta^{2}$ becomes \begin{equation} g_{\beta\beta}\left( \beta\text{, }\epsilon\right) d\beta^{2}=\frac {\hslash^{2}}{16}\left( \epsilon^{2}+\Delta^{2}\right) \left[ 1-\tanh ^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta^{2}\text{.} \label{J1} \end{equation} Our first sub-calculation ends with the derivation of Eq. (\ref{J1}). 
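The trace expressions behind Eqs. (\ref{jo2})--(\ref{J1}) can be verified directly (a sketch under the same assumptions as before: $\hslash=1$, $\mathrm{H}=-(\hslash/2)(\Delta\sigma_{x}+\epsilon\sigma_{z})$).

```python
import numpy as np

# Check of Eqs. (jo2), (jo3), and (J1); hbar = 1 and the Hamiltonian form
# H = -(hbar/2)*(Delta*sigma_x + eps*sigma_z) are our working assumptions.
hbar, beta, Delta, eps = 1.0, 1.3, 0.7, 0.3
nu = np.sqrt(Delta**2 + eps**2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -(hbar / 2) * (Delta * sx + eps * sz)
w, V = np.linalg.eigh(H)
rho = V @ np.diag(np.exp(-beta * w)) @ V.T
rho /= np.trace(rho)

avg_H2 = np.trace(H @ H @ rho)   # Eq. (jo2)
avg_H = np.trace(H @ rho)        # squared, this gives Eq. (jo3)
g_bb = 0.25 * (avg_H2 - avg_H**2)

t = np.tanh(beta * hbar * nu / 2)
assert np.allclose(avg_H2, hbar**2 / 4 * (eps**2 + Delta**2))
assert np.allclose(avg_H**2, hbar**2 / 4 * (eps**2 + Delta**2) * t**2)
# Eq. (J1): g_beta_beta = (hbar^2/16)(eps^2 + Delta^2)(1 - tanh^2)
assert np.allclose(g_bb, hbar**2 / 16 * (eps**2 + Delta**2) * (1 - t**2))
```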
\subsubsection{Second sub-calculation} In our second calculation, we focus on calculating the term $2g_{\beta \epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon$ defined as \begin{equation} 2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon \overset{\text{def}}{=}\frac{1}{4}\left\{ 2\beta\left[ \left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle -\left\langle \mathrm{H}\right\rangle \left\langle \left( \partial_{\epsilon }\mathrm{H}\right) _{d}\right\rangle \right] \right\} d\beta d\epsilon \text{.} \label{jo4} \end{equation} Note that $\left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H} \right) _{d}\right\rangle $ can be recast as \begin{align} \left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle & =\sum_{i=0}^{1}p_{i}\tilde{E}_{i}\partial_{\epsilon }\tilde{E}_{i}=p_{0}\tilde{E}_{0}\partial_{\epsilon}\tilde{E}_{0}+p_{1} \tilde{E}_{1}\partial_{\epsilon}\tilde{E}_{1}\nonumber\\ & =p_{0}\left( \frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2}}\right) \partial_{\epsilon}\left( \frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2} }\right) +p_{1}\left( -\frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2} }\right) \partial_{\epsilon}\left( -\frac{\hslash}{2}\sqrt{\epsilon ^{2}+\Delta^{2}}\right) \nonumber\\ & =\left( p_{0}+p_{1}\right) \left( \frac{\hslash}{2}\sqrt{\epsilon ^{2}+\Delta^{2}}\right) \partial_{\epsilon}\left( \frac{\hslash}{2} \sqrt{\epsilon^{2}+\Delta^{2}}\right) \nonumber\\ & =\left( \frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2}}\right) \partial_{\epsilon}\left( \frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2} }\right) \nonumber\\ & =\frac{\hslash^{2}}{4}\epsilon\text{,} \end{align} that is, \begin{equation} \left\langle \mathrm{H}\left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle =\frac{\hslash^{2}}{4}\epsilon\text{.} \label{mas1} \end{equation} We also note that the expectation value $\left\langle \mathrm{H}\right\rangle $ of the Hamiltonian 
$\mathrm{H}$ equals \begin{equation} \left\langle \mathrm{H}\right\rangle =-\frac{\hslash}{2}\sqrt{\epsilon ^{2}+\Delta^{2}}\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}} }{2}\right) \text{.} \label{mas2} \end{equation} Finally, the quantity $\left\langle \left( \partial_{\epsilon}\mathrm{H} \right) _{d}\right\rangle $ can be rewritten as \begin{align} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle & =\sum_{i=0}^{1}p_{i}\partial_{\epsilon}\tilde{E}_{i}=p_{0}\partial _{\epsilon}\tilde{E}_{0}+p_{1}\partial_{\epsilon}\tilde{E}_{1}\nonumber\\ & =p_{0}\partial_{\epsilon}\left( \frac{\hslash}{2}\sqrt{\epsilon^{2} +\Delta^{2}}\right) +p_{1}\partial_{\epsilon}\left( -\frac{\hslash}{2} \sqrt{\epsilon^{2}+\Delta^{2}}\right) \nonumber\\ & =\frac{e^{-\beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}} }{\mathcal{Z}}\left( \frac{\hslash}{2}\right) \frac{\epsilon}{\sqrt {\epsilon^{2}+\Delta^{2}}}+\frac{e^{\beta\hslash\frac{\sqrt{\epsilon ^{2}+\Delta^{2}}}{2}}}{\mathcal{Z}}\left( -\frac{\hslash}{2}\right) \frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}}\nonumber\\ & =-\frac{\hslash}{2}\frac{2\sinh\left( \beta\hslash\frac{\sqrt{\epsilon ^{2}+\Delta^{2}}}{2}\right) }{2\cosh\left( \beta\hslash\frac{\sqrt {\epsilon^{2}+\Delta^{2}}}{2}\right) }\frac{\epsilon}{\sqrt{\epsilon ^{2}+\Delta^{2}}}\nonumber\\ & =-\frac{\hslash}{2}\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}} \tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{,} \end{align} that is, \begin{equation} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle =-\frac{\hslash}{2}\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}}\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{.} \label{mas3} \end{equation} Finally, using Eqs. (\ref{mas1}), (\ref{mas2}), and (\ref{mas3}), $2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon$ in Eq. 
(\ref{jo4}) becomes \begin{align} 2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon & =\frac{1}{4}\left\{ 2\beta\left[ \begin{array} [c]{c} \frac{\hslash^{2}}{4}\epsilon+\frac{\hslash}{2}\sqrt{\epsilon^{2}+\Delta^{2} }\tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \cdot\\ \left( -\frac{\hslash}{2}\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}} \tanh\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right) \end{array} \right] \right\} d\beta d\epsilon\nonumber\\ & =\frac{1}{4}\left\{ 2\beta\left[ \frac{\hslash^{2}}{4}\epsilon -\frac{\hslash^{2}}{4}\epsilon\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \right\} d\beta d\epsilon\nonumber\\ & =\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon\text{,} \end{align} that is, \begin{equation} 2g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) d\beta d\epsilon =\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon\text{.} \label{J2} \end{equation} Our second sub-calculation ends with the derivation of Eq. (\ref{J2}). 
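Equation (\ref{J2}) can likewise be checked numerically by building the diagonal part $\left( \partial_{\epsilon}\mathrm{H}\right) _{d}$ of $\partial_{\epsilon}\mathrm{H}$ in the eigenbasis of $\mathrm{H}$ (our sketch; $\hslash=1$ and $\mathrm{H}=-(\hslash/2)(\Delta\sigma_{x}+\epsilon\sigma_{z})$ are assumptions, the latter consistent with $\partial_{\epsilon}\mathrm{H}=-(\hslash/2)\sigma_{z}$ noted after Eq. (\ref{zar4})).

```python
import numpy as np

# Check of Eqs. (mas1) and (J2); hbar = 1 and the assumed Hamiltonian form.
hbar, beta, Delta, eps = 1.0, 1.3, 0.7, 0.3
nu = np.sqrt(Delta**2 + eps**2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -(hbar / 2) * (Delta * sx + eps * sz)

w, V = np.linalg.eigh(H)
rho = V @ np.diag(np.exp(-beta * w)) @ V.T
rho /= np.trace(rho)

dH = -(hbar / 2) * sz                                # d_eps H
# diagonal part (d_eps H)_d of dH in the eigenbasis of H
proj = [np.outer(V[:, i], V[:, i]) for i in range(2)]
dHd = sum(P @ dH @ P for P in proj)

# Eq. (jo4): 2 g_beta_eps = (1/2)*beta*[<H (dH)_d> - <H><(dH)_d>]
two_g_be = 0.5 * beta * (np.trace(H @ dHd @ rho)
                         - np.trace(H @ rho) * np.trace(dHd @ rho))
t = np.tanh(beta * hbar * nu / 2)
assert np.allclose(np.trace(H @ dHd @ rho), hbar**2 / 4 * eps)       # Eq. (mas1)
assert np.allclose(two_g_be, hbar**2 / 8 * beta * eps * (1 - t**2))  # Eq. (J2)
```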
\subsubsection{Third sub-calculation} In what follows, we focus on the calculation of $g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) d\epsilon^{2}$, \begin{equation} \ g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) d\epsilon ^{2}\ \overset{\text{def}}{=}\frac{1}{4}\left\{ \beta^{2}\left\{ \left\langle \left[ \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right] ^{2}\right\rangle -\left\langle \left( \partial_{\epsilon }\mathrm{H}\right) _{d}\right\rangle ^{2}\right\} +2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\epsilon}\mathrm{H}|m\right\rangle }{\tilde {E}_{n}-\tilde{E}_{m}}\right\vert ^{2}\frac{\left( e^{-\beta\tilde{E}_{n} }-e^{-\beta\tilde{E}_{m}}\right) ^{2}}{\mathcal{Z}\cdot\left( e^{-\beta \tilde{E}_{n}}+e^{-\beta\tilde{E}_{m}}\right) }\right\} d\epsilon ^{2}\text{.} \label{zar1} \end{equation} Let us recall that $\left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle $ is given in Eq. (\ref{mas3}). Therefore, we get \begin{equation} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}\right\rangle ^{2}=\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2}+\Delta^{2}} \tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{.} \label{zar2} \end{equation} Moreover, $\left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d} ^{2}\right\rangle $ is given by \begin{align} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}^{2} \right\rangle & =\sum_{i=0}^{1}p_{i}\left( \partial_{\epsilon}\tilde{E} _{i}\right) ^{2}=p_{0}\left( \partial_{\epsilon}\tilde{E}_{0}\right) ^{2}+p_{1}\left( \partial_{\epsilon}\tilde{E}_{1}\right) ^{2}\nonumber\\ & =p_{0}\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2}+\Delta^{2} }+p_{1}\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2}+\Delta^{2} }\nonumber\\ & =\left( p_{0}+p_{1}\right) \frac{\hslash^{2}}{4}\frac{\epsilon^{2} }{\epsilon^{2}+\Delta^{2}}\nonumber\\ & 
=\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2}+\Delta^{2}}\text{,} \end{align} that is, \begin{equation} \left\langle \left( \partial_{\epsilon}\mathrm{H}\right) _{d}^{2} \right\rangle =\frac{\hslash^{2}}{4}\frac{\epsilon^{2}}{\epsilon^{2} +\Delta^{2}}\text{.} \label{zar3} \end{equation} Finally, let us focus on the term in Eq. (\ref{zar1}) given by \begin{align} 2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\epsilon} \mathrm{H}|m\right\rangle }{\tilde{E}_{n}-\tilde{E}_{m}}\right\vert ^{2} \frac{\left( e^{-\beta\tilde{E}_{n}}-e^{-\beta\tilde{E}_{m}}\right) ^{2} }{\mathcal{Z}\cdot\left( e^{-\beta\tilde{E}_{n}}+e^{-\beta\tilde{E}_{m} }\right) } & =\frac{2}{\mathcal{Z}}\left\vert \frac{\left\langle n_{0}|\partial_{\epsilon}\mathrm{H}|n_{1}\right\rangle }{\tilde{E}_{0} -\tilde{E}_{1}}\right\vert ^{2}\frac{\left( e^{-\beta\tilde{E}_{0}} -e^{-\beta\tilde{E}_{1}}\right) ^{2}}{\left( e^{-\beta\tilde{E}_{0} }+e^{-\beta\tilde{E}_{1}}\right) }+\nonumber\\ & +\frac{2}{\mathcal{Z}}\left\vert \frac{\left\langle n_{1}|\partial _{\epsilon}\mathrm{H}|n_{0}\right\rangle }{\tilde{E}_{1}-\tilde{E}_{0} }\right\vert ^{2}\frac{\left( e^{-\beta\tilde{E}_{1}}-e^{-\beta\tilde{E}_{0} }\right) ^{2}}{\left( e^{-\beta\tilde{E}_{0}}+e^{-\beta\tilde{E}_{1} }\right) }\nonumber\\ & =\frac{2}{\mathcal{Z}}\frac{\left( e^{-\beta\tilde{E}_{0}}-e^{-\beta \tilde{E}_{1}}\right) ^{2}}{\left( e^{-\beta\tilde{E}_{0}}+e^{-\beta \tilde{E}_{1}}\right) }\frac{\left( \left\vert \left\langle n_{0} |\partial_{\epsilon}\mathrm{H}|n_{1}\right\rangle \right\vert ^{2}+\left\vert \left\langle n_{1}|\partial_{\epsilon}\mathrm{H}|n_{0}\right\rangle \right\vert ^{2}\right) }{\left\vert \tilde{E}_{0}-\tilde{E}_{1}\right\vert ^{2}}\nonumber\\ & =2\frac{\left( e^{-\beta\tilde{E}_{0}}-e^{-\beta\tilde{E}_{1}}\right) ^{2}}{\left( e^{-\beta\tilde{E}_{0}}+e^{-\beta\tilde{E}_{1}}\right) ^{2} }\frac{\frac{\hslash^{2}}{4}\frac{\Delta^{2}}{\Delta^{2}+\epsilon^{2}} 
+\frac{\hslash^{2}}{4}\frac{\Delta^{2}}{\Delta^{2}+\epsilon^{2}}}{\hslash ^{2}\left( \epsilon^{2}+\Delta^{2}\right) }\nonumber\\ & =\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\tanh ^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \text{,} \label{alsing} \end{align} that is, \begin{equation} 2\sum_{n\neq m}\left\vert \frac{\left\langle n|\partial_{\epsilon} \mathrm{H}|m\right\rangle }{\tilde{E}_{n}-\tilde{E}_{m}}\right\vert ^{2} \frac{\left( e^{-\beta\tilde{E}_{n}}-e^{-\beta\tilde{E}_{m}}\right) ^{2} }{\mathcal{Z}\cdot\left( e^{-\beta\tilde{E}_{n}}+e^{-\beta\tilde{E}_{m} }\right) }d\epsilon^{2}=\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon ^{2}\right) ^{2}}\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2} +\Delta^{2}}}{2}\right) d\epsilon^{2}\text{.} \label{zar4} \end{equation} For clarity, note that $\partial_{\epsilon}\mathrm{H}$ in Eq. (\ref{zar4}) equals $\left( -\hslash/2\right) \sigma_{z}$ in the standard computational basis $\left\{ \left\vert 0\right\rangle \text{, }\left\vert 1\right\rangle \right\} $. Therefore, combining Eqs. (\ref{zar2}), (\ref{zar3}), and (\ref{zar4}) we get that $g_{\epsilon\epsilon}\left( \beta\text{, } \epsilon\right) d\epsilon^{2}$ in Eq. (\ref{zar1}) equals \begin{equation} g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) d\epsilon^{2} =\frac{\hslash^{2}}{16}\left\{ \frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) +\frac{\beta^{2}\epsilon^{2} }{\epsilon^{2}+\Delta^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \right\} d\epsilon ^{2}\text{.} \label{J3} \end{equation} Then, using Eqs. (\ref{J1}), (\ref{J2}), and (\ref{J3}), $ds_{\mathrm{Bures} }^{2}$ in Eq. 
(\ref{zarrillo}) becomes \begin{align} ds_{\mathrm{Bures}}^{2} & =\frac{\hslash^{2}}{16}\left( \epsilon^{2} +\Delta^{2}\right) \left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta^{2}+\nonumber\\ & +\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon+\nonumber\\ & +\frac{\hslash^{2}}{16}\left\{ \frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) +\frac{\beta^{2}\epsilon^{2} }{\epsilon^{2}+\Delta^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \right\} d\epsilon ^{2}\text{.} \label{f4} \end{align} Finally, using Eq. (\ref{f4}), the Bures metric tensor in the case of a superconducting flux qubit becomes \begin{equation} g_{ij}^{\left( \mathrm{Bures}\right) }\left( \beta\text{, }\epsilon\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \left( \begin{array} [c]{cc} \epsilon^{2}+\Delta^{2} & \beta\epsilon\\ \beta\epsilon & \frac{\beta^{2}\epsilon^{2}}{\epsilon^{2}+\Delta^{2}} +\frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2} }\frac{\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}} {2}\right) }{1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2} +\Delta^{2}}}{2}\right) } \end{array} \right) \text{,} \label{chetu1} \end{equation} with $1\leq i$, $j\leq2$. The derivation of Eqs. (\ref{f4}) and (\ref{chetu1}) completes our calculation of the Bures metric structure for a superconducting flux qubit. \subsubsection{The Sj\"{o}qvist metric} Let us observe that the Sj\"{o}qvist metric in Eq. 
(\ref{vetta}) can be rewritten in our case as \begin{equation} ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\overset{\text{def}}{=}\frac{1}{4}\sum _{k=0}^{1}\frac{dp_{k}^{2}}{p_{k}}+\sum_{k=0}^{1}p_{k}ds_{k}^{2}\text{,} \label{cami0} \end{equation} where $ds_{k}^{2}\overset{\text{def}}{=}\left[ \left\langle dn_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |dn_{k}\right\rangle \right] $ and $\left\langle n_{k}\left\vert n_{k^{\prime}}\right. \right\rangle =\delta_{kk^{\prime}}$. From Eq. (\ref{anto444}), the states $\left\vert dn_{0}\right\rangle $ and $\left\vert dn_{1}\right\rangle $ become \begin{equation} \left\vert dn_{0}\right\rangle \overset{\text{def}}{=}\frac{1}{\sqrt{2} }\left( \begin{array} [c]{c} \frac{1}{2}\frac{\Delta^{2}}{\sqrt{\epsilon^{2}+\Delta^{2}}}\frac {\sqrt{\epsilon^{2}+\Delta^{2}}-\epsilon}{\left( \epsilon^{2}+\Delta ^{2}-\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{3/2}}d\epsilon\\ \frac{1}{2}\frac{\Delta^{2}}{\sqrt{\epsilon^{2}+\Delta^{2}}}\frac{\left( \sqrt{\epsilon^{2}+\Delta^{2}}-\epsilon\right) ^{2}}{\left( \epsilon ^{2}+\Delta^{2}-\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{3/2} }d\epsilon \end{array} \right) \text{, and }\left\vert dn_{1}\right\rangle \overset{\text{def} }{=}\frac{1}{\sqrt{2}}\left( \begin{array} [c]{c} \frac{1}{2}\frac{\Delta^{2}}{\sqrt{\epsilon^{2}+\Delta^{2}}}\frac {\epsilon+\sqrt{\epsilon^{2}+\Delta^{2}}}{\left( \epsilon^{2}+\Delta ^{2}+\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{3/2}}d\epsilon\\ -\frac{1}{2}\frac{\Delta^{2}}{\sqrt{\epsilon^{2}+\Delta^{2}}}\frac{\left( \epsilon+\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{2}}{\left( \epsilon ^{2}+\Delta^{2}+\epsilon\sqrt{\epsilon^{2}+\Delta^{2}}\right) ^{3/2} }d\epsilon \end{array} \right) \text{,} \label{cami1} \end{equation} respectively. Eqs. (\ref{anto444}) and (\ref{cami1}) will be used to calculate the nonclassical contribution that appears in the Sj\"{o}qvist metric in\ Eq. (\ref{cami0}). 
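The differentials in Eq. (\ref{cami1}) can be cross-checked by finite differences of the eigenvectors in Eq. (\ref{anto444}); since those states are real, a fixed sign convention suffices as a gauge. The sketch below (our addition) verifies that $\left\langle n_{k}\left\vert dn_{k}\right. \right\rangle =0$ and $\left\langle dn_{k}\left\vert dn_{k}\right. \right\rangle =\frac{1}{4}\Delta^{2}/(\Delta^{2}+\epsilon^{2})^{2}\,d\epsilon^{2}$, the ingredients used in the nonclassical contribution.

```python
import numpy as np

# Finite-difference check of Eq. (cami1); our sketch with Delta, eps fixed.
Delta, eps, de = 0.7, 0.3, 1e-5

def n_states(e):
    # normalized form of the eigenvectors in Eq. (anto444)
    nu = np.sqrt(Delta**2 + e**2)
    n0 = np.array([e - nu, Delta]) / np.sqrt(2 * (nu**2 - e * nu))
    n1 = np.array([e + nu, Delta]) / np.sqrt(2 * (nu**2 + e * nu))
    return n0, n1

n0m, n1m = n_states(eps - de / 2)
n0p, n1p = n_states(eps + de / 2)
dn0, dn1 = n0p - n0m, n1p - n1m      # |dn_k> for a step deps = de
n0, n1 = n_states(eps)

nu = np.sqrt(Delta**2 + eps**2)
assert abs(n0 @ dn0) < 1e-12 and abs(n1 @ dn1) < 1e-12
assert np.allclose([dn0 @ dn0, dn1 @ dn1],
                   Delta**2 / (4 * nu**4) * de**2, rtol=1e-6)
```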
In what follows, however, let us consider the classical contribution $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \mathrm{classical}\right) }$ in Eq. (\ref{cami0}). We note that $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \mathrm{classical}\right) }$ equals \begin{align} \frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2}}{p_{k}} & =\frac{1}{4}\frac {dp_{0}^{2}}{p_{0}}+\frac{1}{4}\frac{dp_{1}^{2}}{p_{1}}\nonumber\\ & =\left[ \frac{\left( \partial_{\beta}p_{0}\right) ^{2}}{4p_{0}} +\frac{\left( \partial_{\beta}p_{1}\right) ^{2}}{4p_{1}}\right] d\beta ^{2}+\left[ \frac{\left( \partial_{\epsilon}p_{0}\right) ^{2}}{4p_{0} }+\frac{\left( \partial_{\epsilon}p_{1}\right) ^{2}}{4p_{1}}\right] d\epsilon^{2}+\left[ \frac{2\partial_{\beta}p_{0}\partial_{\epsilon}p_{0} }{4p_{0}}+\frac{2\partial_{\beta}p_{1}\partial_{\epsilon}p_{1}}{4p_{1} }\right] d\beta d\epsilon\text{.} \label{q2} \end{align} Using Eq. (\ref{sorry}), $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \mathrm{classical}\right) }$ in Eq. (\ref{q2}) reduces to \begin{align} \left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \mathrm{classical} \right) } & =\frac{1}{4}\sum_{k=0}^{1}\frac{dp_{k}^{2}}{p_{k}}\nonumber\\ & =\frac{\hslash^{2}}{16}\left( \epsilon^{2}+\Delta^{2}\right) \left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}} {2}\right) \right] d\beta^{2}+\nonumber\\ & +\frac{\hslash^{2}}{16}\frac{\beta^{2}\epsilon^{2}}{\epsilon^{2}+\Delta ^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2} +\Delta^{2}}}{2}\right) \right] d\epsilon^{2}+\nonumber\\ & +\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon\text{.} \label{classical} \end{align} We can now return our focus to the nonclassical contribution $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum} }\right) }$ that specifies the Sj\"{o}qvist metric. 
We have \begin{align} \left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) } & =\sum_{k=0}^{1}p_{k}\left\langle dn_{k}|\left( \mathrm{I}-\left\vert n_{k}\right\rangle \left\langle n_{k}\right\vert \right) |dn_{k}\right\rangle \nonumber\\ & =p_{0}\left\langle dn_{0}|dn_{0}\right\rangle -p_{0}\left\vert \left\langle dn_{0}|n_{0}\right\rangle \right\vert ^{2}+p_{1}\left\langle dn_{1} |dn_{1}\right\rangle -p_{1}\left\vert \left\langle dn_{1}|n_{1}\right\rangle \right\vert ^{2}\text{.} \end{align} A simple check allows us to verify that $\left\langle dn_{0}|n_{0} \right\rangle =0$ and $\left\langle dn_{1}|n_{1}\right\rangle =0$. Therefore, $\left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) }$ becomes \begin{align} \left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) } & =p_{0}\left\langle dn_{0}|dn_{0} \right\rangle +p_{1}\left\langle dn_{1}|dn_{1}\right\rangle \nonumber\\ & =p_{0}\frac{1}{8}\frac{\Delta^{2}}{\Delta^{2}+\epsilon^{2}}\left( \epsilon-\sqrt{\Delta^{2}+\epsilon^{2}}\right) ^{2}\frac{2\Delta ^{2}+2\epsilon^{2}-2\epsilon\sqrt{\Delta^{2}+\epsilon^{2}}}{\left( \Delta ^{2}+\epsilon^{2}-\epsilon\sqrt{\Delta^{2}+\epsilon^{2}}\right) ^{3} }d\epsilon^{2}+\nonumber\\ & +p_{1}\frac{1}{8}\frac{\Delta^{2}}{\Delta^{2}+\epsilon^{2}}\left( \epsilon+\sqrt{\Delta^{2}+\epsilon^{2}}\right) ^{2}\frac{2\Delta ^{2}+2\epsilon^{2}+2\epsilon\sqrt{\Delta^{2}+\epsilon^{2}}}{\left( \Delta ^{2}+\epsilon^{2}+\epsilon\sqrt{\Delta^{2}+\epsilon^{2}}\right) ^{3} }d\epsilon^{2}\nonumber\\ & =\frac{1}{4}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2} }d\epsilon^{2}\nonumber\\ & =\frac{\hslash^{2}}{16}\frac{4\Delta^{2}}{\hslash^{2}\left( \Delta ^{2}+\epsilon^{2}\right) ^{2}}d\epsilon^{2}\text{,} \end{align} that is, \begin{equation} \left[ ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\right] ^{\left( \text{\textrm{quantum}}\right) }=\frac{\hslash^{2}}{16}\frac{4\Delta^{2} 
}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}d\epsilon ^{2}\text{.} \label{quantum} \end{equation} Finally, combining Eqs. (\ref{classical}) and (\ref{quantum}), the Sj\"{o}qvist metric $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}$ becomes \begin{align} ds_{\mathrm{Sj\ddot{o}qvist}}^{2} & =\frac{\hslash^{2}}{16}\left( \epsilon^{2}+\Delta^{2}\right) \left[ 1-\tanh^{2}\left( \beta\hslash \frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta ^{2}+\nonumber\\ & +\frac{\hslash^{2}}{8}\beta\epsilon\left[ 1-\tanh^{2}\left( \beta \hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] d\beta d\epsilon+\nonumber\\ & +\frac{\hslash^{2}}{16}\left\{ \frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}+\frac{\beta^{2}\epsilon^{2}} {\epsilon^{2}+\Delta^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac {\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \right\} d\epsilon ^{2}\text{.} \label{f3} \end{align} The metric tensor $g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\epsilon\right) $ from Eq. (\ref{f3}) is given by \begin{equation} g_{ij}^{\left( \mathrm{Sj\ddot{o}qvist}\right) }\left( \beta\text{, }\epsilon\right) =\frac{\hslash^{2}}{16}\left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \left( \begin{array} [c]{cc} \epsilon^{2}+\Delta^{2} & \beta\epsilon\\ \beta\epsilon & \frac{\beta^{2}\epsilon^{2}}{\epsilon^{2}+\Delta^{2}} +\frac{4\Delta^{2}}{\hslash^{2}\left( \Delta^{2}+\epsilon^{2}\right) ^{2} }\frac{1}{1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}} }{2}\right) } \end{array} \right) \text{,} \label{chetu} \end{equation} with $1\leq i$, $j\leq2$. The derivation of Eqs. (\ref{f3}) and (\ref{chetu}) completes our calculation of the Sj\"{o}qvist metric structure for superconducting flux qubits. 
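With both metrics now in closed form, the $\epsilon\epsilon$-component of the Bures metric in Eq. (\ref{chetu1}) admits an independent consistency check: it can be recovered from finite differences of the Bures distance $d_{B}^{2}(\rho_{1}\text{, }\rho_{2})=2\left( 1-\mathrm{tr}\sqrt{\sqrt{\rho_{1}}\rho_{2}\sqrt{\rho_{1}}}\right) $ applied to Eq. (\ref{anto5}). The sketch below is our addition, with $\hslash=1$.

```python
import numpy as np

# Finite-difference check of the eps-eps entry of Eq. (chetu1), hbar = 1.
hbar, beta, Delta, eps = 1.0, 1.3, 0.7, 0.3
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def rho(e):
    # closed form of Eq. (anto5)
    nu = np.sqrt(Delta**2 + e**2)
    t = np.tanh(beta * hbar * nu / 2)
    return 0.5 * (np.eye(2) + (Delta * sx + e * sz) / nu * t)

def sqrtm_psd(A):
    # matrix square root of a symmetric positive semidefinite matrix
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def bures_d2(r1, r2):
    s = sqrtm_psd(r1)
    return 2.0 * (1.0 - np.trace(sqrtm_psd(s @ r2 @ s)))

de = 1e-3
num = bures_d2(rho(eps - de / 2), rho(eps + de / 2)) / de**2

nu = np.sqrt(Delta**2 + eps**2)
t = np.tanh(beta * hbar * nu / 2)
g_ee = hbar**2 / 16 * (1 - t**2) * (beta**2 * eps**2 / nu**2) \
     + Delta**2 / (4 * nu**4) * t**2          # eps-eps entry of Eq. (chetu1)
assert np.allclose(num, g_ee, rtol=1e-4)
```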
\section{Considerations from the comparative analysis} In this section, we discuss the outcomes of the comparative analysis carried out in Section V concerning the Bures and Sj\"{o}qvist metrics for the spin qubit and superconducting flux qubit Hamiltonian models presented in Section IV. \subsection{The asymptotic limit of $\beta$ approaching infinity} We begin by discussing the asymptotic limit of $\beta$ approaching infinity. In the case of a spin qubit with Hamiltonian \textrm{H}$_{\mathrm{SQ} }\left( \omega\right) $ in Eq. (\ref{spinH}), the density matrix $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{ro1}) approaches $\rho_{\mathrm{SQ}}^{\beta\rightarrow\infty}\left( \omega\right) \overset{\text{def}}{=}\left\vert 1\right\rangle \left\langle 1\right\vert $ as $\beta\rightarrow\infty$. Observe that $\left\vert 1\right\rangle $ here denotes the ground state, that is, the state of lowest energy $-\hslash\omega/2$. Since $\rho_{\mathrm{SQ}}^{\beta\rightarrow\infty}\left( \omega\right) $ is constant in $\omega$, the Bures and Sj\"{o}qvist metrics in Eqs. (\ref{07}) and (\ref{f0}), respectively, both vanish in this limiting scenario. In this regard, the case of the superconducting flux qubit specified by the Hamiltonian \textrm{H}$_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) $ in Eq. (\ref{superH}) is more interesting. Indeed, in this case the density matrix $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto5}) approaches $\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) $ when $\beta$ approaches infinity. 
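This limiting behavior can be checked numerically: for large $\beta$, Eq. (\ref{anto5}) yields an idempotent density matrix, i.e., a pure state whose energy expectation is the lowest eigenvalue of the Hamiltonian (a sketch, again with $\hslash=1$ and our assumed form $\mathrm{H}_{\mathrm{SFQ}}=-(\hslash/2)(\Delta\sigma_{x}+\epsilon\sigma_{z})$).

```python
import numpy as np

# Large-beta limit of Eq. (anto5): a pure ground state (hbar = 1 assumed).
Delta, eps, beta = 0.7, 0.3, 200.0
nu = np.sqrt(Delta**2 + eps**2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -0.5 * (Delta * sx + eps * sz)

t = np.tanh(beta * nu / 2)
rho = 0.5 * (np.eye(2) + (Delta * sx + eps * sz) / nu * t)   # Eq. (anto5)

assert np.allclose(rho @ rho, rho, atol=1e-12)               # purity
assert np.allclose(np.trace(H @ rho), -nu / 2, atol=1e-12)   # ground energy
```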
The quantity $\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) $ represents a pure (ground) state of lowest energy $\left( -\hslash/2\right) \sqrt{\Delta^{2}+\epsilon^{2}}$ and is given by \begin{equation} \rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) =\frac {1}{2}\left( \begin{array} [c]{cc} 1+\frac{\epsilon}{\sqrt{\epsilon^{2}+\Delta^{2}}} & \frac{\Delta} {\sqrt{\epsilon^{2}+\Delta^{2}}}\\ \frac{\Delta}{\sqrt{\epsilon^{2}+\Delta^{2}}} & 1-\frac{\epsilon} {\sqrt{\epsilon^{2}+\Delta^{2}}} \end{array} \right) \text{,} \end{equation} with $\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) =(\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) )^{2}$ and \textrm{tr}$\left( \rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) \right) =1$. Furthermore, when $\beta\rightarrow\infty$, the Bures and Sj\"{o}qvist metrics in Eqs. (\ref{f4}) and (\ref{f3}), respectively, reduce to the same expression \begin{equation} ds_{\mathrm{Bures}}^{2}\overset{\beta\rightarrow\infty}{\rightarrow}\frac {1}{4}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2}} d\epsilon^{2}\text{, and }ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\overset{\beta \rightarrow\infty}{\rightarrow}\frac{1}{4}\frac{\Delta^{2}}{\left( \Delta ^{2}+\epsilon^{2}\right) ^{2}}d\epsilon^{2}\text{.} \label{betalim} \end{equation} The limiting expressions assumed by the Bures and Sj\"{o}qvist metrics in Eq. (\ref{betalim}) are, modulo an unimportant constant factor, the Fubini-Study metric $ds_{\mathrm{FS}}^{2}$ for pure states. 
Indeed, we have \begin{equation} ds_{\mathrm{FS}}^{2}\overset{\text{def}}{=}\mathrm{tr}\left[ \left( \frac{\partial\rho_{\mathrm{SFQ}}^{\beta\rightarrow\infty}\left( \epsilon\right) }{\partial\epsilon}\right) ^{2}\right] d\epsilon^{2} =\frac{1}{2}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2} }d\epsilon^{2}\text{.} \label{FSlimit} \end{equation} In the next subsection, we discuss the discrepancy between the Bures (Eqs. (\ref{07}) and (\ref{f4})) and Sj\"{o}qvist (Eqs. (\ref{f0}) and (\ref{f3})) metrics that emerges from the different nature of the nonclassical contributions in the two metrics. \subsection{The metrics discrepancy} \begin{figure} \caption{Plots of the metric discrepancy $\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) $ between the Sj\"{o}qvist and Bures metrics for the superconducting flux qubit Hamiltonian model.} \end{figure} We begin by noting that in the case of the spin qubit Hamiltonian model in Eq. (\ref{spinH}), there is no discrepancy since the Bures and the Sj\"{o}qvist metrics in Eqs. (\ref{07}) and (\ref{f0}), respectively, coincide. Indeed, in this case, both metrics reduce to the classical Fisher-Rao information metric. The nonclassical/quantum terms vanish in both metrics due to the commutativity of $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ and $\left( \rho_{\mathrm{SQ}}+d\rho_{\mathrm{SQ}}\right) \left( \beta\text{, } \omega\right) $, with $\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) $ in Eq. (\ref{ro1}). In the case of the superconducting flux qubit Hamiltonian model in Eq. (\ref{superH}), instead, the nonclassical/quantum terms vanish in neither the Bures nor the Sj\"{o}qvist metric due to the non-commutativity of $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon \right) $ and $\left( \rho_{\mathrm{SFQ}}+d\rho_{\mathrm{SFQ}}\right) \left( \beta\text{, }\epsilon\right) $, with $\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{anto5}). However, these nonclassical contributions differ in the two metrics, and this leads to a discrepancy between the Bures and Sj\"{o}qvist metrics in Eqs. 
(\ref{f4}) and (\ref{f3}), respectively. More specifically, we have
\begin{equation}
ds_{\mathrm{Sj\ddot{o}qvist}}^{2}-ds_{\mathrm{Bures}}^{2}=\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) d\epsilon^{2}\geq0\text{,} \label{dis1}
\end{equation}
for any $\beta$ and $\epsilon$. Note that $\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) \overset{\text{def}}{=}g_{\epsilon\epsilon}^{\mathrm{nc,}\text{ }\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\epsilon\right) -g_{\epsilon\epsilon}^{\mathrm{nc,}\text{ }\mathrm{Bures}}\left( \beta\text{, }\epsilon\right) $ is the difference between the nonclassical (\textrm{nc}) contributions in the metric components $g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) \overset{\text{def}}{=}g_{\epsilon\epsilon}^{\mathrm{c}}\left( \beta\text{, }\epsilon\right) +g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) $ and is given by
\begin{equation}
\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) \overset{\text{def}}{=}\frac{1}{4}\frac{\Delta^{2}}{\left( \Delta^{2}+\epsilon^{2}\right) ^{2}}\left[ 1-\tanh^{2}\left( \beta\hslash\frac{\sqrt{\epsilon^{2}+\Delta^{2}}}{2}\right) \right] \text{,} \label{discrepancy}
\end{equation}
with $0\leq\tanh^{2}\left( x\right) \leq1$ for any $x\in\mathbb{R}$.
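A short numerical sketch (parameter values are arbitrary) confirms two facts stated above: the discrepancy in Eq. (\ref{discrepancy}) is nonnegative and vanishes as $\beta\rightarrow\infty$, and a finite-difference derivative of the pure state reproduces the Fubini-Study coefficient $\frac{1}{2}\Delta^{2}/(\Delta^{2}+\epsilon^{2})^{2}$ of Eq. (\ref{FSlimit}):

```python
import numpy as np

hbar, Delta = 1.0, 0.7  # illustrative values

def delta_g(beta, eps):
    """Metric discrepancy Delta g^nc_ee(beta, eps) from the formula above."""
    r2 = Delta**2 + eps**2
    return 0.25 * Delta**2 / r2**2 * (1.0 - np.tanh(beta * hbar * np.sqrt(r2) / 2.0)**2)

eps_grid = np.linspace(-5.0, 5.0, 201)
for beta in (0.1, 1.0, 10.0):
    assert np.all(delta_g(beta, eps_grid) >= 0.0)  # Sjoqvist metric never below Bures
assert np.all(delta_g(1e3, eps_grid) < 1e-12)      # discrepancy dies off as beta -> infinity

# Finite-difference check of tr[(d rho / d eps)^2] = (1/2) Delta^2 / (Delta^2 + eps^2)^2.
def rho_pure(eps):
    r = np.sqrt(eps**2 + Delta**2)
    return 0.5 * np.array([[1 + eps / r, Delta / r], [Delta / r, 1 - eps / r]])

eps0, h = 0.3, 1e-5
drho = (rho_pure(eps0 + h) - rho_pure(eps0 - h)) / (2 * h)
fs = np.trace(drho @ drho)
assert np.isclose(fs, 0.5 * Delta**2 / (Delta**2 + eps0**2)**2)
```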
To make this explicit, it is useful to recast the metric tensor $g_{ij}\left( \beta\text{, }\epsilon\right) $ with $1\leq i$, $j\leq2$ (i.e., $\theta^{1}\overset{\text{def}}{=}\beta$ and $\theta^{2}\overset{\text{def}}{=}\epsilon$) as
\begin{equation}
g_{ij}\left( \beta\text{, }\epsilon\right) =\left(
\begin{array}
[c]{cc}
g_{\beta\beta}\left( \beta\text{, }\epsilon\right) & g_{\beta\epsilon}\left( \beta\text{, }\epsilon\right) \\
g_{\epsilon\beta}\left( \beta\text{, }\epsilon\right) & g_{\epsilon\epsilon}^{\mathrm{c}}\left( \beta\text{, }\epsilon\right) +g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right)
\end{array}
\right) \text{.}
\end{equation}
The discrepancy between the Bures and Sj\"{o}qvist metrics arises only because $g_{\epsilon\epsilon}^{\mathrm{nc,}\text{ }\mathrm{Sj\ddot{o}qvist}}\left( \beta\text{, }\epsilon\right) \neq g_{\epsilon\epsilon}^{\mathrm{nc,}\text{ }\mathrm{Bures}}\left( \beta\text{, }\epsilon\right) $. However, the metric discrepancy $\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) $ in Eq. (\ref{discrepancy}) vanishes in the asymptotic limit of $\beta$ approaching infinity. In Fig. $1$, we plot the discrepancy between the Bures and the Sj\"{o}qvist metrics for the superconducting flux qubit Hamiltonian model.
In Table I, instead, we summarize the links between the Bures and the Sj\"{o}qvist metrics.
\begin{table}[t]
\centering
\begin{tabular}
[c]{c|c|c|c}\hline\hline
\textbf{Description of quantum states} & \textbf{Quantum states} & \textbf{Bures metric} & \textbf{Sj\"{o}qvist metric}\\\hline
Pure & $\rho=\rho^{2}$ & Fubini-Study metric & Fubini-Study metric\\
Mixed, classical scenario & $\rho\neq\rho^{2}$, $\left[ \rho\text{, }\rho+d\rho\right] =0$ & Fisher-Rao metric & Fisher-Rao metric\\
Mixed, nonclassical scenario & $\rho\neq\rho^{2}$, $\left[ \rho\text{, }\rho+d\rho\right] \neq0$ & $ds_{\mathrm{Bures}}^{2}\neq ds_{\mathrm{Sj\ddot{o}qvist}}^{2}$ & $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}\neq ds_{\mathrm{Bures}}^{2}$\\\hline
\end{tabular}
\caption{Relation between the Bures and the Sj\"{o}qvist metrics. The Bures and the Sj\"{o}qvist metrics are identical when considering pure quantum states $\left( \rho=\rho^{2}\right) $ or mixed quantum states $\left( \rho\neq\rho^{2}\right) $ for which the non-commutative probabilistic structure underlying quantum theory is not visible (i.e., in the classical scenario with $\left[ \rho\text{, }\rho+d\rho\right] =0$). In particular, in the former and latter cases, they become the Fubini-Study and the Fisher-Rao information metrics, respectively. However, the Bures and the Sj\"{o}qvist metrics are generally distinct when considering mixed quantum states $\left( \rho\neq\rho^{2}\right) $ for which the non-commutative probabilistic structure of quantum mechanics is visible (i.e., in the nonclassical scenario with $\left[ \rho\text{, }\rho+d\rho\right] \neq0$).}
\end{table}

\section{Conclusive Remarks}
In this paper, building on our recent scientific effort in Ref. \cite{cafaroprd22}, we presented an explicit analysis of the Bures and Sj\"{o}qvist metrics over the manifolds of thermal states for the spin qubit (Eq. (\ref{spinH})) and the superconducting flux qubit Hamiltonian (Eq. (\ref{superH})) models.
We observed that while both metrics (Eqs. (\ref{07}) and (\ref{f0})) reduce to the Fubini-Study metric in the (zero-temperature) asymptotic limiting case of the inverse temperature $\beta$ approaching infinity for both Hamiltonian models, the two metrics are generally different. We observed this different behavior in the case of the superconducting flux qubit Hamiltonian model. In general, we note that the two metrics (Eqs. (\ref{f4}) and (\ref{f3})) seem to differ when nonclassical behavior is present since they quantify the noncommutativity of neighboring mixed quantum states in different manners (Eqs. (\ref{dis1}) and (\ref{discrepancy})). In summary, we reach (see Table I) the conclusion that for pure quantum states $\left( \rho=\rho^{2}\right) $ and for mixed quantum states $\left( \rho\neq\rho^{2}\right) $ for which the non-commutative probabilistic structure underlying quantum theory is not visible (i.e., in the classical scenario with $\left[ \rho\text{, }\rho+d\rho\right] =0$), the Bures and the Sj\"{o}qvist metrics are the same (Eqs. (\ref{f2}) and (\ref{f1})). Indeed, in the former and latter cases, they reduce to the Fubini-Study and Fisher-Rao information metrics, respectively. Instead, when investigating mixed quantum states $\left( \rho\neq\rho^{2}\right) $ for which the non-commutative probabilistic structure of quantum mechanics is visible (i.e., in the non-classical scenario with $\left[ \rho\text{, }\rho+d\rho\right] \neq0$), the Bures and the Sj\"{o}qvist metrics exhibit a different behavior (Eqs. (\ref{G1A}) and (\ref{G1B}); Eqs. (\ref{chetu1}) and (\ref{chetu})). Our main conclusions can be outlined as follows:
\begin{enumerate}
\item[{[i]}] We presented an explicit derivation of the Bures metric for arbitrary density matrices in H\"{u}bner's form (Eq. (\ref{buri14})) and in Zanardi's form (Eq. (\ref{7})). Moreover, we presented a clear derivation of Zanardi's form of the Bures metric suitable for the special class of thermal states (Eq. (\ref{general})).
Finally, we reported an explicit derivation of the Sj\"{o}qvist metric for nondegenerate density matrices (Eq. (\ref{vetta})).

\item[{[ii]}] Using our explicit derivations outlined in [i], we performed detailed analytical calculations yielding the expressions of the Bures (Eqs. (\ref{07}) and (\ref{f4})) and Sj\"{o}qvist (Eqs. (\ref{f0}) and (\ref{f3})) metrics on manifolds of thermal states (Eqs. (\ref{ro1}) and (\ref{anto5})) that correspond to the spin qubit (Eq. (\ref{spinH})) and the superconducting flux qubit (Eq. (\ref{superH})) Hamiltonian models.

\item[{[iii]}] In the absence of nonclassical features, where the neighboring density matrices $\rho$ and $d\rho$ commute, the Bures and the Sj\"{o}qvist metrics lead to an identical metric expression exemplified by the classical Fisher-Rao metric tensor. We have explicitly verified this similarity in the case of a manifold of thermal states for spin qubits in the presence of a constant magnetic field along the quantization $z$-axis.

\item[{[iv]}] In general, the Bures and the Sj\"{o}qvist metrics are expected to yield different expressions. Indeed, the Bures and Sj\"{o}qvist metrics seem to quantify the noncommutativity of neighboring mixed states $\rho$ and $d\rho$ in different manners. We have explicitly verified this difference in the case of a manifold of thermal states for superconducting flux qubits (see Fig. $2$).

\item[{[v]}] In the asymptotic limit of $\beta\rightarrow\infty$, both the Bures and Sj\"{o}qvist metric tensors reduce to the same limiting value (Eq. (\ref{betalim})) specified by, modulo an unimportant constant factor, the Fubini-Study metric tensor (Eq. (\ref{FSlimit})) for the zero-temperature pure states.
\item[{[vi]}] In the superconducting flux qubit Hamiltonian model considered here, we observe that the difference $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}-ds_{\mathrm{Bures}}^{2}$ is a positive quantity that depends solely on the difference between the nonclassical contributions to the metric tensor component $g_{\epsilon\epsilon}\left( \beta\text{, }\epsilon\right) $. Indeed, we have shown that $ds_{\mathrm{Sj\ddot{o}qvist}}^{2}-ds_{\mathrm{Bures}}^{2}=\Delta g_{\epsilon\epsilon}^{\mathrm{nc}}\left( \beta\text{, }\epsilon\right) d\epsilon^{2}\geq0$ (Eqs. (\ref{dis1}) and (\ref{discrepancy})).

\item[{[vii]}] The existence of nonclassical contributions in the Bures and Sj\"{o}qvist metrics is related to the presence of non-vanishing quadratic terms like $\left\vert \left\langle n\left\vert \partial_{h}\mathrm{H}\right\vert m\right\rangle \right\vert ^{2}$ and $\left\vert \left\langle dn\left\vert n\right. \right\rangle \right\vert ^{2}$, respectively. The former term is related to the modulus squared of suitable quantum overlaps defined in terms of parametric variations in the Hamiltonian of the system. The latter term, instead, is specified by the modulus squared of suitable quantum overlaps characterized by parametric variations of the eigenstates of the Hamiltonian of the system. It is not unreasonable to expect a formal connection between these two types of terms causing the noncommutativity between $\rho$ and $\rho+d\rho$ (see, for instance, Eq. (15.30) in Ref. \cite{karol06}) and to find a deeper relation between the Bures and Sj\"{o}qvist metrics for the class of thermal quantum states. Indeed, for a more quantitative discussion on the link between these two terms, see Ref. \cite{alsing23}.
\item[{[viii]}] The differential $d\rho\left( \beta\text{, }h\right) \overset{\text{def}}{=}\partial_{\beta}\rho d\beta+\partial_{h}\rho dh$ depends on parametric variations of both the eigenvalues and the eigenvectors. However, the noncommutativity between $\rho$ and $d\rho$ is related to the part of $d\rho$ that emerges from the parametric variations of the eigenvectors. These changes, in turn, can be related to the existence of a nonvanishing commutator between the Hamiltonian of the system and the density matrix specifying the thermal state. Indeed, in the two main examples studied in this paper, we have $\left[ \mathrm{H}_{\mathrm{SQ}}\left( \omega\right) \text{, }\rho_{\mathrm{SQ}}\left( \beta\text{, }\omega\right) \right] =0$ and $\left[ \mathrm{H}_{\mathrm{SFQ}}\left( \Delta\text{, }\epsilon\right) \text{, }\rho_{\mathrm{SFQ}}\left( \beta\text{, }\epsilon\right) \right] \neq0$, respectively. In the former case, unlike the latter case, there is no contribution to $d\rho$ arising from a variation in the eigenvectors of the Hamiltonian.
\end{enumerate}
For the set of pure states, the scenario is rather unambiguous. The Fubini-Study metric represents the only natural option for a measure that characterizes \textquotedblleft random states\textquotedblright. In contrast, for mixed-state density matrices, the geometry of the state space is more complicated \cite{karol06,brody19}. There is a collection of distinct metrics that can be used, each with its own physical motivation, benefits, and disadvantages, depending on the particular application one might be interested in examining.
Specifically, both simple and complicated geometric quantities (for instance, paths, path lengths, volumes, curvatures, and complexity) seem to depend on the measure selected on the space of mixed states that specifies the quantum system being investigated \cite{karol99,cafaroprd22}. Therefore, our work in this paper can be particularly important in offering an explicit comparative study between the (emerging) Sj\"{o}qvist interferometric geometry and the (established) Bures geometry for mixed quantum states. Encouragingly, the importance of this kind of comparative investigation was recently emphasized in Refs. \cite{mera22} and \cite{cafaroprd22} as well. From a mathematical standpoint, it would be interesting to formally prove (or, alternatively, disprove with an explicit counterexample) the monotonicity \cite{petz96a,petz99} of the Sj\"{o}qvist metric in an arbitrary finite-dimensional space of mixed quantum states. From a physics perspective that relies on empirical evidence, instead, it would be very important to understand the physical significance of employing either metric \cite{mera22,cafaroprd22}. In conclusion, despite its relatively limited scope, we hope this work will inspire both young and senior scientists to pursue ways to deepen our understanding (both mathematically and physically) of this fascinating connection among information geometry, statistical physics, and quantum mechanics \cite{cafaropre20,gassner21,hasegawa21,miller20,cc,saito20,ito20}.

\begin{acknowledgments}
C.C. is grateful to the United States Air Force Research Laboratory (AFRL) Summer Faculty Fellowship Program for providing support for this work. C.C. acknowledges helpful discussions with Orlando Luongo, Cosmo Lupo, Stefano Mancini, and Hernando Quevedo. P.M.A. acknowledges support from the Air Force Office of Scientific Research (AFOSR).
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Air Force Research Laboratory (AFRL). \end{acknowledgments} \end{document}
\begin{document}
$\ $
\centerline{\large\bf EFFICIENT NONPARAMETRIC ESTIMATION AND }
\centerline{\large\bf INFERENCE FOR THE VOLATILITY FUNCTION}
\centerline{Francesco Giordano\footnote{Department of Economics and Statistics, University of Salerno - Via Giovanni Paolo II n.137 - 84084 Fisciano (SA) - Italy, [email protected]} and Maria Lucia Parrella\footnote{Department of Economics and Statistics, University of Salerno - Via Giovanni Paolo II n.137 - 84084 Fisciano (SA) - Italy, [email protected], tel:+39 089962211, fax:+39 089962049}}
\begin{quotation}
\noindent {\textbf{Abstract}:} Over the last few decades there has been increasing interest in modeling the volatility of financial data. Several parametric models have been proposed to this end, starting from ARCH, GARCH and their variants, but it is often hard to evaluate which one is the most suitable for the financial data under analysis. In this paper we focus on nonparametric analysis of the volatility function for mixing processes. Our approach encompasses many parametric frameworks and supplies several tools which can be used to give evidence against or in favor of a specific parametric model: nonparametric function estimation, confidence bands and a test for symmetry. Another contribution of this paper is to give an alternative representation of the $GARCH(1,1)$ model in terms of a \emph{Nonparametric-ARCH(1)} model, which avoids the use of the lagged volatility, so that a more precise and more informative \emph{News Impact Function} can be estimated by our procedure. We prove the consistency of the proposed method and investigate its empirical performance on synthetic and real datasets.
Surprisingly, for finite sample sizes, the simulation results show a better performance of our nonparametric estimator compared with the MLE estimator of a $GARCH(1,1)$ model, even in the case of correct specification of the model.\par
\noindent {\it \textbf{Key words and phrases:}} Nonparametric volatility estimation, confidence intervals for volatility, testing for symmetry. \par
\end{quotation}\par
\section{Introduction}
The importance of a correct specification for volatility models has been confirmed since the work of \cite{EngNg93}. Several attempts have been made since then to deal with the volatility processes nonparametrically, in order to avoid the misspecification problems and to produce robust estimation results. There have been different approaches that focus (alternatively) on the error density, on the functional form of the volatility function, or on the kind of nonparametric estimator. See, among others, \cite{FanYao98, HarTsy97, FraDia06, XuPhi11, WanAlt12, HarAlt15}. First of all, \cite{HarTsy97} proposed to estimate the $ARCH(p)$ class of models using the local linear estimator, where $p$ is the number of lags in the model. However, their model suffers from the well-known \emph{curse of dimensionality} problem which affects nonparametric estimators. In fact, the best rate of convergence of any nonparametric estimator of a function is $n^{-2/(4+p)}$, where $n$ is the time series length and $p$ the number of covariates (=lags) in the model \cite{Gyorfy02}. This rate is extremely slow when $p$ is large. To mitigate this, \cite{AudBul01} and \cite{Bulmcn02} proposed a nonparametric procedure based on a bivariate smoother in order to nest the $GARCH(1,1)$ class of models of \cite{Bol86}, which are known to be equivalent to the $ARCH(\infty)$ although they only need two covariates. Therefore, the convergence rate improves from $n^{-2/(4+p)}$ to $n^{-1/3}$.
However, the difficulties with the proposal of \cite{Bulmcn02} arise from i) the initialization of the latent process used as a covariate in the $GARCH$ smoother and ii) the choice of the bivariate bandwidth (tuning parameter). A different (semi-parametric) approach has been recently proposed by \cite{WanAlt12}, where a GARCH(1,1) model is approximated by a truncated additive model with nonparametric components, estimated by smoothing splines and linked together by a common parametric coefficient. However, also in this last paper, the problem of bandwidth selection remains a crucial and unsolved issue. Despite the many proposals, nonparametric methods for volatility analysis have not gained much interest from practitioners and, therefore, research on such approaches has not increased in recent years, contrary to what has happened with parametric approaches. The main reasons have been: i) the difficulty of setting the tuning parameters of the nonparametric procedures and ii) the slow convergence rate of the nonparametric estimators. These two drawbacks have not been sufficiently compensated by the gain in robustness of the nonparametric analysis. Trying to deal with the two drawbacks, in this paper we show that nonparametric methods can give important and essential contributions to financial data analysis, mainly from an inferential point of view. In fact, risk evaluation and volatility forecasts are some of the goals that require the selection of a suitable parametric model for the data generating process in order to get consistent results. So, in order to validate the analysis with empirical evidence, parametric estimators should be replaced by nonparametric ones, or at least compared with them based on nonparametric confidence intervals and/or tests for symmetry. Therefore, in this paper, we focus on a general nonparametric framework for the analysis of the volatility function.
Our aim is to provide a set of tools that can be used for robust volatility analysis: \emph{a}) a consistent nonparametric estimator of the volatility function based on local smoothing with data-driven optimal bandwidth, \emph{b}) nonparametric confidence intervals for volatility model selection and \emph{c}) a nonparametric test for symmetry of the volatility function. All this is done by means of a Nonparametric Autoregressive Conditional Heteroskedastic model of order one, here denoted as $NARCH(1)$. We now summarize how we avoid the two drawbacks above. To face the problem of setting the tuning parameters, we extend the smoothing procedure of \cite{GioPar14} to the case of heteroskedastic and autoregressive models. This method is based on a hybrid, data-driven bandwidth estimator, a cross between local and global smoothing, which combines the adaptability of local smoothing with the efficiency of global smoothing. As a result, it allows us to attain the best rate of convergence for the final nonparametric volatility estimator. Note that we cannot directly apply the theoretical results in \cite{GioPar14} because here we have dependent data. To deal with the second problem (improving the convergence rate of the nonparametric volatility estimator), we reduce the number of regressors in the model. In fact, recalling that the best rate of convergence of any nonparametric estimator of a function is $n^{-2/(4+p)}$, we show that the nonlinear and nonadditive structure of the $NARCH(1)$ model can be exploited in order to capture the dependence of the process by means of only one regressor (\emph{i.e.}, $p=1$) for many parametric models.
As an application of this idea, we show in section \ref{MOD} how the $GARCH(1,1)$ model can be equivalently represented by a particular $NARCH(1)$ model, with two immediate advantages: i) we avoid the (latent) lagged volatility as a covariate of the model, so that we can build a more precise \emph{News Impact Curve}, which can be used to give effective evidence of \emph{leverage effects} in the data; ii) we improve the convergence rate of the nonparametric volatility estimator from $n^{-1/3}$ (reached by \cite{Bulmcn02}) to $n^{-2/5}$ (reached by our nonparametric estimator). All this may have a strong positive impact on the efficiency of volatility estimates, as shown by the simulation results. The rest of the paper is organized as follows. Section \ref{MOD} presents the nonparametric model and gives the main idea concerning the new representation of the $GARCH(1,1)$ model and the new interpretation of the \emph{News Impact Curve}. Section \ref{idea} describes the estimation procedure and gives the theoretical results on consistency for both the asymptotic optimal bandwidth and the volatility function estimators. The derivation of the confidence intervals and the test for symmetry are shown in sections \ref{Conf} and \ref{Test}, respectively. The empirical performance of the method is investigated in section \ref{sim}, with simulated data, and section \ref{data}, with a real dataset. Some concluding remarks are given in section \ref{final}. All the assumptions and proofs are collected in the Appendix.
\section{An adaptive nonparametric setup for volatility}\label{MOD}
Consider a stationary process $\left\{X_t\right\}$ and define a \emph{Nonparametric Autoregressive Conditional Heteroscedastic} model of order 1, the $NARCH(1)$, as follows
\begin{equation}\label{eqn01}
X_{t}=\sigma(X_{t-1})\varepsilon_{t}, \qquad\qquad t\in\mathbb{N},
\end{equation}
where the errors $\varepsilon_t$ are independent and identically distributed real random variables, satisfying $E(\varepsilon_t)=0$ and $\mathop{Var}(\varepsilon_t)=1$ for each $t$. For simplicity, we assume that the conditional mean function $m(x)=E\left\{X_{t}|X_{t-1}=x\right\}$ is equal to zero. This setup is typically considered when analyzing financial data, where no conditional structure in the mean is generally observed from data (otherwise, it is sufficient to work with the residual process $R_t=X_t-m(X_{t-1})$ as in \cite{FanYao98} and \cite{HarTsy97}). Here and in the sequel, $x$ represents a generic point of the support of $X_t$. Given model (\ref{eqn01}), we look at the conditional variance function, also known as \emph{volatility function},
\begin{equation}\label{mphi.x}
\sigma^2(x)=\mathop{Var}\left\{X_{t}|X_{t-1}=x\right\}.
\end{equation}
By (\ref{mphi.x}), we have a general class of volatility functions and the error term $\varepsilon_t$ is also general enough (see assumptions (a) in the Appendix), so that model (\ref{eqn01}) encompasses many parametric volatility models proposed in the literature. In particular, it is immediate to see that the classic $ARCH(1)$ model is a particular case of model (\ref{eqn01}), given by the linear equation $\sigma^2(X_{t-1})\equiv \alpha_0+\alpha_1X_{t-1}^2$, with $\alpha_i>0$, $i=0,1$, and $\alpha_1<1$. Other examples are the generalizations of $ARCH$ models, such as the threshold based $TARCH(1)$. The $ARCH(1)$ model and its variants are often criticized for performing poorly on real data.
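As an illustration of the $ARCH(1)$ special case just described, the following sketch (with arbitrary parameter values) simulates model (\ref{eqn01}) with $\sigma^2(x)=\alpha_0+\alpha_1x^2$ and checks the well-known stationary variance $\alpha_0/(1-\alpha_1)$:

```python
import numpy as np

# Simulate the ARCH(1) special case: sigma^2(x) = alpha0 + alpha1 * x^2.
# Parameter values are illustrative only.
rng = np.random.default_rng(0)
alpha0, alpha1 = 0.1, 0.3
n = 200_000
X = np.zeros(n)
for t in range(1, n):
    sigma2 = alpha0 + alpha1 * X[t - 1]**2       # conditional variance given X_{t-1}
    X[t] = np.sqrt(sigma2) * rng.standard_normal()

# The stationary unconditional variance of an ARCH(1) is alpha0 / (1 - alpha1).
assert abs(X.var() - alpha0 / (1 - alpha1)) < 0.01
```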
In practice, one needs many lagged variables in the model to match the dependence found in financial data, which implies the need of $ARCH(p)$ models, where $\sigma^2(X_{t-1},\ldots,X_{t-p})=\alpha_0+\alpha_1X^2_{t-1}+\ldots+\alpha_pX_{t-p}^2$. The estimation of such models can be inefficient when $p$ is large. This has motivated the shift towards the $GARCH(1,1)$ model, which is one of the most used in financial econometrics. It is given by
\begin{eqnarray}
X_{t} &=& \sigma_{t}\varepsilon_{t} \label{garch}\\
\sigma_t^2 &=& \alpha_0+\alpha_1X^2_{t-1}+\beta \sigma_{t-1}^2, \nonumber
\end{eqnarray}
with $\alpha_i>0$, $i=0,1$, $\beta\ge 0$ and $\alpha_1+\beta<1$. The advantage of this model is that it is formally equivalent to the $ARCH(\infty)$, although the dependence structure is captured by only two regressors ($X_{t-1}$ and $\sigma_{t-1}$) instead of infinitely many regressors. Several studies have established the good performance of the $GARCH(1,1)$ model compared to $GARCH(p,q)$ and to many other volatility models (see, for example, \cite{HanLun01}). But a serious problem is given by the fact that the regressor $\sigma_{t-1}$ is a latent process (the lag of volatility itself) which must be estimated or substituted by some reliable proxy. As a consequence, the estimation of the $GARCH(1,1)$ model (and its variants) is not trivial and may lead to unstable results. In this section, we show that the classic $GARCH(1,1)$ model can be equivalently represented as a \emph{nonparametric $ARCH(1)$ model}, that is the $NARCH(1)$ defined in (\ref{eqn01}). The advantage of this new representation is threefold: \emph{a}) the new model is able to capture the dependence structure of a $GARCH(1,1)$, and therefore of an $ARCH(\infty)$, by means of only one covariate; \emph{b}) such a covariate is the lag $X_{t-1}$, which is an observed process; \emph{c}) a different and more precise \emph{News Impact Curve} can be derived and estimated for the new model.
This threefold advantage is obtained thanks to the nonparametric structure of the model, which allows it to capture the effects of the infinite lags $X_{t-j}$ on the volatility by means of the adaptive and nonlinear structure of the volatility function itself. In other words, we allow the function $\sigma(\cdot)$ to be ``free'' and therefore ``capable'' of fitting well the relation between $X_t$ and its past, condensed in $X_{t-1}$. This is stated in Theorem \ref{theorem1}.
\begin{theorem}\label{theorem1}
Assuming a symmetric density for the error $\varepsilon_t$, the $GARCH(1,1)$ model in (\ref{garch}), with parameters $\alpha_0>0, \alpha_1>0$, $\beta\ge 0$ and $\alpha_1+\beta<1$, is equivalent to a nonlinear volatility model as in (\ref{eqn01}), where the volatility function $\sigma^2(x)$ is given by
\begin{eqnarray}\label{sigmax}
\sigma^2(x) &=&\left\{\begin{array}{lll}
A_0&\mbox{if}&x=0,\\
A_0+(\alpha_1+\beta)\tilde g(x;\alpha_1,\beta)x^2&\mbox{if}&x\not =0,
\end{array}
\right.
\end{eqnarray}
where
\[
\tilde g(x;\alpha_1,\beta)=g(x;\alpha_1,\beta)-\frac{B_0}{x^2(\alpha_1+\beta)}
\]
with
\[
g(x;\alpha_1,\beta)\equiv E\left(\left.\frac{1}{C_{\tilde{\varepsilon}_t}}\right|X_{t}=x\right), \quad B_0=\beta\alpha_0/(1-\alpha_1-\beta), \quad A_0=\alpha_0+B_0,
\]
\[
C_{\tilde{\varepsilon}_t}=1+\beta/\alpha_1(1-1/\tilde{\varepsilon}_t^2)\quad \mbox{ and }\quad \tilde\varepsilon_t=\mathop{sign}(\varepsilon_t)\sqrt{\frac{\alpha_1\varepsilon^2_t+\beta}{\alpha_1+\beta}}.
\]
\end{theorem}
\begin{remark} \label{rem1bis}
Theorem \ref{theorem1} can be generalized in two directions. First, we can relax the assumption of symmetry for the density function of $\varepsilon_t$. We only use it to simplify the proof of Theorem \ref{theorem1} in order to derive that $E\left(\tilde{\varepsilon}_t\right)=0$. Second, we can extend the result of Theorem \ref{theorem1} to nonparametric \textit{GARCH}(1,1) models.
\end{remark} By Theorem \ref{theorem1} we have that the \textit{GARCH}(1,1) process can be written as $X_t=C_{\tilde{\varepsilon}_t}^{1/2}\tilde{X}_t$ where $\tilde X_t\sim ARCH(1;\alpha_0,\alpha_1+\beta)$, with the error terms $\tilde{\varepsilon}_t$. Note that this representation is exact. Of course, $X_t$ is not an \textit{ARCH}(1) process, since one can show that $E\left[(X_t-\tilde X_t)^2\right]>0$, $\forall t$. However, $E(X_t^2)=E(\tilde X_t^2)$ and this is used in Theorem \ref{theorem1} to show that the \textit{GARCH}(1,1) process can be equivalently represented by a particular $NARCH(1)$ structure, which only depends on $X_{t-1}$. In fact, for a given value of $X_{t-1}=x$, the volatility function is $\sigma^2(x)=A_0+(\alpha_1+\beta)\tilde g(x;\alpha_1,\beta)x^2\equiv A_0+\tilde\alpha_1(x)x^2$, that represents a ``rescaled'' $ARCH(1)$ model with support-dependent coefficient $\tilde\alpha_1$. To summarize, Theorem \ref{theorem1} shows that there exists a \textit{nonlinear} representation of the volatility function from a \textit{GARCH}(1,1) process which only depends on $X_{t-1}$ instead of its classic \textit{linear} representation with infinite variables, $ARCH(\infty)$. It is important to stress that the value of the coefficient $\tilde\alpha_1(x)$ in model (\ref{sigmax}) changes with $x$, so that we have a function of coefficients instead of a single coefficient to estimate. However, this function of coefficients \textbf{cannot} be expressed in closed form, therefore model (\ref{sigmax}) \textbf{cannot} be analyzed and estimated with parametric methods. In fact, it would be necessary to know the density of the errors $\varepsilon_t$ in order to derive the analytic form of $\tilde g(x;\cdot)$, but even in the simplest case such derivations would be difficult (and the parametric estimation methods, such as maximum likelihood or quasi-maximum likelihood, are impossible to apply). 
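A Monte Carlo sketch (with illustrative parameter values and Gaussian errors, which are an assumption of this example, not of the theorem) can nevertheless check the moment identities underlying Theorem \ref{theorem1}: $E(\tilde\varepsilon_t)=0$, $E(\tilde\varepsilon_t^2)=1$, and the matching unconditional second moments of the $GARCH(1,1)$ process and the $ARCH(1;\alpha_0,\alpha_1+\beta)$ process driven by $\tilde\varepsilon_t$:

```python
import numpy as np

# Illustrative parameters; Gaussian (symmetric) errors are an assumption here.
rng = np.random.default_rng(1)
alpha0, alpha1, beta = 0.1, 0.3, 0.2
n = 400_000

eps = rng.standard_normal(n)
# tilde-epsilon as defined in Theorem 1
eps_t = np.sign(eps) * np.sqrt((alpha1 * eps**2 + beta) / (alpha1 + beta))
assert abs(eps_t.mean()) < 0.01           # E(tilde eps) = 0 (symmetric density)
assert abs((eps_t**2).mean() - 1) < 0.01  # Var(tilde eps) = 1

# GARCH(1,1) path ...
X = np.zeros(n)
s2 = alpha0 / (1 - alpha1 - beta)
for t in range(1, n):
    s2 = alpha0 + alpha1 * X[t - 1]**2 + beta * s2
    X[t] = np.sqrt(s2) * eps[t]
# ... and the ARCH(1; alpha0, alpha1+beta) path driven by tilde-epsilon.
Xt = np.zeros(n)
for t in range(1, n):
    Xt[t] = np.sqrt(alpha0 + (alpha1 + beta) * Xt[t - 1]**2) * eps_t[t]

# Both share the unconditional second moment alpha0 / (1 - alpha1 - beta).
m = alpha0 / (1 - alpha1 - beta)
assert abs((X**2).mean() - m) < 0.02
assert abs((Xt**2).mean() - m) < 0.02
```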
Therefore, Theorem \ref{theorem1} is useless in the parametric framework, but it has a natural application in the nonparametric framework. In fact, note that we do not need to compute (explicitly) the component $\tilde g(x;\cdot)$ in order to make inference or generate predictions. We just need to guarantee that the estimation procedure is able to incorporate this component in the final estimations. The nonparametric procedure proposed in section \ref{idea}, based on the local polynomial estimator with optimal data-driven local bandwidth, achieves this goal.
\begin{figure}
\caption{\footnotesize The \emph{News Impact Curve} estimated nonparametrically for a $GARCH(1,1)$ model and for an $ARCH(1)$ model.}
\label{figmG}
\end{figure}
\subsection{A new interpretation of the News Impact Curve}
Theorem \ref{theorem1} has important consequences for the interpretation of the \emph{News Impact Curve (NIC)}. The \emph{NIC} was first defined by \cite{EngNg93} for $GARCH$ models and their variants, to measure how new information is incorporated into volatility estimates. It is defined as the implied relation between $X_{t-1}$ and $\sigma_t^2$, holding constant the information at time $t-2$ and earlier, so that $\sigma_t^2=\alpha_0+\alpha_1X_{t-1}^2+\beta\alpha_0/(1-\alpha_1-\beta)$ (see \cite{EngNg93}, p. 1754). In practice, the \emph{NIC} is derived by setting the lagged volatility $\sigma^2_{t-1}$ equal to its unconditional mean $\alpha_0/(1-\alpha_1-\beta)$, which yields the constant term $\beta\alpha_0/(1-\alpha_1-\beta)$. This choice (conditioning on the unconditional mean) is strictly necessary in order to draw the \emph{NIC} as a function of $X_{t-1}$ alone, so that it can be plotted as the well-known $U$-shaped curve. The main utility of such a curve is to give evidence of \emph{leverage effects} in the data. Now, by (\ref{sigmax}) in Theorem \ref{theorem1}, the volatility function of a $GARCH(1,1)$ model can be reformulated as a nonlinear function of the lagged return $X_{t-1}$ alone.
So, it is not necessary to set a value for the lagged volatility in order to plot the function, although the effect of the lagged volatility is incorporated in the NIC by means of $\tilde g(x;\cdot)$. In other words, instead of using the constant $\beta\alpha_0/(1-\alpha_1-\beta)$, we take advantage of the function $\tilde g(x;\cdot)$ to improve the local adaptivity of the NIC. Figure \ref{figmG} gives an illustrative example of the \emph{NIC} for two different models and also illustrates empirically the result of Theorem \ref{theorem1}. We report the volatility function $E(X_t^2|X_{t-1}=x)$ estimated nonparametrically on two different datasets, using the procedure suggested in section \ref{idea}. The first dataset is generated from a $GARCH(1,1)$ model, with $\alpha_0=0.1,\alpha_1=0.3,\beta=0.2$ and standard normal error $\varepsilon_t$. The second dataset originates from an $ARCH(1)$ model with $\alpha_0=0.1,\tilde\alpha_1=\alpha_1+\beta=0.5$ and the error term $\widetilde{\varepsilon}_t$ defined in Theorem \ref{theorem1}. From a theoretical point of view, as shown in Theorem \ref{theorem1}, the two volatility models are not equivalent because $\tilde\alpha_1$ is constant in the $ARCH$ model. As a consequence, the two curves in Figure \ref{figmG} do not coincide and the difference between them reflects the component $\tilde g(x;\cdot)$ defined in Theorem \ref{theorem1} (plus a constant term). Note that the two functions represent the \emph{NIC} for the two models. We can observe that they tend to have the same behaviour for large values of $|x|$, whereas the \textit{NIC} of the $GARCH(1,1)$ shows an inflation of the volatility function with respect to the $ARCH(1)$ case for small values of $|x|$. In fact, by Theorem \ref{theorem1}, the $GARCH(1,1)$ curve has a minimum at zero equal to $A_0$. Instead, the $ARCH(1)$ curve attains its minimum at the same point, but with value $\alpha_0$.
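The change of variable behind Theorem \ref{theorem1} can be checked numerically. The following Python sketch (an illustration of ours, not part of the estimation procedure; the function name and the simulated sample are our own choices) builds $\tilde\varepsilon_t$ and $C_{\tilde\varepsilon_t}$ from simulated innovations and verifies the pathwise identity $\varepsilon_t=C_{\tilde\varepsilon_t}^{1/2}\tilde\varepsilon_t$, which is exactly what makes the representation $X_t=C_{\tilde\varepsilon_t}^{1/2}\tilde X_t$ hold with probability one.

```python
import numpy as np

def garch_to_arch_errors(eps, alpha1, beta):
    """Transform GARCH(1,1) innovations eps into the rescaled ARCH(1)
    innovations tilde_eps of Theorem 1, together with the factor C."""
    r = beta / alpha1
    tilde_eps = np.sign(eps) * np.sqrt((eps**2 + r) / (1.0 + r))
    # C_{tilde eps_t} = 1 + (beta/alpha1)(1 - 1/tilde_eps^2)
    C = 1.0 + r * (1.0 - 1.0 / tilde_eps**2)
    return tilde_eps, C

rng = np.random.default_rng(0)
eps = rng.standard_normal(100_000)
tilde_eps, C = garch_to_arch_errors(eps, alpha1=0.3, beta=0.2)

# Since X_t = eps_t*sigma_t and tilde_X_t = tilde_eps_t*sigma_t, the
# representation X_t = C^{1/2}*tilde_X_t reduces to eps = sqrt(C)*tilde_eps:
assert np.allclose(np.sqrt(C) * tilde_eps, eps)
# tilde_eps is a valid unit-variance innovation, E(tilde_eps^2) = 1,
# because E((eps^2 + r)/(1 + r)) = (1 + r)/(1 + r):
assert abs(np.mean(tilde_eps**2) - 1.0) < 0.02
```

Note that the identity is algebraic (it holds sample path by sample path), so the check does not depend on the distribution of $\varepsilon_t$ beyond $E(\varepsilon_t^2)=1$.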
\section{Nonparametric estimation of volatility}\label{idea} For the estimation of the volatility function we generalize the global adaptive smoothing procedure (GAS) proposed by \cite{GioPar14}. In the appendix we give the theoretical results which extend the consistency of GAS to the current setup of $\alpha$-mixing processes. Given a realization of the process $\{X_t;t=1,\ldots,n\}$, the volatility function $\sigma^2(x)$ is estimated using a local linear estimator (LLE) with adaptive bandwidth function. Let $K:[-1,1]\rightarrow\mathbb{R}$ be a density function, henceforth called \emph{kernel}, and write $\sigma^2_{(2)}(\cdot)$ for the second derivative of the volatility function. Assuming that $\sigma_{(2)}^2(\cdot)$ exists at the point $x\in\mathbb{R}$, the LLE of $\sigma^2(x)$ can be written as a weighted linear estimator \begin{equation}\label{phi.hat} \widehat{ \sigma}^2(x;h) = \sum_{t=2}^n X^2_{t} W_{K,h}\!\left(X_{t-1}-x\right), \end{equation} where $h$ is the bandwidth and $W_{K,h}(\cdot)$ gives the \emph{effective kernel weights}. These weights are derived by locally approximating the function with a line. Local linear estimators are well established and they are implemented in most statistical software packages. See, for example, the \texttt{KernSmooth} package for R; see also \cite{FanGij96} for further details on LLE. As with all nonparametric methods, the crucial step with LLE is setting the bandwidth $h$, which acts as a tuning parameter and affects the consistency of the nonparametric estimator. It may happen, with nonparametric and semi-parametric procedures, that tuning parameters are set by rule of thumb, given the difficulty of setting them automatically (see, for example, section 2.4 in \cite{WanAlt12}). In this paper we avoid this drawback and propose a self-contained data-driven method. To do this, we extend the approach of \cite{GioPar14} in order to deal with dependent data.
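As a concrete illustration of (\ref{phi.hat}), the following Python sketch (our own minimal implementation, not the code used in the paper) computes the effective kernel weights of the LLE from the closed-form weighted least-squares fit of a local line, using the Epanechnikov kernel adopted later in the simulation study. A well-known property, useful as a sanity check, is that the local linear estimator reproduces straight lines exactly.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def lle(x, X_lag, Y, h):
    """Local linear estimate of E(Y | X_lag = x) with bandwidth h.
    The effective kernel weights W_{K,h} come from the closed-form
    weighted-least-squares solution for a locally fitted line."""
    d = X_lag - x
    k = epanechnikov(d / h)
    s1 = np.sum(k * d)
    s2 = np.sum(k * d**2)
    w = k * (s2 - d * s1)            # unnormalized effective weights
    return np.sum(w * Y) / np.sum(w)

# Sanity check: local linear smoothing reproduces straight lines exactly.
X_lag = np.linspace(-1.0, 1.0, 201)
Y = 2.0 + 3.0 * X_lag
assert abs(lle(0.3, X_lag, Y, h=0.5) - (2.0 + 3.0 * 0.3)) < 1e-10
```

In the notation of (\ref{phi.hat}), the volatility estimate at $x$ would be obtained as `lle(x, X[:-1], X[1:]**2, h)`, i.e. with $X_{t-1}$ as regressor and $X_t^2$ as response.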
In general, there are two categories of bandwidths: global (\emph{i.e.}, constant, not dependent on $x$) and local (\emph{i.e.}, variable with $x$). The smoothing procedure proposed in \cite{GioPar14} is based on a hybrid, data-driven bandwidth estimator which exploits the advantages of both local smoothing (adaptability) and global smoothing (efficiency). This procedure has a better performance than other procedures (Cross-Validation, plug-in global smoothing) in terms of mean squared error, and reaches the optimal convergence rate of the final smoothing estimator $\hat\sigma^2(x;h)$, as shown in \cite{GioPar14}. Further simulation results, not reported here, confirm this good performance. Moreover, another advantage of the GAS procedure is that it exploits bandwidth estimation in order to derive all the pivotal quantities necessary to make inference for the volatility function. As a result, what is generally seen as a drawback of kernel regression (the necessity of estimating the bandwidth) becomes here the main tool to make inference on the estimated function. The method is as follows. Define a compact subset $I_x$, centered at the point $x$, such that $I_x=[x-a/2, x+a/2]$, with $a>0$. The \emph{global adaptive bandwidth} is \begin{equation}\label{h.i} h_{I_x}=\left\{\frac{\mathbb{V}_{\omega_{I_x}}}{4n\mathbb{B}_{\omega_{I_x}}}\right\}^{1/5}, \end{equation} where \begin{equation}\label{fun1} \mathbb{B}_{\omega_{I_x}}= C_1^2\int_{I_x}[\sigma^2_{(2)}(u)]^2f_X(u)d\omega_{I_x}(u),\quad \mathbb{V}_{\omega_{I_x}} = C_2\int_{I_x}V(u)d\omega_{I_x}(u), \end{equation} $f_X(\cdot)$ and ${\mu_X(\cdot)}$ are the density and the measure of the process, respectively, $d\omega_{I_x}(u)={du}/{\mu_X(I_x)}$, $V(x)=\mathop{Var}(X^2_{t}|X_{t-1}=x)$, while $C_1$ and $C_2$ are known constants depending on the kernel function. See \cite{GioPar14} for further details and an explanation on how to set the parameter $a$.
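For illustration, the bandwidth formula (\ref{h.i}) can be evaluated as follows once the two functionals are available. The grid approximation of the integrals (treating $d\omega_{I_x}$ as the normalized Lebesgue measure on $I_x$) and all the numerical ingredients below (curvature, conditional variance, design density, kernel constants) are assumptions of ours, chosen only to exercise the formula, not estimates from data.

```python
import numpy as np

def global_adaptive_bandwidth(x, a, n, sig2_dd, V, f_X, C1, C2, m=400):
    """Evaluate the global adaptive bandwidth h_{I_x} of eq. (h.i) by
    approximating the functionals B and V on an m-point grid over
    I_x = [x - a/2, x + a/2], with d_omega the normalized measure."""
    u = np.linspace(x - a / 2.0, x + a / 2.0, m)
    B = C1**2 * np.mean(sig2_dd(u) ** 2 * f_X(u))   # B_{omega_{I_x}}
    Vfun = C2 * np.mean(V(u))                       # V_{omega_{I_x}}
    return (Vfun / (4.0 * n * B)) ** 0.2

# Illustrative (assumed) ingredients: constant curvature, constant
# conditional variance, uniform design density, arbitrary C1, C2.
h = global_adaptive_bandwidth(
    x=0.0, a=1.0, n=1000,
    sig2_dd=lambda u: np.full_like(u, 2.0),
    V=lambda u: np.ones_like(u),
    f_X=lambda u: np.ones_like(u),
    C1=0.5, C2=0.6)
# With these constants B = 1 and V = 0.6 exactly:
assert abs(h - (0.6 / 4000.0) ** 0.2) < 1e-12
```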
In the following, we propose the estimators of $\mathbb{B}_{\omega_{I_x}}$ and $\mathbb{V}_{\omega_{I_x}}$ in (\ref{fun1}), which can be plugged into (\ref{h.i}) to obtain the bandwidth estimator $\widehat{ h}_{I_x}$. Note that such functionals are connected with the conditional bias and the conditional variance of the estimator (\ref{phi.hat}), respectively. Therefore, they will also be used in sections \ref{Conf} and \ref{Test} to derive the confidence intervals and to test the symmetry of the volatility function. For $r\in \mathbb{N}$, let $m_r(x)$ be the conditional moment function $E(X_{t}^r|X_{t-1}=x)$. Then $\sigma^2(x)\equiv m_2(x)$ and $V(x) \equiv m_4(x)-m_2^2(x)$. Generally, nonparametric estimation of $V(x)$ requires two separate estimations, of $m_4(x)$ and of $m_2(x)$, as in \cite{HarTsy97,FanYao98,FraDia06}. This approach is rather inefficient. To gain efficiency, we propose an alternative approach based on only one estimation. It uses the following reparameterization of model (\ref{eqn01}) \begin{equation}\label{riparam} V(x) = m_4(x)-m_2^2(x) = m_2^2(x)\left(m_{4\varepsilon}-1\right), \end{equation} where $m_{4\varepsilon}=E(\varepsilon_t^4)$. We then estimate $m_2(x)$ by a Neural Network approximator, denoted by $q\!\left(x;\B{\eta}\right)$, whose parameters are estimated by \begin{equation}\label{approximator} \B{\widehat{ \eta}}=\arg\min_{\B{\eta}}\sum_{t=2}^n \left[X^2_{t}-q\!\left(X_{t-1};\B{\eta}\right)\right]^2. \end{equation} Now, using (\ref{riparam}) and (\ref{approximator}), we propose the estimator $\widehat{ V}(x) = \widehat{ m}_2^2(x)\left[\widehat{m}_{4\varepsilon}-1\right]$, where \begin{equation}\label{est.new1} \widehat{ m}_2(x) \equiv q(x, \B{\widehat{ \eta}}), \quad \widehat{ m}_{4\varepsilon} = \frac{\sum_{t=2}^{n} X_{t-1}^4}{\sum_{t=2}^{n} [q(X_{t-1}, \B{\widehat{ \eta}})]^2}.
\end{equation} Next, we use again $q(x, \B{\widehat{ \eta}})$ to estimate the derivative $\sigma^2_{(2)}(x)$ by \begin{equation}\label{est.new2} \widehat{ \sigma}^2_{(2)}(x)\equiv q_{(2)}(x;\B{\widehat{\eta}}). \end{equation} Finally, the estimators for the functionals in (\ref{fun1}) are \begin{eqnarray}\label{Rhat} \widehat{\mathbb{B}}_{\omega_{I_x}} &=& \frac{C_1^2\sum_{t=2}^{n}\left[\widehat{\sigma}^2_{(2)}(X_{t-1})\right]^2\mathbb{I}(X_{t-1}\in I_x)}{\sum_{t=2}^{n}\mathbb{I}(X_{t-1}\in I_x)} \\ \widehat{\mathbb{V}}_{\omega_{I_x}} &=& \frac{C_2\sum_{i=1}^{n^*}\widehat{V}(z_i)/n^*}{\sum_{t=2}^{n}\mathbb{I}(X_{t-1}\in I_x)/n}, \nonumber \end{eqnarray} where $\mathbb{I}(\cdot)$ is the indicator function and $\{z_1, z_2, \ldots, z_{n^*}\}$ are equally spaced points in the interval $I_x$, with $n^*=O(n)$. Next, we consider the optimal bandwidth and its plug-in estimator for the unknown function $\sigma^2(\cdot)$ in model (\ref{eqn01}) using the local linear estimator. The true bandwidth and its estimator are \[ h_{I_x}=\left\{\frac{\mathbb{V}_{\omega_{I_x}}}{4n\mathbb{B}_{\omega_{I_x}}}\right\}^{1/5}\mbox{and}\quad \widehat{h}_{I_x}=\left\{\frac{\widehat{\mathbb{V}}_{\omega_{I_x}}}{4n\widehat{\mathbb{B}}_{\omega_{I_x}}}\right\}^{1/5}. \] Let $I_x^n=[x-a_n/2,x+a_n/2]$ and $I_x=[x-a/2,x+a/2]$, where $\{a_n\}$ is a bounded and positive sequence and $a>0$. When $a_n\rightarrow a>0$, we have $h_{I_x^n}\rightarrow h_{I_x}$; instead, when $a_n\rightarrow 0$, it follows that $h_{I_x^n}\rightarrow h^{opt}(x)$ when $n\rightarrow\infty$, where $h^{opt}(x)$ is the local bandwidth given by \[ h^{opt}(x)=\left\{\frac{\mathcal{V}(x)}{4n\mathcal{B}^2(x)}\right\}^{1/5}, \] where \begin{equation} \label{fun2} \mathcal{B}(x)=C_1\sigma_{(2)}^2(x)\quad \mbox{ and } \quad \mathcal{V}(x)=C_2\frac{V(x)}{f_X(x)}. \end{equation} We can state the following theorem.
\begin{theorem} \label{theorem2} Suppose that assumptions (a1) -- (a5) and (b1) -- (b3) hold and that $\sigma_{(2)}^2(x)\not= 0$ on $I_x$. Then $\widehat{h}_{I_x^n}$ is consistent in the sense that: \begin{itemize} \item \textit{if $Q_1(n)\rightarrow 0$ as $n\rightarrow \infty$, then} \begin{eqnarray*} \frac{\widehat{h}_{I_x^n}}{h_{I_x}}\stackrel{p}{\longrightarrow}1, & \mbox{if }a_n\rightarrow a>0\\ \frac{\widehat{h}_{I_x^n}}{h^{opt}(x)}\stackrel{p}{\longrightarrow}1, & \mbox{if } a_n\rightarrow 0 \mbox{ and } na_n\rightarrow\infty; \end{eqnarray*} \item if, in addition, $Q_2(n)\rightarrow 0$ for some $\lambda>0$, then \begin{eqnarray*} \frac{\widehat{h}_{I_x^n}}{h_{I_x}}\stackrel{a.s.}{\longrightarrow}1, & \mbox{if }a_n\rightarrow a>0\\ \frac{\widehat{h}_{I_x^n}}{h^{opt}(x)}\stackrel{a.s.}{\longrightarrow}1, & \mbox{if } a_n\rightarrow 0 \mbox{ and } na_n\rightarrow\infty. \end{eqnarray*} \end{itemize} where $Q_1(n)$ and $Q_2(n)$ are defined in (\ref{Qs}). \end{theorem} By Theorem \ref{theorem2}, we obtain the global bandwidth estimator by setting the parameter $a$ large enough with respect to the support of the volatility function. Theorem \ref{theorem2} also yields the consistency of the estimator of the volatility function, stated in the following corollary. The proof is straightforward, applying Theorem \ref{theorem2} together with Theorem 3.1 of \cite{HarTsy97}. \begin{corollary}\label{rem2} Suppose that assumptions (a1) -- (a6) and (b1) -- (b3) hold. Using the estimated bandwidth, say $\hat h$, in both the global and the local case, it follows that \begin{equation} \label{rate_s} \left|\widehat{ \sigma}^2(x;\hat h) -\sigma^2(x)\right|=O_p(n^{-2/5}).
\end{equation} \end{corollary} \begin{remark} \label{rem2bis} Theorem \ref{theorem2}, by Propositions \ref{proposition1} and \ref{proposition2} in the Appendix, relies on the Neural Network estimator of the functionals $\mathbb{B}_{w_{I_x}}$ and $\mathbb{V}_{w_{I_x}}$ to overcome the issue of pilot bandwidth estimation, and yields both consistency and optimality of the global and local bandwidth estimators. It is clear that Theorem \ref{theorem2} remains valid if we consider \textit{any} consistent estimator of the functionals $\mathbb{B}_{w_{I_x}}$ and $\mathbb{V}_{w_{I_x}}$, not necessarily based on the Neural Network technique. \end{remark} \section{Nonparametric confidence intervals for volatility} \label{Conf} Using the GAS procedure, we can build \emph{unbiased} confidence intervals for the volatility function. An application to real data is reported in Section \ref{data}. The bias of the volatility estimator is related to the functionals in (\ref{fun1}) and can be estimated by (\ref{Rhat}). Without loss of generality, for a given $a>0$ and $I_x=[x-a/2,x+a/2]$, suppose that $\mathbb{B}_{\omega_{I_x}}\approx \mathcal{B}(x)$ and $\mathbb{V}_{\omega_{I_x}}\approx \mathcal{V}(x)$, where $\mathcal{B}(x)$ and $\mathcal{V}(x)$ are defined in (\ref{fun2}). We can state the following result. \begin{theorem}\label{corol} Suppose that the assumptions (a1) -- (a6) and (b1) -- (b3) hold. If $Q_1(n)\rightarrow 0$ as $n\rightarrow\infty$, then for each $x$ \[ \sqrt{n\widehat{ h}_{I_x}}\frac{\left[\widehat{\sigma}^2(x;\widehat{ h}_{I_x})-\sigma^2(x)-\frac{1}{2}\widehat{ h}^2_{I_x}\widehat{\mathbb{B}}_{\omega_{I_x}}\right]}{\left(\widehat{\mathbb{V}}_{\omega_{I_x}}\right)^{1/2}}\stackrel{d}{\longrightarrow} N(0,1). \] Here $\widehat{\sigma}^2(x;\widehat{ h}_{I_x})$ is the LLE for the volatility function given in (\ref{phi.hat}). The estimators $\widehat{\mathbb{B}}_{\omega_{I_x}}$ and $\widehat{\mathbb{V}}_{\omega_{I_x}}$ are defined in (\ref{Rhat}).
The estimated optimal bandwidth $\widehat{ h}_{I_x}$ is given in section \ref{idea}. $Q_1(n)$ is defined in (\ref{Qs}). \end{theorem} If we drop the assumptions $\mathbb{B}_{\omega_{I_x}}\approx \mathcal{B}(x)$ and $\mathbb{V}_{\omega_{I_x}}\approx \mathcal{V}(x)$, Theorem \ref{corol} still holds, after replacing $\widehat{\sigma}^2(x;\widehat{ h}_{I_x})$ and $\sigma^2(x)$ with $\widehat{\sigma}^2(I_x)$ and $\sigma^2(I_x)$, respectively, where \begin{equation} \label{new.stat} \widehat{\sigma}^2(I_x)=\frac{\sum_{t=2}^{n}\widehat{\sigma}^2(X_{t-1};\widehat{ h}_{I_x})\mathbb{I}(X_{t-1}\in I_x)}{\sum_{t=2}^{n}\mathbb{I}(X_{t-1}\in I_x)},\quad \sigma^2(I_x)=\frac{1}{\mu_X(I_x)}\int_{I_x}\sigma^2(x)f_X(x)dx. \end{equation} \section{Nonparametric testing for symmetry of volatility} \label{Test} Another useful application of our results in Section \ref{idea} is to build a statistical test for the symmetry of the volatility function around zero. The hypothesis $H_0$ is $\sigma^2(x)\equiv\sigma^2(-x)$ for each $x$, and the alternative $H_1$ is that $\sigma^2(x')\not = \sigma^2(-x')$ for at least one $x'$. Without loss of generality, suppose that $\mathbb{V}_{\omega_{I_x}}\approx \mathcal{V}(x)$ for a given $a>0$ and $I_x=[x-a/2,x+a/2]$, where $\mathcal{V}(x)$ is defined as in section \ref{Conf}. We have the following result. \begin{theorem}\label{cor2} Assume that: a) the bivariate density function of the process $\{X_t\}$ in model (\ref{eqn01}) is bounded, say $f_{X_1X_2}(x_1,x_2)\le C_0<\infty$, $\forall (x_1,x_2)\in \mathbb{R}^2$; b) the same assumptions as in Theorem \ref{corol} hold.
If $Q_1(n)\rightarrow 0$ as $n\rightarrow\infty$, then under $H_0$ we have \[ \sqrt{n}\frac{\left[\widehat{ h}_{I_x}^{1/2}\widehat{\sigma}^2(x;\widehat{ h}_{I_x})-\widehat{ h}_{I_{-x}}^{1/2}\widehat{\sigma}^2(-x;\widehat{ h}_{I_{-x}})\right]}{\left(\widehat{\mathbb{V}}_{\omega_{I_x}}+\widehat{\mathbb{V}}_{\omega_{I_{-x}}}\right)^{1/2}}\stackrel{d}{\longrightarrow} N(0,1)\qquad \mbox{for\ each\ } x>0, \] where $\widehat{\sigma}^2(x;\widehat{ h}_{I_x})$ is the LLE for the volatility function given in (\ref{phi.hat}). The estimated optimal bandwidth $\widehat{ h}_{I_x}$ is given in section \ref{idea} and the estimator $\widehat{\mathbb{V}}_{\omega_{I_x}}$ is defined in (\ref{Rhat}). $Q_1(n)$ is defined in (\ref{Qs}). \end{theorem} In practice, we consider $n_x$ points arranged in pairs $\{-x_i,x_i\}$, $i=1,\ldots,n_x/2$, and perform $n_x/2$ tests based on Theorem \ref{cor2}. Using a simple multiple-testing approach, such as Bonferroni's technique, we compute \[ T_i=\sqrt{n}\frac{\left[\widehat{ h}_{I_{x_i}}^{1/2}\widehat{\sigma}^2(x_i;\widehat{ h}_{I_{x_i}})-\widehat{ h}_{I_{-x_{i}}}^{1/2}\widehat{\sigma}^2(-x_i;\widehat{ h}_{I_{-x_i}})\right]}{\left(\widehat{\mathbb{V}}_{\omega_{I_{x_i}}}+\widehat{\mathbb{V}}_{\omega_{I_{-x_{i}}}}\right)^{1/2}}\qquad i=1,\ldots,n_x/2. \] Given a level $\alpha$ for the type I error, we accept the null hypothesis if all of the following conditions are satisfied, \[ |T_i|<q_\phi(1-\alpha/n_x)\qquad i=1,\ldots,n_x/2, \] where $q_\phi(\cdot)$ is the quantile function of the standard normal distribution. In this way, we reject $H_0$ if at least one of the conditions above fails. Note that the results in Theorems \ref{corol} and \ref{cor2} still hold if we drop the assumption that $\mathbb{V}_{\omega_{I_x}}\approx \mathcal{V}(x)$ and replace $\widehat{\sigma}^2(\cdot)$ with $\widehat{\sigma}^2(I_{\cdot})$, defined in (\ref{new.stat}).
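The multiple-testing rule above amounts to a few lines of code. The following Python sketch (a hypothetical helper of ours, with illustrative statistics, not the implementation used in the paper) applies the Bonferroni cut-off $q_\phi(1-\alpha/n_x)$ to a set of statistics $T_i$.

```python
from statistics import NormalDist

def symmetry_test(T, alpha, n_x):
    """Bonferroni decision rule for the symmetry test: reject H0
    (symmetric volatility) iff |T_i| >= q_phi(1 - alpha/n_x) for at
    least one of the n_x/2 statistics T_i from Theorem cor2."""
    q = NormalDist().inv_cdf(1.0 - alpha / n_x)
    return any(abs(t) >= q for t in T)

# With alpha = 1% and n_x = 20, the per-test critical value is
# q_phi(1 - 0.01/20) = q_phi(0.9995), roughly 3.29.
assert not symmetry_test([1.2, -0.8, 2.5], alpha=0.01, n_x=20)  # accept H0
assert symmetry_test([1.2, -3.6, 0.4], alpha=0.01, n_x=20)      # reject H0
```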
\section{Simulation study} \label{sim} In the first part of the simulation study, we compare the nonparametric GAS method for volatility estimation with the classic parametric estimation method (maximum likelihood estimator, MLE). It must be remarked that a direct comparison between parametric methods (MLE) and nonparametric methods (GAS) should not be made, for several reasons: nonparametric methods benefit from being model-free, whereas parametric methods benefit from a faster convergence rate under the assumption of correct specification of the model. We therefore expect more robust estimates from nonparametric methods and more efficient estimates from parametric methods, so the two are not directly comparable, since they work under different assumptions. Nevertheless, the following results show very interesting performances of the two estimation methods, which are worth reporting. In particular, and surprisingly, GAS shows not only more robust results, as expected, but also lower variability than MLE for small sample sizes, even in the case of correct specification of the model. We consider three models with a null conditional mean function. They are reported in the following table, where $\phi(\cdot)$ denotes the standard normal density. \begin{center} \begin{tabular}{lll} \hline\noalign{\smallskip} Model & Equation & Errors \\ \hline \emph{1: ARCH(1)} & $X_t=\sqrt{0.1+0.5X_{t-1}^2}\varepsilon_{t}$ & $\varepsilon_t\sim \phi$\\ \emph{2: GARCH(1,1)} & $X_t=\sqrt{0.1+0.3X_{t-1}^2+0.2\sigma^2_{t-1}}\varepsilon_{t}$ & $\varepsilon_t\sim \phi$ \\ \emph{3: HT} & $X_t=\left[\phi(X_{t-1}+1.2)+1.5\phi(X_{t-1}-1.2)\right]\varepsilon_{t}$ & $\varepsilon_t\sim \phi$\\ \hline \end{tabular} \end{center} Model 1 is a classic $ARCH(1)$ and model 2 a $GARCH(1,1)$. Model 3 (HT) is a nonlinear $ARCH$ used by \cite{HarTsy97}. All the models satisfy the assumptions of this paper.
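For reproducibility, the three data-generating processes of the table can be simulated as follows. This Python sketch is a minimal illustration of ours (the initial volatility and the burn-in length are arbitrary choices), not the R code used in the study.

```python
import numpy as np

def norm_pdf(x):
    """Standard normal density phi(x)."""
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def simulate(model, n, rng, burn=500):
    """Generate a path of length n from models 1-3 of the simulation
    study, discarding an initial burn-in of `burn` observations."""
    eps = rng.standard_normal(n + burn)
    X = np.zeros(n + burn)
    sig2 = 0.2  # arbitrary starting volatility, forgotten after burn-in
    for t in range(1, n + burn):
        if model == 1:    # ARCH(1)
            sig2 = 0.1 + 0.5 * X[t - 1] ** 2
        elif model == 2:  # GARCH(1,1)
            sig2 = 0.1 + 0.3 * X[t - 1] ** 2 + 0.2 * sig2
        else:             # HT: sigma_t is the bracketed mixture of densities
            sig2 = (norm_pdf(X[t - 1] + 1.2) + 1.5 * norm_pdf(X[t - 1] - 1.2)) ** 2
        X[t] = np.sqrt(sig2) * eps[t]
    return X[burn:]

rng = np.random.default_rng(1)
for m in (1, 2, 3):
    path = simulate(m, 2000, rng)
    assert path.shape == (2000,) and np.all(np.isfinite(path))
```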
Unlike models 1 and 2, model 3 is not symmetric with respect to zero and it is highly nonlinear, so that a variable bandwidth should be preferred for this model. We stress here that the GAS method automatically detects the kind of bandwidth to use, which is a trade-off between local and global smoothing, by automatically setting the optimal value for the parameter $a$. Anyway, given the aims of this paper, here we do not investigate the performance of GAS with respect to the selection of the parameter $a$ (see \cite{GioPar14} for some results on this). So, for the sake of comparison, in the whole simulation study we will impose a global bandwidth for all three models. We use R to perform a Monte Carlo simulation study with $500$ replications and three different lengths for the simulated time series: $n=500, 1000, 2000$. We implement the procedure described in section \ref{idea}, using the Epanechnikov kernel $K(\cdot)$ for the LLE and the logistic sigmoidal function for the Neural Network estimator. The number of nodes in the hidden layer of the neural network is selected following an automatic BIC optimization procedure, as in \cite{FarCha98}. Some experiments not reported in this paper show that the number of nodes of the neural network does not have a strong influence on the final estimation results. See \cite{GioPar14} for further details. For each replication, the integrated squared error ($ISE$) is calculated as \begin{equation}\label{ISE} ISE(\widehat{ h})=\frac{1}{n_x}\sum_{j=1}^{n_x}\left[\widehat{ \sigma}^2(x_j)-\sigma^2(x_j)\right]^2, \end{equation} where $x_1,\ldots,x_{n_x}$, with $n_x=20$, are randomly chosen over the support of the volatility function. In the following table, we report the mean, the median and the standard deviation of the $ISE(\widehat{ h})$ ($MISE$, $MEDISE$, and $SDISE$, respectively) for the 500 replications of the models. We compare two kinds of estimators $\widehat{ \sigma}^2(x_j)$ in (\ref{ISE}).
On the one hand, we have the GAS volatility estimator $\widehat{ \sigma}^2(x) \equiv\widehat{ \sigma}^2(x;\widehat{ h}) $ given in (\ref{phi.hat}), with the bandwidth estimated by the procedure explained in section \ref{idea} (the parameter $a$ is set to a high value to obtain a global smoothing); we denote this estimator with the suffix \emph{GAS} in the tables. On the other hand, we use the classic MLE for the estimation of the parameters of a parametric $GARCH(1,1)$ model, using the \texttt{rugarch} package for R. \begin{table}[t] \centering \begin{tabular}{rcccccc} \hline \multicolumn{7}{l}{\textbf{Model 1: $ARCH(1)$}} \\ \hline $n$ & $ MISE_{GAS} $ & $MISE_{MLE}$ & $MEDISE_{GAS}$ & $MEDISE_{MLE}$ & $SDISE_{GAS}$ & $SDISE_{MLE}$ \\ \hline 500 & 0.563642 & 0.315009 & 0.324258 & 0.187120 & 1.223880 & 0.442006 \\ 1000 & 0.490322 & 0.391450 & 0.295758 & 0.248785 & 0.549949 & 0.491264 \\ 2000 & 0.520154 & 0.549357 & 0.380937 & 0.360051 & 0.606490 & 0.658968 \\ \hline \multicolumn{7}{l}{\textbf{Model 2: $GARCH(1,1)$}} \\ \hline $n$ & $ MISE_{GAS} $ & $MISE_{MLE}$ & $MEDISE_{GAS}$ & $MEDISE_{MLE}$ & $SDISE_{GAS}$ & $SDISE_{MLE}$ \\ \hline 500 & 0.582419 & 1.54658 & 0.409780 & 0.608519 & 0.546963 & 2.99049 \\ 1000 & 0.716475 & 1.37249 & 0.515701 & 0.657613 & 0.690708 & 2.16681 \\ 2000 & 1.071458 & 1.30674 & 0.775978 & 0.804068 & 1.127064 & 1.97655 \\ \hline \multicolumn{7}{l}{\textbf{Model 3: $HT$}} \\ \hline $n$ & $ MISE_{GAS} $ & $MISE_{MLE}$ & $MEDISE_{GAS}$ & $MEDISE_{MLE}$ & $SDISE_{GAS}$ & $SDISE_{MLE}$ \\ \hline 500 & 0.364365 & 0.957462 & 0.255328 & 0.907852 & 0.356133 & 0.337183\\ 1000 & 0.356558 & 1.732367 & 0.261877 & 1.661635 & 0.331395 & 0.496300\\ 2000 & 0.385294 & 3.403906 & 0.282595 & 3.371010 & 0.331421 & 0.920338\\ \hline \end{tabular} \caption{\label{tabella1} \small Comparison between GAS and MLE methods for the estimation of the volatility function.
The table reports the mean, the median and the standard deviation of the integrated squared error ($MISE$, $MEDISE$ and $SDISE$, respectively) for the 500 replications of models 1-3. All the values have been multiplied by $n$ to make them more comparable.} \end{table} Table \ref{tabella1} shows the results. All the values of $MISE$, $MEDISE$ and $SDISE$ have been multiplied by the sample size $n$, to make them more comparable (this explains why the values shown in the table do not decrease with $n$; they do once divided by $n$). As expected, for model 1, which is an $ARCH(1)$, the smallest values are observed for the MLE method. In fact, for this model the rate of convergence of the MLE estimator is $O_p(n^{-1/2})$, faster than the convergence rate of the GAS estimator, which is $O_p(n^{-2/5})$ (see Corollary \ref{rem2}). However, for $n=2000$ the results do not present any relevant difference. Instead, for model 3, we observe the smallest values for the GAS method, as expected, since in this case the MLE works with a misspecified model. For this model, the GAS estimator is consistent whereas the MLE for $GARCH(1,1)$ is not. In fact, the $MISE$, $MEDISE$ and $SDISE$ (multiplied by $n$) seem to be constant for GAS when $n$ grows. This is not true for the MLE, where instead they increase. But, very surprisingly, for model 2, which is a $GARCH(1,1)$, we again observe the smallest values for the GAS method, even though the MLE works with a correctly specified model. This result actually gives evidence of the usefulness of Theorems \ref{theorem1} and \ref{theorem2}. In fact, thanks to Theorem \ref{theorem1}, the GAS method formulates and estimates the $GARCH(1,1)$ by means of a particular $NARCH(1)$, based on a single regressor $X_{t-1}$, thus with rate $O_p(n^{-2/5})$. Moreover, by Theorem \ref{theorem2} we estimate the optimal bandwidth.
On the other hand, the MLE works with a (correctly specified) model with two regressors ($X_{t-1}$ and $\sigma_{t-1}$), one of which is latent and therefore estimated. As a consequence, its finite sample performance pays a penalty for this aspect. However, when $n$ grows the differences tend to shrink. Finally, we report some simulation results to evaluate the test for the symmetry of the volatility function, proposed in section \ref{Test}. We have applied the test to model 2, where the volatility function is symmetric, and to model 3, where the volatility function is not symmetric (\emph{leverage effects}). We consider 500 simulation runs and $n_x=20$ points (equally spaced), giving $10$ multiple tests. We use the global bandwidth in $T_i$, $i=1,\ldots,10$, and the estimator $\widehat{\mathbb{V}}_{\omega_{I_x}}$ in (\ref{Rhat}). We set $\alpha=1\%$. The results in table \ref{tabella4} have to be read as the size of the test for model 2, which is symmetric, and the power of the test for model 3, which is asymmetric. The first row is around the nominal size of $1\%$ for large $n$. Moreover, as we expect, the power (second row) grows when $n$ increases. \begin{table} \centering \begin{tabular}{lccc} \hline & & $n$ &\\ Model & 500 & 1000 & 2000\\ \hline $GARCH(1,1)$ & 3.8\% & 1.6\% & 1.6\%\\ $HT$ & 70.0\% & 93.8\% & 99.8\% \\ \hline \end{tabular} \caption{\label{tabella4} Empirical percentages of rejection of the null hypothesis (symmetry) over 500 replications from models 2 ($GARCH(1,1)$) and 3 ($HT$), with $n=500, 1000, 2000$. $\alpha=1\%$.} \end{table} \section{Real data application} \label{data} \begin{figure} \caption{The observed returns of the Dow Jones index from January 3$^{rd}$, 1996 to the end of January 2002.} \label{fig1} \end{figure} In this section we apply our method to real data. We consider a time series of the Dow Jones index from January 3$^{rd}$, 1996 to the end of January 2002. The length of the time series is 1500.
We derive the returns and use them in order to estimate the volatility function and its confidence intervals using the GAS procedure. As a proxy of the true volatility, we also extract the \emph{realized volatility} time series from the Oxford-Man Institute's \emph{realized library}, which contains daily measures of how volatile financial assets or indexes were in the past, based on intra-daily data (see \cite{HebAlt09}). In figure \ref{fig1}, we report the returns on the $x$ axis and the realized volatility on the $y$ axis. Using only the observed returns, we apply our method to estimate the volatility function. We draw it in figure \ref{fig2} as the central solid line. The estimate of the parameter $a$ (for bandwidth selection) is $\widehat{a}=0.089$, following \cite{GioPar14}. By Theorem \ref{corol}, we can build the confidence intervals. They are shown in figure \ref{fig2} by the two external solid lines. We add the estimated volatility function and the confidence intervals derived by imposing a \emph{global constant smoothing} (i.e., with constant bandwidth obtained by fixing a large $a$). They are shown in figure \ref{fig2} by the central dashed and the two external dashed lines, respectively. Note that, in both cases of local and global approaches, we do not consider any correction for the bias. We can point out an important difference between the GAS and the global bandwidth approaches. If we look at the confidence intervals in figure \ref{fig2}, the GAS ones adapt better than those of the global bandwidth method. In fact, the GAS procedure has the advantage of taking into account the heteroscedastic behaviour in the data. In figure \ref{fig2} we plot $n_x=100$ points (returns and realized volatility) which are randomly chosen. From figures \ref{fig1} and \ref{fig2} we can note that there is an asymmetry in the realized volatility. In particular, we can observe a greater variability for negative values of the returns.
This is confirmed by the GAS confidence intervals (solid lines in figure \ref{fig2}), which are wider than the confidence intervals of the global bandwidth approach (dashed lines) when the observed returns are negative. Finally, for the $n_x=100$ points in figure \ref{fig2}, we have an actual coverage of 98\% and 45\% for the GAS and global bandwidth methods, respectively. This shows an important gain over the global bandwidth technique when an asymmetric behaviour in the data must be accounted for. \begin{figure} \caption{The central solid line is the volatility function estimated by the GAS method. The solid lines on the top and bottom are the upper and lower confidence intervals at 95\%, respectively. The dashed lines refer to the estimated volatility and confidence intervals using the global bandwidth. The dots are the realized volatility values.} \label{fig2} \end{figure} \section{Conclusions} \label{final} In this paper we have presented a general nonparametric framework for volatility analysis. The main contributions of this work are: \begin{itemize} \item the extension of the GAS method of \cite{GioPar14} to the framework of dependent data, to achieve an optimal bandwidth estimation for volatility; \item a new nonparametric estimator of the volatility function, based on local linear polynomials with data-driven optimal local bandwidth, which reaches the optimal convergence rate, as shown theoretically in the paper; \item two useful inferential tools, derived from the functionals estimated for the optimal bandwidth in the GAS procedure, to test the validity of a given parametric model: nonparametric confidence intervals and a test for symmetry; \item last, but not least, a new representation of the $GARCH(1,1)$ model by means of a nonparametric $ARCH(1)$ model.
With this new representation, we avoid the use of the (latent) lagged volatility in the model and, therefore, a more precise \emph{News Impact Curve} can be derived and estimated. Moreover, we improve the rate of convergence of the nonparametric volatility estimator. \end{itemize} \appendix \section{Assumptions and Proofs} We make the following assumptions. First, given (\ref{eqn01}) and (\ref{mphi.x}), we need to guarantee that $E(X_{t+1}^4)<\infty$. \noindent\textbf{Assumptions (a)} \begin{enumerate} \item[(a1)] The errors $\varepsilon_t$ have a continuous and positive density function with \[ E(\varepsilon^2_t)=1,\quad E(\varepsilon_t)=E(\varepsilon^3_t)=0, \quad E(\varepsilon^4_t)<\infty. \] \item[(a2)] The function $\sigma(\cdot)$ is positive and has a continuous second derivative. \item[(a3)] There exist some constants $M$ and $\alpha$ such that \begin{itemize} \item[i)] $0<M<\infty$, $\sigma(y) \leq M(1+|y|)$ and $M\left[E|\varepsilon_t|^4\right]^{1/4}<1$ for all $y\in{\mathbb{R}}$; \item[ii)] $0\leq\alpha\leq M$, $\sigma(y)-\alpha|y|=o(1)$ for $|y|\rightarrow\infty$. \end{itemize} \item[(a4)] The process $\{X_t\}$ is strictly stationary. \item[(a5)] The density function $f_X(\cdot)$ of the (stationary) measure of the process $\mu_X$ exists; it is bounded, continuous and positive on every compact set in ${\mathbb{R}}$. \item [(a6)] The kernel function $K(\cdot)$ is a compactly supported, bounded function which is positive on a set of positive Lebesgue measure. \end{enumerate} \noindent Under assumptions (a1), (a3), (a5) and the positivity part of (a2), it can be shown that the process is geometrically ergodic and exponentially $\alpha$-mixing with $E(X_t^4)<\infty$ (see \cite{HarTsy97}). Moreover, assumption (a4) is only made to simplify the proofs.
Assumption (a2), through the continuity of the second derivative of $\sigma(\cdot)$, is used for the estimation of the second derivative of $\sigma^2(\cdot)$. In particular, we need this second derivative to be continuous in order to apply the bounds for Neural Network estimation. Finally, assumption (a6) is typical for the kernel function, as in \cite{HarTsy97}. \noindent\textbf{Assumptions (b)} \begin{enumerate} \item[(b1)] $\sum_{k=1}^{d_n}\left|c_k\right|\le \Delta_n$. \item[(b2)] $d_n\rightarrow \infty$, $\Delta_n\rightarrow \infty$ as $n\rightarrow \infty$. \item[(b3)] The activation function is strictly increasing, sigmoidal and has a continuous second derivative. \end{enumerate} \noindent Assumptions (b1) and (b2) on $d_n$ are typical in order to ensure the approximation capability of the Neural Network technique. Instead, assumptions (b1) and (b2) on $\Delta_n$ ensure that the approximation capability of Neural Networks extends to non-compact sets (see \cite{FraDia06}). Assumption (b3) ensures that the activation function of the Neural Network is regular enough (continuous second derivative) in order to estimate some functionals which depend on the second derivative of the unknown volatility function. \noindent\textbf{Proof of Theorem \ref{theorem1}: }Suppose that the distribution function of $\varepsilon_t$ is symmetric around zero. Then $\sigma_t^2=\alpha_0+\alpha_1\varepsilon_{t-1}^2\sigma^2_{t-1}+\beta \sigma_{t-1}^2=\alpha_0+(\alpha_1+\beta)\widetilde{X}_{t-1}^2$, where $\widetilde{X}_{t}^2=\widetilde{\varepsilon}^2_{t}\sigma_t^2$ with \[ \widetilde{\varepsilon}_t=sgn(\varepsilon_t)\sqrt{\frac{\varepsilon_t^2+\beta/\alpha_1}{1+\beta/\alpha_1}}\qquad \mbox{ and }\qquad sgn(x)=\left\{\begin{array}{ll}1&\mbox{ if }x>0\\ 0&\mbox{ if }x=0\\ -1&\mbox{ if }x<0 \end{array} \right.
\] Therefore, we can write the \textit{GARCH(1,1)} process as \begin{equation}\label{garch2} X_t=\widetilde{X}_tC_{\widetilde{\varepsilon}_t}^{1/2}\qquad \mbox{with} \qquad C_{\widetilde{\varepsilon}_t}=1+(\beta/\alpha_1)\left(1-1/\widetilde{\varepsilon}_t^2\right). \end{equation} Note that $\widetilde{X}_t\sim ARCH(1;\alpha_0,\alpha_1+\beta)$ with the error term $\{\widetilde{\varepsilon}_t\}$ defined above. Moreover, model (\ref{garch2}) is an exact representation for the \textit{GARCH(1,1)}. It is easy to verify that the \textit{GARCH(1,1)} model in (\ref{garch2}) is well defined in the sense that $E\left(\widetilde{\varepsilon}_tC_{\widetilde{\varepsilon}_t}^{1/2}\right)=0$ and $E\left(\widetilde{\varepsilon}_t^2C_{\widetilde{\varepsilon}_t}\right)=1$. Moreover, it follows that $E\left(X_t^2|\widetilde{X}_{t-1}=\widetilde x\right)=\alpha_0+(\alpha_1+\beta)\widetilde x^2\equiv \sigma^2(\widetilde x)$. Now, we can write $X_t^2=C_{\widetilde{\varepsilon}_t}\widetilde{\varepsilon}_t^2\left(\alpha_0+(\alpha_1+\beta)\frac{X_{t-1}^2}{C_{\widetilde{\varepsilon}_{t-1}}}\right)$. So we have \[ E\left(X_t^2|X_{t-1}=x\right)=\alpha_0+(\alpha_1+\beta)x^2E\left(\left.\frac{1}{C_{\widetilde{\varepsilon}_{t-1}}}\right|X_{t-1}=x\right),\quad \forall x\not= 0. \] Let $g(x;\alpha_1,\beta)\equiv E\left(\left.\frac{1}{C_{\widetilde{\varepsilon}_{t-1}}}\right|X_{t-1}=x\right)$, $\forall x\not= 0$. When $x=0$ we can use (\ref{garch}). So we can write \[ E(X_t^2|X_{t-1}=0)=\alpha_0+\beta E(\sigma_{t-1}^2|X_{t-1}=0)=\alpha_0+\beta E(\sigma_{t-1}^2|\varepsilon_{t-1}=0)= \] \[ =\alpha_0+\beta E(\sigma_{t-1}^2)=\alpha_0+\frac{\beta \alpha_0}{1-\alpha_1-\beta}\equiv A_0. \] Let $B_0=\beta\alpha_0/(1-\alpha_1-\beta)$. Now, we need to evaluate the function $g(x;\alpha_1,\beta)$, for each $x\not =0$. In such a case, we have $X_{t-1}\not =0 \Longleftrightarrow\varepsilon_{t-1}\not =0\Longleftrightarrow C_{\widetilde{\varepsilon}_{t-1}}>0$, with probability one.
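As a quick numerical sanity check of the exact representation (\ref{garch2}), one can verify on a simulated path (with purely illustrative parameter values, not taken from the empirical analysis) that $X_t=\widetilde{X}_tC_{\widetilde{\varepsilon}_t}^{1/2}$ holds exactly and that $\widetilde{X}_t$ follows the stated ARCH(1) recursion:

```python
import numpy as np

# Sanity check of the exact representation (garch2) on a simulated
# path; the parameter values (a0, a1, b) are illustrative only.
rng = np.random.default_rng(0)
a0, a1, b = 0.1, 0.15, 0.8
n = 10_000
eps = rng.standard_normal(n)

# GARCH(1,1) recursion (garch), started at the unconditional variance
sig2 = np.empty(n)
sig2[0] = a0 / (1 - a1 - b)
for t in range(1, n):
    sig2[t] = a0 + a1 * eps[t-1]**2 * sig2[t-1] + b * sig2[t-1]
X = eps * np.sqrt(sig2)

# transformed error eps~_t and multiplicative factor C
r = b / a1
eps_t = np.sign(eps) * np.sqrt((eps**2 + r) / (1 + r))
C = 1 + r * (1 - 1 / eps_t**2)
X_tilde = eps_t * np.sqrt(sig2)

assert np.allclose(X, X_tilde * np.sqrt(C))                    # X_t = X~_t C^{1/2}
assert np.allclose(eps_t**2 * C, eps**2)                       # eps~^2 C = eps^2
assert np.allclose(sig2[1:], a0 + (a1 + b) * X_tilde[:-1]**2)  # ARCH(1) form
```

The last assertion confirms algebraically why $\widetilde{X}_t\sim ARCH(1;\alpha_0,\alpha_1+\beta)$: since $(\alpha_1+\beta)\widetilde{\varepsilon}_t^2=\alpha_1\varepsilon_t^2+\beta$, the transformed squared observation reproduces the original volatility recursion term by term.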
Since $C_{\widetilde{\varepsilon}_{t-1}}<\infty$, with probability one, the function $g(\cdot;\alpha_1,\beta)$ is always positive and bounded for each $x\not =0$. Now, we can conclude that \begin{eqnarray}\label{garch11} E(X_t^2|X_{t-1}=x) &=&\left\{\begin{array}{lll} A_0&\mbox{if}&x=0,\\ A_0+(\alpha_1+\beta)\tilde g(x;\alpha_1,\beta)x^2&\mbox{if}&x\not =0, \end{array} \right. \end{eqnarray} where $\tilde g(x;\alpha_1,\beta)=g(x;\alpha_1,\beta)-\frac{B_0}{x^2(\alpha_1+\beta)}$. Finally, if $|x|\rightarrow\infty$ then $|X_{t-1}|\ge|x|$, with probability one, since $C_{\widetilde{\varepsilon}_{t-1}}$ is always bounded. Thus we have $X_{t-1}=O(\widetilde{X}_{t-1})$, with probability one. Moreover, $E(X_t^2)=E(\widetilde{X}_t^2)$. Then, there exists an $M>0$ such that \[ E(X_t^2|X_{t-1}=x)=E(X_t^2|\widetilde{X}_{t-1}=x)=\alpha_0+(\alpha_1+\beta)x^2\qquad \forall |x|>M. \] It follows that $g(x;\alpha_1,\beta)\rightarrow 1$ and also $\tilde g(x;\alpha_1,\beta)\rightarrow 1$ when $|x|\rightarrow\infty$. $\Box$ \noindent\textbf{Proof of Theorem \ref{theorem2}: } We analyze the convergence in probability since the almost sure convergence is straightforward, as in the proofs of Propositions \ref{proposition1} and \ref{proposition2} in section \ref{GioPar16}. First, consider $I_x$. By the assumptions of this theorem, we have that $\widehat{\mathbb{B}}_{\omega_{I_x}}>0$ in probability, if $n\rightarrow\infty$. Therefore, by Propositions \ref{proposition1} and \ref{proposition2} in section \ref{GioPar16} we have that $\widehat{h}_{I_x}/h_{I_x}\stackrel{p}{\longrightarrow}1$, when $n\rightarrow \infty$. Since $I_x^n\rightarrow I_x$ when $a_n\rightarrow a$ and given that $h_{I_x}$ is a bounded and continuous function with respect to $a$, it follows that $\widehat{h}_{I_x^n}/h_{I_x}\stackrel{p}{\longrightarrow}1$ when $a_n\rightarrow a$ with $n\rightarrow\infty$. Now we can consider the case when $a_n\rightarrow 0$.
Using the mean value theorem it follows that $h_{I_x^n}/h^{opt}(x)\rightarrow 1$, when $n\rightarrow \infty$. So we have only to prove that \begin{displaymath} \frac{\widehat{h}_{I_x^n}}{h_{I_x^n}}\stackrel{p}{\longrightarrow}1 \quad n\rightarrow \infty. \end{displaymath} It is sufficient to show that the number of values from the process (\ref{eqn01}) in $I_x^n$, $N_{I^n_x}$, tends to infinity with probability one, if $n\rightarrow\infty$, in order to apply, again, Propositions \ref{proposition1} and \ref{proposition2} in section \ref{GioPar16}. We fix a positive $a'$ in the sequence $\{a_n\}$. Thus, we have an interval $I_x^{a'}$ and $J_x^{a'}:=\overline{I}_x^{a'}$. We can build a Markov chain with two states, $I$ and $J$, which are the states when the process from (\ref{eqn01}) is in $I_x^{a'}$ and $J_x^{a'}$, respectively. Let $p_{JI}^{(n)}$ be the transition probability from state $J$ to state $I$ in $n$ steps. Now, using $\mu_X(\cdot)$ we get the unique stationary probability. By Markov's theorem it follows that $p_{\cdot I}^{(n)}\rightarrow \mu_X(I_x^{a'})$, when $n\rightarrow \infty$, for every initial state $J$. Based on assumptions (a) the process in (\ref{eqn01}) is geometrically ergodic, so there exists an $n_0$ such that $\forall n>n_0$ and $\forall a'>0$ we have $\left|p_{\cdot I}^{(n)}-\mu_X(I_x^{a'})\right|\le W_1 e^{-W_2n}$, with $W_1$ and $W_2$ two positive constants.\\ Now, using the ergodic theorem for Markov chains we can write $N_{I^{a'}_x}\approx n p_{\cdot I}^{(n)}$ with probability one. By assumption (a5), we have $\mu_X(I_x^{a'})=f_X(x')a'$ for some value $x'\in I_x^{a'}$. Therefore, $n p_{\cdot I}^{(n)}\approx W_1ne^{-W_2n}+f_X(x')na'$. Since $na_n\rightarrow\infty$, when $n\rightarrow\infty$, if we replace $a'$ with $a_n$, then we have that $N_{I_x^n}\rightarrow\infty$ with probability one. Finally, the result follows.
$\Box$\\ \begin{remark} \label{rem3} Looking at the proof of Theorem \ref{theorem2}, the condition $na_n\rightarrow\infty$ can be replaced by the assumption that the number of values from the process (\ref{eqn01}) in $I_x^n$ tends to infinity, possibly at a slower rate than $n$, when $n\rightarrow\infty$. \end{remark} \noindent{\textbf{Proof of Theorem \ref{corol}}}: By Theorem \ref{theorem2} we have that $\widehat{ h}_{I_x}/h_{I_x}\stackrel{p}{\longrightarrow} 1$. Using the same arguments as in Proposition \ref{proposition1} in section \ref{GioPar16}, it follows that $\widehat{\mathbb{B}}_{\omega_{I_x}}\stackrel{p}{\longrightarrow}\mathbb{B}_{\omega_{I_x}}\approx \mathcal{B}(x)$. Besides, by Proposition \ref{proposition2} in section \ref{GioPar16} we have that $\widehat{\mathbb{V}}_{\omega_{I_x}}\stackrel{p}{\longrightarrow}\mathbb{V}_{\omega_{I_x}}\approx \mathcal{V}(x)$. The quantities $\mathcal{B}(x)$ and $\mathcal{V}(x)$ are defined in (\ref{fun2}). Moreover, by (\ref{rate_s}) in Remark \ref{rem2}, $\widehat{\sigma}^2(x;\widehat{ h}_{I_x})-\widehat{\sigma}^2(x;h_{I_x})\stackrel{p}{\longrightarrow}0$. Therefore, we can conclude that \[ \sqrt{n\widehat{ h}_{I_x}}\frac{\left[\widehat{\sigma}^2(x;\widehat{h}_{I_x})-\sigma^2(x)-\frac{1}{2}\widehat{ h}^2_{I_x}\widehat{\mathbb{B}}_{\omega_{I_x}}\right]}{\left(\widehat{\mathbb{V}}_{\omega_{I_x}}\right)^{1/2}}\quad \mbox{ and }\quad \sqrt{n h_{I_x}}\frac{\left[\widehat{\sigma}^2(x;h_{I_x})-\sigma^2(x)-\frac{1}{2}h^2_{I_x}\mathcal{B}(x)\right]}{\left(\mathcal{V}(x)\right)^{1/2}} \] have the same asymptotic distribution. Applying Theorem 3.2 of \cite{HarTsy97} the result follows. $\Box$ \\ \\ \noindent{\textbf{Proof of Theorem \ref{cor2}}}: Under $H_0$, $\sigma^2(x)=\sigma^2(-x)$, $\forall x>0$. The same is true for the bias and variance. We assume that $\mathbb{V}_{\omega_{I_x}}\approx \mathcal{V}(x)$.
By Theorem \ref{theorem2} and Proposition \ref{proposition2} in section \ref{GioPar16}, we have $\widehat{ h}_{I_x}/h_{I_x}\stackrel{p}{\longrightarrow}1$ and $\widehat{\mathbb{V}}_{\omega_{I_x}} \stackrel{p}{\longrightarrow}\mathcal{V}(x)$. Let \[ \widehat{ T}(x)=\sqrt{n}\frac{\left[\widehat{ h}_{I_{x}}^{1/2}\widehat{\sigma}^2(x;\widehat{ h}_{I_{x}})-\widehat{ h}_{I_{-x}}^{1/2}\widehat{\sigma}^2(-x;\widehat{ h}_{I_{-x}})\right]}{\left(\widehat{\mathbb{V}}_{\omega_{I_{x}}}+\widehat{\mathbb{V}}_{\omega_{I_{-x}}}\right)^{1/2}}. \] It follows that $\widehat{ T}(x)$ has the same asymptotic distribution as \[ \widetilde{T}(x)=\sqrt{nh_{I_x}}\frac{\left[\widehat{\sigma}^2(x;h_{I_x})-\widehat{\sigma}^2(-x;h_{I_{-x}})\right]}{\left(2\mathcal{V}(x)\right)^{1/2}}. \] By Theorem \ref{corol}, both $\sqrt{nh_{I_x}}\widehat{\sigma}^2(x;h_{I_x})$ and $\sqrt{nh_{I_{x}}}\widehat{\sigma}^2(-x;h_{I_{-x}})$ have an asymptotic Normal distribution. We have only to evaluate the mixed terms, that is, the terms with different fixed points, $-x$ and $x$. Now, it is sufficient to prove that \[ (I)\mbox{ } hE\left[\frac{1}{h^2}K\left(\frac{X-x}{h}\right)K\left(\frac{X+x}{h}\right)\right]\rightarrow 0 \qquad \mbox{and} \] \[ (II) \mbox{ } hE\left[\frac{1}{h^2}K\left(\frac{X_1-x}{h}\right)K\left(\frac{X_2+x}{h}\right)\right]\rightarrow 0 \] when $n\rightarrow\infty$, with $h\rightarrow 0$. The univariate random variable $X$ is drawn from the process $\{X_t\}$ in model (\ref{eqn01}), while $(X_1,X_2)$ is a bivariate random variable from the same process. \[ I=\frac{1}{h}\int K\left(\frac{X-x}{h}\right)K\left(\frac{X+x}{h}\right)f_X(X)dX=\int K(Z)K\left(Z+\frac{2x}{h}\right)f_X(x+hZ)dZ, \] changing the variable from $X$ to $Z=\frac{X-x}{h}$. By assumption (a5) it follows that \[ \int K(Z)K\left(Z+\frac{2x}{h}\right)f_X(x+hZ)dZ\le \sup f_X(x+hZ)\int K(Z)K\left(Z+\frac{2x}{h}\right)dZ\rightarrow 0 \] when $n\rightarrow\infty$, $h\rightarrow 0$ and $K(\cdot)$ is bounded by (a6).
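The vanishing of the mixed term (I) can be illustrated numerically. For a compactly supported kernel (an Epanechnikov kernel is assumed here, purely for illustration), the supports of $K(z)$ and $K(z+2x/h)$ become disjoint as soon as $2x/h$ exceeds the support width, i.e. for all $h<x$, so the integral is eventually exactly zero:

```python
import numpy as np

# Illustration of why the mixed term (I) vanishes: for a compactly
# supported kernel, K(z) and K(z + 2x/h) eventually have disjoint
# supports. Epanechnikov kernel on [-1, 1], assumed for illustration.
def K(z):
    return np.where(np.abs(z) <= 1, 0.75 * (1 - z**2), 0.0)

grid = np.linspace(-3.0, 3.0, 200_001)
dz = grid[1] - grid[0]

def overlap(x, h):
    # Riemann-sum approximation of \int K(z) K(z + 2x/h) dz
    return float(np.sum(K(grid) * K(grid + 2 * x / h)) * dz)

x = 0.5
vals = [overlap(x, h) for h in (1.0, 0.6, 0.4)]
assert vals[0] > vals[1] > 0.0   # supports still overlap for h > x
assert vals[2] == 0.0            # h < x: supports disjoint, integral 0
```

Note that, consistently with the remark in the proof, the convergence to zero occurs at any rate with respect to $h$ once the supports separate.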
Note that the convergence to zero holds for any rate with respect to $h$. Finally, \[ II=\frac{1}{h}\int\int K\left(\frac{X_1-x}{h}\right)K\left(\frac{X_2+x}{h}\right)f_{X_1X_2}(X_1,X_2)dX_1dX_2= \] \[ =h\int\int K(Z_1)K\left(Z_2+\frac{2x}{h}\right)f_{X_1X_2}(x+hZ_1,x+hZ_2)dZ_1dZ_2,\quad (III) \] changing the variable from $(X_1,X_2)$ to $(Z_1=\frac{X_1-x}{h},Z_2=\frac{X_2-x}{h})$. Since $f_{X_1X_2}(\cdot,\cdot)$ is bounded by $C_0$, then \[ III\le hC_0\int K(Z_1)dZ_1\int K\left(Z_2+\frac{2x}{h}\right)dZ_2=hC_0\int K\left(Z_2+\frac{2x}{h}\right)dZ_2\rightarrow 0 \] when $n\rightarrow\infty$, $h\rightarrow 0$ and again by the boundedness of $K(\cdot)$ in (a6). The proof is complete. $\Box$ \section{Supplementary results}\label{GioPar16} In this Section we show a self-contained method to estimate the unknown functionals for the asymptotic bandwidth parameter in the Local Linear Polynomial estimator. We extend the method in \cite{GioPar14} to the case of dependent data. Moreover, we adapt the technical approach of \cite{FraDia06} and \cite{Gyorfy02} in order to deal with the Neural Networks estimator for unknown functions defined on non-compact sets. \begin{remark}\label{rem1} Under the assumptions ({a1})--({a5}), it can be shown that the process is geometrically ergodic and exponentially $\alpha$-mixing (see \cite{HarTsy97}). \end{remark} Let us consider, for some $\lambda>0$, \begin{equation} \label{Qs} Q_1(n):=\frac{\Delta_n^2d_n\log\left(\Delta_n^2d_n\right)}{\sqrt{n}}, \quad Q_2(n):=\frac{\Delta_n^4}{n^{1-\lambda}}, \quad \mathcal{F}_n:=\left\{q:\sum_{k=1}^{d_n}\left|c_k\right|\le \Delta_n\right\}. \end{equation} $\mathcal{F}_n$ is the class of feedforward neural networks with bounded weights, and $\mathcal{F}=\bigcup_{n=1}^{\infty}\mathcal{F}_n$ is the class of general feedforward neural networks. $\mathcal{F}$ is dense in the class of square integrable functions with respect to a given measure (\cite{Hor91}).
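To see how the conditions $Q_1(n)\rightarrow 0$ and $Q_2(n)\rightarrow 0$ can be met jointly, the following sketch evaluates both quantities along a growing sample size, under the hypothetical sieve choices $d_n=n^{1/4}$, $\Delta_n=\log n$ and $\lambda=0.1$ (these choices are for illustration only and are not prescribed by the paper):

```python
import math

# Hypothetical sieve choices (illustrative, not prescribed by the
# paper): d_n = n^{1/4}, Delta_n = log n, lambda = 0.1.
def Q1(n, d, D):
    return D**2 * d * math.log(D**2 * d) / math.sqrt(n)

def Q2(n, D, lam=0.1):
    return D**4 / n**(1 - lam)

qs1, qs2 = [], []
for n in (10**4, 10**6, 10**8):
    d, D = n**0.25, math.log(n)
    qs1.append(Q1(n, d, D))
    qs2.append(Q2(n, D))

assert qs1[0] > qs1[1] > qs1[2]   # Q1(n) decreases toward 0
assert qs2[0] > qs2[1] > qs2[2]   # Q2(n) decreases toward 0
```

Under these choices $Q_1(n)$ is of order $(\log n)^{2} n^{1/4}\log n / \sqrt{n}$ and $Q_2(n)$ of order $(\log n)^4/n^{0.9}$, both vanishing, so the network may grow with the sample while the consistency conditions below remain satisfied.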
Under model (\ref{eqn01}), the Neural Network estimator $q(x, \B{\widehat{ \eta}})$ can be written as \begin{equation} \label{NNe} q(x, \B{\widehat{ \eta}})=\arg \min_{f\in\mathcal{F}_n}\frac{1}{n-1}\sum_{k=1}^{n-1}\left(X_{k+1}^2-f(X_k)\right)^2. \end{equation} \subsection{Preliminary results} In this section we report some preliminary results for the Neural Networks estimator. Lemma \ref{lemma1} extends the consistency results of \cite{FraDia06} to the Neural Network estimator $q(x, \B{\widehat{ \eta}})$, using assumptions (a) and (b). Moreover, the same consistency as in \cite{FraDia06} is shown in Lemma \ref{lemma2} for the Neural Network estimator of the second derivative of the unknown function $\sigma^2(x)$. \begin{lemma}\label{lemma1} Under assumptions (a1) -- (a5) and (b), the estimator $q(x;\widehat{\eta})$ of $\sigma^2(x)$, defined in (\ref{NNe}), is consistent in the sense that: \begin{itemize} \item if $Q_1(n)\rightarrow 0$ as $n\rightarrow \infty$, then \begin{displaymath} E\int\left(q\left(x;\widehat{\mathbf{\eta}}\right)-\sigma^2(x)\right)^2d\mu_X(x)\rightarrow 0 \qquad n\rightarrow \infty; \end{displaymath} \item if, additionally, $Q_2(n)\rightarrow 0$ for some $\lambda>0$, then \begin{displaymath} \int\left(q\left(x;\widehat{\mathbf{\eta}}\right)-\sigma^2(x)\right)^2d\mu_X(x)\stackrel{a.s.}{\longrightarrow} 0 \qquad n\rightarrow \infty. \end{displaymath} \end{itemize} \end{lemma} \noindent\textit{\textbf{Proof:}} It is sufficient to apply Theorem (3.2) in \cite{FraDia06} to the estimator $q(x, \B{\widehat{ \eta}})$. By Remark \ref{rem1}, the process in (\ref{eqn01}) is exponentially $\alpha$-mixing, and the activation function of the Neural Network estimator is sigmoidal, continuous and strictly increasing by (b3). So the conditions of Theorem (3.2) in \cite{FraDia06} are satisfied.
$\Box$ \begin{lemma}\label{lemma2} Under the same assumptions as in Lemma \ref{lemma1}, the estimator of the second derivative of $\sigma^2(x)$ is consistent in the sense that: \begin{itemize} \item if $Q_1(n)\rightarrow 0$ as $n\rightarrow \infty$, then \begin{displaymath} E\int\left(q^{(2)}\left(x;\widehat{\mathbf{\eta}}\right)-\sigma^2_{(2)}(x)\right)^2d\mu_X(x)\rightarrow 0 \qquad n\rightarrow \infty; \end{displaymath} \item if, additionally, $Q_2(n)\rightarrow 0$ for some $\lambda>0$, then \begin{displaymath} \int\left(q^{(2)}\left(x;\widehat{\mathbf{\eta}}\right)-\sigma^2_{(2)}(x)\right)^2d\mu_X(x)\stackrel{a.s.}{\longrightarrow} 0 \qquad n\rightarrow \infty, \end{displaymath} where $\sigma^2_{(2)}(x)$ is the second derivative of $\sigma^2(x)$. \end{itemize} \end{lemma} \noindent\textit{\textbf{Proof:}} Let $\mathcal{G}$ denote the class of all functions $\sigma^2(x)$ satisfying assumptions (a2) and (a3). Now we can write \begin{displaymath} \int\left(q^{(2)}\left(x;\widehat{\mathbf{\eta}}\right)-\sigma^2_{(2)}(x)\right)^2d\mu_X(x)\le\left\|D^2\right\|^2\int\left(q\left(x;\widehat{\mathbf{\eta}}\right)-\sigma^2(x)\right)^2d\mu_X(x) \end{displaymath} where $\left\|D^2\right\|^2=\sup_{f\in\mathcal{G}}\int \left(f''(x)\right)^2d\mu_X(x)$.\\ By these assumptions, the linear operator $D^2$ is bounded, so $\left\|D^2\right\|^2<\infty$. Finally, using Lemma \ref{lemma1} we obtain the result. $\Box$\\ \noindent The next two lemmas are used in Propositions \ref{proposition1} and \ref{proposition2}.
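Lemmas \ref{lemma1} and \ref{lemma2} concern the network $q$ and its second derivative $q^{(2)}$. For a one-hidden-layer network with logistic activation (an activation consistent with (b3); the weights below are illustrative, not estimated), $q^{(2)}$ is available in closed form, which the following sketch checks against a finite-difference approximation:

```python
import numpy as np

# Closed-form second derivative of a one-hidden-layer logistic
# network q(x) = sum_k c_k * phi(a_k x + b_k):
#   q''(x) = sum_k c_k a_k^2 phi''(a_k x + b_k),
# with phi'' = phi (1 - phi)(1 - 2 phi) for the logistic phi.
# Weights below are illustrative only.
phi = lambda u: 1.0 / (1.0 + np.exp(-u))

def q(x, c, a, b):
    return np.sum(c * phi(a * x + b))

def q2(x, c, a, b):
    p = phi(a * x + b)
    return np.sum(c * a**2 * p * (1 - p) * (1 - 2 * p))

c = np.array([0.8, -0.5, 1.2])
a = np.array([1.0,  2.0, -1.5])
b = np.array([0.1, -0.3,  0.7])
x, h = 0.4, 1e-4

# central finite-difference check of the analytic second derivative
fd = (q(x + h, c, a, b) - 2 * q(x, c, a, b) + q(x - h, c, a, b)) / h**2
assert abs(q2(x, c, a, b) - fd) < 1e-6
```

This is the sense in which the smoothness required in (b3) pays off: once the network is fitted, its second derivative needs no further estimation step.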
\begin{lemma}\label{lemma3} Under the same assumptions as in Lemma \ref{lemma1}, $\frac{1}{n-1}\sum_{t=1}^{n-1}\left(q\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_t)\right)^2$ is consistent in the sense that: \begin{itemize} \item if $Q_1(n)\rightarrow 0$ as $n\rightarrow \infty$, then \[ \frac{1}{n-1}\sum_{t=1}^{n-1}\left(q\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_t)\right)^2\stackrel{p}{\longrightarrow} 0 \quad n\rightarrow \infty; \] \item if, additionally, $Q_2(n)\rightarrow 0$ for some $\lambda>0$, then \[ \frac{1}{n-1}\sum_{t=1}^{n-1}\left(q\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_t)\right)^2\stackrel{a.s.}{\longrightarrow} 0 \quad n\rightarrow \infty. \] \end{itemize} \end{lemma} \noindent\textit{\textbf{Proof:}} By Theorem (3.2) in \cite{FraDia06}, which follows the same line of proof as Theorem (16.1) in \cite{Gyorfy02}, we have that \begin{displaymath} W_1:=\sup_{f\in\mathcal{F}_n}\left|\frac{1}{n-1}\sum_{k=1}^{n-1}\left|f(X_k)-X_{k+1}^2\right|^2-E\left\{\left|f(X_t)-X_{t+1}^2\right|^2\right\}\right|\stackrel{p (a.s.)}{\longrightarrow}0 \end{displaymath} for $n\rightarrow \infty$. The above convergence is in probability or almost sure according to the two conditions, $Q_1(n)\rightarrow 0$ and $Q_2(n)\rightarrow 0$, respectively. Below, we use only the convergence in probability because the almost sure convergence follows by exactly the same technique. The Neural Network estimator satisfies $q\left(X_t;\widehat{\mathbf{\eta}}\right)\in\mathcal{F}_n$ for all $n>n_0$.
Using model (\ref{eqn01}) we can write \begin{eqnarray}\label{eqn01app} \nonumber &&W_2:=\frac{1}{n-1}\sum_{k=1}^{n-1}\left(q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right)^2-E\left\{\left(q\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_t)\right)^2\right\}+\\ \nonumber &&+\frac{1}{n-1}\sum_{k=1}^{n-1}\left(\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right)^2-E\left\{\left(\sigma^2(X_t)\varepsilon_{t+1}^2-\sigma^2(X_t)\right)^2\right\}+\\ \nonumber &&-\frac{2}{n-1}\sum_{k=1}^{n-1}\left|q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right|\left|\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right|+\\ &&+2E\left\{\left|q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right|\left|\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right|\right\} \end{eqnarray} Therefore $W_2\stackrel{p}{\longrightarrow}0$. Consider the terms in the second line of (\ref{eqn01app}). By assumptions (a) it follows that $E\left\{\left(\sigma^2(X_t)\varepsilon_{t+1}^2-\sigma^2(X_t)\right)^2\right\}=E\left[\sigma^4(X_t)\right]E\left\{\left(\varepsilon^2_{t}-1\right)^2\right\}:=c_1$ with $0<c_1<\infty$. By ergodicity of the process $\{X_t\}$ and using assumptions (a) we have that \begin{displaymath} \frac{1}{n-1}\sum_{k=1}^{n-1}\left(\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right)^2\stackrel{a.s.}{\longrightarrow}c_1 \qquad n\rightarrow\infty. \end{displaymath} Since $E\left[\sigma^4(X_t)\right]<\infty$ then $E\left\{\left(q\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_t)\right)^2\right\}<\infty$ and by Lemma \ref{lemma1} \[ E\left\{\left(q\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_t)\right)^2\right\}\stackrel{p}{\longrightarrow}0 \] when $n\rightarrow\infty$. Using Schwarz's inequality we have that \[E\left\{\left|q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right|\left|\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right|\right\}\stackrel{p}{\longrightarrow}0.
\] So we have that \begin{eqnarray*} &&W_3:=\frac{1}{n-1}\sum_{k=1}^{n-1}\left(q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right)^2-\frac{2}{n-1}\sum_{k=1}^{n-1}\left|q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right|\left|\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right| \end{eqnarray*} and $W_3\stackrel{p}{\longrightarrow}0$. Since $E\left\{\left(q\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_t)\right)^2\right\}<\infty$ it follows that \begin{displaymath} \frac{1}{n-1}\sum_{k=1}^{n-1}\left(q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right)^2\stackrel{p}{\longrightarrow}c_2 \end{displaymath} when $n\rightarrow\infty$, with $0\le c_2<\infty$. So, it implies that \begin{displaymath} \frac{1}{n-1}\sum_{k=1}^{n-1}\left|q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right|\left|\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right|\stackrel{p}{\longrightarrow}c_2/2 \end{displaymath} when $n\rightarrow\infty$. But, by Schwarz's inequality we can write \begin{eqnarray*} &&\frac{1}{n-1}\sum_{k=1}^{n-1}\left|q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right|\left|\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right|\le\\ &&\le\left[\frac{1}{n-1}\sum_{k=1}^{n-1}\left(q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right)^2\right]^{1/2}\left[\frac{1}{n-1}\sum_{k=1}^{n-1}\left(\sigma^2(X_k)\varepsilon_{k+1}^2-\sigma^2(X_k)\right)^2\right]^{1/2}. \end{eqnarray*} Taking limits in this inequality, we obtain $c_2/2\le \sqrt{c_1c_2}$. Since $c_1$ can be considered an arbitrary constant, because it depends on the fourth moment of $\varepsilon_t$ while $c_2$ does not, the inequality holds if and only if $c_2=0$. This completes the proof.
$\Box$ \begin{lemma}\label{lemma4} Under the same assumptions as in Lemma \ref{lemma1}, $\frac{1}{n-1}\sum_{t=1}^{n-1}\left(q^{(2)}\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2_{(2)}(X_t)\right)^2$ is consistent in the sense that: \begin{itemize} \item if $Q_1(n)\rightarrow 0$ as $n\rightarrow \infty$, then \[ \frac{1}{n-1}\sum_{t=1}^{n-1}\left(q^{(2)}\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2_{(2)}(X_t)\right)^2\stackrel{p}{\longrightarrow} 0 \quad n\rightarrow \infty; \] \item if, in addition, $Q_2(n)\rightarrow 0$ for some $\lambda>0$, then \[ \frac{1}{n-1}\sum_{t=1}^{n-1}\left(q^{(2)}\left(X_t;\widehat{\mathbf{\eta}}\right)-\sigma^2_{(2)}(X_t)\right)^2\stackrel{a.s.}{\longrightarrow} 0 \quad n\rightarrow \infty. \] \end{itemize} \end{lemma} \noindent\textit{\textbf{Proof:}} As in the proof of Lemma \ref{lemma2}, let $\mathcal{G}$ be the class of all functions $\sigma^2(x)$ which satisfy assumptions (a2) and (a3). Now, we have that \begin{displaymath} \frac{1}{n-1}\sum_{k=1}^{n-1}\left(q^{(2)}\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2_{(2)}(X_k)\right)^2\le\left\|\widehat{d}^2_n\right\|^2\frac{1}{n-1}\sum_{k=1}^{n-1}\left(q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right)^2 \end{displaymath} where $\left\|d^2_n\right\|^2=\sup_{f\in\mathcal{G}}\frac{1}{n-1}\sum_{k=1}^{n-1}\left[f''(x_k)\right]^2$ and $\left\|\widehat{d}^2_n\right\|^2=\sup_{f\in\mathcal{G}}\frac{1}{n-1}\sum_{k=1}^{n-1}\left[f''(X_k)\right]^2$, with the stochastic process $\{X_t\}$ defined in (\ref{eqn01}) and $\left\|\cdot\right\|$ the norm of the $L_2$ space with respect to the empirical measure. Based on assumption (a3), every function in $\mathcal{G}$ has a bounded second derivative and so \begin{displaymath} \lim_{n\rightarrow\infty}\frac{1}{n-1}\sum_{k=1}^{n-1}\left[f''(x_k)\right]^2<\infty \end{displaymath} for every sequence $\{x_k\}\subset\mathbb{R}$, $k=1,2,\ldots$.
Based on assumptions (a) and ergodicity of the stochastic process $\{X_t\}$ we have that $\left\|\widehat{d}^2_n\right\|^2\stackrel{a.s.}{\longrightarrow}c<\infty$. Finally, using Lemma \ref{lemma3} it follows that \begin{displaymath} \left\|\widehat{d}^2_n\right\|^2\frac{1}{n-1}\sum_{k=1}^{n-1}\left(q\left(X_k;\widehat{\mathbf{\eta}}\right)-\sigma^2(X_k)\right)^2\stackrel{p(a.s.)}{\longrightarrow}0 \qquad n\rightarrow\infty. \end{displaymath} The above convergence is in probability if $Q_1(n)\rightarrow 0$, when $n\rightarrow\infty$. If, in addition, $Q_2(n)\rightarrow 0$, when $n\rightarrow\infty$, then the convergence is almost sure. This completes the proof. $\Box$\\ \subsection{Consistency for the Functional of the bias} Let $I_x=[x-a/2,x+a/2]$, with $a>0$, for all $x\in\mathbb{R}$. According to assumption (a5) it follows that $\mu_X(I_x)>0$. Moreover, the number of observed values in $I_x$ from (\ref{eqn01}) tends to infinity when $n\rightarrow\infty$ with probability 1. Using model (\ref{eqn01}), we can write the functional of the bias, $\mathbb{B}_{\omega_{I_x}}$, as \begin{equation}\label{fun1app} \mathbb{B}_{\omega_{I_x}}= C_1^2\int_{I_x}\left(\sigma^2_{(2)}(x)\right)^2f_X(x)d\omega_{I_x}(x). \end{equation} Similarly, we can write its estimator $\widehat{\mathbb{B}}_{\omega_{I_x}}$, that is \begin{equation}\label{Rhatapp} \widehat{\mathbb{B}}_{\omega_{I_x}}=\frac{C_1^2\sum_{t=1}^{n-1}\left[q^{(2)}\left(X_t;\widehat{\mathbf{\eta}}\right)\right]^2\mathbb{I}(X_t\in I_x)}{\sum_{t=1}^{n-1}\mathbb{I}(X_t\in I_x)} \end{equation} as reported in section 1.2 of this Supplement.
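As a minimal illustration of (\ref{Rhatapp}), consider the plug-in version in which the unknown $q^{(2)}$ is replaced by the true second derivative; for the quadratic $\sigma^2$ of Theorem \ref{theorem1}, $\sigma^2_{(2)}$ is the constant $2(\alpha_1+\beta)$, so the indicator-weighted average collapses and the estimator is exact. The values of $C_1$, the GARCH parameters, and the stand-in sample below are illustrative only:

```python
import numpy as np

# Plug-in sketch of the bias-functional estimator (Rhatapp), with
# q^{(2)} replaced by the true second derivative. For the quadratic
# sigma^2 of Theorem 1, sigma^2_{(2)} = 2(alpha_1 + beta) is constant,
# so the ratio of indicator-weighted sums collapses exactly.
# C1, (a1, b), and the sample are illustrative assumptions.
rng = np.random.default_rng(1)
C1, a1, b = 0.5, 0.15, 0.8
X = rng.standard_normal(5_000)        # stand-in for the observed path
x, a = 0.3, 0.4
in_Ix = np.abs(X - x) <= a / 2        # indicator of I_x = [x-a/2, x+a/2]

d2 = 2 * (a1 + b)                      # true second derivative, constant
B_hat = C1**2 * np.sum(d2**2 * in_Ix) / np.sum(in_Ix)
assert np.isclose(B_hat, C1**2 * d2**2)
```

In practice, of course, $q^{(2)}$ must be estimated, and Proposition \ref{proposition1} below is what justifies replacing the true derivative by the network-based one.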
\begin{proposition}\label{proposition1} Under the same assumptions as in Lemma \ref{lemma1}, $\widehat{\mathbb{B}}_{\omega_{I_x}}$, defined in (\ref{Rhatapp}), is consistent in the sense that: \begin{itemize} \item If $Q_1(n)\rightarrow 0$ as $n\rightarrow \infty$, then \[ \widehat{\mathbb{B}}_{\omega_{I_x}}\stackrel{p}{\longrightarrow} \mathbb{B}_{\omega_{I_x}} \quad n\rightarrow \infty \] \item if, additionally, $Q_2(n)\rightarrow 0$ for some $\lambda>0$, then \[ \widehat{\mathbb{B}}_{\omega_{I_x}}\stackrel{a.s.}{\longrightarrow} \mathbb{B}_{\omega_{I_x}} \quad n\rightarrow \infty \] \end{itemize} where $\mathbb{B}_{\omega_{I_x}}$ is defined in (\ref{fun1app}). \end{proposition} \noindent\textit{\textbf{Proof:}} For the sake of simplicity we consider only the convergence in probability. The almost sure convergence follows the same technique. The estimator in (\ref{Rhatapp}) can be written as \[ \widehat{\mathbb{B}}_{\omega_{I_x}}=\frac{C_1^2\frac{1}{n-1}\sum_{t=1}^{n-1}\left[q^{(2)}\left(X_t;\widehat{\mathbf{\eta}}\right)\right]^2\mathbb{I}(X_t\in I_x)}{\frac{1}{n-1}\sum_{t=1}^{n-1}\mathbb{I}(X_t\in I_x)}. \] The quantity $C_1^2$ is known. By ergodicity of the stochastic process $\{X_t\}$ it follows that \begin{displaymath} \frac{1}{n-1}\sum_{t=1}^{n-1}\mathbb{I}(X_t\in I_x)\stackrel{a.s.}{\longrightarrow}\mu_X(I_x). \end{displaymath} Using assumptions (a) and again ergodicity of the stochastic process $\{X_t\}$ we have that \begin{displaymath} \frac{1}{n-1}\sum_{t=1}^{n-1}\left(\sigma^2_{(2)}(X_t)\right)^2\mathbb{I}(X_t\in I_x)\stackrel{a.s.}{\longrightarrow}\int_{I_x}\left(\sigma^2_{(2)}(x)\right)^2f_X(x)dx. \end{displaymath} By Lemma \ref{lemma4} $\frac{1}{n-1}\sum_{t=1}^{n-1}\left[q^{(2)}\left(X_t;\widehat{\mathbf{\eta}}\right)\right]^2$ and $\frac{1}{n-1}\sum_{t=1}^{n-1}\left(\sigma^2_{(2)}(X_t)\right)^2$ converge in probability to the same limit. 
But the result is the same if the summands are multiplied by the indicator, that is, for \[\frac{1}{n-1}\sum_{t=1}^{n-1}\left[q^{(2)}\left(X_t;\widehat{\mathbf{\eta}}\right)\right]^2\mathbb{I}(X_t\in I_x).\] Therefore, it follows that \begin{displaymath} \frac{1}{n-1}\sum_{t=1}^{n-1}\left[q^{(2)}\left(X_t;\widehat{\mathbf{\eta}}\right)\right]^2\mathbb{I}(X_t\in I_x)\stackrel{p}{\longrightarrow}\int_{I_x}\left(\sigma^2_{(2)}(x)\right)^2f_X(x)dx. \end{displaymath} Since $d\omega_{I_x}=dx/\mu_X(I_x)$ and $\mu_X(I_x)>0$, we have that $\widehat{\mathbb{B}}_{\omega_{I_x}}\stackrel{p}{\longrightarrow}\mathbb{B}_{\omega_{I_x}}$. The proof is complete. $\Box$\\ Let $m_{4\varepsilon}=E(\varepsilon_t^4)$ and $\widehat{m}_{4\varepsilon}=\frac{\sum_{t=2}^{n} X_{t-1}^4}{\sum_{t=2}^{n} [q(X_{t-1}, \B{\widehat{ \eta}})]^2}$. \begin{corollary}\label{corollary2} Under the same conditions as in Proposition \ref{proposition1}, the estimator $\widehat{m}_{4\varepsilon}$ is consistent in the sense that: \begin{itemize} \item if $Q_1(n)\rightarrow 0$ as $n\rightarrow \infty$, then $\widehat{m}_{4\varepsilon}\stackrel{p}{\longrightarrow} m_{4\varepsilon} \quad n\rightarrow \infty$; \item if, in addition, $Q_2(n)\rightarrow 0$ for some $\lambda>0$, then $\widehat{m}_{4\varepsilon}\stackrel{a.s.}{\longrightarrow} m_{4\varepsilon} \quad n\rightarrow \infty$. \end{itemize} \end{corollary} \noindent\textit{\textbf{Proof:}} As in the previous proofs, we analyze the convergence in probability since the almost sure convergence is straightforward. The estimator $\widehat{m}_{4\varepsilon}$ can be written as \begin{displaymath} \widehat{ m}_{4\varepsilon} = \frac{\frac{1}{n-1}\sum_{t=1}^{n-1} X_t^4}{\frac{1}{n-1}\sum_{t=1}^{n-1} \left[q\left(X_t;\widehat{\mathbf{\eta}}\right)\right]^2}.
\end{displaymath} Based on assumptions (a) and ergodicity of the stochastic process $\{X_t\}$, it follows that \begin{displaymath} \frac{1}{n-1}\sum_{t=1}^{n-1}X_t^4\stackrel{a.s.}{\longrightarrow}E\left[\sigma^4(X_t)\right]m_{4\varepsilon}\qquad n\rightarrow\infty \end{displaymath} and \begin{displaymath} \frac{1}{n-1}\sum_{t=1}^{n-1}\sigma^4(X_t)\stackrel{a.s.}{\longrightarrow}E\left[\sigma^4(X_t)\right]\qquad n\rightarrow\infty \end{displaymath} since $E\left[\sigma^4(X_t)\right]<\infty$ and $m_{4\varepsilon}<\infty$. Lemma \ref{lemma3} implies that $\frac{1}{n-1}\sum_{t=1}^{n-1}\sigma^4(X_t)$ and $\frac{1}{n-1}\sum_{t=1}^{n-1} \left[q\left(X_t;\widehat{\mathbf{\eta}}\right)\right]^2$ have the same limit in probability. Therefore, \[\frac{1}{n-1}\sum_{t=1}^{n-1} \left[q\left(X_t;\widehat{\mathbf{\eta}}\right)\right]^2\stackrel{p}{\longrightarrow}E\left[\sigma^4(X_t)\right], \] when $n\rightarrow\infty$. We can conclude that $\widehat{m}_{4\varepsilon}\stackrel{p}{\longrightarrow}m_{4\varepsilon}$. The proof is complete. $\Box$\\ \subsection{Consistency for the functional of variance} Using model (\ref{eqn01}), we can write the functional of variance, $\mathbb{V}_{\omega_{I_x}}$, as \begin{equation}\label{fun1app1} \mathbb{V}_{\omega_{I_x}}= C_2\int_{I_x}\sigma^4(u)\,d\omega_{I_x}(u) \left(m_{4\varepsilon}-1\right). \end{equation} Similarly, we can write its estimator as \begin{equation}\label{Rhatapp1} \widehat{\mathbb{V}}_{\omega_{I_x}}=\frac{C_2\sum_{i=1}^{n^*}\left[q\left(z_i;\widehat{\mathbf{\eta}}\right)\right]^2/n^*}{\sum_{t=1}^{n}\mathbb{I}(X_t\in I_x)/n} \left(\widehat{m}_{4\varepsilon}-1\right) \end{equation} as reported in section 1.2 of this Supplement. The points $\{z_1, z_2, \ldots, z_{n^*}\}$ are uniformly spaced values from the interval $I_x$, with $n^*=O(n)$.
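The logic of Corollary \ref{corollary2} can be illustrated by a plug-in Monte Carlo sketch in which $q$ is replaced by the true $\sigma^2$ for an ARCH(1)-type volatility with Gaussian errors, so that $m_{4\varepsilon}=3$; the parameter values are illustrative assumptions:

```python
import numpy as np

# Plug-in sketch of m_hat_{4 eps}: ratio of mean X_t^4 to mean
# sigma^4(X_{t-1}), with q replaced by the true sigma^2. The model
# X_t = sqrt(a0 + a1 X_{t-1}^2) eps_t with Gaussian eps_t gives
# m_{4 eps} = E(eps^4) = 3. Parameter values are illustrative.
rng = np.random.default_rng(2)
a0, a1 = 0.1, 0.2
n = 200_000
eps = rng.standard_normal(n)
X = np.empty(n)
X[0] = 0.0
for t in range(1, n):
    X[t] = np.sqrt(a0 + a1 * X[t-1]**2) * eps[t]

sig2 = a0 + a1 * X[:-1]**2            # true sigma^2(X_{t-1})
m4_hat = np.sum(X[1:]**4) / np.sum(sig2**2)
assert abs(m4_hat - 3.0) < 0.15        # close to E(eps^4) = 3
```

Since $X_t^4=\sigma^4(X_{t-1})\varepsilon_t^4$, the ratio is a $\sigma^4$-weighted average of $\varepsilon_t^4$, which explains why consistency of the denominator (Lemma \ref{lemma3}) is all that is needed beyond ergodicity.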
\begin{proposition}\label{proposition2} Under the same conditions as in Proposition \ref{proposition1}, $\widehat{\mathbb{V}}_{\omega_{I_x}}$, defined in (\ref{Rhatapp1}), with $I_x\subset\mathbb{R}$ and $n^*=O(n)$, is consistent in the sense that: \begin{itemize} \item If $Q_1(n)\rightarrow 0$ as $n\rightarrow \infty$, then \[ \widehat{\mathbb{V}}_{\omega_{I_x}}\stackrel{p}{\longrightarrow} \mathbb{V}_{\omega_{I_x}} \quad n\rightarrow \infty \] \item if, in addition, $Q_2(n)\rightarrow 0$ for some $\lambda>0$, then \[ \widehat{\mathbb{V}}_{\omega_{I_x}}\stackrel{a.s.}{\longrightarrow} \mathbb{V}_{\omega_{I_x}} \quad n\rightarrow \infty. \] \end{itemize} \end{proposition} \noindent\textit{\textbf{Proof:}} As in the previous proofs, we analyze the convergence in probability since the almost sure convergence is straightforward.\\ By Corollary \ref{corollary2}, it follows that $\widehat{m}_{4\varepsilon}\stackrel{p}{\longrightarrow}m_{4\varepsilon}$, when $n\rightarrow\infty$. Using the ergodicity of the stochastic process $\{X_t\}$, we have that $\sum_{t=1}^{n}\mathbb{I}(X_t\in I_x)/n\stackrel{a.s.}{\longrightarrow}\mu_X(I_x)$, when $n\rightarrow\infty$.\\ Since $C_2$ is a known quantity, we need only to show that \begin{displaymath} \sum_{i=1}^{n^*}\left[q\left(z_i;\widehat{\mathbf{\eta}}\right)\right]^2/n^*\stackrel{p}{\longrightarrow}\int_{I_x}\sigma^4(u)\,du \end{displaymath} where the points $\{z_1, z_2, \ldots, z_{n^*}\}$ are deterministic and uniformly spaced values from the interval $I_x$. By Lemma \ref{lemma1} and Lemma \ref{lemma3}, we know that \begin{equation}\label{distapp} \inf_{s\in\mathcal{F}_n}\int_{\mathbb{R}}\left(s(x)-\sigma^2(x)\right)^2d\mu_X(x)\rightarrow 0\qquad n\rightarrow\infty \end{equation} \begin{equation}\label{varapp} \sup_{s\in\mathcal{F}_n}\left|\frac{1}{n-1}\sum_{k=1}^{n-1}\left|s(X_k)-X^2_{k+1}\right|^2-E\left\{\left|s(X_t)-X_{t+1}^2\right|^2\right\}\right|\stackrel{p}{\longrightarrow}0\qquad n\rightarrow\infty.
\end{equation} Both (\ref{distapp}) and (\ref{varapp}) refer to the Neural Network estimator with respect to the stochastic process $\{X_t\}$. Here, instead, we consider points which are not drawn from the process. So, in this case, we have to show that (\ref{distapp}) and (\ref{varapp}) still hold. By assumption (a5), we have that \begin{displaymath} \inf_{s\in\mathcal{F}_n}\int_{\mathbb{R}}\left(s(x)-\sigma^2(x)\right)^2d\mu_X(x)\ge \inf_{s\in\mathcal{F}_n}\int_{I_x}\left(s(x)-\sigma^2(x)\right)^2f_X(x)dx\ge \end{displaymath} \begin{displaymath} \ge C_f\inf_{s\in\mathcal{F}_n}\int_{I_x}\left(s(x)-\sigma^2(x)\right)^2dx \end{displaymath} where $C_f:=\min_{x\in I_x}\{f_X(x)\}$, with $0<C_f<\infty$. By (\ref{distapp}), it follows that \begin{displaymath} \inf_{s\in\mathcal{F}_n}\int_{I_x}\left(s(x)-\sigma^2(x)\right)^2dx\rightarrow 0\qquad n\rightarrow\infty. \end{displaymath} Thus, we have proved that (\ref{distapp}) is also true with respect to the points $\{z_1, z_2, \ldots, z_{n^*}\}$ uniformly spaced in the interval $I_x$. Since $n^*=O(n)$, we can consider asymptotically $n$ instead of $n^*$. Put $z_i=\widetilde{X}_i+(z_i-\widetilde{X}_i)$, where $\widetilde{X}_i\in\{X_1,X_2,\ldots,X_n\}$, $i=1,2,\ldots,n^*$. For every $\epsilon>0$ and $z_i$, we choose an $\widetilde{X}_i$ such that $\left|z_i-\widetilde{X}_i\right|<\epsilon$. Now, we have to show that such an $\widetilde{X}_i$ exists with probability 1. By assumption (a5) and based on Proposition (A1.7) in \cite{Ton90}, every non-null compact set is a ``small set'' with respect to the Lebesgue measure for the Markov process in (\ref{eqn01}). But every compact set of radius $\epsilon$ which contains $z_i$ is a non-null set for the Lebesgue measure. Therefore, there exists an $n_0$ such that for each $n>n_0$ we can find at least one $\widetilde{X}_i\in\{X_1,X_2,\ldots,X_n\}$ with probability 1.\\ Define $d_i:=(z_i-\widetilde{X}_i)$.
Then $|d_i|<\infty$ with probability 1 as $n\rightarrow\infty$, for all $i$.\\ Define $Z_i:=(\widetilde{X}_i,d_i)$. The bi-dimensional random variables $Z_i$ retain the exponential $\alpha$-mixing property, since the $z_i$ are deterministic and the random variables $X_i$ are exponentially $\alpha$-mixing. Since $n^*=O(n)$, we can write, asymptotically, \begin{eqnarray*} && \sup_{s\in\mathcal{F}_n}\left|\sum_{k=1}^{n-1}\left|s(X_k)-X^2_{k+1}\right|^2-E\left\{\left|s(X_t)-X_{t+1}^2\right|^2\right\}\right| \\ &\le& \sup_{s\in\mathcal{F}_n}\left|\sum_{k=1}^{n-1}\left|s(Z_k)-\widetilde{X}^2_{k+1}\right|^2-E\left\{\left|s(Z_t)-\widetilde{X}_{t+1}^2\right|^2\right\}\right| \end{eqnarray*} because, by the proof of Theorem (3.2) in \cite{FraDia06}, the upper bounds for the \textit{sup} depend on $d_n$, $\Delta_n$ and the dimension of the input variables. This dimension is 1 in (\ref{varapp}) and 2 if we use the $Z_i$ as input variables, that is, for the uniformly spaced values in $I_x$.\\ Therefore, these upper bounds coincide as $n\rightarrow\infty$, and it follows that \begin{displaymath} \sup_{s\in\mathcal{F}_n}\left|\sum_{k=1}^{n-1}\left|s(Z_k)-\widetilde{X}^2_{k+1}\right|^2-E\left\{\left|s(Z_t)-\widetilde{X}_{t+1}^2\right|^2\right\}\right|\stackrel{p}{\longrightarrow}0\qquad n\rightarrow\infty. \end{displaymath} In this way, we can apply Lemma \ref{lemma3} in the case of the uniformly spaced values in $I_x$. Then $\sum_{i=1}^{n^*}\left[q\left(z_i;\widehat{\mathbf{\eta}}\right)\right]^2/n^*$ and $\sum_{i=1}^{n-1}\left(\sigma^4(z_i)\right)/n$ have the same limit in probability as $n\rightarrow\infty$.\\ Finally, the result follows. $\Box$\\ \vskip 14pt \noindent {\textbf{Acknowledgements}} Financial support by the Italian Ministry of Education, University and Research (MIUR), PRIN Research Project 2010--2011 -- prot.
2010J3LZEN, ``Forecasting economic and financial time series: understanding the complexity and modelling structural change'', is gratefully acknowledged. \end{document}
\begin{document} \title{\ \\ Hyperk\"{a}hler varieties as Brill-Noether loci on curves } \author{Soheyla Feyzbakhsh } \maketitle \begin{abstract} Consider the moduli space $M_C(r; K_C)$ of stable rank $r$ vector bundles on a curve $C$ with canonical determinant, and let $h$ be the maximum number of linearly independent global sections of these bundles. If $C$ embeds in a K3 surface $X$ as a generator of $\Pic(X)$ and the genus of $C$ is sufficiently high, we show that the Brill-Noether locus $\bn_C \subset M_C(r; K_C)$ of bundles with $h$ global sections is a smooth projective Hyperk\"{a}hler manifold of dimension $2g -2r \lfloor \frac{g}{r}\rfloor$, isomorphic to a moduli space of stable vector bundles on $X$. \end{abstract} \section{Introduction}\label{section.intro} In this paper, we show that specific Brill-Noether loci of stable rank $r$ vector bundles on K3 curves are isomorphic to moduli spaces of stable vector bundles on K3 surfaces. This generalises Mukai's program \cite{mukai:non-abelian-brill-noether,arbarello:maukai-program,feyz:mukai-program,feyz:mukai-program-ii} to Brill-Noether loci on K3 curves of dimension higher than $2$. Fix $(r, k) \in \mathbb{Z}_{>0} \oplus \mathbb{Z}_{>0}$ such that $\gcd(r, k) =1$ and $k < r$. Then take $g \gg 0$. Let $(X, H)$ be a polarised K3 surface with $\Pic(X) = \mathbb{Z} \cdot H$ and let $C$ be any curve in the linear system $|H|$ of genus $g$. There is a unique $s \in \mathbb{Z}$ such that \begin{equation}\label{assumption} -2 \leq k^2H^2 -2rs < -2 +2r\, . \end{equation} We further assume $\gcd(s, k) =1$. Consider the Brill-Noether locus $\bn_C(r, k(2g-2), r+s)$ of semistable\footnote{See Definition \ref{def-stability}. } rank $r$ vector bundles on $C$ having degree $kH^2 =k(2g-2)$ and possessing at least $r+s$ linearly independent global sections. Let $M_X(\v)$ denote the moduli space of $H$-Gieseker semistable sheaves on $X$ with Mukai vector $\v = (r, kH, s)$.
It is a (non-empty) smooth quasi-projective variety\footnote{See for instance \cite[Chapter 10, Corollary 2.1 \& Theorem 2.7]{huybretchts:lectures-on-k3-surfaces}.} of dimension $\v^2+2 = k^2H^2-2rs+2$. \begin{Thm}\label{thm} There is an isomorphism \begin{equation*} \Psi \colon M_X(\v) \rightarrow \bn_C(r, k(2g-2), r+s) \end{equation*} which sends a vector bundle\footnote{Any $H$-Gieseker semistable sheaf of Mukai vector $\v$ is locally free, see Lemma \ref{lem-locally-free}.} $E$ on $X$ to its restriction $E|_C$. \end{Thm} The theorem above says that for any vector bundle $F$ on the curve $C$ in the Brill-Noether locus $\bn_C(r, k(2g-2), r+s)$, there exists a unique vector bundle on the surface $X$ whose restriction to $C$ is isomorphic to $F$. In particular, we obtain the following: \begin{Cor} Any vector bundle $F \in \bn_C(r, k(2g-2), r+s)$ is stable (it cannot be strictly semistable) and $\wedge^r F = \omega_C^{\otimes k}$ with $h^0(C, F) = r+s$. \end{Cor} \subsection*{Proof ideas} To prove Theorem \ref{thm}, we go through the following steps: \begin{enumerate*} \item Consider the embedding $\iota \colon C \hookrightarrow X$. The push-forward $\iota_* F$ of any vector bundle $F \in \bn_C(r, kH^2, r+s)$ is semistable in the large volume limit of the space of Bridgeland stability conditions on $X$. We move down and study walls for objects of class $\ch(\iota_*F)$ between the large volume limit and the Brill-Noether wall made by the structure sheaf $\cO_X$. The first wall is made by stable sheaves with Mukai vector $\v= (r, kH, s)$. We show $\iota_* F$ gets destabilised along this wall if and only if $F = E|_C$ for a sheaf $E \in M_X(\v)$. \item The assumption \eqref{assumption} implies that any stable coherent sheaf of Mukai vector $\v$ is locally free. We show that there is no wall for objects with Mukai vector $\v$ between the large volume limit and the Brill-Noether wall; hence $h^0(X, E) = r+s$ for any $E \in M_X(\v)$.
Hence, there is a well-defined map \begin{align*} \Psi \colon M_X(\v) &\rightarrow \bn_C(r, kH^2, r+s)\\ E & \mapsto E|_C. \end{align*} Then a usual wall-crossing argument gives injectivity of $\Psi$ due to the uniqueness of the Harder-Narasimhan filtration. \item To prove that $\Psi$ is surjective, we apply the technique developed in \cite{feyz:mukai-program} which bounds the number of global sections of sheaves on $X$ in terms of the length of a convex polygon (the Harder-Narasimhan polygon in the Brill-Noether region). This implies that if $h^0(X, \iota_*F) \geq r+s$, then $F$ is the restriction of a vector bundle $E \in M_X(\v)$ to the curve $C$; in particular, $\Psi$ is surjective. \item Finally, any vector bundle $E \in M_X(\v)$ is generated by global sections and the kernel \begin{equation*} K_E \coloneqq \ker (\cO_X^{\oplus r+s} \xrightarrow{\text{ev}} E) \end{equation*} is a slope-stable vector bundle. By applying a wall-crossing argument, we show that $\Hom\big(K_E, E(-H)[1]\big) = 0$. Then a usual deformation theory argument implies that the derivative $d \Psi$ is surjective, and so $\Psi$ is an isomorphism. \end{enumerate*} Note that in Theorem \ref{thm}, the assumption $g \gg 0$ is necessary for all the above steps. Remark \ref{Rem-big enough} gives a list of inequalities that $g$ must satisfy for Theorem \ref{thm} to hold. \subsection*{Outlook} In this paper, we only work on curves on K3 surfaces of Picard rank one. More generally, one can consider a polarised K3 surface $(X, H)$ of arbitrary Picard rank and pick a curve $C \in |H|$ of genus $g(C) \gg 0$. For any $r, k \in \mathbb{Z}_{>0}$, we define\footnote{One can replace the structure sheaf $\cO_C$ with any other vector bundle on $C$ and consider the corresponding Brill-Noether locus.} \begin{equation*} h_{r, k} \coloneqq \max\left\{h^0(C, F) \colon \text{stable rank $r$ vector bundles $F$ on $C$ with $\wedge^rF = \omega_C^{\otimes k}$ } \right\}.
\end{equation*} Consider the Brill-Noether locus $\bn_C^{\text{st}}(r,\, \omega_C^{\otimes k} ,\, h_{r, k})$ of stable rank $r$ vector bundles $F$ on $C$ of fixed determinant $ \omega_C^{\otimes k}$ and having $h_{r, k}$ linearly independent global sections. The natural question is then whether this Brill-Noether locus is a Hyperk\"{a}hler variety when $g(C)$ is high enough. I expect to answer this question in the future by applying a more general wall-crossing argument. Note that the assumption $g(C) \gg 0$ is necessary, as there are examples of the above Brill-Noether loci on K3 curves of genus $7$ \cite{mukai:non-abelian-brill-noether} and genus $12$ \cite{feyz-genus-12} which are smooth Fano varieties. A technique similar to the one in this paper can be applied to generalise Theorem \ref{thm} to \begin{enumerate} \item polarised K3 surfaces $(X, H)$ such that $H^2$ divides $H.D$ for all curve classes $D$ on $X$, and \item curves $C \hookrightarrow X$ which are not necessarily in the linear system $|H|$. \end{enumerate} \subsection*{Plan} In Section \ref{section.review}, we review Bridgeland stability conditions on the bounded derived category of coherent sheaves on a K3 surface $X$. Section \ref{section.wall-crossing} analyses walls for the push-forward to the K3 surface $X$ of vector bundles in our Brill-Noether locus and, as a result, proves Theorem \ref{thm}. The computations for the location of walls are all postponed to Section \ref{section.location}. \section{Review: Bridgeland stability conditions on K3 surfaces}\label{section.review} In this section, we review the description of a slice of the \textit{space of stability conditions} on the bounded derived category of coherent sheaves on a K3 surface given in \cite[Section 1-7]{bridgeland:K3-surfaces}. Let $X$ be a projective scheme over $\mathbb{C}$ of dimension $\dim(X) \geq 1$, and let $H$ be an ample line bundle on $X$. The Hilbert polynomial $P(E, m)$ is given by $m \mapsto \chi(\cO_X, E \otimes \cO_X(mH))$.
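As a worked instance of this definition (added here for illustration; it anticipates the K3 case introduced just below), the Riemann-Roch theorem on a K3 surface gives $\chi(L) = 2 + \frac{1}{2}L^2$ for a line bundle $L$, so the Hilbert polynomial of the structure sheaf can be computed in closed form:

```latex
% Riemann-Roch on a K3 surface: \chi(L) = 2 + \tfrac{1}{2}L^2, hence
P(\cO_X, m) \;=\; \chi\big(\cO_X(mH)\big) \;=\; 2 + \frac{m^2 H^2}{2},
% so in the expansion P(E,m) = \sum_i \al_i(E)\, m^i/i!\ one reads off
\al_2(\cO_X) = H^2, \qquad \al_1(\cO_X) = 0, \qquad \al_0(\cO_X) = 2 .
```

In general, $P(E, m)$ is a polynomial in $m$ of degree $\dim(E)$.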
It can be uniquely written in the form \begin{equation*} P(E, m) = \sum_{i=0}^{\dim(E)}\al_i(E)\frac{m^i}{i!} \end{equation*} with integral coefficients $\al_i(E)$. The reduced Hilbert polynomial $p(E, m)$ of a coherent sheaf $E$ of dimension $d$ is defined by \begin{equation*} p(E, m) = \frac{P(E, m)}{\al_d(E)}. \end{equation*} \begin{Def}{\cite[Definition 1.2.4]{huybrechts:geometry-of-moduli-space-of-sheaves}}\label{def-stability} A coherent sheaf $E$ of dimension $d$ is (semi)stable if $E$ is pure and for any proper subsheaf $F \subset E$ one has $p(F, m) \,(\leq)\, p(E, m)$ for $m \gg 0$. Here $(\le)$ denotes $<$ for stability and $\le$ for semistability. \end{Def} From now on, we always assume $(X,H)$ is a smooth polarized K3 surface over $\mathbb{C}$ with Pic$(X) = \Z \cdot H$. Denote the bounded derived category of coherent sheaves on $X$ by $\cD(X)$ and its Grothendieck group by $K(X):=K(\cD(X))$. Given an object $E \in \cD(X)$, we write $\ch(E) = (\ch_0(E), \ch_1(E), \ch_2(E)) \in H^*(X, \Z)$ for its Chern character. The Mukai vector of an object $E \in \cD(X)$ is an element of the numerical Grothendieck group $\mathcal{N}(X) = \mathbb{Z} \oplus \text{NS}(X) \oplus \mathbb{Z} \cong \mathbb{Z}^3$ defined via \begin{equation*} v(E) \coloneqq \big(\ch_0(E),\, \ch_1(E),\, \ch_0(E) +\ch_2(E)\big) = \text{ch}(E)\sqrt{\text{td}(X)}\,. \end{equation*} The Mukai bilinear form \begin{equation*} \left \langle v(E), v(E')\right \rangle = \ch_1(E)\ch_1(E') - \ch_0(E)\left(\ch_0(E') +\ch_2(E')\right) - \ch_0(E')\left(\ch_0(E) +\ch_2(E)\right) \end{equation*} makes $\mathcal{N}(X)$ into a lattice of signature $(2,1)$. The Riemann-Roch theorem implies that this form is the negative of the Euler form, defined as \begin{equation*} \chi(E,E') = \sum_{i} (-1)^{i} \dim_{\mathbb{C}} \text{Hom}_X^{i}(E,E') = -\left \langle v(E), v(E')\right \rangle.
\end{equation*} The slope of a coherent sheaf $E \in \Coh(X)$ is defined by \[ \mu_{H}(E) := \begin{cases} \frac{H.\ch_1(E)}{H^2\ch_0(E)} & \text{if $\ch_0(E) > 0$} \\ +\infty & \text{if $\ch_0(E) = 0$}. \end{cases}\] This leads to the usual notion of $\mu_H$-stability. Associated to it, every sheaf $E$ has a Harder-Narasimhan filtration. Its graded pieces have slopes whose maximum we denote by $\mu_H^+(E)$ and minimum by $\mu_H^-(E)$. For any $b \in \mathbb{R}$, let $\cA(b)\subset\cD(X)$ denote the abelian category of complexes \begin{equation}\label{Abdef} \mathcal{A}(b)\ =\ \big\{E^{-1} \xrightarrow{\,d\,} E^0 \ \colon\ \mu_H^{+}(\ker d) \leq b \,,\ \mu_H^{-}(\cok d) > b \big\}. \end{equation} Then $\cA(b)$ is the heart of a bounded t-structure on $\cD(X)$ by \cite[Lemma 6.1]{bridgeland:K3-surfaces}. For any pair $(b,w) \in \R^2$, we define the group homomorphism $Z_{b,w} \colon K(X) \to \C$ by \begin{equation} \label{eq:Zab} Z_{b,w}(E) := -\ch_2(E) + w \ch_0(E) + i \bigg(\frac{H\ch_1(E)}{H^2} -b \ch_0(E) \bigg). \end{equation} Define the function $\Gamma \colon \mathbb{R} \rightarrow \mathbb{R}$ as \[ \Gamma(b) \coloneqq \begin{cases} b^2 \left(\frac{H^2}{2}+1\right)-1 & \text{if $b\neq 0$} \\ 0 & \text{if $b = 0$}. \end{cases}\] By abuse of notation, we also denote the graph of $\Gamma$ by the \textit{curve $\Gamma$} (see Figure \ref{projetcion-fig}). We define \begin{equation*} U \coloneqq \left\{(b,w) \in \mathbb{R}^2 \colon w > \Gamma(b) \right\}. \end{equation*} In figures, we will plot the $(b,w)$-plane simultaneously with the image of the projection map \begin{eqnarray*} \Pi\colon\ K(X) \setminus \big\{E \colon \ch_0(E) = 0\big\}\! &\longrightarrow& \R^2, \\ E &\ensuremath{\shortmid\joinrel\relbar\joinrel\rightarrow}& \!\!\bigg(\frac{\ch_1(E).H}{\ch_0(E)H^2}\,,\, \frac{\ch_2(E)}{\ch_0(E)}\bigg).
\end{eqnarray*} \begin{figure} \caption{$(b,w)$-plane and the projection $\Pi(E)$} \label{projetcion-fig} \end{figure} Consider the slope function \[ \nu_{b,w}\colon \cA(b) \to \R \cup \{+\infty\}, \quad \nu_{b,w} (E) \coloneqq \begin{cases} -\frac{\text{Re}[Z_{b,w}(E)]}{\text{Im}[Z_{b,w}(E)]} & \text{if $\text{Im}[Z_{b,w}(E)] > 0$} \\ +\infty & \text{if $\text{Im}[Z_{b,w}(E)] = 0$.} \end{cases} \] This defines our notion of stability in $\cA(b)$: \begin{Def} Fix $w> \Gamma(b)$. We say $E\in\cD(X)$ is (semi)stable with respect to the pair $\sigma_{b,w} \coloneqq \left(\cA(b),\, Z_{b,w} \right)$ if and only if \begin{itemize} \item $E[k]\in\cA(b)$ for some $k\in\Z$, and \item $\nu_{b,w}(F)\,(\le)\,\nu_{b,w}\big(E[k]/F\big)$ for all non-trivial subobjects $F\hookrightarrow E[k]$ in $\cA(b)$. \end{itemize} \end{Def} Let $E$ be a semistable coherent sheaf, or more generally a $\sigma_{b,w}$-semistable object for some $(b,w) \in U$. By \cite[Remark 2.3(a)]{feyz-li:clifford-indices}\footnote{Note that the curve $\Gamma$ defined in this paper lies above the curve $\Gamma$ considered in \cite{feyz-li:clifford-indices}.}, the projection $\Pi(E)$ does not lie in $U$. Thus the same argument as in \cite[Lemma 6.2]{bridgeland:K3-surfaces} shows that the pair $\sigma_{b,w} = \left(\cA(b),\, Z_{b,w} \right)$ is a Bridgeland stability condition on $\cD(X)$, see \cite[Definition 1.1 \& Proposition 5.3]{bridgeland:stability-condition-on-triangulated-category}\footnote{Up to the action of $\widetilde{\text{GL}}^{+}(2;\mathbb{R})$, the stability conditions $\sigma_{b,w}$ are the same as the stability conditions defined in \cite[section 6]{bridgeland:K3-surfaces}.}.
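Before proceeding, it may help to evaluate the definitions above on the structure sheaf (a sanity check added for the reader's convenience; it is used implicitly in the wall-crossing arguments later on):

```latex
% ch(\cO_X) = (1, 0, 0), so the projection and the central charge are
\Pi(\cO_X) = \left(\frac{\ch_1(\cO_X).H}{\ch_0(\cO_X)H^2}\,,\ \frac{\ch_2(\cO_X)}{\ch_0(\cO_X)}\right) = (0,0),
\qquad
Z_{b,w}(\cO_X) = w - ib ,
% and for b < 0 we have Im[Z_{b,w}(\cO_X)] = -b > 0, hence
\nu_{b,w}(\cO_X) = -\frac{w}{-b} = \frac{w}{b} .
```

Consequently, any wall on which another object has the same $\nu_{b,w}$-slope as $\cO_X$ extends to a line through the origin $(0,0) = \Pi(\cO_X)$; this is how the Brill-Noether wall $\ell_{\v}$ arises in Section \ref{section.wall-crossing}. As noted above, each pair $(b,w) \in U$ gives a Bridgeland stability condition $\sigma_{b,w}$ on $\cD(X)$.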
This in particular implies that every object $E \in \cA(b)$ admits a \textit{Harder--Narasimhan filtration}, which is a finite sequence of objects in $\cA(b)$: \[0=E_0\subset E_1\subset E_2\subset\dots\subset E_k=E\] whose factors $E_i/E_{i-1}$ are $\sigma_{b,w}$-semistable and $\nu_{b,w}(E_1)>\nu_{b,w}(E_2/E_1)>\dots>\nu_{b,w}(E/E_{k-1})$. We denote $\nu_{b,w}^+(E)\coloneqq \nu_{b,w}(E_1)$ and $\nu_{b,w}^-(E) \coloneqq \nu_{b,w}(E/E_{k-1})$. Summarizing, we have the following: \begin{Thm}[{\cite[Section 1]{bridgeland:K3-surfaces}}] \label{thm:stabconstr} For any pair $(b,w) \in U$, the pair $\sigma_{b,w} = \left(\cA(b), Z_{b,w}\right)$ defines a stability condition on $\cD(X)$. Moreover, the map from $U$ to the space of stability conditions $\mathop{\mathrm{Stab}}\nolimits(X)$ on $\cD(X)$ given by $(b,w) \mapsto \sigma_{b,w}$ is continuous. \end{Thm} The second part of Theorem \ref{thm:stabconstr} implies that the two-dimensional family of stability conditions $\sigma_{b,w}$ satisfies wall-crossing as $b$ and $w$ vary, see for instance \cite[Proposition 4.1]{feyz:thomas-noether-loci} or \cite[Proposition 2.4]{feyz-li:clifford-indices}. \begin{Prop}[\textbf{Wall and chamber structure}]\label{locally finite set of walls} Fix a non-zero object $E \in \mathcal{D}(X)$. There exists a collection of line segments $\cW_E^i$ in $U$ (called ``\emph{walls}'') which are locally finite and satisfy \begin{itemize*} \item[\emph{(a)}] The extension of each line segment $\cW_E^i$ passes through the point $\Pi(E)$ if $\ch_0(E) \neq 0$, or has fixed slope $\frac{\ch_2(E)H^2}{\ch_1(E).H}$ if $\ch_0(E) = 0$. \item[\emph{(b)}] An endpoint of the segment $\cW_E^i$ is either on the curve $\Gamma$ or on the line segment from $(0, 0)$ to $(0,-1)$. \item[\emph{(c)}] The $\sigma_{b,w}$-(semi)stability of $E$ is unchanged as $(b,w)$ varies within any connected component (called a ``\emph{chamber}'') of $U \setminus \bigcup_{i \in I} \cW_E^i $.
\item[\emph{(d)}] For any wall $\cW_E^i$ there is $k_i \in \mathbb{Z}$ and a map $f\colon F\to E[k_i]$ in $\cD(X)$ such that \begin{itemize} \item for any $(b,w) \in \cW_E^i$, the objects $E[k_i],\,F$ lie in the heart $\cA(b)$, \item $E[k_i]$ is $\nu_{b,w}$-semistable with $\nu_{b,w}(E[k_i])=\nu_{b,w}(F)=\,\mathrm{slope}\,(\cW_E^i)$ constant on $\cW_E^i$, and \item $f$ is an injection $F\subset E[k_i]$ in $\cA(b)$ which strictly destabilises $E[k_i]$ for $(b,w)$ in one of the two chambers adjacent to the wall $\cW_E^i$. \end{itemize} \end{itemize*} \begin{figure} \caption{Walls $\cW_E^i$ for $E$.} \label{wall.figure} \end{figure} \end{Prop} \section{Wall-crossing}\label{section.wall-crossing} Let $(X,H)$ be a smooth polarized K3 surface over $\mathbb{C}$ with Pic$(X) = \Z \cdot H$, and let $C \in |H|$ be any curve of genus $g = \frac{1}{2}H^2 +1$. The Brill-Noether loci of vector bundles on $C$ have been studied extensively when the genus $g$ is low, see e.g. \cite{mukai-curves-k3surface-genus-11,mukai:curve-k3surface-genus-less-than-10,mukai:new-development-in-the-theory-of-fano-threefold,mukai:non-abelian-brill-noether,mukai:vector-bundles-and-brill-noether}. In this section, by contrast, we analyse these loci when the genus $g$ is large. As in the Introduction, fix $(r, k) \in \mathbb{Z}_{>0} \oplus \mathbb{Z}_{>0}$ such that $\gcd(r, k) =1$ and $k < r$. Then we take $g \gg 0$ and consider polarised K3 surfaces $(X, H)$ of genus $g = \frac{1}{2}H^2+1$. Consider the Mukai vector $\v \coloneqq (r, kH, s)$ such that $\gcd(s, k) =1$ and the assumption \eqref{assumption} holds: \begin{equation*} -2 \leq \v^2= k^2H^2 -2rs < -2 +2r\,. \end{equation*} This implies \begin{equation}\label{bound for s} -1+\frac{1}{r} +\frac{k^2}{2r}H^2< \ s \ \leq \frac{1}{r}+\frac{k^2}{2r}H^2\, .
\end{equation} We set the Mukai vector $\al \coloneqq (s, -kH, r)$, so \begin{equation*} \Pi(\v) = \left(\frac{k}{r},\, \frac{s-r}{r} \right)\ , \qquad \Pi(\al) = \left(-\frac{k}{s}\ ,\ \frac{r-s}{s} \right), \end{equation*} and \begin{equation*} \Pi(\v(-H)) = \Pi\left(r, (k-r)H, s-kH^2+ \frac{r}{2} H^2 \right) = \left(\frac{k-r}{r} \ ,\ \frac{s-r}{r} -\frac{k}{r}H^2+ \frac{1}{2}H^2 \right). \end{equation*} \begin{Def}\label{def-ellstar} Let $\ell^*$ be the lowest line of slope $H^2(\frac{k}{r} -\frac{1}{2})$ which intersects the curve $\Gamma$ at two points with $b$-values $b_1^*<b_2^*$ satisfying \begin{equation}\label{cond.for ellstar} \max \left\{b_1^*- \frac{k-r}{r} \, , \ \frac{k}{r} - b_2^* \right\} \,\leq\, \frac{1}{r^2(r+1)}\,. \end{equation} \end{Def} \begin{figure} \caption{The lines $\ell^*$, $\ell_{\v}$, $\widetilde{\ell}$, $\ell_{\al}$ and $\ell_{\v(-H)}$} \label{projetcion} \end{figure} \begin{Prop} \label{prop.all bounds} Suppose $H^2= 2g-2 \gg 0$. \begin{enumerate*} \item The origin $(0,0)$ lies below $\ell^*$ and \begin{equation*} \frac{k-r}{r}< b_1^* < b_2^* < \frac{k}{r}. \end{equation*} \item The line $\widetilde{\ell}$ passing through $\Pi(\v)$ and $\Pi(\v(-H))$ is parallel to $\ell^*$ and lies above it. It intersects the curve $\Gamma$ at two points with $b$-values $\widetilde b_1 < \widetilde b_2$ such that \begin{equation*} \frac{k-r}{r} \leq \widetilde b_1 < b_1^* \qquad \text{and}\qquad b_2^* < \widetilde b_2 \leq \frac{k}{r}. \end{equation*} \item The line $\ell_{\v}$ passing through $\Pi(\v)$ and the origin $(0, 0)$ intersects $\Gamma$ at two points (other than the origin) with $b$-values $b_1^{\v} < b_2^{\v}$ satisfying \begin{equation*} b_1^{\v} < -\frac{k}{s} + \frac{1}{s(s-1)} \qquad \text{and} \qquad b_2^* <b_2^{\v} \leq \frac{k}{r}.
\end{equation*} \item The line $\ell_{\al}$ passing through $\Pi(\v(-H))$ and $\Pi(\al)$ intersects $\Gamma$ at two points with $b$-values $b_1^{\al} <b_2^{\al}$ so that \begin{equation*} \frac{k-r}{r} \leq b_1^{\al} < b_1^* \qquad \text{and} \qquad -\frac{k}{s} - \frac{1}{s(s-1)} < b_2^{\al}. \end{equation*} \item The line $\ell_{\v(-H)}$ passing through $\Pi(\v(-H))$ and the origin intersects the curve $\Gamma$ at two points (other than the origin) with $b$-values $b_1^{\v(-H)} < 0 <b_2^{\v(-H)}$ such that \begin{equation*} \frac{k-r}{r} \leq b_1^{\v(-H)} < b_1^*. \end{equation*} \end{enumerate*} \end{Prop} \begin{Lem}\label{lem-walls above ell start in section 3} Let $\ell^*$ intersect the vertical lines $b = \frac{k}{r}$ and $b = \frac{k-r}{r}$ at $p_1 = (\frac{k}{r}, w_1)$ and $p_2= (\frac{k-r}{r}, w_2)$, respectively. The line $\ell_1$ that passes through $p_1$ and the origin intersects the vertical line $b = \frac{k}{r} -\frac{1}{r(r-1)}$ at a point inside $U$. Similarly, the line $\ell_2$ through $p_2$ and the origin intersects $b = \frac{k-r}{r} + \frac{1}{r(r-1)}$ at a point inside $U$. \end{Lem} We postpone the proofs of Proposition \ref{prop.all bounds} and Lemma \ref{lem-walls above ell start in section 3} to Section \ref{section.location}. The next Lemma gives vertical lines on which we can rule out walls of instability for objects of certain classes. \begin{Lem}\label{lem-minimal} Take an object $E \in \cD(X)$ with $\ch_{\leq 1}(E) = (r', k'H)$ such that $\gcd(r', k') =1$. There are unique $m^{\pm}, n^{\pm} \in \mathbb{Z}$ such that $\abs{n^{\pm}} < r'$, $n^{\pm}r' > 0$ and \begin{equation*} m^{-} r' - n^{-} k' = -1 \qquad \text{and} \qquad m^+ r' - n^+ k' = 1.
\end{equation*} We set \begin{equation*} b_E^{\pm} \coloneqq \frac{m^{\pm}}{n^\pm} = \frac{k'}{r'} \pm \frac{1}{n^\pm r'}\,, \end{equation*} then there is no wall for $E$ crossing the vertical lines $b = b_E^{\pm}$, i.e., if $E$ is $\sigma_{b=b_E^{\pm},w}$-semistable for some $w > \Gamma(b_E^{\pm})$, then it is $\sigma_{b=b_E^{\pm},w}$-stable for any $w > \Gamma(b_E^{\pm})$. \end{Lem} \begin{proof} The first part is trivial, and the second part follows from the fact that the imaginary part \begin{equation*} \abs{\text{Im}[Z_{b = b_E^{\pm}, w}(E)]} = \abs{k' - \frac{m^{\pm}}{n^{\pm}} r'} = \frac{1}{\abs{n^{\pm}}} \end{equation*} is minimal, see \cite[Lemma 3.5]{feyz:effective-restriction-theorem} for more details. \end{proof} Let $M_{X}(\v)$ be the moduli space of $H$-Gieseker semistable sheaves on the surface $X$ with Mukai vector $\v = (r,kH,s)$. \begin{Lem}\label{lem-locally-free} Any coherent sheaf $E \in M_{X}(\v)$ is a $\mu_H$-stable locally free sheaf. \end{Lem} \begin{proof} Since $\gcd(r, k) =1$, any $H$-Gieseker-semistable sheaf of class $\v$ is $\mu_H$-stable. Assume $E$ is not locally free; then taking its reflexive hull gives the exact sequence \begin{equation*} E \hookrightarrow E^{\vee\vee} \twoheadrightarrow Q \end{equation*} where $Q$ is a torsion sheaf supported in dimension zero. We know the reflexive sheaf $E^{\vee \vee}$ is also slope-stable, so \begin{equation*} -2 \leq v(E^{\vee \vee})^2 = (r, kH, s +\ch_2(Q))^2 = k^2H^2-2r(s +\ch_2(Q)) \leq k^2H^2-2rs -2r \end{equation*} which is not possible by our assumption \eqref{assumption}. \end{proof} Define the object $K_E[1] \in \mathcal{D}(X)$ as the cone of the evaluation map: \begin{equation}\label{cone} \mathcal{O}_X^{h^0(X,E)} \xrightarrow{\text{ev}} E \rightarrow K_E[1]. \end{equation} \begin{Prop}\label{prop.restriction} Let $E \in M_{X}(\v)$ be a $\mu_H$-stable vector bundle on the surface $X$. Then \begin{itemize} \item[(a)] $\Hom\big(E,E(-H)[1]\big) = 0$.
\item[(b)] For any curve $C \in |H|$, the restriction $E|_C$ is a stable\footnote{Stability on the curve $C$ is defined as in Definition \ref{def-stability}.} vector bundle on $C$ and $h^0(C,E|_C) = r+s$. \item[(c)] The object $K_E$ is a $\mu_H$-stable locally free sheaf on $X$ of Mukai vector $v(K_{E}) = \al = (s, -kH, r) $ and $\Hom\big(K_E, E(-H)[1]\big) = 0$. \end{itemize} \end{Prop} \begin{proof} \textbf{Step 1.} We first do wall-crossing for the bundle $E$. By \cite[Proposition 14.2]{bridgeland:K3-surfaces}, $E$ is $\sigma_{b, w}$-stable for $b< \mu_H(E)$ and $w \gg 0$. As in Lemma \ref{lem-minimal}, consider the vertical line $b= b^-_E$. We know \begin{equation*} \frac{k}{r} -\frac{1}{r} \leq\, b_E^- \,\leq \frac{k}{r}-\frac{1}{r(r-1)}. \end{equation*} By Proposition \ref{prop.all bounds} (c), the line $\ell_{\v}$ intersects $\Gamma$ at a point with positive $b$-value $b_2^{\v}$ satisfying \begin{equation*} \frac{k}{r}-\frac{1}{r^2(r+1)} \overset{\eqref{cond.for ellstar}}{\leq} b_2^* < b_2^{\v}. \end{equation*} Thus the line segment $\ell_\v \cap U$ intersects the vertical line $b= b^-_E$ at a point inside $U$. Hence Lemma \ref{lem-minimal} implies that there is no wall for $E$ above $\ell_{\v}$. We have $\Hom(\cO_X, E[2]) = \Hom(E, \cO_X)^* = 0$, so \begin{equation*} \chi(\cO_X, E) = r+s = h^0(E) -h^1(E). \end{equation*} Thus $h^0(E) \neq 0$ and $\cO_X$ creates a wall for $E$. This wall is indeed $\ell_\v \cap U$ for $b \in (b_1^{\v}, 0)$. Take a point $(b,w)$ on this wall and consider the evaluation map \begin{equation*} \cO_X^{h^0(E)} \xhookrightarrow{\text{ev}} E \twoheadrightarrow K_E[1]. \end{equation*} We know $\cO_X$ and $E$ are $\sigma_{b,w}$-semistable of the same $\nu_{b,w}$-slope and $\cO_X$ is strictly $\sigma_{b,w}$-stable. Thus the map $\ev$ is injective in the abelian subcategory of $\sigma_{b,w}$-semistable objects in $\cA(b)$ of $\nu_{b,w}$-slope equal to $\nu_{b,w}(\cO_X)$. Therefore the cokernel $K_E[1]$ is also $\sigma_{b,w}$-semistable.
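Since the Mukai vector is additive on exact triangles, the class of $K_E[1]$ can be read off from the defining triangle \eqref{cone} (a short computation recorded here for convenience; the equality $h^0(E) = r+s$ is established in the next paragraph):

```latex
v(K_E[1]) \;=\; v(E) - h^0(E)\, v(\cO_X)
          \;=\; (r,\ kH,\ s) - h^0(E)\,(1,\ 0,\ 1),
% which, once h^0(E) = r+s is known, becomes
v(K_E[1]) \;=\; (-s,\ kH,\ -r) \;=\; -\al .
```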
By \cite[Lemma 3.2]{feyz:mukai-program}, we know \begin{equation*} h^0(X, E) \leq \frac{r+s}{2} + \frac{\sqrt{(r-s)^2 +k^2(2H^2 +4)}}{2} \eqqcolon h. \end{equation*} We claim $h < r+s+1$, i.e. \begin{equation*} (r-s)^2 +k^2(2H^2 +4) < (r+s +2)^2. \end{equation*} This is equivalent to $k^2H^2 -2rs< 2 +2(r+s)-2k^2 $. Since $\v^2 =k^2H^2-2rs < 2r-2$ by \eqref{assumption}, we only need to show \begin{equation*} 2r-2 \leq 2+2(r+s)-2k^2, \end{equation*} i.e. $-2+k^2 \leq s$, which holds by \eqref{bound for s} if $H^2 > 2r$. Therefore $h^0(E) = r+s$, $h^1(E) = 0$ and $v(K_E[1]) = -\al = (-s, kH, -r)$. \textbf{Step 2.} The next step is to examine walls for $K_E[1]$. By Proposition \ref{prop.all bounds} (c), our Brill-Noether wall $\ell_\v \cap U$ intersects $\Gamma$ at a point with $b$-value $b_1^\v < -\frac{k}{s} + \frac{1}{s(s-1)}$. We know $\gcd(s, k) = 1$, so if $k >1$, the value $b_{K_E}^{+}$ in Lemma \ref{lem-minimal} satisfies \begin{equation*} -\frac{k}{s} + \frac{1}{s(s-1)} \leq b_{K_E}^{+} \leq -\frac{k}{s} + \frac{1}{s} < 0. \end{equation*} Thus the vertical line $b = b_{K_E}^{+}$ intersects $\ell_\v \cap U$ at a point inside $U$. Hence $\sigma_{b,w}$-semistability of $K_E[1]$ for $(b,w) \in \ell_\v \cap U$ implies that it is $\sigma_{b = b_{K_E}^{+} ,w}$-stable for any $w > \Gamma(b_{K_E}^{+})$ by Lemma \ref{lem-minimal}. If $k=1$, then we show directly that $K_E[1]$ is $\sigma_{b,w}$-stable for $(b,w) \in \ell_\v \cap U$ when $b < 0$. Assume otherwise, and let $K_1$ be a $\sigma_{b,w}$-stable factor of $K_E[1]$. There are $t_1, s_1 \in \mathbb{Q}$ such that $\ch(K_1) = s_1\ch(\cO_X) +t_1\ch(E)$, see for instance \cite[Remark 2.5]{feyz:mukai-program}. Taking $\ch_1$ implies $t_1 \in \mathbb{Z}$. Moving along the wall $\ell_{\v}$ towards the origin implies that \begin{equation*} 0 \leq \lim\limits_{b \rightarrow 0}\text{Im}[Z_{b,w}(K_1)] = t_1 \leq \lim\limits_{b \rightarrow 0}\text{Im}[Z_{b,w}(K_E[1])] =1.
\end{equation*} Thus $t_1 = 0$ or $1$, so the structure sheaf $\mathcal{O}_X$ is a subobject or a quotient of $K_E[1]$. By definition $\Hom(\mathcal{O}_X,K_E[1]) = 0$, and since $\Hom(E , \mathcal{O}_X) = 0$, we have $\Hom(K_E[1],\mathcal{O}_X) =0$, a contradiction. Therefore $K_E[1]$ is $\sigma_{b,w}$-stable along the wall $\ell_{\v} \cap U$ for $b<0$. Openness of stability implies that it is $\sigma_{b=0, w}$-stable for $0 < w \ll 1$. Then Lemma \ref{lem-minimal} implies that $K_E[1]$ is $\sigma_{b=0, w}$-stable for any $w > 0$. Hence in both cases, $K_E[1]$ is $\sigma_{b,w}$-stable for $b > \mu(K_E)$ and $w \gg 0$. \cite[Lemma 6.18]{macri:intro-bridgeland-stability} implies that $\cH^{-1}(K_E[1])$ is a $\mu_H$-semistable torsion-free sheaf and $\cH^0(K_E[1])$ is supported in dimension zero. Since $\gcd(s, k) =1$, $\cH^{-1}(K_E[1])$ is $\mu_H$-stable. If $\cH^{0}(K_E[1])$ is non-zero, then \begin{align}\label{com} -2 \leq v(\cH^{-1}(K_E[1]))^2 =\ & \big(s, -kH, r+ \ch_2(\cH^0(K_E[1]))\big)^2 \nonumber\\ =\ & v(E)^2 -2s\ch_2(\cH^0(K_E[1])) \nonumber\\ \overset{\eqref{assumption}}{<}\ & 2r-2-2s\,. \end{align} This implies $s < r$, thus \eqref{bound for s} gives \begin{equation*} -1+\frac{1}{r} + \frac{k^2}{2r}H^2 < r \end{equation*} which is not possible as $H^2> 2r(r+1)$. Hence $K_E$ is a $\mu_H$-stable sheaf, and so is $\sigma_{b,w}$-stable for $b < -\frac{k}{s}$ and $w \gg 0$ by \cite[Proposition 14.2]{bridgeland:K3-surfaces}. By taking the reflexive hull and doing the same computations as in \eqref{com}, one can easily check that $K_E$ is indeed locally free. \textbf{Step 3.} The final step is to analyse walls for $K_E$ and $E(-H)[1]$ when $b < \mu(K_E)$. Consider the vertical line $b = b^-_{K_E}$ as in Lemma \ref{lem-minimal}.
By Proposition \ref{prop.all bounds} (d), the line $\ell_{\al}$ intersects $\Gamma$ at two points with $b$-values $b_1^{\alpha} < b_2^{\alpha}$ such that \begin{equation}\label{w.2} b_1^{\al} < -\frac{k}{s} - \frac{1}{s} \leq b^-_{K_E} \leq -\frac{k}{s} -\frac{1}{s(s-1)} < b_2^{\al}. \end{equation} The first inequality follows from \begin{equation*} b_1^{\al} < b_1^* \overset{\eqref{cond.for ellstar}}{\leq} \frac{k-r}{r} + \frac{1}{r^2(r+1)} \end{equation*} and the fact that for $g \gg 0$, we have \begin{align}\label{cond--1} \frac{k}{r} -1 + \frac{1}{r^2(r+1)} <\ & \frac{-(k+1)}{\frac{k^2}{2r}H^2-1} \\ \overset{\eqref{bound for s}}{<} & -\frac{k}{s} - \frac{1}{s}.\nonumber \end{align} Thus Lemma \ref{lem-minimal} shows that, by moving down along the vertical line $b= b_{K_E}^-$, the sheaf $K_E$ is $\sigma_{b,w}$-stable for any $(b,w) \in \ell_\al \cap U$. On the other hand, $E$ is a $\mu_H$-stable vector bundle, thus \cite[Lemma 3.3]{feyz:effective-restriction-theorem} implies that $E(-H)[1]$ is $\sigma_{b,w}$-stable for $b > \frac{k-r}{r}$ and $w \gg 0$. If $k \neq r-1$, consider the vertical line $b = b^+_{E(-H)}$; then by Proposition \ref{prop.all bounds} (d), \begin{equation*} b_1^{\al} < b_1^* \overset{\eqref{cond.for ellstar}}{\leq} \frac{k-r}{r} + \frac{1}{r(r-1)} \leq b^+_{E(-H)} \leq \frac{k-r}{r} + \frac{1}{r} \leq -\frac{1}{r}. \end{equation*} If $k=r-1$, then we consider the vertical line $b = -\frac{1}{r+1}$, and the same argument as in Lemma \ref{lem-minimal} shows that there is no wall for $E(-H)[1]$ crossing this vertical line. For $g \gg 0$, we have \begin{align}\label{cond--2} -\frac{1}{r+1} < & \frac{-(k+1)}{\frac{k^2}{2r}H^2-1}\\ \overset{\eqref{bound for s}}{<} & -\frac{k+1}{s} < b_2^{\alpha}\nonumber. \end{align} Hence Proposition \ref{prop.all bounds} implies that the lines $\widetilde{\ell}$, $\ell_{\v(-H)}$ and $\ell_{\al}$ intersect the vertical line $b = b^+_{E(-H)}$ (if $k<r-1$) or $b=-\frac{1}{r+1}$ (if $k =r-1$) at points inside $U$.
Hence by Lemma \ref{lem-minimal}, $E(-H)[1]$ is $\sigma_{b,w}$-stable along these three lines and so we get the following: \noindent (a) $E$ and $E(-H)[1]$ are stable of the same $\nu_{b,w}$-slope for $(b,w) \in \widetilde{\ell}\, \cap\, U$, therefore $\Hom(E, E(-H)[1]) = 0$ as claimed in part (a). \noindent (b) We know $\iota_*E|_C$ lies in the exact sequence \begin{equation*} E \hookrightarrow \iota_* E|_C \twoheadrightarrow E(-H)[1] \end{equation*} in $\cA(b =0)$. Applying the same argument as in \cite[Corollary 4.3]{feyz:effective-restriction-theorem} implies that $E|_C$ is stable. Since $E(-H)[1]$ and $\cO_X$ are $\sigma_{b,w}$-stable of the same $\nu_{b,w}$-slope for $(b,w) \in \ell_{\v(-H)}\, \cap\, U$, we get $\Hom(\cO_X, E(-H)[1]) = 0$. Thus $h^0(C, E|_C) = h^0(X, E) = r+s$. This completes the proof of (b). \noindent (c) We have shown that $K_E$ is a $\mu_H$-stable locally free sheaf. Moreover, $E(-H)[1]$ and $K_E$ are $\sigma_{b,w}$-stable of the same $\nu_{b,w}$-slope for $(b,w) \in \ell_\al\, \cap\, U$, so $\Hom(K_E, E(-H)[1]) = 0$ as claimed in part (c). \end{proof} \subsection*{Wall-crossing for the push-forward of vector bundles on curves} As before, $C \xhookrightarrow{\iota} X$ is a curve in the linear system $|H|$. Let $F$ be a semistable vector bundle on $C$ of rank $r$ and degree $kH^2$. Then $\iota_*F$ is of Mukai vector \begin{equation*} v(\iota_* F) = \left(0, \ rH, \ kH^2 -\frac{r}{2}H^2 \right). \end{equation*} Semistability of $F$ implies that $\iota_* F$ is $H$-Gieseker-semistable, so it is $\sigma_{b,w}$-semistable for $w \gg 0$ by \cite[Proposition 14.2]{bridgeland:K3-surfaces}. The walls of instability for $\iota_*F$ are parallel lines of slope $H^2\left(\frac{k}{r} -\frac{1}{2}\right)$. \begin{Prop}\label{prop-wall} Let $\ell$ be a wall for $\iota_* F$ which lies above or on the line $\ell^*$ (see Definition \ref{def-ellstar}).
Let $F_1 \hookrightarrow \iota_*F \twoheadrightarrow F_2$ be a destabilising sequence along $\ell$ with $\nu_{b,w}(F_1) = \nu_{b,w}(\iota_* F)$ for $(b,w) \in \ell \cap U$ and $\nu_{b,w^-}(F_1) > \nu_{b,w^-}(\iota_*F)$ for $(b,w^-)\in U$ below $\ell \cap U$. Then \begin{enumerate*} \item $F_1$ is a $\mu_H$-stable sheaf with $\ch_{\leq 1}(F_1) = (r, kH)$. \item $\cH^{-1}(F_2)$ is a $\mu_H$-stable sheaf with $\ch_{\leq 1}(\cH^{-1}(F_2)) = (r, (k-r)H)$ and $\cH^0(F_2)$ is either zero or a torsion sheaf supported in dimension zero. \item The sequence $F_1 \hookrightarrow \iota_* F \twoheadrightarrow F_2$ is the HN filtration of $\iota_*F$ with respect to $\sigma_{b=0,w}$ for $0 < w \ll 1$. \item The wall $\ell$ lies below or on $\widetilde{\ell}$. If it coincides with $\widetilde{\ell}$, then $\ch_2(F_1)$ is maximal (equal to $s-r$) and $F= F_1|_C$. \end{enumerate*} \end{Prop} \begin{proof} The first part of the argument is similar to \cite[Proposition 4.2]{feyz:mukai-program}; we include it for completeness. Taking cohomology of the destabilising sequence gives a long exact sequence of coherent sheaves \begin{equation}\label{exact.12} 0 \rightarrow \cH^{-1}(F_1) \rightarrow 0 \rightarrow \cH^{-1}(F_2) \rightarrow \cH^0(F_1) \xrightarrow{d_0} \iota_*F \xrightarrow{d_1} \cH^0(F_2) \rightarrow 0. \end{equation} Thus $\cH^{-1}(F_1) = 0$ and $\cH^0(F_1) \cong F_1$. Let $v(F_1) = \big(r',k'H,s'\big)$. If $r' =0$, then $F_1$ and $\iota_*F$ have the same $\nu_{b,w}$-slope with respect to any $(b,w) \in U$, so $F_1$ will not destabilise $\iota_*F$ below the wall. Hence $r' >0$. The first step is to show that $r'=r$. Let $T(F_1)$ be the maximal torsion subsheaf of $F_1$ and $F_1/T(F_1)$ be its torsion-free part. Let $v\big(T(F_1)\big) = (0,\tilde{r}H,\tilde{s})$.
Right-exactness of the underived pull-back $\iota^*$ applied to the short exact sequence \begin{equation}\label{torsion-exact} T(F_1) \hookrightarrow F_1 \twoheadrightarrow F_1/T(F_1) \end{equation} implies that \begin{equation}\label{bound for rank} \text{rank}(\iota^*F_1) \leq \text{rank}\big(\iota^*T(F_1)\big) + \text{rank}\big(\iota^*\big(F_1/T(F_1)\big)\big). \end{equation} Take a point $(b,w) \in \ell \cap U$. By definition of $\cA(b)$, the sequence \eqref{torsion-exact} is an exact triangle in $\cA(b)$. Consider the composition of injections $s \colon T(F_1) \hookrightarrow F_1 \hookrightarrow \iota_*F$ in $\cA(b)$. Then the cokernel $c(s)$ in $\cA(b)$ is also a rank zero sheaf, because if $\cH^{-1}(c(s)) \neq 0$, then it must be a torsion-free sheaf, which is not possible. Therefore, $T(F_1)$ is a subsheaf of $\iota_*F$. Since $F$ is a vector bundle on an irreducible curve $C$, we get rank$(\iota^*T(F_1)) = \tilde{r}$. Thus \eqref{bound for rank} gives $\text{rank}(\iota^*F_1) \leq \tilde{r} + r'$. Let $v\big(\cH^0(F_2)\big) = \big(0,k''H,s''\big)$. The right-exactness of $\iota^*$ implies \begin{equation}\label{above} r-k'' = \text{rank}\big(\iota^*\ker d_1 \big) = \text{rank}\big(\iota^*\im d_0 \big) \leq \text{rank}(\iota^*F_1) \overset{\eqref{bound for rank}}{\leq} \tilde{r}+r'. \end{equation} Since $\ell$ lies above or on $\ell^*$, it intersects $\Gamma$ at two points with $b$-values $b_1<b_2$ such that $b_1 \leq b_1^*$ and $b_2^* \leq b_2$. Note that the point $(0, 0)$ lies below $\ell \cap U$ by Proposition \ref{prop.all bounds} (a). Therefore \begin{equation}\label{slopes} \mu_H^+(\cH^{-1}(F_2)) \leq b_1^* \qquad \text{and} \qquad b_2^* \leq \mu_H^-(F_1). \end{equation} This implies \begin{align*} \dfrac{r-k''-\tilde{r}}{r'} = \mu_H\big(F_1/T(F_1)\big) - \mu_H\big(\cH^{-1}(F_2)\big) \geq b_2^*-b_1^* \end{align*} and so \begin{equation}\label{below} r' \leq \frac{r-k''-\tilde{r}}{b_2^*-b_1^*} < (r-k''-\tilde{r})+1.
\end{equation} Here the right hand side inequality comes from \begin{equation*} \frac{r-k''-\tilde{r}}{r-k''-\tilde{r}+1}\leq \frac{r}{r+1} < 1-\frac{1}{r^2(r+1)} \overset{\eqref{cond.for ellstar}}{\leq} b_2^*-b_1^* \,. \end{equation*} Comparing \eqref{above} with \eqref{below} implies $r' = r-k'' -\tilde{r}$. Then \eqref{slopes} implies \begin{equation}\label{a.1} b_2^* \leq\ \mu_H(F_1/T(F_1)) = \frac{k'- \tilde{r}}{r-k''-\tilde{r}} = \frac{\frac{\ch_1(\cH^{-1}(F_2)).H}{H^2} + r -k'' -\tilde{r}}{r-k'' -\tilde{r}}\ \leq b_1^* + 1. \end{equation} Moreover, Proposition \ref{prop.all bounds} (a) and \eqref{cond.for ellstar} give \begin{equation}\label{a.2} b_2^* < \frac{k}{r} < b_1^*+1 \qquad \text{and} \qquad b_1^*+1-b_2^* \leq \frac{2}{r^2(r+1)}. \end{equation} Thus comparing \eqref{a.1} and \eqref{a.2} shows \begin{equation*} \abs{\frac{k'-\tilde{r}}{r-k''-\tilde{r}} - \frac{k}{r}} < \frac{1}{r(r+1)} \end{equation*} which is possible only if we have equality of the two fractions $\frac{k'-\tilde{r}}{r-k''-\tilde{r}} = \frac{k}{r}$. Since $\gcd(r, k) =1$, we get $k''= \tilde{r} = 0$, $(r', k') = (r, k)$ and $v(T(F_1)) = (0,0,\tilde{s})$. However, $T(F_1)$ cannot be a non-zero sheaf supported in dimension zero because $T(F_1)$ is a subsheaf of $\iota_*F$ as explained above. Thus $T(F_1) = 0$ and $F_1$ is torsion-free. Then \eqref{slopes} and \eqref{cond.for ellstar} give \begin{equation*} \frac{k}{r} -\frac{1}{r^2(r+1)} \leq b_2^* \leq \mu_H^-(F_1) \leq \mu_H(F_1) =\frac{k}{r} \end{equation*} which implies $F_1$ is $\mu_H$-stable. This completes the proof of (a). Since $k''= 0$, we have $\ch_{\leq 1}(\cH^{-1}(F_2)) = (r, (k-r)H)$, so the first inequality in \eqref{slopes} implies \begin{equation*} \frac{k-r}{r} \leq \mu_H^+(\cH^{-1}(F_2)) \leq b_1^* \leq \frac{k-r}{r} + \frac{1}{r^2(r+1)}. \end{equation*} Therefore $\cH^{-1}(F_2)$ is a $\mu_H$-stable sheaf as $\gcd(r, k-r) =1$. This completes the proof of part (b).
Since $F_1$ is $\mu_H$-stable, we get \begin{equation*} v(F_1)^2 = k^2H^2 -2r(r+\ch_2(F_1)) \geq -2. \end{equation*} Thus $\ch_2(F_1) \leq s-r$. We know $\ell$ passes through $\Pi(F_1)$, so if $\ch_2(F_1) =s-r$, then $\ell$ coincides with $\widetilde{\ell}$, and if $\ch_2(F_1) < s-r$, it lies below $\widetilde{\ell}$. We claim $F_1$ is $\sigma_{b=0, w}$-stable for any $w >0$. Consider the vertical line $b= b_{F_1}^-$ as in Lemma \ref{lem-minimal}; it satisfies \begin{equation}\label{b} 0 \leq \frac{k}{r} -\frac{1}{r} < b_{F_1}^- \leq \frac{k}{r} -\frac{1}{r(r-1)}\,. \end{equation} Let $\ell_{F_1}$ be the line passing through $\Pi(F_1)$ and the origin. The line segment $\ell_{F_1} \cap U$ lies above $\ell_1 \cap U$ considered in Lemma \ref{lem-walls above ell start in section 3}, see Figure \ref{fig.vertical line}. Thus combining \eqref{b} with Lemma \ref{lem-walls above ell start in section 3} implies that $\ell_{F_1} \cap U$ intersects the vertical line $b=b^-_{F_1}$ at a point in the closure $\overline{U}$. We know the wall $\ell$ lies above $\ell_{F_1}$, as the origin $(0, 0)$ lies below $\ell^*$, and hence below $\ell$, by Proposition \ref{prop.all bounds}(a). Thus $\ell$ intersects the vertical line $b= b_{F_1}^-$ at a point $(b_{F_1}^{-}, w)$ inside $U$, so semistability of $F_1$ along the wall and Lemma \ref{lem-minimal} imply that $F_1$ is $\sigma_{b=b_{F_1}^-,\,w}$-stable for any $w > \Gamma(b_{F_1}^-)$. Hence the structure of walls (which all pass through $\Pi(F_1)$) implies that $F_1$ is $\sigma_{b,w}$-stable for any $(b,w)$ above $\ell_{F_1}$ when $b < \frac{k}{r}$, so in particular, it is $\sigma_{b=0, w}$-stable for any $w >0$. \begin{figure} \caption{The vertical lines $b=b_{F_1}^-$ and $b=b_{F_2}^+$} \label{fig.vertical line} \end{figure} Similarly, we know \begin{equation*} \frac{k-r}{r}+\frac{1}{r(r-1)} \leq\ b_{F_2}^+\ \leq \frac{k-r}{r} + \frac{1}{r} \leq 0.
\end{equation*} Thus by the same argument as for $F_1$, Lemma \ref{lem-walls above ell start in section 3} implies that the line $\ell_{F_2}$ passing through $\Pi(F_2)$ and the origin intersects the vertical line $b = b_{F_2}^+$ at a point inside $U$, see Figure \ref{fig.vertical line}. Thus by Lemma \ref{lem-minimal}, $F_2$ is also $\sigma_{b=0,w}$-stable for any $w >0$. We know the $\nu_{b,w}$-slope of $F_1$ is bigger than that of $F_2$ for $(b,w)$ below the wall $\ell$, thus the sequence $F_1 \rightarrow \iota_* F \rightarrow F_2$ is the HN filtration of $\iota_*F$ with respect to $\sigma_{0, w}$ for $0 < w \ll 1$ as claimed in part (c). In case of equality $\ch_2(F_1) = s-r$, Proposition \ref{prop.restriction} implies that $\iota_*F_1|_C$ is a stable sheaf. We know the non-zero morphism $d_0$ in the long exact sequence \eqref{exact.12} factors via a morphism $d_0' \colon \iota_*F_1|_C \rightarrow \iota_*F$. The sheaves $\iota_*F_1|_C$ and $\iota_*F$ are both stable and have the same Mukai vector, hence $d_0'$ is an isomorphism. This completes the proof of part (d). \end{proof} Proposition \ref{prop-wall} only describes walls for the class $v(\iota_*F)$ which lie above $\ell^*$. Instead of classifying walls below $\ell^*$, we jump over the Brill-Noether region (a neighbourhood of $\Pi(\cO_X)$) and find an upper bound for the number of global sections of stable vector bundles on the curve $C$ of rank $r$ and degree $kH^2$. We first recall the definition of the Harder-Narasimhan polygon. \begin{Def} Given a stability condition $\sigma_{b,w}$ and an object $E \in \mathcal{A}(b)$, the Harder-Narasimhan polygon of $E$ with respect to $(b,w)$ is the convex hull of the points $Z_{b,w}(E')$ for all subobjects $E' \subset E$. \end{Def} If the Harder-Narasimhan filtration of $E$ is the sequence \begin{equation*} 0 = {E}_0 \subset {E}_1 \subset \cdots
\subset {E}_{n-1} \subset {E}_n =E, \end{equation*} then the points $\left\{ p_i \coloneqq Z_{b,w}({E}_i) \right\}_{i=0}^{n}$ are the extremal points of the Harder-Narasimhan polygon of $E$ on the left side of the line segment $\overline{oZ_{b,w}(E)}$, see Figure \ref{polygon figure.1}. \begin{figure} \caption{HN polygon} \label{polygon figure.1} \end{figure} We want to consider the limit of the HN polygon as $(b,w)$ approaches the origin. Define the function $\overline{Z} \colon K(X) \rightarrow \mathbb{C}$ as $$\overline{Z}(E) \coloneqq \lim\limits_{w \rightarrow 0} Z_{b =0, w}(E) = -\ch_2(E) \,+\, i\, \frac{\ch_1(E).H}{H^2}.$$ Take an object $E \in \mathcal{A}(b=0)$ which has no subobject $E' \subset E$ in $\mathcal{A}(0)$ with $\ch_1(E') = 0$, i.e. $\nu_{b,w}^+(E) < +\infty$. \cite[Proposition 3.4]{feyz:mukai-program} implies that there exists $w^* > 0$ such that the Harder-Narasimhan filtration of $E$ is a fixed sequence \begin{equation}\label{HN} 0 = E_{0} \subset E_{1} \subset \cdots \subset E_{n-1} \subset E_n=E, \end{equation} with respect to all stability conditions $\sigma_{0,w}$ where $0< w < w^*$. We define $\p_E$ to be the polygon with extremal points $p_i = \overline{Z}(E_i)$ and sides $\overline{p_0p_n}$ and $\overline{p_ip_{i+1}}$ for $i \in [0, n-1]$. Since $\p_E$ is the limit of the HN polygon as $w \rightarrow 0$, it is also a convex polygon. \begin{Lem}\label{lem-polygon} Let $F$ be a semistable vector bundle on $C$ of rank $r$ and degree $kH^2$ as before. The polygon $\p_{\iota_*F}$ is contained in the triangle $\triangle oz_1z_2$ where \begin{equation*} z_1 \coloneqq \overline{Z}(\v) = -s+ r\ +i\, k \qquad \text{and} \qquad z_2 \coloneqq \overline{Z}(\iota_*F) = H^2\left(\frac{r}{2} - k\right) +\ i\, r. \end{equation*} If $\p_{\iota_*F}$ coincides with $\triangle oz_1z_2$, then $F = E|_C$ for a vector bundle $E \in M_X(\v)$.
\end{Lem} \begin{figure} \caption{The polygon $\p_{\iota_*F}$} \label{wall.4} \end{figure} \begin{proof} Consider the HN filtration \eqref{HN} for $E = \iota_* F$ with respect to $\sigma_{0, w}$ where $0 < w \ll 1$. Since $\p_{\iota_* F}$ is a convex polygon, to prove the first claim we only need to show \begin{equation*} -\dfrac{\text{Re}[\overline{Z}( E_1)]}{\text{Im}[\overline{Z}(E_1)]} \leq -\dfrac{\text{Re}[z_1]}{\text{Im}[z_1]} \qquad \text{and} \qquad -\dfrac{\text{Re}[z_2-z_1]}{\text{Im}[z_2-z_1]} \leq -\dfrac{\text{Re}[\overline{Z}(\iota_*F/E_{n-1})]}{\text{Im}[\overline{Z}(\iota_*F/E_{n-1})]}, \end{equation*} i.e. \begin{equation}\label{c} \frac{\ch_2(E_1)H^2}{\ch_1(E_1)H} \leq \frac{\ch_2(\v)H^2}{\ch_1(\v)H} \qquad \text{and} \qquad \frac{\ch_2(\v(-H))H^2}{\ch_1(\v(-H))H} \leq \frac{\ch_2(\iota_*F/E_{n-1})H^2}{\ch_1(\iota_*F/E_{n-1})H}. \end{equation} We first show that $\ch_0(E_1) >0$ and $\ch_0(\iota_*F/E_{n-1}) <0$. Taking cohomology of the exact sequence $E_1 \hookrightarrow \iota_* F \twoheadrightarrow \iota_*F/E_1$ in $\cA(b=0)$ implies that $E_1$ is a sheaf. If $E_1$ is of rank zero, then $\cH^{-1}(\iota_*F/E_1)$ is zero as it must be a torsion-free sheaf, so $E_1$ is a subsheaf of $\iota_*F$ of bigger $\nu_{0, w}$-slope, which is not possible as $F$ is semistable\footnote{Recall that the $\nu_{b,w}$-slope of any rank zero sheaf $E$ is equal to $\frac{\ch_2(E)H^2}{\ch_1(E)H}$, which is independent of $(b,w)$.}, thus $\ch_0(E_1) > 0$. Similarly, the exact triangle $E_{n-1} \hookrightarrow \iota_*F \twoheadrightarrow \iota_*F/E_{n-1}$ implies $\ch_0(E_{n-1}) >0$ and so $\iota_*F/E_{n-1}$ is of negative rank.
Since $E_1, \iota_*F/E_{n-1} \in \cA(b=0)$, we get \begin{equation*} 0 \leq \frac{1}{\ch_0(E_1)}\text{Im}\left[Z_{0, w}(E_1)\right] = \mu(E_1) \qquad \text{and} \qquad \frac{1}{\ch_0(\iota_*F/E_{n-1})}\text{Im}\left[Z_{0, w}(\iota_*F/E_{n-1})\right] \leq 0\,. \end{equation*} Therefore \eqref{c} is equivalent to the claim that $\Pi(E_1)$ lies below or on the line $\ell_{\v}$ and $\Pi(\iota_*F/E_{n-1})$ lies below or on $\ell_{\v(-H)}$, see Figure \ref{projetcion.2}. \begin{figure} \caption{Walls for $\iota_*F$} \label{projetcion.2} \end{figure} If there exists a wall for $\iota_*F$ above or on $\ell^*$, Proposition \ref{prop-wall} gives a complete description of the HN factors of $\iota_*F$ with respect to $\sigma_{0, w}$ for $0 < w \ll 1$ and so the claim follows. Thus we may assume there is no wall above or on $\ell^*$ for $\iota_*F$ and so $\iota_*F$ is $\sigma_{b,w}$-semistable for any $(b,w) \in U$ above $\ell^*$. Consider the line $\ell$ parallel to $\ell^*$ passing through $\Pi(E_1)$. We know $\iota_*F$ and $E_1$ have the same $\nu_{b,w}$-slope for $(b,w) \in \ell \cap U$, and $E_1$ has bigger slope than $\iota_*F$ below the line as the slope function $\nu_{b,w}(E_1)$ is a decreasing function with respect to $w$. Thus $\iota_*F$ is $\sigma_{b,w}$-unstable below $\ell$, hence $\ell$ and so $\Pi(E_1)$ lie below $\ell^*$. On the other hand, $\sigma_{0, w}$-semistability of $E_1$ implies that $\Pi(E_1)$ does not lie above $\Gamma$; hence $\Pi(E_1)$ lies below the line passing through the origin and $(b_2^*, \Gamma(b_2^*))$. Then Proposition \ref{prop.all bounds}(c) implies $\Pi(E_1)$ lies below $\ell_{\v}$ as claimed. A similar argument shows that $\Pi(\iota_*F/E_{n-1})$ lies below the line passing through the origin and $(b_1^*, \Gamma(b_1^*))$ and so it lies below $\ell_{\v(-H)}$ by Proposition \ref{prop.all bounds}(e). This completes the proof of \eqref{c} and so $\p_{\iota_*F}$ is contained in the triangle $\triangle oz_1z_2$.
If the HN polygon coincides with the triangle $\triangle oz_1z_2$, then we have equality in both inequalities of \eqref{c}, i.e. $\Pi(E_1)$ lies on $\ell_\v$ and $\Pi(\iota_*F/E_{n-1})$ lies on $\ell_{\v(-H)}$. Since $\Pi(E_1)$ cannot lie inside $U$, it lies above or on $\ell^*$. Thus there exists a wall for $\iota_*F$ above or on $\ell^*$ and so Proposition \ref{prop-wall} implies $v(E_1) = \v$ and $F = E_1|_C$ as claimed. \end{proof} Let $p_i = \overline{Z}(E_i)$ be the extremal points of $\p_{\iota_*F}$ where the $E_i$ are the terms of the HN filtration of $E = \iota_*F$ as in \eqref{HN}. Then \cite[Proposition 3.4]{feyz:mukai-program} implies that \begin{equation}\label{bound} h^0(X,\iota_*F) \leq \dfrac{\chi(X, \iota_*F)}{2} + \dfrac{1}{2} \sum_{i=1}^{n} \lVert \overline{p_ip_{i-1}} \rVert \end{equation} where $\lVert \cdot \rVert$ is the non-standard norm defined in \cite[Equation (3.2)]{feyz:mukai-program}, i.e. $\lVert x+ iy \rVert = \sqrt{ x^2 + (2H^2+4) y^2}$. \begin{Prop}\label{prop-polygon} Let $F$ be a semistable vector bundle on $C$ of rank $r$ and degree $kH^2$ as before. If the polygon $\p_{\iota_*F}$ lies strictly inside $\triangle oz_1z_2$, then $h^0(C, F) < r+s$. \end{Prop} \begin{proof} Since the extremal points of $\p_{\iota_*F}$ have integral coordinates, $\p_{\iota_*F}$ lies inside the polygon $oz_0'z_1'z_2'z_2$ where $z_0' = (-s+r)\frac{k-1}{k} +i\, (k-1)$, $z_1' = -s+r+1 +i\, k$ and \begin{equation*} z_2' = (-kH^2 + \frac{r}{2}H^2 + s-r)\frac{1}{r-k} -s+r + i (k+1), \end{equation*} see Figure \ref{fig.polygon}.
\begin{figure} \caption{$\p_{\iota_*F}$} \label{fig.polygon} \end{figure} Convexity of $\p_{\iota_*F}$ and of the polygon $oz_0'z_1'z_2'z_2$ gives $$\sum_{i=1}^{n} \lVert \overline{p_ip_{i-1}} \rVert \leq \lVert \overline{oz_0'} \rVert + \lVert \overline{z_0'z_1'} \rVert + \lVert \overline{z_1'z_2'} \rVert + \lVert \overline{z_2'z_2} \rVert \eqqcolon Q_{\text{in}}.$$ Let $Q_{\text{out}} \coloneqq \lVert \overline{oz_1} \rVert + \lVert \overline{z_1z_2} \rVert$, then \begin{equation*} Q_{\text{out}}-Q_{\text{in}} = \lVert \overline{z_0'z_1} \rVert - \lVert \overline{z_0'z'_1} \rVert + \lVert \overline{z_1z_2'} \rVert - \lVert \overline{z'_1z'_2} \rVert. \end{equation*} Define $\epsilon_{\text{out}} \coloneqq \frac{1}{2}\chi(\iota_*F) + \frac{1}{2}Q_{\text{out}} - (r+s)$. Assume for a contradiction that $r+s \leq h^0(F)$; then \eqref{bound} gives \begin{equation*} \dfrac{1}{2}\chi(\iota_*F) + \dfrac{1}{2}Q_{\text{out}} - \epsilon_{\text{out}}\; = \; r+s\; \leq h^0(F) \leq \dfrac{\chi(\iota_*F)}{2} + \dfrac{1}{2} \sum_{i=1}^{n}\lVert \overline{p_ip_{i-1}}\rVert \leq \dfrac{1}{2}\chi(\iota_*F) + \dfrac{1}{2}Q_{\text{in}}. \end{equation*} Thus $Q_{\text{out}}-Q_{\text{in}} \leq 2\,\epsilon_{\text{out}}$, but we will show that this fails for $g \gg 0$. We know \begin{align} 2\epsilon_{\text{out}} = \ & kH^2 - \frac{r}{2}H^2 + Q_{\text{out}} -2(r+s)\nonumber\\ = \ & \sqrt{(s-r)^2 + (2H^2+4)k^2} + \sqrt{\left(-kH^2 + \frac{r}{2}H^2+s-r\right)^2+ (2H^2+4)(r-k)^2 } \nonumber \\ & -(s+r) - (-kH^2 + \frac{r}{2}H^2+s+r)\nonumber\\ =\ & \frac{-4rs + (2H^2+4)k^2}{\sqrt{(s-r)^2 + (2H^2+4)k^2} + (s+r)} \,+ \label{aa.1}\\ & \frac{-4r(-kH^2 + \frac{r}{2}H^2+s)+ (2H^2+4)(r-k)^2}{\sqrt{(-kH^2 + \frac{r}{2}H^2+s-r)^2+ (2H^2+4)(r-k)^2 } + (-kH^2 + \frac{r}{2}H^2+s+r)}\,.\label{aa.2} \end{align} By our assumption \eqref{assumption}, $2k^2H^2 -4rs +4k^2 \geq -4+4k^2 \geq 0$, so \begin{equation*} (s+r)^2 \leq (s+r)^2-4rs+k^2(2H^2+4) = (s-r)^2 + k^2(2H^2+4).
\end{equation*} Thus $\eqref{aa.1} \leq \frac{-4rs + (2H^2+4)k^2}{2(r+s)}$. The numerator of \eqref{aa.2} is $2k^2H^2 -4rs +4(r-k)^2 \geq -4+4(r-k)^2 \geq 0$ by \eqref{assumption}, so \begin{align*} \left(-kH^2 + \frac{r}{2}H^2+s+r\right)^2 \leq\ & \left(-kH^2 + \frac{r}{2}H^2+s+r\right)^2 + 2k^2H^2 -4rs +4(r-k)^2\\ =\ & \left(-kH^2 + \frac{r}{2}H^2+s-r\right)^2 + (2H^2+4)(r-k)^2. \end{align*} This implies $\eqref{aa.2} \leq \frac{2k^2H^2 -4rs+4(r-k)^2}{2(-kH^2 + \frac{r}{2}H^2+s+r)}$, thus \begin{align*} 2\epsilon_{\text{out}} < \frac{k^2(H^2+2) -2rs}{r+s} + \frac{k^2H^2 -2rs+2(r-k)^2}{-kH^2 + \frac{r}{2}H^2+s+r} \eqqcolon M_1\,. \end{align*} When $H^2=2g-2 \rightarrow +\infty$, we know $s \rightarrow \frac{k^2}{2r}H^2 \rightarrow +\infty$ by \eqref{bound for s}, so one gets $M_1 \rightarrow 0$. On the other hand, \begin{align}\label{l-lin} Q_{\text{out}} -Q_{\text{in}} = \ & \sqrt{\left( \frac{1}{k}(s-r) \right)^2+ (2H^2+4) } - \sqrt{\left(\frac{1}{k}(s-r) -1 \right)^2 + (2H^2+4) }\ \ + \\ & \sqrt{\left(\frac{1}{r-k}\left(-kH^2 + \frac{r}{2}H^2+s-r\right) \right)^2+ (2H^2+4) }\ \ - \nonumber\\ & \sqrt{\left(\frac{1}{r-k}\left(-kH^2 + \frac{r}{2}H^2+s-r\right) -1\right)^2+ (2H^2+4) } \nonumber \end{align} By \eqref{bound for s}, when $H^2 \rightarrow +\infty$, we have $\frac{1}{k}(s-r) >1$ and \begin{align*} \frac{1}{r-k}\left(-kH^2 + \frac{r}{2}H^2+s-r\right)\overset{\eqref{bound for s}}{>}\ & \frac{1}{r-k}\left(-kH^2 + \frac{r}{2}H^2+\frac{k^2}{2r}H^2-r -1+\frac{1}{r}\right)\\ = \ & \frac{r-k}{2r}H^2 + \frac{1}{r-k}\left(-r -1+\frac{1}{r}\right)\\ > \ & 1\,. \end{align*} Thus \eqref{l-lin} gives \begin{align*} Q_{\text{out}} -Q_{\text{in}} > \ & \frac{\frac{s-r}{k} -\frac{1}{2}}{\sqrt{ \frac{1}{k^2} (s-r)^2+ (2H^2+4) }} + \frac{\frac{-kH^2+\frac{r}{2}H^2+s-r}{r-k} -\frac{1}{2} }{ \sqrt{\frac{1}{(r-k)^2}(-kH^2 + \frac{r}{2}H^2+s-r)^2+ (2H^2+4) }}.
\end{align*} When $s \rightarrow \frac{k^2}{2r}H^2 \rightarrow +\infty$, the right hand side goes to $2$, thus for $g \gg 0$, we obtain \begin{equation}\label{epsilon} 2\epsilon_{\text{out}} < Q_{\text{out}} -Q_{\text{in}} \end{equation} as claimed. \end{proof} \begin{proof}[Proof of Theorem \ref{thm}] By Proposition \ref{prop.restriction}(b), there is a well-defined map \begin{align*} \Psi \colon M_X(\v)& \rightarrow \bn_C(r, kH^2, r+s)\\ E & \mapsto E|_C. \end{align*} We know the exact sequence $E \rightarrow \iota_*E|_C \rightarrow E(-H)[1]$ in $\cA(b=0)$ is the HN filtration of $\iota_*E|_C$ below the wall $\widetilde{\ell}$, so the uniqueness of HN factors implies $\Psi$ is injective. Combining Lemma \ref{lem-polygon} with Proposition \ref{prop-polygon} shows $\Psi$ is surjective. Thus Proposition \ref{prop.restriction}(b) implies that any vector bundle $F$ in the Brill-Noether locus $\bn_C(r,\,k(2g-2),\, r+s)$ is stable and $h^0(F) = r+s$. The Zariski tangent space to the Brill-Noether locus at the point $[F]$ is the kernel of the map \begin{equation*} k_1 \colon \text{Ext}^1(F,F) \rightarrow \text{Hom} \big( H^0(C,F) , H^1(C,F) \big), \end{equation*} where any $f \colon F \rightarrow F[1] \in \text{Ext}^1(F,F) = \text{Hom}_{C}(F,F[1])$ goes to \begin{equation*} k_1(f) = H^0(f) \colon \text{Hom}_{C}(\mathcal{O}_C,F) \rightarrow \text{Hom}_{C}(\mathcal{O}_C,F[1]), \end{equation*} see \cite[Proposition 4.3]{bhosle:brill-noether-loci-on-nodal-curves} for more details. Note that the proof in \cite{bhosle:brill-noether-loci-on-nodal-curves} is valid for any family of simple sheaves on a variety. The moduli space $M_{X}(\v)$ is a (non-empty) smooth quasi-projective variety, see for instance \cite[Chapter 10, Corollary 2.1 \& Theorem 2.7]{huybretchts:lectures-on-k3-surfaces}.
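Recall also that, as $\gcd(r,k)=1$, the Mukai vector $\v$ is primitive, and since every $E \in M_X(\v)$ is stable and hence simple, Mukai's smoothness theorem gives the dimension count
\begin{equation*}
\dim M_X(\v) \;=\; \v^2+2 \;=\; k^2H^2-2rs+2\,.
\end{equation*}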
Hence to prove Theorem \ref{thm}, we only need to show the derivative of the restriction map \begin{equation*} d\Psi \colon T_{[E]}M_{X}(\v) \rightarrow T_{[E|_C]} \mathcal{BN}, \end{equation*} is surjective. It sends any $f \colon E \rightarrow E[1] \in \text{Hom}_X(E,E[1])$ to its restriction $\iota^*f \colon \iota^*E \rightarrow \iota^*E[1] \in \ker(k_1)$. By Proposition \ref{prop.restriction}(c), we can apply the same argument as in the proof of \cite[Theorem 1.2]{feyz:mukai-program} to show surjectivity of $d\Psi$, hence $\Psi$ is an isomorphism. \end{proof} \section{Location of the first wall}\label{section.location} In this section, we prove Proposition \ref{prop.all bounds} and Lemma \ref{lem-walls above ell start in section 3} for $g \gg 0$. The first step is to control the position of the lines $\ell_{\v}$ and $\ell_{\alpha}$. \begin{Prop}\label{prop-b-i} Fix $0 < \epsilon, \epsilon'< \frac{1}{2r}$. If $g \gg 0$, then \begin{enumerate*} \item The line $\ell_\v$ passing through $\Pi(\v)$ and the origin intersects $\Gamma$ at two points (other than the origin) with $b$-values $b_1^\v< 0 < b_2^\v$ such that \begin{equation*} b_1^\v < -\frac{k}{s} + \frac{1}{s(s-1)} \qquad \text{and} \qquad \frac{k}{r} - \epsilon < b_2^\v \ . \end{equation*} \item The line $\ell_{\alpha}$ passing through $\Pi(\v(-H))$ and $\Pi(\al)$ intersects $\Gamma$ at two points with $b$-values $b_1^\al < b_2^\al$ such that \begin{equation*} b_1^\al < \frac{k-r}{r} + \epsilon' \qquad \text{and} \qquad -\frac{k}{s} - \frac{1}{s(s-1)} < b_2^\al. \end{equation*} \end{enumerate*} \end{Prop} \begin{proof} The line $\ell_\v$, whose equation is $w = \frac{s-r}{k} b$, connects the point $\Pi(\v)$, which lies outside or on $\Gamma$, to the point $(0,0)$, so it intersects $\Gamma$ at two points other than the origin, with $b$-values $b_1^\v < b_2^\v$.
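For the reader's convenience (a routine intermediate step), these $b$-values are the roots of the quadratic obtained by substituting the equation of $\ell_\v$ into that of $\Gamma$:
\begin{equation*}
\left(1+\frac{H^2}{2}\right)b^2 \;-\; \frac{s-r}{k}\,b \;-\; 1 \;=\; 0\,.
\end{equation*}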
The equation of $\Gamma$ for $b \neq 0$ is $\Gamma(b) = (1+ \frac{H^2}{2})b^2 -1$, thus \begin{equation*} b_1^\v, b_2^{\v} = \frac{\frac{s-r}{k} \pm \sqrt{(\frac{s-r}{k})^2 +2H^2+4 }}{H^2+2}. \end{equation*} We claim that if $g \gg 0$, then \begin{equation*} b_1^\v < -\frac{k}{s} + \frac{1}{s(s-1)}, \end{equation*} i.e. \begin{equation*} \frac{s-r}{k} + \frac{\big(k(s-1)-1\big)(H^2+2)}{s(s-1)} < \sqrt{\left(\frac{s-r}{k}\right)^2 +2H^2+4}\ . \end{equation*} Squaring both sides shows that we only need to prove \begin{equation*} \big(k(s-1)-1\big)\left(2\frac{s-r}{k}s(s-1) + \big(k(s-1)-1\big)(H^2+2) \right) < 2s^2(s-1)^2 \end{equation*} which is equivalent to \begin{equation}\label{cond.1} -\frac{2}{k}s(s-r) + (H^2+2)\left(k^2(s-1) + \frac{1}{s-1} -2k \right) < 2rs(s-1). \end{equation} When $H^2 \rightarrow +\infty$, then $s \rightarrow \frac{k^2}{2r}H^2 \rightarrow +\infty$ by \eqref{bound for s}, so the leading term of the left hand side is $(H^2)^2\,\frac{k^3}{2r} \left(k -\frac{1}{r} \right)$ while the leading term of the right hand side is $(H^2)^2\,\frac{k^4}{2r}$, thus the claim follows. The next step is to show that for $g \gg 0$, we have \begin{equation}\label{cond.2} b_2^\v = \frac{\frac{s-r}{k} + \sqrt{(\frac{s-r}{k})^2 +2H^2+4 }}{H^2+2} > \frac{k}{r} - \epsilon\, . \end{equation} This holds if \begin{equation*} \frac{1}{2} \left(\frac{k}{r} - \epsilon \right)^2(H^2+2) < \frac{s-r}{k} \left(\frac{k}{r} - \epsilon \right)+1\,. \end{equation*} The leading term of the right hand side as $s \rightarrow \frac{k^2}{2r}H^2 \rightarrow +\infty$ is $\frac{k}{2r}\left(\frac{k}{r} - \epsilon \right)H^2$, so the above holds when $g \gg 0$. This completes the proof of part (a). The line $\ell_{\alpha}$ in part (b) is of equation $w = \theta b + \beta-1$ for \begin{equation}\label{theta-beta} \theta \coloneqq \frac{s^2-r^2 -skH^2 +\frac{rs}{2} H^2} { s(k-r) +kr}\ , \qquad \beta \coloneqq \frac{r(k-r) +sk-k^2H^2 + \frac{kr}{2}H^2}{s(k-r) +kr}.
\end{equation} When $s \rightarrow \frac{k^2}{2r}H^2 \rightarrow +\infty$, we know \begin{equation}\label{limit} \theta\ \rightarrow\ \frac{k-r}{2r}H^2 \qquad \text{and} \qquad \beta\ \rightarrow\ \frac{k-r}{k}. \end{equation} Hence if $g \gg 0$, we have \begin{equation}\label{cond.3-before} \theta^2 + 2\beta(H^2+2) \geq 0. \end{equation} Therefore $\ell_{\al}$ intersects the parabola $w = b^2(\frac{H^2}{2} +1)-1$ at two points with $b$-values \begin{equation*} b_1^{\al},\, b_2^\al = \frac{\theta \pm \sqrt{\theta^2 + 2\beta(H^2+2)}}{H^2+2}\,. \end{equation*} We claim if $g \gg 0$, then \begin{equation}\label{cond.3} -\frac{k}{s} -\frac{1}{s(s-1)} < b_2^\al = \frac{\theta + \sqrt{\theta^2 + 2\beta(H^2+2)}}{H^2+2} \end{equation} which holds if \begin{equation}\label{cl} \frac{H^2+2}{2} \left(\frac{k}{s} + \frac{1}{s(s-1)}\right)^2 < \beta - \theta\left(\frac{k}{s}+ \frac{1}{s(s-1)}\right). \end{equation} The right hand side is equal to \begin{equation*} \beta - \theta\left(\frac{k}{s}+ \frac{1}{s(s-1)}\right) = \frac{ \frac{H^2}{s-1}\left(k-\frac{r}{2}\right) +r(k-r) -\frac{s}{s-1} + \frac{kr^2}{s} + \frac{r^2}{s(s-1)} }{s(k-r)+kr} \end{equation*} thus its limit when $s \rightarrow \frac{k^2}{2r}H^2 \rightarrow +\infty$ is equal to $\frac{1}{H^2}\left(2\frac{r^2}{k^2} + \frac{2r(r-k)}{k^4}\right)$. Since the limit of the left hand side of \eqref{cl} is equal to $\frac{2r^2}{H^2k^2}$ the claim \eqref{cl} follows. The other intersection point of $\ell_{\al}$ with $\Gamma$ satisfies \begin{equation*} b_1^\al = \frac{\theta - \sqrt{\theta^2 + 2\beta(H^2+2)}}{H^2+2} < \frac{k-r}{r} + \epsilon' \end{equation*} if and only if \begin{equation}\label{cond.4} \theta - (H^2+2) \left(\frac{k-r}{r} + \epsilon' \right) < \sqrt{\theta^2 + 2\beta(H^2+2)}\,. 
\end{equation} Squaring both sides shows that \eqref{cond.4} holds if \begin{equation*} \frac{H^2+2}{2} \left(\frac{r-k}{r} - \epsilon' \right)^2 < \beta -\theta\left(\frac{r-k}{r} - \epsilon' \right) \end{equation*} which is satisfied by \eqref{limit} for $g \gg 0$ and $\epsilon' < \frac{1}{2r}$. \end{proof} If $g \gg 0$, \eqref{limit} implies \begin{equation}\label{cond.5-before} \beta -1 < 0 \end{equation} where $\beta$ is defined as in \eqref{theta-beta}. Thus the origin $(0,0)$, and hence the line segment connecting $\Pi(\v(-H))$ to the origin, lies above $\ell_{\al}$. This segment therefore intersects the curve $\Gamma$ at a point with $b$-value $b_1^{\v(-H)}$ satisfying \begin{equation}\label{b-1-v(-H)} \frac{k-r}{r}\, \leq\, b_1^{\v(-H)} \,\leq\, b_1^{\al} \ . \end{equation} Consider the line $\widetilde{\ell}$ passing through $\Pi(\v)$ and $\Pi(\v(-H))$. It is of equation \begin{equation*} w = H^2\left(\frac{k}{r} - \frac{1}{2}\right) b \,+ \frac{s}{r} -H^2\frac{k}{r} \left(\frac{k}{r} - \frac{1}{2} \right) -1. \end{equation*} When $s \rightarrow \frac{k^2}{2r}H^2 \rightarrow +\infty$, we get \begin{equation}\label{cond.5} \frac{s}{r} -H^2\frac{k}{r} \left(\frac{k}{r} - \frac{1}{2} \right)-1 \ > \ 0. \end{equation} This implies the origin $(0, 0)$ lies below $\widetilde{\ell}$. Therefore the line segment connecting $\Pi(\v)$ to $\Pi(\v(-H))$ (which is part of $\widetilde{\ell}$) lies above $\ell_\v$ and $\ell_{\v(-H)}$, so $\widetilde{\ell}$ intersects $\Gamma$ at two points with $b$-values $\widetilde{b}_1 < \widetilde{b}_2$ satisfying \begin{equation*} \frac{k-r}{r} \,\leq\, \widetilde{b}_1 \,\leq\, b_1^{\v(-H)} \qquad \text{and} \qquad b_2^\v\, \leq\, \widetilde{b}_2 \, \leq \, \frac{k}{r}.
\end{equation*} Thus \eqref{b-1-v(-H)} together with Proposition \ref{prop-b-i} implies \begin{equation}\label{s-1} \frac{k-r}{r} \,\leq\, \widetilde{b}_1 \,<\, \frac{k-r}{r} +\epsilon' \qquad \text{and} \qquad \frac{k}{r} -\epsilon \,<\, \widetilde{b}_2 \,\leq\, \frac{k}{r}\,. \end{equation} Thus if we assume $H^2$ is large enough, the line $\widetilde{\ell}$ intersects $\Gamma$ at two points which are arbitrarily close to $\Pi(\v)$ and $\Pi(\v(-H))$. Consider the line $\ell^*$ as in Definition \ref{def-ellstar}. We set \begin{equation}\label{chosen epsilon} \epsilon' = b_1^* -\frac{k-r}{r} \qquad \text{and} \qquad \epsilon = \frac{k}{r} -b_2^*\,. \end{equation} Then the line $\ell^*$ lies below the parallel line $\widetilde{\ell}$. \begin{Lem} The origin point $(0,0)$ lies below $\ell^*$ and \begin{equation}\label{claim} \frac{k-r}{r}< b_1^* <0< b_2^* < \frac{k}{r}. \end{equation} \end{Lem} \begin{proof} The equation of $\ell^*$ is given by $w = H^2\left(\frac{k}{r} -\frac{1}{2}\right) b+ \alpha -1$ for some $\alpha \in \mathbb{R}$. To prove the first claim we need to show $\alpha >1$. We know the intersection points of $\ell^*$ with $\Gamma$ satisfy \begin{equation*} b_2^*-b_1^* = \frac{2\sqrt{(H^2)^2\left(\frac{k}{r} -\frac{1}{2}\right)^2 +2\alpha(H^2+2) }}{H^2+2} \geq \frac{k}{r} - \frac{1}{r^2(r+1)} -\frac{k-r}{r} -\frac{1}{r^2(r+1)}\,. \end{equation*} This implies that for $g \gg 0$, we have \begin{equation}\label{condition.extra} 2\alpha \geq (H^2+2) \left(\frac{1}{2} -\frac{1}{r^2(r+1)} \right)^2 - \frac{(H^2)^2}{H^2+2} \left(\frac{k}{r} -\frac{1}{2}\right)^2 > 2. \end{equation} The second claim \eqref{claim} follows from \eqref{s-1} and the fact that, by the choice of $\epsilon$ and $\epsilon'$ in \eqref{chosen epsilon}, $\widetilde{\ell}$ lies above $\ell^*$. \end{proof} The above arguments complete the proof of Proposition \ref{prop.all bounds}. The final step is to prove Lemma \ref{lem-walls above ell start in section 3}.
\begin{proof}[Proof of Lemma \ref{lem-walls above ell start in section 3}] The equation of the line $\ell^*$ is given by $w = H^2(\frac{k}{r} -\frac{1}{2}) b + \alpha-1$ where $\alpha = (b_2^*)^2\left(\frac{H^2}{2} +1\right) - H^2(\frac{k}{r} -\frac{1}{2}) b_2^*$, and the equation of the line $\ell_1$ is $w = \frac{r}{k} w_1 b $. We know $b_2^* = \frac{k}{r} -\epsilon$, so \begin{equation*} \alpha = \left(\frac{H^2}{2} +1 \right) \left(\frac{k}{r} -\epsilon\right)^2- H^2\left(\frac{k}{r} -\frac{1}{2}\right)\left(\frac{k}{r} -\epsilon\right). \end{equation*} Thus, as $H^2 \rightarrow +\infty$, \begin{equation*} \alpha \ \rightarrow\ \frac{H^2}{2} \left(\frac{k}{r} -\epsilon \right)\left(-\frac{k}{r} -\epsilon + 1 \right) \end{equation*} and so \begin{equation}\label{limit-2} \frac{r}{k}w_1 \ \rightarrow\ H^2\left(\frac{k}{r} -\frac{1}{2}\right)+ \frac{H^2r}{2k} \left(\frac{k}{r} -\epsilon \right)\left(-\frac{k}{r} -\epsilon + 1 \right). \end{equation} The line $\ell_1$ intersects the vertical line $b = \frac{k}{r} -\frac{1}{r(r-1)}$ at a point with $w = \frac{r}{k} w_1(\frac{k}{r} -\frac{1}{r(r-1)})$. We claim that if $g \gg 0$, this point lies in $U$, i.e. \begin{equation}\label{cond.e1} \frac{r}{k} w_1 > \Gamma\left(\frac{k}{r} -\frac{1}{r(r-1)}\right) \frac{1}{\frac{k}{r} -\frac{1}{r(r-1)}}\,. \end{equation} The limit of the right-hand side as $H^2 \rightarrow +\infty$ is $\frac{H^2}{2}(\frac{k}{r} -\frac{1}{r(r-1)})$, so \eqref{limit-2} shows that \eqref{cond.e1} holds if \begin{equation*} \left(\frac{k}{r} -\epsilon \right)\left(-\frac{k}{r} -\epsilon + 1 \right) > \frac{k}{r} \left(1-\frac{k}{r} -\frac{1}{r(r-1)}\right). \end{equation*} Expanding both sides and cancelling the common term $\frac{k}{r}\left(1-\frac{k}{r}\right)$, this is equivalent to \begin{equation*} \epsilon -\epsilon^2 < \frac{k}{r^2(r-1)} \end{equation*} which is satisfied by our assumption on $\epsilon = \frac{k}{r} - b_2^*$ in Definition \ref{def-ellstar}.
Similarly, we have \begin{equation*} w_2 = H^2\frac{k-r}{r} \left(\frac{k}{r} -\frac{1}{2} \right) + \alpha-1. \end{equation*} We require \begin{equation}\label{cond.e2} \frac{r}{k-r} w_2 < \Gamma\left(\frac{k-r}{r} +\frac{1}{r(r-1)}\right) \frac{1}{\frac{k-r}{r} +\frac{1}{r(r-1)}}\,. \end{equation} The limit of the left-hand side is \begin{align*} \lim\limits_{H^2 \rightarrow +\infty} \frac{r}{k-r}w_2 =\ & H^2\left(\frac{k}{r} -\frac{1}{2}\right)+ \frac{H^2r}{2(k-r)} \left(\frac{k}{r} -\epsilon \right)\left(-\frac{k}{r} -\epsilon + 1 \right) \end{align*} and the limit of the right-hand side is $\frac{H^2}{2}(\frac{k-r}{r} +\frac{1}{r(r-1)})$. Thus \eqref{cond.e2} holds for $g \gg 0$ if \begin{align*} \frac{r}{k-r} \left(\frac{k}{r} -\epsilon \right)\left(-\frac{k}{r} -\epsilon + 1 \right) < \frac{1}{r(r-1)} -\frac{k}{r}. \end{align*} This is again equivalent to \begin{equation*} \epsilon - \epsilon^2 < \frac{r-k}{r^2(r-1)} \end{equation*} which holds by our assumption on $\epsilon = \frac{k}{r} -b_2^*$. \end{proof} \begin{Rem}\label{Rem-big enough} The genus $g = \frac{H^2}{2} +1$ is large enough for Theorem \ref{thm} if $H^2 > 2r(r+1)$, inequalities \eqref{cond--1}, \eqref{cond--2}, \eqref{epsilon}, \eqref{cond.1}, \eqref{cond.3-before}, \eqref{cond.3}, \eqref{cond.5-before}, \eqref{cond.5}, \eqref{condition.extra}, \eqref{cond.e1}, and \eqref{cond.e2} are satisfied, and, for the values of $\epsilon$ and $\epsilon'$ chosen in \eqref{chosen epsilon}, inequalities \eqref{cond.2} and \eqref{cond.4} hold. \end{Rem} \noindent {\tt{[email protected]}} \noindent Department of Mathematics, Imperial College, London SW7 2AZ, United Kingdom \end{document}
\begin{document} \title{A Modal Characterization Theorem for a Probabilistic Fuzzy Description Logic} \begin{abstract} The fuzzy modality \emph{probably} is interpreted over probabilistic type spaces by taking expected truth values. The arising probabilistic fuzzy description logic is invariant under probabilistic bisimilarity; more informatively, it is non-expansive wrt.\ a suitable notion of behavioural distance. In the present paper, we provide a characterization of the expressive power of this logic based on this observation: We prove a probabilistic analogue of the classical van Benthem theorem, which states that modal logic is precisely the bisimulation-invariant fragment of first-order logic. Specifically, we show that every formula in probabilistic fuzzy first-order logic that is non-expansive wrt.\ behavioural distance can be approximated by concepts of bounded rank in probabilistic fuzzy description logic. For a modal logic perspective on the same result, see~\cite{wspk:van-benthem-prob-arxiv}. \end{abstract} \section{Introduction} \noindent In the representation of uncertain knowledge, one will often wish to avoid mention of exact numerical probabilities, e.g.\ when these are not precisely known or not relevant to the representation task at hand -- as a typical example, a medical practitioner will rarely name a numerical threshold for the likelihood of a diagnosis, and instead qualify the diagnosis as, say, `suspected' or `probable'. This has led to efforts aimed at formalizing a modality \emph{probably}, used as an alternative to modalities `with probability at least~$p$' \cite{LarsenSkou91,HeifetzMongin01,LutzSchroder10}. Such a formalization may be approached in a two-valued setting via qualitative axiomatizations of likelihood \cite{Burgess69,HalpernRabin87} or via threshold probabilities \cite{Hamblin59,Herzig03}.
In a fuzzy setting, `probably' leads a natural life as a fuzzy modality~$\probably$, whose truth value just increases as its argument becomes more probable (this modality thus connects the otherwise well-distinguished worlds of fuzziness and probability~\cite{LukasiewiczStraccia08}). The semantics of this operator, first defined by Zadeh~\shortcite{Zadeh68}, interprets $\probably\,\phi$ as the expected truth value of~$\phi$. It appears in various fuzzy propositional~\cite{Hajek07,FlaminioGodo07}, modal~\cite{DesharnaisEA99,BreugelWorrell05}, fixpoint~\cite{ Kozen85,HuthKwiatkowska97}, and description logics~\cite{SchroderPattinson11}. In the present paper, we pin down the exact expressiveness of the basic description logic of \emph{probably}, which we briefly refer to as \emph{probabilistic fuzzy $\mathcal{A}LC$} or $\mathcal{A}LCP$, within a natural ambient probabilistic fuzzy first-order logic $\mathsf{FO}(\probably)$, by providing a \emph{modal characterization theorem}. The prototype of such characterization theorems is \emph{van Benthem's theorem}~\shortcite{BenthemThesis}, which states that (classical) modal logic is precisely the bisimulation-invariant fragment of first-order logic. It has been noted that in systems with numerical values, \emph{behavioural pseudometrics} offer a more fine-grained measure of equivalence than two-valued bisimilarity~\cite{GiacaloneEA90,DesharnaisEA99,BreugelWorrell05,DesharnaisEA08,bbkk:behavioral-metrics-functor-lifting}. When propositional connectives are equipped with Zadeh semantics, $\mathcal{A}LCP$ is \emph{non-expansive} wrt.\ behavioural distance; we continue to refer to this property as \emph{bisimulation invariance}. In previous work~\cite{WildEA18} we have shown that \emph{relational} fuzzy modal logic is the bisimulation-invariant fragment of fuzzy FOL, more precisely that every bisimulation-invariant fuzzy FO formula can be approximated by fuzzy modal formulae \emph{of bounded rank}. 
The bound on the rank is key; without it, the statement turns into a form of the (much simpler) Hennessy-Milner theorem~\cite{HennessyMilner85} (which classically states that non-bisimilar states in finitely branching systems can be distinguished by modal formulae), and indeed does not need to assume FO definability of the given bisimulation-invariant property~\cite{BreugelWorrell05}. Here, we establish a corresponding result for the rather more involved probabilistic setting: We show that \emph{every bisimulation-invariant formula in probabilistic fuzzy FOL can be approximated in bounded rank in probabilistic fuzzy $\mathcal{A}LC$.} This means not only that, up to approximation, $\mathcal{A}LCP$ is as powerful as $\mathsf{FO}(\probably)$ on bisimulation-invariant properties, but also that $\mathcal{A}LCP$ provides effective syntax for bisimulation-invariant $\mathsf{FO}(\probably)$, which $\mathsf{FO}(\probably)$ itself does not~\cite{Otto06}. Proofs are mostly omitted or only sketched; full proofs are in the appendix. \paragraph{Related Work} There is widespread interest in modal characterization theorems in modal logic~\cite{DawarOtto05}, database theory~\cite{FigueiraEA15}, concurrency~\cite{JaninWalukiewicz95,Carreiro15}, and AI~\cite{SturmWolter01,WildSchroder17,WildEA18}. The overall structure of our proof builds partly on that of our modal characterization theorem for relational fuzzy modal logic~\cite{WildEA18} (in turn based ultimately on a strategy due to Otto~\shortcite{o:van-Benthem-Rosen-elementary}) but deals with a much more involved logic, which instead of just the lattice structure of the unit interval involves its full arithmetic structure, via the use of probabilities and expected values, necessitating, e.g., the use of Kantorovich-Rubinstein duality. 
Notable contributions of our proof include new forms of probabilistic bisimulation games up-to-$\epsilon$ (different from games introduced by Desharnais et al.~\shortcite{DesharnaisEA08}, which characterize a different metric) and Ehrenfeucht-Fra\"iss\'e games, related to two-valued games considered in the context of topological FOL~\cite{MakowskyZiegler80}. (For lack of space, we omit discussion of quantitative Hennessy-Milner type results beyond the mentioned result by van Breugel and Worrell~\shortcite{BreugelWorrell05}.) $\mathsf{FO}(\probably)$ may be seen as a fuzzy variant of Halpern's~\shortcite{Halpern90} type-1 (i.e.\ statistical) two-valued probabilistic FOL, and uses a syntax related to coalgebraic predicate logic~\cite{LitakEA18} and, ultimately, Chang's \emph{modal predicate logic}~\cite{Chang73}. Van-Benthem style theorems for two-valued coalgebraic modal logic~\cite{SchroderEA17} instantiate to two-valued probabilistic modal logic, then establishing expressibility of bisimulation-invariant probabilistic FO formulae by probabilistic modal formulae with infinite conjunction but of bounded rank, in an apparent analogy to bounded-rank approximation in the fuzzy setting. \section{Fuzzy Probabilistic Logics}\label{sec:logics} We proceed to introduce the logics featuring in our main result. We fix (w.l.o.g., finite) sets~$\mathsf{N}_{\mathsf{C}}$ of \emph{atomic concepts} and~${\mathsf{N}_{\mathsf{R}}}$ of \emph{roles}; \emph{concepts} $C,D$ of \emph{quantitative probabilistic $\mathcal{A}LC$} ($\mathcal{A}LCP$) are defined by the grammar \begin{equation*} C,D::= q\mid A\mid C\ominus q\mid \neg C\mid C\mathop{\sqcap} D \mid \probably\,r.\, C \end{equation*} where $q\in\mathbb{Q}I$, $A\in\mathsf{N}_{\mathsf{C}}$ and $r\in{\mathsf{N}_{\mathsf{R}}}$. The intended reading of~$\probably$ is `probably'; we give examples below. 
Slightly deviating from standard practice, we define the \emph{rank}~$\operatorname{\mathsf{rk}}(C)$ of a concept~$C$ as the maximal nesting depth of the~$\probably$ \emph{and atomic concepts} in~$C$; e.g.\ $\operatorname{\mathsf{rk}}((\probably\,r.\,\probably\,s.\, A)\mathop{\sqcap}(\probably\, r.\, B))=3$. We denote the set of all concepts of rank at most~$n$ by~$\modf{n}$. Concepts are interpreted over probabilistic structures to which we neutrally refer as \emph{interpretations} or, briefly, \emph{models}. We allow infinite models but restrict to discrete probability distributions over successors at each state. A model \begin{equation*} \mathcal{I} = (\Delta^\CI,(A^\mathcal{I})_{A\in\mathsf{N}_{\mathsf{C}}},(r^\mathcal{I})_{r\in{\mathsf{N}_{\mathsf{R}}}}) \end{equation*} consists of a \emph{domain} $\Delta^\CI$ of \emph{states} or \emph{individuals}, and interpretations $A^\mathcal{I}\colon \Delta^\CI\to[0,1]$, $r^\mathcal{I}\colon \Delta^\CI\times\Delta^\CI\to[0,1]$ of atomic concepts~$A$ and roles~$r$ such that for each $a\in\Delta^\CI$, the map \[ r_a\colon \Delta^\CI\to[0,1], \quad r_a(a') = r^\mathcal{I}(a,a') \] is either zero or a probability mass function on $\Delta^\CI$, i.e. \[\textstyle\sum_{a'\in\Delta^\CI}r_a(a') \in \{0,1\}\] (implying that the \emph{support} $\{a'\in \Delta^\CI\mid r_a(a')>0\}$ of~$r_a$ is at most countable). We call a state~$a$ \emph{$r$-blocking} if $\sum_{a'\in \Delta^\CI}r_a(a') = 0$. At non-blocking states~$a$, $r_a$ thus acts as a probabilistic accessibility relation; we abuse~$r_a$ to denote also the probability measure defined by~$r_a$.
The interpretation $C^\mathcal{I}\colon \Delta^\CI\to[0,1]$ of concepts is defined recursively, extending that of atomic concepts, by \begin{align*} q^\mathcal{I}(a) &= q \\ (C\ominus q)^\mathcal{I}(a) &= \max(C^\mathcal{I}(a)-q,0) \\ (\neg C)^\mathcal{I}(a) &= 1-C^\mathcal{I}(a) \\ (C\mathop{\sqcap} D)^\mathcal{I}(a) & = \min(C^\mathcal{I}(a),D^\mathcal{I}(a))\\ (\probably\,r.\, C)^\mathcal{I}(a) &= \operatorname{E}_{r_a}(C^\mathcal{I}) = \textstyle\sum_{a'\in\Delta^\CI} r_a(a') \cdot C^\mathcal{I}(a') \end{align*} At non-blocking~$a$, $(\probably\,r.\, C)^\mathcal{I}(a)$ is thus the expected truth value of~$C$ for a random $r$-successor of~$a$. We define disjunction~$\mathop{\sqcup}$ as the dual of~$\mathop{\sqcap}$ as usual, so~$\mathop{\sqcup}$ takes maxima. We use Zadeh semantics for the propositional operators, which will later ensure non-expansiveness wrt.\ behavioural distance; see additional comments in Section~\ref{sec:concl}. Up to minor variations, our models correspond to Markov chains or, in an epistemic reading, \emph{type spaces} (e.g.~\cite{HeifetzMongin01}). The logic $\mathcal{A}LCP$ was \mbox{considered (with} \L{}uk\-as\-iew\-icz semantics) by Schröder and Pattinson~\shortcite{SchroderPattinson11}, and resembles van Breugel and Worrell's quantitative probabilistic modal logic~\shortcite{BreugelWorrell05}. E.g., in a reading of~$\Delta^\mathcal{I}$ as consisting of real-world individuals, the concept \begin{equation*} \mathsf{Loud}\mathop{\sqcap}\probably\,\mathsf{hasSource}.\,(\mathsf{Large}\mathop{\sqcap} \probably\,\mathsf{hasMood}.\,\mathsf{Angry}) \end{equation*} describes noises you hear in your tent at night as being loud and probably coming from the large and probably angry animal whose shadow just crossed the tent roof. (In this view,~$\probably$ can be usefully combined with crisp or fuzzy relational modalities, using off-the-shelf compositionality mechanisms \cite{SchroderPattinson11}.)
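On finite models, the recursive clauses above can be mechanized directly. The following Python sketch (the tuple encoding of concepts and all identifiers are our own illustration, not taken from the paper) evaluates concepts under Zadeh semantics, computing $\probably$ as an expected truth value; blocking states receive value~$0$, matching the empty sum in the semantic clause:

```python
# Illustrative sketch (our own encoding): concepts are nested tuples,
# a model is a dict of fuzzy atom valuations and transition distributions.

def evaluate(concept, model, a):
    """Truth value of a concept at state a, in [0, 1]."""
    op = concept[0]
    if op == "const":                       # rational truth constant q
        return concept[1]
    if op == "atom":                        # atomic concept A
        return model["atoms"][concept[1]][a]
    if op == "minus":                       # truncated subtraction C (-) q
        return max(evaluate(concept[1], model, a) - concept[2], 0.0)
    if op == "neg":                         # negation: 1 - C
        return 1.0 - evaluate(concept[1], model, a)
    if op == "and":                         # conjunction as minimum (Zadeh)
        return min(evaluate(concept[1], model, a),
                   evaluate(concept[2], model, a))
    if op == "probably":                    # expected truth value at a
        succ = model["trans"].get(a, {})    # random successor (0 if blocking)
        return sum(p * evaluate(concept[1], model, b)
                   for b, p in succ.items())
    raise ValueError(op)

# Two states: from s we move to s or t with probability 1/2 each.
model = {
    "atoms": {"A": {"s": 0.2, "t": 0.8}},
    "trans": {"s": {"s": 0.5, "t": 0.5}, "t": {"t": 1.0}},
}
print(evaluate(("probably", ("atom", "A")), model, "s"))  # 0.5*0.2 + 0.5*0.8 = 0.5
```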
In an epistemic reading where the elements of~$\Delta^\mathcal{I}$ are possible worlds, and the roles are understood as epistemic agents, the concept \begin{equation*} \neg\mathsf{GoodHand}\mathop{\sqcap}\hspace{1pt} \probably\,\mathsf{player}.\,\probably\,\mathsf{opponent}.\,\mathsf{GoodHand} \end{equation*} denotes the degree to which $\mathsf{player}$ believes she is successfully bluffing by letting $\mathsf{opponent}$ overestimate $\mathsf{player}$'s hand. For readability, we will restrict the technical treatment to a single role~$r$, omitted in the syntax, from now on, noting that covering multiple roles amounts to no more than additional indexing. As the first-order correspondence language of quantitative probabilistic $\mathcal{A}LC$ we introduce \emph{quantitative probabilistic first-order logic} ($\mathsf{FO}(\probably)$), with \emph{formulae} $\phi,\psi,\dots$ defined by the grammar \begin{multline*} \phi,\psi::= q\mid A(x)\mid x=y\mid\phi\ominus q\mid \neg\phi\mid\phi\mathop{\sqcap}\psi \mid \exists x.\,\phi\\ \mid \diabind{x}{y}{\phi}\qquad(q\in\mathbb{Q}I, A\in\mathsf{N}_{\mathsf{C}}) \end{multline*} where~$x$ and~$y$ range over a fixed countably infinite reservoir of \emph{variables}. The reading of $\diabind{x}{y}{\phi}$ is the expected truth value of~$\phi$ at a random successor~$y$ of~$x$. (In particular, when~$\phi$ is crisp, then $\diabind{x}{y}{\phi}$ is just the probability of~$y$ satisfying~$\phi$, similar to the weights $w_y(\phi)$ in Halpern's type-1 probabilistic FOL~\shortcite{Halpern90}.) We have the expected notions of free and bound variables, under the additional proviso that~$y$ (but not~$x$!) is bound in $\diabind{x}{y}{\phi}$. The \emph{(quantifier) rank} $\mathsf{qr}(\phi)$ of a formula~$\phi$ is the maximal nesting depth of the variable-binding operators~$\exists$ and~$\probably$ and propositional atoms~$A$ in~$\phi$; e.g.~$\exists x.\,\diabind{x}{y}{A(y)}$ has rank~$3$. 
Given a model $\mathcal{I} = (\Delta^\CI,(A^\mathcal{I})_{A\in\mathsf{N}_{\mathsf{C}}},r^\mathcal{I})$ and a vector $\bar a=(a_1,\dots,a_n)\in (\Delta^\CI)^n$ of values, the semantics of the logic assigns a truth value $\phi(\bar a)\in[0,1]$ to a formula $\phi(x_1,\dots,x_n)$ with free variables at most $x_1,\dots,x_n$. We define $\phi(\bar a)$ recursively by essentially the same clauses as in $\mathcal{A}LCP$ for the propositional constructs, and \begin{align*} A(x_i)(\bar a) & = A^\mathcal{I}(a_i)\\ (\exists x_0.\,\phi(x_0,x_1,\dots,x_n))(\bar a) & = \textstyle \bigvee_{a_0\in\Delta^\CI}\phi(a_0,a_1,\dots, a_n)\\ (\diabind{x_i}{y}{\phi(y,x_1,\dots,x_n)})(\bar a) & = \intsuc{a_i}{\phi(\,\cdot\,,a_1,\dots,a_n)} \end{align*} where~$\bigvee$ takes suprema. Moreover, equality is two-valued, i.e.\ $(x_i=x_j)(\bar a)$ is~$1$ if $a_i=a_j$, and $0$ otherwise. E.g.\ the formula $\diabind{x}{z}{z=y}$ (`the successor of~$x$ is probably~$y$') denotes the access probability from~$x$ to~$y$, $\diabind{x}{z}{\diabind{z}{w}{w=y}}$ the probability of reaching~$y$ from~$x$ in two independently distributed steps, and $\exists y.\,\diabind{x}{z}{z=y}$ the probability of the most probable successor of~$x$. We have a \emph{standard translation}~$\mathsf{ST}_x$ from $\mathcal{A}LCP$ into $\mathsf{FO}(\probably)$, indexed over a variable~$x$ naming the current state. Following Litak et al.~\shortcite{LitakEA18}, we define $\mathsf{ST}_x$ recursively by \begin{align*} \mathsf{ST}_x(A) & = A(x) \\ \mathsf{ST}_x(\probably C) & = \diabind{x}{y}{\mathsf{ST}_y(C)}, \end{align*} and by commutation with all other constructs. \begin{lem} For every $\mathcal{A}LCP$-concept~$C$ and state~$a$, $C(a)=\mathsf{ST}_x(C)(a)$. \end{lem} \noindent $\mathsf{ST}$ thus identifies $\mathcal{A}LCP$ as a fragment of $\mathsf{FO}(\probably)$. 
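The standard translation can be rendered as a short recursion on the concept syntax. In the Python sketch below (our own ad-hoc encoding, not the paper's; $\diabind{x}{y}{\phi}$ is printed as \texttt{<x>y.}$\phi$), bound variables are kept fresh by indexing with the recursion depth:

```python
# Sketch of the standard translation ST_x (tuple encoding and string
# rendering are ours); depth-indexed names keep the bound variable of
# each "probably" fresh.

def st(concept, x, depth=0):
    op = concept[0]
    if op == "const":
        return str(concept[1])
    if op == "atom":                        # ST_x(A) = A(x)
        return f"{concept[1]}({x})"
    if op == "minus":
        return f"({st(concept[1], x, depth)} - {concept[2]})"
    if op == "neg":
        return f"~{st(concept[1], x, depth)}"
    if op == "and":
        return (f"({st(concept[1], x, depth)}"
                f" /\\ {st(concept[2], x, depth)})")
    if op == "probably":                    # ST_x(<>C) = <x>y. ST_y(C)
        y = f"y{depth}"                     # y is bound, x stays free
        return f"<{x}>{y}.{st(concept[1], y, depth + 1)}"
    raise ValueError(op)

print(st(("probably", ("probably", ("atom", "A"))), "x"))
# → <x>y0.<y0>y1.A(y1)
```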
\section{Behavioural Distances and Games} \label{sec:games} \noindent We next discuss several notions of behavioural distance between states: via fixed point iteration \`a la Wasserstein/Kantorovich, via games and via the logic. We focus mostly on depth-$n$ distances. Only for one version, we define also the unbounded distance, which will feature in the modal characterization result. We show in Section~\ref{sec:modal-approx} that at finite depth, all these distances coincide. It has been shown in previous work~\cite{dgjp:metrics-labelled-markov,BreugelWorrell05} that the unbounded-depth distances defined via Kantorovich fixed point iteration and via the logic, respectively, coincide in very similar settings; such results can be seen as probabilistic variants of the Hennessy-Milner theorem. We recall standard notions on pseudometric spaces: \begin{defn}[Pseudometric spaces, non-expansive maps] \label{def:metric} A \emph{(bounded) pseudometric} on a set $X$ is a function $d\colon X\times X\to [0,1]$ such that for $x,y,z\in X$, the following axioms hold: $d(x,x) = 0$ (\emph{reflexivity}), $d(x,y) = d(y,x)$ (\emph{symmetry}), $d(x,z) \le d(x,y)+d(y,z)$ (\emph{triangle inequality}). If additionally $d(x,y)=0$ implies $x=y$, then $d$ is a \emph{metric}. A \emph{(pseudo)metric space} $(X,d)$ consists of a set~$X$ and a (pseudo)metric~$d$ on $X$. A map $f\colon X\to[0,1]$ is \emph{non-expansive} wrt.\ a pseudometric~$d$ if $|f(x)-f(y)|\le d(x,y)$ for all $x,y\in X$. The space of these non-expansive functions, denoted $\nonexpI{X,d}$, is equipped with the \emph{supremum (pseudo)metric} $d_\infty$, \begin{equation*} d_\infty(f,g) = \supnorm{f-g} = \textstyle\bigvee_{x\in X} |f(x)-g(x)|. \end{equation*} \noindent We denote by $\ball{d}{\epsilon}{x} = \{y\in X\mid d(x,y) \le \epsilon\}$ the \emph{ball} of radius $\epsilon$ around $x$ in $(X,d)$.
The space $(X,d)$ is \emph{totally bounded} if for every $\epsilon > 0$ there exists a finite \emph{$\epsilon$-cover}, i.e.\ finitely many elements $x_1,\dots,x_n\in X$ such that $X = \bigcup_{i=1}^n \ball{d}{\epsilon}{x_i}$. \end{defn} \noindent Recall that a metric space is compact iff it is complete and totally bounded. We next introduce the Wasserstein and Kantorovich distances, which coincide according to Kantorovich-Rubinstein duality. To this end, we first need the notion of a coupling of two probability distributions, i.e.\ a joint distribution that has the two given distributions as marginals. \begin{defn} \label{def:coupling} Let $\pi_1$ and $\pi_2$ be discrete probability measures on $A$ and $B$, respectively. We denote by $\cpl(\pi_1,\pi_2)$ the set of \emph{couplings} of~$\pi_1$ and~$\pi_2$, i.e.\ probability measures $\mu$ on $A\times B$ such that $\pi_1$ and $\pi_2$ are \emph{marginals} of $\mu$: \begin{itemize} \item for all $a\in A$, $\sum_{b\in B}\mu(a,b) = \pi_1(a)$; \item for all $b\in B$, $\sum_{a\in A}\mu(a,b) = \pi_2(b)$. \end{itemize} \end{defn} \begin{defn}[Wasserstein and Kantorovich distances] \label{def:wasserstein-kantorovich} Let $(X,d)$ be a pseudometric space. We generally write \begin{equation*} \mathcal{D} X \end{equation*} for the set of discrete probability distributions on~$X$. We define two pseudometrics on~$\mathcal{D} X$, the \emph{Kantorovich distance}~$d^\uparrow$ and the \emph{Wasserstein distance}~$d^\downarrow$: \begin{gather*} d^\uparrow(\pi_1,\pi_2) = \textstyle\bigvee \{ |\operatorname{E}_{\pi_1} (f) - \operatorname{E}_{\pi_2} (f)| \mid f \in \nonexpI{X,d} \} \\ d^\downarrow(\pi_1,\pi_2) = \textstyle\bigwedge \{ \operatorname{E}_\mu (d) \mid \mu\in\cpl(\pi_1,\pi_2) \} \end{gather*} where $\bigwedge$ takes meets (and~$\bigvee$ suprema).
We extend these distances without further mention to zero functions (like the functions~$r_a$ at blocking states~$a$) by decreeing that the zero function has distance~$1$ from all probability distributions. \end{defn} \noindent The notation $d^\uparrow,d^\downarrow$ is meant as a mnemonic for the fact that these distances are obtained via suprema and via infima, respectively. If $(X,d)$ is separable (contains a countable dense subset), these pseudometrics coincide, a fact known as the \emph{Kantorovich-Rubinstein duality} (e.g.~\cite{dudley2002}): \begin{lem}[Kantorovich-Rubinstein duality] \label{lem:kr-duality} Let $(X,d)$ be a separable pseudometric space. Then for all $\pi_1,\pi_2\in\mathcal{D} X$, \[ d^\uparrow(\pi_1,\pi_2) = d^\downarrow(\pi_1,\pi_2). \] \end{lem} \noindent The above notions of \emph{lifting} a distance on~$X$ to a distance on distributions over~$X$ can be used to give fixed point equations for behavioural distances on models. \begin{defn}[Fixed point iteration \`a la Wasserstein/Kantorovich] \label{def:fixed-point-iteration} Given a model $\mathcal{I}$, we define the chains $(d^K_n)$, $(d^W_n)$ of \emph{depth-$n$ Kantorovich} and \emph{Wasserstein distances}, respectively, via fixed point iteration: \begin{gather*} d^W_0(a,b) = d^K_0(a,b) = 0 \\ d^W_{n+1}(a,b) = \textstyle\bigvee_{A\in\mathsf{N}_{\mathsf{C}}}|A^\mathcal{I}(a)-A^\mathcal{I}(b)| \vee (d^W_n)^\downarrow(\pi_a,\pi_b) \\ d^K_{n+1}(a,b) = \textstyle\bigvee_{A\in\mathsf{N}_{\mathsf{C}}}|A^\mathcal{I}(a)-A^\mathcal{I}(b)| \vee (d^K_n)^\uparrow(\pi_a,\pi_b) \end{gather*} where $\vee$ is binary join. We extend this to states $a,b$ in different models~$\mathcal{I}$, $\mathcal{J}$ by taking the disjoint union of~$\mathcal{I}$, $\mathcal{J}$. \end{defn} \noindent In both cases, we start with the zero pseudometric, and in the next iteration lift the pseudometric~$d_n$ from the previous step via Wasserstein/Kantorovich.
This lifted metric is then applied to the probability distributions $\pi_a,\pi_b$ associated with $a,b$. In addition, we take the maximum with the supremum over the distances for all atomic $A\in\mathsf{N}_{\mathsf{C}}$. We now introduce a key tool for our technical development, a new up-to-$\epsilon$ bisimulation game inspired by the definition of the Wasserstein distance. \begin{defn}[Bisimulation game] \label{def:bisimulation-game} Given models $\mathcal{I},\mathcal{J}$, $a_0\in \Delta^\CI,b_0\in \Delta^\CJ$, and $\epsilon_0\in[0,1]$, the \emph{$\epsilon_0$-bisimulation game} for $a_0$ and $b_0$ is played by \emph{Spoiler} ($S$) and \emph{Duplicator} ($D$), with rules as follows: \begin{myitemize} \item \emph{Configurations}: triples $(a,b,\epsilon)$, with states $a\in \Delta^\CI$, $b\in \Delta^\CJ$ and maximal allowed deviation $\epsilon\in[0,1]$. The \emph{initial configuration} is $(a_0,b_0,\epsilon_0)$. \item \emph{Moves}: In each round, $D$ first picks a probability measure $\mu \in \cpl(\pi_a,\pi_b)$. Then, $D$ distributes the deviation~$\epsilon$ over all pairs $(a',b')$ of successors, i.e.~picks a function $\epsilon'\colon \Delta^\CI\times\Delta^\CJ\to[0,1]$ such that $\operatorname{E}_\mu(\epsilon')\le\epsilon$. Finally, $S$ picks a pair $(a',b')$ with $\mu(a',b') > 0$; the new configuration is then $(a',b',\epsilon'(a',b'))$. \item $D$ \emph{wins} if both states are blocking or $\epsilon = 1$. \item $S$ \emph{wins} if exactly one state is blocking and $\epsilon < 1$. \item \emph{Winning condition}: $|A^\mathcal{I}(a)-A^\mathcal{J}(b)|\hspace{-0.4mm}\le\hspace{-0.4mm}\epsilon$ for all $A\in\mathsf{N}_{\mathsf{C}}$. \end{myitemize} \noindent The game comes in two variants, the \emph{(unbounded) bisimulation game} and the \emph{$n$-round bisimulation game}, where $n\ge 0$. Player $D$ wins if the winning condition holds \emph{before} every round, otherwise $S$ wins.
More precisely, $D$ wins the unbounded game if she can force an infinite play and the $n$-round game once~$n$ rounds have been played (the winning condition is not checked after the last round, so in particular, any $0$-round game is an immediate win for $D$). \end{defn} \begin{rem} The above bisimulation game differs from bisimulation games in the literature (e.g.~\cite{DesharnaisEA08}) in a number of salient features. A particularly striking aspect is that~$D$'s moves are not similar to those of~$S$, and moreover~$D$ in fact moves before~$S$. Intuitively,~$D$ is required to commit beforehand to a strategy that she will use to respond to~$S$'s next move. Note also that the precision~$\epsilon$ changes as the game is being played, a complication forced by the arithmetic nature of models. \end{rem} \noindent This leads to notions of game distance: \begin{defn} \label{def:game-distance} The \emph{depth-$n$ game distance}~$d^G_n$ and the (unbounded-depth) \emph{game distance}~$d^G$ are defined as \begin{align*} d^G_n(a,b)& = \textstyle\bigwedge\{ \epsilon\mid D\text{ wins }\mathsf{G}_n(a,b,\epsilon)\}\\ d^G(a,b) & = \textstyle\bigwedge\{ \epsilon\mid D\text{ wins }\mathsf{G}(a,b,\epsilon)\}. \end{align*} where $\mathsf{G}(a,b,\epsilon)$ and $\mathsf{G}_n(a,b,\epsilon)$ denote the bisimulation game and the $n$-round bisimulation game on $(a,b,\epsilon)$, respectively. \end{defn} \noindent Finally we define the depth-$n$ logical distance via $\mathcal{A}LCP$, restricting to concepts of rank at most~$n$: \begin{defn} \label{def:logical-distance} The \emph{depth-$n$ logical distance}~$d^L_n(a,b)$ of states $a$, $b$ in models $\mathcal{I}$, $\mathcal{J}$ is defined as \begin{equation*} d^L_n(a,b) = \textstyle\bigvee \{ |C^\mathcal{I}(a)-C^\mathcal{J}(b)| \mid \operatorname{\mathsf{rk}}(C)\le n \}. \end{equation*} \end{defn} \noindent The equivalence of the four bounded-depth behavioural distances introduced above will be shown in Theorem~\ref{thm:modal-approx}.
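For intuition, the fixed point iteration of Definition~\ref{def:fixed-point-iteration} can be run by hand on a two-state model: there, the Wasserstein lifting has a closed form, since the optimal coupling moves exactly $|\pi_a(a)-\pi_b(a)|$ units of mass between the two states. The following Python sketch (a toy instance of our own, with a single atomic concept) iterates this closed form:

```python
def depth_n_distance(atom_a, atom_b, pa, pb, n):
    """d^W_n(a, b) in a two-state model {a, b} with one atomic concept.

    atom_a, atom_b: truth values of the atom at a and b;
    pa, pb: the self-transition probabilities pi_a(a) and pi_b(a).
    On two states the optimal coupling moves |pa - pb| mass across,
    so the lifted distance is |pa - pb| * d_n(a, b).
    """
    d = 0.0                                            # d^W_0 = 0
    for _ in range(n):
        d = max(abs(atom_a - atom_b), abs(pa - pb) * d)
    return d

# The atomic gap 0.5 dominates from depth 1 on, since the lifted part
# 0.5 * d can never exceed it: the chain is 0, 0.5, 0.5, ...
print([depth_n_distance(0.75, 0.25, 0.25, 0.75, n) for n in range(3)])
# → [0.0, 0.5, 0.5]
```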
\noindent Behavioural distance forms the yardstick for our notion of bisimulation invariance; for definiteness: \begin{defn} A quantitative, i.e.\ $[0,1]$-valued, property~$Q$ of states, or a formula or concept defining such a property, is \emph{bisimulation-invariant} if~$Q$ is non-expansive wrt.\ game distance, i.e.\ for states $a,b$ in models $\mathcal{I},\mathcal{J}$, respectively, \begin{equation*} |Q(a)-Q(b)|\le d^G(a,b). \end{equation*} Similarly,~$Q$ is \emph{depth-$n$ bisimulation invariant}, or \emph{finite-depth bisimulation invariant} if mention of~$n$ is omitted, if~$Q$ is non-expansive wrt.\ $d^G_n$ in the same sense. \end{defn} \noindent It is easy to see that \emph{$\mathcal{A}LCP$-concepts are bisimulation-invariant}. More precisely, $\mathcal{A}LCP$-concepts of rank at most~$n$ are depth-$n$ bisimulation invariant (a stronger invariance since clearly $d^G_n\le d^G$), as shown by routine induction. In contrast, many other properties of states are expressible in $\mathsf{FO}(\probably)$ but not in $\mathcal{A}LCP$, as they fail to be bisimulation-invariant. Examples include $\diabind{x}{y}{x=y}$ (probability of a self-transition) and $\exists z.\, \diabind{x}{y}{y=z}$ (highest transition probability to a successor). We are now ready to formally state our main theorem (a proof will be given in Section~\ref{sec:main}): \begin{thm}[Modal characterization]\label{thm:van-benthem} Every bisimulation-invariant $\mathsf{FO}(\probably)$-formula of rank at most $n$ can be approximated (uniformly across all models) by $\mathcal{A}LCP$-concepts of rank at most $3^n$. \end{thm} \noindent (The exponential bound on the rank features also in the full statement of van Benthem's theorem.) 
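To see the failure of bisimulation invariance concretely, compare a state with a guaranteed self-loop to a pair of states that alternate deterministically (a toy instance of our own): the game identifies the two start states, so all $\mathcal{A}LCP$-concepts agree on them, yet $\diabind{x}{y}{x=y}$ separates them:

```python
def self_loop_prob(trans, a):
    """Value of the formula <x>y.(y = x) at state a: the probability
    that a random successor of a is a itself."""
    return trans.get(a, {}).get(a, 0.0)

# No atomic concepts; u always steps to itself, v and w alternate.
# u and v have game distance 0 (Duplicator can match steps forever),
# but the first-order formula tells them apart.
M_loop = {"u": {"u": 1.0}}
M_swap = {"v": {"w": 1.0}, "w": {"v": 1.0}}

print(self_loop_prob(M_loop, "u"), self_loop_prob(M_swap, "v"))  # → 1.0 0.0
```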
\section{Modal Approximation at Finite Depth} \label{sec:modal-approx} We now establish the most important stepping stone on the way to the eventual proof of the modal characterization theorem: We show that every depth-$n$ bisimulation-invariant property of states can be approximated by $\mathcal{A}LCP$-concepts of rank at most~$n$. We prove this simultaneously with coincidence of the various finite-depth behavioural pseudometrics defined in the previous section. We begin with the following observation: \begin{lem}\label{lem:metrics-equal-GW} The game-based pseudometric~$d^G_n$ coincides with the Wasserstein pseudometric~$d^W_n$. \end{lem} \noindent We note next that the modality~$\probably$ is non-expansive: We extend~$\probably$ to act on $[0,1]$-valued functions $f\colon\Delta^\CI\to[0,1]$ by \begin{align*} (\probably f)(a)&=\intsuc{a}{f}. \end{align*} \begin{lem}\label{lem:diamond-nonexp} The map $f \mapsto \probably f$ is non-expansive wrt.\ the supremum metric, that is, $\supnorm{\probably f - \probably g} \le \supnorm{f-g}$ for all $f,g\colon \Delta^\CI\to[0,1]$. \end{lem} \noindent Following our previous work~\cite{WildEA18}, we prove coincidence of the remaining pseudometrics in one big induction, along with total boundedness (needed later to apply a variant of the Arzel\`a-Ascoli theorem and the Kantorovich-Rubinstein duality) and modal approximability of depth-$n$ bisimulation-invariant properties. We phrase the latter as density of the (semantics of) $\mathcal{A}LCP$-concepts of rank at most~$n$ in the non-expansive function space (Definition~\ref{def:metric}): \begin{thm}\label{thm:modal-approx} Let $\mathcal{I}$ be a model. Then for all $n\ge 0$, \begin{enumerate} \item we have $d^G_n = d^W_n = d^K_n = d^L_n =: d_n$ on $\mathcal{I}$; \label{item:metrics-equal} \item the pseudometric space $(\Delta^\CI,d_n)$ is totally bounded; \label{item:tot-bounded} \item $\modf{n}$ is a dense subset of $\nonexpI{\Delta^\CI,d_n}$.
\label{item:modal-approx} \end{enumerate} \end{thm} \begin{proof}[Proof sketch] By simultaneous induction on~$n$. In the base case $n = 0$, all the behavioural distances are the zero pseudometric, so that total boundedness follows trivially and the density claim follows because non-expansive maps are just constants in $[0,1]$ and the syntax of $\mathcal{A}LCP$ includes truth constants $q\in\mathbb{Q}I$. For the inductive step, let $\mathcal{I}$ be a model and $n > 0$, and assume as the inductive hypothesis that all claims in Theorem~\ref{thm:modal-approx} hold for all $n' < n$. We begin with Item~\ref{item:metrics-equal}; $d^G_n = d^W_n$ is already proved (Lemma~\ref{lem:metrics-equal-GW}). \begin{itemize}[wide] \item $d^W_n = d^K_n$ follows by Kantorovich-Rubinstein duality (Lemma~\ref{lem:kr-duality}), since every totally bounded pseudometric space is separable. \item $d^K_n = d^L_n$: By Lemma~\ref{lem:diamond-nonexp} and the inductive hypothesis, $\probably[\modf{n-1}]$ is dense in $\probably[\nonexpI{\Delta^\CI,d_{n-1}}]$. Thus, the supremum in the definition of $d^K_n$ does not change when it is taken only over the concepts in $\modf{n-1}$ instead of all nonexpansive properties. The proof is finished by a simple induction over propositional combinations of concepts. \end{itemize} \noindent \emph{Item~\ref{item:tot-bounded}}: By the inductive hypothesis, the space $(\Delta^\CI,d_{n-1})$ is totally bounded. By the Arzel\`a-Ascoli theorem (in a version for totally bounded spaces and non-expansive maps, cf.~\cite{WildEA18}), it follows that $\nonexpI{\Delta^\CI,d_{n-1}}$ is totally bounded wrt.~the supremum pseudometric. This implies that depth-$n$ distances can be approximated up to $\epsilon$ by examining differences at only finitely many, say $m$, concepts. As $([0,1]^m,d_\infty)$ is totally bounded, $(\Delta^\CI,d_n)$ is, too. 
\emph{Item~\ref{item:modal-approx}}: By the Stone-Weierstra\ss{} theorem (again in a version for totally bounded spaces and non-expansive maps~\cite{WildEA18}) it suffices to give, for each $\epsilon>0$, each non-expansive map $f\in\nonexpI{\Delta^\CI,d_n}$, and each pair of states $a,b\in\Delta^\CI$, a concept $C\in\modf{n}$ such that \begin{equation*} \max(|f(a)-C^\mathcal{I}(a)|, |f(b)-C^\mathcal{I}(b)|) \le \epsilon. \end{equation*} To construct such a~$C$, we note that $|f(a)-f(b)|\le d^L_n(a,b)$ (by non-expansiveness), so there exists some $D\in\modf{n}$ such that $|D^\mathcal{I}(a)-D^\mathcal{I}(b)|\ge |f(a)-f(b)|-\epsilon$. From~$D$, we can construct~$C$ using truncated subtraction~$\ominus$. \end{proof} \noindent The full proof of Theorem~\ref{thm:modal-approx} is given in the appendix. Now that we can approximate depth-$k$ bisimulation-invariant properties by $\mathcal{A}LCP$-concepts of rank~$k$ on any fixed model, we need to make the approximation uniform across all models. We achieve this by means of a \emph{final} model, i.e.\ one that realizes all behaviours. Formally: \begin{defn} A \emph{(probabilistic) bounded morphism} between models~$\mathcal{I}$, $\mathcal{J}$ is a map $f:\Delta^\CI\to\Delta^\CJ$ such that $A^\mathcal{I}=f^{-1}[A^\mathcal{J}]$ for each $A\in\mathsf{N}_{\mathsf{C}}$ and $r_{f(a)}(B)=r_a(f^{-1}[B])$ for all $B\subseteq\Delta^\CJ$, $a\in\Delta^\CI$ (implying that~$a$ is blocking iff $f(a)$ is blocking). A model~$\mathcal{F}$ is \emph{final} if for every model~$\mathcal{I}$, there exists a unique bounded morphism $\mathcal{I}\to\mathcal{F}$. \end{defn} \noindent It follows from standard results in coalgebra~\cite{Barr93} that a final model exists. Bounded morphisms preserve behaviour on-the-nose, that is: \begin{lem}\label{lem:bounded-morphism-game} Let $f\colon\mathcal{I}\to\mathcal{J}$ be a bounded morphism. Then, for any $a\in\Delta^\mathcal{I}$, $d^G(a,f(a)) = 0$.
\end{lem} \noindent This entails the following lemma, which will enable us to use approximants on the final model as uniform approximants across all models: \begin{lem}\label{lem:uniform-approx} Let~$\mathcal{F}$ be a final model, and let $\phi$ and $\psi$ be bisimulation-invariant first-order properties. Then, for any model~$\mathcal{I}$, $\supnorm{\phi-\psi}^\mathcal{I} \le \supnorm{\phi-\psi}^\mathcal{F}$. \end{lem} \section{Locality}\label{sec:locality} The proof of the modal characterization theorem now proceeds by first establishing that every bisimulation-invariant first-order formula~$\phi$ is \emph{local} in a sense to be made precise shortly, and subsequently that~$\phi$ is in fact even finite-depth bisimulation invariant, for a depth that is exponential in the rank of~$\phi$. Locality refers to a probabilistic variant of Gaifman graphs~\cite{Gaifman82}: \begin{defn} Let $\mathcal{I}$ be a model. \begin{itemize}[wide] \item The \emph{Gaifman graph} of $\mathcal{I}$ is the undirected graph on the set~$\Delta^\CI$ of vertices that has an edge for every pair $(a,b)$ with $r^\mathcal{I}(a,b) > 0$ or $r^\mathcal{I}(b,a) > 0$. \item The \emph{Gaifman distance} $D \colon \Delta^\CI\times\Delta^\CI \to \mathbb{N}\cup\{\infty\}$ is graph distance in the Gaifman graph: For every $a,b\in\Delta^\CI$, the distance $D(a,b)$ is the least number of edges on any path from~$a$ to $b$, if such a path exists, and $\infty$ otherwise. \item For $a\in\Delta^\CI$ and $k \ge 0$, the \emph{radius-$k$ neighbourhood} $\nbhood{k}{a} = \{b \in \Delta^\CI \mid D(a,b) \le k \}$ of~$a$ consists of the states reachable from~$a$ in at most $k$ steps.
\item The \emph{restriction} of $\mathcal{I}$ to $\nbhood{k}{a}$ is the model $\mathcal{I}^k_a$ with set $\nbhood{k}{a}$ of states, and \begin{align*} A^{\mathcal{I}^k_a}(b) & = A^\mathcal{I}(b) & r^{\mathcal{I}^k_a}(b,c) & = \begin{cases} r^\mathcal{I}(b,c) & \text{if } D(a,b) < k \\ 0 & \text{if } D(a,b) = k \\ \end{cases} \end{align*} for $A\in\mathsf{N}_{\mathsf{C}}$ and $b,c\in\nbhood{k}{a}$. \end{itemize} \end{defn} \noindent The restriction to $\nbhood{k}{a}$ thus makes all states at distance~$k$ blocking. Restricted models have the expected relationship with games of bounded depth: \begin{lem}\label{lem:nbhood-bisim} Let~$a$ be a state in a model~$\mathcal{I}$. Then~$D$ wins the $k$-round $0$-bisimulation game for $\mathcal{I},a$ and $\mathcal{I}^k_a,a$. \end{lem} \noindent Locality of a formula now means that its truth values only depend on the neighbourhood of the state in question: \begin{defn} A formula $\phi(x)$ is \emph{$k$-local} for a radius~$k$ if for every model $\mathcal{I}$ and every $a\in \Delta^\CI$, $\phi^\mathcal{I}(a) = \phi^{\mathcal{I}^k_a}(a)$. \end{defn} \noindent As $\mathcal{A}LCP$-concepts are bisimulation-invariant, Lemma~\ref{lem:nbhood-bisim} implies: \begin{lem} Every $\mathcal{A}LCP$-concept of rank at most $k$ is $k$-local. \end{lem} \noindent To prove locality of bisimulation-invariant $\mathsf{FO}(\probably)$-formulae, we require a model-theoretic tool, an adaptation of Ehrenfeucht-Fra\"iss\'e equivalence to the probabilistic setting: \begin{defn} \label{def:ef-game} Let $\mathcal{I},\mathcal{J}$ be models, and let~$\bar a_0$ and~$\bar b_0$ be vectors of equal length over~$\Delta^\CI$ and~$\Delta^\CJ$, respectively. The \emph{Ehrenfeucht-Fra\"iss\'e game for $\mathcal{I},\bar a_0$ and $\mathcal{J},\bar b_0$}, played by \emph{Spoiler} ($S$) and \emph{Duplicator} ($D$), is given as follows.
\begin{itemize} \item\emph{Configurations:} pairs $(\bar a,\bar b)$ of vectors $\bar a$ over~$\Delta^\CI$ and~$\bar b$ over~$\Delta^\CJ$; the \emph{initial configuration} is $(\bar a_0,\bar b_0)$. \item \emph{Moves:} Each round can be played in one of two ways, chosen by $S$: \begin{itemize}[wide] \item \emph{Standard round}: $S$ selects a state in one model, say $a\in\Delta^\CI$, and $D$ then has to select a state in the other model, say $b\in\Delta^\CJ$, reaching the configuration $(\bar aa,\bar bb)$. \item \emph{Probabilistic round}: $S$ selects an index~$i$ and a fuzzy subset in one model, say $\phi_A\colon\Delta^\CI\to [0,1]$. $D$ then has to select a fuzzy subset in the other model, say $\phi_B\colon\Delta^\CJ\to [0,1]$, such that $\intsuc{a_i}{\phi_A} = \intsuc{b_i}{\phi_B}$. Then, $S$ selects an element on one side, say $a\in\Delta^\CI$, such that $r_{a_i}(a)>0$, and $D$ subsequently selects an element on the other side, say $b\in\Delta^\CJ$, such that $\phi_A(a) = \phi_B(b)$ and $r_{b_i}(b)>0$, reaching the configuration $(\bar aa,\bar bb)$. \end{itemize} \item \emph{Winning conditions}: Any player who cannot move loses. $S$ wins if a configuration is reached (including the initial configuration) that fails to be a partial isomorphism. Here, a configuration $(\bar a,\bar b)$ is a \emph{partial isomorphism} if \begin{itemize} \item $a_i=a_j\iff b_i=b_j$ for all $i,j$ \item $A^\mathcal{I}(a_i) = A^\mathcal{J}(b_i)$ for all $i$ and all $A\in\mathsf{N}_{\mathsf{C}}$ \item $r^\mathcal{I}(a_i,a_j) = r^\mathcal{J}(b_i,b_j)$ for all $i,j$. \end{itemize} Player~$D$ wins if she reaches the $n$-th round (maintaining configurations that are not winning for $S$).
\end{itemize} \end{defn} \noindent For our purposes, we need only soundness of Ehrenfeucht-Fra\"iss\'e equivalence: \begin{lem}[Ehrenfeucht-Fra\"iss\'e invariance] \label{lem:ef-inv-fol} Let $\mathcal{I},\mathcal{J}$ be models, and let $\bar a_0,\bar b_0$ be vectors of length $m$ over~$\Delta^\CI$ and $\Delta^\CJ$, respectively, such that $D$ wins the $n$-round Ehrenfeucht-Fra\"iss\'e game on $\bar a_0,\bar b_0$. Then for every $\mathsf{FO}(\probably)$-formula $\phi$ with $\mathsf{qr}(\phi)\le n$ and free variables at most $x_1,\dots,x_m$, \begin{equation*} \phi(\bar a_0) = \phi(\bar b_0). \end{equation*} \end{lem} \noindent Since embeddings into disjoint unions of models are bounded morphisms, the following is immediate from Lemma~\ref{lem:bounded-morphism-game}: \begin{lem}\label{lem:bisim-inv-disjoint} Every bisimulation-invariant formula is also invariant under disjoint union. \end{lem} \noindent We are now in a position to prove our desired locality result: \begin{lem}[Locality]\label{lem:bisim-inv-local} Let $\phi(x)$ be a bisimulation-invariant $\mathsf{FO}(\probably)$-formula of rank $n$ with one free variable~$x$. Then $\phi$ is $k$-local for $k = 3^n$. \end{lem} \begin{proof}[Proof sketch] Let $a$ be a state in a model~$\mathcal{I}$. We need to show $\phi^\mathcal{I}(a) = \phi^{\mathcal{I}^k_a}(a)$. Construct models $\mathcal{J},\mathcal{K}$ that extend $\mathcal{I}$ and~$\mathcal{I}^k_a$, respectively, by adding $n$ disjoint copies of both $\mathcal{I}$ and~$\mathcal{I}^k_a$. We finish the proof by showing that \begin{equation*} \phi^\mathcal{I}(a) = \phi^\mathcal{J}(a) = \phi^\mathcal{K}(a) = \phi^{\mathcal{I}^k_a}(a). \end{equation*} The first and third equality follow by bisimulation invariance of~$\phi$ (Lemma~\ref{lem:bisim-inv-disjoint}), and the second using Lemma~\ref{lem:ef-inv-fol}, by giving a winning invariant for~$D$ in the $n$-round Ehrenfeucht-Fra\"iss\'e game for $\mathcal{J},a$ and $\mathcal{K},a$.
\end{proof} \section{Proof of the Main Result} \label{sec:main} \noindent Having established locality of bisimulation-invariant first-order formulae and modal approximability of finite-depth bisimulation-invariant properties, we now discharge the last remaining steps in our programme: We show by means of an unravelling construction that bisimulation-invariant first-order formulae are already finite-depth bisimulation-invariant, and then conclude the proof of our main result, the modal characterization theorem. \begin{defn} Let $\mathcal{I}$ be a model. The \emph{unravelling} $\mathcal{I}^\ast$ of $\mathcal{I}$ is a model with non-empty finite sequences $\bar a\in (\Delta^\CI)^+$ as states, where atomic concepts and roles are interpreted by \begin{equation*} A^{\mathcal{I}^\ast}(\bar a) = A^\mathcal{I}(\mathsf{last}(\bar a)) \qquad r^{\mathcal{I}^\ast}(\bar a,\bar aa) = r^\mathcal{I}(\mathsf{last}(\bar a),a), \end{equation*} for $\bar a \in (\Delta^\CI)^+$ and $a\in\Delta^\CI$, where $\mathsf{last}$ takes last elements. \end{defn} \noindent As usual, models are bisimilar to their unravellings: \begin{lem}\label{lem:bisim-unravel} For any model $\mathcal{I}$ and $a\in\Delta^\CI$, $D$ has a winning strategy in the $0$-bisimulation game for $\mathcal{I},a$ and $\mathcal{I}^\ast,a$. \end{lem} \noindent We next show that locality and bisimulation invariance imply finite-depth bisimulation invariance: \begin{lem}\label{lem:local-k-bisim-inv} Let $\phi$ be bisimulation invariant and $k$-local. Then $\phi$ is depth-$k$ bisimulation invariant. \end{lem} \begin{proof}[Proof sketch] By unravelling (Lemma~\ref{lem:bisim-unravel}) and locality (Lemma~\ref{lem:nbhood-bisim}), we need only consider depth-$k$ tree models. On such models, winning strategies in $k$-round bisimulation games automatically also win the unrestricted game.
\end{proof} \noindent This allows us to wrap up the proof of our main result: \begin{proof}[Proof of Theorem~\ref{thm:van-benthem}] Let $\phi$ be a bisimulation-invariant probabilistic first-order formula of rank $n$. By Lemma~\ref{lem:bisim-inv-local} and Lemma~\ref{lem:local-k-bisim-inv}, $\phi$ is depth-$k$ bisimulation-invariant for $k = 3^n$. By Theorem~\ref{thm:modal-approx}, for every $\epsilon>0$, there exists an $\mathcal{A}LCP$-concept $C_\epsilon$ of rank at most $k$ such that $\supnorm{\phi^\mathcal{F}-C^\mathcal{F}_\epsilon}\le\epsilon$ on the final model $\mathcal{F}$. By Lemma~\ref{lem:uniform-approx}, the same approximation bound holds over all models. \end{proof} \section{Conclusions}\label{sec:concl} \noindent We have established a modal characterization result for a probabilistic fuzzy DL~$\mathcal{A}LCP$, stating that every formula of quantitative probabilistic FOL that is \emph{bisimulation-invariant}, i.e.\ non-expansive wrt.\ a natural notion of behavioural distance, can be approximated by $\mathcal{A}LCP$-concepts of bounded modal rank, the bound being exponential in the rank of the original formula. As discussed in the introduction, the bound on the modal rank is the crucial feature making this result into a van-Benthem (rather than Hennessy-Milner) type theorem. It remains open whether our main result can be sharpened to make do without approximation. (Similar open problems persist for the case of fuzzy modal logic~\cite{WildEA18} and two-valued probabilistic modal logic~\cite{SchroderEA17}.) Further directions for future research include a treatment of \L{}ukasiewicz semantics of the propositional connectives (for which non-expansiveness in fact fails). Moreover, the version of our main result that restricts the semantics to finite models, in analogy to Rosen's finite-model version of van Benthem's theorem~\cite{Rosen97}, remains open.
\interlinepenalty=10000 \appendix \section{Appendix} \subsection{Coalgebraic Modelling}\label{sec:coalg} Universal coalgebra~\cite{Rutten00} serves as a generic framework for modelling state-based systems, with the system type encapsulated as a set functor. Although we are only concerned with a concrete system type in the present paper, we do need coalgebraic methods to some degree. In particular, the requisite background on behavioural distances~\cite{BreugelWorrell05,bbkk:behavioral-metrics-functor-lifting} is largely based on coalgebraic techniques, and moreover we will need the final coalgebra at one point in the development. We require only basic definitions, which we recapitulate here and then instantiate to the case of our notion of model. Recall first that a set functor~$F:\mathsf{Set}\to\mathsf{Set}$ consists of an assignment of a set~$FX$ to every set~$X$ and a map $Ff:FX\to FY$ to every map $f:X\to Y$, preserving identities and composition. The core example of a functor for the present purposes is the \emph{distribution functor}~$\mathcal{D}$, which assigns to a set $X$ the set $\mathcal{D} X$ of discrete probability measures on~$X$, and to a map $f:X\to Y$ the map $\mathcal{D} f:\mathcal{D} X\to\mathcal{D} Y$ that takes image measures; explicitly, $\mathcal{D} f(\mu)$ is the image measure of~$\mu$ along~$f$, given by $\mathcal{D} f(\mu)(A)=\mu(f^{-1}[A])$. Functors can be combined by taking \emph{products} and \emph{sums}: Given set functors $F,G:\mathsf{Set}\to\mathsf{Set}$, the set functors $F\times G,F+G:\mathsf{Set}\to\mathsf{Set}$ are given by $(F\times G)X=FX\times GX$ and $(F+G)X=FX+GX$, respectively, with the evident action on maps in both cases; here, $+$ denotes disjoint union as usual. Every set~$C$ induces a \emph{constant functor}, also denoted~$C$ and given by $CX=C$ and $Cf=\mathsf{id}_C$ for every set~$X$ and every map~$f$.
Moreover, the \emph{identity functor}~$\mathsf{id}$ is given by $\mathsf{id}\, X=X$ and $\mathsf{id}\, f=f$ for all sets~$X$ and all maps~$f$. An \emph{$F$-coalgebra} $(A,\xi)$ for a set functor~$F$ consists of a set~$A$ of \emph{states} and a \emph{transition map} $\xi:A\to FA$, thought of as assigning to each state $a\in A$ a structured collection $\xi(a)$ of successors. A $\mathcal{D}$-coalgebra $(A,\xi)$, for instance, is just a Markov chain: its transition map $\xi:A\to\mathcal{D} A$ assigns to each state a distribution over successor states. Similarly, models in the sense defined above are coalgebras $(A,\xi)$ for the set functor $[0,1]^{\mathsf{N}_{\mathsf{C}}}\times(\mathcal{D}+1)$: If $\xi(a)=(f,\pi)$, then $f:\mathsf{N}_{\mathsf{C}}\to[0,1]$ determines the truth values of the atomic concepts at the state~$a$, and $\pi$ is either a discrete probability measure determining the successors of~$a$ or a designated value denoting termination. The probabilistic transition systems considered by van Breugel and Worrell \cite{BreugelWorrell05}, which index probabilistic transition relations over a set~$\mathsf{Act}$ of actions and moreover use unrestricted subdistributions, correspond to coalgebras $(A,\xi)$ for the set functor $\mathcal{D}(\mathsf{id}+1)^\mathsf{Act}$ -- given a state~$a$ and an action $c\in\mathsf{Act}$, $\xi(a)(c)\in\mathcal{D}(A+1)$ is a subdistribution over successor states of~$a$, with the summand~$1$ serving to absorb the weight missing to reach total weight~$1$. (A further difference is that van Breugel and Worrell work in a non-discrete setting of measurable spaces.) A \emph{morphism} $f:(A,\xi)\to(B,\zeta)$ between $F$-coalgebras $(A,\xi)$ and $(B,\zeta)$ is a map $f:A\to B$ such that \begin{equation*} Ff(\xi(a)) = \zeta(f(a)) \end{equation*} for all states $a\in A$. Morphisms should be thought of as behaviour-preserving maps or functional bisimulations.
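As a concrete sanity check on the action of~$\mathcal{D}$ on maps, the following Python sketch computes the image measure of a finite discrete distribution along a function and checks the defining equation $\mathcal{D} f(\mu)(A)=\mu(f^{-1}[A])$ on a small example; the particular distribution and map are invented purely for illustration.

```python
from collections import defaultdict

def pushforward(mu, f):
    """Action of the distribution functor D on a map f: the image measure D f(mu)."""
    nu = defaultdict(float)
    for x, p in mu.items():
        nu[f(x)] += p  # mass of x is transported to f(x)
    return dict(nu)

def mass(mu, event):
    """mu(A) for a finite event A, by summing point masses."""
    return sum(p for x, p in mu.items() if x in event)

# A made-up distribution on {a, b, c} and a map into {0, 1}.
mu = {"a": 0.25, "b": 0.25, "c": 0.5}
f = {"a": 0, "b": 0, "c": 1}.get
nu = pushforward(mu, f)

# D f(mu)(A) = mu(f^{-1}[A]) for the event A = {0}.
A = {0}
preimage = {x for x in mu if f(x) in A}
assert abs(mass(nu, A) - mass(mu, preimage)) < 1e-12
```

The same one-pass accumulation also shows that $\mathcal{D} f(\mu)$ is again a probability measure, since total mass is preserved.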
E.g.\ $f:A\to B$ is a morphism of $\mathcal{D}$-coalgebras (i.e.\ Markov chains) $(A,\xi)$ and $(B,\zeta)$ if for each set $Y\subseteq B$ and each state $a\in A$, \begin{equation*} \zeta(f(a))(Y)=\xi(a)(f^{-1}[Y]), \end{equation*} i.e.\ the probability of reaching $Y$ from $f(a)$ is the same as that of reaching $f^{-1}[Y]$ from~$a$. Morphisms of probabilistic transition systems, viewed as coalgebras, satisfy a similar condition for the successor distributions, and additionally preserve the truth values of atomic concepts. An $F$-coalgebra $(Z,\zeta)$ is \emph{final} if for every $F$-coalgebra $(A,\xi)$ there exists exactly one morphism $(A,\xi)\to(Z,\zeta)$. Final coalgebras are unique up to isomorphism if they exist, and should be thought of as having as states all possible behaviours of states in $F$-coalgebras. For our present purposes, we do not need an explicit description of the final coalgebra; it suffices to know that since the functor describing probabilistic transition systems is \emph{accessible} (more precisely $\omega_1$-accessible), a final coalgebra for it, i.e.\ a final probabilistic transition system, exists~\cite{Barr93}. \subsection{Omitted Proofs} \subsubsection{Proof of Lemma~\ref{lem:kr-duality}} We make use of the following version of the Kantorovich-Rubinstein duality~\cite[Proposition 11.8.1]{dudley2002}: \begin{lem}[Kantorovich-Rubinstein duality] \label{lem:kr-duality-cited} Let $(X,d)$ be a separable metric space, and let $\mathcal{P}_1(X)$ denote the space of probability measures $\mu\colon\mathcal{B}(X) \to [0,1]$ on the Borel $\sigma$-algebra $\mathcal{B}(X)$ such that $\textstyle{\int} d(x,\,\cdot\,) \,\mathrm{d}\mu < \infty$ for some $x\in X$. Then for $\mu_1,\mu_2\in\mathcal{P}_1(X)$, \begin{equation*} d^\uparrow(\mu_1,\mu_2) = d^\downarrow(\mu_1,\mu_2). \end{equation*} \end{lem} Essentially, we only need to transfer this version of Kantorovich-Rubinstein duality to the slightly more general case of pseudometrics.
First, note that the relation $x\sim y :\iff d(x,y) = 0$ is an equivalence relation on $X$. The quotient set $Y := X/{\sim}$ is made into a metric space $(Y,d')$, the \emph{metric quotient} of $(X,d)$, by taking $d'([x],[y])=d(x,y)$. Let $p\colon X \to Y$ be the projection map. By construction, $p$ is an isometry. Both the Kantorovich and the Wasserstein lifting preserve isometries~\cite{bbkk:behavioral-metrics-functor-lifting}, so for all discrete probability measures $\mu_1,\mu_2$ on $X$, \begin{align*} d^{\uparrow}(\mu_1,\mu_2) & = (d')^{\uparrow}((\mathcal{D} p)\mu_1,(\mathcal{D} p)\mu_2) \\ & = (d')^{\downarrow}((\mathcal{D} p)\mu_1,(\mathcal{D} p)\mu_2) \\ & = d^{\downarrow}(\mu_1,\mu_2). \end{align*} In the second step we have applied Lemma~\ref{lem:kr-duality-cited} to the metric space $(Y,d')$, noting that every discrete probability measure can be defined on the Borel $\sigma$-algebra. \subsubsection{Proof of Lemma~\ref{lem:metrics-equal-GW}.} We proceed by induction over~$n$. The base case $n = 0$ is clear: the $0$-round game is an immediate win for~$D$, so $d^G_0 = d^W_0 = 0$. We proceed with the inductive step from~$n$ to~$n+1$. So let~$a$ and~$b$ be states in a model $\mathcal{I}$. If $a$ and $b$ are both blocking, then $d^G_{n+1}(a,b) = d^W_{n+1}(a,b) = 0$. If exactly one of $a,b$ is blocking, then $d^G_{n+1}(a,b) = d^W_{n+1}(a,b) = 1$. Now assume that both $a$ and $b$ are non-blocking. ``$\ge$'': Let $d^G_{n+1}(a,b) \le \epsilon$, so $D$ wins the $(n+1)$-round bisimulation game on $(a,b,\epsilon)$. We show that $d^W_{n+1}(a,b) \le \epsilon$. First, for every $A\in\mathsf{N}_{\mathsf{C}}$, $|A^\mathcal{I}(a)-A^\mathcal{I}(b)| \le \epsilon$ by the winning condition. Second, suppose $D$ chooses $\mu\in\cpl(r_a,r_b)$ and $\epsilon'\colon \Delta^\CI\times\Delta^\CI \to[0,1]$ in the first turn.
By assumption, $D$ wins the $n$-round bisimulation game on $(a',b',\epsilon'(a',b'))$ for every $a',b'\in\Delta^\CI$, so $d^W_n = d^G_n\le\epsilon'$ by induction, and thus $\operatorname{E}_\mu (d^W_n) \le \operatorname{E}_\mu (\epsilon') \le \epsilon$. ``$\le$'': Let $d^W_{n+1}(a,b) < \epsilon$. It suffices to give a winning strategy for $D$ in the $(n+1)$-round bisimulation game on $(a,b,\epsilon)$ (implying~$d^G_{n+1}(a,b)\le\epsilon$). The winning condition in the initial configuration follows immediately from the assumption. Also by the assumption, there exists $\mu\in\cpl(r_a,r_b)$ such that $\operatorname{E}_\mu (d^W_n) < \epsilon$. As $r_a$ and $r_b$ are discrete, the set \[ R := \{(a',b') \mid r_a(a') > 0 \text{ and } r_b(b') > 0 \} \] is countable; so we can write $R = \{(a_1,b_1),(a_2,b_2),\dots\}$. Now put $\delta = \epsilon - \operatorname{E}_\mu (d^W_n)$ and define \[\epsilon'(a_i,b_i) = d^W_n(a_i,b_i) + 2^{-i}\delta \] for $(a_i,b_i)\in R$ and $\epsilon'(a',b') = 0$ for $(a',b') \notin R$. Then \[ \operatorname{E}_\mu (\epsilon') \le \operatorname{E}_\mu (d^W_n) + \delta = \epsilon, \] so playing $\mu$ and $\epsilon'$ constitutes a legal move for $D$. Now, since $\mu\in\cpl(r_a,r_b)$, $\mu(a',b') = 0$ for all $(a',b')\notin R$. This means that $S$ must pick some $(a_i,b_i) \in R$. Then \[ d^G_n(a_i,b_i) = d^W_n(a_i,b_i) < \epsilon'(a_i,b_i), \] so $D$ wins the $n$-round game on $(a_i,b_i,\epsilon'(a_i,b_i))$. \subsubsection{Proof of Lemma~\ref{lem:diamond-nonexp}.} Let $\supnorm{f-g}\le\epsilon$; we have to show $\supnorm{\probably f - \probably g}\le\epsilon$. So let $a\in \Delta^\CI$; then \[ |(\probably f)(a) - (\probably g)(a)| = |\intsuc{a}{f-g}| \le \intsuc{a}{|f-g|} \le \intsuc{a}{\epsilon} \le \epsilon, \] as required. \subsubsection{Proof of Theorem~\ref{thm:modal-approx}.} We proceed by simultaneous induction on $n$.
In the base case $n=0$, all the behavioural distances are the zero pseudometric: $d^G_0 = 0$ because by the rules of the game each $0$-round game is an immediate win for $D$; $d^W_0 = d^K_0 = 0$ by definition; and $d^L_0 = 0$ because each rank-$0$ concept is a propositional combination of truth constants and therefore constant. Total boundedness follows directly from the fact that under the zero pseudometric every $\epsilon$-ball is the entire space, regardless of $\epsilon$. Finally, the density claim follows because non-expansive maps under the zero pseudometric are just constants in $[0,1]$ and the syntax of $\mathcal{A}LCP$ includes truth constants $q\in\mathbb{Q}\cap[0,1]$. For the inductive step, let $\mathcal{I}$ be a model and $n > 0$, and assume as the inductive hypothesis that all claims in Theorem~\ref{thm:modal-approx} hold for all $n' < n$. We begin with Item~\ref{item:metrics-equal}: \begin{itemize}[wide] \item $d^G_n = d^W_n$ is Lemma~\ref{lem:metrics-equal-GW}. \item $d^W_n = d^K_n$ follows by Kantorovich-Rubinstein duality (Lemma~\ref{lem:kr-duality}), since every totally bounded pseudometric space is separable. \item $d^K_n = d^L_n$: Let $a,b\in\Delta^\CI$ and consider the map \[ G\colon\nonexpI{\Delta^\CI,d_{n-1}}\to[0,1], \quad f \mapsto |(\probably f)(a) - (\probably f)(b)|. \] Then~$G$ is a continuous function because all of its constituents are continuous (in particular, $\probably$ is continuous by Lemma~\ref{lem:diamond-nonexp}). By the induction hypothesis, and because density is preserved by continuous maps, $G[\modf{n-1}]$ is a dense subset of $G[\nonexpI{\Delta^\CI,d_{n-1}}]$.
Thus, \begin{align*} d^K_n & (a,b) \\ & = \bigvee_{A\in\mathsf{N}_{\mathsf{C}}} |A^\mathcal{I}(a)-A^\mathcal{I}(b)| \vee \bigvee G[\nonexpI{\Delta^\CI,d_{n-1}}] \\ & = \bigvee_{A\in\mathsf{N}_{\mathsf{C}}} |A^\mathcal{I}(a)-A^\mathcal{I}(b)| \vee \bigvee G[\modf{n-1}] \\ & = \bigvee_{A\in\mathsf{N}_{\mathsf{C}}} |A^\mathcal{I}(a)-A^\mathcal{I}(b)| \vee \bigvee_{\mathclap{\operatorname{\mathsf{rk}} C\le n-1}} |(\probably C)^\mathcal{I}(a)-(\probably C)^\mathcal{I}(b)| \\ & = \bigvee_{\operatorname{\mathsf{rk}} C\le n} |C^\mathcal{I}(a)-C^\mathcal{I}(b)| = d^L_n(a,b). \end{align*} To prove the penultimate step, we first note that ``$\le$'' follows immediately. To see ``$\ge$'', we proceed by induction over the propositional combinations of atomic concepts $A\in\mathsf{N}_{\mathsf{C}}$ and concepts $\probably C$, where $C\in\modf{n-1}$, using that for any concepts $C,D$ and $q\in\mathbb{Q}\cap[0,1]$: \begin{align*} |(C\ominus q)^\mathcal{I}(a)-(C\ominus q)^\mathcal{I}(b)| & \le |C^\mathcal{I}(a)-C^\mathcal{I}(b)| \\ |(\neg C)^\mathcal{I}(a)-(\neg C)^\mathcal{I}(b)| & = |C^\mathcal{I}(a)-C^\mathcal{I}(b)| \\ |(C\mathop{\sqcap} D)^\mathcal{I}(a)-(C\mathop{\sqcap} D)^\mathcal{I}(b)| & \\ & \hspace{-2.3cm} \le \max(|C^\mathcal{I}(a)-C^\mathcal{I}(b)|, |D^\mathcal{I}(a)-D^\mathcal{I}(b)|). \end{align*} \end{itemize} \noindent \emph{Item~\ref{item:tot-bounded}}: We make use of the following version of the Arzel\`a-Ascoli theorem~\cite{WildEA18} where function spaces are restricted to non-expansive functions instead of the more general continuous functions, but the underlying spaces are only required to be totally bounded instead of compact: \begin{lem}[Arzel\`a-Ascoli for totally bounded spaces]\label{lem:arzela-ascoli} Let $(X,d)$ be a totally bounded pseudometric space. Then the space $\nonexpI{X,d}$, equipped with the supremum pseudometric, is totally bounded.
\end{lem} By Lemma~\ref{lem:arzela-ascoli}, applied to the inductive hypothesis, we know that the space $\nonexpI{\Delta^\CI,d_{n-1}}$ is totally bounded wrt.~the supremum pseudometric. Let $\epsilon > 0$. As $\modf{n-1}$ is dense in $\nonexpI{\Delta^\CI,d_{n-1}}$, there exist finitely many $C_1,\dots,C_m\in\modf{n-1}$ such that \[ \bigcup_{i=1}^m \ball{}{\frac{\epsilon}{8}}{C_i} = \nonexpI{\Delta^\CI,d_{n-1}}. \] From these concepts, together with the atomic concepts $A_1,\dots,A_k$, we can construct the map \begin{align*} I\colon \Delta^\CI & \to [0,1]^{k+m} \\ a & \mapsto (A_1^\mathcal{I}(a),\dots,A_k^\mathcal{I}(a), (\probably C_1)^\mathcal{I}(a),\dots,(\probably C_m)^\mathcal{I}(a)). \end{align*} Note that we assume here that the set of atomic concepts is a finite set $\mathsf{N}_{\mathsf{C}} = \{A_1,\dots,A_k\}$. This is without loss of generality for the modal characterization theorem, because every formula of $\mathsf{FO}(\probably)$ can only contain finitely many propositional atoms, so $\mathsf{N}_{\mathsf{C}}$ can be restricted to just those atoms. It turns out that $I$ is an $\frac{\epsilon}{4}$-isometry, that is, \[ |d_n(a,b) - \supnorm{I(a) - I(b)}| \le \tfrac{\epsilon}{4} \] for all $a,b\in \Delta^\CI$. Thus, by the triangle inequality, we can take preimages to turn a finite $\frac{\epsilon}{4}$-cover of $[0,1]^{k+m}$ (a compact, hence totally bounded space) into a finite $\epsilon$-cover of $(\Delta^\CI,d_n)$. \emph{Item~\ref{item:modal-approx}}: We make use of the following Stone-Weierstra\ss{} theorem~\cite{WildEA18} (again in a version for totally bounded spaces and non-expansive maps): \begin{lem}[Stone-Weierstra\ss{} for totally bounded spaces] \label{lem:stone-weierstrass} Let $(X,d)$ be a totally bounded pseudometric space, and let $L$ be a subset of $\nonexpI{X,d}$ such that $f_1,f_2 \in L$ implies $\min(f_1,f_2),\max(f_1,f_2) \in L$.
Then $L$ is dense in $\nonexpI{X,d}$ if each $f\in \nonexpI{X,d}$ can be approximated at each pair of points by functions in~$L$; that is, for all $\epsilon>0$ and all $x_1,x_2\in X$ there exists $g\in L$ such that \[ \max(|f(x_1)-g(x_1)|,|f(x_2)-g(x_2)|) \le\epsilon. \] \end{lem} We apply Lemma~\ref{lem:stone-weierstrass} to $(\Delta^\CI,d_n)$ with $L := \modf{n}$. Clearly, $L$ is closed under $\min$ and $\max$, so to finish the proof it suffices to give, for each $\epsilon>0$, each non-expansive map $f\in\nonexpI{\Delta^\CI,d_n}$ and each pair of states $a,b\in\Delta^\CI$ a concept $C\in\modf{n}$ such that \begin{equation*} \max(|f(a)-C^\mathcal{I}(a)|, |f(b)-C^\mathcal{I}(b)|) \le \epsilon. \end{equation*} To construct such a~$C$, we note that $|f(a)-f(b)|\le d^L_n(a,b)$ (by non-expansiveness), so there exists some $D\in\modf{n}$ such that $|D^\mathcal{I}(a)-D^\mathcal{I}(b)|\ge |f(a)-f(b)|-\epsilon$. From~$D$, we can construct~$C$ using truncated subtraction~$\ominus$. \subsubsection{Proof of Lemma~\ref{lem:bounded-morphism-game}.} We show that $D$ wins the bisimulation game for $(a_0,f(a_0),0)$ by maintaining the invariant that the current configuration is of the form $(a,b,0)$ with $b = f(a)$, which ensures that the winning condition always holds. It remains to show that $D$ can maintain the invariant. In each round, $D$ begins by picking $\mu(a',b') = r_a(a')$ if $b'=f(a')$ and $0$ otherwise, and $\epsilon' = 0$. We can see that $\mu\in\cpl(r_a,r_b)$, because \begin{equation*} \textstyle \sum_{b'\in \Delta^\CJ}\mu(a',b') = r_a(a') \end{equation*} and \begin{equation*} \textstyle\sum_{a'\in \Delta^\CI}\mu(a',b') = \sum_{f(a')=b'} r_a(a') = r_b(b') \end{equation*} for all $a'\in \Delta^\CI$ and $b'\in \Delta^\CJ$. Also, clearly $\operatorname{E}_\mu(\epsilon') = 0$. Now any choice by $S$ leads to another configuration $(a',b',0)$ with $b'=f(a')$.
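The coupling used in this proof admits a direct finite-state sanity check: the following Python sketch builds $\mu(a',b')=r_a(a')$ for $b'=f(a')$ (and $0$ otherwise) and verifies that its marginals are $r_a$ and the image measure $r_b=r_{f(a)}$. The successor distribution and the map are invented for illustration; the function names are not from the paper.

```python
def coupling_along(r_a, f):
    """The coupling from the proof: mu(a', b') = r_a(a') if b' = f(a'), else 0.
    Represented sparsely as a dict keyed by pairs (a', b')."""
    return {(x, f(x)): p for x, p in r_a.items()}

def marginals(mu):
    """Left and right marginals of a sparse coupling."""
    left, right = {}, {}
    for (x, y), p in mu.items():
        left[x] = left.get(x, 0.0) + p
        right[y] = right.get(y, 0.0) + p
    return left, right

# Made-up successor distribution r_a and a map f collapsing two states.
r_a = {"u": 0.5, "v": 0.25, "w": 0.25}
f = {"u": "x", "v": "x", "w": "y"}.get
mu = coupling_along(r_a, f)
left, right = marginals(mu)

assert left == r_a                     # first marginal is r_a itself
assert right == {"x": 0.75, "y": 0.25}  # second marginal is the image measure
```

Since the keys $(x,f(x))$ are distinct for distinct $x$, no mass is merged on the left, which is exactly why the first marginal comes out as $r_a$ on the nose.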
\subsubsection{Proof of Lemma~\ref{lem:uniform-approx}.} Let $\mathcal{I}$ be a model, and let $h\colon\mathcal{I}\to\mathcal{F}$ be the unique bounded morphism. Let $a\in\Delta^\CI$. Then $d^G(a,h(a)) = 0$ by Lemma~\ref{lem:bounded-morphism-game}, and thus $\phi^\mathcal{I}(a) = \phi^\mathcal{F}(h(a))$ and $\psi^\mathcal{I}(a) = \psi^\mathcal{F}(h(a))$ by bisimulation invariance. So \begin{align*} \supnorm{\phi-\psi}^\mathcal{I} & = \textstyle\bigvee_{a\in \Delta^\CI} |\phi^\mathcal{I}(a)-\psi^\mathcal{I}(a)|\\& = \textstyle\bigvee_{a\in \Delta^\CI} |\phi^\mathcal{F}(h(a))-\psi^\mathcal{F}(h(a))| \\&\le \supnorm{\phi-\psi}^\mathcal{F}. \end{align*} \subsubsection{Proof of Lemma~\ref{lem:nbhood-bisim}.} Player~$D$ wins by maintaining the invariant that whenever $i$ rounds have been played, the current configuration is of the form $(a_i,a_i,0)$ for some $a_i\in \Delta^\CI$ with $D(a,a_i)\le i$. For $i<k$, no configuration of this kind can be winning for $S$, because the two states in this configuration represent the same state in different models (recall that the winning conditions are not checked after the last round has been played). It remains to give a strategy for $D$ that maintains the invariant. It clearly holds at the start of the game, with $a_0 = a$. When the $(i+1)$-th round is played, $D$ can pick $\mu\in\cpl(r_{a_i},r_{a_i})$ and $\epsilon'\colon \Delta^\CI\times\nbhood{k}{a} \to [0,1]$ as follows: \begin{align*} \mu(a',a'') & = \begin{cases} r_{a_i}(a'), & \text{ if } a' = a'', \\ 0, & \text{ otherwise}, \end{cases} \\ \epsilon'(a',a'') & = 0. \end{align*} Clearly, $\operatorname{E}_\mu(\epsilon') = 0$, so this is a legal move. Now the new configuration chosen by $S$ necessarily satisfies the invariant. \subsubsection{Proof of Lemma~\ref{lem:ef-inv-fol}.} We proceed by induction over formulae.
\begin{itemize} \item The cases $A(x_i)$ and $x_i=x_j$ (with $A\in\mathsf{N}_{\mathsf{C}}$) follow immediately from the fact that the initial configuration is a partial isomorphism. \item The Boolean cases ($q, \phi\ominus q, \neg\phi, \phi\mathop{\sqcap}\psi$) follow directly by the inductive hypothesis. \item $\exists x.\,\phi$: Let $(\bar a,\bar b)$ be the current configuration. Let $\delta>0$, let $a$ be such that \begin{equation*} (\exists x.\,\phi)(\bar a) - \phi(\bar a a)< \delta, \end{equation*} and let $b$ be the winning answer for $D$ in reply to $S$ choosing $a$. By induction, $\phi(\bar a a) = \phi(\bar b b)$, so \begin{equation*} (\exists x.\,\phi)(\bar b)\ge\phi(\bar b b) = \phi(\bar a a) >(\exists x.\,\phi)(\bar a)-\delta. \end{equation*} Because $\delta>0$ was arbitrary, it follows that $(\exists x.\,\phi)(\bar b)\ge(\exists x.\,\phi)(\bar a)$. We can symmetrically show that $(\exists x.\,\phi)(\bar a)\ge(\exists x.\,\phi)(\bar b)$, which proves this case. \item $\diabind{x_i}{x_{m+1}}{\phi}$: Let $(\bar a,\bar b)$ be the current configuration. Suppose that $S$ picks the index $i$ and the fuzzy subset \begin{equation*} \phi_A\colon \Delta^\CI\to [0,1], \quad a \mapsto \phi^\mathcal{I}(\bar a a) \end{equation*} and $D$'s winning reply is $\psi_B\colon \Delta^\CJ\to [0,1]$. We show that on the support of $r_{b_i}$, $\psi_B$ must be equal to \begin{equation*} \phi_B\colon \Delta^\CJ\to [0,1], \quad b \mapsto \phi^\mathcal{J}(\bar b b). \end{equation*} Suppose there exists some $b\in \Delta^\CJ$ with $r_{b_i}(b)>0$ and $\phi_B(b) \neq \psi_B(b)$. Then $D$ has a winning reply $a\in \Delta^\CI$ in case $S$ picks this $b$, which means, by the rules of the game, that $r_{a_i}(a)>0$ and $\phi_A(a) = \psi_B(b)$. However, it is also true that $\phi_A(a) = \phi_B(b)$, by the inductive hypothesis. This is a contradiction.
Now, because $\psi_B$ was a winning reply, we obtain \begin{align*} (\diabind{x_i}{x_{m+1}}{\phi})(\bar a) & = \intsuc{a_i}{\phi_A}\\& = \intsuc{b_i}{\psi_B}\\& = \intsuc{b_i}{\phi_B} \\&= (\diabind{x_i}{x_{m+1}}{\phi})(\bar b). \end{align*} \end{itemize} \subsubsection{Proof of Lemma~\ref{lem:bisim-inv-local}.} Let $a$ be a state in a model~$\mathcal{I}$. We need to show $\phi^\mathcal{I}(a) = \phi^{\mathcal{I}^k_a}(a)$. Let $\mathcal{J}$ be a new model that extends $\mathcal{I}$ by adding $n$ disjoint copies of both $\mathcal{I}$ and $\mathcal{I}^k_a$. Let $\mathcal{K}$ be the model that extends $\mathcal{I}^k_a$ likewise. We finish the proof by showing that \begin{equation*} \phi^\mathcal{I}(a) = \phi^\mathcal{J}(a) = \phi^\mathcal{K}(a) = \phi^{\mathcal{I}^k_a}(a). \end{equation*} The first and third equality follow by bisimulation invariance of $\phi$ (Lemma~\ref{lem:bisim-inv-disjoint}). The second equality follows by Ehrenfeucht-Fra\"iss\'e invariance (Lemma~\ref{lem:ef-inv-fol}) once we show that $D$ has a winning strategy in the $n$-round Ehrenfeucht-Fra\"iss\'e game for $\mathcal{J},a$ and $\mathcal{K},a$. Such a winning strategy can be described as follows: For $\bar a = (a_1,\dots,a_n)$, put $\nbhood{k}{\bar a} = \bigcup_{i\le n}\nbhood{k}{a_i}$. Then $D$ maintains the invariant that, if the configuration reached after $i$ rounds is $(\bar b,\bar c)$, then there exists an isomorphism $f_i$ between $\nbhood{k_i}{\bar b}$ and $\nbhood{k_i}{\bar c}$ that maps each $b_j$ to the corresponding $c_j$, where $k_i = 3^{n-i}$. The invariant holds at the start of the game, because the neighbourhoods on both sides are just $\nbhood{k}{a}$. Similarly, whenever the invariant holds, the current configuration is a partial isomorphism by restriction of the given isomorphism to the two vectors of the configuration. Now we consider what happens during the rounds. Suppose that $i$ rounds have been played, and the current configuration is $(\bar b,\bar c)$. 
If $S$ decides to play a standard round, playing some $b\in \Delta^\CJ$, then there are two cases: \begin{itemize} \item $b\in \nbhood{2k_{i+1}}{\bar b}$: In this case, the radius-$k_{i+1}$ neighbourhood $\nbhood{k_{i+1}}{b}$ of $b$ is fully contained in the domain $\nbhood{k_i}{\bar b}$ of $f_i$ -- this follows by the triangle inequality, as $2k_{i+1} + k_{i+1} = 3k_{i+1} = k_i$. Now $D$ can just reply with $c := f_i(b)$, and an isomorphism $f_{i+1}$ between $\nbhood{k_{i+1}}{\bar bb}$ and $\nbhood{k_{i+1}}{\bar cc}$ is formed by restricting the domain and codomain of $f_i$ appropriately. \item $b\notin \nbhood{2k_{i+1}}{\bar b}$: In this case, the radius-$k_{i+1}$ neighbourhoods $\nbhood{k_{i+1}}{b}$ of $b$ and $\nbhood{k_{i+1}}{\bar b}$ of $\bar b$ do not intersect -- this too follows from the triangle inequality. Now $D$ can pick a fresh copy of $\mathcal{I}$ or $\mathcal{I}^k_a$ in $\mathcal{K}$ (depending on which kind of copy $b$ lies in); her reply $c$ is then just $b$ in that copy. Here, a fresh copy is one that was never visited on any of the previous rounds. By construction of $\mathcal{J}$ and $\mathcal{K}$, such a copy is always available. This means that we now have two isomorphisms, one between $\nbhood{k_{i+1}}{\bar b}$ and $\nbhood{k_{i+1}}{\bar c}$ (by restriction of $f_i$), and one between $\nbhood{k_{i+1}}{b}$ and $\nbhood{k_{i+1}}{c}$ (by isomorphism of the respective copies of $\mathcal{I}$ or $\mathcal{I}^k_a$). Because these isomorphisms have disjoint domains and codomains, we can combine them to form the desired isomorphism $f_{i+1}$. \end{itemize} If $S$ plays a standard round with some $c\in \Delta^\mathcal{K}$ instead, the same argument applies. 
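The two cases above rest only on the triangle inequality and the choice $k_i = 3^{n-i}$. A minimal sketch on a toy metric space (integer states with absolute-difference distance; this is purely illustrative and not one of the models of this paper) checks both facts:

```python
# Illustrative check of the radius invariant k_i = 3^(n-i): states are
# integers, the distance is |x - y|, and nbhood(S, k) is the radius-k
# neighbourhood of a set S of states.
def nbhood(states, k):
    return {x for s in states for x in range(s - k, s + k + 1)}

n = 4
k = [3 ** (n - i) for i in range(n + 1)]  # k_0 = 3^n, ..., k_n = 1
bbar = {0, 10}
for i in range(n):
    assert 2 * k[i + 1] + k[i + 1] == k[i]  # 3 k_{i+1} = k_i
    # Case 1: d(b, bbar) <= 2 k_{i+1}, so N_{k_{i+1}}(b) lies inside N_{k_i}(bbar)
    b = max(bbar) + 2 * k[i + 1]
    assert nbhood({b}, k[i + 1]) <= nbhood(bbar, k[i])
    # Case 2: d(b, bbar) > 2 k_{i+1}, so the two small neighbourhoods are disjoint
    b = max(bbar) + 2 * k[i + 1] + 1
    assert not (nbhood({b}, k[i + 1]) & nbhood(bbar, k[i + 1]))
```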
Finally, if $S$ starts a probabilistic round by picking an index $0\le j\le i$ and playing some $\phi_B\colon \Delta^\CJ\to [0,1]$, then we first note that, by the rules of the game, the support of $\phi_B$ must be contained in $\nbhood{1}{\bar b}$, which in turn must be contained in the domain of $f_i$. This means that $D$ can construct $\phi_C\colon \Delta^\mathcal{K}\to [0,1]$ by mapping along $f_i$, i.e.~$\phi_C(c) = \phi_B(f_i^{-1}(c))$ for all successors $c$ of $c_j$, and $\phi_C(c) = 0$ otherwise. Now, whichever $b$ or $c$ is picked by $S$, $D$ can just reply with $c:=f_i(b)$ or $b:=f_i^{-1}(c)$ and $f_{i+1}$ is formed as in the first case of a standard round. Again, the same argument applies if $S$ picks a fuzzy subset $\phi_C$ on the other side. \subsubsection{Proof of Lemma~\ref{lem:bisim-unravel}.} $D$ wins by maintaining the invariant that the configuration of the game is of the form $(\bar a,\mathsf{last}(\bar a),0)$ for some $\bar a\in (\Delta^\CI)^+$. To do so, she can put $\mu(\bar aa,a) = \pi_{\bar a}(\bar aa) = \pi_{\mathsf{last}(\bar a)}(a)$ for all $a\in \Delta^\CI$, all other values of $\mu$ are $0$, and $\epsilon' = 0$. Then any move by $S$ leads to a configuration where the invariant holds. \subsubsection{Proof of Lemma~\ref{lem:local-k-bisim-inv}.} Let $\mathcal{I}$ and $\mathcal{J}$ be two models and let $a\in \Delta^\CI$ and $b\in \Delta^\CJ$ be two states such that $d_k^G(a,b)<\epsilon$. It is enough to show that $|\phi^\mathcal{I}(a)-\phi^\mathcal{J}(b)|\le\epsilon$. We denote by $a'$ and $a''$ the copies of $a$ in $\mathcal{I}^\ast$ and $(\mathcal{I}^\ast)^k_a$, respectively. Similarly, $b'$ and $b''$ denote the copies of $b$ in $\mathcal{J}^\ast$ and $(\mathcal{J}^\ast)^k_b$.
By Lemma~\ref{lem:bisim-unravel}, $D$ wins the $0$-bisimulation-game for $\mathcal{I},a$ and $\mathcal{I}^\ast,a'$ (similarly for $\mathcal{J}$) and by Lemma~\ref{lem:nbhood-bisim}, she also wins the $k$-round $0$-bisimulation game for $\mathcal{I}^\ast,a'$ and $(\mathcal{I}^\ast)^k_a,a''$ (similarly for $\mathcal{J}$). Because behavioural distance $d^G_k$ is a pseudometric, this means that \begin{align*} &d^G_k(a'',b'')\\& \le d^G_k(a'',a')+d^G_k(a',a)\\&\qquad +d^G_k(a,b)+d^G_k(b,b')+d^G_k(b',b'')\\ & =d^G_k(a,b)<\epsilon, \end{align*} so $D$ has a winning strategy in the $k$-round $\epsilon$-bisimulation game for $(\mathcal{I}^\ast)^k_a,a''$ and $(\mathcal{J}^\ast)^k_b,b''$. In both $(\mathcal{I}^\ast)^k_a,a''$ and $(\mathcal{J}^\ast)^k_b,b''$, the reachable states form a tree of depth at most $k$. This implies that, after $i$ rounds of the game, the two states on either side of the current configuration are nodes at distance $i$ from the root of their respective tree. Thus, whenever $k$ rounds have been played in the game, $S$ does not have a legal move in the next round, because at that point, both nodes in the configuration are necessarily leaves and thus blocking. This in turn means that if $D$ can win the $k$-round game, she also wins the unbounded game, so, by bisimulation invariance of $\phi$, $|\phi^{(\mathcal{I}^\ast)^k_a}(a'')-\phi^{(\mathcal{J}^\ast)^k_b}(b'')|\le\epsilon$. By locality and bisimulation invariance of $\phi$, and again Lemma~\ref{lem:bisim-unravel}, we have $\phi^{(\mathcal{I}^\ast)^k_a}(a'') = \phi^{\mathcal{I}^\ast}(a') = \phi^\mathcal{I}(a)$ as well as $\phi^{(\mathcal{J}^\ast)^k_b}(b'') = \phi^{\mathcal{J}^\ast}(b') = \phi^\mathcal{J}(b)$. Thus $|\phi^\mathcal{I}(a)-\phi^\mathcal{J}(b)|\le\epsilon$, as claimed. \end{document}
\begin{document} \title{Linearized inverse Schr\"{o}dinger potential problem at a large wavenumber} \begin{abstract} We investigate recovery of the (Schr\"{o}dinger) potential function from many boundary measurements at a large wavenumber. By considering a linearized form of the problem, we obtain a H\"{o}lder type stability which is a significant improvement over the logarithmic stability at low wavenumbers. Furthermore, we extend the discussion to the linearized inverse Schr\"{o}dinger potential problem with attenuation, where an exponential dependence on the attenuation constant is traced in the stability estimate. Based on the linearized problem, a reconstruction algorithm is proposed aiming at the recovery of the Fourier modes of the potential function. By choosing the large wavenumber appropriately, we verify the efficiency of the proposed algorithm by several numerical examples. \end{abstract} \begin{keywords} Stability estimate, Inverse boundary value problem, Schr\"{o}dinger potential problem \end{keywords} \begin{AMS} 35R30, 65N21 \end{AMS} \section{Introduction}\label{se_intro} We consider a linearized problem of recovering the potential function $c := c(x)$ in the Schr\"{o}dinger equation \begin{equation}\label{eqn:mainprob} \left\{ \begin{aligned} - \Delta u - (k^{2} - c) u &= 0 & & \text{in\ } \Omega \subset \mathbb{R}^{n},\\ u &= g_{0} & & \text{on\ } \partial \Omega, \end{aligned} \right. \end{equation} from many boundary measurements at a large wavenumber $k$ in dimension $n \geqslant 2$. As the data, we use a linearized form of the standard Dirichlet-to-Neumann (DtN) map \begin{equation*} \Lambda: g_{0} \mapsto g_{1} := \partial_{\nu} u \quad \text{on\ } \partial \Omega \end{equation*} which will be specified below.
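To make the map $\Lambda$ concrete, the following minimal sketch computes a discrete DtN map for a one-dimensional analogue of (\ref{eqn:mainprob}) by finite differences; the interval $(0,1)$, the function name \texttt{dtn\_1d} and the grid size are our own illustrative choices and not part of this paper, whose setting is $n \geqslant 2$:

```python
import numpy as np

def dtn_1d(g0, k, c, n=400):
    """Discrete DtN map for -u'' - (k^2 - c(x)) u = 0 on (0, 1):
    maps Dirichlet data (u(0), u(1)) to (-u'(0), u'(1)), i.e. the
    derivatives along the outward normals at the two endpoints."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    cx = c(x)
    # tridiagonal system for the interior unknowns u_1, ..., u_{n-1}
    main = 2.0 / h**2 - (k**2 - cx[1:-1])
    off = -np.ones(n - 2) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = np.zeros(n - 1)
    rhs[0] += g0[0] / h**2
    rhs[-1] += g0[1] / h**2
    u = np.concatenate(([g0[0]], np.linalg.solve(A, rhs), [g0[1]]))
    return -(u[1] - u[0]) / h, (u[-1] - u[-2]) / h
```

For $c = 0$ and $k = 1$ the exact solution with data $(0, \sin 1)$ is $u(x) = \sin x$, so the returned pair approximates $(-1, \cos 1)$.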
Reconstruction of the Schr\"{o}dinger potential function $c(x)$ at a fixed wavenumber $k = 0$ in (\ref{eqn:mainprob}) dates back to the original inverse conductivity problem, proposed by Calder\'{o}n, arising in electrical impedance tomography, where uniqueness for the linearization can be proven by using complex exponential solutions \cite{C1980}. Later, the fundamental work of Sylvester and Uhlmann proved global uniqueness for the inverse potential problem in three and higher dimensions by constructing almost complex exponential solutions, which also yields global uniqueness of the inverse conductivity problem \cite{SU1987}. In two dimensions, recent work by Bukhgeim \cite{B2007} demonstrated uniqueness, at any fixed $k$, by introducing stationary phase methods. At $k = 0$, a logarithmic stability estimate for recovering the potential function was given by Alessandrini \cite{Alessandrini1988} and was further proven to be optimal by Mandache \cite{M2001}. As for the numerical reconstruction of the potential function (or the conductivity function) at a fixed wavenumber, we refer to \cite{DS1994} and the recent work \cite{HLSST2016}. We shall mention that the inverse medium problem in \cite{BL2005} is also closely related to the current work. Recently it has been widely observed in different inverse boundary value problems that the stability estimate improves with growing wavenumbers (or energy), both analytically and numerically. The increasing stability in the inverse boundary value problem for the Schr\"{o}dinger equation (\ref{eqn:mainprob}) was first observed within different ranges of the wavenumbers in \cite{I2011}. In \cite{IW2014}, the authors considered the increasing stability of inverse boundary value problems for the Schr\"{o}dinger equation with attenuation, where a linear exponential dependence on the attenuation constant is established.
We note that all the above mentioned increasing stability results on inverse Schr\"{o}dinger potential problems are considered in three and higher dimensions under different a priori regularity assumptions. Another multifrequency set-up uses a single observation at multiple wavenumbers. For an overview of such results, we recommend the recent review paper \cite{BLLT2015}, which nicely summarizes the theoretical and numerical evidence verifying the increasing stability in inverse medium and source problems for acoustic Helmholtz equations and Maxwell equations. In this paper, we investigate a linearized form of the inverse Schr\"{o}dinger potential problem (\ref{eqn:mainprob}) with a large wavenumber in two and higher dimensions. Such a linearization is carried out at a zero potential function, which is reasonable if $c$ is sufficiently small compared with the squared wavenumber $k^{2}$. Since the squared wavenumber $k^{2}$ may be large, we are still allowed to reconstruct a potential function of moderate amplitude. In Section \ref{se_sta} we introduce the linearized problem and obtain an (increasing) H\"{o}lder-type stability estimate for the linearized inverse Schr\"{o}dinger potential problem with a large wavenumber by using bounded complex exponential solutions. We extend, in Section \ref{se_atten}, the discussion to the inverse Schr\"{o}dinger potential problem with attenuation. An exponential dependence on the attenuation constant in the stability estimate is observed. A novel reconstruction algorithm is proposed to recover the Fourier modes of the potential function in Section \ref{se_algo}, based on Calder\'{o}n's method. By choosing the large wavenumber appropriately, we show various numerical examples confirming the efficiency of the proposed algorithm in Section \ref{se_numer}. Finally, Section \ref{se_con} concludes the manuscript with further prospects.
\section{Increasing stability in the linearized inverse Schr\"{o}dinger potential problem at the large wavenumber}\label{se_sta} We recall that the original problem, which initializes the current work, is to find the Schr\"{o}dinger potential function $c = c(x)$ defined in a bounded domain $\Omega$ in the following problem \begin{equation}\label{eqn:prob} (I) \quad \left\{ \begin{aligned} - \Delta u - (k^{2} - c) u &= 0 & & \text{in\ } \Omega \subset \mathbb{R}^{n},\\ u &= g_{0} & & \text{on\ } \partial \Omega, \end{aligned} \right. \end{equation} from the knowledge of the Dirichlet-to-Neumann (DtN) map \begin{equation}\label{eqn:dtn} \Lambda: g_{0} \mapsto g_{1} := \partial_{\nu} u \quad \text{on\ } \partial \Omega. \end{equation} We assume that $k^2$ is not a Dirichlet eigenvalue for the Laplace operator in $\Omega$. If we assume that $c$ is small (or $k$ is large), we can justify the linearization of the Schr\"{o}dinger equation. More precisely, let $u_{0}$, $u_{1}$ solve the following sub-problems \begin{subequations} \begin{align} (I_{0}) \quad & \left\{ \begin{aligned} - \Delta u_{0} - k^{2} u_{0} &= 0 \quad & & \text{in\ } \Omega,\\ u_{0} &= g_{0} & & \text{on\ } \partial \Omega, \end{aligned} \right. \label{eqn:sub_I_0}\\ (I_{1}) \quad & \left\{ \begin{aligned} - \Delta u_{1} - k^{2} u_{1} &= - c u_{0} & & \text{in\ } \Omega,\\ u_{1} &= 0 & & \text{on\ } \partial \Omega, \end{aligned} \right. \label{eqn:sub_I_1} \end{align} \end{subequations} then the solution $u$ of the original problem (\ref{eqn:prob}) is \begin{equation*} u = u_{0} + u_{1} + \cdots \end{equation*} where the remaining ``$\cdots$'' are higher-order terms. The linearized DtN map $\Lambda^{\prime}$ of $\Lambda$ in (\ref{eqn:dtn}) is defined accordingly as \begin{equation}\label{eqn:dtn_p} \Lambda^{\prime} : g_{0} \mapsto \partial_{\nu} u_{1} \quad \text{on\ } \partial \Omega.
\end{equation} Multiplying both sides of the sub-problem (\ref{eqn:sub_I_1}) by a test function $v \in H^{1}(\Omega)$ satisfying the equation \begin{equation*} - \Delta v - k^{2} v = 0 \quad \text{in\ } \Omega, \end{equation*} and integrating by parts twice, using $u_{1} = 0$ on $\partial \Omega$, we obtain \begin{equation}\label{eqn:integral} \int_{\Omega} c \, u_{0} v = \int_{\partial \Omega} \left( \partial_{\nu} u_{1} \right) v. \end{equation} Without loss of generality we may assume that $0 \in \Omega$. Let $D = 2 \sup\limits_{x \in \Omega} \left| x \right|$ and $\epsilon$ be the operator norm of $\Lambda^{\prime} : H^{\frac{1}{2}}(\partial \Omega) \to H^{-\frac{1}{2}}(\partial \Omega)$. We present the main stability estimate below: \begin{thm}\label{thm_1} Let $D \leqslant 1$, $\left\| c \right\|_{H^{1}(\Omega)} \leqslant M_{1}$, $k > 1$, and $\epsilon < 1$. Then the estimate \begin{equation*} \left\| c \right\|_{L^{2}(\Omega)}^{2} \leqslant C(\Omega) \left( k^{n+4} + E^{n+4} \right) \epsilon^{2} + C(\Omega) E^{n+2} \left( \epsilon + \epsilon^{3} \right) + \frac{M_{1}^{2}}{1 + E^{2} + 3k^{2}} \end{equation*} holds for the linearized system (\ref{eqn:sub_I_0})-(\ref{eqn:sub_I_1}), where $E = -\ln \epsilon$ and the constant $C(\Omega)$ depends on the domain $\Omega$. \end{thm} \begin{proof} The proof is based on the complex exponential solutions suggested by Calder\'{o}n \cite{C1980} and Faddeev \cite{F1966}. Let $\xi \in \mathbb{R}^{n}$ and $\zeta, \zeta^{*} \in \mathbb{C}^{n}$ with $\xi, \zeta, \zeta^{*} \neq 0$. We consider the following solutions \begin{equation}\label{eqn:uvcgo} u_{0}(x) = e^{\mathbf{i} \zeta \cdot x}, \quad v(x) = e^{\mathbf{i} \zeta^{*} \cdot x}.
\end{equation} To select $\zeta$ and $\zeta^{*}$, we introduce an orthonormal basis $\left\{ e_{1} := \frac{\xi}{\left| \xi \right|}, e_{2}, \cdots, e_{n} \right\}$ of $\mathbb{R}^{n}$ and let \begin{equation*} \zeta := \frac{\left| \xi \right|}{2} e_{1} + \sqrt{k^{2} - \frac{\left| \xi \right|^{2}}{4}} e_{2}, \quad \zeta^{*} := \frac{\left| \xi \right|}{2} e_{1} - \sqrt{k^{2} - \frac{\left| \xi \right|^{2}}{4}} e_{2} \end{equation*} where $\sqrt{k^{2} - \frac{\left| \xi \right|^{2}}{4}} = \mathbf{i} \sqrt{\frac{\left| \xi \right|^{2}}{4} - k^{2}}$ if $k < \frac{\left| \xi \right|}{2}$. For brevity, we denote $\Xi := \sqrt{\frac{\left| \xi \right|^{2}}{4} - k^{2}}$. Then $u_{0}(x) v(x) = e^{\mathbf{i} \xi \cdot x}$ and from the equality (\ref{eqn:integral}) we derive \begin{equation}\label{eqn:integral_F} \mathcal{F}[c](\xi) := \int_{\Omega} c(x) e^{\mathbf{i}\xi \cdot x} \,\mathrm{d}x = \int_{\partial \Omega} \left( \partial_{\nu} u_{1} \right) v. \end{equation} Here $\mathcal{F}[\cdot]$ denotes the Fourier transform. Observe that if $k \geqslant \frac{\left| \xi \right|}{2}$, then the norms of the exponential solutions in (\ref{eqn:uvcgo}) are bounded as follows \begin{equation}\label{eqn:xi_1} \left\| u_{0} \right\|_{H^{1}(\Omega)}^{2} = \left\| v \right\|_{H^{1}(\Omega)}^{2} = \int_{\Omega} \left( 1 + \left| \zeta \right|^{2} \right) \,\mathrm{d}x \leqslant \left( 1 + k^{2} \right) \operatorname{Vol}_{n} \Omega, \end{equation} where $\operatorname{Vol}_{n} \Omega$ is the volume of $\Omega \subset \mathbb{R}^{n}$.
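As a sanity check of this choice, the identities $\zeta \cdot \zeta = \zeta^{*} \cdot \zeta^{*} = k^{2}$ (so that $u_{0}$ and $v$ solve the homogeneous equation) and $\zeta + \zeta^{*} = \xi$ (so that $u_{0} v = e^{\mathbf{i}\xi\cdot x}$) can be verified numerically in the regime $k \geqslant \frac{\left| \xi \right|}{2}$; the values below are illustrative:

```python
import numpy as np

# Illustrative check in R^3 with k >= |xi|/2: zeta.zeta = k^2 (the plane
# waves solve the Helmholtz equation) and zeta + zeta* = xi.
k = 5.0
xi = np.array([2.0, 0.0, 0.0])
e1 = xi / np.linalg.norm(xi)
e2 = np.array([0.0, 1.0, 0.0])               # orthonormal to e1
root = np.sqrt(k**2 - np.dot(xi, xi) / 4)    # real since k >= |xi|/2
zeta = (np.linalg.norm(xi) / 2) * e1 + root * e2
zeta_star = (np.linalg.norm(xi) / 2) * e1 - root * e2
assert np.isclose(np.dot(zeta, zeta), k**2)
assert np.isclose(np.dot(zeta_star, zeta_star), k**2)
assert np.allclose(zeta + zeta_star, xi)
x = np.array([0.3, -0.7, 0.2])
u0, v = np.exp(1j * np.dot(zeta, x)), np.exp(1j * np.dot(zeta_star, x))
assert np.isclose(u0 * v, np.exp(1j * np.dot(xi, x)))
```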
If $k < \frac{\left| \xi \right|}{2}$, then \begin{equation}\label{eqn:xi_2} \begin{aligned} \left\| u_{0} \right\|_{H^{1}(\Omega)}^{2} = \left\| v \right\|_{H^{1}(\Omega)}^{2} &= \left( 1 + k^{2} \right) \int_{\Omega} e^{- 2 \Xi \, e_{2} \cdot x} \,\mathrm{d}x \\ &\leqslant \left( 1 + k^{2} \right) \operatorname{Vol}_{n-1} \Omega \,\frac{e^{D \Xi} - e^{-D \Xi}}{2 \Xi}, \end{aligned} \end{equation} where $\operatorname{Vol}_{n-1} \Omega = \sup\limits_{\Omega^{\prime}} \{ \operatorname{Vol}_{n-1} \Omega^{\prime} \}$ over all $(n-1)$-dimensional orthonormal projections $\Omega^{\prime}$ of $\Omega$. By the trace theorem, there hold \begin{equation}\label{eqn:trace} \left\| u_{0} \right\|_{H^{\frac{1}{2}}(\partial \Omega)} \leqslant C(\Omega) \left\| u_{0} \right\|_{H^{1}(\Omega)}, \quad \left\| v \right\|_{H^{\frac{1}{2}}(\partial \Omega)} \leqslant C(\Omega) \left\| v \right\|_{H^{1}(\Omega)}. \end{equation} So from (\ref{eqn:integral_F}) and (\ref{eqn:trace}), we have \begin{equation*} \begin{aligned} \left| \mathcal{F}[c](\xi) \right|^{2} &\leqslant \left\| \partial_{\nu} u_{1} \right\|_{H^{-\frac{1}{2}}(\partial \Omega)}^{2} \left\| v \right\|_{H^{\frac{1}{2}}(\partial \Omega)}^{2} \\ &\leqslant \epsilon^{2} \left\| u_{0} \right\|_{H^{\frac{1}{2}}(\partial \Omega)}^{2} \left\| v \right\|_{H^{\frac{1}{2}}(\partial \Omega)}^{2} \\ &\leqslant \epsilon^{2} C^{4}(\Omega) \left\| u_{0} \right\|_{H^{1}(\Omega)}^{2} \left\| v \right\|_{H^{1}(\Omega)}^{2}. 
\end{aligned} \end{equation*} Hence, due to (\ref{eqn:xi_1}) and (\ref{eqn:xi_2}), \begin{equation*} \left| \mathcal{F}[c](\xi) \right|^{2} \leqslant C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n} \Omega \right)^{2} \epsilon^{2}, \quad \text{if\ } k \geqslant \frac{\left| \xi \right|}{2}, \end{equation*} and \begin{equation}\label{eq_nminus1bound} \left| \mathcal{F}[c](\xi) \right|^{2} \leqslant C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n-1} \Omega \right)^{2} \frac{\left( e^{D \Xi} - e^{-D \Xi} \right)^2}{4 \Xi^{2}} \epsilon^{2}, \quad \text{if\ } k < \frac{\left| \xi \right|}{2}. \end{equation} Let $E := - \ln \epsilon > 0$, $k > 1$ and $\epsilon <1$; we consider two cases \begin{enumerate} \item[a)] $k > E$ (i.e. $\epsilon = e^{-E} > e^{-k}$), and \item[b)] $k \leqslant E$ (i.e. $\epsilon = e^{-E} \leqslant e^{-k}$). \end{enumerate} In the case a), we have \begin{equation}\label{eqn:bound_a} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} &= \int \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi = \int_{k \geqslant \frac{\left| \xi \right|}{2}} \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi + \int_{k < \frac{\left| \xi \right|}{2}} \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi \\ &\leqslant C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n} \Omega \right)^{2} \sigma_{n} (2k)^{n} \epsilon^{2} + \frac{M_{1}^{2}}{1 + (2k)^{2}} \\ &\leqslant C_{1}(\Omega) k^{n+4} \epsilon^{2} + \frac{M_{1}^{2}}{1 + 4 k^{2}} \leqslant C_{1}(\Omega) k^{n+4} \epsilon^{2} + \frac{M_{1}^{2}}{1 + E^{2} + 3 k^{2}} \end{aligned} \end{equation} where $\sigma_{n}$ is the volume of the unit ball in $\mathbb{R}^{n}$, and the constant $C_{1}(\Omega)$ is defined as $C^{4}(\Omega) \left( \operatorname{Vol}_{n} \Omega \right)^{2} \sigma_{n} 2^{n+2}$.
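The last term in (\ref{eqn:bound_a}) is the standard weighting argument: the Fourier tail over $\left| \xi \right| > 2k$ is controlled by $M_{1}^{2}/(1+(2k)^{2})$ via the a priori bound $\left\| c \right\|_{H^{1}(\Omega)} \leqslant M_{1}$. A discrete one-dimensional sketch of this argument, with an arbitrary illustrative spectrum:

```python
import numpy as np

# Discrete 1-D sketch of the tail estimate: if the weighted sum
# sum (1+|xi|^2) |F[c](xi)|^2 is at most M1^2, then the tail over
# |xi| > rho is at most M1^2 / (1 + rho^2).
rng = np.random.default_rng(0)
xi = np.linspace(-50.0, 50.0, 2001)
fc = rng.standard_normal(xi.size) / (1.0 + xi**2)   # some decaying spectrum
M1sq = np.sum((1.0 + xi**2) * fc**2)                # discrete H^1-type bound
for rho in (5.0, 10.0, 25.0):
    tail = np.sum(fc[np.abs(xi) > rho] ** 2)
    assert tail <= M1sq / (1.0 + rho**2)
```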
In the case b), we let $\rho^{2} := \frac{E^{2}}{D^{2}} + 4 k^{2}$ and split \begin{equation*} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} &= \int_{k \geqslant \frac{\left| \xi \right|}{2}} \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi + \int_{k < \frac{\left| \xi \right|}{2} < \frac{\rho}{2}} \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi + \int_{\rho \leqslant \left| \xi \right|} \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi \\ &\leqslant C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n} \Omega \right)^{2} \sigma_{n} (2k)^{n} \epsilon^{2} \\ &\quad + C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n-1} \Omega \right)^{2} \left( \int_{k < \frac{\left| \xi \right|}{2} < \frac{\rho}{2}} \,\mathrm{d}\xi \right) \frac{D^{2} \left( e^{\frac{E}{2}} - e^{-\frac{E}{2}} \right)^{2}}{E^{2}} \epsilon^{2} \\ &\quad + \frac{M_{1}^{2}}{1 + \rho^{2}}. \end{aligned} \end{equation*} The second term in the above inequality is derived by (\ref{eq_nminus1bound}) and the fact that \begin{equation*} \frac{e^{y}-e^{-y}}{y} = 2 \left( 1 + \frac{y^{2}}{3!} + \cdots + \frac{y^{2m}}{(2m+1)!} + \cdots \right) \end{equation*} is increasing for $y > 0$, and hence \begin{equation*} \frac{e^{D \Xi} - e^{-D \Xi}}{2 \Xi} \leqslant \frac{D\left( e^{\frac{E}{2}} - e^{-\frac{E}{2}} \right)}{E}, \quad {\rm since} \quad k < \frac{\left| \xi \right|}{2} < \frac{\rho}{2} = \sqrt{\frac{E^{2}}{4D^{2}} + k^{2}}.
\end{equation*} Observe that, since $k \leqslant E$, \begin{equation*} \begin{aligned} \int_{k < \frac{\left| \xi \right|}{2} < \frac{\rho}{2}} \,\mathrm{d}\xi &= \sigma_{n} \left[ \rho^{n} - (2k)^{n} \right] = \sigma_{n} \left[ \left( \frac{E^{2}}{D^{2}} + (2k)^{2} \right)^{\frac{n}{2}} - (2k)^{n} \right] \\ &= \sigma_{n} \frac{E^{n}}{D^{n}} \left[ \left( 1 + \left( 2 k \frac{D}{E} \right)^{2} \right)^{\frac{n}{2}} - \left( 2 k \frac{D}{E} \right)^{n} \right] \\ &\leqslant \sigma_{n} \frac{E^{n}}{D^{n}} \left[ \left( 1 + \left( 2 D \right)^{2} \right)^{\frac{n}{2}} - \left( 2 D \right)^{n} \right]. \end{aligned} \end{equation*} Thus \begin{equation}\label{eqn:bound_b} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} &\leqslant C^{4}(\Omega) \left( \operatorname{Vol}_{n} \Omega \right)^{2} \sigma_{n} 2^{n+2} k^{n+4} \epsilon^{2} \\ &\quad + C^{4}(\Omega) \left( \operatorname{Vol}_{n-1} \Omega \right)^{2} \sigma_{n} 2^{2} k^{4} \frac{E^{n-2}}{D^{n-2}} \left[ \left( 1 + \left( 2 D \right)^{2} \right)^{\frac{n}{2}} - \left( 2 D \right)^{n} \right] \\ &\quad \qquad \cdot \left( e^{E} + e^{-E} - 2 \right) \epsilon^{2} \\ &\quad + \frac{M_{1}^{2}}{1 + \frac{E^{2}}{D^{2}} + 4 k^{2}} \\ &\leqslant C_{1}(\Omega) E^{n+4} \epsilon^{2} + C_{2}(\Omega) E^{n+2} \left( \epsilon + \epsilon^{3} \right) + \frac{M_{1}^{2}}{1 + \frac{E^{2}}{D^{2}} + 4 k^{2}} \end{aligned} \end{equation} where $C_{2}(\Omega) := C^{4}(\Omega) \left( \operatorname{Vol}_{n-1} \Omega \right)^{2} \sigma_{n} \frac{2^{2}}{D^{n-2}} \left[ \left( 1 + \left( 2 D \right)^{2} \right)^{\frac{n}{2}} - \left( 2 D \right)^{n} \right]$. Combining case a) $k > E$ with the bound (\ref{eqn:bound_a}) and case b) $k \leqslant E$ with the bound (\ref{eqn:bound_b}), we prove Theorem \ref{thm_1}. \end{proof} \begin{rmk}\label{rem_1} When $\Omega$ is a sphere, the constants in the trace theorem are explicit, and one can make $C(\Omega)$ in Theorem \ref{thm_1} explicit as well.
\end{rmk} In view of (\ref{eqn:bound_b}) it is obvious that stability is increasing with growing wavenumber $k$, provided $k \leqslant E$. We comment on the increasing stability when $k > E$. In such a case, we have the upper bound estimate (\ref{eqn:bound_a}) of $c$, which is \begin{equation*} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} \leqslant \omega(k) := C_{1}(\Omega) k^{n+4} \epsilon^{2} + \frac{M_{1}^{2}}{4 k^{2}}. \end{aligned} \end{equation*} By elementary calculus, the upper bound $\omega(k)$ decreases on $(0,k_{*})$ and increases on $(k_{*},\infty)$ where \begin{equation}\label{eq_kstar} \begin{aligned} k_{*} = \left( \frac{M_{1}^{2}}{2 (n+4) C_{1}(\Omega) \epsilon^{2}} \right)^{\frac{1}{n+6}}. \end{aligned} \end{equation} If $k_{*} \leqslant 1$ in (\ref{eq_kstar}), we have \begin{equation*} \begin{aligned} M_{1}^{2} \leqslant 2 (n+4) C_{1}(\Omega) \epsilon^{2} \end{aligned} \end{equation*} and the minimum of $\omega(k)$ over $k \geqslant 1$ is attained at $k = 1$ such that \begin{equation*} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} \leqslant \omega(1) = C_{1}(\Omega) \epsilon^{2} + \frac{M_{1}^{2}}{4} \leqslant C_{1}(\Omega) \left( 3 + \frac{n}{2} \right) \epsilon^{2}. \end{aligned} \end{equation*} If $k_{*} > 1$, then the minimum of $\omega(k)$ is attained at $k = k_{*}$ such that \begin{equation*} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} \leqslant \omega(k_{*}) = C_{1}^{\frac{2}{n+6}}(\Omega) M_{1}^{2\frac{n+4}{n+6}} \left[ \left( 2(n+4) \right)^{-\frac{n+4}{n+6}} + 2^{-2} \left( 2(n+4) \right)^{\frac{2}{n+6}} \right] \epsilon^{\frac{4}{n+6}}. \end{aligned} \end{equation*} Now we consider the ``small'' perturbation when $E \geqslant 1$ (i.e., $\epsilon \leqslant e^{-1}$).
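The closed form (\ref{eq_kstar}) for the minimizer $k_{*}$ can be confirmed numerically; the constants in the following sketch are arbitrary illustrative values:

```python
import numpy as np

# Check of the minimizer k_* of omega(k) = C1 k^(n+4) eps^2 + M1^2/(4 k^2);
# n is the dimension, all constants illustrative.
n, C1, M1, eps = 3, 2.0, 1.5, 1e-6

def omega(k):
    return C1 * k**(n + 4) * eps**2 + M1**2 / (4.0 * k**2)

kstar = (M1**2 / (2 * (n + 4) * C1 * eps**2)) ** (1.0 / (n + 6))
ks = np.linspace(0.5 * kstar, 2.0 * kstar, 10001)
# the grid minimizer agrees with the closed form, and omega is
# decreasing before k_* and increasing after it
assert abs(ks[np.argmin(omega(ks))] - kstar) < 1e-2 * kstar
assert omega(0.9 * kstar) > omega(kstar) < omega(1.1 * kstar)
```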
If $k_{*} \leqslant E$, then the minimum of $\omega(k)$ (at $k = E$) is \begin{equation*} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} & \leqslant \omega(E) = C_{1}(\Omega) E^{n+4} \epsilon^{2} + \frac{M_{1}^2}{4 E^{2}} \\ & \leqslant C_{1}(\Omega) E^{n+4} \epsilon^{2} + C_{1}^{\frac{2}{n+6}}(\Omega) M_{1}^{2\frac{n+4}{n+6}} 2^{-2} \left( 2(n+4) \right)^{\frac{2}{n+6}} \epsilon^{\frac{4}{n+6}}. \end{aligned} \end{equation*} If $k_{*} > E$, then the minimum of $\omega(k)$ is at $k = k_{*}$, which is the same as in the case $0 < E < 1$. So in all possible cases above, given a small error (or perturbation) $\epsilon$, we can choose a large wavenumber $k$ producing a H\"{o}lder stability, which is a significant improvement over a logarithmic stability at low wavenumbers. \section{Linearized inverse Schr\"{o}dinger potential problem at the large wavenumber with attenuation}\label{se_atten} Furthermore, we provide an extended discussion of the linearized inverse Schr\"{o}dinger potential problem with attenuation. The original problem in three (and higher) dimensions is studied in \cite{IW2014}, where increasing stability estimates with a linear exponential dependence on the attenuation constant are obtained for low/high frequencies separately, by constructing almost complex exponential solutions and real solutions, respectively. In the current section, we investigate the stability estimate for recovering the potential function $c = c(x)$ below \begin{equation}\label{eqn:mainprob_attenuation} \left\{ \begin{aligned} - \Delta u - (k^{2} - c) u + \mathbf{i}kb u &= 0 & & \text{in\ } \Omega \subset \mathbb{R}^{n},\\ u &= g_{0} & & \text{on\ } \partial \Omega, \end{aligned} \right. \end{equation} from the knowledge of the linearized DtN map for a large wavenumber $k$. The constant $b > 0$ is the attenuation constant. For the sake of simplicity, we use the notation of Section \ref{se_sta}; for instance, $u$ represents the solution of the Schr\"{o}dinger potential equation with attenuation.
Referring to Section \ref{se_sta}, we let $u_{0}$, $u_{1}$ solve the following sub-problems \begin{subequations} \begin{align} & \left\{ \begin{aligned} - \Delta u_{0} - k^{2} u_{0} + \mathbf{i}kb \, u_{0} &= 0 \quad & & \text{in\ } \Omega,\\ u_{0} &= g_{0} & & \text{on\ } \partial \Omega, \end{aligned} \right. \label{eqn:sub_I_0_attenuation}\\ & \left\{ \begin{aligned} - \Delta u_{1} - k^{2} u_{1} + \mathbf{i}kb \, u_{1} &= - c u_{0} & & \text{in\ } \Omega,\\ u_{1} &= 0 & & \text{on\ } \partial \Omega, \end{aligned} \right. \label{eqn:sub_I_1_attenuation} \end{align} \end{subequations} and the solution $u$ of (\ref{eqn:mainprob_attenuation}) is \begin{equation*} u = u_{0} + u_{1} + \cdots \end{equation*} with ``$\cdots$'' denoting the remaining terms. Similarly the linearized DtN map $\Lambda^{\prime}$ is defined as \begin{equation}\label{eqn:dtn_atten} \Lambda^{\prime}: g_{0} \mapsto \partial_{\nu} u_{1} \quad {\rm on} \quad \partial \Omega. \end{equation} Multiplying both sides of the sub-problem (\ref{eqn:sub_I_1_attenuation}) by a test function $v$ satisfying \begin{equation*} -\Delta v - k^{2} v + \mathbf{i}kb \, v = 0 \quad {\rm in }\quad \Omega, \end{equation*} and integrating by parts twice, using $u_{1} = 0$ on $\partial \Omega$, we obtain the equality \begin{equation}\label{eqn:integral_atten} \int_{\Omega} c \, u_{0} v = \int_{\partial\Omega} \left( \partial_{\nu} u_{1} \right) v \end{equation} which will again play an important role below. Let $\epsilon$ be the operator norm of $\Lambda^{\prime} : H^{\frac{1}{2}}(\partial \Omega) \to H^{-\frac{1}{2}}(\partial \Omega)$; as before, $0\in \Omega$ and $D := 2 \sup\limits_{x \in \Omega} \left| x \right|$. We present the main stability estimate in this section.
\begin{thm}\label{thm_2} Let $D \leqslant 1$, $\left\| c \right\|_{H^{1}(\Omega)} \leqslant M_{1}$, $k > 1$, and $\epsilon < 1$. Then the estimate \begin{equation*} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} &\leqslant C(\Omega) \left( k^{n+4} + E^{n+4} \right) e^{2D b} \epsilon^{2} \\ &\quad + C(\Omega) E^{n+4} e^{D^{2} b} \epsilon + C(\Omega) E^{n+2} e^{D^{2} b} \epsilon + \frac{M_{1}^{2}}{1 + E^{2} + 2 k^{2}} \end{aligned} \end{equation*} holds for the linearized system (\ref{eqn:sub_I_0_attenuation})-(\ref{eqn:sub_I_1_attenuation}), where $E = -\ln \epsilon$ and the constant $C(\Omega)$ depends on the domain $\Omega$. \end{thm} \begin{proof} Similarly to the proof of Theorem \ref{thm_1}, we again use the exponential solutions. Let $\xi \in \mathbb{R}^{n}$ and $\zeta, \zeta^{*} \in \mathbb{C}^{n}$ with $\xi, \zeta, \zeta^{*} \neq 0$. We consider the following solutions \begin{equation*} u_{0}(x) = e^{\mathbf{i} \zeta \cdot x}, \quad v(x) = e^{\mathbf{i} \zeta^{*} \cdot x}. \end{equation*} The orthonormal basis is chosen as $\left\{ e_{1} = \frac{\xi}{\left| \xi \right|}, e_{2}, \cdots, e_{n} \right\}$ and we let \begin{equation*} \zeta := \frac{\left| \xi \right|}{2} e_{1} + \sqrt{ k^{2} - \frac{\left| \xi \right|^{2}}{4} - \mathbf{i}kb } \, e_{2}, \quad \zeta^{*} := \frac{\left| \xi \right|}{2} e_{1} - \sqrt{ k^{2} - \frac{\left| \xi \right|^{2}}{4} - \mathbf{i}kb } \, e_{2}. \end{equation*} Then $u_{0}(x) v(x) = e^{\mathbf{i} \xi \cdot x}$ and, by (\ref{eqn:integral_atten}), we derive \begin{equation*} \mathcal{F}[c](\xi) := \int_{\Omega} c(x) e^{\mathbf{i} \xi \cdot x} \,\mathrm{d}x = \int_{\partial\Omega} \left( \partial_{\nu} u_{1} \right) v. \end{equation*} We can write $\sqrt{ k^{2} - \frac{\left| \xi \right|^{2}}{4} - \mathbf{i}kb } = X + \mathbf{i} Y$, $X > 0$.
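The decomposition $X + \mathbf{i} Y$ and the closed-form expressions for $X$ and $Y$ used in the case analysis below can be checked numerically against the principal square root; the values of $k$, $b$ and $\left| \xi \right|$ are illustrative only:

```python
import numpy as np

# Principal square root of k^2 - |xi|^2/4 - i*k*b, written as X + iY
# with X > 0; k, b and the |xi| values are illustrative.
k, b = 4.0, 0.7
for xi_norm in (3.0, 7.0, 11.0):     # one |xi| in each of the three regimes
    A = k**2 - xi_norm**2 / 4
    s = np.sqrt(complex(A, -k * b))  # principal branch, so Re(s) > 0
    X, Y = s.real, s.imag
    assert X > 0 and Y < 0
    assert np.isclose(X**2 - Y**2, A)          # real part of s^2
    assert np.isclose(Y, -k * b / (2 * X))     # from 2XY = -kb
    assert np.isclose(X, np.sqrt((A + np.hypot(A, k * b)) / 2))
# in the regime |xi|^2 > 4 k^2 (here |xi| = 11), the expression for Y:
t = 11.0**2 / 2 - 2 * k**2
assert np.isclose(Y, -0.5 * np.sqrt(t + np.hypot(t, 2 * k * b)))
```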
If $\left| \xi \right|^{2} \leqslant 3 k^{2}$, then squaring both sides and following the arguments in \cite{IW2014}, we have \begin{equation*} \left| Y \right| = \frac{kb}{2X} = \frac{kb}{\sqrt{2} \sqrt{\left( k^{2} - \frac{\left| \xi \right|^{2}}{4} \right) + \sqrt{\left( k^{2} - \frac{\left| \xi \right|^{2}}{4} \right)^{2} + k^{2} b^{2}}}} \leqslant b. \end{equation*} If $3 k^{2} < \left| \xi \right|^{2} \leqslant 4 k^{2}$, we derive \begin{equation*} \left| Y \right| = \frac{kb}{\sqrt{2} \sqrt{\left( k^{2} - \frac{\left| \xi \right|^{2}}{4} \right) + \sqrt{\left( k^{2} - \frac{\left| \xi \right|^{2}}{4} \right)^{2} + k^{2} b^{2}}}} \leqslant \frac{1}{2} \left( \frac{k}{D} + D b \right). \end{equation*} On the other hand, if $\left| \xi \right|^{2} > 4 k^{2}$, then \begin{equation}\label{eqn:se5_Y} Y = -\frac{kb}{2 X} = -\frac{1}{2} \sqrt{\left( \frac{\left| \xi \right|^{2}}{2} - 2 k^{2} \right) + \sqrt{\left( \frac{\left| \xi \right|^{2}}{2} - 2 k^{2} \right)^{2} + 4 k^{2} b^{2}}}. \end{equation} We then derive that \begin{itemize} \item if $\left| \xi \right|^{2} \leqslant 3 k^{2}$, then \begin{equation*} \begin{aligned} \left\| u_{0} \right\|_{H^{1}(\Omega)}^{2} = \left\| v \right\|_{H^{1}(\Omega)}^{2} & \leqslant \left( 1 + k^{2} \right) \int_{\Omega} e^{-2 Y e_{2} \cdot x} \,\mathrm{d}x \\ & \leqslant \left( 1 + k^{2} \right) \operatorname{Vol}_{n} \Omega \, e^{D b}. \end{aligned} \end{equation*} \item If $3 k^{2} < \left| \xi \right|^{2} \leqslant 4 k^{2}$, then \begin{equation*} \begin{aligned} \left\| u_{0} \right\|_{H^{1}(\Omega)}^{2} = \left\| v \right\|_{H^{1}(\Omega)}^{2} & \leqslant \left( 1 + k^{2} \right) \int_{\Omega} e^{-2 Y e_{2} \cdot x} \,\mathrm{d}x \\ & \leqslant \left( 1 + k^{2} \right) \operatorname{Vol}_{n} \Omega \, e^{\frac{1}{2} ( k + D^{2} b )}.
\end{aligned} \end{equation*} \item If $\left| \xi \right|^{2} > 4 k^{2}$, there holds \begin{equation*} \begin{aligned} \left\| u_{0} \right\|_{H^{1}(\Omega)}^{2} = \left\| v \right\|_{H^{1}(\Omega)}^{2} & \leqslant \left( 1 + k^{2} \right) \operatorname{Vol}_{n-1} \Omega \int_{-\frac{D}{2}}^{\frac{D}{2}} e^{-2 Y t} \,\mathrm{d}t \\ & = \left( 1 + k^{2} \right) \operatorname{Vol}_{n-1} \Omega \, \frac{ e^{Dy} - e^{-Dy} }{2y} \end{aligned} \end{equation*} where the variable $y := -Y$ is positive due to (\ref{eqn:se5_Y}). \end{itemize} Similarly to the proof of Theorem \ref{thm_1}, we have \begin{equation*} \begin{aligned} \left| \mathcal{F}[c](\xi) \right|^{2} & \leqslant \epsilon^{2} C^{4}(\Omega) \left\| u_{0} \right\|_{H^{1}(\Omega)}^{2} \left\| v \right\|_{H^{1}(\Omega)}^{2}, \end{aligned} \end{equation*} which further yields that \begin{itemize} \item if $\left| \xi \right|^{2} \leqslant 3 k^{2}$, then \begin{equation*} \begin{aligned} \left| \mathcal{F}[c](\xi) \right|^{2} \leqslant C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n} \Omega \right)^{2} e^{2D b} \epsilon^{2}. \end{aligned} \end{equation*} \item If $3 k^{2} < \left| \xi \right|^{2} \leqslant 4 k^{2}$, then \begin{equation*} \begin{aligned} \left| \mathcal{F}[c](\xi) \right|^{2} \leqslant C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n} \Omega \right)^{2} e^{ k + D^{2} b } \epsilon^{2}. \end{aligned} \end{equation*} \item If $\left| \xi \right|^{2} > 4 k^{2}$, then \begin{equation*} \begin{aligned} \left| \mathcal{F}[c](\xi) \right|^{2} \leqslant C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n-1} \Omega \right)^{2} \frac{ \left( e^{Dy} - e^{-Dy} \right)^{2} }{4 y^{2}} \epsilon^{2}. \end{aligned} \end{equation*} \end{itemize} Let $E = -\ln \epsilon > 0$, $k > 1$ and $\epsilon < 1$. We again consider the following cases: \begin{enumerate} \item[a)] $k > E$ (i.e. $\epsilon = e^{-E} > e^{-k}$), and \item[b)] $k \leqslant E$ (i.e.
$\epsilon = e^{-E} \leqslant e^{-k}$). \end{enumerate} In the case a), we have \begin{equation}\label{eqn:thm2proof_estimate1} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} & = \int_{\left| \xi \right|^{2} \leqslant 3 k^{2}} \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi + \int_{ 3 k^{2} < \left| \xi \right|^{2} } \left|\mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi \\ & \leqslant C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n} \Omega \right)^{2} \sigma_{n} \left( \sqrt{3} k \right)^{n} e^{2D b} \epsilon^{2} + \frac{M_{1}^{2}}{1 + 3 k^{2}} \\ & \leqslant \frac{1}{4}C_{1}(\Omega) k^{n+4} e^{2D b} \epsilon^{2} + \frac{M_{1}^{2}}{1 + E^{2} + 2 k^{2}}. \end{aligned} \end{equation} In the case b), we again choose $\rho^{2} := \frac{E^{2}}{D^{2}} + 4 k^{2}$ and split \begin{equation*} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} &= \int_{\left| \xi \right|^{2} \leqslant 3 k^{2}} \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi + \int_{ 3 k^{2} < \left| \xi \right|^{2} \leqslant 4 k^{2} } \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi \\ & \quad + \int_{ 4 k^{2} < \left| \xi \right|^{2} \leqslant \rho^{2} } \left|\mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi + \int_{ \rho^{2} < \left| \xi \right|^{2} } \left| \mathcal{F}[c](\xi) \right|^{2} \,\mathrm{d}\xi \\ &\leqslant \frac{1}{4} C_1(\Omega) k^{n+4} e^{2Db} \epsilon^2 + \frac{1}{4} C_1(\Omega) \left(2^n- 3^{\frac{n}{2}}\right) k^n e^{k+D^2 b} \epsilon^2 \\ & \quad + C^{4}(\Omega) \left( 1 + k^{2} \right)^{2} \left( \operatorname{Vol}_{n-1} \Omega \right)^{2} \left( \int_{ 4 k^{2} < \left| \xi \right|^{2} \leqslant \rho^{2} } \frac{ \left( e^{Dy} - e^{-Dy} \right)^{2}}{4 y^{2}} \,\mathrm{d}\xi \right) \epsilon^{2} \\ &\quad + \frac{M_{1}^{2}}{1 + \rho^{2}}. 
\end{aligned} \end{equation*} As in Theorem \ref{thm_1}, $\frac{ e^{Dy} - e^{-Dy} }{y}$ is increasing for $y > 0$ and thus attains its maximum at $\left| \xi \right| = \rho$, which yields \begin{equation*} \begin{aligned} \frac{ \left( e^{Dy} - e^{-Dy} \right)^{2}}{4 y^{2}} \leqslant \frac{ e^{D \sqrt{ \frac{1}{2} \frac{E^{2}}{D^{2}} + \sqrt{ \left( \frac{1}{2} \frac{E^{2}}{D^{2}} \right)^{2} + 4 k^{2} b^{2} } }} }{ \frac{1}{2} \frac{E^{2}}{D^{2}} + \sqrt{ \left( \frac{1}{2} \frac{E^{2}}{D^{2}} \right)^{2} + 4 k^{2} b^{2} } } \quad \text{because} \quad 4 k^{2} < \left| \xi \right|^{2} \leqslant \rho^{2}. \end{aligned} \end{equation*} Since \begin{equation*} \begin{aligned} \frac{1}{2} \frac{E^{2}}{D^{2}} + \sqrt{ \left( \frac{1}{2} \frac{E^{2}}{D^{2}} \right)^{2} + 4 k^{2} b^{2}} \geqslant \frac{E^{2}}{D^{2}} \end{aligned} \end{equation*} and, by using $k \leqslant E$, \begin{equation*} \frac{1}{2} \frac{E^{2}}{D^{2}} + \sqrt{ \left( \frac{1}{2} \frac{E^{2}}{D^{2}} \right)^{2} + 4 k^{2} b^{2} } \leqslant \frac{E^{2}}{D^{2}} + 2 k b \leqslant \frac{E^{2}}{D^{2}} + 2 \frac{E}{D} D b \leqslant \left( \frac{E}{D} + D b \right)^{2}, \end{equation*} we then bound \begin{equation*} \begin{aligned} \frac{ \left( e^{Dy} - e^{-Dy} \right)^{2} }{4 y^{2}} \leqslant \frac{ e^{ E + D^{2} b } }{ \left( \frac{E}{D} \right)^{2} }. \end{aligned} \end{equation*} Meanwhile, by using $k \leqslant E$ and the proof of Theorem \ref{thm_1}, we obtain \begin{equation*} \begin{aligned} & \int_{ 4 k^{2} < \left| \xi \right|^{2} \leqslant \rho^{2} } \,\mathrm{d}\xi \leqslant \sigma_{n} \frac{E^{n}}{D^{n}} \left[ \left( 1 + \left( 2 D \right)^{2} \right)^{\frac{n}{2}} - \left( 2 D \right)^{n} \right].
\end{aligned} \end{equation*} Combining the above inequalities and using $k \leqslant E$ again, we conclude that \begin{equation}\label{eqn:thm2proof_estimate2} \begin{aligned} \left\| c \right\|_{L^{2}(\Omega)}^{2} &\leqslant \frac{1}{4} C_1(\Omega) k^{n+4} e^{2Db} \epsilon^2 + \frac{1}{4} C_1(\Omega) \left(2^n- 3^{\frac{n}{2}}\right) k^n e^{k+D^2 b} \epsilon^2 \\ & \quad + C^{4}(\Omega) \left( \operatorname{Vol}_{n-1} \Omega \right)^{2} \sigma_{n} \frac{2^{2}}{D^{n-2}} \left[ \left( 1 + \left( 2 D \right)^{2} \right)^{\frac{n}{2}} - \left( 2 D \right)^{n} \right] E^{n+2} e^{D^{2} b} \epsilon \\ & \quad + \frac{M_{1}^{2}}{1 + \frac{E^{2}}{D^{2}} + 4 k^{2}}. \end{aligned} \end{equation} We end the proof by combining the bounds (\ref{eqn:thm2proof_estimate1}) for $k > E$ and (\ref{eqn:thm2proof_estimate2}) for $k \leqslant E$. \end{proof} Following the discussion in Section \ref{se_sta}, we again observe increasing stability with growing wavenumber $k$. At the same time, because of the attenuation term, the error estimate deteriorates with the attenuation constant $b$ at a linearly exponential rate. Such a dependence is better than that for the multi-frequency inverse source problem, where a quadratically exponential growth is observed; compare the increasing stability estimates in \cite{CIL2016, IL2018}. \section{Reconstruction algorithm}\label{se_algo} In this section, we propose a reconstruction algorithm for the linearized inverse Schr\"{o}dinger potential problem at a large wavenumber, observing that the linearized problems are (\ref{eqn:sub_I_0}) and (\ref{eqn:sub_I_1}). For the sake of simplicity, we fix the dimensionality $n = 2$. The linearized problem (\ref{eqn:sub_I_0})-(\ref{eqn:sub_I_1}) represents a linearized DtN map $\Lambda^{\prime}: g_{0} \mapsto \partial_{\nu} u_{1}$ by varying $g_{0}$.
In particular, we can choose the solutions $u_{0}$ to be the bounded complex exponential solutions (\ref{eqn:uvcgo}) for $n = 2$ such that \begin{equation}\label{eqn:numer_u0} u_{0} = e^{\mathbf{i} \zeta \cdot x} \quad \text{with} \quad \zeta = \frac{\xi}{2} + \sqrt{k^{2} - \frac{\left| \xi \right|^{2}}{4}} \, \xi^{\perp} \end{equation} by varying different vectors $\xi$ in the phase space and $g_{0} = \left. u_{0} \right|_{\partial \Omega}$, where $\xi^{\perp}$ satisfies $\xi \cdot \xi^{\perp} = 0$ and $\left| \xi^{\perp} \right| = 1$. At the same time, substituting each $u_{0}$ above into the second problem (\ref{eqn:sub_I_1}), we obtain the solution $u_{1}$ and its Neumann trace $\partial_{\nu} u_{1} = \Lambda^{\prime} g_{0}$, which also varies with different choices of $\xi$. Similarly, we choose the test function $v$ to be the relevant exponential solutions \begin{equation}\label{eqn:numer_v} v = e^{\mathbf{i} \zeta^{*} \cdot x} \quad \text{with} \quad \zeta^{*} = \frac{\xi}{2} - \sqrt{k^{2} - \frac{\left| \xi \right|^{2}}{4}} \, \xi^{\perp}. \end{equation} To realize the reconstruction algorithm numerically, we shall approximate the linearized DtN map $\Lambda^{\prime}$ (\ref{eqn:dtn_p}) accurately. In particular, we mention that the Neumann boundary data $\partial_\nu u_{1}$ depends on the unknown potential function $c$, referring to (\ref{eqn:sub_I_1}). Assuming that the potential function $c$ is small compared with the squared wavenumber $k^{2}$ and following the standard linearization approach (for instance, in \cite{DS1994}), we define the linearized Neumann boundary data \begin{equation}\label{eqn:lineardtn} g_{1}^{\,\prime} := g_{1} - \partial_{\nu} u_{0} = \partial_{\nu} u - \partial_{\nu} u_{0}, \end{equation} where $g_{1}$ is the Neumann boundary data in (\ref{eqn:dtn}) of the original problem (\ref{eqn:prob}) and $\partial_{\nu} u_{0}$ is the Neumann boundary data of the sub-problem (\ref{eqn:sub_I_0}).
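As a quick sanity check of the wave-vector construction (\ref{eqn:numer_u0})-(\ref{eqn:numer_v}), the following sketch (our illustration; the numerical values of $k$ and $\xi$ are arbitrary) verifies that $\zeta + \zeta^{*} = \xi$, so that $u_{0} v = e^{\mathbf{i} \xi \cdot x}$, and that $\zeta \cdot \zeta = k^{2}$, with the square root becoming purely imaginary once $\left| \xi \right| > 2k$.

```python
import numpy as np

# Sketch verifying the wave-vector construction for n = 2; the values of
# k and xi below are illustrative only (any k > 0 and xi != 0 work).
def wave_vectors(xi, k):
    """Return (zeta, zeta_star) for a phase-space point xi in R^2."""
    xi = np.asarray(xi, dtype=float)
    xi_perp = np.array([-xi[1], xi[0]]) / np.linalg.norm(xi)  # xi . xi_perp = 0, |xi_perp| = 1
    root = np.sqrt(complex(k**2 - np.dot(xi, xi) / 4.0))      # purely imaginary if |xi| > 2k
    return xi / 2.0 + root * xi_perp, xi / 2.0 - root * xi_perp

k = 15.2
zeta, zeta_star = wave_vectors([8.0, 3.0], k)
print(zeta + zeta_star)        # equals xi, so u0 * v = exp(i xi . x)
print(np.dot(zeta, zeta))      # equals k^2: both exponentials oscillate at wavenumber k
```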
We thus approximate the linearized Neumann boundary data numerically by $ \partial_{\nu} u_{1} \approx g_{1}^{\,\prime}$, where the ``higher''-order terms in (\ref{eqn:lineardtn}) are ignored. If we substitute the above solutions (\ref{eqn:numer_u0}), (\ref{eqn:numer_v}) into the equality (\ref{eqn:integral}), the recovery of $c$ is equivalent to solving the following integral equation \begin{equation}\label{eqn:integral_F_v} \mathcal{F}[c](\xi) = \int_{\Omega} c(x) \, e^{\mathbf{i} \xi \cdot x} \,\mathrm{d}x = \int_{\partial \Omega} \left( \partial_{\nu} u_{1} \right) e^{\mathbf{i} \zeta^{*} \cdot x} \,\mathrm{d}s_{x} \approx \int_{\partial \Omega} g_{1}^{\,\prime} \, e^{\mathbf{i} \zeta^{*} \cdot x} \,\mathrm{d}s_{x}, \end{equation} where the vectors $\xi$, $\xi^{\perp}$ vary in the phase space with a fixed wavenumber $k$. The left-hand side of the above formula (\ref{eqn:integral_F_v}) is the Fourier coefficient $\mathcal{F}[c](\xi)$ of the potential function $c$ at a point $\xi$. Thus the reconstruction of $c$ can be achieved by varying the vectors $\xi$, $\xi^{\perp}$ and applying the inverse Fourier transform. \begin{rmk} Notice that, in the current section, the chosen orthonormal basis of the phase space $\mathbb{R}^{2}$ is $\left\{ e_{1} = \frac{\xi}{\left| \xi \right|}, \, e_{2} = \xi^{\perp} \right\}$. Thus the vector $\zeta$ (or $\zeta^{*}$) is determined by $\xi$ and $k$.
Noticing that, for $\left| \xi \right| > 2k$, $\sqrt{k^{2} - \frac{\left| \xi \right|^{2}}{4}} = \mathbf{i} \sqrt{\frac{\left| \xi \right|^{2}}{4} - k^{2}}$, we write the exponential solution $u_{0}$ (or $v$) explicitly by \begin{equation}\label{eqn:numer_u0_xi} u_{0}(x) = e^{\mathbf{i} \zeta \cdot x} = \left\{ \begin{aligned} &\exp \left\{ \mathbf{i} \left( \frac{\xi}{2} + \sqrt{k^{2} - \frac{\left| \xi \right|^{2}}{4}} \, \xi^{\perp} \right) \cdot x \right\}, & \left| \xi \right| \leqslant 2k, \\ &\exp \left\{ \mathbf{i} \frac{\xi}{2} \cdot x \right\} \exp \left\{ - \sqrt{\frac{\left| \xi \right|^{2}}{4} - k^{2}} \, \xi^{\perp} \cdot x \right\}, & \left| \xi \right| > 2k. \end{aligned} \right. \end{equation} Indeed, $u_{0}$ is a plane wave traveling along its wave-vector $\zeta$, whose length equals the wavenumber $k$, if $\left| \xi \right| \leqslant 2k$. Meanwhile, if $\left| \xi \right| > 2k$, the (high) oscillation of the exponential solution $u_{0}$ is observed in the direction $e_{1} = \frac{\xi}{\left| \xi \right|}$ with the spatial frequency $\frac{\left| \xi \right|}{2}$, and the solution decays exponentially along the unit vector $e_{2} = \xi^{\perp}$. Then the numerical calculation in (\ref{eqn:integral}) or (\ref{eqn:integral_F_v}) becomes unstable when $\left| \xi \right| > 2k$. \end{rmk} Now we describe a reconstruction algorithm based on discrete sets of lengths and angles of the vectors in the phase space. We first choose a discrete and finite length set \begin{equation*} \{\kappa_{\ell}\}_{\ell=1}^{M} \subset (0, m k \,] \quad \text{for any fixed\ } k. \end{equation*} Here we choose $m \in \mathbb{N}_{+}$ with $m \geqslant 2$, and $m k$ is the maximum length of the vector $\xi$.
Combining with two unit vector (or angle) sets \begin{equation*} \{\hat{y}_{s}\}_{s=1}^{N} \subset \mathbb{S}^{n-1} \quad \text{and} \quad \{\hat{z}_{s}\}_{s=1}^{N} \subset \mathbb{S}^{n-1}, \end{equation*} which satisfy $\hat{y}_{s} \cdot \hat{z}_{s} = 0$, we denote \begin{equation*} \xi^{\langle \ell;s \rangle} = \kappa_{\ell} \, \hat{y}_{s}, \quad \zeta^{\langle \ell;s \rangle} = \frac{\kappa_{\ell}}{2} \hat{y}_{s} + \sqrt{k^{2} - \frac{\kappa_{\ell}^{2}}{4}} \, \hat{z}_{s}, \quad \zeta^{\langle \ell;s \rangle}_{*} = \frac{\kappa_{\ell}}{2} \hat{y}_{s} - \sqrt{k^{2} - \frac{\kappa_{\ell}^{2}}{4}} \, \hat{z}_{s}, \end{equation*} which are the vectors (or points) chosen in the phase space, for $\ell = 1,2,\cdots,M$ and $s = 1,2,\cdots,N$. More precisely, the superscript notation $\cdot^{\langle \ell;s \rangle}$ refers to the vector $\xi^{\langle \ell;s \rangle}$ with the $\ell$th length $\kappa_{\ell}$ and the $s$th angle $\hat{y}_{s}$. Finally, for the inverse Fourier transform, a numerical quadrature rule can be constructed by a suitable choice of the weights $\sigma^{\langle \ell;s \rangle}$ according to these points $\xi^{\langle \ell;s \rangle}$. We summarize our reconstruction algorithm below. \hrule\hrule {\parindent 0pt \bf Algorithm 1: Reconstruction Algorithm for the Linearized Schr\"{o}dinger Potential Problem} \hrule {\parindent 0pt \bf Input:} $\{\kappa_{\ell}\}_{\ell=1}^{M}$, $\{\hat{y}_{s}\}_{s=1}^{N}$, $\{\hat{z}_{s}\}_{s=1}^{N}$ and $\sigma^{\langle \ell;s \rangle}$; \\[5pt] {\parindent 0pt \bf Output:} Approximated Potential $c_{\rm inv}= c^{\langle M+1;1 \rangle}$. \\[-5pt] \begin{enumerate} \item[1:] $\,$ Set $c^{\langle 1;1 \rangle} := 0$; \item[2:] $\,$ {\bf For} $\ell = 1,2,\cdots,M$ (length~updating) \item[3:] $\,$ \quad {\bf For} $s = 1,2,\cdots,N$ (angle~updating) \item[4:] $\,$ \quad \quad Choose $u_{0} := \exp \{ \mathbf{i} \zeta^{\langle \ell;s \rangle} \cdot x \}$ and $g_{0} := \left.
u_{0} \right|_{\partial \Omega}$; \item[5:] $\,$ \quad \quad Measure the Neumann boundary data $\partial_{\nu} u$ of the forward problem (\ref{eqn:prob}) \item[] $\,$ \quad \qquad given the Dirichlet boundary data $g_{0}$; \item[6:] $\,$ \quad \quad Calculate the approximated Neumann boundary data $g_{1}^{\,\prime} = \partial_{\nu} u - \partial_{\nu} u_{0}$ \item[] $\,$ \quad \qquad on the boundary $\partial \Omega$; \item[7:] $\,$ \quad \quad Choose $v:= \exp \{ \mathbf{i} \zeta^{\langle \ell;s \rangle}_{*} \cdot x \}$ and $w := \big[ u_{0} v \big]^{-1} = \exp \{ -\mathbf{i} \xi^{\langle \ell;s \rangle} \cdot x \}$; \item[8:] $\,$ \quad \quad Compute $\mathcal{F}[c](\xi^{\langle \ell;s \rangle}) \approx \int_{\partial \Omega} g_{1}^{\,\prime} \, v \,\mathrm{d}s_{x}$; \item[9:] $\,$ \quad \quad Update $c^{\langle \ell;s+1 \rangle} = c^{\langle \ell;s \rangle} + \sigma^{\langle \ell;s \rangle} \mathcal{F}[c](\xi^{\langle \ell;s \rangle}) \, w$; \item[10:] $\,$ \quad {\bf End} \item[11:] $\,$ \quad Set $c^{\langle \ell+1;1 \rangle} := c^{\langle \ell;N+1 \rangle}$; \item[12:] $\,$ {\bf End}. \end{enumerate} \hrule\hrule The above Algorithm 1 provides us with a reconstructed potential function $c_{\rm inv}$ that mimics the exact one. The main error between these two potential functions consists of two parts. The first is the approximation error due to the discrete and finite length and angle sets. The second is the numerical error of the approximated linearized DtN map $\Lambda^{\prime}$ in (\ref{eqn:integral_F_v}), where the elliptic equation solvers at large wavenumbers and the numerical differentiation of the Neumann boundary data may induce additional ill-posedness. The error estimate of Algorithm 1 may be carried out by choosing the length and angle sets following the analysis in \cite{BLRX2015,BT2010}. We intend to report such results in a separate work.
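The double loop of Algorithm 1 can be exercised end-to-end without a PDE solver. The sketch below makes two simplifying assumptions of our own: the ``measurement'' of $\mathcal{F}[c](\xi)$ in Step 8 is replaced by a direct volume quadrature standing in for the boundary integral, and $\xi$ is sampled on the integer lattice $\pi \mathbb{Z}^{2}$ (rather than the polar grid above), so the quadrature weights are exactly $\sigma = 1/4$ on $[-1,1]^{2}$. Under these assumptions the update $c^{\langle \ell;s+1 \rangle} = c^{\langle \ell;s \rangle} + \sigma \mathcal{F}[c](\xi) \, w$ recovers a single-mode potential essentially exactly.

```python
import numpy as np

# Skeleton of Algorithm 1 under two simplifying assumptions (ours, not the
# paper's): the "measurement" of F[c](xi) is a direct volume quadrature
# standing in for the boundary integral of Step 8, and xi runs over the
# lattice pi*Z^2 so the inverse-transform weights are exactly sigma = 1/4.
N = 64
h = 2.0 / N
x = -1.0 + (np.arange(N) + 0.5) * h              # midpoint grid on [-1, 1]
X1, X2 = np.meshgrid(x, x, indexing="ij")
c_true = np.cos(np.pi * X1)                      # a single Fourier mode

def measure_fourier(xi):
    """Stand-in for Step 8: F[c](xi) = int_Omega c(x) exp(i xi . x) dx."""
    return np.sum(c_true * np.exp(1j * (xi[0] * X1 + xi[1] * X2))) * h * h

c_inv = np.zeros_like(X1, dtype=complex)         # Step 1: c^{<1;1>} = 0
for a in range(-2, 3):                           # Step 2: "length" loop
    for b in range(-2, 3):                       # Step 3: "angle" loop
        xi = (np.pi * a, np.pi * b)
        F = measure_fourier(xi)                  # Step 8
        w = np.exp(-1j * (xi[0] * X1 + xi[1] * X2))   # Step 7: w = exp(-i xi . x)
        c_inv = c_inv + 0.25 * F * w             # Step 9 with sigma = 1/4

err = np.max(np.abs(c_inv.real - c_true))
print(f"max reconstruction error: {err:.2e}")
```

With boundary data from an actual forward solve, `measure_fourier` would instead evaluate $\int_{\partial \Omega} g_{1}^{\,\prime} \, v \,\mathrm{d}s_{x}$ as in (\ref{eqn:integral_F_v}).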
\section{Numerical examples}\label{se_numer} In this section, we provide some numerical examples verifying the efficiency of the above Algorithm 1. The main computational costs in Algorithm 1 are Step 5 for the forward problem (\ref{eqn:prob}) and Step 9 for the inverse Fourier transform. To avoid the inverse crime, we will use fine grids (for instance, $100 \times 100$ or $200 \times 200$ equal-distance points) for the forward problem and a coarse grid ($90 \times 90$ equal-distance points) for the inversion, both in the square domain $[-1,1]^{2}$. \subsection{CASE 1} In the first case, a circular domain $\Omega$ with a radius $r = 0.7$ and a simple true potential function $c$ are chosen, which are shown in the upper left panel of Figure \ref{fig:1_potential}. The potential function contains one peak pattern and one valley pattern in the bounded domain $\Omega$. The boundary measurement points on $\partial \Omega$ are also presented in the upper left panel of Figure \ref{fig:1_potential}, marked by ``$*$''. The forward problem grid is $100 \times 100$ equal-distance points and the inversion grid is $90 \times 90$ equal-distance points in the domain $[-1,1]^{2}$. In Subsection \ref{subse_4.1.3}, we will show that one can improve the resolution of the reconstructed potential function if a finer grid for the forward problem is chosen. \begin{figure} \caption{Upper left: the true potential function $c$. Upper right: the nine slope lines in the phase space. The red ``$\circ$'' represents a sample point $\xi = \kappa \hat{y}$ \ldots} \label{fig:1_potential} \end{figure} \subsubsection{CASE 1: General recovery} To realize the Fourier transform $\mathcal{F}[\cdot]$, we use the following sampling points $\xi$ in the phase space, which are marked by ``$*$'' in the upper right panel of Figure \ref{fig:1_potential}. The moduli of $\xi$, that is, $\kappa = \left| \xi \right|$, are equally distributed in the interval $[1,50]$ with step size $0.2$.
The angle between two adjacent slope lines is $\frac{2}{9} \pi$. For example, the red ``$\circ$'' in the upper right panel of Figure \ref{fig:1_potential} is a point in the phase space with an angle $\pi$ and a modulus $10.2$. Thus, the coordinate of this point $\xi$ in the phase space is $(-10.2,0)$, or $\kappa = 10.2$ and $\hat{y} = (\cos(\pi),\sin(\pi)) = (-1,0)$, i.e., $\xi = \kappa \hat{y}$. Fourier coefficients of the true potential function near the nine slope lines are presented in the bottom panel of Figure \ref{fig:1_potential}. By varying different $\hat{y}$ (or corresponding $\hat{z}$) and different $\kappa = \left| \xi \right|$ in the given sets, we obtain different exponential solutions (\ref{eqn:numer_u0}) (or (\ref{eqn:numer_u0_xi})). In particular, we present two different $u_{0}$ (real part only) in Figure \ref{fig:u0g0_k15p2_xi8p4} for $\kappa \leqslant 2k$ and Figure \ref{fig:u0g0_k15p2_xi32p6} for $\kappa > 2k$, with $k = 15.2$ fixed. The real part of the corresponding Dirichlet boundary data $g_{0}$ for the linearized problem is also presented in these two figures. For instance, in Figure \ref{fig:u0g0_k15p2_xi8p4} (left), the exponential solution $u_{0}$ with a small length $\kappa = 8.4$ and $\hat{y} = (-0.17,0.98)$ is indeed a plane wave. In Figure \ref{fig:u0g0_k15p2_xi8p4} (middle), we present its Dirichlet boundary data $g_{0}$. By substituting the Dirichlet boundary data $g_{0}$ into the original problem (\ref{eqn:prob}), we obtain the solution $u$ and its Neumann boundary data $g_{1} = \partial_{\nu} u$, which are shown in Figure \ref{fig:u0g0_k15p2_xi8p4} (right).
In order to obtain the measurable data $g_{1}^{\,\prime}$ of the linearized DtN map (\ref{eqn:dtn_p}), we subtract the benchmark solution $u_{0}$ from the solution $u$ in Figure \ref{fig:1_u1g1p_k15p2_xi8p4} (left), and measure the linearized data $g_{1}^{\,\prime} = g_{1} - \partial_{\nu} u_{0}$ on the boundary $\partial \Omega$, which is shown in Figure \ref{fig:1_u1g1p_k15p2_xi8p4} (right). \begin{figure} \caption{Set $k = 15.2$, $\kappa = 8.4$ and $\hat{y}$ \ldots} \label{fig:u0g0_k15p2_xi8p4} \end{figure} \begin{figure} \caption{Set $k = 15.2$, $\kappa = 8.4$ and $\hat{y}$ \ldots} \label{fig:1_u1g1p_k15p2_xi8p4} \end{figure} The second exponential solution $u_{0}$ with a large length $\kappa = 32.6$ and $\hat{y} = (-0.77,-0.64)$ is shown in Figure \ref{fig:u0g0_k15p2_xi32p6} (left), where exponential decay in the direction $\hat{z}$ is observed. The Dirichlet boundary data $g_{0}$ and the Neumann boundary data $g_{1} = \partial_{\nu} u$ are shown in Figure \ref{fig:u0g0_k15p2_xi32p6} (middle and right) respectively. The difference between $u$ and $u_{0}$ and the measurable data $g_{1}^{\,\prime} = g_{1} - \partial_{\nu} u_{0}$ of the linearized DtN map are shown in Figure \ref{fig:1_u1g1p_k15p2_xi32p6}. Compared with the moderate linearized difference for $\kappa = 8.4$ shown in Figure \ref{fig:1_u1g1p_k15p2_xi8p4}, we observe a large amplitude of the linearized difference in the right panel of Figure \ref{fig:1_u1g1p_k15p2_xi32p6} for $\kappa = 32.6$. This phenomenon further yields an unstable calculation of the Fourier coefficients when $\kappa > 2k$. \begin{figure} \caption{Set $k = 15.2$, $\kappa = 32.6$ and $\hat{y}$ \ldots} \label{fig:u0g0_k15p2_xi32p6} \end{figure} \begin{figure} \caption{Set $k = 15.2$, $\kappa = 32.6$ and $\hat{y}$ \ldots} \label{fig:1_u1g1p_k15p2_xi32p6} \end{figure} Noticing that the exponential solution $v$ in (\ref{eqn:numer_v}) is similar to $u_{0}$, we use the same sampling points $\xi$ in the upper right panel of Figure \ref{fig:1_potential} for $v$.
However, to illustrate the difference: for any given wavenumber $k$ and vector $\xi = \kappa \hat{y}$, the directions of the two wave-vectors $\zeta = \frac{\kappa}{2} \hat{y} + \sqrt{k^{2} - \frac{\kappa^{2}}{4}} \, \hat{z}$ and $\zeta^{*} = \frac{\kappa}{2} \hat{y} - \sqrt{k^{2} - \frac{\kappa^{2}}{4}} \, \hat{z}$ are not the same. Indeed, since these vectors obey $\hat{y} \cdot \hat{z} = 0$, the wave-vectors $\zeta$, $\zeta^{*}$ of the exponential solutions $u_{0}$ and $v$ are symmetric with respect to the vector $\xi$ (or $\hat{y}$). Thus, we can reflect the pattern of the exponential solution $u_{0}$ with respect to the vector $\xi$ to obtain the pattern of the solution $v$. The key step of the reconstruction Algorithm 1 is to compute the value of each Fourier coefficient $\mathcal{F}[c](\xi)$ with different $\xi = \kappa \hat{y}$ for any fixed $k$ by employing the formula (\ref{eqn:integral_F_v}). The numerical coefficients are collected in Figure \ref{fig:1_fourier_k15p2} when $k = 15.2$. Each curve there represents the recovered values of the Fourier coefficients on one slope of sampling points $\xi$ in the upper right panel of Figure \ref{fig:1_potential}, while the horizontal axis is the length interval $(0,50]$ for $\kappa = \left| \xi \right|$. We shall emphasize that the chosen wavenumber $k = 15.2$ avoids the eigenvalues of the Laplacian operator in the domain $\Omega$. On the other hand, by choosing a wavenumber $k$ near the eigenvalues, we may recover a potential function with lower resolution, which will be presented in the next subsection. \begin{figure} \caption{Set $k = 15.2$. The recovered Fourier coefficients $\mathcal{F}[c](\xi)$ \ldots} \label{fig:1_fourier_k15p2} \end{figure} Finally, we implement the inverse Fourier transform to reconstruct the potential function $c$. To obtain such a reconstruction, we need to choose a truncated value $K$ in the phase space such that all the Fourier coefficients with $\kappa \leqslant K$ are used to reconstruct an approximant.
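The choice of the truncated value matters because of the evanescent branch of (\ref{eqn:numer_u0_xi}): for $\kappa > 2k$ the exponential solution decays at the rate $\sqrt{\kappa^{2}/4 - k^{2}}$, so the boundary data must resolve an exponentially large dynamic range. A rough estimate (ours) with the values $k = 15.2$, $\kappa = 32.6$ used above and the CASE 1 domain radius $r = 0.7$:

```python
import math

# Dynamic range of the evanescent solution across the domain diameter 2r,
# for the values k = 15.2, kappa = 32.6 and r = 0.7 appearing in CASE 1.
k, kappa, r = 15.2, 32.6, 0.7
rate = math.sqrt(kappa**2 / 4.0 - k**2)      # decay rate along xi_perp when kappa > 2k
amplification = math.exp(rate * 2 * r)       # growth factor across the diameter
print(f"decay rate ~{rate:.2f}, dynamic range ~{amplification:.0f}")
```

Roughly speaking, boundary-data errors are then amplified by a factor of a few thousand, consistent with the unstable Fourier coefficients observed beyond $\kappa = 2k$.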
By choosing different truncated values $K = m k$ with $m = 1, 2, 3$, we collect the reconstructed functions $c_{\rm inv}$ in Figure \ref{fig:1_c_inv_k15p2}. More precisely, the dashed lines in Figure \ref{fig:1_fourier_k15p2} highlight the threshold values $k = 15.2$, $2 k = 30.4$ and $3 k = 45.6$. The Fourier coefficients $\mathcal{F}[c](\xi)$ with lengths $\kappa = \left| \xi \right|$ smaller than the threshold values, i.e. $\kappa \in (0,m k\,]$, are used to generate $c_{\rm inv}$ respectively. As one can observe, we obtain a stable reconstructed potential function $c_{\rm inv}$ with the threshold values $k$ and $2 k$ in Figure \ref{fig:1_c_inv_k15p2} (left and middle), because all the Fourier coefficients $\mathcal{F}[c](\xi)$ are rather small for $\kappa \leqslant k$ or $\kappa \leqslant 2k$. The more Fourier coefficients with $\kappa \leqslant 2k$ are used, the better the resolution of the reconstructed potential $c_{\rm inv}$. On the other hand, if we increase the truncated value to $3 k$, we observe some high frequency patterns in the right panel of Figure \ref{fig:1_c_inv_k15p2}, which are also reflected in the (blowing-up) Fourier coefficients when $\kappa > 2k$; see Figure \ref{fig:1_fourier_k15p2}, or refer to the estimate (\ref{eq_nminus1bound}) in the proof of Theorem \ref{thm_1}. In the current work, we vary the truncated value $K=mk$, $m=1,2,3$, but in practical calculations we suggest choosing the truncated value $K=2k$. \begin{figure} \caption{Set $k = 15.2$. The reconstructed potential function $c_{\rm inv}$ \ldots} \label{fig:1_c_inv_k15p2} \end{figure} We shall note that a large value of the wavenumber $k$ allows us to use more (stable) Fourier coefficients in the phase space when $\kappa \leqslant 2k$. To visualize this difference, we present the reconstructed potential function $c_{\rm inv}$ in Figure \ref{fig:1_c_inv_k5} by choosing $k = 5$ and the truncated values $K = 5, 10, 15$ respectively. \begin{figure} \caption{Set $k = 5$.
The reconstructed potential function $c_{\rm inv}$ \ldots} \label{fig:1_c_inv_k5} \end{figure} \subsubsection{CASE 1: Recovery at a large wavenumber near eigenvalues} In this subsection, we numerically investigate the consequences of choosing a large wavenumber near the eigenvalues of the Laplacian operator in the circular domain $\Omega$. A particular choice of such a large wavenumber is $k = 12.3625$. In Figure \ref{fig:1_c_inv_k12p4}, we present the reconstructed Fourier coefficients and the reconstructed potential function $c_{\rm inv}$ analogously to the previous subsection by choosing the truncated value $K = 2k = 24.7250$. Noticing that the high frequency patterns appear in Figure \ref{fig:1_c_inv_k12p4} (right), we also observe (blowing-up) Fourier coefficients compared with those in Figure \ref{fig:1_fourier_k15p2}. \begin{figure} \caption{Recovery by using the wavenumber $k = 12.3625$. Left: the recovered Fourier coefficients $\mathcal{F}[c](\xi)$ \ldots} \label{fig:1_c_inv_k12p4} \end{figure} \subsubsection{CASE 1: Finer grids and fewer Fourier modes}\label{subse_4.1.3} We further focus on two other numerical aspects that enhance or weaken the resolution of the reconstructed potential function $c_{\rm inv}$. The first aspect is the grid of the forward problem. In the above two subsections, we chose a $100 \times 100$ equal-distance points grid in $[-1, 1]^{2}$. The chosen grid may capture some solution patterns if the wavenumber $k$ is small. But when the wavenumber $k$ increases, for instance from $5$ to $15.2$, the high frequency patterns induced by large wavenumbers may not be accurately resolved on the grid. To alleviate this difficulty, we consider a finer $200 \times 200$ equal-distance points grid in $[-1, 1]^{2}$. By implementing the same approach as in the above subsection, we present the Fourier coefficients and the reconstructed potential function $c_{\rm inv}$ in Figure \ref{fig:1_c_inv_k15p2_finergrid} for $k = 15.2$ again.
As one can observe, the Fourier coefficients on the finer grid become more accurate in the large length interval ($\kappa \in [20,30]$) than those on the original grid. The reconstructed potential function $c_{\rm inv}$ also shows enhanced resolution (in the peak pattern) if we choose the truncated value $K = 2 k = 30.4$; see Figure \ref{fig:1_c_inv_k15p2_finergrid} (right). \begin{figure} \caption{Finer grids. Set $k = 15.2$. Left: the recovered Fourier coefficients $\mathcal{F}[c](\xi)$ \ldots} \label{fig:1_c_inv_k15p2_finergrid} \end{figure} We also investigate the influence of the observation angles, i.e. the limited-angle slope lines in the upper right panel of Figure \ref{fig:1_potential}. In Figure \ref{fig:1_c_inv_k15p2_finergrid_phase}, we show the recovered potential functions with $2$, $3$, $7$ different slope lines by choosing the truncated value $K = 2 k$ and $k = 15.2$. Compared with the result in Figure \ref{fig:1_c_inv_k15p2_finergrid}, we observe that the more slope lines are used, the more accurately the potential function is recovered, because of the gained Fourier coefficients. In principle, one can include more angle slope lines to obtain better resolution. \begin{figure} \caption{The reconstructed potential function $c_{\rm inv}$ \ldots} \label{fig:1_c_inv_k15p2_finergrid_phase} \end{figure} \subsection{Another numerical example} Another example considers a complicated potential function $c$ in Figure \ref{fig:2} (left) with the same circular domain $\Omega$. Similar to the previous example, the same set of the sampling points $\xi$ in the phase space is utilized, referring to the upper right panel of Figure \ref{fig:1_potential}. For the sake of simplicity, we skip all the discussion on the reconstruction algorithm and provide the reconstructed potential function $c_{\rm inv}$ with the truncated value $K=2 k$ for $k=5.0$ and $k = 20.0$ respectively in Figure \ref{fig:2} (middle \& right).
Similarly, we have chosen a finer $200 \times 200$ equal-distance points grid in the domain $[-1, 1]^{2}$ for the forward problem. Comparing the reconstructed potential functions $c_{\rm inv}$ for $k=5$ and $k=20$, we clearly visualize the improved resolution at the large wavenumber. \begin{figure} \caption{Left: the true potential function $c$. Middle and right: its reconstruction $c_{\rm inv}$ \ldots} \label{fig:2} \end{figure} \section{Conclusion}\label{se_con} The original inverse problem for the Schr\"{o}dinger potential is ill-posed and nonlinear (moreover, non-convex). To study stability, we prefer to focus on the most serious difficulty, namely ill-conditioning, and we linearize the inverse problem to avoid additional difficulties with multiple local minima. A numerical solution of the linearized problem is much faster and more reliable and is quite satisfactory in many applications. We demonstrated that the recovery of the Schr\"{o}dinger potential from all boundary data at a fixed energy/wavenumber improves dramatically for larger $k$. There is no doubt that this improvement will be more significant when recovering more complicated $c$ or treating the full nonlinear problem. These results promise a better numerical reconstruction of the conductivity coefficient in the stationary Maxwell system at higher wavenumbers, as predicted analytically in \cite{ILW2016}. We intend to work on the reconstruction, at least in the linearized version, in the near future. There are good expectations that such numerical results will be a solid base for a serious improvement in electrical impedance tomography, which now suffers from very low resolution. As is known, this type of tomography has many geophysical and medical applications. \end{document}
\begin{document} \author{G\'erard Endimioni} \address{C.M.I-Universit\'{e} de Provence, 39, rue F. Joliot-Curie, F-13453 Marseille Cedex 13} \email{[email protected]} \title[Polynomial Functions]{On the Group of Polynomial Functions in a Group} \subjclass[2000]{20F16, 20F18} \begin{abstract} Let $G$ be a group and let $n$ be a positive integer. A polynomial function in $G$ is a function from $G^n$ to $G$ of the form $(t_{1},\ldots,t_{n})\to f(t_{1},\ldots,t_{n})$, where $f(x_{1},\ldots,x_{n})$ is an element of the free product of $G$ and the free group of rank $n$ freely generated by $x_{1},\ldots,x_{n}$. There is a natural definition for the product of two polynomial functions; equipped with this operation, the set $\overline{G}[x_{1},\ldots,x_{n}]$ of polynomial functions is a group. We prove that this group is polycyclic if and only if $G$ is finitely generated, soluble, and nilpotent-by-finite. In particular, if the group of polynomial functions is polycyclic, then necessarily it is nilpotent-by-finite. Furthermore, we prove that $G$ itself is polycyclic if and only if the subgroup of polynomial functions which send $(1,\ldots,1)$ to $1$ is finitely generated and soluble. \end{abstract} \maketitle \section{Introduction and Main Results} Let $G$ be a group and let $F_{n}$ be the free group of rank $n>0$ freely generated by $x_{1},\ldots,x_{n}$. More or less explicitly, the free product $G[x_{1},\ldots,x_{n}]:=G \ast F_{n}$ frequently occurs in group theory, for example in the study of equations in groups. Actually, in the class of $G$-groups, $G[x_{1},\ldots,x_{n}]$ plays the role of the polynomial ring $K[x_{1},\ldots,x_{n}]$ in the class of $K$-algebras (following the terminology used in \cite{BMR}, a $G$-group is by definition a group containing a designated copy of $G$).
If $f(x_{1},\ldots,x_{n})\in G[x_{1},\ldots,x_{n}]$ is a ``polynomial'', one can define in an obvious way the associated polynomial function $(t_{1},\ldots,t_{n})\to f(t_{1},\ldots,t_{n})$, which is a map from $G^n$ to $G$. We shall denote by $\overline{G}[x_{1},\ldots,x_{n}]$ the set of polynomial functions (understood: in $n$ variables and with coefficients in $G$). When $n=1$, we shall write $\overline{G}[x]$ instead of $\overline{G}[x_{1}]$. We define in a natural way the product of two polynomial functions by carrying over to $\overline{G}[x_{1},\ldots,x_{n}]$ the product of $G[x_{1},\ldots,x_{n}]$. In other words, the product of $(t_{1},\ldots,t_{n})\to f(t_{1},\ldots,t_{n})$ and $(t_{1},\ldots,t_{n})\to g(t_{1},\ldots,t_{n})$ is equal to the polynomial function $$(t_{1},\ldots,t_{n})\to f(t_{1},\ldots,t_{n})g(t_{1},\ldots,t_{n}).$$ Equipped with this operation, $\overline{G}[x_{1},\ldots,x_{n}]$ is a group and the map sending $f(x_{1},\ldots,x_{n})\in G[x_{1},\ldots,x_{n}]$ to the polynomial function $(t_{1},\ldots,t_{n})\to f(t_{1},\ldots,t_{n})$ is an epimorphism. In \cite{BMR}, the authors define notions of zero divisor, ideal, etc. in a group (more precisely, in a $G$-group); they then obtain a set of results showing a surprising similarity to algebraic geometry. In particular, a notion of ``equationally Noetherian group'' is introduced (see \cite{BMR} for a definition) and an analogue of Hilbert's basis theorem is proposed as a conjecture \cite[p.42]{BMR}. The notion of ``equationally Noetherian group'' is different from the usual notion of Noetherian group. Recall here that a Noetherian group is a group satisfying the maximal condition on its subgroups; a polycyclic group is a soluble Noetherian group. The aim of this paper is to investigate the connections between a group $G$ and the group $\overline{G}[x_{1},\ldots,x_{n}]$ of its polynomial functions with respect to the property of polycyclicity.
Remark in passing that such a question is not really interesting for the group $G[x_{1},\ldots,x_{n}]$. Indeed, if $A$ and $B$ are groups such that $A\neq \{ 1\}$ and $|B|\geq 3$, the free product $A\ast B$ contains a free subgroup of rank 2 \cite[p.177]{LS} and so is not Noetherian. Using this result, it is not hard to see that $G[x_{1},\ldots,x_{n}]$ is Noetherian if and only if $G$ is trivial and $n=1$. First we characterize the groups $G$ such that $\overline{G}[x_{1},\ldots,x_{n}]$ is polycyclic. Notice that in this case, our characterization shows that $\overline{G}[x_{1},\ldots,x_{n}]$ is then necessarily nilpotent-by-finite. \begin{thm} For any group $G$ and for any positive integer $n$, the following assertions are equivalent: \noindent {\rm (i)} $\overline{G}[x_{1},\ldots,x_{n}]$ is polycyclic; \noindent {\rm (ii)} $G$ is finitely generated, soluble, and nilpotent-by-finite; \noindent {\rm (iii)} $\overline{G}[x_{1},\ldots,x_{n}]$ is finitely generated, soluble, and nilpotent-by-finite. \end{thm} An immediate consequence of this theorem is the following. \begin{cor} Let $G$ be a group such that $\overline{G}[x]$ is polycyclic; then so is $\overline{G}[x_{1},\ldots,x_{n}]$ for any positive integer $n$. \end{cor} Theorem 1.1 shows in particular that if $G$ is polycyclic, then $\overline{G}[x_{1},\ldots,x_{n}]$ is not necessarily polycyclic. The next theorem characterizes in terms of polynomial functions the case where $G$ is polycyclic. Before stating this result, we introduce a subset of polynomial functions: we denote by $\overline{G}_{1}[x_{1},\ldots,x_{n}]$ the set of polynomial functions which send the $n$-tuple $(1,\ldots,1)$ to $1$. This set is obviously a normal subgroup of $\overline{G}[x_{1},\ldots,x_{n}]$. More precisely, it is easy to see that $\overline{G}[x_{1},\ldots,x_{n}]$ is the (internal) semidirect product of $\overline{G}_{1}[x_{1},\ldots,x_{n}]$ and the subgroup of constant polynomial functions (isomorphic to $G$).
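The semidirect product decomposition just mentioned can be exhibited explicitly; the following one-line factorization (our own illustration, not taken from \cite{BMR}) shows it.

```latex
% Given f \in \overline{G}[x_1,\ldots,x_n], set c = f(1,\ldots,1) \in G.
% Then c^{-1}f sends (1,\ldots,1) to 1, so c^{-1}f \in
% \overline{G}_1[x_1,\ldots,x_n], and
\[
  f(t_1,\ldots,t_n)
  \;=\;
  \underbrace{f(1,\ldots,1)}_{\text{constant function}}
  \cdot
  \underbrace{f(1,\ldots,1)^{-1}\, f(t_1,\ldots,t_n)}_{\in\;\overline{G}_1[x_1,\ldots,x_n]},
\]
% a factorization which is clearly unique; together with the normality
% of \overline{G}_1[x_1,\ldots,x_n] this gives the semidirect product.
```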
We can now state our result. \begin{thm} Let $G$ be a finitely generated group and let $n$ be a positive integer. Then $G$ is a polycyclic group if and only if $\overline{G}_{1}[x_{1},\ldots,x_{n}]$ is a finitely generated soluble group. \end{thm} Notice that in this theorem, it is necessary to assume that $G$ is finitely generated: for example, if $G$ is an abelian group which is not finitely generated, then $\overline{G}_{1}[x_{1},\ldots,x_{n}]$ is finitely generated and abelian but $G$ is not polycyclic. \section{Proof of Theorem 1.1} Consider a group $G$ and an element $v=(a_{1},\ldots,a_{n})\in G^n$. The map $\phi_{v}: \overline{G}[x_{1},\ldots,x_{n}] \to G$ defined by $\phi_{v}(f)=f(v)$ (for all $f\in \overline{G}[x_{1},\ldots,x_{n}]$) is clearly a homomorphism. Moreover, the intersection $\displaystyle \bigcap_{v\in G^n}^{}\ker \phi_{v}$ is trivial. It follows that any law of $G$ is also a law for $\overline{G}[x_{1},\ldots,x_{n}]$. Conversely, since $\overline{G}[x_{1},\ldots,x_{n}]$ contains a copy of $G$ (the subgroup of constant polynomial functions), each law of $\overline{G}[x_{1},\ldots,x_{n}]$ is a law for $G$. Thus we can state: \begin{lem} For any group $G$ and for any positive integer $n$, the variety generated by $G$ coincides with the variety generated by $\overline{G}[x_{1},\ldots,x_{n}]$ (that is, $G$ and $\overline{G}[x_{1},\ldots,x_{n}]$ have the same set of laws). \end{lem} Now we introduce some notation used in the statement of the next lemma. Let $a,b$ be elements of a group $G$. As usual, $[a,\,_{k}b]$ is defined for each integer $k\geq 0$ by $[a,\,_{0}b]=a$ and $[a,\,_{k+1}b]=[[a,\,_{k}b],b]$ (where $[a,b]=a^{-1}b^{-1}ab$). We shall write $\langle a^{\langle a,b \rangle} \rangle$ for the normal closure of $a$ in the subgroup generated by $a$ and $b$ and $\langle a^{\langle a,b \rangle} \rangle '$ for its derived subgroup.
Suppose that there exists a relation of the form $$w(a,b)[a,\ _{r}b]^{e_0}[a,\ _{r+1}b]^{e_1}\ldots [a,\ _{r+s}b]^{e_{s}}=1$$ with $r,s \in {\mathbb N}$, $e_0,e_1,\ldots ,e_s\in {\mathbb Z}$ ($e_0, e_{s}\neq 0$) and $w(a,b)\in \langle a^{\langle a,b \rangle} \rangle '$. We denote by $\Omega_{\star}(a,b)$ (respectively $\Omega^{\star}(a,b)$) the least integer $\vert e_0\vert$ (respectively $\vert e_s\vert$) with this property. If $a$ and $b$ do not satisfy a relation of the previous form, we set $\Omega_{\star}(a,b)=\Omega^{\star}(a,b)=+\infty$. In this way, $\Omega_{\star}$ and $\Omega^{\star}$ are two functions from $G^2$ to the set ${\mathbb N}^{*}\cup \{ +\infty \}$. We proved in \cite{EN} the following result: \begin{lem} \cite[Corollary 1]{EN} Let $G$ be a finitely generated soluble group. Then $G$ is nilpotent-by-finite if and only if for any $a,b\in G$, $\Omega^{\star}(a,b)=1$ and the sequence $\bigl(\Omega_{\star}(a,b^k)\bigr)_{k>0}$ is bounded. \end{lem} \noindent {\em Proof of Theorem 1.1.} (i)$\Rightarrow$(ii). Consider a group $G$ such that $\overline{G}[x_{1},\ldots,x_{n}]$ is polycyclic for some positive integer $n$. Actually, since plainly $\overline{G}[x_{1},\ldots,x_{n}]$ contains a copy of $\overline{G}[x]$, we can assume that $n=1$. First remark that $G$ is a finitely generated soluble group, since $\overline{G}[x]$ contains a copy of $G$. It remains to prove that $G$ is nilpotent-by-finite. Let $a$ be an element in $G$. Consider the subgroup $H\leq \overline{G}[x]$ generated by the polynomial functions $f_{k}:t\to [a,\,_{k} t]$, for all positive integers $k$. In fact, there exists an integer $m>0$ such that $f_{1},\ldots,f_{m}$ generate $H$, since $\overline{G}[x]$ is polycyclic. In particular, we can write $f_{m+1}$ as a product of factors of the form $f_{j}^{\epsilon_{j}}$ ($j=1,\ldots,m$, $\epsilon_{j}\in {\mathbb Z}$).
From this writing, we deduce a relation of the form $$f_{m+1}=w(a,x)f_{1}^{e_{1}}\ldots f_{m}^{e_{m}} \;\;\; (e_{j}\in {\mathbb Z}),$$ where $w(a,x)$ belongs to the derived subgroup of $H$. It follows that in $G$ we have the equality $$[a,\,_{m+1} b]= w(a,b)[a,\, b]^{e_{1}}[a,\,_{2} b]^{e_{2}}\ldots [a,\,_{m} b]^{e_{m}} \;\;\; ({\rm for \: all}\: b\in G),$$ with $w(a,b)\in \langle a^{\langle a,b \rangle} \rangle '$. Notice that this relation is independent of $b$. Hence $\Omega^{\star}(a,b)=1$ and the sequence $\bigl(\Omega_{\star}(a,b^k)\bigr)_{k>0}$ is bounded. Therefore, by Lemma 2.2, $G$ is nilpotent-by-finite. \noindent (ii)$\Rightarrow$(iii). Now suppose that the group $G$ is finitely generated, soluble and nilpotent-by-finite. More precisely, suppose that $G$ is (nilpotent of class $\nu$)-by-(exponent $\epsilon$), soluble of derived length $\rho$, and generated by $g_{1},\ldots ,g_{d}$. Then, by Lemma 2.1, $\overline{G}[x_{1},\ldots,x_{n}]$ is also (nilpotent of class $\nu$)-by-(exponent $\epsilon$) and soluble of derived length $\rho$. Besides, $\overline{G}[x_{1},\ldots,x_{n}]$ is finitely generated: it is plain that this group is generated by the $d$ constant functions $(t_{1},\ldots,t_{n})\to g_{i}$ ($i=1,\ldots,d$) and by the $n$ functions $(t_{1},\ldots,t_{n})\to t_{j}$ ($j=1,\ldots,n$). \noindent (iii)$\Rightarrow$(i). This last implication is an immediate consequence of well-known results. \null $\square$ \section{Proof of Theorem 1.2} Let $\varphi$ be an automorphism of a finitely generated abelian group $A$ (written additively). Then, by a result of Cohen \cite{CO}, one can find a monic polynomial $P\in {\mathbb Z}[T]$ with constant term $1$ such that $P(\varphi)=0$ (see also \cite[Theorem 2]{HR}).
If $P=T^{\lambda}+\epsilon_{\lambda-1}T^{\lambda-1}+ \cdots +\epsilon_{2}T^2 +\epsilon_{1}T+1$, we can state this result in multiplicative notation as follows: \begin{lem} \cite{CO} Let $\varphi$ be an automorphism of a finitely generated abelian group $A$ (written multiplicatively). Then there exist a positive integer $\lambda$ and integers $\epsilon_{1},\ldots, \epsilon_{\lambda-1}$ such that $$\varphi^{\lambda}(a)\varphi^{\lambda-1}(a)^{\epsilon_{\lambda-1}} \ldots \varphi^2(a)^{\epsilon_{2}} \varphi(a)^{\epsilon_{1}}a=1$$ for all $a\in A$. \end{lem} The next lemma is an extension of Lemma 3.1 to polycyclic groups. \begin{lem} Let $\varphi$ be an automorphism of a polycyclic group $G$. Then there exist positive integers $\mu_{1}, \ldots,\mu_{k}$ (with $\mu_{i}<\mu_{k}$ for $i=1,\ldots,k-1$) and integers $\eta_{1},\ldots, \eta_{k-1}$ such that $$\varphi^{\mu_{k}}(t)\varphi^{\mu_{k-1}}(t)^{\eta_{k-1}} \ldots \varphi^{\mu_{2}}(t)^{\eta_{2}} \varphi^{\mu_{1}}(t)^{\eta_{1}}t=1$$ for all $t\in G$. \end{lem} \begin{proof} We argue by induction on the derived length $\rho$ of $G$. For $\rho =1$, the result is given by Lemma 3.1. Now consider the case $\rho >1$ and suppose that the result holds for $\rho -1$. Put $A=G^{(\rho -1)}$. Since the subgroup $A$ is characteristic, $\varphi$ induces an automorphism of $G/A$. By the inductive hypothesis applied to $G/A$ with this automorphism, there exist positive integers $\mu_{1}, \ldots,\mu_{k}$ ($\mu_{i}<\mu_{k}$ for $i=1,\ldots,k-1$) and integers $\eta_{1},\ldots, \eta_{k-1}$ such that $$\varphi^{\mu_{k}}(t)\varphi^{\mu_{k-1}}(t)^{\eta_{k-1}} \ldots \varphi^{\mu_{2}}(t)^{\eta_{2}} \varphi^{\mu_{1}}(t)^{\eta_{1}}t$$ belongs to $A$ for all $t\in G$. Now notice that $\varphi$ defines by restriction an automorphism of $A$.
Apply Lemma 3.1 to $A$ with this automorphism; thus there exist a positive integer $\lambda$ and integers $\epsilon_{1},\ldots, \epsilon_{\lambda-1}$ such that $$\varphi^{\lambda}(a)\varphi^{\lambda-1}(a)^{\epsilon_{\lambda-1}} \ldots \varphi^2(a)^{\epsilon_{2}} \varphi(a)^{\epsilon_{1}}a=1$$ for all $a\in A$. Clearly, by taking $$a=\varphi^{\mu_{k}}(t)\varphi^{\mu_{k-1}}(t)^{\eta_{k-1}} \ldots \varphi^{\mu_{2}}(t)^{\eta_{2}} \varphi^{\mu_{1}}(t)^{\eta_{1}}t$$ in this relation (for any $t\in G$), we obtain the required result. \end{proof} \begin{lem} Let $\varphi$ be an automorphism of a polycyclic group $G$. Then there exists a positive integer $\xi$ such that, for each integer $\zeta\geq \xi$ (resp. $\zeta\leq -\xi$), there exist a positive integer $m$ and integers $\xi_{1}, \ldots,\xi_{m},\theta_{1},\ldots, \theta_{m}$ with $0\leq \xi_{i}<\xi$ (resp. $-\xi < \xi_{i}\leq 0$) for each $i=1,\ldots,m$, such that $\varphi^{\zeta}(t)= \varphi^{\xi_{1}}(t)^{\theta_{1}} \ldots \varphi^{\xi_{m}}(t)^{\theta_{m}} $ for all $t\in G$. \end{lem} \begin{proof} Set $\xi=\mu_{k}$, where $\mu_{k}$ is the integer defined in Lemma 3.2. If $\zeta=\xi$, Lemma 3.2 gives the required relation, namely $$\varphi^{\xi}(t)=t^{-1}\varphi^{\mu_{1}}(t)^{-\eta_{1}}\ldots \varphi^{\mu_{k-1}}(t)^{-\eta_{k-1}}\;\;\; ({\rm for\: all}\: t\in G).$$ By using this last relation, an easy induction proves the property for all $\zeta\geq\xi$. When $\zeta\leq -\xi$, the argument is similar, but one uses the relation $$ \varphi^{-\xi}(t)=\varphi^{-\mu_{k}}(t)=\varphi^{\mu_{1}-\xi}(t)^{-\eta_{1}} \varphi^{\mu_{2}-\xi}(t)^{-\eta_{2}} \ldots \varphi^{\mu_{k-1}-\xi}(t)^{-\eta_{k-1}}t^{-1},$$ which follows from Lemma 3.2. \end{proof} The proof of the next lemma will be omitted: the first part is a direct consequence of Lemma 3.3 and the second part follows from the first part by induction on $d$. \begin{lem} Let $G$ be a polycyclic group.
Then: \noindent {\rm (i)} If $\varphi$ is an automorphism of $G$, there exists a positive integer $\xi$ such that, for each $\zeta\in {\mathbb Z}$, there exist a positive integer $m$ and integers $\xi_{1}, \ldots,\xi_{m},\theta_{1},\ldots, \theta_{m}$, with $|\xi_{i}| <\xi$ for $i=1,\ldots,m$, such that $\varphi^{\zeta}(t)= \varphi^{\xi_{1}}(t)^{\theta_{1}} \ldots \varphi^{\xi_{m}}(t)^{\theta_{m}} $ for all $t\in G$; \noindent {\rm (ii)} More generally, if $\varphi_{1},\ldots,\varphi_{d}$ are $d$ automorphisms of $G$, there exists a positive integer $\xi$ such that, for each $(\zeta_{1},\ldots,\zeta_{d})\in {\mathbb Z}^d$, there exist a positive integer $m$ and integers $\xi_{1,j}, \ldots,\xi_{m,j}$ ($j=1,\ldots,d$), $\theta_{1},\ldots, \theta_{m}$, with $|\xi_{i,j}| <\xi$ for $i=1,\ldots,m$ and $j=1,\ldots,d$, such that $$\varphi_{1}^{\zeta_{1}}\circ \ldots \circ \varphi_{d}^{\zeta_{d}}(t)= \left( \varphi_{1}^{\xi_{1,1}}\circ \ldots \circ \varphi_{d}^{\xi_{1,d}}(t)\right)^{\theta_{1}} \ldots \left( \varphi_{1}^{\xi_{m,1}}\circ \ldots \circ \varphi_{d}^{\xi_{m,d}}(t)\right)^{\theta_{m}}$$ for all $t\in G$. \end{lem} Finally, as a particular case, we obtain the following result, essential in the proof of Theorem 1.2. \begin{lem} Let $b_{1},\ldots,b_{d}$ be $d$ fixed elements of a polycyclic group $G$ and let $H$ be the subgroup of $\overline{G}[x]$ generated by the polynomial functions (in one variable) of the form $t\to b_{d}^{-\beta_{d}} \ldots b_{1}^{-\beta_{1}} t b_{1}^{\beta_{1}}\ldots b_{d}^{\beta_{d}}$, with $\beta_{1},\ldots,\beta_{d}\in {\mathbb Z}$. Then there exists a positive integer $\xi$ such that $H$ is generated by the polynomial functions of the form $$t\to b_{d}^{-\beta_{d}} \ldots b_{1}^{-\beta_{1}} t b_{1}^{\beta_{1}}\ldots b_{d}^{\beta_{d}},$$ where $\beta_{1},\ldots,\beta_{d}$ are integers such that $|\beta_{i}| <\xi$ (in other words, $H$ is finitely generated).
\end{lem} \begin{proof} It suffices to apply Lemma 3.4(ii) in the case where $\varphi_{j}$ is the inner automorphism of $G$ defined by $\varphi _{j} (t)=b_{j}^{-1}tb_{j}$, with $j=1,\ldots,d$. \end{proof} We again need two easy lemmas: \begin{lem} In each polycyclic group $G$, there exists a finite sequence $b_{1},\ldots,b_{d}\in G$ such that any element $a\in G$ may be written in the form $a=b_{1}^{\beta_{1}}\ldots b_{d}^{\beta_{d}}$, where $\beta_{1},\ldots,\beta_{d}$ are integers. \end{lem} \begin{proof} The group $G$ being polycyclic, it has a series $$\{ 1\} =G_{d+1}\unlhd G_{d}\unlhd\cdots\unlhd G_{2}\unlhd G_{1}=G$$ in which each factor $G_{j}/G_{j+1}$ is cyclic. For $j=1,\ldots,d$, choose in each $G_{j}$ an element $b_{j}$ such that $b_{j}G_{j+1}$ generates $G_{j}/G_{j+1}$. Then clearly the sequence $b_{1},\ldots,b_{d}$ satisfies the statement of the lemma. \end{proof} \begin{lem} Let $G$ be a finitely generated group in which the normal closure of each element is finitely generated and soluble. Then $G$ is polycyclic. \end{lem} \begin{proof} Clearly, a finitely generated group $\langle g_{1},\ldots,g_{d}\rangle$ such that the normal closure of each generator $g_{j}$ is soluble is itself soluble. Thus $G$ is soluble. Now suppose that $G$ is not polycyclic. Since a polycyclic group is finitely presented, we can assume that each proper homomorphic image of $G$ is polycyclic (see for example \cite[Part 2, Lemma 6.17]{RO}). The group $G$ being soluble, it contains a non-trivial abelian normal subgroup; let $A$ be the normal closure of a non-trivial element of this subgroup. Then $G/A$ is polycyclic. Furthermore, $A$ is abelian and finitely generated by hypothesis; thus $A$ is polycyclic. It follows that $G$ is polycyclic, a contradiction. \end{proof} \noindent {\em Proof of Theorem 1.2.} First suppose that $G$ is a polycyclic group.
Since $G$ is soluble, Lemma 2.1 shows that $\overline{G}[x_{1},\ldots,x_{n}]$ is soluble too; thus so is the subgroup $\overline{G}_{1}[x_{1},\ldots,x_{n}]$. For the second part of the property, we begin with the case $n=1$; thus we want to prove that $\overline{G}_{1}[x]$ is finitely generated. It is not difficult to see that this group is generated by the functions of the form $t\to a^{-1}ta$, with $a\in G$. By Lemma 3.6, there exist elements $b_{1},\ldots,b_{d}\in G$ depending only on $G$ such that each $a\in G$ may be written in the form $a=b_{1}^{\beta_{1}}\ldots b_{d}^{\beta_{d}}$. Therefore $\overline{G}_{1}[x]$ is generated by the functions of the form $t\to b_{d}^{-\beta_{d}}\ldots b_{1}^{-\beta_{1}} t b_{1}^{\beta_{1}}\ldots b_{d}^{\beta_{d}}$ ($\beta_{j}\in {\mathbb Z}$). We can then deduce from Lemma 3.5 that the group $\overline{G}_{1}[x]$ is finitely generated. \noindent Now suppose that $n$ is an arbitrary positive integer. For each $i\in\{ 1,\ldots,n\}$, one can define a monomorphism $\Psi_{i}:\overline{G}_{1}[x]\to \overline{G}_{1}[x_{1},\ldots,x_{n}]$ in the following way: if $f\in \overline{G}_{1}[x]$, then $\Psi_{i} (f) (t_{1},\ldots,t_{n})=f(t_{i})$ for all $(t_{1},\ldots,t_{n})\in G^n$. Therefore each subgroup $\Psi_{i}\left( \overline{G}_{1}[x] \right)$ is isomorphic to $\overline{G}_{1}[x]$ and so is finitely generated. Now remark that $\overline{G}_{1}[x_{1},\ldots,x_{n}]$ is generated by the functions of the form $(t_{1},\ldots,t_{n})\to a^{-1}t_{i}a$ (with $i=1,\ldots,n$ and $a\in G$); furthermore, such a function belongs to $\Psi_{i}\left( \overline{G}_{1}[x] \right)$. Thus $\overline{G}_{1}[x_{1},\ldots,x_{n}]$ is generated by $\Psi_{1}\left( \overline{G}_{1}[x] \right)\cup \ldots \cup \Psi_{n}\left( \overline{G}_{1}[x] \right)$ and so is finitely generated, as required. \noindent Conversely, suppose now that $\overline{G}_{1}[x_{1},\ldots,x_{n}]$ is a finitely generated soluble group.
Consider an element $v=(a,1,\ldots,1)\in G^n$, where $a$ is an element of $G$. The map $\Phi_{v}: \overline{G}_{1}[x_{1},\ldots,x_{n}] \to G$ defined by $\Phi_{v}(f)=f(v)$ is clearly a homomorphism. Moreover, it is easy to see that $\Phi_{v} \left( \overline{G}_{1}[x_{1},\ldots,x_{n}] \right)$ coincides with the normal closure of $a$ in $G$. Thus the normal closure of each element in $G$ is a finitely generated soluble subgroup. Since $G$ is finitely generated by hypothesis, we can apply Lemma 3.7, and hence $G$ is polycyclic. This completes the proof of the theorem. \null $\square$ \end{document}
\begin{document} \baselineskip .7cm \author{ Navin Khaneja \thanks{To whom correspondence may be addressed. Email:[email protected]} \thanks{Department of Electrical Engineering, IIT Bombay - 400076, India.}} \vskip 4em \title{\bf Time optimal control in coupled spin systems: a second order analysis} \maketitle \vskip 3cm \begin{center} {\bf Abstract} \end{center} In this paper, we study some control problems that derive from time optimal control of coupled spin dynamics in NMR spectroscopy and quantum information and computation. Time optimal control helps to minimize relaxation losses. The ability to synthesize local unitaries much more rapidly than the evolution of couplings gives a natural time scale separation in these problems. The generators of evolution, $\g$, are decomposed into fast generators $\k$ (local Hamiltonians) and slow generators $\p$ (couplings) as a Cartan decomposition $\mathfrak{g}= \mathfrak{p}\oplus \k$. Using this decomposition, we exploit some convexity ideas to completely characterize the reachable set and time optimal control for these problems. In this paper, we carry out a second order analysis of time optimality. \vskip 3cm \section{Introduction} A rich class of model control problems arises when one considers the dynamics of two coupled spins $\frac{1}{2}$. The dynamics of two coupled spins forms the basis for the field of quantum information processing and computing \cite{nc} and is fundamental in multidimensional NMR spectroscopy \cite{Ernst}, \cite{Palmer}. Numerous experiments in NMR spectroscopy involve synthesizing unitary transformations \cite{timeopt, cartan, Alessandro} that require interaction between the spins (evolution of the coupling Hamiltonian). These experiments involve transferring coherence and polarization from one spin to another and involve evolution of interaction Hamiltonians \cite{Ernst}.
Similarly, many protocols in quantum communication and information processing involve synthesizing entangled states starting from separable states \cite{nc, kraus, bennett}. This again requires evolution of interaction Hamiltonians between the qubits. A typical feature of many of these problems is that evolution of interaction Hamiltonians takes significantly longer than the time required to generate local unitary transformations (unitary transformations that affect individual spins only). In NMR spectroscopy \cite{Ernst, Palmer}, local unitary transformations on spins are obtained by application of rf-pulses, whose strength may be orders of magnitude larger than the couplings between the spins. Given the Schr\"{o}dinger equation for unitary evolution \begin{equation} \label{eq:sfcoupling} \dot{U} = -i [H_c + \sum_{j=1}^n u_j H_j] U , \ \ U(0) = I, \end{equation} where $H_c$ represents a coupling Hamiltonian and $u_j$ are controls that can be switched on and off, what is the minimum time required to synthesize any unitary transformation in the coupled spin system, when the control generators $H_j$ are local Hamiltonians and are much stronger than the coupling between the spins ($u_j$ can be made large)? Design of time optimal rf-pulse sequences is an important research subject in NMR spectroscopy and quantum information processing \cite{timeopt}-\cite{hai3spin}, as minimizing the time to execute quantum operations can reduce relaxation losses, which are always present in an open quantum system \cite{Redfield, Lind}. The present problem has a special mathematical structure that helps to characterize all the time optimal trajectories \cite{timeopt}. The special mathematical structure manifested in the coupled two spin system motivates a broader study of control systems with the same properties.
The Hamiltonian of a spin $\frac{1}{2}$ can be written in terms of the generators of rotations on a two dimensional space; these are the Pauli matrices $-i \sigma_x, -i \sigma_y , -i \sigma_z$, where \begin{equation} \sigma_z = \frac{1}{2} \left [ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right ]; \ \ \sigma_y = \frac{1}{2} \left [ \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right ]; \ \ \sigma_x = \frac{1}{2} \left [ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right ]. \ \ \end{equation} Note \begin{equation} \label{eq:paulicommute} [\sigma_x, \sigma_y ] = i \sigma_z, \ \ [\sigma_y, \sigma_z ] = i \sigma_x, \ \ [\sigma_z, \sigma_x ] = i \sigma_y, \end{equation} where $[A, B] = AB-BA$ is the matrix commutator, and \begin{equation} \label{eq:pauliproduct} \sigma_x^2 = \sigma_y^2 = \sigma_z^2 = \frac{\mbox{$\bf 1\ $}}{4}. \end{equation} The Hamiltonian for a system of two coupled spins takes the general form \begin{equation} \label{eq:naturalH} H_0 = \sum a_{\alpha} \sigma_{\alpha}\otimes \mbox{$\bf 1\ $} + \sum b_{\beta} \mbox{$\bf 1\ $} \otimes \sigma_{\beta} + \sum J_{\alpha \beta} \ \sigma_{\alpha} \otimes \sigma_{\beta}, \end{equation} where $\alpha, \beta \in \{x, y, z \}$. The Hamiltonians $\sigma_{\alpha}\otimes \mbox{$\bf 1\ $} $ and $\mbox{$\bf 1\ $} \otimes \sigma_{\beta}$ are termed local Hamiltonians and operate on one of the spins. The Hamiltonian \begin{equation} \label{eq:couplingH} H_c = \sum J_{\alpha \beta} \ \sigma_{\alpha} \otimes \sigma_{\beta}, \end{equation} is the coupling or interaction Hamiltonian and operates on both the spins. The following notation is therefore commonplace in the NMR literature. \begin{equation} I_{\alpha} = \sigma_{\alpha} \otimes \mbox{$\bf 1\ $} \ \ ;\ \ S_{\beta} = \mbox{$\bf 1\ $} \otimes \sigma_{\beta}.
\end{equation} The operators $I_{\alpha}$ and $S_{\beta}$ commute, and therefore \begin{equation} \exp\Big( -i \sum_{\alpha} a_\alpha I_\alpha -i \sum_{\beta} b_{\beta} S_{\beta} \Big) = \exp( -i \sum_{\alpha} a_\alpha I_\alpha)\exp(-i \sum_{\beta} b_{\beta} S_{\beta} )= (\exp( -i \sum_{\alpha} a_\alpha \sigma_\alpha)\otimes \mbox{$\bf 1\ $})(\mbox{$\bf 1\ $} \otimes \exp(-i \sum_{\beta} b_{\beta} \sigma_{\beta} )). \end{equation} The unitary transformations of the kind $$ \exp( - i \sum_{\alpha} a_\alpha \sigma_\alpha )\otimes \exp( -i \sum_{\beta} b_\beta \sigma_\beta ), $$ obtained by evolution of the local Hamiltonians, are called local unitary transformations. The coupling Hamiltonian can be written as \begin{equation} H_c = \sum J_{\alpha \beta} I_{\alpha}S_{\beta}. \end{equation} Written explicitly, some of these matrices take the form \begin{equation} I_{z} = \sigma_z \otimes \mbox{$\bf 1\ $} = \frac{1}{2} \left [ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right ] \end{equation} and \begin{equation} I_{z}S_z = \sigma_z \otimes \sigma_z = \frac{1}{4} \left [ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right ]. \end{equation} The $15$ operators $$ -i \{ I_\alpha, S_\beta, I_\alpha S_\beta \}, $$ for $\alpha, \beta \in \{x, y, z \}$, form a basis for the Lie algebra $\mathfrak{g}= su(4)$, the $4 \times 4$ traceless skew-Hermitian matrices. For the coupled two spins, the generators $-iH_c, -iH_j \in su(4)$ and the evolution operator $U(t)$ in Eq. (\ref{eq:sfcoupling}) is an element of $SU(4)$, the $4 \times 4$ unitary matrices of determinant $1$. The Lie algebra $\mathfrak{g}= su(4)$ has a direct sum decomposition $ \mathfrak{g}= \mathfrak{p}\oplus \mathfrak{k}$, where \begin{equation} \label{eq:su(4)} \mathfrak{k}= -i \{ I_\alpha , S_\beta \}, \ \ \mathfrak{p}= -i \{ I_\alpha S_\beta \}.
\end{equation} Here $\k$ is a sub-algebra of $\g$ made from the local Hamiltonians, and $\p$ is spanned by the nonlocal Hamiltonians. In Eq. (\ref{eq:sfcoupling}), we have $-iH_j \in \k$ and $-iH_c \in \p$. It is easy to verify that \begin{equation} \label{eq:cartan} [\k, \k] \subset \mathfrak{k}, \ \ \ [\k, \p] \subset \p, \ \ [\p, \p] \subset \k. \end{equation} A decomposition of a real semi-simple Lie algebra $\mathfrak{g}= \mathfrak{p}\oplus \k$ satisfying (\ref{eq:cartan}) is called a Cartan decomposition of the Lie algebra $\g$ \cite{Helg}. This special structure of the Cartan decomposition arising in the dynamics of two coupled spins in Eq. (\ref{eq:sfcoupling}) motivates the study of a general class of time optimal control problems. Consider the following canonical problems. Given the evolution \begin{equation} \dot{U} = ( X_d + \sum_j u_j(t) X_j ) U, \ \ U(0) = \mbox{$\bf 1\ $}, \end{equation} where $U \in SU(n)$, the special unitary group ($n \times n$ matrices $U$ of determinant $1$ such that $UU' = \mbox{$\bf 1\ $}$, where $'$ denotes conjugate transpose), $$ X_d = -i \left[ \begin{array}{cccc} \lambda_1 & 0 & \hdots & 0 \\ 0 & \lambda_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \lambda_n \end{array} \right ], \ \ \sum \lambda_i = 0,$$ and $\{X_j\}_{LA}$, the Lie algebra generated by the $X_j$ (the $X_j$ and their iterated matrix commutators), is $\{X_j\}_{LA} = \mathfrak{k}= so(n)$, the skew-symmetric matrices. We want to find the minimum time to steer this system between points of interest, assuming no bounds on our controls $u_j(t)$. Here again we have a Cartan decomposition of the generators. Given $\mathfrak{g}= su(n)$, the traceless skew-Hermitian matrices, the generators of $SU(n)$, we have $\mathfrak{g}= \mathfrak{p}\oplus \k$, where $\mathfrak{p}= -i A$ with $A$ traceless symmetric and $\mathfrak{k}= so(n)$. As before, $X_d \in \p$ and $X_j \in \k$. We want to find time optimal ways to steer this system. We call this the $\frac{SU(n)}{SO(n)}$ problem.
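The algebraic relations above can be spot-checked numerically. The following standalone sketch (the helpers `matmul`, `comm`, `kron`, `close`, `scale` are our own, not from the paper) verifies the Pauli commutation relation, the displayed Kronecker forms of $I_z$ and $I_zS_z$, and one instance of each bracket inclusion of the Cartan decomposition, stated here for the Hermitian representatives.

```python
# Numerical sketch (pure Python, helper names ours): verify one Pauli
# commutation relation, the Kronecker forms of I_z and I_z S_z, and one
# instance of each Cartan bracket inclusion for the two-spin algebra.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def comm(A, B):                      # matrix commutator [A, B]
    n = len(A)
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

def kron(A, B):                      # Kronecker product A (x) B
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def close(A, B, tol=1e-12):
    n = len(A)
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(n) for j in range(n))

def scale(c, A):
    n = len(A)
    return [[c * A[i][j] for j in range(n)] for i in range(n)]

sx = [[0, 0.5], [0.5, 0]]
sy = [[0, -0.5j], [0.5j, 0]]
sz = [[0.5, 0], [0, -0.5]]
one2 = [[1, 0], [0, 1]]

# [sigma_x, sigma_y] = i sigma_z
assert close(comm(sx, sy), scale(1j, sz))

# I_z = sigma_z (x) 1 and I_z S_z = sigma_z (x) sigma_z as displayed
Iz, Ix, Iy = kron(sz, one2), kron(sx, one2), kron(sy, one2)
Sy, Sz = kron(one2, sy), kron(one2, sz)
assert Iz[0][0] == 0.5 and kron(sz, sz)[1][1] == -0.25

# one instance of each bracket inclusion (Hermitian representatives):
IxSy, IySy = matmul(Ix, Sy), matmul(Iy, Sy)   # I and S operators commute
IxSz, IySz = matmul(Ix, Sz), matmul(Iy, Sz)
assert close(comm(Iz, Ix), scale(1j, Iy))        # [k, k] lands in k
assert close(comm(Iz, IxSy), scale(1j, IySy))    # [k, p] lands in p
assert close(comm(IxSz, IySz), scale(0.25j, Iz)) # [p, p] lands in k
```

Each `assert` instantiates one inclusion; for the skew-Hermitian generators $-iI_\alpha$, $-iS_\beta$, $-iI_\alpha S_\beta$ the overall signs change but the containments are the same.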
We show that for $n=4$ this system models the dynamics of two coupled nuclear spins in NMR spectroscopy. Consider another problem evolving on $SU(2n)$. \begin{equation} \dot{U} = ( X_d + \sum_j u_j(t) X_j ) U, \ \ U(0) = \mbox{$\bf 1\ $}. \end{equation} Here $$ X_d = \left[ \begin{array}{cccccc} 0 & \hdots & 0 & \lambda_1 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \hdots & 0 & \hdots & 0 & \lambda_n \\ -\lambda_1 & \hdots & 0 & 0 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \hdots & -\lambda_n & 0 & \hdots & 0 \end{array} \right ]$$ and $\{X_j\}_{LA} = \mathfrak{k}= \left [ \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right ]$, the space of block diagonal traceless skew-Hermitian matrices. We want to find the minimum time to steer this system between points of interest, assuming no bounds on our controls $u_j(t)$. Here again, we have a Cartan decomposition of $\mathfrak{g}= su(2n)$ as $\mathfrak{g}= \mathfrak{p}\oplus \k$, with $\mathfrak{p}= \left [ \begin{array}{cc} 0 & Z \\ -Z' & 0 \end{array} \right ]$. As before, $X_d \in \p$ and $X_j \in \k$, and we want to find time optimal ways to steer this system. We call this the $\frac{SU(2n)}{SU(n) \times SU(n) \times U(1)}$ problem. We show that for $n=2$ this system models the dynamics of a coupled electron-nuclear spin system in EPR \cite{Zeier}. In general, given $U$ in a compact Lie group $G$ (such as $SU(n)$), with $X_d, X_j$ in its real semisimple (no abelian ideals) Lie algebra $\g$ and \begin{equation} \dot{U} = ( X_d + \sum_j u_j(t) X_j ) U, \ \ U(0) = \mbox{$\bf 1\ $}, \end{equation} we are given the Cartan decomposition $\mathfrak{g}= \mathfrak{p}\oplus \k$, where $X_d \in \p$, $\{X_j\}_{LA} = \k$, and $K = \exp(\k)$ (products of exponentials of elements of $\k$) is a closed subgroup of $G$. We want to find the minimum time to steer this system between points of interest, assuming no bounds on our controls $u_j(t)$.
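As a minimal illustration of this $\frac{G}{K}$ setup (our own, not from the text), take $G = SU(2)$ with $\k$ the span of a single local generator:

```latex
% G = SU(2), \k = span\{-i\sigma_z\}, K = \{e^{-i\alpha\sigma_z}\},
% \p = span\{-i\sigma_x, -i\sigma_y\}, with maximal abelian
% subalgebra \a = span\{-i\sigma_y\} \subset \p. The bracket relations
% [\k,\k] = 0 \subset \k, [\k,\p] \subset \p, [\p,\p] \subset \k follow
% from [\sigma_z,\sigma_x] = i\sigma_y, [\sigma_z,\sigma_y] = -i\sigma_x
% and [\sigma_x,\sigma_y] = i\sigma_z. The factorization
% G = K_1 \exp(\a) K_2 is then the familiar Euler ZYZ decomposition:
\[
  U \;=\; e^{-i\alpha\sigma_z}\, e^{-i\beta\sigma_y}\, e^{-i\gamma\sigma_z},
  \qquad \alpha,\beta,\gamma \in \mathbb{R}, \qquad U \in SU(2).
\]
```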
Since $\{X_j\}_{LA} = \k$, any rotation (evolution) in the subgroup $K$ can be synthesized by evolution of the $X_j$ \cite{Brockettc, Jurdjevic}. Since there are no bounds on $u_j(t)$, this can be done in arbitrarily small time \cite{timeopt}. We call this the $\frac{G}{K}$ problem. The special structure of the problem aids in a complete description of the reachable set. The elements of the reachable set at time $T$ take the form $ U(T) \in$ \begin{equation} \label{eq:reachableintro} S = K_1 \exp (T \sum_k \alpha_k \ \W_k X_d \W_k^{-1}) K_2, \end{equation} where $K_1, K_2, \W_k \in \exp(\k)$, $\alpha_k > 0$ with $\sum_k \alpha_k =1$, and the $\W_k X_d \W_k^{-1}$ all commute; unbounded control implies that $K_i$ and $\W_k$ can be synthesized in negligible time. This reachable set, which is formed from evolution of the commuting Hamiltonians $\W_k X_d \W_k^{-1}$, can be understood as follows. The Cartan decomposition of the Lie algebra $\g$ in Eq. (\ref{eq:cartan}) leads to a decomposition of the Lie group $G$ \cite{Helg}. Let $\a$ denote the largest abelian sub-algebra contained inside $\p$. Then any $X \in \p$ is $Ad_K$ conjugate to an element of $\a$, i.e. $X = K a_1 K^{-1}$ for some $K \in \exp(\k)$ and $a_1 \in \a$. Then, any arbitrary element of the group $G$ can be written as \begin{equation} \label{eq:cartangroup} G = K_0 \exp(X) = K_0 \exp(Ad_K(a_1)) = K_1 \exp(a_1) K_2, \end{equation} for some $X \in \p$, where $K_i \in K$ and $a_1 \in \a$. The first equation is a fact about geodesics in the $G/K$ space \cite{Helg}, where $K = \exp(\k)$ is a closed subgroup of $G$. Eq. (\ref{eq:cartangroup}) is called the KAK decomposition \cite{Helg}. The results in this paper suggest that $K_1$ and $K_2$ can be synthesized by the unbounded controls $X_i$ in negligible time. The time consuming part of the evolution, $\exp(a_1)$, is synthesized by evolution of the Hamiltonian $X_d$. The time optimal strategy suggests evolving $X_d$ and its conjugates $\W_k X_d \W_k^{-1}$, where the $\W_k X_d \W_k^{-1}$ all commute.
Written as an evolution, $$ G = K_1 \prod_k \exp(t_k \W_k X_d \W_k^{-1}) \ K_2 $$ where $K_1, K_2, \W_k$ take negligible time to synthesize using the unbounded controls $u_i$, and time-optimality is characterized by the synthesis of the commuting Hamiltonians $\W_k X_d \W_k^{-1}$. This characterization of time optimality, involving commuting Hamiltonians, is derived using convexity ideas \cite{Kostant, timeopt}. The remaining paper develops these notions. The paper is organized as follows. In section 2, we study the $\frac{SU(n)}{SO(n)}$ problem. In section 3, we study the $\frac{SU(2n)}{SU(n) \times SU(n) \times U(1)}$ problem. In section 4, we study the general $\frac{G}{K}$ problem. We conclude in section 5, with facts about roots and reflections, with application to the dynamics of coupled spins. Given a Lie algebra $\g$, we use the Killing form $\langle x, y \rangle = tr(ad_xad_y)$ as an inner product on $\g$. When $\mathfrak{g}= su(n)$, we also use the inner product $\langle x, y \rangle = tr(x'y)$. We call this the standard inner product. \section{Time Optimal Control for $SU(n)/SO(n)$ problem} \label{sec:second} \begin{remark} {\rm Birkhoff convexity states that a real $n \times n$ matrix $A$ is doubly stochastic ($\sum_{i} A_{ij} = \sum_{j} A_{ij}= 1$, for $A_{ij} \geq 0$) iff it can be written as a convex combination of permutation matrices $P_i$ (exactly one $1$ and zeros elsewhere in every row and column). Given $\Theta \in SO(n)$ and $X = \left[ \begin{array}{cccc} \lambda_1 & 0 & \hdots & 0 \\ 0 & \lambda_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \lambda_n \end{array} \right ]$, we have $diag(\Theta X \Theta^T) = B \ diag(X)$, where $diag(X)$ is a column vector containing the diagonal entries of $X$ and $B_{ij} = (\Theta_{ij})^2$, hence $B$ is a doubly stochastic matrix which can be written as a convex sum of permutations. Therefore $B \ diag(X) = \sum_i \alpha_i P_i\ diag(X)$, i.e.
the diagonal of a symmetric matrix $\Theta X \Theta^T$ lies in the convex hull of its eigenvalues and their permutations. This is called Schur convexity. }\end{remark} We now give an elementary proof of a special case of the KAK decomposition in Eq. (\ref{eq:cartangroup}), where $G = SU(n)$ has a closed subgroup $K = SO(n)$ and a Cartan decomposition of its Lie algebra $\mathfrak{g}= su(n)$ as $\mathfrak{g}= \mathfrak{p} \oplus \k$, for $\mathfrak{k}= so(n)$ and $\mathfrak{p}= -i A$ where $A$ is traceless symmetric, and $\a$ is a maximal abelian subalgebra of $\p$, such that $\mathfrak{a}= -i \left[ \begin{array}{ccc} \lambda_1 & \dots & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \lambda_n \end{array} \right ],$ where $\sum_i \lambda_i = 0$. \begin{theorem}\label{th:SUdecompose}{\rm Let $U \in SU(n)$, then $U = \Theta_1 \exp(\Omega)\Theta_2$ where $\Theta_1, \Theta_2 \in SO(n)$ and $$\Omega = -i \left[ \begin{array}{ccc} \lambda_1 & \dots & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \lambda_n \end{array} \right ], $$ where $\sum_i \lambda_i = 0$.} \end{theorem} {\bf Proof:} Observe $UU^T$ is in $SU(n)$. The eigenvalues of $UU^T$ are of the form $\exp(j \theta)$, $$ UU^T z = \exp(j \theta) z. $$ Since $U^{-1} = (U^T)^{\ast}$, this gives $$ \exp(-j \frac{\theta}{2}) U^T z = \exp(j \frac{\theta}{2}) (U^T)^{\ast} z. $$ Writing $\exp(-j \frac{\theta}{2}) U^T = C + iD$ with $C, D$ real and $z = x + iy$, this reads $$ (C + iD) z = (C - iD)z, $$ so that $$ D (x + iy) = 0. $$ This implies $UU^T x = \exp(j \theta) x $ and $UU^T y = \exp(j \theta) y $, i.e., the eigenvectors can be chosen real. This implies $ UU^T = \Theta \Sigma \Theta'$, where the columns of $\Theta$ are real and orthonormal, and $$ \Sigma = \left[ \begin{array}{ccc} \exp(-i \lambda_1) & \dots & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \exp(-i \lambda_n) \end{array} \right ] $$ where $\Sigma \in SU(n)$. Let $ U = \Theta \Sigma^{\frac{1}{2}} V$. Then $ UU^T = \Theta \Sigma \Theta' = \Theta \Sigma^{\frac{1}{2}} VV^T \Sigma^{\frac{1}{2}} \Theta' $, implying $VV^T = \mbox{$\bf 1\ $}$.
Then $U = \Theta \Sigma^{\frac{1}{2}} V$, where $\Theta, V$ can be chosen in $SO(n)$ and $$ \Sigma^{\frac{1}{2}} = \left[ \begin{array}{ccc} \exp(-i \mu_1) & \dots & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \exp(-i \mu_n) \end{array} \right ], $$ where $ \sum \mu_i = 2 m \pi $. Choose $\mu_n \rightarrow \mu_n - 2 m \pi$ so that $ \sum_i \mu_i = 0$, and the result follows. {\bf q.e.d} \noindent We now give a proof of the reachable set in Eq. (\ref{eq:reachableintro}), for the $\frac{SU(n)}{SO(n)}$ problem. \begin{theorem}\label{th:model1}{ \rm Let $P(t) \in SU(n)$ be a solution to the differential equation $$ \dot{P} = Ad_{K(t)}(X_d) P, $$ where $Ad_K(X_d) = K X_d K^{-1}$ and $ X_d = -i \left[ \begin{array}{cccc} \lambda_1 & 0 & \hdots & 0 \\ 0 & \lambda_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \lambda_n \end{array} \right ]$. The elements of the reachable set at time $T$ take the form $ K_1 \exp(-i \mu T) K_2 $, where $K_1, K_2 \in SO(n)$ and $\mu \prec \lambda$ ($\mu$ lies in the convex hull of $\lambda$ and its permutations), where $\lambda = (\lambda_1, \dots, \lambda_n)'$.}\end{theorem} {\bf Proof:} As a first step, discretize the evolution of $P(t)$ as a piecewise constant evolution, \begin{equation} \label{eq:discretize} P_n = \prod_i \exp(Ad_{k_i} (X_d) \tau), \end{equation} of steps of size $\tau$. For arbitrary $t \in [0, T]$ we look at the evolution of $P(t)$. Let $t \in [(n-1) \tau, n \tau]$. Choose a small step $\Delta$, such that $\Delta \leq n \tau - t$; then $P(t + \Delta) = \exp(Ad_K(X_d)\Delta) P(t)$. \noindent From theorem \ref{th:SUdecompose}, $P(t) = K_1 \left [ \begin{array}{cccc} \exp(i \phi_1) & 0 & 0 & 0 \\ 0 & \exp(i \phi_2) & 0 & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & \exp(i \phi_n) \end{array} \right ] K_2 $, where $K_1, K_2 \in SO(n)$, where, to begin with, we assume the eigenvalues satisfy $\phi_j - \phi_k \neq m \pi$, for $m$ an integer.
Let $ K_1(t + \Delta) = \exp(\Omega_1 \Delta) K_1(t)$, $K_2(t + \Delta) = \exp(\Omega_2 \Delta) K_2 $, and $A(t + \Delta) = \exp(a \Delta) A(t)$, where $\Delta$, $\Omega_1$, $\Omega_2$ and $a$ are detailed below. Let $Q(t + \Delta) = K_1(t + \Delta) A(t + \Delta) K_2(t + \Delta)$. \begin{equation} \label{eq:1} Q(t + \Delta) = \exp(\Omega_1 \Delta)\exp(K_1 a K_1' \Delta)\exp(K_1 A \Omega_2 A' K_1' \Delta) P(t). \end{equation} Let \begin{equation} \label{eq:2} P(t + \Delta) = \exp(Ad_{K}(X_d) \Delta ) P(t). \end{equation} We equate $P(t + \Delta)$ and $Q(t+ \Delta)$ to first order in $\Delta$. This gives \begin{equation} \label{eq:solve1} Ad_{K}(X_d) = \Omega_1 + K_1 a K_1' + K_1 A \Omega_2 A' K_1'. \end{equation} Multiplying both sides with $K_1'(\cdot)K_1$ gives \begin{equation} \label{eq:solve2} Ad_{\bar{K}}(X_d) = \Omega_1' + a + A \Omega_2 A', \end{equation} where $\bar{K} = K_1' K$ and $\Omega_1' = K_1' \Omega_1 K_1 $. We evaluate $ A \Omega_2 A^{\dagger} $, for $\Omega_2 \in so(n)$: $$ D = \{ A \Omega_2 A^{\dagger} \}_{kl} = \exp \{i(\phi_k - \phi_l)\} (\Omega_2)_{kl} = \cos (\phi_k - \phi_l) (\Omega_2)_{kl} + i \underbrace{\sin(\phi_k - \phi_l) (\Omega_2)_{kl}}_{R_{kl}} $$ such that $R$ is a traceless symmetric matrix with $P_1 = i R \in \p$, which can be made any element of $\a^{\perp}$ by appropriate choice of $\Omega_2$. Given $Ad_{\bar{K}}(X_d) \in \p$, we decompose it as $$ P ( Ad_{\bar{K}}(X_d)) + Ad_{\bar{K}}(X_d)^{\perp}, $$ with $P$ denoting the projection onto $\a$ ( $\mathfrak{a}= -i \left[ \begin{array}{ccc} \lambda_1 & \dots & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \lambda_n \end{array} \right ],$ where $\sum_i \lambda_i = 0$) w.r.t. the standard inner product, and $Ad_{\bar{K}}(X_d)^{\perp}$ the orthogonal component. If $\phi_i - \phi_j \neq 0, \pi$, we can solve for $(\Omega_2)_{ij}$ such that $P_1 = Ad_{\bar{K}}(X_d)^{\perp}$. This gives $\Omega_2$. Let $a = P(Ad_{\bar{K}}(X_d))$. As described above in Eq.
(\ref{eq:solve2}), we choose $\Omega_1' = Ad_{\bar{K}}(X_d)^{\perp} - A \Omega_2 A^{\dagger} \in \mathfrak{k}$. Then $$P(t + \Delta) - Q(t + \Delta) = o(\Delta^2). $$ Consider the case when $A$ is degenerate. Let \begin{equation}\label{eq:blocks} A = \left[ \begin{array}{cccc} A_1 & 0 & \hdots & 0 \\ 0 & A_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & A_n \end{array} \right ],\end{equation} where $A_k$ is $n_k$ fold degenerate (modulo sign), described by an $n_k \times n_k$ block. WLOG, we arrange $$ A_k = \left[ \begin{array}{cccccc} \exp(i \phi_k) & 0 & \hdots & \hdots & \hdots & 0\\ \vdots & \ddots & \vdots & \vdots & \vdots & 0 \\ 0 & \hdots & \exp(i \phi_k) & \hdots & \dots & 0 \\ 0 & \vdots & \hdots & -\exp(i \phi_k) & \hdots & \vdots \\ \vdots & \vdots & \hdots & \vdots & \ddots & \vdots \\ 0 & \hdots & \hdots & \hdots & 0 & -\exp(i \phi_k) \end{array} \right ]. $$ Consider the decomposition $$ Ad_{\bar{K}}(X_d) = P(Ad_{\bar{K}}(X_d)) + Ad_{\bar{K}}(X_d)^{\perp}, $$ where $P$ denotes the projection onto the $n_k \times n_k$ blocks in equation \ref{eq:blocks} and $Ad_{\bar{K}}(X_d)^{\perp}$ the orthogonal complement. \begin{equation} P( \left[ \begin{array}{cccc} X_{11} & X_{12} & \hdots & X_{1n} \\ X_{21} & X_{22} & \hdots & X_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n1} & X_{n2} & \hdots & X_{nn} \end{array} \right ]) = \left[ \begin{array}{cccc} X_{11} & 0 & \hdots & 0 \\ 0 & X_{22} & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & X_{nn} \end{array} \right ] ,\end{equation} where $X_{ij}$ are blocks. We can solve for $(\Omega_2)_{ij}$ such that $P_1 = Ad_{\bar{K}}(X_d)^{\perp}$. This gives $\Omega_2$ in Eq. (\ref{eq:solve2}).
Choose $ \Omega_1' = Ad_{\bar{K}}(X_d)^{\perp} - A \Omega_2 A^{\dagger} \in \mathfrak{k}$, and let $H_1 = \exp(h_1)$ be a rotation formed from the block diagonal matrix \begin{equation}\label{eq:rotblocks} H_1 = \left[ \begin{array}{cccc} \Theta_1 & 0 & \hdots & 0 \\ 0 & \Theta_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \Theta_n \end{array} \right ],\end{equation} where $\Theta_k$ is an $n_k \times n_k$ sub-block in $SO(n_k)$. $H_1 = \exp(h_1)$ is chosen such that $$ H_1' P ( Ad_{\bar{K}}(X_d)) H_1 = a$$ is a diagonal matrix. Let $H_2 = \exp(\underbrace{A^{-1}h_1A}_{h_2})$, where $h_2$ is skew symmetric, such that \begin{equation}\label{eq:rotgenblocks} h_1 = \left[ \begin{array}{cccc} \theta_1 & 0 & \hdots & 0 \\ 0 & \theta_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \theta_n \end{array} \right ], h_2 = \left[ \begin{array}{cccc} \hat \theta_1 & 0 & \hdots & 0 \\ 0 & \hat \theta_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \hat \theta_n \end{array} \right ], \end{equation} where $\theta_k, \hat \theta_k$ are $n_k \times n_k$ sub-blocks in $so(n_k)$, related by \begin{equation}\label{eq:rotgenblocks1} \hat \theta_k = A_k ' \theta_k A_k ,\ \theta_k = \left[ \begin{array}{cc} \theta_{11} & \theta_{12} \\ - \theta_{12}^{\dagger} & \theta_{22} \end{array} \right ], \hat \theta_k = \left[ \begin{array}{cc} \theta_{11} & -\theta_{12} \\ \theta_{12}^{\dagger} & \theta_{22} \end{array} \right ]. \end{equation} Note that $H_1' P ( Ad_{\bar{K}}(X_d)) H_1 = a$ lies in the convex hull of the eigenvalues of $X_d$ and their permutations: this follows from Schur convexity applied to the diagonal of $H_1' Ad_{\bar{K}}(X_d) H_1$. The diagonal of $H_1' Ad_{\bar{K}}(X_d)^\perp H_1 $ is zero, as for any $a_1 \in \a$, $$ tr (a_1 H_1' Ad_{\bar{K}}(X_d)^\perp H_1) = tr (H_1 a_1 H_1' Ad_{\bar{K}}(X_d)^\perp) = 0, $$ since $H_1 a_1 H_1'$ has block diagonal form, which is perpendicular to $Ad_{\bar{K}}(X_d)^\perp$.
Therefore the diagonal of $H_1' P ( Ad_{\bar{K}}(X_d)) H_1$ is the same as the diagonal of $H_1' Ad_{\bar{K}}(X_d) H_1$. Let $$ Q(t + \Delta) = \exp(\Omega_1 \Delta) K_1 \exp(P(Ad_{\bar{K}}(X_d)) \Delta) H_1 A H_2^{\dagger} \exp(\Omega_2 \Delta) K_2, $$ i.e., \begin{equation} \label{eq:defa} Q(t + \Delta) = \exp(\Omega_1 \Delta) K_1 H_1 \exp(a \Delta) A H_2^{\dagger} \exp(\Omega_2 \Delta) K_2. \end{equation} The above expression can be written as $$ Q(t + \Delta) = \exp(\Omega_1 \Delta) \exp(K_1 H_1 a H_1'K_1' \Delta) \exp(K_1 A \Omega_2 A' K_1' \Delta) P(t), $$ where $\Omega_1$, $H_1$, $a$, $\Omega_2$ are chosen such that $$ (\Omega_1 + K_1 H_1 a H_1' K_1' + K_1 A \Omega_2 A' K_1') = Ad_K(X_d), $$ $$ (\Omega_1' + H_1 a H_1' + A \Omega_2 A') = Ad_{\bar{K}}(X_d). $$ Then $$ Q(t + \Delta)- P(t + \Delta) = o(\Delta^2) P(t), $$ $$ Q(t + \Delta) = (I + o(\Delta^2))P(t + \Delta), $$ $$ Q(t + \Delta)Q(t + \Delta)^{T} = (I + o(\Delta^2))P(t + \Delta)P^{T}(t + \Delta) (I + o(\Delta^2)) = P(t+ \Delta)P^{T}(t + \Delta) [ I + o(\Delta^2)], $$ $$ P(t+\Delta)P^{T}(t+ \Delta) = K_1 \left[ \begin{array}{cccc} \exp(i 2 \phi_1) & 0 & \hdots & 0 \\ 0 & \exp(i 2 \phi_2) & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \exp(i 2 \phi_n) \end{array} \right ] K_1^{T}. $$ Let $F = P(t + \Delta)P^T(t + \Delta)$ and $G = Q(t + \Delta)Q^T(t + \Delta)$; we relate the eigenvalues of $F$ and $G$. Given $F, G$ as above, with $| F - G | \leq \epsilon$, and an ordered set of eigenvalues of $F$, denoted $\lambda(F) = \left[ \begin{array}{c} \exp(i 2 \phi_1) \\ \exp(i 2 \phi_2) \\ \vdots \\ \exp(i 2 \phi_n) \end{array} \right ]$, there exists an ordering (correspondence) of the eigenvalues of $G$, such that $|\lambda(F) - \lambda(G)| < \epsilon$. Choose an ordering of $\lambda(G)$, call it $\mu$, that minimizes $|\lambda(F) - \lambda(G)|$.
Let $F = U_1 D(\lambda)U_1'$ and $G = U_2 D(\mu) U_2'$, where $D(\lambda)$ is diagonal with diagonal $\lambda$, and let $U = U_1'U_2$. Then $$ | F- G|^2 = |D(\lambda) - U D(\mu) U' |^2 = |\lambda|^2 + |\mu|^2 - tr(D(\lambda)' U D(\mu) U' + (U D(\mu) U')' D(\lambda)). $$ By Schur convexity, $$ tr(D(\lambda)' U D(\mu) U' + (U D(\mu) U')' D(\lambda)) = \sum_i \alpha_i (\lambda' P_i(\mu) + P_i(\mu)' \lambda), $$ where $P_i$ are permutations. Therefore $|F-G|^2 \geq |\lambda - \mu|^2$. Therefore, $$ \lambda(QQ^T(t+\Delta)) = \lambda(PP^{T}(t+\Delta)) + o(\Delta^2). $$ The difference $$ o(\Delta^2) = \underbrace{\exp ((\Omega_1 + K_1 H_1 a H_1' K_1' + K_1 A \Omega_2 A' K_1')\Delta)}_{\exp(Ad_K(X_d)\Delta)} - \exp (\Omega_1 \Delta) \exp(K_1 H_1 a H_1' K_1' \Delta) \exp (K_1 A \Omega_2 A' K_1' \Delta) $$ is regulated by the size of $\Omega_2$, which is bounded by $|\Omega_2| \leq \frac{\|X_d \|}{\sin(\phi_i -\phi_j)}$, where $|\sin(\phi_i-\phi_j)|$ is the smallest non-zero value over pairs $(i, j)$. $\Delta$ is chosen small enough such that $|o(\Delta^2)| < \epsilon \Delta$. For each point $t \in [ 0, T]$, we choose an open neighborhood $N(t) = (t - N_t, t + N_t )$ ($[0, N_0)$ and $(T-N_T, T]$ at the endpoints), such that $o_t(\Delta^2) < \epsilon \Delta$ for $\Delta \in N(t)$. The $N(t)$ form a cover of $[0, T]$. We can choose a finite sub-cover. Consider the trajectory at points $(P(t_1), P(t_2), \dots P(t_n))$. Let $t_{i, i+1}$ be a point in the intersection of $N(t_i)$ and $N(t_{i+1})$. Let $\Delta_i^{+} = t_{i, i+1}-t_i$ and $\Delta_{i+1}^{-} = t_{i+1} - t_{i, i+1}$. We consider the points $P(t_i), P(t_{i+1}), P(t_{i, i+1}), \underbrace{Q(t_{i} + \Delta_i^{+})}_{Q_{i+}}, \underbrace{Q(t_{i+1}-\Delta_{i+1}^{-})}_{Q_{(i+1)-}}$. \begin{figure} \caption{Figure A shows the collection of overlapping neighbourhoods forming the finite subcover.
Figure B depicts $P_i$ and $P_{i+1}$.} \end{figure} The recursive relation gives \begin{eqnarray} \lambda(Q_{i+}Q_{i+}^{T}) &=& \exp(2 a_i^{+} \Delta_i^+)\ \lambda(P_{i}P_{i}^{T}) \\ \lambda(P_{i, i+1}P_{i, i+1}^{T}) &=& \lambda(Q_{i+}Q^{T}_{i+}) + o((\Delta_i^+)^2) \\ \lambda(Q_{(i+1)-}Q_{(i+1)-}^{T}) &=& \lambda(P_{i, i+1}P_{i, i+1}^{T}) + o((\Delta^{-}_{i+1})^2) \\ \exp(-2 a_{i+1}^{-} \Delta_{i+1}^{-})\ \lambda(P_{i+1}P_{i+1}^T) &=& \lambda(Q_{(i+1)-}Q^{T}_{(i+1)-}) \end{eqnarray}where $a_i^+$ and $a_{i+1}^-$ correspond to $a$ in Eq. (\ref{eq:defa}) and lie in the convex hull of the eigenvalues of $X_d$. Combining the above equations, \begin{equation} \lambda(P_{i+1}P_{i+1}^{T}) = \exp (o(\Delta^2))\ \exp(2 ( a_i^+ \Delta_i^+ + a_{i+1}^{-}\Delta_{i+1}^{-}))\ \lambda(P_i P_i^{T}), \end{equation} where $o(\Delta^2)$ is diagonal. Then \begin{equation} \lambda(P_nP_n^{T}) = \exp(\underbrace{\sum o(\Delta^2)}_{\leq \epsilon T})\ \exp(2 \sum_i (a_i^{+} \Delta_i^{+} + a_{i+1}^{-}\Delta_{i+1}^{-}))\ \lambda(P_1P_1^{T}), \end{equation} where $|\exp(i \alpha) -1| = 2 \sin\frac{|\alpha|}{2} \geq \frac{|\alpha|}{2}$; therefore, $|\exp(i \alpha) - 1| \leq \frac{\theta}{2}$ implies $|\alpha| \leq \theta$. \begin{equation} \lambda(P_nP_n^{T}) = \exp(\underbrace{\sum o(\Delta^2)}_{\leq \epsilon T})\ \exp(2 T \sum_k \alpha_k P_k(\lambda))\ \lambda(P_1P_1^{T}) = \exp(\underbrace{\sum o(\Delta^2)}_{\leq \epsilon T})\ \exp(2 \mu T)\ \lambda(P_1P_1^{T}), \end{equation} where $\mu \prec \lambda$ and $P_1 = I$. \begin{equation} P_n = K_1 \exp(\mu T) \exp(\frac{1}{2}\underbrace{\sum o(\Delta^2)}_{\leq \epsilon T}) K_2. \end{equation} Note that $|P_n - K_1 \exp(\mu T) K_2| = o(\epsilon)$. This implies that $P_n$ belongs to the compact set $K_1 \exp(\mu T) K_2$; otherwise it has a minimum distance from this compact set, and by making $\Delta \rightarrow 0$, and hence $\epsilon \rightarrow 0$, we can make this distance arbitrarily small. In Eq.
\ref{eq:discretize}, $P_n \rightarrow P(T)$ as $\tau \rightarrow 0$. Hence $P(T)$ belongs to the compact set $K_1 \exp(\mu T) K_2$. {\bf q.e.d} \begin{corollary} \label{cor:model1}{ \rm Let $U(t) \in SU(n)$ be a solution to the differential equation $$ \dot{U} = (X_d + \sum_i u_i X_i )U, $$ where $\{X_i\}_{LA}$, the Lie algebra generated by the $X_i$, is $so(n)$ and $ X_d = -i \left[ \begin{array}{cccc} \lambda_1 & 0 & \hdots & 0 \\ 0 & \lambda_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \lambda_n \end{array} \right ]$. The elements of the reachable set at time $T$ take the form $ U(T) \in K_1 \exp(-i \mu T) K_2 $, where $K_1, K_2 \in SO(n)$ and $\mu \prec \lambda$, where $\lambda = (\lambda_1, \dots, \lambda_n)'$, and the set $S = K_1 \exp(-i \mu T) K_2$ belongs to the closure of the reachable set. }\end{corollary} \noindent {\bf Proof:} Let $V(t) = K'(t) U(t) $, where $\dot{K} = (\sum_i u_i X_i) K$. Then $$\dot{V}(t) = Ad_{K'(t)}(X_d) V(t). $$ From theorem \ref{th:model1}, we have $V(T) \in K_1 \exp(-i \mu T) K_2$. Therefore $U(T) \in K_1 \exp(-i \mu T) K_2$. Given $$ U = K_1 \exp(-i \mu T) K_2 = K_1 \exp(-i \sum_j \alpha_j P_j(\lambda) T) K_2 = K_1 \prod_j \left( \W_j \exp(-i t_j X_d ) \W_j' \right) K_2,\ \ \sum_j t_j = T ,$$ where the $\W_j \in SO(n)$ are rotations that permute the entries of $\lambda$, we can synthesize the $\W_j$, $K_1$, $K_2$ in negligible time; therefore $| U(T) - U | < \epsilon$, for any desired $\epsilon$. Hence $U$ is in the closure of the reachable set. {\bf q.e.d} \begin{remark}{\rm We now show how theorems \ref{th:SUdecompose} and \ref{th:model1} can be mapped to results on decomposition and reachable set for coupled spins/qubits. Consider the transformation $$ W = \exp(-i \pi I_yS_y) \exp(-i \frac{\pi}{2} I_z ). $$ The transformation maps the algebra $\mathfrak{k}= su(2) \times su(2) = \{ I_\alpha, S_\alpha \}$ to $\k_1 = so(4)$, the $4 \times 4$ real skew symmetric matrices, i.e., $Ad_W(\k) = \k_1$.
The transformation maps $\mathfrak{p}= \{ I_\alpha S_\beta \}$ to $\p_1 = -i A $, where $A$ is traceless symmetric, and maps $\mathfrak{a}= -i \{ I_xS_x, I_yS_y, I_zS_z \}$ to $\a_1 = -i\{-\frac{S_z}{2}, \frac{I_z}{2}, I_zS_z \}$, the space of diagonal matrices in $\p_1$, such that the triplet $(a_x, a_y, a_z)$ gets mapped to the four vector (the diagonal) $(\lambda_1, \lambda_2, \lambda_3, \lambda_4) = (a_y + a_z - a_x, a_x + a_y - a_z, -(a_x + a_y + a_z), a_x + a_z - a_y)$.} \end{remark} \begin{corollary}{\rm {\bf Canonical Decomposition:} Given the decomposition of $SU(4)$ from theorem \ref{th:SUdecompose}, we can write $$ U = \exp(\Omega_1) \exp( -i \left[ \begin{array}{ccc} \lambda_1 & \dots & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \lambda_4 \end{array} \right ]) \exp(\Omega_2),$$ where $\Omega_1, \Omega_2 \in so(4)$. We write the above as $$ U = \exp(\Omega_1) \exp( -i (-\frac{a_x}{2} S_z + \frac{a_y}{2} I_z + a_z I_zS_z)) \exp(\Omega_2).$$ Multiplying both sides with $W'(\cdot)W$ gives $$ W' U W = K_1 \exp(-i( a_x I_xS_x + a_y I_yS_y + a_z I_zS_z)) K_2, $$ where $K_1, K_2 \in SU(2) \times SU(2)$ are local unitaries, and we can rotate to $a_x \geq a_y \geq |a_z|$.} \end{corollary} \begin{corollary}{\rm {\bf Diagonalization:} Given $-iH_c = -i \sum_{\alpha \beta} J_{\alpha \beta}I_\alpha S_\beta$, there exists a local unitary $K$ such that $$ K (-iH_c) K' = -i(a_x I_xS_x + a_y I_yS_y + a_z I_zS_z), a_x \geq a_y \geq |a_z|. $$ Note $W (-iH_c)W' \in \p_1$. Then choose $\Theta = \exp(\Omega) \in SO(4)$ such that $\Theta W (-iH_c)W' \Theta' = -i (-\frac{a_x}{2} S_z + \frac{a_y}{2} I_z + a_z I_zS_z)$, and hence $$ (W' \exp(\Omega) W) (-i H_c ) (W' \exp(\Omega) W)' = -i(a_x I_xS_x + a_y I_yS_y + a_z I_zS_z), $$ where $ K = W' \exp(\Omega) W$ is a local unitary.
We can rotate to ensure $a_x \geq a_y \geq |a_z|$.}\end{corollary} \begin{corollary}{\rm Given the evolution of coupled qubits $\dot U = -i(H_c + \sum_j u_j H_j) U$, we can diagonalize $H_c = \sum_{\alpha \beta} J_{\alpha \beta} I_{\alpha}S_{\beta}$ by a local unitary: $X_d = K' H_c K = a_x I_xS_x + a_y I_yS_y + a_z I_zS_z$, $a_x \geq a_y \geq |a_z|$, which we write as the triple $(a_x, a_y, a_z)$. From this, there are 24 triples obtained by permuting and changing the sign of any two entries by local unitaries. Then $U(T) \in S$, where $$ S = K_1 \exp(T \sum_i \alpha_i (a_i, b_i, c_i)) K_2, \ \alpha_i > 0, \ \sum_{i}\alpha_i = 1. $$ Furthermore, $S$ belongs to the closure of the reachable set. An alternate description of $S$ is $$U = K_1 \exp( -i ( \alpha I_xS_x + \beta I_yS_y + \gamma I_zS_z))K_2, \ \ \alpha \geq \beta \geq |\gamma|, $$ $\alpha \leq a_x T$ and $\alpha + \beta \pm \gamma \leq (a_x + a_y \pm a_z) T$.} \end{corollary} \noindent {\bf Proof:} Let $V(t) = K'(t) U(t) $, where $\dot{K} = (-i \sum_j u_j H_j) K$. Then $$\dot{V}(t) = Ad_{K'(t)}(-iX_d) V(t). $$ Consider the product $$ V = \prod_i \exp(Ad_{K_i}(-i X_d) \Delta t) $$ where $K_i \in SU(2)\otimes SU(2)$ and $X_d = a_x I_xS_x + a_y I_yS_y + a_z I_zS_z$, where $a_x \geq a_y \geq |a_z|$. Then, $$ W V W' = \prod_i \exp(Ad_{WK_iW'}(-i WX_dW') \Delta t). $$ Observe $WK_iW' \in SO(4)$ and $WX_dW'= diag(\lambda_1, \lambda_2, \dots , \lambda_4)$. Then using results from theorem \ref{th:model1}, we have $$ WVW' = J_1 \exp(-i \mu)J_2 = J_1 \exp(-i \sum_j \alpha_j P_j(\lambda)) J_2 , \ \ J_1, J_2 \in SO(4), \ \ \mu \prec \lambda T. $$ Multiplying both sides with $W'(\cdot)W$, we get $$ V = K_1 \exp(T \sum_i \alpha_i (a_i, b_i, c_i)) K_2, \ \alpha_i > 0, \ \sum_{i}\alpha_i = 1,
$$ which we can write as $$ V = K_1 \exp ( -i (\alpha I_xS_x + \beta I_yS_y + \gamma I_zS_z)) K_2, \ \ \alpha \geq \beta \geq |\gamma|, $$ where using $\mu \prec \lambda T $, we get \begin{eqnarray} \alpha + \beta - \gamma &\leq& (a_x + a_y - a_z) T \\ \alpha &\leq& a_x T \\ \alpha + \beta + \gamma &\leq& (a_x + a_y + a_z) T. \end{eqnarray} Furthermore, $U = KV$. Hence the proof. {\bf q.e.d} \section{Time Optimal Control for $\frac{SU(2n)}{SU(n) \times SU(n) \times U(1)}$ problem} \label{sec:third} \begin{remark}\label{rem:stabilizer}{\rm {\bf Stabilizer:} Let $\mathfrak{g}= \mathfrak{p}\oplus \k$ be the Cartan decomposition of a real semisimple Lie algebra $\g$ and $\mathfrak{a}\subset \mathfrak{p}$ be its Cartan subalgebra. Let $a \in \a$. $ad_a^2 : \mathfrak{p}\rightarrow \p$ is symmetric in a basis orthonormal with respect to the Killing form. We can diagonalize $ad_a^2$. Let $Y_i$ be eigenvectors with nonzero (negative) eigenvalues $-\lambda_i^2$. Let $X_i = \frac{[a, Y_i]}{\lambda_i}$, $\lambda_i > 0$. $$ ad_a(Y_i) = \lambda_i X_i, \ \ ad_a(X_i) = -\lambda_i Y_i. $$ The $X_i$ are independent: $\sum \alpha_i X_i = 0$ implies, applying $ad_a$, that $- \sum \alpha_i \lambda_i Y_i=0$; since the $Y_i$ are independent, the $X_i$ are independent. Given $X \in \k$ with $X \perp X_i$ for all $i$, then $[a, X ] =0$; otherwise we can decompose it in eigenvectors of $ad_a^2$, i.e., $[a, X] = \sum_i \alpha_i a_i + \sum_j \beta_j Y_j $, where the $a_i$ are zero eigenvectors of $ad_a^2$. Since $0 = \langle X, [a,[a, X]] \rangle = - \| [a, X] \|^2$, we get $[a, X] =0$, a contradiction. That the $Y_i$ are orthogonal implies the $X_i$ are orthogonal: $\langle [a, Y_i], [a, Y_j] \rangle = -\langle [a, [a, Y_i]], Y_j \rangle = \lambda_i^2 \langle Y_i, Y_j \rangle = 0$. Let $\k_0 \subset \k$ satisfy $[a, \k_0] = 0$. Then $\k_0 = \{ X_i \}^{\perp}$. Let $\tilde Y_i$ denote the eigenvectors whose $\lambda_i$ are non-zero integral multiples of $\pi$. The $\tilde{X}_i$ are $ad_a$ related to the $\tilde{Y}_i$.
We now reserve $Y_i$ for eigenvectors whose non-zero $\lambda_i$ are not integral multiples of $\pi$. Let $$\mathfrak{f}= \{a_i\} \oplus \tilde{Y}_i, \ \ \ \mathfrak{h}= \k_0 \oplus \tilde{X}_i. $$ The $\tilde{X}_i, X_l, k_j$, where the $k_j$ form a basis of $\k_0$, form a basis of $\k$. Let $A = \exp(a)$. For $k \in \k$, $$A k A^{-1} = A (\sum_i \alpha_i X_i + \sum_l \alpha_l \tilde{X}_l + \sum_j \alpha_j k_j ) A^{-1} = \sum_i \alpha_i [ \cos(\lambda_i) X_i - \sin(\lambda_i) Y_i ] + \sum_l \pm \alpha_l \tilde{X}_l + \sum_j \alpha_j k_j. $$ The range of $A (\cdot) A^{-1}$ in $\p$ is perpendicular to $\f$. Given $Y \in \p$ such that $Y \in \f^{\perp}$, the norm $\|X\|$ of $X \in \k$ such that the $\p$ part $A X A^{-1}|_{\p} = Y$ satisfies \begin{equation} \label{eq:nghdbnd} \| X \| \leq \frac{\|Y \|}{\sin \lambda_s}, \end{equation} where $\lambda_{s}^2$ is the smallest nonzero eigenvalue of $-ad_a^2$ such that $\lambda_s$ is not an integral multiple of $\pi$. $A^2 (\cdot) A^{-2}$ stabilizes $\mathfrak{h}\subset \k$ and $\mathfrak{f}\subset \p$. If $k \in \k$ is stabilized by $A^2 (\cdot) A^{-2}$, then $\lambda_i = n \pi$, i.e., $k \in \h$. This means $\h$ is a subalgebra, as the Lie bracket $[y, z] \in \k$ for $y, z \in \h$ is stabilized by $A^2 (\cdot) A^{-2}$. Let $H = \exp(\h)$ be an integral manifold of $\h$. Let $\tilde{H} \subset K$ be the solution set of $A^2 \tilde H A^{-2} = \tilde H$, or $A^2 \tilde H - \tilde H A^{-2} = 0$. $\tilde{H}$ is closed, and $H \subset \tilde H$. We show that $\tilde H$ is a manifold. Given $H_0 \in \tilde{H} \subset K$, where $K$ is closed, we have an $\exp(B_{\delta}^{\k})H_0$ neighborhood of $H_0$, in an $\exp(B_{\delta})$ ball neighborhood of $H_0$, on which $\exp$ is one to one.
For $x \in B_{\delta}^{\k}$, $ A^2 \exp(x) H_0 A^{-2} = \exp(x) H_0$ implies \begin{equation} A^2 \exp(\sum_i \alpha_i X_i + \sum_l \beta_l \tilde{X}_l + \sum_j \gamma_j k_j )H_0 A^{-2} = \exp (\sum_i \alpha_i (\cos(2 \lambda_i) X_i - \sin(2 \lambda_i) Y_i) + \sum_l \beta_l \tilde{X}_l + \sum_j \gamma_j k_j )H_0; \end{equation} then, since $\exp$ is one to one on $B_{\delta}$, we get $\alpha_i = 0$ and $x \in \h$. Therefore $\exp(B_{\delta}^{\h})H_0$ is a neighborhood of $H_0$. Given a sequence $H_i \in \exp(\h)$ converging to $H_0$, for $n$ large enough $H_n \in \exp(B_{\delta}^{\h})H_0$. Then $H_0$ is in the invariant manifold $\exp(\h)$. Hence $\exp(\h)$ is closed and hence compact. Let $y \in \f$; then there exists an $h_0 \in \h$ such that $ \exp(h_0) y \exp(-h_0) \in \a$. We maximize the function $ \langle a_r, \exp(h) y \exp(-h) \rangle $ over the compact group $\exp(\h)$, for a regular element $a_r \in \a$, where $\langle \cdot,\cdot\rangle$ is the Killing form. At the maximum $h_0$, we have, at $t=0$, $\frac{d}{dt} \langle a_r, \exp(h_1 t) (\exp(h_0) y \exp(-h_0)) \exp(-h_1 t) \rangle = 0$, i.e., $$ \langle a_r, [h_1, \exp(h_0) y \exp(-h_0)] \rangle = - \langle h_1, [a_r, \exp(h_0) y \exp(-h_0)] \rangle = 0. $$ If $\exp(h_0) y \exp(-h_0) \notin \a$, then $[a_r,\ \exp(h_0) y \exp(-h_0)] \in \k$ is nonzero. The bracket $[a_r,\ \exp(h_0) y \exp(-h_0)]$ is $Ad_{A^2}$ invariant and hence belongs to $\h$. We can then choose $h_1$ so that the gradient is not zero, a contradiction. Hence $\exp(h_0) y \exp(-h_0) \in \a$. For $z \in \p$ such that $z \in \f^{\perp}$, we have $\exp(h_0) z \exp(-h_0) \in \a^{\perp}$: $$ \langle \a, \exp(h_0) z \exp(-h_0) \rangle = \langle \exp(-h_0) \mathfrak{a}\exp(h_0), z \rangle = 0, $$ as $\exp(-h_0) \mathfrak{a}\exp(h_0)$ is $Ad_{A^2}$ invariant, hence $\exp(-h_0) \mathfrak{a}\exp(h_0) \subset \f$. In the above, we worked with the Killing form.
For $\mathfrak{g}= su(n)$, we may use the standard inner product.}\end{remark} \begin{remark}{\bf Kostant Convexity}\label{rem:convexity}{ \rm \cite{Kostant} Given the decomposition $\mathfrak{g}= \mathfrak{p}\oplus \k$, let $\mathfrak{a}\subset \p$ and $X \in \a$. Let $\W_i \in \exp(\k)$ be such that the $\W_i X \W_i^{-1} \in \a$ are the distinct Weyl points. Then the projection (w.r.t. the Killing form) of $Ad_K(X)$ on $\a$ lies in the convex hull of these Weyl points. Let $\C$ be this convex hull, and suppose the projection $P(Ad_K(X))$ lies outside the hull. Then there is a separating hyperplane $a$, such that $\langle Ad_K(X), a \rangle < \langle \C, a \rangle $. W.L.O.G. we can take $a$ to be a regular element. We minimize $\langle Ad_K(X), a \rangle$ over the choice of $K$ and find that the minimum happens when $[Ad_K(X), a] = 0$, i.e., $Ad_K(X)$ is a Weyl point, a contradiction. Hence $P(Ad_K(X)) = \sum_i \alpha_i \W_i X \W_i^{-1}$, for $\alpha_i \geq 0$ and $\sum_i \alpha_i = 1$. The result remains true with a projection w.r.t. any inner product that satisfies $\langle x, [y, z] \rangle = \langle [x, y], z \rangle $, like the standard inner product on $\mathfrak{g}= su(n)$.} \end{remark} We now give an elementary proof (using eigenvalues and eigenvectors) of the special case of the KAK decomposition for the group $G = SU(2n)$ with the closed subgroup $K = SU(n) \times SU(n) \times U(1)$ of block diagonal special unitaries, such that the respective Lie algebras $\mathfrak{g}= su(2n)$, the traceless skew-Hermitian matrices, and $\mathfrak{k}= su(n) \oplus su(n) \oplus u(1)$, the block diagonal traceless skew-Hermitian matrices, have the Cartan decomposition $\mathfrak{g}= \mathfrak{p}\oplus \k$ where $\mathfrak{p}= \left [ \begin{array}{cc} 0 & Z \\ -Z' & 0 \end{array} \right ]$.
The associated Cartan subalgebra is $\mathfrak{a}= \left [ \begin{array}{cc} 0 & \lambda \\ -\lambda & 0 \end{array} \right ]$ where $\lambda = \left[ \begin{array}{cccc} \lambda_1 & 0 & \hdots & 0 \\ 0 & \lambda_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \lambda_n \end{array} \right ]$; when $|\lambda_i| \neq |\lambda_j| \neq 0$, this is a regular element of $\a$. \begin{theorem}\label{th:SU(2n)decompose}{\rm Let $U \in SU(2n)$, then $$ U = \left [ \begin{array}{cc} K_1 & 0 \\ 0 & K_2 \end{array} \right ] \exp(\left[ \begin{array}{cc} 0 & \lambda \\ -\lambda & 0 \end{array} \right ]) \left[ \begin{array}{cc} K_3 & 0 \\ 0 & K_4 \end{array} \right ], $$where $\left [ \begin{array}{cc} K_1 & 0 \\ 0 & K_2 \end{array} \right ], \left [ \begin{array}{cc} K_3 & 0 \\ 0 & K_4 \end{array} \right ] \in SU(n)\times SU(n)\times U(1)$ (block diagonal special unitary matrices) and $$ \left[ \begin{array}{cc} 0 & \lambda \\ -\lambda & 0 \end{array} \right ] = \left[ \begin{array}{cccccc} 0 & \hdots & 0 & \lambda_1 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \hdots & 0 & \hdots & 0 & \lambda_n \\ -\lambda_1 & \hdots & 0 & 0 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \hdots & -\lambda_n & 0 & \hdots & 0 \end{array} \right ]. $$ }\end{theorem} {\bf Proof: } Let $S$ be the block diagonal matrix $$ S = \left[ \begin{array}{cc} \mbox{$\bf 1\ $} & 0 \\ 0 & -\mbox{$\bf 1\ $} \end{array} \right ]. $$ Then for $\cos(\lambda) = \left[ \begin{array}{cccc} \cos \lambda_1 & 0 & \hdots & 0 \\ 0 & \cos \lambda_2 & \hdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \hdots & \cos \lambda_n \end{array} \right ]$, $$ S \left[ \begin{array}{cc} \cos(\lambda) & -\sin(\lambda) \\ \sin(\lambda) & \cos(\lambda) \end{array} \right ]S = \left[ \begin{array}{cc} \cos(\lambda) & \sin(\lambda) \\ -\sin(\lambda) & \cos(\lambda) \end{array} \right ]. $$ Since $USU'S \in SU(2n)$, let $x$ be an eigenvector of $USU'S$.
Then $$ USU'S x = \exp(j \theta) x. $$ Taking the inverse, $$ USU'S (Sx) = \exp(-j \theta) Sx. $$ Let $\Sigma_1$ be the perpendicular eigenvectors corresponding to eigenvalues $\exp(-j \theta)$. Then $S \Sigma_1$ are perpendicular eigenvectors corresponding to eigenvalues $\exp(j \theta)$. This says that the eigenvalues $\exp(j \theta)$ and $\exp(-j \theta)$ have the same multiplicities. This leaves us with the eigenvalues $1$ and $-1$. Given the eigenvector $ z = \left[ \begin{array}{c} x \\ y \end{array} \right ] $, with eigenvalue $1$, $ S z = \left[ \begin{array}{c} x \\ -y \end{array} \right ]$ is an eigenvector with eigenvalue $1$. This says the eigenvectors of $1$ and $-1$ can be chosen of the form $ \left[ \begin{array}{c} x \\ 0 \end{array} \right ] $ and $ \left[ \begin{array}{c} 0 \\ y \end{array} \right ] $. This allows us to form orthonormal pairs $ \left[ \begin{array}{c} x \\ y \end{array} \right ] $ and $ \left[ \begin{array}{c} x \\ - y \end{array} \right ]$. After pairing, let $x_{\pm}$ and $y_{\pm}$ be the surplus eigenvectors with $\pm 1$ eigenvalues, in which the $y$ and $x$ parts are zero respectively. Then, the number of independent $x_{+}$ and $y_{-}$ is the same. The eigenvectors can be organized in columns as follows: $$ P = \left[ \begin{array}{cccc} z_{+} \ x_{+} \ z_{-} \ y_{-} \end{array} \right ], $$ where $z_{+}$ and $z_{-}$ are eigenvectors corresponding to $ \left[ \begin{array}{c} x \\ y \end{array} \right ] $, and $ \left[ \begin{array}{c} x \\ -y \end{array} \right ]$ respectively. Let $I= \frac{1}{\sqrt{2}} \mbox{$\bf 1\ $}_{k \times k} $ and $I_1 = \mbox{$\bf 1\ $}_{m \times m}$ and $I_0 = \mbox{$\bf 0\ $}_{m \times m}$. $$ S_1 = \left[ \begin{array}{cc} I_A & -I_B \\ I_B & I_A \end{array} \right ]. $$ $$ I_A = \left[ \begin{array}{cc} I & 0 \\ 0 & I_1 \end{array} \right ], \ \ I_B = \left[ \begin{array}{cc} I & 0 \\ 0 & I_0 \end{array} \right ].
$$ Then $$ USU'S = P S_1 S_1' \left[ \begin{array}{cccc} I_{\theta} & 0 & 0 & 0 \\ 0 & I_1 & 0 & 0 \\ 0 & 0 & I_{-\theta} & 0 \\ 0 & 0 & 0 & -I_1 \end{array} \right ]S_1 S_1' P', $$ where $I_{\theta} = \left[ \begin{array}{cccc} \exp(i \theta_1) & 0 & 0 & 0 \\ 0 & \exp(i \theta_2) & 0 & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & \exp(i \theta_k) \end{array} \right ]$. Then $$ P S_1 = \left[ \begin{array}{cc} K_1 & 0 \\ 0 & K_2 \end{array} \right ] = K. $$ Then $$ S_1' \left[ \begin{array}{cccc} I_{\theta} & 0 & 0 & 0 \\ 0 & I_1 & 0 & 0 \\ 0 & 0 & I_{-\theta} & 0 \\ 0 & 0 & 0 & -I_1 \end{array} \right ]S_1 = \left[ \begin{array}{cccc} \cos_\theta & 0 & -i \sin_\theta & 0 \\ 0 & I_1 & 0 & 0 \\ -i \sin_\theta & 0 & \cos_\theta & 0 \\ 0 & 0 & 0 & -I_1 \end{array} \right ]= A^2. $$ Let $$ U = \left[ \begin{array}{cccc} K_1 & 0 \\ 0 & K_2 \end{array} \right ] \left[ \begin{array}{cccc} \cos_{\frac{\theta}{2}} & 0 & -i \sin_{\frac{\theta}{2}} & 0 \\ 0 & I_1 & 0 & 0 \\ -i \sin_{\frac{\theta}{2}} & 0 & \cos_{\frac{\theta}{2}} & 0 \\ 0 & 0 & 0 & i I_1 \end{array} \right ] V = K A V. $$ Then $$ VSV' = A' K'U S U' K A = A' K' U S U' S S K A = A' K' K A^2 K' K A'' S = W S, $$ where $$ A'' = \left[ \begin{array}{cccc} \cos_{\frac{\theta}{2}} & 0 & i \sin_{\frac{\theta}{2}} & 0 \\ 0 & I_1 & 0 & 0 \\ i \sin_{\frac{\theta}{2}} & 0 & \cos_{\frac{\theta}{2}} & 0 \\ 0 & 0 & 0 & i I_1 \end{array} \right ]. $$ This gives $SVS = W V$, where $ V = \left[ \begin{array}{cc} U_1 & U_2 \\ U_3 & U_4 \end{array} \right ]$ and $W = \left[ \begin{array}{ccc} \mbox{$\bf 1\ $}_{n \times n} & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & -I_1 \end{array} \right ] $. This gives $U_2 = 0$. Since $V$ is unitary, this gives $U_3 = 0$.
Therefore $U=$ $$\left[ \begin{array}{cccc} K_1 & 0 \\ 0 & K_2 \end{array} \right ] \left[ \begin{array}{cccc} \cos_{\frac{\theta}{2}} & 0 & -i \sin_{\frac{\theta}{2}} & 0 \\ 0 & I_1 & 0 & 0 \\ -i \sin_{\frac{\theta}{2}} & 0 & \cos_{\frac{\theta}{2}} & 0 \\ 0 & 0 & 0 & i I_1 \end{array} \right ] \left[ \begin{array}{cccc} U_1 & 0 \\ 0 & U_4 \end{array} \right ] = \left[ \begin{array}{cccc} K_1 & 0 \\ 0 & K_2 \end{array} \right ] \left[ \begin{array}{cccc} \cos_{\frac{\theta}{2}} & 0 & \sin_{\frac{\theta}{2}} & 0 \\ 0 & I_1 & 0 & 0 \\ -\sin_{\frac{\theta}{2}} & 0 & \cos_{\frac{\theta}{2}} & 0 \\ 0 & 0 & 0 & I_1 \end{array} \right ] \left[ \begin{array}{cccc} K_3 & 0 \\ 0 & K_4 \end{array} \right ] , $$ where $ \left[ \begin{array}{cccc} K_1 & 0 \\ 0 & K_2 \end{array} \right ]$ and $\left[ \begin{array}{cccc} K_3 & 0 \\ 0 & K_4 \end{array} \right ]$ are block diagonal special unitary matrices. {\bf q.e.d.} \noindent We now give a proof of the reachable set in Eq. (\ref{eq:reachableintro}), for the $\frac{SU(2n)}{SU(n) \times SU(n) \times U(1)}$ problem. \begin{theorem}\label{th:model2}{\rm Let $P(t) \in SU(2n)$ be a solution to the differential equation $$ \dot{P} = Ad_{K(t)}(X_d) P, $$ where $K(t) = \left [ \begin{array}{cc} K_1 & 0 \\ 0 & K_2 \end{array} \right ] \in SU(n) \times SU(n) \times U(1)$, block diagonal special unitary matrices, and $$ X_d = \left[ \begin{array}{cc} 0 & \lambda \\ -\lambda & 0 \end{array} \right ] = \left[ \begin{array}{cccccc} 0 & \hdots & 0 & \lambda_1 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \hdots & 0 & \hdots & 0 & \lambda_n \\ -\lambda_1 & \hdots & 0 & 0 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \hdots & -\lambda_n & 0 & \hdots & 0 \end{array} \right ].
$$ The elements of the reachable set at time $T$ take the form $$ P(T) = \left [ \begin{array}{cc} \Theta_1 & 0 \\ 0 & \Theta_2 \end{array} \right ]\exp (T \sum_k \alpha_k \W_k (\left [ \begin{array}{cc} 0 & \lambda \\ -\lambda & 0 \end{array} \right ])\W_k') \left [ \begin{array}{cc} \Theta_3 & 0 \\ 0 & \Theta_4 \end{array} \right ], $$ where $ \left [ \begin{array}{cc} \Theta_1 & 0 \\ 0 & \Theta_2 \end{array} \right ], \left [ \begin{array}{cc} \Theta_3 & 0 \\ 0 & \Theta_4 \end{array} \right ], \W_k \in SU(n) \times SU(n) \times U(1)$. The $\W_k$ induce permutations and sign changes on $\lambda$.}\end{theorem} {\bf Proof: } From Theorem \ref{th:SU(2n)decompose}, for every time $t$, $P(t)$ has the form $$ P = \underbrace{\left [ \begin{array}{cc} U_1 & 0 \\ 0 & U_2 \end{array} \right ]}_{K_1} \left[ \begin{array}{cccccc} \cos \phi_1 & \dots & 0 & \sin \phi_1 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cos \phi_n & 0 & 0 & \sin \phi_n \\ -\sin \phi_1 & 0 & 0 & \cos \phi_1 & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & -\sin \phi_n & 0 & 0 & \cos \phi_n \end{array} \right ] \underbrace{\left [ \begin{array}{cc} U_3 & 0 \\ 0 & U_4 \end{array} \right ]}_{K_2}, $$ $$ P = K_1 \exp \left[ \begin{array}{cccccc} 0 & \dots & 0 & \phi_1 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \phi_n \\ -\phi_1 & 0 & 0 & 0 & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & -\phi_n & 0 & 0 & 0 \end{array} \right ] K_2 . $$ As in Theorem \ref{th:model1}, we consider evaluating the product $\Pi \exp(Ad_{K_i}(X_d)\Delta )$.
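Numerically, the factorization in Theorem \ref{th:SU(2n)decompose} is the cosine--sine (CS) decomposition of a unitary matrix. The following is an illustrative sketch, not part of the proof; it assumes SciPy's \texttt{scipy.linalg.cossin} and \texttt{scipy.stats.unitary\_group} are available. It draws a random $U \in U(2n)$ and checks the block structure of the three factors:

```python
import numpy as np
from scipy.linalg import cossin
from scipy.stats import unitary_group

# Random unitary U in U(2n); the CS decomposition returns
# U = u @ cs @ vdh with u, vdh block diagonal unitary and cs a
# cosine-sine block rotation, the block form of the theorem.
n = 3
U = unitary_group.rvs(2 * n, random_state=0)
u, cs, vdh = cossin(U, p=n, q=n)  # p = q = n splits U into n-by-n blocks

# The outer factors are block diagonal ...
assert np.allclose(u[:n, n:], 0) and np.allclose(u[n:, :n], 0)
assert np.allclose(vdh[:n, n:], 0) and np.allclose(vdh[n:, :n], 0)
# ... and the product reconstructs U.
assert np.allclose(u @ cs @ vdh, U)
```

The factors returned by \texttt{cossin} are unitary rather than special unitary; determinant phases can be absorbed into $K_1, \dots, K_4$ as in the theorem.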
We approximate the evolution $\exp(Ad_K(X_d)\Delta ) P(t)$ by $Q(t + \Delta)$, where we define $Q(t + \Delta)$ as \begin{equation} \label{eq:Qdegenerate.2} Q(t + \Delta) = \exp(\Omega_1 \Delta) K_1 H_1 \exp(a \Delta) A H_2 \exp(\Omega_2 \Delta) K_2 \end{equation} where $H_1, H_2 \in \exp(\h)$, the stabilizer group, as discussed in Remark \ref{rem:stabilizer}, such that $H_1 A H_2 = A$, by choosing $H_2^{-1} = A^{-1}H_1A$. The above expression can be written as $$ Q(t + \Delta) = \exp(\Omega_1 \Delta) \exp(K_1 H_1 a H_1'K_1' \Delta) \exp(K_1 A \Omega_2 A' K_1' \Delta) P(t), $$ where $\Omega_1, H_1, a, \Omega_2$ have the same meaning as in Theorem \ref{th:model1}, in the context of the present decomposition $\mathfrak{g}= \mathfrak{p}\oplus \k$, and are chosen such that $$ (\Omega_1 + K_1 H_1 a H_1' K_1' + K_1 A \Omega_2 A' K_1') = Ad_K(X_d). $$ $$ (\Omega_1' + H_1 a H_1' + A \Omega_2 A') = Ad_{\bar K}(X_d), $$ where $\bar{K} = K_1^{-1}K$ and \begin{equation} P(t + \Delta) = \exp( (\Omega_1 + K_1 H_1 a H_1' K_1' + K_1 A \Omega_2 A' K_1') \Delta) P(t) \end{equation} $$ P(t + \Delta) - Q(t + \Delta) = o(\Delta^2) P(t) $$ $$ Q(t + \Delta) = (I + o(\Delta^2))P(t + \Delta). $$ Let $$ \left [ \begin{array}{cc} A & B \\ C & D \end{array} \right ]^{\ast} = \left [ \begin{array}{cc} I & 0 \\ 0 & -I \end{array} \right ] \left [ \begin{array}{cc} A & B \\ C & D \end{array} \right ]' \left [ \begin{array}{cc} I & 0 \\ 0 & -I \end{array} \right ], $$ where we have $n \times n$ subblocks.
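The involution $M \mapsto M^{\ast} = S M' S$ and the conjugate pairing of the eigenvalues of $PP^{\ast}$ derived below can be checked numerically. The following sketch (with randomly chosen blocks and angles; an illustration, not part of the argument) verifies that the spectrum of $PP^{\ast}$ is closed under complex conjugation:

```python
import numpy as np
from scipy.stats import unitary_group

n = 3
rng = np.random.default_rng(1)
phi = rng.uniform(0, np.pi, n)

# P = diag(K1, K2) @ R(phi) @ diag(K3, K4), the form of the theorem.
K1, K2, K3, K4 = (unitary_group.rvs(n, random_state=s) for s in range(4))
C, Sn = np.diag(np.cos(phi)), np.diag(np.sin(phi))
R = np.block([[C, Sn], [-Sn, C]])
Z0 = np.zeros((n, n))
P = np.block([[K1, Z0], [Z0, K2]]) @ R @ np.block([[K3, Z0], [Z0, K4]])

# The involution M* = S M^dagger S with S = diag(I, -I).
S = np.diag(np.r_[np.ones(n), -np.ones(n)])
star = lambda M: S @ M.conj().T @ S

# Eigenvalues of P P* come in conjugate pairs exp(+-2 i phi_j):
# the multiset of eigenvalue angles is symmetric under negation.
ev = np.linalg.eigvals(P @ star(P))
assert np.allclose(np.sort(np.angle(ev)), np.sort(-np.angle(ev)))
```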
$$ PP^\ast = \left [ \begin{array}{cc} K_1 & 0 \\ 0 & K_2 \end{array} \right ] \left[ \begin{array}{cccccc} \cos 2\phi_1 & \dots & 0 & \sin 2\phi_1 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cos 2\phi_n & 0 & 0 & \sin 2\phi_n \\ -\sin 2\phi_1 & 0 & 0 & \cos 2\phi_1 & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & -\sin 2\phi_n & 0 & 0 & \cos 2\phi_n \end{array} \right ] \left [ \begin{array}{cc} K_1' & 0 \\ 0 & K_2' \end{array} \right ].$$ Let $S = \exp(-i \frac{\pi}{2} \sigma_x \otimes \mbox{$\bf 1\ $}_{n \times n})$; then $$ PP^\ast = \left [ \begin{array}{cc} K_1 & 0 \\ 0 & K_2 \end{array} \right ] S' \left[ \begin{array}{cccccc} \exp(i 2\phi_1) & \dots & 0 & 0 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \exp(i 2\phi_n) & 0 & 0 & 0 \\ 0 & 0 & 0 & \exp(-i 2\phi_1) & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \exp(-i 2\phi_n) \end{array} \right ] S \left [ \begin{array}{cc} K_1' & 0 \\ 0 & K_2' \end{array} \right ].$$ The above expression can be written as $$ Q(t + \Delta)Q(t + \Delta)^{\ast} = (I + o(\Delta^2))P(t + \Delta)P^{\ast}(t + \Delta) (I + o(\Delta^2)) = P(t+ \Delta)P^{\ast}(t + \Delta) [ I + o(\Delta^2)].
$$ $Q(t+\Delta)Q(t + \Delta)^{\ast} = \underbrace{\left [ \begin{array}{cc} \tilde{K}_1 & 0 \\ 0 & \tilde{K}_2 \end{array} \right ] S^{\dagger}}_{U} \Sigma \underbrace{S \left [ \begin{array}{cc} \tilde{K}_1 & 0 \\ 0 & \tilde{K}_2 \end{array} \right ]'}_{U'}$, where $$\Sigma = \left[ \begin{array}{cccccc} \exp i2(\phi_1 + a_1 \Delta) & \dots & 0 & 0 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \exp i 2(\phi_n + a_n \Delta) & 0 & 0 & 0 \\ 0 & 0 & 0 & \exp -i 2(\phi_1 + a_1 \Delta) & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \exp -i 2(\phi_n + a_n \Delta) \end{array} \right ].$$ $$ P(t + \Delta)P^{\ast}(t+\Delta) = V \left[ \begin{array}{cccccc} \exp i \mu_1 & \dots & 0 & 0 & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \exp i \mu_n & 0 & 0 & 0 \\ 0 & 0 & 0 & \exp -i \mu_1 & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \exp -i \mu_n \end{array} \right ]V^{\dagger}. $$ By choosing $\Delta$ small, we can, as in Theorem \ref{th:model1}, bound $|o(\Delta^2)| < c \Delta $ (see Eq. \ref{eq:nghdbnd}). Let $\tilde{a} = \mathop{\rm min}_{a_i \neq \pm a_j}(a_i \pm a_j)$. For small $\Delta$, the minimum spacing between non-degenerate eigenvalues of $QQ^{\ast} (t + \Delta)$ is $\tilde{a} \Delta$. When $c \leq \frac{\tilde{a}}{6}$, $c \Delta \leq \frac{\tilde{a} \Delta}{6} \leq 2 \sin(\frac{\tilde{a} \Delta}{6})$. If the eigenvalue $\exp(-i \lambda_j)$ in $\lambda(QQ^{\ast})= \left [ \begin{array}{c} \exp(-i \lambda) \\ \exp(i \lambda) \end{array} \right ]$ ($\exp(-i \lambda)$ is an eigenvalue set) is $m+n$ degenerate ($m$ coming from the top, the remainder from the bottom), then its $\frac{\tilde{a} \Delta}{3}$ nghd has precisely $m+n$ eigenvalues of $\lambda(PP^{\ast})$.
This follows from Schur convexity, as otherwise $$ o (\Delta^2) = |QQ^{\ast} -PP^{\ast}| = \sum_k \alpha_k |\phi - P_k(\mu)| > 2 \sin(\frac{\tilde{a} \Delta}{6}), $$ where $\phi = \lambda(QQ^{\ast}), \mu = \lambda(PP^{\ast})$, $\sum_k \alpha_k = 1$ and $P_k$ are permutations. Of these $m+n$ eigenvalues of $\lambda(PP^{\ast})$, $m$ can be assigned to the pocket of $\exp(-i \lambda_j)$ and the remaining $n$ to the conjugate pocket $\exp(i \lambda_j)$ ($m$ in the top, $n$ in the bottom). We have shown an eigenvalue set of $PP^{\ast}$ which is in a nghd of the eigenvalue set of $QQ^\ast$. We can again set up a chain of overlapping nghds such that the evolution of eigenvalues can be written as \begin{eqnarray} \lambda(Q_{i+}Q_{i+}^{\ast}) &=& \exp(2 a_i^{+} \Delta_i^+)\ \lambda(P_{i}P_{i}^{\ast}) \\ \lambda(P_{i, i+1}P_{i, i+1}^{\ast}) &=& \lambda(Q_{i+}Q^{\ast}_{i+}) + o((\Delta_i^+)^2) \\ \lambda(Q_{(i+1)-}Q_{(i+1)-}^{\ast}) &=& \lambda(P_{i, i+1}P_{i, i+1}^{\ast}) + o((\Delta^{-}_{i+1})^2) \\ \exp(-2 a_{i+1}^{-} \Delta_{i+1}^{-})\ \lambda(P_{i+1}P_{i+1}^{\ast}) &=& \lambda(Q_{(i+1)-}Q^{\ast}_{(i+1)-}) \end{eqnarray}where in Eq. (\ref{eq:Qdegenerate.2}), $a$ has the form $a = \left[ \begin{array}{cc} 0 & a_i^{+} \\ -a_i^{+} & 0 \end{array} \right]$ and $\lambda(QQ^{\ast})$ only denotes one (say the top one) of the conjugate eigenvalue sets. Adding the above equations, \begin{equation} \lambda(P_{i+1}P_{i+1}^{\ast}) = \exp (o(\Delta^2))\ \exp(2 ( a_i^+ \Delta_i^+ + a_{i+1}^{-}\Delta_{i+1}^{-}))\ \lambda(P_i P_i^{\ast}), \end{equation} where $o(\Delta^2)$ is diagonal. Note that $|\exp(i \alpha) -1| = 2 \sin\frac{|\alpha|}{2} > \frac{|\alpha|}{2}$. Therefore, $|\exp(i \alpha) - 1| \leq \frac{\theta}{2}$ implies $|\alpha| \leq \theta$. \begin{equation} \lambda(P_nP_n^{\ast}) = \exp(\underbrace{\sum o(\Delta^2)}_{\leq \epsilon T})\ \exp(2 \sum_i a_i^{+} \Delta_i^{+} + a_{i+1}^{-}\Delta_{i+1}^{-})\ \lambda(P_1P_1^{\ast}).
\end{equation} \begin{equation} \lambda(P_nP_n^{\ast}) = \exp(\underbrace{\sum o(\Delta^2)}_{\leq \epsilon T})\ \exp(2 T \sum_k \alpha_k P_k(\lambda))\ \lambda(P_1P_1^{\ast}) = \exp(\underbrace{\sum o(\Delta^2)}_{\leq \epsilon T})\ \exp(2 \mu T)\ \lambda(P_1P_1^{\ast}), \end{equation}where $P_k$ are the $2^n n!$ permutations along with sign changes. By Kostant convexity in Remark \ref{rem:convexity}, $\mu \prec \lambda$, i.e. $\mu$ lies in the convex hull of permutations and sign changes of $\lambda$. Since $P_1 = I$, \begin{equation} P_n = K_1 \exp \left[ \begin{array}{cccccc} 0 & \dots & 0 & \mu_1T + m_1 \pi & \hdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \mu_nT + m_n \pi \\ -( \mu_1T + m_1 \pi) & 0 & 0 & 0 & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & -( \mu_nT + m_n \pi) & 0 & 0 & 0 \end{array} \right ] \exp(o(\Delta^2)) K_2. \end{equation} $m \pi$ can be absorbed in $K_1$. By letting $\Delta$ go to zero, we find $$ P_n \in K_1 \exp (T \sum_k \alpha_k \W_k (\left [ \begin{array}{cc} 0 & \lambda \\ -\lambda & 0 \end{array} \right ])\W_k') K_2, $$ where $\W_k$ are Weyl elements that induce permutations and sign changes of $\lambda$. \begin{corollary}\label{cor:en}{\rm Consider the control system \begin{equation} \dot{U} = ( X_d + \sum_j u_j(t) X_j ) U, \ \ U(0) = \mbox{$\bf 1\ $}. \end{equation} Here $X_d = \left[ \begin{array}{cc} 0 & \lambda \\ -\lambda & 0 \end{array} \right ]$ and $\{X_j\}_{LA} = \left [ \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right ]$, the space of block diagonal skew-Hermitian matrices. The elements of the reachable set at time $T$ take the form $ U(T) \in$ $$ S = K_1 \exp (T \sum_k \alpha_k \W_k (\left [ \begin{array}{cc} 0 & \lambda \\ -\lambda & 0 \end{array} \right ])\W_k') K_2, $$ where $\W_k$ are Weyl elements that induce permutations and sign changes of $\lambda$ and $K_1, K_2, \W_k \in SU(n)\times SU(n) \times U(1)$ (block diagonal matrices in $SU(2n)$).
$S$ belongs to the closure of the reachable set. }\end{corollary} \begin{example}{\rm The problem in this section models the evolution of the electron-nuclear spin system in EPR \cite{Zeier}. Let $S$ and $I$ be the spin operators for the electron and nuclear spin respectively. The evolution of the coupled spin dynamics is in $G = SU(4)$. Its algebra is $$ \mathfrak{g}= -i \{ I_{\alpha}, S_\beta, I_{\alpha}S_{\beta} \} = \underbrace{-i \{I_x, I_y, I_x S_\alpha, I_yS_{\beta}\}}_{\p} \oplus -i \underbrace{\{I_z, S_{\alpha}, I_{z}S_{\beta}\}}_{\k}. $$ $\k$ consists of the fast spin operators in the dynamics, involving rotations derived from electron spin rotations, hyperfine coupling and Larmor precession of the nuclear spin. $\p$ represents operators that rotate the nuclear spin with rf-pulses and is the slow part of the dynamics. The control subgroup is $K = \exp(\k) = SU(2) \times SU(2) \times U(1)$. The Cartan subalgebra is $\mathfrak{a}= -i \{I_x, I_xS_z \}$. Given $X_d \in \a$ such that $X_d = -i(\alpha I_x + \beta 2 I_x S_z) = -i [ \underbrace{(\alpha+\beta)}_{2a} I_x(\frac{\mbox{$\bf 1\ $}}{2} + S_z) + \underbrace{(\alpha -\beta)}_{2b} I_x(\frac{\mbox{$\bf 1\ $}}{2} -S_z)] $, $$ X_d = -i \left [ \begin{array}{cc} 0 & \Lambda \\ \Lambda & 0 \end{array} \right ] ; \ \ \Lambda = \left [ \begin{array}{cc} a & 0 \\ 0 & b \end{array} \right ]. $$ There are eight Weyl points, $(a, b)$ and its permutations with sign changes. We ask for the reachable set described by evolutions of the kind $$ U = K_0 \prod_i \exp(X_d \Delta t_i ) K_i, \ \ \sum \Delta t_i = T, \ \ K_i, K_0 \in K. $$ The reachable set is described in Corollary \ref{cor:en} above. }\end{example} We now present the results of sections \ref{sec:second} and \ref{sec:third} in a more general setting. \section{Time Optimal Control for the $G/K$ Problem} \label{sec:fourth} \begin{theorem}\label{th:generalmodel}{\rm Given a compact Lie group $G$ with Lie algebra $\g$, consider the Cartan decomposition of a real semisimple Lie algebra $\mathfrak{g}= \mathfrak{p}\oplus \k$.
Given the control system $$ \dot{P} = Ad_{K(t)}(X_d) P, \ P(0) = \mbox{$\bf 1\ $}, $$ where $X_d \in \a$, the Cartan subalgebra $\mathfrak{a}\subset \p$, and $K(t) \in \exp{\k}$, a closed subgroup of $G$, the end point is $$P(T) = K_1 \exp(T \sum_i \alpha_i \W_i(X_d) ) K_2, $$ where $K_1, K_2 \in \exp(\k)$ and $\W_i(X_d) \in \a$ are Weyl points, $\alpha_i > 0$ and $\sum_i {\alpha_i} = 1$.} \end{theorem} \noindent {\bf Proof:} As in the proofs of Theorems \ref{th:model1} and \ref{th:model2}, we define $$ P(t + \Delta) = \exp(Ad_K(X_d) \Delta ) P(t) = \exp(Ad_K(X_d) \Delta ) K_1 \exp(a) K_2 $$ and show that \begin{equation} \label{eq:iterative.a.increment} \exp(Ad_K(X_d) \Delta)K_1 A K_2 = K_a \exp(a_0 \Delta + C \Delta^2) A K_b = K_a \exp(a + a_0 \Delta + C \Delta^2) K_b, \end{equation} where for $\bar{K} = K_1^{-1}K$, $$ Ad_{\bar{K}}(X_d) = \underbrace{P(Ad_{\bar{K}}(X_d))}_{a_0} + Ad_{\bar{K}}(X_d)^{\perp}, $$ where $P$ is the projection w.r.t. the Killing form, $a_0 \in \f$, the centralizer in $\p$ as defined in Remark \ref{rem:stabilizer}, $C \Delta^2 \in \f$ is a second order term that can be made small by choosing $\Delta$, and $K_a, K_b \in \exp(\k)$. To show Eq. (\ref{eq:iterative.a.increment}), we show there exist $K_1'', K_2'' \in K$ such that \begin{equation} \label{eq:iterative.a.increment1} \underbrace{\exp(k_1'')}_{K_1''} \exp(Ad_{\bar K}(X_d) \Delta) \underbrace{\exp(A k_2'' A^{-1})}_{K_2''} = \exp(a_0 \Delta + C \Delta^2), \end{equation} where $K_1''$ and $K_2''$ are constructed by an iterative procedure as described in the proof below. Given $X$ and $Y$ as $N \times N$ matrices, considered as elements of a matrix Lie algebra $\g$, we have \begin{equation} \log(e^X e^Y) - (X + Y) = \sum_{n>0} \frac{(-1)^{n-1}}{n} \sum_{1\leq i \leq n} \frac{[X^{r_1}Y^{s_1}\dots X^{r_n}Y^{s_n}]}{\sum_{i=1}^n (r_i + s_i) r_1! s_1! \dots r_n! s_n !}, \end{equation}where $r_i + s_i > 0$.
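The role of the series above is only to bound the remainder $\log(e^X e^Y) - (X + Y)$ by second order terms. A quick numerical check (a sketch using \texttt{scipy.linalg.expm} and \texttt{logm}; the matrices and step sizes are arbitrary choices, not tied to the proof) confirms the $O(\Delta^{2})$ scaling in the case $k = 1$:

```python
import numpy as np
from scipy.linalg import expm, logm

# For |X| ~ Delta and |Y| ~ Delta the remainder
# |log(e^X e^Y) - (X + Y)| shrinks like Delta^2.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)); A = A - A.T   # random skew-symmetric directions
B = rng.standard_normal((4, 4)); B = B - B.T

def remainder(delta):
    X, Y = delta * A, delta * B
    return np.linalg.norm(logm(expm(X) @ expm(Y)) - (X + Y))

# Halving Delta should divide the remainder by roughly 4,
# since the leading term is (1/2)[X, Y].
r1, r2 = remainder(1e-2), remainder(5e-3)
assert 3.0 < r1 / r2 < 5.0
```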
We bound the largest element (absolute value) of $\log(e^X e^Y) - (X + Y)$, denoted as $|\log(e^X e^Y) - (X + Y)|_0$, given $|X|_0 < \Delta$ and $|Y|_0 < b_0 \Delta^k$, where $k \geq 1$, $\Delta < 1$, $b_0 \Delta < 1 $. \begin{eqnarray} \label{eq:bound} |\log(e^X e^Y) - (X + Y)|_0 &\leq& \sum_{n=1} N b_0 e \Delta^{k+1} + \sum_{n>1} \frac{1}{n} \frac{(2Ne^2)^n b_0 \Delta^{n+k-1}}{n} \\ &\leq & N b_0 e \Delta^{k+1} + (Ne^2)^2 b_0 \Delta^{k+1} ( 1 + 2Ne^2\Delta + \dots ) \\ &\leq & N b_0 e \Delta^{k+1} + \frac{(Ne^2)^2 b_0 \Delta^{k+1}}{1 - 2Ne^2\Delta} \leq \tilde{M} b_0 \Delta^{k+1} \end{eqnarray}where $2 N \Delta < 1$ and $\tilde{M} \Delta < 1$. Given the decomposition $\mathfrak{g}= \mathfrak{p}\oplus \mathfrak{k}$, $\mathfrak{p}\perp \mathfrak{k}$ with respect to the negative definite Killing form $B(X,Y) = tr(ad_X ad_Y)$. Furthermore there is a decomposition $\mathfrak{p}= \mathfrak{a}\oplus \a^{\perp}$. Given $$ U_0 = \exp(a_0 \Delta + b_0 \Delta + c_0 \Delta), $$ where $a_0 \in \a$, $b_0 \in \a^{\perp}$ and $c_0 \in \k$, such that $|a_0|_0 + |b_0|_0 + |c_0|_0 < 1$, which we just abbreviate as $a_0 + b_0 + c_0 < 1$ (we follow this convention below). We describe an iterative procedure \begin{equation} U_n = \Pi_{k=1}^n \exp(-c_k \Delta ) \ U_0 \ \Pi_{k=0}^n \exp(-b_{k} \Delta), \end{equation} where $c_k \in \k$ and $b_k \in \a^{\perp}$, such that in the limit \begin{equation} \lim_{n \rightarrow \infty} U_n = \exp(a_0 \Delta + C \Delta^2), \end{equation}where $a_0, C \in \a$. \begin{eqnarray*} U_1 &=& \exp(-c_0 \Delta)\exp(a_0 \Delta + b_0 \Delta + c_0 \Delta) \exp(-b_0 \Delta) \\ &=& \exp(a_0 \Delta + b_0 \Delta + c_0' \Delta^2) \exp(-b_0 \Delta) \\ &=& \exp(a_0 \Delta + b_0' \Delta^2 + c_0' \Delta^2) \\ &=& \exp( (a_1 + b_1 + c_1 )\Delta ) \end{eqnarray*} Note that $b_0'$ and $c_0'$ are elements of $\g$ and need not be contained in $\a^{\perp}$ and $\k$. Here, using the bound in Eq. (\ref{eq:bound}), $c_0' \leq \tilde{M}c_0$, which gives $a_0 + b_0 + c_0' \Delta \leq a_0 + b_0 + c_0$.
Using the bound again, we obtain $b_0' \leq \tilde{M} b_0 $. We can decompose $(b_0' + c_0')\Delta$ into subspaces $a_0'' + b_1 + c_1$. Here $-B(X, X) \leq \lambda_{max} |X|^2$ and $-B(X, X) \geq \lambda_{min} |X|^2$, where $|X|$ is the Frobenius norm; let $M = \frac{N \lambda_{max}}{\lambda_{min}}$. This gives $a_0'' \leq M(b_0' + c_0')\Delta$, $b_1 \leq M(b_0' + c_0')\Delta$ and $c_1 \leq M(b_0' + c_0')\Delta$. Hence \begin{eqnarray*} a_1 &\leq& a_0 + \tilde{M}M (b_0 + c_0)\Delta \\ b_1 &\leq& \tilde{M}M(b_0 + c_0) \Delta \\ c_1 &\leq& \tilde{M}M(b_0 + c_0) \Delta \end{eqnarray*} For $4 \tilde{M} M \Delta < 1$, we have $a_1 + b_1 + c_1 \leq a_0 + b_0 + c_0 $, using $(b_k + c_k) \leq 2\tilde{M}M \Delta (b_{k-1} + c_{k-1}) \leq (2\tilde{M}M \Delta)^k (b_0 + c_0)$. Similarly, $| a_k - a_{k-1}|_0 \leq (2\tilde{M}M \Delta)^k (b_0 + c_0)$. Note that $(a_k, b_k, c_k)$ are Cauchy sequences which converge to $(a_\infty, 0, 0)$, where $$ | a_{\infty}-a_0 |_0 \leq (b_0 + c_0) \sum_{k=1}^{\infty} (2\tilde{M}M \Delta)^k \leq \frac{2 M \tilde{M}\Delta(b_0 + c_0)}{1 -2\tilde{M}M \Delta} \leq C \Delta, $$ where $C = 4\tilde{M}M (b_0 + c_0)$. The above exercise was illustrative. Now we use an iterative procedure as above to show Eq. (\ref{eq:iterative.a.increment1}). Writing $$ Ad_{\bar{K}}(X_d) = \underbrace{P(Ad_{\bar{K}}(X_d))}_{a_0} + \underbrace{Ad_{\bar{K}}(X_d)^{\perp}}_{b_0}, $$ where $a_0 \in \f$ and $b_0 \in \f^{\perp}$, consider again the iterations \begin{eqnarray*} U_0 &=& \exp(- \bar{c}_0 \Delta)\exp(a_0 \Delta + b_0 \Delta) \exp(-b_0 \Delta + \bar{c}_0 \Delta) \\ &=& \exp(- \bar{c}_0 \Delta) \exp(a_0 \Delta + \bar{c}_0 \Delta + b_0' \Delta^2) \\ &=& \exp(a_0 \Delta + b_0' \Delta^2 + c_0' \Delta^2)\\ &=& \exp(a_1 \Delta + b_1 \Delta + c_1 \Delta ) \end{eqnarray*} We refer to Remark \ref{rem:stabilizer}, Eq. (\ref{eq:nghdbnd}).
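The illustrative iteration above can be simulated in the smallest case $\mathfrak{g}= su(2)$, with $\k = {\rm span}(Z)$, $\mathfrak{a}= {\rm span}(X)$ and $\a^{\perp} = {\rm span}(Y)$ for the generators below. The following sketch (the coefficients $0.7, 0.5, 0.4$ and the step $\Delta = 0.05$ are arbitrary choices, not from the text) repeatedly strips off the $\k$ and $\a^{\perp}$ components; the logarithm converges to $\a$, with the $\a$ component shifted only at order $\Delta$:

```python
import numpy as np
from scipy.linalg import expm, logm

# su(2): X, Y, Z skew-Hermitian with [X, Y] = Z (cyclically).
X = -0.5j * np.array([[0, 1], [1, 0]], dtype=complex)
Y = -0.5j * np.array([[0, -1j], [1j, 0]])
Z = -0.5j * np.array([[1, 0], [0, -1]], dtype=complex)

def components(M):
    # Coefficients of M along X, Y, Z (orthogonal w.r.t. tr(A B^dagger)).
    return [np.trace(M @ B.conj().T).real / np.trace(B @ B.conj().T).real
            for B in (X, Y, Z)]

delta = 0.05
U = expm(delta * (0.7 * X + 0.5 * Y + 0.4 * Z))   # exp((a0 + b0 + c0) Delta)

for _ in range(40):
    ax, by, cz = components(logm(U) / delta)
    # Strip the k component on the left, the a-perp component on the right.
    U = expm(-cz * delta * Z) @ U @ expm(-by * delta * Y)

ax, by, cz = components(logm(U) / delta)
assert abs(by) < 1e-10 and abs(cz) < 1e-10   # b, c components killed
assert abs(ax - 0.7) < 0.1                   # a component moved only O(Delta)
```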
Given $b_0 \Delta \in \p$ such that $b_0 \Delta \in \f^{\perp}$. If $A k' A' = -b_0 \Delta + \bar{c}_0 \Delta$, then $\| k' \| \leq h \| b_0 \Delta \| $ (Killing norm). $\bar{c}_0 \in \k$ is bounded as $\bar{c}_0 \leq Mh b_0$, where $M$ as before converts between two different norms. Using the bounds derived above, $b_0' \leq \tilde{M}(Mh + 1)b_0$ and $c_0' \leq \tilde{M}Mhb_0$, with $2\tilde{M}(Mh+1) \Delta < 1$, we obtain $a_0 + b_0' \Delta + \bar{c}_0 \leq a_0 + b_0(\tilde{M}(Mh+ 1)\Delta + Mh) \leq 1 $. For appropriate $M'$, we have \begin{eqnarray*} a_1 &\leq& a_0 + \frac{M'}{3} (b_0 + c_0)\Delta \\ b_1 &\leq& \frac{M'}{3}(b_0 + c_0) \Delta \\ c_1 &\leq& \frac{M'}{3}(b_0 + c_0)\Delta \end{eqnarray*} and we obtain $$ a_1 + b_1 + c_1 \leq a_0 + M'(b_0 + c_0)\Delta \leq a_0 + b_0 + c_0 $$ where $\Delta$ is chosen small. \begin{eqnarray*} U_1 &=& \exp(-(c_1 + \bar{c}_1) \Delta)\exp(a_1 \Delta + b_1 \Delta + c_1 \Delta) \exp(-b_1 \Delta + \bar{c}_1 \Delta) \\ &=& \exp(-(c_1 + \bar{c}_1) \Delta) \exp(a_1 \Delta + (c_1 + \bar{c}_1) \Delta + b_1' \Delta^2) \\ &=& \exp(a_1 \Delta + b_1' \Delta^2 + c_1' \Delta^2) \\ &=& \exp(a_2 \Delta + b_2 \Delta + c_2 \Delta) \end{eqnarray*} where $\bar{c}_1 \in \k$ is such that $\bar{c}_1 \leq Mh b_1$. Using the bounds derived above, $b_1' \leq \tilde{M}(Mh + 1)b_1$ and $c_1' \leq \tilde{M}(Mhb_1 + c_1)$, with $2\tilde{M}(Mh+1) \Delta < 1$, we obtain $a_1 + b_1' \Delta + (c_1 + \bar{c}_1 ) \leq a_1 + ((1 + Mh)b_1 + c_1) \leq a_0 + b_0 + c_0 $. We can decompose $(b_1' + c_1')\Delta^2$ into subspaces $(a_1'' + b_2 + c_2)\Delta$, where $a_1'' \leq M (b_1' + c_1') \Delta$, $b_2 \leq M(b_1' + c_1')\Delta $ and $c_2 \leq M(b_1' + c_1')\Delta$, where $M$ as before converts between two different norms.
This gives \begin{eqnarray*} a_2 &\leq& a_1 + 4\tilde{M}M^2h (b_1 + c_1)\Delta \\ b_2 &\leq& 4\tilde{M}M^2h(b_1 + c_1)\Delta \\ c_2 &\leq& 4\tilde{M}M^2h(b_1 + c_1)\Delta \end{eqnarray*} For $x = 8 \tilde{M} M^2h \Delta < \frac{2}{3}$, we have $a_2 + b_2 + c_2 \leq a_1 + (b_1 + c_1) \leq a_0 + b_0 + c_0$, using $(b_k + c_k) \leq x (b_{k-1} + c_{k-1}) \leq x^k (b_0 + c_0)$. Similarly, $| a_k - a_{k-1}|_0 \leq x^k (b_0 + c_0)$. Note that $(a_k, b_k, c_k)$ are Cauchy sequences which converge to $(a_\infty, 0, 0)$, where $$ | a_{\infty}-a_0 |_0 \leq x (b_0 + c_0)\sum_{k=0}^{\infty} x^k \leq \frac{x (b_0 + c_0)}{1-x} \leq C \Delta, $$ where $C = 16 \tilde{M}M^2 h(b_0 + c_0)$. From Eq. (\ref{eq:iterative.a.increment1}), $$ \exp((K_1' Ad_K(X_d) K_1)\Delta) = \exp(-k_1'') \exp(a_0 \Delta + C \Delta^2) \exp(-A k_2'' A'), $$ where $a_0 \Delta + C \Delta^2 \in \f$. By using a stabilizer $H_1, H_2$, we can rotate them to $\a$ such that $$ \exp(Ad_K(X_d) \Delta)K_1 A K_2 = K_a H_1 \exp(a_0' \Delta + C' \Delta^2) A H_2 K_b $$ such that $H_1^{-1}(a_0 \Delta + C \Delta^2)H_1 = a_0' \Delta + C' \Delta^2$ is in $\a$ and $a_0' = P(H_1^{-1} a_0 H_1)$ is the projection onto $\a$ such that $$ P(H_1^{-1} a_0 H_1) = \sum _k \alpha_k \W_k (X_d). $$ This follows because the orthogonal part of $Ad_{\bar K}(X_d)$ to $\f$, written as $Ad_{\bar K}(X_d)^\perp$, remains orthogonal to $\f$: $$ \langle H^{-1} Ad_K(X_d)^{\perp}H, \mathfrak{a}\rangle = \langle Ad_K(X_d)^{\perp}, H \mathfrak{a}H^{-1}\rangle = \langle Ad_K(X_d)^{\perp}, \a'' \rangle = 0$$ ($\a'' \in \f$), so it remains orthogonal to $\a$. Therefore $ P(H_1^{-1} a_0 H_1) = P(H_1^{-1} Ad_{\bar K}(X_d) H_1) = \sum _k \alpha_k \W_k (X_d)$. $$ \exp(Ad_K(X_d) \Delta)K_1 A K_2 = K_a \exp(a + a_0' \Delta + C' \Delta^2) K_b. $$ \begin{lemma}\label{lem:join1}{\rm Given $P = K_1 \underbrace{\exp(a + a_1 \Delta)}_{A_1} K_2 = K_3 \underbrace{\exp(b - b_1 \Delta)}_{A_2} K_4$, where $a, b, a_1, b_1 \in \a$.
We can express $$ \exp(b) = K_a \exp(a + a_1 \Delta + {\mathcal W}(b_1) \Delta) K_b, $$ where $\W(b_1)$ is the Weyl element of $b_1$. Furthermore $$ \exp(b + b_2 \Delta) = K_a'' \exp(a + a_1 \Delta + {\mathcal W}(b_1) \Delta + {\mathcal W}(b_2) \Delta ) K_b''. $$ Note that $A_2 = K_3^{-1} P K_4^{-1}$ commutes with $b_1$. This implies $A_2 = K \exp(a + a_1 \Delta)K^{-1} \tilde{K} $ commutes with $b_1$. This implies that $\tilde{K}^{-1} \exp(-Ad_K(a + a_1 \Delta)) b_1 \exp(Ad_K(a + a_1 \Delta)) \tilde{K} = b_1$, which implies that $\exp(-Ad_K(a + a_1 \Delta)) b_1 \exp(Ad_K(a + a_1 \Delta)) \in \mathfrak{p}$. Recall from Remark \ref{rem:stabilizer} that $$ \exp(-Ad_K(a + a_1 \Delta)) b_1 \exp(Ad_K(a + a_1 \Delta)) = \sum_k c_k (Y_k \cos(\lambda_k) + X_k \sin(\lambda_k)). $$ This implies $\sum_k c_k \sin(\lambda_k) X_k = 0$, implying $\lambda_k = n \pi$. Therefore, $$ \exp(-2 Ad_K(a + a_1 \Delta)) b_1 \exp(2 Ad_K(a + a_1 \Delta)) = b_1. $$ We have shown the existence of $H$ such that $H^{-1} b_1 H \in Ad_K(\a)$. Therefore, $$ \exp(b_1 \Delta) \exp(Ad_K(a + a_1 \Delta)) \tilde{K} = K_a \exp(a + a_1 \Delta + {\mathcal W}(b_1) \Delta) K_b. $$ Applying the theorem again to $$ \exp(b_2 \Delta) K_a \exp(a + a_1 \Delta + {\mathcal W}(b_1) \Delta) K_b = K_a'' \exp(a + a_1 \Delta + {\mathcal W}(b_1) \Delta + {\mathcal W}(b_2) \Delta ) K_b''. $$ } \end{lemma} \begin{lemma}\label{lem:join2} { \rm Given $P_i = K_1^i A^i K_2^i = K_1^i \exp(a^i) K_2^i $, we have $P_{i, i+1} = \exp(H_i^+ \Delta_i^+) P_i$, and $P_{i, i+1} = \exp(-H_{i+1}^- \Delta_{i+1}^-) P_{i+1}$, where $H_i^+ = Ad_{K_i}(X_d)$. From the above we can express $$ P_{i, i+1} = K_a^{i+} \exp(a^i + a_1^{i+} \Delta_{+}^i + a_2^{i+} (\Delta_{+}^i)^2) K_b^{i+}, $$ where $a_1^{i+}$ and $a_2^{i+}$ are the first and second order increments to $a^i$ in the positive direction. The remaining notation is self-explanatory. $$ P_{i, i+1} = K_a^{(i+1)-} \exp(a^{i+1} - a_1^{(i+1)-} \Delta_{-}^{i+1} - a_2^{(i+1)-} (\Delta_{-}^{i+1})^2) K_b^{(i+1)-}.
$$ $$ \exp(a^{i+1}) = K_1 \exp(a^i + a_1^{i+} \Delta_{+}^i + a_2^{i+} (\Delta_{+}^i)^2 + {\mathcal W} ( a_1^{(i+1)-} \Delta_{-}^{i+1} + a_2^{(i+1)-} (\Delta_{-}^{i+1})^2 )) K_2. $$ $$ \W(a_1^{(i+1)-} \Delta_{-}^{i+1} + a_2^{(i+1)-} (\Delta_{-}^{i+1})^2 ) = \mathcal {P}(\W(a_1^{(i+1)-})) \Delta_{-}^{i+1} + \P( \W(a_2^{(i+1)-})) (\Delta_{-}^{i+1})^2 = \sum_k \alpha_k \W_k (X_d) \Delta_{-}^{i+1} + o((\Delta_{-}^{i+1})^2 )$$ where $a^i, a_1^i, a_2^i \in \a$. }\end{lemma} Using Lemmas \ref{lem:join1} and \ref{lem:join2}, we can express $$P_n(T) = K_1 \exp(a_n) K_2 = K_1 \exp(\sum_i \W(a_i^{+}) \Delta_i^{+} + \W(a_{i+1}^{-})\Delta_{i+1}^{-})\exp(\underbrace{\sum o(\Delta^2)}_{\leq \epsilon T}) K_2 $$ Letting $\epsilon$ go to $0$, we have $$ P_n(T) = K_1 \exp(T \sum_i \alpha_i \W_i(X_d)) K_2. $$ This completes the proof of Theorem \ref{th:generalmodel}. {\bf q.e.d.} \begin{corollary}\label{cor:generalmodel}{\rm Given $U$ in a compact Lie group $G$, with $X_d, X_j$ in its Lie algebra $\g$. Given the Cartan decomposition $\mathfrak{g}= \mathfrak{p}\oplus \k$, where $X_d \in \mathfrak{a}\subset \p$, \begin{equation} \dot{U} = ( X_d + \sum_j u_j(t) X_j ) U, \ \ U(0) = \mbox{$\bf 1\ $}, \end{equation} and $\{X_j\}_{LA} = \k$. The elements of the reachable set at time $T$ take the form $ U(T) \in$ $$ S = K_1 \exp (T \sum_k \alpha_k \ \W_k X_d \W_k^{-1}) K_2, $$ where $\W_k$ are Weyl elements and $K_1, K_2, \W_k \in \exp(\k)$. $S$ belongs to the closure of the reachable set. }\end{corollary} \begin{theorem}{\bf Co-ordinate theorem}\label{th:coordinate}{\rm \ \ Let $\mathfrak{g}= \mathfrak{p}\oplus \k$ be a Cartan decomposition with Cartan subalgebra $\mathfrak{a}\subset \p$. Let $a \in \a$ be a regular element such that $\mathfrak{f}= \a$. The map ${\rm ad}_a^2: \mathfrak{p}\rightarrow \p$ is symmetric. Let $Y_i$ be the eigenvectors of ${\rm ad_a}^2$ that are orthogonal to $\mathfrak{a}$, with basis $\{ Z_j \}$.
Let $X_i = \frac{[a, Y_i]}{\lambda_i}$, $\lambda_i > 0$, where $-\lambda_i^2$ is an eigenvalue of $ad_a^2$. Then $\mathfrak{k}= \{X_i \} + \k_0 = \{X_k \}$, where $[a, \k_0]=0$ and $X_i \perp \k_0$. $$ ad_a(Y_i) = \lambda_i X_i, \ \ ad_a(X_i) = -\lambda_i Y_i $$ \begin{equation} A X_i A^{-1} = \cos(\lambda_i) X_i - \sin(\lambda_i) Y_i, \end{equation}where $A = \exp(a)$. Given $U = K_1 \exp(a) K_2$, consider the map $$ U(a_i, b_j, c_k) = \exp(\sum_k c_k X_k) \ K_1 \exp(\sum_j b_j Z_j) \ \exp(a) \ \exp(\sum_i a_i X_i)K_2 $$ such that $U(0, 0, 0) = U$. \begin{eqnarray} \frac{\partial U}{\partial a_i}|_{(0,0,0)} &=& (\cos(\lambda_i) Ad_{K_1}(X_i) - \sin(\lambda_i) Ad_{K_1}(Y_i))\ U \\ \frac{\partial U}{\partial b_j}|_{(0,0,0)} &=& Ad_{K_1}(Z_j)\ U \\ \frac{\partial U}{\partial c_k}|_{(0,0,0)} &=& X_k \ U. \end{eqnarray} Since $Y_i, Z_j$ span $\p$, $Ad_{K_1}(Y_i)$ and $Ad_{K_1}(Z_j)$ span $\p$, and $Ad_{K_1}(Y_i)$, $Ad_{K_1}(Z_j)$, $X_k$ span $\mathfrak{p}\ \oplus \ \k$. Hence $\cos(\lambda_i) Ad_{K_1}(X_i) - \sin(\lambda_i) Ad_{K_1}(Y_i)$, $Ad_{K_1}(Z_j)$ and $X_k$ span $\mathfrak{p}\oplus \k$. By the inverse function theorem, $U(a_i, b_j, c_k)$ covers a nghd of $U$, and any curve $U(t)$ passing through $U$ at $t=0$, for $t \in (-\delta, \delta)$, can be written as $$ U(t) = \exp(\sum_k c_k(t) X_k) K_1 \exp(\sum_j b_j(t) Z_j) \exp(a) \exp(\sum_i a_i(t) X_i) K_2 = K_1(t)A(t)K_2(t). $$ $(a_i, b_j, c_k)$ are coordinates of the nghd of $U$. Given $U(0) = K_1 \exp(a) K_2$ such that $a$ is regular ($\mathfrak{f}= \a$, see Remark \ref{rem:stabilizer}), we can represent a curve $\dot{U}(t) = Ad_{K(t)}(X_d) U(t)$ passing through $U(0)$ as $U(t) = K_1(t)A(t)K_2(t)$, where $\dot{K_1} = \Omega_1(t) K_1$, $\dot{K_2} = \Omega_2(t) K_2$ and $\dot A(t) = \Omega(t) A(t)$ where $\Omega_1(t), \Omega_2(t) \in \k$ and $\Omega(t) \in \a$.
Differentiating, we get $$ Ad_{K(t)}(X_d) U(t) = (\Omega_1 + K_1 \Omega K_1^{-1} + K_1 A \Omega_2 A^{-1} K_1^{-1})U(t), $$ which gives, for $\bar{K} = K_1^{-1} K$ and $\Omega_1' = K_1^{-1} \Omega_1 K_1 \in \k$, $$ Ad_{\bar K}(X_d) = \Omega_1' + \Omega + A \Omega_2 A^{-1}. $$ Using $A \Omega_2 A^{-1} \perp \a$, we obtain $\Omega = P(Ad_{\bar K}(X_d))$, the projection of $Ad_{\bar{K}}(X_d)$ on $\a$. $A(t)$ evolves as this projection, which lies in the convex hull of Weyl points of $X_d$ by the Kostant convexity theorem.} \end{theorem} \section{Roots and reflections} \begin{remark}{\rm {\bf Roots:} Let $\g$ be a real, compact, semisimple Lie algebra, with negative definite Killing form $\langle ., . \rangle$. Let $E_i$ be a basis of $\g$, orthonormal with respect to the Killing form. ${\rm ad}_X$ is a skew-symmetric matrix with respect to this basis: $$ \langle E_i, {\rm ad}_X(E_j) \rangle = tr({\rm ad}_{E_i}{\rm ad}_{[X, E_j]}) = tr({\rm ad}_{E_i} [{\rm ad}_X, {\rm ad}_{E_j}]) = - \langle E_j, {\rm ad}_X(E_i) \rangle, $$ where we use $ad_{[X, Y]} = [ad_X, ad_Y]$, which follows from the Jacobi identity $[[x, y], z] = [x, [y, z]] - [y, [x, z]]$.} \end{remark} Let $a \in \a$. The eigenvalues of $A = ad_{a}$ are imaginary ($A$ is skew-symmetric): $Ax = \lambda x$ implies $\lambda = \frac{x'Ax}{x'x}$, implying $\lambda^{\ast} = -\lambda$. The coefficients of the characteristic polynomial being real, the roots occur in conjugate pairs. $(A - \lambda I)^2 x = 0$ implies $-((A - \lambda I)x)'(A-\lambda I)x = 0$, implying $(A- \lambda I)x = 0$. Repeated use of this shows that $(A - \lambda I)^{k} x =0$ implies $(A - \lambda I) x = 0$; hence $A$ is diagonalizable. If $\lambda_i \neq \lambda_j$, $A x_i = \lambda_i x_i$ implies $x_j' A x_i = \lambda_i x_j' x_i$, implying $\lambda_j x_j'x_i = \lambda_i x_j' x_i$, implying $x_j'x_i = 0$. Given $A(x + i y) = i \lambda (x + i y)$, let $x = x_p + x_k$ and $y = y_p + y_k$ be the direct decompositions into $\p$ and $\k$ parts.
\begin{equation} A ( x_p + x_k + i (y_p + y_k)) = i \lambda (x_p + x_k + i(y_p + y_k)); \\ \end{equation} \begin{equation} A x_p = -\lambda y_k ; \ \ A y_k = \lambda x_p; \end{equation} \begin{equation} Ax_k = - \lambda y_p ; \ \ Ay_p = \lambda x_k. \end{equation} \begin{equation} A ( x_p + i y_k ) = i \lambda (x_p + i y_k); \ \ A (y_p - ix_k) = i \lambda (y_p - ix_k) \end{equation} Eigenvectors of $A$ have the form $x_p \pm i y_k$, with conjugate eigenvalues. Choose a basis $a_i$ for $\a$, with $A_i = {\rm ad}_{a_i}$. Since the $A_i$ commute, we have $A_1 A_2 x = A_2 A_1 x = \lambda A_2 x$, where $\lambda$ is an eigenvalue of $A_1$. If $\lambda$ is a distinct (simple) eigenvalue, then $A_2 x = \mu x$, i.e., $x$ is an eigenvector of $A_2$. If $\lambda_k$ has multiplicity $m$ with eigenvectors $x^k_1, \dots, x^k_m$, then $A_2 x^k_i = \sum_{j=1}^m C_{ij}x^k_j$. Let $X^k = [x^k_1, x^k_2, \dots, x^k_{m}]$, where the eigenvectors have been stacked as columns, so that $A_2 X^k = X^k C$. Let $\alpha$ be an eigenvalue of $C$; then $(C - \alpha I)y = 0$ means $(A_2 - \alpha I)X^k y = 0$, which implies $\alpha$ is an eigenvalue of $A_2$, and hence imaginary. Furthermore, $(C - \alpha I)^d y = 0$ implies $(A_2 - \alpha I)^d X^k y = 0$. This entails $(A_2 - \alpha I)X^k y = 0$ and hence $(C - \alpha I)y = 0$. Therefore $C = U \Sigma U^{-1}$ can be diagonalized. Let $Y^{kl} = [y^{kl}_1, \dots, y^{kl}_d]$ be a subset of the columns of $X^k U$, with eigenvalues $\lambda_k, \lambda_l$ for $A_1, A_2$ respectively. This process can be continued. Let ${\cal I} = (\lambda_{i_1}, \dots, \lambda_{i_n})$ be a multi-index, and let $Y^{\cal I}$ be the set of eigenvectors with eigenvalues $(\lambda_{i_1}, \dots, \lambda_{i_n})$ for $A_1, \dots, A_n$ respectively. Then $Y^{\cal I}$ is $\perp$ to $Y^{\cal I'}$ where ${\cal I} \neq {\cal I'}$. Finally, a column of $Y^{\cal I}$ can be written as $(x_p + x_k + i (y_p + y_k))$.
Let $(x_p + i y_k)_s$ be independent vectors distilled from the columns of $Y^{\cal I}$, denoted $\tilde{Y}^{\cal I}$. (Note, if the $(x_p + i y_k)$ are independent over the reals, they are independent over the complex numbers.) Then $\tilde{Y}^{\cal I}$ has the same number of columns as $Y^{\cal I}$: $\tilde{Y}^{\cal I}$ and $Y^{\cal I}$ are independent, $Y^{\cal I} = \tilde{Y}^{\cal I} U$, and by the Jordan normal form the $(x_p + i y_k)_s$ can be expressed as linear combinations of the columns of $Y^{\cal I}$. The $x_p$ corresponding to distinct ${\cal I}$ (modulo $-{\cal I}$) are orthogonal: $x_p$ is an eigenvector, $A^2 x_p = -\lambda^2 x_p$, and since $A^2$ is symmetric, the $x_p$ corresponding to distinct $\lambda$ are perpendicular. If $x + i y$ is a zero eigenvector of $A_i$, then $A_ix = A_iy = 0$. Let ${\cal I}_0$ be the multi-index with eigenvalues identically zero. Then $ Y^{{\cal I}_0} = \{x_{1}, \dots, x_{j}, y_{1}, \dots, y_{k} \}$, where $x_{j} \in \p$ and $y_k \in \k$. Given $x \in \p$, we have $x \in Y^{{\cal I}_0}$ iff $x \in \a$, as $\a$ is the maximal abelian subspace of $\p$. These eigenvectors can be stacked as a matrix $J$, which simultaneously diagonalizes all $A_i$, i.e., $JA_iJ^{-1} = \Sigma_i$. For ${\cal I} \neq \pm{\cal I'}$, if $(x_p + i y_k) \in {\cal I}$ and $(x_p' + iy_k') \in {\cal I'}$, then $x_p \perp x_p'$ and $y_k \perp y_k'$. If $(x_p + iy_k) \in {\cal I}$, then $(x_p - iy_k) \in {- \cal I}$. We abbreviate $x_p + i y_k$ as $p + i k$ and call these {\it \bf roots}. We also use the notation $k + ip$ for roots, obtained by multiplication by $i$. We use roots to show the existence of a {\it regular element}. Given roots $p_j + i k_j$, the $p_j$ span ${\a^{\perp}}$ in $\p$. Let $X_i$ be a basis for $\a$. Then $X_i, p_j$ forms a complete basis for $\p$. Consider the matrix $Z$ such that $Z_{ij} k_j = [X_i, p_j]$.
We form the ratio $\alpha_{n-1} = \mathop{\rm min}_j \{ \frac{|Z_{nj}|}{|Z_{(n-1)j}|} \}$, where both numerator and denominator are non-zero. When no such pair exists, $\alpha_{n-1} = 1$. We update $X_{n-1}$ to $X_{n-1} \leftarrow X_{n} + \frac{\alpha_{n-1}}{2} X_{n-1}$. Similarly define $\alpha_{k}$, and let $X_{k} \leftarrow X_{k+1} + \frac{\alpha_{k}}{2} X_{k}$. Then $X_{1}$, formed from a linear combination of the $X_i$, is a {\it regular element}. Given a root vector $e = p + ik$ (with $p, k$ normalized to Killing norm $1$), its value $\alpha$, defined by $[a, p + ik ] = -i \alpha(a) (p + ik)$, can be read off by taking the inner product with the vector $[p, k] \in \a$: $$ \langle a, [p, k] \rangle = \langle [a, p], k \rangle = \alpha(a). $$ We represent the root by its representative vector $[p, k] \in \a$. Choose a basis $e_k$ for the roots. We can express all roots in terms of the $e_k$ as coefficient vectors $(c_1, \dots, c_k, \dots, c_n)$. The ones with positive leading non-zero entry are called positive, and vice versa. \begin{theorem}\label{rem:reflection}{\rm {\bf Reflection:} Let $\mathfrak{g}= \mathfrak{p}\oplus \mathfrak{k}$, and let $Y_\alpha \pm i X_\alpha$, with $Y_\alpha \in \p$ and $X_\alpha \in \k$, be the roots, so that $$ [\a, Y_{\alpha}] = \alpha(\a) X_{\alpha} ; \ \ [\a, X_{\alpha}] = - \alpha(\a) Y_{\alpha}, $$ $$ [\a, Y_{\alpha} + i X_{\alpha} ] = -i \alpha(\a) ( Y_{\alpha} + i X_{\alpha}). $$ Note $[X_{\alpha}, Y_{\alpha}] \in \p$; let $a_0 \in \a$ be regular. Then $$ ad_{a_0}([X_{\alpha}, Y_{\alpha}]) = [ad_{a_0}(X_{\alpha}), Y_{\alpha}] + [X_{\alpha}, ad_{a_0}(Y_{\alpha})] = 0, $$ so $[X_{\alpha}, Y_{\alpha}] \in \a$. Observe $$ \exp(s X_{\alpha})[X_{\alpha}, Y_{\alpha}] \exp(-s X_{\alpha}) = [X_{\alpha}, Y_{\alpha}] + \beta s Y_{\alpha} + \frac{\beta s^2}{2}[X_{\alpha}, Y_{\alpha}] + \dots $$ where $\beta < 0$.
The above expression can be written as $$ \exp(s X_{\alpha})[X_{\alpha}, Y_{\alpha}] \exp(-s X_{\alpha}) = \cos (\sqrt{|\beta|} s) [X_{\alpha}, Y_{\alpha}] - \sqrt{|\beta|} \sin(\sqrt{|\beta|}s) Y_{\alpha}. $$ Choosing $s = \frac{\pi}{\sqrt{|\beta|}}$, we have $$ U [X_{\alpha}, Y_{\alpha}] U^{-1} = \exp(s X_{\alpha})[X_{\alpha}, Y_{\alpha}] \exp(-s X_{\alpha}) = -[X_{\alpha}, Y_{\alpha}]. $$ Given $$ Z = c [X_{\alpha}, Y_{\alpha}] + \sum_k \alpha_k Z_k, $$ where $\langle [X_{\alpha}, Y_{\alpha}], Z_k \rangle = 0$, we claim that $[Z_k, X_{\alpha}] = 0$: otherwise $[Z_k, X_{\alpha}] = -\alpha(Z_k) Y_{\alpha}$, and since $$ \langle Y_{\alpha}, [Z_k, X_{\alpha}] \rangle = \langle Z_k, [X_{\alpha}, Y_{\alpha}] \rangle = 0, $$ we get $\alpha(Z_k) = 0$. This implies, for $U = \exp( \frac{\pi}{\sqrt{|\beta|}} X_{\alpha})$, $$ U Z U^{-1} = - c [X_{\alpha}, Y_{\alpha}] + \sum_k \alpha_k Z_k. $$ This is the reflection in the hyperplane given by $\alpha(\cdot) = \langle [X_{\alpha}, Y_{\alpha}], \cdot \rangle = 0$. In an orthonormal basis $E_i$ for $\a$, $Z$, $[X_{\alpha}, Y_{\alpha}]$ and $Z_k$ take the form of coordinate vectors $\z, \m, \z_k$, respectively, where $\z = c \mathfrak{m}+ \sum_k \z_k$ and $\z_k \perp \m$, and the reflection formula takes the form $$ R_{\m} (\z) = \z - 2 \frac{\langle \m, \z \rangle}{\langle \m, \mathfrak{m}\rangle}\mathfrak{m} = -c \mathfrak{m}+ \sum_k \z_k. $$ } \end{theorem} \begin{remark} {\rm When $\a$ is one-dimensional in Theorem \ref{th:generalmodel}, we can choose $U$ as in Theorem \ref{rem:reflection} above, such that $U X_d U^{-1} = -X_d$. Let $X(T)$ belong to the coset $KU_F$ of $U_F$, where $\dot{X} = Ad_K(X_d)X$. Let the length $L(X(t)) = \beta T$, where $\beta = |Ad_K(X_d)|$. The form of geodesics says that there is $l \leq \beta T$ such that $\exp(Yl) \in KU_F$, where $|Y| = 1$. Therefore, $\exp((\beta Y) \frac{l}{\beta}) \in KU_F$. Let $Ad_K(X_d) = \beta Y$, by an appropriate choice of $K$.
This is achieved by maximization of $\langle Ad_K(X_d), Y \rangle$ w.r.t. $K$, which yields $[Ad_K(X_d), Y] =0$. This gives $Ad_K(X_d) = \pm \beta Y$; we can choose either, by the choice of $U$. Therefore, $X(T) = K_1 \exp(t X_d) K_2$, where $t = \frac{l}{\beta} \leq T$. For $t < T$, we can use $U$ to ensure $U_F = K_1 \exp(T (\alpha X_d + (1-\alpha) U X_d U^{-1}) )K_2$. We get the form of the reachable set in Theorem \ref{th:generalmodel} by a geodesic argument.} \end{remark} \begin{remark}\label{rm:Weyltransitive}{\rm Let $e_j^{+} = k_j + i p_j$ be the positive roots. The roots divide $\a$ into connected regions called Weyl chambers, defined by $sign(e_j^{+}(x)) = \pm 1$, where the signs do not change over a connected region. On the boundary of a Weyl chamber, some of the $e_j^{+}(x) = 0$. By a sequence of reflections $s_j$ around roots $e_j^{+}$, we can map one Weyl chamber into another. Let $x$ be a point in a Weyl chamber $A$. Choose a point $y$ in the interior of the principal Weyl chamber $\c$, defined by $e_j^{+}>0$, and choose an $e_k^{+}$ such that $e_k^{+}(x) < 0$. We can decompose $y = y^{\parallel} + y^{\perp}$, and similarly $x = x^{\parallel} + x^{\perp}$, where $\perp$ and $\parallel$ are w.r.t. the hyperplane of $e_k^+$. Then the distance between $x$ and $y$ is $d_1 = \sqrt{|y^{\parallel} - x^{\parallel}|^2 + (|x^{\perp}| + |y^{\perp}|)^2}$; after reflection the distance changes to $d_2 = \sqrt{| y^{\parallel} - x^{\parallel}|^2 + (|x^{\perp}| - |y^{\perp}|)^2}$, as the part parallel to the hyperplane of $e_k^{+}$ is invariant under reflection. $$ d_1^2 - d_2^2 = 4 |y^{\perp}||x^{\perp}| = 4 | \langle y,\ e_k^{+} \rangle \langle x,\ e_k^{+} \rangle|. $$ Let $y_{min}$ be the minimum of $|\langle y,\ e_j^{+} \rangle|$, and $x_{min}$ the minimum of the nonzero $|\langle x,\ e_j^{+} \rangle|$, taken over all $j$. Let $\Delta = 4 x_{min} y_{min}$. We can continue this process by finding the next $k$ such that $ sign(e_k^{+}(x))\neq sign(e_k^{+}(y))$.
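The distance computation above can be sanity-checked numerically. The following is a minimal sketch (the NumPy setting and the sample vectors are our own, not from the text): reflecting $x$ in the hyperplane of a unit root $e$ when $\langle e, x\rangle$ and $\langle e, y\rangle$ have opposite signs decreases the squared distance by exactly $4|\langle y, e\rangle\langle x, e\rangle|$.

```python
import numpy as np

# Hypothetical sample data; e plays the role of a (unit-norm) positive root e_k^+.
def reflect(e, x):
    # Reflection in the hyperplane orthogonal to the unit vector e.
    return x - 2 * (e @ x) * e

e = np.array([1.0, 0.0])
x = np.array([-0.7, 1.0])   # <e, x> < 0
y = np.array([2.0, 0.5])    # <e, y> > 0, e.g. y in the principal chamber
d1_sq = np.sum((x - y) ** 2)
d2_sq = np.sum((reflect(e, x) - y) ** 2)
# d_1^2 - d_2^2 = 4 |<y, e><x, e>|, and the distance strictly decreases.
assert np.isclose(d1_sq - d2_sq, 4 * abs((y @ e) * (x @ e)))
assert d2_sq < d1_sq
```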
The value of a root on the reflected $x$ can be evaluated by permuting the roots and evaluating them on the original $x$. Let $\W$ be the Weyl rotation corresponding to the reflection $s$; then $$[\mathcal {W}\mathfrak{a}\W^{-1},\ k + ip] = \lambda(\mathcal {W}\mathfrak{a}\W^{-1}) (k + ip), $$ $$ \mathcal {W} [ \a, \W^{-1} (k+ ip) \mathcal {W}] \W^{-1} = \lambda(\mathcal {W}\mathfrak{a}\W^{-1}) (k + ip), $$ $$ [ \a, \W^{-1} (k+ ip) \mathcal {W}] = \lambda(\mathcal {W}\mathfrak{a}\W^{-1}) \W^{-1} (k + ip) \mathcal {W}.$$ Thus $\W^{-1} (k+ ip) \mathcal {W}= k_1 + i p_1$ is a root whose value $\lambda_1(\a)$ at $\a$ is the same as $\lambda(\mathcal {W}\mathfrak{a}\W^{-1})$, abbreviated $\lambda(\mathcal {W}\a)$. $$[\W_n \dots \W_1 \mathfrak{a}\W_1^{-1} \dots \W_n^{-1}, k + ip ] = \lambda(\W_n \dots \W_1 \mathfrak{a}) (k + ip), $$ $$[\a, \W_1^{-1} \dots \W_n^{-1} (k + ip) \W_n \dots \W_1 ] = \lambda(\W_n \dots \W_1 \mathfrak{a})\, \W_1^{-1} \dots \W_n^{-1} (k + ip) \W_n \dots \W_1. $$ We have $|\langle x,\ e_k^{+} \rangle| \geq x_{min} $, where $x_{min} = \min_j \{ | \langle x,\ e_j^{+} \rangle| \neq 0 \} $. Each reflection reduces the squared distance by at least $\Delta$. In finitely many reflections, either the distance is reduced to $0$ or $x$ reaches the principal Weyl chamber $\c$. If $x$ is an interior point, then after reflections it stays an interior point; boundary points go to boundary points. For $y \in \c$, if $\W(y) \in \c$, then $\W$ is the identity. Weyl rotations act simply on Weyl chambers \cite{Helg}. Simple action entails that any $\W$ can be written as a product of finitely many reflections.} \end{remark} \begin{remark}{\rm Given the reflection formula of a root $\alpha$ around $\beta$, $$ \alpha \rightarrow \alpha - 2 \frac{\langle \alpha, \beta \rangle }{\langle \beta, \beta \rangle} \beta, $$ we claim $2 \frac{\langle \alpha, \beta \rangle }{\langle \beta, \beta \rangle}$ is an integer.
Consider the root $\beta = p + i k$, and set $e = \frac{p + ik}{\|[p, k]\|}$, $f = \frac{p - ik}{\|[p, k]\|}$ and $h = [f, e]$, where $h = \frac{2 i [p, k]}{\|[p, k]\|^2}$. Then $[h, e] = 2 e$ and $[h, f] = -2 f$. Consider a root $\upsilon$. Using the convention $h(\upsilon) = [h, \upsilon]= \lambda_0 \upsilon $, we have $h(e (\upsilon)) = e (h (\upsilon)) + [h, e] (\upsilon) = (\lambda_0 + 2) e(\upsilon)$. In general, $h(e^k (\upsilon)) = (\lambda_0 + 2k) e^k (\upsilon)$; $e$ is called the {\it raising operator}. Now let $e^{k+1}(\upsilon) = e (w) = 0$, where $h (w) = \lambda w = (\lambda_0 + 2k) w$. Consider $w, f (w), \dots, f^d (w) $ such that $f^{d+1} (w) = 0$. We have $h(f (w)) = f (h (w)) + [h, f] (w) = (\lambda - 2) f(w)$, and in general $h(f^k (w)) = (\lambda - 2k) f^k (w)$; $f$ is called the {\it lowering operator}. Furthermore, $e(f^k (w))$ lies in the span of $w, f(w), \dots, f^{k-1}(w)$. This is true by induction: when $k=1$, we have $e (f (w)) = -h(w) = -\lambda w$; assuming it true for $k$, we have $e(f^{k+1}(w)) = f (e (f^{k}(w))) - [f, e](f^{k}(w))$. Hence $h = [f, e]$, which is a diagonal matrix on $w, f(w), \dots, f^{d}(w)$, is a commutator and hence must have trace zero. The trace of $h$ is $(\lambda -d)(d + 1)$, hence $\lambda = d$, an integer. This says that $h(\alpha)$ is an integer for a root $\alpha$. If the root $\alpha = p_1 + i k_1$, then $$ h (\alpha) = 2 \frac{\langle [p, k], [p_1, k_1] \rangle }{\| [p, k] \|^2} = 2 \frac{\langle \alpha, \beta \rangle }{\langle \beta, \beta \rangle} $$ is an integer. Now this says that $$ 2 \frac{\langle \alpha, \beta \rangle }{\langle \beta, \beta \rangle} \cdot 2 \frac{\langle \beta, \alpha \rangle }{\langle \alpha, \alpha \rangle} = 4 \cos^2 \theta, $$ where $\theta$ is the angle between the two roots. Hence $4 \cos^2 \theta$ only takes the integer values $\{0, 1, 2, 3 \}$ for non-proportional roots.
Hence the angle between non-proportional roots can only take the values $\{ \frac{\pi}{6}, \frac{\pi}{4}, \frac{\pi}{3}, \frac{\pi}{2}, \frac{2 \pi}{3}, \frac{3 \pi}{4}, \frac{5 \pi}{6} \}$.}\end{remark} \begin{remark}{\rm There exists a basis $e_i$ for $\a$ such that all positive roots can be expressed as $f = \sum \alpha_j e_j$, where the $\alpha_j \geq 0$ are integers. There exists a $z$ such that $\langle z, f \rangle > 0$ for all positive $f$. Let us collect from the positive roots all those that cannot be written as a sum $\alpha + \beta$ of two other positive roots; we call this set $\B$, the set of simple positive roots. Choose from $\B^c$ an $x$ such that $\langle z, x \rangle$ is smallest in $\B^c$. Then it follows that $x = x_1 + x_2$, and both $x_1$ and $x_2$ are in $\B$. We claim elements of $\B$ make obtuse angles among themselves. Suppose not: given $\alpha, \beta \in \B$ making an acute angle, reflect around the root of larger magnitude, say $\beta$. This gives $$ \alpha \rightarrow \alpha - 2 \frac{\langle \alpha, \beta \rangle }{\langle \beta, \beta \rangle} \beta. $$ Since $2 \frac{\langle \alpha, \beta \rangle }{\langle \beta, \beta \rangle}$ is an integer, its possible value is $1$. Hence $\alpha -\beta$ is a root. Then $\alpha$ is not a simple root, which is a contradiction. Given $e_i \in \B$, we claim the $e_i$ form an independent set. Suppose they are dependent; then $\sum_i \alpha_i e_i = 0$ with nonzero coefficients, which we can rewrite as $x = \sum_i \alpha_i e_i = \sum_j \beta_j f_j$, where $\alpha_i >0 , \beta_j > 0$, separating the terms with positive and negative coefficients. Then $\langle x, x \rangle = \langle \sum \alpha_i e_i, \sum \beta_j f_j \rangle \leq 0$, which implies $x = \sum_i \alpha_i e_i = 0$. Since $\langle e_i , z \rangle >0$, this implies $\alpha_i = 0$ and $\beta_j = 0$. Hence $\B$ is an independent set, and forms a basis. From the previous remark, the possible angles between elements of $\B$ are $90^{\circ}, 120^{\circ}, 135^{\circ}, 150^{\circ}$. We call $\B$ the fundamental roots.
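As a concrete illustration (our own example, not from the text), the two simple roots of $su(3)$ (type $A_2$) satisfy these constraints: the Cartan integer $2\langle\alpha,\beta\rangle/\langle\beta,\beta\rangle$ equals $-1$, $4\cos^2\theta = 1$, and the angle between them is the obtuse angle $120^{\circ}$.

```python
import numpy as np

# A2 (su(3)) simple roots in the plane -- our own illustrative choice.
a = np.array([1.0, 0.0])
b = np.array([-0.5, np.sqrt(3) / 2])
cartan = 2 * (a @ b) / (b @ b)
assert np.isclose(cartan, -1.0)                       # Cartan integer
four_cos_sq = 4 * (a @ b) ** 2 / ((a @ a) * (b @ b))
assert np.isclose(four_cos_sq, 1.0)                   # lies in {0, 1, 2, 3}
theta = np.arccos((a @ b) / np.sqrt((a @ a) * (b @ b)))
assert np.isclose(theta, 2 * np.pi / 3)               # obtuse: 120 degrees
```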
} \end{remark} \begin{theorem}{\rm Let $\mathfrak{g}= \mathfrak{p}\oplus \k$ be a simple algebra (no proper ideals). Given any $a_1 \in \a$, we show $\a$ is spanned by ${\W}_i(a_1) = Ad_{k_i}(a_1)$.}\end{theorem} Consider the reflection around a root $e_1$ that is independent of $a_1$ and not perpendicular to it; this gives $a_1 \rightarrow a_1 - 2\langle a_1, e_1 \rangle e_1 = a_2$ (for $e_1$ of unit norm), and $a_2$ is independent of $a_1$. Let $e_{k}$ be independent of the generated vectors $a_1, \dots, a_k$, and not perpendicular to them; then reflecting these around $e_{k}$ produces $a_{k+1}$, which is independent of them. If no such $e_k$ can be found beyond $k-1$, the chain terminates. Then we can divide the root vectors into two categories, $R_1 = \{e_1, \dots, e_q \} \in span \{ a_1, \dots, a_k \} = {\a}_1$ and $R_2 = \{e_{q+1}, \dots, e_N \} \perp span \{ a_1, \dots, a_k \}$, with ${\a}_2 = {\a}_1^{\perp}$. Given $e \in R_1$ and $f \in R_2$, we have $e \perp f$. If $[e, f ]$ is not zero, it is a root $e + f$ with non-vanishing inner product with $e$ and $f$; then $[e, f]$ is a root that is neither parallel nor perpendicular to $span\{a_k \}$, therefore $[e, f] =0$. This divides the nontrivial roots into two commuting sets $R_1$ and $R_2$. Let $\k_1$ and $\k_2$ be the $\k$ parts of the roots $k + ip$ comprising $R_1$ and $R_2$, and similarly $\p_1$ and $\p_2$. Then $$ [\a_1, \k_1] \in \p_1, $$ $$[ \a_1, \p_1 ] \in \k_1, $$ $$[ \a_1, \k_2 ] = 0, $$ $$[ \a_1, \p_2 ] = 0, $$ and similarly for $\a_2$. $$ [\k_1, \p_1 ] \in \p_1 \oplus \a_1, $$ $$ [\k_2, \p_2 ] \in \p_2 \oplus \a_2. $$ This follows since $\a_2$ and $\p_2$ commute with $\k_1$ and $\p_1$, and vice versa. Then for $k_1 \in {\k}_1$ and $k_2 \in {\k}_2$, $[k_1, k_2]= 0$; this follows from $[k_1 + ip_1, k_2 \pm i p_2] = 0$. Similarly $[{\p}_1, {\p}_2] =0$ and $$ [\p_1, \k_2 ] = 0, $$ $$ [\k_1, \p_2 ] = 0, $$ $$ [\k_1, \k_1 ] \perp \k_2, $$ $$ [\k_2, \k_2 ] \perp \k_1, $$ $$ [\p_1, \p_1 ] \perp \k_2, $$ $$ [\p_2, \p_2 ] \perp \k_1. $$ Let $\k_0$ be the trivial roots in $\k$, i.e., $[\a, \k_0] = 0$.
$$[\k_0, \k_1 ] \in \k_1, $$ $$[\k_0, \p_1 ] \in \p_1, $$ and similarly for $\k_2$, $\p_2$. Let $B_1 \in \k_0$ be generated by the $\k_1$-orthogonal part of $[\k_1, \k_1]$ and $[\p_1, \p_1]$. Let $B_2 \in \k_0$ be generated by the $\k_2$-orthogonal part of $[\k_2, \k_2]$ and $[\p_2, \p_2]$. Then $[B_1, \a_1] =0$, $[B_1, \k_1] \in \k_1$, $[B_1, \p_1] \in \p_1$, and $[B_1, B_1] \in \tilde{\k}_1$, where ${\tilde{\k}}_1 = \k_1 \oplus B_1$ and ${\tilde{\k}}_2 = \k_2 \oplus B_2$. Let $B_3$ be the part of $\k_0$ orthogonal to $B_1$ and $B_2$. Note, $[{\tilde{\k}}_i, {\tilde{\k}}_i] \in {\tilde{\k}}_i $, ${\tilde{\k}}_1 \perp {\tilde{\k}}_2$, and $[B_3, \tilde{\k}_i] \in \tilde{\k}_i$. Then ${\mathfrak I}_1 = \a_1 \oplus \p_1 \oplus {\tilde{\k}}_1$ and ${\mathfrak I}_2 = \a_2 \oplus \p_2 \oplus {\tilde{\k}}_2$ are non-trivial ideals, contradicting simplicity. Therefore $k =n$, i.e., $span\{a_1, \dots, a_n \} = \a$. We now show the positive span of $\{a_1, \dots, a_n \}$ is $\a$. Consider the convex hull $C$ of the points ${\W}_i (a_1) $, where ${\W}_i = Ad_{k_i}$. Suppose the origin is not in the convex hull. By the Hahn-Banach theorem we can find a separating hyperplane such that $\langle c, x \rangle = \sum c_i x_i > 0$ for all Weyl points of $a_1$. We can write the hyperplane in terms of $n$ independent root vectors as $\sum_j b_j \langle s_j, x \rangle$, $c = \sum b_j s_j$. Let $y$ be chosen such that $sign \langle s_j, y \rangle = -sign(b_j)$. By reflecting around the plane of $s_j$ when $sign(\langle s_j, x \rangle ) \neq sign(\langle s_j, y \rangle)$, we decrease the distance between $a_1$ and $y$ (this is the same idea as in Remark \ref{rm:Weyltransitive}). In finitely many steps $b_j \langle s_j, x \rangle \leq 0$ for all $j$, therefore $\sum_j b_j \langle s_j, x \rangle \leq 0$, a contradiction. Therefore $0 \in C$, i.e., $\sum_j \alpha_j Ad_{k_j} (a_1) = 0$, i.e., $-a_1 = \sum_j \alpha_j Ad_{k_j}(a_1)$ after rescaling. Hence the proof. \begin{theorem}\label{th:semisimple}{\rm Let $V_i$ be mutually commuting root vectors, such that no further subdivision into commuting sets is possible.
Given roots $e_l = k_l + i p_l \in V_l$ and $e_m = k_m + i p_m \in V_m$, we have $[k_l \pm i p_l, k_m \pm i p_m ] = 0$, which implies $[k_l, k_m] = [k_l, p_m] = [k_m, p_l] = [p_l, p_m] = 0$. The associated root vectors are $[k_l, p_l]$ and $[k_m, p_m]$. Then $$ \langle [k_l, p_l], [k_m, p_m] \rangle = \langle k_l, [p_l, [k_m, p_m]] \rangle = 0, $$ where we use the Jacobi identity. Let $\a_i$ be the subspace spanned by the root vectors of $V_i$, giving a direct decomposition of $\a$ into root spaces, $$ \mathfrak{a}= \a_1 \oplus \a_2 \oplus \dots \oplus \a_s. $$ Given an element $a \in \a$, we can decompose $a = a_1 + a_2 + \dots + a_s $. Reflecting in root vectors in $V_i$ only reflects the component $a_i$. Reflecting $a_1$, we can produce $\sum_i \alpha_i {\W}_i(a_1) = -a_1$, leaving the $a_i$, $i \neq 1$, invariant. We can thus synthesize a convex combination that synthesizes $\pm a_{1}$. Using the construction detailed before, we can synthesize $V_1$; similarly we can synthesize all $V_j$, and hence any $V$. Let $\k_i$ and $\p_i$ be the subspaces formed from the $k$ and $p$ parts of the roots in $V_i$. Then $[\k_i, \p_j] = 0$ and $[\k_i, \a_j] = 0$ for $i \neq j$. We have $[\k_i, \p_i] \perp \p_j$ and $[\k_i, \p_i] \perp \a_j$ for $i \neq j$. This implies $[\k_i, \p_i] \in \p_i \oplus \a_i $. We have $[\k_0, \a] = 0$ and $[\k_0, \p_i] \in \p_i$. This implies $\tilde{\p} = \sum_{i = 1}^m \a_i \oplus {\p}_i$, where $m < s$, is invariant under $ad_{\k}$ and $Ad_K$. Given $X_d \in \tilde{\p}$, the solutions to the differential equation $$ \dot{X} = (X_d + \sum_i u_i k_i) X, \ \ k_i \in \mathfrak{k}, $$ are confined to the invariant manifold $\tilde{G} = \exp(\{\tilde{\p}, \mathfrak{k}\})$. Since $K = \exp(\k)$ is a closed subgroup, we can decompose $\tilde{G} = \exp(\tilde{\p}) K$. Given $Y \in \tilde{\p}$, we can rotate it to the Cartan subalgebra $\a$, i.e., $Ad_K(Y) \in \a$. Since $\tilde{\p}$ is $Ad_K$ invariant, $Ad_K(Y) \in \sum_{i=1}^m \a_i$.
$\tilde{G} = K_1 \exp(\sum_{i=1}^m b_i)K_2$, where $b_i \in \a_i$. We can synthesize $\sum_{i=1}^m b_i = \sum_j \alpha_j Ad_{k_j} X_d$ as detailed before, so that $\tilde{G} = K_1 \exp(\sum_j \alpha_j Ad_{k_j} X_d)K_2$.} \end{theorem} \begin{theorem}{\rm Given $\mathfrak{g}= \mathfrak{p}\oplus \k$, let $a \in \a$. The number of Weyl points $\mathcal {W}a \mathcal {W}^{-1}\in \a$ is finite.}\end{theorem} {\bf Proof:} For $j = 1, \dots, m$, we choose as a basis of $\g$ the $k_j$ and $p_j$ (normalized to Killing norm $1$), where the $k_j + i p_j$ are the nontrivial roots. The remaining basis vectors can be chosen as bases of ${\mathfrak k}_0$ and $\a$. We organize the basis so that the first $m$ vectors are the $p_j$, followed by the next $m$ elements $k_j$. In this basis $ad_a$ takes the block form $$ \left [ \begin{array}{cc} A & 0 \\ 0 & 0 \end{array} \right ], $$ where $$ A = \left [ \begin{array}{cc} 0 & \Lambda \\ -\Lambda & 0 \end{array} \right ] = i 2 \sigma_y \otimes \Lambda, \ \ \Lambda = \left [ \begin{array}{ccc} \lambda_1 & & \\ & \ddots & \\ & & \lambda_m \end{array} \right ]. $$ By performing a rotation by $S = \exp( -i \frac{\pi}{2} \sigma_x \otimes I_m)$, we have $$ S A S' = i 2 \sigma_z \otimes \Lambda = \left [ \begin{array}{cc} i \Lambda & 0 \\ 0 & -i \Lambda \end{array} \right ]. $$ We define $\tilde{S} = \left [ \begin{array}{cc}S & 0 \\ 0 & I_n \end{array} \right ]$. The adjoint representation $ad_{Ad_K(a)}$ takes the form $$ \Theta_1 \tilde{S}' \left [ \begin{array}{ccc} i \Lambda & 0 & 0\\ 0 & -i \Lambda & 0 \\ 0 & 0 & 0 \end{array} \right ]\tilde{S} \Theta_1', $$ where $\Theta_1$ is the matrix representation of $Ad_K(\cdot)$ over the chosen basis. It is orthogonal, as it preserves the Killing norm.
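As a quick numerical check of the block structure above (our own sketch, not part of the proof), the matrix $A$ with $\Lambda = diag(\lambda_1, \dots, \lambda_m)$ indeed has the purely imaginary spectrum $\{\pm i\lambda_j\}$:

```python
import numpy as np

# Our own numeric sketch: A = [[0, L], [-L, 0]] with L = diag(lambda_j)
# has eigenvalues +/- i*lambda_j, matching the block form in the proof.
lam = np.array([1.0, 2.5, 4.0])   # hypothetical root values lambda_j
L = np.diag(lam)
Z = np.zeros_like(L)
A = np.block([[Z, L], [-L, Z]])
eig = np.linalg.eigvals(A)
assert np.allclose(eig.real, 0.0, atol=1e-9)  # spectrum is purely imaginary
assert np.allclose(np.sort(eig.imag), np.sort(np.concatenate([lam, -lam])))
```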
When $Ad_K$ is an automorphism of $\a$, we have $$ \tilde{S} ad_{Ad_K(a)} \tilde{S}' = \tilde{S} \Theta_1 \tilde{S}' \left [ \begin{array}{ccc} i \Lambda & 0 & 0 \\ 0 & -i \Lambda & 0 \\ 0 & 0 & 0 \end{array} \right ] \tilde{S} \Theta_1' \tilde{S}' = \left [ \begin{array}{ccc} i \tilde{\Lambda} & 0 & 0 \\ 0 & -i \tilde{\Lambda} & 0 \\ 0 & 0 & 0 \end{array} \right ]. $$ Since eigenvalues are preserved by a similarity transformation, there are only finitely many possibilities for $ \left [ \begin{array}{cc} i \tilde{\Lambda} & 0 \\ 0 & -i \tilde{\Lambda} \end{array} \right ] $, which means there are finitely many possibilities for $ad_{Ad_K(a)}$ and $Ad_K(a)$. Hence the Weyl points are finite in number, and therefore the number of $Ad_K$ automorphisms of $\a$ is finite. \begin{example}{\rm Let $$ \mathfrak{g}= -i \{ I_{\alpha}, S_\beta, I_{\alpha}S_{\beta} \} = \underbrace{-i \{I_{\alpha}S_{\beta}\}}_{\p} \oplus \underbrace{-i \{I_{\alpha}, S_{\beta}\}}_{\k},$$ $\mathfrak{a}= -i \{I_\alpha S_\alpha \}$. Given $\alpha I_xS_x + \beta I_y S_y + \gamma I_z S_z$, the roots are $-i \{ I_{y}S_{z} \pm I_z S_{y} + i \frac{1}{2}(I_x \pm S_{x}) \}$, with value $\frac{\gamma \mp \beta}{2}$; $-i \{ I_{z}S_{x} \pm I_x S_{z} \mp i \frac{1}{2}(I_y \pm S_{y}) \} $, with value $\frac{\gamma \mp \alpha}{2}$; and $-i \{ I_{x}S_{y} \pm I_y S_{x} + i \frac{1}{2} (I_z \pm S_{z})\} $, with value $\frac{\beta \mp \alpha}{2}$. A regular element has $|\alpha| \neq |\beta| \neq |\gamma|$. The fundamental roots are $\frac{\beta \pm \alpha}{2}$, $\frac{\gamma - \beta}{2}$.} \end{example} \begin{example}{\rm Let $$ \mathfrak{g}= -i \{ I_{\alpha}, S_\beta, I_{\alpha}S_{\beta} \} = \underbrace{-i \{I_x, I_y, I_x S_\alpha, I_yS_{\beta}\}}_{\p} \oplus \underbrace{-i \{I_z, S_{\alpha}, I_{z}S_{\beta}\}}_{\k}, $$ $\mathfrak{a}= -i \{I_x, I_xS_z \}$.
Given $\alpha I_x + \beta 2 I_x S_z = \frac{(\alpha+\beta)}{2} I_x(\frac{\mbox{$\bf 1\ $}}{2} + S_z) + \frac{(\alpha -\beta)}{2} I_x(\frac{\mbox{$\bf 1\ $}}{2} -S_z) $, the roots are $-i \{ I_y(\frac{\mbox{$\bf 1\ $}}{2} + S_z) + i I_z(\frac{\mbox{$\bf 1\ $}}{2} + S_z) \}$, with value $\frac{\alpha + \beta}{2}$; $-i \{ I_y(\frac{\mbox{$\bf 1\ $}}{2} - S_z) + i I_z(\frac{\mbox{$\bf 1\ $}}{2} - S_z) \}$, with value $\frac{\alpha-\beta}{2}$; $-i \{ 2I_xS_x + i S_y \}$, with value $\beta$; $-i \{ 2I_yS_x + i 2I_z S_x \}$, with value $\alpha$; $-i \{ 2I_yS_y + i 2I_z S_y \}$, with value $\alpha$; and $-i \{ 2I_xS_y - i S_x \}$, with value $\beta$. A regular element has $|\alpha| \neq |\beta| \neq 0$. The fundamental roots are $\beta$ and $\frac{\alpha - \beta}{2}$, with double roots at $\alpha$ and $\beta$.}\end{example} \begin{example}\label{eg:e-nroots}{\rm Given the Cartan decomposition $\mathfrak{g}= \mathfrak{p}\oplus \k$, where $\mathfrak{g}= su(2n)$, $ \mathfrak{p}= \left [ \begin{array}{cc} 0 & X \\ -X' & 0 \end{array} \right ],$ $\mathfrak{k}= \left [ \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right ]$, where $Tr(A+B) = 0$, $A, B \in u(n)$, and $\mathfrak{a}= \{ X = \Sigma = diag(\lambda_i) \}$ with $\lambda_i$ real, we calculate the roots $k + ip$.} \end{example} {\rm $A = A^1 + i A^2$, $B = B^1 + iB^2$ and $X = X^1 + iX^2$. Let $A^1_{ij}, B^1_{ij}$ be $1$, $-1$ in the $ij$ and $ji$ spots, $(i < j)$. Let $A^2_{ij}, B^2_{ij}$ be $1$, $1$ in the $ij$ and $ji$ spots, $(i < j)$. $X^{1}_{ij}, X^2_{ij}$ is $1$ in the $ij$ spot. $\Lambda^{\pm}_i$ is $1$ in $A^2_{ii}$ and $\pm 1$ in $B^2_{ii}$. On $A^1_{ij}$, $B^1_{ij}$, $[\Sigma, \cdot]$ takes the form $$ \left [ \begin{array}{cc} -\lambda_j & \lambda_i \\ \lambda_i & -\lambda_j \end{array} \right ], $$ where $X^1_{ij}, X^1_{ji}$ are a basis for the range. The eigenvalues and eigenvectors are $\lambda_i - \lambda_j$ and $-(\lambda_i + \lambda_j)$, with eigenvectors $[1, 1]'$ and $[1, -1]'$ respectively.
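The eigenvalue claim for this $2 \times 2$ block can be verified directly; the following sketch (with arbitrary sample values for $\lambda_i, \lambda_j$, our own choice) checks it numerically.

```python
import numpy as np

# The block of [Sigma, .] on the pair A^1_ij, B^1_ij from the example,
# with hypothetical values for lambda_i and lambda_j.
li, lj = 2.0, 0.5
T = np.array([[-lj, li],
              [li, -lj]])
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
assert np.allclose(T @ v1, (li - lj) * v1)    # eigenvalue lambda_i - lambda_j
assert np.allclose(T @ v2, -(li + lj) * v2)   # eigenvalue -(lambda_i + lambda_j)
```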
On $X^1_{ij}$, $X^1_{ji}$, $[\Sigma, \cdot]$ takes the form $$ \left [ \begin{array}{cc} \lambda_j & -\lambda_i \\ -\lambda_i & \lambda_j \end{array} \right ], $$ where $A^1_{ij}, B^1_{ij}$ are a basis for the range. The eigenvalues and eigenvectors are $\lambda_j - \lambda_i$ and $(\lambda_i + \lambda_j)$, with eigenvectors $[1, 1]'$ and $[1, -1]'$ respectively. This gives that $\frac{1}{\sqrt{2}} ( A^1_{ij} + B^1_{ij}) + i (X^1_{ij} + X^1_{ji})$ and $\frac{1}{\sqrt{2}} ( A^1_{ij} - B^1_{ij}) + i (X^1_{ij} - X^1_{ji})$ are roots, with eigenvalues $-i(\lambda_i - \lambda_j)$ and $i (\lambda_i + \lambda_j)$, respectively. On $A^2_{ij}$, $B^2_{ij}$, $[\Sigma, \cdot]$ takes the form $$ T_1 = \left [ \begin{array}{cc} -\lambda_j & \lambda_i \\ -\lambda_i & \lambda_j \end{array} \right ], $$ where $X^2_{ij}, X^2_{ji}$ are a basis for the range. On $X^2_{ij}$, $X^2_{ji}$, $[\Sigma, \cdot]$ takes the form $$ T_2 = - T_1^T = \left [ \begin{array}{cc} \lambda_j & \lambda_i \\ -\lambda_i & -\lambda_j \end{array} \right ], $$ where $A^2_{ij}, B^2_{ij}$ are a basis for the range. The eigenvalues and eigenvectors of $T_2 T_1 = - T_1^{T} T_1$ are $-(\lambda_i - \lambda_j)^2$ and $-(\lambda_i + \lambda_j)^2$, with eigenvectors $[1, 1]'$, $[1, -1]'$, respectively, with $T_1 [1, 1]' = (\lambda_i - \lambda_j) [1, -1]'$ and $T_1 [1, -1]' = -(\lambda_i + \lambda_j) [1, 1]'$. This gives that $\frac{1}{\sqrt{2}} ( A^2_{ij} + B^2_{ij}) + i (X^2_{ij} - X^2_{ji})$ and $\frac{1}{\sqrt{2}} ( A^2_{ij} - B^2_{ij}) + i (X^2_{ij} + X^2_{ji})$ are roots, with eigenvalues $-i(\lambda_i - \lambda_j)$ and $i (\lambda_i + \lambda_j)$, respectively. We have a double root with eigenvalues $\lambda_i \pm \lambda_j$. On $\Lambda^{+}$, we have $[\Sigma, \Lambda^{+}] = 0$ and $$[\Sigma, \Lambda^{-}_{k}]= -2 i \lambda_{k} X^2_{kk}, \ \ [\Sigma, i X^2_{kk}]= 2 \lambda_k \Lambda^{-}_{k}. $$ This gives that the $\Lambda^{-}_k - X^2_{kk}$ are roots, with eigenvalues $i 2 \lambda_{k}$. } \end{document}
\begin{document} \title{Effective birationality for sub-pairs with real coefficients} \author{Jingjun Han and Jihao Liu} \begin{abstract} For $\epsilon$-lc Fano type varieties $X$ of dimension $d$ and a given finite set ${\Gamma}$, we show that there exists a positive integer $m_0$ which only depends on $\epsilon,d$ and ${\Gamma}$, such that both $|-mK_X-\sum_i\lceil mb_i\rceil B_i|$ and $|-mK_X-\sum_i\lfloor mb_i\rfloor B_i|$ define birational maps for any $m\ge m_0$, provided that $B_i$ are pseudo-effective Weil divisors, $b_i\in{\Gamma}$, and $-(K_X+\sum_ib_iB_i)$ is big. When ${\Gamma}\subset[0,1]$ satisfies the DCC but is not finite, we construct an example to show that the effective birationality may fail even if $X$ is fixed, $B_i$ are fixed prime divisors, and $(X,B)$ is $\epsilon'$-lc for some $\epsilon'>0$. \end{abstract} \address{Department of Mathematics, Johns Hopkins University, Baltimore, MD 21218, USA} \email{[email protected]} \address{Department of Mathematics, The University of Utah, Salt Lake City, UT 84112, USA} \email{[email protected]} \subjclass[2010]{Primary 14E30, Secondary 14B05.} \date{\today} \maketitle \tableofcontents \section{Introduction} We work over the field of complex numbers $\mathbb C$. For any Fano variety $X$, the anti-pluricanonical systems $|-mK_X|$ determine its geometry to a large extent. In the past decade, one of the most important advances in birational geometry is the effective birationality for $\epsilon$-lc Fano varieties proved by Birkar, which leads to a solution of the Borisov-Alexeev-Borisov (BAB) conjecture: \begin{thm}\cite[Theorem 1.2]{Bir19}\label{thm: birkar -mK_X} Let $d$ be a positive integer and $\epsilon$ a positive real number. Then there exists a positive integer $m$ depending only on $d$ and $\epsilon$ such that if $X$ is any $\epsilon$-lc weak Fano variety of dimension $d$, then $|-mK_{X}|$ defines a birational map.
\end{thm} It is natural to ask whether Theorem \ref{thm: birkar -mK_X} can be improved to the case of log pairs $(X,B)$. This is inspired by similar works in the general type case. The effective birationality of the linear systems $|mK_X|$ for smooth varieties $X$ of general type was proved by Hacon-M\textsuperscript{c}Kernan \cite{HM06} and Takayama \cite{Tak06} independently. Later, in a series of celebrated papers, Hacon, M\textsuperscript{c}Kernan and Xu showed the effective birationality of the linear systems $|m(K_X+B)|$ for lc pairs $(X,B)$ of log general type, provided that the coefficients of $B$ satisfy the descending chain condition (DCC) \cite{HMX13,HMX14}. Here we adopt the convention that $|D|:=|\lfloor D\rfloor|$ for any $\mathbb{R}$-divisor $D$. Unfortunately, Example \ref{ex: gen hyp p11910} indicates that we cannot generalize Birkar's result straightforwardly, even when $\dim X=2$ and the coefficient set ${\Gamma}$ satisfies the DCC. Nevertheless, when ${\Gamma}$ is a finite set, the effective birationality holds even when we loosen our assumptions to a wider class of sub-pairs $(X,B)$. \begin{thm}\label{thm: eff bir fin coeff gpair} Let $d$ be a positive integer, $\epsilon$ a positive real number, and ${\Gamma}_0$ a finite set of non-negative real numbers. Then there exists a positive integer $m_0$ depending only on $d,\epsilon$ and ${\Gamma}_0$ satisfying the following. Assume that \begin{enumerate} \item $(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big, \item $B=\sum_{i=1}^s b_iB_i$ is a pseudo-effective $\mathbb R$-divisor on $X$, where each $b_i\in{\Gamma}_0$, and each $B_i$ is a pseudo-effective Weil divisor, and \item $-(K_X+B)$ is big. \end{enumerate} Then $|-mK_X-\sum_{i=1}^s\lceil mb_i\rceil B_i|$ and $|-mK_X-\sum_{i=1}^s\lfloor mb_i\rfloor B_i|$ both define birational maps for any integer $m\geq m_0$.
In particular, if each $B_i\geq 0$, then $|-m(K_X+B)|$ defines a birational map for any integer $m\geq m_0$. \end{thm} In Theorem \ref{thm: eff bir fin coeff gpair}, $(X,B)$ may not even be sub-lc, as the $B_i$ may not be effective and ${\Gamma}_0$ may not belong to $[0,1]$. It is also possible that $|-mK_X-\sum_{i=1}^s\lceil mb_i\rceil B_i|\not\subset |-mK_X-\sum_{i=1}^s\lfloor mb_i\rfloor B_i|$ although $\sum_{i=1}^s(\lceil mb_i\rceil-\lfloor mb_i\rfloor)B_i$ is pseudo-effective. \medskip When $\operatorname{vol}(-(K_X+B))$ is bounded from below away from $0$, we can even remove the assumption on the coefficients $b_i$ in Theorem \ref{thm: eff bir fin coeff gpair}. \begin{thm}\label{thm: eff bir bound volume more section gpair} Let $d$ be a positive integer, and $\epsilon,v$ and $\delta$ three positive real numbers. Then there exist a positive integer $m_0$ depending only on $d,\epsilon$ and $v$, and a positive integer $m_0'$ depending only on $d,\epsilon,v$ and $\delta$ satisfying the following. Assume that \begin{itemize} \item $(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big, \item $B$ is a pseudo-effective $\mathbb R$-divisor on $X$, and \item $\operatorname{vol}(-(K_X+B))\geq v$. \end{itemize} Then \begin{enumerate} \item $|\lceil -m(K_X+B)\rceil|$ defines a birational map for any integer $m\geq m_0$, \item if $B=\sum_{i=1}^s b_iB_i$, where each $B_i$ is a pseudo-effective Weil divisor, then \begin{enumerate} \item $|-mK_X-\sum_{i=1}^s\lfloor mb_i\rfloor B_i|$ defines a birational map for any integer $m\geq m_0$, and \item if $b_i\geq\delta$ for every $i$, then $|-mK_X-\sum_{i=1}^s\lceil mb_i\rceil B_i|$ defines a birational map for any integer $m\geq m_0'$. In particular, if each $B_i\geq 0$, then $|-m(K_X+B)|$ defines a birational map for any integer $m\geq m_0'$.
\end{enumerate} \end{enumerate} \end{thm} Indeed, Theorem \ref{thm: eff bir fin coeff gpair} follows from Theorem \ref{thm: eff bir bound volume more section gpair} and a detailed study of the volume $\operatorname{vol}(-(K_X+B))$: \begin{thm}\label{thm: bddvolacc maynotbelc} Let $d$ be a positive integer, $\epsilon$ a positive real number, and ${\Gamma}$ a DCC (resp. finite) set of non-negative real numbers. Then there exists an ACC (resp. finite) set ${\Gamma}'$ depending only on $d,\epsilon$ and ${\Gamma}$ satisfying the following. Assume that \begin{enumerate} \item $(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big, \item $B$ is a pseudo-effective $\mathbb{R}$-divisor on $X$, and \item $B=\sum_{i=1}^s b_iB_i$, where each $b_i\in{\Gamma}$, and each $B_i$ is a pseudo-effective Weil divisor. \end{enumerate} Then $\operatorname{vol}(-(K_X+B))\in {\Gamma}'$. \end{thm} \begin{rem} A special case in which we can apply Theorem \ref{thm: eff bir fin coeff gpair}, Theorem \ref{thm: eff bir bound volume more section gpair}, and Theorem \ref{thm: bddvolacc maynotbelc} is when $(X,B:=B'+M)$ is a generalized polarized pair, where $M$ is a nef $\mathbb{Q}$-Cartier combination (NQC) $\mathbb{R}$-divisor. We refer the readers to \cite{BZ16,HL18,LT19,HL19,HL20} in this direction. \end{rem} \begin{rem} The assumptions of Theorem \ref{thm: eff bir fin coeff gpair}, Theorem \ref{thm: eff bir bound volume more section gpair}, and Theorem \ref{thm: bddvolacc maynotbelc} are necessary: see Section 6 for some examples. \end{rem} We remark that Birkar proved Theorem \ref{thm: birkar -mK_X} together with Shokurov's conjecture on boundedness of lc complements when ${\Gamma}\subset[0,1]\cap \mathbb{Q}$ is finite \cite[Theorem 1.1, Theorem 1.7, Theorem 1.8]{Bir19}.
Shokurov's conjecture on boundedness of lc complements (when ${\Gamma}$ satisfies the DCC) was proved by the authors and Shokurov \cite[Theorem 1.10]{HLS19}, and Chen generalized it to generalized polarized pairs \cite[Theorem 1.1]{Che20}. The proofs of Theorem \ref{thm: eff bir fin coeff gpair} and Theorem \ref{thm: eff bir bound volume more section gpair} do not depend on those results in \cite{HLS19,Che20}. \medskip We also point out that in Theorem \ref{thm: eff bir fin coeff gpair} and Theorem \ref{thm: eff bir bound volume more section gpair}, we show the effective birationality for any positive integer $m\ge m_0$ rather than for a fixed positive integer $m$ (cf. Theorem \ref{thm: birkar -mK_X}). In this paper, we will also generalize other previous results on effective birationality, \cite[Theorem 1.3(3)]{HMX14} and \cite[Theorem 1.3]{BZ16}, to this setting; see Theorem \ref{thm: birational one m to inf m} and Theorem \ref{thm: IMPROVED all big theorems on effective birationality}. We also slightly generalize \cite[Theorem 1.2]{BZ16}; see Theorem \ref{thm: effective iitaka sufficiently large}. \medskip \noindent\textit{Structure of the paper}. In Section 2, we introduce some notation and tools that will be used in the rest of the paper. In Section 3, we study the volume $\operatorname{vol}(-(K_X+B))$ and prove Theorem \ref{thm: bddvolacc maynotbelc}. In Section 4, we prove Theorem \ref{thm: eff bir fin coeff gpair} and Theorem \ref{thm: eff bir bound volume more section gpair}. In Section 5, we adopt the methods developed in this paper to slightly generalize \cite[Theorem 1.3(3)]{HMX14} and \cite[Theorem 1.2, Theorem 1.3]{BZ16}. In Section 6, we give some examples to illustrate that the assumptions in the theorems we proved are necessary. \medskip \noindent\textbf{Acknowledgement}. The authors would like to thank Guodu Chen, Chen Jiang and Yuchen Liu for useful comments and discussions.
The first author would like to thank Chenyang Xu for encouraging him to start this work. The second author would like to thank Christopher D. Hacon for useful discussions and encouragement. The second author was partially supported by NSF research grants DMS-1801851 and DMS-1952522, and by a grant from the Simons Foundation; Award Number: 256202. \section{Preliminaries} \subsection{Pairs and singularities} \begin{defn}\label{defn: DCC and ACC} Let ${\Gamma}$ be a set of real numbers. We say that ${\Gamma}$ satisfies the \emph{descending chain condition} (DCC) if any decreasing sequence $a_1\ge a_2 \ge \cdots \ge a_k \ge\cdots$ in ${\Gamma}$ stabilizes. We say that ${\Gamma}$ satisfies the \emph{ascending chain condition} (ACC) if any increasing sequence in ${\Gamma}$ stabilizes. \end{defn} \begin{defn} Let $X$ be a normal variety, and $B=\sum_{i=1}^s b_iB_i$ an $\mathbb{R}$-divisor on $X$, where the $B_i$ are the irreducible components of $B$. The $b_i$ are called the \emph{coefficients} of $B$, $||B||:=\max_{1\leq i\leq s}\{|b_i|\}$, and we write $B\in{\Gamma}$ if $b_i\in{\Gamma}$ for every $i$. We write $B\geq 0$ if $B\in [0,+\infty)$. If $B\sim_{\mathbb R}B'\geq 0$ for some $B'$, $B$ is called \emph{effective}. \end{defn} \begin{defn}\label{defn: positivity} A \emph{sub-pair} $(X,B)$ consists of a normal quasi-projective variety $X$ and an $\mathbb{R}$-divisor $B$ such that $K_X+B$ is $\mathbb{R}$-Cartier. If $B\geq 0$, then $(X,B)$ is called a \emph{pair}. If $B\in [0,1]$, then $B$ is called a \emph{boundary} of $X$. Let $E$ be a prime divisor on $X$ and $D$ an $\mathbb R$-divisor on $X$. We define $\operatorname{mult}_ED$ to be the \emph{multiplicity} of $D$ along $E$.
Let $\phi:W\to X$ be any log resolution of $(X,B)$, and let $$K_W+B_W:=\phi^{*}(K_X+B).$$ The \emph{log discrepancy} of a prime divisor $D$ on $W$ with respect to $(X,B)$ is $1-\operatorname{mult}_{D}B_W$, and it is denoted by $a(D,X,B)$. For any positive real number $\epsilon$, we say that $(X,B)$ is \emph{klt} (resp. \emph{lc}, \emph{$\epsilon$-lc}) if $a(D,X,B)>0$ (resp. $\geq0$, $\geq\epsilon$) for every log resolution $\phi: W\to X$ as above and every prime divisor $D$ on $W$. We say that $(X,B)$ is \emph{dlt} if $a(D,X,B)>0$ for any exceptional prime divisor $D\subset W$ over $X$ for some log resolution $\phi:W\rightarrow X$. \end{defn} \begin{defn} Let $X$ be a normal projective variety. We say $X$ is of \emph{Fano type} if $(X,B)$ is klt and $-(K_X+B)$ is big and nef for some boundary $B$. In particular, $-K_X$ is big. \end{defn} \begin{defn} Let $X$ be a normal projective variety. A \emph{big} $\mathbb{R}$-divisor on $X$ is an $\mathbb{R}$-divisor $D$ on $X$ such that $D\sim_{\mathbb R}A+E$, where $A$ is an ample $\mathbb{R}$-divisor and $E\geq 0$. We emphasize that $D$ may not be $\mathbb{R}$-Cartier. \end{defn} \subsection{Bounded families} \begin{defn}\label{defn: bdd} A \emph{couple} $(X,D)$ consists of a normal projective variety $X$ and a reduced divisor $D$ on $X$. Two couples $(X,D)$ and $(X',D')$ are \emph{isomorphic} if there exists an isomorphism $X\rightarrow X'$ mapping $D$ onto $D'$. A set $\mathcal{P}$ of couples is \emph{bounded} if there exist finitely many projective morphisms $V^i\rightarrow T^i$ of varieties and reduced divisors $C^i$ on $V^i$ satisfying the following. For each $(X,D)\in\mathcal{P}$, there exist an index $i$ and a closed point $t\in T^i$, such that the couples $(X,D)$ and $(V^i_t,C^i_t)$ are isomorphic, where $V^i_t$ and $C^i_t$ are the fibers over $t$ of the morphisms $V^i\rightarrow T^i$ and $C^i\rightarrow T^i$ respectively.
A set $\mathcal{C}$ of projective pairs $(X,B)$ is said to be \emph{log bounded} if the corresponding set of couples $\{(X,\operatorname{Supp} B)\}$ is bounded. A set $\mathcal{D}$ of projective varieties $X$ is said to be \emph{bounded} if the corresponding set of couples $\{(X,0)\}$ is bounded. A log bounded (resp. bounded) set is also called a \emph{log bounded family} (resp. \emph{bounded family}). \end{defn} \begin{thm}[{\cite[Corollary 1.2]{Bir16}}]\label{thm: BAB} Let $d$ be a positive integer and $\epsilon$ a positive real number. Then the projective varieties $X$ such that \begin{enumerate} \item $(X,B)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $B$, and \item $K_X+B\sim_{\mathbb R}0$ and $B$ is big \end{enumerate} form a bounded family. \end{thm} \begin{rem} Assume that $X$ is of Fano type. Then for any $\mathbb{R}$-Cartier $\mathbb{R}$-divisor $D$ on $X$, we can run a $D$-MMP which terminates with some model $Y$ (cf. \cite[Corollary 2.9]{PS09}). Moreover, if $(X,\Delta)$ is $\epsilon$-lc for some positive real number $\epsilon$ and some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$, then the $D$-MMP is a sequence of $(K_X+\Delta)$-flops. Therefore, if $\Delta_Y$ is the strict transform of $\Delta$ on $Y$, then $(Y,\Delta_Y)$ is $\epsilon$-lc and $K_Y+\Delta_Y\sim_{\mathbb R}0$. In particular, $Y$ belongs to a bounded family. We will repeatedly use this fact in the rest of the paper. \end{rem} The next corollary is well-known to experts: \begin{cor}\label{cor: bab log bdd} Let $d$ be a positive integer and $\epsilon,\delta$ two positive real numbers. Then the projective pairs $(X,B)$ such that \begin{enumerate} \item $(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big, \item the non-zero coefficients of $B$ are $\geq\delta$, and \item $-(K_X+B)$ is pseudo-effective, \end{enumerate} form a log bounded family.
\end{cor} \begin{proof} By Theorem \ref{thm: BAB}, $X$ belongs to a bounded family, so there exist a positive integer $r$ depending only on $d$ and $\epsilon$, and a very ample divisor $H$ on $X$, such that $H^d\leq r$ and $(-K_X)\cdot H^{d-1}\leq r$. Since $-(K_X+B)$ is pseudo-effective, $-(K_X+B)\cdot H^{d-1}\geq 0$. Thus for any irreducible component $D$ of $B$, $D\cdot H^{d-1}\leq\frac{r}{\delta}$ is bounded from above. By \cite[Lemma 2.20]{Bir19}, $(X,B)$ is log bounded. \end{proof} \begin{lem}\label{lem: psd ft change to eff} Let $d$ be a positive integer and $\epsilon,\delta$ two positive real numbers. Then there exists a positive integer $n$ depending only on $d$, $\epsilon$ and $\delta$ satisfying the following. Assume that \begin{enumerate} \item $(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big, and \item $B'$ is a pseudo-effective Weil divisor on $X$ such that $-(\delta K_X+B')$ is pseudo-effective, \end{enumerate} then there exists a $\mathbb{Q}$-divisor $B''\ge0$ on $X$, such that $nB''\sim nB'$. \end{lem} \begin{proof} Possibly taking a small $\mathbb{Q}$-factorialization, we may assume that $X$ is $\mathbb{Q}$-factorial. Since $X$ is of Fano type, we may run a $B'$-MMP and reach a minimal model $f:X\dashrightarrow Y$ of $B'$. By Theorem \ref{thm: BAB}, $Y$ belongs to a bounded family. Thus there exist a positive integer $r$ depending only on $d$ and $\epsilon$, and a very ample divisor $H$ on $Y$, such that $H^d\leq r$ and $-K_Y\cdot H^{d-1}\leq r$. Let $B'_Y$ be the strict transform of $B'$ on $Y$. Since $-(\delta K_Y+B'_Y)$ is pseudo-effective, $-(\delta K_Y+B'_Y)\cdot H^{d-1}\ge0$, hence $B'_Y\cdot H^{d-1}\leq -\delta K_Y\cdot H^{d-1}\leq \delta r$. By \cite[Lemma 2.25]{Bir19}, there exists a positive integer $n_1$ depending only on $d,\epsilon$ and $\delta$, such that $n_1B'_Y$ is Cartier. We claim that $n:=2n_1(d+2)!(d+1)$ has the required properties.
By \cite[Theorem 1.1]{Kol93}, $nB'_Y$ is base-point-free. In particular, there exists a $\mathbb{Q}$-divisor $B''_Y\ge 0$, such that $nB''_Y\sim nB'_Y$. Let $p: W\rightarrow X$ and $q: W\rightarrow Y$ be a common resolution of $f:X\dashrightarrow Y$; then we have $$p^*B'=q^*B'_Y+E$$ for some $\mathbb{Q}$-divisor $E\geq 0$. Let $$B'':=p_*q^*B''_Y+p_*E.$$ It follows that $nB''=n(p_*q^*B''_Y+p_*E)\sim n(p_*q^*B'_Y+p_*E)=np_{*}p^{*}B'=nB'$. \end{proof} Corollary \ref{cor: psd complement for bounded family} could be regarded as a generalization of the theory of complements to the setting of ``pseudo-effective boundaries'' $B$, while $X$ lies in a bounded family of Fano type varieties. \begin{cor}\label{cor: psd complement for bounded family} Let $d$ be a positive integer, $\epsilon$ a positive real number, and ${\Gamma}_0$ a finite set of rational numbers. Then there exists a positive integer $n$ depending only on $d$, $\epsilon$ and ${\Gamma}_0$ satisfying the following. Assume that \begin{enumerate} \item $(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big, \item $B=\sum_{i=1}^s b_iB_i$ is a pseudo-effective $\mathbb{R}$-Cartier $\mathbb R$-divisor on $X$, where each $b_i\in{\Gamma}_0$ and each $B_i$ is a pseudo-effective Weil divisor, and \item $-(K_X+B)$ is pseudo-effective, \end{enumerate} then there exists a $\mathbb{Q}$-divisor $B^{+}$ on $X$, such that $nB^{+}$ is a Weil divisor, and $n(B^{+}-B)\in |-n(K_X+B)|$. In particular, $n(K_X+B^{+})\sim 0$. \end{cor} \begin{proof} Let $n_0$ be a positive integer such that $n_0{\Gamma}_0\subset\mathbb{Z}$, and $B':=-n_0(K_X+B)$. Then both $B'$ and $-(n_0K_X+B')=n_0B$ are pseudo-effective. By Lemma \ref{lem: psd ft change to eff}, there exist a positive integer $n_1$ depending only on $d,\epsilon$ and $n_0$, and a $\mathbb{Q}$-divisor $B''\ge 0$, such that $n_1B''\sim n_1B'=-n_0n_1(K_X+B)$.
It follows that $n:=n_0n_1$ and $B^{+}:=\frac{1}{n}(n_1B''+nB)$ have the required properties. \end{proof} \subsection{Potentially birational} \begin{defn} Let $X$ be a normal projective variety and $D$ a big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. We say that $D$ is \emph{potentially birational} if for any two general closed points $x$ and $y$ on $X$, possibly switching $x$ and $y$, we may find $0\le \Delta\sim_{\mathbb Q}(1-\epsilon)D$ for some $\mathbb{Q}$-divisor $\Delta$ and rational number $\epsilon\in (0,1)$, such that $(X,\Delta)$ is lc at $x$ with $\{x\}$ an lc center, and $(X,\Delta)$ is not klt at $y$. \end{defn} \begin{rem} By definition, being potentially birational is preserved under $\mathbb{Q}$-linear equivalence: if $D$ is potentially birational and $D\sim_{\mathbb Q}D'$, then $D'$ is also potentially birational. We will repeatedly use this fact in this paper. \end{rem} \begin{lem}[{\cite[Lemma 2.3.4]{HMX13}}]\label{lem: potentially birational} Let $X$ be a normal projective variety and $D$ a big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. \begin{enumerate} \item If $D$ is potentially birational, then $|K_X+\lceil D\rceil|$ defines a birational map. \item If $|D|$ defines a birational map, then $(2\dim X+1)\lfloor D\rfloor$ is potentially birational. \end{enumerate} \end{lem} \begin{lem}\label{lem: pot bir plus eff is pot bir} Let $X$ be a normal projective variety, $D$ a big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$, and $E$ an effective $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. Suppose that $D$ is potentially birational. Then $D+E$ is potentially birational. \end{lem} \begin{proof} Since $E$ is effective, $E\sim_{\mathbb Q}F$ for some $\mathbb{Q}$-divisor $F\geq 0$.
Since $D$ is potentially birational, for any two general closed points $x$ and $y$ on $X$, possibly switching $x$ and $y$, we may find $0\le \Delta\sim_{\mathbb Q}(1-\epsilon)D$ for some rational number $\epsilon\in (0,1)$, such that $(X,\Delta)$ is lc at $x$ with $\{x\}$ an lc center, and $(X,\Delta)$ is not klt at $y$. Since $x,y$ are general, $x,y\not\in\operatorname{Supp} F$, hence $(X,\Delta':=\Delta+(1-\epsilon)F)$ is lc at $x$ with $\{x\}$ an lc center, and $(X,\Delta')$ is not klt at $y$. We have $$(1-\epsilon)(D+E)\sim_{\mathbb Q}\Delta+(1-\epsilon)F\geq 0.$$ It follows that $D+E$ is potentially birational. \end{proof} The next technical result is crucial to the proofs of our main theorems: \begin{thm}\label{thm: birational one m to inf m} Let $d,m_1,m_2$ be three positive integers and $\epsilon$ a positive real number. Then there exists a positive integer $m_0$ depending only on $d,m_1,m_2$ and $\epsilon$ satisfying the following. Assume that \begin{enumerate} \item $X$ is a $\mathbb{Q}$-factorial projective variety of dimension $d$, \item $B=\sum_{i=1}^s b_iB_i$ is a pseudo-effective $\mathbb{R}$-Cartier $\mathbb R$-divisor on $X$, where each $B_i$ is a pseudo-effective Weil divisor, \item $D=n(K_X+B)$ is a big $\mathbb{R}$-divisor on $X$ for some integer $n$, \item $|m_2m_1nK_X+m_2\sum_{i=1}^s\lfloor m_1nb_i\rfloor B_i|$ defines a birational map, and \item for any $\mathbb{R}$-divisor $B'=\sum_{i=1}^sb_i'B_i$ on $X$, if $\max_{1\leq i\leq s}\{|b_i-b_i'|\}\leq\epsilon$, then $n(K_X+B')$ is big. \end{enumerate} Then $|mnK_X+\sum_{i=1}^s\lfloor mnb_i\rfloor B_i|$ defines a birational map for every positive integer $m\geq m_0$. \end{thm} \begin{proof} Since $|m_2m_1nK_X+m_2\sum_{i=1}^s\lfloor m_1nb_i\rfloor B_i|$ defines a birational map, by Lemma \ref{lem: potentially birational}(2), $(2d+1)(m_2m_1nK_X+m_2\sum_{i=1}^s\lfloor m_1nb_i\rfloor B_i)$ is potentially birational. Let $m_3:=(2d+1)m_1m_2$.
We will show that $m_0:=m_3+\lceil\frac{1}{\epsilon}\rceil+1$ satisfies our requirements. For any integer $m\geq m_0$, \begin{align*} &(mnK_X+\sum_{i=1}^s\lfloor mnb_i\rfloor B_i-K_X)-(2d+1)(m_2m_1nK_X+m_2\sum_{i=1}^s\lfloor m_1nb_i\rfloor B_i)\\ =&(n(m-m_3)-1)(K_X+\sum_{i=1}^s(b_i-\frac{\{mnb_i\}}{n(m-m_3)-1})B_i)\\ &+(B+\frac{m_3}{m_1}\sum_{i=1}^s\{m_1nb_i\}B_i). \end{align*} Since $$|\frac{\{mnb_i\}}{n(m-m_3)-1}|<\frac{1}{|n(m-m_3)-1|}\leq\epsilon$$ for any $i$, by our assumptions, $$n(K_X+\sum_{i=1}^s(b_i-\frac{\{mnb_i\}}{n(m-m_3)-1})B_i)$$ is big, hence $$(mnK_X+\sum_{i=1}^s\lfloor mnb_i\rfloor B_i-K_X)-(2d+1)(m_2m_1nK_X+m_2\sum_{i=1}^s\lfloor m_1nb_i\rfloor B_i)$$ is big. By Lemma \ref{lem: pot bir plus eff is pot bir}, $$mnK_X+\sum_{i=1}^s\lfloor mnb_i\rfloor B_i-K_X$$ is potentially birational. By Lemma \ref{lem: potentially birational}(1), $|mnK_X+\sum_{i=1}^s\lfloor mnb_i\rfloor B_i|$ defines a birational map. \end{proof} \begin{rem} In Theorem \ref{thm: birational one m to inf m}, $n$ can be either positive or negative ($n\not=0$ as $D=n(K_X+B)$ is big) and may depend on $X$. On the other hand, $m_0$ does not depend on $n$. In this paper, we usually take $n=1$ or $-1$. \end{rem} \section{Volume for bounded log Fano sub-pairs} In this section, we study several properties of the volume $\operatorname{vol}(-(K_X+B))$, where $X$ is a Fano variety which belongs to a bounded family, and $B$ is a pseudo-effective $\mathbb{R}$-divisor. We prove Theorem \ref{thm: bddvolacc maynotbelc} at the end of this section. \begin{lem}[{\cite[Lemma 2.5]{Jia18}, \cite[Lemma 4.2]{DS16}, \cite[Lemma 2.1]{CDHJS18}}]\label{lem: volineq} Let $X$ be a normal projective variety, $D$ an $\mathbb{R}$-Cartier $\mathbb{R}$-divisor, and $S$ a base-point free normal Cartier prime divisor. Then for any positive real number $t$, \[ \operatorname{vol}(D+tS)\leq \operatorname{vol}(D)+t(\dim X) \operatorname{vol}(D|_S+tS|_S).
\] \end{lem} \begin{thm}\label{thm: approximation big bounded} Let $v$ be a positive real number and $\mathcal{P}$ a log bounded set of pairs. Then there exists a positive real number $\epsilon$ depending only on $v$ and $\mathcal{P}$ satisfying the following. Assume that \begin{enumerate} \item $(X,B)\in\mathcal{P}$ is a $\mathbb{Q}$-factorial projective pair, \item $M$ is a pseudo-effective $\mathbb{R}$-divisor on $X$, \item $\operatorname{vol}(-(K_X+M))\geq v$, and \item $B'$ is an $\mathbb{R}$-divisor on $X$ such that $||B'||\leq\epsilon$ and $\operatorname{Supp} B'\subset\operatorname{Supp} B$. \end{enumerate} Then $\operatorname{vol}(-(K_X+B'+M))\geq\frac{v}{2}$. In particular, $-(K_X+B'+M)$ is big. \end{thm} \begin{proof} Since $\mathcal{P}$ is log bounded, there exist a positive integer $d$ and a positive real number $r$, both depending only on $\mathcal{P}$, such that $\dim X\leq d$, and we may pick a very ample divisor $S$ on $X$, such that $S^{\dim X}\leq r$, and $S+K_X$ and $S-\operatorname{Supp} B$ are big. Let $S'\in |S|$ be a general element, such that $S'$ is a normal Cartier prime divisor, and $(S+K_X+M)|_{S'}$ is effective. By Lemma \ref{lem: volineq}, for any positive real number $t$, \begin{align*} v&\leq\operatorname{vol}(-(K_X+M))\\ &\leq\operatorname{vol}(-(K_X+M)-t S')+t(\dim X)\operatorname{vol}(-(K_X+M)|_{S'})\\ &\leq\operatorname{vol}(-(K_X+M)-t S')+t(\dim X)\operatorname{vol}(S|_{S'})\\ &\leq\operatorname{vol}(-(K_X+M)-t S')+t(\dim X)r. \end{align*} Let $\epsilon:=\frac{v}{2dr}$. For any $\mathbb{R}$-divisor $B'$ on $X$ such that $||B'||\leq\epsilon$ and $\operatorname{Supp} B'\subset\operatorname{Supp} B$, we have \begin{align*} \operatorname{vol}(-(K_X+B'+M))&\geq\operatorname{vol}(-(K_X+\epsilon\operatorname{Supp} B+M))\\ &\geq\operatorname{vol}(-(K_X+M)-\epsilon S')\\ &\geq v-dr\epsilon=\frac{v}{2}. \end{align*} Thus $\epsilon$ satisfies our requirements.
\end{proof} The following proposition is a special case of Theorem \ref{thm: bddvolacc maynotbelc}, namely when ${\Gamma}$ is a finite set. Step 2 in the proof of Proposition \ref{prop: bddvolfin maybenotlc} is similar to the proof of \cite[Lemma 3.26]{HLS19}. \begin{prop}\label{prop: bddvolfin maybenotlc} Let $d$ be a positive integer, $\epsilon$ a positive real number, and ${\Gamma}_0$ a finite set of non-negative real numbers. Then there exists a finite set ${\Gamma}_0'$ depending only on $d,\epsilon$ and ${\Gamma}_0$ satisfying the following. Assume that \begin{enumerate} \item $(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big, \item $B$ is a pseudo-effective $\mathbb{R}$-divisor on $X$, and \item $B=\sum_{i=1}^s b_iB_i$, where each $b_i\in{\Gamma}_0$, and each $B_i$ is a pseudo-effective Weil divisor. \end{enumerate} Then $\operatorname{vol}(-(K_X+B))\in {\Gamma}_0'$. \end{prop} \begin{proof} \noindent\textbf{Step 1}. In this step, we reduce the proposition to the case when each $B_i\geq 0$ and $X$ is $\mathbb{Q}$-factorial. Possibly replacing $X$ with a small $\mathbb{Q}$-factorialization, we may assume that $X$ is $\mathbb{Q}$-factorial. We may assume that $-(K_X+B)$ is big, otherwise $\operatorname{vol}(-(K_X+B))=0$ and there is nothing to prove. We may also assume that each $B_i\not\equiv 0$ and each $b_i>0$, otherwise we may replace $B$ with $B-b_iB_i$. Since $-(K_X+b_iB_i)- (-(K_X+B))$ is pseudo-effective, $-(\frac{1}{b_i}K_X+B_i)$ is big for any $i$. By Lemma \ref{lem: psd ft change to eff}, there exist a positive integer $n$ depending only on $d,\epsilon$ and ${\Gamma}_0$, and $\mathbb{Q}$-divisors $B_i'\geq 0$, such that $nB_i'\sim nB_i$ for each $i$. Possibly replacing ${\Gamma}_0$ with $\frac{1}{n}{\Gamma}_0$ and $B_i$ with $B_i'$, we may assume that each $B_i\geq 0$. By Theorem \ref{thm: BAB}, $X$ belongs to a bounded family.
Thus there exist a positive integer $r$ depending only on $d$ and $\epsilon$, and a very ample divisor $H$ on $X$, such that $(-K_X)\cdot H^{d-1}\leq r$. For any irreducible component $D$ of $B$, since $-(K_X+B)$ is big, $-(K_X+(\operatorname{mult}_DB)D)$ is big. Thus $-(K_X+(\operatorname{mult}_DB)D)\cdot H^{d-1}>0$, hence $\operatorname{mult}_DB<r$, and $B\in [0,r)$. Let $${\Gamma}_0':=\{\sum_{i=1}^sn_i\gamma_i\mid s,n_i\in\mathbb N, \gamma_i\in{\Gamma}_0,\sum_{i=1}^sn_i\gamma_i<r\}.$$ Then ${\Gamma}_0'$ is a finite set, and all the coefficients of $B$ belong to ${\Gamma}_0'$. Possibly replacing $B_1,\dots,B_s$ with the irreducible components of $B$, and ${\Gamma}_0$ with ${\Gamma}_0'$, we may assume that the $B_i$ are distinct prime divisors. \medskip \noindent\textbf{Step 2}. In this step, we prove the proposition by contradiction. Suppose that there exists a sequence of $\mathbb{Q}$-factorial pairs $(X_i,B^i)$, such that $(X_i,\Delta_i)$ is $\epsilon$-lc and $K_{X_i}+\Delta_i\sim_{\mathbb R}0$ for some $\Delta_i$, $B^i\in{\Gamma}_0$, $-(K_{X_i}+B^i)$ is big, and $\operatorname{vol}(-(K_{X_i}+B^i))$ is either strictly increasing or strictly decreasing. Since $X_i$ is of Fano type, possibly replacing $-(K_{X_i}+B^i)$ with a minimal model, we may assume that $-(K_{X_i}+B^i)$ is big and nef. By Corollary \ref{cor: bab log bdd}, $(X_i,B^i)$ belongs to a log bounded family. Possibly passing to a subsequence of $(X_i,\operatorname{Supp} B^i)$, we may assume that there exist a projective morphism $V\to T$ of varieties, a non-negative integer $u$, a reduced divisor $C=\sum_{j=1}^u C_j$ on $V$, and a dense set of closed points $t_i\in T$ such that $X_i$ is the fiber of $V\to T$ over $t_i$, and each component of $\operatorname{Supp} B^i$ is a fiber of $C_j\to T$ over $t_i$ for some $j$. Since $X_i$ is normal, possibly replacing $V$ with its normalization and replacing $C$ with its inverse image with reduced structure, we may assume that $V$ is normal.
Possibly shrinking $T$, using Noetherian induction, and passing to a subsequence of $(X_i,\operatorname{Supp} B^i)$, we may assume that $V\to T$ is flat, and $C_j\to T$ is flat for any $j$. We may assume that $B^i=\sum_{j=1}^ub^i_j C_j|_{t_i}$, where $b^i_j\in{\Gamma}_0$. Possibly passing to a subsequence, we may assume that $b^i_j$ is a constant for each $j$. For each $j$, let $\{a^i_j\}_{i=1}^{\infty}$ be an increasing sequence of rational numbers, such that $a^i_j\le b^1_j$ and $\lim_{i\to+\infty}a^i_j=b^1_j$. Let $B^1_i:=\sum_{j=1}^u a^i_j C_j|_{t_1}$ and $B^2_i:=\sum_{j=1}^u a^i_j C_j|_{t_2}$. By the asymptotic Riemann-Roch theorem and the invariance of Euler characteristic in a flat family, we have \begin{align*} &\operatorname{vol}(-(K_{X_1}+B^1))=(-(K_{X_1}+B^1))^d=\lim_{i\to+\infty}(-(K_{X_1}+B^1_i))^d\\ =&\lim_{i\to+\infty}(-(K_{X_2}+B^2_i))^d =(-(K_{X_2}+B^2))^d=\operatorname{vol}(-(K_{X_2}+B^2)), \end{align*} a contradiction. \end{proof} Now we are ready to prove Theorem \ref{thm: bddvolacc maynotbelc}. Step 2 in the proof of Theorem \ref{thm: bddvolacc maynotbelc} is similar to the proof of \cite[Proposition 3.28]{HLS19}. \begin{proof}[Proof of Theorem \ref{thm: bddvolacc maynotbelc}] By Proposition \ref{prop: bddvolfin maybenotlc}, we only need to prove the case when ${\Gamma}$ is a DCC set, and show that the set of $\operatorname{vol}(-(K_X+B))$ satisfies the ACC. \medskip \noindent\textbf{Step 1}. In this step, we reduce the theorem to the case when each $B_i\geq 0$ and $X$ is $\mathbb{Q}$-factorial. Possibly replacing $X$ with a small $\mathbb{Q}$-factorialization, we may assume that $X$ is $\mathbb{Q}$-factorial. We may assume that $-(K_X+B)$ is big, otherwise $\operatorname{vol}(-(K_X+B))=0$ and there is nothing to prove. We may also assume that each $B_i\not\equiv 0$ and each $b_i>0$, otherwise we may replace $B$ with $B-b_iB_i$. Since $-(K_X+b_iB_i)- (-(K_X+B))$ is pseudo-effective, $-(\frac{1}{b_i}K_X+B_i)$ is big for any $i$.
By Lemma \ref{lem: psd ft change to eff}, there exist a positive integer $n$ depending only on $d,\epsilon$ and ${\Gamma}$, and $\mathbb{Q}$-divisors $B_i'\geq 0$, such that $nB_i'\sim nB_i$ for each $i$. Possibly replacing ${\Gamma}$ with $\frac{1}{n}{\Gamma}$ and $B_i$ with $B_i'$, we may assume that each $B_i\geq 0$. By Theorem \ref{thm: BAB}, $X$ belongs to a bounded family. Thus there exist a positive integer $r$ depending only on $d$ and $\epsilon$, and a very ample divisor $H$ on $X$, such that $(-K_X)\cdot H^{d-1}\leq r$. For any irreducible component $D$ of $B$, since $-(K_X+B)$ is big, $-(K_X+(\operatorname{mult}_DB)D)$ is big. Thus $-(K_X+(\operatorname{mult}_DB)D)\cdot H^{d-1}>0$, hence $\operatorname{mult}_DB<r$, and $B\in [0,r)$. Let $${\Gamma}':=\{\sum_{i=1}^sn_i\gamma_i\mid s,n_i\in\mathbb N, \gamma_i\in{\Gamma},\sum_{i=1}^sn_i\gamma_i<r\}.$$ Then ${\Gamma}'$ is a DCC set, and all the coefficients of $B$ belong to ${\Gamma}'$. Possibly replacing $B_1,\dots,B_s$ with the irreducible components of $B$, and ${\Gamma}$ with ${\Gamma}'$, we may assume that the $B_i$ are distinct prime divisors. \medskip \noindent\textbf{Step 2}. In this step, we prove the theorem by contradiction. Suppose that there exists a sequence of $\mathbb{Q}$-factorial pairs $(X_i,B^i)$, such that $(X_i,\Delta_i)$ is $\epsilon$-lc and $K_{X_i}+\Delta_i\sim_{\mathbb R}0$ for some $\Delta_i$, $B^i\in{\Gamma}$, $-(K_{X_i}+B^i)$ is big, and $\operatorname{vol}(-(K_{X_i}+B^i))$ is strictly increasing. Since $X_i$ is of Fano type, possibly replacing $-(K_{X_i}+B^i)$ with a minimal model, we may assume that $-(K_{X_i}+B^i)$ is big and nef. By Corollary \ref{cor: bab log bdd}, $(X_i,B^i)$ belongs to a log bounded family. In particular, the number of irreducible components of $B^i$ is bounded from above.
Possibly passing to a subsequence, we may assume that $$B^i=\sum_{j=1}^u b^i_jB^i_j,$$ where $u$ is a non-negative integer, $B^i_j$ are the irreducible components of $B^i$, $b^i_j\in{\Gamma}$, and $\{b^i_j\}_{i=1}^{\infty}$ is an increasing sequence for every $j\in\{1,2,\dots,u\}$. Let $$\overline{b_j}:=\lim_{i\to +\infty}b^i_j,\text{ and }\overline{B^i}:=\sum_{j=1}^u \overline{b_j}B^i_j.$$ \begin{claim}\label{claim: effective acc volume maynotbelc} There exists a sequence of positive real numbers $\epsilon_i$, such that $\lim_{i\to+\infty}\epsilon_i=0$, and $$\epsilon_i(-(K_{X_i}+B^i))-(\overline{B^i}-B^i)$$ is effective. \end{claim} Suppose that the claim is true. Possibly passing to a subsequence, we may assume that $\epsilon_i<1$ for every $i$. Then $$-(K_{X_i}+\overline{B^i})-(1-\epsilon_i)(-(K_{X_i}+B^i))=\epsilon_i(-(K_{X_i}+B^i))-(\overline{B^i}-B^i)$$ is effective and $-(K_{X_i}+\overline{B^i})$ is big. Thus \begin{align*} \operatorname{vol}(-(K_{X_i}+\overline{B^i})) \ge&(1-\epsilon_i)^d\operatorname{vol}(-(K_{X_i}+B^i))\\ \ge&(1-\epsilon_i)^d\operatorname{vol}(-(K_{X_i}+\overline{B^i})). \end{align*} By Proposition \ref{prop: bddvolfin maybenotlc}, $\operatorname{vol}(-(K_{X_i}+\overline{B^i}))$ belongs to a finite set. Possibly passing to a subsequence, we may assume that $\operatorname{vol}(-(K_{X_i}+\overline{B^i}))=C$ is a constant. Hence $$\frac{C}{(1-\epsilon_i)^d}\ge\operatorname{vol}(-(K_{X_i}+B^i))\ge C.$$ Since $\lim_{i\rightarrow\infty}\epsilon_i=0$ and $\operatorname{vol}(-(K_{X_i}+B^i))$ is increasing, we deduce that $\operatorname{vol}(-(K_{X_i}+B^i))=C$ for every $i$, a contradiction.
\end{proof} \begin{proof}[Proof of Claim \ref{claim: effective acc volume maynotbelc}] Let $A_i$ be a very ample divisor on $X_i$, such that $$\delta_i A_i-(\overline{B^i}-B^i)$$ is effective for some positive real number $\delta_i$, $\lim_{i\to +\infty}\delta_i=0$, and $(-(K_{X_i}+B^i))^{d-1}\cdot A_i\leq r$, where $r$ is a positive real number depending only on $d,\epsilon$ and ${\Gamma}$. There exists a positive real number $b$, such that $$b<\frac{(-(K_{X_i}+B^i))^{d}}{d(-(K_{X_i}+B^i))^{d-1}\cdot A_i}=\frac{\operatorname{vol}(-(K_{X_i}+B^i))}{d(-(K_{X_i}+B^i))^{d-1}\cdot A_i}.$$ By Lemma \ref{lem: volineq}, we have \begin{align*} &\operatorname{vol}(-(K_{X_i}+B^i)-bA_i)\\ \ge& \operatorname{vol}(-(K_{X_i}+B^i))-bd\operatorname{vol}(-(K_{X_i}+B^i)|_{A_i})\\ >&\operatorname{vol}(-(K_{X_i}+B^i))-\frac{\operatorname{vol}(-(K_{X_i}+B^i))}{(-(K_{X_i}+B^i))^{d-1}\cdot A_i}\cdot ((-(K_{X_i}+B^i))^{d-1}\cdot A_i)\\ =&0, \end{align*} which implies that $$-(K_{X_i}+B^i)-bA_i$$ is effective. Let $\epsilon_i:=\frac{\delta_i}{b}$; then $$\epsilon_i(-(K_{X_i}+B^i))-(\overline{B^i}-B^i)=(\epsilon_i(-(K_{X_i}+B^i))-b\epsilon_iA_i)+(\delta_iA_i-(\overline{B^i}-B^i))$$ is effective, and the claim is proved. \end{proof} \section{Proofs of Theorem \ref{thm: eff bir fin coeff gpair} and Theorem \ref{thm: eff bir bound volume more section gpair}} \begin{lem}\label{lem: finite rational coefficient eff bir} Let $d,n$ be two positive integers and $\epsilon$ a positive real number. Then there exists a positive integer $m$ divisible by $n$ depending only on $d,n$ and $\epsilon$ satisfying the following.
Assume that \begin{enumerate} \item $(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big, \item $B$ is a pseudo-effective $\mathbb{Q}$-divisor on $X$ such that $K_X+B$ is $\mathbb{Q}$-Cartier, \item $nB$ is a Weil divisor, and \item $-(K_X+B)$ is big, \end{enumerate} then $|-m(K_X+B)|$ defines a birational map. \end{lem} \begin{proof} Possibly replacing $X$ with a small $\mathbb{Q}$-factorialization, we may assume that $-(K_X+B)$ is $\mathbb{Q}$-Cartier. Possibly replacing $-(K_X+B)$ with its minimal model, we may assume that $-(K_X+B)$ is big and nef. By Theorem \ref{thm: BAB}, $X$ is bounded. Thus there exist a positive integer $r$ depending only on $d$ and $\epsilon$, and a very ample Cartier divisor $H$ on $X$, such that $H^d\leq r$ and $-K_X\cdot H^{d-1}\leq r$. Since $B$ is pseudo-effective, $-n(K_X+B)\cdot H^{d-1}\leq -nK_X\cdot H^{d-1}\leq nr$. By \cite[Lemma 2.25]{Bir19}, there exists a positive integer $m_1$ depending only on $d,n$ and $\epsilon$ such that $m_1(K_X+B)$ is Cartier. By \cite[Theorem 1.1]{Kol93}, $m:=2m_1(d+2)!(d+1)$ satisfies our requirements. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: eff bir bound volume more section gpair}(2.b)] By Lemma \ref{lem: psd ft change to eff}, possibly replacing $X$ with a small $\mathbb{Q}$-factorialization and each $B_i$ with its pullback, we may assume that $X$ is $\mathbb{Q}$-factorial. Moreover, if we assume in addition that each $B_i\geq 0$, then all the coefficients of $B$ are $\geq\delta$. Possibly replacing $B_1,\dots,B_s$ with the irreducible components of $B$, we may assume that the $B_i$ are distinct prime divisors. By Lemma \ref{lem: psd ft change to eff}, there exist a positive integer $n$ depending only on $d$ and $\epsilon$, and $\mathbb{Q}$-divisors $B_i'\geq 0$ for every $i\in\{1,2,\dots,s\}$, such that $nB_i'\sim nB_i$ for each $i$.
Possibly reordering the indices, we may let $\bar B:=\sum_{i=1}^{s_1}b_iB_i'+\sum_{i=s_1+1}^sb_iB_i'$ for some $s_1\in\{0,1,\dots,s\}$, such that $B'_i=0$ if and only if $i\geq s_1+1$. Since all the non-zero coefficients of $\bar B$ are $\geq\frac{\delta}{n}$, by Corollary \ref{cor: bab log bdd}, $(X,\bar B)$ is log bounded. Thus there exists a positive integer $u$ depending only on $d,\epsilon$ and $\delta$, such that $s_1\leq u$. Moreover, since $-(K_X+\bar B)$ is big, there exists a positive integer $N$ depending only on $d,\epsilon$ and $\delta$, such that for any $i\in\{1,2,\dots,s\}$ and any irreducible component $D$ of $\bar B$, $\operatorname{mult}_DB_i'\leq N$. By Theorem \ref{thm: approximation big bounded}, there exists a positive integer $n_0$ depending only on $d,\epsilon,\delta$ and $v$, such that $\operatorname{vol}(-(K_X+B'))\geq\frac{v}{2}$ for any $\mathbb{R}$-divisor $B'$ on $X$ such that $||\bar B-B'||\leq\frac{1}{n_0}$ and $\operatorname{Supp} B'\subset\operatorname{Supp}\bar B$. Let $n_1:=Nun_0$.
Then for any real numbers $b'_1,\dots,b'_s$ such that $\max\{|b_i-b'_i|\}\leq\frac{1}{n_1}$ and any irreducible component $D$ of $\bar B$, \begin{align*} |\operatorname{mult}_D(\bar B-\sum_{i=1}^sb_i'B_i')|&=|\operatorname{mult}_D(\sum_{i=1}^s(b_i-b_i')B_i')|\leq\frac{1}{n_1}\sum_{i=1}^s|\operatorname{mult}_DB_i'|\\ &=\frac{1}{n_1}\sum_{i=1}^{s_1}|\operatorname{mult}_DB_i'|\leq\frac{Nu}{n_1}=\frac{1}{n_0}, \end{align*} which implies that $$||\bar B-\sum_{i=1}^sb_i'B_i'||\leq\frac{1}{n_0},$$ hence $$\operatorname{vol}(-(K_X+\sum_{i=1}^sb_i'B_i))=\operatorname{vol}(-(K_X+\sum_{i=1}^sb_i'B'_i))\geq\frac{v}{2}.$$ In particular, $$\operatorname{vol}(-(K_X+\sum_{i=1}^s\frac{\lceil n_1b_i\rceil}{n_1}B_i))\geq\frac{v}{2}.$$ By Lemma \ref{lem: finite rational coefficient eff bir}, there exists a positive integer $m_1$ depending only on $d,\epsilon,\delta$ and $v$, such that $$|-m_1(K_X+\sum_{i=1}^s\frac{\lceil n_1b_i\rceil}{n_1}B_i)|$$ defines a birational map and $n_1\mid m_1$. Since $-\lceil x\rceil=\lfloor -x\rfloor$, this is equivalent to saying that $$|\frac{m_1}{n_1}\cdot(-n_1)K_X+\frac{m_1}{n_1}\sum_{i=1}^s(\lfloor -n_1b_i\rfloor B_i)|$$ defines a birational map. The theorem follows from Theorem \ref{thm: birational one m to inf m}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: eff bir bound volume more section gpair}(1) and Theorem \ref{thm: eff bir bound volume more section gpair}(2.a)] Possibly replacing $X$ with a small $\mathbb{Q}$-factorialization, we may assume that $X$ is $\mathbb{Q}$-factorial. Since $-(K_X+B)$ is big, $-K_X$ is also big and $\operatorname{vol}(-K_X)\geq v$. By Theorem \ref{thm: eff bir bound volume more section gpair}(2.b), there exists a positive integer $m_1$ depending only on $d$ and $\epsilon$, such that $|-m_1K_X|$ defines a birational map. By Lemma \ref{lem: potentially birational}(2), $-(2d+1)m_1K_X$ is potentially birational.
By Lemma \ref{lem: psd ft change to eff}, there exist a positive integer $n$ depending only on $d$ and $\epsilon$, and a $\mathbb{Q}$-divisor $P\geq 0$ on $X$, such that $-nK_X\sim nP$. By Corollary \ref{cor: bab log bdd}, $(X,P)$ is log bounded. By Theorem \ref{thm: approximation big bounded}, there exists a positive real number $\epsilon'$ depending only on $d,\epsilon$ and $v$, such that for any positive real number $\delta\leq\epsilon'$, $\operatorname{vol}(-(K_X+B+\delta P))\geq\frac{v}{2}$. Let $m_2:=(2d+1)m_1$. We show that $m_0:=\lceil\frac{m_2+1}{\epsilon'}\rceil$ satisfies our requirements. For every $m\geq m_0$, we have \begin{align*} &((-mK_X-\lfloor mB\rfloor)-K_X)-(-(2d+1)m_1K_X)\\ =&-m(\frac{m-m_2+1}{m}K_X+B)+\{mB\}\\ \sim_{\mathbb R}&-m(K_X+B+\frac{m_2-1}{m}P)+\{mB\}, \end{align*} and under the assumption of Theorem \ref{thm: eff bir bound volume more section gpair}(2.a), \begin{align*} &((-mK_X-\sum_{i=1}^s\lfloor mb_i\rfloor B_i)-K_X)-(-(2d+1)m_1K_X)\\ =&-m(\frac{m-m_2+1}{m}K_X+B)+\sum_{i=1}^s\{mb_i\}B_i\\ \sim_{\mathbb R}&-m(K_X+B+\frac{m_2-1}{m}P)+\sum_{i=1}^s\{mb_i\}B_i. \end{align*} Since $m\geq\lceil\frac{m_2+1}{\epsilon'}\rceil$, we have $\frac{m_2-1}{m}<\frac{m_2+1}{m}\leq\epsilon'$, so $-m(K_X+B+\frac{m_2-1}{m}P)$ is big, which implies that $$(-mK_X-\lfloor mB\rfloor)-K_X-(-(2d+1)m_1K_X)$$ is big, and under the assumption of Theorem \ref{thm: eff bir bound volume more section gpair}(2.a), $$(-mK_X-\sum_{i=1}^s\lfloor mb_i\rfloor B_i-K_X)-(-(2d+1)m_1K_X)$$ is big. By Lemma \ref{lem: pot bir plus eff is pot bir}, $$(-mK_X-\lfloor mB\rfloor)-K_X$$ is potentially birational, and under the assumption of Theorem \ref{thm: eff bir bound volume more section gpair}(2.a), $$(-mK_X-\sum_{i=1}^s\lfloor mb_i\rfloor B_i)-K_X$$ is potentially birational. Theorem \ref{thm: eff bir bound volume more section gpair}(1) and Theorem \ref{thm: eff bir bound volume more section gpair}(2.a) follow from Lemma \ref{lem: potentially birational}(1).
\end{proof} \begin{proof}[Proof of Theorem \ref{thm: eff bir fin coeff gpair}] For any $X$ and $B$ as in the assumption, by Theorem \ref{thm: bddvolacc maynotbelc}, $\operatorname{vol}(-(K_X+B))$ is bounded from below by a positive real number $v$ depending only on $d,\epsilon$ and ${\Gamma}_0$. The theorem follows from Theorem \ref{thm: eff bir bound volume more section gpair}. \end{proof} \section{Log general type pairs and effective Iitaka fibration} In this section, we gather several state-of-the-art results on effective birationality and effective Iitaka fibrations, and slightly generalize them. \begin{comment} \begin{thm}[{{\cite[Theorem 1.1]{HM06}},\cite[Theorem 1.3(3)]{HMX14}}]\label{thm: HMX14 eff bir} Let $d$ be a positive integer and ${\Gamma}\subset [0,1]$ a DCC set. Then there exists a positive integer $m$ depending only on $d$ and ${\Gamma}$ satisfying the following. Assume that \begin{enumerate} \item $(X,B)$ is an lc pair of dimension $d$, \item $B\in{\Gamma}$, and \item $K_X+B$ is big, \end{enumerate} then $|m(K_X+B)|$ defines a birational map. \end{thm} \end{comment} \begin{thm}[{{{\cite[Theorem 1.1]{HM06}},\cite[Theorem 1.3(3)]{HMX14}},\cite[Theorem 1.3]{BZ16}}]\label{thm: BZ16 eff bir} Let $d,n$ be two positive integers and ${\Gamma}\subset [0,1]$ a DCC set. Then there exists a positive integer $m_0$ depending only on $d,n$ and ${\Gamma}$ satisfying the following. Assume that \begin{enumerate} \item $(X,B)$ is an lc pair of dimension $d$, \item $B\in{\Gamma}$, \item $M$ is a nef $\mathbb{Q}$-divisor on $X$ such that $nM$ is Cartier, and \item $K_X+B+M$ is big, \end{enumerate} then $|m(K_X+B+M)|$ defines a birational map for any positive integer $m$ divisible by $m_0$. \end{thm} By using the methods of this paper, we can slightly improve Theorem \ref{thm: BZ16 eff bir}: \begin{thm}\label{thm: IMPROVED all big theorems on effective birationality} Let $d,n$ be two positive integers and ${\Gamma}\subset [0,1]$ a DCC set.
Then there exists a positive integer $m_0$ depending only on $d,n$ and ${\Gamma}$ satisfying the following. Assume that \begin{enumerate} \item $(X,B)$ is an lc pair of dimension $d$, \item $B\in{\Gamma}$, \item $M$ is a nef $\mathbb{Q}$-divisor on $X$ such that $nM$ is Cartier, and \item $K_X+B+M$ is big, \end{enumerate} then $|m(K_X+B+M)|$ defines a birational map for any positive integer $m\geq m_0$ such that $mM$ is a Weil divisor. In particular, if $K_X+B$ is big, then $|m(K_X+B)|$ defines a birational map for any positive integer $m\geq m_0$. \end{thm} \begin{comment} \begin{enumerate} \item if $K_X+B$ is big, then $|m(K_X+B)|$ defines a birational map for any positive integer $m\geq m_1$, and \item if $K_X+B+M$ is big, then $|m(K_X+B+M)|$ defines a birational map for any positive integer $m\geq m_2$ such that $mM$ is a Weil divisor. \end{enumerate} First we prove Theorem \ref{thm: IMPROVED all big theorems on effective birationality}(2). For any $(X,B+M)$ as in the assumption, Theorem \ref{thm: IMPROVED all big theorems on effective birationality}(1) follows from Theorem \ref{thm: IMPROVED all big theorems on effective birationality}(2) by taking $n=1$ and $M=0$. \end{comment} \begin{proof}[Proof of Theorem \ref{thm: IMPROVED all big theorems on effective birationality}] Possibly replacing ${\Gamma}$ with ${\Gamma}\cup\{1\}$, $(X,B)$ with a dlt modification, and $M$ with its pullback, we may assume that $X$ is $\mathbb{Q}$-factorial. Assume that $B=\sum_{i=1}^sb_iB_i$ where $B_i$ are the irreducible components of $B$. Let $n_0$ be the least positive integer such that $M_0:=n_0M$ is a Weil divisor. Since $nM$ is Cartier, $n_0\mid n$. By Theorem \ref{thm: BZ16 eff bir}, there exists a positive integer $m_0'$ depending only on $d,n$ and ${\Gamma}$ such that $|m_0'n(K_X+B+M)|$ defines a birational map. 
This is equivalent to saying that $$|m_2m_1K_X+\sum_{i=1}^s\lfloor m_2m_1b_i\rfloor B_i+\lfloor\frac{m_2m_1}{n_0}\rfloor M_0|$$ defines a birational map, where $m_2:=1$ and $m_1:=m_0'n$. By \cite[Theorem 8.1]{BZ16}, there exists a positive real number $\epsilon$ depending only on $d,n$ and ${\Gamma}$, such that for any $\mathbb{R}$-divisor $B'$ on $X$ and real number $\delta$, if $\operatorname{Supp} B'\subset\operatorname{Supp} B$, $||B-B'||\leq\epsilon$, and $|\frac{1}{n_0}-\delta|<\epsilon$, then $K_X+B'+\delta M_0$ is big. By Theorem \ref{thm: birational one m to inf m}, there exists a positive integer $m_0$ depending only on $d,n$ and ${\Gamma}$, such that $$|mK_X+\sum_{i=1}^s\lfloor mb_i\rfloor B_i+\lfloor\frac{m}{n_0}\rfloor M_0|$$ defines a birational map for any positive integer $m\geq m_0$. In particular, when $mM$ is a Weil divisor, we have $n_0\mid m$ (the set of integers $m$ with $mM$ a Weil divisor is a subgroup of $\mathbb Z$ generated by $n_0$), so $\lfloor\frac{m}{n_0}\rfloor M_0=\frac{m}{n_0}M_0=mM$, hence $|m(K_X+B+M)|$ defines a birational map. When $K_X+B$ is big and $M=0$, we have $M_0=0$, hence $|m(K_X+B)|$ defines a birational map. \end{proof} Theorem \ref{thm: IMPROVED all big theorems on effective birationality} immediately implies the following theorem on effective Iitaka fibrations, which slightly improves \cite[Theorem 1.2]{BZ16}. \begin{thm}\label{thm: effective iitaka sufficiently large} Let $d,b$ and $\beta$ be three positive integers. Then there exists a positive integer $m_0=m_0(d,b,\beta)$ depending only on $d,b$ and $\beta$ satisfying the following.
Assume that \begin{enumerate} \item $W$ is a smooth projective variety of dimension $d$ such that $\kappa(W)\geq 0$, \item $V\rightarrow W$ is a resolution of $W$, \item $f: V\rightarrow X$ is an Iitaka fibration of $K_W$, and \item $$b:=\min\{u\in\mathbb N^+\mid |uK_F|\not=\emptyset\},$$ where $F$ is a very general fiber of $V\rightarrow X$, \item $\tilde F$ is the smooth model of the $\mathbb Z/(b)$-cover of $F$ branched over the unique divisor in $|bK_F|$, \item $\beta:=\dim H^{\dim\tilde F}(\tilde F,\mathbb C)$, and \item $N:=\operatorname{lcm}\{n\in\mathbb N^+\mid \varphi(n)\leq\beta\}$, where $\varphi$ is the Euler function. \end{enumerate} Then $|mK_W|$ defines an Iitaka fibration for any integer $m\geq m_0$ such that $Nb\mid m$. \end{thm} \begin{proof}[Proof of Theorem \ref{thm: effective iitaka sufficiently large}] We follow the same lines as the proof of \cite[Theorem 1.2]{BZ16}. Possibly replacing $W$ with $V$, we may assume that the Iitaka fibration is $f: W\rightarrow X$. We may assume that $\kappa(W)\geq 1$, otherwise there is nothing to prove. We let $${\Gamma}={\Gamma}(b,N):=\{\frac{bNu-v}{bNu}\mid u,v\in\mathbb N^+,v\leq bN\},$$ which is a DCC set in $[0,1)$. By \cite[Theorem 1.2]{VZ09} and \cite{FO00}, possibly replacing $W$ and $X$ with sufficiently high resolutions, we may assume that $X$ is smooth, and there exist a boundary $B$ on $X$, and a nef $\mathbb{Q}$-divisor $M$ on $X$, such that \begin{itemize} \item $B$ is a simple normal crossing $\mathbb{Q}$-divisor whose coefficients are contained in ${\Gamma}$, in particular, $(X,B)$ is lc, \item $NbM$ is nef Cartier, \item $K_X+B+M$ is big, and \item for any positive integer $m$ divisible by $b$, $|mK_W|$ defines an Iitaka fibration if and only if $|m(K_X+B+M)|$ defines a birational map. \end{itemize} Theorem \ref{thm: effective iitaka sufficiently large} immediately follows from Theorem \ref{thm: IMPROVED all big theorems on effective birationality}.
\end{proof} \section{Examples}\label{sec: example for eff bir} In this section we give examples which show that the assumptions of our main theorems are necessary. We state the following lemma, which provides a very convenient necessary condition for a divisor to define a birational map, and will be repeatedly used in this section. \begin{lem}[cf.\ {\cite[Lemma 2.2]{HM06}}]\label{lem: birational vol 1} Let $X$ be a normal projective variety and $D$ a $\mathbb{Q}$-Cartier Weil divisor on $X$ such that $|D|$ defines a birational map. Then $\operatorname{vol}(D)\geq 1$. \end{lem} \subsection{Necessity of integrability of the nef part} The next example shows that we need to study linear systems of the forms $|-mK_X-\sum_{i=1}^s\lceil mb_i\rceil B_i|$ instead of $|-m(K_X+B)|$ in Theorem \ref{thm: eff bir fin coeff gpair} and Theorem \ref{thm: eff bir bound volume more section gpair}(2) when $B\not\geq 0$, and also shows that the requirement ``$mM$ is a Weil divisor" in Theorem \ref{thm: IMPROVED all big theorems on effective birationality} is necessary. \begin{ex} Let $n$ be a positive integer, and $X$ a curve of genus $0$ (resp. genus $2$). For every positive integer $s$, let $p_1,\dots,p_{2s}$ be $2s$ general closed points on $X$, and let $B^s=M_s:=\frac{1}{n}\sum_{i=1}^s(p_{2i-1}-p_{2i})$. Notice that $\deg(-(K_X+B^s))=2$ (resp. $\deg(K_X+M_s)=2$), $\sum_{i=1}^s(p_{2i-1}-p_{2i})$ is a nef Cartier divisor, and $mM_s$ is a Weil divisor if and only if $n\mid m$. For any positive integer $m<\frac{s}{2}$ such that $n\nmid m$, $$\deg(\lfloor -m(K_X+B^s)\rfloor)=2m-s<0\ (\text{resp.}\ \deg(\lfloor m(K_X+M_s)\rfloor)=2m-s<0).$$ Therefore, if $m<\frac{s}{2}$, then $|\lfloor -m(K_X+B^s)\rfloor|$ (resp. $|\lfloor m(K_X+M_s)\rfloor|$) defines a birational map if and only if $n\mid m$.
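The degree count above can be spelled out (our own expansion): for $n\nmid m$, each pair $p_{2i-1},p_{2i}$ contributes $\lfloor-\frac{m}{n}\rfloor+\lfloor\frac{m}{n}\rfloor=-1$ to the round-down, so

```latex
\deg\bigl(\lfloor -m(K_X+B^s)\rfloor\bigr)
  =\deg(-mK_X)
   +\sum_{i=1}^{s}\Bigl(\Bigl\lfloor-\tfrac{m}{n}\Bigr\rfloor
   +\Bigl\lfloor\tfrac{m}{n}\Bigr\rfloor\Bigr)
  =2m+s\cdot(-1)=2m-s,
```

and the genus $2$ case is identical since there $\deg(mK_X)=2m$ as well.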
\end{ex} The next example shows that in Theorem \ref{thm: IMPROVED all big theorems on effective birationality}, the bound $m_0$ depends on the Cartier index of $M$ (i.e. on $n$) even when $\dim X=2$, $B=0$, and $M$ is integral. \begin{ex} For any positive integer $n$, let $X_n:=\mathbb P(1,1,n)$, $H_{1,n}$ the first toric invariant divisor (i.e.\ corresponding to the first $1$), and $M_n=-K_{X_n}+H_{1,n}$. Then $M_n$ is nef and integral, and $K_{X_n}+M_n=H_{1,n}$ is big. However, since $$\deg(m(K_{X_n}+M_n))=m\deg H_{1,n}=m,$$ $|m(K_{X_n}+M_n)|$ does not define a birational map when $m<n$. \end{ex} \subsection{Necessity of the lower bound of the volume} The following example shows that the assumption $\operatorname{vol}(-(K_X+B))\geq v$ in Theorem \ref{thm: eff bir bound volume more section gpair}(2.b) is necessary even when $\dim X=1$ and each $B_i\geq 0$: \begin{ex}\label{ex: p1 dcc vol no bound counter} Let $X:=\mathbb P^1$, $p_1,p_2,p_3,p_4$ four distinct closed points on $X$, and $B^n:=(\frac{1}{2}-\frac{1}{2n})(p_1+p_2+p_3+p_4)$ for every integer $n\geq 2$. Then all the coefficients of $B^n$ are $\geq\frac{1}{4}$, $(X,B^n)$ is $\frac{1}{2}$-lc log Fano for every $n$, and $\deg(-(K_X+B^n))=\frac{2}{n}$. For every positive integer $m<n$, $\deg(\lfloor-m(K_X+B^n)\rfloor)\leq 0$, so $|-m(K_X+B^n)|$ does not define a birational map. \end{ex} \subsection{Necessity of the lower bound of the coefficients} The following example shows that the assumption ``$b_i\geq\delta$" in Theorem \ref{thm: eff bir bound volume more section gpair}(2.b) is necessary even when $\dim X=1$ and each $B_i\geq 0$: \begin{ex}\label{ex: p1 with many points} Consider $X:=\mathbb P^1$ and $B^n:=\frac{1}{2n}\sum_{i=1}^{2n}p_i$ for every positive integer $n$, where $p_i$ are distinct closed points on $X$. Then $(X,B^n)$ is $\frac{1}{2}$-lc and $\deg(-(K_X+B^n))=1$ for every $n$. However, $|-m(K_X+B^n)|$ does not define a birational map for any $m\leq 2n-1$.
\end{ex} \subsection{Necessity of the singularity assumption} The following example shows that the assumption ``$(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big" is necessary in Theorem \ref{thm: eff bir fin coeff gpair} and Theorem \ref{thm: eff bir bound volume more section gpair}, even when $\dim X=2$ and $B=0$. We remark that when $\dim X=1$, $X=\mathbb P^1$ and the assumption is automatically satisfied by letting $\epsilon=\frac{1}{2}$ and $\Delta=\frac{1}{2}\sum_{i=1}^4p_i$, where $p_i$ are different closed points on $X$. \begin{ex}\label{ex: gen hyp p111n} Consider the general hypersurfaces $X_{n+1}\subset\mathbb P(1,1,1,n)$ of degree $n+1$ for every positive integer $n$. By \cite[Theorem 8.1]{IF00}, $X_{n+1}$ is quasismooth klt Fano, and $\operatorname{vol}(-K_{X_{n+1}})=\frac{4(n+1)}{n}>4$, but $X_{n+1}$ is not $\frac{2}{n}$-klt. Notice that the map defined by $|-mK_{X_{n+1}}|$ cannot be birational for any positive integer $m<\frac{n}{2}$. Indeed, for any $n\geq 3$ and positive integer $m<\frac{n}{2}$, the map given by $|-mK_{X_{n+1}}|$ must factor through $\mathbb P(1,1,1)\cong\mathbb P^2$. Since $\deg X_{n+1}=n+1>1+1+1$, the map defined by $|-mK_{X_{n+1}}|$ cannot be birational. \end{ex} The following well-known example shows that the assumption ``$(X,\Delta)$ is projective $\epsilon$-lc of dimension $d$ for some boundary $\Delta$ such that $K_X+\Delta\sim_{\mathbb R}0$ and $\Delta$ is big" is necessary in Theorem \ref{thm: bddvolacc maynotbelc} even when $\dim X=2$ and $B=0$. Notice that when $d=1$, $(X=\mathbb P^1,\Delta=\frac{1}{2}\sum_{i=1}^4p_i)$ is $\frac{1}{2}$-lc, where $p_i$ are different closed points on $X$. \begin{ex}[{\cite[22.5 Proposition]{KM99}, \cite[Example 2.1.1]{HMX14}}] Let $p,q,r$ be three coprime positive integers.
Then $\mathbb P(p,q,r)$ is a klt del Pezzo surface such that $\operatorname{vol}(-K_{\mathbb P(p,q,r)})=\frac{(p+q+r)^2}{pqr}$. However $$\{\frac{(p+q+r)^2}{pqr}\mid p,q,r\in\mathbb N^+,(p,q)=(q,r)=(p,r)=1\}$$ is dense in $\mathbb R^+$. \end{ex} \subsection{The linear system $|-\lfloor m(K_X+B)\rfloor|$} Even if we replace $|-m(K_X+B)|$ with the linear system $|-\lfloor m(K_X+B)\rfloor|$ which has more sections, the assumption ``$B\in{\Gamma}_0$" in Theorem \ref{thm: eff bir fin coeff gpair} and the assumption ``$\operatorname{vol}(-(K_X+B))\geq v$" in Theorem \ref{thm: eff bir bound volume more section gpair}(1) and Theorem \ref{thm: eff bir bound volume more section gpair}(2.a) are still necessary, even when $\dim X\geq 2$, the coefficients of $B$ belong to a DCC set of non-negative real numbers, and $(X,B)$ is $\epsilon$-lc for some $\epsilon>0$. We remark that when $\dim X=1$, $\deg(-\lfloor m(K_X+B)\rfloor)\geq 1$, so $|-\lfloor m(K_X+B)\rfloor|$ defines a birational map for any positive integer $m$. \begin{ex}\label{ex: gen hyp p11910} Let $$X=X_{18}\subset W:=\mathbb P(2,3,5,9)$$ be a general hypersurface of degree $18$. By \cite[Theorem 8.1]{IF00}, $X$ is quasismooth. By \cite[Lemma 3.1.4; Part 4, The Big Table, page 135]{CPS10}, $X$ is an exceptional del Pezzo surface and $\operatorname{vol}(-K_X)=\frac{1}{15}$. Let $H\subset W$ be the toric invariant divisor corresponding to the first coordinate, i.e.\ to the $2$ in $(2,3,5,9)$, and $B:=H|_X$. Since $K_W+X+\frac{1}{2}H\sim_{\mathbb{Q}}0$, $K_X+\frac{1}{2}B\sim_{\mathbb{Q}}0$. Since $X$ is exceptional, $(X,\frac{1}{2}B)$ is klt, and hence $\epsilon$-lc for some real number $\epsilon>0$. (Indeed, by \cite[Lemma 3.1.4]{CPS10}, $(X,0)$ is $\frac{3}{5}$-lc. Since $(X,B)$ is lc, we may take $\epsilon=\frac{3}{10}$.) Consider $(X,B_n:=(\frac{1}{2}-\frac{1}{2n})B)$. Then $(X,B_n)$ is $\epsilon$-lc log Fano for every positive integer $n$.
Moreover, for every positive integer $m<n$, $$\lfloor m(K_X+B_n)\rfloor=mK_X+\lfloor\frac{m-1}{2}\rfloor B\sim_{\mathbb{Q}} (m-2\lfloor\frac{m-1}{2}\rfloor)K_X,$$ where the first equality holds because $m(\frac{1}{2}-\frac{1}{2n})=\frac{m}{2}-\frac{m}{2n}$ with $0<\frac{m}{2n}<\frac{1}{2}$, so that $\lfloor m(\frac{1}{2}-\frac{1}{2n})\rfloor=\lfloor\frac{m-1}{2}\rfloor$ for either parity of $m$. Since $\operatorname{vol}(-K_X)=\frac{1}{15}$, $$\operatorname{vol}(-\lfloor m(K_X+B_n)\rfloor)=\operatorname{vol}(-(m-2\lfloor\frac{m-1}{2}\rfloor)K_X)\leq\operatorname{vol}(-2K_X)=\frac{4}{15}<1.$$ By Lemma \ref{lem: birational vol 1}, $|-\lfloor m(K_X+B_n)\rfloor|$ does not define a birational map for any positive integer $m<n$. \end{ex} \end{document}
\begin{document} \title{Discrete Wasserstein Barycenters:\\Optimal Transport for Discrete Data} \author{Ethan Anderes \and Steffen Borgwardt \and Jacob Miller} \institute{Ethan Anderes \at Department of Statistics, University of California Davis, California, U.S.A. \email{[email protected]} \and Steffen Borgwardt (corresponding author) \at Fakult\"at f\"ur Mathematik, Technische~Universit\"at M\"{u}nchen, Germany \email{[email protected]} \and Jacob Miller \at Department of Mathematics, University of California Davis, California, U.S.A. \email{[email protected]} } \maketitle \vspace*{-1cm} \begin{abstract} Wasserstein barycenters correspond to optimal solutions of transportation problems for several marginals, and as such have a wide range of applications ranging from economics to statistics and computer science. When the marginal probability measures are absolutely continuous (or vanish on small sets) the theory of Wasserstein barycenters is well-developed (see the seminal paper \cite{ac-11}). However, exact continuous computation of Wasserstein barycenters in this setting is tractable in only a small number of specialized cases. Moreover, in many applications data is given as a set of probability measures with finite support. In this paper, we develop theoretical results for Wasserstein barycenters in this discrete setting. Our results rely heavily on polyhedral theory which is possible due to the discrete structure of the marginals. Our results closely mirror those in the continuous case with a few exceptions. In this discrete setting we establish that Wasserstein barycenters must also be discrete measures and there is always a barycenter which is provably sparse. Moreover, for each Wasserstein barycenter there exists a non-mass-splitting optimal transport to each of the discrete marginals. Such non-mass-splitting transports do not generally exist between two discrete measures unless special mass balance conditions hold. 
This makes Wasserstein barycenters in this discrete setting special in this regard. We illustrate the results of our discrete barycenter theory with a proof-of-concept computation for a hypothetical transportation problem with multiple marginals: distributing a fixed set of goods when the demand can take on different distributional shapes characterized by the discrete marginal distributions. A Wasserstein barycenter, in this case, represents an optimal distribution of inventory facilities which minimize the squared distance/transportation cost totaled over all demands. \keywords{barycenter \and optimal transport \and multiple marginals \and polyhedral theory \and mathematical programming} \subclass{90B80 \and 90C05 \and 90C10 \and 90C46 \and 90C90} \end{abstract} \section {Introduction} Optimal transportation problems with multiple marginals are becoming important in applications ranging from economics and finance \cite{bhp-13,ce-10,cmn-10,ght-14} to condensed matter physics and image processing \cite{bpg-12,cfk-13,jzd-98,rpdb-12,ty-05}. The so-called Wasserstein barycenter corresponds to optimal solutions for these problems, and as such has seen a flurry of recent activity (see \cite{ac-11,bk-12,bll-11,coo-14,cd-14,mmh-11,mtbmmh-15,p-11b,p-13,p-11,p-14,tmmh-14}). Given probability measures $P_1,\ldots, P_N$ on $\Bbb R^d$, a Wasserstein barycenter is any probability measure $\bar P$ on $\Bbb R^d$ which satisfies \begin{equation} \label{two} \sum_{i=1}^N W_2(\bar{P},P_i)^2=\inf_{P\in\mathcal P^2(\Bbb R^d)}\sum_{i=1}^N W_2( P, P_i)^2 \end{equation} where $W_2$ denotes the quadratic Wasserstein distance and $\mathcal P^2(\Bbb R^d)$ denotes the set of all probability measures on $\Bbb R^d$ with finite second moments. See the excellent monographs \cite{v-03,v-09} for a review of the Wasserstein metric and optimal transportation problems. Much of the recent activity surrounding Wasserstein barycenters stems, in part, from the seminal paper \cite{ac-11}. 
In that paper, Agueh and Carlier establish existence, uniqueness and an optimal transport characterization of $\bar P$ when $P_1,\ldots, P_N$ have sufficient regularity (those which vanish on small sets or which have a density with respect to Lebesgue measure). The transportation characterization of $\bar P$, in particular, provides a theoretical connection with the solution of (\ref{two}) and the estimation of deformable templates used in medical imaging and computer vision (see \cite{jzd-98,ty-05} and references therein). Heuristically, any measure $\bar P$ is said to be a deformable template if there exists a set of deformations $\varphi_1,\ldots, \varphi_N$ which push-forward $\bar P$ to $P_1,\ldots, P_N$, respectively, and are all ``as close as possible" to the identity map. Using a quadratic norm on the distance of each map $\varphi_1(x),\ldots, \varphi_N(x)$ to $x$, a deformable template $\bar P$ then satisfies \begin{equation} \label{one} \bar P \in \text{arg}\inf_{P\in\mathcal P^2(\Bbb R^d)}\left[ \inf_{\shortstack{\scriptsize $\{(\varphi_1,\ldots,\varphi_N) $\phantom{ s.t. }\\\scriptsize s.t. $\varphi_i( P)=P_i\}$ }} \sum_{i=1}^N \int_{\Bbb R^d} |\varphi_i(x)-x|^2 d P(x) \right]. \end{equation} The results of Agueh and Carlier establish that (\ref{two}) and (\ref{one}) share the same solution set when $P_1, \ldots, P_N$ have densities with respect to Lebesgue measures (for example). \begin{figure} \caption{\small The above four images represent hypothetical monthly demands (as a percentage of total supply) for distributing a fixed set of goods to nine California cities (denoted by red `x' marks) in four different months (February, March, June and July). 
Percent demand within each month is plotted proportional to disk area and is computed from monthly average temperature and population within each city (see Section \ref{computations}).} \label{inputFigs} \end{figure} While absolutely continuous barycenters are mathematically interesting, in practice, data is often given as a set of {\em discrete probability measures} $P_1,\ldots, P_N$, i.e. those with finite support in $\mathbb{R}^d$. For example, in Figure \ref{inputFigs} the discrete measures denote different demand distributions over $9$ California cities for different months (this example is analyzed in detail in Section \ref{computations}). For the remainder of the paper we refer to a {\em discrete Wasserstein barycenter} as any probability measure $\bar{ P}$ which satisfies (\ref{two}) and where all the $P_1,\ldots, P_N$ have discrete support. In this paper we develop theoretical results for discrete Wasserstein barycenters. Our results closely mirror those in the continuous case with a few exceptions. In the discrete case, the uniqueness and absolute continuity of the barycenter are lost. More important, however, is the fact that $\bar P$ is provably discrete when the marginals are discrete (see Proposition \ref{existdiscrete}). This guarantees that finite-dimensional linear programming will yield all possible optimal $\bar P$, and this in turn is utilized in this paper to study the properties of these barycenters from the point of view of polyhedral theory. In doing so, we find remarkable differences and similarities between continuous and discrete barycenters. In particular, unlike the continuous case, there is always a discrete barycenter with provably sparse finite support; however, analogously to the continuous case, there still exist non-mass-splitting optimal transports from the discrete barycenter to each discrete marginal.
Such non-mass-splitting transports generally do not exist between two discrete measures unless special mass balance conditions hold. This makes discrete barycenters special in this regard. In Section \ref{results}, we introduce the necessary formal notation and state our main results. The corresponding proofs are found in Section 3. To illustrate our theoretical results we provide a computational example, discussed in Section \ref{computations} and Figures \ref{inputFigs}-\ref{transportFigs}, for a hypothetical transportation problem with multiple marginals: distributing a fixed set of goods when the demand can take on different distributional shapes characterized by $P_1, \ldots, P_N$. A Wasserstein barycenter, in this case, represents an optimal distribution of inventory facilities which minimizes the squared distance/transportation cost totaled over all demands $P_1, \ldots, P_N$. \begin{figure} \caption{\small The {\em leftmost image}} \label{barycenterFigs} \end{figure} \section{Results}\label{results} For the remainder of this paper $P_1,\ldots, P_N$ will denote discrete probability measures on $\mathbb{R}^d$ with finite second moments. Let $\mathcal{P}^2(\Bbb R^d)$ denote the space of all probability measures with finite second moments on $\mathbb{R}^d$. Recall that a Wasserstein barycenter $\bar P$ is an optimizer of the problem \begin{equation} \label{barycenter} \inf_{P\in\mathcal P^2(\Bbb R^d)}\sum_{i=1}^N W_2( P, P_i)^2. \end{equation} The first important observation is that all optimizers of (\ref{barycenter}) must be supported in the finite set $S\subset \mathbb{R}^d$ where \begin{equation} \label{centers} S=\left\{\frac{x_1+\ldots+x_N}{N}\big|\text{ } x_i\in\text{supp}(P_i)\right\} \end{equation} is the set of all possible centroids coming from a combination of support points, one from each measure $P_i$.
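As a minimal illustration (a toy example of ours, not taken from the paper): when each marginal $P_i=\delta_{x_i}$ is a single Dirac mass, the centroid set $S$ is the single point $\bar x=\frac{x_1+\ldots+x_N}{N}$, and since each coupling with $\delta_{x_i}$ is forced,

```latex
\sum_{i=1}^N W_2(P,\delta_{x_i})^2
  =\int_{\Bbb R^d}\sum_{i=1}^N |y-x_i|^2\,dP(y),
```

which is minimized exactly by $P=\delta_{\bar x}$, because the integrand is a strictly convex function of $y$ with minimum at $\bar x$. So the unique barycenter is $\delta_{\bar x}$, supported in $S$.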
In particular, letting $\mathcal{P}_{\hspace*{-0.05cm}\mathcal{S}}^2(\Bbb R^d)=\{P\in\mathcal{P}^2(\Bbb R^d)|\text{ } \text{supp}(P)\subseteq S\}$, the infinite-dimensional problem (\ref{barycenter}) can be solved by replacing the requirement $P\in\mathcal P^2(\Bbb R^d)$ with $P\in\mathcal{P}_{\hspace*{-0.05cm}\mathcal{S}}^2(\Bbb R^d)$ to yield a finite-dimensional minimization problem. This result follows from Proposition \ref{existdiscrete} below. \begin{minipage}{\linewidth} \begin{proposition}\label{existdiscrete} Suppose $P_1,\ldots,P_N$ are discrete probability measures on $\Bbb R^d$. Let $\Pi({P_1,\ldots,P_N})$ denote the set of all coupled random vectors $(X_1,\ldots, X_N)$ with marginals $X_i\sim P_i$ and let $\overline X$ denote the coordinate average $\frac{X_1+\ldots+X_N}{N}$. Let $S$ be defined as in (\ref{centers}). \begin{enumerate}[i)] \item\label{prop_i} There exists $(X^o_1,\ldots,X_N^o)\in \Pi({P_1,\ldots,P_N})$ such that \begin{equation} \label{OptMean} E\bigl|\overline{ X^o}\bigr|^2=\sup_{\shortstack{\scriptsize $ (X_1,\ldots,X_N)$\\ \scriptsize $\quad\quad\in\Pi({P}_1,\ldots,{P}_N)$ }} E\bigl |\overline X \bigr|^2. \end{equation} \item\label{prop_ii} Any $(X^o_1,\ldots,X_N^o)\in \Pi({P_1,\ldots,P_N})$ which satisfies (\ref{OptMean}) has $\text{supp}(\mathcal L\overline{ X^o})\subseteq S$ and \begin{equation} \label{7777} \sum_{i=1}^N W_2\bigl(\mathcal L\overline{ X^o},P_i\bigr)^2=\inf_{P\in \mathcal P^2(\Bbb R^d)}\sum_{i=1}^N W_2(P,P_i)^2=\inf_{P\in \mathcal{P}_{\hspace*{-0.05cm}\mathcal{S}}^2(\Bbb R^d)}\sum_{i=1}^N W_2(P,P_i)^2, \end{equation} where $\mathcal L \overline{ X^o}$ denotes the distribution (or law) of $\overline{ X^o}$. \item\label{prop_iii} Any $\bar{P}\in\arg\min\limits_{P\in\mathcal{P}^2(\Bbb R^d)}\sum\limits_{i=1}^N W_2( P, P_i)^2$ satisfies $\text{supp}(\bar{P})\subseteq S$.
\end{enumerate} \end{proposition} \end{minipage} Notice that the existence of $(X_1^o,\ldots, X_N^o)$, in part {\em \ref{prop_i})} of the above proposition, follows immediately from the general results found in Kellerer \cite{k-84} and Rachev \cite{r-84}. Parts {\em \ref{prop_ii})} and {\em \ref{prop_iii})} are proved in Section \ref{Proof_section}. We also remark that during the preparation of this manuscript the authors became aware that Proposition \ref{existdiscrete} was independently noted in \cite{coo-14}, with a sketch of a proof. For completeness we will include a detailed proof of this statement which will also provide additional groundwork for Theorem \ref{nomasssplit} and Theorem \ref{sparsethm} below. \begin{figure} \caption{\small These two plots illustrate the special property of discrete Wasserstein barycenters proved in Theorem \ref{nomasssplit}.\label{transportFigs}} \end{figure} Proposition \ref{existdiscrete} guarantees that any barycenter $\bar{P}$ computed with discrete marginals has the form \begin{equation} \bar{P}=\sum_{{\bf x}\in S}z_{\bf x}\delta_{\bf x},\hspace{0.2in}z_{\bf x}\in\mathbb{R}_{\geq 0}. \end{equation} Here $\delta_{\bf x}$ is the Dirac-$\delta$-function at ${\bf x}\in \mathbb{R}^d$ and $z_{\bf x}$ corresponds to the mass (or probability) at ${\bf x}$. This implies that any coupling of $\bar P$ with $P_i$, which realizes the Wasserstein distance, is in fact characterized by a finite matrix. Treating the coordinates of these matrices and the values $z_{\bf x}$ as variables, the set of all solutions to (\ref{two}) is obtained through a finite-dimensional linear program (see (\ref{LPequ}) below). In \cite{coo-14} a similar linear program was used to find approximate barycenters for sets of absolutely continuous measures by finitely approximating the support of $\bar P$ (which is sub-optimal for the continuous problem). Our use of the finite linear program characterization of $\bar P$ is different from continuous approximation.
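Since each coupling of two discrete measures is a finite matrix, the distance $W_2^2$ itself is just a small transportation problem. On the real line no linear program is even needed: it is a standard fact that the monotone coupling (pairing mass in sorted order) is optimal. A minimal Python sketch of this building block, with names that are ours and not from the paper:

```python
def w2_squared_1d(P, Q):
    """Squared 2-Wasserstein distance between two discrete measures on R.

    P and Q are lists of (location, mass) pairs whose masses sum to 1.
    On the real line the monotone coupling is optimal, so the finite
    coupling matrix can be built greedily without solving an LP.
    """
    P, Q = sorted(P), sorted(Q)
    i = j = 0
    pi, qj = P[0][1], Q[0][1]  # unmatched mass at the current atoms
    cost = 0.0
    while i < len(P) and j < len(Q):
        m = min(pi, qj)                      # entry y_{jk} of the coupling
        cost += m * (P[i][0] - Q[j][0]) ** 2
        pi -= m
        qj -= m
        if pi <= 1e-15:                      # atom of P exhausted
            i += 1
            if i < len(P):
                pi = P[i][1]
        if qj <= 1e-15:                      # atom of Q exhausted
            j += 1
            if j < len(Q):
                qj = Q[j][1]
    return cost

# W_2(delta_0, (delta_0 + delta_2)/2)^2 = 0.5 * 0 + 0.5 * 4 = 2
assert abs(w2_squared_1d([(0.0, 1.0)], [(0.0, 0.5), (2.0, 0.5)]) - 2.0) < 1e-9
```

In higher dimensions, or when the common marginal is itself a variable as in (\ref{LPequ}) below, the coupling matrices must instead be treated as LP variables.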
We use a version of the linear program to analyze properties of discrete barycenters themselves. Indeed, since the set of all discrete barycenters forms a face of the underlying polyhedron, one can study their properties by means of polyhedral theory. Our first theorem illustrates a similarity between barycenters defined from absolutely continuous $P_1, \ldots, P_N$ and barycenters defined in the discrete setting. The results of \cite{ac-11} establish, in the absolutely continuous case, that there exist transports from the barycenter to each $P_i$ which are optimal in the sense of Wasserstein distance and are gradients of convex functions. Theorem \ref{nomasssplit} shows that such transports not only exist for discrete barycenters but also share similar properties. \begin{theorem}\label{nomasssplit} Suppose $P_1,\ldots,P_N$ are discrete probability measures. Let $\bar{P}$ denote a Wasserstein barycenter solution to (\ref{two}) and let $X$ be a random variable with distribution $\bar{P}$. Then there exist finite convex functions $\psi_i:\mathbb{R}^d \rightarrow \mathbb{R}$, for each $i=1,\ldots, N$, such that \begin{enumerate}[i)] \item \label{nomasssplit_i}$\displaystyle\nabla\psi_i(\bar{P}) = P_i, \text{ }\forall i.$ \item \label{nomasssplit_ii}$\displaystyle E|X - \nabla\psi_i(X)|^2 = W_2(\bar{P},P_i)^2,\text{ }\forall i.$ \item \label{nomasssplit_iii} $\displaystyle\frac{1}{N}\sum_{i=1}^N \nabla\psi_i(x_j) = x_j,\text{ }\forall x_j\in\text{supp}(\bar{P}).$ \item \label{nomasssplit_iv} $\displaystyle\frac{1}{N}\sum_{i=1}^N\psi_i(x_j) = \frac{|x_j|^2}{2},\text{ }\forall x_j\in\text{supp}(\bar{P}).$ \end{enumerate} \end{theorem} Intuitively, one would expect the support of a barycenter to be large to accommodate such a condition. This is particularly plausible since these transports must realize the Wasserstein distance between each measure and the barycenter.
However, it has been noted that the barycenters of discrete measures are often sparse in practice; see for example \cite{cd-14}. Our second main result resolves this tension and establishes that there is always a Wasserstein barycenter whose support is provably sparse. \begin{theorem}\label{sparsethm} Suppose $P_1,\ldots,P_N$ are discrete probability measures, and let $S_i=|\text{supp}(P_i)|$. Then there exists a barycenter $\bar{P}$ of these measures such that \begin{equation}\label{sparseequ} |\text{supp}(\bar{P})| \leq \sum_{i=1}^N S_i - N + 1. \end{equation} \end{theorem} \noindent We would like to stress how low this guaranteed upper bound on the size of the support of the barycenter actually is. For example, let every $P_i$ have a support of the same cardinality $T$. Then $|S| \leq T^N$ and if the support points are in general position one has $|S|=T^N$. In contrast, the support of the barycenter has cardinality at most $NT$. Additionally, the bound in Theorem \ref{sparsethm} is the best possible in the sense that, for any natural numbers $N$ and $W$, it is easy to come up with a set of $N$ measures for which $|\text{supp}(\bar{P})| = \sum_{i=1}^N S_i - N + 1=W$: Choose $P_1$ to have $W$ support points and uniformly distributed mass $\frac{1}{W}$ on each of these points. Choose the other $P_i$ to have a single support point of mass $1$. Then $|S|=W$ and the barycenter uses all of these possible support points with mass $\frac{1}{W}$. A particularly frequent setting in applications is that all the $P_i$ are supported on the same discrete grid, uniform in all directions, in $\mathbb{R}^d$. See for example \cite{cd-14,rpdb-12} for applications in computer vision with $d=2$. In this situation, the set $S$ of possible centroids is a finer uniform grid in $\mathbb{R}^d$, which allows us to strengthen the results in Proposition \ref{existdiscrete} and Theorem \ref{sparsethm}.
\begin{corollary}\label{sparsegrid} Let $P_1,\ldots,P_N$ be discrete probability measures supported on an $L_1{\times}\ldots{\times}L_d$-grid, uniform in all directions, in $\mathbb{R}^d$. Then there exists a barycenter $\bar{P}$ supported on a refined $(N(L_1-1)+1)\times \ldots \times (N(L_d-1)+1)$-grid, uniform in all directions, with $|\text{supp}(\bar P)|\leq N\bigl(\prod\limits_{i=1}^d L_i-1\bigr) +1$. In particular, the density of the support of the barycenter on this finer grid is less than \begin{equation*} \frac{1}{N^{d-1}}\prod\limits_{i=1}^d\frac{ L_i}{ (L_i-1)}. \end{equation*} \end{corollary} \section{Proofs}\label{Proof_section} In this section we prove the results outlined in Section \ref{results}. We begin with a proof of Proposition \ref{existdiscrete}. \subsection{\bf Existence of Discrete Barycenters} Recall that a discrete barycenter $\bar{P}$ is an optimizer of $(\ref{barycenter})$ when $P_1,\ldots, P_N$ are discrete probability measures. We will show that $\bar{P}$ must have the form of a coordinatewise average of optimally coupled random vectors with marginals given by the $P_i$. In particular, we will establish the existence of $N$ random vectors $X_1^o,\ldots, X_N^o$ with marginal distributions $X_i^o\sim P_i$ that are as highly correlated as possible so that the variability in the average $\overline{X^o}=\frac{X_1^o+\cdots+ X_N^o}{N}$ is maximized. Once these coupled random vectors $X_1^o,\ldots, X_N^o$ are obtained, the distribution of the average $\overline{X^o}$ (denoted $\mathcal L \overline{X^o}$) will serve as $\bar{P}$. \begin{proof}[{Proof of Proposition \ref{existdiscrete}}] As remarked earlier, part {\em \ref{prop_i})} of Proposition \ref{existdiscrete} follows from the general results of Kellerer \cite{k-84} and Rachev \cite{r-84}. Therefore there exists an optimally coupled random vector $(X_1^o,\ldots, X_N^o)\in \Pi({P_1,\ldots,P_N})$ which satisfies (\ref{OptMean}).
We will show that \begin{equation} \label{step1inprop1} \sum_{i=1}^N W_2\bigl(\mathcal L\overline{ X^o},P_i\bigr)^2=\inf_{P\in \mathcal P^2(\Bbb R^d)}\sum_{i=1}^N W_2(P,P_i)^2. \end{equation} Notice the definition of $S$ automatically implies $\text{supp}(\mathcal L\overline{ X^o})\subseteq S$ so that (\ref{step1inprop1}) will imply \begin{equation} \sum_{i=1}^N W_2\bigl(\mathcal L\overline{ X^o},P_i\bigr)^2=\inf_{P\in \mathcal{P}_{\hspace*{-0.05cm}\mathcal{S}}^2(\Bbb R^d)}\sum_{i=1}^N W_2(P,P_i)^2=\inf_{P\in \mathcal P^2(\Bbb R^d)}\sum_{i=1}^N W_2(P,P_i)^2 \end{equation} and complete the proof of part {\em \ref{prop_ii})}. So suppose $P\in \mathcal P^2(\Bbb R^d)$. Then for all $i=1,\ldots, N$ there exists an optimally coupled random vector $(Y^*_i,X_i^*)\in \Pi(P,P_i)$ such that $ W_2(P,P_i)^2 = E|Y^*_i-X_i^*|^2$. (This is a well known property of the Wasserstein distance $W_2$, see for example Proposition 2.1 in \cite{v-03}.) Since the random variables $Y^*_1,\ldots, Y^*_N$ all have distribution $P$, a generalized gluing lemma yields a random vector $(Y, X_1,\ldots, X_N)\in\Pi(P,P_1,\ldots,P_N)$ such that $(Y,X_i)$ has the same distribution as $(Y_i^*,X_i^*)$ for each $i$. This can be seen by first sampling a single $Y\sim P$ and then sampling $X_1,\ldots, X_N$ independently conditional on $Y$, where the conditional distribution $Pr(X_i =x|Y = y)$ is set to $Pr(X_i^* =x|Y_i^* = y)$ (the finite support of $P_1,\ldots, P_N$ is sufficient to guarantee existence of these conditional distributions). This yields \begin{equation} \label{rew} \frac{1}{N} \sum_{i=1}^N W_2(P,P_i)^2= \frac{1}{N}\sum_{i=1}^N E |Y_i^*-X^*_i|^2= \frac{1}{N}\sum_{i=1}^N E |Y-X_i|^2. \end{equation} Now note that $X_i\sim P_i$ and $X_i^o\sim P_i$.
Thus \begin{align} \label{hugeThing} \sum_{i=1}^N E\bigl |\overline{X^o}- X^o_i \bigr|^2&=\sum_{i=1}^NE\bigl|\overline{X^o}\bigr|^2-2E\sum_{i=1}^N\langle \overline{X^o},X^o_i\rangle +\sum_{i=1}^NE\bigl|X^o_i\bigl|^2=-NE\bigl|\overline{X^o}\bigr|^2 + \sum_{i=1}^NE\bigl|X^o_i\bigl|^2\nonumber\\ &= -NE\bigl|\overline{X^o}\bigr|^2 + \sum_{i=1}^NE\bigl|X_i\bigl|^2 = \inf_{\shortstack{\scriptsize $ (X_1,\ldots,X_N)$\\ \scriptsize $\quad\quad\in\Pi({P}_1,\ldots,{P}_N)$ }} -NE\bigl |\overline X \bigr|^2 + \sum_{i=1}^NE\bigl|X_i\bigl|^2\nonumber \\ &=\inf_{\shortstack{\scriptsize $ (X_1,\ldots,X_N)$\\ \scriptsize $\quad\quad\in\Pi({P}_1,\ldots,{P}_N)$ }} \sum_{i=1}^NE\bigl|\overline{X}\bigr|^2-2E\sum_{i=1}^N\langle \overline{X},X_i\rangle +\sum_{i=1}^NE\bigl|X_i\bigl|^2\nonumber\\ &=\inf_{\shortstack{\scriptsize $ (X_1,\ldots,X_N)$\\ \scriptsize $\quad\quad\in\Pi(P_1,\ldots,P_N)$ }} \sum_{i=1}^N E\bigl | \overline X- X_i \bigr|^2. \end{align} Also, note that \begin{equation}\label{smallThing}E|\overline{ X^o}-X^o_i|^2\geq \inf_{ (Y,X) \in\Pi(\mathcal L\overline{ X^o},P_i)} E|Y-X|^2=W_2(\mathcal L\overline{ X^o},P_i)^2.\end{equation} Combining $(\ref{hugeThing})$ and $(\ref{smallThing})$, we get \begin{equation}\label{optdist} \frac{1}{N}\sum_{i=1}^N E |\overline X-X_i|^2 \geq \frac{1}{N}\sum_{i=1}^N E|\overline{ X^o}-X^o_i|^2 \geq \frac{1}{N}\sum_{i=1}^N W_2(\mathcal L\overline{ X^o},P_i)^2. 
\end{equation} Further we have a minorant for the right-hand side of (\ref{rew}) as follows \begin{align} \frac{1}{N}\sum_{i=1}^N E |Y-X_i|^2 &=\frac{1}{N}\sum_{i=1}^N E |Y-\overline{X} + \overline{X}-X_i|^2\nonumber\\ &=\frac{1}{N}\sum_{i=1}^N E |Y-\overline{X}|^2 +\frac{2}{N}E\sum_{i=1}^N\langle Y-\overline{X},\overline{X} - X_i\rangle +\frac{1}{N} \sum_{i=1}^N E|\overline X-X_i|^2 \nonumber\\ &= E |Y-\overline{X}|^2 +\frac{2}{N}E\langle Y-\overline{X},\sum_{i=1}^N(\overline{X} - X_i)\rangle +\frac{1}{N}\sum_{i=1}^NE|\overline X-X_i|^2\nonumber\\ &=E |Y-\overline{X}|^2 + \frac{1}{N}\sum_{i=1}^N E|\overline X-X_i|^2 \geq \frac{1}{N}\sum_{i=1}^N E|\overline X-X_i|^2.\label{minor} \end{align} Putting (\ref{rew}), (\ref{optdist}), and (\ref{minor}) together we obtain \begin{equation} \frac{1}{N} \sum_{i=1}^N W_2(P,P_i)^2 = \frac{1}{N}\sum_{i=1}^N E |Y-X_i|^2 \geq \frac{1}{N}\sum_{i=1}^N E|\overline X-X_i|^2\geq \frac{1}{N}\sum_{i=1}^N W_2(\mathcal L\overline{ X^o},P_i)^2. \end{equation} This shows that $\mathcal L\overline{ X^o}$ is a minimizer of our problem and hence a barycenter, proving part {\em \ref{prop_ii})}. Finally, to prove part {\em \ref{prop_iii})}, note that if $P\in \mathcal P^2(\Bbb R^d)$ and $\text{supp}(P)\nsubseteq S$, then any coupling $(Y, X_1,\ldots, X_N)\in\Pi(P,P_1,\ldots,P_N)$ must satisfy $E|Y-\overline{X}|^2>0$ (since $\text{supp}(\overline{X})\subseteq S$ and $\text{supp}(P)\nsubseteq S$). This implies, by the last line of (\ref{minor}), that \begin{equation}\frac{1}{N}\sum_{i=1}^N E |Y-X_i|^2 > \frac{1}{N}\sum_{i=1}^N E|\overline X-X_i|^2,\end{equation} and hence that \begin{equation}\frac{1}{N} \sum_{i=1}^N W_2(P,P_i)^2 = \frac{1}{N}\sum_{i=1}^N E |Y-X_i|^2 >\frac{1}{N} \sum_{i=1}^N E|\overline X-X_i|^2\geq \frac{1}{N}\sum_{i=1}^N W_2(\mathcal L\overline{ X^o},P_i)^2,\end{equation} so that $P$ is not a barycenter.
Therefore for any barycenter $\bar{P}$, we must have $\text{supp}(\bar{P})\subseteq S$, which proves part {\em \ref{prop_iii})}. \qed \end{proof} \subsection{\bf Linear Programming and Optimal Transport} Let us now develop a linear programming model (LP) for the exact computation of a discrete barycenter. Suppose we have a set of discrete measures $P_i$, $i=1,\ldots,N$, and additionally another discrete measure $P$. Let $S_0=|\text{supp}(P)|$ and $S_i=|\text{supp}(P_i)|$ for each $i$ as before. Let $x_{j}$, $j=1,\ldots,S_0$ be the points in the support of $P$, each with mass $d_{j}$, and let $x_{ik}$, $k=1,\ldots,S_i$ be the points in the support of $P_i$, each with mass $d_{ik}$. For the sake of a simple notation in the following, when summing over these values, the indices take the full range unless stated otherwise. If $(X,Y_i)\in\Pi(P,P_i)$, then this coupling can be viewed as a finite matrix, since both probability measures are discrete. We define $y_{ijk}\geq 0$ to be the value of the entry corresponding to the margins $x_j$ and $x_{ik}$ in this finite matrix. Note in this coupling that $\sum_k y_{ijk} = d_j$ for all $j$ and that $\sum_j y_{ijk} = d_{ik}$ for all $k$ and further that \begin{equation}\label{distcost} E|X-Y_i|^2 = \sum_{j,k} |x_j - x_{ik}|^2\cdot y_{ijk} = \sum_{j,k} c_{ijk}\cdot y_{ijk}, \end{equation} where $c_{ijk}:= |x_j - x_{ik}|^2$ by definition. Given a non-negative vector ${\bf y}= (y_{ijk})\geq 0$ that satisfies $\sum_k y_{ijk} = d_j$ for all $i$ and $j$ and $\sum_j y_{ijk} = d_{ik}$ for all $i$ and $k$, we call ${\bf y}$ an \emph{$N$-star transport} between $P$ and the $P_i$. We define the \emph{cost} of this transport to be $c({\bf y}):=\sum_{i,j,k} c_{ijk}\cdot y_{ijk}$. Further there exist vectors $(X^*,Y_i^*)\in\Pi(P,P_i)$ for all $i$, and a corresponding $N$-star transport ${\bf y^*}$, such that \begin{equation} \sum_{i} W_2(P,P_i)^2 = \sum_i E|X^*-Y_i^*|^2 = c({\bf y^*}).
\end{equation} For any $(X,Y_i)\in\Pi(P,P_i)$ we also have $E|X^*-Y_i^*|^2\leq E|X-Y_i|^2$, and hence it is easily seen that ${\bf y^*}$ is an optimizer to the following linear program \begin{align}\label{nstar} \min_{\bf y} &\text{ }\text{ } c({\bf y}) \nonumber \\ \sum_k y_{ijk} = &\text{ }\text{ } d_j, \text{ }\text{ }\forall i=1,\ldots,N,\text{ }\forall j=1,\ldots,S_0,\nonumber\\ \sum_{j} y_{ijk} = &\text{ }\text{ } d_{ik}, \text{ }\text{ }\forall i=1,\ldots,N,\text{ }\forall k=1,\ldots,S_i,\\ y_{ijk} \geq & \text{ }\text{ }0, \text{ }\text{ }\text{ }\text{ }\text{ }\forall i=1,\ldots,N,\text{ }\forall j=1,\ldots,S_0,\text{ }\forall k=1,\ldots,S_i.\nonumber \end{align} Now suppose we wish to find a barycenter using a linear program. Then using Proposition \ref{existdiscrete} we know that this amounts to finding a solution to \begin{equation} \min_{P\in\mathcal{P}_{\hspace*{-0.05cm}\mathcal{S}}^2(\Bbb R^d)}\sum_{i=1}^N W_2( P, P_i)^2,\hspace{0.5in}P =\sum_{{\bf x}\in S}z_{\bf x}\delta_{\bf x},\hspace{0.2in}z_{\bf x}\in\mathbb{R}_{\geq 0}. \end{equation} Using this we can expand the possible support of $P$ in the previous LP to $S$, and let the mass at each $x_j\in S$ be represented by a variable $z_j\geq 0$. This is a probability distribution if and only if the constraint $\sum_j z_j =1$ is satisfied. 
Then every exact barycenter, up to measure-zero sets, must be represented by some assignment of these variables and hence is an optimizer of the LP \begin{align} \min_{\bf y,z} &\text{ }\text{ } c({\bf y}) \nonumber \\ \sum_k y_{ijk} = &\text{ }\text{ } z_j, \text{ }\text{ }\forall i=1,\ldots,N,\text{ }\forall j=1,\ldots,S_0,\nonumber\\ \sum_{j} y_{ijk} = &\text{ }\text{ } d_{ik}, \text{ }\text{ }\forall i=1,\ldots,N,\text{ }\forall k=1,\ldots,S_i,\nonumber\\ y_{ijk} \geq & \text{ }\text{ }0,\text{ }\text{ }\text{ } \text{ }\text{ }\forall i=1,\ldots,N,\text{ }\forall j=1,\ldots,S_0,\text{ }\forall k=1,\ldots,S_i,\nonumber\\ z_j \geq & \text{ }\text{ }0,\text{ }\text{ }\text{ } \text{ }\text{ }\forall j=1,\ldots,S_0.\label{LPequ} \end{align} Since each $P_i$ is a probability distribution, the constraint $\sum_j z_j =1$ follows from satisfaction of the other constraints. Any optimizer $({\bf y^*},{\bf z^*})$ to this LP is a barycenter $\bar{P}$ in that \begin{equation} \min_{P\in\mathcal{P}_{\hspace*{-0.05cm}\mathcal{S}}^2(\Bbb R^d)}\sum_{i=1}^N W_2( P, P_i)^2=\sum_{i=1}^N W_2(\bar{P}, P_i)^2=c({\bf y^*})\hspace{0.3in}\text{ and }\hspace{0.3in}\bar{P} =\sum_{j}z^*_j\delta_{x_j}. \end{equation} It is notable that the LP in $(\ref{LPequ})$ corresponds to $N$ transportation problems, linked together with variables $z_j$, representing a common marginal for each transportation problem. In fact it is not hard to show that in the case $N=2$ this LP can be replaced with a network flow LP on a directed graph. It is easily seen that this LP is both bounded (it is a minimization of a positive linear sum of non-negative variables) and feasible (assign an arbitrary $z_j=1$, set the remainder of them to $0$, and this reduces to solving $N$ transportation LPs).
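The LP (\ref{LPequ}) can be set up directly in code. The following Python sketch, restricted to measures on the real line, builds the constraint matrix exactly as above and hands it to an off-the-shelf solver; it assumes numpy and scipy are available, and all function and variable names are ours, not from the paper:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def barycenter_lp(supports, masses):
    """Solve the barycenter LP for discrete measures on the real line.

    supports[i] / masses[i]: support points and masses of P_i.
    Variables are the transports y_{ijk} followed by the masses z_j
    on the centroid set S, with constraints as in the LP above.
    """
    N = len(supports)
    S = sorted({sum(combo) / N for combo in product(*supports)})
    idx, n = {}, 0
    for i in range(N):
        for j in range(len(S)):
            for k in range(len(supports[i])):
                idx[(i, j, k)] = n
                n += 1
    z0 = n                       # z_j variables start here
    n += len(S)
    c = np.zeros(n)
    for (i, j, k), v in idx.items():
        c[v] = (S[j] - supports[i][k]) ** 2     # cost c_{ijk}
    A, b = [], []
    for i in range(N):                          # sum_k y_{ijk} = z_j
        for j in range(len(S)):
            row = np.zeros(n)
            for k in range(len(supports[i])):
                row[idx[(i, j, k)]] = 1.0
            row[z0 + j] = -1.0
            A.append(row)
            b.append(0.0)
    for i in range(N):                          # sum_j y_{ijk} = d_{ik}
        for k in range(len(supports[i])):
            row = np.zeros(n)
            for j in range(len(S)):
                row[idx[(i, j, k)]] = 1.0
            A.append(row)
            b.append(masses[i][k])
    res = linprog(c, A_eq=np.array(A), b_eq=np.array(b), bounds=(0, None))
    return S, res.x[z0:], res.fun
```

For instance, for $P_1$ uniform on $\{0,4\}$ and $P_2=\delta_2$ one obtains $S=\{1,3\}$ and the barycenter $\frac{1}{2}\delta_1+\frac{1}{2}\delta_3$ with total cost $2$.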
Thus it becomes useful to write down the dual LP, which also bears similarity to a dual transportation problem \begin{align} \max_{\bf \tau,\theta} &\text{ }\text{ } \sum_{i,k}d_{ik}\cdot \tau_{ik}\nonumber \\ \theta_{ij} + \tau_{ik} \leq&\text{ }\text{ } c_{ijk},\text{ }\text{ }\forall i=1,\ldots,N,\text{ }\forall j=1,\ldots,S_0,\text{ }\forall k=1,\ldots,S_i,\nonumber\\ \sum_{i} \theta_{ij} \geq &\text{ }\text{ } 0, \text{ }\text{ }\text{ }\text{ }\text{ }\forall j=1,\ldots,S_0,\label{dualLP} \end{align} where there is a variable $\tau_{ik}$ for each defining measure $i$ and each $x_{ik}\in\text{supp}(P_i)$ and a variable $\theta_{ij}$ for each defining measure $i$ and each $x_j\in S$. These LPs will not only be used for computations in Section \ref{computations}, but can also be used to develop the necessary theory for Theorem \ref{nomasssplit}. \begin{lemma}\label{nosplitlem} Let $P_1,\ldots,P_N$ be discrete probability measures with a barycenter $\bar{P}$ given by a solution $({\bf y^*},{\bf z^*})$ to (\ref{LPequ}). Then \begin{enumerate}[i)] \item\label{nosplitlem_i} For any $x_j\in\text{supp}(\bar{P})$ (i.e. $z_j^*>0$) combined with any choice of $x_{ik_i}\in\text{supp}(P_i)$ for $i=1,\ldots,N$ such that $y^*_{ijk_i}>0$ for each $i$, one then has $x_j=\frac{1}{N}\sum_i x_{ik_i}$. \item\label{nosplitlem_ii} For any $x_j\in\text{supp}(\bar{P})$ and $i=1,\ldots,N$, one has $\big|\;\{x_{ik}\in\text{supp}(P_i)|\text{ } y^*_{ijk}>0\}\;\big| = 1$. \end{enumerate} \end{lemma} \begin{proof} \emph{i)} Suppose the statement in \emph{i)} is false. Then there exists an $x_{j_0}\in\text{supp}(\bar{P})$ and there are points $x_{ik_i}\in\text{supp}(P_i)$ for $i=1,\ldots,N$ such that $y^*_{ij_0k_i}>0$ for each $i$ and $x_{j_0}\neq\frac{1}{N}\sum_i x_{ik_i}$. Let $\alpha = \min_i y^*_{ij_0k_i} >0$ and let $x_{j^*} = \frac{1}{N}\sum_i x_{ik_i}$.
Then define $({\bf \hat{y}},{\bf \hat{z}})$ such that $\hat{y}_{ij_0k_i} = y^*_{ij_0k_i} - \alpha$ for each $i$, $\hat{y}_{ij^*k_i} = y^*_{ij^*k_i} + \alpha$ for each $i$, $\hat{z}_{j_0} = z^*_{j_0} - \alpha$, $\hat{z}_{j^*} = z^*_{j^*} + \alpha$, and $\hat{z}_j = z^*_j$ and $\hat{y}_{ijk}=y^*_{ijk}$ for all other variables. It is easily checked that $({\bf \hat{y}},{\bf \hat{z}})$ is also a feasible solution to (\ref{LPequ}). Further \begin{equation} c({\bf \hat{y}}) = c({\bf y^*}) + \alpha\left( \sum_i c_{ij^*k_i} - \sum_i c_{ij_0k_i}\right)<c({\bf y^*}), \end{equation} where the strict inequality follows since $x_{j_0}\neq\frac{1}{N}\sum_i x_{ik_i}=x_{j^*}$ and therefore \begin{equation} \sum_i c_{ij_0k_i} = \sum_i |x_{j_0} - x_{ik_i}|^2 > \sum_i |x_{j^*} - x_{ik_i}|^2 = \sum_i c_{ij^*k_i}, \end{equation} which is a contradiction with $\bar{P}$ being a barycenter. \emph{ii)} If $x_j\in\text{supp}(\bar{P})$, then $z^*_j>0$ and therefore $\big|\;\{x_{ik}\in\text{supp}(P_i)|\text{ } y^*_{ijk}>0\}\;\big| \geq 1$ for all $i$ is an immediate consequence of the constraints in (\ref{LPequ}). Suppose the statement is false; then there is some $x_j\in\text{supp}(\bar{P})$ such that, without loss of generality, $\big|\{x_{1k}\in\text{supp}(P_1)|\text{ } y^*_{1jk}>0\}\big| \geq 2$. Then we can choose $x_{1k'}\neq x_{1k''}$ such that $y^*_{1jk'},y^*_{1jk''}>0$ and further can choose $x_{ik_i}$ for $i=2,\ldots,N$ such that $y^*_{ijk_i}>0$ for each $i$. Then this implies, by part \emph{i)}, that \begin{equation} \frac{1}{N}\bigl(x_{1k'} + \sum_{i=2}^Nx_{ik_i}\bigr) = x_j = \frac{1}{N}\bigl(x_{1k''} + \sum_{i=2}^Nx_{ik_i}\bigr), \end{equation} which in turn would immediately imply $x_{1k'} = x_{1k''}$; a contradiction with our choice of $x_{1k'}\neq x_{1k''}$. Hence $\big|\;\{x_{1k}\in\text{supp}(P_1)|\text{ } y^*_{1jk}>0\}\;\big|=1$. \qed \end{proof} Lemma \ref{nosplitlem} already implies that there exists a transport from any barycenter $\bar{P}$ to each $P_i$.
However, to prove Theorem \ref{nomasssplit} we need the concept of {\em strict complementary slackness}. Given a primal LP $\{\min {\bf c}^T{\bf x}|\text{ } {\bf A}{\bf x}={\bf b},\text{ } {\bf x}\geq{\bf 0}\}$ which is bounded and feasible and its dual LP $\{\max {\bf b}^T{\bf y}|\text{ } {\bf A}^T{\bf y}\leq{\bf c}\}$, complementary slackness states that the tuple $({\bf x^*},{\bf y^*})$ gives optimizers for both of these problems if and only if $x^*_i(c_i-{\bf a_i}^T{\bf y^*}) = 0$ for all $i$, where ${\bf a_i}$ is the $i$-th column of ${\bf A}$. This statement can be strengthened in the form of the strict complementary slackness condition \cite{z-94}: \begin{proposition}\label{strictcomp} Given a primal LP $\{\min {\bf c}^T{\bf x}|\text{ } {\bf A}{\bf x}={\bf b},\text{ } {\bf x}\geq{\bf 0}\}$ and the corresponding dual LP\\ ${\{\max {\bf b}^T{\bf y}|\text{ } {\bf A}^T{\bf y} \leq{\bf c}\}}$, both bounded and feasible, there exists a tuple of optimal solutions $({\bf x^*},{\bf y^*})$, to the primal and dual respectively, such that for all $i$ \begin{equation} x^*_i(c_i-{\bf a_i}^T{\bf y^*}) = 0, \hspace{1in} x^*_i + (c_i-{\bf a_i}^T{\bf y^*}) >0. \end{equation} \end{proposition} With these tools, we are now ready to prove Theorem \ref{nomasssplit}. \begin{proof}[{Proof of Theorem \ref{nomasssplit}}] Let $({\bf y^*},{\bf z^*},{\bf \tau^*},{\bf \theta^*})$ be a solution to (\ref{LPequ}) and (\ref{dualLP}), as guaranteed by Proposition \ref{strictcomp}. Let $\bar{P}$ be a barycenter corresponding to an optimal solution $(\hat{\bf y},\hat{\bf z})$ of (\ref{LPequ}). For each $x_j\in\text{supp}(\bar{P})$ let $x_{ik_j}\in\text{supp}(P_i)$ be the unique location such that $\hat{y}_{ijk_j}>0$ as guaranteed by Lemma \ref{nosplitlem} part {\em \ref{nosplitlem_ii})}. Now for each $i$ define \begin{equation} \psi_i(x) = \max_{x_{ik}\in\text{supp}(P_i)}\bigl(\langle x,x_{ik}\rangle - \frac{1}{2}|x_{ik}|^2 + \frac{1}{2}\tau^*_{ik}\bigr).
\end{equation} Using Lemma \ref{nosplitlem} part {\em \ref{nosplitlem_i})}, it is easy to see that for proving parts {\em \ref{nomasssplit_i})}-{\em \ref{nomasssplit_iii})} of Theorem \ref{nomasssplit} it suffices to show that for each $\psi_i$ we have that $\nabla\psi_i(x_j)=x_{ik_j}$ for each $x_j\in\text{supp}(\bar P)$. By definition, each $\psi_i$ is convex (as the maximum over a set of linear functions) and $\psi_i(x)$ is finite for all $x\in \mathbb{R}^d$. Further \begin{align} |x|^2 - 2\psi_i(x)& = |x|^2 - 2\max_{x_{ik}\in\text{supp}(P_i)}\bigl(\langle x,x_{ik}\rangle - \frac{1}{2}|x_{ik}|^2 + \frac{1}{2}\tau^*_{ik}\bigr)\nonumber\\ & = \min_{x_{ik}\in\text{supp}(P_i)} \bigl(|x|^2 -2\langle x,x_{ik}\rangle +|x_{ik}|^2 - \tau^*_{ik}\bigr)\nonumber\\ & = \min_{x_{ik}\in\text{supp}(P_i)} \bigl(|x-x_{ik}|^2 - \tau^*_{ik}\bigr), \end{align} and hence \begin{equation} |x_j|^2 - 2\psi_i(x_j)=\min_{x_{ik}\in\text{supp}(P_i)} \bigl(|x_j-x_{ik}|^2 - \tau^*_{ik}\bigr)=\min_{x_{ik}\in\text{supp}(P_i)} \bigl(c_{ijk} - \tau^*_{ik}\bigr). \end{equation} By complementary slackness, since $\hat{y}_{ijk_j}\neq 0$, we have $c_{ijk_j} -\tau^*_{ik_j}-\theta^*_{ij}=0$. Therefore by strict complementary slackness we get $y^*_{ijk_j}\neq 0$ and hence by Lemma \ref{nosplitlem} part {\em \ref{nosplitlem_ii})} we get $y^*_{ijk}=0$ for all $k\neq k_j$. This implies by strict complementary slackness that for all $k\neq k_j$ we obtain $c_{ijk} -\tau^*_{ik}-\theta^*_{ij}\neq 0$ and therefore, by feasibility, that $c_{ijk}-\tau^*_{ik}>\theta^*_{ij}$. Factoring in that $c_{ijk_j} -\tau^*_{ik_j} = \theta^*_{ij}$ by complementary slackness, we have that $|x_j|^2 - 2\psi_i(x_j) = \theta^*_{ij}$.
Further, since the functions in the minimization are continuous and the one corresponding to $k_j$ is the only one that achieves the minimum at $x_j$ (by the above argument), we obtain that for $x$ in some neighborhood of $x_j$ \begin{align} \;\; & |x|^2 - 2\psi_i(x) = |x-x_{ik_j}|^2 - \tau^*_{ik_j},\nonumber\\ \Rightarrow\;\; & \psi_i(x) =\langle x,x_{ik_j}\rangle - \frac{1}{2}|x_{ik_j}|^2 + \frac{1}{2}\tau^*_{ik_j},\nonumber\\ \Rightarrow\;\; & \nabla\psi_i(x) = x_{ik_j}, \end{align} so that $\nabla\psi_i(x_j)=x_{ik_j}$. Further, note that complementary slackness implies $\sum_i\theta^*_{ij} = 0$ for each $x_j\in\text{supp}(\bar{P})$ and hence \begin{equation} 0 = \sum_i\theta^*_{ij} = \sum_i \bigl(|x_j|^2 - 2\psi_i(x_j)\bigr) \end{equation} \begin{equation} \Rightarrow\frac{1}{N} \sum_i \psi_i(x_j) = \frac{|x_j|^2}{2}. \end{equation} This shows part {\em \ref{nomasssplit_iv})} of Theorem \ref{nomasssplit} and thus completes the proof. \qed \end{proof} \subsection{\bf Sparsity and Transportation Schemes} As before, let $P_1,\ldots,P_N$ be discrete probability measures, with point masses $d_{ik}$ for $x_{ik}\in \text{supp}(P_i)$ defined as in the previous subsection. Then for any set $\mathcal{S}\subseteq S\times \text{supp}(P_1)\times\ldots\times\text{supp}(P_N)$ we fix an arbitrary order on $\mathcal{S}$, i.e. $\mathcal{S}=\{s_1,s_2,\dots,s_m\}$ where each $s_h=(q_{h0},q_{h1},\dots,q_{hN})$, and define a \emph{location-fixed transportation scheme} as the set \begin{equation} \mathcal{T}(\mathcal{S}) := \{{\bf w}\in\mathbb{R}_{\geq 0}^{|\mathcal{S}|}| \sum_{\substack{h=1,\\ q_{hi}=x_{ik}}}^m w_h = d_{ik},\text{ }\forall i=1,\dots,N,\text{ }\forall k=1,\dots,S_i\}. \end{equation} Informally, the coefficients of ${\bf w}\in\mathcal{T}(\mathcal{S})$ correspond to an amount of transported mass from a given location in $S$ to combinations of support points in the $P_i$, where each of these support points receives the correct total amount.
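Collapsing the weights of such a scheme, by summing $w_h$ over all tuples that share the centroid $q_{h0}$ and the $i$-th support point $q_{hi}$, produces an $N$-star transport. A minimal Python sketch of this collapsing step (the encoding of points as numbers and all names are ours, not from the paper):

```python
from collections import defaultdict

def nstar_from_scheme(tuples, w):
    """Collapse a location-fixed transportation scheme into an N-star
    transport: sum the weights w_h of all tuples sharing the centroid
    q_h0 and the i-th support point q_hi.

    tuples[h] = (q_h0, q_h1, ..., q_hN); w[h] = mass on that tuple.
    """
    y = defaultdict(float)
    for (q0, *qs), wh in zip(tuples, w):
        for i, qi in enumerate(qs):
            y[(i, q0, qi)] += wh
    return dict(y)

# N = 2 on the real line: two tuples of mass 1/2 each; the second
# measure puts all of its mass on the single point 2.0.
scheme = [(1.0, 0.0, 2.0), (3.0, 4.0, 2.0)]
y = nstar_from_scheme(scheme, [0.5, 0.5])
```

Since each tuple contributes its full weight to exactly one entry per measure, the marginal sums and the transport cost are preserved by this collapsing.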
Given a ${\bf w}$, we define its corresponding discrete probability measure \begin{equation} P({\bf w},\mathcal{S}):=\sum_{h=1}^m w_h\delta_{q_{h0}}, \end{equation} and the cost of this pair $({\bf w},\mathcal{S})$ \begin{equation} c({\bf w},\mathcal{S}):=\sum_{h=1}^m c_hw_h,\text{ }\text{ }\text{ } c_h=\sum_{i=1}^N|q_{h0}-q_{hi}|^2. \end{equation} In the following, let $\text{supp}({\bf w})$ denote the set of strictly positive entries of {\bf w}. Informally, we now give a translation between $N$-star transports, the feasible region of $(\ref{nstar})$, and location-fixed transportation schemes. \begin{lemma}\label{scheme} Given $\mathcal{S}\subseteq S\times \text{supp}(P_1)\times\ldots\times\text{supp}(P_N)$ such that $\mathcal{T}(\mathcal{S})\neq\varnothing$: \begin{enumerate}[i)] \item\label{scheme_i} For each ${\bf w}\in\mathcal{T}(\mathcal{S})$, $P({\bf w},\mathcal{S})$ is a probability measure with $|\text{supp}(P({\bf w},\mathcal{S}))|\leq|\text{supp}({\bf w})|$. \item\label{scheme_ii} For each ${\bf w}\in\mathcal{T}(\mathcal{S})$, there exists an $N$-star transport ${\bf y}$ between $P({\bf w},\mathcal{S})$ and $P_1,\ldots,P_N$ such that $c({\bf w},\mathcal{S})=c({\bf y})$. \item\label{scheme_iii} For every discrete probability measure $P$ supported on $S$ and $N$-star transport ${\bf y}$ between $P$ and $P_1,\ldots,P_N$ there exists a pair $({\bf w},\mathcal{S'})$ such that: ${\bf w}\in\mathcal{T}(\mathcal{S'})$, $P=P({\bf w},\mathcal{S'})$, and $c({\bf w},\mathcal{S'})=c({\bf y})$. \end{enumerate} \end{lemma} \begin{proof} \emph{i)} $|\text{supp}(P({\bf w},\mathcal{S}))|\leq|\text{supp}({\bf w})|$ is clear by definition (note that strictness of this inequality can occur if there exist non-zero $w_h,w_{h'}$ for which $q_{h0}=q_{h'0}$). To see that $P({\bf w},\mathcal{S})$ is a probability measure it suffices to show that $\sum_{h=1}^m w_h = 1$. 
This holds since for any $i=1,\ldots,N$ we have \begin{equation} \sum_{h=1}^m w_h = \sum_{k=1}^{S_i}\sum_{\substack{h,\\ q_{hi}=x_{ik}}} w_h= \sum_{k=1}^{S_i} d_{ik} = 1, \end{equation} since the $P_i$ are probability measures. \emph{ii)} For each $i=1,\ldots,N$, $j=1,\ldots,|S|$, $k=1,\ldots,S_i$ define \begin{equation} y_{ijk}=\sum_{\substack{h=1\\q_{h0} = x_{j} \\q_{hi} = x_{ik}}}^m w_h. \end{equation} Clearly $y_{ijk}\geq 0$ and it is easily checked that $\sum_j y_{ijk} = d_{ik}$ for any $i$ and $k$ and that $\sum_k y_{ijk}$ is the mass at location $x_j\in S$ in the measure $P({\bf w},\mathcal{S})$. Hence ${\bf y}$ is an $N$-star transport between $P({\bf w},\mathcal{S})$ and $P_1,\ldots,P_N$. Further we have \begin{align} c({\bf y})&= \sum_{i,j,k} |x_{j}-x_{ik}|^2\cdot y_{ijk} = \sum_{i,j,k} |x_{j}-x_{ik}|^2\cdot \sum_{\substack{h=1\\q_{h0} = x_{j} \\q_{hi} = x_{ik}}}^m w_h=\sum_{i=1}^N\sum_{j,k}\sum_{\substack{h=1\\q_{h0} = x_{j} \\q_{hi} = x_{ik}}}^m |q_{h0}-q_{hi}|^2\cdot w_h\nonumber\\ &=\sum_{i=1}^N\sum_{h=1}^m|q_{h0}-q_{hi}|^2\cdot w_h=\sum_{h=1}^mc_hw_h=c({\bf w},\mathcal{S}). \end{align} \emph{iii)} We note first that all of our arguments up to now not only hold for $P_i$ and $P$ being probability measures, but also for any measures with a common total mass $0\leq r\leq 1$ shared by all $P_i$ and $P$. Using this fact we prove this part of the lemma for these types of measures by induction on $|\text{supp}({\bf y})|$.\\ For $|\text{supp}({\bf y})| = 0$, we clearly have that any $\mathcal{S}$ paired with ${\bf w} = {\bf 0}$ satisfies the given conditions. So suppose $|\text{supp}({\bf y})|>0$, then let $\mu = \min_{y_{ijk}>0} y_{ijk}$ and let $(i^*,j^*,k^*)$ be a triplet attaining this minimum, i.e. $y_{i^*j^*k^*} = \mu$. This implies that $d_{j^*}\geq \mu$ and so for each $i=1,\ldots,N$ there exists a $k_i$ such that $y_{ij^*k_i}\geq\mu$. In particular one can choose $k_{i^*} = k^*$ here.
We now define a vector ${\bf y'}$ by $y'_{ij^*k_i} = y_{ij^*k_i} - \mu$ and $y'_{ijk}=y_{ijk}$ otherwise. Then ${\bf y'}$ is an $N$-star transport from $P'$ to $P'_1,\ldots,P'_N$, where $P'$ is obtained from $P$ by decreasing the mass on $x_{j^*}$ by $\mu$ and each $P'_i$ is obtained from $P_i$ by decreasing the mass on $x_{ik_i}$ by $\mu$. Moreover, $|\text{supp}({\bf y'})|<|\text{supp}({\bf y})|$ since $y'_{i^*j^*k^*} = 0$. Therefore, by the induction hypothesis, there exists a pair $({\bf w},\mathcal{S'})$ such that ${\bf w}\in\mathcal{T}(\mathcal{S'})$, $P'=P({\bf w},\mathcal{S'})$, and $c({\bf w},\mathcal{S'})=c({\bf y'})$ for $P'_1,\ldots,P'_N$. Let now $|\mathcal{S'}|=m$, let $s_{m+1}=(x_{j^*},x_{1k_1},\ldots,x_{ik_i},\ldots,x_{Nk_N})$, and define $\mathcal{S}=\mathcal{S'}\cup\{s_{m+1}\}$. Then $({\bf w},\mu)\in\mathcal{T}(\mathcal{S})$ and $P=P(({\bf w},\mu),\mathcal{S})$ for $P_1,\ldots,P_N$. Further we have that \begin{align} c({\bf y})&= c({\bf y'}) + \sum_{i=1}^N c_{ij^*k_i}\mu = c({\bf y'}) + \sum_{i=1}^N |x_{j^*}-x_{ik_i}|^2\mu\nonumber\\ &=c({\bf w},\mathcal{S'}) + c_{m+1}\mu = c(({\bf w}, \mu),\mathcal{S}), \end{align} which completes the proof by induction. \qed \end{proof} We now show the existence of a transportation scheme ${\bf w^*}$ for which $|\text{supp}({\bf w^*})|$ is provably small. \begin{lemma}\label{sparsemin} Given a location-fixed transportation scheme $\mathcal{T}(\mathcal{S})\neq\varnothing$ for discrete probability measures $P_1,\ldots,P_N$, there exists ${\bf w^*}\in \arg\min_{{\bf w}\in\mathcal{T}(\mathcal{S})}c({\bf w},\mathcal{S})$ such that \begin{equation} |\text{supp}({\bf w^*})|\leq \sum_{i=1}^N S_i - N + 1.
\end{equation} \end{lemma} \begin{proof} We have that $\min_{{\bf w}\in\mathcal{T}(\mathcal{S})}c({\bf w},\mathcal{S})$ is by definition equivalent to the following LP: \begin{align} &\min_{{\bf w}}\;\; c({\bf w},\mathcal{S})\text{ }\text{ }\text{ } \nonumber\\ \sum_{\substack{h,\\ q_{hi}=x_{ik}}} w_h&= d_{ik},\text{ }\forall i=1,\ldots,N,\text{ }\forall k=1,\ldots,S_i,\nonumber\\ w_h&\geq 0,\text{ }\text{ }\text{ }\forall h=1,\ldots,m. \end{align} Thus there is an optimal basic solution ${\bf w^*}\in\mathcal{T}(\mathcal{S})$ to this problem such that $|\text{supp}({\bf w^*})|$ is bounded above by the rank of the matrix of the equality constraints in the first line. Since there are $\sum_i S_i=\sum_i |\text{supp}(P_i)|$ of these equality constraints by definition, it suffices to show that at least $N-1$ of these constraints are redundant. Let ${\bf a}_{ik}$ denote the row corresponding to the equation for a given $i$ and $1\leq k\leq S_i$. Note that for fixed $i$, $\sum_k {\bf a}_{ik}$ yields a vector of all ones, as $w_h$ appears in exactly one equation for each fixed $i$. Hence it is immediate that the row ${\bf a}_{iS_i}$ is redundant for all $i=2,\ldots,N$, since \begin{equation} {\bf a}_{iS_i} = {\bf 1} - \sum_{k=1}^{S_i-1} {\bf a}_{ik}=\sum_{k=1}^{S_1}{\bf a}_{1k}- \sum_{k=1}^{S_i-1} {\bf a}_{ik}, \end{equation} where ${\bf 1}$ is the row vector of all ones. Hence we obtain $N-1$ redundant rows. \qed \end{proof} We are now ready to prove Theorem \ref{sparsethm}. \begin{proof}[{of Theorem \ref{sparsethm}}] Since all barycenters are solutions to (\ref{LPequ}), there exists an $N$-star transport ${\bf y'}$ from some barycenter $\bar P'$ to $P_1,\ldots,P_N$ with $c({\bf y'})=\sum_{i=1}^N W_2(\bar{P'},P_i)^2$.
By Lemma \ref{scheme} part {\em \ref{scheme_iii})}, there is some location-fixed transportation scheme $\mathcal{T}(\mathcal{S})$ for $P_1,\ldots,P_N$ and some ${\bf w'}\in\mathcal{T}(\mathcal{S})$ such that $\bar{P'}=P({\bf w'},\mathcal{S})$ and $c({\bf y'})=c({\bf w'},\mathcal{S})$. By Lemma \ref{sparsemin} there is some ${\bf w^*}\in\text{arg}\min_{{\bf w}\in\mathcal{T}(\mathcal{S})}c({\bf w},\mathcal{S})$ such that $|\text{supp}({\bf w^*})|\leq \sum_{i=1}^NS_i - N + 1$. Now let $\bar{P} = P({\bf w^*},\mathcal{S})$; then by Lemma \ref{scheme} part {\em \ref{scheme_i})} \begin{equation} |\text{supp}(\bar P)|\leq |\text{supp}({\bf w^*})|\leq \sum_{i=1}^NS_i - N + 1. \end{equation} Further, by Lemma \ref{scheme} part {\em \ref{scheme_ii})}, there is an $N$-star transport ${\bf y}$ between $\bar{P}$ and $P_1,\ldots,P_N$ such that \begin{equation} \sum_{i=1}^N W_2(\bar{P},P_i)^2 \leq c({\bf y}) = c({\bf w^*},\mathcal{S}) \leq c({\bf w'},\mathcal{S}) = c({\bf y'}) = \sum_{i=1}^N W_2(\bar{P'},P_i)^2\leq\sum_{i=1}^N W_2(\bar{P},P_i)^2, \end{equation} where the last inequality follows since $\bar{P'}$ is already a barycenter. Hence this chain of inequalities collapses into a chain of equalities, and we see that $\bar{P}$ is the desired barycenter. \qed \end{proof} Finally, let us exhibit how to refine our results for discrete probability measures that are supported on an $L_1{\times}\dots{\times}L_d$-grid in $\mathbb{R}^d$ that is uniform in all directions. \begin{proof}[{of Corollary \ref{sparsegrid}}] An $L_1{\times}\ldots{\times}L_d$-grid in $\mathbb{R}^d$ for ${\bf e_0} \in \mathbb{R}^d$ and linearly independent vectors ${\bf e_1},\dots,{\bf e_d}\in \mathbb{R}^d$ is the set $\{{\bf v}\in \mathbb{R}^d: {\bf v}={\bf e_0}+\sum\limits_{s=1}^d \frac{l_s}{L_s-1}{\bf e_s}: 0\leq l_s \leq L_s-1, l_s\in \mathbb{Z}\}$.
Since by Proposition \ref{existdiscrete} we have $\text{supp}(\bar{P})\subseteq S$, for each $x_j\in\text{supp}(\bar{P})$ there exist $x_i={\bf e_0}+\sum\limits_{s=1}^d \frac{\alpha_{si}}{L_s-1}{\bf e_{s}}$ with $ 0\leq \alpha_{si} \leq L_s-1$ for all $i \leq N$ such that \begin{equation} x_j=\frac{1}{N}\sum_{i=1}^N x_i=\frac{1}{N}\sum_{i=1}^N {\bf e_0}+\frac{1}{N}\sum_{i=1}^N\sum\limits_{s=1}^d \frac{\alpha_{si}}{L_s-1}{\bf e_s} ={\bf e_0} + \sum\limits_{i=1}^N\sum\limits_{s=1}^d \frac{\alpha_{si}}{N\cdot (L_s-1)}{\bf e_s}. \end{equation} This tells us that $\text{supp}(\bar{P})$ lies on the $(N(L_1-1)+1)\times \ldots \times (N(L_d-1)+1)$-grid for ${\bf e_0}$ and ${\bf e_1},\dots,{\bf e_d}$. Since $\text{supp}(P_i)$ lies on an $L_1{\times}\ldots{\times}L_d$-grid, the absolute bound on $|\text{supp}(\bar{P})|$ follows immediately from Theorem \ref{sparsethm}. Since $\bar{P}$ is supported on a $(N(L_1-1)+1)\times \ldots \times (N(L_d-1)+1)$-grid, we observe a relative density of less than \begin{equation}\frac{N(\prod\limits_{i=1}^d L_i-1) +1}{\prod\limits_{i=1}^d (N(L_i-1)+1)}\leq \frac{N\prod\limits_{i=1}^d L_i }{N^d\prod\limits_{i=1}^d (L_i-1)} =\frac{1}{N^{d-1}}\prod\limits_{i=1}^d\frac{ L_i}{ (L_i-1)},\end{equation} in this grid, which proves the claim. \qed \end{proof} \section{Computations}\label{computations} In this section we apply the computational and theoretical results developed in this paper to a hypothetical transportation problem for distributing a fixed set of goods, each month, to 9 California cities where the demand distribution changes month to month. A Wasserstein barycenter, in this case, represents an optimal distribution of inventory facilities which minimizes squared-distance transportation costs totaled over multiple months. Although this data is artificially generated for purposes of exposition, it is based on observed average high temperatures per month \cite{wiki:climate}.
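Before turning to the full California example, we note that the linear program behind Lemma \ref{sparsemin} can be prototyped in a few lines. The sketch below (Python with NumPy rather than the Julia/JuMP implementation used for the actual computations; the two measures are made-up toy data on the real line, not the California distributions) builds the location-fixed transportation LP over all support tuples and recovers a sparse optimal solution by enumerating supports of size at most the rank of the constraint matrix, as in the proof of Lemma \ref{sparsemin}:

```python
import itertools
import numpy as np

# Minimal sketch of the location-fixed transportation LP from Lemma "sparsemin".
# The two measures below are made-up illustrative data on the real line:
#   P1 = (delta_0 + delta_4)/2,   P2 = (delta_0 + delta_2)/2   (so N = 2).
P = [{0.0: 0.5, 4.0: 0.5}, {0.0: 0.5, 2.0: 0.5}]
N = len(P)

# Tuples (q_h1, ..., q_hN) of support points; the center q_h0 is their mean
# and c_h = sum_i |q_h0 - q_hi|^2, exactly as in the cost c(w, S).
tuples = list(itertools.product(*[sorted(p) for p in P]))
centers = [sum(t) / N for t in tuples]
cost = np.array([sum((c - x) ** 2 for x in t) for t, c in zip(tuples, centers)])

# Equality constraints: tuples whose i-th entry is x_ik carry total mass d_ik.
rows, rhs = [], []
for i, p in enumerate(P):
    for x, d in sorted(p.items()):
        rows.append([1.0 if t[i] == x else 0.0 for t in tuples])
        rhs.append(d)
A, b = np.array(rows), np.array(rhs)

# As in the proof, an optimal basic solution has support size at most
# rank(A) = sum_i S_i - (N - 1); for this tiny LP we simply enumerate
# candidate supports of that size.
r = np.linalg.matrix_rank(A)
best_cost, best_w = np.inf, None
for supp in itertools.combinations(range(len(tuples)), r):
    cols = list(supp)
    w_s = np.linalg.lstsq(A[:, cols], b, rcond=None)[0]
    w = np.zeros(len(tuples))
    w[cols] = w_s
    if np.allclose(A @ w, b) and (w >= -1e-9).all() and cost @ w < best_cost:
        best_cost, best_w = cost @ w, w

support = int((best_w > 1e-9).sum())
```

On this toy instance the optimal cost is $1.0$ (half of the mass stays at $0$, the other half is centered at $3$), with a barycenter supported on two points, below the guaranteed bound $\sum_i S_i - N + 1 = 3$.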
All the source code used in this section is publicly available through the on-line repository \url{https://github.com/EthanAnderes/WassersteinBarycenterCode}. The probability measures used in this example are defined on $\Bbb R^2$ and are denoted $P_{\text{dec}}$, $P_{\text{jan}}$, $P_{\text{feb}}$, $P_{\text{mar}}$, $P_{\text{jun}}$, $P_{\text{jul}}$, $P_{\text{aug}}$ and $P_{\text{sep}}$ to correspond with $8$ months of the year (scaling up to $12$ months, while not intractable, imposes an unnecessary computational burden for reproducibility). The support of each distribution is given by the longitude-latitude coordinates of the following $9$ California cities: Bakersfield, Eureka, Fresno, Los Angeles, Sacramento, San Bernardino, San Francisco, San Jose and South Lake Tahoe. The mass distribution assigned to each $P_{\text{dec}},\ldots, P_{\text{sep}}$ is computed in two steps. The first step calculates \[(\text{population in city $C$})\times(\text{average high temp for month $M$} - 72^{\circ})^2\] for each city $C$ and each month $M$. The second step simply normalizes these values within each month to obtain $8$ probability distributions defined over the same $9$ California cities. Figure \ref{inputFigs} shows $P_{\text{feb}}$, $P_{\text{mar}}$, $P_{\text{jun}}$ and $P_{\text{jul}}$. Let $\bar P$ denote an optimal Wasserstein barycenter as defined by Equation (\ref{two}). Proposition \ref{existdiscrete} and Theorem \ref{sparsethm} both give bounds on the support of $\bar P$ uniformly over rearrangement of the mass assigned to each support point in $P_{\text{dec}},\ldots, P_{\text{sep}}$. Proposition \ref{existdiscrete} gives an upper bound for $\text{supp}(\bar P)$ in the form of a finite covering set, which guarantees that finite-dimensional linear programming can yield all possible optimal $\bar P$ (see (\ref{LPequ})).
Conversely, Theorem \ref{sparsethm} gives an upper bound for the magnitude $|\text{supp}(\bar P)|$ which is additionally uniform over rearrangement of the locations of the support points in $P_{\text{dec}},\ldots, P_{\text{sep}}$. In the implementation presented here we use the modeling package {\em JuMP} \cite{LubinDunningIJOC}, which supports the open-source {\em COIN-OR} solver {\em Clp} for linear programming, within the language {\em Julia} \cite{beks-14}. The set $S$, defined in (\ref{centers}), covers the support of $\bar P$ and is shown in the rightmost image of Figure \ref{barycenterFigs}. A typical {\em stars and bars} combinatorial calculation yields $|S|=\binom{9+8-1}{9-1}=\binom{16}{8}=12870$. The corresponding LP problem for $\bar P$ therefore has $939510$ variables with $103032$ linear constraints. On a 2.3 GHz Intel Core i7 MacBook Pro a solution was reached after $505$ seconds (without using any pre-optimization step). The solution is shown in the leftmost image of Figure \ref{barycenterFigs}. Notice that Theorem \ref{sparsethm} establishes an upper bound of $65 = 9 \cdot 8 - 8 + 1$ for $|\text{supp}(\bar P)|$. The LP solution yields $|\text{supp}(\bar P)| = 63$. Not only does this show good agreement with the sparsity bound from Theorem \ref{sparsethm}, but it also illustrates that Wasserstein barycenters are very sparse, with only $0.5\%$ of the possible support points in $S$ getting assigned non-zero mass. In Figure \ref{transportFigs} we illustrate Theorem \ref{nomasssplit}, which guarantees the existence of pairwise optimal transport maps from $\bar P$ to each of $P_{\text{dec}},\ldots, P_{\text{sep}}$ which do not split mass. The existence of these discrete non-mass-splitting optimal transports is a special property of $\bar P$. Indeed, unless special mass balance conditions hold, there will not exist any transport map (optimal or not) between two discrete probability measures.
The implication for this example is that {\em all} the inventory stored at a barycenter support point will be optimally shipped to exactly one city each month. Moreover, since the transportation displacements must satisfy Theorem \ref{nomasssplit} {\em \ref{nomasssplit_iii})}, each city is at the exact center of its $8$ monthly transportation plans. \end{document}
\begin{document} \title{Existence and Stability of Traveling Waves for an Integro-differential Equation for Slow Erosion} \author{Graziano Guerra$^1$ and Wen Shen$^2$} \date{} \footnotetext[1]{Dept.~of Mathematics and Applications, Milano-Bicocca University, Italy\quad \texttt{[email protected]}} \footnotetext[2]{Dept.~of Mathematics, Penn State University, U.S.A. \texttt{shen\[email protected]}} \maketitle \begin{abstract} We study an integro-differential equation that describes the slow erosion of granular flow. The equation is a first order non-linear conservation law where the flux function includes an integral term. We show that there exist unique traveling wave solutions that connect profiles with equilibrium slope at $\pm\infty$. Such traveling waves take very different forms from those in standard conservation laws. Furthermore, we prove that the traveling wave profiles are locally stable, i.e., solutions with monotone initial data approach the traveling waves asymptotically as $t\to+\infty$. \end{abstract} \textbf{keywords:} traveling waves, existence and stability, integro-differential equation, conservation law. \section{Introduction} We consider the Cauchy problem for the scalar integro-differential equation \begin{equation}\label{1.1} u_t(t,x) - \left(\exp\int_x^\infty f(u_x(t,y))\dd y \right)_x =0\,, \qquad u(0,x) = \bar u (x). \end{equation} The model describes the slow erosion of granular flow, where $u(t,x)$ is the height of the standing profile of granular matter. We assume that the slope of the profile has a fixed sign, i.e., $u_x>0$, and granular matter is poured at a constant rate from an uphill location outside the interval of interest, and slides down the hill as a very thin layer. The interaction between the two layers is controlled by the erosion function $f$, which denotes the rate of the mass being eroded (or deposited if negative) per unit length travelled in the $x$ direction per unit mass passing through.
We assume that the erosion rate $f$ depends only on the slope $u_x$. At the critical slope $1$ (normalized), we have $f(1)=0$. If $u_x>1$, we have erosion and $f>0$. Otherwise, if $u_x<1$, we have deposition and $f<0$. The independent time-variable $t$ denotes the amount of mass that has passed through, in a very long time. We will still refer to $t$ as ``time'' throughout the paper, and call $\bar u(x)$ the ``initial data''. The model was first derived in \cite{AS-arma} as the slow erosion limit of a $2\times2$ model for granular flow proposed by Hadeler \& Kuttler in \cite{HK}, with the specific erosion function $f(u_x)=(u_x-1)/u_x$. Later on, more general classes of erosion functions were studied, making a distinction between whether the slope $u_x$ blows up or remains uniformly bounded. Let $w\dot=u_x$ denote the slope. Under the following assumptions on the erosion function $f\in\mathcal{C}^2$, \begin{equation} \label{f-prop} f(1)=0, \qquad f' \ge 0, \qquad f'' \le 0 \end{equation} and \begin{equation}\label{fp1} \lim_{w\to+\infty} \frac{f(w)}{w}=0, \end{equation} the slope $w$ remains uniformly bounded for all $t\ge0$, see \cite{AS-ft}. In this case, one could study the following conservation law for $w$, \begin{equation} \label{1.2} w_t + \left(f(w) \cdot \exp\int_x^\infty f(w(t,y))\dd y \right)_x ~=~0. \end{equation} Here the flux contains an integral term in $x$. Due to the nonlinearity of the function $f(w)$, jumps in $w$ could develop in finite time even for smooth initial data, which leads to kinks in the profile $u$. Thanks to the uniform bound on $w$, global existence and uniqueness of BV solutions for \eqref{1.2} are established in \cite{AS-ft,AS-se}. However, if we allow more erosion for large slope, the solutions behave very differently.
If the erosion function approaches a linear function for large $w$, i.e., if \eqref{fp1} is replaced by \begin{equation} \label{fprop2} \lim_{w\to +\infty} f(w) - w f'(w) < \infty, \end{equation} then the slope $w$ could blow up, leading to vertical jumps in the profile, and $w=u_x$ would contain point masses. In this case one must study the equation \eqref{1.1}. It is observed in \cite{ShZh} that three types of singularities may occur in the solutions of \eqref{1.1}, namely \begin{itemize} \item a kink, where $u_x$ is discontinuous; \item a jump, where $u$ is discontinuous; \item a hyper-kink, where $u$ is continuous, but the right limit of $u_x$ is infinite, or both left and right limits of $u_x$ are infinite. \end{itemize} The global existence of BV solutions for \eqref{1.1} is obtained in \cite{ShZh}, through a modified version of the front tracking algorithm which generates piecewise affine approximations that also allow discontinuities. We remark that model \eqref{1.1} differs from other integro-differential equations in the literature where the gradient $u_x$ may blow up. For example, the variational wave equation \cite{BPZ,HZ} and the Camassa-Holm equation \cite{CH} are both well-studied. In both cases, thanks to an a priori bound on the $\mathbf{L}^2$ norm of $u_x$, the solution $u$ remains H\"{o}lder continuous at all times. In contrast, the solutions to our equation \eqref{1.1} could develop jumps, and the distributional derivative $u_x$ could contain point masses. Since $u(t,x)$ is an increasing function in $x$, the inverse $X(t,u)$ is well-defined. We define the corresponding gradient \begin{equation*}z(t,u) ~\dot=~ X_u(t,u). \end{equation*} A formal computation shows that $X(t,u)$ and $z(t,u)$ are conserved quantities, and they satisfy the equations \begin{eqnarray} \label{1.2x} X_t + \left( \exp\int_u^\infty g(z(t,v))\dd v \right)_u &=& 0\,,\\ \label{1.2z} z_t - \left(g(z) \cdot \exp\int_u^\infty g(z(t,v))\dd v \right)_u &=&0\,.
\end{eqnarray} Here \begin{equation}\label{gdef} g(z) ~\dot= ~z f(1/z) \end{equation} is the erosion function in the coordinates $(t,u)$, representing the rate of erosion per unit mass passed per unit distance in $u$. From the properties of $f$ in \eqref{f-prop} and \eqref{fprop2}, the function $g(z)\in \mathcal{C}^2$ satisfies \begin{equation}\label{gprop} g(1)=0, \qquad g(0) \ge 0, \qquad g''<0. \end{equation} Note that, for a given $t$, when $z(t,u)=0$ on an interval in $u$, the physical slope $w(t,x)$ blows up to infinity, and the profile $u(t,x)$ has a vertical jump. However, the solutions for \eqref{1.2z} could become negative, which has no physical meaning. Therefore equation \eqref{1.2z} must be equipped with the pointwise constraint $z\ge0$. We now modify equation \eqref{1.2z} into \begin{equation}\label{1.3} z_t - \left(g(z) \cdot \exp\int_u^\infty g(z(t,v))\dd v \right)_u ~=~\mu. \end{equation} The measure $\mu$ in \eqref{1.3} yields the projection onto the cone of non-negative functions. In this way, we ``transform'' the point mass in $u_x(t,x)$ into a constraint on $z(t,u)$. It turns out that the projection reduces the $\mathbf{L}^1$ distance between solutions $z(t,u)$. Thanks to this property, in \cite{CGS} we proved continuous dependence on the initial data and on the erosion function, for the entropy weak solutions generated as the limit of a front tracking approximation. This establishes a Lipschitz semigroup for the solutions of \eqref{1.3}. In this paper we are interested in the traveling wave solutions of \eqref{1.1}. We seek a traveling wave that connects profiles $u(t,x)$ with slope $w=1$ at both $-\infty$ and $+\infty$, with slope $w>1$ in between, traveling with speed $\sigma$, i.e., \begin{equation}\label{2.1} w(t,x)=W(\xi), \qquad \xi = x-\sigma t, \qquad \lim_{\xi\to \pm\infty}W(\xi)=1, \qquad W(\xi)\ge 1. \end{equation} In the variable $u(t,x)$, these become profiles that travel upwards along lines of slope 1 with constant speed.
See Figure~\ref{fig1} for an illustration. \begin{figure} \caption{Example of a traveling wave profile for $u(t,x)$.} \label{fig1} \end{figure} We now make an important observation. In Figure \ref{fig1} we see that, viewed in the direction of the dotted lines with slope 1, the profile remains stationary in $t$. This motivates another variable change which yields stationary traveling waves. To this end, we consider solutions $z(t,u)$ to \eqref{1.3} such that, for all finite $t \ge 0$, \begin{equation}\label{cond-z} z(t,\cdot)\in BV\,, \qquad z(t,\cdot)-1 \in \L1\,, \qquad \lim_{u\to\pm\infty} z(t,u) =1,\qquad 0\le z(t,u)\le 1. \end{equation} This indicates that the profile $u(t,x)$ approaches a linear asymptote with slope 1 at both $x=\pm\infty$. Without loss of generality, we assume that $u(t,x)$ approaches $u=x$ as $x\to +\infty$. We define the ``drop'' function \begin{equation}\label{defq} q(t,u) ~\dot =~ \int_u^{+\infty} (z(t,v)-1)\dd v. \end{equation} Note that $q(t,u)$ indicates the vertical drop of the profile $u(t,x)$ at $(t,u)$ compared to the line $u=x$. Under our assumptions \eqref{cond-z}, $q(t,u)$ is an increasing function in $u$, and approaches 0 as $u\to +\infty$. Therefore, $q(t,u) \le 0$. We also define the ``total drop'' of the profile as \begin{equation}\label{TD} D ~\dot =~ - \int_{-\infty}^{+\infty} (z(t,v)-1)\dd v = \|z(t,\cdot)-1\|_{\L1}. \end{equation} This denotes the vertical drop between the asymptotes as $x\to \pm\infty$. See Figure~\ref{fig1}. We see that, as $x\to-\infty$, the profile $u(t,x)$ approaches the asymptote $u=x-D$. We remark that $z=1$ is a trivial solution. Under the assumptions \eqref{cond-z}, the $\L1$ norm of $z-1$ remains constant in $t$. Therefore the total drop $D$ also remains constant in $t$. If we assume further that $z(t,u)=1$ outside an interval $I\subset{\mathbb{R}}$, while $z(t,u)<1$ on $I$, then the function $u\mapsto q(t,u)$ is invertible on $I$.
Let $\tilde u(t,q)$ be its inverse, defined for $q\in\left(-D,0\right)$. We now consider $(t,q)$ as the independent variables, and define the composite functions \begin{equation}\label{eq:defs} {\mathcal X}(t,q) = X\left(t,\tilde u(t,q)\right), \qquad \zeta(t,q) = z\left(t,\tilde u(t,q)\right). \end{equation} In this new coordinate, the quantities ${\mathcal X}(t,q)$ and ${\mathcal X}_q(t,q)$ are conserved. We define the corresponding erosion function, \begin{equation}\label{hDef} h(z)~ \dot=~ \frac{g(z)}{1-z}=\frac{f(1/z)}{1/z-1} \quad (0\le z<1), \qquad h(1)=-g'(1)=f'(1). \end{equation} For smooth solutions, we formally have \begin{eqnarray} {\mathcal X}_t(t,q) + \left( \exp\int_q^0 h(\zeta(t,s))\dd s \right)_q &=& 0\,, \label{eq:xi0} \\ ({\mathcal X}_q)_t - \left( h(\zeta) \cdot \exp\int_q^0 h(\zeta(t,s))\dd s \right)_q &=& 0\,. \label{eq:xiq0} \end{eqnarray} Treating $\zeta(t,q)$ as the unknown, and using the identity $ {\mathcal X}_q = \zeta/(1-\zeta)$, equation \eqref{eq:xiq0} can be rewritten as \begin{equation} \label{eq:zeta0} \zeta_t - (1-\zeta)^2 \left( h(\zeta) \cdot \exp\int_q^0 h(\zeta(t,s))\dd s \right)_q = 0. \end{equation} By this construction, smooth traveling wave solutions are stationary solutions of \eqref{eq:zeta0}. This is the main technique in our analysis. We will construct traveling waves as stationary solutions $\zeta(t,q)=Z(q)$ of \eqref{eq:zeta0}. Depending on the size of the total drop $D$, different types of profiles can be constructed, and different types of singularities will form in the solutions. In particular, all the stationary profiles have a downward jump at $q=-D$ (a kink for the profile $u$), possibly followed by an interval where $Z=0$ (a shock in the profile $u$), and then possibly a smooth stationary rarefaction fan. In all cases, $Z(q)$ is non-decreasing on $-D<q\le0$. See Section 3 for details. We will also show that such profiles are unique with respect to the total drop $D$.
For any given $D$, there exists a unique traveling wave profile. The corresponding result for the physical variables $u(t,x)$ and $w(t,x)$ follows from the well-posedness of the variable changes \eqref{eq:defs}. For the Cauchy problem of \eqref{1.3} with initial data satisfying \eqref{cond-z}, the existence of a Lipschitz semigroup of BV solutions is established in \cite{CGS}. In turn, this result also provides the existence of semigroup solutions for the new variable $\zeta(t,q)$. We will study the local stability of the stationary traveling wave profiles. We show that, if the initial data is ``non-decreasing'', the solution approaches a traveling wave profile $Z(q)$ as $t\to+\infty$. By ``non-decreasing'', with a slight abuse of notation, we mean initial data which satisfies the following assumptions \begin{equation}\label{z02} z(0,u) = \begin{cases} 1 \quad & (u<u_a),\\ \tilde z_o(u)\quad & (u\ge u_a), \end{cases} \qquad \tilde z_o(u_2)-\tilde z_o(u_1)\ge0\quad \text{for}\quad u_2\ge u_1\ge u_{a}\,. \end{equation} Note that this property is shared by the traveling wave profile. As we will see later in Section 4, property \eqref{z02} is preserved in the solution for all $t>0$. We will prove that solutions of the Cauchy problem for \eqref{1.3}, with initial data satisfying \eqref{cond-z} and \eqref{z02}, converge to the traveling wave profile asymptotically. Details will be explained in Section 4. The rest of the paper is organized as follows. In Section 2 we carry out some basic analysis, where we derive wave speeds and prove some technical lemmas. In Section 3 we show the existence of traveling wave solutions by construction. Furthermore, such profiles are unique with respect to the total drop. We then return to the original variable and state the corresponding results in the physical variables $u(t,x),w(t,x)$.
In Section 4 we establish local stability of these traveling waves, showing that solutions with non-decreasing initial data approach traveling waves asymptotically as $t\to+\infty$. A numerical simulation is given in Section 5 to demonstrate the convergence. Finally, we give several concluding remarks in Section 6. \section{Basic analysis} \subsection{Smooth stationary solutions for $\zeta(t,q)$} We start with a discussion of the properties of the erosion function $h(z)$ defined in \eqref{hDef}. \begin{lemma}\label{lm:hprop} For $0\le z \le 1$, the erosion function $h(z)$ satisfies \begin{equation}\label{hps} h(z) \ge 0\,, \qquad h'(z) >0 \,, \qquad h''(z) < \frac{2h'(z)}{1-z}. \end{equation} \end{lemma} \begin{proof} By the definition \eqref{hDef}, we have $h(0)=g(0)\ge0$. If $0<z<1$, since $g(z)>0$, we have $h(z) >0$. For $z=1$, we have $h(1) = -g'(1) >0$. This proves that $h(z)\ge 0$, and $h(z)=0$ if and only if $z=0$ and $g(0)=0$. For $h'$, we have \begin{equation}\label{dh} h'(z) =\frac{(1-z) g'(z) + g(z)}{(1-z)^2}. \end{equation} Since \begin{equation}\label{dh2} \frac{d}{dz} \left\{ (1-z) g'(z) + g(z)\right\} = (1-z) g''(z) <0, \qquad \left\{ (1-z) g'(z) + g(z)\right\}_{z=1} =0, \end{equation} we have $h'(z) >0$ for $0\le z <1$. For $z=1$, we have \begin{equation*} h'(1) = \lim_{z\to 1} \frac{(1-z) g'(z) + g(z)}{(1-z)^2} = \lim_{z\to 1} \frac{g'(z) + h(z)}{1-z} = - \lim_{z\to 1} [g''(z) + h'(z) ] = - g''(1) - h'(1), \end{equation*} so $h'(1) = - \frac12 g''(1) >0$, proving the second inequality in \eqref{hps}. By \eqref{dh} and \eqref{dh2}, we now get \begin{equation}\label{hp5} \frac{d}{dz} \left\{(1-z)^2 h'(z) \right\} <0\,. \end{equation} Working out the derivative, we get \begin{equation*} \frac{d}{dz} \left\{ (1-z)^2 h'(z) \right\} = -2 (1-z) h'(z) + (1-z)^2 h''(z) <0\,, \end{equation*} proving the third property in \eqref{hps}.
\end{proof} For notational simplicity, in the rest of this paper we let $F(\zeta;q)$ denote the integral term \begin{equation}\label{Fdef} F(\zeta;q) ~\dot=~ \exp\int_q^0 h(\zeta(t,s))\dd s\,, \qquad F_q (\zeta;q) = - h(\zeta) \cdot F(\zeta;q)\,. \end{equation} Equation \eqref{eq:zeta0} can be rewritten as \begin{equation}\label{eq:zetaNN} \zeta_t - (1-\zeta)^2 \left( h'(\zeta)\zeta_q - h^2(\zeta) \right) \cdot F(\zeta;q) =0. \end{equation} For smooth solutions, along lines of characteristics $t\mapsto q$ we have \begin{eqnarray} \dot q(t) &=& - (1-\zeta)^2 h'(\zeta) \cdot F(\zeta;q)\,, \label{char1}\\ \dot \zeta(t,q(t)) &=& -(1-\zeta)^2 h^2(\zeta) \cdot F(\zeta;q)\,. \label{char2} \end{eqnarray} We observe that $\dot q(t)<0$ and $\dot \zeta(t,q(t))< 0 $; therefore all characteristics travel to the left, and $\zeta$ is decreasing along characteristics. We now derive the ODE satisfied by smooth stationary solutions of \eqref{eq:zetaNN}. If $\tilde\phi(q)$ is a smooth stationary solution of \eqref{eq:zetaNN}, then it must be the solution of the Cauchy problem \begin{equation} \label{phi} \tilde\phi'(q) =\frac{h^2(\tilde\phi)}{h'(\tilde\phi)}, \qquad \tilde\phi(0)=1, \qquad (q\le 0). \end{equation} The ODE \eqref{phi} is autonomous and can be solved explicitly by separation of variables. Indeed, we have \begin{displaymath} h'(\tilde\phi) \tilde\phi' = h^2(\tilde\phi), \quad\rightarrow\quad \frac{d}{dq}h\left(\tilde\phi\right)=h^2\left(\tilde\phi\right) \quad\rightarrow\quad \frac{dh}{h^{2}}=dq. \end{displaymath} Integrating $q$ over $(q,0)$, and $h$ over $(h(\tilde\phi(q)), h\left(1\right))$, we get \begin{equation} \label{tw11} \frac{1}{h(\tilde\phi(q))} - \frac{1}{h(1)} = - q, \quad\rightarrow\quad h(\tilde\phi(q))=\frac{h(1)}{1-h(1)q}.
\end{equation} This gives an explicit formula for $\tilde\phi(q)$, i.e., \begin{equation} \label{phif} \tilde\phi(q) = h^{-1} \left(\frac{h\left(1\right)}{1-h\left(1\right)q} \right), \end{equation} where $h^{-1}$ denotes the inverse mapping of the function $z\mapsto h\left(z\right)$. By \eqref{hps} we know that $h'>0$ for $0\le z\le 1$; therefore the inverse is well-defined. Furthermore, since $\tilde\phi' >0$, the function $\tilde\phi(q)$ is strictly increasing. We observe that, if $h(0)>0$, the solution $\tilde\phi(q)$ reaches 0 at $q=-{D_{hk}}$, where ${D_{hk}}$ is a finite value. By \eqref{tw11}, we have \begin{equation}\label{Dhk} {D_{hk}} ~\dot= ~\frac{1}{h(0)} - \frac{1}{h(1)}, \qquad \tilde\phi(-{D_{hk}})=0. \end{equation} We now define the function \begin{equation} \label{eq:phi} \phi(q) ~\dot=~ \begin{cases} \tilde\phi (q)&\text{ for } q\in\left[-{D_{hk}},0\right],\\ 0&\text{ for } q<-{D_{hk}}. \end{cases} \end{equation} We observe that if $h(0)=0$, then ${D_{hk}}= +\infty$. If the total drop $D$ is finite, we have \begin{equation}\label{zetamin} \phi(q) \ge c_o, \quad \text{ for }q\in\left]-D,0\right], \quad \text{where} \quad c_o = \phi(-D) >0\,. \end{equation} \begin{remark}\label{rm:00} If $h(0)=0$, this means that $g(0)=0$ and $f'(+\infty)=0$. It is observed in \cite{AS-se} that in this case the slope $w=u_x(t,x)$ remains uniformly bounded for all $t$. Correspondingly, $z(t,u)\ge c_{o}$ for all $(t,u)$ and for some positive constant $c_{o}>0$. This implies \begin{displaymath} \zeta(t,q)\ge c_o > 0, \quad \forall t >0\text{ and }\forall q \in\left[-D,0\right]. \end{displaymath} We make the same observation for the traveling wave solution in \eqref{zetamin}. \end{remark} We immediately have the next lemma, which provides a lower bound on the derivative of the smooth stationary profile.
\begin{lemma}\label{Dphi} If $h(0)>0$, the smooth stationary profile $\phi$ satisfies \begin{equation}\label{eq:Dphi1} \phi'(q) \ge c_1, \qquad c_1 = \frac{h(0)^{2}}{\max_{0<z<1} h'(z)}>0, \qquad -{D_{hk}} < q<0\,. \end{equation} If $h(0)=0$, and if $D$ denotes the total drop, we have \begin{equation}\label{eq:Dphi2} \phi'(q) \ge c_2, \qquad c_2 = \frac{h(\phi(-D))^{2}}{\max_{\phi(-D)<z<1} h'(z)}>0, \qquad -D< q<0\,. \end{equation} \end{lemma} \begin{example} \label{ex:1} Let us choose the following erosion function $$ f(w) = w - \frac{1}{w}, \qquad g(z) = 1-z^2, \qquad h(\zeta) = 1+\zeta. $$ By \eqref{phif}, the smooth profile satisfies $$ \phi(q) = h^{-1} \left(\frac{2}{1-2q}\right) = \frac{2}{1-2q} -1 = \frac{1+2q}{1-2q}, \qquad q\le 0. $$ We see that $\phi(-0.5)=0$, so ${D_{hk}}=0.5$. \end{example} \subsection{Wave speeds, admissible conditions and stationary waves} To understand how the fronts move in the solution $\zeta(t,q)$, we derive here the wave speeds for the various types of singularities. Furthermore, we discuss their admissibility conditions, following Lax's entropy condition. We also single out the cases where the fronts are stationary. To simplify notation, we denote the integral term in the $(t,u)$ coordinate by \begin{equation} \label{eq:defG} G\left(z;u\right)=\exp \int_{u}^{+\infty}g\left(z\left(t,v\right)\right)\dd v\,, \qquad G_u(z;u) = -g(z) \cdot G(z;u)\,. \end{equation} Note that if $\zeta(t,q)=z(t,u(q))$, where $q(t,u)$ is the drop function defined in \eqref{defq}, we have $ F(\zeta;q) = G(z;u) $. Let $u(t)$ be the location of a discontinuity in a piecewise smooth solution $z(t,u)$. The wave speeds $\dot u(t)$ for the three types of discontinuities in $z(t,u)$ were derived in \cite{CGS}. Let $q(t)$ be the location of the corresponding discontinuity in the solution $\zeta(t,q)$.
Formally, by the conservation law \eqref{1.2z} we obtain \begin{equation}\label{relation} \dot q(t)= -\left[z(t,u+)-1\right]\dot u(t) -g\left(z(t,u+)\right) G\left(z;u\right). \end{equation} Thanks to \eqref{relation}, we can now list the corresponding wave speeds $\dot q(t)$. \paragraph{Kink.} A concave kink in $u(t,x)$ corresponds to a downward jump in $z(t,u)$. Since $-g(z)$ is concave, only downward jumps are admissible. We write \begin{equation*} z^{-} = z(t,u(t)-), \qquad z^{+} = z(t, u(t)+), \qquad 1 \ge z^{-} > z^{+} >0. \end{equation*} The speed of this wave is determined by the Rankine-Hugoniot condition for \eqref{1.2z} (see \cite{CGS}), $$ \dot u(t) =- G(z;u) \cdot \frac{g\left(z^{+}\right)-g\left(z^{-}\right)}{z^{+}-z^{-}}. $$ By using the relation \eqref{relation}, we have \begin{equation*} \dot q(t)=G\left(z;u\right) \left\{\left(z^{+}-1\right)\frac{g\left(z^{+}\right)-g\left(z^{-}\right)} {z^{+}-z^{-}}-g\left(z^{+}\right)\right\}\,. \end{equation*} Using the functions $h$ and $F$, we can write the speed as \begin{equation} \label{eq:qvk} \dot q(t) =-F\left(\zeta;q\right)\left(1-z^{+}\right)\left(1-z^{-}\right) \frac{h\left(z^{-}\right)-h\left(z^{+}\right)}{z^{-}-z^{+}}. \end{equation} Since $z^{+}<1$, the kink is stationary if and only if $z^{-}=1$. \paragraph{Hyper-kink.} A hyper-kink in $u(t,x)$ corresponds to a downward jump in $z(t,u)$ with $z^{+}=0$. By taking the limit $z^{+}\to 0$ in \eqref{eq:qvk}, we get \begin{equation}\label{speedHK} \dot q(t) = -F\left(\zeta;q\right)\left(1-z^{-}\right) \frac{h\left(z^{-}\right)-h(0)}{z^{-}}. \end{equation} Note that a hyper-kink is admissible only if the jump in $z$ is downward. Furthermore, \eqref{speedHK} indicates that it is stationary if and only if $z^{-}=1$. \paragraph{Shock.} A jump in the profile $u(t,x)$ corresponds to an interval where $z(t,u)=0$.
Let $z(t,u)=0$ on the interval $u^{-} < u < u^{+}$, and write \begin{equation*} z^{+}(t) = z(t,u^{+}(t)+) >0, \qquad z^{-}(t) = z(t,u^{-}(t)-) >0, \qquad \Delta = u^{+} - u^{-}. \end{equation*} Here $\Delta$ is the size of the shock. In the $(t,q)$ coordinate, we let $ q^{-} < q < q^{+}$ denote the corresponding interval of the shock, and $\Delta=u^{+}-u^{-} = q^{+}-q^{-}$. This shock gives two fronts, namely the left front $q^{-}(t)$ and the right front $q^{+}(t)$. We now introduce the function \begin{equation}\label{xidef} \psi(s) ~\dot=~\frac{e^{h(0)s}-1}{s} \quad (s>0),\qquad \psi(0)=h(0)\ge 0\,. \end{equation} The derivative of $\psi$ satisfies \begin{equation}\label{xi'} \psi'(s) = \frac{h(0)s e^{h(0)s} -e^{h(0)s}+1}{s^2} >0 \qquad (s>0)\,. \end{equation} Let us first consider the right front $q^{+}(t)$. The speed $\dot u^{+}(t)$ can be obtained from the Rankine-Hugoniot condition for \eqref{1.1} (see \cite[Subsection 2.1]{CGS} for details), \begin{displaymath} \dot u^{+}(t)=-\frac{G\left(z;u^{+}\right)}{z^{+}} \left( g\left(z^{+}\right)-\psi(\Delta)\right)\,. \end{displaymath} Using again \eqref{relation}, we get \begin{equation*} \dot q^{+}(t) = G\left(z;u^{+}\right)\frac{1-z^{+}}{z^{+}} \left[ \psi(\Delta) - \frac{g(z^{+})}{1-z^{+}} \right]\,. \end{equation*} Using the functions $h$ and $F$, we get \begin{equation} \label{eq:srv} \dot q^{+}(t) = F\left(\zeta;q^{+}\right)\frac{1-z^{+}}{z^{+}}\left[ \psi(\Delta) -h\left(z^{+}\right)\right]\,. \end{equation} A similar computation gives the speed of the left front $q^{-}$, \begin{equation} \label{eq:slv} \dot q^{-}(t)=F\left(\zeta;q^{-}\right)\frac{1-z^{-}}{z^{-}}\left[ \psi(\Delta) e^{-h(0)\Delta} -h\left(z^{-}\right)\right]\,. \end{equation} We now discuss the admissibility conditions. By the Lax entropy condition, characteristics can only enter, or stay parallel to, shock curves. One can easily check that the left front at $q^{-}$ is always admissible, see \cite{CGS,ShZh}.
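The monotonicity of $\psi$ claimed in \eqref{xi'} is used repeatedly below; it can also be confirmed numerically. The following Python sketch (illustrative only, with the choice $h(0)=1$ of Example \ref{ex:1}) compares the derivative formula \eqref{xi'} with a finite-difference derivative of \eqref{xidef} and checks its positivity.

```python
import math

# Check formula (xi') for psi'(s) and its positivity, taking h(0) = 1
# as in Example ex:1 (an illustrative choice, not the general case).
h0 = 1.0

def psi(s):
    # (xidef): psi(s) = (e^{h(0) s} - 1)/s for s > 0, with psi(0) = h(0)
    return (math.exp(h0 * s) - 1.0) / s if s > 0 else h0

def psi_prime(s):
    # (xi'): psi'(s) = (h(0) s e^{h(0)s} - e^{h(0)s} + 1)/s^2
    return (h0 * s * math.exp(h0 * s) - math.exp(h0 * s) + 1.0) / s ** 2

for s in [0.1, 0.5, 1.0, 2.0, 5.0]:
    fd = (psi(s + 1e-6) - psi(s - 1e-6)) / 2e-6   # central difference
    assert abs(psi_prime(s) - fd) < 1e-4          # formula matches
    assert psi_prime(s) > 0                       # psi strictly increasing
assert abs(psi(1e-9) - h0) < 1e-6                 # psi(0+) -> h(0)
print("psi checks passed")
```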
For the right front $q^{+}$, by using \eqref{char1}, the Lax condition yields the following inequality \begin{equation} \label{en-z} h\left(z^{+}\right)-z^{+}\left(1-z^{+}\right)h'\left(z^{+}\right) \le \psi(\Delta)\,. \end{equation} Condition \eqref{en-z} gives an upper bound on the value of $z^{+}$ for a given shock size $\Delta$. To quantify this bound, it is convenient to define a function $\eta\left(\Delta\right)$ that gives the maximum value of $z^{+}$ such that the entropy condition \eqref{en-z} holds for a shock of size $\Delta$, i.e., such that \eqref{en-z} holds with equality. Therefore, we define the mapping $\Delta \mapsto \eta$ implicitly by \begin{equation} \label{eq:defent} h\left(\eta\right)-\eta\left(1-\eta\right)h'\left(\eta\right) = \psi(\Delta)\,. \end{equation} This implicit definition is well-posed. Indeed, we know $\psi'>0$, and by the third property in \eqref{hps} we have \begin{equation}\label{eq:ggg} \frac{d}{dz} \left( h(z)-z(1-z)h'(z) \right)=z\left[2h'(z)-(1-z)h''(z)\right]>0, \qquad (0<z\le 1)\,, \end{equation} so the left-hand side of \eqref{eq:defent} is strictly increasing in $\eta$, and $\eta(\Delta)$ is uniquely determined. We now check when the fronts are stationary. The left front $q^{-}$ is stationary if and only if $z^{-}=1$, according to \eqref{eq:slv}. For the right front $q^{+}$, we define a function $\varphi\left(\Delta\right)$ such that for any given shock size $\Delta$, we have $\dot q=0$ if $z^{+}=\varphi(\Delta)$. Thus, by the front speed \eqref{eq:srv}, the mapping $\Delta \mapsto \varphi$ is implicitly defined by \begin{equation} \label{eq:defzero} h\left(\varphi\right)= \psi(\Delta)\,. \end{equation} Both $h$ and $\psi$ are strictly increasing functions, therefore the function implicitly defined by \eqref{eq:defzero} is well-defined.
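Since the left-hand sides of \eqref{eq:defent} and \eqref{eq:defzero} are strictly increasing, both implicit functions can be evaluated in practice by elementary root finding. The following Python sketch (illustrative only, for the choice $h(z)=1+z$ of Example \ref{ex:1}) computes $\eta$ and $\varphi$ by bisection and confirms the closed forms $\varphi=\psi-1$ and $\eta=\sqrt{\psi-1}$, as well as the inequality $\varphi<\eta$.

```python
import math

# Evaluate the implicit definitions (eq:defent) and (eq:defzero) by bisection,
# for the illustrative choice h(z) = 1 + z of Example ex:1 (so h(0) = 1).
def h(z): return 1.0 + z
def hp(z): return 1.0                                  # h'(z)

def psi(s):
    return (math.exp(s) - 1.0) / s if s > 0 else 1.0   # (xidef) with h(0)=1

def bisect(F, lo, hi, tol=1e-12):
    # bisection for a strictly increasing F with F(lo) < 0 < F(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def varphi(delta):   # (eq:defzero): h(varphi) = psi(Delta)
    return bisect(lambda z: h(z) - psi(delta), 0.0, 1.0)

def eta(delta):      # (eq:defent): h(eta) - eta(1-eta) h'(eta) = psi(Delta)
    return bisect(lambda z: h(z) - z * (1.0 - z) * hp(z) - psi(delta), 0.0, 1.0)

# For h(z) = 1 + z the definitions reduce to varphi = psi - 1 and
# eta = sqrt(psi - 1); moreover varphi < eta, as expected.
for delta in [0.2, 0.5, 1.0, 1.2]:
    assert abs(varphi(delta) - (psi(delta) - 1.0)) < 1e-9
    assert abs(eta(delta) - math.sqrt(psi(delta) - 1.0)) < 1e-9
    assert varphi(delta) < eta(delta)
print("eta and varphi checks passed")
```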
For a given shock size $\Delta$, by \eqref{eq:srv} we have \begin{eqnarray} \dot q^{+} <0 \qquad &\text{if}& \qquad z^{+} > \varphi\left(\Delta\right)\,,\label{RS-}\\ \dot q^{+} >0 \qquad &\text{if}& \qquad z^{+} < \varphi\left(\Delta\right)\,.\label{RS+} \end{eqnarray} {From} \eqref{eq:srv} we also observe that, if $z^{+}=1$, the right front of the shock is stationary, provided that it satisfies the admissibility condition \eqref{en-z}. Let $D_{ss}$ denote the smallest shock size such that $z^{+}=1$ is admissible. Then $D_{ss}$ satisfies the equation \begin{equation} \label{Dss} \psi(D_{ss})=h(1)\,. \end{equation} By \eqref{xi'}, the function $\psi$ is strictly increasing, therefore the value $D_{ss}$ is uniquely determined by \eqref{Dss}. We can now conclude that any shock with $z^{-}=z^{+}=1$ and shock size at least $D_{ss}$ is stationary. \begin{example} If the erosion function has the simple form $g(z)=1-z^{2}$, as in Example \ref{ex:1}, then $h(z)=1+z$ and $h(0)=g(0)=1$. In this case, the implicit functions can be expressed explicitly. We have \begin{eqnarray*} \eta \left(\Delta\right)~=~ \sqrt{\frac{e^\Delta-1}{\Delta}-1}, \qquad \varphi\left(\Delta\right) ~=~ \frac{e^\Delta-1}{\Delta} -1,&& \frac{e^{D_{ss}}-1}{D_{ss}}~=~2, \qquad D_{ss}\approx 1.256. \end{eqnarray*} Let $D$ denote the total drop. For various values of $D$, we plot the functions $\phi(q)$, $\eta(D+q)$, $\varphi(D+q)$ in Figure \ref{fig:03}, using red, blue and green colors respectively. These graphs give us insights into the construction of the stationary traveling waves. Consider the intersection point of the graphs of $\phi$ (in red) and $\varphi$ (in green), and let $\bar q$ be its $q$ coordinate. Then, a shock with $z=0$ on $q\in[-D, \bar q]$, with the right front value $\varphi(D+\bar q)$, would be stationary. {From} Figure \ref{fig:03} we see that, if $D< {D_{hk}}$, the two curves do not intersect, so no stationary shock could exist.
If ${D_{hk}}<D<D_{ss}$, then there exists only one intersection point, indicating one possible stationary shock. Finally, if $D>D_{ss}$, the two curves do not intersect. However, since $D>D_{ss}$, a shock with $z=0$ on $q\in [-D,0]$ is admissible with $z^{+}=1$, and therefore stationary. \end{example} \begin{figure} \caption{The graphs of $\phi(q)$ (in red), $\eta(D+q)$ (in blue) and $\varphi(D+q)$ (in green) for $4$ different values of $D$. (I): $D<{D_{hk}}$; (II): $D={D_{hk}}$; (III): ${D_{hk}}<D<D_{ss}$; (IV): $D\ge D_{ss}$.} \label{fig:03} \end{figure} \subsection{Technical Lemmas} Inspired by the graphs in Figure \ref{fig:03}, we now prove some technical lemmas. \begin{lemma} \label{lm:ZZ2} If $h(0)>0$, the following hold. \begin{enumerate}[a)] \item The functions $\varphi\left(\Delta\right)$ and $\eta\left(\Delta\right)$ are strictly increasing for $\Delta\ge 0$; \item $\varphi\left(0\right)=\eta\left(0\right)=0$ and $\varphi\left(D_{ss}\right)=\eta\left(D_{ss}\right)=1$; \item $\varphi\left(\Delta\right) < \eta\left(\Delta\right)$ for any $\Delta\in\left(0,D_{ss}\right)$. \end{enumerate} \end{lemma} \begin{proof} These properties follow {from} the definitions of the functions $\varphi$ and $\eta$, and the properties of the functions $h$ and $\psi$. \begin{enumerate}[a)] \item Since $h'>0$ and $\psi'>0$, by the definition of $\varphi$ in \eqref{eq:defzero} we have $\varphi' >0$. For $\eta$ defined in \eqref{eq:defent}, by the property \eqref{eq:ggg} and $\psi'>0$, we conclude that $\eta'>0$. \item When $\Delta=0$, we have $\psi(0)=h(0)$. Since $h(0)-0\left(1-0\right)h'(0)=h(0)$, from \eqref{eq:defzero} and \eqref{eq:defent} we must have $\varphi(0)=\eta(0)=0$. When $\Delta=D_{ss}$, by \eqref{Dss} and \eqref{eq:defzero}, we have $h(\varphi) = \psi(D_{ss})=h(1)$, therefore $\varphi(D_{ss})=1$. For $\eta$, at $\eta=1$ the left-hand side of \eqref{eq:defent} equals $h(1)=\psi(D_{ss})$, therefore $\eta(D_{ss})=1$. \item By the definitions \eqref{eq:defent} and \eqref{eq:defzero}, and the property $z(1-z)h'(z)>0$ for $0<z<1$, we immediately conclude $c)$.
\end{enumerate} \end{proof} \begin{lemma}\label{lm:ZZ1} If $\varphi\left(\Delta\right) = \phi(q)$ for some values of $\Delta\ge 0$ and $q\in[-{D_{hk}},0]$, then $$ \phi'(q)-\varphi'(\Delta) \ge \kappa>0\,, $$ for some constant $\kappa>0$ independent of $\Delta$ or $q$. This implies that any horizontal shift of the graph of $\varphi$ can intersect the graph of $\phi$ in at most one point. \end{lemma} \begin{proof} If $h(0)=0$, then \eqref{eq:defzero} implies $h(\varphi)=h(0)$, which gives $\varphi\left(\Delta\right) \equiv 0$, so the lemma is trivial, and one can let \begin{equation*} \kappa=\min_{0<z<1} \frac{h^2(z)}{h'(z)}. \end{equation*} We now consider the case $h(0)>0$. Let $\Delta$ and $q$ be values such that $z=\varphi(\Delta)= \phi(q)$. By \eqref{phi} and \eqref{eq:defzero}, we have \begin{equation*} h'(\varphi) \phi'(q) = h^2(\varphi) = \psi^2(\Delta), \qquad h'(\varphi) \varphi'(\Delta) = \psi'\left(\Delta\right). \end{equation*} We have \begin{eqnarray*} && \phi'(q) - \varphi'(\Delta) ~=~ \frac{1}{h'(z)} \left[ \psi^2(\Delta)-\psi'(\Delta)\right] ~\ge ~ \frac{1}{\max_{0<z<1} h'(z)}\cdot \min_{s\ge 0} \left[ \psi^2(s)-\psi'(s)\right]\\ &=& \frac{1}{\max_{0<z<1} h'(z)} \cdot \min_{s\ge 0} \frac{e^{h(0)s}\left(e^{h(0)s}-h(0)s-1\right)}{s^2}\\ &=& \frac{1}{\max_{0<z<1} h'(z)} \cdot \lim_{s\to 0+} \frac{e^{h(0)s}\left(e^{h(0)s}-h(0)s-1\right)}{s^2} ~=~ \frac{1}{\max_{0<z<1} h'(z)} \cdot \frac12 h^2(0) >0\,. \end{eqnarray*} We can now let \begin{equation*} \kappa = \frac{h^2(0)}{2 \cdot\max_{0<z<1} h'(z)} >0\,, \end{equation*} completing the proof. \end{proof} \begin{lemma}\label{lm:ZZ5} If $h(0)>0$, we have ${D_{hk}}<D_{ss}$. \end{lemma} \begin{proof} By $b)$ of Lemma \ref{lm:ZZ2}, \eqref{phi} and \eqref{Dhk}, we have \begin{equation*} D_{ss} = \int_0^1 \left[\varphi^{-1}\right]'(\xi)\, d\xi, \qquad {D_{hk}} = \int_0^1 \left[\phi^{-1}\right]'(\xi)\, d \xi.
\end{equation*} By Lemma \ref{lm:ZZ1}, $\left[\varphi^{-1}\right]'(\xi)> \left[\phi^{-1}\right]'(\xi)$, therefore ${D_{hk}} < D_{ss}$, completing the proof. \end{proof} We immediately have the next corollary, regarding the intersection points of the graphs of $\phi(q)$ and $\varphi(q+D)$. This will be useful in the construction of the stationary traveling wave profiles and in the proof of their uniqueness w.r.t.~the total drop. \begin{corollary}\label{cor:ZZ3} Let $D$ be the total drop. We have the following results concerning the intersection points of the graphs of $\phi(q)$ and $\varphi(q+D)$ on the interval $-D\le q\le 0$. \begin{itemize} \item[(1).] If $D < {D_{hk}}$, the two graphs never intersect, and $\varphi(q+D)<\phi(q)$ for $-D\le q\le 0$; \item[(2).] If $D = {D_{hk}}$, the two graphs intersect once, at $q=-D$, and $\varphi(q+D)<\phi(q)$ for $-D< q \le 0$; \item[(3).] If ${D_{hk}} <D<D_{ss}$, the two graphs intersect once, at $q^{+}$ where $-{D_{hk}}<q^{+}<0$; moreover, $\varphi(q+D) > \phi(q)$ for $q<q^{+}$ and $\varphi(q+D) < \phi(q)$ for $q>q^{+}$; \item[(4).] If $D = D_{ss}$, the two graphs intersect once, at $q=0$, and $\varphi(q+D) > \phi(q)$ for $-D < q<0$; \item[(5).] If $D > D_{ss}$, the two graphs never intersect, and $\varphi(q+D) > \phi(q)$ for $-D<q\le 0$. \end{itemize} \end{corollary} This corollary is illustrated in Fig.~\ref{fig:03}. \section{Existence and uniqueness of traveling waves} \subsection{Stationary traveling waves for $\zeta(t,q)$} In this subsection we prove the following theorem on the existence of stationary traveling waves and their uniqueness with respect to the total drop. \begin{theorem}\label{thm:tw} For every value of the total drop $D$, there exists exactly one stationary traveling wave profile $\zeta(t,q)=Z(q)$, defined on $q\in[-D,0]$. \end{theorem} \begin{proof} We prove the existence by construction.
Let $\zeta(t,q)=Z(q)$ be defined on the interval $q\in [-D,0]$, with \begin{displaymath} Z(-D)=1, \quad Z(0)=1, \qquad 0 \le Z(q) < 1, \quad q\in ]-D,0[, \end{displaymath} for various values of the total drop $D$. We recall the values ${D_{hk}}$ and $D_{ss}$, defined in \eqref{Dhk} and \eqref{Dss}, respectively. The stationary profile is constructed in different ways for different values of the total drop $D$. \textbf{Type 1.} This is the case when $D < {D_{hk}}$. We let $$ Z(q) ~\dot= ~ \begin{cases} 1 & q=-D, \\ \phi(q) \qquad & -D < q \le 0. \end{cases} $$ Here we have a kink at $q=-D$, where $Z$ has a downward jump; it is then connected to a smooth stationary profile on the right. By \eqref{eq:qvk}, the kink at $q=-D$ is also stationary. We remark that if $h(0)=0$, the solution $\zeta(t,q)$ remains uniformly bounded away {from} 0 for all $t>0$. In this case, only Type 1 profiles are possible. For the next three types, we assume $h(0)>0$. \textbf{Type 2.} In this case we have $D={D_{hk}}$. We let $$ Z(q) ~\dot= ~ \begin{cases} 1 & q=-D, \\ \phi(q)\qquad & -D < q \le 0. \end{cases} $$ Here we have a hyper-kink at $q=-D$, and it is connected to a smooth stationary profile on the right. By \eqref{speedHK}, the hyper-kink at $q=-D$ is stationary. \textbf{Type 3.} We consider the case with ${D_{hk}} < D < D_{ss}$. Let $(q^{+},z^{+})$ be the intersection point of the graphs of $\phi(q)$ and $\varphi(q+D)$. According to Corollary \ref{cor:ZZ3}, the two graphs intersect only once, at some interior point with $-D < q^{+} <0$. At this intersection point we have $$ z^{+} = \phi(q^{+})= \varphi\left(\Delta\right), \qquad \mbox{where} \qquad \Delta=q^{+}+D\,. $$ We define the profile $$ Z(q) ~\dot= ~ \begin{cases} 1 & q=-D,\\ 0 & -D < q \le q^{+}, \\ \phi(q) \qquad & q^{+} < q \le 0. \end{cases} $$ Here we have a shock on $q\in(-D,q^{+})$, and it is connected to a smooth stationary profile on the right. The left front of the shock, located at $q=-D$, is stationary by \eqref{eq:slv}.
The right front at $q^{+}$ is also stationary by construction. \textbf{Type 4.} This is the last case, where $D \ge D_{ss}$. We let $$ Z(q) ~\dot= ~ \begin{cases} 1 & q=-D,\\ 0 & -D < q <0,\\ 1 \qquad & q= 0. \end{cases} $$ This is a single admissible shock. By \eqref{eq:slv} and \eqref{eq:srv}, both the left and the right fronts are stationary. Therefore, the whole profile is stationary. Finally, we note that the right front of a shock is stationary if and only if it is an intersection point of the graph of $\phi$ with some horizontal shift of the graph of $\varphi$. By Corollary \ref{cor:ZZ3}, there exists at most one such intersection point. This shows the uniqueness of these profiles with respect to the total drop, completing the proof. \end{proof} \begin{example} \label{ex:2} If we choose the erosion function as in Example~\ref{ex:1}, then $D_{ss}$ satisfies \begin{equation*} 2=\frac{\exp\{D_{ss}\}-1}{D_{ss}}, \qquad D_{ss} \approx 1.26. \end{equation*} We have ${D_{hk}}=0.5$ {from} Example~\ref{ex:1}. The plots of these 4 types of traveling waves are given in Figure~\ref{fig:zq4}. \end{example} \begin{figure} \caption{Stationary traveling wave profiles $\zeta(t,q)=Z(q)$.} \label{fig:zq4} \end{figure} \subsection{Moving traveling waves for $u(t,x)$} For the original physical variables $u(t,x)$ and the slope $w(t,x)=u_x(t,x)$, the traveling waves are not stationary. They share some interesting properties, on which we now comment. Recall \eqref{2.1}; let $W(\xi)$ be such a traveling wave, and let $\mathcal{F}(\xi)$ be the corresponding integral term. {From} the equation \eqref{1.2} we get $$ -\sigma W'(\xi) + (f(W(\xi)) \, \mathcal{F}(\xi))' =0.
$$ Integrating it once, by the boundary condition \eqref{2.1} we get \begin{equation}\label{ode0} -\sigma W(\xi) + f(W(\xi)) \mathcal{F}(\xi) = -\sigma, \qquad \sigma = \frac{f(W(\xi))}{W(\xi)-1} \cdot \mathcal{F}(\xi)\,. \end{equation} This gives us the constant propagation speed \begin{equation}\label{sigma} \sigma ~= ~ \frac{f(W(\xi))}{W(\xi)-1} \cdot \mathcal{F}(\xi) = \lim_{\bar\xi\to+\infty} \frac{f(W(\bar\xi))}{W(\bar\xi)-1} \cdot \mathcal{F}(\bar\xi)= f'(1)\,, \qquad \forall \xi\,. \end{equation} This says that all traveling waves containing a smooth part (i.e., Types 1, 2 and 3) must travel with the same speed $\sigma=f'(1)$! We now derive the ODE satisfied by $W(\xi)$. Taking logarithms on both sides of \eqref{ode0}, we get \begin{equation*} \ln \sigma + \ln (W-1) = \ln f(W) + \int_\xi^\infty f(W(y))\dd y\,.\end{equation*} We now differentiate both sides of this equation with respect to $\xi$, and obtain an autonomous ODE with a boundary condition \begin{equation}\label{Wode} W'(\xi) = - \frac{f^2(W) (W-1)}{f(W)-f'(W)(W-1)},\qquad \lim_{\xi\to+\infty} W(\xi) = 1. \end{equation} Integrating $W$ once, we obtain the corresponding profile for $u(t,x)$. If the shock size $\Delta$ is at least $D_{ss}$, we have Type 4, which is a single shock connected to $u_x=1$ on both the left and the right sides. It travels with the shock speed (see \cite[(2.11)]{ShZh}): \begin{displaymath} \sigma = \frac{e^{\Delta\cdot f'(+\infty)}-1}{\Delta}. \end{displaymath} As a consequence of Theorem \ref{thm:tw}, we conclude that such traveling waves exist for $u(t,x)$ and $w(t,x)$, and they are unique w.r.t.~the total drop. \begin{example}\label{ex:W} Consider the erosion function $f(w)=w-\frac{1}{w}$, the same as in Example \ref{ex:1}; then \eqref{Wode} becomes $$ W'(\xi)= -(W+1)^2(W-1). $$ Integrating $W(\xi)$ in space, one obtains the traveling waves for the profile height $u(t,x)$. These are plotted in Figure~\ref{fig:3T} for Types 1, 2 and 3, at $t=1$.
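The reduction of \eqref{Wode} to this autonomous form can be double-checked numerically. The following Python sketch (illustrative only) compares the right-hand side of \eqref{Wode}, for $f(w)=w-1/w$, with $-(W+1)^2(W-1)$, and evaluates $f'(1)$.

```python
# Check that, for f(w) = w - 1/w, the general ODE (Wode)
#   W' = - f(W)^2 (W-1) / ( f(W) - f'(W)(W-1) )
# reduces to the autonomous form W' = -(W+1)^2 (W-1) stated above.
def f(w):  return w - 1.0 / w
def fp(w): return 1.0 + 1.0 / w ** 2   # f'(w)

def rhs_general(W):
    return -f(W) ** 2 * (W - 1.0) / (f(W) - fp(W) * (W - 1.0))

def rhs_reduced(W):
    return -(W + 1.0) ** 2 * (W - 1.0)

# compare at sample slopes W > 1 (at W = 1 both sides vanish in the limit)
for W in [1.2, 1.5, 2.0, 3.0, 5.0]:
    assert abs(rhs_general(W) - rhs_reduced(W)) < 1e-9
assert fp(1.0) == 2.0   # propagation speed sigma = f'(1) = 2
print("ODE reduction verified")
```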
All these waves travel with the same speed $\sigma=f'(1)=2$. \end{example} \begin{figure} \caption{Three types of traveling wave profiles in the physical coordinate $u(1,x)$.} \label{fig:3T} \end{figure} \section{Stability of traveling waves} In this section we prove the local stability of traveling waves. \begin{theorem}\label{thm:stab} Let $z(t,u)$ be the solution of the Cauchy problem for \eqref{1.3} with initial data satisfying \eqref{cond-z} and \eqref{z02}, and let $\zeta(t,q)$ be defined in \eqref{eq:defs}. Let $D$ be the total drop, and $Z(q)$ be the corresponding stationary traveling wave profile. Then, for every $\varepsilon>0$, there exists a finite value $T^\varepsilon$ such that \begin{equation}\label{eq:stab} \left\| \zeta(t,\cdot)-Z(\cdot)\right\|_{\mathbf{L}^1} \le C \varepsilon\,, \quad \text{ for all }\quad t\ge T^{\varepsilon}. \end{equation} Here the constant $C$ is independent of $\varepsilon$. \end{theorem} The rest of this section is devoted to the proof of Theorem \ref{thm:stab}. \subsection{Structure of the solution $\zeta(t,q)$ for non-decreasing data} Thanks to the additional assumption \eqref{z02}, we now make the following assumptions on the solution $\zeta(t,q)$. \begin{assumption}\label{prop:CGS} Let $z(t,u)$ be the solution of the Cauchy problem for \eqref{1.3} with initial data satisfying \eqref{cond-z} and \eqref{z02}. Let $\zeta(t,q)$ be defined in \eqref{eq:defs}. It satisfies the following properties. \begin{itemize} \item There is a stationary downward jump at $q=-D$, with $\zeta(t,-D-)=1$, for all $t\ge0$; \item There is at most one shock in the solution, with the left front at $q=-D$ and the right front at $q^{+}(t)$, where $-D\le q^{+}(t)\le 0$;
\item For any $t>0$, $\zeta(t,q)$ is locally Lipschitz continuous and strictly increasing on the interval $\tilde q(t) < q \le 0$, where $\tilde q(t)=q^{+}(t)$ if there is a shock, and $\tilde q(t)=-D$ if no shock exists; \item The total variation of $q\mapsto\zeta(t,q)$ is uniformly bounded by $2$ for all $t\ge0$. \end{itemize} \end{assumption} \begin{remark} The properties in Assumption \ref{prop:CGS} are expected to hold for the solution $\zeta(t,q)$. A rigorous proof could be carried out through front tracking piecewise constant approximate solutions. However, such a proof would be lengthy, since one has to enter the details of setting up the front tracking algorithm, then establish the a priori estimates and the convergence. Since the current paper is focused on traveling waves and their properties, we do not provide the detailed proof and state these properties as assumptions. Below we provide some formal arguments to support these assumptions. \textbf{(1).} The assumptions \eqref{cond-z} and \eqref{z02} imply the following properties for $\zeta(0,q)$, $$ \zeta(0,-D)=1,\quad \zeta(0,q_1)<\zeta(0,q_2) \quad (q_1<q_2), \quad \zeta(0,0)=1\,. $$ For smooth solutions $\zeta(t,q)$, the characteristic equations \eqref{char1}-\eqref{char2} hold. Therefore $\zeta$ decreases along characteristics. The assumption \eqref{z02} implies that $\zeta(t,q)<1$ for all $t>0$ and $-D<q<0$. \textbf{(2).} The integral term $F(\zeta;q)$ is strictly decreasing in $q$, i.e., $$ F(\zeta;q_1) > F(\zeta; q_2), \qquad (-D<q_1<q_2<0)\,. $$ \textbf{(3).} We consider two points in the solution along the characteristics, $\zeta(t,q_1(t))$ and $\zeta(t,q_2(t))$, with $-D<q_1(0)<q_2(0)<0$ and $\zeta(0,q_1(0)) \le \zeta(0,q_2(0))$. As long as $\zeta(t,q_1(t))>0$ and $\zeta(t,q_2(t))>0$, the equations \eqref{char1}-\eqref{char2} hold. Let $\bar t$ be the first time such that \begin{displaymath} \zeta(\bar t,q_1(\bar t)) = \zeta(\bar t,q_2(\bar t))\quad\text{ or }\quad q_1(\bar t)=q_2(\bar t).
\end{displaymath} Since the maps $z\mapsto \left(1-z\right)^{2}h'(z)$, $q\mapsto F\left(\zeta;q\right)$ are decreasing and positive, we have \begin{displaymath} \frac{d}{dt}\left[q_{2}(t)-q_{1}(t)\right]=\dot q_{2}(t)- \dot q_{1}(t)\ge 0. \end{displaymath} This implies that $q_{2}(\bar t)=q_{1}(\bar t)$ cannot happen. If $\zeta(\bar t,q_1(\bar t)) = \zeta(\bar t,q_2(\bar t))$ holds, we can compute \begin{displaymath} \frac{d}{dt}\left[\zeta(\bar t,q_{2}(\bar t)) - \zeta(\bar t,q_{1}(\bar t))\right]= g\left(\zeta\left(\bar t,q_{1}(\bar t\right)\right)\left[F\left(\zeta ; q_{1}(t)\right)-F\left(\zeta ; q_{2}(t)\right)\right]>0, \end{displaymath} which gives a contradiction. In other words, for any $t>0$, if $\zeta(t,q)\ge0$, then $q\mapsto \zeta(t,q)$ is non-decreasing and the rarefaction fan is strictly spreading. \textbf{(4).} The total variation of $\zeta(t,\cdot)$ is uniformly bounded by $2$ for $t\ge 0$. \textbf{(5).} If $\zeta(t,q)$ is strictly increasing, then there must be a downward jump in $\zeta(t,q)$ at $q=-D$, i.e., \begin{equation*} \zeta(t,-D)=1, \quad \zeta(t,-D+) <1, \qquad (t>0)\,.\end{equation*} This jump is stationary. \textbf{(6).} If $h(0)>0$, shocks might form in the solution. Since $\zeta(t,q)$ is strictly increasing, there can be at most one shock, with the left front at $q=-D$ and the right front at some $q^{+}$, where $-D\le q^{+}(t)\le 0$ for all $t\ge0$. \textbf{(7).} The solution $\zeta(t,q)$ remains smooth and strictly increasing on the part where $\zeta(t,q)>0$. If there are no shocks, this is valid on the whole interval $-D < q \le 0$. If there is a shock, then the interval is $q^{+}(t) < q \le 0$, where $q^{+}(t)$ is the position of the right front of the shock.
\end{remark} \subsection{Properties of rarefaction fronts; formal arguments.} Thanks to Assumption \ref{prop:CGS}, we now only need to trace the location of the right front $q^{+}(t)$ of the possible shock, and study the evolution of the rarefaction fronts by the characteristic equations \eqref{char1}-\eqref{char2}. The next lemma follows {from} \eqref{char1}-\eqref{char2}. \begin{lemma}\label{lm:4a} Let $\zeta(t,q(t))>0$ be a point on the rarefaction fan. Then, as $t$ grows, the trajectory $t\mapsto (q,\zeta)$ matches some horizontal shift of the graph of $\phi$. Furthermore, the point $(q(t), \zeta(t,q(t)))$ travels to the left and downwards, until it merges into a singularity at some $t\le T_{\zeta_{o}}$, where $T_{\zeta_{o}}=\frac{D}{\left(1-\zeta_{o}\right)\min_{0\le z \le 1}h'(z)},\qquad \zeta_{o}=\zeta\left(0,q(0)\right)<1$. \end{lemma} \begin{proof} It suffices to observe that $$ \phi'(q)=\frac{h^2(\phi)}{h'(\phi)},\quad \frac{\dot \zeta(t,q(t))}{\dot q(t)} = \frac{h^2(\zeta)}{h'(\zeta)},\quad \dot q(t) <0, \quad \dot \zeta(t,q(t))<0\,, $$ therefore, setting $q_{o}=q(0)<0$ and $\zeta_{o}=\zeta\left(0,q(0)\right)<1$, we have \begin{displaymath} \dot q(t)=-\left(1-\zeta\left(t,q(t)\right)\right)h'\left(\zeta\left(t,q(t)\right)\right) F\left(\zeta;q(t)\right)\le -\left(1-\zeta_{o}\right)\min_{0\le z \le 1}h'(z)<0, \end{displaymath} and the curve $q(t)$ has to merge into a singularity before the time $T_{\zeta_{o}}$. \end{proof} The formal argument for the asymptotic analysis is now rather simple, thanks to Lemma \ref{lm:4a}. We have the following observations. \begin{itemize} \item Lemma \ref{lm:4a} implies that, as $t\to +\infty$, the remaining rarefaction fan in the solution is generated near the point $(q,z)=(0,1)$. Again, since all rarefaction fronts travel along some horizontal shifts of $\phi(q)$, we see that the smooth part of the solution $\zeta(t,q)$ must approach the graph of $\phi(q)$ as $t\to+\infty$.
\item If a shock forms in the solution, by Corollary \ref{cor:ZZ3} and \eqref{RS-}-\eqref{RS+}, it must settle at the corresponding location of the stationary shock front in $Z(q)$. \end{itemize} Combining these two observations, one may conclude that $\zeta(t,q)$ converges to $Z(q)$ in $\mathbf{L}^1$ as $t\to+\infty$. However, there is a complication. It is observed in \cite{CGS,ShZh} that, because of the admissibility condition \eqref{en-z}, characteristic curves could come out of the right front of the shock in a tangent direction at some $t>0$. Therefore, the formal argument alone is not sufficient. Instead, in our proof we will construct suitable upper and lower envelopes for the solution $\zeta(t,q)$. \subsection{Upper envelopes.} The upper envelopes may take two stages, depending on the type of the stationary profile. In stage 1 we control the smooth part of the solution. We show that the rarefaction fan gets very close to the stationary traveling wave after a sufficiently long time. \begin{lemma}\label{lm:UB} Let $D$ be the total drop, and let $\varepsilon>0$ be given. We define the function $$ \phi^{+}(q)~\dot=~ \begin{cases} 1 & q=-D,\\ \phi(q+\varepsilon) \qquad &-D < q \le -\varepsilon,\\ 1 & -\varepsilon < q \le 0. \end{cases} $$ Then there exists a time $ T_1^\varepsilon$ such that, for $t\ge T_1^\varepsilon$, we have \begin{equation*} \zeta(t,q) \le \phi^{+}(q), \qquad -D<q\le 0\,.\end{equation*} If the initial data satisfies $\zeta(0,q)\le\phi(q)$ for $-\varepsilon<q<0$, we can simply take $\phi^{+}(q)=\phi(q)$. \end{lemma} \begin{proof} Let $\varepsilon>0$ be given and define $q_{o}=-\varepsilon<0$, $\zeta_{o}=\zeta\left(0,q_{o}\right)<1$. We consider the rarefaction fronts generated on the interval $\left(q_{o},0\right)$. By Lemma \ref{lm:4a} they travel along horizontal shifts of the graph of $\phi$, and therefore they stay below the graph of $\phi^{+}$. Let $t\mapsto q$ denote the characteristic curve with $q(0)=q_{o}$.
By Lemma \ref{lm:4a}, the point $(q(t),\zeta(t,q(t)))$ will merge into a singularity before the time $T_{1}^{\varepsilon}=T_{\zeta_{o}}$. After that, the smooth part of the solution is the rarefaction fan generated on the interval $\left(q_{o},0\right)$ at $t=0$. Therefore, we have $\zeta(t,q) \le \phi^{+}(q)$ for $t\ge T_{1}^{\varepsilon}$. Finally, if $\zeta(0,q)\le\phi(q)$ for $-\varepsilon<q<0$, we can simply take $\phi^{+}(q)=\phi(q)$ and carry out the whole argument in a completely similar way, completing the proof. \end{proof} If the stationary profile is of Type 1 or Type 2, the construction of the upper envelope is complete. We now consider Types 3 and 4, and enter stage 2, to control the location of the right front of the shock. \begin{lemma}\label{lm:UB2} Assume $D>{D_{hk}}$, and let $q^{+}\le 0$ be the location of the right front of the shock in the stationary traveling wave profile $Z(q)$. Let $\varepsilon>0$ be given. There exists a time $ T_2^\varepsilon$ such that $$ \zeta(t,q) \le Z^{+}(q), \qquad -D<q\le 0\,, \qquad t\ge T_1^\varepsilon+T_2^\varepsilon\,, $$ where the function $Z^{+}$ is defined as \begin{equation}\label{Z+} Z^{+}(q) ~\dot=~ \begin{cases} 1, \qquad & q=-D,\\ 0, \qquad & -D<q\le \hat q, \qquad \hat q = q^{+} - C_1 \varepsilon\,,\\ \phi^{+}(q), \qquad &\hat q < q \le 0\,. \end{cases} \end{equation} Here the constant $C_1$ does not depend on $\varepsilon$. \end{lemma} \begin{proof} Let $D>{D_{hk}}$ be the total drop, and let $\bar q$ be the intersection point of the graphs of $\phi^{+}(q)$ and $\varphi(q+D)$, such that $\phi^{+}(\bar q) = \varphi(\bar q+D)$. By Lemma \ref{lm:ZZ1}, the functions $\phi$ and $\varphi$ are strictly increasing and transversal. We have \begin{equation*} q^{+}-\bar q \le \bar C \varepsilon, \qquad \bar C = \frac{\max_{q}\phi'( q)}{\kappa}=\frac{1}{\kappa}\max_{0<z<1}\frac{h^2(z)}{h'(z)}\,, \end{equation*} where $\kappa$ is defined in Lemma \ref{lm:ZZ1}. We can choose the constant in \eqref{Z+} as $C_1=2 \bar C$.
We now construct the upper envelope, for $t\ge T_1^\varepsilon$, $$ \mathcal{Z}^{+}(t,q) ~\dot=~ \begin{cases} 1, \qquad & q=-D,\\ 0, \qquad & -D<q\le \tilde q(t) ,\\ \phi^{+}(q), \qquad &\tilde q(t) < q \le 0. \end{cases} $$ Here the front $\tilde q(t)$ travels as if it were the right front of a shock. By \eqref{eq:srv}, we have \begin{equation}\label{tqODE} \tilde q'(t) = F(\zeta;\tilde q) \cdot \frac{1-\phi^{+}(\tilde q)}{\phi^{+}(\tilde q)} \left[\psi(\tilde q+D)-h(\phi^{+}(\tilde q))\right], \qquad \tilde q(T_1^\varepsilon) = -{D_{hk}}-\varepsilon\,. \end{equation} The ODE \eqref{tqODE} is solved for $t\ge T_1^\varepsilon$, until some time $t=T_1^\varepsilon+T_2^\varepsilon$ at which the front $\tilde q(t)$ passes the one in $Z^{+}$, i.e., when $\tilde q(T_1^\varepsilon+T_2^\varepsilon) \ge q^{+}-C_{1}\varepsilon$. We now show that $T_2^\varepsilon$ is finite for any given $\varepsilon$. When $\tilde q(t)< \bar q$, the graph of $\phi^{+}$ lies strictly below the graph of $\varphi(q+D)$. Since $q^{+}-C_{1}\varepsilon < \bar q$, by continuity we have \begin{displaymath} v_{\varepsilon}=\min\left\{\varphi\left(q+D\right)-\phi^{+}(q):\quad q\le q^{+}-C_{1}\varepsilon\right\}>0. \end{displaymath} Hence, as long as $\tilde q(t)\le q^{+}-C_{1}\varepsilon$, we can compute \[ \begin{split} \tilde q'(t) &\ge \left[1-\phi^{+}(\bar q)\right] \left[\psi(\tilde q+D)-h(\phi^{+}(\tilde q))\right]\\ &\ge \left[1-\phi( q^{+})\right]\left[h\left(\varphi\left(\tilde q + D\right)\right)-h(\phi^{+}(\tilde q))\right]\\ &\ge \left[1-\phi( q^{+})\right]\cdot\min h' \cdot v_{\varepsilon}. \end{split} \] We can now conclude that $$ \mathcal{Z}^{+} (t,q) \le Z^{+}(q), \qquad \text{for}\quad t\ge T_1^\varepsilon+T_2^\varepsilon\,, \quad \text{where}\quad T_2^\varepsilon= \frac{{D_{hk}} + \varepsilon}{\left[1-\phi( q^{+})\right]\cdot\min h' \cdot v_{\varepsilon}}\,.
$$ It remains to show that the solution $\zeta(t,q)$ satisfies \begin{equation}\label{comp} \zeta(t,q)\le \mathcal{Z}^{+}(t,q)\qquad \text{for} \quad T_1^\varepsilon \le t \le T_1^\varepsilon+ T_2^\varepsilon\,. \end{equation} Indeed, by construction \eqref{comp} holds for $t=T_1^\varepsilon$ because \begin{equation*} \mathcal{Z}^{+}(T_1^\varepsilon,q)=\phi^{+}(q) \ge \zeta(T_1^\varepsilon,q)\,, \end{equation*} where $\phi^{+}$ is the function defined in Lemma \ref{lm:UB}. Now we consider a later time $\bar t\ge T_1^\varepsilon$. By Lemma \ref{lm:4a}, the smooth part of the solution $\zeta(\bar t,q)$ remains below the graph of $\phi^{+}(q)$. It is then enough to check the speed of the right front of the possible shock. Assume that $\zeta(\bar t,q)$ has a shock, with its right front at $\check q(\bar t)$, and \begin{equation*} \check q(\bar t)=\tilde q(\bar t), \qquad \zeta(\bar t, \check q(\bar t)+) \le \phi^{+}(\tilde q(\bar t)). \end{equation*} By \eqref{eq:srv} we clearly have $ \check q'(\bar t) \ge \tilde q'(\bar t) $, so the graph of $\zeta(t,q)$ remains below that of $\mathcal{Z}^{+}(t,q)$, completing the proof. \end{proof} The two stages are illustrated in Figure \ref{fig:upper3} for Type 3. \begin{figure} \caption{Upper envelope, for Type 3, with Stage 1 (left) and Stage 2 (right).} \label{fig:upper3} \end{figure} \subsection{Lower envelopes.} For Type 4, the lower envelope is trivial, since one can simply take the stationary traveling wave profile $Z(q)$. We now construct the lower envelopes for Type 1, 2, and 3, following a similar line of arguments as those for the upper envelopes. In Stage 1, we show that the rarefaction fan approaches the stationary traveling wave profile as $t$ grows. \begin{lemma}\label{lm:LB} Assume $D<D_{ss}$ and let $\varepsilon>0$ be given.
We define the function \begin{equation}\label{phi-} \phi^{-}(q) ~\dot=~ \begin{cases} 1, & q=-D,\\ 0,& -D<q< q_1, \\ \phi(q-\varepsilon), \qquad &q_1 \le q \le 0. \end{cases} \end{equation} Here the value $q_1$ is determined as follows. If $D$ is so small that the graph of $\phi(q-\varepsilon)$ lies completely above the graph of $\eta(q+D)$, we let $q_1=-D$, and we remove the second line with $-D<q<q_1$ in the definition \eqref{phi-}. Otherwise, we let $q_1$ be the right-most intersection point of those two graphs. Then, there exists a time $\mathcal{T}_1^\varepsilon$, such that \begin{equation}\label{eq:LB} \zeta(t,q) \ge \phi^{-}(q), \qquad t\ge \mathcal{T}_1^\varepsilon. \end{equation} \end{lemma} \begin{proof} The proof follows a similar structure as the one for Lemma \ref{lm:UB}, with some modifications. Let $\varepsilon>0$ be given. Since $\phi^{-}(0)=\phi(-\varepsilon)<1$, by continuity there exists $q_{o}<0$ such that \begin{equation*} \zeta(0,q) \ge \phi^{-}(q), \qquad \text{for} \quad q_{o} \le q \le 0, \quad \zeta_{o}=\zeta\left(0,q_{o}\right)<1. \end{equation*} We consider the rarefaction fronts generated on $[q_{o},0]$ and let $t\mapsto q(t)$ be the characteristic curve with $q(0)=q_{o}$. Then, the point $(q(t),\zeta(t,q(t)))$ travels along some shift of the graph of $\phi$ until it merges into a singularity. There are two possibilities. (1). If $D$ is so small that the graph of $\phi$ lies completely above the graph of $\eta(q+D)$, then the characteristic will reach $q=-D$ and enter the downward jump at $q=-D$. (2). If the two graphs intersect, then the right-most position of the right front of a possible shock is $q_1$, and \eqref{eq:LB} holds. If the actual shock front is to the left of $q_1$, or the characteristic does not enter any shock, we still have \eqref{eq:LB}. By Lemma \ref{lm:4a} we have $\mathcal{T}_1^\varepsilon=T_{\zeta_{o}}$, which is finite because $\zeta_{o}<1$, completing the proof.
\end{proof} If $D$ is so small that $q_1=-D$ in Lemma \ref{lm:LB}, the lower envelopes are complete. For the rest of the subsection, we assume $q_1 > -D$. We now enter Stage 2, and we control the location of the right front of the shock. We will again use the symbols $\bar q, \tilde q, \hat q, \bar C$, etc., for different values, without causing any confusion. \begin{lemma}\label{lm:LB2} Assume $D<D_{ss}$ and $q_1>-D$ in \eqref{phi-}. Let $q^{+}$ be the location of the right front of the stationary shock for Type 3, and let $q^{+}=-D$ if it is of Type 1 or 2. Let $\varepsilon>0$ be given. There exists a time $\mathcal{T}_2^\varepsilon$, such that \begin{equation*} \zeta(t,q) \ge Z^{-}(q), \qquad -D\le q\le 0, \quad \text{for} \quad t \ge \mathcal{T}_1^\varepsilon+\mathcal{T}_2^\varepsilon. \end{equation*} Here the function $Z^{-}$ is defined as follows. For Type 1, we let \[ Z^{-}(q) ~\dot=~ \begin{cases} 1, & q=-D,\\ \phi(q-\varepsilon), \qquad & -D < q \le 0. \end{cases} \] For Type 2 and 3, we let \begin{equation}\label{Z-} Z^{-}(q) ~\dot=~ \begin{cases} 1, & q=-D,\\ 0,& -D<q\le \hat q , \qquad \hat q= q^{+}+C_2\varepsilon , \\ \phi(q-\varepsilon), \qquad & \hat q < q \le 0. \end{cases} \end{equation} The constant $C_2$ does not depend on $\varepsilon$. \end{lemma} \begin{proof} Again, the proof follows a similar line of arguments as for Lemma \ref{lm:UB2}, with modifications. Let $\bar q$ be the intersection point of the graphs of $\phi(q-\varepsilon)$ and $\varphi(q+D)$. We have \begin{equation*} \bar q-q^{+} \le \bar C \varepsilon, \qquad \bar C =\frac{1}{\kappa} \cdot \max_{0<z<1} \frac{h^2(z)}{h'(z)}\,. \end{equation*} We can now choose the constant in \eqref{Z-} as $C_2=2 \bar C$. The lower envelopes are defined as follows, for $t\ge \mathcal{T}_1^\varepsilon$, \[ \mathcal{Z}^{-}(t,q) ~\dot=~ \begin{cases} 1, \qquad & q=-D,\\ 0, \qquad & -D<q\le \tilde q(t),\\ \phi(q-\varepsilon), \qquad &\tilde q(t) < q \le 0.
\end{cases} \] Here the front $\tilde q(t)$ travels with the speed as if it were the right front of a shock. By \eqref{eq:srv}, $\tilde q(t)$ satisfies the ODE, for $t\ge \mathcal{T}_1^\varepsilon$, \begin{equation}\label{tqODE-} \tilde q'(t) = F(\zeta;\tilde q) \cdot \frac{1-\phi(\tilde q-\varepsilon)}{\phi(\tilde q-\varepsilon)} \left[\psi(\tilde q+D)-h(\phi(\tilde q-\varepsilon))\right], \qquad \tilde q(\mathcal{T}_1^\varepsilon) = q_1\,. \end{equation} The ODE \eqref{tqODE-} is solved for $t\ge \mathcal{T}_1^\varepsilon$, until some time $t=\mathcal{T}_1^\varepsilon+\mathcal{T}_2^\varepsilon$ at which the front $\tilde q(t)$ passes the corresponding front in $Z^{-}$, i.e., when $\tilde q(\mathcal{T}_1^\varepsilon+\mathcal{T}_2^\varepsilon) \le \hat q$. We now show that $\mathcal{T}_2^\varepsilon$ is finite for any given $\varepsilon$. As in the proof of Lemma \ref{lm:UB2}, observe that when $\tilde q(t)> \bar q$ the graph of $\phi^{-}$ lies strictly above the graph of $\varphi(q+D)$. Therefore, by continuity and since $q^{+}+C_{2}\varepsilon > \bar q$, we have \begin{displaymath} v_{\varepsilon}=-\max\left\{\varphi\left(q+D\right)-\phi^{-}(q):\quad q\ge q^{+}+C_{2}\varepsilon\right\}>0. \end{displaymath} Hence, as long as $q^{+}+C_{2}\varepsilon \le \tilde q(t)\le q_{1}$, we can compute \[ \begin{split} \tilde q'(t) &\le \left[1-\phi^{-}(\tilde q)\right] \left[\psi(\tilde q+D)-h(\phi(\tilde q-\varepsilon))\right]\\ &\le \left[1-\phi(q_{1})\right]\left[h\left(\varphi\left(\tilde q + D\right)\right)-h(\phi(\tilde q-\varepsilon))\right]\\ &\le -\left[1-\phi( q_{1})\right]\cdot\min h' \cdot v_{\varepsilon}. \end{split} \] We can now conclude that \[ \mathcal{Z}^{-} (t,q) \ge Z^{-}(q), \qquad \text{for}\quad t\ge \mathcal{T}_1^\varepsilon+\mathcal{T}_2^\varepsilon\,, \quad \text{where}\quad \mathcal{T}_2^\varepsilon= \frac{{D_{hk}} + \varepsilon}{\left[1-\phi( q_{1})\right]\cdot\min h' \cdot v_{\varepsilon}}\,.
\] We note that for Type 1, the front $\tilde q(t)$ would merge into $q=-D$ at some $t<\mathcal{T}_1^\varepsilon+\mathcal{T}_2^\varepsilon$. It remains to show that the solution $\zeta(t,q)$ satisfies \begin{equation}\label{comp-} \zeta(t,q)\ge \mathcal{Z}^{-}(t,q)\qquad \text{for} \quad \mathcal{T}_1^\varepsilon \le t \le \mathcal{T}_1^\varepsilon+\mathcal{T}_2^\varepsilon\,. \end{equation} Indeed, by construction \eqref{comp-} holds for $t=\mathcal{T}_1^\varepsilon$, since $\mathcal{Z}^{-}(\mathcal{T}_1^\varepsilon,q)=\phi^{-}(q)$, where $\phi^{-}$ is the function defined in Lemma \ref{lm:LB}. Now we consider a later time $\bar t\ge \mathcal{T}_1^\varepsilon$. By Lemma \ref{lm:4a}, the smooth part of the solution $\zeta(\bar t,q)$ remains above the graph of $\phi(q-\varepsilon)$. It is then enough to check the speed of the right front of the possible shock. Assume that $\zeta(\bar t,q)$ has a shock, with its right front at $\check q(\bar t)$, and \begin{equation*} \check q(\bar t)=\tilde q(\bar t), \qquad \zeta(\bar t, \check q(\bar t)+) \ge \phi(\tilde q(\bar t)-\varepsilon). \end{equation*} By \eqref{eq:srv} we clearly have $ \check q'(\bar t) \le \tilde q'(\bar t) $, so the graph of $\zeta(t,q)$ remains above that of $\mathcal{Z}^{-}(t,q)$, completing the proof. \end{proof} The two stages are illustrated in Figure \ref{fig:lower3} for Type 3. \begin{figure} \caption{Lower envelope, for Type 3, with Stage 1 (left) and Stage 2 (right).} \label{fig:lower3} \end{figure} We now combine all the estimates and prove Theorem \ref{thm:stab}. \begin{proof} (of Theorem \ref{thm:stab}) Let $\varepsilon>0$ be given. By Lemma \ref{lm:UB2} and Lemma \ref{lm:LB2}, there exists a time \begin{equation*} T^\varepsilon = \max \left\{ T_1^\varepsilon+T_2^\varepsilon \,, \, \mathcal{T}_1^\varepsilon+\mathcal{T}_2^\varepsilon \right\}, \end{equation*} such that for $t\ge T^\varepsilon $, we have \begin{equation}\label{eq:squeeze} Z^{-}(q) \le \zeta(t,q) \le Z^{+}(q), \qquad -D\le q \le 0\,.
\end{equation} Here $Z^{-}$ and $Z^{+}$ are defined in \eqref{Z-} and \eqref{Z+}, respectively, and they satisfy \begin{equation}\label{eq:squeeze2} Z^{-}(q) \le Z(q) \le Z^{+}(q), \qquad -D\le q \le 0\,, \end{equation} \begin{equation}\label{eq:squeeze3} \left\| Z^{-} - Z\right\|_{\mathbf{L}^1} \le C_2 \varepsilon, \qquad \left\| Z^{+} - Z\right\|_{\mathbf{L}^1} \le C_1 \varepsilon, \qquad \left\| Z^{+} - Z^-\right\|_{\mathbf{L}^1} \le (C_1+C_2) \varepsilon, \end{equation} where the constants $C_1,C_2$ do not depend on $\varepsilon$. Thanks to \eqref{eq:squeeze}-\eqref{eq:squeeze3}, we now conclude \begin{equation*} \left\| \zeta(t,\cdot) - Z(\cdot)\right\|_{\mathbf{L}^1} \le \left\| Z^{+} - Z^-\right\|_{\mathbf{L}^1} \le (C_1+C_2) \varepsilon,\qquad t\ge T^\varepsilon\,, \end{equation*} completing the proof. \end{proof} \section{A Numerical Example} In this section we present a numerical simulation of \eqref{1.3}. We generate piecewise constant approximate solutions using an extended version of the wave front tracking algorithm described in \cite{CGS,ShZh}. We use the erosion function $$g(z)=\left(1-z\right)\left(\frac{1}{2}+z\right)$$ and the initial data \begin{displaymath} z_{o}(u)= \begin{cases} 1&\text{ for }u\le 0\,,\\ 0&\text{ for }0<u\le 0.6\,,\\ \exp\left(-\frac{1}{2}\left(u+0.11\right)\right)&\text{ for }0.6<u\,. \end{cases} \end{displaymath} With this initial data, the traveling wave profile is of Type 3. Solutions are plotted for nine different values of $t$, for both the functions $\zeta(t,q)$ and $z(t,u)$, in Figure \ref{fig:qzgraphics} and Figure \ref{fig:uzgraphics}, respectively. In Figure \ref{fig:qzgraphics} we also plotted the stationary Type 3 traveling wave $Z(q)$ (in red) together with the solution $\zeta(t,q)$. As we observe in Figure \ref{fig:uzgraphics}, the traveling waves for the solutions $z(t,u)$ are not stationary. We clearly observe the moving traveling wave in the last 2-3 plots in Figure \ref{fig:uzgraphics}.
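As a quick numerical sanity check on this setup, one can verify the value $g'(1)$ for the erosion function above, since it governs the constant speed of the traveling waves for $z(t,u)$ through the relation $\tilde\sigma=-g'(1)$. The standalone sketch below (function and variable names are ours, not part of the front tracking scheme of \cite{CGS,ShZh}) uses a central difference:

```python
def g(z):
    """Erosion function used in the simulation: g(z) = (1-z)(1/2+z)."""
    return (1 - z) * (0.5 + z)

h = 1e-6
dg1 = (g(1 + h) - g(1 - h)) / (2 * h)   # central difference approximating g'(1)
assert abs(dg1 - (-1.5)) < 1e-9          # indeed g'(z) = 1/2 - 2z, so g'(1) = -3/2

speed = -dg1                             # predicted speed of the profile in the (t,u) variables
assert abs(speed - 1.5) < 1e-9
```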
It is interesting to observe that waves of Type 1, 2, and 3 all travel with the same constant speed $\tilde \sigma$ which can be deduced from \eqref{sigma}, i.e., \begin{displaymath} \tilde\sigma = \frac{1}{\sigma}=\frac{1}{f'(1)}= -g'(1)=\frac{3}{2}=1.5. \end{displaymath} A Type 4 wave travels with the shock speed. This simulation also demonstrates the complexity of the transient dynamics of the wave formation and interaction. One observes that the shock in the initial data disappears as the rarefaction wave on the right merges into the shock, only to reappear later as a new shock forms. \begin{figure}\caption{The solution $\zeta(t,q)$ at nine different times, together with the stationary Type 3 profile $Z(q)$ (in red).}\label{fig:qzgraphics} \end{figure} \begin{figure}\caption{The solution $z(t,u)$ at nine different times.}\label{fig:uzgraphics} \end{figure} \section{Concluding remarks} In this paper we prove the existence of traveling wave profiles for an integro-differential equation modeling slow erosion of granular flow. Such profiles are unique for each given total drop $D$. Furthermore, we show that these profiles provide local attractors for the solutions of the Cauchy problem. We now conclude the paper with several final remarks. \begin{remark}\label{rmk1} The basin of attraction of the traveling wave profile is actually much larger, and the initial data does not need to be non-decreasing. The initial data $\zeta_0(q)=\zeta(0,q)$ only needs to satisfy the following. For some $\epsilon>0$, we have \begin{eqnarray} \zeta_0(-D)=\zeta_0(0)=1,\quad \zeta_0(q) \le 1- C\epsilon, && (-D+\epsilon\le q\le -\epsilon) \label{cr1}\\ \zeta_0(q_1)-\zeta_0(q_2) \ge 0, && (-\epsilon \le q_2<q_1<0)\label{cr2}\\ \zeta_0(q_4)-\zeta_0(q_3) \ge 0, && (-D \le q_4<q_3< -D+\epsilon) \label{cr3}\\ \text{TV}\{\zeta_0\} \le M,\quad \| \zeta_0-1 \|_{\mathbf{L}^1} \le M.&&\label{cr4} \end{eqnarray} From \cite{CGS} we see that for general initial data with bounded variation, the total variation of $\zeta(t,\cdot)$ can grow exponentially in $t$.
However for this simpler case \eqref{cr1}-\eqref{cr4}, one should be able to improve the BV estimate and actually obtain a bound that is uniform in $t$. We now provide a formal argument. Consider a characteristic curve $t\mapsto q(t)$ initiated on the interval $[-D+\epsilon,-\epsilon]$. As long as $\zeta(t,q(t))>0$, we have $\dot q <-c_0 \epsilon$ and $\dot \zeta(t,q(t))<-c_0 \epsilon$, i.e., the characteristic curve travels strictly to the left, and the $\zeta$ value is strictly decreasing along the characteristics. In finite time, this curve will either enter a shock such that $\zeta=0$, or reach $q=-D$. This implies that all the singularities would finish all possible interactions in finite time, and after that the solution $\zeta(t,q)$ will be non-decreasing, as in the assumption \eqref{z02}. Therefore, after finite time the total variation of $\zeta(t,\cdot)$ will be bounded by $2$. Then one can apply the result in this paper and obtain the asymptotic behavior. \end{remark} \begin{remark}\label{rmk2} At this point, we also conjecture that our result could be extended to general BV initial data $z_{o}(u)$, provided that the total drop $D$ is positive. Due to the nonlinearity of the erosion function, all singularities will interact and merge into a single singularity in finite time, even though the transient dynamics could be very complicated. The solution will satisfy the assumption \eqref{z02} in finite time. It should be possible to carry out a rigorous analysis through piecewise constant approximate solutions generated by the front tracking algorithm. \end{remark} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \hypersetup { pdfauthor={Georg Moser, Maria Anna Schett}, pdfsubject={Extended Abstract for TERMGRAPH 2016}, pdftitle={Kruskal's Tree Theorem for Acyclic Term Graphs}, pdfkeywords={Term Graph Rewriting, Termination, Kruskal's Tree Theorem} } \title{Kruskal's Tree Theorem for Acyclic Term Graphs\footnote{This work was partially supported by FWF (Austrian Science Fund) project P\ 25781-N15.}} \newcommand*{\cmt}[1]{\textcolor{White}{\colorbox{PineGreen}{\texttt{#1}}}} \newcommand{\MS}[1]{{\leavevmode\color{Periwinkle}{#1}}} \newcommand{\GM}[1]{{\leavevmode\color{Emerald}{#1}}} \newcommand*{\df}{\mathrel{\mathrel{\mathop:}=}} \newcommand*{\F}{\mathcal{F}} \newcommand*{\V}{\mathcal{V}} \newcommand*{\ar}{\mathsf{arity}} \newcommand*{\asq}{\mathbf{a}} \newcommand*{\mbsq}{\mathbf{T}} \newcommand*{\argsq}{\mathbf{H}} \newcommand*{\topf}{\mathbf{f}} \newcommand*{\wqo}{\leqslant} \newcommand*{\simpo}{\prec} \newcommand*{\domein}{\mathrel{\mid}} \newcommand*{\pr}{\text{'}} \newcommand*{\grw}{\to_{\mathcal{G}}} \newcommand*{\lex}{<_{\mathsf{lex}}} \newcommand*{\lof}{\ll} \newcommand*{\rd}{\leftarrow} \newcommand*{\un}{\oplus} \newcommand*{\lpo}{<_{\mathsf{lpo}}} \newcommand*{\lpoeq}{\leqslant_{\mathsf{lpo}}} \newcommand*{\toplex}{\topemb_{\mathsf{lex}}} \newcommand*{\lexeq}{=_{\mathsf{lex}}} \newcommand*{\ndes}{\mathcal{N}} \newcommand*{\succs}{\mathsf{succ}} \newcommand*{\succi}[1]{\overset{#1}{\rightharpoonup}} \newcommand*{\lb}{\mathsf{label}} \newcommand*{\rt}{\mathsf{root}} \newcommand*{\inlets}{\mathsf{inlets}} \newcommand*{\sub}{{\upharpoonright}} \newcommand*{\ctg}{\vartriangle} \newcommand*{\TG}{\mathcal{TG}} \newcommand*{\G}{\mathcal{G}} \newcommand{\Pos}{\mathsf{Pos}} \newcommand*{\fleq}{\mathrel{\trianglerighteq}} \newcommand*{\ufleq}{\mathrel{\trianglelefteq}} \newcommand*{\iso}{\cong} \newcommand*{\Top}{\mathsf{Top}} \newcommand*{\Tops}{\mathsf{Tops}} \newcommand*{\topemb}{\sqsubset} \newcommand*{\topembeq}{\sqsubseteq}
\newcommand*{\stopemb}{\sqsupset} \newcommand*{\stopembeq}{\sqsupseteq} \newcommand*{\ilab}{\ctg} \newcommand*{\emb}{\sqsubset_{\mathsf{emb}}} \newcommand*{\embeq}{\sqsubseteq_{\mathsf{emb}}} \newcommand*{\semb}{\sqsupset_{\mathsf{emb}}} \newcommand*{\sembeq}{\sqsupseteq_{\mathsf{emb}}} \newcommand*{\args}{\mathsf{arg}} \newcommand*{\pembeq}{\embeq^\text{\cite{1997_plump}}} \newcommand*{\psembeq}{\sembeq^\text{\cite{1997_plump}}} \newcommand*{\mm}[2]{m(\cial{#1}) = \cial{#2}} \newcommand*{\ci}[1]{\tikz{ \node[circle, draw, inner sep=1.5pt, minimum size = 9pt] (char) {\tiny{#1}};}} \newcommand*{\cial}[1]{\tikz [baseline=(char.base)] { \node[circle, draw, inner sep=1.5pt, minimum size = 9pt] (char) {\tiny{#1}};}} \tikzstyle{n}=[rectangle, inner sep=2pt, outer sep=1pt, minimum size=12pt, node distance = 0.2cm and 0.2cm] \newcommand{\nn}[2]{$#1\mathrel{:}\cial{#2}$} \tikzstyle{nme}=[node distance = 0.5cm] \tikzstyle{mo}=[|->, dashed] \newcommand*{\fun}[1]{\mathsf{#1}} \newcommand*{\f}{\fun{f}} \newcommand*{\fa}{\fun{a}} \newcommand*{\g}{\fun{g}} \newcommand*{\fb}{\fun{b}} \newcommand*{\femb}[3]{ \node [n, xshift= #3 cm] (1) {$\f$}; \node [n, xshift=0.5cm, below left = of 1] (2) {$#1$}; \node [n, xshift=-0.5cm, below right = of 1] (3) {$#2$}; \path[->] (1) edge (2) (1) edge (3) ; } \newcommand*{\fembs}[2]{ \node [n, xshift= #2 cm] (1) {$\f$}; \node [n, below = of 1] (2) {$#1$}; \path[->] (1) edge [bend left] (2) (1) edge [bend right] (2) ; } \newcommand*{\tct}{\textsf{T\kern-0.2em\raisebox{-0.3em}C\kern-0.2emT}} \newcommand*{\aprove}{\textsf{AProVE}} \begin{abstract} In this paper we study termination of term graph rewriting, where we restrict our attention to \emph{acyclic} term graphs. Motivated by earlier work by Plump we aim at a definition of the notion of \emph{simplification order} for acyclic term graphs. For this we adapt the homeomorphic embedding relation to term graphs. In contrast to earlier extensions, our notion is inspired by morphisms. 
Based on this, we establish a variant of Kruskal's Tree Theorem formulated for acyclic term graphs. In the proof, we rely on the new notion of embedding and follow Nash-Williams' minimal bad sequence argument. Finally, we propose a variant of the \emph{lexicographic path order} for acyclic term graphs. \end{abstract} \section{Introduction} It is well-known that term graph rewriting is \emph{adequate} for term rewriting. However, this requires suitable care in the treatment of sharing, typically achieved by extending the term graph rewrite relation with \emph{sharing} (aka \emph{folding}) steps and \emph{unsharing} (aka \emph{unfolding}) steps, cf.~\cite{1998_plump,TeReSeC13}. If one focuses on term graph rewriting alone, then it is well-known that termination of a given graph rewrite system does not imply termination of the corresponding term rewrite system~\cite{1997_plump}. This follows as the representation of a term as a graph enables us to share equal subterms. However, if we do not provide the possibility to unshare equal subterms, we change the potential rewrite steps. Then not every term rewrite step can be simulated by a graph rewrite step. This motivates our interest in termination techniques \emph{directly} for term graph rewriting. More generally, our motivation to study term graph rewriting stems from ongoing work on complexity or termination analysis of programs based on transformation to term rewrite systems (see e.g.~\cite{GRSST:11,BMOG12,SGBFFHS14,ADM:2015,AM:16}). In particular, work on termination of imperative programs (see~e.g.~\cite{SGBFFHS14}) requires a term representation of the heap, which would be much more naturally encoded as term dags (see the definition below). However, complexity and termination analysis of term graph rewrite systems have only recently received attention in the literature~\cite{2013_bonfante_et_al,BKZ:14,2015_bruggink_et_al,AM:2016}.
In particular, at the moment there are no automated tools that would allow an application to program analysis and could be compared to existing approaches using either \aprove~\cite{GBEFFOPSSST14} or \tct~\cite{AMS:16}. In our definition of term graph rewriting we essentially follow Barendsen~\cite{TeReSeC13}, but also~\cite{1987_barendregt_et_al,2013_avanzini}, which are notationally closest to our presentation. We restrict our attention to term graphs that represent (finite) terms; that is, in our context term graphs are directed, \emph{rooted}, and \emph{acyclic} graphs with node labels over a set of function symbols and variables. In term rewriting, termination is typically established via compatibility with a reduction order. Well-foundedness of such an order is more often than not a consequence of Kruskal's Tree Theorem~\cite{1960_kruskal} (e.g. in~\cite{1982_dershowitz}). In particular, Kruskal's Tree Theorem underlies the concept of simple termination (see e.g.~\cite{1997_middeldorp_et_al}). Indeed, Plump~\cite{1997_plump} defines a simplification order for acyclic term graphs. This order relies on the notion of \emph{tops}. The top of a term graph is its root \emph{and} its direct successors---thus keeping information on how these successors are shared. We recall briefly. Let $\wqo$ be a partial order. If in every infinite sequence $a_1, a_2, \ldots$ we can find two elements $a_i, a_j$ with $i < j$ such that $a_i \wqo a_j$, then $\wqo$ is a well-quasi order. Now, Kruskal's Tree Theorem states, in a formulation suited to our needs, that given a well-quasi order $\topembeq$ on the symbols in a term, the \emph{homeomorphic embedding} relation $\embeq$ is a well-quasi order on terms. We consider term graphs, not terms, and our symbols are tops. Usually, the relation $\embeq$ is simply called an \emph{embedding}.
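For intuition, the defining property of a well-quasi order can be probed on finite prefixes of a sequence. The standalone sketch below (names and representation are ours) uses the subsequence order on words, a well-quasi order by Higman's lemma and a special case of homeomorphic embedding, and searches a finite sequence for a \emph{good pair}, i.e., $i < j$ with the $i$-th word embedded in the $j$-th:

```python
from itertools import combinations

def is_subseq(s, t):
    """Is s a (not necessarily contiguous) subsequence of t?"""
    it = iter(t)
    return all(c in it for c in s)   # each character of s is found, in order, in t

def good_pair(seq):
    """Return the first pair (i, j), i < j, in lexicographic order such that
    seq[i] is embedded in seq[j]; a well-quasi order guarantees that every
    infinite sequence contains such a good pair."""
    for i, j in combinations(range(len(seq)), 2):
        if is_subseq(seq[i], seq[j]):
            return (i, j)
    return None

# "ba" is embedded neither in "ab" nor in "aa", but it is embedded in "bab"
assert not is_subseq("ba", "ab")
assert good_pair(["ba", "ab", "aa", "bab"]) == (0, 3)
```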
Plump~\cite{1997_plump} defines the embedding $\pembeq$, but as he notes, for the following two term graphs, his definition of $\pembeq$ holds in both directions. \begin{center} \pdftooltip{ \begin{tikzpicture} \node [n] (1) {$\f$}; \node [n, xshift=0.5cm, below left = of 1] (2) {$\g$}; \node [n, xshift=0.25cm, below = of 2] (4) {$\fa$}; \path[->] (1) edge (2) (1) edge [bend left] (4) (2) edge (4) ; \node [xshift=1.3cm, yshift=-0.4cm] (E) {$\pembeq$}; \node [n,xshift = 2.5cm] (1) {$\f$}; \node [n, xshift=0.5cm, below left = of 1] (2) {$\g$}; \node [n, below = of 2] (4) {$\fa$}; \node [n, xshift=-0.5cm, below right = of 1] (3) {$\fa$}; \path[->] (1) edge (2) (1) edge (3) (2) edge (4) ; \node [xshift=4.5cm, yshift=-0.4cm] (E) { \begin{tabular}{l} but also \end{tabular} }; \node [n, xshift = 6.5cm] (1) {$\f$}; \node [n, xshift=0.5cm, below left = of 1] (2) {$\g$}; \node [n, below = of 2] (4) {$\fa$}; \node [n, xshift=-0.5cm, below right = of 1] (3) {$\fa$}; \path[->] (1) edge (2) (1) edge (3) (2) edge (4) ; \node [xshift=7.8cm, yshift=-0.4cm] (E) {$\pembeq$}; \node [n, xshift = 9.2cm] (1) {$\f$}; \node [n, xshift=0.5cm, below left = of 1] (2) {$\g$}; \node [n, xshift=0.25cm, below = of 2] (4) {$\fa$}; \path[->] (1) edge (2) (1) edge [bend left] (4) (2) edge (4) ; \end{tikzpicture} } { The term f(g(a),a) can be represented in two ways: once with the a shared, and once where it is not shared. They are, however, mutually embedded with respect to the embedding relation defined in \cite{1997_plump}. } \end{center} In particular, \cite{1997_plump} does not take sharing into account---except for direct successors through tops. This is a consequence of identifying each sub-graph independently. This is the inspiration and starting point for our work: We want to define an embedding relation, which also takes sharing into account. With this new embedding relation we re-prove Kruskal's Tree Theorem. 
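This behaviour can be reproduced concretely. In the following standalone sketch (the dictionary representation and helper names are ours, not the paper's), both the shared and the unshared dag unfold to the same term $\f(\g(\fa),\fa)$, which is exactly why a relation that inspects only the unfolded term structure cannot distinguish them:

```python
def unfold(succs, lab, n):
    """Unfold a term dag (succs, lab) from node n into its term representation."""
    args = [unfold(succs, lab, c) for c in succs[n]]
    return lab[n] + ("(" + ",".join(args) + ")" if args else "")

# f(g(a),a) with the node labelled a shared ...
shared   = ({1: [2, 4], 2: [4], 4: []}, {1: "f", 2: "g", 4: "a"})
# ... and the same term without sharing
unshared = ({1: [2, 3], 2: [4], 3: [], 4: []}, {1: "f", 2: "g", 3: "a", 4: "a"})

# both dags denote the same term, so an embedding that only looks at the
# unfolded term structure cannot tell them apart
assert unfold(*shared, 1) == unfold(*unshared, 1) == "f(g(a),a)"
```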
Here, too, we take a slightly different approach from \cite{1997_plump}, which relies on an encoding of tops as function symbols with different arities. There it is stated that a direct proof based on~\cite{1963_nash-williams} is possible, which is the route we take. As already mentioned, the context of this paper is the quest for termination techniques for term graph rewriting. Here \emph{termination} refers to the well-foundedness of the graph rewrite relation $\grw$, induced by a graph rewrite system $\mathcal{G}$, cf.~\cite{TeReSeC13}. In particular, we seek a technique based on orders. This is in contrast to related work in the literature. There termination is typically obtained through interpretations or weights, cf.~Bonfante et al.~\cite{2013_bonfante_et_al}. Also Bruggink et al.~\cite{BKZ:14,2015_bruggink_et_al} use an interpretation method, where they use type graphs to assign weights to graphs to prove termination. Finally, in~\cite{AM:2016} complexity of acyclic term graph rewriting is investigated, based on the use of interpretations and suitable adaptations of the dependency pair framework. This paper is structured as follows. The next section provides basic definitions. In Section~\ref{Embedding} we discuss potential adaptations of the homeomorphic embedding relation to term graphs and establish a suitable notion that extends the notion of \emph{collapse} known from the literature. Section~\ref{Kruskal} establishes our generalisation of Kruskal's Tree Theorem to acyclic term graphs. In Section~\ref{Simplification:Orders} we establish a new notion of simplification orders. Finally, in Section~\ref{Conclusion} we conclude and mention future work. \section{Preliminaries} First, we introduce our flavour of term graphs based on term dags, define term graph rewriting in our context, and give the collapse relation. Then we investigate tops with respect to a function symbol but also with respect to a node in a term graph.
Based on this, we will consider a precedence on tops. \begin{definition} Let $\ndes$ be a set of nodes, $\F$ a set of function symbols, and $\V$ a set of variables. A \emph{graph} is $G = (N, \succs, \lb)$, where $N \subseteq \ndes $, $\succs: N \to N^*$, and $\lb: N \to {\F \cup \V}$. Here, $\succs$ maps a node $n$ to an ordered list of \emph{successors} $[n_1, \ldots, n_k]$. Further, $\lb$ assigns labels, where \begin{enumerate*}[label=\itshape(\roman*)] \item for every node $n \in G$ with $\lb(n) = f \in \F$ we have $\succs(n) = [n_1, \dots , n_{\ar(f)}]$, and \item for every $n \in G$ with $\lb(n) \in \V$, we have $\succs(n) = [~]$. \end{enumerate*} If $G$ is acyclic, then $G$ is a \emph{term dag}. \end{definition} The \emph{size} of a graph, denoted $|G|$, is the number of its nodes. We write $n \in G$ and mean $n \in N$, and call $G$ \emph{ground}, if $\lb: N \to \F$. If $\succs(n) = [\ldots, n_i, \ldots]$, we write $n \succi{i} n_i$, or simply $n \succi{} n_i$ for any $i$. Further,~$\succi{}^+$ is the transitive, and $\succi{}^*$ the reflexive, transitive closure of $\succi{}$. If $n \succi{}^* n\pr$, then $n\pr$ is \emph{reachable} from $n$. In the sub-graph $G \sub [n_1, \ldots, n_k]$ all nodes reachable from $n_1, \ldots, n_k$ are collected, i.e.\ $N = \{ n \mid n_i \succi{}^* n, 1 \leqslant i \leqslant k \}$, and the domains of $\succs$ and $\lb$ are restricted accordingly. \begin{definition} Let $T$ be a term dag. If all nodes are reachable from one node called $\rt(T)$, that is, $T$ is \emph{rooted}, then $T$ is a \emph{term graph} with $\inlets \df \succs(\rt(T))$. For a term graph $G$ with $\inlets = [t_1, \ldots, t_n]$, the \emph{argument graph} is defined as $G \sub \inlets\pr$, where $\inlets\pr \df \succs(t_1) \cdots \succs(t_n)$.
\end{definition} \noindent \begin{minipage}[t]{.85\linewidth} \begin{example} \label{ex:termgraph} On the right we show the term graph $T = (\{\cial{1}, \cial{2}\}, \succs, \lb)$, with $\succs: \cial{1} \mapsto [\cial{2}, \cial{2}], \cial{2} \mapsto [~]$, and $\lb: \cial{1} \mapsto \f, \cial{2} \mapsto \fa$. The term representation of $T$ is $\f(\fa,\fa)$, $|T| = 2$, and $T$ is ground. The argument graph of $T$ is ~ \nn{\fa}{2} ~ with $\inlets = [\cial{2}, \cial{2}]$. \end{example} \end{minipage} \begin{minipage}[t]{.1\linewidth} \pdftooltip{ \begin{tikzpicture}[anchor=base, baseline=10pt] \node [n] (1) {\nn{\f}{1}}; \node [n, below = of 1] (2) {\nn{\fa}{2}}; \path[->] (1) edge [bend left] (2) (1) edge [bend right] (2) ; \end{tikzpicture} } {A term graph representing the term f(a,a). Here the "a" is shared. The root node with label "f" has the node number 1. The argument node with label "a" has the node number 2. There are two edges from node 1 to node 2.} \end{minipage} \newline A \emph{graph rewrite rule} is a term dag $G$ with a root node~$l$ of the left hand side, and a root node~$r$ of the right hand side. We denote a graph rewrite rule by $L \to R$, where $G \sub [l] = L $ and $G \sub [r] = R$. For a graph rewrite rule the following has to hold: \begin{enumerate*}[label=\itshape(\roman*)] \item $\lb(l) \not\in \V$, \item if $n \in R$ with $\lb(n) \in \V$ then $n \in L$, and \item for all nodes $n, n\pr \in G$, if $\lb(n) = \lb(n\pr) \in \V$ then $n=n\pr$. \end{enumerate*} A \emph{graph rewrite system (GRS)} $\G$ is a set of graph rewrite rules. To define a \emph{graph rewrite step}, we first need the auxiliary concepts of \emph{redirection} of edges and \emph{union} of two term dags. To \emph{redirect} the edges pointing to node $u$ so that they point to node $v$, we write $G[v \rd u]$, which is defined as $(N_G, \succs_{G[v \rd u]}, \lb_{G})$, where for all nodes $n \in G$ and all appropriate $i$, $\succs^i_{G[v \rd u]}(n) \df v$ if $\succs^i_{G}(n) = u$, and $\succs^i_{G[v \rd u]}(n) \df \succs^i_{G}(n)$ otherwise.
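Redirection is straightforward to implement over a dictionary representation of $\succs$. The following standalone sketch (helper names are ours) realises $G[v \rd u]$ by retargeting every edge that points to $u$, and turns the tree representation of $\f(\fa,\fa)$ into its shared version from Example~\ref{ex:termgraph}:

```python
def redirect(succs, v, u):
    """G[v <- u]: retarget every edge pointing to u so that it points to v;
    the node u itself stays in the graph (possibly unreachable)."""
    return {n: [v if m == u else m for m in ns] for n, ns in succs.items()}

# tree representation of f(a,a): node 1 labelled f, nodes 2 and 3 labelled a
succs = {1: [2, 3], 2: [], 3: []}

shared = redirect(succs, v=2, u=3)
assert shared[1] == [2, 2]   # both argument edges now share node 2
assert 3 in shared           # node 3 is still present, merely unreachable
```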
Note that for $G[v \rd u]$ we still have $u \in G$. For two term dags $G$ and $H$, their (left-biased) \emph{union}, denoted by $G \un H$, is defined as $(N_G \cup N_H, \succs_{G} \un \succs_{H}, \lb_{G} \un \lb_{H}) $, where for $f \in \{\succs, \lb\}$ we define $ f_G \un f_H (n) \df f_G(n) $ if $n \in G$, and $f_H(n)$ if $n \not\in G$ and $n \in H$. Note that we do not require $N_G \cap N_H = \varnothing$. Next we investigate how to determine whether a graph rewrite rule matches a term graph. Therefore we first need to find a common structure between two graphs---through a \emph{morphism}. \begin{definition} \label{def:dmo} Let $S, T$ be term graphs, and $\Delta \subseteq \F \cup \V$. A function $m: S \to T$ is \emph{morphic} if for a node $n \in S$ \begin{enumerate}[label=\itshape(\roman*)] \item \label{dmo:lb} $\lb_S(n) = \lb_T(m(n))$ and \item \label{dmo:succs} if $n \succi{i}_{S} n_i $ then $m(n) \succi{i}_T m(n_i)$ for all appropriate $i$. \end{enumerate} A $\Delta$-\emph{morphism} from $S$ to $T$ is a mapping $m : S \to_\Delta T$, which is morphic in all nodes $n \in S$ with $\lb(n) \not\in \Delta$ and additionally $m(\rt(S)) = \rt(T)$ holds. \end{definition} A $\Delta$-morphism only enforces Conditions~\ref{dmo:lb} and \ref{dmo:succs} on nodes whose labels are not in~$\Delta$. With $\Delta = \V$ we can determine whether a left-hand side of a graph rewrite rule \emph{matches} a term graph, i.e., $L$ matches $S$ if there is a morphism $m : L \to_{\V} S$. Here, a node representing a variable in~$L$ can be mapped to a node with any label and successors. The morphism~$m$ is \emph{applied} to $R$, denoted by $m(R)$, by redirecting all variable nodes in $R$ to their image. That is, for all $n_1, \ldots, n_k \in R$, where $\lb(n_i) \in \V$, we define $ m(R) = ((R \un S) [m(n_1) \rd n_1]) \ldots [m(n_k) \rd n_k]$.
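The conditions of Definition~\ref{def:dmo} are mechanical to check. The standalone sketch below (dictionary representation and helper names are ours) tests the label and successor conditions for all nodes whose label lies outside $\Delta$; with $\Delta = \varnothing$ it confirms that the tree representation of $\f(\fa,\fa)$ maps morphically onto its shared version from Example~\ref{ex:termgraph}, while the candidate map in the opposite direction fails:

```python
def is_morphism(succs_S, lab_S, succs_T, lab_T, m, delta=frozenset()):
    """Check that m: S -> T is morphic in every node of S whose label is
    not in delta: labels agree, and the i-th successor of a node maps to
    the i-th successor of its image."""
    for n in succs_S:
        if lab_S[n] in delta:
            continue                     # no condition on Delta-labelled nodes
        if lab_S[n] != lab_T[m[n]]:
            return False                 # condition (i): labels must agree
        if [m[c] for c in succs_S[n]] != succs_T[m[n]]:
            return False                 # condition (ii): successors commute with m
    return True

# tree representation S of f(a,a) and its fully shared version T
succs_S, lab_S = {1: [2, 3], 2: [], 3: []}, {1: "f", 2: "a", 3: "a"}
succs_T, lab_T = {1: [2, 2], 2: []},        {1: "f", 2: "a"}

# with Delta = {} the map below is morphic in every node (and maps root to root)
assert is_morphism(succs_S, lab_S, succs_T, lab_T, {1: 1, 2: 2, 3: 2})
# the candidate map backwards fails: node 2 of T cannot be sent to both
# successors 2 and 3 of the root of S at once
assert not is_morphism(succs_T, lab_T, succs_S, lab_S, {1: 1, 2: 2})
```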
Finally, for two term graphs $S, T$, $n$ a node in $S$, and $N_S \cap N_T = \varnothing$, the replacement of the subgraph $S\sub n$ by $T$, denoted $S[T]_n$, is defined as $T$, if $n = \rt(S)$, and as $(S \un T)[\rt(T) \rd n] \sub \rt(S)$ otherwise. \begin{definition} Let $\G$ be a GRS. A term graph $S$ \emph{rewrites} to a term graph $T$, denoted by $S \grw T$, if there is a graph rewrite rule $L \to R \in \G$ with $N_R \cap N_S = \varnothing$, and a morphism $m : L \to_{\V} S\sub n$ such that $S[m(R)]_n = T$. \end{definition} With this, we can introduce the notion of termination. \begin{definition} If $\grw$ is well-founded, we say that the GRS $\G$ is \emph{terminating}. \end{definition} So far, we have not taken sharing into account---which we will investigate next. For term graphs $S$ and $T$, we may ask: Is $S$ a ``more shared'' version of $T$? Are $S$ and $T$ ``equal''? To answer this, we rely again on a morphism as in Definition~\ref{def:dmo}, where we require Conditions~\ref{dmo:lb} and~\ref{dmo:succs} for every node, i.e.\ we set $\Delta = \varnothing$. \begin{definition} \label{def:sharing} If there is a morphism $m: S \to_\varnothing T$, then $S$ \emph{collapses} to $T$, denoted by $S \fleq T$. If $S \fleq T$ and $T \fleq S$, then $S$ is \emph{isomorphic} to $T$, denoted by $S \iso T$. \end{definition} Reconsidering Example~\ref{ex:termgraph}, if $S$ is a tree representation of $\f(\fa, \fa)$, then $S \fleq T$. Now recall that we aim to give a notion of $\Top$, which takes the sharing of successor nodes into account, formalised via the collapse relation. Thus---with collapsing---we can give a definition of $\Tops$ for a function symbol $f$. \begin{definition} \label{def:tops} Let $f \in \F$, $\ctg$ a fresh symbol wrt.\ $\F$, and $S$ a tree representation of $f(\ctg, \ldots, \ctg)$. Then $\Tops(f)= \{ T \mid T \text{ is a term graph, and } S \fleq T \}$ and $\Tops(\F) = \bigcup_{f \in \F} \Tops(f)$.
\end{definition} Now, similar to a precedence on function symbols, we define a precedence $\topembeq$ on $\Tops(\F)$. \begin{definition} \label{def:precedence} A \emph{precedence} on $\F$ is a transitive relation $\topembeq$ on $\Tops(\F)$, where for $S, T \in \Tops(\F)$ we have \begin{enumerate*}[label=\itshape(\roman*)] \item \label{it:embTopRef} $S \iso T$ implies $S \topembeq T$ and $T \topembeq S$, and \item \label{it:embTopMo} $T \topembeq S$ implies $|T| \leqslant |S|$. \end{enumerate*} \end{definition} Condition~\ref{it:embTopRef} implies reflexivity, but also includes isomorphic copies. Condition~\ref{it:embTopMo} hints at a major distinction from the term rewriting setting: We can distinguish the same function symbol with different degrees of sharing---and even embed nodes labelled with function symbols of smaller arity into nodes labelled with function symbols of larger arity. But, to ensure that such an embedding is indeed possible, enough nodes have to be present---which is guaranteed by Condition~\ref{it:embTopMo}. With Definition~\ref{def:tops} we can compute the $\Tops$ for a function symbol---but we also want to compute the $\Top$ of some node in a term dag. \begin{definition} \label{def:top} For a term dag $G = (N, \succs, \lb)$ and a node $n \in G$, we define $\Top_G(n) \df (\{n\} \cup \succs(n), \succs\pr, \lb\pr)$, where \begin{enumerate*}[label=\itshape(\roman*)] \item $\lb\pr(n) = \lb(n)$, $\succs\pr(n) = \succs(n)$, and \item for $n_i \in \succs(n)$, $ \lb\pr(n_i) = \ilab$, and $\succs\pr(n_i) = [~]$. \end{enumerate*} \end{definition} For $\Top_G(n)$, where $\lb_G(n) = f$, we find an isomorphic copy $G\pr$ of $\Top_G(n)$ in $\Tops(f)$, i.e. $\Top_G(n) \iso G\pr \in \Tops(f)$. In the context of this work we focus on the graph rewrite relation $\grw$ and not on a relation combined with any explicit collapsing relation $\fleq$, as done, e.g., in~\cite{1997_plump}.
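Since an $\varnothing$-morphism is completely determined by following successor positions from the root, the collapse relation of Definition~\ref{def:sharing} can be decided by a simple propagation. The following Python sketch (our own encoding, restricted to root-reachable nodes) illustrates this.

```python
# Sketch of the collapse relation: S collapses to T iff the
# root-preserving map that follows successor positions is well defined
# and respects labels everywhere (i.e., an empty-Delta morphism exists).

def collapses_to(S, T, root_S, root_T):
    succs_S, lab_S = S
    succs_T, lab_T = T
    m = {root_S: root_T}
    stack = [root_S]
    while stack:
        n = stack.pop()
        if lab_S[n] != lab_T[m[n]]:
            return False
        kids_S, kids_T = succs_S[n], succs_T[m[n]]
        if len(kids_S) != len(kids_T):
            return False
        for s, t in zip(kids_S, kids_T):
            if s in m:
                if m[s] != t:      # a shared node of S must stay shared in T
                    return False
            else:
                m[s] = t
                stack.append(s)
    return True

# The tree for f(a, a) collapses to the shared version, not vice versa.
tree   = ({1: [2, 3], 2: [], 3: []}, {1: "f", 2: "a", 3: "a"})
shared = ({1: [2, 2], 2: []},        {1: "f", 2: "a"})
assert collapses_to(tree, shared, 1, 1)
assert not collapses_to(shared, tree, 1, 1)
```

The asymmetry in the last two lines mirrors the example above: collapsing can only increase sharing, never undo it.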
In passing, we note that for the below established notion of homeomorphic embedding a relation similar to the collapse relation $\fleq$ is possible, as in Plump's work, cf.~\cite[Lemma~24]{1997_plump}. \section{On Embedding} \label{Embedding} Next we develop, step by step, a suitable definition of \emph{homeomorphic embedding} for term dags. To get an intuition for the embedding of term graphs consider the following example. \begin{figure} \caption{Intuitive Embeddings} \label{fig:ex:embedding} \end{figure} \begin{example} \label{ex:embedding} In Figure~\ref{fig:ex:embedding}, we find three term graphs, which are intuitively embedded from left to right under the given precedence. \end{example} We base our definition of embedding on morphisms, and refine it in stages to highlight difficulties and pitfalls. In the first attempt we try mapping nodes from the embedded to the embedding graph. \begin{definition} [first attempt] \label{def:embeddingFstAttempt} Let $\topembeq$ be a precedence. We say that $S$ is \emph{embedded} in $T$, denoted as $S \embeq T$, if there exists a function $m \colon S \to T$, such that for all nodes $s \in S$, we have \begin{enumerate}[label=\itshape(\roman*)] \item $\Top_S(s) \topembeq \Top_T(m(s))$, and \item if $s \succi{}_S s\pr$ for some $s\pr \in S$, then $m(s) \succi{}_T^+ m(s\pr)$ holds. \end{enumerate} \end{definition} \begin{example} We illustrate this definition with Figure~\ref{fig:ex:embedding}. From the first to the second term dag we have a function $m$, with $m(\cial{A}) = \cial{1}$, and either $m(\cial{B}) = \cial{2}$ or $m(\cial{B}) = \cial{3}$. Here $m$ is not unique. From the second to the third term dag the morphism $m\pr$ maps $m\pr(\cial{I}) = \cial{A}$ and $m\pr(\cial{II}) = \cial{B}$. \end{example} In Definition~\ref{def:embeddingFstAttempt} the morphism maps nodes from the embedded graph $S$ to nodes in the embedding graph $T$. The following example shows a problem arising from this.
\begin{example} \label{ex:StoT} The embedding given in Figure~\ref{fig:variants}(a) is valid according to Definition~\ref{def:embeddingFstAttempt}. Here a morphism that satisfies both conditions is $\mm{A}{1}$, $\mm{B}{2}$, $\mm{C}{3}$, and also $\mm{D}{2}$ as well as $\mm{E}{3}$. This embedding could be prohibited by demanding $m$ to be injective. \end{example} Demanding injectivity in Definition~\ref{def:embeddingFstAttempt} prohibits the embedding $S \embeq T$ if $S \ufleq T$ (in general). Thus we attempt to expand our definition such that a term dag also embeds a collapsed version of itself, i.e.\ embedding takes sharing into account. To achieve this, the embedding relation has to contain the collapse relation of Definition~\ref{def:sharing}. Then the embedding relation relies on a \emph{partial} mapping from the embedding term graph $S$ to $T$. \begin{definition}[second attempt] \label{def:embeddingSndAttempt} Let $\topembeq$ be a precedence. We say that $S$ \emph{embeds} $T$, denoted as $S \sembeq T$, if there exists a partial, surjective function $m \colon S \to T$, such that for all nodes $s$ in the domain of~$m$, the following holds \newcounter{embenum} \begin{enumerate}[label=\itshape(\roman*)] \item $\Top_T(m(s)) \topembeq \Top_S(s)$, and \item $m(s) \succi{}_T m(s\pr)$ implies $ s \succi{}^{+}_S n\pr$ for some $ n\pr \in \{ n \mid m(n) = m(s\pr) \}$. \setcounter{embenum}{\value{enumi}} \end{enumerate} \end{definition} \begin{figure} \caption{Variants of Embedding} \label{fig:variants} \end{figure} \begin{example} Again consider Figure~\ref{fig:ex:embedding}. From the first to the second term dag we have a function $m$, with $m(\cial{1}) = \cial{A}$, $m(\cial{2}) = \cial{B}$, and/or $m(\cial{3}) = \cial{B}$. Here $m$ is not unique. From the second to the third term dag the morphism $m\pr$ maps $m\pr(\cial{A}) = \cial{I}$ and $m\pr(\cial{B}) = \cial{II}$.
\end{example} One may observe that both definitions of embedding so far are very permissive: they do not regard the order of the arguments. This is best illustrated by an example. \begin{example} \label{ex:fabfba} The two term graphs shown in Figure~\ref{fig:variants}(b), representing the terms $\f(\fa,\fb)$ and $\f(\fb,\fa)$, are mutually embedded: from left to right we have the morphism $m$ with $m(\cial{1}) = \cial{I}$, $m(\cial{2}) = \cial{III}$, and $m(\cial{3}) = \cial{II}$. But---the inverse morphism $m^{-1}$ also fulfils both conditions in Definition~\ref{def:embeddingFstAttempt} and Definition~\ref{def:embeddingSndAttempt}. \end{example} To remedy this, we need to take the order of the arguments into account. Informally speaking, we want to preserve the relative order between the nodes: if a node~$n$ is ``left of'' a node~$n\pr$, then $m(n)$ should be ``left of'' $m(n\pr)$ in the embedded graph. For a formal description of ``left of'', we employ \emph{positions}. Positions are sequences of natural numbers with $\cdot$ as delimiter. The set of positions of a node~$n$ in a term graph~$S$ is defined as follows: $\mathsf{Pos}_S(n) \df \{\epsilon \}$ if $n = \rt(S)$, and $\mathsf{Pos}_S(n) \df \{ p \cdot i \mid \exists n\pr \in S \text{ with } n\pr \succi{i}_S n \text{ and } p \in \mathsf{Pos}_S(n\pr) \}$ otherwise. For a term dag $G$ with $\inlets_G$, the base case is adapted slightly: $\mathsf{Pos}_G(n) \df \{ i \} $ if $n$ is on the $i$th position in $\inlets_G$. We can now compare two positions $p$ and $q$: $p$ is left---or above---of $q$, written $p = p_1 \cdots p_k \lex q_1 \cdots q_l = q$, if there is an index $j$ such that $p_i = q_i$ for all $1 \leqslant i < j$, and either $j = k + 1 \leqslant l$ (i.e.\ $p$ is a proper prefix of $q$) or $j \leqslant \min(k, l)$ and $p_j < q_j$. We now have to extend this comparison from positions to nodes. This entails, on the one hand, an intra-node comparison which finds the smallest position within a node, and, on the other hand, an inter-node comparison of the smallest positions of two nodes.
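The position sets and the left-of comparison can be computed directly on a dict-based encoding of term dags. In the following Python sketch (names are ours), positions are tuples of argument indices, and Python's built-in tuple comparison coincides with the prefix-or-smaller-component order $\lex$ described above.

```python
# Sketch: positions of nodes in an acyclic term graph, and the
# "left of" comparison via lexicographically smallest positions.

def positions(succs, root):
    """All access paths (tuples of argument indices) to each node."""
    pos = {}
    def walk(n, p):
        pos.setdefault(n, []).append(p)
        for i, s in enumerate(succs[n], start=1):
            walk(s, p + (i,))     # the graph is acyclic, so this terminates
    walk(root, ())
    return pos

def left_of(succs, root, n, n2):
    """n is left of n2 if its smallest position is lexicographically
    smaller.  (The paper compares only parallel nodes; parallelism is
    not checked here.)"""
    pos = positions(succs, root)
    return min(pos[n]) < min(pos[n2])

# Tree for f(a, b): node 2 ('a') lies left of node 3 ('b').
succs = {1: [2, 3], 2: [], 3: []}
assert left_of(succs, 1, 2, 3)
```

For a shared node the position set has several entries, e.g.\ the shared `a'-node of $\f(\fa,\fa)$ has positions $(1)$ and $(2)$, of which the intra-node comparison keeps the smallest.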
Two nodes are called \emph{parallel} in a term graph $G$, if they are mutually unreachable. \begin{definition} Let $G$ be a term dag. We define a partial order $\lof_G$ on the parallel nodes in $G$. Let $n, n\pr \in G$ and suppose $n$ and $n\pr$ are parallel. Further, suppose $p \in \mathsf{Pos}(n)$ is minimal wrt.\ $\lex$ and $q \in \mathsf{Pos}(n\pr)$ is minimal wrt.\ $\lex$. Then $n \lof_G n\pr$ if $p \lex q$. \end{definition} Based on the above definition, we develop Definition~\ref{def:embeddingSndAttempt} further to the final version of embedding. \begin{definition}[final] \label{def:embeddingFinal} Let $\topembeq$ be a precedence. We say that $S$ \emph{embeds} $T$, denoted as $S \sembeq T$, if there exists a partial, surjective function $m \colon S \to T$, such that for all nodes $s$ in the domain of~$m$, holds \begin{enumerate}[label=\itshape(\roman*)] \item \label{embed:1} $\Top_T(m(s)) \topembeq \Top_S(s)$, and \item \label{embed:2} $m(s) \succi{}_T m(s\pr) $ implies $ s \succi{}^{+}_S n\pr$ for some $ n\pr \in \{ n \mid m(n) = m(s\pr) \}$, and \item \label{embed:3} $m(s) \lof_T m(s\pr)$ implies either that none of the nodes in the preimage of $m(s\pr)$ is parallel to $s$, or there exists $n\pr \in \{ n \mid m(n) = m(s\pr) \}$ such that $s \lof_S n\pr$. \end{enumerate} \end{definition} \begin{example} Recall Example~\ref{ex:fabfba}. With the final definition of embedding, the two term graphs are not mutually embedded per se---embedding now depends on $\topembeq$. As a further example for the embedding of the term graphs consider Figure~\ref{fig:variants}(c). We have the following morphism: $\mm{1}{A}$, $\mm{2}{B}$, $\mm{3}{C}$, and $\mm{4}{D}$. Here we have $\cial{B} \lof \cial{C}$ but $\cial{2}$ and $\cial{3}$ are not parallel. However, even with $\lof$ the two graphs in Figure~\ref{fig:variants}(d) are mutually embedded. 
Here we have neither $\cial{2} \lof \cial{3}$ nor $\cial{B} \lof \cial{C}$, so Condition~\ref{embed:3} holds trivially in both directions. \end{example} The relation $\sembeq$ is transitive, i.e.\ $S \sembeq T$ and $T \sembeq U$ implies $S \sembeq U$. The proof is straightforward: We construct the embedding $m_3 : S \to U$, based on the implied embeddings $m_2 : S \to T$ and $m_1: T \to U$, by setting $m_3(n) = m_1(m_2(n))$ and show that $m_3$ fulfils the conditions in Definition~\ref{def:embeddingFinal}. \section{Kruskal's Tree Theorem for Acyclic Term Graphs} \label{Kruskal} Our proof follows \cite{1997_middeldorp_et_al} for the term rewrite setting, which in turn follows the minimal bad sequence argument of Nash-Williams \cite{1963_nash-williams}: we assume a minimal ``bad'' infinite sequence of term graphs and construct an even smaller ``bad'' infinite sequence of their arguments. By minimality we contradict that this sequence of arguments is ``bad'', and conclude that it is ``good''. So we start by defining the notions of ``good'' and ``bad''. \begin{definition} Assume a reflexive and transitive order $\wqo$ and an infinite sequence~$\asq$ of elements $a_1, a_2, \ldots$. If for some $i < j$ we have $a_i \wqo a_j$, then $\asq$ is \emph{good}. Otherwise, $\asq$ is \emph{bad}. If every infinite sequence is good, then $\wqo$ is a \emph{well-quasi order} (wqo). \end{definition} After we have determined the sequence of arguments to be good, we want to---roughly speaking---plug the $\Top$ back on its argument. For this, we need a wqo on $\Tops(\F)$ and the following, well-established lemma. \begin{lemma} \label{Lem:Chain} If $\wqo$ is a wqo then every infinite sequence contains a subsequence---a \emph{chain}---with $a_i \wqo a_{i+1}$ for all $i$. \end{lemma} With this lemma, we can construct witnesses that our original minimal bad sequence of term graphs is good, contradicting its badness and concluding the following theorem.
\begin{theorem} \label{t:kruskal} If $\topembeq$ is a wqo on $\Tops(\F)$, then $\embeq$ is a wqo on ground, acyclic term graphs. \end{theorem} \begin{proof} By definition, $\embeq$ is a wqo, if every infinite sequence is good, i.e.\ for every infinite sequence of term graphs, there are two term graphs $T_i, T_j$, such that $T_i \embeq T_j$ with $1 \leqslant i < j$. We construct a minimal bad sequence of term graphs $\mbsq$: Assume we have picked $T_1, \ldots, T_{n-1}$. We next pick $T_n$---minimal with respect to $|T_n|$---such that there are bad sequences that start with $T_1, \ldots, T_n$. Let $G_i$ be the argument graph of the $i$th term graph $T_i$. We collect in $G$ the arguments of all term graphs of $\mbsq$, i.e. $G = \bigcup_{i \geqslant 1} G_i$ and show that $\embeq$ is a wqo on $G$. For a contradiction, we assume $G$ admits a bad sequence $\argsq$. We pick $G_k \in G$ with $k \geqslant 1$ such that $H_1 = G_k$. In $G\pr$ we collect all argument graphs up to $G_k$, i.e. $G\pr = \bigcup_{i = 1}^{k} G_i$. The set $G\pr$ is finite, hence there exists an index $l > 1$, such that for all $H_i$ with $i \geqslant l$ we have that $H_i \in G$ but $H_i \not\in G\pr$. We write $\argsq_{\geqslant l}$ for the sequence $\argsq$ starting at index~$l$. Now consider the sequence $T_1, \ldots, T_{k-1}, G_k, \argsq_{\geqslant l}$. By minimality of $\mbsq$ this is a good sequence. So we try to find a witness and distinguish on $i, j$: \noindent \begin{tabularx}{\textwidth}{l X} $\underbrace{ T_1, \ldots, T_{k-1}}_{i,j}, G_k, \argsq_{\geqslant l}$ & For $1 \leqslant i < j \leqslant k-1$, we have $T_i \embeq T_j$, which contradicts the badness of~$\mbsq$.\\ $\underbrace{ T_1, \ldots, T_{k-1}}_{i}, \underbrace{G_k}_{j}, \argsq_{\geqslant l} $ & For $1 \leqslant i \leqslant k-1$ and $j = k$, we have $T_i \embeq G_k$ and $G_k \embeq T_k$, where the latter is a direct consequence of the definitions. Hence, by transitivity, $T_i \embeq T_k$, which contradicts the badness of $\mbsq$.
\\ $\underbrace{ T_1, \ldots, T_{k-1}}_{i}, G_k, \underbrace{\argsq_{\geqslant l}}_{j}$ & For $1 \leqslant i \leqslant k-1$ and $j \geqslant l$, we have $H_j \not\in G\pr$ by construction, but then $H_j = G_m$ for some $m > k$ and thus $H_j \embeq T_m$. Together with $T_i \embeq H_j$, we obtain $T_i \embeq T_m$ by transitivity, which contradicts the badness of $\mbsq$. \\ $T_1, \ldots, T_{k-1}, \underbrace{G_k, \argsq_{\geqslant l}}_{i,j}$ & Hence for some $1 \leqslant i < j$, where $i,j \not\in \{2, \ldots, l-1\}$, we have some $H_i \embeq H_j$, which contradicts the badness of $\argsq$. \end{tabularx} \noindent We conclude that $\argsq$ is a good sequence and $\embeq$ is a wqo on $G$. Next we consider the $\Top$s of $\mbsq$. Let these $\Top$s be $\topf$. By assumption, $\topembeq$ is a wqo on $\Tops(\F)$, and by Lemma~\ref{Lem:Chain}, $\topf$ contains a chain $\topf_\phi$, i.e. $f_{\phi(i)} \topembeq f_{\phi(i+1)}$ for all $i \geqslant 1$. We proved $\embeq$ to be a wqo on $G$. Hence we have $G_{\phi(i)} \embeq G_{\phi(j)}$ for some $1 \leqslant i < j$. It remains to be shown that $f_{\phi(i)} \topembeq f_{\phi(j)}$ and $G_{\phi(i)} \embeq G_{\phi(j)}$ imply $T_{\phi(i)} \embeq T_{\phi(j)}$. We construct $T_{\phi(i)}$, and analogously $T_{\phi(j)}$, from $f_{\phi(i)} = (n_i, \lb_{f\phi(i)}, \succs_{f\phi(i)})$ and $G_{\phi(i)} = (N_{G\phi(i)}, \lb_{G\phi(i)}, \succs_{G\phi(i)})$ with $\inlets_{G\phi(i)}$. We have ${{N_{G\phi(i)}} \cap \{n_i\}} = \varnothing $. Then $T_{\phi(i)} = (N_{T\phi(i)}, \lb_{T\phi(i)}, \succs_{T\phi(i)})$ where \begin{enumerate}[label=\itshape(\roman*)] \item the nodes $N_{T\phi(i)} \df {N_{G\phi(i)}} \cup \{n_i \}$, \item $\lb_{T\phi(i)} \df \lb_{G\phi(i)}$ extended by $\lb_{T\phi(i)}(n_i) = \lb_{f\phi(i)}(n_i)$, and \item $\succs_{T\phi(i)} \df \succs_{G\phi(i)}$ extended by $\succs_{T\phi(i)} (n_i) = \inlets_{G\phi(i)}$.
\end{enumerate} We aim for $T_{\phi(i)} \embeq T_{\phi(j)}$ and therefore construct the morphism $m: T_{\phi(j)} \to T_{\phi(i)}$. From $G_{\phi(i)} \embeq G_{\phi(j)}$, we obtain a morphism $m_G : G_{\phi(j)} \to G_{\phi(i)}$. We set $m(n) = m_G(n)$ for $n \in G_{\phi(j)}$, and $m(n_j) = n_i$. It remains to be shown that $m$ fulfils Definition~\ref{def:embeddingFinal}. Surjectivity of $m$ follows directly from the surjectivity of $m_G$. Condition~\ref{embed:1} holds for all nodes in the domain of $m_G$, and by $f_{\phi(i)} \topembeq f_{\phi(j)}$ also for $\rt(T_{\phi(j)}) = n_j$. For Condition~\ref{embed:2} we have to show: If $m(n_j) \succi{}_{T_{\phi(i)}} n\pr_i = m(n\pr_j)$ then $n_j \succi{}^{+} n\pr_j$. By definition $n\pr_i \in \inlets_{G\phi(i)}$ and hence also $n\pr_i \in G_{\phi(i)}$. By surjectivity of $m_G$ there exists a node $n\pr_j$ with $m_G(n\pr_j) = n\pr_i$. It remains to be shown that $n_j \succi{}^+ n\pr_j$. By definition $n_j \succi{} u_j$, where $u_j \in \inlets_{G\phi(j)}$. By definition of the argument graph, all nodes in $G_{\phi(j)}$ are reachable from nodes in $\inlets_{G\phi(j)}$, and in particular $n_j \succi{} u_j \succi{}^* n\pr_j$. Finally, Condition~\ref{embed:3} holds trivially for $n_j$, and for all other nodes it follows from $G_{\phi(i)} \embeq G_{\phi(j)}$. Hence we have found $T_{\phi(i)} \embeq T_{\phi(j)}$, which contradicts the badness of $\mbsq$. Therefore $\mbsq$ is good and~$\embeq$ is a wqo. \end{proof} \section{Simplification Orders} \label{Simplification:Orders} In the term rewriting setting simplification orders are defined through the embedding relation. That is, a rewrite order~$\simpo$ is a \emph{simplification order} if ${\emb} \subseteq {\simpo}$ \cite{1997_middeldorp_et_al}. Then, if we can orient the rules in a rewrite system with $\simpo$, there are no infinite rewrite sequences. We try to directly transfer this idea to the term graph rewriting setting---but this is not sufficient, as the following example shows.
\begin{example} We can orient the rule on the left with $\semb$, but may still get an infinite rewrite sequence, as shown on the right. \begin{center} \pdftooltip{ \begin{tikzpicture} \femb{\fa}{\fa}{1} \node [xshift=2cm, yshift=-0.4cm] (E1) {$\semb$}; \fembs{\fa}{3} \femb{\fa}{\fa}{6} \node [xshift=7cm, yshift=-0.4cm] (E) {$\grw$}; \fembs{\fa}{8} \node [xshift=9cm, yshift=-0.4cm] (E2) {$\grw$}; \fembs{\fa}{10} \node [xshift=11cm, yshift=-0.4cm] (E2) {$\ldots$}; \end{tikzpicture} }{ On the left we have the rule: first, on the left hand side is the tree representation of the term f(a,a), second, on the right hand side, the subterm a is shared. Here the left hand side strictly embeds the right hand side. So on the right we have an infinite rewrite sequence. The rewrite sequence starts with the tree representation of f(a,a), and performs one step to share the term a. But then the rule is applicable again, and again, ... } \end{center} Note that this infinite rewrite sequence is not bad wrt.\ $\embeq$. \end{example} This problem is \emph{not} caused by our definition of embedding, and also occurs in \cite{1997_plump}. Rather, the reason is that from the orientation of the rules, we cannot conclude the orientation of all rewrite steps. However, it should be noted that the definition of simplification order in~\cite{1997_plump} is indeed transferable to our presentation. \begin{definition}[\cite{1997_plump}] \label{d:simpo} Let $\embeq$ be the embedding relation induced by a precedence $\topembeq$ that is a wqo. A transitive relation $\simpo$ is a \emph{simplification order}, if \begin{enumerate} [label=\itshape(\roman*)] \item $\emb \subset \simpo$, and \item for all $S$ and $T$, if $S \embeq T$ and $T \embeq S$ then $S \not\simpo T$. \end{enumerate} \end{definition} A direct consequence of the second condition is that simplification orders are irreflexive. We obtain the following theorem. \begin{theorem} Every simplification order is well-founded.
\end{theorem} \begin{proof} Let $\succ$ denote a simplification order. Thus there exists a well-quasi ordered precedence and an induced embedding relation, such that its strict part $\emb$ is contained in $\succ$. Due to Theorem~\ref{t:kruskal}, $\embeq$ is a well-quasi order. Further, by definition $\succ$ is an irreflexive and transitive extension of $\emb$. Thus $\succ$ is well-founded. \end{proof} Based on this observation, we adapt the definition of a \emph{lexicographic path order} (\emph{LPO} for short) from term rewriting to term graph rewriting and thus obtain a technique to show termination directly for acyclic term graph rewriting. Given the above definition of embedding, it is natural to define LPO on term dags. Thus, we obtain the following definition of $\lpo$ induced by a well-quasi ordered precedence. \begin{definition} Let $\topembeq$ be a well-quasi ordered precedence. We write $\toplex$ for the lexicographic extension of $\topemb$. Let $S, T$ be term dags with $\inlets_S = [s_1, \ldots, s_k]$ and $\inlets_T = [t_1, \ldots, t_l]$, where the $s_i$ are pairwise parallel and the $t_i$ are pairwise parallel. Then $T \lpo S$ if one of the following holds \begin{enumerate}[label=\itshape(\roman*)] \item \label{lpo:1} $T \lpoeq S \sub [s_{i_1}, \ldots, s_{i_{k'}}]$ for some $1 \leqslant i_1 < \ldots < i_{k'} \leqslant k$, or \item \label{lpo:2} $[\Top(t_1), \ldots, \Top(t_l)] \toplex [\Top(s_1), \ldots, \Top(s_k)]$ and $\args(T) \lpo S$, or \item \label{lpo:3} $[\Top(t_1), \ldots, \Top(t_l)] = [\Top(s_1), \ldots, \Top(s_k)]$ and $\args(T) \lpo \args(S) $. \end{enumerate} \end{definition} \begin{example} Recall Example~\ref{ex:fabfba}. Given the precedence $\fa \topemb \fb$ we can orient the two term graphs: from right to left. To orient the term graphs with $\lpo$ we first use \ref{lpo:3} and compare the argument graphs.
Then we compare their respective $\inlets$ lexicographically, i.e., $[\Top(\cial{2}), \Top(\cial{3})] \toplex [ \Top(\cial{II}), \Top(\cial{III})]$ using \ref{lpo:2}. \end{example} To prove that $\lpo$ contains $\embeq$ for term graphs, it is important to note that $\lpo$ requires the nodes within $\inlets$ to be parallel. That means we can inductively step through a term graph, with $\inlets$ forming a level in the term graph. With~\ref{lpo:1} we can project the largest term dag to the dag that is actually used in the embedding. \section{Conclusion and Discussion} \label{Conclusion} Inspired by~\cite{1997_plump} we defined an embedding relation for the term graph rewriting flavour of \cite{2013_avanzini,AM:2016} and re-proved Kruskal's Tree Theorem. Furthermore, based on Plump's work~\cite{1997_plump}, we established a new notion of simplification order for acyclic term graphs and provided a suitable adaption of the lexicographic path order to acyclic term graphs. In contrast to~\cite{1997_plump}, where the proof uses an encoding of $\Top$ to function symbols with different arities, our proof operates directly on term graphs. With a new definition of the embedding relation, based on the notion of morphism and taking sharing into account, and a new definition of arguments, we finally showed Kruskal's Tree Theorem for term graphs: A well-quasi order on $\Tops$, i.e.\ $\topembeq$, induces a well-quasi order~$\embeq$ on ground term graphs. One insight from our proof concerns the arguments of a term graph---or rather \emph{the} argument. For a term structure we have several subterms as arguments. For a term graph structure it is beneficial to regard the arguments as one single argument graph. This preserves sharing. Moreover, a single argument simplifies the proof, as the extension of the order to sequences via Higman's Lemma~\cite{1952_higman} can be omitted.
In future work, we will focus on the establishment of genuinely novel notions of simplification orders for term graph rewriting and investigate suitable adaptions of reduction orders for complexity analysis. \end{document}
\begin{document} \title{Local universality of the number of zeros of random trigonometric polynomials with continuous coefficients} \begin{abstract} Let $X_N$ be a random trigonometric polynomial of degree $N$ with iid coefficients and let $Z_N(I)$ denote the (random) number of its zeros lying in the compact interval $I\subset\mathbb R$. Recently, a number of important advances were made in the understanding of the asymptotic behaviour of $Z_N(I)$ as $N\to\infty$, in the case of standard Gaussian coefficients. The main theorem of the present paper is a universality result, which states that the limit of $Z_N(I)$ does not really depend on the exact distribution of the coefficients of $X_N$. More precisely, assuming that the latter are iid with mean zero and unit variance and have a density satisfying certain conditions, we show that $Z_N(I)$ converges in distribution toward $Z(I)$, the number of zeros within $I$ of the centered stationary Gaussian process admitting the cardinal sine for covariance function. \end{abstract} \section{Introduction and main result} Random polynomials are popular models in probability theory. They have found many applications in several fields of physics, engineering and economics. In particular, there is a great variety of problems where the distribution of {\it zeros} of random polynomials occurs, including nuclear physics (in particular, random matrix theory), statistical mechanics or quantum mechanics, to name but a few; see, e.g., Bharucha-Reid and Sambandham \cite{BR} or Bogomolny, Bohigas and Leb{\oe}uf \cite{BBL} and references therein. The most studied classes of random polynomials are the algebraic and trigonometric ensembles. As a matter of fact, it was rapidly observed that the behaviour of their zeros exhibits important differences.
For instance, both the asymptotic mean and variance of the number of real roots of Kac algebraic polynomials are equivalent to $\log N$, while in the trigonometric case both the asymptotic mean and variance of the number of roots on $[0,2\pi]$ are equivalent to $N$, with $N$ the degree of the polynomial. See, e.g., \cite{granville} or \cite{far98} for precise statements and references. Besides, for algebraic polynomials it often happens that the dominant term in its expansion is solely responsible for the limit; in contrast, in the trigonometric case generally each term contributes infinitesimally. One can find in \cite{BBL} several reasons explaining the wide interest of scientists in random trigonometric polynomials. For instance, in the quantum semiclassical limit one expects to have a large proportion of roots on, or close to, the unit circle in the complex plane. Under a certain natural sufficient condition on the coefficients of the random polynomials (self-invertibility), the authors were led to consider trigonometric polynomials. More specifically, throughout this paper we will deal with random trigonometric polynomials of the form \begin{eqnarray} P_N(t) = \sum_{n=1}^N \big\{ a_n\, \cos(nt) + b_n\, \sin(nt) \big\} ,\quad t\in\mathbb R, \label{P_N} \end{eqnarray} where the coefficients $a_n$ and $b_n$ are iid random variables that are normalised so that $\mathbb E[a_1]=0$ and $\mathbb E[a_1^2]=1$. The problem we want to study is the following:\\ {\bf Question Q}. {\it Fix a small interval containing $t=0$. How does the number of zeros of $P_N$ lying in this interval behave as $N\to\infty$}?\\ In order to solve this question, we first have to find the right scale at which a non-degenerate limit may happen. 
This leads us to change $t$ into $\frac{t}{N}$ and to consider the following normalized version of $P_N$: \begin{eqnarray} X_N(t) = \frac{1}{\sqrt{N}}\sum_{n=1}^N \bigg\{ a_n\, \cos\left(\frac{nt}{N}\right) + b_n\, \sin\left(\frac{nt}{N}\right) \bigg\} ,\quad t\in\mathbb R. \label{eq:trig-pol} \end{eqnarray} We can now investigate the limit of the number of the zeros of $X_N$ lying in any compact interval $I\subset\mathbb R$. (Observe that the factor $1/\sqrt{N}$ in (\ref{eq:trig-pol}) is of course totally useless as far as zeros are concerned; but since it will play a role when passing to the limit later on, we found it convenient to keep it in our definition of $X_N$.) In the existing literature, many investigations about the number of zeros of (\ref{P_N}) concern the particular case where $a_1\sim N(0,1)$. This situation turns out to be more amenable to analysis; indeed, $P_N$ (or $X_N$) is then Gaussian, centered and stationary. For instance, one can rely on Rice's formula to find that the mean of the number $Z_N(I)$ of zeros of $X_N$ within the compact interval $I$ converges to $|I|/(\pi\sqrt{3})$, with $|I|$ the length of $I$. (We recall that the celebrated Gaussian Rice formula states that, for any centered stationary Gaussian process $X$ with variance 1, the mean of the number of zeros within any interval $I$ is given by $\frac{\sqrt{-r''(0)}}{\pi}|I|$, where $r(t-s)=\mathbb E[X(t)X(s)]$.) Nevertheless, even in this Gaussian framework the analysis becomes increasingly hard when higher moments are concerned. For example, a prediction for the limit of the variance of the number of zeros was made in \cite{BBL} in 1996, but it was only a dozen years later that this claim was confirmed by Granville and Wigman \cite{granville} by combining techniques from probability theory, stochastic processes and harmonic analysis.
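For illustration, the Rice prediction is easy to probe numerically. The following self-contained Python experiment (parameters, grid resolution and implementation are our own choices; sign changes on a grid serve as a proxy for zeros) produces an empirical mean close to $|I|/(\pi\sqrt{3})\approx 9.19$ for $I=[0,50]$ and $N=50$.

```python
import math
import random

def count_zeros(N, a, b, t_end, grid=500):
    """Count sign changes of X_N on [0, t_end] over a uniform grid
    (a proxy for the number of zeros, adequate at this resolution)."""
    def X(t):
        s = sum(a[n - 1] * math.cos(n * t / N) + b[n - 1] * math.sin(n * t / N)
                for n in range(1, N + 1))
        return s / math.sqrt(N)
    vals = [X(k * t_end / grid) for k in range(grid + 1)]
    return sum(1 for v, w in zip(vals, vals[1:]) if v * w < 0)

random.seed(1)
N, t_end, trials = 50, 50.0, 30
counts = [count_zeros(N,
                      [random.gauss(0, 1) for _ in range(N)],
                      [random.gauss(0, 1) for _ in range(N)],
                      t_end)
          for _ in range(trials)]
mean = sum(counts) / trials
rice = t_end / (math.pi * math.sqrt(3))   # Rice prediction, about 9.19
```

Replacing \texttt{random.gauss(0, 1)} by \texttt{random.choice([-1.0, 1.0])} gives the Rademacher setting of Conjecture~C below, with visibly similar empirical zero counts.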
Besides, the authors of \cite{granville} also showed that a central limit theorem (CLT) for the number of zeros of $X_N$ within $[0,2\pi N]$ holds. Finally, we would like to conclude this very short picture of the existing results in the case $a_1\sim N(0,1)$ by mentioning the recent paper \cite{al} by Aza\"is and Le\'on, in which the authors make use of Wiener chaos techniques to prove, more generally, a CLT for the number of crossings of any given level $u\in\mathbb R$ (see also \cite{adl} for a related study). As we just explained, assuming in (\ref{P_N}) or (\ref{eq:trig-pol}) that the coefficients of $X_N$ are Gaussian is of great help when dealing with the moments of the number of zeros, as it gives one access to a variety of tools and desirable properties. In contrast, solving Question Q when $a_1\sim\frac{1}{2}(\delta_1+\delta_{-1})$ (that is, in the case where $a_1$ is distributed according to the Rademacher distribution) seems clearly out of reach of existing methods. However, empirical simulations (see Figure \ref{fig}) suggest that the number $Z_N$ of zeros of $X_N$ within any given compact interval $I$ exhibits a {\it universality phenomenon}; this leads us to formulate the following natural conjecture.\\ {\bf Conjecture C}. {\it Assume that $a_1$ is square integrable with mean zero and unit variance. Then the number of zeros of $X_N$ within any given compact interval $I\subset\mathbb R$ converges, as $N\to\infty$, to the number of zeros of the centered stationary Gaussian process admitting the cardinal sine for covariance function, and this \underline{irrespective of the exact distribution of $a_1$}.\\ } \begin{figure} \caption{\it Empirical distribution of the number of zeros of $X_{50}$ on the interval $[0,50]$.} \label{fig} \end{figure} At this stage, it is worth stressing that our universality conjecture concerns the {\it local} behavior of the number of zeros.
Indeed, we are interested in the number of zeros of $X_N$ (not $P_N$) on a compact interval $I$ (for instance, $I=[0,50]$ as in Figure \ref{fig}). Another natural problem is to rather consider the {\it global} behavior, by looking this time at the number of zeros of $P_N$ on a compact interval $I$ or, equivalently, at the number of zeros of $X_N$ on $NI$. We refer to the recent preprint \cite{ap} by Angst and Poly for an analysis of the universality phenomenon for the {\it mean} number of zeros of $X_N$ on the interval $[0,N\pi]$. Let us go back to the present paper. Our main result, Theorem \ref{th} just below, provides a positive solution to our Conjecture C in the case of {\it continuous} coefficients whose density satisfies some regularity and boundedness conditions. \begin{theorem}\label{th} Assume in (\ref{eq:trig-pol}) that the $a_n,b_n$ are iid square integrable random variables admitting a density $\rho:\mathbb R\to (0,\infty)$ of the form $\rho=e^{-\Psi}$, and that $\mathbb E[a_1]=0$ and $\mathbb E[a_1^2]=1$. Assume furthermore that $\Psi$ is $C^{\infty}$ and satisfies \begin{equation}\label{Hypo-densite} \Psi^{(p)} \in \bigcap_{q\geq 1} L^q(e^{-\Psi(x)} dx), \quad p\geq 1. \end{equation} Then, for any compact interval $I\subset\mathbb R$, \begin{equation}\label{cv} Z_{X_N}(I) \overset{\rm law}{\to} Z_W(I)\quad\mbox{as $N\to\infty$}, \end{equation} where, whenever $Y$ is a process, we denote by $Z_Y(I)$ the number of zeros of $Y$ within the interval $I$, and where $W$ is the centered stationary Gaussian process defined on $[0,\infty)$ with covariance function \begin{equation}\label{r} \mathbb E[W(s)W(t)]=\text{sc}(t-s):=\frac{\sin(t-s)}{t-s}. \end{equation} \end{theorem} \begin{remark} {\rm At this stage, we would like to emphasize that the assumption (\ref{Hypo-densite}) is general and actually covers a wide range of densities.
Indeed, assume for instance the following two conditions on $\Psi$, which roughly express the fact that $\Psi$ diverges at a polynomial speed and that its derivatives have polynomial growth: \begin{itemize} \item there exist $\alpha,c,M>0$ such that $\Psi(x) \ge c |x|^{\alpha}$ for all $|x|>M$; \item for each $p\ge 0$, there exist $\beta_p,c_p >0$ such that $\left|\Psi^{(p)}(x)\right|\le c_p\left(1+|x|^{\beta_p}\right)$. \end{itemize} Then condition (\ref{Hypo-densite}) holds. Also, it is worth highlighting that densities of the form $e^{-\Psi}$ appear naturally as invariant measures of diffusions of the form $dX_t=\sqrt{2}\,d W_t-\Psi'(X_t)\, dt$. Indeed, the infinitesimal generator is then $\frac{d^2}{dx^2}-\Psi'(x) \frac{d}{dx}$, for which the measure $e^{-\Psi(x)}dx$ is invariant. } \end{remark} The heuristic behind Theorem \ref{th} is pretty simple. Using the classical CLT, it is easy to see that the finite-dimensional marginals of $X_N$ converge to those of $W$. Thus, expecting that (\ref{cv}) holds true looks reasonable. However, due to the highly non-linear structure of the problem as well as the complexity of the relationships between zeros and coefficients, transforming this intuition into a rigorous theorem is a challenging task. To reach the conclusion of Theorem \ref{th}, we will use techniques from two apparently distinct fields, namely Rice's formulas (which have been used extensively to study the zeros of random polynomials) on the one hand, and local central limit theorems derived from Malliavin calculus on the other. The required smoothness and decay of the finite-dimensional densities of the process involved in Rice's formulas will be obtained by introducing a suitable integration by parts formalism, and will require a long and careful analysis. Note that the idea of combining Rice and Malliavin techniques appeared for the first time in the work of Nualart and Wschebor \cite{nw}.
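To make the diffusion remark concrete, here is a minimal Euler-Maruyama sketch (illustrative only, not part of the paper's arguments) for the normalization $dX_t=\sqrt2\,dW_t-\Psi'(X_t)\,dt$, whose invariant measure is $e^{-\Psi(x)}dx$. Taking $\Psi(x)=x^2/2$ up to an additive constant, the invariant law is standard normal, so the long-run empirical mean and variance should be close to $0$ and $1$:

```python
import numpy as np

rng = np.random.default_rng(1)
psi_prime = lambda x: x                  # Psi(x) = x^2/2 (plus a constant): e^{-Psi} is the N(0,1) density
dt, n_steps, burn_in = 0.01, 400_000, 50_000

noise = np.sqrt(2.0 * dt) * rng.standard_normal(n_steps)
x, samples = 0.0, []
for k in range(n_steps):
    # Euler-Maruyama step for dX_t = sqrt(2) dW_t - Psi'(X_t) dt
    x += -psi_prime(x) * dt + noise[k]
    if k >= burn_in:
        samples.append(x)

samples = np.asarray(samples)
print(samples.mean(), samples.var())     # should be close to 0 and 1
```

The step size, horizon and burn-in are arbitrary; any density of the form $e^{-\Psi}$ satisfying the two conditions above can be substituted via `psi_prime`.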
Let us be a little more precise about the technical details, by sketching the route we will follow in order to prove Theorem \ref{th}. We will first check in Lemma \ref{prop:det-moments} that the distribution of $Z_W(I)$ is characterised by its moments. As a result, in order to reach the conclusion of Theorem \ref{th} it will be enough to check that all the moments of $Z_{X_N}(I)$ converge, as $N\to\infty$, to the corresponding moments of $Z_W(I)$. For technical reasons, it will be convenient (as well as equivalent) to prove instead the convergence of {\it factorial} moments. In other words, we will show that, for every integer $m\geq 1$, \begin{equation}\label{eq:conv-moments} \mathbb E[Z_{X_N}(I) ^{[m]}]\to \mathbb E[Z_{W}(I) ^{[m]}]\, \mbox{ as } N\to\infty, \end{equation} where $x^{[m]}=x(x-1)\ldots(x-m+1)$. The proof of (\ref{eq:conv-moments}) will be carried out in two main steps of totally different natures: \begin{enumerate} \item[(i)] Firstly, by means of Rice's formula we will give integral expressions for the factorial moments. To describe these expressions, we need to introduce some notation. Fix an integer $m\geq 1$ and consider, for $\varepsilon>0$, ${\mathbf t}=(t_1,\dots,t_m)\in I^m$ and ${\mathbf x}=(x_1,\dots,x_m),{\mathbf x}^\prime=(x^\prime_1,\dots,x^\prime_m)\in \mathbb R^m$, \begin{itemize} \item the set $D_m$ of hyper-diagonals, \begin{equation}\label{hyper} D_m=\{(t_1,\dots,t_m)\in I^m:\exists i\neq j / t_i=t_j\}; \end{equation} \item the $\varepsilon$-enlargement of $D_m$, \begin{equation}\label{enlarge} D^{\,\varepsilon}_m=\{(t_1,\dots,t_m)\in I^m:\exists i\neq j / |t_i-t_j|<\varepsilon\}; \end{equation} \item the random vector \begin{equation}\label{vn} V_N({\mathbf t})=(X_N(t_1),\dots,X_N(t_m),X^{\prime}_N(t_1),\dots,X^{\prime}_N(t_m)); \end{equation} \item and (whenever it exists) the density $p_{N,{\mathbf t}}({\mathbf x};{\mathbf x}^\prime)$ of $V_N ({\bf t})$ at $({\mathbf x},{\mathbf x}')$.
\end{itemize} Corollary \ref{cor:Rice-trunc} will basically state that, for $N$ large enough, \begin{equation}\label{eq:Rice2} \mathbb E[Z_{X_N}(I) ^{[m]}] =\int_{I^m\setminus D^{\varepsilon}_m} \int_{\mathbb R^m}|y_1|\ldots |y_m | \,p_{N,{\mathbf t}} ({\mathbf 0};{\mathbf y}) d{\mathbf y} d{\mathbf t}+ O(\varepsilon^{1/5}), \end{equation} where ${\boldsymbol 0}=(0,\dots,0)\in\mathbb R^m$ and where the constant involved in the Landau notation $O(\cdot)$ is uniform with respect to $N$. \item[(ii)] Secondly, Proposition \ref{prop:conv} will establish a local limit theorem for the density $p_{N,{\mathbf t}}$ appearing in \eqref{eq:Rice2}. By passing to the limits as $N\to\infty$ (after swapping the limit and the integral by dominated convergence) and $\varepsilon\to0$, the conclusion of Theorem \ref{th} will follow. \end{enumerate} Throughout the paper, $(Const)$ denotes an unimportant universal constant whose value may change from one occurrence to another. When a constant depends on some parameter, for example $\varepsilon$, we shall denote it by $ C_\varepsilon$. The rest of the paper is organized as follows. The proof of Theorem \ref{th} is given in Section 2, whereas Section 3 gathers the most technical results. \section{Proof of Theorem \ref{th}} Theorem \ref{th} is proved by means of the method of moments. More precisely, the proof is split into two steps: \begin{enumerate} \item Section \ref{S1} will first show that the distribution of $Z_W(I)$ is determined by its moments. \item Section \ref{S2} will then show that the (factorial) moments of $Z_{X_N}(I)$ converge to the corresponding moments of $Z_W(I)$. \end{enumerate} \subsection{The distribution of $Z_W(I)$ is determined by its moments} \label{S1} Recall from Theorem \ref{th} that $W$ is the centered stationary Gaussian process on $[0,\infty)$ admitting the cardinal sine for covariance function. \begin{lemma}\label{prop:det-moments} All the moments of $Z_W(I)$ are finite.
Furthermore, the law of $Z_W(I)$ is determined by its moments. \end{lemma} \begin{proof} Using the Nualart-Wschebor criterion (see \cite[Theorem 3.6, Corollary 3.7]{aw}), one has that all the moments of $Z_W(I)$ are indeed finite. In order to show that the moments of $Z_W(I)$ determine its law, we use the well-known Carleman condition, which we restate here for convenience. \begin{lemma}[Carleman] Let $Y$ be a real-valued random variable. If $\mu_m=\mathbb E(|Y|^m)<\infty$ for all $m\in\mathbb N$ and if $$ \sum^\infty_{m=1} \mu_{2m}^{-\frac{1}{2m}}=\infty, $$ then the law of $Y$ is determined by its moments. \end{lemma} Without loss of generality, we assume that the length $|I|$ of $I$ is at least $1$ (otherwise, replace $|I|$ by $1$ in the bounds below). Set $\mu_{m,N}=\mathbb E(Z_{X_N}(I)^m)$ and $\mu_{m}=\mathbb E(Z_W(I)^m)$. Using inequality \eqref{ineq1} below, we have $$ \mu_{m,N}\leq (Const)\sum^\infty_{k=1}k^m\frac{|I|^{k-1/2}}{\sqrt{k!(k-1)!}}. $$ Proposition \ref{prop:conv} below implies that $\mu_m$ is the limit of $\mu_{m,N}$ as $N\to\infty$. As a consequence, $$ \mu_{m}\leq (Const)\sum^\infty_{k=1}k^m\frac{|I|^{k-1/2}}{\sqrt{k!(k-1)!}}. $$ Now, if $k\leq 3m$ we use the rough bound $$ k^m\frac{|I|^{k-1/2}}{\sqrt{k!(k-1)!}}\leq |I|^{-1/2}(3m|I|^{3})^m. $$ If $k\geq 3m$, we use the bound \begin{equation*} k^m\frac{|I|^{k-1/2}}{\sqrt{k!(k-1)!}} \leq \frac{k^m|I|^{k-1/2}}{(k-2m)^{2m-1/2}(k-2m)!} \leq \frac{3^m|I|^{k-1/2}}{(k-2m)^{m-1/2}(k-2m)!}. \end{equation*} Hence, $$ \mu_{m}\leq (3m)^{m+1}\,|I|^{3m-1/2} +3^{m}|I|^{3m-1/2}R(m), $$ where $R(m)=\sum^\infty_{k=0}\tfrac{|I|^{k}}{(k+m)^{m-1/2}(k+m)!}$ is bounded and decreasing with $m$. Thus, $$ \mu_m\leq (Const)(3m|I|^3)^{m+1}. $$ Therefore, $$ \sum^\infty_{m=1}\mu^{-\frac{1}{2m}}_{2m}\geq \sum^\infty_{m=1}\frac{1}{(3|I|^{3}2m)^{(2m+1)/2m}}, $$ which is indeed divergent. The desired result follows.
\end{proof} \subsection{Convergence of factorial moments}\label{S2} We start with a basic fact about the non-degeneracy of the finite-dimensional distributions of the process $W$ appearing in Theorem \ref{th}. Recall also $D_m$ from (\ref{hyper}). \begin{lemma}\label{lemma:non-deg} For $\mathbf t=(t_1,\ldots,t_m) \in I^m$, let us consider the covariance matrix $\Sigma({\bf t})$ of the Gaussian vector \begin{equation*} V(\mathbf t)=(W(t_1),\ldots,W(t_m);W'(t_1),\ldots,W'(t_m)). \end{equation*} If $\mathbf t\not\in D_m$ then $\Sigma({\bf t})$ is invertible. \end{lemma} \begin{proof} We shall use the method of Cram\'{e}r and Leadbetter \cite{cl}, see also Exercise 3.4 in Aza\"\i s and Wschebor \cite{aw}. Note that the spectral density of $W$ is $ f(\lambda)=\frac12\,\mathbf 1_{[-1,1]}(\lambda)$. We want to study the strict positivity of $F({\mathbf z})={\mathbf z}^T\,\Sigma\,{\mathbf z},$ where $$ \mathbf z =(\mathbf z_1,\mathbf z_2)=(z^1_1,z^1_2,\ldots,z^1_m;z^2_1,z^2_2,\ldots,z^2_m). $$ With obvious notation, it holds that $$ F({\mathbf z})= \mathbf z_1^T\,\Sigma_{11}\,\mathbf z_1+\mathbf z_1^T\,\Sigma_{12}\, {\bf z}_2+{\bf z}_2^T\,\Sigma_{21}\,\mathbf z_1+\mathbf z_2^T\,\Sigma_{22}\,\mathbf z_2.
$$ We have \begin{eqnarray*} 2\,\mathbf z_1^T\,\Sigma_{11}\,\mathbf z_1&=&\int_{-1}^1\sum_{j=1}^m\sum_{l=1}^m e^{i(t_l-t_j)\lambda}z_l^1z_j^1d\lambda=\int_{-1}^1\left|\sum_{l=1}^m e^{i t_l\lambda}z^1_l\right|^2 d\lambda\\ &=:& \int_{-1}^1|P_1( \mathbf t,\mathbf z_1,\lambda)|^2 d\lambda,\\ 2\,\mathbf z_2^T\,\Sigma_{22}\,\mathbf z_2&=&\int_{-1}^1\sum_{j=1}^m\sum_{l=1}^m e^{i(t_l-t_j)\lambda}z_l^2z_j^2\lambda^2 d\lambda=:\int_{-1}^1|P_2( \mathbf t,\mathbf z_2,\lambda)|^2\lambda^2 d\lambda,\\ 2\,\mathbf z_1^T\,\Sigma_{12}\,\mathbf z_2&=&\int_{-1}^1\sum_{j=1}^m\sum_{l=1}^m e^{i(t_l-t_j)\lambda}z_l^1z_j^2 i\lambda d \lambda =\int_{-1}^1 P_1( \mathbf t,\mathbf z_1,\lambda) \overline{P_2( \mathbf t,\mathbf z_2,\lambda)}i\lambda d\lambda,\\ 2\,\mathbf z_2^T\,\Sigma_{21}\,\mathbf z_1&=&-\int_{-1}^1 P_2( \mathbf t,\mathbf z_2,\lambda) \overline{P_1( \mathbf t,\mathbf z_1,\lambda)}i\lambda d\lambda. \end{eqnarray*} As a result, $$ F(\mathbf z)=\frac12\int_{-1}^1 |P_1( \mathbf t,\mathbf z_1,\lambda)-iP_2( \mathbf t,\mathbf z_2,\lambda)\lambda|^2 d\lambda. $$ Since the function inside the modulus in the right-hand side is analytic in $\lambda$ and cannot vanish identically when the $t_i$'s are pairwise distinct and ${\bf z}\neq 0$, its zeros are isolated and we deduce that $F({\bf z})>0$ for all ${\bf z}\neq 0$. This is the desired conclusion. \end{proof} With the most technical proofs postponed to Section 3, we are now in a position to check the convergence of moments. \begin{proposition}\label{prop:conv} For all $m\in\mathbb N$, one has \begin{equation*} \mathbb E[Z_{X_N}(I)^{[m]}]\to \mathbb E[Z_{W}(I)^{[m]}]\quad \mbox{as $N\to\infty.$} \end{equation*} \end{proposition} \begin{proof} Fix $m\in\mathbb{N}^*$ and a compact interval $I\subset\mathbb R$.
By the forthcoming Corollary \ref{cor:Rice-trunc} we know that, for any $\varepsilon>0$, \begin{eqnarray*} \mathbb E\left(Z_{X_N}(I)^{[m]}\right) &=&\int_{I^m\setminus D^{\,\varepsilon}_m}\int_{\mathbb R^m}\prod_{i=1}^m |y_i| p_{N,{\textbf t}}(\textbf{0},\textbf{y})d\textbf{y}d\textbf{t}+O(\varepsilon^{\frac{1}{5}}), \end{eqnarray*} where $p_{N,\textbf{t}}(\textbf{0},\textbf{y})$ is the density of the vector $V_N({\textbf t})$ (see (\ref{vn})) evaluated at the point $(0,\ldots,0;y_1,\ldots,y_m)$, and $O(\cdot)$ is uniform with respect to $N$. To complete the proof, we shall use a {\it local} central limit theorem to guarantee the pointwise convergence of $p_{N,{\textbf t}}$ and then take the limit under the integral. Denote by $\gamma_{{\bf t}}$ the density of the centered Gaussian vector with covariance $\Sigma({\bf t})$, where ${\bf t}$ and $\Sigma({\bf t})$ are as in Lemma \ref{lemma:non-deg}. Thanks to Lemma \ref{lemma:llt}, the sequence of functions $(p_{N,{\textbf t}})_{N\geq N_0}$ is equicontinuous and equibounded. Moreover, since by the CLT $V_N({\bf t})$ converges in law to $N(0,\Sigma({\bf t}))$, the limit of any convergent subsequence of $(p_{N,{\textbf t}})_{N\geq N_0}$ is necessarily $\gamma_{{\bf t}}$, by Scheff\'e's theorem. By a compactness argument, it follows that $p_{N,{\textbf t}}$ converges uniformly on each compact subset of $\mathbb R^{2m}$ towards $\gamma_{{\bf t}}$. Besides, the bound \eqref{controleinfini} gives the suitable domination to pass to the limit under the integral sign in the Rice formula \eqref{eq:Rice2}. We obtain \begin{equation}\label{presquefini} \int_{I^m\setminus D^{\,\varepsilon}_m} \int_{\mathbb R^m}\prod_{i=1}^m |y_i| p_{N,{\textbf t}}(\textbf{0},\textbf{y})d\textbf{y}d\textbf{t} \mathop{\to}\limits_{N\to\infty} \int_{I^m\setminus D^{\,\varepsilon}_m}\int_{\mathbb R^m}\prod_{i=1}^m |y_i| \gamma_{{\bf t}}(\textbf{0},\textbf{y}) d\textbf{y} d\textbf{t}.
\end{equation} Letting $\varepsilon \to 0$, we reach the desired result, namely \begin{eqnarray*} \lim_{N\to\infty}\mathbb E\left(Z_{X_N}(I)^{[m]}\right)&=&\int_{I^m}\int_{\mathbb R^m}\prod_{i=1}^m |y_i| \gamma_{{\bf t}}(\textbf{0},\textbf{y}) d\textbf{y} d\textbf{t}\\ &=&\mathbb E(Z_{W}(I)^{[m]}). \end{eqnarray*} \end{proof} \section{Auxiliary results} \subsection{Upper bounds for the density $p_{N,{\bf t}}$ and its gradient} The next lemma provides useful bounds for the density $p_{N,{\bf t}}$ of $V_N({\bf t})$, given by (\ref{vn}), as well as for its gradient, in the case where $N$ is large enough and ${\bf t}$ does not belong to the $\varepsilon$-enlargement of the set $D_m$ of hyper-diagonals. \begin{lemma}\label{lemma:llt} Fix $\varepsilon>0$. Then, for any ${\bf t}\in I^m\setminus D^{\,\varepsilon}_m $ and any $N$ large enough (greater than or equal to some $N_0$, say), the density $p _{N, \mathbf t}$ exists, is $\mathcal{C}^1$ and satisfies, for all $\mathbf{x},\mathbf{y}\in\mathbb R^{m}$, \begin{eqnarray}\label{controleinfini} \sup_{ \mathbf t \in I^m\setminus D^{\,\varepsilon}_m}\,\,\sup_{N\geq N_0} \,\, p _{N, \mathbf t}(\mathbf{x},\mathbf{y}) &\le& \frac{C_\varepsilon }{(1+y_1^4)\cdots (1+y_{m}^4)}\leq C_\varepsilon;\\ \label{lipschitz} \sup_{ \mathbf t \in I^m\setminus D^{\,\varepsilon}_m}\,\,\sup_{N\geq N_0} \,\, \|\nabla p_{N, \mathbf t} \|_\infty &\le& C_\varepsilon. \end{eqnarray} We recall that $C_\varepsilon$ denotes a constant that only depends on $\varepsilon$ and whose value may change from one occurrence to another. \end{lemma} \begin{remark} {\rm A careful inspection of the proof would show that the denominator in the right-hand side of (\ref{controleinfini}) may be replaced by any other polynomial in ${\bf x}$ and ${\bf y}$. But because we will only need the one appearing in (\ref{controleinfini}), for simplicity we decided not to state Lemma \ref{lemma:llt} in such a general setting.
} \end{remark} Our proof of Lemma \ref{lemma:llt} will rely heavily on the following celebrated theorem due to Paul Malliavin \cite{Mallia}. \begin{theorem}[Malliavin]\label{paul} Let $\mu$ be a finite signed measure over $\mathbb R^{2m}$. \begin{itemize} \item[(i)] If, for all $i=1,\ldots,2m$, there exists a constant $C_i$ such that, for any $\phi \in \mathcal{C}^\infty_c(\mathbb R^{2m},\mathbb R)$, \begin{equation}\label{criteredensite} \left|\int_{\mathbb R^{2m}} \partial_i \phi (x) d\mu(x)\right|\leq C_i \|\phi\|_\infty, \end{equation} then $\mu$ is absolutely continuous with respect to the Lebesgue measure of $\mathbb R^{2m}$. \item[(ii)] If, for any multi-index $\alpha$, there exists a constant $C_\alpha>0$ such that, for any $\phi \in \mathcal{C}^\infty_c(\mathbb R^{2m},\mathbb R)$, \begin{equation}\label{regudensi} \left|\int_{\mathbb R^{2m}} \partial_\alpha \phi (x) d\mu(x)\right|\leq C_\alpha \|\phi\|_\infty, \end{equation} then $\mu$ admits a density in the class $\mathcal{C}^\infty_b(\mathbb R^{2m},\mathbb R)$ of $C^\infty$ functions which are bounded together with all their derivatives. \end{itemize} \end{theorem} Actually, we shall rather use the following criterion, which is an immediate consequence of Theorem \ref{paul}. \begin{corollary}\label{criteresequentiel} Consider a sequence of finite signed measures $\mu_N$ over $\mathbb R^{2m}$ such that, for any multi-index $\alpha$, we may find a constant $C_{\alpha}>0$ such that, for any $\phi \in \mathcal{C}^\infty_c(\mathbb R^{2m},\mathbb R)$, \begin{equation}\label{criteresequentieleq} \sup_N \left|\int_{\mathbb R^{2m}} \partial_\alpha \phi (x) d{\mu_N}(x)\right|\leq C_\alpha \|\phi\|_\infty. \end{equation} Then, the sequence of densities of the $\mu_N$ is uniformly bounded (by a constant only depending on the $C_\alpha$) in the (nuclear) topology of $\mathcal{C}^\infty_b(\mathbb R^{2m},\mathbb R)$. \end{corollary} We are now in a position to prove Lemma \ref{lemma:llt}.
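Before entering the proof, here is a quick one-dimensional quadrature check (illustrative, not part of the paper) of the kind of integration by parts that drives the whole strategy: with $\rho=e^{-\Psi}$ the standard Gaussian density, $\Gamma[f,g]=f'g'$ and $L[g]=g''+g'\rho'/\rho$, one should find $\int \Gamma[f,g]\,\rho\,dx=-\int f\,L[g]\,\rho\,dx$; the test functions $f=g=x^2$ below are arbitrary choices:

```python
import numpy as np

# rho = e^{-Psi} with Psi(x) = x^2/2 + log(sqrt(2*pi)): the standard Gaussian density
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
rho = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

# hypothetical test functions f = g = x^2 (any smooth functions of moderate growth would do)
f, fp = x**2, 2.0 * x
g, gp, gpp = x**2, 2.0 * x, np.full_like(x, 2.0)

# Gamma[f, g] = f' g'  and  L[g] = g'' + g' * rho'/rho, with rho'/rho = -x here
lhs = np.sum(fp * gp * rho) * dx
rhs = -np.sum(f * (gpp + gp * (-x)) * rho) * dx
print(lhs, rhs)   # both should be close to 4
```

Both sides evaluate $\mathbb E[4X^2]=4$ for $X\sim N(0,1)$, matching the duality between $\Gamma$ and $L$ used repeatedly below.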
First, let us introduce the integration by parts formalism. For any pair $(\Psi_1,\Psi_2)$ of $\mathcal{C}^1(\mathbb R^{2m},\mathbb R)$ functions, let us set \begin{equation}\label{Gamma} \Gamma[\Psi_1(\textbf{a},\textbf{b}),\Psi_2(\textbf{a},\textbf{b})]=\sum_{i=1}^{2m}\partial_i \Psi_1(\textbf{a},\textbf{b})\partial_i \Psi_2(\textbf{a},\textbf{b}). \end{equation} Also, for $F\in \mathcal{C}^2(\mathbb R^{2m},\mathbb R)$, set \begin{equation}\label{Generateur} L[F(\textbf{a},\textbf{b})]=\sum_{i=1}^{2m} \partial^2_{i,i} F(\textbf{a},\textbf{b})+\sum_{i=1}^{m} \partial_i F(\textbf{a},\textbf{b})\frac{\rho'(a_i)}{\rho(a_i)}+\sum_{i=m+1}^{2m}\partial_i F(\textbf{a},\textbf{b})\frac{\rho'(b_{i-m})}{\rho(b_{i-m})}. \end{equation} In order to simplify the notation in (\ref{Gamma})-(\ref{Generateur}), let us use here and throughout the text the shorthand notation $\textbf{a}=(a_1,\cdots,a_m)$ and $\textbf{b}=(b_1,\cdots,b_m)$. Besides, in the sequel $(\textbf{a},\textbf{b})$ will denote the $2m$-uple $(x_1,\cdots,x_{2m})$ such that $(x_1,\cdots,x_m)=\textbf{a}$ and $(x_{m+1},\cdots,x_{2m})=\textbf{b}$. The key relationship between the operators $L$ and $\Gamma$ is the following integration by parts formula: for any $\Psi_1 \in\mathcal{C}^1_b(\mathbb R^{2m},\mathbb R)$ and $\Psi_2 \in \mathcal{C}^2_b(\mathbb R^{2m},\mathbb R)$, it holds that \begin{eqnarray}\notag &&\int_{\mathbb R^{2m}}\Gamma\left[\Psi_1(\textbf{a},\textbf{b}),\Psi_2(\textbf{a},\textbf{b})\right]\rho(a_1)\rho(b_1)\ldots \rho(a_m)\rho(b_m)d\textbf{a}d\textbf{b}\\ &=&-\int_{\mathbb R^{2m}}\Psi_1(\textbf{a},\textbf{b}) L\left[\Psi_2(\textbf{a},\textbf{b})\right]\rho(a_1)\rho(b_1)\ldots \rho(a_m)\rho(b_m)d\textbf{a}d\textbf{b}.\label{IPP} \end{eqnarray} Let us apply this formalism to our problem. Fix ${\bf t}=(t_1,\cdots,t_m)\in I^m\setminus D^{\,\varepsilon}_m$.
We have, for any $i\neq j$, \begin{eqnarray*} \Gamma[X_N(t_i),X_N(t_i)]&=& 1,\\ \Gamma[X_N'(t_i),X_N(t_i)]&=& 0,\\ \Gamma[X_N'(t_i),X_N'(t_i)]&=& \frac{1}{N}\sum_{n=1}^N \frac{n^2}{N^2},\\ \Gamma[X_N(t_i),X_N(t_j)]&=& \frac{1}{N}\sum_{n=1}^N \cos\left(\frac{ n (t_i-t_j)}{N}\right),\\ \Gamma[X_N(t_i),X_N'(t_j)]&=& \frac{1}{N}\sum_{n=1}^N \frac{n}{N}\sin\left(\frac{ n (t_i-t_j)}{N}\right),\\ \Gamma[X_N'(t_i),X_N'(t_j)]&=& \frac{1}{N}\sum_{n=1}^N \frac{n^2}{N^2}\cos\left(\frac{ n (t_i-t_j)}{N}\right). \end{eqnarray*} Recall from (\ref{vn}) that $V_N({\bf t})=(X_N(t_1),\cdots,X_N(t_m);X_N'(t_1),\cdots,X_N'(t_m))$. Let us also denote by $\widehat{\Gamma}_N$ the \textit{Malliavin matrix} associated with $V_N$: $$\widehat{\Gamma}_N({\bf t})=\big(\Gamma[ V_N({\bf t})_i, V_N({\bf t})_j]\big)_{1\le i,j \le 2m}.$$ \begin{remark} {\rm At this stage, it is worthwhile noting that many other choices for the operators $\Gamma$ and $L$ could have led to integration by parts formulas. However, the choice we made here seems to be the only reasonable one leading to a {\it deterministic} (that is, independent of the $a_n,b_n$) Malliavin matrix. As we will see, this determinacy will turn out to be very helpful and will play an important role in our reasoning. } \end{remark} Consider $r_N(x) = \frac{1}{N}\sum_{n=1}^N \cos ( \frac{n}{N} x )$ and recall that $\text{sc}(x)=\sin x/x$. It is proved in \cite[Section 3.1]{granville} that $r_N,r_N',r_N''$ converge toward $\text{sc},\text{sc}',\text{sc}''$ uniformly on every compact set as $N\to\infty$. As a result, the matrix $\widehat{\Gamma}_N({\bf t})$ converges uniformly over $I^m$ towards the matrix $ \Sigma({\bf t})$ of Lemma \ref{lemma:non-deg}. Still by Lemma \ref{lemma:non-deg}, the determinant of $\Sigma({\bf t})$ is non-zero on $I^m\setminus D_m$. Fix $\varepsilon>0$.
By a classical compactness argument, we deduce from the previous fact that there exist $N_0=N_0(\varepsilon)\in\mathbb{N}$ and $\eta=\eta(\varepsilon)>0$ such that $$ \forall N\geq N_0,\,\,\forall {\bf t}\in I^m \setminus D_m^{\,\varepsilon}:\quad \det(\widehat{\Gamma}_N({\bf t}))\geq \eta. $$ The following class of random variables will naturally appear in the weights produced by repeatedly applying the integration by parts formula (\ref{IPP}). Set \begin{eqnarray*} \mathcal{A}_0(N,{\bf t})&=&\left\{X_N(t_1),\cdots,X_N(t_m),X_N'(t_1),\cdots,X_N'(t_m)\right\},\\ \mathcal{A}_1(N,{\bf t}) &=&L (\mathcal{A}_0(N,{\bf t})) \cup \Gamma[\mathcal{A}_0(N,{\bf t}),\mathcal{A}_0(N,{\bf t})]\\ &\vdots&\\ \mathcal{A}_{r+1}(N,{\bf t})&=&L(\mathcal{A}_r(N,{\bf t})) \cup \Gamma[\mathcal{A}_r(N,{\bf t}),\mathcal{A}_r(N,{\bf t})]\\ &\vdots&\\ \mathcal{A}_\infty (N,{\bf t})&=& \bigcup_{r=0}^\infty \mathcal{A}_r(N,{\bf t}), \end{eqnarray*} where, for a given set $\mathcal{E}=\{e_1,\cdots,e_s\}$, $L(\mathcal{E})$ denotes $\{ L[e_1],\cdots,L[e_s]\}$ while $\Gamma[\mathcal{E},\mathcal{E}]$ stands for $\{\Gamma[e_i,e_j]\,|\,1\le i,j\le s\}$. To complete the proof of Lemma \ref{lemma:llt}, we will need the following result. \begin{lemma}\label{superborne} Suppose that assumption (\ref{Hypo-densite}) holds. Then, for any $s\geq 2$ there exists a constant $C_s>0$ such that, for any $N$, any ${\bf t}\in I^m$ and any $U_N({\bf t})\in\mathcal{A}_\infty (N,{\bf t})$, \begin{equation}\label{megabornitude} \mathbb E[|U_N({\bf t})|^s]\leq C_s. \end{equation} \end{lemma} \begin{proof} \underline{Step 1}: {\it explicit description of the elements of $\mathcal{A}_\infty(N,{\bf t})$}.
Let us establish by induction that, for any $r\ge 0$, the elements of $\mathcal{A}_r(N,{\bf t})$ are of the form \begin{equation}\label{i} \frac{1}{N^p}\sum_{n=1}^N \frac{n^q}{N^q} \left( \alpha f(a_n) \phi_{n,N}(\textbf{t}) + \beta f (b_n) \psi_{n,N}(\textbf{t})\right), \end{equation} with $\alpha,\beta \in \{-1,1\}$, $p\in\{\frac12\}\cup[1,\infty)$, $q\in \mathbb N$, $f$ some $\mathcal{C}^\infty$ function such that $\displaystyle{f^{(l)}(a_1)\in \bigcap_{s>1} L^s(\Omega)}$ for any $l\ge 0$, and where $\phi_{n,N}(\textbf{t}),\psi_{n,N}(\textbf{t})$ are products of at most $2^{2^r}$ terms among $\cos(\frac{n t_i}{N}),\sin(\frac{n t_i}{N})$, $1\le i\le m$. Moreover, when $p=\frac12$ in (\ref{i}), we have that $\mathbb E[f(a_1)]=0$. It is immediate that the elements of $\mathcal{A}_0(N,{\bf t})$ are of the form (\ref{i}). Assume now that the above description of $\mathcal{A}_r(N,{\bf t})$ has been established up to rank $r \ge 0$. Applying $L$ to some element of $\mathcal{A}_r(N,{\bf t})$ of the form (\ref{i}) leads to \begin{eqnarray*} &&\frac{1}{N^p}\sum_{n=1}^N \frac{n^q}{N^q} \left(\alpha L[f(a_n)] \phi_{n,N}(\textbf{t})+ \beta L[f(b_n)] \psi_{n,N}(\textbf{t})\right)\\ &=&\frac{1}{N^p}\sum_{n=1}^N \frac{n^q}{N^q} \left(\alpha \left(f''(a_n)+f'(a_n)\frac{\rho'(a_n)}{\rho(a_n)}\right) \phi_{n,N}(\textbf{t}) \right.\\ &&\left.\hskip2.4cm+ \beta \left(f''(b_n)+f'(b_n)\frac{\rho'(b_n)}{\rho(b_n)}\right) \psi_{n,N}(\textbf{t})\right), \end{eqnarray*} which is again of the form (\ref{i}). Indeed, $\mathbb E(L[f(a_1)])=-\mathbb E(\Gamma[1,f(a_1)])=0$ and, by our assumptions on $f$ and $\frac{\rho'}{\rho}$, it holds that $g:=f''+f' \frac{\rho'}{\rho}$ satisfies $\displaystyle{g^{(l)}(a_1)\in \bigcap_{s>1} L^s(\Omega)}$ for any $l\ge 0$.
Now, applying the bilinear form $\Gamma$ to two elements of $\mathcal{A}_r(N,{\bf t})$, say \begin{eqnarray*} U_N(\textbf{t})&=&\frac{1}{N^p}\sum_{n=1}^N \frac{n^q}{N^q} \left( \alpha f(a_n) \phi_{n,N}(\textbf{t})+ \beta f (b_n) \psi_{n,N}(\textbf{t})\right)\\ V_N(\textbf{t})&=&\frac{1}{N^{p'}}\sum_{n=1}^N \frac{n^{q'}}{N^{q'}} \left( \alpha' g(a_n) \widetilde{\phi_{n,N}}(\textbf{t}) + \beta' g(b_n) \widetilde{\psi_{n,N}}(\textbf{t})\right), \end{eqnarray*} leads to \begin{eqnarray*} \Gamma[U_N(\textbf{t}),V_N(\textbf{t})]&=& \frac{1}{N^{p+p'}}\sum_{n=1}^N \frac{n^{q+q'}}{N^{q+q'}}\left(\alpha \alpha' f'(a_n)g'(a_n)\phi_{n,N}(\textbf{t})\widetilde{\phi_{n,N}}(\textbf{t})\right.\\ &&\left.\hskip3cm+\beta \beta' f'(b_n)g'(b_n)\psi_{n,N}(\textbf{t})\widetilde{\psi_{n,N}}(\textbf{t})\right). \end{eqnarray*} We easily observe that $f' g'$, together with all its derivatives, belongs to $\bigcap_{s>1} L^s(\Omega)$, that $\alpha\alpha', \beta\beta' \in \{-1,1\}$ and that $\phi_{n,N}(\textbf{t})\widetilde{\phi_{n,N}}(\textbf{t})$ and $\psi_{n,N}(\textbf{t})\widetilde{\psi_{n,N}}(\textbf{t})$ both contain at most $2^{2^{r+1}}$ terms. ~\\\\ \underline{Step 2}: {\it bounding the elements of $\mathcal{A}_\infty(N,{\bf t})$}. Fix $s\geq 2$, and let us consider an element of $\mathcal{A}_\infty(N,{\bf t})$ of the form (\ref{i}): $$U_N(\textbf{t})=\frac{1}{N^p}\sum_{n=1}^N \frac{n^q}{N^q} \left( \alpha f(a_n) \phi_{n,N}(\textbf{t})+ \beta f (b_n) \psi_{n,N}(\textbf{t})\right).$$ Relying on Step 1, we may infer that $\sup_{\textbf{t}\in I^m} \left(\left|\phi_{k,N}(\textbf{t})\right|+\left|\psi_{k,N}(\textbf{t})\right|\right)\le 2$. As a result, when $p\geq 1$ and using the triangle inequality for the norm $\|\cdot\|_s$, one can write \begin{eqnarray*} \left\|U_N(\textbf{t})\right\|_s\le \left(\frac{4}{N^p}\sum_{k=1}^N\frac{k^q}{N^q}\right)\left\|f(a_1)\right\|_s \le\frac{4}{N^{p-1}}\left\|f(a_1)\right\|_s\le 4\left\|f(a_1)\right\|_s.
\end{eqnarray*} Let us now consider the situation where $p=\frac12$, and recall that $\mathbb E[f(a_1)]=0$ in this case, implying in turn that $\mathbb E[U_N({\bf t})]=0$. Due to this latter property, the partial sums (in $k$, for fixed $N$) of $$ M_N=\sum_{k=1}^N k^q\left\{\alpha f(a_k) \phi_{k,N}(\textbf{t})+ \beta f(b_k) \psi_{k,N}(\textbf{t})\right\}$$ form a martingale. The $s/2$-moment of its quadratic variation can be bounded as follows: \begin{eqnarray*} &&\mathbb E\big[\langle M_N, M_N\rangle^{s/2}\big] \\ &\le& \mathbb E\left[\left( 2\sum_{k=1}^N k^{2q}\left\{f^2(a_k) \phi_{k,N}^2(\textbf{t})+ f^2(b_k) \psi_{k,N}^2(\textbf{t})\right\}\right)^{s/2} \right] \\ &\le& C_s\,N^{(q+\frac12)s}\,\,\mathbb E[|f(a_1)|^s]. \end{eqnarray*} Applying the Burkholder-Davis-Gundy inequality to $M_N$ leads to \begin{eqnarray*} \mathbb E\left(|M_N|^s\right)&\le& C_s \,\mathbb E\left(\langle M_N, M_N\rangle^{\frac s 2}\right)=\mathcal{O}\left(N^{(q+\frac12)s}\right). \end{eqnarray*} Finally, \begin{eqnarray*} \mathbb E\left(\left|U_N(\textbf{t})\right|^s\right)=\mathbb E\left(\left|\frac{M_N}{N^{q+\frac{1}{2}}}\right|^s\right)=\mathcal{O}\left(1\right), \end{eqnarray*} where the last bound is uniform in $\textbf{t}\in I^m$. This concludes the proof in the case $p=\frac12$ as well. \end{proof} Now that Lemma \ref{superborne} has been established, let us turn to the proof of Lemma \ref{lemma:llt}. \begin{proof} For any $\Phi\in\mathcal{C}^1(\mathbb R^{2m},\mathbb R)$, the chain rule for $\Gamma$ leads to \begin{eqnarray*} \Gamma[\Phi(V_N({\bf t})),V_N({\bf t})_j]&=&\sum_{i=1}^{2m} \partial_i \Phi (V_N({\bf t})) \Gamma[V_N({\bf t})_i,V_N({\bf t})_j]. \end{eqnarray*} As a result, setting $W_N({\bf t})$ to be the vector $(\Gamma[\Phi(V_N({\bf t})),V_N({\bf t})_j])_{1\le j \le 2m}$, the previous equation can be written as $$ W_N({\bf t}) = \widehat{\Gamma}_N({\bf t})\times \nabla \Phi ( V_N ({\bf t})).
$$ Recalling that $\widehat{\Gamma}_N({\bf t})$ is invertible on $I^m \setminus D_m^{\,\varepsilon}$ for $N\geq N_0$, it follows that \begin{equation}\label{inverse} \nabla \Phi(V_N({\bf t})) = \widehat{\Gamma}_N({\bf t})^{-1} \times W_N({\bf t}). \end{equation} Fix $\varepsilon>0$, ${\bf t}\in I^m\setminus D_m^\varepsilon$, a multi-index $\alpha$, a polynomial $Q:\mathbb R^{2m}\to \mathbb R$ and a test function $\Psi$. One has, with $\theta=\partial_{\alpha\setminus\{\alpha_1\}} \Psi,$ \begin{eqnarray*} &&\mathbb E\left(Q(V_N({\bf t}))\partial_\alpha\Psi(V_N({\bf t}))\right) =\mathbb E\left(Q(V_N({\bf t})) \partial_{\alpha_1} \theta(V_N({\bf t}))\right)\\ &=&\sum_{j=1}^{2m}(\widehat{\Gamma}_N({\bf t})^{-1})_{\alpha_1,j}\,\mathbb E\left(Q(V_N({\bf t}))\Gamma[\theta(V_N({\bf t})),V_N({\bf t})_j]\right)\\ &=&-\sum_{j=1}^{2m}(\widehat{\Gamma}_N({\bf t})^{-1})_{\alpha_1,j}\,\mathbb E\left(Q(V_N({\bf t}))L[V_N({\bf t})_j]\theta(V_N({\bf t}))\right)\\ &&-\sum_{j=1}^{2m}(\widehat{\Gamma}_N({\bf t})^{-1})_{\alpha_1,j}\,\mathbb E\left(\Gamma[Q(V_N({\bf t})),V_N({\bf t})_j]\theta(V_N({\bf t}))\right)\\ &\vdots&\\ &=& \mathbb E(\Psi(V_N({\bf t})) H_N({\bf t}) ), \end{eqnarray*} where $H_N({\bf t})$ is an element of the algebra generated by $\mathcal{A}_\infty(N,{\bf t})$. Here, we applied \eqref{inverse} in the second equality and \eqref{IPP} in the third equality; also, we used routine calculus to deal with terms of the form $\mathbb E(\Psi_1\Gamma[\Psi_2,\Psi_3])$. By virtue of Lemma \ref{superborne}, $\sup_{N}\sup_{{\bf t}\in I^m\setminus D_m^{\,\varepsilon}}\mathbb E\big[|H_N({\bf t})|^q\big]<\infty$ for any $q$. Finally, choosing $Q({\bf x},{\bf y})=(1+y_1^4)\ldots(1+ y_m^4)$ yields $$\sup_{N\ge N_0} \left|\int \partial_\alpha \Psi(x)d\mu_N(x)\right|\leq C_\alpha \|\Psi\|_\infty$$ with $\mu_N$ the measure with density $(1+y_1^4)\ldots (1+y_m^4)p_{N,{\textbf t}}(\textbf{x},\textbf{y}) d \textbf{x} d\textbf{y}.$ Since this is precisely condition (\ref{criteresequentieleq}), Corollary \ref{criteresequentiel} implies that (\ref{controleinfini}) holds.
On the other hand, using the same criterion with $Q=1$ leads this time to (\ref{lipschitz}). \end{proof} \subsection{Rice's Formulas} Rice's formulas are integral formulas for the (factorial) moments of the number of crossings of a stochastic process within a given interval. They hold for Gaussian processes under minimal hypotheses. However, since we are here dealing with {\it non}-Gaussian processes, we have to be careful and check their validity. General results allowing one to do so exist in the literature (see, e.g., Theorem 3.4 in \cite{aw} or Theorem 11.2.1 in \cite{adler}), but they rely on rather heavy conditions. This is why we prefer here to give a simple proof for smooth processes, well suited for our needs. \begin{proposition}[Rice's Formula for smooth processes] \label{prop:Rice} Let $m$ be a positive integer, let $I$ be a compact interval and let $Y$ be a process satisfying that: \begin{enumerate} \item[A1.] it has $ \mathcal{C}^1$ sample paths; \item[A2.] the one-dimensional density $y\mapsto p_{Y(t)}(y) $ is uniformly bounded for $t \in I$ and for $y$ in a neighborhood of a certain level $u$; \item[A3.] the number $Z_{\dot Y}(I)$ of zeros of the derivative $\dot Y$ of $Y$ within $I$ admits a moment of order $m$; \item[A4.] for any pairwise disjoint intervals $J_1, \ldots, J_m$ included in $I$, the Rice function \begin{eqnarray*} && B_{J_1\times\cdots \times J_m} (v_1,\ldots,v_m) \\ &=& \int_{J_1\times\cdots \times J_m} \mathbb E(|Y'(t_1)|\ldots |Y'(t_m)|\big| Y(t_1)=v_1,\ldots, Y(t_m)=v_m)\\ &&\hskip4.2cm \times p_{Y(t_1),\ldots, Y(t_m)} (v_1,\ldots,v_m) dt_1\ldots dt_m \end{eqnarray*} is well defined and continuous at $(u,\ldots,u)$.
\end{enumerate} Then $Y$ satisfies the $m^{\mbox{th}}$ Rice's formula, that is, \begin{enumerate} \item[(i)] for any pairwise disjoint intervals $J_1, \ldots, J_m$ included in $I$, $$ \mathbb E (Z^u_Y(J_1) \times \cdots \times Z^u_Y(J_m)) = B_{J_1\times\cdots \times J_m} (u, \ldots,u); $$ \item[(ii)] \begin{equation*} \mathbb E(Z^u_Y(I)^{[m]} )=B_{I^m}(u, \ldots,u), \end{equation*} where $Z^u_Y(I)$ denotes the number of crossings of the level $u$ within the interval $I$. \end{enumerate} \end{proposition} \begin{corollary}\label{cor:Rice-trunc} For $\varepsilon>0$, we have \begin{equation*} \mathbb E(Z_{X_N}(I) ^{[m]}) =\int_{I^m\setminus D^{\varepsilon}_m} \int_{\mathbb R^m}|y_1|\ldots |y_m | \,p_{N,{\mathbf t}} ({\mathbf 0};{\mathbf y}) d{\mathbf y} d{\mathbf t}+ O(\varepsilon^{1/5}). \end{equation*} The constants involved in the Landau notation depend on $m$ and $I$ but \underline{not} on $N$. \end{corollary} Let us first prove Proposition \ref{prop:Rice}. \begin{proof}[Proof of Proposition \ref{prop:Rice}] We begin with the case $m=1$. By assumption, the process $Y$ has $ \mathcal{C}^1$ sample paths and $Y(t)$ admits a uniformly bounded density. Ylvisaker's theorem (see, e.g., \cite[Theorem 1.21]{aw}) implies that almost surely there is no point $t \in I$ such that $Y(t) =u$ and $Y'(t)=0$. As a consequence, the number of crossings of the level $u$ is almost surely finite and we can apply the Kac formula (Lemma 3.1 in \cite{aw}), according to which \begin{equation} \label{e:kac} Z^u_Y(I) =\lim_{\delta \to 0} Z^u_\delta(I), \end{equation} where $$ Z^u_\delta(I) := \frac {1} {2\delta} \int _I \mathbf{1}_{\{|Y(t) -u|\leq \delta\}} |Y'(t)| dt. $$ It is easy to check that $$ Z^u_\delta(I) \leq Z_{\dot Y}(I) +1 . $$ By dominated convergence in \eqref{e:kac}, we get that $$ \mathbb E [Z^u_Y(I)] = \lim_{\delta \to 0} \frac {1} {2\delta} \int_{u-\delta} ^{u+\delta} B_{I}(v) dv = B_{I}(u). $$ The last equality comes from the continuity of $B_{I}$ at $u$. 
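The Kac approximation \eqref{e:kac} can be illustrated numerically on a deterministic smooth function. The following is a sketch of ours (not part of the proof), assuming numpy; the function $\sin(3t)$, the level $u=0.5$ and all numerical parameters are arbitrary choices made only for illustration.

```python
import numpy as np

# Kac's counting formula: Z^u_delta(I) = (1/(2*delta)) * int_I 1{|Y(t)-u| <= delta} |Y'(t)| dt
# approximates the number of crossings of the level u as delta -> 0.
def kac_count(y, dy, t, u, delta):
    dt = t[1] - t[0]
    integrand = (np.abs(y - u) <= delta) * np.abs(dy)
    return integrand.sum() * dt / (2 * delta)

# Toy smooth "process": Y(t) = sin(3t) on I = [0, 2*pi] crosses u = 0.5 six times.
t = np.linspace(0.0, 2 * np.pi, 2_000_001)
y, dy = np.sin(3 * t), 3 * np.cos(3 * t)
u = 0.5

exact = int(np.sum(np.diff(np.sign(y - u)) != 0))  # direct sign-change count
approx = kac_count(y, dy, t, u, delta=1e-3)
print(exact, float(approx))
```

On this example the Kac approximation already agrees with the exact crossing count to a few decimal places for $\delta=10^{-3}$.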
We turn now to the case $m>1$. Let $C_u(I) $ denote the set of those $t\in I$ such that $Y(t) =u$. Since the set $D_m$ of hyperdiagonals of $I^m$ has Lebesgue measure zero and since $$ Z^u_Y (I)^{[m]} = {\rm Card}\big(C_u(I)^m\setminus D_m\big), $$ it is sufficient to prove $(i)$, that is, for pairwise disjoint intervals $J_1,\ldots, J_m$, \begin{equation}\label{e:rice} \mathbb E \big( Z^u_Y(J_1) \times \cdots \times Z^u_Y(J_m) \big) =B_{J_1\times\ldots\times J_m}(u,\ldots,u). \end{equation} The result $(ii)$ for $I^m$ will then follow from a standard approximation argument using the absolute continuity of the measure defined by the right-hand side. To prove \eqref{e:rice}, we use Kac's formula and dominated convergence, exactly as in the case $m=1$. \end{proof} Finally, let us prove Corollary \ref{cor:Rice-trunc}. It will rely on several lemmas, which may be of independent interest. \begin{lemma}\label{lm1} Fix a compact interval $I$ of length $|I|$. For any $N$ and $k$, \begin{equation}\label{81} \mathbb E\left[\sup_{t\in I}|X_N^{(k)}(t)|^2\right]\leq 2(1+|I|^2). \end{equation} \end{lemma} \begin{proof} Assume that $I=[a,b]$. The proof is divided into two steps.\\ {\it First step}. If $f:[a,b]\to\mathbb R$ is a $\mathcal{C}^1$ function, then we can straightforwardly check that, for $t\in [a,b]$, \begin{equation*} f(t)=\frac{1}{b-a}\int_a^{b}f(s)ds + \frac{1}{b-a}\int_a^{t}(s-a)f'(s)ds +\frac{1}{b-a}\int_t^{b}(s-b)f'(s)ds. \end{equation*} As a result, \begin{equation*} \sup_{t\in I}|f(t)|\leq \frac{1}{b-a}\int_a^{b}|f(s)|ds + \int_a^{b}|f'(s)|ds, \end{equation*} implying in turn, due to $(x+y)^2\leq 2x^2+2y^2$ and the Cauchy-Schwarz inequality, \begin{equation}\label{ine} \sup_{t\in I}|f(t)|^2\leq \frac2{b-a}\int_a^{b}f(s)^2ds + 2(b-a)\int_a^{b}f'(s)^2ds. \end{equation} {\it Second step}. 
For any $N$ and any $l$, \begin{eqnarray*} X_N^{(2l)}(t) &=& \frac{1}{\sqrt{N}}\sum_{j=1}^N a_j\,(-1)^l \left(\frac{j}{N}\right)^{2l} \cos(\frac{j}{N}t) + b_j\,(-1)^l \left(\frac{j}{N}\right)^{2l} \sin(\frac{j}{N}t) ,\\ X_N^{(2l+1)}(t) &=& \frac{1}{\sqrt{N}}\sum_{j=1}^N a_j\,(-1)^{l+1} \left(\frac{j}{N}\right)^{2l+1} \sin(\frac{j}{N}t) + b_j\,(-1)^l \left(\frac{j}{N}\right)^{2l+1} \cos(\frac{j}{N}t). \end{eqnarray*} As a consequence, for any $N$ and any $k$, \[ \mathbb E[X_N^{(k)}(t)^2] = \frac{1}{N}\sum_{j=1}^N \left(\frac{j}{N}\right)^{2k} \leq 1. \] The conclusion (\ref{81}) thus follows by plugging the previous inequality into (\ref{ine}). \end{proof} \begin{lemma}\label{lm3} For any interval $I$ of length $|I|$ and any integers $k\geq 1$ and $N\geq 1$, \begin{equation}\label{ineq1} \mathbb P(Z_{X_N}(I)\geq k)\leq (Const) (k!(k-1)!)^{-\frac12}\,|I|^{k-\frac1{2}}. \end{equation} In particular, for any $r>1$ and any interval $I$, we have the following uniform bound: \begin{equation}\label{ineq2} \mathbb E[Z_{X_N}(I)^r] \leq (Const) \left(\sum_{k=1}^\infty \frac{|I|^{\frac{2k-1}{2r}}}{(k!(k-1)!)^{\frac1{2r}}}\right)^r. \end{equation} \end{lemma} \begin{proof} Let us first concentrate on the inequality \eqref{ineq1}. Throughout its proof, we will need the following result which follows from the Lagrange formula for the difference between the function and its polynomial interpolation (which vanishes), see, e.g., \cite{davis}. \\ {\bf Claim}: Assume that $f:[a,b]\to\mathbb R$ is of class $\mathcal{C}^k$ ($k\geq 1$) and that there exist $x_1,\ldots,x_k\in[a,b]$ (possibly repeated) such that $f(x_1)=\ldots=f(x_k)=0$. Then there exist $y_1,\ldots,y_{k-1}\in[a,b]$ (possibly repeated) such that $f'(y_1)=\ldots=f'(y_{k-1})=0$; moreover, for all $x\in[a,b]$ there exist $\xi,\eta\in(a,b)$ such that \begin{equation}\label{rolle} f(x)=\frac{1}{k!}f^{(k)}(\xi)\prod_{i=1}^k(x-x_i) \quad \mbox{and}\quad f'(x)=\frac{1}{(k-1)!}f^{(k)}(\eta)\prod_{i=1}^{k-1}(x-y_i). 
\end{equation} Thanks to the conclusion of the previous claim, we can now decompose our probability of interest in a clever way, by introducing an extra parameter $M>0$ whose value will be optimized in the end. From (\ref{rolle}), one easily deduces that, if $\sup_{t\in I}|X_N^{(k)}(t)|\leq M$, then $|X_N(c)|\leq \frac{M |I|^k}{k!}$ and $|X'_N(c)|\leq \frac{M |I|^{k-1}}{(k-1)!}$, where $c$ denotes the midpoint of the interval $I$ (say). We thus have, writing $Z_N$ for $Z_{X_N}$ for brevity, \begin{eqnarray*} &&\mathbb P (Z_N(I)\geq k)\\ &\leq& \mathbb P\left(\sup_{t\in I}|X_N^{(k)}(t)|> M\right)+ \mathbb P\left(Z_N(I)\geq k,\,\sup_{t\in I}|X_N^{(k)}(t)|\leq M\right) \\ &\leq&\frac1{M^2}\,\mathbb E\left[\sup_{t\in I}|X_N^{(k)}(t)|^2\right] + \mathbb P\left( |X_N(c)|\leq \frac{M |I|^k}{k!},\,|X'_N(c)|\leq \frac{M |I|^{k-1}}{(k-1)!} \right)\\ &\leq& (Const) \Big( \frac{1}{M^2} + \frac{M^2|I|^{2k-1}}{k!(k-1)!}\Big)\quad\mbox{(by Lemmas \ref{lm1} and \ref{lemma:llt})}. \end{eqnarray*} Choosing $M^2=\sqrt{k!(k-1)!}\,|I|^{\frac{1-2k}{2}}$ leads to the desired conclusion (\ref{ineq1}). Now, let us focus on (\ref{ineq2}). Using Fubini and then H\"older with $a=\frac{r}{r-1}$ and $b=r$, we can write \begin{eqnarray*} \mathbb E[Z_N(I)^r] &=& \sum_{k=1}^\infty \mathbb E [Z_N(I)^{r-1}{\bf 1}_{\{Z_N(I)\geq k\}}]\leq \mathbb E[Z_N(I)^r]^{\frac{r-1}{r}} \sum_{k=1}^\infty \mathbb P(Z_N(I)\geq k)^{\frac1r}, \end{eqnarray*} so that, using (\ref{ineq1}), \[ \mathbb E[Z_N(I)^r]\leq \left(\sum_{k=1}^\infty \mathbb P(Z_N(I)\geq k)^{\frac1r}\right)^r \leq (Const) \left(\sum_{k=1}^\infty \frac{|I|^{\frac{2k-1}{2r}}}{(k!(k-1)!)^{\frac1{2r}}}\right)^r, \] which is exactly (\ref{ineq2}). \end{proof} We are now in a position to prove Corollary \ref{cor:Rice-trunc}.\\ {\it Proof of Corollary \ref{cor:Rice-trunc}}. 
First, let us check that the assumptions A1 to A4 of Proposition \ref{prop:Rice} are satisfied for $u=0$ and $Y=X_N$: A1 is obvious; A2 follows from Lemma \ref{lemma:llt}; A3 holds since the number of zeros of any trigonometric polynomial is bounded by twice its degree; and finally A4 is an immediate consequence of (\ref{controleinfini}) in Lemma \ref{lemma:llt}. We deduce that $$ \mathbb E(Z_{X_N}(I) ^{[m]}) =\int_{I^m} \int_{\mathbb R^m}|y_1|\ldots |y_m | \,p_{N,{\mathbf t}} ({\mathbf 0};{\mathbf y}) d{\mathbf y} d{\mathbf t}. $$ To conclude, we are thus left to show that $$ \int_{D^\varepsilon_m} \int_{\mathbb R^m}|y_1|\ldots |y_m | \,p_{N,{\mathbf t}} ({\mathbf 0};{\mathbf y}) d{\mathbf y} d{\mathbf t}=O(\varepsilon^{\frac{1}{5}}). $$ To do so, consider the measure $\mu_N$ defined on $\mathcal{B}(I^m)$ by \[ \mu_N(J)= \mathbb E\big[{\rm Card}\big((J\cap C_N(I)^m)\setminus D_m\big)\big], \] where $C_N(I)$ is the set of zeros of $X_N$ lying in $I$. We know from Proposition \ref{prop:Rice} that $\mu_N$ restricted to $I^m\setminus D_m$ is absolutely continuous with respect to Lebesgue measure and that $$\mu_N(D_m^{\,\varepsilon})= \int_{D^\varepsilon_m} \int_{\mathbb R^m}|y_1|\ldots |y_m | \,p_{N,{\mathbf t}} ({\mathbf 0};{\mathbf y}) d{\mathbf y} d{\mathbf t}.$$ Moreover, it is easy to check that \[ \mu_N(I^m) = \mu_N(I^m\setminus D_m) = \mathbb E [ {\rm Card}\big( C_N(I) ^m \setminus D_m\big)] = \mathbb E \big( Z_N(I)^{[m]}\big), \] and that for any (not necessarily disjoint) intervals $J_1,\ldots,J_k\subset I$ and any sequence $r_1,\ldots,r_k\geq 1$ of integers satisfying $r_1+\ldots+r_k=m$, \[ \mu_N(J_1^{r_1}\times\ldots\times J_k^{r_k}) = \mathbb E[ {\rm Card}\big( C_N(J_1) ^{r_1}\times\ldots\times C_N(J_k)^{r_k} \setminus D_m \big)]. 
\] It is also clear that \begin{eqnarray*} &&{\rm Card}\big( C_N(J_1) ^{r_1}\times\ldots\times C_N(J_k)^{r_k} \setminus D_m \big)\\ &\le& {\rm Card}\big( (C_N(J_1) ^{r_1} \setminus D_{r_1})\times \ldots\times (C_N(J_k) ^{r_k} \setminus D_{r_k}) \big) = Z_N(J_1)^{[r_1]}\ldots Z_N(J_k)^{[r_k]}; \end{eqnarray*} as a consequence \[ \mu_N(J_1^{r_1}\times\ldots\times J_k^{r_k})\leq \mathbb E ( Z_N(J_1)^{[r_1]}\ldots Z_N(J_k)^{[r_k]}). \] With all these properties at hand, we are now ready to conclude the proof of Corollary \ref{cor:Rice-trunc}, by showing that \begin{equation}\label{cequilreste} \sup_{N \geq N_0} \mu_N(D_m^{\,\varepsilon}) = O( \varepsilon^{\frac15})\quad\mbox{as $\varepsilon\to 0$}. \end{equation} Firstly, we observe that \[ D_m^{\,\varepsilon} = \bigcup_{1\leq i\neq j\leq m} \Delta_{\varepsilon,i,j}, \] where $\Delta_{\varepsilon,i,j}=\{(t_1,\ldots,t_m)\in I^m:\,|t_i-t_j|\leq\varepsilon\}.$ Thus, to prove (\ref{cequilreste}) we are left to show that $\sup_{N\geq N_0} \mu_N(\Delta_{\varepsilon,i,j})= O( \varepsilon^{\frac15})$ for any {\it fixed} $1\leq i\neq j\leq m$. To do so, by symmetry, we can assume without loss of generality that $i=1$ and $j=2$. Secondly, denoting by $a=\min I$ and $b=\max I$ the extremities of $I$, we observe that \[ \Delta_{\varepsilon,1,2} \subset \bigcup_{k=1}^{\lfloor 1/\varepsilon\rfloor} A_{k,\varepsilon}^2\times I^{m-2}, \] where $A_{k,\varepsilon}=[a+(b-a)(k-1)\varepsilon,a+(b-a)(k+1)\varepsilon]\cap I$. As a result, $$ \mu_N(\Delta_{\varepsilon,1,2})\leq\sum_{k=1}^{\lfloor 1/\varepsilon\rfloor} \mathbb E [Z_N(A_{k,\varepsilon})^{[2]}Z_N(I)^{[m-2]}]. 
$$ But, for any $k\in\{1,\ldots,\lfloor 1/\varepsilon\rfloor\}$, \begin{eqnarray*} &&\mathbb E [Z_N(A_{k,\varepsilon})^{[2]}Z_N(I)^{[m-2]}]= \mathbb E [Z_N(A_{k,\varepsilon})^{[2]}Z_N(I)^{[m-2]} {\bf 1}_{\{Z_N(A_{k,\varepsilon})\geq 2\}}]\\ &\leq& \mathbb E[Z_N(I)^{m}{\bf 1}_{\{Z_N(A_{k,\varepsilon})\geq 2\}}] \leq \mathbb E[Z_N(I)^{5m}]^{\frac15}\,\,\mathbb P(Z_N(A_{k,\varepsilon})\geq 2)^{\frac45}. \end{eqnarray*} Lemma \ref{lm3} thus yields \begin{eqnarray*} \sup_{N\geq N_0} \mu_N(\Delta_{\varepsilon,1,2})= O( \varepsilon^{\frac15}), \end{eqnarray*} which in turn implies (\ref{cequilreste}). The proof of Corollary \ref{cor:Rice-trunc} is complete.\qed \noindent\rule{6cm}{0.4pt} \noindent{\bf Jean-Marc Aza\"{i}s} \\ Universit\'{e} de Toulouse. \\ [email protected]\\ {\bf Federico Dalmao}\\ Universidad de la Rep\'ublica and Universit\'e du Luxembourg. \\ [email protected]\\ {\bf Jos\'e R. Le\'on} \\ Universidad Central de Venezuela and INRIA Grenoble. \\ [email protected] \\ {\bf Ivan Nourdin} \\ Universit\'e du Luxembourg. \\ [email protected]\\ {\bf Guillaume Poly} \\ Universit\'e de Rennes 1. \\ [email protected] \end{document}
\begin{document} \title{Quantum Circuit for Calculating\\ Symmetrized Functions\\ Via Grover-like Algorithm} \author{Robert R. Tucci\\ P.O. Box 226\\ Bedford, MA 01730\\ [email protected]} \date{\today} \maketitle \vskip2cm \section*{Abstract} In this paper, we give a quantum circuit that calculates symmetrized functions. Our algorithm applies the original Grover's algorithm or a variant thereof such as AFGA (adaptive fixed point Grover's algorithm). Our algorithm uses AFGA in conjunction with two new techniques we call ``targeting two hypotheses" and ``blind targeting". Suppose AFGA drives the starting state $|s\rangle$ to the target state $|t\rangle$. When targeting two hypotheses, $|t\rangle$ is a superposition $a_0|0\rangle + a_1|1\rangle$ of two orthonormal states or hypotheses $\ket{0}$ and $\ket{1}$. When targeting blindly, the value of $\langle t| s\rangle$ is not known a priori. \section{Introduction} In this paper, we give a quantum circuit that calculates symmetrized functions (i.e., it calculates the right hand side of Eq.(\ref{eq-goal})). Our algorithm utilizes the original Grover's algorithm (see Ref.\cite{Gro}) or any variant thereof, as long as it accomplishes the task of driving a starting state $\ket{s}$ towards a target state $\ket{t}$. However, we recommend to the users of our algorithm that they use a variant of Grover's algorithm called AFGA (adaptive fixed point Grover's algorithm) which was first proposed in Ref.\cite{afga}. A large portion of our algorithm for calculating symmetrized functions has been proposed before by Barenco et al in Ref.\cite{Bar}. However, we make some important changes to their algorithm. One trivial difference between our work and that of Barenco et al is that our operators $V^{(\lam)}_1$ are different from the corresponding ones that Barenco et al use. A more important difference is that we combine their circuit with Grover's algorithm (or variant thereof), which they don't. 
Furthermore, we use Grover's algorithm in conjunction with two new techniques that we call ``targeting two hypotheses" and ``blind targeting". When targeting two hypotheses, $|t\rangle$ is a superposition $a_0|0\rangle + a_1|1\rangle$ of two orthonormal states or hypotheses $\ket{0}$ and $\ket{1}$. When targeting blindly, the value of $\langle t| s\rangle$ is not known a priori. The technique of ``targeting two hypotheses" can be used in conjunction with Grover's algorithm or variants thereof to estimate (i.e., infer) the amplitude of one of many states in a superposition. An earlier technique by Brassard et al (Refs.\cite{Bra1,Bra2}) can also be used in conjunction with Grover's algorithm to achieve the same goal of amplitude inference. However, our technique is very different from that of Brassard et al. They try to produce a ket $\ket{x^n}$, where the bit string $x^n$ encodes the amplitude that they are trying to infer. We, on the other hand, try to infer an amplitude $|a_1|$ by measuring the ratio $|a_1|/|a_0|$ and assuming we know $|a_0|$ a priori. For more background information on the use of symmetrized functions in quantum information theory, we refer the reader to a recent review by Harrow, Ref.\cite{Har}. \section{Notation and Preliminaries} In this section, we will review briefly some of the more unconventional notation used in this paper. For a more detailed discussion of Tucci's notation, especially its more idiosyncratic aspects, see, for example, Ref.\cite{Paulinesia}. Let $\theta({\cal S})$ stand for the truth function. It equals 1 when statement ${\cal S}$ is true and 0 when it isn't. The Kronecker delta function $\theta(a=b)$ will also be denoted by $\delta_a^b$ or $\delta(a,b)$. Given a set $A$, the indicator function for set $A$ is defined by $1_A(x) = \theta(x\in A)$. We will sometimes use the following abbreviation for sets: $\{f(x):\forall x\in S\}=\{f(x)\}_{\forall x}$. 
We will sometimes use the following abbreviation for Hermitian conjugates: $[x] + [h.c.] = x + x^\dagger$, and $[x][h.c.] = x x^\dagger$, where $x$ is some complicated expression that we don't want to write twice. We will sometimes use the following abbreviation: $\frac{f(x)}{\sum_x num}= \frac{f(x)}{\sum_x f(x)}$, where $f(x)$ is some complicated expression of $x$ that we don't want to write twice. Let $Bool=\{0,1\}$. For any $b\in Bool$, let $\olb=1-b$. Let $\CC$ stand for the complex numbers and $\RR$ for the real numbers. For integers $a,b$ such that $a\leq b$, let $\{a..b\}=\{a,a+1,\ldots, b\}=\{b..a\}$. We will represent $n$-tuples or vectors with $n$ components by $x^n=x.=(x_{n-1}, x_{n-2}, \ldots , x_1, x_0)$. If $x^n=(x_j)_{\forall j}\in Bool^n$, then define functions $dec()$ and $bin^n()$ by $dec(x^n) = \sum_{j=0}^{n-1} 2^j x_j$ and $bin^n(\sum_{j=0}^{n-1} 2^j x_j)= x^n$. We will use the term qu(d)it to refer to a quantum system that lives in a $d$-dimensional Hilbert space $\CC^d =span_\CC\{\ket{j}: j=0,1,\ldots,d-1\}$. Hence a qu(4)it has 4 possible independent states. A qubit is a qu(2)it. Systems (or horizontal wires in a quantum circuit) will be labelled by Greek letters. If $\alpha$ lives in the Hilbert space $(\CC^d)^{\otimes n}$, we will say $width(\alpha)=d^n$. For example, we'll say $width(\alpha) = 3^5$ if wire $\alpha$ carries 5 qu(3)its. As is usual in the Physics literature, $\sigma_X,\sigma_Y, \sigma_Z$ will denote the Pauli matrices. $H=\frac{1}{\sqrt{2}}\left[ \begin{array}{cc} 1&1 \\ 1&-1 \end{array}\right]$ will denote the one-qubit Hadamard matrix. $H^{\otimes n}$, the $n$-fold tensor product of $H$, is the $n$-qubit Hadamard matrix. Define the number operator $n$ and its complement $\overline{n}$ by \beq n=P_1 = \ket{1}\bra{1} \;,\;\; \ol{n} = 1-n = P_0= \ket{0}\bra{0} \;. \eeq If we need to distinguish the number operator from an integer called $n$, we will use $n_{op}$ or $\hat{n}$, or $\ul{n}$ for the number operator. 
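The maps $dec()$ and $bin^n()$ defined above can be sketched in a few lines of Python (a toy illustration of ours; the function names mirror the text but are otherwise our choice):

```python
def dec(xn):
    # dec(x^n) = sum_j 2^j * x_j, where x^n = (x_{n-1}, ..., x_1, x_0)
    # is listed with the most significant component first, as in the text.
    return sum(2 ** j * x for j, x in enumerate(reversed(xn)))

def bin_n(n, k):
    # bin^n(k): the n-tuple (x_{n-1}, ..., x_0) such that dec of it equals k.
    return tuple((k >> j) & 1 for j in reversed(range(n)))

assert dec((1, 0, 1)) == 5
assert bin_n(3, 5) == (1, 0, 1)
# bin^n and dec are mutually inverse on {0, ..., 2^n - 1}:
assert all(dec(bin_n(4, k)) == k for k in range(2 ** 4))
```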
The number operator just defined acts only on qubits. For qu(d)its, one can use instead \beq P_b= \ket{b}\bra{b} \;, \eeq where $b\in\{0,1,\ldots,d-1\}$. For 2 qu(d)its, one can use \beq P_{b_1}\otimes P_{b_0}=P_{b_1}(\beta_1) P_{b_0}(\beta_0)= P_{b_1,b_0}(\beta_1,\beta_0) \;, \label{eq-pb-2} \eeq where $b_1,b_0\in\{0,1,\ldots,d-1\}$. Eq.(\ref{eq-pb-2}) generalizes easily to an arbitrary number of qu(d)its. We will often denote tensor products of kets vertically instead of horizontally. The horizontal and vertical notations will be related by the conventions: \beq \ket{a_{n-1}}\otimes\ldots \ket{a_1}\otimes \ket{a_0} = \ket{a_{n-1},\ldots,a_1,a_0} = \begin{array}{c} \ket{a_0} \\ \ket{a_1} \\ \vdots \\ \ket{a_{n-1}} \end{array} \; \eeq and \beq (\ket{a_{n-1},\ldots,a_1,a_0})^\dagger = \bra{a_{n-1},\ldots,a_1,a_0} = \begin{array}{c} \bra{a_0} \\ \bra{a_1} \\ \vdots \\ \bra{a_{n-1}} \end{array} \;. \eeq As usual for us, we will represent various types of controlled nots as follows: \beq \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\dotgate&\qw&\scriptstyle{\alpha} \\ &\timesgate\qwx[-1]&\qw&\scriptstyle{\beta} } \end{array} = \sigma_X(\beta)^{n(\alpha)} \;,\;\; \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\ogate&\qw&\scriptstyle{\alpha} \\ &\timesgate\qwx[-1]&\qw&\scriptstyle{\beta} } \end{array} = \sigma_X(\beta)^{\ol{n}(\alpha)} \;,\;\; \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\dotgate&\qw&\scriptstyle{\alpha_0} \\ &\dotgate&\qw&\scriptstyle{\alpha_1} \\ &\timesgate\qwx[-2]&\qw&\scriptstyle{\beta} } \end{array} = \sigma_X(\beta)^{n(\alpha_0)n(\alpha_1)} \eeq We will represent as follows a controlled $U$, where the unitary operator $U$ acts on $\alpha$ and where $\beta$ is the control: \beq \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\gate{U}&\qw&\scriptstyle{\alpha} \\ &\dotgate\qwx[-1]&\qw&\scriptstyle{\beta} } \end{array} \;\;\;\; = U(\alpha)^{n(\beta)} \;. 
\eeq Note that $ [U(\alpha)^{n(\beta)}][h.c.]=1 $ so controlled unitaries are themselves unitary. We will use the following identity repeatedly throughout this paper. For any quantum systems $\alpha$ and $\beta$, any unitary operator $U(\beta)$ and any projection operator $\pi(\alpha)$ (i.e., $\pi^2=\pi$), one has \beq U(\beta)^{\pi(\alpha)}= (1-\pi(\alpha)) + U(\beta)\pi(\alpha) \;. \label{eq-u-pi-id} \eeq We will denote ordered products of operators $U_b$ as follows: \beq \prod_{b=0\rarrow2} U_b= U_0 U_1 U_2 \;,\;\; \prod_{b=2\rarrow0} U_b= U_2 U_1 U_0 \;. \eeq Suppose $a,b\in Bool$ and $x,y,\theta$ are real numbers. Note that $\delta_a^0 = \ola$ and $\delta_a^1=a$. Furthermore, note that $x^a y^\ola= xa + y\ola= x(a)$ where $x(a) = x$ if $a=1$ and $x(a)=y$ if $a=0$. If we let $S = \sin \theta$ and $C= \cos \theta$, then \beqa \av{a | e^{-i\sigma_Y \theta b} |0} &=& \delta_a^0 \delta _b^0 + (C \delta_a^0 + S \delta_a^1)\delta_b^1 \\ &=& \ola \olb + (C \ola + S a)b \;. \eeqa \section{Permutation Circuits} For this section, we will assume that the reader has a rudimentary knowledge of permutations, as can be obtained from any first course in abstract algebra. In this section, we will attempt to connect that rudimentary knowledge of permutations with quantum computation. More specifically, we will show how to permute the qu(d)its of a multi-qu(d)it quantum state using a quantum circuit. Given any finite set $S$, a permutation on set $S$ is a 1-1 onto map from $S$ to $S$. Define \beq Sym(S)= \{ \sigma| \sigma\mbox{ is a permutation of set } S\} \;. \eeq The properties of $Sym(S)$ don't depend on the nature of $S$, except for its cardinality $|S|$ (i.e., number of elements of $S$). Hence, we will often denote $Sym(S)$ by $Sym_{|S|}$. If permutation $\sigma$ maps $x\in S$ to $\sigma(x)\in S$, we will often write $\sigma(x) = x^\sigma$. For example, if $\sigma$ maps 1 to 2, we will write $1^\sigma = 2$. 
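As a toy illustration (ours, not from the paper), the $x^\sigma$ action, composition, and the set action $A^\sigma$ defined shortly can be mimicked in Python with dicts:

```python
# A permutation sigma of S = {1, 2, 3}, stored as a dict x -> x^sigma.
sigma = {1: 2, 2: 3, 3: 1}   # so 1^sigma = 2, as in the text's example
tau = {1: 3, 2: 1, 3: 2}     # the inverse of sigma

def compose(s, t):
    # x -> (x^s)^t: first apply s, then t.
    return {x: t[s[x]] for x in s}

def act_on_set(A, s):
    # A^sigma: apply sigma to the elements of A lying in S, leave the rest alone.
    return {s.get(a, a) for a in A}

identity = {x: x for x in sigma}
assert compose(sigma, tau) == identity          # tau really is sigma^{-1}
assert act_on_set({1, 2, 4}, sigma) == {2, 3, 4}  # {1,2,4}^sigma = {1^sigma, 2^sigma, 4}
```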
As usual, a permutation $\sigma$ will be represented by \beq \left( \begin{array}{cccc} 0&1&\cdots&n-1 \\ 0^\sigma &1^\sigma &\cdots & (n-1)^\sigma \end{array} \right)= (0^\sigma, 1^\sigma,\dots, (n-1)^\sigma) \; \eeq For $\sigma\in Sym(S)$, and any set $A$, define \beq A^\sigma= \{a^{\sigma'}: a\in A\}\mbox{, where } a^{\sigma'} =\left\{ \begin{array}{l} a^\sigma \mbox{ if } a\in S \\ a \mbox{ if } a\notin S \end{array} \right. \;. \eeq For example, if $S=\{1,2,3\}$ then $\{1,2,4\}^\sigma = \{1^\sigma, 2^\sigma, 4\}$. If $\sigma\in Sym_n$ and $c^{n} =(c_{(n-1)}, \ldots, c_{1},c_{0})\in (S_\rvc)^n$, define \beq c^{n\sigma} =(c_{(n-1)}^\sigma, \ldots, c_{1}^\sigma,c_{0}^\sigma) = (c_{(n-1)^\sigma}, \ldots, c_{1^\sigma},c_{0^\sigma}) \;. \eeq For any permutation map $\sigma:S\rarrow S$, one can define a matrix such that each of its columns has all entries equal to zero except for one single entry which equals 1. Also, the entry that is 1 is at a different position for each column. We will denote such a matrix (which is orthogonal and unitary) also by $\sigma$. Whether $\sigma$ stands for the map or the matrix will be clear from context, as in the following equation which uses $\sigma$ to stand for the matrix on its left side and the map on its right side: \beq \sigma \begin{array}{l} \ket{a_0} \\ \ket{a_1} \\ \vdots \\ \ket{a_{n-1}} \end{array} = \begin{array}{l} \ket{a_{0^\sigma}} \\ \ket{a_{1^\sigma}} \\ \vdots \\ \ket{a_{(n-1)^\sigma}} \end{array} \; \eeq Suppose $a^n=(a_{n-1}, a_{n-2},\ldots, a_0)\in (S_\rva)^n$ and $\av{b^n|a^n}=\delta_{a^n}^{b^n}$ for all $a^n,b^n\in S^n_\rva$. If $|S_\rva|=d$, then we can assume without loss of generality that $S_\rva = \{0..d-1\}$. Suppose $width(\alpha^n)=d^n$. Let \beq \ket{\psi}_{\alpha^n}= \sum_{a^n}A(a^n)\ket{a^n}_{\alpha^n} \;,\;\; A(a^n) =\av{a^n|\psi} \;. 
\eeq Then \beq \begin{array}{c} \bra{a_0} \\ \bra{a_1} \\ \vdots \\ \bra{a_{n-1}} \end{array} \sigma \ket{\psi}_{\alpha^n} = \begin{array}{c} \bra{a_{0^\tau}} \\ \bra{a_{1^\tau}} \\ \vdots \\ \bra{a_{(n-1)^\tau}} \end{array} \ket{\psi} = A(a^{n\tau}) \;, \eeq where $\tau=\sigma^{-1}$. When $\sigma$ is a permutation matrix, it's unitary so $\sigma^{-1}=\sigma^\dagger$. Define \beq \pi_{Sym_n}=\frac{1}{n!}\sum_{\sigma\in Sym_n}\sigma \;. \eeq One finds that \beq [\pi_{Sym_n}]^2 = \pi_{Sym_n} \; \eeq so $\pi_{Sym_n}$ is a projection operator. Furthermore, one finds that \beq \av{a^n|\pi_{Sym_n}|\psi}= \frac{1}{n!} \sum_\sigma A(a^{n\sigma}) \;. \label{eq-goal} \eeq The goal of this paper is to find a quantum circuit that allows us to calculate $|\av{a^n|\pi_{Sym_n}|\psi}|^2$ for some predetermined point $a^n\in \{0..d-1\}^n$ and state $\ket{\psi}_{\alpha^n}$, where $width(\alpha^n)=d^n$. As is well known, any permutation can be expressed as a product of transpositions (a.k.a. swaps). For quantum circuits, it is common to define a swap gate which acts as follows: \beq \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw\;\;\;\;\;\ket{\psi_1} \\ &\veegate\qwx[-1]&\qw\;\;\;\;\;\ket{\psi_2} \\ &\qw&\qw\;\;\;\;\;\ket{\psi_3} } \end{array} \;\;\;\;= \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\qw&\qw\;\;\;\;\;\ket{\psi_2} \\ &\qw&\qw\;\;\;\;\;\ket{\psi_1} \\ &\qw&\qw\;\;\;\;\;\ket{\psi_3} } \end{array} \;. \eeq In this example, the gate $swap(1,2)$ is acting on 3 qu(d)its called 1,2,3. Clearly, $ \left[ \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw \\ &\veegate\qwx[-1]&\qw } \end{array} \right]^2=1 $. 
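The projector property $[\pi_{Sym_n}]^2=\pi_{Sym_n}$ and Eq.(\ref{eq-goal}) can be checked numerically for small $n$ and $d$. The following is a sketch of ours assuming numpy; rather than building $d^n\times d^n$ matrices, it lets $\pi_{Sym_n}$ act directly on the coefficient tensor $A(a^n)$ by averaging over axis permutations, and the random state is an arbitrary choice:

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 3  # n qu(d)its; the coefficient tensor A(a^n) has shape (d,)*n
psi = rng.normal(size=(d,) * n) + 1j * rng.normal(size=(d,) * n)

def sym_project(A):
    # pi_Sym acting on the coefficient tensor: average over all axis permutations.
    perms = list(itertools.permutations(range(A.ndim)))
    return sum(np.transpose(A, p) for p in perms) / len(perms)

P_psi = sym_project(psi)

# Projector property [pi_Sym]^2 = pi_Sym: applying it twice changes nothing.
assert np.allclose(sym_project(P_psi), P_psi)

# Eq.(eq-goal) at the point a^n = (1,0,0):
# <a^n| pi_Sym |psi> = (1/n!) * sum over sigma of A(a^{n sigma}).
a = (1, 0, 0)
perms = list(itertools.permutations(range(n)))
manual = sum(psi[tuple(a[p[i]] for i in range(n))] for p in perms) / len(perms)
assert np.isclose(P_psi[a], manual)
```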
One also finds that \beqa \left[ \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ \wedge&\lamgate&\lamgate \\ \vee\qwx[-1]&\qw&\veegate\qwx[-1] \\ &\veegate\qwx[-2]&\qw } \end{array} = \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\qw&\qw \\ &\lamgate&\qw \\ &\veegate\qwx[-1]&\qw } \end{array} \right] \;,\;\; \left[ \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw \\ \wedge&\veegate\qwx[-1]&\lamgate \\ \vee\qwx[-1]&\qw&\veegate\qwx[-1] } \end{array} = \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw \\ &\qw&\qw \\ &\veegate\qwx[-2]&\qw } \end{array} \right] \;,\;\; \left[ \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw \\ \wedge&\qw&\lamgate \\ \vee\qwx[-1]&\veegate\qwx[-2]&\veegate\qwx[-1] } \end{array} = \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw \\ &\veegate\qwx[-1]&\qw \\ &\qw&\qw } \end{array} \right] \;. \eeqa One can summarize these 3 identities by saying that the horizontal line with {\it 3 arrow heads} on it can be replaced by {\it no arrow heads} on it. At the same time, the horizontal line with {\it 2 arrow heads} on it can be replaced by {\it 1 arrow head} on it. 
Note that the elements of $Sym_3$ in the so called dictionary order are \beq \begin{array}{cccccc} \begin{array}{c} (1) \\ (1,2,3) \\ \Qcircuit @C=1em @R=1em @!R{ \scriptscriptstyle{1}\;\;&\qw \\ \scriptscriptstyle{2}\;\;&\qw \\ \scriptscriptstyle{3}\;\;&\qw } \end{array} & \begin{array}{c} (2) \\ (1,3,2) \\ \Qcircuit @C=1em @R=1em @!R{ &\qw \\ &\lamgate \\ &\veegate\qwx[-1] } \end{array} & \begin{array}{c} (3) \\ (2,1,3) \\ \Qcircuit @C=1em @R=1em @!R{ &\lamgate \\ &\veegate\qwx[-1] \\ &\qw } \end{array} & \begin{array}{c} (4) \\ (2,3,1) \\ \Qcircuit @C=1em @R=1em @!R{ \wedge\qwx[2] &\lamgate \\ &\veegate\qwx[-1] \\ \vee& \qw } \end{array} & \begin{array}{c} (5) \\ (3,1,2) \\ \Qcircuit @C=1em @R=1em @!R{ \wedge&\lamgate \\ \vee\qwx[-1]&\qw \\ &\veegate\qwx[-2] } \end{array} & \begin{array}{c} (6) \\ (3,2,1) \\ \Qcircuit @C=1em @R=1em @!R{ &\lamgate \\ &\qw \\ &\veegate\qwx[-2] } \end{array} \end{array} \;. \label{eq-sym3-dict} \eeq Note that the sum of the 6 elements of $Sym_3$ can be generated from a product of matrices which are themselves sums of permutation matrices, as follows: \beqa \lefteqn{ \left( \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\qw \\ &\qw \\ &\qw } \end{array} + \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\qw \\ &\lamgate \\ &\veegate\qwx[-1] } \end{array} + \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate \\ &\qw \\ &\veegate\qwx[-2] } \end{array} \right) \left( \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\qw \\ &\qw \\ &\qw } \end{array} + \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate \\ &\veegate\qwx[-1] \\ &\qw } \end{array} \right) =} \nonumber \\ &=& \begin{array}{c} (1) \\ \Qcircuit @C=1em @R=1em @!R{ &\qw \\ &\qw \\ &\qw } \end{array} + \begin{array}{c} (2) \\ \Qcircuit @C=1em @R=1em @!R{ &\qw \\ &\lamgate \\ &\veegate\qwx[-1] } \end{array} + \begin{array}{c} (6) \\ \Qcircuit @C=1em @R=1em @!R{ &\lamgate \\ &\qw \\ &\veegate\qwx[-2] } \end{array} + \begin{array}{c} (3) \\ \Qcircuit @C=1em @R=1em @!R{ 
&\lamgate \\ &\veegate\qwx[-1] \\ &\qw } \end{array} + \begin{array}{c} (5) \\ \Qcircuit @C=1em @R=1em @!R{ \wedge&\lamgate \\ \vee\qwx[-1]&\qw \\ &\veegate\qwx[-2] } \end{array} + \begin{array}{c} (4) \\ \Qcircuit @C=1em @R=1em @!R{ \wedge\qwx[2] &\lamgate \\ &\veegate\qwx[-1] \\ \vee& \qw } \end{array} \nonumber \\ &=&\sum_{\sigma\in Sym_3}\sigma \;. \label{eq-sym3-prod-sum} \eeqa It's fairly clear how to generalize the pattern of Eq. (\ref{eq-sym3-prod-sum}) to the case of $n$ qu(d)its and $Sym_n$, where $n$ is any integer greater than 1. \section{Decomposing a State Vector into 2 Orthogonal Projections} In this section, we will review a technique that we like to call ``decomposing a state vector into orthogonal projections". This technique is frequently used in quantum computation circuits, and will be used later on in this paper, inside more complicated circuits. Suppose $\alpha$ is a qu(d)it and $\beta$ is a qubit. Let $\pi$ be a Hermitian projection operator (i.e., $\pi^\dagger = \pi$, $\pi^2=\pi$) acting on $\alpha$, and let $\ol{\pi}=1-\pi$. Let $\ket{\psi}_\alpha$ be a state vector of qu(d)it $\alpha$. Applying identity Eq.(\ref{eq-u-pi-id}) with $U=\sigma_X(\beta)$ yields: \beq \sigma_X(\beta)^{\pi(\alpha)} \begin{array}{l} \ket{\psi}_\alpha \\ \ket{0}_\beta \end{array} = \sigma_X(\beta)^{\ol{\pi}(\alpha)} \begin{array}{l} \ket{\psi}_\alpha \\ \ket{1}_\beta \end{array} = \begin{array}{c} \ol{\pi}(\alpha)\ket{\psi}_\alpha \\ \ket{0}_\beta \end{array} + \begin{array}{c} \pi(\alpha)\ket{\psi}_\alpha \\ \ket{1}_\beta \end{array} \;. \eeq One can say that the state vector $\ket{\psi}$ is ``decomposed" by the circuit into two orthogonal projections $\pi\ket{\psi}$ and $\ol{\pi}\ket{\psi}$. Some examples of this decomposition are (1) when $\alpha$ is a qubit and $\pi(\alpha)=n(\alpha)$, (2) when $\alpha=(\alpha_1,\alpha_0)$ where $\alpha_0,\alpha_1$ are both qubits and $\pi(\alpha)=n(\alpha_0)\ol{n}(\alpha_1)$. 
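This decomposition is easy to verify numerically. The following is a sketch of ours assuming numpy, for example (1) above, where $\alpha$ and $\beta$ are both qubits and $\pi(\alpha)=n(\alpha)$; the state of $\alpha$ is an arbitrary choice:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])  # sigma_X
I2 = np.eye(2)
n_op = np.diag([0.0, 1.0])              # pi = n = |1><1|
nbar = I2 - n_op                        # 1 - pi

# sigma_X(beta)^{pi(alpha)} built via Eq.(eq-u-pi-id): U^pi = (1 - pi) + U pi.
# Tensor-product ordering here: alpha (x) beta.
gate = np.kron(nbar, I2) + np.kron(n_op, X)

psi = np.array([0.6, 0.8])              # |psi>_alpha (unit norm)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

out = gate @ np.kron(psi, ket0)
# The two orthogonal projections of |psi>, labelled by the state of beta:
expected = np.kron(nbar @ psi, ket0) + np.kron(n_op @ psi, ket1)
assert np.allclose(out, expected)
assert np.isclose(np.linalg.norm(out), 1.0)  # controlled gates are unitary
```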
\section{Labelling and Summing Unitaries} \label{sec-label-sum-uni} In this section, we will review a technique that we like to call ``labelling and summing unitaries". This technique is also frequently used in quantum computation circuits, and will be used later on in this paper, inside more complicated circuits. Let $\alpha$ be a qu(d)it for some $d\geq 2$. Let $U$ be a $d$-dimensional unitary matrix. First we will consider the case that $\beta$ is a qubit. One finds that \beq \begin{array}{c} U(\alpha)^{n(\beta)}\\ \end{array} \begin{array}{c} \ket{\psi}_\alpha \\ H(\beta)\ket{0}_\beta \end{array} = \frac{1}{\sqrt{2}} \left( \begin{array}{c} \ket{\psi}_\alpha \\ \ket{0}_\beta \end{array} + \begin{array}{c} U\ket{\psi}_\alpha \\ \ket{1}_\beta \end{array} \right) \; \label{eq-label-unitaries} \eeq and \beq \begin{array}{c} \\ H(\beta) \end{array} \begin{array}{c} U(\alpha)^{n(\beta)} \\ \end{array} \begin{array}{c} \ket{\psi}_\alpha \\ H(\beta)\ket{0}_\beta \end{array} = \begin{array}{c} \left(\frac{1+U}{2}\right) \ket{\psi}_\alpha \\ \ket{0}_\beta \end{array} + \begin{array}{c} \left(\frac{1-U}{2}\right) \ket{\psi}_\alpha \\ \ket{1}_\beta \end{array} \;. \label{eq-sum-unitaries} \eeq One can say that the unitaries $1$ and $U$ are labelled by Eq.(\ref{eq-label-unitaries}), and they are summed, in the coefficient of $\ket{0}_\beta$, in Eq.(\ref{eq-sum-unitaries}). So far we have considered $\alpha$ to be a qu(d)it for arbitrary $d\geq 2$, but we have restricted $\beta$ to be a qubit. Let's next consider a $\beta$ which has more than 2 independent states. For concreteness, suppose $\beta$ is a qu(3)it. Let $T^{(3)}$ be a $3$-dimensional unitary matrix that satisfies \beq T^{(3)}\ket{0} = \frac{1}{\sqrt{3}}\sum_{b=0}^{2} \ket{b} \;. \eeq Suppose $U_2,U_1,U_0$ are three $d$-dimensional unitary matrices (they act on the qu(d)it $\alpha$). 
Then Eq.(\ref{eq-label-unitaries}) generalizes to \beqa \lefteqn{ \begin{array}{c} \prod_{b=2\rarrow 0} \left\{U_b(\alpha)^{P_b(\beta)}\right\} \\ \end{array} \begin{array}{c} \ket{\psi}_\alpha \\ T^{(3)}(\beta)\ket{0}_\beta \end{array} = } \\ &=& \frac{1}{\sqrt{3}} \left( \begin{array}{c} U_2\ket{\psi}_\alpha \\ \ket{2}_\beta \end{array} + \begin{array}{c} U_1\ket{\psi}_\alpha \\ \ket{1}_\beta \end{array} + \begin{array}{c} U_0\ket{\psi}_\alpha \\ \ket{0}_\beta \end{array} \right) \;, \eeqa and Eq.(\ref{eq-sum-unitaries}) generalizes to \beqa \begin{array}{c} \\ T^{(3)\dagger}(\beta) \end{array} \lefteqn{ \begin{array}{c} \prod_{b=2\rarrow 0} \left\{U_b(\alpha)^{P_b(\beta)}\right\} \\ \end{array} \begin{array}{c} \ket{\psi}_\alpha \\ T^{(3)}(\beta)\ket{0}_\beta \end{array} = } \\ &=& \underbrace{ \begin{array}{c} \frac{1}{3} (U_2 + U_1 + U_0)\ket{\psi}_\alpha \\ \ket{0}_\beta \end{array} } _{= \begin{array}{c} z_0 \ket{\psi_0}_\alpha \\ \ket{0}_\beta \end{array} } + \sum_{b=2,1} \begin{array}{c} z_b \ket{\psi_b}_\alpha \\ \ket{b}_\beta \end{array} \;, \eeqa where $\sum_{b=0}^2|z_b|^2=1$ and $\av{\psi_b|\psi_b}=1$ for all $b$. Note that one or more of the $U_b$ can be equal to 1. \section{The operators $V^{(\lam)}_0$ and $V^{(\lam)}_1$} \label{sec-v-ops} In the preceding Sec.\ref{sec-label-sum-uni}, we used operators $H(\beta)$ and $T^{(3)}(\beta)$ to ``label" a set of unitary matrices $\{1,U\}$ and $\{U_2,U_1,U_0\}$, respectively. In this section, we will define new operators $V^{(\lam)}_0$ and $V^{(\lam)}_1$, where $\lam=1,2,3,\ldots$, that will be used in later circuits of this paper in a similar role, as ``label producers" or ``labellers" of a set of unitary matrices. Throughout this section, let $\lam\in\{1,2,3, \ldots\}$ and $m\in\{0,1\}$. 
For $\lam=4$ and $m=0,1$, define \beq V^{(4)}_m = \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\ogate &\ogate &\ogate &\gate{R_y^0} &\qw \\ &\ogate &\ogate &\gate{R_y^1}\qwx[-1] &\qw &\qw \\ &\ogate &\gate{R_y^2}\qwx[-2] &\qw &\qw &\qw \\ &\gate{R_y^3}\qwx[-3] &\qw &\qw &\qw &\qw } \end{array} \;, \label{eq-vtm4} \eeq where \beq R_y^r = \exp( -i\sigma_Y \theta_r) \; \eeq for row $r=0,1,2,3$. The angles $\{\theta_r: r=0,1,2,3\}$ for both $m=0$ and $m=1$ will be specified later on. $V^{(\lam)}_m$ for $\lam$ other than 4 is defined by analogy to Eq.(\ref{eq-vtm4}). Below, we will use the shorthand notations \beq C_r = \cos(\theta_r) \;,\;\; S_r = \sin(\theta_r) \; \eeq and \beq \ket{\{1,0^{\lam-1}\}}= \ket{1 0^{\lam-1}} + \ket{0 1 0^{\lam-2}} + \ket{0^2 1 0^{\lam-3}} + \ldots + \ket{0^{\lam-1} 1} \;. \eeq \begin{claim}\label{cl-cs-values-4} If \beq \begin{array}{cccc} S_0=\sqrt{\frac{1}{5}} & S_1=\sqrt{\frac{1}{4}} & S_2=\sqrt{\frac{1}{3}} & S_3=\sqrt{\frac{1}{2}} \\ C_0=\sqrt{\frac{4}{5}} & C_1=\sqrt{\frac{3}{4}} & C_2=\sqrt{\frac{2}{3}} & C_3=\sqrt{\frac{1}{2}} \end{array} \;, \label{eq-cs-values-4} \eeq then $V^{(4)}_1$ maps \beq V^{(4)}_1: \ket{0^4}\mapsto \frac{1}{\sqrt{5}}\left[ \ket{0^4}+ \ket{\{1,0^{3}\}} \right] \;. \label{eq-maps-vt1} \eeq From Eq.(\ref{eq-maps-vt1}) it follows that for $b^4\in Bool^4$, \beq \av{ b^4 | V^{(4)}_1 |0^4}= \frac{1}{\sqrt{5}} \left\{ \begin{array}{r} \olb_3 \olb_2\olb_1 b_0 \\ + \olb_3 \olb_2 b_1 \olb_0 \\ + \olb_3 b_2\olb_1 \olb_0 \\ + b_3 \olb_2\olb_1 \olb_0 \\ + \olb_3 \olb_2\olb_1 \olb_0 \end{array} \right. \;. 
\label{eq-v4m-matrix-el} \eeq \end{claim} \proof One has that \beqa A(b^4)&=& \left[\;\;\;\; \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ \bra{b_0}\;\;\;\;\; &\ogate &\ogate &\ogate &\emptygate &\qw \;\;\;\;\;\ket{0}_{\beta_0} \\ \bra{b_1}\;\;\;\;\; &\ogate &\ogate &\emptygate\qwx[-1] &\qw &\qw \;\;\;\;\;\ket{0}_{\beta_1} \\\bra{b_2}\;\;\;\;\; &\ogate &\emptygate\qwx[-2] &\qw &\qw &\qw \;\;\;\;\;\ket{0}_{\beta_2} \\\bra{b_3}\;\;\;\;\; &\emptygate\qwx[-3] &\qw &\qw &\qw &\qw \;\;\;\;\;\ket{0}_{\beta_3} } \end{array} \;\;\;\; \right] \\ &=& \begin{array}{l} \bra{b_0} \\ \bra{b_1} \\ \bra{b_2} \\ \bra{b_3} \end{array} \left[ \begin{array}{c} \exp\{-i\sigma_Y(\beta_0)\theta_0\} \\ \exp\{-i\sigma_Y(\beta_1)\theta_1 P_0(\beta_0)\} \\ \exp\{-i\sigma_Y(\beta_2)\theta_2 P_{0}(\beta_1)P_{0}(\beta_0) \} \\ \exp\{-i\sigma_Y(\beta_3)\theta_3 P_{0}(\beta_2)P_{0}(\beta_1)P_{0}(\beta_0) \} \end{array} \right] \ket{0^4} \;. \eeqa It's easy to convince oneself that the only non-vanishing matrix elements are those for which $b^4$ has either (1) all 4 components equal to 0, or (2) a single component equal to 1 and the other 3 components equal to 0. Evaluating each of these possibilities separately, one finds \beq A(b^4)= \left\{ \begin{array}{r} S_0\; \olb_3 \olb_2\olb_1 b_0 \\ +S_1 C_0\; \olb_3 \olb_2 b_1 \olb_0 \\ +S_2 C_1 C_0 \; \olb_3 b_2\olb_1 \olb_0 \\ +S_3 C_2 C_1 C_0 \; b_3 \olb_2\olb_1 \olb_0 \\ +C_3 C_2 C_1 C_0 \; \olb_3 \olb_2\olb_1 \olb_0 \end{array} \right. \;. \label{eq-a-a4} \eeq Now one can plug into Eq.(\ref{eq-a-a4}) the values of $C_r$ and $S_r$ given in the premise of our claim to show that the conclusion of our claim holds. \qed \begin{claim}\label{cl-cs-values-4-neg} If $C_r$ and $S_r$ for $r=3,2,1,0$ have the values given by Eqs.(\ref{eq-cs-values-4}), then $V^{(4)}_1$ maps \beq V^{(4)}_1: \begin{array}{c} \ket{0} \\ \ket{0} \\ \ket{0} \\ \ket{1} \end{array} \mapsto \frac{1}{\sqrt{5}}\left[ -\ket{0^4}+ \ket{\{1,0^{3}\}} \right] \;. 
\label{eq-maps-vt1-neg} \eeq From Eq.(\ref{eq-maps-vt1-neg}) it follows that for $b^4\in Bool^4$, \beq \bra{b^4 } V^{(4)}_1 \begin{array}{c} \ket{0} \\ \ket{0} \\ \ket{0} \\ \ket{1} \end{array}= \frac{1}{\sqrt{5}} \left\{ \begin{array}{r} \olb_3 \olb_2\olb_1 b_0 \\ + \olb_3 \olb_2 b_1 \olb_0 \\ + \olb_3 b_2\olb_1 \olb_0 \\ + b_3 \olb_2\olb_1 \olb_0 \\ - \olb_3 \olb_2\olb_1 \olb_0 \end{array} \right. \;. \label{eq-v4m-matrix-el-neg} \eeq \end{claim} \proof Eq.(\ref{eq-a-a4}) is true in this case, but only if we replace $C_3\rarrow -S_3=-\frac{1}{\sqrt{2}}$ and $S_3\rarrow C_3=\frac{1}{\sqrt{2}}$. \qed \begin{claim}\label{cl-cs-values-4-zero} If \beq \begin{array}{cccc} S_0=\sqrt{\frac{1}{4}} & S_1=\sqrt{\frac{1}{3}} & S_2=\sqrt{\frac{1}{2}} & S_3=1 \\ C_0=\sqrt{\frac{3}{4}} & C_1=\sqrt{\frac{2}{3}} & C_2=\sqrt{\frac{1}{2}} & C_3=0 \end{array} \;, \label{eq-cs-values-4-zero} \eeq then $V^{(4)}_0$ maps \beq V^{(4)}_0: \ket{0^4}\mapsto \frac{1}{\sqrt{4}} \ket{\{1,0^{3}\}} \;. \label{eq-maps-vt0} \eeq From Eq.(\ref{eq-maps-vt0}) it follows that for $b^4\in Bool^4$, \beq \av{ b^4 | V^{(4)}_0 |0^4}= \frac{1}{\sqrt{4}} \left\{ \begin{array}{r} \olb_3 \olb_2\olb_1 b_0 \\ + \olb_3 \olb_2 b_1 \olb_0 \\ + \olb_3 b_2\olb_1 \olb_0 \\ + b_3 \olb_2\olb_1 \olb_0 \end{array} \right. \;. \label{eq-v4m-matrix-el-zero} \eeq Note that $C_3=0,S_3=1$ means $\theta_3 = \pi/2$, and $e^{-i\sigma_Y \theta_3}=-i\sigma_Y$. \end{claim} \proof Plug into Eq.(\ref{eq-a-a4}) the values of $C_r$ and $S_r$ given in the premise of our claim to show that the conclusion of our claim holds. \qed \begin{claim} $V^{(\lam)}_m$ for $\lam$ other than 4 satisfies claims analogous to Claims \ref{cl-cs-values-4}, \ref{cl-cs-values-4-neg}, and \ref{cl-cs-values-4-zero}. \end{claim} \proof Obvious. 
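As a numerical aside, Claim \ref{cl-cs-values-4} can be checked by brute force. The following numpy sketch (our illustration; we take qubit $\beta_0$ as the leftmost tensor factor and apply the controlled rotations in the order $r=0,1,2,3$, the order used in the proof of that claim) builds $V^{(4)}_1$ from the angles of Eq.(\ref{eq-cs-values-4}) and verifies the mapping Eq.(\ref{eq-maps-vt1}):

```python
import numpy as np

def ry(theta):
    # exp(-i * sigma_Y * theta) = [[cos, -sin], [sin, cos]]
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# angles of Eq.(eq-cs-values-4): sin(theta_r) = sqrt(1/(5 - r))
thetas = [np.arcsin(np.sqrt(1.0 / (5 - r))) for r in range(4)]

P0, I2 = np.diag([1.0, 0.0]), np.eye(2)

def embed(ops):
    # tensor product over the 4 qubits, beta_0 leftmost
    out = np.array([[1.0]])
    for m in ops:
        out = np.kron(out, m)
    return out

V = np.eye(16)
for r in range(4):
    ctrl = embed([P0] * r + [I2] * (4 - r))       # P_0 on beta_0 .. beta_{r-1}
    rot = embed([I2] * r + [ry(thetas[r])] + [I2] * (3 - r))
    V = (rot @ ctrl + (np.eye(16) - ctrl)) @ V    # controlled R_y on beta_r

psi = V[:, 0]                                     # V^{(4)}_1 |0000>
# nonzero amplitudes: |0000>, |1000>, |0100>, |0010>, |0001>, each 1/sqrt(5)
```

The five surviving amplitudes all equal $1/\sqrt{5}$, in agreement with Eq.(\ref{eq-maps-vt1}).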
\qed \section{Targeting Two Hypotheses} \label{sec-cats} In this section, we will describe a simple trick that can sometimes be used when applying Grover's original algorithm or some variant thereof like AFGA, as long as it drives a starting state $\ket{s}$ to a target state $\ket{t}$. Sometimes it is possible to arrange things so that the target state is a superposition $a_0\ket{0}+a_1\ket{1}$ of two orthonormal states $\ket{0}$ and $\ket{1}$, so that if we know $a_0$, we can infer $a_1$, a type of hypothesis testing with 2 hypotheses. If the target state were just proportional to say $\ket{0}$, then its component along $\ket{0}$ would be 1 after normalization so one wouldn't be able to do any type of amplitude inference. Suppose $z_0, z_1$ are complex numbers and $\ket{\chi}$ is an unnormalized state such that \beq |z_0|^2 + |z_1|^2 + \av{\chi|\chi} =1 \;. \eeq Define \beq p = |z_0|^2 + |z_1|^2 \;,\;\; q= 1-p \;, \eeq and \beq \hat{z}_0 = \frac{z_0}{\sqrt{p}}\;,\;\; \hat{z}_1 = \frac{z_1}{\sqrt{p}} \;. \eeq Assume the states $\{\ket{\psi_j}_\mu\}_{j=0,1}$ are orthonormal, the states $\{\ket{j}_\nu\}_{j=0,1}$ are orthonormal, and the states $\{\ket{b}_\omega \}_{b=0,1}$ are orthonormal. We wish to do AFGA with the following starting state $\ket{s}_{\mu, \nu,\omega}$ and target state $\ket{t}_{\mu, \nu,\omega}$: \beq \ket{s}_{\mu,\nu,\omega}= \begin{array}{c} z_0 \ket{\psi_0}_\mu \\ \ket{0}_\nu \\ \ket{0}_\omega \end{array} + \begin{array}{c} z_1 \ket{\psi_1}_\mu \\ \ket{1}_\nu \\ \ket{0}_\omega \end{array} + \begin{array}{c} \ket{\chi}_{\mu, \nu} \\ \ket{1}_\omega \end{array} \; \eeq and \beq \ket{t}_{\mu, \nu,\omega}= \begin{array}{c} \hat{z}_0\ket{\psi_0}_\mu \\ \ket{0}_\nu \\ \ket{0}_\omega \end{array} + \begin{array}{c} \hat{z}_1\ket{\psi_1}_\mu \\ \ket{1}_\nu \\ \ket{0}_\omega \end{array} \;. \label{eq-t-full} \eeq We will refer to $\ket{0}_\nu$ as the null hypothesis state, and to $\ket{1}_\nu$ as the alternative or rival hypothesis state. 
From the previous definitions, one finds \beq \begin{array}{r} \left[\ket{t}\bra{t}\right]_{\mu,\nu,\omega} \ket{s}_{\mu,\nu,\omega}= \sqrt{p} \;\;\ket{t}_{\mu,\nu,\omega} \\ \left[\ket{0}\bra{0}\right]_{\omega} \ket{s}_{\mu,\nu,\omega}= \sqrt{p} \;\;\ket{t}_{\mu,\nu,\omega} \end{array} \; \eeq and \beq \begin{array}{r} \left[\ket{t}\bra{t}\right]_{\mu,\nu,\omega} \ket{t}_{\mu,\nu,\omega}= \;\;\ket{t}_{\mu,\nu,\omega} \\ \left[\ket{0}\bra{0}\right]_{\omega} \ket{t}_{\mu,\nu,\omega}= \;\;\ket{t}_{\mu,\nu,\omega} \end{array} \;. \eeq $\ket{t}$ only appears in AFGA within the projection operator $\ket{t}\bra{t}$, and this projection operator always acts solely on the space spanned by $\ket{t}$ and $\ket{s}$. But $\ket{t}\bra{t}$ and $\ket{0}\bra{0}_\omega$ act identically on that space. Hence, for the purposes of AFGA, we can replace $\ket{t}\bra{t}$ by $\ket{0}\bra{0}_\omega$. We will call $\ket{0}_\omega$ the ``sufficient" target state to distinguish it from the full target state $\ket{t}_{\mu, \nu,\omega}$. Recall that AFGA converges in order $\frac{1}{|\av{t|s}|}$ steps. From the definitions of $\ket{s}$ and $\ket{t}$, one finds \beq \av{t|s} = \sqrt{p} \;. \label{eq-st-sqrt-p} \eeq Once system $(\mu,\nu,\omega)$ has been driven to the target state $\ket{t}_{\mu,\nu,\omega}$, one can measure the subsystem $\nu$ while ignoring the subsystem $(\mu,\omega)$. If we do so, the outcome of the measurements of $\nu$ can be predicted from the partial density matrix: \beq \tr_{\mu,\omega} \left\{\ket{t}\bra{t}_{\mu,\nu,\omega}\right\}= P(0) \ket{0}\bra{0}_\nu + P(1) \ket{1}\bra{1}_\nu \;, \eeq where \beq P(0) = |\hat{z}_0|^2 \;,\;\; P(1) = |\hat{z}_1|^2 \;. \eeq Hence \beq |z_1|^2 = \frac{P(1)}{P(0)} |z_0|^2 \;. \label{eq-z1-z0} \eeq We see that $|z_1|$ and $|z_0|$ are proportional to each other, with a proportionality factor that can be calculated by measuring the subsystem $\nu$ multiple times. If we know $|z_0|$, we can use Eq.(\ref{eq-z1-z0}) to find $|z_1|$. 
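As a sanity check, Eq.(\ref{eq-z1-z0}) can be verified with a few lines of numpy. This sketch (our illustration; we take $\ket{\psi_0}_\mu=\ket{0}$ and $\ket{\psi_1}_\mu=\ket{1}$ for concreteness) builds $\ket{t}$, traces out $\mu$, and reads off $P(0)$ and $P(1)$:

```python
import numpy as np

z0, z1 = 0.3, 0.4 * np.exp(1j * 0.7)      # z0 known, z1 to be inferred
p = abs(z0)**2 + abs(z1)**2
zh0, zh1 = z0 / np.sqrt(p), z1 / np.sqrt(p)

# |t> on mu (x) nu, taking |psi_0>_mu = |0>, |psi_1>_mu = |1> for concreteness
t = zh0 * np.kron([1, 0], [1, 0]) + zh1 * np.kron([0, 1], [0, 1])
rho = np.outer(t, t.conj()).reshape(2, 2, 2, 2)   # indices (mu, nu, mu', nu')
rho_nu = np.einsum('anam->nm', rho)               # trace out mu

P = np.real(np.diag(rho_nu))                      # P(0), P(1) for measurements of nu
```

The ratio $P(1)/P(0)$ then recovers $|z_1|^2$ from the known $|z_0|^2$, as in Eq.(\ref{eq-z1-z0}).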
More generally, if $|z_j|^2=f_j(\theta)$ for $j=0,1$, and the functions $f_j()$ are known but the parameter $\theta$ isn't, we can solve $f_1(\theta)/f_0(\theta)=P(1)/P(0)$ for $\theta$. Eq.(\ref{eq-z1-z0}) only relates the magnitudes of $z_0$ and $z_1$. One can also measure the relative phase between $z_0$ and $z_1$ as follows. Let $z_0(z_1)^*=|z_0 z_1|e^{i\theta}$. Before taking the final measurement of $\nu$, apply a unitary transformation that maps $\ket{t}$ given by Eq.(\ref{eq-t-full}) to $\ket{t'}$ given by \beq \ket{t'}_{\mu, \nu,\omega}= \left(\frac{\hat{z}_0 + \hat{z}_1}{\sqrt{2}}\right) \begin{array}{r} \ket{\psi_0}_\mu \\ \ket{0}_\nu \\ \ket{0}_\omega \end{array} + \left(\frac{\hat{z}_0 - \hat{z}_1}{\sqrt{2}}\right) \begin{array}{r} \ket{\psi_1}_\mu \\ \ket{1}_\nu \\ \ket{0}_\omega \end{array} \;. \eeq Then do as before, measure $\nu$ in the $\{\ket{0},\ket{1}\}$ basis while ignoring $(\mu,\omega)$. If we do so, the outcome of the measurements of $\nu$ can be predicted from the partial density matrix: \beq \tr_{\mu,\omega} \left\{\ket{t'}\bra{t'}_{\mu,\nu,\omega} \right\}= P(+) \ket{0}\bra{0}_\nu + P(-) \ket{1}\bra{1}_\nu \;, \eeq where \beqa P(\pm) &=& \frac{1}{2} |\hat{z}_0\pm \hat{z}_1|^2 \\ &=& \frac{1}{2} \left[ P(0) + P(1) \pm 2\sqrt{P(0)P(1)}\cos \theta \right] \;. \eeqa Hence, \beq \cos \theta = \frac{P(+)-P(-)}{2\sqrt{P(0)P(1)}} \;. \label{eq-z01-cos-theta} \eeq \section{Blind Targeting}\label{sec-blind} At first sight, it seems that Grover-like algorithms and AFGA in particular require knowledge of $|\av{t|s}|$. In this section, we will describe a technique for bypassing that onerous requirement. For concreteness, we will assume in our discussion below that we are using AFGA and that we are targeting two hypotheses, but the idea of this technique could be carried over to other Grover-like algorithms in a fairly obvious way. According to Eq.(\ref{eq-st-sqrt-p}), when targeting two hypotheses, $|\av{t|s}| = \sqrt{p}$.
Suppose we guess-timate $p$, and use that estimate and the AFGA formulas of Ref.\cite{afga} to calculate the various rotation angles $\alpha_j$ for $j=0,1,\ldots,N_{Gro}-1$, where $N_{Gro}$ is the number of Grover steps. Suppose $N_{Gro}$ is large enough. Then, in the unlikely event that our estimate of $p$ is perfect, $\hat{s}_j$ will converge to $\hat{t}$ as $j\rarrow N_{Gro}-1$. On the other hand, if our estimate of $p$ is not perfect but not too bad either, we expect that as $j\rarrow N_{Gro}-1$, the point $\hat{s}_j$ will reach a steady state in which, as $j$ increases, $\hat{s}_j$ rotates in a small circle in the neighborhood of $\hat{t}$. After steady state is reached, all functions of $\hat{s}_j$ will vary periodically with $j$. Suppose we do AFGA with $p$ fixed and with $N_{Gro}=(N_{Gro})_0 + r$ Grover steps where $r=0,1,\ldots N_{tail}-1$. Call each $r$ a ``tail run", so $p$ is the same for all $N_{tail}$ tail runs, but $N_{Gro}$ varies for different tail runs. Suppose that steady state has already been reached after $(N_{Gro})_0$ steps. For any quantity $Q_r$ where $r=0,1,\ldots N_{tail}-1$, let $\avss{Q}$ denote the outcome of passing the $N_{tail}$ values of $Q_r$ through a low pass filter that takes out the AC components and leaves only the DC part. For example, $\avss{Q}$ might equal $\sum_r Q_r/N_{tail}$ or $[\max_r{Q_r} + \min_r{Q_r}]/2$. By applying the SEO of tail run $r$ to a quantum computer several times, each time ending with a measurement of the quantum computer, we can obtain values $P_r(0)$ and $P_r(1)$ of $P(0)$ and $P(1)$ for tail run $r$. Then we can find $\avss{\sqrt{P(1)/P(0)}} =\avss{|z_1|/|z_0|}$. But we also expect to know $|z_0|$, so we can use $\avss{|z_1|/|z_0|}|z_0|$ as an estimate of $|z_1|$. This estimate of $|z_1|$ and the known value of $|z_0|$ yield a new estimate of $p=|z_1|^2 + |z_0|^2$, one that is much better than the first estimate we used. We can repeat the previous steps using this new estimate of $p$. 
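As a toy illustration (ours, not the actual software) of the two low pass filters just mentioned: for a tail that has reached steady state, both $\sum_r Q_r/N_{tail}$ and $[\max_r Q_r + \min_r Q_r]/2$ recover the DC part of $Q_r$ while discarding the periodic ripple:

```python
import numpy as np

# a steady-state tail: DC value 0.25 plus a small periodic ripple, a toy
# stand-in for the values Q_r over the N_tail tail runs
r = np.arange(32)
Q = 0.25 + 0.01 * np.cos(2 * np.pi * r / 8)

mean_filter = Q.mean()                    # sum_r Q_r / N_tail
mmm_filter = (Q.max() + Q.min()) / 2      # the "min-max-mean" filter
```

Both filters return the DC value 0.25; the mean filter is exact here only because the tail spans whole periods of the ripple.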
Every time we repeat this process, we get a new estimate of $p$ that is better than our previous estimate. Call a ``trial" each time we repeat the process of $N_{tail}$ tail runs. $p$ is fixed during a trial, but $p$ varies from trial to trial. Appendix \ref{app-blind} describes a numerical experiment that we performed. The experiment provides some evidence that our blind targeting technique behaves as we say it does when used in conjunction with AFGA. \section{Quantum Circuit For Calculating $|\av{c^n|\pi_{Sym_n}|\psi}|^2$} In this section, we will give the main quantum circuit of this paper, one that can be used to calculate $|\av{c^n|\pi_{Sym_n}|\psi}|^2$ for some predetermined point $c^n\in\{0..d-1\}^n$ and state $\ket{\psi}_{\alpha^n}$, where $width(\alpha^n)=d^n$. Actually, in this paper, we will give two alternative methods for calculating $|\av{c^n|\pi_{Sym_n}|\psi}|^2$. The method presented in this section will be called Method A. Appendix \ref{app-method-b} presents an alternative method that will be called Method B. We will assume that we know how to compile $\ket{\psi}_{\alpha^n}$ (i.e., that we can construct it starting from $\ket{0^n}_{\alpha^n}$ using a sequence of elementary operations. Elementary operations are operations that act on a few (usually 1,2 or 3) qubits at a time, such as qubit rotations and CNOTS.) Multiplexor techniques for doing such compilations are discussed in Ref.\cite{tuc-multiplexor}. If $n$ is very large, our algorithm will be useless unless such a compilation is of polynomial efficiency, meaning that its number of elementary operations grows as poly($n$). For concreteness, henceforth we will use $n=4$ in this section, but it will be obvious how to draw an analogous circuit for arbitrary $n$. For $r=4,3,2,1$, define \beq Q^{(r)}(c^4) = |\bra{c^4} \pi_{Sym_r}(\alpha_{\leq r-1})\ket{\psi}_{\alpha^4}|^2 \; \eeq where $\alpha_{\leq r-1}=(\alpha_{r-1},\ldots, \alpha_1,\alpha_0)$. 
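Before describing the circuits, it may help to have a direct classical definition of $Q^{(r)}(c^4)$ in hand. The following numpy sketch (ours; $d=2$, and we adopt the convention that $\alpha_{r-1},\ldots,\alpha_0$ are the last $r$ tensor slots — the slot ordering here is an assumption of the illustration) computes $Q^{(r)}(c^4)$ by averaging over the $r!$ slot permutations:

```python
import numpy as np
from itertools import permutations

n, d = 4, 2
rng = np.random.default_rng(1)
psi = rng.normal(size=(d,) * n) + 1j * rng.normal(size=(d,) * n)
psi /= np.linalg.norm(psi)
c = (0, 1, 0, 0)                  # the predetermined point c^4

def Q(r):
    # pi_{Sym_r} |psi>: average over permutations of the r symmetrized slots
    fixed = list(range(n - r))
    perms = list(permutations(range(n - r, n)))
    sym = sum(psi.transpose(fixed + list(s)) for s in perms) / len(perms)
    return abs(sym[c]) ** 2       # |<c^4| pi_{Sym_r} |psi>|^2
```

For $r=1$ this reduces to $|\av{c^4|\psi}|^2$, and for $r=4$ it averages the amplitude of $\psi$ over all placements of the symmetrized indices.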
For instance, \beq Q^{(1)}(c^4)= |\av{c^4|\psi}|^2 \; \eeq and \beq Q^{(2)}(c^4)= |\bra{c^4} \pi_{Sym_2}(\alpha_1,\alpha_0) \ket{\psi}_{\alpha^4}|^2 \;. \eeq We want all horizontal lines in Fig.\ref{fig-sym-ckt} to represent qubits, except for the $\alpha_j$ lines which should represent qu(d)its. Let $\alpha = \alpha^4$ and $\beta= (\beta^1_{;0},\beta^2_{;1},\beta^3_{;2})$. Define \beq T(\alpha, \beta^1_{;0}) = V_1^{(1)\dagger}(\beta^1_{;0}) \left[ \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw&\scriptstyle{\alpha_0} \\ &\veegate&\qw&\scriptstyle{\alpha_1} \\ &\qw&\qw&\scriptstyle{\alpha_2} \\ &\qw&\qw&\scriptstyle{\alpha_3} \\ &\dotgate\qwx[-4]&\qw&\scriptstyle{\beta_{0;0}} } \end{array} \right] V_1^{(1)}(\beta^1_{;0}) \;, \eeq \beq T(\alpha, \beta^2_{;1})= V_1^{(2)\dagger}(\beta^2_{;1}) \left[ \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw&\qw&\scriptstyle{\alpha_0} \\ &\qw&\lamgate&\qw&\scriptstyle{\alpha_1} \\ &\veegate&\veegate&\qw&\scriptstyle{\alpha_2} \\ &\qw&\qw&\qw&\scriptstyle{\alpha_3} \\ &\qw&\dotgate\qwx[-3]&\qw&\scriptstyle{\beta_{0;1}} \\ &\dotgate\qwx[-5]&\qw&\qw&\scriptstyle{\beta_{1;1}} } \end{array} \right] V_1^{(2)}(\beta^2_{;1}) \;, \eeq \beq T(\alpha, \beta^3_{;2})= V_1^{(3)\dagger}(\beta^3_{;2}) \left[ \begin{array}{c} \Qcircuit @C=1em @R=1em @!R{ &\lamgate&\qw&\qw&\qw&\scriptstyle{\alpha_0} \\ &\qw&\lamgate&\qw&\qw&\scriptstyle{\alpha_1} \\ &\qw&\qw&\lamgate&\qw&\scriptstyle{\alpha_2} \\ &\veegate&\veegate&\veegate&\qw&\scriptstyle{\alpha_3} \\ &\qw&\qw&\dotgate\qwx[-2]&\qw&\scriptstyle{\beta_{0;2}} \\ &\qw&\dotgate\qwx[-4]&\qw&\qw&\scriptstyle{\beta_{1;2}} \\ &\dotgate\qwx[-6]&\qw&\qw&\qw&\scriptstyle{\beta_{2;2}} } \end{array} \right] V_1^{(3)}(\beta^3_{;2}) \;, \eeq \beq T(\alpha,\beta) =\prod_{\ell=2\rarrow 0}T(\alpha, \beta_{;\ell}) \;, \eeq \beq \pi(\alpha)=\prod_{j=0}^3 P_{c_j}(\alpha_j) \; \eeq and \beq \pi(\beta)= \left\{ \begin{array}{l} P_0(\beta_{0;0}) \\ P_0(\beta_{0;1}) P_0(\beta_{1;1}) \\ 
P_0(\beta_{0;2}) P_0(\beta_{1;2}) P_0(\beta_{2;2}) \end{array} \right. \;. \label{eq-def-pi-beta} \eeq \subsection{Method A}\label{sec-method-a} \begin{figure} \caption{Method A circuit for generating $\ket{s}$.} \label{fig-sym-ckt} \end{figure} Method A for calculating $Q^{(4)}(c^4)$ consists of applying the algorithm AFGA of Ref.\cite{afga} in the way that was described in Sec.\ref{sec-cats}, using the techniques of targeting two hypotheses and blind targeting. As in Sec.\ref{sec-cats}, when we apply AFGA in this section, we will use a sufficient target $\ket{0}_\omega$. All that remains for us to do to fully specify our circuit for calculating $Q^{(4)}(c^4)$ is to give a circuit for generating $\ket{s}$. A circuit for generating $\ket{s}$ is given by Fig. \ref{fig-sym-ckt}. Fig.\ref{fig-sym-ckt} is equivalent to saying that \beq \ket{s}_{\mu,\nu,\omega}= \sigma_X(\omega)^{ \pi(\beta) \pi(\alpha)} \frac{1}{\sqrt{2}} \left[ \begin{array}{l} T(\alpha,\beta) \begin{array}{r} \ket{\psi}_{\alpha^4} \\ \ket{0^6}_\beta \end{array} \\ \ket{1}_\gamma \\ \ket{1}_{\mu_0} \\ \ket{1}_\omega \end{array} + \begin{array}{l} \ket{\psi}_{\alpha^4} \\ \ket{0^6}_\beta \\ \ket{0}_\gamma \\ \ket{0}_{\mu_0} \\ \ket{1}_\omega \end{array} \right] \;.
\eeq \begin{claim} \beq \ket{s}_{\mu,\nu,\omega}= \begin{array}{c} z_1 \ket{\psi_1}_\mu \\ \ket{1}_\nu \\ \ket{0}_\omega \end{array} + \begin{array}{c} z_0 \ket{\psi_0}_\mu \\ \ket{0}_\nu \\ \ket{0}_\omega \end{array} + \begin{array}{c} \ket{\chi}_{\mu,\nu} \\ \ket{1}_\omega \end{array} \;, \label{eq-start-with-chi} \eeq for some unnormalized state $\ket{\chi}_{\mu,\nu}$, where \beq \begin{array}{|c|c|} \hline \ket{\psi_1}_\mu= \begin{array}{r} \ket{c^4}_\alpha \\ \ket{1}_{\mu_0} \end{array} & \ket{\psi_0}_\mu= \begin{array}{r} \ket{c^4}_\alpha \\ \ket{0}_{\mu_0} \end{array} \\ \ket{1}_\nu= \left[ \begin{array}{l} \ket{0}_{\beta_{;0}} \\ \ket{00}_{\beta_{;1}} \\ \ket{000}_{\beta_{;2}} \\ \ket{1}_{\gamma} \end{array} \right] & \ket{0}_\nu= \left[ \begin{array}{l} \ket{0}_{\beta_{;0}} \\ \ket{00}_{\beta_{;1}} \\ \ket{000}_{\beta_{;2}} \\ \ket{0}_{\gamma} \end{array} \right] \\ \hline \end{array} \;, \eeq \beq z_1 = \frac{1}{\sqrt{2}}\av{c^4|\pi_{Sym_4}|\psi} =\sqrt{\frac{Q^{(4)}(c^4)}{2}} \;, \eeq \beq z_0 = \frac{1}{\sqrt{2}}\av{c^4|\psi} =\sqrt{\frac{Q^{(1)}(c^4)}{2}} \;, \eeq \beq \frac{|z_1|}{|z_0|} = \sqrt{\frac{P(1)}{P(0)}} \;. \label{eq-z1-z0-again} \eeq \end{claim} \proof Applying identity Eq.(\ref{eq-u-pi-id}) with $U=\sigma_X(\omega)$ yields: \beqa \ket{s}&=& \sigma_X(\omega)^{\pi(\beta)\pi(\alpha)} \ket{s'} \\ &=& \sigma_X(\omega)\pi(\beta)\pi(\alpha) \ket{s'} + \begin{array}{l} \ket{\chi} \\ \ket{1}_\omega \end{array} \;. \eeqa Eq.(\ref{eq-z1-z0-again}) is just Eq.(\ref{eq-z1-z0}). \qed In case $\av{c^4|\psi}=0$, this procedure won't yield $Q^{(4)}(c^4)$, but it can be patched up easily. Note that if we know how to compile $\ket{\psi}_{\alpha^4}$ with polynomial efficiency, then we also know how to compile $\ket{\psi'}=swap(\alpha_0,\alpha_1)\ket{\psi}$ with polynomial efficiency. Furthermore, \beq \av{c^4|\pi_{Sym_4}|\psi}= \av{c^4|\pi_{Sym_4}|\psi'} \;. \eeq If $\av{c^4|\psi'}\neq 0$, mission accomplished. 
Even if $\av{c^4|\psi'}=0$, as long as we can replace $\ket{\psi}$ by some partially symmetrized version of it, call it $\ket{\psi_S}$, such that $\av{c^4|\psi_S}\neq 0$, we should be able to apply this method to get $Q^{(4)}(c^4)$. \appendix \section{Appendix: Numerical Experiment to Test Blind Targeting with AFGA}\label{app-blind} In this appendix, we will describe a numerical experiment that we conducted to test blind targeting with AFGA. The experiment is not a conclusive proof that blind targeting with AFGA always converges to the right answer, but it does provide some evidence that it often does. Our algorithm for blind targeting is based on the following Bloch sphere picture. We will use the notation of Ref.\cite{afga}. Suppose we know the vector $\hat{s}_0$ but we don't know that $\hat{t}=\hat{z}$, so we don't know the initial $|\av{t|s}| =\left|\cos(\frac{1}{2}{\rm acos}(\hat{t}\cdot\hat{s}))\right|$. Suppose we guess-timate $|\av{t|s}|$, and use that estimate and the AFGA formulas of Ref.\cite{afga} to calculate the unit vector $\hat{s}_j$ for $j=0,1,\ldots,N_{Gro}-1$, where $N_{Gro}$ is the number of Grover steps. Suppose $N_{Gro}$ is large enough. Then, in the unlikely event that our estimate of $|\av{t|s}|$ is perfect, as $j\rarrow N_{Gro}-1$, the point $\hat{s}_j$ will converge to $\hat{t}=\hat{z}$. On the other hand, if our estimate of $|\av{t|s}|$ is not perfect but not too bad either, we expect that as $j\rarrow N_{Gro}-1$, the point $\hat{s}_j$ will reach a steady state in which, as $j$ increases, $\hat{s}_j$ rotates in a circle of constant latitude very close to the North Pole of the Bloch sphere. If we pass through a low pass filter the values of $\hat{s}_j$ after it reaches this steady state, we will get an estimate of the position of the North Pole. 
Using that estimate $\hat{t}_{est}$ of the position of the North Pole and our assumed knowledge of $\hat{s}$ allows us to get a new estimate of $|\av{t|s}|$, one that is much better than the first estimate we used. We can repeat the previous steps using this new estimate of $|\av{t|s}|$. Every time we repeat this process, we get a new estimate of $|\av{t|s}|$ that is better than our previous estimate. To get some numerical evidence that this Bloch sphere picture argument applies, we wrote a new version of the .m files\footnote{Our .m files are written in the language of Octave. The Octave environment is a free, open source, partial clone of the MatLab environment. Octave .m files can usually be run in Matlab with zero or only minor modifications.} that were written to illustrate the AFGA algorithm of Ref.\cite{afga} and were included with the arXiv distribution of that paper. The arXiv distribution of the present paper includes 3 new Octave .m files: {\tt afga\_blind.m}, {\tt afga\_step.m} and {\tt afga\_rot.m}. The files {\tt afga\_step.m} and {\tt afga\_rot.m} contain auxiliary functions called by the main file {\tt afga\_blind.m}. These 2 files are identical to the files with the same names that were included with Ref.\cite{afga}. Hence, we will say nothing more about them here. The file {\tt afga\_blind.m} is a slight expansion of the file {\tt afga.m} that was presented and explained in Ref.\cite{afga}. The first 7 non-comment lines of {\tt afga\_blind.m} instantiate the following 7 input parameters: \begin{itemize} \item {\tt g0\_degs}$=\gamma=\gamma_0$ in degrees. Used only to calculate $\hat{s}_0$, which is assumed known, not to calculate the initial $\av{t|s}$, which is assumed a priori unknown. \item {\tt g0est\_degs} $=$ an estimate of $\gamma_0$, in degrees. Used to get first estimate of $\av{t|s}$. \item {\tt del\_lam\_degs}$= \Delta \lam$ in degrees \item {\tt num\_steps}$=N_{Gro}=$ number of Grover steps. 
\item {\tt tail\_len} $= N_{tail}=$ tail length, number of tail runs. Low pass filtering is applied to points $j=N_{Gro}-N_{tail}, \ldots, N_{Gro}-3, N_{Gro}-2, N_{Gro}-1$ of each trial to get the estimate of $\av{t|s}$ for the next trial. \item {\tt num\_trials} $=$ number of trials. $\gamma_0$ remains constant during a trial, but changes from trial to trial. \item {\tt plotted\_trial} $=$ trial for which program will plot the time series $\hat{s}_j$ for $j=0,1,\ldots,N_{Gro}-1$. \end{itemize} Each time {\tt afga\_blind.m} runs successfully, it outputs two files called {\tt afga\_blind.txt} and {\tt afga\_blind.svg}. The output file {\tt afga\_blind.txt} is a text file. Its contents are very similar to the contents of the file {\tt afga.txt} that is outputted by the program {\tt afga.m} of Ref.\cite{afga}. The contents of an {\tt afga.txt} file are thoroughly explained in Ref.\cite{afga}. From that, it's very easy to understand the meaning of the contents of an {\tt afga\_blind.txt} file. An {\tt afga\_blind.txt} file contains the records of {\tt num\_trials} trials instead of just one trial like an {\tt afga.txt} file does. The output file {\tt afga\_blind.svg} is a picture of a plot, in .svg (scalable vector graphic) format. .svg files can be viewed with a web browser. They can be viewed and modified with, for example, the free, open source software program Inkscape. The plot in an {\tt afga\_blind.svg} file gives the 3 components of the unit vector $\hat{s}_j$ as a function of the Grover step $j$. The 7 input parameters just described are listed in a legend of the plot. Here are some sample plots. \begin{itemize} \item We got Fig.\ref{fig-afga-blind90-0} with {\tt plotted\_trial}=0 (first trial) and with a $\gamma_0$ close to 90 degrees. Then we changed {\tt plotted\_trial} to 1 and got Fig.\ref{fig-afga-blind90-1}. \item We got Fig.\ref{fig-afga-blind179-0} with {\tt plotted\_trial}=0 (first trial) and with a $\gamma_0$ close to 180 degrees. 
Then we changed {\tt plotted\_trial} to 4 and got Fig.\ref{fig-afga-blind179-4}. \end{itemize} Further plots can be generated by the user using {\tt afga\_blind.m}. Note that $\gamma_0=180-\epsilon$ degrees, where $0<\epsilon\ll 1$, corresponds to the regime $|\av{t|s}|\ll 1$ of the ``hardest" problems. In that regime of hardest problems, we found that larger $N_{Gro}$ and larger $N_{tail}$ are required for convergence than in other regimes. Furthermore, in this regime the algorithm becomes very sensitive to various adjustable input parameters like $N_{Gro}$, $N_{tail}$, $\Delta \lam$ and to the type of low pass filter we use. We used two types of low pass filters in the software. The user can test them both himself. One was the MMM filter; i.e., a ``min-max-mean" filter that uses $[\max_r(\hat{s}_r) + \min_r(\hat{s}_r)]/2$. We found the MMM filter to be the more robust of the two filters we tried. The example plots presented in this section of the paper were all generated using the MMM filter. Further work will be required to determine how to choose adjustable input parameters and a low pass filter which are optimal, or nearly so, for this type of algorithm. \begin{figure} \caption{The 3 components of the unit vector $\hat{s}_j$ as a function of the Grover step $j$, for {\tt plotted\_trial}=0 and $\gamma_0$ close to 90 degrees.} \label{fig-afga-blind90-0} \end{figure} \begin{figure} \caption{The 3 components of the unit vector $\hat{s}_j$ as a function of the Grover step $j$, for {\tt plotted\_trial}=1 and $\gamma_0$ close to 90 degrees.} \label{fig-afga-blind90-1} \end{figure} \begin{figure} \caption{The 3 components of the unit vector $\hat{s}_j$ as a function of the Grover step $j$, for {\tt plotted\_trial}=0 and $\gamma_0$ close to 180 degrees.} \label{fig-afga-blind179-0} \end{figure} \begin{figure} \caption{The 3 components of the unit vector $\hat{s}_j$ as a function of the Grover step $j$, for {\tt plotted\_trial}=4 and $\gamma_0$ close to 180 degrees.} \label{fig-afga-blind179-4} \end{figure} \section{Appendix: Method B of calculating $Q^{(4)}(c^4)$}\label{app-method-b} In this appendix, we will present Method B, an alternative to the Method A that was presented in Sec.\ref{sec-method-a}. Both methods can be used to calculate $Q^{(4)}(c^4)$.
\begin{figure} \caption{Method B circuit for generating $\ket{s}^{(4)}$.} \label{fig-sym4-ckt} \end{figure} Unlike in Method A of calculating $Q^{(4)}(c^4)$, in Method B we will assume the restriction that $\av{c^4|\psi}\geq 0$. See Appendix \ref{app-restriction} for cases in which it is possible to sidestep this restriction. In Method A, we applied TTH (Targeting Two Hypotheses) only once. In Method B, we will apply TTH multiple times, for $k=4,3,2$, each time applying it in the way that was described in Sec.\ref{sec-cats}, together with blind targeting, and using a sufficient target $\ket{0}_\omega$. All that remains for us to do to fully specify our Method B circuit for calculating $Q^{(4)}(c^4)$ is to give a circuit for generating $\ket{s}^{(k)}$ for $k=4,3,2$. A circuit for generating $\ket{s}^{(4)}$ is given by Fig. \ref{fig-sym4-ckt}. Note that in this circuit we do not use the qubit $\gamma$ that was used in Method A. Define $\pi'(\beta)$ to be equal to the $\pi(\beta)$ defined by Eq.(\ref{eq-def-pi-beta}) but with the projector $P_0(\beta_{2;2})$ removed. In other words, ``formally", \beq \pi'(\beta)=\pi(\beta)/P_0(\beta_{2;2}) \;. \eeq Then Fig.\ref{fig-sym4-ckt} is equivalent to saying that \beq \ket{s}^{(4)}_{\mu,\nu,\omega}= \sigma_X(\omega)^{ \pi'(\beta) \pi(\alpha)} \sigma_X(\mu_0)^{P_1(\beta_{2;2})} \begin{array}{l} T(\alpha,\beta) \begin{array}{r} \ket{\psi}_{\alpha^4} \\ \ket{0^6}_\beta \end{array} \\ \ket{0}_{\mu_0} \\ \ket{1}_\omega \end{array} \;.
\eeq \begin{claim} \label{cl-meth-b-q4} \beq \ket{s}^{(4)}_{\mu,\nu,\omega}= \begin{array}{c} z_1 \ket{\psi_1}_\mu \\ \ket{1}_\nu \\ \ket{0}_\omega \end{array} + \begin{array}{c} z_0 \ket{\psi_0}_\mu \\ \ket{0}_\nu \\ \ket{0}_\omega \end{array} + \begin{array}{c} \ket{\chi}_{\mu,\nu} \\ \ket{1}_\omega \end{array} \;, \eeq for some unnormalized state $\ket{\chi}_{\mu,\nu}$, where \beq \begin{array}{|c|c|} \hline \ket{\psi_1}_\mu= \begin{array}{l} \ket{c^4}_\alpha \\ \ket{0}_{\mu_0} \end{array} & \ket{\psi_0}_\mu= \begin{array}{l} \ket{c^4}_\alpha \\ \ket{1}_{\mu_0} \end{array} \\ \ket{1}_\nu= \left[ \begin{array}{r} \ket{0}_{\beta_{;0}} \\ \ket{00}_{\beta_{;1}} \\ \ket{000}_{\beta_{;2}} \end{array} \right] & \ket{0}_\nu= \left[ \begin{array}{r} \ket{0}_{\beta_{;0}} \\ \ket{00}_{\beta_{;1}} \\ -\ket{100}_{\beta_{;2}} \end{array} \right] \\ \hline \end{array} \;, \eeq \begin{subequations} \label{eq-z0-z1-new-constraints} \beq z_1 = \sqrt{Q^{(4)}(c^4)}\geq 0 \;, \eeq \beq z_0 +z_1= \frac{2}{4}\sqrt{Q^{(3)}(c^4)} \;, \eeq \end{subequations} \begin{subequations} \label{eq-z0-z1-old-constraints} \beq \frac{|z_0|}{|z_1|} = \sqrt{\frac{P(0)}{P(1)}} \;, \eeq \beq sign(z_0)= \left\{ \begin{array}{l} +1 \mbox{ if } P(+)>P(-) \\ -1 \mbox{ otherwise } \end{array} \right. 
\; \eeq \end{subequations} \end{claim} \proof According to Claims \ref{cl-cs-values-4} and \ref{cl-cs-values-4-neg}, \beq V_1^{(3)} \begin{array}{c} \ket{0} \\ \ket{0} \\ \ket{0} \end{array} = \frac{1}{\sqrt{4}} \left[ \begin{array}{c} \ket{0} \\ \ket{0} \\ \ket{0} \end{array} + \begin{array}{c} \ket{1} \\ \ket{0} \\ \ket{0} \end{array} + \begin{array}{c} \ket{0} \\ \ket{1} \\ \ket{0} \end{array} +\begin{array}{c} \ket{0} \\ \ket{0} \\ \ket{1} \end{array} \right] \;, \eeq and \beq V_1^{(3)} \begin{array}{c} \ket{0} \\ \ket{0} \\ \ket{1} \end{array} = \frac{1}{\sqrt{4}} \left[ - \begin{array}{c} \ket{0} \\ \ket{0} \\ \ket{0} \end{array} + \begin{array}{c} \ket{1} \\ \ket{0} \\ \ket{0} \end{array} + \begin{array}{c} \ket{0} \\ \ket{1} \\ \ket{0} \end{array} +\begin{array}{c} \ket{0} \\ \ket{0} \\ \ket{1} \end{array} \right] \;. \eeq Therefore, Fig.\ref{fig-v3-ckt} holds. Figs.\ref{fig-sym4-ckt} and \ref{fig-v3-ckt} imply the two constraints given by Eqs.(\ref{eq-z0-z1-new-constraints}). The two constraints given by Eqs.(\ref{eq-z0-z1-old-constraints}) are old news. They come directly from Eq.(\ref{eq-z1-z0}) and Eq.(\ref{eq-z01-cos-theta}). Note that $\cos \theta = sign(z_0)$ for the special case being considered in this claim, namely when $z_1$ is non-negative real and $z_0$ is either positive or negative real. \qed \begin{figure} \caption{Two matrix elements of $V_1^{(3)}$ acting on the $\beta_{;2}$ qubits.} \label{fig-v3-ckt} \end{figure} After doing TTH with $\ket{s}^{(4)}$, we are left knowing $Q^{(4)}(c^4)$ in terms of $Q^{(3)}(c^4)$. If we know $Q^{(3)}(c^4)$, we can stop right there and we are done. Otherwise, we can do TTH again, this time with the $\ket{s}^{(3)}$ given by Fig.\ref{fig-sym3-ckt}. Again, we are left knowing $Q^{(3)}(c^4)$ in terms of $Q^{(2)}(c^4)$. If we know $Q^{(2)}(c^4)$, we can stop right there and we are done. Otherwise, we can do TTH again, this time with the $\ket{s}^{(2)}$ given by a circuit analogous to Figs.\ref{fig-sym4-ckt} and \ref{fig-sym3-ckt}.
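The step-by-step reduction just described can be sketched numerically. The following is only an illustration: the probabilities $P^{(k)}(0),P^{(k)}(1)$ and signs $\sigma^{(k)}$ are made-up stand-ins for measured data, and the per-step formula is obtained by combining the constraints of Claim \ref{cl-meth-b-q4}, which give $z_1\{1+\sigma\sqrt{P(0)/P(1)}\}=z_1+z_0$.

```python
import math

def tth_step(q_prev_sqrt, k, p0, p1, sigma):
    # Combine z1 = sqrt(Q^(k)), z0 + z1 = (2/k) sqrt(Q^(k-1)),
    # |z0|/|z1| = sqrt(P(0)/P(1)) and sign(z0) = sigma into
    # sqrt(Q^(k)) = (2/k) sqrt(Q^(k-1)) / (1 + sigma sqrt(P(0)/P(1))).
    return (2.0 / k) * q_prev_sqrt / (1.0 + sigma * math.sqrt(p0 / p1))

# Made-up stand-ins for the measured quantities (illustration only):
# data[k] = (P^(k)(0), P^(k)(1), sigma^(k)).
data = {2: (0.20, 0.80, +1), 3: (0.10, 0.90, +1), 4: (0.30, 0.70, -1)}
q1_sqrt = 0.9                       # assumed known sqrt(Q^(1)(c^4))

q_sqrt = q1_sqrt
for k in (2, 3, 4):                 # climb from Q^(1) up to Q^(4)
    p0, p1, sigma = data[k]
    q_sqrt = tth_step(q_sqrt, k, p0, p1, sigma)

# The same value, written as a single closed-form product.
closed = (2/4) * (2/3) * (2/2) * q1_sqrt / math.prod(
    1.0 + s * math.sqrt(p0 / p1) for (p0, p1, s) in data.values())
assert abs(q_sqrt - closed) < 1e-12
```

The loop and the closed-form product agree by construction; the telescoping collapses the three TTH steps into one quotient.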
Eventually we end up finding $Q^{(4)}(c^4)$ in terms of $Q^{(1)}(c^4)$. We assume the latter is known. \begin{claim} \beq \sqrt{Q^{(4)}(c^4)}= \frac{ \frac{2}{4} \frac{2}{3} \frac{2}{2} \sqrt{Q^{(1)}(c^4)} }{ \prod_{k=4,3,2} \left\{ 1 + \sigma^{(k)} \sqrt{\frac{P^{(k)}(0)}{P^{(k)}(1)}} \right\} } \;, \eeq where \beq \sigma^{(k)}= \left\{ \begin{array}{l} +1 \mbox{ if } P^{(k)}(+)>P^{(k)}(-) \\ -1 \mbox{ otherwise} \end{array} \right. \; \eeq \end{claim} \proof Follows from Claim \ref{cl-meth-b-q4}, by analogy. \qed \begin{figure} \caption{Method B circuit for generating $\ket{s}^{(3)}$.} \label{fig-sym3-ckt} \end{figure} \section{Appendix: Linear Transform of Vector If Vector Not Normalized} \label{app-restriction} Often, when calculating with a quantum computer the linear transform of a vector $\ket{\psi}$, our algorithm works only if we assume that the vector $\ket{\psi}$ has non-negative components in some basis, or is normalized in some norm, or both. The purpose of this appendix is to show that this restriction on $\ket{\psi}$ does not imply a large reduction of generality of the algorithm. We will show that, given some simple information about $\ket{\psi}$, we can still use the restricted algorithm to find the linear transform of $\ket{\psi}$, even if $\ket{\psi}$ doesn't satisfy the restrictions. The results of this appendix are very obvious but worth keeping in mind. For any $z\in \CC$, let $z_r,z_i$ be its real and imaginary parts, respectively. We wish to consider some finite set $S_\rvx$ and two functions $f, f^-:S_\rvx\rarrow \CC$ related by \beq f(x)=\sum_{x^-\in S_\rvx} M(x,x^-)f^-(x^-) \;, \eeq where $M(x,x^-)\in \CC$. Function $f$ will be referred to as the M-transform of function $f^-$.
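The M-transform and the linearity it inherits can be sketched concretely. The matrix $M$, the function $f^-$, and the constants $L,a$ below are made-up illustrations, not data from the text:

```python
# A direct sketch of the M-transform f(x) = sum_{x^-} M(x, x^-) f^-(x^-).
# The 2x3 matrix M and the function f^- below are made-up illustrations.
M = [[1.0, 2.0, 0.5],
     [0.0, 1.0, 3.0]]
f_minus = [1.0 + 2.0j, -0.5j, 2.0]

def m_transform(M, g):
    # f(x) = sum over x^- of M(x, x^-) g(x^-), for each x in S_x.
    return [sum(row[j] * g[j] for j in range(len(g))) for row in M]

f = m_transform(M, f_minus)

# Linearity is what the appendix exploits: shifting and rescaling f^-
# commutes with the transform, so f^-(x^-) = L*g(x^-) + a implies
# f(x) = L * (M g)(x) + a * sum_{x^-} M(x, x^-).
L, a = 2.0, 0.5 + 1.0j
g = [(v - a) / L for v in f_minus]
recovered = [L * t + a * sum(row) for t, row in zip(m_transform(M, g), M)]
assert all(abs(u - v) < 1e-12 for u, v in zip(f, recovered))
```

This is exactly the bookkeeping used in the claims below: run the restricted algorithm on a shifted, rescaled input, then undo the shift and scale classically.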
\begin{claim} If one is given constants $a_r,a_i,b_r, b_i\in \RR$ such that \beq a_r<f^-_r(x^-)<b_r \;,\;\; a_i<f^-_i(x^-)<b_i \; \eeq for all $x^-$, and one is given $\sum_{x^-} M(x,x^-)$, then the M-transform of $f^-(x^-)$ can be calculated easily from the M-transform of functions $g_r(),g_i()$ which satisfy $0\leq g_r(x^-),g_i(x^-)\leq 1$ for all $x^-$. \end{claim} \proof Define \beq L=\max(b_r-a_r, b_i-a_i) \; \eeq and \beq g_r(x^-) = \frac{f^-_r(x^-)-a_r}{L} \;,\;\; g_i(x^-) = \frac{f^-_i(x^-)-a_i}{L} \;. \eeq Then, with $a=a_r+ia_i$, \beq \sum_{x^-} M(x,x^-) \left[\frac{f^-(x^-)-a}{L}\right] = \sum_{x^-} M(x,x^-)g_r(x^-) + i\sum_{x^-} M(x,x^-)g_i(x^-) \;, \eeq so \beq f(x)= L\left[\sum_{x^-} M(x,x^-)g_r(x^-) + i\sum_{x^-} M(x,x^-)g_i(x^-)\right] + a\sum_{x^-} M(x,x^-) \;. \eeq \qed \begin{claim} If one is given a constant $N>0$, then the M-transform of $f^-(x^-)$ can be easily calculated from the M-transform of $f^-(x^-)/N$. For instance, $N$ might be $\sqrt{\sum_{x^-}|f^-(x^-)|^2}$. \end{claim} \proof \beq f(x)=N\sum_{x^-\in S_\rvx} M(x,x^-)\frac{f^-(x^-)}{N} \;. \eeq \qed \end{document}
\begin{document} \setlength{\baselineskip}{15pt} \begin{center}{\Large \bf On the spectral moment of graphs with $k$ cut edges\footnote{Financially supported by the National Natural Science Foundation of China (Grant Nos. 11071096, 11271149) and the Special Fund for Basic Scientific Research of Central Colleges (CCNU11A02015).}} {\large Shuchao Li$^{a,}$\footnote{E-mail: [email protected] (S.C. Li), [email protected] (H. Zhang)},\ \ Huihui Zhang$^a$,\ \ Minjie Zhang$^b$} $^a$Faculty of Mathematics and Statistics, Central China Normal University, Wuhan 430079, P.R. China $^b$School of Mathematics and Physics, Hubei Institute of Technology, Huangshi 435003, P.R. China \end{center} \noindent {\bf Abstract}: Let $A(G)$ be the adjacency matrix of a graph $G$ with $\lambda_{1}(G)$, $\lambda_{2}(G)$, $\dots$, $\lambda_{n}(G)$ being its eigenvalues in non-increasing order. Call the number $S_k(G):=\sum_{i=1}^{n}\lambda_{i}^k(G)\, (k=0,1,\dots,n-1)$ the $k$th spectral moment of $G$. Let $S(G)=(S_0(G),S_1(G),\dots,S_{n-1}(G))$ be the sequence of spectral moments of $G$. For two graphs $G_1$ and $G_2$, we have $G_1\prec_sG_2$ if $S_i(G_1)=S_i(G_2)\, (i=0,1,\dots,k-1)$ and $S_k(G_1)<S_k(G_2)$ for some $k\in \{1,2,\dots,n-1\}$. Denote by $\mathscr{G}_n^k$ the set of connected $n$-vertex graphs with $k$ cut edges. In this paper, we determine the first, the second, the last and the second last graphs, in an $S$-order, among $\mathscr{G}_n^k$, respectively. \noindent{\it Keywords}: Spectral moment; Cut edge; Clique \noindent{AMS subject classification:} 05C50,\ 15A18 {\setcounter{section}{0} \section{\normalsize Introduction}\setcounter{equation}{0} All graphs considered here are finite, simple and connected. For undefined terminology and notation we refer to \cite{D-I}. Let $G=(V_G,E_G)$ be a simple undirected graph with $n$ vertices.
$G-v$, $G-uv$ denote the graph obtained from $G$ by deleting vertex $v \in V_G$, or edge $uv \in E_G$, respectively (this notation is naturally extended if more than one vertex or edge is deleted). Similarly, $G+uv$ is obtained from $G$ by adding an edge $uv \not\in E_G$. For $v\in V_G$, let $N_G(v)$ (or $N(v)$ for short) denote the set of all the adjacent vertices of $v$ in $G$ and $d_G(v)=|N_G(v)|$, and $\dist_G(u,v)$ is the distance between $u$ and $v$. For an edge subset $E'$ of $G$, denote by $G[E']$ the subgraph induced by $E'$. A cut edge in a connected graph $G$ is an edge whose deletion breaks the graph into two components. Let $\mathscr{G}_n^k$ be the set of all connected $n$-vertex graphs, each of which contains $k$ cut edges. Let $A(G)$ be the adjacency matrix of a graph $G$ with $\lambda_1(G),\lambda_2(G),\dots,\lambda_n(G)$ being its eigenvalues in non-increasing order. The number $\sum_{i=1}^n\lambda_i^k(G)\, (k=0,1,\dots,n-1)$ is called the $k$th spectral moment of $G$, denoted by $S_k(G)$. Let $S(G)=(S_0(G), S_1(G), \dots, S_{n-1}(G))$ be the sequence of spectral moments of $G$. For two graphs $G_1, G_2$, we shall write $G_1=_sG_2$ if $S_i(G_1)=S_i(G_2)$ for $i=0,1,\dots,n-1$. Similarly, we have $G_1\prec_sG_2\, (G_1$ comes before $G_2$ in an $S$-order) if for some $k\, (1\leq k\leq {n-1})$, we have $S_i(G_1)=S_i(G_2)\, (i=0,1,\dots,k-1)$ and $S_k(G_1)<S_k(G_2)$. We shall also write $G_1\preceq_sG_2$ if $G_1\prec_sG_2$ or $G_1=_sG_2$. The $S$-order has been used in producing graph catalogs (see \cite{C-G}), and for a more general setting of spectral moments one may be referred to \cite{C-R-S1}. The investigation of the $S$-order of graphs has attracted more and more researchers' attention. Cvetkovi\'c and Rowlinson \cite{C-R-S3} studied the $S$-order of trees and unicyclic graphs and characterized the first and the last graphs, in an $S$-order, of all trees and all unicyclic graphs with given girth, respectively.
Chen, Liu and Liu \cite{C-L-L} studied the lexicographic ordering by spectral moments ($S$-order) of unicyclic graphs with a given girth. Wu and Fan \cite{D-F} determined the first and the last graphs, in an $S$-order, of all unicyclic graphs and bicyclic graphs, respectively. Pan et al. \cite{X-F-P} gave the first $\sum_{k=1}^{\lfloor\frac{n-1}{3}\rfloor}(\lfloor\frac{n-k-1}{2}\rfloor-k+1)$ graphs apart from an $n$-vertex path, in an $S$-order, of all trees with $n$ vertices. Wu and Liu \cite{D-M} determined the last $\lfloor\frac{d}{2}\rfloor+1$ graphs, in an $S$-order, among all $n$-vertex trees of diameter $d\, (4 \le d \le n-3)$. Pan et al. \cite{B-Z1} identified the last and the second last graphs, in an $S$-order, of quasi-trees. Hu, Li and Zhang \cite{Li-H} studied the spectral moments of graphs with given clique number and chromatic number, respectively. Li and Song \cite{Li-S} identified the last $n$-vertex tree with a given degree sequence in an $S$-order. Consequently, the last trees in an $S$-order in the sets of all trees of order $n$ with the largest degree, the number of leaves, the independence number and the matching number were also determined, respectively. In light of the information available from the related results on the spectral moments of graphs, it is natural to consider this problem for some other classes of graphs, and the connected graphs with $k$ cut edges are a reasonable starting point for such an investigation. The $n$-vertex connected graphs with $k$ cut edges have been considered in different fields \cite{B-Z10,2,3,4}, whereas to the best of our knowledge the spectral moments of graphs in $\mathscr{G}_n^k$ have, so far, not been considered. Here, we identify the first, the second, the last and the second last graphs, in an $S$-order, among $\mathscr{G}_n^k$, respectively. Throughout the text we denote by $P_n, K_{1,n-1}, C_n$ and $K_n$ the path, star, cycle and complete graph on $n$ vertices, respectively.
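Since $S_k(G)=\operatorname{trace}(A(G)^k)$ counts closed walks of length $k$, the sequence $S(G)$ and the $S$-order comparison can be computed without any eigenvalue computation. The following self-contained sketch compares $P_5$ and $K_{1,4}$, chosen here only for illustration:

```python
def mat_mul(A, B):
    # Plain integer matrix product, enough for small adjacency matrices.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def spectral_moments(A):
    # S_k(G) = trace(A^k) = number of closed walks of length k,
    # for k = 0, 1, ..., n-1.
    n = len(A)
    moments, P = [], [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(n):
        moments.append(sum(P[i][i] for i in range(n)))
        P = mat_mul(P, A)
    return moments

def edges_to_adj(n, edges):
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

# P_5 (path) versus K_{1,4} (star): both have S_0 = 5, S_1 = 0,
# S_2 = 8 (twice the number of edges) and S_3 = 0 (no triangles),
# but S_4(P_5) = 20 < 32 = S_4(K_{1,4}), so the path precedes the star.
path = edges_to_adj(5, [(0, 1), (1, 2), (2, 3), (3, 4)])
star = edges_to_adj(5, [(0, 1), (0, 2), (0, 3), (0, 4)])
s_path, s_star = spectral_moments(path), spectral_moments(star)
assert s_path < s_star     # lexicographic comparison is exactly the S-order
```

Python's lexicographic list comparison mirrors the definition of $\prec_s$: the first index where the moment sequences differ decides the order.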
Let $K_{1,n-1}^*$ be the graph obtained from a star $K_{1,n-1}$ by attaching a leaf to one leaf of $K_{1,n-1}$, $U_n$ be the graph obtained from $C_{n-1}$ by attaching a leaf to one vertex of $C_{n-1}$, and $B_4, B_5$ be the two graphs obtained from two cycles $C_3, C_3'$ of length 3 by identifying one edge of $C_3$ with one edge of $C_3'$ and identifying one vertex of $C_3$ with one vertex of $C_3'$, respectively; see Fig. 1. \begin{figure} \caption{Graphs $B_4, B_5, K_6^1, K_6^2, K_6^3, P_6^1, P_6^2, P_6^3$ and $K(a_0,\{a_1, a_2, \dots, a_k\})$.} \end{figure} The graph $K_n^k$ is an $n$-vertex graph obtained by attaching $k$ pendant vertices to one vertex of $K_{n-k}$. The graph $P_n^k$ is a graph obtained by identifying one end-vertex of $P_{k+1}$ with one vertex of $C_{n-k}$. For example, for $n=6, K_6^0=K_6, K_6^5 $ is a star, $P_6^0=C_6$ and $K_6^1, K_6^2, K_6^3, P_6^1, P_6^2, P_6^3$ are depicted in Fig. 1. In general, $K_n^0=K_n$, $K_n^{n-1}$ is the star $K_{1,n-1}$, $K_n^{n-2}\cong K_n^{n-1}$ and $P_n^0=C_n$. Let $K(a_0,\{a_1,a_2,\dots,a_k\})$ be the graph obtained from $K_{1,k}$ by replacing each $u_i\in V_{K_{1,k}}$ by a clique $K_{a_i}\, (a_i\geq1, i=0,1,2,\dots,k)$; see Fig. 1. Denote $$ \mathscr{K}_n^k=\left\{K(a_0,\{a_1,a_2,\dots,a_k\}):a_i\geq1(0\leq i\leq k),\,\sum_{i=0}^ka_i=n\right\}. $$ Let $F$ be a graph. An $F$-subgraph of $G$ is a subgraph of $G$ which is isomorphic to the graph $F$. Let $\phi_{G}(F)$ (or $\phi(F)$) be the number of all $F$-subgraphs of $G$. \begin{lem} {\rm(see \cite{C-R-S2})} The $k$th spectral moment of $G$ is equal to the number of closed walks of length $k$.
\end{lem} \begin{lem} For every graph $G$, we have \begin{wst} \item[{\rm (i)}] $S_4(G)=2\phi(P_2)+4\phi(P_3)+8\phi(C_4)$ {\rm(see \cite{C-G});} \item[{\rm (ii)}] $S_5(G)=30\phi(C_3)+10\phi(U_4)+10\phi(C_5)$ {\rm(see \cite{D-M});} \item[{\rm (iii)}] $S_6(G)=2\phi(P_2)+12\phi(P_3)+6\phi(P_4)+12\phi(K_{1,3})+12\phi(U_5)+36\phi(B_4)+24\phi(B_5)+24\phi(C_3)+48\phi(C_4)+12\phi(C_6) \, ${\rm(see \cite{C-L-L}).} \end{wst} \end{lem} \begin{lem}[\cite{C-R-S2}] Given a connected graph $G$, $S_0(G)=n, S_1(G)=l, S_2(G)=2m, S_3(G)=6t$, where $n, l, m, t$ denote the number of vertices, the number of loops, the number of edges and the number of triangles contained in $G$, respectively. \end{lem} \begin{lem}[\cite{C-R-S3}]\label{lem4} In an $S$-order of the $n$-vertex unicyclic graphs with girth $g$, the first graph is $U_n^g$, which is obtained by the coalescence of a cycle $C_g$ with a path $P_{n-g+1}$ at one of its end-vertices. \end{lem} \section{\normalsize The last and the second last graphs in an $S$-order among $\mathscr{G}_n^k$}\setcounter{equation}{0} In this section, we will determine the last two graphs, in an $S$-order, among $\mathscr{G}_n^k$. Let $\Bbb E=\{e_1,e_2,\dots,e_k\}$ be the set of the cut edges of $G\in\mathscr{G}_n^k$. Note that $S_2(G)=2|E_G|$, hence $S_2(G+e)>S_2(G)$. By Lemma 1.3, in order to determine the last graph in an $S$-order among $\mathscr{G}_n^k$, it suffices to choose the graph $G\in\mathscr{G}_n^k$ such that its $S_2(G)$ is as large as possible. So we may make the following assumption throughout this section. \noindent {\bf Assumption 0.} Each component of $G-\Bbb E$ is a clique. \begin{thm} Of all the connected graphs with $n$ vertices and $k$ cut edges, the last graph in an $S$-order is obtained uniquely at $K_n^k$. \end{thm} \begin{proof} If $k=0$, then by Assumption 0 we have $\mathscr{G}_n^0=\{K_n\}$, and our result holds immediately. Therefore we may assume that $k\geq1$.
Again by Assumption 0, we can denote the components of $G-\Bbb E$ by $K_{a_0},K_{a_1},\dots,K_{a_k},\, a_0+a_1+\dots+a_k=n$. Assume, without loss of generality, that $a_0\geq a_1\geq a_2\geq\dots\geq a_k\geq1$. Let $V_i=\{v\in V_{K_{a_i}}\!:\, v$ is an end-vertex of a cut edge of $G$\}. Choose $G\in\mathscr{G}_n^k$ such that $G$ is as large as possible under the order $\preceq_s$. In order to complete the proof, it suffices to show the following facts. \noindent {\bf Fact 1}. $|V_i|=1$ for $i=0,1,2,\ldots, k.$ \begin{proof} Suppose to the contrary that there exists $i\in \{0,1,2,\ldots, k\}$ such that $|V_i|>1$. Let $u,u' \in V_i$ be two distinct end-vertices of cut edges of $G$. Denote $N_G(u)\backslash N_{K_{a_i}}(u)=\{w_1,w_2,\dots,w_s\}$ and $N_G(u')\backslash N_{K_{a_i}}(u')=\{z_1,z_2,\dots,z_l\}$. It is routine to check that $s\geq1,\, l\geq 1$. Let $$G^*=G-\{u'z_1,u'z_2,\dots,u'z_l\}+\{uz_1,uz_2,\dots,uz_l\}, $$ then $G^* \in \mathscr{G}_n^k$. On the one hand, $S_i(G)=S_i(G^*)$ for $i=0, 1, 2, 3.$ On the other hand, $\phi_G(P_2)=\phi_{G^*}(P_2), \phi_G(C_4)=\phi_{G^*}(C_4)$, hence by Lemma 1.2(i), \begin{eqnarray*} S_4(G)-S_4(G^*) = 4(\phi_G(P_3)-\phi_{G^*}(P_3))=4\left({s\choose 2}+{l\choose 2}-{s+l\choose 2}\right)=-4sl<0, \end{eqnarray*} which implies that $G\prec_sG^*$, a contradiction. Therefore $|V_i|=1$ for $0\leq i \leq k $. \end{proof} By Fact 1, we can assume that $V_i=\{u_i\}$ for $i=0,1,2,\ldots, k$. \noindent {\bf Fact 2}. $G\in \mathscr{K}_n^k$. \begin{proof} If not, then there exists a cut edge $u_0u_i\in \Bbb E$ such that $u_i$ is an end-vertex of another cut edge(s). Let $$ |N_G(u_i)\setminus (N_{K_{a_i}}(u_i)\cup \{u_0\})|=l, \ \ \ |N_G(u_0)\setminus (N_{K_{a_0}}(u_0)\cup \{u_i\})|=s. $$ It is straightforward to check that $l\ge 1$ and $s\ge 0.$ First consider that $s\geq1$.
In this case, let $$ G^*=G-\{u_iz: \, z\in N_G(u_i)\setminus (N_{K_{a_i}}(u_i)\cup \{u_0\})\}+\{u_0z: \, z\in N_G(u_i)\setminus (N_{K_{a_i}}(u_i)\cup \{u_0\})\}. $$ It is easy to see that $G^*\in \mathscr{G}_n^k$. Note that $S_i(G)=S_i(G^*)$ for $i=0,1,2,3$ and $\phi_G(P_2)=\phi_{G^*}(P_2)$, $\phi_G(C_4)=\phi_{G^*}(C_4)$, hence $$ S_4(G)-S_4(G^*) = 4(\phi_G(P_3)-\phi_{G^*}(P_3))=4(l(a_i-1)-l(a_0-1)-ls)=4l(a_i-a_0-s)<0, $$ which implies that $G\prec_sG^*$, a contradiction. Now consider that $s=0$. In this case, there exists a cut edge $u_iu_j\in \Bbb E$ such that $u_j$ is an end-vertex of another cut edge(s). Let $ |N_G(u_j)\setminus (N_{K_{a_j}}(u_j)\cup \{u_i\})|=p. $ It is straightforward to check that $p\ge 1$. Let \begin{eqnarray*} G^*&=&G-\{u_jz: \, z\in N_G(u_j)\setminus (N_{K_{a_j}}(u_j)\cup \{u_i\})\}+\{u_0z: \, z\in N_G(u_j)\setminus (N_{K_{a_j}}(u_j)\cup \{u_i\})\}\\ && -\{u_iw: \, w\in N_G(u_i)\setminus (N_{K_{a_i}}(u_i)\cup \{u_0\})\}+\{u_0w: \, w\in N_G(u_i)\setminus (N_{K_{a_i}}(u_i)\cup \{u_0\})\}. \end{eqnarray*} It is easy to see that $G^*\in \mathscr{G}_n^k$. Note that $S_i(G)=S_i(G^*)$, $i=0,1,2,3$ and $\phi_G(P_2)=\phi_{G^*}(P_2)$, $\phi_G(C_4)=\phi_{G^*}(C_4)$. Hence, \begin{eqnarray*} S_4(G)-S_4(G^*) &=& 4(\phi_G(P_3)-\phi_{G^*}(P_3))\\ &=& 4[l(a_i-1)+p(a_j-1)-p(l-1)-p-(l+p)(a_0-1)]\\ &=&4[l(a_i-a_0)+p(a_j-a_0)-pl]<0. \end{eqnarray*} The last inequality follows from $a_i\le a_0, a_j\le a_0$ and $pl>0.$ Hence, we obtain that $G\prec_sG^*$, a contradiction. Therefore $G\in \mathscr{K}_n^k$. \end{proof} By Fact 2, we can assume that $u_0u_j\in \Bbb E$, $1\leq j \leq k$. \noindent {\bf Fact 3}. $a_1=a_2=\cdots=a_k=1$. \begin{proof} Assume to the contrary that there exists a $j\in \{1,2,\ldots,k\}$ such that $a_j>1$. By Fact 2, we have $G=K(a_0,\{a_1,\ldots,a_{j-1},a_j, a_{j+1},\ldots,a_k\})$. Now we consider $G^*=K(a_0+a_j-1,\{a_1,\ldots,a_{j-1},1, a_{j+1},\linebreak \ldots,a_k\})$.
It is easy to see that $G^*\in \mathscr{K}_n^k.$ Note that $S_i(G)=S_i(G^*),\,i=0,1$ and $$ S_2(G)-S_2(G^*)=2(a_j-1)-2(a_j-1)a_0=2(a_j-1)(1-a_0)<0, $$ i.e., $ G\prec_s G^*$, a contradiction. Therefore $a_j=1$ for $j=1,2,\ldots, k$. \end{proof} In view of Fact 3, we have $a_0=n-k$. Hence, $G=K(n-k,\{1,1,\ldots,1\})$, i.e., $G\cong K_n^k$, as desired. \end{proof} In the rest of this section, we are to determine the last graph in an $S$-order among $\mathscr{G}_n^k\backslash \{K_n^k\}$. Delete an edge, say $xy$, from $K_n$ and denote the resultant graph by $G_1$. Let $G_2$ be a graph obtained from $G_1$ by attaching a pendant vertex to one vertex, say $r$, of $G_1$ with $r\not=x, y$. Let $G_3=K(n-k,\{\underbrace{1,1,\dots,1}_k\})-uw+vw$, where $uw$ is a cut edge and $u, v$ are two different vertices in $V_{K_{n-k}}$. Based on Lemma 1.3, it is easy to see that among $\mathscr{G}_n^0,$ $K_n$ (resp. $G_1$) is the last (resp. the second last) graph in an $S$-order, while among $\mathscr{G}_n^1$ with $n\ge 5$, based on $S_2(G)$, the second last graph in an $S$-order must be a graph obtained from $K_n^1$ by deleting a non-cut edge, say $e$, from $K_n^1$. Denote the resultant graph by $G'$ if $e$ has a common vertex with the cut edge in $K_n^1$ and by $G_2$ otherwise. Note that $S_i(G_2)=S_i(G')$ for $i=0,1,2,3$ and $\phi_{G_2}(P_2)=\phi_{G'}(P_2),\, \phi_{G_2}(C_4)=\phi_{G'}(C_4),$ hence by Lemma 1.2(i) $$ S_4(G')-S_4(G_2)=4(\phi_{G'}(P_3)-\phi_{G_2}(P_3))=-4<0, $$ i.e., $G'\prec_sG_2$. Hence, among $\mathscr{G}_n^1$ with $n\ge 5$, $G_2$ is the second last graph in an $S$-order. In what follows we only consider $k\geq2$. \begin{thm} Among $\mathscr{G}_n^k$ with $2\le k\le n-1$, the second last graph in an $S$-order is obtained uniquely at $G_3$ if $k\in \{2,3,\ldots, n-2\}$ and at $K_{1,n-1}^*$ otherwise, where $G_3$ is defined as above. \end{thm} \begin{proof} Choose $G\in \mathscr{G}_n^k\setminus\{K_n^k\}$ such that it is as large as possible according to $\preceq_s$.
Denote the components of $G-\Bbb E$ by $U_0, U_1, U_2, \ldots, U_k$. We are to show that each of the components is a complete graph. In fact, suppose that there exists a $U_i$ which is not a complete graph, i.e., $U_i$ contains two vertices $x, y$ satisfying $xy\not\in E_{U_i}$. Let $G'=G+xy$. If $G'\not\cong K_n^k$, it is easy to see that $G\prec_s G'$, a contradiction. If $G'\cong K_n^k$, then either $x$ or $y$ is not an end-vertex of a cut edge of $G$. Without loss of generality, assume that $x$ is not an end-vertex of a cut edge of $G$; delete a cut edge of $G'$ and connect the isolated vertex with $x$ by an edge; denote the resultant graph by $G''$. Then we have $S_0(G)=S_0(G''), S_1(G)=S_1(G'')$ and $S_2(G)<S_2(G'').$ Hence, $G\prec_s G''$, a contradiction. Therefore, we may denote the components of $G-\Bbb E$ by $K_{a_0},K_{a_1},\dots,K_{a_k}$,\, $a_0+a_1+\dots+a_k=n$. Without loss of generality, assume that $a_0\geq a_1\geq a_2\geq\dots\geq a_k\geq1$. If $a_0=1$, then $G$ is an $n$-vertex tree. By [13, Theorems 3.3 and 3.8], we know that the second last tree in an $S$-order among $n$-vertex trees is just $K_{1,n-1}^*$. It is easy to see that $a_0\not=2,$ hence in what follows we consider $a_0\ge 3.$ Let $V_i=\{v\in V_{K_{a_i}}\!:\, v$ is an end-vertex of a cut edge of $G$\}. In order to complete the proof, it suffices to show the following facts. \noindent {\bf Fact 1}. If $a_0\geq3$, then $|V_0|=2, |V_1|=|V_2|=\cdots=|V_k|=1$. \begin{proof} We prove Fact 1 by contradiction. If $|V_0|=|V_1|=|V_2|=\cdots=|V_k|=1$, then without loss of generality assume that $V_i=\{u_i\}$,\, $i=0,1,\ldots, k$. First we consider $G\in \mathscr{K}_n^k$. Note that $G\in \mathscr{K}_n^k\setminus\{K_n^k\}$, hence $a_1\geq3$; otherwise, $a_1=2$, which implies that $G$ contains at least $k+1$ cut edges, a contradiction. If $a_1>3$, we consider the graph $G^*:=K(a_0+1,\{a_1-1, a_2,\ldots, a_k\})$ in $\mathscr{K}_n^k\setminus\{K_n^k\}$.
Note that $S_i(G)=S_i(G^*)$ for $i=0,1$ and $S_2(G)-S_2(G^*)=2(a_1-1-a_0)<0$, hence $G\prec_sG^*$, a contradiction. Therefore $a_1=3$. If $a_2>1$, we consider the graph $G':=K(a_0+a_2-1,\{3, 1,a_3,\ldots, a_k\})\in \mathscr{G}_n^k\backslash \{K_n^k\}$. Note that $ S_i(G)=S_i(G')$ for $i=0,1$ and $S_2(G)-S_2(G')=2(a_2-1-(a_2-1)a_0)=2(a_2-1)(1-a_0)<0$, hence $G\prec_s G'$, a contradiction. Therefore, $a_2=1$, whence $a_3=\dots=a_k=1$. Together with $a_1=3$, we have $a_0=n-k-2.$ That is to say, $G\cong K(n-k-2,\{3,1,1,\ldots, 1\}).$ For convenience, let $w_1\in N_{K_{a_0}}(u_0)$ and $N_G(u_1)=\{u_0,v_1,v_2\}.$ Consider $$ G^*:=G-\{u_1v_1,u_1v_2\}+\{w_1v_1,w_1v_2\}, $$ it is easy to see that $G^*\in\mathscr{G}_n^k\backslash \{K_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $i=0,1,2,3$ and $\phi_G(P_2)=\phi_{G^*}(P_2), \phi_G(C_4)=\phi_{G^*}(C_4)$, $\phi_G(P_3)-\phi_{G^*}(P_3)=2-2(a_0-1)=2(2-a_0)<0$, hence by Lemma 1.2(i) $S_4(G)<S_4(G^*)$. Thus $G\prec_s G^*$, a contradiction. Therefore $G\not\in\mathscr{K}_n^k$. Now we consider the case $G\not\in\mathscr{K}_n^k$. It is easy to see that the edge induced graph $G[\Bbb{E}]$ is a tree which is not isomorphic to $K_{1,k}.$ Hence, partition $V_{G[\Bbb{E}]}$ into $D^0(G[\Bbb{E}])\cup D^1(G[\Bbb{E}])\cup D^2(G[\Bbb{E}])\cup D^3(G[\Bbb{E}])\cup \cdots$, where $D^i(G[\Bbb{E}])=\{u\in V_{G[\Bbb{E}]}: \dist_{G[\Bbb{E}]}(u, u_0)=i\},\, i=0,1,2,3,\ldots.$ It is easy to see that $D^2(G[\Bbb{E}])\not=\emptyset.$ If $D^3(G[\Bbb{E}])\not=\emptyset$, that is to say, there exists $u\in D^2(G[\Bbb{E}])$ such that $d_{G[\Bbb{E}]}(u)\ge 2,$ then choose $u_i$ from $D^1(G[\Bbb{E}])$ such that $u_i$ is adjacent to $u_0$ and $u$. Let $W:=N_{G[\Bbb{E}]}(u_i)\setminus \{u_0\}.$ As $u\in W$, we have $W\not=\emptyset$. Consider $$ G^*=G-\{u_iw:\, w\in W\}+\{u_0w:\, w\in W\}, $$ then it is routine to check that $G^*\in\mathscr{G}_n^k\setminus \{K_n^k\}$.
Note that $S_i(G)=S_i(G^*)$ for $i=0, 1, 2,3$ and $\phi_G(P_2)=\phi_{G^*}(P_2), \phi_G(C_4)=\phi_{G^*}(C_4)$, hence by Lemma 1.2(i) we have $$ S_4(G)-S_4(G^*) = 4(\phi_G(P_3)-\phi_{G^*}(P_3))=4[(a_i-a_0)-st], $$ where $s=|W|\ge 1$ and $t=|N_{G[\Bbb{E}]}(u_0)\setminus\{u_i\}|\ge 0$. Note that $a_i\le a_0$, hence if $a_i<a_0$, then for all $t\ge 0$ we have $(a_i-a_0)-st<0$, which implies that $G\prec_sG^*$, a contradiction. If $a_i=a_0$, then for all $t\ge 1$ we have $(a_i-a_0)-st<0$, which implies that $G\prec_sG^*$, a contradiction. If $a_i=a_0$ and $t = 0$, then $G^*\cong G$. Hence, in order to complete the proof, it suffices to consider $D^3(G[\Bbb{E}])=\emptyset$ and $d_{G[\Bbb{E}]}(u_0)>1.$ Furthermore, as $G\not\in \mathscr{K}_n^k$, we have $D^2(G[\Bbb{E}])\not=\emptyset$ and for each $u\in D^2(G[\Bbb{E}])$, $u$ is a leaf of $G[\Bbb{E}]$ (otherwise, $D^3(G[\Bbb{E}])\not=\emptyset,$ a contradiction). If there exists $u_i\in D^1(G[\Bbb{E}])$ such that $d_{G[\Bbb{E}]}(u_i)\ge 3$, then move $d_{G[\Bbb{E}]}(u_i)-2$ pendant edges to $u_0$ and denote the resultant graph by $G'$. It is easy to see that $G'\in\mathscr{G}_n^k \setminus \{K_n^k\}.$ Note that $s:=d_{G[\Bbb{E}]}(u_i)-2\ge 1,\, q:=d_{G[\Bbb{E}]}(u_0)-1\ge 1,\, S_i(G)=S_i(G')$ for $ i=0,1,2,3$ and $\phi_G(P_2)=\phi_{G'}(P_2),\phi_G(C_4)=\phi_{G'}(C_4)$, hence by Lemma~1.2(i) we have \begin{eqnarray} S_4(G)-S_4(G') &=& 4(\phi_G(P_3)-\phi_{G'}(P_3))\notag\\ &=& 4[s(a_i-1)-s(a_0-1)-s(q-1)]\notag\\ &=& 4s(a_i-a_0-q+1). \end{eqnarray} If $a_0>a_i$ or $q\geq2$, in the view of (2.1), we obtain that $S_4(G)-S_4(G')<0$, i.e., $G\prec_sG'$, a contradiction. If $a_0=a_i$ and $q=1$, then it is easy to see $G'\cong G$. Hence, in order to complete the proof, it suffices to consider that, in the edge induced graph $G[\Bbb{E}]$, each of the non-pendant vertices in $D^1(G[\Bbb{E}])$ is of degree 2. 
For convenience, let $W=\{u: u\in D^1(G[\Bbb{E}]), d_{G[\Bbb{E}]}(u)=2\}.$ It is easy to see that $W\not=\emptyset.$ If $|W|\ge 2$, choose $u\in W$ such that its unique neighbor in $G[\Bbb{E}]$ other than $u_0$ is a leaf, say $u'$. Let $$ G^*=G-uu'+u_0u', $$ then $G^*\in\mathscr{G}_n^k\setminus\{K_n^k\}$. Since $S_i(G)=S_i(G^*)$ for $ i=0,1,2,3$ and $\phi_G(P_2)=\phi_{G^*}(P_2),\phi_G(C_4)=\phi_{G^*}(C_4)$, $$ \phi_G(P_3)-\phi_{G^*}(P_3)=(a_i-1)-(a_0-1)-p=a_i-a_0-p<0, $$ we have $S_4(G)-S_4(G^*)<0,$ i.e., $G\prec_sG^*$, a contradiction. Hence, $|W|=1$. By a similar discussion as in the proof of Fact 3 in Theorem 2.1, we can obtain that $a_0=n-k, a_1=a_2=\ldots=a_k=1$. Note that $a_0\geq 3$, hence $k<n-1$. Assume that $W=\{u\}$ with $N_{G[\Bbb{E}]}(u)=\{u_0, u'\}$, where $u'$ is a pendant vertex in $G[\Bbb{E}]$. Let $x\in N_{K_{a_0}}(u_0)$. Consider $ G^*=G-\{uu'\}+\{xu'\}, $ then $G^*\in\mathscr{G}_n^k\setminus\{K_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $i=0,1,2,3$,\, $\phi_G(P_2)=\phi_{G^*}(P_2), \phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)=1-(a_0-1)=2-a_0<0$ $(a_0\geq3)$, hence $S_4(G)-S_4(G^*)<0,$ i.e., $G\prec_sG^*$, a contradiction. If there is a $V_i$ satisfying $|V_i|\geq3$, then choose two distinct vertices $u_i', u_i''$ in $V_i$ and let $ G^*=G-\{u_i'u:\, u\in N_{G[\Bbb{E}]}(u_i')\}+\{u_i''u:\, u\in N_{G[\Bbb{E}]}(u_i')\}. $ It is easy to see that $G^*\in\mathscr{G}_n^k\setminus\{K_n^k\}$. Note that $S_i(G)=S_i(G^*), i=0, 1, 2, 3$, \, $\phi_G(P_2)=\phi_{G^*}(P_2),\phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)=-|N_{G[\Bbb{E}]}(u_i')||N_{G[\Bbb{E}]}(u_i'')|<0$, hence $S_4(G)-S_4(G^*)<0,$ which implies that $G\prec_sG^*$, a contradiction. Suppose now that there exists an $i\in \{1,2,\ldots, k\}$ such that $|V_i|=2$. Assume, without loss of generality, that $V_i=\{u_i, u'\}$, where $u_i$ is adjacent to $u_0\in V_0$. Let $$ G^*=G-\{u_ix: x\in V_{K_{a_i}}\}+\{yx: y\in V_{K_{a_0}}, x\in V_{K_{a_i}}\setminus\{u_i\}\}.
$$ It is easy to see that $G^*\in\mathscr{G}_n^k\backslash \{K_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $i=0,1$ and $S_2(G)-S_2(G^*)=2(a_i-1)-2(a_i-1)a_0=2(a_i-1)(1-a_0)<0$, hence $S_2(G)<S_2(G^*),$ i.e., $G\prec_s G^*$, a contradiction. Combining the discussions above, we obtain that $|V_1|=|V_2|=\cdots=|V_k|=1,$ whence $|V_0|=2$, as desired. \end{proof} \noindent {\bf Fact 2}. $a_1=a_2=\cdots=a_k=1$. \begin{proof} By a similar discussion as in the proof of Fact 3 in Theorem 2.1, we can get $a_0=n-k$, $a_1=a_2=\dots=a_k=1$. We omit the procedure here. \end{proof} \noindent {\bf Fact 3}. $G\cong G_3$, where $G_3=K(n-k,\{1,1,\dots,1\})-u_0u_k+u_0'u_k$, where $u_0u_k$ is a cut edge and $u_0'\in V_{K_{a_0}}\backslash \{u_0\}$. \begin{proof} Note that if $G$ has just two cut edges, it is easy to see that $G\cong G_3$ defined as above. Hence in what follows we consider that $G$ contains at least three cut edges. Let $N_{G[\Bbb{E}]}(u_0)=\{u_1,u_2,\dots,u_m\}$ and $N_{G[\Bbb{E}]}(u_0')=\{u_1',u_2',\dots,u_t'\}$. Without loss of generality, assume that $m\geq t$. Obviously, $t\ge 1$. At first we show that $t=1$. Otherwise, let $$ G^*=G-\{u_0'u_2',u_0'u_3',\dots,u_0'u_t'\}+\{u_0u_2',u_0u_3',\dots,u_0u_t'\}. $$ It is easy to see that $G^*\in\mathscr{G}_n^k\backslash \{K_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $ i=0,1,2,3$,\, $\phi_G(P_2)=\phi_{G^*}(P_2),$ and $\phi_G(C_4)=\phi_{G^*}(C_4)$, hence $$ S_4(G)-S_4(G^*)=4(\phi_G(P_3)-\phi_{G^*}(P_3))=-4m(t-1)<0, $$ i.e., $G\prec_sG^*$, a contradiction. Now we are to show that $m=k-1.$ If not, there exists a vertex $u\in \{u_1,u_2,\dots,u_m, u_1',u_2',\dots,u_t'\}$ such that $d_{G[\Bbb{E}]}(u)\ge 2.$ Denote $N_{G[\Bbb{E}]}(u)\setminus\{u_0,u_0'\}=\{\hat{u}_1,\hat{u}_2,\dots,\hat{u}_s\}$, $s\geq1$. Let $$ G^*=G-\{u\hat{u}_1,u\hat{u}_2,\dots,u\hat{u}_s\}+\{u_0\hat{u}_1,u_0\hat{u}_2,\dots,u_0\hat{u}_s\}. $$ It is easy to see that $G^*\in\mathscr{G}_n^k\backslash \{K_n^k\}$.
Notice that $S_i(G)=S_i(G^*)$ for $ i=0,1,2,3$, $\phi_G(P_2)=\phi_{G^*}(P_2)$ and $\phi_G(C_4)=\phi_{G^*}(C_4)$, hence $$ S_4(G)-S_4(G^*)=4(\phi_G(P_3)-\phi_{G^*}(P_3))=4s((a_i-1)-(a_0-1)-(m-1))=4s(a_i-a_0)-4s(m-1)<0. $$ The last inequality follows from $a_i=1< n-k = a_0$ (by Fact 2) and $m\ge 1, s\ge 1$. Hence, we get $S_4(G)<S_4(G^*),$ i.e., $G\prec_sG^*$, a contradiction. So we have $m=k-1, t=1$, which implies that $G\cong G_3$. \end{proof} This completes the proof. \end{proof} \section{\normalsize The first and the second graphs in an $S$-order among $\mathscr{G}_n^k$ }\setcounter{equation}{0} In this section, we are to determine the first and the second graphs in an $S$-order among $\mathscr{G}_n^k$. Let ${\Bbb E}=\{e_1,e_2,\dots,e_k\}$ be the set of all the cut edges of $G\in\mathscr{G}_n^k$. Note that if we delete an edge, say $e$, from a connected graph $G$, then in view of $S_2(G)=2|E_G|$, we have $S_2(G)>S_2(G-e)$. In order to determine the first graph in an $S$-order among $\mathscr{G}_n^k$, it suffices to choose the graph such that its size is as small as possible. \begin{thm} Of all the connected graphs with $n$ vertices and $k$ cut edges, the first graph in an $S$-order is obtained uniquely at $P_n^k$. \end{thm} \begin{proof} Choose $G\in \mathscr{G}_n^k$ such that it is as small as possible according to the relation $\preceq_s$. If $k=0$, then it is easy to see that $G\cong C_n$ and our result holds immediately. Therefore we may assume that $k\geq1$. We first show the following claim. \begin{claim} $G$ contains exactly one cycle. \end{claim} \begin{proof} Assume to the contrary that $G$ contains at least two cycles. If $G$ contains two cycles $C^1$ and $C^2$ such that $C^1$ and $C^2$ have edges in common (see Fig. 2(a)), then let $G^*=G-\{uv, xy\}+ux$ (see Fig. 2(b)); if $G$ contains two cycles $C^1$ and $C^2$ such that $C^1$ and $C^2$ have just one vertex in common (see Fig. 2(c)), then let $G^*=G-\{ux, vx\}+uv$ (see Fig. 2(d)).
It is routine to check that $G^*\in \mathscr{G}_n^k$ \begin{figure} \caption{Graphs used in the proof of Claim 1.} \end{figure} and in each of the above cases one has $S_i(G)=S_i(G^*),\,i=0,1$ and $S_2(G)-S_2(G^*)=2>0$, hence $ G^*\prec_s G$, a contradiction. Suppose now that $G$ contains two cycles $C_l=u_0u_1u_2\dots u_{l-1} $ and $C_j=v_0v_1v_2\dots v_{j-1}$ such that $C_l$ connects $C_j$ by a path $P_i,\, i\ge 2$, whose end vertices are $u_0, v_1$, and the vertex, say $u_t$ (resp. $v_m$), on the cycle $C_l$ (resp. $C_j$) in $G$ either is of degree 2 or has a subgraph $G_t$ (resp. $H_m$) attached, $ 0\leq t \leq {l-1} $, $ 0\leq m \leq {j-1} $; see Fig. 3. Let $$ G^*=G-\{u_0u_1,v_1v_2,v_0v_1\}+\{u_0v_2,u_1v_0\} , $$ then $G^*\in\mathscr{G}_n^k$. Since $S_i(G)=S_i(G^*)$ for $i=0,1$ and $S_2(G)-S_2(G^*)=2>0$, we have $ G^*\prec_s G$, a contradiction. Therefore, $G$ contains exactly one cycle. \end{proof} \begin{figure} \caption{Graph $G\Rightarrow G^*$.} \end{figure} By Claim 1, we know that $G$ is a unicyclic graph. Note that $G$ contains exactly $k$ cut edges, hence $G$ is an $n$-vertex unicyclic graph with girth $n-k$. By Lemma \ref{lem4} the first graph in an $S$-order among the $n$-vertex unicyclic graphs with girth $n-k$ is just the graph $P_n^k,$ as desired. \end{proof} In the rest of this section, we are to determine the second graph in an $S$-order among $\mathscr{G}_n^k\, (k \geq 3)$. \begin{thm} Of all graphs with $n$ vertices and $k$ cut edges, the second graph in an $S$-order is obtained uniquely at $\hat{U}_n^k\, (k\geq 3)$, where $\hat{U}_n^k$ is obtained by attaching two leaves to the pendant vertex of the graph $P_{n-2}^{k-2}$.
\end{thm} \begin{proof} Note that if we delete an edge $e$ from a connected graph $G$, then in view of $S_2(G)=2|E_G|$, we have $S_2(G)>S_2(G-e)$; hence, in order to determine the second graph in an $S$-order among $\mathscr{G}_n^k$, it suffices to determine the second graph in an $S$-order among the set of all $n$-vertex unicyclic graphs with girth $n-k$; we denote this set by $\mathscr{U}_n^k$. Choose $G\in \mathscr{U}_n^k\setminus\{P_n^k\}$ such that it is as small as possible with respect to $\preceq_s$. Note that $\Bbb{E}$ is the set of $k$ cut edges of $G$, hence $G[\Bbb{E}]$ is a forest. We are to show that $G[\Bbb{E}]$ is a tree. If this were not true, there would exist at least two vertices, say $u_0, v_0$, on the unique cycle contained in $G$ satisfying $d_G(u_0), d_G(v_0)\ge 3.$ In the edge induced graph $G[\Bbb{E}]$, consider the tree, say $T_1$, containing $u_0$. We are to show that $T_1$ is a path; otherwise, choose a longest path $P=u_0u_1\dots u_p$ in $T_1$ with end-vertices $u_0, u_p$; it is easy to see that $d_{T_1}(u_p)=1$. Suppose there exists $u_i$ with $i\ge 1$ on $P$ such that $d_G(u_i)>2$. Choose a vertex $x$ in $N_G(u_i)\setminus\{u_{i-1}, u_{i+1}\}$ and let $ G^*=G-u_ix+u_px, $ then $G^*\in\mathscr{G}_n^k\backslash \{P_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $ i=0,1,2,3$,\, $\phi_G(P_2)=\phi_{G^*}(P_2),\phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)\geq1$, hence by Lemma~1.2(i), we get $S_4(G)-S_4(G^*)>0$, i.e., $G^*\prec_s G$, a contradiction. Hence, we obtain that each vertex $u_i$ on $P$ is of degree 2 in $G$ for $i=1,2,\ldots, p-1$. Hence, if $d_G(u_0)=3$, then $T_1$ is a path, as desired. If $d_G(u_0)>3$, then choose $x$ from $N_G(u_0)$ such that $x$ is neither on the cycle nor on the path $P$ contained in $G$. Let $ G^*=G-u_0x+u_px, $ then $G^*\in\mathscr{G}_n^k\backslash \{P_n^k\}$.
Notice that $S_i(G)=S_i(G^*)$ for $ i=0,1,2,3$, $\phi_G(P_2)=\phi_{G^*}(P_2),\phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)\geq 2$; by Lemma 1.2(i), we get $S_4(G)-S_4(G^*)>0,$ i.e., $G^*\prec_s G$, a contradiction. By a similar discussion as above, we can also show that, in $G[\Bbb{E}]$, the component containing $v_0$ is also a path, say $P'$. For convenience, let $v_0'$ be the neighbor of $v_0$ on $P'$. Suppose there exists another vertex $u_0'\neq u_0, v_0$ on the unique cycle contained in $G$ satisfying $d_G(u_0')\ge 3$. Let $G^*=G-\{u_0'x, x\in N_{G[\Bbb{E}]}(u_0')\}+\{u_px, x\in N_{G[\Bbb{E}]}(u_0')\}$, then $G^*\in\mathscr{G}_n^k\backslash \{P_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $ i=0,1,2,3$, $\phi_G(P_2)=\phi_{G^*}(P_2),\phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)\geq 1$; by Lemma 1.2(i), we get $S_4(G)-S_4(G^*)>0,$ i.e., $G^*\prec_s G$, a contradiction. So we only need to consider the case in which exactly two vertices $u_0, v_0$ on the unique cycle contained in $G$ satisfy $d_G(u_0), d_G(v_0)\ge 3.$ Without loss of generality, assume that $|E_P|\geq |E_{P'}|$. Let $G^*=G-v_0v_0'+u_{p-1}v_0'$; it is easy to see that $G^*\in\mathscr{G}_n^k\setminus\{P_n^k\}$. \noindent $\bullet$ $k=3$. By Lemma 1.1, we have $S_i(G)\geq S_i(G^*)$ for $i=0,1,\dots,n-2$ and $S_{n-1}(G)>S_{n-1}(G^*)$. Hence $G^*\prec_sG$, a contradiction. \noindent $\bullet$ $k\geq4$. Note that $\phi_G(P_2)=\phi_{G^*}(P_2),\phi_G(C_4)=\phi_{G^*}(C_4)$, $\phi_G(P_3)=\phi_{G^*}(P_3),\phi_G(K_{1,3})=\phi_{G^*}(K_{1,3}),\linebreak \phi_G(U_5)=\phi_{G^*}(U_5), \phi_G(B_4)=\phi_{G^*}(B_4)$, $\phi_G(B_5)=\phi_{G^*}(B_5)$, $\phi_G(C_3)=\phi_{G^*}(C_3)$, $\phi_G(C_6)=\phi_{G^*}(C_6) $ and $\phi_G(P_4)-\phi_{G^*}(P_4)\geq1$, hence by Lemmas 1.2 and 1.3, we get that $S_i(G)=S_i(G^*)$ for $i=0,1,2,3,4,5$ and $S_6(G)-S_6(G^*)>0,$ i.e., $G^*\prec_sG$, a contradiction. Therefore, we obtain that $G[\Bbb{E}]$ is a tree.
That is to say, there exists just one vertex, say $u_0$, on the unique cycle such that $d_G(u_0)\ge 3$. Choose one of the longest paths, say $P:=u_0u_1\dots u_p$, from $G[\Bbb{E}]$. It is easy to see that $u_p$ is a leaf of $G$. Furthermore, we have the following claim. \begin{claim} The length of $P$ is $k-1$, i.e., $P:=u_0u_1\dots u_{k-2}u_{k-1}$ and $G[\Bbb{E}]$ is obtained from $P$ by attaching a leaf to $u_{k-2}$ of $P$. \end{claim} \begin{proof} Note that $P=u_0u_1\dots u_p$ is one of the longest paths of $G[\Bbb{E}]$ and $u_p$ is a leaf. We first show that $d_G(u_0)=3$. Suppose not, and choose $x$ from $N_G(u_0)$ such that $x$ is neither on the cycle nor on the path $P$ of $G$. If $d_G(u_i)\geq3$ for some $i\in \{1,2,\dots,p-1\}$, let $ G^*=G-u_0x+u_px. $ Obviously, $G^*\in\mathscr{G}_n^k\backslash\{P_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $ i=0,1,2,3$,\, $\phi_G(P_2)=\phi_{G^*}(P_2),\phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)\geq 2$, hence by Lemma~1.2(i), we get $S_4(G)-S_4(G^*)>0,$ i.e., $G^*\prec_s G$, a contradiction. If $d_G(u_i)=2$ for any $i\in \{1,2,\dots,p-1\}$, let $ G^*=G-u_0x+u_{p-1}x. $ Obviously, $G^*\in\mathscr{G}_n^k\backslash\{P_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $ i=0,1,2,3$,\, $\phi_G(P_2)=\phi_{G^*}(P_2),\phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)\geq 1$, hence by Lemma~1.2(i), we get $S_4(G)-S_4(G^*)>0,$ i.e., $G^*\prec_s G$, a contradiction. Now we show that $d_G(u_i)=2$, $i=1,2,\dots,p-2$, and $d_G(u_{p-1})=3$. Note that $G\ncong P_n^k$, hence there exists at least one vertex $u_i\, (1 \leq i\leq {p-1})$ on $P$ such that $d_G(u_i)\geq3$. If there exists a vertex $u_i\, (1 \leq i\leq {p-1})$ on $P$ such that $d_G(u_i)\geq 4$, then choose $x\in N_G(u_i)\setminus\{u_{i-1}, u_{i+1}\}$ and let $ G^*=G-u_ix+u_px. $ Obviously, $G^*\in\mathscr{G}_n^k\backslash\{P_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $i=0,1,2,3$,
$\phi_G(P_2)=\phi_{G^*}(P_2),\,\phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)\geq 2$; by Lemma 1.2(i), we have $S_4(G)-S_4(G^*)>0,$ i.e., $G^*\prec_sG$, a contradiction. Hence, $\max\{d_G(u_i),i=1,2,\dots,p-1\}=3$. If $d_G(u_{p-1})=d_G(u_i)=3$ for some $i\in \{1, 2, \ldots, p-2\}$, then choose $z_1$ in $N_G(u_i)\setminus\{u_{i-1},u_{i+1}\}$ and let $ G^*=G-u_iz_1+u_pz_1, $ then $G^*\in\mathscr{G}_n^k\backslash\{P_n^k\}$. Notice that $S_i(G)=S_i(G^*)$ for $i=0,1,2,3$, $\phi_G(P_2)=\phi_{G^*}(P_2),\,\phi_G(C_4)=\phi_{G^*}(C_4)$ and $\phi_G(P_3)-\phi_{G^*}(P_3)=1>0$, hence by Lemma 1.2(i), we get that $S_4(G)-S_4(G^*)>0,$ i.e., $G^*\prec_sG$, a contradiction. If $d_G(u_{p-1})=2,\, d_G(u_i)=3$ for some $i\in \{1, 2, \ldots, p-2\}$, then choose $z_1$ in $N_G(u_i)\setminus\{u_{i-1},u_{i+1}\}$ and let $ G^*=G-u_iz_1+u_{p-1}z_1, $ then it is easy to see that $G^*\in\mathscr{G}_n^k\backslash \{P_n^k\}$. Note that $S_i(G)=S_i(G^*)$ for $i=0,1,2,3,4,5$, $\phi_G(P_2)=\phi_{G^*}(P_2),\,\phi_G(C_4)=\phi_{G^*}(C_4)$, $\phi_G(P_3)=\phi_{G^*}(P_3),\,\phi_G(K_{1,3})=\phi_{G^*}(K_{1,3}),\,\phi_G(U_5)=\phi_{G^*}(U_5),\, \phi_G(B_4)=\phi_{G^*}(B_4),$ $\phi_G(B_5)=\phi_{G^*}(B_5),\,\phi_G(C_3)=\phi_{G^*}(C_3),\,$ $\phi_G(C_6)=\phi_{G^*}(C_6)$ and $\phi_G(P_4)-\phi_{G^*}(P_4)\geq1$, hence by Lemma 1.2(iii), we get that $S_6(G)-S_6(G^*)>0,$ i.e., $G^*\prec_sG$, a contradiction. Hence, we obtain that $d_G(u_0)=3, d_G(u_1)=d_G(u_2)=\ldots=d_G(u_{p-2})=2, d_G(u_{p-1})=3$ and $d_G(u_p)=1.$ For convenience, let $N_G(u_{p-1})\setminus\{u_{p-2},u_p\}=\{z_0\}$. It is easy to see that $z_0$ is a leaf; otherwise $G[\Bbb{E}]$ contains a path $P':=u_0u_1\ldots u_{p-1}z_0\ldots z_t$, where $z_t$ is a leaf, and it is routine to check that the length of $P'$ is greater than that of $P$, a contradiction. Therefore, $d_G(z_0)=1$, as desired. \end{proof} Based on Claim 2, Theorem 3.2 follows immediately. \end{proof} \end{document}
\begin{document} \sloppy \begin{center} \textbf{Fundamental solutions for a class of multidimensional elliptic \\ equations with several singular coefficients}\\[5pt] \textbf{Ergashev T.G.\\} { Institute of Mathematics, Uzbek Academy of Sciences, Tashkent, Uzbekistan. \\ {\verb [email protected] }}\\ \end{center} \begin{quote} The main result of the present paper is the construction of fundamental solutions for a class of multidimensional elliptic equations with several singular coefficients. These fundamental solutions are directly connected with multiple hypergeometric functions, and their investigation requires a decomposition formula that expresses the multivariable hypergeometric function in terms of products of several simpler hypergeometric functions involving fewer variables. In this paper, such a formula is proved in place of a previously known recurrence formula. The order of singularity and other properties of the fundamental solutions that are necessary for solving boundary value problems for degenerate second-order elliptic equations are determined.\\ \textit{\textbf{Key words:}} multidimensional elliptic equation with several singular coefficients; fundamental solutions; multiple hypergeometric functions; decomposition formula; order of the singularity. \end{quote} \section{Introduction} It is known that fundamental solutions play an essential role in studying partial differential equations. The formulation and solution of many local and non-local boundary value problems are based on these solutions. Moreover, fundamental solutions appear as potentials, for instance, as simple-layer and double-layer potentials in the theory of potentials. The explicit form of fundamental solutions makes it possible to study the considered equation in detail. For example, in the works of Barros-Neto and Gelfand \cite{{BN1},{BN2},{BN3}}, fundamental solutions for the Tricomi operator, relative to an arbitrary point in the plane, were explicitly calculated.
We also mention Leray's work \cite{L}, in which a general method, based upon the theory of analytic functions of several complex variables, was described for finding fundamental solutions for a class of hyperbolic linear differential operators with analytic coefficients. In this direction we would like to note the works \cite{{HK},{I}}, where three-dimensional fundamental solutions for elliptic equations were found. In the works \cite{{EH},{M},{U}}, fundamental solutions for a class of multidimensional degenerate elliptic equations with a spectral parameter were constructed. The solutions found can be applied to solving some boundary value problems \cite{{Erg},{GC},{Kar},{KN},{SH7},{SH8},{SHCh}}. Let us consider the generalized Helmholtz equation with several singular coefficients \begin{equation}\label{e1} L_{(\alpha)}^{m} (u):={\sum\limits_{i = 1}^{m} {u_{x_{i} x_{i}}} } + {\sum\limits_{j = 1}^{n} {{\frac{{2\alpha _{j}}} {{x_{j}}} }}} u_{x_{j}} = 0 \end{equation} in the domain $R^{n+}_m:=\left\{ \left(x_1,...,x_m\right):x_1>0,...,x_n>0\right\}\,$, where $m$ is the dimension of the Euclidean space and $n$ is the number of singular coefficients of equation (\ref{e1}); $m \ge 2,0 \le n \le m$; the $\alpha_j$ are real constants with $0<2\alpha_j<1, $ $j=1,...,n$; $(\alpha)=(\alpha_1,...,\alpha_n) $. Various modifications of equation (\ref{e1}) in the two- and three-dimensional cases were considered in many papers \cite{{A},{F},{H},{HK},{K},{MC},{R},{UrK},{Wt},{Wn4},{Wn5}}. Fundamental solutions for elliptic equations with singular coefficients are directly connected with hypergeometric functions. Therefore, basic properties of hypergeometric functions such as decomposition formulas, integral representations, formulas of analytic continuation, and formulas of differentiation are necessary for studying fundamental solutions.
Since the aforementioned properties of the hypergeometric functions of Gauss, Appell, and Kummer were known \cite{AP}, investigations of elliptic equations with one or two singular coefficients were successful. In the paper \cite{HK}, when finding and studying the fundamental solutions of equation (\ref{e1}) for $m = 3$, an important role was played by the decomposition formula of Hasanov and Srivastava \cite{{HS6},{HS7}}; however, the recurrent character of this formula did not allow further advancement in the direction of increasing the number of singular coefficients. In the present paper we construct the $2^n$ fundamental solutions of equation (\ref{e1}) in explicit form, and we prove a new formula expanding the Lauricella hypergeometric function of several variables in terms of simple Gauss functions, with which it is possible to show that the fundamental solutions found have a singularity of order $1/r^{m-2}$ as $r \to 0.$ The plan of this paper is as follows. In Section 2 we briefly give some preliminary information, which will be used later. We transform the recurrence decomposition formula of Hasanov and Srivastava \cite{HS6} into a form convenient for further research. Some constructive formulas for the operator $L_{(\alpha)}^{m}$ are also given. In Section 3 we describe the method of finding fundamental solutions for the considered equation, and in Section 4 we show what order of singularity the found solutions have. \section{Preliminaries} Below we give the definition of the Pochhammer symbol and some formulas for the Gauss hypergeometric functions of one and two variables and the Lauricella hypergeometric functions of three and more variables, which will be used in the next section.
The symbol $\left( {\kappa} \right)_{\nu} $ denotes the general Pochhammer symbol, or the shifted factorial, since $\left( {1} \right)_{l} = l!$ $ \left( {l \in N \cup \{0\};\,\,N: = \{1,2,3,...\}} \right),$ which is defined (for $\kappa ,\nu \in C)$, in terms of the familiar Gamma function, by $$ (\kappa )_{\nu} : = {\frac{{\Gamma (\kappa + \nu )}}{{\Gamma (\kappa )}}} = {\left\{ {{\begin{array}{*{20}c} {1\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(\nu = 0;\kappa \in C\backslash \{0\})} \\ {\kappa (\kappa + 1)...(\kappa + l - 1)\,\,\,\,\,(\nu = l \in N;\kappa \in C),} \\ \end{array}}} \right.} $$ \noindent it being understood conventionally that $\left( {0} \right)_{0} : = 1$ and assumed tacitly that the $\Gamma$-quotient exists. The function $$F\left[ {{\begin{array}{*{20}c} {a,b;} \\ {c;} \\ \end{array}} x} \right]=\sum\limits_{k=0}^\infty\frac{(a)_k(b)_k}{k!(c)_k}x^k,\,\,|x|<1$$ is known as the Gauss hypergeometric function, and the equality \begin{equation}\label{e21} F\left[ {{\begin{array}{*{20}c} {a,b;} \\ {c;} \\ \end{array}} 1} \right]=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}, c\neq0,-1,-2,..., \textrm{Re}(c-a-b)>0 \end{equation} holds \cite[ p.73,(14)]{Erd}. Moreover, the following transformation formula \cite[ p.76,(22)]{Erd} \begin{equation}\label{e22}F\left[ {{\begin{array}{*{20}c} {a,b;} \\ {c;} \\ \end{array}} x} \right]=(1-x)^{-b}F\left[ {{\begin{array}{*{20}c} {c-a,b;} \\ {c;} \\ \end{array}} \frac{x}{x-1}} \right] \end{equation} is valid. The Lauricella hypergeometric function of $n$ variables has the form \cite[pp.114-115(1),(5)]{AP} (see also \cite[p.33]{SK}) \begin{equation}\label{e23}F^{(n)}_A \left[ {{\begin{array}{*{20}c} {a,b_1,...,b_n;} \\ {c_1,...,c_n;} \\ \end{array}} x_1,...,x_n} \right] =\sum\limits_{m_1,...,m_n=0}^\infty\frac{(a)_{m_1+...+m_n}(b_1)_{m_1}...(b_n)_{m_n}}{m_1!...m_n!(c_1)_{m_1}...(c_n)_{m_n}}x_1^{m_1}...x_n^{m_n}, \end{equation} where $|x_1|+...+|x_n|<1, n \in {\rm N}$.
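The identities above can be spot-checked numerically; the following is a minimal Python sketch, with illustrative parameter values (all choices of $a,b,c,x$ here are assumptions), that verifies the Gauss summation formula (\ref{e21}) and the transformation formula (\ref{e22}) by truncated series:

```python
from math import gamma

def poch(a, k):
    """Pochhammer symbol (a)_k = a(a+1)...(a+k-1), with (a)_0 = 1."""
    p = 1.0
    for i in range(k):
        p *= a + i
    return p

def gauss_f(a, b, c, x, terms=2000):
    """Truncated Gauss series F(a,b;c;x); valid for |x| < 1,
    and for x = 1 when Re(c-a-b) > 0 (convergence is then slow)."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (a + k) * (b + k) / ((1.0 + k) * (c + k)) * x
    return s

print(poch(1.0, 5))  # (1)_5 = 5! = 120

a, b, c = 0.3, 0.2, 1.9  # illustrative values with c - a - b > 0

# Gauss summation: F(a,b;c;1) = Gamma(c)Gamma(c-a-b)/(Gamma(c-a)Gamma(c-b))
lhs = gauss_f(a, b, c, 1.0, terms=200000)
rhs = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))

# Transformation: F(a,b;c;x) = (1-x)^(-b) F(c-a,b;c;x/(x-1))
x = 0.4
left = gauss_f(a, b, c, x)
right = (1 - x) ** (-b) * gauss_f(c - a, b, c, x / (x - 1))

print(abs(lhs - rhs), abs(left - right))  # both small
```

The term-by-term recurrence avoids large Gamma quotients and is numerically stable for the parameter ranges used here.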
For a given multivariable function, it is useful to find a decomposition formula which would express the multivariable hypergeometric function in terms of products of several simpler hypergeometric functions involving fewer variables. In the case of two variables, for the Appell function \begin{equation*}F_2 \left[ {{\begin{array}{*{20}c} {a,b_1,b_2;} \\ {c_1,c_2;} \\ \end{array}} x,y} \right] =\sum\limits_{i,j=0}^\infty\frac{(a)_{i+j}(b_1)_{i}(b_2)_{j}}{i!j!(c_1)_{i}(c_2)_{j}}x^{i}y^{j} \end{equation*} the expansion formula \cite{BC1} \begin{equation}\label{e25}F_2 \left[ {{\begin{array}{*{20}c} {a,b_1,b_2;} \\ {c_1,c_2;} \\ \end{array}} x,y} \right] =\sum\limits_{k=0}^\infty\frac{(a)_k(b_1)_k(b_2)_k}{k!(c_1)_{k}(c_2)_{k}}x^{k}y^{k}F\left[ {{\begin{array}{*{20}c} {a+k,b_1+k;} \\ {c_1+k;} \\ \end{array}} x} \right]F\left[ {{\begin{array}{*{20}c} {a+k,b_2+k;} \\ {c_2+k;} \\ \end{array}} y} \right] \end{equation} was known. Following the works \cite{{BC1},{BC2}}, Hasanov and Srivastava \cite{HS6} found the following decomposition formula for the Lauricella function of three variables \begin{equation}\label{e26}\begin{array}{l}F^{(3)}_A \left[ {{\begin{array}{*{20}c} {a,b_1,b_2,b_3;} \\ {c_1,c_2,c_3;} \\ \end{array}} x,y,z} \right] ={\sum\limits_{i,j,k=0}^\infty\frac{(a)_{i+j+k}(b_1)_{j+k}(b_2)_{i+k}(b_3)_{i+j}}{i!j!k!(c_1)_{j+k}(c_2)_{i+k}(c_3)_{i+j}}} \\\cdot x^{j+k}y^{i+k}z^{i+j} F\left[ {{\begin{array}{*{20}c} {a+j+k,b_1+j+k;} \\ {c_1+j+k;} \\ \end{array}} x} \right]\\ \cdot F\left[ {{\begin{array}{*{20}c} {a+i+j+k,b_2+i+k;} \\ {c_2+i+k;} \\ \end{array}} y} \right]F\left[ {{\begin{array}{*{20}c} {a+i+j+k,b_3+i+j;} \\ {c_3+i+j;} \\ \end{array}} z} \right]\end{array} \end{equation} and they proved that for all $n\in N\backslash\{1\}$ the following recurrence formula \cite {HS6} holds: \begin{equation} \label{e27} \begin{array}{l} F_{A}^{(n)}\left[ {{\begin{array}{*{20}c} {a,b_1,...,b_n;} \\ {c_1,...,c_n;} \\ \end{array}} x_1,...,x_n} \right] = {\sum\limits_{m_{2} ,...,m_{n} = 0}^{\infty}
{{\frac{{(a)_{m_{2} + \cdot \cdot \cdot + m_{n}} (b_{1} )_{m_{2} + \cdot \cdot \cdot + m_{n}} (b_{2} )_{m_{2}} \cdot \cdot \cdot (b_{n} )_{m_{n}}} } {{m_{2} ! \cdot \cdot \cdot m_{n} !(c_{1} )_{m_{2} + \cdot \cdot \cdot + m_{n}} (c_{2} )_{m_{2}} \cdot \cdot \cdot (c_{n} )_{m_{n}}} } }}} \\ \cdot x_{1}^{m_{2} + \cdot \cdot \cdot + m_{n}} x_{2}^{m_{2}} \cdot \cdot \cdot x_{n}^{m_{n}} F\left[ {{\begin{array}{*{20}c} {a + m_{2} + \cdot \cdot \cdot + m_{n},b_{1} + m_{2} + \cdot \cdot \cdot + m_{n};} \\ {c_{1} + m_{2} + \cdot \cdot \cdot + m_{n};} \\ \end{array}} x_1} \right] \\ \cdot F_{A}^{(n - 1)} \left[ {{\begin{array}{*{20}c} {a + m_{2} + \cdot \cdot \cdot + m_{n} ,b_{2} + m_{2} ,...,b_{n} + m_{n} ;} \\ c_{2} + m_{2} ,....,c_{n} + m_{n} ; \\ \end{array}} x_{2} ,...,x_{n}} \right]. \\ \end{array} \end{equation} Further study of the properties of the Lauricella function (\ref{e23}) showed that the formula (\ref{e27}) can be reduced to a more convenient form. {\textbf{Lemma.} \textit{The following formula holds true for $n\in N$ \begin{equation} \label{e28}\begin{array}{l} {F_{A}^{(n)} {\left[ {{\begin{array}{*{20}c}{a,b_{1} ,....,b_{n} ;} \\ {c_{1} ,....,c_{n} ;} \\ \end{array}} x_{1} ,...,x_{n}} \right]} = {\sum\limits_{{\mathop {m_{i,j} = 0}\limits_{(2 \le i \le j \le n)} }}^{\infty} {{\frac{{(a)_{N_{2} (n,n)}}} {{{\mathop {m_{2,2} !m_{2,3} ! \cdot \cdot \cdot m_{i,j} ! \cdot \cdot \cdot m_{n,n} !}\limits_{(2 \le i \le j \le n)}}} }}}}} \\ \cdot{\prod\limits_{k = 1}^{n} {{ {{\frac{{(b_{k} )_{M_{2} (k,n)} }}{{(c_{k} )_{M_{2} (k,n)}}} x_{k}^{M_{2} (k,n)} F\left[{\begin{array}{*{20}c} {a + N_{2} (k,n),b_{k} + M_{2} (k,n);} \\ c_{k} + M_{2} (k,n); \\ \end{array} x_{k}} \right]} } }}}, \end{array} \end{equation} where $$ M_{l} (k,n) = {\sum\limits_{i = l}^{k} {m_{i,k} +}} {\sum\limits_{i = k + 1}^{n} {m_{k + 1,i}}},\,\, \quad N_{l} (k,n) = {\sum\limits_{i = l}^{k + 1} {{\sum\limits_{j = i}^{n} {m_{i,j}}} } } , \, l \in N. $$ } \textbf{Proof}.
We carry out the proof by the method of mathematical induction. The equality (\ref{e28}) in the case $n=1$ is obvious. Let $n = 2$. Since $M_2(1,2)=M_2(2,2)=N_2(1,2)=N_2(2,2)=m_{2,2},$ we obtain the formula (\ref{e25}). For the sake of completeness, we also check the formula (\ref{e28}) for yet another value of $n$. Let $n=3.$ In this case $$M_2(1,3)=m_{2,2}+m_{2,3},\,\, M_2(2,3)=m_{2,2}+m_{3,3},\,\, M_2(3,3)=m_{2,3}+m_{3,3},$$ $$N_2(1,3)=m_{2,2}+m_{2,3},\,\, N_2(2,3)= N_2(3,3)=m_{2,2}+m_{2,3}+m_{3,3}.$$ For brevity, making the substitutions $m_{2,2}:=i,\,\,m_{2,3}:=j,\,\,m_{3,3}:=k$, we obtain the formula (\ref{e26}). So the formula (\ref{e28}) works for $n=1,$ $n=2$ and $n=3$. Now we assume that equality (\ref{e28}) holds for $n = s$; that is, \begin{equation} \label{e29} \begin{array}{l} F_{A}^{(s)} {\left[ \begin{array}{*{20}c}{a,b_{1} ,....,b_{s} ;} \\ c_{1} ,....,c_{s} ; \\ \end{array} x_{1} ,...,x_{s} \right]} = {\sum\limits_{{\mathop {m_{i,j} = 0}\limits_{(2 \le i \le j \le s)} }}^{\infty} {{\frac{{(a)_{N_{2} (s,s)}}} {{{\mathop {m_{2,2} !m_{2,3} ! \cdot \cdot \cdot m_{i,j} ! \cdot \cdot \cdot m_{s,s} !}\limits_{(2 \le i \le j \le s)}}} }}}} \\ \cdot {\prod\limits_{k = 1}^{s} {{{{\frac{{(b_{k} )_{M_{2} (k,s)} }}{{(c_{k} )_{M_{2} (k,s)}}} }x_{k}^{M_{2} (k,s)} F\left[ \begin{array}{*{20}c}{a + N_{2} (k,s),b_{k} + M_{2} (k,s);} \\ c_{k} + M_{2} (k,s); \\ \end{array} x_{k} \right]} }}} . \\ \end{array} \end{equation} Let $n=s+1.$ We prove that the following formula \begin{equation} \label{e210} \begin{array}{l} F_{A}^{(s + 1)} {\left[ \begin{array}{*{20}c}{a,b_{1} ,....,b_{s+1} ;} \\ c_{1} ,....,c_{s+1} ; \\ \end{array} x_{1} ,...,x_{s+1} \right]} = {\sum\limits_{{\mathop {m_{i,j} = 0}\limits_{(2 \le i \le j \le s + 1)} }}^{\infty} {{\frac{{(a)_{N_{2} (s + 1,s + 1)}}} {{{\mathop {m_{2,2} !m_{2,3} ! \cdot \cdot \cdot m_{i,j} !
\cdot \cdot \cdot m_{s + 1,s + 1} !}\limits_{(2 \le i \le j \le s + 1)}}} }}}} \\ \cdot {\prod\limits_{k = 1}^{s + 1} {{ {{\frac{{(b_{k} )_{M_{2} (k,s + 1)} }}{{(c_{k} )_{M_{2} (k,s + 1)}}} }x_{k}^{M_{2} (k,s + 1)} F\left[ {{\begin{array}{*{20}c} {a + N_{2} (k,s + 1),b_{k} + M_{2} (k,s + 1);} \\ {c_{k} + M_{2} (k,s + 1);} \\ \end{array}} x_{k}} \right]} }}} \\ \end{array} \end{equation} is valid. We write the Hasanov--Srivastava formula (\ref{e27}) in the form \begin{equation} \label{e211} \begin{array}{l} F_{A}^{(s + 1)} {\left[ {{\begin{array}{*{20}c}{a,b_{1} ,....,b_{s + 1} ;} \\ { c_{1} ,....,c_{s + 1};} \\ \end{array}} x_{1} ,...,x_{s + 1}} \right]} \\ = {\sum\limits_{m_{2,2} ,...,m_{2,s + 1} = 0}^{\infty} {{\frac{{(a)_{N_2(1,s+1)} (b_{1} )_{M_2(1,s+1)} (b_{2} )_{m_{2,2}} \cdot \cdot \cdot (b_{s + 1} )_{m_{2,s + 1}}} } {{m_{2,2} ! \cdot \cdot \cdot m_{2,s + 1} !(c_{1} )_{M_2(1,s+1)} (c_{2} )_{m_{2,2}} \cdot \cdot \cdot (c_{s + 1})_{m_{2,s + 1}}} } }}} \\ \cdot x_{1}^{M_2(1,s+1)} x_{2}^{m_{2,2}} \cdot \cdot \cdot x_{s + 1}^{m_{2,s + 1}} F\left[\begin{array}{*{20}c}{a + N_2(1,s+1) ,b_{1} + M_2(1,s+1) ;} \\ c_{1} +M_2(1,s+1); \\ \end{array} x_{1} \right] \\ \cdot F_{A}^{(s)} {\left[ {{\begin{array}{*{20}c} {a + N_2(1,s+1) ,b_{2} + m_{2,2} ,...,b_{s + 1} + m_{2,s + 1} ;} \\ {c_{2} + m_{2,2} ,....,c_{s + 1} + m_{2,s + 1}}; \\ \end{array}} x_{2} ,...,x_{s + 1}} \right]}. \\ \end{array} \end{equation} By virtue of the formula (\ref{e29}) we have \begin{equation} \label{e212} \begin{array}{l} F_{A}^{(s)} \left[ \begin{array}{*{20}c}{a + N_2(1,s+1) ,b_{2} + m_{2,2} ,...,b_{s + 1} + m_{2,s + 1} ;} \\ c_{2} + m_{2,2} ,...,c_{s + 1} + m_{2,s + 1} ; \\ \end{array} x_{2} ,...,x_{s + 1} \right] \\ = {\sum\limits_{{\mathop {m_{i,j} = 0}\limits_{(3 \le i \le j \le s + 1)} }}^{\infty} {{\frac{{\left( {a +N_2(1,s+1)} \right)_{N_{3} (s + 1,s + 1)}}} {{{\mathop {m_{3,3} !m_{3,4} ! \cdot \cdot \cdot m_{i,j} !
\cdot \cdot \cdot m_{s + 1,s + 1} !}\limits_{(3 \le i \le j \le s + 1)} }}}}}} \prod\limits_{k = 2}^{s + 1} {\frac{{(b_{k} + m_{2,k} )_{M_{3} (k,s + 1)}}} {{(c_{k} + m_{2,k} )_{M_{3} (k,s + 1)}}} }x_{k}^{M_{3} (k,s + 1)} \\ \cdot F\left[ {{\begin{array}{*{20}c} {a + N_2(1,s+1) + N_{3} (k,s + 1),b_{k} + m_{2,k} + M_{3} (k,s + 1);} \\ {c_{k} + m_{2,k} + M_{3} (k,s + 1);} \\ \end{array}} x_{k}} \right] . \\ \end{array} \end{equation} Substituting from (\ref{e212}) into (\ref{e211}) we obtain \begin{equation*} \begin{array}{l} F_{A}^{(s + 1)} {\left[ {a,b_{1} ,....,b_{s + 1} ;c_{1} ,....,c_{s + 1} ;x_{1} ,...,x_{s + 1}} \right]} \\ = {\sum\limits_{{\mathop {m_{i,j} = 0}\limits_{(2 \le i \le j \le s + 1)} }}^{\infty} {{\frac{{\left( {a} \right)_{N_2(1,s+1) + N_{3} (s + 1,s + 1)}}} {{{\mathop {m_{2,2} !m_{2,3} ! \cdot \cdot \cdot m_{i,j} ! \cdot \cdot \cdot m_{s + 1,s + 1} !}\limits_{(2 \le i \le j \le s + 1)}}} }}}} \prod\limits_{k = 1}^{s + 1} {\frac{{(b_{k} )_{m_{2,k} + M_{3} (k,s + 1)}}} {{(c_{k} )_{m_{2,k} + M_{3} (k,s + 1)}}} }x_{k}^{m_{2,k} + M_{3} (k,s + 1)} \\ \cdot F\left[ {{\begin{array}{*{20}c} {a + N_2(1,s+1) + N_{3} (k,s + 1),} {b_{k} + m_{2,k} + M_{3} (k,s + 1);} \\ {c_{k} + m_{2,k} + M_{3} (k,s + 1)}; \\ \end{array}} x_{k}} \right]. \\ \end{array} \end{equation*} Further, by virtue of the following obvious equalities $$ N_2(1,s+1) + N_{3} (k,s + 1) = N_{2} (k,s + 1),\,\,1\leq k\leq s+1, s\in N, $$ $$ m_{2,k} + M_{3} (k,s + 1) = M_{2} (k,s + 1), 1\leq k\leq s+1, s\in N, $$ we finally find the equality (\ref{e210}). Q.E.D. \textbf{Remark}. Let $\delta=(\delta_1,...,\delta_n),$ where $\delta_i\in \{0,1\}$, $1\leq i \leq n$. 
Let us denote $$(\alpha):=(\alpha_1,...,\alpha_n),\,\, (1-\alpha):=(\alpha_1+(1-2\alpha_1)\delta_1,...,\alpha_n+(1-2\alpha_n)\delta_n).$$ Then the following constructive formulas $$ L^m_{(\alpha)}\left( x_1^{\delta_1(1-2\alpha_1)}\cdot\cdot\cdot x_n^{\delta_n(1-2\alpha_n)}\cdot u \right)= x_1^{\delta_1(1-2\alpha_1)}\cdot\cdot\cdot x_n^{\delta_n(1-2\alpha_n)}L_{{(1-\alpha)}}^m(u)$$ are true. We note that there are $2^n$ such equalities. \section{Fundamental solutions} Consider equation (\ref{e1}) in $R_{m}^{n +} .$ Let $x:=\left(x_1,...,x_m\right)$ be any point and $x_{0}:=\left(x_{01},...,x_{0m}\right) $ be any fixed point of $R_{m}^{n +}.$ We search for a solution of (\ref{e1}) in the form \begin{equation} \label{e31} u(x,x_0) = P(r)\omega(\xi), \end{equation} where $$ \xi = \left( \xi_1,\xi_2,...,\xi_n \right),\,\, \alpha = \alpha_1+...+\alpha_n - 1 + {\frac{{m}}{{2}}}, $$ $$ P(r) = \left( r^{2} \right)^{ - \alpha}, \,\,\,r^{2} = {\sum\limits_{i = 1}^{m} {(x_{i} - x_{0i} )^{2}}} , $$ $$ r_{k}^{2} = (x_{k} + x_{0k} )^{2} + {\sum\limits_{i = 1,i \ne k}^{m} {(x_{i} - x_{0i} )^{2}}}, \,\,\xi _{k} = {\frac{{r^{2} - r_{k}^{2}}} {{r^{2}}}}, \quad k = 1,2,...,n.
$$ We calculate all the necessary derivatives and substitute them into equation (\ref{e1}): \begin{equation} \label{e32} {\sum\limits_{k = 1}^{n} {A_{k} \omega _{\xi_{k} \xi _{k}}} } + {\sum\limits_{k = 1}^{n} {{\sum\limits_{l = k + 1}^{n} {B_{k,l} \omega _{\xi _{k} \xi _{l}}} } } } + {\sum\limits_{k = 1}^{n} {C_{k} \omega _{\xi _{k}}} }+ D\omega = 0, \end{equation} where $$A_k=P\sum\limits_{i=1}^m\left(\frac{\partial \xi_k}{\partial x_i}\right)^2, B_{k,l}=2P\sum\limits_{i=1}^m \frac{\partial \xi_k}{\partial x_i}\frac{\partial \xi_l}{\partial x_i}, \,\,k\neq l, k=1,...,n,$$ $$C_k=P\sum\limits_{i=1}^m\frac{\partial^2 \xi_k}{\partial x_i^2}+2\sum\limits_{i=1}^m \frac{\partial P}{\partial x_i}\frac{\partial \xi_k}{\partial x_i}+2P\sum\limits_{j=1}^n \frac{\alpha_j}{ x_j}\frac{\partial \xi_k}{\partial x_j}, $$ $$D=\sum\limits_{i=1}^m\frac{\partial^2 P}{\partial x_i^2}+2\sum\limits_{j=1}^n \frac{\alpha_j}{ x_j}\frac{\partial P}{\partial x_j}.$$ After straightforward calculations we find \begin{equation} \label{e33} A_{k} = - {\frac{{4P(r)}}{{r^{2}}}}{\frac{{x_{k}}} {{x_{0k}}} }\xi _{k} (1 - \xi_{k} ), \end{equation} \begin{equation} \label{e34} B_{k,l} = {\frac{{4P(r)}}{{r^{2}}}}\left( {{\frac{{x_{0k}}} {{x_{k}}} } + {\frac{{x_{0l}}} {{x_{l}}} }} \right)\xi _{k} \xi _{l} ,\,\,\,k \ne l,\quad l = 1,...,n, \end{equation} \begin{equation} \label{e35} C_{k} = - {\frac{{4P(r)}}{{r^{2}}}}{\left\{ { - \xi_{k} {\sum\limits_{j = 1}^{n} {{\frac{{x_{0j}}} {{x_{j}}} }\alpha _{j}}} + {\frac{{x_{0k} }}{{x_{k}}} }{\left[ {2\alpha _{k} - \alpha \xi_{k}} \right]}} \right\}}, \end{equation} \begin{equation} \label{e36} D = {\frac{{4\alpha P(r)}}{{r^{2}}}}{ { {\sum\limits_{j = 1}^{n} {{\frac{{x_{0j}}} {{x_{j}}} }\alpha _{j}}} } } .\end{equation} Substituting equalities (\ref{e33})-(\ref{e36}) into (\ref{e32}), we obtain the following system of hypergeometric equations of Lauricella \cite[ p.117]{AP}, which has $2^n$ linearly independent solutions \cite[ p.118]{AP}.
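The closed form (\ref{e36}) for the zero-order coefficient, i.e. for $D=\sum_i \partial^2 P/\partial x_i^2+2\sum_j(\alpha_j/x_j)\,\partial P/\partial x_j$, can be spot-checked by central finite differences; a minimal Python sketch (the values of $m$, $n$, $\alpha_j$ and the sample points are illustrative assumptions):

```python
from math import fsum

m, n = 3, 2
alpha_j = (0.2, 0.3)                 # singular coefficients, 0 < 2*alpha_j < 1
alpha = sum(alpha_j) - 1 + m / 2     # exponent in P(r) = (r^2)^(-alpha)
x0 = (0.4, 0.7, 1.1)                 # fixed point x_0
x  = (0.6, 0.9, 0.8)                 # evaluation point

def P(p):
    r2 = fsum((p[i] - x0[i]) ** 2 for i in range(m))
    return r2 ** (-alpha)

def shifted(p, i, h):
    q = list(p); q[i] += h; return tuple(q)

h = 1e-4
# D by finite differences: Laplacian of P plus the lower-order singular terms
lap = fsum((P(shifted(x, i, h)) - 2 * P(x) + P(shifted(x, i, -h))) / h**2
           for i in range(m))
low = 2 * fsum(alpha_j[j] / x[j]
               * (P(shifted(x, j, h)) - P(shifted(x, j, -h))) / (2 * h)
               for j in range(n))
D_fd = lap + low

# Closed form (36): D = (4*alpha*P/r^2) * sum_j alpha_j * x0_j / x_j
r2 = fsum((x[i] - x0[i]) ** 2 for i in range(m))
D_closed = 4 * alpha * P(x) / r2 * fsum(alpha_j[j] * x0[j] / x[j] for j in range(n))
print(abs(D_fd - D_closed) / abs(D_closed))  # small relative error
```

The same finite-difference pattern can be reused to check (\ref{e33})-(\ref{e35}) by differentiating $\xi_k$ instead of $P$.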
Considering those solutions, from (\ref{e31}) we obtain $2^n$ fundamental solutions of equation (\ref{e1}): \\ \\ $ 1{\left\{ {} \right.} F_A^{(n)} \left[\begin{array}{*{20}c} a,b_1,...,b_n; \\ c_1,...,c_n; \\ \end{array}\xi \right], $\\ $ C_{n}^{1} {\left\{ {{\begin{array}{*{20}c} {(x _{1}x_{01})^{1 - c_{1}} F_A^{(n)} \left[ \begin{array}{*{20}c} a,b_{1} + 1 - c_{1} ,b_{2} ,...,b_{n} ; \\ 2 - c_{1} ,c_{2} ,...,c_{n} ; \\ \end{array}\xi \right]}, \\ {..............................................................................} \\ {(x _{n}x_{0n})^{1 - c_{n}} F_A^{(n)} \left[\begin{array}{*{20}c} a,b_{1} ,...,b_{n - 1} ,b_{n} + 1 - c_{n} ; \\ c_{1} ,...,c_{n - 1} ,2 - c_{n} ; \\ \end{array}\xi \right]}, \\ \end{array}}} \right.} $ \\ $ C_{n}^{2} {\left\{ {{\begin{array}{*{20}c} {(x _{1}x_{01})^{1 - c_{1}} (x _{2}x_{02})^{1 - c_{2}} F_A^{(n)} \left[ \begin{array}{*{20}c} a,b_{1} + 1 - c_{1} ,b_{2} + 1 - c_{2} ,b_{3} ,...,b_{n} ; \\ 2 - c_{1} ,2 - c_{2} ,c_{3} ,...,c_{n} ; \\ \end{array}\xi \right]}, \\ {..............................................................................} \\ {(x _{1}x_{01})^{1 - c_{1}} (x _{n}x_{0n})^{1 - c_{n}} F_A^{(n)} \left[ \begin{array}{*{20}c} a,b_{1} + 1 - c_{1} ,b_{2} ,...,b_{n - 1} ,b_{n} + 1 - c_{n} ; \\ 2 - c_{1} ,c_{2} ,...,c_{n - 1} ,2 - c_{n} ; \\ \end{array}\xi \right]}, \\ {(x _{2}x_{02})^{1 - c_{2}} (x _{3}x_{03})^{1 - c_{3}} F_A^{(n)}\left[ \begin{array}{*{20}c} a,b_{1} ,b_{2} + 1 - c_{2} ,b_{3} + 1 - c_{3} ,b_{4} ,...,b_{n} ; \\ c_{1} ,2 - c_{2} ,2 - c_{3} ,c_{4} ,...,c_{n} ; \\ \end{array}\xi \right]}, \\ {..............................................................................} \\ {(x _{n-1}x_{0\,n-1})^{1 - c_{n - 1}} (x _{n}x_{0n})^{1 - c_{n}} F_A^{(n)} \left[ \begin{array}{*{20}c} a,b_{1} ,...,b_{n - 2} ,b_{n - 1} + 1 - c_{n - 1} ,b_{n} + 1 - c_{n} ; \\ c_{1} ,...,c_{n - 2} ,2 - c_{n - 1} ,2 - c_{n} ; \\ \end{array}\xi \right]}, \\ \end{array}}} \right.} $ \\$$
................................................................. $$ $ 1{\left\{{(x _{1}x_{01})^{1 - c_{1}} \cdot ... \cdot (x _{n}x_{0n})^{1 - c_{n}} F_A^{(n)}\left[ \begin{array}{*{20}c} a,b_{1} + 1 - c_{1} ,...,b_{n} + 1 - c_{n} ; \\ 2 - c_{1} ,...,2 - c_{n} ; \\ \end{array}\xi \right]} \right.}, $ \\ where $$C_n^i=\frac{n!}{i!(n-i)!}, 0\leq i \leq n.$$ It is easy to see that $$1+C_n^1+C_n^2+...+C_n^{n-1}+1=2^n.$$ The fundamental solutions of equation (\ref{e1}) found above can be written in a form which is convenient for further investigation \begin{equation} \label{e37} q_k(x,x_0)=\gamma_k r^{-2\alpha} \prod\limits_{i=1}^n \left(x_ix_{0i}\right)^{\delta_{ki}(1-2\alpha_i)} \cdot F_{A}^{(n)}\left[\begin{array}{*{20}c} \alpha+A_k,B_k; \\ 2B_k; \\ \end{array}\xi\right], \end{equation} where $$\delta_k=\left(\delta_{k1},...,\delta_{kn}\right), \delta_{kj}\in\{0,1\}, \,1\leq j\leq n, \, 1\leq k\leq 2^n, \,k=1+\sum\limits_{j=1}^n\delta_{kj}\cdot2^{(n-j)\delta_{kj}};$$ $$A_k=\sum\limits_{j=1}^n\left(1-2\alpha_j\right)\delta_{kj},\, B_k=\left(\alpha_1+(1-2\alpha_1)\delta_{k1},...,\alpha_n+(1-2\alpha_n)\delta_{kn} \right),$$ and the $\gamma_k$ are constants, which will be determined when we solve boundary-value problems. \section{Singularity properties of fundamental solutions} Let us show that the fundamental solutions (\ref{e37}) have a singularity at $r=0$. In the case $\delta_1=(0,0,...,0)$ we choose the solution $q_1(x,x_0)$ and we use the expansion (\ref{e28}) of the Lauricella hypergeometric function. As a result, the solution $$ q_1(x,x_0)=\gamma_1r^{-2\alpha}F_{A}^{(n)}\left[\begin{array}{*{20}c}\alpha,\alpha_1,...,\alpha_n; \\ 2\alpha_1,...,2\alpha_n; \\ \end{array}\xi\right] $$ can be written as follows \begin{equation} \label{e41} \begin{array}{l} q_1(x,x_0) = \gamma_1r^{-2\alpha}{\sum\limits_{{\mathop {m_{i,j} = 0}\limits_{(2 \le i \le j \le n)} }}^{\infty} {{\frac{{(\alpha)_{N_{2} (n,n)}}} {{{\mathop {m_{2,2} !m_{2,3} ! \cdot \cdot \cdot m_{i,j} !
\cdot \cdot \cdot m_{n,n} !}\limits_{(2 \le i \le j \le n)}}} }}}} \\ \cdot{\prod\limits_{k = 1}^{n} {{{{\frac{{(\alpha_{k} )_{M_{2} (k,n)} }}{{(2\alpha_{k} )_{M_{2} (k,n)}}} }\left(1-\frac{r_k^2}{r^2}\right)^{M_{2} (k,n)} F\left[\begin{array}{*{20}c} \alpha + N_{2} (k,n),\alpha_{k} + M_{2} (k,n); \\ 2\alpha_{k} + M_{2} (k,n); \\ \end{array} 1-\frac{r_k^2}{r^2} \right]} }}} . \\ \end{array} \end{equation} By virtue of formula (\ref{e22}) we rewrite (\ref{e41}) as $$ q_1(x,x_0) = \frac{\gamma_1}{r^{m-2}}\prod\limits_{k = 1}^{n}r^{-2\alpha_k}_k \cdot f_1\left(r^2,r_1^2,...,r_n^2\right), $$ where $$ f_1\left(r^2,r_1^2,...,r_n^2\right)={\sum\limits_{{\mathop {m_{i,j} = 0}\limits_{(2 \le i \le j \le n)} }}^{\infty} {{\frac{{(\alpha)_{N_{2} (n,n)}}} {{{\mathop {m_{2,2} !m_{2,3} ! \cdot \cdot \cdot m_{i,j} ! \cdot \cdot \cdot m_{n,n} !}\limits_{(2 \le i \le j \le n)}}} }}}}$$ $$ \cdot{\prod\limits_{k = 1}^{n} {{ {{\frac{{(\alpha_{k} )_{M_{2} (k,n)} }}{{(2\alpha_{k} )_{M_{2} (k,n)}}} }\left(\frac{r^2}{r_k^2}-1\right)^{M_{2} (k,n)} F\left[\begin{array}{*{20}c} 2\alpha_k-\alpha +M_2(k,n)- N_{2} (k,n),\alpha_{k} + M_{2} (k,n); \\ 2\alpha_{k} + M_{2} (k,n); \\ \end{array} 1-\frac{r^2}{r_k^2} \right]} }}} . $$ Below we show that $f_1\left(r^2,r_1^2,...,r_n^2\right)$ tends to a constant as $r \to 0$. For this purpose we use the equality (\ref{e21}) and the following inequality $$N_2(k,n)-M_2(k,n)=\sum\limits_{i=2}^k\left(\sum\limits_{j=i}^n m_{i,j}-m_{i,k}\right)\geq 0, 1\leq k \leq n \leq m, m>2.$$ Then we get \begin{equation} \label{e42} \lim_{r\rightarrow 0}f_1\left(r^2,r_1^2,...,r_n^2\right)=\frac{1}{\Gamma^n(\alpha)}{\prod\limits_{k = 1}^{n} \frac{\Gamma\left(2\alpha_k\right)\Gamma\left(\alpha-\alpha_k\right)}{\Gamma\left(\alpha_k\right)}}. \end{equation} Expressions (\ref{e41}) and (\ref{e42}) show that the solution $q_1(x,x_0)$ has a singularity of order $r^{2-m}$ as $r \to 0$.
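For $n=1$ (a single singular coefficient) the construction can also be tested directly: the following Python sketch evaluates $q_1(x,x_0)$, normalized by $\gamma_1=1$, through the truncated Gauss series with $\xi_1=-4x_1x_{01}/r^2$, and checks by central finite differences that it satisfies equation (\ref{e1}) at a sample point where $|\xi_1|<1$ (the parameters, sample point and step $h$ are illustrative assumptions):

```python
from math import fsum

m, n = 3, 1
a1 = 0.3                     # alpha_1, with 0 < 2*alpha_1 < 1
alpha = a1 - 1 + m / 2       # alpha = alpha_1 - 1 + m/2
x0 = (0.1, 0.5, 0.3)         # fixed point x_0

def gauss_f(a, b, c, t, terms=60):
    """Truncated Gauss series F(a,b;c;t), |t| < 1."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= (a + k) * (b + k) / ((1.0 + k) * (c + k)) * t
    return s

def q1(p):
    """n = 1 case of the fundamental solution q_1(x, x_0) with gamma_1 = 1."""
    r2 = fsum((p[i] - x0[i]) ** 2 for i in range(m))
    xi1 = -4.0 * p[0] * x0[0] / r2       # xi_1 = (r^2 - r_1^2)/r^2
    return r2 ** (-alpha) * gauss_f(alpha, a1, 2 * a1, xi1)

def shifted(p, i, h):
    q = list(p); q[i] += h; return tuple(q)

# residual of L(u) = sum_i u_{x_i x_i} + (2*alpha_1/x_1) u_{x_1} at a sample point
x, h = (0.12, 2.0, 0.35), 1e-3
lap = fsum((q1(shifted(x, i, h)) - 2 * q1(x) + q1(shifted(x, i, -h))) / h**2
           for i in range(m))
sing = (2 * a1 / x[0]) * (q1(shifted(x, 0, h)) - q1(shifted(x, 0, -h))) / (2 * h)
print(abs(lap + sing))  # close to zero, up to finite-difference error
```

The sample point is chosen so that $\xi_1$ stays small, keeping the direct series valid; for points with $\xi_1<-1$ one would first apply the transformation (\ref{e22}), exactly as in the singularity analysis above.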
One can verify similarly that the solutions $q_k(x,x_0),\,k=2,3,\ldots,2^n$, also tend to infinity of order $r^{2-m}$ as $r \to 0$. It can be checked directly that the constructed functions (\ref{e37}) possess the properties \begin{equation} {\left. {{\frac{{\partial^{\delta_{kj}} q_{k} \left( {x,x_0} \right)}}{{\partial x_{j}^{\delta_{kj}} }}}} \right|}_{x_{j} = 0} = 0, \quad \delta_{kj}\in\{0,1\}, \,1\leq j\leq n, \, 1\leq k\leq 2^n. \end{equation} \begin{center} \textbf{References} \end{center} {\small \begin{enumerate} \bibitem{A} Altin A., Some expansion formulas for a class of singular partial differential equations, Proc. Amer. Math. Soc., 85(1), 1982, 42-46. \bibitem{AP} Appell P. and Kampe de Feriet J., Fonctions Hypergeometriques et Hyperspheriques; Polynomes d'Hermite, Gauthier-Villars, Paris, 1926. \bibitem{BN1} Barros-Neto J.J., Gelfand I.M., Fundamental solutions for the Tricomi operator, Duke Math. J., 98(3), 1999, 465-483. \bibitem{BN2} Barros-Neto J.J., Gelfand I.M., Fundamental solutions for the Tricomi operator II, Duke Math. J., 111(3), 2001, 561-584. \bibitem{BN3} Barros-Neto J.J., Gelfand I.M., Fundamental solutions for the Tricomi operator III, Duke Math. J., 128(1), 2005, 119-140. \bibitem{BC1} Burchnall J.L., Chaundy T.W., Expansions of Appell's double hypergeometric functions, The Quarterly Journal of Mathematics, Oxford, Ser. 11, 1940, 249-270. \bibitem{BC2} Burchnall J.L., Chaundy T.W., Expansions of Appell's double hypergeometric functions (II), The Quarterly Journal of Mathematics, Oxford, Ser. 12, 1941, 112-128. \bibitem{Erd} Erdelyi A., Magnus W., Oberhettinger F. and Tricomi F.G., Higher Transcendental Functions, Vol. I, New York, Toronto and London: McGraw-Hill Book Company, 1953. \bibitem{Erg} Ergashev T.G., The fourth double-layer potential for a generalized bi-axially symmetric Helmholtz equation, Tomsk State University Journal of Mathematics and Mechanics, 50, 2017, 45-56.
\bibitem{EH} Ergashev T.G., Hasanov A., Fundamental solutions of the bi-axially symmetric Helmholtz equation, Uzbek Math. J., 1, 2018, 55-64. \bibitem{F} Fryant A.J., Growth and complete sequences of generalized bi-axially symmetric potentials, J. Differential Equations, 31(2), 1979, 155-164. \bibitem{GC} Golberg M.A., Chen C.S., The method of fundamental solutions for potential, Helmholtz and diffusion problems, in: Golberg M.A. (Ed.), Boundary Integral Methods--Numerical and Mathematical Aspects, Comput. Mech. Publ., 1998, 103-176. \bibitem{H} Hasanov A., Fundamental solutions of bi-axially symmetric Helmholtz equation, Complex Variables and Elliptic Equations, 52(8), 2007, 673-683. \bibitem{HK} Hasanov A., Karimov E.T., Fundamental solutions for a class of three-dimensional elliptic equations with singular coefficients, Applied Mathematics Letters, 22, 2009, 1828-1832. \bibitem{HS6} Hasanov A., Srivastava H., Some decomposition formulas associated with the Lauricella function $F_A^{(r)}$ and other multiple hypergeometric functions, Applied Mathematics Letters, 19(2), 2006, 113-121. \bibitem{HS7} Hasanov A., Srivastava H., Decomposition formulas associated with the Lauricella multivariable hypergeometric functions, Computers and Mathematics with Applications, 53(7), 2007, 1119-1128. \bibitem{I} Itagaki M., Higher order three-dimensional fundamental solutions to the Helmholtz and the modified Helmholtz equations, Eng. Anal. Bound. Elem., 15, 1995, 289-293. \bibitem{Kar} Karimov E.T., A boundary-value problem for 3-D elliptic equation with singular coefficients, Progress in Analysis and Its Applications, 2010, 619-625. \bibitem{KN} Karimov E.T., Nieto J.J., The Dirichlet problem for a 3D elliptic equation with two singular coefficients, Computers and Mathematics with Applications, 62, 2011, 214-224. \bibitem{K} Kumar P., Approximation of growth numbers generalized bi-axially symmetric potentials, Fasc. Math., 35, 2005, 51-60.
\bibitem{L} Leray J., Un prolongement de la transformation de Laplace qui transforme la solution unitaire d'un operateur hyperbolique en sa solution elementaire (probleme de Cauchy, IV), Bull. Soc. Math. France, 90, 1962, 39-156. \bibitem{M} Mavlyaviev R.M., Construction of fundamental solutions to B-elliptic equations with minor terms, Russian Mathematics, 61(6), 2017, 60-65. Original Russian text published in Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika, 2017, No. 6, 70-75. \bibitem{MC} McCoy P.A., Polynomial approximation and growth of generalized axisymmetric potentials, Canad. J. Math., 31(1), 1979, 49-59. \bibitem{R} Radzhabov N., Integral representations and boundary value problems for an equation of Helmholtz type with several singular surfaces, in: Analytic Methods in the Theory of Elliptic Equations, Nauka, Sibirsk. Otdel., Novosibirsk, 1982, 34-46. \bibitem{SH7} Salakhitdinov M.S., Hasanov A., To the theory of the multidimensional equation of Gellerstedt, Uzbek Math. Journal, 2007, No. 3, 95-109. \bibitem{SH8} Salakhitdinov M.S., Hasanov A., A solution of the Neumann-Dirichlet boundary-value problem for generalized bi-axially symmetric Helmholtz equation, Complex Variables and Elliptic Equations, 53(4), 2008, 355-364. \bibitem{SHCh} Srivastava H.M., Hasanov A., Choi J., Double-layer potentials for a generalized bi-axially symmetric Helmholtz equation, Sohag J. Math., 2(1), 2015, 1-10. \bibitem{SK} Srivastava H.M., Karlsson P.W., Multiple Gaussian Hypergeometric Series, New York, Chichester, Brisbane and Toronto: Halsted Press, 1985, 428 p. \bibitem{U} Urinov A.K., On fundamental solutions for the some type of the elliptic equations with singular coefficients, Scientific Records of Ferghana State University, 1, 2006, 5-11. \bibitem{UrK} Urinov A.K., Karimov E.T., On fundamental solutions for 3D singular elliptic equations with a parameter, Applied Mathematics Letters, 24, 2011, 314-319.
\bibitem{Wt} Weinacht R.J., Fundamental solutions for a class of singular equations, Contrib. Differential Equations, 3, 1964, 43-55. \bibitem{Wn4} Weinstein A., Discontinuous integrals and generalized potential theory, Trans. Amer. Math. Soc., 63, 1946, 342-354. \bibitem{Wn5} Weinstein A., Generalized axially symmetric potential theory, Bull. Amer. Math. Soc., 59, 1953, 20-38. \end{enumerate} } \end{document}
\begin{document} \title{Universal robust quantum gates by geometric correspondence of noisy quantum evolution} \author{Yong-Ju Hai} \affiliation{Shenzhen Institute for Quantum Science and Engineering (SIQSE), Southern University of Science and Technology, Shenzhen, P. R. China} \affiliation{Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China} \author{Junning Li} \affiliation{Department of Physics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong SAR, China} \affiliation{International Quantum Academy (SIQA), and Shenzhen Branch, Hefei National Laboratory, Futian District, Shenzhen, P. R. China} \author{Junkai Zeng} \affiliation{Shenzhen Institute for Quantum Science and Engineering (SIQSE), Southern University of Science and Technology, Shenzhen, P. R. China} \affiliation{International Quantum Academy (SIQA), and Shenzhen Branch, Hefei National Laboratory, Futian District, Shenzhen, P. R. China} \author{Dapeng Yu} \affiliation{Shenzhen Institute for Quantum Science and Engineering (SIQSE), Southern University of Science and Technology, Shenzhen, P. R. China} \affiliation{Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China} \affiliation{International Quantum Academy (SIQA), and Shenzhen Branch, Hefei National Laboratory, Futian District, Shenzhen, P. R. China} \author{Xiu-Hao Deng} \email{[email protected]} \affiliation{Shenzhen Institute for Quantum Science and Engineering (SIQSE), Southern University of Science and Technology, Shenzhen, P. R. China} \affiliation{International Quantum Academy (SIQA), and Shenzhen Branch, Hefei National Laboratory, Futian District, Shenzhen, P. R. China} \begin{abstract} The next leap in quantum technologies and quantum computing relies on precise and robust control over noisy quantum systems.
For the first time, our theory uncovers an essential correspondence between driven noisy quantum evolution and multiplex space curves relating to various noises. Each two-state system's noisy dynamics corresponds to a diagram formed by these curves. The curves' properties provide a quantitative geometric metric of the evolution error. Conversely, the geometric correspondence is an explicit model for engineering quantum gates robust against various noises. It further gives criteria to identify whether a noisy quantum system's evolution could be controlled robustly. Our analytic-numerical hybrid protocol enables the construction of universal robust quantum gates with very simple and smooth pulses for any given gate time. Based on realistic models of semiconductor spin qubits and superconducting transmons, our numerical simulations demonstrate plateaus of gate fidelity above the fault-tolerance threshold over a broad range of noise strength. These solid and promising results show that our universal robust control pulses are ready for experiments. This work thus sheds light on the general robust quantum control problem. \end{abstract} \maketitle \section{Introduction} Recent breakthroughs have brought the promise of applicable quantum technologies closer to present-day reality \cite{arute2019quantum, wu2021strong, krinner2022realizing, preskill2022physics}. One of the critical ingredients triggering these advances is precise and robust quantum control \cite{preskill2018quantum, glaser2015training, koch2016controlling, koch2022quantum}.
The quality of quantum gates is approaching the fault-tolerant threshold in the isolated characterization of up-to-date quantum processors \cite{barends2014superconducting, xue2022quantum, ballance2016high, bluvstein2022quantum}; however, it remains a crucial challenge in multi-qubit systems because the coupling to classical or quantum environments induces quantum decoherence and gate errors. Robust quantum gates are engineered in the feed-forward flavor to correct the errors dynamically \cite{khodjasteh2009dynamically, zeng2018general}, and hence play an important role in the integration of excessive qubits \cite{xiang2020simultaneous,terhal2020towards, cai2021bosonic, manovitz2022trapped,zhou2022quantum}. Practical robust gates are commonly synthesized with pulse-shaping techniques leveraging numerical optimization \cite{khaneja2005optimal,caneva2011chopped,yang2019silicon,figueiredo2021engineering,ribeiro2017systematic,song2022optimizing}, whose growing computational cost hinders further applications. Therefore, a fundamental solution to this issue relies on efficient models of the noisy quantum dynamics and an understanding of the origin of robustness. Quantum geometry associated with the adiabatic passage has been found helpful for robust control because the geometric phase is conserved against certain noises \cite{berry1990geometric,zu2014experimental,balakrishnan2004classical,ekert2000geometric,xu2020experimental}. To speed up the control, dynamically corrected gates have been proposed to tackle dephasing noises by mapping to space curves formulated with analytic functions \cite{zeng2018general,zeng2019geometric,barnes2022dynamically}. However, the geometric theory is challenged by some essential problems, and its application to universal robust control confronts many practical difficulties; e.g., the robustness is limited to a single error in a single qubit \cite{zeng2018general}.
Various noises co-exist in realistic systems and are coupled to different system operators. They might originate from the imprecise characterization of quantum chips with excessive parameters, noises during quantum control, limited scalability due to the excessive control lines, unwanted interactions and crosstalk, etc.\ \cite{acharya2022suppressing}. A robust quantum gate should be tolerant to a certain combination of these noises. Therefore, to help engineer universal robust quantum gates with a considerably high tolerance to generic noises, we intend to find a simple, graphical, and generic framework to study noisy quantum evolution. In this manuscript, we demonstrate the essential correspondence between driven quantum dynamics with generic noises and multiplex space curves forming a diagram for studying control errors, which we call the quantum erroneous evolution diagram (QEED). These curves are parametrized in generalized Frenet-Serret frames and hence allow a bijective map with the Hamiltonian. We use the diagrams formed by multiplex curves to quantitatively characterize the driven quantum erroneous evolution subject to generic noises. We show that the diagrams provide quantitative measures of the control robustness to different noises. We give necessary conditions, and conjecture sufficient conditions, to guide general robust control. To make this theoretical framework practical for robust control in experiments, we develop a simple and systematic protocol to search for arbitrary robust control pulses with the simplest waveforms, containing only a few Fourier components. This protocol could be made automatic given only the system parameters, so it is friendly to experiments. We demonstrate that these pulses could be set to arbitrary gate time while maintaining the same robustness. We exploit these pulses to implement universal robust quantum gates.
For the first time, we use realistic multi-qubit models of semiconductor quantum dots and superconducting transmons for the numerical simulations. We obtain high gate-fidelity plateaus above the fault-tolerant threshold over a broad range of noise strength. These results demonstrate the framework's effectiveness and show the promising prospect of applications. \section{Analytical theory} \label{Sec_correspondence} \subsection{Model settings} \begin{figure} \caption{ The illustration of the geometric correspondence of noisy quantum dynamics and the QEED. The black trajectory corresponds to the ideal unitary evolution $U_0 $ that drives an initial quantum state to a desired final state. Realistic systems subject to errors in the longitudinal direction, $\protect\delta_z$, and the transverse directions, $\protect\delta_x$ and $\protect\delta_y $, undergo the colored trajectories with different control fields. The blue and red evolution trajectories are respectively driven by the robust pulse (blue) and the cosine pulse (red) in the middle, corresponding to the blue and red geometric curves on the right. } \label{Fig_schematics} \end{figure} We start from the generic Hamiltonian with a control field \begin{equation} H(t)=H_{0}+H_{c}(t)+V. \label{Eq_Hfull} \end{equation} Here the system Hamiltonian $H_{0}$ added with the control term $H_{c}(t)$ generates the desired time evolution. $V=\boldsymbol{\delta }\cdot \hat{\mathbf{O}}$ is the noise term, where the components of the vector $\boldsymbol{\delta }$ are random complex numbers corresponding to quantum and classical noises \cite{carballeira2021stochastic,clerk2022introduction} that couple to the target quantum system via the operators $\hat{\mathbf{O}} =\{O_{1},...,O_{n}\}$. In the SU(2) subspace, $\hat{\mathbf{O}}=\hat{\boldsymbol{\sigma}}$, where $\hat{\boldsymbol{\sigma}}$ is the vector of Pauli operators.
$\boldsymbol{\delta}$ is generally time-dependent and randomly fluctuating, which could be caused by unwanted effects including uncontrollable frequency shifts, spectrum broadening, crosstalk, parameter fluctuations, residual couplings \cite{deng2021correcting,krinner2020benchmarking}, etc. The idea of robust control using the geometric correspondence is illustrated in Fig.~\ref{Fig_schematics}. A trivial control field $H_{c}(t)$, for example, a cosine pulse, supposedly drives the raw system $H_{0}$ to a target final state $|\psi _{f}\rangle $. But subject to the noise $V$, the system's evolution deviates from the ideal trajectory $U_{0}$ and undergoes a noisy evolution $U_{e}$. The erroneous final state $|\psi _{e}\rangle $ has an unknown distance from the desired final state depending on the noise strength. To correct the errors, a control Hamiltonian $H_{c}(t)$ formed with a robust control pulse (RCP) obtained by the geometric correspondence should drive the system to the expected state $|\psi _{f}\rangle $, regardless of the presence of the unknown errors. This means that at some time $\tau $, $U_{c}(\tau )=U_{0}(\tau )$. Nonetheless, solving for the RCP is very challenging. We will show how to formulate error measures of control pulses and construct arbitrary RCPs by establishing the geometric correspondence between space curves and the noisy quantum dynamics. In order to obtain analytical solutions, our theory is formulated on two-state systems, giving the SU(2) dynamics. The dynamics of higher-dimensional systems correspond to more complex geometric structures and are beyond the scope of this manuscript. However, some complex systems' dynamics could be decomposed into direct sums or products of SU(2) operators, in which case the work presented in this manuscript still applies. The raw Hamiltonian for a single two-state Hilbert space $\text{span}\{\left\vert a\right\rangle ,\left\vert b\right\rangle \}$ is $H_{raw}=-\frac{\omega }{2}\sigma _{z}$.
If driven transversely with a control field, it is usually written in the rotating frame of the control field \begin{equation} H_{0}^{rot}=-\frac{\Delta }{2}\sigma _{z}, \end{equation} where $\Delta$ is the detuning between the qubit frequency and the control field. The formulas in this manuscript are all presented in the rotating frame, so in the following we will drop the superscript ``rot''. The control and the noise Hamiltonian could be written in the general form \begin{eqnarray} H_{c}(t) &=&\Omega _{z}(t)\sigma _{z}+\Omega _{x}(t)\sigma _{x}+\Omega _{y}(t)\sigma _{y} \label{Eq_Hcontrol} \\ V &=&\delta _{z}\sigma _{z}+\delta _{x}\sigma _{x}+\delta _{y}\sigma _{y}, \label{Eq_GendeltaH} \end{eqnarray} where the Pauli matrices $\sigma _{x}=|a\rangle \langle b|+|b\rangle \langle a|$, $\sigma _{y}=-i|a\rangle \langle b|+i|b\rangle \langle a|$, $\sigma _{z}=|a\rangle \langle a|-|b\rangle \langle b|$. The error $\delta _{z}$ is related to many types of longitudinal noises, including variations in frequency or fluctuating spectral splitting, which could originate from the coupling to other quantum systems, e.g., the neighboring qubits, two-level defects, or a quantum bath \cite{burnett2019decoherence,lisenfeld2019electric}. On the other hand, the transverse errors $\delta _{x}$ and $\delta _{y}$ result from energy exchange with the environment, such as relaxation, crosstalk, and imperfect control. In this manuscript, the formulation of the theory assumes that the noise fluctuation time scale is longer than a quantum gate, so $ \boldsymbol{\delta}$ is a constant vector in this quasi-static limit. Based on this assumption, we will show how to engineer the control Hamiltonian to correct the errors in its perpendicular directions. Hence, for generic errors in the $x,y,z$ directions, we need controls in at least two directions; i.e., two terms out of the three in the control Hamiltonian in Eq.~(\ref{Eq_Hcontrol}) are sufficient to achieve dynamical error suppression.
Specifically, in the following discussion we let \begin{equation} H_{c}(t)=\frac{\Omega (t)\cos \Phi (t)}{2}\sigma _{x}+\frac{\Omega (t)\sin \Phi (t)}{2}\sigma _{y}, \label{Eq_XYdrive} \end{equation} where $\Omega (t)$ and $\Phi (t)$ are the control amplitude and modulated phase of the transverse control field. \subsection{Geometric correspondence} \label{subsecGeocorresp} Here we present the essential geometric correspondence between the driven noisy quantum dynamics and space curves, based on which we further establish the robust control constraints. The total time evolution reads $U(t)=\mathcal{T}\exp \{-i\int_{0}^{t}H(\tau )d\tau \}$, generated by the full noisy Hamiltonian $H(t)$ given in Eq.~(\ref{Eq_Hfull}), whereas the noiseless driven dynamics evolves as $U_{0}(t)=\mathcal{T}\exp \{-i\int_{0}^{t}(H_{0}+H_{c}(\tau ))d\tau \}$. In the interaction picture with respect to $U_{0}(t)$, the noise term is transformed to $V_{I}=U_{0}^{\dagger }VU_{0}=U_{0}^{\dagger }( \boldsymbol{\delta }\cdot \hat{\boldsymbol{\sigma}})U_{0}$. Our model settings have assumed time-independent errors, so $\boldsymbol{\delta }$ is constant under the time integral. For the $j$-component of the noise source, the transformation $U_{0}^{\dagger }(t)\sigma _{j}U_{0}(t)$ over a duration $dt$ gives rise to a displacement of the operator $d\boldsymbol{r}^{j}(t)\cdot \hat{\boldsymbol{\sigma }}$, while its time derivative defines a velocity $\boldsymbol{T}_{j}(t)= \dot{\boldsymbol{r}}^{j}(t)$. As $||\boldsymbol{T} _{j}||=||U_{0}^{\dagger }(t)\sigma _{j}U_{0}(t)||=1$, $\boldsymbol{T} _{j}$ is a unit vector, which means the $j$-error motion has a constant speed. So the Hamiltonian has the geometric correspondence \begin{align} V_{I}(t)& =\delta _{x}\mathbf{T}_{x}(t)\cdot \hat{\boldsymbol{\sigma }} +\delta_{y}\mathbf{T}_{y}(t)\mathbf{\cdot }\hat{\boldsymbol{\sigma }}+\delta _{z}\mathbf{T}_{z}(t)\cdot \hat{\boldsymbol{\sigma }} \notag \\ & =\sum_{j,k=x,y,z}\delta _{j}T_{jk}(t)\sigma _{k}.
\end{align} Here $T_{jk}(t)$ is a tensor connecting the $j$-component of the noise source $\delta _{j}$ and the Pauli term $\sigma _{k}$. Because $U(t)=U_{0}(t)U_{I}(t)$, $U_{I}(t)$ could be called the \textit{erroneous evolution}, referring to the deviation of the noisy $U(t)$ from the noiseless $U_{0}(t)$. $U_{I}(t)$ is generated by the Hamiltonian in the interaction picture via the time-ordered integral \begin{align} U_{I}(t)& =\mathcal{T}\exp \{-i\int_{0}^{t}d\tau V_{I}(\tau )\} \notag \\ & =\mathcal{T}\exp \{-i\sum_{j=x,y,z}\delta _{j}\int_{0}^{t}d\tau \lbrack \mathbf{T}_{j}(\tau )\cdot \hat{\boldsymbol{\sigma }}\mathbf{]}\} \notag \\ & =\mathcal{T}\exp \{-i\sum_{j=x,y,z}\delta _{j}\mathbf{r}^{j} (t)\cdot \hat{\boldsymbol{\sigma }}\} \label{Eq_UItimeordering}. \end{align} Therefore, the erroneous dynamics generated by the Hamiltonian could be described by the kinematics of a moving point. The displacement $\mathbf{r}^{j}(t)$ traces out a space curve in $\mathbb{R}^{3}$, corresponding to the error coupled to $\sigma _{j}$, so we call $\mathbf{r}^{j}(t)$ the \textit{$j$-error curve}. The erroneous evolution $U_{I}(t)$ could thus be described by three error curves, forming a diagram that describes the erroneous dynamics of the two-state system. In this manuscript, we call it a quantum erroneous evolution diagram (QEED). It is known that any continuous, differentiable space curve could be completely characterized in the Frenet-Serret frame \cite{o2006elementary, pressley2010elementary} by a tangent, a normal, and a binormal unit vector $\{\mathbf{T}, \mathbf{N},\mathbf{B}\}$. Moreover, there is a correspondence between the curve length and the evolution time, $ L(t)=\int_{0}^{t}d\tau ||\mathbf{T}_{j}(\tau )\cdot \hat{\boldsymbol{\sigma }}\mathbf{||}=\int_{0}^{t}d\tau =t$. Using the $j$-error curve and its tangent vector defined above, the explicit geometric correspondence could be established.
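To make the correspondence concrete, the following sketch (illustrative Python code; the cosine pulse and all parameters are our own choices, not taken from the manuscript) integrates the $z$-error curve for a resonant drive $H_c=\Omega(t)\sigma_x/2$: the tangent $\mathbf{T}_z$ is read off as the Pauli components of $U_0^{\dagger}\sigma_z U_0$, its unit norm confirms the constant-speed property, and the accumulated displacement gives $\mathbf{r}^z(t)$.

```python
# Sketch: build the z-error curve r^z(t) from the noiseless propagator U0(t)
# for a resonant cosine pi-pulse, H_c = Omega(t) sigma_x / 2 (illustrative).
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

T_gate, steps = 1.0, 4000
dt = T_gate / steps
ts = (np.arange(steps) + 0.5) * dt
omega = (np.pi / T_gate) * (1 - np.cos(2 * np.pi * ts / T_gate))  # area = pi

U0 = I2.copy()
r = np.zeros(3)  # displacement r^z(t) of the z-error curve
for w in omega:
    # tangent vector: Pauli components of U0^dag sigma_z U0
    Tz = np.array([np.trace(U0.conj().T @ sz @ U0 @ p).real / 2 for p in paulis])
    assert abs(np.linalg.norm(Tz) - 1.0) < 1e-9  # unit speed, as in the text
    r += Tz * dt
    phi = w * dt  # exact step propagator for a pure sigma_x drive
    U0 = (np.cos(phi / 2) * I2 - 1j * np.sin(phi / 2) * sx) @ U0

error_distance = np.linalg.norm(r)
print(error_distance)  # ~0.39: the curve does not close, so the pulse is not robust
```

The nonzero final displacement anticipates the robustness measure of Section~\ref{Sec_Measure}: a first-order robust pulse must close this curve.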
As an example, for the control Hamiltonian Eq.~(\ref{Eq_XYdrive}) and noise in $z$-direction, $H=\delta _{z}\sigma _{z}+H_{c}(t)$. Following the derivation in Appendix~\ref{AppendixA}, the explicit geometric correspondence between the control Hamiltonian and the error curve is given by \begin{equation} \begin{array}{c} \mathbf{T}\cdot \hat{\boldsymbol{\sigma }}\mathbf{=}U_{0}^{\dagger }(t)\sigma _{z}U_{0}(t) \\ \mathbf{N}\cdot \hat{\boldsymbol{\sigma}}=U_{0}^{\dagger }(t)(-\sin \Phi (t)\sigma _{x}+\cos \Phi (t)\sigma _{y})U_{0}(t) \\ \mathbf{B}\cdot \hat{\boldsymbol{\sigma}}=U_{0}^{\dagger }(t)(-\cos \Phi (t)\sigma _{x}-\sin \Phi (t)\sigma _{y})U_{0}(t). \end{array} \label{Eq_z_correspondence1} \end{equation} Combining Eq.~(\ref{Eq_XYdrive}) and Eq.~(\ref{Eq_z_correspondence1}) we get the relation between the Frenet vectors and the control Hamiltonian \begin{equation} \left\{ \begin{array}{c} \kappa (t)=\dot{\mathbf{T}}\cdot \mathbf{N} =\Omega (t) \\ \tau (t)=-\dot{\mathbf{B}}\cdot \mathbf{N}=\dot{ \Phi}(t), \end{array} \right. \label{Eq_z_correspondence2} \end{equation} where $\kappa (t)$ and $\tau (t)$ are respectively the signed curvature and the singularity-free torsion of $z$-error curve. So far, we have established the geometric correspondence between the control Hamiltonian and the kinematic properties of space curves. This correspondence is a bijective map which means that given either a Hamiltonian or a curve, its counterpart could be solved straightforwardly via Eq.~(\ref{Eq_z_correspondence1}) or Eq.~(\ref{Eq_z_correspondence2}). We refer to Appendix~\ref{AppendixA} and \ref{AppendixC} for more details. \subsection{Robust control} \label{subsecRobCon} The robust control of the two-state system's dynamics requires the error evolution to vanish while driving the system to achieve a target gate at a specific gate time $\tau $, i.e., satisfying the robust condition $U(\tau )=U_{0}(\tau )$ and $U_{I}(\tau )=I$. 
To obtain the explicit form of $U_{I}$ in terms of the error curve, we use the equivalence between the time-ordering representation and the Magnus expansion \cite{blanes2009magnus} to obtain \begin{align} U_{I}(t)& =\exp \{-i\sum_{n}[(\mathbf{\delta }\cdot )^{n} \hat{\mathbf{A}}_{n}(t)]\} \notag \\ & =\exp \{-i[\delta _{j}A_{1}^{j}+\delta _{j}\delta _{k}A_{2}^{jk}+O(\delta ^{3})]\}, \end{align} where $\hat{\mathbf{A}}_{n}$ is the $n$th-order tensor corresponding to the $n$th order of the Magnus series and the Einstein summation is used. We have also assumed that all the $\delta _{x,y,z}$ terms are at the same perturbative order. The exponential form can also be further expanded into a polynomial series \begin{equation} U_{I}(t)=I-i\delta _{j}A_{1}^{j}-[\frac{1}{2}(\delta _{j}A_{1}^{j})^{2}+i\delta _{j}\delta _{k}A_{2}^{jk}]+O(\delta ^{3}), \label{Eq_perturbative} \end{equation} where \begin{equation} \left\{ \begin{array}{c} A_{1}^{j}(t)=\int_{0}^{t}du\,(\mathbf{T}_{j}(u)\cdot \hat{\boldsymbol{\sigma }}) \\ A_{2}^{jk}(t)=\frac{1}{2}\int_{0}^{t}d\tau \lbrack \dot{A} _{1}^{j}(\tau ),A_{1}^{k}(\tau )] \\ A_{n+1}^{jkl...}(t)=\frac{1}{2}\int_{0}^{t}d\tau \lbrack \dot{A} _{n}^{j}(\tau ),A_{n}^{kl...}(\tau )]. \end{array} \right. \label{Eq_perturbOrders} \end{equation} Utilizing the geometric correspondence presented above, the robust constraints could be established up to arbitrary perturbative orders. Correcting the leading-order error at time $\tau $ requires the first-order term $ A_{1}^{j}(t)$ in Eq.~(\ref{Eq_perturbative}) to vanish. Using $\mathbf{T }_{j}(t)=\dot{\mathbf{r}}^{j}(t)$ we get an explicit geometric representation of the erroneous evolution \begin{equation} A_{1}^{j}(t)=\mathbf{r}^{j}(t)\cdot \hat{\boldsymbol{\sigma }}. \label{Eq_1order} \end{equation} The geometric correspondence of $A_{1}^{j}(t)$ is given by the displacement $ \mathbf{r}^{j}(t)$ of the $j$-error curve.
Therefore, the condition of control robustness up to the leading order is \begin{equation} \mathbf{r}^{j}(\tau )=0. \end{equation} Eq.~(\ref{Eq_perturbOrders}) implies that the higher-order terms contain more complex geometric properties. For simplicity, we consider the case when the error lies in only one axis $j$. The second-order term in Eq.~(\ref{Eq_perturbative}), \begin{equation} A_{2}^{jj}(t)=i\int_{0}^{t}\dot{\mathbf{r}}^{j}(\tau )\times \mathbf{r}^{j}(\tau )d\tau \cdot \hat{\boldsymbol{\sigma} }=i\mathbf{R}^{j}(t)\cdot \hat{\boldsymbol{\sigma}}, \label{Eq_2order} \end{equation} now has a geometric meaning: $\mathbf{R}^{j}(t)=\int_{0}^{t} \dot{\mathbf{r}}^{j}(\tau )\times \mathbf{r}^{j} (\tau )d\tau $ forms directional integral areas on the $y$-$z$, $z$-$x$, and $x$-$y$ planes enclosed by the projections of the space curve. Similarly, the second-order robustness condition requires $\frac{1}{2}(\delta _{j}A_{1}^{j})^{2}+i\delta _{j}^{2}A_{2}^{jj}=0$, i.e., \begin{equation} \left\{ \begin{array}{c} \mathbf{r}^{j}(\tau )=0 \\ \mathbf{R}^{j}(\tau )=0. \end{array} \right. \label{Eq_2robust} \end{equation} Higher-order robustness conditions likewise require the vanishing net areas of the corresponding space curves, only with more closed loops \cite{barnes2022dynamically}. Nonetheless, higher-order robustness means more constraints, so the search for control pulses becomes more challenging, and the resulting control pulses are typically longer and more complicated, making experimental realization infeasible. Therefore, aiming at robustness for orders higher than two is usually unnecessary. We consider only the constraints up to the leading terms in the pulse construction protocol presented in the following Section~\ref{Sec_protocol}. To explain the assumption addressed at Eq.~(\ref{Eq_XYdrive}), we now present theorems to answer the questions: 1. What are the necessary conditions to correct the errors? 2. What are the sufficient conditions to correct the errors?
\textbf{Theorem:} (\textit{Non-correctable condition}) If $[V,H_{c}(t)]=0$ for $\forall t\in \lbrack 0,\tau _{g}]$, the erroneous evolution cannot be dynamically corrected. Specifically, if $[\sigma _{j},H_{c}(t)]=0$, the $j$-error cannot be dynamically corrected by $H_{c}$. Proof. Using the geometric correspondence, the proof for this theorem becomes explicit and intuitive. A necessary condition for the dynamical correctability is that the velocity $\mathbf{T}$ of the erroneous evolution could be modified by the control Hamiltonian, namely $\mathbf{N}\neq 0$ at some time $t\in \lbrack 0,\tau _{g}]$. Whether a trajectory $\mathbf{r}$ is curved or not depends on its normal vector $\mathbf{N}$, corresponding to the dependence of the erroneous evolution on $H_{c}(t)$. From the geometric correspondence introduced above, we know \begin{align} \dot{\mathbf{T}}(t)\cdot \hat{\boldsymbol{\sigma}}& =\frac{d(U_{0}^{\dagger }VU_{0})}{dt} \notag \\ & =\dot{U}_{0}^{\dagger }VU_{0}+U_{0}^{\dagger }\dot{ V}U_{0}+U_{0}^{\dagger }V\dot{U}_{0} \notag \\ & =iU_{0}^{\dagger }[H_{c},V]U_{0}. \end{align} Here, for quasi-static noise, $\dot{V}=0$ within the gate time. Therefore, if $[V,H_{c}(t)]=0$ for $\forall t\in \lbrack 0,\tau _{g}]$, then $\dot{\mathbf{T}}(t)\equiv 0$ and $\mathbf{N}(t)\equiv 0$. In the geometric frame, it means that the error curve remains in the same direction and hence the erroneous evolution cannot be dynamically corrected. Specifically, for the $j$-error, $V_{j}=\delta _{j}\sigma _{j}$. The non-correctable condition $ [V,H_{c}]=0$ becomes $[\sigma _{j},H_{c}]=0$. Q.E.D. Following the general theorem, the next statements discuss the specific forms of the robust control Hamiltonian. \textbf{Corollary:}\textit{\ (Necessary condition)}\textbf{\ }Controls in two non-commuting directions are necessary to correct the quasi-static noises coupled to all $x,y,z$ directions. That is, $H_{c}(t)=\Omega _{j}(t)\sigma _{j}+\Omega _{k}(t)\sigma _{k}$, where $[\sigma _{j},\sigma _{k}]=2i\epsilon _{jkl}\sigma _{l}$.
For any given $j$-error, $H_{c}$ should contain at least one term that does not commute with $\sigma _{j}$. Eq.~(\ref{Eq_XYdrive}) gives an example where $j=x$, $k=y$. So it is easy to demonstrate that for general $V= \boldsymbol{\delta }\cdot \hat{\boldsymbol{\sigma}}$, we get $[V,\Omega _{j}(t)\sigma _{j}+\Omega _{k}(t)\sigma_{k}]=i\boldsymbol{\gamma }\cdot \hat{\boldsymbol{\sigma }}$, where the vector $\boldsymbol{\gamma }(t)\neq 0$ for $t\in \lbrack 0,\tau _{g}]$. So far, we have answered the first question: the necessary conditions for robust control. This helps clarify in which scenarios an error cannot be corrected dynamically, and these conditions could be generalized to higher dimensions. Nevertheless, whether it is feasible to perform robust control to correct arbitrary noisy quantum evolution remains open. Furthermore, whether smooth and simple pulses exist for robust control remains a question. To answer this, we put forward a conjecture here. We will next present analytical and numerical solutions to the universal robust gates, which correct errors subject to noises from all directions. These results support the conjecture. \textbf{Conjecture: }\textit{(Sufficient condition)}\textbf{\ }Controls in two directions are sufficient to correct the quasi-static noises coupled to all $x,y,z$ directions. That is, $H_{c}(t)=\Omega _{j}(t)\sigma _{j}+\Omega _{k}(t)\sigma _{k}$, where $[\sigma _{j},\sigma _{k}]=2i\epsilon _{jkl}\sigma _{l}$. \section{Constructing robust control Hamiltonian} \label{Sec_protocol} In this section, we will show how to utilize the geometric correspondence of quantum evolution to quantify the robustness of an arbitrary quantum control pulse, and further present a protocol to construct the universal robust control Hamiltonian for arbitrary noises.
\subsection{Robustness measure} \label{Sec_Measure} The multiplex error curves, being the geometric correspondence of the erroneous quantum dynamics, provide a geometric way to measure the error of any control pulse subject to specific noises. Specifically, let us define the \textit{error distance} to be the linear error term $||\mathbf{r}^{j}(\tau )||$, as in Eq.~(\ref{Eq_1order}). Unlike the gate fidelity, which describes the overall performance of a quantum operation, the error distance $||\mathbf{r}^{j}(\tau )||$ characterizes the gate error subject to a specific noise-$j$. Therefore, the robustness of a given pulse can be quantitatively characterized by mapping the evolution to an error curve and then measuring the error distance between the starting point and the ending point. To show how the error distance characterizes the robustness of control pulses, we plot the z-error curves of a robust pulse $R_{1;\bot}^{\pi }$ (defined in Sec.~\ref{Sec_UniversalSet}) and two pulses commonly used in experiments, the cosine and sine pulses. As shown in Fig.~\ref{Fig_RobustMeasure}(a,b), the error distances associated with these three pulses exhibit different lengths. Note that the coordinates of the planar error curves in the manuscript are all denoted by $\{x,y\}$ for convenience. To verify the monotonicity of the robustness measure, we numerically simulate the driven noisy dynamics and obtain the gate fidelity of the three pulses versus error $\delta_z$, as shown in Fig.~\ref{Fig_RobustMeasure}(c). The sine and cosine pulses cannot maintain a fidelity plateau as the RCP does, which agrees with their error distances illustrated in Fig.~\ref{Fig_RobustMeasure}(a). Therefore, the error distance is a good robustness measure and will be used as a constraint in the pulse construction protocol introduced below. More quantitative analysis of the robustness order for various pulses can be found in Appendix~\ref{AppendixB}.
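To make the computation of the error distance concrete, the sketch below (our own illustration under simplifying assumptions, not the paper's code) integrates the z-error curve for a resonant x-drive $H_0(t)=\Omega(t)\sigma_x/2$ and evaluates $\|\mathbf{r}(T)\|$ for a plain cosine $\pi$ pulse; a nonzero endpoint distance signals the lack of first-order robustness of this trivial waveform.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def z_error_distance(omega, T, n=4000):
    """Error distance ||r(T)|| of the z-error curve for H0(t) = omega(t)*sx/2."""
    dt = T / n
    U = np.eye(2, dtype=complex)
    r = np.zeros(3)
    for k in range(n):
        t = (k + 0.5) * dt
        M = U.conj().T @ sz @ U                    # unit tangent direction U0† σz U0
        r += np.array([0.5 * np.real(np.trace(p @ M)) for p in (sx, sy, sz)]) * dt
        a = 0.5 * omega(t) * dt                    # step: U0 <- exp(-i omega sx dt / 2) U0
        U = (np.cos(a) * np.eye(2) - 1j * np.sin(a) * sx) @ U
    return np.linalg.norm(r)

T = 1.0
cosine_pi = lambda t: (np.pi / T) * (1 - np.cos(2 * np.pi * t / T))   # pulse area = π
dist = z_error_distance(cosine_pi, T)
print(dist)   # nonzero: a plain cosine π pulse is not first-order robust
```

The tangent extracted at each step is a unit vector (the curve moves at unit speed), so the error distance is directly comparable between pulses of equal duration.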
Owing to the linearity of the error distance, the overall errors from multiple noises are additive. This approach for quantifying noise susceptibility can be generalized to quantum systems with multiple error sources via multiplex error curves. Simultaneous robustness against all the error sources requires the vanishing of the error distances of all the curves. Examples of such cases will be presented in Section~\ref{Sec_UniversalSet}. Furthermore, the second-order error $||\mathbf{R}^{j}(\tau )||$, introduced in Eq.~(\ref{Eq_2order}), provides an additional measure of higher-order control robustness and will be illustrated in Section~\ref{Sec_UniversalSet} and Appendix~\ref{AppendixD}. A rigorous mathematical study of geometric measures of control robustness based on measure theory is beyond the scope of this manuscript. \begin{figure} \caption{(a) The QEED shows three error curves of RCP (blue), cosine (orange), and sine (green) pulses, corresponding to the pulse waveforms shown in (b). (c) Comparison of the fidelities versus the error strength for different pulse waveforms, as a demonstration of the agreement between the robustness behaviors of different pulses and their error distances.} \label{Fig_RobustMeasure} \end{figure} \subsection{Pulse construction protocol} \label{Sec_PulseProtocol} The geometric correspondence helps us understand how the noise affects the qubit dynamics. Therefore, it provides a way to find the conditions of robust evolution and motivates a pulse construction protocol by reverse engineering analytic space curves that satisfy the robustness conditions.
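The geometric core of this reverse engineering can be sketched numerically. In the plane-curve case the signed curvature plays the role of the pulse amplitude, so a curve fixes a pulse once it is reparameterized by arc length. The snippet below (our illustration; the circle is an arbitrary test curve, for which constant curvature corresponds to a constant drive) extracts arc length and signed curvature from a sampled curve:

```python
import numpy as np

def curve_to_pulse(x, y, lam):
    """Arc-length reparameterization data and signed curvature of a plane curve
    (x(λ), y(λ)), computed with finite differences."""
    xp, yp = np.gradient(x, lam), np.gradient(y, lam)
    xpp, ypp = np.gradient(xp, lam), np.gradient(yp, lam)
    speed = np.hypot(xp, yp)
    # cumulative arc length s(λ) via the trapezoid rule
    s = np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(lam))))
    kappa = (xp * ypp - yp * xpp) / speed**3       # signed curvature
    return s, kappa

# Example: a circle of radius R traversed once -> constant "pulse" kappa = 1/R
R, lam = 2.0, np.linspace(0, 2 * np.pi, 2001)
s, kappa = curve_to_pulse(R * np.cos(lam), R * np.sin(lam), lam)
print(s[-1], kappa[1000])   # total length ≈ 2πR, interior curvature ≈ 1/R
```

Reading $\kappa$ as a function of $s$ (instead of $\lambda$) gives the unit-speed pulse; scaling the total length then fits the gate time.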
The procedure of this construction is summarized as follows: (1) Construct a regular curve that meets certain boundary conditions determined by the target gate operation and robustness conditions such as closedness and vanishing net area; (2) Re-parameterize the curve in terms of the arc-length parameter so that it moves at unit speed; (3) Scale the length of the curve to fit the desired gate time; (4) Calculate its curvature and torsion to obtain the corresponding robust control pulses. This protocol is illustrated explicitly in previous works, where plane and space curves were used to construct different RCPs \cite{zeng2018general, zeng2019geometric,barnes2022dynamically}. We note that in our generic geometric correspondence, the control pulses are related to the signed curvature and singularity-free torsion of the geometric curve, which are obtained by a set of well-chosen continuous Frenet vectors. We demonstrate the pulse construction from space curves and provide a universal plane curve construction for first- and second-order robust pulses in Appendix~\ref{AppendixC} and~\ref{AppendixD}. However, even with the constraints given by the theory, it is still challenging to make a good guess of an ansatz for the RCPs. In a real quantum computer, the universal gates using RCPs are desired to be generated automatically given the system parameters. Therefore, assistance from a numerical search is needed to find the RCPs automatically using the robustness conditions. Here, we present an automated analytic-numerical protocol to construct universal error-robust quantum gates. The theory of geometric correspondence gives analytical constraints for robustness, which are added to the objective function to perform constrained optimization using the COCOA algorithm \cite{song2022optimizing}. The protocol is described as follows. \begin{figure} \caption{ The flow chart of the pulse construction protocol.
} \label{Fig_flowchart} \end{figure} \begin{figure*} \caption{(a) First order robust control pulses $\{ R_{1;\bot } \label{Fig_robust_orders} \end{figure*} (1) Initialize. Set initial input pulses and other relevant parameters. (2) Analytic pulse constraints. We apply a discrete Fourier series as the generic ansatz of the RCPs, so that we can control the smoothness of the pulses by truncating the Fourier basis while limiting the number of parameters. For the pulse amplitude, the Fourier series is multiplied by a sine function to ensure zero boundary values, i.e., zero starting and ending of the pulse waveform. The ansatz of the pulse amplitude and phase takes the form \begin{equation} \begin{aligned} \Omega_0(a_j,\phi_j;t) &= \sin( \frac{\pi t}{T} )(a_0 + \sum_{j=1}^n a_j \cos( \frac{2\pi j}{T} t + \phi_j) ) \\ \Phi_0(b_j,\psi_j;t) &= b_0 + \sum_{j=1}^n b_j \cos(\frac{2\pi j}{T}t + \psi_j), \end{aligned} \label{Eq_Pansatz} \end{equation} where $n$ is the number of Fourier components, set within $[1,4]$ for the different gates shown in this paper, and $\{ a_j,\phi_j, b_j,\psi_j \}$'s are the parameters to be optimized. For each input pulse, the corresponding Fourier series is obtained by a Fourier expansion and truncation, and then the expansion function for the amplitude is multiplied by the sine function to obtain a modified pulse. We choose the Fourier series for three reasons: it requires few parameters and is convenient for experiments; it makes the smoothness and bandwidth of the pulses easy to control; and it allows straightforward pre-distortion according to the transfer function. (3) Use the modified pulses to compute the dynamics and obtain the evolution operator $U(t)$, as well as the error curve $\mathbf{r}(t)$. (4) Compute the cost function, which takes the form \begin{equation} C=(1-F)+\xi , \label{Eq_CostF} \end{equation} where $F$ is the gate fidelity defined in Ref.~\cite{pedersen2007fidelity} and $\xi $ is the robustness constraint.
When only the first-order robustness is considered, $\xi =\Vert r(\tau )\Vert$, where $\Vert r(\tau )\Vert $ is the error distance defined in Sec.~\ref{Sec_Measure}. Robustness against errors along different axes can be achieved by including the error distances of the corresponding error curves in the cost function. The auto-differentiation tape records the calculations in (2)-(4). (5) Make a gradient update of the pulse to minimize $C$. (6) Go back to step (2) with the updated pulse as input if the cost function is larger than a threshold $C_{\text{th}}$, such as $10^{-5}$. (7) If $C < C_{\text{th}}$, break the optimization cycle and obtain the optimal robust pulse, which takes the analytical form of Eq.~(\ref{Eq_Pansatz}). \subsection{Universal set of RCP} \label{Sec_UniversalSet} Using the pulse construction protocol introduced in the former subsection, we are able to generate RCPs that satisfy the robustness condition provided by the geometric correspondence. As a specific example, we now apply the protocol to search for a set of RCPs that implement X-axis rotations robust against detuning error. Eq.~(\ref{Eq_Hfull}) then turns into \begin{equation} H=\frac{1}{2}\Omega (t)\sigma_{x}+\frac{1}{2}\Delta \sigma _{z}. \label{Eq_H1} \end{equation} We first optimize the control pulses with the first-order robustness around $\Delta =0$. The obtained optimal RCPs are shown in Fig.~\ref{Fig_robust_orders}(a) and denoted as $\{R_{1;\bot}^{\pi },R_{1;\bot}^{7\pi /4},R_{1;\bot}^{5\pi /2},R_{1;\bot}^{2\pi }\}$. Here $R_{1;\bot}^{\theta }$ represents the first-order RCP for a rotation of angle $\theta$ that can correct errors in its perpendicular directions. We denote the maximum absolute pulse amplitude as $\Omega_m=\text{max}\{|\Omega|\}$. Searching for higher-order robust pulses requires adding more terms associated with the higher-order robustness constraints, e.g., the net area of the error curve, to the constraint term $\xi $ in the cost function Eq.~(\ref{Eq_CostF}).
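The full pipeline (COCOA, auto-differentiation, multiple error curves) is beyond a short snippet, but the control flow of the steps above can be illustrated with a deliberately tiny example: the ansatz of Eq.~(\ref{Eq_Pansatz}) keeping only the $a_0$ term, a simple trace-overlap fidelity standing in for the fidelity of the cited reference, the robustness term $\xi$ omitted, and a finite-difference gradient in place of auto-differentiation. All numerical choices here are ours, for illustration only:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def gate(a0, T=1.0, n=400):
    """Evolve U under the single-term ansatz Omega(t) = a0*sin(pi t/T), H = Omega*sx/2."""
    dt = T / n
    U = np.eye(2, dtype=complex)
    for k in range(n):
        ang = 0.5 * a0 * np.sin(np.pi * (k + 0.5) * dt / T) * dt
        U = (np.cos(ang) * np.eye(2) - 1j * np.sin(ang) * sx) @ U
    return U

def cost(a0, U_target):
    """C = 1 - F with a simple overlap fidelity (the xi term is omitted in this toy)."""
    F = abs(np.trace(U_target.conj().T @ gate(a0)) / 2) ** 2
    return 1.0 - F

U_target = -1j * sx                      # target gate: X_pi
a0, lr, eps = 4.0, 2.0, 1e-6
for _ in range(200):                     # gradient updates until converged
    grad = (cost(a0 + eps, U_target) - cost(a0 - eps, U_target)) / (2 * eps)
    a0 -= lr * grad
print(a0, cost(a0, U_target))            # a0 ends near pi^2/2 ≈ 4.93, cost near 0
```

Adding error distances of the relevant curves to the cost follows the same pattern; only the cost function changes, not the loop.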
However, we found that simply adding these constraints hinders the convergence of the algorithm at an unacceptable rate, because the computation of these net areas involves complicated integrals over the error curve. We resolve this issue by adding to the cost function $C$ new terms with the gate infidelity and error distance at a non-vanishing $\Delta $. Specifically, we use $C=\sum_{j=1,2}[(1-F(\Delta _{j}))+\Vert r(\Delta _{j},\tau )\Vert ]$, where $\Delta _{1}=0$ and $\Delta _{2}$ is chosen to be $\pi /\tau $ according to the experimentalist's expectation. This ensures that the resulting RCPs maintain high gate fidelities over a broader range of noise amplitude and thus leads to extended robustness. We then obtained a class of extended RCPs, denoted as $\{R_{\text{ex};\bot}^{\pi },R_{\text{ex};\bot}^{9\pi /4},R_{ \text{ex};\bot}^{5\pi /2}, R_{\text{ex};\bot}^{2\pi }\}$, that are composed of three cosine components and whose error curves have net areas close to zero, as shown in Fig.~\ref{Fig_robust_orders}(b). The robustness of the eight RCPs mentioned above is further confirmed by calculating the first few Magnus expansion terms, as shown in Appendix~\ref{AppendixB}. We use the first-order RCPs and the extended RCPs to implement four single-qubit robust gates $\{ X_{\pi}, X_{\pi/4}, X_{\pi/2}, X_{2\pi} \}$ ($X_{\theta}$ represents a rotation of angle $\theta$ around the x-axis of the Bloch sphere). The first three gates, together with the robust $Y_{\pi/2}$ gate, i.e., applying $R_{1;\bot}^{5\pi /2}$ or $R_{\text{ex};\bot}^{5\pi /2}$ in the Y direction, are elementary single-qubit gates for a universal robust gate set \cite{shor199637th}, while the $2\pi$ gate is equivalent to robust dynamical decoupling. We numerically simulate the dynamics of the noisy qubit driven transversely (Eq.~\ref{Eq_H1}) and present the gate robustness in Fig.~\ref{Fig_robust_orders}(c).
Compared with the gates performed by trivial cosine pulses with the same maximal amplitudes as the RCPs, the gates using the first-order RCPs and extended RCPs all exhibit wide high-fidelity plateaus over a significant range of noise amplitude. The gate infidelities exhibit plateaus around $10^{-8}$ within a $1\%$ noise region for the first-order RCPs and around $10^{-5}$ within a $15\%$ noise region for the extended RCPs. Note that high-fidelity plateaus are a signature of the robustness of the gates. \begin{figure} \caption{ (a) A set of RCPs $R_{1;\text{all} \label{Fig_XYdrive} \end{figure} More general noises on all axes can be corrected simultaneously with XY control with the Hamiltonian in Eq.~(\ref{Eq_XYdrive}). This works because robust control in one direction can correct the noise coupled to the two perpendicular directions, as discussed in the previous section. As a numerical demonstration, we consider both longitudinal and transverse errors in the form of $V=\frac{\Delta }{2}\sigma_{z}+\frac{ \epsilon }{2}(\sigma_{x}+\sigma_{y})$, giving rise to the co-existence of x-, y-, and z-error curves in the geometric space. The robustness constraint in Eq.~(\ref{Eq_CostF}) then takes the form $\xi =\sum_{j=x,y,z}\Vert \mathbf{r}^{j}(\tau )\Vert $. We apply our protocol to find an RCP $R_{1;\text{all}}^{3\pi /2}$ with only four Fourier components to implement a single-qubit $X_{3\pi /2}$ gate that is robust against errors in all three directions. Although this RCP is solved to generate a robust $3\pi/2$ rotation around the x-axis, its rotation axis can be changed by adding a constant phase while keeping the robustness along the rotation axis and its two perpendicular axes, as demonstrated numerically in Appendix~\ref{AppendixB}. As shown in Fig.~\ref{Fig_XYdrive}(a), the XY drive has three closed error curves during the gate time.
The fidelity landscape of the $X_{3\pi /2}$ gate using the $R_{1;\text{all}}^{3\pi /2}$ pulse and the cosine pulse in the two error dimensions is plotted in Fig.~\ref{Fig_XYdrive}(b). The RCP shows a great advantage over the cosine pulse, exhibiting a significant high-fidelity plateau with fidelity above $0.9999$. \section{Universal robust quantum gates for realistic qubits} \label{Sec_application} Applying this method to construct universal robust quantum gates for realistic systems, especially in a multi-qubit setup, is not a trivial task. In this section, we demonstrate the method by studying the physical models of gate-defined quantum dots and superconducting transmon qubits. Without loss of generality, the simulated gate time for all the RCPs is set to be $\tau =50$ ns. The realistic gate time can be arbitrary, since we can always rescale the RCPs in the time domain while maintaining their robustness. This is guaranteed by the geometric correspondence, since the substitution $t\rightarrow \alpha t$, $\Omega \rightarrow \Omega /\alpha $ only rescales the length of the error curve and does not change the correspondence such as Eq.~(\ref{Eq_z_correspondence2}), nor the unit-speed properties and robustness conditions of the error curves. In Appendix~\ref{AppendixB}, we demonstrate this property and provide the parameters for the RCPs involved in the manuscript. \subsection{Gate-defined quantum dot qubit} \begin{figure} \caption{ (a) Fidelities of single-qubit gate $\{ X_{\protect\pi} \label{Fig_QD} \end{figure} First, we consider a gate-defined double quantum dot (QD) system, in which the spin states of electrons serve as qubits \cite{xue2022quantum, noiri2022fast, mkadzik2022precision, philips2022universal, huang2019fidelity}.
The Hamiltonian is \begin{equation} H=\mathbf{B}_{1}\cdot \mathbf{S}_{1}+\mathbf{B}_{2}\cdot \mathbf{S}_{2}+J(\mathbf{S}_{1}\cdot \mathbf{S}_{2}-1/4), \label{Eq_HeisenbergEq} \end{equation} where $\mathbf{S}_{j}=(\sigma_{x,j},\sigma_{y,j}, \sigma_{z,j})/2$ and $\mathbf{B}_{j}=(B_{x,j},B_{y,j},B_{z,j})$ are the spin operators and the magnetic field acting on qubit $j$, and $J$ is the exchange interaction between the spins. We denote the difference of the Zeeman splittings between the two qubits as $\Delta =B_{z,2}-B_{z,1}$. Detuning noise from magnetic fluctuations, spin-orbit interaction, and residual exchange interaction is one of the major obstacles to further improving the coherence and gate fidelity of such qubits \cite{xue2022quantum, bertrand2015quantum}. \textit{Single-qubit robust gates.}-A single QD is a good two-level system. With detuning noise coupled to a driven QD, the system Hamiltonian takes the same form as Eq.~(\ref{Eq_H1}). Therefore, the robustness of the pulses driving a single QD is well described by Fig.~\ref{Fig_robust_orders}. Another source of single-qubit errors is the unwanted coupling between two QDs. The Hamiltonian of such a system with Heisenberg interaction takes the form of Eq.~(\ref{Eq_HeisenbergEq}). Ideally, the universal single-qubit gates $\{ X_{\pi}, X_{\pi/4}, X_{\pi/2}, X_{2\pi} \}$ on the second qubit are given by $I\otimes X_{\theta}$, as the operations on the second qubit should not affect the first qubit. It is reasonable to assume $J \ll \Delta$, but $J$ cannot be completely turned off during the single-qubit operations by tuning the gate voltage between the two QDs \cite{xue2022quantum}. In the eigenbasis, the two subspaces $\text{span}\{ | \uparrow \uparrow \rangle, | \tilde{\uparrow \downarrow} \rangle \}$ and $\text{span}\{ | \tilde{\downarrow \uparrow} \rangle, |\downarrow \downarrow \rangle \}$ both correspond to the second qubit but are detuned by $J$ in the rotating frame.
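The block structure used above can be made concrete by building Eq.~(\ref{Eq_HeisenbergEq}) explicitly with Kronecker products (a sketch with arbitrary field values; the basis ordering and the sign convention for $\Delta$ on the diagonal are our choices and vary in the literature):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def heisenberg(B1, B2, J):
    """H = B1·S1 + B2·S2 + J(S1·S2 - 1/4) in the product basis {uu, ud, du, dd}."""
    S1 = [np.kron(s, I2) / 2 for s in (sx, sy, sz)]
    S2 = [np.kron(I2, s) / 2 for s in (sx, sy, sz)]
    H = sum(B1[k] * S1[k] + B2[k] * S2[k] for k in range(3))
    return H + J * (sum(S1[k] @ S2[k] for k in range(3)) - np.eye(4) / 4)

Bz1, Bz2, J = 1.0, 1.25, 0.05          # arbitrary illustrative values
Delta = Bz2 - Bz1
H = heisenberg((0, 0, Bz1), (0, 0, Bz2), J)

# Anti-parallel block, ordered so that the +Delta/2 entry comes first:
H_sub = np.real(H[np.ix_([2, 1], [2, 1])])
print(H_sub)   # matches 0.5*[[-J+Delta, J], [J, -J-Delta]]
```

The off-diagonal $J/2$ coupling within the anti-parallel block is exactly the entangling term exploited below for the $\sqrt{\mathrm{SWAP}}$ gate.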
We tune a magnetic drive $B_{x,2}(t)$ according to the first-order RCPs to implement X rotations robust against the detuning resulting from the unwanted coupling. The numerical results demonstrating robust $\{ X_{\pi},X_{\pi/4},X_{\pi/2},X_{2\pi} \}$ gates in such a double QD system with $\Delta = 250$ MHz are shown in Fig.~\ref{Fig_QD}(a). Although what we demonstrate here is a partial error cancellation, since there is an additional crosstalk error between the two subspaces that reduces the gate robustness to some extent, the fidelities are still significantly improved in comparison with the cosine waveform. Further, the crosstalk error can be canceled by appropriately timing and implementing additional correction pulses according to \cite{russ2018high, huang2019fidelity}, or with more sophisticated RCPs. \textit{Two-qubit robust gates.}-The two-qubit $\sqrt{\text{SWAP}}$ gate is commonly used as an entangling gate in the QD system. A perfect $\sqrt{\text{SWAP}}$ can be realized by switching on the exchange coupling in the region where the two QDs have zero Zeeman difference---a condition hardly met in experiment, either due to the detuning noise or due to the need for qubit addressability. Note that the entangling originates from the Heisenberg interaction, and the Hamiltonian Eq.~(\ref{Eq_HeisenbergEq}) in the anti-parallel spin subspace $\text{span}\{ | \uparrow \downarrow \rangle, | \downarrow \uparrow \rangle \}$ takes the form \begin{equation} H(t) = \frac{1}{2} \left( \begin{array}{cc} -J(t) + \Delta & J(t) \\ J(t) & -J(t) - \Delta \end{array} \right). \end{equation} Implementing a $\sqrt{\text{SWAP}}$ gate thus means controlling $J(t)$ to implement an $X_{\pi/2}$ operation in this subspace. The waveform of $J(t)$ can be upgraded to a robust pulse $R_{1;\bot }^{5\pi/2}$ in order to fight against the finite Zeeman difference. We numerically solve the time-dependent Schr\"{o}dinger equation and obtain the results shown in Fig.~\ref{Fig_QD}.
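A stripped-down version of this simulation (our own minimal sketch, with arbitrary units and an illustrative sine waveform for $J(t)$ rather than the RCP) confirms the picture: with zero Zeeman difference, a pulse area $\int J\,dt=\pi/2$ yields the subspace $X_{\pi/2}$ (a $\sqrt{\mathrm{SWAP}}$ up to phase), while a finite $\Delta$ degrades it, which is exactly the error the robust waveform targets.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def subspace_gate(J, Delta, T, n=2000):
    """Evolve the anti-parallel block H(t) = 0.5*(-J(t)*I + Delta*sz + J(t)*sx)
    with piecewise-constant steps."""
    dt = T / n
    U = np.eye(2, dtype=complex)
    for k in range(n):
        t = (k + 0.5) * dt
        H = 0.5 * (-J(t) * np.eye(2) + Delta * sz + J(t) * sx)
        w, P = np.linalg.eigh(H)
        U = (P @ np.diag(np.exp(-1j * w * dt)) @ P.conj().T) @ U
    return U

T = 1.0
J_pulse = lambda t: (np.pi**2 / (4 * T)) * np.sin(np.pi * t / T)      # area = π/2
X_half = np.cos(np.pi / 4) * np.eye(2) - 1j * np.sin(np.pi / 4) * sx  # target X_{π/2}

fid = lambda U: abs(np.trace(X_half.conj().T @ U) / 2) ** 2
F0 = fid(subspace_gate(J_pulse, 0.0, T))    # zero Zeeman difference
F1 = fid(subspace_gate(J_pulse, 1.5, T))    # finite Zeeman difference
print(F0, F1)                               # F0 ≈ 1, F1 noticeably below 1
```

Replacing the sine waveform of $J(t)$ with a first-order RCP is what restores $F\approx 1$ at finite $\Delta$ in the paper's simulation.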
The robust $\sqrt{\text{SWAP}}$ gate exhibits a high-fidelity plateau, and the infidelity is several orders of magnitude lower than that of the cosine pulse counterpart in a wide detuning region. \subsection{Transmon qubit} \textit{Single-qubit gates robust to frequency variation.}-For a single superconducting transmon with qubit frequency variation, we use the RCPs $\{R_{1}^{\pi },R_{1}^{7\pi /4},R_{1}^{5\pi /2},R_{1}^{2\pi }\}$ found in Fig.~\ref{Fig_robust_orders} to implement robust single-qubit gates $\{X_{\pi },X_{\pi /4},X_{\pi /2},X_{2\pi }\}$. A transmon is usually considered a three-level system with a moderate anharmonicity between the first and second transitions \cite{blais2021circuit, krantz2019quantum}. With the transverse control field, a transmon can be modeled as \begin{equation} H=(\omega +\delta )a^{\dagger }a+\frac{u}{2}a^{\dagger }a^{\dagger }aa+H_{c}(t), \end{equation} where the control Hamiltonian \cite{krantz2019quantum} \begin{equation} H_{c}(t)=\frac{1}{2}\xi (t)ae^{i\omega _{d}t}+h.c. \end{equation} The lowest two levels form the qubit. Here $\omega $, $u$, $a^{\dagger }$, and $a$ are the qubit frequency, the anharmonicity, and the creation and annihilation operators of the transmon. $\omega _{d}$ is the frequency of the driving and $\xi (t)$ is the total control waveform. We apply our theory to the dynamics associated with the qubit levels to correct the errors resulting from frequency variation, and use DRAG to suppress the leakage to the higher level \cite{motzoi2009simple}. Thus $\xi (t)$ is the sum of the RCP $\Omega (t)$ and the corresponding DRAG pulse $i\alpha \dot{\Omega}(t)$. The DRAG parameter $\alpha $ is related to the transmon anharmonicity \cite{motzoi2009simple} and takes the numerically optimized value at zero detuning in our simulation. We set the gate times for $\{X_{\pi },X_{\pi /4},X_{\pi /2},X_{2\pi }\}$ to be $70$, $50$, $80$, and $55$ ns, and fix the maximal pulse amplitude $\Omega _{m}/2\pi $ to be around $27$ MHz.
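The three-level model above is easy to assemble numerically. The sketch below (our illustration with parameter values of the same magnitude as in the text; the DRAG coefficient is simply set proportional to $1/u$ here, whereas the paper optimizes it numerically) checks the level structure and builds a DRAG-corrected waveform $\xi(t)=\Omega(t)+i\alpha\dot{\Omega}(t)$:

```python
import numpy as np

d = 3                                            # transmon truncated to three levels
a = np.diag(np.sqrt(np.arange(1, d)), 1)         # annihilation operator
n = a.conj().T @ a

omega = 2 * np.pi * 5.0                          # qubit frequency (GHz * 2π), illustrative
u = -2 * np.pi * 0.25                            # anharmonicity (negative for a transmon)
H0 = omega * n + 0.5 * u * (a.conj().T @ a.conj().T @ a @ a)   # frequency error delta = 0

E = np.sort(np.linalg.eigvalsh(H0))
anharm = (E[2] - E[1]) - (E[1] - E[0])
print(anharm / (2 * np.pi))                      # ≈ -0.25, i.e. recovers u/2π

# DRAG-corrected control waveform xi(t) = Omega(t) + i*alpha*dOmega/dt
t = np.linspace(0, 50.0, 501)                    # 50 ns grid
Omega = 2 * np.pi * 0.027 * (1 - np.cos(2 * np.pi * t / 50.0)) / 2
alpha = -1.0 / u                                 # illustrative choice; optimized in the paper
xi = Omega + 1j * alpha * np.gradient(Omega, t)
```

The real part of $\xi$ carries the RCP and the imaginary part the derivative correction, which is what suppresses leakage to the third level.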
As in previous sections, we use cosine pulses with the same maximal amplitudes for comparison. Our numerical results in Fig.~\ref{Fig_transmon}(b) illustrate the fidelity plateau over a few MHz of frequency variation for our robust gates. Note that the centers of the fidelity plateaus all shift to the left. We attribute this to the AC-Stark shifts associated with the higher levels. \begin{figure} \caption{ (a) Fidelities of single-qubit gates $\{ X_{\protect\pi} \label{Fig_transmon} \end{figure} \textit{Single-qubit gates robust to unwanted coupling.-}In realistic multi-qubit systems, a transmon is unavoidably coupled to other quantum systems, the so-called spectators \cite{deng2021correcting, zhao2022quantum,krinner2020benchmarking, wei2021quantum, kandala2021demonstration,mundada2019suppression, ku2020suppression}. This results in qubit frequency splittings or spectrum broadening, giving rise to correlated errors, which become a major obstacle for large-scale quantum computing \cite{krinner2020benchmarking}. As a demonstrative example, we consider two directly-coupled transmon qubits (one spectator and one target qubit) with the Hamiltonian \begin{equation} H_{0}=\sum_{j=1,2}[\omega _{j}a_{j}^{\dagger }a_{j}+\frac{u_{j}}{2} a_{j}^{\dagger }a_{j}^{\dagger }a_{j}a_{j}]+g(a_{1}^{\dagger }a_{2}+a_{1}a_{2}^{\dagger }), \label{Eq_2Transmon} \end{equation} where $\omega _{j}$ is the qubit frequency, $u_{j}$ is the anharmonicity of the $j$th qubit, and $g$ is the interaction strength. Denote the eigenstates of the spectator-target qubit system \cite{deng2021correcting} as $|S,T\rangle $. Up to second-order perturbation theory, the effective diagonal Hamiltonian in the subspaces $\{\left\vert 00\right\rangle ,\left\vert 01\right\rangle \}$ and $\{\left\vert 10\right\rangle ,\left\vert 11\right\rangle \}$ is detuned by $\delta _{\text{zz}}$, with $\delta _{\text{ zz}}=-\frac{2g^{2}(u_{1}+u_{2})}{(u_{1}+\Delta )(u_{2}-\Delta )}$ and $\Delta =\omega _{2}-\omega _{1}$.
This detuning $\delta _{\text{zz}}$ is known as the residual ZZ-coupling. We take $\omega _{1}/2\pi =5.0$ GHz, $\omega _{2}/2\pi =5.5$ GHz, $u_{1}/2\pi =-0.23$ GHz, and $u_{2}/2\pi =-0.26$ GHz. The exact $\delta _{\text{zz}}$ obtained by numerical diagonalization and the perturbative one are plotted in Fig.~\ref{Fig_transmon}(b) as a demonstration of the effectiveness of the perturbative treatment of $\delta _{\text{zz}}$ at small $g$. Our numerical results shown in Fig.~\ref{Fig_transmon}(c) illustrate a significant improvement of the gate infidelity in the presence of residual ZZ-coupling up to the order of MHz. When the ZZ-coupling vanishes, the infidelity of the robust gates remains of the same order as that of trivial pulses. Here the DRAG control \cite{motzoi2009simple} is added as well. \textit{Two-qubit gates.-}The iSWAP gate is one of the favored two-qubit entangling gates for quantum computing using transmons \cite{arute2019quantum}. The traditional implementation of the iSWAP gate requires the two qubits to be on resonance. This could be achieved by tuning qubit frequencies, which, however, induces extra flux noise and hence variation in the qubit frequency. To better tune the interaction strength, recent architectures favor tunable couplers \cite{xu2020high, stehlik2021tunable}. This does not change the effective Hamiltonian in Eq.~(\ref{Eq_2Transmon}); it only enables the control of the coupling strength $g(t)$. To correct the error due to the uncertain frequency difference $\delta $ between the two transmons, we apply the RCPs $\{R_{1}^{\pi },R_{1}^{5\pi /2}\}$ to $g(t)$ to implement robust iSWAP and $\sqrt{\text{iSWAP}}$ gates. We use the qubit settings of the previous discussion to study the variation of gate infidelities versus $\delta $.
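The perturbative $\delta_{\text{zz}}$ can be compared against exact diagonalization of Eq.~(\ref{Eq_2Transmon}) truncated to three levels per transmon. The sketch below is ours (not the paper's code): eigenvalues are matched to bare states by proximity, which is valid at small $g$, and agreement at the level of tens of percent should be expected at second order, since the prefactor is sensitive to conventions in the denominators.

```python
import numpy as np

def two_transmon_zz(w1, w2, u1, u2, g, d=3):
    """Exact residual ZZ: E11 - E10 - E01 + E00 from diagonalizing Eq. (2Transmon)."""
    a = np.diag(np.sqrt(np.arange(1, d)), 1)
    n = a.conj().T @ a
    I = np.eye(d)
    H = (w1 * np.kron(n, I) + w2 * np.kron(I, n)
         + 0.5 * u1 * np.kron(n @ (n - I), I)        # a†a†aa = n(n-1)
         + 0.5 * u2 * np.kron(I, n @ (n - I))
         + g * (np.kron(a.conj().T, a) + np.kron(a, a.conj().T)))
    evals = np.linalg.eigvalsh(H)
    def dressed(i, j):                                # nearest eigenvalue to the bare energy
        bare = w1 * i + w2 * j + 0.5 * u1 * i * (i - 1) + 0.5 * u2 * j * (j - 1)
        return evals[np.argmin(abs(evals - bare))]
    return dressed(1, 1) - dressed(1, 0) - dressed(0, 1) + dressed(0, 0)

w1, w2, u1, u2, g = 5.0, 5.5, -0.23, -0.26, 0.02      # GHz; qubit parameters from the text
Delta = w2 - w1
zz_exact = two_transmon_zz(w1, w2, u1, u2, g)
zz_pert = -2 * g**2 * (u1 + u2) / ((u1 + Delta) * (u2 - Delta))
print(zz_exact, zz_pert)   # both negative and of order 1e-3 GHz
```

At MHz-scale $\delta_{\text{zz}}$, this is precisely the detuning regime where the robust gates of Fig.~\ref{Fig_transmon}(c) retain their advantage.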
The additional ZZ-coupling induced by higher levels of the transmons when activating the iSWAP interaction is omitted in our simulation, as it can be canceled by exploiting the tunable coupler and setting a proper gate time \cite{sung2021realization}. Our numerical results in Fig.~\ref{Fig_transmon}(d) demonstrate a great advantage of the RCPs over their cosine counterparts. This control scheme also illustrates great potential to simplify the experimental tune-up of qubit frequencies. \section{Conclusion and discussion} In summary, the bijective and singularity-free geometric correspondence generates the quantum erroneous evolution diagrams as a graphical analysis for studying the error dynamics of driven quantum systems subject to generic noises. The geometric measures of the errors obtained from the diagrams (Sec.~\ref{Sec_Measure}) enable an intuitive and quantitative description of the robustness of arbitrary quantum operations. Furthermore, the framework provides conditions (Sec.~\ref{subsecRobCon}) to identify feasible robust control scenarios and allows us to establish an experiment-friendly robust control protocol (Sec.~\ref{Sec_PulseProtocol}) as a fusion of analytical theory and numerical optimization. John von Neumann famously said \cite{dyson2004meeting}: ``\textit{With four parameters I can fit an elephant, and with five I can make him wiggle his trunk}.'' Here, our protocol exploits no more than four cosine components to construct universal robust quantum gates that can tackle generic errors. These simple pulses enable low-cost generation and characterization in experiments. The duration of the generated pulses is flexibly adjustable while maintaining robustness, as shown in Fig.~\ref{Fig_pulse_rescale}.
The applications to realistic systems are demonstrated by numerical simulations of semiconductor spin and superconducting transmon systems, where the gate performance is shown to be beyond the fault-tolerance threshold over a broad robust plateau. With this simple and versatile protocol, our robust control framework can be applied to experiments immediately and adopted on various physical systems to enhance their robustness in complex noisy environments. Although our discussion in this manuscript focuses on single- and two-qubit systems with multiple quasi-static noises, the framework is directly extensible to multi-qubit systems with complex, time-dependent noise via higher-dimensional geometric correspondence and error curves with modified speeds. Hence it shows a promising prospect for large quantum processors, and will potentially inspire a wave of studies on noisy quantum evolution, quantum control and compiling, error mitigation and correction in quantum computers, quantum sensing, metrology, etc. On the other hand, a rigorous mathematical study of the geometric measure theory remains open and is beyond the scope of this work. \acknowledgments XHD conceived and oversaw the project. YJH and XHD derived the theory, designed the protocols, and wrote the manuscript. YJH did all the coding and numerical simulation. JL and JZ gave some important suggestions on the algorithm. All authors contributed to the discussions. We thank Yu He and Fei Yan for suggestions on the simulations of the realistic models, and Qihao Guo and Yuanzhen Chen for fruitful discussions. This work was supported by the Key-Area Research and Development Program of Guangdong Province (Grant No.
2018B030326001), the National Natural Science Foundation of China (U1801661), the Guangdong Innovative and Entrepreneurial Research Team Program (2016ZT06D348), the Guangdong Provincial Key Laboratory (Grant No.~2019B121203002), the Natural Science Foundation of Guangdong Province (2017B030308003), the Science, Technology, and Innovation Commission of Shenzhen Municipality (JCYJ20170412152620376, KYTDPT20181011104202253), the NSF of Beijing (Grant No.~Z190012), and the Shenzhen Science and Technology Program (KQTD20200820113010023). \begin{appendix} \section{Geometric Correspondence} \label{AppendixA} Without loss of generality, we first derive the geometric correspondence given by the z-error curve discussed in the main text. The Hamiltonian in the interaction picture and the error curve are \begin{eqnarray} H_{I}(t) &=& \delta U_{0}^{\dagger }(t)\sigma_{z}U_{0}(t)= \delta \,\mathbf{T} \cdot \hat{\boldsymbol{\sigma}} \\ \mathbf{r}(t) \cdot \hat{\boldsymbol{\sigma}} &=& \int_{0}^{t}U_{0}^{\dagger }(t_{1})\sigma_{z}U_{0}(t_{1})dt_{1}. \notag \label{Eq_A_tangent} \end{eqnarray} The noise term $\delta$ is treated as a static perturbation, which agrees with the physical picture where the duration of a quantum gate is much shorter than the time scale of most of the pink noise or decoherence. Noting that $\dot{\mathbf{r}}(t)=\mathbf{T}$ is a unit tangent vector, one can then obtain a new unit vector $\mathbf{N}$ perpendicular to $\mathbf{T}$ through \begin{eqnarray} \dot{\mathbf{T}}(t)\cdot\hat{\boldsymbol{\sigma}} &=&iU_{0}^{\dagger }(t)[H_{0}(t),\sigma _{z}]U_{0}(t) \notag \\ &=&\Omega (t)U_{0}^{\dagger }(t)(-\sin \Phi (t)\sigma _{x}+\cos \Phi (t)\sigma _{y})U_{0}(t) \notag \\ &=&\Omega (t)\mathbf{N}\cdot \hat{\boldsymbol{\sigma}}.
\label{Eq_A_normal} \end{eqnarray} The third vector $\mathbf{B}$, perpendicular to $\mathbf{T}$ and $\mathbf{N}$, is given by $\mathbf{B}=\mathbf{T}\times \mathbf{N}$, \begin{equation} \mathbf{B}(t)\cdot \hat{\boldsymbol{\sigma}}=U_{0}^{\dagger }(t)(-\cos \Phi (t)\sigma _{x}-\sin \Phi (t)\sigma _{y})U_{0}(t). \label{Eq_A_binormal} \end{equation} Its time derivative satisfies \begin{eqnarray} \dot{\mathbf{B}}(t)\cdot \hat{\boldsymbol{\sigma}} &=&\dot{\Phi}U_{0}^{\dagger }(t)(\sin \Phi (t)\sigma _{x}-\cos \Phi (t)\sigma _{y})U_{0}(t) \notag \\ &=&-\dot{\Phi}(t)\mathbf{N}\cdot \hat{\boldsymbol{\sigma}}. \label{Eq_A_Dbinormal} \end{eqnarray} The three unit vectors $\{\mathbf{T},\mathbf{N},\mathbf{B}\}$, as the tangent, normal, and binormal unit vectors of the error curve, form a Frenet-Serret frame, and their defining formulas in Eq.~(\ref{Eq_z_correspondence1}) directly follow from Eqs.~(\ref{Eq_A_tangent})-(\ref{Eq_A_binormal}). They satisfy the Frenet-Serret equations \begin{equation} \left( \begin{array}{l} \dot{\mathbf{T}} \\ \dot{\mathbf{N}} \\ \dot{\mathbf{B}} \end{array} \right) =\left( \begin{array}{ccc} 0 & \kappa & 0 \\ -\kappa & 0 & \tau \\ 0 & -\tau & 0 \end{array} \right) \left( \begin{array}{l} \mathbf{T} \\ \mathbf{N} \\ \mathbf{B} \end{array} \right), \label{Eq_A_Frenet} \end{equation} with $\Omega(t)$ and $\dot{\Phi}(t)$ playing the roles of the signed curvature $\kappa(t)$ and the singularity-free torsion $\tau(t)$ of the error curve, as stated in Eq.~(\ref{Eq_z_correspondence2}). In contrast to standard differential geometry, the Frenet vectors defined physically by Eq.~(\ref{Eq_z_correspondence1}) are continuous and differentiable, since the pulses are assumed to be continuous and differentiable. We define the signed curvature as the projection of the curvature vector $\dot{\mathbf{T}}$ onto the normal vector, $\dot{\mathbf{T}}\cdot \mathbf{N}=\kappa$. It can take negative values since the corresponding pulse amplitude $\Omega$ can be negative.
It relates to the conventional curvature in the standard differential geometry of curves by taking the absolute value. The torsion defined by $\dot{\mathbf{B}} \cdot \mathbf{N}=-\tau $ is also continuous and does not have a singularity at curvature-zero points, unlike in standard differential geometry. This mathematical ambiguity is addressed in Appendix~\ref{AppendixC}. One can also establish the geometric correspondence via error curves in other directions. Here we take the x-error curve as an example. The x-error curve is given by $\dot{\mathbf{r}} \cdot \hat{\boldsymbol{\sigma}} = U_0^{\dagger} \sigma_x U_0$, and the Frenet vectors are defined as \begin{equation} \begin{array}{c} \mathbf{T}\cdot \hat{\boldsymbol{\sigma}} = U_{0}^{\dagger }(t)\sigma_{x}U_{0}(t) \\ \mathbf{N}\cdot \hat{\boldsymbol{\sigma}} = - U_{0}^{\dagger }(t)\sigma_{z}U_{0}(t) \\ \mathbf{B}\cdot \hat{\boldsymbol{\sigma}} = U_{0}^{\dagger }(t)\sigma_{y}U_{0}(t). \end{array} \end{equation} The relation between the control pulses and the curvature-torsion of the x-error curve is given by \begin{equation} \begin{aligned} \kappa(t) &= \dot{\mathbf{T}} \cdot \mathbf{N} = \Omega(t) \sin\Phi(t) \\ \tau(t) &= -\dot{\mathbf{B}}\cdot \mathbf{N} = \Omega(t) \cos\Phi(t). \end{aligned} \end{equation} Imposing robustness constraints on this error curve will lead to dynamics robust against the x error. \section{From regular space curve to pulse} \label{AppendixC} As mentioned in the main text, one can construct robust control pulses by reverse engineering space curves, because of the geometric correspondence between regular space curves and the dynamics of a two-level system, where the signed curvature and singularity-free torsion of the space curves are related to the drive amplitude and phase.
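The frame relations for the z-error curve ($\dot{\mathbf{T}}\cdot\mathbf{N}=\Omega$ and $\dot{\mathbf{B}}\cdot\mathbf{N}=-\dot{\Phi}$) can be verified numerically for any smooth pulse. The sketch below uses an arbitrary illustrative $\Omega(t)$ and $\Phi(t)$ with the drive parametrization $H_0=\tfrac{\Omega}{2}(\cos\Phi\,\sigma_x+\sin\Phi\,\sigma_y)$, which is our choice consistent with the derivative formulas above:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega = lambda t: 1.0 + 0.5 * np.sin(t)        # illustrative smooth pulse amplitude
Phi = lambda t: 0.3 * t                        # illustrative phase, so torsion = 0.3

def pauli_vec(M):
    return np.array([0.5 * np.real(np.trace(p @ M)) for p in (sx, sy, sz)])

def frenet_frame(t, n=4000):
    """{T, N, B} of the z-error curve at time t for H0 = Omega/2 (cosPhi sx + sinPhi sy)."""
    dt = t / n
    U = np.eye(2, dtype=complex)
    for k in range(n):
        s = (k + 0.5) * dt
        H = 0.5 * Omega(s) * (np.cos(Phi(s)) * sx + np.sin(Phi(s)) * sy)
        w, P = np.linalg.eigh(H)
        U = (P @ np.diag(np.exp(-1j * w * dt)) @ P.conj().T) @ U
    Ud = U.conj().T
    T_ = pauli_vec(Ud @ sz @ U)
    N_ = pauli_vec(Ud @ (-np.sin(Phi(t)) * sx + np.cos(Phi(t)) * sy) @ U)
    B_ = pauli_vec(Ud @ (-np.cos(Phi(t)) * sx - np.sin(Phi(t)) * sy) @ U)
    return T_, N_, B_

t0, h = 0.8, 1e-4
T1, N1, B1 = frenet_frame(t0)
T2, _, B2 = frenet_frame(t0 + h)
print(np.dot(T1, N1), np.dot(T1, B1))            # orthogonality: both ≈ 0
print(np.dot((T2 - T1) / h, N1), Omega(t0))      # curvature: T'·N ≈ Omega(t0)
print(np.dot((B2 - B1) / h, N1))                 # torsion: B'·N ≈ -Phi' = -0.3
```

Because the frame is defined through unitary conjugation of fixed Pauli directions, it stays continuous even where the pulse amplitude crosses zero, which is the point of the singularity-free construction.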
However, from the mathematical point of view, the existence and continuity of the Frenet-Serret frame are not guaranteed in standard differential geometry in the vicinity of curvature-vanishing points, since the definitions of $\mathbf{N}$, $\mathbf{B}$, and the torsion require the curvature to be nonzero \cite{o2006elementary, pressley2010elementary}. This singularity and discontinuity issue can be solved in a purely mathematical manner by defining continuous Frenet-Serret frame vectors in terms of three singularity-free Frenet-Euler angles, as introduced in \cite{shabana2022curvature}. The corresponding control pulses of the dynamics can then be obtained from the resulting signed curvature and singularity-free torsion expressed in terms of an arc-length variable. We summarize this approach and demonstrate a few examples below. For a regular space curve in an arbitrary parametrization $\mathbf{r}(\lambda)$, the unit tangent vector $\mathbf{T}$ can always be written in terms of two angles as \begin{equation} \mathbf{T} = (x^{\prime},y^{\prime},z^{\prime})^T / |\mathbf{r}^{\prime}| = (\cos\psi \cos\theta, \sin\psi \cos\theta, \sin\theta)^T, \label{Eq_C_tangent} \end{equation} where $^\prime$ denotes the derivative with respect to $\lambda$, and the angles $\psi$ and $\theta$ are determined by the derivatives of the curve coordinates. The existence of well-defined normal vectors relies on the third angle $\phi$, defined as \begin{equation} \tan \phi = -\frac{(z^{\prime\prime}(x^{\prime 2}+y^{\prime 2})-z^{\prime}(x^{\prime} x^{\prime\prime}+y^{\prime} y^{\prime\prime}))}{|\mathbf{r}^{\prime}| (y^{\prime\prime} x^{\prime} - x^{\prime\prime} y^{\prime})}. \label{Eq_C_phi} \end{equation} At a curvature-zero point, where $\kappa=x^{\prime\prime}=y^{\prime\prime}=z^{\prime\prime}=0$, it is still well defined by taking the limit according to l'H\^opital's rule.
The other two vectors of the coordinate system are then expressed as \begin{equation} \begin{aligned} \mathbf{N} = \left( \begin{array}{c} -\sin \psi \cos \phi+\cos \psi \sin \theta \sin \phi \\ \cos \psi \cos \phi+\sin \psi \sin \theta \sin \phi \\ -\cos \theta \sin \phi \end{array}\right) \end{aligned} \label{Eq_C_normal} \end{equation} and \begin{equation} \begin{aligned} \mathbf{B} = \left( \begin{array}{c} -\sin \psi \sin \phi-\cos \psi \sin \theta \cos \phi \\ \cos \psi \sin \phi-\sin \psi \sin \theta \cos \phi \\ \cos \theta \cos \phi \end{array}\right). \end{aligned} \label{Eq_C_binormal} \end{equation} Here the three angles $\{\psi,\theta,\phi\}$, called Frenet angles, are used to define the curve geometry and resolve the ambiguity at curvature-vanishing points. We refer to \cite{shabana2022curvature} for a detailed discussion of this choice of coordinate system. According to the Frenet equations, the signed curvature and singularity-free torsion are obtained from the continuous Frenet vectors by \begin{equation} \begin{aligned} \kappa(\lambda) &= \frac{\mathbf{T}^{\prime}\cdot \mathbf{N}}{||\mathbf{r}^{\prime}||} \\ \tau(\lambda) &= - \frac{\mathbf{B}^{\prime}\cdot \mathbf{N}}{||\mathbf{r}^{\prime}||}. \end{aligned} \label{Eq_C_cur_tor} \end{equation} The geometric correspondence with the z-error curve given by Eq.~(\ref{Eq_z_correspondence1}) and the above mathematically non-singular frame choice can be connected as follows. Consider a conventional parametrization of the evolution unitary, \begin{equation} \begin{aligned} &U_{0}(t)=\left(\begin{array}{cc} u_{1}(t) & -u_{2}^{*}(t) \\ u_{2}(t) & u_{1}^{*}(t) \end{array}\right) \\ &u_{1}(t)=e^{\frac{1}{2} i(\psi_1(t) + \phi_1(t))} \cos \left(\frac{\theta_1(t)}{2}\right) \\ &u_{2}(t)=-i e^{\frac{1}{2} i(\psi_1(t)-\phi_1(t))} \sin \left(\frac{\theta_1(t)}{2}\right).
\end{aligned} \label{Eq_C_para_unitary} \end{equation} By equating the Frenet vectors defined by Eqs.~(\ref{Eq_C_tangent})-(\ref{Eq_C_binormal}) and Eq.~(\ref{Eq_z_correspondence1}) with the unitary parametrization of Eq.~(\ref{Eq_C_para_unitary}), we obtain the following relations among the angles: \begin{equation} \begin{aligned} \psi_1 &= \psi - \frac{\pi}{2} \\ \theta_1 &= \frac{\pi}{2} - \theta \\ \phi_1 + \Phi &= \frac{\pi}{2} - \phi. \end{aligned} \end{equation} As indicated in \cite{zeng2019geometric}, the initial condition of the ideal evolution and the target gate unitary at the given gate time $T$ set the boundary conditions of these angles. Note that $U_0(0)=I$ gives the initial conditions $\theta_1(0)=0$ and $\psi_1(0) = - \phi_1(0)$. Since $H_0(t) = i\dot{U}_0(t)U_0^{\dagger}(t)$, we have $$ ( i \dot{\psi}_1 \sin\theta_1 + \dot{\theta}_1 ) e^{-i \phi_1} = \Omega(t)e^{i\Phi} $$ and $\Phi(0) = - \phi_1(0) = \psi_1(0)$. The final values $\theta_1(T)$ and $\psi_1(T)$ correspond to the final-time tangent vector $\dot{\mathbf{r}}(T)$, and $\phi_1(T)$ is related to the total torsion through $$ \phi_1(T)-\phi_1(0) = -\int_{0}^{T} \tau(t) d t - \arg[ i \dot{\psi}_1 \sin\theta_1 + \dot{\theta}_1 ]|_{0} ^{T}. $$ \begin{figure} \caption{(a) A space curve in the QEED (top), the corresponding RCP (middle), and the robustness against detuning noise of the $X_{\pi}$ gate it performs (bottom). (b) A figure-eight space curve with a curvature zero, its conventional and signed curvature, and the robustness of the resulting identity operation.} \label{Fig_spacecurve} \end{figure} Having established the correspondence from the space curve to the control pulse, we further elucidate it by constructing robust control pulses from the following examples of space curves. First, consider the unit-speed space curve $$ \begin{aligned} \mathbf{r}(t) = \left( \begin{array}{c} (1+\cos \frac{t}{2} )\cos \frac{t}{2} \\ (1-\cos \frac{t}{2})\sin \frac{t}{2} \\ \frac{4}{3} \sin \frac{3t}{4} \end{array}\right) \end{aligned} $$ with $t \in [0,4\pi]$. Note that in the context of curve geometry, we refer to $t$ as the curve length variable.
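Before working through the examples, the Frenet-angle construction of Eqs.~(\ref{Eq_C_tangent})-(\ref{Eq_C_cur_tor}) can be sanity-checked on a circular helix $\mathbf{r}(t)=(a\cos t, a\sin t, bt)$, whose curvature $a/(a^2+b^2)$ and torsion $b/(a^2+b^2)$ are known in closed form. The sketch below, with illustrative parameter values not taken from the paper, recovers both from the Frenet angles:

```python
import numpy as np

# Circular helix: known curvature a/(a^2+b^2) and torsion b/(a^2+b^2).
a, b = 1.0, 0.5    # illustrative parameters

def frame(t):
    """Frenet-angle frame of Eqs. (C2)-(C5) from the curve derivatives."""
    rp = np.array([-a * np.sin(t), a * np.cos(t), b])        # r'
    rpp = np.array([-a * np.cos(t), -a * np.sin(t), 0.0])    # r''
    s = np.linalg.norm(rp)
    psi = np.arctan2(rp[1], rp[0])
    theta = np.arcsin(rp[2] / s)
    num = -(rpp[2] * (rp[0]**2 + rp[1]**2) - rp[2] * (rp[0] * rpp[0] + rp[1] * rpp[1]))
    den = s * (rpp[1] * rp[0] - rpp[0] * rp[1])
    phi = np.arctan2(num, den)
    N = np.array([-np.sin(psi) * np.cos(phi) + np.cos(psi) * np.sin(theta) * np.sin(phi),
                  np.cos(psi) * np.cos(phi) + np.sin(psi) * np.sin(theta) * np.sin(phi),
                  -np.cos(theta) * np.sin(phi)])
    B = np.array([-np.sin(psi) * np.sin(phi) - np.cos(psi) * np.sin(theta) * np.cos(phi),
                  np.cos(psi) * np.sin(phi) - np.sin(psi) * np.sin(theta) * np.cos(phi),
                  np.cos(theta) * np.cos(phi)])
    return s, rp / s, N, B

t0, h = 0.3, 1e-5
s, T, N, B = frame(t0)
_, Tp, _, Bp = frame(t0 + h)
_, Tm, _, Bm = frame(t0 - h)
kappa = ((Tp - Tm) / (2 * h)) @ N / s     # kappa = T' . N / |r'|, Eq. (C6)
tau = -((Bp - Bm) / (2 * h)) @ N / s      # tau = -B' . N / |r'|, Eq. (C6)
```

With these values, $\kappa$ and $\tau$ agree with $a/(a^2+b^2)=0.8$ and $b/(a^2+b^2)=0.4$ to finite-difference accuracy.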
For this curve, one can determine the Frenet angles unambiguously from the tangent vector and Eq.~(\ref{Eq_C_tangent}), obtaining $\theta = - 3t/4 + \pi/2 $, $\psi = -t/4 + \pi$, and $\tan\phi = -3/\sin(3t/4)$. The vectors $\mathbf{N}$ and $\mathbf{B}$ then follow straightforwardly, and the curvature and torsion are given by $$ \begin{aligned} \kappa(t) = \frac{1}{8} \sqrt{38 - 2 \cos \frac{3t}{2} } \\ \tau(t) = \frac{-73 \cos \frac{3t}{4} + \cos \frac{9t}{4} }{-152 + 8\cos \frac{3t}{2}}. \end{aligned} $$ The corresponding pulses obtained from this curve via the correspondence of Eq.~(\ref{Eq_z_correspondence2}) are shown in Fig.~\ref{Fig_spacecurve}(a). They generate a $\pi$ rotation if we choose an initial phase $\pi/2$ as the integration constant of $\Phi$. The robustness against detuning noise of the $X_{\pi}$ gate performed by these pulses is also shown in Fig.~\ref{Fig_spacecurve}(a). In the special case of a plane curve, we have $\theta = \phi = 0$, $\mathbf{T}=(\cos\psi,\sin\psi,0)^T$, and $\mathbf{N}=(-\sin\psi,\cos\psi,0)^T$ with $\cos\psi = x^{\prime}/\sqrt{x^{\prime 2}+y^{\prime 2}}$ and $\sin\psi = y^{\prime}/\sqrt{x^{\prime 2}+y^{\prime 2}}$. In this case, the normal vector is well defined regardless of curvature zeros. According to Eq.~(\ref{Eq_C_cur_tor}), the signed curvature of a plane curve is given by \begin{equation} \kappa(\lambda) = \frac{x^{\prime} y^{\prime \prime}-y^{\prime} x^{\prime \prime}}{\left(x^{\prime 2}+y^{\prime 2}\right)^{3 / 2}}. \end{equation} Taking $\mathbf{r}(\lambda)=(\sin 2\lambda, 3.5\sin\lambda,0 )$ with $\lambda \in [0,2\pi]$ as an example, we have $\kappa(\lambda) = (10.5 \sin\lambda + 3.5 \sin 3\lambda)/||\mathbf{r}^{\prime}||^{3}$ with $||\mathbf{r}^{\prime}|| = (12.25\cos^2\lambda + 4\cos^2 2\lambda)^{1/2}$. To obtain the corresponding pulse, we perform a numerical variable transformation to express the curvature in terms of the curve length variable $t$, i.e., $\Omega(t) = \kappa(t)$.
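Both curvature expressions above can be spot-checked numerically: for the unit-speed space curve, $|\kappa(t)|$ must equal $\|\ddot{\mathbf{r}}(t)\|$, and for the plane curve the stated numerator can be compared against $x^{\prime}y^{\prime\prime}-y^{\prime}x^{\prime\prime}$. A minimal finite-difference sketch:

```python
import numpy as np

# Unit-speed space curve from the text; at unit speed, |kappa(t)| = ||r''(t)||.
def r(t):
    return np.array([(1 + np.cos(t / 2)) * np.cos(t / 2),
                     (1 - np.cos(t / 2)) * np.sin(t / 2),
                     (4 / 3) * np.sin(3 * t / 4)])

h = 1e-4
for t in np.linspace(0.5, 4 * np.pi - 0.5, 9):
    rp = (r(t + h) - r(t - h)) / (2 * h)              # central first derivative
    rpp = (r(t + h) - 2 * r(t) + r(t - h)) / h**2     # central second derivative
    assert abs(np.linalg.norm(rp) - 1) < 1e-6         # unit-speed parametrization
    kappa = np.sqrt(38 - 2 * np.cos(3 * t / 2)) / 8   # stated curvature
    assert abs(np.linalg.norm(rpp) - kappa) < 1e-4

# Plane curve r = (sin 2l, 3.5 sin l, 0): the stated numerator
# 10.5 sin(l) + 3.5 sin(3l) should equal x'y'' - y'x'' identically.
for lam in np.linspace(0, 2 * np.pi, 13):
    xp, yp = 2 * np.cos(2 * lam), 3.5 * np.cos(lam)
    xpp, ypp = -4 * np.sin(2 * lam), -3.5 * np.sin(lam)
    assert abs((xp * ypp - yp * xpp)
               - (10.5 * np.sin(lam) + 3.5 * np.sin(3 * lam))) < 1e-12
```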
Systematic construction of first- and second-order robust control pulses from analytical plane curves is discussed in the next section. To demonstrate a space curve with curvature zeros, we take as an example the figure-eight space curve shown in Fig.~\ref{Fig_spacecurve}(b), $\mathbf{r}(\lambda)=(\sin 2\lambda, 3.5\sin\lambda, 3.5\sin\lambda )$ with $\lambda \in [0,2\pi]$. We have $\sin\psi = \sin\theta = 3.5\cos \lambda / ||\mathbf{r}^{\prime}||$ and $\tan\psi = - 2\cos 2\lambda / ||\mathbf{r}^{\prime}||$ with $||\mathbf{r}^{\prime}|| = (24.5 \cos^2\lambda + 4 \cos^2 2 \lambda)^{1/2}$. At the curvature zero $t=\pi$, $\tan\phi$ is still well defined by taking the limit of Eq.~(\ref{Eq_C_phi}). The continuous tangent and normal vectors are illustrated by arrows on the space curve. The conventional curvature and the signed curvature, defined by $\| \mathbf{r}^{\prime} \times \mathbf{r}^{\prime\prime} \| / \|\mathbf{r}^{\prime}\|^{3}$ in terms of the curve coordinates and by Eq.~(\ref{Eq_C_cur_tor}) in terms of the continuous Frenet vectors, respectively, are plotted in Fig.~\ref{Fig_spacecurve}(b) for comparison. The curvature zero at $t=\pi$ leads to a singularity and discontinuity of the normal vector and torsion in the framework of the standard differential geometry of space curves, while their counterparts obtained from the Frenet angles are continuous and singularity-free. The corresponding robust pulses are given by the signed curvature and singularity-free torsion ($\tau(\lambda)=0$ for this example) obtained from the continuous Frenet vectors (Eq.~(\ref{Eq_C_cur_tor})). The robust pulse performs an identity operation, and its robustness against detuning noise is presented in Fig.~\ref{Fig_spacecurve}(b).
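To illustrate that Eq.~(\ref{Eq_C_phi}) stays finite at the curvature zero of the figure-eight curve, the sketch below evaluates the ratio defining $\tan\phi$ on both sides of $\lambda=\pi$. Numerator and denominator each vanish there, yet both one-sided values agree, so the limit exists:

```python
import numpy as np

def derivs(lam):
    """First and second derivatives of r = (sin 2l, 3.5 sin l, 3.5 sin l)."""
    rp = np.array([2 * np.cos(2 * lam), 3.5 * np.cos(lam), 3.5 * np.cos(lam)])
    rpp = np.array([-4 * np.sin(2 * lam), -3.5 * np.sin(lam), -3.5 * np.sin(lam)])
    return rp, rpp

def tan_phi(lam):
    """The ratio of Eq. (C3) evaluated directly from the curve derivatives."""
    rp, rpp = derivs(lam)
    num = -(rpp[2] * (rp[0]**2 + rp[1]**2) - rp[2] * (rp[0] * rpp[0] + rp[1] * rpp[1]))
    den = np.linalg.norm(rp) * (rpp[1] * rp[0] - rpp[0] * rp[1])
    return num / den

# At lambda = pi the curvature vanishes (r'' = 0) and num, den -> 0 individually,
# but the ratio tends to a finite limit, approximately -2/sqrt(28.5).
left, right = tan_phi(np.pi - 1e-6), tan_phi(np.pi + 1e-6)
```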
\section{Robust Pulses from Analytical Construction of Plane Curves} \label{AppendixD} \begin{figure*} \caption{(a) First-order robust control pulses $\{ R_{1^{\prime};\bot}^{\pi},R_{1^{\prime};\bot}^{7\pi/4},R_{1^{\prime};\bot}^{5\pi/2},R_{1^{\prime};\bot}^{2\pi} \}$ and the corresponding plane curves. (b) Second-order RCPs and the corresponding plane curves. (c) Robustness against detuning noise of the single-qubit gates performed by the first- and second-order RCPs.} \label{Fig_ana_pulse} \end{figure*} In this section, we construct a set of first- and second-order RCPs against longitudinal error from analytical plane curves that are closed and have zero net area; these are the z-error curves of the corresponding dynamics satisfying the first- and second-order robustness conditions. First, a series of first-order RCPs can be generated from the modified half lemniscates of Bernoulli used in \cite{zeng2018general}, \begin{equation} \begin{array}{c} x_{1}(\lambda )=\frac{\alpha \sin (2\lambda )}{3+\cos (2\lambda )} \\ y_{1}(\lambda )=\frac{2\sin (\lambda )}{3+\cos (2\lambda )}. \end{array} \label{Eq_D_curve1} \end{equation} When $0\leq \lambda \leq \pi $, this is a closed curve that subtends an angle $\theta =\pi -2\arctan (\frac{1}{\alpha })$ at the origin. The rotation angle of the corresponding pulse is given by the total curvature, i.e., the total winding angle of the tangent vector of the curve, $\phi=\int_{0}^{\Lambda}\kappa(t)\,dt = 2\pi-2\arctan(\frac{1}{\alpha})$, where $\Lambda$ is the total curve length. Thus, by tuning $\alpha$, one can obtain curves for first-order RCPs with rotation angle $\pi<\phi<2\pi$. In addition, the curve for a $\pi $ pulse can be obtained by multiplying $x_{1}(\lambda)$ by a power of $ \sin(\lambda)$ to diminish the angle subtended at the origin and guarantee that the curvature starts and ends at zero. A first-order $2\pi$ pulse can be constructed from a modified circle. We construct four plane curves for first-order RCPs $\{ R_{1^{\prime};\bot}^{\pi},R_{1^{\prime};\bot}^{7\pi/4},R_{1^{\prime};\bot}^{5\pi/2},R_{1^{\prime};\bot}^{2\pi} \}$. The analytical expressions of the plane curves are listed in Table~\ref{table2}, and the corresponding RCPs are obtained by calculating their curvature in the unit-speed parametrization.
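The total winding angle of the tangent can be checked numerically for $\alpha=1$, where the expected rotation angle is $2\pi - 2\arctan(1) = 3\pi/2$. A minimal sketch:

```python
import numpy as np

# Half-lemniscate of Eq. (D1) with alpha = 1; the tangent's total winding
# angle should equal the pulse rotation angle 2*pi - 2*arctan(1/alpha) = 3*pi/2.
alpha = 1.0
lam = np.linspace(1e-6, np.pi - 1e-6, 200001)
x = alpha * np.sin(2 * lam) / (3 + np.cos(2 * lam))
y = 2 * np.sin(lam) / (3 + np.cos(2 * lam))

# Continuous tangent angle along the curve; unwrap removes 2*pi jumps so the
# endpoint difference gives the net winding, i.e. the total signed curvature.
ang = np.unwrap(np.arctan2(np.gradient(y, lam), np.gradient(x, lam)))
total = abs(ang[-1] - ang[0])
```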
The four first-order RCPs and the corresponding plane curves are shown in Fig.~\ref{Fig_ana_pulse}(a). We then present a piecewise construction of the plane curves for second-order RCPs with rotation angle $0<\phi <\pi $. The basic constituents of the composite curve are the aforementioned modified Bernoulli half lemniscates $\{x_{1}(\lambda ),y_{1}(\lambda )\}$ and the sinusoidal curve \begin{equation} \begin{array}{c} x_{2}(\lambda )=\alpha \sin (2\lambda ) \\ y_{2}(\lambda )=2\lambda . \end{array} \label{Eq_D_curve2} \end{equation} The equation of the composite curve is \begin{equation} \left\{ \begin{array}{l} x(\lambda )=-\beta x_{2}(\lambda ),\ y(\lambda )=\beta y_{2}(\lambda ),\quad 0\leq \lambda <\frac{\pi }{2} \\ x(\lambda )=x_{1}(\lambda -\frac{\pi }{2}),\ y(\lambda )=\beta y_{2}( \frac{\pi }{2})+y_{1}(\lambda -\frac{\pi }{2}), \\ \multicolumn{1}{r}{\frac{\pi }{2}\leq \lambda <\frac{3\pi }{2}} \\ x(\lambda )=\beta x_{2}(2\pi -\lambda ),\ y(\lambda )=\beta y_{2}(2\pi -\lambda ), \\ \multicolumn{1}{r}{\frac{3\pi }{2}\leq \lambda \leq 2\pi } \end{array} \right. \label{Eq_D_comp_curve} \end{equation} where $\alpha $ and $\beta $ are two parameters determined by the target rotation angle $\phi =2\arctan (\frac{1}{\alpha })$ and the zero-net-area condition $\int_{0}^{2\pi }(y^{\prime }x-x^{\prime }y)d\lambda =0$. We construct the RCPs $R_{2;\bot}^{\pi/4}$ and $R_{2;\bot}^{\pi/2}$ from the composite curve of Eq.~(\ref{Eq_D_comp_curve}); the RCP $R_{2;\bot}^{\pi}$ is obtained by modifying the curve for $R_{1^{\prime};\bot}^{\pi}$, and $R_{2;\bot}^{2\pi}$ is generated from the curve of Eq.~(\ref{Eq_D_curve1}) with $0\le \lambda \le 2\pi$. The four plane curves and the corresponding second-order RCPs are shown in Fig.~\ref{Fig_ana_pulse}(b), and the analytical expressions for the four constructed curves are listed in Table~\ref{table2}.
Fig.~\ref{Fig_ana_pulse}(c) shows the robustness against detuning noise of the single-qubit gates $\{ X_{\pi},X_{\pi/4},X_{\pi/2},X_{2\pi} \}$ performed by the first- and second-order RCPs mentioned above. All of the RCPs exhibit robust infidelity plateaus with values much smaller than that of the cosine pulse within the noise region from $0.01\%$ to $1\%$. \begin{table*} \setlength{\tabcolsep}{20pt} \renewcommand{\arraystretch}{1.4} \begin{tabular}{ll} \hline\hline RCPs & Curve Functions \\ \hline $R_{1^{\prime};\bot}^{3\pi/2}$ & $x(\lambda) = x_{1}(\lambda)$ \quad$y(\lambda) = y_{1}(\lambda)$, \quad$\alpha= 1$, \quad $0 \le\lambda\le\pi$ \\ \hline \multirow{4}{3em} {$R_{2;\bot}^{\pi/2}$} & $x(\lambda) = -\beta x_{2}(\lambda)$ \quad$y(\lambda) = \beta y_{2}(\lambda)$, \quad$0 \le\lambda< \frac{\pi}{2}$ \\ & $x(\lambda) = x_{1}(\lambda-\frac{\pi}{2})$ \quad$y(\lambda) = \beta y_{2}(\frac{\pi}{2}) + y_{1}(\lambda-\frac{\pi}{2})$, \quad$\frac{\pi}{2} \le\lambda< \frac{3\pi}{2}$ \\ & $x(\lambda) = \beta x_{2}(2\pi- \lambda)$ \quad$y(\lambda) = \beta y_{2}(2\pi- \lambda)$, \quad$\frac{3\pi}{2} \le\lambda\le2\pi$\\ & $\alpha= 1$ \quad$\beta= 0.3535534$ \\ \hline $R_{1^{\prime};\bot}^{7\pi/4}$ & $x_{1^{\prime}}(\lambda) = x_{1}(\lambda)$ \quad $y_{1^{\prime}}(\lambda) = y_{1}(\lambda)(-0.3\lambda (\lambda - \pi) + 1)$, \quad $\alpha= 1$, \quad $0 \le\lambda\le\pi$ \\ \hline \multirow{4}{3em} {$R_{2;\bot}^{\pi/4}$} & $x(\lambda) = -\beta x_{2}(\lambda)$ \quad $y(\lambda) = \beta y_{2}(\lambda)$, \quad $0 \le\lambda< \frac{\pi}{2}$ \\ & $x(\lambda) = x_{1^{\prime}}(\lambda-\frac{\pi}{2})$ \quad $ y(\lambda) = \beta y_{2}(\frac{\pi}{2}) + y_{1^{\prime}}(\lambda-\frac{\pi}{2})$, \quad $\frac{\pi}{2} \le\lambda< \frac{3\pi}{2}$ \\ & $x(\lambda) = \beta x_{2}(2\pi- \lambda)$ \quad$y(\lambda) = \beta y_{2}(2\pi- \lambda)$, \quad$\frac{3\pi}{2} \le\lambda\le2\pi$\\ & $\alpha= 2.4142136$ \quad $\beta= 0.4801245$ \\ \hline $R_{1^{\prime};\bot}^{\pi}$ & $x_{1^{\prime}}(\lambda) = 
x_{1}(\lambda)\sin^{2}(\lambda)$ \quad $y_{1^{\prime}}(\lambda) = y_{1}(\lambda)$, \quad $\alpha= 0.72$, \quad $0 \le\lambda\le\pi$ \\ \hline \multirow{2}{3em} {$R_{2;\bot}^{\pi}$} & $x(\lambda) = x_{1^{\prime}}(\lambda)(\lambda- (\frac{\pi}{2}-b))(\lambda- (\frac{\pi}{2}+b))$ \quad $y(\lambda) = 0.25y_{1^{\prime}}(\lambda)$ \\ & $\alpha= -0.3$, $b = 0.6100818$ \quad $0 \le\lambda\le\pi$ \\ \hline $R_{1^{\prime};\bot}^{2\pi}$ & $x(\lambda) = \frac{ 2.4 \sin(2\lambda+\pi) }{2 + \cos 2\lambda } $ \quad $ y(\lambda) = \frac{ \cos(2\lambda + \pi) + 1 }{2 + \cos 2\lambda } \sin(\lambda+\pi) $ \\ \hline $R_{2;\bot}^{2\pi}$ & $x(\lambda) = x_{1}(\lambda)$ \quad$y(\lambda) = y_{1}(\lambda)$, \quad$\alpha= 1$, \quad $0 \le\lambda\le 2\pi$ \\ \hline\hline \end{tabular} \caption{Plane curve functions for RCPs. The curve ansatz $\{ x_1, y_1\}$ and $\{ x_2, y_2\}$ given by Eq.~(\ref{Eq_D_curve1}) and Eq.~(\ref{Eq_D_curve2}) are used. For the curve functions of the RCPs $R_{1^{\prime};\bot}^{7\pi/4}$ and $R_{1^{\prime};\bot}^{\pi}$, additional modifications are made to smooth the resulting pulses, and the modified curve functions (denoted by $\{ x_{1^{\prime}}, y_{1^{\prime}}\}$) are used in constructing the plane curves for the $R_{2;\bot}^{\pi/4}$ and $R_{2;\bot}^{\pi}$ pulses. } \label{table2} \end{table*} \section{Supplementary numerical results} \label{AppendixB} As an additional assessment of the robustness of our RCPs presented in Sec.~\ref{Sec_protocol}, we calculate the Magnus expansion coefficients up to the fourth order numerically.
The rescaled Magnus coefficients $\bar{A}_n = 10^{-n}||A_n||$ of the $X_{\pi/4}$ gate evolution produced by the $R_{1;\bot }^{\pi/4}$, $R_{2;\bot}^{\pi/4}$, $R_{\text{ex};\bot}^{\pi/4}$, and cosine pulses are plotted in Fig.~\ref{Fig_robustness_order} for comparison, where $R_{1;\bot }^{\pi/4}$ and $R_{\text{ex};\bot}^{\pi/4}$ are RCPs obtained by our pulse generation protocol and $R_{2;\bot}^{\pi/4}$ is an RCP with second-order robustness constructed from the analytical plane curve in Appendix~\ref{AppendixD}. \begin{figure} \caption{Numerical certification of robust pulses. The first- to fourth-order rescaled Magnus error coefficients for the $X_{\pi/4}$ gate evolution produced by the $R_{1;\bot}^{\pi/4}$, $R_{2;\bot}^{\pi/4}$, $R_{\text{ex};\bot}^{\pi/4}$, and cosine pulses.} \label{Fig_robustness_order} \end{figure} Compared with the cosine pulse, $R_{1;\bot }^{\pi/4}$ and $R_{2;\bot}^{\pi/4}$ have small Magnus coefficients up to first and second order, respectively, while all four coefficients for the $R_{\text{ex};\bot}^{\pi/4}$ pulse are significantly suppressed, indicating its higher robustness. \begin{figure} \caption{(a) Left: the $R_{1;\text{all}}^{3\pi/2}$ pulse rescaled from $50$ ns to $80$ ns. Right: the fidelity of the $Y_{3\pi/2}$ gate generated by the original and rescaled pulses. (b) The same for the $R_{1;\text{all}}^{\pi}$ pulse under direct x-y control, rescaled from $50$ ns to $100$ ns.} \label{Fig_pulse_rescale} \end{figure} Our robust control pulses are suitable for different gate times, and rescaling a pulse in the time domain does not change its robustness, since the substitution $t \rightarrow \alpha t$, $\Omega \rightarrow \Omega/\alpha $ does not change Eq.~(\ref{Eq_z_correspondence2}) and thus maintains the geometric correspondence between pulses and their error curves. Here we present a $50$ ns pulse for a $3\pi/2$ rotation around the Y axis, obtained by adding a $\pi/2$ phase constant to the RCP $R_{1;\text{all}}^{3\pi/2}$ presented in the main text, and rescale it to an $80$ ns pulse. It maintains robustness in three axes with an error Hamiltonian of the form of Eq.~(\ref{Eq_GendeltaH}). Fig.~\ref{Fig_pulse_rescale}(a) shows the rescaled pulse and the gate fidelity of the $Y_{3\pi/2}$ gate generated by the two pulses. The coinciding fidelity values versus the relative noise strength demonstrate their identical robustness.
In addition, our pulse generation protocol is compatible with a direct x-y control scheme, in which the noiseless Hamiltonian is of the form $H_0(t) = \frac{\Omega_x(t)}{2}\sigma_x + \frac{\Omega_y(t)}{2}\sigma_y$. Here we present a $50$ ns pulse for a $\pi$ rotation around the X axis (denoted by $R_{1;\text{all}}^{\pi}$) that has robustness in three axes and rescale it to a $100$ ns pulse. Fig.~\ref{Fig_pulse_rescale}(b) shows the rescaled pulse and the gate fidelity of the $X_{\pi}$ gate generated by the two pulses. The parameters of all RCPs presented in the main text and in this section are listed in Table~\ref{table1}. \begin{table*} \setlength{\tabcolsep}{20pt} \renewcommand{\arraystretch}{1.3} \begin{tabular}{llll} \hline\hline RCPs & & $a$ & $\phi$ \\ \hline $R_{1;\bot }^{\pi}$ & $\Omega$ & $[0.010,-0.259,-0.033]$ & $[-0.015,-0.038]$ \\ $R_{1;\bot }^{7\pi/4}$ & $\Omega$ & $[0.223,0.134,0.076]$ & $[0.001,-0.020]$ \\ $R_{1;\bot }^{5\pi/2}$ & $\Omega$ & $[0.349,0.307]$ & $[-0.003]$ \\ $R_{1;\bot }^{2\pi}$ & $\Omega$ & $[0.258,0.183]$ & $[0]$ \\ $R_{\text{ex};\bot}^{\pi}$ & $\Omega$ & $[-0.328,-1.014,-1.195,-0.304]$ & $[-0.003,-0.003,-0.008]$ \\ $R_{\text{ex};\bot}^{9\pi/4}$ & $\Omega$ & $[0.147,-0.089,-0.613,-0.161]$ & $[-0.123,-0.061,-0.073]$ \\ $R_{\text{ex};\bot}^{5\pi/2}$ & $\Omega$ & $[0.241,0.084,-0.482,-0.036]$ & $[-0.036,0.014,0.107]$ \\ $R_{\text{ex};\bot}^{2\pi}$ & $\Omega$ & $[0.042,-0.290,-0.765,-0.274]$ & $[0.003,0.003,0.003]$ \\ \multirow{2}{3em}{$R_{1;\text{all}}^{3\pi/2}$} & $\Omega$ & $[0.624,0.484,0.193,0.070,0.073]$ & $[0.005,0.013,0.003,-0.070]$ \\ & $\Phi$ & $[0.083,0.362,1.174,0.237,0.074]$ & $[0.022,0.017,0.011,-0.037]$ \\ \multirow{2}{3em}{$R_{1;\text{all}}^{\pi}$} & $\Omega_x$ & $[0.007,-0.236,0.032,-0.250]$ & $[0.008,-0.601,-0.029]$ \\ & $\Omega_y$ & $[-0.327,-0.127,0.167,0.066]$ & $[0.035,-0.079,-0.096]$ \\ \hline\hline \end{tabular} \caption{Parameters for RCPs with gate time $T=50$ ns and amplitude unit GHz.} \label{table1} \end{table*} \end{appendix}
\begin{thebibliography}{63} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Arute}\ \emph {et~al.}(2019)\citenamefont {Arute}, \citenamefont {Arya}, \citenamefont {Babbush}, \citenamefont {Bacon}, \citenamefont {Bardin}, \citenamefont {Barends}, \citenamefont {Biswas}, \citenamefont {Boixo}, \citenamefont {Brandao}, \citenamefont {Buell} \emph {et~al.}}]{arute2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Arute}}, 
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Arya}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bacon}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Bardin}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Biswas}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {F.~G.}\ \bibnamefont {Brandao}}, \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Buell}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {574}},\ \bibinfo {pages} {505} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wu}\ \emph {et~al.}(2021)\citenamefont {Wu}, \citenamefont {Bao}, \citenamefont {Cao}, \citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Chung}, \citenamefont {Deng}, \citenamefont {Du}, \citenamefont {Fan} \emph {et~al.}}]{wu2021strong} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {W.-S.}\ \bibnamefont {Bao}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {T.-H.}\ \bibnamefont {Chung}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Du}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Fan}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {180501} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Krinner}\ \emph {et~al.}(2022)\citenamefont {Krinner}, \citenamefont 
{Lacroix}, \citenamefont {Remm}, \citenamefont {Di~Paolo}, \citenamefont {Genois}, \citenamefont {Leroux}, \citenamefont {Hellings}, \citenamefont {Lazar}, \citenamefont {Swiadek}, \citenamefont {Herrmann} \emph {et~al.}}]{krinner2022realizing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Krinner}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lacroix}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Remm}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Di~Paolo}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Genois}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Leroux}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Hellings}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lazar}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Swiadek}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Herrmann}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {605}},\ \bibinfo {pages} {669} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Preskill}(2022)}]{preskill2022physics} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2208.08064}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Preskill}(2018)}]{preskill2018quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {79} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Glaser}\ \emph {et~al.}(2015)\citenamefont {Glaser}, \citenamefont {Boscain}, \citenamefont {Calarco}, \citenamefont {Koch}, \citenamefont {K{\"o}ckenberger}, \citenamefont {Kosloff}, \citenamefont {Kuprov}, \citenamefont {Luy}, \citenamefont 
{Schirmer}, \citenamefont {Schulte-Herbr{\"u}ggen} \emph {et~al.}}]{glaser2015training} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Glaser}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Boscain}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Calarco}}, \bibinfo {author} {\bibfnamefont {C.~P.}\ \bibnamefont {Koch}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {K{\"o}ckenberger}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kosloff}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Kuprov}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Luy}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Schirmer}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schulte-Herbr{\"u}ggen}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {The European Physical Journal D}\ }\textbf {\bibinfo {volume} {69}},\ \bibinfo {pages} {1} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Koch}(2016)}]{koch2016controlling} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~P.}\ \bibnamefont {Koch}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Physics: Condensed Matter}\ }\textbf {\bibinfo {volume} {28}},\ \bibinfo {pages} {213001} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Koch}\ \emph {et~al.}(2022)\citenamefont {Koch}, \citenamefont {Boscain}, \citenamefont {Calarco}, \citenamefont {Dirr}, \citenamefont {Filipp}, \citenamefont {Glaser}, \citenamefont {Kosloff}, \citenamefont {Montangero}, \citenamefont {Schulte-Herbr{\"u}ggen}, \citenamefont {Sugny} \emph {et~al.}}]{koch2022quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~P.}\ \bibnamefont {Koch}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Boscain}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Calarco}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Dirr}}, \bibinfo {author} 
{\bibfnamefont {S.}~\bibnamefont {Filipp}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Glaser}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kosloff}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Montangero}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schulte-Herbr{\"u}ggen}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Sugny}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2205.12110}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barends}\ \emph {et~al.}(2014)\citenamefont {Barends}, \citenamefont {Kelly}, \citenamefont {Megrant}, \citenamefont {Veitia}, \citenamefont {Sank}, \citenamefont {Jeffrey}, \citenamefont {White}, \citenamefont {Mutus}, \citenamefont {Fowler}, \citenamefont {Campbell} \emph {et~al.}}]{barends2014superconducting} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Veitia}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont {T.~C.}\ \bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Mutus}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Fowler}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Campbell}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {508}},\ \bibinfo {pages} {500} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xue}\ \emph {et~al.}(2022)\citenamefont {Xue}, \citenamefont {Russ}, \citenamefont {Samkharadze}, \citenamefont {Undseth}, \citenamefont {Sammak}, \citenamefont {Scappucci},\ and\ \citenamefont {Vandersypen}}]{xue2022quantum} \BibitemOpen 
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Xue}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Russ}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Samkharadze}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Undseth}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sammak}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Scappucci}}, \ and\ \bibinfo {author} {\bibfnamefont {L.~M.}\ \bibnamefont {Vandersypen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {601}},\ \bibinfo {pages} {343} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ballance}\ \emph {et~al.}(2016)\citenamefont {Ballance}, \citenamefont {Harty}, \citenamefont {Linke}, \citenamefont {Sepiol},\ and\ \citenamefont {Lucas}}]{ballance2016high} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.J.}~\bibnamefont {Ballance}}, \bibinfo {author} {\bibfnamefont {T.P.}~\bibnamefont {Harty}}, \bibinfo {author} {\bibfnamefont {N.M.}~\bibnamefont {Linke}}, \bibinfo {author} {\bibfnamefont {M.A.}~\bibnamefont {Sepiol}}, \ and\ \bibinfo {author} {\bibfnamefont {D.M.}~\bibnamefont {Lucas}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {060504} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bluvstein}\ \emph {et~al.}(2022)\citenamefont {Bluvstein}, \citenamefont {Levine}, \citenamefont {Semeghini}, \citenamefont {Wang}, \citenamefont {Ebadi}, \citenamefont {Kalinowski}, \citenamefont {Keesling}, \citenamefont {Maskara}, \citenamefont {Pichler}, \citenamefont {Greiner} \emph {et~al.}}]{bluvstein2022quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bluvstein}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Levine}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Semeghini}}, \bibinfo {author} 
{\bibfnamefont {T.~T.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ebadi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kalinowski}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Keesling}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Maskara}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Pichler}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Greiner}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {604}},\ \bibinfo {pages} {451} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Khodjasteh}\ and\ \citenamefont {Viola}(2009)}]{khodjasteh2009dynamically} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Khodjasteh}}\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Viola}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo {pages} {080501} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zeng}\ \emph {et~al.}(2018)\citenamefont {Zeng}, \citenamefont {Deng}, \citenamefont {Russo},\ and\ \citenamefont {Barnes}}]{zeng2018general} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zeng}}, \bibinfo {author} {\bibfnamefont {X.-H.}\ \bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Russo}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Barnes}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo {pages} {033011} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xiang}\ \emph {et~al.}(2020)\citenamefont {Xiang}, \citenamefont {Zong}, \citenamefont {Sun}, \citenamefont {Zhan}, \citenamefont {Fei}, \citenamefont {Dong}, \citenamefont {Run}, \citenamefont {Jia}, \citenamefont {Duan}, \citenamefont {Wu} \emph
{et~al.}}]{xiang2020simultaneous} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Xiang}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zong}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhan}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Fei}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Dong}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Run}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Jia}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Duan}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wu}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Applied}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {014099} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Terhal}\ \emph {et~al.}(2020)\citenamefont {Terhal}, \citenamefont {Conrad},\ and\ \citenamefont {Vuillot}}]{terhal2020towards} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont {Terhal}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Conrad}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Vuillot}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Science and Technology}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {043001} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cai}\ \emph {et~al.}(2021)\citenamefont {Cai}, \citenamefont {Ma}, \citenamefont {Wang}, \citenamefont {Zou},\ and\ \citenamefont {Sun}}]{cai2021bosonic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {C.-L.}\ \bibnamefont {Zou}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Sun}},\ }\href@noop {} 
{\bibfield {journal} {\bibinfo {journal} {Fundamental Research}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {50} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Manovitz}\ \emph {et~al.}(2022)\citenamefont {Manovitz}, \citenamefont {Shapira}, \citenamefont {Gazit}, \citenamefont {Akerman},\ and\ \citenamefont {Ozeri}}]{manovitz2022trapped} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Manovitz}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Shapira}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Gazit}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Akerman}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ozeri}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {PRX Quantum}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {010347} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2022)\citenamefont {Zhou}, \citenamefont {Sitler}, \citenamefont {Oda}, \citenamefont {Schultz},\ and\ \citenamefont {Quiroz}}]{zhou2022quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sitler}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Oda}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Schultz}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Quiroz}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2208.05978}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Khaneja}\ \emph {et~al.}(2005)\citenamefont {Khaneja}, \citenamefont {Reiss}, \citenamefont {Kehlet}, \citenamefont {Schulte-Herbr{\"u}ggen},\ and\ \citenamefont {Glaser}}]{khaneja2005optimal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Khaneja}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Reiss}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Kehlet}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schulte-Herbr{\"u}ggen}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Glaser}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Magnetic Resonance}\ }\textbf {\bibinfo {volume} {172}},\ \bibinfo {pages} {296} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Caneva}\ \emph {et~al.}(2011)\citenamefont {Caneva}, \citenamefont {Calarco},\ and\ \citenamefont {Montangero}}]{caneva2011chopped} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Caneva}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Calarco}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Montangero}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {022326} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yang}\ \emph {et~al.}(2019)\citenamefont {Yang}, \citenamefont {Chan}, \citenamefont {Harper}, \citenamefont {Huang}, \citenamefont {Evans}, \citenamefont {Hwang}, \citenamefont {Hensen}, \citenamefont {Laucht}, \citenamefont {Tanttu}, \citenamefont {Hudson} \emph {et~al.}}]{yang2019silicon} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Chan}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Harper}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Evans}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hwang}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Hensen}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Laucht}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Tanttu}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Hudson}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature
Electronics}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {151} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Figueiredo~Roque}\ \emph {et~al.}(2021)\citenamefont {Figueiredo~Roque}, \citenamefont {Clerk},\ and\ \citenamefont {Ribeiro}}]{figueiredo2021engineering} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Figueiredo~Roque}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Clerk}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Ribeiro}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Information}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {1} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ribeiro}\ \emph {et~al.}(2017)\citenamefont {Ribeiro}, \citenamefont {Baksic},\ and\ \citenamefont {Clerk}}]{ribeiro2017systematic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Ribeiro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Baksic}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Clerk}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {011021} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Song}\ \emph {et~al.}(2022)\citenamefont {Song}, \citenamefont {Li}, \citenamefont {Hai}, \citenamefont {Guo},\ and\ \citenamefont {Deng}}]{song2022optimizing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Song}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Y.-J.}\ \bibnamefont {Hai}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Guo}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-H.}\ \bibnamefont {Deng}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages} {012616} (\bibinfo {year} 
{2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Berry}(1990)}]{berry1990geometric} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~V.}\ \bibnamefont {Berry}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences}\ }\textbf {\bibinfo {volume} {430}},\ \bibinfo {pages} {405} (\bibinfo {year} {1990})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zu}\ \emph {et~al.}(2014)\citenamefont {Zu}, \citenamefont {Wang}, \citenamefont {He}, \citenamefont {Zhang}, \citenamefont {Dai}, \citenamefont {Wang},\ and\ \citenamefont {Duan}}]{zu2014experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Zu}}, \bibinfo {author} {\bibfnamefont {W.-B.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {He}}, \bibinfo {author} {\bibfnamefont {W.-G.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Dai}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Wang}}, \ and\ \bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Duan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {514}},\ \bibinfo {pages} {72} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Balakrishnan}\ and\ \citenamefont {Dandoloff}(2004)}]{balakrishnan2004classical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Balakrishnan}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Dandoloff}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {European Journal of Physics}\ }\textbf {\bibinfo {volume} {25}},\ \bibinfo {pages} {447} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ekert}\ \emph {et~al.}(2000)\citenamefont {Ekert}, \citenamefont {Ericsson}, \citenamefont {Hayden}, \citenamefont {Inamori}, \citenamefont {Jones}, \citenamefont {Oi},\ and\ 
\citenamefont {Vedral}}]{ekert2000geometric} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ekert}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ericsson}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Hayden}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Inamori}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Jones}}, \bibinfo {author} {\bibfnamefont {D.~K.}\ \bibnamefont {Oi}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vedral}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Modern Optics}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {2501} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2020{\natexlab{a}})\citenamefont {Xu}, \citenamefont {Hua}, \citenamefont {Chen}, \citenamefont {Pan}, \citenamefont {Li}, \citenamefont {Han}, \citenamefont {Cai}, \citenamefont {Ma}, \citenamefont {Wang}, \citenamefont {Song} \emph {et~al.}}]{xu2020experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Hua}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Pan}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Han}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Y.P.}~\bibnamefont {Song}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo {pages} {230503} (\bibinfo {year} {2020}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zeng}\ \emph {et~al.}(2019)\citenamefont {Zeng}, \citenamefont {Yang}, \citenamefont {Dzurak},\ 
and\ \citenamefont {Barnes}}]{zeng2019geometric} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zeng}}, \bibinfo {author} {\bibfnamefont {C.H.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {A.S.}~\bibnamefont {Dzurak}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Barnes}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {052321} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barnes}\ \emph {et~al.}(2022)\citenamefont {Barnes}, \citenamefont {Calderon-Vargas}, \citenamefont {Dong}, \citenamefont {Li}, \citenamefont {Zeng},\ and\ \citenamefont {Zhuang}}]{barnes2022dynamically} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Barnes}}, \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Calderon-Vargas}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Dong}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zeng}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Zhuang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Science and Technology}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {023001} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Acharya}\ \emph {et~al.}(2022)\citenamefont {Acharya}, \citenamefont {Aleiner}, \citenamefont {Allen}, \citenamefont {Andersen}, \citenamefont {Ansmann}, \citenamefont {Arute}, \citenamefont {Arya}, \citenamefont {Asfaw}, \citenamefont {Atalaya}, \citenamefont {Babbush} \emph {et~al.}}]{acharya2022suppressing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Acharya}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Aleiner}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Allen}}, \bibinfo {author} {\bibfnamefont {T.~I.}\ \bibnamefont {Andersen}}, 
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ansmann}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Arute}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Arya}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Asfaw}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Atalaya}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2207.06431}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Carballeira}\ \emph {et~al.}(2021)\citenamefont {Carballeira}, \citenamefont {Dolgitzer}, \citenamefont {Zhao}, \citenamefont {Zeng},\ and\ \citenamefont {Chen}}]{carballeira2021stochastic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Carballeira}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Dolgitzer}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Zeng}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {1} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Clerk}(2022)}]{clerk2022introduction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Clerk}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {SciPost Physics Lecture Notes}\ ,\ \bibinfo {pages} {044}} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Deng}\ \emph {et~al.}(2021)\citenamefont {Deng}, \citenamefont {Hai}, \citenamefont {Li},\ and\ \citenamefont {Song}}]{deng2021correcting} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-H.}\ \bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {Y.-J.}\ \bibnamefont {Hai}}, \bibinfo {author} {\bibfnamefont {J.-N.}\ \bibnamefont {Li}}, \ and\ 
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Song}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2103.08169}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Krinner}\ \emph {et~al.}(2020)\citenamefont {Krinner}, \citenamefont {Lazar}, \citenamefont {Remm}, \citenamefont {Andersen}, \citenamefont {Lacroix}, \citenamefont {Norris}, \citenamefont {Hellings}, \citenamefont {Gabureac}, \citenamefont {Eichler},\ and\ \citenamefont {Wallraff}}]{krinner2020benchmarking} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Krinner}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lazar}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Remm}}, \bibinfo {author} {\bibfnamefont {C.K.}~\bibnamefont {Andersen}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lacroix}}, \bibinfo {author} {\bibfnamefont {G.J.}~\bibnamefont {Norris}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Hellings}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gabureac}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Eichler}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wallraff}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Applied}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {024042} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Burnett}\ \emph {et~al.}(2019)\citenamefont {Burnett}, \citenamefont {Bengtsson}, \citenamefont {Scigliuzzo}, \citenamefont {Niepce}, \citenamefont {Kudra}, \citenamefont {Delsing},\ and\ \citenamefont {Bylander}}]{burnett2019decoherence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Burnett}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bengtsson}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Scigliuzzo}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Niepce}}, \bibinfo {author} {\bibfnamefont 
{M.}~\bibnamefont {Kudra}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Delsing}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Bylander}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Information}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {1} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lisenfeld}\ \emph {et~al.}(2019)\citenamefont {Lisenfeld}, \citenamefont {Bilmes}, \citenamefont {Megrant}, \citenamefont {Barends}, \citenamefont {Kelly}, \citenamefont {Klimov}, \citenamefont {Weiss}, \citenamefont {Martinis},\ and\ \citenamefont {Ustinov}}]{lisenfeld2019electric} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lisenfeld}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bilmes}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Klimov}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Weiss}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~V.}\ \bibnamefont {Ustinov}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Information}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {1} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {O'Neill}(2006)}]{o2006elementary} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {O'Neill}},\ }\href@noop {} {\emph {\bibinfo {title} {Elementary differential geometry}}}\ (\bibinfo {publisher} {Elsevier},\ \bibinfo {year} {2006})\BibitemShut {NoStop} \bibitem [{\citenamefont {Pressley}(2010)}]{pressley2010elementary} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Pressley}},\ }\href@noop {} {\emph {\bibinfo {title} {Elementary
differential geometry}}}\ (\bibinfo {publisher} {Springer Science \& Business Media},\ \bibinfo {year} {2010})\BibitemShut {NoStop} \bibitem [{\citenamefont {Blanes}\ \emph {et~al.}(2009)\citenamefont {Blanes}, \citenamefont {Casas}, \citenamefont {Oteo},\ and\ \citenamefont {Ros}}]{blanes2009magnus} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Blanes}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Casas}}, \bibinfo {author} {\bibfnamefont {J.-A.}\ \bibnamefont {Oteo}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ros}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physics Reports}\ }\textbf {\bibinfo {volume} {470}},\ \bibinfo {pages} {151} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pedersen}\ \emph {et~al.}(2007)\citenamefont {Pedersen}, \citenamefont {M{\o}ller},\ and\ \citenamefont {M{\o}lmer}}]{pedersen2007fidelity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~H.}\ \bibnamefont {Pedersen}}, \bibinfo {author} {\bibfnamefont {N.~M.}\ \bibnamefont {M{\o}ller}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {M{\o}lmer}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physics Letters A}\ }\textbf {\bibinfo {volume} {367}},\ \bibinfo {pages} {47} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shor}(1996)}]{shor199637th} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Shor}},\ }\href@noop {} {\enquote {\bibinfo {title} {Fault-tolerant quantum computation},}\ }in \bibinfo {booktitle} {Proceedings of the 37th Symposium on Foundations of Computer Science}\ (\bibinfo {year} {1996})\BibitemShut {NoStop} \bibitem [{\citenamefont {Noiri}\ \emph {et~al.}(2022)\citenamefont {Noiri}, \citenamefont {Takeda}, \citenamefont {Nakajima}, \citenamefont {Kobayashi}, \citenamefont {Sammak}, \citenamefont {Scappucci},\ and\ \citenamefont {Tarucha}}]{noiri2022fast} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Noiri}}, 
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Takeda}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Nakajima}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Kobayashi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sammak}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Scappucci}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Tarucha}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {601}},\ \bibinfo {pages} {338} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {M{\k{a}}dzik}\ \emph {et~al.}(2022)\citenamefont {M{\k{a}}dzik}, \citenamefont {Asaad}, \citenamefont {Youssry}, \citenamefont {Joecker}, \citenamefont {Rudinger}, \citenamefont {Nielsen}, \citenamefont {Young}, \citenamefont {Proctor}, \citenamefont {Baczewski}, \citenamefont {Laucht} \emph {et~al.}}]{mkadzik2022precision} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~T.}\ \bibnamefont {M{\k{a}}dzik}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Asaad}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Youssry}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Joecker}}, \bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont {Rudinger}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Nielsen}}, \bibinfo {author} {\bibfnamefont {K.~C.}\ \bibnamefont {Young}}, \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Proctor}}, \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {Baczewski}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Laucht}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {601}},\ \bibinfo {pages} {348} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Philips}\ \emph {et~al.}(2022)\citenamefont {Philips}, \citenamefont {M{\k{a}}dzik}, \citenamefont {Amitonov}, \citenamefont {de~Snoo}, \citenamefont {Russ}, \citenamefont {Kalhor}, 
\citenamefont {Volk}, \citenamefont {Lawrie}, \citenamefont {Brousse}, \citenamefont {Tryputen} \emph {et~al.}}]{philips2022universal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~G.}\ \bibnamefont {Philips}}, \bibinfo {author} {\bibfnamefont {M.~T.}\ \bibnamefont {M{\k{a}}dzik}}, \bibinfo {author} {\bibfnamefont {S.~V.}\ \bibnamefont {Amitonov}}, \bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont {de~Snoo}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Russ}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Kalhor}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Volk}}, \bibinfo {author} {\bibfnamefont {W.~I.}\ \bibnamefont {Lawrie}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Brousse}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Tryputen}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2202.09252}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2019)\citenamefont {Huang}, \citenamefont {Yang}, \citenamefont {Chan}, \citenamefont {Tanttu}, \citenamefont {Hensen}, \citenamefont {Leon}, \citenamefont {Fogarty}, \citenamefont {Hwang}, \citenamefont {Hudson}, \citenamefont {Itoh} \emph {et~al.}}]{huang2019fidelity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Chan}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Tanttu}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Hensen}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Leon}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fogarty}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hwang}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Hudson}}, \bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont {Itoh}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} 
{\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {569}},\ \bibinfo {pages} {532} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bertrand}\ \emph {et~al.}(2015)\citenamefont {Bertrand}, \citenamefont {Flentje}, \citenamefont {Takada}, \citenamefont {Yamamoto}, \citenamefont {Tarucha}, \citenamefont {Ludwig}, \citenamefont {Wieck}, \citenamefont {B{\"a}uerle},\ and\ \citenamefont {Meunier}}]{bertrand2015quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Bertrand}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Flentje}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Takada}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Yamamoto}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Tarucha}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ludwig}}, \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {Wieck}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {B{\"a}uerle}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Meunier}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {115}},\ \bibinfo {pages} {096801} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Russ}\ \emph {et~al.}(2018)\citenamefont {Russ}, \citenamefont {Zajac}, \citenamefont {Sigillito}, \citenamefont {Borjans}, \citenamefont {Taylor}, \citenamefont {Petta},\ and\ \citenamefont {Burkard}}]{russ2018high} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Russ}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Zajac}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Sigillito}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Borjans}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Taylor}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Petta}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Burkard}},\ 
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {085421} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Blais}\ \emph {et~al.}(2021)\citenamefont {Blais}, \citenamefont {Grimsmo}, \citenamefont {Girvin},\ and\ \citenamefont {Wallraff}}]{blais2021circuit} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont {A.~L.}\ \bibnamefont {Grimsmo}}, \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wallraff}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {025005} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Krantz}\ \emph {et~al.}(2019)\citenamefont {Krantz}, \citenamefont {Kjaergaard}, \citenamefont {Yan}, \citenamefont {Orlando}, \citenamefont {Gustavsson},\ and\ \citenamefont {Oliver}}]{krantz2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Krantz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kjaergaard}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Yan}}, \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Orlando}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gustavsson}}, \ and\ \bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Applied Physics Reviews}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {021318} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Motzoi}\ \emph {et~al.}(2009)\citenamefont {Motzoi}, \citenamefont {Gambetta}, \citenamefont {Rebentrost},\ and\ \citenamefont {Wilhelm}}]{motzoi2009simple} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Motzoi}}, \bibinfo 
{author} {\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Rebentrost}}, \ and\ \bibinfo {author} {\bibfnamefont {F.~K.}\ \bibnamefont {Wilhelm}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {110501} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhao}\ \emph {et~al.}(2022)\citenamefont {Zhao}, \citenamefont {Linghu}, \citenamefont {Li}, \citenamefont {Xu}, \citenamefont {Wang}, \citenamefont {Xue}, \citenamefont {Jin},\ and\ \citenamefont {Yu}}]{zhao2022quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Linghu}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Xue}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Jin}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yu}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {PRX Quantum}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {020301} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wei}\ \emph {et~al.}(2021)\citenamefont {Wei}, \citenamefont {Magesan}, \citenamefont {Lauer}, \citenamefont {Srinivasan}, \citenamefont {Bogorin}, \citenamefont {Carnevale}, \citenamefont {Keefe}, \citenamefont {Kim}, \citenamefont {Klaus}, \citenamefont {Landers} \emph {et~al.}}]{wei2021quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Wei}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Magesan}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Lauer}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Srinivasan}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Bogorin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Carnevale}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Keefe}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Klaus}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Landers}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2106.00675}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kandala}\ \emph {et~al.}(2021)\citenamefont {Kandala}, \citenamefont {Wei}, \citenamefont {Srinivasan}, \citenamefont {Magesan}, \citenamefont {Carnevale}, \citenamefont {Keefe}, \citenamefont {Klaus}, \citenamefont {Dial},\ and\ \citenamefont {McKay}}]{kandala2021demonstration} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kandala}}, \bibinfo {author} {\bibfnamefont {K.X.}~\bibnamefont {Wei}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Srinivasan}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Magesan}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Carnevale}}, \bibinfo {author} {\bibfnamefont {G.A.}~\bibnamefont {Keefe}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Klaus}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Dial}}, \ and\ \bibinfo {author} {\bibfnamefont {D.C.}~\bibnamefont {McKay}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {130501} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mundada}\ \emph {et~al.}(2019)\citenamefont {Mundada}, \citenamefont {Zhang}, \citenamefont {Hazard},\ and\ \citenamefont {Houck}}]{mundada2019suppression} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Mundada}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Hazard}}, \ and\ \bibinfo 
{author} {\bibfnamefont {A.}~\bibnamefont {Houck}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Applied}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {054023} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ku}\ \emph {et~al.}(2020)\citenamefont {Ku}, \citenamefont {Xu}, \citenamefont {Brink}, \citenamefont {McKay}, \citenamefont {Hertzberg}, \citenamefont {Ansari},\ and\ \citenamefont {Plourde}}]{ku2020suppression} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ku}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Brink}}, \bibinfo {author} {\bibfnamefont {D.~C.}\ \bibnamefont {McKay}}, \bibinfo {author} {\bibfnamefont {J.~B.}\ \bibnamefont {Hertzberg}}, \bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont {Ansari}}, \ and\ \bibinfo {author} {\bibfnamefont {B.L.T.}~\bibnamefont {Plourde}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {125}},\ \bibinfo {pages} {200504} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2020{\natexlab{b}})\citenamefont {Xu}, \citenamefont {Chu}, \citenamefont {Yuan}, \citenamefont {Qiu}, \citenamefont {Zhou}, \citenamefont {Zhang}, \citenamefont {Tan}, \citenamefont {Yu}, \citenamefont {Liu}, \citenamefont {Li} \emph {et~al.}}]{xu2020high} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Qiu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Tan}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yu}}, \bibinfo {author} 
{\bibfnamefont {S.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Li}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {125}},\ \bibinfo {pages} {240503} (\bibinfo {year} {2020}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stehlik}\ \emph {et~al.}(2021)\citenamefont {Stehlik}, \citenamefont {Zajac}, \citenamefont {Underwood}, \citenamefont {Phung}, \citenamefont {Blair}, \citenamefont {Carnevale}, \citenamefont {Klaus}, \citenamefont {Keefe}, \citenamefont {Carniol}, \citenamefont {Kumph} \emph {et~al.}}]{stehlik2021tunable} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Stehlik}}, \bibinfo {author} {\bibfnamefont {D.M.}~\bibnamefont {Zajac}}, \bibinfo {author} {\bibfnamefont {D.L.}~\bibnamefont {Underwood}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Phung}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Blair}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Carnevale}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Klaus}}, \bibinfo {author} {\bibfnamefont {G.A.}~\bibnamefont {Keefe}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Carniol}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kumph}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {080505} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sung}\ \emph {et~al.}(2021)\citenamefont {Sung}, \citenamefont {Ding}, \citenamefont {Braum{\"u}ller}, \citenamefont {Veps{\"a}l{\"a}inen}, \citenamefont {Kannan}, \citenamefont {Kjaergaard}, \citenamefont {Greene}, \citenamefont {Samach}, \citenamefont {McNally}, \citenamefont {Kim} \emph {et~al.}}]{sung2021realization} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Sung}}, \bibinfo {author} 
{\bibfnamefont {L.}~\bibnamefont {Ding}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Braum{\"u}ller}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Veps{\"a}l{\"a}inen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Kannan}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kjaergaard}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Greene}}, \bibinfo {author} {\bibfnamefont {G.~O.}\ \bibnamefont {Samach}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {McNally}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kim}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {021058} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dyson}\ \emph {et~al.}(2004)\citenamefont {Dyson} \emph {et~al.}}]{dyson2004meeting} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Dyson}} \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {427}},\ \bibinfo {pages} {297} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shabana}(2022)}]{shabana2022curvature} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Shabana}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {ASME Open Journal of Engineering}\ }\textbf {\bibinfo {volume} {1}} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \title{Quantifying quantum coherence in a metal-silicate framework} \author{C. Cruz\email{[email protected]}} \affiliation{Grupo de Informa\c{c}\~{a}o Qu\^{a}ntica, Centro de Ci\^{e}ncias Exatas e das Tecnologias, Universidade Federal do Oeste da Bahia - Campus Reitor Edgard Santos. Rua Bertioga, 892, Morada Nobre I, 47810-059 Barreiras, Bahia, Brazil.} \author{M. F. Anka} \affiliation{Instituto de F\'{i}sica, Universidade Federal Fluminense, Av. Gal. Milton Tavares de Souza s/n, 24210-346 Niter\'{o}i, Rio de Janeiro, Brazil.} \date{\today} \begin{abstract} The study of quantum coherence in condensed matter systems is a broad avenue to be explored toward the enhancement of their quantum properties by means of materials engineering. In this regard, the present work reports a study of the influence of temperature, pressure and magnetic fields on the quantum coherence of a Cu(II) metal-silicate framework. We calculate the $l_1$ trace-norm quantum coherence as a function of the magnetic susceptibility of the compound, which allows us to evaluate the effects of these external parameters on the degree of coherence of the material. Our results show that the quantum coherence of a low-dimensional molecular magnetic system can be handled by the management of the external conditions, offering new prospects for quantum coherence measurements through magnetometric experiments. \end{abstract} \maketitle \section{Introduction} The development of new technologies has provided great advances in materials preparation techniques, leading to the emergence of new electronic devices. Nowadays, we have reached the point where the miniaturization of these devices has led to the development of molecular-size components. However, designing molecular components requires a deep understanding of their quantum properties.
The technological challenges of quantum information science lead us to consider fundamental aspects of molecular magnetism, because of the ease of synthesis, great versatility, and low-dimensional quantum features of molecular magnetic systems \cite{mario,cruz,cruz2016quantum,diogo,duarte,souza2,mario2,souza,esteves2014new,leite2015heptacopper,Shi2017,cruz2017influence}. In recent years, it has been demonstrated that molecular magnetic systems are strong candidates as prototype materials for emerging quantum devices, and the characterization of their quantum correlations has received considerable attention \cite{cruz,cruz2017influence,diogo,duarte,souza2,mario2,souza,diogo3,duarte2,castro2016thermal}. It has also been demonstrated that these systems may be immune to decoherence mechanisms, presenting highly stable quantum correlations against external perturbations such as temperature and magnetic fields \cite{cruz,cruz2017influence,diogo,duarte,souza2,mario2,souza,diogo3,duarte2,castro2016thermal}. On the other hand, while entanglement and nonclassical correlations (or quantum correlations) are key resources for characterizing the quantum properties of bipartite and some multipartite systems, quantum coherence is a common necessary condition for different forms of quantum correlations \cite{xi2015quantum,hu2018quantum}, being a fundamental feature for signifying quantumness in an integral system \cite{hu2018quantum}. Quantum coherence, arising from the coherent superposition of quantum states, is a remarkable feature in quantum optics, quantum information theory, solid-state physics, quantum game theory, quantum metrology and thermodynamics \cite{PhysRevLett.113.170401,nielsen,giovannetti2011advances,lambert2013quantum,hu2018quantum,theurer2019quantifying,yadin2019coherence,xi2015quantum,streltsov2016quantum,kammerlander2016coherence,goold2016role,santos2020entanglement,passos2019non}.
Recently, a criterion that quantifies quantum coherence in solid-state systems was proposed by Baumgratz \textit{et al.} \cite{baumgratz2014quantifying}. The authors established the fundamental assumptions for a quantitative theory of coherence, enabling the development of a rigorous theory of quantum coherence as a physical resource \cite{streltsov2016quantum}. However, in order to exploit the remarkable features of quantum coherence, it is necessary to define a consistent theoretical basis to measure it experimentally. In the present work, we report a study of the $l_{1}$ trace-norm quantum coherence \cite{streltsov2016quantum,baumgratz2014quantifying,hu2018quantum,rana2016trace} in an antiferromagnetic metal-silicate framework, formed in the compound $KNaCuSi_{4}O_{10}$ \cite{cruz2017influence,brandao2009magnetic}. We establish a relationship between the measurement of quantum coherence and the magnetic susceptibility of the compound, which allows us to estimate the thermal quantum coherence directly from a set of experimental data. We characterize the degree of quantum coherence in this molecular magnetic system and investigate the influence of temperature, pressure and magnetic fields. Our results show that it is possible to handle the degree of coherence in a low-dimensional molecular magnetic system by controlling those external conditions, offering a new prospect for the measurement of quantum coherence in condensed matter systems through magnetometric experiments and leading to the development of novel materials with enhanced quantum properties through materials engineering. \section{Quantum Coherence in a Molecular Magnetic System} Geometric approaches are widely used to characterize and quantify the quantum correlations in a wide variety of quantum systems.
Similarly to what is proposed in entanglement theory \cite{horodecki}, in which entanglement is characterized by a distance between the considered state and the set of states closed under LOCC operations (separable states) \cite{baumgratz2014quantifying,hu2018quantum,horodecki,vedral1997quantifying,vedral1998entanglement}, Baumgratz et al. \cite{baumgratz2014quantifying} provide a path toward quantifying the amount of coherence in a quantum state $\rho$. From the minimal distance $D(\rho,\sigma)$ between the considered quantum state $\rho$ and the set $\lbrace \sigma=\sum_{k=1}^{d} p_k \vert k\rangle\langle k \vert \in \mathcal{I} \rbrace$ of incoherent states of the $d$-dimensional Hilbert space, it is possible to quantify the quantum coherence as \begin{eqnarray} \mathcal{C}_D=\min_{\lbrace \sigma \in \mathcal{I}\rbrace} D(\rho,\sigma)~, \end{eqnarray} where $D(\rho,\sigma)$ is any measure of distance between the two density matrices, and the reference basis $\lbrace \vert k\rangle \rbrace_{\{k=1,...,d\}}$ may be defined by the physical nature of the problem under investigation or by a task for which coherence is required \cite{streltsov2016quantum,baumgratz2014quantifying}. In the selected reference basis, the superposition consists of the nonvanishing off-diagonal terms of the density operator $\rho$ that describes the quantum state of the system of interest \cite{hu2018quantum,baumgratz2014quantifying}. From this consideration, Baumgratz et al. \cite{baumgratz2014quantifying} showed that the $l_{1}$ trace norm is a reliable measure of quantum coherence \cite{streltsov2016quantum,baumgratz2014quantifying,hu2018quantum,rana2016trace}, \begin{eqnarray} \mathcal{C}_{l_{1}}&=& \min_{\sigma \in \mathcal{I}} \Vert \rho -\sigma \Vert_{l_1}=\sum_{i\neq j} \vert \langle i\vert \rho\vert j \rangle\vert~.
\label{coherence} \end{eqnarray} In order to study the role of external parameters, such as temperature, pressure and magnetic fields, on the quantum coherence of a molecular magnetic system, we take the reference basis as one of the spin eigenbases in a certain direction, within a quantum metrology setting. To access the temperature dependence of the quantum coherence from an experimental point of view, we associate the calculation of the trace-norm quantum coherence with the measurement of the magnetic susceptibility. \subsection{Experimental determination of thermal quantum coherence} The material in which we evaluate the quantum coherence is the KNaCuSi$_{4}$O$_{10}$ compound \cite{hefter1982synthesis,brandao2009magnetic,cruz2017influence}, a metal-silicate framework formed by Cu(II) spin dimers ($d^9$ electronic configuration). This material was chosen because it is a synthetic analog of the naturally occurring mineral litidionite \cite{brandao2009magnetic} and an ideal realization of a two-qubit system \cite{cruz2017influence}, since the dimers are magnetically isolated from each other, separated by two SiO$_4$ corners \cite{brandao2009magnetic}. Therefore, this prototype material can be described as a two-qubit system interacting through the Heisenberg model \cite{mario,sarandy,brandao2009magnetic,cruz2017influence}, \begin{equation} \mathcal{H}=-J \vec{S}_1\cdot \vec{S}_2~, \label{hamiltoniana} \end{equation} where $J$ is the coupling constant. The magnetic susceptibility of this prototype material corresponds to the Bleaney-Bowers equation \cite{mario,bleaney}: \begin{equation} \chi (T)=\frac{2 N(g\mu_B)^2}{k_B T}\frac{1}{3+e^{-{J}/{k_B T}}}~, \label{eq:4} \end{equation} where $g$ is the Land\'{e} factor, $\mu_B$ is the Bohr magneton, $k_B$ is the Boltzmann constant and $N$ is the number of dimers.
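As a quick numerical illustration of the $l_1$-norm definition above, the coherence can be computed directly from the density-matrix elements by summing the absolute values of the off-diagonal entries. The short Python sketch below does this for an assumed singlet-state example (the function and state names are illustrative, not from the original text):

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm coherence: sum of the absolute values of the
    off-diagonal elements of rho in the chosen reference basis."""
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

# Singlet state (|01> - |10>)/sqrt(2) written in the S_z product basis
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho_singlet = np.outer(psi, psi)
print(l1_coherence(rho_singlet))  # -> 1.0
```

For a two-qubit (four-dimensional) system the maximal value of this quantity is $d-1=3$, attained by an equal superposition of all four basis states; this bound reappears in the transverse-field analysis below.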
Magnetic susceptibility measurements for the KNaCuSi$_{4}$O$_{10}$ compound are presented in reference \onlinecite{brandao2009magnetic}, where the measurements were performed between 2 K and 350 K. Taking into account the crystal structure of this compound, the susceptibility data were fitted to the Bleaney-Bowers equation, Eq. (\ref{eq:4}), from which the authors obtain $J/k_B=-2.86(3)$ K (antiferromagnetically coupled ions). To go further and analyze the quantum coherence by means of the magnetic susceptibility of the material, we first write the density matrix of the system under consideration in the local $S_z$ eigenbasis $\lbrace \vert 00\rangle,\vert 01\rangle,\vert 10\rangle,\vert 11\rangle\rbrace$ \cite{yuri,yuri2,cruz2017influence}: \begin{eqnarray} \rho(T)&=&\frac{1}{4}\left[ \begin{matrix} 1+c(T) & & & \\ & 1-c(T) & 2c(T) & \\ & 2c(T) & 1-c(T) & \\ & & & 1+c(T) \end{matrix} \right] \label{eq:1} \end{eqnarray} where \begin{equation} c(T)=\frac{2k_BT\chi(T)}{N_Ag^2\mu_B^2}-1 \label{eq:3} \end{equation} is the correlation function between the spins \cite{cruz,yuri,yuri2,cruz2017influence}. The isotropy of the magnetic material and the rotational symmetry of the Heisenberg interaction, Eq. (\ref{hamiltoniana}), at zero field make the density matrix, Eq. (\ref{eq:1}), the same Bell-diagonal mixed state in any local $S^{(x)}$, $S^{(y)}$ or $S^{(z)}$ eigenbasis. Thus, for the isotropic Heisenberg interaction, the coherence is basis independent for any of these spin eigenbases. From Eqs. (\ref{coherence}) and (\ref{eq:1}), one can evaluate the $l_{1}$-norm quantum coherence as a function of the temperature, measured by the magnetic susceptibility of the compound, as \begin{eqnarray} \mathcal{C}(T)&=& \left| \frac{2k_BT\chi(T)}{N_Ag^2\mu_B^2}-1\right|~. \label{csus} \end{eqnarray} Therefore, from Eq.
(\ref{csus}), it is possible to measure quantum coherence in a low-dimensional molecular magnetic system by measuring thermodynamic properties of solids, such as the magnetic susceptibility. In Fig. \ref{fig2}, we show the quantum coherence obtained from the measurement of the magnetic susceptibility of our prototype material KNaCuSi$_{4}$O$_{10}$, reported in reference \onlinecite{brandao2009magnetic}. The theoretical curve was plotted taking the corresponding estimate for the coupling constant $J/k_B$ in Eq. (\ref{csus}). Reference \onlinecite{diogo} investigates the thermal quantum entanglement of this prototype material, finding a maximum temperature of $2.43(7)$ K below which there is quantum entanglement between the Cu(II) ions; at this temperature the coherence of the system is $35\%$ of its maximum value. As expected, increasing the temperature changes the Boltzmann weights due to thermal fluctuations, which changes the occupation of the energy levels, leading the system to populate incoherent states. \begin{figure} \caption{Experimental (open circles) and theoretical (solid red line) quantum coherence as a function of the temperature, measured by the magnetic susceptibility of the KNaCuSi$_{4}$O$_{10}$ compound.} \label{fig2} \end{figure} On the other hand, apart from its quantification and characterization, the coherence is intimately associated with other quantum correlation quantifiers \cite{ma2016converting,adesso2016measures}. Thus, quantum coherence and correlations can be transformed into each other. The quantum discord \cite{zurek}, a measurement of quantum correlations beyond entanglement \cite{streltsov2016quantum,cruz,adesso2016measures}, can also be quantified in molecular magnetic systems via a distance-based approach \cite{cruz,cruz2017influence,cruz2016quantum}. In this regard, Eq.
\ref{csus} relates to the geometric quantum discord based on the Schatten 1-norm \cite{ciccarello} as \begin{eqnarray} \mathcal{Q}_G(\rho)=\min_{\rho_c \in \omega_{cq}}\Vert\rho - \rho_c\Vert_1 = \frac{\mathcal{C}(T)}{2}~, \label{eq:11} \end{eqnarray} where $\Vert A\Vert_1=\mbox{Tr}\left[\sqrt{A^\dagger A}\right]$, $\rho$ is a given quantum state, $\rho_c$ is the closest classical-quantum state, and $\omega_{cq}$ denotes the set of classical-quantum states \cite{sarandy2,sarandy3}. Therefore, the measurement of quantum coherence can also quantify the amount of quantum correlation in a molecular magnetic system. \subsection{Influence of the external pressure} Recently, using Density Functional Theory calculations \cite{hohenberg1964inhomogeneous}, one of us showed that the application of a hydrostatic pressure to the prototype material KNaCuSi$_{4}$O$_{10}$ induces a structural contraction, which leads to a minimization of the degree of its quantum correlations \cite{cruz2017influence}. Through the dependence of the magnetic coupling constant of the compound on the external pressure \cite{cruz2017influence}, we calculate the magnetic susceptibility, Eq. (\ref{eq:4}), for each magnetic coupling constant. Thus, by using Eq. (\ref{csus}), it is possible to evaluate the influence of a hydrostatic pressure on the quantum coherence of a low-dimensional molecular magnetic system. In this way, we establish a relationship between the quantum coherence and significant macroscopic effects, such as an external hydrostatic pressure applied to the magnetic material. Figure \ref{fig:pressure} shows the temperature dependence of the quantum coherence for different values of hydrostatic pressure.
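Combining the Bleaney-Bowers susceptibility with the coherence expression above, the prefactors cancel and the zero-field coherence takes the closed form $\mathcal{C}(T)=\left|4/(3+e^{-J/k_BT})-1\right|$. A minimal Python sketch of this reduction, assuming the fitted value $J/k_B=-2.86$ K quoted above, together with the Schatten 1-norm geometric discord $\mathcal{Q}_G=\mathcal{C}(T)/2$:

```python
import math

def coherence_T(T, J_over_kB=-2.86):
    """Zero-field l1 coherence: with the Bleaney-Bowers susceptibility,
    C(T) = |4/(3 + exp(-J/(kB*T))) - 1|  (T and J/kB in kelvin)."""
    return abs(4.0 / (3.0 + math.exp(-J_over_kB / T)) - 1.0)

def geometric_discord(T, J_over_kB=-2.86):
    """Schatten 1-norm geometric quantum discord: Q_G = C(T)/2."""
    return coherence_T(T, J_over_kB) / 2.0

print(round(coherence_T(2.43), 2))  # -> 0.36
```

At the entanglement threshold $T=2.43$ K this gives $\mathcal{C}\approx0.36$, consistent with the $35\%$ of the maximum value quoted from reference \onlinecite{diogo}.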
As obtained in reference \onlinecite{cruz2017influence}, increasing the pressure on the system leads to a decrease in the lattice parameters and in the volume of the unit cell; as a consequence, the exchange parameter of the system increases until it becomes positive, changing the energy levels, i.e., the system ceases to be ordered antiferromagnetically in the entangled ground state $\left[\vert 01 \rangle - \vert 10 \rangle\right]/\sqrt{2}$ and is led to populate ferromagnetically ordered states with a lower degree of coherence. Due to this change of sign of the exchange parameter, there is a gap in the coherence of the ground state. As can be seen in figure \ref{fig:pressure}, increasing the hydrostatic pressure on this prototype material decreases the degree of coherence in the system. \begin{figure} \caption{Temperature dependence of the quantum coherence calculated for different values of hydrostatic pressure. This result was obtained through the dependence of the magnetic coupling constant $J$ on the external pressure, obtained by Density Functional Theory calculations for the KNaCuSi$_{4}$O$_{10}$ compound.} \label{fig:pressure} \end{figure} Therefore, increasing the pressure changes the magnetic alignment, which yields a change in the occupation of the energy levels, leading the system to a less coherent configuration. In this way, it is possible to handle the degree of coherence of a low-dimensional molecular magnetic system by managing the pressure applied to the system, since the external pressure induces a structural contraction in the metal--silicate framework, changing its magnetic alignment and reducing the degree of quantum coherence in the dimeric unit. This result shows that the degree of coherence in a spin cluster system can be controlled by the management of structural parameters such as the lattice parameters and the volume of the unit cell, allowing the manipulation of the quantum coherence by materials engineering.
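The ground-state coherence gap described above can be made explicit with the zero-field closed form of $\mathcal{C}(T)$: as $T\to0$, an antiferromagnetic dimer ($J<0$) gives $\mathcal{C}\to1$, while a ferromagnetic coupling ($J>0$) gives $\mathcal{C}\to1/3$. The sketch below illustrates this; the positive value $J/k_B=+2.86$ K is purely illustrative, not a DFT result from reference \onlinecite{cruz2017influence}:

```python
import math

def coherence_T(T, J_over_kB):
    # zero-field l1 coherence, C(T) = |4/(3 + exp(-J/(kB*T))) - 1|
    return abs(4.0 / (3.0 + math.exp(-J_over_kB / T)) - 1.0)

# Low-temperature limits on either side of the sign change of J:
print(round(coherence_T(0.05, -2.86), 3))  # antiferromagnetic -> 1.0
print(round(coherence_T(0.05, +2.86), 3))  # ferromagnetic     -> 0.333
```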
\subsection{Influence of longitudinal and transverse magnetic fields} Unlike other quantum-information-theoretic quantifiers, quantum coherence is basis dependent. The reference basis $\lbrace{\vert k \rangle}\rbrace$ with respect to which the coherence is measured is chosen according to the physical problem under investigation; e.g., for molecular magnetic systems the usual choice is a spin eigenbasis in a certain direction, $\lbrace S_x,S_y,S_z\rbrace$, within a quantum metrology setting. In order to investigate the influence of the application of an external magnetic field on the prototype material KNaCuSi$_{4}$O$_{10}$, we consider the magnetic field $\vec{B}$ along the $z$ direction. One can evaluate the coherence for two reference bases: one parallel to the applied field ($S_z$ eigenbasis) and another perpendicular to the field ($S_x$ or $S_y$ eigenbasis). The Hamiltonian that governs this system interacting with an external magnetic field is given by \begin{eqnarray} \mathcal{H} = -J\vec{S_1}\cdot\vec{S_2} - \mu_B g \vec{B}\cdot \left( \vec{S}_1 + \vec{S}_2 \right) ~. \label{model} \end{eqnarray} Thus, our aim is to study how applied fields, parallel to the density-matrix basis $S_z$ (longitudinal field) or perpendicular to it, $S_x$ (transverse field), affect the degree of quantum coherence in our prototype material. \subsubsection{Longitudinal Field} The bipartite density matrix of this system can be written in the $S^{(z)}$ eigenbasis $\lbrace \vert 00\rangle,\vert 01\rangle,\vert 10\rangle,\vert 11\rangle\rbrace$ \cite{yuri,yuri2} as an X-shaped matrix: \begin{eqnarray} \rho_z(T,B_z) &=&\frac{e^{x}}{2Z}\left[ \begin{matrix} 2e^{\beta h_z} & & & \\ & {1 + e^{-4x}} & {1 - e^{-4x}} & \\ & {1 - e^{-4x}} & {1 + e^{-4x}} & \\ & & & 2e^{-\beta h_z} \end{matrix} \right]~.
\label{longitudinal} \end{eqnarray} where \begin{equation} Z(T,B_z)= e^{x} + e^{-3x} + 2 e^{x} \cosh\left(\beta h_z\right)~, \end{equation} with $\beta = 1/k_BT$, $x=\beta J/4$ and $h_z = \mu_Bg_zB_z$, and where $Z$ is the partition function. The trace-norm quantum coherence, Eq. (\ref{coherence}), can be calculated in terms of these matrix elements as \begin{eqnarray} \mathcal{C}_z(T,B_z)&=& \left|\frac{1 - e^{-4x}}{1 + e^{-4x} + 2 \cosh\left(\beta h_z\right)}\right|~. \label{cz} \end{eqnarray} Figure \ref{figlong} shows the quantum coherence of our prototype material as a function of the applied longitudinal field (figure \ref{figlong}(a)) and of the temperature (figure \ref{figlong}(b)). As can be seen, the application of a longitudinal magnetic field decreases the degree of coherence. Since a system in equilibrium at 0 K is always in its maximally entangled ground state $\left[\vert 01 \rangle - \vert 10 \rangle\right]/\sqrt{2}$, as it is an antiferromagnetic spin-$1/2$ dimer \cite{brandao2009magnetic,cruz2017influence}, when the longitudinal field reaches the critical value $B_c=21\,279$ Oe the system is led to the incoherent ground state $\vert 00 \rangle$, i.e., all spins aligned with the applied field. Hence, there is an abrupt change in the degree of coherence of the ground state: the system changes from a maximally entangled ground state with $\mathcal{C}_z=1$ ($B < B_c$) to a completely incoherent ground state ($B \geq B_c$), due to the Zeeman effect along the direction parallel to the density-matrix basis. Therefore, the application of a longitudinal magnetic field changes the energy eigenvalues, which leads the system to a different ground state with a smaller degree of coherence.
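Eq. (\ref{cz}) and the quoted critical field can be checked numerically. The sketch below assumes $g\simeq2$ (not stated explicitly in the text) and the fitted $J/k_B=-2.86$ K, and recovers $B_c=|J|/(g\mu_B)\approx2.13$ T $\approx21\,300$ Oe, close to the quoted $21\,279$ Oe:

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
muB = 9.2740100783e-24   # Bohr magneton, J/T
g = 2.0                  # assumed Lande factor
J = -2.86 * kB           # fitted exchange coupling, J/kB = -2.86 K

def coherence_z(T, Bz):
    """Longitudinal-field coherence C_z(T, Bz), Eq. (cz); Bz in tesla."""
    beta = 1.0 / (kB * T)
    x = beta * J / 4.0
    hz = g * muB * Bz
    return abs((1.0 - math.exp(-4.0 * x))
               / (1.0 + math.exp(-4.0 * x) + 2.0 * math.cosh(beta * hz)))

# Critical field: Zeeman energy matches |J|
Bc = abs(J) / (g * muB)
print(round(Bc, 2))  # -> 2.13 (tesla, i.e. ~21 300 Oe)
```

At $T=0.05$ K the function returns $\mathcal{C}_z\approx1$ for $B_z=1$ T $<B_c$ and $\mathcal{C}_z\approx0$ for $B_z=3$ T $>B_c$, reproducing the abrupt ground-state change described above.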
\begin{figure} \caption{(Color online) Quantum coherence for selected temperatures as a function of the applied field in the (a) parallel eigenbasis $S_z$ (longitudinal field) and (b) the perpendicular eigenbasis $S_x$ (transverse field).} \label{figlong} \end{figure} Although absolute zero is not physically realizable, the changes in the degree of coherence between the Cu(II) ions due to the application of the longitudinal field are highlighted at low temperatures. At nonzero temperatures, thermal fluctuations compete with the magnetic interactions, changing the occupation of the energy levels \cite{souza2,maziero2010quantum,castro2016thermal}. Therefore, increasing the temperature populates many incoherent states, decreasing the degree of coherence of the system. In contrast, when a high longitudinal magnetic field is applied at low temperatures, the ground state tends to be less coherent than some excited states. Thus, increasing the temperature may lead the system to excited states with a larger degree of coherence, yielding a small increase in the quantum coherence, as can be seen in figure \ref{figlong}(b) around 1 K. Similar effects were also encountered for the entanglement of formation and the quantum discord in references \onlinecite{souza2,maziero2010quantum,arnesen2001natural,asoudeh2006thermal}. \subsubsection{Transverse Field} On the other hand, the field applied along the $z$ direction breaks the isotropy of the Heisenberg interaction due to the Zeeman effect along this direction \cite{mario}.
In this regard, in order to study the effects of the application of a transverse magnetic field on the quantum coherence of the KNaCuSi$_{4}$O$_{10}$ compound, we rewrite the density matrix, Eq. (\ref{longitudinal}), in the perpendicular eigenbasis $S^{(x)}$, $\lbrace \vert ++\rangle,\vert +-\rangle,\vert -+\rangle,\vert --\rangle\rbrace$, as: \begin{widetext} \begin{eqnarray} \rho_x(T,B_z) = \dfrac{e^{x}}{2Z} \left[\begin{matrix} \cosh(\beta h_z) + 1 & \sinh(\beta h_z) & \sinh(\beta h_z) & \cosh(\beta h_z) - 1 \\ \sinh(\beta h_z) &\cosh(\beta h_z) + e^{-4x} & \cosh(\beta h_z) - e^{-4x} & \sinh(\beta h_z) \\ \sinh(\beta h_z) & \cosh(\beta h_z) - e^{-4x} & \cosh(\beta h_z) + e^{-4x} & \sinh(\beta h_z)\\ \cosh(\beta h_z) - 1 & \sinh(\beta h_z) & \sinh(\beta h_z) & \cosh(\beta h_z) + 1 \end{matrix}\right]~. \label{transverse} \end{eqnarray} \end{widetext} From Eq. (\ref{transverse}) one finds the transverse-field quantum coherence: \begin{eqnarray} \mathcal{C}_x (T,B_z) &=& \frac{e^{x}}{Z}\left(\left|\cosh(\beta h_z)-1\right| +4 \left|\sinh(\beta h_z)\right|+ \right. \nonumber \\ && \left. \left|\cosh(\beta h_z) - e^{-4x}\right|\right)~. \label{transversez} \end{eqnarray} In Figure \ref{figtrans}, we show the quantum coherence of our metal-silicate framework as a function of the applied transverse field (figure \ref{figtrans}(a)) and of the temperature (figure \ref{figtrans}(b)). In contrast to the application of a longitudinal magnetic field (figure \ref{figlong}), the transverse field strengthens the degree of coherence between the Cu(II) ions in the dimeric cluster. This can also be understood in terms of the population change of the ground state, due to the variation of the Boltzmann weights, which leads to a change in the occupation of the energy levels \cite{souza2,maziero2010quantum,castro2016thermal}.
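Eq. (\ref{transversez}) can likewise be evaluated numerically. The sketch below, again assuming $g\simeq2$ and the fitted $J/k_B=-2.86$ K, reproduces the ground-state jump from $\mathcal{C}_x\approx1$ below the critical field to the maximal value $\mathcal{C}_x\approx3$ above it:

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
muB = 9.2740100783e-24   # Bohr magneton, J/T
g = 2.0                  # assumed Lande factor
J = -2.86 * kB           # fitted exchange coupling

def coherence_x(T, Bz):
    """Transverse-basis coherence C_x(T, Bz), Eq. (transversez); Bz in tesla."""
    beta = 1.0 / (kB * T)
    x = beta * J / 4.0
    h = beta * g * muB * Bz  # dimensionless Zeeman argument beta*h_z
    Z = math.exp(x) + math.exp(-3.0 * x) + 2.0 * math.exp(x) * math.cosh(h)
    return (math.exp(x) / Z) * (abs(math.cosh(h) - 1.0)
                                + 4.0 * abs(math.sinh(h))
                                + abs(math.cosh(h) - math.exp(-4.0 * x)))

print(round(coherence_x(0.05, 1.0), 2))  # below Bc -> 1.0
print(round(coherence_x(0.05, 3.0), 2))  # above Bc -> 3.0
```

As a consistency check, at zero field this expression reduces algebraically to the basis-independent $\mathcal{C}(T)=\left|4/(3+e^{-J/k_BT})-1\right|$ of the zero-field analysis.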
Due to the rotational invariance of the Bell states, at 0 K and $B_z < B_c$ the system is found in the maximally entangled ground state $\left[\vert +- \rangle - \vert -+ \rangle\right]/\sqrt{2}$ in the $S^{(x)}$ eigenbasis, with quantum coherence $\mathcal{C}_x=1$. For a transverse field $B_z \geq B_c$, the system is led to the ground state $\vert 00 \rangle = \left[\vert ++ \rangle + \vert +- \rangle + \vert -+ \rangle + \vert -- \rangle\right]/2$, which has the maximal degree of coherence $\mathcal{C}_x=3$, i.e., the ground state becomes maximally coherent, beyond the coherence of the maximally entangled ground state. Thus, this metal-silicate framework changes from a maximally entangled ground state with $\mathcal{C}_x=1$ ($B < B_c$) to a maximally coherent ground state with $\mathcal{C}_x=3$ ($B \geq B_c$), even at nonzero temperatures (figure \ref{figtrans}(b)). This transition, induced by the application of a transverse magnetic field, describes an abrupt change in the degree of coherence of the ground state, due to the uncertainty relations between the $S_z$ spin direction of the Zeeman effect and the $S_x$ eigenbasis in which the density matrix is written, Eq. (\ref{transverse}). This yields a change in the populations of the energy levels, leading the system to a state with the highest degree of coherence. Therefore, the effect of the transverse field is to populate such coherent states, leading to an increase in the degree of coherence of the system.
\begin{figure} \caption{(Color online) Temperature dependence of quantum coherence in (a) the parallel eigenbasis $S_z$ (longitudinal field) and (b) the perpendicular eigenbasis $S_x$ (transverse field), for selected magnetic fields.} \label{figtrans} \end{figure} It is worth noting that, for $T>10$ K, the magnetic field intensities needed to increase or decrease the coherence in this metal-silicate framework are higher than those usually created in laboratories. Hence, above 10 K the coherence of this low-dimensional molecular magnetic system cannot be destroyed by a common longitudinal field, nor enhanced by a common transverse field; the quantum coherence of this system is robust against magnetic field application above this temperature. \section{Conclusions} In summary, we have presented a method to evaluate the degree of quantum coherence in low-dimensional magnetic materials composed of spin-1/2 dimers, investigating the influence of external parameters, such as temperature and magnetic fields, on the quantum coherence of the KNaCuSi$_{4}$O$_{10}$ prototype material, which is an ideal realization of a two-qubit system. At zero magnetic field, we associated the calculation of the trace-norm quantum coherence with a magnetometric measurement of the magnetic susceptibility of the compound, using the experimental data reported in reference \onlinecite{brandao2009magnetic}, which characterizes the magnetic properties of this material. We established theoretical relations between the measurement of quantum coherence and the thermodynamic properties of this prototype material, setting the path for the experimental measurement of quantum coherence in low-dimensional molecular magnetic materials. In addition, we investigated the influence of applied pressure on the degree of quantum coherence of our prototype material.
We observed that increasing the external pressure reduces the degree of coherence in the system, due to the structural contraction induced by the applied pressure. Therefore, the quantum coherence of a molecular magnetic system can be handled by managing the structural properties of the material. Moreover, we presented a theoretical analysis of the influence of an external magnetic field on the degree of coherence of the system, investigating the basis dependence of the coherence. We found that a longitudinally applied field decreases the degree of coherence in the parallel eigenbasis, whereas a transversely applied field strengthens the degree of coherence between the Cu(II) ions in the dimeric cluster in the transverse eigenbasis. In this context, the coherence of a low-dimensional molecular magnetic system can be handled by controlling external thermodynamic parameters, such as temperature, pressure and magnetic fields. These results offer a new prospect for quantum coherence measurement, leading to promising applications in quantum information science, such as the enhancement of quantum properties in low-dimensional molecular magnetic systems by materials engineering, and the development of novel candidate platforms for the processing and transmission of quantum information. \begin{acknowledgments} The author would like to thank D. O. Soares-Pinto for his helpful comments, and A. M. dos Santos and M. S. Reis for the magnetic susceptibility measurements. This study was financed in part by the National Counsel of Technological and Scientific Development (CNPq) and the \textit{Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior - Brasil} (CAPES) - Finance Code 001.
\end{acknowledgments} \begin{thebibliography}{49} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Reis}(2013)}]{mario} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reis}},\ }\href@noop {} {\emph {\bibinfo {title} {Fundamentals of magnetism}}}\ (\bibinfo {publisher} {Elsevier},\ \bibinfo {year} {2013})\BibitemShut {NoStop} \bibitem [{\citenamefont {Cruz}\ \emph {et~al.}(2016)\citenamefont {Cruz}, \citenamefont {Soares-Pinto}, 
\citenamefont {Brand{\~a}o}, \citenamefont {dos Santos},\ and\ \citenamefont {Reis}}]{cruz} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Cruz}}, \bibinfo {author} {\bibfnamefont {D.~O.}\ \bibnamefont {Soares-Pinto}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Brand{\~a}o}}, \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {dos Santos}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Reis}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhysics Letters)}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {40004} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cruz}(2017)}]{cruz2016quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Cruz}},\ }\href {\doibase 10.1142/S0219749917500319} {\bibfield {journal} {\bibinfo {journal} {International Journal of Quantum Information}\ ,\ \bibinfo {pages} {1750031}} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Soares-Pinto}\ \emph {et~al.}(2009)\citenamefont {Soares-Pinto}, \citenamefont {Souza}, \citenamefont {Sarthour}, \citenamefont {Oliveira}, \citenamefont {Reis}, \citenamefont {Brandao}, \citenamefont {Rocha},\ and\ \citenamefont {dos Santos}}]{diogo} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Soares-Pinto}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Souza}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sarthour}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Oliveira}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reis}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Brandao}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Rocha}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {dos Santos}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhysics Letters)}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {40008} (\bibinfo
{year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Duarte}\ \emph {et~al.}(2013{\natexlab{a}})\citenamefont {Duarte}, \citenamefont {Castro}, \citenamefont {Soares-Pinto},\ and\ \citenamefont {Reis}}]{duarte} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.~S.}\ \bibnamefont {Duarte}}, \bibinfo {author} {\bibfnamefont {C.~S.}\ \bibnamefont {Castro}}, \bibinfo {author} {\bibfnamefont {D.~O.}\ \bibnamefont {Soares-Pinto}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Reis}},\ }\href {http://stacks.iop.org/0295-5075/103/i=4/a=40002} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhysics Letters)}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {40002} (\bibinfo {year} {2013}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Souza}\ \emph {et~al.}(2008)\citenamefont {Souza}, \citenamefont {Reis}, \citenamefont {Soares-Pinto}, \citenamefont {Oliveira},\ and\ \citenamefont {Sarthour}}]{souza2} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Souza}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Reis}}, \bibinfo {author} {\bibfnamefont {D.~O.}\ \bibnamefont {Soares-Pinto}}, \bibinfo {author} {\bibfnamefont {I.~S.}\ \bibnamefont {Oliveira}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~S.}\ \bibnamefont {Sarthour}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {104402} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reis}\ \emph {et~al.}(2012)\citenamefont {Reis}, \citenamefont {Soriano}, \citenamefont {dos Santos}, \citenamefont {Sales}, \citenamefont {Soares-Pinto},\ and\ \citenamefont {Brandao}}]{mario2} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Reis}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Soriano}}, \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {dos Santos}}, \bibinfo 
{author} {\bibfnamefont {B.~C.}\ \bibnamefont {Sales}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Soares-Pinto}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Brandao}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhysics Letters)}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} {50001} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Souza}\ \emph {et~al.}(2009)\citenamefont {Souza}, \citenamefont {Soares-Pinto}, \citenamefont {Sarthour}, \citenamefont {Oliveira}, \citenamefont {Reis}, \citenamefont {Brandao},\ and\ \citenamefont {dos Santos}}]{souza} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Souza}}, \bibinfo {author} {\bibfnamefont {D.~O.}\ \bibnamefont {Soares-Pinto}}, \bibinfo {author} {\bibfnamefont {R.~S.}\ \bibnamefont {Sarthour}}, \bibinfo {author} {\bibfnamefont {I.~S.}\ \bibnamefont {Oliveira}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Reis}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Brandao}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {dos Santos}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {054408} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Esteves}\ \emph {et~al.}(2014)\citenamefont {Esteves}, \citenamefont {Tedesco}, \citenamefont {Pedro}, \citenamefont {Cruz}, \citenamefont {Reis},\ and\ \citenamefont {Brandao}}]{esteves2014new} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Esteves}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Tedesco}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pedro}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Cruz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reis}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Brandao}},\ }\href@noop {} {\bibfield 
{journal} {\bibinfo {journal} {Materials Chemistry and Physics}\ }\textbf {\bibinfo {volume} {147}},\ \bibinfo {pages} {611} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Leite~Ferreira}\ \emph {et~al.}(2015)\citenamefont {Leite~Ferreira}, \citenamefont {Brand{\~a}o}, \citenamefont {Dos~Santos}, \citenamefont {Gai}, \citenamefont {Cruz}, \citenamefont {Reis}, \citenamefont {Santos},\ and\ \citenamefont {F{\'e}lix}}]{leite2015heptacopper} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Leite~Ferreira}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Brand{\~a}o}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dos~Santos}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Gai}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Cruz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reis}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Santos}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {F{\'e}lix}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Coordination Chemistry}\ }\textbf {\bibinfo {volume} {68}},\ \bibinfo {pages} {2770} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shi}\ \emph {et~al.}(2017)\citenamefont {Shi}, \citenamefont {Bai}, \citenamefont {Lu}, \citenamefont {Cruz}, \citenamefont {Reis},\ and\ \citenamefont {Gao}}]{Shi2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.-N.}\ \bibnamefont {Shi}}, \bibinfo {author} {\bibfnamefont {Y.-W.}\ \bibnamefont {Bai}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Cruz}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Reis}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gao}},\ }\href {\doibase 10.1007/s11243-017-0165-5} {\bibfield {journal} {\bibinfo {journal} {Transition Metal Chemistry}\ } (\bibinfo {year} {2017}),\ 
10.1007/s11243-017-0165-5}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cruz}\ \emph {et~al.}(2017)\citenamefont {Cruz}, \citenamefont {Alves}, \citenamefont {dos Santos}, \citenamefont {Soares-Pinto}, \citenamefont {de~Jesus}, \citenamefont {de~Almeida},\ and\ \citenamefont {Reis}}]{cruz2017influence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Cruz}}, \bibinfo {author} {\bibfnamefont {{\'A}.}~\bibnamefont {Alves}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {dos Santos}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Soares-Pinto}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {de~Jesus}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {de~Almeida}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reis}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {EPL (Europhysics Letters)}\ }\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {20004} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Soares-Pinto}\ \emph {et~al.}(2011)\citenamefont {Soares-Pinto}, \citenamefont {Teles}, \citenamefont {Souza}, \citenamefont {Deazevedo}, \citenamefont {Sarthour}, \citenamefont {Bonagamba}, \citenamefont {Reis},\ and\ \citenamefont {Oliveira}}]{diogo3} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Soares-Pinto}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Teles}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Souza}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Deazevedo}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sarthour}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Bonagamba}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reis}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Oliveira}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {International Journal of Quantum Information}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {1047} (\bibinfo {year} 
{2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Duarte}\ \emph {et~al.}(2013{\natexlab{b}})\citenamefont {Duarte}, \citenamefont {Castro},\ and\ \citenamefont {Reis}}]{duarte2} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.~S.}\ \bibnamefont {Duarte}}, \bibinfo {author} {\bibfnamefont {C.~S.}\ \bibnamefont {Castro}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Reis}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {012317} (\bibinfo {year} {2013}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Castro}\ \emph {et~al.}(2016)\citenamefont {Castro}, \citenamefont {Duarte}, \citenamefont {Pires}, \citenamefont {Soares-Pinto},\ and\ \citenamefont {Reis}}]{castro2016thermal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Castro}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Duarte}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Pires}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Soares-Pinto}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Reis}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physics Letters A}\ }\textbf {\bibinfo {volume} {380}},\ \bibinfo {pages} {1571} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xi}\ \emph {et~al.}(2015)\citenamefont {Xi}, \citenamefont {Li},\ and\ \citenamefont {Fan}}]{xi2015quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Xi}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Fan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Scientific reports}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {10922} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hu}\ \emph {et~al.}(2018)\citenamefont {Hu}, \citenamefont {Hu}, 
\citenamefont {Wang}, \citenamefont {Peng}, \citenamefont {Zhang},\ and\ \citenamefont {Fan}}]{hu2018quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.-L.}\ \bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {Y.-R.}\ \bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Fan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physics Reports}\ } (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Girolami}(2014)}]{PhysRevLett.113.170401} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Girolami}},\ }\href {\doibase 10.1103/PhysRevLett.113.170401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {170401} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nielsen}\ and\ \citenamefont {Chuang}(2010)}]{nielsen} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Nielsen}}\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum computation and quantum information}}}\ (\bibinfo {publisher} {Cambridge university press},\ \bibinfo {year} {2010})\BibitemShut {NoStop} \bibitem [{\citenamefont {Giovannetti}\ \emph {et~al.}(2011)\citenamefont {Giovannetti}, \citenamefont {Lloyd},\ and\ \citenamefont {Maccone}}]{giovannetti2011advances} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Giovannetti}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature photonics}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {222} 
(\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lambert}\ \emph {et~al.}(2013)\citenamefont {Lambert}, \citenamefont {Chen}, \citenamefont {Cheng}, \citenamefont {Li}, \citenamefont {Chen},\ and\ \citenamefont {Nori}}]{lambert2013quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lambert}}, \bibinfo {author} {\bibfnamefont {Y.-N.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Y.-C.}\ \bibnamefont {Cheng}}, \bibinfo {author} {\bibfnamefont {C.-M.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {G.-Y.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {10} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Theurer}\ \emph {et~al.}(2019)\citenamefont {Theurer}, \citenamefont {Egloff}, \citenamefont {Zhang},\ and\ \citenamefont {Plenio}}]{theurer2019quantifying} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Theurer}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Egloff}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {122}},\ \bibinfo {pages} {190405} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yadin}\ \emph {et~al.}(2019)\citenamefont {Yadin}, \citenamefont {Bogaert}, \citenamefont {Susa},\ and\ \citenamefont {Girolami}}]{yadin2019coherence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yadin}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bogaert}}, \bibinfo {author} {\bibfnamefont {C.~E.}\ \bibnamefont {Susa}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Girolami}},\ 
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {012329} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Streltsov}\ \emph {et~al.}(2017)\citenamefont {Streltsov}, \citenamefont {Adesso},\ and\ \citenamefont {Plenio}}]{streltsov2016quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Streltsov}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Adesso}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {041003} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kammerlander}\ and\ \citenamefont {Anders}(2016)}]{kammerlander2016coherence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Kammerlander}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Anders}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Scientific reports}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {22174} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Goold}\ \emph {et~al.}(2016)\citenamefont {Goold}, \citenamefont {Huber}, \citenamefont {Riera}, \citenamefont {del Rio},\ and\ \citenamefont {Skrzypczyk}}]{goold2016role} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goold}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Huber}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Riera}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {del Rio}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Skrzypczyk}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Physics A: Mathematical and Theoretical}\ }\textbf {\bibinfo {volume} {49}},\ \bibinfo {pages} {143001} (\bibinfo {year} {2016})}\BibitemShut 
{NoStop} \bibitem [{\citenamefont {Santos}(2020)}]{santos2020entanglement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Santos}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Information Processing}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {13} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Passos}\ \emph {et~al.}(2019)\citenamefont {Passos}, \citenamefont {Obando}, \citenamefont {Balthazar}, \citenamefont {Paula}, \citenamefont {Huguenin},\ and\ \citenamefont {Sarandy}}]{passos2019non} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Passos}}, \bibinfo {author} {\bibfnamefont {P.~C.}\ \bibnamefont {Obando}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Balthazar}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Paula}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Huguenin}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sarandy}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Optics letters}\ }\textbf {\bibinfo {volume} {44}},\ \bibinfo {pages} {2478} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Baumgratz}\ \emph {et~al.}(2014)\citenamefont {Baumgratz}, \citenamefont {Cramer},\ and\ \citenamefont {Plenio}}]{baumgratz2014quantifying} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Baumgratz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cramer}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {140401} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rana}\ \emph {et~al.}(2016)\citenamefont {Rana}, \citenamefont {Parashar},\ and\ \citenamefont {Lewenstein}}]{rana2016trace} \BibitemOpen \bibfield {author} {\bibinfo {author} 
{\bibfnamefont {S.}~\bibnamefont {Rana}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Parashar}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lewenstein}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {012110} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brandao}\ \emph {et~al.}(2009)\citenamefont {Brandao}, \citenamefont {Rocha}, \citenamefont {Reis}, \citenamefont {Dos~Santos},\ and\ \citenamefont {Jin}}]{brandao2009magnetic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Brandao}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Rocha}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Reis}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dos~Santos}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jin}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Solid State Chemistry}\ }\textbf {\bibinfo {volume} {182}},\ \bibinfo {pages} {253} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horodecki}\ \emph {et~al.}(2009)\citenamefont {Horodecki}, \citenamefont {Horodecki}, \citenamefont {Horodecki},\ and\ \citenamefont {Horodecki}}]{horodecki} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Horodecki}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Horodecki}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reviews of modern physics}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {865} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vedral}\ \emph {et~al.}(1997)\citenamefont {Vedral}, \citenamefont {Plenio}, \citenamefont {Rippin},\ and\ \citenamefont {Knight}}]{vedral1997quantifying} \BibitemOpen 
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vedral}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Rippin}}, \ and\ \bibinfo {author} {\bibfnamefont {P.~L.}\ \bibnamefont {Knight}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {2275} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vedral}\ and\ \citenamefont {Plenio}(1998)}]{vedral1998entanglement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vedral}}\ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {57}},\ \bibinfo {pages} {1619} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hefter}\ and\ \citenamefont {Kenney}(1982)}]{hefter1982synthesis} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hefter}}\ and\ \bibinfo {author} {\bibfnamefont {M.~E.}\ \bibnamefont {Kenney}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Inorganic Chemistry}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {2810} (\bibinfo {year} {1982})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sarandy}(2009)}]{sarandy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Sarandy}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {022108} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bleaney}\ and\ \citenamefont {Bowers}(1952)}]{bleaney} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Bleaney}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Bowers}},\ }in\ \href@noop {} {\emph {\bibinfo 
{booktitle} {Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences}}},\ Vol.\ \bibinfo {volume} {214}\ (\bibinfo {organization} {The Royal Society},\ \bibinfo {year} {1952})\ pp.\ \bibinfo {pages} {451--465}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} Given a Lie superalgebra $\mathfrak {g}$ with a subalgebra $\mathfrak {g}_{\geq 0}$, and a finite-dimensional irreducible $\mathfrak {g}_{\geq 0}$-module $F$, the induced $\mathfrak {g}$-module $M(F)=\mathcal{U}(\mathfrak {g})\otimes_{\mathcal{U}(\mathfrak {g}_{\geq 0})}F$ is called a finite Verma module. In the present paper we classify the non-irreducible finite Verma modules over the largest exceptional linearly compact Lie superalgebra $\mathfrak {g}=E(5,10)$ with the subalgebra $\mathfrak {g}_{\geq 0}$ of minimal codimension. This is done via a classification of all singular vectors in the modules $M(F)$. Besides the known singular vectors of degrees 1, 2, 3, 4, and 5, we discover two new singular vectors, of degrees 7 and 11. We show that the corresponding morphisms of finite Verma modules of degrees 1, 4, 7, and 11 can be arranged in an infinite number of bilateral infinite complexes, which may be viewed as ``exceptional'' de Rham complexes for $E(5,10)$. \end{abstract} \section{Introduction} Recall that a linearly compact Lie (super)algebra $\mathfrak {g}$ is defined by the property that, viewed as a vector space, $\mathfrak {g}$ is linearly compact. According to E.\ Cartan's classification, the list of infinite-dimensional simple linearly compact Lie algebras consists of four Lie--Cartan series: $W_n$, $S_n$, $H_n$, and $K_n$. The infinite-dimensional simple linearly compact Lie superalgebras were classified in \cite{K} and explicitly described in \cite{CK}; all their maximal open subalgebras were classified in \cite{CaK}. The complete list consists of ten ``classical'' series (which include the Lie--Cartan series), and five exceptional examples, denoted by $E(1,6)$, $E(3,6)$, $E(3,8)$, $E(4,4)$, and $E(5,10)$.
With the exception of $E(4,4)$, these Lie superalgebras carry a $\mathbb{Z}$-gradation, compatible with the parity: $$\mathfrak {g}=\bigoplus_{j\geq -d} \mathfrak {g}_j,$$ where $d=2$ for $E(1,6)$, $E(3,6)$ and $E(5,10)$, and $d=3$ for $E(3,8)$. Then $\mathfrak {g}_{\geq 0}:=\oplus_{j\geq 0}\mathfrak {g}_j$ is a maximal open subalgebra of $\mathfrak {g}$ of minimal codimension. In the case of $\mathfrak {g}=E(3,6)$ and $E(3,8)$ the subalgebra $\mathfrak {g}_0$ is isomorphic to $\mathfrak{sl}_3\oplus \mathfrak{sl}_2\oplus\mathbb{C}$, and for $\mathfrak {g}=E(5,10)$, $\mathfrak {g}_0$ is isomorphic to $\mathfrak{sl}_5$, which hints at connections to particle physics \cite{KR2}. Let $F$ be an irreducible finite-dimensional $\mathfrak {g}_0$-module, extend it to $\mathfrak {g}_{\geq 0}$ by letting all $\mathfrak {g}_j$ with $j>0$ act by 0, and consider the {\em finite Verma $\mathfrak {g}$-module} $$M(F)=\mathcal{U}(\mathfrak {g})\otimes_{\mathcal{U}(\mathfrak {g}_{\geq 0})}F,$$ where $M(F)$ is viewed as a vector space with discrete topology. These modules are especially interesting since their topological duals are linearly compact. The first problem of representation theory of linearly compact Lie superalgebras is to classify their degenerate (i.e., non-irreducible) finite Verma modules and the morphisms between them. This is equivalent to the classification of {\em singular vectors} in these modules, i.e., those vectors which are annihilated by $\mathfrak {g}_j$ with $j\geq 1$. This problem was solved for the Lie algebras $W_n$, $S_n$ and $H_n$ by Rudakov \cite{R1}, \cite{R0}. In particular, he showed that the degenerate finite $W_n$-modules form the de Rham complex in a formal neighborhood of 0 in $\mathbb{C}^n$ (or rather its topological dual). In a series of papers \cite{KR1}, \cite{KR2}, \cite{KR3} this problem was solved for the exceptional linearly compact Lie superalgebra $E(3,6)$.
It turned out that all the morphisms between the degenerate finite Verma modules over $E(3,6)$ can be arranged in an infinite number of complexes, and the cohomology of these complexes was computed in \cite{KR2} as well. The most difficult technical part of this work is \cite{KR3}, where all singular vectors have been classified. In the subsequent paper \cite{KR} a solution to this problem was announced for $E(3,8)$, and a conjecture on the classification of degenerate finite Verma modules for $E(5,10)$ was posed, motivated by the singular vectors of degree 1 constructed there (the degree on $M(F)=\mathcal{U}(\mathfrak {g}_{<0})\otimes F$ is induced by the degree on $\mathfrak {g}_{<0}=\oplus_{j<0}\mathfrak {g}_j$). In a more recent paper \cite{R} it was proved that these are all the singular vectors of degree 1, and some singular vectors of degrees 2, 3, 4 and 5 were also constructed. In the subsequent paper \cite{CC} it was shown that the singular vectors of degree less than or equal to 3 constructed by Rudakov are all the singular vectors of degree less than or equal to 3. Actually, the morphisms of degrees 2, 3 and 5 corresponding to the singular vectors constructed in \cite{R} are compositions of morphisms of degrees 1 and 4, and the morphisms of degrees 1 and 4 can be arranged in an infinite number of infinite complexes \cite{R}. However, in Figure 2 of \cite{R} there are two notable gaps in the complexes. The key discovery of the present paper is the existence of morphisms of degrees 7 and 11, which fill these gaps (see Figure \ref{E510morphisms}). Moreover, we show that there are no further singular vectors (Theorem \ref{conclusions}), thereby proving the conjecture from \cite{KR} on the classification of degenerate finite Verma modules over $E(5,10)$. The proof of Theorem \ref{conclusions} goes as follows. First, using a result from \cite{R0} on $S_n$-modules for $n=5$ ($S_5$ being the even part of $E(5,10)$), we show that there are no singular vectors of degree greater than 14.
Next we find that for degrees between 11 and 14 there is only one singular vector: it has degree 11 and defines a morphism from $M(\mathbb{C}^5)$ to $M({\mathbb{C}^5}^*)$, where $\mathbb{C}^5$ is the standard $\mathfrak{sl}_5$-module and ${\mathbb{C}^5}^*$ its dual. After that, using the techniques of \cite{CC}, we show that in degrees between 6 and 10 the only singular vector has degree 7 and it defines a morphism from $M(S^2\mathbb{C}^5)$ to $M(S^2{\mathbb{C}^5}^*)$. These are precisely the two morphisms missing in Figure 2 of \cite{R}. Finally, we show that in degrees less than or equal to 6 there are no other singular vectors as compared to \cite{R}. The calculations involve the solution of large systems of linear equations, which is performed with the aid of a computer. Note also that the construction of morphisms is facilitated by the duality constructed in \cite{CCK}, which assigns to a morphism $M(F) \rightarrow M(F_1)$ a morphism $M(F_1)^*\rightarrow M(F)^*$ and which for $E(5,10)$ has the property that $M(F)^*=M(F^*)$. We have learned recently that Daniele Brilli obtained in \cite{B} the upper bound 12 on the degrees of singular vectors for finite Verma modules over $E(5,10)$, using the techniques of representation theory of Lie pseudoalgebras. \section{Preliminaries}\label{S1} We let $\mathbb{N}=\{0,1,2,3,\dots\}$ be the set of non-negative integers and for $n\in\mathbb{N}$ we set $[n]=\{i\in\mathbb{N} ~|~ 1\leq i\leq n\}$. We consider the simple, linearly compact Lie superalgebra of exceptional type $\mathfrak {g}=E(5,10)$ whose even and odd parts are as follows: $\mathfrak {g}_{\bar{0}}$ consists of zero-divergence vector fields in five (even) indeterminates $x_1,\ldots,x_5$, i.e., \[\mathfrak {g}_{\bar{0}}=S_5=\{X=\sum_{i=1}^5f_i\partial_i ~|~ f_i\in\mathbb{C}[[x_1,\dots,x_5]], \textrm{div}(X)=0\},\] where $\partial_i=\partial_{x_i}$, and $\mathfrak {g}_{\bar{1}}=\Omega^2_{cl}$ consists of closed two-forms in the five indeterminates $x_1,\ldots,x_5$.
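The zero-divergence condition defining $S_5$ is a purely mechanical check on the polynomial coefficients $f_i$. The following sketch is our own illustration (the dictionary encoding of polynomials and the function names are not from the paper): a polynomial in $x_1,\ldots,x_5$ is a dict mapping exponent tuples to coefficients, and a vector field $X=\sum_i f_i\partial_i$ lies in $S_5$ iff $\sum_i \partial f_i/\partial x_i=0$.

```python
# Illustrative sketch: polynomials in x1..x5 as {exponent_tuple: coefficient}.

def diff(poly, i):
    """Partial derivative of a polynomial dict with respect to x_{i+1}."""
    out = {}
    for exp, c in poly.items():
        if exp[i] > 0:
            e = list(exp)
            e[i] -= 1
            e = tuple(e)
            out[e] = out.get(e, 0) + c * exp[i]
    return {e: c for e, c in out.items() if c != 0}

def divergence(fs):
    """div(sum_i f_i * partial_i) = sum_i d(f_i)/d(x_i)."""
    out = {}
    for i, f in enumerate(fs):
        for e, c in diff(f, i).items():
            out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

x1 = {(1, 0, 0, 0, 0): 1}
x2 = {(0, 1, 0, 0, 0): 1}
neg_x1 = {(1, 0, 0, 0, 0): -1}
zero = {}

# X = x2*d1 - x1*d2 is divergence-free, hence lies in S_5; Y = x1*d1 is not.
print(divergence([x2, neg_x1, zero, zero, zero]))  # {}
print(divergence([x1, zero, zero, zero, zero]))    # {(0, 0, 0, 0, 0): 1}
```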
The bracket between a vector field and a two-form is given by the Lie derivative and for $f,g\in \mathbb{C}[[x_1,\dots,x_5]]$ we have $$[fdx_i\wedge dx_j,g dx_k\wedge dx_l]=\varepsilon_{ijkl}fg\partial_{t_{ijkl}}$$ where, for $i,j,k,l\in [5]$, $\varepsilon_{ijkl}$ and $t_{ijkl}$ are defined as follows: if $|\{i,j,k,l\}|=4$ we let $t_{ijkl}\in [5]$ be such that $|\{i,j,k,l,t_{ijkl}\}|=5$ and $\varepsilon_{ijkl}$ be the sign of the permutation $(i,j,k,l,t_{ijkl})$. If $|\{i,j,k,l\}|<4$ then $\varepsilon_{ijkl}=0$. From now on we shall denote $dx_i\wedge dx_j$ simply by $d_{ij}$. The Lie superalgebra $\mathfrak {g}$ has a consistent, irreducible, transitive $\mathbb{Z}$-grading of depth 2 where, for $k\in\mathbb{N}$, \begin{align*} \mathfrak {g}_{2k-2}&=\langle f\partial_i ~|~i=1,\dots,5, f\in\mathbb{C}[[x_1,\dots, x_5]]_{k}\rangle\cap S_5\\ \mathfrak {g}_{2k-1}&=\langle fd_{ij} ~|~ i,j=1,\dots,5, f\in\mathbb{C}[[x_1,\dots, x_5]]_{k}\rangle\cap\Omega^2_{cl} \end{align*} where by $\mathbb{C}[[x_1,\dots, x_5]]_{k}$ we denote the homogeneous component of $\mathbb{C}[[x_1,\dots, x_5]]$ of degree $k$. Note that $\mathfrak {g}_0\cong \mathfrak{sl}_5$, $\mathfrak {g}_{-2}\cong (\mathbb{C}^5)^*$, $\mathfrak {g}_{-1}\cong \bigwedge^2\mathbb{C}^5$ as $\mathfrak {g}_0$-modules (where $\mathbb{C}^5$ denotes the standard $\mathfrak{sl}_5$-module). We set $\mathfrak {g}_{-}=\mathfrak {g}_{-2}\oplus \mathfrak {g}_{-1}$, $\mathfrak {g}_{+}=\oplus_{j>0}\mathfrak {g}_j$ and $\mathfrak {g}_{\geq 0}=\mathfrak {g}_0\oplus \mathfrak {g}_+$. We denote by $U$ (resp.\ $U_{-}$) the universal enveloping algebra of $\mathfrak {g}$ (resp.\ $\mathfrak {g}_-$). Note that $U_-$ is a $\mathfrak {g}_0$-module with respect to the adjoint action: for $x\in \mathfrak {g}_0$ and $u\in U_-$, $$x.u=[x,u]=xu-ux.$$ We also point out that the $\mathbb{Z}$-grading of $\mathfrak {g}$ induces a $\mathbb{Z}$-grading on the enveloping algebra $U_-$.
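For concreteness, the combinatorial data $\varepsilon_{ijkl}$ and $t_{ijkl}$ entering the bracket formula above can be computed directly from the definition. The sketch below is our own illustration (the function names are not from the paper):

```python
# Illustrative sketch of epsilon_{ijkl} and t_{ijkl} from the bracket
# [f d_ij, g d_kl] = eps_{ijkl} * f*g * partial_{t_{ijkl}}.

def perm_sign(p):
    """Sign of a permutation given as a tuple of distinct integers."""
    s = 1
    p = list(p)
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def eps_t(i, j, k, l):
    """Return (epsilon_{ijkl}, t_{ijkl}); epsilon = 0 if an index repeats."""
    if len({i, j, k, l}) < 4:
        return 0, None
    t = ({1, 2, 3, 4, 5} - {i, j, k, l}).pop()
    return perm_sign((i, j, k, l, t)), t

print(eps_t(1, 2, 3, 4))  # (1, 5):  [f d_12, g d_34] = f*g*partial_5
print(eps_t(1, 3, 2, 4))  # (-1, 5)
print(eps_t(1, 2, 1, 3))  # (0, None): repeated index, bracket vanishes
```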
It is customary, though, to invert the sign of the degrees hence getting a grading over $\mathbb{N}$. Note that the homogeneous component $(U_-)_d$ of degree $d$ of $U_-$ under this grading is a $\mathfrak {g}_0$-submodule. We fix the Borel subalgebra $\langle x_i\partial_j, h_{ij}=x_i\partial_i-x_j\partial_j ~|~ i<j\rangle$ of $\mathfrak {g}_0$ and we consider the usual base of the corresponding root system given by $\{\alpha_{12},\ldots,\alpha_{45}\}$. We let $\Lambda$ be the weight lattice of $\mathfrak{sl}_5$ and we express all weights of $\mathfrak{sl}_5$ using their coordinates with respect to the fundamental weights $\varphi_{12},\varphi_{23},\varphi_{34},\varphi_{45}$, i.e., for $\lambda\in \Lambda$ we write $\lambda=(\lambda_{12},\ldots,\lambda_{45})$ for some $\lambda_{i\,i+1}\in \mathbb Z$ to mean $\lambda=\lambda_{12}\varphi_{12}+\cdots+\lambda_{45}\varphi_{45}$. If $\lambda\in \Lambda$ is a weight, we use the following convention: for all $1\leq i<j\leq 5$ we let \[ \lambda_{ij}=\sum_{k=i}^{j-1}\lambda_{k\,k+1}. \] If $V$ is an $\mathfrak{sl}_5$-module and $v\in V$ is a weight vector we denote by $\lambda(v)$ the weight of $v$ and we set $\lambda_{ij}(v)=(\lambda(v))_{ij}$. If $\lambda=(a,b,c,d)\in \Lambda$ is a dominant weight, i.e.\ $a,b,c,d\geq 0$, let us denote by $F(\lambda)=F(a,b,c,d)$ the irreducible $\mathfrak{sl}_5$-module of highest weight $\lambda$. In this paper we always think of $F(a,b,c,d)$ as the irreducible submodule of \[\Sym^a(\mathbb{C}^5)\otimes \Sym^b(\bigwedge^2(\mathbb{C}^5))\otimes \Sym^c(\bigwedge^2(\mathbb{C}^5)^*)\otimes \Sym^d((\mathbb{C}^5)^*)\] generated by the highest weight vector $x_1^ax_{12}^b{x_{45}^*}^c{x_5^*}^d$, where $\{x_1,\dots, x_5\}$ denotes the standard basis of $\mathbb{C}^5$, $x_{ij}=x_i\wedge x_j$, and $x_i^*$ and $x_{ij}^*$ are the corresponding dual basis elements. Besides, for a weight $\lambda=(a,b,c,d)$ we let $\lambda^*=(d,c,b,a)$, so that $F(\lambda)^*\cong F(\lambda^*)$.
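The convention $\lambda_{ij}=\sum_{k=i}^{j-1}\lambda_{k\,k+1}$ is just a partial sum of the fundamental-weight coordinates; a small sketch (our own illustrative code, not from the paper), evaluated on the weight $\lambda=(1,1,0,0)$:

```python
# Illustrative sketch: lambda_{ij} as a partial sum of the coordinates
# (lambda_12, lambda_23, lambda_34, lambda_45) of a weight of sl_5.

def lam(coords, i, j):
    """lambda_{ij} for 1 <= i < j <= 5; coords[0] is lambda_{12}, etc."""
    return sum(coords[k - 1] for k in range(i, j))

w = (1, 1, 0, 0)
print(lam(w, 1, 2))  # lambda_12 = 1
print(lam(w, 1, 3))  # lambda_13 = lambda_12 + lambda_23 = 2
print(lam(w, 2, 5))  # lambda_25 = lambda_23 + lambda_34 + lambda_45 = 1
```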
Notice that, as a $\mathfrak {g}_0$-module, $\mathfrak {g}_1\cong F(1,1,0,0)$ and that $x_5d_{45}$ is a lowest weight vector in $\mathfrak {g}_1$. Moreover, for $j\geq 1$, we have $\mathfrak {g}_j=\mathfrak {g}_1^j$. \section{Generalized Verma modules and morphisms}\label{S3} We recall the definition and some properties of (generalized) Verma modules over $E(5,10)$, most of which hold in the generality of arbitrary $\mathbb Z$-graded Lie superalgebras (for some detailed proofs see \cite{CC}). Given a $\mathfrak {g}_0$-module $V$, we extend it to a $\mathfrak {g}_{\geq 0}$-module by letting $\mathfrak {g}_+$ act trivially, and define $$M(V)=U\otimes_{U(\mathfrak {g}_{\geq 0})}V.$$ Note that $M(V)$ has a $\mathfrak {g}$-module structure by multiplication on the left, and is called the (generalized) Verma module associated to $V$. We also observe that $M(V)\cong U_{-}\otimes_{\mathbb{C}}V$ as $\mathfrak {g}_0$-modules. If the $\mathfrak {g}_0$-module $V$ is finite-dimensional and irreducible, then we call $M(V)$ a finite Verma module (it is finitely-generated as a $U_-$-module). We denote by $M(\lambda)$ the finite Verma module $M(F(\lambda))$. A finite Verma module is said to be non-degenerate if it is irreducible and degenerate otherwise. \begin{definition} We say that an element $w\in M(V)$ is homogeneous of degree $d$ if $w\in (U_-)_d\otimes V$. \end{definition} \begin{definition} A vector $w\in M(V)$ is called a {\em singular} vector if it satisfies the following conditions: \begin{itemize} \item[(i)] $x_i\partial_{i+1}w=0$ for every $i=1,\dots,4$; \item[(ii)] $zw=0$ for every $z\in \mathfrak {g}_1$; \item[(iii)] $w$ does not lie in $V$. \end{itemize} \end{definition} We observe that the homogeneous components of positive degree of a singular vector are singular vectors. The same holds for its weight components. From now on we will thus assume that a singular vector is a homogeneous weight vector unless otherwise specified.
Notice that if condition (i) is satisfied then condition (ii) holds if $x_5d_{45}w=0$ since $x_5d_{45}$ is a lowest weight vector in $\mathfrak {g}_1$. We recall that a finite Verma module $M(V)$ is degenerate if and only if it contains a singular vector \cite[Proposition 3.3]{CC}. Degenerate Verma modules can be described in terms of morphisms. A morphism $\varphi: M(V)\rightarrow M(W)$ can always be associated to an element $\Phi\in U_{-}\otimes \Hom(V,W)$ as follows: for $u\in U_-$ and $v\in V$ we let $$\varphi(u\otimes v)=u\Phi(v)$$ where, if $\Phi=\sum_iu_i\otimes \theta_i$ with $u_i\in U_-$, $\theta_i\in \Hom(V,W),$ we let $\Phi(v)=\sum_iu_i\otimes \theta_i(v)$. We will say that $\varphi$ (or $\Phi$) is a morphism of degree $d$ if $u_i\in (U_-)_d$ for every $i$. The following proposition characterizes morphisms between Verma modules. \begin{proposition}\cite{KR,R}\label{morphisms} Let $\varphi: M(V)\rightarrow M(W)$ be the linear map associated with the element $\Phi\in U_{-}\otimes \Hom(V,W)$. Then $\varphi$ is a morphism of $\mathfrak {g}$-modules if and only if the following conditions hold: \begin{itemize} \item[(a)] $\mathfrak {g}_0.\Phi=0$; \item[(b)] $X\varphi(v)=0$ for every $X\in \mathfrak {g}_1$ and for every $v\in V$. \end{itemize} \end{proposition} We observe that, if $M(V)$ is a finite Verma module and condition (a) holds, it is enough to verify condition (b) for an element $X$ generating $\mathfrak {g}_1$ as a $\mathfrak {g}_0$-module and for $v$ a highest weight vector in $V$. We recall that a finite Verma module $M(\mu)$ contains a singular vector if and only if there exists a finite Verma module $M(\lambda)$ and a morphism $\varphi:M(\lambda)\rightarrow M(\mu)$ of positive degree \cite[Proposition 3.5]{CC}. We recall the following duality on Verma modules, which is established in \cite{CCK} in much wider generality.
\begin{theorem}\label{duality} Let $\varphi:M(\lambda)\rightarrow M(\mu)$ be a morphism of $\mathfrak {g}$-modules of degree $d$. Then there exists a dual morphism $\varphi^*:M(\mu^*)\rightarrow M(\lambda^*)$ of the same degree $d$. Equivalently, if $M(\lambda)$ contains a singular vector of degree $d$ and weight $\mu$, then $M(\mu^*)$ contains a singular vector of degree $d$ and weight $\lambda^*$. \end{theorem} \begin{remark}\label{dual} Let $\varphi: M(V)\rightarrow M(W)$ be a linear map of degree $d$ associated to an element $\Phi\in U_-\otimes \Hom(V,W)$ that satisfies condition (a) of Proposition \ref{morphisms}. Then there exists a $\mathfrak {g}_0$-morphism $\psi: (U_-)_d^*\rightarrow \Hom(V,W)$ such that $\Phi= \sum_i u_i\otimes \psi(u_i^*)$ where $\{u_i, i\in I\}$ is any basis of $(U_-)_d$ and $\{u_i^*, i\in I\}$ is the corresponding dual basis. \end{remark} \begin{definition} Let $M(\mu)$ be a finite Verma module and let $\pi: M(\mu)\rightarrow U_-\otimes F(\mu)_{\mu}$ be the natural projection, $F(\mu)_{\mu}$ being the weight space of $F(\mu)$ of weight $\mu$. Given a singular vector $w\in M(\mu)$, we call $\pi(w)$ the leading term of $w$. \end{definition} It is shown in \cite{CC} that the leading term of a singular vector is non-zero, and therefore a singular vector is uniquely determined by its leading term. The action of $E(5,10)$ on a module $M$ restricts to an action of its even part on $M$. It is therefore natural to take into account the structure of $M$ as an $S_5$-module also. In order to do this we consider the grading on $S_5$ given by $\deg x_i=2$ and $\deg (\partial_i)=-2$ to be consistent with the embedding of $S_5$ in $E(5,10)$. The definition of a Verma module for $S_5$ is analogous to the one for $E(5,10)$. Rudakov classified all singular vectors for the infinite-dimensional Lie algebra $S_n$ in \cite{R0} and we recall here his results in the special case of $S_5$.
\begin{theorem}\cite{R0}\label{classS5} The following is a complete list (up to multiplication by a scalar) of singular vectors $w$ in Verma modules $M(\lambda)$ for $S_5$. \begin{itemize} \item[R1.] $\lambda=(1,0,0,0)$, $w=\partial_1 \otimes x_1+ \partial_2 \otimes x_2 +\partial_3 \otimes x_3 + \partial_4 \otimes x_4 +\partial_5\otimes x_5$; \item [R2.] $\lambda=(0,1,0,0)$, $w=\partial_2 \otimes x_{12}+ \partial_3 \otimes x_{13}+ \partial_4 \otimes x_{14}+\partial_5 \otimes x_{15}$; \item [R3.] $\lambda=(0,0,1,0)$, $w=\partial_3 \otimes x_{45}^*+ \partial_4 \otimes x_{53}^*+ \partial_5 \otimes x_{34}^*$; \item [R4.] $\lambda=(0,0,0,1)$, $w=\partial_4 \otimes x_5^*- \partial_5 \otimes x_4^*$; \item [R5.] $\lambda=(0,0,0,0)$, $w=\partial_5 \otimes 1$; \item[R6.] $\lambda=(1,0,0,0)$, $w=\partial_5(\partial_1 \otimes x_1+ \partial_2 \otimes x_2 +\partial_3 \otimes x_3 + \partial_4 \otimes x_4 +\partial_5\otimes x_5)$. \end{itemize} \end{theorem} \noindent Theorem \ref{classS5} provides the diagram of all non-zero morphisms between finite Verma modules for $S_5$ shown in Figure \ref{S5morphisms}. \begin{figure} \caption{All non-zero morphisms between finite Verma modules for $S_5$.}\label{S5morphisms} \end{figure} \section{A first bound}\label{first} Let $\Omega=\{\{1,2\},\{1,3\},\{1,4\},\{1,5\},\{2,3\},\{2,4\},\{2,5\},\{3,4\}, \{3,5\},\{4,5\}\}$ and, if $p=\{i,j\}\in \Omega$ with $i<j$, then we let $d_p=d_{ij}=dx_i\wedge dx_j$. In order to avoid cumbersome notation, when no confusion may arise we will denote in this section the subset $\{i,j\}$ simply as $ij$. Let $V$ be a finite-dimensional $\mathfrak {g}_0$-module. For all $k\geq 0$ we let \[ M_k(V)=\mathbb{C}[\partial_1,\ldots,\partial_5] \sum_{j\leq k} \sum_{p_1,\ldots,p_j \in \Omega} \mathbb{C} d_{p_1}\cdots d_{p_j}\otimes V. \] Note that $M_k(V)$ is not an $E(5,10)$-submodule of $M(V)$. Nevertheless the following result holds. \begin{proposition}\label{s5mod} For all $k=0,1,\ldots,10$ the subspace $M_k(V)$ is an $S_5$-module.
\end{proposition} \begin{proof} It is enough to show that for all $X\in S_5$, $1\leq j\leq k$, $p_1,\ldots,p_j\in \Omega$, and $v\in V$ \begin{equation}\label{s5mod1} Xd_{p_1}\cdots d_{p_j}\otimes v \in M_j(V), \end{equation} since $M_j(V)\subseteq M_k(V)$. We also show that \begin{equation} \label{s5mod2} [X,d_{p_1}]d_{p_2}\cdots d_{p_j}\otimes v\in M_j(V) \end{equation} and we prove that \eqref{s5mod1} and \eqref{s5mod2} hold simultaneously by a double induction on $j$ and $\deg X$. If $j=1$ then \eqref{s5mod2} is trivial and \eqref{s5mod1} follows from \eqref{s5mod2}. If $\deg X=-2$ then \eqref{s5mod1} and \eqref{s5mod2} are both trivial, so we assume that $j\geq 2$ and $\deg X\geq 0$. We have \[ Xd_{p_1}\dots d_{p_j}\otimes v=[X,d_{p_1}]d_{p_2}\cdots d_{p_j}\otimes v + d_{p_1} X d_{p_2}\cdots d_{p_j} \otimes v. \] The latter summand clearly lies in $M_j(V)$ by induction on $j$ and so \eqref{s5mod1} will follow from \eqref{s5mod2}. We have \[ [X,d_{p_1}]d_{p_2}\cdots d_{p_j}\otimes v = -d_{p_2}[X,d_{p_1}]d_{p_3}\cdots d_{p_j}\otimes v +[[X,d_{p_1}],d_{p_2}]d_{p_3}\cdots d_{p_j}\otimes v. \] The former summand lies in $M_j(V)$ by induction on $j$ and the latter by induction on $\deg X$: the result follows. \end{proof} By Proposition \ref{s5mod} we have a filtration \[ \{0\}=M_{-1}(V)\subseteq \mathbb{C}[\partial_1,\ldots,\partial_5]\otimes V=M_0(V) \subseteq M_1(V) \subseteq \cdots \subseteq M_{10}(V)=M(V) \] of $S_5$-modules and we let \[ N_k(V)=M_k(V)/M_{k-1}(V) \] for all $k=0,\ldots,10$. \begin{proposition}\label{NkV} For all $k=0,\ldots,10$ and for any total order $\prec$ on $\Omega$ we have \[ N_k(V)\cong \mathbb{C}[\partial_1,\ldots,\partial_5]\otimes \bigoplus_{p_1\prec \cdots \prec p_k} \mathbb{C} d_{p_1}\cdots d_{p_k}\otimes V \] as $\mathbb{C}$-vector spaces.
\end{proposition} \begin{proof} For all $p_1,\ldots,p_k \in \Omega$ and every permutation $\sigma$ of the indices $\{1,\ldots,k\}$ we have \begin{equation}\label{permform}d_{p_1}d_{p_2}\cdots d_{p_k}-\varepsilon(\sigma) d_{p_{\sigma(1)}}\cdots d_{p_{\sigma(k)}}\in M_{k-1}(V) \end{equation} and so $N_k(V)$ is generated as a $\mathbb{C}[\partial_1,\ldots,\partial_5]$-module by the elements $d_{p_1}\cdots d_{p_k}\otimes v$ for all $p_1\prec \cdots \prec p_k$ and all $v\in V$. The result follows by the Poincar\'e--Birkhoff--Witt theorem for $U(\mathfrak {g}_{-})$. \end{proof} Next we observe that the subspace \[ F_k(V)=\bigoplus_{p_1\prec \cdots \prec p_k}\mathbb{C} d_{p_1}\cdots d_{p_k}\otimes V \] of $N_k(V)$ also has a special structure: \begin{proposition}\label{NkVisVerma} The subspace $F_k(V)$ of $N_k(V)$ is an $\mathfrak{sl}_5$-module annihilated by $(S_5)_{>0}$. The $S_5$-module $N_k(V)$ is the finite Verma module for $S_5$ induced by $F_k(V)$, i.e. \[ N_k(V)=M(F_k(V)). \] \end{proposition} \begin{proof} The subspace $F_k(V)$ of $N_k(V)$ is an $\mathfrak{sl}_5$-module since $\mathfrak {g}_{-1}$ is a $\mathfrak {g}_0$-module, and by the definition of $N_k(V)$. The fact that $F_k(V)$ is annihilated by $(S_5)_{>0}$ follows easily for degree reasons. The second part follows from Proposition \ref{NkV} and the first part. \end{proof} This result together with Theorem \ref{classS5} allows us to determine a first bound on the degree of singular vectors for $E(5,10)$. \begin{corollary} Let $M(V)$ be a Verma module for $E(5,10)$ and $w\in M(V)$ be a singular vector. Then $w$ has degree at most 14. \end{corollary} \begin{proof} Let $k$ be minimal such that $w\in M_k(V)$. Then the class of $w$ in $N_k(V)$ is either a highest weight vector in $F_k(V)$ or a singular vector in the $S_5$-Verma module $N_k(V)$, and as such it has degree at most 4. It follows that $w$ has degree at most $k+4$, where $k\leq 10$.
\end{proof} \section{Singular vectors of degree greater than 10}\label{>10} The description of singular vectors for $S_5$ allows us to give a much more precise description of possible singular vectors for $E(5,10)$ of degree greater than 10. We fix a total order $\prec$ on the set $\Omega=\{\{1,2\},\{1,3\},\ldots,\{4,5\}\}$. If $I=\{p_1,\ldots,p_j\}\subseteq \Omega$ with $p_1 \prec \cdots \prec p_j$ we let $d^\prec_I=d_{p_1}\cdots d_{p_j}$. We let \[ L^{\prec}_{h}(V)=\mathbb{C}[\partial_1,\ldots,\partial_5] \bigoplus_{I:\, |I|=h}\mathbb{C} d^\prec_I \otimes V. \] By construction we have \[ M_h(V)=L^\prec _{h}(V)\oplus M_{h-1}(V) \] for all $h=0,\ldots,10$ and in particular \[ M(V)=\bigoplus_{h=0}^{10} L^{\prec}_{h}(V). \] Every non-zero vector $w$ in $M(V)$ can be expressed uniquely in the following form: \[ w=w_h^\prec+w_{h-1}^\prec+\cdots +w_0^\prec, \] for some $h=0,\dots, 10$, with $w_h^\prec\neq 0$ and $w_j^\prec \in L^\prec_j(V)$ for all $j\leq h$. We say in this case that $w$ has {\em height} $h$ and we call $w_h^\prec$ the {\em highest term} of $w$. Note that the height of an element does not depend on the order $\prec$, while its highest term does. If $M=(m_1,\ldots,m_5)\in \mathbb{N}^5$ we let $\partial^M=\partial_1^{m_1}\cdots \partial_5^{m_5}$ and $|M|=m_1+\cdots+m_5$. Moreover we let $e_1=(1,0,0,0,0)$, $e_2=(0,1,0,0,0),\ldots,e_5=(0,0,0,0,1)$. If $w$ is homogeneous of degree $d$, then the term $w_j^\prec$ has the following form \begin{equation}\label{formofterms} w_j^\prec=\sum_{I\subseteq \Omega:\, |I|=j}\sum_{M\in \mathbb N^5:\, |M|=\frac{d-j}{2}}\partial^M d^\prec_{I}\otimes v_{M,I}, \end{equation} where $v_{M,I}\in V$. Note that if $w$ is homogeneous of height $h$, then $w_j^\prec=0$ for all $j\not \equiv h \mod 2$. Observe that, by construction, if $w$ has height $h$, then \[ w\equiv w_h^\prec \mod M_{h-1}(V), \] and, in particular, $w$ and $w_h^\prec$ lie in the same class in $N_h(V)$.
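By the Poincaré-Birkhoff-Witt theorem, the monomials $\partial^M d^\prec_I$ with $2|M|+|I|=d$ give a basis of $(U_-)_d$ (ten odd generators $d_{ij}$ of degree 1, five even generators $\partial_i$ of degree 2), so $\dim (U_-)_d=\sum_{j\equiv d\,(\mathrm{mod}\,2)}\binom{10}{j}\binom{(d-j)/2+4}{4}$. A quick sketch tabulating these dimensions (our own illustration; the function name is not from the paper):

```python
# Illustrative sketch: dim (U_-)_d via PBW, with 10 odd generators d_ij
# of degree 1 and 5 even generators partial_i of degree 2, so d = j + 2|M|.
from math import comb

def dim_Uminus(d):
    total = 0
    for j in range(0, min(10, d) + 1):  # j = |I|, number of odd factors
        if (d - j) % 2 == 0:
            m = (d - j) // 2  # m = |M|
            # comb(10, j) choices of I, comb(m+4, 4) monomials partial^M
            total += comb(10, j) * comb(m + 4, 4)
    return total

print(dim_Uminus(1))  # 10: the d_ij alone
print(dim_Uminus(2))  # 50: the 5 partial_i plus comb(10, 2) products d_p d_q
```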
Theorem \ref{classS5} provides us the following description of possible singular vectors for $E(5,10)$. \begin{corollary}\label{d=h,h+4} Let $w\in M(V)$ be a singular vector of degree $d$ and height $h$, and let $w_h^\prec$ be its highest term. Let $w_j^\prec$ be as in \eqref{formofterms}. Then one of the following applies: \begin{enumerate} \item[(i)] $d=h$; \item[(ii)] $d=h+2$ and there exists $i\in [5]$ such that \[\sum_{I:\, |I|=h}d^\prec_I v_{e_i,I}\neq 0 \] is a highest weight vector for $\mathfrak{sl}_5$ in $N_h(V)$ and \[ w_h^\prec =\sum_{j=i}^5\partial _j \sum_{I:\, |I|=h} d^\prec_I\otimes v_{e_j,I} \] with \[\lambda(w)=\begin{cases} (0,0,0,0) & \textrm{if }i=1;\\ (1,0,0,0) & \textrm{if }i=2;\\ (0,1,0,0) & \textrm{if }i=3;\\ (0,0,1,0) & \textrm{if }i=4;\\ (0,0,0,1) & \textrm{if }i=5;\\ \end{cases}\hspace{5mm} \textrm{ and }\hspace{5mm} \lambda\Big(\sum_{I:\, |I|=h}d^\prec_I v_{e_i,I}\Big)= \begin{cases} (1,0,0,0) & \textrm{if }i=1;\\ (0,1,0,0) & \textrm{if }i=2;\\ (0,0,1,0) & \textrm{if }i=3;\\ (0,0,0,1) & \textrm{if }i=4;\\ (0,0,0,0) & \textrm{if }i=5;\\ \end{cases}\] \item[(iii)] $d=h+4$,\[\sum_{I:\, |I|=h}d_I v_{e_1+e_5,I}\neq 0 \] is a highest weight vector for $\mathfrak{sl}_5$ in $N_h(V)$ and \[ w_h^\prec =\partial _5\sum_{j=1}^5\partial _j \sum_{I:\, |I|=h} d^\prec_I\otimes v_{e_j+e_5,I} \] with $\lambda(w)=(0,0,0,1)$ and $\lambda\big(\sum_{I:\, |I|=h}d_I v_{e_1+e_5,I}\big)=(1,0,0,0)$. \end{enumerate} \end{corollary} \begin{proof} This is a straightforward consequence of Theorem \ref{classS5}. We know that $N_h(V)$ is a Verma module for $S_5$ and $w_h^\prec$ is annihilated by $(S_5)_{>0}$ and by $x_i \partial_j$ for all $i<j$. In particular, if $d\neq h$, we have that the class of $w_h^\prec$ in $N_h(V)$ is a genuine singular vector for $S_5$: the classification of singular vectors in Theorem \ref{classS5} then completes the proof. Note that if $d=h$, then the class of $w_h^\prec$ in $N_h(V)$ is actually a highest weight vector in $F_h(V)$, i.e. 
the $\mathfrak{sl}_5$-module we are inducing from. \end{proof} In this section we classify all possible singular vectors with degree strictly bigger than height, i.e. we treat the cases $d=h+2$ and $d=h+4$ in Corollary \ref{d=h,h+4}, and, in particular, we find all singular vectors of degree greater than 10. We fix the lexicographic order on $\Omega=\{\{1,2\},\{1,3\},\ldots,\{4,5\}\}$, i.e. we set \[ \{1,2\}\prec \{1,3\}\prec\{1,4\}\prec \{1,5\} \prec \{2,3\}\prec\{2,4\}\prec \{2,5\}\prec \{3,4\}\prec\{3,5\}\prec\{4,5\} \] and we simply write $L_h(V)$ instead of $L^\prec_h(V)$, $w_h$ instead of $w^\prec _h$ and $d_I$ instead of $d_I^\prec$. \begin{remark}\label{firstred}The following inclusions are immediate from the definition of the action of $\mathfrak {g}_0$ and $\mathfrak {g}_1$ on $M(V)$: \begin{align}\label{g0action} \mathfrak {g}_0.L_h(V)&\subseteq L_h(V)\oplus L_{h-2}(V)\\ \label{x5d45action} \mathfrak {g}_1.L_h(V) &\subseteq L_{h+1}(V)\oplus L_{h-1}(V)\oplus L_{h-3}(V). \end{align} \end{remark} Due to (\ref{g0action}), for $X\in\mathfrak {g}_0$ and $w\in L_h(V)$, we adopt the following notation: \begin{equation} X w=X^0 w+X^{-2}w \label{not1} \end{equation} with $X^0w \in L_h(V)$ and $X^{-2}w \in L_{h-2}(V)$. Similarly, due to (\ref{x5d45action}), for $X\in \mathfrak {g}_1$ and $w\in L_h(V)$ we write: \begin{equation} Xw=X^1w+X^{-1}w+X^{-3}w \label{not2} \end{equation} with $X^1w \in L_{h+1}(V)$, $X^{-1}w \in L_{h-1}(V)$ and $X^{-3}w \in L_{h-3}(V)$. The following simple observation will be crucial in the sequel. \begin{remark} Let $w\in M(V)$ be a singular vector of height $h$. Then for all $X\in \mathfrak {g}_1$ we have \begin{align}\label{1act} &X^1w_h=0\\ \label{-1act}&X^{-1}w_h+X^1w_{h-2}=0. \end{align} Moreover, for all $i=1,2,3,4$ and $E_i=x_i \partial_{i+1}\in\mathfrak {g}_0$ we have \begin{align}\label{0act}&E_i^0 w_h=0\\ \label{-2act}&E_i^{-2}w_h+E_i^0w_{h-2}=0. 
\end{align} \end{remark} It will be convenient to rephrase \eqref{1act} in the following equivalent way: for all $X\in \mathfrak {g}_1$ we have \begin{equation}\label{1actbis} Xw_h\equiv 0 \mod M_h(V). \end{equation} \begin{proposition}\label{d=h+4prima}Let $w$ be a singular vector in $M(F)$ with height $h$ and degree $d$ with $d=h+4$. Then $d=14$ and $F=F(1,0,0,0)$ is the standard representation of $\mathfrak {g}_0$. \end{proposition} \begin{proof} By Corollary \ref{d=h,h+4} we have \[ w_h=\partial_5\sum_{i,I}\partial_i d_I\otimes v_{i,I}. \] By applying \eqref{1actbis} with $X=x_kd_{kj}$ and all $k\neq j$, we deduce that if $v_{i,I}\neq 0$ then $I$ must contain all pairs containing $i$ and all pairs containing $5$, and, in particular, $w$ has height at least 7 since $v_{1,I}\neq 0$ for some $I$. If we apply \eqref{1actbis} with $X=x_1d_{23}+x_2d_{13}$, we obtain \[ -\partial_5 d_{23}\sum_I d_I v_{1,I}-\partial_5 d_{13}\sum_{I} d_I v_{2,I}\equiv 0 \mod M_h(V). \] We deduce that, if $v_{1,I}\neq 0$ and $I$ does not contain $23$, then it necessarily contains $24$, since all terms in the second summand do, and one can similarly show that $I$ must contain $34$ using $X=x_1d_{23}-x_3 d_{12}$. Permuting the roles of $2,3,4$, this argument shows that $I$ must contain at least two of the three pairs 23, 24, 34 and hence $w$ has height at least 9. A singular vector of height 9 and degree 13 produces a morphism $\varphi:M(0,0,0,1)\rightarrow M(\lambda)$ for some $\lambda$, by Corollary \ref{d=h,h+4}. The dual morphism $\varphi^*:M(\lambda^*)\rightarrow M(1,0,0,0)$ is also a morphism of degree 13 and so we necessarily have $\lambda=(1,0,0,0)$. Therefore, if $v_{1,I}\neq 0$ then the weight of $d_I$ must be $(0,0,0,0)$, but one can easily check that there are no $I$ with $|I|=9$ such that $\lambda(d_I)=(0,0,0,0)$ (see \cite[\S 6]{CC} for an easy way to compute the weight of the $d_I$'s). 
If $w$ has height 10 and degree 14, then Corollary \ref{d=h,h+4} and an argument analogous to the previous one show that $w\in M(1,0,0,0)$. \end{proof} Now we can rule out the only remaining case with $d=h+4$. \begin{proposition}\label{d!=h+4}Let $w$ be a singular vector of degree $d$ and height $h$. Then $d< h+4$. \end{proposition} \begin{proof} By Corollary \ref{d=h,h+4} and Proposition \ref{d=h+4prima} we can assume that $d=14$, $h=10$ and \[ w_{10}=\partial_5 (\alpha_1\partial_1 d_{\Omega}\otimes x_1+\cdots +\alpha_5\partial_5 d_{\Omega}\otimes x_5), \] for some $\alpha_1,\ldots,\alpha_5 \in \mathbb{C}$ with $\alpha_1\neq 0$, and that $w_8$ has the following form: \[ w_8=\sum_{I:|I|=8} \sum_{M: |M|=3} \sum_{k=1}^5\alpha_{M,I,k}\partial^M d_I \otimes x_k, \] for some $\alpha_{M,I,k}\in \mathbb{C}$. If we expand \[x_5 d_{45}(w_{10}+w_8)=\sum \beta_{M,I,k} \partial^M d_I\otimes x_k,\] by \eqref{-1act} we obtain the relation \[ \beta_{(1,0,0,1,0),\Omega\setminus \{23\},4}=-\alpha_4-\alpha_{(1, 0, 0, 1, 1), \Omega\setminus\{23,45\},4}=0. \] Similarly, if we expand \[x_4 d_{45}(w_{10}+w_8)=\sum \gamma_{M,I,k} \partial^M d_I\otimes x_k,\] by \eqref{-1act} we obtain the relation \[ \gamma_{(1,0,0,0,1),\Omega\setminus\{23\},4}=\alpha_1-\alpha_4-\alpha_{(1, 0, 0, 1, 1), \Omega\setminus\{23,45\},4}=0, \] and hence $\alpha_1=0$, a contradiction. \end{proof} Our next target is to deal with the case $d=h+2$ in Corollary \ref{d=h,h+4}. \begin{proposition}\label{hgeq8} Let $w$ be a singular vector in $M(V)$ of degree $d$ and height $h$, with $d=h+2$. Then $h\geq 8$. Moreover, if $h=8$ and \[w_8 =\sum_{j=i}^5\partial _j \sum_{I:\, |I|=8} d_I\otimes v_{j,I}\] as in Corollary \ref{d=h,h+4}(ii), then $v_{j,I}\neq 0$ only if $\lambda(\partial_jd_I)=(0,0,0,0)$. \end{proposition} \begin{proof} By \eqref{1actbis} we have for all $k\neq l$ \[ x_kd_{kl}\, w_h\equiv -d_{kl}\sum_I d_I\otimes v_{k,I}\equiv 0 \mod M_h(V).
\] This implies that, if $v_{k,I}\neq 0$, then $\{k,l\}\in I$ for all $l\neq k$. In particular, we immediately deduce that $h\geq 4$ and \[ w_h\equiv\sum_{j} \partial_j d_{1j}\cdots \hat{d_{jj}}\cdots d_{5j}\sum_{I_j}d_{I_j}\otimes v_{j,I_j} \mod M_{h-1}(V), \] where $I_1$ runs through all subsets of $\{\{2,3\},\{2,4\},\{2,5\},\{3,4\},\{3,5\},\{4,5\}\}$ of cardinality $h-4$, and similarly for $I_2,\ldots,I_5$, where the vectors $v_{j,I_j}$ have been reindexed. Now let $k,l,m$ be distinct integers in $[5]$ and use again \eqref{1actbis} with the element $X=x_kd_{lm}+x_ld_{km}$. We obtain: \[ d_{lm} d_{1k}\cdots \hat{d_{kk}}\cdots d_{5k}\sum_{I_k}d_{I_k}\otimes v_{k,I_k}+d_{km}d_{1l}\cdots \hat{d_{ll}}\cdots d_{5l}\sum_{I_l}d_{I_l}\otimes v_{l,I_l}\equiv 0 \mod M_h(V). \] Again, by \eqref{permform}, this implies that, if $v_{k,I_k}\neq 0$, then $I_k$ contains $\{l,m\}$ (in which case the corresponding summand is zero), or it contains both pairs $\{l,r\}$ and $\{l,s\}$, where $\{k,l,m,r,s\}=[5]$. It follows that $I_k$ must contain at least two pairs containing $l$ (if it misses one of the three pairs containing $l$, it must contain the other two). This implies that $I_k$ contains at least four pairs, and this completes the proof that $h\geq 8$. If $h=8$, by the previous argument, the two missing pairs in $I_k$ must contain the four elements distinct from $k$ exactly once, and so the weight of $\partial_k d_{1k}\cdots \hat d_{kk}\cdots d_{5k}d_{I_k}$ is $(0,0,0,0)$. \end{proof} We can now tackle the case of singular vectors of height 8 and degree 10. \begin{proposition}\label{h8d10} There are no singular vectors in $M(V)$ of height 8 and degree 10. \end{proposition} \begin{proof} Assume by contradiction that $w$ is a singular vector of height 8 and degree 10. For distinct $j,k,l,m\in [5]$, with $j<k$ and $l<m$, we let $d^\vee_{jk,lm}=d_{\Omega\setminus\{jk,lm\}}$. For example $d^\vee_{14,25}=d_{12}d_{13}d_{15}d_{23}d_{24}d_{34}d_{35}d_{45}$.
By Proposition \ref{hgeq8} we have that $w_8$ can be expressed in the following way \[ w_8=\sum_{i,j,k,l,m}\partial_id^\vee_{jk,lm}\otimes v_{i,jk,lm} \] where the sum is over all distinct $i,j,k,l,m \in [5]$ such that $j<k,l$, and $l<m$ (so we have exactly 15 summands). We also adopt the convention $v_{i,lm,jk}=v_{i,jk,lm}$ for notational convenience. By construction, we immediately have $(x_i\, d_{ik})^1w_8=0$ for all $i\neq k$. We will therefore consider elements in $\mathfrak {g}_1$ of the form $x_i d_{jk}+x_j d_{ik}$ and $x_i d_{jk}- x_k d_{ij}$ (for all $i<j<k$), which will allow us to deduce that $w_8=0$. To perform this computation efficiently we need the following notation. \begin{figure} \caption{The lexicographic order on $\Omega$ represented as a graph.}\label{figlex} \end{figure} Let \[\eta_{ij}=\begin{cases}1& \textrm{if } i+j=5 \\0&\textrm{otherwise}.\end{cases}\] The reason for introducing this function is the following: let $d(ij,kl)$ be the distance between the pairs $ij$ and $kl$ in the lexicographic order (i.e.\ in the graph represented in Figure \ref{figlex}); then one can easily check that for all $i<j<k$ we have \begin{equation}(-1)^{d(ik,jk)}=(-1)^{\eta_{ij}+1}\label{dist1} \end{equation} and \begin{equation}\label{dist2} (-1)^{d(ij,jk)}=(-1)^{\eta_{ij}+k+j+1}. \end{equation} Let $i,j,k,l,m$ be distinct such that $i<j<k$ and $l<m$.
We have \[ (x_id_{jk}+x_j d_{ik})^1w_8\equiv -d_{jk}d^\vee_{jk,lm}\otimes v_{i,jk,lm}-d_{ik}d^\vee_{ik,lm}\otimes v_{j,ik,lm} \mod M_8(V) \] By \eqref{permform} and \eqref{dist1} we have \begin{equation}\label{gb1} v_{i,jk,lm}=\begin{cases} (-1)^{\eta_{ij}+1}v_{j,ik,lm} &\textrm{if }i<l<j\\ (-1)^{\eta_{ij}}v_{j,ik,lm}& \textrm{otherwise.} \end{cases} \end{equation} Similarly, applying $x_i d_{jk}-x_k d_{ij}$ we obtain \[ (x_id_{jk}-x_k d_{ij})^1w_8\equiv -d_{jk}d^\vee_{jk,lm}\otimes v_{i,jk,lm}+d_{ij}d^\vee_{ij,lm}\otimes v_{k,ij,lm} \mod M_8(V) \] and by \eqref{permform} and \eqref{dist2} we have \begin{equation}\label{gb2} v_{i,jk,lm}=\begin{cases} (-1)^{\eta_{ij}+k+j}v_{k,ij,lm} &\textrm{if }i<l<j\\ (-1)^{\eta_{ij}+k+j+1}v_{k,ij,lm}& \textrm{otherwise.} \end{cases} \end{equation} By repeated application of Eq.\ \eqref{gb1} we obtain \begin{itemize} \item $v_{1,23,45}=v_{2,13,45}=v_{4,15,23}$ \item $v_{1,24,35}=v_{2,14,35}=-v_{3,15,24}$ \item $v_{1,25,34}=v_{2,15,34}=- v_{3,14,25}$ \item $v_{2,13,45}=v_{4,13,25}$ \item $v_{2,14,35}=-v_{3,14,25}$ \item $v_{2,15,34}=-v_{3,15,24}$ \item $v_{3,12,45}=v_{4,12,35}$ \end{itemize} and by repeated application of Eq. \eqref{gb2} we obtain \begin{itemize} \item $v_{1,23,45}=v_{3,12,45}=v_{5,14,23}$ \item $v_{1,24,35}=-v_{4,12,35}=v_{5,13,24}$ \item $v_{1,25,34}=v_{5,12,34}=-v_{4,13,25}$ \item $v_{2,13,45}=v_{5,13,24}$ \item $v_{2,14,35}=v_{5,14,23}$ \item $v_{2,15,34}= -v_{4,15,23}$ \item $v_{3,12,45}=v_{5,12,34}$. \end{itemize} All these equations together imply that all $v_{i,jk,lm}$ vanish. \end{proof} We now consider the case of a singular vector $w$ of height 9 and degree 11. In this case, as in the proof of Proposition \ref{hgeq8}, we can immediately deduce that $w_9$ must have the following form \begin{equation}\label{w9} w_9=\sum_{i,j,k}\partial_i d^\vee_{jk}\otimes v_{i,jk}, \end{equation} where the sum is over all distinct $i,j,k$ with $j<k$ (a total of 30 summands) and $d^\vee_{jk}=d_{\Omega\setminus\{{j,k}\}}$. 
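The signs above are easy to get wrong by hand. The following self-contained sketch (the variable names are ours, purely for illustration) checks the identities \eqref{dist1} and \eqref{dist2} and then verifies, by propagating the relations \eqref{gb1} and \eqref{gb2} with a signed union-find, that all the $v_{i,jk,lm}$ are forced to vanish, as claimed at the end of the proof of Proposition \ref{h8d10}.

```python
from itertools import combinations

pairs = list(combinations(range(1, 6), 2))   # Omega in lexicographic order
pos = {p: n for n, p in enumerate(pairs)}    # position along the path of Figure figlex

def eta(i, j):
    return 1 if i + j == 5 else 0

def dist(p, q):
    return abs(pos[p] - pos[q])

# the sign identities (dist1) and (dist2), for all i < j < k
identities_hold = all(
    (-1) ** dist((i, k), (j, k)) == (-1) ** (eta(i, j) + 1)
    and (-1) ** dist((i, j), (j, k)) == (-1) ** (eta(i, j) + k + j + 1)
    for i, j, k in combinations(range(1, 6), 3)
)

# signed union-find on the 15 unknowns v_{i,jk,lm}; a sign clash
# inside a component forces the whole component to be zero
parent, dead = {}, set()

def key(a, p, q):
    # canonical name, using the convention v_{a,lm,jk} = v_{a,jk,lm}
    return (a, tuple(sorted([tuple(sorted(p)), tuple(sorted(q))])))

def find(x):
    if x not in parent:
        parent[x] = (x, 1)
    y, s = parent[x]
    if y == x:
        return x, 1
    r, t = find(y)
    parent[x] = (r, s * t)
    return r, s * t

def union(x, y, s):
    # impose x = s * y
    rx, sx = find(x)
    ry, sy = find(y)
    if rx == ry:
        if sx != s * sy:
            dead.add(rx)
    else:
        parent[rx] = (ry, s * sx * sy)
        if rx in dead:
            dead.add(ry)

for i, j, k in combinations(range(1, 6), 3):
    l, m = sorted(set(range(1, 6)) - {i, j, k})
    e = eta(i, j)
    s1 = (-1) ** (e + 1) if i < l < j else (-1) ** e                     # relation (gb1)
    s2 = (-1) ** (e + k + j) if i < l < j else (-1) ** (e + k + j + 1)   # relation (gb2)
    union(key(i, (j, k), (l, m)), key(j, (i, k), (l, m)), s1)
    union(key(i, (j, k), (l, m)), key(k, (i, j), (l, m)), s2)

all_vanish = all(find(x)[0] in dead for x in list(parent))
```

Running the script confirms both sign identities and the vanishing of all 15 unknowns: the relations connect all the $v_{i,jk,lm}$ into a single component of constraints of the form $v=\pm v'$, and a sign clash inside it forces every variable to be zero.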
As in the case of height 8, we can now proceed by applying all elements in $\mathfrak {g}_1$ of the form $x_i d_{jk}+x_j d_{ik}$ and $x_i d_{jk}- x_k d_{ij}$. \begin{lemma}\label{lemh=9} If $w$ is a singular vector of height 9 and degree 11 with highest term $w_9$ as in \eqref{w9}, then \begin{itemize} \item $v_{1,23}=v_{2,13}=v_{3,12}$; \item $v_{1,24}=v_{2,14}=-v_{4,12}$; \item $v_{1,25}=v_{2,15}=v_{5,12}$; \item $v_{1,34}=v_{3,14}=v_{4,13}$; \item $v_{1,35}=v_{3,15}=-v_{5,13}$; \item $v_{1,45}=-v_{4,15}=-v_{5,14}$; \item $v_{2,34}=-v_{3,24}=-v_{4,23}$; \item $v_{2,35}=-v_{3,25}=v_{5,23}$; \item $v_{2,45}=v_{4,25}=v_{5,24}$; \item $v_{3,45}=v_{4,35}=v_{5,34}$. \end{itemize} \end{lemma} \begin{proof} All equalities are obtained using \eqref{permform} and \eqref{1actbis}, applying the elements $x_i d_{jk}+x_j d_{ik}$ and $x_i d_{jk}- x_k d_{ij}$. For example we have \[(x_2 d_{35}+x_3 d_{25}) w_9\equiv -d_{35} d_{35}^\vee\otimes v_{2,35}-d_{25} d_{25}^\vee\otimes v_{3,25}\equiv d_\Omega\otimes(-v_{2,35}-v_{3,25}) \mod M_8(V),\] hence $v_{2,35}=-v_{3,25}$. All other equalities can be obtained similarly.
\end{proof} Thanks to Lemma \ref{lemh=9} the highest term of the singular vector assumes the following form: \begin{align}\label{w9reduced} w_9& = \partial_1(d_{23}^\vee\otimes u_1-d_{24}^\vee\otimes u_2+d_{25}^\vee\otimes u_3-d^\vee_{34}\otimes u_4+d^\vee_{35}\otimes u_5-d_{45}^\vee \otimes u_6)\\ \nonumber & \hspace{3mm} +\partial_2(d_{13}^\vee\otimes u_1-d_{14}^\vee\otimes u_2+d_{15}^\vee\otimes u_3-d^\vee_{34}\otimes u_7+d_{35}^\vee \otimes u_8-d_{45}^\vee\otimes u_9)\\ \nonumber& \hspace{3mm} +\partial_3(d_{12}^\vee\otimes u_1-d_{14}^\vee\otimes u_4+d_{15}^\vee\otimes u_5+d^\vee_{24}\otimes u_7-d_{25}^\vee \otimes u_8-d_{45}^\vee\otimes u_{10})\\ \nonumber& \hspace{3mm} +\partial_4(d_{12}^\vee\otimes u_2-d_{13}^\vee\otimes u_4+d_{15}^\vee\otimes u_6+d^\vee_{23}\otimes u_7-d_{25}^\vee \otimes u_9-d_{35}^\vee\otimes u_{10})\\ \nonumber& \hspace{3mm} +\partial_5(d_{12}^\vee\otimes u_3-d_{13}^\vee\otimes u_5+d_{14}^\vee\otimes u_6+d^\vee_{23}\otimes u_8-d_{24}^\vee \otimes u_9-d_{34}^\vee\otimes u_{10}) \end{align} for suitable elements $u_1,\ldots,u_{10} \in V$. \begin{lemma}\label{g0h9}Let $E_i=x_i\partial_{i+1}\in \mathfrak {g}_0$ and $w$ be a singular vector of height 9 and degree 11 with $w_9$ as in \eqref{w9reduced} above. Then \begin{itemize} \item $E_1$ annihilates $u_1,u_2,u_3,u_4,u_5,u_6,u_{10}$, and $E_1.u_7= u_4$, $E_1.u_8= u_5$, $E_1.u_9=u_6$. \item $E_2$ annihilates $u_1,u_2,u_3,u_6,u_7,u_8,u_{9}$, and $E_2.u_4= u_2$, $E_2.u_5=u_3$, $E_2.u_{10}= u_9$. \item $E_3$ annihilates $u_1,u_3,u_4,u_5,u_7,u_8,u_{10}$, and $E_3.u_2= u_1$, $E_3.u_6= u_5$, $E_3.u_9= u_8$. \item $E_4$ annihilates $u_1,u_2,u_4,u_6,u_7,u_9,u_{10}$, and $E_4.u_3=u_2$, $E_4.u_5= u_4$, $E_4.u_8= u_7$. \end{itemize} \end{lemma} \begin{proof} Recall the definition of $E_1^0$ from \eqref{not1}.
By \eqref{0act} we have \begin{align*} 0&=E_1^0 w_9\\ &=\partial_2(d_{34}^\vee\otimes u_4-d_{35}^\vee\otimes u_5+d_{45}^\vee\otimes u_6)+\partial_3(-d_{24}^\vee\otimes u_4+ d_{25}^\vee \otimes u_5)\\ &\hspace{3mm}+\partial_4(- d_{23}^\vee\otimes u_4+ d_{25}^\vee\otimes u_6)+\partial_5(-d_{23}^\vee \otimes u_5+d_{24}^\vee \otimes u_6)\\ &\hspace{3mm}+\partial_1(d_{23}^\vee\otimes E_1.u_1-d_{24}^\vee\otimes E_1.u_2+d_{25}^\vee\otimes E_1.u_3-d^\vee_{34}\otimes E_1.u_4+d^\vee_{35}\otimes E_1.u_5-d_{45}^\vee \otimes E_1.u_6)\\ & \hspace{3mm} +\partial_2(d_{13}^\vee\otimes E_1.u_1-d_{14}^\vee\otimes E_1.u_2+d_{15}^\vee\otimes E_1.u_3-d^\vee_{34}\otimes E_1.u_7+d_{35}^\vee \otimes E_1.u_8-d_{45}^\vee\otimes E_1.u_9)\\ & \hspace{3mm} +\partial_3(d_{12}^\vee\otimes E_1.u_1-d_{14}^\vee\otimes E_1.u_4+d_{15}^\vee\otimes E_1.u_5+d^\vee_{24}\otimes E_1.u_7-d_{25}^\vee \otimes E_1.u_8-d_{45}^\vee\otimes E_1.u_{10})\\ & \hspace{3mm} +\partial_4(d_{12}^\vee\otimes E_1.u_2-d_{13}^\vee\otimes E_1.u_4+d_{15}^\vee\otimes E_1.u_6+d^\vee_{23}\otimes E_1.u_7-d_{25}^\vee \otimes E_1.u_9-d_{35}^\vee\otimes E_1.u_{10})\\ & \hspace{3mm} +\partial_5(d_{12}^\vee\otimes E_1.u_3-d_{13}^\vee\otimes E_1.u_5+d_{14}^\vee\otimes E_1.u_6+d^\vee_{23}\otimes E_1.u_8-d_{24}^\vee \otimes E_1.u_9-d_{34}^\vee\otimes E_1.u_{10}). \end{align*} The result for $E_1$ follows. The other statements are obtained similarly. \end{proof} Lemma \ref{g0h9} is depicted in Figure \ref{Weightsh=9}, where an arrow from $u_i$ to $u_k$ labelled $E_j$ means $E_j.u_i=u_k$ and the absence of an arrow labelled $E_j$ coming out of $u_i$ means $E_j.u_i=0$. \begin{figure} \caption{The action of the elements $E_j$ on the vectors $u_1,\ldots,u_{10}$.}\label{Weightsh=9} \end{figure} \begin{proposition}\label{h93cases} Let $w\in M(V)$ be a singular vector of height 9 and degree 11 and let $w_9$ be as in \eqref{w9reduced}.
Then one of the following applies: \begin{enumerate} \item $u_1$ is a highest weight vector, $V=F(0,0,1,0)$ and $\lambda(w)=(0,0,0,0)$; \item $u_1=\cdots=u_6=0$, $u_7$ is a highest weight vector, $V=F(0,0,0,1)$ and $\lambda(w)=(1,0,0,0)$. \item $u_1=u_2=\cdots=u_9=0$, $u_{10}$ is a highest weight vector, $V=F(0,0,0,0)$ and $\lambda(w)=(0,1,0,0)$. \end{enumerate} \end{proposition} \begin{proof} If $u_1\neq 0$ then it is a highest weight vector in $V$ by Lemma \ref{g0h9}, and by Corollary \ref{d=h,h+4} we necessarily have $\lambda(w)=(0,0,0,0)$ and hence $\lambda(u_1)=(0,0,1,0)$. If $u_1=0$ and $u_2\neq 0$, then $u_2$ is a highest weight vector by Lemma \ref{g0h9}, and by Corollary \ref{d=h,h+4} we necessarily have $\lambda(w)=(0,0,0,0)$ and hence $\lambda(u_2)=(0,1,-1,1)$, which is impossible since it is not a dominant weight. Similarly, if $u_1,u_2=0$ and $u_3\neq 0$ then $u_3$ would be a highest weight vector of weight $(0,1,0,-1)$. If $u_1=u_2=u_3=0$ and $u_4\neq 0$ then $u_4$ would be a highest weight vector of weight $(1,-1,0,1)$. If $u_1=u_2=u_3=u_4=0$ then $u_5$ would be a highest weight vector of weight $(1,-1,1,-1)$. If $u_1=\cdots=u_5=0$ then $u_6$ would be a highest weight vector of weight $(1,0,-1,0)$. If $u_1=\cdots =u_6=0$ and $u_7\neq 0$ then by Corollary \ref{d=h,h+4} we have $\lambda(w)=(1,0,0,0)$ and so $\lambda(u_7)=(0,0,0,1)$. If $u_1=\cdots =u_7=0$ and $u_8\neq 0$, then $\lambda(w)=(1,0,0,0)$ by Corollary \ref{d=h,h+4} and hence $\lambda(u_8)=(0,0,1,-1)$. If $u_1=\cdots =u_8=0$ and $u_9\neq 0$, then $\lambda(w)=(1,0,0,0)$ by Corollary \ref{d=h,h+4} and hence $\lambda(u_9)=(0,1,-1,0)$. Finally, if $u_1=\cdots =u_9=0$ then $u_{10}\neq 0$ is a highest weight vector, $\lambda(w)=(0,1,0,0)$ by Corollary \ref{d=h,h+4} and so $\lambda (u_{10})=(0,0,0,0)$. \end{proof} Now we show that cases (1) and (3) in Proposition \ref{h93cases} cannot occur. Observe that by Theorem \ref{duality} it is enough to show that case (3) does not occur.
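The case analysis in the proof of Proposition \ref{h93cases} can be cross-checked mechanically: transcribing the ten candidate weights $\lambda(u_i)$ computed above and keeping only the dominant ones leaves exactly the three cases of the statement (an illustrative sketch, with our own variable names):

```python
# candidate weights lambda(u_i) as computed in the proof of Proposition h93cases
candidates = {
    1: (0, 0, 1, 0), 2: (0, 1, -1, 1), 3: (0, 1, 0, -1), 4: (1, -1, 0, 1),
    5: (1, -1, 1, -1), 6: (1, 0, -1, 0), 7: (0, 0, 0, 1), 8: (0, 0, 1, -1),
    9: (0, 1, -1, 0), 10: (0, 0, 0, 0),
}

# a weight of sl_5 is dominant iff all its fundamental-weight coordinates are >= 0
dominant = {i for i, lam in candidates.items() if min(lam) >= 0}
```

Only $u_1$, $u_7$ and $u_{10}$ carry a dominant weight, matching cases (1), (2) and (3).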
\begin{lemma}\label{h9case3} There are no singular vectors as in Proposition \ref{h93cases} (3). \end{lemma} \begin{proof} In this case we have: \[ w_9=(\partial_3 d_{45}^\vee+\partial_4 d_{35}^\vee+\partial_5 d_{34}^\vee)\otimes u, \] where $u$ is a generator of the trivial $\mathfrak {g}_0$-module. By construction $w_9$ satisfies \eqref{1act} for all $X\in \mathfrak {g}_1$ and \eqref{0act} for all $i$. We will therefore also take into account \eqref{-1act} and \eqref{-2act}, showing that there exists no $w_7$ which satisfies these equations. We start by computing \begin{align*} (x_5 d_{45})^{-1}w_9&\equiv (-\partial_1 d_{23,34}^\vee-\partial_2 d_{13,34}^\vee+\partial_3 d_{13,24}^\vee+\partial_4 d_{13,23}^\vee)\otimes u \mod M_7(V). \end{align*} We have \begin{equation}\label{w7} w_7=\sum_{i\leq j}\partial_i \partial_j \sum_{p_1\prec p_2\prec p_3} d^\vee_{p_1,p_2,p_3} \otimes v_{i,j,\{p_1,p_2,p_3\}}, \end{equation} and \[ (x_5 d_{45})^1w_7\equiv \sum_i (1+\delta_{i,5})\partial_i \sum_{p_1\prec p_2 \prec \{4,5\}}d^\vee_{p_1,p_2}d_{45}\otimes v_{i,5,\{p_1,p_2,45\}} \mod M_7(V). \] We deduce in particular that $v_{4,5,\{13,23,45\}}=-u\neq 0$ by \eqref{-1act}. Next observe that $x_4 \partial_5 w_9= 0$ and \[(x_4 \partial_5)^0\, \partial_4 \partial_5 d_{13,23,45}^\vee\otimes u\equiv-\partial_5^2 d_{13,23,45}^\vee\otimes u \mod M_6(V), \] and no other term of $w_7$ in \eqref{w7} can ``produce'' a summand $\partial_5^2 d_{13,23,45}^\vee \otimes u$ by applying $x_4 \partial_5$. This would imply $v_{4,5,\{13,23,45\}}=0$ by \eqref{-2act}, a contradiction. \end{proof} Case (2) in Proposition \ref{h93cases} leads us to the following surprising discovery.
\begin{theorem}\label{w[11]} The following vector is a (unique up to multiplication by a scalar) singular vector in $M(0,0,0,1)$ of degree 11, height 9, and weight $(1,0,0,0)$: \begin{align*} w[11]&=d_{12}d_{13}d_{14}d_{15}\Big( -\partial_2 d_{23}d_{24}d_{25}d_{35}d_{45}\otimes x^*_5 -\partial_2 d_{23}d_{24}d_{25}d_{34}d_{45}\otimes x^*_4 \\ & -\partial_2 d_{23}d_{24}d_{25}d_{34}d_{35}\otimes x^*_3 +\partial_3d_{23}d_{25}d_{34}d_{35}d_{45}\otimes x^*_5 +\partial_3 d_{23}d_{24}d_{34}d_{35}d_{45}\otimes x^*_4\\ &+\partial_3 d_{23}d_{24}d_{25}d_{34}d_{35}\otimes x^*_2 +\partial_4 d_{24}d_{25}d_{34}d_{35}d_{45}\otimes x^*_5 -\partial_4 d_{23}d_{24}d_{34}d_{35}d_{45}\otimes x^*_3\\ &+\partial_4 d_{23}d_{24}d_{25}d_{34}d_{45}\otimes x^*_2 -\partial_5 d_{24}d_{25}d_{34}d_{35}d_{45}\otimes x^*_4 -\partial_5 d_{23}d_{25}d_{34}d_{35}d_{45}\otimes x^*_3\\ &+\partial_5 d_{23}d_{24}d_{25}d_{35}d_{45}\otimes x^*_2 -\partial_1 \partial_2 d_{23} d_{24} d_{25} \otimes x^*_2 +\partial_2^2 d_{23} d_{24} d_{25} \otimes x^*_1 +\partial_1 \partial_3 d_{23} d_{25} d_{34}\otimes x^*_2\\ &-\partial_2 \partial_3 d_{23} d_{25} d_{34}\otimes x^*_1 +\partial_1 \partial_4 d_{24} d_{25} d_{34}\otimes x^*_2 -\partial_2 \partial_4 d_{24} d_{25} d_{34}\otimes x^*_1 -\partial_1 \partial_3 d_{23} d_{24} d_{35} \otimes x^*_2\\ &+\partial_2 \partial_3 d_{23} d_{24} d_{35} \otimes x^*_1 +\partial_1\partial_5 d_{24} d_{25} d_{35} \otimes x^*_2 -\partial_2 \partial_5 d_{24} d_{25} d_{35} \otimes x^*_1 -\partial_1 \partial_3 d_{23} d_{34}d_{35} \otimes x^*_3\\ &+\partial_3^2 d_{23} d_{34}d_{35} \otimes x^*_1 -\partial_1 \partial_4 d_{24} d_{34}d_{35} \otimes x^*_3 +\partial_3 \partial_4 d_{24} d_{34}d_{35} \otimes x^*_1 -\partial_1 \partial_5 d_{25} d_{34} d_{35} \otimes x^*_3\\ &+\partial_3 \partial_5 d_{25} d_{34}d_{35} \otimes x^*_1 -\partial_1 \partial_4 d_{23} d_{24} d_{45} \otimes x^*_2 +\partial_2 \partial_4 d_{23} d_{24} d_{45} \otimes x^*_1 -\partial_1 \partial_5 d_{23} d_{25} d_{45} \otimes 
x^*_2\\ &+\partial_2 \partial_5 d_{23} d_{25} d_{45} \otimes x^*_1 -\partial_1 \partial_3 d_{23} d_{34}d_{45} \otimes x^*_4 +\partial_3 \partial_4 d_{23} d_{34}d_{45} \otimes x^*_1 -\partial_1 \partial_4 d_{24} d_{34}d_{45} \otimes x^*_4\\ &+\partial_4^2 d_{24} d_{34}d_{45} \otimes x^*_1 -\partial_1 \partial_5 d_{25} d_{34}d_{45} \otimes x^*_4 +\partial_4 \partial_5 d_{25} d_{34}d_{45} \otimes x^*_1 -\partial_1 \partial_3 d_{23} d_{35} d_{45} \otimes x^*_5\\ &+\partial_3 \partial_5 d_{23} d_{35} d_{45} \otimes x^*_1 -\partial_1 \partial_4 d_{24} d_{35} d_{45} \otimes x^*_5 +\partial_4 \partial_5 d_{24} d_{35} d_{45} \otimes x^*_1 -\partial_1 \partial_5 d_{25} d_{35} d_{45} \otimes x^*_5\\ &+\partial_5^2 d_{25} d_{35} d_{45} \otimes x^*_1 -\partial_1^2 \partial_3 d_{23} \otimes x^*_2 +\partial_1 \partial_2 \partial_3 d_{23} \otimes x^*_1 -\partial_1^2 \partial_4 d_{24} \otimes x^*_2\\ &+\partial_1 \partial_2 \partial_4 d_{24} \otimes x^*_1 -\partial_1^2 \partial_5 d_{25} \otimes x^*_2 +\partial_1 \partial_2 \partial_5 d_{25} \otimes x^*_1\Big). \end{align*} \end{theorem} \begin{proof} We prefer to omit the long but elementary computations that show that this is indeed a singular vector. Its uniqueness follows from the fact that the term $w_9$ is determined up to a scalar by Lemma \ref{g0h9} and Proposition \ref{h93cases}. So if $w'$ is another singular vector with $w_9=w'_9$ then $w-w'$ would be a singular vector of degree 11 with height at most 7 and this would contradict Proposition \ref{d!=h+4}. \end{proof} The last possible case with $d>h$ is ruled out by the following result. \begin{proposition} There are no singular vectors of height 10 and degree 12. \end{proposition} \begin{proof} By hypothesis we are in one of the five cases in $(ii)$ of Corollary \ref{d=h,h+4}. 
Suppose there exists a singular vector $w$ of height 10 and degree 12 in $M(1,0,0,0)$ with weight $(0,0,0,0)$, i.e., by Theorem \ref{classS5}, $$w_{10}=\sum_{i=1}^5\partial_id_{\Omega}\otimes x_i.$$ Then there exists a morphism $\varphi$ of $E(5,10)$-modules, $$\varphi: M(0,0,0,0)\rightarrow M(1,0,0,0).$$ By duality (see Theorem \ref{duality}), there exists a morphism $$\varphi^*: M(0,0,0,1)\rightarrow M(0,0,0,0),$$ i.e., a singular vector $\bar{w}$ of degree 12, of weight $(0,0,0,1)$ in $M(0,0,0,0)$. Then, by Theorem \ref{classS5} and Proposition \ref{d!=h+4}, we have: $\bar{w}_{10}=\partial_5d_{\Omega}\otimes 1$ with $1$ the highest weight vector in $F(0,0,0,0)$. Let $$\bar{w}_8=\sum_{i,j,k,l}\alpha_{ij}\partial_i\partial_jd^\vee_{k5,l5}\otimes 1+\sum_{i,j,k,l,t}\partial_i\partial_5 (\beta_{i,jk}d_{jk,lt}^\vee \otimes 1+\beta_{i,jl}d_{jl,kt}^\vee \otimes 1+\beta_{i,jt}d_{jt,kl}^\vee \otimes 1)$$ for some $\alpha_{ij}, \beta_{i,rs}\in\mathbb{C}$, where the first sum is over all $\{i,j,k,l\}=[4]$ with $i<j$ and $k<l$, and the second sum is over all $\{i,j,k,l,t\}=[5]$ with $j<k<l<t$. We apply condition \eqref{-1act} with $h=10$, using the following elements $X$ in $\mathfrak {g}_1$: \begin{itemize} \item[i)] $X=x_5d_{45}$, hence getting $\beta_{2,45}=-1=\beta_{3,45}$; \item[ii)] $X=x_5d_{13}+x_3d_{15}$, hence getting $\alpha_{23}=1=-\beta_{2,45}$; \item[iii)] $X=x_5d_{12}-x_1d_{25}$, hence getting $\alpha_{13}=-1=\beta_{3,45}$; \item[iv)] $X=x_1d_{25}+x_2d_{15}$, hence getting $\alpha_{13}=\alpha_{23}$. \end{itemize} These conditions lead to a contradiction; we therefore conclude that there is no singular vector of degree 12 and height 10 as in R1 and R5 of Theorem \ref{classS5}.
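Conditions i)-iv) form an inhomogeneous linear system in the four coefficients involved, and the contradiction is precisely its inconsistency; this can be double-checked with a short computation (a sketch with our own variable ordering, using the Rouche-Capelli rank criterion):

```python
from fractions import Fraction

# unknowns ordered as (beta_{2,45}, beta_{3,45}, alpha_{13}, alpha_{23});
# each row encodes one linear condition  a . x = c  from items i)-iv)
rows = [
    ([1, 0, 0, 0], -1),   # i)   beta_{2,45} = -1
    ([0, 1, 0, 0], -1),   # i)   beta_{3,45} = -1
    ([0, 0, 0, 1], 1),    # ii)  alpha_{23}  =  1
    ([1, 0, 0, 1], 0),    # ii)  alpha_{23}  = -beta_{2,45}
    ([0, 0, 1, 0], -1),   # iii) alpha_{13}  = -1
    ([0, -1, 1, 0], 0),   # iii) alpha_{13}  =  beta_{3,45}
    ([0, 0, 1, -1], 0),   # iv)  alpha_{13}  =  alpha_{23}
]

def rank(mat):
    # Gauss-Jordan elimination over the rationals
    mat = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(mat[0])):
        piv = next((i for i in range(r, len(mat)) if mat[i][c] != 0), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][c] != 0:
                f = mat[i][c] / mat[r][c]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

A = [a for a, _ in rows]
Ac = [a + [c] for a, c in rows]
inconsistent = rank(Ac) > rank(A)   # no solution iff augmented rank is larger
```

The coefficient matrix has rank 4 while the augmented matrix has rank 5, so the system has no solution, as claimed.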
Now assume that $w$ is a singular vector as in R2, i.e., by Corollary \ref{d=h,h+4}, $w_{10}=\sum_{i=2}^5\partial_id_{\Omega}\otimes x_{1i}$; this means that there exists a morphism $\varphi$, of degree 12, of $E(5,10)$-modules: $$\varphi: M(1,0,0,0) \rightarrow M(0,1,0,0).$$ By duality this means that there exists a morphism $$\varphi^*: M(0,0,1,0) \rightarrow M(0,0,0,1)$$ of degree 12, i.e., a singular vector $\bar{w}$ of degree 12 and weight $(0,0,1,0)$ in $M(0,0,0,1)$. By Theorem \ref{classS5} and Proposition \ref{h8d10}, $\bar{w}$ is necessarily as in R4, with height 10, i.e., $$\bar{w}_{10}=\partial_4d_{\Omega}\otimes x_5^*-\partial_5d_{\Omega}\otimes x_4^*.$$ We have: $$(x_5d_{45})^{-1}(\bar{w}_{10})=\partial_4(d_{12}^\vee \otimes x_{3}^*+d_{13}^\vee\otimes x_{2}^*+d_{23}^\vee \otimes x_{1}^*) +(\partial_3d_{12}^\vee+\partial_2d_{13}^\vee +\partial_1d_{23}^\vee)\otimes x_{4}^*.$$ By condition \eqref{-1act} with $h=10$ and $X=x_5d_{45}$ it follows that in the expression of $\bar{w}_8$ the summand $\partial_4\partial_5d_{23,45}^\vee \otimes x_{1}^*$ must appear with coefficient equal to $1$. Now we have: $$E_4(\partial_4\partial_5d_{23,45}^\vee \otimes x_{1}^*)= (E_4)^0(\partial_4\partial_5d_{23,45}^\vee\otimes x_{1}^*)=-\partial_5^2d_{23,45}^\vee \otimes x_{1}^*.$$ This contradicts condition \eqref{-2act} for $h=10$. Indeed, one can see that no term in $E_4^{-2}\bar{w}_{10}+E_4^0\bar{w}_8$ can cancel the summand $\partial_5^2d_{23,45}^\vee \otimes x_{1}^*$.
Finally, let us assume that there exists a singular vector of degree 12 and height 10, as in R3, i.e., $$w_{10}=\partial_3d_{\Omega}\otimes x_{45}^*+\partial_4d_{\Omega}\otimes x_{53}^*+\partial_5d_{\Omega}\otimes x_{34}^*.$$ Then we have: \begin{align*}(x_5d_{45})^{-1}(w_{10})= &\partial_3(d_{12}^\vee \otimes x_{34}^*+d_{13}^\vee \otimes x_{24}^*+d_{23}^\vee \otimes x_{14}^*)\\ & +\partial_4(-d_{13}^\vee \otimes x_{23}^*-d_{23}^\vee \otimes x_{13}^* )- (\partial_3d_{12}^\vee +\partial_2d_{13}^\vee+\partial_1d_{23}^\vee)\otimes x_{34}^*. \end{align*} Therefore, arguing as above, by condition \eqref{-1act} with $h=10$ and $X=x_5d_{45}$, in the expression of ${w}_8$ the summand $\partial_4\partial_5d_{23,45}^\vee \otimes x_{13}^*$ must appear with coefficient $1$. Then we have: $$E_4(\partial_4\partial_5d_{23,45}^\vee \otimes x_{13}^*)= (E_4)^0(\partial_4\partial_5d_{23,45}^\vee \otimes x_{13}^*)=-\partial_5^2d_{23,45}^\vee \otimes x_{13}^*.$$ This contradicts condition \eqref{-2act} for $h=10$. Indeed, one can see that no term in $E_4^{-2}{w}_{10}+E_4^0{w}_8$ can cancel the summand $\partial_5^2d_{23,45}^\vee \otimes x_{13}^*$. \end{proof} \section{Properties of $\omega_I$} In order to study morphisms between generalized Verma modules and to better understand their structure as $\mathfrak {g}_0$-modules, a particular basis of $U_-$ has been introduced in \cite{CC}. The main goal of this section is to show that this basis is also extremely useful when considering the action of the whole $\mathfrak {g}$ on a Verma module. We recall some technical notation needed to give an explicit definition of such a basis. We refer the reader to \cite[\S5]{CC} for further details. We recall that $(U_-)_d$ denotes the homogeneous component of $U_-$ of degree $d$. We let \[\mathcal I_d=\{ I=(I_1,\ldots,I_d):\, I_l=(i_l,j_l) \textrm{ with $ 1\leq i_l,j_l\leq 5$ for every $l=1,\ldots,d$}\}.
\] If $I=(I_1,\ldots,I_d)\in \mathcal I_d$ we let $d_I=d_{I_1}\cdots d_{I_d}\in (U_-)_d$, with $d_{I_l}=d_{i_l j_l}$. Note that this notation is slightly different from the one adopted in Sections \ref{first} and \ref{>10}. We let $\mathcal S_d$ be the set of subsets of $[d]$ of cardinality 2, so that $|\mathcal S_d|=\binom{d}{2}$. Note that elements in $\mathcal I_d$ are ordered tuples of ordered pairs, while elements in $\mathcal S_d$ are unordered pairs. If $\{k,l\}\in \mathcal S_d$ and $I\in \mathcal I_d$ we let $t_{I_k,I_l}=t_{i_k,j_k, i_l,j_l}$ and $ \varepsilon_{I_k,I_l}=\varepsilon_{i_k,j_k, i_l,j_l} $. We also let \[ D_{\{k,l\}}(I)=\frac{1}{2} (-1)^{l+k} \varepsilon_{I_k,I_l}\partial_{t_{I_k,I_l}}\in (U_-)_2. \] \begin{definition} A subset $S$ of $\mathcal S_d$ is \emph{self-intersection free} if its elements are pairwise disjoint.\end{definition} For example $S=\{\{1,3\},\{2,5\}, \{4,7\}\}$ is self-intersection free while $\{\{1,3\},\{2,5\}, \{3,7\}\}$ is not. We denote by $\textrm{SIF}_d$ the set of self-intersection free subsets of $\mathcal S_d$. \begin{definition} Let $\{k,l\}, \{h,m\}\in \mathcal S_d$ be disjoint pairs. We say that $\{k,l\}$ and $\{h,m\}$ \emph{cross} if exactly one element in $\{k,l\}$ is between $h$ and $m$. If $S\in \textrm{SIF}_d$ we let the crossing number $c(S)$ of $S$ be the number of pairs of elements in $S$ that cross. \end{definition} \begin{definition} Let $S=\{S_1,\ldots,S_r\}\in \textrm{SIF}_d$. We let \[ D_S(I)=\prod_{j=1}^rD_{S_j}(I)\in (U_-)_{2r} \] if $r\geq 1$ and $D_{\emptyset}(I)=1$ (note that the order of multiplication is irrelevant as the elements $D_{S_j}(I)$ commute among themselves). \end{definition} \begin{definition} For $I=(I_1,\ldots,I_d)\in \mathcal I_d$ and $S=\{S_1,\ldots,S_r\}\in \textrm{SIF}_d$ we let $C_S(I)\in \mathcal I_{d-2r}$ be obtained from $I$ by removing all $I_j$ such that $j\in S_k$ for some $k\in [r]$.
\end{definition} \begin{definition}\label{defomega} For all $I\in \mathcal I_d$ we let \[ \omega_I=\sum_{S\in \textrm{SIF}_d}(-1)^{c(S)}D_S(I)\,d_{C_S(I)}\in (U_-)_d. \] \end{definition} If $I\in \mathcal I_d$ we let $x_I=x_{I_1}\wedge \cdots \wedge x_{I_d}\in \inlinewedge^d (\inlinewedge^2(\mathbb{C}^5))$. The main properties of the elements $\omega_I$ have been obtained in {\cite[Proposition 5.6 and Theorem 5.8]{CC}} and can be summarized in the following result. \begin{proposition}\label{isomega}Let $d=0,\ldots,10$. Then the map $\varphi:\inlinewedge^d (\inlinewedge^2(\mathbb{C}^5))\rightarrow U_-$, given by \[ \varphi(x_{I_1}\wedge\cdots \wedge x_{I_d})=\omega_{I_1,\ldots, I_d} \] for all $(I_1,\ldots,I_d)\in \mathcal I_d$, is a (well-defined) injective morphism of $\mathfrak {g}_0$-modules. \end{proposition} We will also need the following very useful notation. Let $I\in \mathcal I_d$ and $J\in \mathcal I_c$, with $c\leq d$. If there exists $K\in \mathcal I_{d-c}$ such that $x_I=x_J\wedge x_K\neq 0$ we let $\omega_{I\setminus J}=\omega_K$, and we let $\omega_{I\setminus J}=0$ if such $K$ does not exist. Note that this notation is well-defined also thanks to Proposition \ref{isomega}. For example, in order to compute $\omega_{(12,24,35,54)\setminus (24,45)}$ we observe that $x_{12,24,35,54}=x_{24,45}\wedge x_{12,35}$, therefore $\omega_{(12,24,35,54)\setminus (24,45)}=\omega_{12,35}$. Instead of the explicit definition of the elements $\omega_I$ given in Definition \ref{defomega}, we will need some (equivalent) recursive properties that they satisfy. \begin{lemma} Let $I=(I_1,\ldots,I_d)$. Then for all $k>1$ we have \[\sum_{S\in \textrm{SIF}_d:\,\{1,k\}\in S}(-1)^{c(S)}D_S(I)\,d_{C_S(I)}=-\frac{1}{2} \varepsilon_{I_1,I_k}\partial_{t_{I_1,I_k}}\omega_{(I_2,\ldots,I_d)\setminus I_k}. 
\] Furthermore \begin{equation} \label{ricomega}\omega_I=d_{I_1}\omega_{(I_2,\ldots,I_d)}-\frac{1}{2}\sum_{k=2}^d \varepsilon_{I_1,I_k}\partial_{t_{I_1,I_k}}\omega_{(I_2,\ldots,I_d)\setminus I_k}. \end{equation} \end{lemma} \begin{proof} We prove the first statement for all $I\in \mathcal I_d$ by induction on $k$. If $k=2$ we have $D_{\{1,2\}}(I)=-\frac{1}{2} \varepsilon_{I_1,I_2}\partial_{t_{I_1,I_2}}$ and so, letting $J=(J_1,\ldots,J_{d-2})=(I_3,\ldots,I_{d})$, we have \begin{align*}\sum_{S\in \textrm{SIF}_d:\,\{1,2\}\in S}(-1)^{c(S)}D_S(I)\,d_{C_S(I)}&=-\frac{1}{2} \varepsilon_{I_1,I_2}\partial_{t_{I_1,I_2}} \sum_{S\in \textrm{SIF}_{d-2}}(-1)^{c(S)}D_S(J)d_{C_S(J)}\\ &= -\frac{1}{2} \varepsilon_{I_1,I_2}\partial_{t_{I_1,I_2}} \omega_{I_3,\ldots,I_d}\\ &=-\frac{1}{2} \varepsilon_{I_1,I_2}\partial_{t_{I_1,I_2}} \omega_{(I_2,\ldots,I_d)\setminus I_2}. \end{align*} If $k>2$ we let $J=(I_1,\ldots,I_{k-2},I_k,I_{k-1},I_{k+1},\ldots,I_d)$ be obtained from $I$ by swapping $I_k$ and $I_{k-1}$. We also observe that swapping $k$ with $k-1$ provides a bijection $S\mapsto S'$ between elements in $\textrm{SIF}_d$ containing $\{1,k\}$ and elements in $\textrm{SIF}_d$ containing $\{1,k-1\}$; we also observe that by this bijection we have $d_{C_S(I)}=d_{C_{S'}(J)}$ and $(-1)^{c(S)}D_S(I)=-(-1)^{c(S')}D_{S'}(J)$: indeed if there exists $l$ such that $\{k-1,l\}\in S$ then $(-1)^{c(S)}=-(-1)^{c(S')}$ and $D_S(I)=D_{S'}(J)$, and if such an element $l$ does not exist then $(-1)^{c(S)}=(-1)^{c(S')}$ and $D_S(I)=-D_{S'}(J)$. Therefore, using the inductive hypothesis, we have \begin{align*} \sum_{S\in \textrm{SIF}_d:\,\{1,k\}\in S}(-1)^{c(S)}D_S(I)\,d_{C_S(I)}&= -\sum_{S'\in \textrm{SIF}_d:\,\{1,k-1\}\in S'}(-1)^{c(S')}D_{S'}(J)\,d_{C_{S'}(J)}\\ &=\frac{1}{2} \varepsilon_{J_1,J_{k-1}} \partial_{t_{J_1,J_{k-1}}}\omega_{(J_2,\ldots,J_d)\setminus J_{k-1}}\\ &=-\frac{1}{2} \varepsilon_{I_1,I_{k}}\partial_{t_{I_1,I_{k}}}\omega_{(I_2,\ldots,I_d)\setminus I_{k}}.
\end{align*} Equation \eqref{ricomega} now follows from the first part, observing that the first summand in the right-hand side of \eqref{ricomega} is just \[ \sum_{S\in \textrm{SIF}_d:\,\{1,k\}\notin S \,\forall k}(-1)^{c(S)}D_S(I)\,d_{C_S(I)}. \] \end{proof} The following result provides probably the most convenient way to compute the elements $\omega_I$ recursively. \begin{proposition}\label{omegarec}Let $I=(I_1,\ldots,I_d)$. Then \[\omega_I=\frac{1}{d}\sum_{j=1}^d d_{I_j} \omega_{I\setminus I_j}.\] \end{proposition} \begin{proof} By \eqref{ricomega} and Proposition \ref{isomega} we have \begin{align*} \omega_I&=\frac{1}{d}\sum_{j=1}^d (-1)^{j+1} \omega_{I_j,I_1,\ldots,\hat I_j, \ldots,I_d}\\ &=\frac{1}{d}\sum_{j=1}^d(-1)^{j+1} \big(d_{I_j}\omega_{ I_1,\ldots,\hat I_j,\ldots,I_d} -\frac{1}{2}\sum_{k\neq j} \varepsilon_{I_j,I_k} \partial _{t_{I_j,I_k}}\omega_{(I_1,\ldots,\hat I_j,\ldots,I_d)\setminus I_k} \big)\\ &=\frac{1}{d} \sum_{j=1}^d \big(d_{I_j}\omega_{I\setminus I_j}-\frac{1}{2}\sum_{k\neq j}\varepsilon_{I_j,I_k} \partial _{t_{I_j,I_k}}\omega_{I\setminus (I_j,I_k)}\big)\\ &=\frac{1}{d} \sum_{j=1}^d d_{I_j}\omega_{I\setminus I_j}, \end{align*} since, clearly, $\omega_{I\setminus(I_j,I_k)}=-\omega_{I\setminus(I_k,I_j)}$ for all $k\neq j$. \end{proof} The following is an immediate consequence which is not needed in the sequel but sheds more light on the symmetric nature of the elements $\omega_I$. \begin{corollary} We have \[\omega_{I_1,\ldots,I_d}=\frac{1}{d!}\sum_{\sigma \in S_d}\varepsilon_{\sigma} d_{I_{\sigma(1)}}\cdots d_{I_{\sigma(d)}}. \] \end{corollary} \begin{proof} We proceed by induction on $d$, the result being trivial for $d=1$.
We have \begin{align*} \frac{1}{d!}\sum_{\sigma \in S_d}\varepsilon_{\sigma} d_{I_{\sigma(1)}}\cdots d_{I_{\sigma(d)}}&= \frac{1}{d!}\sum_{j=1}^d\sum_{\sigma \in S([d]\setminus \{j\})}(-1)^{j-1}\varepsilon_\sigma d_{I_j}d_{I_{\sigma(1)}}\cdots \hat d_{I_{\sigma(j)}} \cdots d_{I_{\sigma(d)}} \\ &=\frac{1}{d}\sum_{j=1}^d (-1)^{j-1}d_{I_j}\frac{1}{(d-1)!}\sum_{\sigma \in S([d]\setminus \{j\})}\varepsilon_\sigma d_{I_{\sigma(1)}}\cdots \hat d_{I_{\sigma(j)}} \cdots d_{I_{\sigma(d)}}\\ &= \frac{1}{d}\sum_{j=1}^d (-1)^{j-1}d_{I_j} \omega_{I_1,\ldots,\hat I_j,\ldots,I_d}\\ &=\frac{1}{d}\sum_{j=1}^d d_{I_j} \omega_{I\setminus I_j}, \end{align*} which equals $\omega_I$ by Proposition \ref{omegarec}. \end{proof} We now reformulate \eqref{ricomega} in a way which is more suitable for our next arguments. \begin{lemma}\label{adgl} Let $I\in \mathcal I_d$ and let $\{i,j,r,s,t\}= \{1,2,3,4,5\}$ be such that $\varepsilon_{ij,rs}=\varepsilon_{ij,st}=\varepsilon_{ij,tr}=1$. Then \[ d_{ij}\omega_I=\omega_{ij,I}+\frac{1}{2}\partial_r \omega_{I\setminus st}+\frac{1}{2}\partial_s \omega_{I\setminus tr}+\frac{1}{2}\partial_t \omega_{I\setminus rs }. \] \end{lemma} Our next target is to study the commutator between an element in $\mathfrak {g}_1$ of the form $x_p d_{pq}$ and a generic element $\omega_I$. In order to simplify the reading of the arguments we prefer to show the proof explicitly in the special case $x_5d_{45}$. \begin{lemma}\label{piec} We have \[ \sum_{k\neq j}\big[[x_5d_{45},d_{I_k}],d_{I_j}\big]\omega_{I\setminus (I_k,I_j)}=\sum_j \big[[x_5d_{45},d_{I_j}],\omega_{I\setminus I_j}\big] -3 \partial_4 \omega_{I\setminus(12,23,31)}+\frac{3}{2}\sum_{(\alpha,\beta,\gamma)\in S_3}\partial_\alpha \omega_{I\setminus (\alpha\beta,\beta\gamma, \gamma 4)}.
We have \[ \sum_j [x_5\partial_3,d_{I_j}]\omega_{I\setminus(12,I_j)}=d_{25} \omega_{I\setminus(12,23)}+d_{51}\omega_{I\setminus(12,31)}+d_{54}\omega_{I\setminus(12,34)} \] and by Lemma \ref{adgl} and Proposition \ref{isomega} we have \begin{align*} \sum_j [&x_5\partial_3,d_{I_j}]\omega_{I\setminus(12,I_j)}=\omega_{25,I\setminus(12,23)}+\omega_{51,I\setminus(12,31)}+\omega_{54,I\setminus(12,34)}\\ &+\frac{1}{2}\big(\partial_1\omega_{I\setminus(12,23,34)}+\partial_3 \omega_{I\setminus(12,23,41)}+\partial_4\omega_{I\setminus(12,23,13)}+ \partial_2 \omega_{I\setminus(12,31,34)}+\partial_3\omega_{I\setminus(12,31,42)}+\partial_4\omega_{I\setminus(12,31,23)}\\ &+ \partial_1\omega_{I\setminus(12,34,32)}+\partial_3\omega_{I\setminus(12,34,21)}+\partial_2\omega_{I\setminus(12,34,13)}\big)\\ &= [x_5\partial_3,\omega_{I\setminus 12}]-\partial_4 \omega_{I\setminus(12,23,31)}+\partial_1 \omega_{I\setminus(12,23,34)}+\partial_2 \omega_{I\setminus(21,13,34)}+\frac{1}{2}\partial_3 \omega_{I\setminus(32,21,14)}+\frac{1}{2}\partial_3\omega_{I\setminus(31,12,24)}. \end{align*} The contributions of $I_k=23,31$ are similarly computed and the result follows. \end{proof} \begin{theorem} For all $I\in \mathcal I_d$ we have \begin{align*}[x_5 d_{45},\omega_I]&=\sum_{j=1}^d \Big(\frac{1}{2} \big[[x_5d_{45},d_{I_j}],\omega_{I\setminus I_j}\big]+\omega_{I\setminus I_j} [x_5d_{45},d_{I_j}]\Big)\\ &\hspace{5mm} +\frac{1}{2} \partial_4 \omega_{I\setminus (12,23,31)}-\frac{1}{4}\sum_{(\alpha,\beta,\gamma) \in S_3} \partial_{\alpha} \omega_{I\setminus (\alpha \beta, \beta \gamma, \gamma 4)}. \end{align*} \end{theorem} \begin{proof} We proceed by induction on $d$, the case $d=1$ being easy.
Note that by the induction hypothesis we can assume that \begin{align*} [x_5d_{45},\omega_{I\setminus I_j}]&=\sum_{k\neq j} \Big(\frac{1}{2} \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus (I_j,I_k)}\big]+\omega_{I\setminus (I_j,I_k)} [x_5d_{45},d_{I_k}]\Big)\\ &\hspace{5mm} +\frac{1}{2} \partial_4 \omega_{I\setminus (I_j,12,23,31)}-\frac{1}{4}\sum_{(\alpha,\beta,\gamma) \in S_3} \partial_{\alpha} \omega_{I\setminus (I_j,\alpha \beta, \beta \gamma, \gamma 4)}. \end{align*} Using Proposition \ref{omegarec} and the inductive hypothesis we have: \begin{align*} [x_5 d_{45},\omega_I]&=\frac{1}{d}\sum_{j=1}^d \big([x_5d_{45},d_{I_j}]\omega_{I\setminus I_j}-d_{I_j}[x_5d_{45},\omega_{I\setminus I_j}]\big)\\ &=\frac{1}{d}\sum_{j=1}^d \Big(\big[[x_5d_{45},d_{I_j}],\omega_{I\setminus I_j}\big]+\omega_{I\setminus I_j}[x_5d_{45},d_{I_j}]\\ &\hspace{20mm} -d_{I_j}\sum_{k\neq j} \big(\frac{1}{2} \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus (I_j,I_k)}\big]+\omega_{I\setminus (I_j,I_k)} [x_5d_{45},d_{I_k}]\big)\\ &\hspace{20mm} -d_{I_j}\big(\frac{1}{2} \partial_4 \omega_{I\setminus (I_j,12,23,31)}-\frac{1}{4}\sum_{(\alpha,\beta,\gamma) \in S_3} \partial_{\alpha} \omega_{I\setminus (I_j,\alpha \beta, \beta \gamma, \gamma 4)}\big) \Big)\\ &=\frac{1}{d}\sum_{j=1}^d \big(\omega_{I\setminus I_j}[x_5d_{45},d_{I_j}]-d_{I_j} \sum_{k\neq j} \omega_{I\setminus (I_j,I_k)} [x_5d_{45},d_{I_k}]\big)\\ & \hspace{5mm}+\frac{1}{d}\sum_{j=1}^d \big(\big[[x_5d_{45},d_{I_j}],\omega_{I\setminus I_j}\big]-d_{I_j}\sum_{k\neq j} \frac{1}{2} \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus (I_j,I_k)}\big]\big)\\ & \hspace{5mm}+ \frac{1}{d}\sum_{j=1}^dd_{I_j}\big(-\frac{1}{2} \partial_4 \omega_{I\setminus (I_j,12,23,31)}+\frac{1}{4}\sum_{(\alpha,\beta,\gamma) \in S_3} \partial_{\alpha} \omega_{I\setminus (I_j,\alpha \beta, \beta \gamma, \gamma 4)} \big). \end{align*} We split this formula into three parts (according to the last three lines
above): the first part is \begin{align*}\frac{1}{d}&\sum_{j=1}^d \big(\omega_{I\setminus I_j}[x_5d_{45},d_{I_j}]-d_{I_j} \sum_{k\neq j} \omega_{I\setminus (I_j,I_k)} [x_5d_{45},d_{I_k}]\big)\\ &= \frac{1}{d} \big( \sum_{j=1}^d \omega_{I\setminus I_j}[x_5d_{45},d_{I_j}]+\sum_{k=1}^d \sum_{j\neq k} d_{I_j}\omega_{I\setminus(I_k,I_j)}[x_5d_{45},d_{I_k}]\big)\\ &=\frac{1}{d} \big( \sum_{j=1}^d \omega_{I\setminus I_j}[x_5d_{45},d_{I_j}]+\sum_{k=1}^d (d-1)\omega_{I\setminus I_k}[x_5d_{45},d_{I_k}]\big)\\ &= \sum_{j=1}^d \omega_{I\setminus I_j}[x_5d_{45},d_{I_j}]. \end{align*} The third part is \begin{align*}\frac{1}{d}&\sum_{j=1}^dd_{I_j}\big(-\frac{1}{2} \partial_4 \omega_{I\setminus (I_j,12,23,31)}+\frac{1}{4}\sum_{(\alpha,\beta,\gamma) \in S_3} \partial_{\alpha} \omega_{I\setminus (I_j,\alpha \beta, \beta \gamma, \gamma 4)} \big)\\ &=\frac{1}{2d}\partial_4 \sum_{j=1}^d d_{I_j} \omega_{I\setminus (12,23,31,I_j)}-\frac{1}{4d}\sum_{(\alpha,\beta,\gamma) \in S_3} \partial_{\alpha} \sum_{j=1}^d d_{I_j}\omega_{I\setminus (\alpha \beta, \beta \gamma, \gamma 4,I_j)} \\ &=\frac{d-3}{2d}\partial_4 \omega_{I\setminus(12,23,31)}-\frac{d-3}{4d} \sum_{(\alpha,\beta,\gamma) \in S_3} \partial_{\alpha}\omega_{I\setminus (\alpha \beta, \beta \gamma, \gamma 4)}.
\end{align*} In order to compute the second part we notice, using Lemma \ref{piec}, that the following holds: \begin{align*} -\sum_{j\neq k}d_{I_j}& \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus (I_j,I_k)}\big]=\sum_{j\neq k}d_{I_j} \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus (I_k,I_j)}\big]\\ &=\sum_{j,k}\Big(\big[[x_5d_{45},d_{I_k}],d_{I_j}\omega_{I\setminus(I_k,I_j)}\big]-\big[[x_5d_{45},d_{I_k}],d_{I_j}\big]\omega_{I\setminus(I_k,I_j)}\Big)\\ &=\sum_k\big[[x_5d_{45},d_{I_k}],\sum_jd_{I_j}\omega_{I\setminus (I_k,I_j)}\big]-\sum_{j,k}\big[[x_5d_{45},d_{I_k}],d_{I_j}\big]\omega_{I\setminus(I_k,I_j)}\\ &=(d-1)\sum_k \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus I_k}\big]-\sum_j \big[[x_5d_{45},d_{I_j}],\omega_{I\setminus I_j}\big] +3 \partial_4 \omega_{I\setminus(12,23,31)}\\ &-\frac{3}{2}\sum_{(\alpha,\beta,\gamma)\in S_3}\partial_\alpha \omega_{I\setminus (\alpha\beta,\beta\gamma, \gamma 4)}\\ &=(d-2)\sum_k \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus I_k}\big]+3 \partial_4 \omega_{I\setminus(12,23,31)}-\frac{3}{2}\sum_{(\alpha,\beta,\gamma)\in S_3}\partial_\alpha \omega_{I\setminus (\alpha\beta,\beta\gamma, \gamma 4)}.
\end{align*} Therefore the whole second part is \begin{align*}\frac{1}{d}&\sum_{j=1}^d \big(\big[[x_5d_{45},d_{I_j}],\omega_{I\setminus I_j}\big]-d_{I_j}\sum_{k\neq j} \frac{1}{2} \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus (I_j,I_k)}\big]\big)\\ &=\frac{1}{d}\Big(\sum_j \big[[x_5d_{45},d_{I_j}],\omega_{I\setminus I_j}\big]+\frac{1}{2} (d-2)\sum_k \big[[x_5d_{45},d_{I_k}],\omega_{I\setminus I_k}\big]+\frac{3}{2} \partial_4 \omega_{I\setminus(12,23,31)}\\ &-\frac{3}{4}\sum_{(\alpha,\beta,\gamma)\in S_3}\partial_\alpha \omega_{I\setminus (\alpha\beta,\beta\gamma, \gamma 4)}\Big)\\ &=\frac{1}{2} \sum_j \big[[x_5d_{45},d_{I_j}],\omega_{I\setminus I_j}\big]+\frac{3}{2d} \partial_4 \omega_{I\setminus(12,23,31)}-\frac{3}{4d}\sum_{(\alpha,\beta,\gamma)\in S_3}\partial_\alpha \omega_{I\setminus (\alpha\beta,\beta\gamma, \gamma 4)}. \end{align*} The sum of the three parts gives the result. \end{proof} One can analogously prove the following result. \begin{theorem}\label{xpdpqomega} Let $\{p,q,a,b,c\}=\{1,2,3,4,5\}$ and $I\in \mathcal I_d$. Then \begin{align*}[x_p d_{pq},\omega_I]&=\sum_{j=1}^d \Big(\frac{1}{2} \big[[x_pd_{pq},d_{I_j}],\omega_{I\setminus I_j}\big]+\omega_{I\setminus I_j} [x_pd_{pq},d_{I_j}]\Big)\\ &\hspace{5mm} -\frac{1}{2} \partial_q \omega_{I\setminus (ab,bc,ca)}+\frac{1}{4}\sum_{(\alpha,\beta,\gamma) \in S(a,b,c)} \partial_{\alpha} \omega_{I\setminus (\alpha \beta, \beta \gamma, \gamma q)}, \end{align*} where $S(a,b,c)$ denotes the set of permutations of $\{a,b,c\}$. \end{theorem} \section{The fundamental equations} We are now going to use Theorem \ref{xpdpqomega} to study possible morphisms of finite Verma modules $\varphi:M(V)\rightarrow M(W)$. Let $\sim$ be the equivalence relation on $\mathcal I_d$ such that $I\sim I'$ if and only if $\omega_I=\pm \omega_{I'}$, i.e.\ if $I'$ can be obtained from $I$ by permuting the pairs in $I$ and the elements in each pair.
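The relative sign in $\omega_I=\pm\omega_{I'}$ can be computed mechanically from the identification of Proposition \ref{isomega}: swapping the two entries of a pair flips the sign (since $x_{ji}=-x_{ij}$), while transposing two pairs also flips the sign, because the factors $x_{I_j}$ anticommute in the exterior power. A minimal Python sketch of this bookkeeping (an illustration only; the function name is ours and not part of the formal development):

```python
def canonical_sign(I):
    """Return (canonical representative, sign) for a tuple of pairs.

    Models the equivalence I ~ I': swapping the entries of a pair flips
    the sign (x_{ji} = -x_{ij}), and transposing two pairs flips the
    sign as well (the factors x_{I_j} anticommute in the exterior power).
    """
    sign, pairs = 1, []
    for (i, j) in I:
        if i > j:
            i, j, sign = j, i, -sign
        pairs.append((i, j))
    # Bubble sort the pairs, flipping the sign at each transposition.
    for a in range(len(pairs)):
        for b in range(len(pairs) - 1 - a):
            if pairs[b] > pairs[b + 1]:
                pairs[b], pairs[b + 1] = pairs[b + 1], pairs[b]
                sign = -sign
    return tuple(pairs), sign

# x_{12,24,35,54} = x_{24,45} wedge x_{12,35}: both sides reach the same
# signed normal form, in accordance with the example after Proposition isomega.
assert canonical_sign([(1, 2), (2, 4), (3, 5), (5, 4)]) \
    == canonical_sign([(2, 4), (4, 5), (1, 2), (3, 5)])
```

In particular, two representatives are $\sim$-equivalent exactly when they share the same canonical representative, and the two computed signs then determine whether $\omega_I=\omega_{I'}$ or $\omega_I=-\omega_{I'}$.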
By Proposition \ref{morphisms}, Remark \ref{dual} and Proposition \ref{isomega} we know that if a morphism has degree $d$ then it can be expressed in the following way \begin{equation}\label{varfi} \varphi(v)=\sum_{l\leq d/2} \sum_{I\in \mathcal I_{d-2l}/\sim} \sum_{1\leq r_1\leq \cdots \leq r_l\leq 5} \partial_{r_1}\cdots \partial_{r_l} \omega_I \otimes \theta^{r_1,\ldots,r_l}_I(v) \end{equation} where the $\theta^{r_1,\ldots,r_l}_I:V\rightarrow W$ are such that the map \[\Sym^l(\mathbb{C}^5)\otimes \displaywedge^{d-2l} \big(\displaywedge^2((\mathbb{C}^5)^*)\big)\rightarrow \Hom(V,W) \] given by \begin{equation}\label{isotheta}x_{r_1}\cdots x_{r_l}\otimes x_{I_1}^*\wedge \cdots \wedge x_{I_{d-2l}}^*\mapsto \theta^{r_1,\ldots,r_l}_{I_1,\ldots,I_{d-2l}}\end{equation} is a (well-defined) morphism of $\mathfrak {g}_0$-modules. This fact permits us to easily compute the action of $\mathfrak {g}_0$ on the morphisms $\theta^{r_1,\ldots,r_l}_I$. For example we have \[x_1 \partial_2.\theta^{2,3}_{12,13,14,23}=\theta^{1,3}_{12,13,14,23} -\theta^{2,3}_{12,23,14,23}-\theta^{2,3}_{12,13,24,23}=\theta^{1,3}_{12,13,14,23}+\theta^{2,3}_{12,13,23,24}.\] A technical lemma is in order. \begin{lemma}\label{jkjk} For all distinct $\alpha, \beta, \gamma, p \in [5]$ we have: \[ \sum_{J\in \mathcal I_d/\sim} [x_p\partial_\gamma,\omega_J]\otimes \theta_{\alpha\beta, J}=-\sum_{J\in \mathcal I_d/\sim} \omega_J \otimes (x_p \partial_\gamma.\theta_{\alpha \beta, J}) \] and \[ \sum_{K\in \mathcal I_d/\sim }(-\partial_{\gamma} \omega_K \otimes \theta^p_{\alpha \beta, K}+\sum_{t=1}^5 \partial_t [x_p \partial_\gamma, \omega_K]\otimes \theta^t_{\alpha\beta, K})=-\sum_{t=1}^5 \sum_{K\in \mathcal I_d/\sim} \partial_t \omega_K\otimes (x_p \partial_\gamma.\theta^t_{\alpha \beta,K}). \] \end{lemma} \begin{proof} The first equation follows from the following observation.
By Proposition \ref{isomega} and \eqref{isotheta} we have that if $ [x_p \partial_\gamma,\omega_J]=\sum_{J'} a_{J,J'} \omega_{J'} $ then $ x_p \partial_\gamma.\theta_{J'}=-\sum_J a_{J,J'}\theta_J $ and hence also \[ x_p \partial_\gamma.\theta_{\alpha \beta,J'}=-\sum_J a_{J,J'}\theta_{\alpha \beta,J}, \] since $\alpha$ and $\beta$ are distinct from $p$. We can conclude that \[\sum_J [x_p \partial_\gamma,\omega_J]\otimes \theta_{\alpha \beta,J}=\sum_{J'} \omega_{J'}\otimes \sum_J a_{J,J'}\theta_{\alpha \beta,J}=-\sum_{J'}\omega_{J'}\otimes (x_p\partial_\gamma.\theta_{\alpha \beta, J'}). \] In order to prove the second equation we proceed in a similar way. If $[x_p\partial_\gamma, \omega_K]=\sum_{K'} a_{K,K'} \omega_{K'}$ we have $x_p \partial_\gamma.\theta_{K'}=-\sum_K a_{K,K'} \theta_K$ and also $x_p \partial_\gamma.\theta_{\alpha \beta,K'}=-\sum_K a_{K,K'} \theta_{\alpha \beta,K}$. Therefore, if $t\neq \gamma$ we have $x_p \partial_{\gamma}.\theta^t_{\alpha \beta,K'}=-\sum_K a_{K,K'} \theta^t_{\alpha \beta,K}$ and for $t=\gamma$ we have \[ (x_p \partial_\gamma.\theta^{\gamma}_{\alpha \beta,K'})=\theta^p_{\alpha \beta,K'}-\sum_K a_{K,K'} \theta^{\gamma}_{\alpha \beta,K}.
\] So we can compute \begin{align*}\sum_{K}&(-\partial_{\gamma} \omega_K \otimes \theta^p_{\alpha \beta, K}+\sum_t \partial_t [x_p \partial_\gamma, \omega_K]\otimes \theta^t_{\alpha\beta, K})\\ &=\sum_{K}(-\partial_{\gamma} \omega_K \otimes \theta^p_{\alpha \beta, K}+\sum_t \partial_t \sum_{K'} a_{K,K'}\omega_{K'} \otimes \theta^t_{\alpha\beta, K})\\ &=\sum _{K'}\partial_{\gamma}\omega_{K'}\otimes (-\theta^p_{\alpha \beta,K'}+\sum_K a_{K,K'} \theta^{\gamma}_{\alpha \beta,K})+\sum_{t\neq \gamma} \sum_{K'} \partial_t \omega_{K'}\otimes \sum _K a_{K,K'}\theta^t_{\alpha \beta, K}\\ &= -\sum_t \sum_{K'} \partial_t \omega_{K'} \otimes (x_p \partial_\gamma.\theta^t_{\alpha \beta,K'}). \end{align*} \end{proof} We let $C(a,b,c)=\{(a,b,c), (b,c,a), (c,a,b)\}$ be the set of cyclic permutations of $(a,b,c)$. From now on when we write $\sum_{\alpha \beta \gamma}$ we always mean the sum over $(\alpha,\beta,\gamma)\in C(a,b,c)$. \begin{lemma}\label{xpdpqaomth1} Let $\varphi:M(V)\rightarrow M(W)$ be a morphism of Verma modules as in \eqref{varfi} and $(p,q,a,b,c)$ be any permutation of $[5]$. Then \begin{align*} x_pd_{pq} &\sum_{I\in \mathcal I_d/\sim}\omega_I\otimes \theta_I(v)=\sum_{J\in \mathcal I_{d-1}/\sim} \omega_J\otimes \frac{1}{2}\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \big(-(x_p\partial_\gamma.\theta_{\alpha\beta,J})(v)+2x_p\partial_\gamma.(\theta_{\alpha\beta,J}(v))\big)\\ \nonumber &\hspace{5mm}+\sum_{K\in \mathcal I_{d-3}/\sim}\partial_q\omega_K\otimes -\frac{1}{2}\theta_{ab,bc,ca,K}(v)+\sum_{\alpha\beta\gamma}\partial_\alpha \omega_K \otimes \frac{1}{4}\big(\theta_{\alpha\beta,\beta\gamma,\gamma q,K}(v)+\theta_{\alpha\gamma,\gamma \beta,\beta q,K}(v)\big).
\end{align*} \end{lemma} \begin{proof}Theorem \ref{xpdpqomega} can be reformulated in the following more convenient way \begin{align}\label{xpdpqomegabis} [x_pd_{pq},\omega_I]= & \frac{1}{2}\varepsilon_{pqabc}\sum_{(\alpha,\beta,\gamma)\in C(a,b,c)}\big([x_p\partial_\gamma,\omega_{I\setminus \alpha\beta} ]+2\omega_{I\setminus \alpha \beta}\, x_p \partial_\gamma\big)\\ & \nonumber -\frac{1}{2} \partial_q \omega_{I\setminus(ab,bc,ca)}+\frac{1}{4} \sum_{(\alpha,\beta,\gamma)\in C(a,b,c)}\partial_\alpha (\omega_{I\setminus(\alpha \beta, \beta \gamma, \gamma q)}+\omega_{I\setminus(\alpha \gamma,\gamma \beta , \beta q)}). \end{align} We can therefore compute \begin{align*} x_pd_{pq} &\sum_{I\in \mathcal I_d/\sim}\omega_I\otimes \theta_I(v)= \frac{1}{2}\varepsilon_{pqabc} \sum_{I\in \mathcal I_d/\sim} \,\,\sum_{(\alpha,\beta,\gamma)\in C(a,b,c)}\Big([x_p\partial_\gamma,\omega_{I\setminus \alpha \beta}]\otimes \theta_I(v)+2\omega_{I\setminus \alpha\beta}\otimes x_p \partial_\gamma.(\theta_I(v)) \Big)\\ &+\sum_{I\in \mathcal I_d/\sim}\Big(-\frac{1}{2} \partial_q \omega_{I\setminus(ab,bc,ca)}\otimes \theta_I(v)+\frac{1}{4}\sum_{(\alpha,\beta,\gamma)\in C(a,b,c)} \partial_\alpha (\omega_{I\setminus(\alpha \beta, \beta \gamma, \gamma q)}+\omega_{I\setminus(\alpha \gamma, \gamma \beta, \beta q)})\otimes \theta_I(v)\Big)\\ &= \frac{1}{2}\varepsilon_{pqabc} \sum_{J\in \mathcal I_{d-1}/\sim} \,\,\sum_{(\alpha,\beta,\gamma)\in C(a,b,c)}\Big([x_p\partial_\gamma,\omega_J]\otimes \theta_{\alpha \beta,J}(v)+2\omega_{J}\otimes x_p \partial_\gamma.(\theta_{\alpha \beta,J}(v)) \Big)\\ &+\sum_{K\in \mathcal I_{d-3}} \Big(-\frac{1}{2}\partial_q \omega_{K}\otimes \theta_{ab,bc,ca,K}(v)+\frac{1}{4}\sum_{(\alpha,\beta,\gamma)\in C(a,b,c)} \partial_\alpha \omega_K\otimes
(\theta_{\alpha \beta, \beta \gamma, \gamma q, K}(v)+\theta_{\alpha \gamma, \gamma\beta , \beta q, K}(v))\Big)\\ &= \frac{1}{2}\varepsilon_{pqabc} \sum_{J\in \mathcal I_{d-1}/\sim} \,\,\sum_{(\alpha,\beta,\gamma)\in C(a,b,c)}\Big(-\omega_J\otimes (x_p\partial_\gamma.\theta_{\alpha \beta,J})(v)+2\omega_{J}\otimes x_p \partial_\gamma.(\theta_{\alpha \beta,J}(v)) \Big)\\ &+\sum_{K\in \mathcal I_{d-3}} \Big(-\frac{1}{2}\partial_q \omega_{K}\otimes \theta_{ab,bc,ca,K}(v)+\frac{1}{4}\sum_{(\alpha,\beta,\gamma)\in C(a,b,c)} \partial_\alpha \omega_K\otimes (\theta_{\alpha \beta, \beta \gamma, \gamma q, K}(v)+\theta_{\alpha \gamma, \gamma\beta , \beta q,K}(v))\Big), \end{align*} where we have used Lemma \ref{jkjk}. \end{proof} \begin{lemma}\label{xpdpqaomth2}Let $\varphi:M(V)\rightarrow M(W)$ be a morphism of Verma modules as in \eqref{varfi}. Then \begin{align*} x_pd_{pq} &\sum_{t=1}^5\sum_{I\in \mathcal I_{d-2}/\sim}\partial_t\omega_I\otimes \theta^t_I(v)=\sum_{J\in \mathcal I_{d-1}/\sim}\omega_J\otimes -\theta^p_{J\setminus pq}(v)\\ \nonumber&+ \sum_{t=1}^5 \sum_{K\in \mathcal I_{d-3}/\sim} \partial_t\omega_K\otimes \frac{1}{2}\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \big(-(x_p\partial_\gamma.\theta^t_{\alpha\beta,K})(v)+2x_p\partial_\gamma.(\theta^t_{\alpha\beta,K}(v))\big)\\ \nonumber & \hspace{5mm}+\sum_t\sum_{L\in \mathcal I_{d-5}/\sim}\partial_t\partial_q\omega_L\otimes -\frac{1}{2}\theta^t_{ab,bc,ca,L}(v)+\sum_{\alpha\beta\gamma}\partial_t\partial_\alpha \omega_L \otimes \frac{1}{4}\big(\theta^t_{\alpha\beta,\beta\gamma,\gamma q,L}(v)+\theta^t_{\alpha\gamma,\gamma \beta,\beta q,L}(v)\big) \end{align*} \end{lemma} \begin{proof} By Lemma \ref{adgl}, Lemma \ref{jkjk} and \eqref{xpdpqomegabis} we have \begin{align*} x_pd_{pq} &\sum_{t=1}^5\sum_{I\in
\mathcal I_{d-2}/\sim}\partial_t\omega_I\otimes \theta^t_I(v)=\\ \nonumber&=\sum_{I\in \mathcal I_{d-2}/\sim}(-\omega_{pq,I}-\frac{1}{2}\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \partial_\gamma \omega_{I\setminus \alpha \beta})\otimes \theta^p_I(v)\\ \nonumber&\hspace{5mm}+\sum_{t=1}^5 \sum_{I\in \mathcal I_{d-2}/\sim} \frac{1}{2}\varepsilon_{pqabc} \partial_t\sum_{\alpha\beta\gamma}\big([x_p \partial_\gamma,\omega_{I\setminus \alpha \beta}]\otimes \theta_I^t(v)+2 \omega_{I\setminus \alpha \beta} \otimes x_p\partial_\gamma.(\theta^t_{I}(v))\big)\\ \nonumber & \hspace{5mm}+\sum_t\sum_{I\in \mathcal I_{d-2}/\sim}\Big(-\frac{1}{2}\partial_t\partial_q\omega_{I\setminus (ab,bc,ca)}\otimes \theta^t_{I}(v)+\frac{1}{4}\sum_{\alpha\beta\gamma}\partial_t\partial_\alpha \big(\omega_{I\setminus (\alpha \beta, \beta \gamma, \gamma q)}+\omega_{I\setminus (\alpha \gamma, \gamma \beta, \beta q)}\big) \otimes \theta_I^t(v)\Big)\\ \nonumber&=\sum_{J\in \mathcal I_{d-1}/\sim}\omega_J\otimes -\theta^p_{J\setminus pq}(v) \\ &\hspace{5mm} +\frac{1}{2}\varepsilon_{pqabc}\sum_{\alpha \beta \gamma}\sum_{K\in \mathcal I_{d-3}/\sim}\Big(-\partial_\gamma \omega_K\otimes \theta^p_{\alpha \beta, K}+\sum_{t=1}^5 \partial_t[x_p\partial_\gamma, \omega_K]\otimes \theta^t_{\alpha \beta,K}\\ &\hspace{10mm}+ \sum_{t=1}^5 2 \partial_t \omega_K \otimes x_p\partial_\gamma.(\theta^t_{\alpha \beta, K}(v))\Big)\\ & \hspace{5mm}+\sum_t\sum_{L\in \mathcal I_{d-5}/\sim}\partial_t\partial_q\omega_L\otimes -\frac{1}{2}\theta^t_{ab,bc,ca,L}(v)+\sum_{\alpha\beta\gamma}\partial_t\partial_\alpha \omega_L \otimes \frac{1}{4}\big(\theta^t_{\alpha\beta,\beta\gamma,\gamma q,L}(v)+\theta^t_{\alpha\gamma,\gamma \beta,\beta q,L}(v)\big)\\ &= \sum_{J\in \mathcal I_{d-1}/\sim}\omega_J\otimes -\theta^p_{J\setminus
pq}(v)\\ &\hspace{5mm} +\frac{1}{2}\varepsilon_{pqabc}\sum_{t=1}^5\sum_{K\in \mathcal I_{d-3}/\sim} \partial_t \omega_K\otimes \sum_{\alpha \beta \gamma}-(x_p\partial_\gamma.\theta^t_{\alpha \beta,K})(v)+2x_p \partial_\gamma.(\theta^t_{\alpha \beta, K}(v))\\ &\hspace{5mm}+\sum_t\sum_{L\in \mathcal I_{d-5}/\sim}\partial_t\partial_q\omega_L\otimes -\frac{1}{2}\theta^t_{ab,bc,ca,L}(v)+\sum_{\alpha\beta\gamma}\partial_t\partial_\alpha \omega_L \otimes \frac{1}{4}\big(\theta^t_{\alpha\beta,\beta\gamma,\gamma q,L}(v)+\theta^t_{\alpha\gamma,\gamma \beta,\beta q,L}(v)\big). \end{align*} \end{proof} The following result is fundamental in our study. \begin{corollary}\label{fund1} If $\varphi:M(V)\rightarrow M(W)$ is a morphism of Verma modules as in \eqref{varfi}, then for all $p,q,a,b,c$ such that $\{p,q,a,b,c\}=[5]$ we have \begin{equation}\label{sing1}-\theta^p_{J\setminus pq}(v)+\frac{1}{2}\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \big(-(x_p\partial_\gamma.\theta_{\alpha\beta,J})(v)+2x_p\partial_\gamma.(\theta_{\alpha\beta,J}(v))\big)=0 \end{equation} for all $J\in \mathcal I_{d-1}$ and \begin{equation} \label{sing2} \frac{1}{4}\big(\theta_{ab,bc,cq,K}(v)+\theta_{ac,cb,bq,K}(v)\big)-\theta^{a,p}_{K\setminus pq}(v)+\frac{1}{2}\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \big(-(x_p\partial_\gamma.\theta^a_{\alpha\beta,K})(v)+2x_p\partial_\gamma.(\theta^a_{\alpha\beta,K}(v))\big)=0, \end{equation} \begin{equation} \label{sing3} -2\theta^{p,p}_{K\setminus pq}(v)+\frac{1}{2}\varepsilon_{pqabc}\sum_{\alpha\beta\gamma} \big(-(x_p\partial_\gamma.\theta^p_{\alpha\beta,K})(v)+2x_p\partial_\gamma.(\theta^p_{\alpha\beta,K}(v))\big)=0, \end{equation} \begin{equation} \label{sing4} -2\theta^{p,q}_{K\setminus pq}(v)-\theta_{ab,bc,ca,K}(v)+\varepsilon_{pqabc}\sum_{\alpha\beta\gamma}\big(-(x_p\partial_\gamma.\theta^q_{\alpha\beta,K})(v)+2x_p\partial_\gamma.(\theta^q_{\alpha\beta,K}(v))\big)=0, \end{equation} for all $K\in \mathcal I_{d-3}$. \end{corollary} \begin{proof} Recall that $\varphi(v)=\sum_{l\leq d/2} \sum_{I\in \mathcal I_{d-2l}/\sim} \sum_{1\leq r_1\leq \cdots \leq r_l\leq 5} \partial_{r_1}\cdots \partial_{r_l} \omega_I \otimes \theta^{r_1,\ldots,r_l}_I(v)$. By Proposition \ref{morphisms} we have $x_pd_{pq} \varphi(v)=0$ and if we expand \[ x_p d_{pq}\varphi(v)=\sum_{l\leq (d-1)/2} \sum_{1\leq r_1\leq\cdots\leq r_l\leq 5} \sum_{I\in \mathcal I_{d-1-2l}/\sim}\partial_{r_1}\cdots \partial_{r_l} \omega_I\otimes v_{(r_1,\ldots,r_l),I} \] we deduce that all the vectors $v_{(r_1,\ldots,r_l),I}$ must vanish. By Lemmas \ref{xpdpqaomth1} and \ref{xpdpqaomth2} we have for all $J\in \mathcal I_{d-1}$ \[ v_{(\,),J}=-\theta^p_{J\setminus pq}(v)+\frac{1}{2}\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \big(-(x_p\partial_\gamma.\theta_{\alpha\beta,J})(v)+2x_p\partial_\gamma.(\theta_{\alpha\beta,J}(v))\big), \] hence \eqref{sing1} follows. Moreover, for all $K\in \mathcal I_{d-3}$, by Lemmas \ref{xpdpqaomth1} and \ref{xpdpqaomth2} we have \[ v_{(a),K}=\frac{1}{4}\big(\theta_{ab,bc,cq,K}(v)+\theta_{ac,cb,bq,K}(v)\big)-\theta^{a,p}_{K\setminus pq}(v)+\frac{1}{2}\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \big(-(x_p\partial_\gamma.\theta^a_{\alpha\beta,K})(v)+2x_p\partial_\gamma.(\theta^a_{\alpha\beta,K}(v))\big). \] Note that in this case we have an additional term $-\theta^{a,p}_{K\setminus pq}(v)$ which is produced by \[x_p d_{pq}\,\partial_a \partial_p \omega_{K\setminus pq} \otimes \theta^{a,p}_{K\setminus pq}(v).\] Equation \eqref{sing2} follows. Equations \eqref{sing3} and \eqref{sing4} are obtained similarly by considering $v_{(p),K}$ and $v_{(q),K}$. \end{proof} Corollary \ref{fund1} can be slightly simplified if $v$ is a highest weight vector in $V$.
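The equations of Corollary \ref{fund1} are homogeneous linear conditions on the vectors $\theta^{r_1,\ldots,r_l}_I(v)$, and later such systems will be solved by computer. A minimal sketch of that step (with a hypothetical toy system in place of the actual ones): collect the coefficients of the unknowns into a matrix and check that it has full column rank, which forces all the unknowns to vanish.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a rational matrix, computed by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical toy stand-in for a system of homogeneous equations among
# the theta's: 2*u1 + 2*u2 = 0 and 2*u1 = 0, in the unknowns (u1, u2).
A = [[2, 2],
     [2, 0]]

# A homogeneous system forces all unknowns to vanish exactly when the
# coefficient matrix has full column rank (trivial nullspace).
assert rank(A) == 2
```

The actual systems arising below are assembled in exactly the same spirit, only much larger, so checking full column rank is what reduces the classification to a finite computation.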
For all $n,m$ we let \[ \chi_{n>m}=\begin{cases} 1& \textrm{if }n>m;\\ 0& \textrm{otherwise.} \end{cases}\] \begin{corollary}\label{singo} Let $\varphi:M(V)\rightarrow M(W)$ be a morphism of Verma modules as in \eqref{varfi} and let $s\in V$ be a highest weight vector. Then for all $p,q,a,b,c$ such that $\{p,q,a,b,c\}=[5]$ we have: \begin{equation}\label{sing1b}-2\theta^p_{J\setminus pq}(s)+\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \big((-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta_{\alpha\beta,J})(s)+2\chi_{p>\gamma}x_p\partial_\gamma.(\theta_{\alpha\beta,J}(s))\big)=0 \end{equation} for all $J\in \mathcal I_{d-1}$ and \begin{equation} \label{sing2b} -4\theta^{a,p}_{K\setminus pq}(s)+\big(\theta_{ab,bc,cq,K}(s)+\theta_{ac,cb,bq,K}(s)\big)+2\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} \big((-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta^a_{\alpha\beta,K})(s)+2\chi_{p>\gamma}x_p\partial_\gamma.(\theta^a_{\alpha\beta,K}(s))\big)=0, \end{equation} \begin{equation} \label{sing3b} -4\theta^{p,p}_{K\setminus pq}(s)+\varepsilon_{pqabc}\sum_{\alpha\beta\gamma} \big((-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta^p_{\alpha\beta,K})(s)+2\chi_{p>\gamma}x_p\partial_\gamma.(\theta^p_{\alpha\beta,K}(s))\big)=0, \end{equation} \begin{equation} \label{sing4b} -2\theta^{p,q}_{K\setminus pq}(s)-\theta_{ab,bc,ca,K}(s)+\varepsilon_{pqabc}\sum_{\alpha\beta\gamma}\big((-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta^q_{\alpha\beta,K})(s)+2\chi_{p>\gamma}x_p\partial_\gamma.(\theta^q_{\alpha\beta,K}(s))\big)=0 \end{equation} for all $K\in \mathcal I_{d-3}$. \end{corollary} \begin{proof} We prove Equation \eqref{sing1b}, the others being similar.
If $p>\gamma$ we clearly have \[(-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta_{\alpha\beta,J})(s)+2\chi_{p>\gamma}x_p\partial_\gamma.(\theta_{\alpha\beta,J}(s))=-(x_p\partial_\gamma.\theta_{\alpha\beta,J})(s)+2x_p\partial_\gamma.(\theta_{\alpha\beta,J}(s)). \] If $p<\gamma$ we have \[ -(x_p\partial_\gamma.\theta_{\alpha\beta,J})(s)+2x_p\partial_\gamma.(\theta_{\alpha\beta,J}(s))=(x_p \partial_\gamma.\theta_{\alpha\beta,J})(s) \] since $s$ is a highest weight vector, and the result then follows from \eqref{sing1}. \end{proof} Now observe that all (nontrivial) summands in any equation appearing in Corollary \ref{singo} have the same weight, and we call it the weight of the equation. The next result shows that if $\varphi:M(\lambda)\rightarrow M(\mu)$ is a morphism between finite Verma modules, then every equation of weight $\mu$ in Corollary \ref{singo} can be further simplified. \begin{corollary}\label{singol}Let $\varphi:M(\lambda)\rightarrow M(\mu)$ be a morphism of finite Verma modules. If $J\in \mathcal I_{d-1}$ and $a,b,c,p,q$ are such that Equation \eqref{sing1b} has weight $\mu$, then \begin{equation}\label{sing1c}-2\theta^p_{J\setminus pq}(s)+\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} (-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta_{\alpha\beta,J})(s)=0. \end{equation} If $K$ and $a,b,c,p,q$ are such that Equation \eqref{sing2b} has weight $\mu$, then \begin{equation} \label{sing2c} -4\theta^{a,p}_{K\setminus pq}(s)+\theta_{ab,bc,cq,K}(s)+\theta_{ac,cb,bq,K}(s)+2\varepsilon_{pqabc} \sum_{\alpha\beta\gamma} (-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta^a_{\alpha\beta,K})(s)=0.
\end{equation} If $K$ and $a,b,c,p,q$ are such that Equation \eqref{sing3b} has weight $\mu$ then \begin{equation} \label{sing3c} -4\theta^{p,p}_{K\setminus pq}(s)+\varepsilon_{pqabc}\sum_{\alpha\beta\gamma} (-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta^p_{\alpha\beta,K})(s)=0. \end{equation} If $K$ and $a,b,c,p,q$ are such that Equation \eqref{sing4b} has weight $\mu$ then \begin{equation} \label{sing4c} -2\theta^{p,q}_{K\setminus pq}(s)-\theta_{ab,bc,ca,K}(s)+\varepsilon_{pqabc}\sum_{\alpha\beta\gamma}(-1)^{\chi_{p>\gamma}}(x_p\partial_\gamma.\theta^q_{\alpha\beta,K})(s)=0. \end{equation} \end{corollary} Note that the equations appearing in Corollary \ref{singol} do not depend on the weights $\lambda$ and $\mu$: this observation will be the key point of our final classification. \section{Singular vectors of degree between 5 and 10}\label{min10} If $w\in M(\mu)$ is a singular vector of degree $d$ at most 10, we know by the results in Section 4 that it also has height $d$. In particular we can express it as \begin{equation}\label{lare} w=\sum_{l\leq d/2} \sum_{I\in \mathcal I_{d-2l}/\sim} \sum_{1\leq r_1\leq \cdots \leq r_l\leq 5} \partial_{r_1}\cdots \partial_{r_l} \omega_I \otimes \theta^{r_1,\ldots,r_l}_I(s), \end{equation} where $s$ is a highest weight vector in $F(\lambda)$. \begin{lemma}\label{I0} If $w\in M(\mu)$ is a singular vector of degree and height $d$ as in \eqref{lare}, then there exists $I_0\in \mathcal I_d$ such that $\theta_{I_0}(s)\neq 0$ is a highest weight vector in $F(\mu)$. \end{lemma} \begin{proof} Since $w$ has height $d$, we know that there exists $I\in \mathcal I_d$ such that $\theta_I(s)\neq 0$. Among all such $I$'s choose $I_0$ such that $\theta_{I_0}(s)$ has maximal weight. Applying $x_i\partial_{i+1}$ to $w$ we obtain a term $\omega_{I_0}\otimes x_i \partial_{i+1}.\theta_{I_0}(s)$, which cannot be cancelled by any other term. Therefore $x_i \partial_{i+1}.\theta_{I_0}(s)=0$.
\end{proof} If we fix any possible $I_0$ as in Lemma \ref{I0}, we can consider all equations in Corollary \ref{singol} with weight $\mu$, and we observe that these equations do not depend on $\mu$. For example, if we choose $I_0=(12,24,34,45)$, we can consider Equation \eqref{sing1c} with $(a,b,c,p,q)=(1,2,4,5,3)$ and $J=(25,34,45)$, getting \[ 2\theta_{14,24,25,34}(s)+2\theta_{12,24,34,45}(s)=0, \] and also with $(a,b,c,p,q)=(2,4,5,3,1)$ and $J=(14,23,34)$, getting \[ 2\theta_{14,24,25,34}(s)=0,\] and deducing $\theta_{I_0}(s)=0$. \begin{theorem}\label{5o7} Let \[ w=\sum_{l\leq d/2} \sum_{I\in \mathcal I_{d-2l}/\sim} \sum_{1\leq r_1\leq \cdots \leq r_l\leq 5} \partial_{r_1}\cdots \partial_{r_l} \omega_I \otimes \theta^{r_1,\ldots,r_l}_I(s) \] be a singular vector of degree $d$, with $5\leq d\leq 10$, and let ${I_0}\in \mathcal I_d$ be such that $\theta_{I_0}(s)\neq 0$ is a highest weight vector. Then $d=7$ and $I_0\sim (12,13,14,15,25,35,45)$, or $d=5$ and $I_0\sim (12,13,14,15,45)$ or $I_0\sim(12,15,25,35,45)$. \end{theorem} \begin{proof} The proof is based on Corollary \ref{singol}. The set of Equations \eqref{sing1c}--\eqref{sing4c} of weight $\lambda(\theta_{I_0}(s))$ provides a system of homogeneous linear equations among all $\theta_I(s)$, $\theta_J^r(s)$ and $\theta^{r_1,r_2}_K(s)$ (with $I\in \mathcal I_d$, $J\in \mathcal I_{d-2}$ and $K \in \mathcal I_{d-4}$) such that $\theta_I$, $\theta_J^r$ and $\theta^{r_1,r_2}_K$ have the same weight as $\theta_{I_0}$, and which do not depend on (the weight of) $s$. This system can be solved with the help of a computer in all possible cases, and one can check that it implies $\theta_{I_0}(s)=0$ in all but the three exceptional cases stated above. We add a few words to explain what happens in the most complicated case, i.e., $d=10$ and $I_0=(12,13,14,15,23,24,25,34,35,45)$.
In this case 86 variables are involved: $\theta_{I_0}(s)$, 15 vectors of the form $\theta^r_J(s)$, and 70 vectors of the form $\theta^{r_1,r_2}_K(s)$ with $r_1\neq r_2$. Equations \eqref{sing1c} and \eqref{sing3c} do not provide any condition. In Equation \eqref{sing2c} we can choose $(a,b,c,p,q)$ to be any permutation in $S_5$ and $K=(ac,ap,aq,bp,bq,cp,pq)$, getting 120 linear equations among our 86 variables, and in Equation \eqref{sing4c} we can choose $(a,b,c,p,q)$ to be any permutation in $S_5$ (with $a<b<c$ to avoid repeated equations) and $K=(ap,aq,bp,bq,cp,cq,pq)$ getting 20 more equations. This system of 140 equations implies that all 86 variables involved vanish. \end{proof} Now we study the exceptions given by Theorem \ref{5o7}. The case of degree 7 leads to another new singular vector. \begin{theorem}\label{w[7]} The following vector $w[7]\in M(0,0,0,2)$ of weight $(2,0,0,0)$ is the unique (up to a scalar factor) singular vector of degree 7 in a finite Verma module: \begin{align*}w[7]&=d_{12}d_{13} d_{14} d_{15} \Big( d_{23} d_{24} d_{25} \otimes ({x_{2}^*})^2 - d_{23} d_{25} d_{34} \otimes x_2^*x_3^* - d_{24} d_{25} d_{34} \otimes x_2^*x_4^* + d_{23} d_{24} d_{35} \otimes x_2^*x_3^*\\ &- d_{24} d_{25} d_{35} \otimes x_2^* x_5^* + d_{23} d_{34} d_{35} \otimes (x_3^*)^2 + d_{24} d_{34} d_{35} \otimes x_3^* x_4^* + d_{25} d_{34} d_{35} \otimes x_3^* x_5^* + d_{23} d_{24} d_{45} \otimes x_2^*x_4^*\\ &+ d_{23} d_{25} d_{45} \otimes x_2^* x_5^* + d_{23} d_{34} d_{45} \otimes x_3^* x_4^* + d_{24} d_{34} d_{45} \otimes (x_4^*)^2 + d_{25} d_{34} d_{45} \otimes x_4^* x_5^* + d_{23} d_{35} d_{45} \otimes x_3^* x_5^*\\ &+ d_{24} d_{35} d_{45} \otimes x_4^* x_5^* + d_{25} d_{35} d_{45} \otimes (x_5^*)^2 +\partial_1 d_{23} \otimes x_2^* x_3^* +\partial_1 d_{24} \otimes x_2^* x_4^* +\partial_1 d_{25} \otimes x_2^* x_5^*\\ &-\partial_2 d_{23} \otimes x_1^* x_3^* -\partial_2 d_{24} \otimes x_1^* x_4^* -\partial_2 d_{25} \otimes x_1^* x_5^* +\partial_3 d_{23} \otimes 
x_1^* x_2^* -\partial_3 d_{34} \otimes x_1^* x_4^* -\partial_3 d_{35} \otimes x_1^* x_5^*\\ &+\partial_4 d_{24}\otimes x_1^* x_2^* +\partial_4 d_{34}\otimes x_1^* x_3^* -\partial_4 d_{45} \otimes x_1^* x_5^* +\partial_5 d_{25}\otimes x_1^* x_2^* +\partial_5 d_{35}\otimes x_1^* x_3^* +\partial_5 d_{45}\otimes x_1^* x_4^*\Big). \end{align*} \end{theorem} \begin{proof} Let $I_0=(12,13,14,15,25,35,45)$. In this case Equations \eqref{sing1c}--\eqref{sing4c} of weight $\lambda(\theta_{I_0}(s))$ provide the following homogeneous linear relations: \begin{align}\label{condsette} \theta_{13, 14, 15, 25, 35, 45}(s)&= -2\theta_{12, 14, 15, 25, 35}^2(s)= 2\theta_{12, 13, 15, 25, 45}^2(s)= -2\theta_{13, 14, 15, 25, 35}^3(s)= 2\theta_{12, 13, 15, 35, 45}^3(s)\\ \nonumber &= -2\theta_{13, 14, 15, 25, 45}^4(s)= 2\theta_{12, 14, 15, 35, 45}^4(s)= -4\theta_{12, 15, 25}^{2,2}(s)= -4\theta_{13, 15, 25}^{2,3}(s)\\ \nonumber &= -4\theta_{ 12, 15, 35}^{2,3}(s)= -4\theta_{ 13, 15, 35}^{3,3}(s)= -4 \theta_{ 14, 15, 25}^{2,4}(s)= -4 \theta_{ 12, 15, 45}^{2,4}(s)\\ \nonumber &= -4 \theta_{ 14, 15, 35}^{3,4}(s)= -4 \theta_{ 13, 15, 45}^{3,4}(s)= -4 \theta_{ 14, 15, 45}^{4,4}(s). \end{align} We use Equation \eqref{sing1b} to determine the weight $\mu=(\mu_{12},\mu_{23},\mu_{34}, \mu_{45})$. Taking $(a,b,c,p,q)=(1,2,3,4,5)$ and $J=(13,14,15,25,35,45)$ in \eqref{sing1b} we obtain $\mu_{34}=0$, using \eqref{condsette}. Taking $(a,b,c,p,q)=(4,5,1,3,2)$ and $J=(12,13,14,15,25,35)$ in \eqref{sing1b} we obtain $\mu_{13}=0$.
Taking $(a,b,c,p,q)=(1,2,4,5,3)$ and $J=(13,14,15,25,35,45)$ in \eqref{sing1b} we have \begin{align*} 0&=-2\theta^5_{(13,14,15,25,35,45)\setminus(53)}-(x_5\partial_4.\theta_{12,13,14,15,25,35,45})(s)+2x_5\partial_4.(\theta_{12,13,14,15,25,35,45}(s))\\&\hspace{5mm}-(x_5\partial_1.\theta_{24,13,14,15,25,35,45})(s)\\ &=2\theta^5_{13,14,15,25,45}(s)+\theta_{12,13,14,15,25,35,45}(s)+\theta_{12,13,14,15,25,34,45}(s)+2x_5\partial_4.(\theta_{12,13,14,15,25,35,45}(s))\\&\hspace{5mm}+\theta_{24,13,14,15,21,35,45}(s)\\ &=2\theta^5_{13,14,15,25,45}(s)+2\theta_{12,13,14,15,25,35,45}(s)+\theta_{12,13,14,15,25,34,45}(s)+2x_5\partial_4.(\theta_{12,13,14,15,25,35,45}(s)). \end{align*} Finally we can apply $x_4\partial_5$ to this equation and use \eqref{condsette} to conclude \begin{align*} 0&=2\theta^4_{13,14,15,25,45}(s)-2\theta_{12,13,14,15,25,35,45}(s)-\theta_{12,13,14,15,25,35,45}(s)+2\mu_{45}\theta_{12,13,14,15,25,35,45}(s)\\&=2(\mu_{45}-2)\theta_{12,13,14,15,25,35,45}(s). \end{align*} This shows that the only possible singular vector of degree 7 sits in $M(0,0,0,2)$ and has weight $(0,0,0,2)+\lambda(\omega_{12,13,14,15,25,35,45})=(2,0,0,0)$. The uniqueness of such a singular vector follows from Lemma \ref{I0}, since there are no other $I\in \mathcal I_7$ such that $\lambda (I)=\lambda (I_0)$. The fact that the displayed vector is indeed a singular vector can be checked with a long and technical calculation.
Note that \eqref{condsette} is consistent with the vector $w[7]$ since one can check that \begin{align*}\label{sing7} 4&d_{12}d_{13}d_{14}d_{15}d_{25}d_{35}d_{45}=\\ &4\omega_{13, 14, 15, 25, 35, 45} -2\partial_2\omega_{12,14, 15, 25, 35}+ 2\partial_2\omega_{12, 13, 15, 25, 45} -2\partial_3\omega_{13, 14, 15, 25, 35}+ 2\partial_3\omega_{12, 13, 15, 35, 45}\\ \nonumber & -2\partial_4\omega_{13, 14, 15, 25, 45}+ 2\partial_4\omega_{12, 14, 15, 35, 45} -\partial_2^2\omega_{12, 15, 25} -\partial_2\partial_3\omega_{13, 15, 25}-\partial_2\partial_3\omega_{ 12, 15, 35}-\partial_3^2\omega_{ 13, 15, 35}\\ \nonumber & - \partial_2\partial_4\omega_{ 14, 15, 25} - \partial_2\partial_4\omega_{ 12, 15, 45} - \partial_3\partial_4\omega_{ 14, 15, 35} - \partial_3\partial_4\omega_{ 13, 15, 45}- \partial_4^2\omega_{ 14, 15, 45}. \end{align*} \end{proof} The two possible cases in degree 5 given by Theorem \ref{5o7} are dual to each other. They lead to singular vectors which were already known to Rudakov in \cite{R}. \begin{theorem} Let $w$ be a singular vector of degree 5 in $M(\mu)$ of weight $\lambda$. Then one of the following occurs. \begin{enumerate} \item $\mu=(0,0,1,0)$, $\lambda=(3,0,0,0)$ and \[ w=w[5_{CD}]=d_{12}d_{13}d_{14}d_{15}(d_{45}\otimes x_{45}^*+d_{35}\otimes x_{35}^* +d_{25}\otimes x_{25}^*+d_{24}\otimes x_{24}^*+d_{23}\otimes x_{23}^*); \] \item $\mu=(0,0,0,3)$, $\lambda=(0,1,0,0)$ and $w=w[5_{EA}]=d_{12}w[4_E]$, where $w[4_E]$ is explicitly described in Section \ref{w4E}. \end{enumerate} \end{theorem} \begin{proof} By Theorem \ref{5o7} we can assume that $\theta_{I_0}(s)$ is a highest weight vector with $I_0=(12,13,14,15,45)$ or $I_0=(12,15,25,35,45)$ and by Theorem \ref{duality} it is enough to show that the case $I_0=(12,13,14,15,45)$ leads to conditions (1). In this case Equations \eqref{sing1c}--\eqref{sing4c} easily provide the following relations: \begin{equation}\label{condcinque} \theta_{12,13,14,15,45}(s)=-2\theta^2_{12,14,15}(s)=-2\theta^3_{13,14,15}(s). 
\end{equation} We use Equation \eqref{sing1b} three times to show that necessarily $\mu=(0,0,1,0)$. We first use Equation \eqref{sing1b} with $(a,b,c,p,q)=(4,5,1,3,2)$ and $J=(12,13,14,15)$. All terms but one vanish and we obtain \[ x_3\partial_1.(\theta_{45,12,13,14,15}(s))=0, \] and so $\mu_{12}=\mu_{23}=0$. Using Equation \eqref{sing1b} with $(a,b,c,p,q)=(1,2,3,4,5)$ and $J=(13,14,15,45)$, we obtain \begin{align*} 0&=-2\theta^4_{(13,14,15,45)\setminus(45)}-(x_4\partial_3.\theta_{12,13,14,15,45})(s)+2x_4\partial_3.(\theta_{12,13,14,15,45}(s)) -(x_4\partial_2.\theta_{31,13,14,15,45})(s)\\ &\hspace{5mm}-2x_4\partial_2.(\theta_{31,13,14,15,45}(s)) -(x_4\partial_1.\theta_{23,13,14,15,45})(s)-2x_4\partial_1.(\theta_{23,13,14,15,45}(s))\\ &=2\theta^4_{13,14,15}(s)+\theta_{12,13,14,15,35}(s)+2x_4\partial_3.(\theta_{12,13,14,15,45}(s)), \end{align*} where we have used the fact that $\theta_{23,13,14,15,45}(s)=0$ since it has weight greater than $\theta_{12,13,14,15,45}(s)$. Applying $x_3\partial_4$ to the previous equation, we obtain \[ -2\theta^3_{13,14,15}(s)-\theta_{12,13,14,15,45}(s)+2\mu_{34}\theta_{12,13,14,15,45}(s)=0. \] By \eqref{condcinque} we can conclude that $\mu_{34}=1$. Similarly, by Equation \eqref{sing1b} with $(a,b,c,p,q)=(1,2,3,5,4)$ and $J=(13,14,15,45)$, we obtain \begin{align*} 0&=-2\theta^5_{(13,14,15,45)\setminus(54)}+(x_5\partial_3.\theta_{12,13,14,15,45})(s)-2x_5\partial_3.(\theta_{12,13,14,15,45}(s)) \end{align*} and applying $x_3\partial_5$ and then using \eqref{condcinque} we obtain \[ 0=-2\theta^3_{13,14,15}(s)+\theta_{12,13,14,15,45}(s)-2\mu_{35}\theta_{12,13,14,15,45}(s)=2(1-\mu_{35})\theta_{12,13,14,15,45}(s). \] So $\mu_{35}=1$ and $\mu_{45}=\mu_{35}-\mu_{34}=0$. The weight of $w$ is $\lambda=\mu+\lambda(\omega_{12,13,14,15,45})=(0,0,1,0)+(3,0,-1,0)=(3,0,0,0)$.
The uniqueness follows by Lemma \ref{I0}, and the verification that $d_{12}d_{13}d_{14}d_{15}(d_{45}\otimes x_{45}^*+d_{35}\otimes x_{35}^* +d_{25}\otimes x_{25}^*+d_{24}\otimes x_{24}^*+d_{23}\otimes x_{23}^*)$ is actually such a singular vector is left to the reader. This shows, by duality, that there exists a (unique) singular vector in $M(0,0,0,3)$ of weight $(0,1,0,0)$, and one can check that it is given by $d_{12}w[4_E]$. \end{proof} \section{Degree 4}\label{d4} The last case to be considered concerns singular vectors of degree and height 4. \begin{proposition}\label{gr4} Let $w$ be a singular vector of degree 4 as in (\ref{varfi}) and let $I_0\in \mathcal I_4$ be such that $\theta_{I_0}(s)\neq 0$ is a highest weight vector in $F(\mu)$. Then $I_0\sim(12,13,14,15)$ or $I_0\sim(15,25,35,45)$. \end{proposition} \begin{proof} We make use of the duality in Theorem \ref{duality} to consider only about half of the cases. Indeed, let $w$ be a singular vector in $M(\mu)$ of weight $\lambda$ such that $\theta_{I_0}(s)$ is a highest weight vector (of weight $\mu$). Also consider the dual singular vector $w^*$ in $M(\lambda^*)$ of weight $\mu^*$ and assume that $w^*=\varphi^*(s^*)$, where $s^*$ is a highest weight vector in $M(\lambda^*)$, can be expressed as in \eqref{varfi} with $\theta^*$'s instead of $\theta$'s, and with $\theta^*_{J_0}(s')$ of weight $\lambda'$. Then $\lambda(\theta_{I_0})=\mu-\lambda$ and $\lambda(\theta^*_{J_0})=\lambda^*-\mu^*=-\lambda(\theta_{I_0})^*$. In particular, if there are no singular vectors such that $\theta_{I_0}(s)\neq 0$ for all $I_0$ such that $\lambda(\theta_{I_0})=\lambda$, then there are no singular vectors such that $\theta_{I_0}(s)\neq 0$ for all $I_0$ such that $\lambda(\theta_{I_0})=-\lambda^*$. As in Theorem \ref{5o7}, we can use Equations \eqref{sing1c}--\eqref{sing4c}, but in this case we have some more exceptions which must be considered separately.
More precisely we have that $\theta_{I_0}(s)=0$ in all but the following cases (and their duals): \begin{enumerate} \item $(12,13,14,15)$; \item $(12,14,15,23)$, $(12,13,15,24)$, $(12,13,14,25)$; \item $(13,23,34,35)$; \item $(14,23,34,35)$, $(13,24,34,35)$, $(13,23,34,45)$; \item $(14,24,34,35)$, $(13,24,34,45)$, $(14,23,34,45)$; \item $(14,24,34,45)$; \item $(15,24,34,45)$, $(14,34,25,45)$, $(14,24,35,45)$; \item $(15,25,34,45)$, $(14,25,35,45)$, $(15,24,35,45)$. \end{enumerate} These exceptions have been grouped according to their weight. We now analyze all these cases. \begin{enumerate} \item This is the case which is not excluded by the statement. \item Equations \eqref{sing1c}--\eqref{sing4c} provide $\theta_{12,14,15,23}=\theta_{12,13,14,25}=-\theta_{12,13,15,24}$. In this case Equation \eqref{sing1b} with $a=2$, $b=3$, $c=1$, $p=5$, $q=4$ and $J=(12,14,15)$ gives: $$x_5\partial_2.(\theta_{31,12,14,15}(s))+x_5\partial_1.(\theta_{23,12,14,15}(s))=0.$$ If we apply the vector field $x_1\partial_5$, we get the following equation: $$x_1\partial_2.(\theta_{31,12,14,15}(s))+\mu_{15}(\theta_{23,12,14,15}(s))=0,$$ where we used that if $\theta_{23,12,14,15}(s)$ has the highest weight, then $\theta_{35,12,14,15}(s)=\theta_{31,52,14,15}(s)=\theta_{31,12,54,15}(s)=0$. Hence we get $$-\theta_{32,12,14,15}(s)-\theta_{31,12,24,15}(s)-\theta_{31,12,14,15}(s) +\mu_{15}(\theta_{23,12,14,15}(s))=0,$$ i.e., $(-\mu_{15}-3)(\theta_{12,14,15,23}(s))=0$, a contradiction. \item In this case Equation \eqref{sing1b} with $a=1$, $b=3$, $c=2$, $p=5$, $q=4$ and $J=(23,34,35)$ gives: $$-x_5\partial_2.(\theta_{13,23,34,35}(s))+x_5\partial_3.(\theta_{12,23,34,35}(s))=0.$$ If we apply the vector field $x_2\partial_5$, we get the following equation: $$-\mu_{25}(\theta_{13,23,34,35}(s))+x_2\partial_3.(\theta_{12,23,34,35}(s))=0,$$ where we used that if $\theta_{13,23,34,35}(s)$ has the highest weight then $\theta_{15,23,34,35}(s)=0$. 
Hence we get $$(-\mu_{25}-1)(\theta_{13,23,34,35}(s))=0,$$ a contradiction. \item In this case Equations \eqref{sing1c}--\eqref{sing4c} provide $\theta_{14,23,34,35}=\theta_{13,24,34,35}=\theta_{13,23,34,45}$. Equation \eqref{sing1b} with $a=1$, $b=2$, $c=3$, $p=5$, $q=4$ and $J=(24,34,35)$ gives: \begin{align*} -(x_5\partial_1. \theta_{23,24,34,35})(s)+(x_5\partial_2.\theta_{31,24,34,35})(s) + (x_5\partial_3.\theta_{12,24,34,35})(s)\\ +2 x_5\partial_2.(\theta_{31,24,34,35}(s))+ 2x_5\partial_3.(\theta_{12,24,34,35}(s))=0, \end{align*} where we used that, if $\theta_{13,24,34,35}(s)$ has the highest weight, then $\theta_{23,24,34,35}(s)=0$. This is equivalent to the following: $$2\theta_{13,23,24,34}(s)-2x_5\partial_2.(\theta_{13,24,34,35}(s))+ 2x_5\partial_3.(\theta_{12,24,34,35}(s))=0.$$ If we apply the vector field $x_2\partial_5$ and use the equality $\theta_{13,24,34,35}(s)=\theta_{13,23,34,45}(s)$, we get the following equation: $$-\mu_{25}(\theta_{13,24,34,35}(s))+x_2\partial_3.(\theta_{12,24,34,35}(s))=0,$$ where we used that, if $\theta_{13,24,34,35}(s)$ has the highest weight, then $\theta_{15,24,34,35}(s)=0=\theta_{12,34,35,45}(s)$. Hence we get $$(-\mu_{25}-1)(\theta_{13,24,34,35}(s))=0,$$ a contradiction. \item In this case Equations \eqref{sing1c}--\eqref{sing4c} provide $\theta_{14,24,34,35}=\theta_{14,23,34,45}=\theta_{13,24,34,45}$. Equation \eqref{sing1} with $a=1$, $b=4$, $c=3$, $p=5$, $q=2$ and $J=(24,34,35)$ gives: $$-x_5\partial_4.(\theta_{13,24,34,35}(s))+x_5\partial_3.(\theta_{14,24,34,35}(s))=0.$$ If we apply the vector field $x_3\partial_5$, we get the following equation: $$-x_3\partial_4.(\theta_{13,24,34,35}(s))+\mu_{35}(\theta_{14,24,34,35}(s))=0,$$ where we used that if $\theta_{14,24,34,35}(s)$ has the highest weight, then $\theta_{15,24,34,35}(s)=0=\theta_{13,24,35,45}(s)$. Hence, using the hypothesis $\theta_{14,24,34,35}(s)=\theta_{13,24,34,45}(s)$, we get $$(2+\mu_{35})(\theta_{14,24,34,35}(s))=0,$$ a contradiction. 
\item In this case Equation \eqref{sing1b} with $a=1$, $b=4$, $c=3$, $p=5$, $q=2$ and $J=(24,34,45)$ gives: $$-x_5\partial_4.(\theta_{13,24,34,45}(s))+x_5\partial_3.(\theta_{14,24,34,45}(s))=0.$$ If we apply the vector field $x_3\partial_5$, we get the following equation: $$-x_3\partial_4.(\theta_{13,24,34,45}(s))+\mu_{35}(\theta_{14,24,34,45}(s))=0,$$ where we used that if $\theta_{14,24,34,45}(s)$ has the highest weight, then $\theta_{15,24,34,45}(s)=0$. Hence we get $$(1+\mu_{35})(\theta_{14,24,34,45}(s))=0,$$ a contradiction. \item In this case Equations \eqref{sing1c}--\eqref{sing4c} provide $\theta_{15,24,34,45}=\theta_{14,25,34,45}=\theta_{14,24,35,45}$. Equation \eqref{sing1b} with $a=1$, $b=2$, $c=4$, $p=5$, $q=3$ and $J=(15,34,45)$ gives: \begin{align*} -(x_5\partial_1.\theta_{24,15,34,45})(s)&-(x_5\partial_2.\theta_{41,15,34,45})(s)-(x_5\partial_4.\theta_{12,15,34,45})(s)+2x_5\partial_1.(\theta_{24,15,34,45}(s))\\&+2x_5\partial_2.(\theta_{41,15,34,45}(s))+2x_5\partial_4.(\theta_{12,15,34,45}(s))=0, \end{align*} which is equivalent to the following equation: \begin{align*} -2\theta_{14,15,24,34}(s)&+2\theta_{12,14,34,45}(s) -2x_5\partial_1.(\theta_{15,24,34,45}(s))-2x_5\partial_2.(\theta_{14,15,34,45}(s))+\\ &2x_5\partial_4.(\theta_{12,15,34,45}(s))=0. \end{align*} If we apply the vector field $x_1\partial_5$ we get the following equation: \begin{align*} -2\theta_{54,15,24,34}(s)&-2\theta_{52,14,34,45}(s)- 2\mu_{15}(\theta_{15,24,34,45}(s))-2x_1\partial_2.(\theta_{14,15,34,45}(s))\\ &+2x_1\partial_4.(\theta_{12,15,34,45}(s))=0,\end{align*} where we used that if $\theta_{15,24,34,45}(s)$ has highest weight then $\theta_{15,25,34,45}(s)=0$. Hence we get $$(-6-2\mu_{15})(\theta_{15,24,34,45}(s))=0,$$ a contradiction. \item We have by Equations \eqref{sing1c}--\eqref{sing4c} $\theta_{15,25,34,45}=\theta_{15,24,35,45}=\theta_{14,25,35,45}$ and $\theta^1_{15,45}=\theta^2_{25,45}=\theta^3_{35,45}=0$. 
In this case Equation \eqref{sing1b} with $a=1$, $b=2$, $c=4$, $p=5$, $q=3$ and $J=(25,35,45)$ gives: \begin{align*} -2\theta^5_{J\setminus \{53\}}(s)-(x_5\partial_1.\theta_{24,25,35,45})(s)- (x_5\partial_2.\theta_{41,25,35,45})(s)-(x_5\partial_4.\theta_{12,25,35,45})(s)\\ -2x_5\partial_2.(\theta_{14,25,35,45}(s))+2x_5\partial_4.(\theta_{12,25,35,45}(s))=0 \end{align*} where we used that if $\theta_{14,25,35,45}(s)$ has highest weight then $\theta_{24,25,35,45}(s)=0$. Hence we have: \begin{align*} -2\theta^5_{J\setminus \{53\}}(s)+2\theta_{12,24,35,45}(s)+\theta_{24,25,31,45}(s) +\theta_{41,25,32,45}(s)+2\theta_{14,24,25,35}(s)\\ +\theta_{12,25,34,45}(s)-2x_5\partial_2.(\theta_{14,25,35,45}(s))+ 2x_5\partial_4.(\theta_{12,25,35,45}(s))=0. \end{align*} If we apply the vector field $x_2\partial_5$ we obtain the following equation: \begin{align*} -2\theta_{15,24,35,45}(s)- \theta_{41,25,35,45}(s)-2\theta_{14,54,25,35}(s)\\ -\theta_{15,25,34,45}(s)-2\mu_{25}(\theta_{14,25,35,45}(s))- 2\theta_{14,25,35,45}(s)=0 \end{align*} where we used $\theta^2_{25,45}(s)=0$ and that if $\theta_{14,25,35,45}(s)$ has highest weight then $\theta_{15,25,35,45}(s)=0$. Now, using the hypotheses $\theta_{15,25,34,45}=\theta_{15,24,35,45}=\theta_{14,25,35,45}$, we get $$-2(\mu_{25}+1)\theta_{14,25,35,45}(s)=0,$$ a contradiction. \end{enumerate} \end{proof} The following result completes the study of singular vectors of degree 4. \begin{theorem}\label{degree4} Let $w$ be a singular vector in $M(\mu)$ of weight $\lambda$ and degree 4. Then one of the following occurs: \begin{enumerate} \item $\mu=(n,0,0,0)$, $\lambda=(n+3,0,0,0)$ and $w=d_{12}d_{13}d_{14}d_{15} \otimes x_1^{n}$ for some $n\in \mathbb N$. \item $\mu=(0,0,0,n+3)$, $\lambda=(0,0,0,n)$ and $w=w[4_E]$ (see Section \ref{w4E}) for some $n\in \mathbb N$. \end{enumerate} \end{theorem} \begin{proof} By Proposition \ref{gr4} we know that we can assume that $w$ is as in \eqref{varfi} with $\theta_{12,13,14,15}(s)$ a highest weight vector.
By \eqref{sing1b} with $(a,b,c,p,q)=(1,2,3,5,4)$ and $J=(12,14,15)$ we immediately get $x_5\partial_2.(\theta_{12,13,14,15}(s))=0$ (recalling that $\theta_{12,23,14,15}(s)=0$ for weight reasons) and therefore $\mu=(n,0,0,0)$ for some $n$ and $\lambda=\lambda(\omega_{12,13,14,15})+(n,0,0,0)=(n+3,0,0,0)$. The uniqueness follows by Lemma \ref{I0}. It is a trivial check that the vector $d_{12}d_{13}d_{14}d_{15} \otimes x_1^{n}$ is such a singular vector. By duality we have that the other possible case in Proposition \ref{gr4} leads to a unique singular vector in $M(0,0,0,n+3)$ of weight $(0,0,0,n)$. The fact that this vector is actually $w[4_E]$ displayed in Section \ref{w4E} is a lengthy verification that can be carried out with a computer. \end{proof} \section{Conclusions} As a result of the discussion in the previous sections we are now in a position to state a complete classification of singular vectors in finite Verma modules for $E(5,10)$. \begin{theorem}\label{conclusions} The following is a complete classification of singular vectors and corresponding morphisms for finite Verma modules. In degree 1 we have \begin{itemize} \item $w[1_A]=d_{12}\otimes x_1^{m}x_{12}^{n}\in M(m,n,0,0)$ for all $m,n\geq 0$, giving a morphism \[\varphi[1_A]:M(m,n+1,0,0)\rightarrow M(m,n,0,0);\] \item $w[1_B]=d_{15}\otimes x_1^m (x_5^*)^{n+1}+d_{14}\otimes x_1^m x_4^*(x_5^*)^{n}+d_{13}\otimes x_1^m x_3^*(x_5^*)^{n}+d_{12}\otimes x_1^m x_2^*(x_5^*)^{n}$ for all $m,n\geq 0$, giving a morphism \[\varphi[1_B]:M(m+1,0,0,n)\rightarrow M(m,0,0,n+1);\] \item $w[1_C]=\sum_{i<j} d_{ij}\otimes x_{ij}^*(x_{45}^*)^{m}(x_5^*)^n$ for all $m,n \geq 0$, giving a morphism \[\varphi[1_C]:M(0,0,m,n)\rightarrow M(0,0,m+1,n).
\] \end{itemize} In degree 2 we have \begin{itemize} \item $w[2_{BA}]=\sum_{j>1}d_{12}d_{1j}\otimes x_1^m x_j^* \in M(m,0,0,1)$ for all $m\geq 0$, giving a morphism \[ \varphi[2_{BA}]=\varphi[1_B]\circ \varphi[1_A]:M(m+1,1,0,0)\rightarrow M(m,0,0,1); \] \item $w[2_{CB}]=\sum_{j>1} \sum_{h<k}d_{1j}d_{hk}\otimes x_{hk}^* x_j^* (x_5^*)^n \in M(0,0,1,n+1)$ for all $n\geq 0$, giving a morphism \[ \varphi[2_{CB}]=\varphi[1_C]\circ \varphi[1_B]:M(1,0,0,n)\rightarrow M(0,0,1,n+1); \] \item $w[2_{CA}]=\sum_{i<j}d_{12}d_{ij}\otimes x_{ij}^* \in M(0,0,1,0)$, giving a morphism \[ \varphi[2_{CA}]=\varphi[1_C]\circ \varphi[1_A]:M(0,1,0,0)\rightarrow M(0,0,1,0). \] \end{itemize} In degree 3 we have \begin{itemize} \item $w[3_{CBA}]=\sum_{j>1}\sum_{k<l}d_{12}d_{1j}d_{kl}\otimes x_j^* x_{kl}^* \in M(0,0,1,1)$, giving a morphism \[ \varphi[3_{CBA}]=\varphi[1_C]\circ \varphi[1_B]\circ \varphi[1_A]:M(1,1,0,0)\rightarrow M(0,0,1,1). \] \end{itemize} In degree 4 we have \begin{itemize} \item $w[4_D]=d_{12}d_{13}d_{14}d_{15}\otimes x_1^m \in M(m,0,0,0)$ for all $m\geq 0$, giving a morphism \[ \varphi[4_D]:M(m+3,0,0,0)\rightarrow M(m,0,0,0); \] \item $w[4_E]\in M(0,0,0,n+3)$ shown in \S \ref{w4E}, giving a morphism \[ \varphi[4_E]:M(0,0,0,n)\rightarrow M(0,0,0,n+3). \] \end{itemize} In degree 5 we have \begin{itemize} \item $w[5_{CD}]=d_{12}d_{13}d_{14}d_{15} \sum_{2<i<j} d_{ij}\otimes x_{ij}^*$, giving a morphism \[ \varphi[5_{CD}]=\varphi[1_C]\circ \varphi[4_D]: M(3,0,0,0)\rightarrow M(0,0,1,0); \] \item $w[5_{EA}]=d_{12}w[4_E]\in M(0,0,0,3)$, giving a morphism \[ \varphi[5_{EA}]=\varphi[4_E]\circ \varphi[1_A]:M(0,1,0,0)\rightarrow M(0,0,0,3). \] \end{itemize} In degree 7 we have \begin{itemize} \item $w[7]\in M(0,0,0,2)$ given in Theorem \ref{w[7]}, giving a morphism \[ \varphi[7]:M(2,0,0,0)\rightarrow M(0,0,0,2).
\] \end{itemize} In degree 11 we have \begin{itemize} \item $w[11]\in M(0,0,0,1)$ given in Theorem \ref{w[11]}, giving a morphism \[ \varphi[11]:M(1,0,0,0)\rightarrow M(0,0,0,1). \] \end{itemize} \end{theorem} \begin{proof} In \cite{KR} singular vectors of degree 1 were constructed, and in \cite{R} it was shown that there are no other ones. In \cite{R} singular vectors of degree 2, 3, 4 and 5 were constructed, and it was shown in \cite{CC} that there are no other ones of degree 2 and 3. In the present paper we show that there are no other singular vectors of degree 4 and 5. More precisely, singular vectors of degree 4 are classified in Section \ref{d4}, singular vectors whose degree equals their height and lies between 5 and 10 are classified in Section \ref{min10}, and singular vectors of degree greater than height are classified in Section \ref{>10}. \end{proof} This theorem gives an affirmative answer to the conjecture posed in \cite{KR}. \begin{corollary} All degenerate finite Verma modules over $E(5,10)$ are of the form $M(m,n,0,0)$, $M(0,0,m,n)$ or $M(m,0,0,n)$, where $m,n\in\mathbb{N}$. \end{corollary} Figure \ref{E510morphisms} represents all morphisms between finite degenerate Verma modules that are not compositions of other morphisms. Since a singular vector of weight $\mu$ in a finite Verma module with highest weight $\lambda$ corresponds to a non-zero morphism $M(\mu)\rightarrow M(\lambda)$, we can construct infinite sequences of morphisms as in Figure \ref{E510morphisms}. All of these sequences are complexes (i.e. all compositions of consecutive morphisms vanish), except for the one through the origin; the latter becomes a complex if we replace the sequence $$M(0,1,0,0)\rightarrow M(0,0,0,0)\rightarrow M(1,0,0,0)$$ of morphisms of degree 1 by their composition of degree 2. This claim, when morphisms of degree 7 and 11 are not involved, follows from \cite{R}; if they are involved, it follows since there are no morphisms of degree 8 and 12.
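The degree count behind this last claim can be sketched in a few lines of code. This is only an illustration of the argument, not part of the proof: the degree list is the one of Theorem \ref{conclusions}, and we use the fact, implicit above, that degrees add under composition of morphisms.

```python
# Degrees in which singular vectors (equivalently, non-zero morphisms
# between finite Verma modules) occur, per the classification theorem:
CLASSIFIED_DEGREES = {1, 2, 3, 4, 5, 7, 11}

def composition_must_vanish(deg_f, deg_g):
    """Degrees add under composition, so a non-zero composition of
    morphisms of degrees deg_f and deg_g would itself be a morphism of
    degree deg_f + deg_g; if that degree admits no singular vector,
    the composition is necessarily zero."""
    return (deg_f + deg_g) not in CLASSIFIED_DEGREES

# Compositions of the degree-7 and degree-11 morphisms with degree-1
# morphisms vanish, since degrees 8 and 12 do not occur:
assert composition_must_vanish(7, 1)
assert composition_must_vanish(11, 1)

# By contrast, two degree-1 morphisms may compose to a non-zero
# degree-2 morphism -- the exceptional sequence through the origin:
assert not composition_must_vanish(1, 1)
```

The last assertion is why the sequence through the origin is only a complex after the two degree-1 morphisms are replaced by their degree-2 composition.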
Figure \ref{E510morphismsbis} represents all morphisms between finite degenerate Verma modules of degree 2 and 3; the corresponding bilateral infinite sequences shown in this picture are complexes, since any possible composition of two of these morphisms does not appear in Theorem \ref{conclusions}. \begin{figure} \caption{Morphisms between finite degenerate Verma modules that are not compositions of other morphisms}\label{E510morphisms} \end{figure} \section{The singular vector in $M(0,0,0,n+3)$ of degree 4 and weight $(0,0,0,n)$}\label{w4E} The following is the singular vector (which has been found and checked with the aid of a computer) in $M(0,0,0,n+3)$ of degree 4 and weight $(0,0,0,n)$. Here we denote $x_i^*$ by $f_i$ and we omit all tensor product symbols. \begin{align*} w[4_E]&=d_{12}d_{13}d_{14}d_{15}f_1^3f_5^{n} + d_{12}d_{14}d_{15}d_{23}f_1^2 f_2f_5^{n} + d_{13}d_{14}d_{15}d_{23}f_1^2 f_3 f_5^{n} - d_{12}d_{13}d_{15}d_{24}f_1^2 f_2f_5^{n}\\& + d_{13}d_{14}d_{15}d_{24}f_1^2 f_4f_5^{n} + d_{12}d_{15}d_{23}d_{24}f_1 f_2^2 f_5^{n} + d_{13}d_{15}d_{23}d_{24}f_1 f_2 f_3 f_5^{n} + d_{14}d_{15}d_{23}d_{24}f_1 f_2 f_4 f_5^{n}\\& + d_{12}d_{13}d_{14}d_{25}f_1^2 f_2f_5^{n} + d_{13}d_{14}d_{15}d_{25}f_1^2 f_5^{n+1} - d_{12}d_{14}d_{23}d_{25}f_1 f_2^2 f_5^{n} - d_{13}d_{14}d_{23}d_{25}f_1 f_2 f_3 f_5^{n} \end{align*} \begin{align*} &+ d_{14}d_{15}d_{23}d_{25}f_1 f_2 f_5^{n+1} + d_{12}d_{13}d_{24}d_{25}f_1 f_2^2 f_5^{n} - d_{13}d_{14}d_{24}d_{25}f_1 f_2 f_4 f_5^{n} - d_{13}d_{15}d_{24}d_{25}f_1 f_2 f_5^{n+1}\\& + d_{12}d_{23}d_{24}d_{25}f_2^3 f_5^{n} + d_{13}d_{23}d_{24}d_{25}f_2^2 f_3 f_5^{n} + d_{14}d_{23}d_{24}d_{25}f_2^2 f_4 f_5^{n} + d_{15}d_{23}d_{24}d_{25}f_2^2 f_5^{n+1}\\& - d_{12}d_{13}d_{15}d_{34}f_1^2 f_3 f_5^{n} - d_{12}d_{14}d_{15}d_{34}f_1^2 f_4f_5^{n} + d_{12}d_{15}d_{23}d_{34}f_1 f_2 f_3 f_5^{n} + d_{13}d_{15}d_{23}d_{34}f_1 f_3^2 f_5^{n}\\& + d_{14}d_{15}d_{23}d_{34}f_1 f_3 f_4 f_5^{n} + d_{12}d_{15}d_{24}d_{34}f_1 f_2 f_4 f_5^{n} + d_{13}d_{15}d_{24}d_{34}f_1 f_3 f_4 f_5^{n} + d_{14}d_{15}d_{24}d_{34}f_1 f_4^2 f_5^{n}\\&
- d_{12}d_{13}d_{25}d_{34}f_1 f_2 f_3 f_5^{n} - d_{12}d_{14}d_{25}d_{34}f_1 f_2 f_4 f_5^{n} + d_{13}d_{15}d_{25}d_{34}f_1 f_3 f_5^{n+1} + d_{14}d_{15}d_{25}d_{34}f_1 f_4 f_5^{n+1}\\& - d_{12}d_{23}d_{25}d_{34}f_2^2 f_3 f_5^{n} - d_{13}d_{23}d_{25}d_{34}f_2 f_3^2 f_5^{n} - d_{14}d_{23}d_{25}d_{34}f_2 f_3 f_4 f_5^{n} - d_{15}d_{23}d_{25}d_{34}f_2 f_3 f_5^{n+1}\\& - d_{12}d_{24}d_{25}d_{34}f_2^2 f_4 f_5^{n} - d_{13}d_{24}d_{25}d_{34}f_2 f_3 f_4 f_5^{n} - d_{14}d_{24}d_{25}d_{34}f_2 f_4^2 f_5^{n} - d_{15}d_{24}d_{25}d_{34}f_2 f_4 f_5^{n+1}\\& + d_{12}d_{13}d_{14}d_{35}f_1^2 f_3 f_5^{n} - d_{12}d_{14}d_{15}d_{35}f_1^2 f_5^{n+1} - d_{12}d_{14}d_{23}d_{35}f_1 f_2 f_3 f_5^{n} - d_{13}d_{14}d_{23}d_{35}f_1 f_3^2 f_5^{n}\\& + d_{14}d_{15}d_{23}d_{35}f_1 f_3 f_5^{n+1} + d_{12}d_{13}d_{24}d_{35}f_1 f_2 f_3 f_5^{n} - d_{13}d_{14}d_{24}d_{35}f_1 f_3 f_4 f_5^{n} + d_{12}d_{15}d_{24}d_{35}f_1 f_2 f_5^{n+1}\\& + d_{14}d_{15}d_{24}d_{35}f_1 f_4 f_5^{n+1} + d_{12}d_{23}d_{24}d_{35}f_2^2 f_3 f_5^{n} + d_{13}d_{23}d_{24}d_{35}f_2 f_3^2 f_5^{n} + d_{14}d_{23}d_{24}d_{35}f_2 f_3 f_4 f_5^{n}\\& + d_{15}d_{23}d_{24}d_{35}f_2 f_3 f_5^{n+1} - d_{12}d_{14}d_{25}d_{35}f_1 f_2 f_5^{n+1} - d_{13}d_{14}d_{25}d_{35}f_1 f_3 f_5^{n+1} + d_{14}d_{15}d_{25}d_{35}f_1 f_5^{n+2}\\& - d_{12}d_{24}d_{25}d_{35}f_2^2 f_5^{n+1} - d_{13}d_{24}d_{25}d_{35}f_2 f_3 f_5^{n+1} - d_{14}d_{24}d_{25}d_{35}f_2 f_4 f_5^{n+1} - d_{15}d_{24}d_{25}d_{35}f_2 f_5^{n+2}\\& + d_{12}d_{13}d_{34}d_{35}f_1 f_3^2 f_5^{n} + d_{12}d_{14}d_{34}d_{35}f_1 f_3 f_4 f_5^{n} + d_{12}d_{15}d_{34}d_{35}f_1 f_3 f_5^{n+1} + d_{12}d_{23}d_{34}d_{35}f_2 f_3^2 f_5^{n}\\& + d_{13}d_{23}d_{34}d_{35}f_3^3 f_5^{n} + d_{14}d_{23}d_{34}d_{35}f_3^2 f_4 f_5^{n} + d_{15}d_{23}d_{34}d_{35}f_3^2 f_5^{n+1} + d_{12}d_{24}d_{34}d_{35}f_2 f_3 f_4 f_5^{n}\\& + d_{13}d_{24}d_{34}d_{35}f_3^2 f_4 f_5^{n} + d_{14}d_{24}d_{34}d_{35}f_3 f_4^2 f_5^{n} + d_{15}d_{24}d_{34}d_{35}f_3 f_4 f_5^{n+1} + d_{12}d_{25}d_{34}d_{35}f_2 f_3 f_5^{n+1}\\& + 
d_{13}d_{25}d_{34}d_{35}f_3^2 f_5^{n+1} + d_{14}d_{25}d_{34}d_{35}f_3 f_4 f_5^{n+1} + d_{15}d_{25}d_{34}d_{35}f_3 f_5^{n+2} + d_{12}d_{13}d_{14}d_{45}f_1^2 f_4f_5^{n}\\& + d_{12}d_{13}d_{15}d_{45}f_1^2 f_5^{n+1} - d_{12}d_{14}d_{23}d_{45}f_1 f_2 f_4 f_5^{n} - d_{13}d_{14}d_{23}d_{45}f_1 f_3 f_4 f_5^{n} - d_{12}d_{15}d_{23}d_{45}f_1 f_2 f_5^{n+1}\\& - d_{13}d_{15}d_{23}d_{45}f_1 f_3 f_5^{n+1} + d_{12}d_{13}d_{24}d_{45}f_1 f_2 f_4 f_5^{n} - d_{13}d_{14}d_{24}d_{45}f_1 f_4^2 f_5^{n} - d_{13}d_{15}d_{24}d_{45}f_1 f_4 f_5^{n+1}\\& + d_{12}d_{23}d_{24}d_{45}f_2^2 f_4 f_5^{n} + d_{13}d_{23}d_{24}d_{45}f_2 f_3 f_4 f_5^{n} + d_{14}d_{23}d_{24}d_{45}f_2 f_4^2 f_5^{n} + d_{15}d_{23}d_{24}d_{45}f_2 f_4 f_5^{n+1}\\& + d_{12}d_{13}d_{25}d_{45}f_1 f_2 f_5^{n+1} - d_{13}d_{14}d_{25}d_{45}f_1 f_4 f_5^{n+1} - d_{13}d_{15}d_{25}d_{45}f_1 f_5^{n+2} + d_{12}d_{23}d_{25}d_{45}f_2^2 f_5^{n+1}\\& + d_{13}d_{23}d_{25}d_{45}f_2 f_3 f_5^{n+1} + d_{14}d_{23}d_{25}d_{45}f_2 f_4 f_5^{n+1} + d_{15}d_{23}d_{25}d_{45}f_2 f_5^{n+2} + d_{12}d_{13}d_{34}d_{45}f_1 f_3 f_4 f_5^{n}\\& + d_{12}d_{14}d_{34}d_{45}f_1 f_4^2 f_5^{n} + d_{12}d_{15}d_{34}d_{45}f_1 f_4 f_5^{n+1} + d_{12}d_{23}d_{34}d_{45}f_2 f_3 f_4 f_5^{n} + d_{13}d_{23}d_{34}d_{45}f_3^2 f_4 f_5^{n}\\& + d_{14}d_{23}d_{34}d_{45}f_3 f_4^2 f_5^{n} + d_{15}d_{23}d_{34}d_{45}f_3 f_4 f_5^{n+1} + d_{12}d_{24}d_{34}d_{45}f_2 f_4^2 f_5^{n} + d_{13}d_{24}d_{34}d_{45}f_3 f_4^2 f_5^{n}\\& + d_{14}d_{24}d_{34}d_{45}f_4^3 f_5^{n} + d_{15}d_{24}d_{34}d_{45}f_4^2 f_5^{n+1} + d_{12}d_{25}d_{34}d_{45}f_2 f_4 f_5^{n+1} + d_{13}d_{25}d_{34}d_{45}f_3 f_4 f_5^{n+1}\\& + d_{14}d_{25}d_{34}d_{45}f_4^2 f_5^{n+1} + d_{15}d_{25}d_{34}d_{45}f_4 f_5^{n+2} + d_{12}d_{13}d_{35}d_{45}f_1 f_3 f_5^{n+1} + d_{12}d_{14}d_{35}d_{45}f_1 f_4 f_5^{n+1}\\& + d_{12}d_{15}d_{35}d_{45}f_1 f_5^{n+2} + d_{12}d_{23}d_{35}d_{45}f_2 f_3 f_5^{n+1} + d_{13}d_{23}d_{35}d_{45}f_3^2 f_5^{n+1} + d_{14}d_{23}d_{35}d_{45}f_3 f_4 f_5^{n+1}\\& + d_{15}d_{23}d_{35}d_{45}f_3 f_5^{n+2} + 
d_{12}d_{24}d_{35}d_{45}f_2 f_4 f_5^{n+1} + d_{13}d_{24}d_{35}d_{45}f_3 f_4 f_5^{n+1} + d_{14}d_{24}d_{35}d_{45}f_4^2 f_5^{n+1}\\& + d_{15}d_{24}d_{35}d_{45}f_4 f_5^{n+2} + d_{12}d_{25}d_{35}d_{45}f_2 f_5^{n+2} + d_{13}d_{25}d_{35}d_{45}f_3 f_5^{n+2} + d_{14}d_{25}d_{35}d_{45}f_4 f_5^{n+2} \end{align*} \begin{align*} &+ d_{15}d_{25}d_{35}d_{45}f_5^{n+3} + \partial_1 d_{12}d_{13}f_1 f_2 f_3 f_5^{n} + \partial_1 d_{12}d_{14}f_1 f_2 f_4 f_5^{n} + \partial_1 d_{12}d_{15}f_1 f_2 f_5^{n+1}\\& + \partial_1 d_{12}d_{23}f_2^2 f_3 f_5^{n} + \partial_1 d_{13}d_{23}f_2 f_3^2 f_5^{n} + \partial_1 d_{14}d_{23}f_2 f_3 f_4 f_5^{n} + \partial_1 d_{15}d_{23}f_2 f_3 f_5^{n+1}\\& + \partial_1 d_{12}d_{24}f_2^2 f_4 f_5^{n} + \partial_1 d_{13}d_{24}f_2 f_3 f_4 f_5^{n} + \partial_1 d_{14}d_{24}f_2 f_4^2 f_5^{n} + \partial_1 d_{15}d_{24}f_2 f_4 f_5^{n+1}\\& + \partial_1 d_{12}d_{25}f_2^2 f_5^{n+1} + \partial_1 d_{13}d_{25}f_2 f_3 f_5^{n+1} + \partial_1 d_{14}d_{25}f_2 f_4 f_5^{n+1} + \partial_1 d_{15}d_{25}f_2 f_5^{n+2}\\& - \partial_2 d_{12}d_{13}f_1^2 f_3 f_5^{n} - \partial_2 d_{12}d_{14}f_1^2 f_4f_5^{n} - \partial_2 d_{12}d_{15}f_1^2 f_5^{n+1} - \partial_2 d_{12}d_{23}f_1 f_2 f_3 f_5^{n}\\& - \partial_2 d_{13}d_{23}f_1 f_3^2 f_5^{n} - \partial_2 d_{14}d_{23}f_1 f_3 f_4 f_5^{n} - \partial_2 d_{15}d_{23}f_1 f_3 f_5^{n+1} - \partial_2 d_{12}d_{24}f_1 f_2 f_4 f_5^{n}\\& - \partial_2 d_{13}d_{24}f_1 f_3 f_4 f_5^{n} - \partial_2 d_{14}d_{24}f_1 f_4^2 f_5^{n} - \partial_2 d_{15}d_{24}f_1 f_4 f_5^{n+1} - \partial_2 d_{12}d_{25}f_1 f_2 f_5^{n+1}\\& - \partial_2 d_{13}d_{25}f_1 f_3 f_5^{n+1} - \partial_2 d_{14}d_{25}f_1 f_4 f_5^{n+1} - \partial_2 d_{15}d_{25}f_1 f_5^{n+2} + \partial_3 d_{12}d_{13}f_1^2 f_2f_5^{n}\\& - \partial_3 d_{13}d_{14}f_1^2 f_4f_5^{n} - \partial_3 d_{13}d_{15}f_1^2 f_5^{n+1} + \partial_3 d_{12}d_{23}f_1 f_2^2 f_5^{n} + \partial_3 d_{13}d_{23}f_1 f_2 f_3 f_5^{n}\\& + \partial_3 d_{14}d_{23}f_1 f_2 f_4 f_5^{n} + \partial_3 d_{15}d_{23}f_1 f_2 f_5^{n+1} - \partial_3 
d_{12}d_{34}f_1 f_2 f_4 f_5^{n} - \partial_3 d_{13}d_{34}f_1 f_3 f_4 f_5^{n}\\& - \partial_3 d_{14}d_{34}f_1 f_4^2 f_5^{n} - \partial_3 d_{15}d_{34}f_1 f_4 f_5^{n+1} - \partial_3 d_{12}d_{35}f_1 f_2 f_5^{n+1} - \partial_3 d_{13}d_{35}f_1 f_3 f_5^{n+1}\\& - \partial_3 d_{14}d_{35}f_1 f_4 f_5^{n+1} - \partial_3 d_{15}d_{35}f_1 f_5^{n+2} + \partial_4 d_{12}d_{14}f_1^2 f_2f_5^{n} + \partial_4 d_{13}d_{14}f_1^2 f_3 f_5^{n}\\& - \partial_4 d_{14}d_{15}f_1^2 f_5^{n+1} + \partial_4 d_{12}d_{24}f_1 f_2^2 f_5^{n} + \partial_4 d_{13}d_{24}f_1 f_2 f_3 f_5^{n} + \partial_4 d_{14}d_{24}f_1 f_2 f_4 f_5^{n}\\& + \partial_4 d_{15}d_{24}f_1 f_2 f_5^{n+1} + \partial_4 d_{12}d_{34}f_1 f_2 f_3 f_5^{n} + \partial_4 d_{13}d_{34}f_1 f_3^2 f_5^{n} + \partial_4 d_{14}d_{34}f_1 f_3 f_4 f_5^{n}\\& + \partial_4 d_{15}d_{34}f_1 f_3 f_5^{n+1} - \partial_4 d_{12}d_{45}f_1 f_2 f_5^{n+1} - \partial_4 d_{13}d_{45}f_1 f_3 f_5^{n+1} - \partial_4 d_{14}d_{45}f_1 f_4 f_5^{n+1}\\& - \partial_4 d_{15}d_{45}f_1 f_5^{n+2} + \partial_5 d_{12}d_{15}f_1^2 f_2f_5^{n} + \partial_5 d_{13}d_{15}f_1^2 f_3 f_5^{n} + \partial_5 d_{14}d_{15}f_1^2 f_4f_5^{n}\\& + \partial_5 d_{12}d_{25}f_1 f_2^2 f_5^{n} + \partial_5 d_{13}d_{25}f_1 f_2 f_3 f_5^{n} + \partial_5 d_{14}d_{25}f_1 f_2 f_4 f_5^{n} + \partial_5 d_{15}d_{25}f_1 f_2 f_5^{n+1}\\& + \partial_5 d_{12}d_{35}f_1 f_2 f_3 f_5^{n} + \partial_5 d_{13}d_{35}f_1 f_3^2 f_5^{n} + \partial_5 d_{14}d_{35}f_1 f_3 f_4 f_5^{n} + \partial_5 d_{15}d_{35}f_1 f_3 f_5^{n+1}\\& + \partial_5 d_{12}d_{45}f_1 f_2 f_4 f_5^{n} + \partial_5 d_{13}d_{45}f_1 f_3 f_4 f_5^{n} + \partial_5 d_{14}d_{45}f_1 f_4^2 f_5^{n} + \partial_5 d_{15}d_{45}f_1 f_4 f_5^{n+1}. \end{align*} We recall that the construction of this vector is also sketched by Rudakov in \cite[\S 4]{R}. \begin{figure} \caption{\label{E510morphismsbis}} \end{figure} \end{ack} \end{document}
\begin{document} \def\theequation{\thesection.\arabic{equation}} \setcounter{equation}{0} \title{ {\bf Gamow-Jordan Vectors and Non-Reducible Density Operators from Higher Order S-Matrix Poles}} \author{ {\bf A. Bohm}\thanks{Center for Particle Physics}, {\bf M. Loewe}\thanks{Microelectronics Research Center}, \\ {\bf S. Maxson}\thanks{current address: Department of Physics, University of Colorado at Denver, Denver, Colorado 80217-3364}, {\bf P. Patuleanu}$^*$, {\bf C. P{\"u}ntmann}$^*$ \\ \footnotesize{The University of Texas at Austin}\\ \footnotesize{Austin, Texas~78712} \\ and \\ {\bf M. Gadella} \\ \footnotesize{Facultad de Ciencias, Universidad de Valladolid}\\ \footnotesize{E-47011 Valladolid, Spain} } \maketitle \begin{abstract} In analogy to Gamow vectors, which are obtained from first order resonance poles of the S-matrix, one can also define higher order Gamow vectors, which are derived from higher order poles of the S-matrix. An S-matrix pole of $r$-th order at $z_R=E_R-i\Gamma/2$ leads to $r$ generalized eigenvectors of order $k=0,~1,\ldots,~r-1$, which are also Jordan vectors of degree $(k+1)$ with generalized eigenvalue $(E_R-i\Gamma/2)$. The Gamow-Jordan vectors are elements of a generalized complex eigenvector expansion, whose form suggests the definition of a state operator (density matrix) for the microphysical decaying state of this higher order pole. This microphysical state is a mixture of non-reducible components. In spite of the fact that the $k$-th order Gamow-Jordan vectors have the polynomial time dependence which one always associates with higher order poles, the microphysical state obeys a purely exponential decay law. \end{abstract} \section{Introduction}\label{sec:introduction} \setcounter{equation}{0} The singularities of the analytically continued S-matrix that have attracted most of the attention in the past are the first order poles in the second sheet.
They were associated with resonances that decay exponentially in time.~\cite{doublepoles} In conventional Hilbert space quantum theory it was not clear what those resonance ``states'' were, since a vector description of a resonance state was not possible within the framework of the Hilbert space.~\cite{fonda-ghirardi-rimini} Higher order poles, in particular double poles, have also been mentioned, but it has long been believed that they somehow lead to an additional polynomial time dependence of the decay law~\cite{polynomial}. However, precise derivations were not possible due to the lack of a vector space description. This changed when the first order poles were associated with vectors ${\psi^G =\sqrt{2\pi\Gamma}\,|E_R-i\Gamma/2\rangle}$ in a rigged Hilbert space (RHS)~\cite{bohm, bohm-group, gadella1983, bohm-gadella}, called Gamow vectors. They possess all the properties that one needs to describe decaying states or resonances: These Gamow vectors $\psi^G$ are eigenvectors of a self-adjoint Hamiltonian~\cite{22a} with complex eigenvalues $z_R=E_R-i\Gamma/2$ (energy and width). They evolve exponentially in time, and they have a Breit-Wigner energy distribution. They also obey an exact Golden Rule, which becomes the standard Golden Rule if one replaces $\psi^G$ with its Born approximation. The existence of these vectors allows us to interpret resonances as autonomous physical systems (which one cannot do in standard quantum mechanics). It also puts quasibound states (i.e.~resonances) and anti-bound (or virtual) states~\cite{gadella1983} on the same footing with the bound states (eigenvectors with real energy), which have both a vector description and an S-matrix description. Mathematically, Gamow vectors are a generalization of Dirac kets (describing scattering states), i.e.~they are also eigenkets.
But whereas Dirac kets are associated with a value of the continuous Hilbert space spectrum of the self-adjoint Hamiltonian $H$, the Gamow kets are not, but have complex eigenvalues. Using the entirely different theory of finite dimensional complex matrices, decaying states (like the $K^0-\bar{K}^0$ system) have been phenomenologically described as eigenvectors of an effective Hamiltonian matrix with complex eigenvalues. One usually assumes that these complex Hamiltonians are diagonalizable~\cite{lee}. However, unlike hermitean matrices which have real eigenvalues, non-hermitean finite dimensional matrices cannot always be diagonalized, but can only be brought into a Jordan canonical form~\cite{baumgartel-gantmacher-lancaster}. Finite dimensional matrices consisting of non-diagonalizable Jordan blocks have been mentioned in connection with resonances numerous times in the past~\cite{lukierski,brandas,antoniou-tasaki,mondragon,stodolsky}, and they have been used for discussions of problems in nuclear~\cite{mondragon} and in hadron~\cite{stodolsky} physics. Jordan blocks have also been obtained in prototypes of mixing systems~\cite{antoniou-tasaki}, and the appearance of so-called ``irreducible'' non-diagonalizable blocks in the density matrix has been sought after for some time in connection with irreversible thermodynamics and the approach to equilibrium~\cite{prigogine}. That irreducible non-diagonalizable Jordan blocks may shed light on the idea of quantum chaos has been mentioned by Br{\"a}ndas and Dreismann~\cite{brandas}. Also important for the understanding of quantum chaos and of the statistical properties of nuclear spectra are accidental degeneracies and level crossing, which in the past had been almost exclusively restricted to stable systems driven by hermitean Hamiltonians~\cite{wilczek}. 
Based on a finite dimensional phenomenological expression for the S-matrix~\cite{verbaarschot}, Mondrag\'{o}n {\em et al.}~\cite{mondragon} extended these discussions to resonance states described by a Jordan block of rank $2$. In the present paper we shall show that the Jordan blocks emerge naturally for the matrix elements $\,^{(k)}\langle^-z_R|H|\psi^-\rangle$ of a self-adjoint~\cite{22a} Hamiltonian $H$ between Gamow vectors $|z^-_R\rangle^{(k)}$ of order $k=1,~2,\dots,r-1$. From the generalized basis vector expansion derived here it follows that these $r$-dimensional blocks are a truncation of the infinite dimensional exact theory in the RHS. The higher order Gamow vectors $|z^-_R\rangle^{(k)}$ have been derived in a recent unpublished preprint by Antoniou and Gadella~\cite{antoniou-gadella}. The derivation is a generalization of the method by which the Gamow vector (of order $k=0$) was derived from the first order poles of the S-matrix~\cite{bohm-group}. Starting from an $r$-th order pole of the S-matrix element at complex energy $z=z_R$, they derived $r$ Gamow vectors of higher order, $|E_R-i\Gamma/2\,^-\rangle^{(k)}$, $k=0,\,1,\ldots,r-1$, as functionals in a rigged Hilbert space. These higher order Gamow kets are also Jordan vectors belonging to the eigenvalue $z_R$. In the present paper we generalize the RHS theory of the Gamow vectors associated with first order S-matrix poles, which we call Gamow vectors of order zero, to poles of order $r$. Quasistationary states in scattering experiments (i.e.~states formed if the projectile is temporarily captured by the target) can be shown to appear not only as first order poles, but as poles of any order $r=1,2,\dots$~(\cite{polynomial}).
In section~\ref{sec:poles}, we will start from the expression for the unitary S-matrix of a quasistationary state of finite order $r$ and energy $E_R$, given in reference~\cite{bohm}, sect.~XVIII.6, and obtain from it $r$ Gamow vectors of order $k=0,1,\dots,r-1$ which are also Jordan vectors of degree $k+1$. After a review of the case $r=1$ in section~\ref{sec:summary}, we derive in section~\ref{sec:higher} the generalized eigenvector expansion, which contains the Gamow-Jordan vectors as basis vectors. With these basis vectors we can give a matrix representation of $H$ and of $e^{-iHt}$ which contains the $r$-dimensional Jordan blocks. In section~\ref{sec:possible}, we start from the pole term of the $r$-th order S-matrix pole and conjecture the state operator for the hypothetical microphysical system associated with this pole. This $r$-th order Gamow state operator consists of non-diagonalizable blocks which obey a purely exponential decay law. This unexpected result is in contrast to the belief~\cite{polynomial} that higher order poles must lead to an additional polynomial time dependence. At the present time there is little empirical evidence for the existence of these higher order pole ``states'' in nature. This is in marked contrast to the fact that first order pole states described by ordinary Gamow vectors have been identified in abundance, e.g.~through their Breit-Wigner profile in scattering experiments and through their exponential decay law. Now that our results have removed the principal empirical objection against the existence of higher order pole states, namely their alleged non-exponentiality, one can continue to look for them. The first step in this direction is to use these higher order state operators in the exact Golden Rule~\cite{bohm} and obtain the decay probability and the decay rate, including the line widths. We plan to do this in a forthcoming paper.
\section{Poles of the S-matrix and Gamow-Jordan Vectors}\label{sec:poles} \setcounter{equation}{0} Since the new (hypothetical) states are to be defined by the $r$-th order pole of the S-matrix, we consider a scattering system. The S-matrix consists of the matrix elements~\cite{s-matrix} \begin{align} \hspace{-.1in} \left(\psi^{\text{out}},\phi^{\text{out}}\right)&= \left(\psi^{\text{out}}(t), \phi^{\text{out}}(t)\right) =\left( \psi^{\text{out}},S \phi^{\text{in}}\right) \nonumber \\ & = \left( \Omega^- \psi^{\text{out}}(t), \Omega^+ \phi^{\text{in}}(t) \right) = \left( \psi^-(t),\phi^+(t)\right) = \left( \psi^-,\phi^+\right) \label{eq:outmatrix} \\ & =\int_{\text{spectrum }H}dE\;\langle\psi^-|E^-\rangle S(E+i0)\langle^+ E|\phi^+\rangle\;.\nonumber \end{align} Since we are interested only in the principles here, in equation (\ref{eq:outmatrix}) (and in subsequent equations) we choose to ignore all other labels of the basis vectors $|E^\pm \rangle$ and $|E\rangle$ except the energy label $E$, which can take values on a two-sheeted Riemann surface. Nothing principally new would be gained if we retained the additional quantum numbers $b=b_2,\,b_3,\ldots,b_N$ in the basis system $|E^\pm \rangle \Longrightarrow |E, b^\pm\rangle = |E,\,b_2,\,b_3,\ldots,b_N^\pm\rangle$; in place of the integral over the energy we would just have some additional sums (or integrals, in the case that some of the $b$'s are continuous) over the quantum numbers $b$.
For instance, if one chooses the angular momentum basis $|E^\pm\rangle \Longrightarrow |E,l,\, l_3,\, \eta^\pm \rangle$, where $\eta$ are some additional (polarization or) channel quantum numbers (cf.~\cite{bohm}, sect.~XX.2, XXI.4), then~(\ref{eq:outmatrix}) would read in detail \begin{align} \left( \psi^- ,\phi^+\right) =& \sum_{l,\, l_3,\, \eta} \sum_{l^\prime ,\, l^\prime_3 ,\, \eta^\prime} \iint\, dE\, dE^\prime \, \langle \psi^-\, |\, E^\prime ,\, l^\prime ,\, l^\prime_3 ,\, \eta^{\prime -} \rangle \times \label{eq:matrix2}\\ &\quad \times\langle^- E^\prime ,\, l^\prime ,\, l^\prime_3 ,\, \eta^\prime \, |E,\, l,\, l_3 ,\, \eta^+\rangle\langle^+ E,\, l,\, l_3 ,\, \eta\, |\, \phi^+\rangle \; . \nonumber \end{align} Restricting ourselves to one initial $\eta =\eta_A$ and one final $\eta'=\eta_B$ channel (e.g., $\eta_B=\eta_A$ for elastic scattering) we obtain \begin{eqnarray}\label{eq:matrix2a} \langle^-E',l',l'_3 ,\eta_B|E,l,l_3,\eta_A^+\rangle &=& \langle E',l',l'_3,\eta_B|\,S\,|E,l,l_3,\eta_A\rangle\\ &=&\delta(E'-E)\delta_{l^\prime_3l_3}\delta_{l^\prime l} \langle\eta_B|\!|S|\!|\eta_A\rangle\nonumber \end{eqnarray} where \begin{equation} \langle \eta_B \, |\!| \, S\, |\!| \, \eta_A\rangle = S^{\eta_B}_l (E) \label{eq:matrix2b} \end{equation} is the $l$-th partial S-matrix element for scattering from the channel $\eta_A$ into one particular channel $\eta_B$ (e.g., the elastic channel, $\eta_B=\eta_A$). If we consider the $l$-th partial wave of the $\eta_B$-th channel, then the $S(E)$ in (\ref{eq:outmatrix}) is given by this matrix element $S(E)=S^{\eta_B}_l (E)$. E.g., if we consider a mass point in a potential barrier, then $|E^\pm\rangle = |E,\, l,\, l^\pm_3 \rangle$ is the angular momentum basis of the mass point and, depending on the shape and height of the barrier, one or several resonances can exist.
Many concrete examples have been studied where one can see how first order resonance poles $z_{R_i}=E_{R_i} -i\, \Gamma_i /2$ move as a function of the potential parameters~\cite{hogreve}. We want to consider just one pole, and in the present paper we are mainly interested in a higher order pole at $z_R$. Whether physical systems exist that are described by higher order poles is not clear, but a few examples of second order poles have been discussed in the past~\cite{polynomial,mondragon}. With the above simplifications to one channel $\eta_B$ and one partial wave $l$, the notation in (\ref{eq:outmatrix}) is standard in scattering theory. \begin{figure} \caption{The preparation-registration procedure in a scattering experiment} \end{figure} The standard scattering theory uses the same Hilbert space ${\cal H}$ for both the set of in-states $\phi^+$ and the set of out-``states'' $\psi^-$. The RHS formulation allows us to use two RHS's: one for the set $\{\phi^+\}$ defined by the initial conditions and one for the set $\{\psi^-\}$ defined by the final conditions. To explain this we subdivide the scattering experiment into a preparation stage and a registration stage, as explained in detail in reference~\cite{cologne}. Fig.~1 depicts these different stages illustrating the idealized process: The in-state $\phi^+$ (precisely, the state which evolves from the prepared in-state $\phi^{\text{in}}$ outside the interaction region, where $V=H-H_0$ is zero) is determined by the accelerator. The so-called out-state $\psi^-$ (or $\psi^{\text{out}}$) is determined by the detector; $|\psi^{\text{out}} \rangle \langle\psi^{\text{out}} |$ is therefore the observable which the detector registers, and not a state. In the conventional formulation one describes both the $\phi^{\text{in}}$ and the $\psi^{\text{out}}$ by any vectors of the Hilbert space.
In reality the $\phi^{\text{in}}$ (or $\phi^+$) and $\psi^{\text{out}}$ (or $\psi^-$) are subject to different initial and boundary conditions and are therefore described by different sets of vectors belonging to different rigged Hilbert spaces. The RHS for the Dirac kets is denoted by \begin{equation}\label{eq:rhs} \Phi \subset {\cal H} \subset \Phi^\times \end{equation} where $\Phi$ is the space of the ``well-behaved'' vectors (Schwartz space), and the Dirac kets (scattering states) $|E^\pm\rangle$ and $|E\rangle$ are elements of $\Phi^\times$. The in-state vectors $\phi^+(t)=e^{iHt/\hbar}\phi^+$ evolve from the prepared in-state $\phi^{\text{in}}(t)=(\Omega^+)^{-1}\,\phi^+(t),\, t<0,$ and the out-observable vectors $\psi^-(t)=e^{iHt/\hbar}\psi^-$ evolve into the measured out-state $\psi^{\text{out}}(t)=(\Omega^-)^{-1}\, \psi^-(t),\, t>0$.~\cite{fussnote} We denote the space of $\{\phi^+\}$ by ${\Phi}_-$ and the space of $\{\psi^-\}$ by ${\Phi}_+$. Then ${\Phi} = {\Phi}_- +{\Phi}_+$, where ${\Phi}_-\cap{\Phi}_+ \ne \emptyset$. In place of the single rigged Hilbert space (\ref{eq:rhs}), one therefore has a pair of rigged Hilbert spaces: \begin{subeqnarray}\label{eq:rhsm}\label{bohm6} \phi^+\in\;\Phi_-\subset{\cal H}\subset\Phi^\times_-&& \text{for in-states of a scattering}\\\nonumber &&\text{experiment, which are prepared} \\\nonumber &&\text{by a preparation apparatus,}\\ \psi^-\in\;\Phi_+\subset{\cal H}\subset\Phi^\times_+&& \label{bohm7} \text{for observables or out-``states'',}\\\nonumber &&\text{which are registered by a detector.} \end{subeqnarray} The Hilbert space ${\cal H}$ in (\ref{eq:rhs}), (\ref{bohm6}a), and (\ref{bohm7}b) is the same, but ${\Phi}_+$ and ${\Phi}_-$ are two distinct spaces of ``very well-behaved'' vectors. The spaces ${\Phi}_+$ and ${\Phi}_-$ can be defined mathematically in terms of the spaces of their wave functions $\langle^+ E|\phi^+\rangle$ and $\langle^- E|\psi^-\rangle$, respectively.
This is the realization of these abstract spaces by spaces of functions, in very much the same way as the Hilbert space ${\cal H}$ is realized by the space of Lebesgue square-integrable functions $L^2 [0,\infty )$. The space ${\Phi}_-$ is realized by the space of well-behaved Hardy class functions in the lower half-plane of the second energy sheet of the S-matrix $S(E)$, and the space ${\Phi}_+$ is realized by the space of well-behaved Hardy class functions in the upper half-plane. Thus, the mathematical definition of the spaces $\Phi_+$ and $\Phi_-$ is: \begin{equation} \psi^-\in{\Phi}_+\quad\text{iff}\quad \langle\,E\,|\,\psi^{\text{out}}\rangle =\langle^- E\, |\psi^-\rangle \in {\cal S}\cap{\cal H}^2_+ \, \Bigr|_{{\rm {I\!R}}^+} \label{eq:2.11a} \end{equation} and \begin{equation} \phi^+\in\Phi_- \quad \text{iff}\quad \langle\,E\,|\,\phi^{\text{in}}\rangle =\langle^+ E \, |\phi^+\rangle \in {\cal S}\cap{\cal H}^2_-\, \Bigr|_{{\rm {I\!R}}^+} \;. \label{eq:2.11b} \end{equation} Here ${\cal S}$ denotes the Schwartz space and ${\cal S}\cap{\cal H}^2_\pm$ is the space of Hardy class functions from above/below. This mathematical property of the spaces ${\Phi}_+$ and ${\Phi}_-$ can be shown to be a consequence of the arrow of time inherent in every scattering experiment~\cite{cologne}. Being Hardy class from below means that the analytic continuation $\langle\psi^-|z^-\rangle$ of $\langle \psi^-|E^-\rangle = \overline{\langle^- E|\psi^-\rangle}$, and the analytic continuation $\langle^+z|\phi^+\rangle$ of $\langle^+ E|\phi^+\rangle$, and therewith also $\langle\psi^-|z^-\rangle\langle^+z|\phi^+\rangle$, are analytic functions in the lower half-plane which vanish fast enough on the lower infinite semicircle. (For the precise definition, see~\cite{bohm-gadella, duren-hoffman}.)
The values of a Hardy class function in the lower half-plane are already determined by its values on the positive real axis~\cite{vanwinter}. From (\ref{eq:2.11a}) and (\ref{eq:2.11b}) it follows that \begin{equation}\label{eq:2.12} \langle\psi^-|E^-\rangle\langle^+ E|\phi^+ \rangle \in{\cal S}\cap{\cal H}^p_-\;, \; p=1 \end{equation} and so are all its derivatives \begin{equation}\label{eq:2.13} \hspace{-.4in}\bigl( \langle\psi^-|E^-\rangle\langle^+ E| \phi^+ \rangle \bigr)^{(n)}\in{\cal S}\cap{\cal H}^p_-\;;\hspace{.2in} p=1,~n=0,\, 1,\, 2,\, \ldots \end{equation} because differentiation is continuous in ${\cal S}$. With the above preparations one can derive the vectors that are associated with the $r$-th order pole of the S-matrix for any value of $r$, in complete analogy to the derivation of the vectors associated with the first order poles, $r=1$.~\cite{bohm-group} We shall see that there are $r$ generalized vectors of order $k=0,\, 1, \ldots ,\, r-1$ associated with an $r$-th order pole. We call these vectors the higher order Gamow vectors, or Gamow-Jordan vectors (since they also have the properties of Jordan vectors~\cite{baumgartel-gantmacher-lancaster}). Their first derivation from the $r$-th order pole was given in~\cite{antoniou-gadella}. Here we give an alternative derivation and discuss their properties and applications in the generalized basis vector expansion. We consider the model in which the analytically continued S-matrix $S(\omega)$ has one $r$-th order pole at the position $\omega=z_R$ ($z_R =E_R-i\, \Gamma /2$) in the lower half-plane of the second sheet (and consequently there is also one $r$-th order pole in the upper half-plane of the second sheet, at $\omega=z^\ast_R$). In this paper we will not discuss the pole at $z^\ast_R$.
It leads to $r$ growing higher order Gamow vectors, and the correspondence between the growing and decaying vectors is just the same as for the case $r=1$. The model that we discuss here can easily be extended to any finite number of finite order poles in the second sheet below the positive real axis. \begin{figure} \caption{The contours in the two-sheeted Riemann surface. (a) displays the contour ${\cal C}_-$.} \end{figure} The unitary S-matrix of a quasistationary state associated with an $r$-th order pole at $z_R=E_R-i\Gamma/2$ is represented in the lower half-plane of the second sheet by (\cite{bohm}, sect.~XVIII.6) \begin{eqnarray}\nonumber S_{\rm II}(\omega)&=&e^{2ir\,\arctan(\frac{\Gamma}{2(E_R-\omega)})}e^{2i\gamma(\omega)}=\left(\frac{\omega-E_R-i\Gamma/2}{\omega-(E_R-i\Gamma/2)}\right)^re^{2i\gamma(\omega)}\\ &=&\left(1+\frac{-i\Gamma}{\omega-(E_R-i\Gamma/2)}\right)^r e^{2i\gamma(\omega)}\;. \label{b16a} \end{eqnarray} Here, $\delta_R(\omega)=r\,\arctan(\frac{\Gamma}{2(E_R-\omega)})$ is the rapidly varying resonant part of the phase shift, and $\gamma(\omega)$ is the background phase shift, which is a slowly varying function of the complex energy $\omega$. We have restricted ourselves to the case that the $r$-th order pole is the only singularity of the S-matrix. Below we will mention how this can be generalized to the case of a finite number of finite order poles. (Note that phenomenologically only a finite number of first order poles have been established, but there is no theoretical reason that would prevent other isolated singularities on the second sheet below the real axis.)
For our calculations we have to write~(\ref{b16a}) in the form of a Laurent series: \begin{eqnarray}\label{b16b} S_{\rm II}(\omega)&=&\sum_{l=-r}^{+\infty} C_l(\omega-z_R)^l\\ &=&\frac{C_{-r}}{(\omega-z_R)^r}+\frac{C_{-r+1}}{(\omega-z_R)^{r-1}} +\dots+C_0+C_1(\omega-z_R)+\dots\nonumber \end{eqnarray} Therefore we expand the bracket in~(\ref{b16a}): \begin{eqnarray}\nonumber S_{\rm II}(\omega)&=&\left(\sum_{l=0}^r\begin{pmatrix}r\\l\end{pmatrix} \frac{(-i\Gamma)^l}{(\omega-z_R)^l}\right)e^{2i\gamma(\omega)}\\ &=&e^{2i\gamma(\omega)}+\sum_{l=1}^{r}\begin{pmatrix}r\\l\end{pmatrix} \frac{(-i\Gamma)^l}{(\omega-z_R)^l}e^{2i\gamma(\omega)} \label{b16c}\\ &=&e^{2i\delta_R(\omega)}e^{2i\gamma(\omega)}\;.\nonumber \end{eqnarray} We insert this into~(\ref{eq:outmatrix}) and deform the contour of integration through the cut along the spectrum of $H$ into the second sheet, as shown in fig.~2a. Then one obtains \begin{subeqnarray}\label{paul17}\label{bohm17} (\psi^-,\phi^+)&=&\int_{{\cal C}_-}\,d\omega \;\langle\psi^-|\omega^- \rangle\,S_{\rm II}(\omega)\,\langle^+\omega |\phi^+\rangle+\\ &&+\sum_{n=0}^{r-1}\oint_{\hookleftarrow} d\omega\, \langle\psi^-|\omega^-\rangle\,\frac{e^{2i\gamma(\omega)}a_{-n-1}} {(\omega-z_R)^{n+1}}\,\langle^+\omega|\phi^+\rangle\nonumber\\ &=&\int_0^{-\infty_{\rm II}}dE\,\langle\psi^-|E^-\rangle\,S_{\rm II}(E)\,\langle^+E|\phi^+\rangle\,+\,(\psi^-,\phi^+)_{\rm P.T.} \end{subeqnarray} In here, $\text{Im}\,\omega <0$ on the second sheet, and \begin{equation}\label{b17b} a_{-n-1}\equiv\begin{pmatrix}r\\n+1\end{pmatrix}(-i\Gamma)^{n+1}\;. \end{equation} The first integral does not depend on the pole and may be called a ``background term''. The contour ${\cal C}_-$ can be deformed into the negative axis of the second sheet from $0$ to $-\infty_{\rm II}$. We shall set this background integral aside for the moment.
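The binomial expansion of the bracket, and the resulting pole coefficients $a_{-n-1}=\binom{r}{n+1}(-i\Gamma)^{n+1}$, can be verified symbolically. The following sketch is ours (not part of the original derivation) and uses a sample pole order $r=3$; the variable $u$ stands for $\omega-z_R$:

```python
import sympy as sp

# Symbolic check of the binomial expansion and of the Laurent coefficients
# a_{-n-1} = binom(r, n+1) * (-i*Gamma)**(n+1) for a sample pole order r = 3.
u, Gamma = sp.symbols('u Gamma', positive=True)  # u stands for omega - z_R
r = 3

bracket = (1 + (-sp.I * Gamma) / u)**r           # bracket of the S-matrix formula
expansion = sum(sp.binomial(r, l) * (-sp.I * Gamma)**l / u**l
                for l in range(r + 1))           # term-by-term binomial expansion

diff = sp.expand(bracket - expansion)            # vanishes identically

# The coefficient of u**(-(n+1)) reproduces a_{-n-1}:
n = 1
a = sp.binomial(r, n + 1) * (-sp.I * Gamma)**(n + 1)
coeff = expansion.coeff(u, -(n + 1))
```

The same check goes through for any positive integer `r`, since it is just the finite binomial theorem applied to $(1-i\Gamma/u)^r$.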
For the second term on the right-hand side of (\ref{bohm17}), the higher order pole term $(\psi^-,\phi^+)_{\rm P.T.}$, we obtain, using the Cauchy integral formulas $\oint_{\hookrightarrow}\frac{f(\omega )}{(\omega -z_R)^{n+1}}\;d\omega=\frac{2\pi i}{n!}\left. f^{(n)}(z)\right|_{z=z_R}$ where $f^{(n)}(z)\equiv\frac{d^nf(z)}{dz^n}$: \begin{eqnarray}\label{bohm18} (\psi^-,\phi^+)_{\rm P.T.}&\equiv& \sum_{n=0}^{r-1}\oint_{\hookleftarrow} d\omega\,\langle\psi^-|\omega^-\rangle\,\frac{e^{2i\gamma(\omega)}a_{-n-1}} {(\omega -z_R )^{n+1}}\,\langle^+\omega |\phi^+ \rangle\\ &=& \sum_{n=0}^{r-1}\left(-\frac{2\pi i}{n!}\right)\,a_{-n-1} \Bigl(\langle \psi^-|\omega^-\rangle\; e^{2i\gamma(\omega)} \;\langle^+ \omega |\phi^+\rangle\Bigr)^{(n)}_{\omega =z_R}\nonumber \end{eqnarray} In here, $\left(\dots\right)^{(n)}_{\omega=z_R}$ means the $n$-th derivative with respect to $\omega$, taken at the value $\omega=z_R$. Since the kets $|\omega^-\rangle$ are (like the Dirac kets $|E^-\rangle$) only defined up to an arbitrary factor or, if their ``normalization'' is already fixed, up to a phase factor, we absorb the background phase $e^{2i\gamma(\omega)}$ into the kets $|\omega^-\rangle$ and define new vectors \begin{equation}\label{b18a} |\omega^\gamma\rangle\equiv|\omega^-\rangle e^{2i\gamma(\omega)}\;. \end{equation} (Note that the phase is not trivial, since e.g. $|E^-\rangle e^{2i\delta(E)}=|E^+\rangle$.) For the case that the slowly varying background phase $\gamma(\omega)$ is constant, the $|\omega^\gamma\rangle$ are, up to a totally trivial constant phase factor, identical with $|\omega^-\rangle$; but in general~(\ref{b18a}) is a non-trivial gauge transformation. For the case of a first order resonance pole, $n=r-1=0$ in~(\ref{bohm18}), the phase transformation~(\ref{b18a}) is also irrelevant, because for $n=0$ no derivatives are involved in~(\ref{bohm18}).
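The Cauchy formula invoked in this step, $\oint f(\omega)/(\omega-z_R)^{n+1}\,d\omega=\frac{2\pi i}{n!}f^{(n)}(z_R)$ for a positively oriented contour, can be checked numerically. The sketch below is a generic check with a test function of our choosing, not tied to the physical S-matrix:

```python
import numpy as np
from math import factorial

def contour_integral(f, z0, n, radius=0.5, num=4096):
    """Positively oriented circle integral of f(w)/(w - z0)**(n+1) around z0,
    evaluated with the trapezoid rule (spectrally accurate for periodic data)."""
    theta = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    w = z0 + radius * np.exp(1j * theta)
    dw = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / num)
    return np.sum(f(w) / (w - z0)**(n + 1) * dw)

z0 = 1.0 - 0.1j   # sample pole position, a stand-in for z_R
n = 2             # derivative order
lhs = contour_integral(np.exp, z0, n)
rhs = 2j * np.pi / factorial(n) * np.exp(z0)   # f = exp, so f^(n) = exp
```

For the negatively oriented contour $\oint_{\hookleftarrow}$ used in the pole term, the sign flips, which is the origin of the factor $-2\pi i/n!$ in the derivation above.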
Using the phase transformed vectors~(\ref{b18a}) we can proceed in the same way as if we were using the $|\omega^-\rangle$ with $\gamma(\omega)={\rm constant}$. Taking the derivatives, we rewrite~(\ref{bohm18}) as: \begin{equation}\label{bohmprime} \hspace{-.3in}(\psi^-,\phi^+)_{\rm P.T.}=\sum_{n=0}^{r-1}\left( -\frac{2\pi i}{n!}a_{-n-1}\right)\sum_{k=0}^n \begin{pmatrix}n\\k\end{pmatrix}\langle \psi^-| z_R^\gamma\rangle^{(k)}\,^{(n-k)}\langle^+z_R|\phi^+\rangle \end{equation} In here, we denote by $\langle\psi^-|z^\gamma\rangle^{(n)}$ the $n$-th derivative of the analytic function $\langle\psi^-|z^\gamma\rangle$, and by $\langle\psi^-|z^\gamma_R\rangle^{(n)}$ its value at $z=z_R$. Since $\langle\psi^- |E^-\rangle \in {\cal S}\cap{\cal H}^2_-$, it follows that $\langle\psi^- |z^-\rangle^{(n)}$ and $\langle\psi^- |z^\gamma\rangle^{(n)}$ are also analytic functions in the lower half-plane of the second sheet, whose boundary values on the positive real axis have the property $\langle\psi^- |E^\gamma\rangle^{(n)}\in{\cal S}\cap{\cal H}^2_-$. Analogously, we denote by $\,^{(n)}\langle^+ z|\phi^+\rangle$ the $n$-th derivative of the analytic function $\langle^+ z|\phi^+\rangle$. Again, $\,^{(n)}\langle^+ z|\phi^+\rangle$ is analytic in the lower half-plane, with its boundary value on the real axis being $\,^{(n)}\langle^+E|\phi^+\rangle\in{\cal S}\cap{\cal H}^2_-$. The case $r=1$ (and therefore $n=0$ and $k=0$ in~(\ref{bohmprime})) is the well-known case of the first order pole term, which led to the definition of the ordinary Gamow vectors for Breit-Wigner resonance states~\cite{bohm,bohm-group,gadella1983,bohm-gadella}. We shall review its properties in section~\ref{sec:summary}.
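The double sum over $n$ and $k$ arises solely from the Leibniz rule for the $n$-th derivative of a product, $(FG)^{(n)}=\sum_k\binom{n}{k}F^{(k)}G^{(n-k)}$. This can be confirmed symbolically; in the sketch below (ours), $F$ and $G$ are generic stand-ins for the two analytic wave functions:

```python
import sympy as sp

# Symbolic check of the Leibniz rule used to pass from the n-th derivative of
# the product of the two wave functions to the double sum over n and k.
w = sp.Symbol('omega')
F = sp.Function('F')(w)   # generic stand-in for the first analytic function
G = sp.Function('G')(w)   # generic stand-in for the second analytic function
n = 3                     # sample derivative order

direct = sp.diff(F * G, w, n)
leibniz = sum(sp.binomial(n, k) * sp.diff(F, w, k) * sp.diff(G, w, n - k)
              for k in range(n + 1))
residual = sp.simplify(sp.expand(direct) - sp.expand(leibniz))
```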
In section~\ref{sec:higher} we shall then discuss the general $r$-th order pole term and the generalized vectors $|z^\gamma_R\rangle^{(k)}$, $k=1,~2,\dots,r-1$. These vectors we call Gamow vectors of order $k$, or Gamow-Jordan vectors of degree $k+1$, for reasons that will become clear in section~\ref{sec:higher}. \section{Summary of the Case $\mathbf{r=1}$}\label{sec:summary} \setcounter{equation}{0} For the case $r=1$ we obtain from~(\ref{bohm18}) and~(\ref{bohmprime}): \begin{eqnarray}\nonumber (\psi^-,\phi^+)_{\rm P.T.} &=&\int^{+\infty}_{-\infty_{\rm II}}dE\;\langle\psi^-|E^-\rangle \langle^+E|\phi^+\rangle\frac{e^{2i\gamma(E)}\,a_{-1}} {E-(E_R-i\Gamma/2)}\\ \label{eq:regular}\label{paul19}\label{bohm19} &=&-2\pi ia_{-1}\langle\psi^-|z^-_R\rangle e^{2i\gamma(z_R)}\langle^+z_R|\phi^+\rangle\\ &=&-e^{2i\gamma(z_R)}2\pi\Gamma\;\langle\psi^-| z^-_R\rangle\langle^+z_R|\phi^+\rangle\;.\nonumber \end{eqnarray} The integral in~(\ref{bohm19}) is obtained from the integral in (\ref{bohm18}) by deforming the contour of integration into the real axis of the second sheet plus the infinite semicircle, and omitting in~(\ref{bohm18}) the integral over the infinite semicircle in the lower half-plane of the second sheet, because it is zero. Eq.~(\ref{eq:regular}) is a special case of the Titchmarsh theorem. The value at $z=z_R$ of the analytic function $\langle\psi^-|z^-\rangle\,e^{2i\gamma(z)}$ defines a continuous antilinear functional $F(\psi^-)\equiv\langle\psi^-| z^-_R\rangle\,e^{2i\gamma(z_R)}=\langle\psi^-|z^\gamma_R\rangle$ over the space $\Phi_+\ni\psi^-$, and this functional establishes the generalized vector $|z^\gamma_R\rangle=|z^-_R\rangle\,e^{2i\gamma(z_R)}\in\Phi^\times_+$.
We can rewrite (\ref{eq:regular}) by omitting the arbitrary $\psi^-\in{\Phi}_+$ and write it as an equation for the functional $|z^-_R\rangle \in\Phi^\times_+$, \begin{eqnarray}\label{eq:5.32}\label{paul20}\label{bohm20} |z^-_R\rangle &=& \frac{i}{2\pi}\int^{+\infty}_{-\infty_{\rm II}} \, dE\;|E^-\rangle\;\frac{\langle^+E|\phi^+\rangle}{\langle^+z_R|\phi^+\rangle }\; \frac{1}{E-(E_R-i\,\Gamma /2)}\\ \nonumber &=& -\frac{1}{2\pi i}\int^{+\infty}_{-\infty_{\rm II}} \,dE\;|E^-\rangle\;\frac{1}{E-z_R} \end{eqnarray} understood as an equation between functionals over all $\psi^-\in{\Phi}_+$. Or we can rewrite~(\ref{bohm19}) as an equation for the operator from ${\Phi}_-$ (preparation) to ${\Phi}_+$ (registration) by omitting the arbitrary $\psi^-\in\Phi_+$ and the arbitrary $\phi^+ \in{\Phi}_-$: \begin{equation}\label{eq:5.33} |z^-_R\rangle\langle^+z_R|=\frac{i}{2\pi} \int^{+\infty}_{-\infty_{\rm II}} dE\; {{|E^-\rangle\langle^+E|}\over {E-(E_R -i\,\Gamma /2)}}\;. \end{equation} The notation for the vectors $|z^-_R\rangle$ derives from the Cauchy theorem: $\langle\psi^-|z^-_R\rangle$ is the value of the function $\langle\psi^-|\omega^-\rangle$ at the position $\omega=z_R$. The definition (\ref{eq:5.32}) of the Gamow vector is, like (\ref{eq:regular}) and (\ref{eq:5.33}), just another example of the Titchmarsh theorem. From the above derivation one can see why we defined the Gamow vectors: they are the vectors associated with the pole term of the S-matrix element.
The ``normalization'' of the vectors $|z^-_R\rangle$ is a consequence of the ``normalization'' of the Dirac kets $|E^-\rangle$, and we can define Gamow vectors $\psi^G$ with arbitrary normalization and phase $N(z_R)$,
\begin{subeqnarray}\label{eq:5.34}\label{bohm22}
\psi^G = |z^-_R\rangle\;N(z_R)\;.\hspace{1.9cm}
\end{subeqnarray}
A normalization that we shall use here is
\begin{equation}\nonumber
\hspace{5.3cm}\psi^G\equiv |z^-_R\rangle\Bigl(-e^{2i\gamma(z_R)}\Bigr) \sqrt{2\pi\Gamma}\;.\hspace{5.2cm}(\ref{bohm22}{\rm b})
\end{equation}
The constant phase factor $-e^{2i\gamma(z_R)}$, which we introduced in~(\ref{bohm22}b), is arbitrary and of no significance here, and the ``normalization'' factor $\sqrt{2\pi\Gamma}$ is also a matter of convention (\cite{bohm}, Sect.~XXI.4). The notation $|z^-_R\rangle$ has a further meaning: It can be shown~\cite{bohm,bohm-gadella} that this vector is a generalized eigenvector of the self-adjoint~\cite{22a} Hamiltonian $H$ with eigenvalue $z_R=E_R-i\Gamma/2$:
\begin{equation}\label{bohm23}
\langle\psi^-|H^\times \psi^G\rangle\equiv\langle H\psi^-|\psi^G\rangle = z_R\;\langle\psi^-|\psi^G\rangle\,,\qquad\forall\, \psi^- \hspace{-.05in} \in{\Phi}_+\;,
\end{equation}
where $H^\times$ is the conjugate operator in $\Phi^\times$ of the operator $H$ in $\Phi$. This one writes as
\begin{equation}\label{bohm24}
H^\times\psi^G=z_R\;\psi^G\hspace{.3in}\qquad\text{or~also}\qquad H^\times|z^-_R\rangle=z_R\,|z^-_R\rangle
\end{equation}
or, following Dirac's notation $H|E^-\rangle=E|E^-\rangle$,
\begin{equation}\nonumber
H\psi^G=z_R\;\psi^G\hspace{.3in}\qquad\text{or~also}\qquad H|z^-_R\rangle=z_R\,|z^-_R\rangle
\end{equation}
if the operator $H$ is essentially self-adjoint.
If one takes the complex conjugate of (\ref{bohm23}) one obtains:
\begin{equation}\label{eq:5.34prime}
\langle\psi^G |\, H\, |\psi^-\rangle = \langle\psi^G|\psi^-\rangle \Bigl( E_R +i\, \frac{\Gamma}{2}\Bigr)
\end{equation}
which one can write in analogy to~(\ref{bohm24}) as
\begin{equation}\label{bohm26}
\langle \psi^G |\, H = z^\ast_R\, \langle \psi^G |\, \hspace{.3in} \text{or}\hspace{.3in} \langle^-z_R |\, H = z^\ast_R\, \langle^- z_R | \;.
\end{equation}
It has also been shown~\cite{bohm,bohm-gadella} that in the RHS~(\ref{bohm7}b) the time evolution is given by a semigroup operator
\begin{equation}\label{eq:5.88}
U^\times_+ (t)\equiv U(t)|^\times_{{\Phi}_+}\equiv\left(\left. e^{iHt}\right|_{{\Phi}_+}\right)^\times\equiv e^{-iH^\times t}_+ \;;\hspace{.25in}{\rm for~}t\ge0
\end{equation}
(A similar semigroup time evolution operator $e^{-iH^\times t}_-$, defined however only for $t\le 0$, also exists in the RHS (\ref{eq:rhsm}a) and has similar properties.) And it has been shown that this time evolution operator (\ref{eq:5.88}) acts on the Gamow vectors $\psi^G$ (or on the $|z^-_R\rangle\in\Phi^\times_+$) in the following way:
\begin{equation}\label{bohm28}
\langle\psi^-|e^{-iH^\times t}_+|z^-_R\rangle \equiv \langle e^{iHt}\psi^-|z^-_R\rangle =e^{-iE_R t}e^{-(\Gamma / 2)t}\, \langle\psi^-|z^-_R\rangle
\end{equation}
or, for the complex conjugate,
\begin{equation}\label{bohm29}
\hspace{-.5cm}\langle^-z_R|\,e^{iHt}\,|\psi^-\rangle=e^{iE_Rt}e^{-(\Gamma /2)t}\langle^-z_R|\psi^-\rangle\hspace{.25in} \begin{array}{l} {\rm for~every~}\psi^-\in{\Phi}_+\\{\rm and~for~}t\ge0\;.
\end{array}
\end{equation}
Omitting the arbitrary $\psi^-\in{\Phi}_+$, this is also written in analogy to~(\ref{bohm24}) as
\begin{subeqnarray}\label{eq:5.89prime}\label{paul30}\label{bohm30}
e^{-iH^\times t}_+\,\psi^G&=&e^{-iE_R t}e^{-(\Gamma /2)t}\,\psi^G\\
{\rm or}\hspace{5.1cm}&&\hspace{5.4cm}\hspace{3.8cm}\nonumber\\
\langle\psi^G |\,e^{iHt}&=&e^{+iE_R t}e^{-(\Gamma /2)t}\, \langle\psi^G|\hspace{.5in}{\rm for~}t\ge 0\;.
\end{subeqnarray}
One of the most important features of the Gamow vectors is that they are basis vectors of a basis system expansion. To explain this we start with the Dirac basis vector expansion (the Nuclear Spectral Theorem of the rigged Hilbert space), which states that
\begin{equation}\label{bohm31}
\hspace{-.15in}\phi = \int^{+\infty}_0\,dE\;|E^+\rangle \langle^+ E|\phi^+\rangle\;+\;\sum_m\;|E_m)(E_m|\phi) \hspace{.25in}{\rm for~every~}\phi\in{\Phi}\;.
\end{equation}
Here, $|E_m)$ are the discrete eigenvectors of the exact Hamiltonian $H=K+V$, describing the bound states, $H\, |E_m)= E_m\, |E_m)$, and $|E^+\rangle$ are the generalized eigenvectors (Dirac kets) of $H$, describing scattering states~\cite{fussnote}. The integration extends over the continuous spectrum of $H$: $0\le E <\infty$.
Instead of the basis vector expansion~(\ref{bohm31}), which uses Dirac kets that correspond to the (continuous) spectrum of $H$, one can use a basis system that contains Gamow vectors, and one obtains the so-called ``complex basis vector expansion'' which states: For every $\phi^+\in{\Phi}_-$ (a similar expansion holds also for every $\psi^-\in{\Phi}_+$), one obtains for the case of a finite number of first order (resonance) poles at the positions $z_{R_i}$, $i=1,\, 2,\, \dots ,N$, the following basis system expansion:
\begin{eqnarray}\label{eq:basisexpansion}\label{paul32}\label{bohm32}
\phi^+&=&\int^{-\infty_{\rm II}}_0\,dE|E^+\rangle \langle^+E|\phi^+\rangle\;-\;\sum^N_{i=1}|z^-_{R_i}\rangle 2\pi\Gamma_i\,e^{2i\gamma(z_{R_i})}\langle^+ z_{R_i}|\phi^+\rangle\\
&&+\sum_m|E_m)(E_m|\phi^+)\hspace{1in}\text{for }\phi^+\in{\Phi}_-\nonumber
\end{eqnarray}
where $-|z^-_{R_i}\rangle\sqrt{2\pi\Gamma_i}\,e^{2i\gamma(z_{R_i})}=\psi^{G_i} \in{\Phi}^\times_+$ are Gamow vectors representing decaying states. The first integral in~(\ref{paul32}) comes from the ``background term'' of equation (\ref{bohm17}). This background integral is omitted in the phenomenological theories with a complex effective Hamiltonian~\cite{lee, lee-oehme-yang}. The integration in~(\ref{paul32}) is along the negative real axis in the second sheet or along an equivalent contour. The third term will be absent if there is no bound state $|E_m)$, which we shall assume from now on. The expansion (\ref{eq:basisexpansion}) follows directly from (\ref{bohm17}) for $r=1$ (if one assumes that the S-matrix has no other singularities in the lower half-plane besides the $N$ first order poles at the positions $z_{R_i}$, which is a realistic assumption if one excludes higher order poles~\cite{nuss}).
The matrix representation of $H$ in the basis of (\ref{bohm31}) is given by:
\begin{align}\label{eq:5.61a}
\left( \begin{matrix} \langle\psi |\, H^\times \, |E_1) \\ \langle\psi |\, H^\times \, |E_2) \\ \,\vdots \\ \langle\psi |\, H^\times \, |E_N) \\ \langle \psi |\, H^\times \, |E^\pm\rangle \end{matrix} \right) = \begin{pmatrix} E_1 & 0 & \cdots &\cdots& 0 \\ 0 & E_2 &&& 0 \\ \vdots && \ddots && \vdots \\ \vdots &&& E_N & 0 \\ 0 & 0 & \cdots & 0 & (E) \end{pmatrix} \begin{pmatrix} \langle\psi |E_1 ) \\ \langle\psi |E_2 ) \\ \,\vdots \\ \langle\psi |E_N ) \\ \langle\psi |E^\pm\rangle \end{pmatrix} \\\nonumber 0\le E <+\infty
\end{align}
for all $\psi\in{\Phi}^{\times\times}={\Phi}$. In (\ref{eq:5.61a}), the operator $H^\times$ is represented by a finite or an infinite diagonal submatrix (for a finite or an infinite number of bound states) and a continuously infinite diagonal submatrix, indicated by $(E)$, where $E$ takes the values $0\le E<+\infty$.
If we consider only the case where there are no bound states (meaning we omit the submatrix of the $E_m$), then the matrix representation corresponding to the basis system expansion~(\ref{bohm31}) is simply given by the diagonal continuously infinite real energy matrix:
\begin{equation}\label{eq:5.61b}
\Bigl(\langle H\psi|E^\pm\rangle\Bigr) = \Bigl(\langle\psi|\,H^\times\,|E^\pm\rangle\Bigr) = \Bigl(E\Bigr)\Bigl(\langle\psi|E^\pm\rangle\Bigr)\,;\quad \psi\in{\Phi},\quad 0\le E<+\infty
\end{equation}
On the other hand, the complex basis vector expansion (\ref{eq:basisexpansion}) (again without bound states) leads to a matrix representation of the self-adjoint semibounded Hamiltonian $H$ in the following form:
\begin{align}\label{eq:5.62}
\hspace{-0.4in} \left(\begin{matrix} \langle H\psi^-|z^-_{R_1}\rangle \\ \langle H\psi^-|z^-_{R_2}\rangle \\ \vdots \\ \langle H\psi^-|z^-_{R_N}\rangle \\ \langle H\psi^-|E^-\rangle \end{matrix} \right) = \left(\begin{matrix} \langle\psi^-|\, H^\times \, |z^-_{R_1}\rangle \\ \langle\psi^-|\, H^\times \, |z^-_{R_2}\rangle \\ \vdots \\ \langle\psi^-|\, H^\times \, |z^-_{R_N}\rangle \\ \langle\psi^-|\, H^\times \, |E^-\rangle \end{matrix} \right) = \begin{pmatrix} z_{R_1} &&&& 0 \\ & z_{R_2 } &&& 0 \\ && \ddots & & \vdots \\ &&& z_{R_{N}} & 0 \\ 0 & 0 & \ldots & 0 & (E) \end{pmatrix} \begin{pmatrix} \langle\psi^- |z^-_{R_1} \rangle \\ \langle\psi^- |z^-_{R_2} \rangle \\ \,\vdots \\ \langle\psi^- |z^-_{R_N}\rangle \\ \langle\psi^- |E^-\rangle \end{pmatrix} \\\nonumber \psi^-\in{\Phi}_+\subset{\Phi} \quad -\infty_{\rm II} < E\le 0
\end{align}
The same Hamiltonian $H$ with $N$ resonances at $z_{R_i}$, $i=1,\, 2,\, \ldots, N$, can thus be represented either as a continuously infinite matrix (\ref{eq:5.61b}) in the basis of (\ref{bohm31}), or by (\ref{eq:5.62}) in the basis of (\ref{eq:basisexpansion}).
The latter alternative is of more practical importance if one wants to study the resonance properties and if one can make $\langle\psi^-|E^-\rangle$ small. The basis vector expansion~(\ref{paul32}) is an exact representation of $\phi^+\in\Phi_-$, and the matrix representation (\ref{eq:5.62}) is an exact representation of the self-adjoint Hamiltonian. In the phenomenological descriptions by complex effective Hamiltonians, one uses a truncation of~(\ref{paul32}) and~(\ref{eq:5.62}), omitting the background integral in~(\ref{paul32}) and the whole continuously infinite diagonal matrix $\bigl(E\bigr)$ (and sometimes even some of the $z_{R_i}$) in~(\ref{eq:5.62}). In this approximation one represents the Hamiltonian by the $N\times N$ dimensional diagonal complex submatrix in the upper left corner of (\ref{eq:5.62}). For example, if one considers only two resonances at $z_{R_1}=z_S \,, \;z_{R_2}=z_L$, one then has the complex energy matrix:
\begin{equation}\label{eq:5.63}
\left( \begin{matrix} \langle\psi^-|\, H^\times\, |z^-_S\rangle \\ \langle\psi^-|\, H^\times \, |z^-_L\rangle \end{matrix} \right) = \begin{pmatrix} z_S & 0 \\ 0 & z_L \end{pmatrix} \begin{pmatrix} \langle\psi^-|z^-_S\rangle \\ \langle\psi^-|z^-_L\rangle \end{pmatrix}
\end{equation}
This truncated matrix representation is only an approximation, corresponding to the approximation of omitting the integral in (\ref{eq:basisexpansion}). How good this approximation is depends upon the particular choice of the $\psi^-$ (or the choice of the $\phi^+$), but it can never be exact.
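A minimal numerical sketch of this truncation (our illustration only; the complex energies below are hypothetical numbers, not fitted resonance parameters): since the truncated matrix in~(\ref{eq:5.63}) is diagonal, the two resonance amplitudes evolve independently under $e^{-iH_{\rm eff}t}$, each decaying with its own rate $\Gamma_i$:

```python
import numpy as np
from scipy.linalg import expm

# hypothetical complex energies z = E_R - i*Gamma/2
z_S = 0.50 - 0.050j    # Gamma_S = 0.10
z_L = 0.50 - 0.005j    # Gamma_L = 0.01
H_eff = np.diag([z_S, z_L])      # truncated complex energy matrix, cf. (5.63)

t = 3.0
U = expm(-1j*H_eff*t)            # time evolution with the effective Hamiltonian

# diagonal H_eff => no mixing, and |U_kk| = exp(-Gamma_k * t / 2)
print(abs(U[0, 0]), np.exp(-0.050*t))
print(abs(U[1, 1]), np.exp(-0.005*t))
```

This reproduces the two independent exponential decay laws of the phenomenological two-resonance (Lee-Oehme-Yang type) description; what the truncation discards is precisely the background integral.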
\section{Higher Order Poles of the S-matrix and Gamow-Jordan Vectors} \label{sec:GJvectors}\label{sec:higher}
\setcounter{equation}{0}

We shall now discuss the possibility of extending the definition of one generalized eigenvector $|z^-_R\rangle^{(0)}$ to $r$ generalized eigenvectors of order {${n=0,~1,~2,\dots,r\!-\!1}$} for an S-matrix pole of order $r$.~\cite{eigenvector} The equations~(\ref{bohm18}) and~(\ref{bohmprime}) for the pole term are rewritten (omitting on the right-hand side the integral over the infinite semicircle in the lower half-plane of the second sheet) as
\begin{eqnarray}
\frac{i}{2\pi}(\psi^-,\phi^+)_{\rm P.T.}&=&\sum_{n=0}^{r-1} \frac{i}{2\pi}\int^{+\infty}_{-\infty_{\rm II}}dE\, \langle\psi^-|E^-\rangle\frac{e^{2i\gamma(E)}\,a_{-n-1}} {(E-z_R)^{n+1}}\langle^+ E|\phi^+\rangle\nonumber\\
&=&\sum_{n=0}^{r-1}\frac{1}{n!}\,a_{-n-1}\,\frac{d^n\;}{d\omega^n} \Bigl( \langle\psi^- |\omega^\gamma\rangle\langle^+ \omega|\phi^+\rangle \Bigr)_{\omega =z_R} \label{eq:41}\label{bohm37}\\
&=&\sum_{n=0}^{r-1}\frac{1}{n!}\,a_{-n-1}\sum^n_{k=0} \begin{pmatrix}n\\k\end{pmatrix}\langle\psi^- |z^\gamma_R \rangle^{(k)} \,^{(n-k)}\langle^+z_R|\phi^+\rangle\nonumber\\
&=&\sum_{n=0}^{r-1}\frac{i}{2\pi n!}\int^{+\infty}_{-\infty_{\rm II}}dE \frac{\left(\langle\psi^-|E^-\rangle e^{2i\gamma(E)}a_{-n-1} \langle^+E|\phi^+\rangle\right)^{(n)}}{E-z_R}\nonumber
\end{eqnarray}
Since $G_-(E)=\langle\psi^-|E^-\rangle\langle^+E|\phi^+\rangle\in{\cal S}\cap{\cal H}_-^1$, its $(n+1)$-st order derivatives are also elements of ${\cal S}\cap{\cal H}^1_-$, and~(\ref{bohm37}) is an application of the Titchmarsh theorem in two different versions, for $G_-(E)=\langle\psi^-|E^-\rangle\langle^+E|\phi^+\rangle$ and for $G_-(E)=(\langle\psi^-|E^-\rangle\langle^+E|\phi^+\rangle)^{(n)}$.
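The derivative formula used in the second line of~(\ref{bohm37}) is Cauchy's formula for higher order poles, $\frac{1}{2\pi i}\oint \frac{G(\omega)}{(\omega-z_R)^{n+1}}\,d\omega=\frac{1}{n!}\,G^{(n)}(z_R)$, and can be checked numerically. In this sketch (our illustration only; $G(\omega)=e^{\omega}$ is an arbitrary entire test function, not one of the functions of the rigged Hilbert space construction) we integrate over a small circle around $z_R$:

```python
import numpy as np
from math import factorial

z_R, n, r = 1.0 - 0.5j, 2, 0.3    # pole position, pole order n+1 = 3, contour radius
theta = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
w = z_R + r*np.exp(1j*theta)      # circle around z_R, traversed counterclockwise

G = np.exp                        # arbitrary analytic test function
# (1/2*pi*i) * contour integral of G(w)/(w - z_R)^(n+1); dw = i*r*e^{i theta} d(theta)
integrand = G(w)/(w - z_R)**(n+1) * 1j*r*np.exp(1j*theta)
val = integrand.mean()*2*np.pi/(2j*np.pi)

expected = G(z_R)/factorial(n)    # n-th Taylor coefficient: G^(n)(z_R)/n!
print(abs(val - expected))        # spectrally small for a uniform periodic grid
```

The uniform-grid mean is spectrally accurate for a periodic analytic integrand, so the contour integral reproduces $G^{(n)}(z_R)/n!$ essentially to machine precision.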
The value at $z=z_R$ of the analytic functions $\langle\psi^-|z^\gamma\rangle^{(k)}$ ($k$-th derivatives of the analytic function $\langle\psi^-|z^\gamma\rangle$) defines again a continuous antilinear functional $F^k(\psi^-)\equiv\langle\psi^-|z^\gamma_R\rangle^{(k)}$ over the space $\Phi_+\ni\psi^-$. The antilinearity follows from the linearity of the differentiation, $\left(\langle\alpha\psi^-_1+\beta\psi^-_2|z\rangle\right)^{(k)} =\alpha^*\langle\psi_1^-|z\rangle^{(k)}+\beta^*\langle \psi^-_2|z\rangle^{(k)}$. The continuity follows because taking the $k$-th derivative $D^k$ is a continuous operation with respect to the topology in the space ${\cal S}\cap{\cal H}^2_-\ni\langle\psi^-|E^\gamma\rangle$ and because $\langle\psi^-|z^\gamma_R\rangle$ is a continuous functional $F$. Thus $F^k\equiv D^k\circ F$ is the product of two continuous maps and therefore also continuous. The continuous functionals $\langle\psi^-|z^\gamma_R\rangle^{(k)}$ thus define the generalized vectors $|z^\gamma_R\rangle^{(k)}\in\Phi^\times_+$, $k=0,~1,\dots,~r-1$. The $r$-th order pole is therefore by~(\ref{bohm37}) associated with the set of $r$ generalized vectors
\begin{equation}\label{bohm38}
|z^\gamma_R\rangle^{(0)}\,,\;|z^\gamma_R\rangle^{(1)}\,,\;\cdots, |z^\gamma_R\rangle^{(k)}\,,\cdots,|z^\gamma_R\rangle^{(r-1)}\;.
\end{equation}
Of the different representations of the pole term on the right-hand side of~(\ref{bohm37}), we shall use in this paper only the second and third lines, and will come back to the integral representations when we discuss the Golden Rule for the higher order Gamow states.
We insert the values~(\ref{b17b}) of the coefficients $a_{-n-1}$ into~(\ref{eq:41}) and obtain
\begin{eqnarray}\label{bohm39}
(\psi^-,\phi^+)_{\rm P.T.} &=&\;-\sum_{n=0}^{r-1}\begin{pmatrix}\!r\!\\\!n\!+\!1\!\end{pmatrix} \frac{(-i)^n}{n!}\frac{d^n}{d(\omega/\Gamma)^n} \Bigl(\langle\psi^-|\omega^\gamma\rangle\,2\pi\Gamma\, \langle^+\omega| \phi^+\rangle\Bigr)_{\omega=z_R}\\
&=&\;-\sum_{n=0}^{r-1}\begin{pmatrix}\!r\!\\ \!n\!+\!1\!\end{pmatrix}\frac{(-i\Gamma)^n}{n!}2\pi\Gamma\sum^n_{k=0} \begin{pmatrix}n\\k\end{pmatrix}\langle\psi^-|z^\gamma_R\rangle^{(k)} \,^{(n-k)}\langle^+ z_R | \phi^+\rangle \nonumber
\end{eqnarray}
The generalized vectors~(\ref{bohm38}) all have different dimensions, namely $({\rm energy})^{-\frac12-k}$. If one uses the dimensionless variable $\omega/\Gamma$, as indicated in the first line of~(\ref{bohm39}), one is led to the new normalization of the generalized vectors
\begin{equation}\label{bohm40}
|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}=\frac{1}{k!}|z^\gamma_R\rangle^{(k)} \;\Gamma^k\hspace{.3in}{\rm and}\hspace{.3in} \,^{(l)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|=\Gamma^l\;^{(l)}\!\langle^+z_R|\frac{1}{l!}
\end{equation}
These vectors have for all values of $k=0,~1,~2,\dots,r\!-\!1\;$ the same dimension $({\rm energy})^{-\frac12}$, like the Dirac kets. We have in addition introduced the factor $1/k!\;$ so that these higher order Gamow vectors become Jordan vectors with the standard normalization. The quantity $\langle\psi^-|z^-{\!\,\text{{\Large $\succ$}}\!}^{(n)}\equiv\frac{\Gamma^n}{n!} \langle\psi^-|z^-\rangle^{(n)}$ is the value of the functional $|z^-{\!\,\text{{\Large $\succ$}}\!}^{(n)}\in\Phi^\times_+$ at $\psi^-\in\Phi_+$.
However, unlike $\langle\psi^-|z^-\rangle^{(n)}$, which is the $n$-th derivative of $\langle\psi^-|z^-\rangle\in{\cal S}\cap{\cal H}^2_+$, the $\langle\psi^-|z^-{\!\,\text{{\Large $\succ$}}\!}^{(n)}$ is not the $n$-th derivative of $\langle\psi^-|z^-{\!\,\text{{\Large $\succ$}}\!}^{(0)}$; the standard Jordan vectors $|z^-{\!\,\text{{\Large $\succ$}}\!}^{(k)}$ are connected with the ``derivatives'' $|z^-\rangle^{(k)}$ by~(\ref{bohm40}). Therefore, when we want to compare our results with the standard results in the theory of finite dimensional complex (non-diagonalizable) matrices~\cite{baumgartel-gantmacher-lancaster}, we need to convert from the $|z^-\rangle^{(k)}$ to the $|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}$. With the convention~(\ref{bohm40}) we obtain from~(\ref{bohm39})
\begin{eqnarray}\label{bohm41}
(\psi^-,\phi^+)_{\rm P.T.} &=&-\sum_{n=0}^{r-1}\begin{pmatrix}r\\n+1\end{pmatrix} (-i)^n(2\pi\Gamma)\sum^n_{k=0}\langle\psi^-|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)} \,^{(n-k)}{\!\text{{\Large $\prec$}}\!\,}^+ z_R | \phi^+\rangle\\
&=&-2\pi\Gamma\,\sum_{n=0}^{r-1}\begin{pmatrix}r\\n+1\end{pmatrix} (-i)^n\langle\psi^-|{W^\gamma}^{(n)}|\phi^+\rangle\nonumber
\end{eqnarray}
where we have defined the operator
\begin{equation}\label{bohm42}
\hspace{-.2in}W_{\rm P.T.}^{(n)}=\sum_{k=0}^n|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)} \,^{(n-k)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|\hspace{.2in}{\rm and}\hspace{.2in} W^{\gamma(n)}_{\rm P.T.}=\sum_{k=0}^n|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)} \,^{(n-k)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|\;.
\end{equation}
Here $W^{\gamma(n)}_{\rm P.T.}$ is just an abbreviation for the right-hand side of~(\ref{bohm42}), and in section~\ref{sec:possible} we will discuss its interpretation. Whereas $W^{\gamma(n)}_{\rm P.T.}$ depends also upon the background phase shifts through~(\ref{b18a}), $W_{\rm P.T.}^{(n)}$ is just given by the S-matrix pole.
We now return to the complete S-matrix element~(\ref{bohm17}) and insert the pole term~(\ref{bohm41}) into~(\ref{bohm17}b),
\begin{eqnarray}\label{bohm43}
(\psi^-,\phi^+)&=&\int_0^{-\infty_{\rm II}}dE\,\langle\psi^-|E^-\rangle\, S_{\rm II}(E)\,\langle^+E|\phi^+\rangle\\\nonumber
&&\!\!\!-\sum_{n=0}^{r-1} \begin{pmatrix}r\\\!n\!+\!1\!\end{pmatrix}(-i)^n2\pi\Gamma \sum_{k=0}^n\langle\psi^-|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)} \;^{(n-k)}{\!\text{{\Large $\prec$}}\!\,}^+\!z_R|\phi^+\rangle
\end{eqnarray}
Omitting the arbitrary $\psi^-\in\Phi_+$ and rearranging the sums in the second term, we obtain the complex basis vector expansion for an arbitrary $\phi^+\in\Phi_-$,
\begin{eqnarray}\label{bohm44}\label{eq:50}
\phi^+&=&\int_0^{-\infty_{\rm II}}dE|E^+\rangle\langle^+E| \phi^+\rangle\,+\\
&&+\,\sum_{k=0}^{r-1}|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)} \left((-2\pi\Gamma)\sum_{n=k}^{r-1} \begin{pmatrix}r\\n+1\end{pmatrix}(-i)^n \,^{(n-k)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|\phi^+\rangle\right)\nonumber
\end{eqnarray}
This complex basis vector expansion is the analogue of~(\ref{bohm32}) if, instead of $N$ S-matrix poles of order one (and bound states $|E_m)$), we have one S-matrix pole of order $r$. To compare~(\ref{bohm44}) with~(\ref{bohm32}), we write~(\ref{bohm32}) also for the case of one S-matrix pole of order one. Then, using the same phases~(\ref{bohm19}) as in~(\ref{bohm39}) and omitting all bound states and all resonances but one, we obtain for~(\ref{bohm32})
\begin{equation}\label{bohm45}
\phi^+=\int_0^{-\infty_{\rm II}}dE|E^+\rangle\langle^+E|\phi^+ \rangle-|z^\gamma_R\rangle\;2\pi\Gamma\langle^+z_R|\phi^+\rangle
\end{equation}
which agrees with what we obtain from~(\ref{bohm44}) for $r=1$.
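Indeed (our explicit check of this agreement): for $r=1$ only the term $n=k=0$ survives in the second sum of~(\ref{bohm44}), and with $\binom{1}{1}=1$, $(-i)^0=1$ and $^{(0)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|=\langle^+z_R|$ the coefficient reduces to

```latex
(-2\pi\Gamma)\binom{1}{1}(-i)^0\;^{(0)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|\phi^+\rangle
  \;=\;-2\pi\Gamma\,\langle^+z_R|\phi^+\rangle\;,
```

which is exactly the pole-term coefficient in~(\ref{bohm45}).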
Comparing~(\ref{bohm44}) with~(\ref{bohm45}) or~(\ref{bohm32}), we see the similarities and the differences: For a first order pole there is one generalized vector in the complex basis vector expansion; for an $r$-th order pole there are $r$ basis vectors in the complex basis vector expansion. Apart from the arbitrary phase-normalization factor $-2\pi\Gamma$, the coefficient of the first order Gamow vector, $|z^\gamma_R\rangle=|z^-_R\rangle\,e^{2i\gamma(z_R)}$, has the simple form $\langle^+z_R|\phi^+\rangle$, which resembles the component $\langle^+E|\phi^+\rangle$ of the vector $\phi^+$ along the basis vector $|E^+\rangle$. In contrast, the coefficients of the higher order Gamow vectors $|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}$ are given by the complicated expression
\begin{equation}\label{bohm46}
b_k=(-2\pi\Gamma)\sum_{n=k}^{r-1}\begin{pmatrix}r\\n+1 \end{pmatrix}(-i)^n\,^{(n-k)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|\phi^+\rangle\;.
\end{equation}
The difference between~(\ref{bohm44}) and~(\ref{bohm32}) also foretells that the role of dyadic products like $\left.|E_n\right)\left(E_n|\right.$ (or also of $|z^-_R\rangle\langle^-z_R|$), which have been prominently used for pure states, will probably be unimportant for states associated with higher order poles. In section~\ref{sec:possible}, we will see that for higher order Gamow states there is no meaning to being pure.
Since the general expressions~(\ref{bohm44}) and~(\ref{bohm43}) are not very transparent, we now specialize them to the case of a double pole, $r=2$:
\begin{subeqnarray}\nonumber
\phi^+&=&\int_0^{-\infty_{\rm II}}dE|E^+\rangle \langle^+E|\phi^+\rangle +\\\label{bohm47}\label{4.11a}
&&-|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(0)}2\pi\Gamma \Bigl(2\;^{(0)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|\phi^+\rangle-i \;\;^{(1)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|\phi^+\rangle\Bigr)\\
&&+|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(1)}2\pi\Gamma i\;\;^{(0)}{\!\text{{\Large $\prec$}}\!\,}^+z_R|\phi^+\rangle\nonumber
\end{subeqnarray}
If, as a generalization of~(\ref{bohm22}b), we define the differently normalized Gamow vectors
\begin{subeqnarray}\label{4.12a}
\psi^{G(k)}=(-1)^{k+1}|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}\!^{(k)}\sqrt{2\pi\Gamma} =(-1)^{k+1}\frac{\Gamma^k}{k!}|z_R^\gamma\rangle^{(k)}\sqrt{2\pi\Gamma}
\end{subeqnarray}
then the basis vector expansion~(\ref{bohm47}a) for the case $r=2$ reads
\begin{eqnarray}\nonumber
\hspace{3.2cm}\phi^+&=&\int_0^{-\infty_{\rm II}}dE|E^+\rangle \langle^+E|\phi^+\rangle+\\
&&+\psi^{G(0)}\sqrt{2\pi\Gamma}\Bigl(-2\;^{(0)}\!\langle^+z_R |\phi^+\rangle+(-i)\;^{(1)}\!\langle^+z_R|\phi^+\rangle\Bigr) \hspace{1.8cm}(\ref{bohm47}{\rm b})\nonumber\\
&&+\psi^{G(1)}\sqrt{2\pi\Gamma}\;i\;^{(0)}\!\langle^+z_R |\phi^+\rangle\;.\nonumber
\end{eqnarray}
Note that according to~(\ref{4.12a}a) and~(\ref{b18a}) we have
\begin{equation}\nonumber
\hspace{3.2cm}\psi^{G(1)}=\Gamma\Bigl(|z^-_R\rangle^{(1)} +|z^-_R\rangle 2i\gamma'(z_R)\Bigr)e^{2i\gamma(z_R)} \sqrt{2\pi\Gamma}\hspace{3.8cm}(\ref{4.12a}{\rm b})
\end{equation}
and only for a constant background phase shift, $\gamma^{(n)}(z)=0$ for $n=~1,~2,\dots$, is $\psi^{G(1)}$ (or $\psi^{G(k)}$) given by $|z^-_R\rangle^{(1)}$ (or $|z^-_R\rangle^{(k)}$) alone.
One can insert~(\ref{4.12a}b) into~(\ref{bohm47}b) and expand $\phi^+$ in terms of the basis vectors $|z^-_R\rangle$ and $|z^-_R\rangle^{(1)}$; the same procedure can be repeated for arbitrary $k$ to express $\phi^+$ in~(\ref{bohm44}) in terms of
\begin{equation}\label{4.13}
|z^-_R\rangle^{(0)}\,,\;|z^-_R\rangle^{(1)}\,,\;\cdots, |z^-_R\rangle^{(k)}\,,\cdots,|z^-_R\rangle^{(r-1)}\;.
\end{equation}
Whether the phase convention in the definition~(\ref{4.12a}) will turn out to be convenient cannot be said at this stage. The basis vector expansion can be generalized in a straightforward way to the case of an arbitrary finite number of poles at the positions {${z_{R_i}, ~~i=1,2,\dots,N}$} of arbitrary finite order $r_i$, in the same way as it was done in~(\ref{bohm32}) for $r_i=1$. This complex generalized basis vector expansion is the most important result of our irreversible quantum theory (as is the Dirac basis vector expansion for reversible quantum mechanics). \noindent It shows that the generalized vectors~(\ref{bohm38}) (functionals over the space $\Phi_+$) are part of a basis system for the $\phi^+\in\Phi_-$ and form, together with the kets $|E^+\rangle,~~-\infty_{\rm II}<E\leq0$, a complete basis system. The vectors~(\ref{bohm38}) span a linear subspace ${\cal M}_{z_R}\subset\Phi^\times_+$ of dimension $r$:
\begin{equation}\label{bohm48}
{\cal M}_{z_R} = \Biggl\{ \; \xi \; {\Bigg|} \; \xi =\sum^{r-1}_{k=0}|z^-_R \rangle^{(k)} c_k\, , \; c_k\in{\rm {l\!\!\! {C}}}\, \Biggr\} \subset{\Phi}^\times_+
\end{equation}
If there are $N$ poles at $z_{R_i}$ of order $r_i$, then for every pole there is a linear subspace ${\cal M}_{z_{R_i}}\subset\Phi^\times_+$. Since the generalization to $N$ poles of order $r_i$ at energy $z_{R_i}$ is straightforward, we continue our discussion for the case of one pole of order $r$.
Note that by the procedure described in this section a new label $k$ was introduced for the basis vectors in the expansion~(\ref{bohm44}), $|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}=|z_R,b_2,b_3,\dots,b_{\cal N}^-{\!\,\text{{\Large $\succ$}}\!}^{(k)}$. Usually basis vector labels are quantum numbers associated with eigenvalues of a complete system of commuting observables. That means that if in addition to $H$ there are the ${\cal N}-1$ operators $B_2,B_3,\dots,B_{\cal N}$ with eigenvalues $\{b_2,b_3,\dots,b_{\cal N}\}\equiv\{b\}={\rm spectrum}(B_2,B_3,\dots,B_{\cal N})$, then the Dirac kets are labelled by $|E,b^-\rangle=|E,b_2,b_3,\dots,b_{\cal N}^- \rangle$ and in addition to the sum and integral in~(\ref{bohm31}) and~(\ref{bohm44}), there is a sum and/or an integral over all the values of the degeneracy quantum numbers $b_2,b_3,\dots,b_{\cal N}$, which we suppress here for the sake of simplicity. The label $k$ of the higher order Gamow vectors $|z_R,b^-\rangle^{(k)}$, which has appeared in~(\ref{bohm44}), is not associated with a conventional quantum number and is there in addition to the labels $b$ connected with the eigenvalues of the set of commuting observables $B_2,B_3,\dots,B_{\cal N}$. The quantum numbers $z_R,b_2,b_3,\dots,b_{\cal N}$ can be observed and have an experimentally defined physical meaning. It is not clear that the label $k$ will have a similar physical interpretation. This means that (if a higher order S-matrix pole has at all a physical meaning) the different vectors $|z^-_R\rangle^{(k)}$ in the subspace ${\cal M}_{z_R}$ have no separate physical meaning (unless $k$ can be given a physical interpretation). 
Now that~(\ref{bohm44}) has established the generalized vectors~(\ref{bohm38}), or the generalized vectors~(\ref{4.13}), as members of a basis system (together with the $|E^+\rangle;~~0\geq E>-\infty_{\rm II}$) in $\Phi^\times_+$, we can obtain the action of the operator $H$ by the action of the operator $H^\times$ on these basis vectors; and we can write the operator $H$ in terms of its matrix elements with these basis vectors. This can also be done in the same way for any of the operators $f^*(H)$, where $f(z)$ is any holomorphic function such that
\begin{equation}\label{bohm49}
f^\ast (H) :{\Phi}_+\longrightarrow {\Phi}_+ \hspace{.3in} \text{is a $\tau_{{\Phi}_+}$-continuous operator}
\end{equation}
(e.g., $f^\ast (H) = e^{iHt}\, ,\; f(H^\times )= e^{-iH^\times t}_+$ for the real parameter $t\ge 0$ only, since for $t<0$, $f^*(H)=e^{iHt}$ is not a continuous operator in $\Phi_+$). For this purpose we replace the arbitrary $\psi^-\in\Phi_+$ in~(\ref{bohm39}) by $\tilde{\psi}^-=f^*(H)\psi^-$, which is again an element of $\Phi_+$ because $f^*(H)$ is a continuous operator in $\Phi_+$ (by assumption~(\ref{bohm49})). Then we obtain, by comparing powers of~$\Gamma$:
\begin{eqnarray}\nonumber
\sum_{k=0}^n\begin{pmatrix}n\\k\end{pmatrix}\langle f^*(H) \psi^-|z^\gamma_R\rangle^{(k)}\;^{(n-k)}\langle^+z_R| \phi^+\rangle&=&\frac{d^n}{d\omega^n}\left(f(\omega) \langle\psi^-|\omega^-\rangle e^{2i\gamma(\omega)} \langle^+\omega|\phi^+\rangle\right)_{\omega=z_R}\\
&&\hspace{1.75cm}n=0,1,\dots,r-1\;,\label{b50}
\end{eqnarray}
where we have used~(\ref{b18a}) and
\begin{equation}\label{b51}
\langle f^*(H)\psi^-|\omega\rangle=\langle\psi^-|f(H^\times)|\omega^-\rangle =f(\omega)\langle\psi^-|\omega^-\rangle
\end{equation}
which follows from~(\ref{bohm49}).
The function
\begin{equation}\label{b52}
G(z)\equiv f(z)\langle\psi^-|z^-\rangle\langle^+z|\phi^+ \rangle e^{2i\gamma(z)}
\end{equation}
is an element of ${\cal S}\cap{\cal H}^2_-$, since $\langle\psi^-|z^-\rangle\langle^+z|\phi^+\rangle\in{\cal S}\cap{\cal H}^2_-$ and $e^{2i\gamma(z)}$ as well as $f(z)$ are holomorphic. Therefore we can take the derivatives $G(z)^{(n)}$ of any order
\begin{equation}\label{b53}
G(z)^{(n)}=\sum_{k=0}^n\begin{pmatrix}n\\k\end{pmatrix} \left(f(z)\langle\psi^-|z^\gamma\rangle\right)^{(k)} \;^{(n-k)}\langle^+z|\phi^+\rangle\;.
\end{equation}
Inserting this into~(\ref{b50}) we obtain:
\begin{eqnarray}\nonumber
&&\hspace{-.3in}\sum^n_{k=0}\left(\langle\psi^-|f(H)| z^\gamma_R\rangle^{(k)}-\left( f(z)\langle\psi^-| z^\gamma\rangle\right)^{(k)}_{z=z_R}\right) \begin{pmatrix}n\\k\end{pmatrix}\,^{(n-k)} \langle^+ z_R|\phi^+\rangle=0\\
\label{b54} &&\hspace{2.8in}n=0,1,\dots,r-1
\end{eqnarray}
Since this has to hold for every $\phi^+\in{\Phi}_-$ (i.e., for every $\langle^+ E|\phi^+\rangle \in{\cal S}\cap{\cal H}^2_-$), it follows that the coefficients of each derivative $\,^{(n-k)}\langle^+ z_R |\phi^+ \rangle = \frac{d^{(n-k)}\,}{dz^{(n-k)}} \left. \langle^+ z |\phi^+\rangle \right|_{z=z_R}$ must vanish.
Thus,
\begin{equation}\label{b55}
\langle\psi^-|\,f(H^\times)\,|z^\gamma_R\rangle^{(k)} =\Bigl(f(z)\langle\psi^-|z^\gamma\rangle\Bigr)^{(k)}_{z=z_R} \hspace{.3in}\begin{array}{l}\text{for $k=0,\, 1,\, 2,\dots,r-1$}\\ \text{and all $\psi^-\in{\Phi}_+$\;.}\end{array}
\end{equation}
By a similar argument, just comparing the coefficients of $(e^{2i\gamma(z)}\langle^+z|\phi^+\rangle)^{(n-k)}_{z=z_R}$ rather than of $(\langle^+z|\phi^+\rangle)^{(n-k)}_{z=z_R}$, one can show that the same equation holds for the $|z^-_R\rangle^{(k)}$ (for any sufficiently well-behaved function $\gamma(z)$):
\begin{equation}\label{b55'}
\langle\psi^-|\,f(H^\times)\,|z^-_R\rangle^{(k)} =\Bigl(f(z)\langle\psi^-|z^-\rangle\Bigr)^{(k)}_{z=z_R}
\end{equation}
This permits us to calculate the action of $f(H^\times )$ on the generalized vectors $|z^-_R\rangle^{(k)}\in{\Phi}^\times_+$ for every $f(H^\times )$ that fulfills the condition (\ref{bohm49}). The same calculation applies to the generalized vectors $|z^\gamma_R\rangle^{(k)}$ due to~(\ref{b55}). Therefore we write the following equations for $|z^\gamma_R\rangle^{(k)}$, though the same holds for $|z^-_R\rangle^{(k)}$. We first choose $f(H^\times )= H^\times$; then we obtain
\begin{equation}\label{b56}
\langle H\psi^-|z^\gamma_R\rangle^{(k)}\equiv\langle\psi^-|H^\times |z^\gamma_R\rangle^{(k)}=z_R\langle\psi^-|z^\gamma_R\rangle^{(k)}+ \begin{pmatrix}k\\1\end{pmatrix}\langle\psi^-|z^\gamma_R\rangle^{(k-1)}
\end{equation}
which can also be written as a functional equation over ${\Phi}_+$ as
\begin{equation}\label{b57}
H^\times |z^\gamma_R\rangle^{(k)}=z_R|z^\gamma_R\rangle^{(k)} +k|z^\gamma_R\rangle^{(k-1)};\hspace{.3in}k=0,1,\dots,r-1\;.
\end{equation} If we use the normalization of the basis vectors defined in~(\ref{bohm40}), and write~(\ref{b57}) out in detail, then we obtain \begin{align} H^\times |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)} &= z_R\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)} \nonumber \\ H^\times |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)} &= z_R\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)} +\Gamma\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)} \nonumber \\ &~~\vdots \label{b58}\\ H^\times |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)} &= z_R\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)} +\Gamma\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k-1)} \nonumber \\ &~~\vdots \nonumber \\ H^\times |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(r-1)} &= z_R\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(r-1)} +\Gamma\, |z^-_R {\!\,\text{{\Large $\succ$}}\!}^{(r-2)} \; .\nonumber \end{align} (and the same for $|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}$). This means that $H^\times$ restricted to the subspace ${\cal M}_{z_R}$ is a Jordan operator of degree $r$ (in the standard notation the operator $\frac{1}{\Gamma}H^\times$ is the Jordan operator of degree $r$), and the vectors $|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)},~k=0,~1,~2,\dots,~r-1$ are Jordan vectors of degree $k+1$~\cite{baumgartel-gantmacher-lancaster}. They fulfill the generalized eigenvector equation~\cite{eigenvector} \begin{equation}\nonumber \hspace{5.6cm}(H^\times -z_R)^{k+1}|z_R^-{\!\,\text{{\Large $\succ$}}\!}^{(k)} =0\;.\hspace{4.8cm} (\ref{b58}') \end{equation} We write the equations~(\ref{b58}) again in the form~(\ref{b56}) and arrange them as a matrix equation.
Since the basis system includes, according to~(\ref{bohm44}), in addition to the $|z_R^\gamma{\!\,\text{{\Large $\succ$}}\!}^{(k)}$, $k=0,1,2,\dots,r-1$, also the $|E^-\rangle,~-\infty_{\rm II}<E\leq0$, we indicate this by a continuously infinite diagonal matrix equation which we write as: \begin{equation}\label{b59} \Bigl(\langle H\psi^-|E^-\rangle\Bigr)= \Bigl(\langle\psi^-|H|E^-\rangle\Bigr)= \Bigl(E\Bigr)\Bigl(\langle\psi^-|E^-\rangle\Bigr) \end{equation} where $\left(\langle\psi^-|E^-\rangle\right)$ indicates a continuously infinite column matrix. Then~(\ref{b58}) and~(\ref{b59}) together can be written in analogy to~(\ref{eq:5.62}) as: \begin{eqnarray}\nonumber \left( \begin{matrix} \langle H\psi^-|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)}\\ \langle H\psi^-|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)} \\ \;\vdots\\ \;\vdots\\ \langle H\psi^-|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(r-1)} \\ \langle H\psi^-|E^- \rangle \end{matrix} \right) &=&\left( \begin{matrix} \langle\psi^-|\, H^\times \, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)} \\ \langle\psi^-|\, H^\times \, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)} \\ \;\vdots\\ \;\vdots\\ \langle\psi^-|\, H^\times \, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(r-1)} \\ \langle\psi^-|\, H^\times \, |E^- \rangle \end{matrix} \right)\\ &&\label{eq:54}\label{paul49}\label{b60}\\ \nonumber &=&\begin{pmatrix} z_R &0&0&\ldots&0&0\\ \Gamma&z_R &0&\ldots&0&\\ 0&\Gamma&z_R&\ldots&0&\vdots\\ \vdots&\vdots&\ddots &\ddots&\vdots&\\ 0&0&\ldots&\Gamma&z_R &0\\ 0&&\ldots&&0&\left( E\right) \end{pmatrix} \begin{pmatrix} \langle\psi^-|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)}\\ \langle\psi^-|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)}\\ \;\vdots\\ \;\vdots\\ \langle\psi^- |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(r-1)}\\ \langle\psi^- |E^-\rangle \end{pmatrix}\hspace{.5in} \end{eqnarray} In this matrix representation of $H^\times$, the upper left
$r\times r$ submatrix associated with the complex eigenvalue $z_R$ is a (lower) Jordan block of degree $r$. We have chosen the Jordan vectors with the normalization of~(\ref{bohm40}) in order to obtain the Jordan block in a form closest to the standard form, but with $\Gamma$'s in place of $1$'s on the subdiagonal. It is instructive also to write down the adjoint (i.e., transposed complex conjugate) of the matrix equation~(\ref{b60}), because it will clarify the notation and display the upper Jordan block. Taking the transpose and complex conjugate of~(\ref{b60}) we obtain: \begin{eqnarray} &&\Bigl(\,^{(0)}{\!\text{{\Large $\prec$}}\!\,}^-z_R|H|\psi^-\rangle, \dots, \,^{(r-1)}{\!\text{{\Large $\prec$}}\!\,}^-z_R|H|\psi^-\rangle, \,\langle^-E|H|\psi^-\rangle\Bigr)=\\ &&\Bigl(\,^{(0)}{\!\text{{\Large $\prec$}}\!\,}^-z_R|\psi^-\rangle, \dots, \,^{(r-1)}{\!\text{{\Large $\prec$}}\!\,}^-z_R|\psi^-\rangle, \,\langle^-E|\psi^-\rangle\Bigr) \left(\begin{array}{cccccc} z^*_R&\Gamma&0&\cdots&0&0\\ 0&z^*_R&\Gamma&&0&\\ 0&0&z^*_R&\ddots&\vdots&\vdots\\ \vdots&\vdots&&\ddots&\Gamma&\\ 0&0&0&\cdots&z^*_R&0\\ 0&&\cdots&&0&(E) \end{array}\right)\nonumber \end{eqnarray} With the derivation of~(\ref{bohm44}) and~(\ref{b60}) we have reduced the problem of finding the vectors (and their properties) associated with the higher order poles of the S-matrix to the spectral theory of finite dimensional (non-normal) complex matrices, which is well documented in the mathematical literature~\cite{baumgartel-gantmacher-lancaster}. If in addition to the $r$-th order pole at $z_R$ there are other $r_i$-th order poles at $z_{R_i}$, then for each of these poles we have to add another Jordan block of degree $r_i$ to the matrix in~(\ref{b60}). \noindent We could now refer for further results to the mathematical literature on $r\times r$ complex matrices, but we can also obtain these results easily from~(\ref{b55}) and~(\ref{b55'}).
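This finite dimensional reduction is easy to probe numerically. The following sketch (not part of the original derivation; $r$, $E_R$ and $\Gamma$ are arbitrary illustrative values) identifies the Jordan vector $|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}$ with the unit vector $e_k$, so that~(\ref{b58}) reads $He_k=z_Re_k+\Gamma e_{k-1}$ (the transpose of the lower block in~(\ref{b60})), and checks the generalized eigenvector equation~(\ref{b58}$'$).

```python
import numpy as np

# Illustrative values only: r-th order pole at z_R = E_R - i*Gamma/2.
r, E_R, Gamma = 4, 2.0, 0.3
z_R = E_R - 1j * Gamma / 2

# H e_k = z_R e_k + Gamma e_{k-1}  (with e_{-1} := 0), i.e. eq. (b58) in coordinates.
H = z_R * np.eye(r, dtype=complex) + Gamma * np.diag(np.ones(r - 1), k=1)
N = H - z_R * np.eye(r)          # nilpotent part of the Jordan block

for k in range(r):
    e_k = np.eye(r, dtype=complex)[k]
    # (H - z_R)^{k+1} annihilates the Jordan vector of degree k+1 ...
    assert np.allclose(np.linalg.matrix_power(N, k + 1) @ e_k, 0)
    # ... while for k >= 1 it is not an ordinary eigenvector.
    if k >= 1:
        assert not np.allclose(N @ e_k, 0)
```

Only $e_0$ is an ordinary eigenvector; the higher $e_k$ are generalized eigenvectors of degree $k+1$, exactly as stated for the $|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}$.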
Applying to the right-hand side of~(\ref{b55}) the Leibniz rule we obtain \begin{equation}\label{b61} \langle\psi^-|\, f(H^\times )\, |z^\gamma_R\rangle^{(k)}= \sum^{k}_{\nu=0}\begin{pmatrix}k\\\nu\end{pmatrix}\Bigl[ f^{(\nu)}(z) \left(\langle\psi^- |z^\gamma\rangle\right)^{(k-\nu)}\Bigr]_{z=z_R} \end{equation} where $f^{(\nu)}(z)$ is the $\nu$-th derivative of the holomorphic function $f(z)$ with respect to $z$ and $\langle\psi^-|z^\gamma\rangle^{(k-\nu)}\equiv \left(\langle\psi^- |z^\gamma\rangle\right)^{(k-\nu)}$ is the $(k-\nu)$-th derivative of $\langle\psi^- |z^\gamma\rangle$. We now insert~(\ref{bohm40}) on both sides of~(\ref{b61}) and obtain: \begin{equation}\label{b62} \frac{k!}{\Gamma^k}\langle\psi^-|\, f(H^\times )\, |z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}=\sum^{k}_{\nu=0} \frac{k!}{\nu!(k-\nu)!}f^{(\nu)}\!(z_R)\;\langle\psi^- |z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k-\nu)}\frac{(k-\nu)!}{\Gamma^{k-\nu}} \end{equation} From this we obtain \begin{equation}\label{b63a} \langle\psi^-|\, f(H^\times )\,|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)} \;=\;\sum^{k}_{\nu=0}\frac{\Gamma^\nu}{\nu!}\;f^{(\nu)}(z_R)\; \langle\psi^-|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k-\nu)} \end{equation} or as a functional equation: \begin{equation}\label{b63b} f(H^\times )\,|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}=\sum^{k}_{\nu=0}\frac{\Gamma^\nu} {\nu!}f^{(\nu)}(z_R)|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k-\nu)} \end{equation} (Note in this calculation that $\langle\psi^-|z^-{\!\,\text{{\Large $\succ$}}\!}^{(n)}$ is {\em not} the {$n$-th} derivative of $\langle\psi^-|z^-{\!\,\text{{\Large $\succ$}}\!}^{(0)}$, whereas $\langle\psi^-|z^-\rangle^{(n)}$ is the $n$-th derivative of $\langle\psi^-|z^-\rangle\in{{\cal S}\cap{\cal H}_+^2}$.
Therefore it is better to work with the $|z^-\rangle^{(k)}$ than with the $|z^-{\!\,\text{{\Large $\succ$}}\!}^{(k)}$.) In the theory of finite dimensional Jordan operators~\cite{baumgartel-gantmacher-lancaster}, the equality~(\ref{b63b}) is often called the Lagrange-Sylvester formula and is written as a matrix equation (using lower Jordan blocks for $H^\times$ as in~(\ref{b60})): \begin{align}\label{b64} &\begin{pmatrix} \langle\psi^-|\, f(H^\times )\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)} \\ \langle\psi^-|\, f(H^\times )\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)} \\ \quad\vdots \\ \quad\vdots\\ \langle\psi^-|\, f(H^\times )\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(r-1)} \\ \end{pmatrix} = \\ &=\begin{pmatrix} f(z) &0&\ldots&\ldots&0 \\ \frac{\Gamma}{1!}f^{(1)}(z)&f(z)&0&\ldots&0\\ \frac{\Gamma^2}{2!}f^{(2)}(z)& \frac{\Gamma}{1!}f^{(1)}(z)&\ddots&\ddots&\vdots \\ \vdots&\vdots&\ddots&f(z)&0 \\ \frac{\Gamma^{r-1}}{(r-1)!}f^{(r-1)}(z)& \frac{\Gamma^{r-2}}{(r-2)!}f^{(r-2)}(z)& \ldots&\frac{\Gamma}{1!}f^{(1)}(z)&f(z) \end{pmatrix}_{\!\!z=z_R} \begin{pmatrix} \langle\psi^-|z^-_R {\!\,\text{{\Large $\succ$}}\!}^{(0)} \\ \langle\psi^-|z^-_R {\!\,\text{{\Large $\succ$}}\!}^{(1)} \\ \quad \vdots \\ \quad \vdots \\ \langle\psi^-|z^-_R {\!\,\text{{\Large $\succ$}}\!}^{(r-1)} \\ \end{pmatrix}\nonumber \end{align} The $r\times r$ submatrix equation of~(\ref{b60}) is a special case of this for $f(H^\times)=H^\times$. Equation~(\ref{b64}) is not the complete matrix representation of $f(H^\times)$, because the infinite diagonal submatrix due to the first term in~(\ref{bohm44}), \begin{equation}\label{b65} \Bigl(\langle f^*(H)\psi^-|E^+\rangle\Bigr)=\Bigl(f(E)\Bigr)\Bigl( \langle\psi^-|E^+\rangle\Bigr),\hspace{.3in}-\infty_{\rm II}<E\leq 0\;, \end{equation} has been omitted.
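The Lagrange-Sylvester matrix of~(\ref{b64}) can be cross-checked against a directly computed matrix function. The sketch below (illustrative values only, not from the original) uses the polynomial $f(z)=z^3$, for which $f$ of the Jordan block is an ordinary matrix power, and verifies the entries $\frac{\Gamma^\nu}{\nu!}f^{(\nu)}(z_R)$.

```python
import numpy as np
from math import factorial

# Jordan block of (b60)/(b64) acting on ket coordinates: H e_k = z_R e_k + Gamma e_{k-1}.
r, Gamma = 4, 0.3
z_R = 2.0 - 1j * Gamma / 2
shift = np.diag(np.ones(r - 1), k=1)               # nilpotent shift e_k -> e_{k-1}
J = z_R * np.eye(r, dtype=complex) + Gamma * shift

# f(z) = z^3; its derivatives f^(nu)(z_R) for nu = 0..3 vanish beyond nu = 3.
derivs = [z_R**3, 3 * z_R**2, 6 * z_R, 6.0] + [0.0] * max(0, r - 4)

# Lagrange-Sylvester formula (b63b)/(b64): f(J) = sum_nu Gamma^nu/nu! f^(nu)(z_R) shift^nu.
f_LS = sum((Gamma**nu / factorial(nu)) * derivs[nu]
           * np.linalg.matrix_power(shift, nu) for nu in range(r))

assert np.allclose(f_LS, np.linalg.matrix_power(J, 3))
```

The agreement is exact here because $J=z_RI+\Gamma\,\mathrm{shift}$ with a nilpotent shift, so the Taylor expansion of $f$ around $z_R$ terminates after $r$ terms.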
Equation~(\ref{b64}) gives the restriction of $f(H^\times)$ to the $r$-dimensional subspace ${\cal M}_{z_R}\subset\Phi^\times_+$. The function of $H^\times$ that we are particularly interested in is the time evolution operator $f(H^\times)=e^{-iH^\times t}_+$. It can be defined in ${\Phi}^\times_+$ only for those values of the parameter $t$ for which $e^{iHt}:{\Phi}_+ \longrightarrow {\Phi}_+$ is a $\tau_{{\Phi}_+}$-continuous operator. This is the case for $t\ge 0$, but not for $t<0$. (For $\langle \psi^-|E^-\rangle\in{\cal S}\cap{\cal H}^2_-$, the function $\langle e^{iHt}\psi^- |E^-\rangle = e^{-iEt}\langle\psi^- |E^-\rangle$ is an element of ${\cal S}\cap{\cal H}^2_-$ only for $t\ge 0$.) Thus, for $t\ge 0$, we can use (\ref{b63b}) with $f(z)=e^{-izt}$ and $f^{(\nu)}(z)=(-it)^\nu e^{-izt}$, and we obtain the following functional equation in ${\cal M}_{z_R} \subset{\Phi}^\times_+$: \begin{equation}\label{b66} e^{-iH^\times t}|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}\,=\,e^{-iz_Rt}\sum_{\nu=0}^k \frac{\Gamma^\nu}{\nu!}(-it)^\nu|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k-\nu)}\;.
\end{equation} In terms of the vectors $|z^\gamma_R\rangle^{(k)}$ this can be written (using~(\ref{bohm40})): \begin{subeqnarray}\label{b67}\label{eq:59}\label{timeevo} e^{-iH^\times t}|z^\gamma_R\rangle^{(k)}=e^{-iz_Rt} \sum^k_{\nu=0}\begin{pmatrix}k\\\nu\end{pmatrix}\, (-it)^\nu\,|z^\gamma_R\rangle^{(k-\nu)} \end{subeqnarray} or taking the complex conjugate (in analogy to going from~(\ref{bohm30}a) to~(\ref{bohm30}b)): \begin{equation}\nonumber \hspace{3.5cm}\,^{(k)}\langle^\gamma z_R|e^{iHt}= e^{iz^*_Rt}\sum^k_{\nu=0}\begin{pmatrix}k\\\nu\end{pmatrix}\,(it)^\nu \;\,^{(k-\nu)}\langle^\gamma z_R|\;.\hspace{3.4cm}(\ref{b67}{\rm b}) \end{equation} The vectors $|z^\gamma_R\rangle^{(k)}$ in the above equations can be replaced by the vectors $|z^-_R\rangle^{(k)}$. It is important to note that the time evolution $e^{-iH^\times t}$ transforms between different $|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}$, $k=0,\,1,\ldots,r-1$, that belong to the same pole of order $r$ at $z=z_R$, but the time evolution does not transform out of ${\cal M}_{z_R}$. On the basis vectors $|E^+\rangle$ of the first term in~(\ref{bohm44}) the time evolution is diagonal \begin{equation}\label{b68} e^{-iH^\times t}|E^+\rangle=e^{-iEt}|E^+\rangle\;. \end{equation} Equations~(\ref{b66}) and~(\ref{b67}) can be written as a matrix equation on the subspace ${\cal M}_{z_R}\subset{\Phi}^\times_+$: \begin{align}\label{b69}& \begin{pmatrix} \langle\psi^-|\, e^{-iH^\times t}\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)} \\ \langle\psi^-|\, e^{-iH^\times t}\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)} \\ \quad\vdots \\ \quad\vdots\\ \langle\psi^-|\, e^{-iH^\times t}\, |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(r-1)} \end{pmatrix} = \\&= \begin{pmatrix} e^{-iz_Rt} &0&\ldots&\ldots&0 \\ \frac{(-it\Gamma)}{1!}e^{-iz_Rt}&e^{-iz_Rt}&0&\ldots&0 \\ \frac{(-it\Gamma)^2}{2!}e^{-iz_Rt}& \frac{(-it\Gamma)}{1!}e^{-iz_Rt}&\ddots&\ddots&\vdots \\ \vdots &\vdots&\ddots&e^{-iz_R t}&0 \\ \frac{(-it\Gamma)^{r-1}}{(r-1)!}e^{-iz_Rt}& \frac{(-it\Gamma)^{r-2}}{(r-2)!}e^{-iz_Rt}&\ldots& \frac{(-it\Gamma)}{1!}e^{-iz_R t}&e^{-iz_R t} \end{pmatrix} \begin{pmatrix} \langle\psi^-|z^-_R {\!\,\text{{\Large $\succ$}}\!}^{(0)} \\ \langle\psi^-|z^-_R {\!\,\text{{\Large $\succ$}}\!}^{(1)} \\ \quad \vdots\\ \quad \vdots\\ \langle\psi^-|z^-_R {\!\,\text{{\Large $\succ$}}\!}^{(r-1)} \end{pmatrix}\nonumber \end{align} As an example let us consider the special case of a double pole, $r=2$, $k=0,~1$. The formula~(\ref{b66}) for the zeroth order Gamow vector is then \begin{equation}\label{b70a} e^{-iH^\times t}|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)}= e^{-iE_Rt}e^{-(\Gamma_R/2)t} |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)}, \hspace{.3in} t\geq0, \end{equation} and for the first order Gamow vector it is \begin{equation}\label{b70b} e^{-iH^\times t}|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)} = e^{-iz_Rt}\left( |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(1)}+(-it\Gamma)|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(0)}\right) . \end{equation} It has been known for a long time~\cite{polynomial} that a double pole, and in general all higher order S-matrix poles, lead to a polynomial time dependence in addition to the exponential.
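The matrix in~(\ref{b69}) is nothing but the exponential $e^{-iJt}$ of the lower Jordan block $J$ of~(\ref{b60}). A short numerical sketch (illustrative numbers, not part of the original text) confirms this:

```python
import numpy as np
from math import factorial

# Illustrative values; lower Jordan block as in (b60): Gamma on the subdiagonal.
r, Gamma, t = 3, 0.4, 0.7
z_R = 1.5 - 1j * Gamma / 2
sub = np.diag(np.ones(r - 1), k=-1)
J = z_R * np.eye(r, dtype=complex) + Gamma * sub

# Matrix of (b69): entries ((-i t Gamma)^nu / nu!) e^{-i z_R t} below the diagonal.
U_b69 = np.exp(-1j * z_R * t) * sum(
    ((-1j * t * Gamma)**nu / factorial(nu)) * np.linalg.matrix_power(sub, nu)
    for nu in range(r))

# Direct exponential exp(-i J t), summed as a rapidly converging power series.
U_direct = sum(((-1j * t)**m / factorial(m)) * np.linalg.matrix_power(J, m)
               for m in range(40))

assert np.allclose(U_b69, U_direct)
```

For $r=2$ the two rows of this matrix reduce to the double-pole formulas~(\ref{b70a}) and~(\ref{b70b}).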
However, it was not clear what the vectors were that have such a time evolution. Here we have seen that they are Jordan vectors of degree $r$ or less, and that they are Gamow vectors, $|z^\gamma_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}\in\Phi^\times_+$. We have also shown that the time evolution operator is not diagonal in the basis~(\ref{bohm38}) but transforms a Gamow-Jordan vector of degree $(k+1)$ into a superposition~(\ref{b66}) of Gamow-Jordan vectors of the same and all lower degrees, with a time dependence $t^\nu$ in addition to the exponential that depends upon the degrees of the resulting Gamow-Jordan vectors. \section{Possible Physical Interpretations of the Gamow-Jordan Vectors} \label{sec:possible} \setcounter{equation}{0} In the previous section we defined the higher order Gamow vectors $|z^-_R\rangle^{(k)}\in\Phi^\times_+$ from the $r$-th order pole term of a unitary S-matrix. We showed that they are the discrete members of a complete basis system for the vectors $\phi^+\in\Phi_-$,~(\ref{bohm44}), and we derived their mathematical properties: In~(\ref{b57}) and~(\ref{b58}), we showed that they are Jordan vectors of degree $k+1$, and in~(\ref{b64}) and~(\ref{b69}), we obtained the Lagrange-Sylvester formula and the time evolution. The mathematical procedure that we used for the $r$-th order pole term is a straightforward generalization of the definitions and derivations that had been used for an ordinary, zeroth order Gamow vector and first order poles of the S-matrix~\cite{bohm-group}. Gamow states of zeroth order with their empirically well established properties (exponential time evolution, Breit-Wigner energy distribution) have been abundantly observed in nature as resonances and decaying states. Theoretically, there is no reason why the other quasistationary states (i.e., states that also cause large time delay in a scattering process (\cite{bohm} sect.~XVIII.6) and are associated with integers $r>1$ in~(\ref{b16a})) should not exist.
However, no such quasistationary states have so far been established empirically. One argument against their existence was that the polynomial time dependence, which was always vaguely associated with higher order poles~\cite{polynomial}, has not been observed for quasistationary states. The question that we want to discuss in this section is whether the higher order Gamow vectors admit a physical interpretation analogous to that of the ordinary Gamow vectors, namely as states which decay (for $t>0$) or grow (for $t<0$) in one preferred direction of time (``arrow of time'') and obey the exponential law. Since we now have well defined vectors associated with an $r$-th order pole, we can attempt to define physical states which have well defined properties that can be tested experimentally. In this section we are dealing with physical questions about hypothetical objects associated with the $r$-th order pole. We therefore have first to conjecture the higher order Gamow state before we can derive its properties. We start with the known cases. In von Neumann's definition of a pure stationary state one uses a dyadic product $W=|f\rangle\langle f|$ of energy eigenvectors $|f\rangle$ in Hilbert space. In analogy to this, microphysical Gamow states connected with first order poles have been defined as dyadic products of zeroth order Gamow vectors~\cite{bohm}~\cite{bohm-gadella-maxson}: \begin{equation}\label{g60} W^G=|\psi^G\rangle\langle\psi^G| =|z^-_R\rangle\langle^-z_R|\, \equiv \, W^{(0)} \end{equation} (Since for the generalized vectors $|\psi^G\rangle=\sqrt{2\pi\Gamma} \, |z^-_R\rangle$ or $|z^-_R\rangle$ we cannot talk of normalization in the ordinary sense, it is not important at this stage whether or not to use the ``normalization'' factor of $2\pi\Gamma$ in $W^G$.)
\noindent The time evolution of the Gamow state~(\ref{g60}) is then given according to~(\ref{bohm30}) by: \begin{eqnarray} W^G(t)&\equiv&e^{-iH^\times t}\,|\psi^G\rangle \langle\psi^G|\,e^{iHt} \nonumber\\ \label{WG62}\label{g61} &=&e^{-iz_R t}\,|\psi^G\rangle\langle\psi^G|\,e^{iz^*_Rt}\\ &=&e^{-i(E_R-i(\Gamma/2))t}|\psi^G\rangle\langle\psi^G| \,e^{i(E_R+i(\Gamma/2))t}\nonumber\\ &=&e^{-\Gamma t}\,W^G(0)\;,\hspace{4.5cm}t\geq0\;.\nonumber \end{eqnarray} Mathematically, the equation~(\ref{g61}) is to be understood as a functional equation like~(\ref{bohm28}) and~(\ref{bohm29}): \begin{subeqnarray}\label{g62}\label{WG63} \langle\psi^-_1|W^G(t)|\psi^-_2\rangle &=&e^{-\Gamma t}\langle\psi^-_1|W^G|\psi^-_2\rangle\hspace{1cm}{\rm or}\\ \langle\psi^-|W^G(t)|\psi^-\rangle &=&e^{-\Gamma t}\langle\psi^-|W^G|\psi^-\rangle\\ &&{\rm for~all}\quad\psi^-,\psi^-_1,\psi^-_2\in\Phi_+ \quad{\rm and}\quad t\geq0.\nonumber \end{subeqnarray} The mathematical form~(\ref{g62}) of the time evolution of $W^G$ shows how important it is in our RHS formulation to know what question one wants to ask about a Gamow state when one makes the hypothesis~(\ref{g60}). The vectors $\psi^-\in\Phi_+$ represent observables defined by the detector (registration apparatus). The operator $W^G$ represents the microsystem that affects the detector. Therefore the quantity $\langle\psi^-|W^G|\psi^-\rangle$ is the answer to the question: What is the probability that the microsystem affects the detector? If the detector is triggered at a later time $t$, i.e.
when the observable has been time translated \begin{equation}\label{g63} |\psi^-\rangle\langle\psi^-|\quad\longrightarrow\quad e^{iHt}|\psi^-\rangle\langle\psi^-|e^{-iHt}\, =\,|\psi^-(t)\rangle\langle\psi^-(t)| \end{equation} then the same question for $t\geq0$ has the answer: The probability that the microsystem affects the detector at $t>0$ is \begin{eqnarray}\nonumber \hspace{-.3in}\langle\psi^-(t)|W^G|\psi^-(t)\rangle &=&\langle e^{iHt}\psi^-|W^G|e^{iHt}\psi^-\rangle\\\label{g64} &=&\langle\psi^-|e^{-iH^\times t}W^Ge^{iHt}|\psi^-\rangle\\\nonumber &=&e^{-\Gamma t} \langle \psi^-| W^G |\psi^-\rangle\;. \end{eqnarray} This means that~(\ref{g64}) is the probability to observe the decaying microstate at time $t$ relative to the probability $\langle\psi^-|W^G|\psi^-\rangle$ at $t=0$ (which one can ``normalize'' to unity by choosing an appropriate factor on the right-hand side of~(\ref{g60})). The question that one asks in the scattering experiment of fig.~1 is different. There the pole term (P.T.) of~(\ref{eq:regular}) describes how the microsystem propagates the effect which the preparation apparatus (accelerator, described by the state $\phi^+$) causes on the registration apparatus (detector, described by the observable $\psi^-$). In conventional orthodox quantum theory one only deals with ensembles and with observables measured on ensembles. Their mathematical representations, e.g., $|\phi\rangle\langle\phi|$ for the state of the ensemble and $\, |\psi\rangle\langle\psi|\,$ for the observable, are from the same space $\Phi$, i.e., $\phi,~\psi\in\Phi$. (And if one is mathematically precise then one chooses for $\Phi$ the Hilbert space, $\Phi={\cal H}$.) On this level, one cannot talk of single microsystems, and there are no mathematical objects in orthodox quantum mechanics to describe a single microsystem.
Still, it is intuitively attractive to imagine that the effect by which the preparation apparatus acts on the registration apparatus is carried by single physical entities, the microphysical systems~\cite{ludwig-foundations}. According to the physical interpretation of the RHS formulation, ``real'' physical entities connected with an experimental apparatus, like the states $\phi$ defined by the preparation apparatus or the property $\psi$ defined by the registration apparatus, are assumed to be elements of $\Phi$, but states and observables are distinct. In particular, states and observables of a scattering experiment are distinct and described by $\Phi_-$ of~(\ref{bohm6}a) and $\Phi_+$ of~(\ref{bohm6}b). However, mathematical entities describing microphysical systems are not assumed to be in $\Phi$. The energy distribution for a microphysical system does not have to be a well-behaved (continuous, smooth, rapidly decreasing) function of the physical values of the energy $E$, like the functions $\langle E|\psi\rangle$ describing the energy resolution of the detector, or the functions $\langle E|\phi\rangle$ describing the energy distribution of the beam. Hence, for the hypothetical entities connected with microphysical systems, like Dirac's ``scattering states'' $|{\rm {\bf p}}\rangle$ or Gamow's ``decaying states'' $|E-i\Gamma/2\rangle$, the RHS formulation uses elements of $\Phi^\times$, $\Phi^\times_+$, and $\Phi^\times_-$~\cite{cologne,bohm-gadella-maxson}. The time evolution of the ``state'' vectors for the decaying microphysical systems, e.g.~(\ref{eq:5.89prime}) or~(\ref{g61}), can be obtained from the well established time evolution of the quantum mechanical observable~(\ref{g63}) using the definition of the conjugate operator as in~(\ref{bohm28}).
Because of the difference between {$\psi^-\in\Phi_+$} for the observables and {$\phi^+\in\Phi_-$} for the prepared states, one needs a different mathematical description for the same microphysical state, depending upon the question one is asking. If one asks the question with what probability the microphysical state affects the detector $\psi^-(t)$, then the microphysical state is described by~(\ref{g60}). In a resonance scattering experiment of fig.~1 one asks another question: What is the probability to observe $\psi^-(t)$ in a microphysical resonance state of a scattering experiment with the prepared in-state $\phi^+$? \noindent In distinction to a decay experiment, where one just asks for the probability of $\psi^-\in\Phi_+$, in the resonance scattering experiment one asks for the probability that relates $\psi^-\in\Phi_+$ to $\phi^+\in\Phi_-$ via the microphysical resonance state. Therefore the mathematical quantity that describes the microphysical resonance state in a scattering experiment cannot be given by $|z^-_R\rangle\langle^-z_R|$ of~(\ref{g60}), but must be given by something like $|z^-_R\rangle\langle^+z_R|$. \noindent The probability to observe $\psi^-$ in the prepared state $\phi^+$, independently of how the effect of $\phi^+$ is carried to the detector $\psi^-$, is given by the S-matrix element~(\ref{bohm17}), $|(\psi^-,\phi^+)|^2$. The probability amplitude that this effect is carried by the microphysical resonance state is then given by the pole term $(\psi^-,\phi^+)_{\rm P.T.}$, equation~(\ref{bohmprime}). In analogy to~(\ref{g64}) one can now also compare these probabilities at different times. For this purpose one translates the observable $\psi^-$ in the pole term~(\ref{eq:regular}) in time by an amount $t\geq 0$, \begin{equation}\label{g65} \psi^-\longrightarrow\psi^-(t)=e^{iHt}\psi^-;\hspace{.3in}t\geq0 \end{equation} (which corresponds to turning on the detector at a time $t\geq0$ later than for $\psi^-$).
One obtains \begin{align}\nonumber \left(\psi^-(t),\phi^+\right)_{\rm P.T.} & =\,-2\pi\Gamma\,\langle e^{iHt}\psi^-|z_R^-\rangle \langle^+z_R|\phi^+\rangle\\ \label{WG66a}& =\,-2\pi\Gamma \, \langle \psi^-|e^{-iH^\times t} |z_R^-\rangle \langle^+z_R|\phi^+\rangle\\& =\,-2\pi\Gamma \, e^{-iz_Rt}\langle \psi^-| z_R^-\rangle \langle^+ z_R | \phi^+ \rangle\nonumber\\&\nonumber = e^{-iE_Rt}e^{-\Gamma t/2}\left(\psi^-,\phi^+\right)_{\rm P.T.} \end{align} This means that the time dependent probability, due to the first order pole term, to measure the observable $\psi^-(t)$ in the state $\phi^+$ is given by the exponential law: \begin{equation}\label{WG66b} |(e^{iHt}\psi^-,\phi^+)_{\rm P.T.}|^2 =e^{-\Gamma t} \, |(\psi^-,\phi^+)_{\rm P.T.}|^2. \end{equation} This is as one would expect if the action of the preparation apparatus on the registration apparatus is carried by an exponentially decaying microsystem (resonance) described by a Gamow vector. With this we have seen that there are two ways in which a resonance can appear in experiments, and therefore there should be two forms of representing the decaying Gamow state (for the case $r=1$ so far)\footnote{An analogous statement holds for the Gamow states associated with the pole in the upper half-plane.} \begin{subeqnarray}\label{68a}\label{g68a} {\rm by}\hspace{.8cm}&|z^-_R\rangle\langle^+z_R| &{\rm~in~a~scattering~experiment,}\\\label{68b}\label{g68b} {\rm and~by}&|z^-_R\rangle\langle^-z_R|& {\rm~in~a~decay~experiment.} \end{subeqnarray} The first representation is the one used in the S-matrix when one calculates the cross section; the second representation is the one used when one calculates the Golden Rule (decay rate).
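The exponential law~(\ref{WG66b}) rests on nothing more than $|e^{-iz_Rt}|^2=e^{-\Gamma t}$ for $z_R=E_R-i\Gamma/2$, as in the last two lines of~(\ref{WG66a}). A one-line numerical sanity check (arbitrary illustrative numbers, not from the original):

```python
import numpy as np

# z_R = E_R - i*Gamma/2 (illustrative numbers); the pole-term amplitude of (WG66a)
# picks up the factor exp(-i z_R t), so the probability (WG66b) decays as exp(-Gamma t).
E_R, Gamma, t = 3.2, 0.25, 1.7
z_R = E_R - 1j * Gamma / 2

amplitude_factor = np.exp(-1j * z_R * t)       # = exp(-i E_R t) * exp(-Gamma t / 2)
assert np.isclose(abs(amplitude_factor)**2, np.exp(-Gamma * t))
```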
In contrast to von Neumann's formulation, where a given state (representing an ensemble prepared by the preparation apparatus) is always described by one and the same density operator $W$, the representation of the microphysical state in the RHS formulation depends upon the question one asks, i.e., upon the kind of experiment which one wants to perform. That a theory of the microsystems must include the methods of the experiments has previously been emphasized in~\cite{ludwig-foundations}. After this preparation we are now ready to conjecture the mathematical representation of a higher order Gamow state (a quasistationary state with $r>1$). In analogy to the correspondence between~(\ref{68a}a) and~(\ref{68b}b) we conjecture that for the case of general $r$ we also have two distinct representations of the Gamow state. The one for resonance scattering is already determined, as in the case $r=1$, by the (negative of the) pole term~(\ref{bohm41}), and is therefore given~by \begin{subeqnarray}\label{g69a} W_{\rm P.T.}&=&-2\pi\Gamma\sum_{n=0}^{r-1}\begin{pmatrix}r\\ n+1\end{pmatrix}(-i)^n\frac{\Gamma^n}{n!}\sum_{k=0}^n \begin{pmatrix}n\\k\end{pmatrix}|z^-_R\rangle^{(k)} \;^{(n-k)}\langle^+z_R|\\ &=&-2\pi\Gamma\sum_{n=0}^{r-1}\begin{pmatrix}r\\n+1\end{pmatrix} (-i)^nW^{(n)}_{\rm P.T.}\nonumber \end{subeqnarray} where we have used the operator defined in~(\ref{bohm42}): \begin{subeqnarray}\label{g69b} W^{(n)}_{\rm P.T.}=\frac{\Gamma^n}{n!}\sum_{k=0}^n\begin{pmatrix}n\\ k\end{pmatrix}|z^-_R\rangle^{(k)}\;^{(n-k)}\langle^+z_R| =\sum_{k=0}^n|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}\;^{(n-k)}{\!\text{{\Large $\prec$}}\!\,}^+\!\!z_R|\;.
\end{subeqnarray} In analogy to~(\ref{g68b}b) we would then conjecture that the $r$-th order microphysical decaying state is described by the state operator \begin{eqnarray} \hspace{2.8cm}W&=&2\pi\Gamma\sum_{n=0}^{r-1}\begin{pmatrix} r\\n+1\end{pmatrix}(-i)^n\frac{\Gamma^n}{n!}\sum_{k=0}^n \begin{pmatrix}n\\k\end{pmatrix}|z^-_R\rangle^{(k)}\;^{(n-k)} \langle^-z_R|\hspace{1.7cm}(\ref{g69a}{\rm b})\nonumber\\ &=&2\pi\Gamma\sum_{n=0}^{r-1}\begin{pmatrix}r\\n+1\end{pmatrix} (-i)^nW^{(n)}\nonumber \end{eqnarray} (up to a normalization factor which will have to be determined by normalizing the overall probability to 1). Since~(\ref{g68b}b) is postulated to be the zeroth order Gamow state representing a resonance, (\ref{g69a}b) is conjectured to be the $r$-th order Gamow state.\footnote{ We want to mention that mathematically there is an important difference between~(\ref{g69a}a) and~(\ref{g69a}b), because the $\langle\psi^-|z^-\rangle^{(k)}\,^{(n-k)}\langle^+z|\phi^+\rangle$ are analytic functions for $z$ in the lower half-plane, whereas the $\langle\psi^-_1|z^-\rangle^{(k)}\,^{(n-k)}\langle^-z|\psi^-_2\rangle$ are not.} Whether the microphysical state of the (hypothetical) quasistationary microphysical system is always represented by the mathematical object~(\ref{g69a}b), or whether each individual \begin{eqnarray}\nonumber \hspace{2.3cm}W^{(n)}&=&\frac{\Gamma^n}{n!}\sum_{k=0}^n \begin{pmatrix}n\\k\end{pmatrix}|z^-_R\rangle^{(k)}\;^{(n-k)} \langle^-z_R|=\sum_{k=0}^n|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}\;^{(n-k)}{\!\text{{\Large $\prec$}}\!\,}^-z_R|\;;\\ &&\hspace{3.45cm}n=0,1,\dots,r-1\;,\hspace{4cm} (\ref{g69b}{\rm b})\nonumber \end{eqnarray} also has a separate physical meaning, is not clear. So far it is not even certain that higher order poles of the S-matrix describe anything in nature (though there are no theoretical reasons that exclude these isolated singularities of the S-matrix).
But if these hypothetical objects do exist, the $r$-th order pole is associated with a mixed state~(\ref{g69a}b) whose irreducible components are given by~(\ref{g69b}b). E.g., for the case $r=2$ (second order pole at $z_R$) we have: \begin{equation}\label{g72} W^{(0)}=|z^-_R\rangle^{(0)}\;^{(0)}\langle^-z_R|\hspace{8cm} \end{equation} and \begin{equation}\label{g73} W^{(1)}=\Gamma\Bigl( |z^-_R\rangle^{(0)}\;^{(1)}\langle^-z_R|+ |z^-_R\rangle^{(1)}\;^{(0)}\langle^-z_R|\Bigr)\hspace{4.2cm} \end{equation} and \begin{equation}\label{g74} W=2\pi\Gamma\Bigl(|z^-_R\rangle^{(0)}\;^{(0)}\langle^-z_R|- 2i\Gamma\bigl(|z^-_R\rangle^{(0)}\;^{(1)}\langle^-z_R|+ |z^-_R\rangle^{(1)}\;^{(0)}\langle^-z_R|\bigr)\Bigr) \end{equation} This means that the conjectural physical state associated with the $r$-th order pole is a mixed state $W$, all of whose components $W^{(n)}$, except for the zeroth component $W^{(0)}$, cannot be reduced further into ``pure'' states given by dyadic products like $|z^-_R\rangle^{(k)}\,^{(k)}\langle^-z_R|$. This is quite consistent with our earlier remark that the label $k$ is not a quantum number connected with an observable (like the suppressed labels $b_2,\dots,b_n$). Therefore a ``pure state'' with a definite value of $k$, like $|z^-_R\rangle^{(k)}\,^{(k)}\langle^-z_R|$, $k\geq1$, does not make sense physically. A physical interpretation could only be given to the whole $r$-dimensional space ${\cal M}_{z_R}$, (\ref{bohm48}). The individual $W^{(n)}$, {$n=0,~1,~2,\dots,r-1$}, act in the subspaces ${\cal M}^{(n)}_{z_R}\subset{\cal M}_{z_R}$ which are spanned by Gamow vectors of order $0,~1,\dots,n\,$ (Jordan vectors of degree $n+1$, i.e. $(H^\times-z_R)^{n+1}{\cal M}^{(n)}_{z_R}=0$). There the question is whether there could be a physical meaning to each $W^{(n)}$ separately, or whether only the particular mixture $W$ given by~(\ref{g69a}b) can occur physically.
Though the quantities $|z^-_R\rangle^{(k)}\,^{(k)}\langle^-z_R|$ will have no physical meaning, even if higher order poles exist, they have been considered~\cite{antoniou-gadella}, and their time evolution is calculated in a straightforward way from~(\ref{b66}): \begin{eqnarray}\label{purestate2}\label{g75} &&\hspace{-1cm}e^{-iH^\times t}|z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k)}\,^{(k)} {\!\text{{\Large $\prec$}}\!\,}^-z_R|e^{iHt}=\\ &&=e^{-\Gamma t}\sum^k_{l=0}\,\sum^k_{m=0} \frac{1}{l!} \frac{1}{m!} (-it\Gamma)^l(it\Gamma)^m |z^-_R{\!\,\text{{\Large $\succ$}}\!}^{(k-l)}\,^{(k-m)}{\!\text{{\Large $\prec$}}\!\,}^-z_R |\;.\nonumber \end{eqnarray} This time dependence (as well as the time dependence in~(\ref{b66})) is reminiscent of eq.~(\ref{bohm45}) in the reference of M.~L.~Goldberger and K.~M.~Watson~\cite{polynomial}. It shows the additional polynomial time dependence that has always been considered an obstacle to the use of higher order poles for quasistationary states. A polynomial time dependence of this magnitude (of the order of $\tau=\frac{1}{\Gamma}$) should have shown up in many experiments. We now derive the time evolution of the microphysical state operator defined in~(\ref{g69b}b), using the time evolution obtained for the Gamow-Jordan vectors in~(\ref{b67}). It will turn out that this operator, whose form was conjectured in analogy to the pole term~(\ref{g69b}a), has a purely exponential time evolution. This was quite unexpected.
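Before going through the algebra, the claim can be checked numerically in a finite-dimensional sketch: represent the Gamow-Jordan kets $|z^-_R\rangle^{(k)}$, $k=0,\dots,r-1$, by the basis vectors of $\mathbb{C}^r$ and assume, in the normalization convention of~(\ref{b67}), the evolution law $e^{-iH^\times t}|z^-_R\rangle^{(k)}=e^{-iz_Rt}\sum_l\binom{k}{l}(-it)^{k-l}|z^-_R\rangle^{(l)}$. The numerical values of $E_R$, $\Gamma$, $r$, $t$ below are arbitrary illustrative choices.

```python
from math import comb, factorial
import cmath

r = 5                              # model dimension: orders k = 0, ..., r-1
E_R, Gamma = 1.3, 0.8              # assumed resonance energy and width (illustrative)
z_R = E_R - 1j * Gamma / 2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(r)] for i in range(r)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(r)] for i in range(r)]

def U(t):
    # e^{-iH^x t}|z_R>^{(k)} = e^{-i z_R t} sum_l C(k,l) (-it)^{k-l} |z_R>^{(l)}
    return [[cmath.exp(-1j * z_R * t) * comb(k, l) * (-1j * t) ** (k - l) if l <= k else 0j
             for k in range(r)] for l in range(r)]

def W_op(n):
    # W^{(n)} = (Gamma^n / n!) sum_k C(n,k) |z_R>^{(k)} {}^{(n-k)}<z_R|  as in (g69b)
    W = [[0j] * r for _ in range(r)]
    for k in range(n + 1):
        W[k][n - k] = Gamma ** n / factorial(n) * comb(n, k)
    return W

t = 2.0
decay = cmath.exp(-Gamma * t).real
for n in range(r):
    Wt = matmul(matmul(U(t), W_op(n)), dagger(U(t)))   # e^{-iH^x t} W^{(n)} e^{iHt}
    assert all(abs(Wt[i][j] - decay * W_op(n)[i][j]) < 1e-9
               for i in range(r) for j in range(r))     # purely exponential decay
```

The assertions confirm that each $W^{(n)}$, and hence any linear combination of them, evolves as $e^{-\Gamma t}W^{(n)}$ in this model, while an individual dyad $|z^-_R\rangle^{(k)}\,^{(k)}\langle^-z_R|$ would pick up polynomial factors.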
Inserting~(\ref{b67}a) and~(\ref{b67}b) into \begin{eqnarray}\label{b75} W^{(n)}(t)=e^{-iH^\times t}W^{(n)}e^{iHt} = \frac{\Gamma^n}{n!}\sum_{k=0}^n \, \begin{pmatrix} n \\ k \end{pmatrix} \, e^{-iH^\times t} |z^-_R \rangle^{(k)} \,^{(n-k)} \langle^- z_R |\,e^{iHt} \end{eqnarray} we calculate: \begin{eqnarray} W^{(n)}(t)&\hspace{-.5cm}&=e^{-iz_Rt}e^{iz^*_Rt} \frac{\Gamma^n}{n!}\sum_{k=0}^n \sum_{l=0}^k\sum_{m=0}^{n-k} \begin{pmatrix} n \\ k \end{pmatrix} \begin{pmatrix} k \\ l \end{pmatrix} \begin{pmatrix} n\! -\! k \\ m \end{pmatrix} (-it)^{k-l}(it)^{n-k-m} |z^-_R\rangle^{(l)}\,^{(m)}\langle^- z_R| \nonumber\\ =&\hspace{-.5cm}&e^{-\Gamma t}\frac{\Gamma^n}{n!}\sum_{m=0}^n \sum_{l=0}^{n-m}\sum_{k=l}^{n-m} \begin{pmatrix} n \\ k \end{pmatrix} \begin{pmatrix} k \\ l \end{pmatrix} \begin{pmatrix} n\! -\! k \\ m \end{pmatrix} (-it)^{k-l}(it)^{n-k-m} |z^-_R\rangle^{(l)}\,^{(m)}\langle^- z_R| \label{marc5.20}\\ =&\hspace{-.5cm}&e^{-\Gamma t}\frac{\Gamma^n}{n!}\sum_{m=0}^n \sum_{l=0}^{n-m}\sum_{k=l}^{n-m} \begin{pmatrix} n \\ m \end{pmatrix} \begin{pmatrix} \!n\!-m\!\\ l \end{pmatrix} \begin{pmatrix} \!n\!-\!m\!-\!l\\ k\!-\!l \end{pmatrix} (-it)^{k-l}(it)^{n-k-m} |z^-_R\rangle^{(l)}\,^{(m)}\langle^- z_R| \nonumber\\ =&\hspace{-.5cm}&e^{-\Gamma t}\frac{\Gamma^n}{n!}\sum_{m=0}^n \begin{pmatrix} n \\ m \end{pmatrix} \sum_{l=0}^{n-m}\begin{pmatrix} \!n\!-m\!\\ l \end{pmatrix} |z^-_R\rangle^{(l)}\,^{(m)}\langle^- z_R|\sum_{k=l}^{n-m} \begin{pmatrix} \!n\!-\!m\!-\!l\\ k\!-\!l \end{pmatrix} (-it)^{k-l}(it)^{n-k-m} \nonumber \end{eqnarray} In going from the second to the third line, the order of summation has been changed, keeping the same terms, as displayed in fig.~3 for the case $n=3$. In going from the third to the fourth line one uses the identity \begin{equation}\label{b79} \begin{pmatrix} n \\ k \end{pmatrix} \begin{pmatrix} k \\ l \end{pmatrix} \begin{pmatrix} n\! -\!
k \\ m \end{pmatrix}= \begin{pmatrix} n \\ m \end{pmatrix} \begin{pmatrix} n\!-\!m\! \\ l \end{pmatrix} \begin{pmatrix} n\!-\!m\!-\!l \\ k\!-\!l \end{pmatrix} \end{equation} where {\footnotesize{$\begin{pmatrix}n\\k\end{pmatrix}$}}$\equiv\frac{n!}{k!(n-k)!}$ are binomial coefficients. \begin{figure} \caption{For the case $n=3$, the summation terms labeled by the parameters $k$, $l$, and $m$ are displayed as dots in the diagram to show that the summations of lines 2 and 3 of~(\ref{marc5.20}) run over the same set of terms.} \end{figure} Since the indices labeling the Gamow-Jordan vectors do not depend upon $k$, the sum over $k$ may be performed using the binomial formula: \begin{equation}\label{b82} \sum_{k=l}^{n-m}\, \begin{pmatrix}\!n\!-\!m\!-\!l\!\\k\!-\!l\end{pmatrix} (-it)^{k-l}\, (it)^{n-k-m} = (it-it)^{n-m-l} = \left\{ \begin{array}{c} 1 \quad \mbox{for $l=n-m$} \\ 0 \quad \mbox{for $l\neq n-m$} \end{array} \right\}=\delta_{l,n-m} \end{equation} Inserting~(\ref{b82}) into the fourth line of~(\ref{marc5.20}) and performing the sum over $l$ then gives: \begin{equation}\label{b82'} W^{(n)}(t)=e^{-\Gamma t}\,\frac{\Gamma^n}{n!}\,\sum_{m=0}^n \begin{pmatrix}n\\m\end{pmatrix} |z^-_R \rangle^{(n-m)} \,^{(m)} \langle^- z_R | =e^{-\Gamma t} \, W^{(n)}(0)\;; \hspace{.2in} t\geq 0 \end{equation} This means that the complicated non-reducible (i.e. ``mixed'') microphysical state operator $W^{(n)}$ defined by~(\ref{g69b}b) has a simple, purely exponential semigroup time evolution, like the zeroth order Gamow state~(\ref{g68b}b). This operator is probably the only operator formed from the dyadic products $|z^-_R \rangle^{(m)} \,^{(l)} \langle^-z_R |$ with $m,~l=0,~1,\cdots,n$ which has a purely exponential time evolution. Thus $W^{(n)}$ of eq.~(\ref{g69b}b) is distinguished from all other operators in ${\cal M}_{z_R}^{(n)}$.
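The two combinatorial facts used in this step, the binomial-coefficient identity~(\ref{b79}) and the collapse~(\ref{b82}), can be checked by machine for small $n$; the short script below is only an illustration of the algebra, not part of the derivation.

```python
from math import comb

# Identity (b79): C(n,k) C(k,l) C(n-k,m) = C(n,m) C(n-m,l) C(n-m-l,k-l)
for n in range(8):
    for k in range(n + 1):
        for l in range(k + 1):
            for m in range(n - k + 1):
                lhs = comb(n, k) * comb(k, l) * comb(n - k, m)
                rhs = comb(n, m) * comb(n - m, l) * comb(n - m - l, k - l)
                assert lhs == rhs

# Collapse (b82): sum_{k=l}^{n-m} C(n-m-l, k-l) (-it)^{k-l} (it)^{n-k-m}
# equals (it - it)^{n-m-l}, i.e. 1 if l = n-m and 0 otherwise.
t = 0.7   # arbitrary time value
for n in range(6):
    for m in range(n + 1):
        for l in range(n - m + 1):
            s = sum(comb(n - m - l, k - l) * (-1j * t) ** (k - l) * (1j * t) ** (n - k - m)
                    for k in range(l, n - m + 1))
            expected = 1 if l == n - m else 0
            assert abs(s - expected) < 1e-9
```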
The microphysical decaying state operator associated with the $r$-th order pole of the unitary S-matrix is, according to its definition~(\ref{g69a}b), a sum of the $W^{(n)}$. Because of the simple form~(\ref{b82'}) (the time evolution is independent of $n$), this sum again has a simple, purely exponential time evolution \begin{equation}\label{b83} W(t)\equiv e^{-iH^\times t}We^{iHt}=e^{-\Gamma t}W\;;\quad t\geq0\,. \end{equation} Thus we have seen that the state operator which we conjecture from the $r$-th order pole term describes a non-reducible ``mixed'' microphysical decaying state which obeys an exact exponential decay law. We can return to the question that we started with when we set out to conjecture the state operator for the (hypothetical) microphysical state associated with the $r$-th order S-matrix pole: What is the probability to register at the time $t$ the decay products $|\psi^-\rangle\langle\psi^-|$ (or in general $\Lambda\equiv\sum_i |\psi^-_i\rangle\langle\psi^-_i|$) if at $t=0$ the microphysical state was given by $W$ of~(\ref{g69a}b)? From~(\ref{b83}) we obtain \begin{equation}\label{b84} P_\Lambda(t)={\rm Tr}(\Lambda W(t))=e^{-\Gamma t}{\rm Tr}(\Lambda W) =e^{-\Gamma t}P_\Lambda(0) \end{equation} or in the special case of $\Lambda=|\psi^-\rangle\langle\psi^-|$: \begin{equation}\label{b85} P_{\psi^-}(t)=\langle\psi^-|W(t)|\psi^-\rangle =e^{-\Gamma t}\langle\psi^-|W|\psi^-\rangle \end{equation} This is exactly the same result as~(\ref{g62}b) for the microphysical state $W^G$ associated with the first order pole of the S-matrix, and it is in agreement with the experiments on the decay of quasistationary states. It is, however, important to note that in our derivation of~(\ref{b82'}) and~(\ref{b85}) we proceeded in a very specific order.
We first derived~(\ref{b82'}) from~(\ref{b67}a) and~(\ref{b67}b) and only then calculated the matrix elements with $\psi^-$, not vice versa, in order to avoid problems with analyticity. \section{Conclusion}\label{sec:conclusion} \setcounter{equation}{0} Vectors that possess all the properties that one needs in order to describe a pure state of a resonance have been known for two decades. These Gamow vectors $\psi^G$ are eigenvectors of a self-adjoint Hamiltonian with complex eigenvalues $E_R-i\Gamma/2$ (energy and width), they are associated with resonance poles of the S-matrix, they evolve exponentially in time, and they have a Breit-Wigner energy distribution. They also obey an exact Golden Rule, which becomes the standard Golden Rule in the limit of the Born approximation. The existence of these vectors in the rigged Hilbert space allows us to interpret exponentially decaying resonances as autonomous microphysical systems, which one cannot do in standard Hilbert space quantum mechanics. The mathematical procedure by which these Gamow vectors had been introduced suggests a straightforward generalization to higher order Gamow vectors, which are derived from higher order S-matrix poles. We have shown in this paper that the $r$-th order pole of a unitary S-matrix leads to $r$ generalized eigenvectors of order $k=0,~1,\cdots,~r-1$. These $k$-th order Gamow vectors are Jordan vectors of degree $(k+1)$ with complex eigenvalue $E_R-i\Gamma/2$. They are basis elements of a generalized eigenvector expansion. But their time evolution has, in addition to the exponential time dependence, also a polynomial time dependence, which is excluded experimentally. However, the generalized eigenvector expansion suggests the definition of a state operator for microphysical decaying states of higher order. These state operators cannot be expressed as dyadic products of generalized vectors. But these state operators have a purely exponential time evolution.
There has been a lot of interest in Jordan blocks for various applications (see e.g.~\cite{lukierski,brandas,antoniou-tasaki,mondragon,stodolsky,antoniou-gadella,prigogine}). Here it has been shown that Jordan blocks arise naturally from higher order S-matrix poles and represent a self-adjoint Hamiltonian~\cite{22a} by a complex matrix in a finite dimensional subspace contained in the rigged Hilbert space. Although higher order S-matrix poles are not excluded theoretically, there has so far been very little experimental evidence for their existence, because they were always believed to have a polynomial time dependence. Since we have shown here that their non-reducible state operator evolves purely exponentially in time, there is reason to hope that these mathematically beautiful objects will have some application in physics. \noindent {\Large {\bf Acknowledgments}} \noindent The inspiration for this paper came from three sources: Prof.~A. Mondragon's talk at the 1994 workshop on Nonlinear, Deformed and Irreversible Quantum Systems~\cite{mondragon}, which showed us the importance of Jordan blocks; Profs.~I. Antoniou and M.~Gadella's unpublished preprint of Spring 1995~\cite{antoniou-gadella}, which demonstrated that the Jordan vectors are Gamow vectors of higher order S-matrix poles; and Prof.~I. Prigogine's unrelenting talk of ``irreducible states''. We would like to express our gratitude to them for valuable discussions and explanations. \noindent This collaboration has been made possible by the financial support of NATO. \end{document}
\begin{document} \title{Bijective proofs of character evaluations using the trace forest of the jeu de taquin} \begin{abstract} Irreducible characters of the symmetric group are of special interest in combinatorics. They can be expressed either combinatorially with ribbon tableaux, or algebraically with contents. In this paper, these two expressions are related in a combinatorial way. We first introduce a fine structure in the famous jeu de taquin called the ``trace forest'', with which we are able to count certain types of ribbon tableaux, leading to a simple bijective proof of a character evaluation formula in terms of contents that dates back to Frobenius (1901). Inspired by this proof, we give an inductive scheme that yields combinatorial proofs of more complicated formulae for characters in terms of contents. \end{abstract} \newcommand{\mysqr}[1]{rectangle +(0.5,0.5) +(0.25,0.25) node{#1}} \newcommand{\mycpr}[1]{<^{(#1)}} \newcommand{\mycpa}[1]{\vee^{(#1)}} \section{Introduction} Irreducible characters of the symmetric group have long attracted attention from combinatorialists and group theorists. When evaluated at particular partitions, they can be expressed in terms of contents. Their study dates back to Frobenius. In \cite{frobenius1900charaktere}, for a partition $\lambda$ of an integer $n$, the following evaluations were given: \begin{align*} n (n-1) \chi^{\lambda}_{2,1^{n-2}} &= 2 f^{\lambda} \left( \sum_{w \in \lambda} c(w) \right), \\ n (n-1) (n-2) \chi^{\lambda}_{3,1^{n-3}} &= 3 f^{\lambda} \left( \sum_{w \in \lambda} (c(w))^2 - n (n-1) / 2 \right), \\ n (n-1) (n-2) (n-3) \chi^{\lambda}_{4,1^{n-4}} &= 4 f^{\lambda} \left( \sum_{w \in \lambda} (c(w))^3 - (2n-3) \sum_{w \in \lambda} c(w) \right).
\end{align*} Here, $\chi^{\lambda}_{\mu}$ is the irreducible character indexed by $\lambda$ evaluated on the conjugacy class indexed by another partition $\mu$ of $n$, $f^{\lambda}$ is the dimension of the corresponding representation, the sums run over cells $w$ in the Ferrers diagram of $\lambda$, and $c(w)$ is called the content of $w$. We postpone detailed definitions of these notions and related ones to Section~\ref{sec:prelim}. We observe that these character evaluations can be expressed with sums over powers of contents, called content evaluations. This fact was proved in \cite{corteel2004content} for the general case, and in \cite{lassalle2008explicit} an explicit formula was given for general $\mu$ in $\chi^{\lambda}_{\mu}$. Such character evaluations in terms of contents are mostly obtained in an algebraic way, either using the Jucys-Murphy elements (\textit{e.g.} \cite{diaconis1989applications}) or with the help of symmetric functions (\textit{e.g.} \cite{corteel2004content, lassalle2008explicit}). They are also related to shifted symmetric functions on the parts of partitions (\textit{e.g.} \cite{kerov1994polynomial}). On the other hand, there is a well-developed combinatorial representation theory of the symmetric group (\textit{c.f.} \cite{stanley2001enumerative, sagan2001symmetric}), in which characters can be expressed combinatorially in terms of ribbon tableaux. It is thus interesting to relate ribbon tableaux to content evaluations using combinatorial tools, for example Sch\"utzenberger's famous jeu de taquin. Furthermore, since functions on contents appear in many contexts, such as in the proof that the generating function of some family of combinatorial maps is a solution to the KP hierarchy (\textit{c.f.} \cite{goulden2008kp}), a better understanding of the combinatorial importance of contents would also help us to better understand other combinatorial phenomena related to contents. In this article, we look into the fine structure of the jeu de taquin.
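These three evaluations can be checked by machine for small $n$. The sketch below implements the Murnaghan-Nakayama rule in its beta-set (first-column hook length) form, a standard reformulation not spelled out in this paper, and verifies, for every $\lambda \vdash n$ with $n \le 6$ and the convention $c(w) = \text{column} - \text{row}$, the identities $n(n-1)\chi^{\lambda}_{2,1^{n-2}} = 2f^{\lambda}\sum c(w)$, $n(n-1)(n-2)\chi^{\lambda}_{3,1^{n-3}} = 3f^{\lambda}(\sum c(w)^2 - \binom{n}{2})$ and $n(n-1)(n-2)(n-3)\chi^{\lambda}_{4,1^{n-4}} = 4f^{\lambda}(\sum c(w)^3 - (2n-3)\sum c(w))$ (note the minus signs in the last two).

```python
def chi(lam, mu):
    """chi^lam_mu by the Murnaghan-Nakayama rule in beta-set form:
    removing a ribbon of size m lowers one beta-number by m."""
    if not mu:
        return 1 if not lam else 0
    m, rest = mu[0], mu[1:]
    l = len(lam)
    bs = [lam[i] + (l - 1 - i) for i in range(l)]       # strictly decreasing beta-set
    total = 0
    for i in range(l):
        new = bs[i] - m
        if new < 0 or new in bs:
            continue
        height = sum(1 for b in bs if new < b < bs[i])  # beta-numbers jumped over
        nb = sorted([b for j, b in enumerate(bs) if j != i] + [new], reverse=True)
        nlam = tuple(x for x in (nb[j] - (l - 1 - j) for j in range(l)) if x > 0)
        total += (-1) ** height * chi(nlam, rest)
    return total

def partitions(n, maxpart=None):
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxpart or n), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

for n in range(4, 7):
    for lam in partitions(n):
        f = chi(lam, (1,) * n)          # f^lam = character value on the identity
        c = [q - r for r, row in enumerate(lam) for q in range(row)]
        assert n*(n-1) * chi(lam, (2,) + (1,)*(n-2)) == 2*f*sum(c)
        assert n*(n-1)*(n-2) * chi(lam, (3,) + (1,)*(n-3)) == \
            3*f*(sum(x*x for x in c) - n*(n-1)//2)
        assert n*(n-1)*(n-2)*(n-3) * chi(lam, (4,) + (1,)*(n-4)) == \
            4*f*(sum(x**3 for x in c) - (2*n-3)*sum(c))
```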
In Section~\ref{sec:jdt}, we define a notion called the ``trace forest'' of a skew tableau that encapsulates the paths of all possible jeu de taquin moves on such tableaux. Using this notion, we give a simple bijective proof of the formula above for $\chi^{\lambda}_{2,1^{n-2}}$ by counting corresponding ribbon tableaux. To the author's knowledge, no such bijective proof was known before. Inspired by this simple proof, in Section~\ref{sec:chara-eval} we investigate the possibility of using the trace forest to give bijective proofs of more involved character evaluation formulae, which is equivalent to counting certain ribbon tableaux, and for this purpose we sketch a general scheme using structural induction on the tree structure of the trace forest. This scheme leads to combinatorial proofs of the other two character evaluation formulae above, for $\chi^{\lambda}_{3,1^{n-3}}$ and $\chi^{\lambda}_{4,1^{n-4}}$. Further possible developments of this scheme are also discussed. \section{Preliminaries} \label{sec:prelim} \subsection{Partitions and standard tableaux} A \emph{partition} $\lambda$ is a finite non-increasing sequence $(\lambda_i)_{i>0}$ of positive integers. We say that $\lambda$ is a partition of $n$ (noted $\lambda \vdash n$) if $\sum_{i} \lambda_i = n$. The \emph{Ferrers diagram} of a partition $\lambda$ (also noted $\lambda$ by abuse of notation) is a graphical representation of $\lambda$ consisting of left-aligned rows of boxes (also called \emph{cells}), in which the $i$-th row has $\lambda_i$ boxes. We assume that cells are all unit squares, and that the center of the first cell in the first row is the origin of the plane. For a cell $w$ whose center is at $(i,j)$, we define its \emph{content} to be $c(w)=i-j$. Figure~\ref{fig:tableau} gives an example of a Ferrers diagram, drawn in French convention, with the content of each cell.
A \emph{standard tableau} of shape $\lambda \vdash n$ is a filling of the Ferrers diagram of $\lambda$ using integers from $1$ to $n$ such that each number is used exactly once, with increasing rows and columns. Figure~\ref{fig:tableau} also gives an example of a standard tableau. We denote by $f^{\lambda}$ the number of standard tableaux of shape $\lambda$; it is also the dimension of the irreducible representation of the symmetric group indexed by $\lambda$ (\textit{c.f.} \cite{sagan2001symmetric, vershik2004new}). \begin{figure} \caption{(a) the Ferrers diagram of the partition $(5,3,3,2)$, with the content of each cell. (b) a standard tableau of shape $(5,3,3,2)$. (c) the skew diagram of the skew partition $(5,3,3,2) / (3,2)$. (d) a skew tableau of shape $(5,3,3,2) / (3,2)$.} \label{fig:tableau} \end{figure} The definitions above can be generalized to so-called skew partitions. A \emph{skew partition} $\lambda/\mu$ is a pair of partitions $(\lambda, \mu)$ such that for all $i>0$, $\lambda_i \geq \mu_i$. Graphically, this means that the Ferrers diagram of $\lambda$ entirely covers that of $\mu$. We then define the \emph{skew diagram} of shape $\lambda/\mu$ as the difference of the Ferrers diagrams of $\lambda$ and of $\mu$, \textit{i.e.} the Ferrers diagram of $\lambda$ without the cells that also appear in that of $\mu$. Figure~\ref{fig:tableau} gives an example of a skew diagram. We now define the counterpart of standard tableaux on skew diagrams. A \emph{skew tableau} of shape $\lambda/\mu$ is a filling of the skew diagram of $\lambda/\mu$ with $n$ cells, using integers from $1$ to $n$, that satisfies the same increasing conditions as standard tableaux. Figure~\ref{fig:tableau} gives an example of a skew tableau. We denote by $f^{\lambda/\mu}$ the number of skew tableaux of shape $\lambda/\mu$. Standard tableaux and skew tableaux are classical combinatorial objects closely related to the representation theory of the symmetric group.
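Since $f^{\lambda}$ counts standard tableaux, it can be computed directly from the definition: the entry $n$ must sit at a corner cell, so one deletes it and recurses. The sketch below (with illustrative helper names) does this with memoization and cross-checks the classical identity $\sum_{\lambda \vdash n} (f^{\lambda})^2 = n!$.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def syt_count(lam):
    """f^lam: the largest entry of a standard tableau occupies a corner;
    delete it and recurse over all corners of the diagram."""
    if not lam:
        return 1
    total = 0
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:    # corner at end of row i
            smaller = lam[:i] + (lam[i] - 1,) + lam[i + 1:]
            total += syt_count(tuple(x for x in smaller if x > 0))
    return total

def partitions(n, maxpart=None):
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxpart or n), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

assert syt_count((2, 1)) == 2 and syt_count((3, 2)) == 5
assert all(sum(syt_count(l) ** 2 for l in partitions(n)) == factorial(n)
           for n in range(1, 8))
```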
In \cite{vershik2004new, sagan2001symmetric, stanley2001enumerative} details of this relation are described. \subsection{Ribbon tableaux and the Murnaghan-Nakayama rule} We denote by $S_n$ the symmetric group formed by permutations of $n$ elements. Let $\lambda, \mu$ be partitions of $n$; we denote by $\chi^{\lambda}_{\mu}$ the \emph{irreducible character} of $S_n$ indexed by $\lambda$ evaluated on the conjugacy class indexed by $\mu$. Irreducible characters can be expressed in a combinatorial way using so-called ribbon tableaux. A \emph{ribbon} is a special skew diagram that is connected and contains no $2 \times 2$ block of cells. The \emph{height} $ht(\lambda/\mu)$ of a ribbon $\lambda/\mu$ is the number of rows it spans minus one. A \emph{ribbon tableau} $T$ of shape $\lambda$ is a sequence of partitions $\varnothing = \lambda^{(0)}, \lambda^{(1)}, \ldots, \lambda^{(k)} = \lambda$ such that $\lambda^{(i)} / \lambda^{(i-1)}$ is a ribbon for all $i>0$. The \emph{entry sequence} of $T$ is $(a_1, a_2, \ldots, a_k)$, with $a_i$ the number of cells in $\lambda^{(i)} / \lambda^{(i-1)}$. The total height of $T$ is defined by $ht(T)=\sum_i ht(\lambda^{(i)} / \lambda^{(i-1)})$, and the sign of $T$ is defined by $\operatorname{sgn}(T)=(-1)^{ht(T)}$. Figure~\ref{fig:ribbon-tableau} gives an example of a ribbon and a ribbon tableau. \begin{figure} \caption{(a) the ribbon $(5,4,4) / (3,3,1)$ of height $2$. (b) a ribbon tableau $T$ of shape $(5,3,3,2)$ and of entry sequence $(5,4,2,1,1)$, with its sign $\protect\operatorname{sgn}(T)$.} \label{fig:ribbon-tableau} \end{figure} The Murnaghan-Nakayama rule (\textit{c.f.} Chapter~7.17 of \cite{stanley2001enumerative}) is a combinatorial interpretation of the irreducible character. According to this rule, we have $\chi^{\lambda}_{\mu}=\sum_T \operatorname{sgn}(T)$, where the sum is over all ribbon tableaux $T$ of shape $\lambda$ and with entry sequence $\mu$.
For a partition $\mu \vdash k$ and an integer $n > k$, we denote by $\mu, 1^{n-k}$ the partition obtained by concatenating $\mu$ with $n-k$ parts of size $1$. In this article, for a fixed ``small'' partition $\mu \vdash k$, we are interested in the evaluation of $\chi^{\lambda}_{\mu, 1^{n-k}}$ for arbitrary $\lambda \vdash n$ in terms of contents, which involves ribbon tableaux of shape $\lambda$ and entry sequence $\mu, 1^{n-k}$. \begin{lem}[\textit{c.f.} \cite{corteel2004content}] \label{lem:character-as-skew-tableau} For partitions $\lambda \vdash n$, $\mu \vdash k$ and $n > k$, we have \[ \chi^{\lambda}_{\mu, 1^{n-k}} = \sum_{\nu \vdash k} f^{\lambda / \nu} \chi^{\nu}_{\mu}. \] \end{lem} \begin{proof} Let $T_0$ be a ribbon tableau of shape $\lambda$ and entry sequence $\mu, 1^{n-k}$. By retaining only the last $n-k$ ribbons of size $1$ in $T_0$, we obtain a skew tableau $T_1$, and $T = T_0 \setminus T_1$ is a ribbon tableau of entry sequence $\mu$. This is clearly a bijection between $T_0$ and $(T_1, T)$. Moreover, $\operatorname{sgn}(T)=\operatorname{sgn}(T_0)$, since ribbons of size $1$ have height $0$. We now sum the signs of all $T_0$ in bijection with $(T_1, T)$, first by the shape $\nu$ of $T$, then by each $T_1$ of shape $\lambda / \nu$, and finally by each $T$, and we finish the proof by the Murnaghan-Nakayama rule. \end{proof} By this lemma and the fact that irreducible characters linearly span the space of class functions (\textit{c.f.} Chapter~2.6 of \cite{serre1977linear}), character evaluation is equivalent to computing the number of skew tableaux of a certain shape. It is thus interesting for us to study skew tableaux. \subsection{Jeu de taquin} The jeu de taquin is a bijection between skew tableaux of different shapes. It was first introduced by Sch\"utzenberger and has proved to be a powerful tool in the combinatorial representation theory of the symmetric group.
Its applications include the Sch\"utzenberger involution and the Littlewood-Richardson rule (\textit{c.f.} \cite{stanley2001enumerative} for both), as well as a bijective proof of Stanley's hook formula (\textit{c.f.} \cite{krattenthaler1999another}). An introduction to the jeu de taquin can be found in Appendix~A of \cite{stanley2001enumerative}. We now define the building blocks of the jeu de taquin on skew tableaux, which are local exchanges of entries in the tableaux. Given a skew tableau $T$ with a distinguished entry $*$, the \emph{in-coming step} permutes $*$ with one of its ``inward'' neighbors, the ones immediately below or to the left, while conserving the increasing conditions of skew tableaux. This is always possible, as shown on the left side of Figure~\ref{fig:jeu-de-taquin}. The \emph{out-going step} is defined similarly, by exchange with entries immediately above or to the right. Figure~\ref{fig:jeu-de-taquin} illustrates the precise rule for both kinds of steps. We verify that in-coming steps are exactly the reverse of out-going steps. We now define the \emph{in-coming slide} of the distinguished entry $*$ as successive applications of in-coming steps to $*$ until it no longer has a neighbor below or to the left. Since in-coming steps are reversible, given the distinguished entry and the resulting skew tableau, we can also reverse an in-coming slide. Therefore the in-coming slide, which is a global operation on tableaux, is also reversible. \begin{figure} \caption{In-coming step (a) and out-going step (b) in the jeu de taquin} \label{fig:jeu-de-taquin} \end{figure} We now give a bijection that relates standard tableaux and skew tableaux using the jeu de taquin.
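A minimal computational model of these steps may help fix the conventions: below, a skew tableau is a dict from cells $(\text{row}, \text{col})$ to entries, the inward neighbors of $(i,j)$ are $(i-1,j)$ and $(i,j-1)$, and preserving the increasing conditions forces $*$ to swap with the larger of its inward neighbors. All names are illustrative, not from the paper.

```python
def incoming_slide(T, star):
    """In-coming slide of the distinguished entry `star` in the skew tableau T
    (a dict cell -> entry). Returns the new tableau and star's final cell."""
    T = dict(T)
    pos = next(p for p, v in T.items() if v == star)
    while True:
        i, j = pos
        inward = [p for p in ((i - 1, j), (i, j - 1)) if p in T]
        if not inward:
            return T, pos
        q = max(inward, key=lambda p: T[p])   # swap with the larger inward neighbor
        T[pos], T[q] = T[q], T[pos]
        pos = q

def is_skew_tableau(T):
    """Rows and columns strictly increasing among the cells present."""
    return all(T[(i, j)] > T[p]
               for (i, j) in T for p in ((i - 1, j), (i, j - 1)) if p in T)

# standard tableau of shape (3,2): row 0 = 1 2 4, row 1 = 3 5
T = {(0, 0): 1, (0, 1): 2, (0, 2): 4, (1, 0): 3, (1, 1): 5}
T2, hole = incoming_slide(T, 5)
assert hole == (0, 0)                  # 5 slid to a cell with no inward neighbor
rest = {p: v for p, v in T2.items() if p != hole}
assert is_skew_tableau(rest)           # removing * leaves a valid skew tableau
```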
\begin{lem} \label{lem:skew-bijection} For a partition $\lambda \vdash n$ and an integer $k > 0$, the jeu de taquin gives a bijection between the following two sets: \begin{itemize} \item the set of tuples $(T, a_1, \ldots, a_k)$, where $T$ is a standard tableau of shape $\lambda$, and the $a_i$ are distinct integers between $1$ and $n$; \item the set of tuples $(T_0, T_1, a_1, \ldots, a_k)$, where $T_0$ is a skew tableau of shape $\lambda / \mu$ for a certain partition $\mu \vdash k$ with entries from $1$ to $n-k$, $T_1$ is a standard tableau of shape $\mu$ with entries from $1$ to $k$, and the $a_i$ are distinct integers between $1$ and $n$. \end{itemize} \end{lem} \begin{proof} We apply the in-coming slide to $a_1, \ldots, a_k$ successively on $T$. We then obtain a skew tableau $T_0'$ of shape $\lambda / \mu$ for a certain partition $\mu \vdash k$, together with a standard tableau $T_1$ of shape $\mu$ with entries from $1$ to $k$ that records the order in which cells were vacated. The entries in $T_0'$ are all integers from $1$ to $n$ except the $a_i$, but since all the $a_i$ are known, we can renumber the entries in $T_0'$ to produce a skew tableau $T_0$ with entries from $1$ to $n-k$, and the reconstruction of $T_0'$ from $T_0$ is easy given all the $a_i$. Since in-coming slides are reversible, given $(a_1, \ldots, a_k), T_0, T_1$, we can reconstruct $T$. We conclude that it is indeed a bijection.
Figure~\ref{fig:skew-bijection} gives an example for $\lambda = (5,3,3,2)$, $\mu=(2)$. \end{proof} \begin{figure} \caption{Example of the bijection relating standard tableaux and skew tableaux via the jeu de taquin} \label{fig:skew-bijection} \end{figure} From the proof of the lemma above, we conclude that, to calculate a certain $f^{\lambda / \mu}$ for $\mu \vdash k$, it suffices to count the number of tuples $(T, (a_1, \ldots, a_k))$, with $T$ a standard tableau of shape $\lambda$, that are associated to some $(T_0, T_1, (a_1, \ldots, a_k))$ with $T_0$ of shape $\lambda / \mu$ via the jeu de taquin. To accomplish this task, we need to know more about the fine structure of the jeu de taquin. \section{Trace forest of the jeu de taquin} \label{sec:jdt} We will now define a structure related to the jeu de taquin on skew tableaux, called the ``trace forest''. It is essentially a directed graph whose vertices are the cells of the tableau, and it encapsulates the trace of the in-coming slide of each entry. \begin{defi} Given a skew tableau $T$, we define its \emph{trace forest}, a directed graph $F$ with the cells of $T$ as vertices, as follows. For a cell $w$ in $T$ with a neighbor below or to the left and $a$ its entry, we draw an arc from $w$ to the destination of the in-coming step for $a$. It is clear that no cycle can exist, so the constructed graph is a forest, rooted at the cells without a neighbor below or to the left. \end{defi} Figure~\ref{fig:trace-forest} gives some examples of skew tableaux and their trace forests. For a skew tableau $T$, let $F$ be its trace forest. By definition, the in-coming step at any cell $c \in T$ follows exactly the arc from $c$ in the trace forest. By a simple induction on $F$, we can see that the in-coming slide of the entry of any cell $c \in T$ coincides with the path from $c$ to its root in $F$, which gives the structure $F$ the name ``trace forest''.
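In the dict-based model of tableaux used above (cells $(\text{row},\text{col})$, inward neighbors $(i-1,j)$ and $(i,j-1)$, an in-coming step swapping with the larger inward neighbor), the trace forest is immediate to build; the example is hand-made and illustrative.

```python
def trace_forest(T):
    """Arc from each cell of the skew tableau T (a dict cell -> entry) that has an
    inward neighbor, pointing to the destination of its entry's in-coming step."""
    F = {}
    for (i, j) in T:
        inward = [p for p in ((i - 1, j), (i, j - 1)) if p in T]
        if inward:
            F[(i, j)] = max(inward, key=lambda p: T[p])
    return F

# standard tableau of shape (3,2): row 0 = 1 2 4, row 1 = 3 5
T = {(0, 0): 1, (0, 1): 2, (0, 2): 4, (1, 0): 3, (1, 1): 5}
F = trace_forest(T)
# the path from a cell to its root coincides with the in-coming slide of its entry:
path = [(1, 1)]
while path[-1] in F:
    path.append(F[path[-1]])
assert path == [(1, 1), (1, 0), (0, 0)]   # entry 5 slides over 3, then over 1
assert (0, 0) not in F                    # the unique root: no inward neighbor
```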
\begin{figure} \caption{Examples of skew tableaux and their trace forests} \label{fig:trace-forest} \end{figure} We now study how an in-coming slide changes the trace forest of a skew tableau. We begin with some definitions. For a cell $c$ in $F$, we call its child to the right the \emph{right child} and its child above the \emph{upper child}, noted $c_<$ and $c_\vee$ respectively. We denote by $F_<$ and $F_\vee$ the subtrees of $F$ rooted at $c_<$ and $c_\vee$ respectively. Let $T$ be a skew tableau, $S$ a subtree of its trace forest rooted at $a$, and $a_{<}, a_{\vee}$ its right and upper child (if they exist). For a cell $c \in S$, we denote by $T^{c}$ the tableau obtained by applying an in-coming slide to $c$, and by $F^{c}$ its trace forest. The cells of $S \setminus \{ a \}$ are partitioned into the following categories, as in Figure~\ref{fig:fine-structures}: \begin{itemize} \item $D_1(c)$ (resp. $D_2(c)$), the subtree rooted at the right child (resp. the upper child) of $c$; \item $P_1(c)$ (resp. $P_2(c)$), the set of ancestors of $c$ (including $c$) that issue a horizontal (resp. vertical) arc; \item $R(c)$ (resp. $A(c)$), the set of cells not in the categories above whose in-coming slide path lies below (resp. above) that of $c$. \end{itemize} We set $C_<(c,S) = D_1(c) \cup P_1(c) \cup R(c)$ and $C_\vee(c,S) = D_2(c) \cup P_2(c) \cup A(c)$. We can see that $C_<(c,S)$ and $C_\vee(c,S)$ divide the cells of $S \setminus \{ a \}$ into two groups. In the following lemma, we see that this grouping of cells is related to the structure of $F^{c}$ after the in-coming slide of $c$ is applied to $T$. \begin{figure} \caption{Fine structure of the trace forest} \label{fig:fine-structures} \end{figure} \begin{lem} \label{lem:trace-forest-separation} For a skew tableau $T$, let $S$ be a subtree of its trace forest, and $c \in S$. For $d \in C_<(c,S)$ (resp. $d \in C_\vee(c,S)$), the in-coming slide path of $d$ in $T^{c}$ lies to the right (resp.
above) of that of $c$ in $T$. \end{lem} \begin{proof} We only need to show that no arc goes between elements of $C_<(c,S)$ and $C_\vee(c,S)$ in the trace forest of $T^{c}$; this entails the lemma because of the relative positions of $C_<(c,S)$ and $C_\vee(c,S)$. We first prove that there is no arc from $C_<(c,S)$ to $C_\vee(c,S)$ in the trace forest of $T^{c}$. Let $d \in C_<(c,S)$ and $x$ the entry of $d$ in $T^{c}$, let $d_1$ be the cell immediately to the left of $d$, $d_2$ the one below $d$, and $d_0$ the one to the south-west. There are three cases: $d \in P_1(c)$, $d \in D_1(c)$ and $d \in R(c)$. For $d \in P_1(c)$ and $d \in R(c)$, the only possible way to have $d_1 \in C_\vee(c,S)$ is the case $d_1 \in P_2(c)$. For $d \in D_1(c)$, it suffices to treat the root $d=r_1$ of $D_1(c)$, and the only possible way to have $d_1 \in C_\vee(c,S)$ is still $d_1 \in P_2(c)$. Therefore, in all three cases, the arc of $d_1$ points to $d_0$ in $F$. Let $y$ be the entry in $d_1$ and $z$ the entry in $d_2$ in $T^{c}$. By the definition of $P_2(c)$, $d_0$ contains $y$, and this entails $y < z$ by the definition of skew tableaux. Therefore in $T^{c}$ the arc from $d$ points to $d_2$, according to the rule of the jeu de taquin, and we have the desired separation. The right side of Figure~\ref{fig:fine-structures} illustrates this argument. The proof that there is no arc from $C_\vee(c,S)$ to $C_<(c,S)$ in the trace forest of $T^{c}$ is similar. \end{proof} Lemma~\ref{lem:trace-forest-separation} can be seen as a clarification of an argument in Lemma~$\mathrm{HC}^{*}$ of \cite{krattenthaler1999another}. Using Lemma~\ref{lem:trace-forest-separation}, we give the following simple bijective proof of a well-known character formula (\textit{c.f.} \cite{ingram1950some, corteel2004content, lassalle2008explicit}). To the knowledge of the author, no purely bijective proof of this simple formula was previously known.
\begin{thm} \label{thm:mu-2} For a partition $\lambda \vdash n$, identifying $\lambda$ with its Ferrers diagram, we have \[ n(n-1)\chi^{\lambda}_{(2,1^{n-2})} = 2 f^{\lambda} \sum_{w \in \lambda} c(w). \] \end{thm} \begin{proof} Since Lemma~\ref{lem:character-as-skew-tableau} gives $\chi^{\lambda}_{(2,1^{n-2})} = f^{\lambda / (2)} - f^{\lambda / (1,1)}$, we want to count the difference between the number of skew tableaux of shape $\lambda / (2)$ and those of shape $\lambda / (1,1)$. Let $(T,a,b)$ be a tuple with $T$ a standard tableau of shape $\lambda$ and $a \neq b$ two entries of $T$. We let $F$ denote the only tree in the trace forest of $T$, and we let $T_1(T,a,b)$ denote the skew tableau associated to $(T,a,b)$ in the bijection of Lemma~\ref{lem:skew-bijection}; the small standard tableau of that bijection is determined by its shape in our case. Therefore, when going through all $(T,a,b)$, $T_1(T,a,b)$ goes over each skew tableau of shape $\lambda / (2)$ or $\lambda / (1,1)$ exactly $n(n-1)$ times. For entries $a,b$ with $a<b$, we consider the contribution of $T_1(T,a,b)$ and $T_1(T,b,a)$ to $f^{\lambda / (2)} - f^{\lambda / (1,1)}$. If $a$ is not an ancestor of $b$ in $T$, they have a common ancestor $c$, and by symmetry we may suppose that $b$ is in the subtree rooted at the upper child of $c$. From Lemma~\ref{lem:trace-forest-separation}, we know that $b \in A(a) \subset C_\vee(a,F)$ in $T^a$ and $a \in R(b) \subset C_<(b,F)$ in $T^b$, so $T_1(T,a,b)$ is of shape $\lambda / (1,1)$, while $T_1(T,b,a)$ is of shape $\lambda / (2)$. Thus this case does not contribute to $f^{\lambda / (2)} - f^{\lambda / (1,1)}$. The other case is that $a$ is an ancestor of $b$ in $T$. If the path from $b$ to $a$ ends with a horizontal arc pointing at $a$, then from Lemma~\ref{lem:trace-forest-separation} we have $b \in D_1(a) \subset C_<(a,F)$ in $T^a$ and $a \in P_1(b) \subset C_<(b,F)$ in $T^b$, so $T_1(T,a,b)$ and $T_1(T,b,a)$ are both of shape $\lambda / (2)$.
Otherwise, if the path from $b$ to $a$ ends with a vertical arc pointing at $a$, then $T_1(T,a,b)$ and $T_1(T,b,a)$ are both of shape $\lambda / (1,1)$. The path in the trace forest from $b$ at $(i, j)$ to the cell at $(0,0)$ consists of $i$ horizontal arcs and $j$ vertical arcs. Therefore, if we sum over all ancestors $a$ of $b$, among all $T_1(T,a,b)$ and $T_1(T,b,a)$, we have $2i$ tableaux of shape $\lambda / (2)$ and $2j$ of shape $\lambda / (1,1)$, which gives a contribution of $2c(b)$ to $f^{\lambda / (2)} - f^{\lambda / (1,1)}$. This contribution is independent of $T$. In the end, we have $n(n-1)(f^{\lambda / (2)} - f^{\lambda / (1,1)}) = 2 f^{\lambda} \sum_{w \in \lambda} c(w)$, which finishes the proof. \end{proof} In the proof above, there are two cases for the entries $a,b$: the case where $a$ is not an ancestor of $b$, which contributes nothing, and the other case, where contents appear naturally in the contribution. In the former case, $a,b$ play the same role, which reflects some kind of symmetry. For more general cases, we need to apply the jeu de taquin to several entries and count the skew tableaux obtained of a certain shape. It is thus desirable to extract similar symmetries. However, with more entries the case analysis quickly becomes intractable. To surmount this difficulty, it is natural to try to use the tree structure of the trace forest to implicitly extract the symmetry we want. \section{Character evaluation using trace forest} \label{sec:chara-eval} We will now use the notion of trace forest to calculate $f^{\lambda / \mu}$ for fixed small $\mu$. In \cite{corteel2004content} and \cite{lassalle2008explicit} (see also \cite{kerov1994polynomial}), it was proved that $\chi^{\lambda}_{\mu}$ can be expressed using so-called ``content evaluation''. By Lemma~\ref{lem:character-as-skew-tableau}, we know that $f^{\lambda / \mu}$ can also be expressed by such content evaluation.
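As a sanity check, the identity of Theorem~\ref{thm:mu-2} can be verified by brute force on small shapes, counting skew standard tableaux directly as linear extensions. The following Python sketch is ours (all helper names are assumptions, not notation from the paper); it uses the convention $c(w) = $ column minus row for a cell $w$, with $0$-indexed coordinates.

```python
def cells(lam):
    """Cells of the Ferrers diagram of lam, as (row, col), 0-indexed."""
    return [(i, j) for i, row_len in enumerate(lam) for j in range(row_len)]

def count_syt(cell_list):
    """Brute-force count of standard fillings of a set of cells: a cell can
    receive the next entry once its upper and left neighbours (when they
    belong to the set) are already filled."""
    shape = frozenset(cell_list)

    def rec(filled):
        if len(filled) == len(shape):
            return 1
        total = 0
        for (i, j) in shape - filled:
            if all(nb not in shape or nb in filled
                   for nb in ((i - 1, j), (i, j - 1))):
                total += rec(filled | {(i, j)})
        return total

    return rec(frozenset())

def check_mu2(lam):
    """Check n(n-1)*chi == 2 f^lam * (sum of contents), where the character
    chi^lam_(2,1^{n-2}) is computed as f^{lam/(2)} - f^{lam/(1,1)}."""
    lam_cells = cells(lam)
    n = len(lam_cells)
    f_lam = count_syt(lam_cells)
    f_2 = count_syt([c for c in lam_cells if c not in {(0, 0), (0, 1)}])
    f_11 = count_syt([c for c in lam_cells if c not in {(0, 0), (1, 0)}])
    content_sum = sum(j - i for i, j in lam_cells)
    return n * (n - 1) * (f_2 - f_11) == 2 * f_lam * content_sum
```

For instance, $\lambda=(3,1)$ gives $f^{\lambda}=3$, $f^{\lambda/(2)}=2$, $f^{\lambda/(1,1)}=1$ and content sum $2$, so both sides equal $12$.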
It is now interesting to study the interaction between content evaluation and the trace forest, and how it applies to character evaluation. In this section, we define a notion called the ``inductive form'' of functions on subtrees of the trace forest. It enables the computation of such functions by identification of their inductive forms. We also give the inductive form of several content evaluations. Then we proceed to the bijective counting of skew tableaux of different shapes using the jeu de taquin, and by identification of inductive forms, we obtain the expression of several $f^{\lambda / \mu}$ for general $\lambda$ and small $\mu$ in terms of content evaluation, which gives bijective proofs of various character evaluation formulae. \subsection{Content powersums} We start by defining various content powersums, functions related to contents, on subtrees of the trace forests of skew tableaux. For a skew tableau $T$, let $S$ be a subtree in its trace forest and $a$ its root. We denote by $c_{a}(w)$ the \emph{relative content} of a cell $w$ w.r.t. the root $a$, \textit{i.e.} $a$ is taken as the origin when computing the relative content $c_{a}(w)$. We have $c_{a}(w) = c(w) - c(a)$, where $c$ stands for the usual content. For any partition $\alpha=(\alpha_1, \ldots, \alpha_l) \vdash k$, we define the \emph{content power sums} of $S$, denoted by $cp^{\alpha}(S)$, as follows, with the convention $0^0=1$. \[ cp^{\alpha}(S) = \prod_{i=1}^{l} \sum_{w \in S} c_a(w)^{\alpha_i - 1} \] For any standard tableau $T$ of shape $\lambda$ with $F_T$ the only tree in its trace forest, we denote $cp^{\alpha}(\lambda) = cp^{\alpha}(F_T)$. The definition of $cp^{\alpha}$ extends readily to any subset of cells in $T$.
For any subset of cells $C$ and any cell $a$, we define $cp^{\alpha}_{a}(C)$ as follows. \[ cp^{\alpha}_{a}(C) = \prod_{i=1}^{l} \sum_{w \in C} c_a(w)^{\alpha_i - 1} \] We note that the subscript $a$ in $cp^{\alpha}_a$ represents the ``origin'' for the relative contents used in the function. When evaluating over a tree, we omit the subscript since we always take the root as origin. We notice that $cp^{(k)}$ is the sum of the $(k-1)$-st powers of the relative contents. We can see that, for $\alpha = (\alpha_1, \ldots, \alpha_l)$ and $S$ a subtree of the trace forest of some standard tableau, the functions $cp^{\alpha}$ and $cp^{\alpha}_a$ are the powersum function $p_{(\alpha_1 - 1, \ldots, \alpha_l - 1)}$ evaluated over multisets of contents, multiplied by a polynomial in $|S|$. We recall that the powersum functions linearly span the algebra of symmetric functions, denoted by $\Lambda$ (\textit{cf.} \cite{stanley2001enumerative}, Chapter 7). Therefore, our $cp^{\alpha}_a$ also inherit an algebra structure for a fixed $a$, denoted by $\Lambda_c$. When evaluated on the whole standard tableau of shape $\lambda$, $\Lambda_c$ is exactly the nice algebra generated by the shifted symmetric functions (\textit{cf.} \cite{corteel2004content, kerov1994polynomial}). We will now show that the collection over all $a$ also forms an algebra, by showing that we can change the origin $a$. We begin with some definitions. We recall that for a cell $a$ in a Young diagram, we denote by $a_<$ the cell to its right and by $a_\vee$ the cell above. We define two linear operators $\Gamma_+$ and $\Gamma_-$ as follows. \[ \Gamma_+ cp^{(k)}_a = cp^{(k)}_{a_<}, \quad \Gamma_- cp^{(k)}_a = cp^{(k)}_{a_\vee} \] By requiring $\Gamma_+$ and $\Gamma_-$ to be compatible with multiplication, \textit{i.e.} $\Gamma_+ (fg) = (\Gamma_+ f)(\Gamma_+ g)$ and the same for $\Gamma_-$, these two operators are thus defined over the whole of $\Lambda_c$.
In fact, the algebra $\Lambda_c$ is stable under these operators $\Gamma_+$ and $\Gamma_-$, via the following lemma. \begin{lem}\label{lem:operator-gamma} For any integer $k \geq 1$, the result of applying $\Gamma_+$ and $\Gamma_-$ is as follows. \[ \Gamma_+ cp^{(k)}_a = \sum_{i=0}^{k - 1} (-1)^{i} \binom{k-1}{i} cp^{(k-i)}_a, \quad \Gamma_- cp^{(k)}_a = \sum_{i=0}^{k-1} \binom{k-1}{i} cp^{(k-i)}_a \] Therefore $\Lambda_c$ is stable under $\Gamma_+$ and $\Gamma_-$. Moreover, $\Gamma_+ \Gamma_- = \Gamma_- \Gamma_+ = \mathrm{id}$. \end{lem} \begin{proof} We have the simple observation that, for any cells $w, a$, we have $c_{a_<}(w) + 1 = c_a(w) = c_{a_\vee}(w) - 1$. This is simply due to the change of origin. Now for any subset $C$ of cells, we have: \[ (\Gamma_+ cp^{(k)}_a)(C) = \sum_{w \in C} c_{a_<}^{k-1}(w) = \sum_{w \in C} (c_{a}(w)-1)^{k-1} = \sum_{i=0}^{k-1} (-1)^{i} \binom{k-1}{i} cp^{(k-i)}_a(C) \] \[ (\Gamma_- cp^{(k)}_a)(C) = \sum_{w \in C} c_{a_\vee}^{k-1}(w) = \sum_{w \in C} (c_{a}(w)+1)^{k-1} = \sum_{i=0}^{k-1} \binom{k-1}{i} cp^{(k-i)}_a(C) \] For $\Gamma_+ \Gamma_- = \Gamma_- \Gamma_+ = \mathrm{id}$, we only need to notice that $c_{(a_<)_\vee}(w) = c_a(w) = c_{(a_\vee)_<}(w)$ for any cell $w$. \end{proof} We notice that, for a function $f \in \Lambda_c$, when viewed as a function on a partition $\lambda$, $f$ is a shifted symmetric function in $\lambda_1, \lambda_2, \ldots$. The fact that shifted symmetric functions form a nice algebra hints that $\Lambda_c$ also has a nice algebraic structure. \subsection{Content evaluation and inductive form} We will need some more definitions. For $S$ a subtree in a trace forest rooted at $r$ and any partition $\alpha \vdash k$, we define $\mycpr{\alpha}(S)$ (resp. $\mycpa{\alpha}(S)$) as follows. \[ \mycpr{\alpha}(S) = cp^{\alpha}_r(S_<), \quad \mycpa{\alpha}(S) = cp^{\alpha}_r(S_\vee) \] We now define a transformation called the \emph{inductive form}.
For $S$ a subtree of a trace forest rooted at $a$, we denote by $S_<, S_\vee$ the subtrees rooted at $a_<$ and $a_\vee$. Let $f$ be a real-valued function on subtrees of a trace forest; its inductive form is defined by $(\Delta f)(S) = f(S) - f(S_<) - f(S_\vee)$. The transformation $\Delta$ is clearly linear. \begin{lem} \label{lem:inductive-identify} Let $f, g$ be two functions on subtrees of trace forests. If $f(\varnothing) = g(\varnothing)$ and $\Delta f = \Delta g$, then we have $f=g$. \end{lem} \begin{proof} Since $\Delta$ is linear, for any $S$, $(f-g)(S) = (f-g)(S_<) + (f-g)(S_\vee)$, and we conclude the proof by structural induction. \end{proof} Now we compute the inductive form of $cp^{\alpha}$. It will be used later to identify characters, which can be seen as functions on the trace forest, as sums of $cp^{\alpha}$. \begin{prop} \label{prop:cp-inductive-form} We have the following equalities for any subtree $S$ in a trace forest. \begin{align} cp^{(1)}(S) = 1 + \mycpr{1}(S) + \mycpa{1}(S); &\quad \forall k>1, cp^{(k)}(S) = \mycpr{k}(S) + \mycpa{k}(S) \\ cp^{(k)}(S_<) = \sum_{i=0}^{k-1} (-1)^{i} \binom{k-1}{i} \mycpr{k-i}(S); &\quad cp^{(k)}(S_\vee) = \sum_{i=0}^{k-1} \binom{k-1}{i} \mycpa{k-i}(S) \end{align} Furthermore, for any $\alpha \vdash d$, $\Delta cp^{\alpha}$ can be expressed as a polynomial in $\mycpr{k}$ and $\mycpa{k}$. \end{prop} \begin{proof} The equalities in (1) come directly from the definition of $cp^{(k)}$. For (2), we notice that $S_<$ is rooted at $a_<$, thus $cp^{(k)}(S_<) = (\Gamma_+ cp^{(k)}_a)(S_<)$, and we conclude by Lemma~\ref{lem:operator-gamma}. For $cp^{(k)}(S_\vee)$ we similarly have $cp^{(k)}(S_\vee) = (\Gamma_- cp^{(k)}_a)(S_\vee)$. By (1) and (2), for any subtree $S$ of a trace forest, we can express $cp^{\alpha}(S)$, $cp^{\alpha}(S_<)$ and $cp^{\alpha}(S_\vee)$ as polynomials in $\mycpr{k}(S)$ and $\mycpa{k}(S)$.
We finish the proof with the fact that $(\Delta cp^{\alpha})(S) = cp^{\alpha}(S) - cp^{\alpha}(S_<) - cp^{\alpha}(S_\vee)$. \end{proof} Here are some examples of the inductive forms of some $cp^{\alpha}$. For simplicity, we consider $cp^{\alpha}, \mycpr{\alpha}, \mycpa{\alpha}$ as functions and omit their arguments. \begin{align*} \Delta cp^{(1)} &= 1, \quad \Delta cp^{(2)} = \mycpr{1} - \mycpa{1}, \quad \Delta cp^{(1,1)} = 2 \mycpr{1} \mycpa{1} + 2 \mycpr{1} + 2 \mycpa{1} + 1 \\ \Delta cp^{(3)} &= 2 \mycpr{2} - 2 \mycpa{2} - \mycpr{1} - \mycpa{1} \\ \Delta cp^{(2,1)} &= \mycpr{2} \mycpa{1} + \mycpr{1} \mycpa{2} + \mycpr{2} + \mycpa{2} + \mycpr{1,1} - \mycpa{1,1} \\ \Delta cp^{(1,1,1)} &= 3 \mycpr{1,1} \mycpa{1} + 3 \mycpr{1} \mycpa{1,1} + 3 \mycpr{1,1} + 6 \mycpr{1} \mycpa{1} + 3 \mycpa{1,1} + 3 \mycpr{1} + 3 \mycpa{1} + 1 \end{align*} \subsection{Inductive counting of skew tableaux} It is now natural to try to count skew tableaux of different shapes using structural induction. For integers $n,k \geq 0$, we denote the \emph{falling factorial} by $(n)_k = n(n-1) \cdots (n-k+1)$; the number of $k$-tuples of distinct elements among $n$ is exactly $(n)_k$. Given a standard tableau of shape $\lambda$ and a small partition $\mu$, we now try to count inductively the number of tuples that lead to a skew tableau of shape $\lambda / \mu$ through the bijection in Lemma~\ref{lem:skew-bijection}. We now describe a general scheme for computing such quantities. For a standard tableau $T$, we will see that the number of corresponding skew tableaux can be expressed as a sum over all cells $a \in T$ of some content evaluation on a certain component of $T^a$. We want to compute such quantities inductively for all subtrees in the trace forest of $T$. For such a subtree $S$ rooted at $f$, instead of computing directly the sum we want, we try to find the inductive form of that sum. The idea is that the sum over $a \in S$ splits into three cases: $a=f$, $a \in S_<$, $a \in S_\vee$.
The first case is readily expressed as content evaluations of $S_<$ and $S_\vee$, and the latter two cases consist of sums of the same type as the one we are computing. They can, hopefully, also be reduced to some content evaluations of $S_<$ and $S_\vee$. We thus obtain the inductive form, and by comparing with those of the $cp^{\alpha}$, we can identify the sum as a linear combination of content evaluations of $S$. Before proceeding to examples of application of our scheme, we first deal with some definitions and facts we need. The \emph{conjugate} of a partition $\lambda$, denoted by $\lambda^{\dagger}$, is the partition whose Ferrers diagram is that of $\lambda$ flipped along the line $y=x$. \begin{lem} \label{lem:induction-case} For a skew tableau $T$, a subtree $S$ in its trace forest rooted at $f$, and $a \in S$, we have 3 cases. \begin{itemize} \item $a=f$. In this case, $C_<(a,S)=S_<$, $C_\vee(a,S)=S_\vee$. \item $a \in S_<$. In this case, $C_<(a,S)=C_<(a,S_<) \cup \{ f_< \}$, $C_\vee(a,S)=C_\vee(a,S_<) \cup S_\vee$. \item $a \in S_\vee$. In this case, $C_<(a,S)=C_<(a,S_\vee) \cup S_<$, $C_\vee(a,S)=C_\vee(a,S_\vee) \cup \{ f_\vee \}$. \end{itemize} \end{lem} \begin{proof} It follows from Lemma~\ref{lem:trace-forest-separation}. \end{proof} Since we will evaluate functions in $\Lambda_c$ on disjoint unions of sets, we need the following proposition to ``decompose'' the evaluation. \begin{prop}\label{prop:union-eval} For a partition $\alpha = (\alpha_1, \ldots, \alpha_l)$, two disjoint subsets $A, B$ of cells in a tableau $T$ and $a$ an arbitrary cell in $T$, we have \[ cp^{\alpha}_a(A \cup B) = \sum_{\alpha^{(1)} \uplus \alpha^{(2)} = \alpha} \left( \prod_{i \geq 1} \binom{m(\alpha, i)}{m(\alpha^{(1)},i)} \right) cp^{\alpha^{(1)}}_a(A) cp^{\alpha^{(2)}}_a(B). \] Here $\uplus$ means the union of multisets, and $m(\alpha, i)$ (resp. $m(\alpha^{(1)},i)$) is the multiplicity of $i$ in $\alpha$ (resp. $\alpha^{(1)}$).
\end{prop} \begin{proof} It follows from the definition of $cp^{(k)}_a$ that \begin{align*} cp^{\alpha}_a(A \cup B) &= \prod_{i=1}^{l} \left( cp^{(\alpha_i)}_a(A) + cp^{(\alpha_i)}_a(B) \right) \\ &= \sum_{\alpha^{(1)} \uplus \alpha^{(2)} = \alpha} \left( \prod_{i \geq 1} \binom{m(\alpha, i)}{m(\alpha^{(1)},i)} \right) cp^{\alpha^{(1)}}_a(A) cp^{\alpha^{(2)}}_a(B) \end{align*} \end{proof} We will now investigate some relations on partitions that can simplify some calculations. \begin{prop} \label{lem:young-lattice} For a partition $\mu$, let $P(\mu)$ be the set of partitions whose Ferrers diagram can be obtained by adding a cell to that of $\mu$. For any partition $\lambda$, $f^{\lambda / \mu} = \sum_{\mu' \in P(\mu)} f^{\lambda / \mu'}$. \end{prop} \begin{proof} It follows from the classification of all skew tableaux of shape $\lambda / \mu$ by the cell containing $1$. \end{proof} \begin{lem} \label{lem:conjugate-mu} For a partition $\mu$ and its conjugate $\mu^{\dagger}$, if there exists a multivariate function $F$ such that for any $\lambda$ we have $f^{\lambda / \mu} = F(cp^{(1)}(\lambda), cp^{(2)}(\lambda), \ldots, cp^{(i)}(\lambda), \ldots)$, then $f^{\lambda / \mu^{\dagger}} = F(cp^{(1)}(\lambda), -cp^{(2)}(\lambda), \ldots, (-1)^{i-1}cp^{(i)}(\lambda), \ldots)$. \end{lem} \begin{proof} By flipping skew tableaux along $y=x$, we see that $f^{\lambda / \mu} = f^{\lambda^{\dagger} / \mu^{\dagger}}$. We then conclude the proof by observing that $cp^{(k)}(\lambda) = (-1)^{k-1} cp^{(k)}(\lambda^{\dagger})$. \end{proof} With these simple facts, we now proceed to examples of computing $f^{\lambda / \mu}$ for a fixed $\mu$ with our scheme. We recall that, for a cell $r$ in a trace forest, we denote by $r_<$ the cell to the right of $r$, and by $r_\vee$ the cell above $r$.
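Both partition facts above can be checked mechanically on small shapes. The following brute-force sketch is ours (helper names are assumptions); it tests the branching identity of Proposition~\ref{lem:young-lattice} and the conjugation symmetry $f^{\lambda / \mu} = f^{\lambda^{\dagger} / \mu^{\dagger}}$, assuming in each check that every $\mu' \in P(\mu)$ is contained in $\lambda$.

```python
def cells(lam):
    """Cells of the Ferrers diagram of lam, as (row, col), 0-indexed."""
    return {(i, j) for i, row_len in enumerate(lam) for j in range(row_len)}

def count_syt(cell_set):
    """Brute-force number of standard fillings (linear extensions)."""
    shape = frozenset(cell_set)

    def rec(filled):
        if len(filled) == len(shape):
            return 1
        return sum(rec(filled | {(i, j)})
                   for (i, j) in shape - filled
                   if all(nb not in shape or nb in filled
                          for nb in ((i - 1, j), (i, j - 1))))

    return rec(frozenset())

def f_skew(lam, mu):
    """Number of standard skew tableaux of shape lam/mu (mu inside lam)."""
    return count_syt(cells(lam) - cells(mu))

def add_cell(mu):
    """The set P(mu): partitions obtained by adding one cell to mu."""
    mu = list(mu)
    out = []
    for i in range(len(mu) + 1):
        row = mu[i] if i < len(mu) else 0
        prev = mu[i - 1] if i > 0 else float("inf")
        if row + 1 <= prev:  # adding a cell to row i keeps a partition
            out.append(tuple(mu[:i] + [row + 1] + mu[i + 1:]))
    return out

def conjugate(lam):
    """Conjugate partition (flip the diagram along y = x)."""
    return tuple(sum(1 for r in lam if r > j) for j in range(lam[0])) if lam else ()
```

For example, $P((1)) = \{(2), (1,1)\}$ and $f^{(3,2)/(1)} = f^{(3,2)/(2)} + f^{(3,2)/(1,1)} = 3 + 2 = 5$.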
\begin{prop} \label{prop:tableau-2} For a partition $\lambda \vdash n$, \[ (n)_2 f^{\lambda / (2)} / f^{\lambda} = \frac{1}{2} cp^{(1,1)}(\lambda) + cp^{(2)}(\lambda) - \frac{1}{2} cp^{(1)}(\lambda) = n(n-1)/2 + \sum_{w \in \lambda} c(w). \] \end{prop} \begin{proof} For a subtree $S$ of a trace forest rooted at $r$, we define the following function $G_{(2)}(S) = \sum_{a \in S} cp^{(1)}_{r_<}(C_<(a,S))$. For a standard tableau $T$ and its only tree $F_T$ in its trace forest, by Lemma~\ref{lem:trace-forest-separation}, $G_{(2)}(F_T)$ is the number of tuples $(a,b)$ such that $(T,a,b)$ leads to a skew tableau of shape $\lambda / (2)$ as in Lemma~\ref{lem:skew-bijection}. We notice that \[G_{(2)}(S) = \sum_{a \in S} (\Gamma_+ cp^{(1)}_{r})(C_<(a,S)) = \sum_{a \in S} cp^{(1)}_{r}(C_<(a,S)). \] We now compute the inductive form of $G_{(2)}$ using Lemma~\ref{lem:induction-case} and Proposition~\ref{prop:union-eval}. \begin{align*} (\Delta G_{(2)})(S) &= cp^{(1)}_{r}(C_<(r,S)) + \sum_{a \in S_<} \left( cp^{(1)}_{r}(C_<(a,S)) - (\Gamma_+ cp^{(1)}_{r})(C_<(a,S_<)) \right) \\ &\quad + \sum_{a \in S_\vee} \left( cp^{(1)}_{r}(C_<(a,S)) - (\Gamma_- \Gamma_+ cp^{(1)}_{r})(C_<(a,S_\vee)) \right) \\ &= \mycpr{1} + \sum_{a \in S_<} 1 + \sum_{a \in S_\vee} \mycpr{1} \\ &= 2\mycpr{1} + \mycpr{1}\mycpa{1} = \left( \Delta \left( \frac{1}{2} cp^{(1,1)} + cp^{(2)} - \frac{1}{2} cp^{(1)} \right) \right)(S) \end{align*} We then have $G_{(2)} = \frac{1}{2} cp^{(1,1)} + cp^{(2)} - \frac{1}{2} cp^{(1)}$ by Lemma~\ref{lem:inductive-identify}, and $G_{(2)}(F_T)$ is thus independent of $T$. By summing over all $T$, we conclude the proof. \end{proof} For $f^{\lambda / (1,1)}$, the formula can be found either with the same approach, or with Proposition~\ref{lem:young-lattice} applied to $\mu=(1)$, or with Lemma~\ref{lem:conjugate-mu}. This proposition entails Theorem~\ref{thm:mu-2}, but without explicitly using any symmetry. We have the first evidence that our scheme may work in more general cases.
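The closed form of Proposition~\ref{prop:tableau-2} can again be tested by brute force on small shapes. The sketch below is ours (helper names are assumptions); it checks the identity with denominators cleared, so that all arithmetic stays in integers.

```python
def cells(lam):
    """Cells of the Ferrers diagram of lam, as (row, col), 0-indexed."""
    return [(i, j) for i, row_len in enumerate(lam) for j in range(row_len)]

def count_syt(cell_list):
    """Brute-force number of standard fillings (linear extensions)."""
    shape = frozenset(cell_list)

    def rec(filled):
        if len(filled) == len(shape):
            return 1
        return sum(rec(filled | {(i, j)})
                   for (i, j) in shape - filled
                   if all(nb not in shape or nb in filled
                          for nb in ((i - 1, j), (i, j - 1))))

    return rec(frozenset())

def check_tableau2(lam):
    """Check (n)_2 f^{lam/(2)} == f^lam * (n(n-1)/2 + sum of contents)."""
    lam_cells = cells(lam)
    n = len(lam_cells)
    f_lam = count_syt(lam_cells)
    f_skew2 = count_syt([c for c in lam_cells if c not in {(0, 0), (0, 1)}])
    rhs = n * (n - 1) // 2 + sum(j - i for i, j in lam_cells)
    return n * (n - 1) * f_skew2 == f_lam * rhs
```

For instance, $\lambda=(3,2)$ gives $f^{\lambda}=5$ and $f^{\lambda/(2)}=3$, so both sides equal $60$.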
We now investigate the next case $\mu = (3)$. \begin{prop} \label{prop:tableau-3} For a partition $\lambda \vdash n$, \[ (n)_{3} f^{\lambda / (3)} / f^{\lambda} = \frac{1}{6} cp^{(1,1,1)}(\lambda) + cp^{(2,1)}(\lambda) + cp^{(3)}(\lambda) - cp^{(1,1)}(\lambda) - 2cp^{(2)}(\lambda) + \frac{5}{6} cp^{(1)}(\lambda) \] \end{prop} \begin{proof} For a subtree $S$ of a trace forest rooted at $r$, we define a function $G_{(3)}(S) = \sum_{a \in S} G_{(2)}(S^{a}_<)$. For a standard tableau $T$ and its only tree $F_T$ in its trace forest, $G_{(3)}(F_T)$ is the number of tuples $(a,b,c)$ such that $(T,a,b,c)$ leads to a skew tableau of shape $\lambda / (3)$ as in Lemma~\ref{lem:skew-bijection}. We first rewrite $G_{(2)}$ using Proposition~\ref{prop:tableau-2} and Lemma~\ref{lem:operator-gamma}. We notice that \begin{align*} G_{(2)}(S) &= \sum_{a \in S} \left(\Gamma_+ \left( \frac{1}{2} cp^{(1,1)} + cp^{(2)} - \frac{1}{2} cp^{(1)} \right) \right) (C_<(a,S)) \\ &= \sum_{a \in S} \left( \frac{1}{2} cp^{(1,1)} + cp^{(2)} - \frac{3}{2} cp^{(1)} \right) (C_<(a,S)). \end{align*} We now compute the inductive form of $G_{(3)}$ using Lemma~\ref{lem:induction-case} and Proposition~\ref{prop:union-eval}.
\begin{align*} (\Delta G_{(3)})(S) &= \frac{1}{2} \mycpr{1,1} + \mycpr{2} - \frac{3}{2} \mycpr{1} + 2 \sum_{a \in S_<} cp^{(1)}_{r}(C_<(a,S_<)) \\ &\quad + (\mycpr{1}-1) \sum_{a \in S_\vee} cp^{(1)}_{r}(C_<(a,S_\vee)) + \sum_{a \in S_\vee} \left( \frac{1}{2} \mycpr{1,1} + \mycpr{2} - \frac{1}{2} \mycpr{1} \right) \\ &= \frac{1}{2} \mycpr{1,1} \mycpa{1} + \frac{1}{2} \mycpr{1} \mycpa{1,1} + \mycpr{2} \mycpa{1} + \mycpr{1} \mycpa{2} - \mycpr{1} \mycpa{1} + 3 \mycpr{2} - \mycpa{2} \\ &\quad + \frac{3}{2} \mycpr{1,1} - \frac{1}{2} \mycpa{1,1} - \frac{9}{2} \mycpr{1} - \frac{1}{2} \mycpa{1} \\ &= \left( \Delta \left( \frac{1}{6} cp^{(1,1,1)} + cp^{(2,1)} + cp^{(3)} - cp^{(1,1)} - 2 cp^{(2)} + \frac{5}{6} cp^{(1)} \right) \right)(S) \end{align*} We conclude the proof by Lemma~\ref{lem:inductive-identify} as in Proposition~\ref{prop:tableau-2}. \end{proof} Combining this proposition with Proposition~\ref{lem:young-lattice} and Proposition~\ref{prop:tableau-2}, we can also compute $f^{\lambda / (2,1)}$ and $f^{\lambda / (1,1,1)}$, and we obtain the character evaluated on a $3$-cycle for $\lambda \vdash n$: \[ (n)_3 \chi^{\lambda}_{(3,1^{n-3})} / f^{\lambda} = 3cp^{(3)}(\lambda) - \frac{3}{2} cp^{(1,1)}(\lambda) + \frac{3}{2} cp^{(1)}(\lambda) = 3\sum_{w \in \lambda} (c(w))^2 - 3\binom{n}{2}. \] This gives another example of a bijective proof of a character evaluation formula via our scheme. Still following our scheme, with some more tedious but \emph{automated} computation, we obtain the following result for $f^{\lambda / (4)}$.
\begin{prop} \label{prop:tableau-4} For a partition $\lambda \vdash n$, \begin{align*} (n)_{4} f^{\lambda / (4)} / f^{\lambda} &= \frac{1}{24} cp^{(1,1,1,1)}(\lambda) + \frac{1}{2} cp^{(2,1,1)}(\lambda) + \frac{1}{2} cp^{(2,2)}(\lambda) + cp^{(3,1)}(\lambda) + cp^{(4)}(\lambda) \\ &\quad - \frac{3}{4} cp^{(1,1,1)}(\lambda) - \frac{9}{2} cp^{(2,1)}(\lambda) - \frac{9}{2} cp^{(3)}(\lambda) + \frac{71}{24} cp^{(1,1)}(\lambda) + 6 cp^{(2)}(\lambda) - \frac{9}{4} cp^{(1)}(\lambda) \end{align*} \end{prop} Here we omit the proof, which is essentially a long (but automatic) computation of inductive forms. Using Lemma~\ref{lem:conjugate-mu} we also have the expression of $f^{\lambda / (1,1,1,1)}$, and by Proposition~\ref{lem:young-lattice} applied to $\mu = (3)$ and $\mu = (1,1,1)$, we obtain the expressions of $f^{\lambda / (3,1)}$ and $f^{\lambda / (2,1,1)}$. We can thus compute $\chi^{\lambda}_{(4,1^{n-4})}$ using Lemma~\ref{lem:character-as-skew-tableau} and we have the following formula: \[ (n)_4 \chi^{\lambda}_{(4,1^{n-4})} / f^{\lambda} = 4 \sum_{w \in \lambda} (c(w))^3 + 4(2n-3) \sum_{w \in \lambda} c(w) \] This is indeed a bijective proof of the character evaluation formula we want. Furthermore, since we can also compute $f^{\lambda / (2,2)}$ using Proposition~\ref{lem:young-lattice}, we can also obtain a bijective proof for the character $\chi^{\lambda}_{(2,2,1^{n-4})}$. As a remark, we notice that our proofs above never depend on the precise structure of the trace forest $F_T$, but only on the fact that it is a binary tree. Even though the calculations above seem tedious, they can be totally automated using all the previous computational lemmas and propositions. \section{Discussion} In this article, using the notion of ``trace forest'', which reflects a fine structure in the famous jeu de taquin, we give a simple bijective proof of Theorem~\ref{thm:mu-2} through counting skew tableaux of different shapes.
Inspired by this simple proof, we sketch a scheme for counting skew tableaux of more general shapes in an elementary way, using structural induction on the trace forest, and this scheme also leads to combinatorial proofs of several more sophisticated character evaluation formulae. It is also interesting that our proofs can actually be refined to hold for each standard tableau, which still lacks a good explanation. Empirically, our scheme seems to work for $f^{\lambda / \mu}$ when $\mu$ is a hook, and we have the following conjecture. \begin{conj} Our scheme always gives the formula of $f^{\lambda / \mu}$ in terms of contents when $\mu$ is a hook. \end{conj} To prove this conjecture, we have to explain two ``miracles'' that occur in computations following our scheme. First, in the computation of the inductive form, we have sums over $a \in F_<$ and $a \in F_\vee$, but there is no guarantee that these sums can be expressed in $\mycpr{k}$ and $\mycpa{k}$. Second, when we obtain the inductive form in $\mycpr{k}$ and $\mycpa{k}$, we always find that it is the inductive form of some linear combination of the $cp^{\alpha}$. This can be seen as a direct consequence of the fact that all character evaluations can be expressed in $cp^{\alpha}$, proved in \cite{corteel2004content} using algebraic methods, but no combinatorial proof is known. Unfortunately, for general $\mu$ our scheme does not always work. For instance, our scheme fails to compute $f^{\lambda / (2,2)}$ directly. However, we can compute $f^{\lambda / (2,2)} + f^{\lambda / (3,1)}$ and $f^{\lambda / (3,1)}$ with our scheme, which leads to an expression of $f^{\lambda / (2,2)}$. We have the intuition that our scheme, combined with Proposition~\ref{lem:young-lattice} and Lemma~\ref{lem:conjugate-mu}, would give enough linear combinations to work out all $f^{\lambda / \mu}$. We proved that this approach works for any $\mu \vdash k \leq 4$, and it extends easily to $\mu \vdash k \leq 6$.
More general cases need further investigation. When passing from $f^{\lambda / \mu}$ to $\chi^{\lambda}_{\mu, 1^{n-k}}$, we notice that $\chi^{\lambda}_{\mu, 1^{n-k}}$ often has a much simpler form, due to some cancellations in the sum. Thus it might be easier to deal directly with the inductive form of characters, and we might then see the combinatorial reason behind these cancellations. \end{document}
\begin{document} \title[Shooting algorithm for state-constrained control-affine problems]{Well-posedness of the shooting algorithm for control-affine problems \\ with a scalar state constraint} \author[M.S. Aronna and F. Bonnans and B.S. Goh] {M. Soledad Aronna\address{Escola de Matem\'atica Aplicada FGV EMAp, 22250-900 Rio de Janeiro, Brazil}\email{[email protected]} and J. Fr\'{e}d\'{e}ric Bonnans\address{INRIA Saclay, L2S, CentraleSupelec, 91190 Gif-sur-Yvette, France} \email{[email protected]} and Bean San Goh\address{School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Perth, Australia}\email{[email protected]} } \thanks{The first author was supported by FAPERJ (Brazil) through the {\em Jovem Cientista do Nosso Estado} Program and by CNPq (Brazil) through the {\em Universal} Program and the Productivity Scholarship. The second author was supported by the FiME Lab Research Initiative (Institut Europlace de Finance) and by the PGMO program. } \maketitle \begin{abstract} We deal with a control-affine problem with a scalar control subject to bounds, a scalar state constraint and endpoint constraints of equality type. For the numerical solution of this problem, we propose a shooting algorithm and provide a sufficient condition for its local convergence. We exhibit an example that illustrates the theory. \end{abstract} \section{Introduction} In this article we deal with an optimal control problem governed by the dynamics \begin{equation*} \dot x_t = f_0(x_t) + u_t f_1(x_t)\quad \text{for a.a. } t\in [0,T], \end{equation*} subject to the control bounds $$ u_{\rm min} \leq u_t \leq u_{\rm max}, $$ with $u_{\rm min} < u_{\rm max}$, endpoint constraints of the form $$ \Phi(x_0,x_T) = 0, $$ and a scalar state constraint of the form $$ g(x_t) \leq 0.
$$ For this class of problems, we propose a shooting-like numerical scheme and we show a sufficient condition for its local quadratic convergence, which is also a second order sufficient condition for optimality (in a particular sense to be specified later on). Additionally, we solve an example of practical interest, for which we also prove optimality by applying second order sufficient optimality conditions obtained in \cite{AronnaBonnansGoh2016}. This investigation is strongly motivated by applications, since we deal with both control and state constraints, which naturally appear in realistic models. Many practical examples that are covered by our chosen framework can be found in the existing literature. A non-exhaustive list includes the prey-predator model \cite{GLV74}, the Goddard problem in presence of a dynamic pressure limit \cite{SeywaldCliff1993,GraichenPetit2008}, the optimal control of the atmospheric arc for the re-entry of a space shuttle seen in \cite{Bonnard2003}, an optimal production and maintenance system studied in \cite{MaurerKimVossen2005}, and a recent optimization problem on running strategies \cite{AftalionBonnans2014}. We refer also to \cite{dePinho2005}, \cite{MaurerDePinho2016}, \cite{Schaettler2006} and references therein. As is well known, the application of the necessary conditions provided by Pontryagin's Maximum Principle leads to an associated two-point boundary-value problem (TPBVP) for the optimal trajectory and its associated multiplier \cite{Vinter2000}. A natural way of solving TPBVPs numerically is the application of {\em shooting algorithms} \cite{MorRilZan62}. This type of algorithm has been used extensively for the resolution of optimal control problems (see {\em e.g.} \cite{Bul71,BockPlitt1984,Pes94} and references therein). In particular, shooting methods have been applied to control-affine problems both with and without state constraints. Some works in this direction are mentioned in the sequel.
Maurer \cite{Mau76} proposed a shooting scheme for solving a problem with bang-singular solutions, which was generalized quite recently by Aronna, Bonnans and Martinon in \cite{AronnaBonnansMartinon2013}, where they provided a sufficient condition for its local convergence. Both articles \cite{Mau76} and \cite{AronnaBonnansMartinon2013} analyze the case with control bounds and no state constraints. Practical control-affine problems with state constraints were solved numerically in several articles; a non-exhaustive list includes Maurer and Gillessen \cite{MauGil75}, Oberle \cite{Obe79}, Fraser-Andrews \cite{Fra89} {and the recent articles Cots \cite{Cots2017} and Cots {\em et al} \cite{CotsGergaud2022}}. To the best of our knowledge, there is no result in the existing literature concerning sufficient conditions for the convergence of shooting algorithms in the framework considered here. The paper is organized as follows. In Sections \ref{SecProblem} and \ref{SecArcs} we introduce the problem and give the basic definitions. A shooting-like method and a sufficient condition for its local quadratic convergence are given in Sections \ref{SecShooting} and \ref{SecSuff}, respectively. The algorithm is implemented in Section \ref{SecExamples}, where we solve numerically a variation of the regulator problem and prove the optimality of the solution analytically. \noindent {\bf Notations.} Let $\mathbb{R}^k$ denote the $k-$dimensional real space, {\em i.e.} the space of column real vectors of dimension $k,$ and $\mathbb{R}^{k*}$ its corresponding dual space, which consists of $k-$dimensional row real vectors. With $\mathbb{R}^k_+$ and $\mathbb{R}^k_-$ we refer to the subsets of $\mathbb{R}^k$ consisting of vectors with nonnegative, respectively nonpositive, components.
We write $h_t$ for the value of the function $h$ at time $t$ if $h$ is a function that depends only on $t,$ and $h_{i,t}$ for the $i$th component of $h$ evaluated at $t.$ Let $h(t+)$ and $h(t-)$ be, respectively, the right and left limits of $h$ at $t,$ if they exist. Partial derivatives of a function $h$ of $(t,x)$ are denoted by $D_th$ or $\dot{h}$ for the derivative in time, and by $D_xh,$ $h_x$ or $h'$ for the differentiation with respect to space variables. The same convention is extended to higher order derivatives. By $L^p(0,T)^k$ we mean the Lebesgue space with domain equal to the interval $[0,T]\subset \mathbb{R}$ and with values in $\mathbb{R}^k.$ The notations $W^{q,s}(0,T)^k$ and $H^1(0,T)^k$ refer to the Sobolev spaces (see Adams \cite{Ada75} for further details on Sobolev spaces). We let $BV(0,T)$ be the set of functions with bounded total variation. In general, when there is no risk of confusion, we omit the argument $(0,T)$ when referring to a space of functions. For instance, we write { $L^\infty$ for $L^\infty(0,T),$ } or $(W^{1,\infty})^{k*}$ for the space of $W^{1,\infty}-$functions from $[0,T]$ to $\mathbb{R}^{k*}.$ We say that a function $h: \mathbb{R}^k \to \mathbb{R}^d$ is of class $C^\ell$ if it is $\ell-$times continuously differentiable in its domain. \section{The problem}\label{SecProblem} Let us consider $L^\infty(0,T)$ and $W^{1,\infty}(0,T;\mathbb{R}^n)$ as control and state spaces, respectively. We say that a control-state pair $(u,x)\in L^\infty(0,T)\times W^{1,\infty}(0,T;\mathbb{R}^n)$ is a {\em trajectory} if it satisfies both the {\em state equation} \be \label{bsbstateeq} \dot x_t = f_0(x_t) +u_{t} f_1(x_t) \quad \text{for a.a. } t\in [0,T], \ee and the finitely many {\em endpoint constraints} of equality type given by \be \Phi(x_0,x_T) = 0.
\ee Here $f_0$ and $f_1$ are assumed to be Lipschitz continuous and twice continuously differentiable vector fields over $\mathbb{R}^n$, and $\Phi$ is of class $C^2$ from $\mathbb{R}^n\times\mathbb{R}^n$ to $\mathbb{R}^{q}.$ Under these hypotheses, for any control-initial condition pair $(u,x_0)$ in $L^\infty(0,T)\times \mathbb{R}^n$, the state equation \eqref{bsbstateeq} has a unique solution. Additionally, we consider a {\em cost functional} $$ \phi:\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R},$$ the {\em control bounds} \be \label{bsbcc} u_{\rm min}\leq u_{t} \leq u_{\rm max} \quad \text{for a.a. } t\in [0,T], \ee where $ u_{\rm min} < u_{\rm max}$, and a {\em scalar state constraint} \be \label{stateconstraint1} g(x_t) \leq 0\quad \text{for all } t\in [0,T], \ee with the functions $\phi$ and $g:\mathbb{R}^n\to \mathbb{R}$ being of class $C^2.$ A trajectory $(u,x)$ is said to be {\em feasible} if it satisfies \eqref{bsbcc}-\eqref{stateconstraint1}. \begin{remark}[On the control bounds] We allow $u_{\min}$ and $u_{\max}$ to be either finite real numbers, or to take the values $-\infty$ or $+\infty,$ respectively, meaning that we also consider problems with control constraints of the form $u_t \leq u_{\max}$ or $u_{\min} \leq u_t$, as well as problems in the absence of control constraints. \end{remark} Summarizing, this article deals with the optimal control problem in Mayer form given by \be \label{P}\tag{P} \min \phi(x_0,x_T); \qquad \text{subject to \eqref{bsbstateeq}-\eqref{stateconstraint1}.} \ee \subsection{Types of minima} Throughout this article, we make use of two notions of optimality, namely {\em weak} and {\em Pontryagin minima}, defined as follows.
\begin{definition}[Weak and Pontryagin minima] \label{defminpoint} A {\em weak minimum} for \eqref{P} is a feasible trajectory $(u,x)$ for which there exists $\varepsilon>0$ such that $\phi(x_0,x_T) \leq \phi(\tilde x_0,\tilde x_T)$ for any feasible $( \tilde u,\tilde x)$ verifying $\|(\tilde u,\tilde x)-(u,x)\|_{\infty} < \varepsilon.$ A feasible trajectory $(u,x)$ is called a {\em Pontryagin minimum} for \eqref{P} if for any $M>0,$ there exists $\varepsilon_M>0$ such that $\phi(x_0,x_T) \leq \phi(\tilde x_0,\tilde x_T)$ for any feasible $( \tilde u,\tilde x)$ satisfying \be \label{defminpont1} \| \tilde x - x\|_\infty +\| \tilde u -u \|_1 < \varepsilon_M,\qquad \| \tilde u - u\|_\infty < M. \ee \end{definition} Note that any Pontryagin minimum is also a weak minimum. Consequently, necessary conditions that hold for weak minima also hold for Pontryagin minima. This article provides a numerical scheme for approximating Pontryagin minima of \eqref{P}. In order to achieve this, we make use of the auxiliary unconstrained transformed problem (TP) given in equations \eqref{costTP}-\eqref{continuityxTP}, which possesses neither control bounds nor state constraints and can be solved numerically in an efficient way. In Lemma \ref{LemmaPontWeak} below we prove that Pontryagin minima of \eqref{P} that verify certain structural hypotheses are transformed into weak minima of the unconstrained transformed problem (TP). \subsection{Bang, constrained and singular arcs}\label{SecArcs} The {\em contact set} associated with the state constraint is defined as \be \label{defC} C := \{ t\in [0,T]:\; g(\hat{x}_t)=0 \}.
\ee For $0 \leq a < b \leq T$, we say that $(a,b)$ is an {\em active arc} for the state constraint or, shortly, a {\em $C$ arc,} if $(a,b)$ is a maximal open interval contained in $C.$ A point $\tau\in (0,T)$ is a {\em junction point of the state constraint} if it is an extreme point of a $C$ arc. Similar definitions hold for the control constraint, with the difference that the control variable is only defined almost everywhere. The {\em contact sets for the control bounds} are given by \begin{gather*} B_- := \{ t\in [0,T]:\; \hat{u}_t = u_{\rm min} \}, \quad B_+ := \{ t\in [0,T]:\; \hat{u}_t = u_{\rm max} \},\\ B:=B_- \cup B_+. \end{gather*} Note that these sets are defined up to null measure sets. Additionally, observe that if $ u_{\rm min} = - \infty$ then $B_- = \emptyset$ and, analogously, if $ u_{\rm max}=+\infty$ then $B_+ = \emptyset.$ We say that $(a,b)$ is a {\em $B_-$} (resp. {\em $B_+)$ arc} if $(a,b)$ is included, up to a null measure set, in $B_-$ (resp. in $B_+$), but no open interval strictly containing $(a,b)$ is. We say that $(a,b)$ is a {\em $B$ arc} if it is either a $B_{-}$ or a $B_+$ arc. Finally, let $S$ denote the {\em singular set} given by \be S:=\{ t\in [0,T]: u_{\min} < \hat{u}_t < u_{\max} \text{ and } g(\hat{x}_t) <0 \}. \ee We say that $(a,b)$ is an {\em $S$ arc} if $(a,b)$ is included, up to a null measure set, in $S$, but no open interval strictly containing $(a,b)$ is. We call {\em junction} or {\em switching times} the points $\tau \in (0,T)$ at which the trajectory $(\hat{x},\hat{u})$ switches from one type of arc ($B_-,B_+,C$ or $S$) to another. Junction/switching times are named after the types of arcs they separate. One can have, for instance, a {\em $CS$ junction, a $B_-B_+$ switching time, etc.
} Throughout the remainder of the article, we assume that the {\em state constraint is of first order}, that is, \be \label{1order} g'(\hat{x}_t)f_1(\hat{x}_t) \neq 0\quad \text{on } C, \ee and we impose the following {\em hypotheses on the control structure:} \be \label{chypradn} \left\{ \ba{cl} {\rm (i)} &\text{the interval $[0,T]$ is (up to a zero measure set) the disjoint} \\ & \text{union of finitely many arcs of type $B$, $C$ and $S,$ and the set $C$} \\ & \text{does not contain isolated points,} \\ {\rm (ii)} &\text{the control $\hat{u}$ is at uniformly positive distance from the bounds } \\& u_{\min} \text{ and } u_{\max} \text{ over $C$ and $S$ arcs,} \\ {\rm (iii)} & \text{the control $\hat{u}$ is discontinuous at $CS$ and $SC$ junctions.} \ea \right. \ee { \begin{remark} Note that some problems (even convex ones, like Fuller's problem \cite{MR0156744}) exhibit chattering phenomena with infinitely many switches, necessarily involving some very short arcs. It is not clear how to deal with such problems with the method that we present in this article. \end{remark} } The regulator problem studied in Section \ref{SecExamples} fulfils the above hypotheses \eqref{chypradn} (see as well the example given in \cite[Remark 2]{AronnaBonnansGoh2016}). When a control satisfying hypothesis \eqref{chypradn}(i) is, for instance, a concatenation of a bang and a singular arc, we call it a {\em BS control.} This denomination extends to any finite sequence of arc types. In order to formulate our shooting algorithm, we express the control as a function of the state on $C$ arcs and we fix the control to its bounds on $B$ arcs.
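In practice, hypothesis \eqref{chypradn}(i) is checked on a discretized solution produced by a direct method: each time sample is labelled by its arc type and consecutive repetitions are compressed into the arc sequence. The following Python sketch illustrates this procedure (the function name, tolerances and input format are ours and purely illustrative, not part of the formal development):

```python
def classify_arcs(u, g_x, u_min, u_max, tol_u=1e-3, tol_g=1e-6):
    """Label each time sample of a discretized solution as 'B-', 'B+',
    'C' or 'S', then return the compressed arc sequence.

    u    : samples of the control
    g_x  : samples of the state constraint g(x) along the trajectory
    """
    labels = []
    for uk, gk in zip(u, g_x):
        if abs(uk - u_min) < tol_u:
            labels.append('B-')          # control at its lower bound
        elif abs(uk - u_max) < tol_u:
            labels.append('B+')          # control at its upper bound
        elif abs(gk) < tol_g:
            labels.append('C')           # state constraint active
        else:
            labels.append('S')           # singular arc
    # compress consecutive repetitions into the arc structure
    arcs = [labels[0]]
    for lab in labels[1:]:
        if lab != arcs[-1]:
            arcs.append(lab)
    return arcs
```

The tolerances must of course be chosen consistently with the uniform distance to the bounds postulated in \eqref{chypradn}(ii); very short spurious arcs in the output signal that the structural hypothesis may fail.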
\subsubsection{Expression of the control on constrained arcs}\label{SubsectionC} From $g(\hat{x}_t)=0$ on $C,$ we get \be \label{dtg0} 0 = \frac{{\rm d}}{{\rm d}t} g(\hat{x}_t) = g'(\hat{x}_t) \big( f_0(\hat{x}_t) + \hat{u}_t f_1(\hat{x}_t) \big)\quad \text{on } C, \ee and, since \eqref{1order} holds, we have that \be \label{uinC} \hat{u}_t= -\frac{g'(\hat{x}_t)f_0(\hat{x}_t)}{g'(\hat{x}_t)f_1(\hat{x}_t)}\quad \text{on } C. \ee \section{Shooting formulation}\label{SecShooting} We now explain how to state a transformed problem, with neither control bounds nor running state constraints, that serves as an intermediate step to write a numerical scheme for problem \eqref{P}. Afterwards, the optimality system of the transformed problem is reduced to a nonlinear equation in a finite dimensional space. The starting point is to estimate the arc structure of the control, {\em i.e.} the sequence of its different types of arcs and the approximate values of its junction times. This is done in practice by some {\em direct method}, such as solving the nonlinear programming problem (NLP) associated with the discretization of the optimal control problem. Then we formulate a {\em transformed problem} in which the control is fixed to its bounds on $B$ arcs and is expressed as a function of the state on $C$ arcs, so that the optimization variables are now the control over singular arcs and the switching times. Subsequently, we express the optimality conditions of the transformed problem. Finally, by eliminating the control as a function of the state and costate, we reduce the optimality system to a finite dimensional equation. So, let us assume for the remainder of the section that $(\hat{u},\hat{x})$ is a Pontryagin minimum for \eqref{P}.
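As a concrete illustration of formula \eqref{uinC}, the following Python sketch evaluates $\Gamma(x)= -g'(x)f_0(x)/g'(x)f_1(x)$ with a finite-difference gradient, on the regulator problem of Section \ref{SecExamples}, where $g(x)=-0.2-x_2$ encodes $x_2 \geq -0.2$; on a boundary arc the feedback control there is $u=0$. The helper names and the finite-difference step are ours, purely for illustration:

```python
def Gamma(x, f0, f1, g, h=1e-6):
    """Feedback control -g'(x)f0(x) / g'(x)f1(x) on a constrained arc,
    with the gradient g'(x) approximated by central finite differences."""
    n = len(x)
    gp = []
    for i in range(n):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        gp.append((g(xp) - g(xm)) / (2.0 * h))
    num = sum(a * b for a, b in zip(gp, f0(x)))
    den = sum(a * b for a, b in zip(gp, f1(x)))
    # den != 0 is exactly the first order condition g'(x)f1(x) != 0
    return -num / den

# Regulator dynamics restricted to (x1, x2): x1' = x2, x2' = u,
# with the constraint x2 >= -0.2 written as g(x) <= 0.
f0 = lambda x: [x[1], 0.0]
f1 = lambda x: [0.0, 1.0]
g  = lambda x: -0.2 - x[1]
print(Gamma([0.3, -0.2], f0, f1, g))  # 0.0 up to rounding: u = 0 on the boundary
```

The computation confirms that keeping $x_2$ frozen at its bound requires $u=\dot x_2=0$, which is the feedback used on $C$ arcs below.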
Additionally, without loss of generality and for the sake of simplicity of notation, we set $u_{\min} :=0$ and $u_{\max} :=1.$ Recall further that $(\hat{u},\hat{x})$ complies with the structural hypotheses \eqref{chypradn} for the control $\hat{u},$ and that the state constraint is of first order, {\em i.e.} \eqref{1order} holds true. \subsection{The transformed problem}\label{SubsecReformulation} We now state the transformed problem corresponding to \eqref{P}, in the spirit of \cite{AronnaBonnansMartinon2013}, and we prove that any Pontryagin minimum for the original problem \eqref{P} is transformed into a weak minimum of the unconstrained transformed problem. For the Pontryagin minimum $(\hat{u},\hat{x})$, let \be 0=: \hat\tau_0 < \hat\tau_1 < \dots < \hat\tau_N := T \ee denote its associated switching times. Recall the definition of the sets $C,$ $B_-,$ $B_+$ and $S$ given in Section \ref{SecArcs} above. Set $\hat I_k := [ \hat\tau_{k-1}, \hat\tau_k]$ for $k=1,\dots,N,$ and \be \mathcal{I}(S) := \big\{k \in \{1,\ldots ,N \}: \hat I_k \text{ is a singular arc} \big\}. \ee Analogously, define $\mathcal{I}(C),$ $\mathcal{I}(B_-),$ and $\mathcal{I}(B_+).$ For each $k=1,\dots,N,$ consider a state variable $x^k \in W^{1,\infty}(0,T;\mathbb{R}^n),$ and for each singular arc $k \in \mathcal{I}(S),$ a control variable $u^k \in L^\infty(0,T).$ On the set $B,$ we fix the control to the corresponding bound. Additionally, recall that from formula \eqref{uinC} we have that $\hat{u}_t=\Gamma(\hat{x}_t)$ on $C,$ where $\Gamma$ is given by $$ \Gamma(x):= -\frac{g'(x)f_0(x)}{g'(x)f_1(x)}. $$ After these considerations, we are ready to state the transformed problem.
Define the optimal control problem (TP), on the time interval $[0,1],$ by \begin{align} & \label{costTP} \text{min } \phi(x^1_0,x^N_1), \\ & \label{eqxk1} \dot{x}^k = (\tau_k -\tau_{k-1}) \big(f_0(x^k) + u^k f_1(x^k)\big)\quad \text{for } k\in \mathcal{I}(S), \\ & \dot{x}^k = (\tau_k -\tau_{k-1}) f_0(x^k)\quad \text{for } k\in \mathcal{I}(B_-), \\ & \dot{x}^k = (\tau_k - \tau_{k-1}) \big( f_0(x^k) + f_1(x^k) \big)\quad \text{for } k\in \mathcal{I}(B_+), \\ & \label{eqxk4} \dot{x}^k = (\tau_k - \tau_{k-1}) \left(f_0(x^k) + \Gamma(x^k) f_1(x^k) \right)\quad \text{for } k\in \mathcal{I}(C), \\ & \dot\tau_k =0 \quad \text{for } k=1,\dots,N-1,\\ &\Phi(x_0^1,x_1^N) = 0,\\ & \label{gx0k} g(x_0^k) = 0 \quad \text{for } k\in \mathcal{I}(C), \\ & \label{continuityxTP} x_1^k = x_0^{k+1} \quad \text{for } k=1,\dots,N-1. \end{align} \begin{remark} Since we use the expression \eqref{uinC}, obtained by setting the derivative \eqref{dtg0} of the state constraint equal to zero, we impose the entry conditions \eqref{gx0k} in the formulation of (TP) in order to guarantee that the state constraint is active along $x^k$ for every $k \in \mathcal{I}(C).$ \end{remark} Set \be \label{changet} \begin{split} \hat{x}^k_s &:= \hat{x} \big( \hat\tau_{k-1} + ( \hat\tau_k - \hat\tau_{k-1})s \big) \quad \text{for}\ s\in [0,1] \text{ and } k=1,\dots,N,\\ \hat{u}^k_s &:= \hat{u} \big( \hat\tau_{k-1} + ( \hat\tau_k - \hat\tau_{k-1})s \big) \quad \text{for}\ s\in [0,1] \text{ and } k\in \mathcal{I}(S). \end{split} \ee \begin{lemma} \label{LemmaPontWeak} Let $(\hat{u},\hat{x})$ be a Pontryagin minimum of problem \eqref{P}. Then the triple $$ \Big( (\hat{u}^k)_{k \in \mathcal{I}(S)} , (\hat{x}^k)_{k=1}^N, ( \hat\tau_k)_{k=1}^{N-1} \Big) $$ is a weak solution of (TP).
\end{lemma} \begin{proof} Consider the feasible trajectories $\big( (u^k),(x^k),(\tau_k) \big)$ for (TP) satisfying \begin{equation} \label{estuk} \| u^k - \hat{u}^k \|_\infty < \bar\varepsilon \quad \text{and} \quad \lvert \tau_k - \hat{\tau}_k \rvert \leq \bar\delta \quad \text{for all } k=1,\dots,N, \end{equation} for some $\bar\varepsilon,\bar\delta >0$ to be determined later. Set $I_k := [\tau_{k-1}, \tau_k]$ and consider the functions $s_k \colon I_k \to [0,1]$ given by $s_{k,t} := \displaystyle \frac{t-\tau_{k-1}}{\tau_k - \tau_{k-1}}.$ Define $u\colon[0,T] \to \mathbb{R}$ by \be u_t := \left\{ \ba{cl} 0 & \text{if } t\in I_k,\, k\in \mathcal{I}(B_-), \\ 1 & \text{if } t\in I_k,\, k\in \mathcal{I}(B_+), \\ \Gamma \big( x^k ( s_{k,t} ) \big) & \text{if } t\in I_k,\, k\in \mathcal{I}(C), \\ u^k(s_{k,t} ) & \text{if } t\in I_k,\, k\in \mathcal{I}(S). \ea \right. \ee Let $x \colon [0,T] \to \mathbb{R}^n$ be the state corresponding to the control $u$ and the initial condition $x(0)=x_0^1.$ We next show that if $\bar\varepsilon>0$ and $\bar\delta >0$ are small enough, then $(u,x)$ is feasible for \eqref{P} and arbitrarily close to $(\hat{u},\hat{x})$ in the sense of \eqref{defminpont1}. Observe that $x(t) = x^k(s_{k,t})$ for all $k=1,\dots,N$ and $t\in I_k$. Hence, $x$ satisfies the endpoint constraints. Furthermore, due to Gronwall's Lemma, $(u,x)$ verifies the estimate \be \label{estux} \|u-\hat{u}\|_1 + \|x-\hat{x}\|_\infty = \mathcal{O}(\bar\varepsilon + \bar\delta). \ee Let us analyze the control constraints. Take $k=1,\dots,N.$ If $k\in \mathcal{I}(B_-) \cup \mathcal{I}(B_+),$ then $u_t \in \{0,1\}$ for a.a.
$ t\in I_k.$ On the other hand, by the hypothesis \eqref{chypradn} on the control structure, there exists $\rho_1 > 0$ such that \be \label{bounduh} \rho_1 < \hat{u}_t < 1-\rho_1\quad \text{over $C$ and $S$ arcs.} \ee Suppose now that $k\in \mathcal{I}(S).$ Then, in view of \eqref{estuk} and \eqref{bounduh}, we can see that the control constraints hold on $I_k$ provided that $\bar\varepsilon \leq \rho_1.$ Finally, let $k$ be in $\mathcal{I}(C).$ Notice that in this case \eqref{bounduh} is equivalent to \be \rho_1 < \Gamma(\hat{x}_t) < 1- \rho_1 \quad \text{ on } \hat{I}_k. \ee Hence, by standard continuity arguments and for $\bar\varepsilon,\bar\delta$ sufficiently small, we get that \be 0 < \Gamma (x_t) <1\quad \text{ on } I_k. \ee We therefore confirm that $(u,x)$ verifies the control constraints. Let us now consider the state constraint. Take first $k \in \mathcal{I}(C).$ Then $g(x_{\tau_{k-1}})= g(x_0^k)=0$ and, by definition of $(u,x),$ we have that $\frac{{\rm d}}{{\rm d}t} g(x_t) =0$ for all $t\in I_k.$ Therefore, $x$ satisfies the state constraint on $I_k$ for $k\in \mathcal{I}(C).$ Next, observe that, due to \eqref{chypradn}, for any $t\in [0,T]$ sufficiently far from a $C$ arc, one has $g(\hat{x}_t) \leq -\rho$ for some small $\rho>0.$ Thus, by \eqref{estux} we get that $g(x_t)<0$ for appropriate $\bar\varepsilon,\bar\delta.$ On the other hand, for $t\in [0,T]$ close to a $C$ arc, we reason as follows. Assume, without loss of generality, that $t$ is near an entry point $\tau_k$ of a $C$ arc. In view of hypothesis \eqref{chypradn} and of the relation \eqref{dtg0}, we have that $ \frac{\rm d}{ {\rm d} s}\big\rvert_{s=\hat\tau_k-} g(\hat{x}_s) > 0,$ and therefore $ \frac{\rm d}{ {\rm d} s}\big\rvert_{s=\tau_k-} g(x_s) > 0$ as well, if $\bar\varepsilon,\bar\delta$ are sufficiently small.
Consequently, $g(x_t) < 0.$ Hence, $x$ verifies the state constraint on $[0,T].$ With this, we conclude that $(u,x)$ is feasible for the original problem \eqref{P}. Finally, given $M>0$ as in Definition \ref{defminpoint}, we can easily show that $\bar\delta$ and $\bar\varepsilon$ can be taken in such a way that $(u,x)$ satisfies \eqref{defminpont1} for such $M$ and the corresponding $\varepsilon_M$ provided by the Pontryagin optimality of $(\hat{u},\hat{x}).$ Consequently, $ \phi(x_0,x_T) \geq \phi(\hat{x}_0,\hat{x}_T) $ or, equivalently, \be \phi(x^1_0,x^N_1) \geq \phi(\hat{x}^1_0,\hat{x}^N_1), \ee which proves that $\big( (\hat{u}^k)_{k \in \mathcal{I}(S)},(\hat{x}^k)_{k=1}^N, ( \hat\tau_k)_{k=1}^{N-1} \big)$ is a weak solution of (TP), as desired. This concludes the proof. \end{proof} \subsection{The shooting function} In order to ease the notation, we start by rewriting the problem (TP) in the following compact form, \begin{align} \label{TPcost} & \min\,\, \tilde \phi(X_0,X_1),\\ \label{TPdyn} & \dot{X} = \tilde f_0(X) + \sum_{k\in \mathcal{I}(S)} U^k \tilde f_k(X),\\ \label{TPfinal} & \tilde \Phi (X_0,X_1)=0, \end{align} where $X:= \big( (x^k)_{k=1}^N,(\tau_k)_{k=1}^{N-1} \big),$ $U:=(u^k)_{k\in \mathcal{I}(S)},$ the vector field $ \tilde f_0: \mathbb{R}^{Nn+N-1} \to \mathbb{R}^{Nn+N-1}$ is defined as follows, \begin{equation*} \begin{split} \big( \tilde f_0(X) \big)&_{i=(k-1)n+1}^{kn}\\ &:= \left\{ \ba{cl} (\tau_k-\tau_{k-1})f_0(x^k) &\text{ for } k\in \mathcal{I}(S) \cup \mathcal{I}(B_-),\\ (\tau_k-\tau_{k-1}) \big(f_0(x^k)+f_1(x^k) \big) &\text{ for } k\in \mathcal{I}(B_+),\\ (\tau_k-\tau_{k-1}) \left( f_0(x^k)+\Gamma(x^k) f_1(x^k) \right) &\text{ for } k\in \mathcal{I}(C), \ea \right.
\end{split} \end{equation*} and $\big( \tilde f_0(X) \big)_{i=Nn+1}^{Nn+N-1}:=0.$ Additionally, for $k\in \mathcal{I}(S)$ the vector field $\tilde f_k: \mathbb{R}^{Nn+N-1} \to \mathbb{R}^{Nn+N-1}$ is given by $$ \big( \tilde f_k (X) \big)_{i=(k-1)n+1}^{kn} := (\tau_k-\tau_{k-1})f_1(x^k), $$ and $\big( \tilde f_k (X) \big)_{i}:=0$ for the remaining indices $i;$ the new cost $\tilde\phi:\mathbb{R}^{2(Nn+N-1)} \to \mathbb{R}$ is $$ \tilde\phi (X_0,X_1) :=\phi(x_0^1,x_1^N), $$ and the function $\tilde \Phi: \mathbb{R}^{2(Nn+N-1)} \to \mathbb{R}^{ {\rm d}_{\tilde \Phi}}$ with ${\rm d}_{\tilde \Phi}:=q+\lvert \mathcal{I}(C) \rvert+n(N-1)$ is defined as $$ \tilde\Phi(X_0,X_1):= \begin{pmatrix} \Phi(x_0^1,x^N_1) \\ \big( g(x_0^k) \big)_{k \in \mathcal{I}(C)} \\ \big( x_1^k-x_0^{k+1} \big)_{k=1}^{N-1} \end{pmatrix}. $$ The pre-Hamiltonian for problem (TP) is given by \be \tilde H = P \Big( \tilde{f}_0(X) + \sum_{k\in \mathcal{I}(S)} U^k \tilde{f}_k(X) \Big) = \sum_{k=1}^N (\tau_k - \tau_{k-1})H^k, \ee where $P$ denotes the costate associated with (TP), \be \label{Hk} H^k:= p^k \big( f_0(x^k)+w^k f_1(x^k) \big), \ee with the notation $w^k$ defined as \be \label{wk} w^k := \left\{ \ba{cl} u^k & \quad \text{if } k\in \mathcal{I}(S),\\ 0 & \quad \text{if } k\in \mathcal{I}(B_-),\\ 1 & \quad \text{if } k\in \mathcal{I}(B_+),\\ \Gamma(x^k) & \quad \text{if } k\in \mathcal{I}(C), \ea \right. \ee and $p^k$ denotes the $n$-dimensional vector of components $P_{(k-1)n+1},\dots,P_{kn}.$ Note that $w^k$ is a variable only for $k\in \mathcal{I}(S)$, in which case it represents the control $u^k$.
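The stacked dynamics \eqref{TPdyn} can be assembled mechanically from the list of arc types, mirroring the case distinction in \eqref{eqxk1}-\eqref{eqxk4} and \eqref{wk}; the following Python sketch illustrates the bookkeeping (function names, the data layout of $X$, and the normalization $u_{\min}=0,\,u_{\max}=1$ follow the text, but the implementation details are ours):

```python
def tp_rhs(X, U, arc_types, f0, f1, Gamma, n, N, T):
    """Right-hand side of the transformed problem (TP): X stacks the N
    arc states x^1..x^N (n components each) followed by the N-1
    switching times; U lists the controls of the singular arcs only."""
    x = [X[k * n:(k + 1) * n] for k in range(N)]   # arc states x^1..x^N
    tau = [0.0] + list(X[N * n:]) + [T]            # tau_0 = 0, tau_N = T
    dX, j = [], 0                                  # j indexes singular arcs
    for k in range(N):
        dt = tau[k + 1] - tau[k]                   # factor (tau_k - tau_{k-1})
        kind = arc_types[k]                        # 'S', 'B-', 'B+' or 'C'
        if kind == 'S':
            w = U[j]; j += 1                       # free singular control u^k
        elif kind == 'B-':
            w = 0.0                                # u_min normalized to 0
        elif kind == 'B+':
            w = 1.0                                # u_max normalized to 1
        else:                                      # constrained arc: feedback
            w = Gamma(x[k])
        a, b = f0(x[k]), f1(x[k])
        dX += [dt * (ai + w * bi) for ai, bi in zip(a, b)]
    dX += [0.0] * (N - 1)                          # switching times constant
    return dX
```

Integrating this right-hand side over $s\in[0,1]$ with any standard ODE scheme reproduces, arc by arc, the original trajectory after the time change \eqref{changet}.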
\subsection{Constraint qualification and first order optimality condition for (TP)} Since problem (TP) has only endpoint equality constraints, and the Hamiltonian is an affine function of the control, it is known that Pontryagin's Maximum Principle is equivalent to the first order optimality conditions. The qualification condition is then that the derivative of the constraint is onto at the nominal trajectory $(\hat U,\hat X)$, see {\em e.g.} \cite[Ch. 3]{PAOP}. This means that the mapping \be \begin{split} \bar\Phi : \,\mathbb{R}^{Nn+N-1} \times (L^\infty)^{\lvert \mathcal{I}(S) \rvert} \to \mathbb{R}^{{\rm d}_{\tilde \Phi}}, \quad (X_0,U) \mapsto \tilde \Phi(X_0,X_1), \end{split} \ee where $X$ is the solution of \eqref{TPdyn} associated with $(X_0,U),$ is such that \be \label{CQ2} D\bar\Phi(\hat X_0,\hat U) \text{ is surjective.} \ee Under this hypothesis, the first order optimality condition in normal form reads as follows. Define the endpoint Lagrangian associated with (TP) by \be \tilde \ell^\Psi := \phi(x_0^1,x_1^N) + \sum_{j=1}^{q} \Psi_j \Phi_j(x_0^1,x_1^N) + \sum_{k\in \mathcal{I}(C)} \gamma_k g(x_0^k) + \sum_{k=1}^{N-1} \theta_k(x_1^k-x_0^{k+1}). \ee \begin{theorem} Let $(\hat U,\hat X)$ be a weak solution for (TP) satisfying the qualification condition \eqref{CQ2}. Then there exists a unique $\tilde \lambda := (\tilde \Psi,P) \in \mathbb{R}^{{\rm d}_{\tilde\Phi}*}\times (W^{1,\infty})^{(Nn+N-1)*}$ such that $P$ is a solution of \be -\dot{P}_t = D_X \tilde H(\hat U_t,\hat X_t,P_t) \quad \text{a.e. on } [0,1], \ee with {\em transversality conditions} \be \begin{split} P_0 & = -D_{X_0} \tilde\ell^{\tilde \Psi} (\hat X_0,\hat X_1),\\ P_1 & = D_{X_1} \tilde \ell^{\tilde \Psi} (\hat X_0,\hat X_1), \end{split} \ee and with \be \label{stationarity1} \tilde H_U (\hat U_t,\hat X_t,P_t)=0.
\ee \end{theorem} { \begin{proof} This is a variant of \cite[Theorem 1.174]{BonnansCSO}, where the cost function was supposed to be convex: one easily checks that the proof is essentially the same if the cost is differentiable. See also \cite{ArutyunovKaramzin2020}. \end{proof} } Since there is a unique associated multiplier, we omit from now on the dependence on $\tilde \lambda$ for the sake of simplicity of the presentation. Moreover, on some occasions, we omit the dependence on the nominal solution $(\hat U,\hat X).$ \subsection{Expression of the singular controls in problem (TP)}\label{SubsectionSingular} { It is known that, in this control-affine case, the control variable appears explicitly neither in the expression of $\tilde{H}_U$ nor in its time derivative $\dot{\tilde{H}}_U$ (see {\em e.g.} \cite{Rob67,AronnaBonnansMartinon2013}). The {\em strengthened generalized Legendre-Clebsch condition} \cite{Rob67} for (TP) reads \be \label{LC} -\frac{\partial}{\partial U } \ddot{\tilde H}_{U} \succ 0. \ee Here $A \succ B$, where $A$ and $B$ are symmetric matrices of the same size, means that $A-B$ is positive definite. At this point, recall the definitions of $H^k$ and $p^k$ given in \eqref{Hk} and in the line after \eqref{wk}, respectively. Simple calculations show that the l.h.s. of \eqref{LC} is a $\lvert \mathcal{I}(S) \rvert \times \lvert \mathcal{I}(S) \rvert$-diagonal matrix with positive entries equal to \be -(\tau_k-\tau_{k-1}) \frac{\partial}{\partial {u^k} } \ddot{H}^k_{u^k}\quad \text{for } k\in \mathcal{I}(S). \ee Then condition \eqref{LC} becomes \be \label{LCa} \frac{\partial}{\partial {u^k} }\ddot{H}^k_{u^k} < 0\quad \text{for } k\in \mathcal{I}(S). \ee Hence, thanks to \eqref{LCa}, for each $k\in \mathcal{I}(S)$ one can compute the control $u^k$ from the identity \be \label{ddotHuk} \ddot{H}^k_{u^k}=0.
\ee Apart from the previous equation \eqref{ddotHuk}, in order to ensure the stationarity $H^k_{u^k}=0,$ we add the following endpoint conditions: \be 0 = H^k_{u^k} (0) = p_0^k f_1(x_0^k),\quad 0=\dot{H}^k_{u^k}(0) = p_0^k [f_1,f_0](x_0^k)\quad \text{for } k\in \mathcal{I}(S). \ee \subsection{Lagrangians and costate equation} The costate equation for $p^k$ is \be \label{eqpk} \dot{p}^k = -(\tau_k-\tau_{k-1}) D_{x^k}H^k, \ee with endpoint conditions \be p_0^1=-D_{x_0^1} \tilde \ell^\Psi = -D_{x_0^1}\phi - \sum_{j=1}^{q} \Psi_j D_{x_0^1} \Phi_j - \chi_{\mathcal{I}(C)}(1) \gamma_1 g'(x_0^1), \ee \begin{gather} \label{pk1} p_1^k = \theta^k \quad \text{for } k=1,\dots,N-1,\\ \label{pk0} p_0^k = \theta^{k-1} - \chi_{\mathcal{I}(C)}(k) \gamma_k g'(x_0^k) \quad \text{for } k=2,\dots,N, \end{gather} \be p_1^N = D_{x_1^N} \phi + \sum_{j=1}^{q} \Psi_j D_{x_1^N} \Phi_j, \ee where $\chi_{\mathcal{I}(C)}$ denotes the {\em characteristic function} associated with the set $\mathcal{I}(C).$ For the costate $p^{\tau_k}$ we have the dynamics \be \label{pk} \dot{p}^{\tau_k} = -H^k+H^{k+1},\quad p_0^{\tau_k}=0,\ p_1^{\tau_k}=0\qquad \text{for } k=1,\dots,N-1. \ee It is known that the pre-Hamiltonian of an autonomous problem is constant along an optimal solution (see {\em e.g.} \cite{Vinter2000}). By similar arguments it is easily seen that each $H^k$ is a constant function of time along an optimal solution. Consequently, from \eqref{pk} we get that $p^{\tau_k}$ vanishes identically and that \be H_1^k = H_0^{k+1} \quad \text{for } k=1,\dots,N-1.
\ee \subsection{The shooting function and method} The shooting function associated with (TP) that we propose here is \be \begin{split} \mathcal{S} : \mathbb{R}^{Nn+N-1} \times \mathbb{R}^{Nn+q+\lvert \mathcal{I}(C) \rvert,*} &\to \mathbb{R}^{(N-1)n+N-1 +q+\lvert \mathcal{I}(C) \rvert+ 2\lvert \mathcal{I}(S) \rvert} \times \mathbb{R}^{(N+1)n,*},\\ \big( (x_0^k),(\tau_k),(p_0^k),\Psi,\gamma \big) &\mapsto \begin{pmatrix} \Phi(x_0^1,x_1^N) \\ \big( g(x_0^k) \big)_{k\in \mathcal{I}(C)} \\ (x^k_1 - x^{k+1}_0)_{k=1,\dots,N-1} \\ p^1_0 + D_{x_0^1} \tilde\ell^\Psi \\ p_1^k-p_0^{k+1}-\chi_{\mathcal{I}(C)}(k) \gamma_k g'(x_0^k) \\ p^N_1-D_{x^N_1} \tilde \ell^\Psi \\ (H^k_1-H^{k+1}_0)_{k=1,\dots,N-1} \\ \big( p^k_0 f_1(x^k_0) \big)_{k\in \mathcal{I}(S)} \\ \big( p_0^k [f_1,f_0](x^k_0) \big)_{k \in \mathcal{I}(S)} \end{pmatrix}, \end{split} \ee where $\big( (x^k),(p^k) \big)$ is the solution of the state and costate equations \eqref{eqxk1}-\eqref{eqxk4}, \eqref{eqpk} with initial values $(x^k_0),(p^k_0),$ and control $(u^k)_{k\in \mathcal{I}(S)}$ given by the stationarity condition \eqref{ddotHuk}. Note that we removed the variable $\theta$ by combining equations \eqref{pk1} and \eqref{pk0}. The key feature of this procedure is that ${\omega} := \big( (x_0^k),(\tau_k),(p_0^k),\Psi,\gamma\big)$ satisfies \be \label{S=0} \mathcal{S}({\omega})=0, \ee if and only if the associated solution $\big( (x^k),(p^k),(u^k) \big)$ verifies Pontryagin's Maximum Principle for (TP). Briefly speaking, in order to find the candidate solutions of (TP), we shall solve \eqref{S=0}. Let us observe that the system \eqref{S=0} has $2Nn+N-1+ q+\lvert \mathcal{I}(C) \rvert$ unknowns and $2Nn+N-1+ q +\lvert \mathcal{I}(C) \rvert+ 2\lvert \mathcal{I}(S) \rvert$ equations. Hence, as soon as a singular arc occurs, \eqref{S=0} has more equations than unknowns, {\em i.e.} it is overdetermined.
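An overdetermined smooth system of this kind is solved in the least-squares sense. A minimal Python sketch of the Gauss-Newton iteration, with a finite-difference Jacobian, reads as follows (here the residual $F$ is a generic placeholder standing for the shooting function $\mathcal{S}$, and the step sizes and tolerances are ours):

```python
import numpy as np

def gauss_newton(F, y0, tol=1e-10, max_iter=50, h=1e-7):
    """Gauss-Newton for an overdetermined system F(y)=0, F: R^n -> R^p
    with p >= n: each step solves the linearized least-squares problem
    min_d |F(y) + DF(y) d|^2 and updates y <- y + d."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        Fy = np.asarray(F(y), dtype=float)
        # finite-difference Jacobian, built column by column
        J = np.empty((Fy.size, y.size))
        for i in range(y.size):
            e = np.zeros_like(y)
            e[i] = h
            J[:, i] = (np.asarray(F(y + e)) - Fy) / h
        d, *_ = np.linalg.lstsq(J, -Fy, rcond=None)
        y = y + d
        if np.linalg.norm(d) < tol:
            break
    return y
```

When the residual vanishes at the solution and the Jacobian has full column rank, this is exactly the setting of the convergence result recalled next; in the shooting context the Jacobian is usually propagated by the variational equations rather than by finite differences.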
We then follow \cite{AronnaBonnansMartinon2013}, where the authors suggested solving the shooting equations by the Gauss-Newton method. We recall the following convergence result for Gauss-Newton, see {\em e.g.} Fletcher \cite{fletcher2013practical}, or alternatively Bonnans \cite{bonnans2006numerical}. If $F$ is a $C^1$ mapping from $\mathbb{R}^n$ to $\mathbb{R}^p$ with $p>n$, the Gauss-Newton method computes a sequence $(y^j)$ in $\mathbb{R}^n$ by solving $F(y^j)+DF(y^j)(y^{j+1}-y^j)=0$ in the least-squares sense. When $F$ has a zero at $\bar{y}$ and $DF(\bar{y})$ is injective, the sequence $(y^j)$ is well-defined provided that the starting point $y^0$ is close enough to $\bar{y},$ and in that case $(y^j)$ converges superlinearly to $\bar{y}$ (quadratically if $DF$ is Lipschitz near $\bar{y}$). In view of the regularity hypotheses made in Section \ref{SecProblem}, we know that $\mathcal{S}'$ is Lipschitz continuous. \section{Sufficient condition for the convergence of the shooting algorithm}\label{SecSuff} The main result of this article is Theorem \ref{ThConvergence} of the current section. It gives a sufficient condition for the local convergence of the shooting algorithm, which is also a sufficient condition for weak optimality in problem (TP), as stated in Theorem \ref{SSC} below. \subsection{Second order optimality conditions for (TP)} We now recall some second order optimality conditions for (TP). Let us consider the quadratic mapping on the space $ (L^\infty)^{\lvert \mathcal{I}(S) \rvert} \times (W^{1,\infty})^{Nn+N-1},$ defined as \be \tilde Q(V,Z) := \mbox{$\frac{1}{2}$} D^2 \tilde\ell(Z_0,Z_1)^2 + \mbox{$\frac{1}{2}$} \int_0^1 \big[ Z^\top \tilde{H}_{XX} Z + 2 V \tilde H_{UX} Z \big] \mathrm{d}t. \ee We next introduce the {\em critical cone} associated with (TP). Since the problem has only qualified equality constraints, this critical cone coincides with the tangent space to the constraints.
Consider first the linearized state equation \be \label{LINEQ} \dot Z = \tilde A Z + \tilde B V\quad \text{a.e. on } [0,1], \ee where $F(U,X):= \tilde f_0(X) + \sum_{k\in \mathcal{I}(S)} U^k \tilde f_k(X),$ $\tilde A := F_X,$ $\tilde B:= F_U;$ and let the linearization of the endpoint constraints be given by \be \label{LINCONS} D\tilde \Phi (Z_0,Z_1) = 0. \ee The {\em critical cone} for (TP) is defined as \be \tilde{\mathcal{C}} := \Big\{(V,Z) \in (L^\infty)^{\lvert \mathcal{I}(S) \rvert} \times (W^{1,\infty})^{Nn+N-1} : \text{\eqref{LINEQ}-\eqref{LINCONS} hold} \Big\}. \ee The following result holds (see {\em e.g.} \cite{LMO,ABDL12} for a proof). \begin{theorem}[Second order necessary condition] If $(\hat U,\hat X)$ is a weak minimum for (TP) that verifies \eqref{CQ2}, then \be \label{SONCineq} \tilde Q(V,Z) \geq 0\quad \text{for all } (V,Z) \in \tilde{\mathcal{C}}. \ee \end{theorem} In the sequel we present some optimality conditions for (TP). The first one is a necessary condition due to Goh \cite{Goh66} and the second one a sufficient condition from Dmitruk \cite{Dmi77,Dmi87}. The idea behind these results lies in the following observation. Note that $\tilde{H}_{UU}$ vanishes and, therefore, the quadratic mapping $\tilde Q$ does not contain a quadratic term in the control variation $V.$ Consequently, the Legendre-Clebsch necessary optimality condition on the positive semidefiniteness of $\tilde{H}_{UU}$ holds trivially, and a second order sufficient condition cannot be obtained by strengthening inequality \eqref{SONCineq}. In order to overcome this issue and derive necessary conditions for this singular case, Goh introduced a change of variables in \cite{Goh66a} and applied it to derive necessary conditions in \cite{Goh66}. Some years later, Dmitruk \cite{Dmi77} established a second order sufficient condition in terms of the coercivity of the transformation $\tilde {\Omega}$ of $\tilde Q$ introduced below.
The {\em Goh transformation} for the linear system \eqref{LINEQ} is given by \be \label{GOH} Y_t: = \int_0^t V_s {\rm d} s,\qquad \Xi_t:= Z_t - \tilde B_t Y_t. \ee Notice that if $(V,Z) \in \tilde{\mathcal{C}},$ then $(Y,\Xi)$ defined by the above transformation \eqref{GOH} satisfies (removing time indexes): \begin{equation} \dot \Xi = \tilde A Z + \tilde B V - \tilde B V - \dot{ \tilde B} Y = \tilde A (\Xi + \tilde B Y) - \dot{ \tilde B} Y, \end{equation} and, therefore, $\Xi$ is a solution of the {\em transformed linearized equation} \be \label{LINEQGOH} \dot\Xi = \tilde A \Xi + \tilde{E} Y, \ee {where $\tilde E := \tilde A \tilde B - \frac{{\rm d}}{{\rm d}t} \tilde B,$} and $\Xi$ satisfies the {\em transformed linearized endpoint constraints} \be \label{LINCONSGOH} D\tilde \Phi(\Xi_0,\Xi_1+\tilde B_1 h)=0, \ee where we set $h:= Y_1.$ Consider the function \be \rho(\zeta_0,\zeta_1,h) := D^2 \tilde\ell (\zeta_0,\zeta_1+\tilde B_1 h)^2 + h\tilde H_{UX,1} (2\zeta_1+\tilde B_1 h), \ee and the quadratic mapping \be \tilde{\Omega} (Y,\tilde h,\Xi) := \mbox{$\frac{1}{2}$} \rho(\Xi_0,\Xi_1,\tilde h) + \mbox{$\frac{1}{2}$} \int_0^1 \left( \Xi^\top \tilde{H}_{XX} \Xi + 2Y \tilde M \Xi + Y \tilde{R} Y\right) \mathrm{d}t, \ee for $(Y,\tilde h,\Xi) \in (L^2)^{\lvert \mathcal{I}(S) \rvert} \times \mathbb{R} \times (H^1)^{Nn+N-1}$ and \be \tilde M:= \tilde{f}_1^\top \tilde{H}_{XX} - \frac{{\rm d}}{{\rm d}t} \tilde{H}_{UX} -\tilde{H}_{UX} \tilde{A}, \quad \tilde R:= \tilde{f}_1^\top \tilde{H}_{XX} \tilde{f}_1-2 \tilde{H}_{UX}\tilde{E}- \frac{{\rm d}}{{\rm d}t} (\tilde{H}_{UX} \tilde{f}_1).
\ee Let us recall that the second order necessary condition for optimality stated by Goh \cite{Goh66} (and nowadays known as the {\em Goh condition}) implies that if $(\hat U,\hat X)$ is a weak minimum for (TP) verifying \eqref{CQ2}, then \be \label{CB} \tilde{H}_{UX}\tilde B \text{ is symmetric,} \ee or, equivalently, $P\cdotp D\tilde f_i \tilde f_j= P \cdotp D \tilde f_j \tilde f_i$ for all $i,j=1,\dots,N.$ In the recent literature, this condition can be encountered as $P \cdotp [\tilde f_i,\tilde f_j]=0.$ We shall mention that this necessary condition was first stated by Goh in \cite{Goh66} for the case with neither control nor state constraints, and extended in \cite{ABDL12,FraTon13} to problems containing control constraints. Notice that when the control variable of (TP) is scalar ({\em i.e.} $\lvert \mathcal{I}(S) \rvert=1$), then \eqref{CB} is trivially verified since $\tilde{H}_{UX}\tilde B$ is also a scalar. { We claim that $( D\tilde f_i ) \tilde f_j=0$ for all $i,j \in \{1, \ldots ,N\}$ with $i\neq j$ and, consequently, the matrix in \eqref{CB} is diagonal and the Goh condition holds trivially for (TP) even when $\lvert \mathcal{I}(S) \rvert >1$. Indeed, let $k$, $\ell$ be distinct elements of $\mathcal{I}(S)$. By definition, the nonzero components of $\tilde f_k(X)$ have coordinates in the set $I_k := \{ (k-1)n+1,\dots, kn\}$. So, only the rows of $D \tilde f_k(X)$ with indices in $I_k$ can be nonzero. However, since $I_k\cap I_\ell$ is empty, $\tilde f_\ell$ has only zero components in $I_k$. Our claim follows.
} From \eqref{CB} and \cite[Theorem 4.4]{ABDL12} we get the following result: \begin{proposition} For all $(V,Z) \in (L^\infty)^{\lvert \mathcal{I}(S) \rvert} \times (W^{1,\infty})^{Nn+N-1}$ solution of \eqref{LINEQ} and $(Y,\xi)$ given by the Goh transform \eqref{GOH}, it holds that \begin{equation*} \tilde Q (V,Z) = \tilde {\Omega} (Y,Y_1,\xi). \end{equation*} \end{proposition} Define, for $(\xi_0,Y,\tilde h) \in \mathbb{R}^{Nn+N-1} \times (L^2)^{\lvert \mathcal{I}(S) \rvert} \times \mathbb{R},$ the {\em order function} \be \gamma(\xi_0,Y,\tilde h) := \lvert \xi_0 \rvert^2 + \int_0^1 \lvert Y_t \rvert^2 \mathrm{d}t + \lvert \tilde h \rvert ^2. \ee \begin{definition}[$\gamma$-growth] A feasible trajectory $(\hat U,\hat X)$ of (TP) satisfies the {\em $\gamma$-growth condition in the weak sense} if there exists a positive constant $c$ such that, for every sequence of feasible variations $\big\{ (\delta X^k_0,V^k) \big\}_k$ converging to 0 in $\mathbb{R}^{Nn+N-1} \times ( L^\infty)^{\lvert \mathcal{I}(S) \rvert}$, one has \be \tilde \phi(X_0^k,X_1^k) - \tilde \phi(\hat X_0,\hat X_1) \geq c\, \gamma(\xi^k_0,Y^k,Y^k_1), \ee for $k$ large enough, where $(Y^k,\xi^k)$ are given by the Goh transform \eqref{GOH} and $X^k$ is the solution of the state equation \eqref{TPdyn} associated to $(\hat{X}_0+\delta X_0^k,\hat{U}+V^k).$ \end{definition} Consider the {\em transformed critical cone} \be \tilde{\mathcal P}^2_S := \left\{ (Y,\tilde h,\xi) \in(L^2)^{\lvert \mathcal{I}(S) \rvert} \times \mathbb{R} \times (H^1)^{Nn+N-1}: \text{\eqref{LINEQGOH}--\eqref{LINCONSGOH} hold} \right\}. \ee The following characterization of $\gamma$-growth holds (see \cite{Dmi77} or \cite[Theorem 3.1]{Dmi87} for a proof). \begin{theorem} \label{SSC} Let $(\hat U,\hat X)$ be such that the qualification condition \eqref{CQ2} holds.
Then $(\hat U,\hat X)$ is a weak minimum of (TP) that satisfies $\gamma$-growth in the weak sense if and only if \eqref{CB} holds and there exists $c>0$ such that \be \label{POS} \tilde {\Omega}(Y,\tilde h,\xi) \geq c \gamma (\xi_0,Y,\tilde h) \quad \text{on } \tilde{\mathcal P}_S^2. \ee \end{theorem} We are now ready to state the following convergence result for the shooting algorithm. \begin{theorem} \label{ThConvergence} If $(\hat U,\hat X)$ is a weak minimum of problem (TP) satisfying the constraint qualification \eqref{CQ2} and the uniform positivity condition \eqref{POS}, then the shooting algorithm is locally quadratically convergent. \end{theorem} \begin{proof} This is a consequence of the convergence result in \cite[Theorem 5.4]{AronnaBonnansMartinon2013}. \end{proof} Note that in the proof of the above theorem, it is established that these hypotheses imply that the derivative of the shooting function is injective. \section{Application to a regulator problem}\label{SecExamples} Consider the following regulator problem, where $T=5$: \be \begin{split} &\min \mbox{$\frac{1}{2}$} \int_0^5 (x_{1,t}^2 + x_{2,t}^2) {\rm d} t + \mbox{$\frac{1}{2}$} x_{1,5}^2, \\ & \dot x_{1,t} = x_{2,t},\quad \dot x_{2,t} = u_t \in [-1,1], \end{split} \ee subject to the state constraint and initial conditions \be x_{2,t} \geq -0.2, \quad x_{1,0}=0,\quad x_{2,0}=1. \ee To write the problem in Mayer form, we introduce an auxiliary state variable given by the dynamics \be \dot x_{3,t} = \mbox{$\frac{1}{2}$} (x_{1,t}^2 + x_{2,t}^2),\quad x_{3,0}=0. \ee The resulting problem is then \be \label{Preg} \begin{split} & \min\, x_{3,5} + \mbox{$\frac{1}{2}$} x_{1,5}^2, \\ & \dot x_1 = x_2, \quad \dot x_2 = u, \quad \dot x_3 = \mbox{$\frac{1}{2}$} (x_1^2 + x_2^2), \\ &x_{1,0}=0,\quad x_{2,0}=1,\quad x_{3,0}=0,\\ &-1 \leq u \leq 1,\; \; x_2 \geq -0.2.
\end{split} \ee Using the optimal control solver BOCOP \cite{Bocop2017} we estimated that the optimal control $\hat{u}$ is a concatenation of a bang arc at the lower bound, followed by a constrained arc, and ending with a singular one. Briefly, we can say that the optimal control has a $B_-CS$ structure. \subsection{Checking local optimality} In this subsection we compute analytically the optimal solution of \eqref{Preg} and check that it verifies the second order sufficient condition for state-constrained control-affine problems proved in \cite[Theorem 5]{AronnaBonnansGoh2016}. Since the problem is convex, satisfying the first order optimality conditions is already enough to prove optimality; nevertheless, the quoted second order conditions are of interest since they imply quadratic growth, see \cite[Definition 3]{AronnaBonnansGoh2016}. To problem \eqref{Preg} we associate the functions $g:\mathbb{R}^3 \to \mathbb{R},$ $f_0,f_1:\mathbb{R}^3 \to \mathbb{R}^3$ given by \begin{equation*} g(x):= -x_2-0.2,\quad f_0(x):= \begin{pmatrix} x_2\\ 0 \\ \mbox{$\frac{1}{2}$}( x_1^2+x_2^2 )\end{pmatrix},\quad f_1(x) = \begin{pmatrix} 0\\ 1\\ 0\end{pmatrix}. \end{equation*} Thus the optimal control $\hat{u}$ is equal to $-1$ on the $B_-$ arc, and to $\displaystyle-\frac{g'(\hat{x})f_0(\hat{x})}{g'(\hat{x})f_1(\hat{x})} = 0$ on $C$ according to formula \eqref{uinC}, where $\hat{x}$ is the associated optimal state. The pre-Hamiltonian of \eqref{Preg} reads $$ H(u,x,p):= p_1 x_2 + p_2u +\mbox{$\frac{1}{2}$} p_3(x_1^2+x_2^2), $$ where $p:=(p_1,p_2,p_3)$ is the costate. Over the singular arc $S$, the inequality $u_{\min} < \hat{u}_t < u_{\max}$ and the minimum condition of Pontryagin's Maximum Principle (see {\em e.g.} \cite[equation (2.12)]{AronnaBonnansGoh2016}) imply that \be \label{stationarity2} H_u=0.
\ee Differentiating in time (see {\em e.g.} \cite{Mau76,AronnaBonnansMartinon2013}), we obtain \be H_u = pf_1,\quad \dot{H}_u = p[f_1,f_0],\quad \ddot{H}_u = p\big[[f_1,f_0],f_0 \big] + \hat{u}\, p\big[[f_1,f_0],f_1\big], \ee where $[X,Y]:= X' Y - Y' X$ denotes the {\em Lie bracket} associated with a pair of vector fields $X,Y : \mathbb{R}^n \to \mathbb{R}^n.$ In this control-affine case, the control variable appears explicitly neither in the expression of $H_u$ nor in its time derivative $\dot{H}_u.$ So, if $p\big[[f_1,f_0],f_1\big]$ takes only nonzero values along singular arcs, we obtain an expression for the control on singular arcs, namely \be \label{uinS} \hat{u}=-\frac{p\big[[f_1,f_0],f_0 \big](\hat{x})}{p\big[[f_1,f_0],f_1\big](\hat{x})}. \ee The involved Lie brackets for this example are \be \label{Lie} [f_1,f_0]= \begin{pmatrix} -1 \\ 0 \\ -x_2 \end{pmatrix},\quad \big[ [f_1,f_0],f_0 \big] = \begin{pmatrix} 0 \\ 0 \\ x_1 \end{pmatrix},\quad \big[ [f_1,f_0],f_1 \big]= \begin{pmatrix} 0 \\ 0\\ -1 \end{pmatrix}. \ee On the other hand, the costate equation gives \be \label{examplep} \begin{split} \dot{p}_1 &= - p_3\hat{x}_1,\quad p_{1,5} = \hat{x}_{1,5}, \\ {\rm d}{p}_2 &= -(p_1+p_3 \hat{x}_2){\rm d} t+{\rm d}\mu, \quad p_{2,5}=0, \\ \dot{p}_3 &= 0,\quad p_{3,5}=1, \end{split} \ee {where $\mu$ is the multiplier associated with the state constraint, and $\nu$ is the density of $\mu$.} Thus, \be\label{examplep3} p_3 \equiv 1, \ee and from \eqref{uinS} and \eqref{Lie} we get \be \label{uonS} \hat{u}=\hat{x}_1\quad \text{on}\ S. \ee Moreover, the first order optimality conditions imply that \be \label{p2CS} 0=H_u=p_2\quad \text{on } C\cup S. \ee Let us write $\hat{\tau}_1,\hat{\tau}_2$ for the switching times, so that $$ B_-=[0,\hat{\tau}_1],\quad C=[\hat{\tau}_1,\hat{\tau}_2],\quad S=[\hat{\tau}_2,5].
$$ Since the control $\hat{u}$ is constantly equal to $-1$ on $B_-,$ we have \be \label{examplex2} \hat{x}_{2,t} = 1-t \quad \text{on } B_-, \ee until it saturates the state constraint at $\hat{\tau}_1.$ Hence $1- \hat{\tau}_1 = -0.2,$ so it follows that \be \label{hattau1} \hat\tau_1=1.2 \ee and $B_- = [0,1.2].$ Consequently, \be\label{examplex1} \hat{x}_{1,t} = t-\frac{t^2}{2} \quad \text{on } B_-. \ee On $C=[1.2,\hat{\tau}_2],$ necessarily $\hat{x}_2 \equiv -0.2,$ thus $\hat u\equiv 0$ and \be \label{x1C} \hat{x}_{1,t} = 0.48 - 0.2(t-1.2) = 0.72 - t/5 \quad \text{ on } C. \ee On the singular arc $S=[\hat{\tau}_2,5]$ we get, from the expression \eqref{uonS} of $\hat{u}$ on $S$, that $\ddot{\hat{x}}_1 = \dot{\hat{x}}_2 = \hat{u} = \hat{x}_1. $ Thus \be \hat{x}_{1,t} = c_1 e^{5-t} + c_2 e^{t-5}\quad \text{ on } S, \ee for some real constants $c_1,c_2.$ Therefore, \be {\hat{x}}_{2,t} = \dot{\hat{x}}_{1,t} = -c_1 e^{5-t} + c_2 e^{t-5} \quad \text{ on } S. \ee The stationarity condition $0=H_u=p_2$ on $S$ yields $0=\dot{p}_2 = -p_1-\hat{x}_2,$ where the second equality follows from \eqref{examplep} and the fact that $\nu \equiv 0$ on $S.$ Thus $p_1=-\hat{x}_2$ on $S$, so \be p_{1,t} = c_1 e^{5-t} - c_2 e^{t-5} \quad \text{ on } S. \ee The transversality condition for $p_1$ (see \eqref{examplep}) implies $c_1+c_2 = c_1-c_2.$ Thus, $c_2=0$ and \be \hat{x}_1=-\hat{x}_2 = p_1 = c_1 e^{5-t} \quad \text{ on } S. \ee Since $-0.2=\hat x_{2,\hat{\tau}_2},$ we have $ \hat x_{1,\hat{\tau}_2} = 0.2. $ Hence, from the expression of $\hat{x}_1$ on $C$ given in \eqref{x1C}, we obtain $0.72-\hat{\tau}_2/5 = 0.2$ and, consequently, \be \label{hattau2} \hat{\tau}_2=2.6. \ee Additionally, $-0.2 = \hat x_{2,\hat{\tau}_2} = -c_1 e^{2.4},$ so that $ c_1 = 0.2 e^{-2.4}.
$ At time $\hat{\tau}_2$ we have $p_1= c_1 e^{2.4} = 0.2$ and, from the costate equation \eqref{examplep} and from \eqref{x1C}, we have \be \label{regp1-dc} \dot p_{1,t} = - \hat{x}_{1,t} = t/5 - 0.72 \leq 2.6/5 -0.72 < 0 \quad \text{on } C = [1.2,2.6]. \ee So $p_1$ is decreasing on $C$ and, recalling \eqref{p2CS}, this implies that \be \nu_t = p_{1,t} + \hat{x}_{2,t} > p_{1,\hat{\tau}_2}- 0.2 =0\quad \text{for } t\in [\hat{\tau}_1,\hat{\tau}_2). \ee Thus the complementarity condition for the state constraint \cite[equation (3.4)(ii)]{AronnaBonnansGoh2016} holds true. In order to check that the strict complementarity hypothesis (i) of \cite[Theorem 5]{AronnaBonnansGoh2016} is satisfied, we will prove that the corresponding strict complementarity condition for the control constraint of \cite[Definition 4]{AronnaBonnansGoh2016} holds. In view of \eqref{regp1-dc}, we get \be p_{1,t} = 0.2 + \big(t^2-\hat{\tau}_2^2\big)/10 - 0.72\,(t-\hat{\tau}_2) \quad \text{on } C. \ee Therefore $ p_{1,\hat{\tau}_1} = 0.2 + (1.44-6.76)/10 + 0.72 \times 1.4 = 0.676. $ On the $B_-$ arc, we have seen that $\hat{x}_1>0$ due to \eqref{examplex1} and $x_{1,0}=0,$ so that $\dot p_1 = - \hat{x}_1 <0$, and since $p_{1,\hat{\tau}_1}>0,$ it follows that $p_1$ takes values greater than $p_{1,\hat{\tau}_1} = 0.676$ on $B_-$. Therefore, since $\hat{x}_2 >-0.2$ over $B_-$, \be \dot{p}_2=-p_1-\hat{x}_2 < 0\quad \text{ on } B_-. \ee Since $p_{2,\hat{\tau}_1}=0,$ we get $p_2 > 0$ on $[0,\hat{\tau}_1)$ or, equivalently, $H_u>0$ on $[0,\hat{\tau}_1).$ We conclude that the strict complementarity condition for the control constraint given in {\cite[Definition 4]{AronnaBonnansGoh2016}} holds. This completes the verification of \cite[Theorem 5 - (i)]{AronnaBonnansGoh2016}. Let us now verify the uniform positivity \cite[equation (4.6)]{AronnaBonnansGoh2016}. The dynamics for the linearized state is \be \begin{split} & \dot{z}_1 = z_2,\quad \dot{z}_2=v,\quad \dot{z}_3 = \hat{x}_1 z_1 + \hat{x}_2 z_2,\\ & z_{1,0} = z_{2,0} = z_{3,0} =0.
\end{split} \ee Let $\mathcal{C}_S$ and ${\mathcal{P}}^2_*$ denote the strict critical cone and the extended cone (following the notation used in \cite{AronnaBonnansGoh2016}) at the optimal trajectory $(\hat{u},\hat{x}),$ respectively. Since $\hat{u}$ has the $B_-CS$ structure, for any $(v,z) \in \mathcal{C}_S$ we have $v=0$ on the initial interval $B_-.$ Consequently, \be \label{yB-} y=0 \quad \text{on } B_-,\text{ for all } (y,\xi,h) \in \mathcal{P}^2_*. \ee On the other hand, the dynamics for the transformed linearized state is \begin{gather} \label{eqxi} \dot\xi_1 = \xi_2+y,\quad \dot\xi_2=0,\quad \dot\xi_3 = \hat{x}_1 \xi_1 + \hat{x}_2\xi_2 + \hat{x}_2y,\\ \xi_{1,0} = \xi_{2,0} = \xi_{3,0}=0. \end{gather} Take $(y,\xi,h) \in \mathcal{P}^2_*.$ Then \be \label{xi2} \xi_2\equiv 0\quad \text{on } [0,5]. \ee In view of \cite[equation (3.15)(i)]{AronnaBonnansGoh2016}, we have $-\xi_2-y=0$ on $C.$ Thus, due to \eqref{xi2}, we get \be \label{y=0} y=0\quad \text{on } B_- \cup C. \ee Hence, from \eqref{eqxi}-\eqref{y=0}, we get $\xi_1=\xi_3=0$ on $B_- \cup C.$ {Regarding the last component $h$ of the considered critical direction $(y,\xi,h) \in \mathcal{P}^2_*,$ in view of the linearized cost equation \cite[equation (3.16)]{AronnaBonnansGoh2016} and due to the fact that there are no final constraints, we get} \be \label{h1} \hat{x}_{1,5}\,\xi_{1,5} + \xi_{3,5} = 0. \ee Then, we deduce that there is no restriction on $h.$ We obtain \be \mathcal{P}^2_* = \left\{ \begin{split} & (y,\xi,h) \in L^2 \times (H^1)^3 \times \mathbb{R} : \,\, y=\xi_1=\xi_3 = 0 \text{ on } B_- \cup C,\\ & \xi_2 = 0 \text{ on } [0,5],\ \dot\xi_1 = y \text{ and } \dot \xi_3 = \hat{x}_1\xi_1 + \hat{x}_2 y\ \text{ on } S, \, \eqref{h1} \text{ holds} \end{split} \right\}. \ee The quadratic forms $Q$ and ${\Omega}$ are given by \be Q(v,z):= \int_0^5( z^2_1 +z^2_2) {\rm d} t + z^2_{1,5}; \quad {\Omega}(y,h,\xi):= \int_0^5( \xi^2_1 + (\xi_2+y)^2) {\rm d} t + h^2.
\ee Thus, ${\Omega}$ is a Legendre form on $\{(y,h,\xi) \in L^2\times \mathbb{R} \times (H^1)^3:\text{\eqref{eqxi} holds}\}$ and is coercive on $\mathcal{P}^2_*.$ Hence the hypotheses of Theorem 5 in \cite{AronnaBonnansGoh2016} are fulfilled. Consequently, $(\hat{u},\hat{x})$ is a Pontryagin minimum of problem \eqref{Preg} satisfying the $\gamma$-growth condition. \subsection{Transformed problem} Here we transform problem \eqref{Preg} to obtain a problem with neither control nor state constraints, as done in Section \ref{SubsecReformulation}. The optimal control associated with \eqref{Preg} has the $B_-CS$ structure, as said above. Then we triplicate the number of state variables, obtaining the new variables $X_1,\dots,X_9,$ and we consider two switching times that we write $X_{10},X_{11}.$ The new problem has only one control variable, which corresponds to the singular arc and which we call $V.$ The reformulation of \eqref{Preg} is as follows: \be \label{Pregref} \begin{split} \min\ &X_{9,1} + \mbox{$\frac{1}{2}$} X^2_{7,1}, \\ & \dot{X}_1 = X_{10} X_2, \\ & \dot{X}_2 = - X_{10}, \\ & \dot{X}_3 = \mbox{$\frac{1}{2}$} X_{10} (X_1^2 + X_2^2), \\ & \dot{X}_4 = (X_{11}-X_{10})X_5, \\ & \dot{X}_5 = 0, \\ & \dot{X}_6 = \mbox{$\frac{1}{2}$} (X_{11}-X_{10}) (X_4^2 + X_5^2), \\ & \dot{X}_7 = (5-X_{11}) X_8,\\ & \dot{X}_8 =(5-X_{11}) V,\\ & \dot{X}_9 = \mbox{$\frac{1}{2}$} (5-X_{11}) (X_7^2 +X_8^2),\\ & \dot{X}_{10} = 0, \\ & \dot{X}_{11} = 0,\\ & X_{1,0}=0,\quad X_{2,0}=1,\quad X_{3,0}=0, \\ & X_{1,1}=X_{4,0},\quad X_{2,1} = X_{5,0},\quad X_{3,1} = X_{6,0},\\ & X_{4,1}=X_{7,0},\quad X_{5,1} = X_{8,0},\quad X_{6,1} = X_{9,0},\\ & X_{2,1}=-0.2 \, {( \text{that is, } g(x_2(\hat\tau_1))=0)}. \\ \end{split} \ee From \eqref{uonS} we deduce that $V=X_7.$ We solved problem \eqref{Pregref} by the shooting algorithm proposed in Section \ref{SecShooting}.
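As an independent numerical sanity check of the reformulation \eqref{Pregref} (not part of the original analysis), one can fix the switching times at their analytic values $X_{10}=1.2$, $X_{11}=2.6$, apply the singular feedback $V=X_7$ from \eqref{uonS}, and integrate the three re-scaled arcs sequentially. The sketch below does this in plain Python with a hand-rolled RK4 integrator; all function names are ours.

```python
import math

# Sequential integration of the three re-scaled arcs of the reformulated
# problem, each over the normalized time s in [0, 1]; the time derivative
# of each arc is multiplied by its duration, as in the reformulation.

def rk4_arc(f, x0, N=2000):
    """Classical RK4 integration of x' = f(x) over s in [0, 1]."""
    h = 1.0 / N
    x = list(x0)
    for _ in range(N):
        k1 = f(x)
        k2 = f([xi + h/2*ki for xi, ki in zip(x, k1)])
        k3 = f([xi + h/2*ki for xi, ki in zip(x, k2)])
        k4 = f([xi + h*ki for xi, ki in zip(x, k3)])
        x = [xi + h/6*(a + 2*b + 2*c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
    return x

t1, t2, T = 1.2, 2.6, 5.0          # analytic switching times and horizon

# bang arc B_- (u = -1), duration t1
arc1 = rk4_arc(lambda x: [t1*x[1], -t1, t1*0.5*(x[0]**2 + x[1]**2)],
               [0.0, 1.0, 0.0])
# constrained arc C (u = 0), duration t2 - t1, chained to the end of arc 1
d2 = t2 - t1
arc2 = rk4_arc(lambda x: [d2*x[1], 0.0, d2*0.5*(x[0]**2 + x[1]**2)], arc1)
# singular arc S with feedback V = X7 (= x1), duration T - t2
d3 = T - t2
arc3 = rk4_arc(lambda x: [d3*x[1], d3*x[0], d3*0.5*(x[0]**2 + x[1]**2)], arc2)

cost = arc3[2] + 0.5*arc3[0]**2
```

The end values reproduce the junction conditions of the analysis: $x_2=-0.2$ at both switching times, $x_1(\hat\tau_2)=0.2$, $x_1(5)=0.2e^{-2.4}$, with a total cost close to the reported optimal value.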
The graphics of the optimal control and states are shown in Figure \ref{regulator} in the original variables $u$ and $x,$ and the corresponding costate variables $p_1,p_2,p_3$ are displayed in Figure \ref{regulator2}. In our numerical tests we took 1000 time steps. The optimal switching times obtained numerically are $\hat{\tau}_1=1.2$ and $\hat{\tau}_2 = 2.6036023,$ so these values are in good agreement with the ones found analytically in \eqref{hattau1} and \eqref{hattau2}, while the optimal cost is $0.3934884$ and the shooting function evaluated at the optimal trajectory is $5\times 10^{-6}.$ \begin{figure} \caption{Optimal control and state variables} \end{figure} \begin{figure} \caption{Costate variables} \end{figure} { \begin{remark} In order to compare with existing methods, we have solved the same problem using GEKKO Python \cite{beal2018gekko}. We have tested different variable definitions for the control input and we found the best result by setting the control as a {\em manipulated variable} \cite{beal2018gekko}. The results are shown in Figure \ref{GEKKO2}. On the right of this figure, we exhibit a zoom around the second switching point $\hat \tau_2$ of the optimal control. We can see that the method shows a discontinuity of the control at $\hat \tau_2,$ but that the approximation of $\hat \tau_2$ is between $2.57$ and $2.58$, while we have calculated analytically that $\hat \tau_2 = 2.6$ and our shooting algorithm finds the approximate value $2.6036023.$ GEKKO was tested with up to 1000 time steps, while our shooting algorithm was run with 150 time steps. This indicates that our method is more accurate when it comes to finding switching times and approximating bang-singular solutions. This feature of shooting methods has already been observed in the literature (see {\em e.g.} \cite{Trelat2012}). \end{remark} } \begin{figure} \caption{Numerical solution using GEKKO. On the left: optimal control and state variables.
On the right: zoom of the optimal control around the second switching time.} \end{figure} \section{Conclusion} We have shown that, for problems that are affine w.r.t.\ the control, and for which the time interval can be partitioned into a finite set of arcs, the shooting algorithm converges locally quadratically if a second order sufficient condition holds, and thus the method may be an efficient way to get a highly accurate solution. The essential tool was the formulation of a transformed problem. We think that this approach could be extended to more general problems with several state constraints and control variables, the latter possibly entering nonlinearly in the problem. {Problems with vector controls are important in complex models; see {\em e.g.} \cite{LeparouzHerisseJean2022,LeparouzHerisseJean2022CDC}, where a control-affine problem with vector control and state constraints, but generically no singular arcs in the state-constrained solution, is studied. When considering several controls and state constraints, it becomes} important to take into account the possibility of having high-order state constraints. \section*{Statements and Declarations} The authors have no conflicts of interest to declare that are relevant to the content of this article. \end{document}
\begin{document} \allowdisplaybreaks \renewcommand{\thefootnote}{$\star$} \renewcommand{\PaperNumber}{030} \FirstPageHeading \ShortArticleName{Tilting Modules in Truncated Categories} \ArticleName{Tilting Modules in Truncated Categories\footnote{This paper is a~contribution to the Special Issue on New Directions in Lie Theory. The full collection is available at \href{http://www.emis.de/journals/SIGMA/LieTheory2014.html} {http://www.emis.de/journals/SIGMA/LieTheory2014.html}}} \Author{Matthew BENNETT~$^\dag$ and Angelo BIANCHI~$^\ddag$} \AuthorNameForHeading{M.~Bennett and A.~Bianchi} \Address{$^\dag$~Department of Mathematics, State University of Campinas, Brazil} \EmailD{\href{mailto:[email protected]}{[email protected]}} \Address{$^\ddag$~Institute of Science and Technology, Federal University of S\~ao Paulo, Brazil} \EmailD{\href{mailto:[email protected]}{[email protected]}} \ArticleDates{Received September 05, 2013, in f\/inal form March 17, 2014; Published online March 26, 2014} \Abstract{We begin the study of a~tilting theory in certain truncated categories of mo\-du\-les $\mathcal G(\Gamma)$ for the current Lie algebra associated to a~f\/inite-dimensional complex simple Lie algebra, where $\Gamma = P^+ \times J$, $J$ is an interval in $\mathbb Z$, and $P^+$ is the set of dominant integral weights of the simple Lie algebra. We use this to put a~tilting theory on the category $\mathcal G(\Gamma')$ where $\Gamma' = P' \times J$, where $P'\subseteq P^+$ is saturated. Under certain natural conditions on $\Gamma'$, we note that $\mathcal G(\Gamma')$ admits full tilting modules.} \Keywords{current algebra; tilting module; Serre subcategory} \Classification{17B70; 17B65; 17B10; 17B55} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} Associated to any f\/inite-dimensional complex simple Lie algebra $\mathfrak g$ is its current algebra $\mathfrak g[t]$.
The current algebra is just the Lie algebra of polynomial maps $\mathbb C \rightarrow \mathfrak g$ and can be identif\/ied with the space $\mathfrak g \otimes \mathbb C[t]$ with the obvious commutator. The study of the representation theory of current algebras was largely motivated by their relationship to the representation theory of af\/f\/ine and quantum af\/f\/ine algebras associated to $\mathfrak g$. However, it is also now of independent interest since the current algebra has connections with problems arising in mathematical physics, for instance the $X = M$ conjectures, see~\cite{AK,FK,naoi1}. Also, the current algebra, and many of its modules, admits a~natural grading by the integers, and this grading gives rise to interesting combinatorics. For example,~\cite{KNloewy} relates certain graded characters to the Poincar\'e polynomials of quiver varieties. Let $P^+$ be the set of dominant integral weights of $\mathfrak g$, $\Lambda=P^+ \times \mathbb Z$, and $\widehat{\mathcal G}$ the category of $\mathbb Z$-graded modules for $\mathfrak g[t]$ with the restriction that the graded pieces are f\/inite-dimensional. Also, let $\mathcal G$ be the full subcategory of $\widehat{\mathcal G}$ consisting of modules whose grades are bounded above. Then $\Lambda$ indexes the simple modules in $\widehat{\mathcal G}$. In this paper we are interested in studying Serre subcategories $\widehat{\mathcal G}(\Gamma)$ where $\Gamma \subset \Lambda$ is of the form $P'\times J$ where $J\subset \mathbb Z$ is a~(possibly inf\/inite) interval and $P'\subset P^+$ is closed with respect to a~natural partial order. In particular, we study the tilting theories in these categories. This generalizes the work of~\cite{bc}, where $\Gamma$ was taken to be all of $\Lambda$. The category $\widehat{\mathcal G}(\Gamma)$ contains the projective cover and injective envelope of its simple objects.
Given a~partial order on the set $\Gamma$, we can def\/ine the standard and costandard objects, as in~\cite{Donkin}. The majority of the paper is concerned with a~particular order, in which case the standard objects $\Delta(\lambda, r)(\Gamma)$ are quotients of the f\/inite-dimensional local Weyl modules, and the costandard objects $\nabla(\lambda, r)(\Gamma)$ are submodules of (appropriately def\/ined) duals of the inf\/inite-dimensional global Weyl modules. We recall (see, for example,~\cite{Mathieu}) that a~module~$T$ is called tilting if~$T$ admits a~f\/iltration by standard modules and a~f\/iltration by costandard modules. In our case, both sets of objects have been extensively studied (see~\cite{CL,FKtw,FoL,naoi2} for the local Weyl modules, and~\cite{CFK,CPweyl} for the global Weyl modules). Both families of modules live in a~subcategory $\mathcal G_{\bdd}(\Gamma)$ consisting of objects whose weights are in a~f\/inite union of cones (as in $\mathcal O$) and whose grades are bounded above. The main goal of this paper is to construct another family of modules indexed by $\Gamma$ and contained in $\mathcal G_{\bdd}(\Gamma)$. These modules are denoted by $T(\lambda, r)(\Gamma)$, and admit an inf\/inite f\/iltration whose quotients are of the form $\Delta(\mu,s)(\Gamma)$, for $(\mu,s)\in \Gamma$. They also satisfy the homological property that $\Ext^1_{\mathcal G}(\Delta(\mu,s)(\Gamma), T(\lambda, r)(\Gamma))=0$ for all $(\mu, s)\in \Gamma$. We use the following theorem to prove that this homological property is equivalent to having a~$\nabla(\Gamma)$-f\/iltration, proving that the $T(\lambda, r)(\Gamma)$ are tilting. The theorem was proved in~\cite{bcm}, \cite{bbcklk} and~\cite{ci} for $\mathfrak{sl}_2[t]$, $\mathfrak{sl}_{n+1}[t]$ and general $\mathfrak g[t]$, respectively. \begin{Theorem} Let $P(\lambda, r)$ denote the projective cover of the simple module $V(\lambda, r)$.
Then $P(\lambda,r)$ admits a~filtration by global Weyl modules, and we have an equality of filtration multiplicities $[P(\lambda, r): W(\mu,s)] = [\Delta(\mu,s): V(\lambda,r)]$, where $\Delta(\mu,s)$ is the local Weyl module. \end{Theorem} The following is the main result of this paper. \begin{Theorem}\quad \begin{enumerate}\itemsep=0pt \item[$1.$] Given $(\lambda, r)\in\Gamma$, there exists an indecomposable module $T(\lambda,r)(\Gamma)\in\Ob\mathcal G_{\bdd}(\Gamma)$ which admits a~$\Delta(\Gamma)$-filtration and a~$\nabla(\Gamma)$-filtration. Further, \begin{gather*} T(\lambda,r)(\Gamma)[s]_\lambda=0 \qquad \text{if} \quad s>r, \quad \dim T(\lambda,r)(\Gamma)[r]_\lambda=1, \quad \wt T(\lambda,r)(\Gamma)\subset\conv W\lambda, \end{gather*} and $T(\lambda,r)(\Gamma)\cong T(\mu,s)(\Gamma)$ if and only if $(\lambda,r)=(\mu,s)$. \item[$2.$] Moreover, any indecomposable tilting module in $\mathcal G_{\bdd}(\Gamma)$ is isomorphic to $T(\lambda,r)(\Gamma)$ for some $(\lambda,r)\in \Gamma$, and any tilting module in $\mathcal G_{\bdd}(\Gamma)$ is isomorphic to a~direct sum of indecomposable tilting modules. \end{enumerate} \end{Theorem} The majority of the paper is devoted to the case where $\Gamma=P^+ \times J$. It is easy to see from the construction that the module $T(\lambda, r)(\Gamma)$ has its weights bounded above by $\lambda$. It follows that if we let $P'\subset P^+$ be saturated (downwardly closed with respect to the normal partial order on weights), and set $\Gamma'=P'\times J$, then $T(\lambda,r)(\Gamma')=T(\lambda, r)(\Gamma)$. We use the convention that $\Delta(\lambda, r)(\Lambda)$ is simply written $\Delta(\lambda, r)$, and similarly for other objects. Keeping $\Gamma=P^+ \times J$, there is a~natural functor taking $M\in \Ob \mathcal G$ to $M^\Gamma\in\Ob\mathcal G(\Gamma)$.
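To illustrate only the bookkeeping of index sets involved in passing from $\Lambda$ to $\Gamma=P'\times J$, here is a toy Python sketch. It is our own, purely combinatorial model: a module is recorded solely through the multiplicities of its simple constituents $V(\lambda,r)$, the actual module structure (and the categorical definition of $M^\Gamma$) is ignored, and all names are hypothetical.

```python
# Toy model: a "module" is a dict mapping pairs (lam, r) to multiplicities;
# truncation to Gamma = P' x J keeps only the constituents lying in Gamma.

def truncate(constituents, P_prime, J):
    """Keep the pairs (lam, r) with lam in P' and r in the interval J."""
    lo, hi = J
    return {(lam, r): m for (lam, r), m in constituents.items()
            if lam in P_prime and lo <= r <= hi}

# a hypothetical module with sl2-weights lam and grades r
M = {(0, -1): 1, (2, 0): 2, (4, 3): 1, (2, 5): 1}
M_trunc = truncate(M, P_prime={0, 2}, J=(0, 4))
```

Only the constituent $(2,0)$ survives here: the others fail either the weight condition $\lambda\in P'$ or the grade condition $r\in J$.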
For $(\lambda, r)\in \Gamma$ the functor $M\mapsto M^\Gamma$ preserves many objects, and in particular we have $\Delta(\lambda, r)^\Gamma=\Delta(\lambda, r)(\Gamma)$ and $\nabla(\lambda, r)^\Gamma=\nabla(\lambda, r)(\Gamma)$. So it is natural to ask if $T(\lambda, r)^\Gamma=T(\lambda, r)(\Gamma)$. The answer is ``no'', and is a~result of the following phenomenon: for $(\mu,s)\notin \Gamma$, the module $\nabla(\mu,s)^\Gamma$ is not in general zero, and does not correspond to any simple module. Hence $\nabla(\mu,s)^\Gamma$ cannot be considered costandard. So the modules $T(\lambda, r)(\Gamma)$ must be studied independently. Another purpose of this paper is the following. In~\cite{bc}, the tilting modules $T(\lambda, r)$ are constructed for all $(\lambda, r)\in \Lambda$. It is then natural to consider the module $T=\bigoplus_{(\lambda,r)\in \Lambda} T(\lambda, r)$, the algebra $A=\End T$, and use several functors to f\/ind equivalences of categories. However, it is not hard to see that if $T$ is def\/ined in this way, then $T$ fails to have f\/inite-dimensional graded components, and hence $T\notin\Ob\mathcal G$. One of the purposes of this paper is to f\/ind Serre subcategories with index sets $\Gamma$ such that $T(\Gamma)=\bigoplus_{(\lambda,r)\in\Gamma}T(\lambda,r)(\Gamma)\in\Ob\mathcal G(\Gamma)$. It is not hard to see that (except for the degenerate case where $\Gamma=\{0\}\times J$) a~necessary and suf\/f\/icient pair of conditions on $\Gamma$ is that $P'$ be f\/inite and $J$ have an upper bound. It is natural to study the algebra $\End T(\Gamma)$ in the case that $T(\Gamma ) \in \mathcal G(\Gamma)$, and this will be pursued elsewhere. We also note that in the case that $\Gamma$ is f\/inite then $\End T(\Gamma)$ is a~f\/inite-dimensional associative algebra. We end the paper by considering other partial orders which can be used on $\Gamma\subset\Lambda$. In particular, we consider partial orders induced by the so-called covering relations.
One tends to get trivial tilting theories in these cases (one of the standard or costandard modules is simple, and the other is projective or injective), but the partial orders are natural for other reasons, and we include their study for completeness. One of the reasons to study these other subcategories is that one can obtain directed categories as in~\cite{cg:hwcat} (in the sense of~\cite{CPS}). The paper is organized as follows. In Section~\ref{section1} we establish notation and recall some basic results on the f\/inite-dimensional representations of a~f\/inite-dimensional simple Lie algebra. In Section~\ref{section2} we introduce several important categories of modules for the current algebra. We also introduce some important objects, including the local and global Weyl modules. In Section~\ref{section3} we state the main results of the paper and establish some homological results. Section~\ref{section4} is devoted to constructing the modules $T(\lambda, r)(\Gamma)$ and establishing their properties. Finally, in Section~\ref{section5}, we consider the tilting theories which arise when considering partial orders on $\Lambda$ which are induced by covering relations. We also provide for the reader's convenience a~brief index of the notation which is used repeatedly in this paper. \section{Preliminaries}\label{section1} \subsection{Simple Lie algebras and current algebras}\label{section1.1} We f\/ix $\mathfrak g$, a~complex simple f\/inite-dimensional Lie algebra, and let $\mathfrak h \subset \mathfrak g$ be a~f\/ixed Cartan subalgebra. Denote by $\{\alpha_i: i\in I\}$ a set of simple roots of $\mathfrak g $ with respect to the Cartan subalgebra $\mathfrak h$, where $I=\{1,\dots,\dim\mathfrak h\}$. Let $R\subset\mathfrak h^*$ be the corresponding set of roots, $R^+$ the positive roots, $P^+$ the dominant integral weights, and $Q^+$ the positive root lattice. By $\theta$ we denote the highest root.
Given $\lambda,\mu\in\mathfrak h^*$, we say that $\lambda \ge \mu$ if and only if $ \lambda-\mu\in Q^+$. The Weyl group of $\mathfrak g$ is the subgroup $W\subset \operatorname{Aut}(\mathfrak h^*)$ generated by the simple ref\/lections~$s_i$, and we let~$w_\circ$ denote the unique longest element of~$W$. For $\alpha\in R$ we write $\mathfrak g_\alpha$ for the corresponding root space. Then the subspaces $\mathfrak n^\pm=\bigoplus_{\alpha\in R^+}\mathfrak g_{\pm\alpha}$ form Lie subalgebras of $\mathfrak g$. We f\/ix a Chevalley basis $\{ x^\pm_\alpha, \, h_i \,|\, \alpha\in R^+, \, i\in I \}$ of $\mathfrak g$, and for each $\alpha\in R^+$ we set $h_\alpha=[x^+_\alpha,x^-_\alpha]$. Note that $h_{\alpha_i}=h_i$, $i\in I$, and we let $\omega_i=h_i^*\in P^+$. For any Lie algebra $\mathfrak a$ we can construct another Lie algebra $\mathfrak a[t]=\mathfrak a \otimes \mathbb C[t]$, with bracket given by $[x\otimes t^r, y\otimes t^s]=[x,y]\otimes t^{r+s}$, which is the current algebra associated to $\mathfrak a$. Set $\mathfrak a[t]_+=\mathfrak a \otimes t \mathbb C[t]$. Then $\mathfrak a[t]$ and $\mathfrak a[t]_+$ are $\mathbb Z_+$-graded Lie algebras, graded by powers of $t$. If we denote by $\mathbb U(\mathfrak a)$ the universal enveloping algebra of a Lie algebra $\mathfrak a$, then $\mathbb U(\mathfrak a[t])$ and $\mathbb U(\mathfrak a[t]_+)$ inherit a natural grading by powers of $t$. We denote by $\mathbb U(\mathfrak a[t])[k]$ the $k^{th}$-graded component. Each graded component is a module for $\mathfrak a$ under left or right multiplication, and the adjoint action. Supposing that $\dim \mathfrak a < \infty$, then the graded component $\mathbb U(\mathfrak a[t])[k]$ is a free $\mathfrak a $ module (under multiplication) of f\/inite rank. It is well-known that the universal enveloping algebra $\mathbb U(\mathfrak a)$ is a Hopf algebra.
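As a concrete illustration of the current algebra bracket $[x\otimes t^r, y\otimes t^s]=[x,y]\otimes t^{r+s}$, the following small Python sketch (our own toy encoding, using the standard basis $\{e,h,f\}$ of $\mathfrak{sl}_2$; all names are ours) implements $\mathfrak{sl}_2[t]$ on basis elements and exhibits how degrees add under the bracket.

```python
# Elements of sl2[t] are dicts mapping basis keys (a, r) -- meaning
# a \otimes t^r for a in {"e", "h", "f"} and r >= 0 -- to coefficients.

SL2 = {  # structure constants of sl2: [a, b] as a dict of basis elements
    ("e", "f"): {"h": 1},  ("f", "e"): {"h": -1},
    ("h", "e"): {"e": 2},  ("e", "h"): {"e": -2},
    ("h", "f"): {"f": -2}, ("f", "h"): {"f": 2},
}

def bracket(x, y):
    """Bracket in sl2[t]: [a t^r, b t^s] = [a, b] t^(r+s), bilinearly."""
    out = {}
    for (a, r), ca in x.items():
        for (b, s), cb in y.items():
            for c, k in SL2.get((a, b), {}).items():
                key = (c, r + s)          # degrees add under the bracket
                out[key] = out.get(key, 0) + ca*cb*k
    return {k: v for k, v in out.items() if v != 0}

e1 = {("e", 1): 1}   # e \otimes t
f1 = {("f", 1): 1}   # f \otimes t
h0 = {("h", 0): 1}   # h \otimes 1
```

For instance, `bracket(e1, f1)` is $h\otimes t^2$, illustrating that the grading is by powers of $t$ and that $\mathfrak a$ and $\mathfrak a\otimes t$ already generate everything in degree $\geq 0$.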
In particular, $\mathbb U(\mathfrak a)$ is equipped with a comultiplication def\/ined by sending $x\mapsto x\otimes 1+1\otimes x$ for $x\in\mathfrak a$, and extending this assignment to be a homomorphism. In the case where $\mathfrak a =\mathfrak b[t]$ or $\mathfrak b[t]_+$, the comultiplication is a homomorphism of graded associative algebras. We note that if $[\mathfrak a,\mathfrak a]=\mathfrak a$ (which holds for our Lie algebra $\mathfrak g$), then as a graded associative algebra, $\mathbb U(\mathfrak a[t])$ is generated by $\mathfrak a$ and $\mathfrak a\otimes t$. \subsection{Finite-dimensional modules}\label{section1.2} The f\/irst category we consider is $\mathcal F(\mathfrak g)$, the category of f\/inite-dimensional modules for~$\mathfrak g$, with morphisms $\mathfrak g$-module homomorphisms. It is well known that this is a semi-simple category, and that the simple objects are parametrized by $\lambda \in P^+$. Letting $V(\lambda)$ denote the simple module associated to $\lambda$, it is generated by a vector $v_\lambda\in V(\lambda)$ satisfying the def\/ining relations \begin{gather*} \mathfrak n^+ v_\lambda=0,\qquad hv_\lambda=\lambda(h)v_\lambda,\qquad (x^-_{\alpha_i})^{\lambda(h_i)+1}v_\lambda =0, \end{gather*} for all $h\in\mathfrak h$, $i\in I$. This category admits a duality, which on simple modules is given by $V(\lambda)^*\cong_{\mathfrak g} V(-w_\circ\lambda)$. An object $V\in\mathcal F(\mathfrak g)$ has a weight space decomposition $V=\bigoplus_{\lambda\in\mathfrak h^*} V_\lambda$ where $V_\lambda=\{v\in V : hv=\lambda(h)v,\, \forall\, h\in\mathfrak h\}$. For any such $V$, we def\/ine the subset $\operatorname{wt}(V)=\{\lambda\in\mathfrak h^*:V_\lambda\ne 0\}$ and def\/ine the character of $V$ to be the sum $\ch V=\sum \dim V_\lambda r^\lambda$. The following results are standard: \begin{Lemma} Let $V\in\mathcal F(\mathfrak g)$ and $\lambda\in P^+$.
Then, \begin{enumerate}\itemsep=0pt \item[$1)$] $w\operatorname{wt}(V)\subset\operatorname{wt}(V)$ and $\dim V_\lambda=\dim V_{w\lambda}$ for all $w\in W$; \item[$2)$] $\dim\operatorname{Hom}_{\mathfrak g}(V(\lambda), V)=\dim\{v\in V_\lambda: \mathfrak n^+ v=0\}$; \item[$3)$] $\operatorname{wt}(V(\lambda))\subset \lambda-Q^+$. \end{enumerate} \end{Lemma} \section{The main category and its subcategories} \label{section2} In this section we introduce the main categories of study, and present several of their properties and functors between them. We also introduce several families of modules which will play important roles. Most of these categories and objects have been studied elsewhere (see~\cite{bbcklk,bc,cg:hwcat}). \subsection{The main category}\label{section2.1} We denote by $\widehat{\mathcal G}$ the category of $\mathbb Z$-graded $\mathfrak g[t]$-modules whose graded components are f\/inite-dimensional, with morphisms the degree zero maps of $\mathfrak g[t]$-modules. Writing $V\in\operatorname{Ob}\widehat{\mathcal G}$ as \begin{gather*} V=\bigoplus_{r\in\mathbb Z} V[r], \end{gather*} we see that $V[r]$ is a f\/inite-dimensional $\mathfrak g$-module, while $(z\otimes t^k).V[r]\subset V[r+k]$ for all $z\in\mathfrak g$, $k\in \mathbb Z_{\ge 0}$, and $r\in \mathbb Z$. For $M\in \widehat{\mathcal G}$, its graded character is the (formal, and possibly inf\/inite) sum $\ch_{\operatorname{gr}} M = \sum\limits_{r\in \mathbb Z} \ch M[r]u^r$. For $V\in \mathcal F(\mathfrak g)$ we make $V$ an object in $\widehat{\mathcal G}$, which we shall call $\ev V$, in the following way. Set $\ev V[0]=V$ and $\ev V[r]=0$ for all $r\ne 0$. Then necessarily we have $(z\otimes t^k).v = \delta_{k,0} z.v$ for $z\in\mathfrak g$, $k\in\mathbb Z_+$, $v\in \ev V$. It is not hard to see that this def\/ines a covariant functor $\ev:\mathcal F(\mathfrak g)\to \widehat{\mathcal G}$.
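As a~simple illustration of these def\/initions (a~routine check for $\mathfrak g=\mathfrak{sl}_2$, included only as an example): the adjoint representation of $\mathfrak{sl}_2$ is $V(2\omega_1)$, with \begin{gather*} \operatorname{wt}(V(2\omega_1))=\{2\omega_1,\, 0,\, -2\omega_1\}, \qquad \dim V(2\omega_1)=3. \end{gather*} The evaluation module $\ev V(2\omega_1)$ is concentrated in degree zero, so $\ch_{\operatorname{gr}} \ev V(2\omega_1)=(\ch V(2\omega_1))u^0$, and every element of $\mathfrak g\otimes t\mathbb C[t]$ necessarily acts by zero on $\ev V(2\omega_1)$.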
Further, for $s\in\mathbb Z$ let $\tau_s:\widehat{\mathcal G}\to\widehat{\mathcal G}$ be the grade shift functor given by \begin{gather*} (\tau_s V)[k]=V[k-s],\qquad \text{for all} \quad k\in\mathbb Z, \quad V\in\operatorname{Ob}{\widehat{\mathcal G}}. \end{gather*} For $(\lambda,r)\in P^+\times \mathbb Z$ set $V(\lambda,r):=\tau_r(\ev (V(\lambda)))$ and $v_{\lambda,r}:=\tau_r(v_\lambda)$. \begin{Proposition} The isomorphism classes of simple objects in $\widehat{\mathcal G}$ are parametrized by pairs $(\lambda,r)\in P^+\times\mathbb Z$ and we have \begin{gather*} \operatorname{Hom}_{\widehat{\mathcal G}}(V(\lambda,r), V(\mu,s))= \begin{cases} 0, & \text{if} \ \ (\lambda,r)\ne (\mu,s),\\ \mathbb C, & \text{if} \ \ (\lambda,r) = (\mu,s). \end{cases} \end{gather*} Moreover, if $V\in\operatorname{Ob}\widehat{\mathcal G}$ satisfies $V=V[n]$ for some $n\in\mathbb Z$, then $V$ is semi-simple. \end{Proposition} The category $\widehat{\mathcal G}$ admits a duality, where given $M$ we def\/ine $M^*\in \operatorname{Ob}\widehat{\mathcal G}$ to be the module given by \begin{gather*} M^*=\bigoplus_{r\in\mathbb Z}M^*[-r] \qquad \text{and} \qquad M^*[-r]=M[r]^* \end{gather*} and equipped with the usual action, where \begin{gather*} \big(\big(x\otimes t^r\big)m^*\big)(v) = -m^*\big(\big(x\otimes t^r\big).v\big). \end{gather*} We note that $M^{**}\cong M$ and that $\ch_{\operatorname{gr}}M^*=\sum\limits_{r\in\mathbb Z} \ch (M[r]^*)u^{-r}$. Set $\Lambda=P^+\times\mathbb Z$ and equip $\Lambda$ with the lexicographic partial order $\le$, i.e.\ \begin{gather*} (\mu,r) \le (\lambda,s) \qquad \Leftrightarrow \qquad \text{either} \quad \mu < \lambda \quad \text{or} \quad \mu=\lambda \quad \text{and} \quad r\le s. \end{gather*} \subsection{Some bounded subcategories of the main category}\label{section2.2} We let $\mathcal G_{\le s}$ be the full subcategory of $\widehat{\mathcal G}$ whose objects $V$ satisfy $V[r]=0$ for all $r>s$.
Clearly~$\mathcal G_{\le s}$ is a full subcategory of $\mathcal G_{\le r}$ for all $s< r\in\mathbb Z$. Def\/ine $\mathcal G$ to be the full subcategory of $\widehat{\mathcal G}$ whose objects are those $V$ satisfying $V\in\operatorname{Ob}\mathcal G_{\le s}$ for some $s\in\mathbb Z$. Finally, let~${\mathcal G}_{\operatorname{bdd}}$ be the full subcategory of~$\mathcal G$ consisting of objects~$M$ satisfying the following condition: $|\wt (M)\cap P^+|<\infty$. Given $s\in\mathbb Z$ and $V\in \operatorname{Ob}\widehat{\mathcal G}$, def\/ine a submodule $V_{>s}=\bigoplus_{r>s} V[r]$ and a corresponding quotient $V_{\le s}= V/V_{>s}$. Then it is clear that $V_{\le s}\in\operatorname{Ob}\mathcal G_{\le s}$, and indeed this is the maximal quotient of~$V$ in~$\mathcal G_{\le s}$. Any $f\in\operatorname{Hom}_{\widehat{\mathcal G}}( V, W)$ naturally induces a morphism $f_{\le s}\in\operatorname{Hom}_{{\mathcal G}_{\le s}}( V_{\le s}, W_{\le s})$. The following is proved in~\cite{cg:hwcat}. \begin{Lemma}\label{functor} The assignments $V\mapsto V_{\le r}$ for all $V\in\operatorname{Ob}\widehat{\mathcal G}$ and $f\mapsto f_{\le r}$ for all $f\in\operatorname{Hom}_{\widehat{\mathcal G}}(V,W)$, $V,W\in\operatorname{Ob}\widehat{\mathcal G}$, define a full, exact and essentially surjective functor from $\widehat{\mathcal G}$ to $\mathcal G_{\le r}$. \end{Lemma} Given $V\in\operatorname{Ob}\mathcal G$ def\/ine $[V: V(\lambda, r)]:=[V[r]:V(\lambda)]$, the multiplicity of $V(\lambda)$ in a composition series for the $\mathfrak g$-module $V[r]$. For any $V\in\operatorname{Ob}\widehat{\mathcal G}$, def\/ine \begin{gather*} \Lambda(V)=\{(\lambda,r)\in\Lambda: [V:V(\lambda,r)]\ne 0\}.
\end{gather*} \subsection{Projective and injective objects in the main category}\label{section2.3} Given $(\lambda,r)\in \Lambda$, set \begin{gather*} P(\lambda,r)=\textbf U(\mathfrak g[t])\otimes_{\textbf U(\mathfrak g)} V(\lambda,r) \qquad \text{and} \qquad I(\lambda,r)= P(-w_\circ\lambda,-r)^{*}. \end{gather*} Clearly these are inf\/inite-dimensional $\mathbb Z$-graded $\mathfrak g[t]$-modules. Using the PBW theorem we have an isomorphism of graded vector spaces $\textbf U(\mathfrak g[t])\cong \textbf U(\mathfrak g[t]_+)\otimes \textbf U(\mathfrak g)$, and hence we get $P(\lambda,r)[k]=\textbf U(\mathfrak g[t]_+)[k-r]\otimes V(\lambda,r)$, where we understand that $\textbf U(\mathfrak g[t]_+)[k-r]=0$ if $k<r$. This shows that $P(\lambda,r)\in\Ob\widehat{\mathcal G}$ and also that $P(\lambda,r)[r] =1\otimes V(\lambda,r)$. Set $p_{\lambda,r}=1\otimes v_{\lambda,r}$. \begin{Proposition} \label{projectives} Let $(\lambda,r)\in\Lambda$, and $s\ge r$. \begin{enumerate}\itemsep=0pt \item[$1.$] $P(\lambda,r)$ is generated as a~$\mathfrak g[t]$-module by $p_{\lambda,r}$ with defining relations \begin{gather*} \mathfrak n^+ p_{\lambda,r}=0, \qquad hp_{\lambda,r}=\lambda(h)p_{\lambda,r}, \qquad (x^-_{\alpha_i})^{\lambda(h_i)+1}p_{\lambda,r}=0, \end{gather*} for all $h\in\mathfrak h$, $i\in I$. Hence, $P(\lambda,r)$ is the projective cover in the category $\widehat{\mathcal G}$ of its unique simple quotient $V(\lambda,r)$. \item[$2.$] The modules $P(\lambda,r)_{\le s}$ are projective in $\mathcal G_{\le s}$. \item[$3.$] Let $V\in\Ob\widehat{\mathcal G}$. Then $\dim\Hom_{\widehat{\mathcal G}}(P(\lambda,r),V)=[V:V(\lambda,r)]$. \item[$4.$] Any injective object of $\mathcal G$ is also injective in $\widehat{\mathcal G}$. \item[$5.$] The object $I(\lambda,r)$ is the injective envelope of $V(\lambda,r)$ in $\mathcal G$ or $\widehat{\mathcal G}$.
\end{enumerate} \end{Proposition} \subsection{Local and global Weyl modules}\label{section2.4} The next two families of modules we need are the local and global Weyl modules, which were originally def\/ined in~\cite{CPweyl}. For the purposes of this paper, we shall denote the local Weyl modules by $\Delta(\lambda,r)$, $(\lambda,r)\in P^+\times\mathbb Z$. Thus, $\Delta(\lambda, r)$ is generated as a~$\mathfrak g[t]$-module by an element $w_{\lambda,r}$ with relations: \begin{gather*} \mathfrak n^+[t]w_{\lambda,r}=0, \qquad (x_{i}^-)^{\lambda(h_{i})+1}w_{\lambda,r}=0, \qquad (h\otimes t^s)w_{\lambda,r}= \delta_{s,0}\lambda(h)w_{\lambda,r}, \end{gather*} where $i\in I$, $h\in\mathfrak h$ and $s\in\mathbb Z_+$. Next, let $W(\lambda,r)$ be the global Weyl module: the $\mathfrak g[t]$-module generated by an element $w_{\lambda,r}$ with relations: \begin{gather*} \mathfrak n^+[t]w_{\lambda,r}=0, \qquad (x_{i}^-)^{\lambda(h_{i})+1}w_{\lambda,r}=0, \qquad hw_{\lambda,r}=\lambda(h)w_{\lambda,r}, \end{gather*} where $i\in I$ and $h\in\mathfrak h$. Clearly the module $\Delta(\lambda,r)$ is a~quotient of $W(\lambda,r)$, and moreover $V(\lambda,r)$ is the unique irreducible quotient of $W(\lambda,r)$. It is known (see~\cite{CFK} or~\cite{CPweyl}) that $W(0,r)\cong\mathbb C$ and that, if $\lambda\ne 0$, the modules $W(\lambda,r)$ are inf\/inite-dimensional and satisfy $\wt W(\lambda,r)\subset\conv W\lambda$ and $W(\lambda,r)[s]\ne 0$ if\/f $s\ge r$, from which we see that $ W(\lambda,r)\notin \Ob \mathcal G$. It follows that, if we set \begin{gather*} \nabla(\lambda,r)= W(-w_\circ\lambda,-r)^*, \end{gather*} then $\nabla(\lambda, r)\in\Ob {\mathcal G}_{\bdd}$ and $\soc\nabla(\lambda,r)\cong V(\lambda, r)$. We note that $\Delta(\lambda, r)$ (resp. $\nabla(\lambda, r)$) is the maximal quotient of $P(\lambda, r)$ (resp.
submodule of $I(\lambda, r)$) satisfying \begin{gather*} [\Delta(\lambda,r):V(\mu,s)] \ne 0\implies (\mu,s)\le (\lambda,r), \\ \big(\text{resp.} \quad [\nabla(\lambda,r):V(\mu,s)] \ne 0\implies (\mu,s)\le (\lambda,r) \big). \end{gather*} Hence these are the standard (resp. costandard) modules in $\mathcal G$ associated to $(\lambda, r)$. \subsection{Truncated subcategories}\label{section2.5} In this section, we recall the def\/inition of certain Serre subcategories of $\widehat{\mathcal G}$. Given $\Gamma \subset \Lambda$, let $\widehat{\mathcal G}(\Gamma)$ be the full subcategory of $\widehat{\mathcal G}$ consisting of all $M$ such that \begin{gather*} M \in \Ob\widehat{\mathcal G}, \qquad [M:V(\lambda,r)] \ne 0 \implies (\lambda,r)\in \Gamma. \end{gather*} The subcategories $\mathcal{G}(\Gamma)$ and $\mathcal G_{\bdd}(\Gamma)$ are def\/ined in the obvious way. Observe that if $(\lambda,r)\in\Gamma$, then $V(\lambda,r) \in \widehat{\mathcal G}(\Gamma)$, and we have the following trivial result. \begin{Lemma} The isomorphism classes of simple objects of $\widehat{\mathcal G}(\Gamma)$ are indexed by $\Gamma$. \end{Lemma} \begin{Remark} Let $\mathcal C$ be one of the categories $\mathcal G_{\le s}$, $\mathcal G$, $\mathcal G_{\bdd}$, $\mathcal G(\Gamma)$, $\mathcal G_{\bdd}(\Gamma)$, which are full sub\-ca\-te\-gories of $\widehat{\mathcal G}$. Then, we have $\Ext^1_{\widehat{\mathcal G}}(M,N) = \Ext^1_{\mathcal C}(M,N)$ for all $M,N\in \mathcal C$. \end{Remark} \subsection{A specif\/ic truncation}\label{section2.6} We now focus on $\Gamma$ of the form $\Gamma = P^+ \times J$, where $J$ is an interval in $\mathbb Z$ with one of the forms $(-\infty,n]$, $[m,n]$, $[m,\infty)$ or $\mathbb Z$, where $n,m \in \mathbb Z$. We set $a=\inf J$ and $b=\sup J$. Throughout this section, we assume that $(\lambda, r)\in \Gamma$. 
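Before proceeding, we record one degenerate but instructive case, which is an immediate check from the def\/initions: if $J=[n,n]$ for some $n\in\mathbb Z$, then every $M\in\Ob\widehat{\mathcal G}(\Gamma)$ has all of its composition factors of the form $V(\lambda,n)$, and hence \begin{gather*} M=M[n]. \end{gather*} By the proposition of Section~\ref{section2.1}, $M$ is then semi-simple, so in this case $\widehat{\mathcal G}(\Gamma)$ is a~semi-simple category.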
Let $P(\lambda,r)(\Gamma)$ be the maximal quotient of $P(\lambda,r)$ which is an object of $\mathcal G(\Gamma)$ and let $I(\lambda,r)(\Gamma)$ be the maximal submodule of $I(\lambda,r)$ which is an object of $\mathcal G(\Gamma)$. These are the indecomposable projective and injective modules associated to the simple module $V(\lambda,r)\in \mathcal G(\Gamma)$. For an object $M\in \mathcal G$, let $M^\Gamma$ be the subquotient \begin{gather*} M^\Gamma = \frac{M_{\ge a}}{M_{>b}}, \end{gather*} where $M_{\ge a}= \bigoplus_{r\ge a} M[r]$, and we understand $M_{\ge a}=M$ if $a=-\infty$ and $M_{>b}=0$ if $b=\infty$. \begin{Remark}\quad \begin{enumerate}\itemsep=0pt \item If $M=\bigoplus_{s\ge p} M[s]$ for some $p \ge a$, then $M^\Gamma = \frac{M}{M_{>b}}$. \item If $M=\bigoplus_{s<p} M[s]$ for some $p \le b$, then $M^\Gamma = M_{\ge a}$. \end{enumerate} \end{Remark} Clearly $M^\Gamma \in \mathcal G(\Gamma)$, and because morphisms are graded, this assignment def\/ines a~functor from $\mathcal G$ to $\mathcal G(\Gamma)$. It follows from Lemma~\ref{functor} that this truncation functor $M\mapsto M^\Gamma$ is exact. If we def\/ine another subset $\Gamma'=P^+ \times (-J)$, where $-J=\{-s : s\in J\}$, then it follows from the def\/inition of the graded duality that if $M\in\Ob \mathcal G(\Gamma)$ then $M^*\in\Ob\mathcal G(\Gamma')$. \begin{Lemma} We have $P(\lambda, r)(\Gamma)=P(\lambda, r)^\Gamma$ and $I(\lambda, r)(\Gamma)=I(\lambda,r)^\Gamma$. \end{Lemma} We set \begin{gather*} \Delta(\lambda, r)(\Gamma):=\Delta(\lambda, r)^{\Gamma}, \qquad W(\lambda, r)(\Gamma):=W(\lambda, r)^\Gamma \qquad \text{and} \qquad \nabla(\lambda, r)(\Gamma):=\nabla(\lambda, r)^{\Gamma}. \end{gather*} In light of the above remark, we can see that \begin{gather*} \Delta(\lambda, r)(\Gamma)=\frac{\Delta(\lambda, r)}{\bigoplus_{s> b}\Delta(\lambda, r)[s]}, \qquad W(\lambda, r)(\Gamma) = \frac{W(\lambda, r)}{\bigoplus_{s> b}W(\lambda, r)[s]}\qquad \text{and} \\ \nabla(\lambda, r)(\Gamma)=\nabla(\lambda, r)_{\ge a}.
\end{gather*} Note that, with respect to the partial order $\le$, for each $(\lambda,r)\in \Gamma$ the module $\Delta(\lambda,r)(\Gamma)$ is the maximal quotient of $P(\lambda,r)(\Gamma)$ such that \begin{gather*} [\Delta(\lambda,r)(\Gamma):V(\mu,s)] \ne 0\implies (\mu,s)\le (\lambda,r). \end{gather*} Similarly, we see that $\nabla(\lambda,r)(\Gamma)$ is the maximal submodule of $I(\lambda,r)(\Gamma)$ satisfying \begin{gather*} [\nabla(\lambda,r)(\Gamma):V(\mu,s)] \ne 0\implies (\mu,s)\le (\lambda,r). \end{gather*} The modules $\Delta(\lambda,r)(\Gamma)$ and $\nabla(\lambda,r)(\Gamma)$ are called, respectively, the standard and costandard modules associated to $(\lambda,r)\in \Gamma$. The following proposition summarizes the properties of $\Delta(\lambda,r)(\Gamma)$ which are necessary for this paper. They can easily be derived from the properties of the truncation functor $M\mapsto M^\Gamma$. \begin{Proposition}\quad \begin{enumerate}\itemsep=0pt \item[$1.$] The module $\Delta(\lambda, r)(\Gamma)$ is generated as a~$\mathfrak g[t]$-module by an element $w_{\lambda,r}$ with relations: \begin{gather*} \mathfrak n^+[t]w_{\lambda,r}=0, \qquad (x_{i}^-)^{\lambda(h_{i})+1}w_{\lambda,r}=0, \\ (h\otimes t^s)w_{\lambda,r}= \delta_{s,0}\lambda(h)w_{\lambda,r}, \qquad \textbf U(\mathfrak g[t])[p]w_{\lambda,r}=0, \qquad \text{if} \quad p> b-r, \end{gather*} for all $i\in I$, $h\in\mathfrak h$ and $s\in\mathbb Z_+$, where, if $b=\infty$, the f\/inal relation is vacuous. \item[$2.$] The module $\Delta(\lambda,r)(\Gamma)$ is indecomposable and finite-dimensional and, hence, an object of $\mathcal G_{\bdd}(\Gamma)$. \item[$3.$] $\dim \Delta(\lambda,r)(\Gamma)_\lambda =\dim\Delta(\lambda,r)(\Gamma)[r]_\lambda=1$. \item[$4.$] $\wt\Delta(\lambda,r)(\Gamma)\subset\conv W\lambda$. \item[$5.$] The module $V(\lambda,r)$ is the unique irreducible quotient of $\Delta(\lambda,r)(\Gamma)$.
\item[$6.$] $\{\ch_{\gr}\Delta(\lambda,r)(\Gamma):(\lambda,r)\in \Gamma\}$ is a~linearly independent subset of $\mathbb Z[P][u,u^{-1}]$. \end{enumerate} \end{Proposition} \subsection{The truncated global Weyl modules}\label{section2.7} Here we collect the results on $W(\lambda, r)(\Gamma)$ which we will need for this paper. \begin{Proposition}\quad \begin{enumerate}\itemsep=0pt \item[$1.$] The module $W(\lambda, r)(\Gamma)$ is generated as a~$\mathfrak g[t]$-module by an element $w_{\lambda,r}$ with relations: \begin{gather*} \mathfrak n^+[t]w_{\lambda,r}=0, \qquad (x_{i}^-)^{\lambda(h_{i})+1}w_{\lambda,r}=0, \qquad h w_{\lambda,r}=\lambda(h)w_{\lambda,r}, \\ \textbf U(\mathfrak g[t])[p]w_{\lambda,r}=0, \qquad \text{if} \quad p> b-r, \end{gather*} where $i\in I$ and $h\in\mathfrak h$ and, if $b=\infty$, the f\/inal relation is vacuous. \item[$2.$] The module $W(\lambda,r)(\Gamma)$ is indecomposable and an object of $\widehat{\mathcal G}(\Gamma)$. \item[$3.$] $\dim W(\lambda,r)(\Gamma)[r]_\lambda=1$ and $\dim W(\lambda, r)(\Gamma)[s]_\lambda\ne 0$ if and only if $r\le s \le b$. \item[$4.$] $\wt W(\lambda,r)(\Gamma)\subset\conv W\lambda$. \item[$5.$] The module $V(\lambda,r)$ is the unique irreducible quotient of $W(\lambda,r)(\Gamma)$. \item[$6.$] $\{ \ch_{\gr}W(\lambda,r)(\Gamma):(\lambda,r)\in \Gamma \}$ is a~linearly independent subset of $\mathbb Z[P][u,u^{-1}]$. \end{enumerate} \end{Proposition} \subsection{The costandard modules}\label{section2.8} The following proposition summarizes the main results on $\nabla(\lambda,r)(\Gamma)$ that are needed for this paper. All but the f\/inal result can be obtained by combining the properties of the truncation functor with the results of~\cite{bc}. \begin{Proposition}\quad \begin{enumerate}\itemsep=0pt \item[$1.$] The module $\nabla(\lambda, r)(\Gamma)$ is an indecomposable object of $\mathcal G_{\bdd}(\Gamma)$.
\item[$2.$] $\dim\nabla(\lambda,r)(\Gamma)[r]_\lambda=1$ and $\dim\nabla(\lambda,r)(\Gamma)[s]_\lambda\ne 0 \Leftrightarrow a\le s\le r$. \item[$3.$] $\wt\nabla(\lambda,r)(\Gamma)\subset\conv W\lambda$. \item[$4.$] Any submodule of $\nabla(\lambda,r)(\Gamma)$ contains $\nabla(\lambda,r)(\Gamma)[r]_\lambda$ and the socle of $\nabla(\lambda,r)(\Gamma)$ is the simple module $V(\lambda,r)$. \item[$5.$] $\{\ch_{\gr}\nabla(\lambda,r)(\Gamma):(\lambda,r)\in \Gamma \}$ is a~linearly independent subset of $\mathbb Z[P][u,u^{-1}]$. \item[$6.$] Let $\Gamma' = P^+ \times(-J)$ and $(\lambda,r)\in \Gamma$. Then $\nabla(\lambda, r)(\Gamma)\cong W(-w_\circ \lambda, -r)(\Gamma')^*$. \end{enumerate} \end{Proposition} \begin{proof} We prove the f\/inal item. As a~vector space we have \begin{gather*} W(-w_\circ \lambda, -r)(\Gamma ')\cong \bigoplus_{s=-r}^{-a}W(-w_\circ \lambda, -r)[s]. \end{gather*} Since $W(-w_\circ \lambda, -r)(\Gamma ')$ is a~quotient of $W(-w_\circ \lambda, -r)$, its dual must be a~submodule of $\nabla(\lambda, r)$. By the def\/inition of the graded dual, we see that, as a~vector space, \begin{gather*} W(-w_\circ \lambda, -r)(\Gamma ')^* \cong \bigoplus_{s=a}^{r}\nabla(\lambda, r)[s]. \end{gather*} Hence, as vector spaces, we see that $\nabla(\lambda, r)(\Gamma)\cong W(-w_\circ \lambda, -r)(\Gamma')^*$. Now, the fact that $ W(-w_\circ \lambda, -r)(\Gamma')^*$ is a~submodule completes the proof. \end{proof} \section{The main theorem and some homological results}\label{section3} \begin{Definition} We say that $M\in\Ob\mathcal G(\Gamma)$ admits a~$\Delta(\Gamma)$ (resp.
$\nabla(\Gamma)$)-f\/iltration if there exists an increasing family of submodules $0\subset M_0\subset M_1\subset \cdots$ with $M=\bigcup_k M_k$, such that \begin{gather*} M_k/M_{k-1}\cong \bigoplus_{(\lambda,r)\in \Gamma}\Delta(\lambda,r)(\Gamma)^{m_k(\lambda,r)} \qquad \left(\text{resp.}\ \ M_k/M_{k-1}\cong \bigoplus_{(\lambda,r)\in \Gamma}\nabla(\lambda,r)(\Gamma)^{m_k(\lambda,r)}\right) \end{gather*} for some choice of $m_k(\lambda,r)\in \mathbb Z_+$. We do not require $\sum\limits_{(\lambda, r)}m_k(\lambda, r)<\infty$. If $M_k=M$ for some $k\ge 0$, then we say that $M$ admits a~f\/inite $\Delta(\Gamma)$ (resp.\ $\nabla(\Gamma)$)-f\/iltration. Because our modules have f\/inite-dimensional graded components, the multiplicity of a~f\/ixed $\Delta(\lambda, r)(\Gamma)$ (resp.\ $\nabla(\lambda, r)(\Gamma))$ in a~$\Delta(\Gamma)$-f\/iltration (resp.\ $\nabla(\Gamma)$-f\/iltration) must be f\/inite, and we denote this multiplicity by $[M:\Delta(\lambda, r)(\Gamma)]$ (resp. $[M:\nabla(\lambda, r)(\Gamma)]$). Finally, we say that $M\in\Ob\mathcal G(\Gamma)$ is tilting if $M$ has both a~$\Delta(\Gamma)$- and a~$\nabla(\Gamma)$-f\/iltration. \end{Definition} The main goal of this paper is to understand tilting modules in $\mathcal G_{\bdd}(\Gamma)$. (The case where $J=\mathbb Z$ was studied in~\cite{bc}.) In the case of algebraic groups (see~\cite{Donkin,Mathieu}) a~crucial step is a~cohomological characterization of the modules admitting a~$\nabla(\Gamma)$-filtration. The analogous result in our situation is the following statement: \textit{An object $M$ of $\mathcal G_{\bdd}(\Gamma)$ admits a~$\nabla(\Gamma)$-filtration if and only if $\Ext^1_{{\widehat{\mathcal G}}}(\Delta(\lambda,r)(\Gamma), M)=0$ for all $(\lambda,r)\in \Gamma$.} It is not hard to see that the forward implication is true.
The converse statement, however, requires one to prove that any object of $\mathcal G_{\bdd}(\Gamma)$ can be embedded in a~module which admits a~$\nabla(\Gamma)$-f\/iltration. This in turn requires Theorem~\ref{bgg}. Summarizing, the f\/irst main result that we shall prove in this paper is: \begin{Proposition} \label{extnablaconn} Let $M\in\Ob\mathcal G_{\bdd}(\Gamma)$. Then the following are equivalent:{\samepage \begin{enumerate}\itemsep=0pt \item[$1.$] The module $M$ admits a~$\nabla(\Gamma)$-filtration. \item[$2.$] $M$ satisfies $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r)(\Gamma), M)=0$ for all $(\lambda,r)\in \Gamma$. \end{enumerate}} \end{Proposition} The second main result that we shall prove in this paper is the following: \begin{Theorem}\quad \begin{enumerate}\itemsep=0pt \item[$1.$] Given $(\lambda, r)\in\Gamma$, there exists an indecomposable module $T(\lambda,r)(\Gamma)\in\Ob\mathcal G_{\bdd}(\Gamma)$ which admits a~$\Delta(\Gamma)$-filtration and a~$\nabla(\Gamma)$-filtration. Further, \begin{gather*} T(\lambda,r)(\Gamma)[s]_\lambda=0 \qquad \text{if} \quad s>r, \qquad \dim T(\lambda,r)(\Gamma)[r]_\lambda=1, \qquad \wt T(\lambda,r)(\Gamma)\subset\conv W\lambda, \end{gather*} and $T(\lambda,r)(\Gamma)\cong T(\mu,s)(\Gamma)$ if and only if $(\lambda,r)=(\mu,s)$. \item[$2.$] Moreover, any indecomposable tilting module in $\mathcal G_{\bdd}(\Gamma)$ is isomorphic to $T(\lambda,r)(\Gamma)$ for some $(\lambda,r)\in \Gamma$, and any tilting module in $\mathcal G_{\bdd}(\Gamma)$ is isomorphic to a~direct sum of indecomposable tilting modules. \end{enumerate} \end{Theorem} \section{Proof of Proposition~\ref{extnablaconn}}\label{section4} \subsection{Initial homological results}\label{section4.1} We begin by proving the implication (1) $\implies$ (2) from Proposition~\ref{extnablaconn}. In order to do this, we f\/irst establish some homological properties of the standard and costandard modules which will be used throughout the paper.
\begin{Proposition} \label{homresults} Let $\lambda, \mu \in P^+$. Then we have the following: \begin{enumerate}\itemsep=0pt \item[$1.$] $\Ext^1_{\widehat{\mathcal G}}(W(\lambda,r)(\Gamma), W(\mu,s)(\Gamma))=0=\Ext^1_{\widehat{\mathcal G}}(\nabla(\mu,s)(\Gamma), \nabla(\lambda,r)(\Gamma)) $ for all $s,r\in \mathbb Z$ if $\lambda \not < \mu$. \item[$2.$] $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda, r)(\Gamma), \nabla(\mu,s)(\Gamma))=0$. \item[$3.$] If $\lambda \not \le \mu$ then $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda, r)(\Gamma), \Delta(\mu,s)(\Gamma))=0$ for all $s,r\in \mathbb Z$. \item[$4.$] If $s\ge r$, then $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda, s)(\Gamma), \Delta(\lambda,r)(\Gamma))=0$. \end{enumerate} \end{Proposition} \begin{proof} For part (1), suppose that we have a~short exact sequence $0\rightarrow W(\mu,s)(\Gamma) \rightarrow X \rightarrow W(\lambda,r)(\Gamma) \rightarrow 0$, and let $x\in X_\lambda[r]$ be a~pre-image of $w_{\lambda,r}$. It is clear from the hypothesis on $\mu$ that $\mathfrak n^+[t].x=0$. If $b< \infty$ and $p>b-r$, then $\textbf U(\mathfrak g[t])[p].x=0$ by grade considerations. Note that, since $\dim X[r]<\infty$, we have $\dim \textbf U(\mathfrak g)x<\infty$. It follows from the f\/inite-dimensional representation theory that $(x_i^-)^{\lambda(h_i)+1}x=0$, for all $i\in I$, and so the sequence splits. The proof for $\nabla(\Gamma)$ is similar and omitted. For part (2), suppose we have a~short exact sequence $0\rightarrow \nabla(\mu,s)(\Gamma) \rightarrow X \rightarrow \Delta(\lambda,r)(\Gamma) \rightarrow 0$, and f\/irst assume $\mu\not\ge \lambda$. Then $\dim X_\lambda[r]=1$, and if $x\in X_\lambda[r]$ is a~pre-image of $w_{\lambda,r}$ then $x$ satisf\/ies the def\/ining relations of $w_{\lambda,r}$ and the sequence splits. If $\mu \ge \lambda$, then by taking duals we get a~sequence $0\rightarrow \Delta(\lambda,r)(\Gamma)^*\rightarrow Y \rightarrow W(-w_\circ\mu,-s)(\Gamma')\rightarrow 0$.
Again, if $y\in Y_{-w_\circ \mu}[-s]$ is a~pre-image of $w_{-w_\circ \mu, -s}$, we see that $y$ satisf\/ies the def\/ining relations of $w_{-w_\circ \mu,-s}\in W(-w_\circ\mu,-s)(\Gamma')$, and the sequence splits. The proofs for parts (3) and (4) are similar to that for part (1) and are omitted. \end{proof} The proof of the following lemma is standard (see, for example,~\cite{bc}). \begin{Lemma} \label{reorder} Suppose that $M\in \Ob \mathcal G_{\bdd}(\Gamma)$ admits a~$($possibly infinite$)$ $\nabla(\Gamma)$-filtration. Then~$M$ admits a~finite $\nabla(\Gamma)$-filtration \begin{gather*} 0\subset M_1\subset M_2 \subset\dots\subset M_k=M \qquad \text{with} \quad M_s / M_{s-1} \cong \bigoplus_{r\in \mathbb Z} \nabla(\lambda_s, r)(\Gamma)^{[M:\nabla(\lambda_s,r)(\Gamma)]}, \end{gather*} where $\lambda_i>\lambda_j$ implies $i>j$. In particular, if $\mu$ is maximal such that $M_\mu \ne 0$, then there exists $s\in\mathbb Z$ and a~surjective map $M\rightarrow \nabla(\mu,s)(\Gamma)$ whose kernel has a~$\nabla(\Gamma)$-filtration. \end{Lemma} We can now prove the implication (1) $\implies$ (2) from Proposition~\ref{extnablaconn}. \begin{Corollary} If $M\in \Ob\mathcal G_{\bdd}(\Gamma)$ admits a~$\nabla(\Gamma)$-filtration, then $\Ext^1_{\mathcal G(\Gamma)}(\Delta(\lambda,r)(\Gamma), M)=0$, for all $(\lambda,r)\in \Gamma$. \end{Corollary} \begin{proof} Let $0\subset M_1 \subset M_2 \subset\dots\subset M_k=M$ be a~f\/inite $\nabla(\Gamma)$-f\/iltration as in Lemma~\ref{reorder}.
Then, $M_s / M_{s-1}$ is a~(possibly inf\/inite) direct sum of modules $\nabla(\lambda_s, p)(\Gamma)$, $p\in\mathbb Z$, and \begin{gather*} \Ext^1_{\mathcal G(\Gamma)}(\Delta(\lambda, r)(\Gamma), M_s / M_{s-1}) \cong \Ext^1_{\mathcal G(\Gamma)}\left(\Delta(\lambda, r)(\Gamma), \bigoplus_{p\in \mathbb Z} \nabla(\lambda_s,p)(\Gamma)^{m_s(\lambda_s,p)}\right) \\ \hphantom{\Ext^1_{\mathcal G(\Gamma)}(\Delta(\lambda, r)(\Gamma), M_s / M_{s-1})}{} \cong \prod_{p\in\mathbb Z} \Ext^1_{\mathcal G(\Gamma)}(\Delta(\lambda, r)(\Gamma), \nabla(\lambda_s, p)(\Gamma)) =0, \end{gather*} by Proposition~\ref{homresults} (2). The result follows by induction on $k$, the length of the f\/iltration. \end{proof} \subsection{Towards understanding extensions between the standard\\ and costandard modules}\label{section4.2} \begin{Proposition} Suppose that $N\in\Ob\mathcal G(\Gamma)$ is such that $\Ext^1_{\mathcal G(\Gamma)}(\Delta(\lambda,r)(\Gamma), N)=0$ for all $(\lambda,r)\in \Gamma$. If $M\in\Ob\mathcal G(\Gamma)$ has a~$\Delta(\Gamma)$-filtration then $\Ext^1_{\mathcal G(\Gamma)}(M, N)=0$. \end{Proposition} \begin{proof} Consider a~short exact sequence $0\to N\to U\to M\to 0$. Suppose that $M_k\subset M_{k+1}$ is a~part of the $\Delta(\Gamma)$-f\/iltration of $M$ and assume that \begin{gather*} M_{k+1}/M_k\cong\bigoplus_{(\mu,s)\in\Gamma}\Delta(\mu,s)(\Gamma)^{m_k(\mu,s)}. \end{gather*} By assumption we have $\Ext^1_{\mathcal G(\Gamma)}(M_{k+1}/M_k, N)=0$. Let $U_k\subset U$ be the pre-image of $M_k$, which contains $N$ because $0\in M_k$. Note that $U_{k+1}/U_k\cong M_{k+1}/M_k$. Now, consider the short exact sequence $0\to N\to U_k\to M_k\to 0$. This sequence def\/ines an element of $\Ext^1_{\mathcal G(\Gamma)}(M_k, N)$. Since~$M_k$ has a~f\/inite $\Delta(\Gamma)$-f\/iltration it follows that $\Ext^1_{\mathcal G(\Gamma)}(M_k, N)=0$. Hence the sequence splits and we have a~retraction $\varphi_k: U_k\to N$. We want to prove that $\varphi_{k+1}:U_{k+1}\to N$ can be chosen to extend $\varphi_k$.
For this, applying $\Hom_{\mathcal G(\Gamma)}(-,N)$ to $0\to U_k\to U_{k+1}\to U_{k+1}/U_k\to 0$, we get the exact sequence $\Hom_{\mathcal G(\Gamma)}(U_{k+1}, N)\to \Hom_{\mathcal G(\Gamma)}(U_k, N)\to 0$, which shows that we can choose $\varphi_{k+1}$ to extend~$\varphi_k$. Now def\/ining $\varphi: U\to N$ by $\varphi(u)=\varphi_k(u)$, for all $u\in U_k$, we have the desired splitting of the original short exact sequence. \end{proof} Combining Proposition~\ref{homresults} with the proposition above applied to $N=\nabla(\lambda,r)(\Gamma)$, we now have: \begin{Corollary} \label{extMnabla} Suppose $M\!\in\!\Ob\mathcal G(\Gamma)$ admits a~$\Delta(\Gamma)$-filtration. Then, $\Ext^1_{\widehat{\mathcal G}}(M,\nabla(\lambda,r)(\Gamma))\!=\!0$, for all $(\lambda,r)\in\Gamma$. \end{Corollary} \subsection{A natural embedding}\label{section4.3} In this section we show that every $M\in \Ob \mathcal G_{\bdd}(\Gamma)$ embeds into an injective module $I(M)\in \Ob \mathcal G(\Gamma)$. Let $\soc M\subset M$ be the maximal semi-simple submodule of $M$. \begin{Lemma}\label{embeda} Let $M\in \Ob \mathcal G_{\bdd}(\Gamma)$.{\samepage \begin{enumerate}\itemsep=0pt \item[$1.$] If $M\ne 0$, then $\soc M \ne 0$. \item[$2.$] Suppose $\soc M=\bigoplus V(\lambda, r)^{m_{\lambda, r}}$. Then $M \hookrightarrow \bigoplus I(\lambda,r)(\Gamma)^{m_{\lambda,r}}$. \end{enumerate}} \end{Lemma} \begin{proof} For the f\/irst part, let $M\in \Ob\mathcal G_{\bdd}$, suppose $M\ne 0$, and let $s\in\mathbb Z$ be minimal such that $M\in \Ob \mathcal G_{\le s}$. Then $M[s]\ne 0$ and $M[s]\subset \soc M$. For the second, let $\soc M=\bigoplus_{(\lambda, r)\in \Lambda} V(\lambda, r)^{m_{\lambda, r}}$, from which we get a~natural injection $\soc M \stackrel{ \iota}\hookrightarrow \bigoplus I(\lambda,r)^{m_{\lambda,r}}$. By injectivity, we have a~morphism $ M \stackrel{f}\rightarrow \bigoplus I(\lambda,r)^{m_{\lambda,r}}$, through which~$\iota$ factors. In fact, $f$ is an injection: if not, then $\soc \ker f \ne 0$.
On the other hand, $\soc \ker f \subset \soc M$, and $f$ is injective on $\soc M$, a~contradiction. If we assume that $M\in \Ob \mathcal G_{\bdd}(\Gamma)$, then it is easy to conclude that $\operatorname{im} f \subset \bigoplus I(\lambda,r)(\Gamma)^{m_{\lambda,r}}$, completing the proof. \end{proof} \subsection{The o-canonical f\/iltration}\label{section4.4} In this section we shall establish a~f\/inite f\/iltration on modules $M\in \Ob \mathcal G_{\bdd}(\Gamma)$ whose successive quotients embed into direct sums of costandard modules. We then use the f\/iltration to establish lower and upper bounds on the graded character of $M$, and we use these character bounds to prove Proposition~\ref{extnablaconn}. Now f\/ix an ordering of $P^+=\{\lambda_0,\lambda_1,\dots\}$ such that $\lambda_r>\lambda_s$ implies that $r>s$. For $M\in\Ob \mathcal G(\Gamma)$ we def\/ine $M_s \subset M$ to be the maximal submodule whose weights lie in $\bigcup_{r\le s} \conv W\lambda_r$. Evidently $M_{s-1}\subset M_s$. We call this the o-canonical f\/iltration, because it depends on the order. This is a~f\/inite f\/iltration for $M\in\Ob \mathcal G_{\bdd}(\Gamma)$ and we set $k(M)$ to be minimal such that $M=M_{k(M)}$. Clearly \begin{gather} M_{s-1}\subset M_s, \qquad M=\bigcup_{s=0}^{k(M)}M_s, \qquad \text{and} \nonumber \\ \label{limit} \Hom_{\mathcal G}(V(\lambda, r), M_s/M_{s-1})\ne 0\implies \lambda=\lambda_s. \end{gather} It follows from Lemma~\ref{embeda} and~\eqref{limit} that the quotient $M_s/M_{s-1}$ embeds into a~module of the form $\bigoplus I(\lambda_s, r)(\Gamma)^{m_{s,r}}$, where $m_{s,r}=\dim \Hom_{\mathcal G}(V(\lambda_s,r), M_s/M_{s-1})$. Since the weights of $M_s / M_{s-1}$ are bounded above by $\lambda_s$, it embeds into the maximal submodule of this direct sum whose weights are bounded above by $\lambda_s$. Hence $M_s / M_{s-1}$ embeds into a~direct sum of modules of the form $\nabla(\lambda_s, r)(\Gamma)$, with $r\in J$.
This gives \begin{gather*} \ch_{\gr}M=\sum\limits_{s\ge 0}\ch_{\gr}M_{s}/M_{s-1}\le \sum\limits_{s\ge 0}\sum\limits_{r\in J}\dim\Hom_{\mathcal G}(V(\lambda_{s},r), M_s/M_{s-1})\ch_{\gr}\nabla(\lambda_{s},r)(\Gamma), \end{gather*} i.e., \begin{gather*} [M: V(\mu,\ell)]\le \sum\limits_{s\ge 0}\sum\limits_{r\in J}\dim\Hom_{\mathcal G}(V(\lambda_{s},r), M_s/M_{s-1})[\nabla(\lambda_{s},r)(\Gamma): V(\mu,\ell)], \end{gather*} for all $(\mu,\ell)\in\Lambda$. We claim that this is equivalent to \begin{gather*} \ch_{\gr} M = \sum \ch_{\gr} M_s / M_{s-1} \le \sum\limits_{s\ge 0}\sum\limits_{r\in J} \dim \Hom_{\mathcal G} (\Delta(\lambda_s,r)(\Gamma), M)\ch_{\gr} \nabla(\lambda_s,r)(\Gamma). \end{gather*} The claim follows from Lemma~\ref{aaa} below, and~\cite[\S~3.5]{bc}, which states that \begin{gather*} \Hom_{\mathcal G} (\Delta(\lambda_s,r),M) \cong \Hom_{\mathcal G} \left(V(\lambda_s,r), M_s / M_{s-1} \right). \end{gather*} \begin{Lemma} \label{aaa} Let $M\in \Ob\mathcal G(\Gamma)$ and $(\lambda,r)\in \Gamma$. We have \begin{gather*} \Hom_{\widehat{\mathcal G}}(\Delta(\lambda,r)(\Gamma),M) \cong \Hom_{\widehat{\mathcal G}}(\Delta(\lambda,r),M) \end{gather*} {and} \begin{gather*} \Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r)(\Gamma),M) \cong \Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r),M). \end{gather*} \end{Lemma} \begin{proof} As $M[s]=0$ for all $s>b$, we must have $\Hom_{\widehat{\mathcal G}}(\bigoplus_{s>b} \Delta(\lambda,r)[s],M)=0$. Similarly, if $0\to M \to X \to \bigoplus_{s>b} \Delta(\lambda,r)[s] \to 0$ is exact, then, using again that $M[s]=0$ for $s>b$, we have $\dim X[n] = \dim (\bigoplus_{s>b}\Delta(\lambda,r)[s])[n]$ for any $n>b$; since $X_{>b}=\bigoplus_{n>b}X[n]$ is a~submodule of $X$ mapping isomorphically onto $\bigoplus_{s>b}\Delta(\lambda,r)[s]$, we obtain an injective map $\iota: \bigoplus_{s>b}\Delta(\lambda,r)[s] \to X$ which splits the sequence. Thus, $\Ext^1_{\widehat{\mathcal G}}(\bigoplus_{s>b} \Delta(\lambda,r)[s],M)=0$. Now the other statements are easily deduced.
\end{proof} Therefore, we get \begin{gather} \label{lowbd} \ch_{\gr} M \le \sum\limits_{s\ge 0} \sum\limits_{r\in J} \dim \Hom_{\mathcal G(\Gamma)}(\Delta(\lambda_s,r)(\Gamma),M) \ch_{\gr} \nabla(\lambda_s,r)(\Gamma) \end{gather} and the equality holds if, and only if, the o-canonical f\/iltration is a~$\nabla(\Gamma)$-f\/iltration. \subsection{A homological characterization of costandard modules}\label{section4.5} Note that even if $N\notin \Ob\mathcal G_{\bdd}$, we can still def\/ine the submodules $N_s$, and by def\/inition $N_s\in \Ob\mathcal G_{\bdd}$. The following result, or more precisely a~dual statement about projective modules and global Weyl f\/iltrations, was proved for $\mathfrak g= \mathfrak{sl}_2$ in~\cite{bcm}, for $\mathfrak g =\mathfrak {sl}_{n+1}$ in~\cite{bbcklk} and for general $\mathfrak g$ in~\cite{ci}. In particular, we note that the argument in \cite[Section~5.5]{bbcklk} works in general. \begin{Theorem} \label{bgg} For all $(\lambda, r) \in \Lambda$ and for all $p\in \mathbb Z_+$ the o-canonical filtration on $I(\lambda,r)_p$ is a~$\nabla$-filtration. Moreover, for all $(\mu,s)$ we have $[I(\lambda,r)_p:\nabla(\mu,s)]=[\Delta(\mu,s):V(\lambda,r)]$. \end{Theorem} We combine this with equation~\eqref{lowbd} and the linear independence of the graded characters of the $\nabla(\lambda, r)(\Gamma)$ to see that $[I(\lambda,r)_p:\nabla(\mu,s)]=\dim \Hom_{\mathcal G(\Gamma)}(\Delta(\mu,s)(\Gamma),I(\lambda, r)_p)$. As a~consequence of the theorem and the exactness of the functor $\Gamma$, we conclude that $I(\lambda,r)_p(\Gamma)$ has a~$\nabla(\Gamma)$-f\/iltration. It is easy to see that $I(\lambda,r)_p(\Gamma)\in \mathcal G_{\bdd}(\Gamma)$. For $M\in \mathcal G_{\bdd}(\Gamma)$, let $p$ be minimal such that $M_p=M$. Then it is clear that we can ref\/ine the embedding from Lemma~\ref{embeda} to $M \hookrightarrow \bigoplus I(\lambda,r)_p(\Gamma)$.
We can conclude the following: \begin{Corollary} \label{embed} For all $M\in \Ob\mathcal G_{\bdd}(\Gamma)$, we have $M\subset I(M) \in \Ob\mathcal G_{\bdd}(\Gamma)$, where the o-canonical filtration of $I(M)$ is a~$\nabla(\Gamma)$-filtration. \end{Corollary} We now complete the proof of Proposition~\ref{extnablaconn} following the argument in~\cite{bc}. Let $M \in \mathcal G_{\bdd}(\Gamma)$ and assume that $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r)(\Gamma),M)=0$ for all $(\lambda,r)\in\Gamma$. Let $I(M)\in \mathcal G_{\bdd}(\Gamma)$ be as in Corollary~\ref{embed}, and consider the short exact sequence $0 \to M \to I(M) \to Q \to 0$, where $Q \in \Ob \mathcal G_{\bdd} (\Gamma)$. The assumption on the module $M$ implies that, if we apply the functor $\Hom_{\mathcal G(\Gamma)}(\Delta(\lambda,r)(\Gamma), -)$, we get the short exact sequence \begin{gather} 0 \rightarrow \Hom_{\mathcal G(\Gamma)}(\Delta(\lambda,r)(\Gamma), M) \rightarrow \Hom_{\mathcal G(\Gamma)}(\Delta(\lambda,r)(\Gamma), I(M)) \nonumber\\ \hphantom{0}{} \rightarrow \Hom_{\mathcal G(\Gamma)}(\Delta(\lambda,r)(\Gamma), Q) \rightarrow 0.\label{equal1} \end{gather} Since the o-canonical f\/iltration of $I(M)$ is a~$\nabla(\Gamma)$-f\/iltration, we can conclude that~\eqref{lowbd} is an equality for $I(M)$. We get that \begin{gather*} \ch_{\gr}M = \ch_{\gr} I(M) - \ch_{\gr} Q \ge \sum\limits_{\substack{r\in J\\s\ge 0}} (\dim \Hom_{\mathcal G(\Gamma)}(\Delta(\lambda_s,r)(\Gamma),I(M))\\ \phantom{\ch_{\gr}M= } - \dim \Hom_{\mathcal G(\Gamma)}(\Delta(\lambda_s,r)(\Gamma),Q)) \ch_{\gr}\nabla(\lambda_s,r)(\Gamma) \\ \phantom{\ch_{\gr}M } = \sum\limits_{\substack{r\in J\\s\ge 0} } \dim \Hom_{\mathcal G(\Gamma)}(\Delta(\lambda_s, r)(\Gamma),M) \ch_{\gr}\nabla(\lambda_s,r)(\Gamma), \end{gather*} where the f\/inal equality is from the exactness of~\eqref{equal1}. We now get that the character bound in~\eqref{lowbd} is an equality for $M$, and, hence, that $M$ has a~$\nabla(\Gamma)$-f\/iltration. Finally, we can prove the following.
\begin{Proposition}\label{26} The following are equivalent for a~module $M\in \Ob \mathcal G_{\bdd}(\Gamma)$: \begin{enumerate}\itemsep=0pt \item[$1.$] For all $(\lambda, r)\in \Gamma$, we have $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r)(\Gamma),M)=0$. \item[$2.$] $M$ admits a~$\nabla(\Gamma)$-filtration. \item[$3.$] The o-canonical filtration on $M$ is a~$\nabla(\Gamma)$-filtration. \end{enumerate} \end{Proposition} \begin{proof} The equivalence of (1) and (2) is precisely the statement of Proposition~\ref{extnablaconn}. Clearly $(3)$ implies $(2)$, so it is enough to show that~$(1)$ implies~$(3)$. Assuming $(1)$, we have shown that the character bound in~\eqref{lowbd} is an equality, which is true if and only if the o-canonical f\/iltration is a~$\nabla(\Gamma)$-f\/iltration. \end{proof} \subsection{Extensions between standard modules} \label{section4.6} Our f\/inal result before constructing the tilting modules $T(\lambda, r)(\Gamma)$ shows that the space of extensions between standard modules is always f\/inite-dimensional. The proof is analogous to the proof in~\cite{bc}. \begin{Proposition} \label{fdext} For all $(\lambda, r),(\mu,s) \in \Gamma$ we have $\dim \Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda, r)(\Gamma), \Delta(\mu,s)(\Gamma))< \infty$. \end{Proposition} \begin{proof} Consider the short exact sequence $0\rightarrow K \rightarrow P(\lambda, r)\rightarrow \Delta(\lambda, r)(\Gamma) \rightarrow 0 $ and apply the functor $\Hom_{\widehat{\mathcal G}}(-,\Delta(\mu,s)(\Gamma))$. Since $P(\lambda, r)$ is projective, we see that the result follows if $\dim \Hom_{\widehat{\mathcal G}}(K, \Delta(\mu,s)(\Gamma))< \infty$. Let $\ell$ be such that $\Delta(\mu,s)(\Gamma)[p]=0$ for all $p>\ell$. Then $\Hom_{\widehat{\mathcal G}}(K_{>\ell}, \Delta(\mu,s)(\Gamma))=0$, and, hence, we have an injection $\Hom_{\widehat{\mathcal G}}(K, \Delta(\mu,s)(\Gamma)) \hookrightarrow \Hom_{\widehat{\mathcal G}}(\frac{K}{K_{>\ell}}, \Delta(\mu,s)(\Gamma))$.
The proposition follows because $\frac{K}{K_{>\ell}}$ is f\/inite-dimensional. \end{proof} \section{Construction of tilting modules}\label{section5} \subsection{Def\/ining a~subset which can be appropriately enumerated} \label{section5.1} In this section we construct a~family of indecomposable modules in the category $\mathcal G_{\bdd}(\Gamma)$, denoted by $\{T(\lambda,r)(\Gamma):(\lambda,r)\in\Gamma\}$, each of which admits a~$\Delta(\Gamma)$-f\/iltration and satisf\/ies \begin{gather*} \Ext^1_{\widehat{\mathcal G}}(\Delta(\mu,s)(\Gamma), T(\lambda,r)(\Gamma))=0, \qquad (\mu,s)\in\Gamma. \end{gather*} It follows that the modules $T(\lambda,r)(\Gamma)$ are tilting and we prove that any tilting module in $\mathcal G_{\bdd}(\Gamma)$ is a~direct sum of copies of $T(\lambda,r)(\Gamma)$, $(\lambda,r)\in\Gamma$. The construction is a~generalization of the one from~\cite{bc}, and the ideas are similar to the ones given in~\cite{Mathieu}. One of the f\/irst dif\/f\/iculties we encounter when trying to construct $T(\lambda,r)(\Gamma)$, using the algorithm given in~\cite{Mathieu}, is to f\/ind a~suitable subset (depending on $(\lambda,r)$) of $\Gamma$ which can be appropriately enumerated. Hence we assume the following result, whose proof we postpone to Section~\ref{section5.6}. \begin{Proposition}\label{subset} Fix $(\lambda, r)\in \Gamma$ and assume that under the enumeration we have $\lambda=\lambda_k$. Then there exists a~subset $\mathcal S(\lambda,r) \subset \Gamma$ such that \begin{enumerate}\itemsep=0pt \item[$1)$] $(\lambda, r)\in \mathcal S(\lambda, r)$; \item[$2)$] there exists $r_i$ for each $i\le k$ such that $r_i\ge r$, $r_k=r$, and \begin{gather*} \mathcal S(\lambda, r) = \{ (\lambda_i,s) \,|\, i\le k, s\le r_i \}; \end{gather*} \item[$3)$] $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu',s')(\Gamma), \Delta(\mu,s)(\Gamma))=0$ for all $(\mu,s) \in \mathcal S(\lambda,r)$ and $(\mu',s')\notin \mathcal S(\lambda,r)$. 
\end{enumerate} Furthermore, there exists an injection $\eta: \mathcal S(\lambda,r) \to \mathbb Z_{\ge 0}$ such that for $(\mu_i,p_i) = \eta^{-1}(i)$ we have $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu_i,p_i)(\Gamma),\Delta(\mu_j,p_j)(\Gamma))=0$ if $i< j$, and $\Delta(\mu_j,p_j)(\Gamma)_{\mu_i}[p_i]=0$ for $i<j$. \end{Proposition} Without loss of generality we may assume that $\eta(\lambda, r)=0$ and the image of $\eta$ is an interval. We need the following elementary result. \begin{Lemma}\label{elem} Suppose that $M,N\in\operatorname{Ob}\widehat{\mathcal G}$ are such that $0<\dim\operatorname{Ext}^1_{\widehat{\mathcal G}}(M,N)<\infty$ and $\operatorname{Ext}^1_{\widehat{\mathcal G}}(M,M)=0$. Then there exist $U\in\operatorname{Ob}\widehat{\mathcal G}$, $d\in\mathbb Z_+$ and a non-split exact sequence $0\to N\to U\to M^{d}\to 0$ so that $\operatorname{Ext}^1_{\widehat{\mathcal G}}(M,U)=0$. \end{Lemma} \begin{proof} The proof is by induction on $\dim\Ext^1_{\widehat{\mathcal G}}(M,N)$. The base case is obvious. For the inductive step, choose any non-split sequence $0\to N\to U'\to M\to 0$. Applying the functor $\Hom_{\widehat{\mathcal G}}(M,-)$ to this sequence gives the exact sequence \begin{gather*} \Hom_{\widehat{\mathcal G}}(M,M)\stackrel{\partial}{\rightarrow} \Ext^1_{\widehat{\mathcal G}}(M,N)\rightarrow \Ext^1_{\widehat{\mathcal G}}(M,U')\rightarrow \Ext^1_{\widehat{\mathcal G}}(M,M)=0, \end{gather*} and, since the sequence is non-split, the image $\partial(1_M)$ of $1_M\in \Hom_{\widehat{\mathcal G}}(M,M)$ is a~nonzero element of the kernel of the surjection $\Ext^1_{\widehat{\mathcal G}}(M,N) \rightarrow \Ext^1_{\widehat{\mathcal G}}(M,U')$. It follows that $\dim\Ext^1_{\widehat{\mathcal G}}(M,U') < \dim\Ext^1_{\widehat{\mathcal G}}(M,N)$. By the induction hypothesis, we now have a~module $U$ and a~non-split sequence $0 \to U' \to U \to M^{d-1} \to 0$. Now, considering the sequence $0 \to U'/N \to U/N \to M^{d-1} \to 0$, and again using that $\Ext^1_{\widehat{\mathcal G}}(M,M)=0$, we get a~non-split sequence $0 \to N \to U \to M^{d} \to 0$. \end{proof} \subsection{Constructing tilting modules}\label{section5.2} We now use $\eta$ to construct an inf\/inite family of f\/inite-dimensional modules $M_i$, whose direct limit will be $T(\lambda, r)(\Gamma)$.
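Schematically, the construction produces a~chain of embeddings of f\/inite-dimensional modules, starting from the standard module $\Delta(\mu_0,p_0)(\Gamma)$ and with successive quotients that are direct sums of standard modules indexed by $\mathcal S(\lambda,r)$ (the notation $\iota_s$ and $d_s$ is as in the proposition at the end of this subsection):
\begin{gather*}
\Delta(\mu_0,p_0)(\Gamma)=M_0 \stackrel{\iota_0}{\hookrightarrow} M_1 \stackrel{\iota_1}{\hookrightarrow} M_2 \stackrel{\iota_2}{\hookrightarrow} \cdots, \qquad M_{s}/\iota_{s-1}(M_{s-1})\cong\Delta(\mu_s,p_s)(\Gamma)^{d_s},
\end{gather*}
and $T(\lambda, r)(\Gamma)$ is the direct limit of this chain.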
We note that the construction, at this point, will seem to be dependent on the ordering of $P^+$ we have chosen, and on the set $\mathcal S(\lambda, r)$ and $\eta$. We prove independence at the end of this section. Set $M_0=\Delta(\mu_0, p_0)(\Gamma)$. If $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu_1,p_1)(\Gamma), \Delta(\mu_0,p_0)(\Gamma))=0$, then set $M_1=M_0$. If not, then since $\dim(\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu_1,p_1)(\Gamma), \Delta(\mu_0,p_0)(\Gamma)))<\infty$ by Proposition~\ref{fdext}, Lemma~\ref{elem} gives us an object $ M_1' \in \Ob \widehat{\mathcal G}(\Gamma)$ and a~non-split short exact sequence \begin{gather*} 0\to M_0\to M_1' \to {\Delta(\mu_1,p_1)(\Gamma)^{d_1'}} \to 0 \end{gather*} with $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu_1,p_1)(\Gamma),M_1')=0$. Let $M_1\subseteq M_1'$ be an indecomposable summand containing $(M_1')_{\mu_0}[p_0]$. By Proposition~\ref{subset} we see that $(M_1')_{\mu_0}[p_0]=(M_0)_{\mu_0}[p_0]$. Then we have $M_0 \stackrel{\iota_0}{\hookrightarrow} M_1$. Now, suppose that $M_1 \ne M_1'$. Then, since $M_1'$ is generated by $(M_1')_{\mu_0}[p_0]$ and $(M_1')_{\mu_1}[p_1]$, we must have $(M_1)_{\mu_1}[p_1] \ne (M_1')_{\mu_1}[p_1]$. We must have $\dim (M_1')_{\mu_1}[p_1] - \dim (M_1)_{\mu_1}[p_1]$ linearly independent vectors in $ (M_1')_{\mu_1}[p_1]$ which do not have a~pre-image in $M_0$, and each one must then generate a~copy of $\Delta(\mu_1,p_1)(\Gamma)$. So, $M_1'=M_1 \oplus \Delta(\mu_1,p_1)(\Gamma)^{ d}$ for some $d$ and we obtain the sequence \begin{gather} \label{eq422} 0\to M_0\to M_1 \to {\Delta(\mu_1,p_1)(\Gamma)^{ d_1}} \to 0. 
\end{gather} By applying the functor $\Hom(\Delta(\mu,p)(\Gamma),-)$ to the sequence~\eqref{eq422}, we get \begin{gather*} \cdots\rightarrow \Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu,p)(\Gamma),M_0 )\rightarrow \Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu,p)(\Gamma),M_1 ) \\ \hphantom{\cdots}{} \rightarrow\Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu,p)(\Gamma),\Delta(\mu_1, p_1)(\Gamma)^{ d_1})\rightarrow \cdots. \end{gather*} If $(\mu,p)\notin \mathcal S(\lambda, r)$ or $(\mu, p)=(\mu_0, p_0)$, then by Proposition~\ref{subset} we get \begin{gather*} \Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu,p)(\Gamma),M_0 ) =0, \qquad \Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu,p)(\Gamma),\Delta(\mu_1, p_1)(\Gamma))=0 \end{gather*} and we see that the middle term is also trivial, i.e. \begin{gather*} \Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu, p)(\Gamma), M_1) = 0 \qquad \text{for} \quad (\mu,p)\notin \mathcal S(\lambda, r), \\ \text{and} \qquad \Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu_0, p_0)(\Gamma), M_1) = 0. \end{gather*} The fact that $\Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu_1, p_1)(\Gamma), M_1) = 0$ follows from the fact that $M_1$ is a~summand of $M_1'$ and that $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu_1,p_1)(\Gamma),M_1')=0$. We use Lemma~\ref{elem} again, with $N=M_1$ and $M=\Delta(\mu_2,p_2)(\Gamma)$, and we get \begin{gather*} 0\to M_1 \to M_2' \to {\Delta(\mu_2,p_2)(\Gamma)}^{d_2'} \to 0, \end{gather*} a~summand $M_2\subseteq M_2'$ containing $(M_2')_{\mu_i}[p_i]$, $i=0,1$, such that \begin{gather*} M_1 \stackrel{\iota_1}{\hookrightarrow}M_2, \qquad \Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu_i, p_i)(\Gamma), M_2) = 0 \qquad \text{for} \quad i=0,1,2, \\ \text{and} \qquad \Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu, p)(\Gamma), M_2) = 0 \qquad \text{for} \quad (\mu,p)\notin \mathcal S(\lambda, r).
\end{gather*} Repeating this procedure, and using Lemma~\ref{elem} and the properties of $\eta$, we have the following proposition. The condition on weights is a~consequence of the fact that $ \wt \Delta(\lambda_i, p)\subset \conv W\lambda_k$ for all $i\le k$. \begin{Proposition} There exists a~family $\{M_s\}$, $s\in \mathbb Z_{\ge0}$, of indecomposable finite-dimensional modules and injective morphisms $\iota_s: M_s\to M_{s+1}$ of objects of $\mathcal G_{\bdd}(\Gamma)$ which have the following properties: \begin{enumerate}\itemsep=0pt \item[$1.$] $M_0=\Delta(\lambda_k,r)(\Gamma)=\Delta(\mu_0,p_0)(\Gamma)$, and for $s\ge 1$, \begin{gather*} M_{s}/\iota_{s-1}(M_{s-1})\cong\Delta(\mu_s,p_s)(\Gamma)^{d_s}, \qquad d_s\in\mathbb Z_+, \\ \dim M_s[r]_{\lambda_k}=1, \qquad \wt M_s\subset\conv W\lambda_k. \end{gather*} \item[$2.$] We have $M_s[p]=0$ for all $s\ge 0$ and $p \gg \max\{r_i \}$. \item[$3.$] For all $0\le\ell\le s$ we have $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu_\ell,p_\ell)(\Gamma), M_s)=0$, and, for all $(\mu,p)\notin \mathcal S(\lambda, r)$, we have $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu,p)(\Gamma), M_s)=0$. \item[$4.$] $M_s$ is generated as a~$\mathfrak g[t]$-module by the spaces $\{M_s[p_\ell]_{\mu_\ell}: \ell\le s\}$. Moreover, if we let \begin{gather*} \iota_{r,s}= \iota_{s-1}\cdots\iota_r: \ M_r\to M_{s}, \quad r< s, \qquad \iota_{r,r}=\id, \end{gather*} then $M_s[p_\ell]_{\mu_\ell}= \iota_{\ell,s}(M_\ell[p_\ell]_{\mu_\ell})$, $s\ge\ell$. \end{enumerate} \end{Proposition} \subsection{Def\/ining the tilting modules}\label{section5.3} Let $T(\lambda_k,r)(\Gamma)=T(\lambda,r)(\Gamma)$ be the direct limit of $\{ M_s, \iota_{r,s} \,|\, r,s \in \mathbb Z_+, r\le s\}$. We have an injection $M_s \hookrightarrow T(\lambda,r)(\Gamma)$, and, letting $\widetilde{M_s}$ be the image of $M_s$, we have $\widetilde{M_{s}}\subset \widetilde{M_{s+1}}$, $T(\lambda,r)(\Gamma)= \bigcup {\widetilde{M_s}}$, and $\frac{\widetilde{M_s}}{\widetilde{M_{s-1}}} \cong \frac{M_s}{M_{s-1}}$.
In particular we see that $T(\lambda,r)(\Gamma)$ has a~$\Delta(\Gamma)$-f\/iltration. We identify~$M_s$ with~$\widetilde{M_s}$. The argument that $T(\lambda,r)(\Gamma)$ is indecomposable is identical to that from~\cite{bc}, which we include for completeness. We begin with an easy observation: \begin{gather} \label{ms1} T(\lambda, r)(\Gamma)[p_\ell]_{\mu_\ell} = M_{\ell}[p_\ell]_{\mu_\ell}, \qquad M_s=\sum\limits_{\ell\le s}\textbf U(\mathfrak g[t]) T(\lambda,r)(\Gamma)[p_\ell]_{\mu_\ell}. \end{gather} To prove that $T(\lambda,r)(\Gamma)$ is indecomposable, suppose that $T(\lambda,r)(\Gamma)= U_1\oplus U_2$. Since $\dim T(\lambda,r)(\Gamma)[r]_{\lambda}=1$, we may assume without loss of generality that $T(\lambda,r)(\Gamma)[r]_{\lambda}\subset U_1$ and hence $ M_0\subset U_1$. Assume that we have proved by induction that $M_{s-1}\subset U_1$. Since~$M_s$ is gene\-ra\-ted as a~$\mathfrak g[t]$-module by the spaces $\{M_s[p_\ell]_{\mu_\ell}: \ell\le s\}$, it suf\/f\/ices to prove that \mbox{$M_s[p_s]_{\mu_s}\subset U_1$}. By~\eqref{ms1}, we have $U_i[p_s]_{\mu_s}\subset M_s$ and hence \begin{gather*} M_s= \left (M_{s-1}+ \textbf U(\mathfrak g[t])U_1[p_s]_{\mu_s}\right)\oplus\textbf U(\mathfrak g[t])U_2[p_s]_{\mu_s}. \end{gather*} Since $M_s$ is indecomposable by construction, it follows that $U_2[p_s]_{\mu_s}=0$ and $M_s\subset U_1$, which completes the inductive step. \begin{Proposition} Let $(\lambda,r)\in\Gamma$. \begin{enumerate}\itemsep=0pt \item[$1.$] Then there exists an indecomposable module $T(\lambda,r)(\Gamma) \in \Ob {\mathcal G}_{\bdd}(\Gamma)$ which admits a~filtration by finite-dimensional modules $M_s= \sum\limits_{\ell\le s}\textbf U(\mathfrak g[t]) T(\lambda,r)(\Gamma)[p_\ell]_{\mu_\ell}$, $s\ge 0$, such that $M_0\cong\Delta(\lambda,r)(\Gamma)$ and the successive quotients are isomorphic to a~finite direct sum of $\Delta(\mu,s)(\Gamma)$, $(\mu,s)\in \mathcal S(\lambda, r)$.
\item[$2.$] We have $ \wt T(\lambda,r)(\Gamma) \subset\conv W\lambda$, $\dim T(\lambda,r)(\Gamma)[r]_{\lambda}=1$. \item[$3.$] For all $(\mu,s)\in \Gamma$, we have $\Ext^1_{\widehat{\mathcal G}(\Gamma)}(\Delta(\mu,s)(\Gamma), T(\lambda,r)(\Gamma))=0$. \end{enumerate} \end{Proposition} \begin{proof} Parts (1) and (2) are proved in the preceding discussion. The proof for part (3) is identical to that found in~\cite{bc}. \end{proof} \subsection{Initial properties of tilting modules} \label{section5.4} The next result is an analog of Fitting's lemma for the inf\/inite-dimensional modules $T(\lambda,r)(\Gamma)$. \begin{Lemma} Let $\psi: T(\lambda,r)(\Gamma)\to T(\lambda,r)(\Gamma)$ be any morphism of objects of $\widehat{\mathcal G}$. Then $\psi(M_s)\subset M_s$ for all $s\ge 0$ and $\psi$ is either an isomorphism or locally nilpotent, i.e., given $m\in T(\lambda,r)(\Gamma)$, there exists $\ell\ge 0$ $($depending on $m)$ such that $\psi^\ell(m)=0$. \end{Lemma} \begin{proof} Since $\psi$ preserves both weight spaces and graded components it follows that \mbox{$\psi(M_s)\!\subset\! M_s$} for all $s\ge 0$. Moreover, since $M_s$ is indecomposable and f\/inite-dimensional, it follows from Fitting's lemma that the restriction of $\psi$ to $M_s$, $s\ge 0$, is either nilpotent or an isomorphism. If all the restrictions are isomorphisms, then, since $T(\lambda,r)(\Gamma)$ is the union of the $M_s$, $s\ge 0$, it follows that $\psi$ is an isomorphism. On the other hand, if the restriction of $\psi$ to some $M_s$ is nilpotent, then the restriction of $\psi$ to all $M_\ell$, $\ell\ge 0$, is nilpotent, which proves that $\psi$ is locally nilpotent. \end{proof} In the rest of the section we shall complete the proof of the main theorem by showing that any indecomposable tilting module is isomorphic to some $T(\lambda,r)(\Gamma)$ and that any tilting module in $\mathcal G_{\bdd}(\Gamma)$ is isomorphic to a~direct sum of indecomposable tilting modules.
Let $T\in\mathcal G_{\bdd}(\Gamma)$ be a~f\/ixed tilting module. Then we have \begin{gather} \label{tiltext} \Ext^1_{\widehat{\mathcal G}}(T,\nabla(\lambda,r)(\Gamma))=0=\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r)(\Gamma), T), \qquad (\lambda,r)\in \Gamma, \end{gather} where the f\/irst equality is due to Corollary~\ref{extMnabla}. \begin{Lemma} Suppose that $T_1$ is any summand of $T$. Then, $T_1$ admits a~$\nabla(\Gamma)$-filtration and $\Ext^1_{\widehat{\mathcal G}}(T_1,\nabla(\lambda,r)(\Gamma))=0$, for all $(\lambda,r)\in\Gamma$. \end{Lemma} \begin{proof} Since $\Ext^1$ commutes with f\/inite direct sums, for all $ (\lambda,r)\in \Gamma$ we have \begin{gather*} \Ext^1_{\widehat{\mathcal G}}(T_1,\nabla(\lambda,r) (\Gamma))=0, \qquad \text{and} \qquad \Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r)( \Gamma), T_1)=0. \end{gather*} By Proposition~\ref{extnablaconn}, the second equality implies that $T_1$ has a~$\nabla (\Gamma)$-f\/iltration and the proof of the lemma is complete. \end{proof} \subsection{Completing the proof of the main theorem} \label{section5.5} The preceding lemma illustrates one of the dif\/f\/iculties we face in our situation. Namely, we cannot directly conclude that $T_1$ has a~$\Delta( \Gamma)$-f\/iltration from the vanishing $\Ext$-condition by using the dual of Proposition~\ref{extnablaconn}. However, we have the following, whose proof is given in~\cite{bc}. \begin{Proposition} \label{summands} Suppose that $N\in\mathcal G_{\bdd} (\Gamma)$ has a~$\nabla (\Gamma)$-filtration and satisfies \begin{gather*} \Ext^1_{\widehat{\mathcal G}}(N,\nabla(\lambda,r) (\Gamma)) =0, \qquad \text{for all} \quad (\lambda,r)\in \Gamma. \end{gather*} Let $(\mu,s)$ be such that $N\rightarrow \nabla(\mu,s)(\Gamma)\rightarrow 0$. Then $T(\mu,s)(\Gamma)$ is a~summand of $N$. \end{Proposition} The following is immediate. 
Note that this also shows that our construction of the indecomposable tilting modules is independent of the choice of enumeration of $P^+$, the set $\mathcal S(\lambda, r)$ and~$\eta$. \begin{Corollary} Any indecomposable tilting module is isomorphic to $T(\lambda{,}r)(\Gamma)$ for some \mbox{$(\lambda{,}r)\!\in\! \Gamma.\!$} Further, if $T\in\Ob{\mathcal G}_{\bdd}(\Gamma)$ is tilting, there exists $(\lambda,r)\in \Gamma$ such that $T(\lambda,r)(\Gamma)$ is isomorphic to a~direct summand of $T$. \end{Corollary} \begin{proof} Since $T$ and $T(\lambda,r)(\Gamma)$ are tilting they satisfy~\eqref{tiltext} and the corollary follows. \end{proof} We can now prove the following theorem. \begin{Theorem} Let $T\in\Ob{\mathcal G}_{\bdd}(\Gamma)$. The following are equivalent. \begin{enumerate}\itemsep=0pt \item[$1.$] $T$ is tilting. \item[$2.$] $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r)(\Gamma), T) =0 =\Ext^1_{\widehat{\mathcal G}}(T,\nabla(\lambda,r)(\Gamma))$, $(\lambda,r)\in \Gamma$. \item[$3.$] $T$ is isomorphic to a~direct sum of objects $T(\mu,s)(\Gamma)$, $(\mu,s)\in \Gamma $. \end{enumerate} \end{Theorem} \begin{proof} The implication (1)$\implies$(2) is given by Corollary~\ref{extMnabla} and Proposition~\ref{extnablaconn}, while the fact that (3) implies (1) is clear. We complete the proof by showing that (2) implies (3). By Proposition~\ref{26}, the vanishing $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda,r)(\Gamma), T) =0 $ implies that $T$ admits a~$\nabla(\Gamma)$-f\/iltration. By Lemma~\ref{reorder}, we can assume that the f\/iltration is f\/inite, and if $\lambda_k=\lambda$ is maximal such that $T_\lambda\ne0$, then $T\rightarrow \nabla(\lambda, r)(\Gamma)\rightarrow 0$ for some $r$.
Indeed, if we choose integers $r_1\ge r_2\ge \cdots $ such that \begin{gather*} [T:\nabla(\lambda,s)(\Gamma)]\ne 0 \qquad \text{if\/f} \quad s=r_i \quad \text{for some} \quad i, \end{gather*} then Lemma~\ref{reorder} says that we may assume that $T\rightarrow \nabla(\lambda, r_1)(\Gamma)\rightarrow 0$. By Proposition~\ref{summands} we have $T\cong T(\lambda, r_1)(\Gamma)\oplus T_1$ and we see that $T_1$ has a~$\nabla(\Gamma)$-f\/iltration. The same argument implies that $T_1$ maps onto $\nabla(\lambda, r_2)(\Gamma)$, and hence $T_1 \cong T(\lambda, r_2)(\Gamma) \oplus T_2$. Continuing, we f\/ind that for $j\ge 1$, there exists a~summand $T_j$ of $T$ with \begin{gather*} T=T_j \oplus \bigoplus_{s=1}^j T(\lambda, r_s)(\Gamma). \end{gather*} Let $\pi_j:T\rightarrow \oplus_{s=1}^j T(\lambda, r_s)(\Gamma)$ be the canonical projection. Because $T$ has f\/inite-dimensional graded components, and the $r_i$ are decreasing, it follows that for $m\in T$ there exists an integer~$k(m)$ such that $\pi_j(m)=\pi_{k(m)}(m)$ for all $j\ge k(m)$. Hence, we have a~surjection \begin{gather*} \pi: \ T\rightarrow \bigoplus_{j\ge 1} T(\lambda, r_j)(\Gamma) \rightarrow 0 \qquad \text{and} \qquad \ker \pi = \bigcap T_j, \end{gather*} where $\pi(m):=\pi_{k(m)}(m)$. In particular, we have $T=(\bigoplus T(\lambda, r_i)(\Gamma)) \oplus \ker \pi$, where $(\ker \pi)_\lambda=0$, $\ker \pi$ admits a~$\nabla(\Gamma)$-f\/iltration and $\Ext^1_{\widehat{\mathcal G}}(\ker \pi,\nabla(\mu,r)(\Gamma))=0$, for all $(\mu,r)\in \Gamma$. It follows that we may apply to $\ker \pi$ the same arguments we used on~$T$. The result follows by induction on~$k$. \end{proof} \subsection{Proof of Proposition~\ref{subset}}\label{section5.6} We construct here the set $\mathcal S(\lambda, r)$ and the enumeration $\eta$. \textit{The set $\mathcal S(\lambda, r)$.} Recall that $\Gamma=P^+\times J$, and that $a=\inf J$ and $b=\sup J$.
Using the enumeration of $P^+$, let $\lambda=\lambda_k$ and def\/ine integers $r_k \le r_{k-1} \le \dots \le r_0$ recursively by setting $r_k = r$ and \begin{gather*} r_s = \max \{ p \,|\, \Delta(\lambda_{s+1},r_{s+1})(\Gamma)[p] \ne 0\}. \end{gather*} Note that because $\Delta(\lambda_i, r_i)(\Gamma)[p]=0$ for any $p> r_{i-1}$, and $r_i\le r_j$ for all $j<i$, we have $\Delta(\lambda_i, r_i)(\Gamma)[p]=0$ if $p>r_j$ for any $j<i$. Then, it follows that \begin{gather} \label{eq4.6} \Delta(\lambda_i, s)(\Gamma)[p]=0 \qquad \text{for any} \quad s \le r_i < p. \end{gather} We set $\mathcal S(\lambda, r)=\{ (\lambda_i,s) \,|\, i\le k, s\le r_i \}$, and note that it satisf\/ies conditions (1) and (2) of Proposition~\ref{subset} by construction. We now verify condition (3). Let $(\mu,s)\in \mathcal S(\lambda, r)$ and $(\mu',s')\notin \mathcal S(\lambda, r)$. There are two possibilities for $(\mu',s')$: either $\mu'=\lambda_i$ for $i> k$, or $\mu'=\lambda_i$ for some $0\le i \le k$ and $s'>r_i$. The f\/irst case is covered by Proposition~\ref{homresults}.3, which tells us that if $\lambda \not\le \mu$ then $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda, r)(\Gamma), \Delta(\mu,s)(\Gamma))=0$ for all $s,r\in \mathbb Z$. For the second case, again using Proposition~\ref{homresults}.3, it is enough to prove that $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda_i, s')(\Gamma), \Delta(\lambda_j, s)(\Gamma))=0$ for $\lambda_i \le \lambda_j$, $s'>r_i$, and $s\le r_j$. By the total order (cf.\ Section~\ref{section4.4}), it follows that $i\le j$, and, hence, $s\le r_j \le r_i < s'$. By Proposition~\ref{homresults}.4, we can in fact assume that $i < j$. Consider a~short exact sequence $0 \to \Delta(\lambda_j, s)(\Gamma) \to M \to \Delta(\lambda_i, s')(\Gamma) \to 0$. From~\eqref{eq4.6} we have $\Delta(\lambda_j, s)(\Gamma)[s']=0$, and it follows that the sequence splits, as desired. \textit{The enumeration $\eta$.} It remains to def\/ine the enumeration $\eta$.
The case where $J=\mathbb Z$ is done in~\cite{bc}, and in the case where $J$ is f\/inite or of the form $[a, \infty)$ (in which case $\mathcal S(\lambda, r)$ is in fact f\/inite), we use the enumeration def\/ined by the following rules \begin{enumerate}\itemsep=0pt \item[1)] $\eta(\lambda_i, s) < \eta(\lambda_j, s')$ if $i>j$, \item[2)] $\eta(\lambda_i, s) < \eta(\lambda_i, s-1)$. \end{enumerate} Suppose that $i<j$ and let $(\mu_i,p_i)=\eta^{-1}(i)$ and $(\mu_j,p_j)=\eta^{-1}(j)$. If $\mu_i=\mu_j$, then rule~$(2)$ implies that $p_j < p_i$, and Proposition~\ref{homresults} says that $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu_i,p_i)(\Gamma),\Delta(\mu_j,p_j)(\Gamma))=0$. Otherwise, we have $\mu_j \not\le \mu_i$, and the result again follows by Proposition~\ref{homresults}. We are left with the case where $J=(-\infty, b]$. In this case $\eta$ will in fact be a~bijection. Note that it is enough to def\/ine a~bijective, set-theoretic inverse $\eta^{-1}$. We recursively def\/ine another set of integers $\{ r_i'\}$ by setting $r_k'=r_k$ and letting $r_i'=\max\{ p \,|\, \Delta(\lambda_{i+1}, r_{i+1}')[p]\ne 0\}$. It is easy to see that $r_i \le r_i'$. If $r_i < r_i'$, then we must have $r_i'> b$, which implies that $r_i=b$. This implies that $r_j=b$ for all $j<i$. We note that $\Delta(\lambda_j,b)(\Gamma)=V(\lambda_j, b)$. Set $a_s:= r_s'-r_{s+1}'$. \begin{Lemma} \label{thelaststop} We have $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda_i,c)(\Gamma),\Delta(\lambda_{s},d)(\Gamma))=0$ if $c-d\ge a_{s-1}+1$ and $i<s$. \end{Lemma} \begin{proof} We f\/irst prove that under these conditions $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda_i,c),\Delta(\lambda_{s},d))=0$. Note that we can shift by $-d+r_s'$, and so we examine $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda_i,c-d+r_s'),\Delta(\lambda_{s},r_s'))$. According to our hypothesis, we have $c-d+r_s' \ge r_{s-1}'+1$. It follows by the def\/inition of $r_{s-1}'$ that $\Delta(\lambda_s,r_s')[p]=0$ if $p\ge c-d+r_s'$.
If we examine a~sequence $0 \to \Delta(\lambda_s, r_s') \to M \to \Delta(\lambda_i, c-d+r_s') \to 0$ and let $m\in M_{\lambda_i}[c-d+r_s']$ be a~pre-image of $w_{\lambda_i}$, it is clear that $m$ satisf\/ies the def\/ining relations of $w_{\lambda_i}$. Therefore, the sequence splits. Finally, observe that $\Delta(\lambda_s, d)(\Gamma)[c]=0$, again by the def\/inition of the $r_i'$ and noting that $\Delta(\lambda_s, d)(\Gamma) $ is a~quotient of $\Delta(\lambda_s, d)$. The same argument as above also shows that $\Ext^1_{\widehat{\mathcal G}}(\Delta(\lambda_i,c)(\Gamma),\Delta(\lambda_{s},d)(\Gamma))=0$. \end{proof} We def\/ine $\eta^{-1}: \mathbb Z_{\ge 0} \to \mathcal S(\lambda, r)$ in the following way. Set $\eta^{-1}(0) = (\lambda_k,r_k)$. If $\eta^{-1}$ is def\/ined on $\{0, \ldots, m-1\}$ and $\eta^{-1}(m-1)=(\lambda_i, p_i)$, we def\/ine $\eta^{-1}(m)$ as follows. If $i>0$ and $(\lambda_{i-1}, p_i + a_{i-1})\in \mathcal S(\lambda, r)$, then we set $\eta^{-1}(m)=(\lambda_{i-1}, p_i + a_{i-1})$. Otherwise, we let $\eta^{-1}(m)=(\lambda_k, p_m-1)$, where $p_m$ is the minimal integer such that $(\lambda_k, p_m)$ has a~pre-image under $\eta^{-1}$. To prove that $\Ext^1_{\widehat{\mathcal G}}(\Delta(\mu_i,p_i)(\Gamma),\Delta(\mu_j,p_j)(\Gamma))=0$ if $i< j$, we assume that $\mu_i \le \mu_j$. If $\mu_i = \mu_j$ then this follows from Proposition~\ref{homresults}. So we assume that $\mu_i < \mu_j$, and let us say that $\mu_j=\lambda_\ell$. In this case we must have $p_i-p_j > a_{\ell-1}$ and so the result follows by Lemma~\ref{thelaststop}. \section{Some dif\/ferent considerations on truncated categories}\label{section6} Throughout this section we discuss some ``trivial'' tilting theories for the category $\mathcal G(\Gamma)$ by considering dif\/ferent types of orders on the set $\Lambda$.
These categories, equipped with the orders described below, have already appeared in the literature (see~\cite{BCFM,cg:hwcat,cg:koszul,cg:minp,ckr:faces} and references therein). \subsection{Truncated categories with the covering relation}\label{section6.1} Consider a strict partial order on $\Lambda$ in the following way. Given $(\lambda,r), (\mu,s)\in\Lambda$, we say that \begin{gather*} (\mu,s) \text{ covers } (\lambda,r) \text{ if and only if } s=r+1 \text{ and } \mu-\lambda\in R\cup\{0\}. \end{gather*} Notice that for any $(\mu,s)\in\Lambda$ the set of $(\lambda,r)\in\Lambda$ such that $(\mu,s)$ covers $(\lambda,r)$ is f\/inite. Let $\preceq$ be the unique partial order on~$\Lambda$ generated by this covering relation. One of the main inspirations to consider this relation comes from the following proposition: \begin{Proposition}[\protect{\cite[Proposition 2.5]{cg:hwcat}}] For $(\lambda,r),(\mu,s)\in\Lambda$, we have \begin{gather*} \Ext^1_{{\mathcal G}}(V(\lambda,r),V(\mu,s))= \begin{cases} 0, &\text{if}\quad s\ne r+1, \\ \Hom_{\mathfrak g}(V(\lambda), \mathfrak g\otimes V(\mu)), &\text{if}\quad s= r+1. \end{cases} \end{gather*} In other words, $\Ext^1_{{\mathcal G}}(V(\lambda,r),V(\mu,s))=0$ except when $(\mu,s)$ covers $(\lambda,r)$. \end{Proposition} Given $\Gamma\subset\Lambda$, set \begin{gather*} {V_\Gamma}^+= \{v\in V[s]_\mu: \mathfrak n^+ v=0, \ (\mu,s)\in \Gamma\}, \qquad V_\Gamma=\textbf U(\mathfrak g)V_\Gamma{}^+, \qquad V^\Gamma=V/V_{\Lambda\setminus\Gamma}. \end{gather*} \begin{Proposition}[\protect{\cite[Propositions 2.1, 2.4, and 2.7]{cg:hwcat}}] \label{propofPandI} Let $\Gamma$ be finite and convex and assume that $(\lambda,r), (\mu,s)\in\Gamma$. \begin{enumerate}\itemsep=0pt \item[$1.$] $[P(\lambda,r)^\Gamma:V(\mu,s)]=[P(\lambda,r):V(\mu,s)]=[I(\mu,s):V(\lambda,r)]=[I(\mu,s)_\Gamma:V(\lambda,r)]$. \item[$2.$] $\Hom_{{\mathcal G}}(P(\mu,s),P(\lambda,r)) \cong \Hom_{\mathcal{G}[\Gamma]}(P(\mu,s)^{\Gamma},P(\lambda,r)^{\Gamma})$. 
\item[$3.$] Let $K(\lambda,r )$ be the kernel of the canonical projection $P(\lambda,r)\twoheadrightarrow V(\lambda,r)$, and let $(\mu,s)\in\Lambda$. Then $[K(\lambda,r):V(\mu,s)]\not=0$ only if $(\lambda,r)\prec(\mu,s)$. \item[$4.$] Let $(\mu,s)\in\Lambda$. Then~$[I(\lambda,r)/V(\lambda,r):V(\mu,s)]\not=0$ only if~$(\mu,s)\prec(\lambda,r)$. \end{enumerate} \end{Proposition} Following Section~\ref{section2.6}, but using the partial order $\preceq$ defined above, for each $(\lambda,r)\in \Gamma$ we denote by $\Delta(\lambda,r)(\Gamma)$ the maximal quotient of $P(\lambda,r)$ such that \begin{gather*} [\Delta(\lambda,r)(\Gamma):V(\mu,s)] \ne 0\implies (\mu,s)\preceq (\lambda,r). \end{gather*} Similarly, we denote by $\nabla(\lambda,r)(\Gamma)$ the maximal submodule of $I(\lambda,r)$ satisfying \begin{gather*} [\nabla(\lambda,r)(\Gamma):V(\mu,s)] \ne 0\implies (\mu,s)\preceq (\lambda,r). \end{gather*} The modules $\Delta(\lambda,r)(\Gamma)$ and $\nabla(\lambda,r)(\Gamma)$ are called, respectively, the standard and costandard modules associated to $(\lambda,r)$. Further, any module in $\widehat{\mathcal G}$ with a~$\Delta(\Gamma)$-filtration and a~$\nabla(\Gamma)$-filtration is called tilting. \begin{Proposition}\label{p:tilt} Let $\Gamma \subseteq \Lambda$ be finite and convex. \begin{enumerate}\itemsep=0pt \item[$1.$] For each $(\lambda,r)\in \Gamma$, there exists an indecomposable tilting module $T(\lambda,r)(\Gamma)$, and $T(\lambda,r)(\Gamma)=I(\lambda,r)$. \item[$2.$] Every indecomposable tilting module $T$ is isomorphic to $T(\lambda,r)(\Gamma)$ for some $(\lambda,r) \in \Gamma$. \item[$3.$] Every tilting module is isomorphic to a~direct sum of indecomposable tilting modules.
\end{enumerate} \end{Proposition} Before proving this proposition, we make some remarks: \begin{Remark}\label{stdcostd}\quad \begin{enumerate}\itemsep=0pt \item It follows from Proposition~\ref{projectives}.3 and Proposition~\ref{propofPandI}, parts~(1) and~(4), that the costandard module in $\widehat{\mathcal G}$ associated to $(\lambda,r)$ is $I(\lambda, r)$; similarly, it follows from Proposition~\ref{propofPandI}, parts~(1) and~(3), that the standard module in $\widehat{\mathcal G}$ associated to $(\lambda,r)$ is the simple module $V(\lambda,r)$. \item For any $M\in \mathcal G$, let $k(M)$ be the integer such that $M\in \mathcal G_{\leq k(M)}$. Then $M$ admits a~filtration $\{ M_i\}$, where $M_i = \bigoplus_{j=0}^i M[k(M)-j]$, which can be refined into a~Jordan--H\"older series since each quotient $M_{i+1}/M_i$ is a~finite-dimensional $\mathfrak g$-module. \end{enumerate} \end{Remark} \begin{proof} [Proof of Proposition~\ref{p:tilt}] Part (1) follows from Remark~\ref{stdcostd}, since we have $T(\lambda, r)(\Gamma):= I(\lambda,r)$. Parts (2) and (3) are direct consequences of the injectivity of $I(\lambda,r)$. \end{proof} \subsection{Truncated categories related to restricted Kirillov--Reshetikhin modules}\label{section6.2} One of the goals of~\cite{BCFM,cg:minp} was to study the modules $P(\lambda, r)^\Gamma$ (and their multigraded versions) under certain very specific conditions on $\Gamma$. In these papers it was shown that the modules $P(\lambda,r)^\Gamma$ are given in terms of generators and relations, which allows one to regard these modules as specializations of the famous Kirillov--Reshetikhin modules (in the sense of~\cite{CM:kr,CM:krg}).
These papers develop a~general theory for a~$\mathbb Z_+$-graded Lie algebra $\mathfrak a = \bigoplus_{i\in \mathbb Z_+} \mathfrak g[i]$, where $\mathfrak g[0]=\mathfrak g_0$ is a~finite-dimensional complex simple Lie algebra and the non-zero graded components $\mathfrak g[i]$, $i>0$, are finite-dimensional $\mathfrak g_0$-modules. Focusing on those algebras with $\mathfrak g[i] =0$ for $i>1$, we have $\mathfrak a \cong \mathfrak g\ltimes V$, where $V$ is a~$\mathfrak g$-module, and in this context a~very particular tilting theory can be described as follows. Assume that $\wt(V)\ne\{0\}$ and fix a~subset $\Psi\subseteq\wt(V)$ satisfying \begin{gather*} \sum\limits_{\nu\in\Psi} m_\nu\nu= \sum\limits_{\mu\in\wt(V)} n_\mu\mu \quad (m_\nu,n_\mu\in\mathbb Z_+) \ \Longrightarrow \ \sum\limits_{\nu\in\Psi} m_\nu\le \sum\limits_{\mu\in\wt(V)} n_\mu \end{gather*} and \begin{gather*} \sum\limits_{\nu\in\Psi} m_\nu= \sum\limits_{\mu\in\wt(V)}n_\mu \qquad \text{only if} \quad n_\mu=0 \quad \text{for all} \quad \mu\notin\Psi. \end{gather*} \begin{Remark} Such subsets are precisely those contained in a proper face of the convex polytope determined by $\wt (V)$, according to~\cite{kharid:polytopes}. \end{Remark} Consider the reflexive and transitive binary relation on $P$ given by \begin{gather*} \mu\le_\Psi\lambda \qquad \text{if} \quad \lambda-\mu\in\mathbb Z_+\Psi, \end{gather*} where $\mathbb Z_+\Psi$ is the $\mathbb Z_+$-span of $\Psi$. Set also \begin{gather*} d_\Psi(\mu,\lambda)=\min\left\{\sum\limits_{\nu\in\Psi}m_\nu: \lambda-\mu=\sum\limits_{\nu\in\Psi}m_\nu\nu,\, m_\nu\in\mathbb Z_+ \; \forall\, \nu\in\Psi \right\}. \end{gather*} By \cite[Proposition~5.2]{ckr:faces}, $\le_\Psi$ is in fact a partial order on~$P$.
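To make the quantity $d_\Psi$ concrete, the following brute-force sketch computes it for a toy (hypothetical) choice of $\Psi\subset\mathbb Z^2$; it is an illustration only and is not tied to any particular $\wt(V)$.

```python
from itertools import product

def d_psi(diff, psi, bound=20):
    """Minimal total coefficient sum over ways of writing `diff`
    as a Z_+-combination of the vectors in `psi` (None if impossible)."""
    best = None
    for ms in product(range(bound + 1), repeat=len(psi)):
        combo = tuple(sum(m * v[i] for m, v in zip(ms, psi))
                      for i in range(len(diff)))
        if combo == tuple(diff):
            s = sum(ms)
            best = s if best is None or s < best else best
    return best

psi = [(1, 0), (1, 1)]             # hypothetical Psi in Z^2
assert d_psi((3, 1), psi) == 3     # (3,1) = 2*(1,0) + 1*(1,1)
assert d_psi((2, 2), psi) == 2     # (2,2) = 2*(1,1)
assert d_psi((0, 1), psi) is None  # (0,1) is not in Z_+ Psi
print("ok")
```

The search bound is ad hoc; for genuine weight data one would bound the coefficients by the coordinates of $\lambda-\mu$.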
Moreover, it induces a refinement $\preccurlyeq_\Psi$ of the partial order~$\preceq$ on~$\Lambda$ by setting \begin{gather*} (\lambda, r)\preccurlyeq_\Psi (\mu, s) \qquad \text{if} \quad \lambda\le_\Psi\mu, \quad s- r\in\mathbb Z_+, \quad \text{and} \quad d_\Psi(\lambda,\mu)= s- r. \end{gather*} Finally, if $\Gamma\subseteq\Lambda$ is finite and convex with respect to $\preccurlyeq_\Psi$, and there exists $(\lambda, r)\in\Lambda$ such that $(\lambda, r)\preccurlyeq_\Psi(\mu, s)$ for all $(\mu, s)\in\Gamma$, then it was shown in~\cite[Lemma 5.5]{ckr:faces} that \begin{gather} \label{hom} \Hom_{\mathcal G[\Gamma]}(P(\mu, s)^\Gamma,P(\nu, t)^\Gamma)\ne 0 \qquad \text{only if} \quad (\nu, t)\preccurlyeq_\Psi (\mu, s). \end{gather} In particular, it follows from Proposition~\ref{projectives}.3, Proposition~\ref{propofPandI}, parts~(2) and~(3), and~\eqref{hom} that \begin{gather*} [P(\lambda,r):V(\mu,s)]\ne 0 \implies (\lambda,r)\preccurlyeq_\Psi (\mu,s) \end{gather*} and \begin{gather*} [I(\lambda,r):V(\mu,s)]\ne 0 \implies (\mu,s)\preccurlyeq_\Psi (\lambda,r). \end{gather*} We conclude that the standard modules in $\mathcal G(\Gamma)$ are the simple modules, the costandard modules are the injective hulls of the simple modules, and hence $T(\lambda,r)(\Gamma)=I(\lambda,r)_\Gamma$. \subsection*{Index of notation} \hspace{\parindent}Subsection~\ref{section1.2} --- $\mathcal F(\mathfrak g)$, $V(\lambda)$. Subsection~\ref{section2.1} --- $\widehat{\mathcal G}$, $V[r]$, $V(\lambda,r)$, $v_{\lambda,r}$, $M^*$, $\Lambda$, $\le$. Subsection~\ref{section2.2} --- $\mathcal G_{\le s}$, $\mathcal G$, $\mathcal G_{\bdd}$, $V_{>s}$, $V_{\le s}$, $[V:V(\lambda,s)]$, $\Lambda(V)$. Subsection~\ref{section2.3} --- $P(\lambda,r)$, $I(\lambda,r)$, $p_{\lambda,r}$. Subsection~\ref{section2.4} --- $\Delta(\lambda,r)$, $W(\lambda,r)$, $\nabla(\lambda,r)$. Subsection~\ref{section2.5} --- $\Gamma$, $\widehat{\mathcal G}(\Gamma)$, $\mathcal G(\Gamma)$, $\mathcal G_{\bdd}(\Gamma)$.
Subsection~\ref{section2.6} --- $J$, $a$, $b$, $M^\Gamma$, $\Delta(\lambda,r)(\Gamma)$, $W(\lambda,r)(\Gamma)$, $\nabla(\lambda,r)(\Gamma)$. Section~\ref{section3} --- $\Delta(\Gamma)$-filtration (respectively, $\nabla(\Gamma)$-filtration), $T(\lambda,r)$. Subsection~\ref{section4.3} --- $I(M)$. Subsection~\ref{section4.4} --- o-canonical filtration. Subsection~\ref{section4.5} --- $N_s$, $I(\lambda,r)_p$. Subsection~\ref{section4.6} --- $T(\lambda,r)(\Gamma)$. \LastPageEnding \end{document}
\begin{document} \title[Almost automorphic delayed equations] {Almost automorphic delayed differential equations and Lasota-Wazewska model} \author[A. Coronel, Ch. Maul\'en, M. Pinto, D. Sepulveda] {An{\'\i}bal Coronel, Christopher Maul\'en, Manuel Pinto, Daniel Sepulveda} \address{An\'ibal Coronel \newline GMA, Departamento de Ciencias B\'asicas, Facultad de Ciencias, Universidad del B\'{\i}o-B\'{\i}o, Campus Fernando May, Chill\'{a}n, Chile.} \email{[email protected]} \address{Christopher Maul\'en \newline Departamento de Matem\'aticas, Facultad de Ciencias, Universidad de Chile} \email{[email protected]} \address{Manuel Pinto \newline Departamento de Matem\'aticas, Facultad de Ciencias, Universidad de Chile} \email{[email protected]} \address{Daniel Sepulveda \newline Escuela de Matem\'aticas y Estad\'isticas, Universidad Central de Chile} \email{[email protected]} \date{\today} \thanks{Partially supported by FONDECYT 1120709. CONICYT-PCHA/Mag\'ister Nacional/2013-221320155. An{\'\i}bal Coronel thanks for the support of research projects 124109 3/R, 104709 01 F/E and 121909 GI/C at Universidad del B{\'\i}o-B{\'\i}o, Chile.} \keywords{Abstract delay differential equations, Almost automorphic, Exponential dichotomy, Ergodicity, Evolution operator} \begin{abstract} Existence of almost automorphic solutions for abstract delayed differential equations is established. Using ergodicity, exponential dichotomy and Bi-almost automorphicity on the homogeneous part, sufficient conditions for the existence and uniqueness of almost automorphic solutions are given. 
\end{abstract} \maketitle \numberwithin{equation}{section} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{defn}{Definition}[section] \newtheorem{conj}{Conjecture}[section] \newtheorem{exmp}{Example}[section] \newtheorem{rem}{Remark}[section] \allowdisplaybreaks \section{Introduction} The development of the theory of almost periodic type functions has been strongly stimulated by problems arising in differential equations, stability theory, dynamical systems and many other areas of science. Nowadays, there is also a wide range of applications, starting from basic mathematical models based on linear ordinary differential equations and including nonlinear ordinary differential equations, differential equations in Banach spaces, and partial differential equations. Moreover, there exist several related concepts which arise as generalizations of the almost periodic concept, for instance the notions of almost automorphic, asymptotically almost periodic, asymptotically almost automorphic and pseudo almost periodic functions. Since there are plenty of results in the literature, let us just quote, for their applications in engineering and the life sciences, asymptotically almost periodic functions \cite{Hernandez&Santos,Henriquez&Pierri&Taboas_1, Henriquez&Pierri&Taboas_2,liang,Nicola&Pierre, pinto_robledo_nonl,pinto_robledo_jmaa,pinto_robledo_applmat,pinto_robledo_glass, Utz&Waltman,yoshizawa} and pseudo almost periodic functions \cite{Cuevas&Pinto,diagana,pinto_3}. Moreover, we recall that N'Gu\'er\'ekata has given a huge impulse to the study of almost automorphic solutions of differential equations \cite{Blot-Mophou-N'Guerekata-Pennequin,Cieutat-Fatajou-N'Guerekata, Fatajou-Minh-N'Guerekata-Pankov,Gal-N'Guerekata,Goldestein&Guerekata, Liu-N'Guerekata-Minh,Minh-Naito-N'Guerekata, N'Guerekata_1,N'Guerekata_2}.
For some recent results on almost automorphic differential equations, consult also \cite{castillo,alan}. In this paper, we are initially motivated by a biological-mathematical model \cite{feng,grimmer,liz-martinez,Wei&Wang_1,Wei&Wang_2,zhao} which is a delayed differential equation of the following type \begin{eqnarray*} y'(t)=-\delta(t) y(t)+p(t)g(y(t-\tau)), \end{eqnarray*} with $\tau>0$, $\delta$ and $p$ almost automorphic functions, and $g$ a Lipschitz function. Then, we focus our attention on the existence and uniqueness of solutions of the following delayed differential equation \begin{eqnarray} y' &=& A(t)y+f(t)+g(t,y(t-\tau)) \qquad\mbox{with $\tau\geq 0$,} \label{delay_lineal+f+g} \end{eqnarray} under several assumptions on $A$, $f$ and $g$. Naturally, the assumptions on $A$ and $f$ are related to almost automorphic behavior, and the assumptions on $g$ are mainly related to a Lipschitz requirement. We note that \eqref{delay_lineal+f+g} naturally includes as particular cases the following equations \begin{eqnarray} y'&=&A(t)y,\label{lineal}\\ y'&=&A(t)y+f(t),\label{lineal+f}\\ y' &=& A(t)y+f(t)+g(t,y(t)).\label{lineal+f+g} \end{eqnarray} Thus, following the natural sequence of the classical systematic study of ordinary differential equations, we start by analyzing the homogeneous linear equation \eqref{lineal}. Then, we develop the theory for the non-homogeneous linear equation \eqref{lineal+f} by applying the method of variation of parameters. In the third place, we analyze the nonlinear equation \eqref{lineal+f+g} by using fixed point arguments. Finally, by a composition result for almost automorphic functions, we extend the results for \eqref{lineal+f+g} to the delay equation \eqref{delay_lineal+f+g}. The main contributions and the organization of the paper are given as follows.
In section~\ref{sec:preliminar} we introduce the general assumptions, recall the concepts of almost automorphicity and ergodic functions, define a convolution operator, and obtain the results for \eqref{lineal}. To be a little more precise, in this section we obtain some conditions for the exponential dichotomy using ergodic functions, and we prove the Bi-almost periodicity and Bi-almost automorphicity of the Green function when the evolution operator commutes with the projection. We note that the integral Bi-almost automorphicity property of the Green function is fundamental to obtain the main results. In section~\ref{sec:mainres}, the almost automorphicity of solutions of the nonautonomous systems \eqref{lineal+f}, \eqref{lineal+f+g} and \eqref{delay_lineal+f+g} is obtained. Here, the almost automorphicity of the solutions of the differential equations is obtained by assuming that $A$ and $f$ are almost automorphic and that $g$ satisfies \eqref{eq:condition_L}. Finally, in section~\ref{sec:appl} we study a biological model, establishing an explicit condition under which there exists a unique almost automorphic solution of the Lasota-Wazewska equation. \section{Preliminaries} \label{sec:preliminar} In this section we present some general assumptions, make precise the concepts related to the almost automorphic notion, recall the notion of ergodic functions, define a convolution operator and the $\alpha$-exponential dichotomy, and introduce several results for the homogeneous equation \eqref{lineal} in the scalar, system and abstract cases. \subsection{General assumptions} Here we present two general assumptions. Firstly, throughout the paper $(V,\Vert \cdot \Vert_{V})$ will be a Banach space, and $(BC(\mathbb{R},V), \Vert \cdot \Vert_{\infty})$ will denote the Banach space of bounded continuous functions from $\mathbb{R}$ into $V$ endowed with the sup norm $\Vert \varphi \Vert_{\infty}=\sup_{t\in \mathbb{R}} \Vert \varphi(t) \Vert_{V}$.
Second, concerning the assumptions on the coefficients $A$, $f$ and $g$ of equations \eqref{delay_lineal+f+g}-\eqref{lineal+f+g}, we comment that they will be stated specifically in the hypotheses of each result. However, in order to give a unified presentation, we introduce some notation related to the assumption of local Lipschitz behavior of $g$. Indeed, given a function $g$, it is assumed that: \begin{eqnarray} \left. \begin{array}{cl} \mbox{($g_0$)}&\mbox{$g(t,0)=0$ for all $t\in\mathbb{R}$;}\\ \mbox{($g_1$)}&\mbox{The function $g(t,y)$ is continuous on $\mathbb{R}\times \Delta(\varphi_0,\rho)$, with $\Delta(\varphi_0,\rho)$ } \\ &\mbox{the closed ball centered at a given (fixed) function $\varphi_0:\mathbb{R}\rightarrow V $ with } \\ &\mbox{radius $\rho\in\mathbb{R}^+$, i.e., $ \Delta(\varphi_0,\rho)= \Big\{ \varphi :\mathbb{R}\rightarrow V \Big|\; \| \varphi - \varphi_0 \|_{\infty} \leq \rho\; \Big\}. $ In par-}\\ &\mbox{ticular, in subsections~\ref{subsec:lineal+f+g}-\ref{subsec:delay_lineal+f+g} it will be assumed that $\varphi_{0}$ is of the form} \\ &\mbox{$\varphi_{0}(t)=\int_{\mathbb{R}} G(t,s)f(s)ds$, with $G$ the Green function defined in \eqref{green_c0}; }\\ \mbox{($g_2$)}&\mbox{There exists a positive constant $L$ such that the inequality}\\ & \mbox{$\| g(t,y_1)-g(t,y_2) \| \leq L \| y_1 -y_2 \|$ holds for all $(t,y_1,y_2)\in\mathbb{R}\times\Delta(\varphi_0,\rho)^2$.} \end{array} \right\} \label{eq:condition_L} \end{eqnarray} This set of conditions ($g_0$)-($g_2$) appears in several parts of the paper, essentially when we study the nonlinear equations in subsections~\ref{subsec:lineal+f+g}-\ref{subsec:delay_lineal+f+g}. \subsection{Almost automorphic notion and related concepts.} We recall that almost automorphic functions were developed by Bochner \cite{Bochner_1,Bochner_2} as a generalization of almost periodic functions.
We recall that a function $f\in BC(\mathbb{R},V)$ is called Bohr almost periodic \cite{corduneanu} if for each $\epsilon>0$ there exists $l_{\epsilon}>0$ such that every interval of length $l_{\epsilon}$ contains a number $\xi$ with the property: $\Vert f(t+\xi)-f(t) \Vert_{V} \leq \epsilon$ for all $t\in \mathbb{R}$. The set of Bohr almost periodic functions will be denoted by $AP(\mathbb{R},V)$. We now make precise the concept of almost automorphic functions and matrices. \begin{defn} \label{def_automorphic} Consider $V$ a Banach space. Then, \begin{enumerate}[(i)] \item A continuous function $\psi:\mathbb{R}\rightarrow V$ is called an almost automorphic function if for any sequence of real numbers $\{ \tilde{\tau}_n \}_{n=1}^{\infty}$ there exists a subsequence $\{\tau_n \}_{n=1}^{\infty}$ of $ \{ \tilde{\tau}_n \}_{n=1}^{\infty}$ such that the limit of the sequence $\{ \psi(t+\tau_n )\}_{n=1}^{\infty}$, denoted by $\tilde{\psi}(t)$, is well defined for all $t\in\mathbb{R}$ and the sequence $\{ \tilde{\psi}(t-\tau_n )\}_{n=1}^{\infty}$ converges pointwise on $\mathbb{R}$ to $\psi(t)$; equivalently, \begin{eqnarray} \tilde{\psi}(t)=\lim_{n\rightarrow \infty } \psi(t+\tau_{n}) \quad\mbox{and}\quad \psi(t)=\lim_{n\rightarrow \infty } \tilde{\psi}(t-\tau_{n})\label{tilde_psi} \end{eqnarray} are well defined for all $t\in \mathbb{R}$.
The collection of all almost automorphic functions from $\mathbb{R}$ to $V$ is denoted by $AA(\mathbb{R},V).$ \item A matrix valued function $A:\mathbb{R}\to\mathbb{C}^{d_1\times d_2}$ is called an almost automorphic matrix valued function or, equivalently (most of the time for briefness), $A(t)\in \mathbb{C}^{d_1\times d_2}$ is called an almost automorphic matrix, if for any sequence $\{\xi'_{n} \}_{n=1}^{\infty}\subset \mathbb{R}$ there exist a subsequence $\{\xi_n\}_{n=1}^{\infty}$ of $ \{\xi'_n\}_{n=1}^{\infty}$ and a matrix $B(t)\in \mathbb{C}^{d_1\times d_2}$ such that the sequences $\{A(t+\xi_n)\}_{n=1}^{\infty}$ and $\{B(t-\xi_n)\}_{n=1}^{\infty}$ converge pointwise to $B(t)$ and $A(t)$, respectively. \end{enumerate} \end{defn} We note that the convergence in \eqref{tilde_psi} is pointwise. Hence, the function $\tilde{\psi}$ in \eqref{tilde_psi} is measurable, but not necessarily continuous. Moreover, we note that if the convergence in Definition \ref{def_automorphic} is uniform on $\mathbb{R}$ instead of pointwise, then the function $\psi$ is Bochner almost periodic. It is well known that both definitions of almost periodicity (Bohr and Bochner) are equivalent; see for instance \cite{corduneanu}. Now, we note that $AP(\mathbb{R},V)$ and $AA(\mathbb{R},V)$ are vector spaces and that $AP(\mathbb{R},V)$ is a proper subspace of $AA(\mathbb{R},V)$, since for instance $\psi (t)= \cos \left( [2+\sin(t)+\sin(t\sqrt{2})]^{-1} \right) $ is an almost automorphic function but not an almost periodic one. Similarly, it is proven that the inclusion $AA(\mathbb{R},V)\subset BC(\mathbb{R},V)$ holds; for an extensive discussion consult \cite{caraballo-cheban,Bochner_2,Goldestein&Guerekata, Gal-N'Guerekata,Liu-N'Guerekata-Minh,Minh-Naito-N'Guerekata, Minh-Dat,N'Guerekata_1,N'Guerekata_2,Veech_1,Veech_2, Zaidman_1,Zaki,Ding&Xiao&Liang,Blot-Mophou-N'Guerekata-Pennequin, Xiao-Zhu-Liang,Zaidman_2,Fatajou-Minh-N'Guerekata-Pankov}.
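For illustration only (an ad hoc numerical sketch, not part of the formal development): Bohr almost periodic functions are uniformly continuous, while for the function $\psi(t)= \cos \left( [2+\sin(t)+\sin(t\sqrt{2})]^{-1} \right)$ the quantity $2+\sin t+\sin(t\sqrt{2})$ has infimum $0$ by Kronecker's theorem, so the argument of the cosine is unbounded and $\psi$ oscillates arbitrarily fast; hence $\psi$ cannot be almost periodic. The sketch below (search range and thresholds chosen ad hoc) exhibits this fast oscillation.

```python
import math

def g(t):  # inner function 2 + sin t + sin(sqrt(2) t); inf g = 0 (Kronecker)
    return 2.0 + math.sin(t) + math.sin(t * math.sqrt(2.0))

def psi(t):  # the example psi(t) = cos(1/g(t))
    return math.cos(1.0 / g(t))

# grid search for a point where g is small (t in [0, 800), step 0.001)
t_star = min((k * 0.001 for k in range(800_000)), key=g)
assert g(t_star) < 0.05  # g already gets close to its infimum here

# near t_star the argument 1/g is large, so psi sweeps through many
# periods within a window of width 1: the modulus of continuity blows up
vals = [psi(t_star + k * 0.0005) for k in range(-1000, 1000)]
print(max(vals) - min(vals) > 1.5)
```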
To close this subsection we introduce two additional facts. Firstly, we note that the simpler equation \eqref{lineal+f} with $A \equiv 0$, i.e., $y'(t)=f(t)$ with $f\in AA(\mathbb{R},V)$, does not necessarily have a solution $y\in AA(\mathbb{R},V)$. However, this is true in a uniformly convex Banach space $V$, and hence in every Hilbert space; see Theorem~\ref{1.1}. In the second place, we need a composition result \cite{Cieutat-Fatajou-N'Guerekata}, which will be fundamental for the analysis of \eqref{lineal+f+g} and \eqref{delay_lineal+f+g}; see Proposition~\ref{composition}. \begin{thm}\label{1.1} Denote by $C_0$ the vector space formed by the functions which vanish at infinity. Assume that $V$ is a Banach space which does not contain $C_{0}$ as an isomorphic subspace and let $f\in AA(\mathbb{R},V)$. Then, the function $F(t)=\int_0^{t}f(s)ds$ is in $AA(\mathbb{R},V)$ if and only if it is bounded.\\ A Banach space with this property for $F$ will be called a Banach space with the Bohl-Bohr property. \end{thm} \begin{prop}\label{composition} Let $g=g(t,y)\in AA(\mathbb{R}\times V,V)$, uniformly in $t$ for $y$ in a compact set contained in $V$, and assume that $g$ satisfies the assumptions given in \eqref{eq:condition_L}. Then, $g(t,\varphi(t))\in AA(\mathbb{R},V)$ for all $\varphi\in AA(\mathbb{R},\Delta (\varphi_0,\rho))$. \end{prop} \subsection{Ergodic functions} Here we introduce the concept of ergodic functions and deduce that these types of functions naturally imply an exponential behavior (an $\alpha$-exponential dichotomy, to be more precise). \begin{defn}\label{mean} A function $f\in BC(\mathbb{R},V)$ is called an ergodic function if the limit \begin{eqnarray*} M(f) = \lim_{T\rightarrow \infty} \frac{1}{2T} \int_{-T+\xi}^{T+\xi}f(s)ds \end{eqnarray*} exists uniformly with respect to $\xi\in \mathbb{R}$ and its value is independent of $\xi$. The value $M(f)$ is called the mean of the function $f$.
\end{defn} The mean of an ergodic function has several properties; a complete list of properties may be consulted in \cite{Zhang_ergodic}. Among these useful basic properties, we only recall the translation invariance property, since it will be used frequently in the proofs given below in this paper. Indeed, the translation invariance property of $M(f)$ states that $M(f)$ satisfies the following identity \begin{eqnarray} M(f)=M(f_{\xi}),\ \label{translation_invarance} \end{eqnarray} where $f_{\xi}$ denotes the $\xi$-translation of $f$, i.e. $f_{\xi}(t)=f(t+\xi)$ for all $t\in \mathbb{R}$ and any arbitrarily given (but fixed) $\xi \in \mathbb{R}$. \begin{lem}\label{lema_mean} Consider $\mu\in BC(\mathbb{R},\mathbb{C})$ an ergodic function with $\mathrm{Re}(M(\mu))\neq 0$, and consider $\alpha\in\mathbb{R}^+$. Then, there exist two positive constants $T_0$ (large enough) and $c$ such that the two assertions given below are valid: \begin{enumerate}[(i)] \item \label{int_exponencial_negativo} If $\mathrm{Re}(M(\mu))\in ]-\infty,-\alpha[\subset\mathbb{R}^-$, then the following inequalities hold true: \begin{eqnarray} &&\mathrm{Re}\left( \int_{s}^{t}\mu(r)dr\right) <-\alpha(t-s) \quad \mbox{for} \quad t-s>T_0,\label{int_negativa} \\ &&\left| \exp\left({\int_{s}^{t}\mu(r)dr}\right)\right | \leq c\exp({-\alpha(t-s)}) \quad \mbox{for all $(t,s)\in\mathbb{R}^2$ such that $t\geq s\geq 0$}.
\label{exp_negativa} \end{eqnarray} \item \label{int_exponencial_positivo} If $\mathrm{Re}(M(\mu))\in ]\alpha,\infty[\subset\mathbb{R}^+$, then the following inequalities hold true: \begin{eqnarray} &&\mathrm{Re}\left( \int_{s}^{t}\mu(r)dr\right) <\alpha(t-s) \quad \mbox{for} \quad s-t>T_0,\label{int_positiva} \\ &&\left| \exp\left({\int_{s}^{t}\mu(r)dr}\right)\right | \leq c\exp({\alpha(t-s)}) \quad \mbox{for all $(t,s)\in\mathbb{R}^2$ such that $s\geq t\geq 0$.} \label{exp_postiva} \end{eqnarray} \end{enumerate} \end{lem} \begin{proof} Let us assume that $\mu\in BC(\mathbb{R},\mathbb{C})$ is an ergodic function. Then, by Definition~\ref{mean} and the translation invariance property of $M(\mu)$ (see \eqref{translation_invarance}), we have that \begin{eqnarray} \int_{0}^{T}\mu(s+\tau)d\tau=[M(\mu)+o(1)]T \quad\mbox{when}\quad T\rightarrow \infty, \end{eqnarray} uniformly in $s$. Here and throughout the paper, $o(1)$ corresponds to the well known Bachmann-Landau notation, i.e. $f=o(g)$ if and only if $(f/g)(x)\to 0$ when $x\to \infty.$ Then, taking $T=t-s$ and substituting $r=s+\tau$, we get \begin{eqnarray} \int_{s}^{t}\mu(r)dr=\int_{0}^{T} \mu(s+\tau)d\tau, \end{eqnarray} and the proof of \eqref{int_negativa} follows immediately. The proof of \eqref{exp_negativa} is then a consequence of the monotonicity of the exponential function. Thus, the proof of item {\it (i)} is completed. Now, the proof of item {\it (ii)} is similar and we omit it. \end{proof} \subsection{The convolution operator} Let us denote by $L^1(\mathbb{R})$ and $L^\infty(\mathbb{R})$ the spaces of Lebesgue integrable functions on $\mathbb{R}$ and essentially bounded functions on $\mathbb{R}$, respectively.
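As a numerical illustration of Lemma~\ref{lema_mean}$(i)$ (a sketch with hypothetical data, not part of the formal development), take $\mu(t)=-1+\tfrac{1}{2}\cos t$, an ergodic function with $M(\mu)=-1$; then $\int_s^t\mu(r)dr=-(t-s)+\tfrac12(\sin t-\sin s)$, and the bound \eqref{exp_negativa} holds with, for instance, the constants $c=e$ and $\alpha=0.9$:

```python
import math

def mu_integral(s, t):
    # closed form of the integral of mu(r) = -1 + 0.5*cos(r) over [s, t]
    return -(t - s) + 0.5 * (math.sin(t) - math.sin(s))

c, alpha = math.e, 0.9  # hypothetical dichotomy constants for this mu
grid = [k * 0.25 for k in range(160)]  # sample points in [0, 40)
ok = all(
    math.exp(mu_integral(s, s + d)) <= c * math.exp(-alpha * d) + 1e-12
    for s in grid
    for d in grid  # d = t - s >= 0
)
print(ok)  # |exp(int_s^t mu)| <= c*exp(-alpha*(t-s)) on the whole grid
```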
Then, the convolution operator $\mathcal{L}:L^\infty(\mathbb{R})\to L^\infty(\mathbb{R})$ is defined as the operator such that \begin{eqnarray} \mathcal{L}(\varphi)(t)=\int_{\mathbb{R}} h(t-s)\varphi(s)ds, \quad h\in L^{1}(\mathbb{R}), \quad t\in \mathbb{R}, \label{convolution_operator} \end{eqnarray} for all $\varphi \in L^\infty(\mathbb{R})$. Some properties of $\mathcal{L}$, which will be needed in the proofs of the main results, are summarized in the following lemma. \begin{lem}\label{2.2} Let $\mathcal{L}$ be the convolution operator defined by \eqref{convolution_operator}. Then, the spaces $BC(\mathbb{R}),$ $AP(\mathbb{R})$ and $AA(\mathbb{R})$ are invariant under the operator $\mathcal{L}$. Moreover, the inequalities \begin{eqnarray} \|\mathcal{L}\varphi\|_{L^\infty(\mathbb{R})} &\le & \|\varphi\|_{L^\infty(\mathbb{R})} \|h\|_{L^1(\mathbb{R})} \quad \mbox{for $\varphi\in BC(\mathbb{R})$,} \label{cerrado_BC} \\ \|(\mathcal{L}\varphi)_{\xi}-\mathcal{L}\varphi\|_{L^\infty(\mathbb{R})} &\le & \|(\varphi)_{\xi}-\varphi\|_{L^\infty(\mathbb{R})} \|h\|_{L^1(\mathbb{R})} \quad \mbox{for $\xi\in\mathbb{R}$ and $\varphi\in AP(\mathbb{R})$,} \label{cerrado_AA} \end{eqnarray} hold. Here $(\mathcal{L}\varphi)_{\xi}$ and $(\varphi)_{\xi}$ are the $\xi$-translations of $\mathcal{L}\varphi$ and $\varphi$, respectively. \end{lem} \begin{proof} Let us select $\varphi\in AA(\mathbb{R})$. Then, by Definition~\ref{def_automorphic}, given an arbitrary sequence of real numbers $\{ \tilde{\tau}_n \}_{n=1}^{\infty}$, there exists a subsequence $\{\tau_n \}_{n=1}^{\infty}$ of $ \{ \tilde{\tau}_n \}_{n=1}^{\infty}$ such that \eqref{tilde_psi} is satisfied. Now, if we consider $\psi=\mathcal{L}(\varphi)$, we have that \eqref{tilde_psi} can be equivalently rewritten as follows \begin{eqnarray} \tilde{\psi}(t) = \lim_{n\rightarrow \infty} \psi(t+\tau_n) \quad \mbox{and} \quad \psi(t) =\lim_{n\rightarrow \infty} \tilde{\psi}(t-\tau_n).
\end{eqnarray} Indeed, this fact can be proved by application of Lebesgue's dominated convergence theorem, since \begin{eqnarray*} \tilde{\psi}(t) &=& \lim_{n\rightarrow \infty} \mathcal{L}(\varphi_{\tau_n})(t) = \lim_{n\rightarrow \infty} \int_{\mathbb{R}}h(r)\varphi_{\tau_{n}}(t-r)dr \\ &=& \int_{\mathbb{R}}h(r)\tilde{\varphi}(t-r)dr = \mathcal{L}(\tilde{\varphi})(t). \end{eqnarray*} Now, let us consider $\varphi\in BC(\mathbb{R})$; then we deduce \eqref{cerrado_BC} by application of the H\"older inequality, and from \eqref{cerrado_BC} the invariance of $BC(\mathbb{R})$ follows. \end{proof} We note that, if we define \begin{eqnarray*} h_1(x)= \left\{ \begin{array}{lll} \exp(-\alpha x),&\quad&x>0,\\ 0,&&\mbox{otherwise}, \end{array} \right. \quad \mbox{and} \quad h_2(x)= \left\{ \begin{array}{lll} \exp(\alpha x),&\quad&x<0,\\ 0,&&\mbox{otherwise}, \end{array} \right. \end{eqnarray*} for some $\alpha\in\mathbb{R}^+$, and denote by $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ the corresponding convolution operators associated with $h_1$ and $h_2$, respectively, then we get an interesting result by application of Lemma~\ref{2.2}. More precisely, we have the following corollary. \begin{cor} Consider $\alpha\in\mathbb{R}^+$ and the operators $\mathcal{L}_{i}$, $i=1,2$, defined by \begin{eqnarray} \mathcal{L}_{1}(\varphi)(t) =\int_{-\infty}^{t}e^{-\alpha(t-s)}\varphi(s)ds \quad\mbox{and}\quad \mathcal{L}_{2}(\varphi)(t) =\int_{t}^{\infty}e^{\alpha(t-s)}\varphi(s)ds, \label{eq:operators_exp_green} \end{eqnarray} respectively. Then, the spaces $BC(\mathbb{R}),$ $AP(\mathbb{R})$ and $AA(\mathbb{R})$ are invariant under the operators $\mathcal{L}_{i}$, $i=1,2$. Moreover, the inequalities \eqref{cerrado_BC} and \eqref{cerrado_AA} are satisfied with $\mathcal{L}_{i}$, $i=1,2$, instead of~$\mathcal{L}$. \end{cor} \subsection{Some concepts and properties related to equation \eqref{lineal}} In this subsection we study equation \eqref{lineal}.
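For illustration (a numerical sketch with hypothetical data, not part of the formal development), take $\alpha=1$ and $\varphi=\cos$ in \eqref{eq:operators_exp_green}; a direct computation gives $\mathcal{L}_1(\cos)(t)=\tfrac12(\cos t+\sin t)$, so that $\Vert \mathcal{L}_1\varphi\Vert_{L^\infty(\mathbb{R})}=\tfrac{\sqrt{2}}{2}\le \Vert\varphi\Vert_{L^\infty(\mathbb{R})}\Vert h_1\Vert_{L^1(\mathbb{R})}=1$, in agreement with \eqref{cerrado_BC}:

```python
import math

def L1_cos(t, alpha=1.0, tail=40.0, n=50_000):
    # midpoint-rule approximation of the integral of
    # exp(-alpha*(t - s)) * cos(s) over s in (-inf, t],
    # truncated at s = t - tail (the tail is O(e^{-alpha*tail}))
    h = tail / n
    return h * sum(
        math.exp(-alpha * (tail - (k + 0.5) * h))
        * math.cos(t - tail + (k + 0.5) * h)
        for k in range(n)
    )

for t in (0.0, 1.0, 2.5, -3.0):
    exact = 0.5 * (math.cos(t) + math.sin(t))  # closed form for alpha = 1
    assert abs(L1_cos(t) - exact) < 1e-3
print("ok")
```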
In order to introduce the concepts and results, we recall the standard notation for the fundamental matrix and the evolution operator associated with \eqref{lineal}, which are denoted by $\Phi_A$ and $\Psi_A$, respectively. More precisely, \begin{eqnarray} \left. \begin{array}{l} \mbox{Given a matrix $A(t)$, the notation $\Phi_A=\Phi_{A} (t)$ and $\Psi_A$ is used for } \\ \mbox{a fundamental matrix of the system \eqref{lineal} and for the application defined} \\ \mbox{by $\Psi_A(t,s)=\Phi_{A} (t)\Phi_{A}^{-1}(s)$.} \end{array} \right\} \label{notation_linal} \end{eqnarray} \begin{lem} \label{lema_identidad} Consider the notation \eqref{notation_linal}. Then, the identities \begin{eqnarray} \Psi_{A}(t,s)-\Psi_{B}(t,s) &=&\int_{s}^{t}\Psi_{B}(t,r)[A(r)-B(r)]\Psi_{A}(r,s)dr, \label{identidad_lem} \\ \Psi_{A}(t+\xi,s+\xi)-\Psi_{B}(t,s) &=&\int_{s}^{t}\Psi_{B}(t,r)[A(r+\xi)-B(r)]\Psi_{A}(r+\xi, s+\xi)dr, \label{identidad_cor} \end{eqnarray} are satisfied for all $(t,s,\xi)\in \mathbb{R}^3.$ \end{lem} \begin{proof} Let us denote by $H$ the function defined by the correspondence rule $H(t,s)=\Psi_{A}(t,s)-\Psi_{B}(t,s)$. Then, by partial differentiation of $H$ with respect to the first variable and making some rearrangements, we get \begin{eqnarray*} \frac{\partial}{\partial t}H(t,s) &=&\frac{\partial}{\partial t}\Psi_{A}(t,s) -\frac{\partial}{\partial t}\Psi_{B}(t,s) \\ &=& (A(t)-B(t))\Psi_{A}(t,s)+B(t)(\Psi_{A}(t,s)-\Psi_{B}(t,s)) \\ &=& (A(t)-B(t))\Psi_{A}(t,s)+B(t)H(t,s). \end{eqnarray*} Now, multiplying on the left by $\Psi_{B}(t,r)$ and simplifying, we deduce that \begin{eqnarray*} \Psi_{B}(t,r)(A(r)-B(r))\Psi_{A}(r,s) &=&\Psi_{B}(t,r)\frac{\partial}{\partial r}H(r,s)-\Psi_{B}(t,r)B(r)H(r,s)\\ &=&\Psi_{B}(t,r)\frac{\partial} {\partial r}H(r,s)-\left(\frac{\partial}{\partial r}\Psi_{B}(t,r)\right)H(r,s)\\ &=&\frac{\partial}{\partial r}\left(\Psi_{B}(t,r)H(r,s) \right).
\end{eqnarray*} Thus, integrating over the interval $[s,t]$, we have that \begin{eqnarray*} \int_{s}^{t}\Psi_{B}(t,r)[A(r)-B(r)]\Psi_{A}(r,s)dr &=&\Psi_{B}(t,t)H(t,s)-\Psi_{B}(t,s)H(s,s), \end{eqnarray*} which implies \eqref{identidad_lem}, by noticing that $\Psi_{B}(t,t)=I$ and $H(s,s)=0$. Now, the proof of \eqref{identidad_cor} follows by similar arguments or by direct application of \eqref{identidad_lem}. \end{proof} \begin{lem} \label{bi_ap} Consider the notation \eqref{notation_linal} and the sets $ \overrightarrow{\mathbb{R}}^2$ and $ \overleftarrow{\mathbb{R}}^2$ defined as follows \begin{eqnarray*} \overrightarrow{\mathbb{R}}^2=\Big\{(s,t)\in\mathbb{R}^2\;:\; s>t\Big\} \quad \mbox{and} \quad \overleftarrow{\mathbb{R}}^2=\Big\{(s,t)\in\mathbb{R}^2\;:\; s<t\Big\}, \end{eqnarray*} respectively. Assume that the following three statements are true: $A(t)$ is an almost periodic matrix (see Definition~\ref{def_automorphic}), $P$ is a constant projection matrix that commutes with $\Phi_A$, and for some given positive constants $c$ and $\alpha$ the inequality \begin{eqnarray} \| \Psi_A(t,s)P \| \leq c\;\exp(-\alpha |t-s|), \label{dicotomia_Phi_P} \end{eqnarray} is satisfied for all $(t,s)\in \overrightarrow{\mathbb{R}}^2$ \Big(or for all $(t,s)\in \overleftarrow{\mathbb{R}}^2$\Big). Then, for all $(t,s,\xi)\in \overrightarrow{\mathbb{R}}^2\times\mathbb{R}$ \Big(respectively $(t,s,\xi)\in \overleftarrow{\mathbb{R}}^2\times\mathbb{R}$\Big), there exist two real constants $c_1>0$ and $\alpha'\in ]0,\alpha[$ such that \begin{eqnarray} \| \Psi_A(t+\xi ,s+\xi)P-\Psi_A(t,s)P \| \leq c_1 \Vert A(\cdot +\xi)-A(\cdot)\Vert_{\infty} \exp({-\alpha '\vert t-s\vert}).
\label{des_A} \end{eqnarray} In particular, if $\xi$ is an $\epsilon$-almost period of $A$, the inequality \begin{eqnarray} \| \Psi_A(t+\xi ,s+\xi)P-\Psi_A(t ,s)P \| \leq c_1\;\epsilon \exp(-\alpha' \vert t-s \vert), \label{bi_phiP} \end{eqnarray} is satisfied for all $(t,s,\xi)\in \overrightarrow{\mathbb{R}}^2\times\mathbb{R}$ \Big(respectively $(t,s,\xi)\in \overleftarrow{\mathbb{R}}^2\times\mathbb{R}$\Big); equivalently, $\Psi_A(t,s)P$ is Bi-almost periodic. \end{lem} \begin{proof} Since the proofs for $(t,s,\xi)\in \overrightarrow{\mathbb{R}}^2\times\mathbb{R}$ and for $(t,s,\xi)\in \overleftarrow{\mathbb{R}}^2\times\mathbb{R}$ are analogous, we consider only one of the cases. In order to fix ideas, let us consider $(t,s,\xi)\in \overrightarrow{\mathbb{R}}^2\times\mathbb{R}$ and recall the notation $\Delta_{\xi}$ defined by \begin{eqnarray} \mbox{$\Delta_{\mathbf{\xi}}F(\mathbf{x}) =F(\mathbf{x}+\mathbf{\xi})-F(\mathbf{x})$ for any function $F$}. \label{shif_notation} \end{eqnarray} In particular, for instance, we have that $\Delta_{\xi}\Psi_A(t,s)=\Psi_A(t+\xi,s+\xi)-\Psi_A(t,s)$ and $\Delta_{\xi}A(t)=A(t+\xi)-A(t)$. From \eqref{identidad_cor} with $B=A$ and the hypothesis that $P$ is a constant projection matrix, i.e. $P^{2}=P$, which commutes with $\Phi_A(t)$ for every $t\in \mathbb{R}$, we have that \begin{eqnarray} \Delta_{\xi}\Psi_{A}(t,s)P =\int_{s}^{t}\Psi_{A}(t,r)P\Delta_{\xi}A(r)\Psi_{A_\xi}(r,s)Pdr. \label{identidad_projection} \end{eqnarray} Now, by the assumption \eqref{dicotomia_Phi_P} (applied to both factors, noticing that $\Psi_{A_\xi}(r,s)=\Psi_{A}(r+\xi,s+\xi)$), it follows that \eqref{identidad_projection} implies the estimate \begin{eqnarray*} \|\Delta_{\xi}\Psi_{A}(t,s)P\| &\le& c^{2}\;\exp(-\alpha |t-s|) \int_{s}^{t}\|\Delta_{\xi}A(r)\|dr \\ &\le& c^{2}\;\exp(-\alpha |t-s|)\; \|\Delta_{\xi}A\|_{\infty}\;|t-s|, \end{eqnarray*} which implies \eqref{des_A}, since $|t-s|\exp(-(\alpha-\alpha')|t-s|)$ is bounded. The inequality \eqref{bi_phiP} follows immediately from \eqref{des_A} using the fact that $\xi$ is an $\epsilon$-almost period of $A$.
\end{proof} \begin{defn} \label{intregal_bi_AA} Consider the notation \eqref{notation_linal} and assume that $A(t)$ is an almost automorphic matrix (see Definition~\ref{def_automorphic}-(ii)). If the following convergence holds \begin{eqnarray} \lim_{n\to\infty} \int_{-\infty}^{t} \left\| \Big(\Psi_{A_{\xi_{n}}}-\Psi_{B}\Big)(t,s)P \right\| ds = \lim_{n\to\infty} \int^{t}_{-\infty}\left\| \Big(\Psi_{B_{-\xi_{n}}}-\Psi_{A}\Big)(t,s)P \right\| ds =0, \label{exp_phi_aa} \end{eqnarray} then the application $\Psi_{A}$ is called integrally Bi-almost automorphic on $]-\infty,t]$. Similarly, if the following convergence holds \begin{eqnarray} \lim_{n\to\infty} \int_{t}^{\infty} \left\| \Big(\Psi_{A_{\xi_{n}}}-\Psi_{B}\Big)(t,s)P \right\| ds = \lim_{n\to\infty} \int_{t}^{\infty}\left\| \Big(\Psi_{B_{-\xi_{n}}}-\Psi_{A}\Big)(t,s)P \right\| ds =0, \label{exp_phi_bbbb} \end{eqnarray} then the application $\Psi_{A}$ is called integrally Bi-almost automorphic on $[t,\infty[$. Here $\{\xi_{n} \}_{n=1}^{\infty}$ and $B(t)$ denote the subsequence and the matrix with the properties given in Definition~\ref{def_automorphic}-(ii). \end{defn} \begin{lem} \label{bi_aa} Consider the notation \eqref{notation_linal}. If the assumptions of Lemma~\ref{bi_ap} hold, then $\Psi_A$ is integrally Bi-almost automorphic on $]-\infty,t]$ and on $[t,\infty[$. \end{lem} \begin{proof} Let us consider that $A(t)$ is an almost automorphic matrix. Then, by Definition~\ref{def_automorphic}-(ii), for any sequence $\{\xi'_{n} \}_{n=1}^{\infty}\subset \mathbb{R}$ there exist a subsequence $\{\xi_n\}_{n=1}^{\infty}$ of $ \{\xi'_n\}_{n=1}^{\infty}$ and a matrix $B(t)$ such that \begin{eqnarray} \lim_{n\to\infty} A_{\xi_n}(t)=B(t) \quad \mbox{and} \quad \lim_{n\to\infty} B_{-\xi_n}(t)=A(t) \quad \mbox{for all $t\in\mathbb{R}$}.
\label{lem:bi_alm_aut:1} \end{eqnarray} Now, from the identity \eqref{identidad_lem}, we deduce that the assumption \eqref{dicotomia_Phi_P} implies the following bounds \begin{eqnarray} \left\| \Big(\Psi_{A_{\xi_{n}}}-\Psi_{B}\Big)(t,s)P\right\| &\leq& c^{2}e^{-\alpha\vert t-s\vert} \left| \int_{s}^{t} \left\|(A_{\xi_{n}}-B)(r)\right\| dr \right| \nonumber\\ &\leq& c_1 e^{-\alpha' \vert t-s\vert}(\Vert A\Vert_{\infty}+\Vert B\Vert_{\infty}), \label{lem:bi_alm_aut:2} \\ \left\| \Big(\Psi_{B_{-\xi_{n}}}-\Psi_{A}\Big)(t,s)P\right\| &\leq& c^{2}e^{-\alpha\vert t-s\vert} \left| \int_{s}^{t} \left\|(B_{-\xi_{n}}-A)(r)\right\| dr \right| \nonumber\\ &\leq& c_1 e^{-\alpha' \vert t-s\vert}(\Vert A\Vert_{\infty}+\Vert B\Vert_{\infty}), \label{lem:bi_alm_aut:2_2} \end{eqnarray} for all $(t,s,n)\in\mathbb{R}^2\times\mathbb{N}$ and some real constants $c_1>0$ and $\alpha'\in ]0,\alpha[$. Then, by applying Lebesgue's dominated convergence theorem four times, we deduce the integrally Bi-almost automorphic property of $\Psi_A$. Indeed, firstly, by the first limit given in \eqref{lem:bi_alm_aut:1} we deduce that for each $(s,t)\in\mathbb{R}^2$ the integral $\int_{s}^{t} \| (A_{\xi_{n}}-B)(r) \| dr$ converges to $0$ when $n\to\infty$. Then, with this convergence in mind, in a second application, from \eqref{lem:bi_alm_aut:2} we get that for each $t\in \mathbb{R}$ the integral $\int_{-\infty}^{t} \left\| (\Psi_{A_{\xi_{n}}}-\Psi_{B})(t,s)P \right\| ds$ converges to $0$ when $n\to\infty$. Similarly, applying Lebesgue's theorem twice more (to the second limit in \eqref{lem:bi_alm_aut:1} and to the inequality \eqref{lem:bi_alm_aut:2_2}), we deduce that for each $(s,t)\in\mathbb{R}^2$ the integral $\int_{s}^{t} \| (B_{-\xi_{n}}-A)(r) \| dr$ converges to $0$ when $n\to\infty$ and that for each $t\in\mathbb{R}$ the integral $\int_{-\infty}^{t} \left\| (\Psi_{B_{-\xi_{n}}}-\Psi_{A})(t,s)P \right\| ds$ converges to $0$ when $n\to\infty$. Thus, \eqref{exp_phi_aa} holds and $\Psi_A$ is integrally Bi-almost automorphic on $]-\infty,t]$.
The proof of \eqref{exp_phi_bbbb} can be obtained by similar arguments. \end{proof} \begin{defn} \label{def_dichotomy} Consider the notation \eqref{notation_linal}. The linear system \eqref{lineal} has an $\alpha$-exponential dichotomy if there exist a projection $P$ and two positive constants $c$ and $ \alpha$ such that for all $ (t,s)\in \mathbb{R}^2$ the estimate \begin{eqnarray} \| G_A(t,s)\|\leq c\exp({-\alpha\vert t-s\vert}) \quad\mbox{with}\quad G_A(t,s)=\begin{cases} \Phi_A(t)P\Phi^{-1}_A(s), &\mbox{$t\geq s$,} \\ -\Phi_A(t)(I-P)\Phi^{-1}_A(s), &\mbox{otherwise,} \end{cases}\label{green} \end{eqnarray} is satisfied. The matrix $G_A$ is called the Green matrix associated with the dichotomy. \end{defn} \begin{lem}\label{bi_aa_integrable} Consider the notation \eqref{notation_linal} and define the Green operator $\Gamma$ by \begin{eqnarray*} (\Gamma\varphi)(t) &=& \int_{-\infty}^{\infty} G_A(t,s)\varphi(s)ds,\ t\in \mathbb{R}. \end{eqnarray*} Assume that $A(t)$ is an almost automorphic matrix and that \eqref{lineal} has an exponential dichotomy such that its projection commutes with the fundamental matrix $\Phi_A$. Then, the following assertions are satisfied: \begin{enumerate}[(i)] \item The Green matrix $G_A$ is integrally Bi-almost automorphic. \item The spaces $BC(\mathbb{R},V),$ $AP(\mathbb{R},V)$ and $AA(\mathbb{R},V)$ are invariant under the operator $\Gamma$. Moreover, there exist two positive constants $c_1$ and $c_2$ such that the following inequalities \begin{eqnarray*} && \| \Gamma \varphi\|_{\infty}\leq \frac{2c}{\alpha}\| \varphi \|_{\infty} \quad\mbox{for $\varphi\in BC(\mathbb{R},V)$,} \\ && \| (\Delta_{\xi}\Gamma\varphi)(t) \| \leq c_1\| \varphi\|_{\infty} \sum_{i=1}^2 \mathcal{L}_i\left(\vert\Delta_{\xi}A \vert\right) +c_2\sum_{i=1}^2 \mathcal{L}_i \left( \vert\Delta_{\xi}\varphi \vert\right) \quad\mbox{for $\varphi\in AP(\mathbb{R},V)$,} \end{eqnarray*} are satisfied.
Here $\mathcal{L}_i$ and $\Delta_{\xi}$ denote the operators defined in \eqref{eq:operators_exp_green} and \eqref{shif_notation}, respectively. \end{enumerate} \end{lem} \begin{proof} The proofs of {\it (i)} and {\it (ii)} are straightforward. Indeed, for {\it(i)}, let us consider $A(t),B(t)$ and $\{\xi_n\}_{n=1}^\infty$ as given in the proof of Lemma~\ref{bi_aa}. Then, by the assumptions, we can deduce that \begin{eqnarray*} \int_{-\infty}^{\infty}\| (G_{A_{\xi_n}}-G_{B})(t,s) \| ds\rightarrow 0 \quad \mbox{and} \quad \int_{-\infty}^{\infty}\| (G_{B_{-\xi_n}}-G_{A})(t,s) \| ds\rightarrow 0 \quad \mbox{ when $n\rightarrow \infty$,} \end{eqnarray*} where $G_{A}$ is the Green matrix defined in \eqref{green}. Thus, it follows that $G_{A}$ is integrally Bi-almost automorphic. Meanwhile, the proof of {\it (ii)} follows by application of Lemmas~\ref{lema_mean}, \ref{2.2} and \ref{bi_aa_integrable}-{\it(i)}. \end{proof} \section{Main Results} \label{sec:mainres} In this section we present several results of Massera type for \eqref{lineal+f}, \eqref{lineal+f+g} and \eqref{delay_lineal+f+g}, related to the almost automorphic behavior of $A$ and $f$. \subsection{Results for \eqref{lineal+f}} Here we present a result for the scalar abstract case, see Theorem~\ref{massera}. Then, we extend this result to linear triangular systems and to general linear constant systems, see Theorem~\ref{massera_matrix}. We also present a simple and useful relation between the finite- and infinite-dimensional cases, which is deduced from Theorem~\ref{teo:fin_vs_infin}. Finally, we present two results on the general case, see Theorems~\ref {teo:SoL_lineal+f_abstract} and \ref{cor_unica_solucion}. \begin{thm} \label{massera} Consider the equation \eqref{lineal+f} with $A=\mu:\mathbb{R}\to\mathbb{C}$ and denote by $g$ the application defined by \begin{eqnarray} g(s,t)=\exp\Big({\int_{s}^{t}\mu(r)dr}\Big).
\label{g:caso_escalar} \end{eqnarray} Then, the following assertions are valid: \begin{enumerate}[(i)] \item \label{massera_i} Assume that $\mu$ belongs to $ AA(\mathbb{R},\mathbb{C})$ and satisfies $M(\mathrm{Re}(\mu))\neq 0$. Assume also that $f$ belongs to $AA(\mathbb{R},V)$. Then, a solution $y$ of equation \eqref{lineal+f} is bounded if and only if $y\in AA(\mathbb{R},V)$; equivalently, the unique solution of equation \eqref{lineal+f} belonging to $ AA(\mathbb{R},V)$ is given by \begin{eqnarray} y(t)= \left\{ \begin{array}{lll} {\displaystyle\int_{-\infty}^{t}} g(s,t)f(s)ds, &&M(\mathrm{Re}(\mu))<0, \\ -{\displaystyle\int_{t}^{\infty}} g(s,t)f(s)ds, &&M(\mathrm{Re}(\mu))>0. \end{array} \right. \label{sol_heter_partes} \end{eqnarray} \item Assume that $\mu(t)=ia(t)$ with $\int_{0}^{t} a(s)ds$ bounded and $V$ a Bohl-Bohr Banach space. Then, a solution $y$ of equation \eqref{lineal+f} is bounded if and only if $y$ belongs to $AA(\mathbb{R},V)$, in which case it is given by \begin{eqnarray} y(t)=\exp\Big({i\int_{0}^{t}a(r)dr}\Big)v +\int_{0}^{t}\exp\Big({i\int_{s}^{t} a(r)dr}\Big)f(s)ds \quad \mbox{for some $v\in V$}. \end{eqnarray} \end{enumerate} \end{thm} \begin{proof} {\it(i)} Before proving the item, we deduce two estimates (see \eqref{inequality_cor} and \eqref{inequality_cor_2}) and introduce some notation (see \eqref{aa_mu} and \eqref{aa_f}). Firstly, by Lemma~\ref{lema_mean}, we can deduce that the scalar equation $x'=\mu(t)x$ has an $\alpha$-exponential dichotomy. Indeed, we note that by the hypothesis $M(\mathrm{Re}(\mu))\not=0$ we can always select $\alpha$ satisfying $\vert M(\mathrm{Re}(\mu))\vert \geq \alpha>0$. Then, by application of Lemma~\ref{lema_mean}, there exists a positive constant $c$ such that \begin{eqnarray} | g(t,s)|\leq ce^{-\alpha| t-s|} \quad \mbox{with $g$ defined in \eqref{g:caso_escalar}.
} \label{inequality_green} \end{eqnarray} Moreover, by application of Lemma \ref{bi_ap} and integration in $s$, we deduce that there exist $c_1\in\mathbb{R}^+$ and $\alpha'\in ]0,\alpha[$ such that the following inequalities \begin{eqnarray} &&\int_{-\infty}^{t} |\Delta_{\xi}g(t,s)|\;|f_{\xi}(s)| ds \leq \frac{c_1}{\alpha'} \| f\|_\infty\|\Delta_{\xi}\mu\|_\infty \quad \mbox{for all $t\in\mathbb{R}$,} \label{inequality_cor} \\ &&\int_{t}^{\infty} |\Delta_{\xi}g(t,s)|\;|f_{\xi}(s)| ds \leq \frac{c_1}{\alpha'} \| f\|_\infty\|\Delta_{\xi}\mu\|_\infty \quad \mbox{for all $t\in\mathbb{R}$,} \label{inequality_cor_2} \end{eqnarray} hold. Now, let $\{\xi_{n} \}_{n=1}^\infty$ be a sequence in $\mathbb{R}$. Then, since $\mu$ belongs to $ AA(\mathbb{R},\mathbb{C})$, there exist a subsequence $\{\xi'_{n} \}_{n=1}^\infty$ of $\{\xi_{n} \}_{n=1}^\infty$ and a function $\tilde{\mu}$ such that \begin{eqnarray} \lim_{n\rightarrow \infty} \mu_{\xi'_n}(t)=\tilde{\mu}(t), \quad \lim_{n\rightarrow \infty} \tilde{\mu}_{-\xi'_n}(t)=\mu(t), \quad\mbox{for all $t\in\mathbb{R}$}. \label{aa_mu} \end{eqnarray} Similarly, given the sequence $\{\xi'_{n} \}_{n=1}^\infty$, by the hypothesis $f\in AA(\mathbb{R},V)$ there exist a subsequence $\{\xi''_{n} \}_{n=1}^\infty$ of $\{\xi'_{n} \}_{n=1}^\infty$ and a function $\tilde{f}$ such that \begin{eqnarray} &&\lim_{n\rightarrow \infty} f_{\xi''_n}(t)=\tilde{f}(t), \quad \lim_{n\rightarrow \infty} \tilde{f}_{-\xi''_n}(t)=f(t), \quad\mbox{for all $t\in\mathbb{R}$}. \label{aa_f} \end{eqnarray} Now we develop the proof of the item. Indeed, we consider $y$ defined by \eqref{sol_heter_partes} and prove that $y$ belongs to $AA(\mathbb{R},V)$.
Let us start by considering the notation $y_{\pm}$ and $\tilde{y}_{\pm}$ for the functions defined by \begin{eqnarray} && y_+(t)=\int_{-\infty}^{t}g(t,s)f(s)ds, \quad \tilde{y}_+(t)=\int_{-\infty}^{t}g(t,s)\tilde{f}(s)ds, \label{notacion_ad_0}\\ && y_{-}(t)=-\int_{t}^{\infty}g(t,s)f(s)ds \quad \mbox{and} \quad \tilde{y}_{-}(t)=-\int_{t}^{\infty}g(t,s)\tilde{f}(s)ds, \label{notacion_ad} \end{eqnarray} respectively. Then, by algebraic rearrangements, we deduce that \begin{eqnarray} (y_+)_{\xi''_{n}}(t)-\tilde{y}_+(t) &=&\int_{-\infty}^{t}\Delta_{\xi''_{n}}g(t,s) f_{\xi''_{n}}(s)ds +\int_{-\infty}^{t}g(t,s)(f_{\xi''_{n}}-\tilde{f})(s)ds, \qquad \label{see_cauchy} \\ (y_-)_{\xi''_{n}}(t)-\tilde{y}_-(t) &=&-\int_{t}^{\infty}\Delta_{\xi''_{n}}g(t,s) f_{\xi''_{n}}(s)ds -\int_{t}^{\infty} g(t,s)(f_{\xi''_{n}}-\tilde{f})(s)ds. \label{see_cauchy_2} \end{eqnarray} Now, by using the Lebesgue dominated convergence theorem, we get that the four integrals in \eqref{see_cauchy}-\eqref{see_cauchy_2} converge to $0$ when $n\to\infty$. Indeed, by \eqref{inequality_cor} and \eqref{aa_mu}, the first integral in \eqref{see_cauchy} converges to $0$ when $n\rightarrow \infty$. The second integral in \eqref{see_cauchy} vanishes when $n\to \infty$ as a consequence of \eqref{aa_f} and the bound \eqref{inequality_green}. Meanwhile, both integrals in \eqref{see_cauchy_2} converge to $0$ when $n\to \infty$ by application of \eqref{inequality_cor_2}, \eqref{aa_mu} and \eqref{aa_f}. Consequently, we have that \begin{eqnarray} \lim_{n\rightarrow \infty} (y_{\pm})_{\xi''_{n}}(t)=\tilde{y}_{\pm}(t) \quad \mbox{for all $t\in\mathbb{R}$.} \end{eqnarray} Similarly, we can prove that $(\tilde{y}_{\pm})_{-\xi''_{n}}(t)\to y_{\pm}(t)$ for all $t\in\mathbb{R}$ when $n\to\infty$. Hence $y\in AA(\mathbb{R},V)$. \noindent {\it(ii)} Noticing that $h(s)=\exp{(-i\int_{0}^{s}a)}f(s)\in AA(\mathbb{R},V)$ and that the Banach space $V$ has the Bohl-Bohr property, the proof follows by application of Theorem~\ref{1.1}.
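We close the proof with a numerical sanity check (an informal sketch, not part of the formal argument) of formula \eqref{sol_heter_partes} for the hypothetical scalar data $\mu\equiv -1$ (so $M(\mathrm{Re}(\mu))=-1<0$) and $f=\cos$; the closed form of the bounded solution in this case is $y(t)=(\cos t+\sin t)/2$.

```python
import math

# Hypothetical scalar instance of Theorem (massera)(i): mu(t) = -1, f(t) = cos t.
mu = -1.0
f = math.cos

def y(t, tail=40.0, n=40000):
    # Trapezoidal approximation of the bounded solution
    #   y(t) = int_{-infty}^{t} exp(mu (t - s)) f(s) ds,
    # truncated at s = t - tail (truncation error is of size exp(mu * tail)).
    a, h = t - tail, tail / n
    total = 0.5 * (math.exp(mu * tail) * f(a) + f(t))
    for k in range(1, n):
        s = a + k * h
        total += math.exp(mu * (t - s)) * f(s)
    return h * total

# For this choice the closed form is y(t) = (cos t + sin t)/2 ...
for t in (-3.0, 0.0, 2.5):
    assert abs(y(t) - 0.5 * (math.cos(t) + math.sin(t))) < 1e-5

# ... and the closed form indeed satisfies y' = mu*y + f:
t0 = 0.7
lhs = 0.5 * (math.cos(t0) - math.sin(t0))                  # y'(t0)
rhs = mu * 0.5 * (math.cos(t0) + math.sin(t0)) + f(t0)      # mu*y(t0) + f(t0)
assert abs(lhs - rhs) < 1e-12
```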
\end{proof} \begin{thm} \label{massera_matrix} Consider that $A(t)\in AA(\mathbb{R},\mathbb{C}^{p\times p})$ is an upper triangular matrix of order $p \times p$ with entries $a_{ij}(t)$. Then the following assertions are valid: \begin{enumerate}[(i)] \item Assume that $A(t)$ satisfies the condition $\mathrm{Re}(M(a_{ii}))\neq 0$ for all $i=1,\ldots,p$ and that $f\in AA(\mathbb{R},V^p)$. Then, a solution $y$ of \eqref{lineal+f} is bounded if and only if $y\in AA(\mathbb{R},V^{p})$. \item Assume that $A(t)$ satisfies the condition \begin{eqnarray} a_{kk}(t)=i\beta_{k}(t) \quad\mbox{with}\quad \int_{0}^{t}\beta_{k}(s)ds \quad\mbox{bounded for all $k=1,\ldots,p,$} \end{eqnarray} and that the Banach space $V$ has the Bohl-Bohr property. Then any solution $y$ of system \eqref{lineal+f} is bounded if and only if $y$ belongs to $AA(\mathbb{R},V^{p})$. \end{enumerate} \end{thm} \begin{proof} {\it (i)} Since $A(t)$ is a triangular matrix, the system \eqref{lineal+f} has the following form \begin{eqnarray} \begin{array}{ccccccc} y'_{1}=& a_{11}(t)y_{1}+& a_{12}(t)y_{2}+& a_{13}(t)y_{3}+ &\cdots +& a_{1p}(t)y_{p}+&f_{1}(t)\\ y'_{2}=& \ &a_{22}(t)y_{2}+& a_{23}(t)y_{3}+ &\cdots +& a_{2p}(t)y_{p}+&f_{2}(t)\\ \vdots & & & & & \vdots& \vdots\\ y'_{p}=& & &\ &\ &a_{pp}(t)y_{p}+&f_{p}(t).\\ \end{array}\label{sist_triangular} \end{eqnarray} We note that the $p$-th equation in \eqref{sist_triangular} can be analyzed by application of Theorem \ref{massera}. Indeed, by Theorem \ref{massera}-{\it (i)}, there exists $y_{p}\in AA(\mathbb{R},V)$ given by \begin{eqnarray*} y_{p}(t)= \left\{ \begin{array}{lll} {\displaystyle\int_{-\infty}^{t}} g_p(s,t)f_{p}(s)ds, &&M(\mathrm{Re}(a_{pp}))<0, \\ -{\displaystyle\int_{t}^{\infty}} g_p(s,t)f_{p}(s)ds, &&M(\mathrm{Re}(a_{pp}))>0, \end{array} \right. \quad \mbox{with} \quad g_p(s,t)=\exp\Big(\int_{s}^{t} a_{pp}(r)dr\Big).
\end{eqnarray*} Similarly, by substituting $y_{p}\in AA(\mathbb{R},V)$ in the $(p-1)$-th equation of \eqref{sist_triangular} and by another application of Theorem~\ref{massera}-{\it (i)}, we can find an explicit expression for $y_{p-1}(t)$. This argument can be repeated to construct $y_{p-2}(t),y_{p-3}(t),\ldots,y_2(t)$ and $y_1(t)$ by backwards substitution and application of Theorem~\ref{massera}-{\it (i)} in the system \eqref{sist_triangular}. Hence, we can construct $y(t)$ and conclude that Theorem~\ref{massera_matrix}-{\it (i)} is valid. \noindent {\it (ii)} The proof of this item is similar to that of item {\it (i)}. In a broad sense, in this case we apply Theorem~\ref{massera}-{\it (ii)} instead of Theorem~\ref{massera}-{\it (i)}, again using backwards substitution. \end{proof} \begin{thm} \label{teo:fin_vs_infin} Let $V$ be a Banach space having the Bohl-Bohr property. Let $\{\mu_{i}\}_{i=1}^{p}$ be the eigenvalues of the $p\times p$ constant matrix $A$, satisfying $\vert \mu_{i}\vert =1$. Then any bounded solution $y$ of \eqref{lineal+f} belongs to $AA(\mathbb{R},V^p)$. When all the eigenvalues $\mu_i$ are distinct, these solutions have the form \begin{eqnarray} y(t)=\exp({At})\left[v+\int_{0}^{t}\exp({-As})f(s)ds \right] \quad \mbox{for some $v\in V^{p}$}. \label{sol_sist_lineal_cte} \end{eqnarray} In the general case, a formula for the bounded solutions can also be obtained. \end{thm} \begin{proof} If $\{\mu_{i}\}_{i=1}^{p}$ are distinct, the constant system \eqref{lineal} is similar to a diagonal one. Then, without loss of generality, we may suppose that $A$ is an upper triangular matrix. Hence, the result \eqref{sol_sist_lineal_cte} follows by application of Theorem \ref{massera_matrix}-{\it (i)}.
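The backwards substitution above can be illustrated numerically (an informal sketch with hypothetical constant data, outside the formal argument) on the $2\times 2$ triangular system $y_1'=-y_1+y_2$, $y_2'=-2y_2+\cos t$: the last equation is solved first via the bounded-solution integral, and its solution is then substituted into the first one.

```python
import math

def bounded(mu, forcing, t, tail=40.0, n=20000):
    # Trapezoidal approximation of int_{-infty}^t exp(mu (t - s)) forcing(s) ds,
    # the bounded solution of x' = mu x + forcing for a constant mu < 0.
    a, h = t - tail, tail / n
    total = 0.5 * (math.exp(mu * tail) * forcing(a) + forcing(t))
    for k in range(1, n):
        s = a + k * h
        total += math.exp(mu * (t - s)) * forcing(s)
    return h * total

# Step 1: solve the last equation y2' = -2 y2 + cos t.
# Closed form: y2(t) = (2 cos t + sin t)/5.
y2 = lambda t: (2.0 * math.cos(t) + math.sin(t)) / 5.0
assert abs(bounded(-2.0, math.cos, 1.3) - y2(1.3)) < 1e-5

# Step 2: substitute y2 into the first equation y1' = -y1 + y2.
# Closed form: y1(t) = (cos t + 3 sin t)/10.
y1 = lambda t: (math.cos(t) + 3.0 * math.sin(t)) / 10.0
assert abs(bounded(-1.0, y2, 1.3) - y1(1.3)) < 1e-5
```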
\end{proof} \begin{thm} \label{teo:SoL_lineal+f_abstract} Let $A:V\rightarrow V$ be the infinitesimal generator of a $C_0$-group of bounded linear operators $T(t)$, $t \in \mathbb{R}$, and define the Green function \begin{eqnarray} G(t,s)=\begin{cases} T(t-s)P, & t\geq s \\ -T(t-s)(I-P), & t\leq s. \end{cases}\label{green_c0} \end{eqnarray} Assume that $A$ has an $\alpha$-exponential dichotomy, i.e., there exist two positive constants $c$ and $\alpha$ such that \begin{eqnarray} \| G(t,s)\| \leq ce^{-\alpha \vert t-s\vert } \quad \mbox{for all $(t,s)\in\mathbb{R}^2$.} \label{green_c0_exp} \end{eqnarray} Then, if $f\in AA(\mathbb{R},V)$, equation \eqref{lineal+f} has a unique solution $y\in AA(\mathbb{R},V)$ given by \begin{eqnarray} y(t)=\int_{\mathbb{R}} G(t,s)f(s)ds \quad \mbox{for all $t\in\mathbb{R}$} \label{sol__y} \end{eqnarray} and satisfying the following estimate \begin{eqnarray} \Vert y\Vert_{\infty}\leq \frac{2c}{\alpha}\Vert f\Vert_{\infty}. \label{cota_solucion} \end{eqnarray} \end{thm} \begin{proof} From \eqref{green_c0}, \eqref{green_c0_exp} and $y\in BC(\mathbb{R},V)$, we have that \begin{eqnarray} \lim_{s\rightarrow \pm \infty} \Vert G(t,s)y(s)\Vert =0. \label{Gy_0} \end{eqnarray} Indeed, for $t\geq s$ we have that $\Vert G(t,s)y(s)\Vert\leq ce^{-\alpha(t-s)}\Vert y\Vert_{\infty}$, which implies that $\Vert G(t,s)y(s)\Vert\to 0$ when $s\rightarrow -\infty$. Similarly, we get that $\Vert G(t,s)y(s)\Vert$ vanishes when $s\rightarrow \infty$. We note that, for fixed $t\in\mathbb{R}$, applying $T(t-s)$ to the identity $y'(s)=Ay(s)+f(s)$ and using the fact that $A$ commutes with $T(t)$ on the domain of $A$, we get \begin{eqnarray} T(t-s)y'(s) &=& T(t-s)Ay(s)+T(t-s)f(s) \nonumber \\ &=& AT(t-s)y(s)+T(t-s)f(s) \label{3.22}. \end{eqnarray} Now, the proof consists of three main parts: (a) we prove that the solution $y$ of \eqref{lineal+f} is given by \eqref{sol__y}; (b) we prove that $y\in AA(\mathbb{R},V)$; and (c) we prove the uniqueness. \noindent {\it (a).
Proof that the solution $y$ of \eqref{lineal+f} is given by \eqref{sol__y}}. Firstly, we note that a formal integration of \eqref{3.22}, after multiplying by $P$, over $s\in (-\infty,t)$ gives the following identity \begin{eqnarray} \int_{-\infty}^{t} T(t-s)Py'(s)ds = \int_{-\infty}^{t}AT(t-s)Py(s)ds+\int_{-\infty}^{t}T(t-s)Pf(s)ds. \label{form_integral} \end{eqnarray} Moreover, we have that \begin{eqnarray} \frac{d}{ds} T(t-s)y(s) = -AT(t-s)y(s)+T(t-s)y'(s). \label{derivada_Ty} \end{eqnarray} Then, an integration over $s\in [r,t]$ implies the following relation \begin{eqnarray} Py(t)-T(t-r)Py(r) = -\int_{r}^{t}AT(t-s)Py(s)ds+\int_{r}^{t}T(t-s)Py'(s)ds. \end{eqnarray} Now, by \eqref{Gy_0}, letting $r\rightarrow -\infty$, we deduce that \begin{eqnarray} Py(t) = -\int_{-\infty}^{t}AT(t-s)Py(s)ds+\int_{-\infty}^{t}T(t-s)Py'(s)ds.\label{Py} \end{eqnarray} Here, we note that an integration of \eqref{3.22}, after multiplying by $Q=I-P$, over $s\in [t,\infty)$ gives \begin{eqnarray} \int_{t}^{\infty}T(t-s)Qy'(s)ds = \int_{t}^{\infty}AT(t-s)Qy(s)ds+\int_{t}^{\infty}T(t-s)Qf(s)ds, \label{form_integral_Q} \end{eqnarray} and an integration of \eqref{derivada_Ty} over $s\in [t,r]$ yields \begin{eqnarray*} T(t-r)Qy(r)-Qy(t) = \int_{t}^{r}-AT(t-s)Qy(s)ds+\int_{t}^{r}T(t-s)Qy'(s)ds. \end{eqnarray*} Now, by \eqref{Gy_0} and letting $r\rightarrow \infty$ in the last relation, we get \begin{eqnarray} -(I-P)y(t) = -\int_{t}^{\infty}AT(t-s)(I-P)y(s)ds+\int_{t}^{\infty}T(t-s)(I-P)y'(s)ds. \label{eq:3.20cho} \end{eqnarray} The relation \eqref{eq:3.20cho} together with \eqref{form_integral_Q} yields \begin{eqnarray} -(I-P)y(t)= \int_{t}^{\infty} T(t-s)(I-P)f(s)ds. \label{-Qy} \end{eqnarray} Then, from \eqref{form_integral}, \eqref{Py} and \eqref{-Qy}, we obtain \begin{align} y(t)&= Py(t)+(I-P)y(t) \label{3.29}\\ &= \int_{-\infty}^{t} T(t-s)Pf(s)ds-\int_{t}^{\infty}T(t-s)(I-P)f(s)ds \nonumber \\ &= \int_{\mathbb{R}} G(t,s)f(s)ds. \nonumber \end{align} Thus, we conclude the proof of \eqref{sol__y}. \noindent {\it (b).
Proof that $y\in AA(\mathbb{R},V)$.} This property follows from \eqref{3.29}, the hypothesis $f\in AA (\mathbb{R},V)$ and an application of Lemma~\ref{bi_aa_integrable}. \noindent {\it (c). Proof of uniqueness of bounded solutions.} The uniqueness of the bounded solution of \eqref{lineal+f} follows from the fact that $x\equiv 0$ is the unique bounded solution on $\mathbb{R}$ of the linear equation \eqref{lineal}. Indeed, if $x\in BC(\mathbb{R},V)$ is a solution of the linear system, we write $x(t)=Px(t)+(I-P)x(t)=x_{1}(t)+x_{2}(t)$. By the exponential dichotomy, unless $x_{1}\equiv 0$ we have $\Vert x_{1}(t)\Vert \rightarrow \infty$ as $t\rightarrow -\infty$, and unless $x_{2}\equiv 0$ we have $\Vert x_{2}(t)\Vert \rightarrow \infty$ as $t\rightarrow \infty$; hence a bounded solution vanishes identically. Finally, we note that \eqref{cota_solucion} is a consequence of \eqref{3.29}. \end{proof} \begin{thm}\label{cor_unica_solucion} Assume that $A\in AA(\mathbb{R},\mathbb{C}^{p\times p})$ and that \eqref{lineal} has an exponential dichotomy with a projection $P$ that commutes with the fundamental matrix $\Phi_A(t).$ Assume that $f\in AA(\mathbb{R},V^{p})$. Then, the linear non-homogeneous equation \eqref{lineal+f} has a unique solution in $AA(\mathbb{R},V^{p})$, given by $$y(t)= \int_\mathbb{R} G_A(t,s)f(s)ds,$$ and satisfying \eqref{cota_solucion}. \end{thm} \begin{proof} The proof follows by application of Lemma~\ref{bi_aa_integrable}. \end{proof} \begin{thm} Let $V$ be a Hilbert space and $A$ a linear compact operator on $V$. Suppose that $V=\oplus_{k=1}^{\infty} V_{k}$ is a Hilbert sum such that $V_{k}$ is a finite-dimensional subspace of $V$ for each $k\in \mathbb{N}$. Suppose that each orthogonal projection $P_{k}$ on $V_{k}$ commutes with $A$. If $f\in AA(\mathbb{R},V)$, then every bounded solution $y$ of \eqref{lineal+f} belongs to $AA(\mathbb{R},V)$. \end{thm} \begin{proof} Note that for any $y \in V$ we have $y=\sum_{k=1}^{\infty}P_ky=\sum_{k=1}^{\infty}y_k$.
Then, by the fact that $A$ is bounded on $V$, we deduce that \begin{eqnarray} Ay=\sum_{k=1}^{\infty}Ay_{k} =\sum_{k=1}^{\infty}AP_{k}y =\sum_{k=1}^{\infty}P_{k}Ay. \end{eqnarray} Now, from the hypothesis $f\in AA(\mathbb{R},V)$, we have that for any sequence $\{ \tilde{\tau}_{n}\}_{n=1}^{\infty}\subset \mathbb{R}$ there exist a subsequence $\{ \tau_{n}\}_{n=1}^{\infty}\subset \{ \tilde{\tau}_{n}\}_{n=1}^{\infty}$ and a function $\tilde{f}$ such that $f_{\tau_{n}}(t)\to \tilde{f}(t)$ and $\tilde{f}_{-\tau_{n}}(t)\to f(t)$ pointwise on $\mathbb{R}$ when $n\to \infty$. Then, by the compactness of $A$, we deduce that $Af_{\tau_{n}}(t)\to A\tilde{f}(t)$ and $A\tilde{f}_{-\tau_{n}}(t)\to Af(t)$ pointwise on $\mathbb{R}$ when $n\to \infty$. Now, choosing $y_{k}(t)=P_{k}y(t)$ and assuming that $y$ is a solution of equation \eqref{lineal+f}, we can deduce that \begin{eqnarray*} y'_{k}=P_{k}y' &=& P_{k}(Ay(t)+f(t))\\ &=& AP_{k}y(t)+P_{k}f(t)\\ &=& Ay_{k}(t)+P_{k}f(t), \end{eqnarray*} or, equivalently, $y_{k}$ satisfies the equation \eqref{lineal+f} in the finite-dimensional space $V_{k}$ with $P_{k}f(t)\in AA(\mathbb{R},V_{k})$, since $P_{k}$ is a bounded linear operator. Thus, $y_{k}$ is bounded if and only if $y_{k}\in AA(\mathbb{R},V_{k})$. Now, if $y(t)$ is bounded, the set $\Big\{Ay(t) : t\in \mathbb{R} \Big\}$ is relatively compact in $V$. Hence $\sum_{k=1}^{\infty}P_{k}Ay(t)=Ay(t)$ uniformly on $\mathbb{R}$. On the other hand, $P_{k}Ay(t)\in AA(\mathbb{R},V_{k})$, since $P_kAy(t)=AP_ky(t)=Ay_{k}(t)$. Then $Ay(t)\in AA(\mathbb{R},V)$, and $y'(t)\in AA(\mathbb{R},V)$ since $y(t)$ satisfies the equation \eqref{lineal+f}. Therefore, using Theorem~\ref{1.1}, $y\in AA(\mathbb{R},V)$ since $y\in BC(\mathbb{R},V)$ and $V$ is a Hilbert space. \end{proof} \subsection{Results for \eqref{lineal+f+g}} \label{subsec:lineal+f+g} Before starting, we recall the notation $\Delta(\varphi_0,\rho)$ and $\varphi_0 $ given in \eqref{eq:condition_L}.
In this subsection, we present two results for \eqref{lineal+f+g}, assuming essentially that $g$ satisfies the assumptions given in \eqref{eq:condition_L} and that $f$ is a function such that the inequality \begin{eqnarray} \Vert f \Vert \leq \frac{\alpha \rho}{2c}, \label{condition_f} \end{eqnarray} is satisfied for some positive constants $\alpha$ and $c$; this implies that $0\in \Delta(\varphi_0,\rho)$ or, equivalently, $\Vert \varphi_0 \Vert \leq \rho$. \begin{thm}\label{theo_sol_unica} Consider $A(t)\in AA(\mathbb{R},\mathbb{C}^{p\times p})$ such that \eqref{lineal} has an $\alpha$-exponential dichotomy with a projection $P$ that commutes with $\Phi_A$. Assume that $g$ satisfies the assumptions given in \eqref{eq:condition_L} and that $f$ is selected such that \eqref{condition_f} holds. Then, if $4cL<\alpha$, the equation \eqref{lineal+f+g} has a unique solution belonging to $AA(\mathbb{R},V^p)$. \end{thm} \begin{proof} Let $\Delta=AA(\mathbb{R},V^p)\cap \Delta(\varphi_0,\rho)$ and let $G=G_{A}(t,s)$ be the Green matrix associated with the $\alpha$-exponential dichotomy, i.e., \begin{eqnarray*} \Vert G(t,s)\Vert \leq ce^{-\alpha\vert t-s \vert} \quad \mbox{for some $c,\alpha\in\mathbb{R}^+$ and for all $(t,s)\in \mathbb{R}^2$.} \end{eqnarray*} Now, for $\varphi\in \Delta$, by Proposition \ref{composition}, it follows that the function $g(t,\varphi(t))$ belongs to $AA(\mathbb{R},V^p)$. Then, by Theorem~\ref{cor_unica_solucion}, we have that \begin{eqnarray} (\Gamma\varphi)(t)=\int_{\mathbb{R}} G(t,s)[f(s)+g(s,\varphi(s))]ds\label{operador} \end{eqnarray} is the unique $AA(\mathbb{R},V^p)$ solution of the linear equation \eqref{lineal+f} with forcing term $f(\cdot)+g(\cdot,\varphi(\cdot))$. Moreover, $(\Gamma\varphi)(t)$ satisfies the inequality \begin{eqnarray*} \Vert \Gamma\varphi -\varphi_{0} \Vert \leq \frac{2cL}{\alpha} \Vert \varphi \Vert \leq \frac{2cL}{\alpha} \left( \Vert \varphi -\varphi_0 \Vert +\Vert \varphi_0 \Vert \right)\leq \frac{4cL}{\alpha}\rho \leq \rho . \end{eqnarray*} Thus, $\Delta$ is invariant under $\Gamma$.
Furthermore, we note that $\Gamma$ is a contraction, since \begin{eqnarray*} \Vert(\Gamma\varphi_{1})(t)-(\Gamma\varphi_2)(t)\Vert &\leq& L\int_{\mathbb{R}} \Vert G(t,s)\Vert \,\Vert \varphi_1(s)-\varphi_2(s)\Vert ds \\ &\leq& \frac{2cL}{\alpha} \Vert \varphi_1 -\varphi_2 \Vert. \end{eqnarray*} Hence, by the Banach fixed point argument, we deduce that $\Gamma$ has a unique fixed point $\varphi \in \Delta$. Consequently, the function $\varphi$ is the unique solution of \eqref{lineal+f+g} in $\Delta$. \end{proof} \begin{thm}\label{unique_sol_lineal+f+g} Consider that $A$ is a linear operator satisfying the assumptions given in Theorem~\ref{teo:SoL_lineal+f_abstract}. Assume that $g$ satisfies the assumptions given in \eqref{eq:condition_L} and that $f$ is selected such that \eqref{condition_f} holds. Then, if $4cL<\alpha$, the equation \eqref{lineal+f+g} has a unique mild solution belonging to $AA(\mathbb{R},V)$. \end{thm} \begin{proof} Let $\Delta = AA(\mathbb{R},V)\cap \Delta(\varphi_0,\rho)$ and $\varphi\in AA(\mathbb{R},V)$. By the composition Proposition~\ref{composition}, we have that $\psi_{\varphi}(\cdot)=g(\cdot,\varphi(\cdot))\in AA(\mathbb{R},V)$. Moreover, applying arguments similar to those used in Theorem~\ref{theo_sol_unica}, we deduce that the Green operator $$(\Gamma\varphi)(t)=\varphi_0(t)+\int_{\mathbb{R}}G(t,s)\psi_{\varphi}(s)ds$$ maps $\Delta$ into $\Delta$, and additionally we can prove that $\Gamma :\Delta \rightarrow \Delta$ is a strict contraction. Thus, the Banach principle ensures the existence of a unique fixed point $\varphi\in \Delta$ of $\Gamma$, which is the desired mild solution. The proof is now complete. \end{proof} \subsection{Results for \eqref{delay_lineal+f+g}} \label{subsec:delay_lineal+f+g} In this subsection we generalize Theorems~\ref{theo_sol_unica} and \ref{unique_sol_lineal+f+g} to the case of the delay equation \eqref{delay_lineal+f+g}.
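To make the contraction mechanism of the preceding results concrete, the following numerical sketch iterates the Green operator for a hypothetical scalar equation $y'=-y+\cos t+L\sin y$ with $L=0.1$ (so $c=\alpha=1$, $P=1$ and $4cL=0.4<\alpha$); the data are illustrative assumptions only, not the general setting of the theorems.

```python
import math

# Hypothetical scalar data: A = -1, f(t) = cos t, g(t, y) = L sin(y), L = 0.1.
L = 0.1
f = math.cos
g = lambda y: L * math.sin(y)

h, N = 0.01, 6000                        # uniform grid on [-30, 30]
ts = [-30.0 + k * h for k in range(N + 1)]

def gamma(phi):
    # (Gamma phi)(t) = int_{-infty}^t e^{-(t-s)} [f(s) + g(phi(s))] ds,
    # computed by the recurrence y(t) = e^{-h} y(t-h) + trapezoid slice,
    # starting from 0 at t = -30 (truncation error ~ e^{-30}).
    u = [f(t) + g(p) for t, p in zip(ts, phi)]
    out = [0.0]
    for k in range(1, N + 1):
        out.append(math.exp(-h) * out[-1]
                   + 0.5 * h * (u[k] + math.exp(-h) * u[k - 1]))
    return out

phi = [0.0] * (N + 1)
gaps = []                                 # sup-norm gaps between iterates
for _ in range(8):
    new = gamma(phi)
    gaps.append(max(abs(a - b) for a, b in zip(new, phi)))
    phi = new

# Successive gaps shrink at least like the contraction rate 2cL/alpha = 0.2.
assert all(g2 <= 0.25 * g1 + 1e-12 for g1, g2 in zip(gaps, gaps[1:]))
# The iterates stay within the rough a priori bound ||f||/(alpha - 2L).
assert max(abs(v) for v in phi) <= 1.25
```

The geometric decay of `gaps` is exactly the Banach fixed point argument seen numerically.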
\begin{thm} \label{teo:delay_lineal+f+g} Consider that $\tau>0$ is a constant delay. Then, the following assertions are valid: \begin{enumerate} [(i)] \item If $A,f$ and $g$ satisfy the hypotheses of Theorem~\ref{theo_sol_unica}, then the conclusions of Theorem~\ref{theo_sol_unica} hold for the delayed equation \eqref{delay_lineal+f+g}. \item If $A,f$ and $g$ satisfy the hypotheses of Theorem~\ref{unique_sol_lineal+f+g}, then the conclusions of Theorem~\ref{unique_sol_lineal+f+g} are true for the delayed equation \eqref{delay_lineal+f+g}. \end{enumerate} \end{thm} \begin{proof} The proof follows by application of Theorems \ref{theo_sol_unica} and \ref{unique_sol_lineal+f+g}, since by Proposition~\ref{composition} we have that $g(\cdot,\phi(\cdot -\tau))\in AA(\mathbb{R},V)$ for every $\phi\in AA(\mathbb{R},\Delta (\varphi_0,\rho))$. Indeed, this is a consequence of the fact that $\phi\in AA(\mathbb{R},V)$ implies that the translation $\phi_{\tau}(\cdot)=\phi(\cdot-\tau)$ belongs to $ AA(\mathbb{R},V)$. \end{proof} \section{Application to the Delayed Lasota-Wazewska Model} \label{sec:appl} The Lasota-Wazewska model is an autonomous differential equation of the form \begin{eqnarray} y'(t)=-\delta y(t)+pe^{-\gamma y(t-\tau )},\ t\geq 0. \label{lasota-wazewska} \end{eqnarray} It was used by Wazewska-Czyzewska and Lasota \cite{lasota-wazewska} to describe the survival of red blood cells in the blood of an animal. In this equation, $y(t)$ describes the number of red blood cells at time $t$; $\delta >0$ is the probability of death of a red blood cell; $p,\gamma$ are positive constants related to the production of red blood cells per unit of time; and $\tau$ is the time required to produce a red blood cell.
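For orientation, the qualitative behavior of \eqref{lasota-wazewska} can be observed in a quick forward-Euler simulation with hypothetical constant data $\delta=p=1$, $\gamma=0.5$, $\tau=1$ (an informal sketch; constant coefficients are a special case of the almost automorphic setting studied below).

```python
import math

# Forward-Euler sketch of the delayed Lasota-Wazewska model with the
# hypothetical constant data delta = p = 1, gamma = 0.5, tau = 1.
delta, p, gamma, tau = 1.0, 1.0, 0.5, 1.0
h = 0.001
lag = int(tau / h)

hist = [0.2] * (lag + 1)           # positive constant history on [-tau, 0]
y = hist[:]
for k in range(int(60.0 / h)):     # integrate on [0, 60]
    y.append(y[-1] + h * (-delta * y[-1] + p * math.exp(-gamma * y[-1 - lag])))

# The red-cell count stays positive and below p/delta = 1.
assert all(v > 0 for v in y)
assert max(y[len(y) // 2:]) < 1.0

# The solution settles near the equilibrium y* solving delta*y = p*exp(-gamma*y),
# here located by bisection on [0, 1].
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if delta * mid < p * math.exp(-gamma * mid):
        lo = mid
    else:
        hi = mid
assert abs(y[-1] - lo) < 1e-3
```

With $\gamma$ this small the equilibrium is attracting, consistent with the uniqueness and stability statements of this section.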
In this section, we study the following delayed model: \begin{eqnarray} y'(t)=-\delta(t) y(t)+p(t)g(y(t-\tau)), \label{appli-lasota} \end{eqnarray} where $\tau>0$, $\delta(\cdot),\ p(\cdot)$ are positive almost automorphic functions and $g(\cdot)$ is a positive Lipschitz function with Lipschitz constant $\gamma$. Equation \eqref{appli-lasota} models several real-life situations, see \cite{liz-martinez}.\\ We will assume the following condition: \begin{enumerate} \item[(D)] The mean of $\delta$ satisfies $M(\delta)>\delta_{-}>0$. \end{enumerate} The principal goal of this section is the following theorem: \begin{thm}\label{gamma_small_unique_sol} Under the above conditions, for $\gamma$ sufficiently small, the equation \eqref{appli-lasota} has a unique almost automorphic solution. \end{thm} By Lemma \ref{lema_mean}, the linear part of equation \eqref{appli-lasota} has an exponential dichotomy. Let $\psi(t)$ be a real almost automorphic function and consider the equation \begin{eqnarray} y'(t)=-\delta (t)y(t)+p(t)g(\psi(t-\tau)). \label{lasota-psi} \end{eqnarray} Then, the bounded solution of equation \eqref{lasota-psi} is given by \begin{eqnarray*} y(t)=\int_{-\infty}^{t} \exp \left( -\int_{u}^{t}\delta (s)ds \right)p(u)g(\psi(u-\tau))du. \end{eqnarray*} The homogeneous part of equation \eqref{lasota-psi} has an exponential dichotomy and, since $\delta$ is an almost automorphic function, by Lemma \ref{bi_aa} it is integrally Bi-almost automorphic. Therefore, Theorem \ref{gamma_small_unique_sol} follows from Theorem \ref{theo_sol_unica}. Taking $g(x)=e^{ -\gamma x}$, $\gamma>0$, we obtain the Lasota-Wazewska model: \begin{eqnarray} y'(t)=-\delta(t) y(t)+p(t)e^{-\gamma y(t-\tau)},\ t\geq 0. \label{application} \end{eqnarray} \begin{cor} For $\gamma$ small enough, the delayed Lasota-Wazewska model \eqref{application} has a unique asymptotically stable almost automorphic solution. \end{cor} \end{document}
\begin{document} \title{Lyapunov-Sylvester Computational Method for Two-Dimensional Boussinesq Equation} \author{Abdelhamid Bezia$^{\ast }$,~Anouar Ben Mabrouk~and~Kamel Betina \\ $^{\ast }$Faculty of Mathematics, USTHB\\ E-mail: [email protected].} \maketitle \begin{abstract} A numerical method is developed leading to algebraic systems based on generalized Lyapunov-Sylvester operators to approximate the solution of the two-dimensional Boussinesq equation. It consists of an order reduction method and a finite difference discretization. It is proved to be uniquely solvable, stable and convergent by using the Lyapunov criterion and manipulating Lyapunov-Sylvester operators. Some numerical implementations are provided at the end of the paper to validate the theoretical results.\newline \textbf{AMS Classification:} 65M06, 65M12, 65F05, 15A30, 37B25.\newline \textbf{Key words}: Boussinesq equation, Finite difference, Lyapunov-Sylvester operators. \end{abstract} \section{Introduction} In the present work we propose to make use of algebraic operators to approximate the solutions of some PDEs, such as the Boussinesq equation in higher dimensions, without adapting classical developments based on separation of variables, radial solutions, etc. The crucial idea is to prove that even simple methods of discretization of PDEs, such as finite differences and finite volumes, can be transformed into well-adapted algebraic systems such as the Lyapunov-Sylvester ones developed hereafter. In the present paper we were confronted with a more complicated but fascinating method to prove the invertibility of the algebraic operator yielded by the numerical scheme. Instead of using classical methods such as tri-diagonal transformations, we applied a topological method to prove the invertibility.
This is advantageous, as it does not require computing eigenvalues, precise bounds/estimates of eigenvalues, or direct inverses, which remains a complicated problem in general linear algebra, especially for generalized Lyapunov-Sylvester operators. Recall, however, that bounds/estimates of eigenvalues can already be efficient in studying stability. Recall also that block tridiagonal systems arising from classical methods can be used here as well and can be solved, for example, using iterative techniques, highly structured bandwidth solvers, Kronecker-product techniques, etc. These methods have been applied to more general discretizations. See \cite {ElMikkawy1}, \cite{ElMikkawy2}, \cite{ElMikkawy3}, \cite{El-Mikkawy-Atlan}, \cite{ElMikkawy-Karawia}, \cite{Jia-Li} for a review on tridiagonal and block tridiagonal systems, their advantages as well as their disadvantages. In the present paper our principal aim is to apply other algebraic methods to investigate numerical solutions of PDEs in multi-dimensional spaces. We aim to prove that Lyapunov-Sylvester operators can be good candidates for this purpose and that they may give better solvers than tridiagonal and/or block tridiagonal ones. The present paper is devoted to the development of a numerical method based on a two-dimensional finite difference scheme to approximate the solution of the nonlinear Boussinesq equation in $\mathbb{R} ^{2}$ written in the form \begin{equation} \label{eqn1-1} u_{tt}=\Delta u+qu_{xxxx}+(u^{2})_{xx},\quad((x,y),t)\in\Omega\times(t_{0},+\infty) \end{equation} with initial conditions \begin{equation} \label{eqn1-3} u(x,y,t_{0})=u_{0}(x,y)\quad\hbox{and}\quad\displaystyle\frac{\partial\,u}{ \partial\,t}(x,y,t_{0})=\varphi(x,y),\quad(x,y)\in\Omega \end{equation} and boundary conditions \begin{equation} \label{eqn1-4} \frac{\partial u}{\partial\eta}\left(x,y,t\right)=0,\quad((x,y),t)\in \partial\Omega\times(t_{0},+\infty).
\end{equation} In order to reduce the derivation order, we set \begin{equation} \label{eqn1-2} v=qu_{xx}+u^{2}. \end{equation} We have to solve the system \begin{equation} \label{systemequation1} \left\{ \begin{array}{l} u_{tt}=\Delta u+v_{xx},\quad(x,y,t)\in\Omega\times(t_{0},+\infty) \\ v=qu_{xx}+u^{2},\quad(x,y,t)\in\Omega\times(t_{0},+\infty) \\ \left(u,v\right)(x,y,t_{0})=\left(u_{0},v_{0}\right)(x,y),\quad(x,y)\in \overline{\Omega} \\ \frac{\partial\,u}{\partial\,t}(x,y,t_{0})=\varphi(x,y),\quad(x,y)\in \overline{\Omega} \\ \frac{\partial }{\partial\eta}(u,v)(x,y,t)=0,\quad(x,y,t)\in\partial\Omega \times(t_{0},+\infty) \end{array} \right. \end{equation} on a rectangular domain $\Omega=]L_{0},L_{1}[\times]L_{0},L_{1}[$ in $ \mathbb{R}^{2}$. Here $t_{0}\geq0$ is a real parameter fixed as the initial time, $u_{tt}$ is the second order partial derivative in time, $\Delta=\frac{ \partial^{2}}{\partial\,x^{2}}+\frac{\partial^{2}}{\partial\,y^{2}}$ is the Laplace operator in $\mathbb{R}^{2}$, $q$ is a real constant, and $u_{xx}$ and $ u_{xxxx}$ are respectively the second order and the fourth order partial derivatives with respect to $x$. $\displaystyle\frac{\partial}{\partial\eta}$ is the outward normal derivative operator along the boundary $\partial\Omega$. Finally, $u$, $u_{0}$ and $\varphi$ are real valued functions, with $u_{0}$ and $\varphi$ being $\mathcal{C}^{2}$ on $\overline{\Omega}$. $u$ (and consequently $v$) is the unknown candidate, supposed to be $\mathcal{C}^{4}$ on $\overline{\Omega}$. The Boussinesq equation is widely known in both theoretical and applied fields such as hydrodynamics and traveling waves. It governs the flow of groundwater, heat conduction, natural convection in thermodynamics for both volumes and fluids in porous media, etc. For this reason, many studies have been developed discussing the solvability of such equations.
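The effect of the reduction \eqref{eqn1-2} can be checked numerically: with $v=qu_{xx}+u^{2}$ one has $v_{xx}=qu_{xxxx}+(u^{2})_{xx}$, so \eqref{eqn1-1} becomes $u_{tt}=\Delta u+v_{xx}$. A minimal sketch (the test function $u=\sin x$ and the value $q=1$ are illustrative choices):

```python
import numpy as np

# For u(x) = sin x and q = 1: v = q*u_xx + u^2 = -sin x + sin^2 x in closed
# form, while q*u_xxxx + (u^2)_xx = sin x + 2*cos 2x exactly.  We verify that
# a central second difference of v reproduces this right-hand side to O(h^2).
q, h = 1.0, 1e-3

def v(x):
    return -q * np.sin(x) + np.sin(x) ** 2   # q*u_xx + u^2 for u = sin

x = np.linspace(0.0, 2.0 * np.pi, 101)
v_xx = (v(x + h) - 2.0 * v(x) + v(x - h)) / h**2
exact = q * np.sin(x) + 2.0 * np.cos(2.0 * x)  # q*u_xxxx + (u^2)_xx
err = np.max(np.abs(v_xx - exact))
print(err)   # small: v_xx carries the fourth-order and nonlinear terms
```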
There are works dealing with traveling-wave solutions, self-similar solutions, the scattering method, mono- and multi-dimensional versions, reduction of multi-dimensional equations with respect to algebras, etc. In \cite{Dehghan2006}, several finite difference schemes, such as three fully implicit finite difference schemes, two fully explicit finite difference techniques, an alternating direction implicit procedure and the Barakat and Clark type explicit formula, are discussed and applied to solve the two-dimensional Schr\"odinger equation with Dirichlet boundary conditions. In \cite {Dehghan2008}, the solution of a generalized Boussinesq equation has been developed by means of the homotopy perturbation method. It consists of a technique that avoids the discretization, linearization, or small perturbations of the equation and thus reduces the numerical computations. In \cite{Shokri-Dehghan}, a collocation approximation of the numerical solution of the improved Boussinesq equation is obtained based on radial bases. A predictor-corrector scheme is provided and the Not-a-Knot method is used to improve the accuracy at the boundary. Next, in \cite{Dehghan-Salehi}, a boundary-only meshfree method has been applied to approximate the numerical solution of the classical Boussinesq equation in one dimension. See for instance \cite {BaoDan1,Benmabrouk1,Bratsos1,Clarkson1,Jafarizadeh1,Kano1,Kaya1,Lai1,Liu1,Parlange1,Song1,Varlamov1,Wazwaz1,Yi1,Yi2} and the references therein for background on these facts. The method developed in this paper consists in replacing time and space partial derivatives by finite-difference approximations in order to obtain linear Lyapunov systems. An order reduction method is adapted, leading to a system of coupled PDEs which is then transformed into a discrete algebraic one. The motivation behind the idea of applying Lyapunov operators was already evoked in our work \cite{Benmabrouk1}.
We recall in brief that such a method leads to fast convergent and more accurate discrete algebraic systems without going back to the use of tridiagonal and/or fringe-tridiagonal matrices already used when dealing with multidimensional problems, especially in discrete PDEs. The present paper is organized as follows. The next section is concerned with the introduction of the finite difference scheme and the discretization of the continuous reduced system obtained from (\ref {eqn1-1})-(\ref{eqn1-4}) by the order reduction method. Section 3 deals with the solvability of the discrete Lyapunov equation obtained from this discretization. In Section 4, the consistency of the method is shown and next, the stability and convergence are proved based on the Lyapunov method. Finally, a numerical implementation is provided in the last section, leading to the computation of the numerical solution and error estimates. \section{Discrete Two-Dimensional Boussinesq Equation} Consider the domain $\Omega =]L_{0},L_{1}[\times]L_{0},L_{1}[\subset \mathbb{ R}^{2}$ and an integer $J\in\mathbb{N}^{\ast}$. Denote by $h=\displaystyle\frac{ L_{1}-L_{0}}{J}$ the space step, and set $x_{j}=L_{0}+jh$ and $y_{m}=L_{0}+mh$ for all $(j,m)\in {I}^{2}=\{0,1,\dots,J\}^{2}$. Let $l=\Delta\,t$ be the time step and $t_{n}=t_{0}+nl$, $n\in\mathbb{N}$, the discrete time grid. For $(j,m)\in {I}^{2}$ and $n\geq 0$, $u_{j,m}^{n}$ will be the net function $ u(x_{j},y_{m},t_{n})$ and $U_{j,m}^{n} $ the numerical solution. The following discrete approximations will be applied for the different differential operators involved in the problem.
For time derivatives, we set \begin{equation*} \displaystyle\,u_{t}\rightsquigarrow\displaystyle\frac{ U_{j,m}^{n+1}-U_{j,m}^{n-1}}{2l}\quad\hbox{and}\quad\displaystyle \,u_{tt}\rightsquigarrow \displaystyle\frac{ U_{j,m}^{n+1}-2U_{j,m}^{n}+U_{j,m}^{n-1}}{l^{2}} \end{equation*} and for space derivatives, we shall use \begin{equation*} \displaystyle\,u_{x}\rightsquigarrow\displaystyle\frac{ U_{j+1,m}^{n}-U_{j-1,m}^{n}}{2h}\quad \hbox{and}\quad\displaystyle \,u_{y}\rightsquigarrow \displaystyle\frac{U_{j,m+1}^{n}-U_{j,m-1}^{n}}{2h} \end{equation*} for first order derivatives and \begin{equation*} \displaystyle\,u_{xx}\rightsquigarrow\displaystyle\frac{U_{j+1,m}^{n, \alpha}-2U_{j,m}^{n,\alpha}+U_{j-1,m}^{n,\alpha}}{h^{2}},\quad\displaystyle \,u_{yy}\rightsquigarrow\displaystyle\frac{U_{j,m+1}^{n,\alpha}-2U_{j,m}^{n, \alpha}+U_{j,m-1}^{n,\alpha}}{h^{2}} \end{equation*} for second order ones, where for $n\in\mathbb{N}^{\ast}$ and $\alpha\in \mathbb{R}$, \begin{equation*} U^{n,\alpha}=\alpha\,U^{n+1}+(1-2\alpha)U^{n}+\alpha\,U^{n-1}. \end{equation*} Finally, we denote $\sigma=\displaystyle\frac{l^{2}}{h^{2}}$ and $\delta= \displaystyle\frac{q}{h^{2}}$. For $(j,m)\in \mathring{I}^{2}$ an interior point of the grid $I^{2}$, ($ \mathring{I}=\{1,2,\dots ,J-1\}$), and $n\geq 1$, the following discrete equation is deduced from the first equation in the system (\ref {systemequation1}).
\begin{equation} \begin{matrix} & & U_{j,m}^{n+1}-2U_{j,m}^{n}+U_{j,m}^{n-1} \cr & = & \sigma\alpha \left(U_{j-1,m}^{n+1}-4U_{j,m}^{n+1}+U_{j+1,m}^{n+1}+U_{j,m-1}^{n+1}+U_{j,m+1}^{n+1}\right) \cr & & +\sigma(1-2\alpha) \left(U_{j-1,m}^{n}-4U_{j,m}^{n}+U_{j+1,m}^{n}+U_{j,m-1}^{n}+U_{j,m+1}^{n} \right) \cr & & +\sigma\alpha \left(U_{j-1,m}^{n-1}-4U_{j,m}^{n-1}+U_{j+1,m}^{n-1}+U_{j,m-1}^{n-1}+U_{j,m+1}^{n-1}\right) \cr & & +\sigma\alpha\left(V_{j-1,m}^{n+1}-2V_{j,m}^{n+1}+V_{j+1,m}^{n+1} \right) \cr & & +\sigma(1-2\alpha) \left(V_{j-1,m}^{n}-2V_{j,m}^{n}+V_{j+1,m}^{n}\right) \cr & & +\sigma\alpha\left(V_{j-1,m}^{n-1}-2V_{j,m}^{n-1}+V_{j+1,m}^{n-1}\right). \end{matrix} \label{eqn1-1discrete} \end{equation} Similarly, the following discrete equation is obtained from equation (\ref {eqn1-2}). \begin{equation} \begin{matrix} \,V_{j,m}^{n+1}+V_{j,m}^{n-1} & = & 2\delta \alpha \left( U_{j-1,m}^{n+1}-2U_{j,m}^{n+1}+U_{j+1,m}^{n+1}\right) \cr & & +2\delta (1-2\alpha )\left( U_{j-1,m}^{n}-2U_{j,m}^{n}+U_{j+1,m}^{n}\right) \cr & & +2\delta \alpha \left( U_{j-1,m}^{n-1}-2U_{j,m}^{n-1}+U_{j+1,m}^{n-1}\right) \cr & & +2 \widehat{F(U_{j,m}^{n})} \end{matrix} \label{eqn2-1discrete} \end{equation} where \begin{equation*} F(u)=u^{2},\;F^{n}=F(U^{n})\;\;\hbox{and}\;\;\widehat{F^{n}}=\displaystyle \frac{F^{n-1}+F^{n}}{2}. \end{equation*} The discrete boundary conditions are written for $n\geq 0 $ as \begin{equation} \displaystyle\,U_{1,m}^{n}=U_{-1,m}^{n}\quad \hbox{and}\quad \displaystyle \,U_{J-1,m}^{n}=U_{J+1,m}^{n}, \label{CBD1} \end{equation} \begin{equation} \displaystyle\,U_{j,1}^{n}=U_{j,-1}^{n}\quad \hbox{and}\quad \displaystyle \,U_{j,J-1}^{n}=U_{j,J+1}^{n}. \label{CBD2} \end{equation} The parameter $q$ is related to the equation and has the role of a viscosity-type coefficient; thus it is related to the physical domain of the model. The barycenter parameter $\alpha$ is used to calibrate the position of the approximated solution around the exact one.
Of course, these parameters surely affect the numerical solution as well as the error estimates. This fact will be recalled later in the numerical implementations part. In work currently in progress, we are developing results on numerical solutions of the 2D Schr\"odinger equation concerning the error estimates as a function of the barycenter calibrations, by using variable coefficients $ \alpha_n$ instead of a constant $\alpha$. The use of these calibrations permits implicit/explicit schemes by choosing suitable values. For example, for $\alpha=\frac{1}{2}$, the barycenter estimation becomes \begin{equation*} V^{n,\alpha}=\alpha\,V^{n+1}+(1-2\alpha)V^{n}+\alpha\,V^{n-1}=\displaystyle \frac{V^{n+1}+V^{n-1}}{2}, \end{equation*} which is an implicit estimation that guarantees an error of order $2$ in time. Next, as mentioned in the introduction, the idea consists in applying Lyapunov-Sylvester operators to approximate the solution of the continuous problem (\ref{eqn1-1})-(\ref{eqn1-4}) or its discrete equivalent system (\ref {eqn1-1discrete})-(\ref{CBD2}). Denote \begin{equation*} a_{1}=\displaystyle\frac{1}{2}+2\alpha\sigma,\quad\,a_{2}=-\alpha\sigma, \end{equation*} \begin{equation*} b_{1}=1-2(1-2\alpha)\sigma,\quad\,b_{2}=(1-2\alpha)\sigma, \end{equation*} \begin{equation*} c_{1}=(1-2\alpha)\delta\quad\hbox{and}\quad\,c_{2}=\alpha\delta.
\end{equation*} Equation (\ref{eqn1-1discrete}) becomes \begin{equation} \label{eqn1-1discrete-forme-a} \begin{matrix} & & a_{2}U_{j-1,m}^{n+1}+a_{1}U_{j,m}^{n+1}+a_{2}U_{j+1,m}^{n+1}+a_{2}U_{j,m-1}^{n+1}+a_{1}U_{j,m}^{n+1}+a_{2}U_{j,m+1}^{n+1} \cr & & +a_{2}\left(V_{j-1,m}^{n+1}-2V_{j,m}^{n+1}+V_{j+1,m}^{n+1}\right) \cr & = & b_{2}U_{j-1,m}^{n}+b_{1}U_{j,m}^{n}+b_{2}U_{j+1,m}^{n}+b_{2}U_{j,m-1}^{n}+b_{1}U_{j,m}^{n}+b_{2}U_{j,m+1}^{n} \cr & & -a_{2}U_{j-1,m}^{n-1}-a_{1}U_{j,m}^{n-1}-a_{2}U_{j+1,m}^{n-1}-a_{2}U_{j,m-1}^{n-1}-a_{1}U_{j,m}^{n-1}-a_{2}U_{j,m+1}^{n-1} \cr & & +b_{2}\left(V_{j-1,m}^{n}-2V_{j,m}^{n}+V_{j+1,m}^{n}\right) \cr & & -a_{2}\left(V_{j-1,m}^{n-1}-2V_{j,m}^{n-1}+V_{j+1,m}^{n-1}\right). \end{matrix} \end{equation} Equation (\ref{eqn2-1discrete}) becomes \begin{equation} \label{eqn2-1discrete-forme-a} \begin{matrix} & & V_{j,m}^{n+1}-2c_{2} \left(U_{j-1,m}^{n+1}-2U_{j,m}^{n+1}+U_{j+1,m}^{n+1}\right) \cr & = & 2c_{1}\left(U_{j-1,m}^{n}-2U_{j,m}^{n}+U_{j+1,m}^{n}\right) \cr & & +2c_{2}\left(U_{j-1,m}^{n-1}-2U_{j,m}^{n-1}+U_{j+1,m}^{n-1}\right) \cr & & -V_{j,m}^{n-1}+2\widehat{F(U_{j,m}^{n})}. \end{matrix} \end{equation} Denote $A$, $B$ and $R$ the matrices defined by \begin{equation*} A= \begin{pmatrix} a_{1} & 2a_{2} & 0 & ... & ... & 0 \cr a_{2} & a_{1} & a_{2} & \ddots & \ddots & \vdots \cr 0 & \ddots & \ddots & \ddots & \ddots & \vdots \cr \vdots & \ddots & \ddots & \ddots & \ddots & 0 \cr \vdots & \ddots & \ddots & a_{2} & a_{1} & a_{2} \cr 0 & ... & ... & 0 & 2a_{2} & a_{1} \end{pmatrix} ,\quad B= \begin{pmatrix} b_{1} & 2b_{2} & 0 & ... & ... & 0 \cr b_{2} & b_{1} & b_{2} & \ddots & \ddots & \vdots \cr 0 & \ddots & \ddots & \ddots & \ddots & \vdots \cr \vdots & \ddots & \ddots & \ddots & \ddots & 0 \cr \vdots & \ddots & \ddots & b_{2} & b_{1} & b_{2} \cr 0 & ... & ... & 0 & 2b_{2} & b_{1} \end{pmatrix} \end{equation*} and \begin{equation*} R= \begin{pmatrix} -2 & 2 & 0 & ... & ... 
& 0 \cr1 & -2 & 1 & \ddots & \ddots & \vdots \cr0 & \ddots & \ddots & \ddots & \ddots & \vdots \cr\vdots & \ddots & \ddots & \ddots & \ddots & 0 \cr\vdots & \ddots & \ddots & 1 & -2 & 1 \cr0 & ... & ... & 0 & 2 & -2 \end{pmatrix} \end{equation*} The system (\ref{CBD1})-(\ref{eqn2-1discrete-forme-a}) can be written in the matrix form \begin{equation} \left\{ \begin{matrix} \mathcal{L}_{A}(U^{n+1})+a_{2}RV^{n+1}=\mathcal{L}_{B}(U^{n})-\mathcal{L} _{A}(U^{n-1})+R(b_{2}V^{n}-a_{2}V^{n-1}), \cr V^{n+1}-2c_{2}RU^{n+1}\!= \!2R(c_{1}U^{n}+c_{2}U^{n-1})-V^{n-1}\!+\!2\widehat{F^{n}} \end{matrix} \right. \label{matrixform1} \end{equation} for all $n\geq1$, where \begin{equation*} U^{n}=(U_{j,m}^{n})_{0\leq\,j,m\leq\,J},\;V^{n}=(V_{j,m}^{n})_{0\leq\,j,m \leq\,J}\;\hbox{and}\;F^{n}=(F(U_{j,m}^{n}))_{0\leq\,j,m\leq\,J} \end{equation*} and, for a matrix $Q\in\mathcal{M}_{(J+1)^{2}}(\mathbb{R})$, $\mathcal{L}_{Q}$ is the Lyapunov operator defined by \begin{equation*} \mathcal{L}_{Q}(X)=QX+XQ^{T},\;\forall\,X\in\mathcal{M}_{(J+1)^{2}}(\mathbb{R }). \end{equation*} Remark that $V$ is obtained from the auxiliary function $v$ that is applied to reduce the order of the original PDE in $u$. This reduction yielded the Lyapunov-Sylvester system (\ref{matrixform1}) above. A natural question that can be raised here concerns the ordering of $U$ and $V$. We stress the fact that no essential idea is fixed in advance; rather, the ordering is strongly related to the system obtained. For example, in (\ref{matrixform1}) above, it is easy to substitute the second equation into the first to eliminate the unknown matrix $V^{n+1}$ from the first equation. On the contrary, it is not easy to do the same for $U^{n+1}$, due to the difficulty of extracting it from $\mathcal{L}_{A}(U^{n+1})$. It is also not guaranteed that the operator $a_{2}R$ multiplying $V^{n+1}$ in the first equation is invertible, so $V^{n+1}$ cannot simply be substituted from there. So, it is essentially the final system that dictates the ordering of $U$ and $V$.
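To make the matrix form concrete, the following sketch builds $A$ and $R$ with the doubled first/last off-diagonal entries induced by the Neumann conditions (\ref{CBD1})-(\ref{CBD2}), carries out the elimination of $V^{n+1}$ mentioned above, and solves the resulting Sylvester-type step by vectorization. The grid size and parameter values are illustrative choices, not tuned values from the paper.

```python
import numpy as np

# Eliminating V^{n+1} from (matrixform1) leads to a step of the form
# W X + X A^T = C with W = A + 2*a2*c2*R^2, which we solve via
# vec(WX + XA^T) = (I kron W + A kron I) vec(X) (column-stacking vec).
J = 8
alpha, sigma, delta = 0.25, 0.01, 0.02      # illustrative values
a1, a2, c2 = 0.5 + 2*alpha*sigma, -alpha*sigma, alpha*delta

def banded(d, off):
    """Tridiagonal matrix with doubled first/last off-diagonal,
    as induced by the discrete Neumann (ghost-point) conditions."""
    M = d*np.eye(J+1) + off*np.eye(J+1, k=1) + off*np.eye(J+1, k=-1)
    M[0, 1] *= 2
    M[J, J-1] *= 2
    return M

A = banded(a1, a2)
R = banded(-2.0, 1.0)
W = A + 2*a2*c2*(R @ R)

rng = np.random.default_rng(1)
C = rng.standard_normal((J+1, J+1))          # stands for the known right-hand side
K = np.kron(np.eye(J+1), W) + np.kron(A, np.eye(J+1))
X = np.linalg.solve(K, C.flatten(order="F")).reshape(J+1, J+1, order="F")

residual = np.max(np.abs(W @ X + X @ A.T - C))
print(residual)   # ~ 0: the time step is uniquely solvable
```

The dense Kronecker solve is only for illustration; for large $J$ one would use a dedicated Sylvester solver instead.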
\section{Solvability of the Discrete Problem} In \cite{Benmabrouk1}, the authors have transformed the Lyapunov operator obtained from the discretization method into a standard linear operator acting on one column vector, by juxtaposing the columns of the matrix $X$ horizontally, which leads to an equivalent linear operator characterized by a fringe-tridiagonal matrix. We used standard computations to prove the invertibility of such an operator. Here, we do not apply the same computations as in \cite{Benmabrouk1}, but we develop different arguments. The first main result is stated as follows. \begin{theorem} \label{theorem1} The system (\ref{matrixform1}) is uniquely solvable whenever $U^{0}$ and $U^{1}$ are known. \end{theorem} \textit{Proof.} It relies on the invertibility of Lyapunov operators. Consider the endomorphism $\Phi $ defined on $\mathcal{M}_{(J+1)^{2}}(\mathbb{R} )\times \mathcal{M}_{(J+1)^{2}}(\mathbb{R}) $ by $\Phi (X,Y)=(AX+XA^T+a_{2}RY,\displaystyle\frac{1}{2}Y-c_{2}RX)$. To prove Theorem \ref{theorem1}, it suffices to show that $\ker\Phi $ is reduced to $0$. Indeed, \begin{equation*} \Phi (X,Y)=0\Longleftrightarrow (AX+XA^T+a_{2}RY,\displaystyle\frac{1}{2} Y-c_{2}RX)=(0,0) \end{equation*} or equivalently, \begin{equation*} Y=2c_{2}RX\quad \hbox{and}\quad \,(A+2a_{2}c_{2}R^{2})X+XA^T=0. \end{equation*} So, the problem is transformed into the resolution of a Lyapunov type equation of the form \begin{equation} \mathcal{L}_{W,A}(X)=WX+XA^{T}=0 \label{Lyapunovequation} \end{equation} where $W$ is the matrix given by $W=A+2a_{2}c_{2}R^{2}$.
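Classical Sylvester theory gives an equivalent criterion: \eqref{Lyapunovequation} has only the trivial solution if and only if $W$ and $-A^{T}$ share no eigenvalue. A quick numerical check of this spectral gap (grid size and parameters are illustrative choices, consistent with the smallness of $\sigma$ and $\delta$):

```python
import numpy as np

# WX + XA^T = 0 has only X = 0 iff spec(W) and spec(-A^T) are disjoint.
# With small sigma and delta, spec(W) clusters near 1/2 and spec(-A^T)
# near -1/2, so the kernel is trivial.  Values below are illustrative.
J = 8
alpha, sigma, delta = 0.25, 0.01, 0.02
a1, a2, c2 = 0.5 + 2*alpha*sigma, -alpha*sigma, alpha*delta

def banded(d, off):
    # Tridiagonal with doubled first/last off-diagonal (Neumann ghost points)
    M = d*np.eye(J+1) + off*np.eye(J+1, k=1) + off*np.eye(J+1, k=-1)
    M[0, 1] *= 2
    M[J, J-1] *= 2
    return M

A = banded(a1, a2)
R = banded(-2.0, 1.0)
W = A + 2*a2*c2*(R @ R)

lam_W = np.linalg.eigvals(W)
lam_A = np.linalg.eigvals(A)       # eigenvalues of A^T equal those of A
gap = min(abs(w + a) for w in lam_W for a in lam_A)
print(gap)    # bounded away from 0 => ker L_{W,A} = {0}
```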
Denoting \begin{equation*} \omega=2a_{2}c_{2},\,\,\omega_{1}=a_{1}+6\omega,\;\;\overline{\omega} _{1}=\omega_{1}+\omega\quad\hbox{and}\quad\omega_{2}=a_{2}-4\omega \end{equation*} the matrix $W$ is explicitly given by \begin{equation*} W= \begin{pmatrix} \omega _{1} & 2\omega _{2} & 2\omega & 0 & \dots & \dots & \dots & 0 & \cr\omega _{2} & \overline{\omega }_{1} & \omega _{2} & \omega & \ddots & \ddots & \ddots & \vdots & \cr\omega & \omega _{2} & \omega _{1} & \omega _{2} & \omega & \ddots & \ddots & \vdots & \cr0 & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots & \cr\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 & \cr\vdots & \ddots & \ddots & \omega & \omega _{2} & \omega _{1} & \omega _{2} & \omega & \cr\vdots & \ddots & \ddots & \ddots & \omega & \omega _{2} & \overline{\omega }_{1} & \omega _{2} & \cr0 & \dots & \dots & \dots & 0 & 2\omega & 2\omega _{2} & \omega _{1} & \end{pmatrix} \end{equation*} Next, we use the following preliminary result of differential calculus (see \cite{HenriCartan} for example). \begin{lemma} \label{LemmeInversion} Let $E$ be a finite dimensional ($\mathbb{R}$ or $ \mathbb{C}$) vector space and $(\Phi_{n})_{n}$ be a sequence of endomorphisms converging uniformly to an invertible endomorphism $\Phi$. Then, there exists $n_{0}$ such that, for any $n\geq\,n_{0}$, the endomorphism $\Phi_{n}$ is invertible. \end{lemma} The proof is simple and can be found in differential calculus references such as \cite{HenriCartan}. We recall it here for the convenience and clarity of the paper. Recall that the set $Isom(E)$ (the set of isomorphisms on $E$) is open in $L(E)$ (the set of endomorphisms of $ E$). Hence, as $\Phi\in\,Isom(E)$, there exists a ball $B(\Phi,r)\subset \,Isom(E)$. The elements $\Phi_n$ lie in this ball for large values of $n$, so they are invertible. Assume now that $l=o(h^{2+s})$ for some $s>0$, which is always possible.
Then, as $h\longrightarrow0$, the coefficients appearing in $A$ and $W$ satisfy the following. \begin{equation*} A_{i,i}=\displaystyle\frac{1}{2}+\varepsilon\,h^{2+2s}\longrightarrow \displaystyle\frac{1}{2}. \end{equation*} For $1\leq\,i\leq\,J-1$, \begin{equation*} A_{i,i-1}=A_{i,i+1}=\displaystyle\frac{A_{0,1}}{2}=\displaystyle\frac{ A_{J,J-1}}{2}=-\varepsilon\,h^{2+2s}\longrightarrow0. \end{equation*} For $2\leq\,i\leq\,J-2$, \begin{equation*} W_{i,i}=W_{0,0}=W_{J,J}=\displaystyle\frac{1}{2}+2\alpha\varepsilon \,h^{2+2s}-12\alpha^2\varepsilon\,h^{2s}\longrightarrow\displaystyle\frac{1}{ 2}. \end{equation*} Similarly, \begin{equation*} W_{1,1}=W_{J-1,J-1}=\displaystyle\frac{1}{2}+2\alpha\varepsilon\,h^{2+2s}-14 \alpha^2\varepsilon\,h^{2s}\longrightarrow\displaystyle\frac{1}{2} \end{equation*} and \begin{equation*} W_{i,i-1}=W_{i,i+1}=\displaystyle\frac{W_{0,1}}{2}=\displaystyle\frac{ W_{J,J-1}}{2}=-\alpha\varepsilon\,h^{2+2s}+8\alpha^2\varepsilon\,h^{2s} \longrightarrow0. \end{equation*} Finally, \begin{equation*} W_{i,i-2}=W_{i,i+2}=\displaystyle\frac{W_{0,2}}{2}=\displaystyle\frac{ W_{J,J-2}}{2}=-2\alpha^2\varepsilon\,h^{2s}\longrightarrow0. \end{equation*} Recall that the technical assumption $l=o(h^{2+s})$ is a necessary requirement for the resolution of the present problem and may not be necessary for other PDEs. See for example \cite{Benmabrouk2}, \cite {Benmabrouk1} and \cite{Benmabrouk3} for NLS and heat equations. Next, observing that for all $X$ in the space $\mathcal{M}_{(J+1)^2}(\mathbb{R})$, \begin{equation*} \begin{matrix} \|(\mathcal{L}_{W,A}-I)(X)\| & =\|(W-\frac{1}{2}I)X+X(A^T-\frac{1}{2} I)\| \cr & \leq\left[\|W-\frac{1}{2}I\|+\|A^T-\frac{1}{2}I\|\right] \|X\|, \end{matrix} \end{equation*} it follows that \begin{equation} \label{LWAtendsVersId} \|\mathcal{L}_{W,A}-I\|\leq\|W-\displaystyle\frac{1}{2}I\|+\|A^T- \displaystyle\frac{1}{2}I\|\leq\,C(\alpha)h^{2s}.
\end{equation} Consequently, the Lyapunov endomorphism $\mathcal{L}_{W,A}$ converges uniformly to the identity $I$ as $h$ goes to 0 with $l=o(h^{2+s})$, $ s>0$. Using Lemma \ref{LemmeInversion}, the operator $\mathcal{L}_{W,A}$ is invertible for $h$ small enough. \section{Consistency, Stability and Convergence of the Discrete Method} The consistency of the proposed method is established by evaluating the local truncation error arising from the discretization of the system \begin{equation} \label{system1} \left\{ \begin{matrix} u_{tt}-\Delta\,u-v_{xx}=0, \cr v=qu_{xx}+u^2. \end{matrix} \right. \end{equation} The principal part of the local truncation error due to the first equation is \begin{equation} \label{consistency1} \begin{matrix} \mathcal{L}_{u,v}^1(t,x,y) & = & \displaystyle\frac{l^2}{12}\displaystyle \frac{\partial^4u}{\partial\,t^4} -\displaystyle\frac{h^2}{12}\left( \displaystyle\frac{\partial^4u}{\partial\,x^4}+\displaystyle\frac{ \partial^4u }{\partial\,y^4}\right) -\alpha\,l^2\displaystyle\frac{ \partial^2(\Delta\,u) }{\partial\,t^2} \cr & - & \displaystyle\frac{h^2 }{12}\displaystyle \frac{\partial^4v}{\partial\,x^4} -\alpha\,l^2 \displaystyle\frac{\partial^4v }{\partial\,t^2\partial\,x^2} +O(l^2+h^2). \end{matrix} \end{equation} The principal part of the local truncation error due to the second equation is \begin{equation} \label{consistency2} \begin{matrix} \mathcal{L}_{u,v}^2(t,x,y) & = & \displaystyle\frac{l^2}{2}\displaystyle \frac{\partial^2v}{\partial\,t^2} +\displaystyle\frac{l^4}{24}\displaystyle \frac{\partial^4v}{\partial\,t^4} -\displaystyle\frac{h^2}{12}\displaystyle \frac{\partial^4u}{\partial\,x^4} \cr & - & \alpha\,l^2\displaystyle \frac{\partial^4u}{\partial\,t^2\partial\,x^2}+O(l^2+h^2). \end{matrix} \end{equation} It is clear that the two operators $\mathcal{L}_{u,v}^1$ and $\mathcal{L} _{u,v}^2$ tend toward 0 as $l$ and $h$ tend to 0, which ensures the consistency of the method. Furthermore, the method is consistent of order 2 in time and space.
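The claimed second order in space can be observed empirically on the basic central stencil used throughout (the test function $\sin$ and the evaluation point are illustrative choices):

```python
import numpy as np

# Observed order of the central stencil (u(x+h) - 2u(x) + u(x-h))/h^2 for
# u = sin at x = 0.7: halving h should divide the truncation error by about
# 4, i.e. order 2, matching the consistency analysis above.
x = 0.7
errs = []
for h in (0.1, 0.05, 0.025):
    approx = (np.sin(x + h) - 2.0 * np.sin(x) + np.sin(x - h)) / h**2
    errs.append(abs(approx - (-np.sin(x))))          # exact u'' = -sin x
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
print(orders)   # both ratios close to 2
```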
We now prove the stability of the method by applying the Lyapunov criterion. A linear system $\mathcal{L}(x_{n+1},x_{n},x_{n-1}, \dots)=0$ is stable in the sense of Lyapunov if for any bounded initial solution $x_{0}$ the solution $x_{n}$ remains bounded for all $n\geq0$. Here, we will precisely prove the following result. \begin{lemma} \label{LyapunovStabilityLemma} $\mathcal{P}_{n}$: The solution $ (U^{n},V^{n}) $ is bounded independently of $n$ whenever the initial solution $(U^{0},V^{0})$ is bounded. \end{lemma} We will proceed by induction on $n$. Assume first that $ \|(U^0,V^0)\|\leq\eta$ for some positive $\eta$. Using the system (\ref {matrixform1}), we obtain \begin{equation} \label{LyapunovStability2} \left\{ \begin{matrix} \mathcal{L}_{W,A}(U^{n+1})=\mathcal{L}_{\widetilde{B},B}(U^{n})+b_2RV^n- \mathcal{L}_{W,A}(U^{n-1})-a_2R(F^{n-1}+F^n), \cr V^{n+1}=2c_2RU^{n+1}+2R(c_1U^{n}+c_2U^{n-1})-V^{n-1}+2\widehat{F^n}, \end{matrix} \right. \end{equation} where $\widetilde{B}=B-2a_2c_1R^2$. Consequently, \begin{equation} \label{LyapunovStability3} \begin{matrix} \|\mathcal{L}_{W,A}(U^{n+1})\|\leq\|\mathcal{L}_{\widetilde{B} ,B}\|.\|U^{n}\|+2|b_2|.\|V^n\| \cr \qquad\qquad\qquad\qquad\qquad +\| \mathcal{L}_{W,A}\|.\|U^{n-1}\|+2|a_2|(\|F^{n-1}\|+\|F^n\|) \end{matrix} \end{equation} and \begin{equation} \label{LyapunovStability4} \begin{matrix} \|V^{n+1}\|\leq4|c_2|.\|U^{n+1}\|+4(|c_1|.\|U^{n}\|+|c_2|.\|U^{n-1}\|) \cr \qquad\qquad\qquad\qquad\qquad +\|V^{n-1}\|+\|F^{n-1}\|+\|F^n\|.
\end{matrix} \end{equation} Next, recall that, for $l=o(h^{s+2})$ small enough, $s>0$, we have \begin{equation*} a_1=\displaystyle\frac{1}{2}+2\alpha\,h^{2s+2}\rightarrow\displaystyle\frac{1 }{2},\quad\,a_2=-\alpha\,h^{2s+2}\rightarrow0, \end{equation*} \begin{equation*} b_1=1-2(1-2\alpha)h^{2s+2}\rightarrow1,\quad\,b_2=(1-2\alpha)h^{2s+2} \rightarrow0, \end{equation*} \begin{equation*} c_1=(1-2\alpha)h^{-2}\rightarrow\infty\quad\hbox{and}\quad\,c_2=\alpha \,h^{-2}\rightarrow\infty, \end{equation*} \begin{equation*} a_2c_1=-\alpha(1-2\alpha)h^{2s}\rightarrow0. \end{equation*} As a consequence, for $h$ small enough, \begin{equation} \label{LtildeBB} \|\mathcal{L}_{\widetilde{B},B}\|\leq2\|B\|+2|a_2c_1|\|R\|^2\leq2 \max(|b_1|,2|b_2|)+4|a_2c_1|\leq2+4=6, \end{equation} and we have the following lemma, deduced from (\ref{LWAtendsVersId}). \begin{lemma} \label{LWABounded} For $h$ small enough, it holds for all $X\in \mathcal{M} _{(J+1)^{2}}(\mathbb{R})$ that \begin{equation*} \displaystyle\frac{1}{2}\|X\|\leq(1-C(\alpha)h^{2s})\|X\|\leq\|\mathcal{L} _{W,A}(X)\|\leq(1+C(\alpha)h^{2s})\|X\|\leq\displaystyle\frac{3}{2}\|X\|. \end{equation*} \end{lemma} \hskip-17pt Indeed, recall that equation (\ref{LWAtendsVersId}) affirms that $\|\mathcal{L}_{W,A}-I\|\leq\,C(\alpha)h^{2s}$ for some constant $C(\alpha)>0 $. Consequently, for any $X$ we get \begin{equation*} (1-C(\alpha)h^{2s})\|X\|\leq\|\mathcal{L}_{W,A}(X)\|\leq(1+C(\alpha)h^{2s}) \|X\|. \end{equation*} For $h\leq\displaystyle\frac{1}{(2C(\alpha))^{1/2s}}$, we obtain \begin{equation*} \displaystyle\frac{1}{2}\leq(1-C(\alpha)h^{2s})<(1+C(\alpha)h^{2s})\leq \displaystyle\frac{3}{2} \end{equation*} and thus Lemma \ref{LWABounded}. As a result, (\ref{LyapunovStability3}) yields that \begin{equation} \label{LyapunovStability3-1} \displaystyle\frac{1}{2}\|U^{n+1}\|\leq6\|U^{n}\|+2\|V^{n}\|+\displaystyle \frac{3}{2}\|U^{n-1}\|+2(\|F^{n-1}\|+\|F^{n}\|).
\end{equation} For $n=0$, this implies that \begin{equation} \|U^{1}\|\leq12\|U^{0}\|+4\|V^{0}\|+3\|U^{-1}\|+4(\|F^{-1}\|+\|F^{0}\|). \label{LyapunovStability3-2} \end{equation} Using the discrete initial condition \begin{equation*} U^{0}=U^{-1}+l\varphi , \end{equation*} where we identify the function $\varphi $ with the matrix whose coefficients are $\varphi_{j,m}=\varphi(x_{j},y_{m})$, we obtain \begin{equation} \|U^{-1}\|\leq\|U^{0}\|+l\|\varphi\|. \label{U-1Bounds} \end{equation} Observing that \begin{equation*} F_{j,m}^{-1}=F(U_{j,m}^{-1})=(U_{j,m}^{0}-l\varphi_{j,m})^{2}, \end{equation*} it follows that \begin{equation*} |F_{j,m}^{-1}|\leq|U_{j,m}^{0}|^{2}+2l|\varphi_{j,m}|.|U_{j,m}^{0}|+l^{2}| \varphi_{j,m}|^{2} \end{equation*} and consequently, \begin{equation} \|F^{-1}\|\leq\|U^{0}\|^{2}+2l\|\varphi\|.\|U^{0}\|+l^{2}\|\varphi\|^{2}. \label{F-1Bounds} \end{equation} Hence, equation (\ref{LyapunovStability3-2}) yields that \begin{equation} \|U^{1}\|\leq(15+8l\|\varphi\|)\|U^{0}\|+4\|V^{0}\|+8\|F^{0}\|+3l\|\varphi \|+4l^{2}\|\varphi\|^{2}. \label{LyapunovStability3-3} \end{equation} Now, the Lyapunov criterion for stability states exactly that \begin{equation} \label{LyapunovStability1} \forall\,\,\varepsilon>0,\,\exists\,\eta>0\,\,\,s.t.\,\,\|(U^{0},V^{0})\| \leq\eta\,\,\Rightarrow\,\,\|(U^{n},V^{n})\|\leq\varepsilon,\,\,\forall\,n \geq0. \end{equation} For $n=1$, requiring $\|(U^{1},V^{1})\|\leq\varepsilon$, we seek an $\eta>0$ for which $\|(U^{0},V^{0})\|\leq\eta$ suffices. Indeed, using (\ref{LyapunovStability3-3} ), it suffices to find $\eta $ such that \begin{equation} 8\eta^{2}+(19+8l\|\varphi\|)\eta+3l\|\varphi\|+4l^{2}\|\varphi\|^{2}- \varepsilon<0. \label{LyapunovStability3-4} \end{equation} The discriminant of this second order inequality is \begin{equation} \label{Delta1} \Delta(l,h)=(19+8l\|\varphi\|)^{2}-32(3l\|\varphi\|+4l^{2}\|\varphi\|^{2}- \varepsilon).
\end{equation} For $h,l$ small enough, this is estimated as \begin{equation*} \Delta(l,h)\sim361+32\varepsilon>0. \end{equation*} Thus the quadratic above has two zeros: $\eta_{1}= \displaystyle\frac{\sqrt{\Delta(l,h)}-(19+8l\|\varphi\|)}{16}>0$ and a second zero $\eta_2<0$, which is rejected. Consequently, choosing $\eta\in]0,\eta_{1}[$ we obtain (\ref{LyapunovStability3-4}). Finally, (\ref{LyapunovStability3-3}) yields that $\|U^{1}\|\leq\varepsilon$. Next, equation (\ref{LyapunovStability4}), for $n=0$, implies that \begin{equation} \label{LyapunovStability4-1} \|V^{1}\|\leq\,A(l,h,\varphi)\|U^{0}\|^{2}+B(l,h,\varphi)\|U^{0}\|+C(l,h, \varphi)+16|c_{2}|\|V^{0}\|, \end{equation} where \begin{equation*} A(l,h,\varphi)=3+32|c_{2}|, \end{equation*} \begin{equation*} B(l,h,\varphi)=4\left(|c_{1}|+8|c_{2}|(2+l\|\varphi\|)+l\|\varphi\|+ \displaystyle\frac{1}{h^{2}}\right) , \end{equation*} and \begin{equation*} C(l,h,\varphi)=2(1+8|c_{2}|)l^{2}\|\varphi\|^{2}+4l(4|c_{2}|+\displaystyle \frac{1}{h^{2}})\|\varphi\|. \end{equation*} Choosing $\|(U^{0},V^{0})\|\leq\eta$, it suffices to study the inequality \begin{equation} A(l,h,\varphi)\eta^{2}+\left(B(l,h,\varphi)+16|c_{2}|\right)\eta+C(l,h, \varphi)-\varepsilon\leq0. \label{V1Bound} \end{equation} Its discriminant satisfies, for $h,l$ small enough, \begin{equation} \label{Delta2} \Delta(l,h)\sim\displaystyle\frac{16}{h^{4}}\left(1+20\alpha+|1-2\alpha| \right)^{2}+\displaystyle\frac{128\alpha|q|}{h^{2}}\varepsilon>0. \end{equation} Here also there are two zeros: $\eta_{1}^{\prime}=\displaystyle\frac{\sqrt{ \Delta(l,h)}-(B(l,h,\varphi)+16|c_{2}|)}{2A(l,h,\varphi)}>0$ and a second one $\eta_{2}^{\prime}<0$, which is rejected. As a consequence, for $ \eta\in]0,\eta _{1}^{\prime}[$ we obtain $\|V^{1}\|\leq\varepsilon$. Finally, for $\eta\in]0,\eta_{0}[$ with $\eta_{0}=\min(\eta_{1},\eta_{1}^{ \prime})$, we obtain $\|(U^{1},V^{1})\|\leq\varepsilon$ whenever $ \|(U^{0},V^{0})\|\leq\eta$.
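As a numerical sanity check (ours, not part of the scheme; the values $\varepsilon=0.1$ and $\|\varphi\|=2$ are merely illustrative), the following sketch evaluates the discriminant (\ref{Delta1}) and the positive zero $\eta_{1}$, and confirms that $\Delta(l,h)\to361+32\varepsilon$ as $l\to0$ and that (\ref{LyapunovStability3-4}) holds for $0<\eta<\eta_{1}$.

```python
import math

def quadratic(eta, l, phi, eps):
    # left-hand side of inequality (LyapunovStability3-4)
    return 8 * eta**2 + (19 + 8 * l * phi) * eta + 3 * l * phi + 4 * l**2 * phi**2 - eps

def discriminant(l, phi, eps):
    # discriminant (Delta1) of the quadratic above
    return (19 + 8 * l * phi) ** 2 - 32 * (3 * l * phi + 4 * l**2 * phi**2 - eps)

eps, phi = 0.1, 2.0              # illustrative values; phi stands for ||varphi||
for l in (1e-2, 1e-4, 1e-6):
    d = discriminant(l, phi, eps)
    eta1 = (math.sqrt(d) - (19 + 8 * l * phi)) / 16.0
    assert eta1 > 0                                   # the positive zero eta_1
    assert abs(quadratic(eta1, l, phi, eps)) < 1e-8   # eta_1 is indeed a root
    assert quadratic(0.5 * eta1, l, phi, eps) < 0     # the inequality holds below eta_1

# limit l -> 0 of the discriminant is 361 + 32 eps, as stated in the estimate
assert abs(discriminant(0.0, phi, eps) - (361 + 32 * eps)) < 1e-12
```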
Assume now that $(U^{k},V^{k})$ is bounded (by $\varepsilon_{1}$) for $k=1,2,\dots,n$ whenever $(U^{0},V^{0})$ is bounded by $\eta$, and let $\varepsilon>0$. We shall prove that it is possible to choose $\eta$ so that $\|(U^{n+1},V^{n+1})\|\leq\varepsilon$. Indeed, from (\ref{LyapunovStability3-1}), we have \begin{equation} \label{LyapunovStability3Ordren-1} \|U^{n+1}\|\leq19\varepsilon_{1}+8\varepsilon_{1}^{2}. \end{equation} So one seeks $\varepsilon _{1}$ for which $8\varepsilon_{1}^{2}+19 \varepsilon _{1}-\varepsilon \leq 0$. Its discriminant is $\Delta=361+32 \varepsilon $, with one positive zero $\varepsilon _{1}=\displaystyle\frac{ \sqrt{361+32\varepsilon }-19}{16}$. Then $\|U^{n+1}\|\leq \varepsilon $ whenever $\|(U^{k},V^{k})\|\leq\varepsilon_{1}$, $k=1,2,\dots,n$. Next, using (\ref{LyapunovStability4}) and (\ref{LyapunovStability3Ordren-1}), we have \begin{equation} \|V^{n+1}\|\leq\left(4|c_{1}|+80|c_{2}|+1\right)\varepsilon_{1}+ \left(32|c_{2}|+2\right)\varepsilon_{1}^{2}. \label{LyapunovStability3Ordren-2} \end{equation} So, it suffices as previously to choose $\varepsilon_{1}$ such that \begin{equation*} \left(32|c_{2}|+2\right)\varepsilon_{1}^{2}+\left(4|c_{1}|+80|c_{2}|+1 \right)\varepsilon_{1}-\varepsilon\leq 0. \end{equation*} Its discriminant is $\Delta=(4|c_{1}|+80|c_{2}|+1)^{2}+4(32|c_{2}|+2)\varepsilon$, with positive zero $\varepsilon_{1}^{\prime}=\displaystyle\frac{\sqrt{\Delta} -(4|c_{1}|+80|c_{2}|+1)}{2(32|c_{2}|+2)}$. Then $\|V^{n+1}\|\leq\varepsilon $ whenever $\|(U^{k},V^{k})\|\leq\varepsilon_{1}^{\prime}$, $k=1,2,\dots ,n$. Finally, by the induction hypothesis, for $\varepsilon _{0}=\min (\varepsilon _{1},\varepsilon_{1}^{\prime })$ there exists $\eta >0$ for which $\|(U^{0},V^{0})\|\leq \eta $ implies that $\|(U^{k},V^{k})\|\leq \varepsilon _{0}$, for $k=1,2,\dots ,n$, which in turn implies that $ \|(U^{n+1},V^{n+1})\|\leq \varepsilon $.
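The explicit zero $\varepsilon_{1}=(\sqrt{361+32\varepsilon}-19)/16$ used in the induction step can be verified numerically (a small sketch of ours, with an illustrative $\varepsilon$):

```python
import math

eps = 0.05                                     # illustrative tolerance
eps1 = (math.sqrt(361 + 32 * eps) - 19) / 16.0
assert eps1 > 0
# eps1 is the positive zero of 8 t^2 + 19 t - eps, so the recursion bound
# ||U^{n+1}|| <= 19 eps_1 + 8 eps_1^2 collapses to exactly eps at this choice:
assert abs(8 * eps1**2 + 19 * eps1 - eps) < 1e-9
```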
\begin{lemma} \label{laxequivresult} Since the numerical scheme is consistent and stable, it is convergent. \end{lemma} This lemma is a consequence of the well-known Lax-Richtmyer equivalence theorem, which states that for consistent numerical approximations, stability and convergence are equivalent. Recall here that we have already proved in (\ref{consistency1}) and (\ref{consistency2}) that the scheme is consistent. Next, Lemma \ref{LyapunovStabilityLemma}, Lemma \ref{LWABounded} and equation (\ref{LyapunovStability1}) yield the stability of the scheme. Consequently, the Lax equivalence theorem guarantees the convergence, which proves Lemma \ref{laxequivresult}. \section{Numerical implementations} We present in this section some numerical examples to validate the theoretical results developed previously. The error between the exact solutions and the numerical ones will be estimated via a discrete $L_{2}$ norm. The matrix norm used will be \begin{equation*} \|X\|_{2}=\left( \displaystyle\sum_{i,j}|X_{ij}|^{2}\right)^{1/2} \end{equation*} for a matrix $X=(X_{ij})\in \mathcal{M}_{N+2}(\mathbb{C})$. Denote by $u^{n}$ the net function $u(x,y,t^{n})$ and by $U^{n}$ the numerical solution. We propose to compute the discrete error \begin{equation} Er=\displaystyle\max_{n}\|U^{n}-u^{n}\|_{2} \label{Er} \end{equation} on the grid $(x_{i},y_{j})$, $0\leq\,i,j\leq\,J+1$, and the relative error between the exact solution and the numerical one, \begin{equation} Relative\,Er=\displaystyle\max_{n}\displaystyle\frac{\|U^{n}-u^{n}\|_{2}}{ \|u^{n}\|_{2}}, \label{Errelative} \end{equation} on the same grid. \subsection{A Polynomial-Exponential Example} We develop in this part a classical example based on a polynomial function with an exponential envelope.
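The error measures (\ref{Er}) and (\ref{Errelative}) can be implemented directly; the sketch below is our own illustration (the function names are ours, and the random matrices merely stand in for exact and computed solutions).

```python
import numpy as np

def l2_norm(X):
    # discrete L2 (Frobenius-type) matrix norm ||X||_2
    return float(np.sqrt(np.sum(np.abs(X) ** 2)))

def max_errors(numerical, exact):
    # Er = max_n ||U^n - u^n||_2  and  Relative Er = max_n ||U^n - u^n||_2 / ||u^n||_2
    er = max(l2_norm(U - u) for U, u in zip(numerical, exact))
    rel = max(l2_norm(U - u) / l2_norm(u) for U, u in zip(numerical, exact))
    return er, rel

rng = np.random.default_rng(0)
exact = [rng.standard_normal((12, 12)) for _ in range(5)]             # stand-in net functions u^n
numerical = [u + 1e-3 * rng.standard_normal(u.shape) for u in exact]  # perturbed "numerical" U^n
er, rel = max_errors(numerical, exact)
assert 0 < er < 1e-1 and 0 < rel < 1e-2
```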
We consider the inhomogeneous problem \begin{equation} \left\{ \begin{array}{l} u_{tt}=\Delta\,u+v_{xx}+g(x,y,t),\quad(x,y,t)\in\Omega\times(t_{0},T), \\ v=qu_{xx}+u^{2},\quad(x,y,t)\in\Omega\times(t_{0},T), \\ \left(u,v\right)(x,y,t_{0})=\left(u_{0},v_{0}\right)(x,y),\quad(x,y)\in \overline{\Omega}, \\ \frac{\partial\,u}{\partial\,t}(x,y,t_{0})=\varphi(x,y),\quad(x,y)\in \overline{\Omega}, \\ \overrightarrow{\nabla}(u,v)=0,\quad(x,y,t)\in\partial\Omega\times(t_{0},T), \end{array} \right. \end{equation} where $\Omega=[-1,1]^{2}$ and where the right hand term is \begin{eqnarray*} g\left(x,y,t\right)&=&\left[\left(x^{2}-1\right)^{2}(x^{4}-58x^{2}+9)-48 \left(35x^{4}-30x^{2}+3\right)+y^{4}-14y^{2}+5\right]e^{-t} \\ &&-16(x^{2}-1)^{2}\left[\left(x^{2}-1\right)^{4}\left(15x^{2}-1\right)+ \left(y^{2}-1\right)^{2}\left(7x^{2}-1\right)\right]e^{-2t}. \end{eqnarray*} The exact solution is \begin{equation} u(x,y,t)=\left[\left(x^{2}-1\right)^{4}+\left(y^{2}-1\right)^{2}\right] e^{-t}. \end{equation} In the following table, numerical results are provided. We computed, for different space and time steps, the discrete $L_{2}$-error estimates defined by (\ref{Er}). The time interval is $[0,1]$, for the choice $t_{0}=0$ and $T=1$. The following results are obtained for different values of the parameters $J$ (and thus $h$) and $l$ (and thus $N$). The parameters $q$ and $\alpha $ are fixed to $q=0.01$ and $\alpha =0.25$. We notice that variations of these latter parameters induce an important variation in the error estimates, which reflects the effect of the parameter $q$, which plays the role of a viscosity-type coefficient, and of the barycenter parameter $\alpha $, which calibrates the position of the approximated solution around the exact one. Finally, a comparison with our work in \cite{Benmabrouk1} has shown that Lyapunov-type operators already result in fast convergent algorithms, with a maximum execution time of 2.014 s for the present one.
The classical tri-diagonal algorithms associated to the same problem with the same discrete scheme and the same parameters yielded a maximum time of 552.012 s, so the present algorithm runs in about $23\cdot10^{-4}$ of the execution time of the classical one. We recall that the tests are done on a Pentium Dual Core CPU 2.10 GHz processor with 250 MB RAM. \begin{table}[th] \begin{center} \centerline{Table 1.} \begin{tabular}{||l|l|l|l||} \hline\hline J & $l$ & $Er$ & $Relative\,Er$ \\ \hline 10 & 1/100 & $4.0\times10^{-3}$ & 0.1317 \\ \hline 16 & 1/120 & $3.3\times10^{-3}$ & 0.1323 \\ \hline 20 & 1/200 & $2.0\times10^{-3}$ & 0.1335 \\ \hline 24 & 1/220 & $1.8\times10^{-3}$ & 0.1337 \\ \hline 30 & 1/280 & $1.4\times10^{-3}$ & 0.1340 \\ \hline 40 & 1/400 & $9.8\times10^{-4}$ & 0.1344 \\ \hline 50 & 1/500 & $7.8\times10^{-4}$ & 0.1346 \\ \hline\hline \end{tabular} \end{center} \end{table} \subsection{A 2-Particles Interaction Example} The example developed hereafter is a model of the interaction of two particles or two waves. We consider the inhomogeneous problem \begin{equation} \left\{ \begin{array}{l} u_{tt}=\Delta\,u+v_{xx}+g(x,y,t),\quad(x,y,t)\in\Omega\times(t_{0},T), \\ v=qu_{xx}+u^{2},\quad(x,y,t)\in\Omega\times(t_{0},T), \\ \left(u,v\right)(x,y,t_{0})=\left(u_{0},v_{0}\right)(x,y),\quad(x,y)\in \overline{\Omega}, \\ \frac{\partial\,u}{\partial\,t}(x,y,t_{0})=\varphi(x,y),\quad(x,y)\in \overline{\Omega}, \\ \overrightarrow{\nabla}(u,v)=0,\quad(x,y,t)\in\partial\Omega\times(t_{0},T), \end{array} \right. \end{equation} where \begin{equation*} g\left(x,y,t\right)=(4-6\psi^{2}(y))u^{2}-\psi^{2}(x)u.
\end{equation*} and $u$ is the exact solution given by \begin{equation*} u(x,y,t)=2\psi^{2}(x)\psi^{2}(y)\theta(t) \end{equation*} with \begin{equation*} \psi(x)=\cos\left(\frac{x}{2}\right),\qquad\theta(t)=e^{-it}, \end{equation*} and \begin{equation*} \varphi(x,y)=-2i\psi^{2}(x)\psi^{2}(y). \end{equation*} As for the previous example, the following table shows, for different space and time steps, the discrete $L_{2}$-error estimates defined by (\ref{Er}) and the relative error (\ref{Errelative}). The space domain is $\Omega=[-2\pi,2\pi]^{2}$ and the time interval is $[0,1]$, for the choice $t_{0}=0$ and $T=1$. The following results are obtained for different values of the parameters $J$ (and thus $h$) and $l$ (and thus $N$). The parameters $q$ and $\alpha $ are fixed as previously, $q=0.01$ and $\alpha =0.25$. Compared to the tri-diagonal scheme, the present one leads to a faster convergent algorithm. \begin{table}[th] \begin{center} \centerline{Table 2.} \begin{tabular}{||l|l|l|l||} \hline\hline J & $l$ & $Er$ & $Relative\,Er$ \\ \hline 10 & 1/100 & $4.6\times10^{-3}$ & 0.2311 \\ \hline 16 & 1/120 & $4.4\times10^{-3}$ & 0.2372 \\ \hline 20 & 1/200 & $2.4\times10^{-3}$ & 0.2506 \\ \hline 24 & 1/220 & $2.3\times10^{-3}$ & 0.2671 \\ \hline 30 & 1/280 & $2.0\times10^{-3}$ & 0.3074 \\ \hline 40 & 1/400 & $1.4\times10^{-3}$ & 0.3592 \\ \hline 50 & 1/500 & $7.6\times10^{-4}$ & 0.2355 \\ \hline\hline \end{tabular} \end{center} \end{table} \begin{remark} For the convenience of the paper, we give here some computations of the discriminants $\Delta(l,h)$ for different values of the parameters of the discrete scheme. Firstly, for both examples above, we can easily see that $ \|\varphi\|=2$ and thus equation (\ref{Delta1}) yields that \begin{equation*} \Delta(l,h)=361+32\varepsilon+416\,l-256\,l^2. \end{equation*} For the different values of $l$ as in Tables 1 and 2, we obtain a positive discriminant leading to two zeros, one of which is rejected.
For the discriminant of equation (\ref{Delta2}) we obtain \begin{equation*} \Delta(l,h)=\displaystyle\frac{676}{h^4}+\displaystyle\frac{8\varepsilon}{h^2 }. \end{equation*} Hence, the results explained previously hold. \end{remark} \section{Conclusion} This paper investigated the solution of the well-known Boussinesq equation in the two-dimensional case by applying a two-dimensional finite difference discretization. The Boussinesq equation in its original form is a 4-th order partial differential equation. Thus, in a first step it was recast into a system of second order partial differential equations using an order-reduction idea. Next, the system has been transformed into an algebraic discrete system involving Lyapunov-Sylvester matrix terms by using a full time-space discretization. Solvability, consistency, stability and convergence are then established by applying well-known methods such as the Lax-Richtmyer equivalence theorem and Lyapunov stability, and by examining the Lyapunov-Sylvester operators. The method was finally validated on numerical examples. It was shown to be efficient by means of error estimates as well as execution times compared to classical algorithms. \section{Appendix} \subsection{The Tridiagonal Associated System} Consider the lexicographic mesh $k=j(J+1)+m$ for $0\leq\,j,m\leq\,J$, and denote $N=J(J+2)$, and \begin{equation*} \Lambda_N=\{nJ+n-1\;;\;n\in\mathbb{N}\}\;\;,\;\;\widetilde{\Lambda} _N=\{n(J+1)\;;\;n\in\mathbb{N}\}\quad\hbox{and}\quad \Theta_N=\Lambda_N\cup \widetilde{\Lambda}_N. \end{equation*} We obtain a tri-diagonal block system of the form \begin{equation} \label{tridiagonalsystem1} \left\{ \begin{matrix} \widetilde{A}U^{n+1}+a_2\widetilde{R}V^{n+1}=\widetilde{B}U^{n}- \widetilde{A}U^{n-1}+b_2\widetilde{R}V^{n}-a_2\widetilde{R}V^{n-1} \cr \,V^{n+1}-2c_2\widetilde{R}U^{n+1}=2c_1\widetilde{R} U^{n}+2c_2\widetilde{R}U^{n-1}-V^{n-1}+2\widehat{F^{n}}. \end{matrix} \right.
\end{equation} The numerical solutions' matrices $U^n$ and $V^n$ are identified here with one-column $(N+1)$-vectors, and the matrices $\widetilde{A}$, $\widetilde{B}$ and $\widetilde{R}$ are evaluated as follows.\newline \underline{The matrix $\widetilde{A}$} \begin{equation*} \widetilde{A}_{j,j}=2a_1\;,\;\forall\,j\;\;\;;0\leq\,j\leq\,N, \end{equation*} \begin{equation*} \widetilde{A}_{j,j+1}=\displaystyle\frac{1}{2}\widetilde{A} _{0,1}=a_2\;,\;\forall\,j\;;\;\;1\leq\,j\leq\,N\,;\;\; j\notin\Theta_{N},\quad\hbox{and}\quad0\;\;\hbox{on}\;\;\Lambda_N, \end{equation*} \begin{equation*} \widetilde{A}_{j-1,j}=\displaystyle\frac{1}{2}\widetilde{A} _{N,N-1}=a_2\;,\;\forall\,j\;;\;\;1\leq\,j\leq\,N\,;\;\; j\notin\Theta_{N},\quad\hbox{and}\quad0\;\;\hbox{on}\;\;\widetilde{\Lambda} _N, \end{equation*} \begin{equation*} \widetilde{A}_{j,j+J+1}=2a_2\;,\;\forall\,j\;,\;\;0\leq\,j\leq\,J, \end{equation*} \begin{equation*} \widetilde{A}_{j,j+J+1}=a_2\;,\;\forall\,j\;,\;\;J+1\leq\,j\leq\,N-J-1, \end{equation*} \begin{equation*} \widetilde{A}_{j-J-1,j}=a_2\;,\;\forall\,j\;,\;\;J+1\leq\,j\leq\,N-J-1, \end{equation*} \begin{equation*} \widetilde{A}_{j-J-1,j}=2a_2\;,\;\forall\,j\;,\;\;N-J\leq\,j\leq\,N.
\end{equation*} \underline{The matrix $\widetilde{B}$} \begin{equation*} \widetilde{B}_{j,j}=2b_1\;,\;\forall\,j\;\;\;;0\leq\,j\leq\,N, \end{equation*} \begin{equation*} \widetilde{B}_{j,j+1}=\displaystyle\frac{1}{2}\widetilde{B} _{0,1}=b_2\;,\;\forall\,j\;;\;\;1\leq\,j\leq\,N\,;\;\; j\notin\Theta_{N},\quad\hbox{and}\quad0\;\;\hbox{on}\;\;\Lambda_N, \end{equation*} \begin{equation*} \widetilde{B}_{j-1,j}=\displaystyle\frac{1}{2}\widetilde{B} _{N,N-1}=b_2\;,\;\forall\,j\;;\;\;1\leq\,j\leq\,N\,;\;\; j\notin\Theta_{N},\quad\hbox{and}\quad0\;\;\hbox{on}\;\;\widetilde{\Lambda} _N, \end{equation*} \begin{equation*} \widetilde{B}_{j,j+J+1}=2b_2\;,\;\forall\,j\;,\;\;0\leq\,j\leq\,J, \end{equation*} \begin{equation*} \widetilde{B}_{j,j+J+1}=b_2\;,\;\forall\,j\;,\;\;J+1\leq\,j\leq\,N-J-1, \end{equation*} \begin{equation*} \widetilde{B}_{j-J-1,j}=b_2\;,\;\forall\,j\;,\;\;J+1\leq\,j\leq\,N-J-1, \end{equation*} \begin{equation*} \widetilde{B}_{j-J-1,j}=2b_2\;,\;\forall\,j\;,\;\;N-J\leq\,j\leq\,N. \end{equation*} \underline{The matrix $\widetilde{R}$} \begin{equation*} \widetilde{R}_{j,j}=-2\;,\;\forall\,j\;\;\;;0\leq\,j\leq\,N, \end{equation*} \begin{equation*} \widetilde{R}_{j,j+J+1}=2\;,\;\forall\,j\;,\;\;0\leq\,j\leq\,J, \end{equation*} \begin{equation*} \widetilde{R}_{j,j-J-1}=2\;,\;\forall\,j\;,\;\;N-J\leq\,j\leq\,N, \end{equation*} \begin{equation*} \widetilde{R}_{j,j+J+1}=\widetilde{R}_{j-J-1,j}=1\;,\;\forall\,j\;,\;\;J+1 \leq\,j\leq\,N-J-1. \end{equation*} \subsection{Outline of the applied algorithm} \begin{itemize} \item Compute the matrices of the system. \item Initialisation: compute the matrices $U^0$, $U^1$, $V^0$ and $V^1$. \item For $n\geq2$, \begin{equation*} U^n=lyap(W,A,\mathcal{L}_{\widetilde{B},B}(U^{n-1})+b_2RV^{n-1}-\mathcal{L} _{W,A}(U^{n-2})-a_2R(F^{n-2}+F^{n-1})), \end{equation*} and \begin{equation*} V^{n+1}=2c_2RU^{n}+2R(c_1U^{n-1}+c_2U^{n-2})-V^{n-2}+2\widehat{F^{n-1}}. \end{equation*} \end{itemize} \end{document}
\begin{document} \begin{frontmatter} \title{Universal frequency-preserving KAM persistence via modulus of continuity} \author{{ \blfootnote{$^{*}$Corresponding author at: School of Mathematics, Jilin University, Changchun 130012, People’s Republic of China} Zhicheng Tong$^{a}$ \footnote{ E-mail address : [email protected]} ,~ Yong Li$^{a,b,*}$} \footnote{E-mail address : [email protected]}\\ {$^{a}$College of Mathematics, Jilin University,} {Changchun 130012, P. R. China.}\\ {$^{b}$School of Mathematics and Statistics, and Center for Mathematics and Interdisciplinary Sciences, \\ Northeast Normal University,} {Changchun, 130024, P. R. China.} } \begin{abstract} In this paper, we study the persistence and remaining regularity of a KAM invariant torus under sufficiently small perturbations of a Hamiltonian function together with its derivatives, in the sense of finite smoothness with a modulus of continuity, as a generalization of the classical H\"{o}lder continuous circumstances. To achieve this goal, we extend the Jackson approximation theorem to the case of a modulus of continuity, and establish a corresponding regularity theorem adapted to the new iterative scheme. Via these tools, we establish a KAM theorem with sharp differentiability hypotheses, which asserts that the persistent torus keeps the prescribed universal Diophantine frequency unchanged, and which yields regularity of the persistent KAM torus beyond H\"older's type. \end{abstract} \begin{keyword} Hamiltonian system, KAM torus, frequency-preserving, modulus of continuity, Jackson approximation theorem. \MSC[2020] 37J40 \sep 70K60 \end{keyword} \end{frontmatter} \section{Introduction} The KAM theory mainly concerns the preservation of invariant tori of a Hamiltonian function $ H(y) $ under small perturbations (i.e., $H(y) \to H\left( {x,y,\varepsilon } \right) $, with $ n \in \mathbb{N}^+ $ degrees of freedom and $ \varepsilon>0 $ sufficiently small), which has a history of more than sixty years.
See, for instance, Kolmogorov and Arnold \cite{R-9,R-10,R-11}, Moser \cite{R-12,R-13}, P\"oschel \cite{Po1,Po2} and others. As is well known, for the frequency $ \omega = {H_y}\left( {y} \right) $ of the unperturbed system, we often require it to satisfy the following classical Diophantine condition (or to be of Diophantine class $ \tau $) \begin{equation}\label{dio} | {\langle {{\tilde k},\omega } \rangle } | \geq \alpha_ *{ | {{\tilde k }} |}^{-\tau} ,\;\;\forall 0 \ne \tilde k \in {\mathbb{Z}^n} \end{equation} with respect to $ \tau \geq n-1 $ and some $ \alpha_ *>0 $, where $ |\tilde k|: = \sum\nolimits_{j = 1}^n {|{{\tilde k}_j}|} $. Otherwise, the torus may break no matter how small the perturbation is. Furthermore, to ensure the KAM persistence one is also interested in the minimal order of derivatives required for $ H\left( {x,y,\varepsilon } \right) $. Much effort has been devoted to this problem in terms of H\"older continuity, including constructing counterexamples and reducing the differentiability hypotheses. For some classic foundational work, see Moser \cite{J-3}, Jacobowitz \cite{R-2}, Zehnder \cite{R-7,R-8}, Mather \cite{R-55}, Herman \cite{M1,M2}, Salamon \cite{salamon} and others. It is worth mentioning that very recently P\"oschel \cite{Po3} obtained a KAM theorem on the $ n $-dimensional torus (without action variables) based on a frequency of Diophantine class $ \tau=n-1 $ in \eqref{dio}. In particular, he pointed out that the derivatives of order $ n $ need not be continuous, but rather $ L^2 $ in a certain strong sense. Back to our concern on Hamiltonian systems with action-angular variables, it has long been conjectured that the minimum regularity requirement for the Hamiltonian function $ H $ is at least $ C^{2n} $.
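The Diophantine condition \eqref{dio} can be illustrated numerically. The sketch below is our own illustration, with $ n=2 $, $ \tau=1 $ and the golden-mean frequency $ \omega=(1,(1+\sqrt{5})/2) $, a standard example of a badly approximable vector; it scans all $ 0 \ne \tilde k $ with components bounded by $ K $ and checks that $ |\tilde k|^{\tau}|\langle \tilde k,\omega\rangle| $ stays bounded away from zero, so some $ \alpha_*>0 $ as in \eqref{dio} works on this range.

```python
import itertools, math

omega = (1.0, (1.0 + math.sqrt(5.0)) / 2.0)   # golden-mean frequency, n = 2
tau, K = 1.0, 200

worst = min(
    (abs(k1) + abs(k2)) ** tau * abs(k1 * omega[0] + k2 * omega[1])
    for k1, k2 in itertools.product(range(-K, K + 1), repeat=2)
    if (k1, k2) != (0, 0)
)
# bounded away from zero on this range: alpha_* = 0.5 works here
assert worst > 0.5
print(worst)
```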
Along with the idea of Moser, the best known H\"older case $ C^{\ell} $ with $ \ell >2\tau+2>2n $ has been established by Salamon in \cite{salamon}, where the prescribed frequency is of Diophantine class $ \tau>n-1 $ in \eqref{dio} (such frequencies have full Lebesgue measure, which reveals the universality of the KAM persistence), and the remaining regularity of the KAM torus is also shown to be of H\"older's type. More precisely, the resulting solutions are of class $ C^m $ with $ 0< m<2\ell-2\tau-2 $, and the function whose graph is the invariant torus is of class $ C^{m+\tau+1} $. Besides, the differentiability hypothesis is sharp due to the counterexamples of Herman \cite{M1,M2} et al., which will be explained later in \cref{subsubsub}. In the aspect of H\"older's type, see Bounemoura \cite{Bounemoura} and Koudjinan \cite{Koudjinan} for some new developments. Strictly weaker than H\"older continuity, Albrecht \cite{Chaotic} proved a KAM theorem via a strong Diophantine frequency of class $ \tau=n-1 $ in \eqref{dio}, which claims that $ C^{2n} $ plus a certain modulus of continuity $ \varpi $ satisfying the classical Dini condition \begin{equation}\label{cdini} \int_0^1 {\frac{{\varpi \left( x \right)}}{x}dx} < + \infty \end{equation} is enough for the KAM persistence. There are continuum many such strong Diophantine frequencies, and they form a set of zero Lebesgue measure, see details in \cite{Polecture}; therefore the corresponding KAM preservation is usually said to be non-universal. To the best of our knowledge, there is no other work on KAM via only a modulus of continuity except for \cite{Chaotic}. Back to our concern on universal KAM persistence in this paper, the best result so far still requires $ C^{2n} $ plus a certain H\"older continuity depending on the Diophantine nonresonance.
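The role of the Dini condition \eqref{cdini} can be made concrete for logarithmic H\"{o}lder moduli: with $ \varpi(x)=1/(-\ln x)^{\lambda} $, the substitution $ t=-\ln x $ turns the integral over $ (0,1/2] $ into $ \int_{\ln 2}^{\infty}t^{-\lambda}dt $, which converges if and only if $ \lambda>1 $. A small numerical sketch (our own illustration):

```python
import math

def dini_integral(lam, x_lo, x_hi=0.5):
    # \int_{x_lo}^{x_hi} dx / (x (-ln x)^lam), evaluated via the substitution t = -ln x
    a, b = -math.log(x_hi), -math.log(x_lo)   # integrate t^{-lam} over [a, b]
    if lam == 1.0:
        return math.log(b / a)
    return (b ** (1.0 - lam) - a ** (1.0 - lam)) / (1.0 - lam)

# lam = 2 > 1: truncated integrals stay bounded as x_lo -> 0+ (Dini condition holds) ...
bounded = [dini_integral(2.0, 10.0 ** -k) for k in (4, 8, 16)]
assert all(v < 1.0 / math.log(2.0) for v in bounded)
# ... while lam = 1 gives slow, unbounded growth (Dini condition fails)
unbounded = [dini_integral(1.0, 10.0 ** -k) for k in (4, 8, 16)]
assert unbounded[2] - unbounded[0] > 1.0
```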
It is therefore natural that one should consider the following questions: \begin{itemize} \item \textit{Can the H\"{o}lder smoothness in Salamon's KAM be further weakened into a general form of modulus of continuity?} \item \textit{If the invariant KAM torus persists, then what kind of smoothness does the torus have (H\"{o}lder continuity, or a more general modulus of continuity)?} \item \textit{Could the prescribed universal Diophantine frequency be kept unchanged?} \item \textit{Does there exist a Dini type integrability condition similar to \eqref{cdini} that reveals the explicit relation between nonresonance and regularity?} \end{itemize} To answer the above questions, there are at least four difficulties to overcome. Firstly, note that the Jackson approximation theorem for classical H\"{o}lder continuity is no longer valid at present; hence it must be developed to approximate the perturbed Hamiltonian function $ H\left( {x,y,\varepsilon } \right) $ in the sense of a modulus of continuity, as a crucial step. Secondly, it is also fundamental to establish a corresponding regularity iteration lemma to study the regularity of the invariant torus and the solution beyond H\"older's type. Thirdly, we need to set up a new KAM iterative scheme and prove its uniform convergence via these tools. Fourthly, it is somewhat difficult to extract an equilibrium integrability condition of nonresonance and regularity from the KAM iteration, as well as to further determine the remaining regularity. Indeed, to achieve the main result \cref{theorem1}, we apply \cref{Theorem1} to construct a series of analytic approximations to $ H\left( {x,y,\varepsilon } \right) $ with a modulus of continuity, and prove the persistence and regularity of the invariant torus via a modified KAM iteration as well as a generalized Dini type condition.
It should be pointed out that our results still admit sharpness of the differentiability $ C^{2n} $ due to Herman's work \cite{M1,M2}, where he showed the nonexistence of an invariant curve for an annulus mapping of H\"older regularity $ {C^{3 - \epsilon }} $ with any $ \epsilon $ close to $ 0^+ $, i.e., $ C^{2n}=C^{4} $ minus arbitrary H\"older continuity cannot admit KAM persistence when $ n=2 $. As some new efforts, our \cref{theorem1} applies to a wide range, including non-universal and universal KAM persistence, and reveals the integral relation between regularity and nonresonance. Apart from the above, it is well known that small divisors must lead to a loss of regularity, and our approach gives general estimates of the remaining KAM regularity without H\"older continuity for the first time. Particularly, as a direct application, our \cref{theorem1} can deal with the case of a general modulus of continuity for $ H\left( {x,y,\varepsilon }\right) $, such as the logarithmic H\"{o}lder continuity case, i.e., for all $ 0 < \left| {x - \xi } \right| + \left| {y - \eta } \right| \leq 1/2 $, \begin{displaymath} \left| {{\partial ^\alpha }H\left( {x,y,\varepsilon } \right) - {\partial ^\alpha }H\left( {\xi ,\eta ,\varepsilon } \right)} \right| \leq \frac{c}{{{{\left( { - \ln \left( {\left| {x - \xi } \right| + \left| {y - \eta } \right|} \right)} \right)}^\lambda }}} \end{displaymath} with respect to all $ \alpha \in {\mathbb{N}^{2n}}$ with $ \left| \alpha \right| = {2n } $, where $ n \geq 2 $, $\lambda>1$, $c, \varepsilon>0 $ are sufficiently small, $ \left( {x,y} \right) \in {\mathbb{T}^n} \times G $ with $ {\mathbb{T}^n}: = {\mathbb{R}^n}/ \mathbb{Z}^n $, and $ G \subset {\mathbb{R}^n} $ is a connected closed set with interior points. See \cref{section6} for more details. This paper is organized as follows.
In \cref{section2}, we first introduce some notions and properties for modulus of continuity, and establish a Jackson type approximation theorem based on them (the proof will be postponed to \cref{JACK}). Then we state our main results in this paper. Namely, considering that the higher-order derivatives of Hamiltonian function $ H $ with respect to the action-angular variables are only continuous, we present a KAM theorem (\cref{theorem1}) with sharp differentiability hypotheses under certain assumptions, involving a generalized Dini type integrability condition \textbf{(H1)}. The applications of this theorem are given in \cref{section6}, including non-universal (\cref{simichaos2}) and universal (\cref{Holder,lognew}) KAM persistence. For the former, we reach a conclusion similar to that in \cite{Chaotic}. As to the latter, we provide H\"{o}lder and H\"{o}lder plus Logarithmic H\"{o}lder circumstances, aiming to show the importance and universality of \cref{theorem1}. In particular, an explicit Hamiltonian function $ H $ is constructed, which cannot be studied by KAM theorems for finite smoothness via classical H\"{o}lder continuity, but the work generalized in this paper can be applied. \cref{section4} provides the proof of \cref{theorem1} and is mainly divided into two parts: the first part deals with the modified KAM steps via only modulus of continuity, while the second part is devoted to giving an iteration theorem (\cref{t1}) on regularity, which is used to analyze the remaining smoothness for the persistent invariant torus. \cref{proofsimichaos2,8,pflognew} present the proof of \cref{simichaos2,Holder,lognew} in \cref{section6}, respectively. \section{Statement of results} \label{section2} We first give some notions, including the modulus of continuity along with the norm based on it, the semi separability which will be used in \cref{Theorem1}, as well as the weak homogeneity which will appear in \cref{theorem1}. 
Denote by $ |\cdot| $ the sup-norm in $ \mathbb{R}^d $; the dimension $ d \in \mathbb{N}^+ $ may vary throughout this paper. We formulate that in the limit process, $ f_1(x)=\mathcal{O}^{\#}\left(f_2(x)\right) $ means there are absolute positive constants $ \ell_1 $ and $ \ell_2 $ such that $ {\ell _1}{f_2}\left( x \right) \leq {f_1}\left( x \right) \leq {\ell _2}{f_2}\left( x \right) $, and $ f_1(x)=\mathcal{O}\left(f_2(x)\right) $ implies that there exists an absolute positive constant $ \ell_3 $ such that $ |f_1(x)| \leq \ell_3 f_2(x) $, and finally $ f_1(x)\sim f_2(x) $ indicates that $ f_1(x) $ and $ f_2(x) $ are equivalent. \begin{definition}\label{d1} Let $ \varpi (t)>0 $ be a nondecreasing continuous function on the interval $ \left( {0,\delta } \right] $ with respect to some $ \delta >0 $ such that $ \mathop {\lim }\limits_{x \to {0^ + }} \varpi \left( x \right) = 0 $ and $ \mathop {\overline {\lim } }\limits_{x \to {0^ + }} x/{\varpi }\left( x \right) < + \infty $. Next, we define the following semi norm and norm for a continuous function $ f $ on $ {\mathbb{R}^n} $ ($f\in C^0$, for short) \begin{equation}\notag {\left[ f \right]_\varpi }: = \mathop {\sup }\limits_{x,y \in {\mathbb{R}^n},\;0 < \left| {x - y} \right| \leq \delta } \frac{{\left| {f\left( x \right) - f\left( y \right)} \right|}}{{\varpi \left( {\left| {x - y} \right|} \right)}},\;\;{\left| f \right|_{{C^0}}}: = \mathop {\sup }\limits_{x \in {\mathbb{R}^n}} \left| {f\left( x \right)} \right|. \end{equation} We say that $ f $ is $ C_{k,\varpi} $ continuous if $ f $ has partial derivatives $ {{\partial ^\alpha }f} $ for $ \left| \alpha \right| \leq k \in \mathbb{N} $ and satisfies \begin{equation}\label{k-w} {\left\| f \right\|_\varpi }: = \sum\limits_{\left| \alpha \right| \leq k} {\left( {{{\left| {{\partial ^\alpha }f} \right|}_{{C^0}}} + {{\left[ {{\partial ^\alpha }f} \right]}_\varpi }} \right)} < + \infty .
\end{equation} Denote by $ {C_{k,\varpi }}\left( {{\mathbb{R}^n}} \right) $ the space composed of all functions $ f $ satisfying \eqref{k-w}. \end{definition} Such a function $ \varpi $ is usually referred to as the modulus of continuity of $ f $. It can be seen that the well-known Lipschitz continuity and H\"{o}lder continuity are special cases in the above definition. In particular, for $ 0<\ell \notin \mathbb{N}^+ $, we denote by $ f \in {C^\ell }\left( {{\mathbb{R}^n}} \right) $ the function space in which the higher derivatives in $ \mathbb{R}^n $ are H\"{o}lder continuous, i.e., the modulus of continuity is of the form $ \varpi_{\mathrm{H}}^{\{\ell\}}(x)\sim x^{\{\ell\}} $, where $ \{\ell\} \in (0,1)$ denotes the fractional part of $ \ell $. As a generalization of classical H\"{o}lder continuity, we define the Logarithmic H\"{o}lder continuity with index $ \lambda > 0 $, where $ \varpi_{\mathrm{LH}}^{\lambda} \left( x \right) \sim 1/{\left( { - \ln x} \right)^\lambda } $, and we omit the range $ 0 < x \ll 1 $ without causing ambiguity. \begin{remark} For $ f:{\mathbb{R}^n} \to \Omega \subset {\mathbb{R}^d} $ with a modulus of continuity $ \varpi $, we modify the above designation to $ {C_{k,\varpi }}\left( {{\mathbb{R}^n},\Omega } \right) $. \end{remark} \begin{remark}\label{rema666} It is well known that a mapping defined on a bounded connected closed set in a finite dimensional space must have a modulus of continuity, see \cite{Herman3}. For example, for a function $ f(x) $ defined on $ [0,1] \subset {\mathbb{R}^1} $, it automatically admits the modulus of continuity \[{\omega _{f,\delta }}\left( x \right): = \mathop {\sup }\limits_{y \in \left[ {0,1} \right],0 < \left| {x - y} \right| \leq \delta } \left| {f\left( x \right) - f\left( y \right)} \right|.\] \end{remark} \begin{definition}\label{d5} Let $ {\varpi _1} $ and $ {\varpi _2} $ be moduli of continuity on the interval $ \left( {0,\delta } \right] $.
We say that $ {\varpi _1} $ is weaker (strictly weaker) than $ {\varpi _2} $ if $ \mathop {\overline\lim }\limits_{x \to {0^ + }} {\varpi _2}\left( x \right)/{\varpi _1}\left( x \right) <+\infty $ ($ =0 $). \end{definition} \begin{remark}\label{strict} Obviously any modulus of continuity is weaker than Lipschitz's type, and the Logarithmic H\"{o}lder's type $ \varpi_{\mathrm{LH}}^{\lambda} \left( x \right) \sim 1/{\left( { - \ln x} \right)^\lambda } $ with any $ \lambda > 0 $ is strictly weaker than arbitrary H\"{o}lder's type $ \varpi_{\mathrm{H}}^{\alpha}\left( x \right) \sim {x^\alpha } $ with any $ 0 < \alpha < 1 $. \end{remark} \begin{definition}[Semi separability]\label{d2} We say that $ \varpi $ in \cref{d1} is semi separable, if for $ x \geq 1 $, there holds \begin{equation}\label{Ox} \psi \left( x \right): = \mathop {\sup }\limits_{0 < r < \delta /x} \frac{{\varpi \left( {rx} \right)}}{{\varpi \left( r \right)}} = \mathcal{O}\left( x \right),\;\;x \to + \infty . \end{equation} \end{definition} \begin{remark}\label{Remarksemi} Semi separability directly leads to $ \varpi \left( {rx} \right) \leq \varpi \left( r \right)\psi \left( x \right) $ for $ 0 < rx \leq \delta $, which will be used in the proof of the Jackson type \cref{Theorem1} via only modulus of continuity. \end{remark} \begin{definition}[Weak homogeneity] \label{weak} A modulus of continuity $ \varpi $ is said to admit weak homogeneity, if for fixed $ 0<a<1 $, there holds \begin{equation}\label{erfenzhiyi} \mathop {\overline {\lim } }\limits_{x \to {0^ + }} \frac{{\varpi \left( x \right)}}{{\varpi \left( {ax} \right)}} < + \infty . \end{equation} \end{definition} It should be emphasized that semi separability and weak homogeneity are universal hypotheses. The H\"older and Lipschitz type automatically admit them. 
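Both properties can also be checked numerically for the logarithmic H\"{o}lder modulus $ \varpi_{\mathrm{LH}}^{\lambda}(x) \sim 1/(-\ln x)^{\lambda} $. The sketch below is our own illustration with the choices $ \lambda=2 $ and $ \delta=1/e $, for which the supremum in \eqref{Ox} can be computed in closed form as $ \psi(x)=(1+\ln x)^{\lambda} $.

```python
import math

lam = 2.0
w = lambda x: 1.0 / (-math.log(x)) ** lam   # logarithmic Hoelder modulus, 0 < x < 1

# weak homogeneity: w(x) / w(a x) -> 1 as x -> 0+ for fixed 0 < a < 1
a = 0.5
ratios = [w(x) / w(a * x) for x in (1e-4, 1e-8, 1e-16)]
assert ratios[0] > ratios[1] > ratios[2] > 1.0   # decreasing towards the limit
assert abs(ratios[-1] - 1.0) < 0.05              # already close to 1 at x = 1e-16

# semi separability with delta = 1/e: the supremum in (Ox) is attained as
# r -> delta / x, which gives psi(x) = (1 + ln x)**lam = O(x)
psi = lambda X: (1.0 + math.log(X)) ** lam
assert psi(1e6) / 1e6 < psi(1e3) / 1e3 < 1.0     # psi(x)/x decreases, so psi(x) = O(x)
```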
Many moduli of continuity weaker than the H\"older one are semi separable and also admit weak homogeneity, e.g., for the Logarithmic H\"{o}lder's type $ \varpi_{\mathrm{LH}}^{\lambda} \left( x \right) \sim 1/{\left( { - \ln x} \right)^\lambda } $ with any $ \lambda > 0 $, one verifies that $ \psi \left( x \right) \sim {\left( {\ln x} \right)^\lambda } = \mathcal{O}\left( x \right) $ as $ x \to +\infty $ in \eqref{Ox}, and $ \mathop {\overline {\lim } }\limits_{x \to {0^ + }} {\varpi_{\mathrm{LH}}^{\lambda}}\left( x \right)/{\varpi_{\mathrm{LH}}^{\lambda}}\left( {ax} \right) = 1 < + \infty $ with all $ 0<a<1 $ in \eqref{erfenzhiyi}. See more implicit examples in \cref{Oxlemma,ruotux}; in particular, \textit{it is pointed out that a convex modulus of continuity naturally possesses these two properties.} Next, we give a Jackson type approximation theorem beyond H\"older's type and some related corollaries based on \cref{d1,d2}; their proofs are postponed to \cref{JACK,proofcoro1,proofcoco2}, respectively. \begin{theorem}\label{Theorem1} There is a family of convolution operators \begin{equation}\notag {S_r}f\left( x \right) = {r^{ - n}}\int_{{\mathbb{R}^n}} {K\left( {{r^{ - 1}}\left( {x - y} \right)} \right)f\left( y \right)dy} ,\;\;0 < r \leq 1, \end{equation} from $ {C^0}\left( {{\mathbb{R}^n}} \right) $ into the space of entire functions on $ {\mathbb{C}^n} $ with the following property.
For every $ k \in \mathbb{N} $, there exists a constant $ c\left( {n,k} \right)>0 $ such that, for every $ f \in {C_{k,\varpi }}\left( {{\mathbb{R}^n}} \right) $ with a semi separable modulus of continuity $ \varpi $, every multi-index $ \alpha \in {\mathbb{N}^n} $ with $ \left| \alpha \right| \leq k $, and every $ x \in {\mathbb{C}^n} $ with $ \left| {\operatorname{Im} x} \right| \leq r $, we have \begin{equation}\label{3.2} \left| {{\partial ^\alpha }{S_r}f\left( x \right) - {P_{{\partial ^\alpha }f,k - \left| \alpha \right|}}\left( {\operatorname{Re} x;\mathrm{i}\operatorname{Im} x} \right)} \right| \leq c\left( {n,k} \right){\left\| f \right\|_\varpi }{r^{k - \left| \alpha \right|}}\varpi(r), \end{equation} where the Taylor polynomial $ P $ is defined as follows \[{P_{f,k}}\left( {x;y} \right) := \sum\limits_{\left| \beta \right| \leq k} {\frac{1}{{\beta !}}{\partial ^\beta }f\left( x \right){y^\beta }}. \] Moreover, $ {{S_{r}}f} $ is real analytic whenever $ f $ is real valued. \end{theorem} As a direct consequence of \cref{Theorem1}, we give the following \cref{coro1,coro2}. These results have been widely used in the H\"older case; see for instance \cite{Koudjinan,salamon}. \begin{corollary}\label{coro1} The approximation function $ {{S_r}f\left( x \right)} $ in \cref{Theorem1} satisfies \begin{equation}\notag \left| {{\partial ^\alpha }\left( {{S_r}f\left( x \right) - f\left( x \right)} \right)} \right| \leq c_*{\left\| f \right\|_\varpi }{r^{k - \left| \alpha \right|}}\varpi(r) \end{equation} and \begin{equation}\notag \left| {{\partial ^\alpha }{S_r}f\left( x \right)} \right| \leq {c^ * }{\left\| f \right\|_\varpi } \end{equation} for $ x \in \mathbb{C}^n $ with $ \left| {\operatorname{Im} x} \right| \leq r $, $ |\alpha| \leq k $, where $ c_* = c_*\left( {n,k} \right) >0$ and $ {c^ * } = {c^ * }\left( {n,k,\varpi } \right) >0$ are some universal constants.
\end{corollary} \begin{corollary}\label{coro2} If the function $ f\left( x \right) $ in \cref{Theorem1} is also of period $ 1 $ in each of the variables $ {x_1}, \ldots ,{x_n} $ and has zero integral over $ {\mathbb{T}^n} $, then the approximation function $ {S_r}f\left( x \right) $ also satisfies these properties. \end{corollary} We are now in a position to give the frequency-preserving KAM theorem via only modulus of continuity in this paper. Before this, let us fix our parameter settings. Let $ n \geq 2$ (degrees of freedom), $\tau \geq n - 1$ (Diophantine index), $ 2\tau + 2 \leq k \in {\mathbb{N}^ + } $ (differentiable order) and a sufficiently large number $ M>0 $ be given. Consider a Hamiltonian function $ H(x,y):{\mathbb{T}^n} \times G \to \mathbb{R} $, where $ {\mathbb{T}^n}: = {\mathbb{R}^n}/ \mathbb{Z}^n $ and $ G \subset {\mathbb{R}^n} $ is a connected closed set with interior points. It follows from \cref{rema666} that $ H $ automatically has a modulus of continuity $ \varpi $. In view of the comments below \cref{weak}, we assume that $ \varpi $ admits semi separability (\cref{d2}) and weak homogeneity (\cref{weak}) without loss of generality. Besides, we make the following assumptions: \begin{itemize} \item[\textbf{(H1)}] Integrability condition for modulus of continuity: Assume that $ H \in {C_{k,\varpi }}\left( {{\mathbb{T}^n} \times G} \right) $ with the above modulus of continuity $ \varpi $. In other words, $ H $ has derivatives up to order $ k $, and the highest-order derivatives admit the regularity of $ \varpi $. Moreover, $ \varpi $ satisfies the Dini type integrability condition \begin{equation}\label{Dini} \int_0^1 {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}dx} < + \infty . \end{equation} \item[\textbf{(H2)}] Boundedness and nondegeneracy: \begin{equation}\notag {\left\| H \right\|_\varpi } \leq M,\;\;\left| {{{\left( {\int_{{\mathbb{T}^n}} {{H_{yy}}\left( {\xi ,0} \right)d\xi } } \right)}^{ - 1}}} \right| \leq M.
\end{equation} \item[\textbf{(H3)}] Diophantine condition: For some $ \alpha_ * > 0 $, the frequency $ \omega \in {\mathbb{R}^n} $ satisfies \begin{equation}\notag | {\langle {{\tilde k},\omega } \rangle } | \geq \alpha_ *{ | {{\tilde k }} |}^{-\tau} ,\;\;\forall 0 \ne \tilde k \in {\mathbb{Z}^n},\;\; |\tilde k|: = \sum\limits_{j = 1}^n {|{{\tilde k}_j}|} . \end{equation} \item[\textbf{(H4)}] KAM smallness: There holds \begin{align}\label{T1-2} &\sum\limits_{\left| \alpha \right| \leq k} {\left| {{\partial ^\alpha }\Big( {H\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {H\left( {\xi ,0} \right)d\xi } } \Big)} \right|{\varepsilon ^{\left| \alpha \right|}}} \notag \\ + &\sum\limits_{\left| \alpha \right| \leq k - 1} {\left| {{\partial ^\alpha }\left( {{H_y}\left( {x,0} \right) - \omega } \right)} \right|{\varepsilon ^{\left| \alpha \right| + \tau + 1}}} \leq M{\varepsilon ^k}\varpi \left( \varepsilon \right) \end{align} for every $ x \in \mathbb{R}^n $ and some constant $ 0 < \varepsilon \leq {\varepsilon ^ * } $. \item[\textbf{(H5)}] Criticality: For $ \varphi_i(x):=x^{k-(3-i)\tau-1}\varpi(x) $ with $ i=1,2 $, there exist critical $ k_i^*\in \mathbb{N}^+ $ such that \[\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ *+1 }}}}dx} < + \infty ,\;\;\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ * + 2}}}}dx} = + \infty .\] \end{itemize} Let us make some comments. \begin{itemize} \item[\textbf{(C1)}] Although the assumptions above may seem numerous, they are conditions abstracted from the H\"{o}lder continuous case, and they are needed in order to give the KAM theorem in the case of only modulus of continuity. Some of these conditions, e.g. \textbf{(H2)}-\textbf{(H3)}, are classical, while the others are mild. \item[\textbf{(C2)}] In view of \cref{rema666}, $ H $ automatically admits a modulus of continuity.
The Dini type integrability condition \eqref{Dini} in \textbf{(H1)} is a direct generalization of H\"older's type, which can be seen in \cref{Holder}. Interestingly, it becomes the classical Dini condition \eqref{cdini} if $ \tau=n-1 $ and $ k=2\tau+2=2n $. \item[\textbf{(C3)}] There is a large family of moduli of continuity satisfying the classical Dini condition \eqref{cdini}, such as the Logarithmic H\"{o}lder's type $ \varpi_{\mathrm{LH}}^{\lambda} \left( x \right) \sim 1/{\left( { - \ln x} \right)^\lambda } $ with $ \lambda>1 $, and even the more complicated case of the generalized Logarithmic H\"older's type \begin{equation}\label{mafan} \varpi_{\mathrm{GLH}}^{\varrho,\lambda} \left( x \right) \sim \frac{1}{{(\ln (1/x))(\ln \ln (1/x)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/x))}^\lambda }}} \end{equation} with any $ \varrho \in \mathbb{N}^+ $ and $ \lambda>1 $. In particular, $ \varpi_{\mathrm{LH}}^{\lambda}(x) \sim \varpi_{\mathrm{GLH}}^{1,\lambda}(x)$. Note that the above $ \lambda>1 $ cannot degenerate to $ 1 $, otherwise the Dini integral \eqref{cdini} diverges. \item[\textbf{(C4)}] Thanks to the Banach algebra properties available in the H\"older case, \textbf{(H4)} there only requires the term with $ \left| \alpha \right| = 0 $, and the higher-order derivatives need not satisfy the condition. However, for a general modulus of continuity it seems difficult to establish the corresponding Banach algebra properties, and we thus include the higher-order derivatives in \textbf{(H4)}. Sometimes they can be removed accordingly. \item[\textbf{(C5)}] The existence of $ k_i^* $ in \textbf{(H5)} is directly guaranteed by \textbf{(H1)}; actually this assumption is proposed in order to investigate the higher regularity of the persistent KAM torus, that is, regularity of class $ C^{k_i^*} $ plus a certain modulus of continuity.
In general, given an explicit modulus of continuity $ \varpi $, such $ k_i^* $ in \textbf{(H5)} are automatically determined by using asymptotic analysis, see \cref{section6}. \end{itemize} Finally, we state the following frequency-preserving KAM theorem under sharp differentiability via only modulus of continuity: \begin{theorem}[Main Theorem]\label{theorem1} Assume \textbf{(H1)}-\textbf{(H4)}. Then there is a solution \[x = u\left( \xi \right),\;\;y = v\left( \xi \right)\] of the following equation with the operator $ D: = \sum\limits_{\nu = 1}^n {{\omega _\nu }\frac{\partial }{{\partial {\xi _\nu }}}} $ \begin{equation}\notag Du = {H_y}\left( {u,v} \right),\;\;Dv = - {H_x}\left( {u,v} \right), \end{equation} such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables, where $ u $ and $ v $ are at least $ C^1 $. In addition, assume \textbf{(H5)}, then there exist $ {\varpi _i} $ ($ i=1,2 $) such that $ u \in {C_{k_1^ * ,{\varpi _1}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{k_2^ * ,{\varpi _2}}}\left( {{\mathbb{R}^n},G} \right) $. Particularly, $ {\varpi _i} $ can be determined as follows \begin{equation}\label{varpii} {\varpi _ i }\left( \gamma \right) \sim \gamma \int_{L_i\left( \gamma \right)}^\varepsilon {\frac{{\varphi_i \left( t \right)}}{{{t^{{k_i^*} + 2}}}}dt} = {\mathcal{O}^\# }\left( {\int_0^{L_i\left( \gamma \right)} {\frac{{\varphi_i \left( t \right)}}{{{t^{{k_i^*} + 1}}}}dt} } \right),\;\;\gamma \to {0^ + }, \end{equation} where $ L_i(\gamma) \to 0^+ $ are some functions such that the second relation in \eqref{varpii} holds for $ i=1,2 $. \end{theorem} \begin{remark} We call such a solution $x=u(\xi),y=v(\xi)$ the KAM one. 
\end{remark} \begin{remark} As in \cite{salamon}, the unperturbed systems under consideration might be non-integrable (e.g., $ H = \left\langle {\omega ,y} \right\rangle + \left\langle {A\left( x \right)y,y} \right\rangle + \cdots $), and the KAM persistence is frequency-preserving. The main difference from \cite{salamon} is that the regularity of the high-order derivatives and the derived smoothness of the persistent torus are weakened from the H\"{o}lder's type to only modulus of continuity. \end{remark} \begin{remark} Actually \cref{theorem1} provides a method for determining $ \varpi_i $ with $ i=1,2 $, see \eqref{varpii}. For a prescribed modulus of continuity of the Hamiltonian, such as the H\"older and Logarithmic H\"older types, we have to use asymptotic analysis to derive the concrete continuity of the KAM torus in \cref{section6}. \end{remark} As mentioned above, the H\"older's type $ H \in C^{\ell}(\mathbb{T}^n,G) $ with $ \ell>2\tau+2 $ (where $ \tau>n-1$ is the Diophantine exponent) is always regarded as the critical case. Let $ k=[\ell] $. Then $ k=2\tau+2=2n $ ($ \tau=n-1 $ at present) seems to be the critical case in our setting, and our Dini type integrability condition \eqref{Dini} becomes the classical Dini condition \eqref{cdini}! But it should be noted that such Diophantine frequencies with $ \tau=n-1 $ form only a set of zero Lebesgue measure and are therefore not enough to represent almost all frequencies. In other words, for universal KAM persistence, we may have to require the generalized Dini condition in \textbf{(H1)}, which reveals the deep relationship between the \textit{irrationality} of the frequency $ \omega $ and the \textit{order} and \textit{continuity} of the highest derivatives of the Hamiltonian $ H $. Obviously, if the highest differentiable order $ k $ of $ H $ satisfies $ k\geq 2\tau+3 $ or even larger, then \textbf{(H1)} becomes trivial because the integrand in \eqref{Dini} no longer has a singularity at $ 0 $.
But our KAM theorem still makes sense, because the regularity of the persistent torus will also increase. \section{Applications} \label{section6} In this section, we derive detailed regularity of the KAM torus, such as the H\"{o}lder and Logarithmic H\"{o}lder types. Denote by $ \{a\} $ and $ [a] $ the fractional part and the integer part of $ a\geq0 $, respectively. It should be emphasized that the Dini type integrability condition \eqref{Dini} in \textbf{(H1)} is easy to verify, that is, the KAM persistence is easy to obtain. However, some techniques of asymptotic analysis are needed to investigate the specific regularity of the KAM torus, which is mainly reflected in the selection of the functions $ L_i (\gamma)$ ($ i=1,2 $) in \eqref{varpii}. In particular, we will explicitly see the degree of regularity loss caused by small divisors; see for instance \cref{simichaos2,Holder,lognew} and the example shown in \cref{explicitexa}. We apply our \cref{theorem1} from two different perspectives. In \cref{subnon}, for the minimum regularity $ C^{2n} $ that is critical under our approach, we investigate KAM persistence in the sense of zero Lebesgue measure (corresponding to the non-universal case), i.e., we first let $ k=2n $ and then determine the Diophantine nonresonance $ \tau=n-1 $; while in \cref{subun}, for a Diophantine nonresonance $ \tau>n-1 $ of full Lebesgue measure given in advance (corresponding to the universal case), we study the minimum regularity requirement under our method. In what follows, the moduli of continuity under consideration are always convex near $ 0^+ $ and therefore automatically admit semi separability as well as weak homogeneity, as noted above.
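These two standing properties can also be probed numerically for the Logarithmic H\"older modulus. The following Python sketch is our own illustration, not part of the original argument; the index $\lambda=2$, the cutoff $\delta=1/2$ and the sampling grids are illustrative choices, and the supremum in \eqref{Ox} is only approximated on finitely many sample points:

```python
import math

def varpi_LH(x, lam=2.0):
    """Logarithmic Hoelder modulus varpi_LH^lam(x) = 1/(-ln x)^lam, for 0 < x < 1."""
    return 1.0 / (-math.log(x)) ** lam

delta = 0.5  # sample choice for the domain (0, delta]

# Semi separability: psi(X) = sup_{0<r<delta/X} varpi(rX)/varpi(r) should be O(X);
# for varpi_LH one expects psi(X) ~ (ln X)^lam, which grows far more slowly than X.
for X in (10.0, 100.0, 1000.0):
    sample_r = [delta / X * 10.0 ** (-j) for j in range(1, 8)]
    psi_approx = max(varpi_LH(r * X) / varpi_LH(r) for r in sample_r)
    assert psi_approx <= X  # consistent with psi(X) = O(X)

# Weak homogeneity: varpi(x)/varpi(a*x) stays bounded (and in fact tends to 1)
# as x -> 0+, for any fixed 0 < a < 1.
a = 0.5
ratios = [varpi_LH(x) / varpi_LH(a * x) for x in (10.0 ** (-j) for j in range(2, 12))]
assert all(r < 2.0 for r in ratios)
assert abs(ratios[-1] - 1.0) < 0.1  # the ratio approaches 1
```

In line with the verification recorded earlier, the approximate $\psi(X)$ stays far below $X$, and the weak homogeneity ratios tend to $1$.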
\subsection{Non-universal KAM persistence}\label{subnon} Focusing on non-universal KAM persistence for Hamiltonian systems in action-angle variables with $ n $ degrees of freedom, Albrecht \cite{Chaotic} proved that regularity of class $ C^{2n} $ plus a certain modulus of continuity satisfying the classical Dini condition \eqref{cdini} is enough. The frequencies he used are of Diophantine class $ \tau=n-1 $ in \textbf{(H3)}, i.e., of zero Lebesgue measure. However, it is still interesting to study the remaining regularity of the KAM torus, which has been unknown so far. By applying \cref{theorem1} we directly obtain the following \cref{simichaos} similar to that in \cite{Chaotic}; therefore the proof is omitted here. To illustrate our results, we provide an explicit example in \cref{simichaos2}, and the proof will be postponed to \cref{proofsimichaos2}. \begin{theorem}\label{simichaos} Let $ k=2n $ and $ \tau=n-1 $ be given. Assume that \textbf{(H1)}, \textbf{(H2)}, \textbf{(H3)} and \textbf{(H4)} hold with a convex modulus of continuity $ \varpi $. That is, the Hamiltonian $ H $ only has derivatives of order $ 2n $, the prescribed frequency is of Diophantine class $ n-1 $, and \textbf{(H1)} turns to the classical Dini condition \eqref{cdini}. Then the KAM persistence in \cref{theorem1} holds. \end{theorem} \begin{theorem}\label{simichaos2} In view of Comment \textbf{(C3)}, let the modulus of continuity in \cref{simichaos} be of the generalized Logarithmic H\"{o}lder's type in \eqref{mafan}, i.e., \begin{equation}\label{mafan2} \varpi_{\mathrm{GLH}}^{\varrho,\lambda} \left( x \right) \sim \frac{1}{{(\ln (1/x))(\ln \ln (1/x)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/x))}^\lambda }}} \end{equation} with $ \varrho \in \mathbb{N}^+ $ and $ \lambda>1 $.
Then the remaining regularity in \cref{theorem1} is $ u \in {C_{1 ,{\varpi _1}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{n,{\varpi _2}}}\left( {{\mathbb{R}^n},G} \right) $, where \begin{equation}\label{rem} {\varpi _1}\left( x \right) \sim {\varpi _2}\left( x \right) \sim \frac{1}{{{{(\underbrace {\ln \cdots \ln }_\varrho (1/x))}^{\lambda - 1}}}}. \end{equation} \end{theorem} \begin{remark} Particularly \eqref{mafan2} reduces to the Logarithmic H\"older's type $ \varpi_{\mathrm{LH}}^{\lambda}(x) \sim 1/{(-\ln x)^\lambda} $ with $ \lambda>1 $ as long as $ \varrho=1 $. As can be seen, the remaining regularity in \eqref{rem} is much weaker than that in \eqref{mafan2}, and it is indeed very weak if $ \lambda>1 $ is sufficiently close to $ 1 $ (but cannot degenerate to $ 1 $, see Comment \textbf{(C3)}), because the explicit modulus of continuity in \eqref{rem} tends to $ 0 $ quite slowly as $ x \to 0^+ $. \end{remark} \subsection{Universal KAM persistence}\label{subun} In this subsection, we always assume that the prescribed Diophantine frequencies $ \omega $ are of full Lebesgue measure, that is, $ \tau >n-1 $ in \textbf{(H3)}. Note that for fixed $ n $, the parameter $ \tau $ might be very large, and the frequencies of Diophantine class $ \tau $ are at least continuum many. Under such a setting, the known minimum regularity requirement for the Hamiltonian $ H $ is H\"older's type $ C^{\ell} $ with $ \ell>2\tau +2 $, see Salamon \cite{salamon} and \cref{Holder} below. Interestingly, if one considers weaker moduli of continuity, such as $ C^{2\tau+2} $ plus Logarithmic H\"older's type, the above regularity could be weakened; see our new \cref{lognew}. \subsubsection{H\"{o}lder continuous case}\label{subsubsub} \begin{theorem}\label{Holder} Let $ H \in C^{\ell} (\mathbb{T}^n,G) $ with $ \ell>2\tau +2 $, where $ \ell \notin \mathbb{N}^+ $, $ \ell-\tau \notin \mathbb{N}^+$ and $ \ell-2\tau \notin \mathbb{N}^+ $.
That is, $ H $ is of $ C_{k,\varpi} $ with $ k=[\ell] $ and $ \varpi(x)\sim \varpi_{\mathrm{H}}^{\{\ell\}}(x)\sim x^{\{\ell\}} $. Assume \textbf{(H2)}, \textbf{(H3)} and \textbf{(H4)}. Then there is a solution $ x = u\left( \xi \right),y = v\left( \xi \right) $ of the following equation with the operator $ D: = \sum\limits_{\nu = 1}^n {{\omega _\nu }\frac{\partial }{{\partial {\xi _\nu }}}} $ \begin{equation}\notag Du = {H_y}\left( {u,v} \right),\;\;Dv = - {H_x}\left( {u,v} \right) \end{equation} such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables. In addition, $ u \in {C^{\ell-2\tau-1}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C^{\ell-\tau-1}}\left( {{\mathbb{R}^n},G} \right) $. \end{theorem} \cref{Holder} has been completely proved in \cite{salamon}. Significantly, the differentiability hypothesis under consideration is sharp, i.e., it is close to the optimal one as in \cite{M1,M2}, where Herman gave a counterexample showing the nonexistence of an invariant curve for an annulus mapping of class $ {C^{3 - \epsilon }} $ with $ 0 < \epsilon \ll 1 $; this corresponds to the case $ n=2,\ell=4-\epsilon $ in our setting, which implies the sharpness of \cref{Holder}. See more in \cite{R-55,F-1}. \subsubsection{H\"{o}lder plus Logarithmic H\"{o}lder continuous case} To treat moduli of continuity weaker than H\"older's type, we establish the following \cref{lognew}. One will see later that \cref{lognew} employs more complicated asymptotic analysis than \cref{simichaos2}, and interestingly, the remaining regularities $ \varpi_1 $ and $ \varpi_2 $ admit different forms. In fact, \cref{lognew} can completely contain the case of \cref{simichaos2}, that is, $ \tau=n-1 $ and $ \varrho=1 $ in \eqref{mafan2}. However, in order to distinguish between Diophantine nonresonance of full and of zero Lebesgue measure, we present them separately.
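Before stating the theorem, let us record a sanity check (ours, not part of the original argument) that the modulus used below satisfies \textbf{(H1)}. With $ k=[2\tau+2] $ one has $ 2\tau+3-k=\{2\tau+2\}+1 $, so for $ \varpi(x) \sim x^{\{2\tau+2\}}/(-\ln x)^{\lambda} $ the Dini type integral near $ 0 $ reads
\begin{equation}\notag
\int_0^{\delta} \frac{\varpi \left( x \right)}{x^{2\tau+3-k}}\,dx \;\sim\; \int_0^{\delta} \frac{dx}{x\left( -\ln x \right)^{\lambda}} \;=\; \int_{-\ln \delta}^{+\infty} \frac{dt}{t^{\lambda}} \;<\; +\infty,\;\;0<\delta<1,
\end{equation}
after the substitution $ t=-\ln x $, and the last integral converges precisely because $ \lambda>1 $.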
\begin{theorem}\label{lognew} Let $ \tau>n-1 $ be given and let $ H\in C_{[2\tau+2], \varpi} $, where $ \varpi \left( x \right) \sim {x^{\{2\tau+2\}}}/{\left( { - \ln x} \right)^\lambda } $ with $ \lambda > 1 $. Assume \textbf{(H2)}, \textbf{(H3)} and \textbf{(H4)}. That is, $ H $ is of $ C^{k} $ plus the above $ \varpi $ with $k= [2\tau+2] $. Then there is a solution $ x = u\left( \xi \right),y = v\left( \xi \right) $ of the following equation with the operator $ D: = \sum\limits_{\nu = 1}^n {{\omega _\nu }\frac{\partial }{{\partial {\xi _\nu }}}} $ \begin{equation}\notag Du = {H_y}\left( {u,v} \right),\;\;Dv = - {H_x}\left( {u,v} \right) \end{equation} such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables. In addition, letting \[{\varpi _1}\left( x \right) \sim \frac{1}{{{{\left( { - \ln x} \right)}^{\lambda - 1}}}} \sim \varpi _{\mathrm{LH}}^{\lambda - 1}\left( x \right),\] and \[ {\varpi _2}\left( x \right) \sim \left\{ \begin{aligned} &{\frac{1}{{{{\left( { - \ln x} \right)}^{\lambda - 1}}}} \sim \varpi _{\mathrm{LH}}^{\lambda - 1}\left( x \right)},&n-1<\tau \in {\mathbb{N}^ + } , \\ &{\frac{{{x^{\left\{ \tau \right\}}}}}{{{{\left( { - \ln x} \right)}^\lambda }}} \sim {x^{\left\{ \tau \right\}}}\varpi _{\mathrm{LH}}^\lambda \left( x \right)},&n-1<\tau \notin {\mathbb{N}^ + } , \\ \end{aligned} \right.\] one has that $ u \in {C_{1 ,{\varpi _1}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{[\tau+1] ,{\varpi _2}}}\left( {{\mathbb{R}^n},G} \right) $. \end{theorem} \begin{remark} Similar to \cref{simichaos2}, one can also consider the generalized Logarithmic H\"older's type \eqref{mafan2} instead of the Logarithmic H\"older one. Only the latter is presented here for simplicity. 
\end{remark} \subsection{An explicit example of Logarithmic H\"{o}lder's type}\label{explicitexa} To illustrate the wider applicability of our theorems, we shall present an explicit example strictly beyond H\"older's type. Note that the H\"older plus Logarithmic H\"older regularity for $ H $ in \cref{lognew} becomes the simpler Logarithmic H\"older type when $ 2n<2\tau+2 \in \mathbb{N}^+ $ (because $ \{2 \tau+2 \}=0 $); we therefore consider the following setting. Recall \cref{lognew}. Let $ n = 2,\tau = 2, k = 6=[2\tau+2], {\alpha _ * } > 0,\lambda > 1$ and $M > 0 $ be given. Assume that $ \left( {x,y} \right) \in {\mathbb{T}^2} \times G $ with $ G := \{ {y \in {\mathbb{R}^2}:\left| y \right| \leq 1} \} $, and the frequency $ \omega = {\left( {{\omega _1},{\omega _2}} \right)^T} \in \mathbb{R}^2 $ satisfies \begin{equation}\notag | {\langle {{\tilde k},\omega } \rangle } | \geq \alpha_ *{ | {{\tilde k }} |}^{-2} ,\;\;\forall 0 \ne \tilde k \in {\mathbb{Z}^2},\;\;|\tilde k|: = |\tilde k_1|+|\tilde k_2|, \end{equation} i.e., such frequencies form a set of full Lebesgue measure. Now we construct a finitely smooth perturbation whose regularity is $ C^6 $ plus the Logarithmic H\"older type $ \varpi_{\mathrm{LH}}^{\lambda}(r) \sim 1/(-\ln r)^{\lambda} $ with index $ \lambda>1 $. Namely, define \begin{equation}\notag P(r): = \left\{ \begin{aligned} &{{\int_0^r { \cdots \int_0^{{s_2}} {\frac{1}{{{{(1 - \ln \left| {{s_1}} \right|)}^\lambda }}}d{s_1} \cdots d{s_6}} }}},&{0 < \left| r \right| \leq 1} , \\ &{0},&{r=0} . \\ \end{aligned} \right. \end{equation} Obviously $ P(r)\in C_{6,\varpi_{\mathrm{LH}}^{\lambda}} ([-1,1])$.
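The regularity claims about $ P $ can also be probed numerically. The Python sketch below is our own illustration; the index $ \lambda=2 $, the constant $ C=4 $ and the sampling grids are assumptions, not taken from the text. Part (a) samples the bound $ |P^{(6)}(x)-P^{(6)}(y)| \leq C\,\varpi_{\mathrm{LH}}^{\lambda}(|x-y|) $ in the ``$ 1-\ln $'' normalization used for $ P $, and part (b) illustrates that $ P^{(6)} $ fails every H\"older bound near $ 0 $:

```python
import math
import random

LAM = 2.0  # illustrative index lambda > 1 (an assumption for this sketch)

def g(s):
    """Sixth derivative of P on (0, 1]: g(s) = 1/(1 - ln s)^LAM."""
    return 1.0 / (1.0 - math.log(s)) ** LAM

def w_LH(h):
    """Logarithmic Hoelder modulus in the '1 - ln' normalization used for P."""
    return 1.0 / (1.0 - math.log(h)) ** LAM

# (a) sampled check that |g(x) - g(y)| <= C * w_LH(|x - y|) on (0, 1];
# the constant C = 4 is an assumed value sufficient for LAM = 2.
random.seed(0)
C = 4.0
for _ in range(10000):
    x, y = sorted(random.uniform(1e-12, 1.0) for _ in range(2))
    if x < y:
        assert g(y) - g(x) <= C * w_LH(y - x)  # g is increasing on (0, 1]

# (b) g is not Hoelder of any order ell: g(r)/r^ell grows without bound as r -> 0+.
for ell in (0.25, 0.5, 1.0):
    ratios = [g(10.0 ** -j) / (10.0 ** -j) ** ell for j in range(4, 13, 2)]
    assert all(a < b for a, b in zip(ratios, ratios[1:]))  # strictly increasing
```

Part (b) mirrors the lower bound $ \varepsilon/(1-\ln |y_1|)^{\lambda} \geq \varepsilon c_{\lambda,\ell} |y_1|^{\ell} $ exploited in the text.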
Let us consider the perturbed Hamiltonian function below with some constant $ 0 < {\varepsilon } < {\varepsilon ^ * } $ sufficiently small ($ {\varepsilon ^ * } $ depends on the constants given above): \begin{equation}\label{HH} H(x,y,\varepsilon ) = {\omega _1}{y_1} + {\omega _2}{y_2} + \frac{1}{M}(y_1^2 + y_2^2) + \varepsilon \left( {\sin (2\pi {x_1}) + \sin (2\pi {x_2}) + P({y_1}) + P\left( {{y_2}} \right)} \right). \end{equation} At this point, we have \begin{align*} \left| {{{\left( {\int_{{\mathbb{T}^2}} {{H_{yy}}\left( {\xi ,0} \right)d\xi } } \right)}^{ - 1}}} \right| &= \left| {{{\left( {\int_{{\mathbb{T}^2}} {\left( {\begin{array}{*{20}{c}} {2{M^{ - 1}}}&0 \\ 0&{2{M^{ - 1}}} \end{array}} \right)d\xi } } \right)}^{ - 1}}} \right| \notag \\ &= \left| {\left( {\begin{array}{*{20}{c}} {{2^{ - 1}}M}&0 \\ 0&{{2^{ - 1}}M} \end{array}} \right)} \right| \leq M < + \infty . \end{align*} In addition, one can verify that $ H \in {C_{6,{\varpi_{\mathrm{LH}}^{\lambda}}}}( {{\mathbb{T}^2} \times G} ) $ with $ \varpi_{\mathrm{LH}}^{\lambda}(r) \sim 1/(-\ln r)^{\lambda} $. However, for $ \tilde \alpha = {\left( {0,0,6,0} \right)^T} $ with $ \left| {\tilde \alpha } \right| = 6 = k $, we have \[\left| {{\partial ^{\tilde \alpha }}H\left( {{{\left( {0,0} \right)}^T},{{\left( {{y_1},0} \right)}^T},\varepsilon } \right) - {\partial ^{\tilde \alpha }}H\left( {{{\left( {0,0} \right)}^T},{{\left( {0,0} \right)}^T},\varepsilon } \right)} \right| = \frac{\varepsilon }{{{{(1 - \ln \left| {{y_1}} \right|)}^\lambda }}} \geq \varepsilon {c_{\lambda ,\ell }}{\left| {{y_1}} \right|^\ell }\] for any $ 0<\ell\leq1 $, where $ c_{\lambda ,\ell } >0$ is a constant that only depends on $ \lambda $ and $ \ell $. 
This implies that $ H \notin {C_{6,{\varpi _{\mathrm{H}}^\ell}}}( {{\mathbb{T}^2} \times G} ) $ with $ {\varpi _{\mathrm{H}}^\ell}(r) \sim {r^\ell } $, i.e., $ H \notin C^{6+\ell}( {{\mathbb{T}^2} \times G} ) $ with any $ 0<\ell \leq 1 $, because $ \varpi_{\mathrm{LH}}^{\lambda} $ is strictly weaker than $ \varpi _{\mathrm{H}}^\ell $; see also \cref{strict}. In other words, the highest derivatives (of order $ k=6 $) of $ H $ in \eqref{HH} can be rigorously proved to be Logarithmic H\"{o}lder continuous with index $ \lambda>1 $, but not of any H\"{o}lder's type. Therefore, the finitely smooth KAM theorems via classical H\"{o}lder continuity cannot be applied. However, all the assumptions of \cref{lognew} can be verified to be satisfied; hence the invariant torus persists, and the frequency $ \omega = {\left( {{\omega _1},{\omega _2}} \right)^T} $ of the unperturbed system remains unchanged. Moreover, the remaining regularity for the mappings $ u $ and $ v \circ u^{-1} $ in \cref{lognew} could also be determined as $ u \in {C_{1 ,{\varpi _\mathrm{LH}^{\lambda-1}}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{3 ,{\varpi _\mathrm{LH}^{\lambda-1}}}}\left( {{\mathbb{R}^n},G} \right) $, where $ \varpi _\mathrm{LH}^{\lambda-1}(r)\sim 1/(-\ln r)^{\lambda-1} $. More precisely, $ u $ is at least $ C^1 $, while $ v \circ u^{-1}$ is at least $ C^3 $, and their higher regularity is still not of any H\"older's type, but of Logarithmic H\"older type with index $ \lambda-1 $, i.e., lower than the original index $ \lambda>1 $; this is because the small divisors cause the loss of regularity. \section{Proof of \cref{theorem1}}\label{section4} Now let us prove \cref{theorem1} in two subsections, namely frequency-preserving KAM persistence (\cref{KAM}) and further regularity of the KAM torus (\cref{furtherregularity}).
For the former, the overall process is similar to that in \cite{salamon}, but the key points in weakening the H\"older regularity to only modulus of continuity are the use of \cref{Theorem1} and the proof of the uniform convergence of the transformation mappings, that is, the convergence of the upper bound series (see \eqref{dao} and \eqref{dao2}). As we will see later, the Dini type integrability condition \eqref{Dini} in \textbf{(H1)} guarantees this. As to the latter, we have to establish a more general iterative regularity theorem (\cref{t1}), which is not trivial since the resulting regularity might be somewhat complicated due to the asymptotic analysis. \subsection{Frequency-preserving KAM persistence}\label{KAM} The proof of the frequency-preserving KAM persistence is organized as follows. Firstly, we construct a sequence of analytic approximation functions $ H^\nu $ of $ H $ by using \cref{Theorem1} together with \textbf{(H1)} and \textbf{(H2)}. Secondly, we shall construct a sequence of frequency-preserving analytic and symplectic transformations $ \psi^\nu $ by induction. According to \textbf{(H2)}, \textbf{(H3)} and \textbf{(H4)}, the first step of the induction is established by applying \cref{appendix} in \cref{Appsalamon} (or Theorem 1 in \cite{salamon}). Then, combining weak homogeneity with certain specific estimates, we complete the induction and obtain the uniform convergence of the composite transformations. Finally, in light of \textbf{(H5)}, the regularity of the KAM torus is guaranteed by \cref{t1}.
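The role of \eqref{Dini} in this convergence can be made concrete with a small numerical experiment; the following Python sketch is our own illustration, and the values $ k=6 $, $ \tau=2 $, $ \varepsilon=1/4 $ and the truncation length $ N=60 $ are sample choices. By Cauchy condensation, upper-bound terms of the form $ r_\nu^{\,k-2\tau-2}\varpi(r_\nu) $ with $ r_\nu=2^{-\nu}\varepsilon $ behave like the Dini integral: for the Logarithmic H\"older modulus they are summable when $ \lambda>1 $ and not when $ \lambda=1 $:

```python
import math

def varpi_LH(x, lam):
    """Logarithmic Hoelder modulus 1/(-ln x)^lam for 0 < x < 1."""
    return 1.0 / (-math.log(x)) ** lam

def bound_terms(lam, k, tau, eps=0.25, N=60):
    """Upper-bound terms r_nu^(k - 2*tau - 2) * varpi(r_nu), with r_nu = 2^-nu * eps."""
    return [(eps * 2.0 ** -nu) ** (k - 2 * tau - 2) * varpi_LH(eps * 2.0 ** -nu, lam)
            for nu in range(N)]

# Critical case k = 2*tau + 2: the power of r_nu drops out and convergence
# rests entirely on the modulus, mirroring the classical Dini condition.
k, tau = 6, 2
convergent = sum(bound_terms(2.0, k, tau))  # lam = 2 > 1: Dini condition holds
divergent = bound_terms(1.0, k, tau)        # lam = 1: Dini condition fails

assert convergent < 2.0                     # partial sums stay uniformly bounded
# for lam = 1 the partial sums keep growing (harmonic-like behaviour):
assert sum(divergent) > sum(divergent[:30]) + 0.5
```

In the critical case displayed here the exponent of $ r_\nu $ vanishes, so everything indeed rests on the modulus, in line with the earlier observation that $ k=2\tau+2 $ reduces \textbf{(H1)} to the classical Dini condition.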
\\ \textbf{Step 1:} In view of \cref{Theorem1} (we have assumed that the modulus of continuity $ \varpi $ admits semi separability, and thus \cref{Theorem1} applies here), one can approximate $ H(x, y) $ by a sequence of real analytic functions $ H^\nu(x, y) $ for $ \nu \geq 0 $ in the strips \[\left| {\operatorname{Im} x} \right| \leq {r_\nu },\;\;\left| {\operatorname{Im} y} \right| \leq {r_\nu },\;\;{r_\nu }: = {2^{ - \nu }}\varepsilon \] around $ \operatorname{Re} x \in {\mathbb{T}^n},\left| {\operatorname{Re} y} \right| \leq \rho , $ such that \begin{equation}\label{T1-3} \begin{aligned} \left| {{H^\nu }\left( z \right) - \sum\limits_{\left| \alpha \right| \leq k} {{\partial ^\alpha }H\left( {\operatorname{Re} z} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} z} \right)}^\alpha }}}{{\alpha !}}} } \right| \leq{}& {c_1}{\left\| H \right\|_\varpi }r_\nu ^k\varpi \left( {{r_\nu }} \right),\\ \left| {H_y^\nu \left( z \right) - \sum\limits_{\left| \alpha \right| \leq k-1} {{\partial ^\alpha }{H_y}\left( {\operatorname{Re} z} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} z} \right)}^\alpha }}}{{\alpha !}}} } \right| \leq{}& {c_1}{\left\| H \right\|_\varpi }r_\nu ^{k - 1}\varpi \left( {{r_\nu }} \right), \\ \left| {H_{yy}^\nu \left( z \right) - \sum\limits_{\left| \alpha \right| \leq k-2} {{\partial ^\alpha }{H_{yy}}\left( {\operatorname{Re} z} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} z} \right)}^\alpha }}}{{\alpha !}}} } \right| \leq{}& {c_1}{\left\| H \right\|_\varpi }r_\nu ^{k - 2}\varpi \left( {{r_\nu }} \right) \end{aligned} \end{equation} for $ \left| {\operatorname{Im} x} \right| \leq {r_\nu },\;\left| {\operatorname{Im} y} \right| \leq {r_\nu } $, where $ c_1=c(n,k) $ is the constant provided in \eqref{3.2}. Fix $ \theta = 1/\sqrt 2 $.
In what follows, we will construct a sequence of real analytic symplectic transformations $ z=(x,y),\zeta=(\xi,\eta),z = {\phi ^\nu }\left( \zeta \right) $ of the form \begin{equation}\label{bhxs} x = {u^\nu }\left( \xi \right),\;\;y = {v^\nu }\left( \xi \right) + {\left( {u_\xi ^\nu {{\left( \xi \right)}^T}} \right)^{ - 1}}\eta \end{equation} by induction, such that $ {u^\nu }\left( \xi \right) - \xi $ and $ {v^\nu }\left( \xi \right) $ are of period $ 1 $ in all variables, and $ {\phi ^\nu } $ maps the strip $ \left| {\operatorname{Im} \xi } \right| ,\left| \eta \right| \leq \theta {r_{\nu + 1}} $ into $ \left| {\operatorname{Im} x} \right| ,\left| y \right| \leq {r_\nu },\left| {\operatorname{Re} y} \right| \leq \rho $, and the transformed Hamiltonian function $ {K^\nu }: = {H^\nu } \circ {\phi ^\nu } $ satisfies \begin{equation}\label{qiudao} K_\xi ^\nu \left( {\xi ,0} \right) = 0,\;\;K_\eta ^\nu \left( {\xi ,0} \right) = \omega , \end{equation} i.e., the prescribed frequency is preserved. Namely, by verifying certain conditions we obtain $ z=\psi^\nu(\zeta) $ of the form \eqref{bhxs} from \cref{appendix} by induction, mapping $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq {r_{\nu + 1}} $ into $ \left| {\operatorname{Im} x} \right|,\left| y \right| \leq \theta {r_\nu } $, such that $ \psi^\nu \left( {\xi ,0} \right) - \left( {\xi ,0} \right) $ is of period $ 1 $ and \eqref{qiudao} holds. Here we denote $ \phi^\nu:=\phi^{\nu-1} \circ \psi^\nu$ with $ {\phi ^{ - 1}}: = \mathrm{id} $ (where $ \mathrm{id} $ denotes the $ 2n $-dimensional identity mapping and therefore $ {\phi ^0} = {\psi ^0} $).
Furthermore, \cref{appendix} will lead to \begin{align} \left| {{\psi ^\nu }\left( \zeta \right) - \zeta } \right| &\leq c\left( {1 - \theta } \right)r_\nu ^{k - 2\tau - 1}\varpi \left( {{r_\nu }} \right),\label{2.82}\\ \left| {\psi _\zeta ^\nu\left( \zeta \right) - \mathbb{I}} \right| &\leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right),\label{2.83}\\ \left| {K_{\eta \eta }^\nu\left( \zeta \right) - {Q^\nu}\left( \zeta \right)} \right| &\leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right)/2M,\label{2.84}\\ \left| {U_x^\nu \left( x \right)} \right| &\leq cr_\nu ^{k - \tau - 1}\varpi \left( {{r_\nu }} \right),\label{2.85} \end{align} on $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right|,\left| {\operatorname{Im} x} \right| \leq r_{\nu+1} $, where $ {S^\nu }\left( {x,\eta } \right) = {U^\nu }\left( x \right) + \left\langle {{V^\nu }\left( x \right),\eta } \right\rangle $ is the generating function for $ {\psi ^\nu } $, $ Q^\nu:=K_{\eta \eta}^{\nu-1} $, $ \mathbb{I} $ denotes the $ 2n \times 2n $ identity matrix, and \begin{equation}\label{Q0} {Q^0}\left( z \right): = \sum\limits_{\left| \alpha \right| \leq k - 2} {{\partial ^\alpha }{H_{yy}}\left( {\operatorname{Re} z} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}} . \end{equation} \\ {\textbf{Step 2:}} Here we show that $ \psi^0=\phi^0 $ exists and that it admits the properties mentioned in Step 1. Denote \begin{equation}\notag h(x) := H\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {H\left( {\xi ,0} \right)d\xi } ,\;\;x \in {\mathbb{R}^n}. \end{equation} Then by the first term in \eqref{T1-2}, we have \begin{equation}\label{pianh} \sum\limits_{\left| \alpha \right| \leq k} {\left| {{\partial ^\alpha }h} \right|{\varepsilon ^{\left| \alpha \right|}}} < M{\varepsilon ^k}\varpi \left( \varepsilon \right).
\end{equation} Note that \begin{align*} {H^0}\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {{H^0}\left( {\xi ,0} \right)d\xi } ={}& {H^0}\left( {x,0} \right) - \sum\limits_{\left| \alpha \right| \leq k} {\partial _x^\alpha H\left( {\operatorname{Re} x,0} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}} \notag \\ {}&+ \int_{{\mathbb{T}^n}} {\left( {H\left( {\xi ,0} \right) - {H^0}\left( {\xi ,0} \right)} \right)d\xi } \notag \\ {}&+ \sum\limits_{\left| \alpha \right| \leq k} {{\partial ^\alpha }h\left( {\operatorname{Re} x} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}} . \end{align*} Hence, for $ \left| {\operatorname{Im} x} \right| \leq \theta {r_0} = \theta \varepsilon $, by using \cref{Theorem1}, \cref{coro1} and \eqref{pianh} we arrive at \begin{align*} \left| {{H^0}\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {{H^0}\left( {\xi ,0} \right)d\xi } } \right| &\leq 2{c_1}{\left\| H \right\|_\varpi }{\varepsilon ^k}\varpi \left( \varepsilon \right) + M{\varepsilon ^k}\varpi \left( \varepsilon \right)\notag \\ &\leq c{\varepsilon ^k}\varpi \left( \varepsilon \right) \leq c{\varepsilon ^{k - 2\tau - 2}}\varpi \left( \varepsilon \right) \cdot {\left( {\theta \varepsilon } \right)^{2\tau + 2}}. \end{align*} Now consider the vector valued function $ f\left( x \right): = {H_y}\left( {x,0} \right) - \omega $ for $ x \in {\mathbb{R}^n} $. In view of the second term in \eqref{T1-2}, we have \begin{equation}\label{pianf} \sum\limits_{\left| \alpha \right| \leq k - 1} {\left| {{\partial ^\alpha }f} \right|{\varepsilon ^{\left| \alpha \right|}}} \leq M{\varepsilon ^{k - \tau - 1}}\varpi \left( \varepsilon \right). 
\end{equation} Note that \begin{align*} H_y^0\left( {x,0} \right) - \omega ={}& H_y^0\left( {x,0} \right) - \sum\limits_{\left| \alpha \right| \leq k - 1} {\partial _x^\alpha {H_y}\left( {\operatorname{Re} x,0} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}} \notag \\ {}&+ \sum\limits_{\left| \alpha \right| \leq k - 1} {{\partial ^\alpha }f\left( {\operatorname{Re} x} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}}. \end{align*} Therefore, for $ \left| {\operatorname{Im} x} \right| \leq \theta \varepsilon $, by using \eqref{T1-3} and \eqref{pianf} we obtain that \begin{align*} \left| {H_y^0\left( {x,0} \right) - \omega } \right| &\leq {c_1}{\left\| H \right\|_\varpi }{\varepsilon ^{k - 1}}\varpi \left( \varepsilon \right) + M{\varepsilon ^{k - \tau - 1}}\varpi \left( \varepsilon \right)\notag \\ &\leq c{\varepsilon ^{k - \tau - 1}}\varpi \left( \varepsilon \right) \leq c{\varepsilon ^{k - 2\tau - 2}}\varpi \left( \varepsilon \right) \cdot {\left( {\theta \varepsilon } \right)^{\tau + 1}}. \end{align*} Recall \eqref{Q0}. 
Then it follows from \eqref{T1-3} that \begin{align*} \left| {H_{yy}^0\left( z \right) - {Q^0}\left( z \right)} \right| &\leq {c_1}{\left\| H \right\|_\varpi }{\varepsilon ^{k - 2}}\varpi \left( \varepsilon \right) \leq \frac{c}{{4M}}{\varepsilon ^{k - 2}}\varpi \left( \varepsilon \right) \\ &\leq \frac{c}{{4M}}{\varepsilon ^{k - 2\tau - 2}}\varpi \left( \varepsilon \right),\;\;\left| {\operatorname{Im} x} \right|,\left| y \right| \leq \theta \varepsilon, \end{align*} and \begin{equation}\notag \left| {{Q^0}\left( z \right)} \right| \leq \sum\limits_{\left| \alpha \right| \leq k - 2} {{{\left\| H \right\|}_\varpi }\frac{{{\varepsilon ^{\left| \alpha \right|}}}}{{\alpha !}}} \leq {\left\| H \right\|_\varpi }\sum\limits_{\alpha \in {\mathbb{N}^{2n}}} {\frac{{{\varepsilon ^{\left| \alpha \right|}}}}{{\alpha !}}} = {\left\| H \right\|_\varpi }{e^{2n\varepsilon }} \leq 2M,\;\; \left| {\operatorname{Im} z} \right| \leq \varepsilon . \end{equation} Now, by taking $ r^{*} = \theta \varepsilon ,\delta^{*} = {\varepsilon ^{k - 2\tau - 2}}\varpi \left( \varepsilon \right) $ and using \cref{appendix} there exists a real analytic symplectic transformation $ z = {\phi ^0}\left( \zeta \right) $ of the form \eqref{bhxs} (with $ \nu=0 $) mapping the strip $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq {r_1}=r_0/2 $ into $ \left| {\operatorname{Im} x} \right|,\left| y \right| \leq \theta{r_0}=r_0/\sqrt{2} $, such that $ {u^0}\left( \xi \right) - \xi $ and $ {v^0}\left( \xi \right) $ are of period $ 1 $ in all variables and the Hamiltonian function $ {K^0}: = {H^0} \circ {\phi ^0} $ satisfies \eqref{qiudao} (with $ \nu=0 $). Moreover, \eqref{2.82}-\eqref{2.84} (with $ \nu=0 $) hold. 
Next we assume that the transformation $ z = {\phi ^{\nu - 1}}\left( \zeta \right) $ of the form \eqref{bhxs} has been constructed, mapping $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq \theta {r_\nu } $ into $ \left| {\operatorname{Im} x} \right|,\left| {\operatorname{Im} y} \right| \leq {r_{\nu - 1}},\left| {\operatorname{Re} y} \right| \leq \rho $, such that $ {u^{\nu - 1}}\left( \xi \right) - \xi ,{v^{\nu - 1}}\left( \xi \right) $ are of period $ 1 $ in all variables, and $ K_\xi ^{\nu - 1}\left( {\xi ,0} \right) = 0,K_\eta ^{\nu - 1}\left( {\xi ,0} \right) = \omega $. We also assume that \eqref{2.82}-\eqref{2.85} hold for $ 0, \ldots ,\nu - 1 $, and that \begin{equation}\notag \left| {K_{\eta \eta }^{\nu - 1}\left( \zeta \right)} \right| \leq {M_{\nu - 1}},\;\;\left| {{{\left( {\int_{{\mathbb{T}^n}} {K_{\eta \eta }^{\nu - 1}\left( {\xi ,0} \right)d\xi } } \right)}^{ - 1}}} \right| \leq {M_{\nu - 1}},\;\;{M_{\nu - 1} } \leq M \end{equation} for $ \left| {\operatorname{Im} x} \right| ,\left| y \right| \leq {r_\nu } $. Finally, define \[\tilde H\left( {x,y} \right): = {H^\nu } \circ {\phi ^{\nu - 1}}\left( {x,y} \right)\] for $ \left| {\operatorname{Im} x} \right| ,\left| y \right| \leq {r_\nu } $; one can verify that $ {\tilde H} $ is well defined. In the next Step 3, we will verify that all of the above still hold for $ \nu $, which completes the induction. \\ {\textbf{Step 3:}} We will prove the existence of the transformation $ {\phi ^\nu } $ in each step according to the specific estimates below and \cref{appendix}. Let $ \left| {\operatorname{Im} x} \right| \leq \theta {r_\nu } $. Then $ \phi^{\nu-1}(x,0) $ lies in the region where the estimates in \eqref{T1-3} hold for both $ H^\nu $ and $ H^{\nu-1} $. Note that $ x \mapsto H^{\nu-1}(\phi^{\nu-1}(x,0)) $ is constant by \eqref{qiudao}.
Then by \eqref{T1-3}, we arrive at the following for $ \left| {\operatorname{Im} x} \right| \leq \theta {r_\nu } $ \begin{align*} \left| {\tilde H\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {\tilde H\left( {\xi ,0} \right)d\xi } } \right| &\leq 2\mathop {\sup }\limits_{\left| {\operatorname{Im} \xi } \right| \leq \theta {r_\nu }} \left| {{H^\nu }\left( {{\phi ^{\nu - 1}}\left( {\xi ,0} \right)} \right) - {H^{\nu - 1}}\left( {{\phi ^{\nu - 1}}\left( {\xi ,0} \right)} \right)} \right|\notag \\ &\leq 2{c_1}{\left\| H \right\|_\varpi }r_\nu ^k\varpi \left( {{r_\nu }} \right) + 2{c_1}{\left\| H \right\|_\varpi }r_{\nu-1} ^{k}\varpi \left( {{r_{\nu-1} }} \right)\notag \\ &\leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right) \cdot r_\nu ^{2\tau + 2}, \end{align*} where the weak homogeneity of $ \varpi $ with respect to $ a=1/2 $ (see \cref{weak}) has been used in the last inequality, since $ \varpi(r_{\nu-1})=\varpi(2r_{\nu})\leq c \varpi(r_{\nu}) $ (hence $ c $ is independent of $ \nu $). For brevity, we will not mention this explicitly in what follows.
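The weak homogeneity just invoked, $ \varpi(2r) \leq c\,\varpi(r) $ with $ c $ independent of the scale, is easy to probe numerically for concrete moduli. A minimal Python sketch follows (our own illustration; the sample H\"older and logarithmic moduli, and the bound $ 2 $, are ad hoc choices):

```python
import math

def varpi_holder(x, s=0.5):
    # Hölder modulus of continuity: varpi(x) = x^s
    return x ** s

def varpi_log(x, lam=2.0):
    # logarithmic modulus: varpi(x) = 1/(-ln x)^lam for 0 < x < 1
    return 1.0 / (-math.log(x)) ** lam

# the ratio varpi(2r)/varpi(r) stays bounded as r -> 0+ (weak homogeneity, a = 1/2)
for varpi in (varpi_holder, varpi_log):
    ratios = [varpi(2.0 ** (1 - k)) / varpi(2.0 ** (-k)) for k in range(4, 60)]
    assert max(ratios) <= 2.0
```

For the H\"older modulus the ratio is the constant $ 2^s $, while for the logarithmic modulus it tends to $ 1 $ from above, so a single constant works on the whole dyadic range.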
Taking $ \eta=0 $ in \eqref{2.83}, we have \begin{align} \left| {u_\xi ^{\nu - 1}\left( \xi \right) - \mathbb{I}} \right| &\leq \sum\limits_{\mu = 0}^{\nu - 1} {\left| {u_\xi ^\mu \left( \xi \right) - u_\xi ^{\mu - 1}\left( \xi \right)} \right|} \leq c\sum\limits_{\mu = 0}^{\nu - 1} {r_\mu ^{k - 2\tau - 2}\varpi \left( {{r_\mu }} \right)} \notag \\ &\leq c\sum\limits_{\mu = 0}^\infty {{{\left( {\frac{\varepsilon }{{{2^\mu }}}} \right)}^{k - 2\tau - 2}}\varpi \left( {\frac{\varepsilon }{{{2^\mu }}}} \right)} \leq c\sum\limits_{\mu = 0}^\infty {\left( {\frac{\varepsilon }{{{2^{\mu - 1}}}} - \frac{\varepsilon }{{{2^\mu }}}} \right){{\left( {\frac{\varepsilon }{{{2^\mu }}}} \right)}^{k - 2\tau - 3}}\varpi \left( {\frac{\varepsilon }{{{2^\mu }}}} \right)} \notag \\ \label{dao} & \leq c\sum\limits_{\mu = 0}^\infty {\int_{\varepsilon /{2^\mu }}^{\varepsilon /{2^{\mu - 1}}} {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}dx} } \leq c\int_0^{2\varepsilon } {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}dx} \leq 1 - \theta \end{align} for $ \left| {\operatorname{Im} \xi } \right| \leq \theta {r_\nu } $, where the Dini type condition \eqref{Dini} in \textbf{(H1)}, together with a Cauchy condensation argument, is used since $ \varepsilon>0 $ is sufficiently small. This leads to \begin{equation}\label{nidao} \left| {u_\xi ^{\nu - 1}{{\left( \xi \right)}^{ - 1}}} \right| \leq {\theta ^{ - 1}},\;\;\left| {\operatorname{Im} \xi } \right| \leq \theta {r_\nu }.
\end{equation} Finally, by \eqref{nidao} and \eqref{T1-3} we obtain that \begin{align*} \left| {{{\tilde H}_y}\left( {x,0} \right) - \omega } \right| &= \left| {u_\xi ^{\nu - 1}{{\left( x \right)}^{ - 1}}\left( {H_y^\nu \left( {{\phi ^{\nu - 1}}\left( {x,0} \right)} \right) - H_y^{\nu - 1}\left( {{\phi ^{\nu - 1}}\left( {x,0} \right)} \right)} \right)} \right|\notag \\ & \leq {\theta ^{ - 1}}\left| {H_y^\nu \left( {{\phi ^{\nu - 1}}\left( {x,0} \right)} \right) - H_y^{\nu - 1}\left( {{\phi ^{\nu - 1}}\left( {x,0} \right)} \right)} \right|\notag \\ & \leq {\theta ^{ - 1}}\left( {{c_1}{{\left\| H \right\|}_\varpi }r_\nu ^{k - 1}\varpi \left( {{r_\nu }} \right) + {c_1}{{\left\| H \right\|}_\varpi }r_{\nu - 1}^{k - 1}\varpi \left( {{r_{\nu - 1}}} \right)} \right)\notag \\ &\leq cr_\nu ^{k - 1}\varpi \left( {{r_\nu }} \right)\notag \\ &\leq cr_\nu ^{k - \tau - 2}\varpi \left( {{r_\nu }} \right) \cdot r_\nu ^{\tau + 1}, \end{align*} and \begin{align*} \left| {{{\tilde H}_{yy}}\left( z \right) - {Q^\nu }\left( z \right)} \right| &= \left| {u_\xi ^{\nu - 1}{{\left( x \right)}^{ - 1}}\left( {H_{yy}^\nu \left( {{\phi ^{\nu - 1}}\left( z \right)} \right) - H_{yy}^{\nu - 1}\left( {{\phi ^{\nu - 1}}\left( z \right)} \right)} \right){{\left( {u_\xi ^{\nu - 1}{{\left( x \right)}^{ - 1}}} \right)}^T}} \right|\notag \\ &\leq {\theta ^{ - 2}}\left| {H_{yy}^\nu \left( {{\phi ^{\nu - 1}}\left( z \right)} \right) - H_{yy}^{\nu - 1}\left( {{\phi ^{\nu - 1}}\left( z \right)} \right)} \right|\notag \\ &\leq {\theta ^{ - 2}}\left( {{c_1}{{\left\| H \right\|}_\varpi }r_\nu ^{k - 2}\varpi \left( {{r_\nu }} \right) + {c_1}{{\left\| H \right\|}_\varpi }r_{\nu - 1}^{k - 2}\varpi \left( {{r_{\nu - 1}}} \right)} \right)\notag \\ &\leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right)/2M \end{align*} for $ \left| {\operatorname{Im} x} \right|,\left| y \right| \leq \theta {r_\nu } $. 
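The dyadic sums controlled in \eqref{dao} are dominated by the Dini type integral because each term $ r_\mu^{p}\varpi(r_\mu) $ can be rewritten as $ (r_{\mu-1}-r_\mu)\,r_\mu^{p-1}\varpi(r_\mu) $, a lower Riemann sum of the nondecreasing integrand on $ [r_\mu, r_{\mu-1}] $. A small Python sketch of this comparison (our own check, for the H\"older model $ \varpi(x)=x^s $, where the integral has a closed form; $ p $ plays the role of $ k-2\tau-2 $):

```python
def dyadic_sum(eps, p, s, terms=200):
    # sum_{mu >= 0} r_mu^p * varpi(r_mu) with r_mu = eps / 2^mu and varpi(x) = x^s
    return sum((eps / 2 ** mu) ** (p + s) for mu in range(terms))

def dini_integral(eps, p, s):
    # closed form of int_0^{2 eps} x^{p - 1} * x^s dx
    return (2 * eps) ** (p + s) / (p + s)

# the dyadic sum never exceeds the Dini integral (p >= 1, varpi nondecreasing)
for eps in (0.5, 0.1):
    for p, s in ((1, 0.5), (2, 0.25)):
        assert dyadic_sum(eps, p, s) <= dini_integral(eps, p, s)
```

Smallness of the resulting bound, as in $ \leq 1-\theta $, then comes from taking $ \varepsilon $ small in the Dini condition \eqref{Dini}.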
Now, taking $ r^*:= r_\nu $ and $ \delta^*:=c r_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right) $ in \cref{appendix}, we obtain the real analytic symplectic transformation $ {\phi ^\nu } $ of each step, mapping the strip $ \left| {\operatorname{Im} \xi } \right|\leq \theta {r_\nu },\left| \eta \right| \leq \theta {r_\nu } $ into $ \left| {\operatorname{Im} x} \right|\leq {r_\nu },\left| y \right| \leq {r_\nu } $, such that $ {u^\nu }\left( \xi \right) - \xi $ and $ {v^\nu }\left( \xi \right) $ are of period $ 1 $ in all variables, and the transformed Hamiltonian function $ {K^\nu } = {H^\nu } \circ {\phi ^\nu } $ satisfies \[K_\xi ^\nu \left( {\xi ,0} \right) = 0,\;\;K_\eta ^\nu \left( {\xi ,0} \right) = \omega .\] Moreover, \eqref{2.82}-\eqref{2.85} are valid for $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right|,\left| {\operatorname{Im} x} \right|\leq \theta {r_\nu }$. \\ {\textbf{Step 4:}} By \eqref{2.83} for $ 0, \ldots ,\nu - 1 $ and the arguments in \eqref{dao}, there holds \begin{align} \left| {\phi _\zeta ^{\nu - 1}\left( \zeta \right)} \right| &\leq 1 + \sum\limits_{\mu = 0}^{\nu - 1} {\left| {\phi _\zeta ^\mu \left( \zeta \right) - \phi _\zeta ^{\mu - 1}\left( \zeta \right)} \right|} \leq 1 + \sum\limits_{\mu = 0}^{\nu - 1} {\left( {\left| {\phi _\zeta ^\mu \left( \zeta \right) - \mathbb{I}} \right| + \left| {\phi _\zeta ^{\mu - 1}\left( \zeta \right) - \mathbb{I}} \right|} \right)} \notag \\ \label{dao2} &\leq 1 + c\sum\limits_{\mu = 0}^\infty {{{\left( {\frac{\varepsilon }{{{2^\mu }}}} \right)}^{k - 2\tau - 2}}\varpi \left( {\frac{\varepsilon }{{{2^\mu }}}} \right)} \leq 1 + c\int_0^{2\varepsilon } {\frac{{\varpi \left( x \right)}}{x^{2\tau +3-k}}dx} \leq 2 \end{align} for $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq \theta {r_\nu } $ as long as $ \varepsilon>0 $ is sufficiently small, which leads to \begin{align*} \left| {{\phi ^\nu }\left( \zeta \right) - {\phi ^{\nu - 1}}\left( \zeta \right)} \right| &=
\left| {{\phi ^{\nu - 1}}\left( {{\psi ^\nu }\left( \zeta \right)} \right) - {\phi ^{\nu - 1}}\left( \zeta \right)} \right| \notag \\ &\leq 2\left| {{\psi ^\nu }\left( \zeta \right) - \zeta } \right| \leq c\left( {1 - \theta } \right)r_\nu ^{k - 2\tau - 1}\varpi \left( {{r_\nu }} \right) \end{align*} for $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq {r_{\nu + 1}} $. Then by Cauchy's estimate, we obtain that \begin{equation}\notag \left| {\phi _\zeta ^\nu \left( \zeta \right) - \phi _\zeta ^{\nu - 1}\left( \zeta \right)} \right| \leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right),\;\;\left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq {r_{\nu + 1}}. \end{equation} It can be proved in the same way that $ | {\phi _\zeta ^\nu \left( \zeta \right)} | \leq 2 $ for $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq \theta {r_{\nu + 1}} $, which implies \begin{equation}\notag \left| {\operatorname{Im} z} \right| \leq 2\left| {\operatorname{Im} \zeta } \right| \leq 2\sqrt {{{\left| {\operatorname{Im} \xi } \right|}^2} + {{\left| {\operatorname{Im} \eta } \right|}^2}} \leq 2\sqrt {{\theta ^2}r_{\nu + 1}^2 + {\theta ^2}r_{\nu + 1}^2} = 2{r_{\nu + 1}} = {r_\nu }. \end{equation} Besides, we have $ \left| {\operatorname{Re} y} \right| \leq \rho $. Note that \begin{equation}\notag {v^\nu } \circ {\left( {{u^\nu }} \right)^{ - 1}}\left( x \right) - {v^{\nu - 1}} \circ {\left( {{u^{\nu - 1}}} \right)^{ - 1}}\left( x \right) = {\left( {u_\xi ^{\nu - 1}{{\left( \xi \right)}^{ - 1}}} \right)^T}U_x^\nu \left( \xi \right),\;\;x: = {u^{\nu - 1}}\left( \xi \right). \end{equation} Recalling \eqref{dao} and employing the contraction mapping principle, we have $ \left| {\operatorname{Im} \xi } \right| \leq {r_{\nu + 1}} $ whenever $ \left| {\operatorname{Im} x} \right| \leq \theta {r_{\nu + 1}} $, for $ x $ defined above.
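The arithmetic behind the choice $ \theta = 1/\sqrt{2} $ in the displays above is elementary but easy to get wrong: $ 2\sqrt{\theta^2 r_{\nu+1}^2 + \theta^2 r_{\nu+1}^2} = 2r_{\nu+1} = r_\nu $, and $ \theta r_\nu $ sits strictly between $ r_{\nu+1} $ and $ r_\nu $. A few Python lines confirm this (our own sanity check; the sample value of $ \varepsilon $ is arbitrary):

```python
import math

theta = 1 / math.sqrt(2)
eps = 0.25
r = [eps / 2 ** nu for nu in range(10)]  # r_nu = eps / 2^nu

for nu in range(9):
    # |Im z| <= 2*sqrt(theta^2 r_{nu+1}^2 + theta^2 r_{nu+1}^2) = 2 r_{nu+1} = r_nu
    assert math.isclose(2 * math.sqrt(2 * (theta * r[nu + 1]) ** 2), r[nu])
    # the intermediate strip theta * r_nu lies strictly between r_{nu+1} and r_nu
    assert r[nu + 1] < theta * r[nu] < r[nu]
```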
Then from \eqref{2.85} and \eqref{nidao} one can verify that \begin{equation}\label{4.97} \left| {{{\big( {u_\xi ^{\nu - 1}{{\left( \xi \right)}^{ - 1}}} \big)}^T}U_x^\nu \left( \xi \right)} \right| \leq cr_\nu ^{k - \tau - 1}\varpi \left( {{r_\nu }} \right). \end{equation} {\textbf{Step 5:}} Finally, we are in a position to prove the convergence of $ u^\nu $ and $ v^\nu $, and the regularity of their limit functions. In view of \eqref{4.97}, we have the following analytic iterative scheme \begin{equation}\label{4.98} \left| {{u^\nu }\left( \xi \right) - {u^{\nu - 1}}\left( \xi \right)} \right| \leq cr_\nu ^{k - 2\tau - 1}\varpi \left( {{r_\nu }} \right), \;\; \left| {\operatorname{Im} \xi } \right| \leq {r_{\nu + 1}}, \end{equation} and \begin{equation}\label{4.99} \left| {{v^\nu } \circ {{\left( {{u^\nu }} \right)}^{ - 1}}\left( x \right) - {v^{\nu - 1}} \circ {{\left( {{u^{\nu - 1}}} \right)}^{ - 1}}\left( x \right)} \right| \leq c r_\nu ^{k - \tau - 1}\varpi \left( {{r_\nu }} \right),\;\;\left| {\operatorname{Im} x} \right| \leq \theta {r_{\nu + 1}}. \end{equation} In particular, \eqref{4.98} and \eqref{4.99} hold for $ \nu=0 $, since $ {u^{ - 1}} = \mathrm{id} $ and $ {v^{ - 1}} = 0 $. It is easy to see that the uniform limits $ u $ of $ u^\nu $ and $ v\circ u^{-1} $ of $ v^\nu\circ (u^\nu)^{-1} $ are at least $ C^1 $ (in fact, this is implied by the higher regularity studied in \cref{furtherregularity} below). In addition, the persistent invariant torus possesses the same frequency $ \omega $ as the unperturbed torus by \eqref{qiudao}. \subsection{Iteration theorem on regularity without H\"older's type}\label{furtherregularity} To obtain sharp regularity for $ u $ and $ v\circ u^{-1} $ from the analytic iterative scheme \eqref{4.98} and \eqref{4.99}, we shall follow the ideas of Moser and Salamon and establish an abstract iteration theorem, which provides a modulus of continuity in integral form.
\begin{theorem}\label{t1} Let $ n\in \mathbb{N}^+, \varepsilon>0 $ and $ \{r_\nu\}_{\nu \in \mathbb N}=\{\varepsilon 2^{-\nu}\}_{\nu \in \mathbb{N}} $ be given, and denote by $ f:{\mathbb{R}^n} \to \mathbb{R} $ the limit of a sequence of real analytic functions $ {f_\nu }\left( x \right) $ in the strips $ \left| {\operatorname{Im} x} \right| \leq {r_{\nu} } $ such that \begin{equation}\label{huoche} {f_0} = 0,\;\;\left| {{f_\nu }\left( x \right) - {f_{\nu - 1}}\left( x \right)} \right| \leq \varphi \left( {{r_\nu }} \right),\;\;\nu \geq 1, \end{equation} where $ \varphi $ is a nondecreasing continuous function satisfying $ \varphi \left( 0 \right) = 0 $. Assume that there is a critical $ k_* \in \mathbb{N} $ such that \begin{equation}\label{330} \int_0^1 {\frac{{\varphi \left( x \right)}}{{{x^{{k_*} + 1}}}}dx} < + \infty ,\;\;\int_0^1 {\frac{{\varphi \left( x \right)}}{{{x^{{k_*} + 2}}}}dx} = + \infty . \end{equation} Then there exists a modulus of continuity $ \varpi_* $ such that $ f \in {C_{k_*,\varpi_* }}\left( {{\mathbb{R}^n}} \right) $; in other words, the regularity of $ f $ is at least $ C^{k_*} $ plus $ \varpi_* $. In particular, $ \varpi_* $ can be determined as \begin{equation}\label{LLL} {\varpi _ * }\left( \gamma \right) \sim \gamma \int_{L\left( \gamma \right)}^\varepsilon {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 2}}}}dt} = {\mathcal{O}^\# }\left( {\int_0^{L\left( \gamma \right)} {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 1}}}}dt} } \right) ,\;\;\gamma \to {0^ + }, \end{equation} where $ L(\gamma) \to 0^+ $ is some function such that the second relation in \eqref{LLL} holds. \end{theorem} \begin{proof} Define $ {g_\nu }(x): = {f_\nu }\left( x \right) - {f_{\nu - 1}}\left( x \right) $ for $ \nu \in \mathbb{N}^+ $.
Fix an integer-valued function $ \widetilde N(\gamma) : [0,1] \to \mathbb{N}^+ $, to be specified below (by the arguments below, $ \widetilde N(\gamma) $ can be extended to $ \mathbb{R}^+ $, so we may assume that it is continuous). Then for the given critical $ k_*\in \mathbb{N} $ and $ x,y\in \mathbb{R}^n $, we obtain the following for all multi-indices $ \alpha = \left( {{\alpha _1}, \ldots ,{\alpha _n}} \right) \in {\mathbb{N}^n} $ with $ \left| \alpha \right| = k_* $: \begin{align} \sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)-1} {\left| {{\partial ^\alpha }{g_\nu }\left( x \right) - {\partial ^\alpha }{g_\nu }\left( y \right)} \right|}&\leq \left| {x - y} \right|\sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)-1} {{{\left| {{\partial ^\alpha }{g_{\nu x}}} \right|}_{{C^0}(\mathbb{R}^n)}}}\leq \left| {x - y} \right|\sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)-1} {\frac{1}{{r_\nu ^{k_* + 1}}}\varphi \left( {{r_\nu }} \right)} \notag \\ & = 2\left| {x - y} \right|\sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)-1} {\left( {\frac{\varepsilon }{{{2^\nu }}} - \frac{\varepsilon }{{{2^{\nu + 1}}}}} \right){{\left( {\frac{{{2^\nu }}}{\varepsilon }} \right)}^{{k_ * } + 2}}\varphi \left( {\frac{\varepsilon }{{{2^\nu }}}} \right)}\notag \\ \label{zhengze1}&\leq c\left| {x - y} \right|\int_{\varepsilon {2^{ - \widetilde N\left( {\left| {x - y} \right|} \right) }}}^\varepsilon {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 2}}}}dt} , \end{align} where Cauchy's estimate and \eqref{huoche} are used in the second inequality, arguments similar to \eqref{dao} are employed in \eqref{zhengze1}, and $ c>0 $ is a universal constant.
Similarly, we get \begin{align} \sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right) }^\infty {\left| {{\partial ^\alpha }{g_\nu }\left( x \right) - {\partial ^\alpha }{g_\nu }\left( y \right)} \right|} &\leq \sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right) }^\infty {2{{\left| {{\partial ^\alpha }{g_\nu }} \right|}_{{C^0}(\mathbb{R}^n)}}}\leq 2\sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right)}^\infty {\frac{1}{{r_\nu ^{k_*}}}\varphi \left( {{r_\nu }} \right)}\notag \\ &=2\sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right)}^\infty {\left( {\frac{\varepsilon }{{{2^\nu }}} - \frac{\varepsilon }{{{2^{\nu + 1}}}}} \right){{\left( {\frac{{{2^\nu }}}{\varepsilon }} \right)}^{{k_ * } + 1}}\varphi \left( {\frac{\varepsilon }{{{2^\nu }}}} \right)}\notag \\ \label{zhengze2}& \leq c\int_0^{\varepsilon {2^{ - \widetilde N\left( {\left| {x - y} \right|} \right) }}} {\frac{{\varphi \left(t \right)}}{{{t^{{k_*} + 1}}}}dt} . \end{align} Now choose $ \widetilde{N}(\gamma) \to +\infty $ as $ \gamma \to 0^+ $ such that \begin{equation}\label{varpi*} \gamma \int_{L\left( \gamma \right)}^\varepsilon {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 2}}}}dt} = {\mathcal{O}^\# }\left( {\int_0^{L\left( \gamma \right)} {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 1}}}}dt} } \right): = {\varpi _ * }\left( \gamma \right),\;\;\gamma \to {0^ + }, \end{equation} where $ \varepsilon {2^{ - \tilde N\left( \gamma \right) - 1}}: = L\left( \gamma \right)\to 0^+ $. This is achievable due to assumption \eqref{330}, a Cauchy condensation argument and the intermediate value theorem. Note that the choice of $ L(\gamma) $ (i.e., $ \widetilde{N} $) and $ \varpi_* $ is not unique (up to a constant), and $ \varpi_* $ can be continuously extended to some given interval (e.g., $ [0,1] $), but this does not affect the qualitative result.
Combining \eqref{zhengze1}, \eqref{zhengze2} and \eqref{varpi*} we finally arrive at $ f \in {C_{k_*,\varpi_* }}\left( {{\mathbb{R}^n}} \right) $ because \begin{align*} \left| {{\partial ^\alpha }f\left( x \right) - {\partial ^\alpha }f\left( y \right)} \right| &\leq \sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)} { + \sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right) + 1}^\infty {\left| {{\partial ^\alpha }{g_\nu }\left( x \right) - {\partial ^\alpha }{g_\nu }\left( y \right)} \right|} } \\ & \leq c\left( {\left| {x - y} \right|\int_{\varepsilon {2^{ - \widetilde N\left( {\left| {x - y} \right|} \right) - 1}}}^\varepsilon {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 2}}}}dt} + \int_0^{\varepsilon {2^{ - \widetilde N\left( {\left| {x - y} \right|} \right) - 1}}} {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 1}}}}dt} } \right) \\ &\leq c\varpi_* \left( {\left| {x - y} \right|} \right). \end{align*} \end{proof} \cref{t1} can be extended to the case $ f:{\mathbb{R}^n} \to \mathbb{R}^m $ with $ n,m \in \mathbb{N}^+ $ since the analysis is completely the same, and the strip $ \left| {\operatorname{Im} x} \right| \leq {r_\nu } $ can also be replaced by $ \left| {\operatorname{Im} x} \right| \leq {r_{\nu+1} } $ (or $ \leq \theta r_{\nu+1} $). \cref{t1} can also be used to estimate the regularity of solutions of finite smooth homological equations, thus KAM uniqueness theorems in some cases might be derived, see Section 4 in \cite{salamon} for instance. However, in order to avoid too much content in this paper, it is omitted here. Recall \eqref{4.98} and \eqref{4.99}. 
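As a concrete check of \cref{t1}, consider the H\"older model $ \varphi(t) = t^{k_*+s} $ with $ s \in (0,1) $, for which \eqref{330} holds with critical $ k_* $ and the choice $ L(\gamma)=\gamma $ balances the two integrals in \eqref{LLL}, giving $ \varpi_*(\gamma) \sim \gamma^s $. The following Python sketch is our own numeric illustration of this balance, using the closed forms of both integrals (the sample values of $ s $ and $ \varepsilon $ are arbitrary):

```python
import math

# Hölder model for Theorem t1: phi(t) = t^{k_* + s}, 0 < s < 1, so that
# phi(t)/t^{k_*+1} = t^{s-1} is integrable at 0 while phi(t)/t^{k_*+2} = t^{s-2} is not.
def tail_term(gamma, eps, s):
    # gamma * int_gamma^eps t^{s-2} dt, closed form (choice L(gamma) = gamma)
    return gamma * (gamma ** (s - 1) - eps ** (s - 1)) / (1 - s)

def head_term(gamma, s):
    # int_0^gamma t^{s-1} dt, closed form
    return gamma ** s / s

eps, s = 0.5, 0.4
for gamma in (1e-3, 1e-5, 1e-7):
    # both sides are comparable to gamma^s, i.e. varpi_*(gamma) ~ gamma^s
    assert 0.5 < tail_term(gamma, eps, s) / gamma ** s < 1.0 / (1 - s) + 1e-9
    assert math.isclose(head_term(gamma, s) / gamma ** s, 1 / s)
```

Recovering a H\"older modulus $ \gamma^s $ in this model is exactly the mechanism exploited in the proof of \cref{Holder} below.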
Then one can apply \cref{t1} to $ \{u^{\nu}-\mathrm{id}\}_\nu $ (because \cref{t1} requires that the initial value vanishes) and $ \{v^{\nu}\circ (u^\nu)^{-1}\}_\nu $ to directly analyze the regularity of the KAM torus according to \textbf{(H5)}, i.e., there exist $ {\varpi _i} $ ($ i=1,2 $) such that $ u \in {C_{k_1^ * ,{\varpi _1}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{k_2^ * ,{\varpi _2}}}\left( {{\mathbb{R}^n},G} \right) $. This completes the proof of \cref{theorem1}. \section{Proof of \cref{simichaos2}}\label{proofsimichaos2} We only need to determine $ k_i^* $ in \textbf{(H5)} and to choose functions $ L_i(\gamma)\to 0^+ $ (as $ \gamma \to 0^+ $) in order to obtain the moduli of continuity $ \varpi_i $ in \eqref{varpii} for $ i=1,2 $. Obviously $ k_1^*=1 $ and $ k_2^*=n $ because \[\int_0^1 {\frac{{\varpi _{\mathrm{GLH}}^{\varrho ,\lambda }\left( x \right)}}{x}dx} < + \infty ,\;\;\int_0^1 {\frac{{\varpi _{\mathrm{GLH}}^{\varrho ,\lambda }\left( x \right)}}{{{x^2}}}dx} = + \infty .\] In view of $ \varphi_i(x) $ in \textbf{(H5)}, by applying \cref{duochongduishu} we get \begin{align} \gamma \int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 2}}}}dt} &= {\mathcal{O}^\# }\Bigg( {\gamma \int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{1}{{{t^2}(\ln (1/t)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/t))}^\lambda }}}dt} } \Bigg)\notag \\ & = {\mathcal{O}^\# }\Bigg( {\gamma \int_{1/\varepsilon }^{1/{L_i}\left( \gamma \right)} {\frac{1}{{(\ln z) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho z)}^\lambda }}}dz} } \Bigg)\notag \\ \label{guji1}& = {\mathcal{O}^\# }\Bigg( {\frac{\gamma }{{{L_i}\left( \gamma \right)(\ln (1/{L_i}\left( \gamma \right)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/{L_i}\left( \gamma \right)))}^\lambda }}}} \Bigg), \end{align} and by direct calculation one arrives at \begin{align} \int_0^{{L_i}\left( \gamma \right)} {\frac{{{\varphi _i}\left( t
\right)}}{{{t^{k_i^ * + 1}}}}dt} & = {\mathcal{O}^\# }\Bigg( {\int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{1}{{t(\ln (1/t)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/t))}^\lambda }}}dt} } \Bigg)\notag \\ \label{guji2}&= {\mathcal{O}^\# }\Bigg( {\frac{1}{{{{(\underbrace {\ln \cdots \ln }_\varrho (1/{L_i}\left( \gamma \right)))}^{\lambda - 1}}}}} \Bigg). \end{align} Finally, choosing \begin{equation}\label{Lit4} {L_i}\left( \gamma \right) \sim \frac{{\gamma }}{{(\ln (1/\gamma)) \cdots (\underbrace {\ln \cdots \ln }_{\varrho}(1/\gamma )})} \to {0^ + },\;\;\gamma \to {0^ + } \end{equation} will lead to the second relation in \eqref{varpii} for $ i=1,2 $, and substituting $ L_i(\gamma) $ into \eqref{guji1} or \eqref{guji2} yields that \begin{equation}\label{rho1} {\varpi _1}\left( \gamma \right) \sim {\varpi _2}\left( \gamma \right) \sim \frac{1}{{{{(\underbrace {\ln \cdots \ln }_\varrho (1/\gamma))}^{\lambda - 1}}}} \end{equation} in \cref{theorem1}; see \eqref{varpii}. This proves \cref{simichaos2}. \section{Proof of \cref{Holder}}\label{8} Note that $ \ell \notin {\mathbb{N}^ + } $ implies $ \{\ell\}\in (0,1) $. Then $ k=[\ell] $ and $ \varpi(x)\sim \varpi_{\mathrm{H}}^{\ell}(x)\sim x^{\{\ell\}} $, i.e., a modulus of continuity of H\"older type. Consequently, \textbf{(H1)} can be directly verified since $ \ell > 2\tau + 2 $: \[\int_0^1 {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}dx} = \int_0^1 {\frac{{{x^{\left\{ \ell \right\}}}}}{{{x^{2\tau + 3 - \left[ \ell \right]}}}}dx} = \int_0^1 {\frac{1}{{{x^{1 - \left( {\ell - 2\tau - 2} \right)}}}}dx} < + \infty .\] Here and below, let $ i $ be $ 1 $ or $ 2 $ for simplicity.
Recall that $ {\varphi _i}\left( x \right) = {x^{k - \left( {3 - i} \right)\tau - 1}}\varpi \left( x \right) = {x^{\left[ \ell \right] - \left( {3 - i} \right)\tau - 1}} \cdot {x^{\left\{ \ell \right\}}} = {x^{\ell - \left( {3 - i} \right)\tau - 1}} $, and require that \begin{align} \label{fff}\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ * + 1}}}}dx} &= \int_0^1 {\frac{1}{{{x^{k_i^ * - \left( {\ell - \left( {3 - i} \right)\tau - 2} \right)}}}}dx}<+\infty,\\ \label{ffff}\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ * + 2}}}}dx} &= \int_0^1 {\frac{1}{{{x^{k_i^ * - \left( {\ell - \left( {3 - i} \right)\tau - 2} \right) + 1}}}}dx}=+\infty . \end{align} Then the critical $ k_i^* $ in \textbf{(H5)} can be uniquely chosen as $ k_i^ * : = \left[ {\ell - \left( {3 - i} \right)\tau - 1} \right] \in \mathbb{N}^+$ since $ \ell - \left( {3 - i} \right)\tau - 1 \notin \mathbb{N}^+ $. Further, letting $ {L_i}\left( \gamma \right) = \gamma \to {0^ + } $ yields that \[\int_0^{{L_i}\left( \gamma \right)} {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 1}}}}dt} = {\mathcal{O}^\# }\left( {\int_0^\gamma {\frac{1}{{{t^{1 - \left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}}}dt} } \right) = {\mathcal{O}^\# }\left( {{\gamma ^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}} \right)\] and \[\gamma \int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 2}}}}dt} = {\mathcal{O}^\# }\left( {\gamma \int_\gamma ^\varepsilon {\frac{1}{{{t^{2 - \left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}}}dt} } \right) = {\mathcal{O}^\# }\left( {{\gamma ^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}} \right).\] This leads to the H\"older type modulus \[{\varpi _i}\left( \gamma \right) \sim {\left( {{L_i}\left( \gamma \right)} \right)^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}} \sim {\gamma ^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}\sim \varpi_{\mathrm{H}}^{\left\{ {\ell -
\left( {3 - i} \right)\tau - 2} \right\}}(\gamma)\] due to \eqref{varpii} in \cref{theorem1}. By observing $ k_i^ * + \left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\} = \ell - \left( {3 - i} \right)\tau - 1 $ we finally arrive at $ u \in {C^{\ell-2\tau-1}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C^{\ell-\tau-1}}\left( {{\mathbb{R}^n},G} \right) $. This proves \cref{Holder}. \section{Proof of \cref{lognew}}\label{pflognew} Firstly, note that $ k=[2\tau+2] $ and $ \varpi \left( x \right) \sim {x^{\{2\tau+2\}}}/{\left( { - \ln x} \right)^\lambda } $ with $ \lambda > 1 $, then \textbf{(H1)} holds since \begin{align*} \int_0^1 {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}dx} &= {\mathcal{O}^\# }\left( {\int_0^{1/2} {\frac{{{x^{\left\{ {2\tau + 2} \right\}}}}}{{{x^{2\tau + 3 - \left[ {2\tau + 2} \right]}}{{\left( { - \ln x} \right)}^\lambda }}}dx} } \right)\\ & = {\mathcal{O}^\# }\left( {\int_0^{1/2} {\frac{1}{{x{{\left( { - \ln x} \right)}^\lambda }}}dx} } \right) < + \infty . \end{align*} Secondly, in view of $ \varphi_i(x) $ in \textbf{(H5)}, we have \[\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ * + 1}}}}dx} = {\mathcal{O}^\# }\left( {\int_0^{1/2} {\frac{1}{{{x^{k_i^ * - \left( {i - 1} \right)\tau }}{{\left( { - \ln x} \right)}^\lambda }}}dx} } \right)\;\;i=1,2.\] This leads to the critical values $ k_1^*=1 $ and $ k_2^*=[\tau+1] $ in \textbf{(H5)}. Here one uses the following fact: for given $ \lambda>1 $, \[\int_0^{1/2} {\frac{1}{{{x^\iota }{{\left( { - \ln x} \right)}^\lambda }}}dx} < + \infty ,\;\;\int_0^{1/2} {\frac{1}{{{x^{\iota + 1}}{{\left( { - \ln x} \right)}^\lambda }}}dx}=+\infty \] if and only if $ \iota \in (0,1] $. Next, we investigate the remaining KAM regularity through a somewhat involved asymptotic analysis.
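The convergence criterion just stated can be probed numerically. After the substitution $ x = e^{-u} $, the integral $ \int_\delta^{1/2} x^{-\iota}(-\ln x)^{-\lambda}\,dx $ becomes $ \int_{\ln 2}^{\ln(1/\delta)} e^{(\iota-1)u} u^{-\lambda}\,du $; for $ \iota=1 $, $ \lambda=2 $ this equals $ 1/\ln 2 - 1/\ln(1/\delta) $ exactly, while for $ \iota=2 $ the truncated integrals blow up as $ \delta \to 0^+ $. A Python sketch (our own illustration; the quadrature resolution is arbitrary):

```python
import math

def truncated_integral(delta, iota, lam, n=100_000):
    # int_delta^{1/2} dx / (x^iota * (-ln x)^lam), via x = e^{-u} and the trapezoid rule
    a, b = math.log(2), math.log(1 / delta)
    h = (b - a) / n
    f = lambda u: math.exp((iota - 1) * u) / u ** lam
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

# iota = 1, lam = 2: convergent, with exact truncated value 1/ln 2 - 1/ln(1/delta)
for delta in (1e-4, 1e-8):
    exact = 1 / math.log(2) - 1 / math.log(1 / delta)
    assert abs(truncated_integral(delta, 1, 2) - exact) < 1e-5
# raising iota by one (here iota = 2) destroys convergence:
# the truncated integrals grow without bound as delta -> 0+
assert truncated_integral(1e-8, 2, 2) > 100 * truncated_integral(1e-2, 2, 2)
```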
One notices that the analysis of $ \varpi_1 $ with all $ \tau>n-1 $ and $ \varpi_2 $ with $ n-1<\tau \in \mathbb{N}^+ $ is the same as that for $ \varrho =1$ in \cref{simichaos2}, i.e., $ L_1(\gamma) $ and $ L_2(\gamma) $ can be chosen as $ \gamma/(-\ln \gamma) \to 0^+$, see \eqref{Lit4} with $ \varrho =1$. Therefore, in view of \eqref{rho1}, we arrive at
\[{\varpi _1}\left( \gamma \right) \sim \frac{1}{{{{\left( { - \ln \gamma} \right)}^{\lambda - 1}}}} \sim \varpi _{\mathrm{LH}}^{\lambda - 1}\left( \gamma \right), \;\; \tau >n-1,\]
and
\[{\varpi _2}\left( \gamma \right) \sim \frac{1}{{{{\left( { - \ln \gamma} \right)}^{\lambda - 1}}}} \sim \varpi _{\mathrm{LH}}^{\lambda - 1}\left( \gamma \right), \;\; n-1<\tau \in \mathbb{N}^+.\]
However, the asymptotic analysis for $ \varpi_2 $ becomes different when $ n-1<\tau \notin \mathbb{N}^+ $. Note that $ \{\tau\} \in (0,1) $ and $ \left[ {\tau + 1} \right] - \tau = \left[ \tau \right] + 1 - \tau = 1 - \left\{ \tau \right\} $ at present. Hence, by applying \eqref{erheyi1} in \cref{erheyi} we get
\begin{align}
\int_0^{{L_2}\left( \gamma \right)} {\frac{{{\varphi _2}\left( t \right)}}{{{t^{k_2^ * + 1}}}}dt} &= {\mathcal{O}^\# }\left( {\int_0^{{L_2}\left( \gamma \right)} {\frac{1}{{{t^{\left[ {\tau + 1} \right] - \tau }}{{\left( { - \ln t} \right)}^\lambda }}}dt} } \right) = {\mathcal{O}^\# }\left( {\int_0^{{L_2}\left( \gamma \right)} {\frac{1}{{{t^{1 - \left\{ \tau \right\}}}{{\left( { - \ln t} \right)}^\lambda }}}dt} } \right)\notag \\
\label{coro42}& = {\mathcal{O}^\# }\left( {\int_{1/{L_2}\left( \gamma \right)}^{ + \infty } {\frac{1}{{{z^{1 + \left\{ \tau \right\}}}{{\left( {\ln z} \right)}^\lambda }}}dz} } \right) = {\mathcal{O}^\# }\left( {\frac{{{{\left( {{L_2}\left( \gamma \right)} \right)}^{\left\{ \tau \right\}}}}}{{{{\left( {\ln \left( {1/{L_2}\left( \gamma \right)} \right)} \right)}^\lambda }}}} \right),
\end{align}
and similarly according to \eqref{erheyi2} in \cref{erheyi} we have
\begin{equation}\label{coro43}
\gamma \int_{{L_2}\left( \gamma \right)}^\varepsilon {\frac{{{\varphi _2}\left( t \right)}}{{{t^{k_2^ * + 2}}}}dt} = {\mathcal{O}^\# }\left( {\gamma \int_{1/\varepsilon }^{1/{L_2}\left( \gamma \right)} {\frac{1}{{{z^{\left\{ \tau \right\}}}{{\left( {\ln z} \right)}^\lambda }}}dz} } \right) = {\mathcal{O}^\# }\left( {\frac{{\gamma {{\left( {{L_2}\left( \gamma \right)} \right)}^{\left\{ \tau \right\} - 1}}}}{{{{\left( {\ln \left( {1/{L_2}\left( \gamma \right)} \right)} \right)}^\lambda }}}} \right).
\end{equation}
Now let us choose $ L_2(\gamma) \sim \gamma \to 0^+ $, a choice different from that in the case $ n-1<\tau \in \mathbb{N}^+ $. One verifies that the second relation in \eqref{varpii} holds for $ i=2 $, and substituting $ L_2(\gamma) $ into \eqref{coro42} or \eqref{coro43} yields that
\[{\varpi _2}\left( \gamma \right) \sim \frac{{{\gamma ^{\left\{ \tau \right\}}}}}{{{{\left( { - \ln \gamma } \right)}^\lambda }}} \sim {\gamma ^{\left\{ \tau \right\}}}\varpi _{\mathrm{LH}}^\lambda \left( \gamma \right),\;\; n-1<\tau \notin \mathbb{N}^+\]
due to \eqref{varpii} in \cref{theorem1}. This proves \cref{lognew}.
\appendix
\section{Semi separability and weak homogeneity for moduli of continuity}
\begin{lemma}\label{Oxlemma}
Let a modulus of continuity $ \varpi $ be given. If $ \varpi $ is piecewise continuously differentiable and $ \varpi'\geq 0 $ is nonincreasing, then $ \varpi $ admits semi separability in \cref{d2}. As a consequence, if $ \varpi $ is concave near $ 0^+ $, then it is semi separable.
\end{lemma}
\begin{proof}
Assume that $ \varpi $ is continuously differentiable without loss of generality.
Then we obtain semi separability due to
\begin{align*}
\mathop {\sup }\limits_{0 < r < \delta /x} \frac{{\varpi \left( {rx} \right)}}{{\varpi \left( r \right)}} &= \mathop {\sup }\limits_{0 < r < \delta /x} \frac{{\varpi \left( {rx} \right) - \varpi \left( 0+ \right)}}{{\varpi \left( r \right)}} \leq \mathop {\sup }\limits_{0 < r < \delta /x} \frac{1}{{\varpi \left( r \right)}}\sum\limits_{j = 0}^{\left[ x \right]} {\int_{jr}^{\left( {j + 1} \right)r} {\varpi '\left( t \right)dt} } \\
& \leq \mathop {\sup }\limits_{0 < r < \delta /x} \frac{1}{{\varpi \left( r \right)}}\sum\limits_{j = 0}^{\left[ x \right]} {\int_0^r {\varpi '\left( t \right)dt} } = \left[ x \right] + 1 = \mathcal{O}\left( x \right),\;\;x \to + \infty .
\end{align*}
\end{proof}
\begin{lemma}\label{ruotux}
Let a modulus of continuity $ \varpi $ be given. If $ \varpi $ is concave near $ 0^+ $, then it admits weak homogeneity in \cref{weak}.
\end{lemma}
\begin{proof}
For $ x>0 $ sufficiently small, one verifies by concavity that
\[\varpi \left( x \right) = x \cdot \frac{{\varpi \left( x \right) - \varpi \left( 0+ \right)}}{x-0} \leq x \cdot \frac{{\varpi \left( {ax} \right) - \varpi \left( 0+ \right)}}{{ax}-0} = a^{-1}\varpi \left( {ax} \right),\]
for $ 0<a<1 $, which leads to weak homogeneity
\[ \mathop {\overline {\lim } }\limits_{x \to {0^ + }} \frac{{\varpi \left( x \right)}}{{\varpi \left( {ax} \right)}} \leq a^{-1} < + \infty .\]
\end{proof}
\section{Proof of \cref{Theorem1}} \label{JACK}
\begin{proof}
For completeness we give a detailed proof. The strategy is as follows: we first construct an approximation integral operator from the Fourier transform of a compactly supported function, then establish certain properties of this operator (these preparations are classical), and finally estimate the approximation error in terms of the modulus of continuity.
Let\[K\left( x \right) = \frac{1}{{{{\left( {2\pi } \right)}^n}}}\int_{{\mathbb{R}^n}} {\widehat K\left( \xi \right){e^{\mathrm{i}\left\langle {x,\xi } \right\rangle }}d\xi } ,\;\;x \in {\mathbb{C}^n}\]
be an entire function whose Fourier transform
\[\widehat K\left( \xi \right) = \int_{{\mathbb{R}^n}} {K\left( x \right){e^{ - \mathrm{i}\left\langle {x,\xi } \right\rangle }}dx} ,\;\;\xi \in {\mathbb{R}^n}\]
is a smooth function with compact support, contained in the ball $ \left| \xi \right| \leq 1 $, that satisfies $ \widehat K\left( \xi \right) = \widehat K\left( { - \xi } \right) $ and
\begin{equation}\label{3.3}
{\partial ^\alpha }\widehat K\left( 0 \right) = \left\{ \begin{gathered} 1,\;\;\alpha = 0, \\ 0,\;\;\alpha \ne 0. \\ \end{gathered} \right.
\end{equation}
Next, we assert that
\begin{equation}\label{5}
\left| {{\partial ^\beta }\mathcal{F}\big( {\widehat K\left( \xi \right)} \big)\left( z \right)} \right| \leq \frac{{c\left( {\beta ,p} \right)}}{{{{\left( {1 + \left| {\operatorname{Re} z} \right|} \right)}^p}}}{e^{\left| {\operatorname{Im} z} \right|}},\;\;\max \left\{ {1,\left|\beta\right|} \right\} \leq p \in \mathbb{R}.
\end{equation}
Note that we assume $ \widehat K \in C_0^\infty \left( {{\mathbb{R}^n}} \right) $ and $ \operatorname{supp} \widehat K \subseteq B\left( {0,1} \right) $, thus
\begin{equation}\label{FK}
\left| {{{\left( {1 + \left| z \right|} \right)}^k}{\partial ^\beta }\mathcal{F}\left( {\widehat K\left( \xi \right)} \right)\left( z \right)} \right| \leq c\sum\limits_{\left| \gamma \right| \leq k} {\left| {{z^\gamma }{\partial ^\beta }\mathcal{F}\big( {\widehat K\left( \xi \right)} \big)\left( z \right)} \right|} = c\sum\limits_{\left| \gamma \right| \leq k} {\left| {\mathcal{F}\big( {{\partial ^\gamma }\big( {{\xi ^\beta }\widehat K\left( \xi \right)} \big)} \big)\left( z \right)} \right|} ,
\end{equation}
where $ \mathcal{F} $ represents the Fourier transform; in the last step we used that differentiation in $ z $ corresponds, up to unimodular constants, to multiplication of $ \widehat K\left( \xi \right) $ by $ {\xi ^\beta } $, together with integration by parts in $ \xi $ (boundary terms vanish since $ \widehat K $ has compact support).
Since all the terms on the right-hand side of \eqref{FK} are, up to unimodular constants, Fourier transforms of functions belonging to $ C_0^\infty ( {\overline {B \left( {0,1} \right)}} ) $ (differentiating $ \mathcal{F}( {\widehat K} ) $ only multiplies the density $ \widehat K\left( \xi \right) $ by a polynomial in $ \xi $), we only need to prove that
\[\left| {\mathcal{F}\big( {\widehat K\left( \xi \right)} \big)\left( z \right)} \right| \leq {c_k}{e^{\left| {\operatorname{Im} z} \right|}}.\]
Obviously
\[\left| {\mathcal{F}\big( {\widehat K\left( \xi \right)} \big)\left( z \right)} \right| \leq \frac{1}{{{{\left( {2\pi } \right)}^n}}}\int_{{\mathbb{R}^n}} {\big| {\widehat K\left( \xi \right)} \big|{e^{ - \left\langle {\operatorname{Im} z,\xi } \right\rangle }}d\xi } \leq \frac{c}{{{{\left( {2\pi } \right)}^n}}}\int_{B\left( {0,1} \right)} {{e^{\left| {\left\langle {\operatorname{Im} z,\xi } \right\rangle } \right|}}d\xi } \leq c{e^{\left| {\operatorname{Im} z} \right|}},\]
where $ c>0 $ is independent of $ z $. Then assertion \eqref{5} is proved by recalling \eqref{FK}. The inequality \eqref{5} is a version of the Paley--Wiener theorem, see also Chapter III in \cite{Stein}. It plays an important role later, in the verification of definitions, in integration by parts, and in justifying complex translations via Cauchy's integral formula.
Next we assert that $ K $ restricts to a real analytic function on $ {\mathbb{R}^n} $ with the following property
\begin{equation}\label{3.6}
\int_{{\mathbb{R}^n}} {{{\left( {u + \mathrm{i}v} \right)}^\alpha }{\partial ^\beta }K\left( {u + \mathrm{i}v} \right)du} = \left\{ \begin{aligned} &{\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !,&\alpha = \beta , \\ &0,&\alpha \ne \beta , \\ \end{aligned} \right.
\end{equation}
for $ u,v \in {\mathbb{R}^n} $ and multi-indices $ \alpha ,\beta \in {\mathbb{N}^n} $.
In order to prove assertion \eqref{3.6}, we first prove its real version: for $ x\in \mathbb{R}^n $,
\begin{equation}\label{3.7}
\int_{{\mathbb{R}^n}} {{x^\alpha }{\partial ^\beta }K\left( x \right)dx} = \left\{ \begin{aligned} &{\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !,&\alpha = \beta , \\ &0,&\alpha \ne \beta . \\ \end{aligned} \right.
\end{equation}
{\bf{Case 1:}} If $ \alpha = \beta $, then integrating by parts $ \alpha_j $ times in each variable $ x_j $ (the boundary terms vanish by the decay estimate \eqref{5}) yields
\begin{align*}
\int_{{\mathbb{R}^n}} {{x^\alpha }{\partial ^\beta }K\left( x \right)dx} &= \int_{{\mathbb{R}^n}} {\Big( {\prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} } \Big)\Big( {\prod\limits_{j = 1}^n {\partial _{{x_j}}^{{\alpha _j}}} } \Big)K\left( x \right)dx} \\
&= {\left( { - 1} \right)^{{\alpha _1}}}{\alpha _1}!\int_{{\mathbb{R}^n}} {\Big( {\prod\limits_{j = 2}^n {x_j^{{\alpha _j}}} } \Big)\Big( {\prod\limits_{j = 2}^n {\partial _{{x_j}}^{{\alpha _j}}} } \Big)K\left( x \right)dx} \\
&= \cdots = {\left( { - 1} \right)^{{\alpha _1} + \cdots + {\alpha _n}}}{\alpha _1}! \cdots {\alpha _n}!\int_{{\mathbb{R}^n}} {K\left( x \right)dx} \\
& = {\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !\widehat K\left( 0 \right) = {\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !.
\end{align*}
{\bf{Case 2:}} There exists some $ j $ with $ {\alpha _j} \leq {\beta _j} - 1 $; let $ j=1 $ without loss of generality. Then, integrating by parts $ \alpha_1 $ times in $ x_1 $,
\begin{align*}
\int_{{\mathbb{R}^n}} {{x^\alpha }{\partial ^\beta }K\left( x \right)dx} &= \int_{{\mathbb{R}^n}} {\Big( {\prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} } \Big)\Big( {\prod\limits_{j = 1}^n {\partial _{{x_j}}^{{\beta _j}}} } \Big)K\left( x \right)dx} \\
&= {\left( { - 1} \right)^{{\alpha _1}}}{\alpha _1}!\int_{{\mathbb{R}^n}} {\Big( {\prod\limits_{j = 2}^n {x_j^{{\alpha _j}}} } \Big)\Big( {\prod\limits_{j = 2}^n {\partial _{{x_j}}^{{\beta _j}}} } \Big)\partial _{{x_1}}^{{\beta _1} - {\alpha _1}}K\left( x \right)dx}=0.
\end{align*}
The last integral vanishes because $ \beta_1 - \alpha_1 \geq 1 $, so the inner $ x_1 $-integration of the exact derivative $ \partial _{{x_1}}^{{\beta _1} - {\alpha _1}}K $ gives zero by the decay estimate \eqref{5}.
{\bf{Case 3:}} Now $ {\alpha _j} \geq {\beta _j} $ for all $ j $, and some $ {\alpha _j} \geq {\beta _j} + 1 $ (otherwise $ \alpha = \beta $); let $ j=1 $ without loss of generality. We first record a consequence of \eqref{3.3}. Since
\[{\partial ^\alpha }\widehat K\left( 0 \right) = {\left( { - \mathrm{i}} \right)^{\left| \alpha \right|}}\int_{{\mathbb{R}^n}} {{x^\alpha }K\left( x \right)dx} = 0,\;\;\alpha \ne 0,\]
it follows that
\begin{displaymath}
\int_{{\mathbb{R}^n}} {{x^\alpha }K\left( x \right)dx} = 0,\;\;\alpha \ne 0.
\end{displaymath}
Hence, integrating by parts $ \beta_j $ times in each variable, we arrive at
\[\int_{{\mathbb{R}^n}} {{x^\alpha }{\partial ^\beta }K\left( x \right)dx} = {\left( { - 1} \right)^{\left| \beta \right|}}\frac{{\alpha !}}{{\left( {\alpha - \beta } \right)!}}\int_{{\mathbb{R}^n}} {{x^{\alpha - \beta }}K\left( x \right)dx} = 0,\]
since $ \alpha - \beta \ne 0 $. This proves \eqref{3.7}. Next, we consider a complex translation of \eqref{3.7} and prove that
\begin{equation}\notag
\int_{{\mathbb{R}^n}} {{{\left( {u + \mathrm{i}v} \right)}^\alpha }{\partial ^\beta }K\left( {u + \mathrm{i}v} \right)du} = \left\{ \begin{aligned} &{\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !,&\alpha = \beta, \\ &0,&\alpha \ne \beta . \\ \end{aligned} \right.
\end{equation}
Indeed, the decay estimate \eqref{5} allows one to shift the contour of integration, and the claim then follows from Cauchy's integral formula. Finally, it remains to prove that
\begin{equation}\label{3.10}
{S_r}{p} = {p},\;\;{p}:{\mathbb{R}^n} \to \mathbb{R}.
\end{equation}
In fact, by linearity only real monomials need to be considered:
\[{p} = {x^\alpha } = \prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} \ .\]
A direct calculation, using \eqref{3.7} with $ \beta = 0 $, gives
\begin{align*}
{S_r}{p} &= {r^{ - n}}\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - y}}{r}} \right)\prod\limits_{j = 1}^n {y_j^{{\alpha _j}}} dy} = \int_{{\mathbb{R}^n}} {K\left( z \right)\prod\limits_{j = 1}^n {{{\left( {r{z_j} + {x_j}} \right)}^{{\alpha _j}}}} dz} \\
& = \Big( {\prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} } \Big)\int_{{\mathbb{R}^n}} {K\left( z \right)dz} + \sum\limits_{\gamma \ne 0} {{\varphi _\gamma }\left( {r,x} \right)\int_{{\mathbb{R}^n}} {{z^\gamma }K\left( z \right)dz} } = \prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} = {p},
\end{align*}
where the $ {{\varphi _\gamma }\left( {r,x} \right)} $ are coefficients independent of $ z $, $ \int_{{\mathbb{R}^n}} {K\left( z \right)dz} = \widehat K\left( 0 \right) = 1 $, and the moments $ \int_{{\mathbb{R}^n}} {{z^\gamma }K\left( z \right)dz} $ with $ \gamma \ne 0 $ vanish. As to the complex case, one only needs to perform a complex translation to obtain
\begin{equation}\notag
{p}\left( {u;\mathrm{i}v} \right) = {S_r}{p}\left( {u + \mathrm{i}v} \right) = \int_{{\mathbb{R}^n}} {K\left( {\mathrm{i}{r^{ - 1}}v - \eta } \right){p}\left( {u;r\eta } \right)d\eta }.
\end{equation}
The above preparations are classical; see also \cite{salamon}. Next we prove the Jackson-type approximation theorem in terms of the modulus of continuity alone. We will make use of \eqref{3.10} in the case of the Taylor polynomial
\[{p_k}\left( {x;y} \right): = {P_{f,k}}\left( {x;y} \right) = \sum\limits_{\left| \alpha \right| \leq k} {\frac{1}{{\alpha !}}{\partial ^\alpha }f\left( x \right){y^\alpha }} \]
of $ f $ with $ k\in \mathbb{N} $. Note that
\[\left| {f\left( {x + y} \right) - {p_k}\left( {x;y} \right)} \right| = \Bigg| {\int_0^1 {k{{\left( {1 - t} \right)}^{k - 1}}\sum\limits_{\left| \alpha \right| = k} {\frac{1}{{\alpha !}}\left( {{\partial ^\alpha }f\left( {x + ty} \right) - {\partial ^\alpha }f\left( x \right)} \right){y^\alpha }dt} } } \Bigg|\]
for every $ x,y \in {\mathbb{R}^n} $.
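For the reader's convenience we recall why this identity holds; it is the integral form of the Taylor remainder. Since $ \int_0^1 {k{{\left( {1 - t} \right)}^{k - 1}}dt} = 1 $, Taylor's theorem with integral remainder gives
\[f\left( {x + y} \right) = \sum\limits_{\left| \alpha \right| \leq k - 1} {\frac{1}{{\alpha !}}{\partial ^\alpha }f\left( x \right){y^\alpha }} + \int_0^1 {k{{\left( {1 - t} \right)}^{k - 1}}\sum\limits_{\left| \alpha \right| = k} {\frac{1}{{\alpha !}}{\partial ^\alpha }f\left( {x + ty} \right){y^\alpha }} dt} ,\]
and subtracting $ {p_k}\left( {x;y} \right) $ absorbs the terms with $ \left| \alpha \right| = k $ of the Taylor polynomial into the integral with total mass one, which produces exactly the displayed difference $ {\partial ^\alpha }f\left( {x + ty} \right) - {\partial ^\alpha }f\left( x \right) $.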
Define the following domains to partition $ \mathbb{R}^n $: \[{\Omega _1}: = \left\{ {\eta \in {\mathbb{R}^n}:\left| \eta \right| < \delta{r^{ - 1}}} \right\},\;\;{\Omega _2}: = \left\{ {\eta \in {\mathbb{R}^n}:\left| \eta \right| \geq \delta{r^{ - 1}}} \right\}.\] We have to use different estimates in the above two domains, which are abstracted as follows. If $ 0 < \left| y \right| < \delta $, we obtain that \begin{align} \left| {f\left( {x + y} \right) - {p_k}\left( {x;y} \right)} \right| \leq{}& \int_0^1 {k{{\left( {1 - t} \right)}^{k - 1}}\sum\limits_{\left| \alpha \right| = k} {\frac{1}{{\alpha !}} \cdot {{\left[ {{\partial ^\alpha }f} \right]}_\varpi }\varpi \left( {t\left| y \right|} \right) \cdot \left| {{y^\alpha }} \right|dt} } \notag \\ \leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }\int_0^1 {\varpi \left( {t\left| y \right|} \right)dt} \cdot \left| {{y^\alpha }} \right|\notag \\ \leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }\varpi \left( {\left| y \right|} \right)\left| {{y^\alpha }} \right|\notag \\ \label{e1}\leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }\varpi \left( {\left| y \right|} \right){\left| y \right|^k}. \end{align} If $ \left| y \right| \geq \delta $, one easily arrives at \begin{align} \left| {f\left( {x + y} \right) - {p_k}\left( {x;y} \right)} \right| \leq{}& \int_0^1 {k{{\left( {1 - t} \right)}^{k - 1}}\sum\limits_{\left| \alpha \right| = k} {\frac{1}{{\alpha !}} \cdot 2{{\left| {{\partial ^\alpha }f} \right|}_{{C^0}}} \cdot \left| {{y^\alpha }} \right|dt} } \notag \\ \leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }\left| {{y^\alpha }} \right|\notag \\ \label{e2} \leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }{\left| y \right|^k}. 
\end{align}
Young's inequality for products has been used in \eqref{e1} and \eqref{e2}, with $ k \geq 1 $ and (without loss of generality) $ {\alpha _i} \geq 1 $, $ \mu_i=k/\alpha_i\geq 1 $, $ \sum\nolimits_{i = 1}^n {1/{\mu _i}} = 1 $:
\[\left| {{y^\alpha }} \right| = \prod\limits_{i = 1}^n {{{\left| {{y_i}} \right|}^{{\alpha _i}}}} \leq \sum\limits_{i = 1}^n {\frac{1}{{{\mu _i}}}{{\left| {{y_i}} \right|}^{{\alpha _i}{\mu _i}}}} \leq \sum\limits_{i = 1}^n {{{\left| {{y_i}} \right|}^k}} \leq \sum\limits_{i = 1}^n {{{\left| y \right|}^k}} = n{\left| y \right|^k}.\]
Now let $ x=u+\mathrm{i}v $ with $ u,v \in {\mathbb{R}^n} $ and $ \left| v \right| \leq r $. Fix $ p = n + k + 2 $, and let $ c = c\left( {n,k} \right) > 0 $ denote a constant depending only on $ n $ and $ k $, which may change from line to line; then it follows that
\begin{align*}
\left| {{S_r}f\left( {u + \mathrm{i}v} \right) - {p_k}\left( {u;\mathrm{i}v} \right)} \right| \leq{}& \int_{{\mathbb{R}^n}} {\left| {K\left( {\mathrm{i}{r^{ - 1}}v - \eta } \right)} \right|\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|d\eta } \notag \\
\leq{}& c\int_{{\mathbb{R}^n}} {\frac{{{e^{\left| {{r^{ - 1}}v} \right|}}}}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|d\eta } \notag \\
\leq{}& c\int_{{\mathbb{R}^n}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|d\eta } \notag \\
={}& c\Big( {\int_{{\Omega _1}} { + \int_{{\Omega _2}} } } \Big)\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|d\eta \notag \\
: ={}& c\left( {{I_1} + {I_2}} \right),
\end{align*}
where $ {e^{\left| {{r^{ - 1}}v} \right|}} \leq e $ was absorbed into $ c $ since $ \left| v \right| \leq r $. As will be seen, $ I_1 $ is the main part while $ I_2 $ is the remainder. Recall \cref{Remarksemi} and \eqref{Ox}.
Hence the following holds due to \eqref{e1} \begin{align}\label{I1} {I_1} ={}& \int_{{\Omega _1}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|d\eta }\notag \\ \leq{}& \int_{\left| \eta \right| < \delta{r^{ - 1}}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}} \cdot c{{\left\| f \right\|}_\varpi }\varpi \left( {\left| {r\eta } \right|} \right){{\left| {r\eta } \right|}^k}d\eta } \notag \\ \leq{}& \int_{\left| \eta \right| < \delta{r^{ - 1}}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}} \cdot c{{\left\| f \right\|}_\varpi }\varpi \left( { {r } } \right) \psi(|\eta|) {{\left| {r\eta } \right|}^k}d\eta } \notag \\ \leq{}& c{\left\| f \right\|_\varpi }{r^k}{\varpi(r)}\int_0^{\delta{r^{ - 1}}} {\frac{{{w^{k + n }}}}{{{{\left( {1 + w} \right)}^p}}}dw} \notag \\ \leq{}& c{\left\| f \right\|_\varpi }{r^k}{\varpi(r)}\int_0^{+\infty} {\frac{{{w^{k + n }}}}{{{{\left( {1 + w} \right)}^p}}}dw} \notag \\ \leq{}& c{\left\| f \right\|_\varpi }{r^k}{\varpi(r)}. \end{align} In view of \eqref{e2}, we have \begin{align}\label{I2} {I_2} ={}& \int_{{\Omega _2}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|d\eta } \notag \\ \leq{}& \int_{\left| \eta \right| \geq \delta{r^{ - 1}}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}} \cdot c{{\left\| f \right\|}_\varpi }{{\left| {r\eta } \right|}^k}d\eta } \notag \\ \leq{}& c{\left\| f \right\|_\varpi }{r^k}\int_{\delta{r^{ - 1}}}^{ + \infty } {\frac{{{w^{k + n - 1}}}}{{{{\left( {1 + w} \right)}^p}}}dw} \notag \\ \leq{}& c{\left\| f \right\|_\varpi }{r^k}\int_{\delta{r^{ - 1}}}^{ + \infty } {\frac{1}{{{w^{p - k - n + 1}}}}dw} \notag \\ \leq{}& c{\left\| f \right\|_\varpi }{r^{k+2}}. 
\end{align}
By \eqref{I1} and \eqref{I2}, we finally arrive at
\begin{equation}\notag
\left| {{S_r}f\left( {u + \mathrm{i}v} \right) - {p_k}\left( {u;\mathrm{i}v} \right)} \right| \leq c{\left\| f \right\|_\varpi }{r^k} {\varpi(r)}
\end{equation}
due to $ \mathop {\overline {\lim } }\limits_{r \to {0^ + }} r/{\varpi }\left( r \right) < + \infty $ in \cref{d1}. This proves \cref{Theorem1} for $ |\alpha| = 0 $. As to $ |\alpha| \ne 0 $, the result follows from the fact that $ {S_r} $ commutes with $ {\partial ^\alpha } $. We therefore finish the proof of \cref{Theorem1}.
\end{proof}
\section{Proof of \cref{coro1}}\label{proofcoro1}
\begin{proof}
Only the case $ \left| \alpha \right| = 0 $ is treated, the general case being analogous. In view of \cref{Theorem1} and \eqref{e1}, we obtain that
\begin{align}
\left| {{S_r}f\left( x \right) - f\left( x \right)} \right| \leq{}& \left| {{S_r}f\left( x \right) - {P_{f,k}}\left( {\operatorname{Re} x;\mathrm{i}\operatorname{Im} x} \right)} \right| + \left| {{P_{f,k}}\left( {\operatorname{Re} x;\mathrm{i}\operatorname{Im} x} \right) - f\left( x \right)} \right|\notag \\
\label{Srf-f} \leq{}& c_*{\left\| f \right\|_\varpi }{r^k}\varpi(r) ,
\end{align}
where the constant $ c_*>0 $ depends on $ n $ and $ k $. Further, by \eqref{Srf-f} we have
\begin{align*}
\left| {{S_r}f\left( x \right)} \right| \leq{}& \left| {{S_r}f\left( x \right) - f\left( x \right)} \right| + \left| {f\left( x \right)} \right|\notag \\
\leq{}& c_*{\left\| f \right\|_\varpi }{r^k}\varpi(r) + {\left\| f \right\|_\varpi } \leq {c^ * }{\left\| f \right\|_\varpi },
\end{align*}
for some constant $ c^*>0 $ depending on $ n,k $ and $ \varpi $. This completes the proof.
\end{proof}
\section{Proof of \cref{coro2}}\label{proofcoco2}
\begin{proof}
It is easy to verify, using the periodicity of $ f $, that
\begin{align*}
{S_r}f\left( {x + 1} \right) ={}& \frac{1}{{{r^n}}}\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - \left( {y - 1} \right)}}{r}} \right)f\left( y \right)dy} = \frac{1}{{{r^n}}}\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - u}}{r}} \right)f\left( {u + 1} \right)du} \notag \\
={}& \frac{1}{{{r^n}}}\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - u}}{r}} \right)f\left( u \right)du} = {S_r}f\left( x \right).
\end{align*}
According to Fubini's theorem, we obtain
\begin{align*}
\int_{{\mathbb{T}^n}} {{S_r}f\left( x \right)dx} ={}& \frac{1}{{{r^n}}}\int_{{\mathbb{T}^n}} {\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - y}}{r}} \right)f\left( y \right)dy} \, dx} \notag \\
={}& \frac{1}{{{r^n}}}\int_{{\mathbb{R}^n}} {K\left( {\frac{m}{r}} \right)\left( {\int_{{\mathbb{T}^n}} {f\left( {x + m} \right)dx} } \right) dm} = 0,
\end{align*}
where we substituted $ y = x + m $, used the evenness of $ K $, and noted that $ \int_{{\mathbb{T}^n}} {f\left( {x + m} \right)dx} = \int_{{\mathbb{T}^n}} {f\left( x \right)dx} $ by periodicity. This completes the proof.
\end{proof}
\section{Asymptotic analysis in estimates}
Here we provide some useful asymptotic results, all of which can be proved by L'H\^opital's rule or by integration by parts; the proofs are therefore omitted.
\begin{lemma}\label{duochongduishu}
Let $ \varrho \in \mathbb{N}^+ $, $ \lambda>1 $ and some $ M>0 $ sufficiently large be fixed. Then for $ X\to +\infty $, there holds
\[\int_M^X {\frac{1}{{(\ln z) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho z)}^\lambda }}}dz} = {\mathcal{O}^\# }\Bigg( {\frac{X}{{(\ln X) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho X)}^\lambda }}}} \Bigg).\]
\end{lemma}
\begin{lemma}\label{erheyi}
Let $ 0 <\sigma<1 $, $ \lambda>1 $ and some $ M>0 $ sufficiently large be fixed.
Then for $ X\to +\infty $, we have
\begin{equation}\label{erheyi1}
\int_M^X {\frac{1}{{{z^\sigma }{{\left( {\ln z} \right)}^\lambda }}}dz} = {\mathcal{O}^\# }\left( {\frac{{{X^{1 - \sigma }}}}{{{{\left( {\ln X} \right)}^\lambda }}}} \right),
\end{equation}
and
\begin{equation}\label{erheyi2}
\int_X^{ + \infty } {\frac{1}{{{z^{1 + \sigma }}{{\left( {\ln z} \right)}^\lambda }}}dz} = {\mathcal{O}^\# }\left( {\frac{1}{{{X^\sigma }{{\left( {\ln X} \right)}^\lambda }}}} \right).
\end{equation}
\end{lemma}
\section{KAM theorem for quantitative estimates}\label{Appsalamon}
Here we state a KAM theorem with quantitative estimates, which is used in \cref{theorem1} of this paper. See Theorem 1 in Salamon's paper \cite{salamon} for the case $ \tau >n-1 $; as for $ \tau =n-1 $, the proof only requires a slight modification of Lemma 2 in \cite{salamon}.
\begin{theorem}\label{appendix}
Let $ n \geq 2, \tau \geq n - 1, 0 < \theta < 1$, and $ M \geq 1 $ be given. Then there are positive constants $ \delta_* $ and $ c $ such that $ c\delta_*\leq1/2 $ and the following holds for every $ 0 < r^* \leq 1 $ and every $ \omega\in\mathbb{R}^n $ that satisfies \eqref{dio}.
Suppose that $ H(x, y) $ is a real analytic Hamiltonian function defined in the strip $ \left| {\operatorname{Im} x} \right| \leq {r^ * },\left| y \right| \leq {r^ * } $, which is of period $ 1 $ in the variables $ {x_1}, \ldots ,{x_n} $ and satisfies
\begin{align*}
\left| {H\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {H\left( {\xi ,0} \right)d\xi } } \right| &\leq {\delta ^ * }{r^ * }^{2\tau + 2},\notag\\
\left| {{H_y}\left( {x,0} \right) - \omega } \right| &\leq {\delta ^ * }{r^ * }^{\tau + 1},\notag\\
\left| {{H_{yy}}\left( {x,y} \right) - Q\left( {x,y} \right)} \right| &\leq \frac{{c{\delta ^ * }}}{{2M}},\notag
\end{align*}
for $ \left| {\operatorname{Im} x} \right| \leq r^*,\left| y \right| \leq r^* $, where $ 0 < {\delta ^ * } \leq {\delta _ * } $, and $ Q\left( {x,y} \right) \in {\mathbb{C}^{n \times n}} $ is a symmetric (not necessarily analytic) matrix-valued function in the strip $ \left| {\operatorname{Im} x} \right| \leq r^*,\left| y \right| \leq r^* $ that satisfies in this domain
\[\left| {Q\left( z \right)} \right| \leq M,\;\;\left| {{{\left( {\int_{{\mathbb{T}^n}} {Q\left( {x,0} \right)dx} } \right)}^{ - 1}}} \right| \leq M.\]
Then there exists a real analytic symplectic transformation $ z = \phi \left( \zeta \right) $ of the form
\[z = \left( {x,y} \right),\;\;\zeta = \left( {\xi ,\eta } \right),\;\;x = u\left( \xi \right),\;\;y = v\left( \xi \right) + {\left( {u_\xi ^T\left( \xi \right)} \right)^{ - 1}}\eta \]
mapping the strip $ \left| {\operatorname{Im} \xi } \right| \leq \theta r^*,\left| \eta \right| \leq \theta r^* $ into $ \left| {\operatorname{Im} x} \right| \leq r^*,\left| y \right| \leq r^* $, such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables and the Hamiltonian function $ K: = H \circ \phi $ satisfies
\[{K_\xi }\left( {\xi ,0} \right) = 0,\;\;{K_\eta }\left( {\xi ,0} \right) = \omega .\]
Moreover, $ \phi $ and $ K $ satisfy the estimates
\begin{align*}
&\left| {\phi \left( \zeta \right) - \zeta } \right|
\leq c{\delta ^ * }\left( {1 - \theta } \right){r^ * },\;\;\left| {{\phi _\zeta }\left( \zeta \right) - \mathbb{I}} \right| \leq c{\delta ^ * },\\ &\left| {{K_{\eta \eta }}\left( \zeta \right) - Q\left( \zeta \right)} \right| \leq \frac{{c{\delta ^ * }}}{M},\\ &\left| {v \circ {u^{ - 1}}\left( x \right)} \right| \leq c{\delta ^ * }{r^ * }^{\tau + 1}, \end{align*} for $ \left| {\operatorname{Im} \xi } \right| \leq \theta r^*,\left| \eta \right| \leq \theta r^* $, and $ \left| {\operatorname{Im} x} \right| \leq \theta r^* $. \end{theorem} \section*{Acknowledgments} This work was supported in part by National Basic Research Program of China (Grant No. 2013CB834100), National Natural Science Foundation of China (Grant No. 12071175, Grant No. 11171132, Grant No. 11571065), Project of Science and Technology Development of Jilin Province (Grant No. 2017C028-1, Grant No. 20190201302JC), and Natural Science Foundation of Jilin Province (Grant No. 20200201253JC). \end{document}
\begin{document}
\title{Homogeneous para-K\"ahler Einstein manifolds}
\author{D.V. Alekseevsky, C. Medori \and A. Tomassini}
\address{The University of Edinburgh\\ James Clerk Maxwell Building\\ The King's Buildings\\ Mayfield Road\\ Edinburgh\\ EH9 3JZ\\ UK}
\email{[email protected]}
\address{Dipartimento di Matematica\\ Universit\`a di Parma\\ Viale G.P. Usberti, 53/A\\ 43100 Parma\\ Italy}
\email{[email protected]}
\email{[email protected]}
\keywords{para-K\"ahler manifold, invariant Einstein metric, bi-Lagrangian structure, adjoint orbit, Koszul form}
\subjclass{53C15, 53C56}
\thanks{This work was partially supported by Leverhulme Trust, EM/9/2005/0069, by the M.I.U.R. Project ``Geometric Properties of Real and Complex Manifolds'' and by G.N.S.A.G.A. of I.N.d.A.M.}
\thanks{The second and the third author would like to thank the School of Mathematics of the University of Edinburgh for its kind hospitality}
\maketitle
\begin{center} Dedicated to E.B.Vinberg on the occasion of his 70th birthday \end{center}
\begin{abstract}
A para-K\"ahler manifold can be defined as a pseudo-Riemannian manifold $(M,g)$ with a parallel skew-symmetric para-complex structure $K$, i.e. a parallel field of skew-symmetric endomorphisms with $ K^2 = \mathrm{Id} $ or, equivalently, as a symplectic manifold $(M,\omega)$ with a bi-Lagrangian structure $L^\pm$, i.e. two complementary integrable Lagrangian distributions. \\ A homogeneous manifold $M = G/H$ of a semisimple Lie group $G$ admits an invariant para-K\"ahler structure $(g,K)$ if and only if it is a covering of the adjoint orbit $\mathrm{Ad}_G h$ of a semisimple element $h.$ We give a description of all invariant para-K\"ahler structures $(g,K)$ on such a homogeneous manifold.
Using a para-complex analogue of basic formulas of K\"ahler geometry, we prove that any invariant para-complex structure $K$ on $M = G/H$ defines a unique para-K\"ahler Einstein structure $(g,K)$ with given non-zero scalar curvature. An explicit formula for the Einstein metric $g$ is given. A survey of recent results on para-complex geometry is included.
\end{abstract}
\tableofcontents
\section{Introduction.}
An {\bf almost para-complex structure} on a $2n$-dimensional manifold $M$ is a field $K$ of involutive endomorphisms ($K^2=1$) with $n$-dimensional eigendistributions $T^{\pm}$ with eigenvalues $\pm 1$. \\ (More generally, any field $K$ of involutive endomorphisms is called an {\bf almost para-complex structure in weak sense}).\\ If the $n$-dimensional eigendistributions $T^{\pm}$ of $K$ are involutive, the field $K$ is called a {\bf para-complex structure}. This is equivalent to the vanishing of the Nijenhuis tensor $N_K$ of $K$. \\ In other words, a para-complex structure on $M$ is the same as a pair of complementary $n$-dimensional integrable distributions $ T^{\pm} M $.\\ A decomposition $M = M_+ \times M_-$ of a manifold $M$ into a direct product defines on $M$ a para-complex structure $K$ in the weak sense with eigendistributions $T^+ = TM_+$ and $T^- = TM_-$ tangent to the factors. It is a para-complex structure if the factors $M_{\pm}$ have the same dimension.\\ Any para-complex structure can locally be obtained by this construction. Due to this, an (almost) para-complex structure is also called an {\bf (almost) product structure}.\\ A manifold $M$ endowed with a para-complex structure $K$ admits an atlas of para-holomorphic coordinates (which are functions with values in the algebra $C = \R + e \R, \, e^2 =1,$ of para-complex numbers) such that the transition functions are para-holomorphic.
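As a simple illustration (added here for concreteness), the algebra $C$ of para-complex numbers behaves quite differently from $\mathbb{C}$: it contains zero divisors and splits into two real factors. Writing $ e_\pm = \tfrac{1}{2}(1 \pm e) $, one has
\[ e_\pm^2 = e_\pm, \qquad e_+ e_- = 0, \qquad e_+ + e_- = 1, \qquad e_+ - e_- = e, \]
so that $ x + ey = (x+y)\,e_+ + (x-y)\,e_- $ and the map $ x + ey \mapsto (x+y,\, x-y) $ is an algebra isomorphism $ C \cong \R \oplus \R $; the two idempotents $ e_\pm $ mirror the two eigenvalues $\pm 1$ of a para-complex structure $K$.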
One can define para-complex analogues of different composed geometric structures which involve a complex structure (Hermitian, K\"ahler, nearly K\"ahler, special K\"ahler, hypercomplex, hyper-K\"ahler, quaternionic, quaternionic K\"ahler structures, $CR$-structure etc.) changing a complex structure $J$ to a para-complex structure $K$. Many results of the geometry of such structures remain valid in the para-complex case. On the other hand, some para-complex composed structures admit a different interpretation as 3-webs, bi-Lagrangian structures and so on.\smallskip

The structure of the paper is the following.\newline We give a survey of known results about para-complex geometry in section 2. Sections 3, 4, 5 contain an elementary introduction to para-complex and para-K\"ahler geometry. The bundle $\Lambda^r(T^*M \otimes C)$ of $C$-valued $r$-forms is decomposed into a direct sum of $(p,q)$-forms $\Lambda^{p,q}M$ and the exterior differential $d$ is represented as a direct sum $d = \partial + \bar \partial$. Moreover, a para-complex analogue of the Dolbeault Lemma holds (see \cite{CMMS1} and subsection \ref{differentialforms}).\\ A {\bf para-K\"ahler structure} on a manifold $M$ is a pair $(g,K)$ where $g$ is a pseudo-Riemannian metric and $K$ is a parallel skew-symmetric para-complex structure. A pseudo-Riemannian $2n$-dimensional manifold $(M,g)$ admits a para-K\"ahler structure $(g,K)$ if and only if its holonomy group is a subgroup of $GL_n(\R) \subset SO_{n, n} \subset GL_{2n}(\R)$.\\ If $(g,K)$ is a para-K\"ahler structure on $M$, then $\omega = g \circ K$ is a symplectic structure and the $\pm 1$-eigendistributions $T^{\pm} M$ of $K$ are two integrable $\omega$-Lagrangian distributions. Due to this, a para-K\"ahler structure can be identified with a bi-Lagrangian structure $(\omega, T^{\pm} M)$ where $\omega$ is a symplectic structure and $T^{\pm}M$ are two integrable Lagrangian distributions.
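The standard flat model, which we recall for concreteness, is $ M = \R^{2n} $ with coordinates $ (x_i, y_i) $ and
\[ K\,\partial_{x_i} = \partial_{x_i}, \qquad K\,\partial_{y_i} = -\partial_{y_i}, \qquad g = \sum_{i=1}^n \left( dx_i \otimes dy_i + dy_i \otimes dx_i \right). \]
Here $ g $ has signature $ (n,n) $, $ K $ is skew-symmetric with respect to $ g $ and parallel, and $ \omega = g \circ K = \sum_{i=1}^n dx_i \wedge dy_i $ is the standard symplectic form; the coordinate distributions $ T^\pm M $, spanned by the $ \partial_{x_i} $ and by the $ \partial_{y_i} $ respectively, are the two integrable Lagrangian distributions of the associated bi-Lagrangian structure.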
In section 4 we derive some formulas for the curvature and Ricci curvature of a para-K\"ahler structure $(g,K)$ in terms of para-holomorphic coordinates. In particular, we show that, as in the K\"ahler case, the Ricci tensor $S$ depends only on the determinant of the metric tensor $g_{\alpha \bar \beta}$ written in terms of para-holomorphic coordinates.\\ In section 5, we consider a homogeneous manifold $(M = G/H, K, \mathrm{vol})$ with an invariant para-complex structure $K$ and an invariant volume form $\mathrm{vol}$. We establish a formula which expresses the pull-back $\pi^* \rho$ to $G$ of the Ricci form $\rho = S \circ K$ of any invariant para-K\"ahler structure $(g,K)$ as the differential of a left-invariant 1-form $\psi$, called the {\bf Koszul form}.\\ In the last section, we use the important result by Z.\ Hou, S.\ Deng, S.\ Kaneyuki and K.\ Nishiyama (see \cite{HDK}, \cite{HDKN}) stating that a homogeneous manifold $M= G/H$ of a semisimple Lie group $G$ admits an invariant para-K\"ahler structure if and only if it is a covering of the adjoint orbit $\mathrm{Ad}_G\, h$ of a semisimple element $h\in \mathfrak{g} = \mathrm{Lie}(G)$. \\ We describe all invariant para-complex structures $K$ on $M= G/H$ in terms of fundamental gradations of the Lie algebra $\mathfrak{g}$ with $\mathfrak{g}_0 = \mathfrak{h} := \mathrm{Lie}(H)$ and we show that they are compatible with any invariant symplectic structure $\omega$ on $G/H$, so that $(g = \omega \circ K,\, K)$ is an invariant para-K\"ahler structure. This gives a description of all invariant para-K\"ahler structures on homogeneous manifolds of a semisimple group $G$. An invariant para-complex structure on $M = G/H$ defines an Anosov flow, but a theorem by Y. Benoist and F. Labourie shows that this flow cannot be pushed down to any {\em smooth} compact quotient $\Gamma \setminus G/H$.
We give a complete description of invariant para-K\"ahler-Einstein metrics on homogeneous manifolds of a semisimple Lie group and prove the following theorem. \begin{thm} Let $M = G/H$ be a homogeneous manifold of a semisimple Lie group $G$ which admits an invariant para-K\"ahler structure and let $K$ be an invariant para-complex structure on $M$. Then there exists a unique invariant symplectic structure $\rho$ which is the push-down of the differential $d \psi$ of the Koszul 1-form $\psi$ on $G$ such that $g_{\lambda, K} := \lambda^{-1}\rho \circ K$ is an invariant para-K\"ahler Einstein metric with Einstein constant $\lambda \neq 0$, and this construction gives all invariant para-K\"ahler-Einstein metrics on $M$. \end{thm} \section{A survey on para-complex geometry.} \subsection{Para-complex structures.} The notion of an {\bf almost para-complex structure} (or an {\bf almost product structure}) on a manifold was introduced by P.~K. Ra\v sevski\u{\i} \cite{Rash} and P. Libermann \cite{L1}, \cite{L2}, where the problem of integrability had also been discussed. The paper \cite{CFG} contains a survey of further results on para-complex structures with more than 100 references. The papers \cite{CGM}, \cite{EST} contain surveys of para-Hermitian and para-K\"ahler geometries. Different generalizations of Hermitian geometry and contact geometry are considered in \cite{Kir}.\\ Note that a para-complex structure $K$ on a manifold $M$ defines a new Lie algebra structure on the space $\mathfrak{X}(M)$ of vector fields given by $$ [X,Y]_K := [KX,Y]+ [X,KY] - K[X,Y]$$ such that the map $$ ( \mathfrak{X}(M),[.,.]_K ) \rightarrow (\mathfrak{X}(M),[.,.]), \, \, X \mapsto KX $$ is a homomorphism of Lie algebras.\\ Moreover, $K$ defines a new differential $d_K$ of the algebra $\Lambda(M)$ of differential forms which is a derivation of $\Lambda(M)$ of degree one with $d_K^2 =0$.
It is given by $$d_K := \{d,\iota_K \} := d \circ \iota_K + \iota_K \circ d $$ where $\iota_K$ is the derivation of degree $-1$ of the supercommutative algebra $\Lambda(M)$ associated with $K$, defined by the contraction. (Recall that the superbracket of two derivations is a derivation.)\\ In \cite{LS}, the authors define the notion of a para-complex affine immersion $f : M \rightarrow M'$ with transversal bundle $N$ between para-complex manifolds $(M,K), \, (M',K')$ equipped with torsion free para-complex connections $\nabla, \nabla'$ and prove a theorem about existence and uniqueness of such an immersion of $(M,K,\nabla)$ into the affine space $M' = \mathbb{R}^{2m + 2n}$ with the standard para-complex structure and flat connection.\\ In \cite{S1}, the notion of a para-$tt^*$-bundle over an (almost) para-complex $2n$-dimensional manifold $(M,K)$ is defined as a vector bundle $\pi : E \rightarrow M$ with a connection $\nabla$ and an $\mathrm{End}(E)$-valued 1-form $S$ such that the 1-parametric family of connections $$ \nabla^t_X = \nabla_X + \cosh (t) S_X + \sinh (t) S_{KX}, \,\, X \in TM$$ is flat. It is a para-complex version of the $tt^*$-bundle over a complex manifold defined in the context of quantum field theory in \cite{CV}, \cite{D} and studied from a differential-geometric point of view in \cite{CS}. In \cite{S}, \cite{S1}, \cite{S2}, the author studies properties of the para-$tt^*$-connection $\nabla^t$ on the tangent bundle $E = TM$ of an (almost) para-complex manifold $M$. In particular, he shows that nearly para-K\"ahler and special para-complex structures on $M$ provide para-$tt^*$-connections. It is proved also that a para-$tt^*$-connection which preserves a metric or a symplectic form determines a para-pluriharmonic map $f : M \rightarrow N$ where $N = Sp_{2n}(\mathbb{R})/U^n(C^n)$ or, respectively, $N = SO_{n,n}/U^n(C^n)$ with invariant pseudo-Riemannian metric.
Here $U^n(C^n)$ stands for the para-complex analogue of the unitary group. \subsubsection*{Generalized para-complex structures} Let $\mathcal{T}(M) := TM \oplus T^*M$ be the generalized tangent bundle of a manifold $M$, i.e. the direct sum of the tangent and cotangent bundles, equipped with the natural metric $g$ of signature $(n,n)$, $$ g\big((X, \xi), (X', \xi')\big) := \frac12( \xi(X') + \xi'(X)). $$ The {\bf Courant bracket} on the space $\Gamma(\mathcal{T}(M))$ of sections is defined by $$ [(X,\xi), (X', \xi')] = \left([X,X'], \mathcal{L}_{X}\xi' - \mathcal{L}_{X'}\xi - \frac12 d( \xi'(X) - \xi(X'))\right) $$ where $\mathcal{L}_X$ is the Lie derivative in the direction of a vector field $X$. A maximally $g$-isotropic subbundle $D \subset \mathcal{T}(M)$ is called a {\bf Dirac structure} if its space of sections $\Gamma(D)$ is closed under the Courant bracket.\\ Changing in the definition of the para-complex structure the tangent bundle $TM$ to the generalized tangent bundle $\mathcal{T}(M)$ and the Nijenhuis bracket to the Courant bracket, A. Wade \cite{W} and I.
Vaisman \cite{V} define the notion of a generalized para-complex structure which unifies the notions of symplectic structure, Poisson structure and para-complex structure and is similar to Hitchin's definition of a generalized complex structure (see e.g. \cite{G}, \cite{Hi}).\\ A {\bf generalized para-complex structure} is a field $K$ of involutive skew-symmetric endomorphisms of the bundle $\mathcal{T}(M)$ whose $\pm 1$-eigendistributions $T^\pm$ are closed under the Courant bracket.\\ In other words, it is a decomposition $\mathcal{T}(M) = T^+ \oplus T^-$ of the generalized tangent bundle into a direct sum of two Dirac structures $T^\pm$.\\ Generalized para-complex structures naturally appear in the context of mirror symmetry: a semi-flat generalized complex structure on an $n$-torus bundle with sections over an $n$-dimensional manifold $M$ gives rise to a generalized para-complex structure on $M$, see \cite{Ben-B}. I. Vaisman \cite{V} extends the reduction theorem of Marsden-Weinstein type to generalized complex and para-complex structures and gives a characterization of the submanifolds that inherit an induced structure via the corresponding classical tensor fields. \subsection{Para-hypercomplex (complex product) structures.} An {\bf (almost) para-hypercomplex structure} (or an {\bf almost complex product structure}) on a $2n$-dimensional manifold $M$ is a pair $(J,K)$ of an anticommuting (almost) complex structure $J$ and an (almost) para-complex structure $K$. The product $I = JK$ is another (almost) para-complex structure on $M$. If the structures $J,K$ are integrable, then $I =JK$ is also an (integrable) para-complex structure and the pair $(J,K)$ or the triple $(I,J,K)$ is called a {\bf para-hypercomplex structure}. An (almost) para-hypercomplex structure is similar to an (almost) hypercomplex structure, which is defined as a pair of anticommuting (almost) complex structures.
Like for an almost hypercomplex structure, there exists a canonical connection $\nabla$, called the {\bf Obata connection}, which preserves a given almost para-hypercomplex structure (i.e. such that $\nabla J = \nabla K = \nabla I =0$). The torsion of this connection vanishes if and only if $N_J = N_K= N_I= 0$, that is, $(J,K)$ is a para-hypercomplex structure.\\ At any point $x \in M$ the endomorphisms $I,J,K$ define a standard basis of the Lie subalgebra $\mathfrak{sl}_2(\mathbb{R}) \subset \mathrm{End}(T_xM)$. The conjugation by a (constant) matrix $A \in SL_2(\mathbb{R})$ allows one to associate with an (almost) para-hypercomplex structure $(J,K)$ a 3-parametric family of (almost) para-hypercomplex structures which have the same Obata connection.\\ Let $TM = T^+ \oplus T^-$ be the eigenspace decomposition for the almost para-complex structure $K$. Then the almost complex structure $J$ defines the isomorphism $J : T^+ \rightarrow T^-$ and we can identify the tangent bundle $TM$ with a tensor product $TM = \mathbb{R}^2 \otimes E$ such that the endomorphisms $J,K,I$ act on the first factor $\mathbb{R}^2$ in the standard way: \begin{equation} \label{IJKmatrices} J= \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \\ \end{array} \right),\,\, K= \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right),\,\, I= JK = \left( \begin{array}{cc} 0 & 1 \\ 1& 0 \\ \end{array} \right). \end{equation} Any basis of $E_x$ defines a basis of the tangent space $T_xM$ and the set of such (adapted) bases forms a $GL_n(\mathbb{R})$-structure (that is, a principal $GL_n(\mathbb{R})$-subbundle of the frame bundle of $M$). So one can identify an almost para-hypercomplex structure with a $GL_n(\mathbb{R})$-structure and a para-hypercomplex structure with a 1-integrable $GL_n(\mathbb{R})$-structure, i.e. one which admits a torsion free adapted connection.
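As a quick supplementary check, the matrices in (\ref{IJKmatrices}) satisfy the defining relations of an almost para-hypercomplex triple:
```latex
% J is a complex structure, K and I = JK are para-complex structures:
J^2 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}^2 = -\mathbf{1}, \qquad
K^2 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}^2 = \mathbf{1}, \qquad
I^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}^2 = \mathbf{1},
% and J, K anticommute:
JK = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
= - \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} = -KJ .
% All three matrices are traceless, so they indeed span
% \mathfrak{sl}_2(\mathbb{R}), as stated above.
```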
The basic facts of the geometry of para-hypercomplex manifolds are described in \cite{An1}, where also some examples are considered.\newline Invariant para-hypercomplex structures on Lie groups are investigated in \cite{An} - \cite{AnS}. Algebraically, the construction of left-invariant para-hypercomplex structures on a Lie group $G$ reduces to the decomposition of its Lie algebra $\mathfrak{g}$ into a direct sum of subalgebras $\mathfrak{g}^+, \, \mathfrak{g}^-$ together with the construction of a complex structure $J$ which interchanges $\mathfrak{g}^+, \mathfrak{g}^-$. It is proved in \cite{AnS} that the Lie algebras $\mathfrak{g}^{\pm}$ carry structures of left-symmetric algebras. Applications to the construction of hypercomplex and hypersymplectic (or para-hyperK\"ahler) structures are considered there.\smallskip Connections between a para-hypercomplex structure $(J,K)$ on a $2n$-dimensional manifold $M$, {\bf 3-webs} and special $G$-structures are studied in \cite{MN}. Recall that a $3$-web on a $2n$-dimensional manifold $M$ is a triple $(V_1,V_2,V_3)$ of mutually complementary $n$-dimensional integrable distributions. A para-hypercomplex structure $(J,K)$ on $M$ defines a $3$-web $T^+,T^-,S^+$, where $T^\pm, S^\pm$ are the eigendistributions of the para-complex structures $K$ and $I = JK$. Conversely, let $(V_1,V_2,V_3)$ be a 3-web. Then the decomposition $TM = V_1 + V_2$ defines a para-complex structure $K$ and the distribution $V_3$ is the graph of a canonically defined isomorphism $f : V_1 \rightarrow V_2$, that is, $V_3 = (1 + f)V_1$. The $n$-dimensional distribution $V_4 := (1 - f^{-1})V_2$ gives a direct sum decomposition $TM = V_3 + V_4$ which defines another almost para-complex structure $I$ which anticommutes with $K$. Hence, $(J = IK, K)$ is an almost para-hypercomplex structure.
It is integrable if and only if the distribution $V_4 = (1 - f^{-1})V_2$ associated with the 3-web $(V_1,V_2,V_3)$ is integrable. So, any 3-web defines an almost para-hypercomplex structure, which is a para-hypercomplex structure if the distribution $V_4$ is integrable. \\ The monograph \cite{AkS} contains a detailed exposition of the theory of three-webs, which was started by Bol, Chern and Blaschke and continued by M.~A. Akivis and his school. Relations with the theories of $G$-structures, in particular Grassmann structures, symmetric spaces, algebraic geometry, nomography, quasigroups, non-associative algebras and differential equations of hydrodynamic type are discussed. \subsection{Para-quaternionic structures.} An {\bf almost para-quaternionic structure} on a $2n$-dimensional manifold $M$ is defined by a 3-dimensional subbundle $Q$ of the bundle $\mathrm{End}(TM)$ of endomorphisms which is locally generated by an almost para-hypercomplex structure $(I,J,K)$, i.e. $Q_x = \mathbb{R}I_x + \mathbb{R}J_x + \mathbb{R}K_x$ where $x \in U$ and $U \subset M$ is a domain where $I,J,K$ are defined. If $Q$ is invariant under a torsion free connection $\nabla$ (called a {\bf para-quaternionic connection}) then $Q$ is called a {\bf para-quaternionic structure}. The normalizer of the Lie algebra $\mathfrak{sp}_1(\mathbb{R}) = \mathrm{span}(I_x,J_x,K_x)$ in $GL(T_xM)$ is isomorphic to $Sp_1(\mathbb{R}) \cdot GL_n(\mathbb{R})$. So an almost para-quaternionic structure can be considered as an $Sp_1(\mathbb{R}) \cdot GL_n(\mathbb{R})$-structure, and a para-quaternionic structure corresponds to the case when this $G$-structure is 1-flat. Any para-hypercomplex structure $(I,J,K)$ defines a para-quaternionic structure $Q = \mathrm{span}(I,J,K)$, since the Obata connection $\nabla$ is a torsion free connection which preserves $Q$. The converse claim is not true even locally.
A para-quaternionic structure is generated by a para-hypercomplex structure if and only if it admits a para-quaternionic connection with holonomy group $\mathrm{Hol} \subset GL_n(\mathbb{R})$.\\ Let $TM = H \otimes E$ be an {\bf almost Grassmann structure} of type $(2,n)$, that is, a decomposition of the tangent bundle of a manifold $M$ into a tensor product of a 2-dimensional vector bundle $H$ and an $n$-dimensional bundle $E$. A non-degenerate 2-form $\omega^H$ in the bundle $H$ defines an almost para-quaternionic structure $Q$ in $M$ as follows. For any symplectic basis $(h_-,h_+)$ of a fibre $H_x$ we define $I_x = I^H \otimes 1,\, J_x = J^H \otimes 1,\, K_x = K^H \otimes 1$ where $I^H, \, J^H, \, K^H$ are the endomorphisms of $H_x$ which in the basis $h_-, h_+$ are represented by the matrices (\ref{IJKmatrices}). Then, for $x \in M$, $Q_x$ is spanned by $I_x,J_x,K_x$. \subsection{Almost para-Hermitian and para-Hermitian structures.} Like in the complex case, combining an (almost) para-complex structure $K$ with a ``compatible'' pseudo-Riemannian metric $g$ we get an interesting class of geometric structures. The natural compatibility condition is the Hermitian condition, which means that the endomorphism $K$ is skew-symmetric with respect to $g$. A pair $(g,K)$ which consists of a pseudo-Riemannian metric $g$ and a skew-symmetric (almost) para-complex structure $K$ is called an {\bf (almost) para-Hermitian structure}. An almost para-Hermitian structure on a $2n$-dimensional manifold can be identified with a $GL_n(\mathbb{R})$-structure. The para-Hermitian metric $g$ necessarily has the neutral signature $(n,n)$. A para-Hermitian manifold $(M,g,K)$ admits a unique connection (called the {\bf para-Bismut connection}) which preserves $g,K$ and has skew-symmetric torsion.\\ The {\bf K\"ahler form} $\omega := g \circ K$ of an almost para-Hermitian manifold is not necessarily closed.
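Though closedness of $\omega$ can fail, it is always a non-degenerate 2-form; we add the one-line check, which uses only the $g$-skew-symmetry of $K$:
```latex
% \omega = g \circ K is skew-symmetric because K is g-skew-symmetric:
\omega(X,Y) = g(KX,Y) = -g(X,KY) = -g(KY,X) = -\omega(Y,X),
\qquad X,Y \in TM .
% Moreover \omega is non-degenerate, since g is non-degenerate and
% K is invertible (K^2 = 1).
```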
\\ A special class of para-Hermitian manifolds and their submanifolds is studied in \cite{KK}. In the paper \cite{AK}, the authors classify self-dual 4-dimensional para-Hermitian manifolds with constant scalar curvature and parallel Lee form which satisfy some extra conditions.\\ In \cite{GM1}, the authors describe a decomposition of the space of $(0,3)$-tensors with the symmetry of the covariant derivative $\nabla^g \omega$ (where $\nabla^g$ is the Levi-Civita connection of $g$) into irreducible subspaces with respect to the natural action of the structure group $GL_n(\mathbb{R})$. It gives an important classification of possible types of almost para-Hermitian structures, which is an analogue of the Gray-Hervella classification of Hermitian structures. Such special classes of almost para-Hermitian manifolds are {\bf almost para-K\"ahler manifolds} (the K\"ahler form $\omega$ is closed), {\bf nearly para-K\"ahler manifolds} ($\nabla^g\omega$ is a 3-form) and {\bf para-K\"ahler manifolds} ($\nabla^g \omega=0$). \smallskip\par Almost para-Hermitian and almost para-K\"ahler structures naturally arise on the cotangent bundle $T^*M$ of a pseudo-Riemannian manifold $(M,g)$. The Levi-Civita connection defines a decomposition $T_\xi(T^*M) = T_\xi^{vert}(T^*M) + H_\xi$ of the tangent bundle into vertical and horizontal subbundles. This gives an almost para-complex structure $K$ on $T^*M$. The natural identification $T^{vert}_{\xi}(T^*M) = T^*_xM \simeq T_xM = H_{\xi}$, where $\xi \in T^*_xM$, allows one to define also a compatible metric on $T^*M$, that is, an almost para-Hermitian structure, which is studied in \cite{OPM}. Also the above identification defines an almost complex structure $J$ which anticommutes with $K$. It allows one to define an almost para-hyperHermitian structure on $TM$ (i.e. a pseudo-Riemannian metric together with two skew-symmetric anticommuting almost para-complex structures), studied in \cite{IV}.
In \cite{BCGHV}, the authors consider almost para-Hermitian manifolds with pointwise constant para-holomorphic sectional curvature, which is defined as in the Hermitian case. They characterize these manifolds in terms of the curvature tensor and prove that the Schur lemma is not valid, in general, for an almost para-Hermitian manifold.\\ A $(1,2)$-tensor field $T$ on an almost para-Hermitian manifold $(M,g,K)$ is called a {\bf homogeneous structure} if the connection $\nabla := \nabla^g - T$ preserves the tensors $g, K, T$ and the curvature tensor $R$ of the metric $g$. In \cite{GO}, the authors characterize reductive homogeneous manifolds with an invariant almost para-Hermitian structure in terms of a homogeneous structure $T$ and give a classification of possible types of such tensors $T$.\\ Left-invariant para-Hermitian structures on semidirect and twisted products of Lie groups have been constructed and studied in the papers \cite{O1}, \cite{O2}, \cite{O3}, \cite{O}.\\ $CR$-submanifolds of almost para-Hermitian manifolds are studied in \cite{EFT}. Four-dimensional compact almost para-Hermitian manifolds are considered in \cite{Ma}. The author decomposes these manifolds into three families and establishes some relations between the Euler characteristic and the Hirzebruch indices of such manifolds.\\ Harmonic maps between almost para-Hermitian manifolds are considered in \cite{BB}.
In particular, the authors show that a map $f : M \rightarrow N$ between Riemannian manifolds is totally geodesic if and only if the induced map $df : TM \rightarrow TN$ of the tangent bundles equipped with the canonical almost para-Hermitian structures is para-holomorphic.\\ In \cite{Kon} it is proved that the symplectic reduction of an almost para-K\"ahler manifold $(M,g,K)$ under the action of a symmetry group $G$ which admits a momentum map $\mu : M \rightarrow \mathfrak{g}^*$ provides an almost para-K\"ahler structure on the reduced symplectic manifold $\mu^{-1}(0)/G$. \\ An almost para-K\"ahler manifold $(M,g, K)$ can be described in terms of symplectic geometry as a symplectic manifold $(M, \omega = g \circ K)$ with a bi-Lagrangian splitting $TM = T^+ \oplus T^-$, i.e. a decomposition of the tangent bundle into a direct sum of two Lagrangian (in general non-integrable) distributions $T^\pm$. An almost para-K\"ahler manifold has a canonical symplectic connection $\nabla$ which preserves the distributions $T^\pm$, defined by $$ \begin{array}{l} \nabla_{X^{\pm}}Y^{\pm} = \frac12 \omega^{-1}(\mathcal{L}_{X^{\pm}} (\omega \circ Y^{\pm}))\,,\\[5pt] \nabla_{X^+}Y^-= \mathrm{pr}_{T^-}[X^+,Y^-]\,,\quad \nabla_{X^-}Y^+ = \mathrm{pr}_{T^+}[X^-,Y^+] \end{array} $$ for vector fields $X^{\pm},Y^{\pm} \in \Gamma(T^{\pm})$. Note that the torsion $T$ of this connection satisfies the condition $T(T^+, T^-)=0$. If the distribution $T^+$ is integrable, then $T(T^+,T^+)=0$ and the curvature $R$ satisfies the condition $R(T^+,T^+)=0$. So the connection $\nabla$ defines a flat torsion free connection $\nabla^L$ (which depends only on $(\omega,T^+)$) on any leaf $L$ of the integrable distribution $T^+$. So the leaf $L$ has a canonical (flat) affine structure. Indeed, if $X^+ \in \Gamma(T^+)$ is a symplectic vector field, i.e.
$\mathcal{L}_{X^+} \omega =0$, then $$\nabla^L_{X^+}Y^+= \nabla_{X^+}Y^+ = \frac{1}{2}[X^+, Y^+], \quad \forall\, Y^+ \in \Gamma(T^+).$$ Since symplectic fields tangent to $T^+$ span the space $\Gamma(T^+)$ of vector fields tangent to $T^+$, $\nabla^L$ is a well defined flat connection on $L$. Moreover, one can easily check that for symplectic commuting vector fields $X^+,Y^+ \in \Gamma(T^+)$ and any $Z \in \Gamma(T^-)$, the following holds \begin{eqnarray*} R(X^+,Y^+)Z &=& \nabla_{X^+}[Y^+,Z]_{T^-}- \nabla_{Y^+}[X^+,Z]_{T^-} \\[2pt] &=& [X^+,[Y^+,Z]_{T^-}]_{T^-}- [Y^+,[X^+,Z]_{T^-}]_{T^-} \\ &=& [X^+,[Y^+,Z]]_{T^-}- [Y^+,[X^+,Z]]_{T^-} =0\,, \end{eqnarray*} which shows that $R(T^+,T^+)=0$. Here $X_{T^-}$ denotes the projection of $X \in TM$ onto $T^-$. Let $f_1, \ldots, f_n$ be independent functions which are constant on the leaves of $T^+$. Then the leaves of $T^+$ are the level sets $f_1=c_1, \ldots, f_n= c_n$ and the Hamiltonian vector fields $X_i:= \omega^{-1}\circ df_i$ commute and form a basis of parallel tangent fields along any leaf $L$. From the formula for the Poisson bracket it follows that $$ \{f_i,f_j \} = \omega^{-1}(df_i, df_j) =0 = df_i(X_j) = X_j \cdot f_i\,. $$ By a classical theorem of A. Weinstein \cite{Wei} any Lagrangian foliation $\mathcal{T}^+$ is locally equivalent to the cotangent bundle fibration $T^*N \rightarrow N$. More precisely, let $N$ be a Lagrangian submanifold of the symplectic manifold $(M,\omega)$ transversal to the leaves of an integrable Lagrangian distribution $\mathcal{T}^+$. Then there is an open neighborhood $M'$ of $N$ in $M$ such that $(M', \omega|_{M'}, T^+|_{M'})$ is equivalent to a neighborhood $V$ of the zero section $Z_N \subset T^*N$ equipped with the standard symplectic structure $\omega_{st}$ of $T^*N$ and the Lagrangian foliation induced by $T^*N \rightarrow N$.
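The commutativity of the Hamiltonian fields $X_i$ used above follows from the standard identity relating commutators and Poisson brackets (a routine supplement; the sign in the identity depends on the chosen convention, but is irrelevant here since the bracket vanishes):
```latex
% For Hamiltonian vector fields X_f = \omega^{-1} \circ df one has,
% up to the sign convention for the Poisson bracket,
[X_{f_i}, X_{f_j}] = X_{\{f_i, f_j\}} ,
% so \{f_i, f_j\} = 0 (the f_k are constant on the Lagrangian leaves) gives
[X_i, X_j] = 0 .
```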
In \cite{V1} a condition for $(M,\omega, T^+)$ to be globally equivalent to the cotangent bundle $(T^*N, \omega_{st})$ is given.\\ Let $(M = T^*N, \omega_{st})$ be the cotangent bundle of a manifold $N$ with the standard symplectic structure $\omega_{st}$ and the standard integrable Lagrangian distribution $T^+$ defined by the projection $T^*N \rightarrow N$. The Lagrangian submanifolds transversal to $T^+$ are the graphs $\Gamma_\xi = \{(x, \xi_x)\}$ of closed 1-forms $\xi \in \Omega^1(N)$. The horizontal distribution $T^-= H^{\nabla} \subset TM$ of a torsion free linear connection $\nabla$ on $N$ is a Lagrangian distribution complementary to $T^+$ \cite{Mo1}. Hence any torsion free connection defines an almost para-K\"ahler structure $(\omega_{st}, T^+, T^- = H^{\nabla})$. Note that the distribution $T^+$ is integrable, while the distribution $T^{-} = H^{\nabla}$ is integrable if and only if the connection $\nabla$ is flat. Such a structure is called a {\bf half integrable almost para-K\"ahler structure}. An application of such structures to the construction of Lax pairs in Lagrangian dynamics is given in \cite{CI}. In \cite{IZ}, the authors define several canonical para-Hermitian connections on an almost para-Hermitian manifold and use them to study properties of 4-dimensional para-Hermitian and 6-dimensional nearly para-K\"ahler manifolds. In particular, they show that a nearly para-K\"ahler 6-manifold is Einstein and a priori may be Ricci flat, and that the Nijenhuis tensor $N_K$ is parallel with respect to the canonical connection. They also prove that the Kodaira-Thurston surface and the Inoue surface admit a hyper-para-complex structure. The corresponding para-hyperHermitian structures are locally (but not globally) conformally para-hyperK\"ahler.
\subsection{Para-K\"ahler (bi-Lagrangian or Lagrangian 2-web) structures.} A survey of results on the geometry of para-K\"ahler manifolds is given in \cite{EST}. Here we review mostly results which are not covered in this paper. Recall that an almost para-Hermitian manifold $(M,g,K)$ is {\bf para-K\"ahler} if the Levi-Civita connection $\nabla^g$ preserves $K$ or, equivalently, its holonomy group $\mathrm{Hol}_x$ at a point $x \in M$ preserves the eigenspace decomposition $T_xM = T^+_x + T^-_x$. The parallel eigendistributions $T^\pm$ of $K$ are $g$-isotropic integrable distributions. Moreover, they are Lagrangian distributions with respect to the K\"ahler form $\omega = g \circ K$, which is parallel and, hence, closed. The leaves of these distributions are totally geodesic submanifolds and they are flat with respect to the induced connection, see section 2.4. \\ A pseudo-Riemannian manifold $(M,g)$ admits a para-K\"ahler structure $(g,K)$ if the holonomy group $\mathrm{Hol}_x$ at a point $x$ preserves two complementary $g$-isotropic subspaces $T^\pm_x$. Indeed, the associated distributions $T^\pm$ are parallel and define the para-complex structure $K$ with $K|_{T^\pm} = \pm 1$.\newline Let $(M,g)$ be a pseudo-Riemannian manifold. In general, if the holonomy group $\mathrm{Hol}_x$ preserves two complementary invariant subspaces $V_1,V_2$, then a theorem by L. Berard-Bergery \cite{Kr} shows that there are two complementary invariant $g$-isotropic subspaces $T^\pm_x$ which define a para-K\"ahler structure $(g,K)$ on $M$.\\ Many local results of K\"ahler geometry remain valid for para-K\"ahler manifolds, see section 4. The curvature tensor of a para-K\"ahler manifold belongs to the space $\mathcal{R}(\mathfrak{gl}_n(\mathbb{R}))$ of $\mathfrak{gl}_n(\mathbb{R})$-valued 2-forms which satisfy the Bianchi identity.
This space decomposes into a sum of three irreducible $\mathfrak{gl}_n(\mathbb{R})$-invariant subspaces. In particular, the curvature tensor $R$ of a para-K\"ahler manifold has the canonical decomposition $$ R = c R_1 + R_{\mathrm{ric}^0} + W $$ where $R_1$ is the curvature tensor of constant para-holomorphic curvature 1 (defined as the sectional curvature in the para-holomorphic direction $(X, KX)$), $R_{\mathrm{ric}^0}$ is the tensor associated with the trace free part $\mathrm{ric}^0$ of the Ricci tensor $\mathrm{ric}$ of $R$ and $W \in \mathcal{R}(\mathfrak{sl}_n(\mathbb{R}))$ is the para-analogue of the Weyl tensor, which has zero Ricci part. Para-K\"ahler manifolds with constant para-holomorphic curvature (and the curvature tensor proportional to $R_1$) are called {\bf para-K\"ahler space forms}. They were defined in \cite{GM} and studied in \cite{SG}, \cite{Er}, \cite{EF}. Like in the K\"ahler case, there exists a unique (up to an isometry) simply connected complete para-K\"ahler manifold $M_k$ of constant para-holomorphic curvature $k$. It can be described as the projective space over the para-complex numbers. Any complete para-K\"ahler manifold of constant para-holomorphic curvature $k$ is a quotient of $M_k$ by a discrete group of isometries acting in a properly discontinuous way.\\ Different classes of submanifolds of a para-K\"ahler manifold are studied in the following papers:\\ \cite{AB} (para-complex submanifolds), \cite{EF}, \cite{Ro}, \cite{Do} ($CR$-submanifolds), \cite{Ri} (anti-holomorphic totally umbilical submanifolds), \cite{IR} (special totally umbilical submanifolds).
\subsubsection{Symmetric and homogeneous para-K\"ahler manifolds.} A para-K\"ahler manifold $(M,g,K)$ is called {\bf symmetric} if there exists a {\bf central symmetry} $S_x$ with center at any point $x \in M$, that is, an involutive isometry which preserves $K$ and has $x$ as an isolated fixed point. The connected component $G$ of the group generated by the central symmetries (called the {\bf group of transvections}) acts transitively on $M$ and one can identify $M$ with the coset space $M = G/H$ where $H$ is the stabilizer of a point $o \in M$. It is known, see \cite{A}, that a para-K\"ahler (and, more generally, a pseudo-Riemannian) symmetric space with non-degenerate Ricci tensor has a semisimple group of isometries. All such symmetric manifolds are known. In the Ricci flat case, the group generated by transvections is solvable and the connected holonomy group is nilpotent \cite{A}.\\ The classification of simply connected symmetric para-K\"ahler manifolds reduces to the classification of {\bf 3-graded Lie algebras} $$\mathfrak{g} = \mathfrak{g}^{-1} + \mathfrak{g}^0 + \mathfrak{g}^1,\,\, [\mathfrak{g}^{i}, \mathfrak{g}^j] \subset \mathfrak{g}^{i+j}$$ such that the $\mathfrak{g}^0$-modules $\mathfrak{g}^{-1}$ and $\mathfrak{g}^1$ are contragredient (dual).\\ Then $\mathfrak{g} = \mathfrak{h}+ \mathfrak{m}= (\mathfrak{g}^0) + (\mathfrak{g}^{-1}+ \mathfrak{g}^1)$ is a symmetric decomposition, the $\mathrm{ad}_{\mathfrak{h}}$-invariant pairing $\mathfrak{g}^{-1} \times \mathfrak{g}^1 \rightarrow \mathbb{R}$ defines an invariant metric on the associated homogeneous manifold $M = G/H$ and the $\mathrm{ad}_{\mathfrak{h}}$-invariant subspaces $\mathfrak{g}^{\pm 1} \subset \mathfrak{m} \simeq T_oM$ define the eigendistributions of an invariant para-complex structure $K$.
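A minimal example of such a 3-grading, added here for illustration (it is implicit in the references above), is $\mathfrak{sl}_2(\mathbb{R})$ graded by the eigenvalues of $\mathrm{ad}$ of a semisimple element:
```latex
% 3-grading of \mathfrak{g} = \mathfrak{sl}_2(\mathbb{R}) with the
% standard basis h, e, f:
\mathfrak{g}^{-1} = \mathbb{R}f, \quad
\mathfrak{g}^{0} = \mathbb{R}h, \quad
\mathfrak{g}^{1} = \mathbb{R}e,
\qquad [h,e] = 2e, \quad [h,f] = -2f, \quad [e,f] = h .
% ad_{h/2} acts on \mathfrak{g}^i with eigenvalue i, and the restriction
% of the Killing form gives a non-degenerate ad_{\mathfrak{g}^0}-invariant
% pairing \mathfrak{g}^{-1} \times \mathfrak{g}^1 \to \mathbb{R}, so
% \mathfrak{g}^{\pm 1} are contragredient \mathfrak{g}^0-modules.
```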
Symmetric para-K\"ahler spaces of a semisimple Lie group $G$ were studied and classified by S. Kaneyuki {\gamma}ite{K1}, {\gamma}ite{K2}, {\gamma}ite{KKoz}.\\ $3$-graded Lie algebras are closely related with Jordan pairs and Jordan algebras, see {\gamma}ite{Be}, and have different applications. In {\gamma}ite{A}, a construction of some class of Ricci flat para-K\"ahler symmetric spaces of nilpotent Lie groups is described and the classification of Ricci flat para-K\"ahler symmetric spaces of dimension 4 and 6 is obtained.\\ Some classes of para-K\"ahler manifolds which are generalizations of symmetric para-K\"ahler manifolds are defined and studied in {\gamma}ite{DDW} and {\gamma}ite{DDW1}. Homogeneous para-K\"ahler manifolds of a semisimple Lie group had been studied by S. Kaneyuki and his collaborators, see {\gamma}ite{HDK}, {\gamma}ite{HDKN}. In particular, they prove that a homogeneous manifold $M = G/H$ of a semisimple Lie group $G$ admits an invariant para-K\"ahler structure if and only if it is a cover of the adjoint orbit of a semisimple element of the Lie algebra ${\mu}athfrak{g}$ of $G$. A description of invariant para-K\"ahler structures on homogeneous manifolds of the normal real form of a complex semisimple Lie group was given in {\gamma}ite{Hou}, {\gamma}ite{HD}. An explicit description of all invariant para-K\"ahler structures on homogeneous manifolds of a semisimple Lie group had been given in {\gamma}ite{AM}, see section 6.\\ Berezin quantization on para-K\"ahler symmetric spaces of a semisimple Lie group had been studied in {\gamma}ite{M}, {\gamma}ite{MV}, {\gamma}ite{MV1}. Kostant quantization of a general symplectic manifold with a bi-Lagrangian structure (that is a para-K\"ahler manifold) is considered in {\gamma}ite{Hess}. 
\subsubsection{Special para-K\"ahler manifolds, $tt^*$-bundles and affine hyperspheres.} The notion of a {\bf special para-K\"ahler structure} on a manifold $M$ was defined in the important paper \cite{CMMS1}. It is an analogue of the notion of a special K\"ahler structure and is defined as a para-K\"ahler structure $(g,K)$ together with a flat torsion-free connection $\nabla$ which preserves the K\"ahler form $\omega= g \circ K$ and satisfies the condition $(\nabla_X K)Y = (\nabla_Y K)X$ for $X,Y \in TM$. It was shown in \cite{CMMS1}, \cite{CMMS2} that the target space for scalar fields in 4-dimensional Euclidean $N=2$ supersymmetry carries a special para-K\"ahler structure, similar to the special K\"ahler structure which arises on the target space of scalar fields for $N=2$ Lorentzian 4-dimensional supersymmetry. This gives an explanation of the formal construction of \cite{GGP}, where the authors obtain the Euclidean supergravity action by replacing in the Lagrangian the imaginary unit $i$ by a symbol $e$ with $e^2 =1$. Besides physical applications, \cite{CMMS1} contains an exposition of basic results of para-complex and para-K\"ahler geometry in para-holomorphic coordinates.\\ In \cite{CLS}, the authors construct a canonical immersion of a simply connected special para-K\"ahler manifold into the affine space $\mathbb{R}^{n+1}$ as a parabolic affine hypersphere. Any non-degenerate para-holomorphic function defines a special para-K\"ahler manifold, and the corresponding affine hypersphere is explicitly described. It is also shown that any conical special para-K\"ahler manifold is foliated by proper affine hyperspheres of constant affine and pseudo-Riemannian mean curvature. Special para-K\"ahler structures are closely related to para-$tt^*$-bundles, \cite{S1,S2,S}.
\subsection{Para-hyperK\"ahler (hypersymplectic) structures and para-hyperK\"ahler structures with torsion (PHKT-structures).} A {\bf para-hyperK\"ahler} ({\bf hypersymplectic}) structure on a $4n$-dimensional manifold $M$ can be defined as a pseudo-Riemannian metric $g$ together with a $\nabla^g$-parallel $g$-skew-symmetric para-hypercomplex structure $(J,K)$. A pseudo-Riemannian metric $g$ admits a para-hyperK\"ahler structure if its holonomy group is a subgroup of the symplectic group $Sp_n(\mathbb{R}) \subset SL_{4n}(\mathbb{R})$. A para-hypercomplex structure $(J,K)$ defines a para-hyperK\"ahler structure if and only if its (torsion-free) Obata connection $\nabla$ preserves a metric $g$. In other words, the holonomy group of $\nabla$ (which is in general a subgroup of the group $GL(E) \simeq GL_{2n}(\mathbb{R})$ acting on the second factor of the canonical decomposition $T_xM = \mathbb{R}^2 \otimes E$) must preserve a non-degenerate 2-form $\omega^E$, hence be a subgroup of $Sp(E, \omega^E) \simeq Sp_{n}(\mathbb{R})$. The metric $g$ of a para-hyperK\"ahler structure has signature $(2n, 2n)$. In this case the $2$-form $\omega^E$, together with the volume $2$-form $\omega^H$ on the trivial bundle $\mathbb{R}^2 \times M \rightarrow M$, defines the metric $g = \omega^H \otimes \omega^E$ on $TM = \mathbb{R}^2 \otimes E$ such that $(g, J,K)$ is a para-hyperK\"ahler structure.\\ A para-hyperK\"ahler structure $(g,J,K)$ can also be described in symplectic terms as follows. Note that $\omega_1 := g \circ(JK),\, \omega_2 := g \circ J, \, \omega_3 = g \circ K$ are three parallel symplectic structures and the associated three fields of endomorphisms $$ K= \omega_1^{-1} \circ \omega_2, \quad -I= \omega_2^{-1} \circ \omega_3,\quad -J= \omega_3^{-1} \circ \omega_1 $$ define a para-hypercomplex structure $(I =JK,J,K)$.
Conversely, three symplectic structures $\omega_1, \omega_2, \omega_3$ such that the associated three fields of endomorphisms $I,J,K$ form an almost para-hypercomplex structure $(I,J,K)$ define a para-hyperK\"ahler structure $(g = \omega_1 \circ I , J,K)$, see \cite{AnS}. In the hyperK\"ahler case this claim is known as the Hitchin Lemma. Due to this, para-hyperK\"ahler manifolds are also called {\bf hypersymplectic manifolds}.\newline The metric of a hypersymplectic manifold is Ricci-flat and the space of curvature tensors can be identified with the space $S^4E$ of homogeneous polynomials of degree four. \\ In contrast to the hyperK\"ahler case, there are homogeneous and even symmetric hypersymplectic manifolds. However, the isometry group of a symmetric hypersymplectic manifold is solvable. In \cite{ABCV}, a classification of simply connected symmetric para-hyperK\"ahler manifolds with commutative holonomy group is presented. Such manifolds are defined by homogeneous polynomials of degree 4.\newline In \cite{FPPW}, the authors construct hypersymplectic structures on some Kodaira manifolds. Many other interesting examples of left-invariant hypersymplectic structures on solvable Lie groups were given in \cite{An}, \cite{AnS}, \cite{AD} and \cite{ADBO}, where all such structures on 4-dimensional Lie groups were classified. Under some additional assumptions, such a classification is also given in \cite{BV}.\newline In \cite{DS}, the authors construct and study hypersymplectic spaces obtained as quotients of the flat hypersymplectic space $\mathbb{R}^{4n}$ by the action of a compact abelian group. In particular, conditions for smoothness and non-degeneracy of such quotients are given. A natural generalization of a para-hyperK\"ahler structure is a {\bf para-hyperK\"ahler structure with torsion (PHKT-structure)}, defined and studied in \cite{ITZ}.
It is defined as a pseudo-Riemannian metric $g$ together with a skew-symmetric para-hypercomplex structure $(J,K)$ such that there is a connection $\nabla$ which preserves $g,J,K$ and has skew-symmetric torsion tensor $T$, i.e. such that $g \circ T$ is a 3-form. The structure is called {\bf strong} if the 3-form $g \circ T$ is closed. The authors show that locally such a structure is defined by a real function (potential) and construct examples of strong and non-strong PHKT-structures. \subsection{Para-quaternionic K\"ahler structures and para-quaternionic K\"ahler with torsion (PQKT) structures.} A para-quaternionic K\"ahler structure on a $4n$-dimensional manifold $M^{4n}$ can be defined as a pseudo-Riemannian metric $g$ (of neutral signature $(2n,2n)$) with holonomy group in $Sp_1(\mathbb{R}) \cdot Sp_n(\mathbb{R})$. This means that the Levi-Civita connection $\nabla^g$ preserves a para-quaternionic structure $Q$. This implies that the metric $g$ is Einstein. Moreover, the scalar curvature ${\hbox{\it scal}}$ is zero if and only if the restricted holonomy group is in $Sp_n(\mathbb{R})$ and the induced connection in $Q$ is flat. \\ We will assume that ${\hbox{\it scal}} \neq 0$. Then the holonomy group contains $Sp_1(\mathbb{R})$ and it is irreducible. Examples of para-quaternionic K\"ahler manifolds are para-quaternionic K\"ahler symmetric spaces. Any para-quaternionic K\"ahler symmetric space is a homogeneous manifold $M = G/H$ of a simple Lie group $G$ of isometries. The classification of simply connected para-quaternionic K\"ahler symmetric spaces reduces to the description of the $\mathbb{Z}$-gradations of the corresponding real simple Lie algebras of the form $$ \mathfrak{g} = \mathfrak{g}^{-2} + \mathfrak{g}^{-1}+ \mathfrak{g}^0 + \mathfrak{g}^1 + \mathfrak{g}^2 $$ with $ \dim \mathfrak{g}^{\pm 2}=1$. Such gradations can be easily determined, see section 6.
The classification of para-quaternionic K\"ahler symmetric spaces based on the twistor approach is given in \cite{DJS}. In this paper, the authors define the {\bf twistor space} $Z(M)$, the {\bf para-3-Sasakian bundle} $S(M)$ and the {\bf positive Swann bundle} $U^+(M)$ of a para-quaternionic K\"ahler manifold $(M,g,Q)$ and study the induced geometric structures. They assume that $M$ is real analytic and consider a (locally defined) holomorphic Grassmann structure on the complexification $M^{\mathbb{C}}$ of $M$, that is, an isomorphism $TM^{\mathbb{C}} \simeq E \otimes H$ of the holomorphic tangent bundle $TM^{\mathbb{C}}$ with the tensor product of two holomorphic vector bundles $E,H$ of dimension $2n$ and $2$. The {\bf twistor space} is defined as the projectivization $Z(M) := PH$ of the bundle $H$, which can be identified with the space of (totally geodesic) $\alpha$-surfaces in $M^{\mathbb{C}}$. The authors prove that $Z(M)$ carries a natural holomorphic contact form $\theta$ and a real structure, and give a characterization of the twistor spaces $Z = Z(M)$ as $(2n+1)$-dimensional complex manifolds with a holomorphic contact form and a real structure which satisfy some conditions.\\ This is a specification of a more general construction \cite{BE} of the twistor space of a manifold with a {\bf holomorphic Grassmann structure} (also called a {\bf complex para-conformal structure}).\\ The {\bf para-3-Sasakian bundle} associated with $(M,g,Q)$ is the principal $SL_2(\mathbb{R})$-bundle $S(M) \rightarrow M$ of the standard frames $s = (I,J,K)$ of the quaternionic bundle $Q \subset \hbox{\rm End}(TM)$. It has a natural metric $g_S$ defined by the metric $g$ and the standard bi-invariant metric on $SL_2(\mathbb{R})$.
The standard generators of $SL_2(\mathbb{R})$ define three Killing vector fields $\xi_{\alpha}, \, \alpha =1,2,3$, with square norms $1,-1,1$, which define a {\bf para-3-Sasakian structure}. This means that the curvature operator satisfies $R(X, \xi_{\alpha}) = X \wedge \xi_{\alpha}$ for $X \in TM$. This is equivalent to the condition that the cone metric $g_U = dr^2 + r^2 g$ on the cone $U^+(M) = M \times \mathbb{R}^+$ is hypersymplectic, i.e. admits a $\nabla^{g_U}$-parallel para-hypercomplex structure $I_U,J_U,K_U$. The bundle $U^+(M) \rightarrow M$ is called the {\bf positive Swann bundle}.\smallskip Let $G^{\mathbb{C}}$ be the complexification of a real non-compact semisimple Lie group $G$ and $\mathcal{O}$ an adjoint nilpotent orbit of $G^{\mathbb{C}}$. \\ In \cite{DJS}, it is proved that the projectivization $\mathbb{P}\mathcal{O}$ is the twistor space of a para-quaternionic K\"ahler manifold $M$ if and only if $\mathcal{O}$ is the complexification of a nilpotent orbit of $G$ in the real Lie algebra $\mathfrak{g}$. In this case, $G$ acts transitively on $M$ preserving the para-quaternionic K\"ahler structure. Using the hypersymplectic momentum map of $U^+(M)$, the authors prove that, if a semisimple isometry group $G$ of a para-quaternionic K\"ahler manifold $(M,g,Q)$ acts transitively on the twistor space $Z(M)$, then the corresponding orbit $\mathcal{O}$ is the orbit of the highest root vector and the manifold $M$ is symmetric. This leads to the classification of para-quaternionic K\"ahler symmetric spaces of a semisimple Lie group $G$, hence of all para-quaternionic K\"ahler symmetric spaces of non-zero scalar curvature, due to the following result:\\ Any para-quaternionic K\"ahler or pseudo-quaternionic K\"ahler symmetric space with non-zero scalar curvature ${\hbox{\it scal}}$ is a homogeneous space of a simple Lie group.
\\ This result was proven in \cite{AC1}, where a classification of pseudo-quaternionic K\"ahler symmetric spaces with ${\hbox{\it scal}} \neq 0$ is given in terms of Kac diagrams. In \cite{Bl} and \cite{BDM}, the authors define two twistor spaces $Z^+(M)$ and $Z^-(M)$ of a para-quaternionic K\"ahler manifold $(M,g,Q)$ of dimension $4n$. They consist of all para-complex structures $K \in Q, \, K^2 = 1$ and, respectively, complex structures $J \in Q,\, J^2 =-1$ of the quaternionic bundle $Q$. The authors define a natural almost para-complex, respectively almost complex, structure on $Z^+(M)$ and, respectively, $Z^-(M)$ and prove their integrability in the case $n >1$. In \cite{AC}, the geometry of these twistor spaces is studied using the theory of $G$-structures. It is proved that the natural horizontal distribution on $Z^{\pm}(M)$ is a para-holomorphic or, respectively, holomorphic contact distribution and that the manifolds $Z^{\pm}(M)$ carry two canonical Einstein metrics, one of which is (para-)K\"ahler. A twistor description of (automatically minimal) K\"ahler and para-K\"ahler submanifolds of $M$ is given.\\ A {\bf para-quaternionic K\"ahler manifold with torsion} is a pseudo-Riemannian manifold $(M,g)$ with a skew-symmetric almost para-quaternionic structure $Q$ which admits a metric connection with skew-symmetric torsion preserving $Q$.\\ Such manifolds are studied in \cite{Z}. In particular, it is proved that this notion is invariant under conformal transformations of the metric $g$. \subsection{Para-CR structures and para-quaternionic CR structures (para-3-Sasakian structures).} A {\bf weak almost para-$CR$ structure of codimension $k$} on an $(m+k)$-dimensional manifold $M$ is a pair $(HM,K)$, where $HM \subset TM$ is a rank $m$ distribution and $K \in {\rm End}(HM)$ is a field of endomorphisms such that $K^2=\hbox{\rm id}$ and $K\neq\pm\hbox{\rm id}$.
If $m =2n$ and the $\pm 1$-eigendistributions $H^{\pm}$ of $K$ have rank $n$, the pair $(HM,K)$ is called an almost para-$CR$ structure.\newline A (weak) almost para-CR structure is said to be a ({\bf weak}) {\bf para-CR structure} if it is {\rm integrable}, that is, the eigendistributions $H^{\pm}$ are involutive or, equivalently, the following conditions hold: \begin{eqnarray*} &{}&[KX,KY]+[X,Y]\in\Gamma(HM)\,,\label{integrability1}\\[3pt] &{}& S(X,Y):=[X,Y]+[KX,KY] - K([X,KY]+[KX,Y])=0\label{integrability2} \end{eqnarray*} for all $X,\,Y\in\Gamma(HM)$.\newline In \cite{AMT} and \cite{AMT1}, a description of maximally homogeneous weak para-CR structures of semisimple type in terms of fundamental gradations of real semisimple Lie algebras is given.\\ Codimension one para-CR structures arise naturally on generic hypersurfaces of a para-complex manifold $N$, in particular, on hypersurfaces in a Cartesian square $N =X \times X$ of a manifold $X$. \\ Consider, for example, a second order ODE $\ddot y = F(x,y,\dot y)$. Its general solution $y = f(x,y, X,Y)$ depends on two free parameters $X,Y$ (constants of integration) and determines a hypersurface $M$ in the space $\R^2 \times \R^2 = \{(x,y,X,Y) \}$ with the natural para-complex structure $K$ invariant under point transformations. The induced para-$CR$ structure on the space $M$ of solutions plays an important role in the geometric theory of ODEs developed in \cite{NS}, where a para-analogue of the Fefferman metric on a bundle over $M$ is constructed and a notion of duality of ODEs is introduced and studied.
In \cite{AKam}, the notion of a {\bf para-quaternionic} (respectively, {\bf quaternionic}) {\bf CR structure} on a $(4n+3)$-dimensional manifold $M$ is defined as a triple $(\omega_1, \omega_2, \omega_3)$ of 1-forms such that the associated 2-forms $$ \rho^1 = d\omega_1 - 2 \epsilon \,\omega_2 \wedge \omega_3\,,\quad \rho^2 = d\omega_2 + 2\omega_3 \wedge \omega_1\,, \quad\rho^3 = d\omega_3 + 2 \omega_1 \wedge \omega_2\,, $$ (where $\epsilon = 1$ in the para-quaternionic case and $\epsilon =-1$ in the quaternionic case) are non-degenerate on the subbundle $H =\cap_{\alpha =1}^3 \hbox{\rm Ker}\, \omega_{\alpha}$ of codimension three and the endomorphisms $$ J_1 = - \epsilon(\rho^3_H )^{-1} \circ \rho^2_H\,,\quad J_2 = (\rho^1_H )^{-1} \circ \rho^3_H\,,\quad J_3 = (\rho^2_H )^{-1} \circ \rho^1_H\,, $$ define an almost para-hypercomplex (respectively, almost hypercomplex) structure on $H$. It is shown that such a structure defines a para-$3$-Sasakian (respectively, pseudo-$3$-Sasakian) structure on $M$, a hypersymplectic (respectively, hyperK\"ahler) structure on the cone $C(M) = M \times \mathbb{R}^+$ and, under some regularity assumptions, a para-quaternionic K\"ahler (respectively, quaternionic K\"ahler) structure on the orbit space of a naturally defined $3$-dimensional group of transformations of $M$ (locally isomorphic to $Sp_1(\mathbb{R})$ or $Sp_1$). Some homogeneous examples of such structures are indicated and a simple reduction method for constructing non-homogeneous examples is described. \section{Para-complex vector spaces.} \subsection{The algebra of para-complex numbers $C$.} We recall that the {\bf algebra of para-complex numbers} is defined as the vector space $C = \R^2$ with the multiplication $$ (x,y)\cdot (x',y')= (xx'+yy',xy'+yx')\,. $$ We set $e=(0,1)$. Then $e^2=1$ and we can write $$ C = \R + e \R = \{z = x + ey\,\,\vert\,\,x,y\in\R\}\,.
$$ The conjugation of an element $z = x + ey$ is defined by $\bar z := x -ey$, and $\Re\mathfrak{e}\,z :=x$ and $\Im\mathfrak{m}\,z:=y$ are called the {\bf real part} and the {\bf imaginary part} of the para-complex number $z$, respectively.\newline We denote by $C^* = \{z = x +ey \,\, \vert\,\,x^2-y^2\neq 0 \}$ the group of invertible elements of $C$.\smallskip \subsection{Para-complex structures on a real vector space.} Let $V$ be a $2n$-dimensional real vector space. A {\bf para-complex structure} on $V$ is an endomorphism $K:V\to V$ such that \begin{itemize} \item[i)] $K^2={\rm Id}_V$; \item[ii)] the eigenspaces $V^+$, $V^-$ of $K$ with eigenvalues $1$, $-1$, respectively, have the same dimension. \end{itemize} The pair $(V,K)$ will be called a {\bf para-complex vector space}.\newline Let $K$ be a para-complex structure on $V$. We define the {\bf para-complexification} of $V$ as $V^C=V\otimes_\R C$ and we extend $K$ to a $C$-linear endomorphism $K$ of $V^C$. Then by setting \begin{eqnarray*} V^{1,0}&=&\{v\in V^C\,\,\vert\,\, Kv=ev\}=\{v+eKv\,\,\vert\,\, v\in V\}\,,\\ V^{0,1}&=&\{v\in V^C\,\,\vert\,\, Kv=-ev\}=\{v-eKv\,\,\vert\,\, v\in V\}\,, \end{eqnarray*} we obtain $V^C=V^{1,0}\oplus V^{0,1}$. \begin{rem}\label{module} {\rm The extension $K$ of the endomorphism $K$ with eigenvalues $\pm 1$ to $V^C$ has ``eigenvalues'' $\pm e$. There is no contradiction, since $V^C$ is a module over $C$, but not a vector space ($C$ is an algebra, but not a field).} \end{rem} The {\bf conjugation} of $z = x+ey\in V^C$ is given by $\overline{z}=x-ey$, where $x,y\in V$. A para-complex structure $K$ on $V$ induces a para-complex structure $K^*$ on the dual space $V^*$ of $V$ by $$ K^*(\alpha)(v)=\alpha(Kv)\,, $$ for any $\alpha\in V^*$, $v\in V$.
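The arithmetic of the algebra $C$ recalled above is easy to check directly. The following sketch (function names are illustrative, not from the text) implements the multiplication and conjugation of para-complex numbers and verifies that $e^2=1$ and that $C$ has zero divisors, which is why $C$ is an algebra but not a field:

```python
# Para-complex numbers z = x + e*y with e^2 = 1, represented as pairs (x, y).
# Illustrative sketch; names pmul/conj are not from the text.

def pmul(z, w):
    """Multiplication (x,y).(x',y') = (xx' + yy', xy' + yx')."""
    x, y = z
    xp, yp = w
    return (x * xp + y * yp, x * yp + y * xp)

def conj(z):
    """Conjugation of z = x + e*y is x - e*y."""
    x, y = z
    return (x, -y)

e = (0.0, 1.0)
assert pmul(e, e) == (1.0, 0.0)          # e^2 = 1

# z * conj(z) = x^2 - y^2 is real, so z is invertible iff x^2 - y^2 != 0.
# The idempotents (1 +/- e)/2 are zero divisors:
p = (0.5, 0.5)    # (1 + e)/2
q = (0.5, -0.5)   # (1 - e)/2
assert pmul(p, q) == (0.0, 0.0)          # zero divisors
assert pmul(p, p) == p and pmul(q, q) == q
```

In the basis $p=(1+e)/2$, $q=(1-e)/2$ the multiplication is componentwise, which underlies the splitting into $\pm 1$ eigenspaces used throughout.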
Therefore, if $V^{*C}=V^*\otimes_\R C$, then $V^{*C}=V_{1,0}\oplus V_{0,1}$, where \begin{eqnarray*} V_{1,0}&=&\{\alpha\in V^{*C}\,\,\vert\,\, K^*\alpha=e\alpha\}=\{\alpha+eK^*\alpha\,\,\vert\,\, \alpha\in V^*\}\,,\\ V_{0,1}&=&\{\alpha\in V^{*C}\,\,\vert\,\, K^*\alpha=-e\alpha\}=\{\alpha-eK^*\alpha\,\,\vert\,\, \alpha\in V^*\}\,. \end{eqnarray*} Denote by $\wedge^{p,q}V^{*C}$ the subspace of $\wedge V^{*C}$ spanned by $\alpha\wedge\beta$, with $\alpha\in\wedge^pV_{1,0}$ and $\beta\in\wedge^qV_{0,1}$. Then $$ \wedge^rV^{*C} =\bigoplus_{p+q=r}\wedge^{p,q}V^{*C}\,. $$ If $\{e^1,\ldots ,e^n\}$ is a basis of $V_{1,0}$, then $\{\overline{e}^1,\ldots ,\overline{e}^n\}$ is a basis of $V_{0,1}$ and $$ \{e^{i_1}\wedge\cdots\wedge e^{i_p}\wedge\overline{e}^{j_1}\wedge\cdots\wedge\overline{e}^{j_q},\,\,1\leq i_1<\cdots <i_p\leq n\,,\,\, 1\leq j_1<\cdots <j_q\leq n\} $$ is a basis of $\wedge^{p,q}V^{*C}$. \subsection{Para-Hermitian forms.} \begin{definition} A {\bf para-Hermitian form} on $V^C$ is a map $h:V^C\times V^C\to C$ such that: \begin{itemize} \item[i)] $h$ is $C$-linear in the first entry and $C$-antilinear in the second entry; \item[ii)] $h(W,Z)=\overline{h(Z,W)}$. \end{itemize} \end{definition} \begin{definition} A {\bf para-Hermitian symmetric form} on $V^C$ is a symmetric $C$-bilinear form $h:V^C\times V^C\to C$ such that \begin{eqnarray*} h(V^{1,0},V^{1,0})&=&h(V^{0,1},V^{0,1}) =0\,,\\ h(\overline{Z},\overline{W}) &=&\overline{h(Z,W)} \end{eqnarray*} for any $Z,W\in V^C$.\newline It is called {\bf non-degenerate} if it has trivial kernel: $$ \ker (h) = \left\{ Z\in V^C\,\,: \,\, h(Z,V^C)=0 \right\}=0\,. $$ \end{definition} If $h(Z,W)$ is a para-Hermitian symmetric form, then $\hat{h}(Z,W)=h(Z,\overline{W})$ is a para-Hermitian form.
\begin{lemma} There exists a natural $1-1$ correspondence between pseudo-Euclidean metrics $g$ on a vector space $V$ such that $$ g(KX,KY)=-g(X,Y)\,,\quad X,Y\in V $$ and non-degenerate para-Hermitian symmetric forms $h=g^C$ on $V^C$, where $g^C$ is the natural extension of $g$ to a $C$-bilinear symmetric form. Moreover, the natural $C$-extension $\omega^C$ of the two-form $\omega =g\circ K$ coincides with the $(1,1)$-form $g^C\circ K$. \end{lemma} \section{Para-complex manifolds.} We recall some basic definitions of para-complex geometry. \begin{definition} An {\bf almost para-complex structure} on a $2n$-dimensional manifold $M$ is a field $K\in \Gamma (\hbox{\rm End}(TM))$ of endomorphisms such that \begin{itemize} \item[i)]$K^2={\rm Id}_{TM}$, \item[ii)] the two eigendistributions $T^\pm M:=\ker({\rm Id}\mp K)$ have the same rank\,. \end{itemize} An almost para-complex structure $K$ is said to be {\bf integrable} if the distributions $T^\pm M$ are involutive. In such a case $K$ is called a {\bf para-complex structure}. A manifold $M$ endowed with an (almost) para-complex structure $K$ is called an {\bf (almost) para-complex manifold}.\newline A map $f:(M,K)\to(M',K')$ between two (almost) para-complex manifolds is said to be {\bf para-holomorphic} if \begin{equation}\label{paraholomorphicmap} df\circ K=K'\circ df\,. \end{equation} \end{definition} The {\bf Nijenhuis tensor} $N_K$ of an almost para-complex structure $K$ is defined by $$ N_K(X,Y)=[X,Y]+[KX,KY]-K[KX,Y]-K[X,KY]\, $$ for any vector fields $X$, $Y$ on $M$. As in the complex case, a para-complex structure $K$ is integrable if and only if $N_K=0$ (see e.g. \cite{CMMS1}).\newline A basic example of a para-complex manifold is given by $$ C^n:=\{(z^1,\ldots ,z^n)\,\,\vert\,\,z^{\alpha}\in C\,,\ \alpha =1,\ldots ,n\}\,, $$ where the para-complex structure is provided by the multiplication by $e$.
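The pointwise content of the definition above (an endomorphism squaring to the identity whose $\pm 1$ eigenspaces have equal dimension) can be illustrated numerically. A minimal sketch, with an illustrative choice of $K$ on $\mathbb{R}^4$ not taken from the text:

```python
# A para-complex structure on V = R^{2n} (here n = 2) as a matrix K with
# K^2 = Id and eigenvalues +1, -1 of equal multiplicity n.
import numpy as np

n = 2
# Illustrative choice: K swaps the two R^n factors, K(u, v) = (v, u);
# then V^{+-} = {(u, +-u)}, each of dimension n.
K = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])

assert np.allclose(K @ K, np.eye(2 * n))                 # K^2 = Id
assert np.allclose(np.sort(np.linalg.eigvalsh(K)),
                   [-1, -1, 1, 1])                       # equal multiplicities

# Eigenprojections onto V^+ and V^-:
P_plus = (np.eye(2 * n) + K) / 2
P_minus = (np.eye(2 * n) - K) / 2
assert np.isclose(np.trace(P_plus), n)                   # dim V^+ = n
assert np.isclose(np.trace(P_minus), n)                  # dim V^- = n
assert np.allclose(P_plus @ P_minus, 0)                  # V^+ and V^- complementary
```

The projections $P_\pm = \tfrac12(\mathrm{Id}\pm K)$ are exactly the linear-algebra counterparts of the splitting $v = \tfrac12(v+Kv) + \tfrac12(v-Kv)$ used in the eigendistribution decomposition.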
The Frobenius theorem implies (see e.g. \cite{CMMS2}) the existence of local coordinates $(z^{\alpha}_+,z^{\alpha}_-)\,,\ \alpha =1,\ldots , n$, on a para-complex manifold $(M,K)$ such that $$ T^+M=\hbox{\rm span}\left\{\frac{\partial}{\partial z^{\alpha}_+}\,,\ \alpha =1,\ldots , n\right\}\,,\quad T^-M=\hbox{\rm span}\left\{\frac{\partial}{\partial z^{\alpha}_-}\,,\ \alpha =1,\ldots , n\right\}\,. $$ Such (real) coordinates are called {\bf adapted coordinates} for the para-complex structure $K$.\newline The cotangent bundle $T^*M$ splits as $T^*M = T^{*}_+M\oplus T^{*}_-M$, where $T^{*}_{\pm}M$ are the $\pm 1$-eigendistributions of $K^*$. Therefore, $$ \wedge^rT^*M = \bigoplus_{p+q=r}\wedge^{p,q}_{+\,-}T^*M\,, $$ where $\wedge^{p,q}_{+\,-}T^*M =\wedge^p(T^*_+ M)\otimes\wedge^q(T^*_- M)$. The sections of $\wedge^{p,q}_{+\,-}T^*M$ are called {\bf $(p+,q-)$-forms} on the para-complex manifold $(M,K)$. We will denote the space of sections of the bundle $\wedge^{p,q}_{+\,-}T^*M$ by the same symbol.\newline We set \begin{eqnarray*} \partial_+ &=&\hbox{\rm pr}_{\wedge^{p+1,q}_{+\,-}(M)}\circ d : \wedge^{p,q}_{+\,-}(M)\to \wedge^{p+1,q}_{+\,-}(M)\,, \\ \partial_- &=&\hbox{\rm pr}_{\wedge^{p,q+1}_{+\,-}(M)}\circ d : \wedge^{p,q}_{+\,-}(M)\to \wedge^{p,q+1}_{+\,-}(M)\,. \end{eqnarray*} Then the exterior differential $d$ decomposes as $d =\partial_+ + \partial_-$ and, since $d^2=0$, we have $$ \partial^2_+=\partial^2_- =0\,,\quad \partial_+\partial_-+\partial_-\partial_+=0\,. $$ \subsection{Para-holomorphic coordinates.} Let $(M,K)$ be a $2n$-dimensional para-complex manifold.
As in the complex case, we can define on $M$ an atlas of para-holomorphic local charts $(U_a,\varphi_a)$, where $\varphi_a : M\supset U_a\to C^n$, such that the transition functions $\varphi_a\circ\varphi^{-1}_b$ are para-holomorphic in the sense of \eqref{paraholomorphicmap}. We associate with any adapted coordinate system $(z^{\alpha}_+,z^{\alpha}_-)$ a para-holomorphic coordinate system $z^{\alpha}$ by \begin{equation}\label{coordinateparacomplesse} z^{\alpha}=\frac{z^{\alpha}_++z^{\alpha}_-}{2} + e\,\frac{z^{\alpha}_+-z^{\alpha}_-}{2}\,,\quad \alpha=1,\ldots ,n\,. \end{equation} One can easily check (see \cite{CMMS1}) that the $z^{\alpha}$ are para-holomorphic functions in the sense of \eqref{paraholomorphicmap} and that the transition functions between two para-holomorphic coordinate systems are para-holomorphic. We stress that the real part $x^{\alpha}$ and the imaginary part $y^{\alpha}$ of the functions $z^{\alpha}$, given by $$ x^{\alpha} = \frac{1}{2}(z^{\alpha} +\overline{z}^{\alpha})=\frac{1}{2}(z^{\alpha}_++z^{\alpha}_-)\,,\,\,\, y^{\alpha} = \frac{1}{2e}(z^{\alpha} -\overline{z}^{\alpha})=\frac{1}{2}(z^{\alpha}_+-z^{\alpha}_-) $$ are not necessarily real analytic. \subsection{Para-complex differential forms.}\label{differentialforms} Let $(M,K)$ be a para-complex manifold. We define the {\bf para-complex tangent bundle} as the $\R$-tensor product $T^CM=TM\otimes C$ and we extend the endomorphism $K$ to a $C$-linear endomorphism of $T^CM$.
For any $p\in M$, we have the following decomposition of $T_p^CM$: \begin{equation} \label{1001} T_p^CM=T^{1,0}_pM\oplus T^{0,1}_pM\,, \end{equation} where \begin{eqnarray*} T^{1,0}_pM&=&\{Z\in T_p^CM\,\,\vert\,\, KZ=eZ\}=\{X+eKX\,\,\vert\,\, X\in T_pM\}\,,\\ T^{0,1}_pM&=&\{Z\in T_p^CM\,\,\vert\,\, KZ=-eZ\}=\{X-eKX\,\,\vert\,\, X\in T_pM\}\, \end{eqnarray*} are the ``eigenspaces'' of $K$ with ``eigenvalues'' $\pm e$ (see Remark \ref{module}).\newline We define the {\bf conjugation} of an element $Z=X+eY\in T_p^CM$ by $\overline{Z}=X-eY$. Then $T_p^{0,1}M=\overline{T}_p^{\,1,0}M$. The para-complex vectors $$ \frac{\partial}{\partial z^{\alpha}}=\frac{1}{2}\left(\frac{\partial}{\partial x^{\alpha}}+e\frac{\partial}{\partial y^{\alpha}}\right)\,,\quad \frac{\partial}{\partial \overline{z}^{\alpha}}=\frac{1}{2}\left(\frac{\partial}{\partial x^{\alpha}}-e\frac{\partial}{\partial y^{\alpha}}\right)\, $$ form bases of the spaces $T^{1,0}_pM$ and $T^{0,1}_pM$, respectively. A vector $Z\in T^{1,0}_pM$, respectively $\bar Z \in T^{0,1}_pM$, has uniquely defined coordinates with respect to $\frac{\partial}{\partial z^{\alpha}}$, respectively $\frac{\partial}{\partial \overline{z}^{\alpha}}$.\newline The para-complex structure $K$ acts on the dual space $(T^C)^*M$ by $$ K^*\alpha(X)=\alpha(KX)\,. $$ We have a decomposition $$ (T^C)^*M =\wedge^{1,0}(M) \oplus \wedge^{0,1}(M)\,, $$ where \begin{eqnarray*} \wedge^{1,0}(M)&:=&\{\alpha +eK^*\alpha\,\,\vert\,\,\alpha\in T^*M\}\,,\\ \wedge^{0,1}(M)&:=&\{\alpha -eK^*\alpha\,\,\vert\,\,\alpha\in T^*M\}\,, \end{eqnarray*} are the eigenspaces of $K^*$ with eigenvalues $\pm e$.
We denote by $$ dz^{\alpha}=dx^{\alpha} + edy^{\alpha} \quad\hbox{\rm and}\quad d\overline{z}^{\alpha}=dx^{\alpha} - edy^{\alpha} $$ the bases of $\wedge^{1,0}(M)$ and $\wedge^{0,1}(M)$ dual to the bases $\frac{\partial}{\partial z^{\alpha}}$ and $\frac{\partial}{\partial \overline{z}^{\alpha}}$, respectively. The last decomposition induces a splitting of the bundle $\wedge^r(T^{C})^*M$ of para-complex $r$-forms on $(M,K)$ given by $$ \wedge^r(T^{C})^*M =\bigoplus_{p+q=r}\wedge^{p,q}(M)\,. $$ The sections of $\wedge^{p,q}(M)$ are called {\bf $(p,q)$-forms} on the para-complex manifold $(M,K)$. One can check that \begin{equation}\label{hermitianform} \wedge^{1,1}_{+\,-}(M)=\{\omega\in\wedge^{1,1}(M)\,\,\vert\,\,\omega=\overline{\omega}\}\,. \end{equation} The exterior derivative $d:\wedge^r(T^{C})^*M\to \wedge^{r+1}(T^{C})^*M$ splits as $d=\partial + \overline{\partial}$, where \begin{eqnarray*} \partial &=& \hbox{\rm pr}_{\wedge^{p+1,q}(M)}\circ d :\wedge^{p,q}(M)\to\wedge^{p+1,q}(M)\,,\\ \overline{\partial} &=& \hbox{\rm pr}_{\wedge^{p,q+1}(M)}\circ d :\wedge^{p,q}(M)\to\wedge^{p,q+1}(M)\,, \end{eqnarray*} and moreover, since $d^2=0$, we easily get $$ \partial^2=0\,,\quad \overline{\partial}^2=0\,,\quad\partial\overline{\partial}+\overline{\partial}\partial=0\,. $$ The operators $\partial, \overline{\partial}$ are related to $\partial_+, \partial_-$ by $$ \partial =\frac12((\partial_++\partial_-) + e (\partial_+-\partial_-))\,,\quad \overline{\partial} =\frac12((\partial_++\partial_-) - e (\partial_+-\partial_-))\,. $$ In particular, $$ \partial\overline{\partial} = e\, \partial_+\partial_-\,. $$ We need the following result, which is a consequence of a version of the Dolbeault Lemma for para-complex manifolds (see \cite{CMMS1}). \begin{prop}\label{Dolbeault} Let $(M,K)$ be a para-complex manifold and $\omega$ a closed $2$-form belonging to $\wedge^{1,1}_{+\,-}(M)$.
Then locally there exists a real-valued function $F$ (called a {\bf potential}) such that $$ \omega = \partial_+\partial_- F= e\,\partial\overline{\partial}F\,. $$ The potential $F$ is defined up to addition of a function $f$ satisfying the condition $\partial_+\partial_- f=0$. \end{prop} \section{Para-K\"ahler manifolds.} \subsection{Para-K\"ahler structures and para-K\"ahler potential.} We recall three equivalent definitions of a para-K\"ahler manifold. \begin{definition} A {\bf para-K\"ahler manifold} is given equivalently by: \begin{enumerate} \item[i)] a pseudo-Riemannian manifold $(M,g)$ together with a skew-symmetric para-complex structure $K$ which is parallel with respect to the Levi-Civita connection; \item[ii)] a symplectic manifold $(M,\omega)$ together with two complementary involutive Lagrangian distributions $T^\pm M$; \item[iii)] a para-complex manifold $(M,K)$ together with a symplectic form $\omega$ which belongs to $\wedge^{1,1}_{+\,-}(M)$. \end{enumerate} \end{definition} The relations between the three definitions are the following. A pair $(g,K)$ as in i) defines a symplectic form $\omega = g \circ K$ and complementary involutive Lagrangian distributions $T^\pm M$, which are the eigenspace distributions of $K$. One can check that the para-complex extension of the symplectic form belongs to $\wedge^{1,1}_{+\,-}(M)$. Assume now that $(K,\omega)$ is as in iii). Then $g=\omega\circ K$ is a pseudo-Riemannian metric on $M$ and $(g,K)$ satisfies i) due to the following formula (see \cite[Theorem 1]{CMMS1}): $$ 2g((\nabla_XK)Y,Z)=d\omega(X,Y,Z)+d\omega(X,KY,KZ)-g(N_K(Y,Z),KX)\,. $$ Let $(M,K,\omega,g)$ be a para-K\"ahler manifold. We denote by the same letters $g$ and $\omega$ the extensions of $g$ and $\omega$ to $C$-bilinear forms.
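The pointwise relations between $g$, $K$ and $\omega = g\circ K$ in definition i) can be checked on the flat two-dimensional model. A minimal numerical sketch (the matrices are an illustrative choice, not taken from the text):

```python
# Flat para-Kaehler model on R^2: K swaps the coordinates, g is the
# neutral metric diag(1, -1). Illustrative choice of matrices.
import numpy as np

K = np.array([[0.0, 1.0], [1.0, 0.0]])
g = np.diag([1.0, -1.0])

assert np.allclose(K @ K, np.eye(2))      # K^2 = Id
assert np.allclose(K.T @ g @ K, -g)       # g(KX, KY) = -g(X, Y): K is skew-symmetric for g

# omega = g o K, i.e. omega(X, Y) = g(KX, Y), is a symplectic form
# (here constant, hence closed):
omega = K.T @ g
assert np.allclose(omega, -omega.T)       # skew-symmetry
assert abs(np.linalg.det(omega)) > 0      # non-degeneracy
```

The eigendirections of $K$ (the lines $y=\pm x$) are $g$-null and $\omega$-Lagrangian, matching description ii).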
\begin{lemma}\label{isotropic} The following formulas hold: \begin{itemize} \item[i)] $g(\overline{Z},\overline{W})=\overline{g(Z,W)}\,,\qquad\quad$\ \ $\omega(\overline{Z},\overline{W})=\overline{\omega(Z,W)}\,,\,\,\,$\ $\forall\, Z,W\in T^CM$;\smallskip \item[ii)] $g (KZ,KW)=-g(Z,W)$, $\;\;\;\omega (KZ,KW)=-\omega(Z,W)$;\smallskip \item[iii)] the restrictions of $g$ and $\omega$ to $T^{1,0}M$ and $T^{0,1}M$ vanish. \end{itemize} \end{lemma} \begin{proof} i) follows from the reality of $g$ and $\omega$; ii) follows from the definition. For iii), since any element of $T^{1,0}M$ has the form $X+eKX$ with $X\in TM$, we get \begin{eqnarray*} g(X+eKX,Y+eKY)&=&g(X,Y)+g(KX,KY)+ \\ &{}&+\, e\left(g(X,KY)+g(KX,Y)\right)=0\,. \end{eqnarray*} \end{proof} Let now $(z^1,\ldots ,z^n)$ be a local para-holomorphic coordinate system. We denote by $$ \partial_{\alpha}=\frac{\partial}{\partial z^{\alpha}}\,,\quad \partial_{\overline{\alpha}}=\overline{\partial}_{\alpha} = \frac{\partial}{\partial \overline{z}^{\alpha}}\,,\quad \alpha=1,\ldots ,n $$ the para-holomorphic and para-anti-holomorphic vector fields. Then we put $$ g_{\alpha\overline{\beta}}:=g(\partial_{\alpha},\partial_{\overline{\beta}})=\overline{g_{\overline{\alpha}\beta}} $$ and remark that $$ g_{\alpha\beta}=g(\partial_{\alpha},\partial_{\beta})=0\,,\quad g_{\overline{\alpha}\overline{\beta}}=g(\partial_{\overline{\alpha}},\partial_{\overline{\beta}})=0\,. $$ In these coordinates, $$ g=g_{\alpha\overline{\beta}}dz^{\alpha} d\overline{z}^{\beta} + \overline{g_{\alpha\overline{\beta}}dz^{\alpha} d\overline{z}^{\beta}}\,,\quad \omega =2\sum_{\alpha\,,\beta}\omega_{\alpha\overline{\beta}}dz^{\alpha}\wedge d\overline{z}^{\beta}\,, $$ where $\omega_{\alpha\overline{\beta}}=e\,g_{\alpha\overline{\beta}}$.
Since $\omega$ is closed, we have
$$
\frac{\partial g_{\alpha\overline{\beta}}}{\partial z^{\gamma}}=\frac{\partial g_{\gamma\overline{\beta}}}{\partial z^{\alpha}}\,,\quad
\frac{\partial g_{\alpha\overline{\beta}}}{\partial \overline{z}^{\gamma}}=\frac{\partial g_{\alpha\overline{\gamma}}}{\partial\overline{z}^{\beta}}\,.
$$
Proposition \ref{Dolbeault} implies the local existence of a real function $F$ such that
$$
g_{\alpha\overline{\beta}}=\partial_{\alpha}\partial_{\overline{\beta}}F\,.
$$
The function $F$ is called the {\bf para-K\"ahler potential} of the para-K\"ahler metric $g$.
\subsection{Curvature tensor of a para-K\"ahler metric.}
Denote by $\nabla$ the Levi-Civita connection of the pseudo-Riemannian metric $g$ and by $\Gamma^C_{AB}= \Gamma^C_{BA}$ the Christoffel symbols with respect to para-holomorphic coordinates, where $A,B,C$ denote both Greek indices and their conjugates.
\begin{lemma}
The only possible non-zero Christoffel symbols are
$$
\Gamma^{\gamma}_{\alpha\beta}=\overline{\Gamma^{\overline{\gamma}}_{\overline{\alpha}\overline{\beta}}}\,.
$$
\end{lemma}
\begin{proof}
The condition $\nabla K=0$ implies
$$
\nabla_{\partial_A}K\partial_B-K\nabla_{\partial_A}\partial_B=0\,.
$$
Hence,
\begin{eqnarray*}
e\nabla_{\partial_{\alpha}}\partial_{\beta}-K\left(\sum_{\gamma}\Gamma_{\alpha\beta}^{\gamma}\partial_{\gamma}+
\sum_{\gamma}\Gamma_{\alpha\beta}^{\overline{\gamma}}\overline{\partial}_{\gamma}\right) &=&0\,,\\
e\left(\sum_{\gamma}\Gamma_{\alpha\beta}^{\gamma}\partial_{\gamma}+
\sum_{\gamma}\Gamma_{\alpha\beta}^{\overline{\gamma}}\overline{\partial}_{\gamma}\right)-
e\sum_{\gamma}\Gamma_{\alpha\beta}^{\gamma}\partial_{\gamma}+
e\sum_{\gamma}\Gamma_{\alpha\beta}^{\overline{\gamma}}\overline{\partial}_{\gamma}&=&0\,,\\
2e \sum_{\gamma}\Gamma_{\alpha\beta}^{\overline{\gamma}}\overline{\partial}_{\gamma}&=&0\,,
\end{eqnarray*}
which implies $\Gamma_{\alpha\beta}^{\overline{\gamma}}=0$. The other computations are similar.
\end{proof}
By the formula relating the Levi-Civita connection to the metric, we can express the Christoffel symbols by
\begin{equation}\label{christoffel1}
\sum_{\alpha} g_{\alpha\overline{\mu}}\Gamma^{\alpha}_{\beta\gamma}=\frac{\partial g_{\overline{\mu}\beta}}{\partial z^{\gamma}}\,,\qquad
\sum_{\alpha} g_{\overline{\alpha}\mu}\Gamma^{\overline{\alpha}}_{\overline{\beta}\overline{\gamma}}=\frac{\partial g_{\mu\overline{\beta}}}{\partial \overline{z}^{\gamma}}\,.
\end{equation}
\begin{prop}\label{proprietacurvatura}
The curvature tensor $R$ and the Ricci tensor $S$ of a para-K\"ahler metric $g$ satisfy the following relations
\begin{eqnarray}\label{R}
&{}& R(X,Y)\circ K = K\circ R(X,Y)\,,\quad R(KX,KY)=-R(X,Y)\,,\label{formulaR}\\
&{}& S(KX,KY)=-S(X,Y)\label{formulaS}\,,
\end{eqnarray}
for any vector fields $X,Y\in \mathfrak{X}(M)$.
\end{prop}
\begin{proof}
Since $K$ is parallel with respect to the Levi-Civita connection $\nabla$ of the para-K\"ahler metric $g$, we easily get that $R(X,Y)$ and $K$ commute for any $X$, $Y$. Hence the first formula of \eqref{formulaR} is proved.\newline
We have:
\begin{eqnarray*}
g(R(KX,KY)V,U) &=& g(R(U,V)KY,KX)\\
&=& g(KR(U,V)Y,KX)\\
&=& -g(R(U,V)Y,X)\\
&=& -g(R(X,Y)V,U)\,,
\end{eqnarray*}
which implies that $R(KX,KY) = - R(X,Y)$.\newline
By the definition of the Ricci tensor $S$, we get
\begin{eqnarray*}
S(KX,KY) &=& {\hbox{tr}} \left(V\mapsto R(V,KX)KY\right)\\
&=& -{\hbox{tr}} \left(KV\mapsto R(KV,KX)KY\right)\\
&=& {\hbox{tr}} \left(KV\mapsto R(V,X)KY\right)\\
&=& {\hbox{tr}} \left(KV\mapsto KR(V,X)Y\right)\\
&=& -{\hbox{tr}} \left(V\mapsto R(V,X)Y\right)\\
&=& -S(X,Y)\,,
\end{eqnarray*}
where we used \eqref{formulaR} and the fact that $g(KX,KY)=-g(X,Y)$.
\end{proof}
\begin{prop}\label{curvatura}
The only possible non-zero components of the Riemann curvature tensor $R$ are
\begin{eqnarray*}
&{}& R^{\alpha}_{\beta\gamma\overline{\delta}}\,,\quad R^{\alpha}_{\beta\overline{\gamma}\delta}\,,\quad
R^{\overline{\alpha}}_{\overline{\beta}\gamma\overline{\delta}}\,,\quad R^{\overline{\alpha}}_{\overline{\beta}\overline{\gamma}\delta}\,, \\
&{}& R_{\alpha\overline{\beta}\gamma\overline{\delta}}\,,\quad R_{\alpha\overline{\beta}\overline{\gamma}\delta}\,,\quad
R_{\overline{\alpha}\beta\gamma\overline{\delta}}\,,\quad R_{\overline{\alpha}\beta\overline{\gamma}\delta}\,.
\end{eqnarray*}
\end{prop}
\begin{proof}
Since $R(\partial_C,\partial_D)\circ K= K\circ R(\partial_C,\partial_D)$, we have
\begin{eqnarray*}
g(R(\partial_C,\partial_D)\overline{\partial}_{\beta},\overline{\partial}_{\gamma})&=&
-e\,g(R(\partial_C,\partial_D)K\overline{\partial}_{\beta}, \overline{\partial}_{\gamma})\\
&=& -e\,g(K R(\partial_C,\partial_D)\overline{\partial}_{\beta},\overline{\partial}_{\gamma}) \\
&=&e\,g( R(\partial_C,\partial_D)\overline{\partial}_{\beta},K\overline{\partial}_{\gamma})\\
&=&-\,g(R(\partial_C,\partial_D)\overline{\partial}_{\beta},\overline{\partial}_{\gamma})\,.
\end{eqnarray*}
Hence, $R^{\alpha}_{\overline{\beta}CD}=0$. In a similar way, taking into account the symmetry properties of the Riemann tensor, the statement can be proved.
\end{proof}
By the formulas above, recalling the expression of $R$ in terms of $\Gamma^A_{BC}$, it follows that
\begin{equation}\label{Rgamma}
R^{\alpha}_{\beta\gamma\overline{\delta}}=-\frac{\partial \Gamma^{\alpha}_{\beta\gamma}}{\partial \overline{z}^{\delta}}\,.
\end{equation}
\subsection{The Ricci form in para-holomorphic coordinates.}
The Ricci tensor of the metric $g$ is defined by
$$
{\hbox{\it ric}}_{AB}=\sum_{C}R^C_{ACB}\,.
$$
Therefore, by Proposition \ref{curvatura} and \eqref{Rgamma}, we obtain
\begin{equation}\label{curvaturaricci}
{\hbox{\it ric}}_{\alpha\overline{\beta}}=-\sum_{\gamma} \frac{\partial \Gamma^{\gamma}_{\alpha\gamma}}{\partial \overline{z}^{\beta}}\,,\qquad
{\hbox{\it ric}}_{\overline{\alpha}\beta}=\overline{{\hbox{\it ric}}_{\alpha\overline{\beta}}}\,,\qquad
{\hbox{\it ric}}_{\alpha\beta}={\hbox{\it ric}}_{\overline{\alpha}\overline{\beta}}=0\,.
\end{equation}
We define the {\bf Ricci form $\rho$} of the para-K\"ahler metric $g$ by
\begin{equation}\label{formaricci}
\rho:={\hbox{\it ric}}\circ K\,.
\end{equation}
We extend it to a para-complex $2$-form $\rho$. Formula \eqref{curvaturaricci} shows that $\rho$ has type $(1,1)$ and in local coordinates can be represented as
$$
\rho =2e\,{\hbox{\it ric}}_{\alpha\overline{\beta}}dz^{\alpha}\wedge d\overline{z}^{\beta}\,.
$$
\begin{prop}
The Ricci form of a para-K\"ahler manifold is a closed $(1,1)$-form and can be represented by
\begin{equation}\label{ricci}
\rho = e\,\partial\overline{\partial}\log(\det(g_{\alpha\overline{\beta}}))\,.
\end{equation}
In particular,
\begin{equation}\label{tensorericci}
{\hbox{\it ric}}_{\alpha\overline{\beta}}=-\frac{\partial^2 \log(\det(g_{\lambda\overline{\mu}}))}{\partial z^{\alpha}\partial \overline{z}^{\beta}}\,.
\end{equation}
\end{prop}
\begin{proof}
We remark that the metric $g$ defines a $C$-linear bijective map $g: T^{1,0}M\to T^{0,1*}M$. There exists an inverse map $g^{-1} :T^{0,1*}M\to T^{1,0}M$ which is $C$-linear. It is represented by a matrix $(g^{\overline{\beta}\alpha})$ such that $g_{\alpha\overline{\beta}}g^{\overline{\beta}\gamma}=\delta^{\gamma}_{\alpha}$.\newline
Using \eqref{christoffel1} and the identity, which is still valid in the para-complex case,
\begin{equation}\label{determinante}
\frac{\partial\det(g_{\lambda\overline{\mu}})}{\partial z^{\alpha}}
=\det(g_{\lambda\overline{\mu}})\sum_{\beta,\gamma}g^{\beta\overline{\gamma}}\frac{\partial g_{\beta\overline{\gamma}}}{\partial z^{\alpha}}\,,
\end{equation}
we obtain
\begin{equation}\label{christoffel2}
\Gamma^{\alpha}_{\beta\gamma}=\sum_{\mu} g^{\alpha\overline{\mu}}\frac{\partial g_{\overline{\mu}\beta}}{\partial z^{\gamma}}\,.
\end{equation}
Hence, by \eqref{determinante} and \eqref{christoffel2} we get
$$
\sum_{\gamma} \Gamma^{\gamma}_{\alpha\gamma}
=\frac{\partial \log(\det(g_{\lambda\overline{\mu}}))}{\partial z^{\alpha}}\,.
$$
The last formula implies
\begin{equation}\label{tensorericci1}
{\hbox{\it ric}}_{\alpha\overline{\beta}}=-\frac{\partial^2 \log(\det(g_{\lambda\overline{\mu}}))}{\partial z^{\alpha}\partial \overline{z}^{\beta}}\,,
\end{equation}
which proves that
$$
\rho = e\,\partial\overline{\partial}\log(\det(g_{\alpha\overline{\beta}}))\,.
$$
\end{proof}
\subsection{The canonical form of a para-complex manifold with volume form.}
Let $(M,K,{\it vol})$ be an oriented manifold with para-complex structure $K$ and a (real) volume form ${\it vol}$. We define a canonical $(1,1)$-form $\rho$ on $M$. Let $z=(z^1,\ldots ,z^n)$ be local para-holomorphic coordinates and $(x^{\alpha},y^{\alpha})$ the corresponding real coordinates, where $z^{\alpha}=x^{\alpha} + ey^{\alpha}$. \newline
Then we can write
\begin{eqnarray}\label{volumeform}
{\it vol} &=& V(z,\overline{z})\, dz^1\wedge d\overline{z}^1 \wedge \ldots \wedge dz^n\wedge d\overline{z}^n \\
&=& U(x,y)\, dx^1\wedge d y^1 \wedge \ldots \wedge dx^n\wedge dy^n\,.\nonumber
\end{eqnarray}
We may assume that $U(x,y) >0$, as $M$ is oriented.\newline
Since
$$
dz^{\alpha}\wedge d\overline{z}^{\alpha} = (dx^{\alpha} +edy^{\alpha})\wedge (dx^{\alpha} -edy^{\alpha})= -2e \,dx^{\alpha}\wedge dy^{\alpha}\,,
$$
we obtain
$$
(-2e)^n V(z,\overline{z})= U(x,y)\,.
$$
In particular, the function $(-e)^n V$ is positive.\newline
Let $z'^{\alpha}(z)=x'^{\alpha}(z)+e y'^{\alpha}(z)$ be another system of para-holomorphic coordinates such that the associated real coordinates $(x'^{\alpha},y'^{\alpha})$ have the same orientation, and let $V'(z',\overline{z'})$, $U'(x',y')$ be the corresponding functions as in \eqref{volumeform}. Then
$$
V'(z',\overline{z'})=V(z,\overline{z})\,\Delta(z)\,\overline{\Delta(z)}\,,\quad U'(x',y') = U(x,y)\, J(x,y)\,,
$$
where $\Delta(z) =\det\Vert\frac{\partial z'}{\partial z}\Vert$ and $J=\det\Vert\frac{\partial(x',y')}{\partial (x,y)}\Vert >0$ are the Jacobians of the corresponding transition functions. Since $(-2e)^nV' = U'$, we have
$$
\Delta(z)\,\overline{\Delta(z)} = J(x,y) >0\,.
$$
If we write
$$
\Delta(z)=u(z) + ev(z)\,,
$$
then
$$
\Delta(z)\,\overline{\Delta(z)} =u^2-v^2 = J >0\,.
$$
Hence $u\neq 0$. This implies the following
\begin{lemma}
The formula
\begin{equation}\label{canonicalform}
\rho =e\,\partial \overline{\partial} \log \left((-e)^nV\right)
\end{equation}
defines a real global closed $2$-form of type $(1,1)$ on the oriented para-complex manifold $(M,K,{\it vol})$.
\end{lemma}
The form $\rho$ is called the {\bf canonical form} on $(M,K,{\it vol})$.
\begin{proof}
Since $(-e)^nV(z,\overline{z})$ is a positive smooth function of $(z,\overline{z})$, the logarithm of $(-e)^nV$ is a well-defined smooth function and $\rho =e\,\partial \overline{\partial} \log \left((-e)^nV\right)$ is a $(1,1)$-form. It remains to check that if $z'$ is another coordinate system and $V'=V(z,\overline{z})\,\Delta(z)\,\overline{\Delta(z)}$ is the associated function, then
$$
\rho' =e\,\partial \overline{\partial} \log\left( (-e)^nV'\right)= \rho\,.
$$
Since the real part $u$ of $\Delta(z) =u+ e v$ is not zero, we can choose $\epsilon =\pm 1$ such that $\Delta^\epsilon := \epsilon \Delta$ and $\overline{\Delta^\epsilon}:=\epsilon \overline{\Delta}$ have positive real part; then the function $\log ( \Delta^\epsilon(z))$ is a para-holomorphic function. In particular, $\overline{\partial}\log(\epsilon \Delta(z))=0$. Similarly, $\log \overline{\Delta^\epsilon(z)}$ is anti-para-holomorphic. Then
\begin{eqnarray*}
\rho' &=&e\,\partial \overline{\partial} \log \left((-e)^nV'\right) = e\,\partial \overline{\partial} \log \left((-e)^nV\,\Delta^\epsilon(z)\overline{\Delta^\epsilon(z)}\right)\\
&=&e\,\partial \overline{\partial} \left[\log\left((-e)^nV\right)+\log\left(\Delta^\epsilon(z)\right)+ \log\left(\overline{\Delta^\epsilon(z)}\right) \right]\\
&=&e\,\partial \overline{\partial} \log\left((-e)^nV\right) =\rho\,.
\end{eqnarray*}
\end{proof}
Hence, from formulas \eqref{ricci} and \eqref{canonicalform}, we get the following
\begin{cor}
Let $(M,K,\omega,g)$ be an oriented para-K\"ahler manifold and ${\it vol}^g$ the volume form associated with the metric $g$. Then the Ricci form $\rho$ of the para-K\"ahler manifold $M$ coincides with the canonical form of $(M,K,{\it vol}^g)$. In particular, $\rho$ depends only on the para-complex structure and the volume form.
\end{cor}
\medskip
Now we derive a formula for the canonical form $\rho$ in terms of the divergence.\newline
Let $(M,K)$ be a $2n$-manifold with a para-complex structure and ${\it vol}$ a (real) volume form on $M$. With respect to local para-holomorphic coordinates $z^{\alpha}$ on $M$, we can write
$$
{\it vol} = V(z,\bar z)\,dz^1 \wedge \ldots\wedge dz^n \wedge d \bar z^1 \wedge \ldots \wedge d \bar z^n\,,
$$
where $V$ is a real- or imaginary-valued function depending on the parity of $n$.
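As a quick check of the sign conventions (an illustrative computation added here, not in the original text): for $n=1$, with ${\it vol}=dx\wedge dy$ and $z=x+ey$,

```latex
% n = 1, e^2 = 1:
dz\wedge d\overline{z} = (dx+e\,dy)\wedge(dx-e\,dy) = -2e\,dx\wedge dy\,,
\qquad\hbox{so}\qquad
{\it vol} = dx\wedge dy = -\frac{e}{2}\,dz\wedge d\overline{z}\,.
```

Hence $V=-e/2$ and $(-e)^{1}V=e^{2}/2=1/2>0$, in agreement with the positivity of $(-e)^{n}V$.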
\par
We define the {\bf divergence} ${\rm div}\, X$ of a vector field $X \in \mathfrak{X}(M)$ on $(M, {\it vol})$ by
$$
\mathcal{L}_X \,{\it vol} = ({\rm div}\, X)\, {\it vol}\,,
$$
where $\mathcal{L}_X$ denotes the Lie derivative along the vector field $X$. Then for $X,Y \in \mathfrak{X}(M)$ and any smooth function $f \in \mathcal{C}^\infty(M)$ we have
\begin{equation}\label{proprietadivergenza}
X ({\rm div}\, Y) - Y({\rm div}\, X) = {\rm div}\, [X,Y]\,,\quad {\rm div}\,(fX)= f\, {\rm div}\, X + X(f)\,.
\end{equation}
Moreover, setting $\partial_i =\frac{\partial}{\partial x^i}$, we have
$$
{\rm div}\,(\partial_i) = \partial_i \log ((-e)^nV)
$$
and if $X = X^i \partial_i$ then
\begin{equation}\label{divergenza}
{\rm div}\, X = \partial_i X^i + \partial_i (\log ((-e)^nV))\, X^i\,.
\end{equation}
For any
$$
Z = Z^{\alpha} \partial_{\alpha} + Z^{\bar\alpha} \partial_{\bar\alpha}\,,\quad
W = W^{\beta} \partial_{\beta} + W^{\bar\beta} \partial_{\bar\beta}\,,
$$
we denote by
$$
h(Z,W) = \partial_{\alpha} \partial_{\bar\beta} \log ((-e)^nV)\, Z^{\alpha} W^{\bar\beta} + \partial_{\alpha} \partial_{\bar\beta} \log ((-e)^nV)\, W^{\alpha} Z^{\bar\beta}
$$
the para-Hermitian form associated with $\rho$. Then
$$
\rho(Z,W) = h(Z,KW)\,.
$$
The following lemma will be used in the next section.
\begin{lemma}\label{rho}
Let $X, Y$ be vector fields on $M$ such that ${\rm div}\, X ={\rm div}\, Y=0$ and $\mathcal{L}_XK=\mathcal{L}_YK=0$, where $\mathcal{L}$ denotes the Lie derivative. Then
\begin{equation}\label{quattro}
2\rho (X,Y) ={\rm div}\,(K[X,Y])\,.
\end{equation}
\end{lemma}
\begin{proof}
Set
$$
X^c = X + eKX\,,\quad Y^c = Y + eKY\,;
$$
then $X^c$ and $Y^c$ are para-holomorphic vector fields and by the definition of $\rho$ and $h$ we have
\begin{eqnarray*}
2\rho(X,Y) & = &\frac{e}{2} \left( -h(X^c,\overline{Y^c})+h(\overline{X^c},Y^c) \right)\\
&=& \frac{e}{2}\left(-h(X^c,\overline{Y^c})+\overline{h(X^c,\overline{Y^c})} \right)\\
& =& -2{\,{\rm Im}\,} h(X^c,\overline{Y^c})\\
&=& -{\,{\rm Im}\,} (X^c({\rm div}\,\overline{Y^c}))\\
&=& (X({\rm div}\, KY)- K X({\rm div}\, Y))\\
&=& {\rm div}\, ([X,KY])\\
&=& {\rm div}\, (K[X,Y])\,,
\end{eqnarray*}
where in the last equality we used that $\mathcal{L}_XK=0$.
\end{proof}
\section{Homogeneous para-K\"ahler manifolds.}
\subsection{The Koszul formula for the canonical form.}
Let $M = G/H$ be a homogeneous reductive manifold with an invariant volume form ${\it vol}$. We fix a reductive decomposition $\ggg = \gh + \gm$ of the Lie algebra $\ggg$ and identify the subspace $\gm$ with the tangent space $T_oM$ at the point $o = eH$.\\
We denote by $\Omega = \pi^* {\it vol}$ the pull-back of ${\it vol}$ under the natural projection $\pi : G \rightarrow G/H$. We choose a basis $\omega^i$ of the space of left-invariant horizontal $1$-forms on $G$ such that $\Omega = \omega^1 \wedge \ldots \wedge \omega^m$ and denote by $X_i$ the dual basis of horizontal left-invariant vector fields. (Horizontality means that the $\omega^i$ vanish on the fibers of $\pi$ and the vector fields $X_i$ are tangent to the left-invariant distribution generated by $\mathfrak{m}$.) For any element $X \in \mathfrak{g}$ we denote by the same letter $X$ the corresponding left-invariant vector field on $G$ and by $X'$ the corresponding right-invariant vector field on $G$.
\begin{lemma}[Koszul \cite{K}]\label{koszul}
Let $X$ be the velocity vector field on $M = G/H$ of a $1$-parameter subgroup of $G$.
Then the pull-back $\pi^*({\rm div}\, X)$ of the divergence ${\rm div}\, X$ is given by
$$
\pi^* ({\rm div}\, X) = \sum_{i=1}^{2n} \omega^i ([X_i,X])\,.
$$
\end{lemma}
\begin{proof}
For any projectable vector field $\tilde X$ on $G$ with projection $X$ we have
$$
\pi^* ({\rm div}\, X)\, \Omega=\pi^* (\mathcal{L}_X \,{\it vol} ) = \mathcal{L}_{\tilde X} \, \Omega = \sum \omega^i ([X_i,X])\,\Omega \,.
$$
\end{proof}
Now we assume that $G/H$ is endowed with an invariant para-complex structure $K$ and we derive a formula for the canonical form. We extend the endomorphism $K|_{T_oM} = K|_{\gm}$ to the ${\,{\rm Ad}\,}_H$-invariant endomorphism $\tilde{K}$ of $\ggg$ with kernel $\gh$, and we denote by the same symbol the associated left-invariant field of endomorphisms on the group $G$.
\begin{prop}\label{propkoszul}
Let $M = G/H$ be a homogeneous manifold with an invariant volume form ${\it vol}$ and an invariant para-complex structure $K$. Then the pull-back $\pi^* \rho$ to $G$ of the canonical $2$-form $\rho$ associated with $({\it vol}, K)$ at the point $o = eH$ is given by
$$
2 \pi^*\rho (X,Y) = \sum \omega^i \left([\tilde{K}[X,Y],X_i]- \tilde{K}[[X,Y], X_i]\right)\,,\quad \forall\,X,Y \in \ggg\,.
$$
In particular,
$$
2\pi^*\rho_e =d\psi\,,
$$
where $\psi$ is the ${\rm ad}_{\mathfrak{h}}$-invariant $1$-form on $\ggg$ given by
\begin{equation}\label{koszulform1}
\psi(X) =-{\hbox{tr}}_{\mathfrak{g}/\gh} \left(\hbox{\rm ad}_{\tilde{K}X} - \tilde{K}\, \hbox{\rm ad}_X\right)\,,\quad \forall\,X \in \ggg\,.
\end{equation}
\end{prop}
The $1$-form $\psi$ (and the associated left-invariant $1$-form on $G$) is called the {\bf Koszul form}.
\begin{proof}
We denote by $X',Y', Z' = [X',Y']$ the right-invariant vector fields on $G$ associated with the elements $X,Y,Z = [X,Y]$ of $\ggg$.
These vector fields project to vector fields on $M = G/H$ which are generators of $1$-parameter subgroups of $G$ acting on $M$; in particular, they preserve the para-complex structure and the volume form. Applying \eqref{quattro} of Lemma \ref{rho} and Lemma \ref{koszul} we get
\begin{eqnarray*}
2\pi^*\rho(X,Y)&=& 2\rho (\pi_*X',\pi_*Y')_e ={\rm div}\, (K\pi_*[X',Y'])_e \\
&=& {\rm div}\,(\pi_*\tilde{K}Z')_e = \sum_i\omega^i\left([X_i,\tilde{K}Z'] \right)_e \,.
\end{eqnarray*}
Since the left-invariant vector fields $X_i$ commute with the right-invariant vector field $Z'$, we can write
\begin{eqnarray*}
[X_i,\tilde{K}Z']_e &=& (\mathcal{L}_{X_i}\tilde K )Z'_e =(\mathcal{L}_{X_i}\tilde K )Z|_e \\
& =&\mathcal{L}_{X_i}(\tilde K Z)|_e -\tilde K (\mathcal{L}_{X_i}Z)|_e\\
&=&[X_i, \tilde K Z] - \tilde K ([X_i, Z])\,.
\end{eqnarray*}
Here we consider $Z$ and $\tilde K Z$ as left-invariant vector fields on $G$. Substituting this formula into the previous one, we obtain the statement.
\end{proof}
\subsection{Invariant para-complex structures on a homogeneous manifold.}
Let $M=G/H$ be a homogeneous manifold and let $\mathfrak{h}, \mathfrak{g}$ be the Lie algebras of $H,G$, respectively.
\begin{prop}
There is a natural $1$-$1$ correspondence between invariant para-complex structures $K$ on $M$ with eigenspace decomposition $TM = T^+ \oplus T^-$ and decompositions $\mathfrak{g}= \mathfrak{g}^+ + \mathfrak{g}^-$ into two ${\rm Ad}_H$-invariant subalgebras $\mathfrak{g}^\pm$ with $\mathfrak{g}^+ \cap \mathfrak{g}^- = \mathfrak{h}$ and $\dim \mathfrak{g}^+/\mathfrak{h} = \dim \mathfrak{g}^-/\mathfrak{h}$.
\end{prop}
\begin{proof}
Indeed, such a decomposition defines two complementary ${\rm Ad}_H$-invariant subspaces $\mathfrak{g}^\pm/\mathfrak{h} \subset \mathfrak{g}/\mathfrak{h} = T_oM$ which can be extended to complementary invariant integrable distributions $T^\pm$.
\end{proof}
An invariant para-complex structure $K$ associated with a graded Lie algebra can be constructed as follows. Let
\begin{equation}\label{gradation}
\ggg = \ggg_{-k} +\cdots + \ggg_{-1}+\ggg_{0}+\ggg_{1}+\cdots + \ggg_{k}\,,\quad [\ggg_i,\ggg_j]\subset\ggg_{i+j}
\end{equation}
be a $\mathbb{Z}$-graded Lie algebra. Then the endomorphism $D$ defined by $D|_{\mathfrak{g}_j} = j\,{\rm Id}$ is a derivation of $\mathfrak{g}$. Extending the Lie algebra $\mathfrak{g}$ to $\mathbb{R}D + \mathfrak{g}$ if necessary, we may assume that the derivation $D$ is inner, i.e. there is an element $d \in \mathfrak{g}_0$, called the {\bf grading element}, such that $D = {\rm ad}_d$. Let $G$ be a connected Lie group with Lie algebra $\mathfrak{g}$ and $H$ the (automatically closed) subgroup generated by $\mathfrak{h}:= \mathfrak{g}_0$. Then the decomposition
$$
\mathfrak{g} = \mathfrak{g}^+ + \mathfrak{g}^-\,, \quad \mathfrak{g}^\pm = \mathfrak{g}_0 + \sum_{j>0} \mathfrak{g}_{\pm j}
$$
defines an invariant para-complex structure $K$ on the (reductive) homogeneous manifold $M = G/H$. The homogeneous para-complex manifold $(M = G/H, K)$ and the para-complex structure $K$ are called the {\bf para-complex manifold and the para-complex structure associated with a gradation}.
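A minimal example of this construction (standard, and added here only as an illustration): take $\ggg=\mathfrak{sl}(2,\mathbb{R})$ with the gradation

```latex
% sl(2,R) with its standard basis h, e, f:
% [h,e] = 2e, [h,f] = -2f, [e,f] = h.
\ggg_{-1}=\mathbb{R}f\,,\qquad
\ggg_{0}=\mathbb{R}h\,,\qquad
\ggg_{1}=\mathbb{R}e\,.
```

The grading element is $d=h/2$, since ${\rm ad}_{h/2}$ acts with eigenvalues $-1,0,1$ on $f,h,e$; here $H$ is (up to connected components) the diagonal torus, $M=G/H$ is a two-dimensional adjoint orbit, and $T^{\pm}$ are the invariant distributions extending $\mathbb{R}e$ and $\mathbb{R}f$.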
Note that the grading element $d$ generates an Anosov flow $\varphi_t:= \exp(td)$ on $M = G/H$ with smooth stable distribution $T^-$ and unstable distribution $T^+$, which preserves the canonical invariant connection on $M$ associated with the reductive decomposition $\mathfrak{g} = \mathfrak{h}+ \mathfrak{m} := \mathfrak{g}_0 + \bigl(\sum_{j \neq 0}\mathfrak{g}_j \bigr)$. The result of Benoist and Labourie \cite{BL}, which describes Anosov diffeomorphisms on a compact manifold with smooth stable and unstable distributions preserving a linear connection (or a symplectic form), shows that there is no compact smooth quotient of $M$ which preserves the Anosov diffeomorphism $\varphi_1$, that is, no cocompact freely acting discrete group $\Gamma$ of diffeomorphisms of $M$ which commutes with the Anosov diffeomorphism $\varphi_1$. Moreover, the following deep result of \cite{BL1} shows that there is no compact manifold modelled on the homogeneous manifold $M=G/H$ of a semisimple Lie group $G$ associated with a graded Lie algebra. Recall that a manifold $N$ is modelled on a homogeneous manifold $G/H$ if there is an atlas of $G/H$-valued local charts whose transition functions are restrictions of elements of $G$.
\begin{thm}[\cite{BL1}]
Let $G/H$ be a homogeneous manifold of a connected semisimple Lie group $G$ which admits an invariant volume form. If there is a compact manifold $N$ modelled on $G/H$, then $H$ does not contain any hyperbolic element. In particular, if $G/H$ admits a smooth compact quotient $\Gamma \setminus G/H$, then the center of $H$ is compact.
\end{thm}
Note that a semisimple group $G$ admits a cocompact discrete subgroup $\Gamma$ such that the quotient $\Gamma \setminus G/H$ is a compact orbifold, and it would be interesting to construct such an orbifold with an induced Anosov diffeomorphism.
\subsection{Invariant para-K\"ahler structures on a homogeneous reductive manifold.}
Now we give a Lie algebraic description of invariant para-K\"ahler structures on a homogeneous manifold $M = G/H$ with a reductive decomposition $\mathfrak{g} = \mathfrak{h} + \mathfrak{m}$, $[\mathfrak{h}, \mathfrak{m}] \subset \mathfrak{m}$. We denote by
$$
j : H \rightarrow GL(\mathfrak{m})\,,\quad h \mapsto j(h) = {\,{\rm Ad}\,}_h|_{\mathfrak{m}}
$$
the isotropy representation of $H$ on $\mathfrak{m} = T_oM$. Recall that an invariant symplectic structure on $M = G/H$ is defined by a closed ${\rm Ad}_H$-invariant $2$-form $\omega$ on $\ggg$ with kernel $\gh$. The invariant para-complex structure $K$ associated with a decomposition
$$
\mathfrak{g} = \mathfrak{g}^+ + \mathfrak{g}^- = (\mathfrak{h}+ \mathfrak{m}^+ )+ ( \mathfrak{h}+ \mathfrak{m}^-)
$$
is skew-symmetric with respect to the invariant symplectic form $\omega$ (in other words, $\omega$ is of type $(1,1)$) if and only if $\omega\vert_{\gm^\pm}=0$ (see Lemma \ref{isotropic}). This implies
\begin{prop}
Invariant para-K\"ahler structures on a reductive homogeneous manifold $M=G/H$ are defined by triples $(\omega,\gm^+,\gm^-)$, where $\gm = \gm^+ +\gm^-$ is a $j(H)$-invariant decomposition such that $\ggg^\pm= \gh +\gm^\pm$ are subalgebras of $\ggg$ and $\omega$ is a closed ${\rm Ad}_H$-invariant $2$-form with kernel $\gh$ such that $\omega\vert_{\gm^\pm}=0$.
\end{prop}
Note that an invariant volume form on a homogeneous manifold $M=G/H$ exists if and only if the isotropy representation $j(H)$ is unimodular (i.e. $\det (j(h))=1$ for all $h\in H$), and it is defined up to a constant scaling. We get the following
\begin{cor}\label{corhomogparakaehler}
Let $(M=G/H,K)$ be a homogeneous para-complex manifold which admits an invariant volume form ${\it vol}$.
Then any invariant para-K\"ahler structure $(K,\omega)$ has the same Ricci form $\rho$, which is the canonical form of $(K,{\it vol})$. Moreover, there exists an invariant para-K\"ahler Einstein structure with non-zero scalar curvature if and only if the canonical form $\rho$ is non-degenerate. These structures are given by the pairs $(K,\omega)$ with $\omega=\lambda\rho$, $\lambda\neq 0$.
\end{cor}
\section{Homogeneous para-K\"ahler Einstein manifolds of a semisimple group.}
The aim of this section is to describe invariant para-K\"ahler Einstein structures on homogeneous manifolds $M=G/H$ of semisimple groups $G$. We need the following important result.
\begin{thm}[\cite{HDKN}]
A homogeneous manifold $M= G/H$ of a semisimple Lie group $G$ admits an invariant para-K\"ahler structure $(K, \omega)$ if and only if it is a covering of a semisimple adjoint orbit ${\,{\rm Ad}\,}_G h = G/ Z_G(h)$, that is, the adjoint orbit of a semisimple element $h \in \ggg$.
\end{thm}
Note that in this case $Z^0_G(h) \subset H \subset Z_G(h)$, where $Z^0_G(h)$ denotes the connected centralizer of $h$ in $G$, and the element $h$ is $\gh$-regular, i.e. its centralizer in $\ggg$ is $Z_{\ggg}(h) = \gh$. We will describe invariant para-complex structures $K$ on such a homogeneous manifold $M = G/H$ in terms of Satake diagrams and invariant symplectic structures $\omega$. Then we describe the canonical form $\rho=\rho_K$ on $(G/H,K)$ in terms of roots and show that it is non-degenerate. This implies that for any invariant para-complex structure $K$ there exists a unique para-K\"ahler Einstein structure $(K,\lambda\rho_K)$ with given non-zero scalar curvature.
\subsection{Invariant para-K\"ahler structures on a homogeneous manifold.}
Let $M=G/H$ be a covering of an adjoint orbit ${\,{\rm Ad}\,}_G h$ of a real semisimple Lie group $G$, that is, $Z^{0}_G(h) \subset H \subset Z_G(h)$.
Since the Killing form $B\vert_\gh$ is non-degenerate, the $B$-orthogonal complement $\gm =\gh^{\bot}$ of $\gh$ defines a reductive decomposition
$$
\ggg =\gh +\gm\,.
$$
We recall that a gradation
\begin{equation}
\ggg = \ggg_{-k} +\cdots + \ggg_{-1}+\ggg_{0}+\ggg_{1}+\cdots + \ggg_{k}\,,\quad [\ggg_i,\ggg_j]\subset\ggg_{i+j}
\end{equation}
of a semisimple Lie algebra $\ggg$ is defined by a grading element $d \in \mathfrak{g}_0$. A gradation is called {\bf fundamental} if the subalgebras
\begin{equation}\label{mpm}
\gm_\pm =\sum_{i>0}\ggg_{\pm i}
\end{equation}
are generated by $\ggg_{\pm 1}$, respectively.
\begin{lemma}
A gradation is ${\,{\rm Ad}\,}_H$-invariant, where $H \subset G$ is a subgroup of $G$ with Lie algebra $\gh = \ggg_0$, if and only if ${\,{\rm Ad}\,}_H$ preserves $d$.
\end{lemma}
The following proposition describes all invariant para-complex structures $K$ on a homogeneous manifold $M = G/H$ of a semisimple Lie group $G$ which is a covering of an adjoint orbit of a semisimple element $h \in \mathfrak{g}$.
\begin{prop}[\cite{AM}]
There is a natural $1$-$1$ correspondence between invariant para-complex structures $K$ on a homogeneous manifold $G/H$ of a semisimple Lie group $G$ which is a covering of a semisimple adjoint orbit $M={\,{\rm Ad}\,}_G(h)$ and ${\,{\rm Ad}\,}_H$-invariant fundamental gradations \eqref{gradation} of the Lie algebra $\ggg$ with $\ggg_0 =\gh$. The gradation \eqref{gradation} defines the para-complex structure $K$ with eigenspace decomposition $TM = T^+ + T^-$, where the invariant integrable distributions $T^\pm$ are the invariant extensions of the subspaces $\gm_\pm$ defined by \eqref{mpm}.
\end{prop}
Let $\gh = Z_{\ggg}(h)$ be the centralizer of a semisimple element $h$. We have a $B$-orthogonal decomposition $\gh = \gz + \gh'$, where $\gz$ is the center of $\gh$ and $\gh'=[\gh,\gh]$.
Recall that an element $z\in\gz$ is called $\gh$-{\bf regular} if $Z_\ggg(z)=\gh$. We need the following known
\begin{prop}
Let $M = G/H$ be a homogeneous manifold as in the previous proposition. Then there exists a natural $1$-$1$ correspondence between ${\,{\rm Ad}\,}_H$-invariant elements $z\in\gz$ and closed invariant $2$-forms $\omega_z$ on $M=G/H$ given by
$$
\omega_z(X,Y)=B(z,[X,Y])\,,\quad \forall\, X,Y\in\gm =T_oM\subset \ggg\,.
$$
Moreover, $\omega_z$ is a symplectic form if and only if $z$ is $\gh$-regular.
\end{prop}
\begin{cor}
Any invariant para-complex structure $K$ on $M=G/H$ is skew-symmetric with respect to any invariant symplectic structure. In other words, any pair $(K,\omega)$ defines an invariant para-K\"ahler structure.
\end{cor}
\subsection{Fundamental gradations of a real semisimple Lie algebra.}
We recall the description of a real form of a complex semisimple Lie algebra in terms of Satake diagrams, which are extensions of Dynkin diagrams (see \cite{GOV}). Any real form of a complex semisimple Lie algebra $\ggg$ is the fixed point set $\ggg^{\sigma}$ of an anti-linear involution $\sigma$ of $\ggg$. A Cartan subalgebra $\ga^{\sigma}$ of $\ggg^{\sigma}$ decomposes into a direct sum $\ga^{\sigma}=\ga^+\oplus\ga^-$, where
\begin{eqnarray}
&& \ga^+ := \{X\in\ga^{\sigma} \,|\, {\rm ad}_{\ggg}(X) \mbox{ has purely imaginary eigenvalues}\}\,,\\
&& \ga^- := \{X\in\ga^{\sigma} \,|\, {\rm ad}_{\ggg}(X) \mbox{ has real eigenvalues}\}
\end{eqnarray}
are called the {\bf toroidal} and the {\bf vectorial} part of $\ga^{\sigma}$, respectively. Let $\ga^{\sigma}$ be a maximal vectorial Cartan subalgebra of $\ggg^{\sigma}$, i.e.\ one for which the vectorial part $\ga^-$ has maximal dimension.
Then the root decomposition of $\ggg^{\sigma}$ with respect to the subalgebra $\ga^{\sigma}$ can be written as
$$
\ggg^{\sigma} =\ga^{\sigma} + \sum_{\lambda\in\Sigma}\ggg^{\sigma}_\lambda\,,
$$
where $\Sigma\subset (\ga^-)^*$ is a (non-reduced) root system. Denote by $\ga =(\ga^{\sigma})^{\C}$ the complexification of $\ga^{\sigma}$ (which is a ${\sigma}$-invariant Cartan subalgebra) and by ${\sigma}^*$ the induced anti-linear action of ${\sigma}$ on $\ga^*$:
$$
{\sigma}^*\alpha =\overline{\alpha\circ{\sigma}}\,,\quad\alpha\in\ga^*\,.
$$
Consider the root space decomposition of $\ggg$ with respect to $\ga$:
$$
\ggg =\ga+\sum_{\alpha\in R} \ggg_\alpha \,,
$$
where $R$ is the root system of $(\ggg,\ga)$. Note that ${\sigma}^*$ preserves $R$, i.e.\ ${\sigma}^*R=R$. Now we relate the root space decompositions of $\ggg^{\sigma}$ and $\ggg$. We define the subsystem of compact roots $R_\bullet$ by
$$
R_\bullet =\{\alpha\in R\,\,\vert\,\, {\sigma}^*\alpha = -\alpha \}= \{\alpha\,\,\vert\,\,\alpha(\ga^-)=0\}
$$
and denote by $R'=R\setminus R_\bullet$ the complementary set of non-compact roots. We can choose a system $\Pi$ of simple roots of $R$ such that the corresponding system $R^+$ of positive roots satisfies the condition:
$$
R'_+:=R'\cap R^+\,\,\,\hbox{\rm is} \,\,\,\,{\sigma}\hbox{\rm -invariant}.
$$
In this case, $\Pi$ is called a ${\sigma}$-{\bf fundamental system} of roots. \\
We denote by $\Pi_\bullet =\Pi\cap R_\bullet$ the set of compact simple roots (which are also called black) and by $\Pi' =\Pi\setminus \Pi_\bullet$ the non-compact simple roots (called white).
The action of ${\sigma}^*$ on white roots satisfies the following property: \\
for any $\alpha\in\Pi'$ there exists a unique $\alpha'\in\Pi'$ such that ${\sigma}^*\alpha-\alpha'$ is a linear combination of black roots. In this case, we say that $\alpha,\,\alpha'$ are ${\sigma}$-{\bf equivalent}. The information about the fundamental system ($\Pi =\Pi_\bullet \cup \Pi'$) together with the ${\sigma}$-equivalence can be visualized in terms of the {\bf Satake diagram}, which is defined as follows: \newline
on the Dynkin diagram $\Gamma$ of the system of simple roots $\Pi$, we paint the vertices which correspond to black roots black and we join the vertices which correspond to ${\sigma}$-equivalent roots $\alpha,\,\alpha'$ by a curved arrow. We recall that there is a natural $1$-$1$ correspondence between Satake diagrams subordinated to the Dynkin diagram of a complex semisimple Lie algebra $\ggg$, up to isomorphism, and real forms $\ggg^{\sigma}$ of $\ggg$, up to conjugation. The list of Satake diagrams of real simple Lie algebras is known (see e.g.\ \cite{GOV}).
\smallskip\par
The following proposition describes fundamental gradations of semisimple complex (respectively, real) Lie algebras in terms of crossed Dynkin (respectively, Satake) diagrams (see e.g.\ \cite{Dj}, \cite{AMT}).
\begin{prop}\label{prop-grad-real}
A fundamental gradation of a complex semisimple Lie algebra $\ggg=\sum\ggg_p$ can be given by a crossed Dynkin diagram $\Gamma$. Crossed nodes belong to a subset $\Pi^1$ of a simple root system $\Pi =\Pi^0\cup\Pi^1$. The corresponding grading element ${d}\in\ga$ is given by
$$
\alpha_i({d})=
\begin{cases}
0 & \hbox{\rm if}\,\, \alpha_i\in\Pi^0\\
1 &\hbox{\rm if}\,\, \alpha_i\in\Pi^1\,.
\end{cases}
$$
The subspaces $\ggg_p$ are given by
$$
\ggg_p =\sum_{\alpha({d})=p}\ggg_\alpha\,.
$$
A real form $\ggg^{\sigma}$ is consistent with the gradation (i.e.\ $d\in\ggg^{\sigma}$) if and only if the corresponding Satake diagram $\hat\Gamma$ satisfies the following properties:
\begin{enumerate}
\item[i)] all black nodes of $\hat\Gamma$ are uncrossed;
\item[ii)] two nodes related by an arrow are both crossed or both uncrossed.
\end{enumerate}
\end{prop}
\subsection{Computation of the Koszul form and the main theorem.}
Now we compute the Koszul form of a homogeneous para-complex manifold $(M=G^{\sigma}/H^{\sigma},K_M,{\it vol})$, where $G^{\sigma}$ is a real form of a complex semisimple Lie group $G$, $M = G^{\sigma}/H^{\sigma}$ is a covering of a semisimple adjoint orbit ${\,{\rm Ad}\,}_{G^{\sigma}}d$, $K_M$ is the invariant para-complex structure on $M$ defined by the gradation of the Lie algebra $\ggg^{\sigma}$ with the grading element $d \in \ggg^{\sigma}$, and ${\it vol}$ is an invariant volume form on $M$. According to Proposition \ref{propkoszul}, it is sufficient to describe the Koszul form $\psi$ on the graded Lie algebra $\ggg^{\sigma}$ or its complexification $\ggg$. We choose a Cartan subalgebra $\ga\subset\ggg_0$ of the Lie algebra $\ggg$ and denote by $R$ the root system of $(\ggg,\ga)$. Let
$$
\Pi =\Pi^0\cup \Pi^1
$$
be the decomposition of a simple root system $\Pi$ of the root system $R$ which corresponds to the gradation and by
$$
P = P^0\cup P^1
$$
the corresponding decomposition of the fundamental weights. \newline
We denote by $R^+$ the set of positive roots with respect to the basis $\Pi$ and set
$$
R^+_0=\{\alpha\in R^+\,\,\vert\,\,\ggg_\alpha\subset \ggg_0\}\,.
$$
The following lemma describes the Koszul form $\psi \in \ga^{\sigma} \subset \ggg^{\sigma} \subset \ggg$ defined by \eqref{koszulform1} in terms of fundamental weights.
\begin{lemma}\label{psilemma}
The Koszul $1$-form $\psi \in \ga^*$ is equal to
$$
\psi =2(\delta^\ggg -\delta^\gh)\,,
$$
where
$$
\delta^\ggg=\sum_{\alpha\in R^+}\alpha\,,\qquad \delta^\gh=\sum_{\alpha\in R^+_0}\alpha\,,
$$
and the linear forms on the Cartan subalgebra $\ga$ are considered as linear forms on $\ggg$ which vanish on the root spaces $\ggg_\alpha$.
\end{lemma}
\begin{proof}
First of all, remark that for any $E_\alpha\in\ggg_\alpha$ we have
$$
KE_\alpha = \left\{
\begin{array}{rl}
0 & \mbox{\rm if}\,\, \pm\alpha\in R^+_0\\[3pt]
\pm E_\alpha &\mbox{\rm if}\,\, \pm\alpha\in R^+\setminus R^+_0\,.
\end{array}
\right.
$$
Hence the endomorphisms $K\ad_{E_\alpha}$ and $\ad_{KE_\alpha}$ are nilpotent and consequently $\psi(E_\alpha)=0$ for any root $\alpha\in R$. In particular $\psi\vert_\gm =0$. \newline
Assume now that $X=t$ belongs to the Cartan subalgebra $\ga$. Then $\ad_t(E_\alpha)=\alpha(t) E_\alpha$. Therefore
\begin{eqnarray*}
\psi(t)&=&{\hbox{tr}}_\gm (K\ad_t -\ad_{Kt})={\hbox{tr}}_{\gm}(K\ad_t)={\hbox{tr}}_{\gm_+}(K\ad_t)+{\hbox{tr}}_{\gm_-}(K\ad_t)\\[3pt]
&=& \sum_{\alpha\in R^+\setminus R^+_0}\alpha(t)-\sum_{-\alpha\in R^+\setminus R^+_0}\alpha(t) =2\sum_{\alpha\in R^+\setminus R^+_0}\alpha(t)\\[3pt]
&=& 2\sum_{\alpha\in R^+}\alpha(t)-2\sum_{\alpha\in R^+_0}\alpha(t)\,.
\end{eqnarray*}
\end{proof}
By the last lemma and Proposition 4.1 of \cite{AP}, it follows
\begin{prop}\label{perelomov}
Let $\Pi=\Pi^0\cup \Pi^1=\{\alpha_1,\ldots ,\alpha_\ell\}$ be the simple root system (corresponding to the gradation) and denote by $\pi_i$ the fundamental weight corresponding to the simple root $\alpha_i$, namely
$$
2\frac{(\pi_i,\alpha_j)}{(\alpha_j,\alpha_j)} =\delta_{ij}\,.
$$
If $P^1 =\{\pi_{i_1},\ldots , \pi_{i_r}\}$, then the Koszul form $\psi$ is equal to
\begin{equation}\label{koszulform}
\psi =2\sum_{\pi\in P^1}n_\pi\pi=2\sum_{h=1}^r a_{i_h}\pi_{i_h}\,,
\end{equation}
where
\begin{equation}\label{koszulcoefficients}
a_{i_h}=2+b_{i_h}\,,\quad \hbox{with}\quad b_{i_h}=-2\frac{(\delta^\gh,\alpha_{i_h})}{(\alpha_{i_h},\alpha_{i_h})}\geq 0\,.
\end{equation}
\end{prop}
\smallskip
Note that the $1$-form $\psi$ depends only on the decomposition $\Pi=\Pi^0\cup\Pi^1$.\smallskip\par
Let us denote by $\{X_\alpha\,,\,\alpha\in R,\,\,H_i\,,\,\,i=1,\ldots,\ell\}$ a Chevalley basis of the Lie algebra $\ggg$. For $\alpha\in R$, we denote by $\omega^\alpha$ the linear form on $\ggg$ such that
$$
\omega^\alpha (\ga)=0\,,\quad\omega^\alpha(X_\beta)=\delta^\alpha_\beta\,,
$$
for any $\beta\in R$. If $\xi\in \ga^*$ and $\alpha\in R$, then we put
$$
n(\xi,\alpha) =2\frac{(\xi,\alpha)}{(\alpha,\alpha)}\,.
$$
The next lemma easily follows from the commutation rules in the Lie algebra $\ggg$ (see \cite[p.~145]{H} and also \cite{AP}).
\begin{lemma}\label{dxilemma}
The differential of any $1$-form $\xi\in \ga^*$ is given by
$$
d\xi =\sum_{\alpha\in R^+} n(\xi,\alpha) \omega^\alpha \wedge \omega^{-\alpha}\,.
$$
Moreover, $d\xi$ is an $\ad_\ga$-invariant $2$-form on $\ggg$ with kernel
$$
\ker (d\xi) = \ga + \mbox{\rm span}\left\{E_\alpha\,\,\vert\,\, (\xi,\alpha)=0 \right\}\,.
$$
\end{lemma}
By Proposition \ref{perelomov} and Lemma \ref{dxilemma} we obtain the following
\begin{cor}\label{corkoszul}
The $\ad_\gh$-invariant $2$-form $\rho = d\psi$ on the Lie algebra $\ggg$ has kernel $\gh$.
\end{cor}
\begin{proof}
Let $P^1= \{\pi_{i_1},\ldots ,\pi_{i_r}\}$ be the fundamental weights which correspond to the crossed simple roots $\Pi^1=\{\alpha_{i_1},\ldots ,\alpha_{i_r}\}$. Then $\psi = 2(a_{i_1}\pi_{i_1}+\cdots + a_{i_r}\pi_{i_r})$. By Lemma \ref{dxilemma}, in order to determine the kernel of $d\psi$, it is sufficient to describe all roots $\alpha$ with scalar product $(\psi,\alpha)=0$. Recall that
$$
(\pi_{i},\alpha_{i})=\frac{1}{2}(\alpha_{i},\,\alpha_{i})
$$
and the scalar product of $\pi_i$ with the other simple roots is zero. We can write any root $\alpha$ as
$$
\alpha = k_{1}\alpha_{i_1}+\cdots + k_{r}\alpha_{i_r} +\beta\,,
$$
where $\beta$ is a linear combination of the other simple roots. Then
\begin{eqnarray*}
(\psi,\alpha) &=&2\,(a_{i_1}\pi_{i_1}+\cdots + a_{i_r}\pi_{i_r},\,k_{1}\alpha_{i_1}+\cdots + k_{r}\alpha_{i_r}+\beta) \\
&=&k_1a_{i_1} \left(\alpha_{i_1},\alpha_{i_1}\right)+\cdots +k_ra_{i_r} \left(\alpha_{i_r},\alpha_{i_r}\right)\,.
\end{eqnarray*}
This shows that $(\psi,\alpha)=0$ if and only if $k_1=\cdots =k_r=0$, that is $\alpha\in R_0$. Hence the kernel of $\rho$ is $\gh$.
\end{proof}
By Corollary \ref{corhomogparakaehler}, Corollary \ref{corkoszul} and the above discussion, we obtain the following
\begin{thm}
Let $R$ be the root system of a complex semisimple Lie algebra $\ggg$ with respect to a Cartan subalgebra $\ga$ and $\ggg =\ggg_{-k}+\cdots + \ggg_k$ the fundamental gradation with the grading element $d$ associated with a decomposition $\Pi=\Pi^0\cup\Pi^1$ of a simple root system $\Pi\subset R$. Let ${\sigma}$ be an admissible anti-involution of $\ggg$ which defines the graded real form $\ggg^{\sigma}$ of $\ggg$, \, $G^{\sigma}$ a connected real semisimple Lie group with Lie algebra $\ggg^{\sigma}$ and $M=G^{\sigma}/H^{\sigma}$ a covering of the semisimple adjoint orbit ${\,{\rm Ad}\,}_{G^{\sigma}}(d)$. Denote by $K$ and $\psi$ the invariant para-complex structure on $M$ and the Koszul form associated with the gradation of $\ggg^{\sigma}$, and by $\rho$ the invariant symplectic form on $M$ defined by $d\psi$.\\
Then, for any $\lambda\neq 0$, the pair $(K,\lambda\rho)$ is an invariant para-K\"ahler Einstein structure on $M$, and this construction exhausts all homogeneous para-K\"ahler Einstein manifolds of real semisimple Lie groups.
\end{thm}
\subsection{Examples.}
In this subsection we describe the Koszul form $\psi$ for adjoint orbits $M=G^{\sigma}/H={\,{\rm Ad}\,}_{G^{\sigma}}(h)$ of some simple Lie groups $G^{\sigma}$.
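The coefficients in \eqref{koszulform} and \eqref{koszulcoefficients} can be cross-checked by a direct computation with the roots $\epsilon_i-\epsilon_j$ of $A_\ell$. The helper below is a hedged sketch (the function names and setup are ours, not the paper's): it recomputes $a_{i_k}=2+b_{i_k}$ from $\delta^\gh$ and compares with the closed expression $i_{k+1}-i_{k-1}$ appearing in the $A_\ell$ example below.

```python
import numpy as np

# Hedged numerical cross-check (our helper, not from the paper): for g = sl(l+1),
# compute a_i = 2 + b_i with b_i = -2 (delta^h, alpha_i)/(alpha_i, alpha_i),
# and compare with the closed formula a_{i_k} = i_{k+1} - i_{k-1}.
def koszul_a(ell, crossed):
    """crossed: 1-based indices i_1 < ... < i_r of the crossed simple roots."""
    e = np.eye(ell + 1, dtype=int)
    simple = [e[i] - e[i + 1] for i in range(ell)]               # alpha_{i+1}
    pos = [e[i] - e[j] for i in range(ell + 1) for j in range(i + 1, ell + 1)]

    def coords(r):
        # coefficient of alpha_k in a root = k-th partial sum of its coordinates
        return np.cumsum(r)[:ell]

    # delta^h = sum of positive roots of g_0, i.e. roots avoiding crossed nodes
    delta_h = sum((r for r in pos
                   if all(coords(r)[i - 1] == 0 for i in crossed)),
                  np.zeros(ell + 1, dtype=int))
    # (alpha_i, alpha_i) = 2 for A_l, so a_i = 2 - (delta^h, alpha_i)
    return [2 - int(np.dot(delta_h, simple[i - 1])) for i in crossed]

def closed_formula(ell, crossed):
    idx = [0] + list(crossed) + [ell + 1]
    return [idx[k + 1] - idx[k - 1] for k in range(1, len(idx) - 1)]
```

For instance, $\ell=3$ with $\Pi^1=\{\alpha_2\}$ (the ${\rm SL}(2,\quater)$ case below) gives $b_2=2$, $a_2=4$ by either route.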
In the case of $G_2$ we also indicate the para-K\"ahler Einstein form $\rho =d\psi$.\vskip.2truecm\noindent
\paragraph{\bf Case $\ggg = A_\ell = \mathfrak{sl}(\ell+1, \mathbb{C})$.}
The root system is $R = \{ \epsilon_i - \epsilon_j\}$ and the system of simple roots is $\Pi = \{\alpha_i = \epsilon_i - \epsilon_{i+1} \}$. We will consider two real forms of $\ggg$. \smallskip \\
i) $G^{\sigma} = {\rm SL}(\ell +1,\R)$.\\
We denote by $M = {\rm SL}(\ell+1)/H$ the homogeneous manifold associated with the subset of simple roots
$\Pi^1=\{\alpha_{i_1},\ldots ,\alpha_{i_r}\,\,\vert\,\, 1\leq i_1<i_2<\cdots < i_r\leq\ell\}$.\\
The manifold $M$ has dimension $(\ell +1)^2-\displaystyle\sum_{k=1}^{r+1}(i_{k}-i_{k-1})^2$, where we assume that $i_0=0$ and $i_{r+1}=\ell +1$. Taking into account formula \eqref{koszulform}, we get the Koszul form
$$
\psi = 2\sum_{k=1}^r(i_{k+1}-i_{k-1})\pi_{i_k}\,.
$$
ii) $G^{\sigma}= {\rm SL}(2,\quater)$ and $\Pi^1=\{\alpha_2\}$. \\
This case corresponds to the crossed Satake diagram
$$
\xymatrix @M=0pt @R=2pt @!C=6pt{
\bullet \ar@{-}[r] &\APLcirc{\times} \ar@{-}[r] &\bullet }
$$
The manifold $M$ has dimension $8$. According to \eqref{koszulcoefficients}, we have $b_2=2$ and therefore $a_2 =4$. Hence the Koszul form is
$$
\psi =2a_2\pi_2= 8\pi_2=4\alpha_1+8\alpha_2+4\alpha_3 \,.
$$
Taking into account Lemma \ref{dxilemma}, by a direct computation we get
$$
\begin{array}{lll}
\rho &=& 8\left(\omega^{\alpha_2}\wedge\omega^{-\alpha_2}+\omega^{\alpha_1+\alpha_2}\wedge\omega^{-(\alpha_1+\alpha_2)}+ \omega^{\alpha_2+\alpha_3}\wedge\omega^{-(\alpha_2+\alpha_3)}+\right. \\[6pt]
&{}&+ \left.\omega^{\alpha_1+\alpha_2+\alpha_3}\wedge\omega^{-(\alpha_1+\alpha_2+\alpha_3)} \right)\,.
\end{array}
$$
\paragraph{\bf Case of the complex exceptional Lie algebra $\ggg = \ggg_2$.}
The system of simple roots can be written as $\Pi = \{ \alpha_1, \alpha_2 \}$ and the associated system of positive roots is
$$
R^+=\{\alpha_1,\alpha_2,\alpha_1+\alpha_2,2\alpha_1+\alpha_2, 3\alpha_1+\alpha_2,3\alpha_1+2\alpha_2\}.
$$
We consider the normal real form $(\ggg_2)^{\sigma}$ of $\ggg_2$ and denote by $G^{\sigma} = G_2$ the corresponding simple Lie group. \newline
The fundamental weights are
$$
\pi_1 =2\alpha_1 +\alpha_2\,, \quad \pi_2 =3\alpha_1 +2\alpha_2\,.
$$
We have the following three cases:
\begin{itemize}
\item[i)]\ $
\xymatrix @M=0pt @R=2pt @!C=6pt{
{\APLcirc{\times}}\ar@{=}[r]|{\SelectTips{cm}{}\dir{>}}\ar@{-}[r]&{\circ}\\
{\alpha_1}&{\alpha_2} }\qquad $ $\Pi^1=\{\alpha_1\}\,$,\vskip.3truecm\noindent
\item[ii)]\ $
\xymatrix @M=0pt @R=2pt @!C=6pt{
{\circ}\ar@{=}[r]|{\SelectTips{cm}{}\dir{>}}\ar@{-}[r]&{\APLcirc{\times}}\\
{\alpha_1}&{\alpha_2} }\qquad $ $\Pi^1=\{\alpha_2\}\,$,\vskip.3truecm\noindent
\item[iii)]\ $
\xymatrix @M=0pt @R=2pt @!C=6pt{
{\APLcirc{\times}}\ar@{=}[r]|{\SelectTips{cm}{}\dir{>}}\ar@{-}[r]&{\APLcirc{\times}}\\
{\alpha_1}&{\alpha_2} }\qquad $ $\Pi^1=\{\alpha_1,\alpha_2\}\,$.\vskip.3truecm\noindent
\end{itemize}
i) The manifold $M$ has dimension $10$. By applying formula \eqref{koszulcoefficients}, we obtain $b_1 =3$ and consequently $a_1=5$. Therefore, by formula \eqref{koszulform}, the Koszul form $\psi$ can be expressed as
$$
\psi = 10 \pi_1=10\left(2\alpha_1+\alpha_2\right)\,.
$$
By Lemma \ref{dxilemma}, it follows that the para-K\"ahler Einstein form $\rho$ is given by
$$
\begin{array}{lll}
\rho &=& 10\left(\omega^{\alpha_1}\wedge\omega^{-\alpha_1}+\omega^{\alpha_1+\alpha_2}\wedge\omega^{-(\alpha_1+\alpha_2)}+ 2\,\omega^{2\alpha_1+\alpha_2}\wedge\omega^{-(2\alpha_1+\alpha_2)}+\right. \\[6pt]
&{}& +\left.\omega^{3\alpha_1+\alpha_2}\wedge\omega^{-(3\alpha_1+\alpha_2)}+\omega^{3\alpha_1+2\alpha_2}\wedge\omega^{-(3\alpha_1+2\alpha_2)} \right)\,.
\end{array}
$$
ii) The manifold $M$ has dimension $10$. By \eqref{koszulcoefficients}, we get $b_2=1$ and consequently $a_2=3$.
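The $G_2$ coefficients $b_1=3$ and $b_2=1$ can be recomputed directly from \eqref{koszulcoefficients}. The sketch below uses explicit coordinates for the simple roots of $G_2$ in $\R^2$ (our choice of normalization, not taken from the text); case iii) gives $b_1=b_2=0$, consistent with $\psi=4(\pi_1+\pi_2)$.

```python
import numpy as np

# Hedged cross-check (our coordinates, not the paper's): simple roots of G_2
# with alpha_1 short and alpha_2 long, and the six positive roots.
a1 = np.array([1.0, 0.0])
a2 = np.array([-1.5, np.sqrt(3.0) / 2.0])
simple = [a1, a2]
pos_roots = [a1, a2, a1 + a2, 2*a1 + a2, 3*a1 + a2, 3*a1 + 2*a2]

def b_coefficients(crossed):
    """crossed: 0-based indices of Pi^1; returns b_i for each crossed root."""
    def coords(r):
        # coordinates of a root in the basis of simple roots
        return np.linalg.solve(np.column_stack(simple), r)
    # delta^h = sum of the positive roots of g_0 (those avoiding crossed nodes)
    delta_h = sum((r for r in pos_roots
                   if all(abs(coords(r)[i]) < 1e-9 for i in crossed)),
                  np.zeros(2))
    return [-2.0 * np.dot(delta_h, simple[i]) / np.dot(simple[i], simple[i])
            for i in crossed]
```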
Hence, according to \eqref{koszulform} and Lemma \ref{dxilemma}, the Koszul form $\psi$ and the para-K\"ahler Einstein form $\rho$ are given respectively by
$$
\psi = 6 \pi_2 =6\left(3\alpha_1+2\alpha_2\right)
$$
and
$$
\begin{array}{lll}
\rho &=& 6\left(\omega^{\alpha_2}\wedge\omega^{-\alpha_2}+3\,\omega^{\alpha_1+\alpha_2}\wedge\omega^{-(\alpha_1+\alpha_2)}+ 3\,\omega^{2\alpha_1+\alpha_2}\wedge\omega^{-(2\alpha_1+\alpha_2)}+\right. \\[6pt]
&{}& +\left.\omega^{3\alpha_1+\alpha_2}\wedge\omega^{-(3\alpha_1+\alpha_2)}+2\,\omega^{3\alpha_1+2\alpha_2}\wedge\omega^{-(3\alpha_1+2\alpha_2)} \right)\,.
\end{array}
$$
iii) The manifold $M$ has dimension $12$. In this case we get
$$
\psi = 4(\pi_1+\pi_2)=4(5\alpha_1 +3\alpha_2)
$$
and
$$
\begin{array}{lll}
\rho &=& 4 \left(\omega^{\alpha_1}\wedge\omega^{-\alpha_1}+ \omega^{\alpha_2}\wedge\omega^{-\alpha_2}+ 4\,\omega^{\alpha_1+\alpha_2}\wedge\omega^{-(\alpha_1+\alpha_2)}+\right.\\[6pt]
&{}&+\left. 5\,\omega^{2\alpha_1+\alpha_2}\wedge\omega^{-(2\alpha_1+\alpha_2)}+ 2\,\omega^{3\alpha_1+\alpha_2}\wedge\omega^{-(3\alpha_1+\alpha_2)}+\right.\\[6pt]
&{}&+\left.3\,\omega^{3\alpha_1+2\alpha_2}\wedge\omega^{-(3\alpha_1+2\alpha_2)} \right)\,.
\end{array}
$$
\begin{thebibliography}{123}
\bibitem{AkS} Akivis M.A., Shelekhov A.M.: Geometry and algebra of multidimensional three-webs, Kluwer Academic Publ. Group, 1992.
\bibitem{AB} Al-Aqeel A., Bejancu A.: On the geometry of paracomplex submanifolds, {\em Demonstratio Math.} {\bf 34} (2001), 919--932.
\bibitem{A} Alekseevsky D.: Pseudo-K\"ahler and para-K\"ahler symmetric spaces, to be published in {\em Handbook of Pseudo-Riemannian Geometry and Supersymmetry}, 31 pp.
\bibitem{ABCV} Alekseevsky D.V., Blazic N., Cort\'es V., Vukmirovic S.: A class of Osserman spaces, {\em J. Geom. Phys.} {\bf 53} (2005), 345--353.
\bibitem{AC1} Alekseevsky D.V., Cort\'es V.: Classification of pseudo-Riemannian symmetric spaces of quaternionic K\"ahler type, in {\em "Lie groups and invariant theory"} (Edited by E. Vinberg), {\em Amer. Math. Soc. Transl. Ser. 2} {\bf 213}, Amer. Math. Soc., Providence, RI, 2005, pp. 33--62.
\bibitem{AC} Alekseevsky D.V., Cort\'es V.: The twistor spaces of a para-quaternionic K\"ahler manifold, {\em Osaka J. Math.} {\bf 45} (2008), 1--37.
\bibitem{AKam} Alekseevsky D., Kamishima Y.: Quaternionic and para-quaternionic CR structure on $(4n+3)$-dimensional manifolds, {\em Cent. Eur. J. Math.} {\bf 2} (2004), 732--753.
\bibitem{AKam1} Alekseevsky D., Kamishima Y.: Pseudo-conformal quaternionic CR structure on $(4n+3)$-dimensional manifolds, {\em Ann. Mat. Pura Appl.} (4) {\bf 187} (2008), 487--529.
\bibitem{AM} Alekseevsky D.V., Medori C.: Bi-isotropic decompositions of semisimple Lie algebras and homogeneous bi-Lagrangian manifolds, {\em J. of Algebra} {\bf 313} (2007), 8--27.
\bibitem{AMT} Alekseevsky D.V., Medori C., Tomassini A.: Maximally homogeneous para-CR manifolds, {\em Ann. Glob. Anal. Geom.} {\bf 30} (2006), 1--27.
\bibitem{AMT1} Alekseevsky D.V., Medori C., Tomassini A.: Maximally homogeneous para-CR manifolds of semisimple type, {\tt arXiv:0808.0431v1}, to be published in {\em "Handbook of Pseudo-Riemannian Geometry and Supersymmetry"}.
\bibitem{AP} Alekseevsky D.V., Perelomov A.M.: Invariant K\"ahler-Einstein metrics on compact homogeneous spaces, {\em Functional Anal. Appl.} {\bf 20} (1986), 171--182.
\bibitem{AS} Alekseevsky D.V., Spiro A.F.: Homogeneous bi-Lagrangian manifolds and invariant Monge-Amp\`ere equations, in {\em "Symmetries and Perturbation Theory"} (Edited by G. Gaeta, R. Vitolo, S.
Walcher), World Scientific, 2007, pp. 3--12.
\bibitem{An1} Andrada A.: Complex product structures and affine foliations, {\em Ann. Global Anal. Geom.} {\bf 27} (2005), 377--405.
\bibitem{An} Andrada A.: Hypersymplectic Lie algebras, {\em J. Geom. Phys.} {\bf 56} (2006), 2039--2067.
\bibitem{ADBO} Andrada A., Barberis M.L., Dotti I.G., Ovando G.P.: Product structures on four dimensional solvable Lie algebras, {\em Homology Homotopy Appl.} {\bf 7} (2005), 9--37.
\bibitem{AD} Andrada A., Dotti I.G.: Double products and hypersymplectic structures on $\mathbb{R}^{4n}$, {\em Comm. Math. Phys.} {\bf 262} (2006), 1--16.
\bibitem{AnS} Andrada A., Salamon S.: Complex product structures on Lie algebras, {\em Forum Math.} {\bf 17} (2005), 261--295.
\bibitem{AK} Arsen'eva O.E., Kirichenko V.F.: Self-dual geometry of generalized Hermitian surfaces, {\em Sb. Math.} {\bf 189} (1998), 19--41.
\bibitem{BE} Bailey T.N., Eastwood M.G.: Complex paraconformal manifolds---their differential geometry and twistor theory, {\em Forum Math.} {\bf 3} (1991), 61--103.
\bibitem{BB} Bejan C.-L., Benyounes M.: Harmonic maps between almost para-Hermitian manifolds, in {\em "New Developments in Differential Geometry", Budapest, 1996}, Kluwer Acad. Publ., Dordrecht, 1999, pp. 67--97.
\bibitem{Ben-B} Ben-Bassat O.: Mirror symmetry and generalized complex manifolds, {\em J. Geom. Phys.} {\bf 56} (2006), 533--558.
\bibitem{BL1} Benoist Y., Labourie F.: Sur les espaces homog\`{e}nes mod\`{e}les de vari\'{e}t\'{e}s compactes, {\em Inst. Hautes \'{E}tudes Sci. Publ. Math.} {\bf 76} (1992), 99--109.
\bibitem{BL} Benoist Y., Labourie F.: Sur les diff\'{e}omorphismes d'Anosov affines \`{a} feuilletages stable et instable diff\'{e}rentiables, {\em Invent. Math.} {\bf 111} (1993), 285--308.
\bibitem{Be} Bertram W.: The geometry of Jordan and Lie structures, {\em Lecture Notes in Mathematics} {\bf 1754}, Springer-Verlag, Berlin, 2000.
\bibitem{Bl} Blair D.E.: A product twistor space, {\em Serdica Math. J.} {\bf 28} (2002), 163--174.
\bibitem{BDM} Blair D.E., Davidov J., Mu\v skarov O.: Hyperbolic twistor spaces, {\em Rocky Mountain J. Math.} {\bf 35} (2005), 1437--1465.
\bibitem{BV} Bla\v zi\'c N., Vukmirovi\'c S.: Para-hypercomplex structures on a four-dimensional Lie group, in {\em "Contemporary Geometry and Related Topics"}, World Sci. Publ., River Edge, NJ, 2004, pp. 41--56.
\bibitem{BCGHV} Bonome A., Castro R., Garc\'{i}a-R\'{i}o E., Hervella L., V\'{a}zquez-Lorenzo R.: On the paraholomorphic sectional curvature of almost para-Hermitian manifolds, {\em Houston J. Math.} {\bf 24} (1998), 277--300.
\bibitem{Br} Bryant R.L.: Bochner-K\"ahler metrics, {\em J. Amer. Math. Soc.} {\bf 14} (2001), 623--715.
\bibitem{CI} Cari\~{n}ena J.F., Ibort L.A.: Bi-Lagrangian connections and the reduction of Lax equations, {\em Lett. Math. Phys.} {\bf 8} (1984), 359--365.
\bibitem{CV} Cecotti S., Vafa C.: Topological-anti-topological fusion, {\em Nuclear Phys. B} {\bf 367} (1991), 359--461.
\bibitem{CLS} Cort\'{e}s V., Lawn M.-A., Sch\"afer L.: Affine hyperspheres associated to special para-K\"ahler manifolds, {\em Int. J. Geom. Methods Mod. Phys.} {\bf 3} (2006), 995--1009.
\bibitem{CMMS1} Cort\'{e}s V., Mayer C., Mohaupt T., Saueressig F.: Special geometry of Euclidean supersymmetry. I. Vector multiplets, {\em J. High Energy Phys.} {\bf 028} (2004), 73 pp.
\bibitem{CMMS2} Cort\'{e}s V., Mayer C., Mohaupt T., Saueressig F.: Special geometry of Euclidean supersymmetry. II. Hypermultiplets and the $c$-map, {\em J. High Energy Phys.} {\bf 025} (2005), 27 pp.
\bibitem{CS} Cort\'{e}s V., Sch\"afer L.: Topological-antitopological fusion equations, pluriharmonic maps and special K\"ahler manifolds, in {\em "Complex, Contact and Symmetric Manifolds"}, Progr. Math. {\bf 234}, Birkh\"auser Boston, 2005, pp. 59--74.
\bibitem{CFG} Cruceanu V., Fortuny P., Gadea P.M.: A survey on paracomplex geometry, {\em Rocky Mountain J. Math.} {\bf 26} (1996), 83--115.
\bibitem{CGM} Cruceanu V., Gadea P.M., Munoz Masque J.: Para-Hermitian and para-K\"ahler manifolds, {\em Quaderni Inst. Mat. Univ. Messina} {\bf 1} (1995), 1--72.
\bibitem{DJS} Dancer A.S., Jorgensen H.R., Swann A.F.: Metric geometry over the split quaternions, {\em Rend. Sem. Mat. Univ. Politec. Torino} {\bf 63} (2005), 119--139.
\bibitem{DS} Dancer A., Swann A.: Toric hypersymplectic quotients, {\em Trans. Amer. Math. Soc.} {\bf 359} (2007), 1265--1284.
\bibitem{DDW} Defever F., Deszcz R., Verstraelen L.: On pseudosymmetric para-K\"ahler manifolds, {\em Colloq. Math.} {\bf 74} (1997), 253--260.
\bibitem{DDW1} Defever F., Deszcz R., Verstraelen L.: On semisymmetric para-K\"ahler manifolds, {\em Acta Math. Hungar.} {\bf 74} (1997), 7--17.
\bibitem{Dj} Djokovi\'c D.\v{Z}.: Classification of $\Z$-graded real semisimple Lie algebras, {\em J. of Algebra} {\bf 76} (1982), 367--382.
\bibitem{Do} Donato S.: CICR submanifolds of a para-K\"ahler manifold having the self-orthogonal Killing property, {\em Atti Accad. Peloritana Pericolanti Cl. Sci. Fis. Mat. Natur.} {\bf 66} (1988), 283--292.
\bibitem{D} Dubrovin B.: Geometry and integrability of topological-antitopological fusion, {\em Comm. Math. Phys.} {\bf 152} (1993), 539--564.
\bibitem{Er} Erdem S.: Paracomplex projective models and harmonic maps into them, {\em Contrib. to Alg. and Geom.} {\bf 40} (1999), 385--398.
\bibitem{EF} Etayo F., Fioravanti M.: CR-submanifolds of the paracomplex projective space, {\em Publ. Math.
Debrecen} {\bf 47} (1995), 151--160.
\bibitem{EFT} Etayo F., Fioravanti M., Tr\'{i}as U.R.: On the submanifolds of an almost para-Hermitian manifold, {\em Acta Math. Hungar.} {\bf 85} (1999), 277--286.
\bibitem{EST} Etayo F., Santamar\'{i}a R., Tr\'{i}as U.R.: The geometry of a bi-Lagrangian manifold, {\em Differential Geom. Appl.} {\bf 24} (2006), 33--59.
\bibitem{FPPW} Fino A., Pedersen H., Poon Y.S., Weye M.S.: Neutral Calabi-Yau structures on Kodaira manifolds, {\em Commun. Math. Phys.} {\bf 248} (2004), 255--268.
\bibitem{GM1} Gadea P.M., Masque J.M.: Classification of almost para-Hermitian manifolds, {\em Rend. Mat. Appl. (7)} {\bf 11} (1991), 377--396.
\bibitem{GM2} Gadea P.M., Montesinos A.: The paracomplex projective spaces as symmetric and natural spaces, {\em Indian J. Pure Appl. Math.} {\bf 23} (1992), 261--275.
\bibitem{GO} Gadea P.M., Oubi\~{n}a J.A.: Homogeneous almost para-Hermitian structures, {\em Indian J. Pure Appl. Math.} {\bf 26} (1995), 351--362.
\bibitem{GMV} Garc\'{i}a-R\'{i}o E., Matsushita Y., V\'{a}zquez-Lorenzo R.: Paraquaternionic K\"ahler manifolds, {\em Rocky Mountain J. Math.} {\bf 31} (2001), 237--260.
\bibitem{GGP} Gibbons G.W., Green M.B., Perry M.J.: Instantons and seven-branes in type IIB superstring theory, {\em Phys. Lett. B} {\bf 370} (1996), 37--44.
\bibitem{GOV} Gorbatsevich V.V., Onishchik A.L., Vinberg E.B.: Structure of Lie groups and Lie algebras, in {\em Encyclopaedia of Mathematical Sciences, Vol. 41}, Springer-Verlag, Berlin, 1993.
\bibitem{G} Gualtieri M.: Generalized complex geometry, DPhil thesis, University of Oxford, 2003, {\tt math.DG/0401221}.
\bibitem{Hess} Hess H.: Connections on symplectic manifolds and geometric quantization, in {\em "Differential Geometrical Methods in Mathematical Physics" (Proc. Conf., Aix-en-Provence/Salamanca, 1979)}, Lecture Notes in Math. {\bf 836}, 1980, pp. 153--166.
\bibitem{Hi} Hitchin N.J.: Generalized Calabi-Yau manifolds, {\em Quart. J. Math.} {\bf 54} (2003), 281--308.
\bibitem{Hou} Hou Z.X.: On homogeneous paracomplex and para-K\"ahler manifolds, {\em Chinese Ann. Math.} {\bf 15} (1994), 193--206.
\bibitem{HD} Hou Z.X., Deng S.: Recent progress on dipolarizations in Lie algebras and homogeneous para-K\"ahler manifolds, {\em Adv. Math. (China)} {\bf 30} (2001), 489--494.
\bibitem{HDK} Hou Z., Deng S., Kaneyuki S.: Dipolarizations in compact Lie algebras and homogeneous para-K\"ahler manifolds, {\em Tokyo J. Math.} {\bf 20} (1997), 381--388.
\bibitem{HDKN} Hou Z., Deng S., Kaneyuki S., Nishiyama K.: Dipolarizations in semisimple Lie algebras and homogeneous para-K\"ahler manifolds, {\em J. Lie Theory} {\bf 9} (1999), 215--232.
\bibitem{H} Humphreys J.E.: {\em Introduction to Lie Algebras and Representation Theory}, Graduate Texts in Mathematics {\bf 9}, Springer-Verlag, New York-Berlin, 1972.
\bibitem{IMV} Ianus S., Mazzocco R., V\^{i}lcu G.E.: Real lightlike hypersurfaces of paraquaternionic K\"ahler manifolds, {\em Mediterr. J. Math.} {\bf 3} (2006), 581--592.
\bibitem{IR} Ianus S., Rizza G.B.: Some submanifolds of a para-K\"ahler manifold, {\em Rend. Circ. Mat. Palermo (2)} {\bf 47} (1998), 71--80.
\bibitem{IV} Ianus S., V\^{i}lcu G.E.: Some constructions of almost para-hyperhermitian structures on manifolds and tangent bundles, preprint {\tt arXiv:0707.3360v1} (2007), 10 pp.
\bibitem{ITZ} Ivanov S., Tsanov V., Zamkovoy S.: Hyper-parahermitian manifolds with torsion, {\em J. Geom. Phys.} {\bf 56} (2006), 670--690.
\bibitem{IZ} Ivanov S., Zamkovoy S.: Parahermitian and paraquaternionic manifolds, {\em Differential Geom. Appl.} {\bf 23} (2005), 205--234.
\bibitem{K1} Kaneyuki S.: On classification of para-Hermitian symmetric spaces, {\em Tokyo J. Math.} {\bf 8} (1985), 473--482.
\bibitem{K2} Kaneyuki S.: Compactification of parahermitian symmetric spaces and its applications. II. Stratifications and automorphism groups, {\em J. Lie Theory} {\bf 13} (2003), 535--563.
\bibitem{KKoz} Kaneyuki S., Kozai M.: Paracomplex structures and affine symmetric spaces, {\em Tokyo J. Math.} {\bf 8} (1985), 81--98.
\bibitem{Kir} Kirichenko V.F.: Methods of generalized Hermitian geometry in the theory of almost contact manifolds, in {\em Itogi Nauki i Tekhniki, "Problems of Geometry", Vol. 18} (Russian), Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1986, pp. 25--71; translated in {\em J. Soviet Math.} {\bf 42} (1988), 1885--1919.
\bibitem{KK} Kirichenko V.F., Konnov V.V.: Almost K\"ahler manifolds of hyperbolic type, {\em Izv. Math.} {\bf 67} (2003), 655--694.
\bibitem{KN} Kobayashi S., Nomizu K.: {\em Foundations of Differential Geometry. Vol. II}, Interscience Publishers John Wiley \& Sons, Inc., New York-London-Sydney, 1969.
\bibitem{Kon} Konderak J.J.: A symplectic reduction for pseudo-Riemannian manifolds with compatible almost product structures, {\em Beitr\"age Algebra Geom.} {\bf 45} (2004), 465--479.
\bibitem{K} Koszul J.L.: Sur la forme hermitienne canonique des espaces homog\`{e}nes complexes, {\em Canad. J. Math.} {\bf 7} (1955), 562--576.
\bibitem{Kr} Krantz Th.: Holonomy representations which are a diagonal direct sum of two faithful representations, preprint {\tt arXiv:0704.2776v3} (2008).
\bibitem{LS} Lawn M.-A., Sch\"afer L.: Decompositions of para-complex vector bundles and para-complex affine immersions, {\em Results Math.} {\bf 48} (2005), 246--274.
\bibitem{L1} Libermann P.: Sur les structures presque paracomplexes, {\em C. R. Acad. Sci. Paris} {\bf 234} (1952), 2517--2519.
\bibitem{L2} Libermann P.: Sur le probl\`{e}me d'\'{e}quivalence de certaines structures infinit\'{e}simales, {\em Ann. Mat. Pura Appl.
(4)} {{\beta}f 36} (1954), 27--120. {\beta}ibitem{MN} Marchiafava S., Nagy P.: (Anti-)hypercomplex structures and 3-webs on a manifold. Preprint, Universit\`a degli Studi di Roma "La Sapienza" n. 38/03, 2003 {\beta}ibitem{Ma} Matsushita Y.: On Euler characteristics and Hirzebruch indices of four-dimensional almost para-Hermitian manifolds, {\em JP J. Geom. Topol.} {{\beta}f 5} (2005), 115--120. {\beta}ibitem{M} Molchanov V. F.: Quantization on para-Hermitian symmetric spaces, in {\em "Contemporary mathematical physics"}, {\em Amer. Math. Soc. Transl. Ser. 2} {{\beta}f 175}, Amer. Math. Soc., Providence, RI, 1996, pp. 81--95. {\beta}ibitem {MV} Molchanov V.F., Van Dijk G.: The Berezin form for rank one para-Hermitian symmetric spaces. {\em J. Math. Pures Appl. (9)} {{\beta}f 77} (1998), 747--799. {\beta}ibitem{MV1} Molchanov V.F., Volotova N.B.: Polynomial quantization on rank one para-Hermitian symmetric spaces, {\em Acta Appl. Math.} {{\beta}f 81} (2004), 215--232. {\beta}ibitem{Mo} Morvan J-M.: Connexions et feuilletages en g\'{e}om\'{e}trie symplectique, {\em C. R. Acad. Sci. Paris S\'{e}r. I Math.} {{\beta}f 296} (1983), 765--768. {\beta}ibitem{Mo1} Morvan J-M.: Quelques invariants topologiques en g\'{e}om\'{e}trie symplectique, {\em Ann. Inst. H. Poincar´e Sect. A (N.S.)} {{\beta}f 38} (1983), 349--370. {\beta}ibitem{NS} Nurowski P., Sparling G.A.J.: Thre-dimensional Cauchy-Riemann structures and second-order ordinary differential equations, {\em Class. Quantum Grav.} {{\beta}f 20} (2003), 4995-5016. {\beta}ibitem{O1} Olszak Z.: On para-Hermitian structures on twisted products on Lie groups, {\em Bull. Math. Soc. Sci. Math. Roumanie (N.S.)} {{\beta}f 43(91)} (2000), 313--323. {\beta}ibitem{O2} Olszak Z.: On 4-dimensional conformally flat left invariant para-Hermitian structures, in {\em "Geometry and topology manifolds" (Krynica, 1999)}, {\em Univ. Iagel. Acta Math.} {{\beta}f 38} (2000), 29--40. 
\bibitem{O3} Olszak Z.: Left invariant para-Hermitian structures on semidirect products of Lie groups, {\em Tensor (N.S.)} {\bf 62} (2000), 1--11.
\bibitem{O} Olszak Z.: On natural para-Hermitian structures on twisted products of Lie groups, in {\em "PDEs, submanifolds and affine differential geometry" (Warsaw, 2000)}, {\em Banach Center Publ.} {\bf 57}, Polish Acad. Sci., Warsaw, 2002, pp. 203--209.
\bibitem{OPM} Oproiu V., Papaghiuc N., Mitric G.: Some classes of para-Hermitian structures on cotangent bundles, {\em An. Stiint. Univ. Al. I. Cuza Iasi. Mat. (N.S.)} {\bf 43} (1997), 7--22.
\bibitem{Rash} Ra\v sevski\u{\i} P.K.: The scalar fields in a stratified space, {\em Trudy Sem. Vektor. Tenzor. Analizu} {\bf 6} (1948), 225--248.
\bibitem{Ri} Rizza G.B.: Some remarks on para-K\"ahler manifolds, in {\em "Proceedings of the Second International Workshop on Differential Geometry and its Applications" (Constanta, 1995)}, {\em An. Stiint. Univ. Ovidius Constanta Ser. Mat.} {\bf 3} (1995), 113--120.
\bibitem{Ro} Rosca R.: CR-sous-vari\'{e}t\'{e}s co-isotropes d'une vari\'{e}t\'{e} parak\"ahl\'{e}rienne, {\em C. R. Acad. Sci. Paris S\'{e}r. I Math.} {\bf 298} (1984), 149--151.
\bibitem{SG} Santos-Garcia G., Gadea P.M.: A Weitzenb\"{o}ck formula for the Laplacian of para-K\"ahler space forms, {\em Rend. Sem. Mat. Messina Ser. II} {\bf 2(16)} (1993), 81--89.
\bibitem{S1} Sch\"afer L.: $tt^*$-bundles in para-complex geometry, special para-K\"ahler manifolds and para-pluriharmonic maps, {\em Differential Geom. Appl.} {\bf 24} (2006), 60--89.
\bibitem{S2} Sch\"afer L.: Para-$tt^*$-bundles on the tangent bundle of an almost para-complex manifold, {\em Ann. Global Anal. Geom.} {\bf 32} (2007), 125--145.
\bibitem{S} Sch\"afer L.: Harmonic bundle solutions of topological-antitopological fusion in para-complex geometry, {\em Differential Geom. Appl.} {\bf 26} (2008), 97--105.
\bibitem{T} Tabachnikov S.: Geometry of Lagrangian and Legendrian 2-web, {\em Differential Geom. Appl.} {\bf 3} (1993), 265--284.
\bibitem{V1} Vaisman I.: Basics of Lagrangian foliations, {\em Publ. Mat.} {\bf 33} (1989), 559--575.
\bibitem{V} Vaisman I.: Reduction and submanifolds of generalized complex manifolds, {\em Differential Geom. Appl.} {\bf 25} (2007), 147--166.
\bibitem{W} Wade A.: Dirac structures and paracomplex manifolds, {\em C. R. Acad. Sci. Paris} {\bf 338} (2004), 889--894.
\bibitem{Wei} Weinstein A.: Symplectic manifolds and their Lagrangian submanifolds, {\em Advances in Math.} {\bf 6} (1971), 329--346.
\bibitem{Z} Zamkovoy S.: Geometry of paraquaternionic K\"ahler manifolds with torsion, {\em J. Geom. Phys.} {\bf 57} (2006), 69--87.
\end{thebibliography}
\end{document}
\begin{document} \title{{\normalsize Published in Foundations of Physics Letters, vol. 12, pp. 291--298 (1998)}\\ Maxwell equations as the one-photon quantum equation\\ } \author{A. Gersten} \address{Department of Physics, Ben-Gurion University of the Negev, Beer-Sheva, Israel \\ e-mail: [email protected]} \date{May 7, 1999} \maketitle \begin{abstract} Maxwell equations (Faraday and Ampere-Maxwell laws) can be presented as a three-component equation in a way similar to the two-component neutrino equation. However, in this case, the electric and magnetic Gauss's laws cannot be derived from first principles. We show how all Maxwell equations can be derived simultaneously from first principles, similar to those which have been used to derive the Dirac relativistic electron equation. We also show that equations for massless particles, derived by Dirac in 1936, lead to the same result. The complex wave function, being a linear combination of the electric and magnetic fields, is a locally measurable and well-understood quantity. Therefore Maxwell equations should be used as a guideline for proper interpretations of quantum theories. \end{abstract} \pacs{Key words: Maxwell equations, one photon, quantum equation.} The Maxwell equations (except for the electric and magnetic Gauss's laws) can be presented by a three-component equation in a way similar to the two-component neutrino equation. This was already known to Oppenheimer \cite{oppenheimer} and to Majorana \cite{mignani}, \cite{gersten}. This type of equation is also a particular case of a more general equation for any spin derived by Weinberg \cite{weinberg}. There is continued interest in this equation even to this day \cite{good}, \cite{tucker}, \cite{ahluwalia}, \cite{dvoe}. However, one of the drawbacks of the above derivations is that the electric and magnetic Gauss's laws are not derived from first principles.
The aim of the present letter is to complement the above-mentioned works, and to derive all Maxwell equations directly from a decomposition similar to that which was used to derive the Dirac relativistic electron equation. The Dirac equation is derived from the relativistic condition on the energy $E$, mass $m$, and momentum ${\bf \vec{p}}$:
\begin{equation}
\left( E^{2}-c^{2}{\bf \vec{p}}^{2}-m^{2}c^{4}\right) I^{(4)}\Psi =0,  \label{n1}
\end{equation}
where $I^{(4)}$ is the $4\times 4$ unit matrix and $\Psi$ is a four-component column (bispinor) wave function. Eq. (\ref{n1}) is decomposed into
\begin{equation}
\left[ EI^{(4)}+\left(
\begin{array}{cc}
mc^{2}I^{(2)} & c{\bf \vec{p}\cdot \vec{\sigma}} \\
c{\bf \vec{p}\cdot \vec{\sigma}} & -mc^{2}I^{(2)}
\end{array}
\right) \right] \left[ EI^{(4)}-\left(
\begin{array}{cc}
mc^{2}I^{(2)} & c{\bf \vec{p}\cdot \vec{\sigma}} \\
c{\bf \vec{p}\cdot \vec{\sigma}} & -mc^{2}I^{(2)}
\end{array}
\right) \right] \Psi =0,  \label{n2}
\end{equation}
where $I^{(2)}$ is the $2\times 2$ unit matrix and ${\bf \vec{\sigma}}$ is the Pauli spin one-half vector matrix with the components
\begin{equation}
\sigma _{x}=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) ,\quad \sigma _{y}=\left(
\begin{array}{cc}
0 & -i \\
i & 0
\end{array}
\right) ,\quad \sigma _{z}=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right) ,\quad I^{(2)}=\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right) .  \label{n3}
\end{equation}
The two-component neutrino equation can be derived from the decomposition
\begin{equation}
\left( E^{2}-c^{2}{\bf \vec{p}}^{2}\right) I^{(2)}\psi =\left[ EI^{(2)}-c{\bf \vec{p}\cdot \vec{\sigma}}\right] \left[ EI^{(2)}+c{\bf \vec{p}\cdot \vec{\sigma}}\right] \psi =0,  \label{n6}
\end{equation}
where $\psi$ is a two-component spinor wave function.
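As an independent sanity check (not part of the original derivation), the factorization in Eq. (\ref{n2}) can be verified numerically: building the $4\times 4$ block matrix $M$ from $mc^{2}$ and $c{\bf \vec{p}\cdot \vec{\sigma}}$, one has $M^{2}=(m^{2}c^{4}+c^{2}{\bf \vec{p}}^{2})I^{(4)}$, so $(EI+M)(EI-M)=(E^{2}-c^{2}{\bf \vec{p}}^{2}-m^{2}c^{4})I^{(4)}$. The following pure-Python sketch confirms this; the values of $E$, $m$, $c$ and ${\bf \vec{p}}$ are arbitrary sample inputs, not taken from the text.

```python
# Numerical check of the factorization in Eq. (n2): with
#   M = [[m c^2 I, c p.sigma], [c p.sigma, -m c^2 I]],
# one has (E I + M)(E I - M) = (E^2 - c^2 p^2 - m^2 c^4) I.
# Sample values of E, m, c, p are arbitrary test inputs.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def isclose_mat(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A[0])))

# Pauli matrices, Eq. (n3)
sigma = {
    'x': [[0, 1], [1, 0]],
    'y': [[0, -1j], [1j, 0]],
    'z': [[1, 0], [0, -1]],
}

c, m, E = 1.0, 2.0, 5.0
p = {'x': 0.3, 'y': -1.1, 'z': 0.7}

# The 2x2 block c p.sigma
ps = [[c * sum(p[a] * sigma[a][i][j] for a in 'xyz') for j in range(2)]
      for i in range(2)]

# Assemble the 4x4 matrix M from its 2x2 blocks.
M = [[0j] * 4 for _ in range(4)]
for i in range(2):
    for j in range(2):
        M[i][j] = m * c**2 * (i == j) + 0j
        M[i][j + 2] = ps[i][j]
        M[i + 2][j] = ps[i][j]
        M[i + 2][j + 2] = -m * c**2 * (i == j) + 0j

I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
lhs = matmul([[E * I4[i][j] + M[i][j] for j in range(4)] for i in range(4)],
             [[E * I4[i][j] - M[i][j] for j in range(4)] for i in range(4)])
p2 = sum(v * v for v in p.values())
scalar = E**2 - c**2 * p2 - m**2 * c**4
rhs = [[scalar * I4[i][j] for j in range(4)] for i in range(4)]
assert isclose_mat(lhs, rhs)
```

The check succeeds for any sample values because the off-diagonal blocks of $M^{2}$ cancel and $({\bf \vec{p}\cdot \vec{\sigma}})^{2}={\bf \vec{p}}^{2}I^{(2)}$.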
We shall derive the photon equation from the following decomposition \begin{equation} \left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) I^{\left( 3\right) }{\bf =}\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) \left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) -\left( \begin{array}{ccc} p_{x}^{2} & p_{x}p_{y} & p_{x}p_{z} \\ p_{y}p_{x} & p_{y}^{2} & p_{y}p_{z} \\ p_{z}p_{x} & p_{z}p_{y} & p_{z}^{2} \end{array} \right) =0, \label{n7} \end{equation} where $I^{\left( 3\right) }$ is a $3\times 3$ unit matrix, and ${\bf \vec{S}} $ is a spin one vector matrix with components \begin{equation} S_{x}=\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{array} \right) ,\quad S_{y}=\left( \begin{array}{ccc} 0 & 0 & i \\ 0 & 0 & 0 \\ -i & 0 & 0 \end{array} \right) ,\quad S_{z}=\left( \begin{array}{ccc} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) ,\quad I^{\left( 3\right) }=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) , \label{n8} \end{equation} and with the properties \begin{equation} \left[ S_{x},S_{y}\right] =iS_{z},\quad \left[ S_{z},S_{x}\right] =iS_{y},\quad \left[ S_{y},S_{z}\right] =iS_{x},\quad {\bf \vec{S}} ^{2}=2I^{\left( 3\right) }. \label{n9} \end{equation} The decomposition (\ref{n7}) can be verified directly by substitution. It will be crucial to note that the matrix on the right hand side of Eq. ( \ref{n7}) can be rewritten as: \begin{equation} \left( \begin{array}{ccc} p_{x}^{2} & p_{x}p_{y} & p_{x}p_{z} \\ p_{y}p_{x} & p_{y}^{2} & p_{y}p_{z} \\ p_{z}p_{x} & p_{z}p_{y} & p_{z}^{2} \end{array} \right) =\left( \begin{array}{c} p_{x} \\ p_{y} \\ p_{z} \end{array} \right) \left( \begin{array}{ccc} p_{x} & p_{y} & p_{z} \end{array} \right) . \label{n10} \end{equation} From Eqs. 
$\left( \ref{n7}-\ref{n8}\right)$ and $\left( \ref{n10}\right)$, the photon equation can be obtained from
\begin{equation}
\left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}}=\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) \left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}-\left(
\begin{array}{c}
p_{x} \\
p_{y} \\
p_{z}
\end{array}
\right) \left( {\bf \vec{p}\cdot \vec{\Psi}}\right) =0,  \label{n11}
\end{equation}
where ${\bf \vec{\Psi}}$ is a three-component (column) wave function. Eq. (\ref{n11}) will be satisfied if the two equations
\begin{eqnarray}
\left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}} &=&0,  \label{n12} \\
{\bf \vec{p}\cdot \vec{\Psi}} &=&0,  \label{n13}
\end{eqnarray}
are simultaneously satisfied. For real energies and momenta, complex conjugation of Eqs. (\ref{n11}) and (\ref{n8}) leads to
\begin{equation}
\left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}}^{\ast }=\left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) \left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}^{\ast }-\left(
\begin{array}{c}
p_{x} \\
p_{y} \\
p_{z}
\end{array}
\right) \left( {\bf \vec{p}\cdot \vec{\Psi}}^{\ast }\right) =0,  \label{n13a}
\end{equation}
where ${\bf \vec{\Psi}}^{\ast }$ is the complex conjugate of ${\bf \vec{\Psi}}$. Eq. (\ref{n13a}) will be satisfied if the two equations
\begin{eqnarray}
\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}^{\ast } &=&0,  \label{n13b} \\
{\bf \vec{p}\cdot \vec{\Psi}}^{\ast } &=&0,  \label{n13c}
\end{eqnarray}
are simultaneously satisfied. Eqs. (\ref{n11}) and (\ref{n13a}) are the two different possible decompositions of their left-hand side. Eqs. (\ref{n13a}-\ref{n13c}) do not contain new information, as they are only the complex conjugates of Eqs. (\ref{n11}-\ref{n13}).
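The spin-1 decomposition (\ref{n7}), together with the algebra (\ref{n9}) and the outer-product identity (\ref{n10}), can also be checked numerically with the explicit matrices of Eq. (\ref{n8}). The pure-Python sketch below does so; the sample values of $E$, $c$ and ${\bf \vec{p}}$ are arbitrary test inputs, not taken from the text.

```python
# Numerical check of the spin-1 decomposition, Eqs. (n7)/(n10):
#   (E/c I - p.S)(E/c I + p.S) - p p^T = (E^2/c^2 - p^2) I,
# together with the algebra of Eq. (n9).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def madd(A, B, s=1):
    return [[A[i][j] + s * B[i][j] for j in range(3)] for i in range(3)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

# Spin-1 matrices, Eq. (n8)
Sx = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
Sy = [[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]]
Sz = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def comm(A, B):
    return madd(matmul(A, B), matmul(B, A), -1)

# Eq. (n9): [Sx, Sy] = i Sz and S^2 = 2 I
assert close(comm(Sx, Sy), [[1j * Sz[i][j] for j in range(3)] for i in range(3)])
S2 = madd(madd(matmul(Sx, Sx), matmul(Sy, Sy)), matmul(Sz, Sz))
assert close(S2, [[2 * I3[i][j] for j in range(3)] for i in range(3)])

E, c = 3.0, 1.0
p = (0.4, -0.9, 1.2)
pS = [[p[0]*Sx[i][j] + p[1]*Sy[i][j] + p[2]*Sz[i][j] for j in range(3)]
      for i in range(3)]
A = madd([[E/c * I3[i][j] for j in range(3)] for i in range(3)], pS, -1)
B = madd([[E/c * I3[i][j] for j in range(3)] for i in range(3)], pS, +1)
ppT = [[p[i] * p[j] for j in range(3)] for i in range(3)]   # Eq. (n10)
lhs = madd(matmul(A, B), ppT, -1)
p2 = sum(v * v for v in p)
rhs = [[(E**2/c**2 - p2) * I3[i][j] for j in range(3)] for i in range(3)]
assert close(lhs, rhs)
```

The identity holds because $({\bf \vec{p}\cdot \vec{S}})^{2}={\bf \vec{p}}^{2}I^{\left( 3\right) }-{\bf \vec{p}}\,{\bf \vec{p}}^{T}$, which is exactly the term removed by the outer-product matrix (\ref{n10}).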
On the other hand, the physical interpretation of the two decompositions is different: Eq. (\ref{n12}) is the negative-helicity equation, while Eq. (\ref{n13b}) is the positive-helicity equation. It is interesting to note that another, equivalent set of equations is also possible. Eqs. (\ref{n11}) and (\ref{n13}) can be rewritten as
\begin{equation}
\left(
\begin{array}{c}
p_{x} \\
p_{y} \\
p_{z}
\end{array}
\right) \left( {\bf \vec{p}\cdot \vec{\Psi}}\right) =\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) \left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}-\left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}}=0,  \label{n13d}
\end{equation}
which will be satisfied if the two equations
\begin{equation}
\left( \frac{E}{c}I^{\left( 3\right) }+{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}=0,\quad \left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}}=0,  \label{n13f}
\end{equation}
or their equivalents
\begin{equation}
\left( \frac{E}{c}I^{\left( 3\right) }-{\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}^{\ast }=0,\quad \left( \frac{E^{2}}{c^{2}}-{\bf \vec{p}}^{2}\right) {\bf \vec{\Psi}}^{\ast }=0,  \label{n13g}
\end{equation}
are simultaneously satisfied. Maxwell equations will be derived from Eqs. (\ref{n12}-\ref{n13}). We will show below that if in Eqs. (\ref{n12}) and (\ref{n13}) the quantum operator substitutions
\begin{equation}
E\Longrightarrow i\hbar \frac{\partial }{\partial t},\quad {\bf \vec{p}}\Longrightarrow -i\hbar \nabla ,  \label{n14}
\end{equation}
and the wave function substitution
\begin{equation}
{\bf \vec{\Psi}=\vec{E}-}i{\bf \vec{B}},  \label{n15}
\end{equation}
are made, the Maxwell equations are obtained. In Eq. (\ref{n15}) ${\bf \vec{E}}$ and ${\bf \vec{B}}$ are the electric and magnetic fields, respectively. Indeed, one can easily check from Eqs.
(\ref{n8}) and (\ref{n14}) that the following identity is satisfied
\begin{equation}
\left( {\bf \vec{p}\cdot \vec{S}}\right) {\bf \vec{\Psi}}=\hbar \nabla \times {\bf \vec{\Psi}}.  \label{n16}
\end{equation}
From Eqs. (\ref{n12}-\ref{n13}) and (\ref{n14}, \ref{n16}) we obtain
\begin{equation}
\frac{i\hbar }{c}\frac{\partial }{\partial t}{\bf \vec{\Psi}}=-\hbar {\bf \nabla }\times {\bf \vec{\Psi}},  \label{n17}
\end{equation}
\begin{equation}
-i\hbar \nabla \cdot {\bf \vec{\Psi}}=0.  \label{n18}
\end{equation}
The constant $\hbar$ can be cancelled out in Eqs. (\ref{n17}-\ref{n18}), and after replacing ${\bf \vec{\Psi}}$ by Eq. (\ref{n15}), the following equations are obtained
\begin{equation}
{\bf \nabla }\times \left( {\bf \vec{E}-}i{\bf \vec{B}}\right) {\bf =-}i\frac{1}{c}\frac{\partial \left( {\bf \vec{E}-}i{\bf \vec{B}}\right) }{\partial t},  \label{n19}
\end{equation}
\begin{equation}
{\bf \nabla }\cdot \left( {\bf \vec{E}-}i{\bf \vec{B}}\right) {\bf =}0.  \label{n20}
\end{equation}
If in Eqs. (\ref{n19}-\ref{n20}) the electric and magnetic fields are real, the separation into the real and imaginary parts leads to the Maxwell equations
\begin{equation}
{\bf \nabla }\times {\bf \vec{E}=-}\frac{1}{c}\frac{\partial {\bf \vec{B}}}{\partial t}  \label{1}
\end{equation}
\begin{equation}
{\bf \nabla }\times {\bf \vec{B}=}\frac{1}{c}\frac{\partial {\bf \vec{E}}}{\partial t}  \label{2}
\end{equation}
\begin{equation}
{\bf \nabla }\cdot {\bf \vec{E}}=0  \label{3}
\end{equation}
\begin{equation}
{\bf \nabla }\cdot {\bf \vec{B}}=0.  \label{4}
\end{equation}
One should note that the Planck constant $\hbar$ was cancelled out earlier, in Eqs. (\ref{n17}-\ref{n18}), which explains its absence in the Maxwell equations.
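In momentum space the identity (\ref{n16}) amounts to the purely algebraic statement $({\bf \vec{p}\cdot \vec{S}}){\bf \vec{v}}=i\,{\bf \vec{p}}\times {\bf \vec{v}}$ for numeric vectors ${\bf \vec{p}}$, ${\bf \vec{v}}$, which with ${\bf \vec{p}}\Longrightarrow -i\hbar \nabla$ is exactly $\hbar \nabla \times {\bf \vec{\Psi}}$. A pure-Python check (the vectors below are arbitrary sample inputs, not from the text):

```python
# Momentum-space form of Eq. (n16): for the spin-1 matrices of Eq. (n8),
#   (p.S) v = i (p x v)
# for any numeric vectors p and v.

Sx = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
Sy = [[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]]
Sz = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]

def pdotS_apply(p, v):
    # Apply the matrix p_x Sx + p_y Sy + p_z Sz to the column vector v.
    pS = [[p[0]*Sx[i][j] + p[1]*Sy[i][j] + p[2]*Sz[i][j] for j in range(3)]
          for i in range(3)]
    return [sum(pS[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(p, v):
    return [p[1]*v[2] - p[2]*v[1],
            p[2]*v[0] - p[0]*v[2],
            p[0]*v[1] - p[1]*v[0]]

p = (0.5, -1.25, 2.0)
v = (1.0, 0.75, -0.5)
lhs = pdotS_apply(p, v)
rhs = [1j * w for w in cross(p, v)]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

This is the algebraic reason the three-component equation (\ref{n12}) turns into the curl equations (\ref{1})-(\ref{2}) after the substitutions (\ref{n14})-(\ref{n15}).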
Another comment should be made here: starting from equations (\ref{n13f}) and (\ref{n14}-\ref{n15}) one can obtain equations which are equivalent to Maxwell equations (without sources), namely
\begin{equation}
{\bf \nabla }\times {\bf \vec{E}=-}\frac{1}{c}\frac{\partial {\bf \vec{B}}}{\partial t},\quad {\bf \nabla }\times {\bf \vec{B}=}\frac{1}{c}\frac{\partial {\bf \vec{E}}}{\partial t},\quad \left( \frac{\partial ^{2}}{c^{2}\partial t^{2}}-{\bf \nabla }^{2}\right) {\bf \vec{E}}=0,\quad \left( \frac{\partial ^{2}}{c^{2}\partial t^{2}}-{\bf \nabla }^{2}\right) {\bf \vec{B}}=0,  \label{4a}
\end{equation}
where the Gauss laws (\ref{3}-\ref{4}) are satisfied on the basis of Eq. (\ref{n13d}). Dirac \cite{dirac} and Wigner \cite{wigner}, \cite{bacry} have derived relativistic equations for massless particles of any spin from which the Gauss laws, for the spin-one case, can be derived. Moreover, Wigner \cite{wigner} has shown that any finite-component massless field has only two possible helicity states.
Dirac has derived equations for massless particles with spin $k$, which in the ordinary vector notation \cite{dirac} are \begin{equation} \left\{ kp_{t}+S_{x}p_{x}+S_{y}p_{y}+S_{z}p_{z}\right\} \psi =0, \label{a1} \end{equation} \begin{equation} \left\{ kp_{x}+S_{x}p_{t}-iS_{y}p_{z}+iS_{z}p_{y}\right\} \psi =0, \label{a2} \end{equation} \begin{equation} \left\{ kp_{y}+S_{y}p_{t}-iS_{z}p_{x}+iS_{x}p_{z}\right\} \psi =0, \label{a3} \end{equation} \begin{equation} \left\{ kp_{z}+S_{z}p_{t}-iS_{x}p_{y}+iS_{y}p_{x}\right\} \psi =0, \label{a4} \end{equation} where the $p_{n}$ are the momenta, $p_{t}=E/c$, $E$ the energy, $\psi $ a $ \left( 2k+1\right) $ component wave function and $S_{n}$ are the spin $ \left( 2k+1\right) \times \left( 2k+1\right) $ matrices which satisfy \begin{equation} \left[ S_{x},S_{y}\right] =iS_{z},\quad \left[ S_{z},S_{x}\right] =iS_{y},\quad \left[ S_{y},S_{z}\right] =iS_{x},\quad S_{x}^{2}+S_{y}^{2}+S_{z}^{2}=k(k+1)I^{\left( k\right) }, \label{a5} \end{equation} and $I^{\left( k\right) }$ is a $\left( 2k+1\right) \times \left( 2k+1\right) $ unit matrix. As we shall see below, for the case $k=1$, Eq. ( \ref{a1}) will lead to the Faraday and Ampere-Maxwell laws. The Gauss laws can be derived from Eqs. (\ref{a1}-\ref{a4}) in a way which will be described below. Eqs. (\ref{a1}-\ref{a4}) were analyzed extensively by Bacry \cite{bacry}, who derived them using Wigner's condition \cite{wigner} on the Pauli-Lubanski vector $W^{\mu }$ for massless fields \begin{equation} W^{\mu }=kp^{\mu },\quad \mu =x,y,z,t. \label{a6} \end{equation} Let us now demonstrate how Eq. (\ref{n13}) can be derived from Eqs. (\ref{a1}-\ref{a4}). Following Dirac \cite{dirac}, one replaces Eqs. (\ref {a1}-\ref{a4}), which are linearly dependent, with the Eq. (\ref{a1}) and 3 conditions on the wave function, which are obtained by substituting $p_{t}$ from Eq. (\ref{a1}) into Eqs. 
(\ref{a2}-\ref{a4}) \begin{equation} \left\{ kp_{t}+S_{x}p_{x}+S_{y}p_{y}+S_{z}p_{z}\right\} \psi =0, \label{b1} \end{equation} \begin{equation} \left\{ (k^{2}-S_{x}^{2})p_{x}+(ikS_{z}-S_{x}S_{y})p_{y}-\left( ikS_{y}+S_{x}S_{z}\right) p_{z}\right\} \psi =0, \label{b2} \end{equation} \begin{equation} \left\{ (k^{2}-S_{y}^{2})p_{y}+(ikS_{x}-S_{y}S_{z})p_{z}-\left( ikS_{z}+S_{y}S_{x}\right) p_{x}\right\} \psi =0, \label{b3} \end{equation} \begin{equation} \left\{ (k^{2}-S_{z}^{2})p_{z}+(ikS_{y}-S_{z}S_{x})p_{x}-\left( ikS_{x}+S_{z}S_{y}\right) p_{y}\right\} \psi =0. \label{b5} \end{equation} For the case $k=1$, $\psi \equiv \vec{\Psi}$, and using the representation ( \ref{n8}) for the spin matrices, one obtains for Eq. (\ref{b2}) \begin{equation} \left( \begin{array}{ccc} p_{x} & p_{y} & p_{z} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) \vec{\Psi}=0, \label{b6} \end{equation} for Eq. (\ref{b3}) \begin{equation} \left( \begin{array}{ccc} 0 & 0 & 0 \\ p_{x} & p_{y} & p_{z} \\ 0 & 0 & 0 \end{array} \right) \vec{\Psi}=0, \label{b7} \end{equation} and for Eq. (\ref{b5}) \begin{equation} \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ p_{x} & p_{y} & p_{z} \end{array} \right) \vec{\Psi}=0, \label{b8} \end{equation} which all are equivalent to Eq. (\ref{n13}). 
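The reduction of (\ref{b2})-(\ref{b5}) to the matrices (\ref{b6})-(\ref{b8}) amounts to nine matrix identities among the spin-1 matrices, e.g. $k^{2}-S_{x}^{2}=E_{11}$, $ikS_{z}-S_{x}S_{y}=E_{12}$ and $-\left( ikS_{y}+S_{x}S_{z}\right) =E_{13}$ for $k=1$, where $E_{rc}$ denotes the elementary matrix with a single $1$ in row $r$, column $c$ (this notation is ours, introduced only for the check). A pure-Python sketch verifying all nine:

```python
# For k = 1, the coefficient matrices in Dirac's conditions (b2)-(b5)
# collapse to elementary matrices, reproducing Eqs. (b6)-(b8):
#   1 - Sx^2 = E11,  i Sz - Sx Sy = E12,  -(i Sy + Sx Sz) = E13,  etc.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def lincomb(terms):   # terms: list of (scalar, matrix) pairs
    return [[sum(s * M[i][j] for s, M in terms) for j in range(3)]
            for i in range(3)]

def unit(r, c):       # elementary matrix E_{rc}
    return [[1 if (i, j) == (r, c) else 0 for j in range(3)] for i in range(3)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(3) for j in range(3))

Sx = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
Sy = [[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]]
Sz = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Eq. (b2) -> first row of Eq. (b6)
assert close(lincomb([(1, I3), (-1, matmul(Sx, Sx))]), unit(0, 0))
assert close(lincomb([(1j, Sz), (-1, matmul(Sx, Sy))]), unit(0, 1))
assert close(lincomb([(-1j, Sy), (-1, matmul(Sx, Sz))]), unit(0, 2))

# Eq. (b3) -> second row, Eq. (b7)
assert close(lincomb([(1, I3), (-1, matmul(Sy, Sy))]), unit(1, 1))
assert close(lincomb([(1j, Sx), (-1, matmul(Sy, Sz))]), unit(1, 2))
assert close(lincomb([(-1j, Sz), (-1, matmul(Sy, Sx))]), unit(1, 0))

# Eq. (b5) -> third row, Eq. (b8)
assert close(lincomb([(1, I3), (-1, matmul(Sz, Sz))]), unit(2, 2))
assert close(lincomb([(1j, Sy), (-1, matmul(Sz, Sx))]), unit(2, 0))
assert close(lincomb([(-1j, Sx), (-1, matmul(Sz, Sy))]), unit(2, 1))
```

Each condition therefore picks out one row of the transversality constraint ${\bf \vec{p}\cdot \vec{\Psi}}=0$, Eq. (\ref{n13}).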
It is interesting to note that
\begin{eqnarray}
\left(
\begin{array}{ccc}
p_{x}^{2} & p_{x}p_{y} & p_{x}p_{z} \\
p_{y}p_{x} & p_{y}^{2} & p_{y}p_{z} \\
p_{z}p_{x} & p_{z}p_{y} & p_{z}^{2}
\end{array}
\right) &=&\left(
\begin{array}{ccc}
p_{x} & 0 & 0 \\
p_{y} & 0 & 0 \\
p_{z} & 0 & 0
\end{array}
\right) \left(
\begin{array}{ccc}
p_{x} & p_{y} & p_{z} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}
\right)  \label{b9} \\
&=&\left(
\begin{array}{ccc}
0 & p_{x} & 0 \\
0 & p_{y} & 0 \\
0 & p_{z} & 0
\end{array}
\right) \left(
\begin{array}{ccc}
0 & 0 & 0 \\
p_{x} & p_{y} & p_{z} \\
0 & 0 & 0
\end{array}
\right)  \label{b10} \\
&=&\left(
\begin{array}{ccc}
0 & 0 & p_{x} \\
0 & 0 & p_{y} \\
0 & 0 & p_{z}
\end{array}
\right) \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
p_{x} & p_{y} & p_{z}
\end{array}
\right) ,  \label{b11}
\end{eqnarray}
from which we deduce that the equations (\ref{n12}) and (\ref{n13}) and the decomposition (\ref{n7}) can be realized on the basis of Eq. (\ref{b1}) and one of the equations (\ref{b2}), (\ref{b3}) and (\ref{b5}). Above, we have shown how all Maxwell equations can be derived simultaneously from first principles, similar to those which have been used to derive the Dirac relativistic electron equation. Moreover, the wave function ${\bf \vec{\Psi}}$ has a definite local classical interpretation in terms of the electric and magnetic fields, as given by Eq. (\ref{n15}), which are locally measurable and well-understood quantities. Therefore Maxwell equations should be used as a guideline for proper interpretations of quantum theories. \begin{references} \bibitem{oppenheimer} J.R. Oppenheimer, Phys. Rev. {\bf 38}, 725 (1931). \bibitem{mignani} R. Mignani, E. Recami and M. Boldo, Lett. Nuovo Cimento {\bf 11}, 568 (1974). \bibitem{gersten} A. Gersten, Ann. Fond. L. de Broglie, {\bf 21}, 67 (1996). \bibitem{weinberg} S. Weinberg, Phys. Rev. {\bf B133}, 1318 (1964), {\bf B134}, 882 (1964). \bibitem{good} R.H. Good and T.J.
Nelson, {\it Classical Theory of Electric and Magnetic Fields}, Academic Press, New York (1971), Chapter 11.
\bibitem{tucker} R.H. Tucker and C.L. Hammer, Phys. Rev. {\bf D3}, 2448 (1971).
\bibitem{ahluwalia} D.V. Ahluwalia, M.B. Johnson and T. Goldman, Phys. Lett. {\bf B316}, 102 (1993).
\bibitem{dvoe} V.V. Dvoeglazov, Helv. Phys. Acta {\bf 70}, 697 (1997).
\bibitem{dirac} P.A.M. Dirac, Proc. Roy. Soc. {\bf A155}, 447--459 (1936).
\bibitem{wigner} E.P. Wigner, Ann. Math. {\bf 40}, 149 (1939).
\bibitem{bacry} H. Bacry, Nuovo Cimento {\bf A32}, 448--460 (1976).
\end{references} \end{document}
\begin{document} \author{David S. Lipham} \address{Department of Mathematics \& Statistics, Auburn University at Montgomery, Montgomery AL 36117, United States of America} \email{[email protected]; [email protected]} \subjclass[2010]{54F15, 54F65} \keywords{cut point, compactification, continuum, dendron, dendrite, weakly orderable, ordered, irreducible} \begin{abstract} We show that if $X$ is a separable locally compact Hausdorff connected space with fewer than $\mathfrak c$ non-cut points, then $X$ embeds into a dendrite $D\subseteq \mathbb R ^2$, and the set of non-cut points of $X$ is a nowhere dense $G_\delta$-set. We then prove a Tychonoff cut-point space $X$ is weakly orderable if and only if $\beta X$ is an irreducible continuum. Finally, we show every separable metrizable cut-point space densely embeds into a reducible continuum with no cut points. By contrast, there is a Tychonoff cut-point space each of whose compactifications has the same cut point. The example raises some questions about persistent cut points in Tychonoff spaces.\end{abstract} \title{Compactification of cut-point spaces} \section{Introduction} Let $X$ be a connected topological space. A point $x\in X$ is called a \textit{cut point} if $X\setminus \{x\}$ is disconnected. If $X\setminus \{x\}$ has exactly two connected components, then $x$ is a \textit{strong cut point}. A \textit{cut-point space} (resp. \textit{strong cut-point space}) is a connected topological space in which every point is a cut point (resp. strong cut point). In 1936, L.E. Ward \cite{warr} famously proved: If $X$ is a connected and locally connected separable metrizable space in which every point is a strong cut point, then $X$ is homeomorphic to the real line. In 1970, S.P. Franklin and G.V. Krishnarao strengthened Ward's result by showing the word ``metrizable'' could be replaced with ``regular'' \cite{fra1}. 
Then, in a short addendum \cite{fra} they claimed: \textit{If $X$ is a separable locally compact Hausdorff connected space in which every point is a strong cut point, then $X$ is homeomorphic to the real line.} A mistake in the proof was discovered in 1977 by A.E. Brouwer, who then re-proved the statement \cite[Theorem 8]{bro}. More generally, Brouwer showed every separable locally compact Hausdorff cut-point space embeds into a dendrite \cite[Theorems 5 \& 7]{bro}. In Section 2 of this paper, we prove a stronger result using L.E. Ward's 1988 characterization of dendrons. \begin{ut}\label{t2} If $X$ is a connected separable locally compact Hausdorff space with fewer than $\mathfrak c=|\mathbb R|$ non-cut points, then: \begin{enumerate} \item[\textnormal{(i)}] $X$ embeds into a dendrite whose cut points are precisely the cut points of $X$; and \item[\textnormal{(ii)}] the set of non-cut points of $X$ is a nowhere dense $G_\delta$-set. \end{enumerate}\end{ut} \noindent Every dendrite embeds into Wazewski's plane continuum \cite[10.37]{nad} (see Figure 1), so in fact the set $X$ in Theorem 1.1 embeds into a dendrite in the plane. The result also implies that every separable Hausdorff continuum with only countably many non-cut points is a (plane) dendrite. In Sections 3 and 4, we focus on non-dendritic compactifications of Tychonoff cut-point spaces, including weakly ordered spaces. A space $X$ is \textit{weakly orderable} if there exists a continuous linear ordering of the elements of $X$. To be more precise, $X$ is weakly orderable if there is a continuous one-to-one mapping of $X$ into a Hausdorff arc. Apparently, every connected weakly ordered space is a strong cut-point space. In Section 3 we show that a connected Tychonoff space $X$ is weakly orderable if and only if $X$ is a cut-point space and $\beta X$ is an irreducible Hausdorff continuum (Corollary \ref{46}). 
In this event, the Stone-\v{C}ech extension of the weak ordering epimorphism continuously orders the internal layers of $\beta X$. We also show each connected weakly orderable normal space densely embeds into an irreducible Hausdorff continuum of the same weight (Theorem \ref{47}). This generalizes a result proved by Roman Duda in the separable metrizable setting \cite[Theorem 5]{dud}. We will see that each cut point of $X$ is a cut point of $\beta X$ (Theorem \ref{41}). On the other hand, in Section 4 we show every separable metrizable cut-point space densely embeds into a reducible metrizable continuum with no cut points (Theorem \ref{54}). An obvious example is the one-point compactification of $\mathbb R$. A locally connected fan of long lines shows this type of embedding is not possible for all Tychonoff cut-point spaces (see Example \ref{ex2}). \begin{figure} \caption{Universal plane dendrite} \end{figure} \subsection{Terminology} A \textit{continuum} is a connected compact Hausdorff space. An \textit{arc} is a continuum homeomorphic to the interval $[0,1]$. A \textit{Hausdorff arc} is a linearly ordered (Hausdorff) continuum. A connected space $X$ is \textit{dendritic} if every two points are separated by some other point. Two points $a$ and $b$ are \textit{separated by} a third point $c$ if $X\setminus \{c\}$ is the union of two disjoint open sets, one containing $a$ and the other containing $b$. A \textit{dendron} is a dendritic compact Hausdorff space, and a \textit{dendrite} is a metrizable dendron. A topological space $X$ is \textit{connected im-kleinen} at $x\in X$ provided $x$ has arbitrarily small connected neighborhoods. If $X$ is connected im-kleinen at each of its points, then $X$ is \textit{locally connected}. A continuum $X$ is \textit{indecomposable} if every proper subcontinuum of $X$ is nowhere dense. A continuum $X$ is \textit{reducible} if for every two points $a,b\in X$ there exists a proper subcontinuum of $X$ containing $a$ and $b$.
If no proper subcontinuum of $X$ contains both $a$ and $b$, then $X$ is \textit{irreducible between $a$ and $b$}. A continuum which is irreducible between some two of its points is said to be \textit{irreducible}. A \textit{compactification} of a Tychonoff space $X$ is a compact Hausdorff space which has a dense subspace homeomorphic to $X$. $\beta X$ denotes the \textit{Stone-\v{C}ech compactification} of $X$. Each locally compact Hausdorff space $X$ has a compactification $\gamma X$ such that the remainder $\gamma X\setminus X$ is zero-dimensional, and disjoint closed subsets of $X$ with compact boundaries have disjoint closures in $\gamma X$. The canonical compactification of $X$ with these properties is called the \textit{Freudenthal compactification} of $X$. \section{Proof of Theorem 1.1} Let $X$ be a connected separable locally compact Hausdorff space. Let $\nc(X)$ denote the set of non-cut points of $X$. Suppose $|\nc(X)|<\mathfrak c$. Let $D$ be the Freudenthal compactification of $X$. \begin{ucl}\label{35}Every point in $X$ is a cut point of $D$.\end{ucl} \begin{proof} Let $x\in X$, and write $X\setminus \{x\}=U\sqcup V$. Let $W$ be an open subset of $X$ such that $x\in W$ and $\overline W$ is compact. Then $U\setminus W$ and $V\setminus W$ are disjoint closed subsets of $X$ with compact boundaries. Thus $\overline{U\setminus W}\cap \overline{V\setminus W}=\varnothing$. It follows that $D\setminus \{x\}$ is the union of the two disjoint open sets $(U\cap W)\cup \overline{U\setminus W}$ and $(V\cap W)\cup \overline{V\setminus W}$.\end{proof} \begin{ucl}\label{32}For every two non-degenerate subcontinua $K,L\subseteq D$, if $K\subseteq L$ then $K$ contains a cut point of $L$. \end{ucl} \begin{proof}Suppose $K$ and $L$ are non-degenerate subcontinua of $D$ and $K\subseteq L$. Since $D\setminus X$ is zero-dimensional and compact, $K\cap X$ is a non-empty open subset of $K$. Every open subset of a continuum has cardinality at least $\mathfrak c$. 
Hence $|\nc(X)|<\mathfrak c$ implies $K$ contains uncountably many cut points of $X$. By Claim \ref{35}, for each $x\in K\cap X\setminus \nc(X)$ we can write $D\setminus \{x\}=U_x\sqcup V_x$. For a contradiction, suppose $L\setminus \{x\}$ is connected for all $x\in K\cap X\setminus \nc(X)$. Then we may assume $L\setminus \{x\}\subseteq U_x$. For any two points $x\neq y\in K\cap X\setminus \nc(X)$ we have $V_x\subseteq U_y\cup V_y$ and $x\in U_y$. The set $V_x\cup \{x\}$ is connected, therefore $V_x\cup \{x\}\subseteq U_y$ and $V_x\cap V_y=\varnothing$. Thus $\{V_x:x\in K\cap X\setminus \nc(X)\}$ is an uncountable collection of pairwise disjoint non-empty open subsets of $D$. This contradicts the fact that $D$ is separable. Therefore $K$ contains a cut point of $L$.\end{proof} By Claim \ref{32} and \cite[Theorem 1]{warrr}, $D$ is a dendron. Separable dendrons are metrizable by \cite[Theorem I.5]{eb}. Thus $D$ is a dendrite. Clearly $D\setminus X$ contains no cut point of $D$, so by Claim \ref{35} $X$ is equal to the set of cut points of $D$. This concludes our proof of Theorem \ref{t2}(i). Toward proving Theorem \ref{t2}(ii), note that the set of cut points of any dendrite is a countable union of arcs. So by part (i), $\nc(X)$ is a $G_\delta$-subset of $X$. Hence $|\nc(X)|<\mathfrak c$ implies $\nc(X)$ is scattered (and countable). Every open subset of $X$ is perfect, so $\nc(X)$ is nowhere dense. This concludes the proof of Theorem \ref{t2}(ii). \begin{uc}\label{pp}Every separable Hausdorff continuum with only countably many non-cut points is a dendrite.\end{uc} \section{Weakly ordered Tychonoff spaces} In this section we show connected weakly orderable Tychonoff spaces are precisely those cut-point spaces which can be densely embedded into irreducible continua. These include graphs of certain functions defined on the real line. For a non-trivial example, let $\varphi(t)=\sin(1/t)$ for $t\in \mathbb R\setminus \{0\}$ and put $\varphi(0)=0$.
Now let $\mathbb Q =\{q_n:n\geq 1\}$ be an enumeration of the rationals, and define $f:\mathbb R \to [-1,1]$ by $f(t)=\sum_{n=1}^\infty \varphi(t-q_n)\cdot 2^{-n}.$ The graph $X:=\{\langle t,f(t)\rangle:t\in \mathbb R\}$ is connected, and the elements of $X$ are ordered by the first coordinate projection. This example is due to Kuratowski and Sierp\-i\'n\-ski \cite{ks}. More generally, for every $n\leq \omega$ Duda \cite[Theorem 6]{dud} constructed a function $f:\mathbb R\to [0,1]^n$ whose graph is $n$-dimensional and connected. To prove the first two results, we need the following fact: \renewenvironment{quote}{ \list{}{ \leftmargin1cm \rightmargin\leftmargin } \item\relax } {\endlist} \begin{quote} If $U$ and $V$ are disjoint open subsets of a Tychonoff space $X$, and $W$ is an open subset of $\beta X$ such that $W\cap X=U\cup V$, then the sets $W\cap \cl_{\beta X}U$ and $W\cap \cl_{\beta X} V$ are disjoint $\beta X$-open sets whose union is $W$. \end{quote} Proofs may be found in the proofs of \cite[Lemma 1.4]{walk} and \cite[Theorem 4]{lip}. \begin{ut}\label{41}If $X$ is a connected Tychonoff space, then every cut point of $X$ is a cut point of $\beta X$.\end{ut} \begin{proof}Suppose $x\in X$ is a cut point. Write $X\setminus \{x\}=U\sqcup V$. Let $W=\beta X\setminus \{x\}$. By the fact above, $\beta X\setminus \{x\}$ is the union of two disjoint open sets $[\cl_{\beta X} U]\setminus \{x\}$ and $[\cl_{\beta X} V]\setminus \{x\}$. \end{proof} \begin{ut}\label{42}If $X$ is a connected weakly orderable Tychonoff space, then $\beta X$ is an irreducible continuum.\end{ut} \begin{proof}Let $X$ be a connected weakly orderable Tychonoff space, endowed with a weak order. Let $Y$ be a Hausdorff arc compactification of $X$ in the weak order topology. Let $y_0$ and $y_1$ be the endpoints of $Y$, and note that $Y\setminus X\subseteq \{y_0,y_1\}$. Let $f:X\hookrightarrow Y$ be the identity, and let $\beta f:\beta X\to Y$ be the Stone-\v{C}ech extension of $f$.
Then there exist $p\in (\beta f)^{-1}\{y_0\}$ and $q\in (\beta f)^{-1}\{y_1\}$. We claim $\beta X$ is irreducible between $p$ and $q$. Let $K$ be any subcontinuum of $\beta X$ containing $p$ and $q$. We show $X\setminus \{y_0,y_1\} \subseteq K$. Let $x\in (y_0,y_1)$. Take $U=f^{-1}[y_0,x)$ and $V=f^{-1}(x,y_1]$ and $W=\beta X\setminus \{x\}$. By the fact above, $[\cl_{\beta X}U]\setminus \{x\}$ and $[\cl_{\beta X} V]\setminus \{x\}$ are disjoint $\beta X$-open sets covering $\beta X\setminus \{x\}$. Since $p\in \cl_{\beta X}U$, $q\in \cl_{\beta X}V$, and $K$ is connected, we have $x\in K$. Thus $X\setminus \{y_0,y_1\}\subseteq K$. So $K$ contains a dense subset of $\beta X$, therefore $K=\beta X$. \end{proof} \begin{ul}\label{43}Let $X$ be a cut-point space. For all $x_0,x_1\in X$ there are three disjoint non-empty open sets $U$, $W$ and $V$ such that $X\setminus \{x_0,x_1\}=U\cup W\cup V$. \end{ul} \begin{proof}Write $X\setminus \{x_0\}=U\sqcup W_0$ so that $x_1\in W_0$. Write $X\setminus \{x_1\}=W_1\sqcup V$ with $x_0\in W_1$. Let $W=W_0\cap W_1$. Note that $$X\setminus W=X\setminus (W_0\cap W_1)=(X\setminus W_0)\cup (X\setminus W_1)\subseteq U\cup V\cup \{x_0,x_1\}.$$ So $X\setminus \{x_0,x_1\}=U\cup W\cup V$. Clearly $U\cap W=\varnothing$ and $V\cap W=\varnothing$. Finally, $U\cup \{x_0\}$ is connected, so $U\cup \{x_0\}\subseteq W_1$. Therefore $U\cap V=\varnothing$. \end{proof} \begin{ut}\label{44}Let $X$ be a Tychonoff cut-point space, and suppose $\beta X$ is an irreducible continuum. Then $X$ is weakly orderable. \end{ut} \begin{proof}Let $p,q\in \beta X$ be such that $\beta X$ is irreducible between $p$ and $q$. We claim that every indecomposable subcontinuum of $\beta X$ is nowhere dense. Suppose to the contrary that $I$ is an indecomposable subcontinuum of $\beta X$, and $I$ contains a non-empty $\beta X$-open subset $G$. Let $x_0,x_1\in G\cap X$ and write $X\setminus \{x_0,x_1\}=U\sqcup W\sqcup V$ as in Lemma \ref{43}.
Then $U\cap G$ and $V\cap G$ are non-empty open sets. Each composant of $I$ is dense in $I$, and every proper subcontinuum of $I$ is nowhere dense. So there is a nowhere dense subcontinuum $N\subseteq I$ which intersects both $\cl_{\beta X}(U\cap G)$ and $\cl_{\beta X}(V\cap G)$. Since $U\cup \{x_0\}$ and $\{x_1\}\cup V$ are connected, we find that $K:=\cl_{\beta X}U\cup N\cup \cl_{\beta X} V$ is a proper subcontinuum of $\beta X$. By irreducibility between $p$ and $q$, $\{p,q\}\not\subseteq K$. Without loss of generality, assume $p\notin K$. Then $p\in \cl_{\beta X} W\subseteq \cl_{\beta X}(X\setminus U)\cap \cl_{\beta X}(X\setminus V)$. Note that $X\setminus U$ and $X\setminus V$ are connected, and $q\in \cl_{\beta X}(X\setminus U)\cup\cl_{\beta X}(X\setminus V)$. Therefore $p$ and $q$ are contained in a proper subcontinuum of $\beta X$. This is a contradiction. By Gordh \cite{gor} and the claim above, $\beta X$ is a generalized $\lambda$-type continuum. That is, there is a Hausdorff arc $Y$ and a mapping $\lambda:\beta X\to Y$ such that $\{\lambda^{-1}\{y\}:y\in Y\}$ is an upper semi-continuous decomposition of $\beta X$ into maximal nowhere dense subcontinua. To prove $X$ is weakly orderable, it suffices to show $\lambda\restriction X$ is one-to-one. Suppose $x_0,x_1\in X$ and $\lambda(x_0)=y= \lambda(x_1)$. If $x_0\neq x_1$ then we may write $X\setminus \{x_0,x_1\}=U\sqcup W\sqcup V$ as in Lemma \ref{43}. Then $K:=\cl_{\beta X} [U\cup \lambda^{-1}\{y\}\cup V]$ is a subcontinuum of $\beta X$ which contains both $p$ and $q$. Since $\lambda^{-1}\{y\}$ is nowhere dense, $K$ is a proper subset of $\beta X$. This violates irreducibility between $p$ and $q$. Therefore $x_0=x_1$ and $\lambda\restriction X$ is one-to-one.
\end{proof} \begin{ur}We observe that $\lambda^{-1}\{\lambda(x)\}$ is the union of two continua $H$ and $K$ such that $H\cap K=\{x\}$, and $\lambda^{-1}\{\lambda(x)\}=\{x\}$ if and only if $X$ is connected im-kleinen at $x$.\end{ur} \begin{uc}\label{46}A Tychonoff cut-point space $X$ is weakly orderable if and only if $\beta X$ is an irreducible continuum. \end{uc} \begin{proof}Combine Theorems \ref{42} and \ref{44}. \end{proof} \begin{ut}\label{47}Let $X$ be a connected weakly orderable normal space. Then $X$ densely embeds into an irreducible continuum of the same weight as $X$. In particular, if $X$ is separable metrizable then $X$ densely embeds into an irreducible metrizable continuum.\end{ut} \begin{proof} Let $X$ be a connected weakly orderable normal space. Let $\kappa$ be the weight of $X$, i.e.\ the least cardinality of a basis for $X$. By Theorem \ref{42}, $\beta X$ is irreducible between two points $p$ and $q$. Let $\{U_\alpha:\alpha<\kappa\}$ be a basis for $X\setminus \{p,q\}$ with each $U_\alpha\neq\varnothing$. For each $\alpha<\kappa$ we have that $\beta X\setminus U_\alpha$ is the union of two disjoint compact sets $A_\alpha$ and $B_\alpha$ with $p\in A_\alpha$ and $q\in B_\alpha$. By Urysohn's Lemma, for every $\alpha<\kappa$ there is a continuous function $f_\alpha:X\to[0,1]$ such that $f_\alpha[A_\alpha\cap X]=\{0\}$ and $f_\alpha[B_\alpha\cap X]=\{1\}$. Define $f:X\to [0,1]^\kappa$ by $f(x)=\langle f_\alpha(x)\rangle_{\alpha<\kappa}$. Let $g:X\to [0,1]^\kappa$ be a homeomorphic embedding of $X$ into the Tychonoff cube $[0,1]^\kappa$, and put $h=f\times g$. Then $h:X\to [0,1]^{\kappa}\times [0,1]^{\kappa}$ is a homeomorphic embedding, and $\overline{h[X]}$ is a continuum irreducible between $\beta h(p)$ and $\beta h(q)$. Here, $\beta h:\beta X\to \overline{h[X]}$ is the Stone-\v{C}ech extension of $h$. \end{proof} \section{Non-cut points in compactifications} The non-cut point existence theorem for connected compact spaces, stated below, was originally proved by R.L.
Moore \cite{more} in the context of metric spaces. It was generalized for T$_1$ spaces by G.T. Whyburn \cite{why}, and finally for all topological spaces by B. Honari and Y. Bahrampour in \cite{poo}. \begin{ut}[Theorem 3.9 in \cite{poo}]If $X$ is a compact connected topological space with more than one point, then $X$ has at least two non-cut points. \end{ut} No separation axioms are needed to prove the next four results. \begin{ut}\label{52}If $X$ is a cut-point space, then for every $x\in X$ and connected component $C$ of $X\setminus \{x\}$, $C\cup \{x\}$ is non-compact.\end{ut} \begin{proof}Let $X$ be a cut-point space. Let $x\in X$, and let $C$ be a connected component of $X\setminus \{x\}$. Suppose $C\cup \{x\}$ is compact. We will reach a contradiction by finding a non-cut point of $X$ in $C$. Observe that $X\setminus C$ is connected. For if $X\setminus C$ is the union of two nonempty separated sets $A$ and $B$ with $x\in A$, then $C\cup B$ is a connected subset of $X\setminus \{x\}$ strictly bigger than $C$, contradicting the maximality of the component $C$. Also, $C$ is closed in the subspace $X\setminus \{x\}$, implying $\overline{C}\in \{C,C\cup \{x\}\}$. \textit{Case 1}: $\overline{C}=C\cup \{x\}$. Then $C\cup \{x\}$ is a compact connected set with more than one point, and thus has at least two non-cut points; in particular it has a non-cut point $y\in C$. Observe that $X\setminus \{y\}$ is equal to the union of the two connected sets $(C\cup \{x\})\setminus \{y\}$ and $X\setminus C$ which have the point $x$ in common. Therefore $y$ is a non-cut point of $X$. \textit{Case 2}: $\overline{C}=C$. Then $C$ is compact and connected. Additionally, $C$ has more than one point: if $C=\{y\}$, then $X\setminus \{y\}=X\setminus C$ would be connected, making $y$ a non-cut point of the cut-point space $X$. Thus $C$ has two non-cut points $y_0$ and $y_1$. There exists $b\in 2$ such that $\{y_b\}\neq \overline{X\setminus C}\cap C$. By connectedness of $X$ we have $\overline{X\setminus C}\cap C =\overline{X\setminus C}\cap \overline{C}\neq\varnothing$. By the choice of $b$ it follows that $\overline{X\setminus C}\cap (C\setminus \{y_b\})\neq\varnothing$.
Thus $X\setminus \{y_b\}$ is the union of two non-separated connected sets $X\setminus C$ and $C\setminus \{y_b\}$. Therefore $X\setminus \{y_b\}$ is connected and $y_b$ is a non-cut point of $X$. In each case we reached a contradiction. Therefore $C\cup \{x\}$ is non-compact. \end{proof} \begin{uc}\label{53a}Let $X$ be a locally connected cut-point space. If $x\in X$ has a compact neighborhood, and $\{x\}$ is closed, then $X\setminus \{x\}$ has only finitely many connected components. \end{uc} \begin{proof}Suppose $N$ is a compact neighborhood of $x$, and $\{x\}$ is closed. Let $\{C_\alpha:\alpha<\kappa\}$ be the set of connected components of $X\setminus \{x\}$. Since $X$ is locally connected and $X\setminus \{x\}$ is open, each $C_\alpha$ is open. By Theorem \ref{52} and the fact that $\{x\}\cup C_\alpha$ is closed, we have $C_\alpha\setminus N\neq\varnothing$ for each $\alpha<\kappa$. Since $C_\alpha$ is a relatively clopen subset of $X\setminus \{x\}$, by connectedness of $X$ we have $x\in \overline{C_\alpha}$. So $C_\alpha\cap \partial N\neq\varnothing$. The $C_\alpha$'s are pairwise disjoint, so no proper subcollection of $\{C_\alpha:\alpha<\kappa\}$ covers $\partial N$. A finite subcollection covers $\partial N$ by compactness, so $\kappa$ is finite. \end{proof} \begin{uc}\label{53}Let $X$ be a cut-point space which is a dense subset of a compact space $Y$. If $Y\setminus X$ is connected, then $Y$ has no cut points.\end{uc} \begin{proof}Suppose $Y\setminus X$ is connected. For each $p\in Y\setminus X$, the set $Y\setminus \{p\}$ is connected because it has the dense connected subset $X$. Now let $x\in X$. By Theorem \ref{52}, $(\cl_{Y}C)\setminus X\neq\varnothing$ for each connected component $C$ of $X\setminus \{x\}$. Since $Y\setminus X$ is connected, this implies $Y\setminus \{x\}$ is connected. \end{proof} \begin{uc}The one-point compactification of a locally compact cut-point space has no cut points.
\end{uc} To prove the next theorem, we use the fact that every connected separable metrizable space has a metrizable compactification with path-connected remainder. This was proved by Jan J. Dijkstra in \cite{po}. \begin{ut}\label{54}Every separable metrizable cut-point space densely embeds into a reducible metrizable continuum with no cut points.\end{ut} \begin{proof}Let $X$ be a separable metrizable cut-point space. By \cite[Theorem 1]{po}, there is a metrizable compactification $\gamma X$ of $X$ such that $\gamma X\setminus X$ is path-connected. By Corollary \ref{53}, $\gamma X$ has no cut points. It remains to show $\gamma X$ is reducible. To that end, let $p,q\in \gamma X$. We will assume $p\neq q$, and exhibit a proper subcontinuum of $\gamma X$ containing $p$ and $q$. If $p,q\in \gamma X\setminus X$, then there is an arc $A\subseteq \gamma X\setminus X$ with $p,q\in A$. Now suppose $p\in X$ or $q\in X$. Assume $p\in X$, and write $X\setminus \{p\}=U\sqcup V$. Without loss of generality, $q\in \cl_{\gamma X} V$. Then $\cl_{\gamma X} (\{p\}\cup V)$ is a proper subcontinuum of $\gamma X$ containing $p$ and $q$. \end{proof} The following example shows Theorem \ref{54} does not generalize to Tychonoff spaces. \begin{ue}\label{ex2}Let $[0,\omega_1)$ denote the $\omega_1$-long line, which is defined as $\omega_1 \times [0,1)$ in the lexicographic order topology. Endow $A:=[0,\omega_1)\times (\{0\}\cup \{1/n:n=1,2,3,...\})$ with the product topology. Then the locally connected fan $$X:=A/\{\langle x,y\rangle\in A:x=0\text{ or }y=0\}$$ is a Tychonoff cut-point space. Define $\overline{X}$ similarly, with $B:=[0,\omega_1]\times (\{0\}\cup \{1/n:n=1,2,3,...\})$ in the place of $A$. Here $[0,\omega_1]=[0,\omega_1)\cup \{\omega_1\}$ denotes the one-point compactification of $[0,\omega_1)$. If $f$ is any continuous real-valued function on $X$, then $f\restriction [0,\omega_1)\times \{1/n\}$ is eventually constant.
We observe that $f$ continuously extends to $\overline X $ by mapping $\langle \omega_1,1/n\rangle$ to the eventually constant value of $f\restriction [0,\omega_1)\times \{1/n\}$. So $\overline X =\beta X$. Thus if $\gamma X$ is any compactification of $X$, then there is a continuous surjection $\beta \iota:\overline X \to \gamma X$ extending the identity $\iota:X\to X$. The function $\beta \iota$ is finite-to-one, and $\gamma X \simeq \{\beta \iota^{-1}\{p\}:p\in \gamma X \}$ in the quotient topology. Thus, $\gamma X$ is obtained from $\overline X $ by collapsing finite subsets of $\{\langle \omega_1,1/n\rangle:n=1,2,3,...\}$. We see now that $\gamma X\setminus X\simeq \omega$ and $\gamma X\setminus \{\langle 0,0\rangle\}$ has infinitely many connected components. In particular, $\langle 0,0\rangle$ is a cut point of $\gamma X$. \end{ue} We say that a cut point $x\in X$ is \textit{persistent} if $x$ is a cut point of every compactification of $X$. In Example \ref{ex2}, $\langle 0,0\rangle$ is a persistent cut point of $X$. All other cut points of $X$ are non-persistent. To see this, take $\overline X $ and for each $n=1,2,3,...$ glue together the two points $\langle \omega_1,1/(2n-1)\rangle$ and $\langle \omega_1,1/(2n)\rangle$. The resulting continuum has only one cut point: $\langle 0,0\rangle$. \begin{ut}\label{58}Let $X$ be a locally connected Tychonoff cut-point space. If $x\in X$ has a compact neighborhood, then $x$ is non-persistent. \end{ut} \begin{proof} By Corollary \ref{53a}, $X\setminus \{x\}$ has only finitely many components $C_0,C_1,...,C_{n-1}$. By Theorem \ref{52}, for each $i<n$ there exists $p_i\in [\cl_{\beta X}C_i]\setminus X$. The quotient $\beta X/\{p_i:i<n\}$ is a compactification of $X$ in which $x$ is a non-cut point.
\end{proof} \begin{uq}Does every Tychonoff cut-point space have a non-persistent cut point?\end{uq} A positive answer to Question 1 could be viewed as a generalization of the non-cut point existence theorem for Hausdorff continua, since each cut point of a continuum is persistent. \end{document}
\begin{document} \title[Quantitative H\"older Estimates for Even S.I.O. on patches]{Quantitative H\"older Estimates for \\ Even Singular Integral Operators on patches} \author[F. Gancedo]{Francisco Gancedo$^\dagger$} \address{$^\dagger$Departamento de An\'{a}lisis Matem\'{a}tico $\&$ IMUS, Universidad de Sevilla, C/ Tarfia s/n, Campus Reina Mercedes, 41012 Sevilla, Spain. \href{mailto:[email protected]}{[email protected]}} \author[E. Garc\'ia-Ju\'arez]{Eduardo Garc\'ia-Ju\'arez$^{\ddagger}$} \address{$^{\ddagger}$Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585 08007, Barcelona, Spain. \href{mailto:[email protected]}{[email protected]}} \begin{abstract} In this paper we show a constructive method to obtain $\dot{C}^\sigma$ estimates of even singular integral operators on characteristic functions of domains with $C^{1+\sigma}$ regularity, $0<\sigma<1$. Such functions, which classically are known only to belong to the $BMO$ space, were first shown to be bounded in order to obtain global regularity for the vortex patch problem \cite{Chemin1993, BertozziConstantin1993}. This property has since been applied to solve several types of problems in harmonic analysis and PDEs. Going beyond in regularity, these functions are discontinuous across the boundary of the domain, but $\dot{C}^{\sigma}$ on each side. This $\dot{C}^{\sigma}$ regularity has been bounded by the $C^{1+\sigma}$ norm of the domain \cite{Depauw1999, MateuOrobitgVerdera2009,Mitrea2Verdera2016}. Here we provide a quantitative bound which is linear in the $C^{1+\sigma}$ regularity of the domain. This estimate shows explicitly the dependence on the lower order norms and on the non-self-intersecting property of the boundary of the domain. As an application, this quantitative estimate is applied in a crucial manner to the free boundary incompressible Navier-Stokes equations, providing new global-in-time regularity results in the case of viscosity contrast \cite{GancedoG-J2021}.
\end{abstract} \setcounter{tocdepth}{2} \maketitle \tableofcontents \section{Introduction} In this paper we deal with characteristic functions of domains $$ 1_{ D}(x)=\left\{\begin{array}{ll} 1,& x\in D,\\ 0,& x\in \mathbb R^n\smallsetminus\overline{ D}. \end{array}\right. $$ The main interest is to study singular integral operators of Calderón-Zygmund type applied to this kind of functions and obtained as follows \begin{equation} \label{T1O} T(1_ D)(x)=\text{pv}\int_{\mathbb R^n}K(x-y)1_{ D}(y) dy=\lim_{\varepsilon\to 0^+}\int_{|x-y|>\varepsilon}K(x-y)1_{ D}(y) dy,\qquad x\in \mathbb R^n. \end{equation} Above, $\text{pv}$ stands for principal value and the kernel $K$ is homogeneous of degree $-n$, given by $$ K(x)=\frac{\Omega(x)}{|x|^n},\quad\Omega(\lambda x)=\Omega(x)\quad \forall \lambda>0,\quad \int_{|x|=1}\Omega(x)d\sigma(x)=0. $$ In the classical theory of singular integrals, the function $T(1_ D)(x)$ belongs to the $BMO$ space \cite{Stein93}. In particular, for odd kernels, it is not difficult to show that as $x$ approaches a point on $\partial D$ the function $T(1_ D)(x)$ is not bounded and diverges to infinity logarithmically. On the other hand, when the kernel is even, $$ \Omega(x)=\Omega(-x), $$ a new geometric cancellation was found in \cite{Chemin1993,BertozziConstantin1993} which shows that $T(1_ D)(x)$ belongs to $L^\infty$. This $L^\infty$ bound is given in terms of the $C^{1+\sigma}$ norm of the domain, $0<\sigma<1$. The regularity of the boundary, together with the fact that the kernel has mean zero on half spheres, cancels the singularity on the boundary of the domain. The motivation was to show preservation of $C^{1+\sigma}$ regularity for domains moving by the 2D Euler equations; i.e. global in time existence for the vortex patch problem \cite{Chemin1993,BertozziConstantin1993}.
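To make the cancellation concrete, we record a short computation of ours (it is implicit in the references above, but the normalization is ours). Evenness upgrades the mean-zero condition to half spheres: for any unit vector $e$, the substitution $x\mapsto -x$ interchanges the half spheres $\{x\cdot e>0\}$ and $\{x\cdot e<0\}$ while preserving $\Omega$, so
$$
\int_{\{|x|=1,\,x\cdot e>0\}}\Omega(x)\,d\sigma(x)=\frac12\int_{|x|=1}\Omega(x)\,d\sigma(x)=0.
$$
A model case in the plane is $\Omega(x)=(x_1^2-x_2^2)/|x|^2$, which is even, homogeneous of degree zero, and satisfies $\int_0^{2\pi}\cos(2\theta)\,d\theta=0$; up to constants, together with $\Omega(x)=x_1x_2/|x|^2$ it produces the kernels of $\nabla v=\nabla\nabla^{\perp}(-\Delta)^{-1}\omega$ in the vortex patch problem.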
From the harmonic analysis point of view, Calder\'on-Zygmund operators with smooth and even kernel have been studied specifically as they satisfy stronger inequalities than general ones. In \cite{MOV2011}, it is shown that the following pointwise inequality holds for even, higher-order Riesz transforms $$ T^*f(x)\leq CM(Tf)(x), $$ where $T^*$ is the maximal singular integral and $M$ is the Hardy-Littlewood maximal operator. It yields a stronger estimate than the classical Cotlar inequality \cite{Torchinsky1986}. The extra cancellation providing $L^\infty$ bounds has been extensively used in different PDE problems. Considering the Beltrami equation, it guarantees that the solutions are bi-Lipschitz \cite{MateuOrobitgVerdera2009}. For the Muskat problem, modeling the evolution of incompressible immiscible fluids in porous media or Hele-Shaw cells, this bound yields lack of squirt singularities \cite{CordobaGancedo2010} (also known in the literature as splat singularities). For multidimensional aggregation equations with a Newtonian potential, it provides propagation of $C^{1+\sigma}$ regularity up to the blow-up \cite{BGLV2016}. In the two dimensional inhomogeneous Navier-Stokes equations modeling the evolution of incompressible fluids of different densities, this $L^\infty$ bound provides global-in-time regularity for higher order norms ($W^{2,\infty}$ and $C^{2+\sigma}$) of the moving free boundary between the fluids \cite{GancedoG-J2018}. In \cite{GancedoG-J2020}, a combination of parabolic and elliptic estimates together with this $L^\infty$ bound is used to propagate the same higher order norms but for Boussinesq temperature fronts. See also \cite{CMOV2021} for recent developments in contour dynamics for non-linear transport equations. In all these results the singular integral operators are given with $\Omega$ a polynomial function. In this work, we go further in order to control higher regularity for functions given by \eqref{T1O} with even kernel.
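A concrete family of even kernels to keep in mind (our standing example; the normalization of the Beurling transform below is the usual one) comes from the plane: writing $z=x_1+ix_2$,
$$
Bf(z)=-\frac{1}{\pi}\,\text{pv}\int_{\mathbb C}\frac{f(w)}{(z-w)^2}\,dA(w),\qquad \frac{1}{z^2}=\frac{x_1^2-x_2^2-2ix_1x_2}{|x|^4},
$$
so both the real and the imaginary parts of the kernel are of the form $P_2(x)/|x|^{4}$ with $P_2$ an even harmonic homogeneous polynomial of degree two; up to signs and constants these are the second-order Riesz transforms $R_1^2-R_2^2$ and $R_1R_2$.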
Despite the fact that these functions are discontinuous on $\partial D$, it is possible to obtain $C^{\sigma}$ regularity in $ D$ and in $\mathbb R^n\smallsetminus\overline{ D}$. In \cite{MateuOrobitgVerdera2009, Mitrea2Verdera2016}, this regularity has been shown together with qualitative bounds of the form \begin{equation}\label{VerderaBound} \|T(1_ D)\|_{C^\sigma(\overline{ D})}\leq C P (\| D\|_{C^{1+\sigma}}), \end{equation} with $ P $ a polynomial and $C>0$ a constant depending on $n$, $\sigma$, and the geometry of the domain $D$. Above, the function $T(1_ D)$ is extended continuously on $\overline{ D}$. The latter paper also characterizes the regularity of the domains in terms of odd singular integral operators on $\partial D$. It uses harmonic analysis techniques and Clifford algebras as a generalization of the field of complex numbers to higher dimensions. In this paper, we show that the bound above can be improved to make it linear in the higher regularity norm of the boundary. Moreover, the dependency on the arc-chord condition is made explicit: \begin{equation}\label{sharpc1g} \begin{aligned} \|T(1_D)\|_{\dot{C}^{\sigma}(\overline{ D})\cup \dot{C}^{\sigma}(\mathbb R^n\smallsetminus D)}\leq C(1\!+\!|\partial D|) P (\|D\|_{*}\!+\!\|D\|_{\Lip})(1+\|D\|_{\dot{C}^{1+\sigma}}). \end{aligned} \end{equation} Above, $C=C(n,\sigma)$, $\Lip$ stands for Lipschitz, $\| D\|_*$ measures the non self-intersecting property of the boundary $\partial D$, $ P $ is a polynomial function, and $|\partial D|$ denotes the $(n-1)$-dimensional surface area. The higher order norm is homogeneous, given for a function by $$ \|f\|_{\dot{C}^{1+\sigma}}=\|\nabla f\|_{\dot{C}^{\sigma}}=\sup_{x\neq y}\frac{|\nabla f(x)-\nabla f(y)|}{|x-y|^{\sigma}},\quad 0<\sigma<1. $$ See below for more details about the notation.
During the review process of this article the referee pointed out work \cite{Depauw1999}, in which the author also proves an estimate similar to the one above. However, our proof is different, working at the level of the interface via contour dynamics methods. An important motivation for these estimates comes from a classical two dimensional fluid mechanics problem. Concretely, the dynamics of two incompressible immiscible fluids evolving by the inhomogeneous Navier-Stokes equations. In that problem, the viscosity can be understood as a patch function and the gradient of the velocity is related to the viscosity by combinations of second and fourth-order Riesz transforms, $$ T=\partial_j\partial_k(-\Delta)^{-1}(I-\nabla(-\Delta)^{-1}\nabla\cdot),\quad j,k=1,...,n,\quad I \mbox{ the identity.} $$ In \cite{GancedoG-J2021}, global-in-time well-posedness for the evolution of $C^{1+\sigma}$ interfaces between the two fluids is proved. Global-in-time regularity was recently shown in \cite{PaicuZhang2020} for $H^{5/2}$ Sobolev regularity of the interface instead of $C^{1+\sigma}$ and by using striated regularity. In the argument of the proof in \cite{GancedoG-J2021}, the estimate \eqref{sharpc1g} is used in an important manner. In particular, we emphasize the importance of the quantitative bound of the non self-intersection condition $\|D\|_{*}$. This quantity has to be controlled globally, since it is known that free-boundary incompressible Navier-Stokes can develop finite-time pointwise particle collision on the free interface \cite{CCFGG19,CS19}.
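On the Fourier side this structure is transparent. The following computation is ours, with the convention $\widehat{\partial_jf}(\xi)=i\xi_j\hat f(\xi)$ and the Leray projector written as $\mathbb P=I-\nabla(-\Delta)^{-1}\nabla\cdot$. Acting on a vector field $f=(f_m)_{m=1}^n$, the operator $T=\partial_j\partial_k(-\Delta)^{-1}\mathbb P$ has components
$$
\big(\widehat{Tf}\big)_l(\xi)=-\frac{\xi_j\xi_k}{|\xi|^2}\Big(\delta_{lm}-\frac{\xi_l\xi_m}{|\xi|^2}\Big)\hat f_m(\xi)\qquad(\text{summation in } m),
$$
that is, $(Tf)_l=R_jR_kf_l-R_jR_kR_lR_mf_m$ with $R_j$ the Riesz transforms: a combination of second and fourth-order Riesz transforms, as stated above.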
The rest of the paper, Section \ref{main_thm_sec}, is the proof of Theorem \ref{MainTheorem}. To study the H\"older regularity of the operators involved, the proof distinguishes three situations: when the two points are on the boundary (Subsection \ref{on_boundary}), \textit{near} the boundary (Subsection \ref{near_boundary}), and \textit{far} from the boundary (Subsection \ref{reg_far}). The deciding cut-off is defined in terms of the non self-intersecting condition and the $\dot{C}^{1+\sigma}$ regularity of the domain. In the second scenario, we need to consider further whether the separation between the points occurs \textit{mostly} in the normal or the tangential direction. The \textit{nearly} normal direction case is decomposed into purely normal (Case 1) and tangential (Case 2) differences, each one estimated through delicate splittings of the singular integrals. The \textit{nearly} tangential case is reduced, using a fixed point argument, to the purely normal and on-the-boundary cases. Finally, the third situation is less singular. \section{Main Result}\label{main_result} We consider higher-order Riesz transform operators of even order $2l$, $l\geq1$. That is, we deal with Calder\'on-Zygmund operators given by \begin{equation}\label{calderonzygmund} \begin{aligned} T(f)(x)= \lim_{\varepsilon\to 0}\int_{|x-y|>\varepsilon} K(x-y)f(y)dy, \end{aligned} \end{equation} where \begin{equation}\label{kernel} K(x)=\frac{P_{2l}(x)}{|x|^{n+2l}}, \end{equation} and $P_{2l}(x)$ is a homogeneous harmonic polynomial of degree $2l$ in $\mathbb R^n$. We want to study the H\"older regularity of the operator $T$ applied to the characteristic function $1_ D(x)$ of a $C^{1+\sigma}$ domain $ D$, \begin{equation}\label{TOmega} T(1_ D)(x)=\text{pv}\int_{\mathbb R^n}K(x-y)1_{ D}(y) dy=\text{pv}\int_{ D}K(x-y) dy,\qquad x\in \mathbb R^n.
\end{equation} We recall that the operators \eqref{calderonzygmund} have explicit Fourier multipliers, \begin{equation}\label{multip} \mathcal{F}\Big(\frac{P_{m}(x)}{|x|^{n+m-\alpha}}\Big)(\xi)=c_{m,\alpha,n}\frac{P_{m}(\xi)}{|\xi|^{m+\alpha}}, \end{equation} with $0<\alpha\leq n$ and \begin{equation*} c_{m,\alpha,n}=i^m\pi^{\frac{n}2-\alpha}\frac{\Gamma\big(\frac{m+\alpha}{2}\big)}{\Gamma\big(\frac{m+n-\alpha}{2}\big)}. \end{equation*} \begin{remark}\label{harmonic} Any homogeneous polynomial $P_k$ of degree $k$ can be written as $P_k(x)=p_{k}(x)+|x|^2p_{k-2}(x)$, where $p_k$ is a homogeneous harmonic polynomial of degree $k$ and $p_{k-2}$ is homogeneous of degree $k-2$ (Sec. 3 in Chapter 3 of \cite{Stein1970}). Thus the restriction to harmonic polynomials in \eqref{kernel} involves no loss of generality. \end{remark} Using Euler's homogeneous function theorem and integration by parts, the regularity of \eqref{TOmega} can be studied through the associated operators \begin{equation}\label{Soperator} S(f)(x)=\text{pv} \int_{\partial D} k(x-y) f(y)dS(y),\qquad x\in\mathbb{R}^n, \end{equation} where the kernel $k(x)$ is given by \begin{equation*} k(x)=\frac{Q_{2l-1}(x)}{|x|^{n+2l-2}}, \end{equation*} and $Q_{2l-1}$ is a homogeneous harmonic polynomial of degree $2l-1$. Since one of the main motivations of these results is physical, we show the techniques in dimension three. An analogous approach provides the proof in any dimension. We will denote by $D$ a non-self-intersecting bounded domain of class $C^{1+\sigma}$. It is defined as follows. Denote by $V_j$, $j = 1,...,J$, the neighborhoods that provide local charts of the boundary $\partial D$ in such a way that for any $x\in\partial D$ there exists a $V_j\subset\mathbb R^2$ such that $x=Z(\alpha)$ with $\alpha\in V_j$, with well-defined normal vector.
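For orientation, a model chart (an illustration of ours; the proof below does not fix this choice) is a local graph: after a rotation one may take $V_j\subset\mathbb R^2$ and
$$
Z(\alpha)=(\alpha_1,\alpha_2,\varphi(\alpha)),\qquad \varphi\in C^{1+\sigma}(V_j),
$$
for which $\partial_{\alpha_1}Z=(1,0,\partial_{\alpha_1}\varphi)$ and $\partial_{\alpha_2}Z=(0,1,\partial_{\alpha_2}\varphi)$, so the (non-unit) normal is $\partial_{\alpha_1}Z\wedge\partial_{\alpha_2}Z=(-\partial_{\alpha_1}\varphi,-\partial_{\alpha_2}\varphi,1)$. The quantities introduced next then control $\nabla\varphi$, its H\"older seminorm, and the absence of self-intersections of the parametrized surface.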
To measure the non-self intersection and the regularity of the parameterization, we define $$ \|D\|_*=\|F(Z)\|_{L^\infty}=|\partial_{\alpha} Z|_{\inf}^{-1}<\infty, $$ where $$ |\partial_{\alpha} Z|_{\inf}=\min_{j=1,...,J}\{\min \{\inf_{\alpha\in V_j}|\partial_{\alpha_1}Z(\alpha)|,\inf_{\alpha\in V_j}|\partial_{\alpha_2}Z(\alpha)|,\inf_{\alpha\neq\beta, \alpha,\beta\in V_j}\frac{|Z(\alpha)-Z(\beta)|}{|\alpha-\beta|}\}\}. $$ The Lipschitz norm is given by $$ \|D\|_{\Lip}=\max_{j=1,...,J}\sup_{\alpha\neq\beta, \alpha,\beta\in V_j}\frac{|Z(\alpha)-Z(\beta)|}{|\alpha-\beta|}<\infty, $$ and the H\"older seminorm by $$ \|D\|_{\dot{C}^{1+\sigma}}=\max_{j=1,...,J}\sup_{\alpha\neq\beta, \alpha,\beta\in V_j}\frac{|\nabla Z(\alpha)-\nabla Z(\beta)|}{|\alpha-\beta|^{\sigma}}<\infty. $$ There exists also a well-defined normal vector given by $$N=\partial_{\alpha_1}Z\wedge \partial_{\alpha_2}Z.$$ For convenience, we take the parametrization so that $N$ points towards the interior of the domain. Now we are in a position to state our main theorem: \begin{thm}\label{MainTheorem} Assume $D$ is a bounded domain of class $C^{1+\sigma}$, $0<\sigma<1$. Then, the operator \eqref{Soperator} maps boundedly $\dot{C}^{\sigma}(\partial D)$ into $\dot{C}^{\sigma}(\overline{ D})\cup \dot{C}^{\sigma}(\mathbb R^n\smallsetminus D)$. Moreover, the following bound holds \begin{equation*} \|S(f)\|_{\dot{C}^{\sigma}(\overline{ D})\cup \dot{C}^{\sigma}(\mathbb R^n\smallsetminus D)}\leq C(1\!+\!|\partial D|)P(\|D\|_{*}\!+\!\|D\|_{\Lip})\Big(\|f\|_{C^{\sigma}}+\|f\|_{L^\infty}\|D\|_{\dot{C}^{1+\sigma}}\Big), \end{equation*} with $P$ a polynomial function depending on $S$, and $C=C(n,\sigma)$. \end{thm} As indicated before, we can write \eqref{TOmega} as a sum of terms of the form \eqref{Soperator} with $f=N_j$.
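The reduction behind the last sentence can be sketched as follows; this is a formal computation of ours, valid away from the singularity, and the constants play no role. For a harmonic homogeneous polynomial $P_{2l}$ one has $\Delta\big(|x|^{a}P_{2l}(x)\big)=a(a+n-2+4l)\,|x|^{a-2}P_{2l}(x)$, so choosing $a=2-n-2l$,
$$
K(x)=\frac{P_{2l}(x)}{|x|^{n+2l}}=-\frac{1}{2l(n+2l-2)}\,\Delta\Big(\frac{P_{2l}(x)}{|x|^{n+2l-2}}\Big)=\nabla\cdot G(x),\qquad G=-\frac{\nabla\big(P_{2l}(x)|x|^{2-n-2l}\big)}{2l(n+2l-2)},
$$
and the divergence theorem converts the bulk integral \eqref{TOmega} into boundary integrals, $\text{pv}\int_{D}K(x-y)\,dy=\int_{\partial D}G(x-y)\cdot n(y)\,dS(y)$ up to the contribution of the excised small ball. The components of $G$ are quotients of homogeneous polynomials of degree $2l+1$ by $|x|^{n+2l}$, which by Remark \ref{harmonic} reduce to kernels of the type in \eqref{Soperator}.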
Therefore, we have the following result: \begin{thm}\label{cor1} Assume $D$ is a bounded domain of class $C^{1+\sigma}$, $0<\sigma<1$. Then, the Calder\'on-Zygmund operator \eqref{calderonzygmund} applied to the characteristic function of $ D$, \eqref{TOmega}, defines a piecewise $\dot{C}^{\sigma}$ function, \begin{equation*} T(1_D)\in \dot{C}^{\sigma}(\overline{D})\cup \dot{C}^{\sigma}(\mathbb R^n\smallsetminus D). \end{equation*} Moreover, it satisfies the bound \begin{equation*} \begin{aligned} \|T(1_D)\|_{\dot{C}^{\sigma}(\overline{ D})\cup \dot{C}^{\sigma}(\mathbb R^n\smallsetminus D)}\leq C(1\!+\!|\partial D|) P(\|D\|_{*}\!+\!\|D\|_{\Lip})(1+\|D\|_{\dot{C}^{1+\sigma}}), \end{aligned} \end{equation*} with $P$ a polynomial function depending on $T$, and $C=C(n,\sigma)$. \end{thm} \section{Proof of Theorem \ref{MainTheorem}}\label{main_thm_sec} Without loss of generality, we show the proof for the following case \begin{equation*} \begin{aligned} S(f)(x)=\text{pv}\int_{\partial D} k(x-y)f(y)dS(y), \end{aligned} \end{equation*} with \begin{equation*} k(x)=\frac{x_1x_2x_3}{|x|^5}. \end{equation*} The developed techniques can be applied to any other odd homogeneous polynomial and any dimension. We choose this case to show more clearly the crucial steps. The case of $k(x)$ of degree one is more direct. If the degree is greater than three the approach is the same but technically longer. With the kernel chosen, we provide a constructive and direct method, showing the main difficulties and cancellations. \subsection{Regularity on the boundary.}\label{on_boundary} First, consider an atlas of the surface $\partial D$ and, on a given chart, fix a cut-off $\eta>0$ and define the ball $$ A_\eta=\{y\in\partial D:|x-y|<\eta\}.
$$ Consider any $h\in \mathbb R^3$ such that \begin{equation*} |h|\leq \overline{\eta}=\frac{\eta}{4(1+\|F(Z)\|_{L^\infty})(1+\|\partial_{\alpha} Z\|_{L^\infty})}, \end{equation*} so that $x+h\in A_{\overline{\eta}}$, and we will generally write \begin{equation*} x=Z(\alpha),\qquad x+h=Z(\beta). \end{equation*} We will also use the middle coordinate point \begin{equation}\label{mid_alphabeta} \xi=\frac{\alpha+\beta}{2}. \end{equation} Then, \begin{equation}\label{holdersplit_boundary} \begin{aligned} S(f)(x)-S(f)(x+h)&=\text{pv}\int_{A_\eta}\Big(k(x-y)-k(x+h-y)\Big)f(y)dS(y)\\ &\quad+\int_{\partial D \smallsetminus A_\eta}\Big(k(x-y)-k(x+h-y)\Big)f(y)dS(y)\\ &=I+II. \end{aligned} \end{equation} The second term is away from the singular part and thus more regular, \begin{equation*} \begin{aligned} |II|\leq \frac{C}{\eta^3}|\partial D|\|f\|_{L^\infty}|h|. \end{aligned} \end{equation*} The first term is given by \begin{equation*} \begin{aligned} I&=\int_{Z^{-1}(A_\eta)} \big(k(Z(\alpha)-Z(\gamma))-k(Z(\beta)-Z(\gamma))\big)f(Z(\gamma))|N(\gamma)|d\gamma. \end{aligned} \end{equation*} For simplicity of notation, we will denote \begin{equation*} g(\gamma)=f(Z(\gamma))|N(\gamma)|, \end{equation*} so we have that \begin{equation}\label{gbound} \begin{aligned} \|g\|_{L^\infty}&\leq \|\partial_{\alpha} Z\|_{L^\infty}^2\|f\|_{L^\infty},\\ \|g\|_{\dot{C}^{\sigma}}&\leq \|f\|_{\dot{C}^\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^{2+\sigma}+\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\|\partial_{\alpha} Z\|_{L^\infty}\|f\|_{L^\infty}. \end{aligned} \end{equation} The nonlinear kernels in $I$ are neither odd nor given by a derivative.
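The bound for $II$ is the standard smoothing estimate away from the singularity; for completeness, for $y\in\partial D\smallsetminus A_\eta$,
\begin{equation*}
|k(x-y)-k(x+h-y)|\leq |h|\sup_{|z|\geq\frac{3\eta}{4}}|\nabla k(z)|\leq \frac{C}{\eta^{3}}\,|h|,
\end{equation*}
by the mean value theorem, since $|x-y|\geq\eta$, $|x+h-y|\geq\eta-\overline{\eta}\geq\frac{3\eta}{4}$, and $\nabla k$ is homogeneous of degree $-3$; integrating over $\partial D\smallsetminus A_\eta$ gives the stated bound for $|II|$.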
We decompose the $I$ term as follows \begin{equation}\label{Isplit_boundary} \begin{aligned} I=I_1+I_2, \end{aligned} \end{equation} where \begin{equation*} \begin{aligned} I_1=&\int_{Z^{-1}(A_\eta)}\Big(k(Z(\alpha)-Z(\gamma))-k(\partial_{\alpha} Z(\xi)(\alpha-\gamma))\Big)g(\gamma)\hspace{0.05cm} d\gamma\\ &-\int_{Z^{-1}(A_\eta)}\Big(k(Z(\beta)-Z(\gamma))-k(\partial_{\alpha} Z(\xi)(\beta-\gamma))\Big)g(\gamma)\hspace{0.05cm} d\gamma, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} I_2&=\int_{Z^{-1}(A_\eta)}\Big(k(\partial_{\alpha} Z(\xi)(\alpha-\gamma))-k(\partial_{\alpha} Z(\xi)(\beta-\gamma))\Big)g(\gamma)\hspace{0.05cm} d\gamma. \end{aligned} \end{equation*} We estimate $I_2$ first. Since its kernel is odd, we begin by isolating the singularity from the boundary of the chart. We notice that \begin{equation}\label{distances} |\alpha-\beta|\leq \|F(Z)\|_{L^\infty}|h|\leq \|F(Z)\|_{L^\infty}\overline{\eta}\leq \frac{\eta}{4\|\partial_{\alpha} Z\|_{L^\infty}}, \end{equation} and \begin{equation*} d(\alpha, \partial Z^{-1}(A_{\eta}))\geq\frac{\eta}{\|\partial_{\alpha} Z\|_{L^\infty}},\quad d(\beta, \partial Z^{-1}(A_\eta))\geq\frac{3\eta}{4\|\partial_{\alpha} Z\|_{L^\infty}}, \end{equation*} thus we take a smooth radial cut-off $\chi(\gamma)$, centered at $\alpha$, such that $\chi(\gamma)=1$ for $|\alpha-\gamma|\leq \frac{\eta}{2\|\partial_{\alpha} Z\|_{L^\infty}}$ and $\chi(\gamma)=0$ for $|\alpha-\gamma|\geq\frac{3\eta}{4\|\partial_{\alpha} Z\|_{L^\infty}}$.
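The two distance bounds just used follow from the Lipschitz bound $|Z(\alpha)-Z(\gamma)|\leq\|\partial_{\alpha} Z\|_{L^\infty}|\alpha-\gamma|$; we record the short computation: for $Z(\gamma)\in\partial A_\eta$,
\begin{equation*}
\eta=|Z(\alpha)-Z(\gamma)|\leq\|\partial_{\alpha} Z\|_{L^\infty}|\alpha-\gamma|,
\end{equation*}
which gives $d(\alpha,\partial Z^{-1}(A_\eta))\geq\eta/\|\partial_{\alpha} Z\|_{L^\infty}$, and then, by \eqref{distances},
\begin{equation*}
d(\beta,\partial Z^{-1}(A_\eta))\geq d(\alpha,\partial Z^{-1}(A_\eta))-|\alpha-\beta|\geq\frac{\eta}{\|\partial_{\alpha} Z\|_{L^\infty}}-\frac{\eta}{4\|\partial_{\alpha} Z\|_{L^\infty}}=\frac{3\eta}{4\|\partial_{\alpha} Z\|_{L^\infty}}.
\end{equation*}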
Introducing the cut-off in $I_{2}$, we have \begin{equation}\label{I21split_boundary} \begin{aligned} I_{2}&=I_{2,1}+I_{2,2}, \end{aligned} \end{equation} with \begin{equation*} \begin{aligned} I_{2,1}&=\int_{Z^{-1}(A_\eta)}\Big(k(\partial_{\alpha} Z(\xi)(\alpha-\gamma))-k(\partial_{\alpha} Z(\xi)(\beta-\gamma))\Big)\chi(\gamma)g(\gamma)\hspace{0.05cm} d\gamma,\\ I_{2,2}&=\int_{Z^{-1}(A_\eta)}\Big(k(\partial_{\alpha} Z(\xi)(\alpha-\gamma))-k(\partial_{\alpha} Z(\xi)(\beta-\gamma))\Big)(1-\chi(\gamma))g(\gamma)\hspace{0.05cm} d\gamma. \end{aligned} \end{equation*} Throughout the paper, we will need to control the singularity in the kernel $k(\partial_{\alpha} Z(\xi)(\alpha-\gamma))$. We will use that it is comparable to $|\alpha-\gamma|^{-2}$. In fact, since the domain is regular and of class $C^{1+\sigma}$, it holds that \begin{equation}\label{aux} |\partial_{\alpha_1}Z(\xi)\cdot\partial_{\alpha_2}Z(\xi)|\leq (1-\varepsilon)|\partial_{\alpha_1}Z(\xi)||\partial_{\alpha_2}Z(\xi)|, \end{equation} for some $\varepsilon>0$. Therefore, \begin{equation*} \begin{aligned} |\partial_\alpha Z(\xi)(\alpha-\gamma)|^2&\geq (\alpha_1-\gamma_1)^2|\partial_{\alpha_1} Z(\xi)|^2+(\alpha_2-\gamma_2)^2|\partial_{\alpha_2} Z(\xi)|^2\\ &\quad-2(1-\varepsilon)|\alpha_1-\gamma_1||\alpha_2-\gamma_2||\partial_{\alpha_1} Z(\xi)||\partial_{\alpha_2} Z(\xi)|\\ &\geq \varepsilon\Big((\alpha_1-\gamma_1)^2|\partial_{\alpha_1} Z(\xi)|^2+(\alpha_2-\gamma_2)^2|\partial_{\alpha_2} Z(\xi)|^2\Big)\\ &\geq \varepsilon |\partial_{\alpha} Z|_{\inf}^2 |\alpha-\gamma|^2. \end{aligned} \end{equation*} We now see that we can take $\varepsilon>0$ explicit by using the arc-chord quantity.
For all $\gamma\in\mathbb{R}^2\setminus\{0\}$ and $\xi\in\mathbb{R}^2$, \begin{equation*} |\partial_{\alpha} Z|_{\inf}\leq \lim_{t\to 0^+}\frac{|Z(\xi+t\gamma)-Z(\xi)|}{t|\gamma|}=\Big|\partial_\alpha Z(\xi)\frac{\gamma}{|\gamma|}\Big|. \end{equation*} Therefore, \begin{equation*} |\gamma_1\partial_{\alpha_1}Z(\xi)+\gamma_2\partial_{\alpha_2}Z(\xi)|^2\geq |\gamma|^2|\partial_{\alpha} Z|_{\inf}^2. \end{equation*} Taking $\gamma_j=\pm|\partial_{\alpha_j}Z(\xi)|^{-1}$, we conclude that \begin{equation*} |\cos{\theta(\xi)}|\leq 1-\frac{|\partial_{\alpha} Z|_{\inf}^2}{|\partial_{\alpha} Z(\xi)|^2}\leq 1-\frac{|\partial_{\alpha} Z|_{\inf}^2}{\|\partial_{\alpha} Z\|_{L^\infty}^2}. \end{equation*} Hence we can take \begin{equation}\label{varepsilon_def} \varepsilon=(\|F(Z)\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty})^{-2}, \end{equation} and thus we have the bound \begin{equation}\label{bound_den} \begin{aligned} |\partial_\alpha Z(\xi)(\alpha-\gamma)|&\geq \frac{|\alpha-\gamma|}{\|F(Z)\|_{L^\infty}^2\|\partial_{\alpha} Z\|_{L^\infty}}. \end{aligned} \end{equation} This yields the following bound for the kernel $k(\partial_{\alpha} Z(\xi)(\alpha-\gamma))$, \begin{equation}\label{kernel_bound} \begin{aligned} |k(\partial_{\alpha} Z (\xi)(\alpha-\gamma))|\leq \frac{\|F(Z)\|_{L^\infty}^4\|\partial_{\alpha} Z\|_{L^\infty}^2}{|\alpha-\gamma|^2}.
\end{aligned} \end{equation} We notice that \begin{equation}\label{den_dif} \begin{aligned} \frac{1}{|\partial_{\alpha} Z(\xi)(\alpha-\gamma)|^5}-\frac{1}{|\partial_{\alpha} Z(\xi)(\beta-\gamma)|^5} =&\frac{(\partial_{\alpha} Z(\xi)(\beta-\alpha))\cdot\big(\partial_{\alpha} Z(\xi)(\alpha-\gamma+\beta-\gamma)\big) }{|\partial_{\alpha} Z(\xi)(\alpha-\gamma)|^5|\partial_{\alpha} Z(\xi)(\beta-\gamma)|^5}\\ &\times \frac{p_4(|\partial_{\alpha} Z(\xi)(\alpha-\gamma)|^2,|\partial_{\alpha} Z(\xi)(\beta-\gamma)|^2)}{|\partial_{\alpha} Z(\xi)(\alpha-\gamma)|^5+|\partial_{\alpha} Z(\xi)(\beta-\gamma)|^5}, \end{aligned} \end{equation} with $p_m$ a homogeneous polynomial of degree $m$. Thanks to \eqref{distances} and the cut-off function, in $I_{2,2}$ it holds that $\frac12|\alpha-\gamma|\leq |\beta-\gamma|\leq\frac32|\alpha-\gamma|$. Together with \eqref{bound_den} and \eqref{distances}, this gives that \begin{equation}\label{kernel_dif} \begin{aligned} |k(\partial_{\alpha} Z(\xi)(\alpha\!-\!\gamma))\!-\!k(\partial_{\alpha} Z(\xi)(\beta\!-\!\gamma))|\!\leq\! C\frac{\|F(Z)\|_{L^\infty}^7\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\alpha-\gamma|^3}|h|. \end{aligned} \end{equation} Then, the term $I_{2,2}$ can be estimated directly since the kernel is not singular in its domain, \begin{equation*} \begin{aligned} |I_{2,2}|\leq C\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^5\|F(Z)\|_{L^\infty}^8\frac{|h|}{\eta}. \end{aligned} \end{equation*} The support of $\chi$ allows us to rewrite $I_{2,1}$ as follows $$ I_{2,1}=\int_{\mathbb R^2}\Big(k(\partial_{\alpha} Z(\xi)(\alpha-\gamma))-k(\partial_{\alpha} Z(\xi)(\beta-\gamma))\Big)\chi(\gamma)g(\gamma)\hspace{0.05cm} d\gamma. $$ The classical splitting showing that the singular integral maps $\dot{C}^{\sigma}$ into $\dot{C}^{\sigma}$ (see Lemma 4.6 in \cite{MajdaBertozzi2002} for example) provides the desired bound: $$ |I_{2,1}|\leq C P(\|F(Z)\|_{L^\infty}\!+\!\|\partial_{\alpha} Z\|_{L^\infty}) \|g\|_{C^\sigma}|h|^\sigma.
$$ Combining the bounds above, we have obtained that \begin{equation}\label{I2bound_boundary} \begin{aligned} |I_2|\leq C P(\|F(Z)\|_{L^\infty}\!+\!\|\partial_{\alpha} Z\|_{L^\infty}) \|g\|_{C^\sigma}|h|^\sigma. \end{aligned} \end{equation} We proceed to estimate $I_1$ \eqref{Isplit_boundary}. We split it as follows: \begin{equation}\label{I1split_boundary} I_1=I_{1,1}+I_{1,2}+I_{1,3}+I_{1,4}, \end{equation} where \begin{equation*} \begin{aligned} I_{1,1}&=\!\int_{Z^{-1}(A_\eta)}\!\!d\gamma\hspace{0.05cm} g(\gamma)\bigg(\frac{(Z_1(\alpha)\!-\!Z_1(\gamma)\!-\!\partial_{\alpha} Z_1(\xi)\cdot(\alpha\!-\!\gamma))(Z_2(\alpha)\!-\!Z_2(\gamma))(Z_3(\alpha)\!-\!Z_3(\gamma))}{|Z(\alpha)\!-\!Z(\gamma)|^5}\\ &\hspace{2cm}-\frac{(Z_1(\beta)-Z_1(\gamma)-\partial_{\alpha} Z_1(\xi)\cdot(\beta-\gamma))(Z_2(\beta)-Z_2(\gamma))(Z_3(\beta)-Z_3(\gamma))}{|Z(\beta)-Z(\gamma)|^5}\bigg), \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,2}&=\!\int_{Z^{-1}(A_\eta)}\!\!d\gamma\hspace{0.05cm} g(\gamma)\bigg(\frac{\partial_{\alpha} Z_1(\xi)\cdot(\alpha-\gamma)(Z_2(\alpha)\!-\!Z_2(\gamma)\!-\!\partial_{\alpha} Z_2(\xi)\cdot(\alpha\!-\!\gamma))(Z_3(\alpha)\!-\!Z_3(\gamma))}{|Z(\alpha)\!-\!Z(\gamma)|^5}\\ &\hspace{1.8cm}-\frac{\partial_{\alpha} Z_1(\xi)\cdot(\beta-\gamma)(Z_2(\beta)-Z_2(\gamma)-\partial_{\alpha} Z_2(\xi)\cdot(\beta-\gamma))(Z_3(\beta)-Z_3(\gamma))}{|Z(\beta)-Z(\gamma)|^5}\bigg), \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,3}&=\!\int_{Z^{-1}(A_\eta)}\!\!d\gamma\hspace{0.05cm} g(\gamma)\bigg(\prod_{i=1}^2\partial_{\alpha} Z_i(\xi)\cdot(\alpha-\gamma)\frac{(Z_3(\alpha)\!-\!Z_3(\gamma)-\partial_{\alpha} Z_3(\xi)\cdot(\alpha-\gamma))}{|Z(\alpha)\!-\!Z(\gamma)|^5}\\ &\hspace{3.4cm}-\prod_{i=1}^2\partial_{\alpha} Z_i(\xi)\cdot(\beta-\gamma)\frac{(Z_3(\beta)-Z_3(\gamma)-\partial_{\alpha} Z_3(\xi)\cdot(\beta-\gamma))}{|Z(\beta)-Z(\gamma)|^5}\bigg), \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} 
I_{1,4}&=\!\int_{Z^{-1}(A_\eta)}\!\!d\gamma\hspace{0.05cm} g(\gamma)\Bigg(\prod_{i=1}^3\partial_{\alpha} Z_i(\xi)\cdot(\alpha-\gamma) \Big(\frac{1}{|Z(\alpha)\!-\!Z(\gamma)|^5}-\frac{1}{|\partial_{\alpha} Z(\xi)(\alpha-\gamma)|^5}\Big) \\ &\hspace{3.4cm}-\prod_{i=1}^3\partial_{\alpha} Z_i(\xi)\cdot(\beta-\gamma)\Big(\frac{1}{|Z(\beta)-Z(\gamma)|^5}-\frac{1}{|\partial_{\alpha} Z(\xi)(\beta-\gamma)|^5}\Big)\Bigg). \end{aligned} \end{equation*} Using the following sets \begin{equation}\label{U1U2} U_1=Z^{-1}(A_\eta)\cap \{|\alpha-\gamma|<2|\alpha-\beta|\},\quad U_2=Z^{-1}(A_\eta)\cap \{|\alpha-\gamma|\geq2|\alpha-\beta|\}, \end{equation} we decompose $I_{1,1}$ further, \begin{equation}\label{I11split_boundary} I_{1,1}=J_1+J_2+J_3+J_4+J_5, \end{equation} with \begin{equation*} \begin{aligned} J_1&=\int_{U_1}d\gamma\hspace{0.05cm} g(\gamma)\bigg( \frac{(Z_1(\alpha)\!-\!Z_1(\gamma)\!-\!\partial_{\alpha} Z_1(\xi)\cdot(\alpha\!-\!\gamma))(Z_2(\alpha)\!-\!Z_2(\gamma))(Z_3(\alpha)\!-\!Z_3(\gamma))}{|Z(\alpha)\!-\!Z(\gamma)|^5}\\ &\hspace{2.2cm}-\frac{(Z_1(\beta)-Z_1(\gamma)-\partial_{\alpha} Z_1(\xi)\cdot(\beta-\gamma))(Z_2(\beta)-Z_2(\gamma))(Z_3(\beta)-Z_3(\gamma))}{|Z(\beta)-Z(\gamma)|^5}\bigg)\\ &=J_{1,1}+J_{1,2}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_2&=\int_{U_2}d\gamma\hspace{0.05cm} g(\gamma)\frac{(Z_1(\alpha)\!-\!Z_1(\beta)\!-\!\partial_{\alpha} Z_1(\xi)\cdot(\alpha\!-\!\beta))(Z_2(\alpha)\!-\!Z_2(\gamma))(Z_3(\alpha)\!-\!Z_3(\gamma))}{|Z(\alpha)\!-\!Z(\gamma)|^5}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_3&=\int_{U_2}d\gamma\hspace{0.05cm} g(\gamma)\frac{(Z_1(\beta)\!-\!Z_1(\gamma)\!-\!\partial_{\alpha} Z_1(\xi)\cdot(\beta\!-\!\gamma))(Z_2(\alpha)\!-\!Z_2(\beta))(Z_3(\alpha)\!-\!Z_3(\gamma))}{|Z(\alpha)\!-\!Z(\gamma)|^5}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_4&=\int_{U_2}d\gamma\hspace{0.05cm} g(\gamma)\frac{(Z_1(\beta)\!-\!Z_1(\gamma)\!-\!\partial_{\alpha} 
Z_1(\xi)\cdot(\beta\!-\!\gamma))(Z_2(\beta)\!-\!Z_2(\gamma))(Z_3(\alpha)\!-\!Z_3(\beta))}{|Z(\alpha)\!-\!Z(\gamma)|^5}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_5&=\int_{U_2}d\gamma\hspace{0.05cm} g(\gamma)(Z_1(\beta)\!-\!Z_1(\gamma)\!-\!\partial_{\alpha} Z_1(\xi)\cdot(\beta\!-\!\gamma))(Z_2(\beta)\!-\!Z_2(\gamma))(Z_3(\beta)\!-\!Z_3(\gamma))\\ &\hspace{3cm}\times \Big( \frac{1}{|Z(\alpha)\!-\!Z(\gamma)|^5}-\frac{1}{|Z(\beta)\!-\!Z(\gamma)|^5}\Big). \end{aligned} \end{equation*} Then, we have that \begin{equation*} \begin{aligned} J_{1,1}&=\int_{U_1}d\gamma\hspace{0.05cm} g(\gamma) \frac{(Z_1(\alpha)\!-\!Z_1(\gamma)\!-\!\partial_{\alpha} Z_1(\alpha)\cdot(\alpha\!-\!\gamma))(Z_2(\alpha)\!-\!Z_2(\gamma))(Z_3(\alpha)\!-\!Z_3(\gamma))}{|Z(\alpha)\!-\!Z(\gamma)|^5}\\ &\quad+\int_{U_1}d\gamma\hspace{0.05cm} g(\gamma) \frac{(\partial_{\alpha} Z_1(\alpha)\!-\!\partial_{\alpha} Z_1(\xi))\cdot(\alpha\!-\!\gamma)(Z_2(\alpha)\!-\!Z_2(\gamma))(Z_3(\alpha)\!-\!Z_3(\gamma))}{|Z(\alpha)\!-\!Z(\gamma)|^5}\\ &=K_1+K_2. \end{aligned} \end{equation*} The estimate for $K_1$ follows immediately \begin{equation*} \begin{aligned} |K_1|&\leq C\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\|F(Z)\|_{L^\infty}^{6+\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^3|h|^\sigma. \end{aligned} \end{equation*} Since $U_1$ is a ball, we can write $K_2$ as follows \begin{equation*} \begin{aligned} K_2&=(\partial_{\alpha} Z_1(\alpha)-\partial_{\alpha} Z_1(\xi))\cdot\int_{U_1}d\gamma\hspace{0.05cm} (\alpha-\gamma)\bigg(g(\gamma)\frac{(Z_2(\alpha)-Z_2(\gamma))(Z_3(\alpha)-Z_3(\gamma))}{|Z(\alpha)-Z(\gamma)|^5}\\ &\hspace{6.5cm}-g(\alpha)\frac{\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^5}\bigg). 
\end{aligned} \end{equation*} Similarly as in \eqref{den_dif}, after splitting the numerator differences as before, the difference between the denominators is given by \begin{equation}\label{den_dif2} \begin{aligned} \frac{1}{|Z(\alpha)-Z(\gamma)|^5}&-\frac{1}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^5}\\ &\hspace{-2cm}=\frac{\big(\partial_{\alpha} Z(\alpha)(\alpha-\gamma)-(Z(\alpha)-Z(\gamma))\big)\cdot\big(Z(\alpha)-Z(\gamma)+\partial_{\alpha} Z(\alpha)(\alpha-\gamma)\big)}{|Z(\alpha)-Z(\gamma)|^5|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^5}\\ &\hspace{-2cm}\quad\times\frac{p_4(|Z(\alpha)-Z(\gamma)|^2,|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^2)}{|Z(\alpha)-Z(\gamma)|^5+|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^5}, \end{aligned} \end{equation} thus \begin{equation*} \begin{aligned} |K_2|&\leq C\big(\|g\|_{\dot{C}^\sigma}\|F(Z)\|_{L^\infty}^{6+\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^4+ \|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\|F(Z)\|_{L^\infty}^{8+\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^5\big)|h|^\sigma. \end{aligned} \end{equation*} It is clear that $J_{1,2}$ satisfies the same bound, hence \begin{equation*} \begin{aligned} |J_1|&\leq C P(\|F(Z)\|_{L^\infty}\!+\!\|\partial_{\alpha} Z\|_{L^\infty})\big(\|g\|_{\dot{C}^\sigma}\!+\! \|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\big)|h|^\sigma.
\end{aligned} \end{equation*} By writing \begin{equation*} \begin{aligned} Z_1(\alpha)-Z_1(\beta)-\partial_{\alpha} Z_1(\xi)\cdot(\alpha-\beta)&= Z_1(\alpha)-Z_1(\beta)-\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\beta)\\ &\quad+(\partial_{\alpha} Z_1(\alpha)-\partial_{\alpha} Z_1(\xi))\cdot(\alpha-\beta), \end{aligned} \end{equation*} the term $J_2$ is bounded directly by \begin{equation*} \begin{aligned} |J_2|&\leq C\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\|F(Z)\|_{L^\infty}^{6+\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^3|h|^\sigma. \end{aligned} \end{equation*} Taking into account that on $U_2$ it holds that $\frac12|\alpha-\gamma|\leq |\beta-\gamma|\leq\frac32|\alpha-\gamma|$, the bounds for $J_3$, $J_4$, and $J_5$ follow: \begin{equation*} \begin{aligned} |J_3|+|J_4|\leq C\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\|F(Z)\|_{L^\infty}^{6+\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^3|h|^\sigma,\\ |J_5| \leq C\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\|F(Z)\|_{L^\infty}^{8+\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^5|h|^\sigma. \end{aligned} \end{equation*} Inserting back in \eqref{I11split_boundary} the bounds for $J_1,\dots,J_5$, we obtain \begin{equation}\label{I11boundarybound} \begin{aligned} |I_{1,1}|\leq C P(\|F(Z)\|_{L^\infty}\!+\!\|\partial_{\alpha} Z\|_{L^\infty})\big(\|g\|_{\dot{C}^\sigma}\!+\! \|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\big)|h|^\sigma. \end{aligned} \end{equation} Now, recalling the splitting for $I_1$ \eqref{I1split_boundary}, it is clear that the bound above works as well for $I_{1,2}$, $I_{1,3}$, and $I_{1,4}$, hence it is valid for $I_1$.
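All of the estimates above rest on the same first-order Taylor remainder bound, which we record once for the reader's convenience: for $\alpha,\gamma,\xi$ in the same chart,
\begin{equation*}
|Z(\alpha)-Z(\gamma)-\partial_{\alpha} Z(\xi)\cdot(\alpha-\gamma)|=\Big|\int_0^1\big(\partial_{\alpha} Z(\gamma+t(\alpha-\gamma))-\partial_{\alpha} Z(\xi)\big)\cdot(\alpha-\gamma)\,dt\Big|\leq \|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\big(|\alpha-\gamma|+|\gamma-\xi|\big)^{\sigma}|\alpha-\gamma|.
\end{equation*}
In particular, for $\xi=\frac{\alpha+\beta}{2}$ and $|\alpha-\beta|\leq 2|\alpha-\gamma|$, the remainder is $O\big(\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|\alpha-\gamma|^{1+\sigma}\big)$, which is the gain of $\sigma$ derivatives exploited in the bounds for $J_1,\dots,J_5$.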
Combining it with the bound for $I_2$ \eqref{I2bound_boundary} in \eqref{Isplit_boundary} and recalling \eqref{holdersplit_boundary} and \eqref{gbound}, we finally have that \begin{equation}\label{bound_boundary} \begin{aligned} |S(f)(x)\!-\!S(f)(x+h)|\leq C(1\!+\!|\partial D|)P(\|F(Z)\|_{L^\infty}\!+\!\|\partial_{\alpha} Z\|_{L^\infty})\Big(\|f\|_{C^{\sigma}}\!+\!\|f\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\Big)|h|^\sigma. \end{aligned} \end{equation} \subsection{Regularity near the boundary.}\label{near_boundary} Consider two points $x\in \overline{D}$ and $x+h\in D$ (or the analogous situation in $\overline{\mathbb R^3\smallsetminus D}$, $\mathbb R^3\smallsetminus D$) and, without loss of generality, suppose that \begin{equation}\label{distxh} d(x+h,\partial D)\geq d(x,\partial D)=\delta\geq0. \end{equation} We can write \begin{equation}\label{xxh} \begin{aligned} x&=Z(\alpha)+\delta \tilde{N}(\alpha),\\ x+h&=Z(\alpha)+(h_n+\delta)\tilde{N}(\alpha)+h_{\tau_1}\partial_{\alpha_1} Z(\alpha)+h_{\tau_2}\partial_{\alpha_2} Z(\alpha). \end{aligned} \end{equation} We denote \begin{equation*} \tilde{N}(\alpha)=\frac{N(\alpha)}{\sqrt{|N(\alpha)|}}, \quad N(\alpha)=\partial_{\alpha_1}Z(\alpha) \wedge \partial_{\alpha_2}Z(\alpha), \end{equation*} and we notice that, as in \eqref{aux}, we have \begin{equation*} \begin{aligned} |N(\alpha)|\geq (\|F(Z)\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty})^{-1}|\partial_{\alpha_1}Z(\alpha)||\partial_{\alpha_2}Z(\alpha)|, \end{aligned} \end{equation*} hence \begin{equation}\label{tildeN} \begin{aligned} \|\partial_{\alpha} Z\|_{L^\infty}\geq|\tilde{N}(\alpha)|\geq \|F(Z)\|_{L^\infty}^{-\frac32}\|\partial_{\alpha} Z\|_{L^\infty}^{-\frac12}.
\end{aligned} \end{equation} We define the cutoffs \begin{equation}\label{delta_cutoff} \begin{aligned} |\delta|&\leq \frac16\Big(\frac{|\partial_{\alpha} Z|_{\inf}}{18\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}}\Big)^{\frac{1}{\sigma}}\Big(\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\Big)^{\frac12},\\ |h_n|&\leq \frac1{24}\Big(\frac{|\partial_{\alpha} Z|_{\inf}}{18\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}}\Big)^{\frac{1}{\sigma}}\Big(\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\Big)^{\frac12},\qquad |h_\tau|&\leq \frac1{24}\Big(\frac{|\partial_{\alpha} Z|_{\inf}}{18\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}}\Big)^{\frac{1}{\sigma}}\Big(\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\Big)^{\frac12}, \end{aligned} \end{equation} where $|h_\tau|^2=h_{\tau_1}^2+h_{\tau_2}^2$. It will sometimes be convenient to write the point $x+h$ in the following form \begin{equation}\label{xh_normal} x+h=Z(\alpha+\lambda)+\mu\tilde{N}(\alpha+\lambda), \end{equation} where by assumption \eqref{distxh} \begin{equation}\label{mudelta} \mu|\tilde{N}(\alpha+\lambda)|\geq \delta |\tilde{N}(\alpha)|. \end{equation} We must first make sure that such $\lambda$ and $\mu$ always exist for given $\delta$ and $h$ satisfying \eqref{delta_cutoff}. More specifically, we want to find $\lambda=(\lambda_1,\lambda_2)$ and $\mu$ satisfying \eqref{mudelta} that solve \eqref{xh_normal}, \begin{equation*} Z(\alpha)+(h_n+\delta)\tilde{N}(\alpha)+h_{\tau_1}\partial_{\alpha_1} Z(\alpha)+h_{\tau_2}\partial_{\alpha_2} Z(\alpha)=Z(\alpha+\lambda)+\mu\tilde{N}(\alpha+\lambda). \end{equation*} Let us denote \begin{equation*} \bm{\lambda}=(\lambda_1,\lambda_2,\mu)^T,\qquad \bm{h}=(h_{\tau_1},h_{\tau_2},h_n+\delta)^T.
\end{equation*} Then, upon projecting the equations onto $\partial_{\alpha_1}Z(\alpha)$, $\partial_{\alpha_2}Z(\alpha)$, and $\tilde{N}(\alpha)$, the system reads as follows \begin{equation*} \begin{aligned} M(\lambda)\bm{\lambda}=\tilde{\bm{h}}, \end{aligned} \end{equation*} where \begin{equation*} \begin{aligned} M(\lambda)=\begin{bmatrix}\frac{Z(\alpha+\lambda_1 e_1)-Z(\alpha)}{\lambda_1}\cdot\frac{\partial_{\alpha_1}Z(\alpha)}{|\partial_{\alpha_1}Z(\alpha)|^2} & \frac{Z(\alpha+\lambda)-Z(\alpha+\lambda_1 e_1)}{\lambda_2}\cdot\frac{\partial_{\alpha_1}Z(\alpha)}{|\partial_{\alpha_1}Z(\alpha)|^2} & \frac{\tilde{N}(\alpha+\lambda)\cdot\partial_{\alpha_1}Z(\alpha)}{|\partial_{\alpha_1}Z(\alpha)|^2}\\[2ex] \frac{Z(\alpha+\lambda)-Z(\alpha+\lambda_2 e_2)}{\lambda_1}\cdot\frac{\partial_{\alpha_2}Z(\alpha)}{|\partial_{\alpha_2}Z(\alpha)|^2} & \frac{Z(\alpha+\lambda_2 e_2)-Z(\alpha)}{\lambda_2}\cdot\frac{\partial_{\alpha_2}Z(\alpha)}{|\partial_{\alpha_2}Z(\alpha)|^2} & \frac{\tilde{N}(\alpha+\lambda)\cdot\partial_{\alpha_2}Z(\alpha)}{|\partial_{\alpha_2}Z(\alpha)|^2}\\[2ex] \frac{Z(\alpha+\lambda)-Z(\alpha+\lambda_2e_2)}{\lambda_1}\cdot\frac{\tilde{N}(\alpha)}{|\tilde{N}(\alpha)|^2} & \frac{Z(\alpha+\lambda_2e_2)-Z(\alpha)}{\lambda_2}\cdot\frac{\tilde{N}(\alpha)}{|\tilde{N}(\alpha)|^2} & \frac{\tilde{N}(\alpha+\lambda)}{|\tilde{N}(\alpha)|}\cdot\frac{\tilde{N}(\alpha)}{|\tilde{N}(\alpha)|} \end{bmatrix}, \end{aligned} \end{equation*} and \begin{equation*} \tilde{h}_1=h_{\tau_1}+h_{\tau_2}\frac{\partial_{\alpha_1}Z(\alpha)\cdot\partial_{\alpha_2}Z(\alpha)}{|\partial_{\alpha_1}Z(\alpha)|^2},\quad
\tilde{h}_2=h_{\tau_2}+h_{\tau_1}\frac{\partial_{\alpha_1}Z(\alpha)\cdot\partial_{\alpha_2}Z(\alpha)}{|\partial_{\alpha_2}Z(\alpha)|^2},\quad \tilde{h}_3=h_n+\delta. \end{equation*} Denote the limit matrix by $\tilde{M}(\alpha)$, \begin{equation*} \tilde{M}=\begin{bmatrix} 1 & \frac{\partial_{\alpha_1}Z(\alpha)\cdot\partial_{\alpha_2}Z(\alpha)}{|\partial_{\alpha_1}Z(\alpha)|^2} & 0\\ \frac{\partial_{\alpha_1}Z(\alpha)\cdot\partial_{\alpha_2}Z(\alpha)}{|\partial_{\alpha_2}Z(\alpha)|^2} & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}. \end{equation*} As a fixed-point equation, the system reads \begin{equation}\label{fixedpointeq1} \bm{\lambda}=T(\bm{\lambda}):=\tilde{M}^{-1}\big((\tilde{M}-M(\lambda))\bm{\lambda}+\tilde{\bm{h}}\big). \end{equation} Since $\tilde{M}^{-1}\tilde{\bm{h}}=\bm{h}$, and taking into account \eqref{aux}, we see that \begin{equation*} \begin{aligned} |T(\bm{\lambda})|&\leq |\tilde{M}^{-1}(\tilde{M}-M(\lambda))\bm{\lambda}|+|\bm{h}|\\ &\leq C\|\partial_{\alpha} Z\|_{L^\infty}^2\|F(Z)\|_{L^\infty}^3\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}(1+\|\partial_{\alpha} Z\|_{L^\infty}^{\frac12}\|F(Z)\|_{L^\infty}^{\frac12})|\bm{\lambda}|^{1+\sigma}+|\bm{h}|, \end{aligned} \end{equation*} thus there exists a large enough constant $C_1>0$ so that for $$|\bm{h}|<\Big(C_1\|\partial_{\alpha} Z\|_{L^\infty}^2\|F(Z)\|_{L^\infty}^3\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}(1+\|\partial_{\alpha} Z\|_{L^\infty}^{\frac12}\|F(Z)\|_{L^\infty}^{\frac12})\Big) ^{-\frac{1}{\sigma}},$$ Brouwer's Fixed Point Theorem \cite{Evans} yields the existence of a solution to \eqref{fixedpointeq1} in the ball of radius $2\Big(C_1\|\partial_{\alpha} Z\|_{L^\infty}^2\|F(Z)\|_{L^\infty}^3\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}(1+\|\partial_{\alpha} Z\|_{L^\infty}^{\frac12}\|F(Z)\|_{L^\infty}^{\frac12})\Big) ^{-\frac{1}{\sigma}}$ centered at the origin, and moreover
\begin{equation}\label{lambdabound} |\bm{\lambda}|\leq 2|\bm{h}|. \end{equation} Finally, the third equation in \eqref{fixedpointeq1} shows that the condition \eqref{mudelta}, i.e., \eqref{distxh}, implies that \begin{equation*} \begin{aligned} h_n&\geq -\delta\big|\frac{|\tilde{N}(\alpha)|-|\tilde{N}(\alpha+\lambda)|}{|\tilde{N}(\alpha+\lambda)|}\big|-3C_1^{-1}|\bm{\lambda}|, \end{aligned} \end{equation*} which in particular gives that, for $C_1$ large enough, \begin{equation}\label{hn_lower} h_n\geq -\frac{\delta}{4}-\frac{|h_\tau|}{4}. \end{equation} Next, we distinguish two cases: $|h_\tau|\leq \frac14\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\delta$ and $|h_\tau|\geq \frac14\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\delta$. \subsubsection{{\em{\textbf{Regularity in {\em{nearly}} normal direction:}}}} Assume that \begin{equation}\label{htau_menor_delta} |h_\tau|\leq \frac14\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\delta. \end{equation} We can write \begin{equation}\label{normaltangentSplit} \begin{aligned} S(f)(x+h)-S(f)(x)&=S(f)(x+h)-S(f)(Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha))\\ &\quad+S(f)(Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha))-S(f)(Z(\alpha)+\delta \tilde{N}(\alpha)), \end{aligned} \end{equation} so that the first difference is taken in the tangential direction and the second in the normal direction. We estimate each of these terms separately. Note that the above splitting is valid since \eqref{hn_lower} and the assumption \eqref{htau_menor_delta} guarantee that $h_n+\delta\geq \delta/2$, hence the point $Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)$ belongs to $ D$. We can thus assume in the following subsection, Case 1, that $h_n\geq0$; otherwise, interchange the roles of $\delta$ and $\delta+h_n$.
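For completeness, we record the one-line computation behind the claim $h_n+\delta\geq\delta/2$: by \eqref{hn_lower} and \eqref{htau_menor_delta},
\begin{equation*}
h_n+\delta\geq \delta-\frac{\delta}{4}-\frac{|h_\tau|}{4}\geq\delta-\frac{\delta}{4}-\frac{\delta}{16}\,\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\geq\frac{11\delta}{16}\geq\frac{\delta}{2},
\end{equation*}
where we used that $|\partial_{\alpha} Z|_{\inf}\leq\|\partial_{\alpha} Z\|_{L^\infty}$.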
\noindent\textbf{Case 1: Normal direction.} Here we consider $h_\tau=0$, i.e., we are dealing with the second difference above. We write it as follows \begin{equation*} \begin{aligned} S(f)(Z(\alpha)+(\delta+&h_n)\tilde{N}(\alpha))-S(f)(Z(\alpha)+\delta \tilde{N}(\alpha))\\ &=\text{pv}\int_{Z(B_\eta)}\Big(k(Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-y)-k(x-y)\Big)f(y)dS(y)\\ &\quad+\int_{\partial D \smallsetminus Z(B_\eta)}\Big(k(Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-y)-k(x-y)\Big)f(y)dS(y)\\ &=I+II, \end{aligned} \end{equation*} where \begin{equation*} B_\eta=\{\gamma\in\mathbb{R}^2:|\alpha-\gamma|<\eta\}. \end{equation*} The second term is again more regular, \begin{equation*} \begin{aligned} |II|\leq \frac{C}{\eta^3|\partial_{\alpha} Z|_{\inf}^3}|\partial D|\|f\|_{L^\infty}|h|. \end{aligned} \end{equation*} The first term is given by \begin{equation*} \begin{aligned} I&=\int_{B_\eta} \frac{\prod_{j=1}^3(Z_j(\alpha)+(h_n+\delta)\tilde{N}_j(\alpha)-Z_j(\gamma))}{|Z(\alpha)+(h_n+\delta)\tilde{N}(\alpha)-Z(\gamma)|^5}g(\gamma)d\gamma \\ &\quad-\int_{B_\eta} \frac{\prod_{j=1}^3(Z_j(\alpha)+\delta\tilde{N}_j(\alpha)-Z_j(\gamma))}{|Z(\alpha)+\delta\tilde{N}(\alpha)-Z(\gamma)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} and we decompose it as follows \begin{equation}\label{Isplit} I=I_1+I_2, \end{equation} \begin{equation*} \begin{aligned} I_1&=\int_{B_\eta}\prod_{j=1}^3(Z_j(\alpha)-Z_j(\gamma))\Big(\frac{1}{|Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-Z(\gamma)|^5}-\frac{1}{|x-Z(\gamma)|^5}\Big)g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation} \begin{aligned}\label{I2N} I_2&=\int_{B_\eta}\!\!\frac{\prod_{j=1}^3(Z_j(\alpha)-Z_j(\gamma)+(h_n+\delta)\tilde{N}_j(\alpha))-\prod_{j=1}^3(Z_j(\alpha)-Z_j(\gamma))}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma \\ 
&\quad-\int_{B_\eta}\frac{\prod_{j=1}^3(Z_j(\alpha)-Z_j(\gamma)+\delta\tilde{N}_j(\alpha))-\prod_{j=1}^3(Z_j(\alpha)-Z_j(\gamma))}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}g(\gamma)d\gamma. \end{aligned} \end{equation} We split $I_1$ further: \begin{equation}\label{I1split} \begin{aligned} I_1&=I_{1,1}+\dots+I_{1,6}, \end{aligned} \end{equation} where \begin{equation*} \begin{aligned} I_{1,1}&=\int_{B_\eta}(Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma))\prod_{j=2}^3(Z_j(\alpha)-Z_j(\gamma))\\ &\hspace{1cm}\times \Big(\frac{1}{|Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-Z(\gamma)|^5}-\frac{1}{|x-Z(\gamma)|^5}\Big)g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,2}&=\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)(Z_2(\alpha)-Z_2(\gamma)-\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma))(Z_3(\alpha)-Z_3(\gamma))\\ &\hspace{1cm}\times\Big(\frac{1}{|Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-Z(\gamma)|^5}-\frac{1}{|x-Z(\gamma)|^5}\Big)g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,3}&=\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)(Z_3(\alpha)-Z_3(\gamma)-\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma))\\ &\hspace{1cm}\times\Big(\frac{1}{|Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-Z(\gamma)|^5}-\frac{1}{|x-Z(\gamma)|^5}\Big)g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,4}&=\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{1cm}\times (g(\gamma)-g(\alpha))\Big(\frac{1}{|Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-Z(\gamma)|^5}-\frac{1}{|x-Z(\gamma)|^5}\Big)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,5}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha}
Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{1cm}\times \Big(\frac{1}{|Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-Z(\gamma)|^5}-\frac{1}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}\\ &\hspace{1cm}\qquad+\frac{1}{|\partial_{\alpha} Z(\alpha) (\alpha-\gamma)+\delta \tilde{N}(\alpha)|^5}-\frac{1}{|x-Z(\gamma)|^5}\Big)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,6}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{1cm}\times \Big(\frac{1}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^{5}}-\frac{1}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+\delta\tilde{N}(\alpha)|^5}\Big)d\gamma. \end{aligned} \end{equation*} To estimate these terms we will need to bound from below the denominator \begin{equation*} D=|Z(\alpha)-Z(\gamma)+(\delta+h_n)\tilde{N}(\alpha)|^2. \end{equation*} We can write \begin{equation*} \begin{aligned} D&=|Z(\alpha)-Z(\gamma)|^2+(h_n+\delta)^2|N(\alpha)|+2(Z(\alpha)-Z(\gamma)-\partial_{\alpha} Z(\alpha)(\alpha-\gamma))\cdot\tilde{N}(\alpha)(h_n+\delta)\\ &\geq \frac{|Z(\alpha)-Z(\gamma)|^2}{|\alpha-\gamma|^2} |\alpha-\gamma|^2+(h_n+\delta)^2|\tilde{N}(\alpha)|^2-2|\alpha-\gamma|^{1+\sigma}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|\tilde{N}(\alpha)|(h_n+\delta). \end{aligned} \end{equation*} The last term satisfies that \begin{equation*} \begin{aligned} 2|\alpha-\gamma|^{1+\sigma}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|\tilde{N}(\alpha)|(h_n+\delta)&\leq\frac{1-\sigma}2(h_n+\delta)^2|\tilde{N}(\alpha)|^2\\ + \frac{1+\sigma}2 2^{\frac{2}{1+\sigma}}&(h_n+\delta)^{\frac{2\sigma}{1+\sigma}}\|\partial_{\alpha} Z\|_{L^\infty}^{\frac{2\sigma}{1+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}^{\frac{2}{1+\sigma}}|\alpha-\gamma|^2.
\end{aligned} \end{equation*} The fact that $h_n+\delta\geq\delta/2$ and the choice of the cutoff for $\delta$ \eqref{delta_cutoff} allow us to obtain that \begin{equation}\label{denominatorb2} \begin{aligned} D&\geq \frac12 |\partial_{\alpha} Z|_{\text{inf}}^2\Big( |\alpha-\gamma|^2+(h_n+\delta)^2\Big). \end{aligned} \end{equation} Next, we proceed to estimate each of these terms $I_{1,i}$, $i=1,\dots, 6$. For $I_{1,1}$ we have that \begin{equation}\label{I11split2} \begin{aligned} |I_{1,1}|\leq C|h_n|\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^3\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\int_{B_\eta} \Big(&\frac{|\alpha-\gamma|^{3+\sigma}|x-Z(\gamma)|^{-5}}{|Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-Z(\gamma)|}\\ &+\frac{|\alpha-\gamma|^{3+\sigma}|x-Z(\gamma)|^{-1}}{{|Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha)-Z(\gamma)|^5}}\Big)d\gamma. \end{aligned} \end{equation} We introduce the bound for the denominator \eqref{denominatorb2} in $I_{1,1}$ to obtain that \begin{equation*} \begin{aligned} |I_{1,1}|\leq C|h_n|\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^6}\int_{B_\eta}\Big(&\frac{|\alpha-\gamma|^{3+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac52}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac12}}\\ &+\frac{|\alpha-\gamma|^{3+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac12}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac52}}\Big)d\gamma. 
\end{aligned} \end{equation*} Changing variables $w=(\alpha-\gamma)/(h_n+\delta)$, \begin{equation*} \begin{aligned} |I_{1,1}|\leq C\frac{|h_n|}{(h_n+\delta)^{1-\sigma}}\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^6}\int_{|w|\leq\frac{\eta}{h_n+\delta}}&\Big(\frac{|w|^{3+\sigma}\big(|w|^2+\big(\frac{\delta}{h_n+\delta}\big)^2\big)^{-\frac52}}{(|w|^2+1)^{\frac12}}\\ &+\frac{|w|^{3+\sigma}\big(|w|^2+\big(\frac{\delta}{h_n+\delta}\big)^2\big)^{-\frac12}}{(|w|^2+1)^{\frac52}}\Big)dw, \end{aligned} \end{equation*} hence \begin{equation*} \begin{aligned} |I_{1,1}|&\leq C|h_n|^\sigma\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^6}\int_{\mathbb R^2}\Big(\frac{|w|^{-2+\sigma}}{(|w|^2+1)^{\frac12}}+\frac{|w|^{2+\sigma}}{(|w|^2+1)^{\frac52}}\Big)dw, \end{aligned} \end{equation*} to conclude the desired bound \begin{equation}\label{I11bound} \begin{aligned} |I_{1,1}|&\leq C\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^6}|h_n|^\sigma. \end{aligned} \end{equation} The terms $I_{1,2}$ and $I_{1,3}$ are bounded analogously: \begin{equation}\label{I12I13bound} \begin{aligned} |I_{1,2}|+|I_{1,3}|&\leq C\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^6}|h_n|^\sigma. \end{aligned} \end{equation} In $I_{1,4}$ the same approach yields \begin{equation}\label{I14bound2} |I_{1,4}|\leq C\|g\|_{\dot{C}^\sigma}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\partial_{\alpha} Z|_{\text{inf}}^6}|h_n|^\sigma. \end{equation} We deal with $I_{1,5}$ \eqref{I1split}. 
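The passage from $|h_n|(h_n+\delta)^{\sigma-1}$ to $|h_n|^{\sigma}$ here, and repeatedly below, is the elementary inequality
\begin{equation*}
\frac{|h_n|}{(h_n+\delta)^{1-\sigma}}=|h_n|^{\sigma}\Big(\frac{|h_n|}{h_n+\delta}\Big)^{1-\sigma}\leq |h_n|^{\sigma},
\end{equation*}
which is valid because $h_n+\delta\geq\delta/2$ implies $|h_n|\leq h_n+\delta$ (this is trivial for $h_n\geq0$, and for $h_n<0$ it reads $-h_n\leq h_n+\delta$, i.e. $h_n\geq-\delta/2$).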
Let us denote \begin{equation}\label{uhvh} \begin{aligned} u_h&=|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|,\\ v_h&=|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+(h_n+\delta)\tilde{N}(\alpha)|, \end{aligned} \end{equation} and \begin{equation*} \begin{aligned} \frac{1}{u_h^5}-\frac{1}{u_0^5}=G(u_h,u_0)(u_0^2-u_h^2), \end{aligned} \end{equation*} where \begin{equation}\label{Guhvh} G(u_h,u_0)=\frac1{u_h+u_0}\big(\frac1{u_h^5u_0}+\frac1{u_h^4u_0^2}+\frac1{u_h^3u_0^3}+\frac1{u_h^2u_0^4}+\frac1{u_hu_0^5}\big). \end{equation} We notice that $u_h^2=D$ for which we have the lower bound \eqref{denominatorb2}. We also need a lower bound for $v_h$: \begin{equation*} \begin{aligned} |v_h|^2&=|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^2+(h_n+\delta)^2|\tilde{N}(\alpha)|^2\\ &\geq |\partial_{\alpha} Z|_{\inf}^2\frac{|\partial_{\alpha} Z|_{\inf}^2}{\|\partial_{\alpha} Z\|_{L^\infty}^2}\big(|\alpha-\gamma|^2+(h_n+\delta)^2\big), \end{aligned} \end{equation*} where we have used \eqref{aux}, \eqref{varepsilon_def}, and \eqref{tildeN}. Notice that the following is a common lower bound for $u_h$ and $v_h$, \begin{equation}\label{uhvh_lower} \begin{aligned} |u_h|^2,|v_h|^2&\geq \frac12|\partial_{\alpha} Z|_{\inf}^2\frac{|\partial_{\alpha} Z|_{\inf}^2}{\|\partial_{\alpha} Z\|_{L^\infty}^2}\big(|\alpha-\gamma|^2+(h_n+\delta)^2\big). \end{aligned} \end{equation} We have that \begin{equation}\label{uhmvh} \begin{aligned} u_0^2-u_h^2&=-h_n(h_n+2\delta)|\tilde{N}(\alpha)|^2-2(Z(\alpha)-Z(\gamma))\cdot\tilde{N}(\alpha)h_n,\\ v_0^2-v_h^2&=-h_n(h_n+2\delta)|\tilde{N}(\alpha)|^2.
\end{aligned} \end{equation} The term $I_{1,5}$ \eqref{I1split} is then written as follows \begin{equation} \begin{aligned}\label{I15} I_{1,5}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2.5cm}\times \Big(\frac{1}{u_h^5}-\frac{1}{u_0^5}-\big(\frac{1}{v_h^5}-\frac{1}{v_0^5}\big)\Big)d\gamma, \end{aligned} \end{equation} and we split it further \begin{equation*} I_{1,5}=J_1+J_2, \end{equation*} where \begin{equation*} \begin{aligned} J_1&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times G(u_h,u_0)\big(u_0^2-u_h^2-(v_0^2-v_h^2)\big)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_2&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times \big(G(u_h,u_0)-G(v_h,v_0)\big)(v_0^2-v_h^2)d\gamma. \end{aligned} \end{equation*} Substituting \eqref{uhmvh}, \begin{equation*} \begin{aligned} J_{1}=-2h_n\hspace{0.05cm} g(\alpha)\int_{B_\eta}&\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\times G(u_h,u_0) (Z(\alpha)-Z(\gamma))\cdot\tilde{N}(\alpha) d\gamma. 
\end{aligned} \end{equation*} Using the extra cancellation \begin{equation*} (Z(\alpha)-Z(\gamma))\cdot\tilde{N}(\alpha)=(Z(\alpha)-Z(\gamma)-\partial_{\alpha} Z(\alpha)(\alpha-\gamma))\cdot\tilde{N}(\alpha), \end{equation*} and recalling that $u_h^2=D$, we introduce the lower bound for $D$ in \eqref{denominatorb2} to obtain that \begin{equation*} \begin{aligned} |J_{1}|&\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\partial_{\alpha} Z|_{\text{inf}}^7}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n| \int_{|\alpha-\gamma|\leq \eta} \Big(\frac{|\alpha-\gamma|^{4+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac12}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{3}} \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{|\alpha-\gamma|^{4+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac52}}{|\alpha-\gamma|^2+(h_n+\delta)^2}\Big)d\gamma\\ &\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\partial_{\alpha} Z|_{\text{inf}}^7}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^\sigma. \end{aligned} \end{equation*} We proceed to estimate $J_2$. We further split this term $$ J_{2}=\sum_{k=1}^6J_{2,k} $$ where \begin{equation*} \begin{aligned} J_{2,k}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times \big(\frac1{u_h^{6-k}u_0^k}-\frac1{v_h^{6-k}v_0^k}\big)\frac{v_0^2-v_h^2}{u_h+u_0}d\gamma, \end{aligned} \end{equation*} for $1\leq k\leq 5$, and \begin{equation*} \begin{aligned} J_{2,6}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)(v_0^2-v_h^2)\\ &\hspace{2cm}\times \big(\frac1{v_h^5v_0}+\frac1{v_h^4v_0^2}+\frac1{v_h^3v_0^3}+\frac1{v_h^2v_0^4}+\frac1{v_hv_0^5}\big)\big(\frac{1}{u_h+u_0}-\frac1{v_h+v_0}\big)d\gamma.
\end{aligned} \end{equation*} To control $J_{2,1}$ a further splitting is given: $$ J_{2,1}=\sum_{l=1}^6K_l $$ where \begin{equation*} \begin{aligned} K_{1}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times \frac1{u_h^5} \big(\frac1{u_0}-\frac1{v_0}\big)\frac{v_0^2-v_h^2}{u_h+u_0}d\gamma, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} K_{l}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times \frac1{v_0u_h^{6-l}v_h^{l-2}}\big(\frac1{u_h}-\frac1{v_h}\big)\frac{v_0^2-v_h^2}{u_h+u_0}d\gamma \end{aligned} \end{equation*} for $2\leq l\leq 6$. The estimate \begin{equation}\label{v0-u0} |v_h-u_h|\leq \|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}|\alpha-\gamma|^{1+\sigma} \end{equation} and the lower bound \eqref{uhvh_lower} allow us to get \begin{equation*} \begin{aligned} |K_1|&\leq C\|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|_{\text{inf}}^{16}}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}|h_n|\int_{|\alpha-\gamma|\leq \eta} \frac{|\alpha-\gamma|^{2+\sigma}((h_n+\delta)+\delta)d\gamma}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac52}((h_n+\delta)+\delta)} \\ &\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|_{\text{inf}}^{16}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^\sigma. \end{aligned} \end{equation*} Using \eqref{v0-u0}, an analogous bound follows for the rest of the $K_l$. This yields the desired control for the term $J_{2,1}$. The terms $J_{2,k}$, $k=2,...,5$, are controlled in a similar manner to $J_{2,1}$. We show some detail in the most singular one: $J_{2,5}$.
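The factorization \eqref{Guhvh} behind this splitting is purely algebraic. As a standalone numerical sanity check (illustrative only, not part of the proof), it can be verified in a few lines:

```python
# Numerical check of the algebraic factorization
#   1/u_h^5 - 1/u_0^5 = G(u_h, u_0) * (u_0^2 - u_h^2),
# with G as in (Guhvh). Names below are local to this check.

def G(uh, u0):
    # G(u_h, u_0) = (u_h + u_0)^{-1} * sum_{k=1}^{5} u_h^{-(6-k)} u_0^{-k}
    return sum(uh ** -(6 - k) * u0 ** -k for k in range(1, 6)) / (uh + u0)

def lhs(uh, u0):
    return uh ** -5 - u0 ** -5

def rhs(uh, u0):
    return G(uh, u0) * (u0 ** 2 - uh ** 2)

# check on a grid of positive values
for uh in (0.3, 1.0, 2.5):
    for u0 in (0.7, 1.0, 4.0):
        assert abs(lhs(uh, u0) - rhs(uh, u0)) < 1e-12 * (1 + abs(lhs(uh, u0)))
print("factorization verified")
```

The identity holds for any positive $u_h$, $u_0$, which is all that the estimates above use.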
It is decomposed as $$ J_{2,5}=K_{r}+K_{s} $$ where $K_{r}$ is a representative term, and in $K_s$ we collect similar integrals (they can be handled as before). The $K_{r}$ integral is given by \begin{equation*} \begin{aligned} K_{r}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times \frac1{v_hu_0^4}\big(\frac1{u_0}-\frac1{v_0}\big)\frac{v_0^2-v_h^2}{u_h+u_0}d\gamma. \end{aligned} \end{equation*} Then, the following bound is obtained \begin{equation*} \begin{aligned} |K_{r}|&\leq C\|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|_{\text{inf}}^{16}}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}|h_n|\int_{|\alpha-\gamma|\leq \eta} \frac{|\alpha-\gamma|^{4+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-3}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac12}} d\gamma\\ &\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|_{\text{inf}}^{16}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^\sigma. \end{aligned} \end{equation*} The rest of the terms in $K_s$ are estimated analogously, which yields the desired control for $J_{2,5}$.
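For completeness, we sketch the model computation behind the last bound. Passing to polar coordinates and using that $\delta$ and $h_n+\delta$ are comparable in this regime ($h_n+\delta\geq\delta/2$, and $|h_n|\leq\delta/2$ gives $\delta\geq\frac23(h_n+\delta)$),
\begin{equation*}
\begin{aligned}
|h_n|\int_{B_\eta}\frac{|\alpha-\gamma|^{4+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-3}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac12}}d\gamma
&\leq C|h_n|\int_0^{\eta}\frac{r^{5+\sigma}dr}{(r^2+(h_n+\delta)^2)^{\frac72}}\\
&\leq C\frac{|h_n|}{(h_n+\delta)^{1-\sigma}}\int_0^{\infty}\frac{s^{5+\sigma}ds}{(s^2+1)^{\frac72}}\leq C|h_n|^{\sigma},
\end{aligned}
\end{equation*}
where the second inequality is the change of variables $r=(h_n+\delta)s$, the $s$-integral converges because $\sigma<1$, and the last step uses $|h_n|\leq h_n+\delta$.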
Next, we consider $J_{2,6}$, which is estimated using \eqref{v0-u0}: \begin{equation*} \begin{aligned} |J_{2,6}|&\leq C\|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|_{\text{inf}}^{16}}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}|h_n|\int_{|\alpha-\gamma|\leq \eta} \Big(\frac{|\alpha-\gamma|^{4+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac12}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{3}} \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{|\alpha-\gamma|^{4+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac52}}{|\alpha-\gamma|^2+(h_n+\delta)^2}\Big)d\gamma\\ &\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|_{\text{inf}}^{16}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^\sigma. \end{aligned} \end{equation*} We are done with $J_2$ and therefore with $I_{1,5}$, \begin{equation}\label{I15bound} \begin{aligned} |I_{1,5}|&\leq C \hspace{0.05cm} \|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^4\|F(Z)\|_{L^\infty}^7(1+\|\partial_{\alpha} Z\|_{L^\infty}^9\|F(Z)\|_{L^\infty}^9)\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^\sigma. \end{aligned} \end{equation} Finally, because $|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^2=|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^2+(h_n+\delta)^2|\tilde{N}(\alpha)|^2$, the integral in $I_{1,6}$ \eqref{I1split} is odd and thus vanishes. Hence, recalling \eqref{I1split} and the bounds \eqref{I11bound}-\eqref{I14bound2}, \eqref{I15bound}, the $I_1$ integral is estimated, \begin{equation}\label{I1bound} \begin{aligned} |I_1|&\leq C \hspace{0.05cm} \|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^3\|F(Z)\|_{L^\infty}^6(1+\|\partial_{\alpha} Z\|_{L^\infty}^{10}\|F(Z)\|_{L^\infty}^{10})\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^\sigma\\ &\quad+C \hspace{0.05cm} \|g\|_{\dot{C}^\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^4\|F(Z)\|_{L^\infty}^6|h_n|^\sigma.
\end{aligned} \end{equation} It remains to control $I_2$. In order to estimate the term $I_2$, a further decomposition is done in \eqref{I2N}: \begin{equation}\label{I2split} I_2=\sum_{j=1}^8I_{2,j}. \end{equation} The terms are obtained by expanding the products in the numerator and gathering in one term the difference between each integral with $(h_n+\delta)$ and its corresponding integral with only $\delta$. Here we show how to deal with three of them, as the rest of the estimates follow similarly. They are given by \begin{equation*} \begin{aligned} I_{2,1}&=\int_{B_\eta}\!\!\frac{(Z_1(\alpha)-Z_1(\gamma))(Z_2(\alpha)-Z_2(\gamma))(h_n+\delta)\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma \\ &\quad-\int_{B_\eta}\frac{(Z_1(\alpha)-Z_1(\gamma))(Z_2(\alpha)-Z_2(\gamma))\delta\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{2,2}&=\int_{B_\eta}\!\!\frac{(Z_1(\alpha)-Z_1(\gamma))(h_n+\delta)^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma \\ &\quad-\int_{B_\eta}\frac{(Z_1(\alpha)-Z_1(\gamma))\delta^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} I_{2,3}&=\int_{B_\eta}\!\!\frac{(h_n+\delta)^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma -\int_{B_\eta}\frac{\delta^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}g(\gamma)d\gamma.
\end{aligned} \end{equation*} Next, we perform a further decomposition in $I_{2,1}$ to handle it: \begin{equation}\label{J3J7split} I_{2,1}=\sum_{k=3}^7J_k \end{equation} where \begin{equation*} \begin{aligned} J_3&=\int_{B_\eta}\!\!\frac{(Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma))(Z_2(\alpha)-Z_2(\gamma))(h_n+\delta)\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma \\ &\quad-\int_{B_\eta}\frac{(Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma))(Z_2(\alpha)-Z_2(\gamma))\delta\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_4&=\int_{B_\eta}\!\!\frac{\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)(Z_2(\alpha)-Z_2(\gamma)-\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma))(h_n+\delta)\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma \\ &\quad-\int_{B_\eta}\frac{\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)(Z_2(\alpha)-Z_2(\gamma)-\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma))\delta\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_5&=\int_{B_\eta}\!\!\frac{\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)(h_n+\delta)\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}(g(\gamma)-g(\alpha))d\gamma \\ &\quad-\int_{B_\eta}\frac{\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\delta\tilde{N}_3(\alpha)}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}(g(\gamma)-g(\alpha))d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned}
J_6&=(h_n+\delta)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\big(\frac{1}{u_h^5}-\frac{1}{v_h^5}\big)d\gamma \\ &\quad-\delta\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\big(\frac{1}{u_0^5}-\frac{1}{v_0^5}\big)d\gamma, \end{aligned} \end{equation*} (see \eqref{uhvh}) and \begin{equation*} \begin{aligned} J_7&=(h_n+\delta)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\frac{\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}d\gamma \\ &\quad-\delta\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\frac{\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+\delta\tilde{N}(\alpha)|^5}d\gamma. \end{aligned} \end{equation*} In the next step, a splitting gives $$ J_{3}=J_{3,1}+J_{3,2} $$ with \begin{equation*} \begin{aligned} J_{3,1}&=h_n\tilde{N}_3(\alpha)\int_{B_\eta}\!\!\frac{(Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma))(Z_2(\alpha)-Z_2(\gamma))}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} J_{3,2}&=\delta\tilde{N}_3(\alpha)\int_{B_\eta}\!\!(Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma))(Z_2(\alpha)-Z_2(\gamma))g(\gamma)\big(\frac{1}{u_h^5}-\frac{1}{u_0^5}\big)d\gamma.
\end{aligned} \end{equation*} Bound \eqref{denominatorb2} provides the desired estimates as before: \begin{equation*} |J_{3,1}|\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^2}{|\partial_{\alpha} Z|_{\text{inf}}^5}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^\sigma, \end{equation*} \begin{equation*} \begin{aligned} |J_{3,2}|&\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^6}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\delta|h_n|\int_{B_{\eta}}\Big(\frac{|\alpha-\gamma|^{2+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac12}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac52}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{|\alpha-\gamma|^{2+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac52}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac12}}\Big)d\gamma\\ &\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^6}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}. \end{aligned} \end{equation*} We are done with $J_3$. The terms $J_{4}$ and $J_{5}$ follow in a similar manner, \begin{equation*} |J_4|\leq C \|g\|_{L^\infty}\Big(\frac{\|\partial_{\alpha} Z\|_{L^\infty}^2}{|\partial_{\alpha} Z|_{\text{inf}}^5}+\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^6}\Big)\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}, \end{equation*} but for $J_5$ the bound is slightly different: \begin{equation*} |J_5|\leq C \| g\|_{\dot{C}^\sigma}\Big(\frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^5}+\frac{\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\partial_{\alpha} Z|_{\text{inf}}^6}\Big)|h_n|^{\sigma}. 
\end{equation*} To handle $J_6$ we split it in two: \begin{equation*} J_6=J_{6,1}+J_{6,2}, \end{equation*} where \begin{equation*} \begin{aligned} J_{6,1}&=h_n\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\big(\frac{1}{u_h^5}-\frac{1}{v_h^5}\big)d\gamma, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} J_{6,2}&=\delta\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\Big(\frac{1}{u_h^5}-\frac{1}{u_0^5}-\big(\frac{1}{v_h^5}-\frac{1}{v_0^5}\big)\Big)d\gamma. \end{aligned} \end{equation*} As before it is possible to get \begin{equation*} |J_{6,1}|\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{11}}{|\partial_{\alpha} Z|_{\text{inf}}^{14}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}. \end{equation*} The next term can be estimated similarly to $I_{1,5}$ \eqref{I15} in order to obtain \begin{equation*} |J_{6,2}|\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|_{\text{inf}}^{16}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}, \end{equation*} and therefore the appropriate bound for $J_6$: \begin{equation*} |J_{6}|\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|_{\text{inf}}^{16}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}. \end{equation*} We now show that the last term $J_7$ \eqref{J3J7split} is Lipschitz.
The change of variables $\gamma\to(\alpha-\gamma)/(h_n+\delta)$ in the first integral, and $\gamma\to(\alpha-\gamma)/\delta$ in the second, gives that \begin{equation*} \begin{aligned} J_7&=\tilde{N}_3(\alpha)g(\alpha)\Big(\int_{|\gamma|\leq\frac{\eta}{h_n+\delta}}\frac{\partial_{\alpha} Z_1(\alpha)\cdot\gamma\partial_{\alpha} Z_2(\alpha)\cdot\gamma}{(|\partial_{\alpha} Z(\alpha)\gamma|^2+|\tilde{N}(\alpha)|^2)^{\frac52}}d\gamma-\int_{|\gamma|\leq\frac{\eta}{\delta}}\frac{\partial_{\alpha} Z_1(\alpha)\cdot\gamma\partial_{\alpha} Z_2(\alpha)\cdot\gamma}{(|\partial_{\alpha} Z(\alpha)\gamma|^2+|\tilde{N}(\alpha)|^2)^{\frac52}}d\gamma\Big). \end{aligned} \end{equation*} Define \begin{equation*} \begin{aligned} F(z)&=\int_{|\gamma|\leq\frac{\eta}{z}}\frac{\partial_{\alpha} Z_1(\alpha)\cdot\gamma\partial_{\alpha} Z_2(\alpha)\cdot\gamma}{(|\partial_{\alpha} Z(\alpha)\gamma|^2+|\tilde{N}(\alpha)|^2)^{\frac52}}d\gamma, \end{aligned} \end{equation*} which, denoting $\hat{\gamma}=\frac{\gamma}{|\gamma|}$, can be written as follows \begin{equation*} \begin{aligned} F(z)&=\int_{-\pi}^\pi\frac{\partial_{\alpha} Z_1(\alpha)\cdot\hat{\gamma}\partial_{\alpha} Z_2(\alpha)\cdot\hat{\gamma}}{|\tilde{N}(\alpha)|^5}\int_0^{\frac{\eta}{z}}\frac{r^3}{(|\tilde{N}(\alpha)|^{-2}|\partial_{\alpha} Z(\alpha)\hat{\gamma}|^2r^2+1)^{\frac52}}drd\hat{\gamma}.
\end{aligned} \end{equation*} If we denote further \begin{equation}\label{J7aux} \begin{aligned} G(r,a)=\int_0^{\frac{1}{r}}\frac{\rho^3}{(a^2\rho^2+1)^{\frac52}}d\rho =\frac{2}{3a^4}-\frac{2+3a^2r^{-2}}{3a^4(1+a^2r^{-2})^{\frac32}}, \end{aligned} \end{equation} we obtain that \begin{equation*} \begin{aligned} |J_7|&\leq \|\partial_{\alpha} Z\|_{L^\infty}\|g\|_{L^\infty}\Big|\int_{-\pi}^\pi\frac{\partial_{\alpha} Z_1(\alpha)\cdot\hat{\gamma}\partial_{\alpha} Z_2(\alpha)\cdot\hat{\gamma}}{|\tilde{N}(\alpha)|^5}\\ &\qquad\times\Big(G(\frac{h_n+\delta}{\eta},|\tilde{N}(\alpha)|^{-1}|\partial_{\alpha} Z(\alpha)\hat{\gamma}|)-G(\frac{\delta}{\eta},|\tilde{N}(\alpha)|^{-1}|\partial_{\alpha} Z(\alpha)\hat{\gamma}|)\Big)d\hat{\gamma} \Big|. \end{aligned} \end{equation*} Since $|\frac{d}{dr}G(r,a)|\leq |a|^{-5}$, we obtain \begin{equation*} \begin{aligned} |J_7|&\leq C\|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^8}{|\partial_{\alpha} Z|_{\inf}^{10}}\frac{|h_n|}{\eta}. \end{aligned} \end{equation*} This yields the appropriate estimate for $I_{2,1}$.
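As a standalone numerical illustration (not part of the proof), the Lipschitz property of the truncated radial integral $G$ from \eqref{J7aux} can be checked directly by quadrature; the midpoint rule below is illustrative only.

```python
# Illustration of the Lipschitz bound used for J_7: the truncated radial integral
#   G(r, a) = \int_0^{1/r} rho^3 / (a^2 rho^2 + 1)^{5/2} d rho
# satisfies |G(r1, a) - G(r2, a)| <= |a|^{-5} |r1 - r2|, since |dG/dr| <= |a|^{-5}.

def G(r, a, n=20000):
    # composite midpoint rule on [0, 1/r]
    h = (1.0 / r) / n
    return h * sum(
        ((i + 0.5) * h) ** 3 / (a ** 2 * ((i + 0.5) * h) ** 2 + 1) ** 2.5
        for i in range(n)
    )

a = 0.8
for r1, r2 in [(0.5, 0.6), (1.0, 1.3), (2.0, 2.5)]:
    assert abs(G(r1, a) - G(r2, a)) <= abs(a) ** -5 * abs(r1 - r2)
print("Lipschitz bound verified")
```

The observed increments stay well below the bound $|a|^{-5}|r_1-r_2|$, consistent with the mean value theorem applied to $G(\cdot,a)$.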
Next we handle $I_{2,2}$ with the splitting $$ I_{2,2}=\sum_{k=8}^{11}J_{k}, $$ where \begin{equation*} \begin{aligned} J_{8}&=(h_n+\delta)^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\!\!\frac{(Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma))}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma \\ &\quad-\delta^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\frac{(Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma))}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_{9}&=(h_n+\delta)^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\!\!\frac{\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma)}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}(g(\gamma)-g(\alpha))d\gamma \\ &\quad-\delta^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\frac{\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma)}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}(g(\gamma)-g(\alpha))d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_{10}&=(h_n+\delta)^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\!\!\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma)\big(\frac{1}{u_h^5}-\frac{1}{v^5_h}\big)d\gamma \\ &\quad-\delta^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma)\big(\frac{1}{u_0^5}-\frac{1}{v^5_0}\big)d\gamma, \end{aligned} \end{equation*} (see \eqref{uhvh}) and \begin{equation*} \begin{aligned} J_{11}&=(h_n+\delta)^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\!\!\frac{\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma)}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}d\gamma \\ &\quad-\delta^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\frac{\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma)}{|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)+\delta\tilde{N}(\alpha)|^5}d\gamma. 
\end{aligned} \end{equation*} The term $J_{8}$ can be decomposed further to get $$ J_8=J_{8,1}+J_{8,2}, $$ where \begin{equation*} \begin{aligned} J_{8,1}&=(h_n^2+2h_n\delta)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\!\!\frac{Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma)}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}g(\gamma)d\gamma \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} J_{8,2}&=\delta^2\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}(Z_1(\alpha)-Z_1(\gamma)-\partial_{\alpha} Z_1(\alpha)(\alpha-\gamma))g(\gamma)\big(\frac1{u_h^5}-\frac1{u_0^5}\big)d\gamma. \end{aligned} \end{equation*} Then, it is possible to bound as follows \begin{equation*} |J_{8,1}|\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^2}{|\partial_{\alpha} Z|_{\text{inf}}^5}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\frac{|h_n|(2|h_n+\delta|+|h_n|)}{|h_n+\delta|^{2-\sigma}}\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^2}{|\partial_{\alpha} Z|_{\text{inf}}^5}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}, \end{equation*} which is the desired estimate.
The next term is approached similarly: \begin{equation*} \begin{aligned} |J_{8,2}|&\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^8}{|\partial_{\alpha} Z|_{\text{inf}}^{11}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\delta^2|h_n|\int_{B_{\eta}}\Big(\frac{|\alpha-\gamma|^{1+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac12}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac52}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{|\alpha-\gamma|^{1+\sigma}(|\alpha-\gamma|^2+\delta^2)^{-\frac52}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac12}}\Big)d\gamma\\ &\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^8}{|\partial_{\alpha} Z|_{\text{inf}}^{11}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\Big(\frac{\delta|h_n|}{|h_n\!+\!\delta|^{2-\sigma}}\!+\!\frac{|h_n|}{|h_n\!+\!\delta|^{1-\sigma}}\Big)\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^8}{|\partial_{\alpha} Z|_{\text{inf}}^{11}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}. \end{aligned} \end{equation*} This yields the desired estimate for $J_8$: \begin{equation*} |J_{8}|\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^8}{|\partial_{\alpha} Z|_{\text{inf}}^{11}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}. \end{equation*} The next term can be handled analogously, obtaining the bound below \begin{equation*} |J_9|\leq C \| g\|_{\dot{C}^\sigma}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^9}{|\partial_{\alpha} Z|_{\text{inf}}^{11}}|h_n|^{\sigma}. \end{equation*} We continue dealing with $J_{10}$, which can be decomposed as before to obtain \begin{equation*} |J_{10}|\leq C \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{10}}{|\partial_{\alpha} Z|_{\text{inf}}^{13}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}. \end{equation*} The last term in the splitting can be integrated explicitly; since the integrand is odd in $\alpha-\gamma$, it is easy to check that it is zero: $$ J_{11}=0.
$$ Gathering the last four estimates provides the appropriate estimate for $I_{2,2}$. It remains to handle $I_{2,3}$. A further decomposition yields $$ I_{2,3}=J_{12}+J_{13}+J_{14}, $$ where \begin{equation*} \begin{aligned} J_{12}=&(h_n+\delta)^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\!\!\frac{(g(\gamma)-g(\alpha))d\gamma}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}\\ &-\delta^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\frac{(g(\gamma)-g(\alpha))d\gamma}{|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|^5}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_{13}=&(h_n+\delta)^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\!\!\big(\frac{1}{u_h^5}-\frac{1}{v_h^5}\big)d\gamma\\ &-\delta^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\!\!\big(\frac{1}{u_0^5}-\frac{1}{v_0^5}\big)d\gamma, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} J_{14}&=(h_n+\delta)^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\!\!\frac{d\gamma}{(|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^2+(h_n+\delta)^2|\tilde{N}(\alpha)|^2)^{\frac52}} \\ &\quad-\delta^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)g(\alpha)\int_{B_\eta}\frac{d\gamma}{(|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^2+\delta^2|\tilde{N}(\alpha)|^2)^{\frac52}}. \end{aligned} \end{equation*} A further decomposition helps to deal with $J_{12}$: $$ J_{12}=J_{12,1}+J_{12,2}, $$ where \begin{equation*} \begin{aligned} J_{12,1}=&(h_n^3+3h_n\delta(h_n+\delta))\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\!\!\frac{(g(\gamma)-g(\alpha))d\gamma}{|Z(\alpha)-Z(\gamma)+(h_n+\delta)\tilde{N}(\alpha)|^5}, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} J_{12,2}=&\delta^3\tilde{N}_1(\alpha)\tilde{N}_2(\alpha)\tilde{N}_3(\alpha)\int_{B_\eta}\!\! 
(g(\gamma)-g(\alpha))\big(\frac{1}{u_h^5}-\frac{1}{u_0^5}\big)d\gamma. \end{aligned} \end{equation*} Next, it is possible to bound as follows \begin{equation*} |J_{12,1}|\leq C \frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^5}\|g\|_{\dot{C}^\sigma}\frac{|h_n|(|h_n|^2\!+\!3|h_n\!+\!\delta|^2\!+\!3|h_n||h_n\!+\!\delta|)}{|h_n+\delta|^{3-\sigma}}\leq C \frac{\|\partial_{\alpha} Z\|_{L^\infty}^3}{|\partial_{\alpha} Z|_{\text{inf}}^5}\|g\|_{\dot{C}^\sigma}|h_n|^{\sigma}, \end{equation*} \begin{equation*} \begin{aligned} |J_{12,2}|&\leq C \frac{\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\partial_{\alpha} Z|_{\text{inf}}^6}\|g\|_{\dot{C}^\sigma}\delta^3|h_n|\int_{B_{\eta}}|\alpha-\gamma|^{\sigma}\Big(\frac{(|\alpha-\gamma|^2+\delta^2)^{-\frac12}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac52}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+\frac{(|\alpha-\gamma|^2+\delta^2)^{-\frac52}}{(|\alpha-\gamma|^2+(h_n+\delta)^2)^{\frac12}}\Big)d\gamma\\ &\leq C \frac{\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\partial_{\alpha} Z|_{\text{inf}}^6}\|g\|_{\dot{C}^\sigma}\Big(\frac{\delta^2|h_n|}{|h_n\!+\!\delta|^{3-\sigma}}\!+\!\frac{|h_n|}{|h_n\!+\!\delta|^{1-\sigma}}\Big)\leq C \frac{\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\partial_{\alpha} Z|_{\text{inf}}^6}\|g\|_{\dot{C}^\sigma}|h_n|^{\sigma}. \end{aligned} \end{equation*} These bounds give the appropriate estimate for $J_{12}$. A similar splitting for $J_{13}$ allows us to obtain \begin{equation*} |J_{13}|\leq C \|g\|_{L^\infty}\Big(\frac{\|\partial_{\alpha} Z\|_{L^\infty}^4}{|\partial_{\alpha} Z|_{\text{inf}}^6}+\frac{\|\partial_{\alpha} Z\|_{L^\infty}^5}{|\partial_{\alpha} Z|_{\text{inf}}^7}+\frac{\|\partial_{\alpha} Z\|_{L^\infty}^6}{|\partial_{\alpha} Z|_{\text{inf}}^8}\Big)\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^{\sigma}.
\end{equation*} As we did for $J_7$ in \eqref{J7aux}, after a change of variables, the radial part of the integrals in $J_{14}$ can be integrated to obtain the desired bound. This yields the appropriate estimate for $I_{2,3}$ and completes the control of the term $I_2$ \eqref{I2split}: \begin{equation}\label{I2bound} \begin{aligned} |I_2|&\leq C \hspace{0.05cm} \|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^2\|F(Z)\|_{L^\infty}^5(1+\|\partial_{\alpha} Z\|_{L^\infty}^{8}\|F(Z)\|_{L^\infty}^{8})\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_n|^\sigma\\ &\quad+C \hspace{0.05cm} \|g\|_{\dot{C}^\sigma}\|\partial_{\alpha} Z\|_{L^\infty}^3\|F(Z)\|_{L^\infty}^6(1+\|\partial_{\alpha} Z\|_{L^\infty}^6\|F(Z)\|_{L^\infty}^6)|h_n|^\sigma. \end{aligned} \end{equation} Together with \eqref{I1bound}, it provides the desired bound for $I$ \eqref{Isplit}. Recalling \eqref{gbound}, this completes the proof of Case 1: \begin{equation}\label{normal_bound} \begin{aligned} |S(f)(Z(\alpha)+(\delta+h_n)\tilde{N}(\alpha))-S(f)(Z(\alpha)+\delta &\tilde{N}(\alpha))|\leq\\ P(\|\partial_{\alpha} Z\|_{L^\infty}\!&+\!\|F(Z)\|_{L^\infty})\big(\|f\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\!+\! \|f\|_{\dot{C}^\sigma}\big)|h_n|^\sigma. \end{aligned} \end{equation} \noindent\textbf{Case 2: Tangential direction.} Here we consider the first difference in \eqref{normaltangentSplit}. Recall that in the case under consideration, where $|h_\tau|\leq \frac14\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\delta$, we have $\delta+h_n>0$. Hence, for simplicity of notation, we carry out the estimate for some $\delta>0$ and later apply it with $\delta+h_n$. We keep the notation $x=Z(\alpha)+\delta \tilde{N}(\alpha)$.
First, we split as before \begin{equation}\label{holder_tan} \begin{aligned} S(f)(x+h_\tau\cdot\partial_{\alpha} Z(\alpha))-S(f)(x)&=\text{pv}\int_{Z(B_\eta)}\Big(k(x+h_\tau\cdot\partial_{\alpha} Z(\alpha)-y)-k(x-y)\Big)f(y)dS(y)\\ &+\int_{\partial D \smallsetminus Z(B_\eta)}\Big(k(x+h_\tau\cdot\partial_{\alpha} Z(\alpha)-y)-k(x-y)\Big)f(y)dS(y)\\ &=I+II, \end{aligned} \end{equation} where \begin{equation*} B_\eta=\{\gamma\in\mathbb{R}^2:|\alpha-\gamma|<\eta\}. \end{equation*} The second term is again more regular, \begin{equation*} \begin{aligned} |II|\leq \frac{C}{\eta^3|\partial_{\alpha} Z|_{\inf}}|\partial D|\|f\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}|h_\tau|. \end{aligned} \end{equation*} The first term is given by \begin{equation*} \begin{aligned} I&=\int_{B_\eta} \frac{\prod_{j=1}^3(x_j+h_\tau\cdot\partial_{\alpha} Z_j(\alpha)-Z_j(\gamma))}{|x+h_\tau\cdot\partial_{\alpha} Z(\alpha)-Z(\gamma)|^5}g(\gamma)d\gamma-\int_{B_\eta} \frac{\prod_{j=1}^3(x_j-Z_j(\gamma))}{|x-Z(\gamma)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} and we decompose it as follows \begin{equation}\label{Isplit_tangent} I=\sum_{i=1}^6 I_{i}, \end{equation} where \begin{equation}\label{I1_tangent} \begin{aligned} I_1&= \int_{B_\eta}\!\!\! \frac{\prod_{j=1}^2(Z_j(\alpha)\!-\!Z_j(\gamma)\!+\!h_\tau\cdot\partial_{\alpha} Z_j(\alpha)\!+\!\delta\tilde{N}_j(\alpha))(Z_3(\alpha)\!-\!Z_3(\gamma)\!-\!(\alpha\!-\!\gamma)\cdot \partial_{\alpha} Z_3(\alpha))}{|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta \tilde{N}(\alpha)|^5}g(\gamma)d\gamma\\ &\quad-\int_{B_\eta} \frac{\prod_{j=1}^2(Z_j(\alpha)-Z_j(\gamma)+\delta\tilde{N}_j(\alpha))(Z_3(\alpha)-Z_3(\gamma)-(\alpha-\gamma)\cdot \partial_{\alpha} Z_3(\alpha))}{|Z(\alpha)-Z(\gamma)+\delta \tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation} \begin{equation}\label{I2_tangent} \begin{aligned} I_2&= \int_{B_\eta}\!\!\!
\frac{((\alpha-\gamma+h_\tau)\cdot\partial_{\alpha} Z_3(\alpha)+\delta\tilde{N}_3(\alpha))(Z_2(\alpha)-Z_2(\gamma)-(\alpha-\gamma)\cdot\partial_{\alpha} Z_2(\alpha))}{|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta \tilde{N}(\alpha)|^5}\\ &\qquad\times(Z_1(\alpha)-Z_1(\gamma)+h_\tau\cdot\partial_{\alpha} Z_1(\alpha)+\delta\tilde{N}_1(\alpha))g(\gamma)d\gamma\\ & \quad-\int_{B_\eta} \frac{((\alpha-\gamma)\cdot\partial_{\alpha} Z_3(\alpha)+\delta\tilde{N}_3(\alpha))(Z_2(\alpha)-Z_2(\gamma)-(\alpha-\gamma)\cdot\partial_{\alpha} Z_2(\alpha))}{|Z(\alpha)-Z(\gamma)+\delta \tilde{N}(\alpha)|^5}\\ &\qquad\times (Z_1(\alpha)-Z_1(\gamma)+\delta\tilde{N}_1(\alpha)) g(\gamma)d\gamma, \end{aligned} \end{equation} \begin{equation}\label{I3_tangent} \begin{aligned} I_3&= \int_{B_\eta}\!\!\! \frac{\prod_{j=2}^3((\alpha-\gamma+h_\tau)\cdot \partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))(Z_1(\alpha)-Z_1(\gamma)-(\alpha-\gamma)\cdot\partial_{\alpha} Z_1(\alpha))}{|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta \tilde{N}(\alpha)|^5}g(\gamma)d\gamma\\ &\quad-\int_{B_\eta} \frac{\prod_{j=2}^3((\alpha-\gamma)\cdot \partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))(Z_1(\alpha)-Z_1(\gamma)-(\alpha-\gamma)\cdot\partial_{\alpha} Z_1(\alpha))}{|Z(\alpha)-Z(\gamma)+\delta \tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation} \begin{equation}\label{I4_tangent} \begin{aligned} I_4&= \int_{B_\eta}\!\!\! 
\frac{\prod_{j=1}^3((\alpha-\gamma+h_\tau)\cdot \partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))(g(\gamma)-g(\alpha))}{|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta \tilde{N}(\alpha)|^5}d\gamma\\ &\quad-\int_{B_\eta} \frac{\prod_{j=1}^3((\alpha-\gamma)\cdot \partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))(g(\gamma)-g(\alpha))}{|Z(\alpha)-Z(\gamma)+\delta \tilde{N}(\alpha)|^5}d\gamma, \end{aligned} \end{equation} \begin{equation}\label{I5_tangent} \begin{aligned} I_5&= \int_{B_\eta} g(\alpha)\prod_{j=1}^3((\alpha-\gamma+h_\tau)\cdot \partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))\\ &\qquad\times\Big(\frac{1}{|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta \tilde{N}(\alpha)|^5}-\frac{1}{\big(|\partial_{\alpha} Z(\alpha)(\alpha-\gamma+h_\tau)|^2+\delta^2|\tilde{N}(\alpha)|^2\big)^{\frac52}}\Big)d\gamma\\ &\quad-\int_{B_\eta} g(\alpha)\prod_{j=1}^3((\alpha-\gamma)\cdot \partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))\\ &\quad\qquad\times \Big(\frac{1}{|Z(\alpha)-Z(\gamma)+\delta \tilde{N}(\alpha)|^5}-\frac{1}{\big(|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^2+\delta^2|\tilde{N}(\alpha)|^2\big)^{\frac52}}\Big)d\gamma, \end{aligned} \end{equation} \begin{equation}\label{I6_tangent} \begin{aligned} I_6&= g(\alpha)\int_{B_\eta} \frac{\prod_{j=1}^3((\alpha-\gamma+h_\tau)\cdot \partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))}{\big(|\partial_{\alpha} Z(\alpha)(\alpha-\gamma+h_\tau)|^2+\delta^2|\tilde{N}(\alpha)|^2\big)^{\frac52}}d\gamma\\ &\quad-g(\alpha)\int_{B_\eta} \frac{\prod_{j=1}^3((\alpha-\gamma)\cdot \partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))}{\big(|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^2+\delta^2|\tilde{N}(\alpha)|^2\big)^{\frac52}}d\gamma. \end{aligned} \end{equation} We proceed to estimate each of these terms.
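Before doing so, it is worth noting that the decomposition \eqref{Isplit_tangent} is exact at the integrand level: the six numerators telescope. The following short script verifies this numerically; the arrays standing in for $\partial_{\alpha}Z(\alpha)$, $Z(\alpha)$, $Z(\gamma)$, $\tilde{N}(\alpha)$ and the values of $g$ are arbitrary placeholder data, not the actual parametrization.

```python
import random

# Sanity check: the splitting I = I_1 + ... + I_6 is an exact algebraic
# identity at the integrand level.  All data below are arbitrary
# placeholders (not the actual surface parametrization).
random.seed(0)
dZ = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]  # rows: dZ_j(alpha)
Za = [random.uniform(-1, 1) for _ in range(3)]        # Z(alpha)
Zg = [random.uniform(-1, 1) for _ in range(3)]        # Z(gamma)
N = [random.uniform(-1, 1) for _ in range(3)]         # N~(alpha)
am, ht = [0.3, -0.2], [0.05, -0.07]                   # alpha-gamma, h_tau
delta, ga, gg = 0.4, 1.3, 0.7                         # delta, g(alpha), g(gamma)

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
norm = lambda v: sum(c * c for c in v) ** 0.5
wh = [am[0] + ht[0], am[1] + ht[1]]                   # alpha - gamma + h_tau

Lh = [dot(wh, dZ[j]) + delta * N[j] for j in range(3)]  # linear parts, shifted
L0 = [dot(am, dZ[j]) + delta * N[j] for j in range(3)]  # linear parts
e = [Za[j] - Zg[j] - dot(am, dZ[j]) for j in range(3)]  # Taylor errors
A = [e[j] + Lh[j] for j in range(3)]    # Z(a)-Z(g)+h_tau.dZ(a)+delta*N~(a)
B = [e[j] + L0[j] for j in range(3)]    # Z(a)-Z(g)+delta*N~(a)
vh = (sum(dot(wh, dZ[j]) ** 2 for j in range(3)) + delta**2 * dot(N, N)) ** 0.5
v0 = (sum(dot(am, dZ[j]) ** 2 for j in range(3)) + delta**2 * dot(N, N)) ** 0.5

I = A[0] * A[1] * A[2] * gg / norm(A) ** 5 - B[0] * B[1] * B[2] * gg / norm(B) ** 5
I1 = (A[0] * A[1] * e[2] / norm(A) ** 5 - B[0] * B[1] * e[2] / norm(B) ** 5) * gg
I2 = (Lh[2] * e[1] * A[0] / norm(A) ** 5 - L0[2] * e[1] * B[0] / norm(B) ** 5) * gg
I3 = (Lh[1] * Lh[2] * e[0] / norm(A) ** 5 - L0[1] * L0[2] * e[0] / norm(B) ** 5) * gg
I4 = (Lh[0] * Lh[1] * Lh[2] / norm(A) ** 5
      - L0[0] * L0[1] * L0[2] / norm(B) ** 5) * (gg - ga)
I5 = ga * (Lh[0] * Lh[1] * Lh[2] * (1 / norm(A) ** 5 - 1 / vh ** 5)
           - L0[0] * L0[1] * L0[2] * (1 / norm(B) ** 5 - 1 / v0 ** 5))
I6 = ga * (Lh[0] * Lh[1] * Lh[2] / vh ** 5 - L0[0] * L0[1] * L0[2] / v0 ** 5)

assert abs(I - (I1 + I2 + I3 + I4 + I5 + I6)) < 1e-9 * max(1.0, abs(I))
```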
We first estimate the denominator as follows \begin{equation*} \begin{aligned} D&=|x-Z(\gamma)+h_{\tau}\cdot \partial_{\alpha} Z(\alpha)|^2\\ &=|Z(\alpha)-Z(\gamma)|^2+|h_\tau\cdot\partial_{\alpha} Z(\alpha)|^2+\delta^2|\tilde{N}(\alpha)|^2\\ &\quad+2h_\tau \cdot\partial_{\alpha} Z(\alpha)\cdot(Z(\alpha)-Z(\gamma))+2\delta(Z(\alpha)-Z(\gamma)-\partial_{\alpha} Z(\alpha)(\alpha-\gamma))\cdot\tilde{N}(\alpha)\\ &\geq \frac{|Z(\alpha)-Z(\gamma)|^2}{|\alpha-\gamma|^2} |\alpha-\gamma|^2+|h_\tau\cdot\partial_{\alpha} Z(\alpha)|^2+\delta^2|\tilde{N}(\alpha)|^2\\ &\quad-2\delta|\alpha-\gamma|^{1+\sigma}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|\tilde{N}(\alpha)|-2|\alpha-\gamma||h_\tau\cdot\partial_{\alpha} Z(\alpha)|\frac{|Z(\alpha)-Z(\gamma)|}{|\alpha-\gamma|}. \end{aligned} \end{equation*} The last two terms satisfy that \begin{equation*} \begin{aligned} 2\delta|\alpha-\gamma|^{1+\sigma}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|\tilde{N}(\alpha)|&\leq\frac{1-\sigma}2\delta^2|\tilde{N}(\alpha)|^2\\ &\hspace{-0.3cm}+ \frac{1+\sigma}2 2^{\frac{2}{1+\sigma}}\delta^{\frac{2\sigma}{1+\sigma}}\|\tilde{N}\|_{L^\infty}^{\frac{2\sigma}{1+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}^{\frac{2}{1+\sigma}}|\alpha-\gamma|^2, \end{aligned} \end{equation*} and \begin{equation*} 2|\alpha-\gamma||h_\tau\cdot\partial_{\alpha} Z(\alpha)|\frac{|Z(\alpha)-Z(\gamma)|}{|\alpha-\gamma|}\leq \frac14|\alpha-\gamma|^2\frac{|Z(\alpha)-Z(\gamma)|^2}{|\alpha-\gamma|^2}+4|h_\tau\cdot\partial_{\alpha} Z(\alpha)|^2. \end{equation*} The choice of the cutoff for $\delta$ \eqref{delta_cutoff} and the fact that we are in the case where $|h_\tau|\leq \frac14\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\delta$ provide that \begin{equation}\label{denominatorb2_tan} \begin{aligned} D&\geq \frac12 |\partial_{\alpha} Z|_{\text{inf}}^2\Big( |\alpha-\gamma|^2+\frac12\delta^2\Big).
\end{aligned} \end{equation} Next, we split $I_1$ \eqref{Isplit_tangent} as follows \begin{equation}\label{I1split_tangent} \begin{aligned} I_1=I_{1,1}+I_{1,2}+I_{1,3}, \end{aligned} \end{equation} with \begin{equation*} \begin{aligned} I_{1,1}&=h_\tau\cdot\partial_{\alpha} Z_1(\alpha)\\ &\quad\times\int_{B_\eta}\!\! \frac{(Z_2(\alpha)\!-\!Z_2(\gamma)+h_\tau\cdot\partial_{\alpha} Z_2(\alpha)\!+\!\delta\tilde{N}_2(\alpha))(Z_3(\alpha)\!-\!Z_3(\gamma)\!-\!(\alpha\!-\!\gamma)\cdot\partial_{\alpha} Z_3(\alpha))}{|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta \tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,2}&\!=\!h_\tau\cdot\partial_{\alpha} Z_2(\alpha)\!\int_{B_\eta}\!\! \frac{(Z_1(\alpha)\!-\!Z_1(\gamma)\!+\!\delta\tilde{N}_1(\alpha))(Z_3(\alpha)\!-\!Z_3(\gamma)\!-\!(\alpha\!-\!\gamma)\cdot\partial_{\alpha} Z_3(\alpha))}{|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta \tilde{N}(\alpha)|^5}g(\gamma)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{1,3}&=\int_{B_\eta} \prod_{j=1}^2(Z_j(\alpha)\!-\!Z_j(\gamma)\!+\!\delta\tilde{N}_j(\alpha))(Z_3(\alpha)\!-\!Z_3(\gamma)\!-\!(\alpha\!-\!\gamma)\cdot\partial_{\alpha} Z_3(\alpha))g(\gamma)\\ &\qquad\Big(\frac{1}{|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta \tilde{N}(\alpha)|^5}-\frac{1}{|Z(\alpha)-Z(\gamma)+\delta \tilde{N}(\alpha)|^5}\Big)d\gamma. 
\end{aligned} \end{equation*} We have that \begin{equation*} \begin{aligned} |I_{1,1}|&\leq |h_\tau|\frac{|\partial_{\alpha} Z_1(\alpha)|\|\partial_{\alpha} Z_2\|_{L^\infty}\|\partial_{\alpha} Z_3\|_{\dot{C}^\sigma}\|g\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}^5} \int_{B_\eta} \frac{(|\alpha-\gamma|+|h_\tau|+\delta)|\alpha-\gamma|^{1+\sigma}}{\Big(|\alpha-\gamma|^2+\frac12\delta^2\Big)^{\frac52}}d\gamma\\ &\leq\frac{|h_\tau|}{\delta^{1-\sigma}}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^2\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}\|g\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}^5} \int_{\mathbb{R}^2} \frac{(|\gamma|+\frac{|h_\tau|}{\delta}+1)|\gamma|^{1+\sigma}}{\Big(|\gamma|^2+\frac12\Big)^{\frac52}}d\gamma, \end{aligned} \end{equation*} and hence, recalling that we are dealing with the case where \eqref{htau_menor_delta} holds, we obtain that \begin{equation*} \begin{aligned} |I_{1,1}|\leq C\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{1+\sigma}\|g\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}^{4+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma. \end{aligned} \end{equation*} Analogously, we find that \begin{equation*} \begin{aligned} |I_{1,2}|\leq C\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{1+\sigma}\|g\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}^{4+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma. \end{aligned} \end{equation*} If we denote \begin{equation}\label{uh_tan} \begin{aligned} u_h&=|Z(\alpha)-Z(\gamma)+h_\tau\cdot\partial_{\alpha} Z(\alpha)+\delta\tilde{N}(\alpha)|,\\ u_0&=|Z(\alpha)-Z(\gamma)+\delta\tilde{N}(\alpha)|, \end{aligned} \end{equation} we have that \begin{equation*} \begin{aligned} \frac{1}{u_h^5}-\frac{1}{u_0^5}=G(u_h,u_0)(u_0^2-u_h^2), \end{aligned} \end{equation*} where $G$ is defined in \eqref{Guhvh}.
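This identity can be checked directly. Since the explicit expression \eqref{Guhvh} for $G$ is not reproduced here, we take, as an assumption consistent with the splitting of $J_{2,k}$ used below, the form $G(a,b)=\frac{1}{a+b}\sum_{k=1}^{5}a^{k-6}b^{-k}$; a quick numerical verification:

```python
import random

# Sanity check of 1/u^5 - 1/v^5 = G(u,v)(v^2 - u^2), assuming the form
# G(a,b) = (1/(a+b)) * sum_{k=1}^5 a^{k-6} b^{-k}, which is consistent
# with the splitting of the terms J_{2,k} later in the section.
def G(a, b):
    return sum(a ** (k - 6) * b ** (-k) for k in range(1, 6)) / (a + b)

random.seed(1)
for _ in range(100):
    u = random.uniform(0.1, 5.0)
    v = random.uniform(0.1, 5.0)
    lhs = 1 / u**5 - 1 / v**5
    rhs = G(u, v) * (v**2 - u**2)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

The identity is exact: $(v-u)\sum_{k=1}^{5}a^{k-6}b^{-k}$ with $a=u$, $b=v$ telescopes to $u^{-5}-v^{-5}$.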
Since \begin{equation}\label{uhu0_tan} \begin{aligned} u_0^2-u_h^2&=-|h_\tau\cdot\partial_{\alpha} Z(\alpha)|^2-2(Z(\alpha)-Z(\gamma)+\delta \tilde{N}(\alpha))\cdot(h_\tau\cdot\partial_{\alpha} Z(\alpha))\\ &=-|h_\tau\cdot\partial_{\alpha} Z(\alpha)|^2-2(Z(\alpha)-Z(\gamma))\cdot h_\tau\cdot\partial_{\alpha} Z(\alpha), \end{aligned} \end{equation} we obtain that \begin{equation*} \begin{aligned} |I_{1,3}|&\leq C\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{3+\sigma}\|g\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}^{6+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma, \end{aligned} \end{equation*} and thus \begin{equation}\label{I1bound_tan} |I_1|\leq C\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{1+\sigma}\|g\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}^{4+\sigma}}\Big(1+\frac{\|\partial_{\alpha} Z\|_{L^\infty}^2}{|\partial_{\alpha} Z|_{\inf}^{2}}\Big)\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma. \end{equation} The terms $I_2$ and $I_3$ \eqref{I2_tangent}, \eqref{I3_tangent} follow similarly and share the same bound \begin{equation}\label{I2I3bound_tan} |I_2|+|I_3|\leq C\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{1+\sigma}\|g\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}^{4+\sigma}}\Big(1+\frac{\|\partial_{\alpha} Z\|_{L^\infty}^2}{|\partial_{\alpha} Z|_{\inf}^{2}}\Big)\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma, \end{equation} while for $I_4$ we find that \begin{equation}\label{I4bound_tan} |I_4|\leq C\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{2+\sigma}}{|\partial_{\alpha} Z|_{\inf}^{4+\sigma}}\Big(1+\frac{\|\partial_{\alpha} Z\|_{L^\infty}^2}{|\partial_{\alpha} Z|_{\inf}^{2}}\Big)\|g\|_{\dot{C}^\sigma}|h_\tau|^\sigma. \end{equation} We proceed to estimate $I_{5}$.
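The second equality in \eqref{uhu0_tan} drops the term $2\delta\tilde{N}(\alpha)\cdot(h_\tau\cdot\partial_{\alpha} Z(\alpha))$, which vanishes because $\tilde{N}(\alpha)$ is orthogonal to the tangential directions. Assuming, purely for illustration, that $\tilde{N}=\partial_{\alpha_1}Z\times\partial_{\alpha_2}Z$ (the precise definition of $\tilde{N}$ is given earlier in the paper), this orthogonality can be checked as follows:

```python
import random

# The passage from the first to the second line of (uhu0_tan) uses that
# N~(alpha) is orthogonal to the tangential displacement h_tau . dZ(alpha).
# Here N~ = d1Z x d2Z is an illustrative assumption, not the paper's
# exact definition of N~.
random.seed(2)
d1 = [random.uniform(-1, 1) for _ in range(3)]   # d/d(alpha_1) Z(alpha)
d2 = [random.uniform(-1, 1) for _ in range(3)]   # d/d(alpha_2) Z(alpha)
N = [d1[1] * d2[2] - d1[2] * d2[1],
     d1[2] * d2[0] - d1[0] * d2[2],
     d1[0] * d2[1] - d1[1] * d2[0]]              # cross product d1 x d2
ht = [0.05, -0.07]                               # h_tau
disp = [ht[0] * d1[j] + ht[1] * d2[j] for j in range(3)]  # h_tau . dZ(alpha)
assert abs(sum(N[j] * disp[j] for j in range(3))) < 1e-12
```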
We recall the notation \eqref{uh_tan} and define \begin{equation}\label{vh_tan} \begin{aligned} v_h&=\big(|\partial_{\alpha} Z(\alpha)(\alpha-\gamma+h_\tau)|^2+\delta^2|\tilde{N}(\alpha)|^2\big)^{\frac12},\\ v_0&=\big(|\partial_{\alpha} Z(\alpha)(\alpha-\gamma)|^2+\delta^2|\tilde{N}(\alpha)|^2\big)^{\frac12}, \end{aligned} \end{equation} for which we have the lower bound, also valid for $u_h$ \eqref{uh_tan}, \begin{equation}\label{uhvh_lower_tan} \begin{aligned} |v_h|^2&\geq \frac12 |\partial_{\alpha} Z|_{\text{inf}}^2\frac{|\partial_{\alpha} Z|_{\text{inf}}^2}{\|\partial_{\alpha} Z\|_{L^\infty}^2}\Big( |\alpha-\gamma|^2+\frac12\delta^2\Big). \end{aligned} \end{equation} Then, we perform the following splitting \begin{equation}\label{I5split_tan} \begin{aligned} I_5&=I_{5,1}+I_{5,2}+I_{5,3}+I_{5,4}, \end{aligned} \end{equation} where \begin{equation*} \begin{aligned} I_{5,1}&=g(\alpha)h_\tau\cdot\partial_{\alpha} Z_1(\alpha)\int_{B_\eta}\prod_{j=2}^3 ((\alpha-\gamma+h_\tau)\cdot\partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))\big(u_h^{-5}-v_h^{-5}\big)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{5,2}&\!=\!g(\alpha)h_\tau\!\cdot\!\partial_{\alpha} Z_2(\alpha)\!\!\int_{B_\eta}\!\!\!\!\!\!((\alpha\!-\!\gamma)\!\cdot\!\partial_{\alpha} Z_1(\alpha)\!+\!\delta\tilde{N}_1(\alpha))((\alpha\!-\!\gamma\!+\!h_\tau\!)\cdot\!\partial_{\alpha} Z_3(\alpha)\!+\!\delta\tilde{N}_3(\alpha))\big(u_h^{-5}\!-\!v_h^{-5}\big)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{5,3}&=g(\alpha)h_\tau\cdot\partial_{\alpha} Z_3(\alpha)\int_{B_\eta}\prod_{j=1}^2 ((\alpha-\gamma)\cdot\partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))\big(u_h^{-5}-v_h^{-5}\big)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} I_{5,4}&=g(\alpha)\int_{B_\eta}\prod_{j=1}^3 ((\alpha-\gamma)\cdot\partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))\big(u_h^{-5}-v_h^{-5}-u_0^{-5}+v_0^{-5}\big)d\gamma. 
\end{aligned} \end{equation*} We bound $I_{5,1}$ as follows \begin{equation*} \begin{aligned} |I_{5,1}|&\leq \|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^3|h_\tau|\int_{B_\eta}(|\alpha-\gamma+h_\tau|+\delta)^2 |u_h^{-5}-v_h^{-5}|d\gamma. \end{aligned} \end{equation*} We have that \begin{equation}\label{uh_vh_tan} \begin{aligned} u_h^2-v_h^2&=|Z(\alpha)-Z(\gamma)-(\alpha-\gamma)\cdot\partial_{\alpha} Z(\alpha)|^2\\ &\quad+2\big((\alpha-\gamma+h_\tau)\cdot\partial_{\alpha} Z(\alpha)+\delta\tilde{N}(\alpha)\big)\cdot\big(Z(\alpha)-Z(\gamma)-(\alpha-\gamma)\cdot\partial_{\alpha} Z(\alpha)\big)\\ &\leq C\big(|\alpha-\gamma|+|\alpha-\gamma+h_\tau|+\delta\big)\|\partial_{\alpha} Z\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}|\alpha-\gamma|^{1+\sigma}, \end{aligned} \end{equation} thus \begin{equation*} \begin{aligned} |u_h^{-5}-v_h^{-5}|&\leq C\frac{\|\partial_{\alpha} Z\|_{L^\infty}^7}{|\partial_{\alpha} Z|_{\inf}^{14}} \frac{|u_h^2-v_h^2|}{\big(|\alpha-\gamma|^2+\frac12\delta^2\big)^{\frac72}}\\ &\leq C \frac{\|\partial_{\alpha} Z\|_{L^\infty}^8\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}}{|\partial_{\alpha} Z|_{\inf}^{14}}\frac{\big(|\alpha-\gamma|+|\alpha-\gamma+h_\tau|+\delta\big)|\alpha-\gamma|^{1+\sigma}}{\big(|\alpha-\gamma|^2+\frac12\delta^2\big)^{\frac72}}. \end{aligned} \end{equation*} Introducing this bound back, we obtain that \begin{equation}\label{I51bound_tan} \begin{aligned} |I_{5,1}|&\leq C\frac{\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^{11}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}}{|\partial_{\alpha} Z|_{\inf}^{14}}|h_\tau|\int_{B_\eta}\frac{\big(|\alpha-\gamma|+|\alpha-\gamma+h_\tau|+\delta\big)^3|\alpha-\gamma|^{1+\sigma}}{\big(|\alpha-\gamma|^2+\frac12\delta^2\big)^{\frac72}} d\gamma\\ &\leq C\frac{\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^{10+\sigma}}{|\partial_{\alpha} Z|_{\inf}^{13+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}|h_\tau|^\sigma. 
\end{aligned} \end{equation} The same bound holds for the terms $I_{5,2}$ and $I_{5,3}$. We are left with $I_{5,4}$. We split it further as follows \begin{equation}\label{I54split_tan} \begin{aligned} I_{5,4}=J_1+J_2, \end{aligned} \end{equation} with \begin{equation*} \begin{aligned} J_1&=g(\alpha)\int_{B_\eta}\prod_{j=1}^3 ((\alpha-\gamma)\cdot\partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))G(u_h,u_0)\big(u_0^2-u_h^2-(v_0^2-v_h^2)\big)d\gamma, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} J_2&=g(\alpha)\int_{B_\eta}\prod_{j=1}^3 ((\alpha-\gamma)\cdot\partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))\big(G(u_h,u_0)-G(v_h,v_0)\big)\big(v_0^2-v_h^2\big)d\gamma, \end{aligned} \end{equation*} where $G$ is defined in \eqref{Guhvh}. From \eqref{vh_tan} we have that \begin{equation}\label{vhv0_tan} \begin{aligned} v_0^2-v_h^2&=-|h_\tau\cdot\partial_{\alpha} Z(\alpha)|^2-2(h_\tau\cdot\partial_{\alpha} Z)\cdot(\partial_{\alpha} Z(\alpha)(\alpha-\gamma)), \end{aligned} \end{equation} which, together with \eqref{uhu0_tan}, provides that \begin{equation*} \begin{aligned} J_1&=-2g(\alpha)\int_{B_\eta}\prod_{j=1}^3 ((\alpha-\gamma)\cdot\partial_{\alpha} Z_j(\alpha)+\delta\tilde{N}_j(\alpha))G(u_h,u_0)\\ &\quad\times\big(Z(\alpha)-Z(\gamma)-\partial_{\alpha} Z(\alpha)(\alpha-\gamma)\big)\cdot\big(h_\tau \cdot\partial_{\alpha} Z(\alpha)\big)d\gamma. \end{aligned} \end{equation*} Therefore, we obtain \begin{equation}\label{J1bound_tan} \begin{aligned} |J_1|\leq \frac{\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^{10+\sigma}}{|\partial_{\alpha} Z|_{\inf}^{13+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma. \end{aligned} \end{equation} We proceed to estimate $J_2$. 
We further split this term \begin{equation}\label{J2split_tan} J_{2}=\sum_{k=1}^6J_{2,k} \end{equation} where \begin{equation*} \begin{aligned} J_{2,k}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times \big(\frac1{u_h^{6-k}u_0^k}-\frac1{v_h^{6-k}v_0^k}\big)\frac{v_0^2-v_h^2}{u_h+u_0}d\gamma, \end{aligned} \end{equation*} for $1\leq k\leq 5$, and \begin{equation*} \begin{aligned} J_{2,6}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)(v_0^2-v_h^2)\\ &\hspace{2cm}\times \big(\frac1{v_h^5v_0}+\frac1{v_h^4v_0^2}+\frac1{v_h^3v_0^3}+\frac1{v_h^2v_0^4}+\frac1{v_hv_0^5}\big)\big(\frac{1}{u_h+u_0}-\frac1{v_h+v_0}\big)d\gamma. \end{aligned} \end{equation*} To control $J_{2,1}$, a further splitting is used: $$ J_{2,1}=\sum_{l=1}^6K_l $$ where \begin{equation*} \begin{aligned} K_{1}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times \frac1{u_h^5} \big(\frac1{u_0}-\frac1{v_0}\big)\frac{v_0^2-v_h^2}{u_h+u_0}d\gamma, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} K_{l}&=g(\alpha)\int_{B_\eta}\partial_{\alpha} Z_1(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_2(\alpha)\cdot(\alpha-\gamma)\partial_{\alpha} Z_3(\alpha)\cdot(\alpha-\gamma)\\ &\hspace{2cm}\times \frac1{v_0u_h^{6-l}v_h^{l-2}}\big(\frac1{u_h}-\frac1{v_h}\big)\frac{v_0^2-v_h^2}{u_h+u_0}d\gamma, \end{aligned} \end{equation*} for $2\leq l\leq 6$.
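The splitting $J_{2,1}=\sum_{l=1}^{6}K_l$ is again exact: after removing the common factor $\frac{v_0^2-v_h^2}{u_h+u_0}$ and the numerator, it amounts to the telescoping identity below, which can be sanity-checked numerically.

```python
import random

# Check the telescoping behind J_{2,1} = K_1 + ... + K_6 (common factors
# removed):
#   1/(uh^5 u0) - 1/(vh^5 v0)
#     = (1/uh^5)(1/u0 - 1/v0)
#       + sum_{l=2}^{6} (1/(v0 uh^{6-l} vh^{l-2})) (1/uh - 1/vh).
random.seed(4)
for _ in range(100):
    uh, u0, vh, v0 = (random.uniform(0.5, 3.0) for _ in range(4))
    lhs = 1 / (uh**5 * u0) - 1 / (vh**5 * v0)
    rhs = (1 / uh**5) * (1 / u0 - 1 / v0)
    for l in range(2, 7):
        rhs += (1 / (v0 * uh ** (6 - l) * vh ** (l - 2))) * (1 / uh - 1 / vh)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs), abs(rhs))
```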
Estimates \eqref{uh_vh_tan} and \eqref{vhv0_tan} allow us to get \begin{equation*} \begin{aligned} |K_1|&\leq C\|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{13}}{|\partial_{\alpha} Z|^{16}_{\text{inf}}}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}|h_\tau|\int_{B_\eta} \frac{|\alpha-\gamma|^{4+\sigma}(|\alpha-\gamma|+\delta)(|h_\tau|+|\alpha-\gamma|)}{(|\alpha-\gamma|^2+\frac12\delta^2)^{\frac92}} d\gamma\\ &\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{12+\sigma}}{|\partial_{\alpha} Z|_{\text{inf}}^{15+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma. \end{aligned} \end{equation*} The remaining terms $K_l$ are bounded similarly. Hence $J_{2,1}$ is controlled. The remaining terms $J_{2,k}$, $k=2,...,6$, in \eqref{J2split_tan} are controlled in a similar manner to $J_{2,1}$, and all of them share the same bound (we omit the details to avoid repetition, as the estimates follow along the lines below \eqref{v0-u0}). Therefore, \begin{equation}\label{J2bound_tan} \begin{aligned} |J_2|&\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{12+\sigma}}{|\partial_{\alpha} Z|_{\text{inf}}^{14+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma, \end{aligned} \end{equation} which together with \eqref{J1bound_tan} gives that \begin{equation}\label{I54bound_tan} |I_{5,4}|\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{12+\sigma}}{|\partial_{\alpha} Z|_{\text{inf}}^{15+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma.
\end{equation} Joining this bound with the ones for $I_{5,1}$, $I_{5,2}$, $I_{5,3}$ \eqref{I51bound_tan} back in \eqref{I5split_tan} provides that \begin{equation}\label{I5bound_tan} |I_{5}|\leq C \hspace{0.05cm} \|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{12+\sigma}}{|\partial_{\alpha} Z|_{\text{inf}}^{15+\sigma}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h_\tau|^\sigma. \end{equation} Finally, the term $I_6$ \eqref{Isplit_tangent} can be written as follows \begin{equation*} \begin{aligned} I_6&\!=\! g(\alpha)\!\int_{|w-\frac{h_\tau}{\delta}|\leq \frac{\eta}{\delta}} \frac{\prod_{j=1}^3(w\cdot \partial_{\alpha} Z_j(\alpha)\!+\!\tilde{N}_j(\alpha))}{\big(|\partial_{\alpha} Z(\alpha)w|^2\!+\!|\tilde{N}(\alpha)|^2\big)^{\frac52}}dw-g(\alpha)\int_{|w|\leq \frac{\eta}{\delta}} \frac{\prod_{j=1}^3(w\cdot \partial_{\alpha} Z_j(\alpha)\!+\!\tilde{N}_j(\alpha))}{\big(|\partial_{\alpha} Z(\alpha)w|^2\!+\!|\tilde{N}(\alpha)|^2\big)^{\frac52}}dw, \end{aligned} \end{equation*} and therefore \begin{equation*} \begin{aligned} |I_6|&\leq C\|g\|_{L^\infty}\int_{-\pi}^\pi\Big(\int_{\frac{\eta}{\delta}-\frac{|h_\tau|}{\delta}}^{\frac{\eta}{\delta}}+\int_{\frac{\eta}{\delta}}^{\frac{\eta}{\delta}+\frac{|h_\tau|}{\delta}}\Big) \frac{\prod_{j=1}^3|(\hat{w}\cdot\partial_{\alpha} Z_j(\alpha))r+\tilde{N}_j(\alpha)|}{\big(|\partial_{\alpha} Z(\alpha)\hat{w}|^2r^2\!+\!|\tilde{N}(\alpha)|^2\big)^{\frac52}}r\hspace{0.05cm} dr\hspace{0.05cm}d\theta\\ &\leq C\|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^8}{|\partial_{\alpha} Z|_{\inf}^{10}}\Big(\int_{\frac{\eta}{\delta}-\frac{|h_\tau|}{\delta}}^{\frac{\eta}{\delta}}+\int_{\frac{\eta}{\delta}}^{\frac{\eta}{\delta}+\frac{|h_\tau|}{\delta}}\Big)\frac{r(r^3+1)}{(r^2+1)^{\frac52}}dr\\ &\leq C\|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}^8}{|\partial_{\alpha} Z|_{\inf}^{10}}\frac{|h_\tau|}{\eta}.
\end{aligned} \end{equation*} Together with \eqref{I1bound_tan}, \eqref{I2I3bound_tan}, \eqref{I4bound_tan}, and \eqref{I5bound_tan} in \eqref{Isplit_tangent}, we conclude that \begin{equation}\label{Ibound_tan} \begin{aligned} |I|\leq C \bigg(&\frac{\|\partial_{\alpha} Z\|_{L^\infty}^{12+\sigma}}{|\partial_{\alpha} Z|_{\text{inf}}^{14+\sigma}}\Big(\|g\|_{\dot{C}^\sigma}+\|g\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}}{|\partial_{\alpha} Z|_{\inf}}\Big)+\frac{\|g\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}^8}{|\partial_{\alpha} Z|_{\inf}^{10}}|h_\tau|^{1-\sigma}\bigg)|h_\tau|^\sigma, \end{aligned} \end{equation} and hence, substituting \eqref{gbound}, \begin{equation*} \begin{aligned} |S(f)(x+h_\tau\cdot\partial_{\alpha}& Z(\alpha))-S(f)(x)|\\ &\leq C(1+|\partial D|) P(\|F(Z)\|_{L^\infty}\!+\!\|\partial_{\alpha} Z\|_{L^\infty})\Big(\|f\|_{\dot{C}^{\sigma}}+\|f\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\Big)|h_\tau|^\sigma. \end{aligned} \end{equation*} This concludes the proof of Case 2. Together with \eqref{normal_bound} (Case 1) and recalling the splitting \eqref{normaltangentSplit}, the estimate for two points near the boundary in \textit{nearly} normal direction, i.e., assuming \eqref{htau_menor_delta}, is complete. \subsubsection{{\em{\textbf{Regularity in {\em{nearly}} tangential direction:}}}} Here we consider the case $|h_\tau|\geq \frac14\frac{|\partial_{\alpha} Z|_{\inf}}{\|\partial_{\alpha} Z\|_{L^\infty}}\delta$. Recalling the expression \eqref{xh_normal} for $x+h$ in this case, we can write \begin{equation*} \begin{aligned} S(f)(x+h)-S(f)(x)&=S(f)(Z(\alpha+\lambda)+\mu\tilde{N}(\alpha+\lambda))-S(f)(Z(\alpha+\lambda))\\ &\quad+S(f)(Z(\alpha+\lambda))-S(f)(Z(\alpha))\\ &\quad+S(f)(Z(\alpha))-S(f)(Z(\alpha)+\delta\tilde{N}(\alpha)).
\end{aligned} \end{equation*} Given the bound \eqref{lambdabound} and that we are in the case $\delta<4\frac{\|\partial_{\alpha} Z\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}}|h_\tau|$, we can apply the previous H\"older estimates for $S(f)$ along the surface \eqref{bound_boundary} and in the normal direction \eqref{normal_bound} to obtain that \begin{multline}\label{tangentialestimate} |S(f)(x+h)-S(f)(x)|\leq \\ C(1+|\partial D|) P(\|F(Z)\|_{L^\infty}\!+\!\|\partial_{\alpha} Z\|_{L^\infty})\big(\|f\|_{C^{\sigma}}+\|f\|_{L^\infty}\|\partial_{\alpha} Z\|_{\dot{C}^{\sigma}}\big)|h|^\sigma. \end{multline} \subsection{Regularity far from the boundary}\label{reg_far} Consider two points $x$ and $x+h$ in $D$ (or analogously both in $\mathbb R^3\smallsetminus D$) that are sufficiently far from the boundary. That is, recalling \eqref{delta_cutoff}, we consider now that \begin{equation*} \min\{d(x,\partial D), d(x+h,\partial D)\}\geq L,\qquad |h|\leq \frac{L}{2}, \end{equation*} where \begin{equation*} L=\frac{1}{6}\Big(\frac{|\partial_{\alpha} Z|_{\inf}}{18\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}}\Big)^{\frac{1}{\sigma}}\Big(\frac{\|\partial_{\alpha} Z\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}}\Big)^{\frac12}. \end{equation*} Then, for any $y\in\partial D$ and $s\in(0,1)$, \begin{equation*} |x-y+sh|\geq |x-y|-|h|\geq \frac{|x-y|}{2}, \end{equation*} and thus the mean value theorem, together with the direct decay of the kernel, gives that \begin{equation*} \begin{aligned} |k(x+h-y)-k(x-y)|&\leq C\frac{|h|}{|x-y|^3},\\ |k(x+h-y)-k(x-y)|&\leq \frac{C}{|x-y|^2}.
\end{aligned} \end{equation*} Hence, interpolating the two bounds above, it follows that \begin{equation*} \begin{aligned} |S(f)(x+h)-S(f)(x)|&=\Big|\int_{\partial D}\big(k(x+h-y)-k(x-y)\big) f(y) dS(y)\Big|\\ &\leq C\|f\|_{L^\infty}\int_{\partial D}\frac{|h|^\sigma}{|x-y|^{2+\sigma}}dS(y)\\ &\leq C\|f\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}|h|^\sigma\int_L^\infty \frac{dr}{r^{1+\sigma}}, \end{aligned} \end{equation*} and we conclude that \begin{equation*} \begin{aligned} |S(f)(x+h)-S(f)(x)|&\leq C\frac{\|f\|_{L^\infty}\|\partial_{\alpha} Z\|_{L^\infty}}{L^\sigma}|h|^\sigma\\ &\leq C\|f\|_{L^\infty}\frac{\|\partial_{\alpha} Z\|_{L^\infty}}{|\partial_{\alpha} Z|_{\inf}}\frac{|\partial_{\alpha} Z|_{\inf}^{\frac{\sigma}{2}}}{\|\partial_{\alpha} Z\|_{L^\infty}^{\frac{\sigma}{2}}}\|\partial_{\alpha} Z\|_{\dot{C}^\sigma}|h|^\sigma. \end{aligned} \end{equation*} \qed \end{document}
\begin{document} \title{Simply $sm$-factorizable (para)topological groups and their completions} \author{Li-Hong Xie*} \address{(L.H. Xie) School of Mathematics and Computational Science, Wuyi University, Jiangmen 529020, P.R. China} \email{[email protected]; [email protected]} \thanks{*The research is supported by NSFC (Nos. 11601393; 11861018)} \author{Mikhail Tkachenko**} \address{(M. Tkachenko) Departamento de Matem\'aticas, Universidad Aut\'onoma Metropolitana, Av. San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, C.P. 09340, M\'exico City, Mexico} \email{[email protected]} \thanks{** Corresponding author} \subjclass[2010]{22A05, 22A30, 54H11, 54A25, 54C30} \keywords{Simply $sm$-factorizable; realcompactification; Dieudonn\'{e} completion; Lindel\"{o}f $\Sigma$-space; $\mathbb{R}$-factorizable group} \date{February 9, 2020} \begin{abstract} Let us call a (para)topological group \emph{strongly submetrizable} if it admits a coarser separable metrizable (para)topological group topology. We present a characterization of simply $sm$-factorizable (para)topo\-logical groups by means of continuous real-valued functions. We show that a (para)topo\-logical group $G$ is simply $sm$-factorizable if and only if for each continuous function $f\colon G\to \mathbb{R}$, one can find a continuous homomorphism $\varphi$ of $G$ onto a strongly submetrizable (para)topological group $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ\varphi$. This characterization is applied to the study of completions of simply $sm$-factorizable topological groups. We prove that the equalities $\mu{G}=\varrho_\omega{G}=\upsilon{G}$ hold for each Hausdorff simply $sm$-factorizable topological group $G$. This result gives a positive answer to a question posed by Arhangel'skii and Tkachenko in 2018. Also, we consider realcompactifications of simply $sm$-factorizable paratopological groups.
It is proved, among other results, that the realcompactification, $\upsilon{G}$, and the Dieudonn\'e completion, $\mu{G}$, of a regular simply $sm$-factorizable paratopological group $G$ coincide and that $\upsilon{G}$ admits the natural structure of a paratopological group containing $G$ as a dense subgroup and, furthermore, $\upsilon{G}$ is also simply $sm$-factorizable. Some results in [\emph{Completions of paratopological groups, Monatsh. Math. \textbf{183} (2017), 699--721}] are improved or generalized. \end{abstract} \maketitle \section{Introduction} A \emph{paratopological group} $G$ is a group $G$ with a topology such that the multiplication mapping of $G \times G$ to $G$, associating $xy$ with arbitrary $x, y\in G$, is jointly continuous. A paratopological group $G$ is called a \emph{topological group} if the inversion on $G$ is continuous. Slightly reformulating the celebrated theorem of Comfort and Ross \cite[Theorem~1.2]{CR}, we can say that the pseudocompact topological groups are exactly the dense $C$-embedded subgroups of compact topological groups. In particular, the Stone-\v{C}ech compactification, $\beta{G}$, the Hewitt-Nachbin completion, $\upsilon{G}$, and the Ra\u{\i}kov completion, $\varrho{G}$, of a pseudocompact topological group $G$ coincide. Hence the Hewitt-Nachbin completion of the group $G$ is again a topological group containing $G$ as a dense $C$-embedded subgroup.
Recently, with the idea to study connections between the properties of a Tychonoff space $X$ and its dense $C$-embedded subspace $Y$ homeomorphic to a topological group, the authors of \cite{AT1} introduced the new notions of $sm$-factorizable, densely $sm$-factorizable and simply $sm$-factorizable (para)topological groups as follows: \begin{definition}[See Definition~5.11 in \cite{AT1}] A (para)topological group $G$ is called \textit{$sm$-factorizable} if for each co-zero set $U$ in $G$, there exists a continuous homomorphism $\pi$ of $G$ onto a separable metrizable (para)topological group $H$ such that the set $\pi(U)$ is open in $H$ and $\pi^{-1}(\pi(U)) = U$. Replacing the assumption that $\pi(U)$ is open by the requirement that $\pi(U)$ is dense in some open subset of $H$, we obtain the definition of a \textit{densely $sm$-factorizable} (para)topological group. Removing the assumption that $\pi(U)$ is open, we obtain the definition of a \textit{simply $sm$-factorizable} (para)topological group. \end{definition} It is shown in \cite{AT1} that the following implications are valid (and none of these implications can be inverted, see \cite[Example~5.12, Proposition~5.13]{AT1}): $$ sm\text{-factorizability}\, \Rightarrow\, \text{dense~} sm\text{-factorizability}\, \Rightarrow\, \text{simple~} sm\text{-factorizability}. $$ Recall that a (para)topological group $G$ is called {\it $\mathbb{R}$-factorizable} if every continuous real-valued function on $G$ can be factorized through a continuous homomorphism onto a separable metrizable (para)topological group. Arhangel'skii and Tkachenko proved that a (para)topological group $G$ is $\mathbb{R}$-factorizable if and only if $G$ is $sm$-factorizable \cite[Theorem~5.9]{AT1} (in view of the proof of \cite[Theorem~5.9]{AT1}, it is worth noting that no separation axiom on $G$ is needed here). 
However, there exists a Hausdorff $\omega$-narrow simply $sm$-factorizable Abelian topological group which is not $\mathbb{R}$-factorizable \cite[Corollary 5.20]{AT1}. In \cite[Corollary~5.24]{AT1} it is shown that subgroups of Lindel\"{o}f topological groups need not be simply $sm$-factorizable. Since subgroups of Lindel\"{o}f topological groups are $\omega$-narrow, $\omega$-narrow topological groups can fail to be simply $sm$-factorizable. Also, there exists a densely $sm$-factorizable topological Abelian group $G$ such that $ib(G) = 2^\omega$ \cite[Example~5.12]{AT1}, so densely $sm$-factorizable topological groups can fail to be $\omega$-narrow. Now we summarize the relations between $\mathbb{R}$-factorizable, $sm$-factorizable, densely $sm$-factorizable, simply $sm$-factorizable and $\omega$-narrow (para)topological groups as follows: \begin{enumerate} \item[(1)] $\mathbb{R}\text{-factorizability} \Leftrightarrow sm\text{-factorizability}$; \item[(2)] $sm\text{-factorizability} \Rightarrow \text{dense~} sm\text{-factorizability} \Rightarrow \text{simple~} sm\text{-factorizability};$ \item[(3)] regularity $\&$ $sm$-factorizability $\Rightarrow$ total $\omega$-narrowness (in paratopo\-logical groups); \item[(4)] regularity $\&$ $\omega$-narrowness $\not \Rightarrow$ simple $sm$-factorizability (in topological groups); \item[(5)] regularity $\&$ dense $sm$-factorizability $\not\Rightarrow$ $\omega$-narrowness (in topological groups). \end{enumerate} We denote the Dieudonn\'{e} completion and the Hewitt-Nachbin realcompactification of a completely regular space $X$ by $\mu X$ and $\upsilon X$, respectively. Recall that a topological group $G$ is called a {\it $PT$-group} if the group operations in $G$ can be continuously extended to the Dieudonn\'{e} completion $\mu G$. 
Since every $\mathbb{R}$-factorizable topological group is a $PT$-group \cite[Corollary~8.3.7]{AT}, the authors of \cite{AT1} posed the following question: \begin{problem}[See Problem~7.6 in \cite{AT1}]\label{Q1.1} Is every simply $sm$-factorizable topological group a $PT$-group? What about densely $sm$-factorizable topological groups? \end{problem} Continuous open homomorphic images of $\mathbb{R}$-factorizable topological groups are $\mathbb{R}$-factorizable \cite[Theorem~8.4.2]{AT}, but it is unknown whether continuous homomorphisms preserve $\mathbb{R}$-factorizability of topological groups \cite[Open~Problem~8.4.1]{AT}. A weaker version of this problem is given below: \begin{problem}[See Problem~7.8 in \cite{AT1}]\label{Q1.2} Is every continuous homomorphic image of an $\mathbb{R}$-factorizable topological group $G$ simply $sm$-factorizable? \end{problem} It is known, however, that continuous homomorphic images of simply $sm$-factorizable topological groups need not be simply $sm$-factorizable \cite{AT1}. This makes it natural to raise the following question: \begin{problem}[See Problem~7.9 in \cite{AT1}]\label{Q1.3} Is every quotient group of a simply $sm$-factorizable topological group $G$ simply $sm$-factorizable? What if, additionally, $G$ is $\omega$-narrow? \end{problem} The article is organized as follows. In Section~\ref{Sec:2}, we present a characterization of simply $sm$-factorizable (para)topo\-logical groups in terms of continuous homomorphisms onto strongly submetrizable (para)topological groups. Applying this characterization, we give a positive answer to Problem~\ref{Q1.1} and partially answer Problems~\ref{Q1.2} and~\ref{Q1.3}. 
We establish the following facts: (1) A (para)topological group $G$ is simply $sm$-factorizable if and only if for each continuous function $f\colon G\to \mathbb{R}$, one can find a continuous homomorphism $\varphi$ of $G$ onto a strongly submetrizable (para)topo\-logical group $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ\varphi$ (see Theorem~\ref{th1}); (2) an $\omega$-narrow topological group $G$ is simply $sm$-factorizable if and only if for any continuous function $f\colon G\to \mathbb{R}$, there is an invariant admissible subgroup $N_f$ of $G$ such that $f$ is constant on $gN_f$ for each $g\in G$ (see Theorem~\ref{Th3}). Section~\ref{Sec:3} contains a positive answer to a question posed by Arhangel'skii and Tkachenko \cite[Problem~7.6]{AT1}. We show that the equalities $\mu{G}=\varrho_\omega{G}=\upsilon{G}$ hold for every Hausdorff simply $sm$-factorizable topological group $G$ and, therefore, $G$ is completion friendly (see Theorem~\ref{Th3.2}). In Section~\ref{Sec:4}, we study completions of simply $sm$-factorizable paratopological groups. We establish the following: (1) If $G$ is a regular simply $sm$-factorizable paratopological group, then the realcompactification $\upsilon{G}$ of the space $G$ admits a natural structure of a paratopological group containing $G$ as a dense subgroup and $\upsilon{G}$ is also simply $sm$-factorizable (Theorem~\ref{Th}); (2) If $G$ is a regular paratopological group such that the topological group $G^\ast$ associated to $G$ is $\omega$-narrow and simply $sm$-factorizable, then the realcompactification $\upsilon{G}$ of $G$ admits a natural structure of a paratopological group containing $G$ as a dense subgroup and the equality $\upsilon{G}=\mu{G}$ holds (Theorem~\ref{Th3.14}). Our results are accompanied by several examples and short discussions that outline the limits for their generalizations. 
We do not impose any separation restrictions on spaces and (para)topological groups unless the separation axioms are stated explicitly. A space $X$ satisfies the $T_3$-separation axiom if for any point $x\in X$ and a neighborhood $O$ of $x$, there is a neighborhood $U$ of $x$ such that $\overline{U}\subseteq O$. A space $X$ is \emph{regular} if it is a $T_3$-space satisfying the $T_1$-separation axiom. Let $X$ be a space with topology $\tau$. Then the family $\{\text{Int}_\tau\hskip1pt \overline{U}: \emptyset\neq U\in \tau\}$ constitutes a base for a coarser topology $\sigma$ on $X$. The space $X_{sr} = (X, \sigma)$ is called the \emph{semiregularization} of $X$. One of the main results regarding the semiregularization of paratopological groups is the following important theorem proved by Ravsky in \cite{Rav01}: \begin{theorem}\label{Th1} Let $G$ be an arbitrary paratopological group. Then the space $G_{sr}$ carrying the same group structure is a $T_3$ paratopological group. If $G$ is Hausdorff, then $G_{sr}$ is a regular paratopological group. \end{theorem} The \emph{index of narrowness} of a paratopological group $G$ is denoted by $ib(G)$ (see \cite[Section~5.2]{AT}). By definition, $ib(G)$ is the smallest cardinal $\tau\geq \omega$ such that for each open neighborhood $U$ of the identity in $G$, there is a subset $A$ of $G$ satisfying $AU=UA=G$ and $|A|\leq \tau$. If $ib(G)=\omega$, then $G$ is called \emph{$\omega$-narrow.} For a paratopological group $G$ with topology $\tau$, one defines the \textit{conjugate topology} $\tau^{-1}$ on $G$ by $\tau^{-1}=\{U^{-1}: U\in \tau\}$. Then $G' = (G, \tau^{-1})$ is also a paratopological group, and the inversion $x\rightarrow x^{-1}$ is a homeomorphism of $G$ onto $G'$. The upper bound $\tau^\ast=\tau\vee \tau^{-1}$ is a topological group topology on $G$, and we say that the topological group $G^\ast=(G,\tau^\ast)$ is \textit{associated} to $G$. 
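As a concrete illustration of the conjugate and associated topologies, consider the Sorgenfrey line, that is, the additive group $(\mathbb{R},+)$ equipped with the topology $\tau$ generated by the half-open intervals $[a,b)$; this standard example is added here for the reader's convenience and is not part of the results below.

```latex
% For the additive group (R, +) we have U^{-1} = -U, so the conjugate
% topology \tau^{-1} is generated by the reflected intervals
%   -[a,b) = (-b,-a].
% The associated topological group G^* = (R, \tau \vee \tau^{-1}) is
% discrete, since for every x in R and \varepsilon > 0 the basic
% \tau^*-neighborhood of x reduces to a singleton:
\[
  [x, x+\varepsilon) \cap (x-\varepsilon, x] = \{x\}.
\]
```

Thus the Sorgenfrey line is a Hausdorff paratopological group which is not a topological group, while its associated group $G^\ast$ is the discrete group of reals; being discrete and uncountable, $G^\ast$ is not $\omega$-narrow.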
A paratopological group $G$ is \emph{totally $\omega$-narrow} if the topological group $G^\ast$ associated to $G$ is $\omega$-narrow. A paratopological group $G$ is called \emph{$\omega$-balanced} if for every neighborhood $U$ of the identity $e$ in $G$ there is a countable family $\{V_n: n\in\omega\}$ of open neighborhoods of $e$ such that for each $g\in G$, some $V_n$ satisfies $gV_n g^{-1}\subseteq U.$ In this case we say that the family $\{V_n: n\in\omega\}$ is \emph{subordinated to $U$.} It is well known that every $\omega$-narrow topological group is $\omega$-balanced \cite[Proposition~3.4.10]{AT} and every totally $\omega$-narrow paratopological group is $\omega$-balanced \cite[Proposition~3.8]{ST2}. In this paper, $c(X)$ and $\psi(X)$ stand for the cellularity and pseudocharacter of $X$, respectively. The closure of a subset $Y$ of $X$ is denoted by $\overline{Y}$ or $\mathrm{cl}_X Y$ if we want to stress that the closure is taken in $X$. The cardinality of the continuum is $\mathfrak{c}=2^\omega$. \section{Characterizations of simply $sm$-factorizable (para)topological groups}\label{Sec:2} In this section, we present some characterizations of simply $sm$-factorizable topological and paratopological groups via continuous real-valued functions. The following notion plays an important role in this article. \begin{definition} A (para)topological group $G$ is \emph{strongly submetrizable} if $G$ admits a coarser separable metrizable (para)topological group topology or, equivalently, there exists a continuous one-to-one homomorphism of $G$ onto a separable metrizable (para)topo\-logical group. \end{definition} It is clear that every strongly submetrizable (para)topological group is Hausdorff and has countable pseudocharacter. Furthermore, the identity of a strongly submetrizable paratopological group is the intersection of countably many \emph{closed} neighborhoods. The following fact is obvious. 
\begin{proposition}\label{le1} \emph{Every strongly submetrizable (para)topological group is simply $sm$-factorizable.} \end{proposition} Every $\omega$-narrow Hausdorff topological group of countable pseudocharacter admits a continuous one-to-one homomorphism onto a separable metrizable topological group (see \cite[Corollary~3.4.25]{AT}). Similarly, a totally $\omega$-narrow regular paratopological group of countable pseudocharacter admits a continuous one-to-one homomorphism onto a separable metrizable paratopological group (see \cite[Lemma~1.6]{PZ}). Thus we have: \begin{lemma}\label{l1} Every $\omega$-narrow Hausdorff topological group (totally $\omega$-narrow regular paratopological group) of countable pseudocharacter is strongly submetrizable. \end{lemma} Now we give a characterization of simply $sm$-factorizable (para)topological groups in terms of continuous homomorphisms onto strongly submetrizable (para)topological groups as follows: \begin{theorem}\label{th1} Let $G$ be a (para)topological group. Then the following statements are equivalent: \begin{enumerate} \item[(1)] $G$ is simply $sm$-factorizable; \item[(2)] for each continuous function $f\colon G\to \mathbb{R}$, one can find a continuous homomorphism $\pi$ of $G$ onto a strongly submetrizable (para)topological group $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ \pi$; \item[(3)] for each continuous function $f\colon G\to \mathbb{R}$, one can find a continuous homomorphism $\pi$ of $G$ onto a regular strongly submetrizable (para)topological group $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ \pi$. \end{enumerate} \end{theorem} \begin{proof} (1)\,$\Rightarrow$\,(2). Let $\mathcal {V}$ be a countable base of $\mathbb{R}$ consisting of co-zero sets and let $f$ be a continuous real-valued function on $G$. For every $V\in\mathcal{V}$, let $U_V=f^{-1}(V)$. Then each element of the family $\mathcal {U}=\{U_{V}: V\in \mathcal {V}\}$ is a co-zero set in $G$. 
Since $G$ is simply $sm$-factorizable, for each $V\in\mathcal{V}$ we can find a separable metrizable (para)topological group $H_{V}$ and a continuous homomorphism $\pi_V$ of $G$ onto $H_V$ such that $U_V=\pi_{V}^{-1}(\pi_V(U_V))$. Let $\pi=\Delta_{V\in\mathcal {V}} \pi_V$ be the diagonal mapping of the family $\{\pi_V: V\in\mathcal{V}\}$ and $\Pi=\prod_{V\in\mathcal {V}} H_V$ be the topological product of the family $\{H_V: V\in\mathcal{V}\}$. Clearly, $\pi\colon G\to \pi(G)\subseteq \Pi$ is a continuous homomorphism and $H'= \pi(G)$ is a separable metrizable (para)topological group. Now let $H$ have the same group structure as $H'$ and endow $H$ with the quotient topology with respect to $\pi$. Then clearly $H$ is strongly submetrizable and $\pi\colon G\to H$ is open. Thus it suffices to show that there is a continuous function $g \colon H\to\mathbb{R}$ satisfying $f=g\circ \pi$. Since $\pi$ is continuous and open, the latter will follow if we show that the equality $f(g_1)=f(g_2)$ holds for all $g_1,g_2\in G$ with $\pi(g_1)=\pi(g_2)$. Indeed, if $f(g_1)\neq f(g_2)$, then there is a $V\in \mathcal {V}$ such that $f(g_2)\in V$ and $f(g_1)\notin V$. Thus $g_2\in U_V$ and $g_1\notin U_V$. Since $U_{V}=\pi_V^{-1}(\pi_V(U_V))$, we have $$ \pi^{-1}(\pi(g_2))=\bigcap_{W\in \mathcal {V}} \pi_W^{-1}(\pi_W(g_2))\subseteq \pi_V^{-1}(\pi_V(g_2))\subseteq \pi_V^{-1}(\pi_V(U_V))=U_V. $$ This implies that $\pi(g_1)\neq \pi(g_2)$ and completes the proof of the implication. (2)\,$\Rightarrow$\,(1). Let $U$ be a co-zero set in $G$. Then there is a continuous function $f\colon G\to\mathbb{R}$ such that $U=f^{-1}(\mathbb{R}\setminus \{0\})$. By (2), one can find a strongly submetrizable (para)topo\-logical group $H$, a continuous homomorphism $\pi$ of $G$ onto $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ \pi$. Let $i\colon H\to H'$ be a continuous isomorphism onto a separable metrizable (para)topological group $H'$. 
Then $\varphi=i\circ \pi$ is a continuous homomorphism of $G$ onto the separable metrizable group $H'$ and $U=\varphi^{-1}(\varphi(U))$. This shows that $G$ is simply $sm$-factorizable. Now we show that $(2)\Leftrightarrow (3)$. Since every $T_0$-topological group is regular, the equivalence $(2)\Leftrightarrow (3)$ for topological groups is obvious. For paratopological groups, it suffices to show that $(2)\Rightarrow (3)$. Let $G$ be a paratopological group satisfying $(2)$ and let $f$ be a continuous real-valued function on $G$. By (2), one can find a continuous homomorphism $\pi\colon G\to H$ onto a strongly submetrizable paratopological group $H$ and a continuous function $g$ on $H$ such that $f=g\circ\pi$. Then $H$ is a Hausdorff paratopological group. Let $H_{sr}$ be the semiregularization of $H$. Then $H_{sr}$ is a regular paratopological group, by Theorem~\ref{Th1}. Since every continuous mapping of a space $X$ to a regular space $Y$ remains continuous as a mapping of the semiregularization $X_{sr}$ of $X$ to $Y$ \cite[Lemma~3.5]{XST}, one can easily see that $H_{sr}$ is a strongly submetrizable paratopological group and $g\colon H_{sr}\to \mathbb{R}$ is continuous. Clearly, $f=g\circ (i\circ \pi)$, where $i\colon H\to H_{sr}$ is the identity mapping. The proof is complete. \end{proof} A space $X$ is called \emph{weakly Lindel\"{o}f} if for each open cover $\mathcal {U}$ of $X$, there exists a countable subfamily $\mathcal {V}$ of $\mathcal {U}$ such that $\bigcup\mathcal {V}$ is dense in $X$. \begin{corollary}\cite[Proposition 5.18]{AT1}\label{C1} Every weakly Lindel\"{o}f topological group $G$ is simply $sm$-factorizable. \end{corollary} \begin{proof} Take any continuous function $f\colon G\to \mathbb{R}$. 
According to \cite[Theorem~8.1.18]{AT}, one can find a continuous homomorphism $\pi\colon G \to H$ onto a topological group $H$ of countable pseudocharacter and a continuous real-valued function $h$ on $H$ such that $f=h\circ \pi$. Every weakly Lindel\"{o}f topological group is $\omega$-narrow \cite[Proposition~5.2.8]{AT}. Hence the group $G$ and its continuous homomorphic image $H$ are $\omega$-narrow as well. According to Lemma~\ref{l1}, $H$ is a strongly submetrizable topological group and, therefore, $G$ is simply $sm$-factorizable by Theorem~\ref{th1}. \end{proof} Let $\varphi\colon G\to H$ be a continuous surjective homomorphism of semitopological groups. The pair $(H, \varphi)$ is called a \emph{$T_2$-reflection} of $G$ if $H$ is a Hausdorff semitopological group and for every continuous mapping $f\colon G \to X$ of $G$ to a Hausdorff space $X$, there exists a continuous mapping $h\colon H\to X$ such that $f = h\circ\varphi$. Abusing terminology, we say that $T_2(G)$ is the $T_2$-reflection of $G$, thus omitting the corresponding homomorphism $\varphi$ (see \cite{T2}). The homomorphism $\varphi$ is denoted by $\varphi_{G,2}$ and called the \emph{canonical homomorphism} of $G$ onto $T_2(G)$. \begin{proposition}\label{Po1} \emph{A paratopological group $G$ is simply $sm$-factorizable if and only if so is the $T_2$-reflection $T_2(G)$ of $G$.} \end{proposition} \begin{proof} Let $G$ be simply $sm$-factorizable. Take any continuous function $f\colon T_2(G)\to \mathbb{R}$. Then $f\circ \varphi_{G,2}$ is a continuous real-valued function on $G$ and, therefore, by Theorem~\ref{th1} we can find a strongly submetrizable paratopological group $H$, a continuous homomorphism $\pi$ of $G$ onto $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f\circ \varphi_{G,2}=g\circ\pi$. Since $H$ is a Hausdorff paratopological group, there is a continuous map $p\colon T_2(G)\to H$ such that $p\circ \varphi_{G,2}=\pi$. 
Observing that $\varphi_{G,2}$ and $\pi$ are homomorphisms, one can easily show that $p$ is also a homomorphism. Clearly, the subgroup $p(T_2(G))$ of $H$ is strongly submetrizable and $f=g\circ{p}$, so $T_2(G)$ is simply $sm$-factorizable by Theorem~\ref{th1}. Let $T_2(G)$ be simply $sm$-factorizable. Take any continuous function $f\colon G\to \mathbb{R}$. Since $\mathbb{R}$ is a Hausdorff space, there is a continuous function $h\colon T_2(G)\to \mathbb{R}$ such that $h\circ \varphi_{G,2}=f$. Further, combining Theorem~\ref{th1} and the fact that $T_2(G)$ is simply $sm$-factorizable, we see that there are a strongly submetrizable paratopological group $H$, a continuous homomorphism $\pi$ of $T_2(G)$ onto $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $h=g\circ\pi$. Thus we have the equality $$ f=h\circ \varphi_{G,2}=g\circ (\pi\circ \varphi_{G,2}). $$ By Theorem~\ref{th1}, this implies that $G$ is simply $sm$-factorizable. \end{proof} \begin{proposition}\label{Po2} \emph{A paratopological group $G$ is simply $sm$-factorizable if and only if so is the semiregularization $G_{sr}$ of $G$.} \end{proposition} \begin{proof} Let $G$ be simply $sm$-factorizable. Take any continuous function $f\colon G_{sr}\to \mathbb{R}$. Clearly, $f$ is continuous on $G$, and applying Theorem~\ref{th1} we find a regular strongly submetrizable paratopological group $H$, a continuous homomorphism $\pi$ of $G$ onto $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ\pi$. Since $H$ is regular, by \cite[Lemma~3.5]{XST} $\pi$ is also continuous on $G_{sr}$. Hence $G_{sr}$ is simply $sm$-factorizable according to Theorem~\ref{th1}. Let $G_{sr}$ be simply $sm$-factorizable. Take any continuous function $f\colon G\to \mathbb{R}$. Then by \cite[Lemma~3.5]{XST}, $f$ is also continuous on $G_{sr}$. 
So Theorem~\ref{th1} implies that there are a strongly submetrizable paratopological group $H$, a continuous homomorphism $\pi$ of $G_{sr}$ onto $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ\pi$. Thus we have that $f=g\circ (\pi\circ i)$, where $i\colon G\to G_{sr}$ is the identity mapping. By Theorem~\ref{th1}, this implies that $G$ is simply $sm$-factorizable. \end{proof} Let $f\colon X \to Y$ be a continuous mapping. Then $f$ is said to be \emph{$d$-open} if for each open subset $O$ of $X$ there exists an open subset $V$ of $Y$ such that $f(O)$ is a dense subset of $V$ or, equivalently, $f(O)$ is contained in the interior of the closure of $f(O)$ in $Y$. We recall that a Hausdorff paratopological group $G$ has countable \emph{Hausdorff number}, in symbols $Hs(G)\leq\omega$, if for each neighborhood $U$ of the identity $e$ in $G$ there is a countable family $\{U_n: n\in\omega\}$ of open neighborhoods of $e$ such that $\bigcap_{n\in \omega} U_nU_n^{-1} \subseteq U$. \begin{lemma}\label{Le1} Let $G$ be a Hausdorff weakly Lindel\"{o}f paratopological group with $Hs(G)\leq \omega$. If $G$ is $\omega$-balanced, then for any continuous real-valued function $f$ on $G$ one can find a $d$-open continuous homomorphism $\pi\colon G\to K$ onto a regular paratopological group $K$ of countable pseudocharacter and a continuous function $h\colon K\to \mathbb{R}$ such that $f=h\circ \pi$. \end{lemma} \begin{proof} Let $f\colon G\to\mathbb{R}$ be a continuous function. By \cite[Theorem~2.4]{PZ}, we can find an open continuous homomorphism $\pi\colon G\to K$ of $G$ onto a Hausdorff paratopological group $K$, a continuous function $h$ on $K$ and a sequence $\{V_n: n \in \omega \}$ of open neighborhoods of the identity $e_K$ in $K$ such that $f=h\circ\pi$ and $\{e_K\}=\bigcap_{n\in \omega}\overline{V_n}$. Let $K_{sr}$ be the semiregularization of $K$. 
Since $K$ is a Hausdorff paratopological group, $K_{sr}$ is a regular paratopological group by Theorem~\ref{Th1}. Clearly, the elements of the sequence $\{\mathrm{Int}_K \mathrm{cl}_K{V_n}: n\in \omega \}$ are open neighborhoods of the identity $e_K$ in $K_{sr}$ and $\{e_K\}=\bigcap_{n\in \omega}\mathrm{Int}_K\mathrm{cl}_K V_n$. This implies that $K_{sr}$ has countable pseudocharacter. By \cite[Lemma~3.5]{XST}, $h$ remains continuous on $K_{sr}$. Since the identity mapping $i\colon X\to X_{sr}$ is $d$-open and continuous for any space $X$ \cite[Lemma~3.2]{XY}, one can easily check that $i\circ\pi\colon G\to K_{sr}$ is a $d$-open continuous homomorphism. Clearly, $f=h\circ (i\circ\pi)$. This completes the proof. \end{proof} \begin{corollary}[See Theorem~2.9 in \cite{PZ}]\label{C2.8} Every totally $\omega$-narrow weakly Lindel\"{o}f paratopological group $G$ is simply $sm$-factorizable. \end{corollary} \begin{proof} Clearly, weak Lindel\"{o}fness and total $\omega$-narrowness are preserved by continuous maps and continuous homomorphisms, respectively. Theorem~\ref{Th1} implies that $(T_2(G))_{sr}$ is a regular weakly Lindel\"{o}f and totally $\omega$-narrow paratopological group. According to Propositions~\ref{Po1} and~\ref{Po2}, $G$ is simply $sm$-factorizable if and only if so is $(T_2(G))_{sr}$. Hence we can assume that $G$ is regular. Take any continuous function $f\colon G\to \mathbb{R}$. Since every regular totally $\omega$-narrow paratopological group $H$ is $\omega$-balanced (see \cite[Proposition~3.8]{ST2}) and satisfies $Hs(H)\leq \omega$ (see \cite[Theorem~2]{Sa}), we apply Lemma~\ref{Le1} to find a $d$-open continuous homomorphism $\pi\colon G\rightarrow K$ onto a regular paratopological group $K$ with countable pseudocharacter and a continuous function $h\colon K\to \mathbb{R}$ such that $f=h\circ \pi$. Clearly, $K$ is totally $\omega$-narrow, so $K$ is strongly submetrizable by Lemma~\ref{l1}. Therefore, $G$ is simply $sm$-factorizable by Theorem~\ref{th1}. 
\end{proof} It is well known that a subgroup $H$ of an $\mathbb{R}$-factorizable topological group $G$ is $z$-embedded in $G$ if and only if $H$ is $\mathbb{R}$-factorizable. Now we consider the $z$-embedded subgroups of simply $sm$-factorizable (para)topological groups. \begin{proposition}\label{P} \emph{A (para)topological group $G$ is simply $sm$-factorizable if and only if for each continuous function $f\colon G\rightarrow \mathbb{R}^{\omega}$, there exist a continuous homomorphism $\pi\colon G\to H$ onto a (regular) strongly submetrizable (para)topological group $H$ and a continuous function $g\colon H\to \mathbb{R}^{\omega}$ such that $f=g\circ\pi$.} \end{proposition} \begin{proof} According to Theorem~\ref{th1}, it suffices to prove the necessity. Take any continuous function $f\colon G\to \mathbb{R}^{\omega}$. For every $i\in\omega$, denote by $p_i$ the projection of $\mathbb{R}^{\omega}$ to the $i$th factor. Then $p_i\circ f\colon G\to \mathbb{R}$ is a continuous real-valued function. Since $G$ is simply $sm$-factorizable, there are a (regular) strongly submetrizable (para)topological group $H_i$, a continuous homomorphism $\pi_i$ of $G$ onto $H_i$ and a continuous function $g_i\colon H_i\to \mathbb{R}$ such that $p_i \circ f=g_i\circ \pi_i$, for each $i\in\omega$. Denote by $\pi$ the diagonal product of the family $\{\pi_i: i\in\omega\}$. Then $\pi\colon G\to \prod_{i\in\omega}H_i$ is a continuous homomorphism and the image $K=\pi(G)$ is a (regular) strongly submetrizable (para)topological group. For each $i\in\omega$, let $q_i\colon \prod_{j\in\omega}H_j\to H_i$ be the projection. Then $\pi_i=q_i\circ\pi$. Finally, denote by $g^\ast$ the Cartesian product of the family $\{g_i: i\in\omega\}$. Then $g^\ast\colon\prod_{i\in\omega}H_i\to \mathbb{R}^\omega$ is continuous. Let us verify that $f=g^\ast\circ \pi$. Indeed, for each $i\in\omega$ we have: $$ p_i\circ f=g_i\circ \pi_i=g_i\circ q_i\circ \pi=p_i\circ g^\ast\circ \pi. 
$$ Hence the function $g=g^\ast \res_K$ satisfies $f=g\circ \pi$. \end{proof} \begin{theorem}\label{Th:sm-f} Let $G$ be a simply $sm$-factorizable (para)topological group. If a subgroup $H$ of $G$ is $z$-embedded in $G$, then $H$ is simply $sm$-factorizable. \end{theorem} \begin{proof} Consider a continuous function $f\colon H\to \mathbb{R}$. Let $\mathcal {V}$ be a countable base of $\mathbb{R}$ consisting of co-zero sets and $U_V=f^{-1}(V)$, where $V\in\mathcal{V}$. Then each element of the family $\mathcal {U}=\{U_V: V\in\mathcal {V}\}$ is a co-zero set in $H$. Since $H$ is $z$-embedded in $G$, for each $V\in \mathcal {V}$ there is a continuous function $g_{V}\colon G\to \mathbb{R}$ such that $g_{V}^{-1}(\mathbb{R}\setminus \{0\})\cap H=U_V$. Denote by $g$ the diagonal product of the family $\{g_{V}: V\in \mathcal{V}\}$. Then $g\colon G\to \mathbb{R}^{\mathcal{V}}$ is continuous. Since $\mathcal{V}$ is countable and $G$ is simply $sm$-factorizable, it follows from Proposition~\ref{P} that one can find a continuous homomorphism $\varphi$ of $G$ onto a strongly submetrizable (para)topological group $K$ and a continuous function $h\colon K\to \mathbb{R}^{\mathcal{V}}$ such that $g=h\circ\varphi$. Let us verify the following: {\bf Claim.} \emph{If $x,y\in H$ and $\varphi(x)=\varphi(y)$, then $f(x)=f(y)$.} If $f(x)\neq f(y)$, then there is $V\in\mathcal {V} $ such that $f(x)\in V$ and $f(y)\notin V$. Hence our choice of the function $g_V$ implies that $g_V(x)\neq 0$ and $g_V(y)=0$. Therefore, $g(x)\neq g(y)$ and, hence, $\varphi(x)\neq \varphi(y)$ since $g=h\circ \varphi$. This proves the claim. According to the above Claim, there is a function $j\colon \varphi(H)\to \mathbb{R}$ such that $f=j\circ \varphi \res_H$ ($j$ can fail to be continuous when $\varphi(H)$ is endowed with the topology inherited from $K$). Denote by $L$ the group $\varphi(H)$ endowed with the quotient topology with respect to the homomorphism $\varphi \res_H$. 
Then $\varphi \res_H\colon H\to L$ is open, so the function $j\colon L\to \mathbb{R}$ is continuous. Clearly, the group $L$ is strongly submetrizable. This completes the proof of the theorem. \end{proof} Theorem~\ref{Th:sm-f} makes it natural to ask the following question. Let $H$ be a subgroup of a topological group $G$. Is $H$ $z$-embedded in $G$ provided both $H$ and $G$ are simply $sm$-factorizable? It turns out that the answer to this question is ``No''. \begin{example} \emph{There exists a separable (hence $\omega$-narrow) simply $sm$-factorizable topological group $G$ which contains a simply $sm$-factorizable subgroup $H$ that fails to be $z$-embedded in $G$.} \end{example} \begin{proof} Let $H$ be a separable topological group which fails to be $\mathbb{R}$-factorizable. One can take as $H$ the free Abelian topological group over the Sorgenfrey line \cite{RS}. Then $H$ has countable cellularity, so \cite[Corollary~5.19]{AT1} implies that $H$ is simply $sm$-factorizable. Every separable topological group is $\omega$-narrow, so $H$ embeds as a topological subgroup into a product $G=\prod_{\alpha\in A} G_\alpha$ of second-countable topological groups \cite[Theorem~3.4.23]{AT}. Since the weight of a separable topological group is at most $\mathfrak{c}=2^\omega$, we can assume without loss of generality that $|A|\leq\mathfrak{c}$. Then the product group $G$ is separable, while \cite[Corollary~8.1.15]{AT} implies that $G$ is $\mathbb{R}$-factorizable. However, the subgroup $H$ of $G$ cannot be $z$-embedded in $G$\,---\,otherwise $H$ would be $\mathbb{R}$-factorizable by \cite[Theorem~8.2.6]{AT}. \end{proof} In Theorem~\ref{Th3} below we give a characterization of simply $sm$-factorizable topological groups assuming that the groups are $\omega$-narrow. First we recall the notion of an \emph{admissible} subgroup introduced in \cite{Tk89} (see also \cite[Section~5.5]{AT}). 
\begin{definition}\label{Def:1} Let $G$ be a topological group and $\{U_n: n\in \omega\}$ a sequence of open symmetric neighborhoods of the identity in $G$ such that $U_{n+1}^2\subseteq U_n$, for each $n\in \omega$. Then $N=\bigcap_{n\in \omega}U_n$ is a subgroup of $G$ which is called \emph{admissible.} \end{definition} It follows from the above definition that every admissible subgroup of a topological group $G$ is closed and that every neighborhood of the identity in $G$ contains an admissible subgroup \cite[Lemma~5.5.2]{AT}. \begin{lemma}\label{L1} Let $f\colon G\to H$ be a continuous homomorphism of topological groups. If $H$ has countable pseudocharacter, then $\ker{f}$ is an invariant admissible subgroup of $G$. \end{lemma} \begin{proof} Clearly, $N=\ker{f}$ is an invariant subgroup of $G$, so it suffices to show that $N$ is admissible. Since $H$ is a topological group with countable pseudocharacter, one can find a sequence $\{U_n: n\in \omega\}$ of open symmetric neighborhoods of the identity $e$ in $H$ such that $U_{n+1}^2\subseteq U_n$, for each $n\in \omega$ and $\{e\}=\bigcap_{n\in \omega}U_n$. Let $V_n=f^{-1}(U_n)$, $n\in\omega$. Then $\{V_n: n\in \omega\}$ is a sequence of open symmetric neighborhoods of the identity in $G$ satisfying $V_{n+1}^2\subseteq V_n$, for each $n\in \omega$ and $N=\bigcap_{n\in \omega} V_n$. This implies that $N$ is an admissible subgroup of $G$. \end{proof} \begin{theorem}\label{Th3} The implication {\rm (a)}\,$\Rightarrow$\,{\rm (b)} is valid for every topological group $G$, where \begin{enumerate} \item[{\rm (a)}] $G$ is simply $sm$-factorizable; \item[{\rm (b)}] for every continuous function $f\colon G\to \mathbb{R}$, there exists an invariant admissible subgroup $N$ of $G$ such that $f$ is constant on $gN$, for each $g\in G$. \end{enumerate} Furthermore, if $G$ is $\omega$-narrow, then {\rm (a)} and {\rm (b)} are equivalent. \end{theorem} \begin{proof} Let us show that {\rm (a)}\,$\Rightarrow$\,{\rm (b)}. 
Assume that $G$ is a simply $sm$-factorizable topological group. Then, according to Theorem~\ref{th1}, one can find a continuous homomorphism $\pi$ of $G$ onto a strongly submetrizable topological group $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ \pi$. Clearly, $H$ has countable pseudocharacter, so by Lemma~\ref{L1}, the kernel $N=\pi^{-1}(e)$ of $\pi$ is an invariant admissible subgroup of $G$. Since $f=g\circ \pi$, one can easily see that $f$ is constant on $xN$, for each $x\in G$. This implies (b). Assume that $G$ is an $\omega$-narrow group satisfying (b), and let $f\colon G\to \mathbb{R}$ be a continuous function. Then there is an invariant admissible subgroup $N$ of $G$ such that $f$ is constant on $xN$ for each $x\in G$. Let $G/N$ be the quotient topological group of $G$ and $\pi\colon G\to G/N$ be the quotient homomorphism. Since $G$ is $\omega$-narrow and so is every quotient group of $G$, $G/N$ is an $\omega$-narrow group of countable pseudocharacter (see \cite[Lemma~2.3.6]{T1}). By Lemma~\ref{l1}, $G/N$ is a strongly submetrizable topological group. Observing that $f$ is constant on $xN$ for each $x\in G$ and $\pi\colon G\to G/N$ is open, one can find a continuous function $h\colon G/N\to \mathbb{R}$ such that $f=h\circ \pi$. From Theorem~\ref{th1} it follows that $G$ is simply $sm$-factorizable. \end{proof} We have just established the equivalence of items (a) and (b) in Theorem~\ref{Th3} under the additional assumption of the $\omega$-narrowness of $G$. Since every discrete abelian group of cardinality $2^\omega$ is simply $sm$-factorizable \cite[Proposition~5.15]{AT1}, we see that simply $sm$-factorizable topological groups need not be $\omega$-narrow. Therefore, it is natural to ask whether every simply $sm$-factorizable topological group is $\omega$-balanced. In the next example we answer this question in the negative. 
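The example rests on an elementary non-commutativity effect in the matrix group $GL(\mathbb{R},2)$: there are sequences of invertible matrices whose products converge to the identity in one order but not in the other. One concrete choice of such sequences (our own direct computation; the proof below only uses their abstract existence, guaranteed by \cite[4.24]{HR}) is

```latex
\[
a_n=\begin{pmatrix}1/n & 1/n\\ 0 & 1\end{pmatrix},\qquad
b_n=\begin{pmatrix}n & 0\\ 0 & 1\end{pmatrix},\qquad n\geq 1,
\]
\[
\text{for which}\qquad
a_nb_n=\begin{pmatrix}1 & 1/n\\ 0 & 1\end{pmatrix}\longrightarrow
\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},
\qquad
b_na_n=\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}\quad\text{for every }n.
\]
```

Here $\det a_n=1/n$ and $\det b_n=n$, so both sequences consist of invertible matrices, while the second product is constantly equal to a matrix distinct from the identity.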
\begin{example} \emph{Let $H=GL(\mathbb{R},2)$ be the group of $2\times 2$ invertible matrices with real entries. Denote by $G$ the group $H^\omega$ endowed with the box topology. Then $G$ is a strongly submetrizable (hence simply $sm$-factorizable) group of countable pseudocharacter which fails to be $\omega$-balanced.} \end{example} \begin{proof} Indeed, denote by $G_\ast$ the group $H^\omega$ with the usual product topology. Then the identity mapping of $G$ onto $G_\ast$ is a continuous isomorphism onto a separable metrizable topological group, so $G$ is strongly submetrizable, hence simply $sm$-factorizable (Proposition~\ref{le1}). However, $G$ is not $\omega$-balanced. To show this we slightly modify the argument from \cite[Example~2]{Pes}. It is well known that the group $H$ contains two sequences $\{a_n: n\in\omega\}$ and $\{b_n: n\in\omega\}$ and an element $z_0\neq e$ such that $a_nb_n\to e$ and $b_na_n\to z_0$ as $n\to\infty$, where $e$ is the identity element of $H$ \cite[4.24]{HR}. Choose an open neighborhood $U_0$ of $e$ in $H$ such that $z_0\notin \overline{U_0}$. It is easy to see that for every open neighborhood $V$ of $e$ in $H$, there exists $k\in\omega$ such that $b_k V b_k^{-1}\setminus U_0\neq\emptyset$\,---\,otherwise $b_k a_k=b_k(a_k b_k) b_k^{-1}\in b_k V b_k^{-1}\subset U_0$ for all sufficiently large $k\in\omega$, which contradicts our choice of the sequences $\{a_n: n\in\omega\}$, $\{b_n: n\in\omega\}$ and of the set $U_0$. Clearly $U=U_0^\omega$ is an open neighborhood of the identity in $G$. Suppose for a contradiction that $G$ is $\omega$-balanced. Then there exists a sequence $\{V_n: n\in\omega\}$ of open neighborhoods of the identity in $G$ subordinated to $U$. One can assume without loss of generality that every $V_n$ has the form $\prod_{k\in\omega} V_{n,k}$, where $V_{n,k}$ is an open neighborhood of $e$ in $H$ for each $k\in\omega$.
We have just shown that for every $n\in\omega$, there exists $k_n\in\omega$ such that $b_{k_n}V_{n,n}b_{k_n}^{-1}\setminus U_0\neq\emptyset$. Let $x=(x_n)_{n\in\omega}$ be the element of $G$ defined by $x_n=b_{k_n}$ for each $n\in\omega$. Then $xV_nx^{-1}\setminus U\neq\emptyset$ for each $n\in\omega$ since the projections of the sets $xV_nx^{-1}$ and $U$ to the $n$th factor are $b_{k_n}V_{n,n}b_{k_n}^{-1}$ and $U_0$, respectively. This contradicts our choice of the sequence $\{V_n: n\in\omega\}$. Therefore, the group $G$ is not $\omega$-balanced. \end{proof} The following result partially answers Problem~\ref{Q1.2}. \begin{corollary}\label{Cor:Adm} Let $\pi\colon G\to H$ be a continuous homomorphism of topological groups such that for each invariant admissible subgroup $K\subseteq G$, the image $\pi(K)$ contains an invariant admissible subgroup of $H$. If $G$ is simply $sm$-factorizable and $H$ is $\omega$-narrow, then $H$ is simply $sm$-factorizable. \end{corollary} \begin{proof} Let $f\colon H\to \mathbb{R}$ be any continuous function. Since $G$ is simply $sm$-factorizable, it follows from Theorem~\ref{Th3} that there is an invariant admissible subgroup $N$ of $G$ such that for each $x\in G$, $f\circ\pi$ is constant on $xN$. According to our assumption $\pi(N)$ contains an invariant admissible subgroup $K$ of $H$. Since $\pi$ is a homomorphism and for each $x\in G$, $f\circ\pi$ is constant on $xN$, one can easily verify that for each $y\in H$, $f$ is constant on $yK$. Observing that $H$ is $\omega$-narrow, from Theorem~\ref{Th3} it follows that $H$ is simply $sm$-factorizable. \end{proof} \begin{remark} The condition on the homomorphism $\pi$ in Corollary~\ref{Cor:Adm} is quite strong. It can easily fail, even if the homomorphism $\pi$ is open. Indeed, according to \cite[Theorem~7.6.18]{AT}, every Hausdorff topological group $H$ is a quotient group of a topological group $G$ with countable pseudocharacter. 
So we can take $H$ to be a Hausdorff topological group with $\psi(H)>\omega$ and find an open continuous surjective homomorphism $\pi\colon G\to H$, where $G$ is a topological group of countable pseudocharacter. Then $K=\{e\}$ is an invariant admissible subgroup of $G$, where $e$ is the identity element of $G$. Clearly $\pi(K)=\{e_H\}$ does not contain any admissible subgroup of $H$. It also follows from Proposition~\ref{Prop:New} below that $G$ cannot be simply $sm$-factorizable if $|H|>\mathfrak{c}$. \end{remark} As we mentioned after Definition~\ref{Def:1}, every neighborhood of the identity in a topological group contains an admissible subgroup \cite[Lemma~5.5.2]{AT}. This conclusion can be strengthened for $\omega$-balanced topological groups: \begin{lemma}\label{L2.17} Every neighborhood of the identity $e$ in an $\omega$-balanced topological group $G$ contains an invariant admissible subgroup. \end{lemma} \begin{proof} Every $\omega$-balanced topological group is a subgroup of a topological product of first countable topological groups \cite[Theorem~5.1.9]{AT}, so for each open neighborhood $U$ of $e$ in $G$, one can find a continuous homomorphism $p$ of $G$ onto a first countable topological group $H$ and an open neighborhood $V$ of the identity $e_H$ in $H$ satisfying $p^{-1}(V)\subseteq U$. Clearly $K=\overline{\{e_H\}}$ is an invariant admissible subgroup of $H$ satisfying $K\subset V$, so $p^{-1}(K)$ is an invariant admissible subgroup of $G$ contained in $U$. \end{proof} We recall that $X$ is a \emph{$P$-space} if every $G_\delta$-set in $X$ is open. Similarly, a (para)topo\-log\-i\-cal group $G$ is said to be a \emph{$P$-group} if the underlying space of $G$ is a $P$-space. The following result gives a partial answer to Problem~\ref{Q1.3}. \begin{corollary} If an $\omega$-narrow topological group $H$ is a quotient group of a simply $sm$-factorizable $P$-group, then $H$ is $\mathbb{R}$-factorizable.
\end{corollary} \begin{proof} Let $\pi\colon G\to H$ be a quotient homomorphism and $G$ be a simply $sm$-factorizable $P$-group. Then $H$ is a $P$-group by \cite[Lemma~4.4.1\,c)]{AT}. Since every $\omega$-narrow simply $sm$-factorizable $P$-group is $\mathbb{R}$-factorizable \cite[Proposition~5.23]{AT1}, it suffices to show that $H$ is simply $sm$-factorizable. Let $f$ be a continuous real-valued function on $H$. Since $G$ is simply $sm$-factorizable, Theorem~\ref{Th3} implies that there is an invariant admissible subgroup $N$ of $G$ such that for each $x\in G$, $f\circ\pi$ is constant on $xN$. Therefore, $f$ is constant on $y\pi(N)$ for each $y\in H$. Observing that $G$ is a $P$-space and $\pi$ is open, we see that $\pi(N)$ is an open neighborhood of the identity in $H$. Clearly $H$ is $\omega$-balanced, so Lemma~\ref{L2.17} implies that $\pi(N)$ contains an invariant admissible subgroup $K$ of $H$. It is clear that $f$ is constant on $yK$ for each $y\in H$, so $H$ is simply $sm$-factorizable by Theorem~\ref{Th3}, because $H$ is $\omega$-narrow. \end{proof} \begin{proposition}\label{Prop:New} \emph{Every regular simply $sm$-factorizable (para)topological group $G$ of countable pseudocharacter admits a continuous isomorphic bijection onto a separable metrizable (para)topological group. Therefore, $G$ is strongly submetrizable and $|G|\leq \mathfrak{c}$.} \end{proposition} \begin{proof} Every regular paratopological group is completely regular \cite{BR}. Let $\{U_n: n\in \omega\}$ be a family of open neighborhoods of the identity $e$ in $G$ such that $\{e\}=\bigcap_{n\in \omega}U_n$. For every $n\in \omega$, one can find a continuous function $f_n\colon G\to \mathbb{R}$ such that $f_n(e)=0$ and $f_n(x)=1$ for each $x\in G\setminus U_n.$ Denote by $f$ the diagonal product of the family $\{f_n: n\in \omega\}$. Then $f\colon G \to \mathbb{R}^\omega$ is continuous.
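Explicitly, the diagonal product of the family $\{f_n: n\in \omega\}$ is the map

```latex
\[
f\colon G\to \mathbb{R}^{\omega},\qquad f(x)=\bigl(f_n(x)\bigr)_{n\in\omega},
\]
```

so $f(x)=(0,0,\ldots)$ if and only if $f_n(x)=0$ for every $n\in\omega$.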
Since $G$ is simply $sm$-factorizable, Proposition~\ref{P} implies that we can find a continuous homomorphism $p\colon G\to H$ onto a strongly submetrizable (para)topological group $H$ and a continuous map $h\colon H\to\mathbb{R}^\omega$ such that $f=h\circ p$. Note that $h(e_H)=f(e)=\bar{0}=(0,0,\ldots)$. We claim that $p$ is a continuous isomorphic bijection. Indeed, it suffices to show that $\ker{p}= \{e\}$. Take an element $x\in G\setminus \{e\}$. Then there is $n\in \omega$ such that $x\notin U_n$, so $f_n(x)=1$. Therefore, $f(x)\neq \bar{0}$. Hence $x\notin \ker{p}$ because $f=h\circ p$ and $h(e_H) = \bar{0}$. Since every separable metrizable space has cardinality at most $\mathfrak{c}$ and $p$ is a bijection, we see that $|G|\leq\mathfrak{c}$. \end{proof} \section{Completions of simply $sm$-factorizable topological groups}\label{Sec:3} In this section we study the Dieudonn\'e and Hewitt--Nachbin completions of simply $sm$-factorizable topological groups. In Theorem~\ref{Th3.2} we answer Problem~\ref{Q1.1} affirmatively. In fact, we prove a stronger result: Every Hausdorff simply $sm$-factorizable topological group is \emph{completion friendly.} A subset $Y$ of a space $X$ is \emph{$G_\delta$-dense} in $X$ if every nonempty $G_\delta$-set in $X$ intersects $Y$. The biggest set $Z\subset X$ which contains $Y$ as a $G_\delta$-dense subset is called the \emph{$G_\delta$-closure} of $Y$ in $X$. A space $X$ is called \emph{Moscow} if for each open set $U$ of $X$, the closure $\overline{U}$ of $U$ is the union of a family of $G_\delta$-sets in $X$. Let $\varrho{G}$ be the Ra\u{\i}kov completion of a topological group $G$. We denote the $G_\delta$-closure of $G$ in $\varrho{G}$ by $\varrho_\omega{G}$. It is easy to verify that $\varrho_\omega{G}$ is a subgroup of $\varrho{G}$ (see \cite[Section~6.4]{AT}). \begin{proposition}\label{P3.1} \emph{Let $H$ be a $G_\delta$-dense simply $sm$-factorizable subgroup of a topological group $G$. 
Then $H$ is $C$-embedded in $G$.} \end{proposition} \begin{proof} Take any continuous function $f\colon H\to \mathbb{R}$. According to Theorem~\ref{th1}, we can find a continuous homomorphism $\pi$ of $H$ onto a strongly submetrizable topological group $F$ and a continuous function $g\colon F\to \mathbb{R}$ such that $f=g\circ \pi$. Let $\varrho{G}$ and $\varrho{F}$ be the Ra\u{\i}kov completions of $G$ and $F$, respectively. Since $H$ is dense in $\varrho{G}$, $\pi$ extends to a continuous homomorphism $\tilde{\pi}\colon \varrho{G}\to \varrho{F}$. Denote by $\varrho_\omega{G}$ and $\varrho_\omega{F}$ the $G_\delta$-closures of $G$ and $F$ in $\varrho{G}$ and $\varrho{F}$, respectively. Since $F$ is strongly submetrizable, it has countable pseudocharacter and, hence, $F$ is a Moscow space \cite[Corollary~6.4.11(1)]{AT}. It is well known that if a Moscow space $Y$ is a $G_\delta$-dense subspace of a homogeneous space $X$, then $X$ is also a Moscow space and $Y$ is $C$-embedded in $X$ \cite[Theorem~6.1.8]{AT}. Since $\varrho_\omega{F}$ is a topological group and $F$ is a Moscow space, $F$ is $C$-embedded in $\varrho_\omega{F}$. Hence $g$ extends to a continuous function $\tilde{g}\colon \varrho_\omega{F}\to \mathbb{R}$. Since $H$ is $G_\delta$-dense in $G$, we have the inclusion $\tilde{\pi}(G)\subseteq \varrho_\omega{F}$. Thus $\tilde{g}\circ \tilde{\pi} \res_{G}$ is a continuous extension of $f$. This proves that $H$ is $C$-embedded in $G$. \end{proof} A topological group $G$ is called \emph{completion friendly} if $G$ is $C$-embedded in $\varrho_\omega{G}$. Every completion friendly group is a $PT$-group \cite[Proposition~6.5.17]{AT}. The following result gives a positive answer to Problem~\ref{Q1.1}. \begin{theorem}\label{Th3.2} Let $G$ be a Hausdorff simply $sm$-factorizable topological group. Then the equalities $\mu{G}=\varrho_\omega{G}=\upsilon{G}$ hold and, therefore, $G$ is completion friendly.
\end{theorem} \begin{proof} Since every simply $sm$-factorizable topological group $H$ satisfies $ib(H)\leq\mathfrak{c}$ \cite[Proposition~5.14]{AT1}, we have that $ib(G)\leq \mathfrak{c}$. According to \cite[Theorem~5.4.10]{AT}, the inequality $c(H)\leq 2^{ib(H)}$ holds for every topological group $H$. We conclude, therefore, that $c(G)\leq 2^\mathfrak{c}$. From \cite[Theorem~6.2.2]{AT} it follows that the cardinal number $2^\mathfrak{c}$ is Ulam non-measurable, so the cardinality of every discrete family of open sets in $G$ is Ulam non-measurable and hence the equality $\mu{G}=\upsilon{G}$ holds by \cite[Lemma~8.3.1]{AT}. Note that the space $\varrho_\omega{G}$ is Dieudonn\'e complete \cite[Proposition~6.5.2]{AT}. Since $\mu{G}=\upsilon{G}$ and $G$ is $C$-embedded in the Dieudonn\'{e} complete group $\varrho_\omega{G}$ by Proposition~\ref{P3.1}, we conclude that $\mu{G}=\varrho_\omega{G}= \upsilon{G}$. \end{proof} \begin{corollary}\label{C3.3} Every Hausdorff simply $sm$-factorizable topological group $G$ is a $PT$-group, so the Dieudonn\'{e} completion $\mu{G}$ of the space $G$ admits a natural structure of topological group containing $G$ as a dense subgroup. \end{corollary} By Corollary~\ref{C1} and Theorem~\ref{Th3.2}, we obtain the following result: \begin{corollary}[See Proposition~2.4 of \cite{ST}] Every Hausdorff weakly Lindel\"{o}f topological group $G$ satisfies the equalities $\mu{G}=\varrho_\omega{G}=\upsilon{G}$ and, therefore, $G$ is completion friendly. \end{corollary} We also present an alternative proof of \cite[Theorem~5.21]{AT1}: \begin{corollary}\label{Cor:3.5} Let $G$ be a simply $sm$-factorizable topological group which is $C$-embedded in a regular Lindel\"{o}f space $X$. Then the group $G$ is $\mathbb{R}$-factorizable and the closure of $G$ in $X$ is a topological group containing $G$ as a dense subgroup. \end{corollary} \begin{proof} We can assume without loss of generality that $G$ is dense in $X$. Then $\upsilon G=X$. 
Since every $C$-embedded subspace of a regular Lindel\"{o}f space is pseudo-$\omega_1$-compact \cite[Corollary~3.3]{AT1}, the space $G$ is pseudo-$\omega_1$-compact. It now follows from \cite[Corollary~8.3.3]{AT} that $X = \upsilon G = \mu G$, i.e.~the Hewitt--Nachbin and Dieudonn\'{e} completions of $G$ coincide. By Corollary~\ref{C3.3}, $X$ is homeomorphic to a Lindel\"{o}f topological group containing $G$ as a dense subgroup. Hence $X$ is $\mathbb{R}$-factorizable by \cite[Theorem~8.1.6]{AT}. So $G$ is $\mathbb{R}$-factorizable as a $C$-embedded subgroup of the $\mathbb{R}$-factorizable topological group $X$ (see \cite[Theorem~3.2]{HST}). \end{proof} \section{Completions of simply $sm$-factorizable paratopological groups}\label{Sec:4} In this section we consider the Dieudonn\'e and Hewitt--Nachbin completions of simply $sm$-factorizable \emph{paratopological} groups. The following result follows from Lemmas~3 and~4 of \cite{ST1}: \begin{lemma}\label{LL1} Let $G$ be a Hausdorff paratopological group satisfying $Hs(G)\leq \omega$. Then the $G_\delta$-closure of an arbitrary subgroup $H$ of $G$ is again a subgroup of $G$. \end{lemma} \begin{lemma}\label{LL} The topological product $G=\prod_{\alpha\in A} G_\alpha$ of any family of strongly submetrizable paratopological groups satisfies $Hs(G)\leq \omega$. \end{lemma} \begin{proof} According to \cite[Proposition~2.3]{T}, the class of paratopological groups with countable Hausdorff number is closed under taking arbitrary products and subgroups. Therefore, it suffices to show that any strongly submetrizable para\-topo\-logical group $H$ satisfies $Hs(H)\leq \omega$. Since $H$ is strongly submetrizable, there exists a continuous isomorphism $i\colon H\to H'$ onto a separable metrizable paratopological group $H'$. Let $\{U_n: n\in \omega\}$ be a local base at the identity $e'$ in $H'$. Since $H'$ is metrizable, we have the equality $\{e'\}=\bigcap_{n\in \omega}U_nU_n^{-1}$. Let $V_n=i^{-1}(U_n)$, $n\in\omega$.
For each neighborhood $V$ of the identity $e$ in $H$, we have that $\{e\}=\bigcap_{n\in \omega}V_nV_n^{-1}\subseteq V$. This implies that $Hs(H)\leq \omega$. \end{proof} \begin{theorem}\label{Th} Let $G$ be a regular simply $sm$-factorizable paratopological group. Then the realcompactification $\upsilon G$ of the space $G$ admits a natural structure of paratopological group containing $G$ as a dense subgroup and $\upsilon{G}$ is also simply $sm$-factorizable. \end{theorem} \begin{proof} Since every regular paratopological group is a Tychonoff space \cite[Corollary~5]{BR}, so is $G$. Let $\{f_\alpha: \alpha\in A\}$ be the family of continuous real-valued functions on $G$. Since $G$ is simply $sm$-factorizable, it follows from Theorem~\ref{th1} that for each $\alpha\in A$, there exist a continuous homomorphism $\pi_\alpha$ of $G$ onto a regular strongly submetrizable paratopological group $H_\alpha$ and a continuous function $g_\alpha\colon H_\alpha\to \mathbb{R}$ such that $f_\alpha=g_\alpha\circ\pi_\alpha$. Since the family $\{f_\alpha: \alpha\in A\}$ separates points and closed sets in $G$, so does $\{\pi_\alpha: \alpha\in A\}$. Therefore, the diagonal product of the family $\{\pi_\alpha: \alpha\in A\}$, denoted by $\pi$, is a topological isomorphism of $G$ onto the subgroup $\pi(G)\subseteq \Pi=\prod_{\alpha\in A} H_\alpha$. In what follows we identify $G$ with the subgroup $\pi(G)$ of $\Pi$. Then the equality $f_\alpha=g_\alpha\circ\pi_\alpha$ acquires the form $f_\alpha=g_\alpha\circ p_\alpha \res_{G}$, where $p_\alpha$ is the projection of $\Pi$ to $H_\alpha$. By Lemma~\ref{LL}, we have that $Hs(\Pi)\leq \omega$. Therefore, by Lemma~\ref{LL1}, the $G_\delta$-closure of $G$ in $\Pi$, denoted by $H$, is a subgroup of $\Pi$. We claim that the subspace $H$ of $\Pi$ is realcompact. Indeed, for each $\alpha\in A$, $H_\alpha$ is a strongly submetrizable paratopological group, so $H_\alpha$ admits a coarser separable metrizable paratopological group topology.
Hence the space $H_\alpha$ is Dieudonn\'{e} complete \cite[Proposition~6.10.8]{AT} and $|H_\alpha|\leq \mathfrak{c}$. In particular, the cellularity of $H_\alpha$ is less than or equal to $\mathfrak{c}$, which is Ulam non-measurable and, therefore, the space $H_\alpha$ is realcompact by \cite[Proposition~6.5.18]{AT}. Hence the space $\Pi=\prod_{\alpha\in A} H_\alpha$ is also realcompact. It also follows from the definition of $H$ that the complement $\Pi\setminus H$ is the union of a family of $G_\delta$-sets in $\Pi$. Further, every $G_\delta$-set in $\Pi$ is the union of a family of zero-sets in $\Pi$, and the complement $\Pi\setminus Z$ is realcompact, for each zero-set $Z$ in $\Pi$ (see \cite[Corollary~3.11.8]{En}). By \cite[Corollary~3.11.7]{En}, the intersection of a family of realcompact subspaces of a space is realcompact. Therefore, $H$ is realcompact as the intersection of a family of realcompact cozero-sets in $\Pi$. Let us show that $G$ is $C$-embedded in $H$, which implies that $\upsilon{G}= H$ (see \cite[Theorem~8.6]{GJ}). Indeed, for each continuous real-valued function $f$ on $G$, there exists $\alpha\in A$ such that $f=f_\alpha$. It follows from $f_\alpha=g_\alpha\circ p_\alpha \res_{G}$ that $g_\alpha\circ p_\alpha \res_{H}\colon H\to \mathbb{R}$ is a continuous extension of $f$ over $H$. We have thus proved that the realcompactification $\upsilon{G}$ of $G$ admits a natural structure of a paratopological group containing $G$ as a dense subgroup. It remains to verify that the paratopological group $H$ is simply $sm$-factorizable. Take any continuous function $g\colon H\to \mathbb{R}$ and denote by $f$ the restriction of $g$ to $G$. Then we can find $\alpha\in A$ such that $f=f_\alpha$, whence it follows that $f_\alpha=g_\alpha\circ p_\alpha\res_G$. Since $G$ is dense in $H$ and the continuous functions $g$ and $g_\alpha \circ p_\alpha \res_{H}$ coincide on $G$, we have the equality $g=g_\alpha \circ p_\alpha \res_{H}$.
Thus the continuous homomorphism $p_\alpha \res_H$ of $H$ to $H_\alpha$ factorizes the function $g.$ Therefore, by Theorem~\ref{th1}, $H$ is simply $sm$-factorizable. \end{proof} \begin{lemma}\label{Le:ast} Let $G$ be a regular simply $sm$-factorizable paratopological group. Then the topological group $G^\ast$ associated to $G$ satisfies $ib(G^\ast)\leq \mathfrak{c}$. Therefore, $c(G)\leq c(G^\ast)\leq 2^\mathfrak{c}$ and the equality $\upsilon{G}=\mu{G}$ is valid. \end{lemma} \begin{proof} Let $U$ be an arbitrary neighborhood of the identity $e$ in the group $G^\ast$. It follows from the definition of $G^\ast$ that there exists an open neighborhood $V$ of the identity in $G$ such that $V\cap V^{-1}\subset U$. Every regular paratopological group is completely regular \cite[Corollary~5]{BR}, so there exists a cozero-set $W$ in $G$ satisfying $e\in W\subset V$. Since $G$ is simply $sm$-factorizable, we can find a continuous homomorphism $\pi\colon G\to H$ onto a separable metrizable paratopological group $H$ such that $W=\pi^{-1}\pi(W)$. Hence $W^{-1}=\pi^{-1}\pi(W^{-1})$. Combining the two equalities we see that \begin{equation*} W\cap W^{-1}=\pi^{-1}\pi(W\cap W^{-1}). \end{equation*} It is clear that $|H|\leq \mathfrak{c}$. Choose a subset $A$ of $G$ with $|A|\leq \mathfrak{c}$ such that $\pi(A)=H$. Applying the displayed equality and taking into account that $e\in W\cap W^{-1}\neq\emptyset$, we deduce that $A\cdot (W\cap W^{-1})=G=(W\cap W^{-1})\cdot A$. The latter equalities together with $e\in W\cap W^{-1}\subset V\cap V^{-1}\subset U$ imply that $A\cdot U=G=U\cdot A$. Hence the group $G^\ast$ satisfies $ib(G^\ast)\leq \mathfrak{c}$. We now apply \cite[Theorem~5.4.10]{AT} to conclude that $c(G^\ast)\leq 2^{ib(G^\ast)}\leq 2^\mathfrak{c}$. Since $G$ is a continuous image of $G^\ast$, we also have $c(G)\leq c(G^\ast)\leq 2^\mathfrak{c}$. Finally, the cardinal number $2^\mathfrak{c}$ is Ulam non-measurable by \cite[Theorem~6.2.2]{AT}.
So the cardinality of every discrete family of open sets in $G$ is Ulam non-measurable and the equality $\mu{G}=\upsilon{G}$ holds by \cite[Lemma~8.3.1]{AT}. \end{proof} \begin{corollary}\label{C3.8} Let $G$ be a regular simply $sm$-factorizable paratopological group. Then the equality $\mu{G}=\upsilon{G}$ holds. Furthermore, the space $\mu{G}$ admits a natural structure of paratopological group containing $G$ as a dense subgroup and $\mu{G}$ is also simply $sm$-factorizable. \end{corollary} \begin{proof} Lemma~\ref{Le:ast} implies that the cardinality of every discrete family of open sets in $G$ is at most $2^\mathfrak{c}$ and, hence, is Ulam non-measurable. So the equality $\mu{G}=\upsilon{G}$ holds by \cite[Lemma~8.3.1]{AT}. Therefore, both conclusions of the corollary follow directly from Theorem~\ref{Th}. \end{proof} \begin{corollary} Let $G$ be a regular totally $\omega$-narrow and weakly Lindel\"{o}f paratopological group. Then the equality $\mu{G}=\upsilon{G}$ holds. Furthermore, the Dieudonn\'{e} completion $\mu{G}$ of the space $G$ admits a natural structure of paratopological group containing $G$ as a dense subgroup and $\mu{G}$ is simply $sm$-factorizable. \end{corollary} \begin{proof} The group $G$ is simply $sm$-factorizable, by Corollary~\ref{C2.8}. Hence the required conclusions follow from Corollary~\ref{C3.8}. \end{proof} \begin{problem} Does Lemma~\ref{Le:ast} remain valid without the assumption of the regularity of $G$? \end{problem} One can try to improve one of the conclusions of Lemma~\ref{Le:ast} as follows: \begin{problem}\label{Prob:SM} Does every Hausdorff (regular) simply $sm$-factorizable paratopological group $G$ satisfy $c(G)\leq \mathfrak{c}$?
\end{problem} It is worth mentioning that the \lq{Hausdorff\rq} and \lq{regular\rq} versions of Problem~\ref{Prob:SM} are equivalent since every paratopological group $G$ satisfies $c(G)=c(G_{sr})$ (see \cite[Proposition~2.2]{Tk15}) and the paratopological group $G_{sr}$ is regular provided $G$ is Hausdorff. The next result is close to Corollary~\ref{Cor:3.5}. In it, under stronger assumptions, we extend some properties of topological groups to the wider class of paratopological groups. \begin{corollary}\label{C3.10} Let $G$ be a simply $sm$-factorizable paratopological group which is $C$-embedded in a regular space $X$. If $X^2$ is Lindel\"of, then the group $G$ is $\mathbb{R}$-factorizable. In addition, if $X$ is a Lindel\"{o}f $\Sigma$-space, then all subgroups of $G$ have countable cellularity and for every family $\gamma$ of $G_\delta$-sets in $G$, the closure of $\,\bigcup\gamma$ is a zero-set in $G$. \end{corollary} \begin{proof} We can assume without loss of generality that $G$ is dense in $X$ because the class of Lindel\"{o}f $\Sigma$-spaces is hereditary w.r.t.~taking closed subspaces. Then $\upsilon{G}=X$. By Corollary~\ref{C3.8}, $X = \upsilon{G} = \mu{G}$ and $X$ is homeomorphic to a paratopological group containing $G$ as a dense paratopological subgroup. Also, the topological group $X^\ast$ associated to $X$ is homeomorphic to a closed subspace of the space $X^2$ \cite[Lemma~2.2]{AS} and, hence, $X^\ast$ is Lindel\"of. Applying \cite[Theorem~3.6]{ST3} we see that $X$ is $\mathbb{R}$-factorizable. So $G$ is $\mathbb{R}$-factorizable as a $C$-embedded subgroup of the $\mathbb{R}$-factorizable paratopological group $X$ (see \cite[Theorem~3.2]{XY1}). Assume that $X$ is a Lindel\"of $\Sigma$-space. Then $X^2$ is also a Lindel\"of $\Sigma$-space, so $G$ is $\mathbb{R}$-factorizable. Let $G^\ast$ be the topological group associated to $G$.
It is clear that the identity embedding of $G$ into $X$ is a topological isomorphism of $G^\ast$ onto a subgroup of the topological group $X^\ast$. Since $X^\ast$ is homeomorphic to a closed subspace of $X^2$, we deduce that $X^\ast$ is a Lindel\"of $\Sigma$-space. Every subgroup $H$ of $X^\ast$ has countable cellularity by \cite[Corollary~5.3.21]{AT}. If $K$ is a subgroup of $G$, then $H=j^{-1}(K)$ is a subgroup of both $G^\ast$ and $X^\ast$, where $j$ is the identity mapping of $G^\ast$ onto $G$. It follows from the continuity of $j$ that $c(K)\leq c(H)\leq\omega$. Finally, let $\gamma$ be a family of $G_\delta$-sets in $G$. Since $G$ is a Tychonoff space, each element of $\gamma$ is the union of a family of zero-sets. Hence we can assume that $\gamma$ consists of zero-sets in $G$. By our assumptions, $G$ is a dense $C$-embedded subspace of $X$, so the closure in $X$ of every zero-set in $G$ is a zero-set in $X$. Let $\gamma^\ast=\{\mathrm{cl}_X P: P\in\gamma\}$. Then $\gamma^\ast$ is a family of zero-sets in $X$ and \cite[Theorem~4.2]{ST2} implies that the closure of $\bigcup\gamma^\ast$ in $X$ is a zero-set. Since $\bigcup\gamma$ is dense in $\bigcup\gamma^\ast$, we conclude that $\mathrm{cl}_G\bigcup\gamma$ is a zero-set in $G$. \end{proof} \begin{remark} In Corollary~\ref{C3.10}, the Lindel\"{o}fness of $X^2$ cannot be weakened to the Lindel\"{o}fness of $X$. Indeed, the Sorgenfrey line $\mathbb{S}$ is a simply $sm$-factorizable paratopological group by Theorem~\ref{th1}. Also, $\mathbb{S}$ is Lindel\"of but not $\mathbb{R}$-factorizable \cite[Remark~3.22]{XST}. \end{remark} \begin{corollary}[See Theorem~1 of \cite{ST1}] Let $G$ be a regular $\mathbb{R}$-factorizable paratopological group. Then the Dieudonn\'{e} completion of the space $G$, $\mu{G}$, admits a natural structure of paratopological group containing $G$ as a dense subgroup and $\mu{G}$ is $\mathbb{R}$-factorizable.
\end{corollary} \begin{proof} Every regular paratopological group is a Tychonoff space \cite[Corollary~5]{BR}. Since $G$ is $\mathbb{R}$-factorizable (hence, simply $sm$-factorizable), we can apply Corollary~\ref{C3.8} to conclude that both multiplication and inversion on $G$ admit continuous extensions to $\mu{G}$ making the latter space into a paratopological group. The $\mathbb{R}$-factorizability of the group $\mu{G}$ can be deduced as in \cite{ST1}. \end{proof} Our proof of Theorem~\ref{P3.11} below requires the next simple fact: \begin{lemma}\label{Le:NS} Let $\mathcal{N}$ be a family of subgroups of a paratopological group $G$ such that every neighborhood of the identity $e$ in $G$ contains an element of $\mathcal{N}$. Then every neighborhood of $e$ in the topological group $G^\ast$ associated to $G$ also contains an element of $\mathcal{N}$. \end{lemma} \begin{proof} Take an arbitrary neighborhood $U$ of $e$ in $G^\ast$. It follows from the definition of the topology of $G^\ast$ that there exists an open neighborhood $V$ of $e$ in $G$ such that $V\cap V^{-1}\subset U$. By the assumptions of the lemma, there exists $N\in\mathcal{N}$ satisfying $N\subset V$. Since $N$ is a subgroup of $G$, we have that $N=N^{-1} \subset V^{-1}$. Therefore, $N\subset V\cap V^{-1}\subset U$. \end{proof} \begin{theorem}\label{P3.11} Let $G$ be a regular paratopological group. If the topological group $G^\ast$ associated to $G$ is simply $sm$-factorizable and $\omega$-narrow, then so is $G$. \end{theorem} \begin{proof} Let $\mathcal {N}$ be the family of closed invariant subgroups $N$ of $G$ such that the quotient paratopological group $G/N$ is strongly submetrizable. {\bf Claim 1.} \emph{The family $\mathcal {N}$ is closed under countable intersections.} Take any countable subfamily $\mathcal {C}=\{N_{k}: k\in\omega\}$ of $\mathcal {N}$. For every $k\in\omega$, let $\varphi_k\colon G\to G/N_k$ be the quotient homomorphism. 
Then the diagonal product $\psi\colon G\to \prod_{k\in \omega} G/N_k$ of the family $\{\varphi_k: k\in \omega\}$ is a continuous homomorphism. Since every group $G/N_k$ is strongly submetrizable, so is the countable product $\prod_{k\in \omega} G/N_k$. Further, the subgroup $\psi(G)$ of $\prod_{k\in\omega} G/N_k$ is also strongly submetrizable. Clearly, the kernel of $\psi$ satisfies $\ker\psi=\bigcap\mathcal {C}$. It is easy to see that the quotient group $G/\ker\psi$ is strongly submetrizable and the latter implies that $\bigcap\mathcal {C}=\ker\psi\in \mathcal {N}$. This proves Claim~1. Let $\mathcal{N}=\{N_i: i\in I\}$ and $\varphi_i\colon G\to G/N_i$ be the quotient homomorphism, where $i\in I$. {\bf Claim 2.} \emph{The diagonal product $\varphi\colon G\to \varphi(G)\subseteq \Pi=\prod_{i\in I} G/N_i$ of the family $\{\varphi_i: i\in I\}$ is a topological isomorphism. In particular, every neighborhood of the identity in $G$ contains an element of $\mathcal{N}$.} Since $G$ is regular and totally $\omega$-narrow, it is $\omega$-balanced \cite[Proposition~3.8]{ST2} and satisfies $Ir(G)\leq \omega$ \cite[Theorem~2]{Sa}. Therefore, it follows from Theorem~3.6 and Lemma~3.7 of \cite{T} that for each open neighborhood $U$ of the identity $e$ in $G$, one can find a continuous homomorphism $p\colon G\to H$ onto a separable metrizable paratopological group $H$ and an open neighborhood $V$ of the identity in $H$ such that $p^{-1}(V)\subseteq U$. Hence the kernel of $p$ belongs to $\mathcal {N}$ and satisfies $\ker{p}\subset U$. This implies that the family $\{\varphi_i: i\in I\}$ separates points and closed sets in $G$ and, therefore, the diagonal product $\varphi$ of this family is a topological isomorphism of $G$ onto $\varphi(G)$, as claimed.
Let $\{U_k: k\in\omega\}$ be a family of neighborhoods of the identity in $G^\ast$ such that $P=\bigcap_{k\in\omega} U_k$. It follows from Lemma~\ref{Le:NS} and Claim~2 that for every $k\in\omega$, there exists $N_k\in\mathcal{N}$ such that $N_k\subset U_k$. Then by Claim~1, $N=\bigcap_{k\in\omega} N_k$ is an element of $\mathcal{N}$ satisfying $N\subset \bigcap_{k\in\omega} U_k=P$. Take any continuous function $f\colon G\to \mathbb{R}$. Then $f$ remains continuous on $G^\ast$. Since $G^\ast$ is simply $sm$-factorizable, it follows from item (2) of Theorem~\ref{th1} that we can find a continuous homomorphism $\pi$ of $G^\ast$ onto a strongly submetrizable topological group $H$ and a continuous function $g\colon H\to \mathbb{R}$ such that $f=g\circ \pi$. Since $H$ is strongly submetrizable, the identity $e_H$ of $H$ is a $G_\delta$-set in $H$. Therefore, $\ker\pi$ is a $G_\delta$-set in $G^\ast$. Then Claim~3 implies that there is an $N_i\in \mathcal {N}$ such that $N_i\subseteq \ker\pi$. Since $f$ is constant on each coset $x\cdot \ker\pi$ and the groups $G$ and $G^\ast$ share the same underlying set, we see that $f$ is constant on $xN_i$ for each $x\in G$. Therefore, there exists a function $h\colon G/N_i\to \mathbb{R}$ such that $f=h\circ \varphi_i$, where $G/N_i$ is endowed with the quotient topology and $\varphi_i$ is the quotient homomorphism of $G$ onto $G/N_i$. Since $\varphi_i$ is continuous and open, $h$ is continuous. Clearly, $G/N_i$ is strongly submetrizable, so $G$ is simply $sm$-factorizable by Theorem~\ref{th1}. Finally, $G$ is $\omega$-narrow as a continuous homomorphic image of $G^\ast$. \end{proof} Theorem~\ref{P3.11} makes it natural to ask the following questions: \begin{problem} Let $G$ be a regular simply $sm$-factorizable paratopological group. Is the topological group $G^\ast$ associated to $G$ simply $sm$-factorizable? What if $G$ is a regular $\mathbb{R}$-factorizable paratopological group? 
\end{problem} \begin{question} Can the requirement of $\omega$-narrowness of $G^\ast$ be dropped in Theorem~\ref{P3.11}? Does the $\omega$-narrowness of $G$ suffice? \end{question} In Example~\ref{Ex:NN} we answer both parts of the above question in the negative. First we present an auxiliary lemma in which $\mathbb{Z}$ stands for the discrete additive group of integers. \begin{lemma}\label{Le:Aux} There exists a countable dense subgroup $S$ of the product $\mathbb{Z}^{\mathfrak{c}}$ such that for every $x\in S$, every finite set $C\subset\mathfrak{c}$ and every $k\in\mathbb{Z}$, one can find $s\in S$ satisfying $s(\alpha)\leq x(\alpha)$ for each $\alpha\in\mathfrak{c}\setminus C$ and $s(\alpha)\leq k$ for each $\alpha\in C$. \end{lemma} \begin{proof} Our argument is a modification of the proof of the Hewitt--Marczewski--Pondiczery theorem as presented in \cite[Theorem~2.3.15]{En}. Let $\mu$ be a separable metrizable topology on the index set $\mathfrak{c}$. Denote by $\mathcal{A}$ a countable base for $(\mathfrak{c},\mu)$. Take a countable dense subset $R_0$ of the space $\Pi=\mathbb{Z}^\mathfrak{c}$ and let $S_0=\hull{R_0}$. Assume that we have defined countable subgroups $S_0\subset \cdots\subset S_n$ of $\Pi$. For every $x\in S_n$, every finite disjoint subfamily $\nu$ of $\mathcal{A}$ and every $k\in\mathbb{Z}$, we define an element $y_{x,\nu,k}\in\Pi$ by the rule \begin{equation*} y_{x,\nu,k}(\alpha)=\begin{cases} x(\alpha)&\text{if $\alpha\in\mathfrak{c}\setminus \bigcup\nu$};\\ \min\{k,x(\alpha)\}&\text{if $\alpha\in \bigcup\nu$}. \end{cases} \end{equation*} Let $R_{n+1}=\{y_{x,\nu,k}: x\in S_n,\ \nu\in [\mathcal{A}]^{<\omega},\ k\in\mathbb{Z}\}$. Clearly $R_{n+1}$ is countable, so $S_{n+1}=S_n+\hull{R_{n+1}}$ is countable as well. We claim that the subgroup $S=\bigcup_{n\in\omega} S_n$ of $\Pi$ is as required. Note that $S$ is countable and dense in $\Pi$ since $S_0\subset S$.
Take an element $x\in S$, a finite subset $C=\{\alpha_1,\ldots,\alpha_r\}$ of $\mathfrak{c}$ and an integer $k$. Then $x\in S_n$ for some $n\in\omega$. Since the space $(\mathfrak{c},\mu)$ is Hausdorff, we can find pairwise disjoint elements $U_1,\ldots,U_r$ of $\mathcal{A}$ such that $\alpha_i\in U_i$ for each $i\leq r$. Let $\nu=\{U_1,\ldots,U_r\}$. Then the point $s=y_{x,\nu,k}\in R_{n+1}\subset S_{n+1}\subset S$ satisfies $s(\alpha)\leq x(\alpha)$ for each $\alpha\in\mathfrak{c}\setminus C$ and $s(\alpha_i)\leq k$ for each $i\leq r$. This completes the proof. \end{proof} \begin{example}\label{Ex:NN} \emph{There exists a regular $\omega$-narrow paratopological Abelian group $G$ such that the topological group $G^\ast$ associated to $G$ is discrete with $|G^*|\leq\mathfrak{c}$ (hence simply $sm$-factorizable by \cite[Proposition 5.15]{AT1}), but $G$ fails to be simply $sm$-factorizable.} \end{example} \begin{proof} We modify the construction described in \cite[Example~2.9]{T}. In fact, our group $G$ will be a (dense) subgroup of the paratopological group constructed there. Let $\mathbb{Z}$ be the discrete additive group of the integers and $\Pi=\mathbb{Z}^\mathfrak{c}$ be the product of $\mathfrak{c}=2^\omega$ copies of $\mathbb{Z}$. For every $x\in\Pi$, let $$ \mathrm{supp}(x)=\{\alpha\in \mathfrak{c}: x(\alpha)\neq 0\}. $$ Then $\sigma=\{x\in\Pi: |\mathrm{supp}(x)|<\omega\}$ is a subgroup of $\Pi$ which is called the \emph{$\sigma$-product} of $\mathfrak{c}$ copies of $\mathbb{Z}$. It is clear that $|\sigma|=\mathfrak{c}$. Let $S$ be a countable dense subgroup of $\Pi$ as in Lemma~\ref{Le:Aux} ($\Pi$ carries the Tychonoff product topology). Clearly $G=\sigma+S$ is a subgroup of $\Pi$. For every $A\subset \mathfrak{c}$, we define a subset $U_A$ of $G$ by $$ U_A = \{x\in G: x(\alpha)=0 \mbox{ for each } \alpha\in A \mbox{ and } x(\alpha)\geq 0 \mbox{ for each } \alpha\in\mathfrak{c}\}. 
$$ Note that each $U_A$ is a \emph{subsemigroup} of $G$, i.e.~$U_A+U_A\subset U_A$. Also, each $U_A$ contains the identity element of $G$. These properties of the sets $U_A$ imply that the family $$ \mathcal{B} = \{x+U_A: x\in G,\ A\subset \mathfrak{c},\ |A|<\omega\} $$ is a base for a paratopological group topology $\tau$ on $G$, and the sets $U_A$, with $A\subset\mathfrak{c}$ finite, form a local base at the identity of the paratopological group $(G,\tau)$. It is clear that the topology $\tau$ is (strictly) finer than the topology of $G$ inherited from the Tychonoff product $\mathbb{Z}^\mathfrak{c}$, so the space $(G,\tau)$ is Hausdorff. \noindent {\bf Claim~1.} \emph{The group $(G,\tau)$ is $\omega$-narrow.} Consider a basic open neighborhood $U_A$ of the identity in $G$, where $A$ is finite. Denote by $\sigma_A$ the set of all $x\in\sigma$ such that $\mathrm{supp}(x)\subset A$. It is clear that $\sigma_A$ is a countable subgroup of $\sigma$. We claim that $G=S+\sigma_A+U_A$. Indeed, let $y\in G$ be an arbitrary element. Then $y=x+a$ for some $x\in S$ and $a\in\sigma$. If $a$ is the identity element of $G$ (equivalently, $\mathrm{supp}(a)=\emptyset$), then $y=x\in S\subset S+U_A$. Otherwise let $C=\mathrm{supp}(a)\setminus A$ and $k=\min\{y(\alpha): \alpha\in\mathrm{supp}(a)\}$. According to our choice of $S$, there exists an element $s\in S$ such that $s(\alpha)\leq x(\alpha)$ for each $\alpha\in\mathfrak{c}\setminus C$ and $s(\alpha)\leq k$ for each $\alpha\in C$. Since $x$ and $y$ coincide on $\mathfrak{c}\setminus \mathrm{supp}(a)$, our definition of $k$ implies that $s(\alpha)\leq y(\alpha)$ for each $\alpha\in\mathfrak{c}$. Choose an element $b\in\sigma_A$ such that $y(\alpha)=s(\alpha)+b(\alpha)$ for each $\alpha\in A$. Then $y\in s+b+U_A \subset S+\sigma_A+U_A$. This proves the equality $G=S+\sigma_A+U_A$. Since the subset $S+\sigma_A$ of $G$ is countable, we conclude that the group $(G,\tau)$ is $\omega$-narrow. This proves Claim~1.
\noindent {\bf Claim~2.} \emph{The set $U_A$ is clopen in $(G,\tau)$, for each finite subset $A$ of $\mathfrak{c}$.} Indeed, if $x\in G\setminus U_A$, then either $x(\alpha)\neq 0$ for some $\alpha\in A$ or $x(\beta)<0$ for some $\beta\in\mathfrak{c}$. In the first case, $x+U_B$ is an open neighborhood of $x$ disjoint from $U_A$, where $B=\{\alpha\}$. In the second case, $x+U_B$ is an open neighborhood of $x$ disjoint from $U_A$, where $B=\{\beta\}$. Therefore, the complement $G\setminus U_A$ is open in $(G,\tau)$ and $U_A$ is clopen. This proves Claim~2. It follows from Claim~2 that the base $\mathcal{B}$ of $(G,\tau)$ consists of clopen sets and, hence, this space is zero-dimensional. In particular, $(G,\tau)$ is regular. Let $O=U_\emptyset$. Then $O\cap (-O)=\{\bar{0}\}$, where $\bar{0}$ is the identity element of $G$. Hence the topological group $(G,\tau)^\ast$ associated to $(G,\tau)$ is discrete. \noindent {\bf Claim~3.} \emph{The group $(G,\tau)$ is not simply $sm$-factorizable.} By Claim~2, the set $O$ is clopen in $(G,\tau)$. Let $f$ be the characteristic function of $O$, i.e.~$f(x)=1$ if $x\in O$ and $f(x)=0$ otherwise. Then $f$ is continuous. Suppose for a contradiction that there exists a continuous homomorphism $\varphi\colon (G,\tau)\to H$ to a second-countable paratopological group $H$ such that $O=\varphi^{-1}\varphi(O)$. Let $\{V_n: n\in\omega\}$ be a local base at the identity of $H$. For every $n\in\omega$, take a finite subset $A_n$ of $\mathfrak{c}$ such that $\varphi(U_{A_n})\subset V_n$. Then the set $B=\bigcup_{n\in\omega} A_n$ is countable and $U_B\subset\ker\varphi$. Since $\ker\varphi$ is a subgroup of $G$, we see that the subgroup $\hull{U_B}$ of $G$ generated by $U_B$ is contained in $\ker\varphi$. An easy verification shows that $$ \hull{U_B}=\{x\in G: x(\alpha)=0 \mbox{ for each } \alpha\in B\}. $$ Denote by $p_B$ the natural projection of $G$ to $\mathbb{Z}^B$. 
Then $\ker p_B=\hull{U_B} \subset \ker\varphi$, so our choice of $\varphi$ implies that $O=p_B^{-1}p_B(O)$, which is clearly false. Indeed, take an arbitrary element $x\in G$ such that $x(\alpha)=0$ for each $\alpha\in B$ and $x(\beta)<0$ for some $\beta\in\mathfrak{c}\setminus B$. Then $x\notin O$, while $x\in p_B ^{-1}p_B(O)$. This contradiction proves that the group $(G,\tau)$ fails to be simply $sm$-factorizable. \end{proof} Since every Hausdorff $\mathbb{R}$-factorizable topological group is $\omega$-narrow and simply $sm$-factorizable, the next corollary is immediate from Theorem~\ref{P3.11}. \begin{corollary}\label{Cor:4.11} If the topological group $G^\ast$ associated to a regular paratopological group $G$ is $\mathbb{R}$-factorizable, then $G$ is simply $sm$-factorizable. \end{corollary} \begin{problem}\label{Prob:Reg} Can one weaken the regularity of $G$ in Corollary~\ref{Cor:4.11} to the Hausdorff separation property? \end{problem} \begin{theorem}\label{Th3.14} Let $G$ be a regular paratopological group such that the topological group $G^\ast$ associated to $G$ is $\omega$-narrow and simply $sm$-factorizable. Then the realcompactification $\upsilon{G}$ of the space $G$ admits a natural structure of paratopological group containing $G$ as a dense subgroup and the group $\upsilon{G}$ is simply $sm$-factorizable. \end{theorem} \begin{proof} According to Theorem~\ref{P3.11}, $G$ is simply $sm$-factorizable. Hence Corollary~\ref{C3.8} implies that the space $\upsilon{G}$ admits the structure of paratopological group containing $G$ as a dense subgroup. By Theorem~\ref{Th}, the group $\upsilon{G}$ is simply $sm$-factorizable. \end{proof} It is well known that every Hausdorff $\mathbb{R}$-factorizable topological group is $\omega$-narrow and simply $sm$-factorizable (see \cite[Proposition~8.1.3]{AT} and \cite[Theorem~5.9]{AT1}). Thus the next corollary follows from Theorem~\ref{Th3.14}. 
\begin{corollary}[See Theorem~2 of \cite{ST1}]\label{C3.15} Let $G$ be a regular paratopological group such that the topological group $G^\ast$ associated to $G$ is $\mathbb{R}$-factorizable. Then the realcompactification $\upsilon{G}$ of the space $G$ admits a natural structure of paratopological group containing $G$ as a dense subgroup and the equality $\upsilon{G}=\mu{G}$ holds. \end{corollary} \begin{remark} Let $G$ be as in Corollary~\ref{C3.15}. It is shown in \cite[Theorem~2]{ST1} that the topological groups $(\upsilon{G})^\ast$ and $\upsilon(G^\ast)$ are topologically isomorphic and $\mathbb{R}$-factorizable. However, we do not know whether the paratopological group $G$ is $\mathbb{R}$-factorizable (see \cite[Problem~5.1]{ST3}). Notice that $G$ is simply $sm$-factorizable, by Theorem~\ref{Th3.14}. \end{remark} {\bf Acknowledgement:} This paper is dedicated to Professor Lin Shou on the occasion of his 60th birthday. He is a distinguished teacher and one of the founders of the Chinese school of Generalized Metric Spaces Theory. His deep mathematical insight and his warm and sincere personality have greatly influenced us. \vskip0.9cm \end{document}
\begin{document} \title{A four dimensional Bernstein Theorem} \begin{abstract} We prove a four dimensional version of the Bernstein Theorem, with complex polynomials being replaced by quaternionic polynomials. We deduce from the theorem a quaternionic Bernstein's inequality and give a formulation of this last result in terms of four-dimensional zonal harmonics and Gegenbauer polynomials. \footnote{{\bfseries Mathematics Subject Classification (2010)}. Primary 30G35; Secondary 26D05, 33C50.\\ {\bfseries Keywords:} Bernstein Theorem, Bernstein inequality, Quaternionic polynomials, Zonal harmonics} \end{abstract} \section{Introduction} In 1930, S.\ Bernstein \cite{Bernstein} proved the following result: \begin{theorem*}[{\bfseries A}] Let $p(z)$ and $q(z)$ be two complex polynomials with degree of $p(z)$ not exceeding that of $q(z)$. If $q(z)$ has all its zeros in $\{|z|\le1\}$ and $|p(z)|\le |q(z)|$ for $|z|=1$, then $|p'(z)|\le |q'(z)|$ for $|z| = 1$. \end{theorem*} From this result, the famous Bernstein's inequality (first established in this form by M.\ Riesz in 1914) can be deduced. Taking $q(z)=Mz^d$, one obtains the following \begin{theorem*}[{\bfseries B}] If $p(z)$ is a complex polynomial of degree $d$ and $\max_{|z|=1}|p(z)|=M$, then $|p'(z)|\le dM$ for $|z| = 1$. \end{theorem*} This note deals with a four dimensional version of such classic results, with complex polynomials being replaced by quaternionic polynomials. The extension of Bernstein's inequality to the quaternionic setting has already appeared in \cite{GalSabadini}. The proof given there is based on a quaternionic version of the Gauss-Lucas Theorem. Unfortunately, this last result is valid only for a small class of quaternionic polynomials, as was recently shown in \cite{GaussLucas}, where another version of the Gauss-Lucas Theorem, valid for every polynomial, has been proved.
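Theorem (B) is easy to test numerically in the complex case. The following sketch (illustrative only; the sample polynomial $p(z)=z^3+z$ and all names are our choices, not taken from the text) samples the unit circle and checks the inequality $\max_{|z|=1}|p'(z)|\le d\,\max_{|z|=1}|p(z)|$:

```python
import cmath

# Illustrative check of Theorem (B) for p(z) = z^3 + z (degree d = 3).
def p(z):
    return z**3 + z

def dp(z):
    return 3 * z**2 + 1

# Sample the unit circle |z| = 1.
pts = [cmath.exp(2j * cmath.pi * t / 2000) for t in range(2000)]
M = max(abs(p(z)) for z in pts)    # max of |p| on the circle (= 2, at z = 1)
Md = max(abs(dp(z)) for z in pts)  # max of |p'| on the circle (= 4, at z = 1)

assert Md <= 3 * M + 1e-9          # Bernstein: max |p'| <= d * max |p|
```

Here $|p(z)|=|z^2+1|\le 2$ and $|p'(z)|=|3z^2+1|\le 4$ on the circle, so the bound $4\le 3\cdot 2$ holds with room to spare.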
Recently, a different proof of the quaternionic Bernstein's inequality has been given in \cite{Xu}, using the Fej\'er kernel and avoiding the use of the Gauss-Lucas Theorem. We refer the reader to \cite{GeStoSt2013} and \cite{DivisionAlgebras} for definitions and properties concerning the algebra ${\mathbb{H}}$ of quaternions and many aspects of the theory of quaternionic \emph{slice regular} functions, a class of functions which includes polynomials and convergent power series. The ring ${\mathbb{H}}[X]$ of quaternionic polynomials is defined by fixing the position of the coefficients with respect to the indeterminate $X$ (e.g.\ on the right) and by imposing commutativity of $X$ with the coefficients when two polynomials are multiplied together (see e.g.\ \cite[\S 16]{Lam}). Given two polynomials $P,Q\in{\mathbb{H}}[X]$, let $P\cdot Q$ denote the product obtained in this way. A direct computation (see \cite[\S 16.3]{Lam}) shows that if $P(x)\ne0$, then \begin{equation}\label{product} (P\cdot Q)(x)=P(x)Q(P(x)^{-1}xP(x)), \end{equation} while $(P\cdot Q)(x)=0$ if $P(x)=0$. In particular, if $P$ has real coefficients, then $(P\cdot Q)(x)=P(x)Q(x)$. In this setting, a {(left) root or zero} of a polynomial $P(X)=\sum_{h=0}^dX^h a_h$ is an element $x\in{\mathbb{H}}$ such that $P(x)=\textstyle\sum_{h=0}^dx^h a_h=0$. A subset $A$ of ${\mathbb{H}}$ is called \emph{circular} if, for each $x\in A$, $A$ contains the whole set (a 2-sphere if $x\notin{\mathbb{R}}$, a point if $x\in{\mathbb{R}}$) \begin{equation}\label{sx} {\mathbb{S}}_x=\{pxp^{-1}\in{\mathbb{H}}\;|\;p\in{\mathbb{H}}^*\}, \end{equation} where ${\mathbb{H}}^*:={\mathbb{H}}\setminus\{0\}$. In particular, for any imaginary unit $I\in{\mathbb{H}}$, ${\mathbb{S}}_I={\mathbb{S}}$ is the 2-sphere of all imaginary units in ${\mathbb{H}}$.
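Formula \eqref{product} and the noncommutative behaviour of left roots can be checked on a small example. The sketch below (Python, quaternions encoded as 4-tuples; all helper names are ours) evaluates $(X-i)\cdot(X-j)=X^2-X(i+j)+k$ both directly and via \eqref{product}, and confirms that $i$ is a left root while $j$ is not:

```python
# Quaternions as 4-tuples (a, b, c, d) = a + bi + cj + dk; helper names ours.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(p):
    return (p[0], -p[1], -p[2], -p[3])

def qinv(p):
    n = sum(t * t for t in p)
    return tuple(t / n for t in qconj(p))

def qadd(p, q):
    return tuple(s + t for s, t in zip(p, q))

def poly_eval(coeffs, x):
    # P(x) = sum_h x^h a_h, coefficients a_h multiplied on the right.
    out, xp = (0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)
    for a in coeffs:
        out = qadd(out, qmul(xp, a))
        xp = qmul(xp, x)
    return out

I, J, K = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
P = [(0, -1, 0, 0), (1, 0, 0, 0)]       # P(X) = X - i
Q = [(0, 0, -1, 0), (1, 0, 0, 0)]       # Q(X) = X - j
PQ = [K, (0, -1, -1, 0), (1, 0, 0, 0)]  # P.Q = X^2 - X(i+j) + k

x = J                                   # a point with P(x) != 0
px = poly_eval(P, x)
conj_pt = qmul(qmul(qinv(px), x), px)   # P(x)^{-1} x P(x)
via_formula = qmul(px, poly_eval(Q, conj_pt))
direct = poly_eval(PQ, x)
assert all(abs(s - t) < 1e-12 for s, t in zip(direct, via_formula))

# i is a left root of P.Q, while j is not ((P.Q)(j) = 2k):
assert max(abs(t) for t in poly_eval(PQ, I)) < 1e-12
assert abs(poly_eval(PQ, J)[3] - 2) < 1e-12
```

Note that $j$ fails to be a root of $(X-i)\cdot(X-j)$ even though it is a root of the right factor, exactly because evaluation follows \eqref{product} rather than pointwise multiplication.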
It is well known (see e.g.\ \cite[\S3.3]{GeStoSt2013}) that if $P\not\equiv0$, the zero set $V(P)$ consists of isolated points or isolated 2-spheres of the form \eqref{sx}. We show that the quaternionic version of Theorem (A) holds true after imposing a necessary assumption on the second polynomial. We require that $Q\in{\mathbb{H}}[X]$ has all of its coefficients belonging to a fixed subalgebra of ${\mathbb{H}}$. This restricted version of the Bernstein Theorem is however sufficient to deduce the quaternionic Bernstein's inequality, i.e.\ the analog of Theorem (B). In Section \ref{sec:zonal}, we restate the inequality in terms of four-dimensional zonal harmonics and Gegenbauer polynomials. To obtain this form, we use results of \cite{Harmonicity} to obtain an Almansi type decomposition of a quaternionic polynomial. \section{Bernstein Theorem and inequality} Let $I\in{\mathbb{S}}$ and let ${\mathbb{C}}_I\subset{\mathbb{H}}$ be the real subalgebra generated by $I$, i.e.\ the complex plane generated by 1 and $I$. If ${\mathbb{C}}_I$ contains every coefficient of $P\in{\mathbb{H}}[X]$, then we say that $P$ is a \emph{${\mathbb{C}}_I$-polynomial}. Every ${\mathbb{C}}_I$-polynomial $P$ is \emph{one-slice-preserving}, i.e.\ $P({\mathbb{C}}_I)\subseteq{\mathbb{C}}_I$. If this property holds for two imaginary units $I,J$, with $I\ne\pm J$, then it holds for every unit and $P$ is called \emph{slice-preserving}. This happens exactly when all the coefficients of $P$ are real. Let $P(X)=\sum_{k=0}^dX^ka_k\in{\mathbb{H}}[X]$ be of degree $d\geq1$. Let $P'(X)=\sum_{k=1}^dX^{k-1}ka_k$ be the derivative of $P$. For every $I\in{\mathbb{S}}$, let $\pi_I:{\mathbb{H}}\to{\mathbb{H}}$ be the orthogonal projection onto ${\mathbb{C}}_I$ and $\pi_I^\bot=id-\pi_I$. Let $P^I(X):=\sum_{k=0}^dX^ka_{k,I}$ be the ${\mathbb{C}}_I$-polynomial with coefficients $a_{k,I}:=\pi_I(a_k)$.
We denote by ${\mathbb{B}}=\{x\in{\mathbb{H}}\,|\,|x|<1\}$ the unit ball in ${\mathbb{H}}$ and by ${\mathbb{S}}^3=\{x\in{\mathbb{H}}\,|\,|x|=1\}$ the unit sphere. \begin{theorem}\label{thm:Bernstein} Let $P,Q\in{\mathbb{H}}[X]$ be two quaternionic polynomials with degree of $P$ not exceeding that of $Q$. Assume that there exists $I\in{\mathbb{S}}$ such that $Q$ is a ${\mathbb{C}}_I$-polynomial. If $V(Q)\subseteq\overline{\mathbb{B}}$ and $|P(x)|\le|Q(x)|$ for $x\in{\mathbb{S}}^3$, then $|P'(x)|\le|Q'(x)|$ for $x\in{\mathbb{S}}^3\cap{\mathbb{C}}_I$. \end{theorem} \begin{proof} Let $\lambda\in{\mathbb{H}}$ with $|\lambda|>1$ and set $R:=Q-P\lambda^{-1}\in{\mathbb{H}}[X]$. The polynomials $Q$ and $R^I=Q-(P\lambda^{-1})^I$ are ${\mathbb{C}}_I$-polynomials and then they can be identified with elements of ${\mathbb{C}}_I[X]$, with $\deg(R^I)\le\deg(Q)$. For every $x\in{\mathbb{C}}_I$, it holds \[|R^I(x)-Q(x)|=|(P\lambda^{-1})^I(x)|=|\pi_I((P\lambda^{-1})(x))|\le|(P\lambda^{-1})(x)|=\frac{|P(x)|}{|\lambda|}. \] If $x\in{\mathbb{S}}^3\cap{\mathbb{C}}_I=\{x\in{\mathbb{C}}_I\,|\,|x|=1\}$, then \begin{equation}\label{eq:inequality} |R^I(x)-Q(x)|\le\frac{|P(x)|}{|\lambda|}\le\frac{|Q(x)|}{|\lambda|}\le |Q(x)|. \end{equation} In view of Rouch\'e's Theorem for polynomials in ${\mathbb{C}}_I[X]$, $R^I$ and $Q$ have the same number of zeros in the disc $\{x\in{\mathbb{C}}_I\,|\,|x|<1\}$. Moreover, if $|x|=1$ and $Q(x)=0$, the inequality \eqref{eq:inequality} gives $R^I(x)=0$. Since $\deg(R^I)\le\deg(Q)$ and $V(Q)\subseteq\overline{\mathbb{B}}$, we get that $V(R^I)\cap{\mathbb{C}}_I\subseteq \overline{\mathbb{B}}\cap{\mathbb{C}}_I$. From the classic Gauss-Lucas Theorem, we get $V(R')\cap{\mathbb{C}}_I\subseteq V((R^I)')\cap{\mathbb{C}}_I\subseteq \overline{\mathbb{B}}\cap{\mathbb{C}}_I$. Now let $x\in{\mathbb{C}}_I$ with $|x|>1$ be fixed and define $\lambda:=Q'(x)^{-1}P'(x)\in{\mathbb{H}}$.
Observe that $Q'(x)\ne0$ again from the classic Gauss-Lucas Theorem applied to the polynomial $Q$ considered as element of ${\mathbb{C}}_I[X]$. If $|\lambda|>1$, the polynomial $R=Q-P\lambda^{-1}\in{\mathbb{H}}[X]$ defined as above has zero derivative at $x$: $R'(x)=Q'(x)-P'(x)\lambda^{-1}=0$, contradicting what was obtained before. Therefore we must have $|\lambda|\le1$, i.e.\ $|P'(x)|/|Q'(x)|\le1$ for all $x\in{\mathbb{C}}_I$ with $|x|>1$. By continuity, $|P'(x)|\le|Q'(x)|$ for all $x\in{\mathbb{C}}_I$ with $|x|=1$. \end{proof} We recall that a quaternionic polynomial, as any slice regular function, satisfies the maximum modulus principle \cite[Theorem 7.1]{GeStoSt2013}. Let \[\|P\|=\max_{|x|=1}|P(x)|=\max_{|x|\le1}|P(x)| \] denote the sup-norm of the polynomial $P\in{\mathbb{H}}[X]$ on ${\mathbb{B}}$. \begin{corollary}[{\bfseries Bernstein's inequality}]\label{cor:inequality} If $P\in{\mathbb{H}}[X]$ is a quaternionic polynomial of degree $d$, then $\|P'\|\le d\|P\|$. \end{corollary} \begin{proof} Let $M=\|P\|$ and apply the previous theorem to $P(X)$ and $Q(X)=MX^d$. Since $Q$ is slice-preserving, the thesis of Theorem \ref{thm:Bernstein} holds for every $I\in{\mathbb{S}}$. \end{proof} The inequality of Corollary \ref{cor:inequality} is best possible with equality holding if and only if $P$ is a multiple of the power $X^d$. \begin{proposition} If $P\in{\mathbb{H}}[X]$ is a quaternionic polynomial of degree $d$, and $|P'(y)|= d\|P\|$ at a point $y\in{\mathbb{S}}^3$, then $P(X)=X^da$, with $a\in{\mathbb{H}}$, $|a|=\|P\|$. \end{proposition} \begin{proof} We can assume that $P(X)$ is not constant. Let $b=P'(y)^{-1}$ and set $Q(X):=P(X)b=\sum_{k=1}^dX^ka_k$. Then $Q'(y)=1$, $\|Q\|=1/d$ and $\|Q'\|\le1$. Let $I\in{\mathbb{S}}$ be such that ${\mathbb{C}}_I\ni y$. Then \[\textstyle 1=Q'(y)=\sum_k ky^{k-1}a_k=\pi_I(Q'(y))=\sum_k ky^{k-1}\pi_I(a_k)=(Q^I)'(y).
\] If $x\in{\mathbb{C}}_I\cap{\mathbb{S}}^3$, it holds \[\textstyle\big|(Q^I)'(x)\big|=\big|\sum_k kx^{k-1}\pi_I(a_k)\big|=\big|\pi_I\big(\sum_k kx^{k-1}a_k\big)\big|\le\big|\sum_k kx^{k-1}a_k\big|=|Q'(x)|\le1. \] This means that the ${\mathbb{C}}_I$-polynomial $Q^I$, considered as an element of ${\mathbb{C}}_I[X]$, satisfies the equality in the classic Bernstein's inequality. The same inequality implies that \[1=\max_{x\in{\mathbb{C}}_I\cap{\mathbb{S}}^3}|(Q^I_{|{\mathbb{C}}_I})'(x)|\le d\max_{x\in{\mathbb{C}}_I\cap{\mathbb{S}}^3}|Q^I_{|{\mathbb{C}}_I}(x)|\le d\|Q\|=1, \] i.e.\ $\max_{x\in{\mathbb{C}}_I\cap{\mathbb{S}}^3}|Q^I_{|{\mathbb{C}}_I}(x)|=1/d$. Therefore the restriction of $Q^I$ to ${\mathbb{C}}_I$ coincides with the function $x^dc$, with $c\in{\mathbb{C}}_I$, $|c|=1/d$: \[Q^I(x)=\sum_{k=1}^d x^k\pi_I(a_k)=x^dc \text{\quad for every $x\in{\mathbb{C}}_I$}. \] This implies that $\pi_I(a_d)=c$, $\pi_I(a_k)=0$ for each $k=1,\ldots,d-1$ and $Q$ can be written as $Q(X)=X^dc+\widetilde Q(X)$, with the coefficients of $\widetilde Q$ belonging to ${\mathbb{C}}_I^\bot=\pi_I^\bot({\mathbb{H}})$. When $x\in{\mathbb{C}}_I\cap{\mathbb{S}}^3$, $\widetilde Q(x)\in{\mathbb{C}}_I^\bot$, and then \[\frac1{d^2}\ge|Q(x)|^2=|x^dc|^2+|\widetilde Q(x)|^2=\frac1{d^2}+|\widetilde Q(x)|^2. \] This inequality forces $\widetilde Q$ to be the zero polynomial and then $P(X)=Q(X)b^{-1}=X^d cb^{-1}$. \end{proof} We now show that in Theorem \ref{thm:Bernstein}, the assumption that $Q$ is one-slice-preserving is necessary. \begin{proposition}\label{counterexample} Let \[ P(X)=(X-i)\cdot(X-j)\cdot(X-k),\quad Q(X)=2X\cdot(X-i)\cdot (X-j). \] Then $V(Q)=\{0,i\}\subseteq\overline{\mathbb{B}}$ and $|P(x)|\le|Q(x)|$ for every $x\in{\mathbb{S}}^3$, but there exists $y\in{\mathbb{S}}^3$ such that $|P'(y)|>|Q'(y)|$.
\end{proposition} \begin{proof} By a direct computation we obtain: \begin{align} P(X)&=X^3-X^2(i+j+k)+X(i-j+k)+1,\quad Q(X)=2X^3-2X^2(i+j)+2Xk,\\ P'(X)&=3X^2-2X(i+j+k)+i-j+k,\quad Q'(X)=6X^2-4X(i+j)+2k. \end{align} Let $P_1(X)=X-k$, $Q_1(X)=2X$, $P_2(X)=(X-j)\cdot P_1(X)$, $Q_2(X)=(X-j)\cdot Q_1(X)$. Then $P(X)=(X-i)\cdot P_2(X)$ and $Q(X)=(X-i)\cdot Q_2(X)$. For every $x\in{\mathbb{S}}^3\setminus\{j\}$, using formula \eqref{product} we get \[|P_2(x)|=|x-j||(x-j)^{-1}x(x-j)-k|\le2|x-j|=|x-j||2x|=|Q_2(x)|. \] Since $P_2(j)=Q_2(j)=0$, the inequality holds also at $j$. From this we obtain, for each $x\in{\mathbb{S}}^3\setminus\{i\}$, \[|P(x)|=|x-i||P_2((x-i)^{-1}x(x-i))|\le |x-i||Q_2((x-i)^{-1}x(x-i))|=|Q(x)|. \] Since $P$ and $Q$ vanish at $i$, $|P(x)|\le|Q(x)|$ for every $x\in{\mathbb{S}}^3$. Let $y=\frac1{10}\left(1+9i+4j-\sqrt2\,k\right)\in{\mathbb{S}}^3$. An easy computation gives \[|P'(y)|^2=\frac7{25}(5+\sqrt2)\simeq 1.80, \quad |Q'(y)|^2=\frac4{25}(10-3\sqrt2)\simeq 0.92. \] \end{proof} \section{Bernstein inequality and zonal harmonics} \label{sec:zonal} Since the restriction of a complex variable power $z^m$ to the unit circle is equal to $\cos(m\theta)+i\sin(m\theta)$, the classic Bernstein inequality for complex polynomials can be restated in terms of trigonometric polynomials. In this section we show that a similar interpretation is possible in four dimensions, by means of an Almansi type decomposition of quaternionic polynomials and its relation with zonal harmonics in ${\mathbb{R}}^4$. Quaternionic polynomials, as any slice regular function, are biharmonic with respect to the standard Laplacian of ${\mathbb{R}}^4$ \cite[Theorem 6.3]{Harmonicity}. In view of Almansi's Theorem (see e.g.\ \cite[Proposition 1.3]{Aronszajn}), the four real components of such polynomials have a decomposition in terms of a pair of harmonic functions.
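The numerical values in the proof of Proposition \ref{counterexample} can be double-checked directly. The sketch below (Python, quaternions as 4-tuples with the Hamilton product; all helper names are ours) evaluates $P'$ and $Q'$ at the point $y$ of the proof:

```python
import math

# Numerical check of the counterexample values (helper names ours).
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def norm2(p):
    return sum(t * t for t in p)

def combo(*terms):
    # Real-linear combination of quaternions: sum of c * q over (c, q) pairs.
    return tuple(sum(c * q[n] for c, q in terms) for n in range(4))

s2 = math.sqrt(2)
y = (0.1, 0.9, 0.4, -s2 / 10)       # the point y of the proof; |y| = 1
y2 = qmul(y, y)

# P'(y) = 3y^2 - 2y(i+j+k) + (i-j+k),  Q'(y) = 6y^2 - 4y(i+j) + 2k
Pp = combo((3, y2), (-2, qmul(y, (0, 1, 1, 1))), (1, (0, 1, -1, 1)))
Qp = combo((6, y2), (-4, qmul(y, (0, 1, 1, 0))), (2, (0, 0, 0, 1)))

assert abs(norm2(y) - 1) < 1e-12
assert abs(norm2(Pp) - 7 / 25 * (5 + s2)) < 1e-9
assert abs(norm2(Qp) - 4 / 25 * (10 - 3 * s2)) < 1e-9
assert norm2(Pp) > norm2(Qp)        # |P'(y)| > |Q'(y)| indeed
```

Real scalars are central in ${\mathbb{H}}$, so writing the coefficient $-2(i+j+k)$ of $X$ on the right as $-2\,(y\cdot(i+j+k))$ is legitimate.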
The results of \cite{Harmonicity} can be applied to obtain a refined decomposition of the polynomial in terms of the quaternionic variable. Let $\mathcal Z_{k}(x,a)$ denote the four-dimen\-sion\-al \emph{(solid) zonal harmonic} of degree $k$ with pole $a\in{\mathbb{S}}^3$ (see e.g.~\cite[Ch.5]{HFT}). The symmetry properties of zonal harmonics imply that $\mathcal Z_{k-1}(x,a)=\mathcal Z_{k-1}(x\overline a,1)$ for every $x\in{\mathbb{H}}$ and any $a\in{\mathbb{S}}^3$. Moreover it holds \cite[Corollary 6.7(d)]{Harmonicity} \begin{equation}\label{eq:powers} x^k=\widetilde{\mathcal Z}_k(x)-\overline x\, \widetilde{\mathcal Z}_{k-1}(x)\text{\quad for every $x\in{\mathbb{H}}$ and $k\in{\mathbb{N}}$}, \end{equation} where $\widetilde{\mathcal Z}_k(x)$ is the real-valued zonal harmonic defined by $\widetilde{\mathcal Z}_k(x):=\frac1{k+1}{\mathcal Z}_k(x,1)$ for any $k\ge0$ and by $\widetilde{\mathcal Z}_{-1}:=0$. In the following we will consider polynomials in the four real variables $x_0,x_1,x_2,x_3$ of the form $A(x)=\sum_{k=0}^d\widetilde{\mathcal Z}_k(x) a_k$, with quaternionic coefficients $a_k\in{\mathbb{H}}$. They will be called \emph{zonal harmonic polynomials with pole 1}. All these polynomials have an axial symmetry with respect to the real axis: for every orthogonal transformation $T$ of ${\mathbb{H}}\simeq{\mathbb{R}}^4$ fixing 1, it holds $A\circ T=A$. \begin{proposition}[{\bfseries Almansi type decomposition}]\label{teo:Almansi} Let $P\in{\mathbb{H}}[X]$ be a quaternionic polynomial of degree $d$. There exist two zonal harmonic polynomials $A$, $B$ with pole $1$, of degrees $d$ and $d-1$ respectively, such that \begin{equation} P(x)=A(x)-\overline x B(x)\text{\quad for every $x\in{\mathbb{H}}$.}\label{eq:Almansi} \end{equation} The restrictions of $A$ and $B$ to the unit sphere ${\mathbb{S}}^3$ are spherical harmonics depending only on $x_0=\re(x)$. \end{proposition} \begin{proof} Let $P(X)=\sum_{k=0}^dX^kc_k$.
Formula \eqref{eq:Almansi} follows immediately from \eqref{eq:powers} setting \[ A(x)=\sum_{k=0}^d\widetilde{\mathcal Z}_k(x)c_k\text{\quad and\quad}B(x)=\sum_{k=0}^{d-1}\widetilde{\mathcal Z}_{k}(x)c_{k+1}. \] The restriction of $\widetilde{\mathcal Z}_k(x)$ to the unit sphere ${\mathbb{S}}^3$ is equal to the Gegenbauer (or Chebyshev of the second kind) polynomial $C^{(1)}_{k}(x_0)$, where $x_0=\re(x)$ (see \cite[Corollary 6.7(e)]{Harmonicity}). This property implies immediately the last statement. \end{proof} \begin{remark} The zonal harmonics $A$ and $B$ of the previous decomposition can be obtained from $P$ through differentiation. Since $P(x)-P(\overline x)=A(x)-\overline xB(x)-A(\overline x)+xB(\overline x)=2\im(x)B(x)$, the function $B$ is the \emph{spherical derivative} of $P$, defined (see \cite{GhPe_AIM}) on ${\mathbb{H}}\setminus{\mathbb{R}}$ as $P'_s(x)=(2\im(x))^{-1}(P(x)-P(\overline x))$. In \cite{Harmonicity} it was proved that the spherical derivative of a slice regular function, in particular of a quaternionic polynomial, is indeed the result of a differential operation. Given the Cauchy-Riemann-Fueter operator \[\overline{\partial}_{\scriptscriptstyle CRF} =\dd{}{x_0}+i\dd{}{x_1}+j\dd{}{x_2}+k\dd{}{x_3},\] it holds $\overline{\partial}_{\scriptscriptstyle CRF} P=-2{P'_s}$. Therefore \begin{equation}\label{eq:AB} A(x)=P(x)-\frac12{\overline x}\,\overline{\partial}_{\scriptscriptstyle CRF} P(x),\quad B(x)=-\frac12\overline{\partial}_{\scriptscriptstyle CRF} P(x). \end{equation} Defining $A$ and $B$ by formulas \eqref{eq:AB} and using results from \cite{Harmonicity}, it can be easily seen that the Almansi type decomposition $f(x)=A(x)-\overline xB(x)$ holds true for every slice regular function $f$, with $A$ and $B$ harmonic and axially symmetric w.r.t.\ the real axis.
Observe that $B=f'_s$ is the spherical derivative of $f$ and $A=f^\circ_s+x_0f'_s$, where $f^\circ_s(x)=\frac12(f(x)+f(\overline x))$ is the \emph{spherical value} of $f$ (see \cite{GhPe_AIM}). \end{remark} Thanks to the previous decomposition, the quaternionic Bernstein inequality of Corollary \ref{cor:inequality} can be restated in terms of Gegenbauer polynomials $C^{(1)}_{k}(x_0)$. Let $d\in{\mathbb{N}}$. For any $(d+1)$-uple $\alpha=(a_0,\ldots,a_d)\in{\mathbb{H}}^{d+1}$, let $Q_\alpha:{\mathbb{S}}^3\to{\mathbb{H}}$ be defined by \[Q_\alpha(x):=\sum_{k=0}^d (C^{(1)}_{k}(x_0)-\overline x\,C^{(1)}_{k-1}(x_0))a_k\] for any $x=x_0+ix_1+jx_2+kx_3\in{\mathbb{S}}^3$ (where we set $C^{(1)}_{-1}:=0$). Being the restriction to ${\mathbb{S}}^3$ of the quaternionic polynomial $P(X)=\sum_{k=0}^dX^ka_k$, which has biharmonic real components on ${\mathbb{H}}$, $Q_\alpha$ is a quaternionic valued \emph{spherical biharmonic} of degree $d$ (see e.g.\ \cite{GrzebulaMichalik}). \begin{corollary} Let $\alpha=(a_0,\ldots,a_d)$ and $\alpha'=(a_1,2a_2,\ldots,ka_k,\ldots,da_d,0)\in{\mathbb{H}}^{d+1}$. Then it holds: \[ \text{if \quad}|Q_\alpha(x)|=\left|\sum_{k=0}^d \left(C^{(1)}_{k}(x_0)-\overline x\,C^{(1)}_{k-1}(x_0)\right)a_k\right|\le M\text{\quad for every $x\in{\mathbb{S}}^3$}, \] \[\text{then\quad} |Q_{\alpha'}(x)|=\left|\sum_{k=0}^{d-1} \left(C^{(1)}_{k}(x_0)-\overline x\,C^{(1)}_{k-1}(x_0)\right) (k+1)a_{k+1}\right|\le dM\text{\quad for every $x\in{\mathbb{S}}^3$}. \] \end{corollary} \begin{proof} Let $P(X)=\sum_{k=0}^dX^ka_k$. From formula \eqref{eq:powers} it follows that the restriction of $P'$ to the unit sphere is the spherical biharmonic $Q_{\alpha'}$. Corollary \ref{cor:inequality} allows us to conclude.
\end{proof} \begin{remark}\label{rem:max} Let $P\in{\mathbb{H}}[X]$ be a polynomial with Almansi type decomposition $P(x)=A(x)-\overline xB(x)$ and let $y=\alpha+J\beta\in{\mathbb{S}}^3$, $\alpha,\beta\in{\mathbb{R}},\beta>0$. Let $v=A(y)\overline{B(y)}$. It follows from general properties of slice functions \cite[Lemma 5.3]{DivisionAlgebras} that if $v\in{\mathbb{R}}$, then $|P|_{|{\mathbb{S}}_y}$ is constant, while if $v\notin{\mathbb{R}}$, then the maximum modulus of $P$ on the 2-sphere ${\mathbb{S}}_y\subset{\mathbb{S}}^3$ is attained at the point $\alpha+I\beta$, with $I=\im(v)/|\im(v)|$, while the minimum modulus is attained at $\alpha-I\beta$. In principle, this reduces the problem of maximizing or minimizing the modulus of $P$ on the unit sphere (or ball) to a one-dimensional problem. \end{remark} \begin{example} Consider the polynomial $P(X)=(X-i)\cdot(X-j)\cdot(X-k)$ of Proposition \ref{counterexample}. Since the first four zonal harmonics are \[\widetilde{\mathcal Z}_0(x)=1,\ \widetilde{\mathcal Z}_1(x)= 2 x_0,\ \widetilde{\mathcal Z}_2(x)=3 x_0^2 - x_1^2 - x_2^2 - x_3^2,\ \widetilde{\mathcal Z}_3(x)=4 x_0 (x_0^2 - x_1^2 - x_2^2 - x_3^2), \] the Almansi type decomposition of $P$ is $P(x)=A(x)-\overline xB(x)$, with \begin{align*} A(x)&=(1 + 4 x_0^3 - 4 x_0 x_1^2 - 4 x_0 x_2^2 - 4 x_0 x_3^2)+(i+j+k)(2 x_0 - 3 x_0^2 + x_1^2 + x_2^2 + x_3^2), \\ B(x)&=(3 x_0^2 - x_1^2 - x_2^2 - x_3^2)+i(1-2x_0)-j (1 + 2 x_0) +k (1 - 2 x_0) \end{align*} harmonic polynomials. Their restrictions to ${\mathbb{S}}^3$ are the spherical harmonics \begin{align*} A_{|{\mathbb{S}}^3}(x)&=(1 - 4 x_0 + 8 x_0^3)+i(1 + 2 x_0 - 4 x_0^2)+j(1 - 2 x_0 - 4 x_0^2)+k(1 + 2 x_0 - 4 x_0^2) , \\ B_{|{\mathbb{S}}^3}(x)&=(-1+ 4 x_0^2)+i(1 - 2 x_0)-j(1 + 2 x_0)+k(1 - 2 x_0).
\end{align*} Following the observation made in Remark \ref{rem:max}, since $\im(A(y)\overline{B(y)})=4((\alpha-1)i+\alpha k)$, where $\alpha=\re(y)$, $y\in{\mathbb{S}}^3$, one can find the 2-sphere ${\mathbb{S}}_y\subset{\mathbb{S}}^3$ where the maximum modulus of $P$ is attained. A direct computation gives $\re(y)=(1-\sqrt{19})/6\sim-0.56$ and the corresponding maximum value $\|P\|\sim4.70$, attained at the point $\tilde y=(1-\sqrt{19})/6-i(5+\sqrt{19})/12+k(1-\sqrt{19})/12$ of ${\mathbb{S}}^3$. \end{example} \begin{remark} Some of the results presented in this note can be generalized to the general setting of real alternative *-algebras, where polynomials can be defined and share many of the properties valid on the quaternions (see \cite{GhPe_AIM}). The polynomials of Proposition \ref{counterexample} can be defined whenever the algebra contains a Hamiltonian triple $i,j,k$, i.e.\ when the algebra contains a subalgebra isomorphic to ${\mathbb{H}}$ (see \cite[\S8.1]{Numbers}). This is true e.g.\ for the algebra of octonions and for the Clifford algebras with signature $(0,n)$, with $n\ge2$. In all such algebras we can repeat the previous proofs and obtain the analog of Theorem \ref{thm:Bernstein}, as well as of the Bernstein inequality (see also \cite{Xu} for this last result). \end{remark} \end{document}
\begin{document} \maketitle \begin{abstract} The triangle game introduced by Chv\'{a}tal and Erd\H{o}s (1978) is one of the most famous combinatorial games. For $n,q\in\mathbb{N}$, the $(n,q)$-triangle game is played by two players, called Maker and Breaker, on the complete graph $K_n$. Alternately Maker claims one edge and thereafter Breaker claims $q$ edges of the graph. Maker wins the game if he can claim all three edges of a triangle; otherwise Breaker wins. Chv\'{a}tal and Erd\H{o}s (1978) proved that for $q<\sqrt{2n+2}-5/2\approx 1.414\sqrt{n}$ Maker has a winning strategy, and for $q\geq 2\sqrt{n}$ Breaker has a winning strategy. Since then, the problem of finding the exact leading constant for the threshold bias of the triangle game has been one of the famous open problems in combinatorial game theory. In fact, the constant is not known for any graph with a cycle, and we do not even know whether such a constant exists. Balogh and Samotij (2011) slightly improved the Chv\'{a}tal-Erd\H{o}s constant for Breaker's winning strategy from $2$ to $1.935$ with a randomized approach. Since then no progress has been made. In this work, we present a new deterministic strategy for Breaker's win whenever $n$ is sufficiently large and $q\geq\sqrt{(8/3+o(1))n}\approx 1.633\sqrt{n}$, significantly reducing the gap towards the lower bound. In previous strategies Breaker chooses his edges such that one node is part of the last edge chosen by Maker, whereas the remaining node is chosen more or less arbitrarily. In contrast, we introduce a suitable potential function on the set of nodes. This allows Breaker to pick edges that connect the most `dangerous' nodes. The total potential of the game may still increase, even for several turns, but ultimately Breaker's strategy prevents the total potential of the game from exceeding a critical level and leads to Breaker's win.
\end{abstract} \pagebreak \section{Introduction} For $n,q\in\mathbb{N}$ the $(n,q)$-triangle game is played on the complete graph $K_n$. In every turn Maker claims an unclaimed edge of $K_n$, followed by Breaker claiming $q$ edges. The game ends when all edges are claimed either by Maker or Breaker. If Maker manages to build a triangle, he wins the game; otherwise Breaker wins. This game is one of the most prominent examples of \emph{\mak-\bre\ game s}, in which Maker tries to build a certain structure while Breaker tries to prevent this. These are games of perfect information without chance moves, so either Maker or Breaker has a winning strategy, in which case we say the game is Maker's win or Breaker's win, respectively. For more information on \mak-\bre\ game s we refer to the paper by Krivelevich~\cite{Krivelevich14}. \begin{figure} \caption{The $(7,2)$-triangle game is a Maker's win. Maker-edges are red, Breaker-edges blue.} \label{fig:degree} \end{figure} \subsection{Previous work} \mak-\bre\ game s have been extensively studied by Beck~\cite{Beck81,Beck82,Beck85}, concerning, e.g., games in which Maker tries to build a Hamiltonian cycle, a spanning tree or a big star. Beck~\cite{Beck82} also presented very general sufficient conditions for Maker's win and Breaker's win. In his work he generalized the Erd\H os-Selfridge Theorem~\cite{Erdoes73}, which gives a winning criterion for Breaker in the case $q=1$. A direct application of these criteria to specific games such as the triangle game often does not lead to strong results. However, it turned out to be a powerful tool; e.g.\ Bednarska and \L uczak~\cite{Bednarska00} used it to prove the following fundamental result: For a fixed graph $G$ consider the $(G,n,q)$-game, in which Maker has to build a copy of $G$.
There exist constants $c_0,C_0$ such that the game is a Maker's win for $q\leq c_0n^{1/m(G)}$ and a Breaker's win for $q\geq C_0n^{1/m(G)}$, where $m(G):=\max\Big\{\frac{e(H)-1}{v(H)-2}:H\subseteq G,v(H)\geq 3\Big\}$. This result was recently generalized further to hypergraphs by Kusch et al.~\cite{Kusch17}. Bednarska and \L uczak also conjectured that $c_0$ and $C_0$ can be chosen arbitrarily close to each other. To date, this conjecture has been neither proved nor disproved for any fixed graph $G$ that contains a cycle.\par The special case of the triangle game was proposed by Chv\' atal and Erd\H os~\cite{Chvatal78}, who presented a winning strategy for Maker if $q<\sqrt{2n+2}-5/2$, and a winning strategy for Breaker if $q\geq 2\sqrt{n}$. Both strategies are rather simple: If $q<\sqrt{2n+2}-5/2$, Maker can win by fixing a node $v$ and then simply claiming all his edges incident to $v$. At some point Breaker will not be able to close all Maker paths of length~$2$, so Maker can complete such a path to build a triangle. If $q\geq 2\sqrt{n}$, Breaker can always close all paths of length~$2$ created by Maker and at the same time prevent Maker from building a star of size $q/2$. To achieve this he first closes all new Maker paths of length~$2$ and then claims arbitrary edges such that at the end of the turn he has claimed exactly $\sqrt{n}$ edges incident in $u$ and the remaining $\sqrt{n}$ edges incident in $v$, where $\{u,v\}$ is the edge recently claimed by Maker.\par Chv\' atal and Erd\H os also asked for the \emph{threshold bias} of this game, i.e., the value $q_0(n)$ such that the game is Maker's win for $q\leq q_0(n)$ and Breaker's win for $q>q_0(n)$. For the triangle game, the asymptotic order of $\Theta(\sqrt{n})$ follows directly from the two strategies given above and also occurs as a special case of the result by Bednarska and \L uczak, but the gap between $\sqrt{2n}$ and $2\sqrt{n}$ could not be narrowed for many years.
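As a concrete illustration of the Bednarska--\L uczak exponent, $m(G)$ can be computed by brute force over vertex subsets for small graphs. The sketch below is our own illustrative helper (using the convention $m(G)=\max\{(e(H)-1)/(v(H)-2)\}$); it recovers $m(K_3)=2$ and hence the $\Theta(\sqrt{n})$ threshold order for the triangle game.

```python
from itertools import combinations

def m2(edges):
    """Brute-force m(G) = max over subgraphs H with v(H) >= 3 of (e(H)-1)/(v(H)-2)."""
    verts = sorted({v for e in edges for v in e})
    best = 0.0
    for k in range(3, len(verts) + 1):
        for sub in combinations(verts, k):
            s = set(sub)
            # edges of G induced on the vertex set s
            e = sum(1 for (u, v) in edges if u in s and v in s)
            best = max(best, (e - 1) / (k - 2))
    return best

triangle = [(0, 1), (1, 2), (0, 2)]
assert m2(triangle) == 2.0  # threshold bias of order n^{1/2}
```

For a $4$-cycle the same helper gives $m2 = 3/2$, i.e.\ a threshold of order $n^{2/3}$.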
In 2011, Balogh and Samotij~\cite{Balogh11} presented a randomized Breaker strategy, improving the bound for Breaker's win to about $(2-1/24)\sqrt{n}\approx 1.958\sqrt{n}$. \subsection{Our Contribution} In our work we present a new deterministic strategy for Breaker that further improves the bound for Breaker's win to $q=\sqrt{(8/3+o(1))n}\approx 1.633\sqrt{n}$, assuming $n$ to be sufficiently large. The global idea of our strategy is as follows. Instead of claiming arbitrary edges incident in the nodes of the last edge claimed by Maker, as done in the strategy of Chv\' atal and Erd\H os, Breaker claims only edges that connect the `most dangerous' nodes, i.e., nodes that already have many incident Maker edges and rather few Breaker edges. Proceeding this way, Breaker needs fewer edges to prevent Maker from building a $q/2$-star. For the realization of this idea we use an (efficiently computable) \emph{potential function} to decide which edges are most dangerous and should be claimed next to prevent Maker from building any triangle \emph{or} big star. In contrast to Beck~\cite{Beck82}, instead of assigning a potential to every winning set, our potential function is defined directly on the set of nodes. However, the most significant difference to Beck and other previous potential-based approaches is that our potential function is not necessarily decreasing in every single turn. Some \emph{critical} turns may occur in which the potential increases, so the challenge is to bound the number and the impact of these critical turns. This new approach requires plenty of analytic work but turns out to be a more powerful technique than classic potential-based approaches and might also be of interest for other kinds of \mak-\bre\ game s. \section{Breaker's strategy} We start by introducing the potential function which forms the basis for Breaker's strategy.
During the game, denote by $M$ the Maker\ graph consisting of all edges claimed by Maker\ so far and let $B$ denote the corresponding Breaker\ graph. For $v\in V$ and $H\in\{M,B\}$ let $\textnormal{deg}_H(v)$ denote the degree of $v$ in $H$. For a turn $t$, $\textnormal{deg}_{H,t}(v)$ denotes the degree of $v$ in $H$ directly after turn $t$. \subsection{The potential function}\label{sec:potfunc} Let $\epsilon^*>0$ and $\beta=\frac{8}{3}+\epsilon^*$. In this section we consider the $(n,q)$-triangle game with $q=\sqrt{\beta n}$. As mentioned in the introduction, for $\beta\geq 4$ there exist known winning strategies for Breaker, so we will assume $\beta\leq 4$ if necessary. Fix $\delta\in(0,1-\frac{8}{3\beta})$. \begin{definition}\label{def:pot} For every $v\in V$ define the \emph{balance} of $v$ as \[\textnormal{bal}(v):=\frac{8(n-\textnormal{deg}_B(v))} {q^2(1-\delta)(3+\delta)-4\textnormal{deg}_M(v)(2q-\textnormal{deg}_M(v))}.\] Moreover we define $p_0$ as the balance of a node in the very beginning of the game, i.e.\ \[p_0:=\frac{8n}{q^2(1-\delta)(3+\delta)}=\frac{8}{\beta(1-\delta)(3+\delta)}.\] \end{definition} The balance of a node is a measure of the ratio of Maker- and Breaker-edges incident in this node: The more Maker-edges and the fewer Breaker-edges are incident in $v$, the bigger the balance value gets. A detailed interpretation of the balance value can be found in Section~\ref{sec:interpretation}. \par For the success of Breaker's strategy it is crucial that $p_0<1$ (e.g.\ for the choice of $\eta$ in Section~\ref{sec:crit_turns}). This is assured by the next remark. \begin{rem}\label{rem:p0} It holds that $\frac{8}{3\beta}<p_0<\frac{8}{3\beta(1-\delta)}<1$. \end{rem} \begin{proof} The second and third inequalities follow directly from $\delta\in(0,1-\frac{8}{3\beta})$. For the first inequality, note that $(1-\delta)(3+\delta)=3-2\delta-\delta^2<3$, so we get $p_0=\frac{8}{\beta(1-\delta)(3+\delta)}>\frac{8}{3\beta}$.
\end{proof} During the game, Breaker\ will not be able to keep all nodes at their starting balance. Some nodes will get more Breaker-edges than needed, others less. This \emph{deficit} of a node will be used to define its potential. \begin{definition}\label{def:pot2} Consider the game at an arbitrary point of time. For a node $v\in V$ let \mbox{$\textnormal{bal}deg(v)\in\mathbb{R}$} be the \emph{balanced Breaker-degree} of this node, i.e.\ the Breaker-degree that would be necessary so that $\textnormal{bal}(v)=p_0$. Formally we define \[\textnormal{bal}deg(v):= n-p_0\left(\frac{q^2(1-\delta)(3+\delta)}{8} -\textnormal{deg}_M(v)\left(q-\frac{\textnormal{deg}_M(v)}{2}\right)\right). \] The \emph{deficit} of $v$ is defined by \[d(v):=\textnormal{bal}deg(v)-\textnormal{deg}_B(v).\] Finally, let $\mu:=1+\frac{6\beta\ln(n)}{\delta q}$. Define the \emph{potential} of $v$ as \[\textnormal{pot}(v):=\begin{cases}0&\textnormal{if }\textnormal{deg}_M(v)+\textnormal{deg}_B(v)=n-1\\ \mu^{d(v)/q}&\textnormal{otherwise}\end{cases}\] and for an unclaimed edge $e=\{u,w\}$ define the \emph{potential} of $e$ as $\textnormal{pot}(e):=\textnormal{pot}(u)+\textnormal{pot}(w)$. For every turn $t$ we define $\textnormal{pot}_t(v)$ ($\textnormal{pot}_t(e)$, resp.) as the potential of $v$ ($e$, resp.) directly after turn $t$ and $\textnormal{pot}_0(v)$ as the potential of $v$ at the beginning of the game. Analogously we define $\textnormal{bal}deg_t(v)$ and $d_t(v)$. The \emph{total potential} of a turn $t$ is defined as $\textnormal{POT}_t:=\sum_{v\in V}\textnormal{pot}_t(v)$. The \emph{total starting potential} is defined as $\textnormal{POT}_0:=\sum_{v\in V}\textnormal{pot}_0(v)$. \end{definition} \begin{lemma}\label{lem:start_potential} The total starting potential satisfies $\textnormal{POT}_0=n$. \end{lemma} \begin{proof} Let $v\in V$ with $\textnormal{deg}_M(v)=\textnormal{deg}_B(v)=0$.
Then, \begin{align*} \textnormal{bal}deg(v) =n-p_0\left(\frac{q^2(1-\delta)(3+\delta)}{8}\right) =n-p_0\cdot n\cdot p_0^{-1}=0. \end{align*} This implies $\textnormal{pot}(v)=\mu^{d(v)/q}=\mu^{(\textnormal{bal}deg(v)-\textnormal{deg}_B(v))/q}=\mu^0=1$, so \[\textnormal{POT}_0=\sum_{v\in V}\textnormal{pot}_0(v)=\sum_{v\in V}1=n.\] \end{proof} Breaker's aim is to keep the total potential as low as possible. The next lemma ensures that if Breaker\ can keep the potential of every single node below $2n$, he can prevent Maker\ from raising the Maker-degree of a node above $q/2$. We will later show (Theorem~\ref{thm:smallpot}) that Breaker\ is even able to keep the \emph{total} potential of the game below $2n$. \begin{lemma}\label{lem:potentialvsdegree} If $n$ is sufficiently big, for every turn $t$ and every node $v\in V$ the following holds: \[0<\textnormal{pot}_t(v)\leq 2n\Rightarrow \textnormal{deg}_{M,t}(v)\neq\left\lceil q/2\right\rceil-1.\] \end{lemma} \begin{proof} Let $t$ be a turn and $v\in V$ with $\textnormal{pot}_t(v)>0$ and $\textnormal{deg}_{M,t}(v)=\left\lceil q/2\right\rceil-1$. We show that $\textnormal{pot}_t(v)>2n$. Because $\textnormal{pot}_t(v)\neq 0$, we have $\textnormal{pot}_t(v)=\mu^{d_t(v)/q}$. We claim (and later prove) that \begin{equation}\label{claim} d_t(v)\geq\frac{2\delta n}{3}.
\end{equation} This implies \begin{align*} \textnormal{pot}_t(v) &=\mu^{d_t(v)/q} \geq \mu^{2\delta n/3q} =\left(1+\frac{6\beta \ln(n)}{\delta q}\right)^{2\delta n/3q}\\ &=\left(1+\frac{6\beta \ln(n)}{\delta q}\right) ^{\left(\frac{\delta q}{6\beta \ln(n)}+1\right)\left(\frac{\delta q}{6\beta \ln(n)}+1\right)^{-1}\frac{2\delta n}{3q}} \geq e^{\alpha}, \end{align*} where \begin{align*} \alpha &=\left(\frac{\delta q}{6\beta\ln(n)}+1\right)^{-1}\frac{2\delta n}{3q} =\left(\frac{\delta q\mu}{6\beta\ln(n)}\right)^{-1}\frac{2\delta n}{3q} =\frac{4\beta n\ln(n)}{q^2\mu} >2\ln(n), \end{align*} where for the last inequality we used that $\mu<2$ if $n$ is big enough. Finally we get $\textnormal{pot}_t(v)\geq e^\alpha>n^2$ and for $n\geq 2$ this is at least $2n$. \par We still have to prove claim~\eqref{claim}. Recall that $d_t(v)=\textnormal{bal}deg_t(v)-\textnormal{deg}_{B,t}(v)$. We estimate $\textnormal{bal}deg_t(v)$ as \begin{align*} \textnormal{bal}deg_t(v) &=n-p_0\left(\frac{q^2(1-\delta)(3+\delta)}{8} -\textnormal{deg}_{M,t}(v)\left(q-\frac{\textnormal{deg}_{M,t}(v)}{2}\right)\right)\\ &\geq n-p_0\left(\frac{q^2(1-\delta)(3+\delta)}{8} -\left(\frac{q}{2}-1\right)\left(q-\frac{q}{4}\right)\right)\\ &=n+p_0\left(q^2\left(\frac{3-(1-\delta)(3+\delta)}{8}\right) -\frac{3q}{4}\right)\\ &= n+p_0\bigg(\frac{q^2\delta}{4}+q\bigg(\underbrace{\frac{\delta^2q}{8}-\frac{3}{4}}_{\geq 0 \textnormal{ if } n \textnormal{ suff. big}}\bigg)\bigg)\\ &\geq n+\frac{p_0\beta \delta n}{4} \geq n+\frac{2\delta n}{3}. \tag{Remark~\ref{rem:p0}} \end{align*} Therefore, \begin{align*} d_t(v) =\textnormal{bal}deg_t(v)-\textnormal{deg}_{B,t}(v) \geq n+\frac{2\delta n}{3}-n = \frac{2\delta n}{3}. \end{align*} \end{proof} \subsection{The detailed strategy}\label{sec:strategy} The basic idea is that Maker's task to build a triangle is closely related to the task of connecting big stars. Assume that at any time during the game Maker\ manages to build a path $(u,v,w)$ of length $2$. 
Then Breaker\ is forced to immediately \emph{close} this path by claiming the edge $\{u,w\}$ if he doesn't already own this edge. So every sensible Breaker-strategy will follow the simple rule of immediately closing all Maker-paths of length~$2$. Hence, the only chance for Maker\ to win the game is to construct more than $q$ paths of length~$2$ in a single turn, so that Breaker\ can't claim enough edges to close all of them immediately. By claiming an edge $\{u,v\}$, Maker\ is building $\textnormal{deg}_M(u)+\textnormal{deg}_M(v)$ new paths of length~$2$. This implies that if Breaker\ at each turn closes all Maker-paths of length~$2$ and simultaneously manages to prevent Maker\ from building a $q/2$-star, he will win the game. \begin{strategy}\label{strat:ours} Consider an arbitrary turn $t$. Let $e_M=\{u,v\}$ be the edge claimed by Maker\ in this turn. Breaker's moves for this turn are split into two parts. \par \textbf{Part 1: closing paths.} Breaker\ claims $\textnormal{deg}_{M,t-1}(v)$ edges incident in $u$ and $\textnormal{deg}_{M,t-1}(u)$ edges incident in $v$ to close all new Maker-paths of length~$2$. If such a path is already closed, he claims an arbitrary edge incident in $u$ ($v$, resp.) instead. If all edges incident in $u$ ($v$, resp.) are already claimed, we call the turn $t$ an \emph{isolation turn}. In this case, Breaker\ claims arbitrary unclaimed edges instead. We call the edges claimed during Part~1 \emph{closing edges}. $u$ ($v$, resp.) is called the \emph{head} of the closing edge, whereas the corresponding second node of the edge is called its \emph{tail}. \par \textbf{Part 2: free edges.} If after part 1 Breaker\ still has edges left to claim (we will later show that this is always the case), he iteratively claims an edge $e$ with $\textnormal{pot}(e)\geq\textnormal{pot}(e')$ for all unclaimed edges $e'$, until he claimed all of his $q$ edges. We call the edges claimed in Part~2 \emph{free edges}. 
The number of free edges claimed in turn $t$ is denoted by $f(t)$. Note that \begin{equation}\label{eq:freeedges} f(t)=q-\textnormal{deg}_{M,t-1}(u)-\textnormal{deg}_{M,t-1}(v). \end{equation} \end{strategy} Part~1 of the strategy is more or less obligatory, because a Maker-path of length~$2$ that is not closed by Breaker\ can be completed to a triangle in the next turn. Part~2 is more interesting. Our aim in the following sections is to prove Theorem~\ref{thm:final}, where we show that part~2 of the strategy prevents Maker\ from building a $q/2$-star, so that Breaker\ wins the game.\par \begin{observation}\label{obs:isolation} We can assume that the game contains no isolation turns. \end{observation} \begin{proof} Consider an arbitrary isolation turn $t$ in the game, i.e., a turn after which one of the nodes of the edge $e_M$ claimed in this turn by Maker\ has no unclaimed incident edges left. Right after the turn, every triangle $e_M$ belongs to is already blocked by Breaker, so the edge $e_M$ is of no use for Maker\ from this time on. Breaker\ could even pretend that the edge $e_M$ belongs to his own edges, so that in the turn $t$ Breaker\ claimed $q+1$ edges and Maker\ didn't claim any edge. Hence, a perfectly playing Maker\ will always try to avoid isolation turns. If he can't, he will certainly lose the game, since he can only claim useless edges until the end of the game. \end{proof} The following observation states that, as long as Breaker\ can keep the total potential below $2n$, he will have at least $2$ free edges in every turn. \begin{observation}\label{obs:freeedges} For every turn $t$ with $f(t)\leq 1$ there exists a turn $t'<t$ with $\textnormal{POT}_{t'}> 2n$. \end{observation} \begin{proof} Let $t$ be a turn with $f(t)\leq 1$ and let $\{u,v\}$ be the Maker-edge of this turn.
Because $f(t)=q-\textnormal{deg}_{M,t-1}(u)-\textnormal{deg}_{M,t-1}(v)$, we get $\textnormal{deg}_{M,t-1}(u)+\textnormal{deg}_{M,t-1}(v)\geq q-1$, so there exists $w\in\{u,v\}$ with $\textnormal{deg}_{M,t-1}(w)\geq \ceiling{\tfrac{q-1}{2}}\geq \ceiling{\tfrac{q}{2}}-1$. Hence, there exists a turn $t'\leq t-1$ with $\textnormal{deg}_{M,t'}(w)=\ceiling{\tfrac{q}{2}}-1$ and $\textnormal{pot}_{t'}(w)>0$. We apply Lemma~\ref{lem:potentialvsdegree} and get $\textnormal{pot}_{t'}(w)>2n$, so especially $\textnormal{POT}_{t'}>2n$. \end{proof} \subsection{Main results} In this subsection we prove that Strategy~\ref{strat:ours} works correctly and is a winning strategy. For both theorems in this subsection we assume that Breaker\ plays according to Strategy~\ref{strat:ours}. We further assume that $q=\sqrt{(\tfrac{8}{3}+\epsilon^*)n}$ for some $\epsilon^*>0$ as stated above and that $n$ is sufficiently large. For Breaker's strategy it is crucial that the potential of every node is kept below a certain level. This is ensured by the following theorem. \begin{theorem}\label{thm:smallpot} For every turn $s$ it holds $\textnormal{POT}_s< 2n$. \end{theorem} The proof of this theorem is the mathematical core of this paper and is given in the next section. The main result of our work is: \begin{theorem}\label{thm:final} At the end of the game there exists no node with Maker-degree of at least $q/2$ and Breaker\ wins the game. \end{theorem} \begin{proof} Assume that there exists a node $v$ with $\textnormal{deg}_M(v)\geq q/2$ at the end of the game. Then, $\textnormal{deg}_M(v)\geq\lceil q/2 \rceil$. Let $t$ denote the turn in which Maker\ claimed his \mbox{$\lceil q/2\rceil$-th} edge incident in $v$, so $\textnormal{deg}_{M,t-1}(v)=\lceil q/2\rceil -1$. Due to Theorem~\ref{thm:smallpot} we know that $\textnormal{pot}_{t-1}(v)\leq\textnormal{POT}_{t-1}<2n$. Note that after turn $t-1$ there are still unclaimed edges incident in $v$, so $\textnormal{pot}_{t-1}(v)>0$. 
We apply Lemma~\ref{lem:potentialvsdegree} and get $\textnormal{deg}_{M,t-1}(v)\neq \lceil q/2\rceil-1$, a contradiction.\par So with every edge $\{u,v\}$ that Maker\ chooses he creates $\textnormal{deg}_M(u)+\textnormal{deg}_M(v)<q$ new Maker-paths of length~$2$. Hence, Breaker\ always has enough edges to close all Maker-paths of length~$2$ and finally wins the game. \end{proof} \section{Analysis} \subsection{Outline of the proof} We proceed to prove Theorem~\ref{thm:smallpot}. As it depends on a series of lemmas, for the reader's convenience we first outline the argument informally. We distinguish two types of turns. A turn is called \emph{non-critical} if a certain fraction of the Breaker-edges in this turn suffices to compensate the total potential increase caused by Maker\ in this turn. Otherwise, we call it \emph{critical}. We start with an arbitrary critical turn $t_0$ in which the potential exceeds $n$. Lemma~\ref{lem:critical_turns} gives us a useful characterization of critical turns. This enables us to prove Theorem~\ref{thm:no_increase}, where we state that before a constant number of additional critical turns is played, the total potential will drop below $n$ again. Because a constant number of critical turns cannot increase the total potential considerably (Lemma~\ref{lem:crit_increase}), we can prove that the total potential of the game never exceeds $2n$. \subsection{Potential change in a single turn} To analyze the potential change of a single turn, we first present a few tools for estimating the potential change caused by single Maker- and Breaker-edges. The next lemma shows how the addition of a single Maker-edge changes the deficit of a node. \begin{lemma}\label{lem:deficit_single} Consider an arbitrary point of time in the game.
Let $u\in V$ and let ${\textnormal{bal}deg}'(u),\textnormal{deg}_M'(u)$ and $d'(u)$ be the balanced Breaker-degree, Maker-degree and deficit of $u$ after an additional edge incident in $u$ was claimed by Maker. Then, \[d'(u)-d(u)={\textnormal{bal}deg}'(u)-\textnormal{bal}deg(u)\leq p_0(q-\textnormal{deg}_M(u)).\] \end{lemma} \begin{proof} The equation follows from the fact that an additional Maker-edge does not change $\textnormal{deg}_B(u)$. Using that $\textnormal{deg}_M'(u)=\textnormal{deg}_M(u)+1$ we continue \begin{align*} &{\textnormal{bal}deg}'(u)-\textnormal{bal}deg(u)\\ &= p_0\textnormal{deg}_M'(u)\left(q-\frac{\textnormal{deg}_M'(u)}{2}\right) -p_0\textnormal{deg}_M(u)\left(q-\frac{\textnormal{deg}_M(u)}{2}\right)\\ &= p_0\left(\textnormal{deg}_M(u)\left(q-\frac{\textnormal{deg}_M(u)+1}{2}\right) +\left(q-\frac{\textnormal{deg}_M(u)+1}{2}\right)\right)\\ &-p_0\textnormal{deg}_M(u)\left(q-\frac{\textnormal{deg}_M(u)}{2}\right)\\ &=p_0(q-\textnormal{deg}_M(u)-1/2) \leq p_0(q-\textnormal{deg}_M(u)). \end{align*} \end{proof} \begin{lemma}\label{lem:pot_single} \begin{itemize} \item[(i)] A single edge $e_M$ claimed by Maker\ increases the potential of a node by at most a factor of $\mu$ and causes a total potential increase of at most $(\mu-1)\textnormal{pot}(e_M)$ (where $\textnormal{pot}(e_M)$ denotes the potential of $e_M$ when claimed by Maker). \item[(ii)] A single edge $e_B$ claimed by Breaker\ causes a total potential decrease of at least $(1-\mu^{-1/q})\textnormal{pot}(e_B)$ (where $\textnormal{pot}(e_B)$ denotes the potential of $e_B$ when claimed by Breaker). \end{itemize} \end{lemma} \begin{proof} (i). Let $e_M=\{u,v\}$. For $w\in V$ let $\textnormal{pot}(w)$ denote the potential of $w$ before Maker\ claimed $e_M$ and $\textnormal{pot}'(w)$ denote the potential of $w$ directly after Maker\ claimed $e_M$. If $e_M$ is not incident in $w$, the potential of $w$ remains unchanged. 
If $e_M$ is the last unclaimed edge incident in $w$, $\textnormal{pot}'(w)=0$ and we are done. Otherwise we can apply Lemma~\ref{lem:deficit_single} and Remark~\ref{rem:p0} and get \begin{equation*} \frac{\textnormal{pot}'(w)}{\textnormal{pot}(w)}=\mu^{(d'(w)-d(w))/q} \leq \mu^{p_0(q-\textnormal{deg}_M(w))/q}\leq \mu. \end{equation*} Because $e_M$ only changes the potential of $u$ and $v$, the total potential increase is $\textnormal{pot}'(v)-\textnormal{pot}(v)+\textnormal{pot}'(u)-\textnormal{pot}(u)\leq (\mu-1)\textnormal{pot}(e_M)$.\par (ii). Let $e_B=\{u,v\}$. Because $e_B$ only changes the potential of $u$ and $v$, the total potential decrease caused by $e_B$ is $\textnormal{pot}(v)-\textnormal{pot}'(v)+\textnormal{pot}(u)-\textnormal{pot}'(u)$, where $\textnormal{pot}(w)$ denotes the potential of $w$ before Breaker\ claimed $e_B$ and $\textnormal{pot}'(w)$ denote the potential of $w$ directly after Breaker\ claimed $e_B$. We show that \[\textnormal{pot}(v)-\textnormal{pot}'(v)\geq(1-\mu^{-1/q})\textnormal{pot}(v).\] Because the same holds for $u$, the claim~(ii) follows. If $e_B$ is the last unclaimed edge in $v$, $\textnormal{pot}'(v)=0$. Otherwise, \begin{equation*} \frac{\textnormal{pot}'(v)}{\textnormal{pot}(v)}=\mu^{(d'(v)-d(v))/q} =\mu^{-1/q}, \end{equation*} where the last equation follows from the fact that a Breaker-edge does not change $\textnormal{bal}deg(v)$ and increases $\textnormal{deg}_B(v)$ by $1$. \end{proof} Every turn $t$ starts with a Maker\ move, i.e.\ an edge $\{u,v\}$ being claimed by Maker, followed by $q$ Breaker\ moves. While the Maker\ move causes a potential increase, Breaker's moves cause a decrease. For every node $w\in V$, we denote its potential increase by $\inc{t}(w)$ and its potential decrease by $\dec{t}(w)$. Note that every claimed edge only changes the potential of its two incident nodes.
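Before decomposing the decrease further, the bookkeeping of Definitions~\ref{def:pot} and~\ref{def:pot2} can be made concrete in a few lines. The following sketch is purely illustrative (the values of $n$, $\epsilon^*$ and $\delta$ are our own sample choices); it evaluates $\textnormal{bal}deg$, the deficit and the potential, and confirms Remark~\ref{rem:p0} and Lemma~\ref{lem:start_potential} numerically.

```python
import math

def make_potential(n, eps_star=0.1):
    """Potential of Definition def:pot2 for sample parameters (our choices)."""
    beta = 8 / 3 + eps_star
    q = math.sqrt(beta * n)
    delta = 0.5 * (1 - 8 / (3 * beta))        # any value in (0, 1 - 8/(3*beta))
    p0 = 8 / (beta * (1 - delta) * (3 + delta))
    mu = 1 + 6 * beta * math.log(n) / (delta * q)

    def baldeg(deg_m):
        return n - p0 * (q * q * (1 - delta) * (3 + delta) / 8
                         - deg_m * (q - deg_m / 2))

    def pot(deg_m, deg_b):
        if deg_m + deg_b >= n - 1:            # all incident edges claimed
            return 0.0
        return mu ** ((baldeg(deg_m) - deg_b) / q)   # mu^(deficit/q)

    return p0, mu, pot

p0, mu, pot = make_potential(n=10**6)
assert 8 / (3 * (8 / 3 + 0.1)) < p0 < 1       # Remark rem:p0
assert abs(pot(0, 0) - 1.0) < 1e-6            # fresh node has potential 1, so POT_0 = n
```

A Maker-edge raises the potential of its endpoints (`pot(1, 0) > 1`), while Breaker-edges lower it (`pot(0, 5) < 1`), matching the intended reading of the deficit.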
When following Breaker's strategy, there are four possible ways of potential decrease for the node $w$: decrease caused by free edges, denoted by $\decfree{t}(w)$, and decrease caused by closing edges, either $w$ being their head, denoted by $\decheads{t}(w)$, or their tail, denoted by $\dectails{t}(w)$. In the special case in which Maker\ or Breaker\ claims the last unclaimed edge incident in $w$, the potential of $w$ is set to $0$, which causes an additional potential decrease. For technical reasons, this additional decrease is considered separately and denoted by $\deczero{t}(w)$. If for example Breaker\ claims a free edge that is the last unclaimed edge incident in $w$, this edge contributes both to $\decfree{t}(w)$ and $\deczero{t}(w)$. For the contribution to $\decfree{t}(w)$ we only compute the potential change caused by the change of the balance value and for the contribution to $\deczero{t}(w)$ we take the real potential decrease caused by the edge and subtract the computed contribution to $\decfree{t}(w)$. Moreover, we further split $\decheads{t}(w)$ into two parts $\decheads{t}(w)=\utt{t}(w)+\ott{t}(w)$, where \[\utt{t}(w):=\min\{\inc{t}(w),\decheads{t}(w)\} \ \textnormal{ and }\ \ott{t}(w):=\max\{\decheads{t}(w)-\inc{t}(w),0\}.\] If Maker\ claims an edge that connects two nodes with a very high Maker-degree, it might happen that $\decheads{t}(w)>\inc{t}(w)$ for one or both of the newly connected nodes. Otherwise, $\ott{t}(w)=0$ and $\utt{t}(w)=\decheads{t}(w)$.\par If for one of these values we omit the argument, we always mean the total potential increase (decrease) added up over all nodes. For example, $\inc{t}:=\sum_{v\in V}\inc{t}(v)$. For every turn $t$ we have \[\textnormal{POT}_t-\textnormal{POT}_{t-1} =\inc{t}-\dec{t} =\inc{t}-(\decfree{t}+\utt{t}+\ott{t}+\dectails{t}+\deczero{t}).\] \begin{lemma}\label{lem:first_increase} Let $t$ be an arbitrary turn. Let $e_M$ be the Maker-edge of this turn.
Then, \begin{itemize} \item[(i)] for every $w\in V$ it holds $\inc{t}(w)-\utt{t}(w)\leq(\mu^{p_0f(t)/q}-1)\textnormal{pot}_{t-1}(w)$. \item[(ii)]$\inc{t}-\utt{t}\leq(\mu^{p_0f(t)/q}-1)\textnormal{pot}_{t-1}(e_M)$. \end{itemize} \end{lemma} \begin{proof} $(i)$. Let $e_M=\{u,v\}$. First note that if $\utt{t}(w)\neq\decheads{t}(w)$, it follows that $\utt{t}(w)=\inc{t}(w)$, so there is nothing more to show. Otherwise, the term $\inc{t}(w)-\utt{t}(w)$ describes the change of the potential of $w$ from the beginning of the turn $t$ to the end of part $1$ of Breaker's moves in the same turn, where we ignore the changes caused by tails of closing edges. For $w\notin\{u,v\}$ this is $0$ and we are done. So let $w\in\{u,v\}$ and let $\first{\textnormal{deg}}_{M,t}(w),\first{\textnormal{deg}}_{B,t}(w),\first{\textnormal{bal}deg}_{t}(w)$ and $\first{d}_{t}(w),\first{\textnormal{pot}}_{t}(w)$ be the Maker-degree, Breaker-degree, balanced degree, deficit and potential of $w$ after part $1$ of Breaker's moves (i.e.\ after all closing edges have been claimed). To compute the change of the potential of $w$, we start by computing the change of its deficit. We have \begin{align*} \first{d}_t(w)-d_{t-1}(w) &=\first{\textnormal{bal}deg}_t(w)-\first{\textnormal{deg}}_{B,t}(w) -\textnormal{bal}deg_{t-1}(w)+\textnormal{deg}_{B,t-1}(w)\\ &=(\first{\textnormal{bal}deg}_t(w)-\textnormal{bal}deg_{t-1}(w)) -(\first{\textnormal{deg}}_{B,t-1}(w)-\textnormal{deg}_{B,t}(w)). \end{align*} The first term describes the change of $\textnormal{bal}deg(w)$. Since Breaker-edges do not influence this value, this change is caused solely by $e_M$. Due to Lemma~\ref{lem:deficit_single}, this is at most $p_0(q-\textnormal{deg}_{M,t-1}(w))$. The second term simply describes the number of closing edges claimed incident to $w$. Due to Observation~\ref{obs:isolation}, $t$ is not an isolation turn, so in case of $w=u$, this is $\textnormal{deg}_{M,t-1}(v)$ and in case of $w=v$ this is $\textnormal{deg}_{M,t-1}(u)$.
Together with~\eqref{eq:freeedges} and Remark~\ref{rem:p0} this gives \begin{equation}\label{equ} \first{d}_t(u)-d_{t-1}(u)\leq p_0(q-\textnormal{deg}_{M,t-1}(u))-\textnormal{deg}_{M,t-1}(v) \leq p_0f(t) \end{equation} and \begin{equation}\label{eqv} \first{d}_t(v)-d_{t-1}(v)\leq p_0(q-\textnormal{deg}_{M,t-1}(v))-\textnormal{deg}_{M,t-1}(u) \leq p_0f(t). \end{equation} This implies \begin{align*} \inc{t}(w)-\utt{t}(w) &=\first{\textnormal{pot}}_{t}(w)-\textnormal{pot}_{t-1}(w) =\mu^{\first{d}_t(w)/q}-\textnormal{pot}_{t-1}(w)\\ &=(\mu^{(\first{d}_t(w)-d_{t-1}(w))/q}-1)\textnormal{pot}_{t-1}(w)\\ &\overset{\mathclap{\eqref{equ},\eqref{eqv}}}{\leq} \hspace{4mm}(\mu^{p_0f(t)/q}-1)\textnormal{pot}_{t-1}(w). \end{align*} $(ii)$. Note that $\inc{t}=\inc{t}(u)+\inc{t}(v)$ and $\utt{t}=\utt{t}(u)+\utt{t}(v)$, so we have \begin{align*} \inc{t}-\utt{t} &=\inc{t}(u)+\inc{t}(v)-(\utt{t}(u)+\utt{t}(v))\\ &=\inc{t}(u)-\utt{t}(u)+\inc{t}(v)-\utt{t}(v)\\ &\overset{(i)}{\leq} (\mu^{p_0f(t)/q}-1)\textnormal{pot}_{t-1}(u)+(\mu^{p_0f(t)/q}-1)\textnormal{pot}_{t-1}(v)\\ &=(\mu^{p_0f(t)/q}-1)\textnormal{pot}_{t-1}(e_M). \end{align*} \end{proof} \subsection{Critical turns}\label{sec:crit_turns} Since $\mu\overset{n\rightarrow\infty}\longrightarrow 1$, with Remark~\ref{rem:p0} and $n$ big enough we get $\mu p_0<1$. Fix $\eta\in(0,1-\mu p_0)$ and define the following parts of potential change. \begin{definition}\label{def:crit} For every turn $t$ let \[\critdiff{t}:=\inc{t}-\utt{t}-(1-\eta)\decfree{t}\] and \[\restdiff{t}:=\ott{t}+\dectails{t}+\eta\decfree{t}+\deczero{t}.\] We call $t$ \emph{critical}, if $\critdiff{t}> 0$ and \emph{non-critical} otherwise. \end{definition} Note that $\textnormal{POT}_t-\textnormal{POT}_{t-1}=\critdiff{t}-\restdiff{t}$. Since $\restdiff{t}\geq 0$, every turn $t$ with $\textnormal{POT}_t>\textnormal{POT}_{t-1}$ is critical.
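The choice of $\eta$ above rests on $\mu p_0<1$ for large $n$. A small numeric sketch makes this concrete (the values of $\epsilon^*$ and $\delta$ are our own sample choices, and the crossover point found below is illustrative, not optimized):

```python
import math

def mu_p0(n, eps_star=0.1):
    """mu * p0 for a sample eps_star and delta = (1 - 8/(3*beta))/2."""
    beta = 8 / 3 + eps_star
    q = math.sqrt(beta * n)
    delta = 0.5 * (1 - 8 / (3 * beta))
    p0 = 8 / (beta * (1 - delta) * (3 + delta))
    mu = 1 + 6 * beta * math.log(n) / (delta * q)
    return mu * p0

# find (by doubling) an n with mu * p0 < 1; eta can then be fixed in (0, 1 - mu*p0)
n = 2
while mu_p0(n) >= 1:
    n *= 2
assert mu_p0(n) < 1 and mu_p0(4 * n) < mu_p0(n)
```

Since $\mu-1$ is of order $\ln(n)/\sqrt{n}$, the product $\mu p_0$ decreases towards $p_0<1$, so the loop terminates for every admissible $\delta$.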
\begin{lemma}\label{lem:tech1} For all $x\in\mathbb{R}$ with $x\geq 1$ it holds that $x(1-\mu^{-1/q})\geq 1-\mu^{-x/q}$. \end{lemma} \begin{proof} We define $g(x):=x(1-\mu^{-1/q})$ and $h(x):=1-\mu^{-x/q}$, so we have to show $g(x)\geq h(x)$ for all $x\geq 1$. First note that $g(1)=h(1)$, so it suffices to show that $g'(x)\geq h'(x)$ for all $x\geq 1$. We have $g'(x)=1-\mu^{-1/q}$ and $h'(x)=\mu^{-x/q}\frac{\ln(\mu)}{q}$. Because for all $x>0$ we have $h''(x)=-\mu^{-x/q}\left(\frac{\ln(\mu)}{q}\right)^2<0=g''(x)$, it suffices to show that $g'(1)\geq h'(1)$. To see this, we use the fact that $e^y-1\geq y$ for all $y\geq 0$, so in particular $\mu^{1/q}-1\geq\frac{\ln(\mu)}{q}$. If we multiply both sides by $\mu^{-1/q}$, we get \[1-\mu^{-1/q}\geq \mu^{-1/q}\frac{\ln(\mu)}{q}.\] Because the left-hand side is $g'(1)$ and the right-hand side is $h'(1)$, we are done. \end{proof} The following lemma provides an important characterization of critical turns by an upper bound for the potential of all edges still unclaimed after the turn. \begin{lemma}\label{lem:critical_turns} Let $t$ be a critical turn with $f(t)\geq 2$ and let $e_M$ be the edge chosen by Maker\ in this turn. For every edge $e$ that is still unclaimed after $t$ it holds that \[\textnormal{pot}_t(e)< \frac{\mu p_0}{(1-\eta)}\textnormal{pot}_{t-1}(e_M).\] \end{lemma} \begin{proof} Let $e_M=\{u,v\}$. By Lemma~\ref{lem:first_increase}~(ii) we have \begin{align*} \inc{t}-\utt{t} &\leq (\mu^{p_0f(t)/q}-1)\textnormal{pot}_{t-1}(e_M)\\ &=\mu^{p_0f(t)/q}(1-\mu^{-p_0f(t)/q})\textnormal{pot}_{t-1}(e_M)\\ &\leq\mu(1-\mu^{-p_0f(t)/q})\textnormal{pot}_{t-1}(e_M). \end{align*} We apply Lemma~\ref{lem:tech1} with $x:=p_0f(t)$ (note that due to Remark~\ref{rem:p0} we have $x>\tfrac{8}{3\beta}f(t)>\tfrac{8}{12}f(t)\geq\tfrac{16}{12}>1$) and get \begin{equation*} \inc{t}-\utt{t}\leq \mu p_0f(t)(1-\mu^{-1/q})\textnormal{pot}_{t-1}(e_M).
\end{equation*} Because $t$ is a critical turn, we get \begin{align*} 0<\diff{t} &=\inc{t}-\utt{t}-(1-\eta)\decfree{t}\\ &\leq\mu p_0f(t)(1-\mu^{-1/q})\textnormal{pot}_{t-1}(e_M)-(1-\eta)\decfree{t}, \end{align*} implying \begin{equation}\label{eq:critturn} (1-\eta)\decfree{t}< \mu p_0f(t)(1-\mu^{-1/q})\textnormal{pot}_{t-1}(e_M). \end{equation} Now let $e$ be an edge that is still unclaimed after turn~$t$. Then every free edge claimed by Breaker\ in turn~$t$ has potential at least $\textnormal{pot}_{t}(e)$, because Breaker\ iteratively chooses the edge of maximum potential and every edge claimed by Breaker\ only decreases potentials. Due to Lemma~\ref{lem:pot_single}~(ii) every free edge causes a total potential decrease of at least $\textnormal{pot}_t(e)(1-\mu^{-1/q})$ and hence we get \[\decfree{t}\geq f(t)\textnormal{pot}_{t}(e)(1-\mu^{-1/q}).\] Together with~\eqref{eq:critturn} this implies $\textnormal{pot}_t(e)< \frac{\mu p_0}{(1-\eta)}\textnormal{pot}_{t-1}(e_M)$. \end{proof} \subsection{Increase of total potential}\label{sec:increase} With our strategy we cannot guarantee that $\textnormal{POT}_t\leq\textnormal{POT}_{t-1}$ for all turns $t$. But we will show that each turn $t_0$ at which the potential exceeds $n$ is followed closely by a turn at which the total potential is at most as big as it was before $t_0$. So in the long run we obtain a decrease of the total potential, which will ensure Breaker's win. \par Fix constant parameters $\gamma\in(0,1)$ and $\epsilon\in(0,\tfrac{1}{2})$ with \begin{equation}\label{eq:choiceeps} \frac{1-\eta}{(1+\epsilon)\mu p_0}>1. \end{equation} Recall that this is possible because $\eta<1-\mu p_0$ by the choice of $\eta$. Define \[c:=\left\lceil\frac{1-\log(1-\gamma)}{\log(1-\eta)-\log(1+\epsilon)-\log(\mu p_0)}\right\rceil\] and note that $c>0$ due to~\eqref{eq:choiceeps}. Although $c$ depends on $n$, it is bounded by a constant because $1<\mu<2$ for $n$ sufficiently big.
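The choice of $c$ can be unwound as follows (with $\log$ denoting the natural logarithm): setting $r:=\frac{(1+\epsilon)\mu p_0}{1-\eta}$, condition~\eqref{eq:choiceeps} gives $r<1$, and the definition of $c$ states that
\[c\log\left(\frac{1}{r}\right)\geq 1-\log(1-\gamma)\geq \log(2)-\log(1-\gamma)=\log\left(\frac{2}{1-\gamma}\right),\]
so that
\[\left(\frac{(1+\epsilon)\mu p_0}{1-\eta}\right)^{c}2\leq 1-\gamma.\]
This is the only property of $c$ that is used in the argument (in the proof of Lemma~\ref{lem:t3}).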
Let $t_0$ be a turn with $\textnormal{POT}_{t_0}>n$, $\textnormal{POT}_{t_0-1}\leq n$ and $\textnormal{POT}_t<2n$ for all $t<t_0$. Then $t_0$ is a critical turn and due to Observation~\ref{obs:freeedges} it holds that $f(t_0)\geq 2$. Let $e_0=\{u,v\}$ be the edge claimed by Maker\ in this turn and w.l.o.g.\ let $\textnormal{pot}_{t_0-1}(u)\geq\textnormal{pot}_{t_0-1}(v)$. We consider three points of time: \begin{itemize} \item Let $t_1$ be the first turn after $t_0-1$ with $\textnormal{pot}_{t_1}(u)\leq(1-\gamma)\textnormal{pot}_{t_0-1}(u)$. \item Let $t_2$ be the first turn after $t_0$ with $\textnormal{pot}_{t_2}(w)\geq (1+\epsilon)\textnormal{pot}_{s}(w)$ for some $w\in V$ and some turn $s$ with $t_0\leq s<t_2$. \item Let $t_3$ be the $c$-th critical turn after $t_0-1$. \end{itemize} If the game ends before the turn $t_i$ is reached, let $t_i:=\infty$. We set ${t^*}:=\min(t_1,t_2,t_3)$ (note that ${t^*}=\infty$ is possible) and aim to prove the following theorem. \begin{theorem}\label{thm:no_increase} Let $n$ be sufficiently big. If the game has not ended before turn~${t^*}$, then $\textnormal{POT}_{t^*}\leq\textnormal{POT}_{t_0-1}$. \end{theorem} Since the proof is quite involved, it is split into several parts. We start with an observation stating that between the turns $t_0$ and $t_2$ the total potential does not exceed $2n$. \begin{observation}\label{obs:auxobs} If $n$ is sufficiently large, for every turn $t$ with $t_0\leq t<t_2$ it holds that $\textnormal{POT}_t<2n$. \end{observation} \begin{proof} Because $t<t_2$, for every $v\in V$ it holds that $\textnormal{pot}_t(v)\leq(1+\epsilon)\textnormal{pot}_{t_0}(v)$ by definition of $t_2$.
This implies \[\textnormal{POT}_t=\sum_{v\in V}\textnormal{pot}_t(v)\leq\sum_{v\in V}(1+\epsilon)\textnormal{pot}_{t_0}(v) =(1+\epsilon)\textnormal{POT}_{t_0}.\] By Lemma~\ref{lem:first_increase}~(ii) we have \begin{align*} \textnormal{POT}_{t_0} &=\textnormal{POT}_{t_0}-\textnormal{POT}_{t_0-1}+\textnormal{POT}_{t_0-1} \leq \inc{t_0}-\utt{t_0}+\textnormal{POT}_{t_0-1}\\ &\leq \mu^{p_0f(t_0)/q}\textnormal{POT}_{t_0-1} \leq \mu\textnormal{POT}_{t_0-1}, \end{align*} so finally \[\textnormal{POT}_t\leq(1+\epsilon)\textnormal{POT}_{t_0}\leq\mu(1+\epsilon)\textnormal{POT}_{t_0-1}\leq\mu(1+\epsilon)n<\tfrac{3}{2}\mu n.\] For sufficiently large $n$ we have $\mu<\tfrac{4}{3}$ and the proof is complete. \end{proof} In the following we assume that the game has not ended before turn ${t^*}$ is reached. In the next lemma we further refine the characterization of critical turns from Lemma~\ref{lem:critical_turns}. We only consider turns between $t_0$ and $t_2$ and prove that the number of critical turns in this interval affects the maximum possible potential of unclaimed edges exponentially. \begin{lemma}\label{lem:crit_stacking} Let $s$ be a turn with $t_0\leq s\leq {t^*}$ and $s< t_2$. Let $\textnormal{crit}(s)\in[c]$ be the number of critical turns between $t_0$ and $s$ (including $t_0$ and $s$). Then, for every edge $e$ unclaimed after turn $s$ it holds that \[\textnormal{pot}_s(e)<\left(\frac{(1+\epsilon)\mu p_0}{(1-\eta)}\right)^{\textnormal{crit}(s)}2\textnormal{pot}_{t_0-1}(u).\] \end{lemma} \begin{proof} We proceed by induction on $\textnormal{crit}(s)$. \par Let $\textnormal{crit}(s)=1$. Recall that $e_0=\{u,v\}$ is the edge claimed by Maker\ in turn $t_0$ and that $\textnormal{pot}_{t_0-1}(u)\geq\textnormal{pot}_{t_0-1}(v)$. Let $e=\{x,y\}$ be an edge unclaimed after turn $s$.
Because $s<t_2$, we know that \[\textnormal{pot}_s(e)=\textnormal{pot}_s(x)+\textnormal{pot}_s(y)\leq(1+\epsilon)\textnormal{pot}_{t_0}(x)+(1+\epsilon)\textnormal{pot}_{t_0}(y)=(1+\epsilon)\textnormal{pot}_{t_0}(e)\] and because $f(t_0)\geq 2$, by Lemma~\ref{lem:critical_turns} \[(1+\epsilon)\textnormal{pot}_{t_0}(e) <(1+\epsilon)\frac{\mu p_0}{(1-\eta)}\textnormal{pot}_{t_0-1}(e_0) \leq\frac{(1+\epsilon)\mu p_0}{(1-\eta)}2\textnormal{pot}_{t_0-1}(u).\] Now assume that the claim holds for all turns $s'$ with $\textnormal{crit}(s')=i$ for some $i\in[c-1]$. Let $s$ be a turn with $\textnormal{crit}(s)=i+1$. Let $s'$ be the last critical turn before $s$ (if $s$ is critical, let $s'=s$). Then $\textnormal{crit}(s'-1)=i$. Let $e_M$ be the edge claimed by Maker\ in turn $s'$. We get \begin{align*} \textnormal{pot}_s(e) &\leq (1+\epsilon)\textnormal{pot}_{s'}(e)\tag{$s<t_2$}\\ &\leq(1+\epsilon)\frac{\mu p_0}{(1-\eta)}\textnormal{pot}_{s'-1}(e_M) \tag{Lemma~\ref{lem:critical_turns}}\\ &\leq(1+\epsilon)\frac{\mu p_0}{(1-\eta)}\left(\frac{(1+\epsilon)\mu p_0}{1-\eta}\right)^i2\textnormal{pot}_{t_0-1}(u)\tag{IH}\\ &=\left(\frac{(1+\epsilon)\mu p_0}{1-\eta}\right)^{i+1}2\textnormal{pot}_{t_0-1}(u). \end{align*} Note that for the above application of Lemma~\ref{lem:critical_turns}, we need to ensure that $f(s')\geq 2$. Due to Observation~\ref{obs:freeedges}, it suffices to show that $\textnormal{POT}_t<2n$ for all $t<s'$. By the choice of $t_0$, we already know that $\textnormal{POT}_t<2n$ for all $t<t_0$, and because $s'\leq s<t_2$, for all $t_0\leq t<s'$ we can apply Observation~\ref{obs:auxobs} and get $\textnormal{POT}_t<2n$. \end{proof} \begin{lemma}\label{lem:crit_increase} For every $\xi>0$, if $n$ is sufficiently big, we have \[\sum_{\substack{t_0\leq s\leq {t^*}\\s \textnormal{ critical}}}\inc{s} \leq 2c(\mu-1)\textnormal{pot}_{t_0-1}(u)<\xi\textnormal{pot}_{t_0-1}(u).\] \end{lemma} \begin{proof} Let $\xi>0$.
First note that due to Lemma~\ref{lem:pot_single}~(i) \begin{equation} \inc{t_0}\leq (\mu-1)\textnormal{pot}_{t_0-1}(e_0)\leq 2(\mu-1)\textnormal{pot}_{t_0-1}(u). \end{equation} Now let $s$ be a critical turn with $t_0<s\leq {t^*}$. Let $e_M$ be the edge claimed by Maker\ in this turn. We get \begin{align*} \inc{s} &\leq(\mu-1)\textnormal{pot}_{s-1}(e_M)\tag{Lemma~\ref{lem:pot_single}~(i)}\\ &<(\mu-1)\left(\frac{(1+\epsilon)\mu p_0}{(1-\eta)}\right)^{\textnormal{crit}(s-1)}2\textnormal{pot}_{t_0-1}(u)\tag{Lemma~\ref{lem:crit_stacking}}\\ &\overset{\eqref{eq:choiceeps}}{\leq} (\mu-1)2\textnormal{pot}_{t_0-1}(u). \end{align*} So for every critical turn $s$ with $t_0\leq s\leq {t^*}$ we have \begin{equation} \inc{s}\leq 2(\mu-1)\textnormal{pot}_{t_0-1}(u). \end{equation} Because ${t^*}\leq t_3$, there are at most $c$ critical turns between $t_0$ and ${t^*}$, so finally we get \[\sum_{\substack{t_0\leq s\leq {t^*}\\s \textnormal{ critical}}}\inc{s} \leq\sum_{\substack{t_0\leq s\leq {t^*}\\s \textnormal{ critical}}}2(\mu-1)\textnormal{pot}_{t_0-1}(u) \leq 2c (\mu-1)\textnormal{pot}_{t_0-1}(u).\] Recall that $\mu-1=\frac{6\ln(n)\beta}{\delta q}=\frac{6\ln(n)\sqrt{\beta}}{\delta\sqrt{n}} \overset{n\rightarrow\infty}{\longrightarrow} 0$, whereas $c$ is bounded by a constant. So for $n$ sufficiently big, the whole term is smaller than $\xi\textnormal{pot}_{t_0-1}(u)$. \end{proof} By definition, $t^*$ always takes one of the three values $t_1,t_2,t_3$. In the following three lemmas we consider all possible cases. These lemmas combined directly imply Theorem~\ref{thm:no_increase}. Throughout, we assume $n$ to be sufficiently big where needed. \begin{lemma}\label{lem:t1} If $t_1\leq t_2$ and $t_1\leq t_3$, then $\textnormal{POT}_{t_0-1}\geq \textnormal{POT}_{t^*}$. \end{lemma} \begin{proof} Let $\xi\in(0,\eta\gamma)$. By assumption ${t^*}=\min(t_1,t_2,t_3)=t_1$ and hence, by definition of $t_1$, we have $\textnormal{pot}_{{t^*}}(u)\leq(1-\gamma)\textnormal{pot}_{t_0-1}(u)$.
Let $R:=\sum\limits_{t_0\leq s\leq {t^*}}\diffrest{s}$. Then, \begin{align*} \textnormal{POT}_{{t^*}}-\textnormal{POT}_{t_0-1} &=\sum_{\mathclap {t_0\leq s\leq t^*}}\textnormal{POT}_{s}-\textnormal{POT}_{s-1} =\sum_{\mathclap{t_0\leq s\leq t^*}}\diff{s}-\diffrest{s}\\ &=\sum_{\mathclap{\substack{t_0\leq s\leq {t^*}\\s \textnormal{ critical}}}}\diff{s} +\underbrace{\sum_{\substack{t_0\leq s\leq {t^*}\\s \textnormal{ non-critical}}}\hspace{-0.5cm}\diff{s}}_{\leq 0}-R\\ &\leq \sum_{\mathclap{\substack{t_0\leq s\leq {t^*}\\s \textnormal{ critical}}}}\diff{s}-R\\ &\leq \xi\textnormal{pot}_{t_0-1}(u)-R,\tag{Lemma~\ref{lem:crit_increase}} \end{align*} hence it suffices to show that $R\geq \xi\textnormal{pot}_{t_0-1}(u)$. We have \begin{align*} &\xi\textnormal{pot}_{t_0-1}(u)\\ &\leq\eta\gamma\textnormal{pot}_{t_0-1}(u)\\ &\leq\eta(\textnormal{pot}_{t_0-1}(u)-\textnormal{pot}_{t^*}(u))\tag{${t^*}=t_1$}\\ &=\eta\left(\sum_{t_0\leq s\leq {t^*}}\dec{s}(u)-\inc{s}(u)\right)\\ &=\eta\left(\sum_{t_0\leq s\leq {t^*}}\utt{s}(u)+\ott{s}(u) +\dectails{s}(u)+\decfree{s}(u)+\deczero{s}(u)-\inc{s}(u)\right)\\ &\leq\eta\left(\sum_{t_0\leq s\leq {t^*}}\ott{s}(u) +\dectails{s}(u)+\decfree{s}(u)+\deczero{s}(u)\right) \tag{$\utt{s} (u)\leq\inc{s} (u)$}\\ &\leq\sum_{t_0\leq s\leq {t^*}}\ott{s}(u)+\dectails{s}(u)+\eta\decfree{s}(u)+\deczero{s}(u)\\ &\leq\sum_{t_0\leq s\leq {t^*}}\diffrest{s} =R. \end{align*} \end{proof} \begin{lemma}\label{lem:t2} If $t_2< t_1$ and $t_2\leq t_3$, then $\textnormal{POT}_{t_0-1}\geq \textnormal{POT}_{t^*}$. \end{lemma} \begin{proof} Let $\xi>0$ with $\xi\leq\eta(1-\gamma)(1-(1+\epsilon)^{-1/p_0})$. We have ${t^*}=t_2$, so there exists a turn $s_0$ with $t_0\leq s_0< {t^*}$ and a vertex $w\in V$ such that $\textnormal{pot}_{t^*}(w)\geq(1+\epsilon)\textnormal{pot}_{s_0}(w)$. Because ${t^*}<t_1$, the potential of $u$ was not set to $0$, and as in the proof of Lemma~\ref{lem:t1} it suffices to show that $R\geq\xi\textnormal{pot}_{t_0-1}(u)$.
We start by showing that for all turns $t$ with $s_0\leq t \leq {t^*}$ it holds that \begin{equation}\label{prodabsch} \textnormal{pot}_{t}(w)\leq\textnormal{pot}_{s_0}(w)\prod_{s_0<s\leq t}\mu^{p_0f(s)/q}. \end{equation} We prove~\eqref{prodabsch} by induction on $t$. For $t=s_0$ the claim obviously holds. Now let $t>s_0$. Then $t-1\geq s_0$ and by Lemma~\ref{lem:first_increase}~(i) we have \begin{align*} \textnormal{pot}_{t}(w)-\textnormal{pot}_{t-1}(w) \leq\inc{t}(w)-\utt{t}(w) \leq\textnormal{pot}_{t-1}(w)(\mu^{p_0f(t)/q}-1), \end{align*} so \[\textnormal{pot}_{t}(w)\leq\textnormal{pot}_{t-1}(w)\mu^{p_0f(t)/q}.\] By applying the induction hypothesis we finish the proof of~\eqref{prodabsch}: \[\textnormal{pot}_{t}(w)\leq\left(\textnormal{pot}_{s_0}(w)\prod_{s_0<s\leq t-1}\mu^{p_0f(s)/q} \right)\mu^{p_0f(t)/q} =\textnormal{pot}_{s_0}(w)\prod_{s_0<s\leq t}\mu^{p_0f(s)/q}.\] Using~\eqref{prodabsch}, we get \[(1+\epsilon)\textnormal{pot}_{s_0}(w)\leq\textnormal{pot}_{t^*}(w) \leq\textnormal{pot}_{s_0}(w)\prod_{s_0<s\leq {t^*}}\mu^{p_0f(s)/q},\] so \[ 1+\epsilon\leq\prod_{s_0<s\leq {t^*}}\mu^{p_0f(s)/q} =\mu^{\left(p_0\sum_{s_0<s\leq {t^*}}f(s)\right)/q},\] which, after taking the logarithm, gives \[\sum_{s_0<s\leq {t^*}}f(s) \geq\frac{q\ln(1+\epsilon)}{p_0\ln(\mu)}=:x,\] so at least $x$ free edges were claimed by Breaker\ between the turns $s_0$ and ${t^*}$. Because ${t^*}<t_1$, during the whole time from $t_0$ to ${t^*}$ the potential of $u$ is at least $(1-\gamma)\textnormal{pot}_{t_0-1}(u)$. Hence, during this time every unclaimed edge incident to $u$ has a potential of at least $(1-\gamma)\textnormal{pot}_{t_0-1}(u)$, so in particular every free edge claimed by Breaker\ has at least this potential and, due to Lemma~\ref{lem:pot_single}~(ii), causes a decrease of the total potential of at least $(1-\gamma)\textnormal{pot}_{t_0-1}(u)(1-\mu^{-\frac{1}{q}})$.
Therefore, we get \begin{align*} R &\geq \eta\sum_{s_0<s\leq {t^*}}\decfree{s}\\ &\geq \eta x (1-\gamma)\textnormal{pot}_{t_0-1}(u)\left(1-\mu^{-\frac{1}{q}}\right)\\ &\geq \eta (1-\gamma)\textnormal{pot}_{t_0-1}(u)\left(1-\mu^{-\frac{x}{q}}\right) \tag{Lemma~\ref{lem:tech1}}\\ &\geq \eta (1-\gamma)\textnormal{pot}_{t_0-1}(u)\left(1-(1+\epsilon)^{-\frac{1}{p_0}}\right)\\ &\geq \xi \textnormal{pot}_{t_0-1}(u). \end{align*} \end{proof} \begin{lemma}\label{lem:t3} $t_3\geq\min(t_1,t_2)$. \end{lemma} \begin{proof} Assume that $t_3<\min(t_1,t_2)$. Then ${t^*}=t_3$, so ${t^*}$ is the $c$-th critical turn after $t_0-1$. We apply Lemma~\ref{lem:crit_stacking} to $s={t^*}<t_2$ and obtain that for every edge $e$ unclaimed after turn ${t^*}$ it holds that \[\textnormal{pot}_{t^*}(e) <\left(\frac{(1+\epsilon)\mu p_0}{(1-\eta)}\right)^{c}2\textnormal{pot}_{t_0-1}(u) \leq (1-\gamma)\textnormal{pot}_{t_0-1}(u)\] by the choice of $c$. Since ${t^*}<t_1$, we have $\textnormal{pot}_{t^*}(u)\geq(1-\gamma)\textnormal{pot}_{t_0-1}(u)$, so directly after turn ${t^*}$ every unclaimed edge incident to $u$ has a potential of at least $(1-\gamma)\textnormal{pot}_{t_0-1}(u)$. Hence, after turn ${t^*}$ there exists no unclaimed edge incident to $u$, and this implies that the potential of $u$ must have been set to $0$ at some turn $s$ with $t_0\leq s\leq {t^*}$. But then $t_1\leq s\leq {t^*}=t_3$, a contradiction. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:smallpot}] Let $s$ be some turn with $\textnormal{POT}_t<2n$ for all $t<s$. We show that this already implies $\textnormal{POT}_s<2n$.\par If $\textnormal{POT}_s\leq n$, there is nothing to show, so let $\textnormal{POT}_s>n$. Let $t_0$ be maximal satisfying $t_0\leq s$ and $\textnormal{POT}_{t_0-1}\leq n$ ($t_0$ exists due to Lemma~\ref{lem:start_potential}). Define ${t^*}$ as in Section~\ref{sec:increase}.
If $s={t^*}$, we can apply Theorem~\ref{thm:no_increase} and get $\textnormal{POT}_s=\textnormal{POT}_{t^*}\leq\textnormal{POT}_{t_0-1}\leq n$, so we may assume $s<{t^*}$. But then $s<t_2$, so we can apply Observation~\ref{obs:auxobs} and obtain that $\textnormal{POT}_s<2n$. \end{proof} \section{Open Questions} We have narrowed the gap for the threshold bias to $[1.414\sqrt{n},1.633\sqrt{n}]$. Of course, the question about the exact threshold value remains. At first sight our strategy still has some unused potential for improvement, since the secondary goal of preventing Maker\ from building a $q/2$-star is very restrictive. Breaker\ could allow Maker\ to build a few bigger stars if at the same time he is able to claim all edges connecting these stars. For $q\leq \sqrt{8n/3}$ the strategy could still be used to prevent Maker\ from building an $\alpha q$-star for some $\alpha>1/2$. But it certainly needs some additional variations of the strategy to prevent Maker\ from connecting stars of size at least $q/2$. \section{Appendix} \subsection{List of variables} \begin{itemize} \item $n:$ number of nodes in the game graph \item $q:$ number of Breaker-edges per turn \item $\beta:$ defined as $\beta:=\frac{q^2}{n}$; the strategy in this paper works for $\beta>\frac{8}{3}$. \item $\epsilon^*:$ a strictly positive constant.
\item $\textnormal{deg}_{M,t}(v):$ \emph{Maker-degree of} $v$; number of Maker-edges incident to $v$ after turn $t$ \item $\textnormal{deg}_{B,t}(v):$ \emph{Breaker-degree of} $v$; number of Breaker-edges incident to $v$ after turn $t$ \item $\delta:$ a constant with $0<\delta<1-\frac{8}{3\beta}$; chosen in Section~\ref{sec:potfunc} \item $\textnormal{bal}(v):$ the \emph{balance} of $v$; a measure of the ratio of Maker- and Breaker-edges incident to $v$; introduced in Definition~\ref{def:pot} \item $p_0:$ the balance of a node without incident Maker- or Breaker-edges; introduced in Definition~\ref{def:pot} \item $\textnormal{bal}deg(v):$ the \emph{balanced Breaker-degree} of $v$; introduced in Definition~\ref{def:pot2} \item $d(v):$ the \emph{deficit} of $v$, exponent in the potential function; introduced in Definition~\ref{def:pot2} \item $\mu:$ base in the potential function; introduced in Definition~\ref{def:pot2} \item $\textnormal{pot}(v):$ the \emph{potential} of $v$; in part~2 of the strategy Breaker\ always claims edges $\{u,v\}$ maximizing $\textnormal{pot}(u)+\textnormal{pot}(v)$; introduced in Definition~\ref{def:pot2} \item $\textnormal{POT}_t:$ the \emph{total potential} of a turn $t$; introduced in Definition~\ref{def:pot2} \item $f(t):$ number of \emph{free edges} claimed by Breaker\ in turn $t$; introduced in Section~\ref{sec:strategy} \item $\inc{t}(v):$ potential increase of $v$ in turn $t$ \item $\dec{t}(v):$ potential decrease of $v$ in turn $t$ \item $\decfree{t}(v):$ potential decrease of $v$ in turn $t$ caused by free edges \item $\decheads{t}(v):$ potential decrease of $v$ in turn $t$ caused by closing edges with $v$ as head \item $\dectails{t}(v):$ potential decrease of $v$ in turn $t$ caused by closing edges with $v$ as tail \item $\deczero{t}(v):$ potential decrease of $v$ in turn $t$ caused by claiming the last unclaimed edge of $v$ \item $\utt{t}(v)=\min\{\inc{t}(v),\decheads{t}(v)\}$; it holds that $\utt{t}(v)+\ott{t}(v)=\decheads{t}(v)$
\item $\ott{t}(v)=\max\{\decheads{t}(v)-\inc{t}(v),0\}$; it holds that $\utt{t}(v)+\ott{t}(v)=\decheads{t}(v)$ \item $\eta:$ a constant with $0<\eta<1-\mu p_0$; introduced in Section~\ref{sec:crit_turns} \item $\diff{t}:$ main part of the total potential change in turn $t$ with \mbox{$\diff{t}-\diffrest{t}=\textnormal{POT}_t-\textnormal{POT}_{t-1}$}; introduced in Definition~\ref{def:crit} \item $\diffrest{t}:$ rest part of the total potential change in turn $t$, with $\diff{t}-\diffrest{t}=\textnormal{POT}_t-\textnormal{POT}_{t-1}$; introduced in Definition~\ref{def:crit} \item $\gamma:$ a strictly positive constant; introduced in Section~\ref{sec:increase} \item $\epsilon:$ a strictly positive constant; introduced in Section~\ref{sec:increase} \item $c:$ a strictly positive value bounded by a constant; introduced in Section~\ref{sec:increase} \item $t_i,i=0,1,2,3:$ certain turns considered in Section~\ref{sec:increase} \item ${t^*}=\min(t_1,t_2,t_3)$; introduced in Section~\ref{sec:increase} \end{itemize} \subsection{Interpretation of the balance value} \label{sec:interpretation} In the following we motivate the definition of the balance value of a node by giving an `in-game' example. Let $v\in V$ with $\textnormal{deg}_M(v)<\frac{q(1-\delta)}{2}$ and suppose that Maker\ decides to concentrate on the node $v$, i.e., from this moment on he will claim all of his edges incident to $v$ as long as there are unclaimed edges incident to $v$. Moreover suppose that Breaker's aim, besides closing all Maker-paths of length $2$, is to keep $\textnormal{deg}_M(v)$ below $\frac{q(1-\delta)}{2}$. To achieve this, he must claim a certain number of edges incident to $v$ himself. Denote this number by $B_v$. Let $T$ denote the number of turns that Maker\ needs to raise $\textnormal{deg}_M(v)$ to $\left\lceil\frac{q(1-\delta)}{2}\right\rceil$.
Then $B_{\textnormal{total}}:=Tq$ is the number of edges that Breaker\ can claim before $\textnormal{deg}_M(v)\geq\left\lceil\frac{q(1-\delta)}{2}\right\rceil$. But there is a certain number $C$ of edges that Breaker\ has to claim at different places, not incident to $v$, to close new Maker-paths.\par Setting $A:=B_{\textnormal{total}}-C$ as the number of available Breaker-edges, the term $\frac{B_v}{A}$ represents the fraction of \emph{available} Breaker-edges necessary to prevent Maker\ from building a $q/2$-star. We will show that $\textnormal{bal}(v)$ is an approximation of $\frac{B_v}{A}$, hence it is a measure for the `danger' of $v$: the smaller $\frac{B_v}{A}$ is, the less attention Breaker\ has to spend on the node $v$. If $\frac{B_v}{A}>1$, this means that Breaker\ cannot achieve his goal of keeping $\textnormal{deg}_M(v)$ below $q/2$.\par For $f,g:\mathbb{N}\rightarrow \mathbb{R}$ we write $f\sim g$ if and only if $\lim\limits_{n\rightarrow\infty}\frac{f(n)}{g(n)}=1$. We will close this subsection by showing that $\textnormal{bal}(v)\sim\frac{B_v}{A'}$ for some $A'\leq A$. To prevent Maker\ from building a $\frac{q(1-\delta)}{2}$-star at $v$, at the end of the game Breaker\ must possess at least $n-\frac{q(1-\delta)}{2}$ edges incident to $v$. Hence, the number of edges still to claim is $B_v=n-\frac{q(1-\delta)}{2}-\textnormal{deg}_B(v)\sim n-\textnormal{deg}_B(v)$. Because Maker\ claims one edge per turn and concentrates on $v$, we get $T=\frac{q(1-\delta)}{2}-\textnormal{deg}_M(v)$ and $B_{\textnormal{total}}=\frac{q^2(1-\delta)}{2}-q\textnormal{deg}_M(v)$. The exact value of $C$ depends on the choices of Maker\ and on how many closing edges are already owned by Breaker.
If we assume that all closing edges are previously unclaimed, we can upper bound $C$ by \begin{align*}C'&:=\sum\limits_{i=\textnormal{deg}_M(v)}^{\lceil q(1-\delta)/2\rceil -1}i\\ &=\frac{(\lceil q(1-\delta)/2\rceil-1)\cdot\lceil q(1-\delta)/2\rceil}{2} -\frac{(\textnormal{deg}_M(v)-1)\textnormal{deg}_M(v)}{2}\\ &\sim \frac{q^2(1-\delta)^2}{8}-\frac{\textnormal{deg}_M(v)^2}{2}. \end{align*} Finally, for $A':=B_{\textnormal{total}}-C'\leq A$ we get \begin{align*}\frac{B_v}{A'} &=\frac{B_v}{B_{\textnormal{total}}-C'}\\ &\sim \frac{n-\textnormal{deg}_B(v)}{\frac{q^2(1-\delta)}{2}-q\textnormal{deg}_M(v) -\left(\frac{q^2(1-\delta)^2}{8}-\frac{\textnormal{deg}_M(v)^2}{2}\right)}\\ &=\frac{8(n-\textnormal{deg}_B(v))} {q^2(1-\delta)(3+\delta)-4\textnormal{deg}_M(v)(2q-\textnormal{deg}_M(v))} =\textnormal{bal}(v). \end{align*} \end{document}
\begin{document} \begin{frontmatter} \title{Differential, integral, and variational delta-embeddings\\ of Lagrangian systems} \author[jc,jc2]{Jacky Cresson} \ead{[email protected]} \author[abm,cidma]{Agnieszka B. Malinowska} \ead{[email protected]} \author[cidma,dfmt]{Delfim F. M. Torres} \ead{[email protected]} \address[jc]{Laboratoire de Math\'{e}matiques Appliqu\'{e}es de Pau, Universit\'{e} de Pau, Pau, France} \address[jc2]{Institut de M\'ecanique C\'eleste et de Calcul des \'Eph\'em\'erides, Observatoire de Paris, Paris, France} \address[abm]{Faculty of Computer Science, Bia{\l}ystok University of Technology, 15-351 Bia\l ystok, Poland} \address[cidma]{R\&D Unit CIDMA, University of Aveiro, 3810-193 Aveiro, Portugal} \address[dfmt]{Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal} \begin{abstract} We introduce the differential, integral, and variational delta-embeddings. We prove that the integral delta-embedding of the Euler--Lagrange equations and the variational delta-embedding coincide on an arbitrary time scale. In particular, a new coherent embedding for the discrete calculus of variations that is compatible with the least action principle is obtained. \end{abstract} \begin{keyword} coherence \sep embedding \sep least action principle \sep discrete calculus of variations \sep difference Euler--Lagrange equations. \MSC[2010] 34N05 \sep 49K05 \sep 49S05. \end{keyword} \end{frontmatter} \section{Introduction} An ordinary differential equation is usually given in differential form, \textrm{i.e.}, \begin{equation*} \frac{dx(t)}{dt} =f\left(t,x(t)\right), \quad t\in [a,b] , \quad x(t) \in \mathbb{R}^n . \end{equation*} However, one can also consider the integral form of the equation: \begin{equation*} x(t) =x(a)+\displaystyle\int_a^t f(s,x(s)) ds , \quad t \in [a,b]. \end{equation*} The differential form is related to dynamics via the time derivative. 
The integral form is useful for proving existence and uniqueness of solutions or for studying analytical properties of solutions. In order to give a meaning to a differential equation over a new set (\textrm{e.g.}, stochastic processes, non-differentiable functions or discrete sets) one can use either the differential or the integral form. In general, these two generalizations do not give the same object. In the differential case, we first need to extend the time derivative. As examples, we can look at Schwartz's distributions \cite{sc} or backward/forward finite differences in the discrete case. Using the new derivative one can then generalize differential operators and, in turn, differential equations of arbitrary order. In the integral case, one needs to give a meaning to the integral over the new set. This strategy is used, for example, by It\^o \cite{ito} in order to define stochastic differential equations, by first defining stochastic integrals. In general, the integral form imposes fewer constraints on the underlying objects. This is already so in the classical case, where we need a differentiable function to write the differential form but only continuity or weaker regularity to give a meaning to the integral form. The notion of embedding introduced in \cite{cd} is an algebraic procedure providing an extension of classical differential equations over an arbitrary vector space. Embedding is based on the differential formulation of the equation. This formalism was developed in the framework of fractional equations \cite{cr1}, stochastic processes \cite{cd}, and non-differentiable functions \cite{cg}. Recently, it has been extended to discrete sets in order to discuss discretization of differential equations and numerical schemes \cite{BCGI}. In this paper, we define an embedding containing the discrete as well as the continuous case using the time scale calculus.
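As an elementary illustration of the simplest discrete case, take the set $h\mathbb{Z}=\{h z \, | \, z \in \mathbb{Z}\}$ with $h>0$. Replacing the time derivative by the forward difference turns the differential form into the forward Euler scheme
\begin{equation*}
\frac{x(t+h)-x(t)}{h}=f\left(t,x(t)\right),
\end{equation*}
while replacing the integral by the left-endpoint sum turns the integral form into
\begin{equation*}
x(t)=x(a)+\sum_{s=a,\,a+h,\,\ldots,\,t-h} h f\left(s,x(s)\right),
\end{equation*}
and differencing the latter recovers the same recursion $x(t+h)=x(t)+h f\left(t,x(t)\right)$. For this particular choice the two generalizations thus agree, but for other sets, or other discretizations of the integral, they need not.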
We use the proposed embedding in order to define time scale extensions of ordinary differential equations both in differential and in integral form. Of particular importance for many applications in physics and mathematics is the case of Lagrangian systems governed by an Euler--Lagrange equation. Lagrangian systems possess a variational structure, \textrm{i.e.}, their solutions correspond to critical points of a functional, and this characterization does not depend on the coordinate system. This induces strong constraints on solutions, for example the conservation of energy for autonomous classical Lagrangian systems. That is, if the Lagrangian does not depend explicitly on $t$, then the energy is constant along physical trajectories. We use the time scale embedding in order to provide an analogue of the classical Lagrangian functional on an arbitrary time scale. By developing the corresponding calculus of variations, one then obtains the Euler--Lagrange equation. This extension of the original Euler--Lagrange equation, obtained by passing through the time scale embedding of the functional, is called a time scale {\it variational embedding}. In the discrete setting, these extensions are known under the terminology of {\it variational integrators}. We then have three ways to extend a given ordinary differential equation: differential, integral or variational embedding. All these extensions are a priori different. The coherence problem introduced in \cite{cd}, in the context of the stochastic embedding, considers the problem of finding conditions under which these extensions coincide. Here we prove that the integral and variational embeddings are coherent (see Theorem~\ref{thm:mr}). The result is new and interesting even in the discrete setting, providing a new form of the Euler--Lagrange difference equation (see \eqref{neleq}) that is compatible with the least action principle.
\section{Note on the notation used} We denote by $f$ or $t \rightarrow f(t)$ a function, and by $f(t)$ the value of the function at the point $t$. Throughout the text we consistently use square brackets for the arguments of operators and round brackets for the arguments of all other types of functions. Functionals are denoted by uppercase letters in calligraphic mode. We denote by $D$ the usual differential operator and by $\partial_i$ the operator of partial differentiation with respect to the $i$th variable. \section{Reminder about the time scale calculus} \label{sec:rtsc} The reader interested in the calculus on time scales is referred to the book \cite{book:ts:2001}. Here we just recall the necessary concepts and fix some notation. A nonempty closed subset of $\mathbb{R}$ is called a time scale and is denoted by $\mathbb{T}$. Thus, $\mathbb{R}$, $\mathbb{Z}$, and $\mathbb{N}$ are trivial examples of time scales. Other examples of time scales are: $[-2,4] \cup \mathbb{N}$, $h\mathbb{Z}:=\{h z \, | \, z \in \mathbb{Z}\}$ for some $h>0$, $q^{\mathbb{N}_0}:=\{q^k \, | \, k \in \mathbb{N}_0\}$ for some $q>1$, and the Cantor set. We assume that a time scale $\mathbb{T}$ has the topology that it inherits from the real numbers with the standard topology. The \emph{forward jump} $\sigma :\mathbb{T} \rightarrow \mathbb{T}$ is defined by $\sigma(t)=\inf{\{s\in\mathbb{T}:s>t}\}$ for all $t\in\mathbb{T}$, while the \emph{backward jump} $\rho:\mathbb{T}\rightarrow\mathbb{T}$ is defined by $\rho(t)=\sup{\{s\in\mathbb{T}:s<t}\}$ for all $t\in\mathbb{T}$, where $\inf\emptyset=\sup\mathbb{T}$ (\textrm{i.e.}, $\sigma(M)=M$ if $\mathbb{T}$ has a maximum $M$) and $\sup\emptyset=\inf\mathbb{T}$ (\textrm{i.e.}, $\rho(m)=m$ if $\mathbb{T}$ has a minimum $m$). The \emph{graininess function} $\mu:\mathbb{T}\rightarrow[0,\infty)$ is defined by $\mu(t)=\sigma(t)-t$ for all $t\in\mathbb{T}$. \begin{example} If $\mathbb{T} = \mathbb{R}$, then $\sigma(t) = \rho(t) = t$ and $\mu(t)=0$.
If $\mathbb{T} = h\mathbb{Z}$, then $\sigma(t) = t + h$, $\rho(t) = t - h$, and $\mu(t)=h$. On the other hand, if $\mathbb{T} = q^{\mathbb{N}_0}$, where $q>1$ is a fixed real number, then we have $\sigma(t) = q t$, $\rho(t) = q^{-1} t$, and $\mu(t)= (q-1) t$. \end{example} In order to introduce the definition of delta derivative, we define a new set $\mathbb{T}^\kappa$ which is derived from $\mathbb{T}$ as follows: if $\mathbb{T}$ has a left-scattered maximum $M$, then $\mathbb{T}^\kappa := \mathbb{T}\setminus\{M\}$; otherwise, $\mathbb{T}^\kappa := \mathbb{T}$. In general, for $r \ge 2$, $\mathbb{T}^{\kappa^r}:=\left(\mathbb{T}^{\kappa^{r-1}}\right)^\kappa$. Similarly, if $\mathbb{T}$ has a right-scattered minimum $m$, then we define $\mathbb{T}_\kappa := \mathbb{T}\setminus\{m\}$; otherwise, $\mathbb{T}_\kappa := \mathbb{T}$. Moreover, we define $\mathbb{T}^{\kappa}_{\kappa}:=\mathbb{T}^{\kappa}\cap \mathbb{T}_{\kappa}$. \begin{definition} \label{def:de:dif} We say that a function $f:\mathbb{T}\rightarrow\mathbb{R}$ is \emph{delta differentiable} at $t\in\mathbb{T}^{\kappa}$ if there exists a number $\Delta[f](t)$ such that for all $\varepsilon>0$ there is a neighborhood $U$ of $t$ such that $$ \left|f(\sigma(t))-f(s)-\Delta[f](t)(\sigma(t)-s)\right| \leq\varepsilon|\sigma(t)-s|,\mbox{ for all $s\in U$}. $$ We call $\Delta[f](t)$ the \emph{delta derivative} of $f$ at $t$ and we say that $f$ is \emph{delta differentiable} on $\mathbb{T}^{\kappa}$ provided $\Delta[f](t)$ exists for all $t\in\mathbb{T}^{\kappa}$. \end{definition} \begin{example} \label{ex:der:QC} If $\mathbb{T} = \mathbb{R}$, then $\Delta[f](t) = D[f](t)$, \textrm{i.e.}, the delta derivative coincides with the usual one. If $\mathbb{T} = h\mathbb{Z}$, then $\Delta[f](t) = \frac{1}{h}\left(f(t+h) - f(t)\right) =: \Delta_+[f](t)$, where $\Delta_+$ is the usual forward difference operator defined by the last equation.
If $\mathbb{T} = q^{\mathbb{N}_0}$, $q>1$, then $\Delta[f](t)=\frac{f(q t)-f(t)}{(q-1) t}$, \textrm{i.e.}, we get the usual derivative of quantum calculus. \end{example} A function $f:\mathbb{T}\rightarrow\mathbb{R}$ is called \emph{rd-continuous} if it is continuous at right-dense points and if its left-sided limit exists at left-dense points. We denote the set of all rd-continuous functions by $C_{\textrm{rd}}(\mathbb{T}, \mathbb{R})$ and the set of all delta differentiable functions with rd-continuous derivative by $C_{\textrm{rd}}^1(\mathbb{T}, \mathbb{R})$. It is known (see \cite[Theorem~1.74]{book:ts:2001}) that every rd-continuous function $f$ possesses a \emph{delta antiderivative}, \textrm{i.e.}, a function $\xi$ with $\Delta[\xi]=f$, and in this case the \emph{delta integral} is defined by $\int_{c}^{d}f(t)\Delta t=\xi(d)-\xi(c)$ for all $c,d\in\mathbb{T}$. \begin{example} Let $a, b \in \mathbb{T}$ with $a < b$. If $\mathbb{T} = \mathbb{R}$, then $\int_{a}^{b}f(t)\Delta t = \int_{a}^{b} f(t) dt$, where the integral on the right-hand side is the classical Riemann integral. If $\mathbb{T} = h\mathbb{Z}$, then $\int_{a}^{b} f(t)\Delta t = \sum_{k=\frac{a}{h}}^{\frac{b}{h}-1}hf(kh)$. If $\mathbb{T} = q^{\mathbb{N}_0}$, $q>1$, then $\int_{a}^{b} f(t)\Delta t = (q - 1) \sum_{t \in [a,b) \cap \mathbb{T}} t f(t)$. \end{example} The delta integral has the following properties: \begin{itemize} \item[(i)] if $f\in C_{\textrm{rd}}$ and $t \in \mathbb{T}$, then \begin{equation*} \int_t^{\sigma(t)} f(\tau)\Delta\tau=\mu(t)f(t)\, ; \end{equation*} \item[(ii)]if $c,d\in\mathbb{T}$ and $f$ and $g$ are delta-differentiable, then the following formulas of integration by parts hold: \begin{equation*} \int_{c}^{d} f(\sigma(t))\Delta[g](t)\Delta t =\left.(fg)(t)\right|_{t=c}^{t=d} -\int_{c}^{d} \Delta[f](t) g(t)\Delta t, \end{equation*} \begin{equation} \label{int:par:delta} \int_{c}^{d} f(t) \Delta[g](t)\Delta t =\left.(fg)(t)\right|_{t=c}^{t=d} -\int_{c}^{d} \Delta[f](t) g(\sigma(t)) \Delta t.
\end{equation} \end{itemize} \section{Time scale embeddings and evaluation operators} \label{sec:tse} Let $\mathbb{T}$ be a bounded time scale with $a:= \min \mathbb{T}$ and $b:= \max \mathbb{T}$. We denote by $C([a,b];\mathbb{R})$ the set of continuous functions $x : [a,b] \rightarrow \mathbb{R}$. As introduced in Section~\ref{sec:rtsc}, by $C_{\textrm{rd}}(\mathbb{T},\mathbb{R})$ we denote the set of all real valued rd-continuous functions defined on $\mathbb{T}$, and by $C_{\textrm{rd}}^1(\mathbb{T}, \mathbb{R})$ the set of all delta differentiable functions with rd-continuous derivative. A time scale embedding is given by specifying: \begin{itemize} \item A mapping $\iota : C([a,b],\mathbb{R}) \rightarrow C_{\textrm{rd}}(\mathbb{T},\mathbb{R} )$; \item An operator $\delta : C^1([a,b],\mathbb{R}) \rightarrow C_{\textrm{rd}}^1(\mathbb{T}^\kappa,\mathbb{R})$, called a generalized derivative; \item An operator $J: C([a,b],\mathbb{R})\rightarrow C_{\textrm{rd}}(\mathbb{T},\mathbb{R})$, called a generalized integral operator. \end{itemize} We fix the following embedding: \begin{definition}[Time scale embedding] The mapping $\iota$ is obtained by restriction of functions to $\mathbb{T}$. The operator $\delta$ is chosen to be the $\Delta$ derivative, and the operator $J$ is given by the $\Delta$ integral as follows: $$ \delta[x](t) := \Delta[x](t) \, , \quad J[x](t) := \displaystyle\int_a^{\sigma (t)} x(s) \Delta s\, . $$ \end{definition} \begin{definition}[Evaluation operator] Let $f :\mathbb{R} \rightarrow \mathbb{R}$ be a continuous function. We denote by $\widetilde{f}$ the operator associated to $f$ and defined by \begin{equation} \label{evaluation} \widetilde{f} : \left . \begin{array}{ccc} C (\mathbb{R} ,\mathbb{R} ) & \longrightarrow & C (\mathbb{R} ,\mathbb{R} )\\ x & \mapsto & \widetilde{f}[x] := t \rightarrow f(x(t)) . \end{array} \right . 
\end{equation} The operator $\widetilde{f}$ given by \eqref{evaluation} is called the \emph{evaluation operator} associated with $f$. \end{definition} The definition of the evaluation operator is easily extended in various ways. We give in Definition~\ref{def:L:eval:oper} a special evaluation operator that naturally arises in the study of problems of the calculus of variations and respective Euler--Lagrange equations (\textrm{cf.} Section~\ref{sec:cv}). \begin{definition}[Lagrangian operator] \label{def:L:eval:oper} Let $L : [a,b] \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, $(t,x,v) \mapsto L(t,x,v)$, be a $C^1$ function. The \emph{Lagrangian operator} $\widetilde{L} : C^1([a,b] ,\mathbb{R}) \rightarrow C^1([a,b] ,\mathbb{R})$ associated with $L$ is the evaluation operator defined by $\widetilde{L}[x] := t \rightarrow L\left(t,x(t),D[x](t)\right)$. \end{definition} We consider ordinary differential equations of the form \begin{equation*} O[x](t) =0, \quad t \in [a,b], \end{equation*} where $x\in C^n(\mathbb{R},\mathbb{R})$ and $O$ is a differential operator of order $n$, $n \ge 1$, given by \begin{equation} \label{operator} O=\displaystyle\sum_{i=0}^n \widetilde{a_i} \cdot \left( D^i \circ \widetilde{b_i} \right), \end{equation} where $(\widetilde{a_i})$ (resp. $(\widetilde{b_i})$) is the family of evaluation operators associated to a family of functions $(a_i)$ (resp. $(b_i)$), and $D^i$ is the derivative of order $i$, \textrm{i.e.}, $D^i = \frac{d^i}{dt^i}$. Differential operators of the form \eqref{operator} play a crucial role when dealing with Euler--Lagrange equations. We are now ready to define the time scale embedding of evaluation and differential operators. \begin{definition}[Time scale embedding of evaluation operators] Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a continuous function and $\widetilde{f}$ the associated evaluation operator.
The time scale embedding $\widetilde{f}_\mathbb{T}$ of $\widetilde{f}$ is the extension of $\widetilde{f}$ to $C_{rd}(\mathbb{T} ,\mathbb{R} )$: \begin{equation*} \widetilde{f}_{\mathbb{T}} : \left . \begin{array}{ccc} C_{rd}(\mathbb{T} ,\mathbb{R} ) & \longrightarrow & C_{rd}(\mathbb{T} ,\mathbb{R} )\\ x & \mapsto & \widetilde{f}_\mathbb{T}[x] := t \rightarrow f(x(t)) . \end{array} \right . \end{equation*} \end{definition} The next definition gives the time scale embedding of the differential operator \eqref{operator}. \begin{definition}[Time scale embedding of differential operators] \label{eq:diff:emb} The time scale embedding of the differential operator \eqref{operator} is defined by \begin{equation*} O_{\Delta} =\displaystyle\sum_{i=0}^n \widetilde{a_i}_\mathbb{T} \cdot \left( {\Delta^i} \circ \widetilde{b_i}_\mathbb{T} \right). \end{equation*} \end{definition} The two previous definitions are sufficient to define the time scale embedding of a given ordinary differential equation. \begin{definition}[Time scale embedding of differential equations] \label{embeddingdofferential} The delta-differential embedding of an ordinary differential equation $O[x]=0$, $x \in C^n([a,b],\mathbb{R})$, is given by $O_{\Delta}[x] =0$, $x\in C^n_{rd}(\mathbb{T}^{\kappa^n} ,\mathbb{R})$. \end{definition} In order to define the delta-integral and variational embeddings (see Sections~\ref{sec:dealp}, \ref{sec:intgral} and \ref{sec:mr}) we need to know how to embed an integral functional. \begin{definition}[Time scale embedding of integral functionals] \label{embeddingfunctional} Let $L : [a,b] \times \mathbb{R}^2 \rightarrow \mathbb{R}$ be a continuous function and $\mathcal{L}$ the functional defined by $$ \mathcal{L}(x) = \int_a^t L\left(s,x(s),D[x](s)\right) ds = \int_a^t \widetilde{L}[x](s) ds.
$$ The time scale embedding $\mathcal{L}_{\Delta}$ of $\mathcal{L}$ is given by \begin{equation*} \mathcal{L}_{\Delta}(x) =\int_a^{\sigma(t)} L\left(s,x(s),\Delta[x](s)\right) \Delta s = \int_a^{\sigma(t)} \widetilde{L}_\mathbb{T}[x](s) \Delta s \, . \end{equation*} \end{definition} \section{Calculus of variations} \label{sec:cv} The classical variational functional $\mathcal{L}$ is defined by \begin{equation} \label{lagrangianfunctional} \mathcal{L}(x) =\int_a^b L\left(t,x(t),D[x](t)\right) dt , \end{equation} where $L :[a,b] \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is a smooth real valued function called the Lagrangian (see, \textrm{e.g.}, \cite{vanBrunt}). Functional \eqref{lagrangianfunctional} can be written, using the Lagrangian operator $\widetilde{L}$ (Definition~\ref{def:L:eval:oper}), in the following equivalent form: \begin{equation*} \mathcal{L}(x) = \int_a^{b} \widetilde{L}[x](t) dt . \end{equation*} The Euler--Lagrange equation associated to \eqref{lagrangianfunctional} is given (see, \textrm{e.g.}, \cite{vanBrunt}) by \begin{equation} \label{eq:class:EL:pre} D\left[ \tau \rightarrow \partial_3[L]\left(\tau,x(\tau), D[x](\tau)\right) \right](t) -\partial_2[L]\left(t,x(t),D[x](t)\right) =0 , \end{equation} $t \in [a,b]$, which we can write, equivalently, as \begin{equation*} \left(D \circ \widetilde{\partial_3[L]}\right)[x](t) - \widetilde{\partial_2[L]}[x](t) =0 . \end{equation*} Still another way to write the Euler--Lagrange equation consists in introducing the differential operator $\mbox{\rm EL}_L$, called the {\it Euler--Lagrange operator}, given by \begin{equation*} \mbox{\rm EL}_L := D \circ \widetilde{\partial_3[L]} - \widetilde{\partial_2[L]} \, . \end{equation*} We can then write the Euler--Lagrange equation simply as $\mbox{\rm EL}_L[x](t)=0$, $t \in [a,b]$. 
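As a concrete sanity check, the Euler--Lagrange equation can be verified numerically on a known extremal. The following Python sketch is purely illustrative: the harmonic-oscillator Lagrangian $L(t,x,v)=\frac{1}{2}v^2-\frac{1}{2}x^2$ and the candidate trajectory $x(t)=\cos t$ are assumptions chosen for the example, for which $\mbox{\rm EL}_L[x](t)$ reduces to $x''(t)+x(t)$.

```python
import math

# Residual of the classical Euler-Lagrange equation for the (assumed)
# harmonic-oscillator Lagrangian L(t,x,v) = v^2/2 - x^2/2, for which
# EL_L[x](t) = x''(t) + x(t).

def el_residual(x, t, h=1e-4):
    """Central-difference approximation of x''(t) + x(t)."""
    xpp = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2
    return xpp + x(t)

# x(t) = cos t solves x'' + x = 0, so the residual should be tiny.
res = max(abs(el_residual(math.cos, t / 10.0)) for t in range(1, 31))
print(res)
```

The residual is of the order of the finite-difference error, confirming that $\cos$ is an extremal of this functional.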
\section{Delta-differential embedding of the Euler--Lagrange equation} \label{sec:deEL} By Definition~\ref{eq:diff:emb}, the time scale delta embedding of the Euler--Lagrange operator $\mbox{\rm EL}_L$ gives the new operator \begin{equation*} \left(\mbox{\rm EL}_L\right)_{\Delta} := \Delta \circ \left(\widetilde{\partial_3[L]}\right)_\mathbb{T} - \left(\widetilde{\partial_2[L]}\right)_\mathbb{T}. \end{equation*} As a consequence, we have the following lemma. \begin{lemma}[Delta-differential embedding of the Euler--Lagrange equation] The delta-differential embedding of the Euler--Lagrange equation is given by $(\mbox{\rm EL}_L)_{\Delta}[x](t) = 0$, $t \in \mathbb{T}^{\kappa^2}$, \textrm{i.e.}, \begin{equation} \label{eq:d:EL} \left(\Delta \circ \widetilde{\partial_3[L]}_\mathbb{T}\right)[x](t) - \widetilde{\partial_2[L]}_\mathbb{T}[x](t) = 0 \end{equation} for any $t \in \mathbb{T}^{\kappa^2}$. \end{lemma} In the discrete case $\mathbb{T} =[a,b]\cap h\mathbb{Z}$, we obtain from \eqref{eq:d:EL} the well-known discrete version of the Euler--Lagrange equation, often written as \begin{equation} \label{eq:d:d:EL} \Delta_+ \circ \frac{\partial L}{\partial v}\left(t,x(t),\Delta_+ x(t)\right) -\frac{\partial L}{\partial x}\left(t,x(t),\Delta_+ x(t)\right)= 0 , \end{equation} $t \in \mathbb{T}^{\kappa^2}$, where $\Delta_+ f(t) = \frac{f(t+h)-f(t)}{h}$. The important point to note here is that from the numerical point of view, equation \eqref{eq:d:d:EL} does not provide a good scheme. Let us see a simple example. \begin{example} \label{ex1} Consider the Lagrangian $L(t,x,v) = \frac{1}{2} v^2 - U(x)$, where $U$ is the potential energy and $(t,x,v) \in [a,b] \times \mathbb{R}\times \mathbb{R}$. Then the Euler--Lagrange equation \eqref{eq:d:d:EL} gives \begin{equation} \label{alg1} \frac{x_{k+2}-2 x_{k+1}+x_k}{h^2} + \frac{\partial U}{\partial x}(x_k) = 0, \quad k = 0, \ldots, N-2, \end{equation} where $N = \frac{b-a}{h}$ and $x_k = x(a+hk)$. 
This numerical scheme is of order one, meaning that the global error is of order $h$, which is of course not good. \end{example} In the next section we show an alternative Euler--Lagrange equation to \eqref{eq:d:d:EL} that leads to more suitable numerical schemes. As we shall see in Section~\ref{sec:mr}, this comes from the fact that the embedded Euler--Lagrange equation \eqref{eq:d:EL} is not coherent, meaning that it does not preserve the variational structure. As a consequence, the numerical scheme \eqref{alg1} is not {\it symplectic}, in contrast with the flow of the Lagrangian system (see \cite{Arnold1979}). In particular, the numerical scheme \eqref{alg1} artificially dissipates energy (see \cite[Fig.~1, p.~364]{Marsden-West2001}). \section{Discrete variational embedding} \label{sec:dealp} The time scale embedding can also be used to define a delta analogue of the variational functional \eqref{lagrangianfunctional}. Using Definition~\ref{embeddingfunctional}, and remembering that $\sigma(b)=b$, the time scale embedding of \eqref{lagrangianfunctional} is \begin{equation} \label{funct} \mathcal{L}_{\Delta}(x) =\int_a^b L\left(t,x(t) ,\Delta[x](t)\right)\, \Delta t =\int_a^b \widetilde{L}_\mathbb{T}[x](t) \Delta t. \end{equation} A calculus of variations on time scales for functionals of type \eqref{funct} is developed in Section~\ref{sec:mr}.
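Before proceeding, the defect of scheme \eqref{alg1} pointed out above can be observed directly. The following Python sketch (the potential $U(x)=x^2/2$ and the initial data are assumptions chosen for the experiment) iterates \eqref{alg1} for the harmonic oscillator and monitors the discrete energy $\frac{1}{2}v_k^2+U(x_k)$ with $v_k=(x_{k+1}-x_k)/h$: the energy drifts instead of being conserved (for this particular potential the drift happens to be a growth).

```python
import math

# Energy behaviour of the forward scheme x_{k+2} = 2 x_{k+1} - x_k - h^2 U'(x_k)
# for U(x) = x^2/2 (potential and initial data are assumptions for this sketch).

h, N = 0.01, 1000          # integrate the oscillator over [0, 10]
U = lambda x: 0.5 * x**2
U_prime = lambda x: x

x = [1.0, math.cos(h)]     # samples of the exact solution cos(t) at t = 0, h
for k in range(N - 1):
    x.append(2.0 * x[k + 1] - x[k] - h**2 * U_prime(x[k]))

def energy(k):
    """Discrete energy v_k^2/2 + U(x_k) with forward-difference velocity."""
    v = (x[k + 1] - x[k]) / h
    return 0.5 * v**2 + U(x[k])

drift = abs(energy(N - 1) - energy(0))
print(drift)  # noticeably nonzero: the scheme does not conserve energy
```

The variational integrator of the next example keeps this quantity bounded, which is one practical manifestation of coherence.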
Here we just emphasize that in the discrete case $\mathbb{T} =[a,b]\cap h\mathbb{Z}$ functional \eqref{funct} reduces to the classical discrete Lagrangian functional \begin{equation} \label{dpcv} \mathcal{L}_\Delta(x)= h \sum_{k=0}^{N-1} L\left(t_k,x_k,\Delta_+ x_k\right), \end{equation} where $N = \frac{b-a}{h}$, $x_k = x(a+hk)$ and $\Delta_+ x_k = \frac{x_{k+1}-x_k}{h}$, and that the Euler--Lagrange equation obtained by applying the discrete variational principle to \eqref{dpcv} takes the form \begin{equation} \label{eq:nabla:EL} \Delta_- \circ \frac{\partial L}{\partial v}\left(t,x(t),\Delta_+ x(t)\right) -\frac{\partial L}{\partial x}\left(t,x(t),\Delta_+ x(t)\right)= 0, \end{equation} $t \in \mathbb{T}^{\kappa}_{\kappa}$, where $\Delta_-$ is the backward finite difference operator defined by $\Delta_- f(t) = \frac{f(t)-f(t-h)}{h}$ \cite{BCGI,lu}. The numerical scheme corresponding to the discrete variational embedding, \textrm{i.e.}, to \eqref{eq:nabla:EL}, is called in the literature a {\it variational integrator} \cite{BCGI,lu}. The next example shows that the variational integrator associated with the problem of Example~\ref{ex1} is a better numerical scheme than \eqref{alg1}. \begin{example} \label{ex2} Consider the same Lagrangian as in Example~\ref{ex1} of Section~\ref{sec:deEL}: $L(t,x,v) = \frac{1}{2} v^2 - U(x)$, where $(t,x,v) \in [a,b] \times \mathbb{R} \times \mathbb{R}$. The Euler--Lagrange equation \eqref{eq:nabla:EL} can be written as $$ \frac{x_{k+1}-2 x_{k}+x_{k-1}}{h^2} + \frac{\partial U}{\partial x}(x_k) = 0, \quad k = 1, \ldots, N-1, $$ where $N = \frac{b-a}{h}$ and $x_k = x(a+hk)$. This numerical scheme possesses very good properties. In particular, it is easily seen that the order of approximation is now two, not one as in Example~\ref{ex1}.
\end{example} We remark that the form of $\mathcal{L}_{\Delta}$ given by \eqref{funct} is not the usual one in the literature of time scales (see \cite{bartos:torres,malina:T,naty} and references therein). Indeed, in the literature of the calculus of variations on time scales, the following version of the Lagrangian functional is studied: \begin{equation} \label{eq:std:cv:t} \mathcal{L}_{\mathbb{T}}^{\rm usual}(x) = \int_a^b L\left(t,x(\sigma(t)),\Delta[x](t)\right) \Delta t. \end{equation} However, the composition of $x$ with the forward jump $\sigma$ found in \eqref{eq:std:cv:t} does not seem natural from the point of view of embedding. \section{Delta-integral embedding of the Euler--Lagrange equation in integral form} \label{sec:intgral} We begin by rewriting the classical Euler--Lagrange equation in integral form. Integrating \eqref{eq:class:EL:pre}, we obtain \begin{equation} \label{eq:class:EL:if} \partial_3[L]\left(t,x(t),D[x](t)\right) = \int_a^{t} \partial_2[L]\left(\tau,x(\tau),D[x](\tau)\right) d\tau + c, \end{equation} for some constant $c$ and all $t \in [a,b]$ or, using the evaluation operator, \begin{equation*} \widetilde{\partial_3[L]}[x](t) = \int_a^{t} \widetilde{\partial_2[L]}[x](\tau) d\tau + c. \end{equation*} Using Definition~\ref{embeddingfunctional}, we obtain the delta-integral embedding of the classical Euler--Lagrange equation \eqref{eq:class:EL:if}.
\begin{lemma}[Delta-integral embedding of the Euler--Lagrange equation in integral form] The delta-integral embedding of the Euler--Lagrange equation \eqref{eq:class:EL:if} is given by \begin{equation} \label{eq:class:EL:if:ts} \partial_3[L]\left(t,x(t),\Delta[x](t)\right) = \int_a^{\sigma(t)} \partial_2[L]\left(\tau,x(\tau), \Delta[x](\tau)\right) \Delta \tau + c \end{equation} or, equivalently, as \begin{equation*} \widetilde{\partial_3[L]}_\mathbb{T}[x](t) = \int_a^{\sigma(t)} \widetilde{\partial_2[L]}_\mathbb{T}[x](\tau) \Delta \tau + c , \end{equation*} where $c$ is a constant and $t \in \mathbb{T}^\kappa$. \end{lemma} Note that in the particular case $\mathbb{T}=[a,b]\cap h\mathbb{Z}$ equation \eqref{eq:class:EL:if:ts} gives the discrete Euler--Lagrange equation \begin{equation} \label{neleq} \frac{\partial L}{\partial v}\left(t_k,x_k,\frac{x_{k+1}-x_k}{h}\right) = h \sum_{i=0}^{k} \frac{\partial L}{\partial x}\left(t_i,x_i, \frac{x_{i+1}-x_i}{h}\right) + c , \end{equation} where $t_i = a + h i$, $i = 0, \ldots, k$, $x_i = x(t_i)$, and $k = 0, \ldots, N-1$. This numerical scheme is different from \eqref{eq:d:d:EL} and \eqref{eq:nabla:EL}, and has not been discussed before in the literature with respect to embedding and coherence. This is done in Section~\ref{sec:mr}. \section{The delta-variational embedding and coherence} \label{sec:mr} Our next theorem shows that equation \eqref{eq:class:EL:if:ts} can also be obtained from the least action principle. In other words, Theorem~\ref{thm:mr} asserts that the delta-integral embedding of the classical Euler--Lagrange equation in integral form \eqref{eq:class:EL:if} and the delta-variational embedding are coherent. \begin{theorem} \label{thm:mr} If $\hat{x}$ is a local minimizer or maximizer to \eqref{funct} subject to the boundary conditions $x(a)=x_a$ and $x(b)=x_b$, then $\hat{x}$ satisfies the Euler--Lagrange equation \eqref{eq:class:EL:if:ts} for some constant $c$ and all $t \in {\mathbb{T}}^\kappa$.
\end{theorem} \begin{proof} Suppose that $\mathcal{L}_{\Delta}$ has a weak local extremum at $\hat{x}$. Let $x = \hat{x} + \varepsilon h$, where $\varepsilon\in \mathbb{R}$ is a small parameter and $h\in C^1_{rd}$ is such that $h(a) =h(b)=0$. We consider $$ \phi(\varepsilon) := \mathcal{L}_{\Delta}(\hat{x} + \varepsilon h) = \int_a^b L\left(t,\hat{x}(t)+\varepsilon h(t), \Delta[\hat{x}](t)+ \varepsilon \Delta[h](t)\right) \Delta t. $$ A necessary condition for $\hat{x}$ to be an extremizer is given by \begin{equation} \label{eq:FT} \left.\phi'(\varepsilon)\right|_{\varepsilon=0} = 0 \Leftrightarrow \int_a^b \Bigl( \partial_2[L]\left(t,\hat{x}(t),\Delta[\hat{x}](t)\right) h(t) + \partial_3[L]\left(t,\hat{x}(t),\Delta[\hat{x}](t)\right) \Delta[h](t)\Bigr) \Delta t = 0. \end{equation} The integration by parts formula \eqref{int:par:delta} gives \begin{multline*} \int_a^b \partial_2[L]\left(t,\hat{x}(t),\Delta[\hat{x}](t)\right) h(t)\Delta t\\ =\left.\left(h(t)\int_a^t \partial_2[L]\left(\tau,\hat{x}(\tau), \Delta[\hat{x}](\tau)\right) \Delta \tau\right)\right|_{t=a}^{t=b} -\int_a^b \left(\int_a^{\sigma (t)} \partial_2[L]\left(\tau, \hat{x}(\tau),\Delta[\hat{x}](\tau)\right) \Delta \tau \, \Delta[h](t)\right) \Delta t. \end{multline*} Because $h(a) =h(b)= 0$, the necessary condition \eqref{eq:FT} can be written as \begin{equation*} \int_a^b \left(\partial_3[L]\left(t,\hat{x}(t),\Delta[\hat{x}](t)\right) -\int_a^{\sigma(t)} \partial_2[L]\left(\tau,\hat{x}(\tau),\Delta[\hat{x}](\tau)\right)\Delta \tau \right) \Delta[h](t) \, \Delta t = 0 \end{equation*} for all $h\in C_{rd}^1$ such that $h(a) = h(b)=0$. Thus, by the Dubois--Reymond Lemma (see \cite[Lemma~4.1]{b7}), we have \begin{equation*} \partial_3[L]\left(t,\hat{x}(t),\Delta[\hat{x}](t)\right) =\int_a^{\sigma(t)} \partial_2[L]\left(\tau,\hat{x}(\tau),\Delta[\hat{x}](\tau)\right)\Delta \tau + c \end{equation*} for some $c\in \mathbb{R}$ and all $t \in {\mathbb{T}}^\kappa$.
\end{proof} \section{Conclusion} \label{sec:lackCo} Given a variational functional and a corresponding Euler--Lagrange equation, the problem of coherence concerns the coincidence of a direct embedding of the given Euler--Lagrange equation with the one obtained from the application of the embedding to the variational functional followed by application of the least action principle. An embedding is not always coherent and a nontrivial problem is to find conditions under which the embedding can be made coherent. An example of this is given by the standard discrete embedding: the discrete embedding of the Euler--Lagrange equation gives \eqref{eq:d:d:EL}, but the Euler--Lagrange equation \eqref{eq:nabla:EL} obtained by the standard discrete calculus of variations does not coincide with it. On the other hand, from the point of view of numerical integration of ordinary differential equations, we know that the discrete variational embedding is better than the direct discrete embedding of the Euler--Lagrange equation (\textrm{cf.} Example~\ref{ex2}). The lack of coherence means that a purely algebraic discretization of the Euler--Lagrange equation is not good in general, because we miss some important dynamical properties of the equation which are encoded in the Lagrangian functional. A method to overcome this lack of coherence was recently proposed in \cite{BCGI}, and consists in rewriting the classical Euler--Lagrange equation \eqref{eq:class:EL:pre} as an asymmetric differential equation using left and right derivatives. Inspired by the results of \cite{fmt}, here we propose a completely different point of view on embedding, based on the Euler--Lagrange equation in integral form. For that we introduce the new delta-integral embedding (\textrm{cf.} Definition~\ref{embeddingfunctional}). Our main result shows that the delta-integral embedding and the delta-variational embedding are coherent for any possible discretization (Theorem~\ref{thm:mr} is valid on an arbitrary time scale).
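Theorem~\ref{thm:mr} can also be illustrated numerically: on $\mathbb{T}=[0,1]\cap h\mathbb{Z}$, a trajectory generated by the variational scheme \eqref{eq:nabla:EL} satisfies the integral-form equation \eqref{neleq} with a single constant $c$. The following Python sketch checks this; the potential $U(x)=x^2/2$ and the initial data are assumptions chosen for the experiment.

```python
# Coherence check on T = [0,1] ∩ hZ: a trajectory generated by the
# variational scheme (x_{k+1} - 2 x_k + x_{k-1})/h^2 + U'(x_k) = 0
# should satisfy the integral-form equation with one constant c.
# Potential U(x) = x^2/2 and initial data are assumptions for this sketch.

h, N = 0.01, 100
U_prime = lambda x: x

x = [1.0, 1.0]                       # x_0, x_1 (assumed initial data)
for k in range(1, N):
    x.append(2.0 * x[k] - x[k - 1] - h**2 * U_prime(x[k]))

# Integral form: dL/dv(t_k, x_k, Δ+x_k) - h Σ_{i=0}^k dL/dx(t_i, x_i, Δ+x_i)
# should equal the same constant c for every k (dL/dv = v, dL/dx = -U'(x)).
residuals = []
running_sum = 0.0
for k in range(N):
    v_k = (x[k + 1] - x[k]) / h
    running_sum += -U_prime(x[k])
    residuals.append(v_k - h * running_sum)

c_spread = max(residuals) - min(residuals)
print(c_spread)  # ≈ 0 up to rounding: every k gives the same constant c
```

Up to rounding, the quantity $\frac{\partial L}{\partial v}-h\sum \frac{\partial L}{\partial x}$ is the same for every $k$, which is exactly the coherence asserted by Theorem~\ref{thm:mr}.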
\end{document}
\begin{document} \title{Homological Algebra of Heyting modules} \centerline{\emph{Dept. of Mathematics, Indian Institute of Science, Bangalore-560012, India.}} \centerline{\emph{Email: [email protected]}} \begin{abstract} The collection of open sets of a topological space forms a Heyting algebra, which leads to the idea of a Heyting algebra as a generalized topological space. In fact, a sober topological space may be reconstructed from its locale of open sets. This has given rise to a good theory of presheaves and sheaves over locales. At the same time, several ring-like properties of Heyting algebras have also been studied. The purpose of this paper is to study a non-abelian homological theory for modules over Heyting algebras. \end{abstract} \emph{MSC (2010) Subject Classification: 18G50} \emph{Keywords: Heyting algebras, Heyting modules, non-abelian homological algebra} \section{Introduction} The essence of an abelian category is the fact that every morphism may be factored uniquely as a cokernel followed by a kernel. However, several difficulties arise when one tries to adapt this axiom to non-abelian or even non-additive settings. This is the reason the study of homological algebra has been dominated by that of abelian categories. Over the years, many efforts have been made to develop homological algebra with categories that are not additive. One of the earliest appears to be a series of papers by Fr\"{o}hlich \cite{Fro1,Fro2,Fro3} which gave a theory of derived functors for non-abelian groups. The work of Eckmann and Hilton \cite{EH1}, \cite{EH2}, \cite{EH3} was motivated by the category of topological spaces while Gerstenhaber gave a theory of Baer extensions in \cite{MG}.
We also refer the reader to the study of Barr-exact categories (see \cite{Barr}), modular categories (see Carboni \cite{Carboni}), protomodular categories (see Bourn \cite{Bourn}), Mal'cev categories (see Carboni, Lambek and Pedicchio \cite{CLP}), semi-abelian categories (see Janelidze, M\'{a}rki and Tholen \cite{GJ}) and homological categories (see Grandis \cite{Grandis}) for other instances of work in this area. More recently, Connes and Consani \cite{one} have presented a comprehensive theory of ``Homological Algebra in characteristic one'' over the Boolean semifield $\mathbb B=\{0,1\}$. Their motivation was to develop a theory that would be suitable for treating certain non-additive categories of sheaves over a topos. In this paper, our objective is to develop a homological theory similar to \cite{one} for modules over a Heyting algebra. We recall that a Heyting algebra $H$ is a lattice such that for each $x\in H$, the functor $\_\_\wedge x: H\longrightarrow H$ has a right adjoint. In other words, for each $x,y\in H$, there is an element $(x\rightarrow y)_H\in H$ such that $z\leq (x\rightarrow y)_H$ if and only if $z\wedge x\leq y$. Here the partially ordered set underlying the lattice $H$ is treated as a category. For each element $x\in H$, we may consider $\righthalfcap x:=(x\rightarrow 0)_H$. If $\righthalfcap \righthalfcap x=x$ for every $x\in H$, then $H$ becomes a Boolean algebra. In general, the collection of Heyting algebras contains the collection of Boolean algebras. The significance of Heyting algebras lies in the fact that they may be treated as generalized topological spaces, a deep idea that was first advanced by Ehresmann \cite{Ehres} and B\'{e}nabou \cite{Ben}.
If $X$ is a topological space, the collection $\Omega X$ of open sets of $X$ is a Heyting algebra: the joins are given by unions, the finite meets are given by intersection and \begin{equation*} (U\rightarrow V)_{\Omega X}:=int(U^c\cup V) \end{equation*} for any open subsets $U$, $V\in \Omega X$. We observe that $\Omega X$ need not be a Boolean algebra. For instance, if $X=\mathbb R$ with the usual topology, the open set $U=\mathbb R\backslash \{0\}$ satisfies $\righthalfcap U=int(U^c\cup \emptyset)=\emptyset$ and hence $\righthalfcap\righthalfcap U\ne U$. For any topological space $X$, the Heyting algebra $\Omega X$ forms a locale and there is an equivalence of categories between spatial locales and sober topological spaces (see \cite[Chapter II]{PJT}). We recall that a topological space is said to be sober if every irreducible closed subset is the closure of a unique generic point. This applies in particular to the Zariski spectrum of a commutative ring. This treatment of Heyting algebras as generalized topological spaces has led to a well-developed theory of sheaves over locales, for which we again refer the reader to Johnstone's book \cite[Chapter V]{PJT}. At the same time, several ``ring-like properties'' have also been studied for Heyting algebras, such as prime ideals, filters and the maximal ideal theorem (see \cite[Chapter I]{PJT}). Therefore, it is only natural to study a theory of modules over Heyting algebras, which is the purpose of this paper. We now describe the paper in more detail. We begin by introducing Boolean modules in Section 2. A Boolean module over a Boolean algebra $B$ is a join semilattice $M$ equipped with an action $B\times M\longrightarrow M$ that is distributive and an operator $\righthalfcap_M:M\longrightarrow B$ satisfying certain conditions described in Definition \ref{Bmodule}.
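The Heyting structure of $\Omega X$ is easy to compute in finite models. The following Python sketch (the three-point space and its topology are assumptions chosen to mimic the example $U=\mathbb R\backslash \{0\}$ above) computes $(U\rightarrow V)_{\Omega X}=int(U^c\cup V)$ and exhibits an open set with $\righthalfcap\righthalfcap U\ne U$.

```python
# A finite model of the Heyting algebra ΩX: open sets of the 3-point space
# X = {-1, 0, 1} with opens ∅, {-1}, {1}, {-1,1}, X (a topology chosen as an
# assumption, to mimic the example U = R \ {0} in the text).

X = frozenset({-1, 0, 1})
opens = [frozenset(), frozenset({-1}), frozenset({1}), frozenset({-1, 1}), X]

def interior(S):
    """Largest open set contained in S (the topology is closed under unions)."""
    return max((O for O in opens if O <= S), key=len)

def implies(U, V):
    """(U → V) = int(U^c ∪ V)."""
    return interior((X - U) | V)

def neg(U):
    """¬U = (U → ∅) = int(U^c)."""
    return implies(U, frozenset())

U = frozenset({-1, 1})
print(neg(U))       # frozenset(): ¬U is empty
print(neg(neg(U)))  # X, so ¬¬U ≠ U: this ΩX is not Boolean
```

One can also verify in this model the adjunction $z\leq (U\rightarrow V)$ if and only if $z\wedge U\leq V$, which is the defining property of a Heyting algebra.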
A key fact proved in this section is that when $B$ is a finite Boolean algebra, any distributive module over the lattice underlying $B$ is canonically equipped with such an operator $\righthalfcap_M:M\longrightarrow B$. This is in preparation for the notion of a Heyting module, which we introduce in Section 3. Let $H$ be a Heyting algebra. A Heyting module $M$ over $H$ is a join semilattice $M$ with an action $H\times M\longrightarrow M$ such that for each $m\in M$, the functor $\_\_ \wedge m:H\longrightarrow M$ has a right adjoint (see Definition \ref{DD3.2}). In other words, for every $m$, $n\in M$, there is an element $(m\rightarrow n)_M\in H$ such that \begin{equation*} x\wedge m\leq n \qquad\Leftrightarrow\qquad x\leq (m\rightarrow n)_M \end{equation*} for any $x\in H$. We denote by $Heymod_H$ the category of Heyting modules over a Heyting algebra $H$. The basic properties of the operation $(\_\_\rightarrow \_\_)_M$ as well as those of morphisms of Heyting modules are established in Section 3. In particular, if $H$ is a finite Heyting algebra, we show that any module $M$ over the underlying lattice $H$ is canonically equipped with the structure of a Heyting module. We recall that for any Heyting algebra, the collection $H_{\righthalfcap\righthalfcap}$ of elements $x\in H$ satisfying $\righthalfcap \righthalfcap x=x$ forms a Boolean algebra. If $H_{\righthalfcap\righthalfcap}$ is a sublattice of $H$ and $M\in Heymod_H$, we show that the collection $M_{\righthalfcap\righthalfcap}$ of all elements of $M$ such that $(m\rightarrow 0)_M\in H_{\righthalfcap\righthalfcap}$ forms a Boolean module over the Boolean algebra $H_{\righthalfcap\righthalfcap}$. In Section 4, we begin studying the dual $M^\bigstar=Heymod_H(M,H)$ of a Heyting module $M\in Heymod_H$. In order to understand a morphism $\phi : M\longrightarrow H$ in $Heymod_H$, we need to consider the submodules $K_c:=\{\mbox{$m\in M$ $\vert$ $\phi(m)\leq c$ }\}$ as $c$ varies over all elements of $H$. 
The relations between the objects $\{K_c\}_{c\in H}$ are then formalized to define ``hereditary systems of submodules'' (see Definition \ref{D4.3}). If $H$ is a finite Heyting algebra, we obtain a one-to-one correspondence between elements $\phi\in M^\bigstar$ and hereditary systems of submodules of $M$. Further, if $H$ is a Boolean algebra, hereditary systems of submodules correspond to ordinary hereditary submodules of the join semilattice $M$. In the latter case, we are able to obtain an analogue of the Hahn--Banach Theorem, which shows that the morphisms in the dual $M^\bigstar$ are able to separate elements of $M$. It is easy to show that $Heymod_H$ is a semiadditive category, i.e., finite products and finite coproducts coincide. However, we would like to know the extent to which $Heymod_H$ satisfies a non-additive version of the (AB2) axiom for abelian categories. In other words, we would like to compare coimages and images in $Heymod_H$. This is done in Section 5 by considering ``kernel pairs'' and ``cokernel pairs'' in a manner analogous to \cite{one}. Additionally, for a submodule $K\subseteq N$, we set \begin{equation*} \widetilde{K}=\{\mbox{$n\in N$ $\vert$ $t_1(n)=t_2(n)$ $\forall$ $Q\in Heymod_H$ $\forall$ $t_1,t_2:N\longrightarrow Q$ such that $t_1(k)=t_2(k)$ $\forall$ $k\in K$ }\} \end{equation*} Then, if $H$ is a finite Heyting algebra and $f:M\longrightarrow N$ is a morphism in $Heymod_H$, we show that $\widetilde{Coim(f)}=Im(f)$. If $H$ is finite and Boolean, this reduces to $Coim(f)=Im(f)$. We continue in Section 6 with $H$ being a finite Heyting algebra. We show that $Heymod_H$ is a tensor category, with the functor $\_\_\otimes_HN:Heymod_H \longrightarrow Heymod_H$ being left adjoint to $Heymod_H(N,\_\_):Heymod_H\longrightarrow Heymod_H$. This gives rise to a good theory of `extension and restriction of scalars' for Heyting modules.
We now consider the endofunctor $\bigperp : Heymod_H\longrightarrow Heymod_H$ given by taking each $M\in Heymod_H$ to $M^2$ and each morphism $f:M\longrightarrow N$ to $(f,f):M^2\longrightarrow N^2$. This defines a comonad on $Heymod_H$ and we consider the corresponding Kleisli category $Heymod_H^2$ in Section 7 whereas the corresponding Eilenberg-Moore category $Heymod_H^{\mathfrak s}$ is studied in Section 8. For modules over the semifield $\{0,1\}$, the Kleisli and Eilenberg-Moore categories of the squaring endofunctor have been studied by Connes and Consani \cite{one}. The category $Heymod_H^2$ has the same objects as $Heymod_H$. A morphism $M\longrightarrow N$ in $Heymod_H^2$ consists of a pair $(f,g):M\longrightarrow N$ of morphisms in $Heymod_H$, with composition given by (see \eqref{composition}) \begin{equation*} (f',g')\circ (f,g)=(f'\circ f + g'\circ g,f'\circ g+g'\circ f) \end{equation*} On the other hand, an object of $Heymod_H^{\mathfrak s}$ is a pair $(M,\sigma_M)$ consisting of $M\in Heymod_H$ and $\sigma_M$ an $H$-linear involution of $M$. The conditions describing monomorphisms, epimorphisms and strict exactness for sequences in $Heymod_H^2$ and $Heymod_H^{\mathfrak s}$ are studied respectively in Sections 7 and 8. Further, we show in Section 9 that $Heymod_H^{\mathfrak s}$ is a semiexact category in the sense of Grandis \cite{Grandis} (see Theorem \ref{Tt9.6}). Finally, when $H$ is a finite Boolean algebra, we show in Section 10 that $Heymod_H^{\mathfrak s}$ also becomes a homological category in the sense of Grandis \cite{Grandis} (see Theorem \ref{Tq10.10}). We return to the case of an arbitrary (not necessarily finite) Heyting algebra $H$ in Section 11. Using an ultrafilter criterion of Finocchiaro \cite{Fino} and some recent methods of Finocchiaro, Fontana and Spirito \cite{FFS}, we show that the collection $Sub(M)$ of Heyting submodules of a Heyting module $M$ carries the structure of a spectral space. 
In other words, $Sub(M)$ is homeomorphic to the Zariski spectrum of a commutative ring. We show more generally that any $Sub^{\mathbf c}(M)$ is a spectral space, where $Sub^{\mathbf c}(M)$ is the collection of submodules fixed by a closure operator $\mathbf c$ of finite type on $Sub(M)$ (see Definition \ref{D10.2} and Proposition \ref{P10.4}). In particular, it follows that the collection of hereditary submodules of a Heyting module $M$ is equipped with the structure of a spectral space. \section{Boolean modules} By definition, a lattice $L$ is a partially ordered set $(L,\leq)$ such that every finite subset $S\subseteq L$ has a supremum (called the join $\vee S:=\underset{s\in S}{\vee}s$) and an infimum (called the meet $\wedge S:=\underset{s\in S}{\wedge}s$). Considering respectively the join and the meet of the empty subset, it follows that $L$ contains two distinguished elements $0$ and $1$ such that every $x\in L$ satisfies $0\leq x\leq 1$. A lattice is said to be distributive if it satisfies the following two conditions: (1) For any $x$, $y$, $z\in L$, we have $x\wedge (y\vee z)=(x\wedge y)\vee (x\wedge z)$. (2) For any $x$, $y$, $z\in L$, we have $x\vee (y\wedge z)=(x\vee y)\wedge (x\vee z)$. In fact, it may be shown (see \cite[Lemma I.1.5]{PJT}) that a lattice $L$ satisfies condition (1) if and only if it satisfies condition (2). Further, in a distributive lattice, it may be verified that given an element $x\in L$, there exists at most one element $y\in L$ such that $x\vee y=1$ and $x\wedge y=0$. Such an element (if it exists) is referred to as a complement of $x$.
(2) An action $f:L\times M\longrightarrow M$ that is order preserving in both variables and such that \begin{equation*} \begin{array}{c} f(1_L,m)=m\quad f(0_L,m)=0_M \quad f(x\wedge y,m)=f(x,f(y,m))\\ \end{array} \end{equation*} for any $x$, $y\in L$ and $m\in M$. Here $0_L$ and $1_L$ are respectively the least element and largest element of $L$, while $0_M$ is the least element of the join semilattice $M$. For $x\in L$ and $m\in M$, we will typically denote by $x\wedge m$ the element $f(x,m)\in M$. Further, we will say that $M$ is a distributive module over $L$ if it satisfies \begin{equation*} f(x\vee y,m)=f(x,m)\vee f(y,m)\quad f(x,m\vee n)=f(x,m)\vee f(x,n) \end{equation*} for any $x$, $y\in L$ and $m$, $n\in M$. A morphism $g:M\longrightarrow M'$ of modules over $L$ is a morphism of join semilattices such that $g(x\wedge m)=x\wedge g(m)$ for each $x\in L$, $m\in M$. \end{defn} \begin{defn}\label{D2.1} (see \cite[$\S$ I.1.6]{PJT}) A Boolean algebra $B$ is a distributive lattice such that every element $x\in B$ has a complement in $B$. This gives a unary operation on $B$, which we will denote by $\righthalfcap_B:B \longrightarrow B$. \end{defn} It is clear that $\righthalfcap_B\circ\righthalfcap_B=id$ and $\righthalfcap_B(0)=1$. It can also be checked easily that $\righthalfcap_B:B \longrightarrow B$ is order-reversing and satisfies De Morgan's laws: \begin{equation} \righthalfcap_B(x\vee y)=\righthalfcap_B(x)\wedge \righthalfcap_B(y)\qquad \righthalfcap_B(x\wedge y)=\righthalfcap_B(x)\vee \righthalfcap_B(y) \end{equation} If $(B,\righthalfcap_B)$ and $(B',\righthalfcap_{B'})$ are Boolean algebras, the uniqueness of complements shows that a lattice morphism $f:B\longrightarrow B'$ automatically satisfies $f(\righthalfcap_{B}(x))=\righthalfcap_{B'}(f(x))$ for each $x\in B$. We should also mention (see \cite[$\S$ 1.9]{PJT}) that the category of Boolean algebras is isomorphic to the category of Boolean rings. 
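These identities are easy to verify mechanically in a small model. The following sketch (the powerset Boolean algebra of a three-element set is our illustrative choice, not a construction from the text) checks the complement axioms and De Morgan's laws:

```python
from itertools import combinations

# A finite Boolean algebra modeled as the powerset of a three-element
# set, with join = union, meet = intersection, complement = set difference.
GROUND = frozenset({1, 2, 3})
B = [frozenset(s) for r in range(len(GROUND) + 1)
     for s in combinations(sorted(GROUND), r)]

def join(x, y): return x | y            # x ∨ y
def meet(x, y): return x & y            # x ∧ y
def neg(x):     return GROUND - x       # the complement of x

# Complement axioms: x ∨ ¬x = 1 and x ∧ ¬x = 0.
assert all(join(x, neg(x)) == GROUND and meet(x, neg(x)) == frozenset()
           for x in B)

# De Morgan's laws, as displayed above.
assert all(neg(join(x, y)) == meet(neg(x), neg(y)) and
           neg(meet(x, y)) == join(neg(x), neg(y))
           for x in B for y in B)
```

Since complements in a distributive lattice are unique when they exist, `neg` is the only operation that can satisfy the two asserted complement axioms in this model.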
We will now introduce the concept of a Boolean module over a Boolean algebra. \begin{defn}\label{Bmodule} Let $(B,\righthalfcap_B)$ be a Boolean algebra. A Boolean module over $(B,\righthalfcap_B)$ consists of the following data: (1) A distributive module $M$ over the distributive lattice underlying $B$. (2) An operation $\righthalfcap_M:M\longrightarrow B$ satisfying the following conditions: \begin{equation}\label{eq2.2} \righthalfcap_M(0_M)=1_B \quad \righthalfcap_M(x\wedge m)=\righthalfcap_B(x)\vee \righthalfcap_M(m)\quad \righthalfcap_M(m\vee n)=\righthalfcap_M(m)\wedge \righthalfcap_M(n) \end{equation} for any $x\in B$ and $m$, $n\in M$. A morphism $g:(M,\righthalfcap_M)\longrightarrow (M',\righthalfcap_{M'})$ of $(B,\righthalfcap_B)$-modules is simply a morphism of modules over the distributive lattice underlying $B$. \end{defn} In Section 3, we will give one way of constructing Boolean modules by considering Heyting algebras that satisfy certain conditions. Throughout this section and the rest of this paper, we try to minimize the use of subscripts whenever the meaning is clear from context. As such, we will write $0$ for the least elements, $1$ for the largest elements and $\righthalfcap$ for the unary operators wherever applicable. \begin{lem}\label{L2.4} Let $M$ be a Boolean module over a Boolean algebra $B$. Then, the operation $\righthalfcap_M: M\longrightarrow B$ is order reversing. \end{lem} \begin{proof} We consider $m$, $n\in M$ such that $m\leq n$, i.e., $m\vee n=n$. It follows that: \begin{equation} \righthalfcap(n)=\righthalfcap(m\vee n)=\righthalfcap(m)\wedge \righthalfcap(n)\leq \righthalfcap(m) \end{equation} This proves the result. \end{proof} The next result shows that when we restrict to finite Boolean algebras, every distributive module is already a Boolean module. \begin{thm}\label{P2.5} Suppose that $(B,\righthalfcap_B)$ is a finite Boolean algebra and let $M$ be a distributive module over $B$. 
Then, there is a canonical map $\righthalfcap_M:M\longrightarrow B$ making $(M,\righthalfcap_M)$ a Boolean module over $(B,\righthalfcap_B)$. \end{thm} \begin{proof} Since $B$ is finite, for any $m\in M$, we can set $\righthalfcap_M(m)$ to be the finite join: \begin{equation} \righthalfcap_M(m):=\bigvee \{\mbox{$x\in B$ $\vert$ $x\wedge m=0$ }\}\in B \end{equation} Since $\_\_\wedge m: B\longrightarrow M$ preserves finite joins, it is clear that $\righthalfcap(m)\wedge m=0$. As such, we see that \begin{equation} x\wedge m =0 \qquad\Leftrightarrow \qquad x\leq \righthalfcap(m) \end{equation} for any $x\in B$. It now follows that for any $y\in B$ we have: \begin{equation}\label{eq2.5} y\leq \righthalfcap (x\wedge m)\quad\Leftrightarrow \quad y\wedge x\wedge m=0 \quad\Leftrightarrow \quad y\wedge x\leq \righthalfcap(m) \end{equation} For any elements $p$, $q$, $r$ in a Boolean algebra, it may be verified easily that $p\wedge q\leq r$ is equivalent to $p\leq \righthalfcap(q)\vee r$. Applying this in \eqref{eq2.5} we get \begin{equation} y\leq \righthalfcap (x\wedge m) \quad\Leftrightarrow\quad y\leq \righthalfcap(x)\vee \righthalfcap(m) \end{equation} for any $y\in B$. This gives $ \righthalfcap (x\wedge m) = \righthalfcap(x)\vee \righthalfcap(m)$. On the other hand, for $m$, $n\in M$, we have: \begin{equation} \begin{array}{ll} y\leq \righthalfcap (m\vee n) & \Leftrightarrow\quad y\wedge (m\vee n)=0\\ &\Leftrightarrow\quad \mbox{$(y\wedge m)=0$ and $(y\wedge n=0)$}\\ &\Leftrightarrow\quad \mbox{$(y\leq \righthalfcap(m))$ and $(y\leq \righthalfcap(n))$}\\ &\Leftrightarrow\quad y\leq \righthalfcap(m)\wedge \righthalfcap(n)\\ \end{array} \end{equation} Hence, we have $ \righthalfcap (m\vee n) =\righthalfcap(m)\wedge \righthalfcap(n)$. This proves the result. \end{proof} \section{Heyting Modules} We begin by recalling the notion of a Heyting algebra, which generalizes the concept of a Boolean algebra. 
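Before the formal definitions, here is a small computational preview (a sketch; the three-element chain and its integer encoding are our illustrative choices, not taken from the text): in a finite distributive lattice, the implication $(y\rightarrow z)$ characterizing a Heyting algebra can be computed as the join of all $x$ with $x\wedge y\leq z$, and, unlike in a Boolean algebra, double negation need not be the identity.

```python
# Sketch: a finite distributive lattice is a Heyting algebra, with
# (y -> z) computable as the join of {x : x ∧ y <= z}.  We use the
# three-element chain 0 < a < 1, encoded as the integers 0 < 1 < 2
# with meet = min and join = max.
H = [0, 1, 2]

def implies(y, z):
    # (y -> z) := join of all x with x ∧ y <= z (a finite join)
    return max((x for x in H if min(x, y) <= z), default=0)

def neg(y):
    # negation: ¬y := (y -> 0)
    return implies(y, 0)

# The defining adjunction: x ∧ y <= z  <=>  x <= (y -> z).
assert all((min(x, y) <= z) == (x <= implies(y, z))
           for x in H for y in H for z in H)

# The middle element a (encoded as 1) is not regular: ¬¬a is the top
# element rather than a, so this Heyting algebra is not Boolean.
a = 1
assert neg(a) == 0 and neg(neg(a)) == 2
```

The failure of double negation to be the identity is exactly what separates Heyting algebras from Boolean algebras, as the formal treatment below makes precise.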
A partially ordered set $(L,\leq )$ may be viewed as a category $L$ whose objects are the elements of $L$ and such that there is a single morphism $x\longrightarrow y$ in $L$ whenever $x\leq y$ in $L$. When $L$ is a lattice, for each fixed $y\in L$, the association $x\mapsto x\wedge y$, $\forall$ $x\in L$ defines a functor from $L$ to $L$. \begin{defn}\label{D3.1} (see \cite[$\S$ I.1.10]{PJT}) A Heyting algebra $H$ is a lattice such that for each fixed $y\in H$, the functor defined by the association $x\mapsto x\wedge y$, $\forall$ $x\in H$ has a right adjoint. In other words, for any elements $y$, $z\in H$, there is an element $(y\rightarrow z)_H\in H$ such that \begin{equation*} x\wedge y\leq z\quad\Leftrightarrow\quad x\leq (y\rightarrow z)_H \end{equation*} for any $x\in H$. \end{defn} Every Heyting algebra is distributive. For $y\in H$, the element $\righthalfcap_H(y):=(y\rightarrow 0)_H\in H$ is referred to as the negation of $y$. If the operation $\righthalfcap_H$ satisfies $\righthalfcap_H \righthalfcap_H(y)=y$ for every $y\in H$, then $H$ becomes a Boolean algebra (see \cite[Lemma I.1.11]{PJT}). We are now ready to introduce the concept of a Heyting module. \begin{defn}\label{DD3.2} Let $H$ be a Heyting algebra. A Heyting module over $H$ consists of the following data: (1) A distributive module $M$ over the distributive lattice underlying $H$. (2) For each $m\in M$, the functor $H\longrightarrow M$ defined by the association $x\mapsto x\wedge m$ has a right adjoint. In other words, for any $m$, $n\in M$, there is an element $(m\rightarrow n)_M\in H$ such that \begin{equation*} x\wedge m\leq n\quad\Leftrightarrow \quad x\leq (m\rightarrow n)_M \end{equation*} for any $x\in H$. A morphism of Heyting modules is simply a morphism of modules over the distributive lattice underlying $H$. We denote by $Heymod_H$ the category of Heyting modules over a Heyting algebra $H$.
\end{defn} Given a Heyting module $M$ and an element $m\in M$, we define the negation $\righthalfcap_M(m):=(m\rightarrow 0)_M$. Again, we will generally omit the subscripts whenever the meaning is clear from context. \begin{thm}\label{P3.25} Let $f:M\longrightarrow N$ be a morphism of Heyting modules over a Heyting algebra $H$. Suppose that $f:M\longrightarrow N$ has a right adjoint $g:N\longrightarrow M$. Then, for any $m\in M$ and $n\in N$, we have $(f(m)\rightarrow n)_{N}=(m \rightarrow g(n))_M$ in $H$. \end{thm} \begin{proof} Since $f:M\longrightarrow N$ is a morphism of Heyting modules, we know that for any $m\in M$, the functor $\_\_\wedge f(m):H \longrightarrow N$ can be expressed as the composition \begin{equation} \_\_\wedge f(m) = f\circ (\_\_\wedge m):H\longrightarrow N \end{equation} As such, the right adjoint $(f(m)\rightarrow \_\_)_N:N\longrightarrow H$ of $\_\_\wedge f(m)$ is the composite of the right adjoints of the two factors, namely $(m\rightarrow \_\_)_M\circ g:N\longrightarrow H$. Evaluating at $n\in N$ gives $(f(m)\rightarrow n)_{N}=(m\rightarrow g(n))_M$. \end{proof} \begin{lem}\label{L3.3} Let $M$ be a Heyting module over a Heyting algebra $H$. Then: (a) For $m$, $m'$, $n\in M$ we have \begin{equation} (m\rightarrow n)\wedge (m'\rightarrow n)=((m\vee m')\rightarrow n) \end{equation} In particular, we know that $\righthalfcap(m\vee m')=\righthalfcap(m)\wedge \righthalfcap(m')$. (b) For $x\in H$ and $m$, $n\in M$ we have \begin{equation} ((x\wedge m)\rightarrow n) = (x\rightarrow (m\rightarrow n)) \end{equation} In particular, we know that $\righthalfcap(x\wedge m)=(x\rightarrow \righthalfcap(m))$. (c) For each $n\in M$, the association $(\_\_\rightarrow n): M\longrightarrow H$ is order reversing. (d) $\righthalfcap_M(0_M)=1_H$.
\end{lem} \begin{proof} (a) For any $x\in H$, we see that \begin{equation} \begin{array}{ll} x\leq ((m\vee m')\rightarrow n)& \Leftrightarrow\quad (x\wedge m)\vee (x\wedge m')\leq n \\ &\Leftrightarrow\quad (x\wedge m)\leq n \quad \&\quad (x\wedge m')\leq n\\ &\Leftrightarrow\quad (x\leq (m\rightarrow n))\quad\&\quad (x\leq (m'\rightarrow n))\\ &\Leftrightarrow\quad x\leq (m\rightarrow n)\wedge (m'\rightarrow n)\\ \end{array} \end{equation} (b) For any $y\in H$, we see that \begin{equation} y\leq ((x\wedge m)\rightarrow n) \quad\Leftrightarrow\quad (y\wedge x\wedge m)\leq n \quad\Leftrightarrow\quad y\wedge x\leq (m\rightarrow n) \quad\Leftrightarrow\quad y\leq (x\rightarrow (m\rightarrow n)) \end{equation} The result of (c) is an immediate consequence of (a). The result of (d) is clear from the fact that $1_H\wedge 0_M=0_M$. \end{proof} \begin{thm}\label{P3.4} Let $M$ be a Heyting module over a Heyting algebra $H$. Then, if $(H,\righthalfcap_H)$ is a Boolean algebra, $(M,\righthalfcap_M)$ is a Boolean module over $H$. \end{thm} \begin{proof} Since the Heyting algebra $H$ is actually Boolean, we know (see \cite[$\S$ I.1.10]{PJT}) that $(x\rightarrow y)_H=\righthalfcap_H(x)\vee y$ for any elements $x$, $y\in H$. From Lemma \ref{L3.3}(b) it now follows that for $x\in H$, $m\in M$, we have: \begin{equation} \righthalfcap_M(x\wedge m)=(x\wedge m\rightarrow 0)=(x\rightarrow \righthalfcap_M(m))=\righthalfcap_H(x)\vee \righthalfcap_M(m) \end{equation} The other conditions mentioned in \eqref{eq2.2} that $(M,\righthalfcap_M)$ must satisfy in order to be a Boolean module are also clear from Lemma \ref{L3.3}. \end{proof} We will now show that when $H$ is a finite Heyting algebra, every distributive module is already a Heyting module. \begin{thm}\label{P3.5} Suppose that $H$ is a finite Heyting algebra and let $M$ be a distributive module over $H$. Then, there is a canonical map $(\_\_\rightarrow\_\_)_M:M\times M\longrightarrow H$ making $M$ a Heyting module over $H$. 
\end{thm} \begin{proof} We fix some $m\in M$. Then, the right adjoint of the functor $\_\_\wedge m : H\longrightarrow M$ is given by setting \begin{equation}\label{e3.7pg} (m\rightarrow n)_M:=\bigvee\{\mbox{$x\in H$ $\vert$ $x\wedge m\leq n$}\} \end{equation} for each $n\in M$. This gives $M$ the canonical structure of a Heyting module. \end{proof} \begin{cor}\label{C3.6ty} Suppose that $H$ is a finite Heyting algebra and $g:M\longrightarrow N$ is a morphism in $Heymod_H$. Then, for any $m$, $m'\in M$, we have $(m\rightarrow m')_M\leq (g(m)\rightarrow g(m'))_{N}$ in $H$. \end{cor} \begin{proof} From \eqref{e3.7pg}, we know that $(m\rightarrow m')_M$ is the join of all the elements $x\in H$ such that $x\wedge m\leq m'$. Since $g$ is a morphism in $Heymod_H$, $x\wedge m\leq m'$ $\Rightarrow$ $x\wedge g(m)=g(x\wedge m)\leq g(m')$. The result is now clear. \end{proof} \begin{rem}\label{XR3.7}\emph{(1) The proof of Proposition \ref{P3.5} is a special case of the more general Adjoint Functor Theorem for join semilattices (see, for instance, \cite[$\S$ I.4.2]{PJT}).} \emph{(2) It is known (see, for instance, \cite[Exercise I.1.12(ii)]{PJT}) that every finite distributive lattice is a Heyting algebra. Conversely, we know that every Heyting algebra is a distributive lattice. As such, a finite Heyting algebra is simply a finite distributive lattice.} \emph{(3) It is easy to give examples of finite Heyting algebras which are not Boolean algebras. For instance, we may consider any finite totally ordered set $H=\{0=x_1<x_2<\cdots<x_k=1 \}$. Then, $H$ becomes a Heyting algebra (see \cite[$\S$ I.1.12]{PJT}) by setting} \begin{equation} (x_m\rightarrow x_n)_H:=\left\{\begin{array}{ll} 1 & \mbox{if $m\leq n$} \\ x_n & \mbox{otherwise}\\ \end{array}\right. \end{equation} \emph{Then, we see that every $x_m\ne 0$ satisfies $\righthalfcap\righthalfcap x_m=1$. As such, $H$ cannot be a Boolean algebra for any $k\geq 3$. } \end{rem} We record here the following fact.
\begin{thm}\label{P4.5} (a) In the category $Heymod_H$ of Heyting modules over a Heyting algebra $H$, a morphism $g:M\longrightarrow N$ is a monomorphism $\Leftrightarrow$ the underlying map is injective. (b) If $g:M\longrightarrow N$ is a monomorphism in $Heymod_H$, then for any $m$, $m'\in M$, we have $(m\rightarrow m')_M= (g(m)\rightarrow g(m'))_{N}\in H$. \end{thm} \begin{proof} (a) The $\Leftarrow$ implication is obvious. On the other hand, for any element $m\in M$, we can consider the morphism $g_m:H\longrightarrow M$ in $Heymod_H$ taking each $c\in H$ to $c\wedge m\in M$. Then, if $m$, $m'\in M$ are such that $g(m)=g(m')\in N$, then $g\circ g_m=g\circ g_{m'}$. If $g$ is a monomorphism, this implies $g_m=g_{m'}$ and hence $m=g_m(1_H)=g_{m'}(1_H)=m'$. (b) From part (a), it follows that $M\subseteq N$. As such, for $m$, $m'\in M$, we have $x\wedge m\leq m'$ if and only if $x\wedge g(m)\leq g(m')$. The result now follows from the definitions. \end{proof} \begin{thm}\label{Pp3.65} Let $M$ be a Heyting module over a Heyting algebra $H$. Let $N\subseteq M$ be a distributive submodule of $M$ over the lattice $H$. Then, $N$ is a Heyting module over $H$ and $(n\rightarrow n')_N=(n\rightarrow n')_M$ for $n$, $n'\in N$. \end{thm} \begin{proof} We set $(n\rightarrow n')_N:=(n\rightarrow n')_M$ for $n$, $n'\in N$. Since the partial ordering on $N\subseteq M$ is induced by the partial ordering on $M$, we get $c\wedge n\leq n'$ $\Leftrightarrow$ $c\leq (n\rightarrow n')_M=(n\rightarrow n')_N$. This proves the result. \end{proof} \begin{defn}\label{D3.7} (see \cite[$\S$ 1.13]{PJT}) An element $x$ in a Heyting algebra $H$ is said to be regular if it satisfies $\righthalfcap \righthalfcap (x)=x$. The collection of regular elements of $H$ is denoted by $H_{\righthalfcap\righthalfcap}$. \end{defn} The regular elements $H_{\righthalfcap\righthalfcap}$ of a Heyting algebra $H$ form a Boolean algebra. 
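For the chain in Remark \ref{XR3.7}(3), the regular elements can be computed directly (a minimal sketch; the integer encoding of the three-element chain $0<a<1$ is our illustrative choice):

```python
# Computing the regular elements H_¬¬ of the three-element chain
# 0 < a < 1, encoded as the integers 0 < 1 < 2 with meet = min.
H = [0, 1, 2]

def neg(y):
    # ¬y is the largest x with x ∧ y = 0
    return max(x for x in H if min(x, y) == 0)

regular = [x for x in H if neg(neg(x)) == x]

# Only the bottom and the top are regular, and together they form
# the two-element Boolean algebra.
assert regular == [0, 2]
```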
The complementation in $H_{\righthalfcap\righthalfcap}$ coincides with the negation on $H$. While meets in $H_{\righthalfcap\righthalfcap}$ always coincide with meets in $H$, the same is not necessarily true for joins in $H_{\righthalfcap\righthalfcap}$. In fact, $H_{\righthalfcap\righthalfcap}$ is a sublattice of $H$ if and only if the negation operator satisfies $\righthalfcap (x\wedge y)=\righthalfcap(x)\vee \righthalfcap(y)$ for every $x$, $y\in H$. We recall that the dual of this relation, i.e., $\righthalfcap (x\vee y)=\righthalfcap(x)\wedge \righthalfcap(y)$ for $x$, $y\in H$ holds in every Heyting algebra. We now suppose that $H_{\righthalfcap\righthalfcap}$ is a sublattice of $H$. Given a Heyting module $M$ over $H$, we consider \begin{equation}\label{eq3.8} M_{\righthalfcap\righthalfcap}:=\{\mbox{$m\in M$ $\vert$ $\righthalfcap(m)\in H$ is regular, i.e., $\righthalfcap(m)\in H_{\righthalfcap\righthalfcap}$}\} \end{equation} We will say that the elements of $M_{\righthalfcap\righthalfcap}$ are the regular elements of $M$ over $H$. We will show that $M_{\righthalfcap\righthalfcap}$ is actually a Boolean module over $H_{\righthalfcap\righthalfcap}$. For this, we need the following simple result. \begin{lem}\label{L3.9} Let $H$ be a Heyting algebra such that the Boolean algebra $H_{\righthalfcap\righthalfcap}$ is a sublattice of $H$. Then, given any regular elements $a$, $b\in H_{\righthalfcap\righthalfcap}$, we have $(a\rightarrow b)_H=\righthalfcap(a)\vee b$. In particular, $(a\rightarrow b)_H$ is regular. \end{lem} \begin{proof} For any elements $a$, $b$ in a Heyting algebra $H$, we know (see, for instance, \cite{Hey}) that $\righthalfcap (a\rightarrow b)=(\righthalfcap\righthalfcap(a))\wedge (\righthalfcap(b))$. 
If $a$ and $b$ are regular, this implies that \begin{equation} \righthalfcap (a\rightarrow b)=a\wedge \righthalfcap(b) \end{equation} Since $H_{\righthalfcap\righthalfcap}$ is a sublattice of $H$, it follows that $\righthalfcap\righthalfcap(a\rightarrow b)=\righthalfcap(a\wedge \righthalfcap(b))=\righthalfcap(a)\vee \righthalfcap\righthalfcap(b)=\righthalfcap(a)\vee b$. For any element $x$ in a Heyting algebra, it is clear that $x\leq \righthalfcap\righthalfcap(x)$ (indeed, $\righthalfcap(x)\leq \righthalfcap(x)$ gives $x\wedge\righthalfcap(x) =0$, whence $x\leq (\righthalfcap(x)\rightarrow 0)=\righthalfcap\righthalfcap(x)$ by the adjunction). It follows that \begin{equation}\label{eq3.10} \righthalfcap\righthalfcap(a\rightarrow b)\leq \righthalfcap(a)\vee b\quad \Rightarrow \quad(a\rightarrow b)\leq \righthalfcap(a)\vee b \end{equation} Conversely, we have \begin{equation}\label{eq3.11} a\wedge b\leq b \quad\Rightarrow\quad a\wedge (\righthalfcap(a)\vee b)\leq b \quad\Rightarrow\quad (\righthalfcap(a)\vee b)\leq (a\rightarrow b) \end{equation} From \eqref{eq3.10} and \eqref{eq3.11} the result follows. \end{proof} \begin{thm}\label{P3.10} Let $H$ be a Heyting algebra such that the Boolean algebra $H_{\righthalfcap\righthalfcap}$ is a sublattice of $H$. Suppose that $M$ is a Heyting module over $H$. Then, $M_{\righthalfcap\righthalfcap}$ is a Boolean module over the Boolean algebra $H_{\righthalfcap\righthalfcap}$. \end{thm} \begin{proof} If $m$, $n\in M_{\righthalfcap\righthalfcap}$, we know from Lemma \ref{L3.3}(a) that $\righthalfcap(m\vee n)=\righthalfcap(m)\wedge \righthalfcap(n)$. It follows that $m\vee n\in M_{\righthalfcap\righthalfcap}$. For $x\in H_{\righthalfcap\righthalfcap}$ and $m\in M_{\righthalfcap \righthalfcap}$, it follows from Lemma \ref{L3.3}(b) that $\righthalfcap(x\wedge m)=(x\rightarrow \righthalfcap(m))$.
Since $x\in H_{\righthalfcap\righthalfcap}$ and $m\in M_{\righthalfcap\righthalfcap}$, we can apply Lemma \ref{L3.9} to see that \begin{equation}\righthalfcap(x\wedge m)=(x\rightarrow \righthalfcap(m))=\righthalfcap(x)\vee \righthalfcap(m) \in H_{\righthalfcap\righthalfcap} \end{equation} Hence, $x\wedge m\in M_{\righthalfcap\righthalfcap}$. The map $M_{\righthalfcap\righthalfcap} \longrightarrow H_{\righthalfcap\righthalfcap}$ is obtained by restricting the map $\righthalfcap : M\longrightarrow H$. From the above, it is clear that this map satisfies the conditions for making $M_{\righthalfcap\righthalfcap}$ into a Boolean module over the Boolean algebra $H_{\righthalfcap\righthalfcap}$. \end{proof} An application of Proposition \ref{P3.10} gives us a simple way of constructing examples of Boolean modules. \begin{cor}\label{C3.11} Let $H$ be a Heyting algebra such that the Boolean algebra $H_{\righthalfcap\righthalfcap}$ is a sublattice of $H$. Then, $H$ is canonically equipped with the structure of a Boolean module over the Boolean algebra $H_{\righthalfcap\righthalfcap}$. \end{cor} \begin{proof} It is clear that $H$ is a Heyting module over itself. To prove the result, it suffices to show that every element $x\in H$ is regular over $H$ in the sense of \eqref{eq3.8}, i.e., $\righthalfcap(x)\in H_{\righthalfcap\righthalfcap}$ for any $x\in H$. For this, we notice that $x\leq \righthalfcap\righthalfcap(x)$, $\forall$ $x\in H$ as mentioned in the proof of Lemma \ref{L3.9}. This gives $\righthalfcap(x)\leq \righthalfcap\righthalfcap\righthalfcap(x)$. However, since $\righthalfcap$ is order reversing, the relation $x\leq \righthalfcap\righthalfcap(x)$ gives $\righthalfcap(x)\geq \righthalfcap\righthalfcap\righthalfcap(x)$. Hence, $\righthalfcap(x)=\righthalfcap\righthalfcap\righthalfcap(x)$ for every $x\in H$, i.e., $\righthalfcap(x)\in H_{\righthalfcap\righthalfcap}$.
\end{proof} \section{Duals of Heyting modules and hereditary systems} We continue with $H$ being a Heyting algebra and $M$ being a Heyting module over it. By a Heyting submodule of $M$, we will mean a distributive submodule $N$ of $M$ over the lattice underlying $H$. From Proposition \ref{Pp3.65}, it is clear that $N$ is canonically equipped with the structure of a Heyting module, with $(n_1\rightarrow n_2)_{N}=(n_1\rightarrow n_2)_M$ for any $n_1$, $n_2\in N$. The following simple observation will be very useful to us: by definition, the element $1_H$ in the lattice $H$ is the largest element in $H$, i.e., every $c\in H$ satisfies $c\leq 1$. Hence, for any $m\in M$, we have \begin{equation}\label{e4.0} c\wedge m \leq 1_H\wedge m = m \end{equation} For a Heyting module $M$, we will now consider two different ``dual objects'': the first is the collection $Heymod_H(M,H)$ of Heyting module morphisms from $M$ to $H$, which we will denote by $M^\bigstar$. The second dual object, which we shall denote by $M^\star$, is the collection of join semilattice homomorphisms $\phi:M\longrightarrow \{0,1\}$. Since any $\phi\in M^\star$ is order preserving, we notice that \begin{equation}\label{e4.1} \phi(m)=0 \quad\Rightarrow\quad \phi(c\wedge m)\leq \phi(m)=0 \quad\Rightarrow \quad\phi(c\wedge m)=0 \textrm{ }\forall\textrm{ }c\in H \end{equation} It is clear that a morphism $M_1\longrightarrow M_2$ of Heyting modules induces maps $M_2^\bigstar\longrightarrow M_1^\bigstar$ and $M_2^\star\longrightarrow M_1^\star$ between their respective duals. \begin{defn}\label{D4.1} Let $M$ be a Heyting module over a Heyting algebra $H$. A sub join semilattice $N\subseteq M$ will be called a hereditary Heyting submodule if it satisfies the following condition: \begin{equation*} n\in N \quad\&\quad n'\leq n \quad\Rightarrow\quad n'\in N \end{equation*} \end{defn} From the observation in \eqref{e4.0}, it is immediate that any hereditary Heyting submodule of $M$ is automatically a Heyting submodule.
As such, a hereditary Heyting submodule of $M\in Heymod_H$ in the sense of Definition \ref{D4.1} is simply a hereditary submodule of the underlying join semilattice $M$ in the sense of \cite[Definition 2.2]{one}. For each element $m\in M$, we now consider: \begin{equation}\label{eq4.1} I_m^M=I_m:=\{\mbox{$n\in M$ $\vert$ $n\leq m$ }\} \end{equation} It is easily seen that each $I_m$ is a hereditary Heyting submodule. \begin{thm}\label{P4.2} Let $M$ be a Heyting module over a Heyting algebra $H$. Then, (a) There is a one-one correspondence between elements of $M^\star$ and hereditary Heyting submodules of $M$. (b) A distributive submodule of $M$ over $H$ is hereditary if and only if it is a filtering union of hereditary Heyting submodules of the form $\{I_m\}_{m\in M}$. (c) If $m$, $m'\in M$ are such that $\phi(m)=\phi(m')$ for every $\phi\in M^\star$, then $m=m'$. \end{thm} \begin{proof} As mentioned before, $N\subseteq M$ is a hereditary Heyting submodule if and only if it is a hereditary submodule of the join semilattice $M$ in the sense of \cite[Definition 2.2]{one}. As such, all three parts (a), (b) and (c) follow directly from \cite[Proposition 2.3]{one}. Explicitly, an element $\phi\in M^\star$ corresponds to the hereditary submodule $\phi^{-1}(0)\subseteq M$. \end{proof} We now introduce the notion of a ``hereditary system of submodules'' which will be used to describe elements of the dual $M^\bigstar$. First of all, for a given hereditary submodule $K\subseteq M$ we set \begin{equation} (K:c):=\{\mbox{$m\in M$ $\vert$ $c\wedge m\in K$}\} \end{equation} for each $c\in H$. It is clear that $(K:c)\subseteq M$ is also a hereditary Heyting submodule. \begin{defn}\label{D4.3} Let $M$ be a Heyting module over a Heyting algebra $H$. A hereditary system $\mathcal K=\{K_c\}_{c\in H}$ of submodules of $M$ consists of the following data: (a) For each $c\in H$, $K_c$ is a hereditary submodule of the Heyting module $M$.
(b) Given any elements $c$, $d\in H$, we have $K_c\cap K_d=K_{c\wedge d}$. (c) Given any elements $c$, $d\in H$, we have $(K_d:c)=K_{(c\rightarrow d)}$. In particular, if $c\leq d$ in $H$, we have $K_c\subseteq K_d$. \end{defn} \begin{lem}\label{L4.31} Let $M$ be a Heyting module and let $\phi:M\longrightarrow H$ be an element of $M^\bigstar$. Then, setting $K_c:=\{\mbox{$m\in M$ $\vert$ $\phi(m)\leq c$}\}$ for each $c\in H$ gives a hereditary system $\mathcal K_\phi$ on $M$. \end{lem} \begin{proof} Since $\phi\in M^\bigstar$ is order preserving and preserves joins, we see that each $K_c=\{\mbox{$m\in M$ $\vert$ $\phi(m)\leq c$}\}$ is hereditary. Since $\phi:M\longrightarrow H$ is $H$-linear, for $c$, $d\in H$ and any $m\in M$, we have: \begin{equation*} \begin{array}{c} m\in K_c\cap K_d \textrm{ }\Leftrightarrow\textrm{ } \phi(m)\leq c \textrm{ }\&\textrm{ }\phi(m)\leq d \textrm{ }\Leftrightarrow\textrm{ } \phi(m)\leq c\wedge d \textrm{ }\Leftrightarrow\textrm{ } m\in K_{c\wedge d}\\ m\in (K_d:c) \textrm{ }\Leftrightarrow\textrm{ } c\wedge m\in K_d \textrm{ }\Leftrightarrow\textrm{ } \phi(c\wedge m)\leq d \textrm{ }\Leftrightarrow\textrm{ } c\wedge \phi(m)\leq d \textrm{ }\Leftrightarrow\textrm{ } \phi(m)\leq (c\rightarrow d)\textrm{ }\Leftrightarrow\textrm{ }m\in K_{(c\rightarrow d)}\\ \end{array} \end{equation*} This proves the result. \end{proof} \begin{thm}\label{P4.32} Let $H$ be a finite Heyting algebra and let $M$ be a Heyting module over $H$. Then, there is a one-one correspondence between $M^\bigstar$ and hereditary systems of submodules of $M$. \end{thm} \begin{proof} Given $\phi\in M^\bigstar$, we have already constructed a hereditary system $\mathcal K_\phi$ of submodules of $M$ in Lemma \ref{L4.31}. 
Conversely, given a hereditary system $\mathcal K=\{K_c\}_{c\in H}$, we define $\phi_{\mathcal K}:M \longrightarrow H$ by setting \begin{equation}\label{eq4.5r} \phi_{\mathcal K}(m):=\underset{m\in K_d}{\bigwedge} d \end{equation} We begin by showing that $\phi_{\mathcal K}:M\longrightarrow H$ preserves joins. We pick $m_1$, $m_2\in M$ and set $d_1=\phi_{\mathcal K}(m_1)$, $d_2=\phi_{\mathcal K}(m_2)$. Since $H$ is finite, it follows from condition (b) in Definition \ref{D4.3} that $d_1$ (resp. $d_2$) is the smallest element of $H$ such that $m_1\in K_{d_1}$ (resp. $m_2\in K_{d_2}$). Since $K_{d_1}$, $K_{d_2}\subseteq K_{d_1\vee d_2}$, we have $m_1\vee m_2\in K_{d_1\vee d_2}$. On the other hand, if $e\in H$ is such that $m_1\vee m_2\in K_e$, then $m_1,m_2\leq m_1\vee m_2$ $\Rightarrow$ $m_1$, $m_2\in K_e$ (since $K_e$ is hereditary). Then, $e\geq d_1$ and $e\geq d_2$ which gives $e\geq d_1\vee d_2$. From \eqref{eq4.5r}, it now follows that $\phi_{\mathcal K}(m_1\vee m_2)=d_1\vee d_2=\phi_{\mathcal K}(m_1)\vee \phi_{\mathcal K}(m_2)$. For $c\in H$ and $m\in M$, we know that \begin{equation}\label{eq4.6r} c\wedge \phi_{\mathcal K}(m)=c\wedge \left(\underset{m\in K_d}{\bigwedge} d\right)=\underset{m\in K_d}{\bigwedge}(c\wedge d)\qquad \phi_{\mathcal K}(c\wedge m)=\underset{c\wedge m\in K_d}{\bigwedge} d = \underset{m\in K_{(c\rightarrow d)}}{\bigwedge} d \end{equation} Since $d\leq (c\rightarrow c\wedge d)$, we notice that if $m\in K_d$, then $m\in K_{ (c\rightarrow c\wedge d)}= (K_{c\wedge d}:c)$. Hence $c\wedge m\in K_{c\wedge d}$. From \eqref{eq4.6r}, it now follows that $c\wedge \phi_{\mathcal K}(m)\geq \phi_{\mathcal K}(c\wedge m)$. Conversely, for any $d\in H$ such that $m\in K_{(c\rightarrow d)}$ it follows from the definition in \eqref{eq4.5r} that $\phi_{\mathcal K}(m)\leq (c\rightarrow d)$. 
Then \begin{equation}\label{eq4.7r} \phi_{\mathcal K}(m)\leq \underset{m\in K_{(c\rightarrow d)}}{\bigwedge} (c\rightarrow d)=\left(c\rightarrow \left(\underset{m\in K_{(c\rightarrow d)}}{\bigwedge} d\right)\right)=(c\rightarrow \phi_{\mathcal K}(c\wedge m))\quad \Rightarrow \quad c\wedge \phi_{\mathcal K}(m)\leq \phi_{\mathcal K}(c\wedge m) \end{equation} Hence, we have $c\wedge \phi_{\mathcal K}(m)=\phi_{\mathcal K}(c\wedge m)$ and it follows that $\phi_{\mathcal K}\in M^\bigstar$. It remains to show that the two associations are inverse to each other. First of all, it is clear that $\phi=\phi_{\mathcal K_\phi}$ for any $\phi\in M^\bigstar$. On the other hand, it follows from the above that $m\in K_{\phi_{\mathcal K}(m)}$ for each $m\in M$ and hence $\mathcal K_{\phi_{\mathcal K}}=\mathcal K$. \end{proof} The next result shows that when the Heyting algebra $H$ is actually Boolean, the hereditary systems take a particularly simple form. \begin{thm}\label{P4.33} Let $M$ be a Heyting module over a Heyting algebra $H$. If $H$ is a Boolean algebra, then there is a one-one correspondence between hereditary systems of submodules of $M$ and hereditary submodules of $M$. \end{thm} \begin{proof} Let $\mathcal K=\{K_c\}_{c\in H}$ be a hereditary system of submodules of $M$. If $H$ is Boolean, we claim that $\mathcal K$ is determined completely by the hereditary submodule $K_0$. This is because condition (c) in Definition \ref{D4.3} reduces to \begin{equation} K_c=K_{(\righthalfcap c\rightarrow 0)}=(K_0:\righthalfcap c) \end{equation} To complete the proof, it remains to show that for any hereditary submodule $K\subseteq M$, the collection $K_c:=(K:\righthalfcap c)$, $c\in H$ gives a hereditary system of submodules of $M$. We have noted before that each $(K:\righthalfcap c)$ is hereditary. 
Further, for any $m\in M$ and $c$, $d\in H$, we have: \begin{equation*} m\in (K_d:c) \textrm{ }\Leftrightarrow \textrm{ }c\wedge m\in K_d=(K:\righthalfcap d) \textrm{ }\Leftrightarrow \textrm{ }m\in (K:c\wedge \righthalfcap d)=(K:\righthalfcap (\righthalfcap c \vee d))=(K:\righthalfcap (c\rightarrow d))=K_{(c\rightarrow d)} \end{equation*} Here we have used the fact that since $H$ is Boolean, we must have $(c\rightarrow d)=\righthalfcap c\vee d$. Finally, since $K$ is hereditary, we know that for $c$, $d\in H$ and $m\in M$, $(\righthalfcap c \vee \righthalfcap d)\wedge m \in K$ if and only if both $\righthalfcap c\wedge m$, $\righthalfcap d\wedge m\in K$. It follows that \begin{equation*} m\in K_c\cap K_d \textrm{ }\Leftrightarrow \textrm{ } \righthalfcap c\wedge m\textrm{ }\&\textrm{ }\righthalfcap d\wedge m\in K \textrm{ }\Leftrightarrow \textrm{ } (\righthalfcap c \vee \righthalfcap d)\wedge m \in K \textrm{ }\Leftrightarrow \textrm{ }m\in (K:\righthalfcap (c\wedge d))=K_{c\wedge d} \end{equation*} This proves the result. \end{proof} \begin{thm}\label{xP4.34} Let $M$ be a Heyting module over $H$. If $H$ is a finite Boolean algebra, then there is a one-one correspondence between the duals $M^\bigstar$ and $M^*$. \end{thm} \begin{proof} This result follows directly as a consequence of Propositions \ref{P4.2}, \ref{P4.32} and \ref{P4.33}. This correspondence may be made explicit as follows : given $\phi:M\longrightarrow H$ in $M^\bigstar$, we can define $\psi:M\longrightarrow \{0,1\}$ in $M^*$ by setting $\psi(m)=0$ if $\phi(m)=0$ and $\psi(m)=1$ otherwise. 
Conversely, given $\psi:M\longrightarrow \{0,1\}$ in $M^*$, we can obtain a corresponding $\phi:M\longrightarrow H$ in $M^\bigstar$ by setting \begin{equation}\label{x4.9e} \phi(m)=\bigwedge \{\mbox{$c\in H$ $\vert$ $\psi(\righthalfcap c\wedge m)=0$}\} \end{equation} \end{proof} \begin{thm}\label{P4.3} Let $H$ be a finite Boolean algebra, $M$ be a Heyting module over $H$ and let $N\subseteq M$ be a Heyting submodule. Then, the induced map $M^\bigstar\longrightarrow N^\bigstar$ is surjective. \end{thm} \begin{proof} From Proposition \ref{xP4.34}, we know that there are bijections $M^\bigstar\simeq M^*$ and $N^\bigstar\simeq N^*$. From \cite[Proposition 2.3]{one}, we know that the induced morphism $M^*\longrightarrow N^*$ on the duals of the join semilattices is surjective. The result is now clear. \end{proof} \begin{thm}\label{P4.31} Let $H$ be a finite Boolean algebra and let $M$ be a Heyting module over $H$. Then, the duality between $M$ and $M^\bigstar$ is separating, i.e., if $m_1$, $m_2\in M$ are such that $\phi(m_1)=\phi(m_2)$ for every $\phi\in M^\bigstar$, then $m_1=m_2$. \end{thm} \begin{proof} We consider the hereditary submodule $I_{m_1}\subseteq M$ as defined in \eqref{eq4.1}. Then, $\{(I_{m_1}:\righthalfcap c)\}_{c\in H}$ is a hereditary system of submodules of $M$; let $\phi$ be the corresponding element in $M^\bigstar$. Then, for any $m\in M$, we have $\phi(m)=0$ if and only if $m\in I_{m_1}$. Since $m_1\in I_{m_1}$, we get $\phi(m_2)=\phi(m_1)=0$ and hence $m_2\in I_{m_1}$, i.e., $m_2\leq m_1$. Similarly, we can show that $m_1\leq m_2$. Hence, $m_1=m_2$. \end{proof} We now obtain the following version of the Hahn-Banach Theorem for modules over a finite Boolean algebra (compare \cite[Lemma 2.6]{one} and also \cite{CGQ}). \begin{thm}\label{P4.4} Let $H$ be a finite Boolean algebra, $M$ be a Heyting module over $H$ and $i:N\hookrightarrow M$ be a Heyting submodule.
Then, if $m\in M$ is such that $\phi_1(m)=\phi_2(m)$ for every pair $(\phi_1,\phi_2)\in M^\bigstar\times M^\bigstar$ such that $\phi_1\circ i=\phi_2\circ i$, then $m\in N$. \end{thm} \begin{proof} We consider a pair $(\psi_1,\psi_2)\in M^*\times M^*$ such that $\psi_1\circ i=\psi_2\circ i$. Using the bijection $M^\bigstar \simeq M^*$, we take $\phi_1$, $\phi_2\in M^\bigstar$ corresponding respectively to $\psi_1$, $\psi_2\in M^*$. For any fixed $n\in N$ and any $c\in H$, we know that $\righthalfcap c\wedge n\in N$. Since $\psi_1\circ i=\psi_2\circ i$, it follows that \begin{equation}\label{x4.10e} \psi_1(\righthalfcap c\wedge n)=0\quad\Leftrightarrow\quad \psi_2(\righthalfcap c\wedge n)=0 \end{equation} From \eqref{x4.9e} and \eqref{x4.10e}, it follows that $\phi_1(n)=\phi_2(n)$ for every $n\in N$, i.e., $\phi_1\circ i=\phi_2\circ i$. By assumption, we must therefore have $\phi_1(m)=\phi_2(m)$. Again, from the proof of Proposition \ref{xP4.34}, we obtain \begin{equation} \psi_1(m)=0 \quad\Leftrightarrow\quad \phi_1(m)=0 \quad\Leftrightarrow\quad \phi_2(m)=0 \quad\Leftrightarrow\quad \psi_2(m)=0 \end{equation} Hence, any maps $\psi_1,\psi_2:M\longrightarrow \{0,1\}$ in $M^*$ such that $\psi_1\circ i=\psi_2\circ i$ must satisfy $\psi_1(m)=\psi_2(m)$. Applying \cite[Lemma 2.6]{one}, we get $m\in N$. \end{proof} In Proposition \ref{P4.5}, we showed that a morphism $f:M\longrightarrow N$ in $Heymod_H$ is a monomorphism if and only if it is injective on underlying sets. For epimorphisms, we have a somewhat less general result. \begin{thm}\label{P4.51} Let $H$ be a finite Boolean algebra and let $f:M\longrightarrow N$ be a morphism of Heyting modules over $H$. Then, $f$ is an epimorphism if and only if $f$ is surjective on underlying sets. \end{thm} \begin{proof} The ``if part'' is obvious. On the other hand, consider a morphism $f:M\longrightarrow N$ in $Heymod_H$. 
It is clear that the image of $f$ (as a subset of $N$) is already a Heyting submodule, which we will denote by $E$. Suppose that $f$ is an epimorphism in $Heymod_H$. We now consider a pair $(\phi_1,\phi_2)\in N^\bigstar\times N^\bigstar$ such that $\phi_1|_E=\phi_2|_E$. Then, $\phi_1\circ f=\phi_2\circ f$. Since $\phi_1,\phi_2:N\longrightarrow H$ are morphisms of Heyting modules and $f$ is an epimorphism, we obtain $\phi_1=\phi_2$. In particular, $\phi_1(n)=\phi_2(n)$ for every $n\in N$. Using Proposition \ref{P4.4}, we get $n\in E$ for every $n\in N$, i.e., $f$ is surjective. \end{proof} We conclude this section with the following result. \begin{thm}\label{P4.61} Let $M$ be a Heyting module over a finite Heyting algebra $H$. Then, the dual $M^\bigstar$ is a Heyting module over $H$ and there is a canonical morphism of Heyting modules $M\longrightarrow M^{\bigstar\bigstar}$. \end{thm} \begin{proof} For $c\in H$ and $\phi\in M^\bigstar$, we set $(c\wedge \phi)(m):=\phi(c\wedge m)$ for any $m\in M$. It may be easily verified that this makes $M^\bigstar$ into a distributive module over the lattice underlying $H$. Similarly, we see that $M^{\bigstar\bigstar}$ is a distributive module over $H$ and there is a canonical map $M\longrightarrow M^{\bigstar\bigstar}$ of distributive modules given by the association $m\mapsto \langle \_\_,m\rangle : M^\bigstar\longrightarrow H$ for each $m\in M$. Finally, since $H$ is a finite Heyting algebra, it follows from Proposition \ref{P3.5} that $M^\bigstar$ becomes a Heyting module over $H$ and the canonical map $M\longrightarrow M^{\bigstar\bigstar}$ becomes a morphism of Heyting modules. \end{proof} \section{Coequalizers and Equalizers in $Heymod_H$} Throughout this section, we let $H$ be a finite Heyting algebra. In other words (see Remark \ref{XR3.7}), this means that $H$ is a finite distributive lattice. 
We also recall from Proposition \ref{P3.5} that any distributive module over a finite Heyting algebra $H$ is canonically equipped with the structure of a Heyting module. In this section, we will study products, coproducts, coequalizers and equalizers in $Heymod_H$ in a manner analogous to \cite[$\S$ 3]{one}. In other words, we study elements of homological algebra of distributive modules over a finite distributive lattice, extending the results from \cite{one} for modules over the Boolean semifield $\mathbb B=\{0,1\}$. It is clear that the object $0$ is both initial and final in $Heymod_H$. Our first aim is to show that $Heymod_H$ is a semiadditive category, i.e., $Heymod_H$ has all finite biproducts (see \cite[VII.2]{ML}). \begin{lem}\label{uL5.1} (a) The category $Heymod_H$ contains all finite products. (b) In the category $Heymod_H$, finite products are isomorphic to finite coproducts. \end{lem} \begin{proof} (a) We consider Heyting modules $M$, $N$. Then, $M\times N=\{\mbox{$(m,n)$ $\vert$ $m\in M$, $n\in N$}\}$ becomes a Heyting module with the operations \begin{equation} (m,n)\vee (m',n') =(m\vee m',n\vee n') \qquad c\wedge (m,n)=(c\wedge m,c\wedge n) \end{equation} for $(m,n)$, $(m',n')\in M\times N$ and $c\in H$. We also notice that the canonical projections $p_1:M\times N\longrightarrow M$, $p_2:M\times N\longrightarrow N$ as well as the inclusions $e_1:M\longrightarrow M\times N$, $e_1(m)=(m,0)$ and $e_2:N\longrightarrow M\times N$, $e_2(n)=(0,n)$ are morphisms of Heyting modules. Given morphisms $f:X\longrightarrow M$, $g:X\longrightarrow N$ in $Heymod_H$, it is clear that $(f,g):X\longrightarrow M\times N$ given by $(f,g)(x)=(f(x),g(x))$ for each $x\in X$ is the unique morphism in $Heymod_H$ such that $p_1\circ (f,g)=f$ and $p_2\circ (f,g)=g$. Hence, $Heymod_H$ contains all finite products. (b) We now suppose that we are given morphisms $f:M\longrightarrow Y$ and $g:N\longrightarrow Y$ in $Heymod_H$. 
Then, we define $h:M\times N\longrightarrow Y$ by setting $h(m,n)=f(m)\vee g(n)$. Since $(m,n)=(m,0)\vee (0,n)$, it is clear that $h$ is the unique morphism from $M\times N$ to $Y$ such that $h\circ e_1=f$ and $h\circ e_2=g$. Hence, $M\times N$ is also the coproduct in $Heymod_H$. \end{proof} \begin{thm}\label{uP5.2} Let $H$ be a finite Heyting algebra. Then, $Heymod_H$ is a semiadditive category. \end{thm} \begin{proof} This follows immediately from Lemma \ref{uL5.1} and the definition of a semiadditive category. \end{proof} \begin{rem} \emph{If $\mathcal C$ is any category with zero objects and finite products, given objects $c_1$, $c_2\in \mathcal C$, there are canonical morphisms $(id,0):c_1\longrightarrow c_1\times c_2$ and $(0,id):c_2\longrightarrow c_1\times c_2$. If $\mathcal C$ also has finite coproducts, these morphisms together induce a canonical morphism $\gamma_{12}:c_1\coprod c_2\longrightarrow c_1\times c_2$. Classically, a category is said to be semiadditive if every such canonical morphism $\gamma_{12}:c_1\coprod c_2\longrightarrow c_1\times c_2$ is an isomorphism. However, a result of Lack \cite[Theorem 5]{Lack} shows that giving any family of isomorphisms $c_1\coprod c_2\overset{\cong}{ \longrightarrow}c_1\times c_2$ that is natural in $c_1$, $c_2$ is enough to show that $\mathcal C$ is semiadditive. } \end{rem} Our next aim is to construct the coequalizer and equalizer of morphisms in $Heymod_H$. By definition, a congruence relation on some $M\in Heymod_H$ will be a Heyting submodule $R\subseteq M\times M$ which satisfies the following three conditions: (1) For each $m\in M$, we have $(m,m)\in R$. (2) For $m$, $m'\in M$ such that $(m,m')\in R$, we must have $(m',m)\in R$. (3) If $m$, $m'$ and $m''\in M$ are such that $(m,m')$, $(m',m'')\in R$, then $(m,m'')\in R$. \begin{lem}\label{vL5.4} If $R\subseteq M\times M$ is a congruence relation on $M$, then $M/R$ is a Heyting module.
\end{lem} \begin{proof} By definition, $M/R$ is the set of equivalence classes in $M$, where $m\sim m'$ if and only if $(m,m')\in R$. Given elements $[m]$, $[n]\in M/R$ corresponding respectively to elements $m$, $n\in M$, we set \begin{equation}\label{e5.2re} [m]\vee [n]:=[m\vee n]\qquad c\wedge [m]:=[c\wedge m] \end{equation} for each $c\in H$. Say $m\sim m'$ and $n\sim n'$ in $M$. Then, $(m,m')$, $(n,n')\in R$ and since $R$ is a submodule, we must have $(m\vee n,m'\vee n')=(m,m')\vee (n,n')\in R$. It follows that $[m\vee n]=[m'\vee n']$. Since $R$ is a submodule, it also follows that $(c\wedge m,c\wedge m')\in R$ for any $c\in H$ and hence $[c\wedge m]=[c\wedge m']$. Hence, the Heyting module structure on $M/R$ given in \eqref{e5.2re} is well-defined. \end{proof} Given morphisms $f,g:L\longrightarrow M$ in $Heymod_H$, we now set $R_{(f,g)}$ to be the intersection of all congruence relations on $M$ containing the collection $\{(f(x),g(x))\}_{x\in L}$. \begin{thm}\label{vP5.5} (a) The canonical morphism $r:M\longrightarrow M/R_{(f,g)}$ is the coequalizer of the morphisms $f,g:L\longrightarrow M$ in $Heymod_H$. (b) The coequalizer of $f,g:L\longrightarrow M$ is the quotient of $M$ over the congruence relation given by \begin{equation}\label{5.3'} R'_{(f,g)}=\{\mbox{$(m,n)\in M\times M$ $\vert$ $t(m)=t(n)$ for every $t:M\longrightarrow Q$ in $Heymod_H$ with $t\circ f=t\circ g$}\} \end{equation} \end{thm} \begin{proof} (a) From the definition of $R_{(f,g)}$, it is clear that $r\circ f=r\circ g$. Further, if $s:M\longrightarrow P$ is a morphism such that $s\circ f=s\circ g$, we set \begin{equation} R_s:=\{\mbox{$(m,n) \in M\times M$ $\vert$ $s(m)=s(n)$}\} \end{equation} It is clear that $R_s\subseteq M\times M$ gives a congruence relation on $M$ and that $(f(x),g(x))\in R_s$ for every $x\in L$. It follows that $R_{(f,g)}\subseteq R_s$. 
Then, for any $m$, $n\in M$: \begin{equation}\label{5.4eg} r(m)=r(n)\quad\Rightarrow \quad (m,n)\in R_{(f,g)}\quad \Rightarrow \quad (m,n)\in R_s \quad \Rightarrow \quad s(m)=s(n) \end{equation} As such, \eqref{5.4eg} shows that the morphism $s:M\longrightarrow P$ factors uniquely through $M/R_{(f,g)}$. (b) Since $R'_{(f,g)}$ is a congruence relation containing all the elements $\{(f(x),g(x))\}_{x\in L}$, it follows from the definition of $R_{(f,g)}$ that $R_{(f,g)}\subseteq R'_{(f,g)}$. Conversely, consider some $(m,n)\in R'_{(f,g)}$. From part (a), we know that the canonical morphism $r:M\longrightarrow M/R_{(f,g)}$ satisfies $r\circ f=r\circ g$. From \eqref{5.3'} it follows that $r(m)=r(n)$, i.e., $(m,n)\in R_{(f,g)}$. Hence, $R_{(f,g)}=R'_{(f,g)}$ and the result follows. \end{proof} We also see that the equalizer $L'\hookrightarrow L$ of morphisms $f,g:L\longrightarrow M$ in $Heymod_H$ is given by \begin{equation}\label{eqlz} L':=\{\mbox{$x\in L$ $\vert$ $f(x)=g(x)$}\} \end{equation} It is easily verified that $L'$ is a Heyting submodule of $L$. The next step is to consider coimages and images in the category $Heymod_H$. This will be done in a manner analogous to \cite[$\S$ 3]{one}, by considering ``kernel pairs'' and ``cokernel pairs.'' We begin with a morphism $f:M\longrightarrow N$ in $Heymod_H$. Considering $M^2:=M\times M$ along with the canonical projections $p_1,p_2:M^2 \longrightarrow M$, we have morphisms $f\circ p_1,f\circ p_2:M^2\longrightarrow N$ in $Heymod_H$. The kernel pair $Ker_p(f)$ is now defined to be the equalizer \begin{equation}\label{kerp} Ker_p(f):=Eq\left(\xymatrix{ M^2 \ar@<-.5ex>[r]_{f\circ p_2} \ar@<.5ex>[r]^{f\circ p_1} & N }\right) \end{equation} From the definition in \eqref{eqlz}, we know that $Ker_p(f)\subseteq M^2$. 
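To make the kernel pair concrete, here is a minimal example of our own (the algebra $H$, the modules and the map $f$ below are ad hoc illustrative choices, not objects from the preceding discussion). Take $H=\mathbb B=\{0,1\}$, let $M$ be the three-element chain $0\leq a\leq 1$, let $N=\{0,1\}$, and let $f:M\longrightarrow N$ be the morphism of Heyting modules with $f(0)=0$ and $f(a)=f(1)=1$. Then \eqref{eqlz} gives:

```latex
% Kernel pair of f for the chain M = {0 <= a <= 1} over H = B = {0,1}:
% (m,m') lies in Ker_p(f) precisely when f(m) = f(m').
\begin{equation*}
Ker_p(f)=\{\mbox{$(m,m')\in M^2$ $\vert$ $f(m)=f(m')$}\}
=\{(0,0)\}\cup\left(\{a,1\}\times\{a,1\}\right)\subseteq M^2
\end{equation*}
```

One checks directly that this subset of $M^2$ is a Heyting submodule and an equivalence relation on $M$.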
The coimage $Coim(f)$ is taken to be the coequalizer \begin{equation}\label{coim} Coim(f):=Coeq\left(Ker_p(f)\hookrightarrow \xymatrix{ M^2 \ar@<-.5ex>[r]_{p_2} \ar@<.5ex>[r]^{p_1} & M }\right) \end{equation} The next result describes the coimage more explicitly. \begin{thm}\label{usP5.6} (a) The submodule $Ker_p(f)\subseteq M^2$ defines a congruence relation on $M$. (b) The coimage $Coim(f)$ is the quotient of $M$ over the equivalence relation $m\sim m'$ $\Leftrightarrow$ $f(m)=f(m')$. (c) The coimage $Coim(f)$ is isomorphic to the Heyting submodule $I_f:=\{\mbox{$f(m)$ $\vert$ $m\in M$}\}$ of $N$. \end{thm} \begin{proof} Using \eqref{eqlz}, we see that an element $(m,m')\in M\times M$ lies in $Ker_p(f)$ if and only if $f(m)=f\circ p_1(m,m')=f\circ p_2(m,m')=f(m')$. It is clear that this gives a congruence relation on $M$. This proves (a). By definition, the coequalizer $Coim(f)$ in \eqref{coim} is the quotient of $M$ over the smallest congruence relation containing $Ker_p(f)$. Since $Ker_p(f)$ is already a congruence relation, it follows that $Coim(f)$ is the quotient of $M$ over the equivalence relation \begin{equation}\label{5.9eqp} m\sim m' \quad\Leftrightarrow\quad (m,m')\in Ker_p(f)\quad\Leftrightarrow\quad f(m)=f(m') \end{equation} This proves (b). The result of (c) is clear from \eqref{5.9eqp}. \end{proof} We now consider $N^2=N\times N$ along with the canonical inclusions $e_1,e_2:N\longrightarrow N^2$. Proceeding in a dual manner, the cokernel pair $Coker_p(f)$ is taken to be the coequalizer \begin{equation}\label{cokerp} Coker_p(f):=Coeq\left( \xymatrix{ M\ar@<-.5ex>[r]_{e_2\circ f} \ar@<.5ex>[r]^{e_1\circ f} & N^2 }\right) \end{equation} Further, the image $Im(f)$ is taken to be the equalizer \begin{equation}\label{im} Im(f):=Eq\left( \xymatrix{ N\ar@<-.5ex>[r]_{e_2} \ar@<.5ex>[r]^{e_1} & N^2 }\longrightarrow Coker_p(f)\right) \end{equation} The next result gives an explicit description of the image of a morphism in $Heymod_H$.
\begin{thm}\label{P5.7b} Let $H$ be a finite Heyting algebra and $f:M\longrightarrow N$ be a morphism in $Heymod_H$. Then, the image of $f$ is given by \begin{equation*} Im(f)=\{\mbox{$n\in N$ $\vert$ $t_1(n)=t_2(n)$ $\forall$ $Q\in Heymod_H$, $\forall$ $t_1,t_2:N\longrightarrow Q$ such that $t_1(f(m))=t_2(f(m))$ $\forall$ $m\in M$ }\} \end{equation*} In particular, if $i:K\hookrightarrow N$ is a monomorphism in $Heymod_H$, the image $\widetilde{K}=Im(i)$ is given by the submodule \begin{equation*} \widetilde{K}=\{\mbox{$n\in N$ $\vert$ $t_1(n)=t_2(n)$ $\forall$ $Q\in Heymod_H$ $\forall$ $t_1,t_2:N\longrightarrow Q$ such that $t_1(k)=t_2(k)$ $\forall$ $k\in K$ }\} \end{equation*} \end{thm} \begin{proof} From Proposition \ref{vP5.5} and the definition in \eqref{cokerp}, we see that $Coker_p(f)$ is the quotient of $N^2$ over the equivalence relation $(n_1,n_2)\sim (n_1',n_2')$ if $t(n_1,n_2)=t(n_1',n_2')$ for every $t:N^2\longrightarrow Q$ in $Heymod_H$ such that $t(f(m),0)=t(0,f(m))$ $\forall$ $m\in M$. Then, \eqref{im} shows that $Im(f)$ consists of all $n\in N$ such that $(n,0)\sim (0,n)\in N^2$. Unpacking this definition, we see that $Im(f)$ consists of all $n\in N$ such that $t(n,0)=t(0,n)$ for every $t:N^2\longrightarrow Q$ such that $t(f(m),0)=t(0,f(m))$ for every $m\in M$. We see that a morphism $t:N^2\longrightarrow Q$ corresponds to two separate morphisms $t_1,t_2:N\longrightarrow Q$ such that $t_1(n)=t(n,0)$ and $t_2(n)=t(0,n)$. The result is now clear. \end{proof} For modules over a finite Heyting algebra, the following result replaces the usual (AB2) property (isomorphism of coimage and image) for abelian categories. \begin{thm}\label{sP5.71} Let $H$ be a finite Heyting algebra and let $f:M\longrightarrow N$ be a morphism in $Heymod_H$. Then, we have $\widetilde{Coim(f)}=Im(f)$. \end{thm} \begin{proof} From Proposition \ref{P5.7b}, we see that $Im(f)=\widetilde{I_f}$, where $I_f=\{\mbox{$f(m)$ $\vert$ $m\in M$}\}$. 
From Proposition \ref{usP5.6}, we know that $Coim(f)=I_f$ and hence the result follows. \end{proof} In the case where $H$ is a finite Boolean algebra, we have an isomorphism between the coimage and the image. \begin{thm}\label{sP5.8} Suppose that $H$ is a finite Boolean algebra and let $f:M\longrightarrow N$ be a morphism in $Heymod_H$. Then, we have $Coim(f)=Im(f)$. \end{thm} \begin{proof} From Proposition \ref{P4.4}, it follows that $\widetilde{K}=K$ for any submodule $K\subseteq N$. Hence, $\widetilde{I_f}=I_f\subseteq N$ and we get $Im(f)=\widetilde{I_f}=I_f=Coim(f)$. \end{proof} \section{Tensor products of Heyting modules and change of base} We continue with $H$ being a finite Heyting algebra. In this section, we construct the tensor product $M\otimes_HN$ of Heyting modules over $H$. For a study of tensor products of lattices and semilattices in the literature, we refer the reader to \cite{AK}, \cite{Fraser}, \cite{Gra} and \cite{Shm}. For any set $S$, we define $Free_H(S)$ to be the collection of all functions $f:S\longrightarrow H$ of finite support, i.e., such that $f(s)\ne 0$ for only finitely many elements $s\in S$. It is easily seen that $Free_H(S)$ is a distributive module over $H$: \begin{equation}\label{6xeq6.1} (f\vee g)(s):=f(s)\vee g(s)\qquad (c\wedge f)(s):=c\wedge f(s)\qquad \forall\textrm{ }s\in S, \textrm{ }\forall\textrm{ }f,g\in Free_H(S) \end{equation} Since $H$ is finite, this makes $Free_H(S)$ a Heyting module. For the sake of convenience, an element of $Free_H(S)$ will be denoted by a formal sum $\sum c_i\wedge s_i$, where $s_i\in S$ and $c_i=f(s_i)\in H$. In particular, if $M$, $N\in Heymod_H$, then $Free_H(M\times N)$ consists of sums of the form $\sum c_i\wedge (m_i,n_i)$, where each $(m_i,n_i)\in M\times N$ and each $c_i\in H$.
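For concreteness, here is how the operations in \eqref{6xeq6.1} look in the formal sum notation just introduced (a small illustration of our own, with $S=\{s_1,s_2\}$ and arbitrary $a$, $b$, $c\in H$):

```latex
% Join and H-action on Free_H(S) are computed pointwise; in the formal
% sum notation this reads as follows.
\begin{equation*}
(a\wedge s_1 + b\wedge s_2)\vee (b\wedge s_1) = (a\vee b)\wedge s_1 + b\wedge s_2
\qquad
c\wedge (a\wedge s_1 + b\wedge s_2) = (c\wedge a)\wedge s_1 + (c\wedge b)\wedge s_2
\end{equation*}
```

In particular, for a finite set $S$, the Heyting module $Free_H(S)$ is simply the product of $S$-many copies of $H$.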
As a set, we now define $M\otimes_HN$ to be the quotient of $Free_H(M\times N)$ over the equivalence relation generated by the following: \begin{equation}\label{6xeq6.2} \begin{array}{c} \sum c_i\wedge (m_i,n_i) + c\wedge (0,n)=\sum c_i\wedge (m_i,n_i) = \sum c_i\wedge (m_i,n_i) + c\wedge (m,0)\\ \sum c_i\wedge (m_i,n_i) + c\wedge (m,n)+c\wedge (m',n)=\sum c_i\wedge (m_i,n_i) + c\wedge (m\vee m',n)\\ \sum c_i\wedge (m_i,n_i) +c\wedge (m,n)+c\wedge (m,n')= \sum c_i\wedge (m_i,n_i) +c\wedge (m,n\vee n')\\ \sum c_i\wedge (m_i,n_i) +c\wedge (d\wedge m,n)=\sum c_i\wedge (m_i,n_i) +(c\wedge d)\wedge (m,n)=\sum c_i\wedge (m_i,n_i) +c\wedge (m,d\wedge n)\\ \end{array} \end{equation} where $c$, $d$, $c_i\in H$, $m$, $m_i\in M$ and $n$, $n_i\in N$. From the relations in \eqref{6xeq6.2}, it is evident that the $\vee$ operation on $Free_H(M\times N)$ as well as $c\wedge \_\_$ operation for each $c\in H$ descends to $M\otimes_HN$, making it a distributive module over $H$ and hence a Heyting module. Given $m\in M$, $n\in N$, we will denote by $m\otimes n$ the equivalence class of the element $(m,n)\in Free_H(M\times N)$ in $M\otimes_HN$. \begin{defn}\label{6xD6.1} Let $H$ be a finite Heyting algebra and let $M$, $N$, $P$ be Heyting modules. A bimorphism $f:M\times N\longrightarrow P$ is a map such that for fixed $m\in M$ and $n\in N$, the maps \begin{equation*} g_m:=f(m,\_\_):N\longrightarrow P\qquad h_n:=f(\_\_,n):M\longrightarrow P \end{equation*} are morphisms of Heyting modules. \end{defn} Analogous to ordinary tensor products of modules, we will now see that $M\otimes_HN$ represents bimorphisms from $M\times N$ to $P$. For the similar notion of bimorphisms of join semilattices, see \cite{Gra}. \begin{thm}\label{6xP6.2} Let $M$, $N$ and $P$ be Heyting modules. Then, for each bimorphism $f:M\times N\longrightarrow P$, there is a unique morphism $f':M\otimes_HN\longrightarrow P$ in $Heymod_H$ such that $f'(m\otimes n)=f(m,n)$. 
\end{thm} \begin{proof} We consider a bimorphism $f:M\times N\longrightarrow P$. The morphism $f':M\otimes_HN\longrightarrow P$ is defined by taking the equivalence class of the element $\sum c_i\wedge (m_i,n_i)$ to $\sum c_i\wedge f(m_i,n_i)\in P$. From the relations in \eqref{6xeq6.2} and the fact that $f$ is a bimorphism, we see that $f'$ is well-defined. It is also clear that $f'$ is a morphism of Heyting modules. Since $m\otimes n$ is the equivalence class of the element $(m,n)\in Free_H(M\times N)$ in $M\otimes_HN$, the definition gives $f'(m\otimes n)=f(m,n)$. On the other hand, if $g:M\otimes_HN\longrightarrow P$ is a morphism of Heyting modules such that $g(m\otimes n)=f(m,n)$, then $g$ must take the equivalence class of $\sum c_i\wedge (m_i,n_i)$ to $\sum c_i\wedge f(m_i,n_i)$. Hence, $f'$ must be unique. \end{proof} \begin{cor}\label{6xC6.3} Given Heyting modules $M$, $N$ and $P$, we have isomorphisms: \begin{equation} \begin{array}{c} (M\otimes_HN)\cong (N\otimes_HM)\\ (M\otimes_HN)\otimes_HP\cong M\otimes_H(N\otimes_HP)\\ \end{array} \end{equation} \end{cor} \begin{proof} Using the description of morphisms from the tensor product in Proposition \ref{6xP6.2} and applying the Yoneda lemma to the category $Heymod_H$, the result follows. \end{proof} Given morphisms $f$, $g:M\longrightarrow N$ in $Heymod_H$ and any $c\in H$, we define \begin{equation}\label{6ins} (f\vee g)(m):=f(m)\vee g(m)\qquad (c\wedge f)(m):=c\wedge f(m)\qquad \forall\textrm{ }m\in M \end{equation} It is easily verified that this makes $Heymod_H(M,N)$ into a distributive module over $H$. Since $H$ is finite, we get $Heymod_H(M,N)\in Heymod_H$. We will often write $f\vee g$ as $f+g$. \begin{thm}\label{6xP6.4} For any $N\in Heymod_H$, the functor $\_\_\otimes_HN:Heymod_H\longrightarrow Heymod_H$ is left adjoint to the functor $ Heymod_H(N,\_\_):Heymod_H\longrightarrow Heymod_H$.
\end{thm} \begin{proof} We consider $M$, $N$, $P\in Heymod_H$ and a morphism $f:M\longrightarrow Heymod_H(N,P)$ in $Heymod_H$. We define $g:M\times N\longrightarrow P$ by setting $g(m,n):=f(m)(n)$. Then, for each fixed $m\in M$, $g(m,\_\_)=f(m)$ is a morphism of Heyting modules from $N$ to $P$. If we fix $n\in N$, it follows from the definitions in \eqref{6ins} and the fact that $f$ is a morphism of Heyting modules that \begin{equation*} \begin{array}{c} g(m',n)\vee g(m'',n)=f(m')(n)\vee f(m'')(n)=(f(m')\vee f(m''))(n)=f(m'\vee m'')(n)=g(m'\vee m'',n)\\ g(c\wedge m',n)=f(c\wedge m')(n)=(c\wedge f(m'))(n)=c\wedge f(m')(n)=c\wedge g(m',n) \end{array} \end{equation*} for any $m'$, $m''\in M$ and $c\in H$. It follows that $g:M\times N\longrightarrow P$ is a bimorphism. Since $f$ and $g$ completely determine each other, it now follows from Proposition \ref{6xP6.2} that we have an isomorphism $Heymod_H(M\otimes_HN,P)\cong Heymod_H(M,Heymod_H(N,P))$. \end{proof} We are now ready to consider base extensions of Heyting modules. \begin{thm}\label{6xP6.5} Let $f:H\longrightarrow H'$ be a morphism between finite Heyting algebras, i.e., $f$ is a morphism of the underlying distributive lattices. Then, there is an `extension of scalars' along $f$, i.e., $f$ induces a functor $\_\_\otimes_HH':Heymod_H\longrightarrow Heymod_{H'}$. \end{thm} \begin{proof} If $f:H\longrightarrow H'$ is a morphism of Heyting algebras, then $H'\in Heymod_H$. Accordingly, for any $M\in Heymod_H$, we can form the tensor product $M\otimes_HH'$. For any element in $M\otimes_HH'$ represented by $\sum c_i\wedge (m_i,h'_i)$ and any $h'\in H'$, we set $(\sum c_i\wedge (m_i,h'_i))\wedge h':=\sum c_i\wedge (m_i,h'_i\wedge h')$. It may be verified easily that this operation gives $M\otimes_HH'\in Heymod_{H'}$. \end{proof} On the other hand, given a morphism $f:H\longrightarrow H'$ between finite Heyting algebras, there is an obvious restriction functor $Res^{H'}_H:Heymod_{H'}\longrightarrow Heymod_H$. 
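As a basic example of the change of base (our own illustration): for any finite Heyting algebra $H$, the map $\iota:\mathbb B=\{0,1\}\longrightarrow H$ sending $0\mapsto 0$ and $1\mapsto 1$ is a morphism of the underlying distributive lattices, so Proposition \ref{6xP6.5} and the restriction construction give a pair of functors:

```latex
% Extension and restriction of scalars along the inclusion of the
% Boolean semifield B = {0,1} into a finite Heyting algebra H:
\begin{equation*}
\_\_\otimes_{\mathbb B}H:Heymod_{\mathbb B}\longrightarrow Heymod_{H}
\qquad
Res^{H}_{\mathbb B}:Heymod_{H}\longrightarrow Heymod_{\mathbb B}
\end{equation*}
```

Here $Res^{H}_{\mathbb B}$ simply forgets the $H$-action down to the underlying module over $\mathbb B$ in the sense of \cite{one}, since $0\wedge m=0$ and $1\wedge m=m$ for every $m$.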
We record the following observation. \begin{thm}\label{6xP6.6} Let $f:H\longrightarrow H'$ be a morphism between finite Heyting algebras. Let $N'\in Heymod_{H'}$ and set $N:=Res^{H'}_H(N')$. Then, we have $ f((n_1\rightarrow n_2)_{N})\leq (n_1\rightarrow n_2)_{N'}$ for all $n_1,n_2\in N' $. \end{thm} \begin{proof} The $H$-module structure on $N=Res^{H'}_H(N')$ is given by $(c,n)\mapsto f(c)\wedge n$ for every $c\in H$ and $n\in N$. From the construction in Proposition \ref{P3.5}, we now see that $(n_1\rightarrow n_2)_N\in H$ is the supremum of all elements $c\in H$ such that $f(c)\wedge n_1\leq n_2$. Also, $(n_1\rightarrow n_2)_{N'}\in H'$ is the supremum of all elements $c'\in H'$ such that $c'\wedge n_1\leq n_2$. In particular, this means that if $c\in H$ is such that $f(c)\wedge n_1\leq n_2$, we will have $f(c)\leq (n_1\rightarrow n_2)_{N'}$. Since $H$ is finite and $f$ preserves finite joins, it is now clear that $ f((n_1\rightarrow n_2)_{N})\leq (n_1\rightarrow n_2)_{N'}$ for all $n_1,n_2\in N' $. \end{proof} \begin{thm}\label{6xP6.7} Let $f:H\longrightarrow H'$ be a morphism between finite Heyting algebras. Then, the functor $\_\_\otimes_HH':Heymod_H \longrightarrow Heymod_{H'}$ is left adjoint to the restriction $Res^{H'}_H:Heymod_{H'}\longrightarrow Heymod_H$. \end{thm} \begin{proof} We consider $M\in Heymod_H$, $N\in Heymod_{H'}$ and a morphism $g_1:M\otimes_HH'\longrightarrow N$ in $Heymod_{H'}$. Then, $Res^{H'}_H(g_1)$ is a morphism in $Heymod_H$ and we compose it with $1_M\otimes_Hf:M\otimes_HH\longrightarrow M\otimes_HH'$ to obtain a morphism $M=M\otimes_HH\longrightarrow Res_H^{H'}(N)$ in $Heymod_H$. Conversely, suppose that we are given a morphism $g_2:M\longrightarrow Res^{H'}_H(N)$ in $Heymod_H$. Then, $g_2$ induces a morphism $g_2\otimes_HH': M\otimes_HH'\longrightarrow Res^{H'}_H(N)\otimes_HH'$ in $Heymod_{H'}$. Since $N\in Heymod_{H'}$, we have a canonical morphism $Res^{H'}_H(N)\otimes_HH'\longrightarrow N$ in $Heymod_{H'}$. 
Composing this with $g_2\otimes_HH'$, we obtain a morphism $M\otimes_HH'\longrightarrow N$ in $Heymod_{H'}$. It is clear that these two associations are inverse to each other and this proves the result. \end{proof} \section{The Kleisli category $\mathbf{Heymod_H^2}$} We continue with $H$ being a finite Heyting algebra. From Section 5, we see that the sequence \begin{equation}\label{hexact} \xymatrix{ Ker_p(f)\ar@<-.5ex>[r] \ar@<.5ex>[r] & M }\overset{f}{\longrightarrow} \xymatrix{ N\ar@<-.5ex>[r] \ar@<.5ex>[r] & Coker_p(f) } \end{equation} replaces the usual exact sequence involving the kernel and the cokernel in an abelian category. In a manner similar to \cite[$\S$ 3,4]{one}, the sequence in \eqref{hexact} suggests that we study the category $Heymod_H^2$ which has the same objects as $Heymod_H$, but each morphism $M\longrightarrow N$ in $Heymod_H^2$ is a pair $(f,g):M\longrightarrow N$ of morphisms in $Heymod_H$. The composition of morphisms in $Heymod_H^2$ follows from the intuition that the pair $(f,g)$ should play the role of ``$f-g$.'' As such, the composition law for morphisms in $Heymod_H^2$ is given by: \begin{equation}\label{composition} (f',g')\circ (f,g)=(f'\circ f+g'\circ g,f'\circ g+g'\circ f) \end{equation} The category $Heymod_H$ is canonically embedded in $Heymod_H^2$ by taking any morphism $f$ to the pair $(f,0)$, giving a functor $\kappa_H:Heymod_H\hookrightarrow Heymod_H^2$. The construction of $Heymod_H^2$ may be understood more categorically as follows: we recall the notion of a comonad on a category $\mathcal C$. \begin{defn}\label{comonad} (see, for instance, \cite[p 219]{Borceux}) Given a category $\mathcal C$, the composition of functors determines a monoidal (but not necessarily symmetric monoidal) structure on the category $Fun(\mathcal C,\mathcal C)$ of endofunctors $\mathcal C\longrightarrow\mathcal C$. A comonad on $\mathcal C$ is a comonoid in the monoidal category $Fun(\mathcal C,\mathcal C)$.
More explicitly, a comonad on $\mathcal C$ is a triple $(\bigperp,\delta,\epsilon)$ consisting of a functor $\bigperp:\mathcal C \longrightarrow\mathcal C$ and natural transformations \begin{equation}\label{comonad2} \delta: \bigperp\longrightarrow \bigperp\circ\bigperp \qquad \epsilon : \bigperp\longrightarrow id \end{equation} satisfying the conditions for coassociativity and counity respectively. \end{defn} \begin{thm}\label{P6.2} Let $H$ be a finite Heyting algebra. Then, the endofunctor $\bigperp :Heymod_H\longrightarrow Heymod_H$ defined by taking any object $M\in Heymod_H$ to $M^2$ and any morphism $f:M\longrightarrow N$ to $(f,f):M^2 \longrightarrow N^2$ determines a comonad on $Heymod_H$. \end{thm} \begin{proof} The natural transformations $\epsilon : \bigperp\longrightarrow id$ and $\delta: \bigperp\longrightarrow \bigperp\circ\bigperp$ are defined by setting for each $M\in Heymod_H$: \begin{equation}\label{eq6.4} \begin{array}{c} \epsilon(M):\bigperp M=M^2\longrightarrow M \qquad (m,n)\mapsto m\\ \delta(M): \bigperp M=M^2 \longrightarrow \bigperp\circ\bigperp M=(M^2)^2 \qquad (m,n)\mapsto (m,n,n,m)\\ \end{array} \end{equation} It is clear that the morphisms in \eqref{eq6.4} lie in $Heymod_H$. 
The counit property of $\epsilon$ follows from the commutativity of the following diagram \begin{equation*} \begin{CD} (m,n)\in \bigperp M @>\delta(M)>> \bigperp (\bigperp M)\ni (m,n,n,m) \\ @V\delta(M)VV @VV\epsilon(\bigperp M)V\\ (m,n,n,m)\in \bigperp (\bigperp M) @>\bigperp(\epsilon(M))>> \bigperp M \ni (m,n)\\ \end{CD} \end{equation*} The coassociativity property of $\delta$ follows from the commutativity of the following diagram \begin{equation*} \begin{CD} (m,n)\in \bigperp M @>\delta(M)>> \bigperp (\bigperp M)\ni ((m,n),(n,m)) \\ @V\delta(M)VV @VV\delta(\bigperp M)V\\ ((m,n),(n,m))\in \bigperp (\bigperp M)@>\bigperp(\delta(M))>> \bigperp\bigperp\bigperp M\ni ((m,n),(n,m),(n,m),(m,n))=((m,n,n,m),(n,m,m,n))\\ \end{CD} \end{equation*} \end{proof} By definition, the Kleisli category $Kl_{\bigperp}(\mathcal C)$ of a comonad $\bigperp$ (see \cite[p 192]{Borceux}) on a category $\mathcal C$ is constructed as follows: the objects of $Kl_{\bigperp}(\mathcal C)$ are the same as those of $\mathcal C$ and the morphism sets are defined by setting: \begin{equation} Kl_{\bigperp}(C_1,C_2):=\mathcal C(\bigperp C_1,C_2)\qquad\forall\textrm{ }C_1,C_2\in Ob(\mathcal C) \end{equation} Given morphisms $f\in Kl_{\bigperp}(C_1,C_2)=\mathcal C(\bigperp C_1,C_2)$ and $g\in Kl_{\bigperp}(C_2,C_3) =\mathcal C(\bigperp C_2,C_3)$, the composition is given by \begin{equation}\label{eq6.6} \left( \begin{CD}\bigperp C_1 @>\delta(C_1)>>\bigperp(\bigperp C_1) @>{\bigperp f}>> \bigperp C_2 @>g>> C_3\end{CD}\right)\in \mathcal C(\bigperp C_1,C_3)=Kl_{\bigperp}(C_1,C_3) \end{equation} \begin{thm}\label{P6.3} For a finite Heyting algebra $H$, the Kleisli category of the comonad $\bigperp$ is given by $Heymod_H^2$. \end{thm} \begin{proof} By definition, a morphism from $M$ to $N$ in $Kl_{\bigperp}(Heymod_H)$ consists of a morphism $M^2\longrightarrow N$ in $Heymod_H$, i.e., a pair $(f,g)$ of morphisms from $M$ to $N$ in $Heymod_H$.
We consider a pair $(f',g')$ of morphisms from $N$ to $P$ in $Heymod_H$. We calculate $(f',g')\circ (f,g)$ as per the composition law for the Kleisli category in \eqref{eq6.6}. For this, we choose $(m,n)\in M^2=\bigperp M$. Then, we know that $\delta(M)(m,n)=(m,n,n,m)\in (M^2)^2$. From the morphism $(f,g):M^2\longrightarrow N$, which takes $(m,n)\mapsto f(m)\vee g(n)$, we obtain \begin{equation} \bigperp (f,g)((m,n),(n,m))=((f,g),(f,g))((m,n),(n,m))=(f(m)\vee g(n),f(n)\vee g(m))\in N^2=\bigperp N \end{equation} Finally, the pair $(f',g')$ takes $(f(m)\vee g(n),f(n)\vee g(m))\in N^2=\bigperp N$ to $(f'\circ f)(m)\vee (f'\circ g)(n)\vee (g'\circ f)(n)\vee (g'\circ g)(m)\in P$. It is now clear that the composition in the Kleisli category $Kl_{\bigperp}(Heymod_H)$ is identical to the composition in the category $Heymod_H^2$ described in \eqref{composition}. \end{proof} The kernel pair of a morphism $\xymatrix{ M \ar@<-.5ex>[r]_{g} \ar@<.5ex>[r]^{f} & N }$ in $Heymod_H^2$ is now defined by setting \begin{equation}\label{kerp1} Ker_p(f,g):=\{\mbox{$(m_1,m_2)\in M\times M$ $\vert$ $f(m_1)\vee g(m_2)=f(m_2)\vee g(m_1)$ }\} \end{equation} If $g=0$, it is clear that \eqref{kerp1} recovers the notion of the kernel pair in \eqref{kerp}. \begin{lem}\label{L6.4} Given a morphism $\xymatrix{ M \ar@<-.5ex>[r]_{g} \ar@<.5ex>[r]^{f} & N }$ in $Heymod_H^2$, $Ker_p(f,g)\subseteq M\times M$ is a Heyting submodule. \end{lem} \begin{proof} Given $(m_1,m_2)$, $(m_1',m_2')\in Ker_p(f,g)$, we see that \begin{equation} \begin{array}{ll} f(m_1\vee m_1')\vee g(m_2\vee m_2')=&(f(m_1)\vee g(m_2))\vee (f(m_1')\vee g(m_2'))\\ &=(f(m_2)\vee g(m_1))\vee (f(m_2')\vee g(m_1'))\\ &=f(m_2\vee m_2') \vee g(m_1\vee m_1')\\ \end{array} \end{equation} It follows that $(m_1,m_2)\vee (m_1',m_2')\in Ker_p(f,g)$.
Also, for any $c\in H$, we see that \begin{equation} f(c\wedge m_1)\vee g(c\wedge m_2)=c\wedge (f(m_1)\vee g(m_2))=c\wedge (f(m_2)\vee g(m_1))=f(c\wedge m_2)\vee g(c\wedge m_1) \end{equation} and hence $(c\wedge m_1,c\wedge m_2)\in Ker_p(f,g)$. \end{proof} While Lemma \ref{L6.4} shows that $Ker_p(f,g)$ is a Heyting submodule of $M\times M$, it should be pointed out that unlike the case of $Ker_p(f)$ in Proposition \ref{usP5.6}, $Ker_p(f,g) \subseteq M\times M$ does not define a congruence relation on $M$. Also, $Ker_p(f,g)$ defined in \eqref{kerp1} is not an equalizer unlike \eqref{kerp}. Being a submodule of $M\times M$, $Ker_p(f,g)$ is equipped with two canonical morphisms to $M$, which determine a morphism $\xymatrix{ Ker_p(f,g) \ar@<-.5ex>[r]_(0.6){k_2} \ar@<.5ex>[r]^(0.6){k_1} & M }$ in $Heymod_H^2$. The next result gives us something resembling a universal property for $Ker_p(f,g)$. For this, we note that the intuition of a morphism $(f,g)$ in $Heymod_H^2$ corresponding to ``$f-g$'' suggests that a composition $(f',g')\circ (f,g)$ in $Heymod_H^2$ ``corresponds to zero'' if and only if $f'\circ f+g'\circ g=f'\circ g+g'\circ f$. \begin{thm}\label{P6.5} Let $\xymatrix{ M \ar@<-.5ex>[r]_{g} \ar@<.5ex>[r]^{f} & N }$ be a morphism in $Heymod_H^2$. Then the morphisms \begin{equation} \xymatrix{ Ker_p(f,g) \ar@<-.5ex>[r]_(0.6){k_2} \ar@<.5ex>[r]^(0.6){k_1} & M \ar@<-.5ex>[r]_{g} \ar@<.5ex>[r]^{f} & N } \end{equation} in $Heymod_H^2$ satisfy $f\circ k_1+g\circ k_2=g\circ k_1+f\circ k_2$. Further, if there exists a morphism $\xymatrix{L\ar@<-.5ex>[r]_{h_2} \ar@<.5ex>[r]^{h_1}&M}$ in $Heymod_H^2$ satisfying $f\circ h_1+g\circ h_2=g\circ h_1+f\circ h_2$, then $(h_1,h_2)$ factors through $(k_1,k_2)$. \end{thm} \begin{proof} If $(m,m')\in Ker_p(f,g)$, then $k_1(m,m')=m$ and $k_2(m,m')=m'$. It follows from the definition in \eqref{kerp1} that $f(m)\vee g(m')=f(m')\vee g(m)$ and hence $f\circ k_1+g\circ k_2=g\circ k_1+f\circ k_2$. 
From the definition in \eqref{kerp1}, it is also clear that for any $l\in L$, the element $(h_1(l),h_2(l))\in M\times M$ actually lies in $Ker_p(f,g)$. This gives us a morphism $L\longrightarrow Ker_p(f,g)$ in $Heymod_H$ and hence a morphism $\xymatrixcolsep{3pc}\xymatrix{L\ar@<-.5ex>[r]_(0.3){0} \ar@<.5ex>[r]^(0.3){(h_1,h_2)}& Ker_p(f,g)}$ in $Heymod_H^2$. We now have the composition \begin{equation} (k_1,k_2)\circ ((h_1,h_2),0)=(k_1\circ (h_1,h_2),k_2\circ (h_1,h_2))=(h_1,h_2) \end{equation} in $Heymod_H^2$, which proves the result. \end{proof} Given a morphism $\xymatrix{L\ar@<-.5ex>[r]_{f_2} \ar@<.5ex>[r]^{f_1}&M}$ in $Heymod_H^2$, we now set \begin{equation}\label{im2} I(f_1,f_2):=\{\mbox{$(f_1(x)\vee f_2(y),f_1(y)\vee f_2(x))$ $\vert$ $x$, $y\in L$}\}\subseteq M\times M \end{equation} It is evident that $I(f_1,f_2)$ is a Heyting submodule of $M\times M$. \begin{lem}\label{L6.6} Consider a composition of morphisms in $Heymod_H^2$ as follows: \begin{equation}\label{eq6.14} \xymatrix{L\ar@<-.5ex>[r]_{f_2} \ar@<.5ex>[r]^{f_1}&M\ar@<-.5ex>[r]_{g_2} \ar@<.5ex>[r]^{g_1}&N} \end{equation} Then, we have: \begin{equation} I(f_1,f_2)\subseteq Ker_p(g_1,g_2)\quad\Leftrightarrow \quad g_1\circ f_1+g_2\circ f_2=g_1\circ f_2+g_2\circ f_1 \end{equation} \end{lem} \begin{proof} We see that \begin{equation}\label{eq6.16} \begin{array}{l} I(f_1,f_2)\subseteq Ker_p(g_1,g_2)\\ \Leftrightarrow (f_1(x)\vee f_2(y),f_1(y)\vee f_2(x)) \in Ker_p(g_1,g_2)\\ \Leftrightarrow g_1(f_1(x)\vee f_2(y))\vee g_2(f_1(y)\vee f_2(x))=g_1(f_1(y)\vee f_2(x))\vee g_2(f_1(x)\vee f_2(y))\\ \Leftrightarrow (g_1\circ f_1+g_2\circ f_2)(x)\vee (g_1\circ f_2+g_2\circ f_1)(y)=(g_1\circ f_2+g_2\circ f_1)(x)\vee (g_1\circ f_1+g_2\circ f_2)(y)\\ \end{array} \end{equation} for all $x$, $y\in L$. Then, $I(f_1,f_2)\subseteq Ker_p(g_1,g_2)\Rightarrow g_1\circ f_1+g_2\circ f_2=g_1\circ f_2+g_2\circ f_1$ follows by setting $y=0$ in \eqref{eq6.16}. The other implication is also clear from \eqref{eq6.16}. 
\end{proof} We notice here that with a composition as in \eqref{eq6.14} both $Ker_p(g_1,g_2)\subseteq M\times M$ and $I(f_1,f_2)\subseteq M\times M$ are symmetric submodules of $M\times M$, i.e., they contain an ordered pair $(m,m')$ if and only if they also contain $(m',m)$. We are now ready to define strict exactness in $Heymod_H^2$ in a manner parallel to \cite[Definition 4.4]{one}. \begin{defn}\label{D6.7} A sequence of morphisms $\xymatrix{L\ar@<-.5ex>[r]_{f_2} \ar@<.5ex>[r]^{f_1}&M\ar@<-.5ex>[r]_{g_2} \ar@<.5ex>[r]^{g_1}&N}$ in $Heymod_H^2$ is strict exact at $M$ if $I(f_1,f_2)+\Delta_M=Ker_p(g_1,g_2)$. Here, $\Delta_M=\{\mbox{$(m,m)$ $\vert$ $m\in M$}\}\subseteq M\times M$ is the diagonal submodule of $M\times M$. \end{defn} \begin{thm}\label{P6.8} (a) A morphism $\xymatrix{L\ar@<-.5ex>[r]_{f_2} \ar@<.5ex>[r]^{f_1}&M}$ in $Heymod_H^2$ is a monomorphism if and only if the induced map $L^2\longrightarrow M^2$ given by $(x,y)\mapsto (f_1(x)\vee f_2(y),f_1(y)\vee f_2(x))$ is injective. (b) A monomorphism $\xymatrix{L\ar@<-.5ex>[r]_{f_2} \ar@<.5ex>[r]^{f_1}&M}$ in $Heymod_H^2$ induces a strict exact sequence $\xymatrix{0\ar@<-.5ex>[r]_{0} \ar@<.5ex>[r]^{0}&L\ar@<-.5ex>[r]_{f_2} \ar@<.5ex>[r]^{f_1}&M}$ in $Heymod_H^2$. (c) A sequence $\xymatrix{0\ar@<-.5ex>[r]_{0} \ar@<.5ex>[r]^{0}&L\ar@<-.5ex>[r]_{f_2} \ar@<.5ex>[r]^{f_1}&M}$ in $Heymod_H^2$ is strict exact at $L$ if and only if \begin{equation} f_1(x)\vee f_2(y)=f_1(y)\vee f_2(x)\qquad\Leftrightarrow \qquad x=y \end{equation} \end{thm} \begin{proof} If $L$ is any Heyting module, each element $l\in L$ determines a morphism $H\longrightarrow L$, $x\mapsto x\wedge l$ in $Heymod_H$. The rest of the proof is analogous to that of \cite[Proposition 4.6]{one} and \cite[Proposition 4.10]{one}. \end{proof} We now come to the epimorphisms in $Heymod_H^2$ and the corresponding strict exact sequences. \begin{thm}\label{P7.9} Let $\begin{CD} M@>\phi=(f,g)>> N@>>> 0\end{CD}$ be a sequence of morphisms in $Heymod_H^2$. 
(a) The following are equivalent: $\hspace{0.2in}$(1) The sequence $\begin{CD} M@>\phi=(f,g)>> N@>>> 0\end{CD}$ is strictly exact at $N$. $\hspace{0.2in}$(2) $\{\mbox{$f(x)\vee g(y)$ $\vert$ $f(y)\vee g(x)=0$, $x$, $y\in M$}\}=N$. $\hspace{0.2in}$(3) $I(f,g)=N\times N$. (b) If the sequence $\begin{CD} M@>\phi=(f,g)>> N@>>> 0\end{CD}$ is strictly exact at $N$, then the morphism $\phi=(f,g)$ is an epimorphism in $Heymod_H^2$. \end{thm} \begin{proof} The proof of (a) is analogous to that of \cite[Proposition 4.5]{one}. For (b), we proceed as follows: if the sequence $\begin{CD} M@>\phi=(f,g)>> N@>>> 0\end{CD}$ is strictly exact at $N$, we have $I(f,g)=N\times N$. Explicitly speaking, this means that \begin{equation}\label{eq7.17} I(f,g):=\{\mbox{$(f(x)\vee g(y),f(y)\vee g(x))$ $\vert$ $x$, $y\in M$}\}= N\times N \end{equation} Let $\psi=(\psi_1,\psi_2):N\longrightarrow P$ and $\psi'=(\psi'_1,\psi'_2):N\longrightarrow P$ be morphisms in $Heymod_H^2$ such that $\psi\circ \phi=\psi'\circ \phi$. Writing this out explicitly, we get \begin{equation}\label{eq7.18} \begin{array}{c} \psi_1\circ f+\psi_2\circ g=\psi'_1\circ f+\psi'_2\circ g \qquad \psi_1\circ g+\psi_2\circ f=\psi'_1\circ g+\psi'_2\circ f\\ \end{array} \end{equation} The morphisms $\psi$ and $\psi'$ induce morphisms $\tilde\psi=(\psi_1,\psi_2):N^2\longrightarrow P$ and $\tilde\psi'=(\psi'_1,\psi'_2):N^2\longrightarrow P$ in $Heymod_H$ given by \begin{equation}\label{eq7.19} \tilde\psi(z,w)=\psi_1(z)\vee \psi_2(w)\qquad \tilde\psi'(z,w)=\psi'_1(z)\vee \psi'_2(w)\qquad\forall\textrm{ }z,w\in N \end{equation} For elements $x$, $y\in M$, \eqref{eq7.19} now gives \begin{equation}\label{eq7.20} \begin{array}{c} \tilde\psi(f(x)\vee g(y),f(y)\vee g(x))=\psi_1(f(x)\vee g(y))\vee \psi_2(f(y)\vee g(x)) \\ \tilde\psi'(f(x)\vee g(y),f(y)\vee g(x))=\psi'_1(f(x)\vee g(y))\vee \psi'_2(f(y)\vee g(x)) \\ \end{array} \end{equation} Applying \eqref{eq7.18}, we obtain $\tilde\psi|I(f,g)=\tilde\psi'|I(f,g)$. 
Since $I(f,g)=N\times N$, this gives $\tilde\psi=\tilde\psi':N\times N\longrightarrow P$. In particular, $\psi_1(z)=\tilde\psi(z,0)=\tilde\psi'(z,0)=\psi'_1(z)$ and $\psi_2(w)=\tilde\psi(0,w)=\tilde\psi'(0,w)=\psi'_2(w)$ for $z$, $w\in N$, i.e., $\psi=\psi'$. Hence, $\phi$ is an epimorphism in $Heymod_H^2$. \end{proof} \begin{thm}\label{P7.10} Let $H$ be a finite Boolean algebra. Then, the sequence $\begin{CD} M@>\phi=(f,g)>> N@>>> 0\end{CD}$ in $Heymod_H^2$ is strictly exact at $N$ if and only if the morphism $\phi=(f,g)$ is an epimorphism in $Heymod_H^2$. \end{thm} \begin{proof} The ``only if part'' of this result already follows from Proposition \ref{P7.9}. For the ``if part,'' we maintain the notation from the proof of Proposition \ref{P7.9}. Let $\tilde\psi=(\psi_1,\psi_2):N^2\longrightarrow P$ and $\tilde\psi'=(\psi'_1,\psi'_2):N^2\longrightarrow P$ be morphisms in $Heymod_H$ satisfying $\tilde\psi|I(f,g)=\tilde\psi'|I(f,g)$. Putting $y=0$ in \eqref{eq7.20}, we get \begin{equation}\label{eq7.21} (\psi_1\circ f+\psi_2\circ g)(x)=\tilde\psi(f(x),g(x))=\tilde\psi'(f(x),g(x))=(\psi'_1\circ f+\psi'_2\circ g)(x) \end{equation} Similarly, putting $x=0$ in \eqref{eq7.20} gives $\psi_1\circ g+\psi_2\circ f=\psi'_1\circ g+\psi'_2\circ f$. This means that $\psi\circ \phi=\psi'\circ \phi$ in $Heymod_H^2$, where $\psi$ is given by the pair $(\psi_1,\psi_2)$ and $\psi'$ by the pair $(\psi'_1,\psi'_2)$. Since $\phi$ is an epimorphism in $Heymod_H^2$, we obtain $\psi=\psi'$. Then, $\psi_1=\psi'_1$ and $\psi_2=\psi'_2$ and hence $\tilde\psi=\tilde\psi'$. Since $H$ is a finite Boolean algebra, we may now apply Proposition \ref{P4.4} to prove that $I(f,g)=N\times N$. \end{proof} After monomorphisms and epimorphisms, we have to treat the isomorphisms in $Heymod_H^2$. 
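For instance, consider the pair $(0,id_N):N\longrightarrow N$ in $Heymod_H^2$ for any $N\in Heymod_H$. The composition law \eqref{composition} gives \begin{equation*} (0,id_N)\circ (0,id_N)=(0\circ 0+id_N\circ id_N,0\circ id_N+id_N\circ 0)=(id_N,0) \end{equation*} so that $(0,id_N)$ is an isomorphism in $Heymod_H^2$ which is its own inverse. In the notation of the next result, it corresponds to the identity $h=id_N$ together with the decomposition $N=0\times N$, i.e., $N_1=0$ and $N_2=N$.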
\begin{thm}\label{P7.11} A sequence $\xymatrix{0\ar@<-.5ex>[r]_{0} \ar@<.5ex>[r]^{0}& M\ar@<-.5ex>[r]_{g} \ar@<.5ex>[r]^{f}&N \ar@<-.5ex>[r]_{0} \ar@<.5ex>[r]^{0}&0}$ is strict exact in $Heymod_H^2$ if and only if $\xymatrix{M\ar@<-.5ex>[r]_{g} \ar@<.5ex>[r]^{f}&N}$ is an isomorphism in $Heymod_H^2$. Further, such a strict exact sequence corresponds to an isomorphism $h:M\overset{\cong}{\longrightarrow}N$ in $Heymod_H$ and a unique decomposition $N=N_1\times N_2$ such that $f$ and $g$ are induced respectively by the canonical projections $N_1\times N_2\longrightarrow N_1$ and $N_1\times N_2\longrightarrow N_2$ as follows: \begin{equation} f:M\overset{h}{\longrightarrow}N=N_1\times N_2\longrightarrow N_1 \hookrightarrow N \qquad g:M\overset{h}{\longrightarrow}N=N_1\times N_2\longrightarrow N_2 \hookrightarrow N \end{equation} \end{thm} \begin{proof} This may be proved in a manner similar to \cite[Proposition 4.11 \& Proposition 4.12]{one}. \end{proof} From the definition in \eqref{6ins}, we know that for any $M$, $M'\in Heymod_H$, the morphisms in $Heymod_H$ from $M$ to $M'$ form a Heyting module. Fix $M\in Heymod_H$. Considering products in $Heymod_H$, it follows that the association \begin{equation}\label{eq7.24} N\mapsto Heymod_H^2(M,N)=Heymod_H(M,N)\times Heymod_H(M,N) \end{equation} for each $N\in Heymod_H^2$ determines a covariant functor $Heymod_H^2(M,\_\_):Heymod_H^2\longrightarrow Heymod_H$. Given $N\subseteq M$ in $Heymod_H$, we now define \begin{equation}\label{7quo} (M/N): Heymod_H^2\longrightarrow Heymod_H\qquad P\mapsto \{\mbox{$(f,g)\in Heymod_H^2(M,P)$ $\vert$ $f(x)=g(x)$ for all $x\in N$}\} \end{equation} \begin{lem}\label{L7.12} (a) For $N\subseteq M$ in $Heymod_H$, the association in \eqref{7quo} defines a functor $(M/N): Heymod_H^2\longrightarrow Heymod_H$.
(b) For each $P\in Heymod_H^2$, the involution $\sigma(P):(M/N)(P)\longrightarrow (M/N)(P)$ given by $(f,g)\mapsto (g,f)$ determines an involutive natural transformation of functors $\sigma:(M/N)\longrightarrow (M/N)$. \end{lem} \begin{proof} We consider a morphism $(f',g')\in Heymod_H^2(P,P')$ and some $(f,g)\in (M/N)(P)$. By definition, the composition $(f',g')\circ (f,g)$ is given by $(f'\circ f+ g'\circ g,f'\circ g+ g'\circ f )$. Since $f|N=g|N$, it is clear that $(f'\circ f+ g'\circ g)|N=(f'\circ g+ g'\circ f )|N$. This proves (a). The result of (b) is also clear from the definitions. \end{proof} \begin{thm}\label{P7.13} Let $N\subseteq M$ in $Heymod_H$. (a) The cokernel pair of the inclusion $i:N\hookrightarrow M$ is given by the quotient of $M\times M$ over the equivalence relation \begin{equation}\label{eq7.26} (x,y)\sim (x',y')\quad \Leftrightarrow\quad f(x)\vee g(y)=f(x')\vee g(y'), \textrm{ }\forall\textrm{ }P\in Heymod_H^2, \textrm{ }(f,g)\in (M/N)(P) \end{equation} (b) Set $Q:=Coker_p(i)$. Then, there is a canonical isomorphism of functors $(M/N)\circ \kappa_H \overset{\cong}{\longrightarrow}Heymod_H(Q,\_\_)$, where $\kappa_H$ is the canonical embedding $\kappa_H: Heymod_H\hookrightarrow Heymod_H^2$. \end{thm} \begin{proof} Let $e_1$, $e_2:M\hookrightarrow M^2$ be the canonical inclusions. From Proposition \ref{vP5.5} and from the definition in \eqref{cokerp}, we know that the cokernel pair of $i:N\hookrightarrow M$ is given by taking the quotient of $M\times M$ over the equivalence relation $(x,y)\sim (x',y')$ if $t(x,y)=t(x',y')$ for every $t:M\times M\longrightarrow P$ in $Heymod_H$ such that $t\circ (e_1\circ i)=t\circ (e_2\circ i)$. Each $t:M\times M\longrightarrow P$ corresponds to an ordered pair $(f,g)$ of morphisms from $M$ to $P$ such that $t(x,y)=f(x)\vee g(y)$ for each $(x,y)\in M\times M$. Further $t:M\times M\longrightarrow P$ satisfies $t\circ (e_1\circ i)=t\circ (e_2\circ i)$ if and only if $f(n)=g(n)$ for each $n\in N$. 
The result of (a) is now clear. Turning to (b), any morphism in $Heymod_H(Q,P)$ corresponds to a morphism $t:M\times M\longrightarrow P$ in $Heymod_H$ such that $t(x,y)=t(x',y')$ whenever $(x,y)\sim (x',y')$. Suppose that $t$ is given by the ordered pair $(f_1,f_2)$ of morphisms $M\longrightarrow P$. If $n\in N$, we notice that $(n,0)\sim (0,n)$ for the equivalence relation in \eqref{eq7.26}. Then, $f_1(n)=t(n,0)=t(0,n)=f_2(n)$, i.e., $(f_1,f_2)\in ((M/N)\circ \kappa_H)(P)$. Conversely, we consider $(g_1,g_2)\in ((M/N)\circ \kappa_H)(P)$, i.e., morphisms $g_1,g_2:M\longrightarrow P$ such that $g_1|N=g_2|N$. Then, if $(x,y)\sim (x',y')$ as in \eqref{eq7.26}, we must have $g_1(x)\vee g_2(y)= g_1(x')\vee g_2(y')$. Then, the morphism $t:M\times M\longrightarrow P$ given by the ordered pair $(g_1,g_2)$ satisfies $t(x,y)=t(x',y')$ whenever $(x,y)\sim (x',y')$. Hence, $(g_1,g_2)$ induces a morphism $Q\longrightarrow P$ in $Heymod_H$. This proves (b). \end{proof} We conclude this section by explaining when the involution $\sigma$ in Lemma \ref{L7.12} is the identity. This will require an application of Proposition \ref{P4.4}, which is our analogue of the Hahn-Banach theorem. Accordingly, we will have to assume that $H$ is a finite Boolean algebra. \begin{thm}\label{P7.14} Let $H$ be a finite Boolean algebra and consider $N\subseteq M$ in $Heymod_H$. Then, the involutive natural transformation of functors $\sigma : (M/N)\longrightarrow (M/N)$ described in Lemma \ref{L7.12} is the identity if and only if $N=M$. \end{thm} \begin{proof} If $\sigma : (M/N)\longrightarrow (M/N)$ is the identity, it follows in particular that $\sigma(H): (M/N)(H)\longrightarrow (M/N)(H)$ is the identity. This means that $\phi_1=\phi_2$ for any maps $\phi_1$, $\phi_2\in M^\bigstar$ such that $\phi_1|N=\phi_2|N$. Applying Proposition \ref{P4.4}, we get $N=M$.
\end{proof} \section{The Eilenberg-Moore category $\mathbf{Heymod_H^{\mathfrak s}}$} In Proposition \ref{P6.2}, we have observed that the endofunctor $\bigperp :Heymod_H\longrightarrow Heymod_H$ defined by taking any object $M\in Heymod_H$ to $M^2$ and any morphism $f:M\longrightarrow N$ to $(f,f):M^2 \longrightarrow N^2$ determines a comonad on $Heymod_H$. \begin{defn}\label{D8.1} (see, for instance, \cite[p 189]{Borceux}) Let $\mathcal C$ be a category along with a triple $(\bigperp,\delta,\epsilon)$ determining a comonad on $\mathcal C$. A coalgebra over the comonad $(\bigperp,\delta,\epsilon)$ is a pair $(A,\xi)$ consisting of an object $A\in\mathcal C$ and a morphism $\xi:A\longrightarrow \bigperp A$ such that \begin{equation}\label{eq8.1} \epsilon(A)\circ \xi=id_A\qquad \bigperp(\xi)\circ \xi=\delta(A)\circ \xi : A\longrightarrow \bigperp\bigperp A \end{equation} The category of such coalgebras is said to be the Eilenberg-Moore category of the comonad $(\bigperp,\delta,\epsilon)$. \end{defn} For the category $Heymod_H$ and the comonad $\bigperp$, a coalgebra consists of some $M\in Heymod_H$ and $\xi: M\longrightarrow \bigperp M$ satisfying the conditions in \eqref{eq8.1}. In particular, $\epsilon(M)\circ \xi=id$ and hence $\xi$ is of the form $x\mapsto (x,\sigma(x))\in M\times M$ for each $x\in M$. It may be verified in a manner similar to \cite[Proposition 3.12]{one} that $\sigma:M\longrightarrow M$ is actually an involution and that the Eilenberg-Moore category of the comonad $\bigperp$ on $Heymod_H$ is equivalent to the category of Heyting modules equipped with an ($H$-linear) involution. In this section, we will study this Eilenberg-Moore category, which we call $Heymod_H^{\mathfrak s}$ by extending the terminology from \cite[$\S$ 5]{one}. A morphism $f:(M,\sigma)\longrightarrow (M',\sigma')$ in $Heymod_H^{\mathfrak s}$ is a morphism $f:M\longrightarrow M'$ in $Heymod_H$ that commutes with the involutions, i.e., $f\circ \sigma=\sigma'\circ f$. 
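In fact, the involutive property of $\sigma$ may be seen directly from \eqref{eq8.1}: writing $\xi(x)=(x,\sigma(x))$ and evaluating the second condition in \eqref{eq8.1} at $x\in M$, we obtain \begin{equation*} \bigperp(\xi)(\xi(x))=(\xi(x),\xi(\sigma(x)))=((x,\sigma(x)),(\sigma(x),\sigma^2(x)))\qquad \delta(M)(\xi(x))=((x,\sigma(x)),(\sigma(x),x)) \end{equation*} Comparing the last components gives $\sigma^2(x)=x$ for every $x\in M$.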
For any Heyting module $N$, we notice that $Heymod_H(H,N)=N$, i.e., each $n\in N$ corresponds to the morphism $f_n:H\longrightarrow N$ which takes $c\in H$ to $c\wedge n$. We now consider the functor: \begin{equation}\label{yon} y_H:=Heymod_H^2(H,\_\_):Heymod_H^2\longrightarrow Heymod_H\qquad N\mapsto Heymod_H^2(H,N)=N\times N \end{equation} Further, the Heyting module $N\times N$ is equipped with an obvious involution $\tau_N$ that takes $(n_1,n_2)\in N\times N$ to $(n_2,n_1)$. This involution may also be obtained by considering the morphism $(0,id):N\longrightarrow N$ in $Heymod_H^2$ and applying the functor $y_H$ in \eqref{yon}. Hence, $y_H$ may be rewritten as a functor $y_H:=Heymod_H^2(H,\_\_):Heymod_H^2\longrightarrow Heymod_H^{\mathfrak s}$. In general, if $\tilde g=(g_1,g_2):N\longrightarrow N'$ is a morphism in $Heymod_H^2$, the corresponding morphism $y_H(\tilde g):N\times N\longrightarrow N'\times N'$ is given by \begin{equation}\label{eq8.3} \begin{array}{c} (n_1,n_2)\in N\times N\mapsto (f_{n_1},f_{n_2})\in Heymod_H^2(H,N)\mapsto\hspace{2.5in}\\ \hspace{0.1in}(g_1,g_2)\circ (f_{n_1},f_{n_2})\in Heymod_H^2(H,N')\mapsto (g_1(n_1)\vee g_2(n_2),g_1(n_2)\vee g_2(n_1))\in N'\times N' \\ \end{array} \end{equation} We also notice that the morphism in \eqref{eq8.3} is compatible with the respective involutions on $N\times N$ and $N'\times N'$. Comparing \eqref{eq8.3} and the definition in \eqref{kerp1}, it follows that the kernel pair of a morphism $\tilde g=(g_1,g_2):N\longrightarrow N'$ in $Heymod_H^2$ is given by \begin{equation}\label{kerp8} Ker_p(\tilde{g}):=\{\mbox{$(n_1,n_2)\in N\times N$ $\vert$ $g_1(n_1)\vee g_2(n_2)=g_1(n_2)\vee g_2(n_1)$ }\}=y_H(\tilde g)^{-1}(\Delta_{N'}) \end{equation} where $\Delta_{N'}\subseteq N'\times N'$ is the diagonal. 
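As a simple illustration of \eqref{kerp8}, consider the morphism $(id_N,id_N):N\longrightarrow N$ in $Heymod_H^2$, which, according to the intuition that $(f,g)$ plays the role of ``$f-g$,'' corresponds to zero. From \eqref{eq8.3}, we see that $y_H(id_N,id_N)(n_1,n_2)=(n_1\vee n_2,n_1\vee n_2)\in \Delta_N$ for every $(n_1,n_2)\in N\times N$, and hence $Ker_p(id_N,id_N)=y_H(id_N,id_N)^{-1}(\Delta_N)=N\times N$, as one expects of a morphism playing the role of zero.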
Using \eqref{eq8.3}, the definition in \eqref{im2} may also be recast as \begin{equation}\label{im8} I(\tilde g):=\{\mbox{$(g_1(n_1)\vee g_2(n_2),g_1(n_2)\vee g_2(n_1))$ $\vert$ $n_1$, $n_2\in N$}\}=Range(y_H(\tilde g)) \end{equation} It may be verified in a way analogous to \cite[Lemma 5.1]{one} that $y_H:Heymod_H^2\longrightarrow Heymod_H^{\mathfrak s}$ embeds $Heymod_H^2$ as a full subcategory of $Heymod_H^{\mathfrak s}$. The result of Lemma \ref{L6.6} may now be adapted to the category $Heymod_H^{\mathfrak s}$ as follows. \begin{thm}\label{P8.2} Let $\begin{CD} L@>\tilde f=(f_1,f_2)>> M@>\tilde g=(g_1,g_2)>> N\end{CD}$ be a sequence of morphisms in $Heymod_H^2$. Then, the following are equivalent: (a) $ I(\tilde f)\subseteq Ker_p(\tilde g)$ (b) $g_1\circ f_1+g_2\circ f_2=g_1\circ f_2+g_2\circ f_1$ (c) $Range(y_H(\tilde f))\subseteq y_H(\tilde g)^{-1}(\Delta_{N})$. (d) $Range(y_H(\tilde g\circ \tilde f))\subseteq \Delta_{N}$. \end{thm} \begin{proof} The fact that (a) $\Leftrightarrow$ (b) already follows from Lemma \ref{L6.6}. Using \eqref{kerp8} and \eqref{im8}, it is clear that (c) $\Leftrightarrow$ (a). Using \eqref{kerp8}, we also see that $Range(y_H(\tilde g\circ \tilde f))\subseteq \Delta_{N}$ is equivalent to saying that $Ker_p(\tilde g\circ \tilde f)=L\times L$. Since $\tilde g\circ \tilde f=(g_1f_1+g_2f_2,g_1f_2+g_2f_1)$, it follows that $Ker_p(\tilde g\circ \tilde f)=L\times L$ is further equivalent to the condition that \begin{equation}\label{eq8.6} (g_1f_1+g_2f_2)(l_1)\vee (g_1f_2+g_2f_1)(l_2)=(g_1f_1+g_2f_2)(l_2)\vee (g_1f_2+g_2f_1)(l_1) \end{equation} for every $l_1$, $l_2\in L$. Hence, (b) $\Rightarrow$ (d). Putting $l_2=0$ in \eqref{eq8.6}, we get (d) $\Rightarrow$ (b). \end{proof} \begin{thm}\label{P8.3} Let $\tilde g=(g_1,g_2):M\longrightarrow N$ be a morphism in $Heymod_H^2$. Then: (a) $\tilde g$ is a monomorphism in $Heymod_H^2$ if and only if $y_H(\tilde g)$ is injective.
(b) If $y_H(\tilde g)$ is surjective, then $\tilde g$ is an epimorphism in $Heymod_H^2$. (c) Let $H$ be a finite Boolean algebra. Then, $y_H(\tilde g)$ is surjective if and only if $\tilde g$ is an epimorphism in $Heymod_H^2$. \end{thm} \begin{proof} From Proposition \ref{P6.8}(a), we know that $\tilde g$ is a monomorphism if and only if the induced map $M^2\longrightarrow N^2$ given by $(m_1,m_2)\mapsto (g_1(m_1)\vee g_2(m_2),g_1(m_2)\vee g_2(m_1))$ is injective. From \eqref{eq8.3}, this is equivalent to $y_H(\tilde g)$ being injective. This proves (a). From Proposition \ref{P7.9}, we know that $y_H(\tilde g)$ being surjective, i.e., $I(\tilde g)=Range(y_H(\tilde g))=N\times N$, is equivalent to the sequence $M\overset{\tilde g}{\longrightarrow}N\longrightarrow 0$ being strictly exact at $N$. In particular, Proposition \ref{P7.9} also says that this makes $\tilde g$ an epimorphism in $Heymod_H^2$. This proves (b). Additionally, if $H$ is a finite Boolean algebra, it follows from Proposition \ref{P7.10} that the sequence $M\overset{\tilde g}{\longrightarrow}N\longrightarrow 0$ being strictly exact at $N$ is equivalent to $\tilde g$ being an epimorphism in $Heymod_H^2$. This proves (c). \end{proof} Let us denote by $\mathfrak s$ the squaring functor \begin{equation} \begin{CD}\mathfrak s:Heymod_H@>\kappa_H>> Heymod_H^2@>y_H>> Heymod_H^{\mathfrak s}\end{CD} \end{equation} Then, $\mathfrak s(N)=(N\times N,\tau_N)$ for any $N\in Heymod_H$, where $\tau_N(n_1,n_2)=(n_2,n_1)$ for each $(n_1,n_2)\in N\times N$. For a morphism $f:N\longrightarrow N'$ in $Heymod_H$, the induced morphism $\mathfrak s(f)$ is given by $(f,f):(N\times N,\tau_N)\longrightarrow (N'\times N',\tau_{N'})$. We consider a morphism $\phi:(M,\sigma)\longrightarrow \mathfrak s(N)=(N\times N,\tau_N)$ in $Heymod_H^{\mathfrak s}$. 
If $\phi: M\longrightarrow N\times N$ is given by $(f,g)$, the following diagram must be commutative \begin{equation}\label{eq8.8} \begin{CD} M @>\phi=(f,g)>> N^2 \\ @V\sigma VV @V\tau_NVV \\ M @>\phi=(f,g)>> N^2 \\ \end{CD} \end{equation} From \eqref{eq8.8}, we obtain $(g(m),f(m))=(f(\sigma(m)),g(\sigma(m)))$ for each $m\in M$ and hence $g(m)=f(\sigma(m))$. Therefore, given the object $(M,\sigma)\in Heymod_H^{\mathfrak s}$, the morphism $\phi=(f,g):(M,\sigma)\longrightarrow \mathfrak s(N)=(N\times N,\tau_N)$ is determined completely by the morphism $f:M\longrightarrow N$ in $Heymod_H$. This gives us an adjunction of functors \begin{equation}\label{adj8} Heymod_H^{\mathfrak s}((M,\sigma),\mathfrak s(N))\cong Heymod_H(\mathfrak f(M,\sigma),N) \end{equation} Here $\mathfrak f:Heymod_H^{\mathfrak s}\longrightarrow Heymod_H$ is the forgetful functor. We observe that \eqref{adj8} is actually an isomorphism of Heyting modules. Suppose now that $N$ is a Heyting module and $M\subseteq N$ a Heyting submodule. From Proposition \ref{P7.13}, we know that the cokernel pair $Q=Coker_p(i)$ of the inclusion $i:M\hookrightarrow N$ is given by the quotient of $N\times N$ over the equivalence relation \begin{equation}\label{eq8.10} (x,y)\sim (x',y')\quad \Leftrightarrow\quad f(x)\vee g(y)=f(x')\vee g(y'), \textrm{ }\forall\textrm{ }P\in Heymod_H^2, \textrm{ }(f,g)\in (N/M)(P) \end{equation} From the definition in \eqref{7quo}, it is clear that $(f,g)\in (N/M)(P)$ if and only if $(g,f)\in (N/M)(P)$. Then, if $(x,y)\sim (x',y')$ according to the equivalence relation in \eqref{eq8.10}, we must also have $g(x)\vee f(y)=g(x')\vee f(y'), \textrm{ }\forall\textrm{ }P\in Heymod_H^2, \textrm{ }(f,g)\in (N/M)(P)$. In other words, if $(x,y)\sim (x',y')$ in $N\times N$, we must also have $(y,x)\sim (y',x')$. This means that the cokernel pair $Q$ is equipped with a canonical involution $\sigma$, making it an object of $Heymod_H^{\mathfrak s}$.
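For instance, suppose $M=N$. Then every $(f,g)\in (N/N)(P)$ satisfies $f=g$, so that the relation \eqref{eq8.10} reduces to the condition $x\vee y=x'\vee y'$ (take $f=g=id_N$ for one direction; the other is immediate). Hence $Q\cong N$ via $(x,y)\mapsto x\vee y$, and since $x\vee y=y\vee x$, the canonical involution $\sigma$ on $Q$ is the identity in this case.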
The adjunction in \eqref{adj8} now gives us an isomorphism \begin{equation}\label{eq8.11} Heymod_H^{\mathfrak s}((Q,\sigma),\mathfrak s(P))\cong Heymod_H(\mathfrak f(Q,\sigma),P) \end{equation} for any object $P\in Heymod_H$. Since $\mathfrak f(Q,\sigma)$ is simply $Q$ considered again as an object of $Heymod_H$, it follows from Proposition \ref{P7.13}(b) that $(N/M)(\kappa_H(P)) \overset{\cong}{\longrightarrow}Heymod_H(\mathfrak f(Q,\sigma),P)$. Combining with \eqref{eq8.11}, we get \begin{equation}\label{eq8.12} Heymod_H^{\mathfrak s}((Q,\sigma),y_H(\kappa_H(P)))=Heymod_H^{\mathfrak s}((Q,\sigma),\mathfrak s(P))\cong Heymod_H(\mathfrak f(Q,\sigma),P)=(N/M)(\kappa_H(P)) \end{equation} It follows from \eqref{eq8.12} that the functors $Heymod_H^{\mathfrak s}((Q,\sigma),y_H(\_\_)), (N/M)(\_\_):Heymod_H^2\longrightarrow Heymod_H$ coincide when restricted to $\kappa_H:Heymod_H \hookrightarrow Heymod_H^2$. It may be easily verified that the isomorphisms in \eqref{eq8.12} are well-behaved with respect to morphisms in $Heymod_H^2$ and we obtain \begin{equation}\label{q8.13} Heymod_H^{\mathfrak s}((Q,\sigma),y_H(P))\cong (N/M)(P) \end{equation} for every $P\in Heymod_H^2$. For an object $(M,\sigma)$ in $Heymod_H^{\mathfrak s}$, we will always denote by $M^\sigma$ the collection of fixed points of the involution $\sigma$. Since $\sigma$ is $H$-linear, it is clear that $M^\sigma\in Heymod_H$. We now consider a sequence $\begin{CD} L@>\tilde f=(f_1,f_2)>> M@>\tilde g=(g_1,g_2)>> N \end{CD} $ of morphisms in $Heymod_H^2$ and the corresponding sequence $\begin{CD} L^2@>y_H(\tilde f)>> M^2@> y_H(\tilde g)>> N^2 \end{CD} $ in $Heymod_H^{\mathfrak s}$. We know from Proposition \ref{P8.2} that $I(\tilde f)\subseteq Ker_p(\tilde g)$ if and only if $Range(y_H(\tilde f))\subseteq y_H(\tilde g)^{-1}(\Delta_N)$. 
Expressing the diagonal $\Delta_N\subseteq N\times N$ as the collection of fixed points of the involution $\tau_N:N\times N\longrightarrow N\times N$ on $y_H(N)$, we can rewrite this condition as $Range(y_H(\tilde f))\subseteq y_H(\tilde g)^{-1}(y_H(N)^{\tau_N})$. This motivates the idea that a composition of morphisms $\begin{CD}(L,\sigma_L)@>f>> (M,\sigma_M) @>g>> (N,\sigma_N)\end{CD}$ in $Heymod_H^{\mathfrak s}$ should be treated as ``zero'' if $Range(f)\subseteq g^{-1}(N^{\sigma_N})$. \begin{defn}\label{D8.4} A sequence $\begin{CD} (L,\sigma_L)@>f>> (M,\sigma_M) @>g>> (N,\sigma_N) \end{CD} $ in $Heymod_H^{\mathfrak s}$ is strictly exact at $M$ if $Range(f)+M^{\sigma_M}=g^{-1}(N^{\sigma_N})$. \end{defn} \begin{thm}\label{P8.5} Let $\begin{CD} L@>\tilde f=(f_1,f_2)>> M@>\tilde g=(g_1,g_2)>> N \end{CD} $ be a sequence of morphisms in $Heymod_H^2$. Then, the following are equivalent: (a) In $Heymod_H^2$, the sequence $\begin{CD} L@>\tilde f>> M@>\tilde g>> N \end{CD} $ is strictly exact at $M$. (b) In $Heymod_H^{\mathfrak s}$, the sequence $\begin{CD}y_H(L)@>y_H(\tilde f)>>y_H(M)@>y_H(\tilde g)>>y_H(N)\end{CD}$ is strictly exact at $y_H(M)$. \end{thm} \begin{proof} From \eqref{yon}, we know that $y_H(L)$, $y_H(M)$ and $y_H(N)$ are given respectively by $(L\times L,\tau_L)$, $(M\times M,\tau_M)$, $(N\times N,\tau_N)$, each equipped with the canonical involution that swaps the two components. From Definition \ref{D6.7}, $\begin{CD} L@>\tilde f=(f_1,f_2)>> M@>\tilde g=(g_1,g_2)>> N \end{CD} $ is strictly exact at $M$ if and only if $I(\tilde f)+\Delta_M=Ker_p(\tilde g)$. It is clear that the diagonal $\Delta_M=(M\times M)^{\tau_M}=y_H(M)^{\tau_M}$. From \eqref{kerp8}, we see that $Ker_p(\tilde g)=y_H(\tilde g)^{-1}(\Delta_N)=y_H(\tilde g)^{-1}((N\times N)^{\tau_N})=y_H(\tilde g)^{-1}(y_H(N)^{\tau_N})$. On the other hand, \eqref{im8} gives us $I(\tilde f)=Range(y_H(\tilde f))$. The result is now clear from Definition \ref{D8.4}. 
\end{proof} We now return to the adjunction \begin{equation}\label{adj8'} Heymod_H(\mathfrak f(M,\sigma_M),N)=Heymod_H^{\mathfrak s}((M,\sigma_M),\mathfrak s(N)) \end{equation} explained in \eqref{adj8}, where $(M,\sigma_M)\in Heymod_H^{\mathfrak s}$ and $N\in Heymod_H$. By definition, $(\mathfrak s\circ \mathfrak f)(M,\sigma_M)=(M^2,\tau_M)$, where $\tau_M:M\times M\longrightarrow M\times M$ is the involution that interchanges the components. The unit map of this adjunction $1_{Heymod_H^{\mathfrak s}}\longrightarrow (\mathfrak s\circ \mathfrak f)$ corresponds to the identity map $\mathfrak f(M,\sigma_M)=M\longrightarrow M=\mathfrak f(M,\sigma_M)$ in $Heymod_H$. Using the explicit description of this adjunction in \eqref{eq8.8}, we conclude that the morphism $\eta{(M,\sigma_M)}:(M,\sigma_M)\longrightarrow (\mathfrak s\circ \mathfrak f)(M,\sigma_M)=(M^2,\tau_M)$ in $Heymod_H^{\mathfrak s}$ given by the unit $1_{Heymod_H^{\mathfrak s}}\longrightarrow (\mathfrak s\circ \mathfrak f)$ corresponds to $(1_M,\sigma_M):M\longrightarrow M^2$. Further, if we put $\mathfrak T=\mathfrak s\circ \mathfrak f$, there is a retraction \begin{equation}\label{retract} \xi(M,\sigma_M) : \mathfrak T(M,\sigma_M)\longrightarrow (M,\sigma_M) \qquad (x,y)\mapsto x\vee \sigma_M(y) \end{equation} satisfying $\xi(M,\sigma_M)\circ \eta(M,\sigma_M)=id$. \begin{thm}\label{P8.6} For a morphism $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ in $Heymod_H^{\mathfrak s}$, the following are equivalent: (a) $f$ is a monomorphism in $Heymod_H^{\mathfrak s}$. (b) $f:L\longrightarrow M$ is injective. (c) The sequence $\begin{CD} 0@>>>\mathfrak T(L)@>\mathfrak T(f)>> \mathfrak T(M)\end{CD}$ is strictly exact at $\mathfrak T(L)$. \end{thm} \begin{proof} It is clear that (b) $\Rightarrow$ (a). For any element $x\in L$, there exists a canonical morphism $\xi_x:(H\times H,\tau_H)\longrightarrow (L,\sigma_L)$ given by $\xi_x(a,b)=(a\wedge x)\vee (b\wedge \sigma_L(x))$. 
Then, $f(x)=f(y)$ for $x$, $y\in L$ gives $f\circ \xi_x=f\circ \xi_y$. If $f$ is a monomorphism, then $\xi_x=\xi_y$ and hence $x=\xi_x(1,0)=\xi_y(1,0)=y$. Hence, (a) $\Rightarrow$ (b). By definition, $\mathfrak T=y_H\circ \kappa_H\circ \mathfrak f$. Hence, using Proposition \ref{P8.5}, we see that the sequence $\begin{CD} 0@>>>\mathfrak T(L)@>\mathfrak T(f)>> \mathfrak T(M)\end{CD}$ being strictly exact at $\mathfrak T(L)$ is equivalent to the sequence $\begin{CD} 0@>>>\kappa_H(\mathfrak f(L))@>\kappa_H(\mathfrak f(f))>> \kappa_H(\mathfrak f(M))\end{CD}$ being strictly exact at $\kappa_H(\mathfrak f(L))$. Applying Proposition \ref{P6.8}(c), the latter is equivalent to the statement that \begin{equation} f(x)=f(y) \qquad \Leftrightarrow \qquad x=y \end{equation} In other words, $f$ is injective. This proves the result. \end{proof} \begin{thm}\label{P8.7} For a morphism $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ in $Heymod_H^{\mathfrak s}$, the following are equivalent: (a) $f:L\longrightarrow M$ is surjective. (b) The sequence $\begin{CD}\mathfrak T(L)@>\mathfrak T(f)>> \mathfrak T(M)@>>>0\end{CD}$ is strictly exact at $\mathfrak T(M)$. In particular, either of these conditions implies that $f$ is an epimorphism in $Heymod_H^{\mathfrak s}$. \end{thm} \begin{proof} Again since $\mathfrak T=y_H\circ \kappa_H\circ \mathfrak f$, it follows from Proposition \ref{P8.5} that (b) is equivalent to the sequence $\begin{CD} \kappa_H(\mathfrak f(L))@>\kappa_H(\mathfrak f(f))>> \kappa_H(\mathfrak f(M))@>>>0\end{CD}$ being strictly exact at $\kappa_H(\mathfrak f(M))$. Applying Proposition \ref{P7.9}(a), the latter is equivalent to the statement that $\{\mbox{$f(x)$ $\vert$ $f(y)=0$, $x$, $y\in L$}\}=M$. Hence, (a) $\Leftrightarrow$ (b). \end{proof} As in other sections, the best results for epimorphisms are obtained when $H$ is a finite Boolean algebra. \begin{thm}\label{P8.8} Let $H$ be a finite Boolean algebra. 
For a morphism $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ in $Heymod_H^{\mathfrak s}$, the following are equivalent: (a) $f$ is an epimorphism in $Heymod_H^{\mathfrak s}$. (b) $f:L\longrightarrow M$ is surjective. (c) The sequence $\begin{CD}\mathfrak T(L)@>\mathfrak T(f)>> \mathfrak T(M)@>>>0\end{CD}$ is strictly exact at $\mathfrak T(M)$. \end{thm} \begin{proof} From Proposition \ref{P8.7}, we know that (b) $\Leftrightarrow$ (c) $\Rightarrow$ (a) for any finite Heyting algebra $H$. We now suppose (a), i.e., $f$ is an epimorphism in $Heymod_H^{\mathfrak s}$. The range $N:=Range(f)$ of $f$ is a Heyting submodule of $M$. If $N\ne M$, it follows from Proposition \ref{P4.4} that we can choose morphisms $\phi_1\ne \phi_2:M=\mathfrak f(M,\sigma_M)\longrightarrow H$ in $Heymod_H$ such that $\phi_1\circ f=\phi_2\circ f$. The adjunction in \eqref{adj8} then gives $\tilde\phi_1\ne \tilde\phi_2\in Heymod_H^{\mathfrak s}( (M,\sigma_M),(H\times H,\tau_H))$ corresponding respectively to $\phi_1$ and $\phi_2$. For any $x\in L$, we now have \begin{equation*} (\tilde\phi_1\circ f)(x)=(\phi_1(f(x)),\phi_1\sigma_M(f(x)))=(\phi_1(f(x)),(\phi_1\circ f)(\sigma_L(x)))=(\phi_2(f(x)),(\phi_2\circ f)(\sigma_L(x)))= (\tilde\phi_2\circ f)(x) \end{equation*} Since $f$ is an epimorphism in $Heymod_H^{\mathfrak s}$, this gives $\tilde\phi_1=\tilde\phi_2$ and hence $\phi_1=\phi_2$, which is a contradiction. \end{proof} \section{$Heymod_H^{\mathfrak s}$ as a semiexact category} For a finite Heyting algebra $H$, we will show in this section that $Heymod_H^{\mathfrak s}$ is a semiexact category in the sense of Grandis \cite{Grand0}, \cite[$\S$ 1.3.3]{Grandis}. For this, we will first recall several notions from \cite[Chapter 1]{Grandis}. Let $\mathcal C$ be a category. 
A collection $\mathscr N$ of morphisms of $\mathcal C$ is said to be an ideal if $f\in \mathscr N$ implies that $g\circ f\circ h\in \mathscr N$ for all morphisms $g,h$ in $\mathcal C$ such that the composition $g\circ f\circ h$ is legitimate. Further, $\mathscr N$ is said to be a closed ideal if every morphism in $\mathscr N$ factorizes through some identity morphism also in $\mathscr N$. An $N$-category is a pair $(\mathcal C,\mathscr N)$ consisting of a category $\mathcal C$ and an ideal $\mathscr N$ of morphisms of $\mathcal C$. The ideal $\mathscr N$ is referred to as the ideal of null morphisms of $\mathcal C$. An object $X$ of $\mathcal C$ is said to be null if the identity morphism $id_X\in \mathscr N$. A functor $F:(\mathcal C,\mathscr N)\longrightarrow (\mathcal C',\mathscr N')$ of $N$-categories is a functor $F:\mathcal C\longrightarrow \mathcal C'$ that preserves null morphisms. \begin{defn}\label{Dt9.1} Let $(\mathcal C,\mathscr N)$ be an $N$-category and let $f:A\longrightarrow B$ be a morphism in $\mathcal C$. A morphism $k:K\longrightarrow A$ is said to be a kernel for $f$ if it satisfies the following two conditions: (1) $f\circ k\in \mathscr N$. (2) If $h$ is a morphism in $\mathcal C$ such that $f\circ h\in \mathscr N$, then $h$ factorizes uniquely through $k$. A morphism $c:B\longrightarrow C$ is said to be a cokernel for $f$ if it satisfies the following two conditions: (1) $c\circ f\in \mathscr N$. (2) If $h$ is a morphism in $\mathcal C$ such that $h\circ f\in \mathscr N$, then $h$ factorizes uniquely through $c$. \end{defn} \begin{defn}\label{Dt9.2} A semiexact category is an $N$-category $(\mathcal C,\mathscr N)$ such that (a) $\mathscr N$ is a closed ideal. (b) Every morphism in $\mathcal C$ has a kernel and a cokernel with respect to $\mathscr N$. \end{defn} We now consider the category $Heymod_H^{\mathfrak s}$. 
We will say that a morphism $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ in $Heymod_H^{\mathfrak s}$ is null if $f(x)=\sigma_M(f(x))$ for each $x\in L$. The collection of null morphisms of $Heymod_H^{\mathfrak s}$ will be denoted by $\mathscr N$. \begin{lem}\label{Lt9.3} The collection of null morphisms is a closed ideal in $Heymod_H^{\mathfrak s}$. \end{lem} \begin{proof} We consider $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ in $\mathscr N$ and morphisms $h:(L',\sigma_{L'})\longrightarrow (L,\sigma_L)$ and $g:(M,\sigma_M)\longrightarrow (M',\sigma_{M'})$ in $Heymod_H^{\mathfrak s}$. For $x'\in L'$, we have \begin{equation*}gfh(x')=g(\sigma_M(fh(x')))=\sigma_{M'}(gfh(x')) \end{equation*} This shows that $gfh\in \mathscr N$, i.e., $\mathscr N$ is an ideal. We consider the object $(M^{\sigma_M},id)\in Heymod_H^{\mathfrak s}$. It is clear that the identity on $(M^{\sigma_M},id)$ is a null morphism in $Heymod_H^{\mathfrak s}$. Given $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ in $\mathscr N$, the condition $f=\sigma_M\circ f$ ensures that $f$ factorizes through $(M^{\sigma_M},id)$. This proves that $\mathscr N$ is a closed ideal. \end{proof} \begin{thm}\label{Pt9.4} Let $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ be a morphism in $Heymod_H^{\mathfrak s}$. Then, the canonical morphism $(f^{-1}(M^{\sigma_M}),\sigma_L|f^{-1}(M^{\sigma_M})) \longrightarrow (L,\sigma_L)$ is the kernel for $f$ with respect to $\mathscr N$. \end{thm} \begin{proof} First, we notice that $\sigma_L:L\longrightarrow L$ does restrict to an involution on $f^{-1}(M^{\sigma_M})$. Indeed, if $x\in f^{-1}(M^{\sigma_M})$, then $\sigma_Mf(\sigma_L(x))=\sigma_M^2f(x)=f(x)=\sigma_Mf(x)=f(\sigma_L(x))$, i.e., $\sigma_L(x)\in f^{-1}(M^{\sigma_M})$. Also, the composition $(f^{-1}(M^{\sigma_M}),\sigma_L|f^{-1}(M^{\sigma_M})) \longrightarrow (L,\sigma_L)\overset{f}{\longrightarrow} (M,\sigma_M)$ factors through the null object $(M^{\sigma_M},id)$.
By definition, a composition $(L',\sigma_{L'})\overset{h}{\longrightarrow} (L,\sigma_L)\overset{f}{\longrightarrow} (M,\sigma_M)$ is null if and only if $h(y)\in f^{-1}(M^{\sigma_M})$ for every $y\in L'$. Hence, any such null composition in $Heymod_H^{\mathfrak s}$ factors uniquely through the canonical morphism $(f^{-1}(M^{\sigma_M}),\sigma_L|f^{-1}(M^{\sigma_M})) \longrightarrow (L,\sigma_L)$. This proves the result. \end{proof} Accordingly, the canonical morphism described in Proposition \ref{Pt9.4} will be written as the kernel $Ker(f)$ of $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$. We now define an equivalence relation $\sim$ on $M$ as follows: \begin{equation}\label{eqf9.1} x_1{\sim}x_2 \quad\Leftrightarrow\quad g(x_1)=g(x_2)\textrm{ }\forall \mbox{$g\in Heymod_H^{\mathfrak s}((M,\sigma_M),(N,\sigma_N))$ such that $g\circ f\in\mathscr N$} \end{equation} It is easily seen that $M/\sim$ is a Heyting module. Further, if $x_1\sim x_2$ and $g\in Heymod_H^{\mathfrak s}((M,\sigma_M),(N,\sigma_N))$ is such that $g\circ f\in\mathscr N$, we see that $g(\sigma_M(x_1))=\sigma_Ng(x_1)=\sigma_Ng(x_2) = g(\sigma_M(x_2))$. It follows from \eqref{eqf9.1} that $\sigma_M(x_1)\sim \sigma_M(x_2)$, i.e., the involution $\sigma_M$ descends to an involution on $M/\sim$ that we continue to denote by $\sigma_M$. \begin{thm}\label{Pt9.5} Let $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ be a morphism in $Heymod_H^{\mathfrak s}$. Then, the canonical morphism $(M,\sigma_M)\longrightarrow (M/\sim,\sigma_M)$ is the cokernel for $f$ with respect to $\mathscr N$. \end{thm} \begin{proof} For any $g\in Heymod_H^{\mathfrak s}((M,\sigma_M),(N,\sigma_N))$ such that $g\circ f\in\mathscr N$, we note that $g\sigma_Mf(x)=\sigma_Ngf(x)=gf(x)$. It follows from \eqref{eqf9.1} that $f(x)\sim \sigma_Mf(x)$ and hence the composition $(L,\sigma_L)\overset{f}{\longrightarrow} (M,\sigma_M)\longrightarrow (M/\sim,\sigma_M)$ is null. 
The definition in \eqref{eqf9.1} also shows that any $g\in Heymod_H^{\mathfrak s}((M,\sigma_M),(N,\sigma_N))$ such that $g\circ f\in\mathscr N$ must factor through $M/\sim$. Since the involution on $M/\sim$ is induced by $\sigma_M$, it is clear that any such $g$ factors uniquely through $(M,\sigma_M)\longrightarrow (M/\sim,\sigma_M)$ in $Heymod_H^{\mathfrak s}$. This proves the result. \end{proof} Accordingly, the canonical morphism described in Proposition \ref{Pt9.5} will be written as the cokernel $Coker(f)$ of $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$. \begin{Thm}\label{Tt9.6} Let $H$ be a finite Heyting algebra. Then, $Heymod_H^{\mathfrak s}$ is a semiexact category. \end{Thm} \begin{proof} This follows from the definition of a semiexact category and by applying Lemma \ref{Lt9.3}, Proposition \ref{Pt9.4} and Proposition \ref{Pt9.5}. \end{proof} \section{Finite Boolean algebras and semiexact homological categories} We begin this section by recalling some more general facts for semiexact categories. In a semiexact category $(\mathcal C,\mathscr N)$, a morphism of the form $Ker(f)\longrightarrow L$ corresponding to some morphism $f\in \mathcal C(L,M)$ is referred to as a normal monomorphism (see \cite[$\S$ 1.3.3]{Grandis}). Similarly, a morphism of the form $M\longrightarrow Coker(f)$ corresponding to some $f\in\mathcal C(L,M)$ is referred to as a normal epimorphism. Every morphism $f:L\longrightarrow M$ in a semiexact category $(\mathcal C,\mathscr N)$ admits a unique and natural factorization of the form (see \cite[$\S$ 1.5.5]{Grandis}) \begin{equation}\label{fnormal1} \begin{CD} Ker(f) @>>> L@>f>> M@>>> Coker(f) \\ @. @VpVV @AmAA @. \\ @. Coker(Ker(f)) @>\tilde{f}>> Ker(Coker(f)) @.\\ \end{CD} \end{equation} It is clear that $p:L\longrightarrow Coker(Ker(f))$ is a normal epimorphism and $m:Ker(Coker(f))\longrightarrow M$ is a normal monomorphism. The morphism $f$ is said to be exact if $\tilde{f}$ as defined in \eqref{fnormal1} is an isomorphism.
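In $Heymod_H^{\mathfrak s}$, the left end of the factorization \eqref{fnormal1} is concrete: by Proposition \ref{Pt9.4}, $Ker(f)=f^{-1}(M^{\sigma_M})$. The following Python sketch (our own toy illustration; the module $H\times H$ and the choice of $f$ are assumptions, not from the paper) computes this kernel over the two-element Boolean algebra and checks that the composite of the kernel inclusion with $f$ is null, i.e., lands in $M^{\sigma_M}$.

```python
from itertools import product

H = (0, 1)                                  # two-element Boolean algebra
L = list(product(H, H))                     # toy objects L = M = H x H
M = list(product(H, H))
sigma_L = lambda x: (x[1], x[0])            # swap involutions on L and on M
sigma_M = lambda x: (x[1], x[0])

f = lambda x: x                             # a morphism in Heymod_H^s (the identity,
                                            # chosen only to make the kernel visible)

M_fixed = [m for m in M if sigma_M(m) == m]     # M^{sigma_M}
ker = [x for x in L if f(x) in M_fixed]         # Ker(f) = f^{-1}(M^{sigma_M})
assert ker == [(0, 0), (1, 1)]                  # here: the diagonal of H x H

# the composite Ker(f) -> (L, sigma_L) -> (M, sigma_M) is null:
assert all(f(x) == sigma_M(f(x)) for x in ker)
# sigma_L restricts to an involution on Ker(f), as in the proof of Pt9.4:
assert all(sigma_L(x) in ker for x in ker)
print("Ker(f) =", ker)
```
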
A morphism $f$ in a semiexact category $(\mathcal C,\mathscr N)$ is exact if and only if it can be factored as $k\circ h$, where $h$ is a normal epimorphism and $k$ is a normal monomorphism. In this section, we will always assume that $H$ is a finite Boolean algebra. In particular, this assumption will allow us to use Proposition \ref{P4.3} and Proposition \ref{P4.31}. We recall from Section 4 that if $M$ is a Heyting module, then the dual $M^\bigstar$ denotes the collection $Heymod_H(M,H)$ of all Heyting module morphisms from $M$ to $H$. Our purpose in this section is to show that $Heymod_H^{\mathfrak s}$ is actually a ``semiexact homological category'' in the sense of \cite[$\S$ 1.3]{Grandis}, which we will do in a manner analogous to \cite[$\S$ 6]{one}. \begin{lem}\label{Lq10.1} Let $H$ be a finite Boolean algebra. (a) For any indexing set $I$, the product $H^I$ of copies of $H$ is an injective object in $Heymod_H$. (b) Every Heyting module $M$ can be embedded as a submodule of a product of copies of $H$. \end{lem} \begin{proof}(a) Let $M\in Heymod_H$ and let $N\subseteq M$ be a Heyting submodule. By definition, any morphism $\phi:N\longrightarrow H^I$ in $Heymod_H$ corresponds to a family of morphisms $\{\phi_i:N\longrightarrow H\}_{i\in I}$. Since $H$ is a finite Boolean algebra, it follows from Proposition \ref{P4.3} that the induced morphism $M^\bigstar\longrightarrow N^\bigstar$ is surjective, i.e., we can choose for each $\phi_i:N\longrightarrow H$ a morphism $\psi_i:M\longrightarrow H$ extending $\phi_i$. The $\{\psi_i\}_{i\in I}$ combine to yield a morphism $\psi:M\longrightarrow H^I$ extending $\phi$. (b) Given $M\in Heymod_H$, we define a morphism $i_M:M\longrightarrow H^{M^\bigstar}$ by setting $i_M(m)=\{\phi(m)\}_{\phi\in M^\bigstar}$. Using Proposition \ref{P4.31}, we see that $i_M$ is an injective map. It now follows from Proposition \ref{P4.5} that $M$ is a Heyting submodule of $H^{M^\bigstar}$. 
\end{proof} \begin{lem}\label{Lq10.2} Let $H$ be a finite Boolean algebra. Then, $(E,\sigma_E)$ is an injective object in $Heymod_H^{\mathfrak s}$ if and only if $\mathfrak f(E,\sigma_E)=E$ is injective in $Heymod_H$. \end{lem} \begin{proof} We suppose that $E\in Heymod_H$ is injective. Let $i:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ be a monomorphism in $Heymod_H^{\mathfrak s}$ and let $f:(L,\sigma_L)\longrightarrow (E,\sigma_E)$ be a morphism in $Heymod_H^{\mathfrak s}$. By Proposition \ref{P8.6}, we know the underlying map $i:L\longrightarrow M$ is injective and hence a monomorphism in $Heymod_H$ (by Proposition \ref{P4.5}). Hence, there is a morphism $g:M\longrightarrow E$ in $Heymod_H$ such that $g\circ i=f$. Setting $h=g+\sigma_E\circ g\circ \sigma_M$, it may be verified easily that $h\circ \sigma_M=\sigma_E \circ h$ and that $h\circ i=f$. This shows that $(E,\sigma_E)\in Heymod_H^{\mathfrak s}$ is injective. Conversely, suppose that $(E,\sigma_E)$ is an injective object in $Heymod_H^{\mathfrak s}$. Using Lemma \ref{Lq10.1}, we can find an embedding $i:E\longrightarrow H^I$ in $Heymod_H$ for some indexing set $I$. Then, the map $E\longrightarrow H^I\times H^I$ given by $x\mapsto (i(x),i(\sigma_E(x)))$ is injective and it is easily verified that this gives a morphism $u:(E,\sigma_E)\longrightarrow \mathfrak s(H^I)$ in $Heymod_H^{\mathfrak s}$. Since $(E,\sigma_E)\in Heymod_H^{\mathfrak s}$ is injective, we have a retraction $v:\mathfrak s(H^I)\longrightarrow (E,\sigma_E)$ such that $v\circ u=id_{(E,\sigma_E)}$. It follows that $\mathfrak f(v):H^I\times H^I\longrightarrow E$ is a retraction of the map $\mathfrak f(u):E\longrightarrow H^I\times H^I$ in $Heymod_H$. By Lemma \ref{Lq10.1}, we know that $H^I\times H^I$ is injective and hence it follows that $E$ is injective in $Heymod_H$. 
\end{proof} \begin{lem}\label{Lq10.3} Let $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ be a normal monomorphism in $Heymod_H^{\mathfrak s}$, corresponding to the kernel of $g:(M,\sigma_M)\longrightarrow (N,\sigma_N)$. Then, there is a morphism $h:(N,\sigma_N)\longrightarrow (E,\sigma_E)$ such that: (1) $(E,\sigma_E)$ is injective in $Heymod_H^{\mathfrak s}$, (2) $h:N\longrightarrow E$ is an injective map and (3) $Ker(g)=((L,\sigma_L)\overset{f}{\longrightarrow}(M,\sigma_M))=Ker(h\circ g)$. \end{lem} \begin{proof} We recall from Section 8 the morphism $\eta(N,\sigma_N):(N,\sigma_N)\longrightarrow (N^2,\tau_N)$ corresponding to the unit of the adjunction between the functors $\mathfrak f:Heymod_H^{\mathfrak s}\longrightarrow Heymod_H$ and $\mathfrak s:Heymod_H\longrightarrow Heymod_H^{\mathfrak s}$. The map underlying $\eta(N,\sigma_N)$ is given by $(1_N,\sigma_N): N\longrightarrow N^2$. Using the expression for the kernel of a morphism in $Heymod_H^{\mathfrak s}$ obtained in Proposition \ref{Pt9.4}, we notice that \begin{equation} Ker(\eta(N,\sigma_N)\circ g)=(\eta(N,\sigma_N)\circ g)^{-1}((N^2)^{\tau_N})=g^{-1}(N^{\sigma_N})=Ker(g) \end{equation} Applying Lemma \ref{Lq10.1}, we can find an embedding $i:N\longrightarrow E'$ into an injective $E'$ in $Heymod_H$. It is clear that $\mathfrak s(i):\mathfrak s(N)\longrightarrow \mathfrak s(E')$ satisfies $(\mathfrak s(i))^{-1}((E'^2)^{\tau_{E'}}) =(N^2)^{\tau_N}$. It follows that \begin{equation} Ker(\mathfrak s(i)\circ \eta(N,\sigma_N)\circ g)=(\mathfrak s(i)\circ \eta(N,\sigma_N)\circ g)^{-1}((E'^2)^{\tau_{E'}})=Ker(\eta(N,\sigma_N)\circ g)=Ker(g) \end{equation} Finally, since $\mathfrak f (\mathfrak s(E'))=E'^2$ is injective in $Heymod_H$, it follows from Lemma \ref{Lq10.2} that $(E,\sigma_E):=\mathfrak s(E')$ is injective in $Heymod_H^{\mathfrak s}$. It is clear from the constructions that the map $N\longrightarrow E$ underlying $\mathfrak s(i)\circ \eta(N,\sigma_N)$ is injective. This proves the result.
\end{proof} \begin{thm}\label{Pq10.4} Let $H$ be a finite Boolean algebra. Then, the normal monomorphisms in $Heymod_H^{\mathfrak s}$ are stable under composition. \end{thm} \begin{proof} We consider normal monomorphisms $i:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ and $j:(M,\sigma_M)\longrightarrow (N,\sigma_N)$. Using Lemma \ref{Lq10.3}, we can find a morphism $f:(M,\sigma_M)\longrightarrow (E,\sigma_E)$ with $(E,\sigma_E)$ injective in $Heymod_H^{\mathfrak s}$ such that $(L,\sigma_L)=Ker(f)=(f^{-1}(E^{\sigma_E}),\sigma_L)$. Then, there exists a morphism $g:(N,\sigma_N)\longrightarrow (E,\sigma_E)$ such that $g\circ j=f$. Since $j$ is a normal monomorphism, we can write $(M,\sigma_M)=Ker(h)$ for some morphism $h:(N,\sigma_N)\longrightarrow (P,\sigma_P)$. Then, we have an induced morphism $(g,h):(N,\sigma_N) \longrightarrow (E,\sigma_E)\times (P,\sigma_P)$ and we claim that $j\circ i=Ker(g,h)$. By definition, we know that $Ker(g,h)=(g,h)^{-1}(E^{\sigma_E}\times P^{\sigma_P})$. As such, an element $n\in N$ lies in $Ker(g,h)$ if and only if $g(n)\in E^{\sigma_E}$ and $h(n)\in P^{\sigma_P}$. Since $j=Ker(h)$, the fact that $h(n)\in P^{\sigma_P}$ shows that $n=j(m)$ for some $m\in M$. Then, $f(m)=g(j(m))=g(n)\in E^{\sigma_E}$ and since $i=Ker(f)$, we obtain $m=i(l)$ for some $l\in L$. This gives us $(j\circ i:(L,\sigma_L)\longrightarrow (N,\sigma_N))=Ker(g,h)$ and the result follows. \end{proof} The next aim is to show that the normal epimorphisms in $Heymod_H^{\mathfrak s}$ are stable under composition. \begin{lem}\label{Lq10.5} Let $(\mathcal C,\mathscr N)$ be a semiexact category. Then, $e:M\longrightarrow N$ is a normal epimorphism in $(\mathcal C,\mathscr N)$ if and only if $e$ is equivalent to the morphism $M\longrightarrow Coker(Ker(e))$. \end{lem} \begin{proof} Suppose that $e$ is a normal epimorphism. 
Then, $e$ automatically factorizes as the composition of a normal epimorphism followed by a normal monomorphism, i.e., $e$ is exact (see \cite[$\S$ 1.5.5]{Grandis}). Then, the morphism $\tilde e$ appearing in the factorization of $e$ as in \eqref{fnormal1}: \begin{equation}\label{fnormal11} \begin{CD} Ker(e) @>j>> M@>e>> N@>q>> Coker(e) \\ @. @VpVV @AmAA @. \\ @. Coker(j)= Coker(Ker(e)) @>\tilde{e}>> Ker(Coker(e))=Ker(q) @.\\ \end{CD} \end{equation} must be an isomorphism. Further since $q\circ e\in\mathscr N$ and $e$ is a normal epimorphism, it follows from \cite[Lemma 1.5.3(g)]{Grandis} that $q\in \mathscr N$. From the definitions, it is evident that the kernel of a null morphism must be the identity and hence $m=1_N:Ker(q)\longrightarrow N$. It is now clear from \eqref{fnormal11} that $e:M\longrightarrow N$ is equivalent to $p:M\longrightarrow Coker(Ker(e)\overset{j}{\longrightarrow}M)$. The converse is obvious. \end{proof} From Theorem \ref{Tt9.6}, we know that $Heymod_H^{\mathfrak s}$ is a semiexact category. Applying Lemma \ref{Lq10.5} and the definition of the cokernel given in Proposition \ref{Pt9.5}, we see that a morphism $f:(M,\sigma_M)\longrightarrow (N,\sigma_N)$ in $Heymod_H^{\mathfrak s}$ is a normal epimorphism if and only if $f:M\longrightarrow N$ is surjective and \begin{equation}\label{cond62} f(m)=f(m')\quad\Leftrightarrow \quad \mbox{$g(m)=g(m')$ for all $g\in Heymod_H^{\mathfrak s}((M,\sigma_M),(P,\sigma_P))$ s.t. $Ker(f)\subseteq Ker(g)$} \end{equation} for any $m$, $m'\in M$. \begin{thm} \label{Pq10.6} Let $H$ be a finite Boolean algebra. Then, the normal epimorphisms in $Heymod_H^{\mathfrak s}$ are stable under composition. \end{thm} \begin{proof} We consider normal epimorphisms $p:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ and $q:(M,\sigma_M)\longrightarrow (N,\sigma_N)$. 
Since $q$ is a normal epimorphism, it follows from the criterion in \eqref{cond62} that for $l$, $l'\in L$, we have $qp(l)=qp(l')$ if and only if $g(p(l))=g(p(l'))$ for each $g\in Heymod_H^{\mathfrak s}((M,\sigma_M),(Y,\sigma_Y))$ such that $Ker(g)\supseteq Ker(q)$. Since $p$ is a normal epimorphism, it follows from the explicit description of cokernels in Proposition \ref{Pt9.5} that $p:L\longrightarrow M$ is surjective. As such, for a morphism $g\in Heymod_H^{\mathfrak s}((M,\sigma_M),(Y,\sigma_Y))$, we have $Ker(g)\supseteq Ker(q)$ $\Leftrightarrow$ $Ker(g\circ p)\supseteq Ker(q\circ p)$. Setting $f:=g\circ p$, any such $g$ gives us a morphism $f\in Heymod_H^{\mathfrak s}((L,\sigma_L),(Y,\sigma_Y))$ such that $Ker(f)\supseteq Ker(q\circ p)$. Conversely, suppose that we have a morphism $f\in Heymod_H^{\mathfrak s}((L,\sigma_L),(Y,\sigma_Y))$ such that $Ker(f)\supseteq Ker(q\circ p)\supseteq Ker(p)$. Applying Lemma \ref{Lq10.5} to the normal epimorphism $p$, we know that $(M,\sigma_M)=Coker(Ker(p)\longrightarrow L)$ and hence $f$ factors uniquely through some $g:(M,\sigma_M)\longrightarrow (Y,\sigma_Y)$ as $f=g\circ p$. Combining these facts, we have shown that $qp(l)=qp(l')$ for $l$, $l'\in L$ if and only if $f(l)=f(l')$ for any $f\in Heymod_H^{\mathfrak s}((L,\sigma_L),(Y,\sigma_Y))$ such that $Ker(f)\supseteq Ker(q\circ p)$. Since $q$ and $p$ are both surjective, so is $q\circ p$. The result now follows from the criterion in \eqref{cond62}. \end{proof} \begin{defn}\label{Dq10.625} (see \cite[$\S$ 1.3.6]{Grandis}) Let $(\mathcal C,\mathscr N)$ be a semiexact category. Then, $(\mathcal C,\mathscr N)$ is said to be a homological category if it satisfies the following conditions: (1) The normal monomorphisms in $(\mathcal C,\mathscr N)$ are stable under composition. (2) The normal epimorphisms in $(\mathcal C,\mathscr N)$ are stable under composition. 
(3) Given a normal monomorphism $i:M\longrightarrow N$ and a normal epimorphism $q:N\longrightarrow Q$ in $(\mathcal C,\mathscr N)$ such that $Ker(q)\leq M$ in the lattice of subobjects of $N$, the composition $q\circ i$ is exact. \end{defn} \begin{lem}\label{Lq10.65} Let $i:(M,\sigma_M)\longrightarrow (N,\sigma_N)$ be a normal monomorphism in $Heymod_H^{\mathfrak s}$. Then, for any $f:(L,\sigma_L)\longrightarrow (M,\sigma_M)$ in $Heymod_H^{\mathfrak s}$, we have $Ker(i\circ f)=Ker(f)$. \end{lem} \begin{proof} Since $i:(M,\sigma_M)\longrightarrow (N,\sigma_N)$ is a normal monomorphism, we can choose some $g:(N,\sigma_N) \longrightarrow (P,\sigma_P)$ such that $(M,\sigma_M)=Ker(g)$. From the definition of the kernel in Proposition \ref{Pt9.4}, it is clear that $i^{-1}(N^{\sigma_N})=M^{\sigma_M}$. Then, $Ker(i\circ f)=f^{-1}(i^{-1}(N^{\sigma_N}))=f^{-1}(M^{\sigma_M})=Ker(f)$. \end{proof} \begin{thm}\label{Pq10.7} Let $H$ be a finite Boolean algebra. Let $i:(M,\sigma_M)\longrightarrow (N,\sigma_N)$ (resp. $q:(N,\sigma_N) \longrightarrow (Q,\sigma_Q)$) be a normal monomorphism (resp. a normal epimorphism) in $Heymod_H^{\mathfrak s}$. Suppose that $Ker(q)\subseteq (M,\sigma_M)$. Then, the composition $q\circ i$ is an exact morphism in $Heymod_H^{\mathfrak s}$. \end{thm} \begin{proof} Since $i:(M,\sigma_M)\longrightarrow (N,\sigma_N)$ is a normal monomorphism, we may choose $f:(N,\sigma_N)\longrightarrow (T,\sigma_T)$ such that $Ker(f)=(M,\sigma_M)$. By assumption, $Ker(q)\subseteq Ker(f)$. Since $q$ is a normal epimorphism, it follows from \eqref{cond62} that $q(n)=q(n')$ for $n$, $n'\in N$ implies that $f(n)=f(n')$. Accordingly, there is a morphism $g:(Q,\sigma_Q)\longrightarrow (T,\sigma_T)$ such that $f=g\circ q$. We now set $(P,\sigma_P):=Ker(g)$.
Since $g\circ q\circ i= f\circ i\in \mathscr N$, there is a unique morphism $p:(M,\sigma_M)\longrightarrow Ker(g)=(P,\sigma_P)$ which makes the following diagram commutative \begin{equation} \begin{CD} (M,\sigma_M) @>i>> (N,\sigma_N) \\ @VpVV @VqVV \\ (P,\sigma_P)@>j>> (Q,\sigma_Q)\\ \end{CD} \end{equation} Since $(P,\sigma_P)=Ker(g)$, the morphism $j$ is a normal monomorphism. In order to show that $q\circ i=j\circ p$ is exact, it suffices therefore to show that $p$ is a normal epimorphism. In other words, we need to show that $p$ coincides with the canonical morphism $p':(M,\sigma_M)\longrightarrow Coker(Ker(p)\longrightarrow (M,\sigma_M))$. Since $q$ is a normal epimorphism in $Heymod_H^{\mathfrak s}$, it follows from Proposition \ref{Pt9.5} that $q$ is surjective. We notice that this implies that $p$ is surjective. We now consider $m_1$, $m_2\in M$ such that $p'(m_1)=p'(m_2)$. Let $x:(N,\sigma_N)\longrightarrow (X,\sigma_X)$ be a morphism in $Heymod_H^{\mathfrak s}$ such that $Ker(q)\subseteq Ker(x)$. Then, $Ker(j\circ p)=Ker(q\circ i)\subseteq Ker(x\circ i)$ and it follows from Lemma \ref{Lq10.65} that $Ker(p)=Ker(j\circ p)\subseteq Ker(x\circ i)$. Since $p'$ is the canonical morphism to the cokernel of $Ker(p)\longrightarrow M$, the fact that $p'(m_1)=p'(m_2)$ now implies that $x( i(m_1)) =x(i(m_2))$. Since $q$ is a normal epimorphism, it follows from \eqref{cond62} that $q(i(m_1))=q(i(m_2))$. Conversely, suppose that $m_1$, $m_2\in M$ are such that $p'(m_1)\ne p'(m_2)$. Then, there exists some $y:(M,\sigma_M) \longrightarrow (Y,\sigma_Y)$ with $Ker(p)\subseteq Ker(y)$ such that $y(m_1)\ne y(m_2)$. From the construction in Lemma \ref{Lq10.3}, we see that $(Y,\sigma_Y)$ may be assumed to be injective in $Heymod_H^{\mathfrak s}$. From Proposition \ref{P8.6} and Proposition \ref{Pt9.4}, it is clear that a normal monomorphism in $Heymod_H^{\mathfrak s}$ is also a monomorphism in $Heymod_H^{\mathfrak s}$. 
Hence, $y:(M,\sigma_M) \longrightarrow (Y,\sigma_Y)$ extends to some $z:(N,\sigma_N)\longrightarrow (Y,\sigma_Y)$ such that $z\circ i=y$. It follows that $z(i(m_1))\ne z(i(m_2))$. We now claim that $Ker(z)\supseteq Ker(q)$. Indeed, if $n\in N$ is such that $n\in Ker(q)$, we know from the assumption $Ker(q)\subseteq (M,\sigma_M)$ that $n=i(m)$ for some $m\in M$. Then, $m\in Ker(q\circ i)$. But $Ker(q\circ i)=Ker(j\circ p)=Ker(p)\subseteq Ker(y)$. Hence, $m\in Ker(y)$. Since $y=z\circ i$, this shows that $n=i(m)\in Ker(z)$. Since $q$ is a normal epimorphism and $Ker(z)\supseteq Ker(q)$, the fact that $z(i(m_1))\ne z(i(m_2))$ implies that $q(i(m_1))\ne q(i(m_2))$. Since $j$ is an injective map, we now have an equivalence \begin{equation}\label{leqbeq} p'(m_1)=p'(m_2)\textrm{ }\Leftrightarrow\textrm{ }q(i(m_1))=q(i(m_2))\textrm{ }\Leftrightarrow\textrm{ }p(m_1)=p(m_2) \end{equation} for all $m_1$, $m_2\in M$. Since $p$ and $p'$ are both surjective, it is clear from \eqref{leqbeq} that $p=p'$. \end{proof} \begin{Thm}\label{Tq10.10} Let $H$ be a finite Boolean algebra. Then, $Heymod_H^{\mathfrak s}$ is a semiexact homological category. \end{Thm} \begin{proof} We know from Theorem \ref{Tt9.6} that $Heymod_H^{\mathfrak s}$ is a semiexact category. The result now follows from Definition \ref{Dq10.625} along with Propositions \ref{Pq10.4}, \ref{Pq10.6} and \ref{Pq10.7}. \end{proof} \section{Spectral spaces and Heyting submodules} In this final section, we let $H$ be an arbitrary (not necessarily finite) Heyting algebra. We will show that the collection of Heyting submodules of a given Heyting module $M$ can be given the structure of a spectral space. We will do this by using a criterion of Finocchiaro \cite{Fino} in a manner similar to \cite{FFS}, where it was shown that the collection of submodules of a given module over a commutative ring forms a spectral space.
In \cite{AB}, it was shown that these techniques apply more generally to abelian categories that satisfy the (AB5) axiom (see also \cite{AB1}, \cite{Ray}). We recall that a topological space is said to be spectral if it is homeomorphic to the Zariski spectrum of a commutative ring. A famous result of Hochster \cite{Hoch} shows that a topological space $X$ is spectral if and only if it satisfies: (a) $X$ is quasi-compact, (b) the quasi-compact opens in $X$ are closed under intersection and form a basis, and (c) every non-empty irreducible closed subset has a unique generic point. In other words, the property of being spectral can be characterized in purely topological terms, without any reference to commutative rings. For a Heyting module $M$ over the given Heyting algebra $H$, we denote by $Sub(M)$ the collection of Heyting submodules of $M$. From Proposition \ref{P4.5} and Proposition \ref{Pp3.65}, we know that a Heyting submodule $N\in Sub(M)$ is simply a distributive submodule of $M$ over the lattice $H$. For any finite collection of elements $\{m_1,...,m_n\}\subseteq M$, we set \begin{equation}\label{closed} V(m_1,...,m_n):=\{\mbox{$N\in Sub(M)$ $\vert$ $m_1,...,m_n\in N$}\} \end{equation} We let the $V(m_1,...,m_n)$ be a subbasis of closed sets for the topology on $Sub(M)$. In other words, a subbasis of open sets for the topology on $Sub(M)$ is given by subsets of the form \begin{equation}\label{open} D(m_1,...,m_n):=Sub(M)\backslash V(m_1,...,m_n) \end{equation} We will now show that this topology makes $Sub(M)$ into a spectral space. We recall here that a filter $\mathfrak F$ on a set $S$ is a collection of subsets of $S$ such that (a) $\emptyset\notin \mathfrak F$, (b) $Y$, $Z\in \mathfrak F$ $\Rightarrow$ $Y\cap Z\in \mathfrak F$ and (c) $Y\subseteq Z\subseteq S$ and $Y\in \mathfrak F$ implies $Z\in \mathfrak F$. An ultrafilter on $S$ (see, for instance, \cite[$\S$ 1]{Fino}) is a maximal element in the collection of filters on $S$ ordered by inclusion.
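For a finite example, the subbasic sets in \eqref{closed} and \eqref{open} can be enumerated directly. The following Python sketch (our own toy computation, not from the paper; we take $H$ to be the four-element Boolean algebra, $M=H$, and we assume a Heyting submodule of this $M$ is a nonempty subset closed under $\vee$ and under the action $c\wedge(-)$) lists $Sub(M)$ and checks that the sets $V(m)$ separate distinct submodules, which is the $T_0$ argument used in the proof of Theorem \ref{P10.1}.

```python
from itertools import combinations, product

# Four-element Boolean algebra H = powerset of {1, 2}, elements as frozensets
H = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
M = H                                         # toy Heyting module: M = H itself
join, meet = frozenset.union, frozenset.intersection

def is_submodule(N):
    # nonempty, closed under \/ and under c /\ (-) for every c in H
    return bool(N) and all(join(x, y) in N for x in N for y in N) \
        and all(meet(c, x) in N for c in H for x in N)

# brute-force enumeration of Sub(M) over all 2^4 subsets of M
subsets = [frozenset(s) for r in range(len(M) + 1) for s in combinations(M, r)]
Sub = [N for N in subsets if is_submodule(N)]
assert len(Sub) == 4                          # here: the four principal ideals of H

# subbasic closed sets V(m) = {N in Sub(M) | m in N}
V = {m: {N for N in Sub if m in N} for m in M}

# T_0: any two distinct submodules are separated by some V(m)
for N1, N2 in product(Sub, repeat=2):
    if N1 != N2:
        assert any((N1 in V[m]) != (N2 in V[m]) for m in M)
print("Sub(M) has", len(Sub), "points; the V(m) separate them")
```
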
In particular, if $\mathfrak F$ is an ultrafilter, then for any subset $T\subseteq S$, exactly one of $T$ and $(S\backslash T)$ lies in $\mathfrak F$. \begin{thm}\label{P10.1} Let $H$ be a Heyting algebra and $M$ be a Heyting module over $H$. Then, the collection $Sub(M)$ of Heyting submodules of $M$ is a spectral space having the collection $\mathcal S$ of subsets of the form $D(m_1,...,m_n)$ as a subbasis of quasi-compact open sets. \end{thm} \begin{proof} We consider $N$, $N'\in Sub(M)$ with $N\ne N'$. Then, we can pick some $m\in M$ such that $m$ lies in exactly one of the two submodules $N$, $N'$. Then, $V(m)$ is a closed subset of $Sub(M)$ containing exactly one of the two points $N$, $N'\in Sub(M)$. Hence, $Sub(M)$ is a $T_0$-space. We now consider an ultrafilter $\mathfrak F$ on $Sub(M)$ and set \begin{equation}\label{ultraf} N(\mathfrak F):=\{\mbox{$m\in M$ $\vert $ $V(m)\in \mathfrak F$}\} \end{equation} We claim that $N(\mathfrak F)$ is a Heyting submodule of $M$. By Proposition \ref{Pp3.65}, we need to check that it is a distributive submodule of $M$. We consider therefore $m_1$, $m_2\in N(\mathfrak F)$ and some $c\in H$. Then, $V(m_1)$, $V(m_2)\in \mathfrak F$. From the definition of a filter and from \eqref{closed}, we obtain \begin{equation}\label{eq10.4c} V(m_1\vee m_2)\supseteq V(m_1)\cap V(m_2) \textrm{ }\Rightarrow\textrm{ }V(m_1\vee m_2)\in \mathfrak F \qquad V(c\wedge m_1)\supseteq V(m_1) \textrm{ }\Rightarrow\textrm{ }V(c\wedge m_1)\in \mathfrak F \end{equation} Hence, $N(\mathfrak F)\in Sub(M)$. We now claim that \begin{equation}\label{fineq} D(m_1,...,m_n)\in \mathfrak F \qquad\Leftrightarrow\qquad N(\mathfrak F)\in D(m_1,...,m_n) \end{equation} First, we suppose that $N(\mathfrak F)\in D(m_1,...,m_n)$, i.e., there is some $m_k\in \{m_1,...,m_n\}$ such that $m_k\notin N(\mathfrak F)$. Applying \eqref{ultraf}, we get $V(m_k)\notin \mathfrak F$. Since $\mathfrak F$ is an ultrafilter, this means that the complement $D(m_k)\in \mathfrak F$. 
It is clear that $D(m_k)\subseteq D(m_1,...,m_n)$ and, $\mathfrak F$ being a filter, this means that $D(m_1,...,m_n)\in \mathfrak F$. On the other hand, if $N(\mathfrak F)\notin D(m_1,...,m_n)$, then $N(\mathfrak F)\in V(m_1,...,m_n)$. Hence, $V(m_i)\in \mathfrak F$ for each $1\leq i\leq n$. Since $\mathfrak F$ is a filter, we get $V(m_1,...,m_n)=\underset{i=1}{\overset{n}{\bigcap}}V(m_i)\in \mathfrak F$. Hence, $D(m_1,...,m_n)\notin \mathfrak F$. This proves the equivalence in \eqref{fineq}. Since $Sub(M)$ is a $T_0$-space, it now follows from the criterion in \cite[Corollary 3.3]{Fino} that $Sub(M)$ is spectral with subsets of the form $D(m_1,...,m_n)$ being a subbasis of quasi-compact opens. \end{proof} We can refine the result of Theorem \ref{P10.1} by considering closure operators on Heyting submodules in a manner similar to \cite[$\S$ 3]{AB} and \cite[$\S$ 3]{FFS}. These are inspired by closure operations on ideals in commutative algebra, such as taking the radical closure, the integral closure, the plus closure or the Frobenius closure in certain special classes of rings (see \cite{Epstein} for a detailed survey). We will say that a Heyting submodule $N\in Sub(M)$ is finitely generated if there is a finite collection $\{m_1,...,m_n\}$ of elements of $M$ such that $N$ is the smallest Heyting submodule containing all of them. For $N\in Sub(M)$, we will denote by $fg(N)$ the collection of finitely generated Heyting submodules of $N$. 
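The topology on $Sub(M)$ can be made concrete in a small finite case. The following sketch is an illustration only: it assumes $H=M=$ the powerset of $\{1,2\}$ acting on itself, and it treats a Heyting submodule simply as a nonempty subset closed under joins and under the action $c\wedge-$, the two closure properties used in \eqref{eq10.4c}. It enumerates $Sub(M)$ and the basic closed sets $V(m_1,...,m_n)$.

```python
from itertools import chain, combinations

# Toy model (illustration only): H = M = powerset of {1, 2}, a Heyting
# algebra acting on itself; the join is union and the action c /\ m is
# intersection.
ELEMS = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_submodule(N):
    # nonempty and closed under joins and under the action c /\ m
    if not N:
        return False
    closed_join = all(a | b in N for a in N for b in N)
    closed_action = all(c & m in N for c in ELEMS for m in N)
    return closed_join and closed_action

SUB_M = [N for N in map(frozenset, powerset(ELEMS)) if is_submodule(N)]

def V(*ms):
    # basic closed set: all submodules containing m_1, ..., m_n
    return [N for N in SUB_M if all(m in N for m in ms)]

bottom, one, two = ELEMS[0], ELEMS[1], ELEMS[2]
```

In this toy instance the submodules are exactly the lattice ideals of $M$: there are four of them, and $V(\{1\},\{2\})$ contains only $M$ itself, since any submodule containing both singletons must contain their join.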
We will say that a closure operator $\mathbf c:Sub(M)\longrightarrow Sub(M)$ is of finite type if it satisfies $\mathbf c(N)=\underset{N'\in fg(N)}{\bigcup} \mathbf c(N')$ for every $N\in Sub(M)$. \end{defn} \begin{lem}\label{L10.2} Let $\mathbf c:Sub(M)\longrightarrow Sub(M)$ be a closure operator on Heyting submodules of $M$. Then, the operator defined by setting \begin{equation}\label{clf} \mathbf c_f(N):=\underset{N'\in fg(N)}{\bigcup} \mathbf c(N')\qquad\forall\textrm{ }N\in Sub(M) \end{equation} is a closure operator of finite type. \end{lem} \begin{proof} It is evident that $\mathbf c_f$ is extensive and order-preserving. If $N\in Sub(M)$ is finitely generated, it is clear from \eqref{clf} that $\mathbf c_f(N)=\mathbf c(N)$. Hence, $\mathbf c_f(N)=\underset{N'\in fg(N)}{\bigcup} \mathbf c(N')=\underset{N'\in fg(N)}{\bigcup} \mathbf c_f(N')$, i.e., $\mathbf c_f$ is of finite type. It remains to show that $\mathbf c_f$ is idempotent. For this, we consider some $N''\in fg(\mathbf c_f(N))$ having a generating set $\{n_1,...,n_k\}$. For each $n_i$, we can choose $N'_i\in fg(N)$ such that $n_i\in \mathbf c(N_i')$. Then, $N_1'\vee ... \vee N_k'$ is a finitely generated Heyting submodule of $N$ and $N''\subseteq \mathbf c(N_1'\vee ... \vee N_k')$. Since $\mathbf c$ is idempotent, we get $\mathbf c(N'')\subseteq \mathbf c(N_1'\vee ... \vee N_k')$ for each $N''\in fg(\mathbf c_f(N))$. We now have \begin{equation}\label{eq10.7} \mathbf c_f(\mathbf c_f(N))=\underset{N''\in fg(\mathbf c_f(N))}{\bigcup} \mathbf c(N'')\subseteq \underset{N'\in fg(N)}{\bigcup} \mathbf c(N')=\mathbf c_f(N) \end{equation} Since $\mathbf c_f$ is extensive, \eqref{eq10.7} implies that $\mathbf c_f(\mathbf c_f(N))=\mathbf c_f(N)$. This proves the result. \end{proof} \begin{thm}\label{P10.4} Let $H$ be a Heyting algebra and $M$ be a Heyting module over $H$. 
Let $\mathbf c:Sub(M)\longrightarrow Sub(M)$ be a closure operator of finite type. Then, the collection $Sub^{\mathbf c}(M)$ of Heyting submodules of $M$ fixed by $\mathbf c$ is a spectral space having the collection $\mathcal S^{\mathbf c}$ of subsets of the form $D(m_1,...,m_n)\cap Sub^{\mathbf c}(M)$ as a subbasis of quasi-compact open sets. \end{thm} \begin{proof} Since $Sub^{\mathbf c}(M)$ is equipped with the subspace topology induced by $Sub(M)$, it must be a $T_0$-space. Given an ultrafilter $\mathfrak F$ on $Sub^{\mathbf c}(M)$, we now set \begin{equation}\label{ultrafc} N(\mathfrak F):=\{\mbox{$m\in M$ $\vert $ $V(m)\cap Sub^{\mathbf c}(M)\in \mathfrak F$}\} \end{equation} The order relations between subsets in \eqref{eq10.4c} continue to hold when intersected with $Sub^{\mathbf c}(M)$, and hence it follows that $N(\mathfrak F)\in Sub(M)$. We claim that $N(\mathfrak F)\in Sub^{\mathbf c}(M)$, i.e., it is fixed by $\mathbf c$. We consider an element $m\in \mathbf c(N(\mathfrak F))$. Since $\mathbf c$ is of finite type, there is a finitely generated submodule $N'\subseteq N(\mathfrak F)$ such that $m\in \mathbf c(N')$. Suppose that $N'$ is generated by $\{m_1,...,m_k\}$. Then, if $N''\in Sub^{\mathbf c}(M)$ is such that $N''\supseteq N'$, we obtain $N''=\mathbf c(N'')\supseteq \mathbf c(N')$, i.e., $m\in N''$. In other words, we have $V(m_1,...,m_k)\cap Sub^{\mathbf c}(M)\subseteq V(m)\cap Sub^{\mathbf c}(M)$. Since $m_1$, $m_2$, ..., $m_k\in N'\subseteq N(\mathfrak F)$, we get $V(m_i)\cap Sub^{\mathbf c}(M)\in \mathfrak F$ for each $1\leq i\leq k$. Then, $V(m)\cap Sub^{\mathbf c}(M)\supseteq V(m_1,...,m_k)\cap Sub^{\mathbf c}(M)=\underset{i=1}{\overset{k}{\bigcap}}(V(m_i)\cap Sub^{\mathbf c}(M))\in\mathfrak F$ and hence $V(m)\cap Sub^{\mathbf c}(M)\in \mathfrak F$. By \eqref{ultrafc}, it follows that $m\in N(\mathfrak F)$, i.e., $N(\mathfrak F)\in Sub^{\mathbf c}(M)$. 
In a manner similar to the proof of Theorem \ref{P10.1}, it may be verified that \begin{equation}\label{finceq} D(m_1,...,m_n)\cap Sub^{\mathbf c}(M)\in \mathfrak F \qquad\Leftrightarrow\qquad N(\mathfrak F)\in D(m_1,...,m_n)\cap Sub^{\mathbf c}(M) \end{equation} Since $Sub^{\mathbf c}(M)$ is a $T_0$-space, it now follows from the criterion in \cite[Corollary 3.3]{Fino} that $Sub^{\mathbf c}(M)$ is a spectral space with the collection $\mathcal S^{\mathbf c}$ as a subbasis of quasi-compact open sets. \end{proof} For each Heyting submodule $N\subseteq M$, we now define \begin{equation}\label{clsr} \overline{N}:=\{\mbox{$m\in M$ $\vert$ $m\leq n$ for some $n\in N$}\} \end{equation} It is clear that $\overline{N}$ is a hereditary submodule and in fact the smallest hereditary submodule of $M$ containing $N$. \begin{cor}\label{C10.5} Let $H$ be a Heyting algebra and $M$ be a Heyting module over $H$. Then, the collection of hereditary submodules of $M$, equipped with the subspace topology induced by $Sub(M)$, forms a spectral space. \end{cor} \begin{proof} It is clear that the operation $N\mapsto \overline{N}$ in \eqref{clsr} is a closure operator of finite type. The result now follows from Theorem \ref{P10.4}. \end{proof} \end{document}
\begin{document} \title{Composition sums related to the hypergeometric function} \begin{abstract} The present note considers a certain family of sums indexed by the set of fixed length compositions of a given number. The sums in question cannot be realized as weighted compositions. However, they can be related to the hypergeometric function, thereby allowing one to factorize the corresponding generating polynomials. This factorization leads to some interesting identities. \end{abstract} \pagestyle{myheadings} \markboth{}{Composition sum identities} An $l$-composition of a natural number $n$ is an ordered list of $l$ positive integers $\pp=(p_1, p_2, \ldots, p_l)$ such that $p_1+\ldots + p_l=n$. The purpose of the present note is to exhibit closed form expressions for a certain family of sums indexed by the set of fixed length compositions of a given number. There are examples of such identities relating composition sums to Stirling numbers \cite{Sitgreaves,HomenkoStrok} and to Fibonacci numbers \cite{HoggattLind, MoserWhitney}. It should be noted that the sums introduced in the preceding references can all be regarded as enumerations of weighted compositions \cite{MoserWhitney}. To be more specific, a weighted composition is one where the $j^{\text{th}}$ term is given a weight $w_j$, and the enumeration of such compositions is defined as $$S(l,n)=\sum w_{p_{1}} w_{p_{2}} \ldots w_{p_l},$$ where the sum is taken over all $l$-compositions of a fixed $n$. It therefore follows that all such sums can be realized in terms of a certain type of generating function: $$\frac{1}{1-tw(x)}= \sum_{l,n} S(l,n)t^l x^n,$$ where $w(x)=w_1 x+w_2 x^2+ \ldots$. The same cannot be said of the sums introduced in the present note. Instead, the identities to be discussed here come about because one can relate the sums in question to the hypergeometric function, $F(\alpha,\beta,\gamma; z)$. 
This is accomplished by gauge-transforming a certain second-order differential equation into the hypergeometric equation, and then taking the residue with respect to the $\gamma$ parameter. For a number of other combinatorial identities involving hypergeometric functions, as well as for other properties of these fascinating mathematical objects, the reader is referred to \cite{Bateman}. For a given composition, $\pp$, let $L(\pp)$ and $R(\pp)$ denote, respectively, the products of the left and right partial sums of $\pp$. To wit \begin{align*} L(\pp)& = p_1 (p_1+p_2) \ldots (p_1+\ldots + p_{l-1})\, n,\\ R(\pp)& = p_l (p_{l-1}+p_l) \ldots (p_2+\ldots + p_l)\, n. \end{align*} Throughout let $\esp_j(x_1,\ldots,x_k)$ denote the $j^{\text{th}}$ elementary symmetric function of the $x$'s, i.e. the coefficient of $X^{k-j}$ in the expansion of $(X+x_1)\ldots(X+x_k)$. The sums that will be discussed here are indexed by three natural numbers, and are defined by $$S(k,l,n) = \sum \frac{\esp_k(p_1,\ldots,p_l)}{L(\pp)R(\pp)},$$ where the sum is taken over all $l$-compositions of $n$. For every $n>0$ define a corresponding generating polynomial by $$P_n(u,v) = \sum_{l=1}^n\sum_{k=0}^l S(k,l,n) u^k v^{l-k}.$$ The main result of the present note is the following factorization of this generating polynomial. \begin{theorem} In the even case, say $n=2m$, one has $$(n!)^2 P_n(u,v)= \prod_{i=0}^{m-1} \left[ (u+v+q_i)(u+v+q_{i+1}) + r_i u\right], $$ where $q_i=i(n-i)$ and $r_i = (n-1-2i)^2$. In the odd case, say $n=2m+1$, one has $$(n!)^2 P_n(u,v)=(u+v+q_m) \prod_{i=0}^{m-1} \left[ (u+v+q_i)(u+v+q_{i+1}) + r_i u\right]. $$ \end{theorem} The proof of the theorem will be given below. First it is worth remarking that the above theorem implies some attractive identities: \begin{equation} \label{eqn:id1} \sum \frac{p_1 p_2 \ldots p_l}{L(\pp) R(\pp)} = \frac{1}{n}\,\esp_{l-1}\! 
\lp \frac{1}{1\cdot 2} , \frac{1}{2\cdot 3} , \ldots , \frac{1}{(n-1)\cdot n}\rp, \end{equation} \begin{multline} \label{eqn:id2} \sum \frac{(p_1-1) (p_2-1) \ldots (p_l-1)}{L(\pp) R(\pp)} \\ = \frac{n-1}{n^2}\,\esp_{l-1}\! \lp \frac{r_1}{q_1 q_2}, \frac{r_2}{q_2 q_3} ,\ldots, \frac{r_{m-1}}{q_{m-1} q_m}\rp, \end{multline} where the sums are taken over all $l$-compositions of $n$, and where $m$ in equation \eqref{eqn:id2} is the largest integer smaller than or equal to $n/2$. To prove the identity in \eqref{eqn:id1} note that the left hand side is just $S(l,l,n)$, and hence can be obtained as the coefficient of $u^l$ in the factorizations of Theorem 1. An easy calculation shows that $$q_i+ q_{i+1}+ r_i = i(i+1)+(n-i-1)(n-i).$$ Hence $$P_n(u,0) = \frac{u}{n} \prod_{i=1}^{n-1} \lp \frac{u}{i(i+1)}+1\rp$$ and the desired identity follows immediately. The identity in \eqref{eqn:id2} is proved by noting that the left hand side is given by the alternating sum $$ \sum_{k=0}^l (-1)^{l-k} S(k,l,n),$$ and hence can be obtained as the coefficient of $u^l$ in $P_n(u,-u)$. Using Theorem 1 to evaluate the latter leads to \eqref{eqn:id2}. Using Theorem 1 to evaluate $P_n(0,v)$ one also obtains the identity $$ \sum \frac{1}{L(\pp)R(\pp)} = \frac{1}{n^2}\, \esp_{l-1}\!\lp\frac{1}{q_1},\frac{1}{q_2},\ldots, \frac{1}{q_{n-1}}\rp.$$ However, this identity is considerably less interesting, because it can be obtained by a simple rearrangement of factors in the summands of the left hand side in question. The proof of Theorem 1 will require a number of intermediate results. Let $a_{ij},\, i,j\in\natnums$ be indeterminates. For a composition $p_1,p_2,\ldots,p_l$ of $n$, let $s_j$ denote the $j^{\text{th}}$ left partial sum, $p_1+\ldots+p_j$, and set $a(p)=a_{0 s_1} a_{s_1 s_2} a_{s_2 s_3} \ldots a_{s_{l-1} n}$. 
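As a quick sanity check of Theorem 1, consider the smallest case $n=2$; the following verification uses nothing beyond the definitions above. The compositions of $2$ are $(2)$ and $(1,1)$, and in both cases $L(\pp)R(\pp)=4$, so that
$$P_2(u,v)=\frac14\,\lp 2u+v\rp + \frac14\,\lp u^2+2uv+v^2\rp.$$
On the other hand, with $m=1$, $q_0=0$, $q_1=1$ and $r_0=1$, Theorem 1 gives
$$(2!)^2\, P_2(u,v) = (u+v+q_0)(u+v+q_1)+r_0\,u = (u+v)(u+v+1)+u = u^2+2uv+v^2+2u+v,$$
which agrees with the direct computation.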
\begin{lemma} \label{lemma:ordpart} Defining $f_n,\, n\in\natnums$ recursively by $$f_n = \sum_{j=0}^{n-1} a_{jn} f_j,\quad f_0=1,$$ one has $f_n = \sum_p a(p)$ where the sum is taken over all compositions (of all lengths) of $n$. \end{lemma} For $f(\gamma)$, a rational function of an indeterminate $\gamma$, let $\Res(f;\gamma=\gamma_0)$ denote the residue of this function at the value $\gamma_0$, i.e. the coefficient of $(\gamma-\gamma_0)^{-1}$ in the Laurent series expansion of $f$ about $\gamma=\gamma_0$. Let $$f(u,v,\gamma;z)=1+\sum_{n>0} f_n(u,v,\gamma)z^n$$ denote the unique formal power series solution of \begin{equation} \label{eq:fdef} z^2 f_{zz} + (1-\gamma) z f_z + \lp \frac{vz}{1-z} + \frac{uz}{(1-z)^2}\rp f = 0,\quad f(0)=1. \end{equation} \begin{proposition} \label{prop:res} $n P_n(u,v) = \Res(f_n; \gamma=n)$. \end{proposition} \begin{proof} Rewriting $z\,(1-z)^{-1}$ as $\sum_{n>0} z^n$ and $z\,(1-z)^{-2}$ as $\sum_{n>0} nz^n$ one obtains the following recurrence relation for the coefficients of $f$: $$n(\gamma-n) f_n = \sum_{i=0}^{n-1} ((n-i)u+v)f_i.$$ Set $$a_{ij} = \frac{(j-i)u+v}{j(\gamma-j)}$$ and note that for a composition $p_1,\ldots,p_l$ of $n$ one has \begin{align*} a(p) &= \frac{p_1 u+v}{s_1(\gamma-s_1)}\times \frac{p_2u+v}{s_2(\gamma-s_2)} \times \ldots\times \frac{p_{l-1}u+v}{s_{l-1}(\gamma-s_{l-1})}\times \frac{p_lu+v}{n(\gamma-n)} \\ &= \frac{\sum_{k=0}^l \esp_k(p_1,\ldots,p_l) u^k v^{l-k}} {L(p) (\gamma-s_1)(\gamma-s_2)\ldots(\gamma-n)}. \end{align*} Taking the residue of the right hand side at $\gamma=n$ and applying Lemma \ref{lemma:ordpart} gives the desired conclusion. \end{proof} Let $\alpha, \beta, \gamma$ be indeterminates, and let $F(\alpha,\beta,\gamma;z)$ denote the usual hypergeometric series $$\sum_{n=0}^\infty \frac{(\alpha)_n\,(\beta)_n}{n!\,(\gamma)_n}\,z^n ,$$ where $(x)_n$ is an abbreviation for the expression $x(x+1)\ldots(x+n-1)$. 
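The recursion of Lemma \ref{lemma:ordpart} is easy to check numerically against the brute-force sum over compositions. The sketch below substitutes arbitrary rational sample values for the indeterminates $a_{ij}$; the particular values chosen are immaterial.

```python
from fractions import Fraction

def compositions(n):
    # all compositions (of all lengths) of n
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def a(i, j):
    # arbitrary rational sample values for the indeterminates a_{ij}
    return Fraction(i + 2 * j + 1, j + 1)

N = 6

# f_n via the recursion f_n = sum_{j=0}^{n-1} a_{jn} f_j, f_0 = 1
f = [Fraction(1)]
for n in range(1, N + 1):
    f.append(sum(a(j, n) * f[j] for j in range(n)))

# f_n via the composition sum f_n = sum_p a(p), with
# a(p) = a_{0 s_1} a_{s_1 s_2} ... a_{s_{l-1} n}
def a_of_p(p):
    val, s = Fraction(1), 0
    for part in p:
        val *= a(s, s + part)
        s += part
    return val

g = [sum(a_of_p(p) for p in compositions(n)) for n in range(N + 1)]
```

Both computations agree exactly, which is the content of the lemma: grouping the compositions of $n$ by their last partial sum $s_{l-1}=j$ reproduces the recursion.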
\begin{proposition} \label{prop:hypg} Setting \begin{align} \hat{u}&=\frac14\,(\alpha+\beta+\gamma)(2-\alpha-\beta-\gamma),\\ \nonumber \hat{v}&=\frac14\,(\alpha-\beta-\gamma)(\alpha-\beta+\gamma). \end{align} one has \begin{equation} \label{eq:fFrel} f(\hat{u},\hat{v},\gamma;z)= (1-z)^{(\alpha+\beta+\gamma)/2}\,F(\alpha,\beta,1-\gamma;z). \end{equation} \end{proposition} \begin{proof} Substituting \eqref{eq:fFrel} into \eqref{eq:fdef} yields the following equation for $F$ $$ z^2 F_{zz} + (1-\gamma)zF_z - (\alpha+\beta+\gamma)\,\frac{z^2}{1-z}\,F_z - \alpha\beta\,\frac{z}{1-z}\,F = 0.$$ Multiplying the above by $(1-z)/z$ yields the usual hypergeometric equation with parameters $\alpha, \beta, 1-\gamma$. \end{proof} \begin{proof}[Proof of Theorem 1.] Expanding $(1-z)^{(\alpha+\beta+\gamma)/2}$ into a power series in $z$, and using Proposition \ref{prop:hypg} one obtains that $$f_n(\hat{u},\hat{v},\gamma) = \frac{(\alpha)_n\,(\beta)_n}{n!\,(1-\gamma)_n}+\ldots,$$ where the remainder is a finite sum of rational functions in $\alpha, \beta, \gamma$ which do not contain a factor of $\gamma-n$ in the denominator. Hence, by Proposition \ref{prop:res} $$P_n(\hat{u},\hat{v}) = (-1)^n\,\frac{(\alpha)_n\,(\beta)_n}{(n!)^2}.$$ An elementary calculation shows that for $0\leq i<n/2$ $$ (\hat{u}+\hat{v}+q_i)(\hat{u}+\hat{v}+q_{i+1})\, + \,r_i\hat{u} = (\alpha+i) (\beta+i) (\alpha+n-1-i)(\beta+n-1-i) $$ and that for $n=2m+1$ $$ \hat{u}+\hat{v}+q_m = -(\alpha+m)(\beta+m).$$ The desired factorizations follow immediately. \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{Improving Semi-Supervised Learning for Remaining Useful Lifetime Estimation Through Self-Supervision} \author[1]{Tilman Krokotsch\corref{cor1}} \ead{[email protected]} \author[2]{Mirko Knaak} \author[1]{Clemens Gühmann} \cortext[cor1]{Corresponding author} \address[1]{Chair of Electronic Measurement and Diagnostic Technology, Technische Universität Berlin} \address[2]{Thermodynamics \& Power Systems, Power Train \& Power Engineering, IAV GmbH} \begin{abstract} RUL estimation suffers from a severe data imbalance where data from machines near their end of life is rare. Additionally, the data produced by a machine can only be labeled after the machine failed. \ac{ssl} can incorporate the unlabeled data produced by machines that did not yet fail. Previous works on \ac{ssl} evaluated their approaches under unrealistic conditions where the data near failure was still available. Even so, only moderate improvements were made. This paper proposes a novel \ac{ssl} approach based on self-supervised pre-training. The method can outperform two competing approaches from the literature and a supervised baseline under realistic conditions on the NASA \acl{cmapss} dataset. \end{abstract} \begin{keyword} Remaining Useful Lifetime, Predictive Maintenance, Semi-Supervised Learning, Deep Learning \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:Introduction} \ac{pdm} is one of the core pillars of Industry 4.0 and enables more cost-effective operation of machinery. While early approaches to \ac{pdm} focused on hand-crafted, physical models and heuristics, nowadays data-driven methods are on the rise. Fueled by massive amounts of data provided by an increasing number of sensors, data-driven \ac{pdm} makes hand-crafting physical models less necessary. Due to the advent of deep learning, data-driven models can ingest even more data without the need for specialized feature engineering. 
Nevertheless, \ac{pdm} suffers from a severe data imbalance, as data from healthy machines is far more abundant than data from degraded or faulty ones. This makes training effective data-driven models a challenging task. \ac{rul} estimation, as a sub-field of \ac{pdm}, is defined as “the length from the current time to the end of the useful life” \cite{Si2011}. It plays a vital role in effectively scheduling maintenance operations. Unfortunately, labeling data for \ac{rul} estimation is only possible after a machine fails, making it hard to acquire enough labeled data for conventional, supervised approaches to work. On the other hand, large amounts of unlabeled data are available from machines that did not yet fail. \ac{ssl} offers a possible solution to this problem. \acf{ssl} uses labeled and unlabeled data in concert to solve a learning problem \cite{VanEngelen2020}. It makes it possible to distill knowledge from large amounts of unlabeled data, thus lowering the amount of labeled data needed to achieve good performance. Specifically, \ac{ssl} aims to learn a conditional distribution $P(y|x)$ where $x\sim P(X)$ are the available features (i.e. sensor readings) and $y\sim P(Y)$ are the labels (i.e. the \ac{rul}). The learning algorithm has access to the set of labeled training data $D_L = \{(x_1, y_1), \dots, (x_i, y_i)\}$ and the mutually exclusive set of unlabeled data $D_U=\{x_{i+1}, \dots, x_{i+j}\}$. Recent works, e.g. \cite{ListouEllefsen2019,Yoon2017}, have shown promising results using different \ac{ssl} methods for \ac{rul} estimation. More specifically, they were able to modestly reduce the test \ac{rmse} and \ac{rul}-Score on subsets of the NASA \ac{cmapss} dataset using as few as 1\% of the labeled data. Although these findings point in the right direction, there are some shortcomings this paper wants to address. First, previous works only evaluate their \ac{ssl} approaches on one of the four subsets of the \ac{cmapss} dataset. 
As these subsets are relatively small (compared to other deep learning datasets), all of the subsets should be investigated to see if the performance improvements are universal. Secondly, the time series of the unlabeled data were used as-is. In our previous work \cite{Krokotsch2020}, we have shown that unlabeled time series data for \ac{rul} estimation should not contain time steps at the point of failure. If they do, the whole time series could be labeled trivially, making the use of \ac{ssl} unnecessary. Therefore, the previous studies on \ac{ssl} for \ac{rul} estimation may produce overly optimistic results, as the approaches have access to more data near failure than could be considered realistic. This \emph{grade of degradation} of the unlabeled data, i.e. how far from failure the machines in it are, is of significant importance for \ac{ssl}. As previous work produced only modest improvements, we propose a novel \ac{ssl} method based on self-supervised pre-training. Self-supervision has been proven useful for pre-training deep neural networks on unlabeled data \cite{Doersch_2015_ICCV,gidaris2018unsupervised,Devlin2019}. Our pre-training task is to estimate the time between two time steps in a time series as a proxy for the difference in \ac{rul}. Afterward, the pre-trained network is fine-tuned on labeled data. We will conduct experiments on all four subsets of \ac{cmapss} comparing our approach and two common \ac{ssl} approaches, i.e. \acp{rbm} and \acp{ae}, to a supervised baseline. This will serve to answer our following research questions: \begin{itemize} \item Do findings of previous \ac{ssl} studies on \acs{rul} estimation hold when taking the \emph{grade of degradation} of the unlabeled data into account? \item Can self-supervised pre-training improve on other pre-training-based \ac{ssl} approaches for \acs{rul} estimation? \end{itemize} The remainder of this paper is structured as follows. 
First, we will lay out the related work on \ac{rul} estimation with deep neural networks, \acl{ssl} and self-supervised learning in section \ref{sec:RelatedWork}. In section \ref{sec:Methods}, we will describe our network architecture, our semi-supervised learning approach, and our self-supervised pre-training. Afterward, we explain our experimental setup, including the data, performance metrics, and competing approaches, as well as the evaluation procedure and hyperparameter selection. Section \ref{sec:Results} is concerned with the presentation and discussion of our results, while section \ref{sec:Conclusion} concludes this paper and gives an outlook on future work. The code to reproduce the results of this paper is available on GitHub: \texttt{\url{www.github.com/tilman151/self-supervised-ssl}}. \section{Related Work} \label{sec:RelatedWork} This section gives an overview of the current state of the literature. First, we will discuss \ac{rul} estimation with \acp{dnn}. Second, we will investigate \acl{ssl} and self-supervised learning in the scope of \ac{rul} estimation. \subsection{Remaining Useful Lifetime Estimation} \label{sec:RW:RULEstimation} \ac{rul} estimation is often treated as a regression problem. Recent work focuses mainly on \acp{dnn} because they work on the raw data and do not require hand-crafted features. Early works focused on shallow networks and \acp{mlp} \cite{Gebraeel2004}. Later ones settled on \acp{cnn} and \acp{lstm} as network architectures. \acp{lstm} are \acp{rnn} and a natural fit for the time series data seen in \ac{rul} applications. They process one time step at a time with the help of an internal cell state derived from all previously seen time steps. Zheng et al. \cite{Zheng2017} used \acp{lstm} on three benchmark datasets and found them to work best compared to \acp{mlp} and \acp{cnn}. Wu et al. \cite{Wu2018} compared \acp{lstm} against vanilla \acp{rnn} and GRUs. They declared \acp{lstm} the winner, too. 
\acp{cnn}, on the other hand, seem better suited to image data than to time series. However, by using 1d-convolutions instead of 2d ones, we can apply them to \ac{rul} estimation, too. In a comparison, Bai et al. \cite{Bai2018} found \acp{cnn} equal to \acp{lstm} in performance, even though \acp{cnn} are faster in inference and training. First attempts at \ac{rul} estimation by Babu et al. \cite{SateeshBabu2016}, which still used 2d-convolutions, performed worse than \acp{lstm}; when Li et al. \cite{Li2018a} switched to 1d-convolutions, they were able to surpass \acp{lstm} altogether. Zhu et al. \cite{Zhu2019} combined features from multiple hidden layers with their multiscale \ac{cnn}, which did better on their dataset than traditional \acp{cnn}. Instead of using raw time series data, they transformed it to the frequency domain and used that as the input of their network. Jiang et al. \cite{Jiang2020} resorted to combining both network types and reported better performance. Nearly all networks were trained against the \ac{mse} \cite{Zheng2017,Wu2018,SateeshBabu2016,Zhu2019,Jiang2020}. Only Li et al. \cite{Li2018a} used the \ac{rmse}. \subsection{Semi-Supervised Learning} \label{sec:RW:SSL} \ac{ssl} can be divided into several sub-fields \cite{VanEngelen2020}. The most common distinction is the goal of the training process itself. While transductive methods are only concerned with providing the labels for the unlabeled training data, inductive methods yield a model that can be used on unseen data. We will focus on the inductive methods, as for \ac{pdm} applications we need to apply the trained model on unseen data after training. Even though there are many diverse approaches to \ac{ssl} (e.g. S3VMs), we will narrow our perspective to the ones applicable to \acp{dnn}. Wrapper methods offer some of the oldest \ac{ssl} approaches. They rely on producing pseudo-labels for the unlabeled portion of the data and then using it in conjunction with the labeled data to train in a supervised fashion. 
Self-training is one of the most basic methods and was published by Yarowsky in 1995 \cite{Yarowsky1995}. First, a model trained only on the labeled data provides the pseudo-labels for the unlabeled data. Afterward, a final model is trained on the combined labeled and pseudo-labeled data. Pre-training-based methods rely on unsupervised learning. The \ac{dnn} or a part of it is trained with an unsupervised learning method and taken as the initialization for a supervised learning stage. The literature provides examples for several unsupervised approaches, most commonly \acp{ae}, used for pre-training. Cheng et al. \cite{Cheng2019} used deep autoencoders for semi-supervised machine translation. Kingma et al. \cite{Kingma2014semi} used variational autoencoders for semi-supervised image classification. There are several deep learning methods for \ac{ssl} that directly incorporate an unsupervised component into their loss. Examples are Ladder Networks \cite{Rasmus2015semi}, which incorporate an autoencoder reconstruction loss, or Pseudo-ensembles \cite{Bachman2014semi}, which use a consistency loss between the outputs of a parent network and perturbed child networks for the unlabeled data. The survey of Van Engelen et al. \cite{VanEngelen2020} gives an excellent overview of the previously mentioned methods. There are a few papers on \ac{ssl} for \ac{rul} estimation. Ellefsen et al. \cite{ListouEllefsen2019} used a \ac{rbm} for pre-training on the NASA \ac{cmapss} dataset. Yoon et al. \cite{Yoon2017} pre-trained their network on the same dataset with a variational autoencoder. He et al. \cite{He2018rul} used ladder networks for \ac{rul} estimation of centrifugal pumps. Unfortunately, the field of \ac{ssl} suffers from a multitude of evaluation setups, which makes comparing approaches difficult \cite{Oliver2018realistic}. 
\ac{ssl} for \ac{rul} estimation is no exception to this issue, as many papers report results only on parts of their datasets, or omit critical information like the amount of available labeled data. We aim to take these pitfalls into account in this work. \subsection{Self-Supervised Learning} \label{sec:RW:SelfSupervised} Like many methods in deep learning, self-supervised learning first succeeded in computer vision and spread to different fields from there. The aim was to learn features that are beneficial for solving common tasks (image classification, object detection, etc.) in the absence of labeled data. For this, a so-called pre-text task is defined to train a neural network, which is then used as a feature extractor for the real task. For example, Doersch et al. \cite{Doersch_2015_ICCV} predicted the position of an image patch, cut from a larger image, in relation to another patch from the same image. The trained network was then able to perform object detection. Another example is Gidaris et al. \cite{gidaris2018unsupervised}, who used predicting image rotation as a pre-text task for classification. Approaches like these were able to produce state-of-the-art results in a semi-supervised regime (large amount of unlabeled, small amount of labeled data) but could not outperform supervised approaches trained on large-scale datasets. Self-supervised learning gained prominence as a pre-training technique in natural language processing through Devlin et al. \cite{Devlin2019}. Their model, named BERT, was pre-trained on the pre-text task of predicting missing words in sentences sampled from a $3.3$B word dataset, including the whole of the English Wikipedia. The model was then able to produce state-of-the-art results on eleven benchmark tasks by training a single layer on top of the pre-trained network. Metric learning is another possible pre-text task that can be used in both supervised and self-supervised settings. 
It aims to learn a distance or similarity metric between pairs of data points. The recent work of Musgrave et al. \cite{Musgrave2020} gives an excellent overview of popular methods but shows that newer, more complex approaches perform only as well as older, simpler ones. This leads us to the conclusion that even simple metric learning methods could be used as a self-supervised pre-text task, too. There is, to our knowledge, still a lack of work on self-supervised methods for multivariate time series data as found in \ac{pdm} applications. Recently, Franceschi et al. \cite{Franceschi2019} proposed a metric-learning-based pre-text task for time series. They trained siamese networks \cite{baldi1993neural,bromley1993signature} to predict the similarity of two time series snippets with a triplet loss. Their pre-trained network outperformed dynamic time warping in an unsupervised regime and other deep learning approaches in a semi-supervised regime, and yielded competitive results when compared to supervised state-of-the-art approaches. This hints at self-supervised learning as a promising direction for pre-training on time series data. \section{Methods} \label{sec:Methods} In this section, we will describe the neural network we use for \ac{rul} estimation, \ac{ssl} via pre-training, and our novel self-supervised pre-training technique. \begin{figure} \caption{\ac{rul} Estimation Network} \label{fig:RulEst} \caption{Self-Supervised Siamese Network} \label{fig:PreTrain} \caption{\textbf{Overview of the networks used in this work}} \label{fig:Networks} \end{figure} \subsection{Remaining Useful Lifetime Estimation Network} \label{sec:M:RULEstimation} \begin{figure} \caption{\textbf{Architecture of the used Networks}} \label{fig:BasicArchitecture} \end{figure} In general, \acp{dnn} for regression follow a common architecture, as seen in figure \ref{fig:RulEst}. They consist of two networks: the feature extractor $f$ and a regression head $g$. 
Networks for \ac{rul} estimation are no different, estimating the \ac{rul} value of a sample $x$ as: \begin{equation} \textsc{RUL}' = g(f(x)) \end{equation} As seen in section \ref{sec:RW:RULEstimation}, the feature extractor can take the form of a \ac{cnn}, \ac{lstm}, or any other network architecture. The regression head, on the other hand, almost always takes the form of a simple linear layer ($ax+b$) or a \ac{mlp}. We expect the network input $x_{(i-w):i}^{(k)}$, or $x_i$ for short, to be a time frame of size $w$ from a multivariate time series $k$ ending at time step $i$. Therefore, we use a simple 1d-\ac{cnn} feature extractor with a fully-connected layer as the regression head. The feature extractor consists of multiple 1d-\ac{cnn} layers with Dropout, Batch Normalization \cite{Ioffe15}, and a \ac{relu} activation function, followed by a single linear layer also with Batch Normalization and \ac{relu}. The complete architecture is depicted in figure \ref{fig:BasicArchitecture}. We selected a \ac{cnn} because it is faster to train than, e.g., an \ac{lstm} and has fewer trainable parameters than a \ac{mlp} with similar performance. In this work, we will focus on this feature extractor only, as we are mainly concerned with the influence of pre-training it on unlabeled data. Furthermore, previous work implies that differences in performance between different extractor architectures for \ac{rul} estimation are marginal \cite{Li2018a}. We train the network by mini-batch gradient descent with a \ac{rmse} loss: \begin{equation} \mathcal{L}_{\textsc{RMSE}} = \sqrt{\frac{1}{|X|}\sum_{i=1}^{|X|} (\textsc{RUL}_i - \textsc{RUL}'_i)^2} \label{eqn:rmse} \end{equation} where $X$ is a (mini-)batch of samples $\{x_1, ..., x_{|X|}\}$, $\textsc{RUL}_i$ the \ac{rul} value associated with $x_i$, and $\textsc{RUL}'_i = g(f(x_i))$.
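The training loss of equation \ref{eqn:rmse} can be sketched in a few lines of numpy (a minimal illustration of the batch-wise computation, not the training implementation itself):

```python
import numpy as np

def rmse_loss(rul_true, rul_pred):
    # L_RMSE = sqrt( (1/|X|) * sum_i (RUL_i - RUL'_i)^2 ) over a (mini-)batch
    rul_true = np.asarray(rul_true, dtype=float)
    rul_pred = np.asarray(rul_pred, dtype=float)
    return float(np.sqrt(np.mean((rul_true - rul_pred) ** 2)))
```

For example, predictions of $90$ and $60$ against true values of $100$ and $50$ give a batch loss of $10$.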
\subsection{Semi-Supervised Learning through Pre-Training} \label{sec:M:SSL} \ac{ssl} is a machine learning regime that incorporates both labeled and unlabeled data into the training process. In this paper, we focus on the two-stage approach of combining unsupervised pre-training with supervised fine-tuning. First, the feature extractor $f$ is trained on both the labeled and unlabeled data using a method that does not require labels. Afterward, the pre-trained feature extractor is used as the starting point for the conventional, supervised training as described in the previous section. The regression head $g$ is still initialized randomly. This procedure aims for the feature extractor to learn useful features from the unlabeled data that are beneficial to the supervised training. For this to work, it is necessary to assume that the marginal data distribution $P(X)$ contains information about the conditional distribution $P(\textsc{RUL}|X)$ we want to learn \cite{VanEngelen2020}. It follows that we have to assume that the labeled and unlabeled data were generated by the same distribution $P$. \subsection{Self-Supervised Pre-Training} \label{sec:M:SelfSupervised} In this section, we lay out the intuitive motivation behind our proposed self-supervised pre-training by analyzing the latent space of trained networks. Afterward, we derive a pre-text task from this intuition and devise a training regime. \subsubsection{Motivation} \label{sec:M:SS:Motivation} To design a suitable pre-text task, one has to understand the supervised training process and its shortcomings. To this end, we will look at the features learned by the feature extractor $f$ trained with different amounts of labeled data. Unfortunately, high-dimensional data, such as the extracted features, is hard for humans to interpret.
Therefore, we project it into two dimensions using \ac{umap} \cite{mcinnes2018umap} and plot the features in a scatter plot. Figure \ref{fig:embeddings_full} depicts the features extracted from the validation data of \ac{cmapss} subset FD004 after training on all available labeled data. The network achieves a validation \ac{mse} loss of $19.3$. We can observe a snake-like structure in the features that broadens on the left side and narrows on the right side. By coloring the data points according to their associated \ac{rul} value, we can see that the left end of the snake corresponds to high and the right side to low \ac{rul} values. Furthermore, no discernible sub-structures are apparent, which shows that the network does not differentiate between time frames from different time series with the same \ac{rul}. Because of the low validation loss, we can conclude that this snake-like structure in the feature space may be beneficial to \ac{rul} estimation. Figure \ref{fig:embeddings_few} depicts the features extracted by a feature extractor trained on only three labeled time series, which corresponds to $2\%$ of the available labeled data. The network achieves a validation loss of $31.8$. Again, we can observe the snake-like structure, but this time the high-\ac{rul} end is much more feathered out. Additionally, the \ac{rul} values are not as clearly separated as in the previous figure. Comparing the two plots, we can theorize that the network fails in the low-data scenario because it cannot learn the tight snake-like structure. A suitable pre-text task should result in a feature extractor that has already learned this structure, which we can verify qualitatively with a plot like in figure \ref{fig:embeddings}. Data points with similar \ac{rul} values should be near each other in the feature space.
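The inspection above hinges on projecting high-dimensional features to 2D for plotting. The paper uses \ac{umap}; purely to illustrate the mechanics of such a projection, the following sketch substitutes a plain PCA via numpy's SVD (a deliberate simplification, not the method used here):

```python
import numpy as np

def project_2d(features):
    # Center the features and project onto the top-2 principal
    # components (a linear PCA stand-in for the UMAP projection).
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # shape: (n_samples, 2)
```

The resulting $(n, 2)$ array can be fed directly into a scatter plot, colored by the associated \ac{rul} values.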
\begin{figure} \caption{All labeled time series} \label{fig:embeddings_full} \caption{Three labeled time series} \label{fig:embeddings_few} \caption{\textbf{Features produced by trained feature extractors}} \label{fig:embeddings} \end{figure} \subsubsection{Definition of the Pre-Text Task} \label{sec:M:SS:Loss} To derive a pre-text task from the intuition gained in the previous section, we turn to the field of metric learning. Metric learning is concerned with learning a distance or similarity metric between two data points of a dataset. As we aim to learn self-supervised, we can use only information from the unlabeled data to achieve this task. In our case, we want to learn a latent space structure where data points with similar \ac{rul} values are close and points with a large difference in \ac{rul} are distant from each other. This corresponds to predicting the \emph{relative \ac{rul}} between two data points. Given two time series, $k$ and $l$, of data from machines run to failure, the \emph{relative \ac{rul}} value $r$ is: \begin{equation} r = \textsc{RUL}^{(k)}_i - \textsc{RUL}^{(l)}_j \label{eqn:relRul} \end{equation} where $\textsc{RUL}^{(k)}_i$ is the \ac{rul} at time step $i$ in time series $k$. If we set $k = l$ and assume a linear degradation model of: \begin{equation} \textsc{RUL}^{(k)}_i = |k| - i \end{equation} where $|k|$ is the length of time series $k$, we can rewrite equation \ref{eqn:relRul} as: \begin{equation} r = (|k| - i) - (|k| - j) \end{equation} It is trivial to see that $|k|$ is not needed at all to calculate $r$: \begin{eqnarray} r &=& j - i \end{eqnarray} This makes it possible to calculate $r$ even on time series of unfailed machines, where $|k|$ is not known. Therefore, this pre-training target can be used without any failure data and consequently without the true \ac{rul} values.
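The cancellation of $|k|$ can be verified with a two-line sketch (the run length of $200$ is an arbitrary illustrative value):

```python
def relative_rul(i, j):
    # r = RUL_i - RUL_j = (|k| - i) - (|k| - j) = j - i;
    # the series length |k| cancels, so unfailed runs can be used too.
    return j - i

# sanity check against a hypothetical failed run of length 200
length = 200
assert relative_rul(10, 60) == (length - 10) - (length - 60)
```

The same value of $r$ is obtained no matter which run length is plugged in, which is exactly why the failure point is not needed.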
For a piece-wise linear degradation model, as seen in the \ac{cmapss} dataset: \begin{equation} \textsc{RUL}^{(k)}_i = \min{(\textsc{RUL}_\textsc{max},\; |k| - i)} \end{equation} the formula for $r$ is still an acceptable approximation if we assume that $i < j$ and $j - i \leq \textsc{RUL}_\textsc{max},\; \forall i,j$. We normalize $r$ to the range of $[0,1]$ by dividing it by $\textsc{RUL}_\textsc{max}$. The final equation is then: \begin{equation} r \approx \frac{j - i}{\textsc{RUL}_\textsc{max}},\quad \begin{aligned}i &< j \\ j - i &\leq \textsc{RUL}_\textsc{max} \end{aligned}\quad \forall i,j \label{eqn:finalTarget} \end{equation} \subsubsection{Training Procedure} \label{sec:M:SS:Training} The goal of our pre-training is to train a neural network to estimate the target value $r$ from equation \ref{eqn:finalTarget}, from which we then extract the feature extractor $f$ used to initialize the \ac{rul} estimation network before supervised training. We realize this goal with siamese networks \cite{bromley1993signature,baldi1993neural}. Siamese networks take the form of the feature extractor $f(x)$ and a distance function $h(a, b)$ operating on two samples, $x_i$ and $x_j$, so that: \begin{equation} r' = h(f(x_i), f(x_j)) \end{equation} where $r'$ is the predicted value of $r$. The function $h(a, b)$ is defined as: \begin{equation} h(a, b) = \left|\left|\frac{a}{||a||} - \frac{b}{||b||}\right|\right|^2 \end{equation} where $||\cdot||$ is the euclidean norm. Dividing the embeddings by their norm restricts them to the unit hypersphere, which is often found to be useful \cite{Musgrave2020}. Figure \ref{fig:PreTrain} depicts the schematic structure of the siamese networks.
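A minimal numpy sketch of the pre-text target from equation \ref{eqn:finalTarget} and the distance function $h$ might look as follows (the cap of $125$ matches the \ac{cmapss} setup described later; the function names are our own):

```python
import numpy as np

RUL_MAX = 125  # piece-wise linear cap, see the experimental setup

def pretext_target(i, j, rul_max=RUL_MAX):
    # r ~ (j - i) / RUL_max, valid for i < j and j - i <= RUL_max
    assert i < j and j - i <= rul_max
    return (j - i) / rul_max

def h(a, b):
    # squared euclidean distance of L2-normalized embeddings
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.sum((a - b) ** 2))
```

Note that embeddings pointing in the same direction yield $h = 0$ regardless of their magnitude, which is the effect of the normalization onto the unit hypersphere.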
We use \ac{mse} as a loss function with mini-batch gradient descent to train the siamese networks: \begin{equation} \mathcal{L}_{\textsc{MSE}} = \frac{1}{|X|} \sum_{k=1}^{|X|}{(r_k - r_k')^2} \end{equation} For each (mini-)batch $X$, we sample $|X|$ time series from the training data, uniformly sample pairs of time frames $x^{(k)}_i$ and $x^{(k)}_j$, and calculate $r_k$ according to equation \ref{eqn:finalTarget}. Because the samples $x^{(k)}_i$ and $x^{(k)}_j$ are time frames from the same time series, a difference between $i$ and $j$ that is much smaller than the length of the time frame results in significant overlap between the samples. Therefore, we introduce a minimum distance for sampling pairs, which is regarded as a hyperparameter. We used the Adam optimizer \cite{Kingma2014a} for training with the default values of $0.9$ and $0.99$ for $\beta_1$ and $\beta_2$. \section{Experimental Design} \label{sec:ExperimentalDesign} This section describes the dataset used for our experiments, which performance metrics were used, and how the evaluation was conducted. \subsection{Data} \label{sec:ED:Data} We evaluate the effect of our pre-training technique on the publicly available NASA \ac{cmapss} dataset. It contains four subsets (FD001 - FD004) of different operating conditions and possible fault modes. Engines in FD001 and FD002 experience only \ac{hpc} degradation, while the ones in FD003 and FD004 can additionally experience fan degradation. Further, engines in FD001 and FD003 are run under only one operating condition, while engines from FD002 and FD004 vary between six. Each subset is split into training and test data by the dataset authors. The training data contains multivariate time series data of aero engines up until the time step they failed. Each time series can be considered a different engine of the same type. The test data's time series stop at a random time step before failure, for which the true \ac{rul} label is provided.
Table \ref{tab:cmapss} summarizes the details of the dataset. We construct one validation set from each training set by taking $20\%$ of the time series. \begin{table} \centering \begin{tabular}{lcccc} \textbf{Dataset} & \textbf{FD001} & \textbf{FD002} & \textbf{FD003} & \textbf{FD004} \\ \hline \hline \# Training Engines & 100 & 260 & 100 & 249 \\ \# Test Engines & 100 & 259 & 100 & 248 \\ \# Operation Conditions & 1 & 6 & 1 & 6 \\ \# Failure Modes & 1 & 1 & 2 & 2 \\ \hline \end{tabular} \caption{\ac{cmapss} Dataset} \label{tab:cmapss} \end{table} For pre-processing, we follow Li et al. \cite{Li2018a} and select 14 of the 21 sensor channels with the indices 2, 3, 4, 7, 8, 9, 11, 12, 13, 14, 15, 17, 20, and 21 as the input of the network. The features are scaled by channel, using a min-max scaling calculated on each subset's training set. We then use a sliding window with a step size of one to obtain frames of unified length in the time dimension. We use a window size of 30, 20, 30, and 15 for the subsets FD001 to FD004, respectively. These sizes were determined by the length of the shortest time series in each subset's test set. The \ac{rul} labels are calculated by a piece-wise linear degradation model with a maximum value $\textsc{RUL}_{\textsc{max}}$ of $125$. \subsection{Data Scenarios} \label{sec:ED:DataScenarios} As described in the introduction, unlabeled \ac{rul} data cannot contain the point of failure, as we could simply compute the labels otherwise. It follows that we have to truncate the time series before failure when studying \ac{ssl}, as the model would otherwise have access to more failure data than normally available. How early we truncate the time series represents the \textit{grade of degradation} of the engine that the time series was collected from. Previous work on \ac{ssl} for \ac{rul} estimation varied only the amount of labeled data available, i.e. how many engines had already failed.
Our experimental design includes varying the \textit{grade of degradation} for the unlabeled data, too. Adapting the work of Krokotsch et al. \cite{Krokotsch2020}, we impose data scenarios on each subset. In our case, a data scenario is characterized by the \textit{number of failed engines} and the \textit{grade of degradation} of the unlabeled ones. Both factors are interpreted as percentages. A data scenario with \textit{number of engines} at $n\%$ means that only $n\%$ of the machines in the subset are used as labeled data for training. The rest is assumed to have not yet failed and is used as unlabeled data. This limits the amount of machine-to-machine variance that is covered by the labeled data. A data scenario with \textit{grade of degradation} at $n\%$ means that only the first $n\%$ of time steps of each unlabeled time series is available during training. This effectively limits the amount of available data near failure. We use five different \textit{grades of degradation} of $40\%$, $60\%$, $70\%$, $80\%$ and $90\%$ for our evaluation. A \textit{grade of degradation} of $100\%$ would not make sense, as the unlabeled machines would have failed already and could be used as labeled data. The lower limit of the percentages is chosen due to the piece-wise linear degradation model. Using fewer than $40\%$ of the time steps would mean that the unlabeled data contains next to no data with a \ac{rul} of less than $\textsc{RUL}_{\textsc{max}}$. This means that the unlabeled data would come from completely healthy machines and may add no benefit to training. The number of failed engines is set at $2\%$, $10\%$, $20\%$, $40\%$, and $100\%$. Using $100\%$ of failed machines means that no unlabeled data is available and pre-training is conducted on the labeled data only. We decided on these percentages because preliminary experiments did not show any degradation in performance for more than $40\%$ compared to using $100\%$ of the engines.
The lower limit of $2\%$ is chosen because it results in at least one failed engine for each subset. \subsection{Performance Metrics} \label{sec:ED:Metrics} We employ two common performance metrics from the field of \ac{pdm}. The first is the \ac{rmse}, as described in equation (\ref{eqn:rmse}), over the test set. The second is the \emph{\ac{rul}-Score}, first proposed in the PHM 2008 Data Challenge \cite{Heimes2008}. It is calculated as follows: \begin{equation} s_i = \begin{cases} e^{-\frac{\Delta\textsc{RUL}_i}{13}} - 1 & \text{for } \Delta\textsc{RUL}_i < 0 \\ e^{\frac{\Delta\textsc{RUL}_i}{10}} - 1 & \text{for } \Delta\textsc{RUL}_i \geq 0 \end{cases} \end{equation} where $\Delta\textsc{RUL}_i$ is the difference between predicted and true \ac{rul} for sample $x_i$. This metric penalizes overestimating \ac{rul} more than underestimating it, as the former has a bigger impact on maintenance actions. We report the sum over all samples in the test set, following previous work in the field. We will discuss the results using both metrics, although we will focus on \ac{rmse}, as we view it as the more intuitive measure of performance. \subsection{Supervised Baseline} \label{sec:ED:Baseline} The baseline for our experiments is training the \ac{rul} estimation network in a supervised fashion, as described in section \ref{sec:M:RULEstimation}. The training incorporates only the available labeled data. We used the Adam optimizer \cite{Kingma2014a} with the default values of $0.9$ and $0.99$ for $\beta_1$ and $\beta_2$. \subsection{Competing Approaches} \label{sec:ED:Competition} Aside from the baseline, we compare our pre-training approach to two other methods found in the literature. The first approach is unsupervised pre-training via a deep \ac{ae}. We construct the \ac{ae} so that our feature extractor network is used as the encoder. The decoder is an exact mirror image of the encoder, with transposed convolutions replacing the convolutional layers.
The \ac{ae} is then trained to reconstruct its input via mini-batch gradient descent with a \ac{mse} loss. As for the baseline, we used the Adam optimizer. The second approach is unsupervised pre-training via a \ac{rbm}. The first layer of the feature extractor is interpreted as a \ac{rbm} with Gaussian visible units and \ac{relu} hidden units and trained until convergence. The other layers of the feature extractor remain randomly initialized. Again, Adam was used as the optimizer. Even though \cite{ListouEllefsen2019,Yoon2017} reported results for \acp{rbm} and variational \acp{ae}, we chose to reproduce the competing approaches ourselves. There are several reasons for this. First, the mentioned papers evaluated their approaches only on FD001 or FD004, which paints a limited picture of the approaches' performance. Second, the mentioned papers used a slightly different experimental setup, i.e. a different $\textsc{RUL}_{max}$, making comparison difficult. Furthermore, the aforementioned $\textsc{RUL}_{max}$ was optimized as a hyperparameter in one of the papers. As this value controls the possible range of the performance metrics, optimizing it results in overly optimistic results. \subsection{Evaluation Procedure} \label{sec:ED:Evaluation} The baseline, our approach, and the competing \ac{ssl} methods are evaluated on each of the subsets of the \ac{cmapss} dataset subject to each of our selected data scenarios. This results in 100 different experimental setups (4 subsets, 5 numbers of failed engines, 5 grades of degradation) for each \ac{ssl} method. The baseline includes only 20 setups, as it does not use unlabeled data, which makes varying the grade of degradation superfluous. Each experimental setup is replicated ten times. For each replication, a new split of labeled and unlabeled data is randomly sampled according to the data scenario. The splits are held constant across methods, i.e.
the baseline receives the same labeled data as the \ac{ssl} methods given the same data scenario. This makes comparison easier, as many more than ten replications would otherwise be needed in the low labeled data scenarios to obtain statistically stable performance estimates. The pre-training stage for the \ac{ssl} methods receives the labeled and unlabeled portions of the data and trains for at least 100 epochs. The \ac{rbm} is trained for five epochs, as it contains only one layer. Early stopping is used to select the model with the lowest validation loss. For our method, we monitor the \ac{mse} loss for the relative \ac{rul} target and for the autoencoder the \ac{mse} reconstruction loss. The supervised training stage receives only the labeled portion of the data and is trained for at least 200 epochs. The weights of the selected pre-trained model are used to initialize the network in this stage for the \ac{ssl} methods. The baseline is initialized randomly. It should be noted that we divide the output of the feature extractor by its norm if the pre-training was self-supervised. Again, early stopping on the validation \ac{rmse} is used to select the best model, for which the performance metrics over the test set are calculated. We report the performance for each subset/data scenario combination separately and averaged over the ten replications. \subsection{Hyperparameter Selection} \label{sec:ED:Hyperparameters} We began hyperparameter selection using the fixed network architecture shown in figure \ref{fig:BasicArchitecture} and conducted a random search in two steps. First, the hyperparameters of the supervised stage (i.e. learning rate, dropout, and batch size) were optimized for each subset of \ac{cmapss}. A network was trained in a supervised fashion with all available labeled data, as described in section \ref{sec:ED:Evaluation}. After 100 trials, the hyperparameters of the network with the best validation \ac{rmse} were selected.
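The early stopping with best-model selection used in both training stages can be sketched as a small generic helper (an illustrative sketch, not our exact implementation):

```python
class EarlyStopping:
    # Track the validation loss, remember the best model state, and
    # signal when no improvement was seen for `patience` epochs.
    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.best_state = None
        self.bad_epochs = 0

    def step(self, val_loss, state):
        if val_loss < self.best:
            self.best, self.best_state, self.bad_epochs = val_loss, state, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training
```

After training stops, `best_state` holds the weights of the model with the lowest validation loss, which is the model carried forward to the next stage or to evaluation.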
The hyperparameters in question can be seen in table \ref{tab:hp_supervised}. \begin{table}[] \begin{tabular}{llllll} \multirow{2}{*}{\textbf{Hyperparameter}} & \multirow{2}{*}{\textbf{Search Space}} & \multicolumn{4}{l}{\textbf{Configuration for}} \\ & & \textbf{FD001} & \textbf{FD002} & \textbf{FD003} & \textbf{FD004} \\ \hline \hline \textbf{Learning Rate} & $\operatorname{qlogu}(10^{-4}, 10^{-1}, 5 \cdot 10^{-5})$ & $0.0056$ & $0.0903$ & $0.095$ & $0.06635$ \\ \textbf{Dropout} & $\operatorname{qu}(0.0, 0.5, 0.1)$ & $0.4$ & $0.3$ & $0.2$ & $0.0$ \\ \textbf{Batch Size} & [64, 128, 256, 512] & 128 & 512 & 64 & 64 \\ \hline \end{tabular} \caption{\textbf{Hyperparameters of the Supervised Training Stage:} The search space $\operatorname{qlogu}(a,b,c)$ draws samples uniformly on a logarithmic scale from the interval $[a,b]$ quantized to $c$. The search space $\operatorname{qu}(a,b,c)$ draws samples uniformly from the interval $[a,b]$ quantized to $c$.} \label{tab:hp_supervised} \end{table} In a second step, the hyperparameters of the pre-training stages (i.e. learning rate, dropout, batch size, and minimum pair distance) were optimized similarly. The networks are trained without labels at $80\%$ grade of degradation, as described in section \ref{sec:ED:Evaluation}. For our method, we selected hyperparameters according to the validation \ac{mse} loss for the relative \ac{rul} target and for the autoencoder according to the validation \ac{mse} reconstruction loss. As stability was an issue, each hyperparameter configuration was trained five times and the mean of these replications was used for selection. For the \ac{rbm}, we adopted the hyperparameters from \cite{ListouEllefsen2019}. We set the learning rate for this method to $10^{-4}$ by hand, as it was not given in the paper. It should be noted that all optimizations used no labels at all and can therefore be conducted independently of the amount of labeled data.
The selected hyperparameters can be seen in tables \ref{tab:hp_self_supervised} and \ref{tab:hp_unsupervised}. \begin{table}[] \begin{tabular}{llllll} \multirow{2}{*}{\textbf{Hyperparameter}} & \multirow{2}{*}{\textbf{Search Space}} & \multicolumn{4}{l}{\textbf{Configuration for}} \\ & & \textbf{FD001} & \textbf{FD002} & \textbf{FD003} & \textbf{FD004} \\ \hline \hline \textbf{Learning Rate} & $\operatorname{qlogu}(10^{-4}, 10^{-1}, 5 \cdot 10^{-5})$ & $0.00015$ & $0.01155$ & $0.00615$ & $0.07455$ \\ \textbf{Dropout} & $\operatorname{qu}(0.0, 0.5, 0.1)$ & $0.2$ & $0.4$ & $0.1$ & $0.1$ \\ \textbf{Batch Size} & [64, 128, 256, 512] & 64 & 64 & 64 & 64 \\ \textbf{Minimum Distance} & [1, 10, 15, 30] & 10 & 15 & 15 & 10 \\ \hline \end{tabular} \caption{\textbf{Hyperparameters of the Self-Supervised Pre-Training:} See table \ref{tab:hp_supervised} for an explanation of the search spaces.} \label{tab:hp_self_supervised} \end{table} \begin{table}[] \begin{tabular}{llllll} \multirow{2}{*}{\textbf{Hyperparameter}} & \multirow{2}{*}{\textbf{Search Space}} & \multicolumn{4}{l}{\textbf{Configuration for}} \\ & & \textbf{FD001} & \textbf{FD002} & \textbf{FD003} & \textbf{FD004} \\ \hline \hline \textbf{Learning Rate} & $\operatorname{qlogu}(10^{-4}, 10^{-1}, 5 \cdot 10^{-5})$ & $0.0001$ & $0.0248$ & $0.015$ & $0.0006$ \\ \textbf{Dropout} & $\operatorname{qu}(0.0, 0.5, 0.1)$ & $0.1$ & $0.4$ & $0.0$ & $0.0$ \\ \textbf{Batch Size} & [64, 128, 256, 512] & 64 & 256 & 64 & 64 \\ \textbf{Minimum Distance} & [1, 10, 15, 30] & 1 & 15 & 1 & 10 \\ \hline \end{tabular} \caption{\textbf{Hyperparameters of the Autoencoder Pre-Training:} See table \ref{tab:hp_supervised} for an explanation of the search spaces. The minimum distance was optimized for this method, too, due to implementation reasons. It should not influence the results significantly, though.} \label{tab:hp_unsupervised} \end{table} \section{Results} \label{sec:Results} First, we will describe the results of our experiments for each \ac{cmapss} subset.
Afterward, we will interpret the findings and set them into context. \subsection{Comparison of Approaches} \label{sec:R:Comparison} Our experiments produced too many data points to present them all in detail. We will therefore show only slices of our results as plots. The results at $80\%$ grade of degradation, plotted against the percentage of labeled data, are shown in figures \ref{fig:results_80_degradation} and \ref{fig:results_score_80_degradation}. The results at $2\%$ labeled data, plotted against the grade of degradation, are shown in figures \ref{fig:results_2_labeled} and \ref{fig:results_score_2_labeled}. The complete results are shown in tables \ref{tab:res_12}, \ref{tab:res_34}, \ref{tab:score_12} and \ref{tab:score_34}. Overall, we can conclude that a significant drop in baseline performance was mostly observable for very low amounts of labeled data. In figure \ref{fig:results_80_degradation} we can see that the performance of the baseline (blue) has a relatively small standard deviation for 100\%, 40\%, and 20\% of labeled data. For the subsets FD002 and FD004, the mean and standard deviation increase only at 2\% labeled data. For FD001, the performance already drops at 40\% labeled data and for FD003 at 10\%. One has to keep in mind, though, that FD002 and FD004 are more than twice as large as FD001 and FD003, which means that they have a higher amount of labeled data available at the same percentage. Performance on \textbf{FD001} shows next to no improvement through \ac{ssl}. In figure \ref{fig:results_80_degradation} we can see that the median \ac{rmse} performances are well within each other's \acp{iqr}, with the exception of 40\% labeled data, where \ac{ae} pre-training achieves much better performance. In one case, i.e. for 2\% labeled data, the performance of \ac{rbm} pre-training was even worse than the baseline.
The \ac{rul}-Score metric in figure \ref{fig:results_score_80_degradation} paints a similar picture, even though pre-trained models seem slightly better for 20\% and 40\% labeled data. For 2\% labeled data, the \ac{ssl} approaches had a worse mean \ac{rul}-Score performance than the baseline. These findings are stable for different grades of degradation, too. Figures \ref{fig:results_2_labeled} and \ref{fig:results_score_2_labeled} show no discernible trend with respect to the grade of degradation for any method. Overall, the \ac{ae} seems to perform slightly better than the other approaches. Performance on \textbf{FD002} clearly benefits from \ac{rbm} and self-supervised pre-training. The self-supervised pre-training, in particular, beats the baseline and the other approaches most of the time, especially in low labeled data scenarios. On the other hand, there is an extreme drop in performance for our method at 40\% degradation. This is true for all percentages of labeled data but most apparent for 2\%. When looking at figure \ref{fig:results_score_2_labeled}, only self-supervised pre-training reliably beats the median performance of the baseline in terms of \ac{rul}-Score for grades of degradation above 40\%. Performance gains on \textbf{FD003} are better than on FD001 but still minor. For 2\% labeled data, the self-supervised pre-training beats the baseline and the other methods for nearly all grades of degradation in terms of mean \ac{rmse} and \ac{rul}-Score. Nevertheless, the \acp{iqr} of our method and the \ac{rbm} often overlap considerably. As on FD001, we can see no trend concerning the grade of degradation. On \textbf{FD004}, we can see the benefits of our approach most clearly. While \ac{ae} and \ac{rbm} pre-training bring next to no performance gains in terms of median \ac{rmse}, self-supervised pre-training outperforms the baseline reliably, as seen in figures \ref{fig:results_80_degradation} and \ref{fig:results_2_labeled}.
Nevertheless, we can observe a downward trend in performance with respect to the grade of degradation for our method. The competing approaches do not suffer from this. Figures \ref{fig:results_score_80_degradation} and \ref{fig:results_score_2_labeled} reveal devastating performance losses for \ac{ae} pre-training in terms of \ac{rul}-Score. We can conclude that \ac{ssl} can be beneficial for \ac{rul} estimation even under realistic conditions with varying grades of degradation. However, no approach was able to reliably beat the baseline on all subsets under all data scenarios. A representative validation set and careful hyperparameter tuning are needed to ensure improved performance and detect negative outcomes. Self-supervised pre-training seems to be a step in the right direction, as it often outperforms the competing approaches. Nevertheless, its performance trends downward on FD002 and FD004 with a falling grade of degradation. \begin{figure} \caption{\textbf{\ac{rmse} results at $80\%$ grade of degradation, plotted against the percentage of labeled data}} \label{fig:results_80_degradation} \end{figure} \begin{figure} \caption{\textbf{\ac{rul}-Score results at $80\%$ grade of degradation, plotted against the percentage of labeled data}} \label{fig:results_score_80_degradation} \end{figure} \begin{figure} \caption{\textbf{\ac{rmse} results at $2\%$ labeled data, plotted against the grade of degradation}} \label{fig:results_2_labeled} \end{figure} \begin{figure} \caption{\textbf{\ac{rul}-Score results at $2\%$ labeled data, plotted against the grade of degradation}} \label{fig:results_score_2_labeled} \end{figure} \begin{table} \begin{adjustbox}{width=\textwidth,center} \centering \begin{tabular}{cl|ccccc} \hline & & \multicolumn{5}{c}{FD001} \\ & & 2\% & 10\% & 20\% & 40\% & 100\% \\ \hline \hline & None & $31.30 \pm 6.16$ & $25.45 \pm 2.31$ & $24.21 \pm 1.31$ & $21.94 \pm 3.08$ & $14.63 \pm 0.81$ \\ \hline \multirow{3}{*}{40\%} & AE & $33.37 \pm 9.53$ & $24.71 \pm 1.28$ & $\mathbf{23.56 \pm 3.12}$ & $\mathbf{17.27 \pm 1.47}$ & $15.03 \pm 0.61$ \\ & RBM & $33.49 \pm 7.49$ & $25.90 \pm 1.93$ & $25.09 \pm 1.61$ & $21.02 \pm 3.17$ & $15.67 \pm 0.57$ \\ & Ours & $32.35 \pm 6.08$ & $\mathbf{24.42 \pm 1.58}$ & $24.91 \pm 1.87$ & $21.38 \pm 2.32$ & $\mathbf{14.34 \pm 0.81}$ \\ \hline
\multirow{3}{*}{60\%} & AE & $32.17 \pm 8.94$ & $\mathbf{24.62 \pm 1.75}$ & $\mathbf{22.65 \pm 2.46}$ & $\mathbf{17.95 \pm 1.92}$ & $15.09 \pm 0.52$ \\ & RBM & $32.64 \pm 7.45$ & $27.06 \pm 1.79$ & $24.78 \pm 2.73$ & $19.50 \pm 2.93$ & $15.68 \pm 1.01$ \\ & Ours & $\mathbf{29.88 \pm 3.83}$ & $25.81 \pm 2.36$ & $23.93 \pm 1.72$ & $22.41 \pm 1.82$ & $\mathbf{14.48 \pm 0.87}$ \\ \hline \multirow{3}{*}{70\%} & AE & $34.93 \pm 8.64$ & $25.10 \pm 2.23$ & $\mathbf{21.70 \pm 2.09}$ & $\mathbf{17.71 \pm 1.79}$ & $15.02 \pm 0.58$ \\ & RBM & $33.00 \pm 7.40$ & $26.00 \pm 1.77$ & $23.95 \pm 1.55$ & $19.21 \pm 1.92$ & $15.45 \pm 0.71$ \\ & Ours & $\mathbf{29.52 \pm 5.26}$ & $\mathbf{24.90 \pm 1.47}$ & $22.96 \pm 2.38$ & $18.51 \pm 2.76$ & $15.35 \pm 0.78$ \\ \hline \multirow{3}{*}{80\%} & AE & $30.24 \pm 7.16$ & $25.32 \pm 2.23$ & $\mathbf{22.41 \pm 3.07}$ & $\mathbf{17.13 \pm 1.31}$ & $15.40 \pm 0.62$ \\ & RBM & $33.58 \pm 7.05$ & $25.53 \pm 2.36$ & $24.43 \pm 1.66$ & $19.56 \pm 4.02$ & $15.55 \pm 0.88$ \\ & Ours & $\mathbf{28.50 \pm 2.68}$ & $\mathbf{25.13 \pm 2.28}$ & $23.64 \pm 1.52$ & $20.98 \pm 2.52$ & $\mathbf{14.33 \pm 0.74}$ \\ \hline \multirow{3}{*}{90\%} & AE & $\mathbf{30.57 \pm 7.00}$ & $26.04 \pm 1.92$ & $\mathbf{21.84 \pm 1.82}$ & $\mathbf{17.26 \pm 1.34}$ & $14.89 \pm 0.65$ \\ & RBM & $31.86 \pm 5.30$ & $\mathbf{25.05 \pm 2.17}$ & $25.01 \pm 2.53$ & $19.44 \pm 2.36$ & $15.55 \pm 0.47$ \\ & Ours & $31.69 \pm 6.62$ & $25.20 \pm 1.92$ & $23.52 \pm 2.24$ & $18.74 \pm 3.53$ & $15.39 \pm 0.78$ \\ \hline \hline & & \multicolumn{5}{c}{FD002} \\ & & 2\% & 10\% & 20\% & 40\% & 100\% \\ \hline \hline & None & $29.80 \pm 6.38$ & $23.87 \pm 1.30$ & $22.83 \pm 1.03$ & $20.73 \pm 0.94$ & $17.55 \pm 0.62$ \\ \hline \multirow{3}{*}{40\%} & AE & $33.86 \pm 7.88$ & $23.12 \pm 1.11$ & $21.91 \pm 0.96$ & $20.48 \pm 0.90$ & $\mathbf{17.19 \pm 0.30}$ \\ & RBM & $\mathbf{27.43 \pm 4.30}$ & $\mathbf{22.79 \pm 0.70}$ & $\mathbf{20.98 \pm 0.65}$ & $\mathbf{19.41 \pm 0.92}$ & $17.19 \pm 
0.51$ \\ & Ours & $42.04 \pm 4.47$ & $24.04 \pm 0.84$ & $22.09 \pm 1.31$ & $20.44 \pm 0.80$ & $18.34 \pm 0.84$ \\ \hline \multirow{3}{*}{60\%} & AE & $30.95 \pm 6.38$ & $23.25 \pm 0.92$ & $22.26 \pm 1.05$ & $20.69 \pm 0.91$ & $17.78 \pm 1.22$ \\ & RBM & $26.50 \pm 3.22$ & $22.87 \pm 0.90$ & $\mathbf{21.36 \pm 1.01}$ & $\mathbf{19.42 \pm 0.68}$ & $\mathbf{17.54 \pm 0.74}$ \\ & Ours & $\mathbf{25.46 \pm 3.79}$ & $\mathbf{22.86 \pm 1.17}$ & $22.14 \pm 1.35$ & $20.45 \pm 0.84$ & $18.45 \pm 0.78$ \\ \hline \multirow{3}{*}{70\%} & AE & $28.03 \pm 4.85$ & $23.84 \pm 1.06$ & $22.43 \pm 1.83$ & $20.49 \pm 0.82$ & $17.56 \pm 1.22$ \\ & RBM & $26.83 \pm 3.91$ & $22.85 \pm 1.20$ & $21.41 \pm 1.19$ & $19.33 \pm 0.58$ & $17.17 \pm 0.46$ \\ & Ours & $\mathbf{24.82 \pm 4.00}$ & $\mathbf{21.79 \pm 1.42}$ & $\mathbf{20.25 \pm 0.77}$ & $\mathbf{18.66 \pm 0.69}$ & $\mathbf{16.83 \pm 0.44}$ \\ \hline \multirow{3}{*}{80\%} & AE & $32.36 \pm 7.24$ & $23.29 \pm 0.90$ & $21.87 \pm 0.90$ & $19.92 \pm 0.90$ & $17.21 \pm 0.58$ \\ & RBM & $27.13 \pm 4.05$ & $22.84 \pm 0.86$ & $21.35 \pm 0.66$ & $\mathbf{19.13 \pm 0.87}$ & $\mathbf{17.10 \pm 0.57}$ \\ & Ours & $\mathbf{24.82 \pm 3.43}$ & $\mathbf{22.43 \pm 1.11}$ & $\mathbf{21.21 \pm 1.26}$ & $19.79 \pm 0.65$ & $18.23 \pm 0.63$ \\ \hline \multirow{3}{*}{90\%} & AE & $31.55 \pm 6.75$ & $23.70 \pm 1.07$ & $22.30 \pm 1.00$ & $20.71 \pm 1.36$ & $17.55 \pm 0.53$ \\ & RBM & $26.98 \pm 3.73$ & $22.55 \pm 1.12$ & $20.99 \pm 1.07$ & $19.46 \pm 0.70$ & $17.28 \pm 0.39$ \\ & Ours & $\mathbf{24.99 \pm 3.80}$ & $\mathbf{21.48 \pm 1.39}$ & $\mathbf{20.26 \pm 1.23}$ & $\mathbf{18.82 \pm 0.72}$ & $\mathbf{16.80 \pm 0.44}$ \\ \hline \end{tabular} \end{adjustbox} \caption{\textbf{\ac{rmse} results for FD001 and FD002}: We report the mean and standard deviation. The rows represent the \emph{grade of degradation} and the columns the \emph{percent of labeled data} of the data scenario. 
The second column contains the pre-training method, where \emph{None} is the baseline without any pre-training, and \emph{Ours} is the self-supervised pre-training. The bold results mark the best mean performance for each data scenario. If no result is bold, no approach was able to beat the baseline.} \label{tab:res_12} \end{table} \begin{table} \begin{adjustbox}{width=\textwidth,center} \centering \begin{tabular}{cl|ccccc} \hline & & \multicolumn{5}{c}{FD003} \\ & & 2\% & 10\% & 20\% & 40\% & 100\% \\ \hline \hline & None & $42.33 \pm 20.56$ & $22.99 \pm 4.02$ & $16.55 \pm 0.94$ & $14.58 \pm 1.06$ & $13.88 \pm 0.58$ \\ \hline \multirow{3}{*}{40\%} & AE & $35.08 \pm 6.38$ & $\mathbf{22.12 \pm 5.31}$ & $17.05 \pm 1.63$ & $14.75 \pm 1.12$ & $13.96 \pm 0.59$ \\ & RBM & $35.83 \pm 7.00$ & $22.98 \pm 4.85$ & $16.82 \pm 1.13$ & $15.19 \pm 0.82$ & $14.50 \pm 0.50$ \\ & Ours & $\mathbf{31.82 \pm 4.51}$ & $22.98 \pm 4.30$ & $\mathbf{16.00 \pm 0.86}$ & $\mathbf{14.39 \pm 0.85}$ & $\mathbf{13.47 \pm 0.18}$ \\ \hline \multirow{3}{*}{60\%} & AE & $34.00 \pm 6.89$ & $\mathbf{21.26 \pm 4.86}$ & $16.77 \pm 1.62$ & $15.09 \pm 1.14$ & $14.16 \pm 0.49$ \\ & RBM & $\mathbf{31.63 \pm 6.02}$ & $22.63 \pm 3.93$ & $17.04 \pm 1.89$ & $15.33 \pm 0.84$ & $14.45 \pm 0.42$ \\ & Ours & $33.79 \pm 4.74$ & $24.80 \pm 4.77$ & $17.10 \pm 1.19$ & $\mathbf{14.52 \pm 1.10}$ & $\mathbf{13.09 \pm 0.74}$ \\ \hline \multirow{3}{*}{70\%} & AE & $35.01 \pm 5.99$ & $\mathbf{22.00 \pm 4.80}$ & $\mathbf{16.36 \pm 1.15}$ & $15.39 \pm 0.93$ & $13.85 \pm 0.42$ \\ & RBM & $36.30 \pm 7.52$ & $22.36 \pm 4.23$ & $16.68 \pm 1.35$ & $15.51 \pm 0.88$ & $14.61 \pm 0.45$ \\ & Ours & $\mathbf{33.10 \pm 4.34}$ & $22.35 \pm 4.82$ & $16.73 \pm 1.45$ & $15.14 \pm 1.18$ & $\mathbf{13.64 \pm 0.62}$ \\ \hline \multirow{3}{*}{80\%} & AE & $36.35 \pm 5.06$ & $\mathbf{21.24 \pm 3.75}$ & $\mathbf{16.49 \pm 1.60}$ & $14.82 \pm 0.76$ & $14.03 \pm 0.50$ \\ & RBM & $34.29 \pm 6.83$ & $22.87 \pm 4.63$ & $17.31 \pm 0.86$ & $15.52 \pm 0.66$ 
& $14.68 \pm 0.56$ \\ & Ours & $\mathbf{33.47 \pm 5.57}$ & $22.98 \pm 5.18$ & $16.54 \pm 2.04$ & $14.60 \pm 0.91$ & $\mathbf{13.05 \pm 0.53}$ \\ \hline \multirow{3}{*}{90\%} & AE & $33.37 \pm 5.36$ & $\mathbf{20.14 \pm 4.09}$ & $17.10 \pm 2.03$ & $14.84 \pm 0.88$ & $14.17 \pm 0.35$ \\ & RBM & $36.48 \pm 7.74$ & $23.47 \pm 4.27$ & $17.26 \pm 1.35$ & $15.06 \pm 0.92$ & $14.34 \pm 0.57$ \\ & Ours & $\mathbf{31.79 \pm 3.53}$ & $21.19 \pm 4.13$ & $\mathbf{16.34 \pm 1.19}$ & $14.78 \pm 0.75$ & $\mathbf{13.83 \pm 0.49}$ \\ \hline \hline & & \multicolumn{5}{c}{FD004} \\ & & 2\% & 10\% & 20\% & 40\% & 100\% \\ \hline \hline & None & $35.44 \pm 2.22$ & $27.00 \pm 1.37$ & $25.18 \pm 1.13$ & $23.51 \pm 0.62$ & $21.66 \pm 0.90$ \\ \hline \multirow{3}{*}{40\%} & AE & $36.07 \pm 4.39$ & $28.53 \pm 2.26$ & $25.39 \pm 1.36$ & $\mathbf{23.20 \pm 1.06}$ & $21.62 \pm 0.71$ \\ & RBM & $35.99 \pm 2.38$ & $28.46 \pm 2.20$ & $25.74 \pm 0.90$ & $23.55 \pm 0.62$ & $21.48 \pm 0.54$ \\ & Ours & $36.41 \pm 1.71$ & $31.15 \pm 3.66$ & $25.44 \pm 1.67$ & $23.24 \pm 1.84$ & $\mathbf{21.11 \pm 0.52}$ \\ \hline \multirow{3}{*}{60\%} & AE & $35.60 \pm 3.68$ & $28.34 \pm 2.36$ & $\mathbf{24.88 \pm 0.97}$ & $23.42 \pm 0.97$ & $21.30 \pm 0.37$ \\ & RBM & $35.59 \pm 3.69$ & $27.61 \pm 2.44$ & $25.48 \pm 1.22$ & $23.22 \pm 0.48$ & $21.18 \pm 0.62$ \\ & Ours & $\mathbf{34.20 \pm 4.08}$ & $29.40 \pm 2.77$ & $25.31 \pm 2.28$ & $\mathbf{22.24 \pm 1.21}$ & $\mathbf{20.96 \pm 0.67}$ \\ \hline \multirow{3}{*}{70\%} & AE & $35.72 \pm 3.72$ & $28.28 \pm 1.82$ & $25.34 \pm 1.30$ & $23.19 \pm 0.71$ & $21.26 \pm 0.56$ \\ & RBM & $35.41 \pm 2.69$ & $27.32 \pm 2.30$ & $25.40 \pm 1.39$ & $23.02 \pm 0.73$ & $21.08 \pm 0.52$ \\ & Ours & $\mathbf{31.58 \pm 1.57}$ & $27.02 \pm 2.07$ & $\mathbf{23.88 \pm 2.14}$ & $\mathbf{22.48 \pm 1.51}$ & $\mathbf{20.95 \pm 0.67}$ \\ \hline \multirow{3}{*}{80\%} & AE & $34.68 \pm 2.34$ & $28.16 \pm 1.73$ & $25.47 \pm 1.79$ & $22.74 \pm 1.08$ & $21.23 \pm 0.62$ \\ & RBM & $37.15 \pm 4.56$ & 
$27.95 \pm 2.36$ & $25.20 \pm 1.08$ & $23.43 \pm 1.25$ & $21.26 \pm 0.89$ \\ & Ours & $\mathbf{29.23 \pm 2.74}$ & $\mathbf{25.84 \pm 2.84}$ & $\mathbf{24.26 \pm 2.11}$ & $\mathbf{21.46 \pm 0.68}$ & $\mathbf{20.95 \pm 0.67}$ \\ \hline \multirow{3}{*}{90\%} & AE & $35.95 \pm 2.96$ & $27.57 \pm 1.28$ & $25.66 \pm 1.14$ & $23.43 \pm 1.20$ & $21.73 \pm 0.59$ \\ & RBM & $35.88 \pm 2.70$ & $27.84 \pm 1.82$ & $25.69 \pm 1.31$ & $23.73 \pm 0.78$ & $21.25 \pm 0.44$ \\ & Ours & $\mathbf{31.07 \pm 3.81}$ & $\mathbf{24.20 \pm 2.14}$ & $\mathbf{24.17 \pm 2.88}$ & $\mathbf{21.82 \pm 1.22}$ & $\mathbf{20.90 \pm 0.62}$ \\ \hline \end{tabular} \end{adjustbox} \caption{\textbf{\ac{rmse} results for FD003 and FD004}: Please consult table \ref{tab:res_12} for further information.} \label{tab:res_34} \end{table} \begin{table} \begin{adjustbox}{width=\textwidth,center} \centering \begin{tabular}{cl|ccccc} \hline & & \multicolumn{5}{c}{FD001} \\ & & 2\% & 10\% & 20\% & 40\% & 100\% \\ \hline \hline & None & $5.64e3 \pm 3.25e3$ & $1.49e4 \pm 1.11e4$ & $1.04e4 \pm 7.48e3$ & $5.28e3 \pm 6.35e3$ & $4.24e2 \pm 1.15e2$ \\ \hline \multirow{3}{*}{40\%} & AE & $1.51e4 \pm 1.28e4$ & $\mathbf{4.57e3 \pm 2.37e3}$ & $\mathbf{7.39e3 \pm 7.22e3}$ & $\mathbf{1.03e3 \pm 7.22e2}$ & $4.79e2 \pm 1.03e2$ \\ & RBM & $1.62e4 \pm 8.24e3$ & $1.73e4 \pm 1.83e4$ & $9.79e3 \pm 9.05e3$ & $3.69e3 \pm 4.48e3$ & $5.32e2 \pm 6.25e1$ \\ & Ours & $2.04e4 \pm 9.05e3$ & $1.14e4 \pm 8.40e3$ & $1.57e4 \pm 1.32e4$ & $3.90e3 \pm 2.89e3$ & $\mathbf{3.97e2 \pm 8.31e1}$ \\ \hline \multirow{3}{*}{60\%} & AE & $1.38e4 \pm 1.28e4$ & $\mathbf{9.80e3 \pm 1.20e4}$ & $\mathbf{4.76e3 \pm 4.94e3}$ & $\mathbf{9.85e2 \pm 5.46e2}$ & $4.79e2 \pm 6.05e1$ \\ & RBM & $1.76e4 \pm 1.56e4$ & $1.89e4 \pm 2.34e4$ & $1.55e4 \pm 1.58e4$ & $2.95e3 \pm 4.13e3$ & $5.14e2 \pm 9.48e1$ \\ & Ours & $1.80e4 \pm 2.13e4$ & $1.08e4 \pm 5.36e3$ & $8.08e3 \pm 5.60e3$ & $6.04e3 \pm 5.29e3$ & $\mathbf{4.04e2 \pm 8.41e1}$ \\ \hline \multirow{3}{*}{70\%} & AE & $3.16e4 
\pm 3.61e4$ & $1.42e4 \pm 1.67e4$ & $\mathbf{2.71e3 \pm 2.45e3}$ & $\mathbf{8.92e2 \pm 4.70e2}$ & $4.76e2 \pm 7.56e1$ \\ & RBM & $1.49e4 \pm 1.42e4$ & $1.37e4 \pm 1.51e4$ & $5.71e3 \pm 5.43e3$ & $1.06e3 \pm 5.36e2$ & $4.91e2 \pm 1.05e2$ \\ & Ours & $9.03e3 \pm 8.16e3$ & $\mathbf{9.96e3 \pm 8.28e3}$ & $7.18e3 \pm 8.90e3$ & $1.45e3 \pm 1.53e3$ & $4.86e2 \pm 9.98e1$ \\ \hline \multirow{3}{*}{80\%} & AE & $8.31e3 \pm 9.62e3$ & $\mathbf{1.32e4 \pm 1.20e4}$ & $\mathbf{4.74e3 \pm 5.51e3}$ & $\mathbf{9.11e2 \pm 6.53e2}$ & $5.22e2 \pm 9.65e1$ \\ & RBM & $3.00e4 \pm 2.83e4$ & $1.55e4 \pm 2.14e4$ & $6.26e3 \pm 6.15e3$ & $4.72e3 \pm 8.42e3$ & $5.13e2 \pm 1.41e2$ \\ & Ours & $1.13e4 \pm 1.47e4$ & $1.54e4 \pm 1.46e4$ & $6.48e3 \pm 4.50e3$ & $3.62e3 \pm 2.74e3$ & $\mathbf{3.98e2 \pm 7.89e1}$ \\ \hline \multirow{3}{*}{90\%} & AE & $1.01e4 \pm 5.01e3$ & $1.25e4 \pm 1.48e4$ & $\mathbf{2.99e3 \pm 2.53e3}$ & $\mathbf{9.04e2 \pm 3.92e2}$ & $5.06e2 \pm 1.74e2$ \\ & RBM & $1.62e4 \pm 1.09e4$ & $1.37e4 \pm 1.05e4$ & $1.22e4 \pm 1.37e4$ & $1.27e3 \pm 7.67e2$ & $4.95e2 \pm 6.30e1$ \\ & Ours & $1.50e4 \pm 1.18e4$ & $\mathbf{1.06e4 \pm 1.03e4}$ & $7.26e3 \pm 8.44e3$ & $3.50e3 \pm 6.38e3$ & $4.89e2 \pm 9.89e1$ \\ \hline \hline & & \multicolumn{5}{c}{FD002} \\ & & 2\% & 10\% & 20\% & 40\% & 100\% \\ \hline \hline & None & $6.95e4 \pm 7.74e4$ & $2.02e4 \pm 1.70e4$ & $9.21e3 \pm 4.80e3$ & $4.79e3 \pm 1.95e3$ & $1.99e3 \pm 6.11e2$ \\ \hline \multirow{3}{*}{40\%} & AE & $1.25e5 \pm 1.32e5$ & $\mathbf{1.21e4 \pm 5.90e3}$ & $8.23e3 \pm 4.71e3$ & $5.80e3 \pm 2.51e3$ & $1.86e3 \pm 5.80e2$ \\ & RBM & $7.16e4 \pm 7.74e4$ & $1.35e4 \pm 8.68e3$ & $\mathbf{6.08e3 \pm 2.18e3}$ & $4.33e3 \pm 2.30e3$ & $\mathbf{1.69e3 \pm 2.60e2}$ \\ & Ours & $1.43e5 \pm 8.00e4$ & $1.45e4 \pm 1.00e4$ & $8.20e3 \pm 3.92e3$ & $\mathbf{4.12e3 \pm 2.29e3}$ & $2.71e3 \pm 1.74e3$ \\ \hline \multirow{3}{*}{60\%} & AE & $1.01e5 \pm 1.03e5$ & $\mathbf{1.20e4 \pm 5.64e3}$ & $\mathbf{6.71e3 \pm 2.70e3}$ & $5.46e3 \pm 2.16e3$ & $2.08e3 
\pm 1.13e3$ \\ & RBM & $3.92e4 \pm 4.56e4$ & $1.29e4 \pm 7.54e3$ & $8.26e3 \pm 4.54e3$ & $\mathbf{3.10e3 \pm 8.06e2}$ & $\mathbf{1.82e3 \pm 3.69e2}$ \\ & Ours & $\mathbf{2.99e4 \pm 2.72e4}$ & $1.25e4 \pm 1.04e4$ & $7.69e3 \pm 6.09e3$ & $3.71e3 \pm 9.67e2$ & $2.78e3 \pm 1.72e3$ \\ \hline \multirow{3}{*}{70\%} & AE & $7.06e4 \pm 9.35e4$ & $1.92e4 \pm 1.71e4$ & $1.02e4 \pm 9.86e3$ & $5.08e3 \pm 1.72e3$ & $2.59e3 \pm 1.93e3$ \\ & RBM & $6.14e4 \pm 6.51e4$ & $\mathbf{1.30e4 \pm 4.50e3}$ & $7.90e3 \pm 5.90e3$ & $3.57e3 \pm 1.23e3$ & $1.76e3 \pm 2.73e2$ \\ & Ours & $\mathbf{3.33e4 \pm 4.39e4}$ & $1.78e4 \pm 2.29e4$ & $\mathbf{4.74e3 \pm 4.02e3}$ & $\mathbf{2.76e3 \pm 1.38e3}$ & $\mathbf{1.67e3 \pm 1.97e2}$ \\ \hline \multirow{3}{*}{80\%} & AE & $9.93e4 \pm 1.01e5$ & $1.56e4 \pm 1.63e4$ & $7.76e3 \pm 4.89e3$ & $4.58e3 \pm 2.32e3$ & $1.74e3 \pm 2.47e2$ \\ & RBM & $6.16e4 \pm 6.11e4$ & $\mathbf{1.02e4 \pm 3.96e3}$ & $8.27e3 \pm 3.91e3$ & $\mathbf{3.32e3 \pm 1.34e3}$ & $\mathbf{1.65e3 \pm 2.47e2}$ \\ & Ours & $\mathbf{3.55e4 \pm 5.25e4}$ & $1.04e4 \pm 4.79e3$ & $\mathbf{6.25e3 \pm 4.44e3}$ & $3.88e3 \pm 2.41e3$ & $2.13e3 \pm 4.91e2$ \\ \hline \multirow{3}{*}{90\%} & AE & $7.85e4 \pm 7.03e4$ & $1.48e4 \pm 6.75e3$ & $8.13e3 \pm 3.31e3$ & $5.43e3 \pm 3.63e3$ & $1.88e3 \pm 3.15e2$ \\ & RBM & $4.95e4 \pm 5.63e4$ & $1.03e4 \pm 6.69e3$ & $7.04e3 \pm 5.64e3$ & $3.44e3 \pm 1.30e3$ & $1.68e3 \pm 2.25e2$ \\ & Ours & $\mathbf{3.75e4 \pm 5.15e4}$ & $\mathbf{8.11e3 \pm 4.40e3}$ & $\mathbf{3.91e3 \pm 2.10e3}$ & $\mathbf{2.41e3 \pm 6.83e2}$ & $\mathbf{1.66e3 \pm 2.01e2}$ \\ \hline \end{tabular} \end{adjustbox} \caption{\textbf{\ac{rul}-Score results for FD001 and FD002}: We report the mean and standard deviation. The rows represent the \emph{grade of degradation} and the columns the \emph{percent of labeled data} of the data scenario. 
The second column contains the pre-training method, where \emph{None} is the baseline without any pre-training, and \emph{Ours} is the self-supervised pre-training. The bold results mark the best mean performance for each data scenario. If no result is bold, no approach was able to beat the baseline. Please note that the standard deviation of these results can be misleading as \ac{rul}-Score is an exponential measure.} \label{tab:score_12} \end{table} \begin{table} \begin{adjustbox}{width=\textwidth,center} \centering \begin{tabular}{cl|ccccc} \hline & & \multicolumn{5}{c}{FD003} \\ & & 2\% & 10\% & 20\% & 40\% & 100\% \\ \hline \hline & None & $1.18e8 \pm 2.62e8$ & $8.96e3 \pm 1.20e4$ & $1.31e3 \pm 6.83e2$ & $7.03e2 \pm 3.82e2$ & $7.48e2 \pm 1.71e2$ \\ \hline \multirow{3}{*}{40\%} & AE & $2.93e4 \pm 3.11e4$ & $2.02e4 \pm 5.66e4$ & $1.40e3 \pm 9.24e2$ & $6.94e2 \pm 3.24e2$ & $7.13e2 \pm 2.08e2$ \\ & RBM & $9.06e4 \pm 1.13e5$ & $\mathbf{4.18e3 \pm 3.15e3}$ & $3.43e3 \pm 7.44e3$ & $9.74e2 \pm 5.84e2$ & $9.88e2 \pm 3.10e2$ \\ & Ours & $\mathbf{1.51e4 \pm 1.31e4}$ & $7.16e3 \pm 9.81e3$ & $\mathbf{1.02e3 \pm 5.83e2}$ & $\mathbf{6.20e2 \pm 2.76e2}$ & $\mathbf{4.95e2 \pm 1.12e2}$ \\ \hline \multirow{3}{*}{60\%} & AE & $4.03e4 \pm 6.81e4$ & $\mathbf{7.16e3 \pm 1.35e4}$ & $1.39e3 \pm 7.65e2$ & $9.76e2 \pm 3.91e2$ & $7.10e2 \pm 1.44e2$ \\ & RBM & $4.07e4 \pm 6.87e4$ & $9.77e3 \pm 2.35e4$ & $2.64e3 \pm 4.86e3$ & $1.03e3 \pm 6.20e2$ & $9.02e2 \pm 3.54e2$ \\ & Ours & $\mathbf{3.35e4 \pm 4.47e4}$ & $7.24e3 \pm 7.05e3$ & $1.71e3 \pm 1.55e3$ & $\mathbf{6.13e2 \pm 2.42e2}$ & $\mathbf{4.74e2 \pm 1.71e2}$ \\ \hline \multirow{3}{*}{70\%} & AE & $3.91e4 \pm 3.60e4$ & $7.12e3 \pm 1.50e4$ & $\mathbf{1.05e3 \pm 4.12e2}$ & $1.04e3 \pm 6.46e2$ & $6.78e2 \pm 1.75e2$ \\ & RBM & $6.09e4 \pm 7.69e4$ & $\mathbf{3.49e3 \pm 3.45e3}$ & $1.42e3 \pm 6.43e2$ & $1.01e3 \pm 3.75e2$ & $9.28e2 \pm 2.79e2$ \\ & Ours & $\mathbf{1.65e4 \pm 1.88e4}$ & $1.20e4 \pm 3.14e4$ & $1.16e3 \pm 5.47e2$ & $7.17e2 \pm 
2.83e2$ & $\mathbf{5.25e2 \pm 1.27e2}$ \\ \hline \multirow{3}{*}{80\%} & AE & $4.36e4 \pm 3.58e4$ & $\mathbf{4.11e3 \pm 5.66e3}$ & $1.21e3 \pm 6.58e2$ & $8.46e2 \pm 3.09e2$ & $6.14e2 \pm 1.39e2$ \\ & RBM & $5.96e4 \pm 7.59e4$ & $8.60e3 \pm 1.82e4$ & $1.47e3 \pm 7.13e2$ & $9.10e2 \pm 2.92e2$ & $1.01e3 \pm 2.76e2$ \\ & Ours & $\mathbf{4.20e4 \pm 7.41e4}$ & $6.77e3 \pm 9.98e3$ & $\mathbf{9.61e2 \pm 3.88e2}$ & $\mathbf{5.96e2 \pm 2.08e2}$ & $\mathbf{4.59e2 \pm 1.39e2}$ \\ \hline \multirow{3}{*}{90\%} & AE & $3.22e4 \pm 4.59e4$ & $\mathbf{3.40e3 \pm 5.39e3}$ & $1.73e3 \pm 1.23e3$ & $8.26e2 \pm 4.42e2$ & $6.66e2 \pm 8.99e1$ \\ & RBM & $8.56e4 \pm 8.92e4$ & $5.24e3 \pm 6.90e3$ & $1.97e3 \pm 1.67e3$ & $9.11e2 \pm 5.46e2$ & $9.92e2 \pm 4.93e2$ \\ & Ours & $\mathbf{1.31e4 \pm 7.50e3}$ & $3.44e3 \pm 4.80e3$ & $\mathbf{1.14e3 \pm 5.58e2}$ & $\mathbf{6.69e2 \pm 2.35e2}$ & $\mathbf{5.55e2 \pm 9.86e1}$ \\ \hline \hline & & \multicolumn{5}{c}{FD004} \\ & & 2\% & 10\% & 20\% & 40\% & 100\% \\ \hline \hline & None & $1.63e5 \pm 1.58e5$ & $2.52e4 \pm 2.33e4$ & $1.69e4 \pm 9.37e3$ & $9.15e3 \pm 3.29e3$ & $6.21e3 \pm 1.62e3$ \\ \hline \multirow{3}{*}{40\%} & AE & $1.44e6 \pm 9.82e5$ & $1.56e6 \pm 8.75e5$ & $2.08e6 \pm 6.41e5$ & $2.27e6 \pm 1.34e6$ & $1.77e6 \pm 9.15e5$ \\ & RBM & $1.60e5 \pm 1.52e5$ & $3.90e4 \pm 3.43e4$ & $1.69e4 \pm 8.72e3$ & $\mathbf{7.79e3 \pm 2.62e3}$ & $6.03e3 \pm 2.25e3$ \\ & Ours & $\mathbf{1.02e5 \pm 4.04e4}$ & $5.25e4 \pm 7.18e4$ & $\mathbf{1.65e4 \pm 1.28e4}$ & $1.12e4 \pm 7.05e3$ & $\mathbf{5.26e3 \pm 1.71e3}$ \\ \hline \multirow{3}{*}{60\%} & AE & $1.10e6 \pm 8.70e5$ & $1.82e6 \pm 8.91e5$ & $1.71e6 \pm 1.29e6$ & $1.94e6 \pm 1.00e6$ & $1.42e6 \pm 6.38e5$ \\ & RBM & $1.41e5 \pm 1.12e5$ & $3.66e4 \pm 3.26e4$ & $\mathbf{1.29e4 \pm 5.23e3}$ & $8.53e3 \pm 5.18e3$ & $5.76e3 \pm 1.30e3$ \\ & Ours & $\mathbf{3.69e4 \pm 1.60e4}$ & $3.04e4 \pm 1.76e4$ & $1.66e4 \pm 1.35e4$ & $\mathbf{6.67e3 \pm 3.77e3}$ & $\mathbf{4.29e3 \pm 9.92e2}$ \\ \hline \multirow{3}{*}{70\%} & 
AE & $1.80e5 \pm 2.11e5$ & $3.86e4 \pm 3.61e4$ & $1.64e4 \pm 7.75e3$ & $1.08e4 \pm 8.82e3$ & $5.73e3 \pm 1.43e3$ \\ & RBM & $1.43e5 \pm 1.39e5$ & $3.68e4 \pm 5.13e4$ & $1.53e4 \pm 9.11e3$ & $9.53e3 \pm 4.82e3$ & $5.38e3 \pm 1.94e3$ \\ & Ours & $\mathbf{3.52e4 \pm 1.87e4}$ & $\mathbf{1.54e4 \pm 6.76e3}$ & $\mathbf{8.43e3 \pm 4.92e3}$ & $\mathbf{8.52e3 \pm 5.06e3}$ & $\mathbf{4.25e3 \pm 1.01e3}$ \\ \hline \multirow{3}{*}{80\%} & AE & $1.46e6 \pm 1.01e6$ & $1.76e6 \pm 1.09e6$ & $1.37e6 \pm 6.03e5$ & $1.75e6 \pm 6.24e5$ & $1.71e6 \pm 8.83e5$ \\ & RBM & $2.11e5 \pm 2.24e5$ & $3.41e4 \pm 2.14e4$ & $1.42e4 \pm 8.29e3$ & $9.48e3 \pm 5.50e3$ & $5.13e3 \pm 1.09e3$ \\ & Ours & $\mathbf{2.18e4 \pm 2.11e4}$ & $\mathbf{1.61e4 \pm 1.51e4}$ & $\mathbf{1.36e4 \pm 1.10e4}$ & $\mathbf{5.87e3 \pm 2.13e3}$ & $\mathbf{4.25e3 \pm 1.01e3}$ \\ \hline \multirow{3}{*}{90\%} & AE & $1.58e5 \pm 1.73e5$ & $3.28e4 \pm 2.23e4$ & $1.59e4 \pm 1.02e4$ & $1.04e4 \pm 5.96e3$ & $5.74e3 \pm 4.26e2$ \\ & RBM & $1.46e5 \pm 1.24e5$ & $2.51e4 \pm 2.21e4$ & $1.61e4 \pm 1.16e4$ & $1.09e4 \pm 5.88e3$ & $5.60e3 \pm 1.27e3$ \\ & Ours & $\mathbf{5.05e4 \pm 7.02e4}$ & $\mathbf{1.14e4 \pm 7.05e3}$ & $\mathbf{1.17e4 \pm 1.27e4}$ & $\mathbf{5.52e3 \pm 2.95e3}$ & $\mathbf{4.04e3 \pm 8.03e2}$ \\ \hline \end{tabular} \end{adjustbox} \caption{\textbf{\ac{rul}-Score results for FD003 and FD004}: Please consult table \ref{tab:score_12} for further information.} \label{tab:score_34} \end{table} \subsection{Discussion of Findings} \label{sec:R:Discussion} Our results revealed several points worthy of further discussion. First of all, there is the matter of minimal baseline performance drops when using as few as 40\% of the labeled data. A possible explanation for this is the fact that the \ac{cmapss} dataset is the product of a simulation which may lead to little variation between the individual time series. If the variation between time series is small, the network needs less data to learn useful patterns from it. 
The differences between the subsets may be explained by the varying number of available time series per subset. Additional experiments that vary the absolute number of labeled time series instead of a percentage could confirm this hypothesis. The next point is the discrepancy between \ac{rmse} and \ac{rul}-Score performance, best seen on FD004 for \ac{ae} pre-training. Even though the \ac{rmse} performance is similar to the other approaches, the \ac{rul}-Score is much higher. This phenomenon can be explained by the asymmetric nature of the \ac{rul}-Score, i.e. late predictions incur a higher penalty than early ones. The \ac{ae} pre-trained models seem to make predominantly late predictions, even though their absolute deviation from the real \ac{rul} is similar to the other approaches. Another interesting finding is that \ac{rbm} pre-training is competitive with \ac{ae} pre-training, even though it pre-trains only the first layer. Both methods use a reconstruction-based pre-training task, which makes the comparison even more interesting. An explaining factor could be that the bottleneck size of the autoencoder was not optimized as a hyperparameter, although it has a significant influence on the reconstruction capabilities of the \ac{ae}. We did not optimize it because changing the bottleneck size would change the number of model parameters, making the comparison with the other methods difficult. A bigger bottleneck may increase performance through the increased parameter count alone, independently of pre-training. Additional experiments, in which the bottleneck size is optimized for the autoencoder and the approaches are then compared on the same model architecture, could test this hypothesis. The last point is the downward trend in performance for self-supervised pre-training with respect to the grade of degradation.
At least on FD002 and FD004, we can observe that performance drops at lower grades of degradation and, in some cases, even makes our approach worse than the baseline. This problem may lie in the nature of the pre-text task we chose. It is based on the assumption that differences between the features of two time frames are correlated with the difference in \ac{rul}. If we take the piece-wise linear degradation model of \ac{cmapss} at face value, there are no differences in \ac{rul} above $\textsc{RUL}_{max}$. Consequently, there should not be a difference in the features either. Our pre-text task, on the other hand, is only accurate for a linear degradation model, as we cannot take $\textsc{RUL}_{max}$ into account on unlabeled data. Looking at the percentage of time frames above and below $\textsc{RUL}_{max}$ for different grades of degradation, we can see that in FD004 only 11\% of the training data has a label below a $\textsc{RUL}_{max}$ of 125 at 40\% grade of degradation. Coincidentally, the trend is most obvious on this subset. At the same grade of degradation, the training data of FD001, FD002, and FD003 has 33\%, 23\%, and 16\% of its labels below 125, respectively. Less data with labels below $\textsc{RUL}_{max}$ makes the approximation of our pre-text task less accurate. \section{Conclusion \& Future Work} \label{sec:Conclusion} We presented a study of three \ac{ssl} approaches on the NASA \ac{cmapss} dataset under improved, more realistic evaluation conditions. These conditions take into account that realistic, unlabeled data does not contain features near the point of failure. Concerning our first research question, our results show that, contrary to previous studies, \ac{ssl} does not always improve the accuracy of \ac{rul} estimation. This underlines the importance of a representative validation set, which may not always be available in settings with few labeled time series, to identify negative outcomes.
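The $\textsc{RUL}_{max}$ mismatch discussed in Sec. \ref{sec:R:Discussion} is easy to make concrete. The following minimal sketch (in Python, with hypothetical helper names, assuming the piece-wise linear labels of \ac{cmapss} with $\textsc{RUL}_{max} = 125$) shows that for a pair of frames above the cap, the true \ac{rul} difference is zero while the linear pre-text target is not:

```python
import numpy as np

RUL_MAX = 125  # piece-wise linear cap used for the C-MAPSS labels

def piecewise_rul(total_life: int) -> np.ndarray:
    """True labels: RUL decreases linearly with time but is capped at RUL_MAX."""
    linear = np.arange(total_life, 0, -1)  # total_life, total_life - 1, ..., 1
    return np.minimum(linear, RUL_MAX)

def pretext_target(i: int, j: int) -> int:
    """Pre-text target for frames i < j of one series: the *linear* time
    difference, which cannot account for the RUL_MAX cap on unlabeled data."""
    return j - i

rul = piecewise_rul(300)           # a series with 300 cycles
i, j = 10, 60                      # both frames lie above the cap
true_diff = int(rul[i] - rul[j])   # 0, since both labels equal RUL_MAX
proxy_diff = pretext_target(i, j)  # 50, the linear approximation
print(true_diff, proxy_diff)       # → 0 50
```

The lower the grade of degradation of the unlabeled data, the larger the share of such capped frame pairs, which is consistent with the trend observed on FD002 and FD004.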
In answering our second research question, we have shown that our \ac{ssl} approach, based on self-supervised pre-training, achieves superior performance compared to the competing approaches under certain conditions. More work is necessary to replicate these findings on other datasets. Nevertheless, our approach was not able to beat the baseline performance reliably under all data scenarios. Most notably, its performance dropped when the grade of degradation was low. Future work includes conducting the experiments outlined in the discussion section to test the proposed explanations for the observed phenomena. Additionally, advanced techniques from the field of metric learning, e.g. hard sample mining or a triplet loss, could be used to improve the self-supervised pre-training. Theoretically, our approach could also be used for unsupervised \ac{da}, as it shares many characteristics with \ac{ssl}. Unsupervised \ac{da} is of high interest for \ac{rul} estimation, as labeled and unlabeled data often do not share the same domain. Investigating the effectiveness of our approach for general, non-linear degradation processes, e.g. tool wear estimation, is another direction for future work. \end{document}
\begin{document} \title{Multiparameter Quantum Estimation Theory in Quantum Gaussian states} \author{Lahcen Bakmou}\email{[email protected]} \email{[email protected]} \affiliation{LPHE-Modeling and Simulation, Faculty of Sciences, Mohammed V University, Rabat, Morocco.} \author{Mohammed Daoud} \email{[email protected]} \affiliation{Department of Physics, Faculty of Sciences, University Ibn Tofail, Kenitra, Morocco.} \affiliation{LPHE-Modeling and Simulation, Faculty of Sciences, Mohammed V University, Rabat, Morocco.} \author{Rachid Ahl Laamara} \email{[email protected]} \affiliation{Centre of Physics and Mathematics (CPM), Mohammed V University of Rabat, Rabat, Morocco.} \affiliation{LPHE-Modeling and Simulation, Faculty of Sciences, Mohammed V University, Rabat, Morocco.} \begin{abstract} Multiparameter quantum estimation theory aims to determine simultaneously the ultimate precision of all parameters contained in the state of a given quantum system. Determining this ultimate precision requires the quantum Fisher information matrix (QFIM), which is essential to obtaining the quantum Cramér-Rao bound. This is the main motivation of this work, which concerns the computation of the analytical expression of the QFIM. Inspired by the results reported in J. Phys. A 52, 035304 (2019), we give the general formalism of the multiparameter quantum estimation theory of quantum Gaussian states in terms of their first and second moments. We give the analytical formulas of the right logarithmic derivative (RLD) and symmetric logarithmic derivative (SLD) operators. We then derive the general expressions of the corresponding quantum Fisher information matrices. We also derive an explicit expression of the condition which ensures the saturation of the quantum Cramér-Rao bound when estimating several parameters. Finally, we examine some examples to clarify the use of our results.
\par \textbf{Keywords}: Multiparameter quantum estimation theory, quantum Fisher information matrix, quantum Cramér-Rao bound, quantum Gaussian states \end{abstract} \maketitle \section{Introduction} Besides its fundamental aspects, quantum physics has provided us with the tools to understand the microscopic world, and this understanding has led to the technological revolution that gave us several solid-state devices. In the last two decades, it has been theoretically and experimentally shown that quantum mechanics provides the key tools for a modern technological revolution. This type of technology is usually called quantum technology \cite{Caves1981, paris2009, toth2014, dowling2015}; among its aspects we mention quantum communication \cite{briegel1998, duan2001, Gisin2007}, quantum cryptography \cite{ralph1999, bechmann2000,jennewein2000, grosshans2002}, quantum computation \cite{knill2001, lloyd2000, taddei2013} and quantum metrology \cite{giovannetti2006, demkowicz2012, escher2011, pang2014}. The latter constitutes a promising quantum protocol to enhance the precision of measurements, taking into account the need for more sensitive and accurate detectors \cite{giovannetti2004, giovannetti2011, zwierz2010}. Quantum metrology, or quantum estimation theory, was initially proposed by Helstrom \cite{helstrom1976} and Holevo \cite{gudder1985}. Its main goal is to perform high-precision measurements of the parameters specifying a given quantum system. In this sense, quantum metrology aims to develop quantum strategies that allow us to understand the optimal limits of quantum measurements in estimation protocols. The standard limits that fix the ultimate accuracy are known as the quantum Cramér-Rao bounds (QCRB) \cite{gill2005, paris2009}, which are always saturated in the case where a single parameter is estimated.
In contrast, it is difficult to saturate this bound in the case of simultaneous estimation of several parameters, due to the incompatibility between the optimal measurements of the various estimated parameters \cite{ragy2016, matsumoto2002, vaneph2013, vidrighin2014, crowley2014}. For this reason, multiparameter quantum metrology has attracted great interest, with the aim of generalizing the conditions under which the QCRB is saturated and maximum precision is therefore achieved. In general, to determine the QCRB, it is necessary to compute the quantum Fisher information matrix (QFIM) \cite{paris2009,vsafranek2018simple}. This matrix is a key ingredient in multiparameter quantum metrology, since its inverse provides the limits of the maximum precision in multiparameter estimation. Therefore, finding ways to increase the QFIM becomes an intriguing point in the enhancement of multiparameter protocols. The QFIM is important for a variety of purposes, such as the improvement of frequency standards \cite{boss2017, albarelli2017, albarelli2018, frowis2014, kessler2014}, the estimation of the Unruh-Hawking effect \cite{aspachs2010, huang2018, liu2019}, magnetic field detection \cite{nair2016,bakmou2019, zhang2014}, applications in thermometry \cite{monras2011, correa2015}, and the optical interferometry used in the detection of gravitational waves by LIGO \cite{abbott2009} and VIRGO \cite{acernese2014}. In addition, the QFIM has also been connected to other aspects of quantum mechanics, namely the description of criticality and quantum phase transitions \cite{zanardi2007, zanardi2008, venuti2007} and the quantification of quantum coherence and quantum entanglement \cite{seveso2019, hauke2016, zhang2013, liu2017}. These various potential applications motivate the development of theoretical techniques to compute the QFIM elements. In this context, we present in this paper an analytical method to obtain the QFIM for bosonic continuous-variable systems described by states of Gaussian type.
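For orientation (the precise definitions are recalled in Sec. \ref{sec3}), the bound in question can be stated compactly. For an unbiased estimator $\hat{\boldsymbol{\theta}}$ of the parameter vector $\boldsymbol{\theta} = \left( \theta_1,...,\theta_p \right)$ built from $M$ independent repetitions of the measurement, the multiparameter QCRB reads
\begin{equation*}
\mathrm{Cov}\left( \hat{\boldsymbol{\theta}} \right) \ge \frac{1}{M} F^{-1}\left( \boldsymbol{\theta} \right), \hspace{1.5cm} F_{ij} = Tr\left[ \hat{\rho}_{\boldsymbol{\theta}} \hspace{0.1cm} \frac{\hat{L}_i \hat{L}_j + \hat{L}_j \hat{L}_i}{2} \right],
\end{equation*}
where the matrix inequality is understood in the sense of positive semi-definiteness and $\hat{L}_i$ denotes the SLD operator defined implicitly by $\partial_{\theta_i} \hat{\rho}_{\boldsymbol{\theta}} = \frac{1}{2}\left( \hat{L}_i \hat{\rho}_{\boldsymbol{\theta}} + \hat{\rho}_{\boldsymbol{\theta}} \hat{L}_i \right)$.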
Recently, Gaussian states of continuous-variable (CV) systems used in quantum information processing \cite{ferraro2005,braunstein2005, andersen2010} have attracted considerable attention in the literature, for two reasons: first, for the simplicity of their analytical description from the theoretical viewpoint, since they are characterized only by their first and second moments; second, for the ease with which they can be generated and manipulated experimentally. Indeed, they have several applications in quantum optics \cite{hammerer2010}, optomechanics \cite{tian2010, nunnenkamp2011} and teleportation channels \cite{kim2002, olivares2003, wolf2007}. In addition, a strong motivation for the Gaussian representation comes from the remarkable experimental observations in Bose-Einstein condensates \cite{kevrekidis2003, gross2011, wade2016}. Given the importance of the representation of Gaussian states and the role of multiparameter quantum estimation theory in improving measurement precision, it would be preferable if these two were successfully integrated into a common framework. The goal of our work goes in this direction. We will provide the analytical expressions of the central quantities in multiparameter quantum estimation theory, namely the right logarithmic derivative (RLD) and its associated QFIM, as well as the symmetric logarithmic derivative (SLD) and its associated QFIM. Indeed, the most efficient and most appropriate way to achieve this goal is to use a phase-space analysis. It must be emphasized that the ideas developed in this work complete some results recently obtained in the literature, such as Ref. \cite{vsafranek2018}, in which the authors derived the quantum Fisher information matrix (QFIM) associated with the symmetric logarithmic derivative (SLD) when the Williamson decomposition of the covariance matrix is known.
In this paper, we shall provide an easy algorithm to derive the analytical formulas of the quantum Fisher information matrices corresponding to the RLD and the SLD simultaneously. We believe that the results presented here can be adapted to quantum estimation issues involving continuous variables based on the phase-space approach, which was initially proposed in Ref. \cite{monras2013}. It is also interesting to mention that the results obtained in our work can be related to those obtained in Ref. \cite{genoni2013}. This paper is structured as follows. The second section reviews some basic tools of quantum Gaussian states that are needed for our purpose. Next, we present in Sec. \ref{sec3} the general framework of multiparameter quantum estimation theory. In Sec. \ref{sec4} we derive the expressions of the RLD and SLD and the corresponding QFIMs. We give in Sec. \ref{sec5} some illustrative examples to exemplify the use of our results. Finally, we end this paper with concluding remarks. Technical proofs are provided in the appendices. \section{Preliminaries on quantum Gaussian states} Our analysis focuses on $N$-mode bosonic CV systems described by the creation and annihilation operators $\hat{a}_k^\dag $, ${\hat{a}_k}$ $\left( {k = 1,2,...,N} \right)$, which satisfy the commutation relations $\left[ {{{\hat a}_j},\hat a_k^\dag } \right] = {\delta _{jk}}$. The Hilbert space of the whole system is the tensor product of infinite-dimensional Fock spaces, $\mathcal{H} = \mathop \otimes \limits_{k = 1}^N {\mathcal{F}_k}$, so that each mode is spanned by the basis of eigenstates of the number operator $\hat a_k^\dag {{\hat a}_k}$. CV systems can also be described by the quadrature operators ${\hat q_k}$, ${\hat p_k}$, which satisfy the commutation relations $\left[ {{{\hat q}_j},{{\hat p}_k}} \right] = 2 i \hspace{0.1cm}{\delta _{jk}}$, with $\hbar = 2$.
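These conventions are straightforward to check numerically. The following sketch (a truncated Fock-space construction in Python; the truncation dimension is an arbitrary choice, not part of the formalism) verifies $\left[ \hat q, \hat p \right] = 2i$ for a single mode away from the truncation boundary:

```python
import numpy as np

def annihilation(dim: int) -> np.ndarray:
    """Annihilation operator on a Fock space truncated to `dim` levels."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 30
a = annihilation(dim)
q = a + a.conj().T            # q = a + a^dag  (hbar = 2 convention)
p = 1j * (a.conj().T - a)     # p = i (a^dag - a)

comm = q @ p - p @ q          # equals 2i [a, a^dag] = 2i * I, except at the
expected = 2j * np.eye(dim)   # last Fock level, which the truncation corrupts
assert np.allclose(comm[:-1, :-1], expected[:-1, :-1])
```

The commutator deviates from $2i$ only in the last Fock level, an artifact of the finite-dimensional truncation rather than of the algebra itself.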
These quadrature operators are expressed in terms of $\hat{a}_k^\dag $, ${\hat{a}_k}$ as \begin{equation} {\hat q_k} = {{{\hat a}_k} + \hat a_k^\dag }, \hspace{1.5cm} {\hat p_k} = i\left( {\hat a_k^\dag - {{\hat a}_k}} \right). \end{equation} The commutation relations between the quadrature operators can be written in a form that is useful for phase-space analysis, namely \begin{equation} \left[ {{{\hat r}_j},{{\hat r}_k}} \right] = 2i \hspace{0.1cm}{\Omega _{jk}}, \end{equation} where $\mathbf{\hat r }= {\left( {{{\hat q}_1},{{\hat p}_1},...,{{\hat q}_N},{{\hat p}_N}} \right)^T}$ is the vector of quadrature operators and ${\Omega _{jk}}$ are the elements of the $2N \times 2N$ matrix $\Omega$, \begin{equation} \Omega = \mathop \oplus \limits_{k = 1}^N \omega, \hspace{1.5cm} \omega = \left[ {\begin{array}{*{20}{c}} 0&1\\ { - 1}&0 \end{array}} \right]. \end{equation} We notice that ${\Omega ^T} = {\Omega ^{ - 1}} = - \Omega $. In quantum mechanics, the density operator ${\hat \rho }$ encodes all the information about the quantum system. For an $N$-mode bosonic CV system, the density operator has an equivalent representation in terms of a quasi-probability distribution defined on phase-space. This representation is characterized by the characteristic function \begin{equation} {\chi _{\hat \rho }}\left( {\bf{r}} \right) = Tr\left[ {{{\hat D}_{ - {\bf{r}}}} \hspace{0.1cm}\hat \rho } \right], \end{equation} where $\mathbf{r} = {\left( {{q_1},{p_1},...,{q_N},{p_N}} \right)^T}$ is a vector of $2N$ real coordinates in phase-space and ${\hat D_{ - {\bf{r}}}}$ is the Weyl operator, given by \begin{equation} {\hat D_{ - {\bf{r}}}} = {e^{ - i{{\bf{r}}^T}\Omega {\bf{\hat r}}}}. \end{equation} Setting $\mathbf{\tilde r} = \Omega \hspace{0.1cm} \mathbf{r}$, the Weyl operator can be written as follows \begin{equation} {{\hat D}_{ - {\bf{r}}}} = {e^{i{{{\bf{\tilde r}}}^T}{\bf{\hat r}}}}.
\label{6} \end{equation} The state ${\hat \rho }$ of an $N$-mode CV system is called a Gaussian state if its characteristic function takes the form \begin{equation} {\chi _{\hat \rho }}\left( \mathbf{r} \right) = \exp\left[ { - \frac{1}{4}{{\mathbf{\tilde r}}^T}\sigma \hspace{0.1cm}\mathbf{\tilde r} + i\hspace{0.1cm}{{\mathbf{\tilde r}}^T}\mathbf{d}} \right].\label{7} \end{equation} The characteristic function of a Gaussian state is completely described by two statistical quantities: the first and second moments. The first moment, called the displacement vector, is given by \begin{equation} \mathbf{d}=\left\langle {\mathbf{\hat r}} \right\rangle=Tr\left[ {\hat \rho \hspace{0.1cm}\mathbf{\hat r}} \right], \label{8} \end{equation} and the second moment is the covariance matrix $\sigma$, whose elements are \begin{equation} {\sigma _{jk}} = \frac{1}{2}Tr\left[ {\hat \rho \left\{ {\Delta {{\hat r}_j},\Delta {{\hat r}_k}} \right\}} \right] = \frac{1}{2}\left\langle {\left\{ {\Delta {{\hat r}_j},\Delta {{\hat r}_k}} \right\}} \right\rangle, \label{9} \end{equation} where $\Delta {{\hat r}_j} = {{\hat r}_j} - \left\langle {\hat r_j} \right\rangle$ and $\left\{ {.,.} \right\}$ denotes the anticommutator. The covariance matrix $\sigma$ is a $2N \times 2N$ real, symmetric, positive-definite matrix satisfying the uncertainty relation \cite{simon1994} \begin{equation} \sigma + i \hspace{0.1cm}\Omega \ge 0. \label{10} \end{equation} We now consider a unitary transformation $\hat U = \exp \left( { - i\hat H} \right)$ (where $\hat H$ is the Hamiltonian of the system) that transforms a state ${{\hat \rho }_{in}}$ into ${{\hat \rho }_{out}}$ as \begin{equation} {{\hat \rho }_{in}} \to {{\hat \rho }_{out}} = \hat U{{\hat \rho }_{in}}{{\hat U}^\dag }.
\end{equation} This transformation is called a Gaussian unitary transformation, or Gaussian unitary channel, when it preserves the Gaussianity of the quantum state, i.e. it maps a Gaussian state into another Gaussian state. In terms of the statistical moments $ \mathbf{d}$ in Eq. (\ref{8}) and $\sigma$ in Eq. (\ref{9}), the action of a Gaussian unitary transformation is characterized by \begin{equation} {\mathbf{d}_{in}} \to {\mathbf{d}_{out}} = S{\mathbf{d}_{in}} + {{\mathbf{r}}}, \hspace{0.5cm}{\sigma _{in}} \to {\sigma _{out}} = S\hspace{0.1cm}{\sigma _{in}}{S^T}, \end{equation} where $\mathbf{r} \in \mathbb{R} {^{2 {N}}}$ and $S$ is a $2N \times 2N$ real symplectic matrix. More details can be found in Refs. \cite{weedbrook2012, braunstein2005}. \section{Quantum multiparameter estimation theory \label{sec3}} Generally, a quantum system is described by a positive semi-definite density operator $\hat \rho$, and all information about this system is encoded in the parameters specifying this density operator, ${\hat \rho \left( {{\theta _\mu }} \right)}$, where ${\left\{ {{\theta _1},...,{\theta _M}} \right\}}$ is the set of parameters to be estimated. The main goal of quantum estimation theory is to determine the best possible accuracy in estimating one or several parameters in a metrological protocol. This optimal precision is given by the quantum Cramér-Rao bound (QCRB).
The two most commonly used QCRBs are based on the RLD and SLD quantum Fisher information matrices \cite{fujiwara1994, genoni2013, gao2014}, where the RLD (right logarithmic derivative) and SLD (symmetric logarithmic derivative) operators are obtained, respectively, from the differential equations (we write ${\partial _{{\theta _\mu }}} = {\raise0.7ex\hbox{$\partial $} \!\mathord{\left/ {\vphantom {\partial {\partial {\theta _\mu }}}}\right.\kern-\nulldelimiterspace} \!\lower0.7ex\hbox{${\partial {\theta _\mu }}$}}$) \begin{equation} {\partial _{{\theta _\mu}}}\hat \rho = \hat \rho \mathcal{\hat L}_{{\theta _\mu}}^R, \label{13} \end{equation} \begin{equation} {\partial _{{\theta _\mu}}}\hat \rho = \frac{1}{2}\left\{ {\hat \rho ,\hat L_{{\theta _\mu}}^S} \right\}. \label{14} \end{equation} The quantum Fisher information matrices associated with the RLD and SLD are defined, respectively, by \begin{equation} {{\cal F}_{{\theta _\mu }{\theta _\nu }}} = Tr\left[ {\hat \rho \hat {\cal L}_{{\theta _\mu }}^R\hat {\cal L}{{_{{\theta _\nu }}^R}^\dag }} \right], \label{15} \end{equation} \begin{equation} {H_{{\theta _\mu }{\theta _\nu }}} = \frac{1}{2}Tr\left[ {\hat \rho \left\{ {\hat L_{{\theta _\mu }}^S,\hat L_{{\theta _\nu }}^S} \right\}} \right].
\label{16} \end{equation} The associated QCRBs are expressed as \begin{equation} \mathbf{Cov\left[ \hat{\theta} \right]} \ge \frac{{{\mathop{\rm Re}\nolimits} \left[ {{{\cal F}^{ - 1}}} \right] + \left| {{\mathop{\rm Im}\nolimits} \left[ {{{\cal F}^{ - 1}}} \right]} \right|}}{\mathcal{N}}, \label{17} \end{equation} \begin{equation} \mathbf{Cov\left[ \hat{\theta} \right]} \ge \frac{{{H^{ - 1}}}}{\mathcal{N}}, \label{18} \end{equation} where $\mathbf{Cov\left[ {\hat \theta } \right]}$ is the covariance matrix with elements $\mathbf{Cov\left[ {{\theta _\mu },{\theta _\nu }} \right]} = E\left( {{\theta _\mu }{\theta _\nu }} \right) - E\left( {{\theta _\mu }} \right)E\left( {{\theta _\nu }} \right)$, the symbol $\left| \bullet \right|$ denotes the absolute value and $\mathcal{N}$ is the number of measurements performed. In particular, an individual (parameter-by-parameter) estimation strategy corresponds to ${\mathcal{F}_{{\theta _\mu }{\theta _\nu }}} = {H_{{\theta _\mu }{\theta _\nu }}} = 0$ for $\mu \ne \nu $. In this case, the precision of each parameter is quantified by its variance, and Eqs. (\ref{17}) and (\ref{18}) reduce to \begin{equation} {\mathop{\rm var}} \left[ {{\theta _\mu }} \right] \ge \frac{{{\rm{Re}}\left[ {F_{{\theta _\mu }{\theta _\mu }}^{ - 1}} \right] + \left| {{\rm{Im}}\left[ {F_{{\theta _\mu }{\theta _\mu }}^{ - 1}} \right]} \right|}}{\mathcal{N}}, \label{19} \end{equation} \begin{equation} {\mathop{\rm var}} \left[ {{\theta _\mu }} \right] \ge \frac{{H_{{\theta _\mu }{\theta _\mu }}^{ - 1}}}{\mathcal{N}}.\label{20} \end{equation} In the single-parameter case the SLD bound (\ref{20}) can always be saturated; the corresponding optimal measurement is the projective measurement onto the eigenbasis of the SLD.
Taking the trace of the two inequalities (\ref{17}) and (\ref{18}) yields bounds on the sum of the variances of the estimated parameters, \begin{equation} \sum\limits_{\mu = 1}^M {\rm var\left( {{\theta _\mu }} \right)} \ge B_R = \frac{{Tr\left[ {{\mathop{\rm Re}\nolimits} \left[ {{{\cal F}^{ - 1}}} \right]} \right] + Tr\left[ {\left| {{\mathop{\rm Im}\nolimits} \left[ {{{\cal F}^{ - 1}}} \right]} \right|} \right]}}{\mathcal{N}}, \label{21} \end{equation} \begin{equation} \sum\limits_{\mu = 1}^M {\rm var\left( {{\theta _\mu }} \right)} \ge B_S = \frac{{Tr\left[ {{H^{ - 1}}} \right]}}{\mathcal{N}}. \label{22} \end{equation} Difficulties remain, however, in scenarios of simultaneous estimation of several parameters. In this case, the bounds associated with the RLD and SLD cannot in general be saturated because of incompatibilities between the optimal measurements of the different parameters: optimizing the measurement for one parameter can degrade the accuracy achievable for the others. This is a consequence of the noncommutativity of quantum mechanics. Moreover, the optimal measurement for the RLD does not always correspond to a positive operator-valued measure (POVM). It is therefore natural to look for the conditions that must be satisfied in a multiparameter scenario to saturate these inequalities and achieve an optimal measurement. In this context, it is interesting to note that several works on multiparameter quantum estimation theory \cite{ragy2016, matsumoto2002, vaneph2013, vidrighin2014, crowley2014} were devoted to the SLD, and most of them showed that the QCRB associated with the SLD, Eqs. (\ref{18}) and (\ref{22}), can be saturated if and only if \begin{equation} Tr\left[ {\hat \rho \left[ {\hat L_{{\theta _\mu }}^S,\hat L_{{\theta _\nu }}^S} \right]} \right] = 0 \label{23}.
\end{equation} It is simple to see that condition (\ref{23}) can be equivalently written as \begin{equation} {\rm{Im}}\left( {Tr\left[ {\hat \rho \hat L_{{\theta _\mu }}^S\hat L_{{\theta _\nu }}^S} \right]} \right) = 0. \label{24} \end{equation} It is natural to ask what the link is between the bound $B_R$ associated with the RLD and the bound $B_S$ associated with the SLD, and which of the two is more informative. Answers to these questions were reported in Refs. \cite{genoni2013, gao2014} by introducing the so-called most informative QCRB ($B_{MI}$), defined by \begin{equation} {B_{MI}} = \max\left\{ {{B_R},{B_S}} \right\}. \label{25} \end{equation} Consequently, determining the most informative QCRB amounts to comparing the QCRB associated with the RLD with that associated with the SLD. For this reason, we introduce the ratio between the two QCRBs, \begin{equation} \mathcal{R} = \frac{{{B_S}}}{{{B_R}}}. \end{equation} If $\mathcal{R} < 1$, then $B_{MI}$ corresponds to $B_R$; if $\mathcal{R} > 1$, then $B_{MI}$ corresponds to $B_S$. In the situation where $\mathcal{R} = 1$, we will see that $B_{MI} = B_R = B_S$. \\ Finally, the attainable precision in multiparameter protocols can be summarized by the single inequality \begin{equation} \sum\limits_{\mu = 1}^M {{\mathop{\rm var}} \left[ {{\theta _\mu }} \right] \ge {B_{MI}}}. \end{equation} \section{Evaluation of RLD and SLD quantum Fisher information matrices in quantum Gaussian states \label{sec4}} In this section, we derive explicit formulas for the RLD and SLD operators of quantum Gaussian states. Using these expressions, we then determine the analytic expressions of the quantum Fisher information matrices associated with the RLD and SLD. To simplify the notation, we adopt in what follows the Einstein convention of summation over repeated indices.
\subsection{Evaluation of the RLD quantum Fisher information matrix} To determine the elements of the RLD quantum Fisher information matrix defined by Eq. (\ref{15}), it is necessary to first obtain the expression of the right logarithmic derivative (RLD) $\mathcal{\hat L_{{\theta _\mu }}}^R$ defined by Eq. (\ref{13}). For an $N$-mode Gaussian state, we make the ansatz that the RLD is at most quadratic in the canonical operators: \begin{equation} \mathcal{\hat L_{{\theta _\mu }}}^R = {{\cal L}^R}^{\left( 0 \right)} + {\cal L}_l^{R\left( 1 \right)}{\hat r_l} + {\cal L}_{jk}^{R\left( 2 \right)}{\hat r_j}{\hat r_k}, \label{27} \end{equation} where $\mathbf{\hat r }= {\left( {{{\hat q}_1},{{\hat p}_1},...,{{\hat q}_N},{{\hat p}_N}} \right)^T}$ is the vector of canonical operators, ${\mathcal{L}^R}^{\left( 0 \right)} \in \mathbb{C}$, ${\mathbf{\mathcal{L}}^R}^{\left( 1 \right)} $ is a vector in $ \mathbb{C}^{2N}$ and ${\mathcal{{ {L}}}^R}^{\left( 2 \right)}$ is a $2N \times 2N$ complex matrix. For a given set of parameters $\theta _\mu$, we prove in Appendix \ref{app:A} that the quantities $\hat { \mathcal{L}}_{{\theta _\mu }}^{R\left( 0 \right)}$, \hspace{0.1cm} $\hat { \mathcal{L}}_{{\theta _\mu }}^{R\left( 1 \right)}$ and $\hat { \mathcal{L}}_{{\theta _\mu }}^{R\left( 2 \right)}$ in Eq.
(\ref{27}) can be written, respectively, as \begin{equation} {\cal L}_{{\theta _\mu }}^{R\left( 0 \right)} = - \frac{1}{2}Tr\left[ {{\Gamma _ + }\hat {\mathcal{L}}_{{\theta _\mu }}^{R\left( 2 \right)}} \right] - {{\bf{d}}^T}\hat {\mathcal L}_{{\theta _\mu }}^{R\left( 1 \right)} - {{\bf{d}}^T}\hat {\mathcal{L}}_{{\theta _\mu }}^{R\left( 2 \right)}{\bf{d}}, \end{equation} \begin{equation} \hat{ \mathcal{L}}_{{\theta _\mu }}^{R\left( 1 \right)} = 2 \hspace{0.1cm}\Gamma _ + ^{ - 1}\hspace{0.1cm}{\partial _{{\theta _\mu }}}{\bf{d}} - 2\hspace{0.1cm}\hat{ \mathcal{L}}_{{\theta _\mu }}^{R\left( 2 \right)}{\bf{d}}, \end{equation} \begin{equation} \mathtt{vec}\left[ \hat { \mathcal{L}}_{{\theta _\mu }}^{R\left( 2 \right)} \right] = {\left( {\Gamma ^\dag \otimes {\Gamma }} \right)^{ +}}\mathtt{vec}\left[ {{\partial _{{\theta _\mu }}}\sigma } \right], \end{equation} where ${\Gamma} = \sigma + i\hspace{0.1cm}\Omega $ and $\mathtt{vec}\left[A\right]$ denotes the vectorization of a matrix, defined for any $p \times p$ real or complex matrix $A$ by $\mathtt{vec}\left[ A \right] = {\left( {{a_{11}},...,{a_{p1}},{a_{12}},...,{a_{p2}},...,{a_{1p}},...,{a_{pp}}} \right)^T}$. Inserting the expression of the right logarithmic derivative (RLD) into Eq.
(\ref{15}) (the calculation details are given in Appendix \ref{App:B}), we find the RLD quantum Fisher information matrix \begin{equation} {{\mathcal{F}}_{{\theta _\mu }{\theta _\nu }}} = \frac{1}{2}\mathtt{vec}{\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]^\dag }{\Sigma ^ + }\mathtt{vec}\left[ {{\partial _{{\theta _\nu }}}\sigma } \right] + 2{\partial _{{\theta _\mu }}}{\mathbf{d}^T}\hspace{0.1cm}\Gamma ^{+}\hspace{0.1cm}{\partial _{{\theta _\nu }}}\mathbf{d}, \label{31} \end{equation} where ${\Sigma ^ + } = \left( {{\Gamma ^\dag } \otimes \Gamma } \right)^+$ and the superscript ``$+$'' denotes the Moore-Penrose pseudoinverse, a generalization of the matrix inverse \cite{penrose1955, ben2003} which can be computed using the Tikhonov regularization \cite{golub1996}: ${A^ + } = \mathop {\lim }\limits_{\delta \searrow 0} \left( {{A^\dag }{{\left( {A{A^\dag } + \delta I} \right)}^{ - 1}}} \right) = \mathop {\lim }\limits_{\delta \searrow 0} \left( {{{\left( {{A^\dag }A + \delta I} \right)}^{ - 1}}{A^\dag }} \right)$. These limits exist even if $A^{-1}$ does not. We note that if $\Gamma$ is invertible (non-singular), the RLD quantum Fisher information matrix can be expressed as \begin{equation} {{\mathcal{F}}_{{\theta _\mu }{\theta _\nu }}} = \frac{1}{2}\mathtt{vec}{\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]^\dag }{\Sigma ^{ - 1}}\mathtt{vec}\left[ {{\partial _{{\theta _\nu }}}\sigma } \right] + 2{\partial _{{\theta _\mu }}}{\mathbf{d}^T}\hspace{0.1cm}\Gamma ^{-1}\hspace{0.1cm}{\partial _{{\theta _\nu }}}\mathbf{d}. \label{32} \end{equation} In this case, the Moore-Penrose pseudoinverse of \hspace{0.1cm}$\Gamma$ coincides with its inverse. \subsection{Evaluation of the SLD quantum Fisher information matrix} Similarly, the expression of the SLD quantum Fisher information matrix requires the explicit formula of the symmetric logarithmic derivative (SLD) $\hat L_{{\theta _\mu }}^S$ defined by Eq. (\ref{14}).
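Eq. (\ref{31}) is straightforward to transcribe numerically, which provides a useful sanity check. The following sketch is our own illustration (the helper name, the column-stacking $\mathtt{vec}$ convention via NumPy's Fortran-order flatten, and the parameter values are assumptions, not taken from the paper); it evaluates the RLD QFIM for the estimation of a displacement $(q_0,p_0)$ on a squeezed thermal state, where $\partial_{\theta_\mu}\sigma = 0$ and only the first-moment term contributes:

```python
import numpy as np

def rld_qfim(d_derivs, sigma_derivs, sigma, Omega):
    """Numerical sketch of Eq. (31): F_{mu nu} = (1/2) vec[d_mu sigma]^dag
    Sigma^+ vec[d_nu sigma] + 2 (d_mu d)^T Gamma^+ (d_nu d),
    with Gamma = sigma + i Omega and Sigma = Gamma^dag (x) Gamma."""
    Gamma = sigma + 1j * Omega
    SigmaP = np.linalg.pinv(np.kron(Gamma.conj().T, Gamma))
    GammaP = np.linalg.pinv(Gamma)
    M = len(d_derivs)
    F = np.zeros((M, M), dtype=complex)
    for m in range(M):
        for n in range(M):
            vm = sigma_derivs[m].flatten(order='F')  # column-stacking vec
            vn = sigma_derivs[n].flatten(order='F')
            F[m, n] = 0.5 * vm.conj() @ SigmaP @ vn \
                      + 2 * d_derivs[m] @ GammaP @ d_derivs[n]
    return F

# Illustrative check (our choice of numbers): displacement estimation on a
# squeezed thermal state; sigma is parameter independent, d = (q0, p0).
r, nbar = 0.4, 1.0
a = 2 * nbar + 1
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
sigma = a * np.diag([np.exp(-2 * r), np.exp(2 * r)])
zero = np.zeros((2, 2))
F = rld_qfim([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [zero, zero],
             sigma, Omega)
# closed form for this state: F_11 = 2 a e^{2r}/(a^2 - 1), F_12 = -2i/(a^2 - 1)
assert np.isclose(F[0, 0], 2 * a * np.exp(2 * r) / (a**2 - 1))
assert np.isclose(F[0, 1], -2j / (a**2 - 1))
```

Since $\Gamma$ is invertible here, the pseudoinverse coincides with the ordinary inverse, consistent with Eq. (\ref{32}).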
To this end, we also write the SLD as a quadratic form in the canonical operators: \begin{equation} \hat L_{{\theta _\mu }}^S = {L^S}^{\left( 0 \right)} + L_l^{S\left( 1 \right)}{\hat r_l} + L_{jk}^{S\left( 2 \right)}{\hat r_j}{\hat r_k}, \label{33} \end{equation}\\ with $L_{{\theta _\mu }}^{S\left( 0 \right)} \in \mathbb{R} $, $ \hat L_{{\theta _\mu }}^{S\left( 1 \right)} \in {\mathbb{R}^{2N}}$ and $\hat L_{{\theta _\mu }}^{S\left( 2 \right)}$ a $2N \times 2N$ real symmetric matrix. These quantities are given, respectively, by the following expressions (more details are given in Appendix \ref{App:C}) \begin{equation} L_{{\theta _\mu }}^{S\left( 0 \right)} = - \frac{1}{2}Tr\left[ {\sigma \hat L_{{\theta _\mu }}^{S\left( 2 \right)}} \right] - {{\bf{d}}^T}\hat L_{{\theta _\mu }}^{S\left( 1 \right)} - {{\bf{d}}^T} \hat L_{{\theta _\mu }}^{S\left( 2 \right)}{\bf{d}},\label{35} \end{equation} \begin{equation} \hat L_{{\theta _\mu }}^{S\left( 1 \right)} = 2 \hspace{0.1cm}{\sigma ^{ - 1}}{\partial _{{\theta _\mu }}}{\bf{d}} - 2 \hspace{0.1cm}\hat L_{{\theta _\mu }}^{S\left( 2 \right)}{\bf{d}},\label{361} \end{equation} \begin{equation} \mathtt{vec}\left[ {\hat L_{{\theta _\mu }}^{S\left( 2 \right)}} \right] = {\left( {{\sigma ^\dag} \otimes \sigma + \Omega \otimes \Omega } \right)^{+}}\mathtt{vec}\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]. \label{371} \end{equation} Thus, inserting the expression of the SLD into Eq. (\ref{16}), one gets the elements of the SLD quantum Fisher information matrix (see Appendix \ref{App:D}).
\begin{equation} {H_{{\theta _\mu }{\theta _\nu }}} = \frac{1}{2}\mathtt{vec}{\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]^\dag }{{\mathcal M}^{+}}\mathtt{vec}\left[ {{\partial _{{\theta _\nu }}}\sigma } \right] + 2{\partial _{{\theta _\mu }}}{{\bf{d}}^T} \hspace{0.1cm} \sigma ^{-1} \hspace{0.1cm}{\partial _{{\theta _\nu }}}{\bf{d}},\label{37} \end{equation} where $\mathcal{M} = \left( {{\sigma ^\dag} \otimes \sigma + \Omega \otimes \Omega } \right)$. In the case where $\mathcal{M}$ is invertible, the SLD quantum Fisher information matrix can be calculated as \begin{equation} {H_{{\theta _\mu }{\theta _\nu }}} = \frac{1}{2}\mathtt{vec}{\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]^\dag }{{\mathcal M}^{-1}}\mathtt{vec}\left[ {{\partial _{{\theta _\nu }}}\sigma } \right] + 2{\partial _{{\theta _\mu }}}{{\bf{d}}^T} \hspace{0.1cm} \sigma ^{-1} \hspace{0.1cm}{\partial _{{\theta _\nu }}}{\bf{d}}.\label{38} \end{equation} According to Ref. \cite{nichols2018}, the saturation condition of the quantum Cramér-Rao bound (\ref{24}) is expressed in the phase-space as \begin{align}\label{39} {\rm{Im}}\left( {Tr\left[ {\hat \rho \hat L_{{\theta _\mu }}^S\hat L_{{\theta _\nu }}^S} \right]} \right) = 2\hspace{0.1cm}Tr\left[ {\sigma \hat L_{{\theta _\mu }}^{S\left( 2 \right)}\Omega \hat L_{{\theta _\nu }}^{S\left( 2 \right)}} \right] +\\ \notag 2\hspace{0.1cm}{\partial _{{\theta _\mu }}}{\mathbf{d}^T}{\sigma ^{ - 1}}\Omega {\sigma ^{ - 1}}{\partial _{{\theta _\nu }}}\mathbf{d}. \end{align} These results can be rewritten in a compact form using the notations introduced here above. 
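Eq. (\ref{38}) admits the same kind of numerical sanity check. The sketch below is our own illustration (helper name, vec convention and parameter values are assumptions); it evaluates the SLD QFIM for displacement estimation on a squeezed thermal state, where the covariance matrix is parameter independent:

```python
import numpy as np

def sld_qfim(d_derivs, sigma_derivs, sigma, Omega):
    """Numerical sketch of Eq. (38): H_{mu nu} = (1/2) vec[d_mu sigma]^dag
    M^+ vec[d_nu sigma] + 2 (d_mu d)^T sigma^{-1} (d_nu d),
    with M = sigma^dag (x) sigma + Omega (x) Omega."""
    Mmat = np.kron(sigma.conj().T, sigma) + np.kron(Omega, Omega)
    MP = np.linalg.pinv(Mmat)
    sinv = np.linalg.inv(sigma)
    k = len(d_derivs)
    H = np.zeros((k, k))
    for m in range(k):
        for n in range(k):
            vm = sigma_derivs[m].flatten(order='F')  # column-stacking vec
            vn = sigma_derivs[n].flatten(order='F')
            H[m, n] = np.real(0.5 * vm @ MP @ vn
                              + 2 * d_derivs[m] @ sinv @ d_derivs[n])
    return H

# Illustrative check (our numbers): displacement estimation on a squeezed
# thermal state, sigma independent of (q0, p0).
r, nbar = 0.4, 1.0
a = 2 * nbar + 1
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
sigma = a * np.diag([np.exp(-2 * r), np.exp(2 * r)])
zero = np.zeros((2, 2))
H = sld_qfim([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [zero, zero],
             sigma, Omega)
assert np.isclose(H[0, 0], 2 * np.exp(2 * r) / a)   # = 2 e^{2r}/(2 nbar + 1)
assert np.isclose(H[0, 1], 0.0)
```

Only the first-moment term survives here, so $H = 2\,\sigma^{-1}$ for this state, which is real and diagonal as expected for an SLD QFIM.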
We thus consider the following relations: \begin{equation} Tr\left[ {{A^\dag }B} \right] = \mathtt{vec}{\left[ A \right]^\dag }\mathtt{vec}\left[ B \right], \label{40} \end{equation} \begin{equation} \mathtt{vec}\left[ {AB} \right] = \left( {I \otimes A} \right)\mathtt{vec}\left[ B \right] = \left( {{B^T } \otimes I} \right)\mathtt{vec}\left[ A \right], \label{41} \end{equation} \begin{equation} \left( {A \otimes B} \right)\left( {C \otimes D} \right) = AC \otimes BD. \label{42} \end{equation} From Eqs. (\ref{40}), (\ref{41}) and (\ref{42}), one has \begin{align} Tr\left[ {{{\left( {AD} \right)}^\dag }BC} \right] &= \mathtt{vec}{\left[ {AD} \right]^\dag }\mathtt{vec}\left[ {BC} \right]\\ \notag& = \mathtt{vec}{\left[ A \right]^\dag }\left( {\bar D \otimes I} \right)\left( {I \otimes B} \right)\mathtt{vec}\left[ C \right], \end{align} \begin{equation} Tr\left[ {{A^\dag }BC{D^\dag }} \right] = \mathtt{vec}{\left[ A \right]^\dag }\left( {\bar D \otimes B} \right)\mathtt{vec}\left[ C \right], \end{equation} where $\bar D$ denotes the complex conjugate of $D$ (for the real matrices used below, $\bar D = D$). Using the last equation, we find that the first term of Eq. (\ref{39}) can be written as {\small \begin{align*} {\small Tr\left[ {L_{{\theta _\mu }}^{S\left( 2 \right)}\Omega L_{{\theta _\nu }}^{S\left( 2 \right)}\sigma } \right]} &= \mathtt{vec}{\left[ {L_{{\theta _\mu }}^{S\left( 2 \right)}} \right]^\dag }\left( {\sigma \otimes \Omega } \right)\mathtt{vec}\left[ {L_{{\theta _\nu }}^{S\left( 2 \right)}} \right]\\ \notag& = \mathtt{vec}{\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]^\dag }{{\cal M}^{+}}\left( {\sigma \otimes \Omega } \right){{\cal M}^{+}}\mathtt{vec}\left[ {{\partial _{{\theta _\nu }}}\sigma } \right]. \end{align*}} The last equality follows from Eq. (\ref{371}).
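These vectorization and Kronecker-product identities can be spot-checked numerically. The sketch below is our own (random real test matrices, for which transpose and conjugate transpose coincide, matching the real matrices $\sigma$ and $\Omega$ used in the derivation; column-stacking $\mathtt{vec}$):

```python
import numpy as np

# Numerical spot-check of the trace/vec/Kronecker identities, using
# column-stacking vec and real matrices (our own random test data).
rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

vec = lambda M: M.flatten(order='F')   # column-stacking vectorization
I = np.eye(3)

assert np.isclose(np.trace(A.T @ B), vec(A) @ vec(B))          # Tr[A^T B]
assert np.allclose(vec(A @ B), np.kron(I, A) @ vec(B))         # vec[AB], 1st form
assert np.allclose(vec(A @ B), np.kron(B.T, I) @ vec(A))       # vec[AB], 2nd form
assert np.allclose(np.kron(A, B) @ np.kron(C, D),
                   np.kron(A @ C, B @ D))                      # mixed-product rule
assert np.isclose(np.trace(A.T @ B @ C @ D.T),
                  vec(A) @ np.kron(D, B) @ vec(C))             # derived identity
```

All five assertions pass with exact linear algebra up to floating-point rounding, confirming the chain of manipulations used above.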
Finally, the expression of the saturation condition of the quantum Cramér-Rao bound in terms of $\mathbf{d}$, $\sigma$, and their derivatives with respect to the estimated parameters can be written as {\small \begin{align} \notag {\rm{Im}}\left( {Tr\left[ {\hat \rho \hat L_{{\theta _\mu }}^S\hat L_{{\theta _\nu }}^S} \right]} \right) &= 2 \mathtt{vec}{\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]^\dag }{{\cal M}^{+}}\left( {\sigma \otimes \Omega } \right){{\cal M}^{+}}\mathtt{vec}\left[ {{\partial _{{\theta _\nu }}}\sigma } \right] \\ & +2{\partial _{{\theta _\mu }}}{{\bf{d}}^T} \hspace{0.1cm}{\sigma ^{-1}}\Omega \hspace{0.1cm} {\sigma ^{-1}}{\partial _{{\theta _\nu }}}{\bf{d}}. \label{45} \end{align}} When the matrix $\mathcal{M}$ is invertible, $\mathcal{M}^+$ can be replaced by $\mathcal{M}^{-1}$ in the last equation.\\ Eqs. (\ref{35}), (\ref{361}), (\ref{371}), (\ref{38}) and (\ref{45}) are identical to Eqs. (9), (8) and (11) of Ref. \cite{vsafranek2018}, which were obtained there by a different method. \section{Application \label{sec5}} In this section, we treat some protocols of multiparameter quantum Gaussian metrology. The first example can be considered as an illustration of the validity and usefulness of our results, checked against the results reported in Ref. \cite{genoni2013}. The second example concerns the joint estimation of the squeezing parameter $r$ and the phase rotation $\varphi$ when a thermal state or a coherent state is taken as the input state evolving under a Gaussian channel (squeezing and rotation channel). \subsection{Estimation of the two parameters of a displacement operator} We first consider the estimation of the two parameters $q_0$ and $p_0$ of the displacement operator $\hat D\left( {{q_0},{p_0}} \right) = \exp \left( {i{p_0}\hat q - i{q_0}\hat p} \right)$, with a measurement on the displaced state ${\rho _{out}} = \hat D\left( {{q_0},{p_0}} \right){\rho _{in}}{\hat D^\dag }\left( {{q_0},{p_0}} \right)$.
We take as input the single-mode Gaussian state ${\rho _{in}} = \hat S\left( r \right){\rho _{th}}\hat S{\left( r \right)^\dag }$, where $\hat S\left( r \right) = \exp \left( {\frac{r}{2}\left( {{{\hat a}^2} - {{\hat a}^{\dag 2}}} \right)} \right)$ denotes the single-mode squeezing operator and $\rho_{th}$ is the thermal state given by \begin{equation} {\rho _{th}} = \sum\limits_{n = 0}^{ + \infty } {\frac{{{{\bar n}^n}}}{{{{\left( {\bar n + 1} \right)}^{n + 1}}}}\left| n \right\rangle \left\langle n \right|}, \label{46} \end{equation} where $\bar n = \left\langle {{a^\dag }a} \right\rangle $ is the mean photon number. The first and second moments of the output state are given by \begin{equation} {\mathbf{d}_{out}} = \left[ {\begin{array}{*{20}{c}} {{q_0}}\\ {{p_0}} \end{array}} \right], \hspace{1cm} {\sigma _{out}} = \left( {2\bar n + 1} \right)\left[ {\begin{array}{*{20}{c}} {{e^{ - 2r}}}&0\\ 0&{{e^{2r}}} \end{array}} \right]. \end{equation} The RLD quantum Fisher information matrix is calculated from Eq. (\ref{32}). It has the form \begin{equation} \mathcal{F} = \left[ {\begin{array}{*{20}{c}} {\frac{{2\left( {2\bar n + 1} \right){e^{2r}}}}{{{{\left( {2\bar n + 1} \right)}^2} - 1}}}&{\frac{{ - 2i}}{{{{\left( {2\bar n + 1} \right)}^2} - 1}}}\\ {\frac{{2i}}{{{{\left( {2\bar n + 1} \right)}^2} - 1}}}&{\frac{{2\left( {2\bar n + 1} \right){e^{ - 2r}}}}{{{{\left( {2\bar n + 1} \right)}^2} - 1}}} \end{array}} \right]. \end{equation} Similarly, the SLD quantum Fisher information matrix is calculated from Eq. (\ref{38}). It is given by \begin{equation} H = \left[ {\begin{array}{*{20}{c}} {\frac{{2\ {e^{2r}}}}{{\left( {2\bar n + 1} \right)}}}&0\\ 0&{\frac{{2{e^{ - 2r}}}}{{\left( {2\bar n + 1} \right)}}} \end{array}} \right]. \end{equation} The two bounds $B_R$ and $B_S$ can be evaluated from Eqs.
(\ref{21}) and (\ref{22}) as \begin{equation} {B_R} = \left( {2\bar n + 1} \right)\cosh \left( {2r} \right) + 1, \hspace{0.6cm} {B_S} = \left( {2\bar n + 1} \right)\cosh \left( {2r} \right). \end{equation} Obviously, the most informative quantum Cramér-Rao bound $B_{MI}$ (\ref{25}) is in this case given by $B_R$: \begin{equation} {B_{MI}} = \left( {2\bar n + 1} \right)\cosh \left( {2r} \right) + 1. \end{equation} This result coincides with the quantum Cramér-Rao bound obtained in Ref. \cite{genoni2013}, which confirms the validity of the formalism developed in this paper for protocols of multiparameter quantum metrology involving Gaussian states. \subsection{Estimation of the two parameters $r$ and $\varphi $ contained in squeezing and rotation operators} The second illustration concerns the joint estimation of two parameters, the squeezing parameter $r$ and the phase rotation $\varphi$, when we take the thermal state (\ref{46}) as the initial probe (input) state. We assume that this state evolves through squeezing and rotation channels, which transform the input state into \begin{equation} {\rho _{out}} =\hat R\left( \varphi \right) \hat S\left( r \right){\rho _{th}} \hat S{\left( r \right)^\dag } \hat R{\left( \varphi \right)^\dag }.
\end{equation} The symplectic transformations corresponding to this channel are given by \begin{equation} \hat R\left( \varphi \right) = \left[ {\begin{array}{*{20}{c}} {\cos \varphi }&{\sin \varphi }\\ { - \sin \varphi }&{\cos \varphi } \end{array}} \right], \hspace{0.5cm}\hat S\left( r \right) = \left[ {\begin{array}{*{20}{c}} {{e^{ - r}}}&0\\ 0&{{e^r}} \end{array}} \right], \label{53} \end{equation} which leads to the following moments for the output state \begin{equation} {\mathbf{d}_{out}} = \hat R\left( \varphi \right)\hat S\left( r \right){\mathbf{d}_{in}}, \hspace{0.3cm}{\sigma _{out}} = \hat R\left( \varphi \right)\hat S\left( r \right){\sigma _{in}}\hat S{\left( r \right)^\dag }\hat R{\left( \varphi \right)^\dag }, \end{equation} where $\mathbf{d}_{in}$ and $\sigma _{in}$ are the first and second moments of $\rho_{th}$, given by \begin{equation} {{\bf{d}}_{in}} = \left[ {\begin{array}{*{20}{c}} 0\\ 0 \end{array}} \right], \hspace{1.5cm} {\sigma _{in}} = \left( {2\bar n + 1} \right)\mathbb{1}. \end{equation} Now, to compute the RLD quantum Fisher information matrix (\ref{15}), one needs the following expressions \begin{equation} \mathtt{vec}\left[ {{\partial _\varphi }{\sigma _{out}}} \right] =2\left( {2\bar n + 1} \right)\sinh 2r\left[ {\begin{array}{*{20}{c}} {\sin 2\varphi }\\ {\cos 2\varphi }\\ {\cos 2\varphi }\\ { - \sin 2\varphi } \end{array}} \right], \label{56} \end{equation} \begin{equation} \mathtt{vec}\left[ {{\partial _r}{\sigma _{out}}} \right] = 2\left( {2\bar n + 1} \right)\left[ {\begin{array}{*{20}{c}} {{{\sin }^2}\varphi {e^{2r}} - {{\cos }^2}\varphi {e^{ - 2r}}}\\ {\sin 2\varphi \cosh 2r}\\ {\sin 2\varphi \cosh 2r}\\ {{{\cos }^2}\varphi {e^{2r}} - {{\sin }^2}\varphi {e^{ - 2r}}} \end{array}} \right]. \label{57} \end{equation} It is easy to verify that $\Gamma $ is invertible, so that ${\Gamma ^ + } = {\Gamma ^{ - 1}}$.
Thus one gets \begin{widetext} \begin{equation} {\Gamma ^{ - 1}} =\left[ {\begin{array}{*{20}{c}} {\frac{{\left( {1 + 2\bar n} \right)\left( {2\left( {\cosh \left[ {2r} \right] + \cos \left[ {2\varphi } \right]\sinh \left[ {2r} \right]} \right)} \right)}}{{8\bar n\left( {1 + \bar n} \right)}}}&{ - \frac{{{\rm{i}} + \left( {1 + 2\bar n} \right)\sin \left[ {2\varphi } \right]\sinh \left[ {2r} \right]}}{{4\bar n\left( {1 + \bar n} \right)}}}\\ {\frac{{{\rm{i}} - \left( {1 + 2\bar n} \right)\sin \left[ {2\varphi } \right]\sinh \left[ {2r} \right]}}{{4\bar n\left( {1 + \bar n} \right)}}}&{\frac{{{{\rm{e}}^{ - 2r}}\left( {1 + 2\bar n} \right)\left( {1 + \cos \left[ {2\varphi } \right] + 2{{\rm{e}}^{4r}}\sin {{\left[ \varphi \right]}^2}} \right)}}{{8\bar n\left( {1 + \bar n} \right)}}} \end{array}} \right]. \label{58} \end{equation} \end{widetext} The RLD quantum Fisher information matrix can be calculated from Eq. (\ref{32}) as {\small \begin{equation} \mathcal{F} = \frac{1}{2}\left[ {\begin{array}{*{20}{c}} {\mathtt{vec}{{\left[ {{\partial _r}\sigma } \right]}^\dag }{\Sigma ^{ - 1}}\mathtt{vec}\left[ {{\partial _r}\sigma } \right]}&{\mathtt{vec}{{\left[ {{\partial _r}\sigma } \right]}^\dag }{\Sigma ^{ - 1}}\mathtt{vec}\left[ {{\partial _\varphi }\sigma } \right]}\\ {\mathtt{vec}{{\left[ {{\partial _\varphi }\sigma } \right]}^\dag }{\Sigma ^{ - 1}}\mathtt{vec}\left[ {{\partial _r}\sigma } \right]}&{\mathtt{vec}{{\left[ {{\partial _\varphi }\sigma } \right]}^\dag }{\Sigma ^{ - 1}}\mathtt{vec}\left[ {{\partial _\varphi }\sigma } \right]} \end{array}} \right].
\end{equation}} Using the identity ${\left( {A \otimes B} \right)^{ - 1}} = {A^{ - 1}} \otimes {B^{ - 1}}$, we obtain the RLD quantum Fisher information matrix as {\small \begin{equation} \mathcal{F} =\left[ {\begin{array}{*{20}{c}} {\frac{{{{\left( {1 + 2 \bar n} \right)}^2}\left( {1 + 2 \bar n\left( {1 +\bar n} \right)} \right)}}{{2{\bar n^2}{{\left( {1 +\bar n} \right)}^2}}}}&{\frac{{{\rm{i}}{{\left( {1 + 2 \bar n} \right)}^3}\sinh \left[ {2r} \right]}}{{2{ \bar n^2}{{\left( {1 + \bar n} \right)}^2}}}}\\ { - \frac{{{\rm{i}}{{\left( {1 + 2 \bar n} \right)}^3}\sinh \left[ {2r} \right]}}{{2{\bar n^2}{{\left( {1 +\bar n} \right)}^2}}}}&{\frac{{\left( {1 + 2 \bar n\left( {1 +\bar n} \right)} \right)\sinh {{\left[ {2r} \right]}^2}}}{{2{\bar n^2}{{\left( {1 +\bar n} \right)}^2}{{\left( {1 + 2 \bar n} \right)}^{ - 2}}}}} \end{array}} \right]. \end{equation}} Similarly, it is easy to verify that $\det \mathcal{M} \ne 0$ (i.e. $\mathcal{M}$ is invertible), and the SLD quantum Fisher information matrix (\ref{38}) is given by \begin{equation} H = \left[ {\begin{array}{*{20}{c}} {\frac{{4 \bar n\left( {\bar n + 1} \right) + 1}}{{\bar n\left( {\bar n + 1} \right)}}}&0\\ 0&{\frac{{{{\left( {1 + 2\bar n} \right)}^2}\sinh {{\left[ {2r} \right]}^2}}}{{\bar n\left( {1 + \bar n} \right)}}} \end{array}} \right]. \label{61} \end{equation} The two quantum Cramér-Rao bounds can be simply evaluated from Eqs. (\ref{21}) and (\ref{22}). They are given by \begin{equation} {B_R} = \frac{{\left( {1 + 2\bar n\left( {1 + \bar n} \right)} \right)\coth {{\left[ {2r} \right]}^2}\sinh \left[ {2r} \right] + 2\left( {1 + 2 \bar n} \right)}}{{2{{\left( {1 + 2 \bar n} \right)}^2}\sinh \left[ {2r} \right]}}, \label{62} \end{equation} \begin{equation} {B_S} = \frac{{\bar n\left( {1 + \bar n} \right)\coth {{\left[ {2r} \right]}^2}}}{{{{\left( {1 + 2\bar n} \right)}^2}}}.
\label{63} \end{equation} The expressions (\ref{62}) and (\ref{63}) show that the precision of the estimated parameters does not depend on the value of the unknown phase rotation parameter, but only on the squeezing parameter and the inverse temperature. The SLD quantum Fisher information matrix (\ref{61}) is diagonal, and the condition ${\rm{Im}}\left( {Tr\left[ {{{\rho }_{out}}\hat L_{{\theta _\mu }}^S\hat L_{{\theta _\nu }}^S} \right]} \right) = 0$ is satisfied, which means that the quantum Cramér-Rao bound is attainable. \begin{figure}\label{F1} \end{figure} The behavior represented in Fig. (\ref{F1}) shows that the ratio $\mathcal{R}$ is less than 1 when $\bar n$ is small, and then increases as $r$ and $\bar n$ increase, approaching the value 1 for large values of the mean photon number. Therefore, we conclude that $B_{MI} = B_S = B_R$ when the thermal mean photon number is large, and $B_{MI} = B_R$ otherwise. It is also possible to analyze these results in terms of the temperature characterizing the thermal state. In fact, this can simply be done by performing the substitution $2\bar n + 1 = \coth \left( {{\raise0.7ex\hbox{$\beta $} \!\mathord{\left/{\vphantom {\beta 2}}\right.\kern-\nulldelimiterspace} \!\lower0.7ex\hbox{$2$}}} \right)$. \begin{figure}\label{F2} \end{figure} The results represented in Fig. (\ref{F2}) show that $B_{MI}$ decreases with increasing mean energy $\bar n$ of the probe thermal state, and reaches its minimum value for large $r$. This means that the optimal simultaneous estimation of the parameters $r$ and $\varphi$ becomes achievable as the mean energy of the probe thermal state increases.
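The analytic SLD QFIM for this squeezing-and-rotation example can be cross-checked against the general phase-space formula. The following sketch is our own verification (illustrative values of $\bar n$ and $r$; $\varphi = 0$ for brevity, which is sufficient since the result is $\varphi$ independent); it recomputes the first diagonal entry of Eq. (\ref{61}) from Eq. (\ref{38}):

```python
import numpy as np

# Cross-check of H_rr in Eq. (61) against the general formula, Eq. (38).
# nbar and r are our own illustrative values; phi = 0 for brevity.
nbar, r = 1.3, 0.6
a = 2 * nbar + 1
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

S = np.diag([np.exp(-r), np.exp(r)])          # squeezing symplectic, phi = 0
assert np.allclose(S @ Omega @ S.T, Omega)    # S is indeed symplectic

sigma = S @ (a * np.eye(2)) @ S.T             # output covariance matrix
dsig_r = 2 * a * np.diag([-np.exp(-2 * r), np.exp(2 * r)])   # d sigma / d r

Mmat = np.kron(sigma, sigma) + np.kron(Omega, Omega)
v = dsig_r.flatten(order='F')                 # column-stacking vec
H_rr = 0.5 * v @ np.linalg.pinv(Mmat) @ v     # first moments vanish here

# matches (4 nbar (nbar+1) + 1) / (nbar (nbar+1)), the (1,1) entry of Eq. (61)
assert np.isclose(H_rr, (4 * nbar * (nbar + 1) + 1) / (nbar * (nbar + 1)))
```

Note that the first-moment term of Eq. (\ref{38}) drops out because the thermal probe has $\mathbf{d}_{in} = 0$.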
Now, we consider a new estimation problem with the coherent state $\left| \alpha \right\rangle $ as input state, evolving in the same Gaussian channel described by the transformation $\hat R\left( \varphi \right)\hat S\left( r \right)$, namely \begin{equation} {\rho _{out}} = \hat R\left( \varphi \right)\hat S\left( r \right)\left| \alpha \right\rangle \left\langle \alpha \right|{\hat S^\dag }\left( r \right){\hat R^\dag }\left( \varphi \right). \label{64} \end{equation} The input coherent state $\left| \alpha \right\rangle $ is characterized by \begin{equation} {\mathbf{d}_{in}} =2\left[ {\begin{array}{*{20}{c}} {{\mathop{\rm Re}\nolimits} \left[ \alpha \right]}\\ {{\mathop{\rm Im}\nolimits} \left[ \alpha \right]} \end{array}} \right], \hspace{1cm} {\sigma _{in}} = \left[ {\begin{array}{*{20}{c}} 1&0\\ 0&1 \end{array}} \right]. \end{equation} Using the symplectic transformations (\ref{53}), we can express the first and second moments of the output state (\ref{64}) as follows: \begin{equation} {\mathbf{d}_{out}} = \hat R\left( \varphi \right)\hat S\left( r \right){\mathbf{d}_{in}}, \hspace{0.3cm}{\sigma _{out}} = \hat R\left( \varphi \right)\hat S\left( r \right){\sigma _{in}}{{\hat S}^\dag }\left( r \right){{\hat R}^\dag }\left( \varphi \right).
\end{equation} To calculate the RLD and SLD quantum Fisher information matrices, we first derive \begin{equation} \mathtt{vec}\left[ {{\partial _r}{\sigma _{out}}} \right] = 2\left[ {\begin{array}{*{20}{c}} {{{\sin }^2}\varphi {e^{2r}} - {{\cos }^2}\varphi {e^{ - 2r}}}\\ {2\sin \varphi \cos \varphi \sinh 2r}\\ {2\sin \varphi \cos \varphi \sinh 2r}\\ {{{\cos }^2}\varphi {e^{2r}} - {{\sin }^2}\varphi {e^{ - 2r}}} \end{array}} \right], \end{equation} \begin{equation} \mathtt{vec}\left[ {{\partial _\varphi }{\sigma _{out}}} \right] = 2\sinh 2r\left[ {\begin{array}{*{20}{c}} {2\sin \varphi \cos \varphi }\\ {\cos 2\varphi }\\ {\cos 2\varphi }\\ { - 2\sin \varphi \cos \varphi } \end{array}} \right], \end{equation} \begin{equation} {\partial _r}{\mathbf{d}_{out}} = 2\left[ {\begin{array}{*{20}{c}} { - {{\rm{e}}^{ - r}}{\mathop{\rm Re}\nolimits} \left[ \alpha \right]\cos \varphi + {{\rm{e}}^r}{\mathop{\rm Im}\nolimits} \left[ \alpha \right]\sin \varphi }\\ {{{\rm{e}}^r}{\mathop{\rm Im}\nolimits} \left[ \alpha \right]\cos \varphi + {{\rm{e}}^{ - r}}{\mathop{\rm Re}\nolimits} \left[ \alpha \right]\sin \varphi } \end{array}} \right], \end{equation} \begin{equation} {\partial _\varphi }{\mathbf{d}_{out}} = 2\left[ {\begin{array}{*{20}{c}} {{{\rm{e}}^r}{\mathop{\rm Re}\nolimits} \left[ \alpha \right]\cos \varphi - {{\rm{e}}^{ - r}}{\mathop{\rm Im}\nolimits} \left[ \alpha \right]\sin \varphi }\\ { - {{\rm{e}}^{ - r}}{\mathop{\rm Im}\nolimits} \left[ \alpha \right]\cos \varphi - {{\rm{e}}^r}{\mathop{\rm Re}\nolimits} \left[ \alpha \right]\sin \varphi } \end{array}} \right]. \end{equation} Obviously, $\det \Gamma = 0$ (i.e., $\Gamma$ is a singular matrix) due to the saturation of the uncertainty principle, Eq. (\ref{10}), for coherent states. We use the Tikhonov regularization to compute the Moore-Penrose pseudoinverse of $\Gamma$.
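Explicitly (a reminder in our notation, added for completeness), the Tikhonov-regularized inverse recovers the Moore-Penrose pseudoinverse in the limit of a vanishing regularization parameter $\epsilon$,
\begin{equation}
{\Gamma ^ + } = \mathop {\lim }\limits_{\epsilon \to {0^ + }} {\left( {{\Gamma ^\dag }\Gamma + \epsilon I} \right)^{ - 1}}{\Gamma ^\dag },
\end{equation}
where $I$ is the identity matrix; ${\Gamma ^ + }$ acts as the ordinary inverse on the support of $\Gamma$ and maps its kernel to zero.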
This gives {\small \begin{equation} {\Gamma ^ + } = {{\rm{e}}^{2r}}\left[ {\begin{array}{*{20}{c}} {\frac{{{\lambda _ + }\left( r \right) - {\lambda _ - }\left( r \right)\cos \left[ {2\varphi } \right]}}{{2{\lambda _ + }{{\left( r \right)}^2}}}}&{\frac{{2{\rm{i}}{{\rm{e}}^{2r}} + {\lambda _ - }\left( r \right)\sin \left[ {2\varphi } \right]}}{{2{\lambda _ + }{{\left( r \right)}^2}}}}\\ {\frac{{ - 2{\rm{i}}{{\rm{e}}^{2r}} + {\lambda _ - }\left( r \right)\sin \left[ {2\varphi } \right]}}{{2{\lambda _ + }{{\left( r \right)}^2}}}}&{\frac{{{\lambda _ + }\left( r \right) + {\lambda _ - }\left( r \right)\cos \left[ {2\varphi } \right]}}{{2{\lambda _ + }{{\left( r \right)}^2}}}} \end{array}} \right], \end{equation}} where ${\lambda _ \pm }\left( r \right) = {{\rm{e}}^{4r}} \pm 1$. We note that ${\Gamma ^\dag } = \Gamma $. Using ${\Sigma ^ + } = {\Gamma ^ + } \otimes {\Gamma ^ + }$ the RLD quantum Fisher information matrix is obtained from (\ref{31}), as \begin{widetext} {\small \begin{equation} \mathcal{F} = \left[ {\begin{array}{*{20}{c}} {\frac{{2\left( {{{\left( {1 + {{\rm{e}}^{8r}}} \right)}^2} + 4{\lambda _ + }{{\left( r \right)}^2}\left( {{\mathop{\rm Re}\nolimits} {{\left[ \alpha \right]}^2} + {{\rm{e}}^{8r}}{\mathop{\rm Im}\nolimits} {{\left[ \alpha \right]}^2}} \right)} \right)}}{{{\lambda _ + }{{\left( r \right)}^4}}}}&{\frac{{2{{\rm{e}}^{2r}}\left( { - {\rm{i}}\left( {{\lambda _ - }\left( r \right)\left( {1 + {{\rm{e}}^{8r}}} \right)} \right) - 2{\lambda _ + }{{\left( r \right)}^2}\alpha \left( { - {\rm{i}}{\mathop{\rm Re}\nolimits} \left[ \alpha \right] + {{\rm{e}}^{4r}}{\mathop{\rm Im}\nolimits} \left[ \alpha \right]} \right)} \right)}}{{{\lambda _ + }{{\left( r \right)}^4}}}}\\ {\frac{{2{{\rm{e}}^{2r}}\left( {{\rm{i}}\left( {{\lambda _ - }\left( r \right)\left( {1 + {{\rm{e}}^{8r}}} \right)} \right) - 2{\lambda _ + }{{\left( r \right)}^2}{\alpha ^*}\left( {{\rm{i}}{\mathop{\rm Re}\nolimits} \left[ \alpha \right] + {{\rm{e}}^{4r}}{\mathop{\rm Im}\nolimits} 
\left[ \alpha \right]} \right)} \right)}}{{{\lambda _ + }{{\left( r \right)}^4}}}}&{\frac{{2{{\rm{e}}^{4r}}\left( {{\lambda _ - }{{\left( r \right)}^2} + {\lambda _ + }{{\left( r \right)}^2}{{\left| \alpha \right|}^2}} \right)}}{{{\lambda _ + }{{\left( r \right)}^4}}}} \end{array}} \right]. \end{equation}} \end{widetext} Similarly, we can calculate the Moore-Penrose pseudoinverse of $\mathcal{M}$ by means of Tikhonov regularization. Then, using (\ref{37}), we obtain the following SLD quantum Fisher information matrix: {\small\begin{equation} H = \left[ {\begin{array}{*{20}{c}} {2\left( {4{{\left| \alpha \right|}^2} + \tanh {{\left[ {4r} \right]}^2}} \right)}&{ - 16{\mathop{\rm Re}\nolimits} \left[ \alpha \right]{\mathop{\rm Im}\nolimits} \left[ \alpha \right]\cosh \left[ {2r} \right]}\\ { - 16{\mathop{\rm Re}\nolimits} \left[ \alpha \right]{\mathop{\rm Im}\nolimits} \left[ \alpha \right]\cosh \left[ {2r} \right]}&{{\rm{8}}{{\rm{e}}^{ - 4r}}\left( {{{\rm{e}}^{8r}}{\mathop{\rm Re}\nolimits} {{\left[ \alpha \right]}^2} + {\mathop{\rm Im}\nolimits} {{\left[ \alpha \right]}^2}} \right)} \end{array}} \right]. \end{equation}} The two bounds $B_R$ and $B_S$ for this protocol can be computed from Eqs. (\ref{21}) and (\ref{22}).
One gets {\small \begin{equation} {B_R} = \frac{{f\left( {r,\alpha } \right)\cosh {{\left[ {2r} \right]}^2} + g\left( {r,\alpha } \right)\cosh {{\left[ {2r} \right]}^3} + h\left( {r,\alpha } \right)\sinh \left[ {2r} \right]}}{{2\left( {{\mathop{\rm Im}\nolimits} {{\left[ \alpha \right]}^2} + {{\rm{e}}^{8r}}{\mathop{\rm Re}\nolimits} {{\left[ \alpha \right]}^2}} \right)}},\label{75} \end{equation}} {\small \begin{equation} {B_S} = \frac{{{\rm{4}}k\left( {r,\alpha } \right) + {{\rm{e}}^{4r}}\left( {4{{\left| \alpha \right|}^2} + \tanh {{\left[ {4r} \right]}^2}} \right)}}{{8\left( {4{{\left( {{\mathop{\rm Re}\nolimits} {{\left[ \alpha \right]}^2} - {{\rm{e}}^{4r}}{\mathop{\rm Im}\nolimits} {{\left[ \alpha \right]}^2}} \right)}^2} + k\left( {r,\alpha } \right)\tanh {{\left[ {4r} \right]}^2}} \right)}},\label{76} \end{equation}} where $f\left( {r,\alpha } \right) = 1 + {{\rm{e}}^{8r}}\left( {1 + 4{\mathop{\rm Im}\nolimits} {{\left[ \alpha \right]}^2}} \right) + 4{\mathop{\rm Re}\nolimits} {\left[ \alpha \right]^2} + {{\rm{e}}^{4r}}\left( {4{{\left| \alpha \right|}^2} - 1} \right)$, $g\left( {r,\alpha } \right) = 8{{\rm{e}}^{4r}}\left( {{\mathop{\rm Im}\nolimits} {{\left[ \alpha \right]}^2} - {\mathop{\rm Re}\nolimits} {{\left[ \alpha \right]}^2}} \right)$, $h\left( {r,\alpha } \right) = 2{{\rm{e}}^{4r}}\left( {2{{\left| \alpha \right|}^2} + \left( {2 + 4{{\left| \alpha \right|}^2}} \right)\cosh \left[ {4r} \right]} \right)$, and $k\left( {r,\alpha } \right) = {{\rm{e}}^{8r}}{\mathop{\rm Im}\nolimits} {\left[ \alpha \right]^2} + {\mathop{\rm Re}\nolimits} {\left[ \alpha \right]^2}$. Eqs. (\ref{75}) and (\ref{76}) show that RLD and SLD quantum Cramér-Rao bounds depend on the squeezing parameter $r$ and the variable labeling the coherent state (the input state). Here also, it is interesting to note that the rotation parameter $\varphi$ does not contribute to the quantum Cramér-Rao bounds. From the results represented in Fig. 
(\ref{F3}), we notice that the ratio $\mathcal{R}$ is always less than 1. Consequently, the most informative quantum Cramér-Rao bound corresponds to the RLD quantum Cramér-Rao bound ($B_{MI}=B_R$). This result can be explained by the fact that the saturation condition of the SLD quantum Cramér-Rao bound is not satisfied. Using Eq. (\ref{45}), we derive \begin{equation} {\rm{Im}}\left( {Tr\left[ {{\rho _{out}}\hat L_{{\theta _\mu }}^S\hat L_{{\theta _\nu }}^S} \right]} \right) = 8\left( {{{\rm{e}}^{ - 2r}}{\mathop{\rm Re}\nolimits} {{\left[ \alpha \right]}^2} - {{\rm{e}}^{2r}}{\mathop{\rm Im}\nolimits} {{\left[ \alpha \right]}^2}} \right). \end{equation} This quantity is not zero, which means that the SLD quantum Cramér-Rao bound cannot be saturated for the simultaneous estimation of $r$ and $\varphi$ encoded into the squeezing and rotation operators, respectively, when taking the coherent state as input state. \begin{figure}\label{F3} \end{figure} \begin{figure}\label{F4} \end{figure} The behavior represented in Fig. (\ref{F4}) shows that $B_{MI}$ increases when the energy of the probe coherent state $\vert \alpha\vert^2$ decreases. It reaches its minimum value when $r$ takes smaller values, which implies that the optimal simultaneous estimation of the parameters $r$ and $\varphi$ is achieved by increasing the mean energy of the probe coherent state. A comparison between the results represented in Fig. (\ref{F2}) and those represented in Fig. (\ref{F4}) shows similarities between the performances obtained for the simultaneous estimation of $r$ and $\varphi$, encoded respectively in the squeezing and rotation operators, when taking the thermal state and the coherent state as probe states. It must be noticed, however, that thermal states present more of an advantage when the mean energy of the coherent and thermal probes takes smaller values.
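The singularity of $\Gamma$ for pure probes, and the regularization step used in this section, can also be illustrated numerically. For a single-mode pure Gaussian state $\det \sigma = 1$, so $\det \Gamma = \det \sigma - 1 = 0$. A minimal sketch (assuming NumPy and the conventions $\sigma_{in} = I$, $S(r) = \mathrm{diag}(e^{-r}, e^{r})$; the variable names are ours):

```python
import numpy as np

r, phi = 0.5, 0.7                                  # arbitrary channel parameters
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
S = np.diag([np.exp(-r), np.exp(r)])
sigma = R @ S @ S.T @ R.T                          # output covariance, det(sigma) = 1
Gamma = sigma + 1j * Omega

# Gamma is singular for any pure (here: coherent) Gaussian probe
print(abs(np.linalg.det(Gamma)) < 1e-10)           # True

# Tikhonov-regularized inverse approaches the Moore-Penrose pseudoinverse
eps = 1e-8
G_plus = np.linalg.inv(Gamma.conj().T @ Gamma + eps * np.eye(2)) @ Gamma.conj().T
print(np.allclose(G_plus, np.linalg.pinv(Gamma), atol=1e-6))  # True
```

With a finite $\epsilon$ the regularized inverse already agrees with \texttt{numpy.linalg.pinv} to within a tolerance set by $\epsilon$.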
\section{Conclusion} The quantum Cramér-Rao bound is the key tool used to estimate unknown parameters in a quantum system. This bound is determined from the quantum Fisher information matrix. In this article, we determined the expressions of the RLD and SLD quantum Fisher information matrices by explicitly computing the right logarithmic derivative (RLD) and symmetric logarithmic derivative (SLD) operators corresponding to multi-mode quantum Gaussian states. We also expressed the saturation condition of the quantum Cramér-Rao bound associated with the SLD operator in multiparameter quantum estimation protocols. We then illustrated the derived formalism with some examples of quantum Gaussian channels. We note that all the explicit expressions supplying this general formalism are written in terms of the first and second moments. This reflects the fact that quantum Gaussian states are completely characterized by only these two quantities. This remarkable advantage is an incentive to develop more general strategies in multiparameter quantum Gaussian metrology. \begin{widetext} \appendix \section{Right Logarithmic Derivative (RLD) \label{app:A}} To determine the expressions of the RLD and SLD quantum Fisher information matrices and the corresponding logarithmic derivative operators, we need certain properties of the characteristic function of Gaussian states, which establishes a link between the infinite-dimensional Hilbert space and the finite-dimensional phase space.
The characteristic function is defined by \begin{equation} {\chi _{\hat \rho }} = Tr\left[ {\hat D\hat \rho } \right] \mathop { =\joinrel=}\limits^{(\ref{6})} Tr\left[ {{e^{i{{\mathbf{\tilde q}}^\mathtt{T}}\mathbf{\hat q}}}{e^{i{{\mathbf{\tilde p}}^\mathtt{T}}\mathbf{\hat p}}}{e^{\frac{i}{2}{{\mathbf{\tilde q}}^\mathtt{T}}\mathbf{\tilde p}}}\hat \rho } \right] = Tr\left[ {{e^{i{{\mathbf{\tilde p}}^\mathtt{T}}\mathbf{\hat p}}}{e^{i{{\mathbf{\tilde q}}^\mathtt{T}}\mathbf{\hat q}}}{e^{ - \frac{i}{2}{{\mathbf{\tilde q}}^\mathtt{T}}\mathbf{\tilde p}}}\hat \rho } \right], \label{A1} \end{equation} where $\mathbf{\hat q} (\mathbf{\hat p})$ and $\mathbf{\tilde q} (\mathbf{\tilde p})$ are the vectors of odd (even) entries of the parent vectors $\mathbf{\hat r}$ and $\mathbf{\tilde r}$, respectively. This decomposition follows from the Baker-Campbell-Hausdorff formula. Differentiating (\ref{A1}) with respect to ${{\tilde r}_k}({{\tilde q}_k},{{\tilde p}_k})$ gives the following useful identities: \begin{equation} Tr\left[ {\hat D \hspace{0.1cm}\hat \rho \hspace{0.1cm}{{\hat r}_k}} \right] = \left( { - i{\mkern 1mu} {\partial _{{{\tilde r}_{_k}}}} - \frac{1}{2}{\Omega _{kk'}}{{\tilde r}_{k'}}} \right){\chi _{\hat \rho }}, \hspace*{1cm} Tr\left[ {\hat D \hspace{0.1cm} \hat \rho \hspace{0.1cm} {{\hat r}_j} \hspace{0.1cm} {{\hat r}_k}} \right] = \left( { - i{\mkern 1mu} {\partial _{{{\tilde r}_k}}} - \frac{1}{2}{\Omega _{kk'}}{{\tilde r}_{k'}}} \right)\left( { - i{\mkern 1mu} {\partial _{{{\tilde r}_{_j}}}} - \frac{1}{2}{\Omega _{jj'}}{{\tilde r}_{j'}}} \right){\chi _{\hat \rho }},\label{A2} \end{equation} \begin{equation} Tr\left[ {\hat D \hspace{0.1cm}\left( {\hat \rho \hspace{0.1cm}{{\hat r}_j} + {{\hat r}_j} \hspace{0.1cm}\hat \rho } \right)} \right] = - 2i{\mkern 1mu} {\partial _{{{\tilde r}_j}}}{\chi _{\hat \rho }}, \hspace{2cm} Tr\left[ {\hat D \hspace{0.1cm}\left( {\hat \rho \hspace{0.1cm} {{\hat r}_j}\hspace{0.1cm}{{\hat r}_k} + {{\hat r}_k}\hspace{0.1cm}{{\hat r}_j}\hspace{0.1cm}\hat \rho } \right)} \right] = \frac{1}{2}\left( {{\Omega _{jj'}}{{\tilde r}_{j'}}{\Omega _{kk'}}{{\tilde r}_{k'}} - 4{\partial _{{{\tilde r}_k}}}{\partial _{{{\tilde r}_j}}}} \right){\chi _{\hat \rho }}.\label{A3} \end{equation} We now differentiate the characteristic function of quantum Gaussian states (\ref{7}), which is expressed in terms of the first and second moments, with respect to the estimated parameters ${\theta _\mu }$ and with respect to ${{\tilde r}_l}$, respectively. We get \begin{equation} {\partial _{{\theta _\mu }}}{\chi _{\hat \rho }} = \left( {i{\mkern 1mu} {{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{4}{\partial _{{\theta _\mu }}}{\sigma _{pm}}{{\tilde r}_p}{{\tilde r}_m}} \right){\chi _{\hat \rho }}, \hspace*{0.8cm} {\partial _{{{ {\tilde r}}_l}}}{\chi _{\hat \rho }} = \left( {i{\mkern 1mu} {d_l} - \frac{1}{2}{\sigma _{lm}}{{\tilde r}_m}} \right){\chi _{\hat \rho }}. \label{A4} \end{equation} These expressions are important in computing the explicit expression of the RLD (\ref{27}). For this, one needs to calculate the expressions of ${{\cal L}^R}^{\left( 0 \right)}$, ${\cal L}^{R\left( 1 \right)}$ and ${\cal L}^{R\left( 2 \right)}$ occurring in Eq. (\ref{27}).
We label each step with the formula used: \begin{align} {\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}&= Tr\left[ {\hat D\hspace{0.1cm}{\partial _{{\theta _\mu }}}\hat \rho } \right] \\ \notag& \mathop{ =\joinrel=}\limits^{\left( \ref{13} \right)} Tr\left[ {\hat D\hspace{0.1cm}\hat \rho\hspace{0.1cm} \hat {\cal L}_{{\theta _\mu }}^R} \right]\\ \notag& \mathop { =\joinrel=} \limits^{(\ref{27})} {{\cal L}^R}^{\left( 0 \right)}Tr\left[ {\hat D\hspace{0.1cm}\hat \rho } \right] + {\cal L}_l^{R\left( 1 \right)}Tr\left[ {\hat D\hspace{0.1cm}\hat \rho \hspace{0.1cm}{{\hat r}_l}} \right] + {\cal L}_{jk}^{R\left( 2 \right)}Tr\left[ {\hat D \hspace{0.1cm}\hat \rho\hspace{0.1cm} {{\hat r}_j}\hspace{0.1cm}{{\hat r}_k}} \right]\\ \notag & \mathop { =\joinrel=}\limits^{(\ref{A2})} {{\cal L}^R}^{\left( 0 \right)}{\chi _{\hat \rho }} + {\cal L}_l^{R\left( 1 \right)}\left( { - i{\partial _{{{\tilde r}_l}}} - \frac{1}{2}{\Omega _{ll'}}{{\tilde r}_{l'}}} \right){\chi _{\hat \rho }} + {\cal L}_{jk}^{R\left( 2 \right)}\left( { - i{\partial _{{{\tilde r}_k}}} - \frac{1}{2}{\Omega _{kk'}}{{\tilde r}_{k'}}} \right)\left( { - i{\partial _{{{\tilde r}_j}}} - \frac{1}{2}{\Omega _{jj'}}{{\tilde r}_{j'}}} \right){\chi _{\hat \rho }}.
\label{A6} \end{align} Using the results of Eq. (\ref{A4}), one finds \begin{align} \left( {i{{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{4}{\partial _{{\theta _\mu }}}{\sigma _{mp}}{{\tilde r}_m}{{\tilde r}_p}} \right){\chi _{\hat \rho }} = &{{\cal L}^R}^{\left( 0 \right)}{\chi _{\hat \rho }} + {\cal L}_l^{R\left( 1 \right)}\left( {\frac{i}{2}{\sigma _{ll'}}{{\tilde r}_{l'}} + {d_l} - \frac{1}{2}{\Omega _{ll'}}{{\tilde r}_{l'}}} \right){\chi _{\hat \rho }} + \\ \notag& {\cal L}_{jk}^{R(2)}\left( {\left( {\frac{1}{2}{\sigma _{jj'}}{{\tilde r}_{j'}} - i{d_j}} \right)\left( { - \frac{1}{2}{\sigma _{kk'}}{{\tilde r}_{k'}} + i{d_k}} \right) + \frac{1}{2}{\sigma _{jk}}+{\frac{i}{2}{\Omega _{jk}}}} \right){\chi _{\hat \rho }} - \\ \notag& {\cal L}_{jk}^{R(2)}\left( {\frac{i}{4}{\Omega _{jj'}}{\sigma _{kk'}}{{\tilde r}_{j'}}{{\tilde r}_{k'}} + \frac{i}{4}{\Omega _{kk'}}{\sigma _{jj'}}{{\tilde r}_{k'}}{{\tilde r}_{j'}} + \frac{1}{2}{\Omega _{jj'}}{{\tilde r}_{j'}}{d_k} + \frac{1}{2}{\Omega _{kk'}}{{\tilde r}_{k'}}{d_j} - \frac{1}{4}{\Omega _{jj'}}{\Omega _{kk'}}{{\tilde r}_{j'}}{{\tilde r}_{k'}}} \right){\chi _{\hat \rho }}. \end{align} Now, we can match the different orders of the last equation independently, noting that ${\chi _{\hat \rho }}$ is always nonzero. The identification of second-order terms of Eq. (\ref{A6}) leads to: \begin{equation} - {\partial _{{\theta _\mu }}}{\sigma _{mp}}{\tilde r_m}{{\tilde r}_p} = {\cal L}_{jk}^{R(2)}\left( {{\Omega _{jj'}}{{\tilde r}_{j'}}{\Omega _{kk'}}{{\tilde r}_{k'}} - {\sigma _{jj'}}{{\tilde r}_{j'}}{\sigma _{kk'}}{{\tilde r}_{k'}} - i\,{\sigma _{kk'}}{{\tilde r}_{k'}}{\Omega _{jj'}}{{\tilde r}_{j'}} - i\,{\Omega _{kk'}}{{\tilde r}_{k'}}{\sigma _{jj'}}{{\tilde r}_{j'}}} \right).
\end{equation} Using the matrix representation (without indices), one finds \begin{align} {\partial _{{\theta _\mu }}}\sigma &= \sigma {{\cal L}^{\left( R \right)}}^{\left( 2 \right)}\sigma - \Omega {{\cal L}^{\left( R \right)}}^{\left( 2 \right)}\Omega + i\,\sigma {{\cal L}^{\left( R \right)}}^{\left( 2 \right)}\Omega + i\,\Omega {{\cal L}^{\left( R \right)}}^{\left( 2 \right)}\sigma \\ \notag& = {\Gamma}{{\cal L}^{\left( R \right)}}^{\left( 2 \right)}{\Gamma }, \end{align} where ${\Gamma} = \sigma + i\,\Omega $. To determine the expression of ${{\cal L}^{\left( R \right)}}^{\left( 2 \right)}$, one employs the property \begin{equation} \mathtt{vec}\left[ {ABC} \right] = \left( {{{C^\dag }} \otimes A} \right)\mathtt{vec}\left[ B \right], \label{A10} \end{equation} where $A$, $B$, and $C$ are matrices with complex elements. Thus one gets \begin{equation} \mathtt{vec}\left[ {{\mathcal{L}^{\left( R \right)}}^{\left( 2 \right)}} \right] = {\left( {\Gamma ^\dag \otimes {\Gamma }} \right)^{+}}\mathtt{vec}\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]. \label{A11} \end{equation} The identification of first-order terms of Eq. (\ref{A6}) leads to: \begin{align} i{{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} = \mathcal{L}_l^{R\left( 1 \right)}\left( {\frac{i}{2}{\sigma _{ll'}}{{\tilde r}_{l'}} - \frac{1}{2}{\Omega _{ll'}}{{\tilde r}_{l'}}} \right) + \mathcal{L}_{jk}^{R(2)}\left( {\frac{i}{2}{\sigma _{jj'}}{{\tilde r}_{j'}}{d_k} + \frac{i}{2}{\sigma _{kk'}}{{\tilde r}_{k'}}{d_j} - \frac{1}{2}{\Omega _{jj'}}{{\tilde r}_{j'}}{d_k} - \frac{1}{2}{\Omega _{kk'}}{{\tilde r}_{k'}}{d_j}} \right).
\end{align} The matrix form of the last equation writes \begin{align} {\partial _{{\theta _\mu }}}\mathbf{d} =\frac{1}{2}\left( {\sigma + i\,\Omega } \right){\mathcal{L}^R}^{\left( 1 \right)} + {\rm{ }}\left( {\sigma + i\,\Omega } \right){\mathcal{L}^R}^{\left( 2 \right)}\mathbf{d} = \frac{1}{2}{\Gamma}{\mathcal{L}^R}^{\left( 1 \right)} + {\rm{ }}{\Gamma}{\mathcal{L}^R}^{\left( 2 \right)}\mathbf{d}, \end{align} and the expression of ${\mathcal{L}^R}^{\left( 1 \right)}$ is found as \begin{equation} {\mathcal{L}^R}^{\left( 1 \right)} = 2\Gamma ^{+}{\partial _{{\theta _\mu }}}\mathbf{d}{\rm{ - 2}}{\mathcal{L}^R}^{\left( 2 \right)}\mathbf{d}. \end{equation} The identification of zero-order terms of Eq. (\ref{A6}) leads to: \begin{equation} 0 = {\mathcal{L}^R}^{\left( 0 \right)} +\mathcal{ L}_l^{R\left( 1 \right)}{d_l} + \mathcal{L}_{jk}^{R(2)}\left( {{d_j}{d_k} + \frac{1}{2}{\sigma _{jk}} + \frac{i}{2}{\Omega _{jk}}} \right), \end{equation} which takes the following matrix form \begin{equation} 0 = {{\cal L}^R}^{\left( 0 \right)} + {{\cal L}^{R\left( 1 \right)}}^T{\bf{d}} + {{\bf{d}}^T}{{\cal L}^{R\left( 2 \right)}}{\bf{d}} + \frac{1}{2}Tr\left[ {{\Gamma }{{\cal L}^{R\left( 2 \right)}}} \right]. \end{equation} We find that the expression of ${\mathcal{L}^R}^{\left( 0 \right)}$ is given by \begin{equation} {\mathcal{L}^R}^{\left( 0 \right)} = - \frac{1}{2}Tr\left[ {{\Gamma }{\mathcal{L}^{R\left( 2 \right)}}} \right] - {\mathcal{L}^{R\left( 1 \right)}}^T \mathbf{d} - {\mathbf{d}^T}{\mathcal{L}^{R\left( 2 \right)}}\mathbf{d}. 
\end{equation} \section{RLD Quantum Fisher Information Matrix (QFIM) \label{App:B}} Inserting the expression of the right logarithmic derivative (RLD) obtained in Appendix \ref{app:A} into the definition of the RLD quantum Fisher information matrix (\ref{15}), which can also be written as \begin{equation} {{\cal F}_{{\theta _\mu }{\theta _\nu }}} =Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho {\cal L}{{_{{\theta _\nu }}^R}^\dag }} \right], \label{B1} \end{equation} and using the property of the characteristic function evaluated at $\mathbf{\tilde r}=0$, \begin{equation} Tr\left[ {\hat \rho } \right] = {\left. {Tr\left[ {\hat D\hat \rho } \right]} \right|_{\mathbf{\tilde r} = 0}} = {\left. {{\chi _{\hat \rho }}} \right|_{\mathbf{\tilde r} = 0}} = 1, \label{B2} \end{equation} the expression of (\ref{B1}) can be written as \begin{align} {{\cal F}_{{\theta _\mu }{\theta _\nu }}} &= Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho {\cal L}{{_{{\theta _\nu }}^R}^\dag }} \right] \\ \notag& \mathop { =\joinrel=}\limits^{(\ref{27})}{{\cal L}^{R\left( 0 \right)*}}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho } \right] + {\cal L}_l^{R\left( 1 \right)*}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho \hspace{0.1cm} {{\hat r}_l}} \right] + {\cal L}{_{kj}^{R\left( 2 \right)*}}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho \hspace{0.1cm} {{\hat r}_k}{{\hat r}_j}} \right] \\ \notag& \mathop {=\joinrel=} \limits^{(\ref{B2})}{{\cal L}^{R\left( 0 \right)*}}{\left. {{\partial _{{\theta _\mu }}}Tr\left[ {\hat D \hat \rho } \right]} \right|_{\tilde r = 0}} + {\left. {{\cal L}_l^{R\left( 1 \right)*}{\partial _{{\theta _\mu }}} Tr\left[ {\hat D \hspace{0.1cm} \hat \rho \hspace{0.1cm} {{\hat r}_l}} \right]} \right|_{\tilde r = 0}} + {\left.
{{\cal L}{{_{kj}^{R\left( 2 \right)}}^*}{\partial _{{\theta _\mu }}}Tr\left[ {\hat D \hspace{0.1cm} \hat \rho \hspace{0.1cm} {{\hat r}_k} \hspace{0.1cm} {{\hat r}_j}} \right]} \right|_{\tilde r=0}}\\ \notag& \mathop { =\joinrel=}\limits^{(\ref{A2})} {{\cal L}^{R\left( 0 \right)*}}{\left. {{\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r = 0}} + {\left. {{\cal L}_l^{R\left( 1 \right)*}\left( { - i{\partial _{{{\tilde r}_l}}} - \frac{1}{2}{\Omega _{ll'}}{{\tilde r}_{l'}}} \right){\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r = 0}} + {\left. {{\cal L}{{_{kj}^{R\left( 2 \right)}}^*}\left( { - i{\partial _{{{\tilde r}_j}}} - \frac{1}{2}{\Omega _{jj'}}{{\tilde r}_{j'}}} \right)\left( { - i{\partial _{{{\tilde r}_k}}} - \frac{1}{2}{\Omega _{kk'}}{{\tilde{r}}_{k'}}} \right){\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r = 0}}. \end{align} Replacing ${\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}$ by the corresponding expression in (\ref{A4}), one has \begin{align*} {{\cal F}_{{\theta _\mu }{\theta _\nu }}}&={\mathcal{L}^{R\left( 0 \right)*}}{\left. {\left( {i{\mkern 1mu} {{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{4}{\partial _{{\theta _\mu }}}{\sigma _{pm}}{{\tilde r}_p}{{\tilde r}_m}} \right){\chi _{\hat \rho }}} \right|_{\tilde r = 0}} + {\left. {\mathcal{L}_l^{R\left( 1 \right)*}\left( { - i{\partial _{{{\bar r}_l}}} - \frac{1}{2}{\Omega _{ll'}}{{\tilde r}_{l'}}} \right)\left( {i{\mkern 1mu} {{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{4}{\partial _{{\theta _\mu }}}{\sigma _{pm}}{{\tilde r}_p}{{\tilde r}_m}} \right){\chi _{\hat \rho }}} \right|_{\tilde r = 0}} + \\ \notag& \hspace*{4cm} {\rm{ \mathcal{L}}}_{kj}^{R\left( 2 \right)*}{\left. 
{\left( { - i{\partial _{{{\tilde r}_j}}} - \frac{1}{2}{\Omega _{jj'}}{{\tilde r}_{j'}}} \right)\left( { - i{\partial _{{{\tilde r}_k}}} - \frac{1}{2}{\Omega _{kk'}}{{\tilde r}_{k'}}} \right)\left( {i{\mkern 1mu} {{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{4}{\partial _{{\theta _\mu }}}{\sigma _{pm}}{{\tilde r}_p}{{\tilde r}_m}} \right){\chi _{\hat \rho }}} \right|_{\tilde r = 0}}. \end{align*} Using the expression of ${\partial _{{{\tilde r}_l}}}{\chi _{\hat \rho }}$ given by (\ref{A4}), one gets \begin{equation} {\cal F}_{{\theta _\mu }{\theta _\nu }} = {\cal L}_l^{R\left( 1 \right)*}{\partial _{{\theta _\mu }}}{d_l} + {\cal L}_{kj}^{R\left( 2 \right)*}\left( {\frac{1}{2}{\partial _{{\theta _\mu }}}{\sigma _{kj}} + 2\,{\partial _{{\theta _\mu }}}{d_k}\,{d_j}} \right). \end{equation} Using the matrix representation, one finds \begin{equation} {{\cal F}_{{\theta _\mu }{\theta _\nu }}} = \frac{1}{2}Tr\left[ {{\partial _{{\theta _\mu }}}\sigma \mathcal{L}_{{\theta _\nu }}^{R\left( 2 \right)\dag }} \right] + \mathcal{L}_{{\theta _\nu }}^{R\left( 1 \right)\dag }{\partial _{{\theta _\mu }}}{\bf{d}} + 2{\partial _{{\theta _\mu }}}{{\bf{d}}^T}\mathcal{L}_{{\theta _\nu }}^{R\left( 2 \right)\dag }{\bf{d}}. \end{equation} Using Eq. (\ref{40}) and replacing $\mathcal{L}_{{\theta _\nu }}^{R\left( 2 \right)}$ by its expression given in (\ref{A11}), we find the expression of RLD quantum Fisher information matrix \begin{equation} {\mathcal{F}_{{\theta _\mu }{\theta _\nu }}} = \frac{1}{2}\mathtt{vec}{\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]^\dag }{\left( {\Gamma ^\dag \otimes {\Gamma }} \right)^{+}}\mathtt{vec}\left[ {{\partial _{{\theta _\nu }}}\sigma } \right] + 2{\partial _{{\theta _\mu }}}{\mathbf{d}^T}\Gamma ^{+}{\partial _{{\theta _\nu }}}\mathbf{d}. 
\end{equation} \section{Symmetric Logarithmic Derivative (SLD) \label{App:C}} Analogously, to find the explicit expression of the SLD quantum Fisher information matrix, it is necessary to determine first the expression of the corresponding symmetric logarithmic derivative (SLD) operator. In this case, we consider the quadratic form of SLD given by (\ref{33}) and one has to determiner the expression of ${L^S}^{\left( 0 \right)}$, $L^{S\left( 1 \right)}$ and $L^{S\left( 2 \right)}$. To do this, one starts with \begin{align} {\partial _{{\theta _\mu }}}{\chi _{\hat \rho }} &= Tr\left[ {\hat D \hspace{0.1cm}{\partial _{{\theta _\mu }}}\hat \rho } \right]\\ \notag& \mathop { =\joinrel=} \limits^{\left( {\ref{14}} \right)} \frac{1}{2}\left( {Tr\left[ {\hat D \hat \rho \hat L_{{\theta _\mu }}^S} \right] + Tr\left[ {\hat D \hat L_{{\theta _\mu }}^S\hat \rho } \right]} \right)\\ \notag& \mathop { =\joinrel=} \limits^{\left( {\ref{33}} \right)} \frac{1}{2}\left( {Tr\left[ {\hat D \hat \rho \left( {{L^S}^{\left( 0 \right)} + L{{_l^S}^{\left( 1 \right)}}{{\hat r}_l} + L{{_{jk}^S}^{\left( 2 \right)}}{{\hat r}_j}{{\hat r}_k}} \right)} \right] + Tr\left[ {\hat D \left( {{L^S}^{\left( 0 \right)} + L{{_l^S}^{\left( 1 \right)}}{{\hat r}_l} + L{{_{jk}^S}^{\left( 2 \right)}}{{\hat r}_j}{{\hat r}_k}} \right)\hat \rho } \right]} \right)\\ \notag& {=\joinrel=} {L^S}^{\left( 0 \right)}Tr\left[ {\hat D \hat \rho } \right] + \frac{1}{2}L_l^{S\left( 1 \right)}Tr\left[ {\hat D \left( {\hat \rho \hspace{0.1cm} {{\hat r}_l} + {{\hat r}_l} \hspace{0.1cm} \hat \rho } \right)} \right] + \frac{1}{2}L_{jk}^{S\left( 2 \right)}Tr\left[ {\hat D \left( {{{\hat r}_j}{{\hat r}_k}\hat \rho + \hspace{0.1cm}\hat \rho \hspace{0.1cm} {{\hat r}_j}{{\hat r}_k}} \right)} \right]\\ \notag& \mathop { =\joinrel=} \limits^{(\ref{A3})} {L^S}^{\left( 0 \right)}{\chi _{\hat \rho }} + L_l^{S\left( 1 \right)}\left( { - i{\mkern 1mu} {\mkern 1mu} {\partial _{{{\tilde r}_l}}}{\chi _{\hat \rho }}} \right) + 
\frac{1}{2}L_{jk}^{S\left( 2 \right)}\left( { - 4{\mkern 1mu} {\partial _{{{\tilde r}_j}}}{\mkern 1mu} {\mkern 1mu} {\partial _{{{\tilde r}_k}}} + {\Omega _{jj'}}{{\tilde r}_{j'}}{\Omega _{kk'}}{{\tilde r}_{k'}}} \right){\chi _{\hat \rho }}. \end{align} Replacing the results of (\ref{A4}) in the last equation, one finds \begin{align}\label{C2} \left( {2i\,{{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{2}{\partial _{{\theta _\mu }}}{\sigma _{lm}}{{\tilde r}_l}{{\tilde r}_m}} \right){\chi _{\hat \rho }} = & 2{L^S}^{\left( 0 \right)}{\chi _{\hat \rho }} + L_l^{S\left( 1 \right)}\left( {2\,{d_l} + i\hspace{0.1cm}{\sigma _{ll'}}{{\tilde r}_{l'}}} \right){\chi _{\hat \rho }} + \frac{1}{2}L_{jk}^{S\left( 2 \right)}{\Omega _{jj'}}{{\tilde r}_{j'}}{\Omega _{kk'}}{{\tilde r}_{k'}}\,{\chi _{\hat \rho }} -\\ \notag& \hspace{1cm} L_{jk}^{S\left( 2 \right)}\left( {2\left( {i{d_j} - \frac{1}{2}{\sigma _{jj'}}{{\tilde r}_{j'}}} \right)\left( {i{d_k} - \frac{1}{2}{\sigma _{kk'}}{{\tilde r}_{k'}}} \right) - {\sigma _{jk}}} \right){\chi _{\hat \rho }}. \end{align} The identification of second-order terms of Eq. (\ref{C2}) leads to: \begin{equation} {\partial _{{\theta _\mu }}}{\sigma _{lm}} = L_{jk}^{S\left( 2 \right)}{\sigma _{jj'}}{\sigma _{kk'}} - L_{jk}^{S\left( 2 \right)}{\Omega _{jj'}}{\Omega _{kk'}}, \end{equation} and the matrix representation form is \begin{equation} {\partial _{{\theta _\mu }}}\sigma = \sigma L_{{\theta _\mu }}^{S\left( 2 \right)}\sigma - \Omega L_{{\theta _\mu }}^{S\left( 2 \right)}\Omega. \end{equation} Using the Eq. (\ref{A10}), one obtains \begin{equation} \mathtt{vec}\left[ {L_{{\theta _\mu }}^{S\left( 2 \right)}} \right] = {\left( {{\sigma ^\dag} \otimes \sigma + \Omega \otimes \Omega } \right)^{+}}\mathtt{vec}\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]. \label{C6} \end{equation} The identification of first-order terms of Eq. 
(\ref{C2}) leads to: \begin{equation} 2{\partial _{{\theta _\mu }}}{d_v} = L_l^{S\left( 1 \right)}{\sigma _{ll'}} + L_{jk}^{S\left( 2 \right)}\left( {{d_j}{\sigma _{kk'}} + {d_k}{\sigma _{jj'}}} \right), \end{equation} and the corresponding matrix form is \begin{equation} L_{{\theta _\mu }}^{S\left( 1 \right)} = 2{\sigma ^{-1}}{\partial _{{\theta _\mu }}}{\bf{d}} - 2L_{{\theta _\mu }}^{S\left( 2 \right)}{\bf{d}}. \end{equation} The identification of zero-order terms of Eq. (\ref{C2}) leads to: \begin{equation} 2{L^S}^{\left( 0 \right)} + 2L_l^{S\left( 1 \right)}{d_l} + L_{jk}^{S\left( 2 \right)}{\sigma _{jk}} + 2L_{jk}^{S\left( 2 \right)}{d_j}{d_k} = 0, \end{equation} which takes the following matrix form: \begin{equation} L_{{\theta _\mu }}^{S\left( 0 \right)} = - \frac{1}{2}Tr\left[ {L_{{\theta _\mu }}^{S\left( 2 \right)}\sigma } \right] - L_{{\theta _\mu }}^{S\left( 1 \right)T}{\bf{d}} - {{\bf{d}}^T}L_{{\theta _\mu }}^{S\left( 2 \right)}{\bf{d}}. \end{equation} \section{SLD Quantum Fisher Information Matrix (QFIM) \label{App:D}} To find the formula of the SLD quantum Fisher information matrix, we insert the expression of the symmetric logarithmic derivative (SLD) operator obtained in Appendix \ref{App:C} into Eq. (\ref{16}), which can also be written as \begin{equation} {H_{{\theta _\mu }{\theta _\nu }}}= \frac{1}{2}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho \,\hat L_{{\theta _\nu }}^S} \right].
\end{equation} Using the property of characteristic function given by (\ref{B2}), one gets \begin{align} {H_{{\theta _\mu }{\theta _\nu }}} &= \frac{1}{2}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho \,\hat L_{{\theta _\nu }}^S} \right] \\ \notag& \mathop { =\joinrel= }\limits^{\left( {\ref{33}} \right)} \frac{1}{2}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho \left( {{L^S}^{\left( 0 \right)} + L_l^{S\left( 1 \right)}{{\hat r}_l} + L_{jk}^{S\left( 2 \right)}{{\hat r}_j}{{\hat r}_k}} \right)} \right] \\ \notag& {=\joinrel=} \frac{1}{2}{L^S}^{\left( 0 \right)}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho } \right] + L_l^{S\left( 1 \right)}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho \hspace{0.1cm} {{\hat r}_l}} \right] + L_{jk}^{S\left( 2 \right)}Tr\left[ {{\partial _{{\theta _\mu }}}\hat \rho \hspace{0.1cm}{{\hat r}_j}\hspace{0.1cm}{{\hat r}_k}} \right] \\ \notag& \mathop{=\joinrel=}\limits^{\left({\ref{B2}}\right)}\frac{1}{2}{L^S}^{\left( 0 \right)}{\left. {{\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r = 0}} + L_l^{S\left( 1 \right)}{\left. {Tr\left[ {\hat D \hspace{0.1cm}\hat \rho \hspace{0.1cm} {{\hat r}_l}} \right]{\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r= 0}} + {\left. {L_{jk}^{S\left( 2 \right)}Tr\left[ {\hat D \hspace{0.1cm}\hat \rho \hspace{0.1cm} {{\hat r}_j}\hspace{0.1cm}{{\hat r}_k}} \right]{\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r = 0}} \\ \notag& \mathop{=\joinrel=}\limits^{\left( {\ref{A2}} \right)} \frac{1}{2}{L^S}^{\left( 0 \right)}{\left. {{\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r = 0}} + L_l^{S\left( 1 \right)}{\left. {\left( { - i{\partial _{{{\tilde r}_l}}} - \frac{1}{2}{\Omega _{ll'}}{{\tilde r}_{l'}}} \right){\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r = 0}} + {\left. 
{L_{jk}^{S\left( 2 \right)}\left( { - i{\partial _{{{\tilde r}_j}}} - \frac{1}{2}{\Omega _{jj'}}{{\tilde r}_{j'}}} \right)\left( { - i{\partial _{{{\tilde r}_k}}} - \frac{1}{2}{\Omega _{kk'}}{{\tilde r}_{k'}}} \right){\partial _{{\theta _\mu }}}{\chi _{\hat \rho }}} \right|_{\tilde r = 0}}\\ \notag& \mathop {=\joinrel=} \limits^{\left( {\ref{A4}} \right)} \frac{1}{2}{L^S}^{\left( 0 \right)}{\left. {\left( {i\,{{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{4}{\partial _{{\theta _\mu }}}{\sigma _{pm}}{{\tilde r}_p}{{\tilde r}_m}} \right){\chi _{\hat \rho }}} \right|_{\tilde r = 0}} + L_l^{S\left( 1 \right)}{\left. {\left( { - i{\partial _{{{\tilde r}_l}}} - \frac{1}{2}{\Omega _{ll'}}{{\tilde r}_{l'}}} \right)\left( {i\,{{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{4}{\partial _{{\theta _\mu }}}{\sigma _{pm}}{{\tilde r}_p}{{\tilde r}_m}} \right){\chi _{\hat \rho }}} \right|_{\tilde r = 0}} +\\ \notag& \hspace{4cm}{\left. {L_{jk}^{S\left( 2 \right)}\left( { - i{\partial _{{{\tilde r}_j}}} - \frac{1}{2}{\Omega _{jj'}}{{\tilde r}_{j'}}} \right)\left( { - i{\partial _{{{\tilde r}_k}}} - \frac{1}{2}{\Omega _{kk'}}{{\tilde r}_{k'}}} \right)\left( {i\,{{\tilde r}_v}{\partial _{{\theta _\mu }}}{d_v} - \frac{1}{4}{\partial _{{\theta _\mu }}}{\sigma _{pm}}{{\tilde r}_p}{{\tilde r}_m}} \right){\chi _{\hat \rho }}} \right|_{\tilde r = 0}}.
\end{align} Evaluating the last equation at $\mathbf{\tilde{r}}=0$, one gets \begin{equation} {H_{{\theta _\mu }{\theta _\nu }}}= L_l^{S\left( 1 \right)}{\partial _{{\theta _\mu }}}{d_l} + \frac{1}{2}L_{jk}^{S\left( 2 \right)}{\partial _{{\theta _\mu }}}{\sigma _{jk}} + 2L_{jk}^{S\left( 2 \right)}{\partial _{{\theta _\mu }}}{d_j}{d_k}, \end{equation} which in matrix representation reads \begin{equation} {H_{{\theta _\mu }{\theta _\nu }}} = {\partial _{{\theta _\mu }}}{\mathbf{d}^T}L_{{\theta _\nu }}^{S\left( 1 \right)} + \frac{1}{2}Tr\left[ {{\partial _{{\theta _\mu }}}\sigma L_{{\theta _\nu }}^{S\left( 2 \right)}} \right] + 2{\partial _{{\theta _\mu }}}{\mathbf{d}^T}L_{{\theta _\nu }}^{S\left( 2 \right)}\mathbf{d}. \end{equation} Using Eq. (\ref{40}) and replacing $L_{{\theta _\nu }}^{S\left( 2 \right)}$ by its expression given by (\ref{C6}), one obtains the following SLD quantum Fisher information matrix \begin{equation} {H_{{\theta _\mu }{\theta _\nu }}} = \frac{1}{2}\mathtt{vec}{\left[ {{\partial _{{\theta _\mu }}}\sigma } \right]^\dag }{{\mathcal M}^{+}}\mathtt{vec}\left[ {{\partial _{{\theta _\nu }}}\sigma } \right] + 2{\partial _{{\theta _\mu }}}{{\bf{d}}^T} \hspace{0.1cm} \sigma ^{-1} \hspace{0.1cm}{\partial _{{\theta _\nu }}}{\bf{d}}, \end{equation} where $\mathcal{M} = \left( {{\sigma ^\dag} \otimes \sigma + \Omega \otimes \Omega } \right)$. \end{widetext} \end{document}
\begin{document} \title{Preconditioned Iterative Methods for Diffusion Problems\\ with High-Contrast Inclusions} \author[1]{Yuliya Gorb\thanks{[email protected], corresponding author}} \author[2]{{Vasiliy Kramarenko\thanks{[email protected]}}} \author[1]{Yuri Kuznetsov\thanks{[email protected]}} \affil[1]{Department of Mathematics, University of Houston, Houston, TX 77204} \affil[2]{ Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences, Moscow, Russian Federation} \date{} \maketitle \begin{abstract} \noindent This paper concerns robust numerical treatment of an elliptic PDE with high contrast coefficients, for which classical finite-element discretizations yield ill-conditioned linear systems. It introduces a procedure by which the discrete system obtained from a linear finite element discretization of the given continuum problem is converted into an equivalent linear system of the saddle point type. Then three preconditioned iterative procedures -- preconditioned Uzawa, preconditioned Lanczos, and PCG for the square of the matrix -- are discussed for a special type of application, namely, highly conducting particles distributed in the domain. Robust preconditioners for solving the derived saddle point problem are proposed and investigated. Robustness with respect to the contrast parameter and the discretization scale is also justified. Numerical examples support the theoretical results and demonstrate that the number of iterations of the proposed iterative schemes is independent of the contrast in the parameters of the problem and of the mesh size.
\end{abstract} \noindent{\bf Keywords}: high contrast, saddle point problem, robust preconditioning, Schur complement, Uzawa method, Lanczos method \section{Introduction} In this paper, we consider iterative solutions of the linear system arising from the discretization of a diffusion problem \begin{equation} \label{E:problem-intro} -\nabla \cdot \left[ \sigma(x) \nabla u \right] = f, \quad x\in \Omega \end{equation} with appropriate boundary conditions on $\Gamma = \partial \Omega$. Below, in our theoretical consideration and numerical tests, we will assume the homogeneous Dirichlet boundary conditions on $\Gamma$. The main focus of this work is on the case when the coefficient function $ \sigma(x) \in L^{\infty} (\Omega)$ varies largely within the domain $\Omega$, that is, \[ \kappa = \frac{\sup_{x\in \Omega} \sigma(x)}{\inf_{x\in \Omega} \sigma(x)} \gg 1. \] We assume that $\Omega$ is a bounded domain $\Omega \subset \mathbb{R}^2$, that contains $m\geq 1$ disjoint polygonal subdomains $\mathcal{D}^s$, $s \in \{1,\ldots,m\}$, see Figure \ref{f:kuz_fig_ex0}, in which $\sigma$ is ``large'', e.g. of order $O(\kappa)$, but remains of $O(1)$ in the domain outside of $\mathcal{D} := \cup_{s=1}^m \mathcal{D}^s$. \begin{figure} \caption{An example of $\mathcal{D}$} \label{f:kuz_fig_ex0} \end{figure} A P1-FEM discretization of this problem results in a linear system \begin{equation} \label{E:linsys} \bold{A}_{\sigma} \,\overline{u} = \overline{f}, \end{equation} with a large, sparse, symmetric and positive definite (SPD) matrix $\bold{A}_{\sigma}$. A major issue in numerical treatments of \eqref{E:problem-intro} with the coefficient $\sigma$ discussed above, is that the high contrast leads to an ill-conditioned matrix $\bold{A}_{\sigma}$. Indeed, if $h$ is the discretization scale, then the condition number of the resulting stiffness matrix $\bold{A}_{\sigma}$ grows proportionally to $h^{-2}$ with the coefficient of proportionality linearly depending on $\kappa$.
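This growth is easy to observe directly. The following is a minimal numerical sketch in a hypothetical 1D analog of \eqref{E:problem-intro} (the paper itself treats the 2D case): the condition number of the P1 stiffness matrix grows with the contrast $\kappa$ of a single highly conducting inclusion.

```python
import numpy as np

def stiffness_1d(sigma):
    """Assemble the P1 stiffness matrix for -(sigma u')' = f on [0,1]
    with homogeneous Dirichlet BCs; sigma is constant per element."""
    ne = len(sigma)              # number of elements
    h = 1.0 / ne
    A = np.zeros((ne + 1, ne + 1))
    for e, s in enumerate(sigma):
        A[e:e + 2, e:e + 2] += (s / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A[1:-1, 1:-1]         # eliminate the boundary nodes

ne = 64
for kappa in (1e0, 1e3, 1e6):
    sigma = np.ones(ne)
    sigma[ne // 4: ne // 2] = kappa   # one highly conducting "inclusion"
    A = stiffness_1d(sigma)
    print(f"kappa = {kappa:.0e}: cond(A) = {np.linalg.cond(A):.2e}")
```

For fixed $h$ the printed condition numbers grow roughly linearly in $\kappa$, in line with the estimate above.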
Because of that, high contrast problems have recently been a subject of active research, see e.g. \cite{ah02,agks08,gk18,bgw14}. Our main goal here is robust numerical treatment of the described problem. For that, we introduce an additional variable that allows us to replace \eqref{E:linsys} with an equivalent formulation in terms of a linear system \begin{equation} \label{E:LS} \boldsymbol{\mathcal{A}} \, \overline{x}=\overline{F}, \quad \mbox{with} \quad \overline{F}= \begin{bmatrix} \overline{ \mathrm{f}} \\ \overline{0} \end{bmatrix} , \end{equation} and a {\it saddle point matrix} $\boldsymbol{\mathcal{A}}$ written in the block form: \begin{equation} \label{E:Ablock} \boldsymbol{\mathcal{A}}= \begin{bmatrix} \bold{A} & \bold{B}^T \\ \bold{B} & -\bold{C} \end{bmatrix}, \end{equation} where $\bold{A} \in \mathbb{R}^{N \times N}$ is SPD, $\bold{B} \in \mathbb{R}^{n \times N} $ is rank deficient, and $\bold{C} \in \mathbb{R}^{n \times n} $ is an SPD matrix. Below, we discuss three iterative procedures -- preconditioned Uzawa (PU) method for the system with an SPD Schur complement matrix; preconditioned Lanczos (PL) method for solving \eqref{E:LS}; and preconditioned conjugate gradient (PCG) method for an equivalent system with an SPD matrix. Then we propose a robust block-diagonal preconditioner \[ \boldsymbol{\mathcal{H}}= \begin{bmatrix} \mathcal{H}_{\mathrm{A}} & 0\\ 0 & \mathcal{H}_{\mathrm{S}} \end{bmatrix}, \] for solving \eqref{E:LS}-\eqref{E:Ablock} with these three iterative methods. The main feature of the proposed preconditioners is that the convergence rates of the discussed iterative schemes are independent of the contrast parameter $\kappa \gg 1$ and the discretization size $h>0$.
A rigorous justification of the latter statement is based on the evaluation of the eigenvalues of the matrix $\boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{A}}$, which are proven to be in the union of two intervals $[\mu_-^1,\mu_-^2]\cup[\mu_+^1,\mu_+^2]$, where $\mu_-^1< \mu_-^2 < 0 < \mu_+^1 < \mu_+^2$. Assuming that the mesh on $\Omega$ is regularly-shaped and quasi-uniform, we demonstrate that the constants $\mu_\pm^i$ ($i=1,2$) are independent of the discretization scale $h$ and the number of inclusions. If, in addition, we assume that particles are located at distances comparable to their sizes, then $\mu_\pm^i$ ($i=1,2$) are independent of the diameters of $\mathcal{D}^s$, $s \in \{1,\ldots,m\}$, their locations, and distances between them. The numerical experiments on simple test cases support the theoretical findings and demonstrate that the convergence rates of the proposed iterative schemes are independent of the parameters indicated above. These numerical tests are performed for a two-dimensional problem, whereas the theoretical results remain true for three dimensions as well. The development of efficient preconditioners for saddle point problems has been an active area of research since the early 1990s, see e.g. \cite{bpx90,bpx97,rus90,wath,erv90}. The main feature of the problem considered in this paper is that we deal with a special type of saddle point matrices that, in particular, contains a rank deficient block $\bold{B}$. Also, this paper proposes a very special form for the block $ \mathcal{H}_{\mathrm{S}}$ of $\boldsymbol{\mathcal{H}}$, see \eqref{E:Uzawa-prec} in Section \ref{S:methods}, utilized in three methods that yields the theoretical results mentioned above. Moreover, one of the iterative procedures that we employ in this paper is the {\it Lanczos method} \cite{lanc50,paige}, which, as will be evident from our numerical experiments below, demonstrates significant advantages over the other methods with respect to the arithmetic cost.
Finally, we point out that robust numerical treatment of the described problem is crucial in developing the multiscale strategies for models of composite materials with highly conducting particles. The latter find their application in particulate flows, subsurface flows in natural porous formations, electrical conduction in composite materials, and medical and geophysical imaging. The paper is organized as follows. In Section \ref{S:form}, the mathematical problem formulation is presented including the derivation of the saddle point problem of the type \eqref{E:LS}-\eqref{E:Ablock}. Section \ref{S:methods} discusses three iterative methods (preconditioned Uzawa, preconditioned Lanczos and PCG, mentioned above) for solving system \eqref{E:LS}-\eqref{E:Ablock} and proposes efficient preconditioners for all of them. The main theoretical results, which are the estimates for the eigenvalues of the matrix $\boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{A}}$, are stated and proven in Section \ref{S:eig_est}. Numerical experiments based on simple test scenarios are presented in Section \ref{S:numerics}. Conclusions are discussed in Section \ref{S:concl}. \noindent {\bf Acknowledgements.} Y. Gorb has been supported by the NSF grant DMS-$1350248$. \section{Problem Formulation} \label{S:form} \subsection{Equivalent variational formulations} Consider an open, bounded domain $\Omega \subset \mathbb{R}^2$ with a piece-wise smooth boundary $\Gamma:=\partial \Omega$, that contains $m\geq 1$ subdomains $\mathcal{D}^s$ with piece-wise smooth boundaries $\Gamma_s:=\partial \mathcal{D}^s$, $s\in\{1,\ldots,m\}$, see Figure \ref{f:kuz_fig_ex0}. Assume that $\Gamma_s \cap \Gamma_t = \emptyset$ when $s\neq t$, and $\Gamma \cap \Gamma_s= \emptyset $, $s\in\{1,\ldots,m\}$. For simplicity, we assume that $\Omega$ and $\mathcal{D}^s$ are polygons. The union of $\mathcal{D}^s$ is denoted by $\mathcal{D}$.
In the domain $\Omega$, we consider the following elliptic problem \begin{equation} \label{E:pde-form} \left\{ \begin{array}{r l l} -\nabla \cdot \left[ \sigma(x) \nabla u \right] & = f, & x\in \Omega \\[2pt] u & = 0, & x\in \Gamma \end{array} \right. \end{equation} with the source term $f \in L^2(\Omega)$, and the coefficient $\sigma$ that varies largely inside the domain $\Omega$. In this paper, we are focused on the case when $\sigma$ is a piecewise constant function given by \begin{equation} \label{E:sigma} \sigma(x)= \begin{cases} 1, & x\in \Omega\setminus \overline{\mathcal{D}}\\ \displaystyle 1+\frac{1}{\varepsilon_s}, & x\in \mathcal{D}^s, ~s\in\{1,\ldots,m\} \end{cases} \end{equation} with $\displaystyle 0< \varepsilon_s\equiv \mbox{const} \leqslant 1$, $s\in\{1,\ldots,m\}$. The standard variational formulation of \eqref{E:pde-form} is \begin{equation} \label{E:var-form-orig} \mbox{Find}~ u \in V:=H^1_0(\Omega) ~ \mbox{such that}~ \int_{\Omega} \nabla u \cdot \nabla v ~dx +\sum_{s=1}^{m}\frac{1}{\varepsilon_s} \int_{\mathcal{D}^s} \nabla u \cdot \nabla v ~dx = \int_{\Omega} f v ~dx, ~ \forall v\in V. \end{equation} We introduce new variables $p_s \in H^{1}(\mathcal{D}^s)$ via \begin{equation} \label{E:p} p_s =\frac{1}{\varepsilon_s} u_s +c_s \quad \mbox{in} \quad\mathcal{D}^s, \quad s\in \{1,\ldots,m\}, \quad \mbox{where} ~u_s = u|_{ \mathcal{D}^s}, \end{equation} and $c_s$ are arbitrary constants, $s\in\{1,\ldots,m\}$. 
With that, we replace formulation \eqref{E:var-form-orig} with the new one, namely, \[ \mbox{Find}~ u \in V~ \mbox{and} ~ p_s \in \left.V_s:=H^1(\mathcal{D}^s)=V\right|_{\mathcal{D}^s}, ~s\in \{1,\ldots,m\}, ~ \mbox{such that} \] \begin{equation} \label{E:var-form-new1} \int_{\Omega} \nabla u \cdot \nabla v ~dx + \sum_{t=1}^m \int_{\mathcal{D}^t} \nabla p_t \cdot \nabla v ~dx = \int_{\Omega} f v ~dx, ~ \forall v\in V, \end{equation} \begin{equation} \label{E:var-form-new2} \int_{\mathcal{D}^s} \nabla u \cdot \nabla w ~dx - \varepsilon_s \int_{\mathcal{D}^s} \nabla p_s \cdot \nabla w ~dx=0 , ~ \forall w \in V_s,~ s\in\{1,\ldots,m\}. \end{equation} The two formulations \eqref{E:var-form-orig} and \eqref{E:var-form-new1}-\eqref{E:var-form-new2} are equivalent in the sense that their solutions $u\in H^1(\Omega)$ coincide, and any solution $p_s\in V_s$ of \eqref{E:var-form-new1}-\eqref{E:var-form-new2} is equal to the function $\frac{1}{\varepsilon_s} u_s +c_s$ with an appropriate constant $c_s$, $s\in\{1,\ldots,m\}$. For the uniqueness of $p_s$, we can either demand \begin{equation} \label{E:uniq-p} \int_{\mathcal{D}^s} p_s ~dx = 0, \quad s\in\{1,\ldots,m\} , \end{equation} or modify the formulation \eqref{E:var-form-new2} as follows \begin{equation} \label{E:var-form-ours} \begin{array}{l l} \displaystyle \mbox{Find}~ u \in V~ \mbox{and} ~ p_s \in V_s~ \mbox{such that} \int_{\Omega} \nabla u \cdot \nabla v ~dx + \sum_{t=1}^m \int_{\mathcal{D}^t} \nabla p_t \cdot \nabla v ~dx = \int_{\Omega} f v ~dx, ~ \forall v\in V, \\[7pt] \displaystyle \int_{\mathcal{D}^s} \nabla u \cdot \nabla w ~dx - \varepsilon_s \int_{\mathcal{D}^s} \nabla p_s \cdot \nabla w ~dx - \frac{1}{|\mathcal{D}^s|} \left[\int_{\mathcal{D}^s} p_s ~dx \right] \left[\int_{\mathcal{D}^s} w ~dx \right] =0 , ~ \forall w \in V_s,~ s\in\{1,\ldots,m\}, \end{array} \end{equation} where $|\mathcal{D}^s|$ is the area of the particle $\mathcal{D}^s$.
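The forward direction of this equivalence is a one-line computation (a sketch): substituting \eqref{E:p} into \eqref{E:var-form-new1}, the constant $c_t$ is annihilated by the gradient,

```latex
\int_{\mathcal{D}^t} \nabla p_t \cdot \nabla v \,dx
  = \int_{\mathcal{D}^t} \nabla\!\Big(\frac{1}{\varepsilon_t}\,u_t + c_t\Big)\cdot \nabla v \,dx
  = \frac{1}{\varepsilon_t}\int_{\mathcal{D}^t} \nabla u \cdot \nabla v \,dx,
```

so \eqref{E:var-form-new1} reduces to \eqref{E:var-form-orig}. Conversely, \eqref{E:var-form-new2} states that $\nabla(u_s - \varepsilon_s p_s)$ vanishes weakly on $\mathcal{D}^s$, i.e., $p_s$ differs from $\frac{1}{\varepsilon_s}u_s$ by a constant, which is exactly \eqref{E:p}.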
It is obvious that the solutions $p_s$, $s\in\{1,\ldots,m\}$, of \eqref{E:var-form-ours} satisfy condition \eqref{E:uniq-p}, and the above constants $c_s$ are defined by \[ c_s=-\frac{1}{\varepsilon_s |\mathcal{D}^s|} \int_{\mathcal{D}^s} u \,dx, \quad ~ s\in\{1,\ldots,m\}. \] \subsection{Discretization of \eqref{E:var-form-ours} and Description of the Saddle Point Problem} Let $\Omega_h$ be a triangular mesh on $\Omega$. Assume that $\Omega_h$ is conforming with boundaries $\Gamma$ and $\Gamma_s$, $s\in\{1,\ldots,m\}$, that is, $\Gamma$ and $\Gamma_s$ are unions of triangle edges. We define $\left. \mathcal{D}^s_h = \Omega_h\right|_{\mathcal{D}^s}$, $s\in\{1,\ldots,m\}$, and $ \mathcal{D}_h := \cup_{s=1}^m \mathcal{D}^s_h$. We now choose a FEM space $V_h \subset H_0^1 (\Omega)$ to be the space of linear finite-element functions defined on $\Omega_h$, and $V^s_h:= V_{h}|_{\mathcal{D}_h^s}$, $s\in\{1,\ldots,m\}$. Then, the FEM discretization \cite{bs08} of \eqref{E:var-form-ours} reads as follows: \begin{equation} \label{E:FEM-form} \begin{array}{l l} \mbox{Find}\quad u_h \in V_h \quad \mbox{and} \quad p_h = (p^1_h,\ldots,p^m_h) \quad \mbox{with}\quad p^s_h \in V^s_h \quad \mbox{such that}\\[2pt] \displaystyle \int_{\Omega} \nabla u_h \cdot \nabla v_h ~dx + \int_{\mathcal{D}} \nabla p_h \cdot \nabla v_h ~dx = \int_{\Omega} f \, v_h ~dx, \quad \forall v_h\in V_h,\\[5pt] \displaystyle \int_{\mathcal{D}^s} \nabla u_h \cdot \nabla w^s_h ~dx - \varepsilon_s \int_{\mathcal{D}^s} \nabla p^s_h \cdot \nabla w^s_h ~dx - \frac{1}{|\mathcal{D}^s|} \left[\int_{\mathcal{D}^s} p^s_h ~dx \right] \left[\int_{\mathcal{D}^s} w^s_h ~dx \right] =0 , ~ \forall w^s_h \in V^s_h , \end{array} \end{equation} for $s\in\{1,\ldots,m\}$, which results in the following linear system of equations: \begin{equation} \label{E:lin-sys-full-SPD} \left\{ \begin{array}{r l l} \bold{A} \overline{u} + \bold{B}^T \overline{p} & =\overline{ \mathrm{f}}, \\[2pt] \bold{B} \overline{u} - [\bold{\Sigma}_\varepsilon
\boldsymbol{\mathcal{B}}_{\mathcal{D}} + \bold{Q}] \overline{p} & =\overline{0}, \end{array} \right.\quad \overline{u} \in \mathbb{R}^N, \quad \overline{p} \in \mathbb{R}^n, \end{equation} or equivalently, \begin{equation} \label{E:lin-sys-z} \boldsymbol{\mathcal{A}}_\varepsilon \bold{z}_\varepsilon = \overline{ \mathrm{F}} , \end{equation} with the {\it saddle point} matrix \begin{equation} \label{E:lin-sys-full-matrix-1} \boldsymbol{\mathcal{A}}_\varepsilon = \begin{bmatrix} \bold{A} & \bold{B}^T \\ \bold{B} & - \bold{\Sigma}_\varepsilon \boldsymbol{\mathcal{B}}_{\mathcal{D}} - \bold{Q} \end{bmatrix} \in \mathbb{R}^{(N+n)\times(N+n)}, \notag \end{equation} and vectors \[ \bold{z}_\varepsilon = \begin{bmatrix} \overline{u} \\ \overline{p} \end{bmatrix} \in \mathbb{R}^{N+n}, \quad \overline{ \mathrm{F}}= \begin{bmatrix} \overline{ \mathrm{f}} \\ \overline{0} \end{bmatrix} \in \mathbb{R}^{N+n}. \] To provide a comprehensive description of the linear system \eqref{E:lin-sys-full-SPD} or \eqref{E:lin-sys-z}, we introduce the following notations for the number of degrees of freedom in different parts of $\Omega_h$. Let $N$ be the total number of nodes in $\Omega_h$, and $n$ be the number of nodes in $\overline{\mathcal{D}}_h$ so that \[ n=\sum_{s=1}^m n_s, \] where $n_s$ denotes the number of nodes in $\overline{\mathcal{D}}^s_h$, and, finally, $n_{0}$ is the number of nodes in $\Omega_h \setminus \overline{\mathcal{D}_h}$, so that we have \[ N=n_{0} + n . \] Then in \eqref{E:lin-sys-full-SPD}, the vector $\overline{u} \in \mathbb{R}^{N}$ has entries $u_i=u_h(x_i) $ with $x_i \in \Omega_h$. We count the entries of $\overline{u}$ in such a way that its first $n$ entries correspond to the nodes of $\overline{\mathcal{D}}_h$, and the remaining $n_0$ entries correspond to the nodes of $\Omega_h \setminus \overline{ \mathcal{D}}_h$.
Entries of the first group can be further partitioned into $m$ subgroups such that there are $n_s$ entries in the $s^{\text{th}}$ group that corresponds to $\overline{\mathcal{D}}^s_h$, $s\in\{1,\ldots,m\}$. Similarly, the vector $\overline{p}\in \mathbb{R}^{n}$ has entries $p_i=p_h(x_i) $ where $x_i \in \overline{ \mathcal{D}}_h$. Then we can write \[ \mathbb{R}^{n}\ni\overline{p} = \begin{bmatrix} \overline{p}_1 \\ \vdots\\ \overline{p}_m \end{bmatrix} , \quad \mbox{where }~ \overline{p}_s \in \mathbb{R}^{n_s}, \quad s\in\{1,\ldots,m\}. \] The symmetric positive definite matrix $\bold{A} \in\mathbb{R}^{N\times N}$ of \eqref{E:lin-sys-full-SPD} is the stiffness matrix that arises from the discretization of the Laplace operator with the homogeneous Dirichlet boundary conditions on $\Gamma$, that is, \begin{equation} \label{E:A-def} (\bold{A} \overline{u} , \overline{v} ) = \int_{\Omega_h} \nabla u_h \cdot \nabla v_h ~dx, \quad\mbox{where}\quad \overline{u} , \overline{v} \in \mathbb{R}^N, \quad u_h , v_h \in V_h, \end{equation} where $(\cdot,\cdot)$ is the standard dot-product of vectors. With the above orderings, the matrix $\bold{A} $ of \eqref{E:A-def} can be presented as a $2\times 2$ block-matrix \begin{equation} \label{E:matr-A} \bold{A} = \begin{bmatrix} \mathrm{A}_{\mathcal{D}\mathcal{D}} & \mathrm{A}_{\mathcal{D}0} \\ \mathrm{A}_{0\mathcal{D}} & \mathrm{A}_{00} \end{bmatrix}, \end{equation} where the block $\mathrm{A}_{\mathcal{D}\mathcal{D}} \in \mathbb{R}^{n\times n}$ corresponds to the inclusions $\overline{\mathcal{D}}^s_h$, $s\in \{1,\ldots,m\}$, the block $\mathrm{A}_{00}\in \mathbb{R}^{n_0\times n_0}$ corresponds to the region outside of $\overline{\mathcal{D}}_h$, and the entries of $\mathrm{A}_{\mathcal{D}0} \in \mathbb{R}^{n\times n_0}$ and $\mathrm{A}_{0\mathcal{D}}=\mathrm{A}_{\mathcal{D}0}^T$ are assembled from entries associated with both $\overline{\mathcal{D}}_h$ and $\Omega_h \setminus \overline{\mathcal{D}}_h$.
The matrix $\boldsymbol{\mathcal{B}}_{\mathcal{D}} \in \mathbb{R}^{n\times n}$ in \eqref{E:lin-sys-full-SPD}, which corresponds to the highly conducting inclusions, is the $m\times m$ block-diagonal matrix \begin{equation}\label{E:matrix-B_D} \boldsymbol{\mathcal{B}}_{\mathcal{D}} = \text{diag}~ (\mathrm{B_1}, \ldots, \mathrm{B_m}), \end{equation} whose blocks $\mathrm{B_s}\in \mathbb{R}^{n_s \times n_s}$ are defined by \begin{equation} \label{E:B-def} (\mathrm{B_s} \overline{u} , \overline{v} ) = \int_{\mathcal{D}^s} \nabla u_h \cdot \nabla v_h ~dx, \quad\mbox{where}\quad \overline{u} , \overline{v} \in \mathbb{R}^{n_s}, \quad u_h , v_h \in V^s_h. \end{equation} Note that the matrix $\mathrm{B_s}$ is the stiffness matrix in the discretization of the Laplace operator in the domain $\mathcal{D}^s$ with the Neumann boundary conditions on $\Gamma_s$, $s\in\{1,\ldots,m\}$. Also, remark that each matrix $\mathrm{B_s}$ is {\it positive semidefinite} with \begin{equation} \label{E:ker-Bi} \ker \mathrm{B_s} = \mbox{span} \left\{\overline{e}_s \right\}, \quad \mbox{where} \quad \overline{e}_s=\begin{bmatrix} 1 \\ \vdots\\ 1 \end{bmatrix} \in \mathbb{R}^{n_s}. \end{equation} Consequently, \[ \dim \ker ~ \boldsymbol{\mathcal{B}}_{\mathcal{D}} = m. \] Then, the matrix $\bold{B} \in \mathbb{R}^{n\times N}$ of \eqref{E:lin-sys-full-SPD} is written in the block form as \begin{equation} \label{E:matr-B} \bold{B} = \begin{bmatrix} \boldsymbol{\mathcal{B}}_{\mathcal{D}} & \bold{0}\end{bmatrix}, \end{equation} with zero-matrix $\bold{0} \in \mathbb{R}^{n\times n_0}$ and $\boldsymbol{\mathcal{B}}_{\mathcal{D}} \in \mathbb{R}^{n\times n}$. The vector $\overline{ \mathrm{f}} \in \mathbb{R}^{N}$ of \eqref{E:lin-sys-full-SPD} is defined in a similar way by \[ (\overline{ \mathrm{f}},\overline{v}) = \int_{\Omega_h} f v_h ~dx, \quad\mbox{where}\quad \overline{v} \in \mathbb{R}^N, \quad v_h \in V_h.
\] With all that, the first equation of \eqref{E:FEM-form} results in the first equation of \eqref{E:lin-sys-full-SPD}. Now denote \begin{equation*} \bold{\Sigma}_\varepsilon = \text{diag}~ (\varepsilon_1\mathrm{I_1}, \ldots, \varepsilon_m\mathrm{I_m}), \end{equation*} with $\mathrm{I_s} \in \mathbb{R}^{n_s\times n_s}$ being the identity matrix. Finally, we construct the matrix $\bold{Q}$ in \eqref{E:lin-sys-full-SPD} using \begin{equation} \label{E:Q} \bold{Q} =\text{diag}~ ( \mathrm{Q_1}, \ldots, \mathrm{Q_m}), \end{equation} whose blocks $\mathrm{Q_s} \in \mathbb{R}^{n_s\times n_s}$, $s\in\{1,\ldots,m\}$, are defined by \begin{equation} \label{E:Q-def} (\mathrm{Q_s} \overline{p} , \overline{q} ) = \frac{1}{|\mathcal{D}^s_h|} \left[ \int_{\mathcal{D}^s_h} p_h~dx \right] \left[ \int_{\mathcal{D}^s_h} q_h~dx \right], ~~\mbox{where}~ \overline{p} , \overline{q} \in \mathbb{R}^{n_s}, ~~ p_h, ~ q_h \in V^s_h. \end{equation} As will be evident from the considerations below, another way of writing the matrix $\mathrm{Q_s}$ is via \begin{equation} \label{E:evector-1} \mathrm{Q_s} = \mathrm{M_s} \overline{w}^1_s \otimes \mathrm{M_s} \overline{w}^1_s , \quad \mbox{where} \quad d_s=\left|\mathcal{D}^s_h \right|^{1/2}, \quad \mbox{and} \quad \overline{w}^1_s := \frac{1}{d_s} \overline{e}_s \in \mathbb{R}^{n_s}, \end{equation} and $\mathrm{M_s} \in \mathbb{R}^{n_s \times n_s}$ is the {\it mass matrix} associated with the inclusion $\mathcal{D}^s$ and given by \begin{equation} \label{E:mass1} (\mathrm{M_s} \overline{p}_s , \overline{q}_s ) = \int_{\mathcal{D}^s_h} p^s_h \, q^s_h ~dx, \quad \mbox{for all } \ \overline{p}_s,\overline{q}_s \in \mathbb{R}^{n_s}, \quad p^s_h,~ q^s_h \in V^s_h, \ s\in \{1,\ldots,m\}. \end{equation} In \eqref{E:evector-1}, $\overline{p} \otimes \overline{q} = \overline{p} \, \overline{q}^T$ denotes the {\it outer product} of vectors $ \overline{p} $ and $\overline{q}$.
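The kernel property \eqref{E:ker-Bi} and the rank-one structure of $\mathrm{Q_s}$ are easy to verify numerically. Below is a minimal sketch for a single hypothetical 1D inclusion on a uniform mesh (the paper's setting is 2D, but the algebra is dimension-independent); it checks that the Neumann stiffness matrix annihilates constants and that the outer product $\mathrm{M_s}\overline{w}^1_s \otimes \mathrm{M_s}\overline{w}^1_s$ reproduces the quadrature definition \eqref{E:Q-def}.

```python
import numpy as np

n, h = 7, 0.1          # nodes and mesh size of one 1D "inclusion"
B = np.zeros((n, n))   # P1 Neumann stiffness matrix (no BCs imposed)
M = np.zeros((n, n))   # P1 mass matrix
for el in range(n - 1):
    B[el:el + 2, el:el + 2] += (1 / h) * np.array([[1, -1], [-1, 1]], float)
    M[el:el + 2, el:el + 2] += (h / 6) * np.array([[2, 1], [1, 2]], float)

e = np.ones(n)
print(np.linalg.norm(B @ e), np.linalg.matrix_rank(B))  # ker B_s = span{e}

area = e @ M @ e                  # |D_h^s| = \int 1 dx = (n-1) h
w1 = e / np.sqrt(area)            # M-normalized constant vector: (M w1, w1) = 1
Q = np.outer(M @ w1, M @ w1)      # rank-one matrix Q_s

# quadrature definition (Q p, q) = |D|^{-1} (\int p_h)(\int q_h),
# using \int p_h dx = e^T M p for P1 elements
rng = np.random.default_rng(0)
p, q = rng.standard_normal(n), rng.standard_normal(n)
print(p @ Q @ q - (e @ M @ p) * (e @ M @ q) / area)   # ~ machine precision
```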
The matrix $\mathrm{Q_s}$ is a symmetric and positive semidefinite rank-one matrix generated by the $\mathrm{M_s}$-normal vector $\overline{w}^1_s $, that is, $( \mathrm{M_s} \overline{w}^1_s , \overline{w}^1_s) = 1$, $s\in \{1,\ldots,m\}$. With \eqref{E:B-def}--\eqref{E:mass1}, the second equation of \eqref{E:var-form-ours} yields the second equation in the system \eqref{E:lin-sys-full-SPD}. Note that with \eqref{E:matr-A}, the symmetric and indefinite matrix $ \boldsymbol{\mathcal{A}}_\varepsilon$ defined in \eqref{E:lin-sys-z} is then \begin{equation} \label{E:lin-sys-full-matrix-2} \boldsymbol{\mathcal{A}}_\varepsilon = \begin{bmatrix} \mathrm{A}_{\mathcal{D}\mathcal{D}} & \mathrm{A}_{\mathcal{D}0} & \boldsymbol{\mathcal{B}}_{\mathcal{D}} \\ \mathrm{A}_{0\mathcal{D}} & \mathrm{A}_{00} & \bold{0}^T \\ \boldsymbol{\mathcal{B}}_{\mathcal{D}} & \bold{0} & - \bold{\Sigma}_\varepsilon \boldsymbol{\mathcal{B}}_{\mathcal{D}} - \bold{Q} \end{bmatrix}. \end{equation} This concludes the derivation of the saddle point formulation \eqref{E:lin-sys-full-SPD}. Clearly, there exists a unique solution $\overline{u} \in \mathbb{R}^N$, $\overline{p} \in \mathbb{R}^n$, or equivalently, $\bold{z}_\varepsilon \in \mathbb{R}^{N+n}$. System \eqref{E:lin-sys-full-SPD} was proposed in \cite{gkk18,kuz00,kuz09} for the case when $\bold{Q}=\bold{0}$, where it was also demonstrated that \eqref{E:lin-sys-full-SPD} can be derived in a purely algebraic way. \section{Preconditioned Iterative Methods} \label{S:methods} In this paper, we consider and investigate three iterative methods for solving system \eqref{E:lin-sys-z}. 
The first one is the preconditioned conjugate gradient method or preconditioned Uzawa (PU) for the Schur complement system \begin{equation} \label{E:Uzawa-syst} \bold{S}_\varepsilon \, \overline{p} =\overline{ \mathrm{g}} := \bold{B}\bold{A}^{-1} \, \overline{ \mathrm{f}}, \end{equation} where \begin{equation} \label{E:schur} \bold{S}_\varepsilon : = \bold{\Sigma}_\varepsilon \boldsymbol{\mathcal{B}}_{\mathcal{D}} + \bold{Q} + \bold{B}\bold{A}^{-1}\bold{B}^T, \end{equation} with the preconditioner \begin{equation} \label{E:Uzawa-prec} \mathcal{H}_{\mathrm{S}} = [\boldsymbol{\mathcal{B}}_{\mathcal{D}} + \bold{Q}]^{-1} \in \mathbb{R}^{n \times n} . \end{equation} The second method is the preconditioned Lanczos (PL) method with the preconditioner \begin{equation} \label{E:block-prec} \boldsymbol{\mathcal{H}} = \begin{bmatrix} \mathcal{H}_{\mathrm{A}} & 0\\ 0 & \mathcal{H}_{\mathrm{S}} \end{bmatrix}, \end{equation} where $\mathcal{H}_{\mathrm{A}} \in \mathbb{R}^{N \times N}$ is a given symmetric positive definite matrix introduced below, and $\mathcal{H}_{\mathrm{S}}$ is the same as in \eqref{E:Uzawa-prec}. The third method is the preconditioned conjugate gradient (PCG) method with the preconditioner $\boldsymbol{\mathcal{H}}$ defined in (\ref{E:block-prec}) for a modified system obtained from \eqref{E:lin-sys-z} as follows: \begin{equation}\label{E:mod-syst_PCG} \bold{K}_\varepsilon {\bold{z}}_\varepsilon = \boldsymbol{\mathcal{G}}_\varepsilon , \end{equation} where \begin{equation}\label{E:mod-syst_PCG_k_e} \bold{K}_\varepsilon = \boldsymbol{\mathcal{A}}_\varepsilon \boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_\varepsilon, \quad \boldsymbol{\mathcal{G}}_\varepsilon= \boldsymbol{\mathcal{A}}_\varepsilon \boldsymbol{\mathcal{H}} \, \overline{ \mathrm{F}}. \end{equation} \subsection{ Preconditioned Uzawa Method} \label{S:Uzawa} The preconditioned Uzawa algorithm combined with the PCG method is well known, see e.g. \cite{bpx90,rg84}.
It is defined by \begin{equation}\label{E:PCG_Uzawa-1} \overline{p}^k = \overline{p}^{k-1} - \beta_k \overline{\xi}_k, \quad k=1,2,\ldots, \end{equation} where \begin{equation}\label{E:PCG_Uzawa-0} \overline{\xi}_k = \begin{cases} \mathcal{H}_{\mathrm{S}} (\bold{S}_\varepsilon \overline{p}^{0} - \overline{ \mathrm{g}} ), &k=1 \\ \mathcal{H}_{\mathrm{S}} (\bold{S}_\varepsilon \overline{p}^{k-1} - \overline{ \mathrm{g}} ) - \alpha_k \overline{\xi}_{k-1}, &k \geq 2, \end{cases} \end{equation} and \begin{equation}\label{E:PCG_Uzawa-2} \beta_k = \frac{ \left( \bold{S}_\varepsilon \overline{p}^{k-1} - \overline{ \mathrm{g}} , \overline{\xi}_{k} \right) }{(\bold{S}_\varepsilon \overline{\xi}_{k}, \overline{\xi}_{k})}, \quad \alpha_k = \frac{ \left( \mathcal{H}_{\mathrm{S}} (\bold{S}_\varepsilon \overline{p}^{k-1} - \overline{ \mathrm{g}} ) , \bold{S}_\varepsilon \overline{\xi}_{k-1}\right) }{(\bold{S}_\varepsilon \overline{\xi}_{k-1}, \overline{\xi}_{k-1})}, \quad k=1,2,\ldots \end{equation} Here $\overline{p}^0$ is an initial guess, and $\mathcal{H}_{\mathrm{S}}$ is given by (\ref{E:Uzawa-prec}). Denote by $\overline{p}^*$ the solution of \eqref{E:Uzawa-syst}; then the convergence estimate for \eqref{E:PCG_Uzawa-1}-\eqref{E:PCG_Uzawa-2} is given by (see \cite{axel,km74}): \begin{equation*} \| \overline{p}^k - \overline{p}^* \|_{\bold{S}_\varepsilon} \leqslant \frac{1}{C_k \left(\frac{b + a}{b - a}\right)} \| \overline{p}^0 - \overline{p}^* \|_{\bold{S}_\varepsilon}, \quad k= 0,1,2,\ldots, \end{equation*} where $\| \cdot \|_{\bold{S}_\varepsilon}$ is the elliptic norm generated by the matrix $\bold{S}_\varepsilon$, $C_k\left(t\right)$ is the Chebyshev polynomial of degree $k$, and $b$ and $a$ are the estimates from above and from below for the eigenvalues of the matrix $\mathcal{H}_{\mathrm{S}} \bold{S}_\varepsilon$, respectively.
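The recursion \eqref{E:PCG_Uzawa-1}-\eqref{E:PCG_Uzawa-2} is preconditioned CG applied to the Schur complement system. As an illustration, here is a generic PCG sketch in the standard recursive form on a small random SPD system, with a Jacobi preconditioner standing in for $\mathcal{H}_{\mathrm{S}}$ (a toy stand-in, not the paper's actual operators):

```python
import numpy as np

def pcg(S, g, H, tol=1e-10, maxit=200):
    """Standard preconditioned CG for S p = g with SPD preconditioner H."""
    p = np.zeros_like(g)
    r = g - S @ p              # residual
    z = H @ r                  # preconditioned residual
    d = z.copy()               # search direction
    it = 0
    while np.linalg.norm(r) > tol and it < maxit:
        Sd = S @ d
        beta = (r @ z) / (d @ Sd)
        p += beta * d
        r_new = r - beta * Sd
        z_new = H @ r_new
        alpha = (r_new @ z_new) / (r @ z)
        d = z_new + alpha * d
        r, z = r_new, z_new
        it += 1
    return p, it

# tiny SPD test problem; a good H clusters the spectrum of H S
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 20))
S = X @ X.T + 20 * np.eye(20)
H = np.diag(1.0 / np.diag(S))   # Jacobi preconditioner
g = rng.standard_normal(20)
p, it = pcg(S, g, H)
print(it, np.linalg.norm(S @ p - g))
```

The Chebyshev-type bound above implies that the iteration count depends only on the ratio $b/a$ of the spectral bounds of $\mathcal{H}_{\mathrm{S}}\bold{S}_\varepsilon$, which is exactly what the proposed preconditioner keeps independent of $\kappa$ and $h$.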
To investigate the eigenvalue problem \begin{equation*} \mathcal{H}_{\mathrm{S}} \bold{S}_\varepsilon \overline{\psi} = \mu \, \overline{\psi}, \end{equation*} we observe that \begin{equation}\label{E:eigenval_Q1} \mathcal{H}_{\mathrm{S}} \boldsymbol{\mathcal{B}}_{\mathcal{D}} = \bold{I} - \tilde{\bold{Q}}, \end{equation} \begin{equation}\label{E:eigenval_Q2} \mathcal{H}_{\mathrm{S}} \bold{Q} = \tilde{\bold{Q}}, \end{equation} where $ \bold{\tilde{Q}}$ is an $m \times m$ block diagonal matrix: \begin{equation*} \tilde{ \bold{Q}} =\text{diag}~ ( \tilde{\mathrm{Q}}_1, \ldots, \tilde{\mathrm{Q}}_m), \end{equation*} with $\mathrm{M_s}$-orthogonal projectors \begin{equation*} \tilde{\mathrm{Q}}_s = \overline{w}^1_s \otimes \left(\mathrm{M_s} \overline{w}^1_s \right) \in \mathbb{R}^{n_s \times n_s}, \quad s\in \{1,\ldots,m\}, \end{equation*} where $\overline{w}^1_s$ and $\mathrm{M}_s$ were introduced in \eqref{E:evector-1} and \eqref{E:mass1}, respectively. \begin{remark} \label{R:rk1} It follows from (\ref{E:eigenval_Q1}), (\ref{E:eigenval_Q2}) that the implementation of the matrix-vector products $\mathcal{H}_{\mathrm{S}} \boldsymbol{\mathcal{B}}_{\mathcal{D}} \, \overline{y}$ and $\mathcal{H}_{\mathrm{S}} \bold{Q}\, \overline{y}$ requires only $2n$ arithmetical operations for any vector $\overline{y} \in \mathbb{R}^n$, that is, we do not need to solve a system with the matrix $\boldsymbol{\mathcal{B}}_{\mathcal{D}} + \bold{Q}$. \end{remark} Simple algebraic analysis, see e.g.
\cite{gkk18,kuz09}, shows that \begin{equation}\label{E:a_0_estim} a \geqslant \min \{ a_{0}+ \varepsilon_{\min} ;1\} \end{equation} and \begin{equation}\label{E:b_0_estim} b \leqslant \max \{ b_{0}+\varepsilon_{\max} ;1\} \end{equation} where $a_{0}>0$ and $b_{0}$ are estimates from below and above, respectively, for the eigenvalues of the matrix \begin{equation*} \mathcal{H}_{\mathrm{S}} \boldsymbol{\mathrm{S}}_0 \equiv \mathcal{H}_{\mathrm{S}} \bold{B} \bold{A}^{-1} \bold{B}^{T} \end{equation*} that is, $\boldsymbol{\mathrm{S}}_0 =\boldsymbol{\mathrm{S}}_\varepsilon$ when $\varepsilon_1= \ldots = \varepsilon_m=0$, and $\varepsilon_{\min} = \min \limits_{1 \leqslant t \leqslant m} \varepsilon_t$, $\varepsilon_{\max} = \max \limits_{1 \leqslant t \leqslant m} \varepsilon_t$. The values of $a_{0}$ and $b_{0}$ will be derived in Section \ref{S:eig_est}. \subsection{Preconditioned Lanczos Method} \label{S:PL} The preconditioned Lanczos method for systems with symmetric indefinite matrices was proposed in the late 1960s, see \cite{km74} and references therein. In this paper, we consider the PL method for the saddle-point system (\ref{E:lin-sys-z}) preconditioned by a symmetric positive definite matrix $\boldsymbol{\mathcal{H}}$ of \eqref{E:block-prec} with some given symmetric positive definite matrix $\mathcal{H}_{\mathrm{A}}$ introduced below, and $\mathcal{H}_{\mathrm{S}}$ defined by \eqref{E:Uzawa-prec}. The PL method is as follows, see e.g.
\cite{km74}: \begin{equation} \overline{z}^k = \overline{z}^{k-1} - \beta_k \overline{\xi}_k, \quad k= 1,2,\ldots, \end{equation} where \begin{equation} \label{E:Prec_lanc-0} \overline{\xi}_k = \begin{cases} \boldsymbol{\mathcal{H}}(\boldsymbol{\mathcal{A}}_{\varepsilon}\overline{z}^0 - \overline{ \mathrm{F}} ), &k=1 \\ \boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{A}}_{\varepsilon}\overline{\xi}_1 - \alpha_2 \overline{\xi}_1, & k=2\\ \boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{A}}_{\varepsilon}\overline{\xi}_{k-1} - \alpha_k \overline{\xi}_{k-1} - \gamma_k \overline{\xi}_{k-2}, & k\geq 3, \end{cases} \end{equation} and \begin{equation} \alpha_k = \frac{(\boldsymbol{\mathcal{A}}_{\varepsilon} \boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k-1}, \boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k-1})}{(\boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k-1}, \boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k-1})}, \qquad \gamma_k = \frac{(\boldsymbol{\mathcal{A}}_{\varepsilon} \boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k-1}, \boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k-2})}{(\boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k-2}, \boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k-2})}, \quad k= 2,3,\ldots, \end{equation} where $\gamma_k$ is needed only for $k \geqslant 3$, and \begin{equation} \beta_k = \frac{( \boldsymbol{\mathcal{A}}_{\varepsilon}\overline{z}^{k-1} - \overline{ \mathrm{F}} , \boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k})}{(\boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k}, \boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon} \overline{\xi}_{k})}, \quad k= 1,2,\ldots \label{E:Prec_lanc-3} \end{equation} Let $\overline{z}^0$ be an initial guess and let $\overline{z}^*$ be the solution of \eqref{E:lin-sys-z}; then the following convergence estimate holds
\begin{equation}\label{E:eigenval_est_PL} \| \overline{z}^k - \overline{z}^* \|_{\bold{K}_\varepsilon} \leqslant \frac{1}{C_{k/2} \left(\frac{b^2 + a^2}{b^2 - a^2}\right)} \| \overline{z}^0 - \overline{z}^* \|_{\bold{K}_\varepsilon}, \quad k=2,4,\ldots \end{equation} see \cite{km74}, where $\bold{K}_\varepsilon$ is given by \eqref{E:mod-syst_PCG_k_e} and $\| \cdot \|_{\bold{K}_\varepsilon}$ is the elliptic norm generated by the matrix $\bold{K}_\varepsilon =\bold{K}_\varepsilon^T > 0 $. Here $C_{k/2}$ is the Chebyshev polynomial of degree $k/2$, and $b^2$ and $a^2>0$ are estimates from above and from below for eigenvalues of the matrix $\left(\boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_\varepsilon\right)^2 $, respectively. \subsection{Preconditioned Conjugate Gradient method} \label{S:PCG} We apply the preconditioned conjugate gradient method with the preconditioner $\boldsymbol{\mathcal{H}}$ defined by \eqref{E:block-prec} to system (\ref{E:mod-syst_PCG}): \begin{equation} \overline{z}^k = \overline{z}^{k-1} - \beta_k \overline{\xi}_k, \quad k=1,2,\ldots , \end{equation} where \begin{equation} \overline{\xi}_k = \begin{cases} \boldsymbol{\mathcal{H}} (\bold {K}_\varepsilon \overline{z}^{0} - \boldsymbol{\mathcal{G}}_\varepsilon ), &k=1 \\ \boldsymbol{\mathcal{H}} (\bold{K}_\varepsilon \overline{z}^{k-1} - \boldsymbol{\mathcal{G}}_\varepsilon ) - \alpha_k \overline{\xi}_{k-1}, &k \geq 2, \end{cases} \label{E:prec_pcg-0} \end{equation} and \begin{equation} \alpha_k = \frac{( \boldsymbol{\mathcal{H}}[ \bold{K}_\varepsilon \overline{z}^{k-1} - \boldsymbol{\mathcal{G}}_\varepsilon ], \bold{K}_\varepsilon \overline{\xi}_{k-1})}{( \bold{K}_\varepsilon \overline{\xi}_{k-1}, \overline{\xi}_{k-1})} , \qquad \beta_k = \frac{( \bold{K}_\varepsilon \overline{z}^{k-1} - \boldsymbol{\mathcal{G}}_\varepsilon, \overline{\xi}_{k})}{( \bold{K}_\varepsilon \overline{\xi}_{k}, \overline{\xi}_{k})}, \quad k=1,2,\ldots \label{E:prec_pcg-2} \end{equation} The convergence estimate
for the method is as follows, see \cite{axel,km74}: \begin{equation} \| \overline{z}^k - \overline{z}^* \|_{\bold{K}_\varepsilon} \leqslant \frac{1}{C_{k} \left(\frac{b^2 + a^2}{b^2 - a^2}\right)} \| \overline{z}^0-\overline{z}^* \|_{\bold{K}_\varepsilon}, \ k=1,2,\ldots \end{equation} with the same matrix $\bold{K}_\varepsilon$ defined in \eqref{E:mod-syst_PCG_k_e} and values $a^2$ and $b^2$ as in \eqref{E:eigenval_est_PL}. \section{Eigenvalue estimates}\label{S:eig_est} \subsection{Eigenvalue estimates for the matrix $\mathcal{H}_{\mathrm{S}} \boldsymbol{\mathrm{S}}_0 $} \label{S:eig_est_hs} Consider the eigenvalue problem \begin{equation}\label{E:eigv_prob_init} \boldsymbol{\mathrm{S}}_0 \overline{\psi}_{\mathcal{D}} = \mu \, \mathcal{H}_{\mathrm{S}}^{-1} \overline{\psi}_{\mathcal{D}}, \end{equation} where \begin{equation} \boldsymbol{\mathrm{S}}_0 = \bold{B} \bold{A}^{-1} \bold{B}^T = \boldsymbol{\mathcal{B}}_{\mathcal{D}} \mathrm{S}^{-1}_{00} \boldsymbol{\mathcal{B}}_{\mathcal{D}}, \end{equation} \begin{equation} \mathcal{H}_{\mathrm{S}}^{-1} = \boldsymbol{\mathcal{B}}_{\mathcal{D}} + \bold {Q}, \end{equation} and \begin{equation*} \mathrm{S}_{00}= \mathrm{A}_{\mathcal{D}\mathcal{D}} - \mathrm{A}_{\mathcal{D}0} \mathrm{A}_{00}^{-1}\mathrm{A}_{0\mathcal{D}} \end{equation*} is the Schur complement of $\mathrm{A}_{00}$. It is obvious that $\mu = 1$ if $\overline{\psi}_{\mathcal{D}} \in \ker \boldsymbol{\mathcal{B}}_\mathcal{D}$, and $\bold {Q} \overline{\psi}_{\mathcal{D}}=\boldsymbol{0}$ for any $\mu \neq 1$, that is, $\overline{\psi}_{\mathcal{D}}$ is $\bold{M}$-orthogonal to $\ker \boldsymbol{\mathcal{B}}_{\mathcal{D}}$, with \[ \bold{M} = \mbox{diag} \left \{ \mathrm{M_1}, \ldots, \mathrm{M_m} \right\}, \quad\mbox{where } \mathrm{M_s} \mbox{ is given by } \eqref{E:mass1}, \quad s\in \{1,\ldots,m\}.
\] Thus, to derive $a_0$ and $b_0$ of (\ref{E:a_0_estim})-(\ref{E:b_0_estim}), instead of \eqref{E:eigv_prob_init}, we can consider the following eigenvalue problem \begin{equation}\label{E:eig_val_main_eq} \boldsymbol{\mathrm{S}}_0 \overline{\psi}_{\mathcal{D}} =\mu \, \boldsymbol{\mathcal{B}}_{\mathcal{D}} \overline{\psi}_{\mathcal{D}}, \end{equation} under the condition $ \left( \bold{M} \overline{\psi}_{\mathcal{D}} ,\overline{w}\right)=0$ for all $\overline{w} \in \ker \boldsymbol{\mathcal{B}}_{\mathcal{D}} $. Let $\mu$ be an eigenvalue of (\ref{E:eig_val_main_eq}) and $\overline{\psi}_{\mathcal{D}}$ a corresponding eigenvector. Then, \begin{equation} \label{E:raylegh} \mu = \frac{\left(\boldsymbol{\mathrm{S}}_0 \overline{\psi}_{\mathcal{D}},\overline{\psi}_{\mathcal{D}} \right)}{\left( \boldsymbol{\mathcal{B}}_{\mathcal{D}} \overline{\psi}_{\mathcal{D}},\overline{\psi}_{\mathcal{D}} \right)} = \frac{\left(\bold{A} \overline{\psi},\overline{\psi} \right)}{\left( \boldsymbol{\mathcal{B}}_{\mathcal{D}} \overline{\psi}_{\mathcal{D}},\overline{\psi}_{\mathcal{D}} \right)} = \frac{ \int \limits_{\Omega_h} \left| \nabla \psi_h \right|^2 dx}{ \int \limits_{\mathcal{D}_h} \left| \nabla \psi_h \right|^2 dx}, \end{equation} where \begin{equation*} \overline{\psi} = \begin{bmatrix} \overline{\psi}_{\mathcal{D}} \\ \overline{\psi}_0 \end{bmatrix}, \quad \mbox{such that} \quad \mathrm{A}_{0\mathcal{D}} \overline{\psi}_{\mathcal{D}} + \mathrm{A}_{00}\overline{\psi}_0=0, \end{equation*} and ${\psi}_h \in V_h$. The vector $\overline{\psi}_0 \in \mathbb{R}^{n_0}$ corresponds to a FEM function ${\psi}_{0,h} \in V_h|_{\Omega_h\setminus\mathcal{D}_h}$ called the {\it continuous $h$-harmonic extension} of ${\psi}_{\mathcal{D},h} \in V_h|_{\mathcal{D}_h}$ from $\mathcal{D}_h$ into $\Omega_h \setminus \mathcal{D}_h$, where the FEM function ${\psi}_{\mathcal{D},h}$ corresponds to the vector $\overline{\psi}_{\mathcal{D}} \in \mathbb{R}^n$. 
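The algebraic fact behind the second equality in \eqref{E:raylegh} is that, for the discrete harmonic extension $\overline{\psi}_0 = -\mathrm{A}_{00}^{-1}\mathrm{A}_{0\mathcal{D}}\overline{\psi}_{\mathcal{D}}$, the full energy $(\bold{A}\overline{\psi},\overline{\psi})$ coincides with the Schur-complement energy $(\mathrm{S}_{00}\overline{\psi}_{\mathcal{D}},\overline{\psi}_{\mathcal{D}})$. A minimal numpy sketch with a hypothetical SPD matrix (sizes and data are purely illustrative, not taken from the paper) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)
nD, n0 = 4, 6
# Hypothetical SPD matrix partitioned as A = [[A_DD, A_D0], [A_0D, A_00]].
R = rng.standard_normal((nD + n0, nD + n0))
A = R @ R.T + (nD + n0) * np.eye(nD + n0)
A_DD, A_D0 = A[:nD, :nD], A[:nD, nD:]
A_0D, A_00 = A[nD:, :nD], A[nD:, nD:]

# Schur complement of A_00, as in the text.
S00 = A_DD - A_D0 @ np.linalg.solve(A_00, A_0D)

psi_D = rng.standard_normal(nD)
psi_0 = -np.linalg.solve(A_00, A_0D @ psi_D)     # A_0D psi_D + A_00 psi_0 = 0
psi = np.concatenate([psi_D, psi_0])

# Energy of the discretely harmonically extended vector equals the
# Schur-complement energy (the middle equality in the Rayleigh quotient).
assert np.isclose(psi @ A @ psi, psi_D @ S00 @ psi_D)
```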
Note that ${\psi}_{0,h}$ is the solution of the following variational finite element problem: \begin{equation*} \begin{array}{l l l} & \displaystyle \mbox{ Find } & u_h \in V_h|_{\Omega_h\setminus\mathcal{D}_h} ~ \mbox{ satisfying } ~ u_h = {\psi}_{\mathcal{D},h} ~ \mbox{ on } ~ \partial \mathcal{D}_h ~ \mbox{ such that } \\[2pt] & & \displaystyle \int \limits_{\Omega_h \setminus \mathcal{D}_h} \left| \nabla u_h \right|^2 dx = \min_{ \substack{v_h \in V_h|_{\Omega_h\setminus\mathcal{D}_h}\\ v_h|_{\partial \mathcal{D}_h } = {\psi}_{\mathcal{D},h} }} ~\int \limits_{\Omega_h\setminus \mathcal{D}_h} \left| \nabla v_h \right|^2 dx. \end{array} \end{equation*} From now on, we will write $\mathcal{D}$ instead of $\mathcal{D}_h$, and $\mathcal{D}^s$ instead of $\mathcal{D}_h^s$, $s\in \{1,\ldots,m\}$, since they are the same due to the above assumptions. To estimate the value of $a_0$ in (\ref{E:a_0_estim}) from below, we consider the eigenvalue problem \eqref{E:eig_val_main_eq} using the spectral decomposition of $\mathrm{B_s} \in \mathbb{R}^{n_s \times n_s}$, $s\in \{1,\ldots,m\}$, that comes from \begin{equation} \mathrm{B_s} \ol{w} = \lambda \mathrm{M_s} \ol{w}, \label{e:kuz_eigen_mass} \end{equation} that is, \begin{equation} \mathrm{B_s} = \mathrm{M_s} \mathrm{W_s} \Lambda_s \mathrm{W_s}^T \mathrm{M_s}, \label{e:kuz_spectral_decomp} \end{equation} with \begin{equation*} \mathrm{W_s} = \left[\ol{w}_s^1, \ldots,\ol{w}_s^{n_s} \right], \quad \mbox{and} \quad \Lambda_s = \mbox{diag} \left\{ \lambda_s^1, \ldots, \lambda_s^{n_s} \right\}, \end{equation*} where $0=\lambda_s^1 < \lambda_s^2 \leqslant \ldots \leqslant \lambda_s^{n_s}$ are the eigenvalues in \eqref{e:kuz_eigen_mass} and $\ol{w}_s^1, \ldots,\ol{w}_s^{n_s}$ are the corresponding $\mathrm{M_s}$-orthonormal eigenvectors, $s\in \{1,\ldots,m\}$.
We define the matrices \begin{equation} \mathrm{\hat{B}_s} = \mathrm{M_s}^{\frac{1}{2}} \mathrm{W_s} \Lambda_s \mathrm{W_s}^T \mathrm{M_s}^{\frac{1}{2}}, \label{e:kuz_B_matrix_decomp} \end{equation} and \begin{equation} \hat{\mathrm{B}}_s^{\frac{1}{2}} = \mathrm{M_s}^{\frac{1}{2}} \mathrm{W_s} \Lambda_s^{\frac{1}{2}} \mathrm{W_s}^T \mathrm{M_s}^{\frac{1}{2}}, \label{e:kuz_B_matrix_decomp_half} \end{equation} It is obvious that $\hat{\mathrm{B}}_s^{\frac{1}{2}} $ are symmetric positive semidefinite matrices and $\hat{\mathrm{B}}_s^{\frac{1}{2}} \hat{\mathrm{B}}_s^{\frac{1}{2}} =\hat{\mathrm{B}}_s$, $s\in \{1,\ldots,m\}$. Also note that $\ol{w}_s^1 \in \ker \mathrm{B}_s$ and is precisely the one that is given by \eqref{E:evector-1}. In addition, we define the matrices \begin{equation*} \hat{\mathrm{B}}_{d, s}^{\frac{1}{2}} = \hat{\mathrm{B}}_s^{\frac{1}{2}} + \frac{1}{ d_s} \mathrm{M}_s^{\frac{1}{2}} \ol{w}_s^1 \otimes \mathrm{M}_s^{\frac{1}{2}}\ol{w}_s^1, \end{equation*} where $d_s$, $s\in \{1,\ldots,m\}$, was introduced in \eqref{E:evector-1}. Straightforward multiplications show that \begin{equation*} \hat{\mathrm{B}}_{s}^{\frac{1}{2}} \hat{\mathrm{B}}_{d, s}^{\frac{1}{2}} = \hat{\mathrm{B}}_{d, s}^{\frac{1}{2}} \hat{\mathrm{B}}_s^{\frac{1}{2}} = \hat{\mathrm{B}}_s, \quad s\in \{1,\ldots,m\}. 
\end{equation*} The latter observation shows that the eigenvalue problem \eqref{E:eig_val_main_eq} is equivalent to the eigenvalue problem \begin{equation} \bold{M}^{\frac{1}{2}} \hat{\boldsymbol {\mathcal{B}}}^{\frac{1}{2}} \hat{\boldsymbol {\mathcal{B}}}_d^{\frac{1}{2}} \bold{M}^{\frac{1}{2}} \mathrm{S}_{00}^{-1} \bold{M}^{\frac{1}{2}} \hat{\boldsymbol {\mathcal{B}}}_d^{\frac{1}{2}} \hat{\boldsymbol {\mathcal{B}}}^{\frac{1}{2}} \bold{M}^{\frac{1}{2}} \ol{w} = \mu \, \boldsymbol{\mathcal{B}}_{\mathcal{D}} \ol{w}, \label{e:kuz_equival_eigenval_pr} \end{equation} where \begin{equation*} \hat{\boldsymbol {\mathcal{B}}}^{\frac{1}{2}} = \mbox{diag} \left (\hat{\mathrm{B}}_1^{\frac{1}{2}}, \ldots, \hat{\mathrm{B}}_m^{\frac{1}{2}} \right), \quad \hat{\boldsymbol {\mathcal{B}}}_{d}^{\frac{1}{2}} = \mbox{diag} \left ( \hat{\mathrm{B}}_{d,1}^{\frac{1}{2}}, \ldots, \hat{\mathrm{B}}_{d,m}^{\frac{1}{2}} \right), \end{equation*} are $m \times m $ block diagonal matrices. It is easy to see that the minimal eigenvalue in (\ref{e:kuz_equival_eigenval_pr}) is bounded from below by the minimal eigenvalue of the matrix \begin{equation*} \hat{\boldsymbol {\mathcal{B}}}_d^{\frac{1}{2}} \bold{M}^{\frac{1}{2}}\mathrm{S}_{00}^{-1} \bold{M}^{\frac{1}{2}} \hat{\boldsymbol{\mathcal{B}}}_d^{\frac{1}{2}} \end{equation*} which is equal to the minimal eigenvalue of the similar matrix $\mathrm{S}_{00}^{-1} \boldsymbol{\mathcal{B}}_{d}$ with \begin{equation*} \boldsymbol{\mathcal{B}}_d = \bold{M}^{\frac{1}{2}} \hat{\boldsymbol {\mathcal{B}}}_d \bold{M}^{\frac{1}{2}} = \mbox{diag} \left( \mathrm{B}_1+\mathrm{Q_1}, \ldots, \mathrm{B}_m+ \mathrm{Q_m} \right) , \end{equation*} where $\mathrm{Q_s}$, $s\in \{1,\ldots,m\}$, is defined in \eqref{E:evector-1}.
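The reason the rank-one correction in $\hat{\mathrm{B}}_{d,s}^{\frac{1}{2}}$ drops out of the products $\hat{\mathrm{B}}_{s}^{\frac{1}{2}} \hat{\mathrm{B}}_{d, s}^{\frac{1}{2}} = \hat{\mathrm{B}}_{d, s}^{\frac{1}{2}} \hat{\mathrm{B}}_s^{\frac{1}{2}} = \hat{\mathrm{B}}_s$ is that $\mathrm{M}_s^{\frac{1}{2}}\ol{w}_s^1$ lies in the kernel of $\hat{\mathrm{B}}_s^{\frac{1}{2}}$. A numpy sketch with the same hypothetical $(\mathrm{M}, \mathrm{B})$ construction as above (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
# Hypothetical SPD M and singular symmetric PSD B with ker B = span{1},
# decomposed as B = M W Lam W^T M via the Cholesky reduction.
R = rng.standard_normal((n, n)); M = R @ R.T + n * np.eye(n)
G = rng.standard_normal((n, n - 1)); G -= G.mean(axis=0); B = G @ G.T
L = np.linalg.cholesky(M)
lam, U = np.linalg.eigh(np.linalg.solve(L, np.linalg.solve(L, B).T).T)
lam = np.clip(lam, 0.0, None)
lam[0] = 0.0                         # kill the round-off in the zero eigenvalue
W = np.linalg.solve(L.T, U)          # M-orthonormal eigenvectors

mu, V = np.linalg.eigh(M)
M_sqrt = V @ np.diag(np.sqrt(mu)) @ V.T              # M^{1/2}

B_hat      = M_sqrt @ W @ np.diag(lam) @ W.T @ M_sqrt
B_hat_half = M_sqrt @ W @ np.diag(np.sqrt(lam)) @ W.T @ M_sqrt
d = 0.3                                              # hypothetical d_s
r = M_sqrt @ W[:, 0]                                 # M^{1/2} w^1, (w^1, M w^1) = 1
B_hat_d_half = B_hat_half + np.outer(r, r) / d

# The rank-one term lies in ker(B_hat^{1/2}), so it drops out of the products.
assert np.allclose(B_hat_half @ B_hat_d_half, B_hat)
assert np.allclose(B_hat_d_half @ B_hat_half, B_hat)
```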
If $\left(\mu, \ol{w} \right) $ is an eigenpair of the matrix $\mathrm{S}_{00}^{-1} \boldsymbol{\mathcal{B}}_{d}$, then, similarly to \eqref{E:raylegh}, we obtain \begin{equation} \mu = \max_{\substack{ v_{h} \in V_h|_{\Omega_h \setminus \mathcal{D}} \\ v_{h} |_{ \partial \mathcal{D} }= w_h }} \frac{\int \limits_{\mathcal{D}} \left| \nabla w_h \right|^2 dx + \sum \limits_{s=1}^m \frac{1}{d_s^2} \left[ \int \limits_{\mathcal{D}^s} w_h dx \right]^2}{ \int \limits_{\mathcal{D}} \left| \nabla w_h \right|^2 dx + \int \limits_{\Omega_h \setminus \mathcal{D}} \left| \nabla v_{h} \right|^2 dx} \geqslant \frac{\norm{w_h}_d^2}{\norm{w_h}_d^2 + \int \limits_{\Omega_h \setminus \mathcal{D}} \left| \nabla v_{h} \right|^2 dx}, \label{e:kuz_big_big_est} \end{equation} for any $v_{h} \in V_h |_{\Omega_h \setminus \mathcal{D}}$ such that $v_{h} = w_h$ on $ \partial \mathcal{D}$, where \begin{equation} \norm{w_h}_d^2 = \int \limits_{\mathcal{D}} \left| \nabla w_h \right|^2 dx + \sum \limits_{s=1}^m \frac{1}{d_s^2} \left[ \int \limits_{\mathcal{D}^s} w_h dx \right]^2. \end{equation} \begin{figure} \caption{An example of $\mathcal{D}$} \label{f:kuz_fig_ex1} \end{figure} Following \cite{kuz09,kuz18}, we embed subdomains $\mathcal{D}^s$ into subdomains $\tilde{\mathcal{D}}^s$ with mesh-conforming boundaries $\tilde{\Gamma}_s=\partial\tilde{\mathcal{D}}^s$ (see Figure \ref{f:kuz_fig_ex1}) so that \begin{equation}\label{E:d_st_estimate} \min \limits_{x \in \ol{\mathcal{D}}^s, ~y \in \partial\tilde{\mathcal{D}}^s \cup \Gamma} |x - y | \geqslant c d_s, \end{equation} with a given positive constant $c$ independent of $d_s$, $s\in \{1,\ldots,m\}$. We assume that $ \tilde{\mathcal{D}}^s\cap \tilde{\mathcal{D}}^t = \varnothing $ for any $s \neq t$, $s,t \in \{1,\ldots,m\}$. We define $\tilde{\mathcal{D}} = \bigcup \limits_{s=1}^m \tilde{\mathcal{D}}^s$, and assume that $v_{h}$ in (\ref{e:kuz_big_big_est}) vanishes in $\Omega_h \setminus \tilde{\mathcal{D}}$.
With that, we obtain the following estimate \begin{equation*} \mu \geqslant \min \limits_{s\in \{1,\ldots,m\}} \frac{\norm{w_{h,s}}_{d,s}^2}{\norm{w_{h,s}}_{d,s}^2+ \int \limits_{\tilde{\mathcal{D}}^s \setminus \mathcal{D}^s} | \nabla v_{h} |^2 dx}, \end{equation*} for any $v_{h} \in V_h|_{\tilde{\mathcal{D}}^s \setminus \mathcal{D}^s}$, such that $v_{h}|_{ \Gamma_{s}} = w_{h,s}$, \hspace{1pt} $v_{h} |_{ \tilde{\Gamma}_{s}}= 0$, where $w_{h,s}:=w_h|_{\mathcal{D}^{s}}$, and \begin{equation*} \norm{w_{h,s}}^2_{d,s}= \int \limits_{\mathcal{D}^{s}} | \nabla w_{h,s} |^2 dx + \frac{1}{d_s^2} \left[\int \limits_{\mathcal{D}^{s}} w_{h,s} dx \right]^2. \end{equation*} If we assume that for any $w_{h,s} \in V_s$, its finite element extension $ \tilde{w}_{h} \in V_h|_{ \tilde{\mathcal{D}}^s \setminus \mathcal{D}^{s}}$ with $\tilde{w}_{h}|_{\Gamma_{s} }= w_{h,s} $, $\tilde{w}_{h}|_{ \tilde{\Gamma}_s} = 0$, exists such that \begin{equation} \int \limits_{\tilde{\mathcal{D}}^s \setminus \mathcal{D}^{s}} | \nabla \tilde{w}_{h} |^2 dx \leqslant C^2 \norm{w_{h,s}}^2_{d,s}, \label{E:kuz_label_new_new_eq} \end{equation} with a positive constant $C$ independent of $\Omega_h$ and of the values of $d_s$, $s\in \{1,\ldots,m\}$, then we arrive at the estimate \begin{equation}\label{E:eig_est_mu_c} \mu \geqslant \frac{1}{1+C^2}. \end{equation} The existence of norm-preserving finite element extensions on quasi-uniform, regularly shaped triangular meshes was proved in \cite{toswid05}. To apply the latter result to (\ref{E:kuz_label_new_new_eq}), we have to assume that the mesh $\Omega_h$ is quasi-uniform and regularly shaped in the subdomains $\tilde{\mathcal{D}}^s \setminus \overline{\mathcal{D}}^{s} $ and to apply the transformation $x' = \frac{1}{d_s} x$ for each of the subdomains $\tilde{\mathcal{D}}^s $, as proposed in \cite{kuz09}, $s\in \{1,\ldots,m\}$.
Thus, under the assumptions made, the estimate \begin{equation*} a_0 \geqslant \frac{1}{1+C^2} \end{equation*} holds, where $C$ is a positive constant independent of $\Omega_h$ and of the values of $d_s$, $s\in \{1,\ldots,m\}$. \begin{remark} There is an alternative proof of the estimate for $\mu$ from below as in \eqref{E:eig_est_mu_c}, see \cite{kuz18}, that does not use the algebraic technique \eqref{e:kuz_eigen_mass}-\eqref{e:kuz_equival_eigenval_pr} proposed in this paper. \end{remark} \subsection{Eigenvalue estimates for the matrix $\boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon}$} \label{S:eig_est_ha} In this section, we assume that the assumptions made at the end of Section \ref{S:eig_est_hs} remain valid, that is, the mesh $\Omega_h$ in $\tilde{\mathcal{D}}^s$ is regularly shaped and quasi-uniform, $s \in \{ 1, \ldots, m\}$, and the distances between ${\mathcal{D}}^s$ and ${\mathcal{D}}^t$ satisfy (\ref{E:d_st_estimate}) with a constant $c$ independent of $\Omega_h$ as well as of the shape and location of the inclusions. In other words, we assume that \begin{equation}\label{E:eig_est_s0_bd} \boldsymbol{\mathrm{S}}_0 \leqslant \boldsymbol{\mathcal{B}}_{\mathcal{D}} \leqslant \left(1 + C^2 \right) \boldsymbol{\mathrm{S}}_0, \end{equation} where $C^2$ is a positive constant independent of $\Omega_h$ and of the shape and locations of ${\mathcal{D}}^s$, $s \in \{ 1, \ldots, m\}$.
Consider the eigenvalue problem \begin{equation}\label{E:eig_est_a_eps} \boldsymbol{\mathcal{A}}_{\varepsilon} \begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix} = \mu \boldsymbol{\mathcal{H}}^{-1}_0 \begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix}, \end{equation} and two additional eigenvalue problems \begin{equation}\label{E:eig_est_a_hat_eq} \boldsymbol{\hat{\mathcal{A}}} \begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix} = \hat{\mu} \boldsymbol{\mathcal{H}}^{-1}_0 \begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix}, \end{equation} \begin{equation}\label{E:eig_est_a_check_eq} \boldsymbol{\check{\mathcal{A}}} \begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix} = \check{\mu} \boldsymbol{\mathcal{H}}^{-1}_0 \begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix}, \end{equation} with the matrices \begin{equation*} \boldsymbol{\hat{\mathcal{A}}} = \begin{bmatrix} \bold{A} & \bold{B}^T \\ \bold{B} & - \bold{Q} \end{bmatrix} , \quad \mbox{and} \quad \boldsymbol{\check{\mathcal{A}}} = \begin{bmatrix} \bold{A} & \bold{B}^T \\ \bold{B} & -r_{\max} \boldsymbol{\mathrm{S}}_0 - \bold{Q} \end{bmatrix} , \end{equation*} respectively, where \begin{equation*} r_{\max} = \left(1+C^2\right) \varepsilon_{\max} , \end{equation*} and \begin{equation*} \boldsymbol{\mathcal{H}}^{-1}_0 = \begin{pmatrix} \bold{A} & 0 \\0 & \bold{B} \bold{A}^{-1} \bold{B}^T + \bold{Q} \end{pmatrix}. \end{equation*} It is obvious that \begin{equation*} \boldsymbol{\check{\mathcal{A}}} \leqslant\boldsymbol{\mathcal{A}}_\varepsilon \leqslant \boldsymbol{ \hat{\mathcal{A}}} , \end{equation*} and three eigenproblems (\ref{E:eig_est_a_eps}), (\ref{E:eig_est_a_hat_eq}), and (\ref{E:eig_est_a_check_eq}) have equal numbers of negative and positive eigenvalues. 
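The ordering $\boldsymbol{\check{\mathcal{A}}} \leqslant \boldsymbol{\mathcal{A}}_\varepsilon \leqslant \boldsymbol{\hat{\mathcal{A}}}$ and the inertia claim can be illustrated numerically. The sketch below uses hypothetical dense blocks and, for simplicity, a full-row-rank $\bold{B}$, so that $\ker \bold{B}^T = \{\boldsymbol{0}\}$ and $\bold{Q} = \boldsymbol{0}$ (in the paper $\bold{Q} \neq \boldsymbol{0}$; this simplification only serves the illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 8, 3
R = rng.standard_normal((N, N))
A = R @ R.T + N * np.eye(N)                 # hypothetical SPD A
B = rng.standard_normal((n, N))             # full row rank => ker B^T = {0}, Q = 0
S0 = B @ np.linalg.solve(A, B.T)            # S_0 = B A^{-1} B^T
r_max = 0.7                                 # hypothetical (1 + C^2) * eps_max

A_hat = np.block([[A, B.T], [B, np.zeros((n, n))]])
A_check = np.block([[A, B.T], [B, -r_max * S0]])

# A_check <= A_hat: their difference is diag(0, r_max * S0) >= 0.
assert np.all(np.linalg.eigvalsh(A_hat - A_check) >= -1e-10)
# Both saddle-point matrices have N positive and n negative eigenvalues.
for Mtx in (A_hat, A_check):
    ev = np.linalg.eigvalsh(Mtx)
    assert (ev > 0).sum() == N and (ev < 0).sum() == n
```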
It is also obvious that all three eigenproblems have the same multiplicity of the eigenvalue $\mu = \check{\mu}=\hat{\mu} = 1$ and the underlying eigenvectors $\begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix} $ satisfy the conditions \begin{equation}\label{E:hat_mu_est} \overline{v} \in \ker \bold{B}, \quad \overline{w} \in \ker \bold{B}^T = \ker \boldsymbol{\mathcal{B}}_{\mathcal{D}}. \end{equation} The latter condition in (\ref{E:hat_mu_est}) implies that for the eigenvalues $\mu , \check{\mu},\hat{\mu}$ not equal to one in \eqref{E:eig_est_a_eps}-\eqref{E:eig_est_a_check_eq}, we can impose additional conditions on the vector $\overline{w} \in \mathbb{R}^n$: \begin{equation}\label{E:eig_prob_add_cond} \left(\bold{M}\overline{w} , \overline{\xi} \right) = 0 , \quad \forall \overline{\xi} \in \ker \boldsymbol{\mathcal{B}}_{\mathcal{D}}. \end{equation} Assume that $\hat{\mu} \neq 1$ in (\ref{E:eig_est_a_hat_eq}) and $\check{\mu} \neq 1$ in (\ref{E:eig_est_a_check_eq}), then eliminating the vector $\overline{v} \in \mathbb{R}^N$ in (\ref{E:eig_est_a_hat_eq}) and (\ref{E:eig_est_a_check_eq}) (see also \cite{kuz95}) yields the equations \begin{equation*} -\frac{1}{1-\hat{\mu}} = \hat{\mu}, \quad \mbox{and} \quad -\frac{1}{1- \check{\mu}} - r_{\max} = \check{\mu} , \end{equation*} respectively. It follows that each of eigenproblems (\ref{E:eig_est_a_hat_eq}) and (\ref{E:eig_est_a_check_eq}) under the condition \eqref{E:eig_prob_add_cond} has only two different eigenvalues \begin{equation} \label{E:mu12} \hat{\mu}_{1,2} = \frac{1 \mp \sqrt{ 5}}{2}, \quad \mbox{and} \quad \check{\mu}_{1,2} = \frac{1-r_{\max} \mp \sqrt{ \left(1-r_{\max}\right)^2 + 4 \left(1+r_{\max}\right)}}{2}, \end{equation} respectively. \begin{remark} It is obvious that $\check{\mu}_1$ tends to $\hat{\mu}_1= \frac{1}{2} \left(1 - \sqrt{5}\right)$ and $\check{\mu}_2$ tends to $\hat{\mu}_2 =\frac{1}{2} \left(1 + \sqrt{5}\right)$ as $\varepsilon_{\max}$ tends to zero. 
\end{remark} Straightforward analysis of (\ref{E:mu12}) shows that \begin{equation*} \check{\mu}_1 < \hat{\mu}_1 < 0 < \check{\mu}_2 < \hat{\mu}_2. \end{equation*} Using inequalities \eqref{E:eig_est_s0_bd} and results of \cite{bell}, we conclude that all eigenvalues of (\ref{E:eig_est_a_eps}) which are not equal to one belong to the union of two disjoint segments \begin{equation*} \left[ \check{\mu}_1, \hat{\mu}_1 \right] \cup \left[\check{\mu}_2, \hat{\mu}_2 \right], \end{equation*} with the endpoints independent of $\Omega_h$ and of the shape and location of the inclusions (see the assumptions at the beginning of this section). Simple analysis shows that $\check{\mu}_2 >1$ for any $\varepsilon_{\max} \geqslant 0$. Thus, we conclude that all the eigenvalues of (\ref{E:eig_est_a_eps}) belong to the set \begin{equation*} \left[\check{\mu}_1, \hat{\mu}_1\right] \cup \left[1,\hat{\mu}_2\right]. \end{equation*} Now we consider the eigenvalue problem \begin{equation}\label{E:eig_est_a_eps_main} \boldsymbol{\mathcal{A}}_{\varepsilon} \begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix} = \mu \boldsymbol{\mathcal{H}}^{-1} \begin{bmatrix} \overline{v} \\ \overline{w} \end{bmatrix} , \end{equation} where \begin{equation}\label{E:h_a_defin} \boldsymbol{\mathcal{H}}^{-1}= \begin{bmatrix} \mathcal{H}_{\mathrm{A}}^{-1} &0 \\ 0 & \boldsymbol{\mathcal{B}}_{\mathcal{D}} + \bold {Q} \end{bmatrix} , \end{equation} and $\mathcal{H}_{\mathrm{A}}$ is a symmetric positive definite matrix satisfying the condition \begin{equation} \label{E:beta-bounds} \beta_1 \bold{A} \leqslant \mathcal{H}_{\mathrm{A}}^{-1} \leqslant \beta_2 \bold{A}, \end{equation} with positive constants $\beta_1$ and $\beta_2$. We assume that $\beta_1$ and $\beta_2$ are independent of $\Omega_h$. For instance, $\mathcal{H}_{\mathrm{A}}^{-1}$ could be a BPX or AMG preconditioner \cite{bpx90,bpx97,bs08,kuz90}.
Using (\ref{E:eig_est_s0_bd}) and (\ref{E:h_a_defin}), we obtain \begin{equation*} \alpha_{\min} \boldsymbol{\mathcal{H}}^{-1}_0 \leqslant \boldsymbol{\mathcal{H}}^{-1} \leqslant \alpha_{\max} \boldsymbol{\mathcal{H}}^{-1}_0, \end{equation*} where \begin{equation*} \alpha_{\min} = \min \Big\{ \beta_1; \frac{1}{1+C^2} \Big\}, \quad \mbox{and} \quad \alpha_{\max} = \max \{\beta_2;1\} . \end{equation*} Then, straightforward analysis shows that the eigenvalues of the matrix $\boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon}$ belong to the set \begin{equation*} \left[C_1, C_2\right] \cup \left[ C_3, C_4\right], \end{equation*} where \begin{equation} \label{E:constC} C_1 = \frac{\check{\mu}_1}{\alpha_{\min}}\leqslant C_2 = \frac{\hat{\mu}_{1}}{\alpha_{\max}}< 0, \quad \mbox{and} \quad C_4 = \frac{\hat{\mu}_2}{\alpha_{\min}} > C_3 = \frac{1}{\alpha_{\max}}>0 . \end{equation} Thus, we have proved the following result. \begin{theorem} \label{T:main} Let the mesh $\Omega_h$ be regularly shaped and quasi-uniform, and distances between ${\mathcal{D}}^s$ and ${\mathcal{D}}^t$ satisfy (\ref{E:d_st_estimate}) with a constant $c$ independent of $\Omega_h$ as well as the shape and location of inclusions. Then the eigenvalues of the matrix $\left(\boldsymbol{\mathcal{H}} \boldsymbol{\mathcal{A}}_{\varepsilon}\right)^2$ belong to the segment $\left[ a^2, b^2 \right]$, where \begin{equation*} a = \min \big\{|C_2|; C_3 \big\}, \quad b = \max \big\{ |C_1|; C_4 \big\} , \end{equation*} and $C_i$, $i\in \{1,\ldots,4\}$, are given by \eqref{E:constC}. \end{theorem} Note that, under the assumptions made, the values of $a$ and $b$ are independent of $\Omega_h$ as well as of the shape and location of the inclusions ${\mathcal{D}}^s$ in $\Omega$, $s\in \{1,\ldots,m\}$.
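The endpoints in \eqref{E:constC} and the resulting pair $(a, b)$ of Theorem \ref{T:main} are cheap to evaluate. The Python sketch below computes $\hat{\mu}_{1,2}$ and $\check{\mu}_1$ from \eqref{E:mu12} and then the four endpoints; the input values of $\beta_1$, $\beta_2$, $C^2$, and $\varepsilon_{\max}$ are hypothetical illustration values, not taken from the paper's experiments:

```python
import math

def spectral_bounds(beta1, beta2, Csq, eps_max):
    """Endpoints C_1..C_4 of the two spectral intervals and the pair (a, b).

    beta1, beta2 bound H_A^{-1} with respect to A; Csq is the constant C^2;
    eps_max is the largest regularization parameter.
    """
    r_max = (1 + Csq) * eps_max
    mu_hat_1 = (1 - math.sqrt(5)) / 2
    mu_hat_2 = (1 + math.sqrt(5)) / 2
    disc = math.sqrt((1 - r_max) ** 2 + 4 * (1 + r_max))
    mu_check_1 = (1 - r_max - disc) / 2
    alpha_min = min(beta1, 1 / (1 + Csq))
    alpha_max = max(beta2, 1.0)
    C1 = mu_check_1 / alpha_min
    C2 = mu_hat_1 / alpha_max
    C3 = 1 / alpha_max
    C4 = mu_hat_2 / alpha_min
    a = min(abs(C2), C3)
    b = max(abs(C1), C4)
    return (C1, C2, C3, C4), (a, b)

# Hypothetical values: beta1, beta2, C^2, eps_max chosen for illustration.
(C1, C2, C3, C4), (a, b) = spectral_bounds(beta1=0.6, beta2=1.4, Csq=4.0, eps_max=1e-2)
assert C1 <= C2 < 0 < C3 < C4 and 0 < a <= b
```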
\begin{remark} The results of this section can be easily extended to the case of the 3D diffusion problem, as well as to problems with a nonzero reaction coefficient and to different types of boundary conditions (Neumann, Robin, and mixed). \end{remark} \section{Numerical results} \label{S:numerics} To evaluate and verify the methods proposed in Sections \ref{S:form} and \ref{S:methods}, and the theoretical results justified in Section \ref{S:eig_est}, we consider the following simple model problem. Let $\Omega$ be a unit square, and $\Omega_h$ be a triangulated square mesh with mesh step size $h = \frac{1}{\sqrt{N} -1}$. We consider two types of distribution of the particles $\mathcal{D}^s$, $s \in \{1,\ldots,m \}$, in $\Omega$. The first one, called {\it ``periodic''}, is shown in Figures \ref{f:kuz_quad_reg} and \ref{f:kuz_fig_ex}. The second one, called {\it ``random''}, is obtained by removing $\hat{m} < \bar{m}$ inclusions randomly chosen from the periodic array of $\bar{m}$ particles, so that $m=\bar{m}-\hat{m}$, see Figure \ref{f:kuz_quad_rand}. The values of $\varepsilon_s$ in $\mathcal{D}^s$, $s \in \{1,\ldots,m \}$, are chosen either randomly from the segment $\left[\varepsilon_{\min},10^{-2} \right]$, where $\varepsilon_{\min} <1$, or uniformly $\varepsilon_s = \varepsilon$, $s \in \{1,\ldots,m\}$. \begin{figure} \caption{Periodic distributions of particles} \label{f:kuz_fig_ex} \label{f:kuz_quad_reg} \end{figure} \begin{figure} \caption{Random distribution of particles ($m=230$)} \label{f:kuz_quad_rand} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{|c |c| c |c |c |} \hline \diagbox[width=3em]{ $\delta$ }{N} & 65,025 & 261,121 & 1,046,529 & 4,190,209\\%[5pt] \hline $10^{-2}$ & 4 & 4 & 4 &4 \\ [1ex] $10^{-4}$ & 7 & 7 & 7 & 7 \\ [1ex] $10^{-6}$ & 10 & 10 & 10 & 10 \\ [1ex] $10^{-7}$ & 12 & 12 & 12 & 12 \\ [1ex] $10^{-8}$ & 14 & 14 & 14 & 14 \\ [1ex] \hline \end{tabular} \caption{The number of PCG iterations} \label{T:table:1} \end{table} In our numerical tests the inclusions are represented by $d \times d$ squares separated by the distance $d\equiv d_s$, $s \in \{1,\ldots,m \}$, between neighboring inclusions so that the minimal distance between the inclusions and the boundary $\partial \Omega$ equals $d/2$ as shown in Figure \ref{f:kuz_quad_reg}. \begin{table}[h!] \centering \begin{tabular}{|c |c| c |c |c | c| c|} \hline \multirow{ 2}{*}{\diagbox[width=5em]{$\varepsilon_{\min}$}{$m$}} & \multicolumn{2}{c|}{65,536} & \multicolumn{2}{c|}{16,384} & \multicolumn{2}{c|}{4,096}\\ \cline{2-7} & Period& Rand& Period& Rand & Period& Rand \\ \hline $10^{-2}$ & 11 & 11 & 11 &10 & 10 & 10 \\ [1ex] $10^{-4}$ & 11 & 11 & 11 & 11& 10 & 10 \\ [1ex] $10^{-6}$ & 11 & 11 & 11 & 11 & 10 & 10 \\ [1ex] \hline \end{tabular} \caption{The number of PU iterations} \label{T:table:2} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c |c| c |c |c | c| c|} \hline \multirow{ 2}{*}{\diagbox[width=5em]{$\varepsilon_{\min}$}{$m$}} & \multicolumn{2}{c|}{65,536} & \multicolumn{2}{c|}{16,384} & \multicolumn{2}{c|}{4,096}\\ \cline{2-7} & Period& Rand& Period& Rand & Period& Rand \\ \hline $10^{-2}$ & 40 & 40 & 43 &43 & 46 & 44 \\ [1ex] $10^{-4}$ & 40 & 40 & 44 & 44& 46 & 46 \\ [1ex] $10^{-6}$ & 40 & 40 & 44 & 44 & 46 & 46 \\ [1ex] \hline \end{tabular} \caption{The number of PL iterations} \label{T:table:3} \end{table} \begin{table}[h!] 
\centering \begin{tabular}{|c |c| c |c |c | c| c|} \hline \multirow{ 2}{*}{\diagbox[width=5em]{$\varepsilon_{\min}$}{$m$}} & \multicolumn{2}{c|}{65,536} & \multicolumn{2}{c|}{16,384} & \multicolumn{2}{c|}{4,096}\\ \cline{2-7} & Period& Rand& Period& Rand & Period& Rand \\ \hline $10^{-2}$ & 90 &90 & 88 &88 &89 & 89 \\ [1ex] $10^{-4}$ & 93 & 93 & 92 & 92& 92 & 92 \\ [1ex] $10^{-6}$ & 93 & 93 & 92 & 92 & 92 & 92 \\ [1ex] \hline \end{tabular} \caption{The number of PCG iterations} \label{T:table:4} \end{table} The matrix $\mathcal{H}_{\mathrm{A}}$ is the W-cycle Algebraic Multigrid preconditioner, proposed and investigated in \cite{kuz90,kuz_pde}. It was shown in \cite{kuz_pde} that the eigenvalues of the matrix $\mathcal{H}_{\mathrm{A}} \bold{A}$ lie in the segment $\left[\frac{1}{2}\left(3-\sqrt{3}\right), \frac{3}{2}\left(1+\sqrt{3}\right)\right]$, that is, in \eqref{E:beta-bounds} we have \begin{equation*} \beta_1 = \frac{1}{2}\left(3-\sqrt{3}\right), \qquad \beta_2 = \frac{3}{2}\left(1+\sqrt{3}\right). \end{equation*} Therefore, the number of arithmetical operations (flops) for the calculation of the matrix-vector product $\mathcal{H}_{\mathrm{A}} \ol{\xi}$ with $\ol{\xi} \in \mathbb{R}^N$ is bounded above by $5N$; hence, the arithmetical costs of multiplying a vector by $\mathcal{H}_{\mathrm{A}}$ and by $\bold{A}$ are almost equal. The main goal of our numerical experiments is to evaluate the minimal number of iterations sufficient to reduce the initial error by a factor of $\delta^{-1}$, $\delta<1$. To this end, in our numerical tests we consider homogeneous systems with randomly chosen initial guesses.
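The Chebyshev bound \eqref{E:eigenval_est_PL} turns such an error-reduction target into an a priori iteration count, using $C_m(x)=\cosh(m\,\operatorname{arccosh} x)$ for $x \geqslant 1$. A small Python sketch (the values of $a$ and $b$ below are hypothetical, not the paper's estimates):

```python
import math

def cheb(m, x):
    """Chebyshev polynomial C_m(x) for x >= 1, via C_m(x) = cosh(m * arccosh(x))."""
    return math.cosh(m * math.acosh(x))

def pl_iterations(a, b, delta):
    """Smallest even k with C_{k/2}((b^2 + a^2)/(b^2 - a^2)) >= 1/delta,
    i.e. the iteration count guaranteed by the Chebyshev convergence bound."""
    x = (b * b + a * a) / (b * b - a * a)
    k = 2
    while cheb(k / 2, x) < 1 / delta:
        k += 2
    return k

# Hypothetical spectral bounds a, b; a tighter interval needs fewer iterations,
# and a smaller delta needs more.
assert pl_iterations(0.5, 2.0, 1e-6) <= pl_iterations(0.25, 4.0, 1e-6)
assert pl_iterations(0.5, 2.0, 1e-6) >= pl_iterations(0.5, 2.0, 1e-2)
```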
For the PU method (\ref{E:PCG_Uzawa-0})-(\ref{E:PCG_Uzawa-2}) the stopping criterion was \begin{equation} \label{E:stop-crit1} \norm{\ol{p}^k}_{\bold{S}_\varepsilon} \leqslant \delta \norm{\ol{p}^0}_{\bold{S}_\varepsilon}, \end{equation} and for the PL method (\ref{E:Prec_lanc-0})-(\ref{E:Prec_lanc-3}) and the PCG method (\ref{E:prec_pcg-0})-(\ref{E:prec_pcg-2}) the stopping criterion was \begin{equation} \label{E:stop-crit2} \norm{\ol{z}^k}_{\bold{K}_\varepsilon} \leqslant \delta \norm{\ol{z}^0}_{\bold{K}_\varepsilon} . \end{equation} In Table \ref{T:table:1}, we display the number of PCG iterations with the preconditioner $\mathcal{H}_\mathrm{A}$ mentioned at the beginning of this section for the homogeneous system \begin{equation*} \bold{A} \ol{x} = \ol{0} , \end{equation*} and randomly chosen initial guesses $\ol{x}^0$. The stopping criterion was \begin{equation*} \norm{\ol{x}^k}_{\bold{A}}\leqslant \delta \norm{\ol{x}^0}_{\bold{A}} . \end{equation*} We observe that $12$ iterations are sufficient to reduce the $\bold{A}$-norm of the error by a factor of $10^7$. In Table \ref{T:table:2}, we display the number of iterations of the PU method with $\delta = 10^{-6}$, which is independent of a random choice of $\varepsilon \in \left[\varepsilon_{\min}, 10^{-2} \right]$ in the algebraic system and of the distribution of the inclusions. To perform the product $\mathcal{H}_{\mathrm{A}} \ol{\xi}$, $\ol{\xi} \in \mathbb{R}^N$, we used $12$ iterations of the PCG method for systems with the matrix $\bold{A}$. \begin{table}[h!]
\centering \begin{tabular}{|c |c| c |c |} \hline \multirow{ 2}{*}{\diagbox[width=6.5em]{$\varepsilon_{\min}$}{method}} & & & \\ & \textbf{PL} & PCG & PU\\ \cline{2-4} \hline $10^{-2}$ & \textbf{44} &176 & 120 \\ [1ex] $10^{-4}$ & \textbf{46} & 184 & 132 \\ [1ex] $10^{-6}$ & \textbf{46} & 184 & 132 \\ [1ex] \hline \end{tabular} \caption{Arithmetical cost} \label{T:table:5} \end{table} In Tables \ref{T:table:3} and \ref{T:table:4}, we display the number of iterations for the PL and PCG methods described in Sections \ref{S:PL} and \ref{S:PCG}, respectively. The tests are done for various numbers of particles $m$, and for the two types of particle distribution: periodic and random. As is clearly seen, the number of iterations does not depend on $\varepsilon_{\min}$, the distribution of the particles, their number, or the mesh size $h$ of $\Omega_h$. Using the results of the tests presented in Tables \ref{T:table:2}, \ref{T:table:3} and \ref{T:table:4}, we compare the three methods (PU, PL, and PCG) in terms of their arithmetical costs in Table \ref{T:table:5}. Note that, due to Remark \ref{R:rk1}, the major computational effort is associated with multiplications by the matrices $\mathcal{H}_\mathrm{A}$ and $\bold{A}$; hence, this table presents the number of multiplications by $\mathcal{H}_\mathrm{A}$ and $\bold{A}$ needed to solve the underlying systems with accuracy $\delta$ according to criteria \eqref{E:stop-crit1} and \eqref{E:stop-crit2}. Based on these results, we may conclude that for the above test problems, the PL method is almost three times faster than the PU method, and almost four times faster than the PCG method. Obviously, the results and conclusions may be different for other test problems and for a different choice of the preconditioner $\mathcal{H}_{\mathrm{A}}$ for the matrix $\bold{A}$.
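For completeness, the direction-form recurrences (\ref{E:prec_pcg-0})-(\ref{E:prec_pcg-2}) of Section \ref{S:PCG} can be written generically for any symmetric positive definite matrix and preconditioner. The Python sketch below is an illustration on a hypothetical dense toy system, not the solver used in the experiments above:

```python
import numpy as np

def pcg(K, G, H_apply, z0, tol=1e-10, max_iter=500):
    """Conjugate gradients in direction form: z^k = z^{k-1} - beta_k * xi_k.

    K is SPD, H_apply applies the (SPD) preconditioner to a vector, and the
    directions xi_k are K-conjugated against the previous direction.
    """
    z = z0.copy()
    r = K @ z - G                                  # residual K z^{k-1} - G
    xi_prev = None
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        hr = H_apply(r)
        if xi_prev is None:                        # k = 1
            xi = hr
        else:                                      # k >= 2
            K_xi_prev = K @ xi_prev
            alpha = (hr @ K_xi_prev) / (K_xi_prev @ xi_prev)
            xi = hr - alpha * xi_prev
        K_xi = K @ xi
        beta = (r @ xi) / (K_xi @ xi)
        z = z - beta * xi
        r = r - beta * K_xi                        # cheap residual update
        xi_prev = xi
    return z

# Toy SPD system with a Jacobi preconditioner (illustration only).
rng = np.random.default_rng(0)
n = 20
R = rng.standard_normal((n, n))
K = R @ R.T + n * np.eye(n)
G = rng.standard_normal(n)
z = pcg(K, G, lambda v: v / np.diag(K), np.zeros(n))
assert np.linalg.norm(K @ z - G) < 1e-8
```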
\section{Conclusions} \label{S:concl} This paper proposes three preconditioned iterative methods for solving a linear system of the saddle point type arising in discretization of the diffusion problem \eqref{E:pde-form} that involves large variation of its coefficient \eqref{E:sigma}. The latter feature is typically called {\it high contrast}. The main theoretical outcome, presented in Theorem \ref{T:main}, yields that with the proposed preconditioner $\boldsymbol{\mathcal{H}}$, the condition number of the preconditioned matrix $\boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{A}}_\varepsilon$ is $O(1)$. This implies robustness of the proposed preconditioners. The assumption of a regularly shaped and quasi-uniform mesh $\Omega_h$ is needed to apply the norm-preserving extension theorem of \cite{widlund87}, which yields independence of the convergence rates from the mesh size $h$. In order to claim independence of the convergence rates from the diameters of $\mathcal{D}^s$, $s\in \{1,\ldots,m\}$, and their locations, we need assumption \eqref{E:d_st_estimate}. Our numerical experiments, based on the simple test scenarios presented in Section \ref{S:numerics}, confirm the theoretical findings of this paper and demonstrate that the convergence rates of the proposed iterative schemes are independent of the contrast, the discretization size, and the number of inclusions and their sizes. A very important feature of the discussed procedures is that they are computationally inexpensive, with arithmetical cost proportional to the size of the linear system. This makes the proposed methodology attractive for the type of applications that use high contrast particles. \end{document}
\begin{document} \title{\textbf{On Geometric Ergodicity of Additive and Multiplicative Transformation Based Markov Chain Monte Carlo in High Dimensions}} \author{ Kushal Kr. Dey$^{\dag}$ , Sourabh Bhattacharya$^{\ddag, +}$ } \date{} \maketitle \begin{center} $^{\dag}$ University of Chicago \\ $^{\ddag}$ Indian Statistical Institute\\ $+$ Corresponding author: \href{mailto: [email protected]}{[email protected]} \end{center} \begin{abstract} Recently \ctn{Dutta13} introduced a novel Markov Chain Monte Carlo methodology that can simultaneously update all the components of high dimensional parameters using simple deterministic transformations of a one-dimensional random variable drawn from any arbitrary distribution defined on a relevant support. The methodology, which the authors refer to as Transformation-based Markov Chain Monte Carlo (TMCMC), greatly enhances computational speed and acceptance rate in high-dimensional problems. Two significant transformations associated with TMCMC are additive and multiplicative transformations. Combinations of additive and multiplicative transformations are also of much interest. In this work we investigate geometric ergodicity associated with additive and multiplicative TMCMC, along with their combinations, assuming that the target distribution is multi-dimensional and belongs to the super-exponential family; we also illustrate their efficiency in practice with simulation studies. \\[2mm] {\bf Keywords}: {\it Acceptance Rate; Geometric Ergodicity; High Dimension; Mixture; Proposal Distribution; Transformation-based Markov Chain Monte Carlo.} \end{abstract} \section{Introduction} It is well-known that in high dimensions traditional Markov Chain Monte Carlo (MCMC) methods, such as the Metropolis-Hastings algorithm, face several challenges, with respect to computational complexity, as well as with convergence issues. 
Indeed, Bayesian computation often requires inversion of high-dimensional matrices in each MCMC iteration, causing enormous computational burden. Moreover, MCMC for such high-dimensional problems may converge at an extremely slow rate because of the complicated posterior dependence among the parameters. This implies the requirement of an extremely large number of iterations, but since even individual iterations may be computationally burdensome, traditional MCMC methods do not seem to be ideally suited for Bayesian analysis of complex, high-dimensional problems. In an effort to combat these problems, \ctn{Dutta13} proposed a novel methodology that can update all the parameters simultaneously in a single block using simple deterministic bijective transformations of a one-dimensional random variable (or any other low-dimensional random variables) drawn from some arbitrary distribution. The idea effectively reduces the high-dimensional random parameter to a one-dimensional parameter, thus dramatically improving computational speed and acceptance rate. Details are provided in \ctn{Dutta13}. Among the deterministic, bijective transformations, \ctn{Dutta13} recommend the additive and the multiplicative transformations. Here it is important to mention that the multiplicative transformation is designed to update parameters on the real line, not just on $(0,\infty)$, and thus cannot be represented as a log-additive transformation. In Sections \ref{subsec:additive_tmcmc} and \ref{subsec:multiplicative_tmcmc} we provide brief overviews of additive and multiplicative TMCMC, respectively. In Section \ref{subsec:add_mult_tmcmc} we briefly explain additive-multiplicative TMCMC, which is a combination of additive and multiplicative TMCMC. This paper deals with geometric ergodicity (or geometric rate of convergence) of the TMCMC chain (both additive and multiplicative, along with their mixtures of two kinds) to the multi-dimensional stationary distribution.
The geometric ergodicity property, apart from theoretically ensuring convergence of the underlying Markov chain to the stationary distribution at a geometric rate, also ensures asymptotic stability of a regular family of stochastic estimates through the application of the central limit theorem (see \ctn{Meyn93}, Chapter 17, and \ctn{jones01}, Section 5.3). The geometric ergodicity of the Random Walk Metropolis-Hastings (RWMH) chain is already well documented (see \ctn{Mengersen96}, \ctn{Roberts96}, \ctn{Jarner00}). Some extensions of these results to chains with polynomial rates of convergence and specific forms of target densities (for instance, heavy-tailed families) are also available in the literature (\ctn{Jarner02}, \ctn{Jarner07}). In this paper we present conditions that guarantee geometric ergodicity of the TMCMC chain corresponding to both additive and multiplicative moves, when the target distribution is multi-dimensional. Crucially, we assume that the target distribution belongs to the super-exponential family. Note that the super-exponential assumption has also been crucially used by \ctn{Jarner00} for proving geometric ergodicity of RWMH for multi-dimensional target distributions. While dealing with multiplicative TMCMC, we encounter a technical problem, which is bypassed by forming an appropriate mixture of additive and multiplicative moves, which we refer to as ``essentially fully'' multiplicative TMCMC. We also consider a usual mixture of additive and multiplicative moves. We establish geometric ergodicity of both kinds of mixtures and demonstrate with simulation studies that the usual mixture outperforms RWMH, additive TMCMC, as well as ``essentially fully'' multiplicative TMCMC. In Section \ref{sec:geo_additive}, we give conditions for geometric ergodicity of additive TMCMC.
The approach to establishing geometric ergodicity of the TMCMC chains associated with multiplicative TMCMC is more complicated and is covered in detail in Section \ref{sec:geo_multiplicative}. In Section \ref{sec:simulation}, we illustrate the practical implications of our theoretical results by conducting simulation studies, where we numerically compare convergence issues of the TMCMC approach with that of RWMH, especially in high dimensions. In Section \ref{sec:non_super_exponential}, we discuss the extension of our approach to situations where the high-dimensional target densities are not in the super-exponential family but can be dealt with using special techniques, in particular, a diffeomorphism based method developed by \ctn{Johnson12}, and conduct detailed simulation studies in such set-up, demonstrating that TMCMC very significantly outperforms RWMH in that set-up. Concluding remarks are provided in Section \ref{sec:conclusions}. \subsection{Additive TMCMC} \label{subsec:additive_tmcmc} Suppose that we are simulating from a $d$-dimensional space (usually $\mathbb{R}^{d}$), and suppose we are currently at a point $x= (x_{1}, \ldots, x_{d})$. Let us define $d$ random variables $b_{1}, \ldots, b_{d}$, such that, for $i=1,\ldots,d$, \begin{equation} b_{i} =\left\{\begin{array}{ccc} +1 & \mbox{with probability} & p_i; \\ -1 & \mbox{with probability} & 1-p_i. \end{array}\right. \label{eq:b_add} \end{equation} The additive TMCMC uses moves of the following type: \begin{equation*} (x_{1}, \ldots, x_{d}) \rightarrow (x_{1}+ b_{1}\epsilon, \ldots, x_{d}+b_{d}\epsilon), \end{equation*} where $\epsilon\sim g^{(1)}=q^{(1)}(\cdot)I_{\{\epsilon>0\}}$. Here $q^{(1)}(\cdot)$ is an arbitrary density with support $\mathbb R_+$, the positive part of the real line, and for any set $A$, $I_{A}$ denotes the indicator function of $A$. We define $T^{(1)}_b(x,\epsilon)=(x_1+b_1\epsilon,\ldots,x_d+b_d\epsilon)$ to be the additive transformation of $x$ corresponding to the `move-type' $b$.
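A single additive TMCMC step with $p_i=1/2$ can be sketched in Python as below. The half-normal choice of $q^{(1)}$ and the standard Gaussian target are our illustrative assumptions, not prescriptions of the method; since the Jacobian of the additive move is one, the Metropolis acceptance ratio is simply $\min\{1,\pi(y)/\pi(x)\}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):
    # Illustrative target: unnormalized standard Gaussian log-density.
    return -0.5 * np.sum(x ** 2)

def additive_tmcmc_step(x, scale=1.0):
    """One additive TMCMC move: a single epsilon > 0 is drawn and then
    added to or subtracted from every coordinate independently (p_i = 1/2)."""
    eps = abs(rng.normal(0.0, scale))         # epsilon ~ q^{(1)} on (0, infinity)
    b = rng.choice([-1.0, 1.0], size=len(x))  # move-type b, each sign w.p. 1/2
    y = x + b * eps                           # T^{(1)}_b(x, epsilon); Jacobian = 1
    # Metropolis acceptance: alpha = min{1, pi(y)/pi(x)}
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        return y
    return x

x = np.full(10, 5.0)                          # start far from the mode
for _ in range(5000):
    x = additive_tmcmc_step(x)
```

Note that only one uniform variate, one scalar $\epsilon$, and $d$ signs are needed per iteration, which is the source of the computational savings discussed above.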
In this work, we shall assume that $p_i=1/2$ for $i=1,\ldots,d$. Note that the Jacobian of the additive transformations is one. Thus, a single $\epsilon$ is simulated from $q^{(1)}(\cdot)I_{\{\epsilon>0\}}$, which is then either added to, or subtracted from, each of the $d$ coordinates of $x$ with probability $1/2$. Assuming that the target distribution is proportional to $\pi$, the new move $T^{(1)}_b(x,\epsilon)$, corresponding to the move-type $b$, is accepted with probability \begin{equation} \alpha=\min\left\{1,\frac{\pi(T^{(1)}_b(x,\epsilon))}{\pi(x)}\right\}. \label{eq:acc_additive} \end{equation} The path diagram for additive TMCMC that displays the possible regions to which our chain can move starting from a fixed point is presented in Figure~\ref{fig:addTMCMC}. \begin{figure} \caption{Path diagram for Additive TMCMC in one step from a fixed point denoted by the red patch in the middle.} \label{fig:addTMCMC} \end{figure} \begin{figure} \caption{Path diagram for the first version of Multiplicative TMCMC in one step from a fixed point denoted by the red patch.} \label{fig:fullMultiplicative} \end{figure} \begin{figure} \caption{Path diagram for the second version of Multiplicative TMCMC in one step from a fixed point denoted by the red patch.} \label{fig:fullMultiplicative2} \end{figure} In this paper we show, under appropriate and reasonably general assumptions on $\pi$, that additive TMCMC with $p_i=1/2;~i=1,\ldots,d$, is geometrically ergodic for any finite dimension $d$. \subsubsection{Discussion on non-uniform move-type probabilities for additive TMCMC} \label{subsubsec:additive_non_uniform_move_type} For simplicity of illustration, let us assume that $p_i=p;~i=1,\ldots,d$. Also, let $Y=\sum_{i:b_i=1}b_i$. Then $Y\sim \mbox{Binomial}\left(d,p\right)$.
The acceptance probability is then given by \begin{align} \alpha&=\min\left\{1,\left(\frac{p}{1-p}\right)^Y\left(\frac{1-p}{p}\right)^{(d-Y)} \frac{\pi(T^{(1)}_b(x,\epsilon))}{\pi(x)}\right\}\notag\\ &=\min\left\{1,\left(\frac{p}{1-p}\right)^{(2Y-d)} \frac{\pi(T^{(1)}_b(x,\epsilon))}{\pi(x)}\right\}. \label{eq:acc_additive2} \end{align} Now, as $d\rightarrow\infty$, $(2Y-d)\stackrel{a.s.}{\sim} (2dp-d)=d(2p-1)$, where, for any two random sequences $\left\{m_d;~d=1,2,\ldots\right\}$ and $\left\{n_d;~d=1,2,\ldots\right\}$, $m_d\stackrel{a.s.}{\sim} n_d$ indicates $\underset{d\rightarrow\infty}{\lim}\frac{m_d}{n_d}=1$, almost surely. Hence, for $p\neq 1/2$, $\left(\frac{p}{1-p}\right)^{(2Y-d)}\stackrel{a.s.}{\rightarrow}\infty$. Now note that, for additive TMCMC with a single $\epsilon$, as $d\rightarrow\infty$, the ratio $\frac{\pi(T^{(1)}_b(x,\epsilon))}{\pi(x)}$ is expected to converge to zero at a very slow rate. Indeed, it follows from the supplement of \ctn{Dutta13} that under the strong log-concavity assumption on $\pi$, the acceptance rate with these non-uniform move-type probabilities satisfies the following inequalities as $d\rightarrow\infty$: \begin{equation} \left\{2\Phi\left(\sqrt{-\frac{2}{dM_d}\log\frac{1-\psi_2}{c_d}}\right)-1\right\}\leq AR_p\leq \left\{2\Phi\left(\sqrt{-\frac{2}{dM_d}\log\frac{\psi_1}{c_d}}\right)-1\right\}, \label{eq:AR_non_uniform} \end{equation} where $0<\psi_1,\psi_2<1$, $M_d=O\left(d^t\right);~t>2$, and $c_d = \left(\frac{p}{1-p}\right)^{d(2p-1)}$. For $p=1/2$, we obtain the following asymptotic inequality, proved in the supplement of \ctn{Dutta13}: \begin{equation} \left\{2\Phi\left(\sqrt{-\frac{2}{dM_d}\log (1-\psi_2)}\right)-1\right\}\leq AR_{\frac{1}{2}}\leq \left\{2\Phi\left(\sqrt{-\frac{2}{dM_d}\log \psi_1}\right)-1\right\}, \label{eq:AR_uniform} \end{equation} which is a special case of (\ref{eq:AR_non_uniform}).
It is shown in the supplement of \ctn{Dutta13} that for $p=1/2$, as $d\rightarrow\infty$, the acceptance rate of additive TMCMC tends to zero at a much slower rate than that of the normal random walk Metropolis-Hastings algorithm. In fact, it is easy to see that $AR_p\rightarrow 0$ as $d\rightarrow\infty$ for any $p\in (0,1)$, and quite importantly, it holds that $\frac{AR_p}{AR_{\frac{1}{2}}}\rightarrow\infty$ as $d\rightarrow\infty$. In other words, for high-dimensional target distributions, the additive TMCMC based acceptance rate can be further improved with non-uniform move-type probabilities. But an increase in acceptance rate does not necessarily lead to faster convergence of the underlying Markov chain. Hence, although higher acceptance rates are to be expected of additive TMCMC for non-uniform move-type probabilities in high dimensions, faster rates of convergence may still not be achieved. We reserve the investigation of the effects of non-uniform move-type probabilities on convergence rate for our future research. \subsection{Multiplicative TMCMC} \label{subsec:multiplicative_tmcmc} Again suppose that we are simulating from a $d$-dimensional space (say, $\mathbb R^d$), and that we are currently at a point $x= (x_{1}, \ldots, x_{d})$. Let us now modify the definition of the random variables $b_{1}, \ldots, b_{d}$, such that, for $i=1,\ldots,d$, \begin{equation} b_{i} =\left\{\begin{array}{ccc} +1 & \mbox{with probability} & p_i; \\ 0 & \mbox{with probability} & q_i;\\ -1 & \mbox{with probability} & 1-p_i-q_i. \end{array}\right. \label{eq:b_mult} \end{equation} Let $\epsilon\sim g^{(2)}=q^{(2)}(\cdot)I_{\{|\epsilon|\leq 1\}}$. If $b_i=+1$, then $x_i\rightarrow x_i\epsilon$; if $b_i=-1$, then $x_i\rightarrow x_i/\epsilon$; and if $b_i=0$, then $x_i\rightarrow x_i$, that is, $x_i$ remains unchanged. Let the transformed coordinate be denoted by $x^*_i$. Also, let $J(b,\epsilon)$ denote the Jacobian of the transformation $(x,\epsilon)\mapsto (x^*,\epsilon)$.
We denote $x^*$ by $T^{(2)}_b(x,\epsilon)$, the multiplicative transformation of $x$ associated with the move-type $b$. For example, if $d=2$, then for $b=(1,1)$, $T^{(2)}_b(x,\epsilon)=(x_1\epsilon,x_2\epsilon)$ and $|J(b,\epsilon)|=\epsilon^2$; for $b=(-1,-1)$, $T^{(2)}_b(x,\epsilon)=(x_1/\epsilon,x_2/\epsilon)$ and $|J(b,\epsilon)|=\epsilon^{-2}$. For $b=(1,-1)$, $b=(-1,1)$, and $b=(0,0)$, $T^{(2)}_b(x,\epsilon)=(x_1\epsilon,x_2/\epsilon)$, $(x_1/\epsilon,x_2\epsilon)$, and $(x_1,x_2)$, respectively, and in all these three instances, $|J(b,\epsilon)|=1$. For $b=(1,0)$ and $b=(0,1)$, $T^{(2)}_b(x,\epsilon)=(x_1\epsilon,x_2)$ and $T^{(2)}_b(x,\epsilon)=(x_1,x_2\epsilon)$, respectively, and in both these cases $|J(b,\epsilon)|=|\epsilon|$. For $b=(-1,0)$ or $b=(0,-1)$, $T^{(2)}_b(x,\epsilon)=(x_1/\epsilon,x_2)$ and $(x_1,x_2/\epsilon)$, respectively, and $|J(b,\epsilon)|=|\epsilon|^{-1}$ in both these cases. In general, the transformation of the $i$-th coordinate is given by $x_i\epsilon^{b_i}$ and the Jacobian is given by $|\epsilon|^{\sum_{i=1}^db_i}$. The path diagram for multiplicative TMCMC that displays the possible range of values to which our chain can move starting from a fixed point is presented in Figure~\ref{fig:fullMultiplicative}. We envisage another version of multiplicative TMCMC where we first generate $\epsilon\sim g^{(3)}=q^{(3)}(\cdot)I_{\{0<\epsilon\leq 1\}}$, and then make the transformation $x_i\rightarrow c_ix_i\epsilon^{b_i}$, where $c_i$ takes the values $1$ and $-1$ with probabilities $r_i$ and $1-r_i$, respectively, with $0<r_i<1$. In this case it is permissible to set $q_i$, the probability of $b_i=0$, to zero. Observe that the Jacobian remains the same as in the first version of multiplicative TMCMC. The path diagram for this version is shown in Figure~\ref{fig:fullMultiplicative2}.
Since the theory of geometric ergodicity remains essentially the same for both versions of multiplicative TMCMC, we consider only the first version for the theoretical treatment. For our purpose, we assume that $p_i=q_i=1/3;~i=1,\ldots,d$. Then, assuming that the target distribution is proportional to $\pi$, the new move $T^{(2)}_b(x,\epsilon)$ is accepted with probability \begin{equation} \alpha=\min\left\{1,\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}|J(b,\epsilon)|\right\}. \label{eq:acc_multiplicative} \end{equation} \subsubsection{Discussion on non-uniform move-type probabilities for multiplicative TMCMC} \label{subsubsec:multiplicative_non_uniform_move_type} For simplicity of illustration, let us assume that $p_i=p$ and $q_i=q$, for $i=1,\ldots,d$. Also, let $Y=\sum_{i:b_i=1}b_i$ denote the number of $+1$'s in $b$, and let $Z=\#\{i:b_i=0\}$ denote the number of $0$'s in $b$. Then $Y\sim \mbox{Binomial}\left(d,p\right)$, and $Z\sim \mbox{Binomial}\left(d,q\right)$. The acceptance probability is then given by \begin{align} \alpha&=\min\left\{1,\left(\frac{p}{1-p-q}\right)^Y\left(\frac{1-p-q}{p}\right)^{(d-Y-Z)} \frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}|J(b,\epsilon)|\right\}\notag\\ &=\min\left\{1,\left(\frac{p}{1-p-q}\right)^{(2Y+Z-d)} \frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}|J(b,\epsilon)|\right\}. \label{eq:acc_multiplicative2} \end{align} As $d\rightarrow\infty$, $(2Y+Z-d)\stackrel{a.s.}{\sim} d(2p+q-1)$. If $2p+q>1$, then $p>(1-q)/2$, so that $1-p-q<(1-q)/2$. Hence, $p/(1-p-q)>1$, implying that $\left(\frac{p}{1-p-q}\right)^{(2Y+Z-d)}\stackrel{a.s.}{\rightarrow}\infty$. If, on the other hand, $2p+q<1$, then $p<(1-q)/2$, and $1-p-q>(1-q)/2$, implying $p/(1-p-q)<1$. Again, this implies $\left(\frac{p}{1-p-q}\right)^{(2Y+Z-d)}\stackrel{a.s.}{\rightarrow}\infty$, since now the base is less than one while the exponent tends to $-\infty$ almost surely. In contrast with additive TMCMC, for multiplicative TMCMC the asymptotic form of the acceptance rate is not available yet, even for strongly log-concave target distributions. However, it is clear that for $2p+q\neq 1$, the acceptance rate is asymptotically much higher than that associated with $2p+q=1$.
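A single multiplicative TMCMC step with $p_i=q_i=1/3$, including the Jacobian correction of (\ref{eq:acc_multiplicative}), can be sketched as follows. The uniform choice of $q^{(2)}$ on $\{|\epsilon|\leq 1\}$ and the Gaussian target are illustrative assumptions of this sketch only.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_pi(x):
    # Illustrative target: unnormalized standard Gaussian log-density.
    return -0.5 * np.sum(x ** 2)

def log_jacobian(b, eps):
    # |J(b, eps)| = |eps|^{sum_i b_i} for the move x_i -> x_i * eps^{b_i}
    return np.sum(b) * np.log(abs(eps))

def multiplicative_tmcmc_step(x):
    """One multiplicative TMCMC move: a single eps with |eps| <= 1 is drawn;
    coordinate i is multiplied by eps, divided by eps, or left unchanged,
    according as b_i = +1, -1 or 0 (each with probability 1/3)."""
    eps = rng.uniform(-1.0, 1.0)              # eps ~ q^{(2)} I{|eps| <= 1}
    b = rng.choice([-1, 0, 1], size=len(x))   # move-type b (integer-valued)
    y = x * np.power(eps, b)                  # T^{(2)}_b(x, eps)
    # Acceptance ratio includes the Jacobian term:
    log_alpha = log_pi(y) - log_pi(x) + log_jacobian(b, eps)
    if np.log(rng.uniform()) < log_alpha:
        return y
    return x

x = np.full(5, 3.0)
for _ in range(2000):
    x = multiplicative_tmcmc_step(x)
```

For instance, with $d=2$, $b=(1,1)$ and $\epsilon=0.5$, the helper returns $\log|J|=2\log 0.5$, matching the worked examples above.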
Although for convenience of presentation we prove geometric ergodicity of multiplicative TMCMC assuming $p_i=p=1/3$ and $q_i=q=1/3$, $i=1,\ldots,d$, the steps of our proofs remain the same for any other $0<p,q<1$ satisfying $2p+q=1$ (note that for such $p,q$, $p+q=(1+q)/2<1$ is automatically satisfied). We reserve the cases $2p+q\neq 1$ for our future investigation. Apart from additive and multiplicative TMCMC we also consider appropriate geometrically ergodic mixtures of additive and multiplicative TMCMC, which not only help bypass a somewhat undesirable theoretical assumption regarding the high-dimensional target density $\pi$, but, as our simulation studies demonstrate, also ensure faster convergence compared to individual additive TMCMC and individual multiplicative TMCMC. \subsection{Additive-Multiplicative TMCMC} \label{subsec:add_mult_tmcmc} \ctn{Dutta13} described another TMCMC algorithm that uses the additive transformation for some coordinates of $x$ and the multiplicative transformation for the remaining coordinates. \ctn{Dutta13} refer to this as additive-multiplicative TMCMC. Let the target density $\pi$ be supported on $\mathbb R^d$. Then, if the additive transformation is used for the $i$-th coordinate, we update $x_i$ to $x_i+b_i\epsilon_1$, where $b_i$ is defined by (\ref{eq:b_add}), and $\epsilon_1\sim g^{(1)}$. On the other hand, if for any coordinate $x_j$ the multiplicative transformation is used, then we simulate $b_j$ following (\ref{eq:b_mult}), simulate $\epsilon_2\sim g^{(2)}$, and update $x_j$ to either $x_j\epsilon_2$ or $x_j/\epsilon_2$ according as $b_j=+1$ or $-1$. If $b_j=0$, then we leave $x_j$ unchanged. The new proposal is accepted with probability having the same form as (\ref{eq:acc_multiplicative}).
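The combined move can be sketched as below. Here the first block of coordinates is updated additively with $\epsilon_1$ and the rest multiplicatively with $\epsilon_2$; the particular split, the half-normal $g^{(1)}$, the uniform $g^{(2)}$, and the Gaussian target are all illustrative assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_pi(x):
    return -0.5 * np.sum(x ** 2)   # illustrative Gaussian target

def add_mult_tmcmc_step(x, n_add):
    """One additive-multiplicative TMCMC move: coordinates 0..n_add-1 are
    updated additively with a common eps1, the remaining coordinates
    multiplicatively with a common eps2.  Only the multiplicative block
    contributes to the Jacobian."""
    d = len(x)
    eps1 = abs(rng.normal())                    # eps1 ~ g^{(1)} on (0, infinity)
    eps2 = rng.uniform(-1.0, 1.0)               # eps2 ~ g^{(2)}, |eps2| <= 1
    b_add = rng.choice([-1.0, 1.0], size=n_add)
    b_mult = rng.choice([-1, 0, 1], size=d - n_add)
    y = np.concatenate([x[:n_add] + b_add * eps1,
                        x[n_add:] * np.power(eps2, b_mult)])
    log_jac = np.sum(b_mult) * np.log(abs(eps2))   # |eps2|^{sum_j b_j}
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x) + log_jac:
        return y
    return x

x = np.full(6, 2.0)
for _ in range(2000):
    x = add_mult_tmcmc_step(x, n_add=3)
```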
Note that unlike the cases of additive TMCMC and multiplicative TMCMC, which use a single $\epsilon$ to update all the $d$ coordinates of $x$, here we need two $\epsilon$'s, $\epsilon_1$ and $\epsilon_2$, to update the $d$ coordinates. The proof of geometric ergodicity of additive-multiplicative TMCMC is almost the same as that of multiplicative TMCMC, and hence we omit it from this paper. \subsection{Geometric ergodicity} Let $P$ be the transition kernel of a $\psi$-irreducible, aperiodic, positive Harris recurrent Markov chain with the stationary distribution $\pi$. Then the chain is geometrically ergodic if there exists a function $V \geq 1$ which is finite at least at one point, and constants $0<\rho<1$ and $M<\infty$ satisfying \begin{equation}\label{eq:geo} \|P^{n}(x,\cdot)- \pi(\cdot)\|_{TV} \leq MV(x)\rho^{n} \hspace{0.5 cm} \forall n \geq 1, \end{equation} where $\|\mu\|_{TV}=\underset{g: |g| \leq V}{\sup} |\mu(g)|$ denotes the \emph{$V$-norm} (which reduces to the total variation norm when $V\equiv 1$). A standard way of checking geometric ergodicity is via a result that involves small sets and the `geometric drift condition'. A set $E$ is called small if there exist a positive integer $m$, $\delta>0$ and a probability measure $\nu$ such that for $x\in E$, \begin{equation} P^m(x,\cdot)\geq\delta\nu(\cdot). \end{equation} $P$ is said to have geometric drift to a small set $E$ if there is a function $V \geq 1$, finite for at least one point, and constants $\lambda <1 $ and $\zeta < \infty $ so that \begin{equation}\label{eq:drift} PV(x) \leq \lambda V(x) + \zeta I_{E}(x), \end{equation} where $PV(x)= \int V(y)P(x,dy)$ is the expectation of $V$ after one transition given that one starts at the point $x$, and $I_{E}(x)=1$ if $x\in E$ and $0$ otherwise, is the indicator function of $E$. Theorems 14.0.1 and 15.0.1 in \ctn{Meyn93} establish the fact that if $P$ has geometric drift to a small set $E$, then under certain regularity conditions, $P$ is $\pi$-almost everywhere geometrically ergodic, and the converse is also true.
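As an illustrative numerical check (not part of the proofs), the drift ratio $PV(x)/V(x)$ in (\ref{eq:drift}) can be estimated by Monte Carlo for one-dimensional additive TMCMC on a standard Gaussian target, using the drift function $V=\pi^{-1/2}$ that also appears later in the proofs; the half-normal choice of $q^{(1)}$ is our assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

def drift_ratio(x, n_mc=200_000):
    """Monte Carlo estimate of PV(x)/V(x) for 1-d additive TMCMC on a standard
    Gaussian target, with drift function V(x) = pi(x)^{-1/2} = exp(x^2/4)
    up to a constant.  One step either accepts y = x + b*eps or stays at x."""
    eps = np.abs(rng.standard_normal(n_mc))    # eps ~ half-normal q^{(1)}
    b = rng.choice([-1.0, 1.0], size=n_mc)     # each sign w.p. 1/2
    y = x + b * eps
    log_alpha = np.minimum(0.0, 0.5 * x**2 - 0.5 * y**2)  # MH acceptance
    alpha = np.exp(log_alpha)
    # PV(x)/V(x) = E[ alpha * V(y)/V(x) + (1 - alpha) ]
    return np.mean(alpha * np.exp(0.25 * (y**2 - x**2)) + (1.0 - alpha))

# Far in the tail the estimated ratio drops below 1 (geometric drift), while
# at the mode it exceeds 1 -- which is why a small set around the origin is
# needed in the drift condition.
```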
We now provide necessary and sufficient conditions in favour of (\ref{eq:drift}); the result can be thought of as an adaptation of Lemma 3.5 of \ctn{Jarner00}. \begin{lemma}\label{Lemma 1} Assume that the Markov transition kernel $P$ is associated with additive, multiplicative, or additive-multiplicative TMCMC. If there exists a function $V \geq 1$, finite on bounded sets, such that the following hold: \begin{equation}\label{eq:1con} \underset{\|x\| \rightarrow \infty}{\lim\sup}~{\frac{PV(x)}{V(x)}} < 1\quad\mbox{and} \end{equation} \begin{equation}{\label{eq:infcon}} {\frac{PV(x)}{V(x)}} < \infty \hspace{1 cm} \forall x, \end{equation} then $V$ satisfies the geometric drift condition (\ref{eq:drift}) and hence the chain must be geometrically ergodic. Conversely, if the geometric drift condition is satisfied for some finite $V$, then the above conditions must also hold. \end{lemma} \begin{proof} Assume that for some finite $V$ with $V \geq 1$, the geometric drift condition (\ref{eq:drift}) is satisfied. Now, dividing both sides by $V(x)$, we get $$ \frac{PV(x)}{V(x)} \leq \lambda + \zeta \frac{I_{E}(x)}{V(x)}. $$ Since $V$ is finite and $V \geq 1$, we have $$ \frac{PV(x)}{V(x)} \leq \lambda + \zeta \hspace{0.2 cm} < \infty. $$ Also, as $\|x\| \rightarrow \infty$, since $E$ is a bounded small set, $I_{E}(x) \rightarrow 0$, and hence $$ \underset{\|x\| \rightarrow \infty}{\lim\sup}~{\frac{PV(x)}{V(x)}} \leq \lambda < 1. $$ For the converse, let us fix a value $\gamma <1$. By (\ref{eq:1con}), we can choose $R$ sufficiently large so that $$ {\frac{PV(x)}{V(x)}} < \gamma\quad\mbox{if}\quad \|x\|> R, \quad\mbox{that is,}\quad PV(x)< \gamma V(x)\quad\mbox{if}\quad \|x\| > R. $$ Since $$ PV(x) \leq \frac{PV(x)}{V(x)} V(x), $$ where $\frac{PV(x)}{V(x)}$ is finite by hypothesis (\ref{eq:infcon}) and the function $V$ is finite on any bounded set, it follows that $PV(x)$ is finite on $E=\{x:\|x\| \leq R \}$, which is closed and bounded.
Take $\zeta$ to be the supremum (which must be finite) of $PV(x)$ over the set $E$. In the supplement of \ctn{Dutta13} it is shown that for additive TMCMC, sets of the form $E=\{x:\|x\| \leq R \}$ are small. Defining \begin{equation} \mathcal V=\{(v_1,\ldots,v_d)\in\mathbb R^d: v_i=0~ \mbox{for at least one}~i\in\{1,\ldots,d\}\}, \label{eq:set_V} \end{equation} in the Appendix we will show that compact subsets of $\mathbb R^d\backslash\mathcal V$, which we denote by $E^*$, are small for multiplicative TMCMC; the same result also holds for additive-multiplicative TMCMC. Hence, for all $x$, if $\mathbb E$ is either $E$ or $E^*$, $$ PV(x) \leq \gamma V(x)+ \zeta I_{\mathbb E}(x). $$ This proves the lemma. \end{proof} So, in order to check geometric ergodicity, it is enough to prove (\ref{eq:1con}) and (\ref{eq:infcon}) for the given chain. \newcommand{A^{I_{k}}}{A^{I_{k}}} \newcommand{ R^{I_{k}}}{ R^{I_{k}}} \newcommand{{x_{I_{k}}}(\epsilon)}{{x_{I_{k}}}(\epsilon)} \newcommand*{\bigleft}{\mbox{\Huge $[$}} \newcommand*{\bigright}{\mbox{\Huge $]$}} \newcommand*{\bigcurlleft}{\mbox{\huge $\{$}} \newcommand*{\bigcurlright}{\mbox{\huge $\}$}} \section{Geometric ergodicity of additive TMCMC} \label{sec:geo_additive} We shall now provide necessary and sufficient conditions for geometric ergodicity of additive TMCMC for a broad class of target distributions. The proof follows along the lines of \ctn{Jarner00} and has been suitably modified for our additive TMCMC case. First, we define the notion of super-exponential densities. A density $\pi$ is said to be super-exponential if it is positive, has continuous first partial derivatives, and satisfies \begin{equation} \underset{\|x\| \rightarrow \infty}{\lim} n(x)' \nabla \log \pi(x) = -\infty, \end{equation} where $n(x)$ denotes the unit vector $ \frac{x}{\|x\|} $.
This would imply that for any $K > 0$, there exists $R > 0$ such that \begin{equation}\label{eq:exp} \frac{\pi(x+cn(x))}{\pi(x)} \leq e^{-cK}; \hspace{1 cm} \|x\|\geq R, c\geq 0. \end{equation} In words, the above definition entails that $\pi$ decays faster than exponentially along every direction. It is easy to check that the Gaussian distributions (univariate as well as multivariate, for any variance-covariance matrix) and the Gamma distributions (univariate or independent multivariate) indeed satisfy these conditions. Let the acceptance region and the (potential) rejection region corresponding to the move-type $b$ be defined by $A^{(1)}(b,x)=\{\epsilon:\pi(T^{(1)}_b(x,\epsilon))\geq \pi(x)\}$ and $R^{(1)}(b,x)=\{\epsilon:\pi(T^{(1)}_b(x,\epsilon))<\pi(x)\}$, respectively. Also, let $A^{(1)}(x)=\cup_{b_1,\ldots,b_d}A^{(1)}(b,x)$ and $R^{(1)}(x)=\cap_{b_1,\ldots,b_d}R^{(1)}(b,x)$ denote the overall acceptance region and the overall potential rejection region, respectively. Let $Q^{(1)}(x,B)$ denote the probability, under the additive TMCMC proposal, of reaching the Borel set $B$ from $x$ in one step. Let $P^{(1)}$ denote the Markov transition kernel associated with additive TMCMC. Then the following theorem establishes geometric ergodicity of additive TMCMC in the super-exponential set-up. \begin{theorem}\label{theorem:geo_additive} If the target density $\pi$ is super-exponential and has contours that are nowhere piecewise parallel to $\{x:|x_1|=|x_2|=\cdots = |x_d|\}$, then the additive TMCMC chain satisfies geometric drift if and only if \begin{equation} \underset{\|x\| \rightarrow \infty}{\lim\inf}~ Q^{(1)}(x, A^{(1)}(x)) > 0. \label{eq:liminf_Q_additive} \end{equation} \end{theorem} \begin{proof} Following the notation of \ctn{Jarner00}, let $C_{\pi(x)}$ be the contour of the density $\pi$ corresponding to the value $\pi(x)$.
We define the radial cone $C_{\pi(x)}(\delta)$ around $ C_{\pi(x)}$ to be \begin{equation} C_{\pi(x)} (\delta) = \left \{ y + sn(y) : y \in C_{\pi(x)}, -\delta < s < \delta\right \}. \label{eq:radial_cone} \end{equation} See Figure 1 of \ctn{Jarner00} for visualizing these regions in two dimensions. By (\ref{eq:liminf_Q_additive}) there exists an $\eta > 0$ such that \begin{equation} \underset{\|x\| \rightarrow \infty} {\lim\sup} \hspace{0.2 cm} Q^{(1)} (x, R^{(1)}(x)) \leq 1-2\eta^{\frac{1}{2}}. \label{eq:limsup_Q_additive} \end{equation} Take the belt length $\delta$ such that the probability that a move from $x$, the starting point, falls within this $\delta$ belt is less than $\eta$. That this is possible can be seen as follows. Note that there exists a compact set $E$ such that \begin{equation} Q^{(1)}(x,E^{c}) <\frac{\eta}{2}. \label{eq:Q1} \end{equation} So, for given $\delta$, if we can ensure that our proposal distribution satisfies \begin{equation} Q^{(1)}(x, C_{\pi(x)}(\delta) \cap E) < \frac{\eta}{2}, \label{eq:Q2} \end{equation} then we are done. Note that for any point on the contour, the probability that the additive TMCMC moves result in a value within $C_{\pi(x)}(\delta)$ is bounded above by $2c\delta$, for some finite $c$ (since this probability is $2\int_0^{\delta}g^{(1)}(\epsilon)d\epsilon\leq 2c\delta$, as $g^{(1)}(\epsilon)\leq c$ on $(0,\delta)$, for $0<c<\infty$) and thus can be made as small as desired by choosing $\delta$ sufficiently small. The above argument is easy to visualize in two dimensions as depicted in Figure \ref{fig:addTMCMC} and Figure 1 of \ctn{Jarner00} -- for any point in the first quadrant part of the contour, the probability that the outer and inner TMCMC moves given, respectively, by $(+\epsilon, +\epsilon)$ and $(-\epsilon,-\epsilon)$, land within $C_{\pi(x)}(\delta)$, is bounded above by $2c\delta$. The same argument applies to the other three quadrants.
For the other moves, note that since the contours (intersected with $E$) are nowhere piecewise parallel to $\{x:|x_1|=\cdots = |x_d|\}$, the moves can fall in only a finite number of regions of $C_{\pi(x)}(\delta)\cap E$. Infinitely many regions can be ruled out because of the intersection with $E$, which is compact: if that were the case, then the corresponding infinite collection of intersection points would have a limit point in $E$, which is not possible as the points are isolated. Now, there exists $R_{\eta}$ so that for any point $y$ outside the $\delta$ bound around $x$ and in the rejection region, it holds that \begin{equation} \frac{\pi(y)}{\pi(x)} < \eta; \hspace{0.5 cm} \|x\| > R_{\eta}. \label{eq:super_exp} \end{equation} This can be seen by taking the shortest line from $y$ to the origin; suppose it intersects (after extending if needed) the contour $C_{\pi(x)}$ at $z$. There will be two such values of $z$, and we choose the one that is nearest to $x$. Then, by (\ref{eq:exp}) and the fact that $ \pi(x)$ is the same as $\pi(z)$ (since $x$ and $z$ are on the same contour), we obtain (\ref{eq:super_exp}). To ensure that this $z$ indeed satisfies $ \|z\| > R_{\eta}$, consider the set $E$, which is the set where effectively all the moves fall. Join each point in $E$ to the origin by a straight line and extend it if needed to intersect the contour; consider those points of intersection which are closest to $x$. The points of intersection yield a segment, $D(x)$, of the contour, which contains $x$ and is bounded and closed. Now since this set is bounded, we can always choose $x$ with large enough norm so that all the points in $E$ associated with $D(x)$ have norms greater than $R_{\eta}$. Since $z$ is one such point, we are done. On the other hand, if $y$ is outside the $\delta$ bound around $x$ but falls in the acceptance region, then by the same arguments it holds that \begin{equation} \frac{\pi(x)}{\pi(y)} < \eta.
\label{eq:super_exp2} \end{equation} Now, with $ V(x) = \frac{c}{\sqrt{\pi(x)}}$ for some $c > 0$ chosen appropriately, it holds that \begin{eqnarray} &&\frac{P^{(1)}V(x)}{V(x)}\nonumber\\ &= & \frac{1}{2^{d}} \sum_{b_{1},\cdots, b_{d}} \int _{A^{(1)}(x)} {\left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)} \right ]^{\frac{1}{2}} g^{(1)}(\epsilon)d\epsilon} \nonumber\\ && + \frac{1}{2^{d}} \sum_{b_{1},\cdots, b_{d}} \int _{R^{(1)}(x)}{ \left [ 1- \frac{\pi(x_{1}+b_{1}\epsilon,\ldots,x_{d}+b_{d}\epsilon)}{\pi(x_{1},\ldots,x_{d})} + \left \{ \frac{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}} \right ] g^{(1)}(\epsilon)d\epsilon}.\nonumber\\ \label{eq:sum_ratios_additive} \end{eqnarray} We split the integral over $A^{(1)}(x)$ and that over $R^{(1)}(x)$ into two parts each -- within $C_{\pi(x)}(\delta)$ and outside $C_{\pi(x)}(\delta)$. Since $\frac{\pi(x_{1},\ldots,x_{d})}{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)}<1$ on $A^{(1)}(x)$, it follows from (\ref{eq:Q1}) and (\ref{eq:Q2}) that \begin{align} \int _{A^{(1)}(x)\cap C_{\pi(x)}(\delta)} {\left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)} \right ]^{\frac{1}{2}} g^{(1)}(\epsilon)d\epsilon} &<\frac{\eta}{2}. \label{eq:part1} \end{align} Now note that \begin{align} \left\vert\frac{\|x_n+b\epsilon\|^2}{\|x_n\|^2}-1\right\vert &\leq\frac{2\epsilon\left|b'x_n\right|}{\|x_n\|^2}+\frac{d\epsilon^2}{\|x_n\|^2}\notag\\ &\leq \frac{2\sqrt{d}\epsilon}{\|x_n\|}+\frac{d\epsilon^2}{\|x_n\|^2}.\label{eq:as_conv}\\ & (\mbox{since}~ \left|b'x_n\right|\leq\|b\|\|x_n\|=\sqrt{d}\|x_n\|).\notag \end{align} Let $\mathcal N_{\epsilon}$ denote a null set associated with the probability distribution of $\epsilon$. Then for any compact set $E$ and all $\omega\in\mathcal N^c_{\epsilon}$ such that $\epsilon(\omega)\in E$, (\ref{eq:as_conv}) goes to zero as $\|x_n\|\rightarrow\infty$.
That is, for $\omega\in\mathcal N^c_{\epsilon}\cap\epsilon^{-1}(E)$, $\frac{\|x_n+b\epsilon\|}{\|x_n\|}\rightarrow 1$. Thus, for any $\eta_2(\omega)>0$ there exists $N_0(\eta_2(\omega))$ such that for $n>N_0(\eta_2(\omega))$ with $\|x_{n}\|>\frac{R_{\eta}}{1-\eta_2(\omega)}>R_{\eta}$, since $1+\eta_2(\omega)>\frac{\|x_{n}+b\epsilon\|}{\|x_{n}\|}>1-\eta_2(\omega)$ for $\omega\in\mathcal N^c_{\epsilon}\cap\epsilon^{-1}(E)$, we have \begin{equation} \|x_{n}+b\epsilon\|>(1-\eta_2(\omega))\|x_{n}\|>R_{\eta}. \label{eq:p0} \end{equation} Note that for any given $\zeta>0$, we can choose $E_{\zeta}$ such that $Q^{(1)}(x,E^c_{\zeta,x,b})<\zeta$, for any $x$ and $b$, where $E^c_{\zeta,x,b}=\{x+b\epsilon:\epsilon\in E^c_{\zeta}\}$. Thus, we can choose $\zeta>0$ such that \begin{equation} Q^{(1)}(x,A^{(1)}(x)\cap C^c_{\pi(x)}(\delta)\cap E^c_{\zeta,x,b})<\frac{\eta^{\frac{1}{2}}}{2}Q^{(1)}(x,A^{(1)}(x)). \label{eq:p1} \end{equation} Now, it follows from (\ref{eq:p0}) and (\ref{eq:super_exp2}) that for given $\eta>0$, we can choose $R_{\eta}$ such that for $\|x\|>R_{\eta}$, $\frac{\pi(x_{1},\ldots,x_{d})}{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)}<\frac{\eta}{4}$. Hence, \begin{align} \int _{A^{(1)}(x)\cap C^c_{\pi(x)}(\delta)\cap E_{\zeta,x,b}} {\left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)} \right ]^{\frac{1}{2}} g^{(1)}(\epsilon)d\epsilon} &<\frac{\eta^{\frac{1}{2}}}{2}Q^{(1)}(x,A^{(1)}(x)). \label{eq:part2_1} \end{align} Also, since $\frac{\pi(x_{1},\ldots,x_{d})}{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)}<1$ on $A^{(1)}(x)$, it follows from (\ref{eq:p1}) that \begin{align} \int _{A^{(1)}(x)\cap C^c_{\pi(x)}(\delta)\cap E^c_{\zeta,x,b}} {\left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)} \right ]^{\frac{1}{2}} g^{(1)}(\epsilon)d\epsilon} &<\frac{\eta^{\frac{1}{2}}}{2}Q^{(1)}(x,A^{(1)}(x)).
\label{eq:part2_2} \end{align} Thus, for $\|x\|>R_{\eta}$, it follows from (\ref{eq:part2_1}) and (\ref{eq:part2_2}) that \begin{align} \int _{A^{(1)}(x)\cap C^c_{\pi(x)}(\delta)} {\left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)} \right ]^{\frac{1}{2}} g^{(1)}(\epsilon)d\epsilon} &<\eta^{\frac{1}{2}}Q^{(1)}(x,A^{(1)}(x)). \label{eq:part2} \end{align} Now note that on $R^{(1)}(x)$, $1- \frac{\pi(x_{1}+b_{1}\epsilon,\ldots,x_{d}+b_{d}\epsilon)}{\pi(x_{1},\ldots,x_{d})}<1$, so that \begin{equation} \int_{R^{(1)}(x)}\left[1- \frac{\pi(x_{1}+b_{1}\epsilon,\ldots,x_{d}+b_{d}\epsilon)}{\pi(x_{1},\ldots,x_{d})}\right] g^{(1)}(\epsilon)d\epsilon<Q^{(1)}(x,R^{(1)}(x)). \label{eq:part3} \end{equation} For the integral $\int_{R^{(1)}(x)}\frac{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)}{\pi(x_{1},\ldots,x_{d})} g^{(1)}(\epsilon)d\epsilon$, breaking up $R^{(1)}(x)$ into $R^{(1)}(x)\cap C_{\pi(x)}(\delta)$ and $R^{(1)}(x)\cap C^c_{\pi(x)}(\delta)$ we obtain, in exactly the same way as (\ref{eq:part1}) and (\ref{eq:part2}), the following: \begin{equation} \int_{R^{(1)}(x)}\left[\frac{\pi(x_{1}+b_{1}\epsilon,\ldots, x_{d}+b_{d}\epsilon)}{\pi(x_{1},\ldots,x_{d})}\right]^{\frac{1}{2}} g^{(1)}(\epsilon)d\epsilon <\frac{\eta}{2}+\eta^{\frac{1}{2}}Q^{(1)}(x,R^{(1)}(x)), \label{eq:part4} \end{equation} for $ \|x\| > R_{\eta}$. Combining (\ref{eq:part1}), (\ref{eq:part2}), (\ref{eq:part3}) and (\ref{eq:part4}) we obtain \begin{eqnarray} \frac{P^{(1)}V(x)}{V(x)} &< & \eta+\eta^{\frac{1}{2}} Q^{(1)}(x, A^{(1)}(x)) + \left (1+\eta^{\frac{1}{2}} \right ) Q^{(1)} (x, R^{(1)}(x)) \nonumber \\ & =& \eta +\eta^{\frac{1}{2}} + Q^{(1)} (x, R^{(1)}(x)). 
\nonumber \\ \end{eqnarray} Using (\ref{eq:limsup_Q_additive}), we obtain \begin{eqnarray} \underset{\|x\| \rightarrow \infty} {\lim\sup}~\frac{P^{(1)}V(x)}{V(x)}&\leq & \eta +\eta^{\frac{1}{2}} + \underset{\|x\| \rightarrow \infty} {\lim\sup}~Q^{(1)} (x, R^{(1)}(x))\nonumber\\ & \leq & 1- \eta^{\frac{1}{2}} +\eta\nonumber\\ &<& 1.\nonumber \end{eqnarray} Thus, (\ref{eq:1con}) is satisfied. Since all the ratios in the integrals of (\ref{eq:sum_ratios_additive}) are less than 1, it is clear that $P^{(1)}V(x)/V(x)<\infty$ for all $x$, so that (\ref{eq:infcon}) is also satisfied. This proves geometric ergodicity of additive TMCMC. Now we prove that if additive TMCMC is geometrically ergodic, then (\ref{eq:liminf_Q_additive}) is satisfied. In fact, we prove that if (\ref{eq:liminf_Q_additive}) fails, that is, if $\underset{\|x\| \rightarrow \infty} {\lim\sup}~Q^{(1)} (x, R^{(1)}(x))=1$, then $\underset{\|x\| \rightarrow \infty} {\lim\sup}~P^{(1)} (x, \{x\})=1$. Indeed, it follows from Theorem 5.1 of \ctn{Roberts96} that the latter condition implies that $P^{(1)}$ is not geometrically ergodic. We can choose a compact set $E$ such that $Q^{(1)}(x,E^c)<\eta$, and $\delta$ small enough that $\underset{\|x\| \rightarrow \infty} {\lim\sup}~Q^{(1)} (x,C_{\pi(x)}(\delta)\cap E)\leq\eta$.
This and (\ref{eq:super_exp}) imply that \begin{eqnarray} \underset{\|x\| \rightarrow \infty} {\lim\sup}~P^{(1)}(x,\{x\}) &\geq & \underset{\|x\| \rightarrow \infty} {\lim\sup}~ \frac{1}{2^{d}} \sum_{b_{1},\cdots, b_{d}}\int _{R^{(1)}(x)}{ \left [ 1- \frac{\pi(x_{1}+b_{1}\epsilon,\ldots,x_{d}+b_{d}\epsilon)}{\pi(x_{1},\ldots,x_{d})} \right ] g^{(1)}(\epsilon)d\epsilon}\nonumber\\ & \geq & \underset{\|x\| \rightarrow \infty} {\lim\sup}~ \frac{1}{2^{d}} \sum_{b_{1},\cdots, b_{d}} \int _{R^{(1)}(x)\cap E\cap [C_{\pi(x)}(\delta)]^c}{ \left [ 1- \frac{\pi(x_{1}+b_{1}\epsilon,\ldots,x_{d}+b_{d}\epsilon)}{\pi(x_{1},\ldots,x_{d})} \right ] g^{(1)}(\epsilon)d\epsilon}\nonumber\\ &\geq & (1-\eta) \underset{\|x\| \rightarrow \infty} {\lim\sup}~ Q^{(1)}(x,R^{(1)}(x)\cap E\cap [C_{\pi(x)}(\delta)]^c)\nonumber\\ &\geq & (1-\eta)(1-2\eta).\notag \end{eqnarray} Since $\eta>0$ is arbitrary, the proof is complete. \end{proof} \begin{figure} \caption{A contour of a spherically symmetric distribution. Here $x$ is the current state lying on the contour (first quadrant), and the four directions that can be taken by the next move of additive TMCMC are displayed. Here $p=q=1/2$ are the move-type probabilities.} \label{fig:norm} \end{figure} Note that for spherically symmetric super-exponential distributions (for example, the standard Gaussian), the conditions of Theorem \ref{theorem:geo_additive} naturally hold. For instance, the fact that no part of the contour is parallel to $\{x:|x_1|=|x_2|=\cdots = |x_d|\}$ is quite obvious. To check that $\underset{\|x\| \rightarrow \infty}{\lim\inf}~Q^{(1)}(x, A^{(1)}(x)) > 0$, first observe (see Figure \ref{fig:norm}) that at any point in the first quadrant, the inward move stays in the acceptance region if its magnitude does not exceed the diameter of the contour containing $x$.
However, the inward move can land in the rejection region on the other side of the contour if its magnitude exceeds the diameter of the contour $C_{\pi(x)}$. Since $\|x\|$ is the radius of $C_{\pi(x)}$, in order to ensure that the inward move falls in $A^{(1)}(x)$ with high probability when $\|x\|$ is large, we must choose the proposal density $g^{(1)}(\epsilon)$ in such a way that step sizes too large compared to $\|x\|$ have small probabilities. Thus, for our purpose, first let $M_{\eta}$ be such that $ \int_{0}^{M_{\eta}}{g^{(1)}(\epsilon)d\epsilon} > 1-\eta$. Now choose $x$ such that $\|x\| > 3 M_{\eta}$ (so that the radius of $C_{\pi(x)}$ is greater than $3 M_{\eta}$). Then $ Q^{(1)}(x,A^{(1)}(x)) > \frac{1-\eta}{4} > 0$ (in the two-dimensional illustration, the inward move-type $(-\epsilon,-\epsilon)$ is selected with probability $\frac{1}{4}$). Now consider any sequence $\{x_{n}\}$ with $ \|x_{n}\| \rightarrow \infty$, where $x_{n}$ has norm greater than $ 3 M_{\eta}$ for all but finitely many $n$. Then along this sequence, the limit of $ Q^{(1)}(x, A^{(1)}(x))$ is greater than $ \frac{1-\eta}{4}$. Thus the condition $\underset{\|x\| \rightarrow \infty}{\lim\inf}~Q^{(1)}(x, A^{(1)}(x)) > 0$ is satisfied. Note that the constraint that no part of the contour can be piecewise parallel to $\{x:|x_1|=\cdots = |x_d|\}$ does not really cause a problem, because the only common distribution with this property is the Laplace distribution, which is not super-exponential. \section{Geometric ergodicity of multiplicative TMCMC} \label{sec:geo_multiplicative} In the one-dimensional case, geometric ergodicity of multiplicative TMCMC has been established by \ctn{Dutta12}, assuming that the target density is regularly varying in an appropriate sense. Here we extend the result to arbitrary dimensions, necessarily without the aid of the regular variation assumption, since such an assumption is not well defined in high dimensions.
Note, however, that since vectors $v\in\mathcal V$ (where $\mathcal V$ is defined in (\ref{eq:set_V})) cannot belong to small sets associated with multiplicative TMCMC, to prove geometric ergodicity we would also need to show that $\underset{\|x-v\|\rightarrow 0}{\lim\sup}~P^{(2)}V(x)/V(x)<1$ for all $v\in\mathcal V$. This seems to be too demanding a requirement. In the one-dimensional case, $0$ is the only point which cannot belong to small sets, and the proof of geometric ergodicity in this case requires showing $\underset{|x|\rightarrow 0}{\lim\sup}~P^{(2)}V(x)/V(x)<1$. This has been established by \ctn{Dutta12}; however, the technique of his proof does not carry over to our more complicated, high-dimensional case. If one has the liberty to assume, in the high-dimensional case, that there is an arbitrarily small, compact neighborhood $\mathbb N_0$ of $\boldsymbol{0}=(0,0,\ldots,0)'$ which has zero probability under the target density $\pi$, then the proof of $\underset{\|x-v\|\rightarrow 0}{\lim\sup} P^{(2)}V(x)/V(x)<1$ for all $v\in\mathcal V$ is not required. Although for practical purposes this is not a very stringent assumption, from the theoretical standpoint it is somewhat disconcerting. In the next subsections we introduce two different kinds of geometrically ergodic mixtures of additive and multiplicative TMCMC kernels that do not require the undesirable assumption $\pi(\mathbb N_0)=0$. The first mixture we introduce is essentially multiplicative TMCMC in a sense to be made precise subsequently, whereas the second mixture is a straightforward convex combination of additive and multiplicative TMCMC kernels.
That is, we write \begin{align} \pi(x)&=\pi(\mathbb N_0)\frac{\pi(x)}{\pi(\mathbb N_0)}I\{x\in\mathbb N_0\} +\pi(\mathbb N^c_0)\frac{\pi(x)}{\pi(\mathbb N^c_0)}I\{x\in\mathbb N^c_0\}\notag\\ &=\pi(\mathbb N_0)\pi_1(x)+\pi(\mathbb N^c_0)\pi_2(x),\label{eq:mixture} \end{align} where \begin{align} \pi_1(x)&=\frac{\pi(x)}{\pi(\mathbb N_0)}I\{x\in\mathbb N_0\}\quad\mbox{and}\quad \pi_2(x)=\frac{\pi(x)}{\pi(\mathbb N^c_0)}I\{x\in\mathbb N^c_0\}.\label{eq:pi_split} \end{align} Clearly, $\pi_2(\mathbb N_0)=0$. In fact, as we elaborate below, the above mixture representation transfers the requirement $\pi(\mathbb N_0)=0$ to $\pi_2(\mathbb N_0)=0$. Now consider the following Markov chain: for any $x\in\mathbb R^d$ and $A\in\mathcal B(\mathbb R^d)$, with $\mathcal B(\mathbb R^d)$ being the Borel $\sigma$-field of $\mathbb R^d$, \begin{equation} P(x,A)=\pi(\mathbb N_0)P^{(1)}(x,A)+\pi(\mathbb N^c_0)P^{(2)}(x,A), \label{eq:mc1} \end{equation} where $P^{(1)}(x,\cdot)$ and $P^{(2)}(x,\cdot)$ are Markov transition kernels corresponding to additive TMCMC converging to $\pi_1$ and multiplicative TMCMC converging to $\pi_2$, respectively. We choose the proposal density $g^{(2)}$ for multiplicative TMCMC such that there is a one-dimensional, arbitrarily small neighborhood of $0$ which receives zero probability under $g^{(2)}$. We denote the one-dimensional neighborhood of $0$ by $\mathcal N_0$. We also assume that there exist arbitrarily small neighborhoods $\mathcal N_{+1}$ and $\mathcal N_{-1}$ of $+1$ and $-1$ respectively, which receive zero probability under $g^{(2)}$. In order to implement the mixture kernel $P$, we can separately run two chains -- one is additive TMCMC converging to $\pi_1$ and another is multiplicative TMCMC converging to $\pi_2$, both chains starting at the same initial value $x_0$. 
Since both chains are positive Harris recurrent on $\mathbb R^d\backslash\mathcal V$ ($P^{(1)}$ is positive Harris recurrent on $\mathbb R^d$ and $P^{(2)}$ is positive Harris recurrent on $\mathbb R^d\backslash\mathcal V$), convergence to both $\pi_1$ and $\pi_2$ occurs for the initial value $x_0~(\neq\boldsymbol{0})$, even though the supports of $\pi_1$ and $\pi_2$ are disjoint. In practice, it will be convenient to choose $x_0$ from the boundary between $\mathbb N_0$ and $\mathbb N^c_0$. Thus, for any initial value $x_0$, we will have an additive TMCMC chain $\{x^{(k)}_1;k=0,1,2,\ldots\}$ converging to $\pi_1$ and another multiplicative TMCMC chain $\{x^{(k)}_2;k=0,1,2,\ldots\}$ converging to $\pi_2$, with $x^{(0)}_1=x^{(0)}_2=x_0$. Finally, for each $k=1,2,\ldots$, we select and store $x^{(k)}_1$ with probability $\pi(\mathbb N_0)$ and $x^{(k)}_2$ with probability $[1-\pi(\mathbb N_0)]$. Thus, for $k>1$, the chain $\left\{P^{(1)}\right\}^k$ depends only on $x^{(k-1)}_1$, and not on $x^{(k-1)}_2$. Similarly, $\left\{P^{(2)}\right\}^k$ depends only on $x^{(k-1)}_2$ and not on $x^{(k-1)}_1$. Thus, the mixture $P$ uses additive TMCMC to simulate only from $\pi_1$, and multiplicative TMCMC to simulate only from $\pi_2$. Since $\pi(\mathbb N_0)$ is negligibly small, the mixture $P$ gives ``essentially full" weight to multiplicative TMCMC. If we can prove that $P^{(2)}$ is geometrically ergodic for $\pi_2$, then, because $P^{(1)}$ is geometrically ergodic for $\pi_1$ (in fact, uniformly ergodic for $\pi_1$, since the support $\mathbb N_0$ of $\pi_1$ is compact), it will follow that $P$ itself is geometrically ergodic. See Appendix \ref{sec:P_geo} for a proof of this statement. Note that $\pi(\mathbb N_0)$ is unknown and needs to be estimated for implementing $P$.
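The two-chain selection scheme just described can be sketched as follows; here the weight $w_0=\pi(\mathbb N_0)$ is treated as known for illustration, and \texttt{step1}, \texttt{step2} are hypothetical placeholders for one iteration of the additive chain targeting $\pi_1$ and the multiplicative chain targeting $\pi_2$, respectively.

```python
import numpy as np

def mixture_P_samples(x0, step1, step2, w0, n, rng):
    """Sketch of the mixture kernel P of (eq:mc1): advance an additive chain
    (step1, targeting pi_1) and a multiplicative chain (step2, targeting pi_2)
    in parallel from the common start x0; at each iteration store the additive
    state with probability w0 = pi(N_0), else the multiplicative state."""
    x1 = np.array(x0, dtype=float)
    x2 = np.array(x0, dtype=float)
    out = []
    for _ in range(n):
        x1 = step1(x1)   # chain converging to pi_1
        x2 = step2(x2)   # chain converging to pi_2
        out.append(x1.copy() if rng.uniform() < w0 else x2.copy())
    return out
```

Since $w_0=\pi(\mathbb N_0)$ is negligibly small, almost every stored state comes from the multiplicative chain; in practice $w_0$ is unknown and must be estimated.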
In Appendix \ref{sec:implement_P} we present an importance-sampling-based idea for this estimation, also demonstrating why the estimated probability is expected to yield the same TMCMC samples as the exact value of $\pi(\mathbb N_0)$. The mixture kernel $P$ given by (\ref{eq:mc1}) is designed to give almost full weight to multiplicative TMCMC. It is also possible to consider a more conventional mixture of additive and multiplicative TMCMC, which is also geometrically ergodic, combines the good features of both algorithms to yield a more efficient TMCMC sampler, and does not require estimation of $\pi(\mathbb N_0)$. In the next subsection we discuss this in detail, also elucidating how (\ref{eq:mc1}) differs from the combination of additive and multiplicative TMCMC in a traditional mixture set-up. \subsection{Combination of additive and multiplicative TMCMC in a traditional mixture set-up} \label{subsec:usual_mixture_kernel} Instead of (\ref{eq:mc1}), we could define a mixture of the form \begin{equation} P^*(x,A)=p~P^{(1)}(x,A)+(1-p)~P^{(2)}(x,A), \label{eq:mc2} \end{equation} where $0<p<1$ is any choice of mixing probability. The transition kernels $P^{(1)}(x,\cdot)$ and $P^{(2)}(x,\cdot)$, as before, are additive and multiplicative TMCMC, respectively, but here each of them converges to the target density $\pi$, unlike the case of (\ref{eq:mc1}), where $P^{(1)}(x,\cdot)$ converged to $\pi_1$ and $P^{(2)}(x,\cdot)$ converged to $\pi_2$. To implement $P^*$, one first simulates $u\sim U(0,1)$; if $u<p$, additive TMCMC is applied, otherwise multiplicative TMCMC is applied. Thus, unlike the case of (\ref{eq:mc1}), we have a single chain $\{x^{(k)};k=0,1,2,\ldots\}$ converging to $\pi$. Note also that $P^*$ implements both additive and multiplicative TMCMC on the entire support of $\pi$.
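One transition of $P^*$ can be sketched as follows, with \texttt{step\_add} and \texttt{step\_mult} standing in for single iterations of the two kernels (hypothetical placeholders for this illustration):

```python
import numpy as np

def mixture_Pstar_step(x, step_add, step_mult, p, rng):
    """One transition of the conventional mixture P* of (eq:mc2): with
    probability p apply the additive kernel, otherwise the multiplicative
    kernel.  Both kernels target the same density pi, so a single chain
    results, unlike the two-chain scheme used for P."""
    if rng.uniform() < p:
        return step_add(x)
    return step_mult(x)
```

The choice of $p$ trades off the two kernels' strengths; any fixed $0<p<1$ leaves the geometric-ergodicity argument unchanged.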
In contrast, $P$, given by (\ref{eq:mc1}), implements additive TMCMC only for $\pi_1$, which is supported on $\mathbb N_0$ and implements multiplicative TMCMC only for $\pi_2$, which is supported on $\mathbb N^c_0$. In Section \ref{sec:geo_additive} we have already shown, for $V(x)=c/\sqrt{\pi(x)}$, that $\underset{\|x\|\rightarrow\infty}{\lim\sup}~\frac{P^{(1)}V(x)}{V(x)}<1$ and that the ratio $\frac{P^{(1)}V(x)}{V(x)}$ is finite for all $x$. For the same function $V$ if we can also prove that $\underset{\|x\|\rightarrow\infty}{\lim\sup}~\frac{P^{(2)}V(x)}{V(x)}<1$, and that the ratio $\frac{P^{(2)}V(x)}{V(x)}$ is finite for all $x$, then it follows that the mixture $P^*$ is also geometrically ergodic. Indeed, \begin{align} \underset{\|x\|\rightarrow\infty}{\lim\sup}~\frac{P^*V(x)}{V(x)} &\leq p~\underset{\|x\|\rightarrow\infty}{\lim\sup}~\frac{P^{(1)}V(x)}{V(x)} +(1-p)~\underset{\|x\|\rightarrow\infty}{\lim\sup}~\frac{P^{(2)}V(x)}{V(x)}\notag\\ &<p+(1-p)=1,\notag \end{align} and $v\in\mathcal V$ can be a limit point of small sets corresponding to $P^*$, since $P^*(x,A)\geq pP^{(1)}(x,A)$ for all $x$ and all $A\in\mathcal B (\mathbb R^d)$, and all compact sets of $\mathbb R^d$ are small sets of $P^{(1)}$. \subsection{Distinctions between the roles of $P^{(2)}$ in $P$ and $P^*$} \label{subsec:distinctions} \subsubsection{Geometric ergodicity of $P$ requires geometric ergodicity of $P^{(2)}$} \label{subsubsec:geo_P} The proof of geometric ergodicity of the essentially fully multiplicative mixture $P$ will follow if we can show that $P^{(2)}$ is geometrically ergodic for $\pi_2$, where $\pi_2(\mathbb N_0)=0$ by construction. Theorem \ref{theorem:geo_multiplicative} provides necessary and sufficient conditions for geometric ergodicity of $P^{(2)}$ under the super-exponential set-up, assuming $\pi_2(\mathbb N_0)=0$, and that the proposal density $g^{(2)}$ gives zero probability to arbitrarily small compact neighborhoods of $0$, $-1$ and $+1$. 
Since it is always possible to construct a proposal density $g^{(2)}$ with the requisite properties, the strategy of forming the mixture $P$ is not restrictive, given the super-exponential set-up. \subsubsection{Geometric ergodicity of $P^*$ does not require geometric ergodicity of $P^{(2)}$ or the restriction $\pi(\mathbb N_0)=0$} \label{subsubsec:geo_P_star} Geometric ergodicity of the traditional mixture $P^*$, on the other hand, follows only if $\underset{\|x\|\rightarrow\infty}{\lim\sup}~\frac{P^{(2)}V(x)}{V(x)}<1$ and $\frac{P^{(2)}V(x)}{V(x)}$ is finite for all $x$; it does not require $\pi(\mathbb N_0)=0$, $\pi$ being the invariant target distribution for both $P^{(1)}$ and $P^{(2)}$ of $P^*$. That the former two conditions hold under the aforementioned assumptions, without the restriction $\pi(\mathbb N_0)=0$, can be easily seen from the proof of Theorem \ref{theorem:geo_multiplicative}. The implication is that super-exponentiality of $\pi$ and the aforementioned properties of the proposal distribution $g^{(2)}$ guarantee geometric ergodicity of $P^*$, even though $P^{(2)}$ does not individually converge to its invariant distribution $\pi$, because the proposal density $g^{(2)}$ assigns zero probability to some arbitrarily small compact neighborhood of $\boldsymbol{0}$ (in the first place, the fact that $P^*$ converges to $\pi$ is clear because of irreducibility, aperiodicity and positive Harris recurrence of $P^*$). As before, the assumption of such a proposal density $g^{(2)}$ is not restrictive, and hence the strategy of forming the mixture $P^*$ is not restrictive either, given the super-exponential set-up. In the next subsection we introduce our theorem characterizing geometric ergodicity of $P^{(2)}$.
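As a concrete companion to the theorem and proof that follow, one multiplicative TMCMC transition can be sketched in code. The $1/3^d$ averaging and the $|\epsilon|^k$ Jacobians appearing below correspond to moves of the form $x_i\mapsto x_i\epsilon^{b_i}$ with $b_i\in\{-1,0,+1\}$ equiprobable and $|J(b,\epsilon)|=|\epsilon|^{\sum_i b_i}$; the sketch assumes that form and, for simplicity, draws $\epsilon$ uniformly from an interval $[\texttt{lo},\texttt{hi}]\subset(0,1)$, so that the excluded neighborhoods of $0$ and $\pm 1$ receive zero proposal probability. The interval endpoints are illustrative choices, not values from the paper.

```python
import numpy as np

def multiplicative_tmcmc_step(x, log_pi, rng, lo=0.05, hi=0.95):
    """One multiplicative TMCMC move (sketch): x_i -> x_i * eps**b_i with
    b_i in {-1, 0, +1} equiprobable and a common eps, so that
    |J(b, eps)| = eps**sum(b).  Drawing eps uniformly on [lo, hi], a subset
    of (0, 1), keeps g^{(2)} away from neighborhoods of 0, +1 and -1, as
    the theorem requires; lo and hi are illustrative assumptions."""
    eps = rng.uniform(lo, hi)
    b = rng.choice([-1, 0, 1], size=len(x))   # move-type, each with probability 1/3
    y = x * eps**b
    log_jac = float(b.sum()) * np.log(eps)    # log |J(b, eps)|
    # acceptance ratio includes the Jacobian: min{1, pi(y)|J(b, eps)| / pi(x)}
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x) + log_jac:
        return y
    return x
```

Note that, starting from a nonzero state, no coordinate can ever reach $0$ exactly, which reflects the special role of the set $\mathcal V$ in the theory.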
\subsection{Geometric ergodicity of $P^{(2)}$ for $\pi_2$} \label{subsec:geo_multiplicative} For multiplicative TMCMC, for a given move-type $b$, we define the acceptance region and the potential rejection region by $A^{(2)}(b,x)=\{\epsilon:\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}|J(b,\epsilon)|\geq 1\}$ and $R^{(2)}(b,x)=\{\epsilon:\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}|J(b,\epsilon)|< 1\}$, respectively. The overall acceptance region and the overall potential rejection region are $A^{(2)}(x)=\cup_{b_1,\ldots,b_d}A^{(2)}(b,x)$ and $R^{(2)}(x)=\cap_{b_1,\ldots,b_d}R^{(2)}(b,x)$, respectively. We also define $A^*(b,x)=\{\epsilon:\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}\geq 1\}$ and $R^*(b,x)=\{\epsilon:\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}< 1\}$. Let $Q^{(2)}(x,B)$ denote the probability corresponding to the multiplicative TMCMC proposal of reaching the Borel set $B$ from $x$ in one step. Then the following theorem characterizes geometric ergodicity of multiplicative TMCMC under the super-exponential set-up. For our convenience, we slightly abuse notation by referring to $\pi_2$ as $\pi$. \begin{theorem}\label{theorem:geo_multiplicative} Suppose that $\pi$, the target density, is super-exponential and has contours that are nowhere piecewise parallel to $\{x:|x_1|=\cdots =|x_d|\}$; also assume that there is an arbitrarily small compact neighborhood $\mathbb N_0$ of $\boldsymbol{0}$ such that $\pi(\mathbb N_0)=0$. If there exist compact neighborhoods $\mathcal N_0$, $\mathcal N_{+1}$ and $\mathcal N_{-1}$ (all arbitrarily small) of $0$, $+1$ and $-1$, respectively, such that $g^{(2)}$ gives zero probability to $\mathcal N_0$, $\mathcal N_{+1}$ and $\mathcal N_{-1}$, then the multiplicative TMCMC chain satisfies geometric drift if and only if \begin{equation} \underset{\|x\| \rightarrow \infty}{\lim\inf}~ Q^{(2)}(x, A^{(2)}(x)) > 0.
\label{eq:liminf_Q_multiplicative} \end{equation} \end{theorem} \begin{proof} As before, let $C_{\pi(x)}$ be the contour of the density $\pi$ corresponding to the value $\pi(x)$, and let the radial cone around $ C_{\pi(x)}$ be $C_{\pi(x)} (\delta)$, given by (\ref{eq:radial_cone}). By (\ref{eq:liminf_Q_multiplicative}) there exists $\gamma > 0$ such that \begin{equation} \underset{\|x\| \rightarrow \infty} {\lim\sup} \hspace{0.2 cm} Q^{(2)} (x, R^{(2)}(x)) \leq 1-5\gamma^{\frac{1}{2}}. \label{eq:limsup_Q_multiplicative} \end{equation} Once again, we take the belt width $\delta$ such that the probability that a move from $x$ falls within this $\delta$-belt is less than $\gamma$. This is possible since the neighborhoods $\mathcal N_{+1}$ and $\mathcal N_{-1}$ of $+1$ and $-1$ receive zero probability under the proposal density $g^{(2)}$. The remaining arguments are similar to those in the proof of Theorem \ref{theorem:geo_additive}. Hence, as before, there exists $R_{\gamma}$ so that for any point $y$ outside the $\delta$-belt around $x$, (\ref{eq:super_exp}) holds (with $\gamma$ in place of $\eta$). As before, let $ V(x) = \frac{c}{\sqrt{\pi(x)}}$, where $c > 0$ is chosen appropriately.
Then it holds that \begin{eqnarray} &&\frac{P^{(2)}V(x)}{V(x)}\notag\\ &= & \frac{1}{3^{d}} \sum_{b_{1},\cdots, b_{d}} \int _{A^{(2)}(x)} {\left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))} \right ]^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon} \nonumber\\ && + \frac{1}{3^{d}} \sum_{b_{1},\cdots, b_{d}} \int _{R^{(2)}(x)}{ \left [ 1- \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})}|J(b,\epsilon)| + \left \{ \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}}|J(b,\epsilon)| \right ] g^{(2)}(\epsilon)d\epsilon}.\notag\\ \label{eq:exact_expression}\\ &\leq & \frac{1}{3^{d}} \sum_{b_{1},\cdots, b_{d}} \left\{\sum_{b_1,\ldots,b_d} \int _{A^{(2)}(b,x)} {\left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))} \right ]^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon}\right\} \nonumber\\ && + \frac{1}{3^{d}} \sum_{b_{1},\cdots, b_{d}} \int _{R^{(2)}(x)}{ \left [ 1- \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})}|J(b,\epsilon)| + \left \{ \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}}|J(b,\epsilon)| \right ] g^{(2)}(\epsilon)d\epsilon}.\nonumber\\ \label{eq:sum_ratios_multiplicative} \end{eqnarray} We now break up the integrals on $A^{(2)}(b,x)$ as sums of the integrals on $A^{(2)}(b,x)\cap A^*(b,x)$ and $A^{(2)}(b,x)\cap R^*(b,x)$. Also, we break up the integrals on $R^{(2)}(x)$ as sums of integrals on $R^{(2)}(x)\cap A^*(b,x)$ and $R^{(2)}(x)\cap R^*(b,x)$. Since $R^{(2)}(x)=\cap_{b_1,\ldots,b_d}R^{(2)}(b,x)$, these involve the intersections $R^{(2)}(b,x)\cap A^*(b,x)$ and $R^{(2)}(b,x)\cap R^*(b,x)$, respectively. Note that, since $|J(b,\epsilon)|$ is of the form $|\epsilon |^k$, for $k=-d,\ldots,-1,0,1,\ldots,d$, and $|\epsilon |\leq 1$ (almost surely), $A^{(2)}(b,x)\cap R^*(b,x)$ is either the null set $\emptyset$ (when $k= -d,-d+1,\ldots,-1,0$), or of the form $A^{(2)}(b,x)\cap R^*(b,x)$$=\{\epsilon: |\epsilon|^k\leq\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}<1\}$, for $k=1,2,\ldots,d$. 
Hence, for $\|x\|>R_{\gamma}$, by (\ref{eq:super_exp}), $Q^{(2)}(x,A^{(2)}(b,x)\cap R^*(b,x)\cap [C_{\pi(x)}(\delta)]^c)$ $\leq Q^{(2)}\left(x,\left\{\epsilon:|\epsilon|^k\leq\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}<\gamma\right\}\right)<\gamma/2$, and for $\delta$ sufficiently small, $Q^{(2)}(x,A^{(2)}(b,x)\cap R^*(b,x)\cap C_{\pi(x)}(\delta))<\gamma/2$. Moreover, on $A^{(2)}(b,x)\cap R^*(b,x)\cap C_{\pi(x)}(\delta)$, $\frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))}$ is bounded by a finite constant. By hypothesis, $\mathcal N_0$ has zero probability under $g^{(2)}$. This implies that the set \begin{equation*} \mathcal S=\{|\epsilon |\leq 1:\exists ~b~\mbox{ and a set}~ \mathcal S_{\epsilon}~\mbox{such that for}~ x\in \mathcal S_{\epsilon},~ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))}>K,~\forall~ K>0\} \label{eq:S} \end{equation*} has zero probability under $g^{(2)}$. Hence $\frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))}$ is almost surely bounded even on $A^{(2)}(b,x)\cap R^*(b,x)\cap C^c_{\pi(x)}(\delta)$. Hence, by the above arguments, for $\|x\|>R_{\gamma}$ and sufficiently small $\xi>0$, \begin{align} &\int _{A^{(2)}(b,x)\cap R^*(b,x)} \left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))} \right ]^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon\notag\\ &=\int _{A^{(2)}(b,x)\cap R^*(b,x)\cap C_{\pi(x)}(\delta)} \left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))} \right ]^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon\notag\\ &+\int _{A^{(2)}(b,x)\cap R^*(b,x)\cap [C_{\pi(x)}(\delta)]^c} \left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))} \right ]^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon\notag\\ &<\xi/2\notag.
\end{align} By similar (in fact, somewhat simpler) arguments, it follows that for $\|x\|>R_{\gamma}$, \begin{align} \int _{A^{(2)}(b,x)\cap A^*(b,x)} \left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))} \right ]^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon &<\frac{\xi}{2}+\xi^{\frac{1}{2}}Q^{(2)}(x,A^{(2)}(b,x)\cap A^*(b,x)\cap [C_{\pi(x)}(\delta)]^c)\notag\\ &<\frac{\xi}{2}+\xi^{\frac{1}{2}}Q^{(2)}(x,A^{(2)}(x)). \label{eq:upper_bound1} \end{align} The arguments required are somewhat simpler because on $A^{(2)}(b,x)\cap A^*(b,x)$, the ratio $\frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))}$ is bounded above by 1. Hence, the first part of the expression for $P^{(2)}V(x)/V(x)$ given by (\ref{eq:sum_ratios_multiplicative}) is less than $3^d\left(\xi+\xi^{\frac{1}{2}}Q^{(2)}(x,A^{(2)}(x))\right)$. Formally, for $\|x\|>R_{\gamma}$, \begin{equation} \frac{1}{3^{d}} \sum_{b_{1},\cdots, b_{d}} \left\{\sum_{b_1,\ldots,b_d}\int _{A^{(2)}(b,x)} {\left [ \frac{\pi(x_{1},\ldots,x_{d})}{\pi(T_b(x,\epsilon))} \right ]^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon}\right\} <3^d\left(\xi+\xi^{\frac{1}{2}}Q^{(2)}(x,A^{(2)}(x))\right). \label{eq:upper_bound2} \end{equation} For sufficiently small $\xi>0$ we can choose $\eta>3^{2d}\xi$ so that \begin{equation} 3^d\left(\xi+\xi^{\frac{1}{2}}Q^{(2)}(x,A^{(2)}(x))\right) <\eta+\eta^{\frac{1}{2}}Q^{(2)}(x,A^{(2)}(x)). 
\label{eq:new_upper_bound} \end{equation} In the second part of the expression for $P^{(2)}V(x)/V(x)$, note that on $R^{(2)}(x)$, $$1- \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})}|J(b,\epsilon)|<1,$$ so that \begin{equation} \int_{R^{(2)}(x)}\left[1- \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})}|J(b,\epsilon)|\right]g^{(2)}(\epsilon)d\epsilon <Q^{(2)}(x,R^{(2)}(x)), \label{eq:upper_bound3} \end{equation} and \begin{align} &\int_{R^{(2)}(x)}\left\{ \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}}|J(b,\epsilon)| g^{(2)}(\epsilon)d\epsilon\label{eq:uppbound1}\\ &=\int_{R^{(2)}(x)\cap A^*(b,x)}\left\{ \frac{\pi(T_b(x,\epsilon))|J(b,\epsilon)|}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}} |J(b,\epsilon)|^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon\label{eq:upper_bound4_1st}\\ &+\int_{R^{(2)}(x)\cap R^*(b,x)}\left\{ \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}}|J(b,\epsilon)| g^{(2)}(\epsilon)d\epsilon. \label{eq:upper_bound4} \end{align} Note that on $R^{(2)}(x)\cap A^*(b,x)$, $ \frac{\pi(T_b(x,\epsilon))|J(b,\epsilon)|}{\pi(x_{1},\ldots,x_{d})}<1$, and by our choice of the proposal density $g^{(2)}$, $\mathcal N_0$ has zero probability under $g^{(2)}$, so that the Jacobians $|J(b,\epsilon)|$ are bounded above by a finite constant, say $K$; we choose $K>1$. Hence, the first integral (\ref{eq:upper_bound4_1st}) in the break-up of the integral (\ref{eq:uppbound1}) is bounded above by $K Q^{(2)}(x,R^{(2)}(x)\cap A^*(b,x))$. Now, $Q^{(2)}(x,R^{(2)}(x)\cap A^*(b,x))=Q^{(2)}(x,R^{(2)}(x)\cap A^*(b,x)\cap C_{\pi(x)}(\delta)) +Q^{(2)}(x,R^{(2)}(x)\cap A^*(b,x)\cap [C_{\pi(x)}(\delta)]^c)$, and we can achieve $Q^{(2)}(x,R^{(2)}(x)\cap A^*(b,x)\cap C_{\pi(x)}(\delta))<\gamma^*/4$, for sufficiently small $\gamma^*$. The sets of the form $R^{(2)}(b,x)\cap A^*(b,x)$ are again empty sets or of the form $\{\epsilon: |\epsilon|^k\leq\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}<1\}$; $k=1,2,\ldots,d$. 
Hence, the sets $R^{(2)}(x)\cap A^*(b,x)$ are also either empty sets or intersections with sets of the form $\{\epsilon: |\epsilon|^k\leq\frac{\pi(T^{(2)}_b(x,\epsilon))}{\pi(x)}<1\}$; $k=1,2,\ldots,d$. Hence, for $\|x\|>R_{\gamma}$, we can achieve $Q^{(2)}(x,R^{(2)}(x)\cap A^*(b,x)\cap [C_{\pi(x)}(\delta)]^c)<\gamma^*/4$. In other words, for $\|x\|>R_{\gamma}$, \begin{align} &\int_{R^{(2)}(x)\cap A^*(b,x)}\left\{ \frac{\pi(T_b(x,\epsilon))|J(b,\epsilon)|}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}} |J(b,\epsilon)|^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon\notag\\ & <K Q^{(2)}(x,R^{(2)}(x)\cap A^*(b,x)\cap C_{\pi(x)}(\delta)) + Q^{(2)}(x,R^{(2)}(x)\cap A^*(b,x)\cap [C_{\pi(x)}(\delta)]^c)\notag\\ &< K\gamma^*/2. \label{eq:upp1} \end{align} Now consider the second integral (\ref{eq:upper_bound4}) in the break-up of the integral (\ref{eq:uppbound1}). We have \begin{align} &\int_{R^{(2)}(x)\cap R^*(b,x)}\left\{ \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}}|J(b,\epsilon)| g^{(2)}(\epsilon)d\epsilon\notag\\ &=\int_{R^{(2)}(x)\cap R^*(b,x)\cap C_{\pi(x)}(\delta)}\left\{ \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}}|J(b,\epsilon)| g^{(2)}(\epsilon)d\epsilon\label{eq:uppe1}\\ &+\int_{R^{(2)}(x)\cap R^*(b,x)\cap [C_{\pi(x)}(\delta)]^c}\left\{ \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}}|J(b,\epsilon)| g^{(2)}(\epsilon)d\epsilon.\label{eq:uppe2} \end{align} Note that on $R^*(b,x)$, $\frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})}<1$. Hence, the first integral (\ref{eq:uppe1}) in the above break-up is bounded above by $K Q^{(2)}(x,R^{(2)}(x)\cap R^*(b,x)\cap C_{\pi(x)}(\delta))$, which, in turn, is bounded above by $K\gamma^*/2$. For $\|x\|>R_{\gamma}$, the second integral (\ref{eq:uppe2}) is bounded above by $K{\gamma^*}^{\frac{1}{2}}Q^{(2)}(x,R^{(2)}(x)\cap R^*(b,x)\cap [C_{\pi(x)}(\delta)]^c)$, which, in turn, is bounded above by $K{\gamma^*}^{\frac{1}{2}}Q^{(2)}(x,R^{(2)}(x))$.
In other words, \begin{align} &\int_{R^{(2)}(x)\cap R^*(b,x)}\left\{ \frac{\pi(T_b(x,\epsilon))|J(b,\epsilon)|}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}} |J(b,\epsilon)|^{\frac{1}{2}} g^{(2)}(\epsilon)d\epsilon < K\frac{\gamma^*}{2}+ K{\gamma^*}^{\frac{1}{2}}Q^{(2)}(x,R^{(2)}(x)). \label{eq:upp2} \end{align} Combining (\ref{eq:upp1}) and (\ref{eq:upp2}) we obtain that (\ref{eq:uppbound1}) is bounded above by $K\gamma^*+ K{\gamma^*}^{\frac{1}{2}}Q^{(2)}(x,R^{(2)}(x))$. With sufficiently small $\gamma^*$ we have, for $\eta> K^2\gamma^*$, \[ K\gamma^*+ K{\gamma^*}^{\frac{1}{2}}Q^{(2)}(x,R^{(2)}(x)) < \eta+\eta^{\frac{1}{2}}Q^{(2)}(x,R^{(2)}(x)). \] Combining this with (\ref{eq:upper_bound3}) we get the following upper bound for the second term of (\ref{eq:sum_ratios_multiplicative}): \begin{align} & \frac{1}{3^{d}} \sum_{b_{1},\cdots, b_{d}} \int _{R^{(2)}(x)}{ \left [ 1- \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})}|J(b,\epsilon)| + \left \{ \frac{\pi(T_b(x,\epsilon))}{\pi(x_{1},\ldots,x_{d})} \right \}^{\frac{1}{2}}|J(b,\epsilon)| \right ] g^{(2)}(\epsilon)d\epsilon}\notag\\ &<\eta+\left(1+\eta^{\frac{1}{2}}\right)Q^{(2)}(x,R^{(2)}(x)). \label{eq:upper_bound5} \end{align} Combining (\ref{eq:upper_bound2}) and (\ref{eq:upper_bound5}), we obtain, for $\eta<\gamma$ (so that $\max\{3^{2d}\xi, K^2\gamma^*\}<\eta<\gamma$), \begin{align} \underset{\|x\| \rightarrow \infty} {\lim\sup}~ \frac{P^{(2)}V(x)}{V(x)} &\leq 2\eta+\eta^{\frac{1}{2}}+\underset{\|x\| \rightarrow \infty} {\lim\sup}~Q^{(2)}(x,R^{(2)}(x))\notag\\ &< 2\eta+\eta^{\frac{1}{2}}+1-5\eta^{\frac{1}{2}}\quad\mbox{(by (\ref{eq:limsup_Q_multiplicative}) and the fact that $\eta<\gamma$})\notag\\ &=1-4\eta^{\frac{1}{2}}+2\eta\notag\\ &<1.\notag \end{align} Hence (\ref{eq:1con}) holds.
To see that condition (\ref{eq:infcon}) holds, observe in (\ref{eq:exact_expression}) that all the ratios in the integrands are bounded above by 1, while the terms $|J(b,\epsilon)|^{\frac{1}{2}}$ are almost surely bounded above by our choice of the proposal density $g^{(2)}$. Hence, $P^{(2)}V(x)/V(x)$ is finite for every $x$. Now we prove that if multiplicative TMCMC is geometrically ergodic, then (\ref{eq:liminf_Q_multiplicative}) is satisfied. As before, we prove that if $\underset{\|x\| \rightarrow \infty} {\lim\sup}~Q^{(2)} (x, R^{(2)}(x))=1$, then $\underset{\|x\| \rightarrow \infty} {\lim\sup}~P^{(2)} (x, \{x\})=1$. Again, we choose a compact set $E$ such that $Q^{(2)}(x,E^c)\leq\eta$ and choose $\delta>0$ small enough such that $\underset{\|x\| \rightarrow \infty}{\lim\sup}~Q^{(2)}(x,C_{\pi(x)}(\delta)\cap E)\leq\eta$. Since $R^{(2)}(x)\cap A^*(b,x)\cap [C_{\pi(x)}(\delta)]^c$ is either a null set or an intersection with sets of the form $\{\epsilon:|\epsilon|^k\leq \frac{\pi(x)}{\pi(T_b(x,\epsilon))}<1\}$, for $k=1,2,\ldots,d$, it follows from (\ref{eq:super_exp}) that for any fixed $b^*$, if $\|x\|>R_{\eta}$, \[ Q^{(2)}(x,R^{(2)}(x)\cap A^*(b^*,x)\cap [C_{\pi(x)}(\delta)]^c) \leq Q^{(2)}(x,\{\epsilon:|\epsilon|^k\leq \frac{\pi(x)}{\pi(T_{b^*}(x,\epsilon))}<\eta\})\leq\eta. \] Hence, \[ \underset{\|x\| \rightarrow \infty}{\lim\sup}~Q^{(2)} (x, R^{(2)}(x)\cap A^*(b^*,x)\cap [C_{\pi(x)}(\delta)]^c)\leq\eta. \] Since $\underset{\|x\| \rightarrow \infty} {\lim\sup}~Q^{(2)} (x, R^{(2)}(x))=1$, the above implies that \[ \underset{\|x\| \rightarrow \infty}{\lim\sup}~Q^{(2)} (x, R^{(2)}(x)\cap R^*(b^*,x)\cap [C_{\pi(x)}(\delta)]^c)>1-2\eta. \] Moreover, since the Jacobians $|J(b,\epsilon)|$ are almost surely bounded by the choice of our proposal density, there exists $0< K<\infty$ such that $|J(b,\epsilon)|< K$ almost surely with respect to $g^{(2)}$.
These facts, together with (\ref{eq:super_exp}), imply that \begin{eqnarray} \underset{\|x\| \rightarrow \infty}{\lim\sup}~P^{(2)}(x,\{x\}) &=& \underset{\|x\| \rightarrow \infty}{\lim\sup}~\frac{1}{3^{d}} \sum_{b_{1},\cdots, b_{d}}\int _{R^{(2)}(x)}{ \left [ 1- \frac{\pi(T_b(x,\epsilon))}{\pi(x)}|J(b,\epsilon)| \right ] g^{(2)}(\epsilon)d\epsilon}\nonumber\\ & \geq & \underset{\|x\| \rightarrow \infty}{\lim\sup}~ \frac{1}{3^{d}} \sum_{b_{1},\cdots, b_{d}} \int _{R^{(2)}(x)\cap R^*(b^*,x)\cap [C_{\pi(x)}(\delta)]^c}{ \left [ 1- \frac{\pi(T_b(x,\epsilon))}{\pi(x)}|J(b,\epsilon)| \right ] g^{(2)}(\epsilon)d\epsilon}\nonumber\\ &\geq & (1-\eta K) \underset{\|x\| \rightarrow \infty}{\lim\sup}~ Q^{(2)}(x,R^{(2)}(x)\cap R^*(b^*,x)\cap [C_{\pi(x)}(\delta)]^c)\nonumber\\ &\geq & (1-\eta K)(1-2\eta). \label{eq:non_geo_multiplicative} \end{eqnarray} Since $\eta>0$ is arbitrary, the proof is complete. \end{proof} That it is easy to ensure geometric ergodicity of multiplicative TMCMC in super-exponential cases can be seen as follows. Select a move-type $b^*$ such that $|J(b^*,\epsilon)|=|\epsilon |$. Then $A^{(2)}(b^*,x)=\{\epsilon:\frac{\pi(T_{b^*}(x,\epsilon))}{\pi(x)}|\epsilon |\geq 1\}$, and $A^*(b^*,x)=\{\epsilon:\frac{\pi(T_{b^*}(x,\epsilon))}{\pi(x)}\geq 1\}$. Then, since $|\epsilon |\leq 1$ almost surely, \begin{equation} A^{(2)}(b^*,x)\cap A^*(b^*,x)=\left\{\epsilon:\frac{\pi(x)}{\pi(T_{b^*}(x,\epsilon))}<|\epsilon |\leq 1\right\}. \label{eq:eqn1} \end{equation} If, for $\eta>0$, $\|x\|>R_{\eta}$, then by (\ref{eq:super_exp}), \begin{equation} \frac{\pi(x)}{\pi(T_{b^*}(x,\epsilon))}<\eta. \label{eq:eqn2} \end{equation} Equations (\ref{eq:eqn1}) and (\ref{eq:eqn2}) imply that for any given $\xi>0$ it is possible to choose $\eta>0$ such that for $\|x\|>R_{\eta}$, it holds that \begin{equation} Q^{(2)}(x,A^{(2)}(b^*,x)\cap A^*(b^*,x))>1-\xi.
\label{eq:eqn3} \end{equation} Hence, for $\|x\|>R_{\eta}$, we obtain, using (\ref{eq:eqn3}), \begin{align} Q^{(2)}(x,A^{(2)}(x))&\geq Q^{(2)}(x,A^{(2)}(x)\cap A^*(b^*,x))\notag\\ &\geq Q^{(2)}(x,A^{(2)}(b^*,x)\cap A^*(b^*,x))\notag\\ &>1-\xi.\notag \end{align} Hence, (\ref{eq:liminf_Q_multiplicative}) holds, ensuring geometric ergodicity. As we remarked earlier, we omit the proof of geometric ergodicity of additive-multiplicative TMCMC, since it is almost the same as that of multiplicative TMCMC, provided above. \section{Illustration with simulation studies} \label{sec:simulation} There are several considerations in defining the accuracy or the efficiency of any MCMC-based approach. First, one important aspect is that the chain must have a reasonably high acceptance rate. This has been an important consideration in proposing TMCMC. It is to be noted that geometric ergodicity only tells us that convergence of our chain to the target density occurs at a geometric rate. However, if the value of $\rho$, the geometric rate in (\ref{eq:geo}), is close to 1, then the algorithm in question, in spite of being geometrically ergodic, need not be efficient in practice. To test how efficient our TMCMC algorithms actually are in absolute terms and also relative to standard MCMC approaches, we need to define a measure of closeness of the $n$-th order kernel $P^{n}(x,\cdot)$ with respect to the target density $\pi(\cdot)$, assuming that the latter can be empirically evaluated. The Kolmogorov-Smirnov (K-S) distance seems to be a suitable candidate in this regard, and is the one that we adopt for our purpose. Corresponding to each MCMC algorithm, we consider $N$ replicates of the chain starting from the same initial value, so that at each iteration $t$, we obtain a set of $N$ realizations of the chain. We then compute the empirical distribution of these $N$ values and measure the K-S distance between the empirical distribution and the target distribution $\pi$.
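The replicate-based K-S diagnostic just described is straightforward to code. In the minimal Python sketch below, the standard-normal CDF plays the role of the (one-dimensional) target distribution and exact normal draws stand in for the $N$ replicate chain states at a fixed iteration $t$; both are illustrative assumptions, not part of the actual experiments, and the helper names (\texttt{ks\_distance}, \texttt{normal\_cdf}) are hypothetical.

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2), computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_distance(samples, cdf):
    """Kolmogorov-Smirnov distance sup_x |F_n(x) - F(x)| between the
    empirical CDF of `samples` and the target CDF `cdf`."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        fx = cdf(x)
        # Check the empirical CDF just before and just after each jump.
        d = max(d, abs(i / n - fx), abs(fx - (i - 1) / n))
    return d

# Stand-in for the N replicate chain states at a fixed iteration t.
random.seed(42)
exact = [random.gauss(0.0, 1.0) for _ in range(500)]
```

In the actual experiments, the list of draws would be replaced by the $t$-th states of the $N$ parallel chains, and the computation repeated over $t$ to trace the convergence profile.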
For the chain to be efficient, it must have K-S distance close to 0 after the chain has run for a large number of iterations (that is, when $t$ is large). Moreover, the burn-in period is expected to be small for efficient MCMC algorithms. \subsection{First simulation experiment comparing RWMH and additive TMCMC} \label{subsec:sim1} Table \ref{table:table1} presents the results of a simulation experiment comparing the performances of RWMH and additive TMCMC (Add-TMCMC) chains for different dimensions, where, for our purpose, we consider the target density $\pi$ to be the multivariate normal distribution with mean vector $\boldsymbol{0}$ and covariance matrix $\mathbf{I}$, the identity matrix. For RWMH we consider two distinct scales for the normal random walk proposal for each of the coordinates -- the optimal scale 2.4, and a sub-optimal scale 6. We consider the same scaling for additive TMCMC as well. Indeed, as shown in \ctn{Dey13}, for both additive TMCMC and RWMH, the optimal scaling parameter is very close to 2.4, but the optimal acceptance rate of additive TMCMC is around 0.439, which is significantly higher than 0.234, the optimal acceptance rate of the RWMH approach (\ctn{Roberts96}, \ctn{Roberts1997}). Moreover, the results of simulation experiments reported in \ctn{Dey13} demonstrate superior performance of additive TMCMC over RWMH in terms of higher acceptance rates irrespective of dimensions and optimal or sub-optimal scale choices. Referring to Table \ref{table:table1}, since the K-S statistic is computed after burn-in, the differences between additive TMCMC and RWMH in terms of the K-S distance do not appear to be pronounced in low dimensions, but for dimensions 100 and 200, the differences seem to be more pronounced, indicating somewhat better performance of TMCMC. Figure \ref{fig:figex1_30} displays the K-S distances corresponding to RWMH and additive TMCMC when the target is a 30-dimensional normal distribution.
It is clearly seen that additive TMCMC converges much faster than RWMH. In fact, the figures indicate that additive TMCMC takes only around 150 iterations to converge when the scale is optimal, and around 200 iterations when the scale is sub-optimal. On the other hand, in the case of optimal scaling, RWMH takes around 300 iterations to converge and for sub-optimal scaling it takes around 450 iterations. The mixing issue is quite pronounced in higher dimensions. Indeed, as seen in Figure \ref{fig:figex1}, the K-S distances associated with additive TMCMC are almost uniformly smaller than those associated with RWMH, particularly when the scaling is sub-optimal. In fact, in the sub-optimal case it seems that additive TMCMC has converged within the first 2,000 iterations, whereas RWMH does not seem to show any sign of convergence even after 20,000 iterations (the K-S distances are significantly larger than those of additive TMCMC). \begin{table}[h] \centering \caption{Performance evaluation of RWMH and additive TMCMC (Add-TMCMC) chains for different dimensions.} \begin{tabular}{|p{0.4in}|c|c|c|c|c|} \hline \multirow{2}{*}{Dim.} & \multirow{2}{*}{\backslashbox{Scaling}{Criteria}} & \multicolumn{2}{|c|}{$\begin{array}{c} \text{Acceptance} \\ \text{rate}~(\%) \end{array}$} & \multicolumn{2}{|c|}{\emph{Avg.
K-S dist.}} \\ \cline{3-6} & & RWMH & Add-TMCMC & RWMH & Add-TMCMC\\ \hline \multirow{2}{*}{2} & 2.4 & 34.9 & 44.6 & 0.1651 & 0.1657 \\ & 6 & 18.66 & 29.15 & 0.1659 & 0.1655 \\ \multirow{2}{*}{5} & 2.4 (opt) & 28.6 & 44.12 & 0.1659 & 0.1664 \\ & 6 & 2.77 & 20.20 & 0.1693 & 0.1674 \\ \multirow{2}{*}{10} & 2.4 (opt) & 26.05 & 44.18 & 0.1652 & 0.1677 \\ & 6 & 1.19 & 20.34 & 0.1784 & 0.1688 \\ \multirow{2}{*}{100} & 2.4 (opt) & 23.3 & 44.1 & 0.1594 & 0.1571 \\ & 6 & 0.32 & 20.6 & 0.1687 & 0.1645 \\ \multirow{2}{*}{200} & 2.4 (opt) & 23.4 & 44.2 & 0.1596 & 0.1435 \\ & 6 & 0.38 & 20.7 & 0.1622 & 0.1484 \\ \hline \end{tabular} \label{table:table1} \end{table} \begin{figure} \caption{Comparisons between K-S distances associated with additive TMCMC and RWMH for dimension = 30.} \label{fig:K-S1_30} \label{fig:K-S2_30} \label{fig:figex1_30} \end{figure} \begin{figure} \caption{Comparisons between K-S distances associated with additive TMCMC and RWMH for dimension = 100.} \label{fig:K-S1} \label{fig:K-S2} \label{fig:figex1} \end{figure} \subsection{Performance comparison with ``essentially fully'' multiplicative TMCMC} \label{subsec:sim2} In this case, we choose our neighborhood $\mathbb{N}_{0}$ in (\ref{eq:mc1}) to be $[-0.1,0.1]^{d}$, where $d$ is the dimension of the space. The method of estimation of the mixing probability $\pi(\mathbb{N}_{0})$ is discussed in detail in Appendix \ref{sec:implement_P}; however, in our simulation example, this probability is simply $\prod_{i=1}^d\left[2\Phi(0.1)-1\right]$, $\Phi$ denoting the cumulative distribution function of $N(0,1)$. This chain is basically as close as we can get to a fully multiplicative TMCMC chain on $\mathbb{R}^{d}$ while ensuring that the geometric drift condition holds. For our experiment, the scale of the additive TMCMC part of the mixture remains the same as before, that is, we consider the optimal scale 2.4, and the sub-optimal scale 6.
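The mixing probability $\pi(\mathbb{N}_{0})=\prod_{i=1}^d\left[2\Phi(0.1)-1\right]$ quoted above is easy to evaluate numerically; a minimal sketch, assuming (as in this experiment) the standard-normal target with independent coordinates:

```python
import math

def std_normal_cdf(x):
    """Phi(x), computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_N0(d, half_width=0.1):
    """pi(N_0) for N_0 = [-half_width, half_width]^d under a standard-normal
    target with independent coordinates: [2*Phi(half_width) - 1]^d."""
    return (2.0 * std_normal_cdf(half_width) - 1.0) ** d
```

Since $2\Phi(0.1)-1\approx 0.08$, this probability decays geometrically in $d$, so in high dimensions the purely multiplicative part of the kernel is used almost always.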
We assume the proposal density $g^{(2)}$ is defined on a set of the form $[-l_{2},-l_{1}] \cup [l_{1},l_{2}]$, where $[l_{1},l_{2}]$ is a proper subset of $[0,1]$ excluding small neighborhoods of 0 and 1. The step $\epsilon$ is taken to follow the two-component normal mixture $\epsilon\sim \frac{1}{2} N(\mu,\sigma^{2})I_{[l_{1},l_{2}]} + \frac{1}{2} N(-\mu,\sigma^{2})I_{[-l_{2},-l_{1}]}$ with mean parameter $\mu \in [l_{1},l_{2}]$ and variance parameter $\sigma^2$. In our simulation experiment we set $l_{1} =0.05$ and $l_{2}=0.95$; optimal performance was observed when the mean $\mu$ was in the range $0.35$ to $0.45$, roughly halfway between $l_{1}$ and $l_{2}$. Table \ref{table:table2} provides a comparison of the performances of RWMH and essentially fully multiplicative TMCMC with respect to acceptance rate and average K-S distance. Note that, unlike additive TMCMC, we find here that the acceptance rate for essentially fully multiplicative TMCMC is poor compared to RWMH. Moreover, the K-S distances also suggest that RWMH is closer to the target distribution than essentially fully multiplicative TMCMC for most of the iterations considered. However, on inspection it is observed that the K-S distance initially drops faster for the latter compared to RWMH; see Figure \ref{fig:figex2}. As shown by \ctn{Dutta12}, multiplicative TMCMC in one-dimensional situations is appropriate for certain heavy-tailed distributions. But in our current simulation study, associated with high dimensions and a thin-tailed density, (essentially fully) multiplicative TMCMC did not seem to perform satisfactorily, although theoretically it is geometrically ergodic.
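For concreteness, the step distribution just described can be simulated by straightforward rejection sampling; $\mu=0.4$ and $\sigma=1$ below are illustrative values from the reported optimal range, not prescribed choices.

```python
import random

def sample_epsilon(mu=0.4, sigma=1.0, l1=0.05, l2=0.95, rng=random):
    """Draw the multiplicative step epsilon from the two-component mixture
    (1/2) N(mu, sigma^2) truncated to [l1, l2]
    + (1/2) N(-mu, sigma^2) truncated to [-l2, -l1],
    by rejection sampling (adequate here, since the truncation intervals
    carry substantial normal mass)."""
    sign = 1.0 if rng.random() < 0.5 else -1.0   # pick the mixture component
    while True:
        e = rng.gauss(sign * mu, sigma)
        if l1 <= sign * e <= l2:                 # accept only inside the support
            return e

random.seed(1)
eps = [sample_epsilon() for _ in range(1000)]
```

By construction, every draw satisfies $0.05\leq|\epsilon|\leq 0.95$, so the Jacobian factors $|\epsilon|^{\pm 1}$ stay bounded, as required in the ergodicity arguments.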
\begin{table}[h] \centering \caption{Performance evaluation of RWMH and essentially fully multiplicative TMCMC (Mult-TMCMC) chains for different dimensions.} \begin{tabular}{|p{0.4in}|c|c|c|c|c|} \hline \multirow{2}{*}{Dim} & \multirow{2}{*}{\backslashbox{Scaling}{Criteria}} & \multicolumn{2}{|c|}{$\begin{array}{c} \text{Acceptance} \\ \text{rate}~(\%) \end{array}$} & \multicolumn{2}{|c|}{\emph{Avg. K-S dist.}} \\ \cline{3-6} & & RWMH & Mult-TMCMC & RWMH & Mult-TMCMC\\ \hline \multirow{2}{*}{10} & 2.4 (opt) & 26.05 & 16.86 & 0.1652 & 0.2097 \\ & 6 & 1.19 & 6.32 & 0.1784 & 0.2133\\ \multirow{2}{*}{30} & 2.4 (opt) & 23.5 & 15.74 & 0.1637 & 0.1828 \\ & 6 & 1.16 & 6.77 & 0.1711 & 0.1924\\ \multirow{2}{*}{100} & 2.4 (opt) & 23.4 & 15.46 & 0.1596 & 0.1812\\ & 6 & 0.38 & 2.67 & 0.1622 & 0.1866 \\ \hline \end{tabular} \label{table:table2} \end{table} \begin{figure} \caption{Comparisons between K-S distances associated with essentially Mult-TMCMC and RWMH for dimension = 30.} \label{fig:K-S1_mix} \label{fig:K-S2_mix} \label{fig:figex2} \end{figure} \subsection{Performance comparison with the traditional mixture of additive and multiplicative TMCMC} \label{subsec:sim3} Now we consider the traditional mixture chain of the form (\ref{eq:mc2}) with both additive and multiplicative moves. We assume that with probability $\frac{1}{2}$, we move by additive TMCMC and with probability $\frac{1}{2}$ by multiplicative TMCMC. The proposal mechanisms for additive and multiplicative TMCMC remain the same as in Section \ref{subsec:sim2} associated with essentially fully multiplicative TMCMC. \begin{table}[h] \centering \caption{Performance evaluation of RWMH and traditional Mixture TMCMC (Mix-TMCMC) chains for different dimensions.
For the multiplicative TMCMC part, we consider $\mu=0.35$ and $\sigma=1$.} \begin{tabular}{|p{0.4in}|c|c|c|c|c|} \hline \multirow{2}{*}{Dim} & \multirow{2}{*}{\backslashbox{Scaling}{Criteria}} & \multicolumn{2}{|c|}{$\begin{array}{c} \text{Acceptance} \\ \text{rate}~(\%) \end{array}$} & \multicolumn{2}{|c|}{\emph{Avg. K-S dist.}} \\ \cline{3-6} & & RWMH & Mix TMCMC & RWMH & Mix TMCMC\\ \hline \multirow{2}{*}{10} & 2.4 (opt) & 26.05 & 29.43 & 0.1652 & 0.1455 \\ & 6 & 1.19 & 11.26 & 0.1784 & 0.1576\\ \multirow{2}{*}{30} & 2.4 (opt) & 23.5 & 29.32 & 0.1637 & 0.1428 \\ & 6 & 1.16 & 16.33 & 0.1711 & 0.1529 \\ \multirow{2}{*}{100} & 2.4 (opt) & 23.4 & 29.29 & 0.1596 & 0.1398\\ & 6 & 0.38 & 10.67 & 0.1622 & 0.1412 \\ \hline \end{tabular} \label{table:table3} \end{table} Table \ref{table:table3} provides a comparison of the performances of RWMH and our traditional mixture TMCMC kernel with respect to acceptance rate and average K-S distance. Note that although the acceptance rate for the mixture kernel in our experiments is around 0.293 for $\mu=0.35$ and $\sigma=1$, which is quite low compared to additive TMCMC, it is of course still significantly higher than the optimal acceptance rate 0.234 for standard RWMH. To avoid any possible confusion, it is important to emphasize that this acceptance rate for the mixture kernel is not an analytically derived optimal acceptance rate; rather, it is the rate corresponding to the optimal value of $\mu$, obtained numerically by varying $\mu$ with $\sigma$ fixed at 1 and choosing the $\mu$ for which the empirical average K-S distance was minimized. However, the average K-S distance for the mixture kernel is smaller compared to both RWMH and additive TMCMC, implying faster convergence. This improvement acts as a trade-off for the low acceptance rate of the mixture kernel.
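A schematic one-step implementation of the mixture kernel may help fix ideas. The sketch below targets a standard normal, uses the additive scale 2.4 and, for the multiplicative part, a uniform step distribution bounded away from $0$; these are illustrative simplifications, not the exact proposal mechanisms used in the experiments.

```python
import math
import random

def log_target(x):
    """Log-density of the standard-normal target, up to an additive constant."""
    return -0.5 * sum(xi * xi for xi in x)

def mixture_tmcmc_step(x, scale=2.4, rng=random):
    """One step of the mixture kernel: with probability 1/2 an additive move
    x_i -> x_i + b_i*eps (b_i = +1 or -1), otherwise a multiplicative move
    x_i -> x_i*eps, x_i/eps or x_i (b_i = 1, -1, 0 with probability 1/3 each);
    the acceptance ratio of the multiplicative move carries the Jacobian
    |eps|^(sum_i b_i)."""
    if rng.random() < 0.5:                        # additive part (Jacobian 1)
        eps = abs(rng.gauss(0.0, scale))
        y = [xi + rng.choice((-1.0, 1.0)) * eps for xi in x]
        log_jac = 0.0
    else:                                         # multiplicative part
        eps = rng.uniform(0.05, 0.95) * rng.choice((-1.0, 1.0))
        y, log_jac = [], 0.0
        for xi in x:
            b = rng.choice((-1, 0, 1))
            y.append(xi * eps if b == 1 else (xi / eps if b == -1 else xi))
            log_jac += b * math.log(abs(eps))
    log_alpha = log_target(y) - log_target(x) + log_jac
    return y if rng.random() < math.exp(min(0.0, log_alpha)) else x

random.seed(0)
state = [5.0, -5.0]
for _ in range(2000):
    state = mixture_tmcmc_step(state)
```

Note that a single scalar $\epsilon$ drives all $d$ coordinates in each move, which is the defining feature of TMCMC as opposed to RWMH.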
Figure \ref{fig:figex3} displays plots of K-S distances associated with RWMH and mixture TMCMC in the case of a 30-dimensional normal target distribution. The plot shows much faster convergence of mixture TMCMC compared to RWMH. From Figures \ref{fig:figex1_30} and \ref{fig:figex2}, it is also clear that mixture TMCMC converges faster than even additive TMCMC and essentially fully multiplicative TMCMC. In fact, mixture TMCMC seems to converge in just about 100 iterations. This faster convergence may be attributed to the fact that the multiplicative steps allow the chain to take longer jumps and hence explore the space faster, while the additive steps keep the acceptance rate high and enable the chain to move briskly. In other words, mixture TMCMC combines the strengths of both the additive and the multiplicative chains and is found to outperform each of them individually. \begin{figure} \caption{Comparisons between K-S distances associated with Mix-TMCMC and RWMH for dimension = 30.} \label{fig:K-S1_mult} \label{fig:K-S2_mult} \label{fig:figex3} \end{figure} \section{Extensions of our geometric ergodicity results to target distributions that are not super-exponential} \label{sec:non_super_exponential} So far we have proved geometric ergodicity of additive and multiplicative TMCMC when the target density $\pi$ is super-exponential. It is natural to ask if our results go through when the super-exponential assumption does not hold.
\subsection{Target density as mixture} \label{subsec:target_mixture} Note that, if the target density $\pi$ can be represented as a mixture of the form \begin{align} \pi(x)&=\int f_1(x|\theta)f_2(\theta)d\theta, \label{eq:non_super_exp_pi} \end{align} where $f_1(\cdot\vert\theta)$ is super-exponential for all $\theta$ and $f_2$ admits direct (exact) simulation, then the Markov transition kernel \begin{align} P(x,A)&=\int P(x,A|\theta)f_2(\theta)d\theta, \label{eq:non_super_exp_P} \end{align} where $P(x,A|\theta)$ denotes either the additive or the multiplicative TMCMC-based Markov transition kernel conditional on $\theta$, is geometrically ergodic for the target density $\pi$. The proof is essentially the same as the proof presented in Appendix \ref{sec:P_geo} that the finite mixture Markov transition kernel (\ref{eq:mc1}) is geometrically ergodic for the mixture representation (\ref{eq:mixture}); only the summations need to be replaced with integrals. The kernel (\ref{eq:non_super_exp_P}) will be implemented by first directly simulating $\theta\sim f_2$; then, given $\theta$, the transition mechanism $P(x,\cdot|\theta)$ is applied. Two popular examples of multivariate densities admitting mixture forms are the multivariate $t$ and the multivariate Cauchy, both of which can be represented as univariate Gamma-distributed mixtures of multivariate normal distributions. \subsection{Change-of-variable idea} \label{subsec:change_of_variable} The general situation has been addressed by \ctn{Johnson12} using a change-of-variable idea. If $\pi_{\beta}$ is the multivariate target density of interest, then one can first simulate a Markov chain having invariant density \begin{align} \pi_{\gamma}(\gamma)=\pi_{\beta}\left(h(\gamma)\right)\left|\mbox{det}~\nabla h(\gamma)\right|, \label{eq:transformed_target} \end{align} where $h$ is a diffeomorphism.
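As a concrete illustration of the mixture representation of Section \ref{subsec:target_mixture}, the sketch below draws from a $d$-variate $t$ with $\nu$ degrees of freedom via $\theta\sim\mbox{Gamma}(\nu/2,\,\mbox{rate}=\nu/2)$ followed by $x\mid\theta\sim N_d(\boldsymbol{0},\theta^{-1}\mathbf{I})$; the identity scale matrix is a simplifying assumption made here for brevity.

```python
import random

def rmvt(nu, d, rng=random):
    """One draw from the d-variate t with nu degrees of freedom (zero
    location, identity scale) via its Gamma-mixture-of-normals form:
    theta ~ Gamma(shape = nu/2, rate = nu/2), then x | theta ~ N_d(0, I/theta)."""
    # random.gammavariate takes (shape, scale), so scale = 1/rate = 2/nu.
    theta = rng.gammavariate(nu / 2.0, 2.0 / nu)
    s = (1.0 / theta) ** 0.5
    return [rng.gauss(0.0, s) for _ in range(d)]

random.seed(7)
draws = [rmvt(5.0, 3) for _ in range(2000)]
```

Within the kernel (\ref{eq:non_super_exp_P}), one would instead draw $\theta\sim f_2$ in exactly this way and then take a TMCMC step targeting the super-exponential conditional $f_1(\cdot\mid\theta)$.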
If $\pi_{\beta}$ is the density of the random vector $\beta$, then $\pi_{\gamma}$ is the density of the random vector $\gamma=h^{-1}(\beta)$. \ctn{Johnson12} obtain conditions on $h$ which make $\pi_{\gamma}$ super-exponentially light. In more detail, \ctn{Johnson12} define the following isotropic function $h:\mathbb R^d\mapsto\mathbb R^d$: \begin{equation} h(\gamma)=\left\{\begin{array}{cc}f(\|\gamma\|)\frac{\gamma}{\|\gamma\|}, & \gamma\neq \boldsymbol{0}\\ \boldsymbol{0}, & \gamma=\boldsymbol{0} \end{array}\right. \label{eq:isotropy} \end{equation} for some function $f: (0,\infty)\mapsto (0,\infty)$. \ctn{Johnson12} confine attention to isotropic diffeomorphisms, that is, to functions $h$ where both $h$ and $h^{-1}$ are continuously differentiable, with the further property that $\mbox{det}~\nabla h$ and $\mbox{det}~\nabla h^{-1}$ are also continuously differentiable. In particular, they define $f:[0,\infty)\mapsto [0,\infty)$ as follows: \begin{equation} f(x)=\left\{\begin{array}{cc}x, & x<R\\ x+(x-R)^p, & x\geq R, \end{array}\right. \label{eq:diffeo} \end{equation} where $R\geq 0$ and $p>2$. Theorem 2 of \ctn{Johnson12} shows that if $\pi_{\beta}$ is an exponentially light density ($\pi_{\beta}$ is exponentially light if $\underset{\|x\|\rightarrow\infty}{\lim\sup}~n(x)'\nabla\log\pi_{\beta}(x)<0$) on $\mathbb R^d$, and $h$ is defined by (\ref{eq:isotropy}) and (\ref{eq:diffeo}), then the transformed density $\pi_{\gamma}$ given by (\ref{eq:transformed_target}) is super-exponentially light. Thus, this transformation converts an exponentially light density into a super-exponentially light one. Theorem 3 of \ctn{Johnson12} provides conditions under which sub-exponential densities can be converted to exponential densities ($\pi_{\beta}$ is sub-exponentially light if $\underset{\|x\|\rightarrow\infty}{\lim\sup}~n(x)'\nabla\log\pi_{\beta}(x)=0$).
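The maps (\ref{eq:isotropy}) and (\ref{eq:diffeo}) are straightforward to implement; in the sketch below, $R=1$ and $p=3$ are arbitrary illustrative choices satisfying the stated constraints $R\geq 0$ and $p>2$.

```python
import math

def f(r, R=1.0, p=3.0):
    """The radial map (eq:diffeo): identity below R, polynomial growth above."""
    return r if r < R else r + (r - R) ** p

def h(gamma, R=1.0, p=3.0):
    """The isotropic transformation (eq:isotropy): rescale gamma radially
    by f, mapping the origin to the origin."""
    norm = math.sqrt(sum(g * g for g in gamma))
    if norm == 0.0:
        return [0.0] * len(gamma)
    return [f(norm, R, p) * g / norm for g in gamma]
```

Points inside the ball of radius $R$ are left unchanged, while the tails are stretched polynomially; it is this stretching that lightens the tails of the pulled-back density $\pi_{\gamma}$.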
In particular, if $\pi_{\beta}$ is a sub-exponentially light density on $\mathbb R^d$ and there exist $\alpha>d$ and $R<\infty$ such that \[ \left(\frac{\beta}{\|\beta\|}\right)'\nabla\log\pi_{\beta}(\beta)\leq -\frac{\alpha}{\|\beta\|},\hspace{2mm} \|\beta\|>R, \] then $h$ defined as (\ref{eq:isotropy}) with $f:[0,\infty)\mapsto [0,\infty)$ given by \begin{equation} f(x)=\left\{\begin{array}{cc}e^{bx}-\frac{e}{3}, & x>\frac{1}{b}\\ x^3\frac{b^3e}{6}+x\frac{be}{2}, & x\leq \frac{1}{b}, \end{array}\right. \label{eq:diffeo2} \end{equation} where $b>0$, ensures that the transformed density $\pi_{\gamma}$ of the form (\ref{eq:transformed_target}) is exponentially light. In other words, starting from a sub-exponential target density, one can achieve a super-exponential density by first converting it to exponential using the transformation $h$ (given by (\ref{eq:isotropy})) with $f$ given by (\ref{eq:diffeo2}). Then one can convert the obtained exponential density to super-exponential using the transformation $h$ with $f$ given by (\ref{eq:diffeo}). As an example, \ctn{Johnson12} show that the multivariate $t$ distribution of the form \begin{equation} \pi_{\beta}(t)=\frac{\Gamma\left(\frac{\nu+d}{2}\right)} {\Gamma\left(\frac{\nu}{2}\right)\left(\nu\pi\right)^{d/2}\left\{\mbox{det}\left(\Sigma\right)\right\}^{1/2}} \left[1+\frac{1}{\nu}\left(t-\mu\right)'\Sigma^{-1}\left(t-\mu\right)\right]^{-\left(\frac{\nu+d}{2}\right)}, \label{eq:multivariate_t} \end{equation} is sub-exponential. This can be converted to super-exponential by applying the aforementioned transformations in succession. Hence, we can run our geometrically ergodic TMCMC algorithms for the super-exponentially light $\pi_{\gamma}$, and then transform the realizations $\{\gamma^{(k)};k=1,2,\ldots\}$ to $\{h(\gamma^{(k)});k=1,2,\ldots\}$. Then it easily follows (see Appendix A of \ctn{Johnson12}) that the transformed chain is also geometrically ergodic.
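The second radial map (\ref{eq:diffeo2}) can be sketched in the same way; $b=1$ below is an arbitrary choice of the parameter $b>0$. The two branches match at $x=1/b$, where both equal $2e/3$.

```python
import math

def f_sub_to_exp(x, b=1.0):
    """The radial map (eq:diffeo2), used to convert a sub-exponentially
    light density into an exponentially light one; requires b > 0."""
    if x > 1.0 / b:
        return math.exp(b * x) - math.e / 3.0       # exponential branch
    return x ** 3 * b ** 3 * math.e / 6.0 + x * b * math.e / 2.0  # cubic branch
```

Composing this with the polynomial map of (\ref{eq:diffeo}), first (\ref{eq:diffeo2}) and then (\ref{eq:diffeo}), gives the sub-exponential $\to$ exponential $\to$ super-exponential route described above, after which the TMCMC output $\{\gamma^{(k)}\}$ is mapped back through $h$.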
\subsubsection{Simulation studies comparing RWMH and additive TMCMC in the context of diffeomorphism based simulation from Cauchy and $t$-distributions} \label{subsubsec:diffeo_simstudy} We now compare diffeomorphism based RWMH and additive TMCMC algorithms with respect to the K-S distance, when the target distributions are $d$-dimensional Cauchy and $t$-distributions, the latter having $\nu$ degrees of freedom. We assume that the location vectors and scale matrices are $\boldsymbol{\mu}=\boldsymbol{0}_d$ and $\boldsymbol{\Sigma}=\mbox{diag}\{0.7\boldsymbol{1}_d'\}+0.3\boldsymbol{1}_d\boldsymbol{1}_d'$, respectively, where $\boldsymbol{0}_d$ is a $d$-dimensional vector with all elements $0$, and $\boldsymbol{1}_d$ is a $d$-dimensional vector with each component 1. We choose $d=50$ for the illustrations. For both RWMH and additive TMCMC we consider the scale of the proposal distribution to be 2.4. Figure \ref{fig:diffeo_compare_50} compares the performances of diffeomorphism based RWMH and diffeomorphism based Add-TMCMC with respect to the K-S distance when the target distributions are 50-variate Cauchy and 50-variate $t$, respectively, with the aforementioned location vector and scale matrix. In both cases Add-TMCMC quite significantly outperforms RWMH. Hence, the results are highly encouraging -- additive TMCMC significantly outperforms RWMH when the high-dimensional target density is not super-exponential and has a strong dependence structure. Since a mixture of additive and multiplicative TMCMC is demonstrably more efficient than additive TMCMC, it is clear that the mixture will beat RWMH by a large margin. We have also carried out extensive simulation studies comparing RWMH and Add-TMCMC when the target distributions are $50$-dimensional i.i.d. Cauchy and $50$-dimensional i.i.d. $t$ with $10$ degrees of freedom, that is, with $\boldsymbol{\mu}=\boldsymbol{0}_d$ and $\boldsymbol{\Sigma}=\mathbf{I}_d$, the latter standing for the identity matrix of order $d$, with $d=50$.
We do not present the results here due to lack of space, but Add-TMCMC outperformed RWMH at least as significantly as in this reported dependent set-up. As an aside, we also compare the gains of the diffeomorphism based approach over the usual, direct application of RWMH and TMCMC to the target densities. Figure \ref{fig:RWMH_w_wo_diffeo_compare_50} compares the performances of diffeomorphism based RWMH and direct RWMH when the targets are the above-defined 50-dimensional multivariate Cauchy and $t$ (with 10 degrees of freedom). Likewise, Figure \ref{fig:TMCMC_w_wo_diffeo_compare_50} compares the performances of diffeomorphism based Add-TMCMC and direct Add-TMCMC with the above 50-dimensional target densities. As is evident from the figures, the diffeomorphism based approaches quite significantly outperform the direct approaches. \begin{figure} \caption{50-dimensional Cauchy and multivariate $t$ (10 degrees of freedom) targets: Comparisons between K-S distances associated with diffeomorphism based additive TMCMC and diffeomorphism based RWMH.} \label{fig:diffeo1:KS_50} \label{fig:diffeo2:KS_50} \label{fig:diffeo_compare_50} \end{figure} \begin{figure} \caption{50-dimensional Cauchy and multivariate $t$ (10 degrees of freedom) targets: Comparisons between K-S distances associated with RWMH implemented with and without diffeomorphism.} \label{fig:RWMH_cauchy_w_and_wo_diffeo:KS_50} \label{fig:RWMH_t_w_and_wo_diffeo:KS_50} \label{fig:RWMH_w_wo_diffeo_compare_50} \end{figure} \begin{figure} \caption{50-dimensional Cauchy and multivariate $t$ (10 degrees of freedom) targets: Comparisons between K-S distances associated with Add-TMCMC implemented with and without diffeomorphism.} \label{fig:TMCMC_cauchy_w_and_wo_diffeo:KS_50} \label{fig:TMCMC_t_w_and_wo_diffeo:KS_50} \label{fig:TMCMC_w_wo_diffeo_compare_50} \end{figure} \section{Concluding remarks} \label{sec:conclusions} We presented a comprehensive comparative study of geometric ergodicity and convergence behavior of 
various versions of TMCMC: additive, ``essentially full" multiplicative and mixture TMCMC. Additive TMCMC is the easiest to implement and as observed in the simulation study, has somewhat better convergence to the target distribution compared to RWMH. The essentially fully multiplicative TMCMC traverses the sample space more rapidly but we observed that it is relatively slow in convergence to the target density compared to the standard RWMH approach. The best convergence results are obtained for mixture TMCMC which combines the additive and the multiplicative moves in equal proportions. Of considerable interest are situations when the high-dimensional target densities are not super-exponential but can be handled by the diffeomorphism based approach. The relevant simulation studies detailed in Section \ref{subsubsec:diffeo_simstudy} demonstrate far superior convergence of additive TMCMC compared to RWMH. Since these simulation studies are conducted assuming high dependence structure of the target densities, the results are particularly encouraging and lead us to recommend TMCMC in general situations. Moreover, it is to be noted that in these simulation studies we concern ourselves with only additive TMCMC. Since a mixture of additive and multiplicative TMCMC is seen to be more efficient in comparison with additive TMCMC, it is clear that such a mixture will outperform RWMH by even greater margins. There are obviously some questions of further interest. We would definitely like to have quantitative rates of convergence for each of the three approaches to TMCMC. In this paper we considered the mixing proportion in mixture TMCMC to be $1/2$ and we also observed in our simulation study that extremal mixing proportions (which correspond to additive and essentially fully multiplicative approaches) lead to slower convergence compared to uniform mixing. But it would be worth noting how this rate of convergence changes with the change in mixing proportion. 
Optimal scaling of TMCMC methods is another area which is of considerable interest to us. The optimal scaling for additive TMCMC has been studied for a broad class of multivariate target densities (\ctn{Dey13}), but the optimal scalings for mixture TMCMC and the multiplicative or essentially fully multiplicative approaches are yet to be determined. The biggest challenge in dealing with this problem is that the generator functions for the associated time-scaled diffusion processes for these methods are hard to express in any simple analytic form. One area we are currently focusing on is defining adaptive versions of the TMCMC approach (additive and multiplicative) and comparing the performances (convergence criterion and acceptance rate in particular) among various adaptive schemes and also with the typical non-adaptive algorithms we considered here. We are also trying to expand the scope of our approach beyond $\mathbb{R}^{d}$ by considering spheres and other Riemannian or symplectic manifolds as the support of the target distributions, and it would be interesting to investigate properties such as irreducibility, detailed balance and ergodicity of the TMCMC algorithms over such spaces. \section*{Acknowledgment} We are sincerely grateful to three anonymous reviewers whose comments led to a much improved version of our manuscript. \section*{Appendix} \begin{appendix} \section{Minorization condition for multiplicative TMCMC} \label{sec:minorization} For the one-dimensional case, minorization conditions for multiplicative TMCMC have been established by \ctn{Dutta12}. Here we generalize the results to arbitrary dimension. For simplicity we assume $p_i=q_i=1/3$ for $i=1,\ldots,d$. The following theorem establishes the minorization condition for multiplicative TMCMC. \begin{theorem} \label{theorem:minor_mult} Let the target density $\pi$ be bounded and positive on compact sets.
Then there exists a nonzero measure $\nu$, a positive integer $m$, $\delta>0$, and a {\it small set} $E^*$ such that \begin{equation} \left\{P^{(2)}\right\}^m(x,\mathbb A)\geq\delta \nu(\mathbb A),\quad\forall x\in E^*\quad\mbox{and for all Borel sets}\ \ \mathbb A. \label{eq:minor1} \end{equation} \end{theorem} \begin{proof} Observe that, from $x=(x_1,\ldots,x_d)$, it is possible to move to any Borel set $\mathbb A$ in $d$ steps using those multiplicative TMCMC move types $b=(b_1,\ldots,b_d)$ which update only one coordinate at a time. Hence, for our purpose it is sufficient to confine attention to these moves. Let $E^*$ denote a compact subset of $\mathbb R^d$. Also, let $\mathbb C$ be a compact set containing $E^*$. Let $\mathbb A^*=\mathbb A\cap \mathbb C$. For simplicity of presentation we give the proof of minorization for $d=2$. Let $\mathbb A_1=\{(\epsilon_1,\epsilon_2):(x_1\epsilon_1,x_2\epsilon_2)\in\mathbb A^*\}$, $\mathbb A_2=\{(\epsilon_1,\epsilon_2):(x_1/\epsilon_1,x_2/\epsilon_2)\in\mathbb A^*\}$, $\mathbb A_3=\{(\epsilon_1,\epsilon_2):(x_1\epsilon_1,x_2/\epsilon_2)\in\mathbb A^*\}$, and $\mathbb A_4=\{(\epsilon_1,\epsilon_2):(x_1/\epsilon_1,x_2\epsilon_2)\in\mathbb A^*\}$.
For $x\in E^*$, we have \begin{align} &\left\{P^{(2)}\right\}^2(x,\mathbb A)\geq \left\{P^{(2)}\right\}^2(x,\mathbb A^*)\notag\\ &\geq \frac{1}{3^4} \int_{\mathbb A_1} \min\left\{1,\frac{\pi(x_1\epsilon_1,x_2)|\epsilon_1|}{\pi(x_1,x_2)}\right\} \times \min\left\{1,\frac{\pi(x_1\epsilon_1,x_2\epsilon_2)|\epsilon_2|}{\pi(x_1\epsilon_1,x_2)}\right\} g^{(2)}(\epsilon_1)g^{(2)}(\epsilon_2)d\epsilon_1d\epsilon_2\notag\\ &+ \frac{1}{3^4} \int_{\mathbb A_2} \min\left\{1,\frac{\pi(x_1/\epsilon_1,x_2)|\epsilon_1|^{-1}}{\pi(x_1,x_2)}\right\} \times \min\left\{1,\frac{\pi(x_1/\epsilon_1,x_2/\epsilon_2)|\epsilon_2|^{-1}}{\pi(x_1/\epsilon_1,x_2)}\right\} g^{(2)}(\epsilon_1)g^{(2)}(\epsilon_2)d\epsilon_1d\epsilon_2\notag\\ &+ \frac{1}{3^4} \int_{\mathbb A_3} \min\left\{1,\frac{\pi(x_1\epsilon_1,x_2)|\epsilon_1|}{\pi(x_1,x_2)}\right\} \times \min\left\{1,\frac{\pi(x_1\epsilon_1,x_2/\epsilon_2)|\epsilon_2|^{-1}}{\pi(x_1\epsilon_1,x_2)}\right\} g^{(2)}(\epsilon_1)g^{(2)}(\epsilon_2)d\epsilon_1d\epsilon_2\notag\\ &+ \frac{1}{3^4} \int_{\mathbb A_4} \min\left\{1,\frac{\pi(x_1/\epsilon_1,x_2)|\epsilon_1|^{-1}}{\pi(x_1,x_2)}\right\} \times \min\left\{1,\frac{\pi(x_1/\epsilon_1,x_2\epsilon_2)|\epsilon_2|}{\pi(x_1/\epsilon_1,x_2)}\right\} g^{(2)}(\epsilon_1)g^{(2)}(\epsilon_2)d\epsilon_1d\epsilon_2. \label{eq:minorization} \end{align} Let $r=\inf_{y\in\mathbb C}\pi(y)$ and $R=\sup_{y\in\mathbb C}\pi(y)$. Also note that each integral on $\mathbb A_i$; $i=1,2,3,4$, can be split into $\mathbb A_i=\{\mathbb A_i\cap\mathbb S_{\eta}\}\cup\{\mathbb A_i\cap\mathbb S^c_{\eta}\}$, where $\mathbb S_{\eta}=\{(\epsilon_1,\epsilon_2):\eta<|\epsilon_1|\leq 1, \eta<|\epsilon_2|\leq 1\}$, for some $\eta>0$, and $\mathbb S^c_{\eta}$ denotes the complement of $\mathbb S_{\eta}$. Let $G$ denote the probability measure corresponding to the distribution $\epsilon_1,\epsilon_2\stackrel{i.i.d.}{\sim}g^{(2)}$.
On $\mathbb A_i\cap\mathbb S^c_{\eta}$, for $i=1,3,4$, the corresponding integrands have infimum zero; hence zero is the lower bound of the respective integrals on $\mathbb A_i\cap\mathbb S^c_{\eta}$, for $i=1,3,4$. On $\mathbb A_2\cap\mathbb S^c_{\eta}$, the integrand of the second integral has infimum equal to 1; hence, the corresponding integral is bounded below by $\frac{1}{3^4}G(\mathbb A_2\cap\mathbb S^c_{\eta})$. Note that $G(\mathbb A_2\cap\mathbb S^c_{\eta})$ can be made arbitrarily small by choosing $\eta$ to be as small as desired. On $\mathbb A_i\cap\mathbb S_{\eta}$, each of the integrals is bounded below by $\frac{\eta^2}{3^4}(\frac{r}{R})^2G(\mathbb A_i\cap\mathbb S_{\eta})$. Hence, \begin{align} \left\{P^{(2)}\right\}^2(x,\mathbb A)&\geq \left\{P^{(2)}\right\}^2(x,\mathbb A^*)\notag\\ &\geq \frac{\eta^2}{3^4}\left(\frac{r}{R}\right)^2\sum_{i=1}^4G(\mathbb A_i\cap\mathbb S_{\eta})\notag\\ &\geq \frac{\eta^2}{3^4}\left(\frac{r}{R}\right)^2G(\left\{\cup_{i=1}^4\mathbb A_i\right\}\cap\mathbb S_{\eta})\notag\\ &= \frac{\eta^2}{3^4}\left(\frac{r}{R}\right)^2G(\mathbb A^*\cap\mathbb S_{\eta})\notag\\ &= \frac{\eta^2}{3^4}\left(\frac{r}{R}\right)^2G(\mathbb S_{\eta}) \times\frac{G(\mathbb A^*\cap\mathbb S_{\eta})}{G(\mathbb S_{\eta})}\notag\\ &=\delta\nu(\mathbb A^*), \end{align} with \[ \delta=\frac{\eta^2}{3^4}\left(\frac{r}{R}\right)^2G(\mathbb S_{\eta})\quad\mbox{and}\quad \nu(\mathbb A^*)=\frac{G(\mathbb A^*\cap\mathbb S_{\eta})}{G(\mathbb S_{\eta})}. \] Hence, minorization holds for multiplicative TMCMC, and $E^*$ is the small set. The same ideas of the proof go through for any finite dimension $d$. \end{proof} We next show that vectors in the set \[ \mathcal V=\{(v_1,\ldots,v_d)\in\mathbb R^d: v_i=0~ \mbox{for at least one}~i\in\{1,\ldots,d\}\}, \] can not be limit points of small sets. For our purpose we need a lemma which can be seen as a generalization of Lemma 1 of \ctn{Dutta12} to arbitrary dimensions and for vectors in $\mathcal V$.
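To make the constants above concrete, the following minimal Python sketch evaluates $\delta$ for $d=2$ under the purely illustrative assumption that $g^{(2)}$ is the uniform density on $[-1,1]$ (in which case $G(\mathbb S_{\eta})=(1-\eta)^2$); the ratio $r/R=0.5$ is likewise a hypothetical choice, not a quantity from the paper.

```python
def minorization_delta(eta, r_over_R):
    # Illustrative choice: g^(2) uniform on [-1, 1], so
    # P(eta < |eps_i| <= 1) = 1 - eta for each coordinate and
    # G(S_eta) = (1 - eta)**2 for two iid coordinates.
    G_S_eta = (1.0 - eta) ** 2
    return (eta ** 2 / 3 ** 4) * r_over_R ** 2 * G_S_eta

# delta vanishes as eta -> 0 or eta -> 1; the product eta^2 (1 - eta)^2
# is maximized at eta = 1/2 for this choice of g^(2).
best_eta = max((k / 1000.0 for k in range(1, 1000)),
               key=lambda e: minorization_delta(e, 0.5))
```

The grid search illustrates the trade-off in the proof: shrinking $\eta$ makes $G(\mathbb A_2\cap\mathbb S^c_{\eta})$ small but also shrinks the $\eta^2$ factor in $\delta$.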
\begin{lemma} \label{lemma:minor_mult} Fix $v=(v_1,\ldots,v_d)\in\mathcal V$. For $\{i_1,\ldots,i_k\}\subseteq\{1,\ldots,d\}$, where $k\leq d$, let $v_{i_j}=0$, for $j=1,\ldots,k$. Let $\{x_n\}$ be a sequence of positive (negative) numbers decreasing (increasing) to zero. Consider the sequence $v_n=(v_{1,n},\ldots,v_{d,n})'$, where $v_{j,n}=x_n$ for $j=i_1,\ldots,i_k$, and $v_{j,n}=v_j$ for $j\in\{1,\ldots,d\}\backslash \{i_1,\ldots,i_k\}$. If $v_i=0$ for $i=1,\ldots,d$, then $v_n=(x_n,\ldots,x_n)'$ may also be considered. Then, \begin{equation} P^{(2)}(v_n,\mathbb A)\rightarrow 0, \label{eq:minor_mult} \end{equation} for all Borel sets $\mathbb A$ such that $\mathbb A\cap\{(v_1,\ldots,v_d)\in\mathbb R^d:v_{i_j}=0;~j=1,\ldots,k\}=\emptyset$. \end{lemma} \begin{proof} Without loss of generality we present the proof for $d=2$. Let us fix $v=(v_1,v_2)$, where $v_1=0$ and $v_2\in\mathbb R$. Let $v_n=(x_n,v_2)$. Note that for moving from $x_n$ to $z\in\mathbb R$, where $|x_n|\leq |z|$ for all $n$, we must simulate $\epsilon = x_n/z$ and take the backward move $z=x_n/\epsilon$. The move $z=x_n\epsilon$, with $\epsilon=z/x_n$ can not be valid in this case, since $x_n\rightarrow 0$ implies that for large $n$, $\epsilon\notin [-1,1]$. Since the acceptance probability is bounded above by 1, we have, for $y<0$, \begin{align} P^{(2)}(v_n,(-\infty,y]\times (-\infty,\infty)) &\leq \frac{1}{3^2}\int_{x_n/y}^0g(\epsilon)d\epsilon\notag\\ &\rightarrow 0. \label{eq:limit_point1} \end{align} If $y>0$, then \begin{align} P^{(2)}(v_n,[y,\infty)\times (-\infty,\infty)) &\leq \frac{1}{3^2}\int_0^{x_n/y}g(\epsilon)d\epsilon\notag\\ &\rightarrow 0. \label{eq:limit_point2} \end{align} Hence, (\ref{eq:minor_mult}) holds when $d=2$. The proof clearly goes through for any dimension $d$. If $v=(0,0)$, we can consider $v_n=(x_n,0)'$ or $v_n=(x_n,x_n)'$. 
Then, in addition to (\ref{eq:limit_point1}) and (\ref{eq:limit_point2}), which clearly hold, the following also hold true: if $y<0$ \[ P^{(2)}(v_n,(-\infty,\infty)\times (-\infty,y])\rightarrow 0, \] and \[ P^{(2)}(v_n,(-\infty,\infty)\times [y,\infty))\rightarrow 0, \] if $y>0$. These imply that for dimension $d=2$, \begin{equation} P^{(2)}(v_n,\cdot)\rightarrow I_{\{\boldsymbol{0}\}}(\cdot). \label{eq:limit_point_0} \end{equation} The above result (\ref{eq:limit_point_0}) clearly holds for any dimension $d$ for $v=(0,0,\ldots,0)'$ and $v_n=x_n\boldsymbol{1}$, where $\boldsymbol{1}=(1,1,\ldots,1)'$ is the $d$-component vector of ones. \end{proof} Now, if $v\in\mathcal V$ is a limit point of $E^*$, then there exists a sequence $v_n$ as in Lemma \ref{lemma:minor_mult}, converging to $v$. This and Lemma \ref{lemma:minor_mult} imply that for any fixed integer $m>1$, and for any Borel set $\mathbb A$, \begin{align} \left\{P^{(2)}\right\}^m(v_n,\mathbb A) &=\int_{\mathbb R^d}\left\{P^{(2)}\right\}^{m-1}(z,\mathbb A)P^{(2)}(v_n,dz)\notag\\ &\rightarrow 0, \label{eq:prob_lim1} \end{align} if $\mathbb A\cap\{(v_1,\ldots,v_d)\in\mathbb R^d:v_{i_j}=0;~j=1,\ldots,k\}=\emptyset$. In particular, if $\boldsymbol{0}$ is a limit point of $E^*$, then for any fixed integer $m>1$, and for any Borel set $\mathbb A$, \begin{align} \left\{P^{(2)}\right\}^m(x_n\boldsymbol{1},\mathbb A) &\rightarrow I_{\{\boldsymbol{0}\}}(\mathbb A). \label{eq:prob_lim2} \end{align} Both (\ref{eq:prob_lim1}) and (\ref{eq:prob_lim2}) contradict the minorization inequality (\ref{eq:minor1}). Now consider the case of additive-multiplicative TMCMC. Let the coordinates with indices $\{j_1,j_2,\ldots,j_{\ell}\}\subset\{1,2,\ldots,d\}$ be given the multiplicative transformation and let the remaining coordinates be given the additive transformation. Here, let $\mathcal V(j_1,\ldots,j_{\ell})= \{(v_1,\ldots,v_d)'\in\mathbb R^d:v_j=0~\mbox{for at least one}~j\in\{j_1,j_2,\ldots,j_{\ell}\}\}$.
Then vectors $v\in \mathcal V(j_1,\ldots,j_{\ell})$ can not be limit points of small sets associated with additive-multiplicative TMCMC. In particular, $\boldsymbol{0}$ can not be a limit point. The proof is the same as in the case of multiplicative TMCMC, and hence omitted. \begin{comment} \section{Proof of detailed balance of the Markov transition kernel $P=\pi(\mathbb N_0)P^{(1)}+\pi(\mathbb N^c_0)P^{(2)}$} \label{sec:detailed_balance} \section*{Case 1: $x\in\mathbb N_0$ and $y\in\mathbb N_0$, where $x\neq y$} To move from $x$ to $y$, one must implement additive TMCMC because, to implement multiplicative TMCMC, for $i=1,\ldots,d$, one must set either $y_i=x_i$ (no change) or $y_i=x_i\epsilon$ (forward move) or $y_i=x_i/\epsilon$ (backward move), where $\epsilon=y_i/x_i$ or $\epsilon=x_i/y_i$ is drawn from $g^{(2)}$. Since both $x$ and $y$ are in $\mathbb N_0$, either $\epsilon\in\mathcal N_{+1}$ or $\epsilon\in\mathcal N_{-1}$, both of which have zero probabilities under $g^{(2)}$. If one uses the move type that sets $y_i=x_i$ for all $i=1,\ldots,d$, then this is not a valid move since $x\neq y$. To return from $y$ to $x$, again one must use additive TMCMC, since multiplicative TMCMC requires setting either $x_i=y_i$ (no change) $x_i=y_i/\epsilon$ (backward move) or $x_i=y_i\epsilon$ (forward move), where $\epsilon=y_i/x_i$ or $\epsilon=x_i/y_i$ is drawn from $g^{(2)}$, and by the same arguments as above, are not valid. Formally, we have \begin{align} \pi(x)P(x,y)&=\frac{1}{2^d}\pi_1(x)\pi(\mathbb N_0)g^{(1)}(\epsilon) \min\left\{1,\frac{\pi_1(y)}{\pi_1(x)}\right\}\notag\\ &=\frac{1}{2^d}g^{(1)}(\epsilon)\pi(\mathbb N_0)\min\{\pi_1(x),\pi_1(y)\} \label{eq:case1_1} \end{align} and \begin{align} \pi(y)P(y,x)&=\frac{1}{2^d}\pi_1(y)\pi(\mathbb N_0)g^{(1)}(\epsilon) \min\left\{1,\frac{\pi_1(x)}{\pi_1(y)}\right\}\notag\\ &=\frac{1}{2^d}g^{(1)}(\epsilon)\pi(\mathbb N_0)\min\{\pi_1(x),\pi_1(y)\}, \label{eq:case1_2} \end{align} showing that detailed balance holds. 
\section*{Case 2: $x\in\mathbb N_0$ and $y\in\mathbb N^c_0$} To move from $x$ to $y$, one can either implement additive TMCMC of multiplicative TMCMC with moves $x_i\mapsto x_i/\epsilon$ for at least one $i$. To return from $y$ to $x$, the backward moves for additive or multiplicative TMCMC must be used coordinate wise. In this case, we have \begin{align} \pi(x)P(x,y)&=\frac{1}{2^d}\pi_1(x)\pi(\mathbb N_0)g^{(1)}(\epsilon_1)\min\left\{1,\frac{\pi_2(y)}{\pi_1(x)}\right\} +\frac{1}{3^d}\pi_1(x)\pi(\mathbb N^c_0)g^{(2)}(\epsilon_2) \min\left\{1,\frac{\pi_2(y)}{\pi_1(x)}|J(b,\epsilon_2)|\right\}\notag\\ &=\frac{1}{2^d}\pi(\mathbb N_0)g^{(1)}(\epsilon_1)\min\{\pi_1(x),\pi_2(y)\} +\frac{1}{3^d}\pi(\mathbb N^c_0)g^{(2)}(\epsilon_2)\min\left\{\pi_1(x),\pi_2(y)|J(b,\epsilon_2)|\right\} \label{eq:case2_1} \end{align} To move back from $y$ to $x$ we can use the backward moves of either the additive or the multiplicative transformations. Hence, \begin{align} \pi(y)P(y,x)&=\frac{1}{2^d}\pi_2(y)\pi(\mathbb N_0)g^{(1)}(\epsilon_1)\min\left\{1,\frac{\pi_1(x)}{\pi_2(y)}\right\} +\frac{1}{3^d}\pi_2(y)\pi(\mathbb N^c_0)g^{(2)}(\epsilon_2)|J(b,\epsilon_2) \min\left\{1,\frac{\pi_1(x)}{\pi_2(y)}|J(b,\epsilon_2)|^{-1}\right\}\notag\\ &=\frac{1}{2^d}\pi(\mathbb N_0)g^{(1)}(\epsilon_1)\min\{\pi_1(x),\pi_2(y)\} +\frac{1}{3^d}\pi(\mathbb N^c_0)g^{(2)}(\epsilon_2)\min\left\{\pi_1(x),\pi_2(y)|J(b,\epsilon_2)|\right\} \label{eq:case2_2} \end{align} Since (\ref{eq:case2_2}) is the same as (\ref{eq:case2_1}), detailed balance holds. \section*{Case 3: $x\in\mathbb N^c_0$ and $y\in\mathbb N^c_0$, $x\neq y$} To move from $x$ to $y$, we can use either additive TMCMC with probability $\pi(\mathcal N_0)$ or multiplicative TMCMC with probability $\pi(\mathcal N^c_0)$. Hence, the same equations (\ref{eq:case2_1}) and (\ref{eq:case2_2}) hold in this case, proving that detailed balance holds. 
\end{comment} \section{Proof of geometric ergodicity of the Markov transition kernel $P=\pi(\mathbb N_0)P^{(1)}+\pi(\mathbb N^c_0)P^{(2)}$} \label{sec:P_geo} Let us first introduce an auxiliary random variable $Z$, with \begin{align} Pr(Z=1)&=\pi(\mathbb N_0)\quad\mbox{and}\quad Pr(Z=2)=1-Pr(Z=1).\label{eq:pi_Z} \end{align} Note that for $i=1,2$, \begin{equation} P(x,A|Z=i)=P^{(i)}(x,A)\quad\mbox{and}\quad \pi(A|Z=i)=\pi_i(A). \label{eq:pi_i} \end{equation} Also note that, since $P^{(i)}$ is geometrically ergodic when the target density is $\pi_i$, we must have \begin{equation} \left\|\left\{P^{(i)}\right\}^n(x,\cdot)-\pi_i(\cdot)\right\|\leq M_i(x)\rho^n_i, \label{eq:pi_geo} \end{equation} for $i=1,2$, for some $M_1(x),M_2(x)<\infty$ and $0<\rho_1,\rho_2<1$. Now, \begin{align} &\|P^n(x,\cdot)-\pi(\cdot)\|_{TV}=\underset{A\in\mathcal B(\mathbb R^d)} \sup\bigg |P^n(x,A)-\pi(A)\bigg |\notag\\ &= \underset{A\in\mathcal B(\mathbb R^d)} \sup\bigg |P^n(x,A|Z=1)Pr(Z=1)+ P^n(x,A|Z=2)Pr(Z=2)\notag\\ &\quad\quad-\left(\pi(A|Z=1)Pr(Z=1)+\pi(A|Z=2)Pr(Z=2)\right)\bigg |\notag\\ &= \underset{A\in\mathcal B(\mathbb R^d)} \sup\bigg |\left\{P^{(1)}\right\}^n(x,A)Pr(Z=1)+ \left\{P^{(2)}\right\}^n(x,A)Pr(Z=2)\notag\\ &\quad\quad-\left(\pi_1(A)Pr(Z=1)+\pi_2(A)Pr(Z=2)\right)\bigg |\quad\mbox{by (\ref{eq:pi_i})}\notag\\ &\leq Pr(Z=1)\left\|\left\{P^{(1)}\right\}^n(x,\cdot)-\pi_1(\cdot)\right\| +Pr(Z=2)\left\|\left\{P^{(2)}\right\}^n(x,\cdot)-\pi_2(\cdot)\right\|\notag\\ &\leq Pr(Z=1)M_1(x)\rho^n_1+Pr(Z=2)M_2(x)\rho^n_2\quad\mbox{by (\ref{eq:pi_geo})}\notag\\ &\leq M(x)\rho^n,\notag \end{align} where $M(x)=\max\{M_1(x),M_2(x)\}$ and $\rho=\max\{\rho_1,\rho_2\}<1$. Hence, $P$ is geometrically ergodic when the target density is $\pi$.
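The mixture argument can be checked numerically on a toy example. The Python sketch below mirrors the structure of the proof on a two-state space: the component $Z$ is drawn once, so the $n$-step law is $Pr(Z=1)\{P^{(1)}\}^n+Pr(Z=2)\{P^{(2)}\}^n$, and the total variation distance to $\pi$ contracts at the rate $\rho=\max\{\rho_1,\rho_2\}$. All kernels and the weight below are illustrative choices, not quantities from the paper.

```python
# Toy check of the mixture bound: two 2-state chains P1, P2 with stationary
# distributions pi1, pi2; the mixture target is pi = w*pi1 + (1-w)*pi2.

def mat_pow(P, n):
    R = [[1.0, 0.0], [0.0, 1.0]]            # 2x2 identity
    for _ in range(n):
        R = [[sum(R[i][k] * P[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return R

P1 = [[0.9, 0.1], [0.2, 0.8]]               # stationary pi1 = (2/3, 1/3), rho1 = 0.7
P2 = [[0.5, 0.5], [0.3, 0.7]]               # stationary pi2 = (3/8, 5/8), rho2 = 0.2
pi1, pi2 = [2 / 3, 1 / 3], [3 / 8, 5 / 8]
w = 0.4
pi = [w * pi1[j] + (1 - w) * pi2[j] for j in range(2)]

def tv_from_state(n, x=0):
    # ||P^n(x, .) - pi||_TV for the mixture kernel started at state x
    P1n, P2n = mat_pow(P1, n), mat_pow(P2, n)
    row = [w * P1n[x][j] + (1 - w) * P2n[x][j] for j in range(2)]
    return 0.5 * sum(abs(row[j] - pi[j]) for j in range(2))
```

For these matrices the second eigenvalues are $0.7$ and $0.2$, so successive total variation distances contract at least at the rate $\rho=0.7$, in line with the bound $M(x)\rho^n$.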
Note that the proof employed in Section \ref{subsec:usual_mixture_kernel} for showing geometric ergodicity of the alternative mixture Markov transition kernel $P^*$ is also valid for showing geometric ergodicity of $P$, but the current proof (with slight modification; replacing the summations with integrations) is appropriate for proving geometric ergodicity of continuous mixture kernels of the form (\ref{eq:non_super_exp_P}) for continuous mixture target densities of the form (\ref{eq:non_super_exp_pi}), since a single function $V$ need not be appropriate for an (uncountably) infinite number of mixture components. \section{Discussion on estimation of the mixing probability $\pi(\mathbb N_0)$} \label{sec:implement_P} In order to implement the Markov transition kernel $P$, for each $k=1,2,\ldots$, we are required to draw $u\sim U(0,1)$; if $u<\pi(\mathbb N_0)$, we select $x^{(k)}_1$, else we select $x^{(k)}_2$. Note that $\pi(\mathbb N_0)$ is not known, and needs to be estimated numerically. Direct estimation using TMCMC samples from $\pi$ will generally not be reliable, since the region $\mathbb N_0$, being arbitrarily small, can easily be missed by any MCMC method. However, $\pi(\mathbb N_0)$ may be reliably estimated using importance sampling as follows. Let $\pi(x)=c\ell(x)$, where $c=1/\int\ell(y)dy$ is the unknown normalizing constant. Also, let $h(x)=|\mathbb N_0|^{-1}I_{\mathbb N_0}(x)$ be the uniform distribution on $\mathbb N_0$, where $|\mathbb N_0|$ denotes the Lebesgue measure of the set $\mathbb N_0$. We may use $h$ as the importance sampling density in the region $\mathbb N_0$. For the region $\mathbb N^c_0$ we may consider some thick-tailed importance sampling density $g(x)$, for example, a $d$-variate $t$-density, but adjusting the support to be $\mathbb N^c_0$.
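The scheme just described can be sketched in Python in one dimension with purely illustrative choices: $\ell(x)=e^{-x^2/2}$ (so that $\pi$ is standard normal), $\mathbb N_0=(-a,a)$, $h$ uniform on $\mathbb N_0$, and, in place of the $t$-density, a standard Cauchy truncated to $\mathbb N^c_0$ as the thick-tailed density $g$; the final ratio is the estimator $\hat\pi(\mathbb N_0)$.

```python
# Importance-sampling estimate of pi(N0) in 1-D; the target, the value of a,
# the sample sizes, and the truncated-Cauchy g are all illustrative choices.
import math, random

random.seed(1)
a = 0.1                                   # N0 = (-a, a)
def ell(x):                               # unnormalized target: pi = N(0, 1)
    return math.exp(-x * x / 2.0)

# Region N0: h = Uniform(-a, a), so ell(x)/h(x) = 2a * ell(x)
N1 = 20000
sum_h = sum(ell(-a + 2 * a * random.random()) * (2 * a) for _ in range(N1)) / N1

# Region N0^c: g(y) = c / (pi * (1 + y^2)) for |y| > a, with
# c = 1 / P(|Cauchy| > a); sample by rejection from the full Cauchy.
N2 = 200000
c = 1.0 / (1.0 - (2.0 / math.pi) * math.atan(a))
sum_g, got = 0.0, 0
while got < N2:
    y = math.tan(math.pi * (random.random() - 0.5))
    if abs(y) > a:
        sum_g += ell(y) * math.pi * (1.0 + y * y) / c
        got += 1
sum_g /= N2

pi_hat_N0 = sum_h / (sum_h + sum_g)
true_val = math.erf(a / math.sqrt(2.0))   # exact pi(N0) under N(0, 1)
```

Here \texttt{sum\_h} and \texttt{sum\_g} estimate $\int_{\mathbb N_0}\ell$ and $\int_{\mathbb N^c_0}\ell$ respectively, and with these sample sizes the estimate agrees closely with the exact value $\mathrm{erf}(a/\sqrt 2)$.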
Then \begin{align} \pi(\mathbb N_0) &=\frac{\int_{\mathbb N_0}\ell(x)dx}{\int\ell(x)dx} =\frac{\int_{\mathbb N_0}\frac{\ell(x)}{h(x)}h(x)dx} {\int_{\mathbb N_0}\frac{\ell(x)}{h(x)}h(x)dx+\int_{\mathbb N^c_0}\frac{\ell(x)}{g(x)}g(x)dx}\notag\\ &\approx\frac{\frac{1}{N_1}\sum_{j=1}^{N_1}\frac{\ell(x^{(j)})}{h(x^{(j)})}} {\frac{1}{N_1}\sum_{j=1}^{N_1}\frac{\ell(x^{(j)})}{h(x^{(j)})} +\frac{1}{N_2}\sum_{k=1}^{N_2}\frac{\ell(y^{(k)})}{g(y^{(k)})}}=\hat\pi(\mathbb N_0)~\mbox{(say)},\notag \end{align} where $\{x^{(j)};j=1,\ldots,N_1\}$ are $i.i.d.$ realizations drawn from the uniform distribution $h$ and $\{y^{(k)};k=1,\ldots,N_2\}$ are $i.i.d.$ or TMCMC realizations from $g$, depending on the complexity of the form of $g$. The parameters of $g$ may be chosen by variational methods; see http://www.gatsby.ucl.ac.uk/vbayes/ for a vast repository of papers, software and links on variational methods. Observe that even though we are proposing to estimate $\pi(\mathbb N_0)$ by $\hat\pi(\mathbb N_0)$, implementation of the mixture kernel $P$ with $\hat\pi(\mathbb N_0)$ as the mixing probability is expected to be practically the same as that with the true mixing probability $\pi(\mathbb N_0)$. This is because even if $\hat\pi(\mathbb N_0)$ is only a reasonably accurate estimate of $\pi(\mathbb N_0)$, it is expected that for any $u\sim U(0,1)$, $u<\pi(\mathbb N_0)$ if and only if $u<\hat\pi(\mathbb N_0)$. For instance, if $\hat\pi(\mathbb N_0)=\pi(\mathbb N_0)+\eta$, for some $\eta>0$, then $Pr\left(\pi(\mathbb N_0)<u<\hat\pi(\mathbb N_0)\right)=\eta$, which is small whenever $\eta$ is reasonably small. In other words, a very high degree of accuracy of the estimate $\hat\pi(\mathbb N_0)$ is not that important in this case. \end{appendix} \end{document}
\begin{document} \title{Neural Tangent Kernel: A Survey} \tableofcontents \begin{abstract} A seminal work of \cite{jacot2018neural} demonstrated that training a neural network under a certain parameterization is equivalent to performing a certain kernel method as width goes to infinity. This equivalence opened a promising direction of applying results from the rich literature on kernel methods to neural nets, which are much harder to tackle. The present survey covers key results on kernel convergence as width goes to infinity, finite-width corrections, applications, and a discussion of limitations of the corresponding method. \end{abstract} \section{Definition and the explicit solution for square loss} \label{sec:definition} Consider a generic parametric model $f(x; \theta): \, \mathcal{X} \times \mathbb{R}^N \to \mathbb{R}$ differentiable with respect to the weights $\theta$. We aim to minimize square loss over a dataset $(\vec x, \vec y)$ of size $m$: $\frac{1}{2} \sum_{j=1}^m (y_j - f(x_j; \theta))^2 \to \min_\theta$. A continuous-time gradient descent dynamics (gradient flow) corresponds to the following ordinary differential equation (ODE): \begin{equation} \dot\theta_t = -\nabla_\theta\left(\frac{1}{2} \sum_{j=1}^m (y_j - f(x_j; \theta_t))^2\right) = \sum_{j=1}^m (y_j - f(x_j; \theta_t)) \nabla_\theta f(x_j; \theta_t). \end{equation} Let us abbreviate the prediction at a given data point $x$ at time $t$, $f(x; \theta_t)$, as $f_t(x)$. Under the dynamics above, this quantity evolves as \begin{equation} \dot f_t(x) = \dot\theta_t^T \nabla_\theta f_t(x) = \sum_{j=1}^m (y_j - f_t(x_j)) \nabla_\theta^T f_t(x_j) \nabla_\theta f_t(x). \label{eq:f_t_dynamics} \end{equation} If we perceive $\nabla_\theta f_t(x)$ as a feature map $\Phi_t: \, \mathcal{X} \to \mathbb{R}^N$, the scalar product above becomes a kernel evaluated at the pair $(x_j,x)$.
This kernel is called an empirical neural tangent kernel (NTK) and is denoted by $\hat\Theta_t$: \begin{equation} \hat\Theta_t(x,x') = \nabla_\theta^T f_t(x) \nabla_\theta f_t(x'). \end{equation} This definition allows for a shorter representation of the prediction dynamics (\ref{eq:f_t_dynamics}): \begin{equation} \dot f_t(x) = \hat\Theta_t(x,\vec x) (\vec y - f_t(\vec x)), \label{eq:f_t_dynamics_emp_ntk} \end{equation} where by convention, $\hat\Theta_t(x,\vec x) \in \mathbb{R}^{1 \times m}$. Assume that the empirical NTK does not evolve with time, i.e. $\hat\Theta_t(x,x') = \hat\Theta_0(x,x')$ $\forall x,x' \in \mathcal{X}$. This assumption is equivalent to assuming the model $f(x;\theta)$ to be linear as a function of its weights: \begin{equation} f(x; \theta) = f(x; \theta_0) + \nabla_\theta^T f(x; \theta_0) (\theta - \theta_0). \end{equation} When the kernel is constant, Eq.(\ref{eq:f_t_dynamics_emp_ntk}) is easily integrable. Indeed, on the train dataset, \begin{equation} \dot f_t(\vec x) = \hat\Theta_0(\vec x,\vec x) (\vec y - f_t(\vec x)), \label{eq:f_t_dynamics_train} \end{equation} which gives \begin{equation} f_t(\vec x) = f_0(\vec x) - \left(I - e^{-\hat\Theta_0(\vec x, \vec x) t}\right) (f_0(\vec x) - \vec y). \end{equation} Plugging it back into Eq.(\ref{eq:f_t_dynamics_emp_ntk}) gives \begin{equation} \dot f_t(x) = \hat\Theta_0(x,\vec x) e^{-\hat\Theta_0(\vec x, \vec x) t} (\vec y - f_0(\vec x)), \end{equation} and finally, \begin{equation} f_t(x) = f_0(x) - \hat\Theta_0(x, \vec x) \hat\Theta_0^{-1}(\vec x, \vec x) \left(I - e^{-\hat\Theta_0(\vec x, \vec x) t}\right) (f_0(\vec x) - \vec y). \label{eq:lin_solution_square_loss} \end{equation} While the exact solution above is based on the constant kernel assumption, one can prove that the kernel is indeed nearly constant in certain settings, see \cref{sec:convergence}. This allows one to transfer results that hold for linearized models to original ones.
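As a sanity check of the closed-form solution, the Python sketch below takes a model that is exactly linear in its single weight, $f(x;\theta)=\theta x$ (a hypothetical toy choice, for which $\hat\Theta_t(x,x')=xx'$ is constant and the formula is exact), trains it on one data point by Euler-discretized gradient flow, and compares the result with the closed form. All numbers are illustrative.

```python
# Closed-form linearized solution vs. Euler-integrated gradient flow for the
# toy model f(x; theta) = theta * x with a single training point.
import math

x_train, y_train = 2.0, 1.0
theta0 = 0.3
K = x_train * x_train                    # Gram "matrix" Theta_0(x_train, x_train)

def f(theta, x):
    return theta * x

def closed_form(t, x):
    # f_t(x) = f_0(x) - Theta(x, X) Theta(X, X)^{-1} (1 - e^{-K t}) (f_0(X) - y)
    return f(theta0, x) - (x * x_train) / K * (1 - math.exp(-K * t)) \
        * (f(theta0, x_train) - y_train)

# Euler discretization of d(theta)/dt = (y - f(x_train; theta)) * x_train
theta, dt, T = theta0, 1e-4, 2.0
for _ in range(round(T / dt)):
    theta += dt * (y_train - f(theta, x_train)) * x_train

x_test = -1.5                            # closed_form(T, x_test) ~ f(theta, x_test)
```

The two predictions agree up to the $O(dt)$ discretization error of the Euler scheme, and the trained weight approaches the interpolating value $y/x$.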
For example, $f_t(\vec x)$ converges to $\vec y$ (i.e. the model learns the dataset) as long as the Gram matrix is positive definite: $\hat\Theta_0(\vec x,\vec x) \geq \lambda_0$ for some $\lambda_0 > 0$, see Eq.(\ref{eq:f_t_dynamics_train}). The same result holds without the constant kernel assumption, as long as $\hat\Theta_t(\vec x,\vec x)$ stays sufficiently close to $\hat\Theta_0(\vec x,\vec x)$, and therefore, say, $\hat\Theta_t(\vec x,\vec x) \geq \lambda_0/2$. Indeed, \begin{equation} \frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right) = -(\vec y - f_t(\vec x))^T \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x)) \leq -\frac{\lambda_0}{2} \| \vec y - f_t(\vec x) \|_2^2, \end{equation} which gives \begin{equation} \| \vec y - f_t(\vec x) \|_2^2 \leq e^{-\lambda_0 t} \| \vec y - f_0(\vec x) \|_2^2 \to 0 \quad \text{as $t \to \infty$}; \end{equation} see \cite{du2018gradient} for the formal result. This result is not trivial, since loss surfaces of generic neural nets are non-convex, and therefore any local optimization method (e.g. the gradient flow) may get stuck in a spurious local minimum. See \cite{arora2019fine} for other results of a similar kind. Also, if one assumes the kernel to be nearly constant, one can identify certain pathologies affecting the learning process by analyzing the initial kernel: see \cite{martens2021rapid} discussing trainability of very deep nets and \cite{dupuis2021dnn,tancik2020fourier} fixing blurry results of image regression. Finally, the exact solution (\ref{eq:lin_solution_square_loss}) can be used as a substitute for the usual gradient descent training routine. A naive approach for evaluating Eq.(\ref{eq:lin_solution_square_loss}) would be to compute the initial kernel $\hat\Theta_0(\vec x, \vec x)$ and then to invert it. Naively computing the kernel requires $O(N m^2)$ time and $O(m^2)$ memory, while inverting it takes $O(m^3)$ more time. 
Such an approach is infeasible for datasets of realistic sizes (i.e. $m \gtrsim 10^5$), calling for major optimizations, see \cite{novak2019neural,novakfast,meanti2020kernel}. Nevertheless, for $m \lesssim 10^4$, the direct approach is feasible and gives promising results, see \cite{arora2019harnessing}. Also, in certain scenarios, the kernel can be efficiently scaled from small $m$ to larger ones, see \cite{radhakrishnan2021simple}. \section{Kernel convergence} \label{sec:convergence} The goal of this section is to validate the constant kernel assumption: $\hat\Theta_t(x,x') = \hat\Theta_0(x,x')$ $\forall x,x' \in \mathcal{X}$. The main result is: under a certain parameterization, the empirical NTK of a neural network becomes constant as width goes to infinity. Before stating this result formally, we provide an illustrative example. Consider a neural network with one hidden layer, scalar input, and Gaussian-initialized weights: \begin{equation} f(x; a_{1:n}, w_{1:n}) = \sum_{i=1}^n a_i \phi(w_i x), \quad a_{1:n} \sim \mathcal{N}(0, n^{-1} I), \quad w_{1:n} \sim \mathcal{N}(0, I). \label{eq:1_hid_net_standard} \end{equation} Here $n$ is the width of the hidden layer; following a standard initialization scheme \cite{he2015delving}, the initialization variance of each layer is inversely proportional to the number of input neurons. The above parameterization of the network is the one typically used in practice; we shall refer to it as standard. However, the parameterization we need is a different one: \begin{equation} f(x; a_{1:n}, w_{1:n}) = \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i \phi(w_i x), \quad a_{1:n} \sim \mathcal{N}(0, I), \quad w_{1:n} \sim \mathcal{N}(0, I). \label{eq:1_hid_net_ntk} \end{equation} We shall refer to it as NTK-parameterization.
Note that it does not alter the distribution of neurons, both hidden and output, at initialization, but it does alter the gradient flow: \begin{equation} \dot a_k = \frac{1}{\sqrt{n}} \sum_{j=1}^m (y_j - f_t(x_j)) \phi(w_k x_j), \quad \dot w_k = \frac{1}{\sqrt{n}} \sum_{j=1}^m (y_j - f_t(x_j)) a_k \phi'(w_k x_j) x_j. \end{equation} Here input and output weights receive $O(n^{-1/2})$ increments, while both of them are $O(1)$ at initialization. Hence $a_k(t) \to a_k(0)$ and $w_k(t) \to w_k(0)$ as $n \to \infty$ for any fixed $k \in \mathbb{N}$ and $t \in \mathbb{R}_+$. Compare with gradient flow under standard parameterization: \begin{equation} \dot a_k = \sum_{j=1}^m (y_j - f_t(x_j)) \phi(w_k x_j), \quad \dot w_k = \sum_{j=1}^m (y_j - f_t(x_j)) a_k \phi'(w_k x_j) x_j. \end{equation} Here the output weights are $O(n^{-1/2})$ at initialization but receive $O(1)$ increments at $t=0$, while the input weights are $O(1)$ at initialization but receive $O(n^{-1/2})$ increments at $t=0$. Let us write the NTK under NTK parameterization: \begin{multline} \hat\Theta_t(x,x') = \sum_{i=1}^n \left(\partial_{a_i} f(x) \partial_{a_i} f(x') + \partial_{w_i} f(x) \partial_{w_i} f(x')\right) =\\= \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(t) x) \phi(w_i(t) x') + a_i^2(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x'\right). \end{multline} Since $a_k(t) \to a_k(0)$ and $w_k(t) \to w_k(0)$ as $n \to \infty$ for any fixed $k \in \mathbb{N}$ and $t \in \mathbb{R}_+$, the above expression is asymptotically equivalent to \begin{equation} \hat\Theta_0(x,x') = \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right), \end{equation} which converges (almost surely) to \begin{equation} \Theta(x,x') = \mathbb{E}\,_{a,w \sim \mathcal{N}(0,1)} \left(\phi(w x) \phi(w x') + a^2 \phi'(w x) \phi'(w x') x x'\right) \end{equation} as $n \to \infty$ due to the (strong) Law of Large Numbers. The limit kernel $\Theta(x,x')$ depends neither on the timestep $t$ nor on the initialization.
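The Law of Large Numbers step can be visualized with a few lines of Python. For the toy choice $\phi(z)=z$ (hypothetical, picked so that the limit is available in closed form), the limit kernel is $\Theta(x,x')=\mathbb{E}[w^2]xx'+\mathbb{E}[a^2]xx'=2xx'$, and the empirical NTK at initialization concentrates around it as $n$ grows:

```python
# Monte Carlo check that the initial empirical NTK of the one-hidden-layer net
# concentrates around the limit kernel for the toy activation phi(z) = z,
# where Theta(x, x') = 2 x x'.
import random

def empirical_ntk(n, x, xp, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a, w = rng.gauss(0, 1), rng.gauss(0, 1)
        # phi(w x) phi(w x') + a^2 phi'(w x) phi'(w x') x x', with phi = id
        total += (w * x) * (w * xp) + a * a * x * xp
    return total / n

x, xp = 0.7, -1.2
limit = 2 * x * xp                        # the n -> infinity limit kernel
```

The deviation of \texttt{empirical\_ntk(n, x, xp)} from \texttt{limit} is $O(n^{-1/2})$, as expected from a sample average of $n$ iid terms.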
This kernel is typically referred to as the NTK, in contrast to the empirical NTK $\hat\Theta_t$. Since under standard parameterization the weights receive increments asymptotically at least comparable to initialization, one cannot expect that the empirical NTK stops evolving as $n \to \infty$ in this setting. Moreover, the initial empirical NTK diverges with width: \begin{multline} \hat\Theta_0(x,x') = \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right) \sim\\\sim n \times \mathbb{E}\,_{w \sim \mathcal{N}(0,1)} \phi(w x) \phi(w x'). \end{multline} The above kernel convergence result holds in more general settings. Consider a fully-connected network with $L$ layers under NTK parameterization: \begin{equation} f(x) = h_L(x), \quad h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l x_{l-1}(x), \quad x_{l-1}(x) = \phi(h_{l-1}(x)), \quad x_0(x) = x, \end{equation} where $W_1 \in \mathbb{R}^{n_1 \times n_0}$, $W_L \in \mathbb{R}^{1 \times n_{L-1}}$, and $W_l \in \mathbb{R}^{n_l \times n_{l-1}}$ for all other $l$. Here all weights are initialized with independent standard Gaussians. Suppose we aim to optimize a generic differentiable loss $\ell$ instead of the quadratic one: \begin{equation} \dot\theta_t = -\nabla_\theta\left(\sum_{j=1}^m \ell(y_j, f(x_j; \theta_t))\right) = -\sum_{j=1}^m \left.\frac{\partial \ell(y_j, z)}{\partial z}\right|_{z=f(x_j; \theta_t)} \nabla_\theta f(x_j; \theta_t), \end{equation} where $\theta$ is now the concatenation of all weights $W_{1:L}$. The seminal work of \cite{jacot2018neural} proves the following: \begin{theorem}[\cite{jacot2018neural}] Under the conditions above, for $\phi$ being $C^2$ and Lipschitz and $\ell$ being $C^1$ and Lipschitz, $\hat\Theta_t(x,x') \to \Theta(x,x')$ in probability as $n_{1:L-1} \to \infty$ sequentially $\forall x,x' \in \mathcal{X}$ $\forall t \geq 0$. \end{theorem} In fact, the theorem above can be generalized far beyond fully-connected nets with smooth activation functions.
Define a tensor program as a set of initial variables of certain types and a sequence of operations. Each of the operations generates a new variable by acting on previously generated ones. The variable types are \begin{enumerate} \item $\mathsf{A}$: $n \times n$ matrices with iid $\mathcal{N}(0,1)$ entries; \item $\mathsf{G}$: vectors of size $n$ with asymptotically iid Gaussian entries; \item $\mathsf{H}$: images of $\mathsf{G}$-vars by coordinatewise nonlinearities. \end{enumerate} The operations are \begin{enumerate} \item $\mathrm{Trsp}$: $W: \mathsf{A} \to W^\top: \mathsf{A}$; \item $\mathrm{MatMul}$: $(W: \mathsf{A}, \; x: \mathsf{H}) \to \frac{1}{\sqrt{n}} W x: \mathsf{G}$; \item {$\mathrm{LinComb}$: $(\{x_i: \mathsf{G}, \; a_i \in \mathbb{R}\}_{i=1}^k) \to \sum_{i=1}^k a_i x_i: \mathsf{G}$;} \item $\mathrm{Nonlin}$: $(\{x_i: \mathsf{G}\}_{i=1}^k, \; \phi: \mathbb{R}^k \to \mathbb{R}) \to \phi(x_{1:k}): \mathsf{H}$. \end{enumerate} The set of initial variables consists of variables of $\mathsf{A}$-type and $\mathsf{G}$-type. As for input $\mathsf{G}$-vars, we sample $\{x_\alpha: \text{$x$ is an input G-var}\} \sim \mathcal{N}(\mu^{in}, \Sigma^{in})$ $\forall \alpha \in [n]$. The above formalism allows one to express forward and backward passes of a very wide class of neural nets (including RNNs, ResNets, and Transformers). Since none of the operations above generates new $\mathsf{A}$-vars (new weights), the whole gradient descent training process can be expressed as a single tensor program by backtracking the gradient steps. The real power of tensor programs comes from the following theorem: \begin{theorem}["Master theorem", \cite{yang2020tensor_iii}] \label{thm:master_theorem} Consider a tensor program with $M$ $\mathsf{G}$-vars, under the above assumptions. Suppose all the nonlinearities $\phi$ and a function $\psi: \, \mathbb{R}^M \to \mathbb{R}$ are polynomially bounded.
Then the following holds: \begin{equation} \frac{1}{n} \sum_{\alpha=1}^n \psi(g^1_\alpha,\ldots,g^M_\alpha) \to \mathbb{E}\,_{Z \sim \mathcal{N}(\mu,\Sigma)} \psi(Z) \end{equation} a.s. as $n \to \infty$, where $\mu$ and $\Sigma$ can be computed using certain recurrent rules. \end{theorem} It is possible to define the empirical NTK of a tensor program and express it in the form $\frac{1}{n} \sum_{\alpha=1}^n \psi(g^1_\alpha,\ldots,g^M_\alpha)$ for a certain function $\psi$. Then the kernel converges by virtue of the above theorem. See \cite{yang2020tensor_ii} for the proof of initial kernel convergence and \cite{yang2021tensor_iib} for the proof of kernel convergence for any timestep. As an illustration, recall the two-layered net considered at the beginning of the present section. Its empirical NTK is given by \begin{equation} \hat\Theta_0(x,x') = \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right). \end{equation} Here $\mathsf{G}$-vars are $g^1 = w(0) x$, $g^2 = w(0) x'$, $g^3 = a(0) x$, $g^4 = a(0) x'$. Taking $\psi(g^1_\alpha,\ldots,g^4_\alpha) = \phi(g^1_\alpha) \phi(g^2_\alpha) + \phi'(g^1_\alpha) \phi'(g^2_\alpha) g^3_\alpha g^4_\alpha$ allows for explicit application of Theorem \ref{thm:master_theorem}. \section{Finite-width corrections} \label{sec:finite_width} While the results discussed in \cref{sec:convergence} hold in the limit of infinite width, they are not directly applicable to real-life finite-width nets for obvious reasons. This motivates one to introduce finite-width corrections for the limit NTK. First, define a higher-order kernel: \begin{equation} O_{s,t}(x_{1:s}) = \nabla^T_\theta O_{s-1,t}(x_{1:s-1}) \nabla_\theta f_t(x_s). \end{equation} Put $O_{1,t}(x_1) = f_t(x_1)$; this gives $O_{2,t}(x_1,x_2) = \hat\Theta_t(x_1,x_2)$. Consider a gradient flow optimization process under square loss: \begin{equation} \dot\theta_t = \sum_{j=1}^m (y_j - f_t(x_j)) \nabla_\theta f_t(x_j). 
\end{equation} Under this process, the $s$-th order kernel evolves as \begin{equation} \dot O_{s,t}(x_{1:s}) = \nabla^T_\theta O_{s,t}(x_{1:s}) \dot\theta = O_{s+1,t}(x_{1:s},\vec x) (\vec y - f_t(\vec x)). \end{equation} This gives an infinite system of ODEs governing the evolution of the kernels. If our goal is to obtain a solution only up to order $n^{-1}$, can we truncate this initially infinite system, and how many equations should we keep? In order to answer these questions, let us estimate the order of growth of $O_{s,t}$. Following \cite{Dyer2020Asymptotics}, we start with the definition of a correlation function. Let us fix $t=0$ and omit the corresponding subscript for now. Define a rank-$k$ derivative tensor $T_{\mu_1 \ldots \mu_k}$ as follows: \begin{equation} T_{\mu_1 \ldots \mu_k}(x; f) = \frac{\partial^k f(x)}{\partial \theta^{\mu_1} \ldots \partial \theta^{\mu_k}}. \end{equation} For $k=0$ we define $T(x; f) = f(x)$. We are now ready to define a correlation function $C$: \begin{equation} C(x_1,\ldots,x_m) = \sum_{\mu_1,\ldots,\mu_{k_m}} \Delta_{\mu_1 \ldots \mu_{k_m}}^{(\pi)} \mathbb{E}\,_\theta \left( T_{\mu_1 \ldots \mu_{k_1}}(x_1) T_{\mu_{k_1+1} \ldots \mu_{k_2}}(x_2) \ldots T_{\mu_{k_{m-1}+1} \ldots \mu_{k_m}}(x_m) \right). \end{equation} Here $0 \leq k_1 \leq \ldots \leq k_m$, $k_m$ and $m$ are even, $\pi \in S_{k_m}$ is a permutation, and $\Delta_{\mu_1 \ldots \mu_{k_m}}^{(\pi)} = \delta_{\mu_{\pi(1)} \mu_{\pi(2)}} \ldots \delta_{\mu_{\pi(k_m-1)} \mu_{\pi(k_m)}}$.
For example, \begin{multline} \mathbb{E}\,_\theta (f(x) \nabla^T_\theta f(x) \nabla_\theta \nabla^T_\theta f(x_1) \nabla_\theta f(x_2)) = \sum_{\mu,\nu} \mathbb{E}\,_\theta (f(x) \partial_\mu f(x) \partial^2_{\mu,\nu} f(x_1) \partial_\nu f(x_2)) =\\= \sum_{\mu_1,\mu_2,\mu_3,\mu_4} \delta_{\mu_1 \mu_2} \delta_{\mu_3 \mu_4} \mathbb{E}\,_\theta (f(x) \partial_{\mu_1} f(x) \partial^2_{\mu_2,\mu_3} f(x_1) \partial_{\mu_4} f(x_2)) = C(x,x,x_1,x_2) \label{eq:dtheta_dt_as_corr_f} \end{multline} is a correlation function with $m=4$, $k_1=0$, $k_2=1$, $k_3=3$, $k_4=4$, and $\pi(j) = j$. If a Kronecker delta in $\Delta^{(\pi)}$ pairs an index of one derivative tensor with an index of another, we say that these tensors are contracted. Formally, we say that $T_{\mu_{k_{i-1}+1} \ldots \mu_{k_i}}(x_i)$ is contracted with $T_{\mu_{k_{j-1}+1} \ldots \mu_{k_j}}(x_j)$ for $1 \leq i,j \leq m$ if there exists an even $s \leq k_m$ such that $k_{i-1} < \pi(s-1) \leq k_i$, while $k_{j-1} < \pi(s) \leq k_j$, or vice versa. Define the cluster graph $G_C(V,E)$ as an undirected unweighted graph with vertices $V = \{v_1, \ldots, v_m\}$ and edges $E = \{(v_i,v_j) \, | \, \text{$T(x_i)$ and $T(x_j)$ are contracted in $C$}\}$. Let $n_e$ be the number of even-sized connected components of $G_C(V,E)$ and $n_o$ be the number of odd-sized components. We are going to use the following conjecture, which has been proven in certain scenarios: \begin{conjecture}[\cite{Dyer2020Asymptotics}] \label{conj:C_asymptotics} If $m$ is even, $C(x_1,\ldots,x_m) = O_{n\to\infty}(n^{s_C})$, where $s_C = n_e + n_o / 2 - m / 2$. If $m$ is odd, $C(x_1,\ldots,x_m) = 0$. \end{conjecture} We are also going to use the following lemma: \begin{lemma}[\cite{Dyer2020Asymptotics}] \label{lemma:derivative_asymptotics} Suppose \cref{conj:C_asymptotics} holds. Let $C(\vec x) = \mathbb{E}\,_\theta F(\vec x; \theta)$ be a correlation function and suppose $C(\vec x) = O(n^{s_C})$ for $s_C$ defined in \cref{conj:C_asymptotics}.
Then $\mathbb{E}\,_\theta d^k F(\vec x; \theta) / dt^k = O(n^{s_C})$ $\forall k \geq 1$. \end{lemma} \begin{proof} Consider the first derivative: \begin{multline} \mathbb{E}\,_\theta \frac{dF(\vec x)}{dt} = \mathbb{E}\,_\theta (\dot\theta^T \nabla_\theta F(\vec x)) = \mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (\eta (y - f(x)) \nabla^T_\theta f(x) \nabla_\theta F(\vec x)) =\\= \eta \mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (y \nabla^T_\theta f(x) \nabla_\theta F(\vec x)) - \eta \mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (f(x) \nabla^T_\theta f(x) \nabla_\theta F(\vec x)). \end{multline} Both terms are linear combinations of correlation functions. By \cref{conj:C_asymptotics}, the first term vanishes, since it involves an odd number $m+1$ of derivative tensors, while the second one has $m' = m+2$ derivative tensors, $n_e'$ even clusters, and $n_o'$ odd clusters. If $\nabla_\theta f(x)$ is contracted with an even cluster of $C$, we have $n_e' = n_e - 1$, $n_o' = n_o + 2$. In contrast, if $\nabla_\theta f(x)$ is contracted with an odd cluster of $C$, we have $n_e' = n_e + 1$, $n_o' = n_o$. In the first case, we have $s_C' = n_e' + n_o'/2 - m'/2 = s_C - 1$, while for the second $s_C' = s_C$. In any case, the result is a linear combination of correlation functions with $s_C' \leq s_C$ each; iterating this argument for higher derivatives proves the claim for all $k$. \end{proof} Let us restore the $t$-subscript. Since $O_s$ has $s$ derivative tensors and a single cluster, by virtue of \cref{conj:C_asymptotics}, $\mathbb{E}\,_\theta O_{s,0} = O(n^{1 - s/2})$ for even $s$ and $\mathbb{E}\,_\theta O_{s,0} = 0$ for odd $s$. At the same time, $\mathbb{E}\,_\theta \dot O_{s,0} = O(n^{1 - (s+2)/2}) = O(n^{-s/2})$ for even $s$ and $\mathbb{E}\,_\theta \dot O_{s,0} = O(n^{1 - (s+1)/2}) = O(n^{1/2 - s/2})$ for odd $s$. As for the second moments, we have $\mathbb{E}\,_\theta (O_{s,0})^2 = O(n^{2 - s})$ for even $s$ and $\mathbb{E}\,_\theta (O_{s,0})^2 = O(n^{1 - s})$ for odd $s$.
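The exponent $s_C$ in \cref{conj:C_asymptotics} is purely combinatorial, so it can be computed mechanically from the data $(k_{1:m}, \pi)$. The helper below is our own sketch (not from \cite{Dyer2020Asymptotics}); it reproduces $s_C = -1$ for the example \eqref{eq:dtheta_dt_as_corr_f} and $s_C = 0$ for the NTK itself ($m=2$, one contraction), matching the orders derived above.

```python
def cluster_exponent(ks, pi):
    """Compute s_C = n_e + n_o/2 - m/2 from the cluster graph.

    ks: cumulative index counts [k_1, ..., k_m]; pi: permutation of
    {1, ..., k_m} as a list, pairing indices (pi[0], pi[1]), (pi[2], pi[3]), ...
    """
    m = len(ks)
    bounds = [0] + list(ks)

    def tensor_of(idx):  # which derivative tensor owns index `idx`
        return next(i for i in range(m) if bounds[i] < idx <= bounds[i + 1])

    # adjacency from contractions: each Kronecker delta pairs two indices
    adj = {v: set() for v in range(m)}
    for s in range(0, len(pi), 2):
        i, j = tensor_of(pi[s]), tensor_of(pi[s + 1])
        adj[i].add(j)
        adj[j].add(i)

    # count even-/odd-sized connected components by DFS
    seen, n_even, n_odd = set(), 0, 0
    for v in range(m):
        if v in seen:
            continue
        stack, size = [v], 0
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                size += 1
                stack.extend(adj[u])
        if size % 2 == 0:
            n_even += 1
        else:
            n_odd += 1
    return n_even + n_odd / 2 - m / 2

# Example from the text: m = 4, (k_1,...,k_4) = (0,1,3,4), pi = identity
print(cluster_exponent([0, 1, 3, 4], [1, 2, 3, 4]))  # -1.0, i.e. C = O(1/n)
```

For the example, the contractions produce the edges $(v_2,v_3)$ and $(v_3,v_4)$, leaving two odd components $\{v_1\}$ and $\{v_2,v_3,v_4\}$, hence $s_C = 0 + 1 - 2 = -1$.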
Similarly, we have $\mathbb{E}\,_\theta (\dot O_{s,0})^2 = O(n^{2/2 - (2s+2)/2}) = O(n^{-s})$ for even $s$ and $\mathbb{E}\,_\theta (\dot O_{s,0})^2 = O(n^{2 - (2s+2)/2}) = O(n^{1 - s})$ for odd $s$. The asymptotics of the first two moments imply the asymptotics of the random variables themselves: \begin{equation} O_{s,0}(x_{1:s}) = \begin{cases} O(n^{1 - s/2}) &\text{for even $s$;} \\ O(n^{1/2 - s/2}) &\text{for odd $s$;} \end{cases} \qquad \dot O_{s,0}(x_{1:s}) = \begin{cases} O(n^{-s/2}) &\text{for even $s$;} \\ O(n^{1/2 - s/2}) &\text{for odd $s$.} \end{cases} \end{equation} \cref{lemma:derivative_asymptotics} gives $\forall k \geq 1$: \begin{equation} \left.\frac{d^k O_{s,t}}{dt^k}(x_{1:s})\right|_{t=0} = \begin{cases} O(n^{-s/2}) &\text{for even $s$;} \\ O(n^{1/2 - s/2}) &\text{for odd $s$.} \end{cases} \end{equation} Then given an analytic activation function, we have $\forall t \geq 0$: \begin{equation} \dot O_{s,t}(x_{1:s}) = \sum_{k=1}^\infty \left.\frac{d^k O_{s,t}}{dt^k}(x_{1:s})\right|_{t=0} \frac{t^{k-1}}{(k-1)!} = \begin{cases} O(n^{-s/2}) &\text{for even $s$;} \\ O(n^{1/2 - s/2}) &\text{for odd $s$.} \end{cases} \end{equation} This allows us to write a finite system of ODEs for the model evolution up to $O(n^{-1})$ terms: \begin{equation} \dot f_{t}(x_1) = O_{2,t}(x_1, \vec x) (\vec y - f_t(\vec x)), \qquad f_0(x_1) = f(x_1; \theta), \quad \theta \sim \mathcal{N}(0, I), \end{equation} \begin{equation} \dot O_{2,t}(x_1, x_2) = O_{3,t}(x_1, x_2, \vec x) (\vec y - f_t(\vec x)), \qquad O_{2,0}(x_1, x_2) = \nabla_\theta^T f_0(x_1) \nabla_\theta f_0(x_2), \end{equation} \begin{equation} \dot O_{3,t}(x_1, x_2, x_3) = O_{4,t}(x_1, x_2, x_3, \vec x) (\vec y - f_t(\vec x)), \qquad O_{3,0}(x_1, x_2, x_3) = \nabla_\theta^T O_{2,0}(x_1, x_2) \nabla_\theta f_0(x_3), \end{equation} \begin{equation} \dot O_{4,t}(x_1, x_2, x_3, x_4) = O(n^{-2}), \qquad O_{4,0}(x_1, x_2, x_3, x_4) = \nabla_\theta^T O_{3,0}(x_1, x_2, x_3) \nabla_\theta f_0(x_4).
\end{equation} Let us expand all the quantities in powers of $n^{-1}$: \begin{equation} O_{s,t}(x_{1:s}) = O_{s,t}^{(0)}(x_{1:s}) + n^{-1} O_{s,t}^{(1)}(x_{1:s}) + O(n^{-2}), \end{equation} where $O_{s,t}^{(k)}(x_{1:s}) = \Theta_{n\to\infty}(1)$. Then the system above transforms into the following: \begin{equation} \dot f_{t}^{(0)}(x_1) = O_{2,t}^{(0)}(x_1, \vec x) (\vec y - f_t^{(0)}(\vec x)), \end{equation} \begin{equation} \dot f_{t}^{(1)}(x_1) = O_{2,t}^{(1)}(x_1, \vec x) (\vec y - f_t^{(0)}(\vec x)) - O_{2,t}^{(0)}(x_1, \vec x) f_t^{(1)}(\vec x), \end{equation} \begin{equation} O_{2,t}^{(0)}(x_1, x_2) = \nabla_\theta^T f_0^{(0)}(x_1) \nabla_\theta f_0^{(0)}(x_2), \end{equation} \begin{equation} \dot O_{2,t}^{(1)}(x_1, x_2) = O_{3,t}^{(1)}(x_1, x_2, \vec x) (\vec y - f_t^{(0)}(\vec x)), \end{equation} \begin{equation} \dot O_{3,t}^{(1)}(x_1, x_2, x_3) = O_{4,t}^{(1)}(x_1, x_2, x_3, \vec x) (\vec y - f_t^{(0)}(\vec x)), \end{equation} \begin{equation} O_{4,t}^{(1)}(x_1, x_2, x_3, x_4) = \nabla_\theta^T O_{3,0}^{(0)}(x_1, x_2, x_3) \nabla_\theta f_0^{(0)}(x_4), \end{equation} where we have ignored the initial conditions for the time being. Integrating this system is straightforward: \begin{equation} f_{t}^{(0)}(\vec x) = \vec y + e^{-O_{2,0}^{(0)}(\vec x, \vec x) t} (f_0^{(0)}(\vec x) - \vec y). \end{equation} For brevity, let us introduce the following definition: \begin{equation} \Delta f_t^{(0)}(\vec x) = e^{-O_{2,0}^{(0)}(\vec x, \vec x) t} (f_0^{(0)}(\vec x) - \vec y). \end{equation} This gives: \begin{equation} O_{3,t}^{(1)}(x_1, x_2, x_3) = O_{3,0}^{(1)}(x_1, x_2, x_3) - \int_{0}^t O_{4,0}^{(1)}(x_1, x_2, x_3, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt'. \end{equation} \begin{multline} O_{2,t}^{(1)}(x_1, x_2) = O_{2,0}^{(1)}(x_1, x_2) - \int_{0}^t O_{3,0}^{(1)}(x_1, x_2, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt' +\\+ \int_{0}^{t} \int_{0}^{t''} \Delta f_{t''}^{(0),T}(\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \Delta f_{t'}^{(0)}(\vec x') \, dt' \, dt''.
\end{multline} Let us evaluate the terms: \begin{equation} \int_{0}^t O_{3,0}^{(1)}(x_1, x_2, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt' = O_{3,0}^{(1)}(x_1, x_2, \vec x) \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t}\right) (f_0^{(0)}(\vec x) - \vec y). \end{equation} \begin{multline} \int_{0}^{t} \int_{0}^{t''} \Delta f_{t''}^{(0),T}(\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \Delta f_{t'}^{(0)}(\vec x') \, dt' \, dt'' =\\= \int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'}\right) \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt' =\\= (f_0^{(0)}(\vec x) - \vec y)^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t}\right) (f_0^{(0)}(\vec x) - \vec y) -\\- \int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt'. \end{multline} Consider the eigenvalue-eigenvector decomposition of $O_{2,0}^{(0)}(\vec x, \vec x)$: $O_{2,0}^{(0)}(\vec x, \vec x) = \sum_{k=1}^m \lambda_k v_k v_k^T$.
This helps us integrate the last term: \begin{multline} \int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt' =\\= \sum_{k,l=1}^m \int_{0}^{t} e^{-(\lambda_k+\lambda_l) t'} (f_0^{(0)}(\vec x) - \vec y)^T v_k v_k^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') v_l v_l^T (f_0^{(0)}(\vec x) - \vec y) \, dt' =\\= \sum_{k,l=1}^m \frac{1}{\lambda_k+\lambda_l} \left(1 - e^{-(\lambda_k+\lambda_l) t}\right) (f_0^{(0)}(\vec x) - \vec y)^T v_k v_k^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') v_l v_l^T (f_0^{(0)}(\vec x) - \vec y). \end{multline} Recall $\hat\Theta_t(x_1,x_2) = O_{2,t}(x_1,x_2) = O_{2,t}^{(0)}(x_1,x_2) + n^{-1} O_{2,t}^{(1)}(x_1,x_2) + O(n^{-2})$. The first term (the limit NTK) does not depend on $t$, $O_{2,t}^{(0)}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) = \Theta(x_1,x_2)$, while the second one (the correction) does. Note that computing the second term invokes $O_{4,0}^{(1)}$, the fourth-order tensor; therefore approaching it directly requires $O(m^4)$ memory. Integrating the above system further gives the first-order correction to the limit model, $f_t^{(1)}$. As we shall see in \cref{sec:beyond}, the kernel $\Theta^{NTH}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2)$ can be considered as a label-aware alternative to the usual NTK $\Theta(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2)$.
Let us write its explicit definition and refer to it later in \cref{sec:beyond}: \begin{multline} \Theta^{NTH}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2) =\\= \Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] - n^{-1} \mathbb{E}\,\left[O_{3,0}^{(1)}(x_1, x_2, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right] +\\+ n^{-1} \vec y^T \Theta^{-1}(\vec x,\vec x) \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \Theta^{-1}(\vec x,\vec x) \vec y +\\+ n^{-1} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \Theta^{-1}(\vec x,\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right] -\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y -\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \vec v_k \vec v_k^T O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \vec v_l \vec v_l^T f_0^{(0)}(\vec x)\right]. \label{eq:lantk_nth} \end{multline} While the above result is valid under a conjecture, \cref{conj:C_asymptotics}, it can be proven rigorously; see \cite{huang2019dynamics}. \section{Computing the limit kernel} \label{sec:limit} It is not obvious how to compute the limit kernel $\Theta$ predicted by the theorems discussed in \cref{sec:convergence}. Fortunately, one can compute the limit kernel exactly for certain classes of models. \subsection{Fully-connected nets} \label{sec:limit_fc_nets} Consider an $L$-layer fully-connected network under NTK parameterization: \begin{equation} f(x) = h_L(x), \quad h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l x_{l-1}(x), \quad x_{l-1}(x) = \phi(h_{l-1}(x)), \quad x_0(x) = x, \end{equation} where $W_l \in \mathbb{R}^{n_l \times n_{l-1}}$ $\forall l \in [L]$. For simplicity, we assume $n_L=1$, i.e. the output is scalar.
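As a quick numerical sanity check of this parameterization, pre-activations indeed stay $O(1)$ as the widths grow. The sketch below (ReLU activation and the concrete widths are our own choices) compares the sample variance of the second-layer pre-activations against the recursion $q_2 = \mathbb{E}_{z \sim \mathcal{N}(0, q_1)} \phi^2(z)$, which equals $q_1/2$ for ReLU; with the $1/\sqrt{n_0}$ factor included, each $h_1^i$ has variance $\|x\|^2/n_0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1, n2 = 10, 4_000, 1_000          # widths; our choice for illustration
x = rng.standard_normal(n0)

relu = lambda z: np.maximum(z, 0.0)
W1 = rng.standard_normal((n1, n0))
W2 = rng.standard_normal((n2, n1))

h1 = W1 @ x / np.sqrt(n0)              # h_1 = W_1 x / sqrt(n_0)
h2 = W2 @ relu(h1) / np.sqrt(n1)       # h_2 = W_2 phi(h_1) / sqrt(n_1)

q1 = x @ x / n0                        # exact variance of each h_1 component
q2 = q1 / 2                            # E_{z ~ N(0, q1)} relu(z)^2 = q1 / 2
print(np.var(h1), q1)                  # close for large n_1
print(np.var(h2), q2)                  # close for large n_1 and n_2
```

Both printed pairs agree up to the Monte Carlo error of the sample variances, which shrinks as the corresponding width grows.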
Since we already know (see \cref{sec:convergence}) that the kernel does not depend on $t$ under NTK parameterization, we consider the case $t=0$ only and omit the $t$-subscript. The empirical NTK is given by \begin{equation} \hat\Theta(x,x') = \nabla^T_\theta f(x;\theta) \nabla_\theta f(x';\theta) = \sum_{l=1}^L \tr\left(\nabla^T_{W_l} f(x;W_{1:L}) \nabla_{W_l} f(x;W_{1:L})\right). \end{equation} By chain rule, \begin{equation} \nabla_{W_l} f(x) = \sum_{i=1}^{n_l} \partial_{h_l^i} f(x) \nabla_{W_l} h_l^i(x) = \frac{1}{\sqrt{n_{l-1}}} \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \partial_{h_l^i} f(x) E_{ij} x_{l-1}^j(x) = \frac{1}{\sqrt{n_{l-1}}} \nabla_{h_l} f(x) x_{l-1}^T(x). \end{equation} Therefore, \begin{equation} \hat\Theta(x,x') = \sum_{l=1}^L \tr\left(\nabla^T_{W_l} f(x) \nabla_{W_l} f(x)\right) = \sum_{l=1}^L \frac{1}{n_{l-1}} \left(\nabla^T_{h_l} f(x') \nabla_{h_l} f(x)\right) \times \left(x_{l-1}^T(x) x_{l-1}(x')\right). \end{equation} If $x_{l-1}$ had iid components with zero mean, $\frac{1}{n_{l-1}} x_{l-1}^T(x) x_{l-1}(x')$ would be an empirical covariance estimated with $n_{l-1}$ samples. In fact, when all weights are iid standard Gaussians, components of $h_{l-1}$ become iid Gaussian with zero mean as $n_{1:l-2} \to \infty$ sequentially. Hence their images under elementwise maps $\phi$ are also iid. Proof by induction. $h_1(x) = \frac{1}{\sqrt{n_0}} W_1 x$ has iid Gaussian components with zero mean and variance $q_1(x) = x^T x$. Suppose components of $h_{l-1}(x)$ become iid Gaussian with zero mean and $q_{l-1}(x)$ variance as $n_{1:l-2} \to \infty$ sequentially. Then $h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l \phi(h_{l-1}(x))$ converges (in distribution) to a vector of Gaussians with zero mean and variance $q_l(x) = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_{l-1}(x))} \phi^2(z)$ as $n_{1:l-1} \to \infty$ sequentially by the Central Limit Theorem (CLT). One can easily generalize the above proof to any finite set of inputs. 
In particular, $[h_l^i(x),h_l^i(x')]^T$ converges to a Gaussian with zero mean and covariance $\Sigma_l(x,x') = \begin{pmatrix} q_l(x) & q_l(x,x')\\q_l(x,x') & q_l(x') \end{pmatrix}$, where $q_l(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z')$. Hence as $n_{1:l-2} \to \infty$ sequentially, $\frac{1}{n_{l-1}} x_{l-1}^T(x) x_{l-1}(x')$ converges to $q_l(x,x')$. Let $g_l(x) = \sqrt{n_l} \nabla_{h_l} f(x)$. Since \begin{equation} \nabla_{h_l^j} f(x) = \sum_{i=1}^{n_{l+1}} \nabla_{h_{l+1}^i} f(x) \nabla_{h_l^j} h_{l+1}^i(x) = \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \nabla_{h_{l+1}^i} f(x) W_{l+1}^{ij} \phi'(h_l^j(x)), \end{equation} we have $g_l(x) = \frac{1}{\sqrt{n_{l+1}}} D_l(x) W_{l+1}^T g_{l+1}(x)$, where $D_l(x) = \diag(\phi'(h_l(x)))$. There are two obstacles that prevent us from following the same lines for $g_l$ as for $h_l$. First, $g_{l+1}$ depends on $D_{l+1}$, which depends on $h_{l+1}$, which in turn depends on $W_{l+1}$. Since $W_{l+1}$ and $g_{l+1}$ are dependent, we cannot guarantee that components of $g_l$ become iid. Second, we know the distribution of $h_l$ as all the layers from the input side become infinitely wide sequentially, while the induction for $g_l$ should be performed starting from the head. Nevertheless, it can be proven rigorously that ignoring these two obstacles still leads to a correct result \cite{yang2020tensor_ii}: $g_l(x)$ converges to a vector of iid Gaussians with zero mean and variance $\dot q_l(x) = \dot q_{l+1}(x) \mathbb{E}\,_{z \sim \mathcal{N}(0,q_l(x))} (\phi')^2(z)$ as $n_{1:L-1} \to \infty$. A similar result holds for a pair of inputs: $[g_l^i(x),g_l^i(x')]^T$ converges to a Gaussian with zero mean and covariance $\dot\Sigma_l(x,x') = \begin{pmatrix} \dot q_l(x) & \dot q_l(x,x')\\\dot q_l(x,x') & \dot q_l(x') \end{pmatrix}$, where $\dot q_l(x,x') = \dot q_{l+1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z) \phi'(z')$.
Hence $\nabla^T_{h_l} f(x') \nabla_{h_l} f(x) = \frac{1}{n_l} g_l^T(x') g_l(x)$ converges to $\dot q_l(x,x')$. Putting it all together, $\hat\Theta(x,x')$ converges to $\Theta(x,x') = \sum_{l=1}^L \dot q_l(x,x') q_l(x,x')$, where \begin{equation} q_1(x,x') = x^T x', \quad q_l(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z'), \end{equation} \begin{equation} \dot q_L(x,x') = 1, \quad \dot q_l(x,x') = \dot q_{l+1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z) \phi'(z'), \end{equation} and $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x')\\q_l(x,x') & q_l(x',x') \end{pmatrix}$. Note that the Master theorem of \cite{yang2020tensor_ii} gives similar recurrent formulas for the NTK of any architecture expressible by a tensor program and makes them mathematically rigorous. In fact, computing the NTK can be performed in a convenient sequential layer-wise manner, as implemented in Neural Tangents\footnote{\url{https://github.com/google/neural-tangents}} \cite{novak2019neural}. Define the NTK for the first $l$ layers as $\Theta_{:l}(x,x') = \sum_{l'=1}^l \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x'))$; in this case $\Theta_{:L}(x,x') = \Theta(x,x')$. Suppose $\Theta_{:l-1}(x,x')$ and $q_{l-1}(x,x')$ are already computed. Adding a nonlinearity and a linear layer with weights $W_l$ gives $q_l$ as listed above: \begin{equation} q_l(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z'), \quad \text{where $\Sigma_{l-1}(x,x') = \begin{pmatrix} q_{l-1}(x,x) & q_{l-1}(x,x')\\q_{l-1}(x,x') & q_{l-1}(x',x') \end{pmatrix}$.} \label{eq:q_iteration} \end{equation} However, according to the formula above, $\dot q_l$ is computed using $\dot q_{l+1}$, which would require a sequential layer-wise ``forward pass'' to compute all $q_l$ and a ``backward pass'' to compute $\dot q_l$.
In fact, one forward pass is enough: \begin{multline} \Theta_{:l}(x,x') = \sum_{l'=1}^l \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x')) = q_l(x,x') + \sum_{l'=1}^{l-1} \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x')) =\\= q_l(x,x') + \Theta_{:l-1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi'(z) \phi'(z'). \label{eq:Theta_iteration} \end{multline} In Neural Tangents, each operation in a neural network is mapped to a corresponding kernel transform. \subsection{Convolutional nets} The same idea applies to convolutional nets as well. Consider 1d-convolutions for simplicity. In this case, we are dealing with 1d ``images'' with $d$ pixels: $x \in \mathbb{R}^{n_0 \times d}$. Consider a network with $L$ convolutions under NTK parameterization and an average pooling at the end: \begin{equation} f^i = \frac{1}{d} \sum_{s=1}^d x_L^{i,s}, \quad h_l^{i,s} = \frac{1}{\sqrt{n_{l-1}}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} W_l^{ijr} x_{l-1}^{j,s+r}, \quad x_{l-1}^{i,s} = \phi(h_{l-1}^{i,s}), \quad x_0^{i,s} = x^{i,s}, \end{equation} where we omitted the argument $x$ for brevity, $W_l \in \mathbb{R}^{n_l \times n_{l-1} \times |\ker|}$ with $W_l^{ijr} \sim \mathcal{N}(0,1)$ iid $\forall l \in [L]$, and $\ker$ denotes the set of filter offsets; e.g. $\ker = [-1,0,1]$ for a convolution of size $3$. For simplicity, we assume $n_L=1$, i.e. the output is scalar. As before, the empirical NTK is given as \begin{equation} \hat\Theta(x,x') = \nabla^T_\theta f(x;\theta) \nabla_\theta f(x';\theta) = \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \partial_{W_l^{ijr}} f(x) \partial_{W_l^{ijr}} f(x'). \end{equation} By chain rule, \begin{equation} \partial_{W_l^{ijr}} f = \sum_{s=1}^d \partial_{h_l^{i,s}} f \partial_{W_l^{ijr}} h_l^{i,s} = \frac{1}{\sqrt{n_{l-1}}} \sum_{s=1}^{d} \partial_{h_l^{i,s}} f x_{l-1}^{j,s+r}.
\end{equation} Therefore, \begin{equation} \hat\Theta(x,x') = \sum_{l=1}^L \frac{1}{n_{l-1}} \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \sum_{s,s'=1}^{d} \partial_{h_l^{i,s}} f(x) \partial_{h_l^{i,s'}} f(x') x_{l-1}^{j,s+r}(x) x_{l-1}^{j,s'+r}(x'). \end{equation} As in the fully-connected case, we are going to prove that $h_l^{i,s}$ become Gaussian with zero mean and variance given by a certain recurrent formula as $n_{1:l-1} \to \infty$ sequentially. However, in the convolutional case, not all $h_l^{i,s}$ become independent: they become independent for different $i$'s but not for different $s$'s. Let us induct on $l$. $h_1^{i,s} = \frac{1}{\sqrt{n_0}} \sum_{j=1}^{n_0} \sum_{r \in \ker} W_1^{ijr} x^{j,s+r}$ are independent for any two different $i$'s. For a fixed $i$, $h_1^{i,\cdot}$ is a Gaussian vector with zero mean and covariance $q_1^{s,s'} = \frac{1}{n_0} \sum_{j=1}^{n_0} \sum_{r \in \ker} x^{j,s+r} x^{j,s'+r}$. Suppose $h_{l-1}^{i,s}$ becomes Gaussian with zero mean, independent for any two different $i$'s, and $q_{l-1}^{s,s'}$ is its covariance as $n_{1:l-2} \to \infty$ sequentially. Then $h_l^{i,s} = \frac{1}{\sqrt{n_{l-1}}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} W_l^{ijr} x_{l-1}^{j,s+r}$ converges (in distribution) to a random variable with similar properties but with covariance $q_l^{s,s'} = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_{l-1})} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{s'+r})$ as $n_{1:l-1} \to \infty$ sequentially by the Central Limit Theorem (CLT). One can easily generalize the above proof to any finite set of inputs. In particular, $[h_l^{i,\cdot}(x),h_l^{i,\cdot}(x')]^T \in \mathbb{R}^{2d}$ converges to a Gaussian with zero mean and covariance $\Sigma_l(x,x') = \begin{pmatrix} q_l(x) & q_l(x,x')\\q_l(x,x') & q_l(x') \end{pmatrix} \in \mathbb{R}^{2d \times 2d}$, where $q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r})$.
Hence as $n_{1:l-2} \to \infty$ sequentially, $\frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} x_{l-1}^{j,s+r}(x) x_{l-1}^{j,s'+r}(x')$ converges to $q_l^{s,s'}(x,x')$. Let $g_l^{j,p} = \sqrt{n_l} \nabla_{h_l^{j,p}} f$. Since \begin{multline} \partial_{h_l^{j,p}} f = \sum_{i=1}^{n_{l+1}} \sum_{s=1}^d \partial_{h_{l+1}^{i,s}} f \partial_{h_l^{j,p}} h_{l+1}^{i,s} =\\= \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \sum_{s=1}^d \partial_{h_{l+1}^{i,s}} f \sum_{r \in \ker} W_{l+1}^{ijr} 1_{s+r=p} \phi'(h_l^{j,p}) = \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \sum_{r \in \ker} \partial_{h_{l+1}^{i,p-r}} f W_{l+1}^{ijr} \phi'(h_l^{j,p}), \end{multline} $\partial_{h_L^{j,p}} f = \frac{1}{d} \phi'(h_L^{j,p})$, and $n_L=1$, we have \begin{equation} g_L^{j,p} = \frac{1}{d} \phi'(h_L^{j,p}), \quad g_l^{j,p} = \frac{1}{\sqrt{n_{l+1}}} \sum_{i=1}^{n_{l+1}} \sum_{r \in \ker} g_{l+1}^{i,p-r} W_{l+1}^{ijr} \phi'(h_l^{j,p}). \end{equation} With the same correctness remark as for fully-connected nets, it is possible to show that $g_l^{j,p}$ become independent for different $j$'s and $g_l^{j,\cdot}$ become Gaussian with covariance $\dot q_l^{p,p'}$ as $n_{1:L-1} \to \infty$. The covariance is given by the following recurrence: $\dot q_L^{p,p'} = \frac{1}{d^2} \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_L)} \phi'(z^p) \phi'(z^{p'})$, $\dot q_l^{p,p'} = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_l)} \phi'(z^{p}) \phi'(z^{p'}) \sum_{r \in \ker} \dot q_{l+1}^{p-r,p'-r}$. A similar result holds for a pair of inputs: $[g_l^{i,\cdot}(x),g_l^{i,\cdot}(x')]^T \in \mathbb{R}^{2d}$ converges to a Gaussian with zero mean and covariance $\dot\Sigma_l(x,x') = \begin{pmatrix} \dot q_l(x) & \dot q_l(x,x')\\\dot q_l(x,x') & \dot q_l(x') \end{pmatrix} \in \mathbb{R}^{2d \times 2d}$, where $\dot q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z^{s}) \phi'(z^{\prime,s'}) \sum_{r \in \ker} \dot q_{l+1}^{s-r,s'-r}(x,x')$.
Hence \begin{equation} \sum_{i=1}^{n_l} \partial_{h_l^{i,s}} f(x) \partial_{h_l^{i,s'}} f(x') = \frac{1}{n_l} \sum_{i=1}^{n_l} g_l^{i,s}(x) g_l^{i,s'}(x') \to \dot q_l^{s,s'}(x,x'). \end{equation} Putting it all together, $\hat\Theta(x,x')$ converges to $\Theta(x,x') = \sum_{l=1}^L \sum_{s,s'=1}^d \dot q_l^{s,s'}(x,x') q_l^{s,s'}(x,x')$, where \begin{equation} q_1^{s,s'}(x,x') = \frac{1}{n_0} \sum_{j=1}^{n_0} \sum_{r \in \ker} x^{j,s+r} x^{\prime,j,s'+r}, \end{equation} \begin{equation} q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r}), \end{equation} \begin{equation} \dot q_L^{s,s'}(x,x') = \frac{1}{d^2} \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_L(x,x'))} \phi'(z^s) \phi'(z^{\prime,s'}), \end{equation} \begin{equation} \dot q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z^{s}) \phi'(z^{\prime,s'}) \sum_{r \in \ker} \dot q_{l+1}^{s-r,s'-r}(x,x'), \end{equation} and $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x')\\q_l(x,x') & q_l(x',x') \end{pmatrix}$. As for fully-connected nets, computing the NTK can be performed in a convenient sequential layer-wise manner.
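As a quick check of the base case, the channel average $\frac{1}{n_1}\sum_i h_1^{i,s}(x)\, h_1^{i,s'}(x')$ should approach $q_1^{s,s'}(x,x')$ as $n_1$ grows, since $\mathbb{E}\, h_1^{i,s}(x) h_1^{i,s'}(x')$ equals $q_1^{s,s'}(x,x')$ exactly. A minimal sketch (circular padding and all sizes are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1, d = 3, 100_000, 8
ker = [-1, 0, 1]                        # filter offsets; circular padding
x = rng.standard_normal((n0, d))
xp = rng.standard_normal((n0, d))

def conv(W, z):
    # h^{i,s} = (1/sqrt(n0)) sum_{j,r} W^{ijr} z^{j,s+r}  (indices mod d)
    return sum(W[:, :, k] @ np.roll(z, -r, axis=1)
               for k, r in enumerate(ker)) / np.sqrt(n0)

W1 = rng.standard_normal((n1, n0, len(ker)))
h, hp = conv(W1, x), conv(W1, xp)
empirical = h.T @ hp / n1               # (d, d): channel-averaged products

# q_1^{s,s'}(x,x') = (1/n0) sum_{j,r} x^{j,s+r} x'^{j,s'+r}
q1 = sum(np.roll(x, -r, axis=1).T @ np.roll(xp, -r, axis=1) for r in ker) / n0

print(np.max(np.abs(empirical - q1)))   # shrinks like 1/sqrt(n_1)
```

The deviation is a Monte Carlo error over the $n_1$ iid channels, so the agreement tightens as the first layer widens.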
Define the empirical NTK for the first $l$ layers as \begin{equation} \hat\Theta_{:l}^{s,s'}(x,x') = \sum_{l'=1}^l \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{r \in \ker} \partial_{W_{l'}^{ijr}} h_l^{1,s}(x) \partial_{W_{l'}^{ijr}} h_l^{1,s'}(x'); \end{equation} in this case, by chain rule, \begin{multline} \hat\Theta(x,x') = \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \partial_{W_l^{ijr}} f(x) \partial_{W_l^{ijr}} f(x') =\\= \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{s,s'=1}^d \sum_{r \in \ker} \partial_{W_l^{ijr}} h_L^{1,s}(x) \partial_{W_l^{ijr}} h_L^{1,s'}(x') \partial_{h_L^{1,s}} f(x) \partial_{h_L^{1,s'}} f(x') =\\= \frac{1}{d^2} \sum_{s,s'=1}^d \phi'(h_L^{1,s}(x)) \phi'(h_L^{1,s'}(x')) \hat\Theta_{:L}^{s,s'}(x,x'), \end{multline} and therefore, \begin{equation} \Theta(x,x') = \frac{1}{d^2} \sum_{s,s'=1}^d \dot q_L^{s,s'}(x,x') \Theta_{:L}^{s,s'}(x,x'). \end{equation} Suppose $\hat\Theta_{:l-1}(x,x')$ and $q_{l-1}(x,x')$ are already computed. Adding a nonlinearity and a convolutional layer with weights $W_l$ gives $q_l$ as listed above: \begin{equation} q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z'] \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r}), \label{eq:q_iteration_conv} \end{equation} where $\Sigma_{l-1}(x,x') = \begin{pmatrix} q_{l-1}(x,x) & q_{l-1}(x,x')\\q_{l-1}(x,x') & q_{l-1}(x',x') \end{pmatrix}$. 
We can compute $\hat\Theta_{:L}$ in a single forward pass using the following recurrence: \begin{multline} \hat\Theta_{:l}^{s,s'}(x,x') = \sum_{l'=1}^l \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r \in \ker} \partial_{W_{l'}^{ij\tilde r}} h_l^{1,s}(x) \partial_{W_{l'}^{ij\tilde r}} h_l^{1,s'}(x') =\\= \frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{\tilde r \in \ker} x^{j,s+\tilde r}(x) x^{j,s'+\tilde r}(x') +\\+ \sum_{l'=1}^{l-1} \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r \in \ker} \sum_{k,k'=1}^{n_{l-1}} \sum_{p,p'=1}^d \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k,p}(x) \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k',p'}(x') \partial_{h_{l-1}^{k,p}} h_l^{1,s}(x) \partial_{h_{l-1}^{k',p'}} h_l^{1,s'}(x') =\\= \frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{\tilde r \in \ker} x^{j,s+\tilde r}(x) x^{j,s'+\tilde r}(x') +\\+ \frac{1}{n_{l-1}} \sum_{l'=1}^{l-1} \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r,r,r' \in \ker} \sum_{k,k'=1}^{n_{l-1}} \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k,s+r}(x) \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k',s'+r'}(x') \times\\\times W_l^{1kr} \phi'(h_{l-1}^{k,s+r}(x)) W_l^{1k'r'} \phi'(h_{l-1}^{k',s'+r'}(x')). \label{eq:Theta_iteration_conv} \end{multline} Taking the limit then gives \begin{equation} \Theta_{:l}^{s,s'}(x,x') = q_l^{s,s'}(x,x') + \sum_{r,r' \in \ker} \Theta_{:l-1}^{s+r,s'+r'}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi'(z^{s+r}) \phi'(z^{\prime,s'+r'}), \end{equation} which reduces to the corresponding result for fully-connected nets when $\ker = [0]$. \subsection{Computing the expectations} The only obstacle that prevents explicit computation here is the expectations over $[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))$. Fortunately, these expectations can be computed analytically for certain $\phi$: in particular, for ReLU and the error function. We cover only the case of ReLU here, as it is more widely used in practice.
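Once the two ReLU expectations are available in closed form (they are derived in the remainder of this subsection), the fully-connected recursions \eqref{eq:q_iteration} and \eqref{eq:Theta_iteration} become a few lines of code. A minimal sketch (our own implementation, scalar output, no biases):

```python
import numpy as np

def relu_ntk(x, xp, L):
    """Limit NTK of an L-layer ReLU net under NTK parameterization,
    computed with the forward recursions for q_l and Theta_{:l}."""
    q_xx, q_pp, q_xp = x @ x, xp @ xp, x @ xp       # q_1 entries
    theta = q_xp                                     # Theta_{:1} = q_1
    for _ in range(2, L + 1):
        lam = np.clip(q_xp / np.sqrt(q_xx * q_pp), -1.0, 1.0)
        # closed forms: E phi(z) phi(z') and E phi'(z) phi'(z') for ReLU
        e_phi = np.sqrt(q_xx * q_pp) * (
            lam * (np.pi - np.arccos(lam)) + np.sqrt(1 - lam ** 2)) / (2 * np.pi)
        e_dphi = (np.pi - np.arccos(lam)) / (2 * np.pi)
        theta = e_phi + theta * e_dphi               # Theta_{:l} recursion
        q_xx, q_pp, q_xp = q_xx / 2, q_pp / 2, e_phi  # diagonal: E[z]_+^2 = q/2
    return theta

x, xp = np.array([0.6, 0.8]), np.array([1.0, 0.0])
print(relu_ntk(x, xp, 1))   # depth 1 is linear: Theta = x^T x' = 0.6
print(relu_ntk(x, xp, 2))
```

For $L=1$ the net is linear and the kernel is just $x^T x'$; deeper kernels are obtained by iterating the two closed-form expectations layer by layer.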
Let us omit the $l$-subscript and the arguments $(x,x')$ for brevity: $\Sigma = \begin{pmatrix} q_{11} & q_{12}\\q_{12} & q_{22} \end{pmatrix}$, and we are interested in $\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+$ and $\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0}$. Following \cite{arora2019exact}, we start by assuming $q_{11} = q_{22} = 1$ and $q_{12} = \lambda$; $\Sigma \succeq 0$ implies $|\lambda| \leq 1$. Then \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+ = \mathbb{E}\,_{[u,\tilde v]^T \sim \mathcal{N}(0,I)} [u]_+ \left[\lambda u + \sqrt{1-\lambda^2} \tilde v\right]_+ =\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left([u]_+ \int_{-\frac{\lambda}{\sqrt{1-\lambda^2}} u}^\infty \left(\lambda u + \sqrt{1-\lambda^2} \tilde v\right) \frac{1}{\sqrt{2\pi}} e^{-\tilde v^2/2} \, d\tilde v \right) =\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left( [u]_+ \left( \lambda u \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2} \right) \right) =\\= \int_0^\infty u \left(\lambda u \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2}\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du =\\= \frac{\lambda}{4} + \int_0^\infty u \left(\lambda u \frac{1}{2} \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2}\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du =\\= \frac{\lambda}{4} + \frac{\lambda}{2} A + \sqrt{\frac{1-\lambda^2}{2\pi}} B.
\end{multline} \begin{multline} A = \int_0^\infty u^2 \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du = -\int_0^\infty u \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} \, d\left(e^{-u^2/2}\right) =\\= \int_0^\infty \left(\erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) + u \frac{\lambda}{\sqrt{2-2\lambda^2}} \frac{2}{\sqrt{\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2} \right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du = C + \frac{\lambda}{\sqrt{2-2\lambda^2}} \frac{2}{\sqrt{\pi}} B. \end{multline} \begin{equation} C = \int_0^\infty \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du = \frac{1}{\pi} \arctan\left(\frac{\lambda}{\sqrt{1-\lambda^2}}\right) = \frac{1}{\pi} \arcsin\lambda. \end{equation} \begin{equation} B = \int_0^\infty u e^{-\frac{\lambda^2}{2-2\lambda^2} u^2} \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du = \frac{1}{\sqrt{2\pi}}\int_0^\infty u e^{-\frac{1}{2-2\lambda^2} u^2} \, du = \frac{1-\lambda^2}{\sqrt{2\pi}}. \end{equation} Putting all together, \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+ = \frac{\lambda}{4} + \frac{\lambda}{2} A + \sqrt{\frac{1-\lambda^2}{2\pi}} B = \frac{\lambda}{4} + \frac{\lambda}{2} C + \frac{\lambda^2}{\sqrt{1-\lambda^2}} \frac{1}{\sqrt{2\pi}} B + \sqrt{\frac{1-\lambda^2}{2\pi}} B =\\= \frac{\lambda}{4} + \frac{\lambda}{2} C + \frac{1}{\sqrt{1-\lambda^2}} \frac{1}{\sqrt{2\pi}} B = \frac{\lambda}{4} + \frac{\lambda}{2\pi} \arcsin\lambda + \frac{\sqrt{1-\lambda^2}}{2\pi} =\\= \frac{\lambda\left(\frac{\pi}{2} + \arcsin\lambda\right) + \sqrt{1-\lambda^2}}{2\pi} = \frac{\lambda\left(\pi - \arccos\lambda\right) + \sqrt{1-\lambda^2}}{2\pi}. 
\end{multline} And for the second quantity, \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0} = \mathbb{E}\,_{[u,\tilde v]^T \sim \mathcal{N}(0,I)} 1_{u>0} 1_{\lambda u + \sqrt{1-\lambda^2} \tilde v > 0} =\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left(1_{u>0} \int_{-\frac{\lambda}{\sqrt{1-\lambda^2}} u}^\infty \frac{1}{\sqrt{2\pi}} e^{-\tilde v^2/2} \, d\tilde v \right) =\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left( 1_{u>0} \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) \right) =\\= \int_0^\infty \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du =\\= \frac{1}{4} + \int_0^\infty \frac{1}{2} \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du =\\= \frac{1}{4} + \frac{1}{2} C = \frac{\frac{\pi}{2} + \arcsin\lambda}{2\pi} = \frac{\pi - \arccos\lambda}{2\pi}. \end{multline} A general positive semi-definite matrix $\Sigma$ can be expressed as $\Sigma = D \Lambda D$, where $\Lambda = \begin{pmatrix} 1 & \lambda\\\lambda & 1 \end{pmatrix}$, $D = \begin{pmatrix} \sqrt{q_{11}} & 0\\0 & \sqrt{q_{22}} \end{pmatrix}$, and $\lambda = \frac{q_{12}}{\sqrt{q_{11} q_{22}}}$. Then, using homogeneity of ReLU, \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+ = \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,D \Lambda D)} [u]_+ [v]_+ = \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} [\sqrt{q_{11}} u]_+ [\sqrt{q_{22}} v]_+ =\\= \sqrt{q_{11} q_{22}} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} [u]_+ [v]_+ = \sqrt{q_{11} q_{22}} \frac{\lambda\left(\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)\right) + \sqrt{1-\frac{q_{12}^2}{q_{11} q_{22}}}}{2\pi} =\\= \frac{\lambda \sqrt{q_{11} q_{22}} \left(\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)\right) + \sqrt{q_{11} q_{22} - q_{12}^2}}{2\pi}. 
\end{multline} \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0} = \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,D \Lambda D)} 1_{u>0} 1_{v>0} =\\= \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} 1_{u>0} 1_{v>0} = \frac{\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)}{2\pi}. \end{multline} Similar explicit computations are available for convolutional networks \cite{arora2019exact}, as well as for generic tensor programs, as long as the nonlinearities used belong to a certain list (which includes e.g. ReLU and the error function; see \cite{novak2019neural} for a concrete implementation and \cite{yang2020tensor_ii} for generic recurrent formulas in terms of expectations). However, a typical convolutional network also uses max-poolings and other nonlinear maps for which explicit formulas for expectations are not available at the moment. In this case, one can rely on a finite-width Monte-Carlo estimate for $\Theta(x,x')$, i.e. $\hat\Theta^{(M)}(x,x') = \frac{1}{M} \sum_{k=1}^M \hat\Theta_k(x,x')$, where $\hat\Theta_1(x,x'), \ldots, \hat\Theta_M(x,x')$ are empirical kernels of width $n$ computed for $M$ independent initializations. According to convergence results, $\hat\Theta^{(M)}(x,x') \to \Theta(x,x')$ as $n \to \infty$ for any $M \geq 1$. Also, $\hat\Theta^{(M)}(x,x') \to \mathbb{E}\, \hat\Theta(x,x')$ as $M \to \infty$ for any fixed $n$. Unfortunately, one cannot guarantee that $\mathbb{E}\, \hat\Theta(x,x') = \Theta(x,x')$; therefore, $\hat\Theta^{(M)}(x,x')$ can be a biased estimate. However, according to the experiments of \cite{novak2019neural}, the discrepancy between $\hat\Theta^{(M)}$ and $\Theta$ decreases as $M$ grows for any finite $n$. This suggests that the main component of this discrepancy is not bias but variance, which is reduced by adding more Monte-Carlo samples.
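The closed-form expressions above are easy to sanity-check against exactly such a Monte-Carlo estimate; a small numpy sketch with an arbitrary positive-definite $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
q11, q22, q12 = 2.0, 1.5, 0.8                  # an arbitrary positive-definite Sigma
Sigma = np.array([[q11, q12], [q12, q22]])
uv = rng.multivariate_normal(np.zeros(2), Sigma, size=2_000_000)
u, v = uv[:, 0], uv[:, 1]

lam = q12 / np.sqrt(q11 * q22)
# closed forms derived above
e_relu = (np.sqrt(q11 * q22)
          * (lam * (np.pi - np.arccos(lam)) + np.sqrt(1 - lam**2))) / (2 * np.pi)
e_step = (np.pi - np.arccos(lam)) / (2 * np.pi)

# Monte-Carlo estimates of the same two expectations
mc_relu = np.mean(np.maximum(u, 0) * np.maximum(v, 0))
mc_step = np.mean((u > 0) & (v > 0))
```

With $2 \cdot 10^6$ samples the two estimates agree with the closed forms to about three decimal places.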
We also have to note that \cite{arora2019exact} reports significant accuracy drops on a CNN of width $n=512$ when using a single-sample Monte-Carlo estimate for the NTK instead of the exact limit NTK. However, they did not provide any results for $M > 1$; therefore, this accuracy drop could be caused by the large variance of $\hat\Theta$. \subsection{NTK for attention layers} A neural tangent kernel is typically considered for architectures for which analytical computation is available, e.g. for fully-connected and convolutional ReLU nets, see \cref{sec:limit}. One of the necessary conditions for exact computations to be possible is the fact that the output of each individual pre-activation neuron becomes a Gaussian process in the limit of large width. This allows one to apply the Master theorem (\cref{thm:master_theorem}) and express the NTK as an expectation over certain Gaussian variables. However, there exist layers which do not enjoy Gaussian behavior even in the limit of large width. The attention layer is one example: \begin{equation} f(x) = \mathrm{Softmax}\left(G(x)\right) V(x), \qquad G(x) = \frac{1}{\sqrt{n}} Q^T(x) K(x), \end{equation} where we define queries $Q(x) = x W_Q$, keys $K(x) = x W_K$, and values $V(x) = x W_V$. The dimensions of the corresponding matrices are: $W_Q \in \mathbb{R}^{n_0 \times n}$, $W_K \in \mathbb{R}^{n_0 \times n}$, and $W_V \in \mathbb{R}^{n_0 \times n_H}$, and $x \in \mathbb{R}^{d \times n_0}$. If $W_Q$ and $W_K$ are independent with iid zero-mean unit-variance entries, then $G_{\alpha\beta}(x) = n^{-1/2} \sum_{i=1}^n \sum_{j,k=1}^{n_0} x_{\alpha,j} x_{\beta,k} W_Q^{ji} W_K^{ki}$ converges by the CLT to a Gaussian variable. The resulting limit matrix is therefore a $d \times d$ matrix with (non-degenerate) Gaussian entries. Since $d$ stays fixed as $n \to \infty$, we cannot apply any limit theorem to reason about the distribution of $f_i(x)$ for some $i \in [n_H]$.
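A quick numerical illustration of this non-concentration, sketched under the stated iid $\mathcal{N}(0,1)$ assumptions on $W_Q$ and $W_K$ (all names below are ours): the variance of an entry of $G(x)$ is $\|x_\alpha\|^2 \|x_\beta\|^2$ regardless of $n$, so the matrix $G$ stays genuinely random in the limit.

```python
import numpy as np

rng = np.random.default_rng(0)
x_a, x_b = np.array([1.0, 2.0]), np.array([2.0, 1.0])  # two rows of x; n_0 = 2

def sample_G_entry(n, reps):
    # G_{alpha beta}(x) = n^{-1/2} sum_i (x_alpha . W_Q[:, i]) (x_beta . W_K[:, i])
    WQ = rng.normal(size=(reps, 2, n))
    WK = rng.normal(size=(reps, 2, n))
    return np.einsum('j,rjn,k,rkn->r', x_a, WQ, x_b, WK) / np.sqrt(n)

# Var G_{alpha beta} = |x_alpha|^2 |x_beta|^2 = 25, independently of n
variances = [sample_G_entry(n, reps=1000).var() for n in (10, 100, 1000)]
```

All three sample variances hover around $25$, in contrast to, e.g., a pre-activation of a fully-connected layer rescaled by $n^{-1/2}$, whose fluctuations around its limit vanish.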
\cite{hron2020infinite} consider a multi-head attention layer and show that it does enjoy Gaussian process behavior as the width and the number of heads go to infinity simultaneously: \begin{equation} f(x) = [f^1(x), \ldots, f^n(x)] W_O, \qquad f_i(x) = \mathrm{Softmax}\left(G_i(x)\right) V_i(x), \qquad G_i(x) = \frac{1}{\sqrt{n}} Q_i^T(x) K_i(x), \end{equation} where $W_O \in \mathbb{R}^{n_H n \times n_H}$ and all $Q_i$, $K_i$, and $V_i$ are iid for different $i \in [n]$. To gain some intuition about the result of \cite{hron2020infinite}, consider $n_H=1$, i.e. the outputs of all individual heads are scalars and the final output is also a scalar. In this case, $f(x)$ is a product of a vector with $n$ iid entries and a matrix with iid $\mathcal{N}(0,n^{-1})$ entries. This product tends to a Gaussian as $n \to \infty$ by the CLT. Considering a set of inputs gives a random Gaussian vector, similarly to the fully-connected case, see \cref{sec:limit_fc_nets}. \cite{hron2020infinite} give exact formulas for the covariances $q(x,x')$ and the kernel $\Theta(x,x')$; they are implemented as layers in Neural Tangents \cite{novak2019neural}. \section{Computational aspects} \label{sec:computations} \subsection{Inference optimizations} Suppose one is able to compute (or approximate) the limit kernel, $\Theta(x,x')$, on any pair of points $(x,x')$. The result of kernel regression at convergence ($t \to \infty$) in the limit of infinite width is then given by (see Eq.~(\ref{eq:lin_solution_square_loss})): \begin{equation} f_\infty(x) = f_0(x) - \Theta(x, \vec x) \Theta^{-1}(\vec x, \vec x) (f_0(\vec x) - \vec y), \label{eq:inf_wide_solution_square_loss} \end{equation} where $\Theta(\vec x, \vec x) \in \mathbb{R}^{m \times m}$ and $\Theta(x, \vec x) \in \mathbb{R}^{1 \times m}$.
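To make this estimator concrete, here is a minimal numpy sketch for scalar outputs, with a generic PSD kernel standing in for $\Theta$ and, for simplicity, $f_0 \equiv 0$; all names are ours:

```python
import numpy as np

def kernel(a, b):
    # stand-in for the limit NTK Theta(x, x'); any PSD kernel works for the demo
    return np.exp(-0.5 * np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))            # training inputs (m = 20)
y = np.sin(X[:, 0])                     # training targets
Xt = rng.normal(size=(5, 3))            # test inputs

K_tt = kernel(X, X)                     # Theta(vec x, vec x), m x m
K_st = kernel(Xt, X)                    # rows Theta(x, vec x) for each test x
f0_t, f0_s = np.zeros(20), np.zeros(5)  # f_0 = 0 for simplicity
f_inf = f0_s - K_st @ np.linalg.solve(K_tt, f0_t - y)
```

Evaluated at the training points themselves, this expression reduces to $\vec y$, i.e. the converged predictor interpolates the training data.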
For multi-class problems, $f(x) \in \mathbb{R}^k$, where $k$ is the number of classes, and the kernel evaluated at two points becomes a $k \times k$ matrix: \begin{equation} \hat\Theta_{jj'}(x,x') = \nabla_\theta^T f^j(x) \nabla_\theta f^{j'}(x'). \end{equation} Define the Gram matrix as $\hat\Theta_{ik+j,i'k+j'}(\vec x, \vec x) = \hat\Theta_{jj'}(x_i,x_{i'})$ and its limit counterpart $\Theta(\vec x, \vec x) \in \mathbb{R}^{mk \times mk}$ accordingly; similarly for $\Theta(x, \vec x) \in \mathbb{R}^{k \times mk}$. If one defines $f_0^{ik+j}(\vec x) = f_0^j(x_i)$, the corresponding solution takes the same form as Eq.~(\ref{eq:inf_wide_solution_square_loss}). Evaluating this quantity naively requires storing and inverting the kernel Gram matrix $\Theta(\vec x, \vec x) \in \mathbb{R}^{mk \times mk}$. Storing it requires $O(m^2 k^2)$ memory, while inverting it takes $O(m^3 k^3)$ time, making such a naive approach computationally infeasible for datasets with $m k \gtrsim 10^4$ (nevertheless, for small datasets, the naive approach for computing the NTK estimator (\ref{eq:inf_wide_solution_square_loss}) is feasible and may provide an advantage over traditional SGD training, see \cite{arora2019harnessing}). Let us start by discussing two important optimizations implemented in Neural Tangents \cite{novak2019neural}. Note that, as discussed in \cref{sec:limit}, for a fully-connected net (and, in fact, for any tensor program, see \cite{yang2019tensor_i}), preactivations of different neurons on a given layer become iid as width goes to infinity. This implies $\Theta_{jj'}(x,x') = \Theta_{11}(x,x') 1_{j=j'}$. Therefore the kernel Gram matrix has a block structure: $\Theta(\vec x, \vec x) = \Theta|_{k=1}(\vec x, \vec x) \otimes I_{k \times k}$. This reduces the memory footprint to $O(m^2)$ and the time requirement to $O(m^3)$. The second optimization deals with convolutional networks. Note that computing $\Theta(x,x')$ requires computing all intermediate covariances $q_l(x,x')$.
These covariances were scalars for fully-connected nets since different neurons of a given layer became iid as width went to infinity. However, for an image with $d$ pixels, different pixels of a given layer are dependent since their preactivations are computed using the same weight matrices. This is why, for convolutional nets, one has to construct intermediate covariance matrices of size $d \times d$; storing and computing them for each pair of points requires $O(m^2 d^2)$ memory and time, even surpassing the time required for Gram matrix inversion when $d^2 > m$ (this happens e.g. for CIFAR10, for which $d = 32 \times 32 = 1024$, $m = 50{,}000$, $k = 10$). However, as was noted e.g. in \cite{xiao2018dynamical}, if no pooling is used in the network, it suffices to compute and store $d$ independent $m \times m$ blocks of this covariance matrix, bringing the time requirement down to $O(m^2 d)$, which is usually not greater than the $O(m^3)$ time required for inversion. So far, the main computational bottleneck has been the time required for inverting the kernel Gram matrix. This problem is not specific to the NTK; it appears in any regularized kernel regression problem: \begin{equation} \hat f_\lambda = \argmin_{f \in \mathcal{H}} \sum_{j=1}^m \ell(y_j, f(x_j)) + \lambda \| f \|_\mathcal{H}^2. \label{eq:kernel_regression} \end{equation} Here $\mathcal{H}$ is a Hilbert space of functions of the form $f(x) = \Phi^T(x) \theta$; the corresponding scalar product is $\langle \Phi^T(x) \theta, \Phi^T(x) \theta' \rangle = \theta^T \theta'$. Hence $\| f \|_\mathcal{H}^2 = \langle f, f \rangle = \|\theta\|_2^2$ for $f(x) = \Phi^T(x) \theta$. Problem~(\ref{eq:kernel_regression}) has an associated kernel, which we denote with the same letter as the NTK: $\Theta(x,x') = \Phi^T(x) \Phi(x')$. Due to the representer theorem \cite{kimeldorf1970correspondence}, any solution of Problem~(\ref{eq:kernel_regression}) has the form $f(x) = \sum_{j=1}^m \alpha_j \Theta(x,x_j)$.
For now, consider quadratic loss: $\ell(y,z) = \frac{1}{2} \| y - z \|_2^2$. The problem above becomes: \begin{equation} \vec\alpha = \argmin_{\vec\alpha \in \mathbb{R}^m} \frac{1}{2} \sum_{j=1}^m \left( \sum_{j'=1}^m \alpha_{j'} \Theta(x_j,x_{j'}) - y_j \right)^2 + \lambda \left\| \sum_{j=1}^m \alpha_j \Phi(x_j) \right\|_2^2. \end{equation} This problem is convex; therefore any critical point of the corresponding functional is a solution: \begin{equation} (\Theta(\vec x, \vec x) + \lambda I) \vec\alpha = \vec y. \end{equation} As long as $\Theta(\vec x, \vec x) + \lambda I$ is invertible, the solution is $\vec\alpha = (\Theta(\vec x, \vec x) + \lambda I)^{-1} \vec y$. Putting $\lambda = 0$, we recover Eq.~(\ref{eq:inf_wide_solution_square_loss}) in expectation (since $\mathbb{E}\, f_0(x) = 0$). While the representer theorem guarantees that it suffices to look for solutions only of the form $f(x) = \sum_{j=1}^m \alpha_j \Theta(x,x_j)$ instead of inspecting the whole $\mathcal{H}$, we, following \cite{meanti2020kernel}, consider further contracting the search space by sampling $m'$ points $(\tilde x_1, \ldots, \tilde x_{m'})$ uniformly out of $m$ and looking for solutions of the form $f(x) = \sum_{j=1}^{m'} \tilde\alpha_j \Theta(x,\tilde x_j)$. This is known as the Nystr\"om approximation. The minimization problem then becomes: \begin{equation} \vec{\tilde\alpha} = \argmin_{\vec{\tilde\alpha} \in \mathbb{R}^{m'}} \frac{1}{2} \sum_{j=1}^m \left( \sum_{j'=1}^{m'} \tilde\alpha_{j'} \Theta(x_j,\tilde x_{j'}) - y_j \right)^2 + \lambda \left\| \sum_{j=1}^{m'} \tilde\alpha_j \Phi(\tilde x_j) \right\|_2^2. \end{equation} This problem is again convex and its critical points satisfy the following: \begin{equation} \left(\Theta\left(\vec{\tilde x}, \vec x\right) \Theta\left(\vec x, \vec{\tilde x}\right) + \lambda \Theta\left(\vec{\tilde x}, \vec{\tilde x}\right)\right) \vec{\tilde \alpha} = \Theta\left(\vec{\tilde x}, \vec x\right) \vec y.
\label{eq:critical_points_nystrom} \end{equation} Computing the kernel-kernel product takes $O(m {m'}^2)$ time and solving the above system directly takes $O({m'}^3)$ time. The space requirement can be brought down to $O({m'}^2)$, as the ``rectangular Gram matrix'' can be computed in $m' \times m'$ blocks. Conjugate gradient methods are iterative methods designed for approximately solving linear systems of the form $A \vec z = \vec b$ without explicitly inverting the matrix $A$. The main operation used by these methods on each iteration is a matrix-vector product. In our case, the matrix-vector product requires $O(mm' + {m'}^2)$ time; note that it allows one to avoid computing the kernel-kernel product explicitly, by computing two matrix-vector products instead, each costing $O(mm')$ time. Putting all together, solving system~(\ref{eq:critical_points_nystrom}) with $s$ iterations of a conjugate gradient method requires $O(s(mm' + {m'}^2))$ time and $O({m'}^2)$ space. Based on certain theoretical results, \cite{meanti2020kernel} suggest taking $m' = O(\sqrt{m})$ and $s = O(\log m)$. The resulting $O(m \sqrt{m} \log m)$ time and $O(m)$ space allows for applying their method to datasets of size up to $m \sim 10^6$ (the size of ImageNet). \cite{meanti2020kernel} also discuss several optimizations aiming at improving the GPU-efficiency of the method. While their method is publicly available as an open-source library\footnote{\url{https://github.com/FalkonML/falkon}}, we are not aware of any of its applications to the NTK. \subsection{Computing the empirical kernel} All the previous discussion of the current section assumed that the kernel, $\Theta$, can be efficiently computed. This is the case for certain models for which analytic computations are available. Indeed, for $L$-layer fully-connected nets, the limit Gram matrix $\Theta(\vec x, \vec x)$ can be computed in $O(m^2 L)$ time, while storing it requires $O(m^2)$ space, see Eqs. (\ref{eq:q_iteration}) and (\ref{eq:Theta_iteration}).
For more complex models, e.g. those including max-poolings, closed-form analytic expressions for the limit kernel are not currently available. However, the empirical kernel, $\hat\Theta$, can always be computed explicitly and is close to $\Theta$ for sufficiently large width (see convergence theorems in \cref{sec:convergence}). For this reason, we are looking for ways to compute $\hat\Theta$ efficiently. In order to simplify the illustration, we will discuss only time requirements in the sequel. Recall that the empirical kernel is a product of two jacobians: $\hat\Theta_{jj'}(x,x') = \nabla^T_\theta f^j(x) \nabla_\theta f^{j'}(x')$. Therefore the time cost for computing the kernel consists of the time required to compute the jacobian and the time required for jacobian contraction. Denote by $[FP]$ the cost of a single forward pass for our network; a single backward pass has approximately the same cost. Then computing a jacobian for a given point $x$ takes $O(k [FP])$ time. Contracting two jacobians for fixed $j$ and $j'$ takes $O(N)$ time, where $N$ is the total number of parameters: $\theta \in \mathbb{R}^N$. Putting all together, computing the full $mk \times mk$ Gram matrix takes $O(m k [FP] + m^2 k^2 N)$ time. \cite{novakfast} propose a method for computing the NTK-vector product. It can be directly embedded into the method of \cite{meanti2020kernel} using conjugate gradients, or used for computing the kernel explicitly by applying it to the columns of the $k \times k$ identity matrix. Their method boils down to casting a matrix-vector product, where the matrix is the empirical NTK, into a vector-jacobian product followed by a jacobian-vector product: $\sum_{j'=1}^k \hat\Theta_{jj'}(x,x') v_{j'} = \nabla^T_\theta f^j(x) \sum_{j'=1}^k \nabla_\theta f^{j'}(x') v_{j'}$. Both of these products can be computed in $O([FP])$ time.
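This composition can be illustrated on a tiny two-layer ReLU net with hand-written vjp and jvp routines (illustrative code with names of our choosing; in practice both products are supplied by an autodiff framework):

```python
import numpy as np

rng = np.random.default_rng(0)
n0, h, k = 3, 5, 2
W1, W2 = rng.normal(size=(h, n0)), rng.normal(size=(k, h))  # theta = (W1, W2)

def forward(x):
    z = W1 @ x
    a = np.maximum(z, 0)
    return z, a, W2 @ a

def vjp(x, v):
    # v^T (df/dtheta): cotangent v in R^k -> gradient with respect to (W1, W2)
    z, a, _ = forward(x)
    dz = (W2.T @ v) * (z > 0)
    return np.outer(dz, x), np.outer(v, a)

def jvp(x, dW1, dW2):
    # (df/dtheta) dtheta: tangent (dW1, dW2) -> direction in output space R^k
    z, a, _ = forward(x)
    return dW2 @ a + W2 @ ((z > 0) * (dW1 @ x))

def ntk_vec(x, xp, v):
    # hat-Theta(x, x') v, computed as a vjp at x' followed by a jvp at x
    return jvp(x, *vjp(xp, v))

def jac(x):
    # reference: explicit k x N jacobian assembled row by row
    rows = []
    for j in range(k):
        e = np.zeros(k); e[j] = 1.0
        g1, g2 = vjp(x, e)
        rows.append(np.concatenate([g1.ravel(), g2.ravel()]))
    return np.stack(rows)
```

`ntk_vec` never materializes a jacobian, which is exactly why the cost drops from $O(kN)$ per pair of outputs to $O([FP])$ per product.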
Therefore this method allows one to compute the full $mk \times mk$ Gram matrix in $O(m^2 k [FP])$ time, which improves over the jacobian contraction method as long as $[FP] < C k N$ for a certain constant $C$. Memory requirements, which we do not show here, are in fact the same for both methods, see \cite{novakfast}. \cite{novakfast} also propose another optimization exploiting certain structure of the function $f$: e.g. weights of a fully-connected net are aligned sequentially, while weights of a convolutional layer are arranged in blocks. We do not discuss it in the present survey. Both optimizations are publicly available as JAX \cite{jax2018github} function transformations.\footnote{\url{https://github.com/iclr2022anon/fast_finite_width_ntk}} \section{Applications} \subsection{A kernel method} \subsubsection{Supervised learning on small datasets} The NTK is a kernel; therefore it can be used in any kernel method, e.g. kernel ridge regression or kernel SVM. However, computing the kernel Gram matrix on a dataset of size $m$ requires $O(m^2)$ time, which is infeasible for large datasets. One can either rely on certain approximations, e.g. the Nystr\"om approximation, see \cref{sec:computations}, or restrict oneself to small datasets. One possible advantage of kernel methods over neural nets is lower variance. Indeed, the only variance of a kernel method is induced by sampling the dataset, while a neural network has several more sources of variance, e.g. initialization randomness and batch sampling. It is likely that this difference in variances is especially important when the dataset is small. The other advantage of kernel methods is having a smaller number of hyperparameters compared to neural nets. This makes kernel methods useful as robust baseline methods that may outperform large neural nets in a situation when there is no budget for careful hyperparameter tuning.
As an illustration, \cite{arora2019harnessing} demonstrated that kernel regression with a 14-layer CNTK consistently outperforms ResNet-34 trained with standard hyperparameters on random subsets of CIFAR-10 with $\leq 640$ samples. \subsubsection{Neural architecture search using NTK condition number} There are other setups where computing the Gram matrix on a small dataset is sufficient. For example, \cite{chen2021neural} propose the condition number of the NTK Gram matrix as a proxy-measure of a given architecture's performance; this proxy-measure is then used to guide neural architecture search (NAS). In this case, we do not need the Gram matrix itself but only its condition number, which motivates computing the matrix on a small subset of examples. While the condition number of a random-subset Gram matrix provides only a random estimate, possibly noisy and biased, of the true condition number, the way we use it does not require exact estimates. Indeed, a performance measure in NAS algorithms is mainly used to cut off pathological, low-performing models from a population, rather than to find the best one. Therefore any measure that correlates positively with performance suffices. The use of the condition number as a proxy-measure of performance relies on two hypotheses: (1) performance correlates with trainability, and (2) trainability correlates with the NTK condition number. The first hypothesis is mainly motivated by the natural implication ``bad trainability implies low performance''. To motivate the second hypothesis, let us consider kernel regression trained with the usual discrete-time gradient descent: \begin{equation} f_{t+1}(\vec x) = f_t(\vec x) + \eta \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)), \end{equation} where now $t$ is a discrete time-step and $\eta$ is a learning rate.
Consider the eigenvalue decomposition of the kernel: $\Theta(\vec x, \vec x) = \sum_{k=1}^m \lambda_k \vec v_k \vec v_k^T$, where $\lambda_1 \geq \ldots \geq \lambda_m \geq 0$, and $(\vec v_k)_{k=1}^m$ forms an orthonormal basis. Let us decompose our model's predictions as $f_t(\vec x) = \sum_{k=1}^m u_{t,k} \vec v_k$. Then the dynamics above decomposes as \begin{equation} u_{t+1,k} = u_{t,k} + \eta \lambda_k (\vec y^T \vec v_k - u_{t,k}). \end{equation} This gives \begin{equation} u_{t+1,k} - \vec y^T \vec v_k = (1 - \eta \lambda_k) (u_{t,k} - \vec y^T \vec v_k), \end{equation} and the solution is therefore \begin{equation} u_{t,k} = \vec y^T \vec v_k + (1 - \eta \lambda_k)^t (u_{0,k} - \vec y^T \vec v_k). \end{equation} The dynamics above converges as $t \to \infty$ for any $u_{0,k}$ if and only if $\eta < 2 / \lambda_k$. Since this should hold for all $k \in [m]$, and the maximal eigenvalue is $\lambda_1$, we need to have $\eta < 2 / \lambda_1$. Therefore the $m$-th principal component converges at rate $\eta \lambda_m < 2 \lambda_m / \lambda_1$. The quantity $\kappa = \lambda_m / \lambda_1$ is our condition number. We see that a small condition number implies low trainability and thus, by the first hypothesis, low performance. Using a combination of two proxy-measures, the condition number and the number of linear regions (which we do not discuss here), \cite{chen2021neural} constructed a NAS method that provided state-of-the-art performance on NAS-Bench-201 \cite{dong2020bench}, while requiring much less time than most of the other methods. \cite{chen2021neural} tested their method on CIFAR10 and ImageNet as well. In both cases, their method demonstrated competitive performance while using orders of magnitude less time. \subsubsection{Matrix completion and image inpainting} In some cases, posing the problem as kernel regression allows for certain optimizations.
In particular, \cite{radhakrishnan2021simple} proposed approaching the problem of matrix completion by minimizing the following loss: \begin{equation} \mathcal{L}(\theta) = \sum_{(i,j) \in S} (Y_{ij} - \tr(f(Z;\theta) M^{(ij)}))^2, \end{equation} where $S \subset [k] \times [d]$ is a set of coordinates of known entries of the target matrix $Y \in \mathbb{R}^{k \times d}$, $M^{(ij)} \in \mathbb{R}^{k \times d}$ has $1$ at position $(i,j)$ and $0$ elsewhere, $f(\cdot;\theta)$ is a neural network with parameters $\theta$, $n_0$ inputs and $k$ outputs, and $Z \in \mathbb{R}^{n_0 \times d}$ is an a-priori given matrix. The model $f$ is applied to each column of $Z$ separately; therefore $f(Z;\theta)$ is a $k \times d$ matrix. The above setup can be treated as a usual $l_2$ regression problem on a dataset $(Y_{ij}, M^{(ij)})_{(i,j) \in S}$. The corresponding empirical NTK is defined as $\hat K(M^{(ij)}, M^{(i'j')}) = \nabla^T_\theta \tr(f(Z;\theta) M^{(ij)}) \nabla_\theta \tr(f(Z;\theta) M^{(i'j')})$. Naturally, it does not depend on the target matrix entries $Y$, and since there is only a finite set of possible inputs $M^{(ij)}$ (namely, $k d$ of them), the resulting $kd \times kd$ Gram matrix will be the same for all possible matrix completion problems with target matrices of the given dimensions. In other words, one can precompute the Gram matrix once and use it for all possible matrix completion problems of the given dimensions. In contrast, the original neural-network formulation would require training a new network for each dataset $(Y_{ij}, M^{(ij)})_{(i,j) \in S}$. When $f(\cdot;\theta)$ is given by a fully-connected network with $L$ layers, \cite{radhakrishnan2021simple} provide a closed-form formula for its limit NTK: $K(M^{(ij)}, M^{(i'j')}) = \kappa_L\left(z_{\cdot,j}^T z_{\cdot,j'}\right) 1_{i=i'}$, where $\kappa_L$ is given by a certain recurrent relation.
As we see, according to this kernel, elements of different rows of $Y$ are orthogonal (they do not affect each other), while the similarity of elements of the same row is given by a scalar product of the corresponding columns of $Z$. Therefore the columns of $Z$ encode a-priori similarities between the columns of $Y$. The matrix $Z$ is called a feature-prior matrix. The ideal feature-prior matrix would be the target matrix $Y$ itself. Since one does not have access to it, \cite{radhakrishnan2021simple} suggest using the output $\hat Y$ of a separate matrix completion method instead. The resulting joint method performs better than the backbone one on popular collaborative filtering and virtual drug screening datasets. Image inpainting can be viewed as a special case of matrix completion. Apart from using the same Gram matrix for all problems of a given size, image inpainting with convolutional networks allows for one more optimization. When $f$ is a convolutional network, we pose the problem a bit differently from above. Suppose $f$ has $n_0$ input channels, $1$ output channel, and it maps an image to an image of the same size. Suppose $Z \in \mathbb{R}^{n_0 \times 2^p \times 2^q}$, and it is treated as a $2^p \times 2^q$ image with $n_0$ channels. This is in contrast to the previous considerations, where $Z$ was a matrix with columns treated as different inputs to a vector-valued model. Similarly to the above, $Y \in \mathbb{R}^{2^p \times 2^q}$ is a target image, and $M^{(ij)}$ of the same size has $1$ at $(i,j)$ and zero elsewhere. Note that $f$ applied to the ``image'' $Z$ has a $2^p \times 2^q$ output and therefore its NTK $\Theta$ is a $2^p \times 2^q \times 2^p \times 2^q$ tensor. Suppose $f$ has no downsampling or upsampling layers. \cite{radhakrishnan2021simple} provide an exact formula for the corresponding limit NTK in terms of the limit NTK of the model $f$ in this case: $K(M^{(ij)}, M^{(i'j')}) = \Theta(Z,Z)_{i,j,i',j'}$.
Now suppose $f$ has $s$ downsampling and $s$ upsampling layers. Computing the Gram matrix for its NTK requires $O(2^{2p+2q})$ memory and $O(L 2^{2p+2q})$ time, where $L$ is the number of convolutions in $f$. This is already prohibitive for moderate-size images, e.g. when $p, q \approx 10$. \cite{radhakrishnan2021simple} propose a way to reconstruct the $2^p \times 2^q \times 2^p \times 2^q$ Gram matrix from a smaller Gram matrix of size $2^{2s+p+q}$. Moreover, this smaller Gram matrix requires computing the ``usual'' Gram matrices only for images of size $2^{s+1} \times 2^{s+1}$, which requires only $O(L 2^{4s})$ time. \subsubsection{Approximate integration with application to federated learning} Even in the case when the NTK Gram matrix can be computed and stored, the exact solution (\ref{eq:inf_wide_solution_square_loss}) requires inverting the kernel Gram matrix, which costs $O(m^3)$ when performed naively. Fortunately, mixing continuous-time and discrete-time formulations allows one to avoid computing the inverse explicitly. Denote $H_{t,ij} = \hat\Theta_t(x_i,x_j)$, $Z_{t,ik} = \partial_{\theta_i} f(x_k;\theta)$, and $u_{t,k} = f_t(x_k)$. Note that $H_t = Z_t^T Z_t$. Discrete-time weight evolution with learning rate $\eta$ is given by \begin{equation} \theta_{t+1} = \theta_t + \eta Z_t (\vec y - \vec u_t). \end{equation} Recall that assuming a stationary jacobian $Z_t = Z_0$ implies a stationary kernel $H_t = H_0$. With this assumption, the dynamics above is solved as \begin{equation} \theta_t = \theta_0 + \eta Z_0 \sum_{s=0}^{t-1} (\vec y - \vec u_s). \end{equation} Recall that integrating the continuous-time gradient descent dynamics under the assumption $H_t = H_0$ gives \begin{equation} \vec u_s = \vec y + e^{-\eta s H_0} (\vec u_0 - \vec y). \end{equation} Combining the two latter equations, we get the weights at any time-step $t$: \begin{equation} \theta_t = \theta_0 + \eta Z_0 \sum_{s=0}^{t-1} e^{-\eta s H_0} (\vec y - \vec u_0).
\end{equation} The continuous analogue of the above evolution is obtained by replacing the sum with an integral: \begin{equation} \theta_t = \theta_0 + \eta Z_0 \int_0^t e^{-\eta s H_0} (\vec y - \vec u_0) \, ds = \theta_0 + Z_0 H_0^{-1} \left(I - e^{-\eta t H_0}\right) (\vec y - \vec u_0). \end{equation} Here we get the inverse, as expected. Note that in this approach we do not assume the network to be infinitely wide; we just assume it to be linear in its weights. This allows us to reason in terms of the network weight vector $\theta_t$ instead of reasoning in terms of some abstract feature space associated with the kernel. This aspect gives us one additional advantage: we can integrate the dynamics up to some time $t_1$ and, since we know the weights $\theta_{t_1}$, compute $Z_{t_1}$ and $H_{t_1}$. We can then proceed with the integration using these updated matrices. This method lies in between the usual gradient descent training and kernel gradient descent with a constant kernel. The latter never updates the kernel, while the former updates the kernel at each timestep. In contrast, the method we discuss updates the kernel only at given timesteps. The approach under discussion requires computing and storing $Z$ of size $N \times m$, which is an obvious disadvantage. As a remedy, \cite{yue2021neural} propose splitting the job of computing $Z$ between several workers. A server joins the parts together, integrates the dynamics up to some timestep $t$, and sends $\theta_t$ to all of the workers, starting a new iteration. Tuning the timesteps of kernel updates may help balance the load between the server and the workers. The data used to compute $Z$ is never stored on the server, making this approach promising for federated learning. However, since the server may attempt to reconstruct the data from $Z$, one has to ensure that each worker's privacy cannot be compromised; see \cite{yue2021neural} for further details.
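The closed-form expression above can be checked numerically on a model that is exactly linear in its weights, where $Z$ and $H$ are genuinely constant; a numpy sketch (all names ours; the matrix exponential is computed via an eigendecomposition of the symmetric $H_0$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 30, 10
Z0 = rng.normal(size=(N, m))        # jacobian Z_{0,ik} = d f(x_k) / d theta_i
theta0 = rng.normal(size=N)
y = rng.normal(size=m)
u0 = Z0.T @ theta0                  # initial predictions of the linear model
H0 = Z0.T @ Z0                      # NTK Gram matrix, exactly constant here
eta, t = 0.05, 3.0

# closed form: theta_t = theta_0 + Z_0 H_0^{-1} (I - exp(-eta t H_0)) (y - u_0)
w, V = np.linalg.eigh(H0)
expm = V @ np.diag(np.exp(-eta * t * w)) @ V.T
theta_closed = theta0 + Z0 @ np.linalg.solve(H0, (np.eye(m) - expm) @ (y - u0))

# reference: fine Euler integration of d theta / ds = eta Z_0 (y - Z_0^T theta)
theta = theta0.copy()
ds = 1e-4
for _ in range(int(t / ds)):
    theta += ds * eta * Z0 @ (y - Z0.T @ theta)
```

As $t \to \infty$ the closed form reduces to $\theta_\infty = \theta_0 + Z_0 H_0^{-1} (\vec y - \vec u_0)$, whose predictions $Z_0^T \theta_\infty$ interpolate $\vec y$ whenever $H_0$ is invertible.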
\subsection{Pathology analysis} \begin{figure} \caption{Images are borrowed from \cite{tancik2020fourier}.} \label{fig:image_regression} \end{figure} \begin{figure} \caption{Images are borrowed from \cite{tancik2020fourier}.} \label{fig:low_dim_regression} \end{figure} While the empirical NTK of a neural network is not the same as its limit NTK, they may have certain properties in common. In particular, certain issues of a finite-width network may be reflected in issues of its limit NTK, and fixing these issues in the limit NTK may result in fixing them in the finite-width net. As an example where this approach has proven to work, consider image regression. In this task, input samples are image coordinates, $x \in [0,1]^d$ for $d=2$, and targets are pixel colors; we assume grey-scale images with $y \in [0,1]$. The task is therefore to regress the full image given a set of pixels. Let us consider applying a fully-connected network to this task. As we have already observed in \cref{sec:limit_fc_nets}, the limit NTK $\Theta(x,x')$ of a fully-connected network depends only on $x^T x$, $x^{\prime,T} x'$, and $x^T x'$. All of these terms are rotation-invariant, hence the kernel itself is rotation-invariant. However, none of these terms is translation-invariant, hence the kernel cannot be translation-invariant (otherwise it would have to be constant). Therefore it is quite unlikely that the empirical kernel will be invariant to translations. On the other hand, both translation and rotation invariance are desirable for a kernel used for image regression. Indeed, this means that applying these transformations to the train set of pixels results in the same image as without them, up to translation and rotation. In order to achieve this property, one may start working on translationally invariant embeddings of image coordinates. The simplest non-trivial embedding of this kind is $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$, where $\cos$ and $\sin$ are applied elementwise.
Following \cite{tancik2020fourier}, we shall refer to it as ``basic''. Comparing (b) and (c) of Figure~\ref{fig:image_regression}, this indeed results in better perceived quality. However, the regressed image is still blurry: see Figure~\ref{fig:image_regression} (c). As we shall see shortly, NTK kernel regression learns low-frequency components of the image before its high-frequency ones. If we assume that the same property holds for the corresponding finite-width net, then achieving sharp images may be impossible for a given number of gradient steps. Recall the training dynamics of a kernel regression with kernel $\Theta$ trained to minimize square loss on a training dataset $(\vec x, \vec y)$: \begin{equation} \dot f_t(\vec x) = \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)). \end{equation} $\Theta$ is a kernel, therefore its Gram matrix is positive-semidefinite. Consider its eigenvalue decomposition: $\Theta(\vec x, \vec x) = \sum_{k=1}^m \lambda_k \vec v_k \vec v_k^T$, where $\lambda_1 \geq \ldots \geq \lambda_m \geq 0$, and $(\vec v_k)_{k=1}^m$ forms an orthonormal basis. Let us decompose our model's predictions as $f_t(\vec x) = \sum_{k=1}^m u_{t,k} \vec v_k$. Then the dynamics above decomposes as \begin{equation} \dot u_{t,k} = \lambda_k (\vec v_k^T \vec y - u_{t,k}), \end{equation} which solves as \begin{equation} u_{t,k} = \vec v_k^T \vec y - e^{-\lambda_k t} (\vec v_k^T \vec y - u_{0,k}). \end{equation} As one clearly sees, the time required to learn the $k$-th principal component of the target is inversely proportional to its strength $\lambda_k$. In other words, strong components are learned before weak ones. The question is: what are the eigenvectors of the NTK Gram matrix? It is hard to answer this question in general since a Gram matrix depends on the dataset. However, for a kernel, there is an analogue of the eigenvalue decomposition called Mercer's representation. Let $X$ be a compact metric space and let $\mu$ be a sigma-additive measure on $X$ with $\supp \mu = X$.
Suppose $K: \; X \times X \to \mathbb{R}$ is continuous, symmetric, and positive-semidefinite, i.e. $\int_X \int_X K(x,x') f(x) f(x') \, d\mu(x) \, d\mu(x') \geq 0$ $\forall f \in L^2_\mu(X)$. Define the integral operator $T_K: \; L^2_\mu(X) \to L^2_\mu(X)$ as $T_K[f](x) = \int_X K(x,x') f(x') \, d\mu(x')$. Then this operator admits an eigenvalue decomposition with eigenfunctions $(\psi_k)_{k=1}^\infty$ and corresponding eigenvalues $(\lambda_k)_{k=1}^\infty$, and the set of eigenfunctions forms an orthonormal basis in $L^2_\mu(X)$. Mercer's representation is the corresponding decomposition of the kernel: \begin{equation} K(x,x') = \sum_{k=1}^\infty \lambda_k \psi_k(x) \psi_k(x'). \end{equation} The series converges uniformly in $X \times X$. From the above, we have $\int_X \int_X K(x,x') \psi_k(x) \psi_k(x') \, d\mu(x) \, d\mu(x') = \lambda_k$ $\forall k \geq 1$. Hence if $\vec x = (x_k)_{k=1}^m$ and $\vec x' = (x'_k)_{k=1}^m$ are sampled iid from $\mu$ then \begin{multline} \frac{1}{m^2} \psi_k^T(\vec x) K(\vec x, \vec x') \psi_k(\vec x') =\\= \frac{1}{m^2} \sum_{i,j=1}^m K(x_i, x'_j) \psi_k(x_i) \psi_k(x'_j) \to \int_X \int_X K(x,x') \psi_k(x) \psi_k(x') \, d\mu(x) \, d\mu(x') = \lambda_k \end{multline} a.s. as $m \to \infty$ by the Law of Large Numbers (LLN). Note that considering $\psi_k^T(\vec x) K(\vec x, \vec x) \psi_k(\vec x)$ instead of $\psi_k^T(\vec x) K(\vec x, \vec x') \psi_k(\vec x')$ may result in a different limit because the diagonal of $K$ is now evaluated at pairs of dependent arguments. Nevertheless, there are only $m$ elements on the diagonal, which results in an $O(m^{-1})$ error vanishing in the limit. Hence \begin{equation} \frac{1}{m^2} \psi_k^T(\vec x) K(\vec x, \vec x) \psi_k(\vec x) \to \lambda_k \end{equation} a.s. as $m \to \infty$. In other words, given $\vec x$ sampled iid from $\mu$, $(\psi_k(\vec x))_{k=1}^m$ are approximately the eigenvectors of $K(\vec x, \vec x)$ with eigenvalues $(m \lambda_k)_{k=1}^m$, since $\|\psi_k(\vec x)\|_2^2 \approx m$.
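This LLN argument can be checked numerically on a toy kernel whose Mercer decomposition is known in closed form. The kernel and sample sizes below are our own construction for illustration: $K(x,x') = 1 + \cos(x - x')$ on the circle with the uniform measure has eigenfunctions $1$, $\sqrt{2}\cos$, $\sqrt{2}\sin$ with eigenvalues $1$, $1/2$, $1/2$, and all remaining eigenvalues zero.

```python
import numpy as np

# Toy check: K(x,x') = 1 + cos(x - x') w.r.t. the uniform measure on [0, 2*pi).
# Mercer eigenvalues: 1 (constant), 1/2 (cos), 1/2 (sin); the kernel has rank 3.
rng = np.random.default_rng(0)
m = 3000
x = rng.uniform(0.0, 2.0 * np.pi, size=m)
K = 1.0 + np.cos(x[:, None] - x[None, :])

psi = np.sqrt(2.0) * np.cos(x)            # eigenfunction with lambda = 1/2
lam_est = (psi @ K @ psi) / m**2          # LLN estimate of lambda, -> 1/2

# Gram eigenvalues approach m * lambda_k: m*1 once, m*(1/2) twice, rest ~ 0.
eigs = np.sort(np.linalg.eigvalsh(K))[::-1]
```

The top three Gram eigenvalues divided by $m$ approach $1$, $1/2$, $1/2$, matching the $m\lambda_k$ scaling stated above.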
Recall that, as was noted above, the limit NTK of a fully-connected net $\Theta(z,z')$ depends only on $z^T z'$, $\|z\|_2$, and $\|z'\|_2$. Recall also that we have decided to embed inputs with $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$. This embedding maps $[0,1]^d$ onto a $d$-dimensional torus that lies inside a $(2d-1)$-dimensional sphere. In this case, since $\|z(x)\|_2^2 = d$ is constant, our $\Theta(x,x') = \Theta(z(x),z(x'))$ depends only on $z^T(x) z(x')$. Kernels with this property are called zonal. Any zonal kernel $K: S^{p-1} \times S^{p-1} \to \mathbb{R}$ admits the following Mercer's decomposition with respect to the uniform measure on $S^{p-1}$: \begin{equation} K(z^T z') = \sum_{k=0}^\infty \lambda_k \sum_{j=1}^{N(p,k)} Y_{k,j}(z) Y_{k,j}(z'), \end{equation} where $N(p,k)$ is the number of spherical harmonics $Y_{k,j}$ of degree $k$ in dimension $p$. For $p=2$, this decomposition takes a simpler form: \begin{equation} K(z^T z') = \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k \arccos(z^T z')). \label{eq:mercer_zonal_2d} \end{equation} As we see, large $k$'s correspond to high-frequency harmonics, while small $k$'s correspond to low-frequency ones. A recent result of \cite{chen2020deep} states that the NTK of a fully-connected net with inputs lying on $S^{p-1}$ has eigenvalues decaying as a power-law: $\lambda_k \sim k^{-p}$ as $k \to \infty$; see also \cite{geifman2020similarity} for an earlier result for shallow nets and \cite{bietti2019inductive} for an even earlier result for bias-free shallow nets. This means that learning the $k$-th harmonic of the input image requires $O(k^p)$ time. Hence for a finite number of training steps, high-frequency components remain unlearned, which results in blurry images similar to Figure~\ref{fig:image_regression} (c). A possible remedy would be to increase $\lambda_k$ for large $k$. But how can this be achieved? We illustrate the solution proposed in \cite{tancik2020fourier} in the following. Consider the case $d=1$ for simplicity.
In this case, the embedding map $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$ traverses a circle. Consider a modified embedding $\tilde z(x) = [\cos(2\pi b x), \sin(2\pi b x)]^T$ instead, where $b \in \mathbb{N}$ is a tunable parameter. The corresponding kernel is then given as \begin{multline} K(\tilde z^T \tilde z') = \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k \arccos(\tilde z^T \tilde z')) =\\= \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(2\pi k b (x-x')) = \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k b \arccos(z^T z')), \end{multline} which means that $\lambda_k$ becomes the $kb$-th eigenvalue in the original embedding space. If $\lambda_k$ decreased monotonically, this would mean that each $kb$-th eigenvalue increased from $\lambda_{kb}$ to $\lambda_k$, implying faster convergence to the $kb$-th principal component. The obvious downside of the method above is that in the new parameterization some of the eigenvalues become zero --- the corresponding components are therefore never learned. A simple solution is to enlarge the embedding by stacking several frequencies: $\tilde z(x) = [\cos(2\pi \sigma^{j/M} x), \sin(2\pi \sigma^{j/M} x)]^T$ for $j = 0, \ldots, M-1$, where $M \in \mathbb{N}$ and $\sigma \in \mathbb{R}_+$ are tunable parameters; this is referred to as ``positional encoding'' in \cite{tancik2020fourier}. Another solution proposed by \cite{tancik2020fourier} is random Gaussian projections: $\tilde z(x) = [\cos(2\pi B x), \sin(2\pi B x)]^T$, where $B \in \mathbb{R}^{M \times d}$, each element of $B$ is sampled independently from $\mathcal{N}(0,\sigma^2)$, and $M$ and $\sigma$ are tunable parameters. Both solutions perform on par with each other and much better than the original embedding: compare (c), (d), and (e) in Figure~\ref{fig:image_regression}. The same method suits other low-dimensional regression problems as well; \cite{tancik2020fourier} provide examples of 3D shape regression, MRI reconstruction, and inverse rendering.
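The Gaussian projection embedding is a few lines of code; the sketch below (with arbitrarily chosen $M$ and $\sigma$) also verifies the key property motivating it: the induced kernel $z(x)^T z(x') = \sum_j \cos(2\pi b_j^T (x - x'))$ depends only on $x - x'$, i.e. it is translation-invariant.

```python
import numpy as np

# Random Gaussian Fourier embedding: z(x) = [cos(2*pi*B x), sin(2*pi*B x)]^T,
# with B having iid N(0, sigma^2) entries. M and sigma are tunable.
rng = np.random.default_rng(0)

def fourier_embed(x, B):
    """x: (n, d) coordinates; B: (M, d) frequency matrix."""
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

d, M, sigma = 2, 64, 10.0
B = sigma * rng.normal(size=(M, d))
x = rng.uniform(size=(5, d))
z = fourier_embed(x, B)

# Translation invariance of the induced kernel: shifting all inputs by the
# same vector leaves pairwise inner products of embeddings unchanged,
# since cos(a)cos(b) + sin(a)sin(b) = cos(a - b).
k_orig = z[0] @ z[1]
shift = rng.uniform(size=d)
z_shifted = fourier_embed(x + shift, B)
k_shifted = z_shifted[0] @ z_shifted[1]
```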
See Figure~\ref{fig:low_dim_regression} for a comparison of outputs of a neural net with no encoding of inputs (top row) and the proposed Gaussian encoding (bottom row). One more notable example is Solid Isotropic Material Penalisation (SIMP), an instance of topology optimization. The task here is to optimize over material density at $N$ points, $y \in [0,1]^N$, to obtain a shape that can withstand forces applied at certain points. Given a density $y$ and a force vector $F$, the SIMP method constructs a stiffness matrix $K(y)$, and derives a displacement vector $U(y)$ by solving a linear system $K(y) U(y) = F$. The resulting construction is stable if the forces do not do any work, i.e. $U^T(y) F = 0$. The density is therefore optimized to minimize the work $C(y) = U^T(y) F \to \min_y$ under a volume constraint $\sum_{i=1}^N y_i = V$; $C$ is usually called compliance. We can cast the constrained optimization problem as an unconstrained one by introducing a pre-density $x \in \mathbb{R}^N$ and constructing the density as $y_i = \sigma(x_i + b(x))$, where $b$ is a function that ensures the volume constraint. Denoting this operation as $y = \Sigma(x)$, we get a new unconstrained optimization problem in the space of pre-densities: $C(\Sigma(x)) \to \min_x$. While the above problem is not a regression problem, we can still model $x$ as outputs of a neural net at the corresponding grid points. However, lack of translation invariance results in implausible patterns. \cite{dupuis2021dnn} used an embedding scheme similar to that of \cite{tancik2020fourier} to control this issue. On the other hand, in contrast to \cite{tancik2020fourier}, \cite{dupuis2021dnn} used $\sin(\omega x)$ as activation instead of ReLU, and used $\omega$ together with the bias initialization variance to control the sharpness of output shapes, instead of modifying the embedding. Both methods aim to ``widen'' the spectrum of the limit NTK.
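The shift $b(x)$ in the reparameterization $y_i = \sigma(x_i + b(x))$ can be found by any one-dimensional root-finder, since $\sum_i \sigma(x_i + b)$ is monotone in $b$. Below is a small sketch of our own using bisection; the function names, bracket, and iteration count are our choices, not taken from \cite{dupuis2021dnn}:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def constrained_density(x, V, lo=-50.0, hi=50.0, iters=100):
    """Return y = sigmoid(x + b) with b chosen so that y.sum() == V.

    sum_i sigmoid(x_i + b) increases monotonically in b, so bisection on b
    over a bracket [lo, hi] containing the root converges."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sigmoid(x + mid).sum() < V:
            lo = mid
        else:
            hi = mid
    return sigmoid(x + 0.5 * (lo + hi))

rng = np.random.default_rng(0)
x = rng.normal(size=100)          # pre-densities, e.g. outputs of a neural net
y = constrained_density(x, V=30.0)
```

The resulting $y$ lies in $(0,1)^N$ and satisfies the volume constraint to floating-point precision, so gradient-based optimization can proceed directly in the unconstrained pre-density space.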
\subsection{A theoretical tool} \label{sec:app_theory} Apart from providing a meaningful kernel for kernel methods, the NTK can be used as a concept useful for reasoning about neural nets of large width. Indeed, as stated in \cref{sec:convergence}, the NTK, while being random and evolving, converges to a constant deterministic limit as width goes to infinity. One can hope that for large enough width, the NTK stays close to its limit with high probability. Therefore, any result valid for kernel regression with the NTK taken as a kernel may also become valid, with high probability, for a wide enough net. \subsubsection{Global GD convergence} Let us start with the following result valid for kernel regression with a constant kernel: when the kernel is positive-definite, kernel regression learns the dataset. Indeed, recall the training dynamics of a kernel regression with kernel $\Theta$ trained to minimize square loss on a training dataset $(\vec x, \vec y)$: \begin{equation} \dot f_t(\vec x) = \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)). \end{equation} Assuming $\Theta(\vec x, \vec x) \geq \lambda$, \begin{equation} \frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right) = -(\vec y - f_t(\vec x))^T \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)) \leq -\lambda \| \vec y - f_t(\vec x) \|_2^2, \end{equation} which gives \begin{equation} \| \vec y - f_t(\vec x) \|_2^2 \leq e^{-2\lambda t} \| \vec y - f_0(\vec x) \|_2^2. \end{equation} Hence $\lambda > 0$ suffices to guarantee that $f_t(\vec x)$ converges to $\vec y$ as $t \to \infty$. Suppose now our kernel regression uses a random time-dependent kernel $\hat\Theta_t$ instead of $\Theta$: \begin{equation} \dot f_t(\vec x) = \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x)).
\end{equation} If we manage to guarantee that with probability $\geq 1-\delta$, $\forall t \geq 0$, $\hat\Theta_t(\vec x, \vec x) \geq \lambda$, then $\lambda > 0$ suffices to guarantee that $f_t(\vec x)$ converges to $\vec y$ as $t \to \infty$ with probability $\geq 1-\delta$. Indeed, \begin{equation} \frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right) = -(\vec y - f_t(\vec x))^T \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x)) \leq -\lambda \| \vec y - f_t(\vec x) \|_2^2 \quad \text{w.p. $\geq 1-\delta$}, \end{equation} which gives \begin{equation} \| \vec y - f_t(\vec x) \|_2^2 \leq e^{-2 \lambda t} \| \vec y - f_0(\vec x) \|_2^2 \quad \text{w.p. $\geq 1-\delta$}. \end{equation} One of the first results of this kind concerns ReLU nets with one hidden layer under NTK parameterization: \begin{equation} f(x; a_{1:n}, w_{1:n}) = \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i [w_i^T x]_+. \label{eq:two_layered_ReLU_net_ntk} \end{equation} We aim to minimize square loss on a dataset $(\vec x, \vec y)$ of size $m$ with gradient flow on the input weights: \begin{equation} \dot w_i(t) = \frac{1}{\sqrt{n}} \sum_{k=1}^m (y_k - f(x_k; a_{1:n}, w_{1:n}(t))) a_i [w_i^T(t) x_k > 0] x_k \quad \forall i \in [n]. \end{equation} We sample $w_i \sim \mathcal{N}(0,I_{n_0})$ and $a_i \sim U(\{-1,1\})$ $\forall i \in [n]$ independently. The purpose of sampling $a_i$ from this particular distribution is mere simplification: in this case $a_i^2 = 1$, which simplifies the NTK Gram matrix a little bit: \begin{equation} \hat\Theta_t(x_k, x_l) = \frac{1}{n} \sum_{i=1}^n [w_i^T(t) x_k > 0] [w_i^T(t) x_l > 0] x_k^T x_l. \end{equation} However, it is possible to apply the same technique to any distribution of the output layer not depending on $n$. Note that the Gram matrix depends merely on the activation patterns of the hidden layer computed on the dataset.
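This Gram matrix is cheap to form explicitly from the activation patterns. The sketch below (our own; sizes arbitrary) does so and compares it with its infinite-width expectation, computed for unit-norm inputs via the standard Gaussian orthant probability $\mathcal{P}(w^T x_k > 0, w^T x_l > 0) = (\pi - \angle(x_k, x_l))/(2\pi)$:

```python
import numpy as np

# Empirical NTK Gram of the two-layer ReLU net (input weights only):
# H_kl = (1/n) sum_i [w_i^T x_k > 0][w_i^T x_l > 0] x_k^T x_l.
rng = np.random.default_rng(0)
n0, n, m = 3, 10000, 8
W = rng.normal(size=(n, n0))                 # rows are w_i ~ N(0, I)
X = rng.normal(size=(n0, m))
X /= np.linalg.norm(X, axis=0)               # put inputs on the unit sphere

A = (W @ X > 0).astype(float)                # n x m activation patterns
H = (A.T @ A) * (X.T @ X) / n                # empirical Gram matrix

# Infinite-width expectation via the Gaussian orthant probability:
# H^inf_kl = x_k^T x_l * (pi - arccos(x_k^T x_l)) / (2*pi).
G = np.clip(X.T @ X, -1.0, 1.0)
H_inf = G * (np.pi - np.arccos(G)) / (2.0 * np.pi)
```

For $n = 10^4$ hidden units the entrywise deviation is already of order $n^{-1/2} \approx 10^{-2}$, illustrating the concentration of $H(0)$ around $H^\infty$ used in the proofs discussed next.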
The limit NTK is therefore given as: \begin{equation} \Theta(x_k, x_l) = \mathbb{E}\,_{w \sim \mathcal{N}(0, I_{n_0})} [w^T x_k > 0] [w^T x_l > 0] x_k^T x_l. \end{equation} Note that in our two-layered case, $\Theta(x,x') = \lim_{n \to \infty} \hat\Theta_t(x,x') = \mathbb{E}\, \hat\Theta_0(x,x')$. In the sequel, we denote the Gram matrices $\hat\Theta_t(\vec x, \vec x)$ as $H(t)$ and $\Theta(\vec x, \vec x)$ as $H^\infty$. Let $\lambda_0$ be the least eigenvalue of $H^\infty$. \begin{theorem}[\cite{du2018gradient}] Consider the setting discussed above and further assume $\|x_k\|_2 = 1$ and $|y_k| \leq 1$ $\forall k \in [m]$. Then $\exists C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking \begin{equation} n > \max\left( C \frac{m^6}{\lambda_0^4 \delta^3}, \; C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right) \right) \end{equation} guarantees $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\delta$. \label{thm:convergence_2layer} \end{theorem} This result implies $\| \vec y - f_t(\vec x) \|_2^2 \leq e^{-\lambda_0 t} \| \vec y - f_0(\vec x) \|_2^2$ w.p. $\geq 1-\delta$, as discussed above. For the full proof, see the original paper \cite{du2018gradient} or the lecture notes \cite{golikov2020notes}. We are going to discuss, very briefly, only the crucial parts of the proof in the sequel. The proof is based on four lemmas. The first lemma states that as long as $n = \Omega(m^2 \lambda_0^{-2} \log(m/\delta))$, where $\Omega$ hides a certain constant, $\|H(0) - H^\infty\|_2 \leq \lambda_0/4$, where $\|\cdot\|_2$ denotes the spectral norm, w.p. $\geq 1-\delta$; this implies $H(0) \geq 3\lambda_0/4$ with the same probability. As already noted above, $\mathbb{E}\, H(0) = H^\infty$. This allows one to apply a concentration inequality to each element of $H(0)$. A union bound then gives a bound that holds uniformly for all elements of $H(0)$. This implies a bound on $\|H(0) - H^\infty\|_F$, hence on the spectral norm as well.
The second lemma states that as long as $\forall i \in [n]$ $\|w_i - w_i(0)\|_2 \leq R$ for a certain $R = R(\delta,\lambda_0,m)$, $\| H - H(0) \|_2 \leq \lambda_0/4$ w.p. $\geq 1-\delta$. In other words, as long as the weights are close to initialization, the corresponding Gram matrix is close to the initial one too. The idea is that as long as the weights are not far from their initialization, with certain probability, not many of the hidden neurons can alter their activation patterns on the train dataset. Since, as already noted above, our Gram matrices depend only on activation patterns on the train dataset, this implies a tail bound on $|H_{kl} - H_{kl}(0)|$ $\forall k,l \in [m]$, which gives a tail bound on $\|H - H(0)\|_2$ with the same technique as used in the first lemma. The third lemma states that as long as $H(s) \geq \lambda_0/2$ $\forall s \in [0,t]$ (we haven't proven it yet), the weights indeed stay close to their initialization: $\forall i \in [n]$ $\|w_i(t) - w_i(0)\|_2 \leq R'$ for a certain $R' = R'(\lambda_0,m,n)$. This can be proven by a very simple estimate: \begin{multline} \left\|\frac{dw_i(s)}{ds}\right\|_2 = \left\|\frac{1}{\sqrt{n}} \sum_{k=1}^m (y_k - f_s(x_k)) a_i [w_i^T(s) x_k > 0] x_k\right\|_2 \leq \\\leq \frac{1}{\sqrt{n}} \sum_{k=1}^m |y_k - f_s(x_k)| \leq \sqrt{\frac{m}{n}} \|\vec y - f_s(\vec x)\|_2 \leq \sqrt{\frac{m}{n}} e^{-\lambda_0 s / 2} \|\vec y - f_0(\vec x)\|_2. \end{multline} This gives $\forall i \in [n]$: \begin{multline} \| w_i(t) - w_i(0) \|_2 = \left\|\int_0^t \frac{dw_i(s)}{ds} \, ds\right\|_2 \leq \int_0^t \left\|\frac{dw_i(s)}{ds}\right\|_2 \, ds \leq \\\leq \frac{2 \sqrt{m}}{\lambda_0 \sqrt{n}} \left(1 - e^{-\lambda_0 t / 2}\right) \|\vec y - f_0(\vec x)\|_2 \leq \frac{2 \sqrt{m}}{\lambda_0 \sqrt{n}} \|\vec y - f_0(\vec x)\|_2. \end{multline} Finally, the fourth lemma states that as long as $R' < R$, $\| H(t) - H(0) \|_2 \leq \lambda_0/4$ $\forall t \geq 0$ w.p.
$\geq 1-\Omega(\delta)$, where $\Omega$ hides a certain constant. Combined with the first lemma, this implies $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\Omega(\delta)$. The condition $R'(\lambda_0,m,n) < R(\delta,\lambda_0,m)$ gives the second lower bound on $n$ (the first one is given by the first lemma). By rescaling $\delta$, we get the desired result. The fourth lemma is proven as follows. Let $t_0$ be the first moment of time when the second lemma becomes no longer applicable, i.e. $t_0 = \inf\left\{t \geq 0: \; \max_{i \in [n]} \| w_i(t) - w_i(0) \|_2 > R\right\}$. Assume it is finite. Since the weights are continuous functions of time, $\max_{i \in [n]} \| w_i(t_0) - w_i(0) \|_2 = R$. Hence the second lemma holds for $w_{1:n} = w_{1:n}(t)$ $\forall t \in [0,t_0]$ and $\| H(t) - H(0) \|_2 \leq \lambda_0/4$ w.p. $\geq 1-\delta$ $\forall t \in [0,t_0]$, therefore $H(t) \geq \lambda_0/2$ w.p. $\geq 1-\Omega(\delta)$ $\forall t \in [0,t_0]$. But then the third lemma holds as well: $\forall i \in [n]$ $\|w_i(t_0) - w_i(0)\|_2 \leq R' < R$; contradiction. Hence $\forall t \geq 0$ $\max_{i \in [n]} \| w_i(t) - w_i(0) \|_2 \leq R$ and the second lemma gives the desired statement. \cref{thm:convergence_2layer} requires the number of hidden units $n$ to grow as $m^6$ with the size of the train dataset and as $\delta^{-3}$ with the failure probability. This bound is way too loose for practical purposes: indeed, even for a very small dataset with $m = 100$, the $m^6$ factor alone is of order $10^{12}$. If we want the bound to be valid with at least $90\%$ probability, we pay three orders of magnitude more. Note that modern architectures designed to be trained on large datasets like ImageNet ($m=10^6$) have width barely exceeding $10^4$.
We state one of the existing improvements of \cref{thm:convergence_2layer} below: \begin{theorem}[\cite{song2019quadratic}] Under the same setting as \cref{thm:convergence_2layer}, $\exists C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking \begin{equation} n > \max\left( C \frac{m^4}{\lambda_0^4} \log^3\left(\frac{m}{\delta}\right), \; C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right) \right) \end{equation} guarantees $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\delta$. \label{thm:convergence_2layer_quartic} \end{theorem} This result decreases the exponent of $m$ from $6$ to $4$ and makes the $\delta$-dependence logarithmic. The proof follows the same path as above. Note however that the previous result aimed for elementwise tail bounds on $H(0) - H^\infty$ or $H - H(0)$, which lead to tail bounds on $\|H(0) - H^\infty\|_2$ and $\|H - H(0)\|_2$ by a union bound, which gives an $m^2$ factor. One of the improvements proposed by \cite{song2019quadratic} is to replace these elementwise bounds with matrix Chernoff bounds --- they do not incur this $m^2$ factor, thus leading to better bounds. The other improvement is to replace Markov inequalities that result in $1/\delta$ factors with a Bernstein inequality that results only in $\log(1/\delta)$ ones. The $m^4$ width bound is still far from being realistically tight. We are not aware of any further improvements of the results discussed above that apply the idea of NTK stability. Global gradient descent convergence can, however, be proved by first proving guarantees on convergence to local minima and then proving that all minima are global for wide enough nets. See \cite{lee2016gradient,panageas2017gradient,mertikopoulos2020almost} for the first line of works and \cite{yu1995local,nguyen2017loss,nguyen2019connected,nguyen2021note} for the second. None of these works uses the idea of NTK stability, nor do they rely on NTK parameterization.
\cite{nguyen2019connected} proves that $n = m$ is enough for leaky ReLU nets to have only global ``local valleys'' (a generalization of global minima to certain losses such as cross-entropy), and \cite{nguyen2021note} demonstrates that this bound cannot be improved for two-layered nets and general data. \cite{du2019gradient} extends \cref{thm:convergence_2layer} to deep nets. Their proof idea is the same: first show that $H(0)$ is close to $H^\infty$, then show that $H(t)$ stays close to $H(0)$. However, for the multilayer case, $H(0)$ cannot be proven to be close to $H^\infty$ just by concentration of measure. When layers are many, perturbations caused by finite width result in deviations exponential with respect to the number of layers $L$. For this reason, their bound grows exponentially with $L$. See also \cite{allen2019convergence} for a similar result with a bound depending on $m$ only polynomially, proved using a different technique. \subsubsection{Generalization guarantees} Stability of the NTK has another interesting consequence. Suppose the empirical NTK is constant, i.e. $\hat\Theta_t = \hat\Theta_0$. This is equivalent to saying that the corresponding model is linearized: \begin{equation} f(x; \theta) = f(x; \theta_0) + \nabla_\theta^T f(x; \theta_0) (\theta - \theta_0). \end{equation} For brevity, denote $\vec u_t = f_t(\vec x)$ and $Z_t^{ik} = \partial_{\theta_i} f(x_k; \theta_t)$. Hence $Z_t \in \mathbb{R}^{N \times m}$, where $N$ is the total number of parameters, and $\vec u_t = \vec u_0 + Z_0^T (\theta_t - \theta_0)$. Note that $H_t = Z_t^T Z_t$. Recall the train set predictions for a constant kernel: \begin{equation} \vec u_t = \vec y + e^{-H_0 t} (\vec u_0 - \vec y). \end{equation} In our linearized dynamics, the weights evolve as follows: \begin{equation} \dot\theta_t = Z_0 (\vec y - \vec u_t) = Z_0 e^{-H_0 t} (\vec y - \vec u_0).
\end{equation} Straightforward integration gives: \begin{equation} \theta_t = \theta_0 + Z_0 H_0^{-1} \left(I - e^{-H_0 t}\right) (\vec y - \vec u_0). \end{equation} Recalling $H_0 = Z_0^T Z_0$, at the end of training ($t \to \infty$) we get \begin{equation} \|\theta_\infty - \theta_0\|_2^2 = (\theta_\infty - \theta_0)^T (\theta_\infty - \theta_0) = (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0). \end{equation} Define $\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$ as the set of models of the form (\ref{eq:two_layered_ReLU_net_ntk}) with output weights $a_{1:n}$ and input weights $w_{1:n}$ such that $\| W - W(0) \|_F \leq B$ for given $w_{1:n}(0)$. The above considerations state that a trained model always lies in $\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$ with $B = (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)$. Hence our training procedure outputs models in a certain restricted set rather than arbitrary models of the form (\ref{eq:two_layered_ReLU_net_ntk}). Upper-bounding the Rademacher complexity of this model set will give us a generalization bound, as we shall see below.
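The displacement identity above is a purely linear-algebraic fact and can be verified directly for any fixed Jacobian (a small sketch of our own, with arbitrary matrix sizes):

```python
import numpy as np

# For a linearized model, theta_inf - theta_0 = Z_0 H_0^{-1} (y - u_0) with
# H_0 = Z_0^T Z_0, hence ||theta_inf - theta_0||^2 = (y-u_0)^T H_0^{-1} (y-u_0)
# and the trained model interpolates the training data.
rng = np.random.default_rng(0)
N, m = 50, 6
Z0 = rng.normal(size=(N, m))
theta0 = rng.normal(size=N)
y = rng.normal(size=m)

u0 = Z0.T @ theta0
H0 = Z0.T @ Z0

delta = Z0 @ np.linalg.solve(H0, y - u0)     # theta_inf - theta_0
lhs = delta @ delta                          # squared weight displacement
rhs = (y - u0) @ np.linalg.solve(H0, y - u0)
u_inf = Z0.T @ (theta0 + delta)              # final train predictions
```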
Let us upper-bound the Rademacher complexity conditioned on a dataset $(\vec x, \vec y)$ of size $m$: \begin{multline} \Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} = \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{f \in \mathcal{F}_B^{w_{1:n}(0), a_{1:n}}} \left(\frac{1}{m} \sum_{k=1}^m \sigma_k u_k\right) = \\= \frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left(\sum_{k=1}^m \sigma_k \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i [w_i^T(0) x_k \geq 0] w_i^{T} x_k\right) = \\= \frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left( \vec\sigma^T Z^{T}(0) \theta \right) = \\= \frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left( \vec\sigma^T Z^{T}(0) (\theta - \theta_0) \right) = \\= \frac{B}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \| Z(0) \vec\sigma \|_2 \leq \frac{B}{m} \sqrt{\mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \| Z(0) \vec\sigma \|_2^2} = \frac{B}{m} \| Z(0) \|_F. \end{multline} Note that \begin{equation} \| Z(0) \|_F^2 = \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^m [w_i^T(0) x_k \geq 0]. \end{equation} It is an average of $n$ i.i.d.\ random variables, each bounded in $[0,m]$, which allows for Hoeffding's inequality: \begin{equation} \mathcal{P}\left(\| Z(0) \|_F^2 - \frac{m}{2} \geq \epsilon\right) \leq e^{-2n \epsilon^2 / m^2}. \end{equation} This gives, w.p. $\geq 1-\delta$ over initialization, \begin{equation} \| Z(0) \|_F^2 \leq \frac{m}{2} + \sqrt{\frac{m^2}{2n} \log\left(\frac{1}{\delta}\right)}. \end{equation} Finally, we obtain that w.p. $\geq 1-\delta$ over initialization, \begin{equation} \Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} \leq \frac{B}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}}. \end{equation} Consider the zero-one risk: $r(y,z) = [y z < 0]$; we have $R(f) = \mathbb{E}\,_{x,y \sim \mathcal{D}} r(y,f(x))$ and $\hat R(f) = \mathbb{E}\,_{x,y \in S_m} r(y,f(x))$, correspondingly.
From generalization theory, we know that for any $B$ and for any initialization $w_{1:n}(0), a_{1:n}$, w.p. $\geq 1-\tilde\delta$ over the training dataset $(\vec x, \vec y)$, $\forall f \in \mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$, \begin{equation} R(f) \leq \hat R_m(f) + \mathbb{E}\,_{(\vec x, \vec y)} \Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta}}. \end{equation} We want to take $B = (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)$, but it depends on the dataset $(\vec x, \vec y)$. Take a sequence $\{B_j\}_{j=1}^\infty$ monotonically increasing to infinity and a sequence $\{\tilde\delta_j\}_{j=1}^\infty$ of deltas in $(0,1)$ that sum to $\tilde\delta$. This allows us to apply a union bound: w.p. $\geq 1-\tilde\delta$ over the training dataset, for any initialization $w_{1:n}(0), a_{1:n}$, $\forall j \in \mathbb{N}$, $\forall f \in \mathcal{F}_{B_j}^{w_{1:n}(0), a_{1:n}}$, \begin{equation} R(f) \leq \hat R_m(f) + \mathbb{E}\,_{(\vec x, \vec y)} \Rad{\mathcal{F}_{B_j}^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta_j}}. \end{equation} We are free to choose the minimal $j$ such that $B_j \geq (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)$; denote it by $\hat j$. Let for definiteness $B_j = j$. Then $B_{\hat j} \leq 1 + (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)$. Putting it all together, we have w.p. $\geq 1-\tilde\delta$ over the training dataset and w.p. $\geq 1-\delta$ over initialization, \begin{multline} R(f(\theta_\infty)) \leq \hat R_m(f(\theta_\infty)) + \\+ \frac{1 + (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta_{\hat j}}}.
\label{eq:generalization_bound_for_fixed_acts_model} \end{multline} Recall that the bound above was obtained under the assumption of a constant NTK. In order to relax this assumption, one has to show that, possibly for large enough width, $H_t^{-1}$ stays close to $H_0^{-1}$. Note that when proving global GD convergence we had to prove that $H_t$ stays close to $H_0$, which is different. The required closeness result is proven in \cite{arora2019fine}; it leads to the following theorem: \begin{theorem}[\cite{arora2019fine}] Under the same setting as \cref{thm:convergence_2layer}, $\exists p, C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking \begin{equation} n > \max\left( C \frac{m^7}{\lambda_0^4 \delta^p}, \; C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right) \right) \end{equation} guarantees w.p. $\geq 1-\delta$ over the training dataset of size $m$ and w.p. $\geq 1-\delta$ over initialization, \begin{multline} R(f(\theta_\infty)) \leq \hat R_m(f(\theta_\infty)) + \\+ \frac{1 + (\vec y - \vec u_0)^T \left(H^{\infty}\right)^{-1} (\vec y - \vec u_0)}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}} + \sqrt{\frac{1}{2m} \log \frac{1}{\delta}}. \end{multline} \label{thm:generalization_2layer} \end{theorem} \section{Standard parameterization and kernel evolution} \label{sec:standard_param} \begin{figure} \caption{The figure is borrowed from \cite{fort2020deep}.} \label{fig:kernel_velocity} \end{figure} As was noted in \cref{sec:convergence}, the NTK diverges under standard parameterization. Recall the example of a two-layered net: \begin{equation} f(x; a_{1:n}, w_{1:n}) = \sum_{i=1}^n a_i \phi(w_i x), \quad a_{1:n} \sim \mathcal{N}(0, n^{-1} I), \quad w_{1:n} \sim \mathcal{N}(0, I); \end{equation} \begin{equation} \hat\Theta_t(x,x') = \sum_{i=1}^n \left(\phi(w_i(t) x) \phi(w_i(t) x') + a_i^2(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x'\right).
\end{equation} At $t=0$, since the $w_i$ are independent and of the order of $O(1)$, the sum diverges proportionally to $n$. Since under square loss $\dot f_t(x) = \hat\Theta_t(x,\vec x) (\vec y - f_t(\vec x))$, the model prediction at any point $x$ receives an $O(n)$ increment at the very beginning of training. In other words, model predictions diverge with width, making the model useless for regression. However, if the goal is classification, the magnitude of predictions does not matter; what matters is their signs for binary classification, or the indices of the largest logits when classes are multiple. Therefore in this case, an infinite-width limit under standard parameterization may still make sense despite the divergent NTK, see \cite{golikov2020dynamically}. In order to deal with the divergence, consider a normalized empirical NTK $\tilde\Theta_t(x,x') = \hat\Theta_t(x,x') / n$; its infinite-width limit at initialization is $\mathbb{E}\,_{w \sim \mathcal{N}(0,1)} \phi(w x) \phi(w x')$; we shall refer to it as the normalized NTK and denote it as $\tilde\Theta(x,x')$. In contrast to the NTK under NTK parameterization, the normalized NTK under standard parameterization evolves with time \cite{golikov2020dynamically}: \begin{multline} \frac{d\tilde\Theta_t(x,x')}{dt} = \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(t) x) \phi'(w_i(t) x') x' + \phi'(w_i(t) x) \phi(w_i(t) x') x\right) \frac{dw_i(t)}{dt} +\\+ \frac{1}{n} \sum_{i=1}^n a_i^2(t) x x' \left(\phi'(w_i(t) x) \phi''(w_i(t) x') x' + \phi''(w_i(t) x) \phi'(w_i(t) x') x\right) \frac{dw_i(t)}{dt} +\\+ \frac{1}{n} \sum_{i=1}^n 2 a_i(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x' \frac{da_i(t)}{dt}. \end{multline} Recall the gradient flow dynamics under standard parameterization with square loss: \begin{equation} \frac{da_k(t)}{dt} = \sum_{j=1}^m (y_j - f_t(x_j)) \phi(w_k(t) x_j), \quad \frac{dw_k(t)}{dt} = \sum_{j=1}^m (y_j - f_t(x_j)) a_k(t) \phi'(w_k(t) x_j) x_j. \end{equation} At $t=0$, we have $\dot a_k = O(1)$, while $\dot w_k = O(n^{-1/2})$.
Since $a_k(0) = O(n^{-1/2})$ and $w_k(0) = O(1)$, this means that for any $t > 0$ independent of $n$, $a_k(t) = O(1)$, $\dot a_k(t) = O(1)$, $w_k(t) = O(1)$, and $\dot w_k(t) = O(1)$. A naive estimate of the sums then gives $\frac{d\tilde\Theta_t(x,x')}{dt} = O(1) + O(1) + O(1) = O(1)$ for any $t > 0$ independent of $n$. Therefore the normalized kernel keeps evolving with time even in the limit of infinite width. This may be the reason for the superior performance of neural networks over conventional kernel methods, including NTK regression. A kernel measures similarity between points in a feature space. While for the NTK this feature space is fixed, a neural net varies its corresponding kernel feature space, hopefully making it better suited for the task at hand; moreover, under standard parameterization, this feature-space evolution does not vanish for large width. The way an empirical NTK varies with time can be measured with kernel velocity, defined as the kernel distance between the kernels corresponding to two consecutive optimization steps. The kernel distance is in turn defined as one minus the cosine similarity between the Gram matrices $H$ and $H'$ of the corresponding kernels: \begin{equation} \rho(H, H') = 1 - \frac{\tr(H H^{\prime,T})}{\sqrt{\tr(H H^T) \tr(H' H^{\prime,T})}}. \end{equation} After measuring kernel velocity for a realistic net under standard parameterization, \cite{fort2020deep} distinguished two phases of training: a phase of rapid kernel evolution, and a phase of almost constant NTK, see Figure~\ref{fig:kernel_velocity}. The first phase is called \emph{chaotic}, while the second one is coined \emph{ordered}. Curiously enough, these two phases can be distinguished not only by kernel velocity. Suppose the network is trained up to time $T$, called the \emph{spawn epoch}. Two independent copies of the same network are then trained further. In other words, we train two networks which remain the same up to time $T$ and may diverge afterwards due to the randomness of the training procedure.
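For concreteness, the kernel distance $\rho$ defined above takes only a few lines of NumPy; the Gram matrices below are small illustrative examples, not kernels of an actual network:

```python
import numpy as np

def kernel_distance(H1, H2):
    """Kernel distance: one minus the cosine similarity of two Gram matrices."""
    num = np.trace(H1 @ H2.T)
    den = np.sqrt(np.trace(H1 @ H1.T) * np.trace(H2 @ H2.T))
    return 1.0 - num / den

# Illustrative Gram matrices of two kernels on the same three points.
H = np.array([[2.0, 1.0, 0.5],
              [1.0, 2.0, 1.0],
              [0.5, 1.0, 2.0]])
print(kernel_distance(H, H))          # identical kernels: distance 0
print(kernel_distance(H, np.eye(3)))  # different kernels: positive distance
```

Note that the distance is scale-invariant, $\rho(H, cH') = \rho(H, H')$ for $c > 0$, so kernel velocity measures change in the kernel's "shape" rather than in its overall magnitude.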
We then measure the \emph{test error barrier} between these two networks, i.e. the height of the error "hill" on a straight segment between their corresponding weights. A small error barrier would mean that training of the two networks ended up in the same valley of test error, which likely means that they are similar. As one can see in Figure~\ref{fig:kernel_velocity}, the test error barrier drops dramatically as the spawn epoch grows. Also, the two quantities under discussion, kernel velocity and error barrier, appear to be strongly correlated, see again Figure~\ref{fig:kernel_velocity}. There are also other quantities that experience a sharp transition at the boundary between the two phases: kernel distance between child networks as a function of spawn epoch, ReLU activation Hamming distance, and Hamming distance between responses on the test set; see \cite{fort2020deep} for details. \section{Beyond NTK} \label{sec:beyond} While NTK kernel regression has a natural interpretation as training an infinitely wide neural network under a certain parameterization with gradient flow (see \cref{sec:convergence}), the NTK is not the only possible kernel that can be constructed using a neural net. \subsection{NNGP kernel} One of the other notable "neural kernels" is the NNGP kernel \cite{lee2018deep}, defined as $K(x,x') = \mathbb{E}\,_\theta f(x; \theta) f(x'; \theta)$, where $f(\cdot; \theta)$ is a parametric model with weights $\theta$ and scalar output. Suppose $f$ is a neural network with the output layer of the form $f(x) = v^T h(x)$, where $h(x) \in \mathbb{R}^n$ is its last-layer representation and $v \sim \mathcal{N}(0, I_n / n)$ is independent of $h$. Then $K(x,x') = \frac{1}{n} \mathbb{E}\, h^T(x) h(x')$. As we have seen in \cref{sec:limit} on the example of fully-connected and convolutional nets, the last-layer representations tend to iid Gaussians as the width goes to infinity.
In other words, for all $i \in [n]$ the $h^i$ tend to independent, identically distributed Gaussian processes with covariance $\mathbb{E}\, h^i(x) h^i(x') = \frac{1}{n} \mathbb{E}\, h^T(x) h(x')$, which is exactly $K(x,x')$. This motivates the term "NNGP" --- \emph{Neural Network Gaussian Process}. Note that we have already seen the object $\mathbb{E}\, h^i(x) h^i(x')$ in \cref{sec:limit}: when $h = h_l$ --- the $l$-th layer hidden representation of a fully-connected network --- the above object is the hidden-layer covariance $q_l(x,x')$. Therefore the NNGP of this fully-connected network is nothing else but $q_L(x,x')$. This can be generalized to the whole class of architectures expressible by tensor programs: see the Master theorem of \cite{yang2019tensor_i} mentioned in \cref{sec:convergence}. That is, any neuron of any hidden representation of a neural network expressible by a tensor program tends to a Gaussian process. Learning a Gaussian process with zero mean and covariance $K(\cdot,\cdot)$ on a training dataset $(\vec x, \vec y)$ means computing its Bayesian posterior, which is again a Gaussian process with mean $\mu(\cdot \,|\, (\vec x, \vec y))$ and covariance $K(\cdot,\cdot \,|\, (\vec x, \vec y))$ given below: \begin{equation} \mu(x \,|\, (\vec x, \vec y)) = K(x,\vec x) K^{-1}(\vec x, \vec x) \vec y; \end{equation} \begin{equation} K(x,x' \,|\, (\vec x, \vec y)) = K(x,x') - K(x,\vec x) K^{-1}(\vec x, \vec x) K(\vec x,x'). \end{equation} Interestingly, training only the last layer of an infinitely wide network with NNGP $K(\cdot,\cdot)$ results in exactly the same Gaussian process. When only the last layer is trained, the NNGP coincides with the NTK. Indeed, an NTK-parameterized NN of width $n$ with readout weights $v$ can be expressed as $f(x) = \frac{1}{\sqrt{n}} v^T h(x)$ with $v \sim \mathcal{N}(0, I_n)$.
The empirical NTK is therefore given by $\hat\Theta_0(x,x') = \frac{1}{n} \nabla^T_v (v^T h(x)) \nabla_v (v^T h(x')) = \frac{1}{n} h^T(x) h(x')$, which converges to $\mathbb{E}\, h^i(x) h^i(x') = K(x,x')$ as $n \to \infty$; note that $h(\cdot)$ also depends on $n$. Recall the model prediction dynamics under a constant NTK, which is $K$ in our case: \begin{equation} f_t(x) = f_0(x) - K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) (f_0(\vec x) - \vec y). \end{equation} Since $f_0(\cdot)$ is a Gaussian process as discussed before and $K(\vec x,\vec x)$ is deterministic, $f_t(\cdot)$ is a Gaussian process for any $t \geq 0$. Its mean $\mu_t(\cdot)$ and covariance $K_t(\cdot,\cdot)$ are: \begin{equation} \mu_t^{NNGP}(x) = K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) \vec y; \end{equation} \begin{multline} K_t^{NNGP}(x,x') = K(x,x') +\\+ K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K^{-1}(\vec x,\vec x) K(\vec x,x') -\\- \left[K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,x') + K(x',\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,x)\right]. \end{multline} It is easy to see that $\mu_t^{NNGP}(x) \to \mu(x \,|\, (\vec x, \vec y))$ and $K_t^{NNGP}(x,x') \to K(x,x' \,|\, (\vec x, \vec y))$ as $t \to \infty$ for all $x,x'$. If not only the last layer is trained, the NNGP does not generally correspond to the NTK. The corresponding training dynamics is given by \begin{equation} f_t(x) = f_0(x) - \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) \left(I - e^{-\Theta(\vec x,\vec x) t}\right) (f_0(\vec x) - \vec y). \end{equation} While $f_t(\cdot)$ is again a Gaussian process for any $t \geq 0$, its mean and covariance are different.
In particular, as $t \to \infty$, they tend to \begin{equation} \mu_\infty^{NTK}(x) = \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) \vec y; \end{equation} \begin{multline} K_\infty^{NTK}(x,x') = K(x,x') + \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,\vec x) \Theta^{-1}(\vec x,\vec x) \Theta(\vec x,x') -\\- \left[\Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,x') + \Theta(x',\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,x)\right]. \end{multline} As was shown in \cite{lee2019wide}, there does not exist an initial covariance matrix (a "prior") such that this mean and covariance correspond to a Bayesian posterior given the training data. The "empirical" counterpart of the NNGP is $\hat K(x,x') = \frac{1}{n} h^T(x) h(x')$. Compared to empirical NTKs, empirical NNGPs are easier to compute, as they do not require a backward pass. The corresponding memory footprint is also lower for empirical NNGPs, as they do not require computing Jacobian matrices that scale as $O(N)$, where $N$ is the number of weights. This makes NNGPs more suitable for large models. As an example, \cite{park2020towards} used the performance of empirical NNGPs as a proxy measure for neural architecture search. They argue that, first, empirical NTKs are too costly to compute, and second, they provide a worse learning signal for their task. The NNGP of a generic neural network can be computed in a recursive manner, as was demonstrated in \cref{sec:limit} on the example of fully-connected and convolutional nets: $q_{l+1}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi(z) \phi(z')$, where $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x') \\ q_l(x',x) & q_l(x',x') \end{pmatrix}$; the Master theorem of \cite{yang2019tensor_i} gives similar formulas for a generic neural net. In the above example, there is an operation that maps a kernel $q_l(x,x')$ to a subsequent kernel $q_{l+1}(x,x')$. \cite{shankar2020neural} presents an algebra of operations on kernels.
While this algebra consists of operations of only three types, it is enough to express the NNGP of a fully-connected or a convolutional network with any elementwise nonlinearities. \subsection{Label-aware NTK} One of the major problems of kernel methods is \emph{label agnosticism}. Recall that a kernel evaluated at a pair of points is a scalar product of their mappings to some feature space: $K(x,x') = \langle \Phi(x), \Phi(x') \rangle$. Therefore a kernel measures how similar the two points are, and a kernel method uses this information to derive responses on unseen data: $f(x) = K(x,\vec x) \vec\alpha$. Intuitively, a kernel $K$ should result in a well-generalizing model if $K(x,x')$ is positive when $y=y'$ and negative otherwise. Therefore the "perfect" kernel would be $K^*(x,x') = y y'$; the obvious problem is that it cannot be computed on unseen data. A kernel that can be computed on unseen data cannot depend on labels. Therefore, if the data admits several possible labelings, then for a pair of data points $(x,x')$ there could be a labeling with $y=y'$ and a labeling with $y\neq y'$. At the same time, $K(x,x')$ stays the same in both cases; therefore, the corresponding kernel method cannot generalize well on both of the labelings. As an example of several possible labelings on a single dataset, consider a dataset of pictures with two objects in each frame, and let the two objects belong to two disjoint sets of classes. Then one of the labelings may consider only the objects of the first set of classes, while the other may consider the objects of the second set. \cite{chen2020label} propose two ways of making a kernel \emph{label-aware}. The first is mixing the kernel at hand with the perfect kernel $K^*(x,x') = y y'$: $K^{HR}(x,x') = (1-\lambda) K(x,x') + \lambda K^*(x,x')$ for $\lambda \in [0,1]$. If the perfect kernel were available, the best choice would be to take $\lambda=1$.
Since it is not available, we have to approximate it somehow, which makes the optimal $\lambda$ less than one. In order to approximate $K^*(x,x')$, we need a model that maps $(x,x')$ to $y y'$. Since the training dataset for this model consists of $O(m^2)$ samples, and since the model itself has to be evaluated on $O(m)$ samples for each test point $x$, the model has to be relatively simple. \cite{chen2020label} consider models of the form $Z(x,x') = \vec y^T M(x,x',\vec x) \vec y$, where $M \in \mathbb{R}^{m \times m}$. One of the possible choices of $M$ is $M(x,x',\vec x)_{ij} = \psi(K(x,x'),K(x_i,x_j))$, where $\psi(z_1,z_2)$ measures similarity. As one can see, this choice of $Z$ takes a linear combination of the $y_i y_j$ with weights being similarities of $K(x,x')$ and $K(x_i,x_j)$. Intuitively, this reads as "$y y'$ and $y_i y_j$ are similar if $K(x,x')$ and $K(x_i,x_j)$ are close". While the above proposal can be applied to any kernel $K$, the second label-aware kernel of \cite{chen2020label} is a specific modification of the NTK.
Let us recall the construction of $\Theta^{NTH}$, which results from integrating the learning dynamics up to the order $n^{-1}$, taking the limit of $t \to \infty$, and taking expectation (see \cref{sec:finite_width} and specifically Eq.~(\ref{eq:lantk_nth})): \begin{multline} \Theta^{NTH}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2) =\\= \Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] - n^{-1} \mathbb{E}\,\left[O_{3,0}^{(1)}(x_1, x_2, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right] +\\+ n^{-1} \vec y^T \Theta^{-1}(\vec x,\vec x) \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \Theta^{-1}(\vec x,\vec x) \vec y +\\+ n^{-1} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \Theta^{-1}(\vec x,\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right] -\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y -\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \vec v_k \vec v_k^T O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \vec v_l \vec v_l^T f_0^{(0)}(\vec x)\right]. \end{multline} Since $\hat\Theta_0(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} O_{2,0}^{(1)}(x_1,x_2) + O(n^{-2})$, we have $\Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + O(n^{-2})$ and $\Theta(x_1,x_2) = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + O(n^{-1})$. For the same reason, $\mathbb{E}\,\left[O_{4,0}(x_1, x_2, x_3, x_4)\right] = n^{-1} \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, x_3, x_4)\right] + O(n^{-2})$. Suppose $f_0^{(0)}(\vec x) = 0$.
Given this approximation, up to order $O(n^{-2})$, \begin{multline} \Theta^{NTH}(x_1,x_2) \approx \mathbb{E}\,\hat\Theta_0(x_1,x_2) + \vec y^T \left(\mathbb{E}\,\hat\Theta_0(\vec x,\vec x)\right)^{-1} \mathbb{E}\,\left[O_{4,0}(x_1, x_2, \vec x, \vec x)\right] \left(\mathbb{E}\,\hat\Theta_0(\vec x,\vec x)\right)^{-1} \vec y -\\- \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y. \end{multline} As one can see, $\Theta^{NTH}(x_1,x_2)$ depends on the train labels $\vec y$. Roughly speaking, this kernel corresponds to the NTK of a network trained until convergence ($t\to\infty$); obviously, such a kernel should depend on the training data. As an interesting observation, $\Theta^{NTH}(x_1,x_2) = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + \vec y^T M(x_1,x_2,\vec x) \vec y$ for a certain matrix $M$ --- recall that the kernel $K^{HR}(x_1,x_2)$ considered previously has a similar form. Note that computing the Gram matrix $\Theta^{NTH}(\vec x, \vec x)$ requires computing the Gram "matrix" of the expected 4th-order empirical kernel $\mathbb{E}\,\left[O_{4,0}(\vec x, \vec x, \vec x, \vec x)\right]$. Instantiating this tensor requires $O(m^4)$ time and $O(m^4)$ memory, which is only possible for very small datasets.
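Before turning to practical limits, the predictors discussed in this section can be made concrete. Both the NNGP posterior mean $\mu(x \,|\, (\vec x, \vec y)) = K(x,\vec x) K^{-1}(\vec x,\vec x) \vec y$ and the infinite-time NTK mean $\mu_\infty^{NTK}(x) = \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) \vec y$ share the same algebraic form. The sketch below uses generic RBF kernels as stand-ins for the architecture-specific $K$ and $\Theta$ (computing the real kernels requires a package such as NeuralTangents) on a toy 1D regression problem; the data and bandwidths are purely illustrative:

```python
import numpy as np

def rbf(X, Z, gamma):
    """Generic positive-definite kernel; a stand-in for an NNGP or NTK kernel."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def posterior_mean(K_test_train, K_train_train, y, jitter=1e-10):
    """mu(x) = K(x, X) K(X, X)^{-1} y: the common form of the NNGP posterior
    mean and of the infinite-time NTK mean (with Theta in place of K)."""
    m = K_train_train.shape[0]
    return K_test_train @ np.linalg.solve(K_train_train + jitter * np.eye(m), y)

X = np.linspace(-2.0, 2.0, 6)[:, None]   # toy training inputs
y = np.sin(X[:, 0])                      # toy training targets
X_test = np.array([[0.25]])

# Pretend gamma=0.5 gives "K" (NNGP) and gamma=1.0 gives "Theta" (NTK):
mu_nngp = posterior_mean(rbf(X_test, X, 0.5), rbf(X, X, 0.5), y)
mu_ntk = posterior_mean(rbf(X_test, X, 1.0), rbf(X, X, 1.0), y)
```

On the training inputs themselves both predictors interpolate the targets (up to the jitter), which is the kernel-regression analogue of training to zero empirical risk.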
\section{Limits of applicability} \label{sec:experiments} \begin{figure} \caption{\label{fig:myrtle}} \end{figure} \begin{figure} \caption{\label{fig:myrtle_bce_cifar2_time_flops}} \end{figure} \begin{figure} \caption{\label{fig:myrtle_bce_cifar10_time_accuracy}} \end{figure} \begin{figure} \caption{\label{fig:resnet50_nll_cifar10_time_accuracy}} \end{figure} \begin{figure} \caption{\label{fig:myrtle_bce_stl2_time_accuracy}} \end{figure} In this section, we present a small experimental study of the scope of applicability of NTK regression in realistic scenarios. In particular, we would like to investigate, first, the maximal size $m$ of a training dataset of images of a given size that we can afford with limited computational resources, and second, the maximal image resolution $d$ we can afford given a fixed dataset size. We restrict ourselves to these two questions since, for practical purposes, the dependence of NTK regression complexity on these two parameters is the most worrying: it is $O(m^2 d^4)$ for constructing the Gram matrix, $O(m^3)$ for integrating the dynamics analytically, and $O(m^2 T)$ for integrating the dynamics numerically for $T$ steps; see \cref{sec:computations}. We use NeuralTangents \cite{novak2019neural} and perform all our experiments on a single GTX 1080Ti GPU with 11 GiB of memory. We consider a Myrtle network\footnote{\url{https://myrtle.ai/how-to-train-your-resnet-4-architecture/}} with 64 channels in all convolutional layers, see \cref{fig:myrtle}. We pick this architecture because it is lightweight and uses only those layers for which the NTK can be computed analytically. For the first experiment, we consider two classes of CIFAR10 and refer to this dataset as CIFAR2.
We pick a subset of 1000 samples of the original test set of CIFAR2 and vary the size of the training subset. We optimize binary cross-entropy (BCE) and integrate the dynamics numerically for $T = 10^4$ steps. We compute the Gram matrix of a kernel using batch size 4. In \cref{fig:myrtle_bce_cifar2_time_flops}, we plot training time and the number of floating-point operations (FLOPs) for different stages (i.e. Gram matrix computation, integrating the dynamics, inference on a test set) and for different regimes of training (analytical NTK, analytical NNGP, and empirical NTK) versus the size of the training dataset. As one can see, already for relatively small datasets ($m=10^4$), the most time-demanding stage is the construction of the Gram matrix $\Theta(\vec x, \vec x)$ (solid line), and not the integration, which also takes time quadratic in the size of the dataset (dotted line). Also, the time to compute the NNGP kernel is almost the same as the one for the NTK, since both are computed analytically; see \cref{sec:limit}. We could not obtain the point $m=10^4$ for the empirical NTK (ENTK) for numerical reasons. If we extrapolate the solid line to $m=10^6$, the size of ImageNet, noting the quadratic growth, we will get $5 \times 10^9$ seconds, which is around 160 years of computations. While our time measurements are device-dependent, we also measure the number of FLOPs, which, while being device-independent, grows the same way as time and is also quite large. This experiment demonstrates that indeed, the naive approach for integrating the NTK dynamics falls short on datasets of realistic sizes, thus calling for major optimizations. As mentioned in \cref{sec:computations}, a promising approach could be the one of \cite{meanti2020kernel}. In \cref{fig:myrtle_bce_cifar10_time_accuracy}, we present the same experiment but with all 10 classes of CIFAR10. We observe the same quadratic time growth issue for all three regimes of training (analytical NTK, analytical NNGP, and empirical NTK).
We also report accuracy for comparison with previous works on small-data training with kernel methods (e.g. \cite{arora2019harnessing}). In addition to experiments with a small network, we experimented with a variant of ResNet50 \cite{he2016deep}. We modify this architecture by removing batch normalizations and substituting max poolings with average poolings, so as to make analytical computations possible. Results are shown in \cref{fig:resnet50_nll_cifar10_time_accuracy}. Doing the same extrapolation to ImageNet size, we get $6.25 \times 10^{11}$ seconds, which is around $20000$ years. Lastly, we consider two classes of STL10 and, similarly to CIFAR2, refer to this dataset as STL2. We pick a subset of 100 samples of the original test set of STL2 and 500 samples of its original train set. While STL10 has fewer labeled examples compared to CIFAR10, it has larger images: $96 \times 96$ for STL10 versus $32 \times 32$ for CIFAR10. We vary the size of the input image and measure training time and accuracy, similarly to the first experiment. As before, we optimize binary cross-entropy (BCE) and integrate the dynamics numerically for $T = 10^4$ steps. However, we use batch size 1 for computing the Gram matrix, since larger batch sizes do not fit in GPU memory for large image sizes. Results are shown in \cref{fig:myrtle_bce_stl2_time_accuracy}. As before, the most time-demanding part is the kernel Gram matrix computation (blue line): it grows as $O(d^4)$, where $d$ is the image resolution; see \cref{sec:limit}. If we extrapolate this line to $d=224$, the resolution on which traditional ImageNet classification models operate, we will get around 150 days of computations. This experiment therefore demonstrates that not only dataset size but also image resolution can be a serious bottleneck in applying the NTK approach in practice. Also, while for dataset size, certain optimizations are available (e.g.
\cite{meanti2020kernel}), we are not aware of any optimizations aimed at decreasing the image resolution complexity. \section{Conclusions} The use of NTK theory is twofold: first, it relates neural networks to kernel methods, a far more well-developed class of models. Second, it gives a machine learning practitioner a kernel that shares some properties with neural nets. Recall what we have concerning the first application. We have a theorem (\cref{thm:master_theorem}) that implies that the neural tangent kernel of a wide class of architectures is deterministic and does not evolve with time in the limit of infinite width, and provides a recurrent formula for the limit. Therefore a network that is wide enough should share some properties, e.g. convergence and generalization, see \cref{sec:app_theory}, with the corresponding kernel method. However, the resulting width bounds are far from realistic. Second, the limit kernel stays constant in time only under a certain non-standard parameterization rarely used in practice. In contrast, standard parameterization results in an evolving (normalized) kernel, see \cref{sec:standard_param}. The fact that the kernel evolves may be the key to understanding the superior performance of neural nets over kernel methods. Unfortunately, we have little understanding of these aspects at the moment. Lastly, \cref{thm:master_theorem} requires Gaussian weight initialization, which is rarely used in practice. Generalizing it to non-Gaussian weight distributions remains to be done in the future. Let us discuss the second application. At the moment of writing, computing the exact limit kernel was possible only for convolutional and fully-connected networks with average poolings and nonlinearities in a certain class, see \cref{sec:computations}. For other architectures, one has to rely on the empirical NTK, which is a biased estimate of the limit one.
Computing the empirical NTK requires instantiating output-by-weight Jacobians at every pair of training points, which is especially memory-demanding for realistically large architectures. Storing the Gram matrix of the kernel also requires $O(m^2)$ memory, where $m$ is the dataset size. Even if the kernel is successfully computed on every pair of training points, integrating the training dynamics naively requires inverting the Gram matrix, which costs $O(m^3)$ time, while for datasets of size $10^6$ one can barely afford more than $O(m)$ time and memory. We study the applicability limits of this naive approach in \cref{sec:experiments}. Still, certain optimizations are available, see \cref{sec:computations}. Also concerning the second application, the NTK is not the only kernel that can be constructed using a neural network; certain other kernels may have computational or performance gains compared to the NTK, see \cref{sec:beyond}. \end{document}
\begin{document} \title[Chaotic behavior of the AFS algorithm]{On the chaotic behavior of the Primal-Dual Affine-Scaling Algorithm for Linear Optimization} \author{H. Bruin} \email{[email protected]} \affiliation{{Faculty of Mathematics, University of Vienna, Oskar Morgensternplatz 1\\ A-1090 Vienna, Austria }} \author{R. Fokkink} \email{[email protected]} \affiliation{ Delft University, Faculty of Electrical Engineering, Mathematics and Computer Science,\\ P.O.Box 5031, 2600 GA Delft, Netherlands} \author{G. Gu} \email{[email protected]} \affiliation{ Department of Mathematics, Nanjing University, Nanjing 210093, China } \author{C. Roos} \email{[email protected]} \affiliation{ Delft University, Faculty of Electrical Engineering, Mathematics and Computer Science,\\ P.O.Box 5031, 2600 GA Delft, Netherlands} \date{\today} \begin{abstract} \noindent \normalsize We study a one-parameter family of quadratic maps, which serves as a template for interior point methods. It is known that such methods can exhibit chaotic behavior, but this has been verified only for particular linear optimization problems. Our results indicate that this chaotic behavior is generic. \end{abstract} \keywords{interior-point method, affine scaling method, primal-dual method, chaotic behavior. } \maketitle We study a one-parameter family of quadratic maps on a projective simplex, which has been derived from an interior point method known as the primal-dual Affine Scaling method~\cite{JansenRoosTerlaky}. This particular method neatly handles both the primal and the dual variables in one step, enabling us to derive a one-parameter family independently of the underlying linear optimization problem. We study the bifurcations of this one-parameter family and find that they are almost identical to those that have previously been found by Castillo and Barnes~\cite{Barnes} for a specific linear optimization problem, using another interior point method.
This indicates, experimentally and non-rigorously, that the route to chaos in our one-parameter family is typical for general interior point methods. \section{Introduction} In linear optimization (LO) one wants to compute the maximum value of a linear objective function under linear inequality constraints. There exist many algorithms that solve LO problems by iteration. The classical algorithm is the simplex method, which produces an exact solution. It runs through the extremal points of the convex set that satisfies the constraints (the feasible set), improving the value of the objective function in each step and halting at an extremal point that produces the maximum value. The simplex method runs from one boundary point of the feasible set to the next. Interior point methods run through the interior of the feasible set. Historically, the first such method is the {\sl affine scaling} (AFS) method of Dikin, which remained unnoticed until 1985. The work of Karmarkar~\cite{int:Karmarkar2} sparked a large amount of research in polynomial-time methods for LO, and gave rise to many new and efficient interior point methods (IPMs) for LO. For a survey of this development we refer to the books of Wright~\cite{int:SWright9}, Ye~\cite{BookYe}, Vanderbei~\cite{Vanderbei} and Roos \emph{et al.}~\cite{RoosTerlakyVial2}. An IPM starts from an arbitrary initial point $x_0$ in the interior and constructs a sequence $x_n$ that converges to a maximum $x^*$. An IPM is a dynamical system which solves the LO problem, provided that the $\omega$-limit of~$x_0$ consists of maxima of the objective function~$f$. If there is only one such maximum, then the LO problem is called \textit{non-degenerate}. In this case, the IPM solves the LO problem provided that it converges to the maximum. Any LO problem can be converted to a dual problem in which one needs to find the minimum value of a dual linear objective function under dual constraints.
If $f$ is the objective function of the primal problem and if $g$ is the objective function of the dual problem, then $f(x)\leq g(y)$ for all feasible $x$ and all feasible $y$. To solve an LO problem it therefore suffices to close the duality gap and find $x^*,y^*$ such that $f(x^*)=g(y^*)$. According to the duality theorem, such $x^*,y^*$ exist, and apart from a primal sequence $x_n$ most IPMs also produce a dual sequence $y_n$, halting as soon as the duality gap $g(y_n)-f(x_n)$ falls below the desired accuracy threshold. A primal problem that is non-degenerate may have a degenerate dual problem, in which case the orbit $y_n$ may have a non-trivial $\omega$-limit set. We will study such an LO problem at the end of this paper and find that the dual dynamical system contains a hyperbolic attractor. The simplex method runs from one extremal point to the next, but an IPM uses a variable step size~$\alpha$. For each feasible $x$ the algorithm produces a vector $v$ such that $x+\alpha v$ is contained in the feasible set for $0\leq \alpha\leq 1$ and such that $x+v$ is in the boundary of the feasible set. The step size $\alpha$ is fixed during iteration and is chosen $<1$ so that the orbit $x_n$ is contained in the interior of the feasible set. An IPM therefore is a one-parameter family of dynamical systems. It is well known that an IPM may not converge if $\alpha$ is too large. One of the best studied algorithms is the Affine Scaling method (AFS), which was proposed by Dikin and which has been further developed by Vanderbei {\em et al.}~\cite{Vanderbeietal}. It is known~\cite{TsuchiyaMuramatsu} that AFS converges if $\alpha\leq 2/3$ and that it need not converge if $\alpha>2/3$, see~\cite{twothirds}. It is also known that AFS behaves chaotically in the dual variables for $\alpha>2/3$, as has been found by Castillo and Barnes~\cite{Barnes} and Mascarenhas~\cite{Mascarenhas}.
\subsection{Outline of our paper} The previous studies of chaotic behavior in interior point methods were carried out for specific problems: one considers an LO problem, applies the algorithm and analyzes the resulting dynamical system. In this paper, we take a different approach. We consider the primal-dual AFS that was proposed by Jansen {\em et al.}~\cite{JansenRoosTerlaky}. It has the nice property that it can be presented in such a form that its low-order terms do not depend on the original LO problem. By ignoring the higher-order terms, we obtain a one-parameter family of dynamical systems, which we call the \textit{Dikin process}, that is the same for all LO problems. Of course, the Dikin process is not an IPM anymore. However, the bifurcations that we establish for the Dikin process are the same as the bifurcations that have previously been found by Castillo and Barnes for their specific LO problem. This indicates, experimentally and non-rigorously, that the chaotic behavior of the Dikin process represents that of general interior point methods. Our paper is organized as follows. We first recall the primal-dual AFS method for solving LO problems. We then derive the one-parameter family of dynamical systems, and analyze it for increasing values of a parameter $\theta$. We show that the system behaves chaotically as $\theta$ increases beyond $2/3$. We supplement this analysis experimentally with Feigenbaum diagrams. In the final section, we compare our results to an IPM that arises from a specific LO problem. \subsection{Notation} We reserve the symbol $e \in \mathbb R^n$ for the vector of all ones. For a vector $x$, the capital $X$ denotes the diagonal matrix with the entries of $x$ on the diagonal. Furthermore, if $f:\mathbb R\rightarrow\mathbb R$ is a function and $x \in \mathbb R^n$, then we denote by $f(x)$ the vector $(f(x_1),\dots,f(x_n))$.
If $s$ is another vector, then $xs$ will denote the coordinatewise product of $x$ and $s$, and $x/s$ will denote the coordinatewise quotient of $x$ and $s$. In other words, $xs=Xs$ and $x/s=S^{-1}x$. Finally, $\|\cdot\|$ denotes the $l_2$-norm. \section{A recap of the primal-dual affine scaling method} In linear optimization, the notion of affine scaling has been introduced by Dikin~\cite{int:Dikin1} as a tool for solving the (primal) problem in standard format \[ (P)\;\;\;\qquad \min\{c^T x\,:\, A x=b,\ x\ge0\}. \] The underlying idea is to replace the nonnegativity constraints $x\ge 0$ by the ellipsoidal constraint \begin{equation}\label{ellips} \norm{\bar{X}^{-1}({\bar x}-x)}\leq 1,\end{equation} where $\bar{x}$ denotes some given interior feasible point, and $\bar{X}$ the diagonal matrix corresponding to $\bar{x}$. The resulting subproblem is easily solved and renders a new interior feasible point with a better objective value. Dikin showed, under the assumption of primal nondegeneracy, that this process converges to an optimal solution of $(P)$. Every known method for solving $(P)$ essentially also solves the dual problem \[ (D)\;\;\;\qquad \max\{b^T y\,:\, A^T y + s=c,\ s\ge0\} \] by closing the duality gap between $c^Tx$ and $b^Ty$, which equals $x^Ts$. Our basic assumption is that a primal-dual pair $(x,s)$ of feasible solutions exists and that $A$ is an $m\times n$ matrix of rank $m$ for $m<n$. A pair of feasible vectors $x^*,s^*$ solves $(P)$ and $(D)$ if and only if they are orthogonal. Since $x^*\geq 0$ and $s^*\geq 0$, this means that the coordinatewise product $x^*s^*$ is equal to the all-zero vector. The primal-dual AFS method that we consider in this paper has been proposed by Jansen {\em et al.}~\cite{JansenRoosTerlaky}.
In primal-dual AFS, Dikin's ellipsoidal constraint (\ref{ellips}) is replaced by a constraint that includes both the primal and the dual variable: \begin{equation}\label{ellips2} \norm{\bar{X}^{-1}({\bar x}-x)+\bar{S}^{-1}({\bar s}-s)}\leq 1,\end{equation} where $\bar {S}$ denotes the diagonal matrix corresponding to the slack vector $\bar s$. In this notation, $(\bar x,\bar s)$ is the original pair of primal vector and slack vector and $(x,s)$ is an updated pair. The differences $\Delta x=x-\bar x$ and $\Delta s=s-\bar s$ are called the primal-dual AFS directions. For non-negative $x,s$ let $v=(xs)^{1/2}$ be the coordinatewise square root of the coordinatewise product and let $v^k$ be the coordinatewise power of $v$. Jansen {\em et al.}\ have shown that the directions $\Delta x$ and $\Delta s$ can be derived from the vector \[p_v=-\frac{v^{3}}{\norm {v^2}}\] by first projecting $p_v$ onto the null space (for $\Delta x$) and the row space (for $\Delta s$) of the scaled matrix $AD$, where $D$ denotes the diagonal matrix corresponding to $d=(x/s)^{1/2}$, and then rescaling the result by a coordinatewise product. More specifically, \[\Delta x=dP_{AD}(p_v),\ \Delta s= d^{-1}Q_{AD}(p_v)\] where $P_{AD}$ and $Q_{AD}$ denote the orthogonal projections onto the null space of $AD$ and the row space of $AD$, respectively. These projections recombine in the Dikin ellipsoid to \begin{equation}\label{sum} X^{-1}\Delta x + S^{-1}\Delta s=-\frac{v^2}{\norm{v^2}}. \end{equation} This gives the primal-dual AFS directions but not the size of the step, which is controlled by an additional parameter $\alpha$. It is known that the iterative process $x\mapsto x+\alpha\Delta x, s\mapsto s+\alpha\Delta s$ converges to a solution if $\alpha<1/\left(15\sqrt n\right)$~\cite{JansenRoosTerlaky}. This is of course a significant restriction on the step size, and primal-dual AFS is not often used in practice.
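To make the projections concrete, the directions can be checked numerically. The following sketch is ours (not taken from the literature, assuming numpy); it uses the constraint matrix of the Castillo-Barnes problem from the final section together with a hypothetical strictly feasible pair $(x,s)$, and verifies equation~(\ref{sum}).

```python
import numpy as np

# Sketch (not the authors' code): compute the primal-dual AFS directions for a
# small feasible pair and verify X^{-1} dx + S^{-1} ds = -v^2 / ||v^2||.
# A is the Castillo-Barnes constraint matrix; the pair (x, s) is a hypothetical
# strictly feasible choice (A @ x = 0 and s = c - A^T y > 0 for y = (3, 0.5)).
A = np.array([[1., 2., -3., -2., -1.],
              [-1., 2., -1., -1., -1.]])
x = np.array([1., 2., 0.5, 1., 1.5])
s = np.array([7.5, 3., 14.5, 7.5, 2.5])

d = np.sqrt(x / s)                        # scaling vector d = (x/s)^{1/2}
v = np.sqrt(x * s)                        # v = (xs)^{1/2}
p_v = -v**3 / np.linalg.norm(v**2)        # direction vector p_v
AD = A * d                                # scaled constraint matrix A diag(d)

Q = AD.T @ np.linalg.solve(AD @ AD.T, AD)  # projector onto the row space of AD
P = np.eye(len(x)) - Q                     # projector onto the null space of AD
dx = d * (P @ p_v)                         # primal direction Delta x
ds = (Q @ p_v) / d                         # dual direction Delta s

assert np.allclose(A @ dx, 0)              # Delta x preserves A x = b
assert abs(dx @ ds) < 1e-9                 # the two directions are orthogonal
assert np.allclose(dx / x + ds / s, -v**2 / np.linalg.norm(v**2))  # eq. (sum)
```

Note that the last identity holds exactly, independently of the normalization of $p_v$, because $P_{AD}+Q_{AD}$ is the identity.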
\section{Derivation of the Dikin process} Starting with a primal-dual feasible pair $(x,s)$, the next iterated pair is given by \[x^{+} = x+\alpha\Delta x,\qquad s^{+} = s+\alpha\Delta s,\] and hence we have \[x^{+}s^{+} = xs+\alpha\left({x\Delta s+s\Delta x}\right) + \alpha^2\Delta x\Delta s.\] The vectors $\Delta x$ and $\Delta s$ are orthogonal. If the AFS iterations are close to a solution, then $\Delta x$ and $\Delta s$ will be relatively small, and the product $\Delta x\Delta s$ will be negligible. If we ignore the quadratic term, i.e., if we assume that the coordinatewise product $\Delta x\Delta s$ is equal to zero, then the reduction of~$xs$ is proportional to~$x\Delta s+s\Delta x$, which can be rewritten to \[ xs\left(x^{-1}\Delta x+ s^{-1}\Delta s\right). \] Observe that $x^{-1}\Delta x+ s^{-1}\Delta s$ is equal to the left-hand side in $(\ref{sum})$. So if we ignore the quadratic term, and if we use equation~$(\ref{sum})$, then we find that \[x^{+}s^{+} = xs+\alpha\left({x\Delta s+s\Delta x}\right) = xs -\alpha \frac{x^2s^2}{\norm{xs}}.\] Recall that $e$ denotes the all-one vector, so we may also write this as \[ x^{+}s^{+} =xs\left(e -\alpha \frac{xs}{\norm{xs}}\right). \] Now we have arrived at an iterative process for the product vector $xs$. Since we require $x\ge 0$ and $s\ge 0$, we need to require $xs\ge 0$ in the iterative process, and the maximal step size is equal to \[\alpha_{\max} = \frac{\norm{xs}}{\max{xs}}.\] Defining \[\theta = \frac{\alpha}{\alpha_{\max}} = \alpha\frac{\max{xs}}{\norm{xs}}\] and writing $w=xs$ we get \[w^+ = w\left(e -\theta \frac{w}{\max{w}}\right),\qquad \theta\in [0,1].\] This iterative process depends on a parameter $\theta$ which is related to the original step size by $\alpha=\theta\alpha_{\max}$. If $w$ has coordinates that are approximately equal (in optimization one says that $w$ is `close to the central line'), then $\alpha_{\max}\approx \sqrt n$. In general, $1\leq \alpha_{\max}\leq \sqrt n$.
We make one further reduction. If $u=\lambda w$ for a scalar $\lambda$ then $u^+=\lambda w^+$, so the iterative process preserves projective equivalence. We may therefore reduce our system up to projective equivalence by scaling vectors so that their maximum coordinate is equal to one. If we consider vectors up to projective equivalence, then we obtain our \textit{Dikin process}: \begin{equation}\label{eq:scaled process} {\bar{w}}^{k} = w^k\left(e -\theta w^k\right),\qquad w^{k+1} = \frac{{\bar{w}}^{k}}{\max{\bar{w}}^{k}},\qquad k=0, 1, \ldots \end{equation} The Dikin process involves two steps: multiplication and scaling. To describe the process more succinctly we use the map $f_\theta(x)=x(1-\theta x)$. The Dikin process is then given by: \begin{equation} w^{k+1}=f_\theta(w^k)/\max\{f_\theta(w^k)\}.\end{equation} Note that $f_{\theta}$ is a higher-dimensional analog of the logistic map on the unit interval. For each coordinate we apply the same quadratic map, and the only interaction between the coordinates is induced by the scaling. The Dikin process does not solve the original LO problem. Its significance derives from the fact that it does not depend on the LO problem and that its bifurcations can be analyzed in a standard way. \section{Bifurcation analysis} We analyze the Dikin process $f_\theta$, for increasing values of $\theta$. We suppress the subscript $\theta$ in $f_\theta$ and simply write the Dikin process as \begin{equation}w^{k+1}=f(w^k)/\max\{f(w^k)\}.\end{equation} Note that $f$ has a global maximum $f(1/2\theta)=1/4\theta$ and that we scale $f$ such that all coordinates take values $\leq 1$. \subsection{$\theta\leq 2/3$: the process converges to $e$} If $\theta\leq 1/2$ then the maximum of $f$ is attained at $1/2\theta\geq 1$, which lies outside the domain of our coordinates, so $f$ is increasing on the unit interval. The value of each coordinate increases during iteration. By monotonicity the limit of $w^k$ exists and it is a fixed point under iteration.
The only fixed point is $e$ and therefore $w^k$ converges to the all-one vector~$e$ if $\theta\leq 1/2$. We now argue that $e$ remains the global attractor if $\theta\leq 2/3$. If $\theta> 1/2$ then $f$ is unimodal and symmetric with respect to its maximum at $1/2\theta$: \begin{equation}\label{pointsym} f\left(\frac 1{2\theta}+z\right)=f\left(\frac 1{2\theta}-z\right).\end{equation} Under iteration of $f$ all orbits eventually end up in the interval $[1/\theta-1,1]$. In particular, for every initial $w^0$ it eventually holds that $\min{w^k}\geq 1/\theta-1$. We need to show that $\min w^k$ in fact converges to $1$ if $1/2\leq \theta\leq 2/3$. By the symmetry in \eqref{pointsym}, if we replace the coordinates $w_i<1/2\theta$ in $w^k$ by their reflections $1/\theta-w_i$, then this does not affect $w^{k+1}$. We may therefore assume that $\min w^k\geq 1/2\theta$. Let $x=\min w^k\geq 1/2\theta$. Since $f$ is decreasing for $x\geq 1/2\theta$ we have that $f(x)=\max f(w^k)$ and $f(1)=\min f(w^k)$. Therefore the minimum coordinate of $w^{k+1}$ is given by \begin{equation}\label{h} h(x)=\frac {f(1)}{f(x)}=\frac{1-\theta}{x(1-\theta x)}. \end{equation} To prove that the process converges to $e$, it now suffices to prove that $h(x)\geq x$, because this implies that the limit of $h^k(x)$ exists and is equal to the unique fixed point of $h$. Now $h(x)\geq x$ can be rewritten as \begin{equation}\label{cbc} \theta x^3-x^2+1-\theta\geq 0. \end{equation} The derivative $(3\theta x - 2)x$ of the cubic $\theta x^3 -x^2 +1 -\theta$ is negative on the unit interval, by our assumption that $\theta\leq 2/3$. So the cubic has its maximum at $0$ and its minimum at $x=1$, which is a zero of the cubic. Hence the inequality $h(x)\geq x$ holds and we conclude that $w^k$ also converges to $e$ if $1/2\leq\theta\leq 2/3$.
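As an illustration (ours, not part of the original analysis, assuming numpy), the Dikin process \eqref{eq:scaled process} can be iterated numerically; for $\theta\leq 2/3$ the orbit indeed approaches the all-one vector $e$.

```python
import numpy as np

def dikin_step(w, theta):
    """One step of the Dikin process: apply f_theta coordinatewise, then
    rescale so that the maximal coordinate equals one."""
    fw = w * (1 - theta * w)
    return fw / fw.max()

# For theta <= 2/3 every orbit converges to the all-one vector e.
w = np.array([0.3, 0.7, 1.0])   # arbitrary starting point with max coordinate 1
for _ in range(300):
    w = dikin_step(w, 0.6)
# w is now numerically indistinguishable from e = (1, 1, 1)
```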
\subsection{$2/3<\theta\leq \frac{1+\sqrt 5}4$: convergence to a point of period two.} We will see that if $2/3<\theta\leq \frac{1+\sqrt 5}4$, then the minimum coordinate and the maximum coordinate interchange under iteration, while all other coordinates converge either to the minimum or to the maximum. We can thus ignore these other coordinates, and observe that the Dikin process on the minimum and maximum coordinate is given by $(x,1)\to(1,h(x))\to (h^2(x),1)\to\cdots$, with $h$ as in~(\ref{h}). Observe that a fixed point of $h$ produces a point of period two for this process. If $\theta>2/3$ then $h$ has a unique fixed point $r\in (0,1)$, which can be found by solving the cubic equation $h(x)=x$ that we already encountered in equation~\eqref{cbc}. This cubic is divisible by $x-1$, so we find that $r$ satisfies the quadratic equation \begin{equation} \label{quadratic} \theta r^2+(\theta-1)r+(\theta-1)=0.\end{equation} The positive solution for $r$ is equal to \begin{equation}\label{eq:r} r=\frac{1-\theta+\sqrt{(1-\theta)^2+4\theta(1-\theta)}}{2\theta}, \end{equation} which is $\leq 1$ if and only if $\theta\geq 2/3$. The cubic equation $h(x)=x$ has zeros in $1,r$ and the third zero $t$ is negative. In particular, $h(x)>x$ on $(t,r)$ and $h(x)<x$ on $(r,1)$ and we find that $r$ is the global attractor of $h$ in the interval $[1/\theta-1,1)$. Note that $r$ is not a global attractor in the closed interval $[1/\theta-1,1]$ since $h(1)=1$ is a fixed point. Since $r$ is an attractor, the two-dimensional process $(x,1)\to(1,h(x))$ converges to an orbit of period two if $\theta> 2/3$. We note that this particular limit behavior has also been observed by Hall and Vanderbei \cite{twothirds} for the (primal) AFS method. Since all coordinates eventually increase above $1/\theta-1$ we may as well assume that $\min w^0\geq 1/\theta-1$. The minimum coordinate of $f(w^0)$ then has value $f(1)=1-\theta$ and the maximum coordinate has value $\leq 1/4\theta$.
Therefore, $\min w^1\geq 4\theta(1-\theta)$. If in fact $\min w^1\geq 1/2\theta$, then we can restrict our attention to the minimum and the maximum coordinate. This would be the case if $4\theta(1-\theta)\geq 1/2\theta$, which leads us to the cubic equation $$8\theta^2(1-\theta)-1=0\Longleftrightarrow (2\theta-1)(1+2\theta-4\theta^2)=0.$$ The two roots of the quadratic are $\frac{1\pm\sqrt 5}4$, and so we conclude that we may indeed restrict our attention to the minimum and the maximum coordinate if $\frac 12 < \theta\leq \frac{1+\sqrt 5}4$. Suppose $\theta\leq \frac {1+\sqrt 5}4\approx 0.809$ and consider an initial condition $w^0 = (w_1, \dots, w_n)$ with increasing coordinates $w_1\leq w_2 \leq \dots \leq w_n = 1$ and such that $w_1\geq 1/2\theta$. For an intermediate coordinate $w_i$ the process is given by $ w_i \to \frac{w_i(1-\theta w_i)}{w_1(1-\theta w_1)}. $ Now the minimum coordinate converges to $r$ so we may as well put $w_1=r$, in which case we get that $w_i\to g(w_i)$ for the map $$ g(x) = \frac{x(1-\theta x)}{r(1-\theta r)}. $$ This map $g$ keeps track of the Dikin process $w^k$ on a fixed coordinate. We now prove that the $\omega$-limit of $g$ is Lebesgue a.e. equal to $\{r,1\}$, which will show that a.e. point converges to a point of period two. This comes down to a straightforward computation, which is carried out in the paragraph below. First note that $g(r)=1$ and that $g(1)=h(r)=r$ so that $g$ has a fixed point $s\in (r,1)$ as is illustrated by the graph of $g^2$ in the figure below. We leave it to the reader to verify that $g$ has a unique fixed point in $(r,1)$ at $s=(r +\theta - 1)/r\theta$.
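These claims are easily checked numerically; the following sketch (ours, assuming numpy) verifies, for the sample value $\theta=0.8$, that $r$ from \eqref{eq:r} is a fixed point of $h$, and that $s=(r+\theta-1)/r\theta$ is the fixed point of $g$ in $(r,1)$.

```python
import numpy as np

theta = 0.8
# r from equation (eq:r): the positive fixed point of h
r = (1 - theta + np.sqrt((1 - theta)**2 + 4 * theta * (1 - theta))) / (2 * theta)

f = lambda x: x * (1 - theta * x)
h = lambda x: f(1) / f(x)          # h(x) = (1 - theta) / (x (1 - theta x))
g = lambda x: f(x) / f(r)          # g tracks a fixed coordinate, with w_1 = r

s = (r + theta - 1) / (r * theta)  # claimed fixed point of g in (r, 1)

assert abs(h(r) - r) < 1e-12       # r is a fixed point of h
assert abs(g(r) - 1) < 1e-12 and abs(g(1) - r) < 1e-12   # g swaps r and 1
assert abs(g(s) - s) < 1e-12       # s is a fixed point of g
assert r < s < 1                   # and it lies in the interval (r, 1)
```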
\begin{figure} \caption{The function $g^2$ on the interval $[1/4,1]$ for $\theta=0.8$ plotted against the diagonal.} \label{ggraph} \end{figure} The derivative of $g$ is given by \begin{equation}\label{derivative} g'(x) = \frac{1-2\theta x}{r(1-\theta r)}=\frac{r(1-2\theta x)}{1-\theta}, \end{equation} where we use that $1-\theta=r^2-\theta r^3$. Note that $g(x)=x$ is a quadratic equation with solutions $x=0$ and $x=s$. The equation $g^2(x)=x$ is an equation of degree four with solutions $0,r,s,1$. It follows that $g^2(x)\not=x$ on the two subintervals $(r,s)\cup(s,1)$ and that $g^2(x)>x$ on the one interval while $g^2(x)<x$ on the other interval. Using equation~(\ref{derivative}), and using that $rs\theta=r+\theta-1$, we find that the derivative at $s$ is \begin{eqnarray*} g'(s) &=& \frac{r\left( 1 - 2\theta s \right)}{1-\theta}=\frac{2-r-2\theta}{1-\theta}. \end{eqnarray*} To prove that $s$ is unstable, we need to verify that $\frac{2-r-2\theta}{1-\theta}<-1$, or equivalently, that $r>3-3\theta$. Substituting (\ref{eq:r}) for $r$ and simplifying, we end up with $ (1-\theta)+\sqrt{(1-\theta)^2+4\theta(1-\theta)}>6\theta(1-\theta). $ Taking squares to remove the root gives $ (1-\theta)^2+4\theta(1-\theta)>(6\theta-1)^2(1-\theta)^2, $ which simplifies to $ 1+3\theta>(6\theta-1)^2(1-\theta) $. Collecting all terms and dividing by $4\theta$ we finally arrive at the inequality $9\theta^2-12\theta+4>0$, or equivalently, $(3\theta-2)^2>0$. This obviously holds if $\theta>2/3$. It follows that $g^2(x)<x$ on $(r,s)$ and that $g^2(x)>x$ on $(s,1)$. This completes our computation and we conclude that if $2/3<\theta\leq \frac{1+\sqrt 5}4$, then the $\omega$-limit is Lebesgue a.e. equal to an orbit of period two. The coordinates of these periodic points are either equal to $r$ or $1$.
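The instability condition can also be verified numerically for sample parameters; this sketch (ours, assuming numpy) checks that $g'(s)=(2-r-2\theta)/(1-\theta)<-1$, equivalently $r>3-3\theta$, for several values of $\theta>2/3$.

```python
import numpy as np

def r_of(theta):
    # positive root of the quadratic (quadratic):
    # theta r^2 + (theta - 1) r + (theta - 1) = 0
    return (1 - theta + np.sqrt((1 - theta)**2 + 4 * theta * (1 - theta))) / (2 * theta)

for theta in [0.70, 0.75, 0.80]:
    r = r_of(theta)
    gprime_s = (2 - r - 2 * theta) / (1 - theta)  # derivative of g at s
    assert r > 3 - 3 * theta    # equivalent form of the instability condition
    assert gprime_s < -1        # s is a repelling fixed point of g
```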
\subsection{Persistence of period two.} For generic period doubling bifurcations in smooth dynamical systems, the parameter curve of the periodic points of period $2n$ is parabolic and intersects the curve of the periodic point of period $n$ transversally. At the point of intersection, the period $n$ point changes from stable to unstable, or vice versa. Curiously, this scenario fails in at least the first two period doublings in our Feigenbaum diagrams, in particular see Figure~\ref{bifudiag}. Our numerical experiments show that the period two limit cycle persists beyond $\frac{1+\sqrt{5}}4$. It is indeed possible to prove that the period two point persists, but the analysis gets involved. We limit ourselves to the case that $w$ has three coordinates. Assuming that the coordinates are ordered $x<y<1$ we can write \begin{eqnarray*} (x,y,1) &\mapsto& \left( 1, \frac{ y(1-\theta y) }{ x(1-\theta x) }, \frac{ 1-\theta }{ x(1-\theta x) } \right) \\ &\mapsto& \left( \frac{ x(1-\theta x) }{ 1-\theta \frac{1-\theta}{x(1-\theta x)} }\ ,\ \frac{y(1-\theta y)}{1-\theta} \frac {1-\theta \frac{y(1-\theta y)}{x(1-\theta x)} } {1-\theta \frac{1-\theta}{x(1-\theta x)} }\ ,\ 1 \right) \end{eqnarray*} so we can describe the second iterate by the function $$ F(x,y)=\left(\frac{ x(1-\theta x) }{ 1-\theta \frac{1-\theta}{x(1-\theta x)} }\ ,\ \frac{y(1-\theta y)}{1-\theta} \frac {1-\theta \frac{y(1-\theta y)}{x(1-\theta x)} } {1-\theta \frac{1-\theta}{x(1-\theta x)} }\right). $$ \begin{figure} \caption{The second eigenvalue $\partial F_2(x,y)/\partial y$ on the diagonal, as a function of $\theta$.} \label{fig:secondeigenvalue} \end{figure} This function preserves the diagonal, on which we have the two-dimensional process, which as we have seen already has a period two global attractor for $\theta>2/3$. So, the instability has to occur in the direction transversal to the diagonal.
We can study this stability by taking the derivative $$ DF(x,y) = \left[ \begin{array}{ll} \frac{\partial F_1(x,y)}{\partial x} & 0 \\ \frac{\partial F_2(x,y)}{\partial x} & \frac{\partial F_2(x,y)}{\partial y} \end{array}\right] $$ where the partial derivatives on the diagonal of the matrix, $\frac{\partial F_1(x,y)}{\partial x}$ and $\frac{\partial F_2(x,y)}{\partial y}$, are equal to $$ \frac{(x-3\theta x^2-2\theta+2\theta^2+2\theta^2 x^3+4\theta^2x-4\theta^3x)x(1-\theta x)} {(-x+\theta x^2+\theta-\theta^2)^2} $$ and $$ \frac{x-\theta x^2-2 y\theta+6 y^2\theta^2-2x^2\theta+2y\theta^2x^2-4y^3\theta^3}{(-x+\theta x^2+\theta-\theta^2) \cdot (-1+\theta) }. $$ Maple computations show that the fixed point becomes unstable at $\theta = 0.8499377796$, when the eigenvalue $\frac{\partial F_2(r,r)}{\partial y}$ becomes equal to $-1$. At this value of $\theta$ we expect $(r, r,1)$ to become unstable, splitting off a stable period $4$ point in a period doubling bifurcation, which is confirmed by the Feigenbaum diagrams below. In our computational results for real LO problems, we find that the limit two cycle persists slightly beyond the threshold of $\theta = 0.8499377796$. \subsection{$\theta>\frac{1+\sqrt 5}{4}$: comparison to the logistic family.} It is hard to extend the bifurcation analysis for $\theta\geq \frac{1+\sqrt 5}4$, since the degree of the algebraic equations increases and periodic points cannot be found in closed form. However, using the similarity between the Dikin process and the logistic map~\cite{Feigenbaum} $Q_\theta:x \mapsto 4\theta x(1-x)$, we can prove that stable periodic points of higher order appear if $\theta$ increases beyond~$\frac{1+\sqrt 5}{4}$. In particular, we shall now show that if the critical point $c = \frac12$ is $m$-periodic under $Q_\theta$ then the Dikin process has a locally stable $m$-periodic orbit, provided the number of coordinates $n \ge m$.
Assume that the first $m$ coordinates $w_i^k$ of the vector $w^k$ are equal to $Q_\theta^i(c)/\theta$ (so the coordinates are not put in increasing order here). In particular, $w_m^k = c/\theta = 1/2\theta$, and $f_\theta(w_m^k) = 1/4\theta = \max\{f_\theta(x)\}$. Then $w_i^{k+1} = 4\theta f_\theta(w_i^k) = 4\theta w_i^k(1-\theta w_i^k)$. The linear scaling $\ell(x) = \theta x$ conjugates this to $Q_\theta$, since $\ell^{-1} \circ Q_\theta \circ \ell(x) = 4\theta x(1-\theta x)$. Since the critical point of $Q_\theta$ is periodic by our assumption, the critical point of $f_\theta$ is periodic too: $w_i^{k+1} = w_{(i \bmod m)+1}^k$ for $i = 1, \dots, m$, and $w^{k+1}_{m-1} = 1/2\theta$. In particular, the scaling remains the same for all iterates. This periodic orbit attracts the coordinates $w_i$ for $m < i \leq n$ for Lebesgue-a.e.\ initial choice of $w_i$. Let us now verify that the orbit is also stable under small changes in the coordinates $w_i$ for $1 \leq i \leq m$. Renaming these $w_i$ to $y_i$, $i=1, \dots, m$, where $y_{m-1} = 1/2\theta$, $y_m = 1$, $y_1 = f(1)/f(y_{m-1})$ and $y_{i+1} = f(y_i)/f(y_{m-1})$ for $1 \leq i < m$, we can describe them by the map \begin{equation}\label{Fmap} F(y_1, \dots, y_{m-1}, 1) = \left( \frac{f(1)}{f(y_{m-1})}, \dots , \frac{f(y_{m-2})}{f(y_{m-1})}, 1 \right) \end{equation} The final coordinate is redundant, so $DF$ is an $(m-1)\times (m-1)$ matrix. Recall that $f'(x) = 1-2\theta x$. Therefore $DF(y)$ is equal to \[ \left( \begin{array}{cccr} 0 & \dots & 0 &\ -(1-2\theta y_{m-1})\frac{f(1)}{f(y_{m-1})^2} \\[1mm] \frac{1-2 \theta y_1}{f(y_{m-1})} & \dots & 0 &\ -(1-2\theta y_{m-1})\frac{f(y_1)}{f(y_{m-1})^2} \\[1mm] 0 & \ddots & 0 &\ -(1-2\theta y_{m-1})\frac{f(y_2)}{f(y_{m-1})^2} \\[1mm] \vdots & & \vdots & \vdots \\[1mm] 0 & \dots & \frac{1-2 \theta y_{m-2}}{f(y_{m-1})} &\ -(1-2\theta y_{m-1})\frac{f(y_{m-2})}{f(y_{m-1})^2} \end{array} \right) \] and since $y_{m-1} = 1/2\theta$, the right-most column is zero.
Therefore all eigenvalues of $DF$ are zero, so a sufficiently high power of $DF$ is a contraction. We conclude that the structure of the Feigenbaum diagram of the logistic family must be present within the Feigenbaum diagrams of the Dikin process. However, we made no estimate on the basin of attraction of the periodic points, and our numerical results indicate that these basins are small. \subsection{The process converges to a periodic point for $\theta$ near $1$.}\label{locstabletheta1} Surprisingly, it is possible to determine the limit of $w^k$ for $\theta$ arbitrarily close to $1$. To conclude our bifurcation analysis, we show that for $\theta$ close to $1$ the Dikin process has a locally stable point of period $n$, i.e., the period is equal to the dimension. Let $y$ be any point with maximal coordinate $1$ and all other coordinates $\leq \frac 1{2\theta}$. As before, we assume $\min y \geq 1/\theta - 1$ and this implies that $f(1)$ is the minimal coordinate of $f(y)$. We arrange the coordinates of $y$ in non-decreasing order. Then $f(y_{m-1})$ is the largest coordinate among all the $f(y_k)$, so we scale by this number and we arrange the coordinates of $w^1$ in non-decreasing order. The dynamical process can then be described by the map $ F(y_1, \dots, y_{m-1}, 1) = \left( \frac{f(1)}{f(y_{m-1})}, \dots , \frac{f(y_{m-2})}{f(y_{m-1})}, 1 \right) $ of equation $(\ref{Fmap})$, and therefore we find cyclic periodicity if $f(y_k)/f(y_{m-1})=y_{k+1}$ and $f(1)/f(y_{m-1})=y_1$. Fix $y_{m-1}<1/2\theta$ and define a map $g(x)=f(x)/f(y_{m-1})$. Note that $y $ has the required cyclic periodicity if $$ y_{m-1}=g(y_{m-2})=\cdots=g^{m-2}(y_1)=g^{m-1}(1). $$ By the symmetry of $f$ in \eqref{pointsym}, we may replace $g^{m-1}(1)$ by $g^{m-1}(1/\theta-1)$. If we take $y_{m-1}=1/2\theta$ then a sufficient condition for the cyclic periodic point to exist is \begin{equation}\label{percondit} g^{m-1}(1/\theta-1)\leq 1/2\theta. \end{equation} This inequality is satisfied if $\theta$ is sufficiently close to $1$.
Now $g$ increases as $y_{m-1}$ decreases, so once the condition is satisfied, there exists a $y_{m-1}$ such that $g^{m-1}(1/\theta-1)=y_{m-1}$. To compute the stability of this orbit, we can no longer use that the right-most column of $DF$ vanishes, because now $y_{m-1} < 1/2\theta$. Fortunately, $DF$ is of a simple form $$ DF(y) = \left( \begin{array}{ccccc} 0 & 0 & \dots & 0 & -c_1 \\[1mm] \ d_1 \ & 0 & & \vdots & -c_2 \\[1mm] 0 & \ d_2 \ &\quad \ddots \quad & & -c_3 \\[1mm] \vdots & & \ddots & & \vdots \\[1mm] 0 & \cdots & & d_{m-1} & -c_m \end{array} \right) $$ where $d_1>d_2>\ldots>d_{m-1}>0$ and $0 < c_1< c_2<\ldots<c_m<1$. This follows from the fact that $y_1<y_2<\cdots<y_{m-1}\leq 1/2\theta$ and that $f$ is increasing on $[y_1,y_{m-1}]\subset [0,\frac 1 {2\theta}]$. In order to estimate the eigenvalues of $DF$ we use the classical result of~Enestr\"om-Kakeya~\cite{Enestrom} that a polynomial $p(z)=\sum_{k=0}^m a_k z^k$ with all coefficients $a_k> 0$ has zeros in the annulus $\alpha\leq |z|\leq \beta$, where \[ \alpha=\min \left\{\frac {a_k}{a_{k+1}}\right\},\ \beta=\max \left\{\frac {a_k}{a_{k+1}}\right\}. \] \begin{quote} {\bf Claim:} If $c_id_i < c_{i+1}$ for all $i \leq m-1$, then all eigenvalues of $DF$ are in the open unit disc. \end{quote} Abbreviate $A = DF$ and let $p(\lambda) = \det(\lambda I_m - A) = \sum_{k=0}^m a_k\lambda^k$ be the characteristic polynomial of $A$. We will show by induction that the coefficients are decreasing; more precisely, $1=a_m>\cdots>a_0>0$ and $a_0=c_1d_1\cdots d_{m-1}$. The claim is obvious for $m=1$. Assume that the claim is true for $m-1$. The characteristic polynomial is equal to $$ \lambda\det(\lambda I_{m-1}-A_{11})+(-1)^{m-1}c_1\cdot (-d_1)\cdots (-d_{m-1}), $$ where $A_{11}$ is the $(1,1)$-minor matrix of $A$. By the inductive hypothesis, $\det(\lambda I_{m-1}-A_{11})$ has decreasing coefficients and constant coefficient $c_2\cdot d_2\cdots d_{m-1}$.
If we rewrite $a_0 = \frac{c_1}{c_2} \cdot d_1 \cdot a_1 < a_1$, then the claim follows. We now compute $$ \frac{c_i}{c_{i+1}} \cdot d_i = \frac{f(y_{i-1})}{f(y_i)}\frac{1-2\theta y_i}{f(y_{m-1})} = \frac{1-2\theta y_i}{1-\theta y_i} < 1, $$ which demonstrates that $c_id_i < c_{i+1}$, as claimed. By the Enestr\"om-Kakeya Theorem, the roots of $p(\lambda)$ are all in the open unit disc. Hence $DF^m$ is a contraction at $(y_1,\dots,y_m)$ for $\theta$ sufficiently close to $1$. Our numerical simulations suggest that the set of initial values $w^0$ that converge to this periodic point is large and has (nearly) full measure, as illustrated by the Feigenbaum diagrams in the next section. \section{Feigenbaum diagrams}\label{sec:Feigenbaum} The Dikin process is defined in arbitrary dimensions, so in our Feigenbaum diagrams we have to project $n$-dimensional $\omega$-limits onto one dimension. We have chosen to simply plot one single coordinate of the $\omega$-limit set. \begin{figure} \caption{Feigenbaum diagrams of the Dikin process for $n=3$. Top: a random coordinate of the $\omega$-limit set. Bottom: the middle coordinate of the ordered vector.} \label{bifudiag} \end{figure} The Feigenbaum diagrams seem to exhibit the usual structure of period doubling cascades of the logistic family $Q_\theta:x \mapsto 4\theta x(1-x)$, $\theta \in [0,1]$. It is well-known that for $Q_\theta$, between two period doubling bifurcations, there is a parameter where the critical point $c = \frac12$ is periodic. We proved above that this periodic point should then also appear as a stable periodic point in the Dikin process, provided the dimension exceeds the period. Since our examples have small dimension, we do not see much of the period doubling cascade of the logistic family. To illustrate the consequence of the choice of the projection, compare the Feigenbaum diagrams in Figure~\ref{bifudiag}. In the top figure we plot the $\omega$-limit set of a random coordinate. Below we choose the middle coordinate of the ordered vector.
We see that the process bifurcates at $\theta=2/3$, when a point of order two appears, and then at $\theta=0.849...$, when a point of order four appears. In the top figure, the diagram splits into five lines at $\theta=0.849...$, in the figure below it splits into four lines. The reason for this is that the point of order four is of the type $$ (r,s_2,1)\to(1,s_3,s_1)\to(r,1,s_2)\to(1,s_1,s_3)\to(r,s_2,1) $$ for values $s_1,s_2$ close to $r$ and $s_3$ close to $1$. We will plot the diagrams in the same way as the figure above, so the reader should keep in mind that, contrary to standard Feigenbaum diagrams, the period of a point may be smaller than the number of lines. The diagram indicates that the $\omega$-limit set gets positive measure at around $\theta\approx 0.91$ and that the cyclic point of period three appears at around $\theta\approx 0.95$. The coordinates of the period three point, for $\theta\approx 0.95$, are approximately $(0.2,0.6,1)$. In Section~\ref{locstabletheta1} we found that the period three point exists as soon as inequality \eqref{percondit} is satisfied. If $n=3$ and $\theta=0.95$ then $g(1/\theta-1)\approx 0.1900$, $g^2(1/\theta-1)\approx 0.5917$ and $1/2\theta\approx 0.5263$. Hence, the appearance of the period three point occurs a little before the threshold value of $\theta$ predicted by inequality \eqref{percondit}, but it is of the required form $(g(1/\theta-1),g^2(1/\theta-1),1)$. This is not surprising. We showed that a cyclic point of that form is stable as soon as the inequality is satisfied. The eigenvalues vary continuously with $\theta$ so the point cannot suddenly become unstable once $\theta$ decreases below the threshold given in inequality \eqref{percondit}.
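The quoted numbers are easily reproduced; the following sketch (ours) evaluates the iterates of $g$ at $\theta=0.95$ with $y_{m-1}=1/2\theta$.

```python
theta = 0.95
f = lambda x: x * (1 - theta * x)
g = lambda x: f(x) / f(1 / (2 * theta))  # scale by the maximum f(1/2θ) = 1/4θ

x0 = 1 / theta - 1
g1, g2, bound = g(x0), g(g(x0)), 1 / (2 * theta)
# g1 ≈ 0.1900, g2 ≈ 0.5917, bound ≈ 0.5263: inequality (percondit) just fails
assert abs(g1 - 0.1900) < 1e-4
assert abs(g2 - 0.5917) < 1e-4
assert g2 > bound
```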
\begin{figure} \caption{Feigenbaum diagrams of the Dikin process for $n=4$ and $n=5$.} \label{feigdim45} \end{figure} The Feigenbaum diagrams for $n=4$ and $n=5$ are similar to the diagram for $n=3$, and as it turns out, this holds in general for all $n>3$. The main difference between $n=3$ and $n>3$ is the appearance of a chaotic region for $0.95<\theta<1$. It is remarkable that a stable point of period three reappears around $\theta\approx 0.95$. For $n=4$ the stable cyclic point of period four appears at $\theta \approx 0.99$ and is still visible in this figure. For $n=5$ it appears only at $\theta\approx 0.999$ and it is not visible in this picture. To show that our analysis holds and that the periodic point does exist, we zoom in on step sizes in $(0.95,1)$ in the next figure. \begin{figure} \caption{Zoom of the Feigenbaum diagram for step sizes $\theta\in(0.95,1)$.} \label{feigdimzoom} \end{figure} The diagram shows the cyclic point of period five for $\theta$ near $1$. This concludes our analysis of the Dikin process $w^k$. Now to prove that this analysis makes sense, we still need to check that the primal-dual AFS method displays the same type of chaotic behavior as $w^k$. We will do that in the next and final section. \section{Comparison to primal-dual AFS} The iterative process $w^k$ has been derived by a linearization of the primal-dual AFS method. To show that our bifurcation analysis bears any relevance, we need to verify that a similar route to chaos occurs in actual LO problems. There is one complication. The Dikin process involves a parameter $\theta$ that defines the step-size with respect to the maximum $\alpha_{\max}=\frac{\norm{xs}}{\max{xs}}$. So if we consider the primal-dual AFS method, then we should set our step size accordingly. This means that $\alpha$ should not be constant, as it is in the original primal-dual AFS method~\cite{JansenRoosTerlaky}, but equal to $\theta\alpha_{\max}$. We modify the AFS method accordingly and put $\alpha=\theta\alpha_{\max}$.
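One step of the modified method can be sketched as follows (our own sketch, not the pseudo-code of the figure; the feasible pair is a hypothetical choice for the Castillo-Barnes problem considered below). Since $\Delta x\perp\Delta s$, the duality gap drops by exactly $\alpha\norm{xs}$ in each step, which the sketch verifies.

```python
import numpy as np

def afs_step(A, x, s, theta):
    """One primal-dual AFS step with the modified step size
    alpha = theta * alpha_max, alpha_max = ||xs|| / max(xs)."""
    d = np.sqrt(x / s)
    v = np.sqrt(x * s)
    p_v = -v**3 / np.linalg.norm(v**2)
    AD = A * d
    Q = AD.T @ np.linalg.solve(AD @ AD.T, AD)  # projector onto row space of AD
    dx = d * (p_v - Q @ p_v)                   # null-space part: primal direction
    ds = (Q @ p_v) / d                         # row-space part: dual direction
    alpha = theta * np.linalg.norm(x * s) / np.max(x * s)
    return x + alpha * dx, s + alpha * ds, alpha

# Castillo-Barnes data with a hypothetical strictly feasible pair (A @ x = 0).
A = np.array([[1., 2., -3., -2., -1.],
              [-1., 2., -1., -1., -1.]])
x = np.array([1., 2., 0.5, 1., 1.5])
s = np.array([7.5, 3., 14.5, 7.5, 2.5])

gap = x @ s
x1, s1, alpha = afs_step(A, x, s, theta=0.5)
# because dx @ ds = 0, the gap is reduced by exactly alpha * ||xs||
assert np.isclose(x1 @ s1, gap - alpha * np.linalg.norm(x * s))
assert np.allclose(A @ x1, 0)                  # primal feasibility preserved
```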
We take the same example as considered by Castillo and Barnes in \cite{Barnes}: \begin{equation} \begin{matrix} \min {10x_1+10x_2+5x_3+x_4-x_5}\\ \text{under the constraints}\\ x_1+2x_2-3x_3-2x_4-x_5=0\\ -x_1+2x_2-x_3-x_4-x_5=0\\ x\geq 0 \end{matrix} \end{equation} We take the same initial vectors $x_0$ and $y_0$ as Castillo and Barnes and run \textit{our} modified primal-dual AFS method that we describe in pseudo-code below. The numerical task of computing the limit of the AFS process is not trivial, especially for larger values of the step size, because $x^k$ rapidly converges to zero which leads to numerical problems, caused by inverting matrices that are ill-conditioned. Castillo and Barnes developed analytic formulas that enabled them to still compute Feigenbaum diagrams with high precision. Such an analytic exercise is beyond the scope of our paper. We stop the computation once the duality gap reaches $10^{-10}$. \begin{figure} \caption{Primal--dual affine scaling algorithm with modified step size $\alpha$. In our computations we put $\varepsilon=10^{-10}$.} \label{fig:6} \end{figure} We have computed the Feigenbaum diagram for the scaled process $\frac {w^k}{\max w^k}$ that is given in Figure~\ref{feigprimdual}. The diagram below depicts the limit of the fourth coordinate. There is a bifurcation for $\theta=2/3$ and another bifurcation close to $\theta=0.86$, followed by a chaotic regime. At the end of the diagram, for values of $\theta$ close to $1$, we find a stable periodic point. This is similar to the diagrams that we computed earlier for our process $w^k$, although the periodic point at the end of the diagram is period three instead of period five. The Feigenbaum diagram above, which depicts the second coordinate, shows a different picture. The diagram bifurcates at $\theta=2/3$ but the two branches of the graph intersect twice between $2/3$ and $0.86$: once at $\theta\approx 0.69$ and once at $\theta\approx 0.78$.
At these values of $\theta$, the limit lands exactly on the unstable fixed point. We already noticed that this point is weakly repelling, which is why the second coordinate has not yet fully converged to its $\omega$-limit, even when the duality gap is $10^{-10}$. \begin{figure} \caption{Feigenbaum diagrams for the Castillo-Barnes LO problem. The horizontal coordinate represents $\theta$. The vertical axis contains the $\omega$-limit of a coordinate of the scaled vector $w$. Second coordinate above. Fourth coordinate below.} \label{feigprimdual} \end{figure} The dual problem is degenerate \begin{equation} \begin{matrix} \max {0}\\ \text{under the constraints}\\ y_1-y_2\leq 10\\ 2y_1+2y_2\leq 10\\ -3y_1-y_2\leq 5\\ -2y_1-y_2\leq 1\\ -y_1-y_2\leq -1 \end{matrix} \end{equation} All feasible points solve the dual problem. If $\theta\leq 2/3$ then the process $y^k$ converges to $(3.0513, 0.5522)$ but if $\theta$ increases beyond $2/3$ then the process no longer converges to a single point. However, $y^k$ remains within the feasible set even for large values of $\theta$. Figure \ref{ylimit} contains the limit set that we computed for $\theta=0.94$. It has the contours of a H\'enon-like strange attractor. The image of the attractor is slightly blurred since the orbit has not fully converged yet. \begin{figure} \caption{The omega-limit set of the vector $y$ in the dual problem for $\theta=0.94$ forms a strange attractor in the feasible set.} \label{ylimit} \end{figure} It seems that the process $w^k$ that we have considered in this paper represents the iterations of primal-dual AFS rather well. We have tested other LO problems as well and we find similar Feigenbaum diagrams for the vector $w$, regardless of whether the dual problem is degenerate or not. The algorithm converges to an optimal solution for relatively high values of $\theta$, so for a step-size that is close to $\alpha_{\max}$.
This may indicate that a step size larger than $1/(15\sqrt n)$ is possible if $\alpha$ is not taken to be constant but is allowed to vary with the iterates, as in our computations. \section{Conclusion} We have presented the Dikin process as an archetype for general interior point methods. The Dikin process is a one-parameter family with a route to chaos that bears similarity to the logistic family, and it agrees with the chaotic behaviour of interior point methods that has been previously observed. \end{document}
\begin{document} \title{Weakened linearity for quantum fields} \author{Peter Morgan} \address{Physics Department, Yale University, CT 06520.} \ead{[email protected]} \begin{abstract} There are still no interacting models of the Wightman axioms, suggesting that the axioms are too tightly drawn. Here a weakening of linearity for quantum fields is proposed, with the algebra still linear but with the quantum fields no longer required to be tempered distributions, allowing explicit interacting quantum field models. Interacting quantum fields should be understood to be nonlinear quantum fields in this sense, because a set of effective field theories encodes a dependence on the energy scale of measurement --- which is a nontrivial property of the test functions --- so that correlation functions are implicitly nonlinear functions of test functions in the conventional formalism. In Local Quantum Physics terms, the algebraic models constructed here do not satisfy the additivity property. Finite nonlinear deformations of quantized electromagnetism are constructed as examples. \end{abstract} \pacs{03.70.+k, 11.10.Gh} \submitto{\JPA} \maketitle \newcommand\Half{{\frac{1}{2}}} \newcommand\Intd{{\mathrm{d}}} \newcommand\eqN{{\,\stackrel{\mathrm{N}}{=}\,}} \newcommand\PP[1]{{(\hspace{-.27em}(#1)\hspace{-.27em})}} \newcommand\PPs[1]{{(\hspace{-.4em}(#1)\hspace{-.4em})}} \newcommand\RR {{\mathrm{I\hspace{-.1em}R}}} \newcommand\CC{{{\rm C}\kern -0.5em \vrule width 0.05em height 0.65em depth -0.03em \kern 0.45em}} \newcommand\kT{{{\mathsf{k_B}} T}} \newcommand\RE{{\mathrm{Re}}} \newcommand\IM{{\mathrm{Im}}} \section{Introduction} The free Klein-Gordon quantum field is an operator valued \emph{linear} map from a suitable space of functions, $\hat\phi:f\mapsto\hat\phi_f$. 
We will take $f$ to be from a Schwartz space of functions\cite[\S II.1.2]{Haag}, so that $f(x)$ is infinitely differentiable and, together with all its derivatives, decreases faster than any power as $x$ moves to infinity in any direction. For the free Klein-Gordon quantum field, $\hat\phi$ is then a tempered distribution. This is the linearity we will weaken: we will allow the operator-valued map $\hat\phi:f\mapsto\hat\phi_f$ to be nonlinear, so that the linear operators $\hat\phi_f$, $\hat\phi_g$ and $\hat\phi_{f+g}$ will in general not satisfy the linear dependence $\hat\phi_f+\hat\phi_g = \hat\phi_{f+g}$. With this weakening, we cannot take a quantum field to be an operator-valued distribution $\hat\phi(x)$; we will be concerned \emph{only} with operators $\hat\phi_f$. Note, however, that allowing $\hat\phi$ to be nonlinear does not weaken the linearity of the algebra generated by the operators $\hat\phi_f$, and we will be able to construct a linear Hilbert space representation of the algebra of observables. The construction here is thus different from the nonlinear relativistic approach of Kibble, for example, who introduces a nonlinear Hamiltonian operator\cite{Kibble}. The nonlinear quantum fields constructed here do not satisfy the additivity property of Local Quantum Physics \cite[\textit{Axiom} \textbf{B}, \S III.1]{Haag}. This axiom requires that two algebras of observables, associated with regions $\mathcal{O}_1$ and $\mathcal{O}_2$ in space-time, together generate the algebra of observables associated with their union, $\mathcal{A}(\mathcal{O}_1\cup\mathcal{O}_2)= \mathcal{A}(\mathcal{O}_1)\vee\mathcal{A}(\mathcal{O}_2)$, but this is generally not possible if, for $f$ and $g$ with support in $\mathcal{O}_1$ and $\mathcal{O}_2$ respectively, $\hat\phi_{f+g}\not=\hat\phi_f+\hat\phi_g$. The construction of this paper therefore casts some doubt on the necessity of the additivity property as an axiom of quantum field theory.
Locality and Lorentz covariance, however, will be preserved absolutely. The algebraic structure of a free linear quantum field is given by the hermitian inner product corresponding to the commutator, $[\hat a^{ }_g, \hat a_f^\dagger]=(f,g)$, with $\hat\phi_f=\hat a^{ }_f+\hat a_f^\dagger$. A free linear quantum field is local just because $[\hat\phi_f,\hat\phi_g]=(g,f)-(f,g)$ is zero whenever the test functions $f$ and $g$ have space-like separated supports. Nonlinearity will be introduced in two ways, firstly by the simple expedient of taking the commutator to be a sum of a number of inner products such as, for example, without worrying here about constants, \begin{eqnarray} [\hat a^{ }_g, \hat a_f^\dagger]&=&\xi(f,g)=(f,g)+(f+f^2,g+g^2)+(f^2,g^2)+\left(f(f,f),g(g,g)\right)+ \cr &&\qquad (f+\partial_\mu f\partial^\mu f,g+\partial_\mu g\partial^\mu g)+..., \end{eqnarray} which will result in a local nonlinear quantum field just because invariant polynomials in the field and its derivatives such as $f^n(x)=[f(x)]^n$ or $\partial_\mu f\partial^\mu f$ have support contained in $\mathrm{Supp}(f)$. $[\hat\phi_f,\hat\phi_g]=\xi(g,f)-\xi(f,g)$ is zero, as for the free field, whenever the test functions $f$ and $g$ have space-like separated supports. The constraints of locality, positive semi-definiteness and Lorentz invariance on the form of $\xi(f,g)$ are satisfied by many models, and it will turn out to be as easy to construct a vacuum state over this algebra as over the linear free field, allowing the GNS construction of a Hilbert space. 
Secondly, we can deform the simple relationship $\hat\phi_f=\hat a^{ }_f+\hat a_f^\dagger$, setting $\hat\phi_f$ to be an arbitrary self-adjoint operator-valued function of $\hat a^{ }_f+\hat a_f^\dagger$, $\hat a^{ }_{\mathcal{P}_i[f]}+\hat a_{\mathcal{P}_i[f]}^\dagger$, ..., \begin{equation} \hat\phi_f=\hat F(\hat a^{ }_f+\hat a_f^\dagger,\hat a^{ }_{\mathcal{P}_1[f]}+\hat a_{\mathcal{P}_1[f]}^\dagger, \hat a^{ }_{\mathcal{P}_2[f]}+\hat a_{\mathcal{P}_2[f]}^\dagger, X_1(f),X_2(f), ...), \end{equation} where $\mathrm{Supp}(\mathcal{P}_i[f])\subseteq\mathrm{Supp}(f)$, and $X_i(f)$ are arbitrary Poincar\'e invariant scalar functions of $f$ --- microcausality is satisfied whatever such scalar functions are introduced. In the general case this deformation is quite nontrivial, being more general than a nonlinear coordinate transformation. The energy scale of an experiment is essentially a pragmatic matter that is obvious to an experimenter: an experiment deals with phonons on a lattice, with atomic energy levels, with nuclear energy levels, etc., without an exact explicit discussion being necessary, and we can choose the cutoff appropriately for a given experiment without too much detailed concern. From a quantum field perspective, however, the energy scale of an experiment is a very non-detailed measure of the structure of the test functions involved in its description: if a test function appropriate to a description of an experiment determines an effective real-space length scale, or if the Fourier transform of the same (or another) test function is concentrated at a particular energy scale, then such scales pragmatically determine what effective field model we use.
Hence, there is a \emph{prima facie} case that the correlation functions of a quantum field are nonlinearly determined by properties of the test functions that describe an experiment, because the test functions are involved in an explicit description of correlation functions \emph{not only} by smearing, so that interacting quantum fields should be understood to be nonlinear quantum fields. This significantly reconceptualizes our understanding of interacting quantum fields. We will not here concern ourselves with the Hamiltonian operators of the theories we discuss, because the Hamiltonian is a global (non)observable, so that any constraint on it is essentially theoretical. Additionally, the Hamiltonian is inessential to the algebraic constructions of quantum field theories given here. Instead, we will take $n$-measurement correlation functions to be the observables of the theory, with empirical adequacy achieved if a theory can accurately model experimental correlations. Section \ref{FreeFieldPreliminaries} first discusses free quantum fields, then section \ref{WeakenedLinearity1} introduces a large class of models that weaken the linearity of the quantum field by the introduction of a nonlinear inner product, and section \ref{WeakenedLinearity2} discusses the introduction of a nonlinear map between $\hat\phi_f$ and creation and annihilation operators. Section \ref{EMfield} applies the methods of section \ref{WeakenedLinearity1} to an electromagnetic field, leaving the application of the methods of section \ref{WeakenedLinearity2} to the future. 
\section{Free field preliminaries}\label{FreeFieldPreliminaries} A simple way to construct the free Klein-Gordon quantum field \cite{MorganCKG} is to project $\hat\phi_f$ into two parts, $\hat\phi_f=\hat a^{ }_f+\hat a^\dagger_f$, and specify the algebraic properties of $\hat a^\dagger_f$ and $\hat a^{ }_f$ by the commutation relations \begin{equation}\label{FreeCommutationRelations} \Bigl[\hat a^{ }_g,\hat a^\dagger_f\Bigr]=(f,g),\qquad \Bigl[\hat a^{ }_g,\hat a^{ }_f\Bigr]=0. \end{equation} The manifestly Poincar\'e invariant hermitian inner product $(f,g)$ is given by \begin{equation}\label{ScalarInnerProduct} (f,g) = \hbar\int \frac{\Intd^4k}{(2\pi)^4} 2\pi\delta(k^\mu k_\mu-m^2)\theta(k_0)\tilde f^*(k)\tilde g(k). \end{equation} This fixes the algebraic structure of the observables $\hat\phi_f$, $[\hat\phi_f,\hat\phi_g] = i\omega(f,g)$, where $\omega(f,g)=i((f,g)-(g,f))=-\omega(g,f)$. Note that the self-adjoint operators $\hat\phi'_f=i(\hat a^{ }_f-\hat a^\dagger_f)$ are taken \emph{not} to be observable (if they were observable then we would be able to send messages faster than light because $[\hat\phi'_f,\hat\phi_g]=i((g,f)+(f,g))$ is non-zero when $f$ and $g$ have space-like separated supports\footnote{We can eliminate the creation and annihilation operators (which are too prominent in many presentations of the free quantized Klein-Gordon field), by presenting the algebra directly as $[\hat\phi_f,\hat\phi_g] = i\omega(f,g)$ and presenting the vacuum state using the generating function $\left<0\right|e^{i\lambda\hat\phi_f}\left|0\right>= e^{-\Half\lambda^2(f,f)}$, with $\omega(f,g)$ and $(f,g)$ defined as above. Together these are as sufficient to fix the Wightman functions of the theory as the construction in the main text is.}). The vacuum expectation values are fixed by the trivial action of the operators $\hat a^{ }_f$ on the vacuum state, $\hat a^{ }_f\left|0\right>=0$, and the normalization $\left<0\right|\!\left.0\right>=1$. 
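As a numerical sanity check on the inner product (\ref{ScalarInnerProduct}) (a sketch only: $1+1$ dimensions, $\hbar=1$, with Gaussian test functions and sign conventions of our own choosing), the delta function can be integrated out, leaving $(f,g)=\int\frac{\Intd k}{2\pi}\,\tilde f^*(\omega_k,k)\tilde g(\omega_k,k)/2\omega_k$ with $\omega_k=\sqrt{k^2+m^2}$; the Gram matrix of several test functions is then hermitian positive semi-definite, as the GNS construction requires:

```python
import numpy as np

m = 1.0
k = np.linspace(-12, 12, 4001)          # spatial momentum grid
w = np.sqrt(k**2 + m**2)                # on-shell energy
dk = k[1] - k[0]

def f_tilde(k0, k1, x0=0.0, sigma=1.0):
    """Fourier transform of a Gaussian test function centred at
    spatial position x0 (width sigma; conventions are ours)."""
    return 2*np.pi*sigma**2 * np.exp(-0.5*sigma**2*(k0**2 + k1**2)) \
           * np.exp(-1j*k1*x0)

def ip(x0a, x0b):
    """(f_a, f_b): the mass-shell integral, discretized."""
    integrand = np.conj(f_tilde(w, k, x0a)) * f_tilde(w, k, x0b) / (2*w)
    return np.sum(integrand) * dk / (2*np.pi)

centres = [0.0, 1.0, 2.5, 4.0]
G = np.array([[ip(a, b) for b in centres] for a in centres])

assert np.allclose(G, G.conj().T)              # hermitian
assert np.linalg.eigvalsh(G).min() > -1e-12    # positive semi-definite
assert G[0, 0].real > 0                        # (f,f) > 0: a variance
```

The centres and widths are arbitrary; the point is only that the Gram matrix inherits positive semi-definiteness from the positive weight $1/2\omega_k$ on the mass shell.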
To compute any vacuum expectation value, apply the commutation relations above repeatedly, eliminating any terms in which $\hat a^{ }_f\left|0\right>$ or $\left<0\right|\hat a^\dagger_f$ appear, until we obtain a number by finally applying $\left<0\right|\!\left.0\right>=1$. For example, $\left<0\right|\hat\phi_f\hat\phi_g\left|0\right>= \left<0\right|\hat a^{ }_f\hat a^\dagger_g\left|0\right>= \left<0\right|((g,f)+\hat a^\dagger_g\hat a^{ }_f)\left|0\right>=(g,f)$. The commutator algebra and the specification of the vacuum state fix the Wightman functions of the theory at all times, which effectively encodes all dynamical information, so that a Hamiltonian and Lagrangian are superfluous in this approach to quantum fields. Since the algebra and the definition of the vacuum are the only structures in this approach, those are what we have to deform to create an interacting field theory. The free field algebra determines that the probability density associated with an observable $\hat\phi_f$ in the vacuum state is Gaussian. The characteristic function can be computed as $\left<0\right|e^{i\lambda\hat\phi_f}\left|0\right>= e^{-\Half\lambda^2(f,f)}$ by applying a Baker-Campbell-Hausdorff formula, leading to the probability density $\frac{1}{\sqrt{2\pi(f,f)}}\exp{\left(-\frac{x^2}{2(f,f)}\right)}$, which is well-defined if we take $f$ to be a Schwartz space function, but not if we take $f$ to be a point-like delta function. In a similar way, we can compute the joint quasiprobability density associated with two observables $\hat\phi_f$ and $\hat\phi_g$ in the vacuum state, which is also Gaussian. 
The characteristic function is $\left<0\right|e^{i\lambda\hat\phi_f+i\mu\hat\phi_g}\left|0\right>= e^{-\Half(\lambda f+\mu g,\lambda f+\mu g)}= \exp{\left[-\Half\left[\lambda^2(f,f)+2\lambda\mu\textrm{Re}(f,g)+\mu^2(g,g)\right]\right]}$, leading to the quasiprobability density \begin{equation} \frac{\exp{\left(-\Half\frac{x^2(g,g)-2xy\mathrm{Re}(f,g)+y^2(f,f)} {(f,f)(g,g)-\left|\mathrm{Re}(f,g)\right|^2}\right)}} {2\pi\sqrt{(f,f)(g,g)-\left|\mathrm{Re}(f,g)\right|^2}}. \end{equation} Note that this quasiprobability is independent of the imaginary parts of $(f,g)$. Finally for the vacuum state, for a set of observables $\{\hat\phi_{f_j}\}$ we obtain a characteristic function $\left<0\right|e^{i\sum_j \lambda_j\hat\phi_{f_j}}\left|0\right>= e^{-\Half\underline{\lambda}^T F\underline{\lambda}}$, where the matrix $F_{ij}=\mathrm{Re}(f_i,f_j)$ describes the relative geometry of the $n$ joint measurements for the purposes of the free field theory, leading to the $n$-measurement joint quasiprobability density \begin{equation}\label{vacuumProb} \frac{e^{-\Half \underline{x}^T F^{-1}\underline{x}}} {\sqrt{(2\pi)^n \mathrm{det}(F)}}. \end{equation} The singular condition $\mathrm{det}(F)=0$ is fairly innocuous, since it is the expectation values that are significant rather than any characteristic functions that can be used to generate them. For the non-vacuum state $\hat a^\dagger_g\left|0\right>/\sqrt{(g,g)}$ and a set of observables $\{\hat\phi_{f_j}\}$, we obtain a characteristic function $\left<0\right|\hat a^{ }_g e^{i\sum_j \lambda_j\hat\phi_{f_j}}\hat a^\dagger_g\left|0\right>/(g,g)= (1-|\underline{\lambda}.\underline{S}|^2)e^{-\Half\underline{\lambda}^T F\underline{\lambda}}$, where $S_i=(f_i,g)/\sqrt{(g,g)}$ describes the relation between the state preparation and the chosen measurements.
This leads to the $n$-measurement joint quasiprobability density \begin{equation}\label{vacuumplus1Prob} \left[|\underline{x}^T F^{-1}\underline{S}|^2+(1-\underline{S}^\dagger F^{-1}\underline{S})\right] \frac{e^{-\Half \underline{x}^T F^{-1}\underline{x}}} {\sqrt{(2\pi)^n \mathrm{det}(F)}}. \end{equation} The imaginary parts of $(f_i,g)$ contribute to equation (\ref{vacuumplus1Prob}), which consequently may not be positive semi-definite\footnote{If instead of the inner product of equation (\ref{ScalarInnerProduct}), we use the real form $(f,g)+(g,f)$, we still obtain a quantum field theory, but it is classical in the sense that $[\hat\phi_f,\hat\phi_g]=0$ whatever the space-time relationship between $f$ and $g$, and equation (\ref{vacuumplus1Prob}) is accordingly positive semi-definite. For a comparable perspective on the relationship between random fields and quantum fields see \cite{MorganCKG}.}. It is straightforward, but progressively more time-consuming, to compute $n$-measurement joint quasiprobability densities for higher states, which introduce increasing deviations from a Gaussian distribution. We can in principle also compute probability densities straightforwardly for higher order observables such as $\hat\phi_{f_1}\hat\phi_{f_2}+\hat\phi_{f_2}\hat\phi_{f_1}$. The intention of this rather lengthy elementary discussion of characteristic functions and quasiprobabilities is to give some sense of how we can compute empirically relevant results quite effectively by only considering the relations between explicit measurement and state descriptions without ever considering operator-valued distributions $\hat\phi(x)$. We have exclusively used inner products between the functions $f_i$ and $g$ that were used above to construct measurements and states. Using test functions universally has the useful effect of ensuring manifest Poincar\'e invariance of the resulting formalism very straightforwardly.
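Equation (\ref{vacuumplus1Prob}) integrates to unity because the Gaussian expectation of the prefactor is $\underline{S}^\dagger F^{-1}\underline{S}+(1-\underline{S}^\dagger F^{-1}\underline{S})=1$. A small quadrature sketch with toy real values for $F$ and $S$ (illustrative data of our own, not derived from any particular test functions):

```python
import numpy as np

F = np.array([[2.0, 0.5],
              [0.5, 1.0]])       # Re(f_i, f_j): toy 2x2 Gram matrix
S = np.array([0.3, 0.4])         # (f_i, g)/sqrt((g,g)): toy data
Finv = np.linalg.inv(F)
norm = np.sqrt((2*np.pi)**2 * np.linalg.det(F))

xs = np.arange(-10, 10, 0.05)
X, Y = np.meshgrid(xs, xs, indexing="ij")
P = np.stack([X, Y], axis=-1)                    # grid of x vectors

quad = np.einsum("...i,ij,...j->...", P, Finv, P)
gauss = np.exp(-0.5*quad) / norm                 # eq. (vacuumProb)
pref = np.einsum("...i,i->...", P, Finv @ S)**2 + (1 - S @ Finv @ S)
dens = pref * gauss                              # eq. (vacuumplus1Prob)

dA = 0.05**2
assert abs(gauss.sum()*dA - 1) < 1e-3   # vacuum density is normalized
assert abs(dens.sum()*dA - 1) < 1e-3    # one-particle density is too
assert dens.min() >= 0                  # positive for real F and S
```

For real $F$ and $S$ the density is a genuine probability density; complex $S_i$ reintroduce the imaginary parts of $(f_i,g)$ and with them the possible failure of positivity noted above.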
Note that we have used the term ``$n$-measurement'' correlations instead of ``$n$-point'', because we never measure anything at a point, and the idealization of point-like measurements will become impossible when we introduce nonlinearity. All calculations involve only Schwartz space functions, which are much easier to manipulate than distributions, in particular because Schwartz space is closed under multiplication. In a simple-minded way, it is arguable that the infinities profusely generated by the conventional perturbation of free quantum fields are caused by the introduction of higher than quadratic products of distributions. In more abstract terms, for free fields the properties of the vacuum state define a state $\varphi_0:A\mapsto \left<0\right|A\left|0\right>$ over the $\star$-algebra $\mathcal{A}$ generated by a finite number of creation and annihilation operators, a linear map satisfying $\varphi_0(A^\dagger)=\overline{\varphi_0(A)}$, $\varphi_0(A^\dagger A)\ge 0$, $\varphi_0(1)=1$, which allows the Gelfand-Naimark-Segal construction of a pre-Hilbert space acted on by $\mathcal{A}$, which can be closed in the norm to obtain a Hilbert space $\mathcal{H}_{\varphi_0}$ (see Haag\cite[\S III.2]{Haag}). For free fields, $\varphi_0(A)=\left<0\right|A\left|0\right>$ satisfies $\varphi_0(A^\dagger A)=\left<0\right|A^\dagger A\left|0\right>\ge 0$ because \begin{equation}\label{AlgebraIP} \left<0\right|\Bigg[\prod_{k=1}^K \hat a^{ }_{f_k}\Bigg] \Bigg[\prod_{j=1}^J \hat a^\dagger_{g_j}\Bigg]\left|0\right>=\delta_{J,K}\mathrm{per}[(g_j,f_k)], \end{equation} where $\mathrm{per}[(g_j,f_k)]$ is the \textbf{\textsf{permanent}}\footnote{The permanent of a $K\times K$ matrix $M$ is a sum over the symmetric group, $\mathrm{per}(M)=\sum_{\sigma\in S_K} M_{1\sigma(1)}M_{2\sigma(2)}...M_{K\sigma(K)}$. This is the determinant without the sign of the permutation. 
The normalized permanent $\mathrm{per}[(g_j,g_k)]/\prod_{i=1}^K(g_i,g_i)$ of a complex hermitian positive semi-definite matrix that is generated using inner products $(g_j,g_k)$ measures how close the $K$ functions $g_i$ are to being parallel, independently of the relative lengths $(g_i,g_i)$ of the functions, except in the singular case when $\prod_{i=1}^K(g_i,g_i)=0$. If the functions are all parallel, the normalized permanent is $K!$; if they are all orthogonal, the normalized permanent is $1$. Comparably, the normalized determinant is zero if \emph{any} subset of the functions is linearly dependent; if all the functions are orthogonal the normalized determinant is $1$.} of the $K\times K$ complex matrix $(g_j,f_k)$. It is well-known\cite{Minc,MarcusNewman} that \begin{eqnarray} &&\mathcal{S}^{\otimes K}\!\times\!\mathcal{S}^{\otimes K}\rightarrow \CC; \quad(g_1\otimes ...\otimes g_K,f_1\otimes ...\otimes f_K)\mapsto \mathrm{per}[(g_j,f_k)], \end{eqnarray} is a complex hermitian positive semi-definite inner product on the symmetrized tensor product space $\mathcal{S}^{\otimes K}$, so that equation (\ref{AlgebraIP}) defines a complex hermitian positive semi-definite inner product on a direct sum of symmetrized tensor product spaces. Any operator constructed as a multinomial in $\hat\phi_{f_i}$ is not in the algebra $\mathcal{B}(\mathcal{H}_{\varphi_0})$ of bounded observables acting on $\mathcal{H}_{\varphi_0}$, so we generally have to pay attention to the domain of $A\in\mathcal{A}$. 
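The permanent and the normalized permanent of the footnote can be checked directly (a sketch in which ordinary complex dot products stand in for the inner products $(g_j,g_k)$):

```python
import numpy as np
from itertools import permutations

def per(M):
    """Permanent: the determinant sum without the permutation signs."""
    K = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(K)])
               for p in permutations(range(K)))

def gram(vectors):
    V = np.array(vectors)
    return np.conj(V) @ V.T      # (g_j, g_k) as complex dot products

def normalized_per(vectors):
    G = gram(vectors)
    return per(G) / np.prod(np.diag(G))

# Parallel functions: normalized permanent is K! = 6 for K = 3.
v = np.array([1.0 + 1j, 2.0, -1j])
parallel = [2*v, -1j*v, 0.5*v]
assert abs(normalized_per(parallel) - 6) < 1e-9

# Orthogonal functions: normalized permanent is 1.
orthogonal = [np.array([1.0, 0, 0]), np.array([0, 2.0j, 0]),
              np.array([0, 0, -3.0])]
assert abs(normalized_per(orthogonal) - 1) < 1e-9

# 2x2 cross-check: per([[a, b], [c, d]]) = a*d + b*c.
M = np.array([[1.0, 2.0], [3.0, 4.0]])
assert per(M) == 1*4 + 2*3
```

The sum over all $K!$ permutations is exponentially expensive, which is harmless here but worth noting for larger $K$.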
The insistence on at least a Banach $\star$-algebra structure for the algebra of observables is useful for analysis (allowing, for example, the extension of the action of the algebra of observables to the Hilbert space $\mathcal{H}_{\varphi_0}$), but for constructive calculations of expectation values, characteristic functions, and probability distributions in particular states, as above, if $\left<\psi\right|A\left|\psi\right>$ is finite for a normalized vector $\left|\psi\right>\in\mathcal{H}_{\varphi_0}$ then we can interpret $A$ as an observable for that state. This is a nontrivial extension of the pre-Hilbert space because, for example, the normalized vector $e^{\hat a^\dagger_g}\left|0\right>/\sqrt{e^{(g,g)}}$ gives us a finite state over $\mathcal{A}$. As well as extending the pre-Hilbert space, we have already implicitly extended the algebra $\mathcal{A}$ by using $\left<0\right|e^{i\lambda\hat\phi_f}\left|0\right>$ above as a characteristic function, since $e^{i\lambda\hat\phi_f}$ is not a polynomial in the field. \section{Weakened linearity I}\label{WeakenedLinearity1} Suppose now that we replace equation (\ref{FreeCommutationRelations}) by a commutation relation that depends nonlinearly on $f$ and $g$, \begin{equation}\label{NonlinearCommutationRelations} \Bigl[\hat a^{ }_g,\hat a^\dagger_f\Bigr]=\xi(f,g),\qquad \Bigl[\hat a^{ }_g,\hat a^{ }_f\Bigr]=0, \end{equation} where $\xi(f,g)$ must be complex hermitian positive semi-definite on Schwartz space (in the sense that the matrix $\xi(f_i,f_j)$ is complex hermitian positive semi-definite for any finite set of Schwartz space functions $\{f_i\}$). We will call $\xi(f,g)$ a ``nonlinear inner product''; the term ``inner product'' historically indicates a sesquilinear form, so we will always be explicit about nonlinearity. The operator valued map $\hat\phi:f\mapsto\hat\phi_f$ cannot be linear if $\xi(f,g)$ is nonlinear. 
The algebra $\mathcal{A}_d$ generated by $\hat\phi_f$ is still linear, but the linear dependence $\hat\phi_f+\hat\phi_g=\hat\phi_{f+g}$ generally does not hold. Essentially, for any set of vectors $\{g_i\}$ used to construct an operator in the deformed free field algebra, we obtain a complex hermitian positive semi-definite matrix $\xi(g_i,g_j)$. As a complex hermitian positive semi-definite matrix, it is a Gram matrix based on some other functions $\{f_i\}$ chosen so that $(f_i,f_j)=\xi(g_i,g_j)$. The action of the vacuum state on an operator $A^\dagger A$ in $\mathcal{A}_d$ that is constructed using $\{\hat a^\dagger_{g_i}\}$ is positive semi-definite, therefore, just because the action of the vacuum state on an operator constructed in the same way in $\mathcal{A}$ using $\{\hat a^\dagger_{f_i}\}$ is positive semi-definite. To ensure locality, \begin{equation} [\hat\phi_f,\hat\phi_g]=\xi(g,f)-\xi(f,g), \end{equation} must be zero when $f$ and $g$ have space-like separated supports. There is a wide range of possibilities for $\xi(f,g)$: we can use the sum of any number of complex hermitian positive semi-definite inner products such as \begin{equation} (f,g),\,(f+f^2,g+g^2),\,(f^2,g^2),\, ...,\,(f^n,g^n),\,..., \end{equation} just because the sum of positive semi-definite matrices is positive semi-definite. All these terms satisfy locality because $f^n$ has the same support as $f$, so that, for example, $\omega(f^n,g^n)$ is zero if $f$ and $g$ have space-like separated support. We can also introduce invariant polynomials in derivatives of the field, such as $\partial_\mu f\partial^\mu f$, which again have the same support as $f$. Furthermore, we need not restrict ourselves to one inner product $(f,g)$, we can introduce different mass Poincar\'e invariant inner products for different invariant polynomials in the field and its derivatives. 
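A discretized sketch of this Gram-matrix argument: with an ordinary $L^2$ product on a grid standing in for the Poincar\'e invariant inner products (a simplification of our own, purely for illustration), $\xi(f,g)=(f,g)+(f^2,g^2)$ gives a positive semi-definite matrix on a sample of test functions while visibly failing to be sesquilinear:

```python
import numpy as np

x = np.linspace(-8, 8, 1601)
dx = x[1] - x[0]

def ip(f, g):
    """Stand-in linear inner product: discrete L^2 on the grid.
    (The Poincare invariant products of the text are replaced by
    this simpler product purely for illustration.)"""
    return np.sum(f * g) * dx

def xi(f, g):
    """Nonlinear 'inner product' xi(f,g) = (f,g) + (f^2,g^2)."""
    return ip(f, g) + ip(f**2, g**2)

bumps = [np.exp(-(x - c)**2) for c in (0.0, 1.0, 2.0, 3.0)]

# Sum of two Gram matrices, hence positive semi-definite.
M = np.array([[xi(f, g) for g in bumps] for f in bumps])
assert np.linalg.eigvalsh(M).min() > -1e-10

# xi is not sesquilinear: xi(f+g, h) != xi(f, h) + xi(g, h),
# the difference being the cross term 2*(fg, h^2).
f, g, h = bumps[0], bumps[1], bumps[2]
assert abs(xi(f + g, h) - xi(f, h) - xi(g, h)) > 1e-3
```

Each term of $\xi$ contributes its own Gram matrix, so the sum is positive semi-definite term by term, exactly as argued above.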
If the free quantum field is a 4-vector or other nontrivial representation space of the Lorentz group, ``$f^n$'', perhaps contracted in some way, will usually require a different inner product than $f$ (see section \ref{EMfield} for a concrete example). In general, $\xi(f,g)$ can be a sum \begin{equation} \xi(f,g)=\sum_i (\mathcal{P}_i[f],\mathcal{P}_i[g])_i \end{equation} for a list of local functionals $\mathcal{P}_i$, satisfying $\mathrm{Supp}(\mathcal{P}_i[f])\subseteq\mathrm{Supp}(f)$, and a list of linear inner products $(\cdot,\cdot)_i$. That we cannot in general expect the linear dependencies $\hat\phi_f+\hat\phi_g=\hat\phi_{f+g}$ and $\hat\phi_{\lambda f}=\lambda\hat\phi_f$ to hold requires a fresh understanding of what we do when we describe a measurement using a function $f+g$ or $\lambda f$, which we must derive from the mathematical structure of the nonlinear inner product. In the linear case, we can imagine in folk terms that when we use the operator $\hat\phi_f$ we are asking how much $f$ ``resonates'' with the quantum state, insofar as the inner product of $f$ with the functions $g_i$ that are used to construct the state is a measure of similarity between the on-shell Fourier components of the functions. There is of course a minimal ``resonance'' of $f$ with vacuum state fluctuations. In the nonlinear case, in the same folk terms, the nonlinear inner product is a measure of similarity between not only the on-shell components of $f$ and $g_i$, but also between the on-shell components of $f^2$ and $g_i^2$, $f+f^2$ and $g_i+g_i^2$, etc. We cannot, therefore, just add the results of measuring $\hat\phi_f$ and $\hat\phi_g$ to compute what we would have observed if we had measured $\hat\phi_{f+g}$, because the nonlinear resonances are not taken into account by simple addition of the operators.
Analogously to equations (\ref{vacuumProb}) and (\ref{vacuumplus1Prob}), we can construct the quasiprobabilities \begin{eqnarray} &&\frac{e^{-\Half \underline{x}^T F^{-1}\underline{x}}} {\sqrt{(2\pi)^n \mathrm{det}(F)}}, \\ &&\left[|\underline{x}^T F^{-1}\underline{S}|^2+(1-\underline{S}^\dagger F^{-1}\underline{S})\right] \frac{e^{-\Half \underline{x}^T F^{-1}\underline{x}}} {\sqrt{(2\pi)^n \mathrm{det}(F)}},\\ &&F_{ij}=\mathrm{Re}\left[\xi(f_i,f_j)\right],\qquad S_i=\frac{\xi(f_i,g)}{\sqrt{\xi(g,g)}} \end{eqnarray} in which the only change, predictably enough, is that we replace the inner product $(f,g)$ by the ``nonlinear inner product'' $\xi(f,g)$ wherever it occurs. The probability densities generated for the vacuum state are still Gaussian (which will be addressed by the method of the next section), but, for example, the fall-off of the 2-measurement correlation coefficient with increasing distance is controlled by $\xi(f,g)$, so the fall-off is in general nontrivially different from the fall-off for the free field. For scalar functions $f(x)$ and $f_a(x)=f(x+a)$ representing two measurements at separation $a^\mu$, and supposing the dynamics is described by the inner product (\ref{ScalarInnerProduct}) with masses $m_i$, the 2-measurement correlation function is given by \begin{eqnarray} \xi(f,f_a)&=&\hbar\sum_i\int\widetilde{\mathcal{P}_i[f]}^*\widetilde{\mathcal{P}_i[f_a]} 2\pi\delta(k^\mu k_\mu-m_i^2)\theta(k_0)\frac{\Intd^4k}{(2\pi)^4}\cr &=&\hbar\sum_i\int\left|\widetilde{\mathcal{P}_i[f]}\right|^2 e^{-ik_\mu a^\mu} 2\pi\delta(k^\mu k_\mu-m_i^2)\theta(k_0)\frac{\Intd^4k}{(2\pi)^4}, \end{eqnarray} so with a suitable choice of $\mathcal{P}_i$, we have considerable control over the change of the 2-measurement correlation with increasing separation and for different functions $f$.
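A $1+1$-dimensional sketch of this fall-off (a single term $\mathcal{P}[f]=f$, purely spatial separation $a^\mu=(0,a)$, a Gaussian test function, and $\hbar=1$, all simplifications of our own): the correlation is the Fourier transform of the positive mass-shell weight $|\tilde f|^2/2\omega_k$, so its modulus is maximal at $a=0$ and falls off with separation:

```python
import numpy as np

m, sigma = 1.0, 1.0
k = np.linspace(-12, 12, 4001)     # spatial momentum grid
w = np.sqrt(k**2 + m**2)           # on-shell energy
dk = k[1] - k[0]

# |f~(w_k, k)|^2 for a Gaussian test function (our normalization).
weight = (2*np.pi*sigma**2)**2 * np.exp(-sigma**2*(w**2 + k**2))

def corr(a):
    """xi(f, f_a) for purely spatial separation a: the mass-shell
    Fourier transform of the positive weight |f~|^2 / (2 w)."""
    return np.sum(weight * np.exp(1j*k*a) / (2*w)) * dk / (2*np.pi)

a_vals = [0.0, 1.0, 2.0, 4.0]
c_vals = [abs(corr(a)) for a in a_vals]

# The modulus is maximal at zero separation and decreases with a.
assert c_vals[0] == max(c_vals)
assert c_vals[0] > c_vals[1] > c_vals[2] > c_vals[3]
```

Adding further terms $\mathcal{P}_i$ with different masses $m_i$ superposes weights with different mass-shell profiles, which is the ``considerable control'' over the fall-off noted above.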
\section{Weakened linearity II}\label{WeakenedLinearity2} If we observe non-Gaussian probability densities, we can model them in linear quantum field theory by acting on the vacuum state with as many creation operators as necessary, spread over as large a region of space-time as necessary, or by constructing representations of the weakened linear commutation relations that are unitarily inequivalent to the vacuum sector. This section discusses the nonlinear alternative already introduced in the introduction, which maps the creation and annihilation operators nonlinearly to the quantum field $\hat\phi_f$, \begin{equation} \hat\phi_f=\hat F(\hat a^{ }_f+\hat a_f^\dagger,\hat a^{ }_{\mathcal{P}_1[f]}+\hat a_{\mathcal{P}_1[f]}^\dagger, \hat a^{ }_{\mathcal{P}_2[f]}+\hat a_{\mathcal{P}_2[f]}^\dagger, X_1(f),X_2(f), ...), \end{equation} where $\mathrm{Supp}(\mathcal{P}_i[f])\subseteq\mathrm{Supp}(f)$, and $X_i(f)$ are arbitrary Poincar\'e invariant scalar functions of $f$. Microcausality is preserved, $[\hat\phi_f,\hat\phi_g]=0$ whenever $f$ and $g$ have space-like separated supports, because $[\hat a^{ }_{\mathcal{P}_i[f]}+\hat a_{\mathcal{P}_i[f]}^\dagger, \hat a^{ }_{\mathcal{P}_j[g]}+\hat a_{\mathcal{P}_j[g]}^\dagger]=0\ \forall\hspace{0.1em}i,j$, but if $\hat F$ includes a dependency on $(f,f)$, for example, there is a larger sense in which the algebra of observables is nonlocal. We take the set of observables to be the subalgebra of the algebra of operators generated by $\hat a^{ }_f$ and $\hat a_f^\dagger$ that is generated by $\hat\phi_f$ (as noted above, the set of observables in the linear free field case is generated by $\hat\phi_f$, not by the creation and annihilation operators). 
In the simplest case, we can set $G(\hat\phi_f)=\hat a^{ }_f+\hat a_f^\dagger$ for some invertible function $G(x)$; with this deformation, the Gaussian probability density $Pr(\hat a^{ }_f+\hat a_f^\dagger=x)=\exp{(-x^2/2(f,f))}/\sqrt{2\pi(f,f)}$ becomes \begin{equation} Pr(\hat\phi_f=y)=\frac{1}{\sqrt{2\pi(f,f)}}\exp{\left(-\frac{G(y)^2}{2(f,f)}\right)}G'(y). \end{equation} This simplest case is of course more-or-less trivial, but in the most general case the nonlinear map $\hat F$ is not so easily dismissed. Whether trivial or not, even for $G(x)=x-\mathrm{tanh}\,x$ we obtain a probability density with the double maximum characteristic of symmetry breaking, \begin{equation} Pr(\hat\phi_f=y)=\frac{1}{\sqrt{2\pi(f,f)}}\exp{\left(-\frac{(y-\mathrm{tanh}\,y)^2}{2(f,f)}\right)} (1-\mathrm{sech}^2\,y) \end{equation} (however this is not enough to claim that such a state corresponds to conventional symmetry breaking). Calculating $n$-measurement correlation functions in this superficially simple model for $n\ge 2$ is not straightforward. We have effectively constructed a class of quantum fields that is analogous to the class of integrable systems in classical field theory in that they are reducible to a free quantum field by nonlinear (and possibly microcausality preserving but otherwise nonlocal) maps. In other attempts to construct algebras of observables using the nonlinear operator-valued map $\hat\phi:f\rightarrow\hat\phi_f$, using algebra deformations similar to those of Arik-Coons type\cite{Quesne} (which work nicely in the one-dimensional case), I have not so far found it possible to construct quantum field algebras that are both microcausal and associative, which I have taken to be essential requirements. \section{Deformation of electromagnetism}\label{EMfield} The electromagnetic potential and Dirac spinors are not observable fields, so we will here deform the quantized electromagnetic field.
To avoid excessive complexity, we will use only the method of section \ref{WeakenedLinearity1}. The dynamics of the electromagnetic field in terms of a positive semi-definite inner product on test functions is given by Menikoff and Sharp\cite[equation (3.27)]{MenikoffSharp} (except for a missing factor of $(2\pi)^{-3}$ that is present in their equation (3.25)): \begin{eqnarray} (f_1,f_2)_{EM} &=& \hbar\int\frac{\Intd^4k}{(2\pi)^4} 2\pi\delta(k_\alpha k^\alpha)\theta(k_0) k^\mu\tilde f_{1\mu\beta}^*(k) k^\nu\tilde f_{2\ \nu}^{\ \beta}(k). \end{eqnarray} Note that $f_{1}$ and $f_{2}$ are \emph{not} electromagnetic field tensors; they are classical test functions that contribute to a description of measurement and/or state preparation of the quantized electromagnetic field. The electromagnetic field in an interacting theory of the sort introduced here is not measurable at a point, so we always have to consider $\hat\phi_f$. Supposing there is an observable 4-current field, and that $J_{1\mu}$ and $J_{2\mu}$ are test functions for it, we can introduce a massive free field inner product \begin{eqnarray} (J_1,J_2)_V &=& \hbar\int\frac{\Intd^4k}{(2\pi)^4} 2\pi\delta(k_\alpha k^\alpha-m^2)\theta(k_0) \left(\sigma_T k^\mu k_\nu-\sigma_S m^2\delta^\mu_\nu\right) \tilde J_{1\mu}^*(k)\tilde J_2^\nu(k),\cr&& \end{eqnarray} where $\sigma_T\ge\sigma_S\ge 0$ determine the relative significance of time-like and space-like components (relative to $k_\mu$) of the 4-current. Note that any test function component for which $(f,f)$ is zero is in effect infinitely suppressed in the free theory\footnote{The variance associated with the observable $\hat\phi_f$ in the vacuum state is $(f,f)$; if this is zero, then the observed value of $\hat\phi_f$ is always zero in the vacuum state (and indeed in every state).}, so $\sigma_S=\sigma_T$ makes only components orthogonal to $k_\mu$ significant and $\sigma_S=0$ makes only the component parallel to $k_\mu$ significant.
In terms of these free field inner products, we can introduce an interacting nonlinear inner product, \begin{eqnarray} &&(\;(J_1,f_1),(J_2,f_2)\;)_I = (f_1,f_2)_{EM}+(J_1, J_2)_V+\cr &&\qquad\lambda_1(J_1^\alpha+\kappa_1 J_{1\mu} f_1^{\mu\alpha}, J_2^\beta+\kappa_1 J_{2\nu} f_2^{\nu\beta})_V\ +\lambda_2(J_{1\mu} f_1^{\mu\alpha},J_{2\nu} f_2^{\nu\beta})_V+\cr &&\qquad\lambda_3(\epsilon^{\mu\rho\sigma\alpha} J_{1\mu} f_{1\rho\sigma}, \epsilon^{\nu\tau\upsilon\beta} J_{2\nu} f_{2\tau\upsilon})_V \end{eqnarray} with $\lambda_1$, $\lambda_2$, and $\lambda_3$ all $\ge 0$, and of course higher order terms are possible. Degrees of freedom that make no contribution to a noninteracting inner product may make a contribution after we introduce a new term to a nonlinear inner product. For example, Fourier components of $J_1$ that are not on mass-shell, so that they make no contribution to $(J_1,J_2)_V$, may contribute to the on mass-shell Fourier components of $J_{1\mu} f_1^{\mu\alpha}$. Introducing nonlinearity in this way, therefore, effectively adds new degrees of freedom as well. Polynomial invariants in derivatives of both $J$ and $f$ can also be added, such as $(J_1^\alpha+\kappa_2 \partial_\mu f_1^{\mu\alpha}, J_2^\beta+\kappa_2 \partial_\nu f_2^{\nu\beta})_V$ or $(\partial_{[\alpha} J_{1\mu]}+\kappa_3 f_{1\alpha\mu}, \partial_{[\beta} J_{2\nu]}+\kappa_3 f_{2\beta\nu})_{EM}$, again with higher orders as necessary. All the nonlinear terms introduced above can result in correlations between the current and the electromagnetic field. In the noninteracting case, the inner product between test functions $(J_1,0)$ and $(0,f_2)$ will always be zero, so there is no correlation in the vacuum state between 4-current observables and electromagnetic field observables, but with the introduction of the nonlinear terms above there will generally be correlations between 4-current observables and electromagnetic field observables in the vacuum state.
Such interactions between the 4-current and the electromagnetic field through the action of nonlinearity in this approach are not immediately comparable to the description of correlations in conventional perturbation theory through the annihilation and creation of photon and charge lines in Feynman diagrams. If there is also an observable axial 4-vector, and $S_{1\mu}$ and $S_{2\mu}$ are test functions for it, quite a few more terms become possible in a nonlinear inner product, even without introducing derivatives, \begin{eqnarray}\label{bigI} &&(\;(J_1,S_1,f_1),(J_2,S_2,f_2)\;)_I = (f_1,f_2)_{EM}+(J_1, J_2)_V+(S_1,S_2)_V+\cr &&\qquad\lambda_1(J_1^\alpha+\kappa_1 J_{1\mu} f_1^{\mu\alpha}, J_2^\beta+\kappa_1 J_{2\nu} f_2^{\nu\beta})_V\ +\lambda_2(J_{1\mu} f_1^{\mu\alpha},J_{2\nu} f_2^{\nu\beta})_V+\cr &&\qquad\lambda_3(\epsilon^{\mu\rho\sigma\alpha} J_{1\mu} f_{1\rho\sigma}, \epsilon^{\nu\tau\upsilon\beta} J_{2\nu} f_{2\tau\upsilon})_V +\lambda_4(S_{1\mu} f_1^{\mu\alpha},S_{2\nu} f_2^{\nu\beta})_V+\cr &&\qquad\lambda_5(S_{1\mu} f_1^{\mu\alpha}+ \kappa_2\epsilon^{\mu\rho\sigma\alpha} J_{1\mu} f_{1\rho\sigma}, S_{2\nu} f_2^{\nu\beta}+ \kappa_2\epsilon^{\nu\tau\upsilon\beta} J_{2\nu} f_{2\tau\upsilon})_V+\cr &&\qquad\lambda_6(S_{1[\mu}J_{1\alpha]}+ \kappa_3\epsilon_{\mu\alpha}^{\ \ \rho\sigma} f_{1\rho\sigma}, S_{2[\nu}J_{2\beta]}+ \kappa_3\epsilon_{\nu\beta}^{\ \ \tau\upsilon} f_{2\tau\upsilon})_{EM}+\cr &&\qquad\lambda_7(S_{1[\mu}J_{1\alpha]},S_{2[\nu}J_{2\beta]})_{EM}. \end{eqnarray} To these might also be added parity violating terms, and, with the introduction of a scalar inner product, terms involving $(J_{1\mu}J_1^\mu,J_{2\nu}J_2^\nu)_S$, $(S_{1\mu}S_1^\mu,S_{2\nu}S_2^\nu)_S$, $(J_{1\mu}S_1^\mu,J_{2\nu}S_2^\nu)_S$, $(f_{1\mu\alpha}f_1^{\mu\alpha},f_{2\nu\beta}f_2^{\nu\beta})_S$.
Furthermore, every occurrence of an inner product could be modified to make each term have a unique mass (and a different contribution for the time-like and space-like components of each 4-current and axial 4-vector term). In view of the number of parameters that are apparently possible in this approach, even in the case of electromagnetism, in contrast to the relatively tight constraints imposed by renormalizability, equation (\ref{bigI}) presumably has to be regarded as only (potentially) phenomenologically descriptive, not as a fundamental theory, unless a theoretically natural constraint on admissible terms emerges. Note that this approach or some extension or modification of it might be empirically useful, for example if it can describe electromagnetic fields in nonlinear materials effectively, without it being at all equivalent to QED. \section{Conclusion} With all computations being entirely finite, it may be possible to use these nonlinear quantum field models more easily and with less conceptual uncertainty than using conventional perturbation theory. The universal use of Schwartz space test functions to describe measurement and state preparation ensures that there are none of the infinities that usually emerge in perturbative quantum field theory. Correlation functions for measurements in a given state are straightforwardly computed in terms of the nonlinear inner products between all the functions used to generate a state and to describe measurements. The mathematics allows a reasonable understanding of the nonlinearity that has been introduced, and there seems to be no \textit{a priori} reason to exclude this kind of nonlinearity, in which the linearity of the algebra is preserved. Indeed, on the classical precedent, nonlinearity ought to be expected. 
The apparent introduction of nonlinearity by renormalization through the implicit nonlinear use of test functions gives a stronger impetus to consider how empirically effective the nonlinear models introduced here --- and perhaps more general models --- can be. The infinite range of possibilities is at present a little uncontrolled, and the mathematical analysis of the empirical consequences of particular terms in models of the theory appears to be quite nontrivial --- to my knowledge it is a novel mathematical problem. Quantum theory has largely moved to supersymmetry and string theory because of the apparent impossibility of putting interacting quantum field theory on a sound mathematical footing, but the form of interacting quantum field theories presented here is a mathematically reasonable alternative. It will be interesting to see what range of physical situations can be modelled with these nonlinear quantum fields. Free fields are already useful as a first approximation in quantum optics, so it is possible that the methods of this paper might make a useful second approximation as a way to construct phenomenological models for nonlinear materials. These nonlinear quantum fields, however, are conceptually significantly different from the interacting quantum fields of conventional perturbation theory, and are manifestly different from conventional constructive and axiomatic quantum fields. \end{document}
\begin{document} \author{Zi-Hao Chen} \affiliation{Department of Chemical Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{YiJing Yan} \affiliation{Department of Chemical Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \email{[email protected]} \title{Kondo regime of the impurity spectral function and the current noise spectrum in the double impurity Anderson model} \date{\today} \begin{abstract} The dissipaton equations of motion (DEOM) method is one of the most popular methods for simulating quantum impurity systems. In this article, we use DEOM theory to treat the Kondo problem of the double quantum dots (DQDs) impurity system. We focus on the impurity spectral function and the total noise spectral function; these two functions are used to characterize the Kondo effect of this system. The influence of the interaction, the hopping, and the difference of the chemical potentials between the two dots on the Kondo effect of the system is studied. We find that the interaction between the two dots strongly influences the Kondo effect of the system. \end{abstract} \maketitle \section{Introduction} An efficient impurity solver is in high demand for strong correlation problems. A variety of numerical simulation methods can be applied as impurity solvers \cite{Wil75773, Kri801003, Kri801044, Bul08395, Hir862521, Gul11349, Whi922863, Han19050601, Wan012979, Muh08176403, Ema11349}. Among them, the most famous approaches are the numerical renormalization group (NRG) \cite{Wil75773, Kri801003, Kri801044}, the density matrix renormalization group (DMRG) \cite{Whi922863} and quantum Monte Carlo (QMC) \cite{Ema11349}. These methods can obtain single--particle Green's functions and two--particle correlation functions efficiently. But dynamic properties, such as dynamic I--V characteristics, differential conductance, and the propagation of the density matrix, cannot be obtained directly using NRG or QMC.
It is also very time-consuming to obtain two--particle correlation functions, such as the current noise spectrum, with those methods. The real--time propagation approaches largely make up for this shortcoming: one can directly obtain multi-particle correlation functions, such as the current noise spectrum, which cannot be obtained by NRG or QMC. These approaches include the hierarchical equations of motion (HEOM) \cite{Tan906676, Tan06082001, Yan04216, Xu05041103, Xu07031107, Jin08234703} and its second-quantization version, the dissipaton equations of motion (DEOM) \cite{Yan16110306}, the semi-group quantum master equations \cite{Lin76119, Gor76821, Ali87}, and the DMFT methods \cite{Hou14045141}. Although DMRG and NRG have their time--dependent extensions, these methods still fall short in efficiency compared to HEOM/DEOM \cite{Xu22230601}. Strong correlation problems arise in a variety of systems; among them, quantum dots have been widely studied not only in computations but also in experiments.\cite{Wie031, Han071217, Rei021283} As interesting and popular systems, single quantum dots and multiple quantum dots (MQDs) can be regarded as ``artificial atoms'' and ``artificial molecules''.\cite{Bli967899, Wie031, Jeo012221} The QDs and the reservoirs around them can form strongly correlated quantum systems. In particular, those interactions lead to the famous Kondo resonance at low temperatures.\cite{Lia02725,Far20256805, Moc21186804, Kur216004, Fer20738} Depending on the specific system, QDs can form {Ruderman--Kittel--Kasuya--Yosida} indirect exchange interactions,\cite{Pow1349} inter--dot Coulomb interactions, and intra--dot Coulomb interactions (capacitive interactions).\cite{Hew93} These types of coupling strongly influence the properties of the systems. In this article, we utilize the DEOM method, one of the most popular methods for simulating quantum impurity systems, to simulate the density of states (DOS) and the noise spectrum of double quantum dots.
\section{Dissipaton equations of motion} The Hamiltonian of the open quantum system can be written as \begin{equation} H_{\textrm{T}} = H_{\textrm{S}} + H_{\textrm{SB}} + H_{\textrm{B}}, \end{equation} where $H_{\textrm{S}}$ is the Hamiltonian of the system, $H_{\textrm{B}}$ is the Hamiltonian of the bath, and $H_{\textrm{SB}}$ is the Hamiltonian of the coupling between the system and the bath. In the Anderson impurity model (AIM), $H_{\textrm{S}}$ can be arbitrary, while $H_{\textrm{SB}}$ and $H_{\textrm{B}}$ can be written as \begin{subequations} \begin{align} H_{\textrm{B}} & = \sum_{\alpha} h_{\alpha} = \sum_{\alpha k} \epsilon_{\alpha k} \hat d_{\alpha k}^{\dagger} \hat d_{\alpha k}, \\ H_{\textrm{SB}} & = \sum_{\alpha u} (\hat F^{\dagger}_{\alpha u} \hat a_{u} + \hat a^{\dagger}_{u} \hat F_{\alpha u}) = \sum_{\sigma \alpha u} \hat a_{u}^{\bar \sigma} \hat F^{\sigma}_{\alpha u}, \\ \hat F_{\alpha u} & = \sum_{k} t_{\alpha u k}^{\ast} \hat d_{\alpha k}, \end{align} \end{subequations} where $\hat a_{u}$ ($\hat a_{u}^{\dagger}$) is the annihilation (creation) operator of the system electron. Note $u$ labels the degrees of freedom of the system electrons, which can be the spin or the site index.
The fluctuation--dissipation theorem of this bath can be written as \begin{equation} \label{heom:fermi_dft} \langle \hat F_{\alpha u}^{\sigma}(t) \hat F_{\alpha v}^{\bar \sigma}(0) \rangle_{\textrm{B}}^{\textrm{eq}} = \frac{1}{\pi} \int {\rm d} \omega \frac{J_{\alpha u v}^{\sigma}(\omega) e^{i \sigma \omega t}}{1 + e^{\sigma \beta \omega}}, \end{equation} where $\beta = 1/(k_{B} T)$ with $k_{B}$ the Boltzmann constant, $\sigma = \pm 1$ is the fermion sign, $J_{\alpha u v}^{\sigma}(\omega)$ is the spectral density of the bath, $\hat F_{\alpha u}^{\sigma}(t) = e^{ih_{\textrm{B}}t} \hat F^{\sigma}_{\alpha u} e^{-ih_{\textrm{B}}t}$, and $\langle \hat O \rangle_{\textrm{B}}^{\textrm{eq}} = \mathrm{tr}_{\textrm{B}} (\hat O e^{-\beta_{\alpha} {\hat h}_{\alpha}}) / Z_{\alpha}^{\textrm{eq}}$ with the canonical ensemble partition function $Z_{\alpha}^{\textrm{eq}} = \mathrm{tr}_{\textrm{B}} e^{-\beta_{\alpha} {\hat h}_{\alpha}}$. In the DEOM theory, we expand this time correlation function as a sum of exponentials: \begin{equation} \label{eq:c_t} \langle \hat F_{\alpha u}^{\sigma}(t) \hat F_{\alpha v}^{\bar \sigma}(0) \rangle_{\textrm{B}}^{\textrm{eq}} = \sum_{k=1}^{K} \eta^{\sigma}_{\alpha u v k} e^{- \gamma^{\sigma}_{\alpha u v k} t} \equiv \sum_{j = 1}^{J} \eta_{j} e^{-\gamma_{j} t}, \end{equation} where the composite index $j$ runs over the combinations of $(\sigma, \alpha, u, v, k)$. Then, the DEOM formalism reads as follows:\cite{Yan16110306} \begin{align} \label{eq:DEOM} \dot \rho_{\bf n}^{(n)} (t) = & (- i \mathcal{L}_{\textrm{S}} - \sum_{j}n_{j}\gamma_{j})\rho^{(n)}_{\bf n} - i \sum_j \mathcal{A}_{\bar j} \rho^{(n+1)}_{{\bf n}_j^+} \nonumber \\ & - i \sum_{j} (-1)^{n-\theta_j} \mathcal{C}_{j} \rho^{(n-1)}_{{\bf n}_j^-}. \end{align} Throughout this paper, we set $\hbar = 1$. In the above equation, the summation over $j$ runs from 1 to $J$, where $J$ is exactly the number of terms in \eq{eq:c_t}; the $\{\rho_{\bf n}^{(n)}\}$ are the dissipaton density operators (DDOs); and $\theta_j = \sum_{k = 1} ^{j} n_{k}$.
$\mathcal{L}_{\textrm{S}} \hat O = [H_{\textrm{S}}, \hat O]$ and the other superoperators are defined as: \begin{subequations} \label{eq:super_operator} \begin{align} \mathcal{A}_{j} \rho^{(n)}_{\bf n} & \equiv\hat a_u^{\sigma} \rho^{(n)}_{\bf n} + (-1)^n\rho^{(n)}_{\bf n} \hat a_u^{\sigma}, \\ \mathcal{C}_{j} \rho^{(n)}_{\bf n} & \equiv \sum_v \big( \eta^{\sigma}_{\alpha u v k} \hat a_v^{\sigma} \rho^{(n)}_{\bf n} - (-1)^n \eta^{\bar \sigma \ast}_{\alpha u v k} \rho^{(n)}_{\bf n} \hat a_v^{\sigma}\big), \end{align} \end{subequations} where $\hat a_{u}^{\dagger}$ ($\hat a_{u}$) is the local creation (annihilation) operator of a system electron, with $u$ comprising the site index $i$ and the spin $s$, where $s= \,\uparrow$ or $\downarrow$. Note the $(-1)^n$ factor in the definition of the superoperators in \eq{eq:super_operator} is due to the fermion sign. The DEOM theory can also deal with the nonequilibrium steady-state case. The nonequilibrium bath can be described by the following effective Hamiltonian: \begin{equation} h^{\textrm{st}}_{\textrm{B}} = \sum_{\alpha} h^{\textrm{st}}_{\alpha} = \sum_{\alpha k} (\epsilon_{\alpha k} + \mu_{\alpha}) \hat d_{\alpha k}^{\dagger} \hat d_{\alpha k}. \end{equation} Then \eq{heom:fermi_dft} can be recast as follows: \begin{equation} \label{heom:fermi_dft_neq} \langle \hat F_{\alpha u}^{\sigma}(t) \hat F_{\alpha v}^{\bar \sigma}(0) \rangle^{\textrm{st}}_{\textrm{B}} = \frac{1}{\pi} \int {\rm d} \omega \frac{J_{\alpha u v}^{\sigma}(\omega - \mu_{\alpha}) e^{i \sigma \omega t}}{1 + e^{\sigma \beta (\omega - \mu_{\alpha})}}.
\end{equation} Note in the above equation, $\hat F_{\alpha u}^{\sigma}(t) = e^{ih^{\textrm{st}}_{\textrm{B}}t} \hat F^{\sigma}_{\alpha u} e^{-ih^{\textrm{st}}_{\textrm{B}}t}$ and $\langle \hat O \rangle_{\textrm{B}}^{\textrm{st}} = \mathrm{tr}_{\textrm{B}} (\hat O e^{-\beta_{\alpha} {\hat h}^{\textrm{st}}_{\alpha}}) / Z_{\alpha}^{\textrm{st}}$ with the grand canonical ensemble partition function $Z_{\alpha}^{\textrm{st}} = \mathrm{tr}_{\textrm{B}} e^{-\beta_{\alpha} {\hat h}^{\textrm{st}}_{\alpha}}$. We can obtain \begin{equation} \langle \hat F_{\alpha u}^{\sigma}(t) \hat F_{\alpha v}^{\bar \sigma}(0) \rangle^{\textrm{st}}_{\textrm{B}} = e^{i \sigma \mu_{\alpha} t} \langle \hat F_{\alpha u}^{\sigma}(t) \hat F_{\alpha v}^{\bar \sigma}(0) \rangle^{\textrm{eq}}_{\textrm{B}}, \end{equation} and the other relationships and the equation of motion remain the same as in the equilibrium case. In the numerical simulation of the DEOM method, we need to truncate the DDOs at a finite level $L$, which means that we only keep the DDOs $\rho_{\bf n}^{(n)}$ satisfying $\sum_k n_k \leq L$ and discard all the others. Thus, in the numerical simulation of the fermionic bath, the number of DDOs is \begin{equation} \sum_{l=1}^{L} \frac{K!}{l! (K-l)!}. \end{equation} Here $K$ is the number of bath modes (see \eq{eq:c_t}) and $L$ is the truncation level. The vast number of DDOs makes the direct simulation of the DEOM method very expensive. But with the matrix product state (MPS) and time-dependent variational principle (TDVP) methods \cite{Ose112295, Lub15917, Shi18174102, Xu22230601}, the cost of propagating the DEOM can be reduced to nearly proportional to $K$, although the MPS method is the slower one when $K$ is small. In this article, we will only show the results of simulations using the direct method. We use the recently developed time-domain Prony fitting decomposition ($t$-PFD) method to obtain the parameters of the sum of exponentials.
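The binomial sum above can be evaluated directly. The following short Python snippet (an illustrative helper of our own, not part of any DEOM package) counts the retained DDOs for given $K$ and $L$:

```python
from math import comb

def ddo_count(K: int, L: int) -> int:
    """Number of dissipaton density operators kept when the fermionic
    hierarchy over K exponential (bath) modes is truncated at level L,
    i.e. sum_{l=1}^{L} K! / (l! (K - l)!)."""
    return sum(comb(K, l) for l in range(1, L + 1))

# For the K = 6, L = 5 setting used later in this article:
print(ddo_count(6, 5))  # -> 62
```

For the parameters used in this work ($K = 6$, $L = 5$) this gives $6+15+20+15+6 = 62$ DDOs, which is why the direct method remains affordable here, while larger $K$ quickly becomes expensive.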
\cite{Che22221102} The $t$-PFD method can obtain an almost minimal basis for the sum of exponentials. In this article, we focus on the impurity spectral function, \begin{equation} A_{u u'}(\omega) = \frac{1}{2 \pi} \int_{-\infty}^{\infty}\!\!{\rm d} t\, e^{i\omega t} \langle\{ \hat a_{u} (t), \hat a_{u'}^{\dagger} (0)\} \rangle_{\textrm{eq}}, \end{equation} and the noise spectral function \begin{equation} \label{eq::noise_spectral_function} S_{\alpha \alpha'}(\omega) = \frac{1}{2 \pi} \int_{-\infty}^{\infty}\!\!{\rm d} t\, e^{i\omega t} \langle\{\delta \hat I_{\alpha} (t), \delta \hat I_{\alpha'} (0)\} \rangle_{\textrm{eq}}. \end{equation} Here $\langle \hat O \rangle_{\textrm{eq}} = \mathrm{Tr} (\hat O \rho_{\textrm{T}}^{\textrm{eq}})$. In \eq{eq::noise_spectral_function}, $\delta \hat I_{\alpha} (t) = \hat I_{\alpha} (t) - \hat I_{\alpha}^{\textrm{st}}$ is the fluctuation of the transport current with respect to the steady-state current $\hat I_{\alpha}^{\textrm{st}}$. The transport current is defined as \begin{equation} \hat I_{\alpha} = - \frac{\partial \hat N_\alpha}{\partial t} = - i \sum_{u} (\hat a_{u}^{\dagger} \hat F_{\alpha u} - \hat F_{\alpha u}^{\dagger} \hat a_{u}) \end{equation} and $\hat I_{\alpha}(t) = e^{i H_{\textrm{T}}t} \hat I_{\alpha} e^{-i H_{\textrm{T}}t}$. Both the impurity and the noise spectral functions can be calculated using the DEOM method; the details of evaluating those correlation functions with the DEOM method can be found in Refs.~\onlinecite{Jin15234108, Yan16110306, Mao21014104}. In this article, we use the self--consistent iteration method \cite{Zha17044105} to obtain the equilibrium state and the spectral density function; see the Appendix for details. \section{Impurity and noise spectral function of double quantum dots} \begin{figure} \caption{The illustration of the AIM with the system as double quantum dots.} \label{fig1} \end{figure} As a numerical demonstration, we choose the DQD for simulations.
To be concrete, we set \begin{align} \label{sys_def} H_{\textrm{S}} = & \sum_{i = 1,2} \epsilon_i \hat n_i + U \sum_{i = 1,2} \hat n_{i\uparrow} \hat n_{i\downarrow} + U_{\textrm{C}} \hat n_{1} \hat n_{2} \nonumber \\ & + T \sum_{s} \Big(\hat a^{\dagger}_{1s} \hat a_{2s} + \hat a^{\dagger}_{2s} \hat a_{1s} \Big), \end{align} where $\hat n_{i\uparrow}$ ($\hat n_{i\downarrow}$) is the particle number operator of the spin-up (spin-down) electron at site $i$ and $\hat n_{i} = \hat n_{i\uparrow} + \hat n_{i\downarrow}$, $\epsilon_i$ is the on--site energy, $U$ ($U_{\textrm{C}}$) is the intra--site (inter--site) Coulomb energy, and $T$ is the hopping energy between the two sites. The system parameters follow this scheme: $\epsilon_{1} = \epsilon_{2} = -(U + 2NU_{\textrm{C}})/2$. This leads to $\langle \hat n_{1} + \hat n_{2} \rangle = N + 1$ in the equilibrium states of the system. The system is coupled to two reservoirs with the Lorentz--type spectral density \begin{equation} \label{eq:lorentz} J_{\alpha u v}^{\sigma}(\omega) = \frac{\Delta W^2}{\omega^2 + W^2}. \end{equation} The left and right reservoirs are linked to the 1st and the 2nd quantum dot, respectively. For convenience, we label the left and right reservoirs as $1$ and $2$, respectively. The two bath parameter sets are the same: $W = 50 \Delta$, $U = 12 \Delta$, $\beta = 20 \Delta^{-1}$ with $\Delta$ as the unit. The total current noise spectrum, which follows from the Ramo--Shockley theorem,\cite{Bla001} can be written as \begin{align} \label{eq::noise} S(\omega) & = a^2 S_{\textrm{L} \textrm{L}}(\omega) + b^2 S_{\textrm{R} \textrm{R}}(\omega) - 2ab \mathrm{Re} \{S_{\textrm{L} \textrm{R}}(\omega)\}, \end{align} where the parameters $a$ and $b$ are determined by the couplings of the impurity to the left and right leads, respectively. In the wide-band limit, the coupling strengths of the two reservoirs are the same, leading to $a = b = 0.5$ in \eq{eq::noise}.
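The system Hamiltonian of Eq.~(\ref{sys_def}) acts on a 16-dimensional Fock space (two sites, two spins). As a self-contained sketch, not part of the DEOM implementation, the operators can be built explicitly; the Jordan-Wigner encoding used here is our own illustrative choice:

```python
import numpy as np

def jw_annihilators(n_modes):
    """Fermionic annihilation operators via the Jordan-Wigner
    transformation: a_k = Z^{(x)k} (x) s^- (x) I^{(x)(n-k-1)}."""
    I2 = np.eye(2)
    Z = np.diag([1.0, -1.0])
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # |0><1|, removes one fermion
    ops = []
    for k in range(n_modes):
        factors = [Z] * k + [sm] + [I2] * (n_modes - k - 1)
        a = factors[0]
        for f in factors[1:]:
            a = np.kron(a, f)
        ops.append(a)
    return ops

def h_dqd(eps, U, Uc, T):
    """H_S of the double quantum dot; modes ordered (1up, 1dn, 2up, 2dn)."""
    a1u, a1d, a2u, a2d = jw_annihilators(4)
    num = lambda a: a.conj().T @ a
    n1, n2 = num(a1u) + num(a1d), num(a2u) + num(a2d)
    H = eps * (n1 + n2)
    H += U * (num(a1u) @ num(a1d) + num(a2u) @ num(a2d))
    H += Uc * n1 @ n2
    H += T * (a1u.conj().T @ a2u + a2u.conj().T @ a1u
              + a1d.conj().T @ a2d + a2d.conj().T @ a1d)
    return H

# Parameters in units of Delta, for the N = 1, U_C = U, T = 0 setting:
U = 12.0; Uc = 12.0; N = 1
eps = -(U + 2 * N * Uc) / 2
H = h_dqd(eps, U, Uc, T=0.0)
print(H.shape, np.allclose(H, H.conj().T))  # (16, 16) True
```

With $T=0$ the Hamiltonian is diagonal in the occupation basis, and one can check directly that the minimum-energy configurations carry $N+1=2$ electrons, consistent with the filling scheme stated above (the bath coupling, of course, is absent from this toy check).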
The total system is illustrated in Fig.~\ref{fig1}. We set the truncation level as $L = 5$ and the number of exponential terms as $K = 6$, which guarantees that the error between the exponential expansion and the original correlation function is less than $2\%$ at all frequencies. In this section, we show both the impurity and noise spectral functions of double quantum dots under equilibrium scenarios, with or without hopping, and under nonequilibrium scenarios. We will show that the impurity spectra split in several of these scenarios. \subsection{Equilibrium Scenarios} \begin{figure} \caption{The impurity spectral function $A(\omega) = A_{1 \uparrow, 1 \uparrow}(\omega)$ (left panels), the total noise spectral function $S(\omega)$ (middle panels), and its derivative (right panels).} \label{fig2} \end{figure} \begin{figure*} \caption{The impurity spectral function $A(\omega) = A_{1 \uparrow, 1 \uparrow}(\omega)$ under further settings and temperatures.} \label{fig2-2} \end{figure*} In \Fig{fig2}, we show the impurity spectral function $A_{1 \uparrow, 1 \uparrow}(\omega)$ (left panels) and the total noise spectral function $S(\omega)$ (middle panels) of the AIM with the system as double quantum dots. We also show the derivative of the total noise spectral function, ${\rm d} S(\omega) / {\rm d} \omega$, in the right panels of \Fig{fig2}. Here, we set $U_{\textrm{C}} = U - \Delta$, $U$, $U + \Delta$, and the chemical potentials of the two baths both to $0$ (equilibrium scenario). We also checked the results of $A(\omega)$ using the MPS method (not shown in \Fig{fig2}) and obtained results similar to those shown in \Fig{fig2}. We can see that as $U_{\textrm{C}}$ increases, the Kondo peak, $A(0)$, first increases and then decreases. The highest peak occurs at $U_{\textrm{C}} = U$, which shows the resonance effect of the double quantum dots. The Hubbard peaks, the peaks at $\omega \approx \pm U/2$, move away from $0$. We also observe new Kondo peaks, appearing around $\omega \approx \pm (U - U_{\textrm{C}})$ at $U_{\textrm{C}} = U \pm \Delta$. These peaks are the result of the inter--site Coulomb interaction $U_{\textrm{C}}$ and disappear when $U_{\textrm{C}} = U$.
The behaviors of the Kondo peak near $U_{\textrm{C}} = U$ are similar to a Fano resonance. When $U_{\textrm{C}} > U$, the electron transfer between the two quantum dots is blocked by the inter--site Coulomb interaction $U_{\textrm{C}}$, which leads to the sharp decrease of the Kondo peak. To illustrate these phenomena, we compare the results of the impurity spectral function under more settings at different temperatures in \Fig{fig2-2}. We can see that all the Kondo peaks disappear at high temperatures. This is because the Kondo effect is a low--temperature phenomenon, and it shows that those split peaks in \Fig{fig2} are not Hubbard peaks. Under the $N = 0$ scenario, those split peaks vanish and the behavior of the Kondo peak is similar to \Fig{fig2}\,(b). This behavior is due to the absence of the Coulomb blockade under the $N = 0$ scenario. The equilibrium state of those double quantum dots has $\langle \hat n_{1} \rangle = \langle \hat n_{2} \rangle = (N + 1) / 2$; in the $N = 1$ scenario, $\langle \hat n_{1} \rangle = \langle \hat n_{2} \rangle = 1$. The symmetry of the impurity spectral function, $A(\omega) = A(-\omega)$, is also broken; see panel (e) of \Fig{fig2-2}. Now turn to the total noise spectral function $S(\omega)$, shown in the middle panels of \Fig{fig2}. The Kondo characteristics of the total noise spectral function show up in its derivative, ${\rm d} S(\omega) / {\rm d} \omega$: the total noise spectral function behaves like a step function, and the Kondo peak or the Hubbard peaks appear in the derivative of $S(\omega)$.\cite{Jin15234108} As shown in \Fig{fig2}, the Hubbard peaks occur near $\omega \approx U/2$ and decrease with the increase of $U_{\textrm{C}}$. The Kondo peak appears near $\omega \approx 0$, and its location is influenced by $U_{\textrm{C}}$. Under the $U_{\textrm{C}} = U$ scenario, the Kondo peak appears exactly at $\omega = 0$.
Under the $U_{\textrm{C}} = U - \Delta$ case, the Kondo peak moves to $\omega \approx \Delta$, and a small Kondo peak remains at $\omega = 0$. The $U_{\textrm{C}} = U + \Delta$ case is similar to the $U_{\textrm{C}} = U - \Delta$ case, but in this case the Kondo peak near $\omega = 0$ becomes very large. \subsection{Equilibrium Scenarios with Hopping} \begin{figure} \caption{The impurity spectral function $A(\omega) = A_{1 \uparrow, 1 \uparrow}(\omega)$ (left panels) and the total noise spectral function $S(\omega)$ (right panels) in the presence of hopping.} \label{fig3} \end{figure} In \Fig{fig3}, we show the impurity spectral function $A_{1 \uparrow, 1 \uparrow}(\omega)$ (left panels) and the total noise spectral function $S(\omega)$ (right panels) of the AIM with the system as double quantum dots. Here, we set the hopping energy as $T = 0.5 \Delta$ or $\Delta$, $U_{\textrm{C}} = U$, and the other parameters the same as those in \Fig{fig2}. We also notice that the sign of $T$ does not influence the result of either $A(\omega)$ or $S(\omega)$. As shown in \Fig{fig3}, the Kondo peak of the impurity spectral function splits into two peaks, which appear at $\omega \approx \pm 2 T$, i.e., $\pm \Delta$ and $\pm 2 \Delta$ for the two settings. These phenomena are similar to what we show in \Fig{fig2}(c), but under this scenario these behaviors arise because the hopping $T$ induces an antiferromagnetic interaction, which splits the Kondo peak.\cite{Li18115133} The Hubbard peaks remain at a location similar to the $U_{\textrm{C}} = U$ case in \Fig{fig2}\,(b). Turning to the total noise spectral function $S(\omega)$, we only focus on the derivative of this spectral function, which is shown in the right panels of \Fig{fig3}. The derivative of the total noise spectral function, ${\rm d} S(\omega) / {\rm d} \omega$, also splits into two peaks, which appear at $\omega \approx 0$ and $\omega \approx 2 T$. The Hubbard peaks are absent around $\omega \approx U/2$.
Moreover, the total noise spectral function behaves like a Fano resonance rather than a step function. \begin{figure} \caption{The impurity spectral function $A(\omega) = A_{1 \uparrow, 1 \uparrow}(\omega)$ (left panels) and the total noise spectral function $S(\omega)$ (right panels) under nonequilibrium scenarios.} \label{fig4} \end{figure} \subsection{Nonequilibrium Scenarios} In \Fig{fig4}, we show the impurity spectral function $A_{1 \uparrow, 1 \uparrow}(\omega)$ (left panels) and the total noise spectral function $S(\omega)$ (right panels) of the AIM with the system as double quantum dots. We notice that $A_{2 \uparrow, 2 \uparrow}(\omega) = A_{1 \uparrow, 1 \uparrow}(- \omega)$ (not shown in \Fig{fig4}). Here, we set the chemical potentials of the two baths as $\mu_{L} = - \mu_{R} = \Delta$ and $2 \Delta$, with the other parameters the same as in \Fig{fig2}. As shown in \Fig{fig4}, the Kondo peak of the impurity spectral function moves to $\omega \approx \pm \Delta$ and $\pm 2 \Delta$. This is the same behavior as in the single quantum dot case,\cite{Wan13035129} but with only the $\omega < 0$ part; the $\omega > 0$ part appears in $A_{2 \uparrow, 2 \uparrow}(\omega)$. Turning to the derivative of the current noise spectral function, shown in the right panels of \Fig{fig4}: the Kondo peak moves to $\omega \approx 2 \mu$ and keeps a little of the Fano resonance behavior near $\omega = 0$. The Hubbard peaks are absent around $\omega \approx U/2$. \section{Summary} In this article, we use the recently developed time-domain Prony fitting decomposition method and the self--consistent iteration method to simulate the Anderson impurity model with double quantum dots. Support from the Ministry of Science and Technology of China (Grant No.\ 2021YFA1200103) and the National Natural Science Foundation of China (Grant Nos.\ 22103073, 22173088) is gratefully acknowledged. The numerical calculations in this paper have been done on the supercomputing system in the Supercomputing Center of University of Science and Technology of China.
\appendix* \section{Self--consistent Iteration Method} The self--consistent iteration method has been used to solve for the equilibrium state of the bosonic-environment DEOM.\cite{Zha17044105} We utilize this method to solve for both the equilibrium state and the spectral density function of the fermionic-environment DEOM. Firstly, we show the workflow for solving for the equilibrium state of the fermionic-environment DEOM. The equilibrium state is defined by \begin{align} \label{heom:steady_state_example_1} 0 = & (- i \mathcal{L}_{\textrm{S}} - \sum_{j}n_{j}\gamma_{j})\rho^{(n)}_{\bf n} - i \sum_j \mathcal{A}_{\bar j} \rho^{(n+1)}_{{\bf n}_j^+} \nonumber \\ & - i \sum_{j} (-1)^{n-\theta_j} \mathcal{C}_{j} \rho^{(n-1)}_{{\bf n}_j^-}. \end{align} This equation can be rewritten as \begin{align} \label{heom:steady_state_example_2} \Big(i \mathcal{L}_{\textrm{S}} + \gamma_{\bf n} + \Omega\Big)\rho^{(n)}_{\bf n} = & \Omega \rho^{(n)}_{\bf n} - i \sum_j \mathcal{A}_{\bar j} \rho^{(n+1)}_{{\bf n}_j^+} \nonumber \\ & - i \sum_{j} (-1)^{n-\theta_j} \mathcal{C}_{j} \rho^{(n-1)}_{{\bf n}_j^-}, \end{align} where $\gamma_{\bf n} = \sum_{j} n_{j} \gamma_{j}$ and we have added the stability factor $\Omega$ to both sides. We can then solve this equation iteratively: \begin{align} \label{heom:steady_state_example_3} \rho^{(n); i+1}_{\bf n} = & \Big(i \mathcal{L}_{\textrm{S}} + \gamma_{\bf n} + \Omega\Big)^{-1}\Big(\Omega \rho^{(n); i }_{\bf n} - i \sum_j \mathcal{A}_{\bar j} \rho^{(n+1); i}_{{\bf n}_j^+} \nonumber \\ & - i \sum_{j} (-1)^{n-\theta_j} \mathcal{C}_{j} \rho^{(n-1); i}_{{\bf n}_j^-}\Big). \end{align} The stability factor $\Omega$ makes this iterative method stable. Secondly, we can use the same method to solve for the spectral density function.
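The structure of this iteration, a stability factor $\Omega$ added to both sides of a linear system so that the fixed point of the damped update solves the original equations, can be sketched on a generic linear problem. This is a toy stand-in for the DEOM superoperators, not the actual hierarchy, and the value of $\Omega$ is an arbitrary illustrative choice:

```python
import numpy as np

def damped_solve(A, b, omega=5.0, tol=1e-10, max_iter=5000):
    """Solve A x = b via x_{i+1} = (A + Omega I)^{-1} (Omega x_i + b).
    A fixed point satisfies (A + Omega I) x = Omega x + b, i.e. A x = b;
    the Omega term damps the update, mirroring the stability factor
    added to both sides of the DEOM steady-state equations."""
    n = len(b)
    M = np.linalg.inv(A + omega * np.eye(n))
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = M @ (omega * x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(0)
# a well-conditioned toy "superoperator": diagonally dominant matrix
A = rng.standard_normal((8, 8)) * 0.1 + 3.0 * np.eye(8)
b = rng.standard_normal(8)
x = damped_solve(A, b)
print(np.allclose(A @ x, b))  # True
```

The update map has iteration matrix $\Omega(A+\Omega I)^{-1}$, whose eigenvalues $\Omega/(\lambda+\Omega)$ have modulus below one whenever the eigenvalues $\lambda$ of $A$ have positive real part, which is what makes the damped iteration converge.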
The spectral density function is defined as \begin{align} \label{heom:corr_freq} \hat C_{\textrm{A} \textrm{B}} (\omega) & \equiv \frac{1}{\pi} \int_{0}^{\infty} \mathrm{Tr} \Big\{{\hat A} e^{- i \mathcal{L}_{T} t} {\hat B} \rho_{\textrm{T}}^{\textrm{eq}} \Big\} e^{i \omega t} {\rm d} t\nonumber \\ & = \frac{1}{\pi} \int_{0}^{\infty} \mathrm{Tr} \Big\{{\hat A} e^{i (\omega - \mathcal{L}_{T}) t} {\hat B} \rho_{\textrm{T}}^{\textrm{eq}} \Big\} {\rm d} t \nonumber \\ & = \frac{1}{\pi} \mathrm{Tr} \Big\{{\hat A} (i \mathcal{L}_{T} - i \omega)^{-1} {\hat B} \rho_{\textrm{T}}^{\textrm{eq}} \Big\} \nonumber \\ & = \frac{1}{\pi} \langle\langle \hat {\bm A} \vert \hat {\bm X}(\omega) \rangle\rangle, \end{align} where \begin{align} \label{heom:ddo_operator} \rho_{\textrm{T}}(t) & \rightarrow {\bm \rho}(t) \equiv \{\rho_{\bf n}^{(n)}(t)\}, \nonumber \\ \hat A & \rightarrow \hat {\bm A} \equiv \{\hat A^{(n)}_{\bf n}; n = 0,1,2,\cdots\}, \nonumber \\ \hat B & \rightarrow \hat {\bm B} \equiv \{\hat B^{(n)}_{\bf n}; n = 0,1,2,\cdots\}, \end{align} and \begin{align} \label{heom:spe_w_sch} (i {\bm{\mathcal{L}}}_{T} - i \omega) \hat {\bm X}(\omega) = {\hat B} \rho^{\textrm{eq}}_{\textrm{T}} = {\bm{\rho}}(0; \hat B). \end{align} \Eq{heom:spe_w_sch} can be rewritten as follows: \begin{align} \label{heom:sci_w_example} \rho^{(n)}_{\bf n} (0; \hat B) = & - (i \mathcal{L}_{\textrm{S}} + \gamma_{\bf n} + i \omega) \hat X^{(n)}_{\bf n}(\omega) \nonumber \\ & - i \sum_j \mathcal{A}_{j} \hat X^{(n+1)}_{{\bf n}_j^+} (\omega) \nonumber \\ & - i \sum_{j} (-1)^{n-\theta_j} \mathcal{C}_{j} \hat X^{(n-1)}_{{\bf n}_j^-}(\omega).
\end{align} By the same procedure as in Eqs.~(\ref{heom:steady_state_example_1})--(\ref{heom:steady_state_example_3}), we obtain a similar iterative form: \begin{align} \label{heom:spe_w_sch_final} \hat X^{(n); i+1}_{\bf n}(\omega) = & (i \mathcal{L}_{\textrm{S}} + \gamma_{\bf n} + i \omega + \Omega)^{-1} \Bigg\{- \rho^{(n)}_{\bf n} (0; \hat B) \nonumber \\ & + \Omega \hat X^{(n); i}_{\bf n}(\omega) - i \sum_j \mathcal{A}_{j} \hat X^{(n+1); i}_{{\bf n}_j^+}(\omega) \nonumber \\ & - i \sum_{j} (-1)^{n-\theta_j} \mathcal{C}_{j} \hat X^{(n-1); i}_{{\bf n}_j^-}(\omega) \Bigg\}. \end{align} \end{document}
\begin{document} \title{Limits of Thompson's group F} \newtheorem{theo}{Theorem}[section] \newtheorem{lemm}[theo]{Lemma} \newtheorem{coro}[theo]{Corollary} \newtheorem{defi}[theo]{Definition} \newtheorem{prop}[theo]{Proposition} \newtheorem{fact}[theo]{Fact} \newtheorem{rema}[theo]{Remark} \newtheorem{exam}[theo]{Example} \newtheorem{ques}[theo]{Question} \newtheorem{clai}[theo]{Claim} \begin{abstract} Let $F$ be Thompson's group $\langle x_{0}, x_{1}| [x_{0}x_{1}^{-1}, x_{0}^{-i}x_{1} x_{0}^{i}], i=1,2 \rangle$. Let $G_{n}= \langle y_{1},\ldots , y_{m}, x_{0},x_{1} \vert [x_{0} x_{1}^{-1},x_{0}^{-i}x_{1}x_{0}^{i}], y_{j}^{-1}$ $g_{j,n}(x_{0},x_{1}), i=1, 2,$ $j\leq m \rangle$, where $g_{j,n}(x_{0},x_{1})\in F$, $n\in\mathbb{N}$, be a family of groups isomorphic to $F$ and marked by $m+2$ elements. If the sequence $(G_{n})_{n<\omega}$ is convergent in the space of marked groups and $G$ is the corresponding limit, we say that $G$ is an $F$-limit group. The paper is devoted to a description of $F$-limit groups. \end{abstract} \section{Preliminaries} The notion of a \emph{limit group} was introduced by Z. Sela in his work on the characterization of elementary equivalence of free groups \cite{Sel}. This approach has been extended in the paper of C. Champetier and V. Guirardel \cite{GC}, where the authors look at \emph{limit groups} as limits of convergent sequences in a space of \emph{marked groups}. They have given a description of Sela's limit groups in these terms (with respect to the class of free groups). This approach has also been applied by L. Guyot and Y. Stalder \cite{SG} to the class of Baumslag-Solitar groups. \parskip0pt Thompson's group $F$ has remained one of the most interesting objects in geometric group theory. We study $F$-limit groups. We show in this paper that among $F$-limit groups there are no free products of $F$ with any non-trivial group. Moreover, we prove that among $F$-limit groups there are no HNN-extensions over cyclic subgroups.
\parskip0pt In the remaining part of the section we recollect some useful definitions and facts concerning limit groups and Thompson's group $F$. In Section 2 we present results concerning free products and in Section 3 results concerning HNN-extensions. \parskip0pt A \emph{marked group} $(G,S)$ is a group $G$ with a distinguished set of generators $S = (s_{1}, s_{2}, \ldots , s_{n})$. For fixed $n$, let $\mathcal{G} _{n}$ be the set of all $n$-generated groups marked by $n$ generators (up to isomorphism of marked groups). Following \cite{GC} we put a certain metric on $\mathcal{G} _{n}$. We will say that two marked groups $(G,S), (G',S')\in\mathcal{G} _{n}$ are at distance less than or equal to $e^{-R}$ if they have exactly the same relations of length at most $R$. The set $\mathcal{G} _{n}$ equipped with this metric is a compact space \cite{GC}. \emph{Limit groups} are simply limits of convergent sequences in this metric space. \begin{defi} Let $G$ be an $n$-generated group. A marked group in $\mathcal{G} _{n}$ is a $G$-limit group if it is a limit of marked groups each isomorphic to $G$. \end{defi} To introduce Thompson's group $F$ we will follow \cite{CFP}. \begin{defi} Thompson's group $F$ is the group given by the following infinite group presentation: $$ \langle x_{0}, x_{1}, x_{2}, \ldots | x_{j}x_{i}=x_{i}x_{j+1} (i<j) \rangle $$ \end{defi} In fact $F$ is finitely presented: $$ F = \langle x_{0}, x_{1}| [x_{0}x_{1}^{-1}, x_{0}^{-i}x_{1}x_{0}^{i}], i=1,2 \rangle . $$ Every non-trivial element of $F$ can be uniquely expressed in the normal form: $$ x_{0}^{b_{0}}x_{1}^{b_{1}}x_{2}^{b_{2}}\ldots x_{n}^{b_{n}}x_{n}^{-a_{n}}\ldots x_{2}^{-a_{2}}x_{1}^{-a_{1}}x_{0}^{-a_{0}},$$ where $n$, $a_{0}, \ldots , a_{n}$, $b_{0}, \ldots , b_{n}$ are non-negative integers such that: \\ i) exactly one of $a_{n}$ and $b_{n}$ is nonzero; \\ ii) if $a_{k}>0$ and $b_{k}>0$ for some integer $k$ with $0\leq k < n$, then $a_{k+1}>0$ or $b_{k+1}>0$.
\parskip0pt We study properties of $F$-limit groups. For this purpose let us consider a sequence, $(g_{i,n})_{n<\omega}$, $1\leq i\leq t$, of elements taken from the group $F$ and the corresponding sequence of marked groups $G_{n} = (F,(x_{0},x _{1},g_{1,n},\ldots ,g_{t,n}))$, $n\in\mathbb{N}$, marked by $t+2$ elements, where $x_{0}$ and $x_{1}$ are the standard generators of $F$. Assuming that such a sequence is convergent in the space of groups marked by $t+2$ elements, denote by $G = (\langle x_{0}, x_{1}, g_{1},\ldots , g_{t}|R_{F} \cup R_{G} \rangle, (x_{0}, x_{1}, g_{1},\ldots , g_{t}))$ the limit group formed in that manner; here $x_{0}$, $x_{1}$ are "limits" of the constant sequences $(x_{0})_{n< \omega}$ and $(x_{1})_{n<\omega}$, $g_{i}$ is the "limit" of $(g_{i,n})_{n<\omega}$ for $1\leq i\leq t$, and $R_{F}$ and $R_{G}$ refer respectively to the set of standard relations taken from $F$ and the set (possibly infinite) of new relations. \parskip0pt It has been shown in \cite{GC} that in the case of free groups some standard constructions can be obtained as limits of free groups. For example, it is possible to get $\mathbb{Z} ^{k}$ as a limit of $\mathbb{Z}$ and $\mathbb{F}_{k}$ as a limit of $\mathbb{F} _{2}$. On the other hand, the direct product of $\mathbb{F} _{2}$ and $\mathbb{Z}$ cannot be obtained as a limit group. HNN-extensions often occur in the class of limit groups (with respect to free groups). For example, the following groups are the limits of convergent sequences in the space of free groups marked by three elements: the free group of rank $3$, the free abelian group of rank $3$, or an HNN-extension over a cyclic subgroup of the free group of rank $2$ (\cite{FGM}). All non-exceptional surface groups form another broad class of interesting examples (\cite{BaB}, \cite{BaG}). \parskip0pt In the case of Thompson's group the situation is not so clear.
Since the center of $F$ is trivial, it is surely not possible to obtain any direct product with the whole group as an $F$-limit group. In 1985 Brin and Squier \cite{BS} showed that Thompson's group $F$ does not satisfy any law (see also Abert's paper \cite{A} for a shorter proof). However, in this paper we show that there are certain non-trivial words with constants over $F$ (later called laws with constants) which are equal to the identity for each evaluation in $F$. This implies that no free product of $F$ with any non-trivial group is admissible as a limit group with respect to $F$ (see Section 2). Moreover, we prove that HNN-extensions over a cyclic subgroup are not admissible as limit groups with respect to $F$ (see Section 3). \parskip0pt There are many geometric interpretations of $F$, but here we will use the following one. Consider the set of all strictly increasing continuous piecewise-linear functions from the closed unit interval onto itself. Then the group $F$ is realized by the set of all such functions which are differentiable except at finitely many dyadic rational numbers and such that all slopes (derivatives) are integer powers of 2. The corresponding group operation is just composition. For further reference it will be useful to give an explicit form of the generators $x_{0}, x_{1}, \ldots$ in terms of piecewise-linear functions: $$ x_{n}(t) = \left\{ \begin{array}{ll} t & \textrm{, $t\in [0,\frac{2^{n}-1}{2^{n}} ]$} \\ \frac{1}{2}t + \frac{2^{n}-1}{2^{n+1}} & \textrm{, $t\in [\frac{2^{n}-1}{2^{n}}, \frac{2^{n+1}-1} {2^{n+1}} ]$} \\ t - \frac{1}{2^{n+2}} & \textrm{, $t\in [\frac{2^{n+1} -1}{2^{n+1}}, \frac{2^{n+2}-1}{2^{n+2}}]$} \\ 2t-1 & \textrm{, $t\in [\frac{2^ {n+2}-1}{2^{n+2}},1]$} \end{array}\right. $$ for $n = 0, 1,\ldots$. \parskip0pt For any dyadic subinterval $[a,b]\subset [0,1]$, let us consider the set of elements in $F$ which are trivial on its complement, and denote it by $F _{[a,b]}$.
We know that it forms a subgroup of $F$, which is isomorphic to the whole group. Let us denote its standard infinite set of generators by $x_{[a,b],0}, x_{[a,b],1}, x_{[a,b],2}, \ldots$. Let us consider an arbitrary element $g$ in $F$ and treat it as a piecewise-linear homeomorphism of the interval $[0,1]$. Let $supp(g)$ be the set $\{ x\in [0,1] : g(x) \neq x \}$ and $\overline{supp}(g)$ the topological closure of $supp(g)$. We will call each point from the set $P_{g} = (\overline{supp}(g)\setminus supp(g))\cap\mathbb{Z} [\frac{1}{2}]$ a \emph{dividing point} of $g$. This set is obviously finite and thus we get a finite subdivision of $[0,1]$ of the form $[0=p_{0}, p_{1}], [p_{1}, p_{2}], \ldots , [p_{n-1}, p_{n}=1]$ for some natural $n$. It is easy to see that $g$ can be presented as $g = g_{1}g_{2}\ldots g_{n}$, where $g_{i}\in F _{[p_{i-1}, p_{i}]}$ for each $i$. Since $g$ can act trivially on some of these subintervals, some of the elements $g_{1},\ldots ,g_{n}$ may be trivial. We call the set of all non-trivial elements from $\{ g_{1},\ldots ,g_{n}\}$ the \emph{defragmentation} of $g$. \begin{fact}[Corollary 15.36 in \cite{GS}, Proposition 3.2 in \cite{KM}]\label{GS} The centralizer of any element $g\in F$ is the direct product of finitely many cyclic groups and finitely many groups isomorphic to $F$. \end{fact} Moreover if the element $g\in F$ has the defragmentation $g=g_1 \ldots g_n$, then some roots of the elements $g_1, \ldots , g_n$ are the generators of cyclic components of the decomposition of the centralizer above. The components of this decomposition which are isomorphic to $F$ are just the groups of the form $F_{[a,b]}$, where $[a,b]$ is one of the subintervals $[p_{i-1},p_{i}] \subset [0,1]$, which are stabilized pointwise by $g$. 
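As a sanity check, the piecewise-linear realization of the generators given above can be verified with exact dyadic arithmetic. The sketch below is our own illustrative code (the helper `x` is not notation from the paper); it reads a product $fg$ as the composition $t\mapsto f(g(t))$ and confirms the defining relations $x_{j}x_{i}=x_{i}x_{j+1}$, $i<j$, on a grid of dyadic points:

```python
from fractions import Fraction as Fr

def x(n):
    """The generator x_n of F as a piecewise-linear homeomorphism of [0,1]."""
    p = Fr(2**n - 1, 2**n)
    q = Fr(2**(n + 1) - 1, 2**(n + 1))
    r = Fr(2**(n + 2) - 1, 2**(n + 2))
    def f(t):
        if t <= p:
            return t                                 # identity below p
        if t <= q:
            return t / 2 + Fr(2**n - 1, 2**(n + 1))  # slope 1/2
        if t <= r:
            return t - Fr(1, 2**(n + 2))             # slope 1
        return 2 * t - 1                             # slope 2
    return f

x0, x1, x2, x3 = x(0), x(1), x(2), x(3)
ts = [Fr(k, 64) for k in range(65)]                  # dyadic sample points

# defining relations x_j x_i = x_i x_{j+1} for i < j, with products
# read as composition (fg)(t) = f(g(t)):
assert all(x1(x0(t)) == x0(x2(t)) for t in ts)       # j=1, i=0
assert all(x2(x0(t)) == x0(x3(t)) for t in ts)       # j=2, i=0
assert all(x2(x1(t)) == x1(x3(t)) for t in ts)       # j=2, i=1
```

Using `fractions.Fraction` keeps every breakpoint and slope exact, so the relations are checked without any floating-point tolerance.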
Generally, if we interpret the elements of $F$ as functions, the relations occurring in the presentation of $F$, $[x_{0}x_{1}^{-1}, x_{0}^{-i} x_{1}x_{0}^{i}]$ for $i = 1,2$, express the fact that two functions whose supports are disjoint except for finitely many points commute. In particular, these relations imply the analogous relations for all $i>2$. Since $x_{0}^{-i}x_{1}x_{0}^{i} = x_{i+1}$, we conclude that all the relations of the form $[x_{0}x_{1}^{-1}, x_{M}]$, $M>1$, hold in Thompson's group $F$. We often refer to these geometrical observations. I am grateful to the referee for his helpful remarks. \section{Free products} Brin and Squier have shown in \cite{BS} that Thompson's group $F$ does not satisfy any group law. In this section we show how to construct words with constants from $F$ which are equal to the identity for any substitution in $F$. \begin{defi} Let $w(y_{1},\ldots , y_{t})$ be a non-trivial word over $F$, reduced in the group $\mathbb{F} _{t}\ast F$ and containing at least one variable. We will call $w$ a \emph{law with constants} in $F$ if for any $\bar{g}=(g_{1},\ldots , g_{t})\in F^{t}$, the value $w(\bar{g})$ is equal to $1_{F}$. \end{defi} The following proposition gives a construction of certain laws with constants in $F$. \begin{prop} \label{lwc} Consider the standard action of Thompson's group $F$ on $[0,1]$. Suppose we are given four pairwise disjoint closed dyadic subintervals $I_{i}=[p_{i}, q_{i}]\subset [0,1]$, $1\leq i \leq 4$, and assume that $p_{1}<p_{2}<p_{3}<p_{4}$. Then for any non-trivial $h_{1}\in F _{I_{1}}$, $h_{2}\in F_{I_{2}}$, $h_{3}\in F_{I_{3}}$ and $h_{4}\in F_{I_{4}}$, the word $w$ obtained from $$[y^{-1}h_{1}^{-1}yh_{4}^{-1}y^{-1}h_{1}yh_{4}, y^{-1}h_{2}^{-1}yh_{3}^{-1}y^{-1}h_{2}yh_{3}]$$ by reduction in $\mathbb{Z}\ast F$ (we treat the variable $y$ as a generator of $\mathbb{Z}$) is a law with constants in $F$.
\end{prop} \emph{Proof.} \ We will use the following notation: $w_{14}=y^{-1}h_{1}^{-1}yh_{4}^{-1}y^{-1}h_{1}yh _{4}$ and $w_{23}=y^{-1}h_{2}^{-1}yh_{3}^{-1}y^{-1}h_{2}yh_{3}$. It is easy to see that $w$ cannot be reduced to a constant. \parskip0pt We claim that \begin{quote} for any $g\in F$ satisfying $g(q_{1})<p_{4}$ and $g(p_{4})>q_{1}$ the word $w _{14}(g)$ is equal to the identity. \end{quote} To show this we consider the action of $w_{14}(g)$ on each point from $[0,1]$. Assume that $t\in [0,g^{-1}(q_{1}))$. Since $t\notin supp(h_{4})$ we have: $$w_{14}(g)(t)=g^{-1}h_{1}^{-1}gh_{4}^{-1}g^{-1}h_{1}g(h_{4}(t))=g^{-1}h_{1}^{-1}gh_{4}^{-1}g^{-1}(h_{1} (g(t))).$$ By $g^{-1}(h_{1}(g(t)))<g^{-1}(h_{1}(q_{1}))=g^{-1}(q_{1})<p_{4}$ we see $h_{4}^{-1}(g^{-1}(h_{1}(g(t)))) =g^{-1}(h_{1}(g(t)))$. Thus: $$w_{14}(g)(t)=g^{-1}h_{1}^{-1}gg^{-1}(h_{1}(g(t)))=t.$$ If $t\in [g^{-1}(q_{1}),1]$ then since $h_{4}^{\pm 1}(t)\geq min(t,p_{4})$, we have $g(h_{4}^{\pm 1}(t)) \geq q_{1}$ and hence: $$w_{14}(g)(t)=g^{-1}h_{1}^{-1}gh_{4}^{-1}g^{-1}h_{1}g(h_{4}(t))=g^{-1}h_{1}^{-1}gh_{4}^{-1}g^{-1}g(h_{4} (t))=$$ $$=g^{-1}h_{1}^{-1}g(t)=t.$$ It follows that for $g$ such that $g(q_{1})<p_{4}$ and $g(p_{4})>q_{1}$, $w_{14}(g)=1_{F}$ and hence $w(g)=[w_{14}(g),w_{23}(g)]=[1_{F},w_{23}(g)]=1_{F}$. Thus we are left with the case when $g(q_{1})\geq p_{4}$ (the proof of the case $g(p_{4})\leq q_{1}$ uses the same argument). Now we will prove that \begin{quote} for any $g\in F$ satisfying $g(q_{1})\geq p_{4}$ the word $w_{23}(g)$ is equal to the identity. \end{quote} Assume that $t\in [0,g^{-1}(p_{2})]$. Since $g(q_{1})\geq p_{4}$, we have $q_{1}\geq g^{-1}(p_{4})> g^{-1}(p_{2})\geq t$. Thus: $$w_{23}(g)(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}h_{2}gh_{3}(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}h_{2}g (t)=$$ $$=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}g(t)=t.$$ Now assume that $t\in (g^{-1}(p_{2}),g^{-1}(q_{2}))$.
Then since again $q_{1}\geq g^{-1}(p_{4})> g^{-1}(q_{2})\geq t$ we obtain: $$w_{23}(g)(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}h_{2}gh_{3}(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}h_{2} g(t).$$ Since $h_{2}(g(t))\in (p_{2},q_{2})$ we have $g^{-1}(h_{2}(g(t)))<g^{-1}(q_{2})<q_{1}$ and: $$w_{23}(g)(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}(g^{-1}h_{2}g(t))=g^{-1}h_{2}^{-1}g(g^{-1}h_{2}g(t))=t.$$ Assume that $t\in [g^{-1}(q_{2}),g^{-1}(p_{3})]$. Since we still have $g^{-1}(p_{3})<q_{1}$, we see that: $$w_{23}(g)(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}h_{2}gh_{3}(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}h_{2}g (t)=$$ $$=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}g(t)=t.$$ Let $t\in (g^{-1}(p_{3}),g^{-1}(q_{3}))$. Then since $g(p_{3})>q_{2}$ and $h_{3}(t)\neq t\ \Rightarrow\ h_{3}(t)>p_{3}$: $$w_{23}(g)(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}h_{2}gh_{3}(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}gh_{3} (t)=$$ $$=g^{-1}h_{2}^{-1}g(t)=t.$$ Finally assume $t\in [g^{-1}(q_{3}),1]$ (and then $g(t)>q_{2}$). Similarly to the above: $$w_{23}(g)(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}h_{2}gh_{3}(t)=g^{-1}h_{2}^{-1}gh_{3}^{-1}g^{-1}g(h_{3} (t))=$$ $$=g^{-1}h_{2}^{-1}g(t)=t.$$ Now we see that for $g$ such that $g(q_{1})\geq p_{4}$, we have $w_{23}(g)=1_{F}$ and hence $w(g)=[w_{14}(g),w_{23}(g)]=[w_{14}(g),1_{F}]=1_{F}$. The proof is finished. \hfill $\square$\\ We now apply the construction from Proposition \ref{lwc} to limits of Thompson's group $F$. \begin{theo}\label{fre} Suppose we are given a convergent sequence of marked groups $((G_{n}, (x_{0}, x_{1}, g_{n,1},\ldots , g_{n,s})))_{n<\omega}$, where $G_{n}=F$, $g_{n,1},\ldots , g_{n,s}\in F$, $n\in\mathbb{N}$, and denote by $\mathbb{G}$ its limit. Then $\mathbb{G}\neq F\ast G$ for any non-trivial $G$. \end{theo} Before the proof we formulate a general statement, which exposes the main point of our argument.
\begin{prop}\label{pro} Let $H=\langle h_{1},\ldots , h_{m}\rangle$ be a finitely generated torsion-free group which satisfies a one-variable law with constants and does not satisfy any law without constants. Let $\mathbb{G}$ be the limit of a convergent sequence of marked groups $((G_{n}, (h_{1},\ldots , h_{m}, g_{n,1},\ldots , g_{n,t})))_{n<\omega}$, where $g_{n,1},\ldots , g_{n,t}\in H$, $G_{n}=H$, $n\in\mathbb{N}$. Then $\mathbb{G} \neq H\ast K$ for any non-trivial $K<\mathbb{G}$. \end{prop} \emph{Proof.} \ It is clear that $\mathbb{G}$ is torsion-free. To obtain a contradiction suppose that $\mathbb{G}=H\ast K$, $K\neq\{ 1\}$, and $\mathbb{G}$ is marked by a tuple $(h_{1},\ldots , h_{m}, f_{1},\ldots , f_{t})$. Let $f=u(\bar{h},\bar{f})$ be an element of $K\setminus\{ 1\}$ and let $w(y)$ be a law with constants in $H$. Obviously $w(u(\bar{h}, g_{n,1},\ldots , g_{n,t}))=1_{H}$ for all $n< \omega$. It follows from the definition of an $H$-limit group that $w(u(\bar{h},\bar{f}))=1 _{\mathbb{G}}$. Since $w$ was chosen to be non-trivial, with constants from $H$, and $f$ has infinite order, we obtain a contradiction with the fact that $\mathbb{G}$ is the free product of $H$ and $K$. \hfill $\square$\\ \emph{Proof of Theorem \ref{fre}.} \ It follows directly from Proposition \ref{lwc} that there is some word $w(y)$ which is a law with constants in $F$, and hence we just apply Proposition \ref{pro} for $H =F$, $h_{1}=x_{0}$ and $h_{2}=x_{1}$. \hfill $\square$\\ \section{HNN-extensions} Now we proceed to discuss the case of HNN-extensions.
For this purpose we consider a sequence of groups marked by three elements, $(G_{n})_{n<\omega}$, and the corresponding limit group $G = (\langle x_{0}, x_{1}, g|R_{F} \cup R_{G} \rangle, (x_{0}, x_{1}, g))$. The following theorem is the main result of the section. \begin{theo}\label{mt} Let $(G_{n})_{n<\omega}$ be a convergent sequence of groups, where $G_{n} = (F,(x_{0},x_{1},g_{n}))$, and let $G = (\langle x_{0}, x_{1}, g|R_{F} \cup R_{G}\rangle, (x_{0}, x_{1}, g))$ be its limit. Then $G$ is not an HNN-extension of Thompson's group $F$ of the form $\langle x_{0}, x_{1}, g | R_{F}, ghg^{-1} = h' \rangle$ for some $h,h' \in F$. \end{theo} In what follows we will need two easy technical lemmas. \begin{lemm}\label{l1} Suppose $g\in F$ and let $x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{n}^{a_{n}} x_{n}^{-b_{n}}\ldots x_{1}^{-b_{1}}x_{0}^{-b_{0}}$ be its normal form. There is $M\in\mathbb{N}$ such that for all $m>M$: $$g^{-1}x_{m}g = x_{m+t} \ \ \emph{or} \ \ gx_{m}g^{-1}=x_{m+t},$$ where $t = \mid\sum _{i=0} ^{n} (a_{i}-b_{i})\mid$. \end{lemm} \emph{Proof.} Consider the case when $\sum _{i=0} ^{n} (a_{i}-b_{i})\geq 0$. Then for sufficiently large $m$: $$ x_{0}^{b_{0}}x_{1}^{b_{1}}\ldots x_{n}^{b_{n}}x_{n}^{-a_{n}}\ldots x_{1}^{-a_{1}} x_{0}^{-a_{0}} x_{m} x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{n}^{a_{n}}x_{n}^{-b_{n}} \ldots x_{1}^{-b_{1}}x_{0}^{-b_{0}} =$$ $$= x_{0}^{b_{0}}x_{1}^{b_{1}}\ldots x_{n}^{b_{n}} x_{m+\sum _{i=0} ^{n} a_{i}} x_{n}^{-b_{n}}\ldots x_{1}^{-b_{1}}x_{0}^{-b_{0}} =$$ $$ = x_{m+\sum _{i=0} ^{n} (a_{i}-b_{i})}. $$ In the case when $\sum _{i=0} ^{n} (a_{i}-b_{i})<0$ we consider the symmetric conjugation and apply the same argument.
\hfill $\square$\\ \begin{lemm}\label{l2} Under the assumptions of Lemma \ref{l1}, the numbers $M$ and $t$ defined in that lemma additionally satisfy the property that for all $m>M$ and $k>0$: $$g^{-k}x_{m}g^{k} = x_{m+kt}\ \ \emph{or} \ \ g^{k}x_m g^{-k} = x_{m+kt}.$$ \end{lemm} \emph{Proof.} If $g^{-1}x_m g=x_{m+t}$ holds (one of the possible conclusions of Lemma \ref{l1}) then $M\leq m+t$ and applying Lemma \ref{l1} $k$ times we obtain the result. The case $gx_m g^{-1} = x_{m+t}$ is similar. \hfill $\square$\\ \emph{Proof of Theorem \ref{mt}.}\ First we prove the theorem in the case of centralized HNN-extensions. \parskip0pt Suppose that $h=h'\neq 1$ in the formulation, i.e. the limit group has a relation of the form $ghg^{-1} =h$, and denote by $H$ the corresponding HNN-extension of Thompson's group, $\langle x_{0},x_{1}, g | R_{F}, ghg^{-1} = h \rangle$. Assume that $ghg^{-1} = h$ is satisfied in $G$. From the definition of a limit group it follows that $g_{n}hg_{n}^{-1}=_{F} h$ for almost all $n$. Denote by $C(h)$ the centralizer of $h$ and by $C_{1}\oplus\ldots\oplus C_{m}$ its decomposition taken from Fact \ref{GS}. As almost all $g_{n}$ commute with $h$, almost all $g_{n}$ have a decomposition of the form $g_{n} = g_{n,1}\ldots g_{n,m}$, where $g_{n,i} \in C_{i}$.\parskip0pt As $h\neq 1$, at least one of the factors of $C_{1}\oplus\ldots\oplus C_{m}$ is isomorphic to $\mathbb{Z}$, say $C_{i_{0}}$. Denote by $[a,b]$ the support of elements taken from the subgroup $C_{i_{0}}$. It follows from the construction of this decomposition that $h$ can fix only finitely many points in $[a,b]$.\parskip0pt Let us consider the sequence $(g_{n,i_{0}})_{n<\omega}$.
Without loss of generality we may suppose that $(g_{n,i_{0}})_{n<\omega}$ consists of powers of some element of $F$ (which is a generator of $C_{i_{0}}$). Consider the case when it has infinitely many occurrences of the same element. If $g'$ occurs infinitely many times in this sequence, then infinitely many $g_{n}(g')^{-1}$ commute with $x_{[a,b],0}$, $x_{[a,b], 1}$. That gives us a subsequence $(g_{k_{n},i_{0}})_{n<\omega}$ for which the relation $[g_{k_{n}}(g') ^{-1},f]=1$ holds for all $n$ and for all $f\in \langle x_{[a,b],0},x_{[a,b],1}\rangle$. As $\langle x_{[a,b],0}, x _{[a,b],1}\rangle$ is isomorphic to Thompson's group $F$, we will find a word of the form $g'y^{-1}f^{-1} y(g')^{-1}f$ with $f\in \langle x_{[a,b],0}, x_{[a,b],1}\rangle$, which is trivial for $y=\lim _{n\to\infty} g_{k_{n}}$ in the limit group corresponding to the sequence $(g_{k_{n},i_{0}})_{n<\omega}$ and non-trivial for $y=g$ in the group $H$. Indeed, it follows from Britton's Lemma on irreducible words in an HNN-extension (\cite{LS}, page 181) that the considered word can be reduced in $H$ only if $f^{-1}$ lies in the cyclic subgroup generated by $h$. But $f\in \langle x_{[a,b],0}, x_{[a,b],1}\rangle$ can easily be chosen outside $\langle h\rangle$.\parskip0pt Let us now assume that the sequence $(g_{n,i_{0}})_{n<\omega}$ is not stabilizing. By the discussion from the end of Section 1 we see that \begin{quote} $(\dagger )$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ for all $m>1$, $[x_{[a,b],0}x_{[a,b],1}^{-1}, x_{[a,b],m}] = 1$.\end{quote} On the other hand, any $g_{n,i_{0}}$ is a power of some fixed element from $C_{i_{0}}$.
Thus we see by Lemma \ref{l2} that for $M$ found for the generator of $C_{i_{0}}$ as in Lemma \ref{l1}: $$(\forall n)\ g_{n,i_{0}}^{-1}x_{[a,b],M}g_{n,i_{0}}=x_{[a,b],M+t_{n}}\ \ \emph{or}\ \ (\forall n)\ g_{n,i_{0}}x_{[a,b],M}g_{n,i_{0}}^{-1}=x_{[a,b],M+t_{n}}, \ t_{n}\geq 0.$$ Substituting $x_{[a,b],M+t_{n}}$ into $(\dagger )$ instead of $x_{[a,b],m}$ we have: $$[x_{[a,b],0}x_{[a,b],1}^{-1}, g_{n,i_{0}}^{-1}x_{[a,b],M}g_{n,i_{0}}] = 1\ \ \emph{or}\ \ [x_{[a,b],0}x _{[a,b],1}^{-1}, g_{n,i_{0}}x_{[a,b],M}g_{n,i_{0}}^{-1}] = 1.$$ Thus we see that one of the following relations holds for all $n$: $$ [x_{[a,b],0}x_{[a,b],1}^{-1}, g_{n}^{-1}x_{[a,b],M}g_{n}] = 1 \ \ \emph{or} \ \ [x_{[a,b],0} x_{[a,b],1}^{-1}, g_{n}x_{[a,b],M}g_{n}^{-1}] = 1.$$ Suppose that the first relation holds for all $n$. Consider the corresponding word in the group $H$: $$ w = x_{[a,b],1}x_{[a,b],0}^{-1}g^{-1}x_{[a,b],M}^{-1}gx_{[a,b],0}x_{[a,b],1}^{-1}g^{-1}x_{[a,b], M}g.$$ We claim that $w \neq 1$ in this HNN-extension. Once again, it follows from Britton's Lemma on irreducible words in an HNN-extension that we can reduce $w$ only if $x_{[a,b],M}$ is a power of $h$ or $x_{[a,b],0} x_{[a,b],1}^{-1}$ is a power of $h$. We know that $x_{[a,b],0}, x_{[a,b],1}, \ldots x_{[a,b],m}, \ldots$ generate the group $F _{[a,b]}$, which is isomorphic to $F$. From the properties of $F$ we know that for different $m,m'>M$, $x_{[a,b],m}$ and $x_{[a,b],m'}$ do not have a common root. Thus, possibly increasing the number $M$, we can assume that $x_{[a,b],M}$ is not a power of $h$. If $x_{[a,b],0}x_{[a,b],1}^{-1} = h^{d}$ for some integer $d$, then $h^{d}$ fixes pointwise the segment $[\frac{1}{4}a + \frac{3}{4}b,b] \subset [a,b]$. Hence $h$ also fixes some final subinterval of $[a,b]$. This gives a contradiction, as $h$ was chosen to fix only finitely many points in $[a,b]$.
This finishes the case of centralized HNN-extensions.\parskip0pt Generally, let us consider the situation where the limit group has one relation of the form $ghg^{-1}=h'$ for some $h, h'\in F$. By the construction of limit groups, $h'=h^{f}$ for some element $f \in F$. Indeed, if $h$ and $h'$ are not conjugate in $F$, then there is no sequence $(g_{n})_{n<\omega}$ in $F$ with $g_{n}hg_{n}^{-1}=h'$ for almost all $n$. We now apply the argument above: let $(fg_{n})_{n<\omega}$ be a sequence convergent to the element $fg$. It obviously commutes with $h$, so we can repeat the proof above step by step. That completes the proof. \hfill $\square$\\ \end{document}
\begin{document} \title{A discontinuous isoperimetric profile for a complete Riemannian manifold} \begin{quote} {\small ABSTRACT: The first known example of a complete Riemannian manifold whose isoperimetric profile is discontinuous is given. \par RESUM\'E : On construit le premier exemple connu d'une vari\'et\'e riemannienne compl\`ete dont le profil isop\'erim\'etrique est discontinu.} \end{quote} \footnotetext[1]{S. Nardulli, Instituto de Matem\'atica, Universidade Federal de Rio de Janeiro} \footnotetext[2]{P. Pansu, Univ Paris-Sud, Laboratoire de Math\'ematiques d'Orsay, Orsay, F-91405} \footnotetext[3]{\hskip42pt CNRS, Orsay, F-91405.} \section{Introduction} \subsection{The problem} Let $M$ be a Riemannian manifold. We are concerned with the continuity of the isoperimetric profile of $M$. Given $0<v<\mathrm{vol}(M)$, consider all domains, i.e. smooth compact codimension $0$ submanifolds in $M$, of volume $v$. Define $I_{M}(v)$ as the greatest lower bound of the boundary areas of such domains. In this way, one gets a function $I_{M}:(0,\mathrm{vol}(M))\to\mathbf{R}_{+}$ called the \emph{isoperimetric profile} of $M$. \begin{ques} When is the isoperimetric profile a continuous function? \end{ques} The answer is affirmative when $M$ is compact; see Lemme 6.2 of \cite{Gallot}. S. Gallot's proof uses techniques of metric geometry. In the compact case, alternative proofs, based on the direct method of the calculus of variations, can be found in books like \cite{AmbrosioFuscoPallara}, \cite{MorganGMT}, \cite{Maggi}. The latter argument has been extended to the case of complete manifolds with $C^{2,\alpha}$-bounded geometry, see Theorem 1 of \cite{NardulliFlores} and Theorem 2.2 of \cite{NardulliFlores3}. If one assumes existence of isoperimetric regions of every volume, one can weaken the bounded geometry assumptions. It suffices to assume a lower bound on the Ricci curvature and on the volumes of balls of radius 1, see Theorem 4.1 of \cite{NardulliFlores}.
In our opinion, it remains an open question whether the noncollapsing assumption (lower bound on the volumes of balls) can be removed or not, see Question \ref{Ques:3} below. The isoperimetric profile is continuous also when the volume of $M$ is finite; a proof of this fact can be found in Corollary 2.4 of \cite{NardulliRusso}. When the ambient manifold is a non-compact homogeneous space, Hsiang showed that its isoperimetric profile is a non-decreasing and absolutely continuous function [\cite{Hsiang}, Lemma 3, Thm. 6]. In a recent paper, \cite{RitoreContinuity}, Manuel Ritor\'e showed that a complete Riemannian manifold possessing a strictly convex Lipschitz continuous exhaustion function has a continuous and nondecreasing isoperimetric profile. Hadamard manifolds and complete non-compact manifolds with strictly positive sectional curvature belong to this class. This shows that earlier attempts to construct counterexamples using pieces of increasing negative curvature are doomed to fail. An example of a manifold with density with discontinuous isoperimetric profile has been described by Adams, Morgan and Nardulli in Prop. 2 of \cite{MorganBlog}. For more information about the literature on the continuity of the isoperimetric profile the reader should consult the introduction of \cite{RitoreContinuity} and the references therein. \subsection{The result} \begin{thm} \label{main} There exists a connected non-compact 3-dimensional Riemannian manifold $M$ such that $I_M$ is a discontinuous function. \end{thm} The proof is a modification of the treatment of Riemannian manifolds with density by Adams, Morgan and Nardulli, an account of which can be found in Frank Morgan's blog, \cite{MorganBlog}. Start with a disjoint union of Riemannian manifolds $N=\coprod_n M_n$ such that $\mathrm{vol}(M_n)=1+\tau_n$ where $\tau_n>0$ tends to 0. Then $I_N(1+\tau_n)=0$. Assume that, for all $n$, $I_{M_n}(1)=I_{M_n}(\tau_n)\geq 1$.
Then it is not too hard to show that $I_N(1)\geq 1$. Connecting $M_{n}$ to $M_{n+1}$ with a very thin tube produces a connected Riemannian manifold $M$ for which $I_M(1+\tau_n)$ tends to 0. Again, it is not too hard to show that $I_M(1)>0$. Therefore $I_M$ is discontinuous. Thus the key input is the sequence of Riemannian manifolds $M_n$ with $\mathrm{vol}(M_n)$ bounded and $I_{M_n}(\tau_n)$ bounded below. Adams, Morgan and Nardulli indulged themselves in introducing densities. They took for $M_n$ a tiny round sphere with a high constant density. Since volumes and boundary areas rescale differently, one can achieve $I_{M_n}(\tau_n)\geq 1$. Instead, we use nilmanifolds equipped with metrics which converge (up to rescaling) to a single Carnot-Carath\'eodory metric. The Carnot-Carath\'eodory isoperimetric inequality established in \cite{Pansu83} gives a uniform lower bound for the isoperimetric profiles of such metrics. A similar construction certainly works in any dimension $\geq 3$. \begin{ques}\label{Ques:2} Does there exist a 2-dimensional Riemannian manifold whose isoperimetric profile is discontinuous? \end{ques} \begin{ques} \label{Ques:3} Does a manifold with Ricci curvature bounded below and admitting isoperimetric regions of every volume have a continuous isoperimetric profile? \end{ques} \section{Isoperimetry in nilmanifolds} \subsection{Isoperimetry in the Heisenberg group} The Heisenberg group $\mathbf{H}$ is the group of real upper triangular unipotent $3\times 3$ matrices, \begin{eqnarray*} \mathbf{H}=\{\begin{pmatrix} 1&x&z\\0&1&y\\0&0&1 \end{pmatrix}\,;\,x,\,y,\,z\in\mathbf{R}\}. \end{eqnarray*} Putting integer entries produces the discrete subgroup $\mathbf{H}_{\mathbf{Z}}\subset\mathbf{H}$. Let $dx$, $dy$, $\theta=dz-xdy$ be a basis of left-invariant forms. Let \begin{eqnarray*} g_{\epsilon}=dx^2+dy^2+\frac{1}{\epsilon^2}\theta^2.
\end{eqnarray*} This is a left-invariant Riemannian metric on $\mathbf{H}$. As $\epsilon$ tends to 0, the distance $d_{\epsilon}$ associated to $g_\epsilon$ converges to the \emph{Carnot-Carath\'eodory distance} \begin{eqnarray*} d_c(p,q)=\inf\{\mathrm{length}(\gamma)\,;\,\gamma(0)=p,\,\gamma(1)=q,\,\gamma^{*}\theta=0\}. \end{eqnarray*} \begin{itemize} \item $d_c$ has Hausdorff dimension 4, with spherical 4-dimensional measure proportional to the Haar measure $\mathcal{S}^4=dxdydz$. \item Smooth surfaces $S$ in $\mathbf{H}$ have Hausdorff dimension 3, with spherical 3-dimensional measure proportional to the measure denoted by $\mathcal{S}^3$ to be described soon (the proportionality constants are universal and will be ignored in the sequel). \item Smooth curves which are transverse to the contact structure $\ker(\theta)$ have Hausdorff dimension 2, with spherical 2-dimensional measure $\mathcal{S}^2$ given by integration of $\theta$ (up to sign). \item Smooth curves tangent to $\ker(\theta)$ have Hausdorff dimension 1, with spherical 1-dimensional measure $\mathcal{S}^1$ being length. \end{itemize} $\mathcal{S}^3$ is locally the product $\mathcal{S}^1 \otimes \mathcal{S}^2$. Specifically, let $d\ell$ denote a (locally defined away from points where $S$ is tangent to $\ker(\theta)$) unit 1-form on $S$ whose kernel is orthogonal to the trace of $\ker(\theta)$ on the tangent plane to $S$. Then, up to a sign, $\mathcal{S}^3$ is obtained by integrating $d\ell\wedge\theta$. The Heisenberg isoperimetric inequality (\cite{Pansu83}) states that for all smooth domains $\Omega\subset \mathbf{H}$, \begin{eqnarray}\label{HI} \mathcal{S}^3(\partial \Omega)\geq \mathcal{S}^4(\Omega)^{3/4}, \end{eqnarray} up to a universal constant that we ignore again. Here is an alternate description of $\mathcal{S}^3$.
Let $d\,\mathrm{area}_\epsilon$ denote the area induced by the Riemannian metric $g_\epsilon$. The 1-form $\theta$ restricts to a 1-form on $S$; we denote its $g_\epsilon$-norm by $|\theta_{|S}|_\epsilon$. Then $\mathcal{S}^3$ has density $|\theta_{|S}|_\epsilon$ with respect to $g_\epsilon$-area, \begin{eqnarray*} d\mathcal{S}^3=|\theta_{|S}|_\epsilon \,d\,\mathrm{area}_\epsilon. \end{eqnarray*} Since $|\theta|_\epsilon=\epsilon$, $|\theta_{|S}|_\epsilon\leq\epsilon$, therefore $\mathcal{S}^3 (S)\leq\epsilon\,\mathrm{area}_\epsilon(S)$. On the other hand, the Riemannian volume element of $g_\epsilon$ is $\frac{1}{\epsilon}dxdydz$. This shows that the Heisenberg isoperimetric inequality (\ref{HI}) implies a lower bound on the isoperimetric profile of $(\mathbf{H},g_\epsilon)$ for all $\epsilon>0$, \begin{eqnarray}\label{HepsI} I_{(\mathbf{H},g_\epsilon)}(v)\geq \frac{1}{\epsilon^{1/4}}v^{3/4}. \end{eqnarray} This is asymptotically sharp for large volumes, but not for small volumes, where the correct asymptotics is $v^{2/3}$. Never mind, it is the dependence on $\epsilon$ which is most important here. We shall not directly use inequality (\ref{HepsI}). Instead, we shall rely on inequality (\ref{HI}) to study the Carnot-Carath\'eodory isoperimetric profile of a quotient of $\mathbf{H}$. Only at the very end shall we return to Riemannian geometry. \subsection{Nilmanifolds} $\mathbf{H}$ possesses group automorphisms $\delta_t(x,y,z)=(tx,ty,t^2z)$. Let $\Gamma_t=\delta_t(\mathbf{H}_\mathbf{Z})$ and $N_t=\Gamma_t\setminus\mathbf{H}$ be the quotient manifold. It inherits quotient metrics $g_\epsilon$, yielding Riemannian nilmanifolds $N_{t,\epsilon}$ of total volume equal to $\frac{t^4}{\epsilon}$. But it also inherits a Carnot-Carath\'eodory metric that depends only on $t$.
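As a sanity check on these normalizations (a direct computation from the definitions, with the dilations normalized as $\delta_t(x,y,z)=(tx,ty,t^2z)$, the normalization under which they are group automorphisms), note that $\delta_t^{*}dx=t\,dx$, $\delta_t^{*}dy=t\,dy$ and $\delta_t^{*}\theta=t^2\theta$, so that $\delta_t$ multiplies $\mathcal{S}^1$ and $d_c$ by $t$, $\mathcal{S}^3$ by $t^3$ and $\mathcal{S}^4$ by $t^4$. Since the unit cube $\{0\leq x,y,z<1\}$ is a fundamental domain for $\mathbf{H}_{\mathbf{Z}}$, $\mathcal{S}^4(N_1)=1$, whence \begin{eqnarray*} \mathcal{S}^4(N_t)=t^4,\qquad \mathrm{vol}(N_{t,\epsilon})=\frac{1}{\epsilon}\,\mathcal{S}^4(N_t)=\frac{t^4}{\epsilon}, \end{eqnarray*} the Riemannian volume element of $g_\epsilon$ being $\frac{1}{\epsilon}d\,\mathcal{S}^4$. Note also that an inequality of the form (\ref{HI}) is invariant under these dilations, since both sides are multiplied by $t^3$.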
Our first goal is to show that the Carnot-Carath\'eodory isoperimetric profile of $N_{t}$ satisfies an inequality similar to (\ref{HI}). Note that $\delta_t$ induces a homothetic map of $N_1$ onto $N_t$, so it suffices to work with one single compact space $N_1$. The volume of $N_1$ is $\mathcal{S}^{4}(N_1)=1$. \begin{thm}\label{HIN} There exists a constant $c$ such that the Carnot-Carath\'eodory isoperimetric profile of $N_1$ satisfies $I_{(N_1,d_c)}(v)\geq c\,\min\{v,1-v\}^{3/4}$. In other words, if $\Omega\subset N_1$ is a smooth domain of volume less than $1/2$, \begin{eqnarray*} \mathcal{S}^3(\partial\Omega)\geq c\,\mathcal{S}^4(\Omega)^{3/4}. \end{eqnarray*} \end{thm} The method consists in cutting domains of $N_{1}$ into pieces that lift to covering spaces. Ultimately, pieces lift to $\mathbf{H}$ where one can apply $(\ref{HI})$. This covers cases where the volume is smaller than some universal constant $v_0$. To treat domains with volume $\geq v_0$, we apply a compactness result due to \cite{Franchi-Serapioni-Serra-Cassano} (see also \cite{Leonardi-Rigot}). \subsection{Reduction to pillars} A first step is to cut domains into pieces called \emph{pillars} that lift to a $\mathbf{Z}\oplus\mathbf{Z}$ covering space $Z$ of $N_1$. \begin{defi} Let $\zeta$ denote the center of $\mathbf{H}_\mathbf{Z}$. Let us call \emph{pillar} a subset of $Z=\zeta\setminus\mathbf{H}$ whose projection to $\mathbf{H}/[\mathbf{H},\mathbf{H}]=\mathbf{R}^2$ is contained in a unit square. Denote by $PI_{Z}$ the \emph{pillar profile} of $Z$, i.e. \begin{eqnarray*} PI_{Z}(v)=\inf\{\mathcal{S}^3(\partial P)\,;\,P \textrm{ a pillar}, \,\mathcal{S}^4(P)=v\}. \end{eqnarray*} \end{defi} \begin{prop}[Reduction to pillars]\label{cut} The pillar profile of $Z$ bounds the profile of $N_{1}$ from below, with an error term, \begin{eqnarray*} I_{(N_{1},d_c)}(v)\geq PI_{Z}(v)-4v.
\end{eqnarray*} \end{prop} \par \noindent {\bf Proof.}{\hskip1em} The coordinate functions $x$ and $y$ on $\mathbf{H}$ descend to the quotient, yielding maps $N_{1}\to \mathbf{Z}\setminus\mathbf{R}$. For $u=(s,s')\in (\mathbf{Z}\setminus\mathbf{R})^2$, let \begin{eqnarray*} G_u=\{p\in N_{1}\,;\,x(p)=s\textrm{ or }y(p)=s'\}. \end{eqnarray*} This is the union of two surfaces, each of which is a level set of one of the functions $x$ or $y$. The complement of $G_u$ has a cyclic fundamental group that maps isomorphically onto $\zeta$. Let $\Omega$ be a domain in $N_{1}$. By the coarea formula, \begin{eqnarray*} \mathcal{S}^4(\Omega)=\int_{\mathbf{Z}\setminus\mathbf{R}}\mathcal{S}^3(x^{-1}(s)\cap\Omega)\,ds. \end{eqnarray*} This coarea formula follows from the fact that the volume element (viewed as a 4-form) splits, \begin{eqnarray*} d\,\mathcal{S}^4=dx\wedge dy\wedge\theta=dx\wedge d\,\mathcal{S}^3 , \end{eqnarray*} since $dy\wedge\theta=d\,\mathcal{S}^3$ along the fibers of $x$ (one can take $d\ell=dy$ globally). The same formula holds with $x$ replaced with $y$. This shows that there exists $u=(s,s')\in (\mathbf{Z}\setminus\mathbf{R})^2$ such that \begin{eqnarray*} \mathcal{S}^3(x^{-1}(s)\cap\Omega)\leq\mathcal{S}^4(\Omega), \quad \mathcal{S}^3(y^{-1}(s')\cap\Omega)\leq\mathcal{S}^4(\Omega), \end{eqnarray*} and thus \begin{eqnarray*} \mathcal{S}^3(G_u\cap\Omega)\leq 2\mathcal{S}^4(\Omega). \end{eqnarray*} The complement $\Omega\setminus G_u$ lifts to the cyclic covering space $Z$. Pick some lift. Its closure $P$ is a pillar. Indeed, on $P$, the real valued functions $x$ and $y$ take values in intervals of length $1$. The boundary of $P$ consists of a part that isometrically and injectively maps to $\partial\Omega$, and of a part that maps 2-1 to $G_u\cap\Omega$.
Therefore \begin{eqnarray*} \mathcal{S}^3(\partial P)\leq\mathcal{S}^3(\partial\Omega)+2\mathcal{S}^3(G_u\cap\Omega)\leq\mathcal{S}^3(\partial\Omega)+4\mathcal{S}^4(\Omega). \end{eqnarray*} If $\mathcal{S}^4(\Omega)=v$, this shows that \begin{eqnarray*} I_{(N_{1},d_c)}(v)\geq PI_{Z}(v)-4v.~\vrule height .9ex width .8ex depth -.1ex \end{eqnarray*} \subsection{Treatment of pillars} \begin{prop}[Treatment of pillars]\label{recut} The profile of $\mathbf{H}$ bounds the pillar profile of $Z$ from below, with an error term, \begin{eqnarray*} PI_{Z}(v)\geq I_{\mathbf{H}}(v)-2v. \end{eqnarray*} \end{prop} \par \noindent {\bf Proof.}{\hskip1em} Let $P\subset Z$ be a pillar. We can assume that its projection to $\mathbf{R}^2$ is contained in $\{0\leq x\leq 1\}$. Its inverse image $\tilde{P}$ in $\mathbf{H}$ is a $\zeta$-invariant subset with small projection in $\mathbf{R}^2$. Again, we cut $\tilde{P}$ into logs of height $1$ using level sets of the $z$ function. This time, we split the volume element as \begin{eqnarray*} d\mathcal{S}^4=dx\wedge dy\wedge dz=dz\wedge(dx\wedge dy)=dz\wedge\frac{1}{|x|}d\,\mathcal{S}^3\geq dz\wedge d\,\mathcal{S}^3. \end{eqnarray*} We have used the expression $d\,\mathcal{S}^3 =|x|\,dx\,dy$ for the measure induced on horizontal planes $\{z=s\}$. On such surfaces, one can take $d\ell=dx$ globally, whence $d\,\mathcal{S}^3 =\pm dx\wedge\theta=|x|\,dx\,dy$. The coarea formula gives \begin{eqnarray*} \mathcal{S}^4(P)&=&\mathcal{S}^4(\tilde{P}\cap\{0\leq z\leq 1\})\\ &=&\int_{0}^{1}\left(\int_{\tilde{P}\cap\{z=s\}}\frac{1}{|x|}d\,\mathcal{S}^3\right)\,ds\\ &\geq&\int_{0}^{1}\mathcal{S}^3(\tilde{P}\cap\{z=s\})\,ds. \end{eqnarray*} There exists $s\in[0,1]$ such that \begin{eqnarray*} \mathcal{S}^3(\tilde{P}\cap\{z=s\})\leq\mathcal{S}^4(P). \end{eqnarray*} Set $\Omega'=\tilde{P}\cap\{s\leq z\leq s+1\}$.
Then \begin{eqnarray*} \mathcal{S}^3(\partial\Omega')\leq \mathcal{S}^3(\partial P)+2\mathcal{S}^4(P). \end{eqnarray*} If $P$ has volume $v$, this leads to \begin{eqnarray*} PI_{Z}(v)\geq I_{\mathbf{H}}(v)-2v.~\vrule height .9ex width .8ex depth -.1ex \end{eqnarray*} \subsection{Profile of $(N_1,d_c)$} \begin{prop}[Carnot-Carath\'eodory isoperimetric inequality for small volumes]\label{CC} If $v\leq 12^{-4}$, \begin{eqnarray*} I_{(N_{1},d_c)}(v)\geq \frac{1}{2}v^{3/4}. \end{eqnarray*} \end{prop} \par \noindent {\bf Proof.}{\hskip1em} Combined with Propositions \ref{cut} and \ref{recut}, the Heisenberg isoperimetric inequality (\ref{HI}) yields \begin{eqnarray*} I_{(N_{1},d_c)}(v)\geq v^{3/4}-4v-2v=v^{3/4}(1-6\, v^{1/4})\geq\frac{1}{2}v^{3/4}, \end{eqnarray*} since $v\leq 12^{-4}$. ~\vrule height .9ex width .8ex depth -.1ex \subsection{Proof of Theorem \ref{HIN}} There is a notion of Carnot-Carath\'eodory perimeter, an appropriate topology for which $\mathcal{S}^4$ is continuous and the perimeter (which coincides with $\mathcal{S}^3$ for smooth domains) is lower semi-continuous, and a compactness theorem for sets of bounded perimeter in a compact Carnot manifold, \cite{Franchi-Serapioni-Serra-Cassano}. This implies that the Carnot-Carath\'eodory isoperimetric profile $I_{(N_1,d_c)}$ is positive on $(0,1)$ and lower semi-continuous. Therefore, there exists $\eta>0$ such that $I_{(N_1,d_c)}\geq \eta$ on $[12^{-4},1-12^{-4}]$. Set $c=\min\{\frac{1}{2},2^{3/4} \eta\}$. Then $I_{(N_1,d_c)}(v)\geq\eta\geq c(\frac{1}{2})^{3/4}\geq c\,v^{3/4}$ for every $v\in[12^{-4},\frac{1}{2}]$. On the other hand, Proposition \ref{CC} shows that $I_{(N_1,d_c)}(v)\geq c\,v^{3/4}$ for all $v\in[0,12^{-4}]$. ~\vrule height .9ex width .8ex depth -.1ex Note that the proof does not provide an effective constant $c$.
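For the record, let us spell out the conversion from Carnot-Carath\'eodory to Riemannian isoperimetry that is used below (it follows directly from the comparisons $\mathcal{S}^3(S)\leq\epsilon\,\mathrm{area}_\epsilon(S)$ and $d\,\mathrm{vol}_\epsilon=\frac{1}{\epsilon}\,d\,\mathcal{S}^4$ obtained above): if a domain $\Omega$ has Riemannian volume $v$, then $\mathcal{S}^4(\Omega)=\epsilon v$, so an inequality of the form $\mathcal{S}^3(\partial\Omega)\geq c\,\mathcal{S}^4(\Omega)^{3/4}$ yields \begin{eqnarray*} \mathrm{area}_\epsilon(\partial\Omega)\geq\frac{1}{\epsilon}\,\mathcal{S}^3(\partial\Omega)\geq\frac{c}{\epsilon}\,(\epsilon v)^{3/4}=\frac{c}{\epsilon^{1/4}}\,v^{3/4}. \end{eqnarray*}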
\subsection{Riemannian profile} \begin{cor} Let $N_{t,\epsilon}$ denote the quotient $(\delta_t(\mathbf{H}_\mathbf{Z}))\setminus\mathbf{H}$ equipped with the Riemannian metric induced by $g_\epsilon$. The isoperimetric profile of $N_{t,\epsilon}$ satisfies \begin{eqnarray*} I_{N_{t,\epsilon}}(v)\geq \frac{c}{\epsilon^{1/4}}\,\min\{v,\frac{t^4}{\epsilon}-v\}^{3/4}. \end{eqnarray*} \end{cor} \par \noindent {\bf Proof.}{\hskip1em} The homothetic map $N_1\to N_t$ induced by the automorphism $\delta_t$ transports the inequality of Theorem \ref{HIN} to $N_t$ without any change but the fact that $\mathcal{S}^4(N_t)=t^4$ replaces $1$. The Riemannian volume element of $N_{t,\epsilon}$ is $\frac{1}{\epsilon}\mathcal{S}^4$, the Riemannian area induced on surfaces satisfies $\epsilon\,\mathrm{area} \geq \mathcal{S}^3$. This leads to the indicated dependence on $\epsilon$ in the isoperimetric profile of $N_{t,\epsilon}$. ~\vrule height .9ex width .8ex depth -.1ex \section{Proof of Theorem \ref{main}} \subsection{The case of a disjoint union of nilmanifolds} \begin{prop}\label{disjoint} Let $\tau_n=\frac{1}{n}$, $\epsilon_n=\tau_n^3$ and $t_n=\tau_n^{3/4}(1+\tau_n)^{1/4}$. Let $N=\coprod_{n}N_{t_n,\epsilon_n}$. Then, for all $v\in[\frac{1}{16},1]$, $I_N(v)\geq \frac{c}{8}$, where $c$ is the constant of Theorem \ref{HIN}. \end{prop} \par \noindent {\bf Proof.}{\hskip1em} By construction, $\mathrm{vol}(N_{t_n,\epsilon_n})=1+\tau_n$. Let $\Omega$ be a domain in $N$ with $\mathrm{vol}(\Omega)=v$. Write $\Omega=\coprod_n \Omega_n$ where $\Omega_n\subset N_{t_n,\epsilon_n}$ has volume $v_n$, $\sum_{n=1}^{\infty}v_n=v$.
If some $v_n$ satisfies $v_n\geq \frac{1}{2}(1+\tau_n)$, then \begin{eqnarray*} \mathrm{area}(\partial\Omega_n)&\geq&\frac{c}{\epsilon_n^{1/4}}(1+\tau_n-v_n)^{3/4}\\ &\geq&\frac{c}{\epsilon_n^{1/4}}\tau_n^{3/4}= c, \end{eqnarray*} so \begin{equation}\label{Eq:disjoint} \mathrm{area}(\partial\Omega)\geq c \end{equation} in this case. Otherwise, for all $n\geq 1$, \begin{eqnarray*} \mathrm{area}(\partial\Omega_n)\geq\frac{c}{\epsilon_n^{1/4}}v_n^{3/4}\geq c\, v_n^{3/4}. \end{eqnarray*} We use the concavity inequality \begin{eqnarray*} a^{\alpha}+b^{\alpha}\geq (a+b)^{\alpha}, \end{eqnarray*} valid for all $0\leq\alpha\leq 1$, $a\geq 0$ and $b\geq 0$. This gives \begin{eqnarray*} \mathrm{area}(\partial\Omega)&=&\sum_{n=1}^{\infty}\mathrm{area}(\partial\Omega_n)\\ &\geq&c\sum_{n=1}^{\infty}v_n^{3/4}\\ &\geq&c\left(\sum_{n=1}^{\infty}v_n\right)^{3/4}\geq(\frac{1}{16})^{3/4}c=\frac{c}{8}.~\vrule height .9ex width .8ex depth -.1ex \end{eqnarray*} \subsection{Connecting manifolds} \par \noindent {\bf Proof.}{\hskip1em} We construct a noncompact manifold that has the shape of an infinite pearl necklace, adjusting suitable parameters carefully. Let $0<\tau_n<1$ be the sequence of positive real numbers chosen in the proof of Proposition \ref{disjoint}. Pick another sequence of volumes $w_n<1$, such that \begin{equation}\label{Eq:Main1} \sum_n w_n<\frac{1}{2}, \end{equation} and a sequence of areas $a_n>0$ such that \begin{equation}\label{Eq:Main} \sum_n a_n<\frac{c}{16}, \end{equation} where $c$ is the constant of Theorem \ref{HIN}. The manifolds $N_{t_n,\epsilon_n}$ that we want to connect to obtain our counterexample $M$ are as in Proposition \ref{disjoint}; in particular we retain here that $V(N_{t_n,\epsilon_n})=1+\tau_n$. Take two small disjoint balls $B_{n,1}, B_{n,2}$ inside $N_{t_n,\epsilon_n}$ whose boundaries have total area $\leq a_n$.
Arrange that $B_{n,2}$ and $B_{n+1,1}$ be nearly isometric. Put $\tilde{N}_n:=N_{t_n,\epsilon_n}\setminus\left(B_{n,1}\mathring{\cup}B_{n,2}\right)$. Consider tubes or cylinders $T_n$ of the form $T_n:=(S^2(1)\times [0, 1], g_n)$, where the metrics $g_n$ are chosen in such a way that $V(g_n)\leq w_n$ and they glue together into a smooth metric on the connected sum $M_n:=\tilde{N}_n\#T_n$, where the gluing is done along $i_n(S^2(1)\times\{0\})\cong\partial B_{n,2}$. Now consider \begin{equation} (M,g):=M_1\#M_2\#\cdots\#M_{n}\#M_{n+1}\#\cdots \end{equation} where $M_n$ and $M_{n+1}$ are glued together along the boundaries $i_n(S^2(1)\times\{1\})\cong\partial B_{(n+1),1}$, where $i_n:T_n\rightarrow M$ is the isometric embedding associated to our construction. We show that the right limit $I_M(1+)$ vanishes. Consider the domains $D_n:=\tilde{N}_n$; we get $V(D_n)=1+\tau_n-V(B_{n,1})-V(B_{n,2})=:1+\alpha_n$, with $\alpha_n\rightarrow 0$, and $\varepsilon'_n:=A(\partial D_n)=A_g(\partial B_{n,1}\mathring{\cup}\partial B_{n,2})\leq a_n\rightarrow 0$. This implies readily \begin{equation}0\leq\lim_{n\rightarrow+\infty} I_M(1+\alpha_n)\leq\lim_{n\rightarrow+\infty} A(\partial D_n)=0.\end{equation} We show that $I_M(1)>0$. Let $\Omega$ be a domain in $M$ such that $V(\Omega)=1$. Write $\tilde{\Omega}:=\mathring{\bigcup}\tilde{\Omega}_n$, where $\tilde{\Omega}_n:=\Omega\cap\tilde{N}_n$.
Then $$V(\tilde{\Omega})\geq 1-\sum_n w_n \geq \frac{1}{2}.$$ According to Proposition \ref{disjoint}, $$A(\partial\tilde{\Omega})\geq \frac{c}{8}.$$ Since, for all $n$, \begin{eqnarray*} \partial\tilde{\Omega}_n=((\partial\Omega)\cap\tilde{N}_n) \mathring{\cup} (\Omega\cap \partial\tilde{N}_n), \end{eqnarray*} \begin{eqnarray*} A(\partial\tilde{\Omega}_n)-A((\partial\Omega)\cap\tilde{N}_n)\leq A_g(\partial B_{n,1}\mathring{\cup}\partial B_{n,2})\leq a_n,\end{eqnarray*} thus \begin{eqnarray*} A(\partial\Omega)\geq A(\partial\tilde{\Omega})-\sum_{n}a_n\geq \frac{c}{8}-\frac{c}{16}=\frac{c}{16}. \end{eqnarray*} This shows that $I_M(1)\geq\frac{c}{16}$. This concludes the proof of Theorem \ref{main}.~\vrule height .9ex width .8ex depth -.1ex \markboth{References}{References} \addcontentsline{toc}{section}{\numberline{}References} Keywords: Isoperimetric inequality, Nilmanifold, Carnot-Carath\'eodory metric. Mathematics Subject Classification: 53C20, 49Q20. \vskip1cm \noindent Stefano Nardulli\\ Instituto de Matem\'atica\\ Universidad Federal de Rio de Janeiro (Brazil)\\ \noindent {\tt [email protected]}\\ http://www.im.ufrj.br/nardulli \par \noindent Pierre Pansu\\ Laboratoire de Math\'ematiques d'Orsay, UMR 8628 du C.N.R.S.\\ B\^atiment 425, Universit\'e Paris-Sud - 91405 Orsay (France)\\ \noindent {\tt\small [email protected]}\\ http://www.math.u-psud.fr/$\sim$pansu \end{document}
\begin{document} \title[Stein's method for dependent random variables occurring in Statistical Mechanics]{\large Stein's method for dependent random variables \\occurring in Statistical Mechanics} \author[Peter Eichelsbacher and Matthias L\"owe]{} \maketitle \thispagestyle{empty} \centerline{\sc Peter Eichelsbacher\footnote{Ruhr-Universit\"at Bochum, Fakult\"at f\"ur Mathematik, NA3/68, D-44780 Bochum, Germany, {\tt [email protected] }} and Matthias L\"owe\footnote{Universit\"at M\"unster, Fachbereich Mathematik und Informatik, Institut f\"ur Mathematische Statistik, Einsteinstr. 62, D-48149 M\"unster, Germany {\tt [email protected]}}} \begin{quote} {\small {\bf Abstract:} We obtain rates of convergence in limit theorems of partial sums $S_n$ for certain sequences of dependent, identically distributed random variables, which arise naturally in statistical mechanics, in particular, in the context of the Curie-Weiss models. Under appropriate assumptions there exists a real number $\alpha$, a positive real number $\mu$, and a positive integer $k$ such that $(S_n- n \alpha)/n^{1 - 1/2k}$ converges weakly to a random variable with density proportional to $\exp( -\mu |x|^{2k} /(2k)! )$. We develop Stein's method for exchangeable pairs for a rich class of distributional approximations including the Gaussian distributions as well as the non-Gaussian limit distributions with density proportional to $\exp( -\mu |x|^{2k} /(2k)! )$. Our results include the optimal Berry-Esseen rate in the Central Limit Theorem for the total magnetization in the classical Curie-Weiss model, for high temperatures as well as at the critical temperature $\beta_c=1$, where the Central Limit Theorem fails. Moreover, we analyze Berry-Esseen bounds as the temperature $1/ \beta_n$ converges to one and obtain a threshold for the speed of this convergence. 
Single spin distributions satisfying the Griffiths-Hurst-Sherman (GHS) inequality, like models of liquid helium or continuous Curie-Weiss models, are considered. } \end{quote} \noindent {\it MSC 2000:} Primary 60F05, secondary 60G09, 60K35, 82B20, 82D40 \noindent {\it Keywords and phrases.} Berry-Esseen bound, Stein's method, exchangeable pairs, Curie-Weiss models, critical temperature, GHS-inequality \noindent This research was done partly at the Mathematisches Forschungsinstitut Oberwolfach during a stay within the Research in Pairs Programme from August 24 - September 6, 2008. \eject \setcounter{section}{0} \secdef\sct\sect{Introduction and main result}\label{intro} There is a long tradition in considering mean--field models in statistical mechanics. The Curie--Weiss model is famous since it exhibits a number of properties of real substances, such as multiple phases, metastable states and others, explicitly. The aim of this paper is to prove Berry-Esseen bounds for the sums of dependent random variables occurring in statistical mechanics under the name Curie-Weiss models. To this end, we will develop Stein's method for exchangeable pairs (see \cite{Stein:1986}) for a rich class of distributional approximations. For an overview of results on the Curie--Weiss models and related models, see \cite{Ellis:LargeDeviations}, \cite{Ellis/Newman:1978}, \cite{Ellis/Newman/Rosen:1980}. For a fixed positive integer $d$ and a finite subset $\Lambda$ of $\Z^d$, a ferromagnetic crystal is described by random variables $X_i^{\Lambda}$ which represent the spins of the atoms at the sites $i \in \Lambda$, where $\Lambda$ describes the macroscopic shape of the crystal.
In {\it Curie--Weiss} models, the joint distribution at fixed temperature $T >0$ of the spin random variables is given by \begin{equation} \label{CWmeasure} P_{\Lambda, \beta}((x_i)):=P_{\Lambda, \beta} \bigl( (X_i^{\Lambda})_{i \in \Lambda} = (x_i)_{i \in \Lambda} \bigr) := \frac{1}{Z_{\Lambda}(\beta)} \exp \biggl( \frac{\beta}{2 |\Lambda|} \bigl( \sum_{i \in \Lambda} x_i \bigr)^2 \biggr) \prod_{i \in \Lambda} \, d \varrho(x_i). \end{equation} Here $\beta := T^{-1}$ is the inverse temperature and $Z_{\Lambda}(\beta)$ is a normalizing constant known as the partition function and $|\Lambda|$ denotes the cardinality of $\Lambda$. Moreover $\varrho$ is the distribution of a single spin in the limit $\beta \to 0$. We define $S_{\Lambda}=\sum_{i \in \Lambda} X_i^{\Lambda}$, the {\it total magnetization} inside $\Lambda$. We take without loss of generality $d=1$ and $\Lambda = \{1, \ldots, n \}$, where $n$ is a positive integer. We write $n$, $X_i^{(n)}$, $P_{n,\beta}$ and $S_n$, respectively, instead of $|\Lambda|$, $X_i^{\Lambda}$, $P_{\Lambda,\beta}$, and $S_\Lambda$, respectively. In the case where $\beta$ is fixed we may even sometimes simply write $P_n$. We assume that $\varrho$ is in the class $\mathcal{B}$ of non-degenerate symmetric Borel probability measures on $\R$ which satisfy \begin{equation} \label{condrho} \int \exp \biggl( \frac{b \, x^2}{2} \biggr) \, d \varrho (x) < \infty \quad \text{for all} \quad b>0. \end{equation} In the classical Curie--Weiss model, spins are distributed in $\{-1, +1\}$ according to $\varrho = \frac 12 (\delta_{-1} + \delta_1)$. More generally, the Curie--Weiss model carries an additional parameter $h >0$ called {\it external magnetic field} which leads to the modified measure, given by $$ P_{n,\beta,h} (x) = \frac{1}{Z_{n, \beta,h}} \exp \bigl( \frac{\beta}{2n} S_n^2 + \beta \, h S_n \bigr) \, d \varrho^{\otimes n} (x), \quad x=(x_i). 
$$ The measure $P_{n, \beta, h}$ is completely determined by the value of the total magnetization, which is therefore called an {\it order parameter}; its behaviour will be studied in this paper. The non-negative external magnetic field strength may even depend on the site: \begin{equation} \label{zn} P_{n, \beta, h_1, \ldots, h_n}(x) = \frac{1}{Z_{n , \beta, h_1, \ldots, h_n}} \exp \bigl( \frac{\beta}{2n} S_n^2 + \beta \, \sum_{i=1}^n h_i \, x_i \bigr) \, d \varrho^{\otimes n} (x), \quad x=(x_i). \end{equation} In the general case \eqref{CWmeasure}, we will see (analogously to the treatment in \cite{Ellis/Newman:1978, Ellis/Newman/Rosen:1980}) that the asymptotic behaviour of $S_n$ depends crucially on the extremal points of a function $G$ (which is a transform of the rate function in a corresponding large deviation principle): define $$ \phi_{\varrho}(s) := \log \int \exp (s \, x) \, d \varrho(x) $$ and \begin{equation} \label{Gdef} G_{\varrho}(\beta,s) := \frac{\beta \, s^2}{2} - \phi_{\varrho}(\beta \, s). \end{equation} We shall drop $\beta$ in the notation for $G$ whenever there is no danger of confusion; similarly we will suppress $\varrho$ in the notation for $\phi$ and $G$. For any measure $\varrho \in \mathcal{B}$, $G$ was proved to have global minima, of which there are only finitely many, see \cite[Lemma 3.1]{Ellis/Newman:1978}. Define $C = C_{\varrho}$ to be the discrete, non--empty set of minima (local or global) of $G$. If $\alpha \in C$, then there exists a positive integer $k:=k(\alpha)$ and a positive real number $\mu:= \mu(\alpha)$ such that \begin{equation}\label{Taylor} G(s) = G(\alpha) + \frac{\mu(\alpha) (s-\alpha)^{2k}}{(2k)!} + {\mathcal O}((s-\alpha)^{2k+1} ) \quad \text{as} \quad s \to \alpha. \end{equation} The numbers $k$ and $\mu$ are called the {\it type} and {\it strength}, respectively, of the extremal point $\alpha$.
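To illustrate \eqref{Taylor} (a standard computation, recorded here only as a sanity check), consider the classical model $\varrho = \frac 12 (\delta_{-1} + \delta_1)$. Then $\phi_{\varrho}(s) = \log \cosh s = \frac{s^2}{2} - \frac{s^4}{12} + {\mathcal O}(s^6)$, so that $$ G_{\varrho}(\beta,s) = \frac{\beta s^2}{2} - \log\cosh (\beta s) = \frac{\beta(1-\beta)}{2}\, s^2 + \frac{\beta^4}{12}\, s^4 + {\mathcal O}(s^6). $$ For $0 < \beta <1$ the origin is the unique global minimum, of type $k=1$ and strength $\mu=\beta(1-\beta)$, while for $\beta =1$ the quadratic term vanishes, $G(s) = \frac{2\, s^4}{4!} + {\mathcal O}(s^6)$, and the origin has type $k=2$ and strength $\mu=2$; the corresponding limit density is then proportional to $\exp(-\mu x^4/4!) = \exp(-x^4/12)$.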
Moreover, we define the maximal type $k^*$ of $G$ by the formula $$ k^* = \max \{ k(\alpha); \alpha \text{ is a global minimum of } G \}. $$ Note that the $\mu(\alpha)$ can be calculated explicitly: one gets \begin{equation}\label{k=1} \mu(\alpha) = \beta- \beta^2\phi''(\beta \, \alpha)\qquad \mbox{if }k=1 \end{equation} while \begin{equation}\label{kriesig} \mu(\alpha) = - \beta^{2k} \phi^{(2k)}(\beta \, \alpha) \qquad \mbox{if }k\ge 2 \end{equation} (see \cite{Ellis/Newman/Rosen:1980}). An interesting point is that the global minima of $G$ of maximal type correspond to stable states, meaning that multiple minima represent a mixed phase and a unique global minimum a pure phase. For details see the discussions in \cite{Ellis/Newman/Rosen:1980}. The following is known about the fluctuation behaviour of $S_n$ under $P_n$. In the classical model ($\varrho$ is the symmetric Bernoulli measure), the Central Limit Theorem is proved in \cite{Ellis/Newman:1978} for $0 < \beta < 1$: $$ \frac{\sum_{i=1}^n X_i}{\sqrt{n}} \to N(0, \sigma^2(\beta)) $$ in distribution with respect to the Curie--Weiss finite volume Gibbs states with $\sigma^2(\beta) = (1-\beta)^{-1}$. Since for $\beta = 1$ the variance $\sigma^2(\beta)$ diverges, the Central Limit Theorem fails at the critical point. In \cite{Ellis/Newman:1978} it is proved that for $\beta = 1$ there exists a random variable $X$ with probability density proportional to $\exp(- \frac{1}{12} x^4)$ such that as $n \to \infty$ $$ \frac{\sum_{i=1}^n X_i}{n^{3/4}} \to X $$ in distribution with respect to the finite-volume Gibbs states. Asymptotic independence properties and propagation of chaos for blocks of size $o(n)$ have been investigated in \cite{BenArous/Zeitouni:1999}. In general, given $\varrho \in \mathcal{B}$, let $\alpha$ be one of the global minima of maximal type $k$ and strength $\mu$ of $G_{\varrho}$.
Then $$ \frac{S_n - n \alpha}{n^{1 - 1/2k}} \to X_{k, \mu, \beta} $$ in distribution, where $X_{k, \mu, \beta}$ is a random variable with probability density $f_{k, \mu, \beta}$, defined by \begin{equation} \label{densitysigma} f_{1, \mu, \beta}(x) = \frac 1 {\sqrt{2 \pi \sigma^2} }\exp \bigl( -x^2 / 2 \sigma^2 \bigr) \end{equation} and for $k \geq 2$ \begin{equation} \label{densitygen} f_{k, \mu, \beta}(x) = \frac{\exp \bigl( - \mu x^{2k} / (2 k)! \bigr)}{ \int \exp \bigl( - \mu x^{2k} / (2k)! \bigr) \, dx}. \end{equation} Here, $\sigma^2 = \frac 1 \mu - \frac 1\beta$ so that for $\mu = \mu(\alpha)$ as in \eqref{k=1}, $\sigma^2 = ([ \phi''(\beta \alpha)]^{-1} - \beta)^{-1}$ (see \cite{Ellis/Newman:1978}, \cite{Ellis/Newman/Rosen:1980}). Moderate deviation principles have been investigated in \cite{Eichelsbacher/Loewe:2004}. In \cite{Ellis/Monroe/Newman:1976} and \cite{Ellis/Newman/Rosen:1980}, a class of measures $\varrho$ is described exhibiting a behaviour similar to that of the classical Curie--Weiss model. Assume that $\varrho$ is any symmetric measure that satisfies the Griffiths-Hurst-Sherman (GHS) inequality, \begin{equation}\label{GHS} \frac{d^3}{ds^3} \phi_\varrho(s) \le 0 \quad \mbox{for all } s\ge 0, \end{equation} (see also \cite{Ellis/Newman:1978b, Griffiths:1970}). One can show that in this case $G$ has the following properties: There exists a value $\beta_c$, the inverse critical temperature, and $G$ has a unique global minimum at the origin for $0 < \beta \leq \beta_c$ and exactly two global minima, of equal type, for $\beta >\beta_c$. At $\beta_c$ the unique global minimum is of type $k \geq 2$, whereas for $\beta \in (0,\beta_c)$ the unique global minimum is of type 1. At $\beta_c$ the law of large numbers still holds, but the fluctuations of $S_n$ live on a smaller scale than $\sqrt n$. The inverse critical temperature can be computed explicitly as $\beta_c= 1 / \phi''(0) = 1 / \operatorname{Var}_{\varrho}(X_1)$.
By rescaling the $X_i$ we may thus assume that $\beta_c=1$. Alternatively, the GHS-inequality can be formulated in terms of $Z_{n,\beta, h_1, \ldots, h_n}$, defined in \eqref{zn}: \begin{eqnarray} \label{GHS2} 0 & \geq & \frac{\partial^3}{\partial h_i \, \partial h_j \partial h_k} \log Z_{n, \beta, h_1, \ldots, h_n} \nonumber \\ & = & \E(X_i X_j X_k) - \E(X_i) \E(X_j X_k) - \E(X_j) \E(X_i X_k) \\ & & \hspace{0.5cm} - \E(X_k) \E(X_i X_j) + 2 \E(X_i) \E(X_j) \E(X_k) \nonumber \end{eqnarray} for all (not necessarily distinct) sites $i,j,k \in \{1, \ldots, n\}$. Here $\E$ denotes the expectation with respect to $P_{n, \beta, h_1, \ldots, h_n}$. The GHS inequality has a number of interesting implications, see \cite{Ellis/Monroe/Newman:1976}. By GHS, we will denote the set of measures $\varrho \in \mathcal{B}$ such that the GHS-inequality \eqref{GHS} is valid (for $P_{n, \beta, h_1, \ldots, h_n}$ in the sense of \eqref{GHS2}). We will give examples in Section 7. \begin{remark} In \cite[Lemma 4.1]{Ellis/Newman:1978}, for $\varrho \in \mathcal{B}$ it is proved that $G$ has a unique global minimum if and only if $$ \int \exp (s \, x) \, d\varrho(x) < \exp(s^2/2), \quad \text{for} \,\, s \,\, \text{real}, $$ where the right hand side of this strict inequality is the moment generating function of a standard normal random variable. Moreover, in the same Lemma it is proved that $G$ has a local minimum at the origin of type $k$ and strength $\mu$ if and only if $$ \bar{\mu}_j - \mu_j(\varrho) = \left\{ \begin{array}{ll} 0 & \mbox{for } j=0,1,\ldots,2k-1, \\ \mu>0 & \mbox{for } j=2k. \\ \end{array} \right. $$ Here $\mu_j(\varrho)$ and $\bar{\mu}_j$ denote the $j$'th moment of $\varrho$ and the $j$'th moment of a standard normal random variable, respectively. Note that this in particular implies $\mu_1(\varrho)=\mathbb{E}_\varrho(X_1)=0$.
\end{remark} The aim of this paper is to prove the following theorems: \secdef \subsct\sbsect{Results for the classical Curie-Weiss model} \begin{theorem}[classical Curie-Weiss model, optimal Berry-Esseen bounds outside the critical temperature]\label{CW} Let $\varrho = \frac 12 \delta_{-1} + \frac 12 \delta_1$ and $0 < \beta <1$. We have \begin{equation} \label{Kol} \sup_{z \in \R} \bigg| P_n \biggl( S_n/ \sqrt{n} \leq z \biggr) - \Phi_{\beta}(z) \bigg| \leq C \, n^{-1/2}, \end{equation} where $\Phi_{\beta}$ denotes the distribution function of the normal distribution with expectation zero and variance $(1- \beta)^{-1}$, and $C$ is a constant depending only on $\beta$. \end{theorem} \begin{theorem}[classical Curie-Weiss model, optimal Berry-Esseen bounds at the critical temperature]\label{CWcritical} Let $\varrho = \frac 12 \delta_{-1} + \frac 12 \delta_1$ and $\beta =1$. We have \begin{equation} \label{Kol2} \sup_{z \in \R} \bigg| P_n \biggl( S_n/ n^{3/4} \leq z \biggr) - F(z) \bigg| \leq C \, n^{-1/2}, \end{equation} where \begin{equation} \label{density2} F(z) := \frac{1}{Z} \int_{- \infty}^z \exp(-x^4/12) \, dx, \end{equation} $Z:= \int_{\R} \exp(-x^4/12)\, dx$ and $C$ is an absolute constant. \end{theorem} \begin{theorem}[Berry-Esseen bounds for size-dependent temperatures]\label{CWbetadep} Let $\varrho = \frac 12 \delta_{-1} + \frac 12 \delta_1$ and $0 < \beta_n < \infty$ depend on $n$ in such a way that $\beta_n \to 1$ monotonically as $n \to \infty$. Then the following assertions hold: \begin{enumerate} \item If $\beta_n -1 = \frac{\gamma}{\sqrt{n}}$ for some $\gamma \not= 0$, we have \begin{equation} \label{Kol3} \sup_{z \in \R} \bigg| P_n \biggl( S_n/ n^{3/4} \leq z \biggr) - F_{\gamma}(z) \bigg| \leq C\, n^{-1/2} \end{equation} with $$ F_{\gamma}(z) := \frac{1}{Z} \int_{- \infty}^z \exp \bigl( - \frac{x^4}{12} + \frac{\gamma x^2}{2} \bigr) \, dx,
$$ where $Z:= \int_{\R} \exp \bigl( - \frac{x^4}{12} + \frac{\gamma x^2}{2} \bigr)\, dx$ and $C$ is an absolute constant. \item If $|\beta_n -1| \ll n^{-1/2}$, $S_n/ n^{3/4}$ converges in distribution to $F$, given in \eqref{density2}. Moreover, if $|\beta_n -1| = \mathcal{O}(n^{-1})$, \eqref{Kol2} holds true. \item If $|\beta_n -1| \gg n^{-1/2}$, the Kolmogorov distance between the distribution of $\sqrt{\frac{1-\beta_n}{n}} \sum_{i=1}^n X_i$ and the normal distribution $N(0, (1-\beta_n)^{-1})$ converges to zero. Moreover, if $|\beta_n -1| \gg n^{-1/4}$, we obtain \begin{equation*} \sup_{z \in \R} \bigg| P_n \biggl( \frac{\sqrt{(1-\beta_n)} S_n}{\sqrt{n}} \leq z \biggr) - \Phi_{\beta_n}(z) \bigg| \leq C \, n^{-1/2} \end{equation*} with an absolute constant $C$. \end{enumerate} \end{theorem} \begin{remark} In \cite{Barbour:1980}, Barbour obtained distributional limit theorems, together with rates of convergence, for the equilibrium distributions of a variety of one-dimensional Markov population processes. In Section 3 he mentions that his results can be interpreted in the framework of \cite{Ellis/Newman:1978}. As far as we understand, his result (3.9) can be interpreted as the statement \eqref{Kol2}, but with the rate $n^{-1/4}$. \end{remark} \begin{remark} In the first assertion of Theorem \ref{CWbetadep}, our method of proof allows us to compare the distribution of $S_n/n^{3/4}$ alternatively with the distribution with Lebesgue-density proportional to $$ \exp \bigl( - \frac{\beta_n^3 x^4}{12} + \frac{\gamma \, x^2}{2} \bigr). $$ To be able to compare the distribution of interest with a distribution depending on $n$ (on $\beta_n$) is one of the advantages of Stein's method. The proof of this statement follows immediately from the proof of Theorem \ref{CWbetadep}. If in Theorem \ref{CWbetadep} (2) $|\beta_n-1| \gg n^{-1}$, the speed of convergence reduces to $\mathcal{O}(\sqrt n |1-\beta_n|)$.
Likewise, if in Theorem \ref{CWbetadep} (3) $|\beta_n-1| \ll n^{-1/4}$, the speed of convergence is $\mathcal{O}(\frac 1 {n |1-\beta_n|})$. This reduced speed of convergence reflects the influence of two potential limiting measures. Next to the ``true'' limit there is also the limit measure from part (1) of Theorem \ref{CWbetadep}, which in these cases is relatively close to our measures of interest. \end{remark} \secdef \subsct\sbsect{Results for a general class of Curie-Weiss models} More generally, we obtain Berry-Esseen bounds for sums of dependent random variables occurring in the general Curie-Weiss models. We will be able to obtain Berry-Esseen-type results for $\varrho$-a.s. bounded single-spin variables $X_i$: \begin{theorem} \label{CWgeneral} Given $\varrho \in \mathcal{B}$ in GHS, let $\alpha$ be the global minimum of type $k$ and strength $\mu$ of $G_{\varrho}$. Assume that the single-spin random variables $X_i$ are bounded $\varrho$-a.s. In the case $k=1$ we obtain \begin{equation} \label{auchwas} \sup_{z \in \R} \biggl| P_n \biggl( \frac{S_n}{\sqrt{n}} \leq z \biggr) - \Phi_{W}(z) \biggr| \leq C n^{-1/2}, \end{equation} where $W:= S_n/\sqrt{n}$ and $\Phi_{W}$ denotes the distribution function of the normal distribution with mean zero and variance $\E(W^2)$, and $C$ is a constant depending only on $\beta$, where $0 < \beta < 1$. For $k \geq 2$ we obtain \begin{equation} \label{Kol4} \sup_{z \in \R} \bigg| P_n \biggl( \frac{S_n - n \alpha}{n^{1 - 1/2k}} \leq z \biggr) - \widehat{F}_{W,k}(z) \bigg| \leq C_k \, n^{-1/k} \end{equation} where $\widehat{F}_{W,k}(z) := \int_{- \infty}^z \widehat{f}_{W,k}(x) \, dx$ with $\widehat{f}_{W,k}$ defined by $$ \widehat{f}_{W,k}(x) := \frac{\exp \bigl( -\frac{x^{2k}}{2k\, \E(W^{2k})} \bigr)}{ \int \exp \bigl( -\frac{x^{2k}}{2k\, \E(W^{2k})} \bigr) \, dx} $$ with $W := \frac{S_n - n \alpha}{n^{1- 1/2k}}$ and $C_k$ is an absolute constant.
\end{theorem} \begin{theorem} \label{CWgendep} Let $\varrho \in \mathcal{B}$ satisfy the GHS-inequality and assume that $\beta_c=1$. Let $\alpha$ be the global minimum of type $k$ with $k \geq 2$ and strength $\mu_k$ of $G_{\varrho}$ and let the single-spin variables $X_i$ be bounded. Let $0 < \beta_n < \infty$ depend on $n$ in such a way that $\beta_n \to 1$ monotonically as $n \to \infty$. Then the following assertions hold true: \begin{enumerate} \item If $\beta_n -1 = \frac{\gamma}{n^{1 - \frac 1k}}$ for some $\gamma \not= 0$, we have \begin{equation} \label{Kol5} \sup_{z \in \R} \bigg| P_n \biggl( \frac{S_n- n \alpha}{n^{1 - 1/2k} }\leq z \biggr) - F_{W,k, \gamma}(z) \bigg| \leq C_k\, n^{-1/k} \end{equation} with $$ F_{W,k, \gamma}(z) := \frac{1}{Z} \int_{- \infty}^z \exp \biggl( - c_W^{-1} \biggl( \frac{\mu_k}{(2k)!} x^{2k} - \frac{\gamma}{2} x^2 \biggr) \biggr) \, dx, $$ where $Z:= \int_{\R} \exp \bigl( - c_W^{-1} \bigl( \frac{\mu_k}{(2k)!} x^{2k} - \frac{\gamma}{2} x^2 \bigr) \bigr)\, dx$, with $W := \frac{S_n - n \alpha}{n^{1-1/2k}} $, $$ c_W := \frac{\mu_k}{(2k)!} \E(W^{2k}) - \gamma \E(W^2) $$ and $C_k$ is an absolute constant. \item If $|\beta_n -1| \ll n^{-(1 - 1/k)}$, $\frac{S_n - n \alpha}{n^{1-1/2k}}$ converges in distribution to $\widehat{F}_{W,k}$, defined as in Theorem \ref{CWgeneral}. Moreover, if $|\beta_n -1| = \mathcal{O}(n^{-1})$, \eqref{Kol4} holds true. \item If $|\beta_n -1| \gg n^{-(1-1/k)}$, the Kolmogorov distance of the distribution of $W:= \sqrt{\frac{1-\beta_n}{n}} \sum_{i=1}^n X_i$ and the normal distribution $N(0, \E(W^2))$ converges to zero. Moreover, if $|\beta_n -1| \gg n^{-(1/2-1/2k)}$, we obtain \begin{equation*} \sup_{z \in \R} \bigg| P_n \biggl( \frac{\sqrt{(1-\beta_n)} S_n}{\sqrt{n}} \leq z \biggr) - \Phi_{W}(z) \bigg| \leq C \, n^{-1/2} \end{equation*} with an absolute constant $C$.
\end{enumerate} \end{theorem} \begin{remark} Since the symmetric Bernoulli law belongs to ${\rm GHS}$, Theorems \ref{CWgeneral} and \ref{CWgendep} include Berry-Esseen type results for this case. But these results differ from the results in Theorems \ref{CW}, \ref{CWcritical} and \ref{CWbetadep} with respect to the limiting laws: the laws in Theorems \ref{CWgeneral} and \ref{CWgendep} depend on moments of $W$. The bounds in Theorems \ref{CW}-\ref{CWbetadep} are easier to obtain; moreover, their proofs apply Corollary \ref{corsigma} and part (2) of Theorem \ref{generaldensity}, which are less involved versions of Stein's method for exchangeable pairs. \end{remark} For arbitrary $\varrho \in {\rm GHS}$ we are able to prove good bounds with respect to the Wasserstein metric. For any class of test functions $\mathcal{H}$, a distance on probability measures on $\R$ can be defined by $$ d_{\mathcal{H}} (P,Q) = \sup_{h \in {\mathcal H}} \bigg| \int h \, dP - \int h \, dQ \bigg|. $$ The class of test functions $h$ for the Wasserstein distance $d_{w}$ is just ${\rm Lip}(1)$, the Lipschitz functions with constant no greater than 1. The total variation distance is given by the set ${\mathcal H}$ of indicators of Borel sets, the Kolmogorov distance $d_{K}$ by the set of indicators of half lines. For purely technical reasons, we now consider a modified model. Let $$ \widehat{P}_{n, \beta, h}(x) = \frac{1}{\widehat{Z}_{n, \beta, h}} \exp \biggl( \frac{\beta}{n} \sum_{1 \leq i < j \leq n} x_i x_j + \beta \, h \sum_{i=1}^n x_i \biggr) \, d\varrho^{\otimes n}(x), \,\, x=(x_i). $$ \begin{theorem} \label{wasserstein} Given the Curie-Weiss model $\widehat{P}_{n, \beta}$ and $\varrho \in \mathcal{B}$ in GHS, let $\alpha$ be the global minimum of type $k$ and strength $\mu$ of $G_{\varrho}$.
In the case $k=1$, for any uniformly Lipschitz function $h$ we obtain for $W=S_n/\sqrt{n}$ that $$ \big| \E \bigl( h(W) \bigr) - \Phi_W(h) \big| \leq \|h'\| \, C \, \frac{\max \bigl( \E|X_1|^3, \E|X_1'|^3 \bigr)}{\sqrt{n}}. $$ Here $C$ is a constant depending on $0 < \beta < 1$ and $\Phi_W(h) := \int_{\R} h(z) \Phi_W(dz)$. The random variable $X_i'$ is drawn from the conditional distribution of the $i$'th coordinate $X_i$ given $(X_j)_{j \not= i}$ (this choice will be explained in Section 3). For $k \geq 2$ we obtain for any uniformly Lipschitz function $h$ and for $W := \frac{S_n - n \alpha}{n^{1- 1/2k}}$ $$ \big| \E \bigl( h(W) \bigr) - \widehat{F}_{W,k}(h) \big| \leq \|h'\| \biggl( C_1 \frac{1}{n^{1/k}} + \frac{C_2 \, \max \bigl( \E|X_1|^3, \E|X_1'|^3 \bigr)}{n^{1-1/2k}} \biggr). $$ Here $C_1, C_2$ are constants, and $\widehat{F}_{W,k}(h) := \int_{\R} h(z) \widehat{F}_{W,k}(dz)$. \end{theorem} \begin{remark} Assume that there exists a $\delta$ such that for any uniformly Lipschitz function $h$ we have $|\E h(W) - F(h) |\leq \delta \|h'\|$, where $W$ is a random variable and $F(h) := \int_{\R} h(z) F(dz)$ for some distribution function $F$. Then from the definition of the Wasserstein distance it follows immediately that $\sup_{h \in {\rm Lip}(1)} |\E h(W) -F(h)| \leq \delta$. Moreover, the Kolmogorov distance $\sup_{z} | P(W \leq z) - F(z)|$ can be bounded by $c_F \, \delta^{1/2}$, where $c_F$ is some constant depending on $F$ (the proof follows the lines of \cite[Theorem 3.1]{ChenShao:2005}). \end{remark} \begin{remark} \label{lebo} In \cite{Ellis/Monroe/Newman:1976}, the distributions $\varrho$ of the spins are allowed to depend on the site. They define a subclass ${\mathcal G}$ of $\mathcal{B}$ such that for $\varrho_1, \ldots, \varrho_n \in {\mathcal G}$ the GHS inequality holds. In Section 7 we present a large class of measures which belong to $\mathcal{G}$ (see \cite[Theorem 1.2]{Ellis/Monroe/Newman:1976}).
The GHS inequality itself has a number of interesting implications, like the concavity of the average magnetization as a function of the external field $h$ or the monotonicity of the correlation length in Ising models. These and other implications can be found in \cite{Ellis/Monroe/Newman:1976} and the references therein. Note that for $\varrho \in {\rm GHS}$, $\phi_{\varrho}(s) \leq \frac 12 \sigma_{\varrho}^2 s^2$ for all real $s$, where $\sigma_{\varrho}^2 = \int_{\R} x^2 \, \varrho(dx)$. These measures are called {\it sub-Gaussian}. Very important for our proofs of Berry-Esseen bounds will be the following correlation inequality due to Lebowitz \cite{Lebowitz:1974}: If $\E$ denotes the expectation with respect to the measure $P_{n, \beta, h_1, \ldots, h_n}$, one observes easily that for any $\varrho \in {\mathcal B}$ and sites $i,j,k,l \in \{1, \ldots, n\}$ the following identity holds: \begin{eqnarray} \label{ursell4} & & \frac{\partial^3}{\partial h_i \, \partial h_j \, \partial h_k} \E (X_l) \biggl|_{{\rm all} \,\, h_i =0} \\ & = & \E(X_i X_j X_k X_l) - \E(X_i X_j) \E(X_k X_l) - \E(X_i X_k) \E(X_j X_l) - \E(X_i X_l) \E(X_j X_k). \nonumber \end{eqnarray} Lebowitz \cite{Lebowitz:1974} proved that if $\varrho \in {\rm GHS}$, then \eqref{ursell4} is non-positive (see \cite[V.13.7.(b)]{Ellis:LargeDeviations} and \cite{Kondo/Otofuji/Sugiyama:1985}). Stein's method reduces to the computation of, or bounds on, {\it low order} moments, perhaps even only variances of certain quantities. Such variance computations can be very difficult. In the proofs of Theorem \ref{CWgeneral} and Theorem \ref{CWgendep} we will use Lebowitz' inequality to bound these variances successfully.
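Lebowitz' inequality can be checked numerically in the simplest case. The following sketch (our own illustration, not part of any proof; all function names are ours) computes the fourth-order Ursell function of \eqref{ursell4} at zero external field for the symmetric Bernoulli Curie-Weiss model by exact enumeration of all spin configurations and verifies that it is non-positive:

```python
# Numerical sanity check (not part of any proof): for the symmetric Bernoulli
# Curie-Weiss model at zero external field, the fourth-order Ursell function
# is non-positive, as Lebowitz's inequality predicts.
# Moments are computed by exact enumeration of the 2^n configurations.
import math
from itertools import product

def curie_weiss_expectation(n, beta):
    """Return a function E(sites) = E[prod_{i in sites} X_i] under P_{n,beta}."""
    configs = list(product((-1, 1), repeat=n))
    # exp((beta/n) * sum_{i<j} x_i x_j) = exp(beta * (S_n^2 - n) / (2n))
    weights = [math.exp(beta * (sum(x) ** 2 - n) / (2 * n)) for x in configs]
    Z = sum(weights)

    def E(sites):
        total = 0.0
        for x, w in zip(configs, weights):
            prod = 1
            for i in sites:
                prod *= x[i]
            total += w * prod
        return total / Z

    return E

def ursell4(E, i, j, k, l):
    """Fourth-order Ursell function u_4(i, j, k, l)."""
    return (E((i, j, k, l)) - E((i, j)) * E((k, l))
            - E((i, k)) * E((j, l)) - E((i, l)) * E((j, k)))

n, beta = 5, 1.0
E = curie_weiss_expectation(n, beta)
u4 = [ursell4(E, i, j, k, l)
      for i in range(n) for j in range(n) for k in range(n) for l in range(n)]
assert max(u4) <= 1e-12  # non-positive up to floating-point rounding
```

For coinciding sites the check is elementary (e.g. $u_4(i,i,i,i) = 1 - 3 = -2$ for $\pm 1$ spins); for distinct sites it reduces to $\E(X_iX_jX_kX_l) - 3\,\E(X_iX_j)^2 \leq 0$, which is exactly the GHS-type correlation bound used above.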
\end{remark} In the situation of Theorem \ref{CWgeneral} and Theorem \ref{CWgendep} we can bound higher order moments as follows: \begin{lemma} \label{momentsgeneral} Given $\varrho \in {\mathcal B}$, let $\alpha$ be one of the global minima of maximal type $k$ for $k \geq 1$ and strength $\mu$ of $G_{\varrho}$. For $$ W := \frac{S_n - n \alpha}{n^{1 - 1/2k}} $$ we obtain for any $l \in \N$ \begin{equation*} \E |W|^{l} \leq {\operatorname {const.}\,}(l). \end{equation*} \end{lemma} To prepare for the proof of Lemma \ref{momentsgeneral}, we consider a well-known transformation -- sometimes called the {\it Hubbard--Stratonovich transformation} -- of our measure of interest. \begin{lemma}\label{hubbard} Let $m\in \R$ and $0<\gamma<1$ be real numbers. Consider the measure $Q_{n, \beta}:= \bigl( P_{n}\circ \left(\frac{S_n-nm}{n^\gamma}\right)^{-1} \bigr) \ast \mathcal{N}(0,\frac 1 {\beta n^{2\gamma-1}})$, where $\mathcal{N}(0,\frac 1 {\beta n^{2\gamma-1}})$ denotes the Gaussian distribution with mean zero and variance $\frac 1 {\beta n^{2\gamma-1}}$. Then for all $n \ge 1$ the measure $Q_{n, \beta}$ is absolutely continuous with density \begin{equation}\label{density} \frac{\exp\left(-n G(\frac s {n^{1-\gamma}}+m)\right)}{\int_{\R} \exp\left(-n G(\frac s {n^{1-\gamma}}+m)\right)ds}, \end{equation} where $G$ is defined in equation \eqref{Gdef}. \end{lemma} \begin{remark} As shown in \cite{Ellis/Newman:1978}, Lemma 3.1, our condition \eqref{condrho} ensures that $$ \int_{\R} \exp\left(-n G \left( \frac s {n^{1-\gamma}}+m \right) \right)ds $$ is finite, so that the above density is well defined. \end{remark} \begin{proof}[Proof of Lemma \ref{hubbard}] The proof of this lemma can be found in many places, e.g. in \cite{Ellis/Newman:1978}, Lemma 3.3. \end{proof} \begin{proof}[Proof of Lemma \ref{momentsgeneral}] We apply the Hubbard-Stratonovich transformation with $\gamma=1-1/2k$. It is clear that this does not change the finiteness of any of the moments of $W$.
Using the Taylor expansion \eqref{Taylor} of $G$, we see that the density of $Q_{n, \beta}$ with respect to Lebesgue measure is proportional to $\exp(-x^{2k})$ (up to negligible terms, see e.g. \cite{Ellis/Newman:1978}, \cite{Eichelsbacher/Loewe:2004}). A measure with this density, of course, has moments of any finite order. \end{proof} \begin{remark} As we will see, we only have to bound $\E(W^4)$ in the classical model, when $0< \beta <1$. This can be obtained directly using the definition of $P_n$ and Taylor expansion. But already for the classical model, for $\beta=1$, it is quite cumbersome to bound higher order moments via direct calculations. \end{remark} In Section 2, we develop in Theorem \ref{ourmain}, Corollary \ref{corsigma} and Corollary \ref{corsigma2} refinements of Stein's method for exchangeable pairs in the case of normal approximation. As a first application we prove Theorem \ref{CW} in Section 3. In Section 4 we develop Stein's method for exchangeable pairs for a rich class of other distributional approximations. Obtaining good bounds for the solutions of the corresponding Stein equations in the appendix, we prove Theorem \ref{CWcritical} and Theorem \ref{CWbetadep} in Section 5, applying Theorem \ref{generaldensity}. In Section 6, we prove Theorems \ref{CWgeneral}, \ref{CWgendep} and \ref{wasserstein}, applying Corollary \ref{corsigma2} and Theorem \ref{generaldensity2}. Section 7 contains a collection of examples, including the Curie-Weiss model with three states, which is used in the study of liquid helium, and a continuous Curie-Weiss model, where the single-spin distribution $\varrho$ is a uniform distribution. \secdef\sct\sect{Stein's method with exchangeable pairs for normal approximation}\label{hajek} Stein introduced in \cite{Stein:1986} the exchangeable pair approach. Given a random variable $W$, Stein's method is based on the construction of another variable $W'$ (some coupling) such that the pair $(W, W')$ is exchangeable, i.e.
their joint distribution is symmetric. The approach essentially uses the elementary fact that if $(W,W')$ is an exchangeable pair, then $\E g(W,W')=0$ for all antisymmetric measurable functions $g(x,y)$ such that the expectation exists. A theorem of Stein (\cite[Theorem 1, Lecture III]{Stein:1986}) shows that a measure of proximity of $W$ to normality may be provided in terms of the exchangeable pair, requiring $W'-W$ to be sufficiently small. He assumed the linear regression property $$ \E(W'|W)=(1-\lambda) \, W $$ for some $0 < \lambda <1$. This approach has been successfully applied in many models, see \cite{Stein:1986} and for example \cite{DiaconisStein:2004} and references therein. In \cite{Rinott/Rotar:1997}, the range of application was extended by replacing the linear regression property by a weaker condition, allowing to hold the regression property only approximately. The exchangeable pair approach is also successful for other distributional approximations, as will be shown in Section 4. We develop Stein's method by replacing the linear regression property by $$ \E(W'|W) = W + \lambda \, \psi(W) + R(W), $$ where $\psi(x)$ will be depend on a continuous distribution under consideration. Before we consider in this section the case of normal approximation, we mention that this is not the first paper to study other distributional approximations via Stein's method. For a rather large class of continuous distributions, the Stein characterization was introduced in \cite{DiaconisStein:2004}, following \cite[Chapter 6]{Stein:1986}. In \cite{DiaconisStein:2004}, the method of exchangeable pairs was introduced for this class of distribution and used in a simulation context. Recently, the exchangeable pair approach was introduced for exponential approximation in \cite[Lemma 2.1]{Chatterjee/Fulman/Roellin:2009}. 
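Both ingredients just described -- the identity $\E g(W,W')=0$ for antisymmetric $g$ and the linear regression property -- can be verified exactly in a toy i.i.d. setting. The following sketch is our own illustration; its construction mirrors the Gibbs-sampling pair used later for the Curie-Weiss model:

```python
# Exact check in a toy setting: for X_1,...,X_n i.i.d. symmetric Bernoulli,
# W = S_n/sqrt(n), and W' obtained by resampling one uniformly chosen
# coordinate independently, (W, W') is exchangeable, E g(W, W') = 0 for
# antisymmetric g, and E(W'|W) = (1 - 1/n) W.
import math
from itertools import product

n = 4
triples = []  # (probability, w, w')
for x in product((-1, 1), repeat=n):        # original configuration
    for i in range(n):                      # coordinate chosen for resampling
        for xi_new in (-1, 1):              # independent replacement value
            p = (0.5 ** n) * (1.0 / n) * 0.5
            w = sum(x) / math.sqrt(n)
            w_prime = (sum(x) - x[i] + xi_new) / math.sqrt(n)
            triples.append((p, w, w_prime))

def g(w, wp):
    # an antisymmetric test function: g(w, wp) = -g(wp, w)
    return (w - wp) * (w ** 2 + wp ** 2)

# E g(W, W') = 0 by exchangeability
assert abs(sum(p * g(w, wp) for p, w, wp in triples)) < 1e-12

# linear regression property E(W'|W) = (1 - lambda) W with lambda = 1/n,
# checked through E[(W' - (1 - lambda) W) W] = 0
lam = 1.0 / n
assert abs(sum(p * (wp - (1 - lam) * w) * w for p, w, wp in triples)) < 1e-12
```

In the dependent Curie-Weiss setting the same resampling construction no longer gives the exact linear regression property, which is precisely why the remainder term $R(W)$ is introduced above.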
For measuring the distance of the distribution of $W$ and the standard normal distribution (or any other distribution), we would like to bound $$ | \E h(W) - \Phi(h) | $$ for a class of test functions $h \in {\mathcal {H}}$, where $\Phi(h) := \int_{-\infty}^{\infty} h(z) \Phi(dz)$ and $\Phi$ is the standard normal distribution function. One advantage of Stein's method is that we are able to obtain bounds for different distances like the Wasserstein distance $d_{\rm{w}}$, the total variation distance $d_{\rm{TV}}$ or the Kolmogorov distance $d_{\rm{K}}$. In \cite{Rinott/Rotar:1997}, the exchangeable pair approach of Stein was developed for a broad class of non-smooth functions $h$, applying standard smoothing inequalities. They proved the following: \begin{theorem}[Rinott, Rotar: 1997]\label{RR} Consider a random variable $W$ with $\E(W)=0$ and $\E(W^2)=1$. Let $(W, W')$ be an exchangeable pair (i.e., their joint distribution is symmetric). Define a random variable $R=R(W)$ by $$ E(W' |W) = (1 - \lambda) W +R, $$ where $\lambda$ is a number satisfying $0 < \lambda <1$, and assume moreover that $$ |W' - W| \leq A $$ for a constant $A$. Then one obtains \begin{equation} \label{rr} \sup_{z \in \R} |P(W \leq z) - \Phi(z)| \leq \frac{12}{\lambda} \sqrt{ {\rm var} \{ \E[(W'-W)^2 |W] \}} + 37 \frac{\sqrt{\E(R^2)}}{\lambda} + 48 \sqrt{2/ \pi} \frac{A^3}{\lambda} + \sqrt{2/ \pi} \frac{A^2}{\sqrt{\lambda}}. \end{equation} \end{theorem} \begin{remark} Rinott and Rotar also proved a bound in the case where $|W'-W|$ is not assumed to be bounded. In this case, the last two summands on the right-hand side of \eqref{rr} have to be replaced by $$ \sqrt{ \frac{a}{\lambda} \E | W'-W|^3}. $$ This estimate is crude, since even for a normalized sum of $n$ independent variables $W$, it leads to a bound of the order $n^{-1/4}$.
The advantage of the results in \cite{Rinott/Rotar:1997} is that these bounds apply not only to indicators of half lines, but also to a broad class of non-smooth test functions, see \cite[Section 1.2]{Rinott/Rotar:1997}. \end{remark} Chen and Shao introduced a concentration inequality approach. Here a concentration inequality is proved using the Stein identity (see \cite{ChenShao:2001} and \cite{ChenShao:2005}). In the context of the construction of an exchangeable pair, in \cite{ShaoSu:2006} Shao and Su proved the following theorem: \begin{theorem}[Shao, Su: 2005]\label{SS} Let $W$ be a random variable with $\E(W)=0$ and $\E(W^2) \leq 1$ and let $(W, W')$ be an exchangeable pair such that $$ E(W' |W) = (1 - \lambda) W $$ with $0 < \lambda <1$. Then for any $a >0$, \begin{equation} \label{ss1} \sup_{z \in \R} |P(W \leq z) - \Phi(z) | \leq \sqrt{ \E \biggl( 1 - \frac{1}{2 \lambda} E((W-W')^2|W) \biggr)^2} + \frac{0.41 a^3}{\lambda} + 1.5 a + \frac{1}{2 \lambda} \E \bigl( (W-W')^2 1_{\{|W-W'| \geq a\}} \bigr). \end{equation} If $ |W - W'| \leq A$, then the bound reduces to \begin{equation} \label{ss2} \sup_{z \in \R} |P(W \leq z) - \Phi(z) | \leq \sqrt{ \E \biggl( 1 - \frac{1}{2 \lambda} E((W-W')^2|W) \biggr)^2} + \frac{0.41 A^3}{\lambda} + 1.5 A. \end{equation} \end{theorem} \begin{remark} When $|W-W'|$ is bounded, \eqref{ss2} improves \eqref{rr} with respect to the constants. \end{remark} Following the lines of the proofs in \cite{Rinott/Rotar:1997} and \cite{ShaoSu:2006}, we obtain the following refinement: Given two random variables $X$ and $Y$ defined on a common probability space, we denote by $$ d_{\rm{K}}(X,Y) := \sup_{z \in \R} | P(X \leq z) - P(Y \leq z)| $$ the Kolmogorov distance of the distributions of $X$ and $Y$. \begin{theorem} \label{ourmain} Let $(W, W')$ be an exchangeable pair of real-valued random variables such that $$ E(W' |W) = (1 - \lambda) W +R $$ for some random variable $R = R(W)$ and with $0 < \lambda <1$. Assume that $\E(W^2) \leq 1$.
Let $Z$ be a random variable with standard normal distribution. Then for any $A >0$, \begin{eqnarray*} d_{\rm{K}}(W, Z) & \leq & \sqrt{ \E \biggl( 1 - \frac{1}{2 \lambda} \E[(W'-W)^2 |W] \biggr)^2} + \biggl( \frac{\sqrt{2 \pi}}{4} + 1.5 A \biggr) \frac{\sqrt{\E(R^2)}}{\lambda} \\ & &+ \frac{0.41 A^3}{\lambda} + 1.5 A + \frac{1}{2 \lambda} \E \bigl( (W-W')^2 1_{\{|W-W'| \geq A\}} \bigr). \end{eqnarray*} If $|W - W'| \leq A$ for a constant $A$, we obtain the bound \begin{equation} \label{mainbound} d_{\rm{K}}(W, Z) \leq \sqrt{ \E \biggl( 1 - \frac{1}{2 \lambda} \E[(W'-W)^2 |W] \biggr)^2} + \biggl( \frac{\sqrt{2 \pi}}{4} + 1.5 A \biggr) \frac{\sqrt{\E(R^2)}}{\lambda} + \frac{0.41 A^3}{\lambda} + 1.5 A. \end{equation} \end{theorem} \begin{remark} When $|W-W'|$ is bounded, \eqref{mainbound} improves \eqref{rr} with respect to the Berry-Esseen constants. \end{remark} \begin{proof} We sketch the proof: For a function $f$ with $|f(x)| \leq C(1 +|x|)$ we obtain \begin{eqnarray} \label{start} 0 & = & \E \bigl( (W-W')(f(W')+f(W)) \bigr) \nonumber \\ & = & \E \bigl( (W-W')(f(W')-f(W)) \bigr) + 2 \lambda \E(W f(W)) - 2 \E(f(W) \, R). \end{eqnarray} Let $f=f_z$ denote the solution of the Stein equation \begin{equation} \label{steinnormal} f_z'(x) - xf_z(x) = 1_{\{x \leq z\}}(x) - \Phi(z). 
\end{equation} We obtain \begin{eqnarray} \label{tis} P(W \leq z) - \Phi(z) & = & \E (f'(W) - W f(W)) \nonumber \\ & = & \E (f'(W)) - \frac{1}{2 \lambda} \E \bigl( (W-W')(f(W)-f(W')) \bigr) - \frac{1}{\lambda} \E (f(W) \, R) \nonumber \\ & = & \E \biggl( f'(W) \bigl( 1 - \frac{1}{2 \lambda} (W-W')^2 \bigr) \biggr) + \E \biggl( f'(W) \frac{1}{2 \lambda} (W-W')^2 \biggr) \nonumber \\ & & - \frac{1}{2 \lambda} \E \bigl( (W-W')(f(W)-f(W')) \bigr) - \frac{1}{\lambda} \E (f(W) \, R) \nonumber \\ & =& \E \biggl( f'(W) \bigl( 1 - \frac{1}{2 \lambda} (W-W')^2 \bigr) \biggr) - \frac{1}{2 \lambda} \E (2 f(W) \, R) \nonumber \\ & & - \frac{1}{2 \lambda} \E \biggl[ (W-W') \bigl( f(W) - f(W') - (W-W')f'(W) \bigr) \biggr] \nonumber \\ & =: & T_1 + T_2 + T_3. \end{eqnarray} Using $|f'(x)| \leq 1$ for all real $x$ (see \cite[Lemma 2.2]{ChenShao:2005}), we obtain the bound $$ |T_1| \leq \sqrt{ \E \biggl( 1 - \frac{1}{2 \lambda} \E[(W'-W)^2 |W] \biggr)^2}. $$ Using $0 < f(x) \leq \sqrt{2 \pi}/4$ (see \cite[Lemma 2.2]{ChenShao:2005}), we have $$ |T_2| \leq \frac{\sqrt{2 \pi}}{4 \lambda} \E(|R|) \leq \frac{\sqrt{2 \pi}}{4 \lambda} \sqrt{ \E(R^2)}. $$ Bounding $T_3$ we apply the concentration technique, see \cite{ShaoSu:2006}: \begin{eqnarray} \label{t3} (- 2 \lambda) \, T_3 & = & \E \biggl( (W-W') 1_{\{|W-W'| > A\}} \int_{-(W-W')}^0 (f'(W+t)-f'(W)) dt \biggr) \nonumber \\ &+ & \E \biggl( (W-W') 1_{\{|W-W'| \leq A\}} \int_{-(W-W')}^0 (f'(W+t)-f'(W)) dt \biggr). \end{eqnarray} The modulus of the first term can be bounded by $\E \bigl( (W-W')^2 1_{\{|W-W'| >A\}} \bigr)$ using $|f'(x) - f'(y)| \leq 1$ for all real $x$ and $y$ (see \cite[Lemma 2.2]{ChenShao:2005}). 
Using the Stein identity \eqref{steinnormal}, the second summand can be represented as \begin{eqnarray*} & & \E \biggl( (W-W') 1_{\{|W-W'| \leq A\}} \int_{-(W-W')}^0 \bigl( (W+t)f(W+t) - Wf(W) \bigr) dt \biggr) \\ & & + \E \biggl( (W-W') 1_{\{|W-W'| \leq A\}} \int_{-(W-W')}^0 (1_{\{W+t \leq z\}} - 1_{\{W \leq z\}}) dt \biggr) =: U_1 + U_2. \end{eqnarray*} Next observe that $|U_1| \leq 0.82 A^3$, see \cite{ShaoSu:2006}: by the mean value theorem one gets $$ (W+t) f(W+t) - W f(W) = W ( f(W+t) - f(W)) + t f(W+t) = W \bigl( \int_0^1 f'(W + ut) t du \bigr) + t f(W+t). $$ Hence $$ |(W+t) f(W+t) - W f(W) | \leq |W| \, |t| + |t| \sqrt{2 \pi}/4 = |t| (\sqrt{2 \pi}/4 + |W|). $$ Using $\E |W| \leq \sqrt{ \E(W^2)} \leq 1$ gives the bound. The term $U_2$ can be bounded by $$ \E \bigl( (W-W')^2 1_{\{0 \leq (W-W') \leq A\}} \, 1_{\{z \leq W \leq z+A\}} \bigr). $$ Under the assumptions of our theorem we proceed as in \cite{ShaoSu:2006} and obtain the following concentration inequality: \begin{equation} \label{conz1} \E \bigl( (W-W')^2 1_{\{0 \leq (W-W') \leq A\}} \, 1_{\{z \leq W \leq z+A\}} \bigr) \leq 3A (\lambda + \E(|R|)). \end{equation} To see this, we apply the estimate $$ \E \bigl( (W-W')^2 1_{\{0 \leq (W-W') \leq A\}} \, 1_{\{z \leq W \leq z+A\}} \bigr) \leq \E \bigl( (W-W')(f(W)-f(W')) \bigr), $$ see \cite{ShaoSu:2006}; here $f$ is defined by $f(x):= -1.5 A$ for $x \leq z-A$, $f(x):=1.5 A$ for $x \geq z+2A$ and $f(x):=x-z-A/2$ in between. Now we apply \eqref{start} and get $$ \E \bigl( (W-W')^2 1_{ \{0 \leq (W-W') \leq A\}}\, 1_{\{z \leq W \leq z+A\}} \bigr) \leq 2 \lambda \E(Wf(W)) + 2 \E(f(W)R) \leq 3A(\lambda + \E(|R|)), $$ where we used $\E(|W|) \leq \sqrt{ \E(W^2)} \leq 1$. Similarly, we obtain $$ U_2 \geq - 3A(\lambda + \E(|R|)). $$ \end{proof} \begin{remark} In Theorem \ref{ourmain}, we assumed $\E(W^2) \leq 1$. Alternatively, let us assume that $\E(W^2)$ is finite.
Then the proof of Theorem \ref{ourmain} shows that the third and fourth summands of the bound \eqref{mainbound} change to $$ \frac{A^3}{\lambda} \bigl( \frac{\sqrt{2 \pi}}{16} + \frac{\sqrt{ \E(W^2)}}{4} \bigr) + 1.5 A \, \E(|W|). $$ \end{remark} In the following corollary, we discuss the Kolmogorov distance of the distribution of a random variable $W$ to a random variable distributed according to $N(0, \sigma^2)$, the normal distribution with mean zero and variance $\sigma^2$. \begin{cor} \label{corsigma} Let $\sigma^2 >0$ and $(W, W')$ be an exchangeable pair of real-valued random variables such that \begin{equation} \label{3.2} E(W' |W) = \bigl(1 - \frac{\lambda}{\sigma^2} \bigr) W +R \end{equation} for some random variable $R = R(W)$ and with $0 < \lambda <1$. Assume that $\E(W^2)$ is finite. Let $Z_{\sigma}$ be a random variable distributed according to $N(0, \sigma^2)$. If $|W - W'| \leq A$ for a constant $A$, we obtain the bound \begin{eqnarray} \label{mainbound2} d_{\rm{K}}(W, Z_{\sigma}) & \leq & \sqrt{ \E \biggl( 1 - \frac{1}{2 \lambda} \E[(W'-W)^2 |W] \biggr)^2} + \biggl( \frac{\sigma \sqrt{2 \pi}}{4} + 1.5 A \biggr) \frac{\sqrt{\E(R^2)}}{\lambda} \nonumber \\ & + & \frac{A^3}{\lambda} \biggl(\frac{\sqrt{2 \pi \sigma^2}}{16} + \frac{\sqrt{\E(W^2)}}{4} \biggr) + 1.5 A \sqrt{\E(W^2)}. \end{eqnarray} \end{cor} \begin{proof} Let us denote by $f_{\sigma} := f_{\sigma,z}$ the solution of the Stein equation \begin{equation} \label{stein7} f_{\sigma,z}'(x) - \frac{x}{\sigma^2} f_{\sigma,z}(x) = 1_{\{x \leq z\}}(x) - F_{\sigma}(z) \end{equation} with $F_{\sigma}(z) := \frac{1}{\sqrt{2 \pi} \sigma} \int_{- \infty}^z \exp \bigl( - \frac{y^2}{2 \sigma^2} \bigr) \, dy$. It is easy to see that the identity $f_{\sigma, z}(x) = \sigma f_{z/\sigma} \bigl( \frac{x}{\sigma} \bigr)$, where $f_{z/\sigma}$ is the solution of the corresponding Stein equation of the standard normal distribution at the point $z/\sigma$, holds true.
Using \cite[Lemma 2.2]{ChenShao:2005} we obtain $0 < f_{\sigma} (x) < \sigma \frac{\sqrt{2 \pi}}{4}$, $|f_{\sigma}'(x) | \leq 1$, and $|f_{\sigma}'(x) - f_{\sigma}'(y)| \leq 1$. With \eqref{3.2} we arrive at $$ P(W \leq z ) - F_{\sigma}(z) = T_1 + T_2 + T_3 $$ with $T_i$'s defined in \eqref{tis}. Using the bounds of $f_{\sigma}$ and $f_{\sigma}'$, the bound of $T_1$ is the same as in the proof of Theorem \ref{ourmain}, whereas the bound of $T_2$ changes to $$ |T_2| \leq \sigma \frac{\sqrt{2 \pi}}{4 \lambda} \sqrt{\E(R^2)}. $$ Since we consider the case $|W-W'| \leq A$, we have to bound $$ T_3 = -\frac{1}{2 \lambda} \E \biggl( (W-W') 1_{\{|W-W'| \leq A\}} \int_{-(W-W')}^0 (f'(W+t)-f'(W)) dt \biggr). $$ Using the Stein identity \eqref{stein7}, the mean value theorem as well as the concentration inequality-argument along the lines of the proof of Theorem \ref{ourmain}, we obtain $$ |T_3| \leq \frac{A^3}{\lambda} \bigl( \frac{\sqrt{\E(W^2)}}{4} + \frac{\sigma \sqrt{2 \pi}}{16} \bigr) + 1.5 A \bigl( \sqrt{\E(W^2)} + \frac{\sqrt{\E(R^2)}}{\lambda} \bigr). $$ Hence the corollary is proved. \end{proof} With \eqref{3.2} we obtain $\E(W-W')^2 = \frac{2 \lambda}{\sigma^2} \E(W^2) - 2 \E(W\, R)$. Therefore \begin{equation} \label{hihi} \E \biggl(1- \frac{1}{2 \lambda} \E[(W'-W)^2 |W] \biggr) = 1 - \frac{\E(W^2)}{\sigma^2} + \frac{\E(W \, R)}{\lambda}, \end{equation} so that the bound in Corollary \ref{corsigma} is only useful when $\E(W^2)$ is close to $\sigma^2$ (and $\E(W\,R)/ \lambda$ is small). An alternative bound can be obtained comparing with a $N(0, \E(W^2))$-distribution. \begin{cor} \label{corsigma2} In the situation of Corollary \ref{corsigma}, let $Z_{W}$ denote the $N(0, \E(W^2))$ distribution. 
We obtain \begin{eqnarray} \label{mainbound3} d_{\rm{K}}(W, Z_{W}) & \leq & \frac{\sigma^2}{2 \lambda} \bigl( {\rm Var} \bigl( \E[(W'-W)^2 |W] \bigr) \bigr)^{1/2} + \sigma^2 \biggl( \frac{\sqrt{\E(W^2)}\, \sqrt{2 \pi}}{4} + 1.5 A \biggr) \frac{\sqrt{\E(R^2)}}{\lambda} \nonumber \\ & & \hspace{-2cm} + \sigma^2 \, \frac{A^3}{\lambda} \biggl(\frac{\sqrt{\E(W^2)} \, \sqrt{2 \pi}}{16} + \frac{\sqrt{\E(W^2)}}{4} \biggr) + \sigma^2 \, 1.5 A \, \sqrt{\E(W^2)} + \sigma^2 \frac{\sqrt{\E(W^2)} \, \sqrt{\E(R^2)}}{\lambda}. \end{eqnarray} \end{cor} \begin{proof} With \eqref{hihi} we get $\E(W^2)= \sigma^2 \bigl( \frac{1}{2 \lambda} ( \E(W-W')^2 + 2 \E(W \, R)) \bigr)$. With the definition of $T_2$ and $T_3$ as in \eqref{tis} we obtain \begin{eqnarray} \label{tistis} \E \bigl( \E(W^2) f'(W) - W f(W) \bigr) & = & \sigma^2 \E \biggl( \frac{\E(W-W')^2 + 2 \E(W \, R)}{2 \lambda} f'(W) \biggr) - \E( W \, f(W)) \nonumber \\ & & \hspace{-5cm} = \sigma^2 \E \biggl( f'(W) \biggl( \frac{\E(W-W')^2 - \E[(W-W')^2|W]}{2 \lambda} \biggr) \biggr) + \sigma^2 (T_2 + T_3) + \sigma^2 \frac{\E(W \, R)}{\lambda}. \end{eqnarray} Note that now $\sigma^2$ in \eqref{3.2} is a parameter of the exchangeable-pair identity and no longer the parameter of the limiting distribution. We apply \eqref{stein7} and replace every $\sigma^2$ in \eqref{stein7} by $\E(W^2)$. Applying the Cauchy-Schwarz inequality to the first summand and bounding the other terms as in the proof of Corollary \ref{corsigma} leads to the result. \end{proof} \secdef\sct\sect{Berry-Esseen bounds for the classical Curie-Weiss model} Let $\varrho$ be the symmetric Bernoulli measure and $0 < \beta <1$. Then $$ W := W_n := \frac{1}{\sqrt{n}} \sum_{i=1}^n X_i $$ converges in distribution to $N(0, \sigma^2)$ with $\sigma^2 = (1 - \beta)^{-1}$: \begin{proof}[Proof of Theorem~\ref{CW}] We consider the usual construction of an exchangeable pair.
We produce a spin collection $X'= (X_i')_{i \geq 1}$ via a {\it Gibbs sampling} procedure: select a coordinate, say $i$, at random and replace $X_i$ by $X_i'$ drawn from the conditional distribution of the $i$'th coordinate given $(X_j)_{j \not= i}$. Let $I$ be a random variable taking values $1, 2, \ldots, n$ with equal probability, and independent of all other random variables. Consider $$ W' := W - \frac{X_I}{\sqrt{n}} + \frac{X_I'}{\sqrt{n}} = \frac{1}{\sqrt{n}} \sum_{j \not= I} X_j + \frac{X_I'}{\sqrt{n}}. $$ Hence $(W,W')$ is an exchangeable pair and $$ W-W' = \frac{X_I - X_I'}{\sqrt{n}}. $$ Let $\mathcal{F} := \sigma(X_1, \ldots, X_n)$. Now we obtain $$ \E[W-W'| \mathcal{F}] = \frac{1}{\sqrt{n}} \frac 1n \sum_{i=1}^n \E[ X_i - X_i' | \mathcal{F}] = \frac{1}{n} \, W - \frac{1}{\sqrt{n}} \frac 1n \sum_{i=1}^n \E[ X_i'|\mathcal{F}]. $$ The conditional distribution at site $i$ is given by $$ P_n \bigl( x_i | (x_j)_{j \not= i} \bigr) = \frac{ \exp \bigl( x_i \, \beta \, m_i(x) \bigr)}{\exp \bigl( \beta m_i(x) \bigr) + \exp \bigl( - \beta m_i(x) \bigr)}, $$ with $$ m_i(x) := \frac 1n \sum_{j \not= i} x_j, \,\, i=1, \ldots, n. $$ It follows that $$ \E[X_i' | \mathcal{F}] = \E[ X_i | (X_j)_{j \not= i}] = \tanh (\beta m_i(X)). $$ Now $\frac{1}{\sqrt{n}} \frac 1n \sum_{i=1}^n \tanh (\beta m_i(X)) = \frac{1}{\sqrt{n}} \frac 1n \sum_{i=1}^n \bigl( \tanh (\beta m_i(X)) - \tanh (\beta m(X)) \bigr) + \frac{1}{\sqrt{n}} \tanh (\beta m(X)) =: R_1 + R_2$ with $m(X) := \frac 1n \sum_{i=1}^n X_i$. Taylor-expansion $\tanh(x) = x + \mathcal{O}(x^3)$ leads to \begin{equation*} R_2 =\frac{1}{\sqrt{n}} \beta m(X) + \frac{1}{\sqrt{n}} \mathcal{O} \bigl( m(X)^3 \bigr) = \frac{\beta}{n} W + \mathcal{O} \bigl( \frac{W^3}{n^2} \bigr). 
\end{equation*} Hence \begin{equation} \label{exid1} \E[W-W'|W] = \frac{1-\beta}{n} \, W + R = \frac{\lambda}{\sigma^2} \, W + R \end{equation} with $\lambda := \frac 1n$, $\sigma^2 := (1-\beta)^{-1}$ and $R := \mathcal{O} \bigl( \frac{W^3}{n^2} \bigr) - R_1$. Since $|W-W'| = \bigl| \frac{X_I - X_I'}{\sqrt{n}} \bigr| \leq \frac{2}{\sqrt{n}} =:A$, we are able to apply Corollary \ref{corsigma}. From Lemma \ref{momentsgeneral} we know that for $\varrho$ being the symmetric Bernoulli distribution and for $0 < \beta < 1$ we have $\E (W^4) \leq \rm{const.}$. Applying this, it follows that the fourth term in \eqref{mainbound2} can be bounded by $1.5 A \frac{\sqrt{ \E(W^2)}}{\sigma^2} \leq \frac{(1-\beta) \rm{const.}}{\sqrt{n}}$, and the third summand in \eqref{mainbound2} can be estimated as follows: $$ \frac{A^3}{\lambda} \biggl( \frac{\sqrt{2 \pi}}{16} \sqrt{(1-\beta)} + \frac{\rm{const.}}{4} (1-\beta) \biggr) \leq \frac{1}{\sqrt{n}} \sqrt{(1-\beta)} \rm{const.}. $$ Moreover we obtain $\E|R| \leq \E |R_1| + \mathcal{O} \bigl( \frac{\E|W^3|}{n^2} \bigr)$. Since $\tanh(x)$ is 1-Lipschitz we obtain $|R_1| \leq \frac{1}{\sqrt{n}} |m_i(X) - m(X)| \leq \frac{1}{n^{3/2}}$. Therefore, with Lemma \ref{momentsgeneral}, we get $\E |R| = \mathcal{O} \bigl( \frac{1}{n^{3/2}} \bigr)$ and thus the second summand in \eqref{mainbound2} can be bounded by $$ \rm{const.} \biggl( \frac{\sqrt{ 2 \pi}}{4 \sqrt{(1-\beta)}} + 1.5 \frac{1}{\sqrt{n}} \biggr) \frac{1}{\sqrt{n}} = \mathcal{O} \bigl( \frac{1}{\sqrt{n}} \bigr). $$ To bound the first summand in \eqref{mainbound2}, we obtain $(W-W')^2 = \frac{X_I^2}{n} - \frac{2 X_I \, X_I'}{n} + \frac{(X_I')^2}{n}$.
Hence $$ \E \bigl[ (W-W')^2 | \mathcal{F} \bigr] = \frac{2}{n} - \frac{2}{n^2} \sum_{i=1}^n X_i \, \tanh (\beta m_i(X)), $$ and therefore \begin{eqnarray*} 1 - \frac{1}{2 \lambda} \E \bigl[ (W-W')^2 | \mathcal{F} \bigr] & = & \frac 1n \sum_{i=1}^n X_i \, \tanh (\beta m_i(X)) \\ & = & \frac 1n \sum_{i=1}^n X_i \bigl( \tanh (\beta m_i(X)) - \tanh (\beta m(X)) \bigr) + m(X) \, \tanh (\beta m(X)) \\ & =:& R_1 + R_2. \end{eqnarray*} By Taylor expansion we get $R_2 = \frac{\beta}{n} W^2 + \mathcal{O} \bigl( \frac{W^4}{n^2} \bigr)$ and using Lemma \ref{momentsgeneral} we obtain $\E |R_2| = \mathcal{O} (n^{-1})$. Since $\tanh(x)$ is 1-Lipschitz we obtain $|R_1| \leq \frac 1n$. Hence $\E |R_1 + R_2| = \mathcal{O}(n^{-1})$ and Theorem \ref{CW} is proved. \end{proof} Now we discuss the critical case $\beta=1$, when $\varrho$ is the symmetric Bernoulli distribution. For $\beta=1$, using the Taylor expansion $\tanh(x) = x - x^3/3 + \mathcal{O}(x^5)$, \eqref{exid1} would lead to $$ \E [W-W'| W] = \frac{W^3}{3} \frac{1}{n^2} + \tilde{R} $$ for some $\tilde{R}$. Hence it is no longer possible to apply Corollary \ref{corsigma}. Moreover the prefactor $\lambda:=\frac{1}{n^2}$ would give growing bounds. In other words, the criticality of the temperature value $1 / \beta_c =1$ can also be recognized by Stein's method. We already know that at the critical value, the sum of the spin-variables has to be rescaled. Let us now define \begin{equation} \label{w2} W := \frac{1}{n^{3/4}} \sum_{i=1}^n X_i. \end{equation} Constructing the exchangeable pair $(W,W')$ in the same manner as before we will obtain \begin{equation} \label{beta1} \E[W-W' | W] = \frac{1}{n^{3/2}} \, \frac{W^3}{3} + R(W) =: -\lambda \psi(W) +R(W) \end{equation} with $\lambda=\frac{1}{n^{3/2}}$ and a remainder $R(W)$ presented later. Considering the density $p(x) = C \, \exp(-x^4/12)$, we have $$ \frac{p'(x)}{p(x)}= \psi(x).
$$ This is the starting point for developing Stein's method for limiting distributions with a regular Lebesgue density $p(\cdot)$ and an exchangeable pair $(W, W')$ which satisfies the condition $$ \E [W-W'|W] = - \lambda \psi(W) + R(W) = -\lambda \frac{p'(W)}{p(W)} +R(W) $$ with $0 < \lambda <1$. To prove \eqref{beta1}, observe that $$ \E[W-W' | W] = \frac 1n W - \frac{1}{n^{3/4}} \frac 1n \sum_{i=1}^n \tanh( m_i(X)). $$ By Taylor expansion and the identity $m_i(X) = m(X) - \frac{X_i}{n}$ we obtain $$ \frac{1}{n^{3/4}} \frac 1n \sum_{i=1}^n \tanh( m_i(X)) = \frac 1n W - \frac{1}{n^{3/2}} \frac{W^3}{3} - R(W) $$ with $R(W)$ such that $\E|R(W)| = \mathcal{O}(n^{-2})$. The exact form of $R(W)$ will be presented in Section 5. \secdef\sct\sect{The exchangeable pair approach for distributional approximations} Motivated by the classical Curie-Weiss model at the critical temperature, we will develop Stein's method with the help of exchangeable pairs as follows. For a rather large class of continuous distributions, the Stein characterization was introduced in \cite{DiaconisStein:2004}, following the lines of \cite[Chapter 6]{Stein:1986}. The densities occurring as limit laws in models of statistical mechanics belong to this class. Let $I$ be a real interval with endpoints $a$ and $b$, where $-\infty \leq a < b \leq \infty$. A function $f$ is called {\it regular} if it is finite on $I$ and, at any interior point of $I$, $f$ possesses a right-hand limit and a left-hand limit. Further, $f$ possesses a right-hand limit $f(a+)$ at the point $a$ and a left-hand limit $f(b-)$ at the point $b$. Let us assume that the regular density $p$ satisfies the following condition: {\bf Assumption (D)} Let $p$ be a regular, strictly positive density on an interval $I=[a,b]$. Suppose $p$ has a derivative $p'$ that is regular on $I$, has only countably many sign changes, and is continuous at the sign changes.
Suppose moreover that $\int_I p(x) | \log(p(x))| \, dx < \infty$ and assume that \begin{equation} \label{psi} \psi(x) := \frac{p'(x)}{p(x)} \end{equation} is regular. In \cite[Proposition]{DiaconisStein:2004} it is proved that a random variable $Z$ is distributed according to the density $p$ if and only if $$ \E \bigl( f'(Z) + \psi(Z) \, f(Z) \bigr) = f(b-) \, p(b-) - f(a+) \, p(a+) $$ for a suitably chosen class $\mathcal{F}$ of functions $f$. The proof is by integration by parts. The corresponding Stein identity is \begin{equation} \label{steinid2} f'(x) + \psi(x) \, f(x) = h(x) - P(h), \end{equation} where $h$ is a measurable function for which $\int_I |h(x)|\, p(x) \, dx < \infty$, $P(x) := \int_{-\infty}^x p(y) \, dy$ and $P(h) := \int_I h(y) \, p(y)\, dy$. The solution $f:=f_h$ of this differential equation is given by \begin{equation} \label{solution} f(x) = \frac{ \int_a^x \bigl( h(y) - P(h) \bigr) \, p(y) \, dy}{p(x)}. \end{equation} For the function $h(x) := 1_{\{x \leq z\}}(x)$ let $f_z$ be the corresponding solution of \eqref{steinid2}. We will make the following assumptions: {\bf Assumption (B1)} Let $p$ be a density fulfilling Assumption (D). We assume that for any absolutely continuous function $h$, the solution $f_h$ of \eqref{steinid2} satisfies $$ \| f_h \| \leq c_1 \|h'\|, \quad \|f_h'\| \leq c_2 \|h'\| \quad \text{and} \quad \|f_h''\| \leq c_3 \|h'\|, $$ where $c_1, c_2$ and $c_3$ are constants. {\bf Assumption (B2)} Let $p$ be a density fulfilling Assumption (D). We assume that the solution $f_z$ of \begin{equation} \label{steinid3} f_z'(x) + \psi(x) \, f_z(x) = 1_{\{x \leq z\}}(x) - P(z) \end{equation} satisfies $$ |f_z(x)| \leq d_1, \quad |f_z'(x)| \leq d_2 \quad \text{and} \quad |f_z'(x)-f_z'(y) | \leq d_3 $$ and \begin{equation} \label{addcond} |(\psi(x) \, f_z(x))'| = \Bigl| \Bigl( \frac{p'(x)}{p(x)} \, f_z(x) \Bigr)' \Bigr| \leq d_4 \end{equation} for all real $x$ and $y$, where $d_1, d_2, d_3$ and $d_4$ are constants.
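As an illustrative numerical sketch (ours, and no substitute for the analysis in the appendix), one can check the Stein identity \eqref{steinid3} for the standard normal case $\psi(x) = -x$, using the closed form of the solution \eqref{solution} for $h = 1_{\{\cdot \leq z\}}$ that is recorded in a remark below:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def f_z(x, z):
    """Solution of f' + psi*f = 1_{x<=z} - P(z) for psi(x) = -x and P = Phi."""
    if x <= z:
        return (1.0 - Phi(z)) * Phi(x) / phi(x)
    return Phi(z) * (1.0 - Phi(x)) / phi(x)

def stein_residual(x, z, h=1e-6):
    """Residual of the Stein identity at x (take x != z, where f_z' jumps)."""
    fz_prime = (f_z(x + h, z) - f_z(x - h, z)) / (2.0 * h)
    lhs = fz_prime + (-x) * f_z(x, z)
    rhs = (1.0 if x <= z else 0.0) - Phi(z)
    return abs(lhs - rhs)
```

Away from the jump point $x = z$ the residual is zero up to finite-difference error, which is the content of \eqref{steinid3} in this special case.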
At first glance, Condition \eqref{addcond} seems to be a rather strong or at least a rather technical condition. \begin{remark} In the case of the normal approximation, $\psi(x) = -x$, we have to bound $ (x f_z(x))'$ for the solution $f_z$ of the classical Stein equation. But it is easy to observe that $|(x f_z(x))'| \leq 2$ by direct calculation (see \cite[Proof of Lemma 6.5]{ChenShao:2005}). However, in the normal approximation case, this bound would lead to a worse Berry-Esseen constant (compare Theorem \ref{ourmain} with Theorem \ref{generaldensity}). Hence in this case we only use $d_2=d_3=1$ and $d_1 = \sqrt{2 \pi}/4$. \end{remark} We will see that for all distributions appearing as limit laws in our class of Curie-Weiss models, Condition \eqref{addcond} can be proved: \begin{lemma} \label{genbound} The densities $f_{k, \mu, \beta}$ in \eqref{densitysigma} and \eqref{densitygen} and the densities in Theorem \ref{CWbetadep}, Theorem \ref{CWgeneral} and Theorem \ref{CWgendep} satisfy Assumptions (D), (B1) and (B2). \end{lemma} \begin{proof} We defer the proofs to the appendix, since they only involve careful analysis. \end{proof} \begin{remark} \label{gibbsrem} With respect to all densities which appear as limiting distributions in our theorems, we restrict ourselves to bound solutions (and their derivatives) of the corresponding Stein equation characterizing distributions with probability densities $p$ of the form $b_k \exp(-a_k x^{2k})$. Along the lines of the proof of Lemma \ref{genbound}, one would be able to present good bounds (in the sense that Assumptions (B1) and (B2) are fulfilled) even for measures with a probability density of the form \begin{equation} \label{gibbs} p(x) = b_k \exp \bigl( -a_k V(x) \bigr), \end{equation} where $V$ is even, twice continuously differentiable, unbounded above at infinity, $V'\not= 0$ and $V'$ and $1/V'$ are increasing on $[0,\infty)$.
Moreover one has to assume that $\frac{V''(x)}{|V'(x)|}$ can be bounded by a constant for $x \geq d$ with some $d \in \R_+$. We sketch the proof in the appendix. It is remarkable that this class of measures is a subclass of measures which are GHS, see Section 7. A measure with density $p$ in \eqref{gibbs} is usually called a Gibbs measure. Stein's method for discrete Gibbs measures is developed in \cite{Eichelsbacher/Reinert:2008}. Our remark might be of use when applying Stein's method to continuous Gibbs measure approximations. \end{remark} \begin{remark} In the case of comparing with an {\it exponential distribution} with parameter $\mu$, it is easy to see that Assumptions (D) and (B2) are fulfilled, see \cite[Example 1.6]{DiaconisStein:2004} for (D) and \cite[Lemma 2.1]{Chatterjee/Fulman/Roellin:2009} for (B2). We have $\psi(x) = - \mu$ and $\|f_z\| \leq 1$, $\|f_z'\| \leq 1$ and $\sup_{x,y \geq 0} |f_z'(x) - f_z'(y)| \leq 1$. Thus $|(\psi(x) f_z(x))'| = \mu |f_z'(x)| \leq \mu$. \end{remark} \begin{remark} From \eqref{solution} we obtain $$ f_z(x) = \frac{(1-P(z)) \, P(x)}{p(x)} \quad \text{for} \quad x \leq z $$ and $$ f_z(x) = \frac{P(z)(1-P(x))}{p(x)} \quad \text{for} \quad x \geq z. $$ Hence $$ \psi(x) \, f_z(x) = \frac{P(x) \, p'(x)}{p^2(x)} (1-P(z)) \quad \text{for} \quad x \leq z $$ and $$ \psi(x) \, f_z(x) = \frac{(1-P(x)) \, p'(x)}{p^2(x)} \, P(z) \quad \text{for} \quad x \geq z. $$ Therefore one has to bound the derivative of $$ \frac{P(x) \, p'(x)}{p^2(x)} \quad \text{and} \quad \frac{(1-P(x)) \, p'(x)}{p^2(x)}, $$ respectively, to check Condition \eqref{addcond}. \end{remark} The following result is a refinement of Stein's result \cite{Stein:1986} for exchangeable pairs. \begin{theorem} \label{generaldensity} Let $p$ be a density fulfilling Assumption (D).
Let $(W, W')$ be an exchangeable pair of real-valued random variables such that \begin{equation} \label{exchangepsi} \E [W'|W] = W + \lambda \psi(W) - R(W) \end{equation} for some random variable $R=R(W)$, $0 < \lambda < 1$ and $\psi$ defined in \eqref{psi}. Then \begin{equation} \label{zweitesmoment} \E (W-W')^2 = - 2 \lambda \E[W \psi(W)] + 2 \E[W \, R(W)]. \end{equation} We obtain the following assertions: \begin{enumerate} \item Let $Z$ be a random variable distributed according to $p$. Under Assumption (B1), for any uniformly Lipschitz function $h$, we obtain $$ | \E h(W) - \E h(Z) | \leq \delta \|h'\| $$ with $$ \delta: = c_2 \E \biggl| 1 - \frac{1}{2 \lambda} \E \bigl( (W-W')^2| W \bigr) \biggr| + \frac{c_3}{4 \lambda} \E |W-W'|^3 + \frac{c_1}{\lambda} \sqrt{ \E (R^2)}. $$ \item Let $Z$ be a random variable distributed according to $p$. Under Assumption (B2), we obtain for any $A >0$ \begin{eqnarray} \label{kolall} d_{\rm{K}}(W, Z) & \leq & d_2 \sqrt{ \E \biggl( 1 - \frac{1}{2 \lambda} \E[(W'-W)^2 |W] \biggr)^2} + \bigl( d_1 + \frac 32 A \bigr) \frac{\sqrt{\E(R^2)}}{\lambda} \nonumber \\ & + & \frac{1}{\lambda} \bigl( \frac{d_4 A^3}{4} \bigr) + \frac{3A}{2} \E (|\psi(W)|) + \frac{d_3}{2 \lambda} \E \bigl( (W-W')^2 1_{\{|W-W'| \geq A\}} \bigr). \end{eqnarray} \end{enumerate} \end{theorem} With \eqref{zweitesmoment} we obtain $$ \E \biggl( 1 - \frac{1}{2\lambda} \E[(W-W')^2|W] \biggr) = 1 + \E[W \psi(W) ] - \frac{\E(W \, R)}{\lambda}. $$ Therefore the bounds in Theorem \ref{generaldensity} are unlikely to be useful unless $-\E[W \psi(W)]$ is close to 1 and $ \frac{\E(W \, R)}{\lambda}$ is small. Alternatively, bounds can be obtained comparing not with a distribution given by $p$ but with a modification which involves $\E[W \psi(W)]$. Let $p_W$ be a probability density such that a random variable $Z$ is distributed according to $p_W$ if and only if $$ \E \bigl( -\E[W \psi(W)] \, f'(Z) + \psi(Z) \, f(Z) \bigr) =0 $$ for a suitably chosen class of functions.
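The proofs below rest on the antisymmetry identity $\E\bigl((W-W')(f(W')+f(W))\bigr)=0$, valid for any exchangeable pair. For a toy Curie-Weiss system with $\pm 1$ spins this can be confirmed by exact enumeration over all configurations, sites and resampled spins; the following sketch (with our own function names) uses the Gibbs weight $\exp(\beta S(x)^2/(2n))$ and the conditional spin law from Section 3, for which the Gibbs kernel is reversible and hence the pair is exchangeable:

```python
import itertools
import math

def cw_pair_expectation(n, beta, f):
    """E[(W - W')(f(W) + f(W'))] for the Curie-Weiss exchangeable pair,
    by exact enumeration: weight(x) ~ exp(beta * S(x)^2 / (2n)), a uniformly
    chosen site I, and the spin at I resampled from its conditional law."""
    configs = list(itertools.product((-1, 1), repeat=n))
    weights = [math.exp(beta * sum(x) ** 2 / (2.0 * n)) for x in configs]
    Z = sum(weights)
    total = 0.0
    for x, wgt in zip(configs, weights):
        s_tot = sum(x)
        w = s_tot / math.sqrt(n)
        for i in range(n):
            m_i = (s_tot - x[i]) / n
            p_plus = math.exp(beta * m_i) / (2.0 * math.cosh(beta * m_i))
            for s, ps in ((1, p_plus), (-1, 1.0 - p_plus)):
                w_new = (s_tot - x[i] + s) / math.sqrt(n)
                prob = (wgt / Z) * (1.0 / n) * ps
                total += prob * (w - w_new) * (f(w) + f(w_new))
    return total
```

For any bounded $f$ the returned value vanishes up to floating-point error, in line with the identity used at the start of the proof below.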
\begin{theorem} \label{generaldensity2} Let $p$ be a density fulfilling Assumption (D). Let $(W, W')$ be an exchangeable pair of real-valued random variables such that \eqref{exchangepsi} holds. If $Z_W$ is a random variable distributed according to $p_W$, we obtain under (B1), for any uniformly Lipschitz function $h$ that $| \E h(W) - \E h(Z_W) | \leq \delta' \|h'\|$ with $$ \delta': = \frac{c_2}{2 \lambda} \bigl( {\rm Var} \bigl( \E[(W-W')^2| W] \bigr) \bigr)^{1/2} + \frac{c_3}{4 \lambda} \E |W-W'|^3 + \frac{c_1+c_2 \sqrt{\E(W^2)}}{\lambda} \sqrt{ \E (R^2)}. $$ Under Assumption (B2) we obtain for any $A >0$ \begin{eqnarray} \label{kolall2} d_{\rm{K}}(W, Z_W) & \leq & \frac{d_2}{2 \lambda} \bigl( {\rm Var} \bigl( \E [ (W-W')^2| W] \bigr) \bigr)^{1/2} + \bigl( d_1 + d_2 \sqrt{\E(W^2)} +\frac 32 A \bigr) \frac{\sqrt{\E(R^2)}}{\lambda} \nonumber \\ & + & \frac{1}{\lambda} \bigl( \frac{d_4 A^3}{4} \bigr) + \frac{3A}{2} \E (|\psi(W)|) + \frac{d_3}{2 \lambda} \E \bigl( (W-W')^2 1_{\{|W-W'| \geq A\}} \bigr). \end{eqnarray} \end{theorem} \begin{proof}[Proof of Theorem \ref{generaldensity}] Interestingly enough, the proof is a quite simple adaptation of the results in \cite{Stein:1986} and follows the lines of the proof of Theorem \ref{ourmain}. For a function $f$ with $|f(x)| \leq C(1 +|x|)$ we obtain \begin{eqnarray} \label{start2} 0 & = & \E \bigl( (W-W')(f(W')+f(W)) \bigr) \nonumber \\ & = & \E \bigl( (W-W')(f(W')-f(W)) \bigr) - 2 \lambda \E(\psi(W) \, f(W)) + 2 \E(f(W) \, R(W)), \end{eqnarray} which is equivalent to \begin{equation} \label{start3} \E(\psi(W) \, f(W)) = - \frac{1}{2 \lambda} \E \bigl( (W-W')(f(W)-f(W')) \bigr) + \frac{1}{\lambda} \E(f(W) \, R(W)). \end{equation} {\bf Proof of (1):} Now let $f=f_h$ be the solution of the Stein equation \eqref{steinid2}, and define $$ \widehat{K}(t) := (W-W') \bigl( 1_{\{-(W-W') \leq t \leq 0 \}} - 1_{\{0 < t \leq -(W-W') \}} \bigr) \geq 0.
$$ By \eqref{start3}, following the calculations on page 21 in \cite{ChenShao:2005}, we simply obtain \begin{eqnarray*} |\E h(W) - \E h(Z) | & = & |\E \bigl( f'(W) + \psi(W) \, f(W) \bigr) |\\ & = & \biggl| \E \biggl( f'(W) \biggl( 1 - \frac{1}{2 \lambda} (W-W')^2 \biggr) \biggr) + \frac{1}{2 \lambda} \E \biggl( \int_{\R} (f'(W) - f'(W+t)) \, \widehat{K}(t) \, dt \biggr)\\ & + & \frac{1}{\lambda} \E (f(W) R(W)) \biggr|. \end{eqnarray*} Using $\E \int_{\R} |t| \widehat{K}(t) \, dt = \frac 12 \E |W-W'|^3$, the bounds in Assumption (B1) give: \begin{equation} |\E h(W) - \E h(Z) | \leq \|h'\| \biggl( c_2 \E \biggl| 1 - \frac{1}{2 \lambda} \E \bigl( (W-W')^2| W \bigr) \biggr| + \frac{c_3}{4 \lambda} \E |W-W'|^3 + \frac{c_1}{\lambda} \sqrt{ \E (R^2)} \biggr). \end{equation} {\bf Proof of (2):} Now let $f=f_z$ be the solution of the Stein equation \eqref{steinid3}. As in \eqref{tis}, using \eqref{start3}, we obtain \begin{eqnarray*} P(W \leq z) - P(z) & = & \E (f'(W) + \psi(W) f(W)) \\ & =& \E \biggl( f'(W) \bigl( 1 - \frac{1}{2 \lambda} (W-W')^2 \bigr) \biggr) + \frac{1}{2 \lambda} \E (2 f(W) \, R)\\ & & - \frac{1}{2 \lambda} \E \biggl[ (W-W') \bigl( f(W) - f(W') - (W-W')f'(W) \bigr) \biggr]\\ & = & T_1 + T_2 + T_3. \end{eqnarray*} Now the bounds in Assumption (B2) give $$ |T_1| \leq d_2 \sqrt{ \E \biggl( 1 - \frac{1}{2 \lambda} \E[(W'-W)^2 |W] \biggr)^2} $$ and $$ |T_2| \leq \frac{d_1}{\lambda} \sqrt{ \E(R^2)}. $$ Using the decomposition \eqref{t3} of $(-2 \lambda) \, T_3$, the modulus of the first term can be bounded by $d_3 \, \E \bigl( (W-W')^2 1_{\{|W-W'| >A\}} \bigr)$. Using the Stein identity \eqref{steinid3}, the second summand can be represented as \begin{eqnarray*} & & \E \biggl( (W-W') 1_{\{|W-W'| \leq A\}} \int_{-(W-W')}^0 \bigl( -\psi(W+t) \, f(W+t) + \psi(W) \, f(W) \bigr) dt \biggr) \\ & & + \E \biggl( (W-W') 1_{\{|W-W'| \leq A\}} \int_{-(W-W')}^0 (1_{\{W+t \leq z\}} - 1_{\{W \leq z\}}) dt \biggr) =: U_1 + U_2.
\end{eqnarray*} With $g(x) := (\psi(x) f(x))'$ we obtain $$ -\psi(W+t) \, f(W+t) + \psi(W) \, f(W) = - \int_0^t g(W+s) \, ds. $$ Since $|g(x)| \leq d_4$ we obtain $|U_1| \leq \frac{A^3}{2} d_4$. Analogously to the steps in the proof of Theorem \ref{ourmain}, $U_2$ can be bounded by $$ \E \bigl( (W-W')(f(W)-f(W')) \bigr) = 2 \E \bigl( f(W) \, R(W) \bigr) - 2 \lambda \, \E \bigl( \psi(W) \, f(W) \bigr), $$ where we applied \eqref{start3}, and where $f$ is defined by $f(x):= -1.5 A$ for $x \leq z-A$, $f(x):=1.5 A$ for $x \geq z+2A$ and $f(x):=x-z-A/2$ in between. Thus $U_2 \leq 3A \bigl( \E(|R|) + \lambda \E (|\psi(W)|) \bigr)$. Similarly we obtain $U_2 \geq - 3A \bigl( \E(|R|) + \lambda \E (|\psi(W)|) \bigr)$. \end{proof} \begin{proof}[Proof of Theorem \ref{generaldensity2}] The main observation is the following identity: \begin{eqnarray*} \E \bigl( -\E[W \psi(W)] \, f'(W) + \psi(W) f(W) \bigr) & = & \E \biggl( f'(W) \biggl( \frac{\E[(W-W')^2] - 2 \E[W \, R]}{2 \lambda} \biggr) \biggr) + \E \bigl( \psi(W) \, f(W) \bigr) \nonumber \\ & & \hspace{-6cm} = \E \biggl( f'(W) \biggl( \frac{\E[(W-W')^2] - \E[(W-W')^2|W]}{2\lambda} \biggr) \biggr) +\frac{1}{\lambda} \biggl(\E[f(W) \, R] - \E[\E(W R) \, f'(W)] \biggr) +T_3 \end{eqnarray*} with $T_3$ defined as in the proof of Theorem \ref{generaldensity}. Now we can apply the Cauchy-Schwarz inequality to get $$ \E \bigl| \E[(W-W')^2] - \E[(W-W')^2|W] \bigr| \leq \bigl( {\rm Var} \bigl( \E[(W-W')^2|W] \bigr) \bigr)^{1/2}. $$ Now the proof follows the lines of the proof of Theorem \ref{generaldensity}. \end{proof} \begin{remark} \label{alternative} We discuss an alternative bound in Theorem \ref{generaldensity} in the case that $(\psi(x) f_z(x))'$ cannot be bounded uniformly. By the mean value theorem we obtain in general $$ -\psi(W+t) f(W+t) + \psi(W) f(W) = \psi(W) \bigl( - \int_0^1 f'(W+st) t ds \bigr) + f(W+t) \bigl( - \int_0^1 \psi'(W+st) t ds \bigr). 
$$ This gives $$ |-\psi(W+t) f(W+t) + \psi(W) f(W)| \leq d_2 |\psi(W)| |t| + d_1 \int_0^1 |\psi'(W+st)| |t| ds. $$ Now we get the bound $$ \frac{1}{2 \lambda} |U_1| \leq \frac{d_2 A^3}{4 \lambda} \E(|\psi(W)|) + \frac{d_1}{2 \lambda} \E(V) $$ with $$ V := \biggl( |W-W'| \, 1_{\{|W-W'| \leq A\}} \int_{-(W-W')}^0 \int_0^1 |\psi'(W+st)| \, |t| ds \, dt \biggr). $$ Let us consider the example $\psi(x) = - x^3/3$. Now $$ |\psi'(W+st)| = (W+st)^2 = | W^2 + 2 st W + s^2t^2|, $$ hence $$ (W+st)^2 \, |t| \leq |t| |W^2| + |t^2| \, 2 |W| |s| + |t^3| |s^2| $$ and integration over $s$ gives $$ \int_0^1 | \psi'(W+st)| |t| ds \leq |t| |W^2| + |t^2| |W| + |t^3| /3. $$ Integration over $t$ leads to $$ \int_{-(W-W')}^0 \biggl( \int_0^1 | \psi'(W+st)| |t| ds \biggr) dt \leq \frac{|\Delta|^2}{2} |W^2| + \frac{|\Delta|^3}{3} |W| + \frac{|\Delta|^4}{12} $$ with $\Delta := (W-W')$. Hence we get $$ \E(V) \leq \biggl( \frac{A^3}{2} \E |W^2| + \frac{A^4}{3} \E |W| + \frac{A^5}{12} \biggr). $$ We will see in Section 5 that this bound is good enough for an alternative proof of Theorem \ref{CWcritical}. \end{remark} \secdef\sct\sect{Berry-Esseen bound at the critical temperature} \begin{proof}[Proof of Theorem \ref{CWcritical}] We start with \eqref{beta1}, where $W$ is given by \eqref{w2}. We will calculate the remainder term $R(W)$ more carefully: by Taylor expansion and the identities $m_i(X) = m(X) - X_i/n$ and $m(X) = \frac{1}{n^{1/4}} W$ we obtain $$ \frac{1}{n^{3/4}} \frac 1n \sum_{i=1}^n \tanh ( m_i(X)) = \frac 1n W - \frac{1}{n^{3/2}} \frac{W^3}{3} - \mathcal{O} \bigl(\frac{W}{n^2} \bigr) + \mathcal{O} \bigl( \frac{W^3}{n^{5/2}} \bigr) + \mathcal{O} \bigl( S(W) \bigr) $$ with $$ S(W) = \frac{1}{n^{3/4}} \frac 1n \sum_{i=1}^n m_i(X)^5 = \mathcal{O} \bigl( \frac{W^5}{n^2} \bigr) + \mathcal{O} \bigl( \frac{W^3}{n^{7/2}} \bigr) + \mathcal{O} \bigl( \frac{W^2}{n^{21/4}} \bigr) + \mathcal{O} \bigl( \frac{W}{n^6} \bigr).
$$ From Lemma \ref{momentsgeneral} we know that for $\varrho$ being the symmetric Bernoulli distribution and $\beta=1$ we get $\E |W|^6 \leq {\rm const.}$ Using this we get the exchangeable pair identity \eqref{beta1} with $\E|R(W)| = \mathcal{O} \bigl( \frac{1}{n^2} \bigr)$. With Lemma \ref{genbound}, we can now apply Theorem \ref{generaldensity}, using $|W - W'| \leq \frac{1}{n^{3/4}}=:A$. We obtain $1.5 A \, \E(|\psi(W)|) \leq {\rm const.} \frac{1}{n^{3/4}}$ and $\frac{d_4 \, A^3}{4 \lambda} = \frac{d_4}{4} \frac{1}{n^{3/4}}$. Using $\E |R(W)| \leq {\rm const.} \frac{1}{n^2}$ we get $$ \bigl( d_1 + \frac 32 A \bigr) \frac{\E|R(W)|}{\lambda} \leq {\rm const.} \frac{1}{\sqrt{n}}. $$ Moreover we obtain $$ \E \bigl[ (W-W')^2 | \mathcal{F} \bigr] = \frac{2}{n^{3/2}} - \frac{2}{n^{5/2}} \sum_{i=1}^n X_i \tanh (m_i(X)). $$ Hence applying Theorem \ref{generaldensity} we have to bound the expectation of $$ T := \bigl| \frac 1n \sum_{i=1}^n X_i \tanh (m_i(X)) \bigr|. $$ Again using Taylor expansion and $m_i(X) = m(X) - \frac{X_i}{n}$ and Lemma \ref{momentsgeneral}, the leading term of $T$ is $\frac{W^2}{n^{1/2}}$. Hence $\E(T) = \mathcal{O}(n^{-1/2})$ and Theorem \ref{CWcritical} is proved. \end{proof} \begin{remark} In Remark \ref{alternative}, we presented an alternative bound via Stein's method without proving a uniform bound for $(\psi(x) \, f_z(x))'$. As we can see, the additional terms in this bound are of smaller order than $\mathcal{O}(n^{-1/2})$, using $A = n^{-3/4}$. \end{remark} \begin{proof}[Proof of Theorem \ref{CWbetadep}] (1) Let $\beta_n -1 = \frac{\gamma}{\sqrt{n}}$ and $W= S_n/n^{3/4}$. For the distribution function $F_{\gamma}$ in Theorem \ref{CWbetadep} we obtain $\psi(x) = \gamma \, x - \frac 13 x^3$. Moreover we have \begin{equation} \label{sehrhuebsch} \E[W-W'|W] = \frac{1 - \beta_n}{n} W + \frac{\beta_n^3}{n^{3/2}} \frac{W^3}{3} + R(\beta_n, W) \end{equation} with $R(\beta_n, W) = \mathcal{O}(n^{-2})$.
With $\beta_n-1 = \frac{\gamma}{\sqrt{n}}$ we obtain $$ \E[W-W'|W] = -\frac{\gamma}{n^{3/2}} W + \frac{\beta_n^3}{n^{3/2}} \frac{W^3}{3} + R(\beta_n, W) = - \frac{1}{n^{3/2}} \psi(W) + \tilde{R}(\beta_n, W) $$ with $\tilde{R}(\beta_n, W) = \mathcal{O}(n^{-2})$. Now we only have to adapt the proof of Theorem \ref{CWcritical} step by step, using that the sixth moment of $W$ is bounded for varying $\beta_n$, see Lemma \ref{momentsgeneral}. Hence by Lemma \ref{genbound} and Theorem \ref{generaldensity}, part (1) is proved. (2) We consider the case $|\beta_n -1| = \mathcal{O}(n^{-1})$ and $W= S_n/ n^{3/4}$. Now in \eqref{sehrhuebsch}, the term $ \frac{1 - \beta_n}{n} W$ will be a part of the remainder: $$ \E[W-W'|W] = \frac{\beta_n^3}{n^{3/2}} \frac{W^3}{3} + R(\beta_n, W) + \frac{1 - \beta_n}{n} W =: -\frac{\beta_n^3}{n^{3/2}} \psi(W) + \hat{R}(\beta_n, W) $$ with $\psi(x) := -x^3/3$. Along the lines of the proof of Theorem \ref{CWcritical}, we have to bound $\frac{\E|\hat{R}|}{\lambda}$ with $\lambda = \frac{\beta_n^3}{n^{3/2}}$. But since by assumption $$ \lim_{n \to \infty} \frac{1}{\lambda} \frac{(1-\beta_n)}{n} = \lim_{n \to \infty} \frac{\sqrt{n}(1-\beta_n)}{\beta_n^3} = 0, $$ applying Theorem \ref{generaldensity}, we obtain the convergence in distribution for any $\beta_n$ with $|\beta_n -1| \ll n^{-1/2}$, and we obtain the Berry-Esseen bound of order $\mathcal{O}(1 / \sqrt{n})$ for any $|\beta_n -1| = \mathcal{O} (n^{-1})$. (3) Finally we consider $|\beta_n -1| \gg n^{-1/2}$ and $W= \sqrt{\frac{(1-\beta_n)}{n}} S_n$. Now we obtain $$ \E[W-W'|W] = \frac{1 - \beta_n}{n} W + \frac{\beta_n \, W}{n^2} + \frac{\beta_n^3}{n^2(1-\beta_n)} \frac{W^3}{3} + R(\beta_n, W) =: -\lambda \psi(W) + \tilde{R}(\beta_n,W) $$ with $\lambda = \frac{(1-\beta_n)}{n}$ and $\psi(x) = -x$.
We apply Corollary \ref{corsigma}: with $A= \frac{1}{\sqrt{n}}(1-\beta_n)^{1/2}$, one obtains $\lambda^{-1} A^3 = n^{-1/2} (1-\beta_n)^{1/2}$ and $$ \frac{\E|\tilde{R}(\beta_n,W)|}{\lambda} \leq \frac{{\rm const.}}{n(1-\beta_n)^2}. $$ Moreover $$ \E[(W-W')^2|W] = \frac{2(1-\beta_n)}{n} - \frac{2(1-\beta_n)}{n} \frac 1n \sum_{i=1}^n X_i \tanh \bigl( \beta_n m_i(X) \bigr). $$ Hence $$ \biggl| 1 - \frac{1}{2 \lambda} \E[(W-W')^2|W] \biggr| = \biggl| \frac{\beta_n}{n (1- \beta_n)} W^2 - \frac{\beta_n}{n} - \frac{\beta_n^3}{n^2(1-\beta_n)^2} \frac{W^4}{3} + R(\beta_n, W) \biggr| = \mathcal{O} \bigl( \frac{\beta_n}{n(1-\beta_n)} \bigr). $$ Hence with $|\beta_n -1| \gg n^{-1/2}$ we obtain convergence in distribution. Under the additional assumption $|\beta_n -1| \gg n^{-1/4}$ we obtain the Berry-Esseen result. \end{proof} \secdef\sct\sect{Proof of the general case} \begin{proof}[Proof of Theorem \ref{CWgeneral}] Let $\varrho$ satisfy the GHS-inequality and let $\alpha$ be the global minimum of type $k$ and strength $\mu(\alpha)$ of $G_{\varrho}$. In case $k=1$ it is known that the random variable $\frac{S_n}{\sqrt{n}}$ converges in distribution to a normal distribution $N(0, \sigma^2)$ with $\sigma^2 = \mu(\alpha)^{-1} - \beta^{-1}= (\sigma_{\varrho}^{-2} - \beta)^{-1}$, see for example \cite[V.13.15]{Ellis:LargeDeviations}. Hence in this case we will apply Corollary \ref{corsigma2} (to obtain better constants for our Berry-Esseen bound in comparison to Theorem \ref{generaldensity2}). Consider $k \geq 1$. We just treat the case $\alpha=0$ and denote $\mu = \mu(0)$. The more general case can be done analogously. For $k=1$, we consider $\psi(x) = - \frac{x}{\sigma^2}$ with $\sigma^2 = \mu^{-1} - \beta^{-1}$. For any $k \geq 2$ we consider $$ \psi(x) = - \frac{\mu}{(2k-1)!} x^{2k-1}. $$ We define $$ W := W_{k,n} := \frac{1}{n^{1 - 1/(2k)}} \sum_{i=1}^n X_i $$ and $W'$, constructed as in Section 3, such that $$ W-W' = \frac{X_I - X_I'}{n^{1 - 1/(2k)}}.
$$ We obtain $$ \E[W-W'| \mathcal{F}] = \frac 1n W - \frac{1}{n^{1 - 1/(2k)}} \frac 1n \sum_{i=1}^n \E(X_i' | \mathcal{F}). $$ Now we have to calculate the conditional distribution at site $i$ in the general case: \begin{lemma} \label{CWidentitygeneral} In the situation of Theorem \ref{CWgeneral}, if $X_1$ is $\varrho$-a.s. bounded, we obtain $$ \E (X_i' | \mathcal{F}) = \bigl( m_i(X) - \frac{1}{\beta} G_{\varrho}'(\beta, m_i(X)) \bigr) \, \bigl(1 + \mathcal{O}(1/n) \bigr) $$ with $m_i(X) := \frac 1n \sum_{j \not= i} X_j = m(X) - \frac{X_i}{n}$. \end{lemma} \begin{proof} We compute the conditional density $g_{\beta}(x_1|(X_i)_{i \ge 2})$ of $X_1=x_1$ given $(X_i)_{i \ge 2}$ under the Curie-Weiss measure: \begin{eqnarray*} g_{\beta}(x_1|(X_i)_{i \ge 2})& = & \frac {e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ \sum_{i \neq j \ge 2} X_i X_j+ x_1^2)}} {\int e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ \sum_{i \neq j \ge 2} X_i X_j+ x_1^2)} \varrho(dx_1)}\\ &=& \frac {e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ x_1^2)}} {\int e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ x_1^2)} \varrho(dx_1)}. \end{eqnarray*} Hence we can compute $\mathbb{E}[X_1'|\mathcal{F}]$ as $$ \mathbb{E}[X_1'|\mathcal{F}]=\frac {\int x_1 e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ x_1^2)}\varrho(dx_1)} {\int e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ x_1^2)} \varrho(dx_1)}. $$ Now, if $|X_1| \le c$ $\varrho$-a.s $$ \mathbb{E}[X_1'|\mathcal{F}]\le \frac {\int x_1 e^{\beta/2n (\sum_{i \ge 2} x_1 X_i)}\varrho(dx_1) e^{\beta c^2/2n}} {\int e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ x_1^2)} \varrho(dx_1)e^{-\beta c^2/2n}} $$ and $$ \mathbb{E}[X_1'|\mathcal{F}]\ge \frac {\int x_1 e^{\beta/2n (\sum_{i \ge 2} x_1 X_i)}\varrho(dx_1) e^{-\beta c^2/2n}} {\int e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ x_1^2)} \varrho(dx_1)e^{\beta c^2/2n}}. 
$$ By computation of the derivative of $G_\varrho$ we see that $$ \frac {\int x_1 e^{\beta/2n (\sum_{i \ge 2} x_1 X_i)}\varrho(dx_1)} {\int e^{\beta/2n (\sum_{i \ge 2} x_1 X_i+ x_1^2)} \varrho(dx_1)} e^{\pm \beta c^2/n} = \bigl( m_1(X) - \frac{1}{\beta} G_{\varrho}'(\beta, m_1(X)) \bigr) \, (1 \pm \beta c^2/n). $$ \end{proof} \begin{remark} If we consider the Curie-Weiss model with respect to $\widehat{P}_{n, \beta}$, the conditional density $g_{\beta}(x_1 |(X_i)_{i \geq 2})$ under this measure becomes $$ g_{\beta}(x_1|(X_i)_{i \ge 2}) = \frac {e^{\beta/2n (\sum_{i \ge 2} x_1 X_i)}} {\int e^{\beta/2n (\sum_{i \ge 2} x_1 X_i)} \varrho(dx_1)}. $$ Thus we obtain $\E (X_i' | \mathcal{F}) = \bigl( m_i(X) - \frac{1}{\beta} G_{\varrho}'(\beta, m_i(X)) \bigr)$ without the boundedness assumption for $X_1$. \end{remark} Applying Lemma \ref{CWidentitygeneral} and the representation \eqref{Taylor} of $G_{\varrho}$, it follows that $$ \E[W-W'|W] = \frac 1n W - \frac{1}{n^{1-1/(2k)}} \biggl( \frac 1n \sum_{i=1}^n \biggl( m_i(X) - \frac{\mu}{\beta (2k-1)!} m_i(X)^{2k-1} + \mathcal{O} \bigl( m_i(X)^{2k} \bigr) \biggr) \biggr). $$ With $m_i(X) = m(X) - \frac{X_i}{n}$ and $m(X) = \frac{1}{n^{1/(2k)}}W$ we obtain $$ \frac{1}{n^{1-1/(2k)}} \frac 1n \sum_{i=1}^n m_i(X) = \frac 1n W - \frac{1}{n^2} W $$ and \begin{eqnarray*} \frac{1}{n^{1-1/(2k)}} \frac 1n \sum_{i=1}^n \frac{\mu}{\beta (2k-1)!} m_i(X)^{2k-1} =\frac{1}{n^{1-1/(2k)}} \frac{\mu}{\beta (2k-1)!} \sum_{l=0}^{2k-1} {2k -1 \choose l} m(X)^{2k-1-l} \frac{(-1)^{l}}{n^l} \frac 1n \sum_{i=1}^n X_i^l. \end{eqnarray*} For any $k \geq 1$ the first summand ($l=0$) is \begin{equation} \label{schick} \frac{1}{n^{2- \frac 1k}} \frac{\mu}{\beta (2k-1)!} W^{2k-1} = - \frac{1}{n^{2- \frac 1k}} \psi(W). \end{equation} To see this, let $k=1$. Since we set $\phi''(0)=1$, we obtain $\mu(0)= \beta - \beta^2$ and therefore $\frac{1}{\beta} \mu(0) W = (1-\beta) W$. In the case $k \geq 2$ we know that $\beta=1$.
Hence in both cases, \eqref{schick} is checked. Summarizing we obtain for any $k \geq 1$ $$ \E[W-W'|W] = - \frac{1}{n^{2- \frac 1k}} \psi(W) + R(W) =: - \lambda \psi(W) + R(W) $$ with \begin{eqnarray*} R(W) & = & \frac{1}{n^2} W + \frac{\mu}{\beta (2k-1)!} \frac{1}{n^{1-1/(2k)}} \sum_{l=1}^{2k-1} {2k -1 \choose l} m(X)^{2k-1-l} \frac{(-1)^{l}}{n^l} \frac 1n \sum_{i=1}^n X_i^l + \mathcal{O}(m(X)^{2k}) \\ & = & \frac{1}{n^2} W + \frac{\mu}{\beta (2k-1)!} \sum_{l=1}^{2k-1} {2k -1 \choose l} \frac{1}{n^{2 - \frac 1k - \frac{l}{2k}}} W^{2k-1-l} \frac{(-1)^{l}}{n^l} \frac 1n \sum_{i=1}^n X_i^l + \mathcal{O}(\frac{W^{2k}}{n}). \end{eqnarray*} With Lemma \ref{momentsgeneral} we know that $\E |W|^{2k} \leq {\rm const.}$ We will apply Corollary \ref{corsigma2} if $k=1$ and Theorem \ref{generaldensity2} for $k \geq 2$. In both cases we apply Lemma \ref{genbound}. Since the spin variables are assumed to be bounded $\varrho$-a.s., we have $$ |W-W'| \leq \frac{{\rm const.}}{n^{1 - \frac{1}{2k}}} =:A. $$ Let $k=1$. Now $\lambda= \frac 1n$, $A= {\rm const.} \, n^{-1/2}$, $\E(W^4) \leq {\rm const.}$ The leading term of $R$ is $W/n^2$. Hence the last four summands in \eqref{mainbound3} of Corollary \ref{corsigma2} are $\mathcal{O} (n^{-1/2})$. For $k \geq 2$ we obtain $\frac{3 A}{2} \E(|\psi(W)|) = \mathcal{O} \bigl( n^{\frac{1}{2k}-1} \bigr)$ and $\frac{1}{\lambda} \bigl( \frac{d_4 A^3}{4} \bigr) = \mathcal{O} \bigl( n^{\frac{1}{2k}-1} \bigr)$. The leading term in the second term of $R(W)$ is the first summand ($l=1$), which is of order $\mathcal{O}(n^{-3+ \frac{1}{k} + \frac{1}{2k}})$. With $\lambda= n^{\frac{1}{k} -2}$ we obtain $$ \frac{\E(|R|)}{\lambda} \leq \frac{\E(|W|)}{\lambda n^2} + \mathcal{O} \bigl( n^{\frac{1}{2k} -1} \bigr) \quad \text{and} \quad \frac{\E(|W|)}{\lambda n^2} = \mathcal{O} \bigl( n^{-1/k} \bigr). $$ Hence the last four summands in \eqref{kolall2} of Theorem \ref{generaldensity2} are $\mathcal{O} (n^{-1/k})$.
Finally we have to consider the variance of $\frac{1}{2 \lambda}\E[(W-W')^2|W]$. Hence we have to bound the variance of \begin{equation} \label{haupt1} \frac{1}{2n} \sum_{i=1}^n X_i^2 + \frac{1}{2n} \sum_{i=1}^n \E[ (X_i')^2|\mathcal{F}] + \frac 1n \sum_{i=1}^n X_i \biggl( m_i(X) - \frac{1}{\beta} G_{\varrho}'(\beta, m_i(X)) \biggr) \, (1 + \mathcal{O}(1/n)). \end{equation} Since we assume that $\varrho \in {\rm GHS}$, we can apply the correlation-inequality due to Lebowitz (see Remark \ref{lebo}) $$ \E(X_i X_j X_k X_l) - \E(X_i X_j) \E(X_k X_l) - \E(X_i X_k) \E(X_j X_l) - \E(X_i X_l) \E(X_j X_k) \leq 0. $$ The choice $i=k$ and $j=l$ leads to the bound $$ {\rm Cov} \bigl( X_i^2, X_j^2 \bigr) = \E(X_i^2 X_j^2) - \E(X_i^2) \E(X_j^2) \leq 2 (\E (X_i \, X_j))^2. $$ With Lemma \ref{momentsgeneral} we know that $(\E(X_i X_j))^2 \leq {\rm const.} \, n^{-2/k}$. This gives $$ {\rm Var} \bigl( \frac{1}{2n}\sum_{i=1}^n X_i^2 \bigr) = \frac{1}{4 n^2} \sum_{i=1}^n {\rm Var}(X_i^2) + \frac{1}{2 n^2} \sum_{1 \leq i < j \leq n} {\rm Cov} (X_i^2, X_j^2) = \mathcal{O} \bigl( n^{-1} \bigr) + \mathcal{O} \bigl(n^{-2/k} \bigr). $$ Using a conditional version of Jensen's inequality we have $$ {\rm Var} \bigl( \E \bigl( \frac{1}{2n}\sum_{i=1}^n X_i^2 \bigl| \mathcal{F} \bigr) \bigr) \leq {\rm Var} \bigl( \frac{1}{2n}\sum_{i=1}^n X_i^2 \bigr). $$ Hence the variance of the second term in \eqref{haupt1} is of the same order as the variance of the first term. Applying \eqref{Taylor} for $G_{\varrho}$, the variance of the third term in \eqref{haupt1} is of the order of the variance of $W^2 / n^{1/k}$. Summarizing, the variance of \eqref{haupt1} can be bounded by 9 times the maximum of the variances of the three terms in \eqref{haupt1}, which is a constant times $n^{-2/k}$, and therefore for $k \geq 1$ we obtain $$ \biggl( {\rm Var} \biggl( \frac{1}{2 \lambda} \, \E[(W-W')^2|W] \biggr) \biggr)^{1/2} = \mathcal{O}(n^{-1/k}). $$ Note that for $k \geq 2$ $$ \frac{\psi(x)}{-\E[W \psi(W)]} = -\frac{x^{2k-1}}{\E(W^{2k})}.
$$ Hence we compare the distribution of $W$ with a distribution with Lebesgue probability density proportional to $\exp \bigl( -\frac{x^{2k}}{2k \E(W^{2k})} \bigr)$. \end{proof} \begin{proof}[Proof of Theorem \ref{CWgendep}] Since $\alpha=0$ and $k=1$ for $\beta \not=1$ while $\alpha=0$ and $k \geq 2$ for $\beta =1$, $G_{\varrho}(\cdot)$ can now be expanded as $$ G(s) = G(0) + \frac{\mu_1}{2} s^2 + \frac{\mu_k}{(2k)!} s^{2k} + \mathcal{O} (s^{2k+1}) \quad \text{as} \quad s \to 0. $$ Hence $\frac{1}{\beta_n} \, G_{\varrho}'(s) = \frac{\mu_1}{\beta_n} s + \frac{\mu_k}{\beta_n (2k-1)!} s^{2k-1} + \mathcal{O}(s^{2k})$. With Lemma \ref{CWidentitygeneral} and $\mu_1=(1-\beta_n) \beta_n$ we obtain $$ \E[X_i | \mathcal{F}] = \beta_n m_i(X) - \frac{\mu_k}{\beta_n (2k-1)!} m_i(X)^{2k-1} \, (1 + \mathcal{O}(1/n)). $$ We get $$ \E[W-W'|W] = \frac{1 -\beta_n}{n} W + \frac{\beta_n}{n^2} W + \frac{1}{n^{2 - 1/k}} \frac{\mu_k}{\beta_n (2k-1)!} W^{2k-1} + R(\beta_n, W). $$ The remainder $R(\beta_n,W)$ is the remainder in the proof of Theorem \ref{CWgeneral} with $\mu$ replaced by $\mu_{k}$ and $\beta$ replaced by $\beta_n$. \noindent Let $\beta_n -1 = \frac{\gamma}{n^{1-1/k}}$ and $W= n^{1/(2k)-1} \sum_{i=1}^n X_i$. We obtain \begin{equation} \label{wunderbar} \E[W-W'|W] = - \frac{1}{n^{2 - 1/k}} \psi(W) + \frac{\beta_n}{n^2} W + R(\beta_n,W), \end{equation} where $\psi(x) = \gamma x - \frac{\mu_k}{\beta_n \, (2k-1)!} x^{2k-1}$. As in the proof of Theorem \ref{CWgeneral} we obtain that $R(\beta_n,W)= \mathcal{O}(n^{-2})$. Now we only have to adapt the proof of Theorem \ref{CWgeneral} step by step, applying Lemma \ref{momentsgeneral}, Lemma \ref{genbound} and Theorem \ref{generaldensity2}. \noindent Let $|\beta_n -1| = \mathcal{O}(1/n)$ and $W= n^{1/(2k)-1} \sum_{i=1}^n X_i$.
Now in \eqref{wunderbar}, the term $\frac{1-\beta_n}{n} W$ will be a part of the remainder: \begin{eqnarray*} \E[W-W'|W] & = & \frac{1}{n^{2 - 1/k}} \frac{\mu_k}{\beta_n (2k-1)!} W^{2k-1} + R(\beta_n, W) + \frac{\beta_n}{n^2} W + \frac{1 -\beta_n}{n} W \\ & =: & - \frac{1}{\beta_n \, n^{2 - 1/k}} \psi(W) + \hat{R}(\beta_n, W) \end{eqnarray*} with $\psi(x) = - \frac{\mu_k}{(2k-1)!} x^{2k-1}$. Following the lines of the proof of Theorem \ref{CWgeneral}, we have to bound $\frac{\E|\hat{R}(\beta_n,W)|}{\lambda}$ with $\lambda:= \frac{1}{\beta_n \, n^{2-1/k}}$. By our assumption on $(\beta_n)_n$ we have $$ \lim_{n \to \infty} \frac{1}{\lambda} \frac{(1-\beta_n)}{n} = \lim_{n \to \infty} \beta_n (1- \beta_n) n^{1-1/k} =0. $$ Thus with Theorem \ref{generaldensity2} we obtain convergence in distribution for any $\beta_n$ with $|\beta_n -1| \ll n^{-(1-1/k)}$. Moreover we obtain the Berry-Esseen bound of order $\mathcal{O}(n^{-1/k})$ for any $|\beta_n-1| = \mathcal{O}(n^{-1})$. \noindent Finally we consider $|\beta_n -1| \gg n^{-(1-1/k)}$ and $W= \sqrt{\frac{(1-\beta_n)}{n}} S_n$. A little calculation gives $$ \E[W-W'|W] = \frac{1-\beta_n}{n} W + \frac{\beta_n \, W}{n^2} + \frac{\mu_k}{(2k-1)! n^k (1-\beta_n)^{k-1} \beta_n} W^{2k-1} + R(\beta_n,W) =: -\lambda \psi(W) + \hat{R}(\beta_n, W) $$ with $\psi(x) = -x$ and $\lambda = \frac{1-\beta_n}{n}$. Now we apply Corollary \ref{corsigma2}. With $A := \frac{{\rm const.} \, (1-\beta_n)^{1/2}}{\sqrt{n}}$ we obtain $$ \frac{A^3}{\lambda} \leq \frac{{\rm const.} \, (1-\beta_n)^{1/2}}{\sqrt{n}} \quad \text{and} \quad \frac{\E| \hat{R}(\beta_n,W)|}{\lambda} \leq \frac{{\rm const}}{n^{k-1}(1-\beta_n)^k}. $$ Note that the bound on the right hand side is good for any $|\beta_n -1| \gg n^{-(1-1/k)}$. Finally we have to bound the variance of $\frac{1}{2 \lambda} \, \E[(W-W')^2|W]$.
The leading term is the variance of $$ \frac 1n \sum_{i=1}^n X_i \biggl( m_i(X) - \frac{1}{\beta} G_{\varrho}'(\beta, m_i(X)) \biggr), $$ which is of order $\mathcal{O} \bigl( \frac{\beta_n}{n(1-\beta_n)} \bigr)$. Hence with $|\beta_n -1| \gg n^{-(1-1/k)}$ we get convergence in distribution. Under the additional assumption that $|\beta_n -1| \gg n^{-(1/2-1/(2k))}$ we obtain the Berry-Esseen bound. \end{proof} \begin{proof}[Proof of Theorem \ref{wasserstein}] We apply Theorem \ref{generaldensity2}. For unbounded spin variables $X_i$ we consider $\widehat{P}_{n, \beta}$ and apply Lemma \ref{CWidentitygeneral} to bound $\frac{1}{\lambda} \sqrt{ {\rm Var} ( \E[(W-W')^2|W] )}$ exactly as in the proof of Theorem \ref{CWgeneral}. By Theorem \ref{generaldensity2} it remains to bound $\frac{1}{\lambda} \E|W-W'|^3$. With $\lambda = n^{-2+1/k}$ we have $$ \frac{1}{\lambda} \E|W-W'|^3 = \frac{1}{n^{1- 1/(2k)}} \E|X_I - X_I'|^3 = \frac{1}{n^{1- 1/(2k)}} \E|X_1 - X_1'|^3. $$ Now $ \E|X_1 - X_1'|^3 \leq \E|X_1|^3 + 3 \E|X_1^2 \, X_1'| + 3 \E |X_1 (X_1')^2| + \E |X_1'|^3$. Using H\"older's inequality we obtain $$ \E|X_1^2 \, X_1'| \leq \bigl( \E|X_1|^3 \bigr)^{2/3} \, \bigl( \E|X_1'|^3 \bigr)^{1/3} \leq \max \bigl( \E |X_1|^3, \E |X_1'|^3 \bigr). $$ Hence we have $$ \frac{1}{\lambda} \E|W-W'|^3 \leq \frac{8}{n^{1 - 1/(2k)}} \max \bigl( \E |X_1|^3, \E |X_1'|^3 \bigr). $$ Thus the theorem is proved. \end{proof} \secdef\sct\sect{Examples} It is known that the following distributions $\varrho$ belong to the class ${\rm GHS}$ (see \cite[Theorem 1.2]{Ellis/Monroe/Newman:1976}). The symmetric Bernoulli measure is in ${\rm GHS}$, as first noted in \cite{Ellis:1975}. The family of measures $$ \varrho_a(dx) = a \, \delta_0 + \bigl( (1-a)/2 \bigr) \bigl( \delta_{-1} + \delta_{+1} \bigr) $$ for $0 \leq a \leq 2/3$ is in ${\rm GHS}$, whereas the GHS-inequality fails for $2/3 < a < 1$, see \cite[p.153]{Griffiths/Simon:1973}.
${\rm GHS}$ contains all measures of the form $$ \varrho_V(dx) := \bigl( \int_{\R} \exp \bigl( -V(x) \bigr) \, dx \bigr)^{-1} \, \exp \bigl( -V(x) \bigr)\, dx, $$ where $V$ is even, continuously differentiable, and unbounded above at infinity, and $V'$ is convex on $[0, \infty)$. ${\rm GHS}$ contains all absolutely continuous measures $\varrho \in {\mathcal B}$ with support on $[-a,a]$ for some $0 < a < \infty$ provided $g(x) = d\varrho/dx$ is continuously differentiable and strictly positive on $(-a,a)$ and $g'(x)/g(x)$ is concave on $[0,a)$. Measures like $\varrho(dx) = {\rm const.} \exp \bigl( -a x^4 - b x^2 \bigr) \, dx$ or $\varrho(dx) = {\rm const.} \exp \bigl( - a \cosh x - b x^2 \bigr) \, dx$ with $a >0$ and $b$ real are in ${\rm GHS}$. Both are of physical interest; see \cite{Ellis/Monroe/Newman:1976} and the references therein. \noindent \begin{example}[A Curie--Weiss model with three states] We will now consider the next simplest example of the classical Curie--Weiss model: a model with three states. Observe that this is not the Curie--Weiss--Potts model \cite{Ellis/Wang:1990}, since the latter has a different Hamiltonian. Indeed the Hamiltonian considered in \cite{Ellis/Wang:1990} is of the form $\frac 1n \sum_{i,j} \delta_{x_i, x_j}$. It favours states with many equal spins, whereas in our case the spins also need to have large values. We choose $\varrho$ to be $$ \varrho=\frac 23 \delta_0 + \frac 16 \delta_{-\sqrt 3}+ \frac 16 \delta_{\sqrt 3}. $$ This model seems to be of physical relevance. It is studied in \cite{Thompson:book}. In \cite{Blume/Emery/Griffiths:1971} it was used to analyze the tri-critical point of liquid helium. A little computation shows that $$ \frac {d^3}{ds^3} \phi_\varrho(s)= - 6\,{\displaystyle \frac {{\rm sinh}(s\,\sqrt{3})\,\sqrt{3}\,( {\rm cosh}(s\,\sqrt{3}) - 1)}{12\,{\rm cosh}(s\,\sqrt{3}) + 6\, {\rm cosh}(s\,\sqrt{3})^{2} + {\rm cosh}(s\,\sqrt{3})^{3} + 8}} \le 0 $$ for all $s \ge 0$.
Hence the GHS-inequality \eqref{GHS} is fulfilled (see also \cite[Theorem 1.2]{Ellis/Monroe/Newman:1976}), which implies that there is one critical temperature $\beta_c$ such that $G$ has one minimum for $\beta\le \b_c$ and two minima above $\beta_c$. Since ${\rm Var}_\varrho(X_1)= 2 \cdot \frac 1 6 \cdot 3 =1$ we see that $\beta_c=1$. For $\beta\le \b_c$ the minimum of $G$ is located in zero while for $\beta> 1$ the two minima are symmetric and satisfy $$ s= \frac{\sqrt{3}\sinh(\sqrt{3}\beta s)}{2 + \cosh(\sqrt{3}\beta s\,)}. $$ Now Theorems \ref{CWgeneral} and \ref{CWgendep} tell us that \begin{itemize} \item For $\beta<1$ the rescaled magnetization $S_n/ \sqrt{n}$ satisfies a Central Limit Theorem and the limiting variance is $(1-\beta)^{-1}$. Indeed, $\frac {d^2}{ds^2} \phi_\varrho(0)={\rm Var}_\varrho(X_1)=1$. Hence $\mu_1= \beta-\beta^2$ and $\sigma^2 =\frac 1 {1-\beta}$. Moreover we obtain $$ \sup_{z \in \R} \bigg| P_n \bigl( \frac{S_n}{\sqrt{n}} \leq z \bigr) - \Phi_W(z) \bigg| \leq \frac{C}{\sqrt{n}}. $$ \item For $\beta=\b_c=1$ the rescaled magnetization $S_n/ n^{5/6}$ converges in distribution to $X$ which has the density $f_{3, 6, 1}$. Indeed $\mu_3$ is computed to be $6$. Moreover we obtain $$ \sup_{z \in \R} \bigg| P_n \bigl( \frac{S_n}{n^{5/6}} \leq z \bigr) - \widehat{F}_3(z) \bigg| \leq \frac{C}{n^{1/3}} $$ where the derivative of $\widehat{F}_3$ is the rescaled density $\exp \bigl( - \frac{x^6}{6 \E(W^6)} \bigr)$. \item If $\beta_n$ converges monotonically to $1$ faster than $n^{-2/3}$ then $\frac{S_n}{n^{5/6}}$ converges in distribution to $\widehat{F}_3$, whereas if $\beta_n$ converges monotonically to $1$ slower than $n^{-2/3}$ then $\frac{\sqrt{1-\beta_n}\, S_n}{\sqrt{n}}$ satisfies a Central Limit Theorem.
Finally, if $|1-\beta_n|= \gamma n^{-2/3}$, $\frac{S_n}{n^{5/6}}$ converges in distribution to a random variable whose probability distribution has the mixed Lebesgue-density $$ \exp \biggl( - c_W^{-1} \biggl( \frac{x^6}{120} - \gamma \frac{x^2}{2} \biggr) \biggr) $$ with $c_W = \frac{1}{120} \E(W^6) - \gamma \E(W^2)$. Moreover we have $$ \sup_{z \in \R} \bigg| P_n \biggl(\frac{S_n}{n^{5/6}} \leq z \biggr) - \frac{1}{Z} \int_{- \infty}^z \exp \biggl( - c_W^{-1} \biggl( \frac{x^6}{120} - \gamma \frac{x^2}{2} \biggr) \biggr) \, dx \bigg| \leq \frac{C}{n^{1/3}}. $$ \end{itemize} \end{example} \noindent \begin{example}[A continuous Curie--Weiss model] Last but not least we will treat an example of a continuous Curie--Weiss model. We choose as underlying distribution the uniform distribution on an interval in $\R$. To keep the critical temperature equal to one, we define $$ \frac{d\varrho (x_i)}{d\, x_i}= \frac{1}{2a} \mathbb{I}_{[-a,a]}(x_i) $$ with $a= \sqrt 3$. Then from a general result in \cite[Theorem 2.4]{Ellis/Newman:1978b} (see also \cite[Theorem 1.2]{Ellis/Monroe/Newman:1976}) it follows that $\varrho$ obeys the GHS-inequality \eqref{GHS}. Therefore there exists a critical temperature $\beta_c$, such that for $\b<\b_c$ zero is the unique global minimum of $G$ and is of type 1, while at $\beta_c$ this minimum is of type $k \ge 2$. This $\b_c$ is easily computed to be one. Indeed, $\mu_1 = \b - \b^2 \phi''(0)= \b - \b^2 \mathbb{E}_\varrho(X_1^2)= \b(1-\beta)$, since $\varrho$ is centered and has variance one. Thus $\mu_1$ vanishes at $\b=\b_c=1$. Finally, for $\b> 1$ there are again two minima which are solutions of $$ \frac{\sqrt 3 \b}{\tanh(\sqrt 3 \b x)} = \b x + \frac 1x. $$ Now again by Theorems \ref{CWgeneral} and \ref{CWgendep} \begin{itemize} \item For $\beta<1$ the rescaled magnetization $S_n/\sqrt{n}$ obeys a Central Limit Theorem and the limiting variance is $(1-\beta)^{-1}$.
Indeed, since $\mathbb{E}_\varrho(X_1^2)=1$, $\mu_1= \beta-\beta^2$ and $\sigma^2 =\frac 1 {1-\beta}$. \item For $\beta=\b_c=1$ the minimum is of type $k=2$, and the rescaled magnetization $S_n/n^{3/4}$ converges in distribution to $X$ which has the density $f_{2, 6/5, 1}$. Indeed $\mu_2$ is computed to be $$ -\E_\varrho(X_1^4)+3 \bigl( \mathbb{E}_\varrho(X_1^2) \bigr)^2= -\frac 9 5 +3 = \frac 65. $$ Moreover we obtain $$ \sup_{z \in \R} \bigg| P_n \biggl( \frac{S_n}{n^{3/4}} \leq z \biggr) - \widehat{F}_2(z) \bigg| \leq \frac{C}{n^{1/2}} $$ where the derivative of $\widehat{F}_2$ is the rescaled density $\exp \bigl( - \frac{x^4}{4 \E(W^4)} \bigr)$. \item If $\beta_n$ converges monotonically to $1$ faster than $n^{-1/2}$ then $\frac{S_n}{n^{3/4}}$ converges in distribution to $\widehat{F}_2$, whereas if $\beta_n$ converges monotonically to $1$ slower than $n^{-1/2}$ then $\frac{\sqrt{1-\beta_n}\, S_n}{\sqrt{n}}$ satisfies a Central Limit Theorem. Finally, if $|1-\beta_n|= \gamma n^{-1/2}$, $\frac{S_n}{n^{3/4}}$ converges in distribution to the mixed density $$ \exp \biggl( - c_W^{-1} \biggl( \frac{6}{5} \frac{x^4}{4!} - \gamma \frac{x^2}{2} \biggr) \biggr) $$ with $c_W = \frac{6}{5 (4!)} \E(W^4) - \gamma \E(W^2)$. Moreover we have $$ \sup_{z \in \R} \bigg| P_n \biggl(\frac{S_n}{n^{3/4}} \leq z \biggr) - \frac{1}{Z} \int_{- \infty}^z \exp \biggl( - c_W^{-1} \biggl( \frac{6}{5} \frac{x^4}{4!} - \gamma \frac{x^2}{2} \biggr) \biggr) \, dx \bigg| \leq \frac{C}{n^{1/2}}. $$ \end{itemize} Note that there is some interesting change in the limiting behaviour of all of these models at criticality. While for $\beta<1$ all of the models have the same rate of convergence in the Central Limit Theorem regime, at criticality the limiting distribution function as well as the distributions depending on some moments of $W$ become characteristic of the underlying distribution $\varrho$. Moreover the rate of convergence differs at criticality (for $k \geq 3$).
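The moment and cumulant values quoted in the two examples above can be verified with exact rational arithmetic. The following sketch is ours (the helper names are not from the text; the cumulant formulas for a centered symmetric law are the standard ones):

```python
from fractions import Fraction as F

# Three-state model: rho = (2/3) delta_0 + (1/6) delta_{-sqrt(3)} + (1/6) delta_{+sqrt(3)}
def m3(p):  # E[X^p]; odd moments vanish by symmetry, X^2 = 3 with probability 1/3
    return 0 if p % 2 else 2 * F(1, 6) * F(3) ** (p // 2)

assert m3(2) == 1                                          # Var(X_1) = 1, so beta_c = 1
assert m3(4) - 3 * m3(2) ** 2 == 0                         # 4th cumulant vanishes
assert m3(6) - 15 * m3(4) * m3(2) + 30 * m3(2) ** 3 == -6  # 6th cumulant has modulus 6

# Uniform distribution on [-a, a] with a^2 = 3: E[X^p] = a^p/(p+1) for even p
def mu(p):
    return 0 if p % 2 else F(3) ** (p // 2) / (p + 1)

assert mu(2) == 1                                          # variance 1, so beta_c = 1
assert mu(4) == F(9, 5)                                    # E[X_1^4] = 9/5
assert -mu(4) + 3 * mu(2) ** 2 == F(6, 5)                  # -E[X_1^4] + 3 E[X_1^2]^2 = 6/5
```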
\end{example} \secdef\sct\sect{Appendix} \begin{proof}[Proof of Lemma \ref{genbound}] Consider a probability density of the form \begin{equation} \label{proto} p(x) := p_k(x) := b_k \exp \bigl( - a_k x^{2k} \bigr) \end{equation} with $b_k^{-1} = \int_{\R} \exp \bigl(- a_k x^{2k} \bigr) \, dx$. Clearly $p$ satisfies Assumption (D). First we prove that the solutions $f_z$ of the Stein equation which characterizes the distribution with density \eqref{proto} satisfy Assumption (B2). Let $f_z$ be the solution of $$ f_z'(x) + \psi(x) f_z(x) = 1_{\{x \leq z\}}(x) - P(z). $$ Here $\psi(x) = - 2k \, a_k \, x^{2k-1}$. We have \begin{equation} \label{thesolution} f_z(x) = \left\{ \begin{array}{ll} (1-P(z)) \, P(x) \exp( a_k x^{2k}) b_k^{-1} & \mbox{for } x \leq z, \\ P(z) \, (1-P(x)) \exp( a_k x^{2k}) b_k^{-1} & \mbox{for } x \geq z \\ \end{array} \right. \end{equation} with $P(z) := \int_{-\infty}^z p(x) \, dx$. Note that $f_z(x) = f_{-z}(-x)$, so we only need to consider the case $z \geq 0$. For $x >0$ we obtain \begin{equation} \label{ungl1} 1 - P(x) \leq \frac{b_k}{2k \, a_k x^{2k-1}} \exp \bigl( - a_k x^{2k} \bigr), \end{equation} whereas for $x<0$ we have \begin{equation} \label{ungl1b} P(x) \leq \frac{b_k}{2k \, a_k |x|^{2k-1}} \exp \bigl( - a_k x^{2k} \bigr). \end{equation} By partial integration we have $$ \int_x^{\infty} \frac{(2k-1)}{2k \, a_k} t^{-2k} \, \exp \bigl( - a_k t^{2k} \bigr) \, dt = - \frac{1}{2k \, a_k \, t^{2k-1}} \exp \bigl( - a_k t^{2k} \bigr) \bigg|_{x}^{\infty} - \int_{x}^{\infty} \exp \bigl( - a_k t^{2k} \bigr)\, dt. $$ Hence for any $x >0$ \begin{equation} \label{ungl2} b_k \, \biggl( \frac{x}{2k \, a_k x^{2k} + 2k-1} \biggr) \exp \bigl( - a_k x^{2k} \bigr) \leq 1 - P(x).
\end{equation} With \eqref{ungl1} we get for $x >0$ $$ \frac{d}{dx} \biggl( \exp \bigl( a_k x^{2k} \bigr) \int_{x}^{\infty} \exp \bigl( - a_k t^{2k} \bigr) \, dt \biggr) = -1 + 2k \, a_k x^{2k-1} \exp \bigl(a_k x^{2k} \bigr) \, \int_x^{\infty} \exp \bigl( - a_k t^{2k} \bigr) \, dt < 0. $$ So $ \exp \bigl( a_k x^{2k} \bigr) \int_{x}^{\infty} \exp \bigl( - a_k t^{2k} \bigr) \, dt$ attains its maximum at $x=0$ and therefore $$ \exp \bigl( a_k x^{2k} \bigr) \, b_k \, \int_x^{\infty} \exp \bigl( - a_k t^{2k} \bigr) \, dt \leq \frac 12. $$ Summarizing we obtain for $x>0$ \begin{equation} \label{ungl3} 1-P(x) \leq \min \biggl( \frac 12, \frac{b_k}{2k \, a_k \, x^{2k-1}} \biggr) \exp \bigl( - a_k x^{2k} \bigr). \end{equation} With \eqref{ungl1b} we get for $x<0$ $$ \frac{d}{dx} \biggl( \exp \bigl( a_k x^{2k} \bigr) \int_{-\infty}^{x} \exp \bigl( - a_k t^{2k} \bigr) \, dt \biggr) = 1 + 2k \, a_k x^{2k-1} \exp \bigl(a_k x^{2k} \bigr) \, \int_{-\infty}^x \exp \bigl( - a_k t^{2k} \bigr) \, dt > 0. $$ So $ \exp \bigl( a_k x^{2k} \bigr) \int_{-\infty}^{x} \exp \bigl( - a_k t^{2k} \bigr) \, dt$ attains its maximum at $x=0$ and therefore $$ \exp \bigl( a_k x^{2k} \bigr) \, b_k \, \int_{-\infty}^{x} \exp \bigl( - a_k t^{2k} \bigr) \, dt \leq \frac 12. $$ Summarizing we obtain for $x<0$ \begin{equation} \label{ungl3b} P(x) \leq \min \biggl( \frac 12, \frac{b_k}{2k \, a_k \, |x|^{2k-1}} \biggr) \exp \bigl( - a_k x^{2k} \bigr). \end{equation} Applying \eqref{ungl3} and \eqref{ungl3b} gives $0 < f_z(x) \leq \frac{1}{2 \, b_k}$ for all $x$. Note that for $x<0$ we only have to consider the first case of \eqref{thesolution}, since $z \geq 0$. The constant $\frac{1}{2 \, b_k}$ is not optimal. Following the proof of Lemma 2.2 in \cite{ChenShao:2005} or alternatively of Lemma 2 in \cite[Lecture II]{Stein:1986} would lead to optimal constants. We omit this. 
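For a concrete choice of $k$ and $a_k$, the tail estimates above can be sanity-checked numerically. The following sketch is ours (plain Simpson quadrature on a truncated line; the truncation at $|x|=10$ is harmless for these parameters): it checks that $1-P(x)$ lies between the lower bound \eqref{ungl2} and the upper bound \eqref{ungl3}.

```python
import math

def check_tail_bounds(k=2, a=0.25, xs=(0.3, 0.8, 1.5)):
    dens = lambda x: math.exp(-a * x ** (2 * k))
    def integrate(f, lo, hi, n=20000):        # composite Simpson rule, n even
        h = (hi - lo) / n
        s = f(lo) + f(hi) + sum((4 if i % 2 else 2) * f(lo + i * h) for i in range(1, n))
        return s * h / 3
    b = 1 / integrate(dens, -10.0, 10.0)      # normalizing constant b_k
    for x in xs:
        tail = b * integrate(dens, x, 10.0)   # 1 - P(x)
        upper = min(0.5, b / (2 * k * a * x ** (2 * k - 1))) * dens(x)
        lower = b * x / (2 * k * a * x ** (2 * k) + 2 * k - 1) * dens(x)
        assert lower <= tail <= upper
    return b

check_tail_bounds()
```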
It follows from \eqref{thesolution} that \begin{equation} \label{thederivative} f_z'(x) = \left\{ \begin{array}{ll} (1-P(z)) \biggl[ 1 + x^{2k-1} \, 2k \, a_k \, P(x) \exp( a_k x^{2k}) b_k^{-1} \biggr] & \mbox{for } x \leq z, \\ P(z) \biggl[ (1-P(x)) \, 2k \, a_k \, x^{2k-1} \, \exp( a_k x^{2k}) b_k^{-1} -1 \biggr] & \mbox{for } x \geq z. \\ \end{array} \right. \end{equation} With \eqref{ungl1} we obtain for $0< x \leq z$ that $$ f_z'(x) \leq (1-P(z)) \biggl[z^{2k-1} \, 2k \, a_k \, P(z) \exp( a_k z^{2k}) b_k^{-1} \biggr] +1 \leq 2. $$ The same argument for $x \geq z$ leads to $|f_z'(x)| \leq 2$. For $x<0$ we use the first half of \eqref{thesolution} and apply \eqref{ungl1b} to obtain $|f_z'(x)| \leq 2$. Actually this bound will be improved later. Next we calculate the derivative of $-\psi(x) \, f_z(x)$: \begin{equation} \label{theproduct} (-\psi(x) f_z(x))' = \left\{ \begin{array}{ll} \frac{(1-P(z))}{b_k} \biggl[ P(x)e^{a_k x^{2k}} \biggl(2k(2k-1)a_k x^{2k-2}+(2k)^2 a_k^2 x^{4k-2} \biggr) + 2k a_k x^{2k-1} b_k \biggr], & x \leq z, \\ \frac{P(z)}{b_k} \biggl[ (1-P(x)) e^{a_k x^{2k}} \biggl(2k(2k-1) a_k x^{2k-2} +(2k)^2 a_k^2 x^{4k-2} \biggr) - 2k a_k x^{2k-1} b_k \biggr], & x \geq z. \\ \end{array} \right. \end{equation} With \eqref{ungl2} we obtain $(-\psi(x) f_z(x))' \geq 0$, so $-\psi(x) f_z(x)$ is an increasing function of $x$ (remark that for $x<0$ we only have to consider the first half of \eqref{thesolution}). Moreover with \eqref{ungl1}, \eqref{ungl1b} and \eqref{ungl2} we obtain that \begin{equation} \label{ungl4} \lim_{x \to -\infty} 2k \, a_k \, x^{2k-1} f_z(x) = P(z)-1 \quad \text{and} \quad \lim_{x \to \infty} 2k \, a_k \, x^{2k-1} f_z(x) = P(z). \end{equation} Hence we have $|2k \, a_k \, x^{2k-1} f_z(x)| \leq 1$ and $|2k \, a_k \bigl(x^{2k-1} f_z(x)- u^{2k-1} f_z(u)\bigr)| \leq 1$ for any $x$ and $u$. From \eqref{ungl1} it follows that $f_z'(x) >0$ for all $x<z$ and $f_z'(x) < 0$ for $x>z$. 
With Stein's identity $f_z'(x) = - \psi(x) f_z(x) + 1_{\{x \leq z\}} -P(z)$ and \eqref{ungl4} we have $$ 0 < f_z'(x) \leq -\psi(z) f_z(z) + 1 - P(z) <1 \quad \text{for} \quad x < z $$ and $$ -1 < -\psi(z) f_z(z) - P(z) \leq f_z'(x) < 0 \quad \text{for} \quad x > z. $$ Hence, for any $x$ and $y$, we obtain $$ |f_z'(x)| \leq 1 \quad \text{and} \quad |f_z'(x) - f_z'(y)| \leq \max \bigl( 1, -\psi(z) f_z(z) + 1 - P(z) - (-\psi(z) f_z(z) - P(z)) \bigr)=1. $$ Next we bound $(-\psi(x) f_z(x))'$. We already know that $(-\psi(x) f_z(x))'>0$. Again we apply \eqref{ungl1} and \eqref{ungl1b} to see that $$ (-\psi(x) f_z(x))' \leq \frac{2k-1}{|x|} $$ for $x \geq z >0$ and all $x \leq 0$. For $0<x \leq z$ this latter bound holds as well, as can be seen by applying the bound for $x \geq z$ (more precisely, the bound for $(-\psi(x) f_z(x))' \, \frac{b_k}{P(z)}$) with $-x$ in place of $x$ to the formula for $(-\psi(x) f_z(x))'$ on $x \leq z$. For some constant $c$ we can bound $(-\psi(x) f_z(x))'$ by $c$ for all $|x| \geq \frac{2k-1}{c}$. Moreover, on $[-\frac{2k-1}{c}, \frac{2k-1}{c}]$ the continuous function $(-\psi(x) f_z(x))'$ is bounded by some constant $d$, hence we have proved $$ |(-\psi(x) f_z(x))' | \leq \max(c,d). $$ The problem of finding the optimal constant, depending on $k$, is omitted. Summarizing, Assumption (B2) is fulfilled for $p$ with $d_2=d_3=1$ and some constants $d_1$ and $d_4$. Next we consider an absolutely continuous function $h : \R \to \R$. Let $f_h$ be the solution of the Stein equation \eqref{steinid2}, that is $$ f_h(x) = \frac{1}{p(x)} \int_{-\infty}^x (h(t) - Ph) \, p(t) \, dt = - \frac{1}{p(x)} \int_{x}^{\infty} (h(t) - Ph) \, p(t) \, dt. $$ We adapt the proof of \cite[Lemma 2.3]{ChenShao:2005}: without loss of generality we assume that $h(0)=0$ and put $e_0 := \sup_{x} |h(x) -Ph|$ and $e_1:= \sup_x |h'(x)|$. From the definition of $f_h$ it follows that $|f_h(x)| \leq e_0 \frac{1}{2 b_k}$.
An alternative bound is $c_1 \, e_1$ with some constant $c_1$ depending on $\E|Z|$, where $Z$ denotes a random variable distributed according to $p$. With \eqref{steinid2} and \eqref{ungl2}, for $x \geq 0$, $$ |f_h'(x)| \leq |h(x) -Ph| - \psi(x) e^{a_k x^{2k}} \int_x^{\infty} |h(t) - Ph| e^{-a_k t^{2k}} \, dt \leq 2 e_0. $$ An alternative bound is $c_2 \, e_1$ with some constant $c_2$ depending on the $(2k-2)$-th moment of $p$. This follows by using Stein's identity \eqref{steinid2} to obtain $$ f_h'(x) = - e^{a_k x^{2k}} \int_{x}^{\infty} (h'(t) - \psi'(t) \, f_h(t)) e^{-a_k t^{2k}} \, dt. $$ The details are omitted. To bound the second derivative $f_h''$, we differentiate \eqref{steinid2} and have $$ f_h''(x) = \bigl(\psi^2(x) - \psi'(x) \bigr) f_h(x) - \psi(x) \bigl( h(x) -Ph \bigr) + h'(x). $$ Similarly to \cite[(8.8), (8.9)]{ChenShao:2005} we obtain $$ h(x) - Ph = \int_{-\infty}^x h'(t) P(t) \, dt - \int_{x}^{\infty} h'(t) (1-P(t)) \, dt. $$ It follows that $$ f_h(x) = -\frac{1}{b_k} e^{a_k x^{2k}} (1-P(x)) \, \int_{-\infty}^x h'(t) P(t) \, dt - \frac{1}{b_k} e^{a_k x^{2k}} P(x) \, \int_{x}^{\infty} h'(t) (1-P(t))\, dt.
$$ Now we apply the fact that the quantity in \eqref{theproduct} is non-negative to obtain \begin{eqnarray*} |f_h''(x)| & \leq & |h'(x)| + \big| \bigl(\psi^2(x) - \psi'(x) \bigr) f_h(x) - \psi(x) \bigl( h(x) -Ph \bigr) \big| \\ & \leq & |h'(x)| + \biggl| \biggl( - \psi(x) - \frac{1}{b_k} \bigl( \psi^2(x) - \psi'(x) \bigr) e^{a_k x^{2k}} (1-P(x)) \biggr) \, \int_{-\infty}^x h'(t) P(t) \, dt \biggr| \\ & & \hspace{0.5cm} + \biggl| \biggl( \psi(x) - \frac{1}{b_k} \bigl( \psi^2(x) - \psi'(x) \bigr) e^{a_k x^{2k}} \, P(x) \biggr) \, \int_x^{\infty} h'(t) (1-P(t)) \, dt \biggr| \\ & \leq & |h'(x)| + e_1 \biggl( \psi(x) + \frac{1}{b_k} \bigl( \psi^2(x) - \psi'(x) \bigr) e^{a_k x^{2k}} (1-P(x)) \biggr) \, \int_{-\infty}^x P(t) \, dt \\ & & \hspace{0.5cm} + e_1 \biggl( - \psi(x) + \frac{1}{b_k} \bigl( \psi^2(x) - \psi'(x) \bigr) e^{a_k x^{2k}} P(x) \biggr) \, \int_{x}^{\infty} (1-P(t)) \, dt. \end{eqnarray*} Moreover we know that the quantity in \eqref{theproduct} can be bounded by $\frac{2k-1}{|x|}$, hence $$ |f_h''(x)| \leq e_1 + e_1 \frac{2 b_k \, (2k-1)}{|x|} \biggl( \int_{-\infty}^x P(t) \, dt + \int_{x}^{\infty} (1-P(t)) \, dt \biggr). $$ Now we bound $$ \bigl| \int_{-\infty}^x P(t) \, dt + \int_{x}^{\infty} (1-P(t)) \, dt \bigr| = \bigl| xP(x) - x(1-P(x)) + 2 \int_x^{\infty} t p(t) \, dt \bigr| \leq 2|x| + 2 \E|Z|, $$ where $Z$ is distributed according to $p$. Summarizing, we have $|f_h''(x)| \leq c_3 \sup_{x} |h'(x)|$ for some constant $c_3$, using the fact that $f_h$ and therefore $f_h'$ and $f_h''$ are continuous. Hence $f_h$ satisfies Assumption (B1). \end{proof} \begin{proof}[Sketch of the proof of Remark \ref{gibbsrem}] Now let $p(x) = b_k \exp \bigl( - a_k V(x) \bigr)$, where $V$ satisfies the assumptions listed in Remark \ref{gibbsrem}. To prove that $f_z$ (with respect to $p$) satisfies Assumption (B2), we adapt \eqref{ungl2} as well as \eqref{ungl3} and \eqref{ungl3b}, using the assumptions on $V$.
We obtain for $x >0$ \begin{equation*} b_k \, \biggl( \frac{V'(x)}{V''(x) + a_k V'(x)^2} \biggr) \exp \bigl( - a_k \, V(x) \bigr) \leq 1 - P(x) \end{equation*} and for $x >0$ \begin{equation*} 1-P(x) \leq \min \biggl( \frac 12, \frac{b_k}{a_k \, V'(x)} \biggr) \exp \bigl( - a_k \, V(x) \bigr) \end{equation*} and for $x < 0$ \begin{equation*} P(x) \leq \min \biggl( \frac 12, \frac{b_k}{a_k \, |V'(x)|} \biggr) \exp \bigl( - a_k \, V(x) \bigr). \end{equation*} Estimating $(-\psi(x) f_z(x))'$ gives $$ (-\psi(x) f_z(x))' \leq {\rm const.} \, \frac{V''(x)}{|V'(x)|}. $$ By our assumptions on $V$, the right hand side can be bounded for $x \geq d$ with some $d \in \R_+$, and since $(-\psi(x) f_z(x))'$ is continuous, it is bounded everywhere. \end{proof} {\em Acknowledgement.} During the preparation of our manuscript we became aware of a preprint of S. Chatterjee and Q.-M. Shao about Stein's method with applications to the Curie-Weiss model. As far as we understand, the authors there give an alternative proof of Theorems 1.2 and 1.3. \bibliographystyle {amsplain} \end{document}
\begin{document} \title{On the growth of powers of operators with spectrum contained in Cantor sets} \author{AGRAFEUIL Cyril} \date{} \maketitle \begin{abstract} For $\xi \in \big( 0, \frac{1}{2} \big)$, we denote by $E_{\xi}$ the perfect symmetric set associated to $\xi$, that is $$ E_{\xi} = \Big\{ \exp \big( 2i \pi (1-\xi) \displaystyle \sum_{n = 1}^{+\infty} \epsilon_{n} \xi^{n-1} \big) : \, \epsilon_{n} = 0 \textrm{ or } 1 \quad (n \geq 1) \Big\}. $$ Let $s$ be a nonnegative real number, and $T$ be an invertible bounded operator on a Banach space with spectrum included in $E_{\xi}$. We show that if \begin{eqnarray*} & & \big\| T^{n} \big\| = O \big( n^{s} \big), \,n \rightarrow +\infty \\ & \textrm{and} & \big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty \textrm{ for some } \beta < \frac{\log{\frac{1}{\xi}} - \log{2}}{2\log{\frac{1}{\xi}} - \log{2}}, \end{eqnarray*} then for every $\varepsilon > 0$, $T$ satisfies the stronger property $$ \big\| T^{-n} \big\| = O \big( n^{s+\frac{1}{2}+\varepsilon} \big), \, n \rightarrow +\infty. $$ This result is a particular case of a more general result concerning operators with spectrum satisfying some geometrical conditions. \end{abstract} \section{Introduction} We denote by $\mathbb{T}$ the unit circle and by $\mathbb{D}$ the open unit disk. We shall say that a closed subset $E$ of $\mathbb{T}$ is a $K$-set if there exists a positive constant $c$ such that for any arc $L$ of $\mathbb{T}$, $$ \sup_{z \in L} d(z,E) \geq c |L|, \eqno (K) $$ where $|L|$ denotes the length of the arc $L$ and $d(z,E)$ the distance between $z$ and $E$. Let $E$ be a $K$-set. We set $$ \delta(E) = \sup \Big\{ \delta \geq 0 : \, \int_{0}^{2\pi} \frac{1}{d(e^{it},E)^{\delta}} \mathrm{d}t < +\infty \Big\}. $$ We have $\displaystyle \delta(E) \geq \frac{\log{\frac{1}{1-c}}}{\log{\frac{2}{1-c}}}$ (see \cite{Dyn} section 5, proof of lemma 2 and corollary). E. M.
Dyn'kin showed in \cite{Dyn} that condition $(K)$ characterizes the interpolating sets for $\Lambda_{s}^{+}(\mathbb{T})$, $s>0$ (see section 2 for the definition of $\Lambda_{s}^{+}(\mathbb{T})$). Let $s$ be a nonnegative real number, and let $T$ be an invertible operator on a Banach space. We show (theorem \ref{operateurs}) that if the spectrum of $T$ is included in $E$ and if $T$ satisfies \begin{eqnarray*} & & \big\| T^{n} \big\| = O \big( n^{s} \big), \,n \rightarrow +\infty \\ & \textrm{and} & \big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty \textrm{ for some } \beta < \frac{\delta(E)}{1+\delta(E)}, \end{eqnarray*} then for every $\varepsilon>0$, $T$ also satisfies the stronger property \begin{eqnarray} \label{intro1} \big\| T^{-n} \big\| = O \big( n^{s+\frac{1}{2}+\varepsilon} \big), \, n \rightarrow +\infty. \end{eqnarray} For $\displaystyle \xi \in \Big( 0, \frac{1}{2} \Big)$, we denote by $E_{\xi}$ the perfect symmetric set associated to $\xi$, that is $$ E_{\xi} = \Big\{ \exp \big( 2i \pi (1 - \xi) \displaystyle \sum_{n = 1}^{+\infty} \epsilon_{n} \xi^{n-1} \big) : \, \epsilon_{n} = 0 \textrm{ or } 1 \quad (n \geq 1) \Big\}. $$ We set $\displaystyle b(\xi) = \frac{\log{\frac{1}{\xi}} - \log{2}}{2\log{\frac{1}{\xi}} - \log{2}}$. We obtain (as a consequence of theorem \ref{operateurs}) that if the spectrum of $T$ is included in $E_{\xi}$, $\big\| T^{n} \big\| = O \big( n^{s} \big), \, n \rightarrow +\infty$ and $\big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty$ for some $\beta < b(\xi)$, then $T$ satisfies (\ref{intro1}). Notice that J.
Esterle showed in \cite{Est2} that if $T$ is a contraction on a Banach space (respectively on a Hilbert space) with spectrum included in $\displaystyle E_{\frac{1}{q}}$ (respectively included in $E_{\xi}$) such that $\big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty$ for some $\displaystyle \beta < b \big( \frac{1}{q} \big)$ (respectively $\beta < b(\xi)$), then $\displaystyle \sup_{n \geq 0} \big\| T^{-n} \big\| < +\infty$ (respectively $T$ is an isometry). Here $q$ is an integer greater than or equal to $3$. \\ \section{Growth of powers of operators} Let $p$ be a non-negative integer. We denote by $\mathcal{C}^{p}(\mathbb{T})$ the space of $p$ times continuously differentiable functions on $\mathbb{T}$. We set $$ \textrm{{\LARGE $a$}}^{p}(\mathbb{D}) = \Big\{ f \in \mathcal{C}^{p}(\mathbb{T}) : \, \widehat{f}(n) = 0 \quad (n<0) \Big\}, $$ $\mathcal{C}^{\infty}(\mathbb{T})=\bigcap_{p\geq 0} \mathcal{C}^{p}(\mathbb{T})$ and $\textrm{{\LARGE $a$}}^{\infty}(\mathbb{D})= \bigcap_{p\geq 0} \textrm{{\LARGE $a$}}^{p}(\mathbb{D})$. \noindent Let $s$ be a nonnegative real number; we denote by $[s]$ the nonnegative integer such that $[s] \leq s < [s]+1$. We define the Banach algebra $$ \Lambda_{s}(\mathbb{T}) = \Big\{ f \in \mathcal{C}^{[s]}(\mathbb{T}) : \, \sup_{z,z' \in \mathbb{T}} \frac{\big| f^{([s])}(z) - f^{([s])}(z') \big|}{|z-z'|^{s-[s]}}< +\infty \Big\}, $$ equipped with the norm $\displaystyle \big\| f \big\|_{\Lambda_{s}} = \big\| f \big\|_{\mathcal{C}^{[s]}(\mathbb{T})} + \sup_{z,z' \in \mathbb{T}} \frac{\big| f^{([s])}(z) - f^{([s])}(z') \big|}{|z-z'|^{s-[s]}}$. We also define the subalgebra $$ \lambda_{s}(\mathbb{T}) = \Big\{ f \in \mathcal{C}^{[s]}(\mathbb{T}) : \, \big| f^{([s])}(z) - f^{([s])}(z') \big| = o \big( |z-z'|^{s-[s]} \big), \, |z-z'| \rightarrow 0 \Big\}, $$ which we equip with the same norm.
We also set \begin{eqnarray*} \Lambda_{s}^{+}(\mathbb{T}) & = & \Big\{ f \in \Lambda_{s}(\mathbb{T}) : \, \widehat{f}(n) = 0 \quad (n<0) \Big\} \\ \textrm{and } \lambda_{s}^{+}(\mathbb{T}) & = & \Big\{ f \in \lambda_{s}(\mathbb{T}) : \, \widehat{f}(n) = 0 \quad (n<0) \Big\}. \end{eqnarray*} We remark that if $s$ is an integer, $\Lambda_{s}(\mathbb{T}) = \lambda_{s}(\mathbb{T}) = \mathcal{C}^{s}(\mathbb{T})$ and so $\Lambda_{s}^{+}(\mathbb{T}) = \lambda_{s}^{+}(\mathbb{T}) = \textrm{{\LARGE $a$}}^{s}(\mathbb{D})$. We define $$ N_{s}(E) = \big\{ f \in \Lambda_{s}(\mathbb{T}) : \, f_{|_{E}} = \ldots = f_{|_{E}}^{([s])} = 0 \big\}, $$ and set $N_{s}^{+}(E) = N_{s}(E) \cap \Lambda_{s}^{+}(\mathbb{T})$. \begin{lemme} \label{bernstein} Let $s$ be a nonnegative real number. Then for all $\varepsilon > 0$, we have the following continuous embedding $$ \Lambda_{s+\frac{1}{2}+\varepsilon}(\mathbb{T}) \hookrightarrow A_{s}(\mathbb{T}). $$ \end{lemme} \begin{proof} For $s=0$, this is a result of Bernstein (see \cite{Kaha1}, p.13). The general case is obtained by the same arguments. Let $\varepsilon > 0$, and set $\tilde{s}=s+\frac{1}{2}+\varepsilon$. Let $f \in \Lambda_{\tilde{s}}(\mathbb{T})$. For $h>0$, define $$ P(h) = \int_{0}^{2\pi} \big| f^{([\tilde{s}])}(e^{i(t-h)}) - f^{([\tilde{s}])}(e^{i(t+h)}) \big|^{2} \mathrm{d} t. $$ It follows from Parseval's identity that \begin{eqnarray} \label{parseval} P(h) = 8 \pi \sum_{n=-\infty}^{+\infty} \big| \widehat{f^{([\tilde{s}])}}(n) \big|^{2} \sin^{2}{(nh)}. \end{eqnarray} Let $j_{0}$ be the smallest integer such that $[\tilde{s}] < 2^{j_{0}}$ and let $j \geq j_{0}$.
It follows from the relation $\widehat{f^{([\tilde{s}])}}(n) = \Big( \prod\limits_{k=1}^{[\tilde{s}]} (n+k) \Big) \widehat{f}(n+[\tilde{s}]) \, (n \in \mathbb{Z})$ and from (\ref{parseval}) that there exists a constant $C_{1} > 0$ independent of $f$ such that \begin{eqnarray} \label{bernstein1} P(h) & \geq & \frac{4}{C_{1}^{2}} \sum_{|n|=2^{j}}^{2^{j+1}-1} \big| \widehat{f}(n+[\tilde{s}]) \big|^{2} (1+|n|)^{2[\tilde{s}]} \sin^{2}{(nh)}. \end{eqnarray} Using the Cauchy-Schwarz inequality, we have \begin{eqnarray} \label{bernstein2} \sum_{|n|=2^{j}}^{2^{j+1}-1} \big| \widehat{f}(n+[\tilde{s}]) \big| (1+|n|)^{s} \leq \Big( \sum_{|n|=2^{j}}^{2^{j+1}-1} \big| \widehat{f}(n+[\tilde{s}]) \big|^{2} (1+|n|)^{2[\tilde{s}]} \Big)^{\frac{1}{2}} \Big( \sum_{n=2^{j}}^{2^{j+1}-1} (1+|n|)^{2s-2[\tilde{s}]} \Big)^{\frac{1}{2}}. \end{eqnarray} Set $\displaystyle h=\frac{\pi}{3 \cdot 2^{j}}$. For all integers $n$ such that $2^{j} \leq |n| \leq 2^{j+1}-1$, we have $\displaystyle \frac{\pi}{3} \leq |nh| \leq \frac{2\pi}{3}$, and so $\displaystyle \sin^{2}{(nh)} \geq \frac{1}{4}$. So, we deduce from (\ref{bernstein1}) that $$ \Big( \sum_{|n|=2^{j}}^{2^{j+1}-1} \big| \widehat{f}(n+[\tilde{s}]) \big|^{2} (1+|n|)^{2[\tilde{s}]} \Big)^{\frac{1}{2}} \leq C_{1} P \big( \frac{\pi}{3 \cdot 2^{j}} \big)^{\frac{1}{2}}. $$ Then, as $f \in \Lambda_{\tilde{s}}(\mathbb{T})$, we have $$ P \big( \frac{\pi}{3 \cdot 2^{j}} \big)^{\frac{1}{2}} \leq (2 \pi)^{\frac{1}{2}} \big\| f \big\|_{\Lambda_{\tilde{s}}} \big( \frac{2\pi}{3 \cdot 2^{j}} \big)^{\tilde{s}-[\tilde{s}]}, $$ so that \begin{eqnarray} \label{bernstein3} \Big( \sum_{|n|=2^{j}}^{2^{j+1}-1} \big| \widehat{f}(n+[\tilde{s}]) \big|^{2} (1+|n|)^{2[\tilde{s}]} \Big)^{\frac{1}{2}} \leq C_{1} (2 \pi)^{\frac{1}{2}} \big\| f \big\|_{\Lambda_{\tilde{s}}} \big( \frac{2\pi}{3 \cdot 2^{j}} \big)^{\tilde{s}-[\tilde{s}]}.
\end{eqnarray} Furthermore, there exists a constant $C_2>0$ such that \begin{eqnarray} \label{bernstein4} \Big( \sum_{|n|=2^{j}}^{2^{j+1}-1} (1+|n|)^{2s-2[\tilde{s}]} \Big)^{\frac{1}{2}} \leq C_2 2^{j \big( s-[\tilde{s}]+\frac{1}{2} \big)}. \end{eqnarray} Finally we deduce from (\ref{bernstein2}) and the inequalities (\ref{bernstein3}) and (\ref{bernstein4}) that there exists a constant $C_{3} > 0$ independent of $f$ such that for all $j \geq j_{0}$, $$ \sum_{|n|=2^{j}}^{2^{j+1}-1} \big| \widehat{f}(n+[\tilde{s}]) \big| (1+|n|)^{s} \leq 2^{j ( s-\tilde{s}+\frac{1}{2} )} C_{3} \big\| f \big\|_{\Lambda_{\tilde{s}}} = 2^{-\varepsilon j} C_{3} \big\| f \big\|_{\Lambda_{\tilde{s}}}. $$ Summing these inequalities over $j \geq j_{0}$, we get $$ \sum_{|n| \geq 2^{j_{0}}} \big| \widehat{f}(n+[\tilde{s}]) \big| (1+|n|)^{s} \leq \frac{C_{3}}{1-2^{-\varepsilon}} \big\| f \big\|_{\Lambda_{\tilde{s}}}. $$ On the other hand, we have $\big| \widehat{f}(n) \big| \leq \big\| f \big\|_{\Lambda_{\tilde{s}}}$ for every $n \in \mathbb{Z}$. So, since $j_{0}$ is independent of $f$, there exists a constant $K > 0$ (independent of $f$) such that $$ \big\| f \big\|_{s} \leq K \big\| f \big\|_{\Lambda_{\tilde{s}}}. $$ \end{proof} Before giving the main theorem of the paper, we need the following lemma. \begin{lemme} \label{sansfacteur} Let $E$ be a closed subset of $\mathbb{T}$. We assume that there exists $\delta > 0$ for which $\displaystyle \int_{0}^{2\pi} \frac{1}{d(e^{it},E)^{\delta}} \mathrm{d}t < +\infty$.
Let $\displaystyle \beta < \frac{\delta}{1+\delta}$ and let $T$ be an invertible operator on a Banach space with spectrum included in $E$ that satisfies \begin{eqnarray*} & & \big\| T^{n} \big\| = O \big( n^{s} \big), \,n \rightarrow +\infty \quad (\textrm{for some nonnegative real } s) \\ & \textrm{and} & \big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty. \end{eqnarray*} Then there exists an outer function $f \in \textrm{{\LARGE $a$}}^{\infty}(\mathbb{D})$ which vanishes exactly on $E$ and such that $f(T) := \sum\limits_{n=0}^{+\infty} \widehat{f}(n) T^{n}=0$. \end{lemme} \begin{proof} Let $\omega$ be the weight defined by $\omega(n) = \big\| T^{n} \big\| \, (n \in \mathbb{Z})$. Let $\Phi$ be the continuous morphism from $A_{\omega}(\mathbb{T})$ to $\mathcal{L}(X)$ defined by $$ \Phi(f) = f(T) = \sum_{n = - \infty}^{+\infty} \widehat{f}(n) T^{n} \qquad \big( f \in A_{\omega}(\mathbb{T}) \big). $$ Since the algebra $A_{\omega}(\mathbb{T})$ is regular, we have $\big\{ z \in \mathbb{T} : \, f(z)=0 \quad (f \in \textrm{Ker } \Phi) \big\} \subset E$ (see \cite{EStZo1}, theorem 2.5), and so $J_{\omega}(E) \subset \textrm{Ker } \Phi$. Then the result follows from lemmas 7.1 and 7.2 of \cite{Est2}. \end{proof} \begin{theo} \label{operateurs} Let $E$ be a $K$-set, and let $s$ be a nonnegative real number. Then, any invertible operator $T$ on a Banach space with spectrum included in $E$ that satisfies \begin{eqnarray*} & & \big\| T^{n} \big\| = O \big( n^{s} \big), \,n \rightarrow +\infty \\ & \textrm{and} & \big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty \textrm{ for some } \beta < \frac{\delta(E)}{1+\delta(E)}, \end{eqnarray*} also satisfies the stronger property $$ \big\| T^{-n} \big\| = O \big( n^{s+\frac{1}{2}+\varepsilon} \big), \, n \rightarrow +\infty, $$ for all $\varepsilon>0$.
\end{theo} \begin{proof} Let $\varepsilon>0$ and set $\tilde{s}=s+\frac{1}{2}+\varepsilon$. Without loss of generality, we may assume that $\tilde{s}$ is not an integer. Let $t$ be a real number that is not an integer and satisfies $s + \frac{1}{2} < t < \tilde{s}$ and $[t] = [\tilde{s}]$. According to lemma \ref{bernstein}, we can define a continuous morphism $\Phi$ from $\lambda_{t}^{+}(\mathbb{T})$ to $\mathcal{L}(X)$ by $$ \Phi(f) = f(T) = \sum_{n=0}^{+\infty} \widehat{f}(n) T^{n} \qquad \big( f \in \lambda_{t}^{+}(\mathbb{T}) \big). $$ Let $I = \textrm{Ker } \Phi$; it is a closed ideal of $\lambda_{t}^{+}(\mathbb{T})$. We denote by $S(I)$ its inner factor, that is, the greatest common divisor of all inner factors of the non-zero functions in $I$ (see \cite{Hoff}, p.~85), and we set, for $0 \leq k \leq [t]$, $\displaystyle h^{k}(I) = \big\{ z \in \mathbb{T} : \, f(z) = \ldots = f^{(k)}(z) = 0 \quad (f \in I) \big\}$. \\ F. A. Shamoyan showed in \cite{Sha} that $$ I = \Big\{ f \in \lambda_{t}^{+}(\mathbb{T}) : \, S(I) | \, S(f) \textrm{ and } f^{(k)}=0 \textrm{ on } h^{k}(I) \textrm{ for all } 0 \leq k \leq [t] \Big\}, $$ where $S(f)$ denotes the inner factor of $f$ and $S(I) | \, S(f)$ means that $S(f)/ S(I)$ is a bounded holomorphic function in $\mathbb{D}$. Since $\displaystyle \beta < \frac{\delta(E)}{1+\delta(E)}$, there exists $0 < \delta < \delta(E)$ such that $\displaystyle \beta < \frac{\delta}{1+\delta}$. We have, by definition of $\delta(E)$, $\displaystyle \int_{0}^{2\pi} \frac{1}{d(e^{it},E)^{\delta}} \mathrm{d}t < +\infty$. So we deduce from lemma \ref{sansfacteur} that there exists an outer function $f \in \textrm{{\LARGE $a$}}^{\infty}(\mathbb{D})$ which vanishes exactly on $E$ and such that $f \in I$. Therefore, we have $S(I)=1$ and $h^{0}(I) \subset E$, so that $N_{t}^{+}(E) \cap \lambda_{t}(\mathbb{T}) \subset I$.
Now, as $\Lambda_{\tilde{s}}^{+}(\mathbb{T}) \subset \lambda_{t}^{+}(\mathbb{T})$, we can define a continuous morphism $\Psi$ from $\Lambda_{\tilde{s}}^{+}(\mathbb{T})$ to $\mathcal{L}(X)$ by $\Psi = \Phi_{|_{\Lambda_{\tilde{s}}^{+}(\mathbb{T})}}$. By what precedes, we have $$ N_{\tilde{s}}^{+}(E) \subset \textrm{Ker } \Psi. $$ So there exists a continuous morphism $\tilde{\Psi}$ from $\Lambda_{\tilde{s}}^{+}(\mathbb{T}) / N_{\tilde{s}}^{+}(E)$ into $\mathcal{L}(X)$ such that $\Psi = \tilde{\Psi} \circ \pi_{\tilde{s}}^{+}$, where $\pi_{\tilde{s}}^{+}$ is the canonical surjection from $\Lambda_{\tilde{s}}^{+}(\mathbb{T})$ to $\Lambda_{\tilde{s}}^{+}(\mathbb{T}) / N_{\tilde{s}}^{+}(E)$. Since $E$ is a $K$-set, by a theorem of E. M. Dyn'kin \cite{Dyn}, it is an interpolating set for $\Lambda_{\tilde{s}}^{+}(\mathbb{T})$, so that the canonical embedding $i$ from $\Lambda_{\tilde{s}}^{+}(\mathbb{T}) / N_{\tilde{s}}^{+}(E)$ into $\Lambda_{\tilde{s}}(\mathbb{T}) / N_{\tilde{s}}(E)$ is onto. We have, for $n \geq 0$, $$ T^{-n} = \tilde{\Psi} \circ i^{-1} \circ \pi_{\tilde{s}} (\alpha^{-n}), $$ where $\pi_{\tilde{s}}$ denotes the canonical surjection from $\Lambda_{\tilde{s}}(\mathbb{T})$ to $\Lambda_{\tilde{s}}(\mathbb{T}) / N_{\tilde{s}}(E)$ and where $\alpha : z \mapsto z$ is the identity map. So we have, for $n \geq 0$, \begin{eqnarray*} \big\| T^{-n} \big\| & \leq & \big\| \tilde{\Psi} \circ i^{-1} \big\| \big\| \pi_{\tilde{s}} (\alpha^{-n}) \big\|_{\Lambda_{\tilde{s}}} \\ & \leq & \big\| \tilde{\Psi} \circ i^{-1} \big\| (1+n)^{\tilde{s}}, \end{eqnarray*} which completes the proof. \end{proof} We give two immediate corollaries of this theorem. \begin{cor} \label{operateurs1} Let $\xi \in \big( 0, \frac{1}{2} \big)$ and let $s$ be a nonnegative real number.
Then, any invertible operator $T$ on a Banach space with spectrum included in $E_{\xi}$ that satisfies \begin{eqnarray*} & & \big\| T^{n} \big\| = O \big( n^{s} \big), \,n \rightarrow +\infty \\ & \textrm{and} & \big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty \textrm{ for some } \beta < b(\xi), \end{eqnarray*} also satisfies the stronger property $$ \big\| T^{-n} \big\| = O \big( n^{s+\frac{1}{2}+\varepsilon} \big), \, n \rightarrow +\infty, $$ for all $\varepsilon>0$. \end{cor} \begin{proof} It is well known that $E_{\xi}$ is a $K$-set (see proposition 2.5 of \cite{Est2}). Moreover, $E_{\xi}$ satisfies $\displaystyle \int_{0}^{2\pi} \frac{1}{d(e^{it},E)^{\delta}} \mathrm{d}t < +\infty$ if and only if $\displaystyle \delta < 1+\frac{\log{2}}{\log{\xi}}$. Indeed, the condition $\displaystyle \int_{0}^{2\pi} \frac{1}{d(e^{it},E)^{\delta}} \mathrm{d}t < +\infty$ is equivalent to the convergence of the series $\sum\limits_{n=1}^{+\infty} \sum\limits_{i=1}^{2^{n-1}} \big| L_{n,i} \big|^{1-\delta}$, where the $L_{n,i}$ are the arcs contiguous to $E_{\xi}$ and $\big| L_{n,i} \big|$ are their lengths, each equal to $2 \pi \xi^{n-1} (1-2\xi)$ (see \cite{KaSa} for further details). Then it is easily seen that the last series converges if and only if $\displaystyle \delta < 1+\frac{\log{2}}{\log{\xi}}$, so $\displaystyle \delta(E_{\xi}) = 1+\frac{\log{2}}{\log{\xi}}$. Now, the result follows immediately from theorem \ref{operateurs}. \end{proof} Then we obtain another immediate result, which generalizes theorem 4.1 of \cite{ElKe}. Indeed, the condition ``$\big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty$'' which appears in the following corollary is weaker than the condition used by the authors of \cite{ElKe}. \begin{cor} \label{operateurs2} Let $E$ be a $K$-set, and let $s$ be a nonnegative real number.
Then, there exists a constant $\beta > 0$ independent of $s$ such that any invertible operator $T$ on a Banach space with spectrum included in $E$ that satisfies \begin{eqnarray*} & & \big\| T^{n} \big\| = O \big( n^{s} \big), \,n \rightarrow +\infty \\ & \textrm{and} & \big\| T^{-n} \big\| = O \big( e^{n^{\beta}} \big), \, n \rightarrow +\infty, \end{eqnarray*} also satisfies the stronger property $$ \big\| T^{-n} \big\| = O \big( n^{s+\frac{1}{2}+\varepsilon} \big), \, n \rightarrow +\infty, $$ for all $\varepsilon>0$. \end{cor} \begin{proof} As $E$ is a $K$-set, we deduce from \cite{Dyn} (section 5, corollary) that $\delta(E)>0$. Then the result follows immediately from theorem \ref{operateurs}, with any $\beta < \frac{\delta(E)}{1+\delta(E)}$. \end{proof} \hspace*{-6mm}\textbf{Remark 2.6:} \\ 1) Some results concerning operators with countable spectrum are obtained in \cite{Zar2} and in \cite{bibi1}. Let $E$ be a closed subset of $\mathbb{T}$ and let $s$, $t$ be two nonnegative reals. We denote by $P(s,t,E)$ the following property: every invertible operator $T$ on a Banach space with $\textrm{Sp} \, T \subset E$ that satisfies the conditions \begin{eqnarray*} & & \big\| T^{n} \big\| = O(n^{s}) \,\, (n \rightarrow +\infty) \label{introCp} \\ & & \big\| T^{-n} \big\| = O(e^{\varepsilon \sqrt{n}}) \,\, (n \rightarrow +\infty), \, \textrm{ for all } \varepsilon > 0, \end{eqnarray*} also satisfies the stronger property \begin{eqnarray*} \big\| T^{-n} \big\| = O(n^{t}) \, (n \rightarrow +\infty). \end{eqnarray*} M. Zarrabi showed in \cite{Zar2} (th\'eor\`eme 3.1 and remarque 2.a) that a closed subset $E$ of $\mathbb{T}$ satisfies $P(0,0,E)$ if and only if $E$ is countable. Notice that $E$ is called a Carleson set if $\displaystyle \int_{0}^{2\pi} \log^{+} \frac{1}{d(e^{it}, E)} \mathrm{d}t < +\infty$.
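The arc-length computation behind $\delta(E_{\xi})$ in the proof of corollary \ref{operateurs1} reduces to a geometric series: with $|L_{n,i}| = 2\pi\xi^{n-1}(1-2\xi)$, the ratio of consecutive terms of $\sum_n 2^{n-1}|L_{n,i}|^{1-\delta}$ is $2\xi^{1-\delta}$, which is $<1$ exactly when $\delta < 1+\log 2/\log \xi$. A small numerical sketch (the value $\xi = 1/3$ is an arbitrary choice):

```python
import math

def term_ratio(xi, delta):
    # ratio of consecutive terms of sum_n 2**(n-1) * (2*pi*xi**(n-1)*(1-2*xi))**(1-delta)
    return 2.0 * xi ** (1.0 - delta)

xi = 1.0 / 3.0
delta_crit = 1.0 + math.log(2.0) / math.log(xi)   # = delta(E_xi), about 0.3691
assert term_ratio(xi, delta_crit - 0.05) < 1.0    # series converges below the critical exponent
assert term_ratio(xi, delta_crit + 0.05) > 1.0    # and diverges above it
assert abs(term_ratio(xi, delta_crit) - 1.0) < 1e-12
```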
If $E$ is a countable closed subset of $\mathbb{T}$, we show in \cite{bibi1} that the following conditions are equivalent: \\ \hspace*{3mm} (i) there exist two positive constants $C_1, C_2$ such that for every arc $I\subset \mathbb{T}$, $$ \frac{1}{|I|} \int_{I} \log^{+} \frac{1}{d(e^{it}, E)} \mathrm{d}t \leq C_{1} \log {\frac{1}{|I|}} + C_{2}. $$ \hspace*{2mm} (ii) $E$ is a Carleson set and for all $s \geq 0$, there exists $t$ such that $P(s,t,E)$ is satisfied. \\ For contractions with spectrum satisfying the Carleson condition, see \cite{Kell}. \\ 2) When $\displaystyle \xi=\frac{1}{q}$, the constant $\displaystyle b \big( \frac{1}{q} \big)$ in corollary \ref{operateurs1} is the best possible in view of \cite{ERaZa}, where the authors constructed a contraction $T$ such that $\displaystyle \lim_{n \rightarrow +\infty} \log \big\| T^{-n} \big\| = +\infty$, $\textrm{Sp} \, T \subset E_{\frac{1}{q}}$ and $\log \big\| T^{-n} \big\| = O \big( n^{b(\frac{1}{q})} \big)$. According to theorem 6.4 of \cite{Est1}, $T$ does not satisfy $\big\| T^{-n} \big\| = O \big( n^{s} \big)$ for any real $s \geq 0$. \\ \begin{thebibliography}{1} \bibitem{bibi1} C. Agrafeuil, \emph{Id\'eaux ferm\'es de certaines alg\`ebres de Beurling et application aux op\'erateurs \`a spectre d\'enombrable}, preprint. \bibitem{Dyn} E. M. Dynkin, \emph{Free interpolation set for H\"older classes}, Mat. Sbornik \textbf{109} (1979), 107-128. \bibitem{ElKe} O. El-Fallah et K. Kellay, \emph{Sous-espaces biinvariants pour certains shifts pond\'er\'es}, Ann. Inst. Fourier \textbf{48} (1998), no. 5, 1543-1558. \bibitem{Est1} J. Esterle, \emph{Uniqueness, strong form of uniqueness and negative powers of contractions}, Banach Center Publ. \textbf{30} (1994), 1-19. \bibitem{Est2} J. Esterle, \emph{Distributions on Kronecker sets, strong forms of uniqueness, and closed ideals of $A^{+}$}, J. reine angew. Math. \textbf{450} (1994), 43-82. \bibitem{ERaZa} J. Esterle, M.
Rajoelina and M. Zarrabi, \emph{On contractions with spectrum contained in Cantor set}, Math. Proc. Camb. Phil. Soc. \textbf{117} (1995), 339-343. \bibitem{EStZo1} J. Esterle, E. Strouse and F. Zouakia, \emph{Theorems of Katznelson-Tzafriri type for contractions}, J. Func. Anal. (2) \textbf{94} (1990), 273-287. \bibitem{Hoff} K. Hoffman, \emph{Banach spaces of analytic functions}, Prentice-Hall, Englewood Cliffs, 1962. \bibitem{Kaha1} J. P. Kahane, \emph{S\'eries de Fourier absolument convergentes}, Erg. Math. \textbf{336}, Springer Verlag, Berlin-Heidelberg-New York, 1973. \bibitem{KaSa} J. P. Kahane, R. Salem, \emph{Ensembles parfaits et s\'eries trigonom\'etriques}, Paris, Hermann, 1963. \bibitem{Kell} K. Kellay, \emph{Contractions et hyperdistributions \`a spectre de Carleson}, J. London Math. Soc. (2) \textbf{58} (1998), 185-196. \bibitem{Sha} F. A. Shamoyan, \emph{Closed ideals in algebras of functions analytic in the disc and smooth up to its boundary}, Mat. Sbornik \textbf{79} (1994), 425-445. \bibitem{Zar2} M. Zarrabi, \emph{Contractions \`a spectre d\'enombrable et propri\'et\'e d'unicit\'e des ferm\'es d\'enombrables du cercle unit\'e}, Ann. Inst. Fourier \textbf{43} (1993), 251-263. \end{thebibliography} \ \\ \hspace*{-7mm} 2000 Mathematics Subject Classification: 46J15, 46J20, 47A30. \\ Key-words: operators, Beurling algebra, spectral synthesis, perfect symmetric set. \ \\ \hspace*{-7mm} AGRAFEUIL Cyril \\ [email protected] \\ Laboratoire Bordelais d'Analyse et G\'eom\'etrie (LaBAG), CNRS-UMR 5467, Universit\'e Bordeaux I \\ 351, cours de la lib\'eration \\ 33405 Talence cedex, FRANCE. \\ \end{document}
\begin{document} \title{Uniform polynomial approximation with $A^*${} weights} \begin{abstract} We prove matching direct and inverse theorems for uniform polynomial approximation with $A^*$ weights (a subclass of doubling weights suitable for approximation in the $\mathbb{L}_\infty$ norm) having finitely many zeros and not too ``rapidly changing'' away from these zeros. This class of weights is rather wide and, in particular, includes the classical Jacobi weights, generalized Jacobi weights and generalized Ditzian-Totik weights. Main part and complete weighted moduli of smoothness are introduced, their properties are investigated, and equivalence type results involving related realization functionals are discussed. \end{abstract} \sect{Introduction} Recall that a nonnegative integrable function $w$ is a doubling weight (on $[-1,1]$) if there exists a positive constant $L$ (a so-called doubling constant of $w$) such that \be \label{doub} w (2I) \leq L w (I) , \ee for any interval $I\subset [-1,1]$. Here, $2I$ denotes the interval of length $2|I|$ ($|I|$ is the length of $I$) with the same center as $I$, and $ w(I) := \int_{I} w(u) du$. Note that it is convenient to assume that $w$ is identically zero outside $[-1,1]$ which allows us to write $w(I)$ for any interval $I$ that is not necessarily contained in $[-1,1]$. Let ${\mathcal DW}_L $ denote the set of all doubling weights on $[-1,1]$ with the doubling constant $L$, and ${\mathcal DW} := \cup_{L>0} {\mathcal DW}_L$, {\em i.e., } ${\mathcal DW}$ is the set of all doubling weights. It is easy to see that $w \in {\mathcal DW}_L$ if and only if there exists a constant $\kappa\geq 1$ such that, for any two adjacent intervals $I_1, I_2 \subset [-1,1]$ of equal length, \be \label{withkap} w(I_1) \leq \kappa w(I_2) . \ee Clearly, $\kappa$ and $L$ depend on each other. In fact, if $w \in {\mathcal DW}_L$ then \ineq{withkap} holds with $\kappa = L^2$. Conversely, if \ineq{withkap} holds, then $w \in {\mathcal DW}_{1+\kappa}$.
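For a concrete illustration of \ineq{doub} (not part of the paper's argument), one can estimate a doubling constant numerically; the weight $w(x)=|x|^{1/2}$ and the sampling scheme below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
GAMMA = 0.5  # test weight w(x) = |x|**GAMMA, taken to be 0 outside [-1,1]

def w_int(a, b):
    # exact integral of |x|**GAMMA over [a,b] intersected with [-1,1]
    a, b = max(a, -1.0), min(b, 1.0)
    if a >= b:
        return 0.0
    F = lambda x: np.sign(x) * abs(x) ** (GAMMA + 1.0) / (GAMMA + 1.0)
    return F(b) - F(a)

ratios = []
for _ in range(2000):
    c = rng.uniform(-1.0, 1.0)
    r = rng.uniform(1e-4, 0.5)
    a, b = max(c - r, -1.0), min(c + r, 1.0)        # the interval I
    m, half = (a + b) / 2.0, (b - a) / 2.0
    ratios.append(w_int(m - 2.0 * half, m + 2.0 * half) / w_int(a, b))  # w(2I)/w(I)
assert 1.0 <= max(ratios) < 4.0  # doubling holds with a moderate constant for this weight
```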
Following \cites{mt1999, mt2000}, we say that $w$ is an $A^*${} weight (on $[-1,1]$) if there is a constant $L^*$ (a so-called $A^*${} constant of $w$) such that, for all intervals $I\subset [-1,1]$ and $x\in I$, we have \be \label{astar} w(x) \leq {L^* \over |I|} w(I) . \ee Throughout this paper, $A^*_{L^*}$ denotes the set of all $A^*${} weights on $[-1,1]$ with the $A^*${} constant $L^*$. We also let $A^* := \cup_{L^*>0} A^*_{L^*}$, {\em i.e., } $A^*$ is the set of all $A^*${} weights. Note that any $A^*${} weight is doubling, {\em i.e., } $A^*_{L^*} \subset {\mathcal DW}_{L}$, where $L$ depends only on $L^*$. This was proved in \cite{mt2000} and is an immediate consequence of the fact (see \cite[Theorem 6.1]{mt2000}) that if $w\in A^*_{L^*}$ then, for some $l$ depending only on $L^*$ (for example, $l=2L^*$ will do), $w(I_1) \geq (|I_1|/|I_2|)^l w(I_2)$, for all intervals $I_1, I_2 \subset [-1,1]$ such that $I_1\subset I_2$. Indeed, for any $I\subset [-1,1]$, this implies $w(I) \geq \left( |I|/|2I\cap [-1,1]| \right)^l w(2I) \geq 2^{-l} w (2I)$, which shows that $w \in {\mathcal DW}_{2^l}$. Moreover, it is known and is not difficult to check (see \cite[pp. 58 and 68]{mt2000}) that all $A^*$ weights are $A_\infty$ weights. Here, $A_\infty$ is the union of all Muckenhoupt $A_p$ weights and can be defined as the set of all weights $w$ such that, for any $0<\alpha<1$, there is $0<\beta<1$ so that $w(E)\geq \beta w(I)$, for all intervals $I\subset [-1,1]$ and all measurable subsets $E\subset I$ with $|E| \geq \alpha |I|$ (see {\em e.g. } \cite{stein}*{Chapter V}). Clearly, any $A^*${} weight on $[-1,1]$ is bounded since if $w\in A^*_{L^*}$, then $w(x) \leq L^* w[-1,1]/2$, $x\in [-1,1]$. (We slightly abuse the notation and write $w[a,b]$ instead of $w \left([a,b]\right)$ throughout this paper.)
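Condition \ineq{astar} can likewise be probed numerically; the sketch below estimates $\sup_{I,\, x\in I} w(x)|I|/w(I)$ for the arbitrarily chosen weight $w(x)=|x|^{1/2}$, for which an $A^*${} constant of roughly $1.7$ suffices:

```python
import numpy as np

def w(x):
    return np.sqrt(np.abs(x))

def w_int(a, b):
    # exact integral of |x|**0.5 over [a,b]
    F = lambda x: np.sign(x) * np.abs(x) ** 1.5 / 1.5
    return F(b) - F(a)

worst = 0.0
for a in np.linspace(-1.0, 0.99, 200):
    for b in np.linspace(a + 1e-3, 1.0, 60):
        x = np.linspace(a, b, 128)
        worst = max(worst, w(x).max() * (b - a) / w_int(a, b))
# the supremum is attained near intervals [-t*b, b] with t about 0.36
assert 1.0 < worst < 2.0
```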
At the same time, not every bounded doubling weight is an $A^*${} weight (for example, the doubling weight constructed in \cite{fm} is bounded and is not in $A_\infty$, and so it is not an $A^*${} weight either). Throughout this paper, we use the standard notation $\norm{f}{I} := \norm{f}{\mathbb{L}_\infty(I)} := \esssup_{u\in I} |f(u)|$ and $\norm{f}{} := \norm{f}{[-1,1]}$. Also, \[ E_n(f, I)_{w} := \inf_{q \in\Pi_n} \norm{w(f-q )}{I} , \] where $\Pi_n$ is the space of algebraic polynomials of degree $\leq n-1$. The following theorem is due to G. Mastroianni and V. Totik \cite{mt2001}*{Theorem 1.4} and is the main motivation for the present paper (see also \cites{mt1998, mt1999, mt2000}). \begin{theorema}[\cite{mt2001}*{Theorem 1.4}] \label{mtthm} Let $r\in\mathbb N$, $M\geq 3$, $-1 = z_1 < \dots < z_M = 1$, and let $w$ be a bounded generalized Jacobi weight \be \label{genJacobi} w_{\J}(x) := \prod_{j=1}^M |x-z_j|^{\gamma_j} \quad \mbox{\rm with }\; \gamma_j \geq 0, \; 1\leq j\leq M. \ee Then there is a constant $c$ depending only on $r$ and the weight $w$ such that, for any $f$, \[ E_n(f, [-1,1])_{w_{\J}} \leq c \w_\varphi^r (f, 1/n)_{w_{\J}}^* , \] and \[ \w_\varphi^r (f, 1/n)_{w_{\J}}^* \leq c n^{-r} \sum_{k=1}^n k^{r-1} E_k(f, [-1,1])_{w_{\J}} , \] where \[ \w_\varphi^r (f, t)_{w_{\J}}^* := \sum_{j=1}^{M-1} \sup_{0<h\leq t} \norm{w_{\J}(\cdot) \Delta_{h\varphi(\cdot)}^r(f,\cdot, J_{j,h})}{} + \sum_{j=1}^M E_r(f, I_{j, t})_{w_{\J}} \] with $I_{1,h} = [-1, -1+h^2]$, $I_{M, h} = [1-h^2, 1]$, $J_{1, h} = [-1+h^2, z_2-h]$, $J_{M-1, h} = [z_{M-1}+h, 1-h^2]$, and $I_{j, h} = [z_j-h, z_j+h]$ for $1<j<M$ and $J_{j, h} = [z_j+h, z_{j+1}-h]$ for $1<j<M-1$, and the $r$th symmetric difference is defined in \ineq{dd}.
\end{theorema} The purpose of the present paper is to prove an analog of \thm{mtthm} for more general weights (namely, for $A^*${} weights having finitely many zeros inside $[-1,1]$ and not too ``rapidly changing'' away from these zeros), and give a more natural and transparent (in our opinion) definition of the modulus of smoothness $\w_\varphi^r$. Our recent paper \cite{k-singular} deals with approximation in the weighted $\mathbb{L}_p$, $p<\infty$, (quasi)norm and a certain class of doubling weights having finitely many zeros and singularities. Approximation in the weighted $\mathbb{L}_\infty$ norm considered in the current paper is similar in some sense, but it also presents some challenges that have to be dealt with, and our present proofs are different from those in both \cite{mt2001} and \cite{k-singular}. The main results of the present paper are \thm{jacksonthm} (direct result), \thm{conversethm} (inverse result) and \thm{corr99} (equivalence of the modulus and an appropriate realization functional). Finally, we mention that \thm{mtthm} is a corollary of our results taking into account that $w_{\J} \in {\mathcal{W}}^*(\mathcal Z)$, $\mathcal Z\in{\mathbb Z}M$ (see \rem{rem3.3}), and \begin{eqnarray*} \lefteqn{ \w_\varphi^r\left(f, \max\left\{(1-z_2^2)^{-1/2},(1-z_{M-1}^2)^{-1/2}\right\}, 1/2, t\right)_{w_{\J}} }\\ &\leq& \w_\varphi^r (f, t)_{w_{\J}}^* \leq M\cdot \w_\varphi^r\left(f, 1/2, \max\left\{(1-z_2^2)^{-1/2},(1-z_{M-1}^2)^{-1/2}\right\}, t\right)_{w_{\J}}, \quad 0<t\leq1, \end{eqnarray*} where ${\mathcal{W}}^*(\mathcal Z)$ and $\w_\varphi^r(f, A, B, t)_w$ are defined in \deff{def11} and \ineq{compmod}, respectively. \section{Some properties of $A^*${} weights} Note that, for any interval $I\subset [-1,1]$ and $x\in I$, if \ineq{astar} holds for $I_1 := I \cap [-1,x]$ and $I_2 := I \cap [x, 1]$, then it also holds for $I$ since $|I_1|+|I_2| = |I|$ and $w(I_1) + w(I_2) = w(I)$.
Therefore, $w\in A^*_{L^*}$ if and only if, for all intervals $[a,b] \subset [-1,1]$, \be \label{astar1} \max\{ w(a), w(b)\} \leq {L^* \over b-a} w[a,b] . \ee \begin{lemma} \label{lemma02} Let $w\in A^*_{L^*}$, $\xi\in [-1,1]$, and let $w_1(x) := f(|x-\xi|)$, where $f: [0,2] \to \mathbb R_+$ is nondecreasing and such that $f(2x)\leq K f(x)$, for some $K>0$ and all $0\leq x \leq 1$. Then, $\widetilde w := w w_1 \in A^*_{L}$ with the constant $L$ depending only on $K$ and $L^*$. \end{lemma} \begin{proof} Suppose that $I\subset [-1,1]$ and $d$ is one of the endpoints of $I$. We need to show that $\widetilde w(d) \leq L \widetilde w(I)/|I|$. {\bf Case 1: $\xi\not\in \mathrm{int}(I)$.}\\ Then, $w_1$ is monotone on $I$, and so either $w_1(d) \leq w_1(u)$ or $w_1(d) \geq w_1(u)$, for $u\in I$. In the former case, we immediately have \[ \widetilde w (d) = w(d) w_1(d) \leq {L^* \over |I|} \int_I w_1(d) w(u) du \leq {L^* \over |I|} \widetilde w(I) . \] Suppose now that $w_1(d) \geq w_1(u)$, for $u\in I$. This means that $d$ is the endpoint of $I$ furthest from $\xi$. Let $\zeta$ be the midpoint of $I$, and let $J := [d,\zeta]$ (as usual, if $x<y$, then $[y,x] := [x,y]$). Then, $w_1(\zeta) \leq w_1(u)$, for all $u\in J$. Also, since $|d-\xi|/2 \leq |\zeta-\xi|$ and $|d-\xi| \leq 2$, we conclude that \[ w_1(d) = f(|d-\xi|) \leq K f(|d-\xi|/2) \leq K f(|\zeta-\xi|)= K w_1(\zeta) . \] Therefore, $w_1(d) \leq K w_1(u)$, for all $u\in J$, and so \begin{eqnarray} \label{auxw} \widetilde w (d) &=& w(d) w_1(d) \leq {L^* \over |J|} \int_J w_1(d) w(u) du \leq {L^* K \over |J|} \int_J w_1(u) w(u) du \leq {L^* K \over |J|} \widetilde w(I) \\ \nonumber &=& {2L^* K \over |I|} \widetilde w(I) .
\end{eqnarray} {\bf Case 2: $\xi \in \mathrm{int}(I)$.} \\ If $|d-\xi|\geq |I|/4$, then using \ineq{auxw} for $I':= [d,\xi]$, we have \[ \widetilde w (d) \leq {2L^* K \over |I'|} \widetilde w(I') \leq {8L^* K \over |I|}\widetilde w(I) . \] We now assume that $|d-\xi| < |I|/4$. Let $d'$ be the point symmetric to $d$ about $\xi$, {\em i.e., } $\xi = (d+d')/2$, and let $I'':= I\setminus [d,d')$. Then $|I''| = |I| - 2|d-\xi| \geq |I|/2$, and $w_1(d)=w_1(d') \leq w_1(u)$, for all $u\in I''$. Hence, taking into account that $w$ is doubling with the doubling constant depending only on $L^*$, we have \[ \widetilde w (d) = w(d) w_1(d) \leq {L^* w_1(d) \over |I|} \int_I w(u) du \leq {c w_1(d) \over |I|} \int_{I''} w(u) du \leq {c \over |I|} \int_{I''} w_1(u) w(u) du \leq {c \over |I|} \widetilde w (I) . \] This completes the proof. \end{proof} \begin{corollary} \label{corast} Suppose that $w\in A^*_{L^*}$, $M\in\mathbb N$ and, for each $1\leq i\leq M$, $z_i \in [-1,1]$, $\gamma_i \geq 0$ and $\Gamma_i \in\mathbb R$ (if $\gamma_i >0$) or $\Gamma_i \leq 0$ (if $\gamma_i =0$). Then \be \label{genDT} \widetilde w(x) := w(x) \prod_{i=1}^M |x-z_i|^{\gamma_i} \left( \ln{e \over |x-z_i|} \right)^{\Gamma_i} \ee is an $A^*${} weight with the $A^*${} constant depending only on $\gamma_i$'s, $\Gamma_i$'s and $L^*$. \end{corollary} We remark that, with $w\sim 1$, the weights $\widetilde w$ in \ineq{genDT} are sometimes called ``generalized Ditzian-Totik weights''. \begin{proof} Denote \[ f_{\gamma, \Gamma} (x) := \begin{cases} \left( 1 - \ln x \right)^{\Gamma} , & \mbox{\rm if $\gamma = 0$ and $\Gamma \leq 0$,} \\ x^\gamma \left( \Psi - \ln x \right)^{\Gamma} , & \mbox{\rm if $\gamma > 0$ and $\Gamma \in\mathbb R$,} \\ \end{cases} \] where $\Psi := 1 + \max\{0, \Gamma\}/\gamma$. It is easy to check that $f_{\gamma, \Gamma}$ is nonnegative and nondecreasing on $[0,2]$, and satisfies $\sup_{x\in [0,1]} |f_{\gamma, \Gamma}(2x)/f_{\gamma, \Gamma}(x)| < \infty$.
Hence, \lem{lemma02} implies that the weight \[ \widehat w(x) := w(x) \prod_{i=1}^M f_{\gamma_i, \Gamma_i} (|x-z_i|) \] is an $A^*${} weight with the $A^*${} constant depending only on $\gamma_i$'s, $\Gamma_i$'s, and $L^*$. Finally, it remains to notice that, if $\gamma >0$ and $\Gamma \in\mathbb R$, then $f_{\gamma, \Gamma}(x) \sim x^\gamma \left( 1 - \ln x \right)^{\Gamma}$ on $[0,2]$ with equivalence constants depending only on $\gamma$ and $\Gamma$, and so $\widetilde w \sim \widehat w$ on $[-1,1]$. Clearly, this implies that $\widetilde w\in A^*$. \end{proof} \begin{remark} \label{remark25} It follows from \cor{corast} that, for any $A^*${} weight $w$ and any $\mu\geq 0$, $w\varphi^\mu$ is also an $A^*${} weight, where $\varphi(x) := \sqrt{1-x^2}$. \end{remark} For $n\in\mathbb N$, following {\em e.g. } \cite{mt2001}, we denote \[ w_n(x) := \rho_n(x)^{-1} \int_{x-\rho_n(x)}^{x+\rho_n(x)} w(u) du , \] where $\rho_n(x) := n^{-1}\varphi(x) + n^{-2}$ (recall that $w$ is assumed to be $0$ outside $[-1,1]$). Note that, for any $w\in A^*_{L^*}$ and $x\in [-1,1]$, \begin{eqnarray} \label{wlesswn} w (x) &\leq& {L^* \over \left| [x-\rho_n(x), x+\rho_n(x)] \cap [-1,1] \right|} \int_{[x-\rho_n(x), x+\rho_n(x)] \cap [-1,1]} w(u) du \\ \nonumber & \leq & {L^* \over \rho_n(x) } \int_{x-\rho_n(x)}^{x+\rho_n(x)} w(u) du = L^* w_n(x) . \end{eqnarray} \begin{lemma}\label{lemma26} Let $w\in A^*_{L^*}$ and $n\in\mathbb N$. Then $w_n\in A^*_{L}$ with $L$ depending only on $L^*$. \end{lemma} \begin{proof} Suppose that $n\in\mathbb N$ is fixed. Let $I$ be a subinterval of $[-1,1]$, and suppose that $x\in I$ is the left endpoint of $I$ (the case for the right endpoint is analogous).
If $[x,x+\rho_n(x)] \subset I$, using the fact that $w$ is doubling, we have \begin{eqnarray*} w_n(x) &=& \rho_n(x)^{-1} \int_{x-\rho_n(x)}^{x+\rho_n(x)} w(u) du \leq c \rho_n(x)^{-1} \int_{x}^{x+\rho_n(x)} w(u) du \\ &\leq & c \rho_n(x)^{-1} \int_{x}^{x+\rho_n(x)} {L^* \over \left| I \right|} \int_{I} w(v) dv \, du \leq {c \over \left| I \right|} \int_{I} w(v) dv \leq {c \over \left| I \right|} \int_I w_n(v) dv . \end{eqnarray*} Recall now that, if $|x-u| \leq K \rho_n(x)$, then $w_n(x) \sim w_n(u)$ (see {\em e.g. } \cite[(2.3)]{mt2001}). This implies that, if $x$ is the left endpoint of $I$ and $x+\rho_n(x) \not\in I$, then $I \subset [x, x+\rho_n(x)]$, and so $w_n(u) \sim w_n(x)$, for all $u\in I$. Hence, in this case, \[ w_n(x) \sim {1 \over \left| I \right|} \int_I w_n(u) du . \] Therefore, \ineq{astar1} implies that $w_n$ is an $A^*${} weight. \end{proof} \sect{Special $A^*${} weights and associated moduli of smoothness} Let \[ \rho(h,x) := h\varphi(x)+h^2 \] (note that $\rho(1/n, x)=\rho_n(x)$), and \[ {\mathbb Z}M := \left\{ (z_j)_{j=1}^M \;\; \big| \;\; -1 \leq z_1 < \dots < z_{M-1} < z_M \leq 1 \right\}, \; M\in\mathbb N . \] For $\mathcal Z\in{\mathbb Z}M$, it is convenient to denote \[ \mathcal Z_{A,h}^j := \mathcal Z_{A,h}^j(\mathcal Z) := \left\{ x \in [-1,1] \;\; \big| \;\; |x-z_j| \leq A \rho(h, z_j) \right\}, \quad 1\leq j\leq M , \] \[ \mathcal Z_{A,h} := \mathcal Z_{A,h}(\mathcal Z) := \cup_{j=1}^M \mathcal Z_{A,h}^j, \] and \[ {\mathcal I}_{A, h} := {\mathcal I}_{A, h}(\mathcal Z) := \left([-1,1] \setminus \mathcal Z_{A,h}\right)^{cl} = \left\{ x\in [-1,1] \;\; \big| \;\; |x-z_j| \geq A \rho(h,z_j), \; \text{for all } 1\leq j \leq M \right\}. \] Also, \[ \delta(\mathcal Z) := \mathrm{pmin} \left\{ |z_j - z_{j-1}| \;\; \big| \;\; 1\leq j\leq M+1 \right\} , \] where $z_0 := -1$, $z_{M+1} := 1$ and $\mathrm{pmin}(S)$ is the smallest {\em positive} number from the finite set $S$ of nonnegative reals.
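The role of $\mathrm{pmin}$ (rather than $\min$) is to discard the zero gaps that occur when $z_1 = -1$ or $z_M = 1$; a small illustration (the sets $\mathcal Z$ below are arbitrary choices):

```python
def pmin(values):
    # smallest positive element of a finite collection of nonnegative reals
    return min(v for v in values if v > 0)

def delta_Z(Z):
    # delta(Z) with the conventions z_0 := -1 and z_{M+1} := 1
    pts = [-1.0] + sorted(Z) + [1.0]
    return pmin(abs(pts[j] - pts[j - 1]) for j in range(1, len(pts)))

# with z_1 = -1 and z_3 = 1, the gaps to z_0 and z_4 are zero and are ignored
assert delta_Z([-1.0, 0.25, 1.0]) == 0.75
assert delta_Z([0.0]) == 1.0
```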
Note that $\delta(\mathcal Z) \leq 2$, for any $\mathcal Z\in{\mathbb Z}M$. The following definition is an analog of \cite{k-singular}*{Definition 2.1} for $A^*${} weights. \begin{definition} \label{def11} Let $\mathcal Z\in{\mathbb Z}M$. We say that $w$ is an $A^*${} weight from the class ${\mathcal{W}}^*(\mathcal Z)$ (and write $w\in {\mathcal{W}}^*(\mathcal Z)$) if \begin{itemize} \item[(i)] $w\in A^*$, \end{itemize} and \begin{itemize} \item[(ii)] for any $\e>0$ and $x, y\in [-1,1]$ such that $|x-y| \leq \rho(\e, x)$ and $\dist\left( [x,y], z_j \right) \geq \rho(\e, z_j)$ for all $1\leq j \leq M$, the following inequalities are satisfied \be \label{nochange} c_* w(y) \leq w(x) \leq c_*^{-1} w(y) , \ee where the constant $c_*$ depends only on $w$, and does not depend on $x$, $y$ and $\e$. \end{itemize} \end{definition} Clearly, there are non-$A^*${} weights satisfying condition (ii) in Definition~\ref{def11}. For instance, the non-doubling weight \[ w(x) := \begin{cases} -x , & \mbox{\rm if }\; x<0 ,\\ x^2 , & \mbox{\rm if }\; x\geq 0 , \end{cases} \] is one such example for $\mathcal Z := \{0\}$. \begin{remark} A weight from the class ${\mathcal{W}}^*(\mathcal Z)$ may have zeros only at the points in $\mathcal Z$. At the same time, it is not required to have zeros at those points. \end{remark} \begin{remark} \label{rem3.3} It follows from \cite{k-singular}*{Example 2.7} and \cor{corast} that the following weights belong to ${\mathcal{W}}^*(\mathcal Z)$ with $\mathcal Z= (z_j)_{j=1}^M$, $-1\leq z_1 < \dots <z_{M-1} < z_M \leq 1$: \begin{itemize} \item bounded classical Jacobi weights: $w(x) = (1+x)^\alpha (1-x)^\beta$, $\alpha,\beta \geq 0$, with $M=2$, $z_1 = -1$ and $z_2= 1$, \item bounded generalized Jacobi weights \ineq{genJacobi}, \item bounded generalized Ditzian-Totik weights \ineq{genDT} with $w\equiv 1$.
\end{itemize} \end{remark} The following lemma immediately follows from \cite{k-singular}*{Lemma 2.3} taking into account the fact that any $A^*${} weight is doubling. \begin{lemma} \label{properties} Let $w$ be an $A^*${} weight and $\mathcal Z\in{\mathbb Z}M$. The following conditions are equivalent. \begin{enumerate}[\rm (i)] \item \label{i} $w\in {\mathcal{W}}^*(\mathcal Z)$. \item \label{ii} For any $n\in\mathbb N$ and $x, y$ such that $[x,y] \subset {\mathcal I}_{1, 1/n}$ and $|x-y| \leq \rho_n(x)$, inequalities \ineq{nochange} are satisfied with the constant $c_*$ depending only on $w$. \item \label{iii} For some $N\in\mathbb N$ that depends only on $w$, and any $n\geq N$ and $x, y$ such that $[x,y] \subset {\mathcal I}_{1, 1/n}$ and $|x-y| \leq \rho_n(x)$, inequalities \ineq{nochange} are satisfied with the constant $c_*$ depending only on $w$. \item \label{iv} For any $n\in\mathbb N$, $A, B >0$, and $x, y$ such that $[x,y] \subset {\mathcal I}_{A, 1/n}$ and $|x-y| \leq B \rho_n(x)$, inequalities \ineq{nochange} are satisfied with the constant $c_*$ depending only on $w$, $A$ and $B$. \item \label{v} For any $n\in\mathbb N$ and $A>0$, \[ w(x) \sim w_n(x) , \quad x \in {\mathcal I}_{A, 1/n} , \] where the equivalence constants depend only on $w$ and $A$, and are independent of $x$ and $n$. \end{enumerate} \end{lemma} For $r\in\mathbb N$, $t>0$ and $\mathcal Z\in{\mathbb Z}M$, {\em the main part weighted modulus of smoothness} is defined as \be \label{mpmod} \Omega_\varphi^r(f, A, t)_{ w} := \Omega_\varphi^r(f, A, t; \mathcal Z)_{ w} := \sup_{0<h\leq t} \norm{w(\cdot) \Delta_{h\varphi(\cdot)}^r(f,\cdot, {\mathcal I}_{A, h})}{} , \ee where \be \label{dd} \Delta_h^r(f,x, J):=\left\{ \begin{array}{ll} \displaystyle \sum_{i=0}^r {r \choose i} (-1)^{r-i} f(x-rh/2+ih),&\mbox{\rm if }\, [x-rh/2, x+rh/2] \subset J \,,\\ 0,&\mbox{\rm otherwise}, \end{array}\right. \ee is the $r$th symmetric difference.
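The symmetric difference \ineq{dd} annihilates polynomials of degree less than $r$, while on $x^r$ it returns $r!\,h^r$; a minimal sketch (the test functions and stencil parameters are arbitrary choices):

```python
from math import comb

def sym_diff(f, x, h, r, J=(-1.0, 1.0)):
    # r-th symmetric difference of \ineq{dd}; zero unless [x - rh/2, x + rh/2] lies in J
    if x - r * h / 2.0 < J[0] or x + r * h / 2.0 > J[1]:
        return 0.0
    return sum(comb(r, i) * (-1) ** (r - i) * f(x - r * h / 2.0 + i * h)
               for i in range(r + 1))

r, h, x = 3, 0.1, 0.2
assert abs(sym_diff(lambda t: 5.0 - t + 2.0 * t ** 2, x, h, r)) < 1e-12  # degree < r
assert abs(sym_diff(lambda t: t ** 3, x, h, r) - 6.0 * h ** 3) < 1e-12   # r! * h**r
assert sym_diff(lambda t: t ** 3, 0.95, h, r) == 0.0                     # stencil leaves J
```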
Note that if we denote
\be \label{domain}
\Dom (A,h, r) := \left\{ x \;\; \big| \;\; [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \subset {\mathcal I}_{A, h} \right\}
\ee
then
\[
\Omega_\varphi^r(f, A, t)_{ w} = \sup_{0<h\leq t} \norm{w(\cdot) \Delta_{h\varphi(\cdot)}^r(f,\cdot,\mathbb R)}{\Dom(A, h,r)} .
\]
The {\em weighted Ditzian-Totik modulus of smoothness} is
\[
\w_\varphi^r(f, t)_{ w} := \sup_{0<h\leq t} \norm{w(\cdot) \Delta_{h\varphi(\cdot)}^r(f,\cdot, [-1,1])}{} .
\]
For $A, B, t>0$, we define the {\em complete weighted modulus of smoothness} as
\be \label{compmod}
\w_\varphi^r(f, A, B, t)_{ w} := \w_\varphi^r(f, A, B, t; \mathcal Z)_{ w} := \Omega_\varphi^r(f, A, t;\mathcal Z)_{ w} + \sum_{j=1}^M E_r(f, \mathcal Z_{B,t}^j)_{w} .
\ee
We will also need the following auxiliary quantity (``restricted main part modulus''):
\be \label{restmod}
\Omega_\varphi^r(f, t)_{S, w} := \sup_{0<h\leq t} \norm{w(\cdot) \Delta_{h\varphi(\cdot)}^r(f,\cdot, S)}{} ,
\ee
where $S$ is some subset (a union of intervals) of $[-1,1]$ that does not depend on $h$.
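The unweighted special case may help fix ideas: with $w\equiv 1$ and no singular points, the Ditzian-Totik modulus above is a double supremum over the step $h$ and the point $x$. The brute-force Python sketch below (our own naming; a finite grid stands in for the suprema, so it is only an approximation in general) computes it for $f(x)=x^2$ and $r=2$, where $\Delta_{h\varphi(x)}^2 f = 2h^2(1-x^2)$ and hence the modulus equals $2t^2$, attained at $x=0$, $h=t$.

```python
import numpy as np
from math import comb

def dt_modulus(f, r, t, num_h=20, num_x=2001):
    """Grid approximation of the unweighted Ditzian-Totik modulus
    sup_{0<h<=t} sup_x |Delta_{h*phi(x)}^r(f, x, [-1,1])|, phi(x) = sqrt(1-x^2)."""
    best = 0.0
    for h in np.linspace(t / num_h, t, num_h):
        for x in np.linspace(-1.0, 1.0, num_x):
            step = h * np.sqrt(1.0 - x * x)
            if x - r * step / 2 < -1.0 or x + r * step / 2 > 1.0:
                continue  # the symmetric difference is set to 0 outside [-1,1]
            val = sum(comb(r, i) * (-1) ** (r - i) * f(x - r * step / 2 + i * step)
                      for i in range(r + 1))
            best = max(best, abs(val))
    return best

# For f(x) = x^2 and r = 2 the supremum is attained at x = 0, h = t,
# both of which lie on the grid, so the grid value is (essentially) exact.
t = 0.5
assert abs(dt_modulus(lambda u: u * u, 2, t) - 2 * t * t) < 1e-9
```

The same loop with the admissibility test replaced by $[x - r h \varphi(x)/2, x + r h \varphi(x)/2] \subset {\mathcal I}_{A,h}$ would approximate the main part modulus \ineq{mpmod} instead.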
\section{Properties of main part and complete weighted moduli}
\begin{proposition} \label{propprop}
For any weight function $w$ and a set $\mathcal Z\in{\mathbb Z}_M$, the moduli defined in \ineq{mpmod}, \ineq{compmod} and \ineq{restmod} have the following properties:
\begin{enumerate}[\rm (i)]
\item \label{pi} $\displaystyle \Omega_\varphi^r(f, A, t)_{w} = \Omega_\varphi^r(f, A, \sqrt{2/A})_{w}$ for any $t\geq \sqrt{2/A}$;
\item \label{pii} $\displaystyle \w_\varphi^r(f, A, B, t)_{ w} = \w_\varphi^r(f, A, B, t_0)_{ w} \geq M E_r(f, [-1,1])_{w}$ for any $t\geq t_0 := \max\{\sqrt{2/A}, \sqrt{2/B} \}$;
\item \label{piii} $\displaystyle \Omega_\varphi^r(f, A, t_1)_{ w} \leq \Omega_\varphi^r(f, A, t_2)_{ w}$ and $\displaystyle \w_\varphi^r(f, A, B, t_1)_{ w} \leq \w_\varphi^r(f, A, B, t_2)_{ w}$ if $0<t_1 \leq t_2$;
\item \label{piv} $\displaystyle \Omega_\varphi^r(f, A_1, t)_{ w} \geq \Omega_\varphi^r(f, A_2, t)_{ w}$ and $\displaystyle \w_\varphi^r(f, A_1, B, t)_{ w} \geq \w_\varphi^r(f, A_2, B, t)_{ w}$ if $A_1 \leq A_2$;
\item \label{pv} $\displaystyle \w_\varphi^r(f, A, B_1, t)_{ w} \leq \w_\varphi^r(f, A, B_2, t)_{ w}$ if $B_1 \leq B_2$;
\item \label{pvi} $\displaystyle \Omega_\varphi^r(f, c_* t)_{{\mathcal I}_{A, t}, w} \leq \Omega_\varphi^r(f, A/\max\{c_*, c_*^2\}, c_*t)_{ w}$ for any $t>0$ and $c_*>0$.
\end{enumerate}
\end{proposition}
\begin{proof}
Properties~(\ref{pi}) and (\ref{pii}) immediately follow from the observation that, if $h\geq \sqrt{2/C}$, then $C \rho(h, z_j) \geq 2$. Properties~(\ref{piii}) and (\ref{pv}) follow from the definition and the fact that $\mathcal Z^j_{B_1, t_1} \subset \mathcal Z^j_{B_2, t_2}$ if $t_1 \leq t_2$ and $B_1 \leq B_2$. Property~(\ref{piv}) is a consequence of the inclusion ${\mathcal I}_{A_2, h} \subset {\mathcal I}_{A_1, h}$ if $A_1 \leq A_2$.
Property~(\ref{pvi}) follows from the observation that, for $c_*>0$ and $0< h\leq c_*t$, we have $\rho(h, z_j)/\max\{c_*, c_*^2\} \leq \rho(t, z_j)$, and so ${\mathcal I}_{A, t} \subset {\mathcal I}_{A/\max\{c_*, c_*^2\}, h}$.
\end{proof}
We need an auxiliary lemma that is used in the proofs of several results below.
\begin{lemma}\label{auaulemma}
Suppose that $\mathcal Z\in{\mathbb Z}_M$ and $w \in {\mathcal{W}}^*(\mathcal Z)$. If $A, h>0$, $r\in\mathbb N$ and $x\in [-1,1]$ are such that
\[
[x-rh\varphi(x)/2, x+rh\varphi(x)/2 ] \subset {\mathcal I}_{A, h} \quad \mbox{\rm ({\em i.e., } $x\in \Dom (A,h, r)$),}
\]
then, for any $y\in [x-rh\varphi(x)/2, x+rh\varphi(x)/2 ]$,
\[
w(y) \sim w(x) \sim w_{n}(x) ,
\]
where $n := \lceil 1/h \rceil$, and the equivalence constants depend only on $r$, $A$ and the weight $w$.
\end{lemma}
\begin{proof}
First we note that, if $h > \sqrt{2/A}$, then $A\rho(h, z_j) > 2$, and so ${\mathcal I}_{A, h}=\emptyset$. Hence, we can assume that $0<h\leq \sqrt{2/A}$. Now, if $n = \lceil 1/h \rceil$, then $n\in\mathbb N$, $n^{-1} \leq h < (n-1)^{-1}$ and ${\mathcal I}_{A, h} \subset {\mathcal I}_{A, 1/n}$. Moreover, if $n\geq 2$, then $(n-1)^{-1} \leq 2/n$ and so $\rho(h, x) \leq 4 \rho_n(x) $ and, if $n=1$, then
\[
\rho(h, x) \leq \rho(\sqrt{2/A}, x) \leq \max\{ \sqrt{2/A}, 2/A \} \rho_n(x) .
\]
Hence, if $y\in [x-rh\varphi(x)/2, x+rh\varphi(x)/2 ]$, then $[x,y] \subset {\mathcal I}_{A, 1/n}$ and
\[
|x-y| \leq rh\varphi(x)/2 \leq r\rho(h,x)/2 \leq (r/2) \max\{4, \sqrt{2/A}, 2/A \} \rho_n(x) .
\]
Therefore, \lemp{iv} implies that $w(y)\sim w(x)$, and \lemp{v} yields the equivalence $w(x)\sim w_n(x)$.
\end{proof}
In the following lemma and in the sequel, we use the usual notation
\[
\mathbb{L}_\infty^w := \left\{ f: [-1,1]\mapsto\mathbb R \;\; \big| \;\; \norm{wf}{} <\infty \right\} .
\]
\begin{lemma} \label{normestimate}
If $\mathcal Z\in{\mathbb Z}_M$, $w \in {\mathcal{W}}^*(\mathcal Z)$, $f\in\mathbb{L}_\infty^w$, $r\in\mathbb N$, and $A, B, t >0$, then
\[
\w_\varphi^r(f, A, B, t)_{ w} \leq c \norm{wf}{} ,
\]
where $c$ depends only on $r$, $A$ and the weight $w$.
\end{lemma}
\begin{proof}
First of all, it is clear that
\[
\sum_{j=1}^M E_r(f, \mathcal Z_{B,t}^j)_{w} \leq \sum_{j=1}^M \norm{wf}{\mathcal Z_{B,t}^j} \leq M \norm{wf}{}.
\]
We now let $h \in (0, t]$ and $x$ be such that $[x-rh\varphi(x)/2, x+rh\varphi(x)/2 ] \subset {\mathcal I}_{A, h}$, and denote $y_i(x) := x+(i-r/2)h \varphi(x)$. Then, \lem{auaulemma} implies that $w(y_i(x)) \sim w(x)$, $0\leq i \leq r$, and so
\begin{eqnarray*}
w(x) \left|\Delta_{h\varphi(x)}^r(f,x, {\mathcal I}_{A,h})\right| & \leq & w(x) \sum_{i=0}^r {r \choose i} \left| f(y_i(x)) \right| \leq 2^r w(x) \max_{0 \leq i \leq r} \left| f(y_i(x)) \right| \\
& \leq & c \max_{0 \leq i \leq r} \left|w(y_i(x)) f(y_i(x)) \right| .
\end{eqnarray*}
This yields $ \Omega_\varphi^r(f, A, t)_{ w}\leq c \norm{wf}{}$, which completes the proof of the lemma.
\end{proof}
Taking into account that $\w_\varphi^r(f, A, B, t)_{ w} = \w_\varphi^r(f-q, A, B, t)_{ w}$, for any $q\in\Pi_r$, we immediately get the following corollary.
\begin{corollary} \label{cornormestimate}
If $\mathcal Z\in{\mathbb Z}_M$, $w \in {\mathcal{W}}^*(\mathcal Z)$, $f\in\mathbb{L}_\infty^w$, $r\in\mathbb N$, and $A, B, t >0$, then
\[
\w_\varphi^r(f, A, B, t)_{ w} \leq c E_r(f, [-1,1])_{w},
\]
where $c$ depends only on $r$, $A$ and the weight $w$.
\end{corollary}
\begin{lemma} \label{twot}
If $\mathcal Z\in{\mathbb Z}_M$, $w \in {\mathcal{W}}^*(\mathcal Z)$, $f\in\mathbb{L}_\infty^w$, $r \in\mathbb N$, and $A, t >0$, then
\be \label{ineqtwot}
\Omega_\varphi^r(f, A, 2 t)_{ w} \leq c \Omega_\varphi^r(f, \sqrt{2} A , \sqrt{2} t)_{ w} ,
\ee
where $c$ depends only on $r$, $A$ and the weight $w$.
\end{lemma}
Now, Proposition~\ref{propprop}(\ref{piii}) and (\ref{piv}) and \lem{twot} imply the following result.
\begin{corollary} \label{corollary411}
If $\mathcal Z\in{\mathbb Z}_M$, $w \in {\mathcal{W}}^*(\mathcal Z)$, $f\in\mathbb{L}_\infty^w$, $r \in\mathbb N$, and $A, t >0$, then
\[
\Omega_\varphi^r(f, A, t)_{ w} \sim \Omega_\varphi^r(f, \sqrt{2} A, t)_{ w} ,
\]
and so
\[
\Omega_\varphi^r(f, A, t)_{ w} \sim \Omega_\varphi^r(f, 1 , t)_{ w} ,
\]
where the equivalence constants depend only on $r$, $A$ and the weight $w$. Moreover,
\[
\Omega_\varphi^r(f, 1 , t)_{ w} \leq \Omega_\varphi^r(f, 1 , 2t)_{ w} \leq c \Omega_\varphi^r(f, 1 , t)_{ w} ,
\]
where $c$ depends only on $r$ and the weight $w$.
\end{corollary}
\begin{proof}[Proof of \lem{twot}]
Recall the well-known identity (see \cite{pp}*{(5) on p. 42}, for example)
\be \label{pp}
\Delta_{2 h}^r (f, x) = \sum_{i_1=0}^{1} \dots \sum_{i_r=0}^{1} \Delta_{h}^r \left(f, x + [i_1 +\dots +i_r - r/2]h \right) .
\ee
Now, we fix $h \in (0, t]$, and let $x$ be a fixed number such that $[x-r h\varphi(x) , x+r h\varphi(x) ] \subset {\mathcal I}_{A, 2h}$ ({\em i.e., } $x\in\Dom(A, 2h, r)$). We have
\begin{eqnarray*}
\left|\Delta_{2h\varphi(x)}^r(f,x, {\mathcal I}_{A, 2h})\right| & \leq & \sum_{i_1=0}^{1} \dots \sum_{i_r=0}^{1} \left|\Delta_{h\varphi(x)}^r \left(f, x + [i_1 +\dots +i_r - r/2]h\varphi(x) \right)\right| \\
& \leq & 2^r \left|\Delta_{h\varphi(x)}^r \left(f, y \right)\right| =: 2^r F ,
\end{eqnarray*}
where $y := x + \gamma h\varphi(x)$, and $\gamma$ is such that $\gamma +r/2 \in \{0, 1, \dots, r\}$ (and so $|\gamma| \leq r /2$) and
\[
\left|\Delta_{h\varphi(x)}^r \left(f, y \right)\right| = \max_{0\leq m \leq r} \left|\Delta_{h\varphi(x)}^r \left(f, x + [m - r/2]h\varphi(x) \right)\right| .
\]
Note that \lem{auaulemma} implies that $w(x)\sim w(y)$.
Also, since $x\pm r h\varphi(x) \in [-1,1]$, we have $|x| \leq (1-r^2h^2)/(1+r^2h^2)$, which implies
\[
{|x| \over \varphi(x)} \leq {1-r^2h^2 \over 2rh} ,
\]
and so
\begin{eqnarray*}
\left[\varphi(y ) \over \varphi(x) \right]^2 & = & 1 - \gamma^2 h^2 - 2\gamma h {x \over \varphi(x)} \geq 1 - \gamma^2 h^2 - 2 |\gamma| h {|x| \over \varphi(x)} \geq \frac{1}{2} + \frac{r^2h^2}{4} \geq \frac{1}{2}.
\end{eqnarray*}
Therefore, $ \varphi(x) \leq \sqrt{2} \varphi(y )$, and
\[
w(x) F \leq c w (y) \left|\Delta_{h^*\varphi(y)}^r \left(f, y \right)\right| ,
\]
where
\[
0<h^* := {h\varphi(x) \over \varphi(y)} \leq \sqrt{2} h \leq \sqrt{2} t .
\]
We now note that $\rho(2h,z_j) \geq \sqrt{2}\rho(h^* , z_j)$, which implies ${\mathcal I}_{A, 2h} \subset {\mathcal I}_{\sqrt{2}A , h^*}$, and so
\begin{eqnarray*}
&& \left[y - rh^*\varphi (y)/2, y + rh^*\varphi (y)/2 \right] = \left[ x +(\gamma-r/2) h \varphi(x), x +(\gamma+r/2) h \varphi(x) \right] \\
&& \mbox{} \subset \left[ x -r h \varphi(x), x + r h \varphi(x) \right] \subset {\mathcal I}_{A, 2h} \subset {\mathcal I}_{\sqrt{2}A , h^*}.
\end{eqnarray*}
Therefore, $\Delta_{h^*\varphi(y)}^r \left(f, y \right) = \Delta_{ h^* \varphi(y)}^r \left(f, y , {\mathcal I}_{\sqrt{2}A, h^*} \right)$, and so we have
\begin{eqnarray*}
w(x) F &\leq & c \sup_{0<h^* \leq \sqrt{2} t} \esssup_{y } w (y) \left|\Delta_{ h^* \varphi(y)}^r \left(f, y , {\mathcal I}_{\sqrt{2}A, h^*} \right)\right| \leq c \Omega_\varphi^r(f, \sqrt{2}A , \sqrt{2} t)_{ w}
\end{eqnarray*}
for almost all $x\in\Dom(A, 2h, r)$. The lemma is now proved.
\end{proof}
\begin{lemma} \label{newlem21}
Let $\mathcal Z\in{\mathbb Z}_M$, $w \in {\mathcal{W}}^*(\mathcal Z)$, $r\in\mathbb N$, $z \in \mathcal Z$, $z\neq 1$, $0<\e <\delta(\mathcal Z)/2$, $I := [z+\e/2,z+\e]$, and let $J := [z+\e,z+\e+\delta]$ with $\delta$ such that $0< \delta \leq \e/(2r)$.
Then, for any $h\in [\delta, \e/(2r)]$ and any polynomial $q\in\Pi_r$, we have
\be \label{osn}
\norm{w(f-q)}{J} \leq c \norm{ w(\cdot) \Delta_{h}^r (f, \cdot, I\cup J)}{I\cup J} + c \norm{w(f-q)}{I} .
\ee
Additionally,
\be \label{osn2}
\norm{w(f-q)}{J} \leq c \Omega_\varphi^r(f, 54 \delta t/\e)_{I\cup J, w} + c \norm{w(f-q)}{I} ,
\ee
where $0<t < 1$ is such that $\e = \rho(t,z)$, and all constants $c$ depend only on $r$ and the weight $w$.
\end{lemma}
\begin{remark}
By symmetry, the statement of the lemma is also valid for $I:= [z-\e,z-\e/2]$ and $J := [z-\e-\delta,z-\e]$, where $z\in\mathcal Z$ is such that $z\neq -1$.
\end{remark}
\begin{remark}
The condition $\e <\delta(\mathcal Z)/2$ guarantees that $I$ is ``far'' from all other points in $\mathcal Z$. In particular, $[z+\e/2,z+2\e] \cap \left(\mathcal Z\cup\{\pm 1\}\right) = \emptyset$.
\end{remark}
\begin{proof}[Proof of \lem{newlem21}]
Denoting for convenience $g := f-q$ and taking into account that $\Delta_{h}^r (g, x, \mathbb R) = \Delta_{h}^r (f, x, \mathbb R)$ we have
\[
g(x+rh/2) = \Delta_{h}^r (f, x,\mathbb R) - \sum_{i=0}^{r-1} {r \choose i} (-1)^{r-i} g(x-rh/2+ih).
\]
We now fix $h \in [\delta , \e/(2r)]$, and note that, for any $x$ such that $x+rh/2 \in J$, we have $[x-rh/2, x+(r-2)h/2] \subset I$, and so
\begin{eqnarray*}
\norm{g}{J} &\leq& \norm{ \Delta_{h}^r (f, \cdot,\mathbb R)}{[z+\e-rh/2, z+\e+\delta - rh/2]} + (2^r-1) \norm{g}{I} \\
& \leq & \norm{ \Delta_{h}^r (f, \cdot, I\cup J)}{I\cup J} + (2^r-1) \norm{g}{I} .
\end{eqnarray*}
Suppose now that $0<t < 1$ is such that $\e = \rho(t, z)$, let $n := \lfloor 1/t \rfloor \in \mathbb N$, and pick $A$ so that $\e =A \rho_n(z)$. Note that $\rho_{n+1}(z) < \e \leq \rho_n(z)$ and $1/4 < A \leq 1$. Hence, $\dist\left(I\cup J, z\right) = \e/2 \geq \rho_n(z)/8$.
Suppose now that $\widetilde z \in \mathcal Z$ is such that $\widetilde z > z$ and $(z, \widetilde z) \cap \mathcal Z = \emptyset$, {\em i.e., } $\widetilde z$ is the ``next'' point from $\mathcal Z$ to the right of $z$ (if there is no such $\widetilde z$ then there is nothing to do, and the next paragraph can be skipped). We will now show that $\widetilde d := \dist\left(I\cup J, \widetilde z\right) \geq \rho_n(\widetilde z)/20$. Indeed, $\widetilde d = \widetilde z - z - (\e+\delta) \geq \delta(\mathcal Z) -3\e/2 \geq \e/2$. If $\e > \rho_n(\widetilde z)/10$, then we are done, and so we suppose that $\e \leq \rho_n(\widetilde z)/10$. Recall (see {\em e.g. } \cite{k-singular}*{p. 27}) the well known fact that
\be \label{usual}
\rho_n(u)^2 \leq 4 \rho_n(v) (|u-v|+\rho_n(v)) , \quad \mbox{\rm for all }\; u,v\in [-1,1] .
\ee
This implies
\[
|\widetilde z - z| \geq {\rho_n(\widetilde z)^2 \over 4 \rho_n(z)} - \rho_n(z)\geq {5A \over 2} \rho_n(\widetilde z) - {\e \over A} \geq \left( {5A \over 2} - {1 \over 10 A} \right) \rho_n(\widetilde z) \geq (9/40) \rho_n(\widetilde z) .
\]
Also, $|\widetilde z - z| = \widetilde d + \e+\delta \leq \widetilde d + 3\e/2 \leq 4 \widetilde d$, which implies $\widetilde d \geq (9/160) \rho_n(\widetilde z) \geq \rho_n(\widetilde z)/20$ as needed. Therefore, we can conclude that
\[
I\cup J \subset {\mathcal I}_{1/20, 1/n} .
\]
Now, using \ineq{usual} we conclude that, if $u\in I\cup J$, then $|u-z| \leq 3\e/2 \leq 3\rho_n(z)/2$, and so
\[
\rho_n(u)^2 \leq 4 \rho_n(z) (|u-z|+\rho_n(z)) \leq 10 \rho_n(z)^2
\]
and
\[
\rho_n(z)^2 \leq 4 \rho_n(u) (|u-z|+\rho_n(u)) \leq 4 \rho_n(u) (3\rho_n(z)/2 +\rho_n(u)) .
\]
This implies that, for any $u\in I\cup J$,
\[
\rho_n(u)/4 \leq \rho_n(z) \leq 7 \rho_n(u) .
\]
Hence, for any $u, v \in I\cup J$,
\[
|u-v| \leq \e \leq \rho_n(z) \leq 7 \rho_n(u) .
\]
It now follows from \lemp{iv} that $w(u) \sim w(v)$, for any $u, v \in I\cup J$, and so
\[
\norm{w g}{J} \leq c \norm{ w(\cdot) \Delta_{h}^r (f, \cdot, I\cup J)}{I\cup J} + c \norm{wg}{I} ,
\]
and \ineq{osn} is proved. In order to prove \ineq{osn2}, we note that, for any $x\in I\cup J$,
\[
1-|x| \geq \e/2 = \rho (t,z)/2 \geq t^2/2 ,
\]
which implies $\varphi(x) \geq t/\sqrt{2}$, and so, with $h:= \delta$, we have
\[
0< {h \over \varphi(x)} \leq {\delta \rho_n(z) \over \e \varphi(x)} \leq {7 \delta \rho_n(x) \over \e \varphi(x)} \leq {7\delta \over \e n} \left(1 + {\sqrt{2} \over nt} \right) \leq {7\delta t \over \e} \left(2 + 4\sqrt{2} \right) \leq {54\delta t/\e} .
\]
Therefore, for almost all $x \in I\cup J$, denoting $h^* := h/\varphi(x)$ we have
\begin{eqnarray*}
w(x) \Delta_{h}^r (f, x, I\cup J) & = & w(x) \Delta_{h^* \varphi(x)}^r (f, x, I\cup J) \\
&\leq & \sup_{0<h\leq 54\delta t/\e} \norm{ w(\cdot) \Delta_{h \varphi(\cdot) }^r (f, \cdot, I\cup J)}{} ,
\end{eqnarray*}
and the proof of \ineq{osn2} is complete.
\end{proof}
\begin{corollary} \label{corol54}
Let $\mathcal Z\in{\mathbb Z}_M$, $w \in{\mathcal{W}}^*(\mathcal Z)$, $r\in\mathbb N$, $B>0$, and let $0<t < c_0$, where $c_0$ is such that $\max_{1\leq j \leq M} \rho(c_0,z_j) \leq \delta(\mathcal Z)/(2B)$ (for example, $c_0 := \min\{ 1, \delta(\mathcal Z)/(4B)\}$ will do). Then,
\[
\w_\varphi^r\left(f, 1, B(1+1/(2r)) , t \right)_{ w} \leq c \w_\varphi^r(f, 1, B , t)_{ w} ,
\]
where the constant $c$ depends only on $r$, $B$ and the weight $w$.
\end{corollary}
Taking into account that $(1+1/(2r))^m \geq 2$ for $m = \lceil 1/\log_2(1+1/(2r))\rceil$, we immediately get the following result.
\begin{corollary} \label{cor2.14}
Let $\mathcal Z\in{\mathbb Z}_M$, $w \in{\mathcal{W}}^*(\mathcal Z)$, $r\in\mathbb N$, $B>0$, and let $0<t < c_0$, where $c_0$ is such that $\max_{1\leq j \leq M} \rho(c_0,z_j) \leq \delta(\mathcal Z)/(2B)$ (for example, $c_0 := \min\{ 1, \delta(\mathcal Z)/(4B)\}$ will do). Then,
\[
\w_\varphi^r\left(f, 1, B , t \right)_{ w} \leq c \w_\varphi^r(f, 1, B/2 , t)_{w} ,
\]
where the constant $c$ depends only on $r$, $B$ and the weight $w$.
\end{corollary}
\begin{proof}[Proof of \cor{corol54}]
For each $1\leq j \leq M$, let $\e_j := B \rho(t,z_j)$ and note that $\e_j < \delta(\mathcal Z)/2$. It follows from \lem{newlem21} and the remark after it that, for any $q_j \in \Pi_r$, $ \delta_j := \e_j/(2r) $ and $\tau_j$ such that $\rho(\tau_j, z_j) = B \rho(t,z_j) = \e_j$, we have
\[
\norm{w(f-q_j)}{J_j^r} \leq c \Omega_\varphi^r(f, 27 \tau_j/r)_{I_j^r\cup J_j^r, w} + c \norm{w(f-q_j)}{I_j^r} ,
\]
where $I_j^r := [z_j+\e_j/2,z_j+\e_j]$ and $J_j^r := [z_j+\e_j,z_j+\e_j+\delta_j]$, and
\[
\norm{w(f-q_j)}{J_j^l} \leq c \Omega_\varphi^r(f, 27 \tau_j/r)_{I_j^l\cup J_j^l, w} + c \norm{w(f-q_j)}{I_j^l} ,
\]
where $I_j^l := [z_j-\e_j, z_j-\e_j/2]$ and $J_j^l := [z_j-\e_j-\delta_j, z_j-\e_j]$. Note that, if $z_j = 1$ or $-1$, we do not consider $I_j^r$, $J_j^r$ or $I_j^l$, $J_j^l$, respectively. We now note that $[z_j-\e_j, z_j+\e_j] \cap [-1,1] = \mathcal Z_{B, t}^j$ and $[z_j-\e_j-\delta_j, z_j+\e_j+\delta_j] \cap [-1,1] = \mathcal Z_{\widetilde B, t}^j$, where $\widetilde B := B(1 +1/(2r))$.
Letting $q_j\in \Pi_r$ be such that
\[
\norm{w(f-q_j)}{\mathcal Z_{B,t}^j} \leq c E_r(f, \mathcal Z_{B,t}^j)_{w} ,
\]
we have
\begin{eqnarray*}
E_r(f, \mathcal Z_{\widetilde B, t}^j)_{w} & \leq & \norm{w(f-q_j)}{\mathcal Z_{\widetilde B, t}^j} \\
&\leq & \norm{w(f-q_j)}{J_j^l} + \norm{w(f-q_j)}{\mathcal Z_{ B, t}^j} + \norm{w(f-q_j)}{J_j^r} \\
& \leq & c \norm{w(f-q_j)}{\mathcal Z_{ B, t}^j} + c \Omega_\varphi^r(f, 27 \tau_j/r)_{I_j^l\cup J_j^l, w} + c \Omega_\varphi^r(f, 27 \tau_j/r)_{I_j^r\cup J_j^r, w} \\
& \leq & c E_r(f, \mathcal Z_{B,t}^j)_{ w} + c \Omega_\varphi^r(f, 27 \tau_j/r)_{I_j^l\cup J_j^l, w} + c \Omega_\varphi^r(f, 27 \tau_j/r)_{I_j^r\cup J_j^r, w} .
\end{eqnarray*}
Now, note that
\[
I_j^r\cup J_j^r \subset {\mathcal I}_{B/2,t} \quad\mbox{\rm and}\quad I_j^l\cup J_j^l \subset {\mathcal I}_{B/2,t} .
\]
Hence, taking into account that $\tau_j \leq \max\{B, \sqrt{B} \} t$ we get
\begin{eqnarray*}
E_r(f, \mathcal Z_{\widetilde B, t}^j)_{w} &\leq& c E_r(f,\mathcal Z_{B,t}^j)_{ w} + c \Omega_\varphi^r(f, 27 \tau_j/r)_{{\mathcal I}_{B/2,t}, w} \\
&\leq& c E_r(f,\mathcal Z_{B,t}^j)_{w} + c \Omega_\varphi^r \left(f, 27 \max\{B, \sqrt{B} \} t/r\right)_{{\mathcal I}_{B/2,t}, w} .
\end{eqnarray*}
Now, with $c_* := 27 \max\{B, \sqrt{B} \}/r$, Proposition~\ref{propprop}(\ref{pvi}) and \cor{corollary411} imply
\[
\Omega_\varphi^r \left(f, c_* t\right)_{{\mathcal I}_{B/2,t}, w} \leq \Omega_\varphi^r \left( f, B/(2\max\{c_*, c_*^2\}), c_* t\right)_{ w} \leq c \Omega_\varphi^r \left( f, 1, t\right)_{w} .
\]
Therefore,
\begin{eqnarray*}
\w_\varphi^r(f, 1, \widetilde B, t)_{ w} &=& \Omega_\varphi^r(f, 1, t)_{ w} + \sum_{j=1}^M E_r(f, \mathcal Z_{\widetilde B,t}^j)_{ w } \\
& \leq & c \Omega_\varphi^r(f, 1, t)_{ w} + c \sum_{j=1}^M E_r(f, \mathcal Z_{B,t}^j)_{w} \\
& \leq & c \w_\varphi^r(f, 1, B, t)_{w} ,
\end{eqnarray*}
and the proof is complete.
\end{proof}
\sect{Auxiliary results}
\begin{theorem}[\mbox{\cite[(6.10)]{mt2000}}] \label{thmremez}
Let $W$ be a $2\pi$-periodic function which is an $A^*${} weight on $[0, 2\pi]$. Then there is a constant $C>0$ such that if $T_n$ is a trigonometric polynomial of degree at most $n$ and $E$ is a measurable subset of $[0, 2\pi]$ of measure at most $\Lambda/n$, $1\leq \Lambda\leq n$, then
\[
\norm{ T_n W}{[0, 2\pi]} \leq C^{\Lambda} \norm{ T_n W }{[0, 2\pi]\setminus E} .
\]
\end{theorem}
The following result is essentially proved in \cite{mt2000}. However, since it was not stated there explicitly, we sketch its very short proof below.
\begin{corollary} \label{cordw}
Let $w\in A^*_{L^*}$. If $E \subset [-1,1]$ is such that $ \int_E (1-x^2)^{-1/2} dx \leq \lambda/n$ with $\lambda\leq n/2$, then for each $P_n\in\Pi_n$, we have
\[
\norm{P_n w}{[-1,1]} \leq c \norm{P_n w}{[-1,1]\setminus E} ,
\]
where the constant $c$ depends only on $\lambda$ and $L^*$.
\end{corollary}
\begin{proof}
Let $W(t) := w(\cos t)$, $T_n(t) := P_n(\cos t)$ and $\widetilde E:= \left\{ 0\leq t \leq 2\pi \;\; \big| \;\; \cos t \in E\right\}$. Note that $W$ is a $2\pi$-periodic function which is an $A^*${} weight on $[0, 2\pi]$ (see \cite[p. 68]{mt2000}), and
\[
\meas(\widetilde E) = \int_{\widetilde E} dt = 2 \int_{\widetilde E \cap [0, \pi]} dt = 2 \int_E (1-x^2)^{-1/2} dx \leq 2\lambda/n .
\]
Hence,
\begin{eqnarray*}
\norm{P_n w}{} & = & \norm{T_n W}{[0,2\pi]} \leq c \norm{ T_n W }{[0, 2\pi]\setminus \widetilde E} = c \norm{P_n w}{[-1,1]\setminus E}.
\end{eqnarray*}
\end{proof}
\begin{lemma}[\mbox{\cite[(7.27)]{mt2000}}] \label{auxlemma}
Let $w$ be an $A^*${} weight on $[-1,1]$. Then, for all $n\in\mathbb N$ and $P_n\in\Pi_n$,
\[
\norm{P_n w}{} \sim \norm{P_n w_n}{}
\]
with the equivalence constants independent of $P_n$ and $n$.
\end{lemma}
It is convenient to denote $\varphi_n(x) := \varphi(x) + 1/n$, $n\in\mathbb N$, and note that $w := \varphi$ is an $A^*${} weight and $w_n \sim \varphi_n$ on $[-1,1]$. One of the applications of \cor{cordw} is the following quite useful result.
\begin{theorem} \label{mainauxthm}
Let $w$ be an $A^*${} weight, $n\in\mathbb N$, $0\leq \mu \leq n$. Then, for any $P_n \in \Pi_n$,
\be \label{mainauxineq1}
\norm{w \varphi^\mu P_n}{} \sim \norm{w_n \varphi^\mu P_n}{}
\ee
and
\be \label{mainauxineq2}
\norm{w \lambda_n^\mu P_n}{} \sim \norm{w_n \lambda_n^\mu P_n}{} ,
\ee
where $\lambda_n(x) := \max\left\{ \sqrt{1-x^2} , 1/n \right\}$, and the equivalence constants are independent of $\mu$, $n$ and $P_n$.
\end{theorem}
\begin{proof}
We start with the equivalence \ineq{mainauxineq1}. Let $m:= 2\lfloor \mu/2 \rfloor$. Then $m$ is an even integer such that $\mu-2 < m \leq \mu$ (note that $m=0$ if $\mu<2$), and $Q_{n+m} := \varphi^m P_n \in \Pi_{n+m} \subset \Pi_{2n}$. Since $w$ is an $A^*${} weight, $w \varphi^{\gamma}$, $\gamma>0$, is also an $A^*${} weight (see \rem{remark25}) and
\[
(w \varphi^{\gamma })_n \sim w_n \varphi_n^{\gamma },
\]
where the equivalence constants depend on $\lceil \gamma\rceil $ and the doubling constant of $w$. Hence, denoting $E_n :=[-1+n^{-2}, 1-n^{-2}]$, $\eta := \mu-m$, noting that $0\leq \eta <2 $ (and so $\lceil \eta \rceil$ is either $0$, $1$ or $2$, allowing us to replace constants that depend on $\lceil \eta \rceil$ by those independent of $\eta$), and using Lemmas~\ref{auxlemma} and \ref{lemma26}, \cor{cordw}, and the observation that $w_n(x)\sim w_k(x)$ if $n\sim k$, we have
\begin{eqnarray*}
\norm{\varphi^\mu w P_n}{} &=& \norm{\varphi^{\eta} w Q_{n+m}}{} \sim \norm{(w \varphi^{\eta })_n Q_{n+m}}{} \sim \norm{ (w \varphi^{\eta})_n Q_{n+m}}{E_n} \\
& \sim & \norm{ w_n \varphi_n^{\eta } Q_{n+m}}{E_n} \sim \norm{w_n \varphi^{\eta } Q_{n+m}}{E_n} .
\end{eqnarray*} Since $w_n \varphi^{\eta }$ is an $A^*${} weight (see \rem{remark25}), we can continue as follows: \begin{eqnarray*} \norm{ w_n \varphi^{\eta } Q_{n+m}}{E_n} \sim \norm{ w_n \varphi^{\eta } Q_{n+m}}{} = \norm{w_n \varphi^\mu P_{n}}{} . \end{eqnarray*} Note that none of the constants in the equivalences above depend on $\mu$. This completes the proof of \ineq{mainauxineq1}. Now, let ${\mathcal E}_n := \left\{ x \;\; \big| \;\; \sqrt{1-x^2} \leq 1/n \right\}$ and note that $\lambda_n(x) =1/n$ if $x\in {\mathcal E}_n$, and $\lambda_n(x) =\varphi(x)$ if $x\in [-1,1]\setminus {\mathcal E}_n$. Using \ineq{mainauxineq1} we have \begin{eqnarray*} \norm{w \lambda_n^\mu P_n}{} & \leq & \norm{w \lambda_n^\mu P_n}{{\mathcal E}_n} + \norm{w \lambda_n^\mu P_n}{[-1,1]\setminus {\mathcal E}_n} \\ & = & n^{- \mu} \norm{w P_n}{{\mathcal E}_n} + \norm{w \varphi^\mu P_n}{[-1,1]\setminus {\mathcal E}_n} \\ & \leq & n^{- \mu} \norm{w P_n}{} + \norm{w \varphi^\mu P_n}{} \\ & \leq & c_0 \left( n^{- \mu} \norm{w_n P_n}{} + \norm{w_n \varphi^\mu P_n}{}\right) \\ & \leq & 2c_0 \norm{w_n \lambda_n^\mu P_n}{} . \end{eqnarray*} In the other direction, the sequence of inequalities is exactly the same (switching $w$ and $w_n$). This verifies \ineq{mainauxineq2}. \end{proof} If we allow constants to depend on $\mu$, then we have the following result. \begin{corollary} \label{newjust1} Let $w$ be an $A^*${} weight, $n\in\mathbb N$ and $\mu \geq 0$. Then, for any $P_n \in \Pi_n$, \[ \norm{w \varphi_n^\mu P_n}{} \sim \norm{w \varphi^\mu P_n}{} \sim \norm{w_n \varphi^\mu P_n}{} \sim \norm{ w_n \varphi_n^\mu P_n}{} , \] where all equivalence constants are independent of $n$ and $P_n$. \end{corollary} \begin{proof} Since $\lambda_n(x) \leq \varphi_n(x) \leq 2 \lambda_n(x)$ and $\varphi(x) \leq \varphi_n(x)$, we immediately get from \thm{mainauxthm} \[ \norm{w \varphi^\mu P_n}{} \sim \norm{w_n \varphi^\mu P_n}{} \leq \norm{w_n\varphi_n^\mu P_n}{}\sim \norm{w\varphi_n^\mu P_n}{} . 
\]
At the same time,
\[
\norm{w \varphi^\mu P_n}{} \sim \norm{ (w\varphi^{\mu })_n P_n}{} \sim \norm{w_n \varphi_n^{\mu} P_n}{} ,
\]
and the proof is complete.
\end{proof}
\begin{theorem}[Markov-Bernstein type theorem] \label{thm5.5}
Let $w$ be an $A^*${} weight and $r\in\mathbb N$. Then, for all $n\in\mathbb N$ and $P_n\in\Pi_n$,
\[
n^{-r} \norm{w \varphi^r P_n^{(r)}}{} \sim n^{-r} \norm{w_n \varphi^r P_n^{(r)}}{} \sim \norm{w_n \rho_n^r P_n^{(r)}}{} \sim \norm{w \rho_n^r P_n^{(r)}}{} \leq c \norm{w P_n }{} \sim \norm{w_n P_n }{} ,
\]
where the constant $c$ and all equivalence constants are independent of $n$ and $P_n$.
\end{theorem}
\begin{proof}
The statement of the theorem is an immediate consequence of \cor{newjust1} and either of the estimates
\[
\norm{w_n \rho_n^r P_n^{(r)} }{} \leq c \norm{w_n P_n}{} ,
\]
(see \cite[Lemma 6.1]{k-acta}, for example), or
\[
\norm{w \varphi^r P_n^{(r)} }{} \leq c n^r \norm{w P_n}{} ,
\]
(see \cite[(7.29)]{mt2000} or \cite[(2.5)]{mt2001}), where the constant $c$ depends only on $r$ and the $A^*${} constant of $w$.
\end{proof}
\begin{lemma} \label{lem8.5j}
Let $w$ be an $A^*${} weight, $A >0$ and $\mathcal Z\in{\mathbb Z}_M$. Then for any $n, r\in\mathbb N$, $1\leq j \leq M$, and any polynomials $Q_n\in\Pi_n$ and $q_r \in \Pi_r$ satisfying $Q_n^{(\nu)}(z_j)=q_r^{(\nu)}(z_j)$, $0\leq \nu\leq r-1$, the following inequality holds
\[
\norm{w(Q_n-q_r)}{\mathcal Z_{A, 1/n}^j} \leq c n^{-r} \norm{w \varphi^r Q_n^{(r)} }{} ,
\]
where the constant $c$ depends only on $r$, $A$ and the weight $w$.
\end{lemma}
\begin{proof}
Denote $I := \mathcal Z_{A, 1/n}^j$, $z:=z_j$, and note that $(Q_n - q_r)^{(\nu)}(z)=0$, $0\leq \nu\leq r-1$.
Using Taylor's theorem with the integral remainder we have
\[
Q_n(x) - q_r(x) = {1 \over (r-1)!} \int_z^x (x-u)^{r-1} Q_n^{(r)}(u) du ,
\]
which implies
\begin{eqnarray*}
\norm{w(Q_n-q_r)}{I} & \leq & \sup_{x\in I} w(x) \left|\int_z^x (x-u)^{r-1} Q_n^{(r)}(u) du\right| \leq \norm{Q_n^{(r)}}{I} \sup_{x\in I} w(x)|x-z|^r \\
& \leq & \left(A\rho_n(z)\right)^r \norm{Q_n^{(r)}}{I} \sup_{x\in I} w(x) \leq c \norm{\rho_n^r Q_n^{(r)}}{I} {1 \over |I|} w(I) ,
\end{eqnarray*}
where, in the last inequality, we used the fact that $w$ is an $A^*${} weight and $\rho_n(x)\sim \rho_n(z)$, $x\in I$. Now, since $w$ is doubling, $w(I)/|I| \leq c w[z-\rho_n(z), z+\rho_n(z)]/|I| \leq c w_n(z) \leq c w_n(x)$, $x\in I$, and so
\[
\norm{w(Q_n-q_r)}{I} \leq c \norm{w_n \rho_n^r Q_n^{(r)}}{I} \leq c n^{-r} \norm{w \varphi^r Q_n^{(r)}}{} ,
\]
where the last estimate follows from \thm{thm5.5}.
\end{proof}
\begin{lemma} \label{lem7.2}
Let $\mathcal Z\in{\mathbb Z}_M$, $w\in {\mathcal{W}}^*(\mathcal Z)$, $c_*>0$, $n,r\in\mathbb N$, $A>0$ and $0<t\leq c_*/n$. Then, for any $P_n\in\Pi_n$, we have
\[
\Omega_\varphi^r(P_n, A, t)_{w} \leq c t^r \norm{w \varphi^r P_n^{(r)}}{} ,
\]
where $c$ depends only on $r$, $c_*$ and the weight $w$.
\end{lemma}
\begin{remark}
Using the same method as the one used to prove \cite{k-singular}*{Lemma 8.2}, one can show that a stronger result than \lem{lem7.2} is valid. Namely, if $f$ is such that $f^{(r-1)}\in {\mathcal A}C_\mathrm{loc} \left( (-1,1)\setminus \mathcal Z \right)$ and $\norm{w \varphi^r f^{(r)}}{} < \infty$, then
\[
\Omega_\varphi^r(f, A, t)_{w} \leq c t^r \norm{w \varphi^r f^{(r)}}{} , \quad t>0.
\]
However, \lem{lem7.2}, whose proof is simpler and shorter, is sufficient for our purposes.
\end{remark}
\begin{proof}[Proof of \lem{lem7.2}]
It follows from \cite{k-acta}*{Lemma 7.2} and \cor{newjust1} that, for any $c_*>0$ and $0<t \leq c_*/n$,
\[
\w_\varphi^r(P_n, t)_{w_n} \leq c t^r \norm{w_n \varphi^r P_n^{(r)}}{} \leq c t^r \norm{w \varphi^r P_n^{(r)}}{},
\]
where the constants $c$ depend on $r$, $c_*$ and the weight $w$. Therefore, since any $A^*${} weight $w$ satisfies $w(x) \leq c w_n(x)$, for any $x\in [-1,1]$ and $n\in\mathbb N$ (see \ineq{wlesswn}), we have
\begin{eqnarray*}
\Omega_\varphi^r(P_n, A, t)_{w} &= & \sup_{0<h\leq t} \norm{w(\cdot) \Delta_{h\varphi(\cdot)}^r(P_n,\cdot, {\mathcal I}_{A,h})}{} \leq c \sup_{0<h\leq t} \norm{w_n(\cdot) \Delta_{h\varphi(\cdot)}^r(P_n,\cdot, [-1,1])}{}\\
& \leq & c \w_\varphi^r(P_n, t)_{w_n}\leq c t^r \norm{w \varphi^r P_n^{(r)}}{} .
\end{eqnarray*}
\end{proof}
\sect{Direct theorem}
\begin{theorem} \label{jacksonthm}
Let $w\in{\mathcal{W}}^*(\mathcal Z)$, $r,\nu_0\in\mathbb N$, $\nu_0\geq r$, $\vartheta>0$, $f\in\mathbb{L}_\infty^w$ and $B>0$. Then, there exists $N\in\mathbb N$ depending on $r$, $\vartheta$ and the weight $w$, such that for every $n \geq N$, there is a polynomial $P_n \in\Pi_n$ satisfying
\be \label{thing1}
\norm{w(f-P_n)}{} \leq c \w_\varphi^r(f, 1, B, \vartheta/n)_{ w }
\ee
and
\be \label{thing2}
\norm{ w \varphi^\nu P_n^{(\nu)}}{} \leq c n^\nu \w_\varphi^r(f, 1, B, \vartheta/n)_{ w } , \quad r\leq \nu \leq \nu_0,
\ee
where the constants $c$ depend only on $r$, $\nu_0$, $B$, $\vartheta$ and the weight $w$.
\end{theorem}
We use an idea from \cite[Section 3.2]{mt2001} and deduce \thm{jacksonthm} from the following result that was proved in \cite{k-acta}.
\begin{theorem}[\mbox{\cite{k-acta}*{Theorem 5.3 ($p=\infty$)}}] \label{jacksonthmacta}
Let $w$ be a doubling weight, $r, \nu_0\in\mathbb N$, $\nu_0\geq r$, and $f\in\mathbb{L}_\infty[-1,1]$.
Then, for every $n \geq r$ and $0<\vartheta\leq 1$, there exists a polynomial $P_n \in\Pi_n$ such that
\[
\norm{w_n(f-P_n)}{} \leq c \w_\varphi^r(f, \vartheta/n)_{w_n}
\]
and
\[
\norm{w_n \rho_n^\nu P_n^{(\nu)}}{} \leq c \w_\varphi^r(f, \vartheta/n)_{w_n} , \quad r\leq \nu \leq \nu_0,
\]
where the constants $c$ depend only on $r$, $\nu_0$, $\vartheta$ and the doubling constant of $w$.
\end{theorem}
\begin{proof}[Proof of \thm{jacksonthm}]
Since $\w_\varphi^r(f, 1, B, t)_{ w }$ is a nondecreasing function of $t$, without loss of generality we can assume that $\vartheta \leq 1/(2r)$. Suppose that $N\in\mathbb N$ is such that $N\geq \max\{ r, 100/(\vartheta \delta(\mathcal Z))\}$, $n\geq N$, and let $(x_i)_{i=0}^n$ be the Chebyshev partition of $[-1,1]$, {\em i.e., } $x_i = \cos(i\pi/n)$, $0\leq i \leq n$ (for convenience, we also denote $x_i :=-1$, $i\geq n+1$, and $x_i :=1$, $i\leq -1$). As usual, we let $I_i := [x_i, x_{i-1}]$ for $1\leq i\leq n$. Note that each (nonempty) interval $[z_j, z_{j+1}]$, $0\leq j\leq M$, contains at least $10$ intervals $I_i$. For each $1\leq j\leq M$, denote
\[
\nu_j := \min \left\{ i \;\; \big| \;\; 1\leq i\leq n \quad\mbox{\rm and}\quad z_j \in I_i \right\} \quad\mbox{\rm and}\quad J_j:= [x_{\nu_j+1}, x_{\nu_j-2}] .
\]
Note that $\min$ in the definition of $\nu_j$ is needed if $z_j$ belongs to more than one (closed) interval $I_i$ (in which case $\nu_j$ is chosen so that $z_j$ is the left endpoint of $I_{\nu_j}$). Let $q_j \in \Pi_r$ be a polynomial of near best weighted approximation of $f$ on $J_j$, {\em i.e., } $\norm{w(f-q_j)}{J_j} \leq c E_r(f, J_j)_{w}$, $1\leq j \leq M$, and define
\[
F(x):= \begin{cases} q_j(x) , & \mbox{\rm if }\; x \in J_j , \; 1\leq j \leq M ,\\ f(x), & \mbox{\rm otherwise.} \end{cases}
\]
Since (see \cite{k-singular}*{p.
27}, for example) $|I_i|/3 \leq |I_{i+1}| \leq 3 |I_i|$, $1\leq i \leq n-1$, and $\rho_n(x) \leq |I_i| \leq 5 \rho_n(x)$ for all $x\in I_i$ and $1\leq i \leq n$, we conclude that \begin{eqnarray*} \max\{ |x_{\nu_j+1}-z_j|, |x_{\nu_j-2}-z_j|\} &\leq& \max\{ |I_{\nu_j+1}| + |I_{\nu_j}|,|I_{\nu_j}| + |I_{\nu_j-1}| \} \\ & \leq & 4 |I_{\nu_j}| \leq 20 \rho_n(z_j) \leq (20/\vartheta^2) \rho(\vartheta/n, z_j) , \end{eqnarray*} and so \[ J_j \subset \mathcal Z^j_{20/\vartheta^2, \vartheta/n} , \quad 1\leq j \leq M. \] Therefore, \begin{eqnarray} \label{Ff} \norm{w(F-f)}{} & = & \max_{1\leq j \leq M} \norm{w(q_j-f)}{J_j} \leq c \max_{1\leq j \leq M} E_r(f, J_j)_{w} \\ \nonumber & \leq & c \sum_{j=1}^M E_r(f, \mathcal Z^j_{20/\vartheta^2, \vartheta/n} )_{w} \leq c \w_\varphi^r(f, 1, 20/\vartheta^2, \vartheta/n)_w . \end{eqnarray} We now estimate $\w_\varphi^r(F, \vartheta/n)_{w_n}$ in terms of the modulus of $f$. Let $0<h\leq \vartheta/n$ and $x$ such that $[x-rh\varphi(x)/2,x+rh\varphi(x)/2]\subset [-1,1]$ be fixed, and consider the following three cases. {\bf Case 1:} $\displaystyle x\in\mathcal{R}_1 := \left\{ x \;\; \big| \;\; [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \subset J_j , \; \mbox{\rm for some }\; 1\leq j \leq M \right\}$.\\ Then, for some $1\leq j \leq M$, $\Delta_{h \varphi(x)}^r(F, x, [-1,1]) = \Delta_{h \varphi(x)}^r(q_j, x, [-1,1]) = 0$, and so \[ \norm{w_n(\cdot) \Delta_{h \varphi(\cdot)}^r(F, \cdot, [-1,1])}{\mathcal{R}_1} = 0 . \] {\bf Case 2:} $\displaystyle x\in\mathcal{R}_2 := \left\{ x \;\; \big| \;\; [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \cap \bigcup_{j=1}^M J_j = \emptyset\right\}$.
\\ Then, taking into account that \begin{eqnarray} \label{newarr} J_j &\supset& [z_j - |I_{\nu_j+1}|, z_j + |I_{\nu_j-1}|] \supset [z_j - |I_{\nu_j}|/3, z_j +|I_{\nu_j}|/3] \\ \nonumber & \supset & [z_j - \rho_n(z_j)/3, z_j +\rho_n(z_j)/3] = \mathcal Z_{1/3, 1/n}^j , \end{eqnarray} we conclude that $x\in {\mathcal I}_{1/3, 1/n}$, and so $w_n(x) \sim w(x)$ by \lemp{v}. Also, \ineq{newarr} implies that \[ [-1,1] \setminus \bigcup_{j=1}^M J_j \subset [-1,1] \setminus \bigcup_{j=1}^M \mathcal Z_{1/3, 1/n}^j \subset {\mathcal I}_{1/3, 1/n} \subset {\mathcal I}_{1/3, h} , \] and so $[x-rh\varphi(x)/2,x+rh\varphi(x)/2] \subset {\mathcal I}_{1/3, h}$. Therefore, $\Delta_{h \varphi(x)}^r(F, x, [-1,1]) = \Delta_{h \varphi(x)}^r(f, x, {\mathcal I}_{1/3, h})$, and \[ \norm{w_n(\cdot) \Delta_{h \varphi(\cdot)}^r(F, \cdot, [-1,1])}{\mathcal{R}_2} \leq c \norm{w (\cdot)\Delta_{h \varphi(\cdot)}^r(f, \cdot, {\mathcal I}_{1/3, h})}{\mathcal{R}_2}. \] {\bf Case 3:} $\displaystyle x\in \mathcal{R}_3^j$, for some $1\leq j\leq M$, where $\mathcal{R}_3^j$ is the set of all $x$ such that $[x-rh\varphi(x)/2,x+rh\varphi(x)/2]$ has nonempty intersections with $J_j$ and $\left([-1,1]\setminus J_j\right)^{cl}$, {\em i.e., } \[ \mathcal{R}_3^j := \left\{ x \;\; \big| \;\; x_{\nu_j+1}\; \mbox{\rm or }\; x_{\nu_j-2} \in [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \right\} . \] Note that, because of the restrictions on $N$, $[x-rh\varphi(x)/2,x+rh\varphi(x)/2]$ cannot have a nonempty intersection with more than one of the intervals $J_j$, and, in fact, $\mathcal{R}_3^j$ is ``far'' from all intervals $J_i$ with $i\neq j$. Without loss of generality, we can assume that $x_{\nu_j+1} \in [x-rh\varphi(x)/2,x+rh\varphi(x)/2]$, since the other case follows by symmetry.
Taking into account that $x - rh\varphi(x)/2$ and $x + rh\varphi(x)/2$ are both increasing functions of $x$, we have \[ \dist\left\{ z_j, [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \right\} = z_j - x - rh\varphi(x)/2 \geq z_j - \widetilde x -rh \varphi (\widetilde x)/2 , \] where $\widetilde x$ is such that $\widetilde x - rh \varphi (\widetilde x)/2 = x_{\nu_j+1}$. Note that $\widetilde x < x_{\nu_j}$ since \[ x_{\nu_j} - rh \varphi (x_{\nu_j})/2 > x_{\nu_j} - r\vartheta \rho_n(x_{\nu_j})/2 \geq x_{\nu_j} - r\vartheta |I_{\nu_j+1}|/2 > x_{\nu_j+1}, \] and so $\widetilde x \in I_{\nu_j+1}$. Therefore, \begin{eqnarray*} \lefteqn{ \dist\left\{ z_j, [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \right\} }\\ &\geq& z_j - x_{\nu_j+1} - rh \varphi (\widetilde x) \geq |I_{\nu_j+1}| - r\vartheta \rho_n(\widetilde x) \\ & \geq & (1-r\vartheta) |I_{\nu_j+1}| \geq (1-r\vartheta)\rho_n(z_j)/3 \geq \rho_n(z_j)/6, \end{eqnarray*} where we used $r\vartheta \leq 1/2$ in the last estimate. Also, \[ \max\left\{|y-z_j| \;\; \big| \;\; y\in [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \right\} = z_j - x + rh\varphi(x)/2 \leq z_j - \widehat x + rh \varphi (\widehat x)/2 , \] where $\widehat x$ is such that $\widehat x + rh \varphi (\widehat x)/2 = x_{\nu_j+1}$. Now, $\widehat x > x_{\nu_j+2}$ since \[ x_{\nu_j+2} + rh \varphi (x_{\nu_j+2})/2 < x_{\nu_j+2} + r\vartheta \rho_n(x_{\nu_j+2})/2 < x_{\nu_j+2} + r\vartheta |I_{\nu_j+2}|/2 < x_{\nu_j+1} , \] and so $\widehat x \in I_{\nu_j+2}$. Therefore, \begin{eqnarray*} \lefteqn{ \max\left\{|y-z_j| \;\; \big| \;\; y\in [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \right\} }\\ &\leq & z_j - x_{\nu_j+1} + rh \varphi (\widehat x) \leq x_{\nu_j-1} - x_{\nu_j+1} + r\vartheta \rho_n (\widehat x)\\ & \leq & |I_{\nu_j+1}| + |I_{\nu_j}| + r\vartheta |I_{\nu_j+2}| \leq (20+45 r\vartheta) \rho_n(z_j) \leq 50 \rho_n(z_j).
\end{eqnarray*} Hence, \[ [x-rh\varphi(x)/2,x+rh\varphi(x)/2] \subset {\mathcal I}_{1/6, 1/n} \cap \mathcal Z^j_{50,1/n} \subset {\mathcal I}_{1/6, h} \cap \mathcal Z^j_{50,1/n} . \] \lemp{v} implies that $w_n(x) \sim w(x)$. Also, for any $y\in [x-rh\varphi(x)/2,x+rh\varphi(x)/2]$, $|x-y| \leq rh\varphi(x)/2 \leq r\vartheta \rho_n(x)/2 \leq \rho_n(x)/4$ (since $r\vartheta\leq 1/2$), and so \lemp{iv} yields $w(y)\sim w(x)$. This implies \begin{eqnarray*} \lefteqn{ \norm{w_n(\cdot) \Delta_{h \varphi(\cdot)}^r(F, \cdot, [-1,1])}{\mathcal{R}_3^j}}\\ & \leq & \norm{w_n(\cdot) \Delta_{h \varphi(\cdot)}^r(f, \cdot, [-1,1]) }{\mathcal{R}_3^j} + \norm{w_n(\cdot) \Delta_{h \varphi(\cdot)}^r(F-f, \cdot , [-1,1])}{\mathcal{R}_3^j} \\ & \leq & \norm{w_n(\cdot) \Delta_{h \varphi(\cdot)}^r(f, \cdot, {\mathcal I}_{1/6, h}) }{\mathcal{R}_3^j} + \norm{w_n(\cdot) \sum_{i=0}^r {r \choose i} \left|(F-f)(\cdot-rh\varphi(\cdot)/2+ih\varphi(\cdot))\right| }{\mathcal{R}_3^j} \\ & \leq & c \norm{w (\cdot) \Delta_{h \varphi(\cdot)}^r(f, \cdot, {\mathcal I}_{1/6, h}) }{\mathcal{R}_3^j} + c \norm{w(q_j-f)}{J_j} \\ & \leq & c \norm{w (\cdot) \Delta_{h \varphi(\cdot)}^r(f, \cdot, {\mathcal I}_{1/6, h}) }{\mathcal{R}_3^j} + c E_r(f, \mathcal Z^j_{20/\vartheta^2, \vartheta/n})_{w} . \end{eqnarray*} Combining the above cases we conclude that \begin{eqnarray*} \w_\varphi^r(F, \vartheta/n)_{w_n} &\leq& c \sup_{0<h\leq \vartheta/n} \norm{w (\cdot) \Delta_{h \varphi(\cdot)}^r(f, \cdot, {\mathcal I}_{1/6, h}) }{} + c \sum_{j=1}^M E_r(f, \mathcal Z^j_{20/\vartheta^2, \vartheta/n})_{w}\\ & \leq & c \w^r_\varphi(f, 1/6, 20/\vartheta^2, \vartheta/n)_w \leq c \w^r_\varphi(f, 1, 20/\vartheta^2, \vartheta/n)_w .
\end{eqnarray*} We now recall that \thm{thm5.5} implies that $\norm{ w \varphi^\nu P_n^{(\nu)}}{} \leq c n^\nu \norm{w_n \rho_n^\nu P_n^{(\nu)}}{}$, and so applying \thm{jacksonthmacta} to the function $F$ as well as using the fact that $w(x) \leq cw_n(x)$ (see \ineq{wlesswn}), we conclude that \ineq{thing1} and \ineq{thing2} hold with $\w^r_\varphi(f, 1, 20/\vartheta^2, \vartheta/n)_w$ instead of $\w^r_\varphi(f, 1, B, \vartheta/n)_w$ on the right-hand side. Now, if $B\geq 20/\vartheta^2$, then $\w^r_\varphi(f, 1, 20/\vartheta^2, \vartheta/n)_w \leq \w^r_\varphi(f, 1, B, \vartheta/n)_w$. If $B<20/\vartheta^2$, then, since $\vartheta/n < \delta(\mathcal Z)/(80/\vartheta^2)<1$, \cor{cor2.14} implies that $\w^r_\varphi(f, 1, 20/\vartheta^2, \vartheta/n)_w \leq c \w^r_\varphi(f, 1, 2^{-m}\cdot 20/\vartheta^2, \vartheta/n)_w \leq c \w^r_\varphi(f, 1, B, \vartheta/n)_w$, where $m := \lceil \log_2(20/(B\vartheta^2))\rceil \in\mathbb N$, and the constant $c$ depends only on $r$, $B$, $\vartheta$ and the weight $w$. The proof is now complete. \end{proof} \sect{Inverse theorem} \begin{theorem} \label{conversethm} Suppose that $\mathcal Z\in\mathbb{Z}_M$, $w\in{\mathcal{W}}^*(\mathcal Z)$, $f\in\mathbb{L}_\infty^w$, $A,B>0$ and $n,r\in\mathbb N$. Then \[ \w_\varphi^r(f, A, B, n^{-1})_{w} \leq c n^{-r} \sum_{k=1}^{n} k^{r-1 } E_{k}(f, [-1,1])_{w } , \] where the constant $c$ depends only on $r$, $A$, $B$, and the weight $w$. \end{theorem} \begin{proof} Let $P_n^* \in \Pi_n$ denote a polynomial of (near) best approximation to $f$ with weight $w$, {\em i.e., } \[ \norm{w(f-P_n^*)}{} \leq c \inf_{P_n\in\Pi_n} \norm{w(f-P_n)}{} = c E_{n}(f, [-1,1])_{w} . \] We let $N\in\mathbb N$ be such that $2^N \leq n < 2^{N+1}$.
To estimate $\Omega_\varphi^r(f, A, n^{-1})_{w}$, using \lem{normestimate} we have \begin{eqnarray*} \Omega_\varphi^r(f, A, n^{-1})_{w} & \leq & \Omega_\varphi^r (f, A, 2^{-N})_{w} \\ & \leq & \Omega_\varphi^r (f - P_{2^N}^*, A, 2^{-N})_{ w} + \Omega_\varphi^r (P_{2^N}^*, A, 2^{-N})_{ w} \\ & \leq & c \norm{w(f - P_{2^N}^*)}{} + \Omega_\varphi^r (P_{2^N}^*, A, 2^{-N})_{w} \\ & \leq & c E_{2^N}(f, [-1,1])_{w} + \Omega_\varphi^r (P_{2^N}^*, A, 2^{-N})_{w}. \end{eqnarray*} Now, using \be \label{decom} P_{2^N}^* = P_1^* + \sum_{i=0}^{N-1} (P_{2^{i+1}}^* - P_{2^{i}}^*) \ee as well as \lem{lem7.2} we have \begin{eqnarray*} \Omega_\varphi^r (P_{2^N}^*, A, 2^{-N})_{ w} &\leq & \sum_{i=0}^{N-1} \Omega_\varphi^r \left( P_{2^{i+1}}^* - P_{2^{i}}^*, A, 2^{-N}\right)_{w } \leq c 2^{-Nr } \sum_{i=0}^{N-1} \norm{ w \varphi^r \left(P_{2^{i+1}}^* - P_{2^{i}}^*\right)^{(r)}}{} . \end{eqnarray*} Now, for each $1\leq j\leq M$, taking into account that $\mathcal Z_{B,t_1}^j \subset \mathcal Z_{B,t_2}^j$ if $t_1 \leq t_2$, we have \begin{eqnarray*} E_r(f, \mathcal Z_{B,1/n}^j)_w &\leq& E_r(f, \mathcal Z_{B,2^{-N}}^j)_w \leq \norm{w(f-P_{2^N}^*)}{\mathcal Z_{B,2^{-N}}^j} + E_r(P_{2^N}^*, \mathcal Z_{B,2^{-N}}^j)_w \\ & \leq & c E_{2^N}(f, [-1,1])_{w} + \norm{w(P_{2^N}^*-q_r(P_{2^N}^*)) }{\mathcal Z_{B,2^{-N}}^j}, \end{eqnarray*} where $q_r(g)$ denotes the Taylor polynomial of degree $<r$ at $z_j$ for $g$. Using \ineq{decom} again, noting that \be \label{extrataylor} q_r(P_{2^N}^*) = P_1^* + \sum_{i=0}^{N-1} q_r(P_{2^{i+1}}^* - P_{2^{i}}^*), \ee and taking \lem{lem8.5j} into account we have \begin{eqnarray*} \norm{w(P_{2^N}^*-q_r(P_{2^N}^*) )}{\mathcal Z_{B,2^{-N}}^j } & \leq & \sum_{i=0}^{N-1} \norm{w\left( (P_{2^{i+1}}^* - P_{2^{i}}^*) - q_r(P_{2^{i+1}}^* - P_{2^{i}}^*)\right)}{\mathcal Z_{B,2^{-N}}^j} \\ & \leq & c \sum_{i=0}^{N-1} 2^{-Nr} \norm{w \varphi^r (P_{2^{i+1}}^* - P_{2^{i}}^*)^{(r)}}{} . 
\end{eqnarray*} Hence, \[ \w_\varphi^r(f, A, B, n^{-1})_{w} \leq c E_{2^N}(f, [-1,1])_{w} + c 2^{-Nr } \sum_{i=0}^{N-1} \norm{w \varphi^r \left(P_{2^{i+1}}^* - P_{2^{i}}^*\right)^{(r)}}{}. \] Now, using \thm{thm5.5} we have \begin{eqnarray*} \w_\varphi^r(f, A, B, n^{-1})_{w} &\leq & c E_{2^N}(f, [-1,1])_{w} + c 2^{-Nr } \sum_{i=0}^{N-1} 2^{ir } \norm{w( P_{2^{i+1}}^* - P_{2^{i}}^* ) }{}\\ &\leq& c 2^{-Nr } \sum_{i=0}^{N} 2^{ir } E_{2^i}(f, [-1,1])_{w} \\ & \leq & c n^{-r} \left(E_{1}(f, [-1,1])_{w} + \sum_{i=1}^{N} \sum_{k=2^{i-1}+1}^{2^{i}} k^{r-1 } E_{k}(f, [-1,1])_{w} \right) \\ & \leq & c n^{-r} \sum_{k=1}^{n} k^{r-1} E_{k}(f, [-1,1])_{w} , \end{eqnarray*} with all constants $c$ depending only on $r$, $A$, $B$, and the weight $w$. \end{proof} \sect{Realization functionals} For $w\in{\mathcal{W}}^*(\mathcal Z)$, $r \in\mathbb N$, and $f\in\mathbb{L}_\infty^w$, we define the following ``realization functional'' \[ R_{r,\varphi} (f, t, \Pi_n)_{ w} := \inf_{P_n\in\Pi_n} \left( \norm{w(f-P_n)}{} + t^r \norm{w \varphi^r P_n^{(r)}}{} \right) , \] and note that $R_{r,\varphi} (f, t_1, \Pi_n)_{w} \sim R_{r,\varphi} (f, t_2, \Pi_n)_{w}$ if $t_1 \sim t_2$. \begin{theorem} \label{corr99} Let $\mathcal Z\in\mathbb{Z}_M$, $w\in{\mathcal{W}}^*(\mathcal Z)$, $f\in\mathbb{L}_\infty^w$, $A, B>0$, $r \in\mathbb N$, and let $\vartheta_2\geq \vartheta_1>0$. Then, there exists $N\in\mathbb N$ depending only on $r$, $\vartheta_1$, and the weight $w$, such that, for $n\geq N$ and $\vartheta_1/n \leq t \leq \vartheta_2/n$, \[ R_{r,\varphi} (f, 1/n, \Pi_n)_{ w} \sim \w_\varphi^r(f, A, B, t)_{w } , \] where the equivalence constants depend only on $r$, $A$, $B$, $\vartheta_1$, $\vartheta_2$ and the weight $w$. \end{theorem} \begin{proof} In view of \cor{corollary411} it is sufficient to prove the theorem for $A=1$.
\thm{jacksonthm} implies that, for every $n\geq N$ (with $N$ depending only on $r$, $\vartheta_1$ and the weight $w$), there exists a polynomial $P_n \in\Pi_n$ such that \be \label{kf1} R_{r,\varphi} (f, 1/n, \Pi_n)_{w} \leq c \w_\varphi^r(f, 1, B, \vartheta_1/n)_{ w } \leq c \w_\varphi^r(f, 1, B, t)_{w } . \ee Now, let $P_n$ be an arbitrary polynomial from $\Pi_n$, $n\in\mathbb N$. Lemmas~\ref{normestimate} and \ref{lem7.2} imply that \begin{eqnarray} \label{kf2} \Omega_\varphi^r(f, 1, t)_{w} &\leq& c \Omega_\varphi^r(f-P_n, 1, t)_{ w} + c \Omega_\varphi^r(P_n, 1, t)_{ w} \\ \nonumber & \leq & c \norm{w(f-P_n)}{} + c n^{-r} \norm{ w \varphi^r P_n^{(r)}}{} , \end{eqnarray} where the constants $c$ depend only on $r$, $\vartheta_2$ and the weight $w$. Also, taking into account that $\mathcal Z_{B,t}^j \subset \mathcal Z_{B, \vartheta_2/n}^j \subset \mathcal Z_{B \vartheta_2\max\{\vartheta_2,1\}, 1/n}^j$ and using \lem{lem8.5j}, we have \begin{eqnarray} \label{kf4} \sum_{j=1}^M E_r (f, \mathcal Z_{B,t}^j)_w & \leq & c \norm{w(f-P_n)}{} + \sum_{j=1}^M \inf_{q\in\Pi_r} \norm{w(P_n - q)}{\mathcal Z_{B \vartheta_2\max\{\vartheta_2,1\}, 1/n}^j}\\ \nonumber & \leq & c \norm{w(f-P_n)}{} + c n^{-r} \norm{w \varphi^r P_n^{(r)} }{}. \end{eqnarray} Therefore, for any $n\in\mathbb N$, $\vartheta_2>0$ and $0<t\leq \vartheta_2/n$, \be \label{kf99} \w_\varphi^r(f, 1, B, t)_{ w } \leq c R_{r,\varphi} (f, 1/n, \Pi_n)_{w} , \ee which completes the proof of the theorem. \end{proof} \thm{corr99} implies, in particular, that $\w_\varphi^r(f, A_1 , B_1, t_1)_{w } \sim \w_\varphi^r(f, A_2, B_2, t_2)_{w }$ if $t_1 \sim t_2$ with equivalence constants independent of $f$.
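To make the definition of the realization functional more concrete, the following small numerical sketch (not part of the paper) computes a discretized upper bound for $R_{r,\varphi}(f, 1/n, \Pi_n)_w$ in the unweighted case $w\equiv 1$ (a trivially doubling weight): instead of minimizing over all of $\Pi_n$, it plugs in a Chebyshev least-squares fit as a convenient, generally suboptimal candidate $P_n$. The function name and all numerical choices are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def realization_upper_bound(f, n, r=1, grid=2001):
    """Discretized upper bound for R_{r,phi}(f, 1/n, Pi_n) with w == 1.

    Uses the degree-(n-1) Chebyshev least-squares fit of f as the candidate
    polynomial P_n, so this only bounds the infimum from above.
    """
    x = np.cos(np.linspace(0.0, np.pi, grid))        # Chebyshev-type sample points in [-1, 1]
    P = C.Chebyshev.fit(x, f(x), deg=n - 1, domain=[-1, 1])
    phi = np.sqrt(np.clip(1.0 - x * x, 0.0, None))   # phi(x) = sqrt(1 - x^2)
    approx_term = np.max(np.abs(f(x) - P(x)))                         # ||w (f - P_n)||
    deriv_term = n ** (-r) * np.max(np.abs(phi ** r * P.deriv(r)(x)))  # t^r ||w phi^r P_n^{(r)}||, t = 1/n
    return approx_term + deriv_term

f = lambda x: np.abs(x)   # a Lipschitz test function with a corner at 0
bounds = [realization_upper_bound(f, n) for n in (8, 16, 32, 64)]
```

For this test function the bound shrinks as $n$ grows, mirroring the behavior of the modulus in \thm{corr99}; with a genuinely singular weight $w$, the candidate fit would have to be replaced by a weighted minimax approximation.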
Finally, we remark that the moduli $\w_\varphi^r(f, A , B, t)_{w }$ are not equivalent to the following weighted $K$-functional \[ K_{r,\varphi} (f, t)_{ w} := \inf_{g^{(r-1)}\in AC_\mathrm{loc}} \left( \norm{w(f-g)}{} + t^r \norm{w \varphi^r g^{(r)}}{} \right) . \] This follows from counterexamples constructed in \cite{mt1999}, where additional discussions and negative results can be found. \sect{Appendix} The following lemma shows that $E_r(f, \mathcal Z_{B,t}^j)_w$ in the definition of the complete modulus \ineq{compmod} can be replaced with $\norm{w(f-q_j)}{\mathcal Z_{B,t}^j}$, where $q_j$ is a polynomial of (near) best weighted approximation to $f$ on any subinterval of $\mathcal Z_{B,t}^j$ of length $\geq c \rho(t, z_j)$. \begin{lemma} \label{lem3.1} Suppose that $\mathcal Z\in\mathbb{Z}_M$, $w\in{\mathcal{W}}^*(\mathcal Z)$, $f\in\mathbb{L}_\infty^w$, and suppose that intervals $I$ and $J$ are such that $I\subset J \subset [-1,1]$ and $|J|\leq c_0|I|$. Then, for any $r\in\mathbb N$, if $q \in\Pi_r$ is a polynomial of near best approximation to $f$ on $I$ with weight $w$, {\em i.e., } \[ \norm{w(f-q)}{I} \leq c_1 E_r(f,I)_{w} , \] then $q$ is also a polynomial of near best approximation to $f$ on $J$. In other words, \[ \norm{w(f-q)}{J} \leq c E_r(f,J)_{w} , \] where the constant $c$ depends only on $r$, $c_0$, $c_1$ and the weight $w$. \end{lemma} \begin{proof} The proof is similar to that of \cite{k-singular}*{Lemma A.1}. First, we assume that $|I| \leq \delta(\mathcal Z)/2$, so that $I$ may contain at most one point $z_j$ from $\mathcal Z$. Now, we denote by $a$ the midpoint of $I$ and let $n\in\mathbb N$ be such that $\rho_{n+1}(a) < |I|/1000 \leq \rho_n(a)$. Then, $\rho_n(a) \sim |I|$ and, as was shown in the proof of \cite{k-singular}*{Lemma A.1}, $I$ contains at least $5$ adjacent intervals $I_{\nu+i}$, $i=2,1,0,-1,-2$.
Moreover, one of those intervals, $I_\mu$, is such that $|I_\mu|\sim |I|$ and $I_\mu \subset {\mathcal I}_{c, 1/n}$ with some absolute constant $c$, and \lemp{iv} implies that $w(x) \sim w (y)$, for $x,y\in I_\mu$, with equivalence constants depending only on $w$. Suppose now that $\widetilde q$ is a polynomial of near best weighted approximation of $f$ on $J$, {\em i.e., } $\norm{w(f-\widetilde q)}{J} \leq c E_r(f, J)_{w}$. Then, taking into account that $|I_\mu| \sim |I| \sim |J|$ and using the fact that $w$ is doubling, we have \begin{eqnarray*} \norm{w(\widetilde q -q)}{J} & \leq & L^* |J|^{-1} \norm{\widetilde q - q}{J} w(J) \leq c |I_\mu|^{-1} \norm{\widetilde q - q}{I_\mu} w(I_\mu) \\ & \leq & c w(x_\mu) \norm{\widetilde q - q}{I_\mu} \leq c \norm{w(\widetilde q -q)}{I_\mu} . \end{eqnarray*} Therefore, \begin{eqnarray*} \norm{w(f-q)}{J} &\leq& c \norm{w(f-\widetilde q)}{J}+ c \norm{w(\widetilde q -q)}{J} \\ & \leq & c \norm{w(f-\widetilde q)}{J} + c \norm{w(\widetilde q -q)}{I} \\ & \leq & c \norm{w(f-\widetilde q)}{J} + c \norm{w(\widetilde q -f)}{I} + c \norm{ w( f -q)}{I} \\ & \leq & c \norm{w(f-\widetilde q)}{J} + c \norm{w( f -q)}{I} \\ & \leq & c E_r(f, J)_{w} + c E_r(f, I)_{ w} \\ & \leq & c E_r(f, J)_{ w} , \end{eqnarray*} and the proof is complete if $|I| \leq \delta(\mathcal Z)/2$. If $|I| > \delta(\mathcal Z)/2$, then $|I|\sim |J|\sim 1$, and we take $n\in\mathbb N$ to be such that $I$ contains at least $4M+4$ intervals $I_i$. Then $I$ contains $4$ adjacent intervals $I_i$ not containing any points from $\mathcal Z$, and we can use the same argument as above. 
\end{proof} \begin{bibsection} \begin{biblist} \bib{dt}{book}{ author={Ditzian, Z.}, author={Totik, V.}, title={Moduli of smoothness}, series={Springer Series in Computational Mathematics}, volume={9}, publisher={Springer-Verlag}, place={New York}, date={1987}, pages={x+227}, isbn={0-387-96536-X}, } \bib{fm}{article}{ author={Fefferman, C.}, author={Muckenhoupt, B.}, title={Two nonequivalent conditions for weight functions}, journal={Proc. Amer. Math. Soc.}, volume={45}, date={1974}, pages={99--104}, } \bib{k-acta}{article}{ author={Kopotun, K. A.}, title={Polynomial approximation with doubling weights}, journal={Acta Math. Hungar.}, volume={146}, number={1}, date={2015}, pages={496--535}, } \bib{k-singular}{article}{ author={Kopotun, K. A.}, title={Polynomial approximation with doubling weights having finitely many zeros and singularities}, journal={J. Approx. Theory}, volume={198}, date={2015}, pages={24--62}, } \bib{mt2001}{article}{ author={Mastroianni, G.}, author={Totik, V.}, title={Best approximation and moduli of smoothness for doubling weights}, journal={J. Approx. Theory}, volume={110}, date={2001}, number={2}, pages={180--199}, } \bib{mt2000}{article}{ author={Mastroianni, G.}, author={Totik, V.}, title={Weighted polynomial inequalities with doubling and $A_\infty$ weights}, journal={Constr. Approx.}, volume={16}, date={2000}, number={1}, pages={37--71}, } \bib{mt1999}{article}{ author={Mastroianni, G.}, author={Totik, V.}, title={Jackson type inequalities for doubling weights. II}, journal={East J. Approx.}, volume={5}, date={1999}, number={1}, pages={101--116}, } \bib{mt1998}{article}{ author={Mastroianni, G.}, author={Totik, V.}, title={Jackson type inequalities for doubling and $A_p$ weights}, booktitle={Proceedings of the Third International Conference on Functional Analysis and Approximation Theory, Vol. I (Acquafredda di Maratea, 1996)}, journal={Rend. Circ. Mat. Palermo (2) Suppl.}, number={52, Vol. 
I}, date={1998}, pages={83--99}, } \bib{pp}{book}{ author={Petrushev, P. P.}, author={Popov, V. A.}, title={Rational approximation of real functions}, series={Encyclopedia of Mathematics and its Applications}, volume={28}, publisher={Cambridge University Press, Cambridge}, date={1987}, pages={xii+371}, } \bib{stein}{book}{ author={Stein, E. M.}, title={Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals}, series={Princeton Mathematical Series}, volume={43}, note={With the assistance of Timothy S. Murphy; Monographs in Harmonic Analysis, III}, publisher={Princeton University Press, Princeton, NJ}, date={1993}, pages={xiv+695}, } \end{biblist} \end{bibsection} \end{document}
\begin{document} \title{Maximin optimal cluster randomized designs for assessing treatment effect heterogeneity} \author[1,2]{Mary M. Ryan*} \author[1,2]{Denise Esserman} \author[1,2,3]{Fan Li} \authormark{RYAN \textsc{et al}} \address[1]{\orgdiv{Department of Biostatistics}, \orgname{Yale School of Public Health}, \orgaddress{\state{Connecticut}, \country{USA}}} \address[2]{\orgdiv{Yale Center for Analytical Sciences}, \orgname{Yale School of Public Health}, \orgaddress{\state{Connecticut}, \country{USA}}} \address[3]{\orgdiv{Center for Methods in Implementation and Prevention Science}, \orgname{Yale School of Public Health}, \orgaddress{\state{Connecticut}, \country{USA}}} \corres{*Mary M. Ryan,\\ Department of Biostatistics,\\ Yale School of Public Health,\\ New Haven, Connecticut, USA\\ \email{[email protected]}} \abstract[Abstract]{Cluster randomized trials (CRTs) are studies where treatment is randomized at the cluster level but outcomes are typically collected at the individual level. When CRTs are employed in pragmatic settings, baseline population characteristics may moderate treatment effects, leading to what is known as heterogeneous treatment effects (HTEs). Pre-specified, hypothesis-driven HTE analyses in CRTs can enable an understanding of how interventions may impact subpopulation outcomes. While closed-form sample size formulas have recently been proposed, assuming known intracluster correlation coefficients (ICCs) for both the covariate and outcome, guidance on optimal cluster randomized designs to ensure maximum power with pre-specified HTE analyses has not yet been developed. We derive new design formulas to determine the cluster size and number of clusters to achieve the locally optimal design (LOD) that minimizes variance for estimating the HTE parameter given a budget constraint. 
Given that the LODs are based on covariate- and outcome-ICC values that are usually unknown, we further develop the maximin design for assessing HTE, identifying the combination of design resources that maximizes the relative efficiency of the HTE analysis in the worst-case scenario. In addition, given that the analysis of the average treatment effect is often of primary interest, we also establish optimal designs to accommodate multiple objectives by combining considerations for studying both the average and heterogeneous treatment effects. We illustrate our methods using the context of the Kerala Diabetes Prevention Program CRT, and provide an R Shiny app to facilitate calculation of optimal designs under a wide range of design parameters.} \keywords{Average treatment effect, cluster randomized trial, heterogeneous treatment effect, intracluster correlation coefficient, locally optimal design} \jnlcitation{\cname{ \author{M.M. Ryan}, \author{D. Esserman}, and \author{F. Li}} (\cyear{2023}), \ctitle{Maximin optimal cluster randomized designs for assessing treatment effect heterogeneity}, \cjournal{Statistics in Medicine}, \cvol{0000;00:00--00}.} \maketitle \footnotetext{\textbf{Abbreviations:} ATE, average treatment effect; CRT, cluster randomized trial; HTE, heterogeneous treatment effect; ICC, intracluster correlation coefficient; LOD, locally optimal design} \section{Introduction}\label{s:intro} Cluster randomized trials (CRTs) -- studies where treatment is randomized at the cluster or group level -- are gaining popularity in clinical medicine, public health, and implementation science research.
These designs are chosen for a variety of reasons such as the natural occurrence or grouping of the treatment clusters, treatment contamination prevention, or logistical constraints that would make individual randomization infeasible.\cite{murray_design_1998,hayes_cluster_2017} When CRTs are employed in pragmatic settings where identification of heterogeneous subpopulations is an important objective, diverse population characteristics, which may be key effect modifiers driving the variations in patients' responses to interventions, are often collected at baseline, leading to what is known as heterogeneous treatment effects (HTEs). Whereas many \emph{exploratory} HTE analyses are performed \emph{post-hoc} and represent essential steps for generating future hypotheses, confirmatory HTE analyses are often pre-specified, hypothesis-driven, and can require more rigorous planning at the design stage. Although the power analysis of the treatment-by-covariate interaction test has been relatively well-studied in individually randomized trials,\cite{brookes_subgroup_2004,shieh_detecting_2009,greenland_tests_1983} related methods for power analysis in CRTs have only received recent attention, with the goal of enabling a rigorous understanding of how system-level innovations may differentially impact outcomes for important subpopulations.\cite{spybrook_power_2016,dong_power_2018,yang_sample_2020,tong_accounting_2021,li_designing_2022} With a pre-specified effect modifier, Yang et al\cite{yang_sample_2020} developed an analytical sample size and power formula to test the treatment-by-covariate interaction, making it possible to power CRTs \textit{a priori} for confirmatory HTE analyses. Similar to designing conventional CRTs to study the average treatment effect, the intracluster correlation coefficient (ICC) of the outcome, or outcome-ICC, plays an essential role in determining the power and necessary sample size for the HTE test.
In addition, the analytical formula of Yang et al\cite{yang_sample_2020} further requires knowledge of the covariate-ICC, or ICC of the effect modifier. The covariate-ICC can be characterized as the fraction of between-cluster covariate variation relative to the total or marginal variation of the covariate, and measures the degree of similarity of the effect modifier within the same cluster. Although the sample size formula for HTE has been previously characterized in CRTs, the optimal sample size, or equivalently, the \emph{optimal design}, for testing HTE has not yet been investigated. In the CRT literature, the optimal design refers to the combination of number of clusters and cluster size that maximizes the power of the significance test, given a total budget for sampling and measuring clusters and individuals. As argued in van Breukelen and Candel,\cite{van_breukelen_efficient_2015} the identification of the optimal design can be of strong relevance from a cost-effectiveness standpoint; this has become an important consideration in implementation science studies, as it allows more studies to be conducted with the same grand budget. To date, the identification of an optimal CRT design has been restricted to the objective of maximizing the precision of the average treatment effect estimator. For example, Snijders and Bosker\cite{snijders_standard_1993} were the first to derive the optimal cluster size for two-level CRTs analyzing the average treatment effect for continuous outcomes with a linear mixed model in the absence of other covariates. Raudenbush\cite{raudenbush_statistical_1997} updated this derivation to account for the inclusion of covariates for increased precision, which also introduced the concept of the covariate-ICC in a different context.
Extensions to three-level CRTs,\cite{moerbeek_design_2000} logistic regression models with\cite{moerbeek_optimal_2005} and without\cite{moerbeek_optimal_2001} covariates, unequal costs between study arms,\cite{liu_statistical_2003} and multiple treatment effects collected at different levels\cite{moerbeek_optimal_2020} subsequently followed. For ease of reference, we provide a summary of existing optimal design methods for CRTs in Table \ref{tab:LODlit}. These approaches all suggest that the optimal design critically depends on the outcome-ICC, which drives the precision of the average treatment effect estimator. This means that an optimal CRT design derived under one outcome-ICC estimate will likely not be optimal under a different value; thus, such designs are only \emph{locally optimal}. \begin{table} \caption{\label{tab:LODlit}Brief summary of existing literature on locally optimal designs for cluster randomized trials that study the average treatment effect.} \centering {\RaggedRight \begin{tabular}{l p{0.1\linewidth}p{0.1\linewidth}p{0.1\linewidth}p{0.07\linewidth}p{0.27\linewidth}} \toprule Reference & \multicolumn{2}{c}{CRT Design Type} & \multicolumn{2}{c}{Outcome} & Feature\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & Two-level & Three-level & Continuous & Binary&\\ \midrule Snijders \& Bosker (1993)\cite{snijders_standard_1993} & \checkmark & - & \checkmark & - & Introduces optimal design to CRTs\\ \midrule Raudenbush (1997)\cite{raudenbush_statistical_1997} & \checkmark & - & \checkmark & - & Optimal design conditional on covariate\\ \midrule Moerbeek et al (2000)\cite{moerbeek_design_2000} & \checkmark & \checkmark & \checkmark & - & Optimal designs for three-level CRTs\\ \midrule Moerbeek et al (2001a)\cite{moerbeek_optimal_2001-1} & \checkmark & - & - & \checkmark & Optimal designs and randomization for CRTs using logistic models\\ \midrule Moerbeek et al (2001b)\cite{moerbeek_optimal_2001} & \checkmark & - & \checkmark & - & Introduces D- and 
L-optimality criteria\\ \midrule Liu (2003)\cite{liu_statistical_2003} & \checkmark & \checkmark & \checkmark & - & Optimal unequal allocation design for CRTs with unequal costs per randomization unit\\ \midrule Moerbeek \& Maas (2005)\cite{moerbeek_optimal_2005} & \checkmark & - & - & \checkmark & Optimal designs for multilevel logistic model with covariates\\ \midrule Moerbeek (2020)\cite{moerbeek_optimal_2020} & \checkmark & - & \checkmark & - & Optimal designs for multiple treatment effects\\ \bottomrule \end{tabular}} \end{table} While reporting of the outcome-ICC is becoming more commonplace among CRTs, it can still be difficult to predict at the design stage and misspecification can severely impact sample size and power calculations. To mitigate this issue, van Breukelen and Candel\cite{van_breukelen_efficient_2015} introduced maximin designs for CRTs investigating the average treatment effect. Maximin design procedures find the most efficient design with respect to a budget constraint for a range of outcome-ICC values, meaning the design that maximizes power given a hard budget or minimizes budget given a power threshold in the worst case outcome-ICC scenarios. Liu et al\cite{liu_optimal_2019} extended this work to the setting of three-level CRTs. All current locally optimal and maximin design methods for CRTs are specifically developed for assessing the average treatment effect; no attempts have yet been made to derive optimal procedures for assessing HTE which, as shown by Yang et al,\cite{yang_sample_2020} would critically depend on both outcome- and covariate-ICCs. In addition, to the best of our knowledge, the covariate-ICC is not standard in trial reporting and reliable information on reasonable ranges may be less available than for the outcome-ICC,\cite{korevaar_intra-cluster_2021} making its elucidation in CRT design procedures difficult. 
Thus, developing a maximin design procedure for testing HTE, by considering a range for the covariate-ICC with a fixed outcome-ICC, or considering a range for both the outcome- and covariate-ICC, may prove essential to designing CRTs adequately powered for HTE and answering pre-specified questions involving diverse subpopulations. This points to the central focus of this paper. In addition, it is rare for testing of HTE hypotheses to be the sole aim of a study. Often, the average or main treatment effect is also of interest -- if not the primary interest -- and the sample sizes required to properly power each set of analyses may not align. It then becomes a question of how to strike a balance between these study objectives. Very little research has been conducted in this area for CRTs. Moerbeek\cite{moerbeek_optimal_2020} developed a multiple-objective optimal design procedure for CRTs when both individual- and cluster-level outcomes are of interest but only when other design parameters are fixed, creating a multiple-objective locally optimal design. Such procedures have not been extended to the maximin design space. To fill this gap, we will also extend the HTE optimal design procedures in the manner of Moerbeek\cite{moerbeek_optimal_2020} to balance considerations for both the heterogeneous and average treatment effect objectives. The remainder of this article is organized as follows. In Section \ref{s:model}, we introduce the linear mixed analysis of covariance model with a pre-specified effect modifier and review the main result in Yang et al.\cite{yang_sample_2020} In Section \ref{s:sood}, we develop a closed-form solution for the locally optimal CRT design for assessing HTE with a pre-specified effect modifier, as well as a maximin design procedure that accommodates uncertainties in the covariate-ICC and outcome-ICC. 
In Section \ref{s:mood}, we expand the results of Section \ref{s:sood} to arrive at optimal designs when the objective function incorporates considerations on both the HTE analysis and the average treatment effect analysis, leading to the multiple-objective optimal designs. In Section \ref{s:power} we briefly discuss power considerations in practice. In Section \ref{s:dataApp} we use data from the Kerala Diabetes Prevention Program study (K-DPP)\cite{thankappan_peer-support_2018} to illustrate the proposed new optimal design procedures and determine the number of clusters and cluster sizes required to maximize power under a fixed grand budget. Finally, in Section \ref{s:discuss} we discuss the results and possible future work in this area. To facilitate the exploration of optimal designs for assessing HTE in a wider range of practical scenarios, we also provide a free R shiny application to implement the proposed procedure at: \url{https://mary-ryan.shinyapps.io/HTE-MMD-app/}. \section{Statistical model}\label{s:model} Before we develop the optimal designs to assess HTE in CRTs, we first introduce the linear mixed analysis of covariance model, as well as review the existing sample size formulas developed in Yang et al.\cite{yang_sample_2020} We consider parallel CRTs with two arms. Let $Y_{ij}$ be a continuous outcome for the $j$th individual ($j = 1, \dots, m)$ in the $i$th cluster ($i = 1, \dots, n)$; we assume equal cluster sizes following the convention of deriving optimal designs. 
When we are solely interested in evaluating the average treatment effect, it is common to analyze the individual-level outcomes using a linear mixed effects model similar to the one outlined below:\cite{turner_review_2017} \begin{equation}\label{eq:noInteract} Y_{ij} = \alpha_1+ \alpha_2 W_i + \psi_i + \xi_{ij}, \end{equation} where $W_i$ is the binary treatment indicator ($W_i=1$ if cluster \textit{i} is assigned to intervention and $W_i=0$ otherwise), $\psi_i \sim \mathcal{N}(0, \sigma^2_{\psi})$ is the random cluster effect accounting for the outcome-ICC, and $\xi_{ij} \sim \mathcal{N}(0, \sigma^2_{\xi})$ is the residual error, independent of the random cluster effect. In this unadjusted regression model, $\alpha_1$ represents the mean of the outcome under the control condition and $\alpha_2$ represents the average treatment effect without adjusting for covariates. A primary goal in pragmatic CRTs is to evaluate interventions in settings similar to those observed in the real world, i.e., settings with realistic population diversity and heterogeneity. The investigators therefore may be interested in testing for possible treatment effect heterogeneity with respect to a pre-specified effect modifier. To introduce the linear mixed model accounting for an effect modifier, we assume $X_{ij}$ is a continuous or binary univariate covariate that may moderate the treatment effect. We consider the effect modifier to be measured either at the individual-level or cluster-level; in the latter case, we can simply replace $X_{ij}$ with $X_i$ as all individuals in the same cluster will have the same value of the effect modifier when it is measured at the cluster level. For simplicity, we also assume the effect modifier to be univariate and will discuss possible extensions to multivariate effect modifiers in Section \ref{s:discuss}. 
With $X_{ij}$, model (\ref{eq:noInteract}) can be expanded as: \begin{equation}\label{eq:interact} Y_{ij} = \beta_1 + \beta_2 W_i + \beta_3 X_{ij} + \beta_4 X_{ij} W_i + \gamma_i + \epsilon_{ij}, \end{equation} where $X_{ij} W_i$ is the interaction between treatment and covariate, $\gamma_i \sim \mathcal{N}(0, \sigma^2_{\gamma})$ is the random cluster effect, and $\epsilon_{ij} \sim \mathcal{N}(0, \sigma^2_{\epsilon})$ is the residual error, independent of $\gamma_i$. Of note, we have not considered a random slope for the effect modifier, such that there are no additional cluster-by-covariate interactions. In this analysis of covariance type model, $\beta_1$ is the mean of the outcome under the control condition when $X_{ij}=0$, $\beta_2$ is the treatment effect when $X_{ij}=0$ (or the average treatment effect when $X_{ij}$ is mean-centered at 0), and $\beta_3$ and $\beta_4$ are regression coefficients for the covariate and the interaction terms, respectively. In particular, the magnitude of $\beta_4$ quantifies the degree of treatment effect heterogeneity with respect to the effect modifier, and can serve as a basis for testing for HTE in CRTs. Further, if the covariate is mean-centered, $\beta_2$ represents the average treatment effect parameter under model \eqref{eq:interact}.\cite{tong_accounting_2021} Mean-centering the covariates, however, does not affect the interpretation of the interaction parameter $\beta_4$. While sample size considerations (and subsequently optimal designs) based on the unadjusted linear mixed model \eqref{eq:noInteract} have been relatively well studied, sample size considerations based on the adjusted linear mixed model \eqref{eq:interact} have only recently been examined for applications to CRTs.
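To make the data-generating mechanism behind model \eqref{eq:interact} concrete, the following Python sketch simulates a two-arm CRT with a prescribed covariate-ICC by splitting the covariate variance into between- and within-cluster components. The function name and default variance components are illustrative assumptions only, not part of any software described in this article.

```python
import numpy as np

def simulate_crt(n, m, beta, rho_x, sigma2_x=1.0, sigma2_gamma=0.1,
                 sigma2_eps=0.9, seed=0):
    """Simulate Y_ij = b1 + b2*W_i + b3*X_ij + b4*X_ij*W_i + gamma_i + eps_ij
    for n clusters (n even, 1:1 cluster randomization) of m individuals each,
    inducing covariate-ICC rho_x by splitting the covariate variance into
    cluster-level and individual-level components."""
    rng = np.random.default_rng(seed)
    b1, b2, b3, b4 = beta
    W = rng.permutation(np.repeat([0, 1], n // 2))          # treatment indicator
    gamma = rng.normal(0.0, np.sqrt(sigma2_gamma), n)       # random cluster effects
    Xb = rng.normal(0.0, np.sqrt(rho_x * sigma2_x), n)      # between-cluster part
    X = Xb[:, None] + rng.normal(0.0, np.sqrt((1 - rho_x) * sigma2_x), (n, m))
    eps = rng.normal(0.0, np.sqrt(sigma2_eps), (n, m))      # residual errors
    Y = b1 + b2 * W[:, None] + b3 * X + b4 * X * W[:, None] + gamma[:, None] + eps
    return Y, X, W
```

Under this construction, the variance of the cluster means of $X_{ij}$ is approximately $\rho_x\sigma^2_x + (1-\rho_x)\sigma^2_x/m$, which can be checked empirically on the simulated data.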
Specifically, for the purpose of testing for HTE, Yang et al\cite{yang_sample_2020} showed that the variance of the maximum likelihood estimator $\hat{\beta}_4$, which we denote $\sigma^2_{\text{HTE}}$, is: \begin{equation}\label{eq:sHTE} \sigma^2_{\text{HTE}} = \frac{\sigma^2_{y|x}(1-\rho_{y|x}) \left\{1+(m-1)\rho_{y|x}\right\}}{nm\sigma^2_w \sigma^2_x \left\{1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right\}}, \end{equation} where $m$ is the common cluster size, $n$ is the total number of clusters, $\sigma^2_{y|x}=\sigma_\gamma^2+\sigma_\epsilon^2$ is the total variance of $Y_{ij}$ adjusted for $X_{ij}$, $\sigma^2_x$ is the marginal variance of the covariate $X_{ij}$, and $\sigma^2_w = E(W_i)\{1-E(W_i)\}$ quantifies the variation in treatment assignment. Importantly, expression \eqref{eq:sHTE} also features two key intracluster correlation coefficients: $\rho_{y|x} = {\sigma^2_{\gamma}}/{\sigma^2_{y|x}}$ represents the outcome-ICC adjusted for $X_{ij}$, and $\rho_x$ represents the covariate-ICC; the latter concept can be defined as the fraction of between-cluster covariate variation relative to the total or marginal variation of the covariate, $\sigma_x^2$, and measures the degree of similarity of the effect modifier in the same cluster.\cite{raudenbush_statistical_1997} Finally, Tong et al\cite{tong_accounting_2021} showed that when the cluster sizes are equal and the covariate is mean-centered (and assumed to be uncorrelated with the treatment variable in large samples), the variance of the covariate-adjusted average treatment effect estimator $\hat{\beta}_2$, which we denote $\sigma^2_{\text{ATE}}$, is: \begin{equation}\label{eq:sATE} \sigma^2_{\text{ATE}} = \frac{\sigma^2_{y|x}\left\{1+(m-1)\rho_{y|x}\right\}}{nm\sigma^2_w}, \end{equation} where $\left\{1+(m-1)\rho_{y|x}\right\}$ is commonly referred to as the design effect in CRTs. 
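Expressions \eqref{eq:sHTE} and \eqref{eq:sATE} translate directly into code. The sketch below is a direct transcription; the helper names are ours, and the default $\sigma^2_w = 0.25$ corresponds to 1:1 allocation with $E(W_i) = 0.5$.

```python
def var_hte(n, m, rho_yx, rho_x, sigma2_yx=1.0, sigma2_x=1.0, sigma2_w=0.25):
    """Variance of the HTE estimator (beta4-hat), sigma^2_HTE."""
    num = sigma2_yx * (1 - rho_yx) * (1 + (m - 1) * rho_yx)
    den = (n * m * sigma2_w * sigma2_x
           * (1 + (m - 2) * rho_yx - (m - 1) * rho_x * rho_yx))
    return num / den

def var_ate(n, m, rho_yx, sigma2_yx=1.0, sigma2_w=0.25):
    """Variance of the covariate-adjusted ATE estimator (beta2-hat),
    sigma^2_ATE, with the usual CRT design effect 1 + (m-1)*rho_yx."""
    return sigma2_yx * (1 + (m - 1) * rho_yx) / (n * m * sigma2_w)
```

Setting $\rho_x = 1$ recovers the cluster-level identity $\sigma^2_{\text{HTE}} = \sigma^2_{\text{ATE}}/\sigma^2_x$ discussed next.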
For subsequent purposes, we can also write $\sigma^2_{\text{HTE}}$ in terms of $\sigma^2_{\text{ATE}}$ with a multiplicative factor as: $$\sigma^2_{\text{HTE}} = \sigma^2_{\text{ATE}} \times \frac{(1-\rho_{y|x})}{\sigma^2_x\left\{1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right\}}.$$ In the special case of a cluster-level covariate, where $\rho_x=1$ by definition, we obtain $\sigma^2_{\text{HTE}} = \sigma^2_{\text{ATE}}/\sigma_x^2$. Note that in this case, the ratio $\sigma^2_{\text{HTE}}/\sigma^2_{\text{ATE}}$ depends on neither the number of clusters $n$ nor the cluster size $m$, such that the optimal design would be the same for studying the average or heterogeneous treatment effects. On the other hand, in the special case where the individual-level covariate varies randomly across individuals with no extra between-cluster variation, i.e., $\rho_x = 0$, the ratio $\sigma^2_{\text{HTE}}/\sigma^2_{\text{ATE}}$ decreases from $1/\sigma^2_x$ for $\rho_{y|x} = 0$ to $0$ for $\rho_{y|x}=1$. \section{Optimal Designs for Assessing Treatment Effect Heterogeneity}\label{s:sood} Determining an efficient CRT study design is rarely a simple task due to the confluence of enrolling both clusters and individuals, uncertainty in the design parameters, and budget restrictions. Here, we refer to a CRT design as the combination of the total number of clusters $n$ and the cluster size $m$. Designs are considered optimal if they minimize the variance of the estimator of interest given a fixed budget constraint, or if they minimize costs given a fixed level of precision; we will focus on the case where the budget constraint is fixed. We suppose we have a total budget $B$ to spend on our study.
Assuming inclusion of each cluster in the study costs $c$ and inclusion of each individual subject within a cluster costs $s$, we can divide our total budget into the costs attributable to cluster and subject inclusion: \begin{equation}\label{eq:budget} B=cn + smn = n(c + sm). \end{equation} In the special case where $c=0$ and $s=1$, equation \eqref{eq:budget} returns the traditional total sample size constraint: $B=nm$. We note that optimal CRT designs for estimating the average treatment effect have already been investigated extensively in the literature (see Table \ref{tab:LODlit}); thus, in what follows we will primarily focus on optimal designs for testing the HTE. First, we will derive a closed-form solution for the locally optimal design (LOD) for testing the HTE, which relies on exact specification of the ICC parameters. Then we will develop a maximin design procedure that is optimal over a range of outcome- and covariate-ICC values specified at the design stage. \subsection{Locally optimal design}\label{ss:solod} A single-objective LOD is one in which the highest efficiency or smallest variance is achieved for a single objective or estimator on a known set of parameters, given a budget constraint such as \eqref{eq:budget}. The single-objective optimal design for the HTE would be the one where, for known values of ($\rho_{y|x}$,$\rho_x$), $\sigma^2_{\text{HTE}}$ is minimized.
To achieve this, we rearrange the cost function \eqref{eq:budget} for $n$ and substitute this into variance equation \eqref{eq:sHTE}: \begin{align} \begin{split}\label{eq:sHTE2} \sigma^2_{\text{HTE}} &\propto \frac{c+sm}{Bm}\times \frac{(1-\rho_{y|x})\left\{1+(m-1)\rho_{y|x}\right\}}{\left\{1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right\}}\\ &= \frac{s(1-\rho_{y|x})}{B}\times \frac{(k+m)\left\{1+(m-1)\rho_{y|x}\right\}}{m\left\{1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right\}}, \end{split} \end{align} where the proportionality constant is $\sigma^2_{y|x}/(\sigma^2_w\sigma^2_x)$ and $k=c/s>0$ is the cluster-to-individual cost ratio. While $k$ technically need only be greater than $0$, the only instances where it would be less than $1$ are special circumstances where individual-level costs include very expensive individual data collection procedures or interventions (e.g., magnetic resonance imaging (MRI) or positron emission tomography (PET) scans). Minimizing the above with respect to $m$, we obtain the closed-form LOD for testing the HTE: \begin{align*} \begin{split} m_{\text{opt}} &= \frac{(1-\rho_{y|x})(1-\rho_x) + \sqrt{\rho^{-1}_{y|x}k^{-1}(1-\rho_{y|x})(\rho_x-\rho_{y|x})\left\{1-(k+2)\rho_{y|x}+(k+1)\rho_x\rho_{y|x}\right\}}}{k^{-1}(\rho_x-\rho_{y|x}) -\rho_{y|x}(1-\rho_x)},\\ n_{\text{opt}} &= \frac{B}{c+sm_{\text{opt}}}, \end{split} \end{align*} meaning that the design with the highest precision to test the HTE for a given budget and fixed ICC values $\rho_{y|x}$ and $\rho_x$ is one with a total of $n_{\text{opt}}$ clusters, each of size $m_{\text{opt}}$. We note that the optimal cluster size depends on budget constraint \eqref{eq:budget} only through the cost ratio, and does not further depend on the size of the total budget nor the precise per-unit costs of clusters or individuals. The above closed-form LOD comes with some explicit conditions on the design parameters.
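The closed-form solution above can be sketched in code as follows; the helper is hypothetical, not taken from any software package, and the conditions under which the formula is valid are made explicit in what follows.

```python
import math

def lod_hte(B, c, s, rho_yx, rho_x):
    """Unconstrained closed-form LOD (m_opt, n_opt) for testing HTE."""
    k = c / s                      # cluster-to-individual cost ratio
    disc = ((1 - rho_yx) * (rho_x - rho_yx)
            * (1 - (k + 2) * rho_yx + (k + 1) * rho_x * rho_yx)
            / (rho_yx * k))
    num = (1 - rho_yx) * (1 - rho_x) + math.sqrt(disc)
    den = (rho_x - rho_yx) / k - rho_yx * (1 - rho_x)
    m_opt = num / den
    return m_opt, B / (c + s * m_opt)
```

With $B=100{,}000$, $c=500$, $s=50$, $\rho_{y|x}=0.05$, and $\rho_x=0.75$, this yields $m_{\text{opt}}\approx 22$ and $n_{\text{opt}}\approx 62$; with $\rho_x=1$ it reduces algebraically to $m_{\text{opt}}=\sqrt{k(1-\rho_{y|x})/\rho_{y|x}}$.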
To elaborate, in order for the optimal cluster size $m_{\text{opt}}$ to be real and greater than $1$, there is an implied plausible range for the covariate-ICC, $\rho_x$. It can be shown that the above LOD solution is achieved when (assuming $k>0$) \begin{align*} \frac{\rho_{y|x}(k+1)}{\rho_{y|x}k+1} < \rho_x \le 1,~~~~\text{and}~~~~ 0 \le \rho_{y|x} < 1. \end{align*} We note that while $\rho_{y|x} \in [0,1)$, it rarely exceeds $0.2$.\cite{van_breukelen_efficient_2015} In addition, if the covariate $X_{ij}$ is a good prognostic variable, its inclusion in model \eqref{eq:interact} can sometimes drive $\rho_{y|x}$ toward $0$ (due to explained variation, such as when $\rho_x$ is large), widening the acceptable range for $\rho_x$. When $\rho_x$ is outside this valid range, such as when $\rho_x$ is close to $0$ and $\rho_{y|x}$ is relatively far from $0$, we have observed in numerical evaluations that $\sigma^2_{\text{HTE}}$ generally decreases as $m\rightarrow \infty$ and $n$ is decreased to remain within the budget constraint. In these scenarios, it would be reasonable to set $m_{\text{opt}}$ to a maximum determined \emph{a priori}. A very large maximum would, under budget constraints, encourage a very small number of clusters. Since CRTs lose their utility relative to individually randomized trials when designed with an extremely small number of clusters, we also want to specify \emph{a priori} a minimum for $n_{\text{opt}}$, which we define as $\underline{n}$. We can then use this lower bound for the number of clusters and budget constraint \eqref{eq:budget} to define a maximum cluster size for $m_{\text{opt}}$, given by $\overline{m}=(B/\underline{n}-c)/s$. This maximum can also be utilized even when $\rho_x$ is in the valid range but the unrestricted LOD calls for an $m_{\text{opt}}$ that would drive $n$ below the minimum $\underline{n}$. To unify the above practical considerations, we propose a conditional LOD in Proposition \ref{PROP:SOLOD}.
\begin{proposition}\label{PROP:SOLOD} \textit{Given a fixed budget constraint, a minimum number of clusters, an outcome-ICC, and a covariate-ICC, the locally optimal design for a cluster randomized trial that minimizes $\sigma^2_{\text{HTE}}$ is given by:} \begin{align*} m_{\text{opt}}=\frac{(1-\rho_{y|x})(1-\rho_x)+\sqrt{\rho^{-1}_{y|x} k^{-1} (1 - \rho_{y|x})(\rho_x - \rho_{y|x})\left\{1-(k+2)\rho_{y|x} + (k+1)\rho_x\rho_{y|x}\right\}}}{k^{-1}(\rho_x - \rho_{y|x}) - \rho_{y|x} (1 - \rho_x)}, \end{align*} \textit{under the condition that} \begin{align}\label{eq:cond} \frac{\rho_{y|x}(k+1)}{\rho_{y|x}k+1} < \rho_x \le 1,~~~\text{and}~~~m_{\text{opt}} \le \frac{B/\underline{n} - c}{s}. \end{align} \textit{If condition \eqref{eq:cond} is not satisfied, then we set} \begin{align*} m_{\text{opt}} = \frac{B/\underline{n} - c}{s}. \end{align*} \textit{In either case, the optimal number of clusters is given by} \begin{align*} n_{\text{opt}} = \frac{B}{c+sm_{\text{opt}}}. \end{align*} \begin{proof} See Appendix \ref{soLODProof}. \end{proof} \end{proposition} As a concrete illustration, Table \ref{tab:soLOD} shows examples of LODs calculated via Proposition \ref{PROP:SOLOD} for combinations of known ICC values. In Table \ref{tab:soLOD}, we assume $B=100,000$, cost ratios of $k=10$ ($c=500$, $s=50$) and $k=20$ ($c=2,000$, $s=100$), and a minimum of $\underline{n}=6$ clusters. For the purpose of illustrating the power of each design, we select the standardized HTE effect size, defined by $\beta_4\sigma_x/\sigma_{y|x}=0.2$ and set $\sigma^2_{y|x} = \sigma^2_x = 1$. This standardized effect size is interpreted as the change in treatment effect (per standard deviation unit of the outcome) due to one standard deviation unit change in the effect modifier. We see that, for a fixed value of $\rho_{y|x}$, the optimal design shifts from a few large clusters to many small clusters as $\rho_x$ increases; this also results in a reduction in power. 
This pattern is consistent with the idea that as $\rho_x$ increases, the covariate becomes more akin to a cluster-level covariate, which makes the number of clusters more important for estimating the HTE parameter, $\beta_4$. On the other hand, as $\rho_{y|x}$ increases, power becomes more sensitive to changes in $\rho_x$, confirming results observed by Yang et al\cite{yang_sample_2020} in fixed, non-optimal designs. We also see that if $\rho_x$ is held constant and is within its valid range, the optimal design generally shifts from a few large clusters to many small clusters as $\rho_{y|x}$ increases. However, we may see $m_{\text{opt}}$ abruptly ``jump'' up when $\rho_x$ is near the lower bound of its valid range. For example, when $k=10$, $\rho_{y|x}=0.1$ and $\rho_x=0.75$, $m_{\text{opt}}=20$ and the lower bound for $\rho_x$ is $0.55$; when $\rho_{y|x}$ increases to $0.2$ and $\rho_x$ is kept fixed at $0.75$, though, the lower bound for $\rho_x$ increases to $0.733$ and $m_{\text{opt}}$ ``jumps'' to $86$. Moreover, we observe that as $\rho_{y|x}$ increases, so does the frequency with which $\rho_x$ falls outside its valid range, forcing $m_{\text{opt}}$ to take on the maximum cluster size value, $\overline{m}$, more frequently. To examine a wider range of ICC parameter values, the LOD for assessing HTE can also be implemented via a free web application at \url{https://mary-ryan.shinyapps.io/HTE-MMD-app/}. Finally, in the special case where we are interested in testing HTE with respect to a cluster-level effect modifier (i.e., $\rho_x=1$), the optimal design simplifies to: \begin{align*} m_{\text{opt}} &= \frac{\sqrt{\rho^{-1}_{y|x} k^{-1} (1 - \rho_{y|x})}}{k^{-1}} = \sqrt{\frac{(1-\rho_{y|x})}{\rho_{y|x}}\times k} = \sqrt{\frac{\theta c}{s}},\\ n_{\text{opt}} &=\frac{B}{\sqrt{\theta s c}+c},~~~~~~ \theta = \frac{1-\rho_{y|x}}{\rho_{y|x}}.
\end{align*} This optimal design shares the same form as the optimal CRT design for testing the average treatment effect developed in Raudenbush,\cite{raudenbush_statistical_1997} Moerbeek et al,\cite{moerbeek_design_2000} and van Breukelen and Candel.\cite{van_breukelen_efficient_2015} This is expected because the variance for the interaction parameter in linear mixed model \eqref{eq:interact} includes the same design effect as appears in the variance for the average treatment effect in CRTs (also see Section \ref{s:model}). \begin{table} \caption{\label{tab:soLOD}Locally optimal design with cluster size ($m$), number of clusters ($n$), and power to detect a standardized HTE effect size of 0.2 for known outcome-ICC ($\rho_{y|x}$) and covariate-ICC ($\rho_x$) values assuming a total budget $B=100,000$, cost ratios of $k=10$ ($c=500$, $s=50$) and $k=20$ ($c=2,000$, $s=100$), and $\sigma^2_{y|x} = \sigma^2_x = 1$. Bold values indicate instances where $\rho_x$ is outside the valid range and the maximum cluster size (minimum six clusters) was used as optimal.} \centering \begin{tabular}{ll | lll | lll} \toprule && \multicolumn{3}{c |}{Cost ratio $k=10$} & \multicolumn{3}{c}{Cost ratio $k=20$}\\ $\rho_{y|x}$ & $\rho_x$ & $m$ & $n$ & Power& $m$ & $n$ & Power\\ \midrule 0.005 & 0.1 & \textbf{323} & \textbf{6} & \textbf{0.990} & \textbf{146} & \textbf{6} & \textbf{0.826}\\ & 0.2 & 175 & 10 & 0.979 & \textbf{146} & \textbf{6} & \textbf{0.809}\\ & 0.5 & 76 & 23 & 0.973 & 119 & 7 & 0.741\\ & 0.75 & 55 & 30 & 0.961 & 81 & 9 & 0.668\\ & 1 & 44 & 37 & 0.955 & 63 & 12 & 0.671\\ \midrule 0.05 & 0.1 & \textbf{323} & \textbf{6} & \textbf{0.990} & \textbf{146} & \textbf{6} & \textbf{0.824}\\ & 0.2 & \textbf{323} & \textbf{6} & \textbf{0.982} & \textbf{146} & \textbf{6} & \textbf{0.784}\\ & 0.5 & 61 & 28 & 0.913 & \textbf{146} & \textbf{6} & \textbf{0.618}\\ & 0.75 & 22 & 62 & 0.830 & 40 & 16 & 0.441\\ & 1 & 13 & 86 & 0.753 & 19 & 25 & 0.352\\ \midrule 0.1 & 0.1 & \textbf{323} &
\textbf{6} & \textbf{0.993} & \textbf{146} & \textbf{6} & \textbf{0.841}\\ & 0.2 & \textbf{323} & \textbf{6} & \textbf{0.986} & \textbf{146} & \textbf{6} & \textbf{0.800}\\ & 0.5 & \textbf{323} & \textbf{6} & \textbf{0.913} & \textbf{146} & \textbf{6} & \textbf{0.619}\\ & 0.75 & 20 & 66 & 0.751 & 74 & 10 & 0.376\\ & 1 & 9 & 105 & 0.630 & 13 & 30 & 0.265\\ \midrule 0.2 & 0.1 & \textbf{323} & \textbf{6} & \textbf{0.997} & \textbf{146} & \textbf{6} & \textbf{0.880}\\ & 0.2 & \textbf{323} & \textbf{6} & \textbf{0.993} & \textbf{146} & \textbf{6} & \textbf{0.841}\\ & 0.5 & \textbf{323} & \textbf{6} & \textbf{0.938} & \textbf{146} & \textbf{6} & \textbf{0.657}\\ & 0.75 & 86 & 20 & 0.690 & \textbf{146} & \textbf{6} & \textbf{0.403}\\ & 1 & 6 & 125 & 0.491 & 8 & 35 & 0.189\\ \bottomrule \end{tabular} \end{table} \subsection{Maximin design}\label{ss:sommd} Section \ref{ss:solod} illustrated how the optimal design that minimizes $\sigma^2_{\text{HTE}}$ within a budget constraint varies with the outcome-ICC and covariate-ICC. While reporting of the outcome-ICC is recommended practice for parallel CRTs\cite{campbell_consort_2004,campbell_consort_2012} and becoming increasingly commonplace, reporting of the covariate-ICC is currently uncommon. Thus, there is likely to be substantial uncertainty around these values at the design stage, and misspecification of ICC values can result in inaccurate sample size estimates and lead to either over- or under-powered trials. To address this potential limitation for designing studies interested in assessing the average treatment effect, van Breukelen and Candel\cite{van_breukelen_efficient_2015} introduced a maximin CRT design procedure. Through a search process, this procedure identifies a design that is optimal for a particular outcome-ICC value while getting as close as possible to the maximum relative efficiency (RE) for the other values in a given plausible range.
We consider a similar procedure but now focus on the assessment of HTE in CRTs. Specifically, we define RE for assessing the HTE as: \begin{equation} \text{RE}_{\text{HTE}} = \frac{\sigma^{2*}_{\text{HTE}}}{\sigma^2_{\text{HTE}}}, \end{equation} where $\sigma^{2*}_{\text{HTE}}$ is the variance of the HTE parameter estimator under the LOD from Section \ref{ss:solod}. Based on this RE expression, we extend the maximin design procedure to accommodate uncertainty in both the outcome-ICC and the covariate-ICC for assessing the HTE in CRTs. This maximin design procedure for testing HTE is summarized in Algorithm \ref{algo:soMMD}. \begin{algorithm} \caption{Maximin design procedure for assessing HTE in CRTs}\label{algo:soMMD} \begin{algorithmic}[1] \State Define the discrete parameter space for ($\rho_{y|x}$, $\rho_x$) and design space for $\left(m, n(m) \right)$; \State For each ($\rho_{y|x}$, $\rho_x$) parameter value combination, compute the LOD for the HTE objective using Proposition \ref{PROP:SOLOD}. Then compute the RE for each $\left(m, n(m) \right)$ design value combination compared with the LOD at the parameter value pair by taking the ratio of the variances; \State For each $\left(m, n(m) \right)$ design value combination, identify the ($\rho_{y|x}$, $\rho_x$) parameter value combination with the smallest RE; \State Among the smallest REs, choose the $\left(m, n(m) \right)$ design value combination with the largest RE. This returns the maximin optimal design for assessing HTE in CRTs. \end{algorithmic} \end{algorithm} Of note, the maximin design in Algorithm \ref{algo:soMMD} is not an exhaustive search over every ($m$, $n$) combination; instead $n$ is determined as a function of $m$ via ${B}/(c+sm)$ (also see Proposition \ref{PROP:SOLOD}).
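Algorithm \ref{algo:soMMD} can be sketched as a small grid search. In the illustration below, the LOD at each ICC pair is found by brute force over the same design grid rather than via Proposition \ref{PROP:SOLOD}, and all function names are our own:

```python
def rel_var_hte(m, rho_yx, rho_x, k):
    """sigma^2_HTE as a function of cluster size m under the budget
    constraint, up to a constant that does not involve (m, n)."""
    return ((k + m) * (1 - rho_yx) * (1 + (m - 1) * rho_yx)
            / (m * (1 + (m - 2) * rho_yx - (m - 1) * rho_x * rho_yx)))

def maximin_hte(rho_yx_grid, rho_x_grid, k, m_grid):
    """Steps 1-4 of the maximin procedure: pick the cluster size whose
    worst-case relative efficiency (RE) over the ICC grid is largest."""
    # Step 2: LOD variance at each ICC pair, found by grid search over m.
    lod_var = {(ry, rx): min(rel_var_hte(mm, ry, rx, k) for mm in m_grid)
               for ry in rho_yx_grid for rx in rho_x_grid}
    best_m, best_re = None, -1.0
    for m in m_grid:
        # Step 3: worst-case RE of design m over the ICC grid.
        worst = min(lod_var[(ry, rx)] / rel_var_hte(m, ry, rx, k)
                    for ry in rho_yx_grid for rx in rho_x_grid)
        # Step 4: keep the design with the largest worst-case RE.
        if worst > best_re:
            best_m, best_re = m, worst
    return best_m, best_re
```

For the corner grid $\rho_{y|x}\in\{0.005, 0.2\}$, $\rho_x\in\{0.1, 1\}$ with $k=10$ and $m\in[2, 323]$, this search selects a cluster size of $22$ with a worst-case RE of roughly $0.69$.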
It also need not be an exhaustive search over the ICC parameter space; similar to observations made by van Breukelen and Candel,\cite{van_breukelen_efficient_2015} we observe that the maximin design for assessing HTE is often found at the intersection of two out of four potential RE curves defined by the boundaries of the ICC parameter ranges, when the design space is relatively broad in $m$: ($\underline{\rho_{y|x}},\underline{\rho_x}$), ($\underline{\rho_{y|x}},\overline{\rho_x}$), ($\overline{\rho_{y|x}},\underline{\rho_x}$), ($\overline{\rho_{y|x}},\overline{\rho_x}$), where $\underline{\rho_{y|x}}$ and $\overline{\rho_{y|x}}$ refer to the minimum and maximum values of the outcome-ICC in the specified parameter space, and $\underline{\rho_{x}}$ and $\overline{\rho_{x}}$ refer to the minimum and maximum values of the covariate-ICC in the specified parameter space, respectively. There are several cases where the maximin design will not be found at an intersection between these scenarios, but at the maximum value of $m$ in the design space. First, a larger cost-ratio $k$ will flatten RE curves for all ICC scenarios such that LODs are found at larger $m$ to offset the relatively increased cost of additional clusters; thus, intersections between scenarios will occur at larger values of $m$ and if the design space is restricted, $\overline{m}$ may be smaller than this potential intersection point. Second, smaller maximum values of the outcome- and covariate-ICCs will flatten RE curves for ICC scenarios involving the maximums, and the LODs for these scenarios are found at larger values of $m$ due to a lower degree of clustering; the maximum of the covariate-ICC is usually more influential for this than the outcome-ICC. If the design space does not extend to these regions, the maximin design will be found at the maximum value of $m$ in the design space. 
As an illustration, Figure \ref{fig:soMMD} shows two examples of maximin designs for assessing HTE where $\rho_{y|x} \in [0.005, 0.2]$, $\rho_x \in [0.1, 1]$ under design spaces $$m\in \left[2,\frac{B/\underline{n} -c}{s}\right],~~~~n\in \left[6,\frac{B}{c+s\underline{m}}\right].$$ Figure \ref{fig:soMMD} (a) and (b) assume cluster-to-individual cost ratios of $k=10$ ($B=100,000$, $c=500$, $s=50$) and $k=20$ ($B=100,000$, $c=2,000$, $s=100$), respectively. A vertical dotted gray line depicts the maximin design; in the case of a cost ratio of $k=10$ the maximin design is $62$ clusters of size $22$, while in the $k=20$ case it is $18$ clusters of size $33$. Note that in each cost ratio case, the maximin design is at the intersection of the RE curves for ICC combinations ($\overline{\rho_{y|x}} =0.2$, $\underline{\rho_x}=0.1$) (dashed purple line) and ($\overline{\rho_{y|x}}=0.2$, $\overline{\rho_x}=1$) (dashed pink line). This makes intuitive sense as the LOD for the ($\overline{\rho_{y|x}}=0.2$, $\overline{\rho_x}=1$) scenario tends toward many small clusters so it reaches maximum RE early in the design space and then quickly becomes less relatively efficient as $m$ increases. On the other hand, in our example the LOD for the ($\overline{\rho_{y|x}} =0.2$, $\underline{\rho_x}=0.1$) scenario is the smallest number of large clusters possible within our constraints, so it is slow in reaching the maximum RE; its RE curve follows very closely to the ($\underline{\rho_{y|x}} =0.005$, $\underline{\rho_x}=0.1$) scenario (solid green line), which has the same LOD. Thus, it makes sense for the maximin design to be found at the intersection of scenarios that achieve their LOD most and least quickly, respectively. The minimum RE for the maximin design in both cost-ratio scenarios (($m=22$, $n=62$) in the $k=10$ scenario, and ($m=33$, $n=18$) in the $k=20$ scenario) is approximately $0.68$. 
However, if the ICC combination(s) under which the maximin design is identified differs from the true ICC that generates the trial data, the RE of the maximin design may improve by as much as $32$\%. That is, if the true trial ICC combination is $(\rho_{y|x}=0.005, \rho_x=0.1)$ (Figure \ref{fig:soMMD}, solid green line), for example, instead of $(\rho_{y|x}=0.2, \rho_x=0.1)$ (dashed purple line) or $(\rho_{y|x}=0.2, \rho_x=1)$ (dashed pink line), then, given that the maximin design is found at the intersection of the dashed purple and pink lines, the maximin design for either the $k=10$ or $k=20$ scenario (($m=22$, $n=62$) and ($m=33$, $n=18$), respectively) can achieve an RE of approximately $0.90$ (and thus a 32\% improvement over the minimum RE of 0.68). In addition, we confirm that the maximin design in the higher cost-ratio case favors a smaller number of large clusters compared to the lower cost-ratio case, reflecting the cost-effective strategy of expanding the cluster size to increase precision when recruiting an additional cluster becomes expensive and less practical. Of course, the corresponding overall or average treatment effect scenarios would result in very different maximin designs; for example, the average treatment effect maximin design for a cost ratio of $k=10$ would be $80$ clusters of size $15$, while it would be $23$ clusters of size $23$ in the $k=20$ case. In general, the ATE-oriented maximin design favors a greater number of smaller clusters compared to the HTE-oriented maximin design; this difference arises because the HTE-oriented maximin design requires us to additionally consider the impact of the covariate-ICC beyond the outcome-ICC. \begin{figure} \caption{\label{fig:soMMD}Plots of relative efficiencies (RE) of designs with cluster size $m$ versus their respective LODs for several ($\rho_{y|x}$, $\rho_x$) combinations.} \end{figure} For completeness, we include three-dimensional RE plots in Appendix \ref{3dplots} for the $k=20$ case.
The left panels of Figure C1 illustrate the behavior of RE across the design space of $m$ and continuously across the parameter space of $\rho_{y|x}$ for fixed values of $\rho_x\in\{0.1, 0.5, 1\}$. The right panels of Figure C1 serve a similar purpose, but illustrate the behavior of RE continuously across the parameter space of $\rho_x$ for fixed values of $\rho_{y|x}\in\{0.005, 0.1, 0.2\}$. Dynamic versions of these plots can also be viewed via a freely-accessible R shiny web application at \url{https://mary-ryan.shinyapps.io/HTE-MMD-app/}. \section{Optimal designs based on a compound optimality criterion}\label{s:mood} In Section \ref{ss:solod}, the locally optimal and maximin designs are based on maximizing the power for detecting HTE, and are referred to as the \emph{single-objective designs}. In general, single-objective maximin designs are useful when we are only interested in powering a study with respect to a single analytic goal. The single-objective optimal design procedures developed for assessing the HTE, however, may or may not be optimal for assessing the average treatment effect as the respective estimators for these different effect measures have different variances relying on different sets of parameters. To balance the needs of these two objectives, in the following Section \ref{ss:molod} we construct a compound optimality criterion that allows us to find an optimal design taking into account both the average and heterogeneous treatment effect objectives assuming knowledge of the ICC parameters, and arrive at a multiple-objective locally optimal design. In Section \ref{ss:mommd}, we further extend this to the maximin design space to find a design that is optimal over a range of unknown ICC values. To encourage the exploration of a wider range of parameter spaces, we have also implemented the multiple-objective locally optimal and maximin designs in a freely-accessible R shiny web application at \url{https://mary-ryan.shinyapps.io/HTE-MMD-app/}. 
\subsection{Locally optimal design}\label{ss:molod} Let $\Theta_{\text{HTE}}(\zeta)$ and $\Theta_{\text{ATE}}(\zeta)$ denote the heterogeneous (minimize $\sigma^2_{\text{HTE}}$) and average treatment effect (minimize $\sigma^2_{\text{ATE}}$) objectives, respectively, under some design $\zeta$ in the design space. Similar to Moerbeek,\cite{moerbeek_optimal_2020} we create a compound function that takes both objectives into account: \begin{equation}\label{eq:compoundObj} \Theta(\zeta|\lambda) = \lambda\Theta_{\text{ATE}}(\zeta) + (1-\lambda)\Theta_{\text{HTE}}(\zeta), \end{equation} where $\lambda\in[0,1]$ is a user-specified priority weight. As a linear combination of two objectives, this compound function includes two special cases when $\lambda$ takes the boundary values. That is, when $\lambda=0$, the objective function represents the efficiency objective for assessing HTE alone (and returns the methods in Section \ref{ss:solod}); when $\lambda=1$, the objective function coincides with the efficiency objective for assessing the average treatment effect alone (and returns some of the methods in Table \ref{tab:LODlit}, but replacing their marginal outcome-ICC with a conditional outcome-ICC). In other cases, assuming the average treatment effect objective will usually be the primary study priority, $\lambda$ can be specified such that the efficiency of $\Theta_{\text{HTE}}(\zeta)$ is maximized while maintaining some minimal efficiency level for $\Theta_{\text{ATE}}(\zeta)$, meaning $\lambda$ can be chosen as a value greater than $0.5$. In what follows, we will pursue the locally optimal design assuming a fixed priority weight $\lambda$. Because the variance considered in each objective may be obtained on a different scale, we standardize each variance based on its respective LOD; for example, the LOD for assessing the HTE is derived in Proposition \ref{PROP:SOLOD}.
Then our optimality criterion can be written as: \begin{align}\label{eq:optCrit} \begin{split} \min_m \Theta(\zeta|\lambda) &= \lambda \frac{\Theta_{\text{ATE}}(\zeta)}{\Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})} + (1-\lambda)\frac{\Theta_{\text{HTE}}(\zeta)}{\Theta_{\text{HTE}}(\zeta^*_{\text{HTE}})}\\ &= \frac{\lambda}{\Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})}\sigma^2_{\text{ATE}} + \frac{(1-\lambda)}{\Theta_{\text{HTE}}(\zeta^*_{\text{HTE}})}\sigma^2_{\text{HTE}}\\ &= \tilde{w}_{\text{ATE}}\sigma^2_{\text{ATE}} + \tilde{w}_{\text{HTE}}\sigma^2_{\text{HTE}}, \end{split} \end{align} where $\zeta^*_{O}$ represents the optimal design under objective $O\in\{\text{HTE},\text{ATE}\}$, $\tilde{w}_{\text{ATE}} = \frac{\lambda}{\Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})}$ and $\tilde{w}_{\text{HTE}} = \frac{1-\lambda}{\Theta_{\text{HTE}}(\zeta^*_{\text{HTE}})}$ represent the weights that $\sigma^2_{\text{ATE}}$ and $\sigma^2_{\text{HTE}}$ contribute to the criterion, and ${\Theta_O(\zeta)}/{\Theta_O(\zeta^*_O)}$ can be interpreted as the inverse RE. Our goal is to minimize the compound objective to find the optimal design. This specification is similar to that used by Moerbeek.\cite{moerbeek_optimal_2020} In the current article, we instead propose to maximize the weighted combination of the REs to obtain the multiple-objective LOD, because RE (rather than inverse RE) is usually a more standard metric in deriving the optimal design. In our numerical explorations (results not shown), these two approaches frequently lead to similar optimal solutions, but the RE criterion provides simpler and more regular solutions (solving quadratic functions rather than fourth-order polynomials).
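In code, the standardization above amounts to dividing each variance by its value at the objective-specific LOD, so both terms are unit-free. A minimal sketch of the two criteria; the variance functions are illustrative placeholders supplied by the user, not the paper's $\sigma^2_{\text{ATE}}$ and $\sigma^2_{\text{HTE}}$ expressions:

```python
# Sketch: each objective's variance is standardized by its value at that
# objective's own locally optimal design (LOD), giving relative efficiencies.

def inverse_re_criterion(var_ate, var_hte, m, m_ate_star, m_hte_star, lam):
    """Weighted sum of inverse REs (to be minimized), as in eq. (optCrit)."""
    return (lam * var_ate(m) / var_ate(m_ate_star)
            + (1 - lam) * var_hte(m) / var_hte(m_hte_star))

def re_criterion(var_ate, var_hte, m, m_ate_star, m_hte_star, lam):
    """Weighted sum of REs (to be maximized), the RE-based alternative."""
    return (lam * var_ate(m_ate_star) / var_ate(m)
            + (1 - lam) * var_hte(m_hte_star) / var_hte(m))
```

At each objective's own LOD the corresponding RE term equals one, so both criteria equal $1$ when a single design happens to optimize both objectives simultaneously.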
Specifically, we propose to solve for the optimal cluster size $m$ by maximizing the weighted combination of the RE criterion: \begin{align}\label{eq:optCritRE} \begin{split} \max_m \Theta(\zeta|\lambda) &= \lambda \frac{\Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})}{\Theta_{\text{ATE}}(\zeta)} + (1-\lambda) \frac{\Theta_{\text{HTE}}(\zeta^*_{\text{HTE}})}{\Theta_{\text{HTE}}(\zeta)}\\ &= \lambda \Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})\frac{1}{\sigma^2_{\text{ATE}}} + (1-\lambda)\Theta_{\text{HTE}}(\zeta^*_{\text{HTE}})\frac{1}{\sigma^2_{\text{HTE}}}\\ &= \frac{w_{\text{ATE}}}{\sigma^2_{\text{ATE}}} + \frac{w_{\text{HTE}}}{\sigma^2_{\text{HTE}}} \end{split} \end{align} where $\zeta^*_{O}$ is defined as in equation \eqref{eq:optCrit}, and $w_{\text{ATE}} = \lambda\Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})$ and $w_{\text{HTE}} = (1-\lambda)\Theta_{\text{HTE}}(\zeta^*_{\text{HTE}})$ represent the weights that $1/\sigma^2_{\text{ATE}}$ and $1/\sigma^2_{\text{HTE}}$ contribute to the criterion. The approach based on \eqref{eq:optCritRE} has the benefit of greater interpretability and a more elegant closed-form solution for the multiple-objective LOD, which we outline in Proposition \ref{PROP:MOLOD}. \begin{proposition}\label{PROP:MOLOD} \textit{Let the compound optimality criterion for the average and heterogeneous treatment effect objectives be defined as in \eqref{eq:optCritRE} and given by:} \begin{equation*} \Theta(\zeta|\lambda) = \lambda \frac{\Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})}{\Theta_{\text{ATE}}(\zeta)} + (1-\lambda) \frac{\Theta_{\text{HTE}}(\zeta^*_{\text{HTE}})}{\Theta_{\text{HTE}}(\zeta)} = \frac{w_{\text{ATE}}}{\sigma^2_{\text{ATE}}} + \frac{w_{\text{HTE}}}{\sigma^2_{\text{HTE}}}.
\end{equation*} \textit{Then, given a budget constraint \eqref{eq:budget}, well-defined outcome- and covariate-ICCs, and a priority weight $\lambda$, the locally optimal design for a cluster randomized trial that maximizes this compound criterion is given by:} \begin{equation}\label{eq:moLOD} m_{\text{opt}} = \frac{-w_{\text{HTE}}ka_2 - \sqrt{w^2_{\text{HTE}}k^2a^2_2 - 4\left\{w_{\text{HTE}}(ka_1 - b_1) - w_{\text{ATE}}\rho_{y|x}\right\}\left\{w_{\text{ATE}}k(1-\rho_{y|x}) + w_{\text{HTE}}ka_3\right\}}}{2\left\{w_{\text{HTE}}(ka_1 - b_1) - w_{\text{ATE}}\rho_{y|x}\right\}}, \end{equation} \textit{under the condition that} \begin{equation}\label{eq:moLODCond} w_{\text{ATE}} > w_{\text{HTE}} \left\{(k+1)\rho_{y|x} - \rho_x(k\rho_{y|x}+1)\right\}\text{ and } m_{\text{opt}} \le \frac{B/\underline{n} - c}{s}, \end{equation} where $a_1=\rho^2_{y|x}(1-\rho_x)$, $a_2=2\rho_{y|x}(1-\rho_{y|x})(1-\rho_x)$, $a_3=(1-2\rho_{y|x}+\rho_x\rho_{y|x})(1-\rho_{y|x})$, $b_1=\rho_{y|x}(\rho_x-\rho_{y|x})$. \textit{If condition \eqref{eq:moLODCond} is not satisfied, then we set} \begin{equation*} m_{\text{opt}} = \frac{B/\underline{n} - c}{s}. \end{equation*} \textit{In either case, the optimal number of clusters is given by} \begin{equation*} n_{\text{opt}} = \frac{B}{c+sm_{\text{opt}}}. \end{equation*} \begin{proof} See Appendix \ref{moLODProof}. \end{proof} \end{proposition} It is worth noting that in the case where covariates are collected at the cluster level ($\rho_x=1$), the multiple-objective LOD given by Proposition \ref{PROP:MOLOD} coincides with the single-objective LOD for assessing treatment effect heterogeneity given in Proposition \ref{PROP:SOLOD}, as well as with the single-objective LOD for the average treatment effect.
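Proposition \ref{PROP:MOLOD} admits a direct implementation. A sketch follows; the weights $w_{\text{ATE}}$ and $w_{\text{HTE}}$ are assumed to be supplied, and the closed-form root assumes the discriminant is nonnegative and the leading coefficient nonzero:

```python
import math

def molod_cluster_size(w_ate, w_hte, k, rho_yx, rho_x):
    """Continuous root m_opt of equation (moLOD)."""
    a1 = rho_yx**2 * (1 - rho_x)
    a2 = 2 * rho_yx * (1 - rho_yx) * (1 - rho_x)
    a3 = (1 - 2 * rho_yx + rho_x * rho_yx) * (1 - rho_yx)
    b1 = rho_yx * (rho_x - rho_yx)
    p = w_hte * (k * a1 - b1) - w_ate * rho_yx       # quadratic coefficient
    q = w_ate * k * (1 - rho_yx) + w_hte * k * a3    # constant term
    disc = (w_hte * k * a2)**2 - 4 * p * q           # assumed nonnegative
    return (-w_hte * k * a2 - math.sqrt(disc)) / (2 * p)

def molod(w_ate, w_hte, k, rho_yx, rho_x, B, c, s, n_min):
    """Apply condition (moLODCond): fall back to the budget cap if violated."""
    m_cap = (B / n_min - c) / s
    cond = w_ate > w_hte * ((k + 1) * rho_yx - rho_x * (k * rho_yx + 1))
    m = molod_cluster_size(w_ate, w_hte, k, rho_yx, rho_x) if cond else m_cap
    m_opt = min(m, m_cap)
    return m_opt, B / (c + s * m_opt)   # (m_opt, n_opt)
```

When $\rho_x=1$ the formula reduces algebraically to $m_{\text{opt}}=\sqrt{k(1-\rho_{y|x})/\rho_{y|x}}$, consistent with the coincidence with the single-objective LODs noted above.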
In addition, our numerical explorations suggest that the condition \eqref{eq:moLODCond} is satisfied over a wide range of the parameter space, and is likely to be binding only when $\lambda<0.25$, in which case the priority weight largely favors the objective to assess treatment effect heterogeneity. To explore the pattern of the multiple-objective LOD, Table \ref{tab:moLOD} presents several examples obtained by calculating \eqref{eq:moLOD}. We assume $k=10$ and a minimum of $\underline{n}=6$ clusters; we additionally assume we are powering both the average treatment effect and HTE for equal standardized effect sizes of $\beta_2/\sigma_{y|x}=\beta_4\sigma_x/\sigma_{y|x}=0.2$ ($\sigma^2_{y|x} = \sigma^2_x = 1$). Multiple-objective LODs were then found assuming priority weights of $\lambda\in\{0.4,0.6,0.85\}$. As $\lambda\rightarrow 1$, the multiple-objective LODs allocate more power to the average treatment effect objective than to the HTE objective, compared with the LOD for the same ICC values under a smaller $\lambda$. We also observe that the LOD changes less with shifts in $\rho_x$ at fixed $\rho_{y|x}$ when $\lambda=0.85$ and tends toward a larger number of smaller clusters than under $\lambda=0.6$ or $0.4$; this is because more weight is given to the average treatment effect objective, which is less sensitive to changes in $\rho_x$ as the covariate-ICC does not factor into $\sigma^2_{\text{ATE}}$. More substantial changes in the LOD are seen with shifts of $\rho_{y|x}$ when $\lambda=0.85$. As $\lambda \rightarrow 0$, changes in the LOD with shifts of $\rho_{y|x}$ and $\rho_x$ are more gradual, moving more evenly from a smaller number of large clusters to many small clusters as both ICCs increase; this is because priority is shifted to the HTE objective, whose variance depends on both ICC parameters.
We also see that this shift toward the HTE objective results in the LODs favoring fewer, larger clusters than for greater values of $\lambda$ at the same $(\rho_{y|x}, \rho_x)$. Finally, for a fixed $\rho_{y|x}$ and $\lambda$, the power for the average treatment effect remains fairly stable as $\rho_x$ increases while the power for assessing HTE can vary to a greater degree, especially if the value of the outcome-ICC, $\rho_{y|x}$, is large. \begin{table} \caption{\label{tab:moLOD}Locally optimal cluster size ($m$), number of clusters ($n$), and power to detect standardized average (ATE) and heterogeneous treatment effect (HTE) sizes of $0.2$ for known outcome-ICC ($\rho_{y|x}$) and covariate-ICC ($\rho_x$) values assuming a total budget $B=100,000$, cluster-associated costs $c=500$, individual-associated costs $s=50$ (cost ratio $k=10$), and $\sigma^2_{y|x} = \sigma^2_x = 1$.} \centering \begin{tabular}{ll | llll | llll | llll} \toprule && \multicolumn{4}{c}{$\lambda = 0.4$} & \multicolumn{4}{c}{$\lambda = 0.6$} & \multicolumn{4}{c}{$\lambda=0.85$}\\ \midrule && && \multicolumn{2}{c|}{\emph{Power}} &&& \multicolumn{2}{c|}{\emph{Power}} &&& \multicolumn{2}{c}{\emph{Power}}\\ $\rho_{y|x}$ & $\rho_x$ & $m$ & $n$ & ATE & HTE & $m$ & $n$ & ATE & HTE & $m$ & $n$ & ATE & HTE\\ \midrule 0.005 & 0.1 & 72 & 24 & 0.947 & 0.984 & 58 & 29 & 0.952 & 0.982 & 48 & 34 & 0.954 & 0.979\\ & 0.2 & 68 & 25 & 0.947 & 0.980 & 56 & 30 & 0.953 & 0.980 & 48 & 34 & 0.954 & 0.977\\ & 0.5 & 57 & 29 & 0.950 & 0.970 & 52 & 32 & 0.955 & 0.972 & 46 & 35 & 0.953 & 0.969\\ & 0.75 & 50 & 33 & 0.954 & 0.963 & 48 & 34 & 0.954 & 0.963 & 45 & 36 & 0.955 & 0.963\\ & 1 & 44 & 37 & 0.956 & 0.955 & 44 & 37 & 0.956 & 0.955 & 44 & 37 & 0.956 & 0.955\\ \midrule 0.05 & 0.1 & 26 & 55 & 0.735 & 0.961 & 18 & 71 & 0.769 & 0.942 & 15 & 80 & 0.778 & 0.929\\ & 0.2 & 25 & 57 & 0.743 & 0.950 & 18 & 71 & 0.769 & 0.931 & 14 & 83 & 0.777 & 0.910\\ & 0.5 & 21 & 64 & 0.758 & 0.893 & 17 & 74 & 0.774 & 0.883 & 14 & 83 & 0.777 & 
0.867\\ & 0.75 & 17 & 74 & 0.774 & 0.829 & 15 & 80 & 0.778 & 0.824 & 14 & 83 & 0.777 & 0.819\\ & 1 & 13 & 86 & 0.774 & 0.753 & 13 & 86 & 0.774 & 0.753 & 13 & 86 & 0.774 & 0.753\\ \midrule 0.1 & 0.1 & 19 & 68 & 0.620 & 0.949 & 12 & 90 & 0.667 & 0.908 & 10 & 100 & 0.677 & 0.885\\ & 0.2 & 19 & 68 & 0.620 & 0.934 & 12 & 90 & 0.667 & 0.891 & 10 & 100 & 0.677 & 0.868\\ & 0.5 & 16 & 76 & 0.642 & 0.949 & 12 & 90 & 0.667 & 0.821 & 10 & 100 & 0.677 & 0.802\\ & 0.75 & 12 & 90 & 0.667 & 0.736 & 11 & 95 & 0.673 & 0.734 & 10 & 100 & 0.677 & 0.727\\ & 1 & 9 & 105 & 0.676 & 0.630 & 9 & 105 & 0.676 & 0.630 & 9 & 105 & 0.676 & 0.630\\ \midrule 0.2 & 0.1 & 13 & 86 & 0.527 & 0.937 & 8 & 111 & 0.576 & 0.870 & 6 & 125 & 0.581 & 0.806\\ & 0.2 & 13 & 86 & 0.527 & 0.917 & 8 & 111 & 0.576 & 0.846 & 6 & 125 & 0.581 & 0.782\\ & 0.5 & 11 & 95 & 0.550 & 0.799 & 8 & 111 & 0.576 & 0.750 & 6 & 125 & 0.581 & 0.694\\ & 0.75 & 9 & 105 & 0.568 & 0.646 & 7 & 117 & 0.578 & 0.619 & 6 & 125 & 0.581 & 0.602\\ & 1 & 6 & 125 & 0.581 & 0.491 & 6 & 125 & 0.581 & 0.491 & 6 & 125 & 0.581 & 0.491\\ \bottomrule \end{tabular} \end{table} \subsection{Maximin design}\label{ss:mommd} In Table \ref{tab:moLOD}, we observe that the multiple-objective LOD varies for different combinations of outcome-ICC and covariate-ICC. Thus, we can use the compound optimality criterion \eqref{eq:optCritRE} within a maximin design framework to find a design robust to ICC misspecification that appropriately powers both the average and heterogeneous treatment effect objectives within the given budget constraints. This multiple-objective maximin design procedure is summarized in Algorithm \ref{algo:moMMD}.
\begin{algorithm} \caption{Multiple-objective maximin design procedure based on the compound optimality criterion}\label{algo:moMMD} \begin{algorithmic}[1] \State Choose priority weight $\lambda$; \State Define the parameter ($\rho_{y|x}$, $\rho_x$) and design $\left(m, n(m)\right)$ spaces; \State For each ($\rho_{y|x}$, $\rho_x$) parameter value combination, compute the LOD for each objective based on Proposition \ref{PROP:SOLOD} and methods (for assessing the average treatment effect) in Table \ref{tab:LODlit}. Then compute the compound optimality criterion $\Theta(\zeta|\lambda)$ for each $\left(m, n(m)\right)$ design value combination relative to the LODs at that parameter value pair, according to \eqref{eq:optCritRE}; \State For each $\left(m, n(m)\right)$ design value combination, choose the ($\rho_{y|x}$, $\rho_x$) parameter value combination that has the smallest criterion value; \State Among the smallest criterion values, choose the $\left(m, n(m)\right)$ design value combination that has the largest criterion value. \end{algorithmic} \end{algorithm} Figure \ref{fig:moMMD} illustrates examples of multiple-objective maximin designs for cost ratios of $k=10$ (left panels) and $k=20$ (right panels) as well as priority weights $\lambda=0.4$ (top row), $\lambda=0.6$ (middle row), and $\lambda=0.85$ (bottom row). We explored the following parameter and design spaces: \begin{equation*} \rho_{y|x}\in [0.005, 0.2],~~~~\rho_x\in [0.1, 1],~~~~m\in \left[2,\frac{B/\underline{n}-c}{s}\right],~~~~n\in\left[6, \frac{B}{c+s\underline{m}}\right].
\end{equation*} Unlike in the single-objective maximin design case, where the maximin design was often at the intersection of RE curves from two ICC value combinations, in the compound-objective case there are often three different ICC value combinations that achieve local minima of the optimality criterion on portions of the range of $m$, and the maximin design generally falls somewhere along the ``middle'' local minimum; this is seen most clearly when $k=20$ and $\lambda=0.4$ and $0.6$ (Figure \ref{fig:moMMD} panels (b) and (d)), where the ``middle'' local minimum refers to the dashed purple line for ICC scenario $(0.2, 0.1)$. In our particular example, regardless of cost ratio, we see that the maximin design is found at the intersection of $(\overline{\rho_{y|x}}, \underline{\rho_x})$ (dashed purple line) and $(\overline{\rho_{y|x}}, \overline{\rho_x})$ (dashed pink line) when priority is placed on the HTE objective ($\lambda < 0.5$; Figure \ref{fig:moMMD} (a)-(b)); this is the same scenario intersection at which the single HTE-objective maximin design was found in Figure \ref{fig:soMMD} (Figure \ref{fig:soMMD} can be thought of as an extreme $\lambda=0$ case of the multiple-objective maximin design). On the other hand, when priority is placed on the average treatment effect objective ($\lambda > 0.5$; Figure \ref{fig:moMMD} (c)-(f)), the maximin design is found at the intersection of $(\overline{\rho_{y|x}}, \underline{\rho_x})$ (dashed purple line) and $(\underline{\rho_{y|x}}, \underline{\rho_x})$ (solid green line).
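Steps 3 to 5 of Algorithm \ref{algo:moMMD} amount to a standard maximin grid search. A minimal sketch, with a `criterion` callable standing in for the compound criterion $\Theta(\zeta|\lambda)$ evaluated as in \eqref{eq:optCritRE}:

```python
def maximin_design(designs, icc_grid, criterion):
    """Return the candidate design maximizing the worst-case criterion value.

    designs   : iterable of candidate designs (e.g., cluster sizes m)
    icc_grid  : iterable of (rho_yx, rho_x) parameter combinations
    criterion : callable(design, rho_yx, rho_x) -> compound criterion value
    """
    # Step 4: worst-case (smallest) criterion over the ICC grid per design.
    # Step 5: best (largest) worst case over the design space.
    return max(designs, key=lambda d: min(criterion(d, ryx, rx)
                                          for ryx, rx in icc_grid))
```

The toy criterion below is only for demonstration; in practice `criterion` would evaluate the weighted RE combination at the design's budget-implied $n(m)$.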
We also see that as $\lambda \rightarrow 1$ the curves for the $(\underline{\rho_{y|x}}, \underline{\rho_x})$ and $(\underline{\rho_{y|x}}, \overline{\rho_x})$ (solid green and dotted orange) scenarios converge, as do the curves for the $(\overline{\rho_{y|x}}, \underline{\rho_x})$ and $(\overline{\rho_{y|x}}, \overline{\rho_x})$ (dashed purple and dashed pink) scenarios; this is because $\sigma^2_{\text{ATE}}$ only depends on $\rho_{y|x}$ and therefore the pure ATE-oriented maximin design is found at the intersection of boundary values of the outcome-ICC, regardless of the value of the covariate-ICC. In addition, while the multiple-objective maximin design can vary across cost ratios, it does not vary excessively across values of $\lambda$, especially when comparing across different values of $\lambda$ giving ``majority weight'' to the same objective. \begin{figure} \caption{\label{fig:moMMD} Plots of the compound optimality criterion across cluster size $m$ for cost ratios $k=10$ (left panels) and $k=20$ (right panels) and priority weights $\lambda=0.4$ (top row), $\lambda=0.6$ (middle row), and $\lambda=0.85$ (bottom row).} \end{figure} As in the single-objective case, we include three-dimensional optimality criterion plots in Appendix \ref{3dplots} for the $k=20$ and $\lambda=0.6$ case. The left panels of Figure C2 illustrate the behavior of the optimality criterion across the design space of $m$ and continuously across the parameter space of $\rho_{y|x}$ for fixed values of $\rho_x\in\{0.1, 0.5, 1\}$. The right panels of Figure C2 serve a similar purpose, but illustrate the behavior of the optimality criterion continuously across the parameter space of $\rho_x$ for fixed values of $\rho_{y|x}\in\{0.005, 0.1, 0.2\}$. Dynamic versions of these plots can also be viewed via a freely-accessible R shiny web application at \url{https://mary-ryan.shinyapps.io/HTE-MMD-app/}. \section{Power Considerations for Maximin Designs}\label{s:power} The maximin design procedures proposed in this article allow investigators to identify an optimal study sample size in the face of uncertainty regarding outcome- and covariate-ICC values at the design stage.
Once an optimal design is found using the proposed maximin design procedures, though, a question remains as to how one might conduct power calculations for the next stage in study planning, including decisions around which ICC values to use. While our main focus in this work is on the identification of optimal designs based on relative efficiency, we provide some perspectives on power calculation for completeness. We begin by assuming that investigators are interested in exploring power over the same ICC parameter space as they previously used for identification of the maximin design. Next, as uncertainty around outcome- and covariate-ICC values still remains for study investigators, it may benefit investigators to calculate power under a grid search of the ICC parameter space as sensitivity analyses. As an example, power curves for the single HTE objective with standardized effect size of $\beta_4\sigma_x/\sigma_{y|x}=0.2$ ($\sigma^2_{y|x} = \sigma^2_x = 1$) and at cost ratios $k=10$ (a) and $k=20$ (b), evaluated at their respective maximin designs identified in Section \ref{ss:sommd}, are shown in Figure \ref{fig:soMMD-power}. Similar power curves assessing power for the heterogeneous and average treatment effects evaluated at maximin designs identified in Section \ref{ss:mommd} at $\lambda=0.6$ are shown in Figure C3 in Appendix \ref{3dplots}. \begin{figure} \caption{\label{fig:soMMD-power} Power curves for the single HTE objective with standardized effect size $0.2$ at cost ratios $k=10$ (a) and $k=20$ (b), evaluated at their respective maximin designs.} \end{figure} Overall we observe that, at a particular maximin design and for a fixed $\rho_{y|x}$, power of the HTE test decreases as $\rho_x$ increases. When $\rho_x$ is very small, higher power will be achieved under larger $\rho_{y|x}$; when $\rho_x$ is large, higher power is obtained under small $\rho_{y|x}$. Power differences across values of $\rho_{y|x}$ at small $\rho_x$ are more pronounced at larger cost ratios $k$ (panel b).
We observe that, in general, the highest HTE power is attained at $(\overline{\rho_{y|x}}, \underline{\rho_x})$ while the lowest power is attained at $(\overline{\rho_{y|x}}, \overline{\rho_x})$; this reflects previous results regarding the parabolic relationship between $\rho_{y|x}$ and the variance of the HTE estimator for fixed, non-optimal designs.\cite{yang_sample_2020} We note that the endpoints of the lightest and darkest lines in Figure \ref{fig:soMMD-power} represent the boundary ICC combinations that were assessed at the maximin design stage. Thus, we can then establish a lower bound for the HTE power of our maximin design at $(\overline{\rho_{y|x}}, \overline{\rho_x})$ and an upper bound at $(\overline{\rho_{y|x}}, \underline{\rho_x})$. In the case of our examples in Figure \ref{fig:soMMD-power}, we would conclude that our maximin design would have power to detect a standardized $\beta_4\sigma_x/\sigma_{y|x}=0.2$ as low as $36.7$\% and as high as $97.2$\% when $k=10$, and as low as $14.4$\% and as high as $72.8$\% when $k=20$. These power bounds, along with accompanying power curves such as those shown in Figure \ref{fig:soMMD-power}, can be used by investigators to assess whether satisfactory power is achieved under the allocated budget across a range of ICC values. We make an additional note that, in the case of assessing power for the average treatment effect after identifying the optimal sample size via the multiple-objective maximin design (as we do in Figure C3 in Appendix \ref{3dplots}), one needs only $\rho_{y|x}$ to create upper and lower power bounds. This is because $\sigma^2_{\text{ATE}}$ does not involve $\rho_x$. In addition, the power of the average treatment effect test generally decreases with increasing values of $\rho_{y|x}$, and hence the lower and upper bound of $\rho_{y|x}$ often corresponds to the upper and lower bound of power of the average treatment effect test. 
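The power curves and bounds above can be reproduced with the normal-approximation formula commonly used for CRT sample size calculations, power $\approx \Phi(|\delta|/\mathrm{SE} - z_{1-\alpha/2})$, where $\delta$ is the effect size; a sketch in which the standard error (derived from $\sigma^2_{\text{HTE}}$ or $\sigma^2_{\text{ATE}}$ at the chosen design and ICC values) is treated as a supplied input:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def wald_power(effect, se, z_crit=1.959963985):
    """Approximate power of a two-sided Wald test: Phi(|effect|/se - z).
    z_crit defaults to the alpha = 0.05 two-sided critical value."""
    return normal_cdf(abs(effect) / se - z_crit)
```

Evaluating `wald_power` over a grid of ICC values, with the maximin design's implied standard error plugged into `se`, yields the kind of lower and upper power bounds described above.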
In our evaluation of optimal designs and study power, we have primarily focused on studying the impact of changes in each single design parameter while holding the other design parameters constant. Our goal has been to assess, in the study planning stage, how changes in the resulting optimal design and study power may be sensitive to input values for each individual design parameter (and therefore understand the anticipated trend), rather than to indicate that the design parameters are variationally independent in practice. In practice, it may not always be the case that one ICC parameter will stay fixed if the other is increased or decreased due to the implicit relationship between $\rho_x$ and $\rho_{y|x}$. For example, moving to a more homogeneous population of covariate $X$ (increasing $\rho_x$) may not affect the marginal homogeneity of the outcome ($\rho_y$) but may decrease the homogeneity of the outcome conditional on the covariate ($\rho_{y|x}$) if the covariate is highly correlated with the outcome (due to explained variation). Finally, we acknowledge that the total budget provided is often a key consideration to ensure a practical design where the testing objectives are properly powered in the worst case scenario. If either the single- or multiple-objective maximin design provides insufficient power for the desired effect size(s) under the worst case scenario regarding the ICC assumptions, the total budget must be increased to provide a larger statistical power under the worst case scenario. In addition, power will also depend on the assumptions on the range for ICC values. If pilot or routinely-collected data are available to help elicit narrower ranges of the covariate- or outcome-ICC values, that information should inform the power calculation for the obtained maximin design, and can often improve the power under the worst case scenario compared to using unnecessarily wide ICC ranges. 
There are recent efforts that report outcome-ICCs for cluster randomized trials,\cite{korevaar_intra-cluster_2021} and we encourage similar efforts to report covariate-ICCs for planning CRTs to detect treatment effect heterogeneity. \section{Application to the Kerala Diabetes Prevention Program (K-DPP) Study}\label{s:dataApp} We illustrate our single- and multiple-objective optimal design procedures using data from the Kerala Diabetes Prevention Program (K-DPP) study,\cite{thankappan_peer-support_2018} a cluster-randomized controlled trial of a peer-support lifestyle intervention to reduce progression to diabetes in a community setting in India; we use data that are publicly available from the figshare database: \url{https://figshare.com/articles/dataset/K-DPP_datasets/5661610}. In the actual study, participants at high-risk for diabetes were recruited from $60$ polling areas (clusters) in a subdistrict of Kerala state, and polling areas were randomized in a $1$:$1$ ratio to receive usual care (education booklet on general lifestyle advice) or a $12$-month peer-support lifestyle intervention consisting of $15$ group sessions primarily led by trained lay peer leaders and held in local neighborhood facilities. The intervention was specifically designed to reduce cost and resource burden so as to be more readily employed in low- and middle-income countries where diabetes incidence is on the rise. The primary outcome was incidence of diabetes at $24$ months, but secondary outcomes included change in Indian Diabetes Risk Score (IDRS). Post-hoc HTE subgroup analyses were also conducted based on baseline glucose tolerance group, including impaired fasting glucose (IFG) as defined by the World Health Organization. 
In the context of the K-DPP study, suppose study investigators are interested in conducting a CRT to evaluate the benefit of the peer-support lifestyle intervention among the population at high-risk for developing diabetes as measured by change in IDRS, as well as to see if such benefit is differential by baseline body mass index (BMI) and IFG status. As the randomization ratio is $1$:$1$, the variance of the treatment variable is given by $\sigma^2_w=0.25$. The original study reported cluster-level costs in the intervention arm to be approximately \$$241.20$ per cluster (personnel, travel, food, logistics, and communication costs for training and group sessions) and individual-level costs to be approximately \$$7.98$ per participant (resource materials and administrative costs), with the intervention arm costing a total of \$$11,225$ in 2013 USD; this results in a cluster-to-individual cost ratio of $k\approx 30$. Assuming cluster- and individual-level costs in the control arm were considerably less, we will assume a total budget of \$$20,000$ and a global cost ratio of $k=20$ ($c=\$100$, $s=\$5$) for our study planning; we will discuss extension to heterogeneous cost ratios in Section \ref{s:discuss}. The original study also reported that peer groups had approximately $10$ to $23$ participants each; therefore we will restrict our search for optimal cluster size $m$ to be in the space $[8, 40]$ and total clusters $n$ to be in $[66,143]$. Based on publicly available K-DPP data, we estimate the marginal standard deviation of change in IDRS to be $\sigma_y=10.270$. The original study did not report an observed outcome-ICC, but we estimate it using a linear mixed model procedure to be $0.028$ conditional on BMI and $0.032$ conditional on IFG; for the maximin design procedures, we will define the parameter space for $\rho_{y|x}$ to be $[0.005, 0.1]$. The K-DPP study observed an average treatment effect size of $\beta_2=-1.50$ IDRS points.
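The budget-feasible design space quoted above can be recovered directly from the budget constraint; a sketch using the K-DPP planning values from the text ($B=\$20{,}000$, $c=\$100$, $s=\$5$, at least $66$ clusters, and a minimum cluster size of $8$):

```python
B, c, s = 20_000, 100, 5   # total budget, cluster-level cost, individual-level cost
n_min, m_min = 66, 8       # minimum number of clusters and minimum cluster size

# Largest affordable cluster size if at least n_min clusters must be recruited:
m_max = (B / n_min - c) / s        # continuous value, truncated to an integer below
# Largest affordable number of clusters at the smallest cluster size:
n_max = B / (c + s * m_min)

print(int(m_max), round(n_max))
```

These values recover the search ranges $m\in[8,40]$ and $n\in[66,143]$ used in the K-DPP illustration.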
From baseline K-DPP data, the mean BMI is $24.888$ ($\sigma_x=4.031$) and approximately $22.5$\% of participants have IFG ($\sigma_x=0.417$). We estimate the covariate-ICC for BMI to be $0.055$ ($95$\% CI: $[0.022, 0.106]$)\cite{mcgraw_forming_1996} and for IFG to be $0.012$ ($95$\% CI: $[0,0.093]$),\cite{zou_confidence_2004} where the ICC for IFG was estimated using the ANOVA method.\cite{ridout_estimating_1999} We will define the parameter space for both covariate-ICCs to be $[0.1, 0.75]$, as we likely would not have had a precise parameter range at the design stage. Suppose we are interested in an effect size of the treatment-by-BMI interaction of $\beta_4=0.25\times\beta_2=-0.375$ points, and an effect size for the IFG interaction that is the same as the average treatment effect. In practice, the HTE effect size associated with IFG may be smaller, but we select a relatively large effect size to offset the relatively small IFG covariate variance (as a binary effect modifier). Because we will perform design calculations based on both the ATE and HTE objectives, we assume BMI and IFG are mean-centered. If we were solely interested in powering for one of the HTEs, the LOD for optimally testing the HTE with respect to either BMI or IFG is obtained as one with $66$ clusters of size $40$ (total sample size $N=2,640$), as we are assuming a minimum of $66$ clusters. Using Algorithm \ref{algo:soMMD}, we find that the maximin design agrees with the LOD: the optimal design would be one with $66$ clusters of size $40$, found at ($\overline{\rho_{y|x}}=0.1, \underline{\rho_x}=0.1$), giving us $96.5$\% power to detect the HTE with respect to BMI at that scenario and $69.7$\% power at $(\overline{\rho_{y|x}}=0.1, \overline{\rho_x}=0.75)$, the scenario with the worst power.
On the other hand, we get $34.6$\% power to detect the HTE with respect to IFG under the ($\overline{\rho_{y|x}}=0.1, \underline{\rho_x}=0.1$) scenario and 17.4\% power at $(\overline{\rho_{y|x}}=0.1, \overline{\rho_x}=0.75)$. At the observed ICC values, ($\rho_{y|x}=0.028, \rho_{\text{BMI}}=0.055$) and ($\rho_{y|x}=0.032, \rho_{\text{IFG}}=0.012$), we have $96.5$\% power to detect the HTE with respect to BMI and $34.9$\% power to detect the HTE with respect to IFG. These apparent differences in power between the BMI and IFG effects are mainly due to differences in the magnitude of the standard deviations of the effect modifier. Table \ref{tab:moLOD-KDPP} further shows the multiple-objective LODs obtained when the true ICCs are known, ($\rho_{y|x}=0.028$, $\rho_{\text{BMI}}=0.055$) and ($\rho_{y|x}=0.032$, $\rho_{\text{IFG}}=0.012$), with priority weights varying between $0.5$ and $0.95$ (recall that as the priority weight goes to $1$, the objective criterion favors the objective for studying the average treatment effect). As the covariate-ICCs for BMI and IFG are not vastly different, their respective multiple-objective LODs are also found to be relatively similar. 
\begin{table} \caption{\label{tab:moLOD-KDPP}LOD, value of the optimality criterion, and power to detect an average treatment effect size of $-1.5$ and either a BMI HTE effect size of $-0.375$ or an IFG effect size of $-1.5$ at various $\lambda$ values assuming a total budget $B=20,000$, cluster-associated costs $c=100$, and individual-associated costs $s=5$ (cost ratio $k=20$).} \centering \begin{tabular}{l | cllll | cllll} \toprule &\multicolumn{5}{c|}{\emph{BMI}} & \multicolumn{5}{c}{\emph{IFG}}\\ $\lambda$ & Optimality Criterion & $m$ & $n$ & ATE & HTE & Optimality Criterion & $m$ & $n$ & ATE & HTE\\ \midrule 0.5 & 0.827 & 42 & 64 & 0.747 & 0.967 & 0.814 & 40 & 66 & 0.723 & 0.349\\ 0.55 & 0.839 & 38 & 68 & 0.753 & 0.962 & 0.827 & 36 & 71 & 0.734 & 0.340\\ 0.6 & 0.854 & 36 & 71 & 0.760 & 0.960 & 0.843 & 34 & 74 & 0.741 & 0.335\\ 0.65 & 0.869 & 33 & 75 & 0.764 & 0.955 & 0.860 & 32 & 76 & 0.740 & 0.326\\ 0.7 & 0.886 & 32 & 76 & 0.763 & 0.952 & 0.878 & 30 & 80 & 0.748 & 0.322\\ 0.75 & 0.903 & 30 & 80 & 0.771 & 0.949 & 0.900 & 29 & 81 & 0.746 & 0.316\\ 0.8 & 0.922 & 29 & 81 & 0.768 & 0.945 & 0.916 & 28 & 83 & 0.748 & 0.314\\ 0.85 & 0.941 & 28 & 83 & 0.770 & 0.943 & 0.937 & 27 & 85 & 0.750 & 0.310\\ 0.9 & 0.960 & 27 & 85 & 0.772 & 0.941 & 0.957 & 26 & 86 & 0.747 & 0.304\\ 0.95 & 0.980 & 26 & 86 & 0.768 & 0.935 & 0.979 & 25 & 88 & 0.747 & 0.299\\ \bottomrule \end{tabular} \end{table} Designs obtained using multiple-objective maximin design Algorithm \ref{algo:moMMD} and varying the priority weight $\lambda$ between $0.5$ and $0.95$ are summarized in Table \ref{tab:moMMD-KDPP}. If the average and heterogeneous treatment effect objectives are given equal priority, the optimal design is one with $86$ clusters of size $26$ (total sample size $N=2,236$) found at the intersection of ($\rho_{y|x}=0.005$, $\rho_x=0.1$) and ($\rho_{y|x}=0.1$, $\rho_x=0.1$). 
If the average treatment effect objective is given a priority of at least $\lambda=0.6$, the optimal design becomes the one with $85$ clusters of size $27$ (total sample size $N=2,295$) also found at the intersection of ($\rho_{y|x}=0.005$, $\rho_x=0.1$) and ($\rho_{y|x}=0.1$, $\rho_x=0.1$). This gives us between $49.7$\% and $91.2$\% power to detect the average treatment effect, between $68.7$\% and $94.2$\% power to detect the HTE with respect to BMI, and between $17.1$\% and $30.7$\% power to detect the HTE with respect to IFG. At the observed ICC values, ($\rho_{y|x}=0.028$, $\rho_{\text{BMI}}=0.055$) and ($\rho_{y|x}=0.032$, $\rho_{\text{IFG}}=0.012$), we have $77.2$\% power to detect the average treatment effect, $94.1$\% power to detect the HTE with respect to BMI, and $31.0$\% power to detect the HTE with respect to IFG. \begin{table} \caption{\label{tab:moMMD-KDPP}Maximin design, value of the optimality criterion, and upper and lower power bounds to detect an average treatment effect (ATE) size of $-1.5$ and either a BMI HTE effect size of $-0.375$ or an IFG effect size of $-1.5$ at various $\lambda$ values assuming a total budget $B=20,000$, cluster-associated costs $c=100$, and individual-associated costs $s=5$ (cost ratio $k=20$).} \centering \begin{tabular}{l | cllccc} \toprule &&&& \multicolumn{3}{c}{Power Bounds}\\ \cmidrule{5-7} $\lambda$ & Optimality Criterion & $m$ & $n$ & ATE & HTE (BMI) & HTE (IFG)\\ \midrule 0.5 & 0.742 & 26 & 86 & (0.497 - 0.906) & (0.681 - 0.937) & (0.169 - 0.301)\\ 0.55 & 0.757 & 26 & 86 & (0.497 - 0.906) & (0.681 - 0.937) & (0.169 - 0.301)\\ 0.6 & 0.772 & 27 & 85 & (0.497 - 0.912) & (0.687 - 0.943) & (0.171 - 0.308)\\ 0.65 & 0.787 & 27 & 85 & (0.497 - 0.912) & (0.687 - 0.943) & (0.171 - 0.308)\\ 0.7 & 0.801 & 27 & 85 & (0.497 - 0.912) & (0.687 - 0.943) & (0.171 - 0.308)\\ 0.75 & 0.816 & 27 & 85 & (0.497 - 0.912) & (0.687 - 0.943) & (0.171 - 0.308)\\ 0.8 & 0.828 & 27 & 85 & (0.497 - 0.912) & (0.687 - 0.943) & (0.171 - 
0.308)\\ 0.85 & 0.841 & 27 & 85 & (0.497 - 0.912) & (0.687 - 0.943) & (0.171 - 0.308)\\ 0.9 & 0.853 & 27 & 85 & (0.497 - 0.912) & (0.687 - 0.943) & (0.171 - 0.308)\\ 0.95 & 0.867 & 28 & 83 & (0.491 - 0.914) & (0.687 - 0.945) & (0.171 - 0.311)\\ \bottomrule \end{tabular} \end{table} \section{Discussion}\label{s:discuss} Interest in assessing differential treatment effects among subpopulations, in addition to assessing overall treatment effect, is increasing in the setting of CRTs. Understanding treatment effect heterogeneity is crucial for improving how and to whom future interventions can be designed and delivered. In this article, we expanded on the works of van Breukelen and Candel\cite{van_breukelen_efficient_2015} and Moerbeek\cite{moerbeek_optimal_2020} to develop several optimal design procedures for obtaining the required cluster size and number of clusters that maximize statistical power to test for HTE in CRTs based on a pre-specified effect modifier, measured at either the individual level or cluster level, under a budget constraint (in other words, examining the cost-effectiveness of CRT designs for properly studying important treatment effect moderation). We further extended this optimal design procedure to allow for uncertainty in the assumed covariate-ICC and outcome-ICC values, to achieve design robustness to ICC value misspecification. Our new methodology is further illustrated using a recent CRT that published information on costs for sampling clusters and individuals, which assisted in the ascertainment of optimal design. 
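The designs reported in Tables \ref{tab:moLOD-KDPP} and \ref{tab:moMMD-KDPP} can be sanity-checked against the budget. A minimal sketch (our illustration, assuming that a design with $n$ clusters of size $m$ costs $n(c+sm)$, as in the K-DPP example with $B=20{,}000$, $c=100$, $s=5$):

```python
def max_clusters(m, budget=20_000, c=100, s=5):
    """Largest number of clusters n affordable at cluster size m,
    assuming (as in the K-DPP illustration) that a design with n
    clusters of size m has total cost n * (c + s * m)."""
    return budget // (c + s * m)

# Cluster sizes from the maximin table recover the reported number of clusters:
for m in (26, 27, 28):
    print(m, max_clusters(m))
```

With $c=100$ and $s=5$ the cost ratio is $k=c/s=20$; the same check reproduces, e.g., $n=64$ for the $m=42$ design in Table \ref{tab:moLOD-KDPP}.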
As we elaborate in Section \ref{s:intro} and Table \ref{tab:LODlit}, existing optimal design methodology for CRTs has largely focused on maximizing power for testing the average treatment effect and has not yet considered treatment effect heterogeneity with respect to baseline effect modifiers or covariates, nor maximin designs that are based on two objectives (testing for the average treatment effect and for pre-specified treatment effect heterogeneity). This paper fills those important methodological gaps. Of note, we have pursued the optimal design results with a quantitative endpoint analyzed by linear mixed models, under which framework the optimal design critically depends on the variance of the target estimator and is free of the effect size. This has been noted for optimal design results for assessing the average treatment effect,\cite{raudenbush_statistical_1997} and is also applicable when the interest lies in studying HTE (single-objective optimal design) and in studying both the average and heterogeneous treatment effects (multiple-objective optimal design). Furthermore, as was discussed in Section \ref{s:power}, the total budget specified is often a key consideration for ensuring a practical design in which the testing objectives are properly powered; under the budget constraint, the optimal design then boils down to the optimal number of clusters ($n$), as the optimal cluster size ($m$) depends only on the cluster-to-individual cost ratio and the ICC values, not on the total budget. If either the single- or multiple-objective maximin design provides insufficient power for the desired effect size(s), the total budget must be increased to achieve adequate power. Finally, we note that like van Breukelen et al,\cite{van_breukelen_efficient_2015} Liu et al,\cite{liu_optimal_2019} and Moerbeek,\cite{moerbeek_optimal_2020} we base our maximin procedure on a function of relative efficiency.
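The closed-form optimal cluster sizes behind this remark (Equation \eqref{appEq:moLODrhox1} for the average treatment effect and Proposition \ref{PROP:SOLOD} for HTE) can be verified numerically. A sketch with illustrative ICC values (not the K-DPP estimates), checking the HTE closed form against a brute-force grid search over $m$:

```python
import math

def m_opt_ate(k, rho_y):
    # Optimal cluster size for the average treatment effect objective:
    # m = sqrt(k * (1 - rho_{y|x}) / rho_{y|x}).
    return math.sqrt(k * (1 - rho_y) / rho_y)

def m_opt_hte(k, rho_y, rho_x):
    # Closed form from Proposition SOLOD (additive root); assumes the
    # denominator below is positive, so an interior optimum exists.
    num = (1 - rho_y) * (1 - rho_x) + math.sqrt(
        (1 - rho_y) * (rho_x - rho_y)
        * (1 - (k + 2) * rho_y + (k + 1) * rho_x * rho_y) / (rho_y * k))
    return num / ((rho_x - rho_y) / k - rho_y * (1 - rho_x))

def hte_variance_factor(m, k, rho_y, rho_x):
    # The m-dependent factor of sigma^2_HTE; the constant s(1-rho)/B drops out.
    return ((k + m) * (1 + (m - 1) * rho_y)
            / (m * (1 + (m - 2) * rho_y - (m - 1) * rho_x * rho_y)))

# Illustrative values chosen so that an interior optimum exists.
k, rho_y, rho_x = 2, 0.05, 0.5
m_closed = m_opt_hte(k, rho_y, rho_x)
m_grid = min((2 + 0.001 * i for i in range(48001)),
             key=lambda m: hte_variance_factor(m, k, rho_y, rho_x))
```

With the K-DPP cost ratio $k=20$ and $\rho_{y|x}=0.028$, `m_opt_ate` gives $m\approx 26$, consistent with the cluster sizes reported in Table \ref{tab:moMMD-KDPP}.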
It is also natural to consider a procedure that maximizes the minimum efficiency (equivalently, minimizing the maximum $\sigma^2_{\text{HTE}}$ and thus maximizing the minimum power); the procedure in this case would always result in the LOD for the worst-case ICC combination scenario (or as close to the LOD as we may get in the chosen design space), which may be a very different design from that identified under a RE-based maximin procedure. As noted in van Breukelen et al,\cite{van_breukelen_efficient_2015} however, an efficiency-based maximin procedure for studying the average treatment effect has the potential to be very inefficient if the true ICC values are very different from the worst-case scenario. Further investigations are necessary to elucidate the operating characteristics of such an efficiency-based maximin design when the interest lies in assessing treatment effect heterogeneity. There are several limitations and possible future extensions of our current work. First, we only considered the case where the aim lies in testing treatment effect heterogeneity or moderation with respect to a univariate baseline covariate, which can be either binary or continuous. While this is a common scenario in studying confirmatory HTE, and one in which sample size calculations at the design stage require relatively few parameters, it might be of interest to extend our framework to a joint test for HTE with respect to multiple or multivariate effect modifiers. Even for the single-objective design, this extension requires one to properly define the optimality criterion in terms of a variance-covariance matrix of the interaction parameter estimators; see Yang et al\cite{yang_sample_2020} for a characterization of the variance matrix expression (which we refer to as $\Sigma_{\text{HTE}}$ in subsequent text) that extends $\sigma^2_{\text{HTE}}$ to multiple covariates.
For example, it would be worthwhile to identify locally optimal designs by minimizing the trace of the variance-covariance matrix $\Sigma_{\text{HTE}}$, the determinant of $\Sigma_{\text{HTE}}$, or the maximum eigenvalue of $\Sigma_{\text{HTE}}$; these three optimality criteria are akin to A-optimality, D-optimality, and E-optimality in the classic optimal design literature.\cite{fedorov_theory_2013} Extensions of any of these locally optimal designs to maximin designs open up new avenues for additional research. Second, as is conventional for planning CRTs, we have assumed a constant cluster size $m$, whereas in practice the cluster sizes may be variable due to non-informative drop-out or the fact that the source population is heterogeneous. While the extension of our optimal designs for assessing HTE and the compound objective is worthy of further investigation, Tong et al\cite{tong_accounting_2021} have recently pointed out that the ``correction factor'' for $\sigma^2_{\text{HTE}}$ based on an individual-level effect modifier due to cluster size variation is almost equal to $1$ in a wide range of the parameter space. This suggests that our optimal design procedure for assessing HTE with an individual-level effect modifier would likely be robust under small to moderate degrees of cluster size variability. The correction factor for $\sigma_{\text{HTE}}^2$ with a cluster-level effect modifier shares the same form as that derived earlier in van Breukelen et al,\cite{van_breukelen_relative_2007} though it can exceed $1$ and even be as large as $1.24$.\cite{tong_accounting_2021} We plan to conduct additional research to elucidate the impact of unequal cluster sizes on identifying the optimal designs based on the compound objective in Section \ref{s:mood}. Finally, we also assume the cluster-to-individual cost ratio does not vary by study arm.
There are cases where the cost ratio will differ between treatment and control arms, as might also be the case in the K-DPP study. The heterogeneous cost ratio may therefore lead to a different locally optimal or maximin design, which will require further modifications of our procedure; the same would be true if each study arm had a different total budget allocated to it, or if there were heterogeneity in outcome variance between the arms (such as when a binary outcome is considered). Van Breukelen and Candel\cite{van_breukelen_maximin_2021} recently developed a single-objective maximin design for studying the average treatment effect by accommodating cost as well as variance heterogeneity, and Moerbeek\cite{moerbeek_optimal_2020} recently considered the case where both cost ratios and cluster size vary between arms, wherein the optimal design is expressed as a ratio of sample sizes. We plan to carry out future work along these directions to extend our Proposition \ref{PROP:SOLOD} and Proposition \ref{PROP:MOLOD}, and to refine the associated operational details for achieving cost-effective study designs in broader settings with categorical outcomes and cost heterogeneity. \section*{Data Availability Statement} Data used in this article as an illustrative example are publicly available from the figshare database at \url{https://doi.org/10.6084/m9.figshare.5661610}. \appendix \section{Proof of Proposition \ref{PROP:SOLOD}}\label{soLODProof} To find a closed-form solution for the LOD for assessing HTE, first recall HTE variance equation \eqref{eq:sHTE2}: \begin{equation*} \sigma^2_{\text{HTE}} \propto \frac{s(1-\rho_{y|x})}{B} \times \frac{(k+m)\left[1+(m-1)\rho_{y|x}\right]}{m\left[1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right]}.
\end{equation*} \noindent Taking the derivative with respect to $m$, we get: \begin{align*} &\frac{\left\{\left[1+(m-1)\rho_{y|x}\right] + (k+m)\rho_{y|x}\right\}m\left[1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right]}{\left\{m\left[1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right]\right\}^2}\\ &~~- \frac{(k+m)\left[1+(m-1)\rho_{y|x}\right]\left\{\left[1+(m-2)\rho_{y|x}-(m-1)\rho_x\rho_{y|x}\right] + m(\rho_{y|x}-\rho_x\rho_{y|x})\right\}}{\left\{m\left[1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right]\right\}^2}. \end{align*} \noindent The numerator of this expression simplifies, up to a positive factor, to: \begin{equation}\label{appEq:derivativeHTE} (b_1 - ka_1)m^2 - ka_2m - ka_3, \end{equation} where \begin{align*} a_1 &= \rho^2_{y|x}(1-\rho_x),\\ a_2 &= 2\rho_{y|x}(1-\rho_{y|x})(1-\rho_x),\\ a_3 &= (1-2\rho_{y|x}+\rho_x\rho_{y|x})(1-\rho_{y|x}),\\ b_1 &= \rho_{y|x}(\rho_x - \rho_{y|x}). \end{align*} \noindent Setting \eqref{appEq:derivativeHTE} equal to 0, we can solve for $m$ using the quadratic formula: \begin{equation*} m=\frac{ka_2 \pm \sqrt{k^2a^2_2 + 4ka_3(b_1 - ka_1)}}{2(b_1 - ka_1)}, \end{equation*} which can be further simplified to: \begin{equation*} m_{\text{opt}}=\frac{(1-\rho_{y|x})(1-\rho_x) \pm \sqrt{\rho^{-1}_{y|x}k^{-1} (1-\rho_{y|x})(\rho_x - \rho_{y|x}) \left[1-(k+2)\rho_{y|x} + (k+1)\rho_x\rho_{y|x}\right]}}{k^{-1}(\rho_x - \rho_{y|x}) - \rho_{y|x}(1-\rho_x)}. \end{equation*} Focusing on the additive root, as this is where solutions greater than zero occur, we obtain: \begin{equation} m_{\text{opt}}=\frac{(1-\rho_{y|x})(1-\rho_x) + \sqrt{\rho^{-1}_{y|x}k^{-1} (1-\rho_{y|x})(\rho_x - \rho_{y|x}) \left[1-(k+2)\rho_{y|x} + (k+1)\rho_x\rho_{y|x}\right]}}{k^{-1}(\rho_x - \rho_{y|x}) - \rho_{y|x}(1-\rho_x)}. \end{equation} \section{Proof of Proposition \ref{PROP:MOLOD}}\label{moLODProof} To find a closed-form solution for the multiple-objective LOD, we recall the variance equation \eqref{eq:sHTE2}: \begin{equation*} \sigma^2_{\text{HTE}} \propto \frac{s(1-\rho_{y|x})}{B} \times
\frac{(k+m)\left[1+(m-1)\rho_{y|x}\right]}{m\left[1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right]}. \end{equation*} We can find a similar expression for $\sigma^2_{\text{ATE}}$ by rearranging cost function \eqref{eq:budget} for $n$ and substituting this into average treatment effect variance equation \eqref{eq:sATE}: \begin{equation} \sigma^2_{\text{ATE}} \propto \frac{(k+m)\left[1+(m-1)\rho_{y|x}\right]}{m}. \end{equation} \noindent Using these in the multiple-objective optimality criterion \eqref{eq:optCritRE}, we can simplify the criterion to: \begin{equation*} \Theta(\zeta | \lambda ) \propto w_{\text{ATE}} \times \frac{m}{(k+m)\left[1+(m-1)\rho_{y|x}\right]} + w_{\text{HTE}} \times\frac{m\left[1+(m-2)\rho_{y|x} - (m-1)\rho_x\rho_{y|x}\right]}{(k+m)\left[1+(m-1)\rho_{y|x}\right]}, \end{equation*} where $w_{\text{ATE}} = \lambda\Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})$ and $w_{\text{HTE}} = (1-\lambda)\Theta_{\text{HTE}}(\zeta^*_{\text{HTE}})$. Taking the derivative of the above expression with respect to $m$, we get: \begin{equation}\label{appEq:derivMO} w_{\text{ATE}}\times \frac{k(1-\rho_{y|x}) -m^2\rho_{y|x}}{\left\{(k+m)\left[1+(m-1)\rho_{y|x}\right]\right\}^2} + w_{\text{HTE}}\times \frac{(ka_1-b_1)m^2 +ka_2m +ka_3}{\left\{(k+m)\left[1+(m-1)\rho_{y|x}\right]\right\}^2}, \end{equation} where \begin{align*} a_1&=\rho^2_{y|x}(1-\rho_x),\\ a_2&=2\rho_{y|x}(1-\rho_{y|x})(1-\rho_x),\\ a_3&=(1-2\rho_{y|x}+\rho_x\rho_{y|x})(1-\rho_{y|x}),\\ \text{and } b_1&=\rho_{y|x}(\rho_x-\rho_{y|x}). \end{align*} \noindent Setting this equal to $0$, we can simplify the left-hand side and group terms by the degree of $m$: \begin{equation}\label{appEq:derivMOEq0} \left\{w_{\text{HTE}}(ka_1 - b_1) - w_{\text{ATE}}\rho_{y|x}\right\}m^2 + w_{\text{HTE}}ka_2m + \left[w_{\text{ATE}}k(1-\rho_{y|x}) + w_{\text{HTE}}ka_3\right] = 0.
\end{equation} Applying the quadratic formula, the solution with $m>0$ is: \begin{equation} m_{\text{opt}} = \frac{-w_{\text{HTE}}ka_2 - \sqrt{w_{\text{HTE}}^2k^2a^2_2 - 4\left[w_{\text{HTE}}(ka_1 - b_1) - w_{\text{ATE}}\rho_{y|x}\right]\left[w_{\text{ATE}}k(1-\rho_{y|x}) + w_{\text{HTE}}ka_3\right]}}{2\left[w_{\text{HTE}}(ka_1 - b_1) - w_{\text{ATE}}\rho_{y|x}\right]}. \end{equation} \noindent When $\rho_x=1$, $\sigma^2_{\text{HTE}} = \sigma^2_{\text{ATE}}$, so the optimality criterion is proportional to the relative efficiency for the average treatment effect, and derivative \eqref{appEq:derivMO} simplifies (up to a positive factor) to: $$\Theta_{\text{ATE}}(\zeta^*_{\text{ATE}})\times \frac{k(1-\rho_{y|x}) -m^2\rho_{y|x}}{\left\{(k+m)\left[1+(m-1)\rho_{y|x}\right]\right\}^2}.$$ Setting this equal to $0$ and solving for $m$, we get: \begin{equation}\label{appEq:moLODrhox1} m_{\text{opt}} = \sqrt{\frac{(1-\rho_{y|x})}{\rho_{y|x}}\times k}. \end{equation} \section{Supplementary Figures}\label{3dplots} \begin{figure} \caption{\label{fig:soMMD-3d}} \end{figure} \begin{figure} \caption{\label{fig:moMMD-3d}} \end{figure} \begin{figure} \caption{\label{fig:moMMD-power}} \end{figure} \renewcommand*{\thetable}{\arabic{table}} \end{document}
\begin{document} \begin{abstract} The paper gives a categorical approach to generalized manifolds such as orbit spaces and leaf spaces of foliations. It is suggested to consider these spaces as sets equipped with some additional structure which generalizes the notion of an atlas. The approach is compared with the known ones that use the Grothendieck topos, the Haefliger classifying space, and the Connes non-commutative geometry. The main aim of this paper is to indicate that the suggested approach permits an essential simplification of the modern theory of the leaf spaces of foliations and the orbit spaces of diffeomorphism groups, and yields some new results on the characteristic classes of foliations. As an application, it is shown that the first Chern class is non-trivial for the leaf space of the Reeb foliation, and its geometrical meaning is indicated. \subjclass[2000]{57D30, 57R32, 22A22, 55R40, 57D20} \end{abstract} \maketitle \section{Introduction} Objects such as leaf spaces of foliations and orbit spaces of diffeomorphism groups have been subjects of intensive study during the last several decades. There is a vast literature devoted to the investigation of these spaces. They have a very complicated structure; in particular, their topology is often very bad. For this reason such a space is usually replaced by some model which ``encodes'' the information about it. For example, for the leaf spaces of foliations there are three approaches to constructing such models \cite{MoModels}, namely, by means of the Grothendieck topos, the Haefliger classifying space, and the Connes non-commutative geometry. Central to all these approaches is the construction of a smooth \'etale groupoid $G$. The three approaches above then become special instances of the general procedure of associating to a smooth groupoid a classifying topos $Sh(G)$, a classifying space $BG$, and a convolution algebra $C^\infty_c(G)$.
Thus the study of the leaf space of a foliation is included in the more general theory of smooth groupoids. In all of these approaches the leaf space of a foliation or the orbit space of a diffeomorphism group is present only virtually. In contrast to these approaches, we consider the objects above as sets equipped with some additional structure which generalizes the notion of an atlas. Moreover, we construct a category containing among its objects both manifolds and all the objects above. A typical example of such an object is an orbifold. Thus, our construction is based on the idea of an atlas consisting of charts, in contrast, for example, to non-commutative geometry, which is based on the idea of an (in general non-commutative) algebra of functions. Although the topology of the underlying sets of objects of the category above is often bad or trivial, the smooth structure is very rich. Such an approach was first proposed in the paper \cite{L5}. A more general categorical construction, which contains all the categories we need below, is proposed in the paper \cite{L6}. Note that the objects of these categories possess a natural Grothendieck topology and, in many cases, they are determined uniquely by some smooth \'etale groupoid. This allows us to compare our approach with the known ones. The main aim of this paper is to indicate that our approach permits an essential simplification of the modern theory of the leaf spaces of foliations and the orbit spaces of diffeomorphism groups, and yields some new results on the characteristic classes of foliations. The characteristic classes of a foliation $\mathfrak F$ of codimension $n$ on a smooth manifold $M$ are well known. Moreover, it is known that they can be defined as cohomology classes on the spaces of leaves of foliations.
Usually, the characteristic classes above are derived from the relative cohomologies $H^*(W_n,O(n))$ and $H^*(W_n,\text{GL}(n,\mathbb{R}))$ of the Lie algebra of formal vector fields with respect to the orthogonal group $O(n)$, or from the Pontrjagin classes of the principal $\text{GL}(n,\mathbb{R})$-bundle associated to the space of leaves of the foliation. All these characteristic classes can be obtained as cohomology classes of the \v{C}ech-de Rham cohomology theory constructed in \cite{Cr}. Although there are explicit formulas for all these characteristic classes, there is no information about the non-triviality of the characteristic classes above. In the paper \cite{L5} it was shown that the formal first Chern class of the cohomology $H^*(W_1,{\rm GL}(1,\mathbb{R}))$ is in general non-trivial in some specific cohomology theory. One of the aims of this paper is to prove that the first Chern class is non-trivial for the leaf space of the Reeb foliation and to indicate its geometrical meaning. Unless otherwise specified, all manifolds below are finite-dimensional Hausdorff manifolds of class $C^\infty$. Throughout the paper all categories are assumed to be small. \section{$\mathcal C$-spaces} \subsection{Definition of $\mathcal C$-spaces}\label{secdefCsp} Let $\mathcal C$ be a category. Below we denote the set of objects of $\mathcal C$ by $\mathcal C_0$ and the set of morphisms of $\mathcal C$ by $\mathcal C_1$. Let $a\in\mathcal C_0$. A set $S$ of $\mathcal C$-morphisms $f:b\to a$ is an $a$-{\bfseries sieve} if the composition of an arbitrary $\mathcal C$-morphism $c\to b$ with $f$ belongs to $S$. The minimal $a$-sieve containing a given set $T$ of $\mathcal C$-morphisms with target $a$ is called the $a$-{\bfseries sieve generated by} $T$. The set of $\mathcal C$-morphisms $c\to b$ whose compositions with $f$ belong to $S$ is called the {\bfseries restriction} of $S$ to $b$ (with respect to $f$) and is denoted by $f^*S$.
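The following toy example (our illustration, not taken from \cite{L5} or \cite{L6}) may help fix the notions of a sieve and its restriction, in the simplest site of open sets:

```latex
% Illustration (not from the sources): sieves and restrictions in the
% poset of open subsets of a topological space T, with inclusions as
% the only morphisms.
\begin{itemize}
\item For an open set $U$, the $U$-sieve generated by a family
  $T=\{U_i\subset U\}$ consists of all open $V$ contained in some
  $U_i$: composing any further inclusion $V'\subset V$ with a member
  of the sieve stays in the sieve.
\item For an inclusion $f:W\to U$, the restriction $f^*S$ of the
  sieve above is the $W$-sieve of all open $V\subset W$ contained in
  some $U_i$, i.e.\ the sieve generated by the induced family
  $\{U_i\cap W\}$.
\end{itemize}
```

The covers introduced below for the categories $\mathcal M_n$, $\mathcal D_n$, etc.\ generalize exactly this picture, with étale morphisms in place of inclusions.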
Recall that a {\bfseries Grothendieck topology} on a category $\mathcal C$ is constituted by assigning to each $a\in\mathcal C_0$ a set $\operatorname{Cov}(a)$ of $a$-sieves, called {\bfseries covers}, such that the following axioms hold: \begin{enumerate} \item For any $a\in\mathcal C_0$, the $a$-sieve generated by the identity morphism $1_a$ is a cover; \item For any $\mathcal C$-morphism $f:b\to a$ and $S\in\operatorname{Cov}(a)$, we have $f^*S\in\operatorname{Cov}(b)$; \item Let $S\in\operatorname{Cov}(a)$ and let $R$ be an $a$-sieve. Then $R\in\operatorname{Cov}(a)$ if, for any $\mathcal C$-morphism $f:b\to a$ from $S$, we have $f^*R\in\operatorname{Cov}(b)$. \end{enumerate} A category $\mathcal C$ with a Grothendieck topology on $\mathcal C$ is called a {\bfseries site}. Referring to the paper \cite{L6} for the general exposition, we consider the following five categories. \begin{itemize} \item[1] The category $\mathcal M_n$ with objects smooth manifolds of dimension $n$ and morphisms local diffeomorphisms (i.e. \'etale morphisms) of one manifold into another\footnote{Here a smooth map $f:M\to N$ between two manifolds is called a local diffeomorphism if each point of $M$ has an open neighborhood $U$ such that $f$ restricted to $U$ is a diffeomorphism onto~$f(U)$.}. \item[2] The full subcategory $\mathcal D_n$ of the category $\mathcal M_n$ with objects open submanifolds of $\mathbb R^n$. \item[3] The category $\mathcal M_{\infty}$ with objects smooth manifolds with model space $\mathbb R^\infty$ and morphisms local diffeomorphisms (i.e. \'etale morphisms) of one manifold into another. In \cite{BR} it is shown that the usual smooth technique can be extended naturally to such manifolds. \item[4] The category $\mathcal M$ with objects smooth manifolds (of arbitrary finite dimension) and morphisms smooth maps of one manifold into another. \item[5] Let $G$ be a Lie group.
Consider the category $\mathcal P_n(G)$ with objects smooth principal $G$-bundles with $n$-dimensional bases, whose morphisms are morphisms of such principal $G$-bundles projecting to \'etale maps of the bases. \end{itemize} Let $\mathcal C$ be any of the categories above. For any $a\in\mathcal C_0$, an $a$-sieve $S$ is called a cover if $S$ contains the set of inclusions $a_i\to a$, where $\{a_i\}$ is an open cover of $a$. It is easy to see that these data define a Grothendieck topology on $\mathcal C$. Next we apply some general constructions of \cite{L6} to the category $\mathcal C$. Denote by $J$ the forgetful functor from $\mathcal C$ to the category {\itshape Sets} of sets. Let $X$ be a set. A $\mathcal C$-{\bfseries chart} on $X$ is a map $k:c\to X$, where $c\in\mathcal C_0$. Put $D(k)=c$. Given two $\mathcal C$-charts $k_1$ and $k_2$, {\bfseries a morphism} from $k_1$ to $k_2$ is a $\mathcal C$-morphism $f:D(k_1)\to D(k_2)$ such that $k_1= k_2\circ f$. Let $\mathcal C(X)$ be the category of $\mathcal C$-charts on $X$. By definition, $D:\mathcal C(X)\to\mathcal C$ is a covariant functor. For a set $\Phi(X)$ of $\mathcal C$-charts on $X$, denote by $\mathcal C_{\Phi(X)}$ the full subcategory of $\mathcal C(X)$ with the object set $\Phi(X)$. \begin{definition}\label{C-atlas} A set $\Phi(X)$ of $\mathcal C$-charts on $X$ is called a $\mathcal C$-atlas on $X$ if the set $X$ with the set of maps $k:J\circ D(k)\to X$ ($k\in\Phi(X)$) is an inductive limit $\varinjlim J\circ D$ of the functor $J\circ D:\mathcal C_{\Phi(X)}\to Sets$. \end{definition} Let $\Phi(X)$ be a $\mathcal C$-atlas on $X$. Evidently the functor $D:\mathcal C_{\Phi(X)}\to \mathcal C$ maps $\mathcal C_{\Phi(X)}$ onto some subcategory $\mathcal C(\Phi(X))$ of $\mathcal C$, and the pair $(X,\Phi(X))$ can be considered as an inductive limit of the restriction of $J$ to $\mathcal C(\Phi(X))$.
Conversely, for any subcategory $\mathcal B$ of $\mathcal C$, the inductive limit of the restriction of $J$ to $\mathcal B$ is a $\mathcal C$-atlas on some set $X$. Thus one can define a $\mathcal C$-atlas as an inductive limit of the restriction of $J$ to some subcategory of $\mathcal C$. The set $X$ for this $\mathcal C$-atlas is determined by this inductive limit up to a bijective map. Note that, for every set $\Phi(X)$ of $\mathcal C$-charts on $X$, by the definition of an inductive limit, $\varinjlim J\circ D$ is a set $\widetilde X$ given with a set of maps $\widetilde k:J\circ D(k)\to \widetilde X$ ($k\in \Phi(X)$) such that the set of maps $\widetilde k:D(k)\to\widetilde X$ is a $\mathcal C$-atlas on $\widetilde X$ and there is a unique map $h:\widetilde X\to X$ such that, for every $k\in \Phi(X)$, $k=h\circ \widetilde k$. Let $\Phi$ be a set of $\mathcal C$-charts on $X$. We shall say that, for $b\in \mathcal C_0$, a $\mathcal C$-chart on $X$ of the type $k\circ f$, where $k\in \Phi$, $c=D(k)$, and $f:b\to c$ is a morphism of $\mathcal C$, {\bfseries is obtained by restriction of $k$ to $b$ with respect to} $f$. Denote by $R(\Phi)$ the set of $\mathcal C$-charts on $X$ obtained by restriction of the charts from $\Phi$ with respect to all appropriate morphisms of $\mathcal C$. Clearly, we have $\Phi\subset R(\Phi)$ and $R^2(\Phi)=R(\Phi)$. We shall say that a $\mathcal C$-chart $k$ on $X$ {\bfseries is obtained by gluing from} $\Phi$ if there are a family $\{k_i\}$ of $\mathcal C$-charts from $\Phi$ and a set of morphisms $\{f_i:D(k_i)\to D(k)\}$ generating a cover of $D(k)$ such that $k_i=k\circ f_i$. Denote by $G(\Phi)$ the set of $\mathcal C$-charts on $X$ obtained by gluing from $\Phi$. Clearly, we have $\Phi\subset G(\Phi)$ and $G^2(\Phi)=G(\Phi)$. Put $\bar\Phi=G\circ R(\Phi)$. \begin{proposition} (\cite{L6}, Proposition 2.1.2) The correspondence $\Phi\to \bar\Phi$ is a closure on the set of $\mathcal M_n$-atlases on $X$, i.e.
the following conditions hold: \begin{enumerate} \item $\Phi\subset\bar\Phi$; \item If $\Phi_1\subset\Phi_2$, then $\bar\Phi_1\subset\bar\Phi_2$; \item $\bar{\bar\Phi}=\bar\Phi$. \end{enumerate} \end{proposition} Evidently, for a $\mathcal C$-atlas $\Phi$ on $X$, $\bar\Phi$ is a $\mathcal C$-atlas on $X$. A $\mathcal C$-atlas $\Phi$ on $X$ is called {\bfseries closed}\footnote{A closed $\mathcal C$-atlas is the same as a {\bf maximal} $\mathcal C$-atlas. A $\mathcal C$-atlas $\Phi$ is called {\bf full} if, for any $k\in\bar\Phi$ and any $x\in D(k)$, there exist a chart $k_1\in\Phi$ and a morphism $m:k_1\to k$ such that $m(y)=x$ for some $y\in D(k_1)$. The notion of a full atlas is similar to the notion from \cite{Cr} of a transversal basis for a foliation.} if $\bar\Phi=\Phi$. \begin{definition} A pair $(X,\Phi)$, where $\Phi$ is a closed $\mathcal C$-atlas on $X$, is called a $\mathcal C$-{\bfseries space}. Two $\mathcal C$-atlases $\Phi_1$ and $\Phi_2$ on $X$ are called {\bfseries equivalent} if $\bar\Phi_1=\bar\Phi_2$.\end{definition} By definition, equivalent $\mathcal C$-atlases on $X$ determine the same structure of a $\mathcal C$-space on $X$. It is evident that two $\mathcal C$-atlases $\Phi_1$ and $\Phi_2$ on $X$ are equivalent iff $R(\Phi_1)=R(\Phi_2)$. \begin{definition}\label{morphism} Let $(X_1,\Phi_1)$ and $(X_2,\Phi_2)$ be two $\mathcal C$-spaces. A morphism $(X_1,\Phi_1)\to (X_2,\Phi_2)$ is a map $f:X_1\to X_2$ such that, for each $k\in\Phi_1$, we have $f\circ k\in\Phi_2$. \end{definition} Thus, we have a category $\mathcal C_{\mathrm{sp}}$ of $\mathcal C$-spaces; in particular, we have the categories $\mathcal M_{n,\mathrm{sp}}$, $\mathcal D_{n,\mathrm{sp}}$, $\mathcal M_{\infty,\mathrm{sp}}$, $\mathcal M_{\mathrm{sp}}$, and $\mathcal P(G)_{n,\mathrm{sp}}$. For $a\in\mathcal C_0$, the identity map $\mathrm{id}:a\to a$ is a $\mathcal C$-chart on $a$ and $\Phi(a)=\{\mathrm{id}\}$ is a $\mathcal C$-atlas on $a$.
Therefore, one can consider $a$ as an object of the category $\mathcal C_{\mathrm{sp}}$, and the corresponding map $\mathcal C\to\mathcal C_{\mathrm{sp}}$ is an isomorphism of the category $\mathcal C$ onto a full subcategory of the category $\mathcal C_{\mathrm{sp}}$. It is evident that each object of the category $\mathcal M_n$ is an object of the category $\mathcal D_{n,\mathrm{sp}}$. Since the category $\mathcal D_n$ is a full subcategory of $\mathcal M_n$, the corresponding natural map from $\mathcal D_{n,\mathrm{sp}}$ into $\mathcal M_{n,\mathrm{sp}}$ is a category isomorphism (see \cite{L6}, Theorem 2.3.4). Nevertheless, it will later be useful to consider both of these categories. Moreover, each $\mathcal M_n$-atlas $\Phi$ on $X$ is an $\mathcal M$-atlas on $X$ (with the larger set of morphisms of charts), and equivalent $\mathcal M_n$-atlases on $X$ are equivalent as $\mathcal M$-atlases on $X$. It is clear that the definition of a $\mathcal D_n$-space is a direct generalization of the definition of an $n$-dimensional orbifold. See also the similar notions of the $S$-atlas of Van Est \cite{VE} and the $QF$-variety of Pradines and Wouafo-Kamga \cite{PW}. The category of $\mathcal D$-spaces coincides with the category of diffeological spaces introduced by Souriau~\cite{So}. \subsection{Structures on $\mathcal{M}_n$-spaces}\label{secstruc} Now we show that an $\mathcal M_n$-space has a rich smooth structure. Let $F:\mathcal M_n\to Sets$ be a covariant functor. Define an extension of $F$ to the category $\mathcal M_{n,\mathrm{sp}}$ as follows. Let $X$ be an $\mathcal M_n$-space and let $\Phi$ be an $\mathcal M_n$-atlas on $X$. By the definition of an inductive limit, $\varinjlim F\circ D$ is a set $\tilde X$ with a set of maps $\tilde \Phi$: $\tilde k:F\circ D(k)\to\tilde X$ ($k\in\Phi$) compatible with the morphisms of the category $\mathcal C_\Phi$. Put $F(\Phi)=\tilde X$.
It is easy to check that, if two $\mathcal M_n$-atlases $\Phi_1$ and $\Phi_2$ on $X$ are equivalent, there is a natural bijective map $F(\Phi_1)\to F(\Phi_2)$. We thus extend the functor $F$ to the category $\mathcal M_{n,\mathrm{sp}}$. It is easy to see that this extension is compatible with functor morphisms. For example, let $C(M)$ be the set of smooth singular simplexes on a manifold $M$. Evidently $C(M)$ is a covariant functor $\mathcal M_n\to Sets$ which has a natural extension to the category $\mathcal M_{n,\mathrm{sp}}$. Then we have a natural notion of the complex of finite smooth singular chains on each $\mathcal M_n$-space. Let $F:\mathcal M_n\to\mathcal M_N$ ($F:\mathcal M_n\to\mathcal M_\infty$) be a covariant functor and let $\Phi$ be an $\mathcal M_n$-atlas on $X$. Put $\tilde F=J\circ F$ and $\tilde X=\varinjlim \tilde F\circ D$. Then the set of maps $\tilde \Phi$: $\tilde k:\tilde F\circ D(k)\to\tilde X$ ($k\in\Phi$) is an $\mathcal M_N$-atlas ($\mathcal M_\infty$-atlas) on the set $\tilde X=F(\Phi)$. We thus obtain a covariant functor $F:\mathcal M_{n,\mathrm{sp}}\to \mathcal M_{N,\mathrm{sp}}$ ($F:\mathcal M_{n,\mathrm{sp}}\to \mathcal M_{\infty,\mathrm{sp}}$) which is an extension of the initial functor and is compatible with functor morphisms. We thus have the notions of the tangent bundle, the frame bundle, and so on for any $\mathcal M_n$-space. For example, put $F(M)=\mathrm{T}(M)$ or $F(M)=\mathrm{Fr}(M)$, where $\mathrm{T}(M)$ is the tangent bundle of $M$ and $\mathrm{Fr}(M)$ is the frame bundle of $M$. Applying the above extension to these functors, we obtain the tangent bundle of any $\mathcal M$-space and the frame bundle of any $\mathcal M_n$-space. Moreover, the projection $\mathrm{T}(M)\to M$ ($\mathrm{Fr}(M)\to M$) is a functor morphism from the category $\mathcal M_{\mathrm{sp}}$ to the category $\mathcal M_{\mathrm{sp}}$ (from the category $\mathcal M_{n,\mathrm{sp}}$ to $\mathcal M_{n(n+1),\mathrm{sp}}$).
Then we have a projection of the tangent bundle of an $\mathcal M$-space $X$ (of the frame bundle of an $\mathcal M_n$-space) to this space. Note that the fibers of the projection $T(X)\to X$ are not vector spaces in general. By definition, a tangent vector $\tau\in T(X,\Phi)$ is a family of tangent vectors $\tau_k$ to $D(k)$, one for each $k\in\Phi$, such that these vectors are compatible with the morphisms of $\mathcal C_{\Phi}$. Assume that a category $\mathcal C$ has products and that the kernel of each pair of morphisms exists. For example, $\mathcal C$ may be the category of sets, groups, rings, or algebras. Let $F:\mathcal M_n\to\mathcal C$ be a contravariant functor. Define an extension of $F$ to the category of $\mathcal M_n$-spaces as follows. Let $\Phi$ be an $\mathcal M_n$-atlas on $X$. By the definition of a projective limit, $\varprojlim F\circ D$ is an object $\hat X$ of the category $\mathcal C$ with a set of morphisms $\hat\Phi$: $\hat k:\hat X\to F\circ D(k)$ ($k\in\Phi$) compatible with the morphisms of the category $\mathcal C_\Phi$. It is known that the projective limit $\varprojlim F\circ D$ exists. It is easy to see that, if two $\mathcal M_n$-atlases $\Phi_1$ and $\Phi_2$ on $X$ are equivalent, there is a natural isomorphism $\hat X_1\to \hat X_2$. Put $F(\Phi)=\hat X$. It is evident that $F$ is an extension of the initial functor to the category $\mathcal M_{n,\mathrm{sp}}$ which is compatible with functor morphisms. If $F(M)$ is a sheaf on each $n$-dimensional manifold $M$, $F(X,\Phi)$ is called {\bfseries a sheaf on the $\mathcal M_n$-space} $(X,\Phi)$. Examples of sheaves are the tensor algebra $\Theta(M)$ of tensor fields, the de Rham complex $\Omega^*(M)$ on a manifold $M$, and so on. Put $F(M)=\Omega^*(M)$, where $\Omega^*(M)$ is the graded differential algebra of differential forms on $M$. Then $\Omega^*(X)$ is a graded differential algebra for any $\mathcal M_n$-space $X$.
By definition, a $p$-differential form on $X$ is a family of $p$-differential forms $\omega_k\in\Omega^p(D(k))$ for each $k\in\Phi$ such that these forms are compatible with the morphisms of $\mathcal C_{\Phi}$. It is easy to see that the notion of the integral of a differential $p$-form over a smooth singular $p$-chain extends naturally to the category $\mathcal M_{n,\operatorname{sp}}$, and the corresponding Stokes theorem is true. Note that in foliation theory the corresponding differential algebra $\Omega^*(X)$ is called the algebra of basic differential forms, and the cohomology $H^*(X)$ of $\Omega^*(X)$ is called the basic cohomology of the foliation. Thus, we see that the elements of the smooth techniques on smooth manifolds which have a functorial nature can be extended to the category $\mathcal M_{n,\operatorname{sp}}$. \begin{remark*} To extend the functors above it is possible to use, instead of a maximal $\mathcal M_n$-atlas on the set $X$, any full $\mathcal M_n$-atlas on the set $X$. The result will be the same. For example, one can use a maximal $\mathcal D_n$-atlas on $X$. \end{remark*} \subsection{Groupoids} Recall the definitions of smooth and \'etale groupoids. {\bfseries A groupoid} is a category $\mathcal G$ all of whose morphisms are isomorphisms. Thus, we have a set $\mathcal{G}_0$ of objects $x,y,\dots$ and a set $\mathcal{G}_1$ of morphisms $f,g,\dots$. Each morphism $f:x\to y$ has a source $s(f)=x$ and a target $t(f)=y$. Two morphisms $f$ and $g$ with $s(f)=t(g)$ can be composed as $fg:s(g)\to t(f)$. This composition is associative, has a unit $1_x:x\to x$ for each $x\in\mathcal{G}_0$, and has an inverse $i(f)=f^{-1}:t(f)\to s(f)$ for each $f\in\mathcal{G}_1$.
Put $$\mathcal{G}_2=\mathcal{G}_1\times_{\mathcal{G}_0}\mathcal{G}_1=\{(f,g)\in \mathcal{G}_1\times\mathcal{G}_1: s(f)=t(g)\}.$$ All the structure is contained in the diagram \begin{equation}\label{groupoid} \mathcal{G}_2\stackrel{m}{\longrightarrow}\mathcal{G}_1\stackrel{i}{\longrightarrow} \mathcal{G}_1\stackbin[t]{s}{\rightrightarrows}\mathcal{G}_0\stackrel{u}{\longrightarrow}\mathcal{G}_1. \end{equation} Here $s$ and $t$ are the source and the target, $m$ denotes composition, $i$ is the inverse, and $u(x)=1_x$. The groupoid is {\bfseries smooth} if $\mathcal{G}_0$ and $\mathcal{G}_1$ are smooth manifolds ($\mathcal{G}_1$ is non-Hausdorff, in general), all the structure maps in \eqref{groupoid} are smooth, and $s$ and $t$ are submersions, so that $\mathcal{G}_2$ is a smooth manifold as well. A smooth groupoid is {\bfseries \'etale} if all the structure maps in \eqref{groupoid} are \'etale maps (it suffices to require this for~$s$). Let $\mathcal C$ be a category. Define an equivalence relation $x \sim y$ on $\mathcal C_0$ generated by the relations of the following type: there is a morphism $f$ of the category $\mathcal C$ such that either $f:x\to y$ or $f:y\to x$. Thus, the quotient set $\mathcal C_0/{\sim}$ equals $\pi_0(\mathcal C)$, the set of connected components of $\mathcal C$. It is evident that, if $\mathcal C=\mathcal G$ is a groupoid, then $x\sim y$ iff there is a morphism $f:x\to y$ of the groupoid $\mathcal G$. Let $\mathcal C=\mathcal M_n, \mathcal D_n, \mathcal M$. For each $\mathcal C$-atlas $\Phi$ on $X$, we define a category $\mathcal G(\Phi)$ as follows. The object space $\mathcal G(\Phi)_0$ is the disjoint union of the domains $D(k)$ of all $\mathcal C$-charts $k\in\Phi$. For any $x_1,x_2\in\mathcal G(\Phi)_0$, a morphism $x_1\to x_2$ is the germ of a morphism $f:k_1\to k_2$ at $x_1$, where $k_1, k_2\in\Phi$, $x_1\in D(k_1)$, $x_2\in D(k_2)$, and $f(x_1)=x_2$.
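As a toy illustration of the diagram above, the pair groupoid on a finite set realizes all five structure maps $s$, $t$, $m$, $i$, $u$; the following Python sketch (purely illustrative, not from the text) checks the groupoid axioms on it.

```python
# The pair groupoid on a finite set: a morphism x -> y is the pair
# (y, x), and every pair of points is connected by exactly one morphism.
objects = [0, 1, 2]
morphisms = [(y, x) for y in objects for x in objects]

s = lambda f: f[1]          # source
t = lambda f: f[0]          # target
u = lambda x: (x, x)        # unit 1_x
i = lambda f: (f[1], f[0])  # inverse

def m(f, g):
    """Composition fg : s(g) -> t(f), defined when s(f) == t(g)."""
    assert s(f) == t(g)
    return (t(f), s(g))

# The groupoid axioms hold on this example:
for f in morphisms:
    assert m(f, u(s(f))) == f == m(u(t(f)), f)   # units
    assert m(f, i(f)) == u(t(f))                  # inverses
    for g in morphisms:
        for h in morphisms:
            if s(f) == t(g) and s(g) == t(h):
                assert m(m(f, g), h) == m(f, m(g, h))  # associativity
```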
It is clear that the natural map $\mathcal G(\Phi)_0\to X$ induces a bijective map $\mathcal G(\Phi)_0/{\sim}\to X$. Note that, by definition, $\mathcal G(\Phi)_0$ is an $n$-dimensional manifold for $\mathcal C=\mathcal M_n,\mathcal D_n$ and a disjoint union of manifolds for $\mathcal C=\mathcal M$. Conversely, consider the category $\mathcal G(\Phi)$ and the projection $\pi:\mathcal G(\Phi)_0\to X=\mathcal G(\Phi)_0/{\sim}$. Let $\Phi'$ be the set of restrictions of $\pi$ to all open subsets of $\mathcal G(\Phi)_0$. It is easy to check that $\Phi'$ is a $\mathcal C$-atlas on $X$, that $\Phi\subset\Phi'$, and that the $\mathcal C$-atlases $\Phi$ and $\Phi'$ are equivalent. A $\mathcal C$-atlas $\Phi$ on $X$ is called {\bfseries locally invertible} if the category $\mathcal G(\Phi)$ is a groupoid, i.e., if for any $k_1,k_2\in\Phi$, each $\mathcal C_\Phi$-morphism $f:k_1\to k_2$, and each $x\in D(k_1)$, there are $k_3\in\Phi$ and $\mathcal C_\Phi$-morphisms $i:k_3\to k_2$, $g:k_3\to k_1$ such that $f(x)\in i(D(k_3))$, $fg=i$, and $i$ maps $D(k_3)$ diffeomorphically onto an open subset of $D(k_2)$. In particular, an $\mathcal M_n$-atlas $\Phi$ on $X$ is locally invertible if, for each $k\in\Phi$ and each open subset $U$ of $D(k)$, we have $(U,k|_U)\in\Phi$. Let $\Phi$ be a locally invertible $\mathcal M_n$-atlas on $X$. The set $\mathcal G(\Phi)_0$ is a manifold by definition. Endow the set of morphisms $\mathcal G(\Phi)_1$ with the topology generated by the sets of germs of morphisms $f:k_1\to k_2$ of the category $\mathcal C_\Phi$ at points $x\in D(k_1)$. Note that the manifold $\mathcal G(\Phi)_1$ may be non-Hausdorff. Since the source map $s:\mathcal G(\Phi)_1\to\mathcal G(\Phi)_0$ is \'etale, $\mathcal G(\Phi)$ is an \'etale groupoid.
If $\mathcal G$ is an \'etale groupoid, each morphism $g:x\to y$ in $\mathcal G$ uniquely determines the germ of a diffeomorphism $\tilde g:(x,\mathcal G_0)\to(y,\mathcal G_0)$. Namely, $\tilde g$ is the germ at $x$ of the local diffeomorphism $t\circ\sigma$ of $\mathcal G_0$, where $\sigma$ is a section of $s:\mathcal G_1\to\mathcal G_0$ on a neighborhood $U$ of $x$ with $\sigma(x)=g$, and $U$ is so small that $t\circ \sigma$ is a diffeomorphism from $U$ onto its image. In particular, this construction gives a group homomorphism $\mathcal G_x\to\operatorname{Diff}_x(\mathcal G_0)$. An \'etale groupoid is called {\bfseries effective} (or an $S$-atlas in the sense of van Est \cite{VE}) whenever this homomorphism is injective for each $x\in\mathcal G_0$. By definition, the groupoid $\mathcal G(\Phi)$ is effective. \subsection{Examples of $\mathcal{M}_n$-spaces} Now we give examples of $\mathcal M_n$-spaces. 1. Let $N$ be an $(m+n)$-dimensional manifold and let $\mathcal F$ be a foliation of codimension $n$ on $N$. Denote by $L(\mathcal F)$ the set of leaves of $\mathcal F$ and by $\Phi(\mathcal F)$ the set of smooth maps $M\to N$ transverse to the leaves of $\mathcal F$, where $M\in (\mathcal M_n)_0$. \begin{theorem}(\cite{L5},\cite{L6}) The set $\Phi(\mathcal F)$ of $\mathcal M_n$-charts on $L(\mathcal F)$ is a locally invertible $\mathcal M_n$-atlas on $L(\mathcal F)$. \end{theorem} Let $T$ be a complete transversal of the foliation $\mathcal F$, i.e., an embedded $n$-dimensional submanifold $T\subset N$ transverse to the leaves of $\mathcal F$ and hitting each leaf at least once. Let $p:T\to L(\mathcal F)$ be the projection. It is easy to check that the set $\Phi(T)$ of the restrictions of $p$ to open submanifolds of $T$ is a locally invertible $\mathcal M_n$-atlas on $L(\mathcal F)$ and that $\Phi(T)$ is equivalent to the $\mathcal M_n$-atlas $\Phi(\mathcal F)$. By definition, the set of objects of the groupoid $\mathcal G(\Phi(T))$ equals $T$.
Evidently, the groupoid $\mathcal G(\Phi(T))$ coincides with the so-called \'etale model $\operatorname{Hol}_T(N,\mathcal F)$ of the holonomy groupoid $\operatorname{Hol}(N,\mathcal F)$ of the foliated manifold $(N,\mathcal F)$ (see, for example, \cite{Mo}). \begin{remark} To study the leaf space of a foliated manifold $(N,\mathcal F)$ one usually uses the holonomy groupoid of $\mathcal F$ with $N$ as the set of objects. From the point of view accepted in this paper this is not natural, since different foliated manifolds may have isomorphic leaf spaces. \end{remark} 2. Let $N$ be a manifold of dimension $n$ and let $\Gamma$ be a pseudogroup of local diffeomorphisms\footnote{Here a local diffeomorphism of a manifold $N$ is a smooth map $f:U\to N$ defined on an open subset $U\subset N$ that maps $U$ diffeomorphically onto $f(U)$. Compare this notion of a local diffeomorphism with the one from Section \ref{secdefCsp}.} of $N$, in particular, a group of diffeomorphisms of $N$. Let $N/\Gamma$ be the orbit space of $\Gamma$ and let $p:N\to N/\Gamma$ be the natural projection. Consider the set $\Phi(\Gamma)$ of $\mathcal M_n$-charts on $N/\Gamma$ of the type $(M',p|_{M'})$, where $M'$ is an open subset of $N$. \begin{theorem}\label{pseudogroup}(\cite{L5},\cite{L6}) The set $\Phi(\Gamma)$ of $\mathcal M_n$-charts on $N/\Gamma$ is a locally invertible $\mathcal M_n$-atlas on $N/\Gamma$. \end{theorem} 3. Let $X$ be a one-point set $\{pt\}$ and let the set of $\mathcal M_n$-charts be the set $\Phi(\{pt\})$ of all maps $M\to\{pt\}$, where $M\in (\mathcal M_n)_0$. It is evident that the pair $(\{pt\},\Phi(\{pt\}))$ is a final object of the category $\mathcal M_{n,\operatorname{sp}}$. 4. Consider the space $\mathbb R^n$ and the pseudogroup $\Gamma(\mathbb R^n)$ of all local diffeomorphisms of $\mathbb R^n$.
It is clear that $\mathbb R^n/\Gamma(\mathbb R^n)$ is a one-point set $\{pt\}$ and, by Theorem~\ref{pseudogroup}, the set $\Phi(\Gamma(\mathbb R^n))$ of $\mathcal M_n$-charts on $\{pt\}$ is a locally invertible $\mathcal M_n$-atlas on $\{pt\}$. Then the corresponding $\mathcal M_n$-space is a final object of the category of $\mathcal M_n$-spaces. By definition, the groupoid corresponding to $\Phi(\Gamma(\mathbb R^n))$ is the Haefliger classifying groupoid $\Gamma_n$ \cite{Hf1}. To get an equivalent classifying space one can take, instead of $\mathbb R^n$, an arbitrary connected $n$-dimensional manifold $M$ and its pseudogroup $\Gamma(M)$ of all local diffeomorphisms; this is obvious, since $M/\Gamma(M)$ is again a one-point set. \section{\'Etale structures} Consider the following generalizations of an $\mathcal M_n$-space. \begin{definition}\label{E-atlas} Let $\Phi$ be an $\mathcal M_n$-atlas on a set $X$ and let $\mathcal B_\Phi$ be a subcategory of the category $\mathcal C_\Phi$ with the same set of objects $\Phi$. An \'etale structure on $X$ is a category $\mathcal E_\Phi$ with $(\mathcal E_\Phi)_0=\Phi$, together with a full covariant functor $\Pi:\mathcal E_\Phi\to\mathcal B_\Phi$ which is the identity on the set of objects, satisfying the following condition: the set of maps $k:J\circ D(k)\to X$ ($k\in\Phi$) is an inductive limit $\varinjlim J\circ D\circ\Pi$ of the functor $J\circ D\circ\Pi: \mathcal E_\Phi\to Sets$. An \'etale structure $\mathcal E_\Phi$ on $X$ is called effective if $\mathcal E_\Phi=\mathcal B_\Phi$. An \'etale structure $\mathcal E_\Phi$ on $X$ is called locally invertible if for any $k_1,k_2\in\Phi$, each $\mathcal E_\Phi$-morphism $f:k_1\to k_2$, and each $x\in D(k_1)$, there are $k_3\in\Phi$ and $\mathcal E_\Phi$-morphisms $i:k_3\to k_2$, $g:k_3\to k_1$ such that $f(x)\in i(D(k_3))$, $fg=i$, and $i$ maps $D(k_3)$ diffeomorphically onto an open subset of $D(k_2)$.
\end{definition} It is clear that each \'etale structure $\mathcal E_\Phi$ on $X$ uniquely defines the corresponding structure of an $\mathcal M_n$-space on $X$ and, therefore, the corresponding structure of a diffeological space. For example, let $\Gamma$ be a pseudogroup of local diffeomorphisms of an $n$-dimensional manifold $M$, let $\Phi(\Gamma)$ consist of the restrictions of the projection $M\to M/\Gamma=X$ to the domains of the local diffeomorphisms from $\Gamma$, and let the morphisms of $\mathcal E_{\Phi(\Gamma)}$ be the local diffeomorphisms from $\Gamma$. By definition, $\mathcal E_{\Phi(\Gamma)}$ is an effective locally invertible \'etale structure on $X$. Let $\mathcal E_{\Phi}$ be an effective \'etale structure on $X$ with the set of objects $\Phi$. For each $M\in\mathcal M_0$, $k\in\Phi$, and smooth map $f:M\to D(k)$, the composition $k\circ f$ is an $\mathcal M$-chart on $X$. \begin{definition}\label{dover} A diffeological structure on the set $X$ over an effective locally invertible \'etale structure $\mathcal E_{\Phi}$ on $X$ is a category $\mathcal E_{\Phi,d}$ whose objects are the pairs $(k,f)$, where $k\in\Phi$ and $f:M\to D(k)$ is a smooth map for some $M\in\mathcal M_0$, and, for any objects $(k_1,f_1)$, $(k_2,f_2)$, a morphism $(k_1,f_1)\to (k_2,f_2)$ is defined whenever $f_1$ and $f_2$ have the same domain $M$; in this case it is a morphism $g:k_1\to k_2$ belonging to $(\mathcal E_{\Phi})_1$ such that $f_2=g\circ f_1$. \end{definition} We generalize Definition \ref{morphism} as follows. \begin{definition}\label{morphism1} Let $\mathcal E_{\Phi_1}$ and $\mathcal E_{\Phi_2}$ be two \'etale structures on the sets $X_1$ and $X_2$ respectively.
A morphism $f:\mathcal E_{\Phi_1}\to\mathcal E_{\Phi_2}$ of \'etale structures is a map $f:X_1\to X_2$ that satisfies the following conditions: \begin{enumerate} \item For each $k\in\Phi_1$, we have $f\circ k\in\Phi_2$; \item For each morphism $m:k_1\to k_2$ from $\mathcal E_{\Phi_1}$, the map $m:D(f\circ k_1)=D(k_1)\to D(k_2)=D(f\circ k_2)$ is a morphism $f\circ k_1\to f\circ k_2$ from $\mathcal E_{\Phi_2}$. \end{enumerate} \end{definition} Let $\mathcal E_\Phi$ be an \'etale structure on a set $X$. Define a category $\mathcal G(\mathcal E_\Phi)$ as follows. The set of objects $\mathcal G(\mathcal E_\Phi)_0$ equals the disjoint union of the domains of the $\mathcal M_n$-charts from $\Phi$. For $x,y\in\mathcal G(\mathcal E_\Phi)_0$ and $k_1,k_2\in\Phi$, a morphism $x\to y$ is a germ at $x$ of an $\mathcal E_\Phi$-morphism $g:k_1\to k_2$ such that $x\in D(k_1)$, $y\in D(k_2)$, and $D(g)(x)=y$. If an \'etale structure $\mathcal E_\Phi$ on $X$ is locally invertible, the category $\mathcal G(\mathcal E_\Phi)$ is a groupoid. By definition, the set of objects $\mathcal G(\mathcal E_\Phi)_0$ of $\mathcal G(\mathcal E_\Phi)$ is an $n$-dimensional manifold. Endow the set of morphisms $\mathcal G(\mathcal E_\Phi)_1$ of $\mathcal G(\mathcal E_\Phi)$ with the topology generated by the sets of germs of morphisms $f:k_1\to k_2$ of the category $\mathcal E_\Phi$ at points $x\in D(k_1)$. Since such sets are $n$-dimensional manifolds, $\mathcal G(\mathcal E_\Phi)_1$ is an $n$-dimensional manifold as well. By construction, for a locally invertible \'etale structure $\mathcal E_\Phi$ the category $\mathcal G(\mathcal E_\Phi)$ is an \'etale groupoid. \begin{remark}\label{reduced} Note that, in the construction above, the category $\mathcal G(\mathcal E_\Phi)$ can very often be reduced substantially. Assume that we have a surjective \'etale map $\mathcal G(\mathcal E_\Phi)_0\to M$, where $M$ is an $n$-dimensional manifold.
Then one can take for $\mathcal G(\mathcal E_\Phi)_0$ the manifold $M$ and, for $x,y\in M$, take for an $\mathcal E$-morphism $x\to y$ the germ of a local diffeomorphism $(M,x)\to (M,y)$ which can be lifted to a germ of some $\mathcal G(\mathcal E_\Phi)$-morphism $k_1\to k_2$. By definition, the reduced category has the same quotient space $X=\mathcal G(\mathcal E_\Phi)_0/{\sim}$. For example, let $\Gamma$ be a pseudogroup of local diffeomorphisms of an $n$-dimensional manifold $M$ and let $\mathcal E_{\Phi(\Gamma)}$ be the corresponding locally invertible \'etale structure on $X=M/\Gamma$. By the standard procedure, the set of objects $\mathcal G(\mathcal E_{\Phi(\Gamma)})_0$ is the disjoint union of the domains of the local diffeomorphisms from $\Gamma$. But there is the natural \'etale projection $\mathcal G(\mathcal E_{\Phi(\Gamma)})_0\to M$. Therefore, one can take for the object set of the reduced category $\mathcal G(\mathcal E_{\Phi(\Gamma)})$ the manifold $M$. \end{remark} Let $\mathcal G$ be a smooth groupoid and $X=\mathcal G_0/{\sim}$. Since $s$ is a submersion, for each morphism $g:x\to y$ from $\mathcal G_1$, there is an open neighborhood $U$ of $x\in\mathcal G_0$ and a smooth section $\sigma:U\to\mathcal G_1$ of the map $s$ such that $\sigma(x)=g$. Put $f_\sigma=t\circ\sigma:U\to\mathcal G_0$. Define a category $\mathcal E(\mathcal G)$ as follows. The objects of this category are the open subsets of $\mathcal G_0$. For $U,V\in\mathcal E(\mathcal G)_0$, a morphism from $U$ to $V$ in the category $\mathcal E(\mathcal G)$ is a section $\sigma:U\to\mathcal G_1$ of the source map $s$ with the properties that $f_\sigma$ is an \'etale map and $f_\sigma(U)\subset V$. The composition of morphisms $\sigma:U\to\mathcal G_1$ and $\tau:V\to\mathcal G_1$, where $f_\sigma(U)\subset V$, is defined by $(\tau\sigma)(x)=\tau(f_\sigma(x))\cdot\sigma(x)$ ($x\in U$), where the multiplication is taken in $\mathcal G$.
Note that the category $\mathcal E(\mathcal G)$ is similar to the category $\operatorname{Emb}(\mathcal G)$ defined in \cite{Mo1}. Let $p:\mathcal G_0\to X$ be the projection. For each open subset $U$ of $\mathcal G_0$, denote by $k_U$ the restriction of $p$ to $U$. By definition, $k_U$ is an $\mathcal M_n$-chart on $X$. For two open subsets $U,V$ of $\mathcal G_0$ and a section $\sigma$ of the source map $s$ such that $f_\sigma$ is \'etale and $f_\sigma(U)\subset V$, one can consider $f_\sigma$ as a morphism $k_U\to k_V$ of $\mathcal M_n$-charts on $X$. Denote by $\Phi=\Phi(\mathcal G)$ the set of $\mathcal M_n$-charts on $X$ of type $k_U$. Consider also the category $\mathcal B(\mathcal G)$ with $\Phi$ as the set of objects, whose morphisms are the \'etale maps $f_\sigma:U\to V$ as above. We have the natural covariant functor $\Pi:\mathcal E(\mathcal G)\to\mathcal B(\mathcal G)$ such that, for an open subset $U\subset\mathcal G_0$, we have $\Pi(U)=U$ and, for a section $\sigma$ as above, we have $\Pi(\sigma)=f_\sigma:k_U\to k_V$. By definition, the functor $\Pi$ is full. Consider the covariant functor $J\circ D\circ\Pi:\mathcal E(\mathcal G)\to Sets$. Let us prove that the set of maps $k_U:U\to X$ is an inductive limit $\varinjlim J\circ D\circ\Pi$ of the functor $J\circ D\circ\Pi$. It is obvious that the codomains of the $\mathcal M_n$-charts $k_U$ cover $X$. Let $U,V\in\mathcal E(\mathcal G)_0$, $x\in U$, $y\in V$, and $k_U(x)=k_V(y)$. Since $X=\mathcal G_0/{\sim}$, there is a $\mathcal G$-morphism $g:x\to y$. First assume that $x=y$ and $U\subset V$. Consider the section $\varepsilon:U\to\mathcal G_1$ defined by $\varepsilon(x')=1_{x'}$ for any $x'\in U$. Then $\varepsilon$ is a morphism $k_U\to k_V$ such that $f_\varepsilon(x)=y$ and we are done. In the general case there exists\footnote{The section $\sigma$ exists for the following reason.
Since both maps $ds$ and $dt$ are epimorphisms, there exists a vector subspace $L\subset T_g {\mathcal G}_1$ such that $ds: L\to T_x{\mathcal G}_0$ and $dt:L\to T_y {\mathcal G}_0$ are isomorphisms. It is possible to choose a submanifold $\Sigma \subset {\mathcal G}_1$ passing through the point $g$ such that $T_g\Sigma=L$ and $s|_\Sigma,t|_\Sigma :\Sigma \to {\mathcal G}_0$ are diffeomorphisms onto their images. Then $\sigma=(s|_\Sigma)^{-1}$ is the required section. This remark was communicated to A.~Galaev by Alexei Kotov.} a smooth section $\sigma:U\to\mathcal G_1$ of the source map $s$ such that $f_\sigma$ is \'etale and $\sigma(x)=g$ (by the argument above one can assume that $f_\sigma(U)\subset V$). Then, for $f_\sigma$ as a morphism $k_U\to k_V$, we have $f_\sigma(x)=y$ and we are done again. Thus, $\mathcal E(\mathcal G)$ is a locally invertible \'etale structure on $X$. Let $\mathcal G$ be a smooth groupoid and let $\mathcal E(\mathcal G)$ be the corresponding locally invertible \'etale structure on $X=\mathcal G_0/{\sim}$. Consider the groupoid $\mathcal G(\mathcal E(\mathcal G))$. By definition we have an obvious \'etale map $\mathcal G(\mathcal E(\mathcal G))_0\to\mathcal G_0$. Then, by Remark \ref{reduced}, one can consider the corresponding reduced groupoid $\mathcal G(\mathcal E(\mathcal G))$ with $\mathcal G_0$ as the set of objects. By construction, for any \'etale groupoid $\mathcal G$, we have $\mathcal G(\mathcal E(\mathcal G))=\mathcal G$. Thus, to study an \'etale groupoid $\mathcal G$ one can equivalently use the corresponding \'etale structure~$\mathcal E(\mathcal G)$. \section{Morita equivalence} Let $\mathcal G$ and $\mathcal H$ be two smooth groupoids. A covariant functor $F:\mathcal G\to\mathcal H$ is smooth if the corresponding maps $\mathcal G_0\to\mathcal H_0$ and $\mathcal G_1\to\mathcal H_1$ are smooth.
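On the discrete level, the functor conditions can be checked mechanically. The following Python sketch (a toy example, not from the text) maps the pair groupoid on a three-point set to the group $\mathbb Z/2$, viewed as a one-object groupoid, and verifies that composition is preserved.

```python
# A discrete check that a map between groupoids is a covariant functor:
# the pair groupoid on {0, 1, 2} maps to the group Z/2 (a one-object
# groupoid) by F(y, x) = a[y] - a[x] mod 2, for any labelling a of the
# objects.  The labelling below is arbitrary.
a = {0: 0, 1: 1, 2: 1}
F = lambda f: (a[f[0]] - a[f[1]]) % 2

pairs = [(y, x) for y in a for x in a]
for f in pairs:
    assert F((f[0], f[0])) == 0                    # F preserves units
    for g in pairs:
        if f[1] == g[0]:                           # s(f) == t(g): composable
            fg = (f[0], g[1])                      # composition in the pair groupoid
            assert F(fg) == (F(f) + F(g)) % 2      # F preserves composition
```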
It is clear that a morphism $\mathcal E_{\Phi_1}\to\mathcal E_{\Phi_2}$ of locally invertible \'etale structures on the sets $X_1$ and $X_2$ respectively induces a smooth covariant functor $\mathcal G(\mathcal E_{\Phi_1})\to\mathcal G(\mathcal E_{\Phi_2})$ of the corresponding \'etale groupoids. Recall some definitions. Let $\mathcal G$ be a smooth groupoid and let $M$ be a smooth manifold. {\bfseries A right action} of $\mathcal G$ on $M$ consists of two smooth maps $\pi: M\to\mathcal G_0$ (the moment map) and $m:M\times_{\mathcal G_0}\mathcal G_1=\{(x,g):\pi(x)=t(g)\}\to M$ (the action) such that, denoting $m(x,g)=xg$, we have $$ (xg)h=x(gh), \quad x1=x,\quad \pi(xg)=s(g). $$ We will call $M$ a right $\mathcal G$-space with the moment map $\pi$. A left $\mathcal G$-space is defined similarly. A (right) $\mathcal G$-{\bfseries bundle} over a manifold $B$ consists of a smooth right $\mathcal G$-space $E$ and a smooth map $p:E\to B$ which is $\mathcal G$-equivariant (i.e., $p(xg)=p(x)$). It is called principal if $p$ is a surjective submersion and the map $E\times_{\mathcal G_0}\mathcal G_1\to E\times_BE$, defined by $(e,g)\to (e,eg)$, is a diffeomorphism. Let $\mathcal G$ and $\mathcal H$ be two smooth groupoids. \begin{definition}\label{Skandalis} A homomorphism of groupoids (or a Hilsum-Skandalis map) $\mathcal G\to\mathcal H$ (\cite{Hf},\cite{Mo}, \cite{Mr}) consists of a manifold $P$, smooth maps (source and target) $s_P:P\to\mathcal G_0$, $t_P: P\to\mathcal H_0$, a left action of $\mathcal G$ on $P$ with the moment map $s_P$, and a right action of $\mathcal H$ on $P$ with the moment map $t_P$, such that \begin{enumerate} \item $s_P$ is $\mathcal H$-equivariant and $t_P$ is $\mathcal G$-equivariant; \item the actions of $\mathcal G$ and $\mathcal H$ commute: $(gp)h=g(ph)$; \item $s_P:P\to\mathcal G_0$, as an $\mathcal H$-bundle with the moment map $t_P$, is principal.
\end{enumerate} \end{definition} The composition of two homomorphisms $P:\mathcal G\to\mathcal H$ and $Q: \mathcal H\to \mathcal K$ is defined by dividing $P\times_{\mathcal H_0}Q$ by the action of $\mathcal H$: $(p,q)h=(ph,h^{-1}q)$, and taking the natural actions of $\mathcal G$ and $\mathcal K$. Thus, we have the category of smooth groupoids. Two smooth groupoids $\mathcal G$ and $\mathcal H$ are called {\bfseries Morita equivalent} if they are isomorphic in the category of smooth groupoids, and an isomorphism $\mathcal G\to\mathcal H$ is called {\bfseries a Morita equivalence}. Now we compare Definition \ref{morphism1} with the definition of a morphism of smooth groupoids. \begin{theorem}\label{homomorphism} Let $\mathcal E_{\Phi_1}$ and $\mathcal E_{\Phi_2}$ be locally invertible \'etale structures on the sets $X$ and $Y$ respectively and let $\mathcal G=\mathcal G(\mathcal E_{\Phi_1})$ and $\mathcal H=\mathcal G(\mathcal E_{\Phi_2})$ be the corresponding \'etale groupoids. Then each homomorphism $\mathcal G\to \mathcal H$ induces a morphism $f:X\to Y$ of the corresponding diffeological spaces defined by $\mathcal E_{\Phi_1}$ and $\mathcal E_{\Phi_2}$. Conversely, each morphism $f:X\to Y$ of the diffeological spaces defines a homomorphism $\mathcal G\to\mathcal H$. \end{theorem} \begin{proof} Let $P:\mathcal G\to\mathcal H$ be a homomorphism of smooth groupoids. By the definition of the actions of $\mathcal G$ and $\mathcal H$ on $P$, for each $p\in P$, we have $\mathcal Gs_P(p)=s_P(\mathcal Gp)$ and $t_P(p)\mathcal H=t_P(p\mathcal H)$. Moreover, by (3) of Definition \ref{Skandalis}, $s_P$ induces a surjective map of orbit spaces $P/\mathcal G\to \mathcal G_0/\mathcal G$. Suppose that, for $p_1,p_2\in P$, we have $s_P(\mathcal Gp_1)=s_P(\mathcal Gp_2)$. Then there is a $g\in \mathcal G_1$ such that $s_P(p_2)=s_P(gp_1)$, and by (3) of Definition \ref{Skandalis} there is a unique $h\in\mathcal H_1$ such that $gp_1=p_2h$.
Therefore, by (1) of Definition \ref{Skandalis}, we have $t_P(p_2\mathcal H)=t_P(gp_1\mathcal H)=t_P(p_1\mathcal H)$. This implies that the correspondence $s_P(\mathcal Gp)\to t_P(p\mathcal H)$ defines a map of orbit spaces $f_P:\mathcal G_0/\mathcal G\to\mathcal H_0/\mathcal H$. Put $X=\mathcal G_0/\mathcal G$ and $Y=\mathcal H_0/\mathcal H$ and consider the natural structures of diffeological spaces on $X$ and $Y$ defined by the groupoids $\mathcal G$ and $\mathcal H$. Since, by (3) of Definition \ref{Skandalis}, $s_P$ is a surjective submersion, for each $x\in\mathcal G_0$ there are local sections of $s_P:P\to\mathcal G_0$ defined in sufficiently small neighborhoods of $x$. Let $\sigma:U\to P$ be such a section. Then the composition $t_P\circ\sigma:U\to\mathcal H_0$ is a smooth map and its composition with $\mathcal H_0\to\mathcal H_0/\mathcal H$ is an $\mathcal M$-chart on $Y$ as a diffeological space. Let $\sigma_1,\sigma_2:U\to P$ be two such sections. By (3) of Definition \ref{Skandalis}, there is a smooth map $h:U\to\mathcal H_1$ such that, for each $y\in U$, we have $\sigma_2(y)=\sigma_1(y)h(y)$. Then the $\mathcal M$-charts on $Y$ induced by $\sigma_1$ and $\sigma_2$ as above coincide. Since the restrictions of the projection $\mathcal G_0\to\mathcal G_0/\mathcal G$ to the open subsets above form an $\mathcal M$-atlas defining the structure of a diffeological space on $X$, the construction above induces a morphism of diffeological spaces $X\to Y$. Conversely, let $\mathcal G$, $\mathcal H$ be two \'etale groupoids determined by the locally invertible $\mathcal M_n$-atlases $\Phi_1$ on $X=\mathcal G_0/\mathcal G$ and $\Phi_2$ on $Y=\mathcal H_0/\mathcal H$, and let $f:X\to Y$ be a morphism of the corresponding diffeological spaces. Put $\tilde P=\{(c,d,h):c\in \mathcal G_0,\ d\in \mathcal H_0,\ f\circ p_G(c)=p_H(d),\ h\in \mathcal H_1,\ t(h)=d\}$.
Consider the following equivalence relation on $\tilde P$: $(c,d,h)\sim (c,d',h'h)$, where $h'\in\mathcal H_1$ and $h':d\to d'$, and the corresponding quotient set $P=\tilde P/{\sim}$. Define a topology on $P$ as follows. For each $c\in\mathcal G_0$, consider the restriction $p_U$ of the projection $p_G:\mathcal G_0\to X$ to some neighborhood $U$ of $c$ as an $\mathcal M$-chart on $X$. Then, if $U$ is sufficiently small, the composition $f\circ p_U$ is a $\mathcal D$-chart on $Y$ of the type $p_H\circ\varphi_U$, where $\varphi_U$ is a smooth map $U\to\mathcal H_0$. Let $V$ be a neighborhood of $d=\varphi_U(c)\in\mathcal H_0$ such that there is a smooth section $\sigma:V\to\mathcal H_1$. We may assume that $\varphi_U(U)\subset V$. Consider the subset $P(U,\varphi,\sigma)$ of $P$ defined by the points of $\tilde P$ of the type $(c',\varphi(c'),\sigma\circ\varphi(c'))$, where $c'\in U$. Consider the topology on $P$ generated by the subsets $P(U,\varphi,\sigma)$ for all $c\in\mathcal G_0$, the maps $\varphi$, and the sections $\sigma$. Since each subset $P(U,\varphi,\sigma)$ is a smooth manifold, it is easy to check that there is a unique structure of a smooth manifold on $P$ such that each $P(U,\varphi,\sigma)$ is an open submanifold of $P$. Define the maps $\tilde s_P:\tilde P\to\mathcal G_0$ and $\tilde t_P:\tilde P\to\mathcal H_0$ as follows: $\tilde s_P(c,d,h)=c$ and $\tilde t_P(c,d,h)=s(h)$. It is easy to check that the maps $\tilde s_P$ and $\tilde t_P$ induce smooth maps $s_P:P\to\mathcal G_0$ and $t_P:P\to\mathcal H_0$. Consider the left action of $\mathcal G$ on $\tilde P$ and the right action of $\mathcal H$ on $\tilde P$: $g(c,d,h)=(t(g),d,h)$, where $g\in\mathcal G_1$ and $s(g)=c$, and $(c,d,h)h'=(c,d,hh')$, where $h'\in\mathcal H_1$ and $t(h')=s(h)$. It is easy to check that these actions induce smooth actions on $P$.
It is clear that the map $s_P$ is $\mathcal H$-equivariant, the map $t_P$ is $\mathcal G$-equivariant, the actions of $\mathcal G$ and $\mathcal H$ commute, and $s_P:P\to\mathcal G_0$, as an $\mathcal H$-bundle with the moment map $t_P$, is principal. Thus we have a homomorphism of groupoids $\mathcal G\to\mathcal H$. \end{proof} \begin{remark*} Let $\varphi:\mathcal G\to \mathcal H$ be a smooth covariant functor. By definition, $\varphi$ induces a map $f:\mathcal G_0/\mathcal G\to\mathcal H_0/\mathcal H$ which is a morphism of the corresponding diffeological spaces. It is easy to check that the smooth morphism of groupoids $\mathcal G\to\mathcal H$ defined by $P_\varphi=\mathcal G_0\times_{\mathcal H_0}\mathcal H_1=\{(x,h):\varphi(x)=t(h)\}$, $s_P(c,h)=c$, $t_P(c,h)=s(h)$, and the natural actions of $\mathcal G$ and $\mathcal H$ on $P_\varphi$ is a particular case of the construction of Theorem~\ref{homomorphism}. \end{remark*} The following corollaries are evident. \begin{corollary}\label{Morita} Let $\Phi_1$ and $\Phi_2$ be locally invertible $\mathcal M_n$-atlases on the sets $X$ and $Y$ respectively and let $\mathcal G=\mathcal G(\Phi_1)$ and $\mathcal H=\mathcal G(\Phi_2)$ be the corresponding \'etale groupoids. Then the groupoids $\mathcal G$ and $\mathcal H$ are Morita equivalent iff the diffeological spaces $X$ and $Y$ defined by the $\mathcal M_n$-atlases $\Phi_1$ and $\Phi_2$ are isomorphic as objects of the category $\mathcal M_{\operatorname{sp}}$. \end{corollary} \begin{corollary}\label{Morita1} Let $\Phi_1$ and $\Phi_2$ be equivalent locally invertible $\mathcal M_n$-atlases on the set $X$. Then the corresponding \'etale groupoids $\mathcal G(\Phi_1)$ and $\mathcal G(\Phi_2)$ are Morita equivalent.
\end{corollary} \section{Classifying topos} Let $\Phi$ be an $\mathcal M_n$-atlas or $\mathcal M$-atlas on $X$ such that, for each $(M,k)\in\Phi$ and each open subset $U\subset M$, the restriction of $(M,k)$ with respect to the inclusion $U\subset M$ belongs to $\Phi$. Let $S$ be a contravariant functor on the category $\mathcal C_{\Phi}$. For $(M,k)\in\Phi$, consider the standard category $\mathcal C(M)$ of $M$ as a topological space, i.e., the category whose objects are the open subsets $U$ of $M$ and whose morphisms are the inclusions $U_1\subset U_2$ of such open subsets. By assumption, $\mathcal C(M)$ is a subcategory of $\mathcal C_{\Phi}$. The functor $S$ is called {\bfseries a sheaf on $\Phi$} if, for each $(M,k)\in\Phi$, the restriction of $S$ to $\mathcal C(M)$ is a sheaf on $M$. It is easy to see that if $\Phi$ is a locally invertible $\mathcal M_n$-atlas on $X$, a sheaf on $\Phi$ defines a $\mathcal C(\Phi)$-sheaf in the sense of \cite{Mo} and, conversely, each $\mathcal C(\Phi)$-sheaf induces a sheaf on $\Phi$. Therefore, the classifying topos $\mathcal B(\mathcal C(\Phi))$ can be considered as the space of sheaves on $\Phi$. Let $\Phi$ be an $\mathcal M_n$-atlas on $X$ as above and let $(X,\Psi)$ be the corresponding diffeological space. Since, by definition, $\Psi=\bar\Phi$, using the inverse image functor one can extend uniquely each sheaf on $\Phi$ to a sheaf on $\Psi$. This remark implies the following theorem. \begin{theorem}\label{scheaf} Let $\Phi_1$ and $\Phi_2$ be locally invertible $\mathcal M_n$-atlases on the sets $X$ and $Y$ respectively, $\mathcal G=\mathcal G(\Phi_1)$ and $\mathcal H=\mathcal G(\Phi_2)$ the corresponding \'etale groupoids, and $(X,\Psi_1)$ and $(Y,\Psi_2)$ the corresponding diffeological spaces. Then each morphism $f:X\to Y$ of diffeological spaces induces a morphism $\mathcal B(\mathcal G)\to\mathcal B(\mathcal H)$ of toposes. If $f$ is an isomorphism, it induces an isomorphism of the corresponding toposes.
\end{theorem} \section{Nerve and classifying space for a smooth groupoid} For a groupoid $\mathcal G$, denote by $\mathcal G_n$ the space of composable strings of $n$ morphisms in $\mathcal G$: \begin{equation}\label{string} x_0\stackrel{g_1}{\longleftarrow}x_1\stackrel{g_2}{\longleftarrow}\dots \stackrel{g_n}{\longleftarrow}x_n. \end{equation} The spaces $\mathcal G_n$ ($n\geq 0$), with the face maps $d_i:\mathcal G_n\to\mathcal G_{n-1}$ defined in the usual way: $$ d_i(g_1,\dots,g_n)= \begin{cases} (g_2,\dots,g_n)&\text{if $i=0$}\\ (g_1,\dots,g_i g_{i+1},\dots,g_n)&\text{if $1\le i\le n-1$}\\ (g_1,\dots,g_{n-1})&\text{if $i=n$} \end{cases} $$ form a simplicial space, the {\bfseries nerve of} $\mathcal G$, denoted by $N(\mathcal G)$. Its geometric realization is {\bfseries the classifying space} of $\mathcal G$, denoted $B\mathcal G$. For a smooth groupoid $\mathcal G$, the notation above agrees with the earlier notation for $n=0,1,2$. Thus, the space $\mathcal G_n$ is a fibered product $\mathcal G_1\times_{\mathcal G_0}\dots\times_{\mathcal G_0}\mathcal G_1$ and, hence, has a structure of a smooth manifold. A Morita equivalence $\varphi:\mathcal G\stackrel{\sim}{\longrightarrow}\mathcal H$ induces a weak homotopy equivalence $B\mathcal G\stackrel{\sim}{\longrightarrow}B\mathcal H$ \cite{Hf1}. Corollary \ref{Morita} and \cite{Mo} imply the following \begin{theorem}\label{contraction} Let $\Phi_1(X)$ and $\Phi_2(X)$ be two full $\mathcal M_n$-atlases on $X$ such that $\Phi_1(X)\subset\Phi_2(X)$ and let $\mathcal G({\Phi_1(X)})$ and $\mathcal G({\Phi_2(X)})$ be the corresponding \'etale groupoids. Then the classifying spaces of $\mathcal G(\Phi_1(X))$ and $\mathcal G(\Phi_2(X))$ are homotopy equivalent. \end{theorem} Let $\mathcal G$ be an \'etale groupoid such that its set of objects $\mathcal G_0$ is an $n$-dimensional manifold $M$. For example, for the Haefliger groupoid $\Gamma_n$ we have $M=\mathbb R^n$.
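For a one-object groupoid (a group) every tuple of elements is composable, and the face maps $d_i$ above satisfy the simplicial identity $d_id_j=d_{j-1}d_i$ for $i<j$. A small Python sketch (illustrative only, with $\mathbb Z/5$ as the group) checks this:

```python
def d(i, g, mult):
    """Face map d_i on a composable string g = (g1, ..., gn),
    following the case formula for the nerve."""
    n = len(g)
    if i == 0:
        return g[1:]
    if i == n:
        return g[:-1]
    # replace the adjacent pair (g_i, g_{i+1}) by their composition
    return g[:i - 1] + (mult(g[i - 1], g[i]),) + g[i + 1:]

# In a one-object groupoid every string is composable; take Z/5.
mult = lambda x, y: (x + y) % 5
g = (1, 2, 3, 4)
for j in range(1, len(g) + 1):
    for i in range(j):
        # simplicial identity d_i d_j = d_{j-1} d_i for i < j
        assert d(i, d(j, g, mult), mult) == d(j - 1, d(i, g, mult), mult)
```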
Let $\mathcal F$ be a foliation of codimension $n$ on a manifold $N$ and let $T$ be a complete transversal of $\mathcal F$. Consider the full $\mathcal M_n$-atlas $\Phi$ on the set of leaves $N/\mathcal F$ determined by $T$ and the corresponding \'etale category $\mathcal G(\Phi)$. For this category we have $M=T$. In this case one can define the so-called embedding category $\operatorname{Emb}(\mathcal G)$. Objects of the category $\operatorname{Emb}(\mathcal G)$ are the elements of a fixed base $\mathcal U$ of the topology of $M=\mathcal G_0$ consisting of contractible open subsets. For two such open subsets $U$ and $V$, each section $\sigma:U\to \mathcal G_1$ of the source map $s$ such that $t\circ\sigma:U\to \mathcal G_0$ defines an embedding $\hat\sigma:U\to V$ is a morphism of the category $\operatorname{Emb}(\mathcal G)$. The composition in $\operatorname{Emb}(\mathcal G)$ is defined by $\hat\tau\circ\hat\sigma(x)=\tau(t\sigma(x))\cdot\sigma(x)$ (multiplication in $\mathcal G$). The nerve of the category $\operatorname{Emb}(\mathcal G)$ is a simplicial set, whose geometric realization is the classifying space of $\operatorname{Emb}(\mathcal G)$, denoted by $B\operatorname{Emb}(\mathcal G)$. There is the following \begin{theorem}[\cite{Mo1}] For any \'etale groupoid $\mathcal G$, the classifying spaces $B\mathcal{G}$ and $B\operatorname{Emb}(\mathcal G)$ are weakly homotopy equivalent. \end{theorem} In contrast to $B\mathcal G$, the classifying space $B\operatorname{Emb}(\mathcal G)$ is a CW-complex. \begin{corollary}\label{weak} The classifying spaces $B\mathcal G(\Phi)$ and $B\operatorname{Emb}(\mathcal G(\Phi))$ are weakly homotopy equivalent. \end{corollary} \section{The \v{C}ech-De Rham cohomology of $\mathcal M_n$-spaces} Let $X$ be an $\mathcal M_n$-space defined by a full $\mathcal M_n$-atlas $\Phi(X)$ and let $\mathcal G(\Phi(X))$ be the corresponding \'etale groupoid. The space of objects of $\mathcal G(\Phi(X))$ is an $n$-dimensional manifold $M$.
Let $\mathcal{U}$ be a fixed base of the topology on $M$. Consider the \v{C}ech complexes for $\mathcal G(\Phi(X))$ as follows: $$ \check C_{\mathcal U}(\Phi(X),\Omega^p): \prod\limits_{U_0}\Omega^p(U_0)\stackrel{\delta}{\longrightarrow} \prod\limits_{U_0\stackrel{g_1}{\longrightarrow}U_1}\Omega^p(U_0) \stackrel{\delta}{\longrightarrow}\prod\limits_{U_0\stackrel{g_1}{\longrightarrow}U_1 \stackrel{g_2}{\longrightarrow}U_2}\Omega^p(U_0)\stackrel{\delta}{\longrightarrow}\dots, $$ where the product is taken over the strings of composable arrows of $\mathcal U$, with the boundary \begin{multline} (\delta\omega)(g_1,\dots,g_{p+1})=g_1^*\omega(g_2,\dots,g_{p+1})+\\ \sum_{i=1}^p(-1)^i\omega(g_1,\dots,g_{i+1}g_i,\dots,g_{p+1})+(-1)^{p+1}\omega(g_1,\dots, g_p). \end{multline} {\bfseries The \v{C}ech-De Rham complex} $(\check C_{\mathcal U}(\Phi(X),\Omega^*),D)$ of an $\mathcal M_n$-space $X$ is the total complex of the double complex $(\check C_{\mathcal U}(\Phi(X),\Omega^*),\delta,d)$, where $d$ is the De Rham differential and $D=\delta\pm d$ with the standard sign convention. Denote by $\check H^*_{\mathcal U}(\Phi(X);\mathbb R)$ the cohomology of the total complex $(\check C_{\mathcal U}(\Phi(X),\Omega^*),D)$ and call it the \v{C}ech-De Rham cohomology of the $\mathcal M_n$-space $X$ (relative to $\Phi(X)$ and $\mathcal U$). Consider the first spectral sequence of the double complex $(\check C_{\mathcal U}(\Phi(X),\Omega^*),\delta,d)$. It is clear that, for this spectral sequence, we have $E_2^{pq}=E_\infty^{pq}=0$ for $q>0$ and $E_2^{p0}=E_\infty^{p0}=H^p(B\mathcal G_{\Phi};\mathbb R)$. Therefore, we have an isomorphism $\check H^*_{\mathcal U}(\Phi(X);\mathbb R)\cong H^*(B\mathcal G_{\Phi};\mathbb R)$.
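For a one-object groupoid with trivial coefficients, where the pullback $g_1^*$ is the identity, the boundary $\delta$ above reduces to a bar-type differential, and $\delta\circ\delta=0$ can be verified numerically; the finite group and random cochains below are our own toy choices, not part of the text:

```python
import random
from itertools import permutations, product

# One-object groupoid: the symmetric group S_3, with functional composition.
S3 = list(permutations(range(3)))
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def delta(omega, p):
    """The boundary on a p-cochain omega: G^p -> R with trivial coefficients,
    so that the pullback g_1^* is the identity."""
    def d_omega(*g):                   # g = (g_1, ..., g_{p+1})
        val = omega(*g[1:])
        for i in range(1, p + 1):      # replace (g_i, g_{i+1}) by g_{i+1} g_i
            val += (-1) ** i * omega(*(g[:i-1] + (compose(g[i], g[i-1]),) + g[i+1:]))
        val += (-1) ** (p + 1) * omega(*g[:p])
        return val
    return d_omega

random.seed(1)
for p in (1, 2):
    table = {g: random.random() for g in product(S3, repeat=p)}
    omega = lambda *g, t=table: t[g]
    dd = delta(delta(omega, p), p + 1)
    assert all(abs(dd(*g)) < 1e-9 for g in product(S3, repeat=p + 2))
```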
Thus, we have the following \begin{theorem} For an $\mathcal M_n$-space $X$ defined by a full $\mathcal M_n$-atlas $\Phi(X)$, the \v{C}ech-De Rham cohomology $\check H^*_{\mathcal U}(\Phi(X);\mathbb R)$ depends neither on the choice of the full $\mathcal M_n$-atlas $\Phi(X)$ nor on the choice of the base $\mathcal U$. \end{theorem} This cohomology is called the real cohomology of the $\mathcal M_n$-space $X$ and is denoted by $H^*(X;\mathbb R)$. The \v{C}ech-De Rham cohomology for foliations is defined in \cite{Cr}. \section{Categories $\mathcal P_n(G)$ and $\mathcal P_n(G)$-spaces} Let $G$ be a Lie group. Consider the category $\mathcal P_n(G)$ whose objects are smooth principal $G$-bundles with $n$-dimensional bases and whose morphisms are the morphisms of such principal $G$-bundles which project to \'etale maps of the bases. Applying the general procedure of \cite{L6} to this category, one obtains the category of $\mathcal P_n(G)$-spaces. By definition, a $\mathcal P_n(G)$-chart on a set $Y$ is a pair $(P,k)$, where $k:P\to Y$ and $P\in (\mathcal P_n(G))_0$. Morphisms of such charts are defined by means of morphisms of the category $\mathcal P_n(G)$. A $\mathcal P_n(G)$-atlas on a set $Y$ is a set $\Psi(Y)$ of $\mathcal P_n(G)$-charts on $Y$ such that the set of maps $k:J\circ I_{\Psi(Y)}(P,k)=J(P)\to Y$ ($(P,k)\in\Psi(Y)$) is an inductive limit $\varinjlim J\circ I_{\Psi(Y)}$ of the functor $J\circ I_{\Psi(Y)}$, where the functors $J$ and $I_{\Psi(Y)}$ are similar to those from section \ref{secdefCsp}. Accordingly, one defines a $\mathcal P_n(G)$-space as a set $Y$ with a maximal $\mathcal P_n(G)$-atlas $\Psi_m(Y)$ on $Y$. For a smooth principal $G$-bundle $P$ with base $B$, the assignment $P\to B$ defines a covariant functor from $\mathcal P_n(G)$ to $\mathcal M_n$.
By the definition of the category $\mathcal P_n(G)$, for any $\mathcal P_n(G)$-space $Y$ the $\mathcal P_n(G)$-atlas on $Y$ induces an $\mathcal M_n$-atlas on some set $B_Y$, so that the extension of the functor $P\to B$ gives a covariant functor $Y\to B_Y$ from the category of $\mathcal P_n(G)$-spaces to the category $\mathcal M_{n,\operatorname{sp}}$. The $\mathcal M_n$-set $B_Y$ is called the base of the $\mathcal P_n(G)$-space $Y$. A $\mathcal P_n(G)$-atlas $\Psi$ on a set $Y$ is called full if the corresponding $\mathcal M_n$-atlas on the base $B_Y$ is full. \begin{remark*} 1. For an $\mathcal M_n$-space $X$, consider the corresponding frame bundle $Fr(X)$ (see section \ref{secstruc}). By definition, $Fr(X)$ is a $\mathcal P_n(\operatorname{GL}(n,\mathbb R))$-space. 2. Consider a transversal principal $G$-bundle $P$ over a foliation $(N,\mathcal F)$ of codimension $n$. One can consider $P$ as a principal $G$-bundle over $N$ with a smooth action of the holonomy groupoid $\operatorname{Hol}(\mathcal F)$ of the foliation $\mathcal F$ on $P$ which commutes with the action of $G$ on $P$ (see \cite{KT}). It is evident that the action of $\operatorname{Hol}(\mathcal F)$ on $P$ induces an action on $N$. It is easy to construct a full $\mathcal P_n(G)$-atlas on the set $Y$ of orbits of the groupoid $\operatorname{Hol}(\mathcal F)$ on $N$. \end{remark*} Let $G$ be a Lie group with Lie algebra $\mathfrak g$ and let $p:P\to B$ be a smooth principal $G$-bundle. For $\xi\in\mathfrak g$, denote by $X_\xi$ the vector field on $P$ induced by the action of $G$ on $P$. Let $T_p$ be the tangent space at $p\in P$ and let $l_p:T_p\to \mathfrak g$ be a linear map such that, for every $\xi\in\mathfrak g$, we have $l_p(X_\xi(p))=\xi$. The space $\tilde P$ of all such linear maps $l_p$ ($p\in P$) is a smooth fiber bundle over $P$ with the projection $l_p\to p$.
Consider the right action of $G$ on $\tilde P$ defined by $\alpha_g(l_p)=\operatorname{Ad}_{g^{-1}}\circ l_p\circ (R_g)_*^{-1}$, where $g\to R_g$ is the action of $G$ on $P$. The following lemma is evident. \begin{lemma}\label{tilde P} The fiber bundle $\tilde P\to P$ possesses the following properties: \begin{enumerate} \item the fibers of the fiber bundles $\tilde P\to P$ and $\tilde P/G\to B$ are affine spaces and, hence, the spaces $\tilde P/G$ and $B$ are homotopy equivalent; \item the transformation $\alpha_g$ is an automorphism of the affine fiber bundle $\tilde P\to P$; \item the correspondence $g\to\alpha_g$ defines a free action of $G$ on $\tilde P$ and $\tilde P\to \tilde P/G$ is a smooth principal $G$-bundle; \item $\tilde P/G$ is a smooth fiber bundle over $B$ with affine fibers; \item each connection form $\omega$ on $P$ is a smooth $G$-invariant section of the fiber bundle $\tilde P\to P$ or, equivalently, a smooth section of the fiber bundle $\tilde P/G\to B$. \end{enumerate} \end{lemma} Define a differential 1-form $\varkappa$ on $\tilde P$ with values in $\mathfrak g$ as follows. For a tangent vector $w$ at $l_p\in\tilde P$, put $\varkappa(w)=l_p(\tilde p_*w)\in\mathfrak g$, where $\tilde p:\tilde P\to P$ is the projection. The following lemma follows directly from the definitions. \begin{lemma}\label{kappa} The form $\varkappa$ has the following properties: \begin{enumerate} \item $\varkappa$ is a connection form on the principal $G$-bundle $\tilde P$; \item $\varkappa$ is invariant under the natural action of the group of automorphisms of the principal $G$-bundle $P$ on $\tilde P$; \item let $\omega$ be a connection form on $P$ and let $s:P\to\tilde P$ be the corresponding section; then $\omega=s^*\varkappa$. \end{enumerate} \end{lemma} The connection $\varkappa$ is called {\bfseries the canonical connection}. We denote by $K$ the curvature form of $\varkappa$. Now we apply the Chern-Weil homomorphism to the canonical connection $\varkappa$. Assume that the Lie group $G$ is reductive.
It is known that the space of invariant polynomials $I(\mathfrak g)$ on the Lie algebra $\mathfrak g$ is a free algebra generated by a finite set of homogeneous polynomials. Let $F\in I(\mathfrak g)$ be a homogeneous polynomial of degree $p$. Denote by $F(K)$ the differential $2p$-form on $\tilde P/G$ obtained by substituting $K$ for the variables of $F$, with the usual alternation. The form $F(K)$ is a closed form on the space $\tilde P/G$ and its cohomology class depends only on the structure of the principal $G$-bundle $P$. This cohomology class is called {\bfseries the characteristic class} of $P$ corresponding to the polynomial $F$. Since, by (1) of Lemma \ref{tilde P}, the spaces $\tilde P/G$ and $B$ are homotopy equivalent, we obtain a homomorphism $I(\mathfrak g)\to H^*(B,\mathbb R)$, which is called the characteristic Chern-Weil homomorphism. Let $\omega$ be a connection form on the principal $G$-bundle $P\to B$ and let $s:B\to \tilde P/G$ be the corresponding section of the bundle $\tilde P/G\to B$. Then $s^*F(K)=F(R)$, where $R$ is the curvature form of $\omega$, and the map $F\to F(R)$ defines the standard characteristic Chern-Weil homomorphism associated with the connection $\omega$. It is clear that it does not depend on the choice of $\omega$. Consider the covariant functors $P\to \tilde P$, $P\to B$, $P\to \tilde P/G$, the functor morphisms $\tilde P\to\tilde P/G$ and $\tilde P/G\to B$, and their extensions to the category of $\mathcal P_n(G)$-spaces. Moreover, consider the contravariant functors $P\to \Omega^*(\tilde P)$, $P\to\Omega^*(\tilde P/G)$, and the functor morphism $\Omega^*(\tilde P/G)\to \Omega^*(B)$. We have their extensions to the category of $\mathcal P_n(G)$-spaces. Thus, we get the de Rham complexes $\Omega^*(\tilde Y)$, $\Omega^*(\tilde Y/G)$, and $\Omega^*(B_Y)$ for any $\mathcal P_n(G)$-space $Y$ and their cohomologies $H^*(\tilde Y)$, $H^*(\tilde Y/G)$, and $H^*(B_Y)$.
It is clear that the characteristic homomorphism $I(\mathfrak g)\to \Omega^*(\tilde P/G)$ extends to characteristic homomorphisms $I(\mathfrak g)\to \Omega^*(\tilde Y/G)$ and $I(\mathfrak g)\to H^*(\tilde Y/G)$ for any $\mathcal P_n(G)$-space $Y$. \begin{remark*} The construction of the characteristic homomorphism $I(\mathfrak g)\to \Omega^*(\tilde P/G)$ is based on statement (2) of Lemma \ref{kappa}. For a $\mathcal P_n(G)$-space $Y$, consider the morphism $\tilde Y/G\to B_Y$ and the corresponding homomorphism $H^*(B_Y)\to H^*(\tilde Y/G)$. In contrast to the isomorphism $H^*(B)\to H^*(\tilde P/G)$ (see (1) of Lemma \ref{tilde P}), the homomorphism $H^*(B_Y)\to H^*(\tilde Y/G)$ is not an isomorphism in general. Therefore, the image of the characteristic homomorphism $I(\mathfrak g)\to H^*(\tilde Y/G)$ cannot, in general, be projected to $H^*(B_Y)$. \end{remark*} Now we show how to extend the characteristic homomorphism to $\mathcal P_n(G)$-spaces. Let $\Psi$ be a full $\mathcal P_n(G)$-atlas on a $\mathcal P_n(G)$-space $Y$ and let $\Phi(B_Y)$ be the corresponding $\mathcal M_n$-atlas on the base $B_Y$. Take an $n$-dimensional manifold $M$ and a base $\mathcal U$ of the topology of $M$ consisting of contractible open subsets of $M$. Assume that the $\mathcal M_n$-atlas $\Phi(B_Y)$ is formed by charts of the type $(U,k)$, where $U\in\mathcal U$. For each $U\in\mathcal U$, choose a $\mathcal P_n(G)$-chart $(P_U,\tilde k)\in\Psi$ which induces the $\mathcal M_n$-chart $(U,k)$ on $B_Y$. Since $U$ is contractible, for any $U,V\in\mathcal U$ and each embedding $\sigma:U\to V$ there is a unique morphism of principal $G$-bundles $P_U\to P_V$ which projects to $\sigma$. Consider the \v{C}ech-De Rham complex $(\check C_{\mathcal U}(\Phi(B_Y),\Omega^*),D)$ for $\Phi(B_Y)$.
By the arguments above, each string of composable morphisms of the $\mathcal M_n$-atlas $\Phi(B_Y)$ appearing in the complex $$ \check C_{\mathcal U}(\Phi(B_Y),\Omega^p): \prod\limits_{U_0}\Omega^p(U_0)\stackrel{\delta}{\longrightarrow} \prod\limits_{U_0\stackrel{\sigma_1}{\longrightarrow}U_1}\Omega^p(U_0) \stackrel{\delta}{\longrightarrow}\prod\limits_{U_0\stackrel{\sigma_1}{\longrightarrow}U_1 \stackrel{\sigma_2}{\longrightarrow}U_2}\Omega^p(U_0)\stackrel{\delta}{\longrightarrow}\dots $$ can be covered by a string of composable morphisms of the $\mathcal P_n(G)$-atlas on $Y$. Applying the extension of the functor $\tilde P\to\tilde P/G$ to the latter string, we get the string $$ \prod\limits_{U_0}\Omega^p(\tilde P_{U_0}/G)\stackrel{\delta}{\longrightarrow} \prod\limits_{U_0\stackrel{\sigma_1}{\longrightarrow}U_1}\Omega^p(\tilde P_{U_0}/G) \stackrel{\delta}{\longrightarrow}\prod\limits_{U_0\stackrel{\sigma_1}{\longrightarrow}U_1 \stackrel{\sigma_2}{\longrightarrow}U_2}\Omega^p(\tilde P_{U_0}/G)\stackrel{\delta}{\longrightarrow}\dots $$ and the corresponding \v{C}ech-De Rham complex. \section{The first Chern class for the Reeb foliation} \subsection{The cohomologies $H^*(W_n)$ and $H^*(W_n,\text{GL}(n,\mathbb{R}))$} Let $W_n$ be the algebra of formal vector fields in $n$ variables, i.e. the topological vector space of $\infty$-jets at $0$ of smooth vector fields on $\mathbb{R}^n$ with the bracket induced by the Lie bracket of vector fields on $\mathbb{R}^n$. Consider $\mathbb{R}$ as a trivial $W_n$-module.
The complex $C^*(W_n)=\{C^q(W_n),d^q\}$ of standard continuous cochains of $W_n$ with values in $\mathbb{R}$ is defined as follows: $C^q(W_n)$ is the space of continuous skew-symmetric $q$-forms on $W_n$ with values in $\mathbb{R}$ and the differential $d^q:C^q(W_n)\to C^{q+1}(W_n)$ is given by the formula $$ (d^q c)(\xi_1,\dots,\xi_{q+1})= \sum_{i<j}(-1)^{i+j}c([\xi_i,\xi_j],\xi_1,\dots, \widehat{\xi_i},\dots,\widehat{\xi_j},\dots,\xi_{q+1}), $$ where $c\in C^q(W_n)$, $\xi_1,\dots,\xi_{q+1}\in W_n$, and, as usual, $\hat\xi$ means that the term $\xi$ is omitted. We denote the cohomology of this complex by $H^*(W_n)=\{H^p(W_n)\}$. The natural action of $\text{GL}(n,\mathbb{R})$ on $\mathbb{R}^n$ induces an action of $\text{GL}(n,\mathbb{R})$ on $C^*(W_n)$ by automorphisms of this complex. Then we have the subcomplex of relative cochains $C^*(W_n,\text{GL}(n,\mathbb{R}))$ of $W_n$ with respect to $\text{GL}(n,\mathbb{R})$, consisting of the $\text{GL}(n,\mathbb{R})$-invariant cochains from $C^*(W_n)$. We denote the cohomology of this complex by $H^*(W_n,\text{GL}(n,\mathbb{R}))=\{H^p(W_n,\text{GL}(n,\mathbb{R}))\}$. Recall some facts about the cohomology $H^*(W_n,\text{GL}(n,\mathbb{R}))$ (\cite{BR}, \cite{F}, \cite{God}). By definition, $C^*(W_n)$ is a graded differential algebra and the differential $d$ is an antiderivation of degree 1. For $\xi^i\in\mathbb{R}[[\mathbb{R}^n]]$ and $\xi=\sum_{i=1}^n\xi^i\frac{\partial}{\partial x^i}\in W_n$, put $$ c^i_{j_1\dots j_r}(\xi)=\frac{\partial^r\xi^i}{\partial x^{j_1}\dots\partial x^{j_r}}(0), $$ where $x^i$ $(i=1,\dots,n)$ are the standard coordinates in $\mathbb{R}^n$. By definition, we have $c^i_{j_1\dots j_r}\in C^1(W_n)$. Moreover, the $c^i_{j_1\dots j_r}$ for $r=0,1,\dots$ and $i,j_1,\dots,j_r=1,\dots,n$ are generators of the $DG$-algebra $C^*(W_n)$.
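The same Chevalley-Eilenberg formula makes sense for any Lie algebra, and $d\circ d=0$ can be checked numerically; the sketch below (our own illustration, not part of the text) uses the finite-dimensional Lie algebra $\mathfrak{so}(3)$, with the bracket given by the cross product, in place of $W_n$:

```python
import random
from itertools import combinations

def bracket(x, y):
    """The so(3) bracket: cross product on R^3."""
    return (x[1]*y[2] - x[2]*y[1], x[2]*y[0] - x[0]*y[2], x[0]*y[1] - x[1]*y[0])

def inner(u, v):
    return sum(a*b for a, b in zip(u, v))

def d(c, q):
    """Chevalley-Eilenberg differential of an alternating q-cochain c."""
    def dc(*xi):
        val = 0.0
        for i, j in combinations(range(len(xi)), 2):   # i < j, as in the formula
            rest = [xi[k] for k in range(len(xi)) if k not in (i, j)]
            val += (-1) ** (i + j) * c(bracket(xi[i], xi[j]), *rest)
        return val
    return dc

random.seed(2)
vec = lambda: tuple(random.uniform(-1, 1) for _ in range(3))

a, b = vec(), vec()
c1 = lambda x: inner(a, x)                                           # 1-cochain
c2 = lambda x, y: inner(a, x)*inner(b, y) - inner(a, y)*inner(b, x)  # 2-cochain

for c, q in ((c1, 1), (c2, 2)):
    dd = d(d(c, q), q + 1)
    assert abs(dd(*[vec() for _ in range(q + 2)])) < 1e-9
```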
Since $d=\{d^q\}$ is an antiderivation of degree 1 of $C^*(W_n)$, it is uniquely determined by the following conditions: \begin{equation}\label{dc} dc^i_{j_1\dots j_r}= \sum_{0\le k\le r}\sum_{s_1<\dots< s_k}\sum_{l=1}^nc^i_{lj_1\dots\widehat{j_{s_1}}\dots \widehat{j_{s_k}}\dots j_r}\wedge c^l_{j_{s_1}\dots j_{s_k}}. \end{equation} Put $$ \gamma=(c^i_j),\quad \Psi^i_j=\sum_{k=1}^nc^i_{jk}\wedge c^k,\quad\text{and}\quad \Psi=(\Psi^i_j). $$ It is known that $$ \Psi_p=\operatorname{tr}(\underbrace{\Psi\wedge\dots\wedge\Psi}_{\text{$p$ times}})\quad (p=1,\dots,n) $$ are cocycles of $C^*(W_n,\text{GL}(n,\mathbb{R}))$ and the cohomology classes of these cocycles generate $H^*(W_n,\text{GL}(n,\mathbb{R}))$. The cohomology class of $\Psi_p$ is called the $p$-th formal Chern class. \subsection{The space of frames of infinite order and the Gelfand-Kazhdan form}\label{S(M)} Let $M$ be a connected orientable $n$-dimensional smooth manifold. Denote by $S(M)$ the space of frames of infinite order of $M$, i.e. of $\infty$-jets at $0$ of germs at $0$ of smooth maps from $\mathbb{R}^n$ into $M$ which are regular at $0\in\mathbb{R}^n$. It is known that $S(M)$ is a manifold with model space $\mathbb{R}^\infty$ (\cite{BR}). Define the canonical Gelfand-Kazhdan 1-form $\omega$ with values in $W_n$ on $S(M)$ (\cite{G-K} and \cite{BR}). Let $\tau$ be a tangent vector at $s\in S(M)$ and let $s(u)$ be a curve on $S(M)$ such that $\tau=\frac{ds}{du}(0)$. One can represent $s(u)$ by a smooth family $k_u$ of germs at $0$ of maps $\mathbb{R}^n\to M$ regular at $0\in\mathbb{R}^n$, i.e. $s(u)=j^\infty_0k_u$. Then put $$ \omega(\tau)=-j_0^\infty\frac{d}{du}(k_0^{-1}\circ k_u)\Big|_{u=0}. $$ Let $c\in C^q(W_n)$. For each $s\in S(M)$ and $X_1,\dots,X_q\in T_s$, put $$ \omega_c(X_1,\dots,X_q)=c(\omega(X_1),\dots,\omega(X_q)). $$ It is known that $c\mapsto\omega_c$ defines a homomorphism of complexes $\alpha:C^*(W_n)\to\Omega^*(S(M))$.
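For $n=1$, writing $c_r=c^1_{1\cdots 1}$ (with $r$ lower indices), formula \eqref{dc} reads $dc_r=\sum_{k=0}^r\binom{r}{k}\,c_{r-k+1}\wedge c_k$. Since $dc(\xi,\eta)=-c([\xi,\eta])$ for a $1$-cochain, this can be verified on polynomial vector fields with \texttt{sympy}; the sample fields below are our own choice, not taken from the text:

```python
from sympy import symbols, diff, binomial, simplify

x = symbols('x')
xi  = 1 + 2*x + 3*x**2 - x**3 + x**4        # sample polynomial vector fields
eta = 2 - x + x**2 + 5*x**3 - 2*x**4        # xi(x) d/dx and eta(x) d/dx

def c(r, g):
    """The generator c_r evaluated on the field g(x) d/dx."""
    return diff(g, x, r).subs(x, 0)

lie = xi*diff(eta, x) - eta*diff(xi, x)     # [xi, eta] = (xi eta' - eta xi') d/dx

for r in range(5):
    lhs = -c(r, lie)                        # (dc_r)(xi, eta), Chevalley-Eilenberg
    rhs = sum(binomial(r, k)*(c(r - k + 1, xi)*c(k, eta) - c(r - k + 1, eta)*c(k, xi))
              for k in range(r + 1))        # right-hand side of the formula (dc)
    assert simplify(lhs - rhs) == 0
```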
Moreover, we have $\alpha(C^*(W_n,\text{GL}(n,\mathbb{R})))=\Omega^*(S(M)/\text{GL}(n,\mathbb{R}))$. It is easy to check that $\beta=(\beta^i_j)=-(\alpha(c^i_j))$ is a connection form on the principal $\text{GL}(n,\mathbb{R})$-bundle $S(M)\to S(M)/\text{GL}(n,\mathbb{R})$, that $R=d\beta+\beta\wedge\beta$ is the curvature form of this connection, and that the image of the formal Chern class $\Psi_p$ under $\alpha$ equals $\alpha(\Psi_p)=\operatorname{tr}(\underbrace{R\wedge\dots\wedge R}_{\text{$p$ times}})$. {\bf Case $n=1$.} Now apply the constructions above to the case $n=1$, where $M_1$ is a one-dimensional manifold. Let $x$ be a coordinate on $M_1$ near $z_0\in M_1$ and let $k=k(t)$ be a smooth map $\mathbb{R}\to M_1$ regular at $0\in\mathbb{R}$. By definition, $s=j_0^\infty k\in S(M_1)$. Thus, one can consider $x_0=k(0)$, $x_p=\frac{d^p k}{dt^p}(0)$ $(p=1,\dots)$ as coordinates on $S(M_1)$, where $x_1\ne 0$. Let $s(u)$ be a curve in $S(M_1)$. It can be obtained as follows: $s(u)=j^\infty_0k(u)(t)$, where, for each $u$, $k(u)(t)$ is a smooth map $\mathbb{R}\to M_1$ regular at $t=0$. Then $\tau=\frac{d}{du}j_0^\infty k(u)(t)|_{u=0}$ is a tangent vector to the curve $s(u)$ at $s(0)$. By definition, we have $\omega(\tau)=-j_0^\infty \frac{d}{du}( k(0)^{-1}\circ k(u))|_{u=0}$. Let $\omega=(\omega_0,\omega_1,\omega_2,\dots)$ and let $x_p$ $(p=0,\dots)$ be the coordinates on $S(M_1)$ as above. By construction, we have \begin{equation} \begin{split} \omega_0&=-\frac{dx_0}{x_1},\quad \omega_1=\frac{x_2dx_0}{x_1^2}-\frac{dx_1}{x_1},\\ \omega_2&=\left(\frac{x_3}{2x_1^2}-\frac{x_2^2}{x_1^3}\right)dx_0+\frac{x_2}{x_1^2}dx_1- \frac{dx_2}{2x_1}. \end{split} \end{equation} Consider the action of $\text{GL}(1,\mathbb{R})=\mathbb{R}^*$ on $S(M_1)$. By definition, for $\lambda\in\mathbb{R}^*$ and $s=(x_p)$ we have $\lambda s=(\lambda^px_p)$. Then one can take $y_0=x_0$, $y_p=(x_1^p)^{-1}x_p$ ($p=2,\dots$) as coordinates on $S(M_1)/\text{GL}(1,\mathbb{R})$.
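The displayed expressions for $\omega_0,\omega_1,\omega_2$ can be checked directly from the definition $\omega(\tau)=-j_0^\infty\frac{d}{du}(k(0)^{-1}\circ k(u))|_{u=0}$ by truncated series reversion; the \texttt{sympy} computation below is our own consistency check, using the convention (an assumption on our part) that the $p$-th component of a jet is its $p$-th Taylor coefficient, i.e. the $p$-th derivative divided by $p!$:

```python
from sympy import symbols, diff, expand, simplify, solve, factorial, Poly, Rational

x0, x1, x2, x3 = symbols('x0 x1 x2 x3')     # jet coordinates x_p
v0, v1, v2, v3 = symbols('v0 v1 v2 v3')     # components of the tangent vector tau
u, t, wt = symbols('u t wt')                # wt stands for w - x0
a1, a2, a3 = symbols('a1 a2 a3')
half, sixth = Rational(1, 2), Rational(1, 6)

# k(0) truncated at third order -- enough for omega_0, omega_1, omega_2
k0 = lambda s: x0 + x1*s + half*x2*s**2 + sixth*x3*s**3

# series reversion: find inv with k0(inv(wt)) = x0 + wt + O(wt^4)
inv = a1*wt + a2*wt**2 + a3*wt**3
coeffs = Poly(expand(k0(inv) - (x0 + wt)), wt).all_coeffs()[::-1]
s1 = solve(coeffs[1], a1)[0]
s2 = solve(coeffs[2].subs(a1, s1), a2)[0]
s3 = solve(coeffs[3].subs({a1: s1, a2: s2}), a3)[0]
inv = inv.subs({a1: s1, a2: s2, a3: s3})

# the curve of jets k(u) with coordinates x_p + u v_p
ku = (x0 + u*v0) + (x1 + u*v1)*t + half*(x2 + u*v2)*t**2 + sixth*(x3 + u*v3)*t**3
h = inv.subs(wt, ku - x0)                   # k(0)^{-1} o k(u), truncated

def omega(p):
    """omega_p(tau) = -(1/p!) d/du d^p/dt^p (k(0)^{-1} o k(u)) at u = t = 0."""
    e = diff(h, u)
    if p:
        e = diff(e, t, p)
    return simplify(-e.subs({u: 0, t: 0})/factorial(p))

assert simplify(omega(0) + v0/x1) == 0
assert simplify(omega(1) - (x2*v0/x1**2 - v1/x1)) == 0
assert simplify(omega(2) - ((x3/(2*x1**2) - x2**2/x1**3)*v0 + x2*v1/x1**2 - v2/(2*x1))) == 0
```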
By definition, the first Chern class $c_1$ in these coordinates is defined by the 2-form $c_{1,y}=dy_2\wedge dy_0$ on $S(M_1)/\text{GL}(1,\mathbb{R})$. It is clear that this form is exact if $M_1$ is a smooth manifold. But we will show that, in general, this form is not exact if $M_1$ is a ${\mathcal D}_1$-space. \subsection{Reeb's foliation} We describe Reeb's foliation on the sphere $\mathbb S^3$. Consider the sphere $\mathbb S^3$ in $\mathbb{R}^4=\mathbb{R}^2\times\mathbb{R}^2$ given by the equation $|x|^2+|y|^2=2$, where $|x|$ is the standard norm in $\mathbb{R}^2$. Take the two subsets $\mathbb S^3_i$ $(i=1,2)$ of $\mathbb S^3$ defined by the inequalities $|x|^2\le |y|^2$ and $|y|^2\le |x|^2$, respectively. The projection $(x,y)\to x$ induces the structure of a fiber bundle on $\mathbb S^3_1$ with base $D_1=\{|x|^2\le 1\}$ and, for each $x\in D_1$, the fiber is the circle $\mathbb S^1_x$ given by the equation $|y|^2=2-|x|^2$. Therefore, $\mathbb S_1^3$ is diffeomorphic to $D_1\times \mathbb S^1$. Similarly, $\mathbb S^3_2$ is diffeomorphic to $D_2\times \mathbb S^1$, where $D_2=\{|y|^2\le 1\}$ and the fiber over $y\in D_2$ is given by the equation $|x|^2=2-|y|^2$. By definition, we have $\mathbb S^3=\mathbb S^3_1\cup \mathbb S^3_2$. Consider a smooth function $f(t)$ on the interval $|t|<1$ satisfying the following conditions: \begin{align}\label{f} &f(0)=0,\,f(t)\ge 0,\,f(-t)=f(t),\\ \label{f1}\lim_{t\to\pm 1}&\frac{d^pf}{dt^p}(t)=\infty,\,\lim_{t\to\pm 1}\frac{d^p}{dt^p}\frac1{f'(t)}=0,\,\mbox{for $p=0,1,\dots$} \end{align} Consider the foliation of codimension $1$ on $\mathbb S_1^3$ with the leaf $L=\mathbb S^1\times \mathbb S^1$ and the leaves $L_\alpha$ for each $\alpha\in\mathbb{R}$, consisting of the points of the form $(x,|y|e^{2\pi i(\alpha+f(|x|))})$, where $x$ is an interior point of $D_1$ and we regard $\mathbb{R}^4$ as $\mathbb{R}^2\times\mathbb{C}$.
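Functions satisfying \eqref{f} and \eqref{f1} exist; a standard candidate (our own sample, not taken from the text) is $f(t)=e^{1/(1-t^2)}-e$, whose properties can be spot-checked numerically:

```python
from sympy import symbols, exp, diff, E, Rational

t = symbols('t')
f = exp(1/(1 - t**2)) - E              # candidate: f(0) = e - e = 0

# conditions (f): vanishing at 0, symmetry, non-negativity (spot check)
assert f.subs(t, 0) == 0
assert (f - f.subs(t, -t)).simplify() == 0
assert f.subs(t, Rational(1, 2)).evalf() > 0

# conditions (f1), sampled at t = 9/10 and t = 999/1000: derivatives of f
# grow, derivatives of 1/f' decay, as t -> 1^-
g = 1/diff(f, t)
val = lambda e, pt: abs(e.subs(t, pt).evalf(50))
near, nearer = Rational(9, 10), Rational(999, 1000)
for p in range(3):
    assert val(diff(f, t, p), nearer) > val(diff(f, t, p), near)
    assert val(diff(g, t, p), nearer) < val(diff(g, t, p), near)
```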
It is clear that the leaf $L$ is compact and that each leaf $L_\alpha$ is diffeomorphic to $\mathbb{R}^2$. Similarly, we define a foliation of codimension $1$ on $\mathbb S^3_2$ using, in general, another function $f$. Combining these two foliations we get a foliation on $\mathbb S^3$, called the Reeb foliation and denoted by $\mathfrak F$. Consider the curves $\gamma_1$: $\alpha\to (0,0,\sqrt 2\,e^{2\pi i\alpha})$ ($\alpha\in\mathbb{R}$) and $\gamma_2$: $t\to\left(\frac{t}{\sqrt 2},\frac{t}{\sqrt 2},\sqrt{2-t^2}\right)$ $(|t|<\sqrt 2)$ on $\mathbb S^3$. By construction, the curves $\gamma_1$ and $\gamma_2$ are transverse to Reeb's foliation on $\mathbb S^3$ and, therefore, can be considered as $\mathcal D_1$-charts on the space of leaves $\mathbb S^3/\mathfrak F$. Since $\gamma_1(\alpha)\in L_\alpha$ and $\gamma_2(t)\in L_\alpha$ for $f(|t|)+\alpha=0$, the map $\alpha=-f(t)$ ($0<t<1$) is a morphism $\varphi$ of ${\mathcal D}_1$-charts $\gamma_2\to\gamma_1$. Denote by $\beta_i$ and $y_i$ $(i\neq 1)$ the extensions of the coordinates $\alpha$ and $t$ to the ${\mathcal D}_1$-space $S(\mathbb S^3/\mathfrak F)/\text{GL}(1,\mathbb{R})$. By definition, the $2$-form $c_1$ defining the first Chern class is given in these coordinates by the forms $c_{1,\beta}=d\beta_2\wedge d\beta_0$ and $c_{1,y}=dy_2\wedge dy_0$, respectively. Evidently, we have $\varphi^*c_{1,\beta}=c_{1,y}$. Denote by $S^2(\mathbb S^3/\mathfrak F)$ the space of frames of the second order of $\mathbb S^3/\mathfrak F$. Clearly, $c_1$ may be considered as a 2-form on $S^2(\mathbb S^3/\mathfrak F)/\text{GL}(1,\mathbb R)$. \begin{theorem}\label{first} The first Chern class $c_1$ is non-trivial in the complex\footnote{In the original text it was stated that $c_1$ is non-trivial in the complex $\Omega^*(S(\mathbb S^3/\mathfrak F)/\text{GL}(1,\mathbb R))$. Unfortunately, the original proof contained a gap; on the other hand, it was easy to adapt it to a proof of the current statement of the theorem.
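The claim $\varphi^*c_{1,\beta}=c_{1,y}$ can be verified symbolically: with $\beta_0=-f(y_0)$ and $\beta_2=-y_2/f'(y_0)-f''(y_0)/f'(y_0)^2$ (the explicit formulas appearing in the proof of the theorem), the Jacobian determinant of $(y_2,y_0)\mapsto(\beta_2,\beta_0)$ equals $1$. A short \texttt{sympy} sketch (our own check, not part of the text):

```python
from sympy import symbols, Function, diff, simplify

y0, y2 = symbols('y0 y2')
f = Function('f')
fp = diff(f(y0), y0)

b0 = -f(y0)
b2 = -y2/fp - diff(f(y0), y0, 2)/fp**2

# phi^*(d beta_2 ^ d beta_0) = (det J) dy_2 ^ dy_0, and det J should equal 1
detJ = diff(b2, y2)*diff(b0, y0) - diff(b2, y0)*diff(b0, y2)
assert simplify(detJ) == 1
```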
Note that if $M_1$ is a manifold, then the cohomologies of the complexes $\Omega^*(S(M_1)/\text{GL}(1,\mathbb R))$ and $\Omega^*(S^2(M_1)/\text{GL}(1,\mathbb R))$ are isomorphic. This is generally not true if $M_1$ is a $\mathcal D_1$-space.} $\Omega^*(S^2(\mathbb S^3/\mathfrak F)/\text{GL}(1,\mathbb R))$. \end{theorem} \begin{proof} First we write the extension of the morphism $\varphi$ to $S(\mathbb S^3/\mathfrak F)$. Let $f(x)$ and $g(u)$ be two functions and let $h=g\circ f$. Recall the classical Fa\`a di Bruno formula for the $n$th derivative $h^{(n)}$: \begin{equation}\label{Faa} h^{(n)}=n!\sum_{k=1}^n\frac{g^{(k)}\circ f}{k!}\sum_{i_1+\dots+i_k=n}\frac{f^{(i_1)}}{i_1!}\dots\frac{f^{(i_k)}}{i_k!}. \end{equation} Applying \eqref{Faa} to the composition $\alpha(u)=-f(t(u))$, i.e. with $-f$ as the outer and $t(u)$ as the inner function, we get $$ \alpha_0=-f(t_0),\quad \alpha_n=-n!\sum_{k=1}^n\frac{f^{(k)}}{k!}\sum_{i_1+\dots+ i_k=n}\frac{t_{i_1}}{i_1!}\dots\frac{t_{i_k}}{i_k!},\quad n\ge 1. $$ Then the extension of the morphism $\varphi$ to $S(\mathbb S^3/\mathfrak F)/\text{GL}(1,\mathbb{R})$ is defined by the formulas \begin{equation}\label{1} \begin{split} \beta_0&=-f(y_0),\\ \beta_n&=(-1)^{n-1}\left(n!\sum_{k=1}^{n-1}\frac{1}{k!}\frac{f^{(k)}(y_0)}{(f')^n(y_0)}\sum_{i_1+\dots+i_k=n} \frac{y_{i_1}}{i_1!}\dots\frac{y_{i_k}}{i_k!}+\frac{f^{(n)}(y_0)}{(f')^n(y_0)}\right), \end{split} \end{equation} where $n\ge 2$, $y_1=1$, and $y_i$ and $\beta_i$ are the coordinates on $S(\mathbb S^3/\mathfrak F)/\text{GL}(1,\mathbb{R})$ corresponding to $t_i$ and $\alpha_i$.
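The Fa\`a di Bruno formula \eqref{Faa}, with the inner sum running over ordered tuples of positive integers $i_1+\dots+i_k=n$, can be checked against direct differentiation; the sample functions below are our own choice, not taken from the text:

```python
from sympy import symbols, diff, factorial, sin, exp, simplify, expand

x = symbols('x')
f = x**3 + 2*x + 1                     # inner function (sample)
g = sin(x) + exp(x)                    # outer function (sample)
h = g.subs(x, f)                       # h = g o f

def compositions(n, k):
    """All ordered k-tuples of positive integers summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def faa_di_bruno(n):
    """h^(n) according to the displayed formula."""
    total = 0
    for k in range(1, n + 1):
        inner = 0
        for tup in compositions(n, k):
            term = 1
            for i in tup:
                term *= diff(f, x, i)/factorial(i)
            inner += term
        total += diff(g, x, k).subs(x, f)/factorial(k)*inner
    return factorial(n)*total

for n in range(1, 5):
    assert simplify(expand(diff(h, x, n) - faa_di_bruno(n))) == 0
```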
In particular, the extension of the morphism $\varphi$ to $S^2(\mathbb S^3/\mathfrak F)/\text{GL}(1,\mathbb{R})$ is defined by the formulas $$\beta_0=-f(y_0),\quad \beta_2=-\frac{y_2}{f'(y_0)}-\frac{f''(y_0)}{(f'(y_0))^2}.$$ By induction with respect to $n$, from \eqref{Faa} applied to $h(t)=\frac{1}{f'(t)}$ and from \eqref{f1}, we get, for $n>1$, \begin{equation}\label{n} \lim_{t\to\pm 1}\frac{f^{(n)}(t)}{(f'(t))^n}=0\quad\text{and}\quad \lim_{t\to\pm 1}\frac{d}{dt}\frac{f^{(n)}(t)}{(f'(t))^n}=0. \end{equation} In particular, \eqref{n} implies $\lim_{y_0\to 1}\beta_n=0$ for $n>1$. Assume that the class $c_1$ is trivial. Then there is a form $\gamma\in\Omega^1(S^2(\mathbb S^3/\mathfrak F)/\text{GL}(1,\mathbb R))$ such that $c_1=d\gamma$. This means that, if $\gamma_\beta$ and $\gamma_y$ are the expressions of the form $\gamma$ in the coordinates $\beta_0,\beta_2$ and $y_0,y_2$, respectively, we have $d\gamma_\beta=c_{1,\beta}$, $d\gamma_{y}=c_{1,y}$, and $\varphi^*\gamma_\beta=\gamma_y$. Let $$\gamma_\beta=\gamma_0d\beta_0+\gamma_2d\beta_2,$$ where the $\gamma_i$ are smooth functions of $\beta_0$, $\beta_2$. Since $d(\beta_2\,d\beta_0)=d\beta_2\wedge d\beta_0=c_{1,\beta}$, we can write $$\gamma_\beta=\beta_2d\beta_0+\lambda,$$ where $$\lambda=\lambda_0d\beta_0+\lambda_2d\beta_2$$ is a closed $1$-form. Moreover, we have $$ \varphi^*\gamma_\beta=A_0dy_0+A_2dy_2, $$ where $A_0$ and $A_2$ are smooth functions of $y_0$, $y_2$. By definition, the $1$-form $\varphi^*\gamma_\beta$ has a smooth extension to a neighborhood of each point $(1,y_2)$. On the other hand, we have \begin{equation}\label{2} \varphi^*\gamma_\beta=\left( y_2+\frac{f''}{f'}-(\lambda_0\circ\varphi)\cdot f'\right)dy_0+(\lambda_2\circ\varphi)\cdot d(\beta_2\circ\varphi), \end{equation} where $f=f(y_0)$. From \eqref{2} we get \begin{equation}\label{3} A_0= y_2+\frac{f''}{f'}-(\lambda_0\circ\varphi)\cdot f'-(\lambda_2\circ\varphi)\cdot\left( \left(\frac{f''}{(f')^2}\right)'+y_2\left(\frac{1}{f'}\right)' \right). \end{equation} Fix a $\beta_0$.
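The limits \eqref{n} can be sampled numerically for the candidate $f(t)=e^{1/(1-t^2)}-e$ (our own choice of a function satisfying \eqref{f} and \eqref{f1}, not taken from the text):

```python
from sympy import symbols, exp, diff, E, Rational

t = symbols('t')
f = exp(1/(1 - t**2)) - E              # sample f satisfying (f) and (f1)
fp = diff(f, t)

val = lambda e, pt: abs(e.subs(t, pt).evalf(50))
for n in (2, 3):
    r = diff(f, t, n)/fp**n            # the ratio appearing in the limits
    for expr in (r, diff(r, t)):       # both the ratio and its t-derivative
        assert val(expr, Rational(999, 1000)) < val(expr, Rational(9, 10))
        assert val(expr, Rational(999, 1000)) < 1e-10
```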
There exists a sequence $\{y_{0,n}\}$ such that $\lim_{n\to\infty}y_{0,n}= 1$ and $-f(y_{0,n})\equiv\beta_0\mod 1$. It is clear that $$\lim_{n\to\infty}\frac1{f'(y_{0,n})}A_0(y_{0,n},y_2)=0.$$ By this, \eqref{3}, \eqref{f1}, and \eqref{n}, we get $\lambda_0(\beta_0,0)=0$. Therefore, we have \begin{equation}\label{4} \lambda_0(\beta_0,\beta_2)=\mu(\beta_0,\beta_2)\beta_2, \end{equation} where $\mu(\beta_0,\beta_2)$ is a smooth function of $\beta_0,\beta_2$ and $\mu(\beta_0,0)=\frac{\partial\lambda_0}{\partial\beta_2}(\beta_0,0)$. Consider $\lim_{n\to\infty}\frac{\partial A_{0}}{\partial y_2}( y_{0,n},y_2)$. Since $\lim_{y_0\to 1}f'\frac{\partial\beta_2}{\partial y_2}=-1$, we get $$ \frac{\partial A_0}{\partial y_2}( 1,y_2)=1+\frac{\partial\lambda_0}{\partial \beta_2}(\beta_0,0). $$ Since the form $\lambda$ is closed, we have \begin{equation}\label{5} \frac{\partial\lambda_2}{\partial\beta_0}(\beta_0,0)=\frac{\partial\lambda_0}{\partial\beta_2}(\beta_0,0)=c=\text{const}. \end{equation} By definition, the form $\lambda$ is periodic with respect to $\beta_0$ with period $1$. This implies that $\lambda_2$ and $\frac{\partial\lambda_2}{\partial\beta_0}$ are periodic with respect to $\beta_0$ with the same period as well. It is easy to see that, if the derivative of a smooth periodic function is constant, this constant equals $0$. Then \eqref{5} implies $\frac{\partial\lambda_0}{\partial\beta_2}(\beta_0,0)=c=0$. Thus, $\mu(\beta_0,\beta_2)=\nu(\beta_0,\beta_2)\beta_2$, where $\nu(\beta_0,\beta_2)$ is a smooth function of $\beta_0$ and $\beta_2$, and, by \eqref{4}, we have $\lambda_0(\beta_0,\beta_2)=\nu(\beta_0,\beta_2)\beta_2^2$. Putting $y_2=0$ in \eqref{3}, we get \begin{equation*} A_0(y_0,0)= \frac{f''}{f'}-\nu\left(-f,-\frac{f''}{(f')^2}\right)\frac{(f'')^2}{(f')^3}-\lambda_2\left(-f,-\frac{f''}{(f')^2}\right)\left(\frac{f''}{(f')^2}\right)', \end{equation*} where $f=f(y_0)$.
This equation can be rewritten in the following form: $$ \frac{f''}{f'}=\frac{A_0(y_0,0)+\lambda_2\left(-f,-\frac{f''}{(f')^2}\right)\left(\frac{f''}{(f')^2}\right)'}{1-\nu\left(-f,-\frac{f''}{(f')^2}\right)\frac{f''}{(f')^2}}. $$ Consider a sequence $\{y_{0,n}\}$ as above and substitute it into the last equality. The right-hand side of the equality tends to $A_0(1,0)$ as $n$ tends to $+\infty$. The properties \eqref{f1} of the function $f$ imply that $\frac{f''}{f'}$ tends to $+\infty$ as $n$ tends to $+\infty$. This contradiction proves the theorem. \end{proof} Now we study the geometric meaning of the first Chern class of Reeb's foliation\footnote{On April 19, 2013, M.V.\,Losik informed A.\,Galaev in an email that the first Chern class determines the Reeb foliation up to orientation. Unfortunately, the proof of this statement is missing. The following sentence suggests that the idea of the proof was to show that the first Chern class determines the holonomy group of the compact leaf.}. We recall that Reeb's foliation is determined up to isomorphism by the holonomy group of the compact leaf $\mathbb S^1\times \mathbb S^1$ (see \cite{Ser}) and by some considerations on orientation studied in detail in \cite{Miz}. \begin{thebibliography}{10} \bibitem{BR} I.N.~Bernstein and B.I.~Rozenfeld, \emph{Homogeneous spaces of infinite-dimensional Lie algebras and characteristic classes of foliations} (Russian), Uspekhi Mat. Nauk \textbf{28} (1973), no. 5, 103-138. \bibitem{Cr} M.~Crainic, I.~Moerdijk, \emph{\v{C}ech-De Rham theory for leaf spaces of foliations}, Math. Ann. \textbf{328} (2004), 59-85. \bibitem{F} D.B.~Fuks, \emph{Cohomology of infinite-dimensional Lie algebras} (Russian), M., Nauka, 1984. English translation: D.B.~Fuks, \emph{Cohomology of infinite-dimensional Lie algebras}, Contemporary Soviet Mathematics, Consultants Bureau, New York, 1986.
\bibitem{G-K} I.M.~Gelfand and D.A.~Kazhdan, \emph{Some topics of differential geometry and the calculus of Lie algebras of vector fields} (Russian), Dokl. Akad. Nauk SSSR \textbf{200} (1971), no. 2, 269-272. \bibitem{God} C.~Godbillon, \emph{Cohomologies d'alg\`ebres de Lie de champs de vecteurs formels}, Lect. Notes in Math. \textbf{383} (1974), 69-87. \bibitem{Hf} A.~Haefliger, \emph{Homotopy and integrability}, in: Manifolds, Amsterdam, 1970, Lect. Notes in Math. \textbf{192} (1971), 133-163. \bibitem{Hf1} A.~Haefliger, \emph{Groupo\"ides d'holonomie et espaces classifiants}, Ast\'erisque \textbf{116} (1984), 70-97. \bibitem{KT} F.~Kamber, P.~Tondeur, \emph{Foliated bundles and characteristic classes}, Lect. Notes in Math. \textbf{493} (1975). \bibitem{L5} M.V.~Losik, \emph{On some generalization of a manifold and its characteristic classes} (Russian), Funktsional. Anal. i Prilozhen. \textbf{24} (1990), no. 1, 29-37; English translation in Functional Anal. Appl. \textbf{24} (1990), 26-32. \bibitem{L6} M.V.~Losik, \emph{Categorical differential geometry}, Cahiers de topol. et geom. diff. cat. \textbf{35} (1994), no. 4, 274-290. \bibitem{Miz} T.~Mizutani, \emph{Foliated cobordisms of $S^3$ and examples of foliated $4$-manifolds}, Topology \textbf{13} (1974), 353-362. \bibitem{Mo} I.~Moerdijk, \emph{Classifying topos and foliations}, Ann. Inst. Fourier, Grenoble \textbf{41} (1991), 189-209. \bibitem{Mo1} I.~Moerdijk, \emph{On the weak homotopy type of \'etale groupoids}, in: ``Integrable Systems and Foliations'', Birkh\"auser (1997), 147-156. \bibitem{MoModels} I.~Moerdijk, \emph{Models for the leaf space of a foliation}, Eur. Cong. Math. 2000, vol. I, 481-489 (Barcelona), Progr. Math. \textbf{201}, Birkh\"auser, Basel, 2001. \bibitem{Mr} J.~Mr\v{c}un, \emph{Stability and invariants of Hilsum-Skandalis maps}, Ph.D. thesis, Utrecht Univ. (1996). \bibitem{PW} J.~Pradines, J.~Wouafo-Kamga, \emph{La cat\'egorie des QF-vari\'et\'es}, C. R. Acad. Sci. Paris \textbf{288} (1979), 717-719.
\bibitem{Ser} F.~Sergeraert, \emph{Feuilletages et diff\'eomorphismes infiniment tangents \`a l'identit\'e}, Invent. Math. \textbf{17} (1978), no. 4, 367--382. \bibitem{So} J.-M.~Souriau, \emph{Groupes diff\'erentiels et physique math\'ematique}, Coll. Travaux en Cours, Hermann, Paris, 1984, 75--79. \bibitem{VE} W.T.~van Est, \emph{Rapport sur les S-atlas}, Ast\'erisque \textbf{116} (1984), 235--292. \end{thebibliography} \end{document}
\begin{document} \title{Local unitary equivalence of multipartite pure states} \author{B. Kraus} \affiliation{Institute for Theoretical Physics, University of Innsbruck, Austria} \begin{abstract} Necessary and sufficient conditions for the equivalence of arbitrary $n$--qubit pure quantum states under Local Unitary (LU) operations are derived. First, an easily computable standard form for multipartite states is introduced. Two generic states are shown to be LU--equivalent iff their standard forms coincide. The LU--equivalence problem for non--generic states is solved by presenting a systematic method to determine the LU operators (if they exist) which interconvert the two states. \end{abstract} \maketitle Multipartite states occur in many applications of quantum information, like one--way quantum computing, quantum error correction, and quantum secret sharing \cite{Gothesis97,RaBr01}. Furthermore, the theory of multipartite states also plays an important role in other fields of physics which deal with many-body systems \cite{AmFa08}. The existence of those practical and abstract applications is due to the subtle properties of multipartite entangled states. Thus, one of the main goals in quantum information theory is to gain a better understanding of the non--local properties of quantum states. Whereas the bipartite case is well understood, the multipartite case is much more complex. Even though a great theoretical effort has been undertaken, in which several entanglement measures for multipartite states have been introduced \cite{CoKu00}, different classes of entangled states have been identified \cite{DuViCi00}, and a normal form of multipartite states has been presented \cite{Ves}, we are far from completely understanding the non--local properties of multipartite states \cite{HoHo07}. One approach to gain insight into the entanglement properties of quantum states is to consider their interconvertibility.
That is, given two states $\ket{\Psi}$, $\ket{\Phi}$, the question is whether or not $\ket{\Psi}$ can be transformed into $\ket{\Phi}$ by local operations \cite{HoHo07}. One particularly interesting case, which is also investigated in this paper, is the LU--equivalence of multipartite states. We say that an $n$--partite state $\ket{\Psi}$ is LU--equivalent to $\ket{\Phi}$ ($\ket{\Psi}\simeq_{LU} \ket{\Phi}$) if there exist local unitary operators, $U_1,\ldots, U_n$, such that $\ket{\Psi}=U_1\otimes \cdots \otimes U_n \ket{\Phi}$. Note that two states which are LU--equivalent are equally useful for any kind of application and they possess precisely the same amount of entanglement. This is why understanding the interconvertibility of quantum states by LU operations is part of the solution to the more general problem of characterizing the different types of entangled quantum states. In order to solve this long--standing problem, the so--called local polynomial invariants have been introduced \cite{GrRo98}. However, even though it is known that it is sufficient to consider only a finite set of them, this complete finite set is known only for very few simple cases. Here, we derive necessary and sufficient conditions for the existence of LU operations which transform two states into each other. For generic states, i.e. states where none of the single qubit reduced states is completely mixed, the conditions can be easily computed. For arbitrary $n$--qubit states a systematic method to determine the unitaries (in case they exist) which interconvert the states is presented. The remainder of the paper is organized as follows. First, we introduce a standard form of multipartite states, which we use in order to derive easily computable necessary and sufficient conditions for the LU--equivalence of generic multipartite states. As in the bipartite case, it is shown that two generic states are LU--equivalent iff their standard forms coincide.
For non--generic states it is shown that whenever one of the single qubit reduced states is not completely mixed, the problem of LU--equivalence of $n$--qubit states can be reduced to the problem of LU--equivalence of $(n-1)$--qubit states. Then, a systematic method to determine the local unitaries (if they exist) which interconvert two arbitrary states is presented. It is shown that the states are LU--equivalent iff there exists a solution to a finite set of equations. The number of variables involved in those equations depends on the entanglement properties of the states. The case with the largest number of variables occurs for the so--called maximally entangled states of $n$ qubits, where any bipartition of $\lceil n/2 \rceil$ qubits is maximally entangled with the rest. It is known, however, that only for certain values of $n$ such states exist \cite{Bra03}. The power of this method is illustrated by considering several examples. Throughout this paper the following notation is used. By $X,Y,Z$ we denote the Pauli operators. The subscript of an operator will always denote the system it is acting on, or the system it is describing. The reduced states of systems $i_1,\ldots, i_k$ of $\ket{\Psi}$ ($\ket{\Phi}$) will always be denoted by $\rho_{i_1\ldots i_k}$ ($\sigma_{i_1\ldots i_k}$) respectively, i.e. $\rho_{i_1\ldots i_k}=\mathrm{tr}_{\neg i_1\ldots \neg i_k}(\proj{\Psi})$. We denote by ${\bf i}$ the classical bit--string $(i_1,\ldots, i_n)$ with $i_k\in\{0,1\}$ $\forall k\in \{1,\ldots, n\}$, and $\ket{{\bf i}}\equiv\ket{i_1,\ldots, i_n}$ denotes the computational basis. Normalization factors as well as the tensor product symbol will be omitted whenever this does not cause any confusion. Let us start by introducing a unique standard form of multipartite states (see also \cite{KrKr08}). Let $\ket{\Psi}$ be an $n$--qubit state.
As a first step we apply local unitaries $U_i^1$ such that all the single qubit reduced states of the state $\ket{\Psi_t}=U_1^1\otimes \cdots \otimes U_n^1 \ket{\Psi}$ are diagonal in the computational basis, i.e. $\mathrm{tr}_{\neg i}(\proj{\Psi_t})=D_i=\mbox{diag}(\lambda_i^1,\lambda_i^2)$. We call any such decomposition a trace decomposition of the state $\ket{\Psi}$. A sorted trace decomposition is then defined as a trace decomposition with $\lambda_i^1\geq \lambda_i^2$. Note that transforming a state into its sorted trace decomposition, which we will denote by $\ket{\Psi_{st}}$ in the following, can be easily done by computing the spectral decomposition of all the single qubit reduced states. The sorted trace decomposition of a generic state $\ket{\Psi}$ with $\rho_i\neq \mbox{$1 \hspace{-1.0mm} {\bf l}$}$ $\forall i$ is unique up to local phase gates. That is, $U_1\ldots U_n \ket{\Psi_{st}}$ is a sorted trace decomposition of $\ket{\Psi}$ iff (up to a global phase, $\alpha_0$) $U_i= U_i(\alpha_i)\equiv \text{diag}(1,e^{i\alpha_i})$. In order to make the sorted trace decomposition of generic states unique we impose the following condition on the phases $\alpha_i$, $i\in\{0,\ldots, n\}$. We write $\ket{\Psi_{st}}=\sum_{i_1,\ldots, i_n=0}^1 \lambda_{i_1,\ldots,i_n}\ket{i_1,\ldots,i_n}$, define the set $S=\{{\bf i}: \lambda_{{\bf i}}\neq 0\}$, and let $\bar{S}$ denote the set of the linearly independent vectors in $S$. The global phase $\alpha_0$ is chosen to make $\lambda_{{\bf i_0}}$ real and positive, where ${\bf i_0}={\bf 0}$ in case $\lambda_{{\bf 0}}\neq 0$; otherwise ${\bf i_0}$ denotes the first (in lexicographic order) linearly dependent vector in $S$.
After that, the $n$ phases are chosen to make the coefficients $e^{i\alpha_0}\lambda_{{\bf i}}$ for ${\bf i}\in \bar{S}$ real and positive \footnote{If there are less than $n$ linearly independent vectors in $S$, say $k$, then $k$ phases can be defined in this way; the other phases leave the state invariant and can therefore be chosen arbitrarily.}. Since all the phase gates which do not leave the state invariant are fixed in this way, we have that $U_1\ldots U_n\ket{\Psi_s}$, where $\ket{\Psi_s}$ denotes here and in the following the standard form of $\ket{\Psi}$, has standard form iff $U_1\ldots U_n\ket{\Psi_s}=\ket{\Psi_s}$. That is, the standard form is unique. If $\rho_i=\frac{1}{2}\mbox{$1 \hspace{-1.0mm} {\bf l}$}$ for some system $i$, the standard form can be similarly defined \cite{KrKr08}; however, it is then no longer unique. By definition, any state is LU--equivalent to its standard form \footnote{Note that this standard form coincides for the simplest case of two qubits with the Schmidt decomposition \cite{NiCh00} and can be generalized to $d$--level systems.}. We now employ the standard form to derive a criterion for the LU--equivalence of generic multipartite states. First of all, note that $\ket{\Psi}\simeq_{LU}\ket{\Phi}$ iff $\ket{\Psi_s}\simeq_{LU}\ket{\Phi_s}$. Using then that the standard form is unique, we obtain the following theorem. \begin{theorem} Let $\ket{\Psi}$ be an $n$--qubit state with $\rho_i\neq \mbox{$1 \hspace{-1.0mm} {\bf l}$}$ $\forall i$. Then $\ket{\Psi}\simeq_{LU} \ket{\Phi}$ iff the standard form of $\ket{\Psi}$ coincides with the standard form of $\ket{\Phi}$, i.e. $\ket{\Psi_s}=\ket{\Phi_s}$. \end{theorem} Thus, similarly to the bipartite case, two generic states are LU--equivalent iff their standard forms coincide, which can be easily checked.
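The sorted trace decomposition described above can also be computed numerically. The following NumPy sketch (the function names are ours, not from the paper) diagonalizes each single qubit reduced state and rotates the largest eigenvalue to $\ket{0}$; it does not fix the residual phase gates, so it produces a sorted trace decomposition rather than the full standard form:

```python
import numpy as np

def reduced_state(psi, i, n):
    # Single-qubit reduced density matrix rho_i = tr_{not i} |psi><psi|.
    t = np.moveaxis(psi.reshape([2] * n), i, 0).reshape(2, -1)
    return t @ t.conj().T

def sorted_trace_decomposition(psi, n):
    # Apply a local unitary to each qubit so that every single-qubit
    # reduced state becomes diag(l1, l2) with l1 >= l2.
    psi = psi / np.linalg.norm(psi)
    for i in range(n):
        w, v = np.linalg.eigh(reduced_state(psi, i, n))  # ascending order
        u = v[:, ::-1].conj().T                          # largest eigenvalue -> |0>
        t = np.moveaxis(psi.reshape([2] * n), i, 0).reshape(2, -1)
        psi = np.moveaxis((u @ t).reshape([2] * n), 0, i).reshape(-1)
    return psi
```

Since a unitary on qubit $j$ leaves $\rho_i$ for $i\neq j$ unchanged, the qubits can be treated one after the other.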
Furthermore, if the states are LU--equivalent then $\ket{\Psi}=U_1\otimes\cdots\otimes U_n\ket{\Phi}$ with $U_i=(U_s^i)^\dagger V_s^i$, where $U_s^i$, $V_s^i$ denote the local unitaries such that $\ket{\Psi_s}\equiv U_s^1\ldots U_s^n\ket{\Psi}$ and $\ket{\Phi_s}\equiv V_s^1 \ldots V_s^n\ket{\Phi}$. In order to study the non--generic cases, we now rewrite the necessary and sufficient condition derived above. For a generic state $\ket{\Psi}$ it is easy to verify that $\ket{\Psi_s}=\ket{\Phi_s}$ iff there exist a bitstring ${\bf k}=(k_1,\ldots, k_n)$, local phase gates $U_i(\alpha_i)$, and a global phase $\alpha_0$ s.t. \begin{eqnarray} \label{LU} e^{i\alpha_0} \bigotimes_i U_i(\alpha_i)X_i^{k_i}\bar{W}_i\ket{\Psi}=\bigotimes_i \bar{V}_i \ket{\Phi},\end{eqnarray} where $\bar{W}_i$ ($\bar{V}_i$) are local unitaries which transform $\rho_i$ ($\sigma_i$) into a diagonal matrix. That is, $\bigotimes_i\bar{W}_i\ket{\Psi}$ and $\bigotimes_i\bar{V}_i \ket{\Phi}$ are trace decompositions of $\ket{\Psi}$ and $\ket{\Phi}$ respectively. For generic states $k_i$ is chosen such that the order of the eigenvalues of the single qubit reduced states of $\bigotimes_i X_i^{k_i}\bar{W}_i\ket{\Psi}$ and $\bigotimes_i \bar{V}_i \ket{\Phi}$ coincides. In order to check then whether or not there exist phases $\alpha_i$ such that Eq. (\ref{LU}) is satisfied, we make use of the following lemma. There, we will consider four $n$--qubit states. The systems, each composed of $n$ qubits, will be denoted by $A,B,C,D$ respectively. The $i$-th qubit of system $A$ will be denoted by $A_i$, etc. Furthermore, we will use the notation $\ket{\chi_i}=(\ket{0110}-\ket{1001})_{A_i,B_i,C_i,D_i}$ and $P^i_{AC}=\sum_{\bf k} \ket{{\bf k}}\bra{{\bf k}{\bf k}}_{A_1,C_1,\ldots A_{i-1},C_{i-1},A_{i+1},C_{i+1}\ldots, A_n,C_n}$, and similarly we define $P^i_{BD}$ for systems $B,D$.
For a state $\ket{\Psi}$ we define $K_\Psi\equiv \{{\bf k} \mbox{ such that }\bra{{\bf k}}\Psi\rangle=0\}$ and $\ket{\Psi_{\bar{\alpha}_i}}=\ket{\Psi}+ e^{-i\bar{\alpha}_0} \sum_{{\bf k}\in K_\Psi} e^{-i\sum_{i=1}^n \bar{\alpha}_i k_i}\ket{{\bf k}}$ for some phases $\bar{\alpha}_i$, and $\ket{\Psi_{\bf 0}}=\ket{\Psi}+\sum_{{\bf k}\in K_\Psi} \ket{{\bf k}}$. \begin{lemma} \label{LemmaPhase} Let $\ket{\Psi}, \ket{\Phi}$ be $n$--qubit states. Then, there exist local phase gates $U_i(\alpha_i)$ and a phase $\alpha_0$ such that $\ket{\Psi}=e^{i\alpha_0}\bigotimes_{i=1}^n U_i(\alpha_i) \ket{\Phi}$ iff there exist phases $\{\bar{\alpha}_i\}_{i=0}^{n}$ such that \begin{itemize} \item[(i)] $|\bra {\bf i} \Psi_{\bf 0}\rangle|= |\bra {\bf i} \Phi_{\bar{\alpha}_i}\rangle|$ $\forall {\bf i}$ and \item[(ii)] $\bra{\chi_i} P^i_{AC} P^i_{BD} \ket{\Psi_{\bf 0}}_A\ket{\Psi_{\bf 0}}_B\ket{\Phi_{\bar{\alpha}_i}}_C\ket{\Phi_{\bar{\alpha}_i}}_D=0$ $\forall i\in \{1,\ldots ,n\}$.\end{itemize} \end{lemma} The proof of this lemma is presented in the appendix. Let us now consider the non--generic case. Obviously, two arbitrary states $\ket{\Psi},\ket{\Phi}$ are LU--equivalent iff there exist local unitaries $\bar{V}_k,\bar{W}_k$, a bit string ${\bf k}$, and phases $\alpha_i$ such that Eq. (\ref{LU}) is fulfilled. We now show how $\bar{V}_k,\bar{W}_k$ can be determined by imposing necessary conditions for LU--equivalence. First of all, we note that for any state $\ket{\Psi}$ with $\rho_i\neq \mbox{$1 \hspace{-1.0mm} {\bf l}$}$ for some system $i$, $k_i$ as well as $\bar{V}_i$ and $\bar{W}_i$ can be easily determined as follows. If $\ket{\Psi}\simeq_{LU}\ket{\Phi}$ then all the reduced states must be LU--equivalent, in particular $D_i=\mbox{diag}(\lambda_1^i,\lambda_2^i)=\bar{W}_i \rho_i \bar{W}_i^\dagger=\bar{V}_i \sigma_i \bar{V}_i^\dagger$, for some unitaries $\bar{W}_i,\bar{V}_i$.
Analogously to the generic case, this equation determines $\bar{W}_i$ and $\bar{V}_i$ (and $k_i=0$) uniquely up to a phase gate. Thus, for this case we have that $\ket{\Psi}\simeq_{LU} \ket{\Phi}$ iff there exist two phases $\alpha_{i}$ and $\alpha_0$ and local unitaries $U_j$ such that \begin{eqnarray} \label{cond1} \phantom{,}_i\bra{l}\bar{W}_i\Psi_s\rangle =e^{i(\alpha_0+\alpha_i l)}\bigotimes_{j\neq i}U_{j}\phantom{,}_i\bra{l}\bar{V}_i\Phi_s\rangle, \end{eqnarray} where $l\in\{0,1\}$ and $\bar{W}_i$ and $\bar{V}_i$ are chosen such that $D_i=\mbox{diag}(\lambda_1^i,\lambda_2^i)=\bar{W}_i \rho_i \bar{W}_i^\dagger=\bar{V}_i \sigma_i \bar{V}_i^\dagger$. Hence, if there is one system where the reduced state is not proportional to the identity, then we can reduce the problem of LU--equivalence of $n$--qubit states to the LU--equivalence of $(n-1)$--qubit states. This statement can be easily generalized to the case where more than one single qubit reduced state is not completely mixed. Let us now consider the more complicated case, where some $\rho_i=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$. There, it is obviously no longer possible to determine $\bar{V}_i$, $\bar{W}_i$ by imposing the necessary condition of LU--equivalence, $\rho_i=U_i\sigma_i U^\dagger_i$. However, we show next which necessary conditions can be used in order to determine them. Before we do so, we explain the problem which might occur if $\rho_i=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$ by considering a simple example. Let $\ket{\Psi}$ and $\ket{\Phi}$ denote two states with $\rho_{12}=\sigma_{12}=\mbox{$1 \hspace{-1.0mm} {\bf l}$}-\lambda\proj{\Psi^-}$, for some $\lambda\neq 0$. Then we find that $\rho_{12}=U_1 U_2 \sigma_{12} U_1^\dagger U_2^\dagger$ iff $U_1=U_2$, which implies that $\ket{\Psi}\simeq_{LU} \ket{\Phi}$ iff there exist local unitaries $U_1,U_3,\ldots, U_n$ such that $\ket{\Psi}=U_1U_1U_3\ldots U_n\ket{\Phi}$. Thus, the unitary $U_2$ depends on $U_1$.
Or, stated differently, $\bar{W}_2$ (and $\alpha_2$) depends on $U_1$ in Eq. (\ref{LU}), where we set $V_1=V_2=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$. In general we might neither be able to determine the phase $\alpha_2$, nor $\bar{W}_2$, as a function of $U_1$ alone. However, the next lemma shows that any $\bar{W}_k$ can be determined as a function of a few unitaries and that $\bar{V}_k$ can always be determined directly from the state $\ket{\Phi}$. We will see that the number of unitaries which are required to define $\bar{W}_k$ depends on the entanglement properties of the state. \begin{lemma} \label{Le1} If $\ket{\Psi}=U_1\ldots U_n\ket{\Phi}$ and if there exist systems $i_1, \ldots, i_l$ such that $\rho_{i_1, \ldots i_l,k}\neq \rho_{i_1, \ldots i_l}\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$}$, then $\bar{V}_k$ in Eq. (\ref{LU}) can be determined from the state $\ket{\Phi}$ and $\bar{W}_k$ can be determined as a function of $U_{i_1},\ldots, U_{i_l}$.\end{lemma} \begin{proof} Without loss of generality we assume $i_1=1, \ldots, i_l=l$ and write $\ket{\Psi}=\sum \ket{{\bf i}}_{1\ldots ,l}\ket{\Psi_{{\bf i}}}_{l+1\ldots ,n}$ and $\ket{\Phi}=\sum \ket{{\bf i}}_{1\ldots ,l}\ket{\Phi_{{\bf i}}}_{l+1\ldots ,n}$, where ${\bf i}=(i_1,\ldots,i_l)$. Since $\sigma_{1,\ldots,l,k}= \sum \ket{{\bf i}} \bra{{\bf j}} \mathrm{tr}_{\neg k}(\ket{\Phi_{{\bf i}}} \bra{\Phi_{{\bf j}}})\neq \sigma_{1,\ldots,l}\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$}$, there exist at least two tuples ${\bf i}$ and ${\bf j}=(j_1,\ldots, j_l)$ such that the $2\times 2$ matrix $X_{{\bf i}}^{{\bf j}}\equiv \mathrm{tr}_{\neg k}(\ket{\Phi_{{\bf i}}} \bra{\Phi_{{\bf j}}})\not\propto \mbox{$1 \hspace{-1.0mm} {\bf l}$} $. Thus, at least one of the two hermitian operators $Y_{{\bf i}}^{{\bf j}}=X_{{\bf i}}^{{\bf j}}+(X_{{\bf i}}^{{\bf j}})^\dagger$ and $Z_{{\bf i}}^{\bf j}=i X_{\bf i}^{\bf j}-i(X_{\bf i}^{\bf j})^\dagger$ is not proportional to the identity. W. l. o. g.
we assume that $\mbox{$1 \hspace{-1.0mm} {\bf l}$} \not\propto Y_{\bf i}^{\bf j} = \mathrm{tr}_{\neg k}[(\ket{{\bf i}}\bra{{\bf j}}+\mathrm{h.c.})\ket{\Phi}\bra{\Phi}]$. Using that $\ket{\Psi}=U_1\ldots U_n\ket{\Phi}$ we have \begin{eqnarray} \label{Yi} U_k Y_{\bf i}^{\bf j} U_k^\dagger =\mathrm{tr}_{\neg k}[(\ket{{\bf i}}\bra{{\bf j}}+\mathrm{h.c.})\cdot U_1^\dagger \ldots U_l^\dagger \ket{\Psi}\bra{\Psi}U_1 \ldots U_l].\end{eqnarray} Since $Y_{\bf i}^{\bf j}$ is hermitian, we can diagonalize it as well as the right hand side of Eq. (\ref{Yi}) and obtain $U_k \bar{V}^\dagger_k D \bar{V}_k U_k^\dagger=\bar{W}^\dagger_k(U_1,\ldots, U_l) D(U_1,\ldots, U_l) \bar{W}_k(U_1,\ldots, U_l)$, which is true iff $D= X^{i_k}D (U_1,\ldots, U_l)X^{i_k} $, with $i_k\in\{0,1\}$, and $U_k= e^{i\alpha_0}\bar{W}^\dagger_k(U_1,\ldots, U_l) U(\alpha_k) X^{i_k} \bar{V}_k$, for some phases $\alpha_0,\alpha_k$. Note that Eq. (\ref{Yi}) must hold for any ${\bf i},{\bf j}$. Note further that $\bar{V}_k$ is the unitary which diagonalizes $Y_{\bf i}^{\bf j}$ and can therefore be determined directly from the state $\ket{\Phi}$. Thus, we have $\ket{\Psi}=U_1\ldots U_n\ket{\Phi}$ iff there exist $i_k\in\{0,1\}$, $\alpha_0$, and $\alpha_k$ such that $e^{i\alpha_0}X^{i_k}U(\alpha_k)\bar{W}_k(U_1,\ldots,U_l) \ket{\Psi}=U_1\ldots \bar{V}_k \ldots U_n\ket{\Phi}$. \end{proof} Note that the proof of Lemma \ref{Le1} is constructive. The idea was to impose the necessary condition for LU--equivalence given in Eq. (\ref{Yi}) for any $l$--tuples ${\bf i},{\bf j}$. Since the $2\times 2$ matrices occurring in this equation are hermitian, one can, similarly to the previous cases, determine the unitaries $\bar{V}_k, \bar{W}_k$ by diagonalizing these matrices. In contrast to before, we find here that $\bar{W}_k$ might depend on $U_1, \ldots ,U_l$. We now use Lemma \ref{Le1} to present a constructive method to compute all local unitaries as functions of a few variables.
If some unitary $U_i$ cannot be determined in this way, we write $U_i=e^{-i\gamma_i Z_i} e^{-i\beta_i X_i} e^{-i\alpha_i Z_i}$ (up to a phase). Then $\ket{\Psi}=\otimes_j U_j\ket{\Phi}$ iff $e^{i\alpha_i Z_i} \bar{W}_i\ket{\Psi}=\otimes_{j\neq i} U_j\ket{\Phi}$, where $\bar{W}_i=e^{i\beta_i X_i} e^{i\gamma_i Z_i}$. That is, in this case we set $\bar{V}_i=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$, $k_i=0$, and $\bar{W}_i=e^{i\beta_i X_i} e^{i\gamma_i Z_i}$ in Eq. (\ref{LU}). We will then say that we consider $U_i$ as a variable. The constructive method to compute $\bar{V}_k$ and $\bar{W}_k$ in Eq. (\ref{LU}) is now as follows: (1) If there exists a system $i$ such that $\rho_i\not\propto \mbox{$1 \hspace{-1.0mm} {\bf l}$} $, compute $\bar{V}_i$, $\bar{W}_i$ using that $\bar{W}_i\rho_i\bar{W}_i^\dagger=\bar{V}_i\sigma_i\bar{V}_i^\dagger$ ($k_i=0$). Furthermore, compute $\bar{V}_k$ and $\bar{W}_k(U_i)$ for any system $k$ with $\rho_{ik}\neq \rho_i\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$} $ using Lemma \ref{Le1}. (2) For all systems $i$ for which $\rho_i \neq \mbox{$1 \hspace{-1.0mm} {\bf l}$}$, apply the unitaries $\bar{W}_i$ ($\bar{V}_i$) to $\ket{\Psi}$ ($\ket{\Phi}$) respectively and measure system $i$ in the computational basis, thereby reducing the number of systems (see Eq. (\ref{cond1})). After this step we have $\rho_i \propto \mbox{$1 \hspace{-1.0mm} {\bf l}$}$ $\forall i$. Then we continue as follows: (3) Consider the two qubit reduced states: (3a) There exist systems $i,j$ such that $\rho_{ij}\not\propto \mbox{$1 \hspace{-1.0mm} {\bf l}$} $. W. l. o. g. we choose $i=1$, consider $U_1$ as a variable, and set $\bar{V}_1=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$, $k_1=0$ and $\bar{W}_1=e^{i\beta_1 X_1} e^{i\gamma_1 Z_1}$. Then, compute $\bar{V}_j$ and $\bar{W}_j(U_1)$ using Lemma \ref{Le1} for any system $j$ with $\rho_{1j}\not\propto \mbox{$1 \hspace{-1.0mm} {\bf l}$}$. Let us denote by $J_2$ the set of systems for which $\rho_{1j}\not\propto \mbox{$1 \hspace{-1.0mm} {\bf l}$}$.
(3b) If there exist no systems $i,j$ such that $\rho_{ij}\not\propto \mbox{$1 \hspace{-1.0mm} {\bf l}$} $, consider $U_1$ and $U_2$ as variables and set $\bar{V}_i=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$, $k_i=0$ and $\bar{W}_i=e^{i\beta_i X_i} e^{i\gamma_i Z_i}$, for $i=1,2$. Furthermore, set $J_2=\{2\}$. (4) Consider the three--qubit reduced states: (4a) If there exists a system $k$ such that $\rho_{1 j k}\neq \rho_{1j}\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$}$ for some $j\in J_2$, compute $\bar{V}_k$ and $\bar{W}_k(U_1,U_j)$ using Lemma \ref{Le1}. Determine $\bar{V}_k$ and $\bar{W}_k(U_1,U_j)$ for any system $k$ with $\rho_{1 j k}\neq \rho_{1j}\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$}$ (if they are not already determined). (4b) If there exists no system $k$ such that $\rho_{1jk}\neq \rho_{1j}\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$}$, include $U_3$ as a variable. (5) Continue in this way until all unitaries are either determined as functions of a few unitaries, or are free parameters. If at some point it is not possible to choose $\bar{V}_k$ or $\bar{W}_k$ unitary, e.g. if the eigenvalues of the operators occurring in Eq. (\ref{Yi}) do not coincide, the states are not LU--equivalent. Once all unitaries $\bar{V}_i$ are determined and all unitaries $\bar{W}_i$ are determined as functions of a few variables, we have that $\ket{\Psi}\simeq_{LU} \ket{\Phi}$ iff there exist a bitstring ${\bf k}$ and phases $\{\alpha_i\}_{i=0}^n$ such that Eq. (\ref{LU}) is fulfilled. In order to check the existence of the local phase gates in Eq. (\ref{LU}) (for some bitstring ${\bf k}$), we use Lemma \ref{LemmaPhase}. It is important to note here that the state on the right hand side of Eq. (\ref{LU}) is completely determined; thus, the set $K_\Psi$ in Lemma \ref{LemmaPhase} can be determined and therefore this lemma can be applied. The states are LU--equivalent iff the conditions in Lemma \ref{LemmaPhase} are fulfilled for some bitstring ${\bf k}$.
Note that the unitaries $U_i$ which transform $\ket{\Phi}$ into $\ket{\Psi}$ are then given by $U_i=\bar{W}^\dagger_i U(\alpha_i) X^{k_i}\bar{V}_i$ (up to a global phase) \footnote{Note that the phases $\alpha_i$ can be easily computed.}. These unitaries are uniquely determined up to the symmetry of the state. Note that a pure state has the property that $\rho_{i_1,\ldots,i_l,k}=\rho_{i_1,\ldots,i_l}\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$}_k$ iff for any outcome of any von Neumann measurement on systems $i_1,\ldots,i_l$, system $k$ is maximally entangled with the remaining systems. Only in this case do we have to add another unitary as a variable. It is clear that two states $\ket{\Psi},\ket{\Phi}$ with $\rho_{i_1,\ldots,i_l,k}=\rho_{i_1,\ldots,i_l}\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$}$ and $\sigma_{i_1,\ldots,i_l,k}\neq\sigma_{i_1,\ldots,i_l}\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$}$ can neither be LU--equivalent nor possess the same entanglement. Thus, the method presented above suggests that in order to characterize the non--local properties of multipartite states, one should first identify the class (as described above) to which the state belongs and then determine within this class the entanglement of the state. It might well be that the different classes lead to different applications. For instance, the states used for error correction, one--way quantum computing and quantum secret sharing have the property that all single qubit reduced states are completely mixed. Before we now consider some examples, let us mention that the worst case, i.e. the case which involves the largest number of variables, is the one where the reduced state of any bipartite splitting of $\lceil n/2 \rceil$ systems versus the rest is maximally mixed. In this case we have $\lceil n/2 \rceil$ unitaries as variables. Note, however, that there are only very few instances where those states exist \cite{Bra03}.
In order to illustrate the power of this method we consider first the simplest examples of two-- and three--qubit states. The standard form of a two--qubit state is $ \ket{\Psi}=\lambda_1\ket{00}+\lambda_2\ket{11}$. Thus, the method above tells us that if $\lambda_1\neq \lambda_2$, i.e. $\rho_i\neq \mbox{$1 \hspace{-1.0mm} {\bf l}$}$, then $\ket{\Psi}\simeq_{LU}\ket{\Phi}$ iff the Schmidt coefficients $\lambda_i$ are the same. For $\lambda_1=\lambda_2$ it is straightforward to show that the unitaries $U_i$ which are obtained using the method above for the states $\ket{\Phi^+}\equiv \ket{00}+\ket{11}$ and some LU--equivalent state $V_1V_2\ket{\Phi^+}$ are $U_1=V_1W$ and $U_2=V_2W^\ast$ for any unitary $W$. The reason why the unitaries $U_i$ are not completely determined by $V_i$ is the symmetry of the state, $\ket{\Phi^+}=W\otimes W^\ast \ket{\Phi^+}$ for all unitary $W$. For three qubits the method is almost equally simple. First, we transform both states into their trace decomposition. If none of the reduced states is completely mixed, we simply compare their standard forms (Theorem $1$). If there exists some $i$ such that $\rho_i\neq\mbox{$1 \hspace{-1.0mm} {\bf l}$}$, we know that $U_i=U(\alpha_i)$. We measure system $i$ in the computational basis and are left with two two--qubit states (see Eq. (\ref{cond1})). In case those states are LU--equivalent, we apply the corresponding unitaries and use Lemma \ref{LemmaPhase} to find out whether the three--qubit states are LU--equivalent or not. For the remaining case, where $\rho_i=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$ $\forall i$, it can be easily shown that $\ket{\Psi}$ is LU--equivalent to the GHZ--state, $\ket{\Psi_0}=\ket{000}+\ket{111}$ \cite{Kr09b}. Even without using this fact it can be easily shown that also in this case the method presented above leads directly to the right unitaries (up to the symmetry of the states) for two states which are LU--equivalent (for details see \cite{Kr09b}).
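For the two--qubit case, the comparison of Schmidt coefficients can be made concrete in a few lines of NumPy (a sketch with our own function names; the paper itself works at the level of the standard form):

```python
import numpy as np

def schmidt_coefficients(psi):
    # Schmidt coefficients of a two-qubit pure state, sorted descending.
    m = np.asarray(psi, dtype=complex).reshape(2, 2)
    s = np.linalg.svd(m, compute_uv=False)
    return np.sort(s)[::-1] / np.linalg.norm(s)

def lu_equivalent_two_qubits(psi, phi, tol=1e-9):
    # Two-qubit pure states are LU-equivalent iff their Schmidt
    # coefficients coincide.
    return np.allclose(schmidt_coefficients(psi),
                       schmidt_coefficients(phi), atol=tol)
```

For instance, $\ket{\Phi^+}$ and $(X\otimes \mbox{$1 \hspace{-1.0mm} {\bf l}$})\ket{\Phi^+}$ are recognized as LU--equivalent, while $\ket{\Phi^+}$ and a product state are not.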
With the same method the LU--equivalence classes of up to $5$--qubit states are investigated in \cite{Kr09b}. We will show there, for instance, that for $4$--qubit states with $\rho_{ij}=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$ for some $i,j$ (which is the hardest class of states using the method presented above), the LU--equivalence class is determined by only three parameters. Thus, the entanglement of those states is completely determined by the fact that system $ij$ is maximally entangled with the other two qubits and by those three parameters, to which an operational meaning will also be given \cite{Kr09b}. This example already shows that the method presented here not only gives necessary and sufficient conditions for the LU--equivalence of arbitrary multipartite states, but also leads to a new insight into their entanglement properties. Finally, let us note that the results presented above also serve as a criterion of LU--equivalence for certain mixed and also $d$--level states. For instance, if there exists at least one non--degenerate eigenvalue of $\rho$ ($\sigma$) with corresponding eigenvectors $\ket{\Psi}$ ($\ket{\Phi}$) respectively, then $\rho\simeq_{LU}\sigma$ implies that $\ket{\Psi}\simeq_{LU} \ket{\Phi}$. Using the method presented here, all the unitaries which transform $\ket{\Psi}$ into $\ket{\Phi}$ can be determined, and therefore it is straightforward to check if one of them also converts $\rho$ into $\sigma$. In summary, a systematic way to decide the LU--equivalence of arbitrary multipartite pure states has been presented. The results derived here also lead to new insights into the entanglement properties of multipartite states. Studying the different classes specified here allows one to identify new parameters characterizing entanglement \cite{Kr09b}. In particular, for generic states all the parameters occurring in the standard form determine, as in the bipartite case, the entanglement contained in the state.
The author would like to thank Hans Briegel for continuous support and interest in this work and acknowledges support of the FWF (Elise Richter Program). \section{Appendix: Interconvertibility by local phase gates} In order to prove Lemma \ref{LemmaPhase} we will make use of the following lemma, where we use the same notation as before. \begin{lemma} $\ket{\Psi}$ can be converted into $\ket{\Phi}$ by local unitary phase gates iff there exist phases $\{\bar{\alpha}_i\}_{i=0}^{n}$ such that $\ket{\Psi_{\bf 0}}$ can be converted into $\ket{\Phi_{\bar{\alpha}_i}}$ by local unitary phase gates.\end{lemma} \begin{proof} If $\ket{\Psi}=e^{i\alpha_0}\bigotimes_{i=1}^n U_i(\alpha_i) \ket{\Phi}$, then choosing $\bar{\alpha}_i=\alpha_i$ for $i\in \{0,\ldots, n\}$ fulfills the condition. To prove the converse direction, we assume that there exist phases $\{\bar{\alpha}_i\}_{i=0}^{n}$ such that $\ket{\Psi_{\bf 0}}=e^{i\alpha_0}\bigotimes_{i=1}^n U_i(\alpha_i) \ket{\Phi_{\bar{\alpha}_i}}$ for some phases $\{\alpha_i\}$. Defining the projector $P=\sum_{{\bf k}\notin K_\Psi} \proj{{\bf k}}$, we have $P\ket{\Psi_{\bf 0}}=\ket{\Psi}$ and $Pe^{i\alpha_0}\bigotimes_{i=1}^n U_i(\alpha_i) \ket{\Phi_{\bar{\alpha}_i}}=e^{i\alpha_0}\bigotimes_{i=1}^n U_i(\alpha_i) P\ket{\Phi_{\bar{\alpha}_i}}$, and therefore $\ket{\Psi}=e^{i\alpha_0}\bigotimes_{i=1}^n U_i (\alpha_i)\ket{\Phi}$. \end{proof} Let us now use the lemma above to prove Lemma \ref{LemmaPhase}. \begin{proof} Due to the lemma above it remains to show that for any state $\ket{\psi}$ with $\bra{{\bf k}}\psi\rangle\neq 0$ $\forall {\bf k}$ we have that $\ket{\psi}=e^{i\alpha_0}\bigotimes_{i=1}^n U_i (\alpha_i) \ket{\phi}$ iff conditions (i) and (ii) in Lemma \ref{LemmaPhase} are satisfied. Note that Eq.
(\ref{cond1}) is equivalent to $\bra{0k}\psi\rangle \bra{1l}\psi\rangle \bra{1k}\phi\rangle \bra{0l}\phi\rangle= \bra{1k}\psi\rangle \bra{0l}\psi\rangle \bra{0k}\phi\rangle \bra{1l}\phi\rangle, $ where $0,1$ is acting on system $i$ and $k,l$ denote the computational basis states of the remaining $n-1$ qubits. Let us now prove the \emph{only if} part: If $\ket{\psi}=e^{i\alpha_0}\bigotimes_{i=1}^n U_i \ket{\phi}$ then $\bra{{\bf i}}\psi\rangle=e^{i\phi_{{\bf i}}}\bra{{\bf i}}\phi\rangle$, with $\phi_{{\bf i}}=\alpha_0+\sum_k \alpha_k i_k$, which implies (i). Condition (ii) (for $i=1$) is then equivalent to $e^{i(\phi_{0k}+\phi_{1l})} x_{kl}=e^{i(\phi_{1k}+\phi_{0l})} x_{kl},$ where $x_{kl}=\bra{0k}\phi\rangle \bra{1l}\phi\rangle \bra{1k}\phi\rangle \bra{0l}\phi\rangle$. It is easy to see that this condition is fulfilled, since $e^{i(\phi_{0k}-\phi_{1k})}=e^{-i\alpha_1}$ $\forall k$. In the same way one can show that the conditions for $i\neq 1$ are fulfilled. \emph{If}: Condition (i) implies that $\bra{{\bf i}}\Psi\rangle=e^{i \phi_{\bf i}}\bra{{\bf i}}\Phi\rangle$, for some phases $\phi_{\bf i}$. Condition (ii) (for $i=1$) then implies that $e^{i(\phi_{0k}-\phi_{1k})} =e^{i(\phi_{0l}-\phi_{1l})}$ $\forall k,l$, since $x_{kl}=\bra{0k}\phi\rangle \bra{1l}\phi\rangle \bra{1k}\phi\rangle \bra{0l}\phi\rangle \neq 0$ $\forall k,l$. Thus, $e^{i(\phi_{0k}-\phi_{1k})}$ must be independent of $k$ and therefore we have $e^{i(\phi_{0k}-\phi_{1k})}=e^{-i\alpha_1}$, or equivalently, $e^{i\phi_{k_1,k}}=e^{i (\alpha_1^{(k_1)}+\phi_{1k})}$, where $\alpha_1^{(0)}=-\alpha_1$ and $\alpha_1^{(1)}=0$. Similarly we have $e^{i(\phi_{k_10k_3,\ldots, k_n}-\phi_{k_11k_3\ldots, k_n})}=e^{-i\alpha_2}$ and therefore $e^{i\phi_{k_1,k_2,k_3\ldots,k_n}}=e^{i (\alpha_1^{(k_1)}+\alpha_2^{(k_2)}+\phi_{11k_3,\ldots,k_n})}$. Continuing in this way we find $e^{i\phi_{k_1,\ldots k_n}}=e^{i\alpha_0}e^{i\sum_j \alpha_j k_j}$, where $\alpha_0=\phi_{1\ldots 1}-\sum \alpha_i$.
Thus, we have $\ket{\psi}=e^{i\alpha_0}\bigotimes_{i=1}^n U_i (\alpha_i) \ket{\phi}$ with $U_i(\alpha_i)=\mbox{diag}(1,e^{i\alpha_i})$. Using the lemma above, this implies that $\ket{\Psi}=e^{i\alpha_0} \bigotimes_{i=1}^n U_i(\alpha_i) \ket{\Phi}$. \end{proof} \end{document}
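The amplitude relations used in this proof are easy to spot-check numerically. The following sketch (purely illustrative, with an arbitrary unnormalized two-qubit state; not part of the paper) applies local phase gates $U_i(\alpha_i)=\mathrm{diag}(1,e^{i\alpha_i})$ and a global phase to a state $\ket{\phi}$ and verifies conditions (i) and (ii) for qubit $1$:

```python
import cmath
import random

# Hypothetical numerical check (not from the paper): if |psi> equals
# e^{i a0} U_1(a1) (x) U_2(a2) |phi> with U_i(a) = diag(1, e^{ia}), then
# conditions (i) and (ii) of the lemma hold. Index convention: 2*i1 + i2.
random.seed(1)
phi = [complex(random.uniform(0.1, 1), random.uniform(0.1, 1)) for _ in range(4)]
a0, a1, a2 = 0.3, 1.1, -0.7
psi = [cmath.exp(1j * (a0 + a1 * (n >> 1) + a2 * (n & 1))) * phi[n] for n in range(4)]

# Condition (i): |<k|psi>| = |<k|phi>| for every basis state k.
assert all(abs(abs(psi[n]) - abs(phi[n])) < 1e-12 for n in range(4))

# Condition (ii) for i = 1:
# <0k|psi><1l|psi><1k|phi><0l|phi> = <1k|psi><0l|psi><0k|phi><1l|phi>.
for k in range(2):
    for l in range(2):
        lhs = psi[k] * psi[2 + l] * phi[2 + k] * phi[l]
        rhs = psi[2 + k] * psi[l] * phi[k] * phi[2 + l]
        assert abs(lhs - rhs) < 1e-12
```

The phases cancel in the cross products exactly as in the derivation above, so both assertions pass for any choice of $\alpha_0,\alpha_1,\alpha_2$.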
\begin{document} \title{There are integral heptagons, no three points on a line, no four on a circle} \author{{\sc Tobias Kreisel}\thanks{[email protected]} { and }{\sc Sascha Kurz}\thanks{[email protected]}\\ Department of Mathematics, University of Bayreuth\\ D-95440 Bayreuth, Germany} \maketitle \vspace*{-4mm} \noindent {\center\small{Keywords: integral distances, exhaustive search, orderly generation, solution to an Erd\H{o}s problem\hspace*{2cm} MSC: 52C10,52C35,52-04,52A99,51K99\\}} \noindent \rule{\textwidth}{0.3 mm} \begin{abstract} \noindent We give two configurations of seven points in the plane, no three points on a line, no four points on a circle, with pairwise integral distances. This answers a famous question of Paul Erd\H{o}s. \end{abstract} \noindent \rule{\textwidth}{0.3 mm} \section{Introduction} A famous open problem of P. Erd\H{o}s asks for seven points in the plane, no three on a line, no four on a circle, with pairwise rational or integral distances \cite{1086.52001,UPIN}. For six points, parametric solutions for infinite families of such point sets are known, see e.g. \cite{hab_kemnitz}. Since for finite point sets we can scale all occurring distances by the least common multiple of their denominators, we confine ourselves to integral distances. From the combinatorial point of view, the question of the smallest possible diameter $\dot{d}(2,n)$ of $n$ such points arises, where the diameter is the largest occurring distance in a point set. So far $$ \left(\dot{d}(2,n)\right)_{n=3,\dots,6}=1,8,73,174 $$ are known \cite{integral_distances_in_point_sets}. By exhaustive search the bound $\dot{d}(2,7)\ge 20000$ could be determined \cite{1088.52011,paper_alfred}. Up to diameter $20000$ there are only a few integral point sets consisting of $6$ points, no three on a line, no four on a circle, with pairwise integral distances; see \cite{hp} for a complete list.
Some attempts to show that no integral point set in general position consisting of more than six points can exist are known \cite{pers}, but the suggested proofs turned out to be incorrect. So there was little hope to discover such a point set. But then, following a suggestion of S. Dimiev \cite{Dimiev-Setting}, we considered integral point sets over $\mathbb{Z}_n^2$ \cite{paper_axel}. \begin{definition} Two points $(u_1,\dots,u_m),(v_1,\dots,v_m)\in \mathbb{Z}_n^m:=(\mathbb{Z}/n\mathbb{Z})^m$ are at \textbf{integral distance} if there exists a number $d\in\mathbb{Z}_n$ with $ \sum\limits_{i=1}^{m}(u_i-v_i)^2=d^2 $. \end{definition} \noindent So, an integral point set in $\mathbb{Z}_n^2$ is defined as a subset of $\mathbb{Z}_n^2$ where all pairs of points are at integral distance. To have an analogue to the ``no three on a line and no four on a circle'' restriction we need two further definitions. \begin{definition} \label{definition_collinear} A set of $r$ points $(u_i,v_i)\in\mathbb{Z}_n^2$ is collinear if there are $a,b,t_1,t_2,w_i\in \mathbb{Z}_n$ with $ a+w_it_1=u_i\,\,\text{and}\,\, b+w_it_2=v_i $. \end{definition} \begin{definition} Four points $p_i=(x_i,y_i)$ in $\mathbb{Z}_n^2$ are said to be situated on a circle if there exist $a,b \in\mathbb{Z}_n$, $r \in\mathbb{Z}_n\backslash\{\overline{0}\}$ with $ (x_i-a)^2+(y_i-b)^2=r^2\,\,\forall i $. \end{definition} \noindent By $\dot{\mathcal{I}}(n,2)$ we denote the maximum number of points in $\mathbb{Z}_n^2$ with pairwise integral distances where no three are collinear and no four points are situated on a circle. By combinatorial search techniques---see \cite{paper_axel} for the details---we found two point sets proving $\dot{\mathcal{I}}(50,2)\ge 12$ and $\dot{\mathcal{I}}(61,2)\ge 9$. Surely this does not imply the existence of an integral point set over the real plane in general position, i.e.
no three points on a line, no four points on a circle; however, it did give us fresh impetus to continue our search. \section{Integral heptagons in general position} The results for the ``relaxed" problem over $\mathbb{Z}_n^2$ motivated us to maintain our approach of exhaustive generation of all plane integral point sets in general position up to a given diameter by a variant of orderly generation, see \cite{1088.52011,paper_alfred} for details. Also, without changing our approach but simply by harnessing more computational power we were lucky enough to discover the following distance matrix \begin{equation} \label{example_1} \left( \begin{array}{rrrrrrr} 0 & 22270 & 22098 & 16637 & 9248 & 8908 & 8636 \\ 22270 & 0 & 21488 & 11397 & 15138 & 20698 & 13746 \\ 22098 & 21488 & 0 & 10795 & 14450 & 13430 & 20066 \\ 16637 & 11397 & 10795 & 0 & 7395 & 11135 & 11049 \\ 9248 & 15138 & 14450 & 7395 & 0 & 5780 & 5916 \\ 8908 & 20698 & 13430 & 11135 & 5780 & 0 & 10744 \\ 8636 & 13746 & 20066 & 11049 & 5916 & 10744 & 0 \end{array} \right) \end{equation} \noindent corresponding to a plane integral point set in general position with diameter $22270$ consisting of seven points. So this answers Erd\H{o}s's question positively. Since we applied an exhaustive search we obtain: \begin{theorem} $ \dot{d}(2,7)=22270 $. \end{theorem} \noindent To avoid duplicated listings of isomorphic point sets we give all point sets in the following canonical form. Consider the vector $v(\Delta)$ formed by the columns of the upper right triangle of a distance matrix $\Delta$. A given distance matrix $\Delta$ of a point set $\mathcal{P}$ (induced by a labeling of the points) is said to be canonical or maximal if its vector $v(\Delta)$ is the largest one in the set of all vectors of distance matrices of $\mathcal{P}$ with respect to the lexicographic order.
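The canonical form just described can be computed by brute force over all relabelings; the following sketch (a hypothetical helper, not the authors' code) illustrates it on a small distance matrix:

```python
from itertools import permutations

# Canonical form as described above (illustration only): relabel the points
# so that the vector of upper-right-triangle columns of the distance matrix
# is lexicographically maximal.
def upper_vector(D, perm):
    n = len(D)
    # columns of the upper-right triangle: (d12), (d13, d23), (d14, d24, d34), ...
    return tuple(D[perm[i]][perm[j]] for j in range(1, n) for i in range(j))

def canonical_vector(D):
    return max(upper_vector(D, p) for p in permutations(range(len(D))))

# Tiny 3-point example with distances d12 = 2, d13 = 1, d23 = 3.
D = [[0, 2, 1], [2, 0, 3], [1, 3, 0]]
print(canonical_vector(D))  # (3, 2, 1)
```

For seven points the $7!$ relabelings are still trivially enumerable, so this canonical form gives a cheap isomorphism test for distance matrices such as (\ref{example_1}).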
\begin{figure} \caption{First example of an integral heptagon in general position.} \label{fig_ex_1} \end{figure} \noindent In Figure \ref{fig_ex_1} we give an embedding of distance matrix (\ref{example_1}) in the plane and an exact coordinate representation. Discovering this point set clearly motivates the search for further examples, in the hope of finding ideas for constructing an infinite family. Unfortunately this point set is the only example with diameter at most $30000$. For diameters greater than $30000$ our approach of exhaustive search requires too much computational power, so we decided to switch to a restricted search. To describe the details of our restriction of the search space we need: \begin{definition} \label{def_characteristic} The \textbf{characteristic} of an integral triangle with side lengths $a,b,c\in\mathbb{Z}$ is the square-free part of $(a+b+c)(a+b-c)(a-b+c)(-a+b+c)$. \end{definition} \begin{theorem} All non-degenerate triangles in a plane integral point set have the same characteristic. \end{theorem} \noindent In point set (\ref{example_1}) the characteristic is given by $2002=2\cdot 7\cdot 11\cdot 13$, which explains the shape of the $y$-coordinates, see Figure \ref{fig_ex_1} and \cite{paper_characteristic}. We notice that the characteristic of point set (\ref{example_1}) is composed of relatively small prime factors. A look at our list of integral hexagons in general position \cite{hp} shows that this seems to hold for a great part of the known examples. A similar phenomenon appears in related problems as well. By determining the minimum diameter $d(2,n)$ of plane integral point sets without further restrictions up to $n=122$ points \cite{paper_alfred} we could check that the known minimal examples also have a characteristic composed of small prime factors. If additionally no three points are allowed to be collinear we denote the corresponding minimum diameter by $\overline{d}(n,2)$.
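The characteristic of Definition \ref{def_characteristic} is easy to compute with a short script (an illustrative helper, not the authors' code); note that Pythagorean triangles, whose area-related product is a perfect square, have characteristic $1$:

```python
# Characteristic of an integral triangle (Definition above): the square-free
# part of (a+b+c)(a+b-c)(a-b+c)(-a+b+c).  Illustration only.
def squarefree_part(n):
    d, result = 2, 1
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e % 2 == 1:
            result *= d
        d += 1
    return result * n  # leftover n is 1 or a prime occurring to an odd power

def characteristic(a, b, c):
    return squarefree_part((a + b + c) * (a + b - c) * (a - b + c) * (-a + b + c))

print(characteristic(3, 4, 5))  # 1, since 12*2*4*6 = 576 = 24^2
print(characteristic(2, 3, 4))  # 15, since 9*1*3*5 = 135 = 3^2 * 15
```

Trial division suffices here because the side lengths, and hence the products involved, stay well within machine-integer range for the diameters considered.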
By determining all those minimal integral point sets with up to $n=36$ points \cite{1088.52011,paper_alfred} we could check that the same phenomenon also occurs in this case. So it seemed worth a try to exhaustively construct all plane integral point sets in general position with diameter at most $70000$ and characteristic a divisor of $6469693230=2\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13\cdot 17\cdot 19\cdot 23\cdot 29$. The outcome was yet another example: \begin{equation} \label{example_2} \left( \begin{array}{rrrrrrr} 0 & 66810 & 66555 & 66294 & 49928 & 41238 & 40290 \\ 66810 & 0 & 32385 & 64464 & 32258 & 25908 & 52020 \\ 66555 & 32385 & 0 & 34191 & 16637 & 33147 & 33405 \\ 66294 & 64464 & 34191 & 0 & 34322 & 53244 & 26724 \\ 49928 & 32258 & 16637 & 34322 & 0 & 20066 & 20698 \\ 41238 & 25908 & 33147 & 53244 & 20066 & 0 & 32232 \\ 40290 & 52020 & 33405 & 26724 & 20698 & 32232 & 0 \end{array} \right) \end{equation} \noindent Unfortunately the discovery of further examples is currently beyond our means, since the algorithm we use has running time $\Omega(d^3)$ for the search for plane integral point sets in general position with diameter at most $d$. Though the restriction on the characteristic accelerated the computations significantly, the theoretical lower bound on the complexity remains. (There are $O(d^3)$ integral triangles with diameter at most $d$.) \section{Open problems} Clearly, one can ask for further examples or an infinite family of integral heptagons in general position. Since our two given examples are in non-convex position it would be interesting to see a convex example. As a further restriction Bell and Noll \cite{cluster} also required the coordinates of the point sets to be integral. Such point sets are commonly called $n_m$-clusters, where $n$ is the number of points and $m$ the dimension. In general the set of $n_2$-clusters equals the set of plane integral point sets in general position with characteristic $1$.
So far no $7_2$-cluster is known and even its existence is unclear. The smallest $6_2$-cluster has diameter $1886$. At first sight it seems that we have answered Erd\H{o}s's question completely, but from a realistic point of view we have only pushed the frontier a step further. Originally P. Erd\H{o}s asked for five points in the plane, no three on a line, no four on a circle, with pairwise integral distances. When such a set was found he asked for a six-point set, and then for a seven-point set. So now we ask as a substitute: \begin{center} {\glqq}Are there eight points in the plane, no three on a line, no four on a circle with pairwise integral distances?{\grqq} \end{center} \end{document}
\begin{document} \title{An extension of Nunokawa lemma and its example} \begin{abstract} For analytic functions $p(z)$ in the open unit disk $\mathbb{U}$ with $p(0)=1$, Nunokawa gave a result which is now called the Nunokawa lemma (Proc. Japan Acad., Ser. A {\bf 68} (1992)). By studying the Nunokawa lemma, we obtain an extension of it. In this paper, we present this extension together with an example. \end{abstract} \ \section{Introduction} \ Let $\mathbb{U}$ denote the open unit disk $$ \mathbb{U} = \{z \in \mathbb{C}:|z|<1\}. $$ \ The basic tool in proving our results is the following lemma due to Miller and Mocanu \cite{m1ref2} (see also \cite{d7ref2}). \ \begin{lem} \label{jack} \quad Let the function $w(z)$ be analytic in $\mathbb{U}$ with $w(0)=0$. If $\left|w(z)\right|$ attains its maximum value on the circle $|z|=r$ at a point $z_{0}\in\mathbb{U}$, then there exists a real number $m \geqq 1$ such that $$ \frac{z_{0}w'(z_{0})}{w(z_{0})} = m. $$ \end{lem} \ \section{Main result} \ Applying Lemma \ref{jack}, we derive the following result. \ \begin{thm} \label{p01thm1} \quad Let $p(z)$ be analytic in $\mathbb{U}$ with $p(0)=1$ and suppose that there exists a point $z_0\in\mathbb{U}$ such that $$ \mathrm{Re}(p(z)) > \alpha \quad for \quad |z|<|z_0| $$ and $$ p(z_0) = \alpha + \beta i $$ for some real $\alpha$ and $\beta$, $0\leqq\alpha<1$ and $\beta\neq0$. Then we have $$ \mathrm{Re}\left( \frac{z_0 p'(z_0)}{p(z_0)} \right) = -\frac{\alpha \beta k}{\alpha^2+\beta^2} \leqq 0 $$ and $$ \mathrm{Im}\left( \frac{z_0 p'(z_0)}{p(z_0)} \right) = \frac{\beta^2 k}{\alpha^2+\beta^2} $$ where $$ k \geqq \frac{1}{2}\left( \frac{\beta}{1-\alpha}+\frac{1-\alpha}{\beta} \right) \geqq 1 \qquad(\beta>0) $$ and $$ k \leqq \frac{1}{2}\left( \frac{\beta}{1-\alpha}+\frac{1-\alpha}{\beta} \right) \leqq -1 \qquad(\beta<0). $$ \end{thm} \ \begin{proof} \quad Let us define the function $q(z)$ by $$ q(z) = \frac{p(z)-\alpha}{1-\alpha} \qquad(z\in\mathbb{U}).
$$ Clearly, $q(z)$ is analytic in $\mathbb{U}$ with $q(0)=1$ and $q(z_0)=\dfrac{\beta}{1-\alpha}i$ for a point $z_0$ such that $p(z_0)=\alpha+\beta i$. Also, let us put $$ w(z) = \frac{1-q(z)}{1+q(z)} \qquad(z\in\mathbb{U}). $$ Then, we have that $w(z)$ is analytic in $|z|<|z_0|$, $w(0)=0$, $|w(z)|<1$ for $|z|<|z_0|$ and $$ |w(z_0)| = \left| \frac{(1-\alpha)^2-\beta^2-2(1-\alpha)\beta i}{(1-\alpha)^2+\beta^2} \right| = 1. $$ From Lemma \ref{jack}, we obtain $$ \frac{z_0 w'(z_0)}{w(z_0)} = \frac{-2 z_0 q'(z_0)}{1- \{ q(z_0) \}^2} = \dfrac{-2 z_0 q'(z_0)}{1+ \left( \dfrac{\beta}{1-\alpha} \right)^2} = m \geqq 1. $$ This shows that $$ -z_0 q'(z_0) \geqq \frac{1}{2}\left( 1+ \left( \dfrac{\beta}{1-\alpha} \right)^2 \right) $$ and $z_0 q'(z_0)$ is a negative real number. From the fact that $z_0 q'(z_0)$ is a real number and $q(z_0)$ is a pure imaginary number, we can put $$ \frac{z_0q'(z_0)}{q(z_0)} = ik $$ where $k$ is a real number. For the case $\beta>0$, we have \begin{align*} k &= \mathrm{Im}\left( \frac{z_0q'(z_0)}{q(z_0)} \right) \\ &= \mathrm{Im}\left( -z_0q'(z_0) \frac{1-\alpha}{\beta}i \right) \\ &\geqq \frac{1}{2}\left( 1+ \left( \dfrac{\beta}{1-\alpha} \right)^2 \right)\frac{1-\alpha}{\beta} \\ &= \frac{1}{2}\left( \frac{\beta}{1-\alpha}+\frac{1-\alpha}{\beta} \right) \geqq 1 \end{align*} and for the case $\beta<0$, we get \begin{align*} k &= \mathrm{Im}\left( \frac{z_0q'(z_0)}{q(z_0)} \right) \\ &= \mathrm{Im}\left( -z_0q'(z_0) \frac{1-\alpha}{\beta}i \right) \\ &\leqq \frac{1}{2}\left( 1+ \left( \dfrac{\beta}{1-\alpha} \right)^2 \right)\frac{1-\alpha}{\beta} \\ &= \frac{1}{2}\left( \frac{\beta}{1-\alpha}+\frac{1-\alpha}{\beta} \right) \leqq -1. \end{align*} On the other hand, let us consider $$ \frac{z_0 q'(z_0)}{q(z_0)} = \frac{z_0 p'(z_0)}{p(z_0)-\alpha} = ik, $$ then we have $$ \frac{z_0p'(z_0)}{p(z_0)} = \frac{p(z_0)-\alpha}{p(z_0)}ik = -\frac{\alpha \beta k}{\alpha^2+\beta^2}+\frac{\beta^2 k}{\alpha^2+\beta^2}i. $$ This completes our proof. 
\end{proof} \ Putting $\alpha=0$ in Theorem \ref{p01thm1}, we have Corollary \ref{p01cor1} \cite{ds2ref2}. \ \begin{cor} \label{p01cor1} \quad Let $p(z)$ be analytic in $\mathbb{U}$ with $p(0)=1$ and suppose that there exists a point $z_0\in\mathbb{U}$ such that $$ \mathrm{Re}(p(z)) > 0 \quad for \quad |z|<|z_0|, $$ $\mathrm{Re}(p(z_0))=0$ and $p(z_0)\neq0$. Then we have $$ \frac{z_0 p'(z_0)}{p(z_0)} = ik $$ where $k$ is a real and $k\geqq1$ for $\mathrm{Im}(p(z_0))>0$ and $k\leqq-1$ for $\mathrm{Im}(p(z_0))<0$. \end{cor} \ \section{Example of the theorem} \ \begin{ex} \upshape \label{p01ex1} \quad We consider the function $p(z)$ given by $$ p(z) = 1+(1-\alpha)(2z+z^2) \qquad(z\in\mathbb{U}) $$ for some real $0\leqq\alpha<1$. Then, $p(z)$ is analytic in $\mathbb{U}$ with $p(0)=1$. Putting $z_0=-\dfrac{1}{2}\pm\dfrac{1}{2}i$, it follows that $$ \mathrm{Re}(p(z)) > \alpha \quad for \quad |z|<|z_0|=\frac{1}{\sqrt[]{2}} $$ and $\mathrm{Re}(p(z_0))=\alpha$. For the case $z_0=-\dfrac{1}{2}+\dfrac{1}{2}i$, we have $$ p(z_0) = \alpha+\frac{1-\alpha}{2}i. $$ Putting $\beta = \dfrac{1-\alpha}{2}$, we obtain $$ \frac{z_0p'(z_0)}{p(z_0)} = -\frac{4\alpha(1-\alpha)}{4\alpha^2+(1-\alpha)^2}+\frac{2(1-\alpha)^2}{4\alpha^2+(1-\alpha)^2}i = -\frac{\alpha \beta k}{\alpha^2+\beta^2}+\frac{\beta^2 k}{\alpha^2+\beta^2}i $$ where $$ k = 2 \geqq \frac{5}{4} = \frac{1}{2}\left( \frac{\beta}{1-\alpha}+\frac{1-\alpha}{\beta} \right). $$ For the case $z_0=-\dfrac{1}{2}-\dfrac{1}{2}i$, we have $$ p(z_0) = \alpha-\frac{1-\alpha}{2}i. $$ Putting $\beta = -\dfrac{1-\alpha}{2}$, we obtain also $$ \frac{z_0p'(z_0)}{p(z_0)} = -\frac{4\alpha(1-\alpha)}{4\alpha^2+(1-\alpha)^2}-\frac{2(1-\alpha)^2}{4\alpha^2+(1-\alpha)^2}i = -\frac{\alpha \beta k}{\alpha^2+\beta^2}+\frac{\beta^2 k}{\alpha^2+\beta^2}i $$ where $$ k = -2 \leqq -\frac{5}{4} = \frac{1}{2}\left( \frac{\beta}{1-\alpha}+\frac{1-\alpha}{\beta} \right). $$ The function $p(z)$ satisfies Theorem \ref{p01thm1}. 
In particular, the function $$ p(z) = 1+z+\frac{1}{2}z^2 \qquad(z\in\mathbb{U}) $$ is one of the examples of Theorem \ref{p01thm1}. In fact, when we choose a point $z_0$ such that $$ z_0 = -\frac{1}{2}\pm\frac{1}{2}i $$ with $|z_0|=\dfrac{1}{\sqrt{2}}$, the function $p(z)$ satisfies $\mathrm{Re}(p(z))>\dfrac{1}{2}$ for $|z|<|z_0|$ and $\mathrm{Re}(p(z_0))=\dfrac{1}{2}$. For $z_0=-\dfrac{1}{2}+\dfrac{1}{2}i$, we have $$ p(z_0) = \frac{1}{2}+\frac{1}{4}i $$ and $$ \frac{z_0p'(z_0)}{p(z_0)} = -\frac{4}{5} +\frac{2}{5}i = -\frac{2}{5}k +\frac{1}{5}ki $$ with $k=2\geqq\dfrac{5}{4}$. Furthermore, for $z_0=-\dfrac{1}{2}-\dfrac{1}{2}i$, we get $$ p(z_0) = \frac{1}{2}-\frac{1}{4}i $$ and $$ \frac{z_0p'(z_0)}{p(z_0)} = -\frac{4}{5} -\frac{2}{5}i = \frac{2}{5}k +\frac{1}{5}ki $$ with $k=-2\leqq-\dfrac{5}{4}$. \ \begin{figure*} \caption{$p(z)=1+z+\dfrac{1}{2}z^2$} \label{p1fig1} \end{figure*} \end{ex} \ \end{document}
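The values claimed in the example can be verified with a few lines of complex arithmetic (a numerical spot-check, not part of the paper), using $p(z)=1+z+z^2/2$ and $z_0=-1/2+i/2$:

```python
# Numerical spot-check (illustration only): the example predicts
# p(z0) = 1/2 + i/4 and z0 p'(z0)/p(z0) = -4/5 + (2/5)i, i.e. k = 2.
p = lambda z: 1 + z + z * z / 2
dp = lambda z: 1 + z                    # p'(z)
z0 = complex(-0.5, 0.5)
w = z0 * dp(z0) / p(z0)
assert abs(p(z0) - complex(0.5, 0.25)) < 1e-12
assert abs(w - complex(-0.8, 0.4)) < 1e-12
```

The conjugate point $z_0=-1/2-i/2$ can be checked the same way, giving $-4/5-(2/5)i$ and $k=-2$.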
\begin{document} \title{Discrepancy bounds for infinite-dimensional order two digital sequences over $\mathbb{F}_2$} \begin{abstract} We provide explicit constructions of infinite-dimensional digital sequences $\mathcal{S} = (\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots) \subset [0,1]^{\mathbb{N}}$, which are constructed over the finite field $\mathbb{F}_2$, whose projection onto the first $s$ coordinates $\boldsymbol{x}_0^{(s)}, \boldsymbol{x}_{1}^{(s)}, \ldots$, for all $s \ge 1$, has $\mathcal{L}_q$ discrepancy bounded by \begin{equation*} \mathcal{L}_q(\{\boldsymbol{x}^{(s)}_0, \boldsymbol{x}^{(s)}_1, \ldots, \boldsymbol{x}^{(s)}_{N-1}\} ) \le C_{q,s} \frac{r^{3/2-1/q} }{N} \sqrt{ \sum_{v=1}^r m_v^{s-1} } \end{equation*} for all $N = 2^{m_1} + 2^{m_2} + \cdots + 2^{m_r} \ge 2$ and even integers $q$ with $2 \le q < \infty$, where the constant $C_{q,s} > 0$ is independent of $N$. In particular, we have \begin{equation*} \mathcal{L}_q(\{\boldsymbol{x}^{(s)}_0, \boldsymbol{x}^{(s)}_1, \ldots, \boldsymbol{x}^{(s)}_{2^m-1}\} ) \le C_{q,s} \frac{m^{(s-1)/2}}{2^m} \end{equation*} for all $m, s \ge 1$ and $2 \le q < \infty$. Further, we give explicit constructions of finite point sets $\boldsymbol{y}_0, \boldsymbol{y}_1, \ldots, \boldsymbol{y}_{N-1}$ in $[0,1)^\mathbb{N}$ for all $N \ge 2$ such that their projection onto the first $s$ coordinates $\boldsymbol{y}_0^{(s)}, \boldsymbol{y}_1^{(s)}, \ldots, \boldsymbol{y}_{N-1}^{(s)}$ in $[0,1)^s$ for all $s \ge 1$ satisfies \begin{equation*} \mathcal{L}_q(\{\boldsymbol{y}^{(s)}_0, \boldsymbol{y}^{(s)}_1, \ldots, \boldsymbol{y}^{(s)}_{N-1}\} ) \le C_{q,s} \frac{(\log N)^{(s-1)/2}}{N} \end{equation*} for all $2 \le q < \infty$, where $C_{q,s} > 0$ is again independent of $N$. The last two results are best possible by a lower bound of Roth [K. F. Roth, On irregularities of distribution. Mathematika, {\bf 1} (1954), 73--79.].
The proofs are based on a generalization of the Niederreiter-Rosenbloom-Tsfasman metric, which itself is a generalization of the Hamming metric. \end{abstract} {\bf Keywords}: $\mathcal{L}_q$ discrepancy, optimal convergence, explicit constructions, digital sequence, higher order sequence, higher order digital sequence, higher order net, higher order digital net {\bf AMS Subject Classification}: Primary: 11K38; Secondary: 11K06, 11K45; \section{Introduction} The $\mathcal{L}_q$ discrepancy is a measure of the equidistribution properties of a point set $\widehat{\mathcal{P}}_{N,s} = \{ \boldsymbol{x}^{(s)}_0,\boldsymbol{x}^{(s)}_1, \ldots, \boldsymbol{x}^{(s)}_{N-1}\}$ in the unit cube $[0,1]^s$, see \cite{BC,kuinie,mat}. It is based on the local discrepancy function \begin{equation*} \delta(\widehat{\mathcal{P}}_{N,s}; \boldsymbol{\theta}) = \frac{1}{N} \sum_{n=0}^{N-1} 1_{[\boldsymbol{0}, \boldsymbol{\theta})}(\boldsymbol{x}_n^{(s)}) - \prod_{j=1}^s \theta_j, \end{equation*} where $\boldsymbol{\theta} = (\theta_1,\ldots, \theta_s)$, $[\boldsymbol{0}, \boldsymbol{\theta}) = \prod_{j=1}^s [0, \theta_j)$, and $1_{[\boldsymbol{0}, \boldsymbol{\theta})}$ denotes the characteristic function of the interval $[\boldsymbol{0}, \boldsymbol{\theta})$. For a given interval $[\boldsymbol{0}, \boldsymbol{\theta})$, the local discrepancy function measures the difference between the proportion of points which fall into this interval and the volume of the interval. The $\mathcal{L}_q$ discrepancy is then the $\mathcal{L}_q$ norm of the discrepancy function \begin{equation*} \mathcal{L}_q(\widehat{\mathcal{P}}_{N,s}) = \left(\int_{[0,1]^s} |\delta(\widehat{\mathcal{P}}_{N,s}; \boldsymbol{\theta})|^q \,\mathrm{d} \boldsymbol{\theta} \right)^{1/q}, \end{equation*} with the obvious modifications for $q = \infty$.
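For $q=2$ the integral above can be evaluated in closed form via Warnock's formula, a standard identity not stated in the paper; the following sketch (illustration only) implements it and checks it against a direct one-dimensional integration:

```python
from math import prod

# Warnock's formula for the L_2 discrepancy (a standard identity):
#   L_2^2 = N^{-2} sum_{n,m} prod_j (1 - max(x_{n,j}, x_{m,j}))
#           - 2 N^{-1} sum_n prod_j (1 - x_{n,j}^2)/2 + 3^{-s}.
def l2_discrepancy_sq(points):
    n_pts, s = len(points), len(points[0])
    t1 = sum(prod(1 - max(x[j], y[j]) for j in range(s))
             for x in points for y in points) / n_pts ** 2
    t2 = 2 * sum(prod((1 - x[j] ** 2) / 2 for j in range(s)) for x in points) / n_pts
    return t1 - t2 + 3.0 ** (-s)

# One point at 1/2 in dimension s = 1: direct integration of |delta(theta)|^2
# gives (x^3 + (1-x)^3)/3 = 1/12 at x = 1/2.
assert abs(l2_discrepancy_sq([(0.5,)]) - 1 / 12) < 1e-12
```

This makes the asymptotic statements below concrete: the bounds of the paper control exactly this quantity as $N$ grows.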
One of the questions on irregularities of distribution is concerned with the precise order of convergence of the smallest possible values of $\mathcal{L}_q(\widehat{\mathcal{P}}_{N,s})$ as $N$ goes to infinity. That is, the aim is to study the convergence of \begin{equation*} \mathcal{L}_{q,N,s} = \inf_{\satop{\widehat{\mathcal{P}}_{N,s} \subset [0,1]^s}{|\widehat{\mathcal{P}}_{N,s}|=N}} \mathcal{L}_q(\widehat{\mathcal{P}}_{N,s}), \end{equation*} as $N$ tends to infinity (for fixed dimension $s$) and the explicit construction of point sets $\widehat{\mathcal{P}}_{N,s}$ which achieve the optimal rate of convergence of the $\mathcal{L}_q$ discrepancy \cite{BC}. (Such point sets are of use for instance in quasi-Monte Carlo integration \cite{DP10,DT97,niesiam}.) In the next subsection we describe the results of this paper. \subsection{The results} Let $\mathbb{N}$ denote the set of natural numbers and $\mathbb{N}_0$ the set of nonnegative integers. In the following we write $A(N,m,q,s) \ll_{q,s} B(N,m,q,s)$ if there is a constant $c_{q,s} > 0$ which depends only on $s$ and $q$ (but not on $N$ or $m$) such that $A(N,m,q, s) \le c_{q,s} B(N,m,q, s)$ for all $m$ and $N$, with analogous meanings for $\ll_s, \gg_{q,s}, \gg_s$. We write $\mathcal{S} = (\boldsymbol{x}_0, \boldsymbol{x}_1,\ldots) \subset [0,1)^{\mathbb{N}}$ for an infinite dimensional sequence and $\mathcal{S}_s = (\boldsymbol{x}_0^{(s)}, \boldsymbol{x}_1^{(s)}, \ldots) \subset [0,1)^s$ for the projection of $\mathcal{S}$ onto the first $s$ coordinates. Further let $\mathcal{P}_N = \{\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots, \boldsymbol{x}_{N-1} \} \subset [0,1)^{\mathbb{N}}$ denote the first $N$ points of $\mathcal{S}$ and let $\mathcal{P}_{N,s} = \{\boldsymbol{x}_0^{(s)}, \boldsymbol{x}_1^{(s)}, \ldots, \boldsymbol{x}_{N-1}^{(s)}\} \subset [0,1)^s$ denote the first $N$ points of $\mathcal{S}_s$. 
For point sets consisting of $N$ elements which are not obtained as the first $N$ points of a sequence for all $N \in \mathbb{N}$, we write $\widehat{\mathcal{P}}_N$ if the point set is in $[0,1)^{\mathbb{N}}$ and $\widehat{\mathcal{P}}_{N,s}$ for point sets in $[0,1)^s$, where $\widehat{\mathcal{P}}_{N,s}$ is the projection of $\widehat{\mathcal{P}}_N$ onto the first $s$ coordinates. We show the following theorem. \begin{theorem}\label{thm1} One can explicitly construct an infinite sequence $\mathcal{S}$ of points in $[0,1)^{\mathbb{N}}$ such that for all $s \ge 1$, the projection of the first $N$ points of $\mathcal{S}$ onto the first $s$ coordinates $\mathcal{P}_{N,s}$ satisfies \begin{equation*} \mathcal{L}_{q}(\mathcal{P}_{N,s}) \ll_{q,s} \frac{r^{3/2-1/q} }{N} \sqrt{\sum_{v=1}^r m_v^{s-1}} \end{equation*} for all $N \in \mathbb{N}$, with $N \ge 2$, and with dyadic expansion $N = 2^{m_1} + 2^{m_2} + \cdots + 2^{m_r}$, where $m_1 > m_2 > \cdots > m_r \ge 0$, and all even integers $q$ with $2 \le q < \infty$. In particular, we have \begin{equation*} \mathcal{L}_{q}(\mathcal{P}_{2^m,s}) \ll_{q,s} \frac{m^{(s-1)/2}}{2^m} \quad \mbox{for all } m \ge 1 \mbox{ and } 2 \le q < \infty. \end{equation*} \end{theorem} As a corollary to Theorem~\ref{thm1}, using an idea from \cite{CS02}, we can also obtain explicit constructions of finite point sets in the infinite dimensional unit cube $[0,1]^{\mathbb{N}}$ whose projection onto the first $s$ coordinates achieves the optimal rate of convergence of the $\mathcal{L}_q$ discrepancy. \begin{corollary}\label{cor1} For every $N \ge 2$ one can explicitly construct a point set $\widehat{\mathcal{P}}_{N}$ of $N$ points in $[0,1)^{\mathbb{N}}$ such that for all $s \ge 1$, the projection of $\widehat{\mathcal{P}}_N$ onto the first $s$ coordinates $\widehat{\mathcal{P}}_{N,s}$ satisfies \begin{equation*} \mathcal{L}_{q}(\widehat{\mathcal{P}}_{N,s}) \ll_{q,s} \frac{(\log N)^{(s-1)/2}}{N} \quad \mbox{for all } 2 \le q < \infty.
\end{equation*} \end{corollary} In the next subsection we provide a review of the literature and explain how the above results relate to what is known. \subsection{Literature review} The classic lower bound on the $\mathcal{L}_q$ discrepancy is by Roth~\cite{Roth} and ascertains that \begin{equation*} \mathcal{L}_{q,N,s} \gg_{s} \frac{(\log N)^{(s-1)/2}}{N} \quad \mbox{for all } N, q, s \ge 2. \end{equation*} This result is known to be best possible for $q = 2$ as shown first by Davenport~\cite{dav} for $s=2$ and then by Roth~\cite{roth2,Roth4}. Other constructions of point sets with optimal $\mathcal{L}_2$ discrepancy were found by Chen~\cite{C80, C83}, Frolov~\cite{Frolov}, Dobrovol'ski\v{i}~\cite{Do84}, Skriganov~\cite{Skr89, Skr94}, Hickernell and Yue~\cite{HY00}, and Dick and Pillichshammer~\cite{DP05b}. For more details on the history of the subject see the monograph \cite{BC}. All the constructions mentioned so far involve some random elements, except for the special case of $s=2$ studied by Davenport. Further examples of two-dimensional point sets with best possible order of ${\cal L}_2$ discrepancy can be found in \cite{FauPi09a,FauPi09,FauPiPriSch09,KriPi2006,lp,pro1988a}. Thus the constructions for $s \ge 3$ are not explicit. First explicit constructions of finite point sets in fixed dimension matching the lower bound were provided by the works of Chen and Skriganov \cite{CS02} for $q=2$ and Skriganov~\cite{Skr} for $2 \le q < \infty$. See also Chen and Skriganov~\cite{CS08} where the arguments of \cite{CS02} were simplified and the constant was improved. The papers \cite{CS02} and \cite{Skr} completely solved the open problem of finding explicit constructions of finite point sets of fixed dimension with optimal $\mathcal{L}_2$ and optimal $\mathcal{L}_q$ discrepancy. On the other hand, the $\mathcal{L}_\infty$ discrepancy, called star discrepancy, is much harder to analyze, the exact order of convergence is not known \cite{BL,BLV}. 
We briefly describe what is known about the discrepancy of sequences. A lower bound for infinite sequences of points was shown by Pro{\u\i}nov~\cite{pro85}, which states that for all infinite sequences $\mathcal{S}_s$ in the unit cube $[0,1)^s$ one has \begin{equation}\label{ineq_pro} \mathcal{L}_2(\mathcal{P}_{N,s}) \gg_s \frac{(\log N)^{s/2}}{N}, \end{equation} for infinitely many values of $N$. This implies that one cannot construct an infinite sequence of points such that its first $N$ points match Roth's lower bound for all values of $N$. An explicit construction of an infinite sequence of points $\mathcal{S}_s$ in $[0,1]^s$ which satisfies \begin{equation*} \mathcal{L}_2(\mathcal{P}_{N,s}) \ll_s \frac{(\log N)^{s/2}}{N} \quad \mbox{for all } N \ge 2, \end{equation*} was provided in \cite{DP12}. Note that those results only apply to the $\mathcal{L}_2$ discrepancy. Further, the sequences from \cite{DP12} match Roth's lower bound for infinitely many values of $N$, more precisely, for $N = 2^m$ one obtains \begin{equation*} \mathcal{L}_2(\mathcal{P}_{2^m,s}) \ll_s \frac{m^{(s-1)/2}}{2^m} \quad \mbox{for all } m \ge 1. \end{equation*} One-dimensional infinite sequences whose ${\cal L}_2$ discrepancy satisfies a bound of order $\sqrt{\log N}/N$ for every $N \ge 2$ were given in, e.g. \cite{chafa,g96,lp,pro85,pg}. These constructions are mainly based on the symmetrization of sequences (also called reflection principle). The explicit construction of sequences studied in \cite{DP12} is the same as in this paper. In \cite{DP12} the authors studied the ${\cal L}_2$ discrepancy, whereas here we consider the $\mathcal{L}_q$ discrepancy for $2 \le q < \infty$. Using the estimates $r \le \log N$ and $m_v \le \log N$, the first result of Theorem~\ref{thm1} implies that \begin{equation}\label{Lq_seq} \mathcal{L}_q(\mathcal{P}_{N,s}) \ll_{q,s} \frac{(\log N)^{s/2 + 3/2-1/q}}{N}, \end{equation} for all $N \ge 2$ and all even integers $q$ with $2 \le q < \infty$.
Thus, at least for $q=2$, this result is not best possible. It seems reasonable to suggest that the exponent of the $\log N$ factor above can be replaced by $s/2$, which would be best possible by the lower bound of Pro{\u\i}nov~\cite{pro85}. On the other hand, for finite point sets, the second part of Theorem~\ref{thm1} and Corollary~\ref{cor1} match the lower bound by Roth~\cite{Roth} and are therefore optimal. The construction of the sequences and point sets presented in this paper uses the finite field $\mathbb{F}_2$, which is different from the construction in \cite{CS02,Skr}, where the points were constructed using the finite field $\mathbb{F}_p$ of prime order $p$ with $p \ge q s^2$. By removing the restriction $p \ge q s^2$ we can now use the projection of infinite dimensional point sets to obtain point sets with optimal $\mathcal{L}_q$ discrepancy, which is not possible using the construction from \cite{CS02, Skr}. Further, the bound on $\mathcal{L}_q(\mathcal{P}_{N,s})$ holds for all $2 \le q < \infty$, i.e., as opposed to \cite{Skr}, one does not have to change the point set as $q$ increases. Hence the explicit construction in this paper is the first construction which achieves the optimal rate of convergence of the $\mathcal{L}_q$ discrepancy for all $2 \le q < \infty$. In fact, we conjecture that the explicit constructions of the point sets and sequences in this paper also achieve the optimal rate of convergence of the ${\cal L}_\infty$ discrepancy. In the next subsection we describe the explicit construction of sequences and point sets which satisfy Theorem~\ref{thm1} and Corollary~\ref{cor1}. \subsection{Explicit construction of sequences}\label{sec_const} The construction is done in two steps. In the first step, we use explicit constructions of so-called digital $(t,m,s)$-nets and digital $(t,s)$-sequences \cite{DP10,nie87,nie88,niesiam,NX96,sob67} over the finite field $\mathbb{F}_2$. 
We introduce the relevant background as well as a special case of a suitable explicit construction in the following. We first introduce some notation. We call $x\in [0,1)$ a dyadic rational if it can be written in a finite dyadic expansion. By $\oplus$ we denote the digit-wise addition modulo $2$, i.e., for $x, y \in \mathbb{R}_+ = \{z \in \mathbb{R}: z \ge 0\}$ and dyadic expansions $x = \sum_{i=w}^{\infty} \frac{x_i}{2^i}$ and $y = \sum_{i=w}^{\infty} \frac{y_i}{2^i}$ for some $w \in \mathbb{Z}$, we have $$ x \oplus y := \sum_{i=w}^{\infty } \frac{z_i}{2^i}, \; \; \;{\rm where} \; \; \; z_i := x_i + y_i \mypmod{2},$$ where for dyadic rationals we always use the finite expansion. For vectors $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}_+^s$ we use the notation $\boldsymbol{x} \oplus \boldsymbol{y}$ to denote the component-wise addition $\oplus$. (Since we consider addition modulo $2$, the dyadic subtraction $\ominus$ is the same as the dyadic addition $\oplus$.) Note that, for instance, for $x = 2^{-1} + 2^{-3} + 2^{-5} + \cdots$ and $y = 2^{-2} + 2^{-4} + 2^{-6} + \cdots$ we obtain $x \oplus y = 2^{-1} + 2^{-2} + 2^{-3} + \cdots$, which is given by its infinite expansion although it is a dyadic rational. Hence $x \oplus y$ is not always defined via its finite expansion, even if we always use the finite expansion of $x$ and $y$. This problem could be avoided by using the dyadic group $(\mathbb{F}_2)^{\mathbb{N}}$ as in \cite[Section~2]{Fine} instead of $\mathbb{R}_+$. However, this situation does not occur in this paper since we only use $\oplus$ for (vectors of) dyadic rationals (in fact, usually nonnegative integers) for which we always use the finite expansion in our proofs, so it is sufficient to use $\mathbb{R}_+$ instead of $(\mathbb{F}_2)^{\mathbb{N}}$. \subsubsection*{The digital construction scheme} We now describe the digital construction scheme for point sets in the unit cube. 
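Before turning to the construction, note that restricted to nonnegative integers the digit-wise addition $\oplus$ defined above is simply bitwise XOR; a quick illustrative check (not from the paper):

```python
# Digit-wise addition modulo 2 of dyadic digits: for nonnegative integers
# (finite expansions) this coincides with bitwise XOR.
def dyadic_add(x: int, y: int) -> int:
    return x ^ y

print(dyadic_add(5, 3))  # 6, since 101 (+) 011 = 110
print(dyadic_add(6, 6))  # 0: every element is its own inverse, so (-) = (+)
```

The second line reflects the remark that dyadic subtraction $\ominus$ equals dyadic addition $\oplus$.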
In the following we identify $0, 1 \in \mathbb{F}_2$ with the integers $0, 1$. Let $C_j = (c_{j,k,\ell})_{\satop{1 \le k \le 2m}{1 \le \ell \le m}} \in \mathbb{F}_2^{2m \times m}$ for $j \in \mathbb{N}$ be $2m \times m$ matrices over $\mathbb{F}_2$. Let $n = n_0 + n_1 2 + \cdots + n_{m-1} 2^{m-1} \in \{0, 1, \ldots, 2^m-1\}$ be the dyadic expansion of $n$. Set $\vec{n} = (n_0, n_1, \ldots, n_{m-1})^\top \in \mathbb{F}_2^{m}$. Then define \begin{equation*} \vec{x}_{j,n} = C_j \vec{n}, \end{equation*} that is, $\vec{x}_{j,n} = (x_{j,n,1}, x_{j,n,2},\ldots, x_{j,n,2m} )^\top$ with $x_{j,n,k} = \sum_{\ell=1}^m n_{\ell-1} c_{j,k,\ell} \in \mathbb{F}_2$ and define \begin{equation*} x_{j,n} = x_{j,n,1} 2^{-1} + x_{j,n,2} 2^{-2} + \cdots + x_{j,n,2m} 2^{-2m}. \end{equation*} Then the $n$th point $\boldsymbol{x}_n$ of the point set is given by $\boldsymbol{x}_n = (x_{1,n}, x_{2,n}, \ldots ) \in [0,1)^{\mathbb{N}}$. The point set $\widehat{\mathcal{P}}_{2^m} = \{\boldsymbol{x}_0, \boldsymbol{x}_1,\ldots, \boldsymbol{x}_{2^m-1}\}$ is a digital net. With some minor modifications we can also set $m=\infty$. In this case the generating matrices are of the form $C_j = (c_{j,k,\ell})_{k, \ell \in \mathbb{N}}$ and we obtain an infinite sequence $\mathcal{S}$, which we call a digital sequence (with generating matrices $(C_j)_{j \in \mathbb{N}}$). In this case we have $x_{j,n,k} = \sum_{\ell=1}^\infty n_{\ell-1} c_{j,k,\ell} \in \mathbb{F}_2$, which is actually a finite sum since for any $n \in \mathbb{N}_0$ only finitely many digits are nonzero. Further, we consider only matrices $C_j = (c_{j,k, \ell})_{k, \ell \in \mathbb{N}}$ for which $c_{j,k,\ell} = 0$ for all $k > 2 \ell$. (We point out that we actually only need $c_{j,k,\ell} = 0$ for all $k$ large enough for our purposes here, but to simplify the notation we use only constructions for which $c_{j,k,\ell}=0$ for $k > 2 \ell$.) 
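The digital construction scheme above is easy to carry out in code. The following minimal Python sketch (the function name and the example matrix are ours, for illustration only) computes the $n$th point of a digital net from $2m \times m$ generating matrices stored as lists of $0/1$ rows:

```python
# Minimal sketch of the digital construction scheme over F_2.
# Each generating matrix C_j is a list of 2m rows, each row a list of m
# entries in {0, 1}; row index k runs from 0 to 2m-1 in the code, so the
# digit x_{j,n,k+1} contributes 2^{-(k+1)} to the coordinate x_{j,n}.

def digital_net_point(matrices, n, m):
    """Return the n-th point of the digital net generated by `matrices`."""
    digits = [(n >> l) & 1 for l in range(m)]            # vec(n) = (n_0, ..., n_{m-1})
    point = []
    for C in matrices:                                   # one coordinate per matrix
        x = 0.0
        for k, row in enumerate(C):
            bit = sum(row[l] * digits[l] for l in range(m)) % 2   # (C_j vec(n))_k
            x += bit * 2.0 ** (-(k + 1))
        point.append(x)
    return point

m = 3
# Example matrix (ours): identity on top, zeros below.  This reproduces the
# van der Corput radical-inverse points in base 2.
C1 = [[1 if k == l else 0 for l in range(m)] for k in range(2 * m)]
net = [digital_net_point([C1], n, m) for n in range(2 ** m)]
# net[1] == [0.5], net[2] == [0.25], net[3] == [0.75]
```

With this choice of $C_1$ the $2^m$ first coordinates are exactly the points $a 2^{-m}$, $0 \le a < 2^m$, each occurring once.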
For a matrix $C_j = (c_{j,k,\ell}) \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N}}$ we denote by $C_j^{u \times v} = (c_{j,k,\ell})_{1 \le k \le u, 1 \le \ell \le v}$ the left-upper $u \times v$ submatrix of $C_j$. For the proof we also use the concept of a digitally shifted digital net. Let $\boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots) \in [0,1]^{\mathbb{N}}$ with dyadic expansion $\sigma_j = \sigma_{j,1} 2^{-1} + \sigma_{j,2} 2^{-2} + \cdots$. Then the digitally shifted digital net $\widehat{\mathcal{P}}_{2^m}(\boldsymbol{\sigma})$ consists of the points $\boldsymbol{x}_{n} \oplus \boldsymbol{\sigma}$ for $0 \le n < 2^m$. (Below we only use shift vectors whose components are dyadic rationals.) We now show that certain subsets of a digital sequence are digitally shifted digital nets. \begin{lemma}\label{lem_net_seq} Let $\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots$ be the points of a digital sequence with generating matrices $C_j = (c_{j,k,\ell})_{k, \ell \in \mathbb{N}}$ for which $c_{j,k,\ell} = 0$ for all $k > 2 \ell$. Let $m \ge 0$. Then for any $\beta \ge 0$ the point set \begin{equation*} \boldsymbol{x}_{\beta 2^m}, \boldsymbol{x}_{\beta 2^m+1}, \ldots, \boldsymbol{x}_{\beta 2^m + 2^m-1} \end{equation*} is a digitally shifted digital net with generating matrices $C_{j}^{2m \times m}$, $j \in \mathbb{N}$, and which is shifted by a digital shift vector whose coordinates are dyadic rationals. \end{lemma} \begin{proof} For $n \in \{\beta 2^m, \beta 2^m+1, \ldots, \beta 2^m +2^m-1\}$ we write $n = a + \beta 2^m$ with $0 \le a < 2^m$. Then $\vec{n} = (\vec{a}^\top, \vec{0}_\infty^\top)^\top + (\vec{0}_{m}^\top, \vec{\beta}^\top)^\top$, where $\vec{a} = (n_0, n_1,\ldots, n_{m-1})^\top$, $\vec{\beta} = (n_{m}, n_{m+1},\ldots)^\top$ and $\vec{0}_z$ is the zero-vector of length $z$. 
We write $$ C_{j} = \left( \begin{array}{ccc} & \vline & \\ C_{j}^{2 m \times m} & \vline & D_{j}^{2 m \times \mathbb{N}} \\ & \vline & \\ \hline & \vline & \\ 0^{\mathbb{N} \times m} & \vline & F_{j}^{\mathbb{N} \times \mathbb{N}} \\ & \vline & \end{array} \right) \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N}}, $$ where $0^{\mathbb{N} \times m}$ denotes the $\mathbb{N} \times m$ matrix whose entries are all $0 \in \mathbb{F}_2$. With this notation we have $$C_j \vec{n}=\left( \begin{array}{c} C_{j}^{2 m \times m} \vec{a} \\ 0 \\ 0 \\ \vdots \end{array} \right) + \left( \begin{array}{c} \\ D_{j}^{2 m \times \mathbb{N}} \\ \\ \hline \\ F_{j}^{\mathbb{N} \times \mathbb{N}} \\ \end{array} \right) \vec{\beta}.$$ For the point set under consideration, the vector \begin{equation*} \vec{\sigma}_{\beta,j}:=\left( \begin{array}{c} \\ D_{j}^{2 m \times \mathbb{N}} \\ \\ \hline \\ F_{j}^{\mathbb{N} \times \mathbb{N}} \\ \end{array} \right) \vec{\beta} \end{equation*} is fixed. Let $\vec{\sigma}_{\beta,j} = (\sigma_{\beta,j,1}, \sigma_{\beta,j,2},\ldots)^\top$. By the assumption $c_{j,k, \ell} =0$ for all $k > 2\ell$ it also follows that $\sigma_{\beta,j,b} = 0$ for all $b$ large enough. Further, as $n$ runs through all elements in the set $\{\beta 2^m, \beta 2^m+1,\ldots, (\beta + 1) 2^m-1\}$, the vector $\vec{a}$ runs through all elements in the set $\mathbb{F}_2^m$. Thus the point set $\{\boldsymbol{x}_{\beta 2^m}, \boldsymbol{x}_{\beta 2^m+1},\ldots, \boldsymbol{x}_{\beta 2^m+2^m-1}\}$ is a digitally shifted digital net with generating matrices $C_j^{2m \times m}$, $j \in \mathbb{N}$ and digital shift vector $\boldsymbol{\sigma}_\beta = (\sigma_{\beta,j})_{j \in \mathbb{N}}$ where $\sigma_{\beta,j} = \sigma_{\beta,j,1} 2^{-1} + \sigma_{\beta,j,2} 2^{-2} + \cdots$ are dyadic rationals. \end{proof} \subsubsection*{The NRT weight function} The properties of the digital sequence $\mathcal{S}$ depend entirely on the properties of the generating matrices $(C_j)_{j \in \mathbb{N}}$. 
We now introduce a weight function which serves as a criterion for selecting good generating matrices. Assume that the integer $k > 0$ has dyadic expansion $k = \kappa_0 + \kappa_1 2 + \cdots + \kappa_{a-2} 2^{a-2} + 2^{a-1}$ with $\kappa_i \in \{0,1\}$. We define the NRT weight function $\mu_1$ (the Niederreiter~\cite{nie86} and Rosenbloom-Tsfasman~\cite{RT} weight) for nonnegative integers $k$ by \begin{equation}\label{def_mu} \mu_1(k) = \left\{\begin{array}{rl} a = 1 + \lfloor \log_2 k \rfloor & \mbox{if } k > 0, \\ 0 & \mbox{if } k = 0, \end{array} \right. \end{equation} where $\lfloor x \rfloor$ is the largest integer smaller than or equal to $x$. For vectors $\boldsymbol{k} = (k_1,\ldots, k_s) \in \mathbb{N}_0^s$ we define the NRT weight by \begin{equation*} \mu_1(\boldsymbol{k}) = \mu_1(k_1) + \mu_1(k_2) + \cdots + \mu_1(k_s). \end{equation*} We now explain how the NRT weight is used to obtain a criterion for choosing good generating matrices. For $m \ge 1$ let $C_j^{2m \times m} \in \mathbb{F}_2^{2m \times m}$ denote the left-upper $2m \times m$ sub-matrix of $C_j \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N}}$. Further we set $\vec{k} = (\kappa_0, \kappa_1, \ldots, \kappa_{2m-1})^\top \in \mathbb{F}_2^{2m}$, where for $a < 2m$ we set $\kappa_{i} = 0$ for $a-1 < i \le 2m-1$. We define \begin{align*} \mathcal{D}_{m,s} = & \mathcal{D}(C_1^{2m \times m},\ldots, C_s^{2m \times m}) \\ = & \{\boldsymbol{k} = (k_1,\ldots, k_s) \in \mathbb{N}_0^s: (C_1^{2m \times m})^\top \vec{k}_1 + \cdots + (C_s^{2m \times m})^\top \vec{k}_s = \vec{0}_m \in \mathbb{F}_2^m\}, \end{align*} where $\vec{0}_m$ denotes the column zero vector in $\mathbb{F}_2^m$. Further we set $\mathcal{D}^\ast_{m,s} = \mathcal{D}_{m,s} \setminus \{\boldsymbol{0}\}$, where $\boldsymbol{0}$ denotes the zero-vector in $\mathbb{N}_0^s$. (The set $\mathcal{D}_{m,s}$ is related to the dual space of the row space of $((C_1^{2m \times m})^\top, \ldots, (C_s^{2m \times m})^\top)$.) 
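For small parameters both the NRT weight and membership in $\mathcal{D}_{m,s}$ can be checked by brute force. The following Python sketch (function names are ours) represents $k$ by its integer value and each matrix $C_j^{2m \times m}$ as a list of $2m$ rows of length $m$:

```python
def mu1(k):
    """NRT weight: mu_1(k) = 1 + floor(log2 k) for k > 0 and mu_1(0) = 0."""
    return k.bit_length()   # bit_length() equals exactly 1 + floor(log2 k) for k > 0

def in_dual(ks, matrices, m):
    """Check whether (k_1, ..., k_s) lies in D(C_1^{2m x m}, ..., C_s^{2m x m}),
    i.e. whether (C_1^{2m x m})^T k_1 + ... + (C_s^{2m x m})^T k_s = 0 in F_2^m."""
    acc = [0] * m
    for k, C in zip(ks, matrices):
        digits = [(k >> i) & 1 for i in range(2 * m)]   # kappa_0, ..., kappa_{2m-1}
        for l in range(m):
            acc[l] ^= sum(C[r][l] * digits[r] for r in range(2 * m)) % 2
    return all(a == 0 for a in acc)

assert [mu1(k) for k in range(5)] == [0, 1, 2, 2, 3]
```

For instance, for $s=1$, $m=2$ and the (hypothetical) matrix whose first two rows are the unit vectors and whose remaining rows are zero, the dual set consists exactly of the integers $k$ whose two least significant dyadic digits vanish.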
We define the minimal weight of $\mathcal{D}^\ast_{m,s}$ as \begin{equation*} \rho_{1,m,s} = \rho_{1,m,s}(\mathcal{D}^\ast_{m,s}) = \min_{\boldsymbol{k} \in \mathcal{D}_{m,s}^\ast} \mu_1(\boldsymbol{k}). \end{equation*} It can be shown that a large weight $\rho_{1,m,s}(\mathcal{D}^\ast_{m,s})$ for all $m \ge 1$ yields good distribution properties of the corresponding digital sequence. Therefore the goal is to construct generating matrices $(C_j)_{j \in \mathbb{N}}$ of digital sequences for which the minimal weight is in some sense large. Since this is only an intermediate step in our construction, we will not go into the details of relating the NRT weight to the distribution properties of the sequence; the interested reader may, for instance, consult \cite{nie87,np} for details. \subsubsection*{Construction of generating matrices with large $\rho_{1,m,s}$} We return to the digital construction scheme and introduce a construction of generating matrices with large minimal weight $\rho_{1,m,s}$. This is the first step in our construction of digital sequences which satisfy the bound in Theorem~\ref{thm1}. Explicit constructions of suitable generating matrices $C_j \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N} }$ were obtained by Sobol'~\cite{sob67}, Niederreiter~\cite{nie88}, Tezuka~\cite{Tez}, Niederreiter-Xing~\cite{NX96} and others (see also \cite[Chapter~8]{DP10}). To make the construction fully explicit, we briefly describe a special case of generalized Niederreiter sequences introduced by Tezuka~\cite[Eq. (3)]{Tez}. This construction builds on Sobol's and Niederreiter's constructions of the generating matrices. The construction is based on irreducible polynomials over the finite field $\mathbb{F}_2$. 
Let $p_1=x$ and $p_j \in \mathbb{F}_2[x]$, for $j \ge 2$, be the $(j-1)$st irreducible polynomial in a list of irreducible polynomials over $\mathbb{F}_2$ that is sorted in increasing order according to their degree $e_j = \deg(p_j)$, that is, $e_1 \le e_2 \le \cdots $ (the ordering of polynomials with the same degree is irrelevant; further, one could also use primitive polynomials instead of irreducible polynomials). Let $C_j = (c_{j,k,\ell})_{k,\ell \in \mathbb{N} }$ with $c_{j,k,\ell} \in \mathbb{F}_2$. We now describe how to obtain the element $c_{j,k,\ell}$ for $j, k, \ell \ge 1$. To do so, fix natural numbers $j$ and $k$. Let $i-1$ and $z$ be the quotient and remainder, respectively, when dividing $k-1$ by $e_j$, so that $k-1 = (i-1) e_j + z$ with $0 \le z < e_j$. Now consider the Laurent series expansion \begin{equation*} \frac{x^{e_j-z-1}}{p_j(x)^i} = \sum_{\ell =1}^\infty a_\ell(i,j,z) x^{-\ell} \in \mathbb{F}_2((x^{-1})). \end{equation*} Then for all $\ell \ge 1$ we set \begin{equation*} c_{j,k,\ell} = a_\ell(i,j,z). \end{equation*} Note that in this construction we have $c_{j,k,\ell} = 0$ for all $k > \ell$. The weight function and constructions we introduced so far have been well studied. In the following we introduce a new weight function and construction of generating matrices which can be viewed as an extension of the constructions above. It was first studied in \cite{D07,D08}. \subsubsection*{A new weight function} As mentioned above, we have not found the NRT weight to be sufficient to obtain explicit constructions of point sets and sequences satisfying the bound in Theorem~\ref{thm1}. In fact, \cite{BC} and \cite{Skr} use the NRT weight and additionally a Hamming weight to obtain their constructions. Here we use a generalization of the NRT weight (but we do not use the Hamming weight). We introduce this weight function in the following. 
Let $k = 2^{a_1-1} + 2^{a_2-1} + \cdots + 2^{a_\nu-1} \in \mathbb{N}$, where $a_1 > a_2 > \cdots > a_\nu > 0$. Then we define the weight function \begin{equation}\label{def_mu2} \mu_2(k) = \left\{\begin{array}{rl} a_1 + a_2 & \mbox{if } \nu \ge 2, \\ a_1 & \mbox{if } \nu = 1, \\ 0 & \mbox{if } k = 0. \end{array} \right. \end{equation} For vectors $\boldsymbol{k} = (k_1,\ldots, k_s) \in \mathbb{N}_0^s$ we set \begin{equation}\label{def_mu2_vec} \mu_2(\boldsymbol{k}) = \mu_2(k_1) + \cdots + \mu_2(k_s). \end{equation} We can also define the minimal weight by \begin{equation*} \rho_{2,m,s} = \rho_{2,m,s}(\mathcal{D}^\ast_{m,s}) = \min_{\boldsymbol{k} \in \mathcal{D}^\ast_{m,s}} \mu_2(\boldsymbol{k}). \end{equation*} The main idea in this paper is to use $\rho_{2,m,s}$ as the criterion to choose generating matrices $(C_j)_{j \in \mathbb{N}}$ and to use it to prove Theorem~\ref{thm1}. In the following we introduce the second part of our construction of digital sequences. \subsubsection*{Construction of generating matrices with large $\rho_{2,m,s}$} We first describe a method to obtain generating matrices $(C_j)_{j \in \mathbb{N}}$ for which $\rho_{2,m,s}$ is large. The following definition was used in \cite{D08} to obtain explicit construction of suitable sequences. \begin{definition}\rm The digit interlacing composition is defined by \begin{eqnarray*} \mathscr{D}: [0,1)^{2} & \to & [0,1) \\ (x_1, x_{2}) &\mapsto & \sum_{d=1}^\infty \sum_{r=1}^2 \xi_{r,d} 2^{-r - 2 (d-1)}, \end{eqnarray*} where $x_r = \xi_{r,1} 2^{-1} + \xi_{r,2} 2^{-2} + \cdots$ for $1 \le r \le 2$. 
We also define this function for vectors by setting \begin{eqnarray*} \mathscr{D}: [0,1)^{\mathbb{N}} & \to & [0,1)^\mathbb{N} \\ (x_1, x_2, \ldots) &\mapsto & (\mathscr{D}(x_1, x_2), \mathscr{D}(x_{3},x_{4}), \ldots), \end{eqnarray*} for point sets $\mathcal{P}_{N} = \{\boldsymbol{x}_0,\boldsymbol{x}_1, \ldots, \boldsymbol{x}_{N-1}\} \subseteq [0,1)^{\mathbb{N} }$ by setting \begin{equation*} \mathscr{D}(\mathcal{P}_{N}) = \{\mathscr{D}(\boldsymbol{x}_0), \mathscr{D}(\boldsymbol{x}_1), \ldots, \mathscr{D}(\boldsymbol{x}_{N-1})\}\subseteq[0,1)^{\mathbb{N}} \end{equation*} and for sequences $\mathcal{S} = (\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots)$ by setting \begin{equation*} \mathscr{D}(\mathcal{S}) = (\mathscr{D}(\boldsymbol{x}_0), \mathscr{D}(\boldsymbol{x}_1), \ldots). \end{equation*} \end{definition} We comment here that the interlacing can also be applied to the generating matrices $C_1, C_2, \ldots$ of digital nets or digital sequences directly as described in \cite[Section~4.4]{D08}. This is done in the following way: Let $C_1, C_2, \ldots$ be generating matrices of a digital net or digital sequence and let $\vec{c}_{j,k}$ denote the $k$th row of $C_j$. We define matrices $D_1, D_2, \ldots$, where the $k$th row of $D_j$ is given by $\vec{d}_{j,k}$, in the following way: For all $j \ge 1$, $u \ge 0$ and $1 \le v \le 2$ let \begin{equation*} \vec{d}_{j,2 u + v} = \vec{c}_{2 (j-1) + v, u+1}. \end{equation*} It is easy to show that if $C_1, C_2, \ldots$ are the generating matrices of a digital net $\mathcal{P}_{N}$ or digital sequence $\mathcal{S}$ respectively, then the matrices $D_1, D_2, \ldots$ defined above are the generating matrices of $\mathscr{D}(\mathcal{P}_{N})$ or $\mathscr{D}(\mathcal{S})$ respectively. In particular, $\mathscr{D}(\mathcal{P}_{N})$ is a digital net and $\mathscr{D}(\mathcal{S})$ is a digital sequence. 
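A short Python sketch of the digit interlacing map $\mathscr{D}$ on a pair of coordinates may be helpful (the function name is ours; the expansion is truncated at `prec` dyadic digits, which is exact for dyadic rationals of at most that precision): the digits of $x_1$ are placed at the odd positions and the digits of $x_2$ at the even positions of the output.

```python
# Digit interlacing of two coordinates: xi_{1,d} contributes 2^{-(2d-1)}
# and xi_{2,d} contributes 2^{-2d}, matching the definition of the map D.

def interlace(x1, x2, prec=30):
    y = 0.0
    for d in range(1, prec + 1):
        xi1 = int(x1 * 2 ** d) & 1        # d-th dyadic digit of x1
        xi2 = int(x2 * 2 ** d) & 1        # d-th dyadic digit of x2
        y += xi1 * 2.0 ** (-(2 * d - 1)) + xi2 * 2.0 ** (-2 * d)
    return y

# interlace(1/2, 0) = (0.10)_2 = 1/2, interlace(0, 1/2) = (0.01)_2 = 1/4,
# interlace(1/2, 1/2) = (0.11)_2 = 3/4.
assert interlace(0.5, 0.0) == 0.5
assert interlace(0.0, 0.5) == 0.25
assert interlace(0.5, 0.5) == 0.75
```

Applying this map to consecutive coordinate pairs of a point in $[0,1)^{\mathbb{N}}$ gives the vector version of $\mathscr{D}$ used above.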
In the proof of Theorem~\ref{thm1} below we show that the following explicit construction satisfies the $\mathcal{L}_q$ discrepancy bounds: \begin{construction}\label{construction} Let $C_j \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N} }$ be defined as above (a special case of Tezuka's construction \cite{Tez} of generalized Niederreiter sequences) and let $\mathcal{S}$ in $[0,1)^{\mathbb{N} }$ denote the digital sequence obtained from these generating matrices. Then the sequence $\mathscr{D}(\mathcal{S}) \subset [0,1)^{\mathbb{N}}$ provides an example of an explicit construction of a sequence satisfying Theorem~\ref{thm1}. \end{construction} Let $\mathcal{S}$ be a digital sequence as defined in Construction~\ref{construction}. Since the generating matrices $C_j = (c_{j,k,\ell})_{k, \ell \in \mathbb{N}}$ of $\mathcal{S}$ satisfy $c_{j, k, \ell} = 0$ for $k > \ell$, the generating matrices $D_j = (d_{j,k,\ell})_{k,\ell \in \mathbb{N}}$ for the sequence $\mathscr{D}(\mathcal{S})$ satisfy $d_{j,k,\ell} = 0$ for $k > 2\ell$. Finally we describe how to obtain, for each $N \in \mathbb{N}$, a finite point set $\widehat{\mathcal{P}}_{N,s} \subset [0,1)^{s}$ which achieves the optimal order of convergence of the $\mathcal{L}_q$ discrepancy. To do so, we use a propagation rule introduced in \cite{CS02}. In Section~\ref{cor_N} we show that the subset \begin{equation}\label{P_tilde} \widetilde{\mathcal{P}}_{N,s}:= \mathscr{D}(\mathcal{P}_{2^m,2s}) \cap \left(\left[0,\frac{N}{2^m}\right) \times [0,1)^{s-1}\right) \end{equation} contains exactly $N$ points. Then we define the point set \begin{equation}\label{Npoints} \widehat{\mathcal{P}}_{N,s}:=\left\{\left(\frac{2^m}{N} x_1,x_2,\ldots,x_s\right)\, : \, (x_1,x_2,\ldots,x_s) \in \widetilde{\mathcal{P}}_{N,s}\right\}. 
\end{equation} Further we show in Section~\ref{cor_N} that the point set $\widehat{\mathcal{P}}_{N,s}$ satisfies the bound in Corollary~\ref{cor1}. \subsection{The essential property} The construction in the previous subsection is a special case of a more general construction principle for infinite-dimensional sequences which satisfy Theorem~\ref{thm1}. We describe this in the following. \begin{definition}\rm\label{def_net} Let $m \ge 1$ and $0 \le t \le 2 m$ be natural numbers. Let $\mathbb{F}_2$ be the finite field of order $2$ and let $C_1,\ldots, C_s \in \mathbb{F}_2^{2 m \times m}$ with $C_j = (c_{j,1}, \ldots, c_{j, 2 m})^\top$. If for all $1 \le i_{j,\nu_j} < \cdots < i_{j,1} \le 2 m$ with $$\sum_{j = 1}^s \sum_{l=1}^{\min(\nu_j,2)} i_{j,l} \le 2 m - t$$ the vectors $$c_{1,i_{1,\nu_1}}, \ldots, c_{1,i_{1,1}}, \ldots, c_{s,i_{s,\nu_s}}, \ldots, c_{s,i_{s,1}}$$ are linearly independent over $\mathbb{F}_2$, then the digital net with generating matrices $C_1,\ldots, C_s$ is called an order $2$ digital $(t,m,s)$-net over $\mathbb{F}_2$. \end{definition} \begin{definition}\rm\label{def_seq} Let $t \ge 0$ be an integer. Let $C_1,\ldots, C_s \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N}}$ and let $C_{j}^{2 m \times m}$ denote the left upper $2 m \times m$ submatrix of $C_j$. If for all $m > t/2$ the matrices $C_{1}^{2 m \times m},\ldots, C_{s}^{2 m \times m}$ generate an order $2$ digital $(t, m,s)$-net over $\mathbb{F}_2$, then the digital sequence with generating matrices $C_1,\ldots, C_s$ is called an {\it order $2$ digital $(t,s)$-sequence over $\mathbb{F}_2$}. \end{definition} From \cite[Lemma~4]{Tez} we obtain that generalized Niederreiter sequences are digital $(t',s)$-sequences with \begin{equation*} t' = \sum_{j=1}^s (e_j-1). \end{equation*} A special case of \cite[Theorem~4.12]{D08} is the following result (set $d=2$ in \cite[Theorem~4.12]{D08}). 
\begin{theorem} Let $\mathcal{S}$ be a digital sequence such that $\mathcal{S}_s$ is a digital $(t',s)$-sequence. Then the projection of the sequence $\mathscr{D}(\mathcal{S}) \subset [0,1)^{\mathbb{N}}$ onto the first $s \ge 1$ coordinates is an order $2$ digital $(t,s)$-sequence over $\mathbb{F}_2$ with \begin{equation*} t = s + 2 t'. \end{equation*} \end{theorem} The main property of order $2$ digital $(t,m,s)$-nets with generating matrices $C_1, \ldots, C_s \in \mathbb{F}_2^{2 m \times m}$ is that the minimum weight of $\mathcal{D}(C_1,\ldots, C_s)$ satisfies \begin{equation}\label{2min_weight} \rho_{2,m,s}(\mathcal{D}^\ast(C_1,\ldots, C_s)) > 2m-t. \end{equation} This property follows directly from the linear independence property of the rows of the generating matrices. Consider now the sequence $\mathscr{D}(\mathcal{S})$. By \cite[Proposition~1]{D10}, the first $2^m$ points of the projection of $\mathscr{D}(\mathcal{S})$ onto the first $s$ coordinates also form a digital $(t,m,s)$-net. Thus the linear independence properties of certain sets of rows of the generating matrices imply that we also have \begin{equation}\label{min_weight} \rho_{1,m,s}(\mathcal{D}^\ast(C_1,\ldots, C_s)) > m- t. \end{equation} \subsubsection*{Some background on higher order nets} Definitions~\ref{def_net} and \ref{def_seq} are derived from results on numerical integration of smooth functions studied in \cite{D08}. We give only a very rough description of these results in the following, since we do not rely on them for our purposes here. Let $\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots, \boldsymbol{x}_{2^m-1} \in [0,1]^s$ be an order $2$ digital $(t,m,s)$-net over $\mathbb{F}_2$. 
Let $f:[0,1]^s \to \mathbb{R}$ be a function whose partial mixed derivatives up to order $2$ in each variable are square integrable, that is, \begin{equation*} \int_{[0,1]^s} \left|\frac{\partial^{\boldsymbol{\tau} } f}{\partial \boldsymbol{x}^{\boldsymbol{\tau} }}(\boldsymbol{x}) \right|^2 \,\mathrm{d} \boldsymbol{x} < \infty, \end{equation*} where for $\boldsymbol{\tau} = (\tau_1,\tau_2,\ldots, \tau_s) \in \{0, 1, 2\}^s$, the expression $\frac{\partial^{\boldsymbol{\tau} } f}{\partial \boldsymbol{x}^{\boldsymbol{\tau} }}(\boldsymbol{x})$ denotes the partial mixed derivatives of order $\tau_j$ in coordinate $j$. Then \begin{equation*} \left|\int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} - \frac{1}{2^m} \sum_{n=0}^{2^m-1} f(\boldsymbol{x}_n) \right| \ll_{f,s,t} \frac{m^{2 s}}{2^{2 m}}. \end{equation*} See \cite{D08} for details. \section{A bound on the $\mathcal{L}_q$ discrepancy of higher order digital sequences}\label{cor_N} In this section we state a bound on the $\mathcal{L}_q$ discrepancy of the higher order digital sequences introduced in Section~\ref{sec_const}. Construction~\ref{construction} in Section~\ref{sec_const} is infinite dimensional; however, in this section we only deal with the projection of those infinite dimensional sequences onto the first $s$ coordinates. To simplify the notation we write $\boldsymbol{x}_n$ instead of $\boldsymbol{x}_n^{(s)}$ in the remainder of the paper. The next theorem implies the first part of Theorem~\ref{thm1}. \begin{theorem}\label{thm2} For all even integers $q$ with $2 \le q < \infty$, the $\mathcal{L}_q$ discrepancy of the first $N \ge 2$ points of an order $2$ digital $(t,s)$-sequence $\mathcal{S}_{s}$ in $[0,1)^s$ over $\mathbb{F}_2$ is bounded by \begin{align*} \mathcal{L}_q(\mathcal{P}_{N,s}) \ll_{s, q} & \frac{r^{3/2-1/q} }{N} \sqrt{\sum_{v=1}^r m_v^{s-1} }, \end{align*} where $N = 2^{m_1} + 2^{m_2} + \cdots + 2^{m_r}$ with $m_1 > m_2 > \cdots > m_r \ge 0$. 
\end{theorem} The proof of this result is given in Section~\ref{sec_appendix}. Choosing $r=1$ in Theorem~\ref{thm2} yields the following corollary. This result implies the second part of Theorem~\ref{thm1}. \begin{corollary} Let $\mathcal{P}_{2^m,s}$ be an order $2$ digital net. Then \begin{equation*} \mathcal{L}_q(\mathcal{P}_{2^m,s}) \ll_{s,q} \frac{m^{(s-1)/2}}{2^m} \quad \mbox{for all } 2 \le q < \infty. \end{equation*} \end{corollary} The next corollary shows that the optimal convergence rate can be obtained for any $N \in \mathbb{N}$ using an idea from \cite{CS02}. \begin{corollary}\label{cor2} For each $s \ge 1$ and $N \ge 2$ one can explicitly construct a point set $\widehat{\mathcal{P}}_{N,s} \subset [0,1)^s$ such that \begin{equation*} \mathcal{L}_q(\widehat{\mathcal{P}}_{N,s}) \ll_{s,q} \frac{(\log N)^{(s-1)/2}}{N} \quad \mbox{for all } 2 \le q < \infty. \end{equation*} \end{corollary} \begin{proof} It suffices to prove the result for all even integers $q$ with $2 \le q < \infty$. For given $N \ge 2$ choose $m \ge 1$ such that $2^{m-1} \le N < 2^m$. Then $\frac{2^m}{N} \le 2$. Let $\widehat{\mathcal{P}}_{N,s}$ be the point set given by \eqref{Npoints}. It is elementary to check that the projection of $\mathscr{D}(\mathcal{P}_{2^m, 2s})$ onto the first coordinate yields a point set which has exactly one point in each interval $[a 2^{-m}, (a+1) 2^{-m})$ for $0 \le a < 2^m$. This also follows from \cite[Proposition~1]{D10}. Thus $\widetilde{\mathcal{P}}_{N,s}$, and hence $\widehat{\mathcal{P}}_{N,s}$, contains exactly $N$ points. Let $A([\boldsymbol{0}, \boldsymbol{\theta}), N, \widehat{\mathcal{P}}_{N,s}) = \sum_{n=0}^{N-1} 1_{[\boldsymbol{0}, \boldsymbol{\theta})}(\boldsymbol{x}_n)$ and let $\widetilde{\mathcal{P}}_{N,s}$ be the point set given by \eqref{P_tilde}. 
Then we have \begin{align*} & (N \mathcal{L}_q(\widehat{\mathcal{P}}_{N,s}))^q = \int_{[0,1]^s} |\delta(\widehat{\mathcal{P}}_{N,s}; \boldsymbol{\theta})|^q \,\mathrm{d} \boldsymbol{\theta} \\ = & \int_{[0,1]^s} \left| A([0, N 2^{-m} \theta_1) \times \prod_{j=2}^s [0, \theta_j), N, \widetilde{\mathcal{P}}_{N,s}) - 2^m \frac{N}{2^m} \theta_1 \theta_2 \cdots \theta_s \right|^q \,\mathrm{d} \boldsymbol{\theta} \\ = & \frac{2^m}{N} \int_{0}^{N2^{-m}} \int_{[0,1]^{s-1}} \left| A([\boldsymbol{0},\boldsymbol{\theta}), N, \widetilde{\mathcal{P}}_{N,s}) - 2^m \theta_1 \theta_2 \cdots \theta_s\right|^q \,\mathrm{d} \boldsymbol{\theta} \\ = & \frac{2^m}{N} \int_{0}^{N2^{-m}} \int_{[0,1]^{s-1}} \left| A([\boldsymbol{0},\boldsymbol{\theta}), 2^m, \mathscr{D}(\mathcal{P}_{2^m,2s})) - 2^m \theta_1 \theta_2 \cdots \theta_s\right|^q \,\mathrm{d} \boldsymbol{\theta} \\ \le & \frac{2^m}{N} (2^m \mathcal{L}_q(\mathscr{D}(\mathcal{P}_{2^m, 2s})))^q. \end{align*} Thus we obtain $$\mathcal{L}_q(\widehat{\mathcal{P}}_{N,s}) \le \left(\frac{2^m}{N} \right)^{1+1/q} \mathcal{L}_q(\mathscr{D}(\mathcal{P}_{2^m,2s})) \le 3 \mathcal{L}_q(\mathscr{D}(\mathcal{P}_{2^m,2s})) $$ and therefore \begin{align*} \mathcal{L}_q(\widehat{\mathcal{P}}_{N,s}) \ll_{s,q} & \frac{m^{(s-1)/2}}{N} \ll_{s,q} \frac{(\log N)^{(s-1) /2}}{N}. \end{align*} \end{proof} The proof of Theorem~\ref{thm2} is presented in the next section. \section{The proof of Theorem~\ref{thm2}}\label{sec_appendix} The main analytical tools to prove the bound in Theorem~\ref{thm2} are Walsh functions. These are introduced in the next subsection. The Walsh series expansion of the local discrepancy function is given in Subsection~\ref{Walsh_discrepancy}. Finally, the proof of Theorem~\ref{thm2} is presented in Subsection~\ref{subsec_proof}. \subsection{Walsh functions and some of their properties}\label{sect_walsh} In this section we introduce Walsh functions in base $2$ (see \cite{chrest,walsh}). 
\begin{definition}\rm For a non-negative integer $k$ with dyadic expansion \[ k = \kappa_{a-1} 2^{a-1} + \cdots + \kappa_1 2 + \kappa_0, \] with $\kappa_i \in \{0,1\}$ and $x \in [0,1)$ with dyadic expansion $$x = \frac{x_1}{2}+\frac{x_2}{2^2}+\cdots $$ (unique in the sense that infinitely many of the $x_i$ must be zero), we define the Walsh function $\mathrm{wal}_{k}:[0,1) \rightarrow \{-1,1\}$ by \[ \mathrm{wal}_{k}(x) := (-1)^{x_1 \kappa_0 + \cdots + x_a \kappa_{a-1}}. \] \end{definition} \begin{definition}\rm For dimension $s \ge 2$, $\boldsymbol{x} = (x_1, \ldots, x_s) \in [0,1)^s$ and $\boldsymbol{k} = (k_1, \ldots, k_s) \in \mathbb{N}_0^s$ we define $\mathrm{wal}_{\boldsymbol{k}} : [0,1)^s \rightarrow \{-1,1\}$ by \[ \mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}) := \prod_{j=1}^s \mathrm{wal}_{k_j}(x_j). \] \end{definition} Walsh functions are orthogonal in $\mathcal{L}_2$, that is, for any $\boldsymbol{k}, \boldsymbol{\ell} \in \mathbb{N}_0^s$ we have \begin{equation}\label{walsh_orthogonal} \int_{[0,1]^s} \mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}) \mathrm{wal}_{\boldsymbol{\ell}}(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} = \left\{\begin{array}{rl} 1 & \mbox{if } \boldsymbol{k} = \boldsymbol{\ell}, \\ 0 & \mbox{otherwise}. \end{array} \right. \end{equation} Further, they are characters with respect to digital nets. That is, let $\mathcal{P}_{2^m,s}$ be a digital net with generating matrices $C_1,\ldots, C_s$, then (cf. \cite[Lemma~4.75]{DP10}) \begin{equation}\label{char_prop} \frac{1}{2^m} \sum_{n=0}^{2^m-1} \mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}_n) = \left\{\begin{array}{rl} 1 & \mbox{if } \boldsymbol{k} \in \mathcal{D}(C_1,\ldots, C_s), \\ 0 & \mbox{otherwise}. \end{array} \right. \end{equation} The classical Walsh functions were first used in earlier investigations of discrepancy in \cite{CS00} and in the related contexts of numerical integration in \cite{LT94} and pseudo random numbers in \cite{Tez2}. 
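The definition of $\mathrm{wal}_k$ is straightforward to evaluate numerically. The following small Python sketch (the helper is ours; `prec` truncates the dyadic expansion of $x$, which is harmless since $\mathrm{wal}_k$ only depends on the first $\mu_1(k)$ digits of $x$) also illustrates the orthogonality relation \eqref{walsh_orthogonal} on a dyadic grid:

```python
# wal_k(x) = (-1)^{x_1 kappa_0 + x_2 kappa_1 + ...} for the dyadic digits
# x_i of x and the dyadic digits kappa_{i-1} of k.

def wal(k, x, prec=30):
    s = 0
    for i in range(1, prec + 1):
        x_i = int(x * 2 ** i) & 1         # i-th dyadic digit of x
        kappa = (k >> (i - 1)) & 1        # digit kappa_{i-1} of k
        s ^= x_i & kappa
    return -1 if s else 1

# wal_0 is constant 1; wal_1(x) = (-1)^{x_1} is +1 on [0,1/2) and -1 on [1/2,1).
assert wal(0, 0.3) == 1
assert wal(1, 0.25) == 1 and wal(1, 0.75) == -1
# Orthogonality of wal_1 and wal_3, checked on the dyadic grid {a/4}:
assert sum(wal(1, a / 4) * wal(3, a / 4) for a in range(4)) == 0
```

Since $\mathrm{wal}_k$ is constant on dyadic intervals of length $2^{-\mu_1(k)}$, averaging over the grid $\{a 2^{-\mu_1(k)}\}$ reproduces the integrals in \eqref{walsh_orthogonal} exactly.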
For more properties of Walsh functions see \cite{chrest,walsh}, for Walsh functions in the context of discrepancy see for instance \cite{CS02,DP05b,lp,Skr}, or \cite[Appendix~A]{DP10} in the context of numerical integration. In the following we also introduce an inequality from \cite{Skr}, which is based on the Littlewood-Paley inequality for the Walsh function system and Minkowski's inequality. This inequality plays a central role in the proof of Theorem~\ref{thm2}. The following proposition is \cite[Lemma~4.2]{Skr}. \begin{proposition}\label{prop_skr} For $\boldsymbol{b} = (b_1,\ldots, b_s) \in \mathbb{N}_0^s$ we set $$B(\boldsymbol{b}) = \{\boldsymbol{\ell} \in \mathbb{N}_0^s: \mu_1(\ell_j) = b_j \mbox{ for } 1 \le j \le s\}.$$ Let $2 \le q < \infty$. For functions $f \in \mathcal{L}_q([0,1]^s)$ and $\boldsymbol{b} \in \mathbb{N}_0^s$ let \begin{equation*} \sigma_{\boldsymbol{b}} f(\boldsymbol{\theta}) = \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} \widehat{f}(\boldsymbol{\ell}) \mathrm{wal}_{\boldsymbol{\ell}}(\boldsymbol{\theta}) \end{equation*} where $\widehat{f}(\boldsymbol{\ell}) = \int_{[0,1]^s} f(\boldsymbol{x}) \mathrm{wal}_{\boldsymbol{\ell}}(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}$ is the $\boldsymbol{\ell}$th Walsh coefficient of $f$. Then for any $f \in \mathcal{L}_q([0,1]^s)$ we have \begin{equation*} \left(\int_{[0,1]^s} |f(\boldsymbol{\theta})|^q \,\mathrm{d} \boldsymbol{\theta} \right)^{1/q} \ll_{q,s} \left(\sum_{\boldsymbol{b} \in \mathbb{N}_0^s} \left(\int_{[0,1]^s} |\sigma_{\boldsymbol{b}}f(\boldsymbol{\theta})|^q \,\mathrm{d} \boldsymbol{\theta} \right)^{2/q} \right)^{1/2}. \end{equation*} \end{proposition} \subsection{The Walsh series expansion of the $\mathcal{L}_q$ discrepancy function}\label{Walsh_discrepancy} We now obtain the Walsh series expansion for the local discrepancy function. In the following the symbol `$\sim$' shall denote equality in the $\mathcal{L}_2$ norm sense. 
It is only used to point out which function corresponds to a given Walsh series. We need the following notation. For $a \in \mathbb{R}$ let \begin{equation*} 1_{a\neq 0} = \left\{\begin{array}{rl} 1 & \mbox{if } a \neq 0, \\ 0 & \mbox{if } a = 0. \end{array} \right. \end{equation*} For $\boldsymbol{a} = (a_1,\ldots, a_s)$, let $|\boldsymbol{a}|_1 = |a_1| + \cdots + |a_s|$, $1_{\boldsymbol{a} \neq \boldsymbol{0}} = (1_{a_1\neq 0}, \ldots, 1_{a_s\neq 0})$, $|1_{\boldsymbol{a} \neq \boldsymbol{0}}|_1 = \sum_{j=1}^s 1_{a_j\neq 0}$, and for a subset $u \subseteq \{1,\ldots, s\}$ let $\boldsymbol{a}_u = (a_j)_{j \in u}$, and $(\boldsymbol{a}_u, \boldsymbol{0})$ denote the vector whose $j$th component is $a_j$ for $j \in u$ and $0$ otherwise. Let $k \in \mathbb{N}$ have dyadic expansion $k = \kappa_0 + \kappa_1 2 + \cdots + \kappa_{a-2} 2^{a-2} + 2^{a-1}$ with $\kappa_i \in \{0,1\}$. Further let $\boldsymbol{k}=(k_1,\ldots, k_s) \in \mathbb{N}_0^s$, $\nu(\boldsymbol{k}) = (\mu_1(k_1), \ldots, \mu_1(k_s))$, where $\mu_1$ is given by \eqref{def_mu}, and $$\boldsymbol{k} \oplus \lfloor 2^{\boldsymbol{a} + \nu(\boldsymbol{k}) - \boldsymbol{1}} \rfloor = (k_1 \oplus \lfloor 2^{a_1+\mu_1(k_1)-1} \rfloor, \ldots, k_s \oplus \lfloor 2^{a_s+\mu_1(k_s)-1} \rfloor).$$ \begin{lemma}\label{lem1} The local discrepancy function has Walsh series expansion \begin{align*} & \delta(\mathcal{P}_{N,s}; \boldsymbol{\theta}) \\ \sim & \frac{1}{2^s N} \sum_{n=0}^{N-1} \sum_{\boldsymbol{k} \in \mathbb{N}_0^s \setminus \{\boldsymbol{0}\}} 2^{-\mu_1(\boldsymbol{k})} \mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}_n) \sum_{\boldsymbol{a} \in \mathbb{N}_0^s} (-1)^{|1_{\boldsymbol{a} \neq \boldsymbol{0}}|_1} 2^{-|\boldsymbol{a}|_1} \mathrm{wal}_{\boldsymbol{k} \oplus \lfloor 2^{\boldsymbol{a} + \nu(\boldsymbol{k})-\boldsymbol{1}} \rfloor} (\boldsymbol{\theta}). \end{align*} \end{lemma} \begin{proof} It is well known that (cf. 
\cite{Fine} or \cite[Lemma~A.22]{DP10}) \begin{equation*} \theta = \sum_{a=0}^\infty (-1)^{1_{a \neq 0}} 2^{-a-1} \mathrm{wal}_{\lfloor 2^{a-1} \rfloor }(\theta). \end{equation*} The Walsh series expansion of the indicator function $1_{[0,\theta)}(x)$ can be obtained from \cite[Section~3]{Fine} (or \cite[Lemma~2]{lp} and \cite[Lemma~3]{lp}, or see also \cite[Lemma~14.8]{DP10}) and is given by \begin{align*} 1_{[0,\theta)}(x) \sim & \sum_{a=0}^\infty \sum_{k=0}^\infty (-1)^{1_{a\neq 0}} 2^{-a-1} 2^{-\mu_1(k)} \mathrm{wal}_k(x) \mathrm{wal}_{k \oplus \lfloor 2^{a+ \mu_1(k)-1} \rfloor}(\theta). \end{align*} By substituting these formulae into \begin{align*} \delta(\mathcal{P}_{N,s}; \boldsymbol{\theta}) = & \frac{1}{N} \sum_{n=0}^{N-1} \left( \prod_{j=1}^s 1_{[0, \theta_j)}(x_{n,j}) - \prod_{j=1}^s \theta_j \right) \end{align*} we obtain the result. \end{proof} In the following subsection we show how Definition~\ref{def_net} and Lemma~\ref{lem1} can be combined to obtain Theorem~\ref{thm2}. \subsection{The proof of Theorem~\ref{thm2}}\label{subsec_proof} Throughout this subsection we assume that $q$ is an even integer with $2 \le q < \infty$. Let $\mathcal{P}_{N,s} = \{\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots, \boldsymbol{x}_{N-1}\}$ denote the first $N$ points of an order $2$ digital $(t,s)$-sequence. We split the proof of Theorem~\ref{thm2} into several lemmas. 
\begin{lemma}\label{lem_1} For $N, s \in \mathbb{N}$, with $N \ge 2$, we have \begin{align*} \mathcal{L}^2_q(\mathcal{P}_{N,s}) \ll_{q,s} \sum_{\boldsymbol{b} \in \mathbb{N}_0^s} \left(\int_{[0,1]^s} |\sigma_{\boldsymbol{b}} (\boldsymbol{\theta})|^q \,\mathrm{d} \boldsymbol{\theta} \right)^{2/q}, \end{align*} where \begin{align*} \sigma_{\boldsymbol{b}}(\boldsymbol{\theta}) = \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} c(\boldsymbol{\ell}) \mathrm{wal}_{\boldsymbol{\ell}}(\boldsymbol{\theta}), \end{align*} and where \begin{align*} c(\boldsymbol{\ell}) = & \sum_{u \in A(\boldsymbol{\ell})} (-1)^{s-|u|} \sum_{\boldsymbol{z}_{u} \in \mathbb{N}^{|u|} } 2^{-\mu_1(\boldsymbol{\ell}) - |\boldsymbol{z}_{u}|_1 - s} \frac{1}{N} \sum_{n=0}^{N-1} \mathrm{wal}_{\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_{u}, \boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1}}\rfloor}(\boldsymbol{x}_n), \end{align*} where the set $A(\boldsymbol{\ell})$ is the power set $\mathscr{P}(\{1,\ldots, s\})$ of $\{1,\ldots, s\}$, unless $\boldsymbol{\ell} = (\ell_1,\ldots, \ell_s)$ is of the form $\ell_j = 0$ or $\ell_j = 2^{\mu_1(\ell_j)-1}$ for all $1 \le j \le s$, in which case the term $u = \emptyset$ is omitted, i.e. \begin{equation*} A(\boldsymbol{\ell}) = \left\{\begin{array}{rl} \mathscr{P}(\{1,\ldots, s\}) \setminus \{\emptyset\} & \mbox{if } \ell_j = 0 \mbox{ or } \ell_j = 2^{\mu_1(\ell_j)-1} \mbox{ for } 1 \le j \le s, \\ \mathscr{P}(\{1,\ldots, s\}) & \mbox{otherwise}. \end{array} \right.
\end{equation*} \end{lemma} \begin{proof} Using Lemma~\ref{lem1} we obtain the following expression for the Walsh series of the local discrepancy function $\delta(\mathcal{P}_{N,s}; \boldsymbol{\theta})$ \begin{align}\label{big_sum} & \frac{1}{2^sN} \sum_{n=0}^{N-1} \sum_{\boldsymbol{k}\in \mathbb{N}_0^s \setminus \{\boldsymbol{0}\} } 2^{-\mu_1(\boldsymbol{k})} \mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}_n)\sum_{v \subseteq \{1,\ldots, s\}} (-1)^{|v|} \sum_{\boldsymbol{a}_v \in \mathbb{N}^{|v|}} 2^{-|\boldsymbol{a}_v|_1} \mathrm{wal}_{\boldsymbol{k} \oplus \lfloor 2^{(\boldsymbol{a}_v,\boldsymbol{0}) + \nu(\boldsymbol{k})-\boldsymbol{1}} \rfloor }(\boldsymbol{\theta}). \end{align} The sum \eqref{big_sum} can be rearranged using the substitution $\ell_j = k_j \oplus 2^{a_j + \mu_1(k_j)-1}$. Thus we can write $k_j = \ell_j \oplus 2^{z_j + \mu_1(\ell_j)-1}$. In this case we have \begin{align*} \mu_1(\ell_j) = \mu_1(k_j) + a_j & \mbox{ if } a_j > 0, \\ \mu_1(k_j) = \mu_1(\ell_j) + z_j & \mbox{ if } z_j > 0. \end{align*} The two cases are complementary, that is, if $a_j > 0$, then $\mu_1(\ell_j) > \mu_1(k_j)$ and hence $z_j = 0$; by symmetry, if $z_j > 0$, then $a_j = 0$. In \eqref{big_sum} we sum over all $v \subseteq \{1,\ldots, s\}$ and then sum over the vector $\boldsymbol{a}_v$, which consists of positive integers. By the substitution above we replace $k_j$ by $\ell_j$ and $a_j$ by $z_j$, where $z_j = 0$ for $a_j > 0$ and $z_j > 0$ for $a_j = 0$. Thus we need to replace the sum over $v \subseteq \{1,\ldots, s\}$ by a sum over $u := \{1,\ldots, s\} \setminus v$.
Hence we obtain \begin{align}\label{big_sum2} & \frac{1}{2^sN} \sum_{n=0}^{N-1} \sum_{\boldsymbol{k}\in \mathbb{N}_0^s \setminus \{\boldsymbol{0}\} } 2^{-\mu_1(\boldsymbol{k})} \mathrm{wal}_{\boldsymbol{k}}(\boldsymbol{x}_n) \\ & \sum_{v \subseteq \{1,\ldots, s\}} (-1)^{|v|} \sum_{\boldsymbol{a}_v \in \mathbb{N}^{|v|}} 2^{-|\boldsymbol{a}_v|_1} \mathrm{wal}_{\boldsymbol{k} \oplus \lfloor 2^{(\boldsymbol{a}_v,\boldsymbol{0}) + \nu(\boldsymbol{k})-\boldsymbol{1}} \rfloor }(\boldsymbol{\theta}) \nonumber \\ = & \frac{1}{2^sN} \sum_{n=0}^{N-1} \sum_{u \subseteq \{1,\ldots, s\}} (-1)^{s-|u|} \sum_{\boldsymbol{z}_u \in \mathbb{N}^{|u|}} 2^{-|\boldsymbol{z}_u|_1} \nonumber \\ & \sum_{ \satop{ \boldsymbol{\ell} \in \mathbb{N}_0^s }{\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_u, \boldsymbol{0}) + \nu(\boldsymbol{\ell})- \boldsymbol{1} } \rfloor \neq \boldsymbol{0} } } 2^{-\mu_1(\boldsymbol{\ell})} \mathrm{wal}_{\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_u, \boldsymbol{0}) + \nu(\boldsymbol{\ell})- \boldsymbol{1} } \rfloor }(\boldsymbol{x}_n) \mathrm{wal}_{\boldsymbol{\ell}}(\boldsymbol{\theta}). \end{align} Note that for $k_j = 2^{\mu_1(k_j)-1}$ and $a_j = 0$ we have $\ell_j = 0$. Thus the case $\boldsymbol{\ell} = \boldsymbol{0}$ needs to be included. On the other hand, we must have $\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_u,\boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1}} \rfloor \neq \boldsymbol{0}$. We have $\ell_j \oplus \lfloor 2^{z_j+\mu_1(\ell_j)-1} \rfloor = 0$ if and only if either $\ell_j = 0$ and $z_j=0$, or $\ell_j = 2^{\mu_1(\ell_j)-1}$ and $z_j=0$. We can exclude this case by assuming that if all $\ell_j$ are of the form $\ell_j = 0$ or $\ell_j = 2^{\mu_1(\ell_j)-1}$, then $u \neq \emptyset$, since then at least one $z_j \neq 0$. This is ensured by the condition on the sum over $u$ in the statement of the lemma.
Thus we obtain that \eqref{big_sum2} equals \begin{align*} \sum_{\boldsymbol{\ell} \in \mathbb{N}_0^s} c(\boldsymbol{\ell}) \mathrm{wal}_{\boldsymbol{\ell}}(\boldsymbol{\theta}) = & \sum_{\boldsymbol{b} \in \mathbb{N}_0^s} \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} c(\boldsymbol{\ell}) \mathrm{wal}_{\boldsymbol{\ell}}(\boldsymbol{\theta}) \end{align*} and we have \begin{equation}\label{eq_c} \delta(\mathcal{P}_{N,s}; \boldsymbol{\theta}) \sim \sum_{\boldsymbol{b} \in \mathbb{N}_0^s} \sigma_{\boldsymbol{b}}(\boldsymbol{\theta}). \end{equation} Applying Proposition~\ref{prop_skr} to the local discrepancy function $\delta(\mathcal{P}_{N,s}; \cdot)$ we obtain the result. \end{proof} Using the orthogonality of the Walsh functions \eqref{walsh_orthogonal} and the assumption that $q \ge 2$ is an even integer, the integral over $|\sigma_{\boldsymbol{b}}|^q$ can be written in terms of the coefficients $c(\boldsymbol{\ell}_i)$ in the following way: \begin{align}\label{bound_sumb} \int_{[0,1]^s} |\sigma_{\boldsymbol{b}}(\boldsymbol{\theta}) |^{q} \,\mathrm{d} \boldsymbol{\theta} = & \sum_{\boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_{q} \in B(\boldsymbol{b})} \prod_{i=1}^{q} c(\boldsymbol{\ell}_i) \int_{[0,1]^s} \prod_{i=1}^{q} \mathrm{wal}_{\boldsymbol{\ell}_i}(\boldsymbol{\theta}) \,\mathrm{d} \boldsymbol{\theta} \nonumber \\ = & \sum_{\boldsymbol{\ell}_1,\ldots, \boldsymbol{\ell}_{q-1} \in B(\boldsymbol{b})} c(\boldsymbol{\ell}_1 \oplus \cdots \oplus \boldsymbol{\ell}_{q-1}) \prod_{i=1}^{q-1} c(\boldsymbol{\ell}_i). \end{align} We now obtain a bound on $\int_{[0,1]^s} |\sigma_{\boldsymbol{b}}(\boldsymbol{\theta}) |^{q} \,\mathrm{d} \boldsymbol{\theta}$ by bounding the coefficients $c(\boldsymbol{\ell}_i)$ and $c(\boldsymbol{\ell}_1 \oplus \cdots \oplus \boldsymbol{\ell}_{q-1})$. \begin{lemma} Let $N \in \mathbb{N}$ with $N \ge 2$ and $N = 2^{m_1} + 2^{m_2} + \cdots + 2^{m_r}$ with $m_1 > m_2 > \cdots > m_r \ge 0$ and $\boldsymbol{\ell} \in B(\boldsymbol{b})$.
Then we have \begin{align}\label{cl_est} |c(\boldsymbol{\ell})| \ll_s & \sum_{h=1}^r \frac{2^{m_h}}{N} \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{- |\boldsymbol{b}|_1 - |\boldsymbol{z}|_1 }. \end{align} \end{lemma} \begin{proof} In the following we rewrite the expression for $c(\boldsymbol{\ell})$ from Lemma~\ref{lem_1} which will make it easier to obtain a suitable bound. To do so, we need the following notation. Let $C_1, \ldots, C_s \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N}}$ denote the generating matrices of $\mathcal{S}_s$. Let $C_{j}^{(2m_h \times m_h)}$ denote the left upper submatrix of $C_j$ of size $2m_h \times m_h$. We divide the point set $\{\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots, \boldsymbol{x}_{N-1}\}$ into blocks of size $2^{m_h}$ in the following way: Let \begin{equation*} Q_h = \{\boldsymbol{x}_{2^{m_1}+\cdots + 2^{m_{h-1}}}, \boldsymbol{x}_{2^{m_1}+\cdots + 2^{m_{h-1}} + 1}, \ldots, \boldsymbol{x}_{2^{m_1} + \cdots + 2^{m_{h}}-1}\}, \end{equation*} for $1 \le h \le r$, where for $h=1$ we set $2^{m_1} + \cdots + 2^{m_{h-1}} = 0$. The main reason for dividing the point set in this way is that $Q_h$ is a digitally shifted digital net over $\mathbb{F}_2$ with generating matrices $C_1^{2m_h \times m_h}, \ldots, C_s^{2m_h \times m_h}$, where the digital shift is done by dyadic rationals, see Lemma~\ref{lem_net_seq}. Assume that the digital shift is given by $\sigma_h$. We have $Q_h \oplus \boldsymbol{\sigma}_h = \{\boldsymbol{x} \oplus \boldsymbol{\sigma}_h: \boldsymbol{x} \in Q_h\}$ is a digital net with generating matrices $C_1^{(2m_h \times m_h)}, \ldots, C_s^{(2m_h \times m_h)}$ (note that for $\mathbb{F}_2$ the symbol $\oplus$ coincides with $\ominus$). 
Let $\mathcal{D}_{m_h,s}$ denote the dual net corresponding to $Q_h$, that is, $\mathcal{D}_{m_h,s} = \mathcal{D}(C_1^{(2m_h \times m_h)}, \ldots, C_s^{(2m_h \times m_h)})$. Further $$\mathrm{wal}_{\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_{u}, \boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1}}\rfloor}(\boldsymbol{x}_n) = \mathrm{wal}_{\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_{u}, \boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1}}\rfloor}(\boldsymbol{x}'_n) \mathrm{wal}_{\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_{u}, \boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1}}\rfloor}(\boldsymbol{\sigma}_h),$$ where $\boldsymbol{x}_n = \boldsymbol{x}'_n \oplus \boldsymbol{\sigma}_h$ (note that all components of $\boldsymbol{x}_n, \boldsymbol{\sigma}_h, \boldsymbol{x}'_n$ are dyadic rationals). We can use the character property \eqref{char_prop} for the digital net $Q_h \oplus \boldsymbol{\sigma}_h$ to obtain \begin{align*} c(\boldsymbol{\ell}) = & \sum_{h=1}^r \frac{2^{m_h}}{N} \sum_{u\subseteq \{1,\ldots, s\}}^\ast (-1)^{s-|u|} \hspace{-1cm} \sum_{\satop{ \boldsymbol{z}_{u} \in \mathbb{N}^{|u|}}{ \boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_u, \boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} \hspace{-1cm} 2^{-\mu_1(\boldsymbol{\ell}) - |\boldsymbol{z}_{u}|_1 - s} \mathrm{wal}_{\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_{u}, \boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1}} \rfloor}(\boldsymbol{\sigma}_h). \end{align*} We now estimate $|c(\boldsymbol{\ell})|$ for $\boldsymbol{\ell} \in B(\boldsymbol{b})$.
Using the facts that $|(-1)^{s-|u|}|=1$, $|\mathrm{wal}_{\boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_{u}, \boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1}} \rfloor}(\boldsymbol{\sigma}_h)| = 1$ and the triangle inequality we deduce \begin{align*} |c(\boldsymbol{\ell})| \ll_s & \sum_{h=1}^r \frac{2^{m_h}}{N} \sum_{u \subseteq \{1,\ldots, s\}} \sum_{\satop{ \boldsymbol{z}_u \in \mathbb{N}^{|u|}}{ \boldsymbol{\ell} \oplus \lfloor 2^{(\boldsymbol{z}_u, \boldsymbol{0}) + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{-\mu_1(\boldsymbol{\ell}) - |\boldsymbol{z}_u|_1 } \\ = & \sum_{h=1}^r \frac{2^{m_h}}{N} \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{-\mu_1(\boldsymbol{\ell}) - |\boldsymbol{z}|_1 }. \end{align*} Since $\mu_1(\boldsymbol{\ell}) = |\boldsymbol{b}|_1$ we obtain the result. \end{proof} The following lemma proves an effective bound on $|c(\boldsymbol{\ell})|$. In the proof of this result we make essential use of the order $2$ digital net property of our point set. \begin{lemma}\label{lem_c_bound} Let $\boldsymbol{\ell} \in B(\boldsymbol{b})$ and let $N \in \mathbb{N}$ have dyadic expansion $N = 2^{m_1} + 2^{m_2} + \cdots + 2^{m_r}$ with $m_1 > m_2 > \cdots > m_r \ge 0$. Then we have \begin{equation*} |c(\boldsymbol{\ell})| \ll_s \frac{1}{N} \sum_{h=1}^r 2^{m_h - |\boldsymbol{b}|_1 -2(m_h-|\boldsymbol{b}|_1)_+} {2(m_h-|\boldsymbol{b}|_1)_+ + s-1 \choose s-1}, \end{equation*} where $(v)_+ = \max\{0, v\}$. \end{lemma} \begin{proof} Let $\boldsymbol{\ell} = (\ell_{1},\ldots, \ell_{s})$ and $\ell_{j} = 2^{w_{j,1}-1} + 2^{w_{j,2}-1} + \cdots + 2^{w_{j,r_{j}}-1}$, where $w_{j,1} > w_{j,2} > \cdots > w_{j,r_{j}} > 0$ for $\ell_j > 0$, and for $\ell_j=0$ we set $w_{j,1}= w_{j,2} =0$ and $r_{j}=0$. Further we set $w_{j, r_j+1} = w_{j, r_j+2} = w_{j, r_j+3} = 0$.
For $\boldsymbol{z} = (z_1, \ldots, z_s) \in \mathbb{N}_0^s$ let $u$ be the set of components $j$ for which $z_j > 0$. Using the definition of $\mu_2$ given by \eqref{def_mu2} and \eqref{def_mu2_vec}, we have \begin{equation*} \mu_2(\boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell})-\boldsymbol{1}} \rfloor) = \sum_{j\in u} (2 \mu_1(\ell_{j}) + z_{j}) + \sum_{j \in \{1,\ldots, s\} \setminus u} (w_{j,2}+w_{j,3}). \end{equation*} By the order $2$ digital net property, in particular \eqref{2min_weight}, and $\boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell})-\boldsymbol{1}} \rfloor \in \mathcal{D}_{m_h,s}^\ast$ for $\boldsymbol{z} \in \mathbb{N}_0^{s}$ we have \begin{align*} 2 m_h -t+1 \le & \mu_2(\boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell})-\boldsymbol{1}} \rfloor) \\ = & \sum_{j\in u} (2 \mu_1(\ell_{j}) + z_{j}) + \sum_{j \in \{1, \ldots, s\} \setminus u} (w_{j,2}+w_{j,3}) \\ \le & 2\mu_1(\boldsymbol{\ell}) + |\boldsymbol{z}|_1. \end{align*} Since $\mu_1(\boldsymbol{\ell}) = |\boldsymbol{b}|_1$, we obtain \begin{equation}\label{zb_bound2} |\boldsymbol{z}|_1 \ge 2 m_h - t + 1 - 2 |\boldsymbol{b}|_1. \end{equation} Using \eqref{zb_bound2}, the right-most sum in \eqref{cl_est} can be split into the cases where $2 m_h - t + 1 - 2 |\boldsymbol{b}|_1 \le 0$ and where $2 m_h - t + 1 - 2 |\boldsymbol{b}|_1 > 0$. If $2 m_h - t + 1 - 2 |\boldsymbol{b}|_1 \le 0$ we sum over all $\boldsymbol{z} \in \mathbb{N}_0^{s}$, which gives \begin{equation*} \sum_{\boldsymbol{z} \in \mathbb{N}_0^{s}} 2^{- |\boldsymbol{z}|_1} = \left(\sum_{z=0}^\infty 2^{-z} \right)^s = 2^s. \end{equation*} In the following we make use of the well-known inequality \begin{equation}\label{ineq_el} \sum_{a=a_0}^\infty b^{-a} {a + s-1 \choose s-1} \le b^{-a_0} {a_0 + s-1 \choose s-1} \left(1-\frac{1}{b}\right)^{-s}. \end{equation} A proof can for instance be found in \cite[Lemma~2.18]{mat}.
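Inequality \eqref{ineq_el} can also be tested numerically. The sketch below (helper names are ours) compares truncated tails of the series with the stated bound for a few values of $b$, $s$ and $a_0$; the small multiplicative slack allows for floating-point rounding, since for $s=1$ the inequality holds with equality:

```python
from math import comb

def tail(b, s, a0, terms=400):
    # truncated tail of sum_{a >= a0} b^{-a} * binom(a+s-1, s-1)
    return sum(b ** (-a) * comb(a + s - 1, s - 1) for a in range(a0, a0 + terms))

def bound(b, s, a0):
    # right-hand side of the inequality: b^{-a0} binom(a0+s-1, s-1) (1-1/b)^{-s}
    return b ** (-a0) * comb(a0 + s - 1, s - 1) * (1 - 1 / b) ** (-s)

for b in (2, 4):
    for s in (1, 2, 5):
        for a0 in (0, 3, 10):
            assert tail(b, s, a0) <= bound(b, s, a0) * (1 + 1e-9)
```

For example, with $b=2$, $s=2$, $a_0=3$ the tail equals $5/4$ while the bound is $2$.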
In the second case we have $2m_h-t+1-2|\boldsymbol{b}|_1 > 0$. From \eqref{2min_weight} we obtain that for $\boldsymbol{z}$ satisfying $\boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast$, we have $|\boldsymbol{z}|_1 \ge 2m_h-t+1-2|\boldsymbol{b}|_1$. Thus \begin{align*} \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{- |\boldsymbol{z}|_1 } \le & \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{|\boldsymbol{z}|_1 \ge 2m_h-t+1-2|\boldsymbol{b}|_1 }} 2^{- |\boldsymbol{z}|_1 } \\ = & \sum_{a= 2m_h-t+1 - 2|\boldsymbol{b}|_1}^\infty 2^{-a} {a + s-1 \choose s-1} \\ \ll_s & 2^{-2m_h + 2|\boldsymbol{b}|_1} {2m_h-2|\boldsymbol{b}|_1 + s-1 \choose s-1}. \end{align*} Thus we obtain \begin{align*} |c(\boldsymbol{\ell})| \ll_s & \frac{1}{N} \sum_{h=1}^r 2^{m_h - |\boldsymbol{b}|_1} 2^{-2(m_h-|\boldsymbol{b}|_1)_+} {2(m_h-|\boldsymbol{b}|_1)_+ + s-1 \choose s-1}, \end{align*} where we used that $t$ depends only on the dimension $s$. \end{proof} We can also estimate $|c(\boldsymbol{\ell}_1 \oplus \cdots \oplus \boldsymbol{\ell}_{q-1})|$. Note that for $\boldsymbol{\ell}_1,\ldots, \boldsymbol{\ell}_{q-1} \in B(\boldsymbol{b})$ and for $q$ even we have $\boldsymbol{\ell}_1 \oplus \cdots \oplus \boldsymbol{\ell}_{q-1} \in B(\boldsymbol{b})$. Therefore, by Lemma~\ref{lem_c_bound} we have \begin{equation}\label{cl_est2} |c(\boldsymbol{\ell}_1 \oplus \cdots \oplus \boldsymbol{\ell}_{q-1})| \ll_s \frac{1}{N} \sum_{h=1}^r 2^{m_h - |\boldsymbol{b}|_1} 2^{-2(m_h-|\boldsymbol{b}|_1)_+} {2(m_h-|\boldsymbol{b}|_1)_+ + s-1 \choose s-1}. \end{equation} We return to the initial aim of bounding \eqref{bound_sumb}. 
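The closure property used above — that the digit-wise sum of an odd number of elements of $B(\boldsymbol{b})$ stays in $B(\boldsymbol{b})$ — can be checked coordinate-wise, assuming (as in the definition of $B(\boldsymbol{b})$ earlier in the paper) that membership fixes $\mu_1(\ell_j) = b_j$ in each coordinate. A randomized sketch in one coordinate:

```python
import random

def mu1(k):
    # mu1(k) = a if 2^(a-1) <= k < 2^a, and mu1(0) = 0; equals the bit length
    return k.bit_length()

random.seed(1)
b = 5                                   # one coordinate of the vector b
B_b = [k for k in range(1 << 6) if mu1(k) == b]
q = 4                                   # an even q, so q - 1 = 3 factors are XOR-ed
for _ in range(200):
    ells = random.choices(B_b, k=q - 1)
    x = 0
    for e in ells:
        x ^= e                          # digit-wise addition "oplus" modulo 2
    assert mu1(x) == b                  # an odd XOR count keeps the leading bit
```

The point is simply that the most significant bits, all in position $b-1$, are XOR-ed an odd number of times and hence survive.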
Since the right-hand side of \eqref{cl_est2} depends only on $\boldsymbol{b}$ but is independent of $\boldsymbol{\ell}_1,\ldots, \boldsymbol{\ell}_{q-1}$, we obtain a bound on \eqref{bound_sumb} \begin{align}\label{expr_parenthesis} & \left|\sum_{\boldsymbol{\ell}_1,\ldots, \boldsymbol{\ell}_{q-1} \in B(\boldsymbol{b})} c(\boldsymbol{\ell}_1 \oplus \cdots \oplus \boldsymbol{\ell}_{q-1}) \prod_{i=1}^{q-1} c(\boldsymbol{\ell}_i) \right| \nonumber \\ \le & \sum_{\boldsymbol{\ell}_1,\ldots, \boldsymbol{\ell}_{q-1} \in B(\boldsymbol{b})} |c(\boldsymbol{\ell}_1 \oplus \cdots \oplus \boldsymbol{\ell}_{q-1})| \prod_{i=1}^{q-1} |c(\boldsymbol{\ell}_i)| \nonumber \\ \ll_s & \frac{1}{N} \sum_{h=1}^r 2^{m_h - |\boldsymbol{b}|_1} 2^{-2(m_h-|\boldsymbol{b}|_1)_+} {2(m_h-|\boldsymbol{b}|_1)_+ + s-1 \choose s-1} \left(\sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} |c(\boldsymbol{\ell})| \right)^{q-1} \nonumber \\ \ll_s & \frac{1}{N^q} \sum_{h=1}^r 2^{m_h - |\boldsymbol{b}|_1 -2(m_h-|\boldsymbol{b}|_1)_+} {2(m_h-|\boldsymbol{b}|_1)_+ + s-1 \choose s-1} \nonumber \\ & \left(\sum_{h=1}^r 2^{m_h - |\boldsymbol{b}|_1} \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{ - |\boldsymbol{z}|_1 } \right)^{q-1}. \end{align} The aim is now to obtain a bound on the expression in parenthesis in \eqref{expr_parenthesis}. We prove an auxiliary result first. 
\begin{lemma}\label{lem_aux} For $|\boldsymbol{b}|_1 \ge m_h$ we have \begin{equation*} \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{- |\boldsymbol{z}|_1 } \ll_s 2^{|\boldsymbol{b}|_1-m_h} \end{equation*} and for $|\boldsymbol{b}|_1 < m_h$ we have \begin{equation*} \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{- |\boldsymbol{z}|_1 } \ll_s 2^{-2m_h + 2|\boldsymbol{b}|_1} {2m_h-2|\boldsymbol{b}|_1 + s \choose s-1}. \end{equation*} \end{lemma} \begin{proof} Let $\boldsymbol{z} \in \mathbb{N}_0^s$ be fixed. We count the number of $\boldsymbol{\ell} \in B(\boldsymbol{b})$ such that $\boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1}} \rfloor \in \mathcal{D}_{m_h,s}^\ast$. Let $C_1^{(2m_h \times m_h)},\ldots, C_s^{(2m_h \times m_h)}$ denote the generating matrices of the digital net and let $\vec{c}^{(h)}_{j,k}$ denote the $k$th row of $C_j^{(2m_h \times m_h)}$ for $1 \le k \le 2m_h$ and let $\vec{c}^{(h)}_{j,k} = \vec{0}$ for $k > 2m_h$. Let $\boldsymbol{\ell} = (\ell_{1},\ldots, \ell_{s})$. Assume that $\ell_{j} = 2^{w_{j,1}-1} + 2^{w_{j,2}-1} + \cdots + 2^{w_{j,r_{j}}-1}$, where $w_{j,1} > w_{j,2} > \cdots > w_{j,r_{j}} > 0$ and also that $\ell_{j} = \ell_{j,0} + 2 \ell_{j,1} + \cdots + \ell_{j,w_{j,1}-1} 2^{w_{j,1}-1}$, where $\ell_{j, w_{j,1}-1} = 1$.
The condition $\boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1}} \rfloor \in \mathcal{D}_{m_h,s}^\ast$ translates into the system of equations \begin{eqnarray}\label{eq_sys} \vec{c}_{1,1}^{(h)} \ell_{1,0}+\cdots +\vec{c}_{1,w_{1,1}-1}^{(h)} \ell_{1,w_{1,1}-2}+ \vec{c}_{1,w_{1,1}}^{(h)} + && \nonumber \\ \vec{c}_{2,1}^{(h)} \ell_{2,0}+\cdots + \vec{c}_{2,w_{2,1}-1}^{(h)} \ell_{2,w_{2,1} -2}+ \vec{c}_{2,w_{2,1}}^{(h)} + && \nonumber \\ \vdots && \nonumber \\ \vec{c}_{s,1}^{(h)} \ell_{s,0}+\cdots +\vec{c}_{s,w_{s,1}-1}^{(h)} \ell_{s,w_{s,1}-2} + \vec{c}_{s,w_{s,1}}^{(h)} \hspace{0.3cm}& = & \vec{c}^{(h)}, \end{eqnarray} where the vector $\vec{c}^{(h)}$ on the right hand side is fixed by $\boldsymbol{z}$. Consider now the set of vectors $\{ \vec{c}_{j,k}^{(h)}: 1 \le k < w_{j,1}, 1 \le j \le s\}$ from \eqref{eq_sys}. The minimum weight bound \eqref{min_weight} implies that at least $m_h - t + 1$ of those vectors are linearly independent. Thus at most $(\mu_1(\boldsymbol{\ell}) - m_h + t-1)_+$ of the $\ell_{j,k}$ can be chosen freely, the remaining ones need to be fixed in order to obtain a solution of \eqref{eq_sys}. Hence it follows that the number of solutions of the linear system \eqref{eq_sys} is at most \begin{equation*} 2^{(\mu_1(\boldsymbol{\ell}) - m_h + t-1)_+} = 2^{(|\boldsymbol{b}|_1 - m_h + t-1)_+} \le 2^{t-1} 2^{ (|\boldsymbol{b}|_1 - m_h)_+} \ll_s 2^{(|\boldsymbol{b}|_1 - m_h)_+}, \end{equation*} where $(x)_+ = \max\{x, 0\}$ and where we used that $t$ depends only on $s$.
Using \eqref{zb_bound2} and the bound above we obtain \begin{align*} & \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} \sum_{\satop{\boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1}} \rfloor \in \mathcal{D}_{m_h,s}^\ast }} 2^{ - |\boldsymbol{z}|_1} \ll_s \sum_{\satop{\boldsymbol{z}\in \mathbb{N}_0^s}{|\boldsymbol{z}|_1 \ge 2m_h - t+1-2|\boldsymbol{b}|_1}} 2^{- |\boldsymbol{z}|_1 } 2^{(|\boldsymbol{b}|_1-m_h)_+}. \end{align*} If $|\boldsymbol{b}|_1 \ge m_h$, then the above sum is bounded by (using again that $t$ depends only on $s$) \begin{equation}\label{bound_lsb1} 2^{|\boldsymbol{b}|_1 -m_h +t-1} \sum_{\boldsymbol{z} \in \mathbb{N}_0^s} 2^{-|\boldsymbol{z}|_1} = 2^{|\boldsymbol{b}|_1 -m_h+t-1+s} \ll_s 2^{|\boldsymbol{b}|_1 - m_h}. \end{equation} If $|\boldsymbol{b}|_1 < m_h$, then the above sum is bounded by \begin{align}\label{bound_lsb2} \sum_{\satop{\boldsymbol{z}\in \mathbb{N}_0^s}{|\boldsymbol{z}|_1 \ge 2m_h -t+1-2|\boldsymbol{b}|_1}} 2^{- |\boldsymbol{z}|_1} \le & \sum_{a=2m_h -t+1-2|\boldsymbol{b}|_1}^\infty 2^{-a} {a+s-1\choose s-1} \nonumber \\ \ll_{s} & 2^{-2m_h + 2|\boldsymbol{b}|_1} {2m_h -2|\boldsymbol{b}|_1 + s - t \choose s-1} \nonumber \\ \ll_s & 2^{-2m_h + 2|\boldsymbol{b}|_1} {2m_h -2|\boldsymbol{b}|_1 + s \choose s-1}, \end{align} where we set ${n \choose k} = 0$ for $n < k$ and where we again used that $t$ depends only on $s$. \end{proof} In the following lemma we show a bound on the expression in parenthesis in \eqref{expr_parenthesis}. \begin{lemma}\label{lem_bound_sum} Let $N \in \mathbb{N}$ with $N \ge 2$ and $N = 2^{m_1} + 2^{m_2} + \cdots + 2^{m_r}$, where $m_1 > m_2 > \cdots > m_r \ge 0$. Set $m_0 = \infty$ and $m_{r+1} = 0$.
Then we have \begin{align*} \sum_{h=1}^r 2^{m_h - |\boldsymbol{b}|_1} \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{ - |\boldsymbol{z}|_1 } \ll_s r. \end{align*} \end{lemma} \begin{proof} For $\boldsymbol{b} \in \mathbb{N}_0^s$ let $1 \le h_0= h_0(|\boldsymbol{b}|_1) \le r+1$ be the integer which satisfies $m_{h_0-1} > |\boldsymbol{b}|_1 \ge m_{h_0}$. Using Lemma~\ref{lem_aux} we obtain \begin{align*} & \sum_{h=1}^r 2^{m_h - |\boldsymbol{b}|_1} \sum_{\boldsymbol{\ell} \in B(\boldsymbol{b})} \sum_{\satop{ \boldsymbol{z} \in \mathbb{N}_0^{s}}{ \boldsymbol{\ell} \oplus \lfloor 2^{\boldsymbol{z} + \nu(\boldsymbol{\ell}) - \boldsymbol{1} } \rfloor \in \mathcal{D}_{m_h,s}^\ast}} 2^{ - |\boldsymbol{z}|_1 } \\ \ll_s & \sum_{h=1}^{h_0-1} 2^{|\boldsymbol{b}|_1-m_h} {2m_h-2|\boldsymbol{b}|_1 + s \choose s-1} + \sum_{h=h_0}^{r} 1. \end{align*} We now estimate the sum over $1 \le h < h_0$, which is essentially a sum over the exponents $\{m_1 - |\boldsymbol{b}|_1, m_2 - |\boldsymbol{b}|_1, \ldots, m_{h_0-1} - |\boldsymbol{b}|_1\} \subseteq \mathbb{N}$. We enlarge this set to all of $\mathbb{N}$, that is, \begin{equation*} \sum_{h=1}^{h_0-1} 2^{|\boldsymbol{b}|_1-m_h} {2m_h-2|\boldsymbol{b}|_1 +s \choose s-1} \le \sum_{a=1}^\infty 2^{-a} {2a + s \choose s-1} \ll_s 1. \end{equation*} Thus the result follows. \end{proof} We can now obtain a bound on $\int |\sigma_{\boldsymbol{b}}|^q$. \begin{lemma}\label{lem_bound_int_sigma} Let $N \in \mathbb{N}$ with $N \ge 2$ have dyadic expansion $N = 2^{m_1} + 2^{m_2} + \cdots + 2^{m_r}$, where $m_1 > m_2 > \cdots > m_r \ge 0$.
Then \begin{align*} \int_{[0,1]^s} |\sigma_{\boldsymbol{b}}(\boldsymbol{\theta}) |^{q} \,\mathrm{d} \boldsymbol{\theta} \ll_{s,q} & \frac{r^{q-1} }{N^q} \sum_{h=1}^r 2^{m_h-|\boldsymbol{b}|_1- 2(m_h-|\boldsymbol{b}|_1)_+} {2(m_h-|\boldsymbol{b}|_1)_+ + s-1 \choose s-1}. \end{align*} \end{lemma} \begin{proof} The result follows by combining \eqref{bound_sumb}, \eqref{expr_parenthesis} and Lemma~\ref{lem_bound_sum}. \end{proof} The following lemma completes the proof of Theorem~\ref{thm2}. \begin{lemma}\label{lem_Lq_bound_weak} For any $N \in \mathbb{N}$ with $N \ge 2$ and $N = 2^{m_1} +2^{m_2} + \cdots + 2^{m_r}$, where $m_1 > m_2 > \cdots > m_r \ge 0$, the first $N$ points $\mathcal{P}_{N,s}$ of the sequence $\mathcal{S}_s$ satisfy \begin{equation*} \mathcal{L}_q(\mathcal{P}_{N,s}) \ll_{s, q} \frac{r^{3/2-1/q} }{N} \sqrt{\sum_{v=1}^r m_v^{s-1}}, \end{equation*} for all even integers $q$ with $2 \le q < \infty$. \end{lemma} \begin{proof} Using Lemma~\ref{lem_1} and Lemma~\ref{lem_bound_int_sigma} we obtain \begin{align}\label{ineq_Lq_bound} \mathcal{L}_q^2(\mathcal{P}_{N,s}) \ll_{s,q} & \frac{r^{2(1-1/q)} }{N^2} \sum_{\boldsymbol{b} \in \mathbb{N}_0^s} \left(\sum_{h=1}^r 2^{m_h-|\boldsymbol{b}|_1- 2(m_h-|\boldsymbol{b}|_1)_+} {2(m_h-|\boldsymbol{b}|_1)_+ + s-1 \choose s-1} \right)^{2/q}. \end{align} It remains to estimate the sum over $\boldsymbol{b}$. We have \begin{align*} & \sum_{\boldsymbol{b} \in \mathbb{N}_0^s} \left(\sum_{h=1}^r 2^{m_h-|\boldsymbol{b}|_1 - 2(m_h-|\boldsymbol{b}|_1)_+} {2(m_h-|\boldsymbol{b}|_1)_+ + s-1 \choose s-1} \right)^{2/q} \\ = & \sum_{a=0}^{\infty} {a+s-1 \choose s-1} \left( \sum_{h=1}^r 2^{m_h-a-2(m_h-a)_+} {2 (m_h-a)_+ + s-1 \choose s-1} \right)^{2/q}. \end{align*} We split the above sum into the part where $a \ge m_1$ and where $0 \le a < m_1$. For $a \ge m_1$ we have ${2(m_h-a)_+ + s-1 \choose s-1} = 1$ and \begin{equation*} \sum_{h=1}^r 2^{m_h-a- 2(m_h-a)_+} = \sum_{h=1}^r 2^{m_h-a} = 2^{-a} \sum_{h=1}^r 2^{m_h} = 2^{-a} N.
\end{equation*} Thus we can use \eqref{ineq_el} to obtain \begin{align*} & \sum_{a=m_1}^{\infty} {a+s-1 \choose s-1} \left( \sum_{h=1}^r 2^{m_h-a-2(m_h-a)_+} {2 (m_h-a)_+ + s-1 \choose s-1} \right)^{2/q} \\ \le & \sum_{a=m_1 }^\infty N^{2/q} 2^{-2a/q} {a+s-1 \choose s-1} \\ \ll_s & m_1^{s-1}. \end{align*} For $0 \le a < m_1$ we use Jensen's inequality, which states that for a sequence of nonnegative real numbers $(a_j)$ and any real number $0 < \lambda \le 1$ we have $(\sum_j a_j)^\lambda \le \sum_j a_j^\lambda$. Since $2/q \le 1$ we have \begin{align*} & \sum_{a=0}^{m_1-1} {a+s-1 \choose s-1} \left( \sum_{h=1}^r 2^{m_h-a- 2(m_h-a)_+} {2(m_h-a)_+ + s-1 \choose s-1} \right)^{2/q} \\ \le & \sum_{h=1}^r \sum_{a=0}^{m_1-1} {a+s-1 \choose s-1} 2^{\frac{2}{q} [m_h-a- 2(m_h-a)_+ ] } {2(m_h-a)_+ + s-1 \choose s-1}^{2/q}. \end{align*} We split the sum over $a$ into the parts $m_{v+1} \le a < m_v$. Let $m_{r+1} = 0$. Thus the above sum can be written as \begin{equation*} \sum_{h=1}^r \sum_{v=1}^r \sum_{a=m_{v+1}}^{m_v-1} {a+s-1 \choose s-1} 2^{\frac{2}{q} [m_h-a- 2(m_h-a)_+ ] } {2(m_h-a)_+ + s-1 \choose s-1}^{2/q}. \end{equation*} For $v \ge h$ we have $a \le m_h$. We can use \eqref{ineq_el} again to deduce \begin{align*} & \sum_{a=m_{v+1}}^{m_v-1} {a+s-1 \choose s-1} 2^{\frac{2}{q} [m_h-a- 2(m_h-a)_+ ] } {2(m_h-a)_+ + s-1 \choose s-1}^{2/q} \\ \ll_s & m_v^{s-1} \sum_{a=m_{v+1}}^{m_v-1} 2^{\frac{2}{q} [- (m_h-a) ] } {2(m_h-a) + s-1 \choose s-1}^{2/q} \\ \le & m_v^{s-1} \sum_{c=0}^\infty 2^{-2c/q} {2c + s-1 \choose s-1}^{2/q} \\ \ll_{s,q} & m_v^{s-1}. \end{align*} For $v < h$ we have $a > m_h$. 
We can use \eqref{ineq_el} again to deduce \begin{align*} & \sum_{a=m_{v+1}}^{m_v-1} {a+s-1 \choose s-1} 2^{\frac{2}{q} [m_h-a- 2(m_h-a)_+ ] } {2(m_h-a)_+ + s-1 \choose s-1}^{2/q} \\ \le & \sum_{a=m_{v+1}}^{m_v-1} {a+s-1 \choose s-1} 2^{\frac{2}{q} [m_h-a] } \\ \le & \sum_{a=m_{v+1}}^\infty 2^{\frac{2}{q} [m_h-a]} {a+s-1 \choose s-1} \\ \ll_s & 2^{\frac{2}{q} [m_h-m_{v+1}]} {m_{v+1} + s-1 \choose s-1}. \end{align*} Thus we have \begin{align*} & \sum_{a=0}^{m_1-1} {a+s-1 \choose s-1} \left( \sum_{h=1}^r 2^{m_h-a- 2(m_h-a)_+} {2(m_h-a)_+ + s-1 \choose s-1} \right)^{2/q} \\ \ll_{s,q} & \sum_{h=1}^r \left(\sum_{v=h}^r m_v^{s-1} + \sum_{v=1}^{h-1} 2^{\frac{2}{q} [m_h-m_{v+1}]} {m_{v+1} + s-1 \choose s-1} \right) \\ \le & r \sum_{v=1}^r m_v^{s-1} + \sum_{h=1}^r \sum_{c=0}^\infty 2^{-\frac{2}{q} c} {c+m_h+s-1 \choose s-1} \\ \ll_s & r \sum_{v=1}^r m_v^{s-1}. \end{align*} Substituting this result into \eqref{ineq_Lq_bound} we obtain \begin{equation*} \mathcal{L}_q^2(\mathcal{P}_{N,s}) \ll_{s,q} \frac{r^{3-2/q}}{N^2} \sum_{v=1}^r m_v^{s-1}, \end{equation*} which implies the result. \end{proof} \begin{remark} Assume that $q$ is an even integer with $2 \le q < \infty$. A slight improvement of \eqref{Lq_seq} can be obtained using the interpolation of norms \begin{equation*} \mathcal{L}_z(\mathcal{P}_{N,s}) \le (\mathcal{L}_2(\mathcal{P}_{N,s}) )^\theta (\mathcal{L}_q(\mathcal{P}_{N,s}) )^{1-\theta}, \end{equation*} where $0 \le \theta \le 1$ and $$\frac{1}{z} = \frac{\theta}{2} + \frac{1-\theta}{q}.$$ This elementary inequality can be shown by applying H\"older's inequality to $\int |\delta(\mathcal{P}_{N,s}, \cdot)|^{z \theta} |\delta(\mathcal{P}_{N,s}, \cdot)|^{z (1-\theta)}$. 
Since the results of this paper also apply to the sequences constructed in \cite{DP12}, we obtain that one can explicitly construct infinite-dimensional sequences for which the projection onto the first $s$ coordinates of the first $N$ points satisfies \begin{equation*} \mathcal{L}_z(\mathcal{P}_{N,s}) \ll_{z,s} \frac{(\log N)^{s/2 + 3/2-1/z}}{N} \end{equation*} for all $N \ge 2$ and $2 \le z < \infty$. Note that $2 \le z < \infty$ can be any real number. \end{remark} \noindent{\bf Author's Address:}\\ \noindent Josef Dick, School of Mathematics and Statistics, The University of New South Wales, Sydney 2052, Australia. Email: [email protected] \\ \end{document}
\begin{document} \title{A Recipe for State-and-Effect Triangles} \begin{abstract} In the semantics of programming languages one can view programs as state transformers, or as predicate transformers. Recently the author has introduced `state-and-effect' triangles which capture this situation categorically, involving an adjunction between state- and predicate-transformers. The current paper exploits a classical result in category theory, part of Jon Beck's monadicity theorem, to systematically construct such a state-and-effect triangle from an adjunction. The power of this construction is illustrated in many examples, covering many monads occurring in program semantics, including (probabilistic) power domains. \end{abstract} \section{Introduction}\label{IntroSec} In program semantics three approaches can be distinguished. \begin{itemize} \item Interpreting programs themselves as morphisms in certain categories. Composition in the category then corresponds to sequential composition. Parallel composition may be modeled via tensors $\otimes$. Since~\cite{Moggi91a} the categories involved are often Kleisli categories $\Kl(T)$ of a monad $T$, where the monad $T$ captures a specific form of computation: deterministic, non-deterministic, probabilistic, \textit{etc}. \item Interpreting programs via their actions on states, as \emph{state transformers}. For instance, in probabilistic programming the states may be probabilistic distributions over certain valuations (mapping variables to values). Execution of a program changes the state, by adapting the probabilities of valuations. The state spaces often have algebraic structure, and take the form of Eilenberg-Moore categories $\EM(T)$ of a monad $T$. \item Interpreting programs via their actions on predicates, as \emph{predicate transformers}. The predicates involved describe what holds at a specific point. 
This validity may also be quantitative (or `fuzzy'), describing that a predicate holds with a certain probability in the unit interval $[0,1]$. Execution of a program may then adapt the validity of predicates. A particular form of semantics of this sort is weakest precondition computation~\cite{DijkstraS90}. In the context of (coalgebraic) modal logic, these predicate transformers appear as modal operators. \end{itemize} A systematic picture of these three approaches has emerged in categorical language, using triangles of the form described below, see~\cite{Jacobs15d}, and also~\cite{Jacobs13a,Jacobs15a,ChoJWW15b}. \begin{equation} \label{ComputationTriangle} \qquad\vcenter{\xymatrix@R-1.5pc{ \ovalbox{\textbf{Heisenberg}} & & \ovalbox{\textbf{Schr\"odinger}} \\ \llap{$\op{\Cat{Log}}=$}{\left(\begin{array}{c} \text{predicate} \\[-.3em] \text{transformers} \end{array}\right)}\ar@/^1em/[rr] & \top & {\left(\begin{array}{c} \text{state} \\[-.3em] \text{transformers} \end{array}\right)}\ar@/^1em/[ll] \\ \\ & \Big(\text{computations}\Big)\ar[uul]^(0.45){\ensuremath{\mathrm{Pred}}}\ar[uur]_(0.45){\ensuremath{\mathrm{Stat}}} & }} \end{equation} The three nodes in this diagram represent categories of which only the morphisms are described. The arrows between these nodes are functors, where the two arrows $\rightleftarrows$ at the top form an adjunction. The two triangles involved should commute. In the case where the two up-going `predicate' and `state' functors $\ensuremath{\mathrm{Pred}}$ and $\ensuremath{\mathrm{Stat}}$ in~\eqref{ComputationTriangle} are full and faithful, we have three equivalent ways of describing computations. On morphisms, the predicate functor $\ensuremath{\mathrm{Pred}}$ in~\eqref{ComputationTriangle} yields what is called substitution in categorical logic, but what amounts to a weakest precondition operation in program semantics, or a modal operator in programming logic.
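For a first concrete reading of this triangle, consider nondeterministic programs, viewed as Kleisli maps $X \rightarrow \powersetsymbol(Y)$ for the powerset monad. The following sketch (an illustration of mine, not code or notation from the paper; the program $f$ is a hypothetical example) shows the two transformer directions in miniature: states are pushed forward along a program, predicates are pulled back as weakest preconditions.

```python
# Illustration (mine, not the paper's): the triangle in miniature for
# nondeterministic programs, i.e. Kleisli maps f : X -> P(Y).

def stat(f):
    # State transformer Stat(f) : P(X) -> P(Y), the direct image
    # ("Schrodinger" direction: push sets of states forward).
    return lambda P: frozenset(y for x in P for y in f(x))

def pred(f, X):
    # Predicate transformer Pred(f) : P(Y) -> P(X), the weakest
    # precondition ("Heisenberg" direction: pull predicates back).
    return lambda Q: frozenset(x for x in X if f(x) <= Q)

# Hypothetical example program on X = Y = {0, 1, 2}: keep or increment.
X = frozenset({0, 1, 2})
f = lambda x: frozenset({x, min(x + 1, 2)})
```

The adjunction at the top of the triangle then appears as the Galois connection $\mathrm{Stat}(f)(P) \subseteq Q$ if and only if $P \subseteq \mathrm{Pred}(f)(Q)$.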
The upper category on the left is of the form $\op{\Cat{Log}}$, where $\Cat{Log}$ is some category of logical structures. The opposite category $\op{(-)}$ is needed because predicate transformers operate in the reverse direction, taking a postcondition to a precondition. In a setting of quantum computation this translation back-and-forth $\rightleftarrows$ in~\eqref{ComputationTriangle} is associated with the different approaches of Heisenberg (logic-based, working backwards) and Schr\"odinger (state-based, working forwards), see \textit{e.g.}~\cite{HeinosaariZ12}. In quantum foundations one speaks of the duality between states and effects (predicates). Since the above triangles first emerged in the context of semantics of quantum computation~\cite{Jacobs15d}, they are sometimes referred to as `state-and-effect' triangles. In certain cases the adjunction $\rightleftarrows$ in~\eqref{ComputationTriangle} forms --- or may be restricted to --- an equivalence of categories, yielding a duality situation. This shows the importance of duality theory in program semantics and logic; this topic has a long history, going back to~\cite{Abramsky91}. In~\cite{Jacobs15d} it is shown that in the presence of relatively weak structure in a category $\cat{B}$, a diagram of the form~\eqref{ComputationTriangle} can be formed, with $\cat{B}$ as base category of computations, with predicates forming effect modules (see below) and with states forming convex sets. A category with this relatively weak structure is called an \emph{effectus}, see~\cite{ChoJWW15b}. The main contribution of this paper is a ``new'' way of generating state-and-effect triangles, namely from adjunctions. We write the word `new' between quotes, because the underlying category theory uses a famous result of Jon Beck, and is not new at all. What the paper contributes is mainly a new perspective: it reorganises the work of Beck in such a way that an appropriate triangle appears, see Section~\ref{MonadSec}.
The rest of the paper is devoted to illustrations of this recipe for triangles. These include Boolean and probabilistic examples, see Sections~\ref{TwoSec} and~\ref{UnitSec} respectively. The Boolean examples are all obtained from an adjunction using ``homming into $2 =\{0,1\}$'', whereas the probabilistic (quantitative) examples all arise from ``homming into $[0,1]$'', where $[0,1]$ is the unit interval of probabilities. In between we consider Plotkin-style constructions via ``homming into 3'', where $3 = \{0, \mathord{\bowtie}, 1\}$ is a three-element ordered algebra. The series of examples in this paper involves many mathematical structures, ranging from Boolean algebras to compact Hausdorff spaces and $C^*$-algebras. It is impossible to explain all these notions in detail here. Hence the reader is assumed to be reasonably familiar with these structures. It does not matter so much if some of the examples involve unfamiliar mathematical notions. The structure of these sections~\ref{TwoSec}, \ref{ThreeSec} and~\ref{UnitSec} is clear enough --- using 2, $3$ and $[0,1]$ as dualising object, respectively --- and it does not matter if some of the examples are skipped. An exception is made for the notions of effect algebra and effect module. They are explicitly explained (briefly) in the beginning of Section~\ref{UnitSec} because they play such a prominent role in quantitative logic. The examples involve many adjunctions that are known in the literature. Here they are displayed in triangle form. 
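As a minimal executable counterpart to the Kleisli-category view of computations (my own sketch, not taken from the paper, which works purely categorically), here is the Kleisli structure of the finite powerset monad, under which nondeterministic programs compose sequentially; the programs `f` and `g` are hypothetical examples.

```python
# Sketch (not from the paper): the powerset monad on finite sets,
# whose Kleisli maps X -> P(Y) model nondeterministic programs.

def unit(x):
    # Unit eta : X -> P(X), returning the singleton {x}.
    return frozenset({x})

def kleisli(g, f):
    # Kleisli composition: (g after f)(x) = union of g(y) for y in f(x),
    # i.e. mu . P(g) . f, with mu the big-union multiplication.
    return lambda x: frozenset(z for y in f(x) for z in g(y))

# Two hypothetical programs: f may keep or increment, g doubles.
f = lambda x: frozenset({x, x + 1})
g = lambda y: frozenset({2 * y})
```

Here `kleisli(g, f)(1)` yields `{2, 4}`, and the unit laws `kleisli(f, unit) = f = kleisli(unit, f)` express that $\eta$ provides the identities of $\Kl(\powersetsymbol)$.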
In several cases monads arise that are familiar in coalgebraic research, like the neighbourhood monad $\mathcal{N}$ in Subsection~\ref{SetsSetsSubsec}, the monotone neighbourhood monad $\mathcal{M}$ in Subsection~\ref{SetsPosetsSubsec}, the Hoare power domain monad $\mathcal{H}$ in Subsection~\ref{DcpoCLSubsec}, the Smyth power domain monad $\mathcal{S}$ in Subsection~\ref{DcpoPreFrmSubsec}, the infinite distribution monad $\distributionsymbol_{\infty}$ in Subsection~\ref{SetsDcEModSubsec}, the Giry monad $\mathcal{G}$ in Subsection~\ref{MeaswEModSubsec}, and the valuation monad $\mathcal{V}$ in Subsection~\ref{DcpoDcEModSubsec}. Also we will see several examples where we have pushed the recipe to a limit, and where the monad involved is simply the identity. This paper extends the earlier conference version~\cite{Jacobs15b} with several order-theoretic examples, notably using complete lattices and directed complete partial orders (for various power domains). \section{A basic result about monads}\label{MonadSec} We assume that the reader is familiar with the categorical concept of a monad $T$, and with its double role, describing a form of computation, via the associated Kleisli category $\Kl(T)$, and describing algebraic structure, via the category $\EM(T)$ of Eilenberg-Moore algebras. The following result is a basic part of the theory of monads, see \textit{e.g.}\xspace~\cite[Prop.~3.15 and Exercise (KEM)]{BarrW85} or~\cite[Prop.~6.5 and~6.7]{LambekS86} or~\cite[Thm.~20.42]{AdamekHS90}, and describes the initiality and finality of the Kleisli category and Eilenberg-Moore category as `adjunction resolutions' giving rise to a monad. \begin{theorem} \label{AdjTriangleThm} Consider an adjunction $F \dashv G$ with induced monad $T = GF$. 
Then there are `comparison' functors $\Kl(T) \rightarrow \cat{A} \rightarrow \EM(T)$ in a diagram: \begin{equation} \label{ResolutionDiag} \vcenter{\xymatrix@C+1pc@R-0pc{ \Kl(T)\ar@/^1.3ex/[rr]^-{L}\ar@/^1ex/[drr] & & \cat{A}\ar@/^1.5ex/[rr]^-{K}\ar@/^1.1ex/[d]_-{\dashv}^-{G} & \top & \EM\rlap{$(T)$}\ar@/^1ex/[dll]_(0.42)[@]{\top}\ar@{.>}@/^1.2ex/[ll]^(0.7){M} \\ & & \cat{B}\ar@/^1ex/[urr]\ar@/^1.1ex/[u]^-{F}\ar@(dl,dr)_{T=GF} \ar@/^1ex/[ull]_(0.45)[@]{\raisebox{-3em}{$\scriptstyle\bot$}} & }} \end{equation} \noindent where the functor $L \colon \Kl(T) \rightarrow \cat{A}$ is full and faithful. If the category $\cat{A}$ has coequalisers (of reflexive pairs), then $K$ has a left adjoint $M$, as indicated via the dotted arrow, satisfying $MKL \cong L$.
This functor $L$ is full and faithful because there is a bijective adjoint correspondence: $$\begin{prooftree} \xymatrix{ F(X)\ar[r] & F(Y)} \Justifies \xymatrix{ X\ar[r] & GF(Y) \rlap{$\;=T(Y)$}} \end{prooftree}$$ \auxproof{ We check that this is a functor indeed: $$\begin{array}{rcl} L(\idmap[X]) & = & L(\eta_{X}) \\ & = & \varepsilon_{F(X)} \mathrel{\circ} F(\eta_{X}) \\ & = & \idmap[F(X)] \\ L(g \klafter f) & = & \varepsilon \mathrel{\circ} F(\mu \mathrel{\circ} T(g) \mathrel{\circ} f) \\ & = & \varepsilon \mathrel{\circ} FG(\epsilon) \mathrel{\circ} FGF(g) \mathrel{\circ} F(f) \\ & = & \varepsilon \mathrel{\circ} F(g) \mathrel{\circ} \varepsilon \mathrel{\circ} F(f) \\ & = & L(g) \mathrel{\circ} L(f). \end{array}$$ } The functor $K \colon \cat{A} \rightarrow \EM(T)$ is defined as: $$\begin{array}{rclcrcl} K(A) & = & \ensuremath{\left(\xy (0,4)*{GFG(A)}; (0,-4)*{G(A)}; {\ar^{G(\varepsilon_{A})\!\!\!} (0,2); (0,-2)}; \endxy\right)} & \qquad\mbox{and}\qquad & K\big(A\stackrel{f}{\rightarrow} B\big) & = & G(f). \end{array}$$ \auxproof{ $$\begin{array}{rcl} G(\varepsilon_{A}) \mathrel{\circ} \eta_{G(A)} & = & \idmap \\ G(\varepsilon_{A}) \mathrel{\circ} \mu_{G(A)} & = & G(\varepsilon_{A}) \mathrel{\circ} G(\varepsilon_{FG(A)}) \\ & = & G(\varepsilon_{A}) \mathrel{\circ} GFG(\varepsilon_{A}) \\ & = & G(\varepsilon_{A}) \mathrel{\circ} TG(\varepsilon_{A}). \end{array}$$ \noindent For $f\colon A \rightarrow B$ we have that $G(f)$ is a map of algebras, by naturality: $$\begin{array}{rcccccl} G(\varepsilon_{B}) \mathrel{\circ} TG(f) & = & G(\varepsilon_{B} \mathrel{\circ} FG(f)) & = & G(f \mathrel{\circ} \varepsilon_{A}) & = & G(f) \mathrel{\circ} G(\varepsilon_{A}). \end{array}$$ } \noindent We leave it to the reader to see that $K$ is well-defined. On an object $X\in\Kl(T)$, that is, on $X\in\cat{B}$, the result $KL(X)$ is the multiplication $\mu_{X} = G(\varepsilon_{FX})$ of the monad $T = GF$. 
For a Kleisli map $f\colon X \rightarrow T(Y)$ the map $KL(f)$ is Kleisli extension: $$\begin{array}{rcccl} KL(f) & = & G(\varepsilon_{F(Y)} \mathrel{\circ} F(f)) & = & \mu_{Y} \mathrel{\circ} T(f) \,\colon\, T(X) \longrightarrow T(Y). \end{array}$$ Assume now that the category $\Cat{A}$ has coequalisers. For an algebra $a\colon T(X) \rightarrow X$ let $M(X,a)$ be the (codomain of the) coequaliser in: $$\vcenter{\xymatrix{ \llap{$FG$}F(X)\ar@/^1ex/[rr]^-{F(a)}\ar@/_1ex/[rr]_-{\varepsilon_{F(X)}} & & F(X)\ar@{->>}[r]^-{c} & M(X,a) }} $$ \noindent It is not hard to see that there is a bijective correspondence: $$\begin{prooftree} \xymatrix{M(X,a)\ar[r]^-{f} & A \rlap{\hspace*{8.4em}in $\Cat{A}$}} \Justifies \xymatrix{\ensuremath{\left(\xy (0,4)*{T(X)}; (0,-4)*{X}; {\ar^{a} (0,2); (0,-2)}; \endxy\right)} \ar[r]_-{g} & \ensuremath{\left(\xy (0,4)*{TG(A)}; (0,-4)*{G(A)}; {\ar^{G(\varepsilon_{A})\!\!\!} (0,2); (0,-2)}; \endxy\right)}\rlap{$=K(A)$} \rlap{\hspace*{6em}in $\EM(T)$}} \end{prooftree}\hspace*{5em}$$ \auxproof{ \noindent This is done as follows. \begin{itemize} \item Given $f\colon M(X,a) \rightarrow A$ as above, take $\overline{f} = G(f \mathrel{\circ} c) \mathrel{\circ} \eta_{X} \colon X \rightarrow G(A)$. We prove that $\overline{f}$ is an algebra homomorphism by using the triangular identities and the above coequaliser~\eqref{AdjTriangleCoeq}: $$\begin{array}{rcl} G(\varepsilon_{A}) \mathrel{\circ} T(\overline{f}) & = & G\big(\varepsilon_{A} \mathrel{\circ} FG(f \mathrel{\circ} c) \mathrel{\circ} F(\eta_{X})\big) \\ & = & G\big(f \mathrel{\circ} c \mathrel{\circ} \varepsilon_{F(X)} \mathrel{\circ} F(\eta_{X})\big) \\ & = & G\big(f \mathrel{\circ} c\big) \\ & = & G\big(f \mathrel{\circ} c \mathrel{\circ} \varepsilon_{F(X)}\big) \mathrel{\circ} \eta_{GF(X)} \\ & = & G\big(f \mathrel{\circ} c \mathrel{\circ} F(a)\big) \mathrel{\circ} \eta_{GF(X)} \\ & = & G(f \mathrel{\circ} c) \mathrel{\circ} \eta_{X} \mathrel{\circ} a \\ & = & \overline{f} \mathrel{\circ} a. 
\end{array}$$ \item Conversely, given an algebra map $g\colon X \rightarrow G(A)$, its transpose $\varepsilon_{A} \mathrel{\circ} F(g)$ coequalises the two parallel maps in~\eqref{AdjTriangleCoeq}: $$\begin{array}{rcl} \big(\varepsilon_{A} \mathrel{\circ} F(g)\big) \mathrel{\circ} F(a) & = & \varepsilon_{A} \mathrel{\circ} F(g \mathrel{\circ} a) \\ & = & \varepsilon_{A} \mathrel{\circ} F(G(\varepsilon_{A}) \mathrel{\circ} T(g)) \\ & = & \big(\varepsilon_{A} \mathrel{\circ} F(g)\big) \mathrel{\circ} \varepsilon_{F(X)}. \end{array}$$ \noindent Hence there is a unique map $\overline{g} \colon M(X,a) \rightarrow X$, with $\overline{g} \mathrel{\circ} c = \varepsilon_{A} \mathrel{\circ} F(g)$. \end{itemize} Clearly, $\smash{\overline{\overline{f}} = f}$ and $\smash{\overline{\overline{g}} = g}$. $$\begin{array}{rcl} \overline{\overline{f}} \mathrel{\circ} c & = & \varepsilon_{A} \mathrel{\circ} F(\overline{f}) \\ & = & \varepsilon_{A} \mathrel{\circ} FG(f \mathrel{\circ} c) \mathrel{\circ} F(\eta_{X}) \\ & = & f \mathrel{\circ} c \mathrel{\circ} \varepsilon_{F(X)} \mathrel{\circ} F(\eta_{X}) \\ & = & f \mathrel{\circ} c \\ \overline{\overline{g}} & = & G(\overline{g} \mathrel{\circ} c) \mathrel{\circ} \eta_{X} \\ & = & G(\varepsilon_{A} \mathrel{\circ} F(g)) \mathrel{\circ} \eta_{X} \\ & = & G(\varepsilon_{A}) \mathrel{\circ} \eta_{G(A)} \mathrel{\circ} g \\ & = & g. \end{array}$$ } \noindent What remains is to show $MKL \cong L$. This follows because for each $X\in\cat{B}$, the following diagram is a coequaliser in $\Cat{A}$. $$\vcenter{\xymatrix@C+1pc{ \llap{$FG$}FGF(X)\ar@/^1ex/[rr]^-{F(\mu_{X}) = FG(\varepsilon_{F(X)})} \ar@/_1ex/[rr]_-{\varepsilon_{FGF(X)}} & & FGF(X)\ar@{->>}[r]^-{\varepsilon_{F(X)}} & F(X) }}$$ \auxproof{ We have: $$\begin{array}{rcccl} \varepsilon_{F(X)} \mathrel{\circ} F(\mu_{X}) & = & \varepsilon_{F(X)} \mathrel{\circ} FG(\varepsilon_{F(X)}) & = & \varepsilon_{F(X)} \mathrel{\circ} \varepsilon_{FGF(X)}. 
\end{array}$$ \noindent And if $f\colon FGF(X) \rightarrow A$ satisfies $f \mathrel{\circ} F(\mu_{X}) = f \mathrel{\circ} \varepsilon_{FGF(X)}$, then $\overline{f} = f \mathrel{\circ} F(\eta_{X}) \colon F(X) \rightarrow A$ satisfies: $$\begin{array}{rcl} \overline{f} \mathrel{\circ} \varepsilon_{F(X)} & = & f \mathrel{\circ} F(\eta_{X}) \mathrel{\circ} \varepsilon_{F(X)} \\ & = & f \mathrel{\circ} \varepsilon_{FGF(X)} \mathrel{\circ} FGF(\eta_{X}) \\ & = & f \mathrel{\circ} F(\mu_{X}) \mathrel{\circ} FT(\eta_{X}) \\ & = & f. \end{array}$$ \noindent If also $g\colon F(X) \rightarrow A$ satisfies $g \mathrel{\circ} \varepsilon_{F(X)} = f$, then: $$\begin{array}{rcccccl} \overline{f} & = & f \mathrel{\circ} F(\eta_{X}) & = & g \mathrel{\circ} \varepsilon_{F(X)} \mathrel{\circ} F(\eta_{X}) & = & g. \end{array}$$ } \noindent Hence the codomain $MKL(X)$ of the coequaliser of $FKL(X) = FG(\varepsilon_{F(X)})$ and the counit map $\varepsilon_{FGF(X)}$ is isomorphic to $F(X) = L(X)$. Proving naturality of $MKL \cong L$ (w.r.t.\ Kleisli maps) is a bit of work, but is essentially straightforward. \hspace*{\fill}$\QEDbox$ \auxproof{ For $f\colon X \rightarrow GF(Y)$, write $f_{*} = \mu_{Y} \mathrel{\circ} GF(f) = G(\varepsilon_{F(Y)} \mathrel{\circ} F(f)) \colon GF(X) \rightarrow GF(Y)$ for the Kleisli extension. We get $MK(f) = M(f_{*}) = P(f)$ because the rectangle on the right below commutes.
$$\xymatrix@C+1pc{ \llap{$FG$}FGF(X)\ar@/^1ex/[rr]^-{F(\mu_{X}) = FG(\varepsilon_{F(X)})} \ar@/_1ex/[rr]_-{\varepsilon_{FGF(X)}}\ar[d]_{FGF(f_{*})} & & FGF(X)\ar@{->>}[r]^-{\varepsilon_{F(X)}}\ar[d]^{F(f_{*})} & F(X)\ar[d]^{P(f) = \varepsilon_{F(Y)} \mathrel{\circ} F(f)} \\ \llap{$FG$}FGF(Y)\ar@/^1ex/[rr]^-{F(\mu_{Y}) = FG(\varepsilon_{F(Y)})} \ar@/_1ex/[rr]_-{\varepsilon_{FGF(Y)}} & & FGF(Y)\ar@{->>}[r]^-{\varepsilon_{F(Y)}} & F(Y) }$$ \noindent Indeed, $$\begin{array}{rcl} P(f) \mathrel{\circ} \varepsilon_{F(X)} & = & \varepsilon_{F(Y)} \mathrel{\circ} F(f) \mathrel{\circ} \varepsilon_{F(X)} \\ & = & \varepsilon_{F(Y)} \mathrel{\circ} FG(\varepsilon_{F(Y)} \mathrel{\circ} F(f)) \\ & = & \varepsilon_{F(Y)} \mathrel{\circ} F(f_{*}). \end{array}$$ } \end{myproof} An essential `aha moment' underlying this paper is that the above result can be massaged into triangle form. This is what happens in the next result, to which we will refer as the `triangle corollary'. It is the `recipe' that occurs in the title of this paper. \begin{corollary} \label{TriangleCor} Consider an adjunction $F\dashv G$, where $F$ is a functor $\cat{B} \rightarrow \cat{A}$, the category $\cat{A}$ has coequalisers, and the induced monad on $\cat{B}$ is written as $T = GF$. Diagram~\eqref{ResolutionDiag} then gives rise to a triangle as below, where both up-going functors are full and faithful. \begin{equation} \label{TriangleCorDiag} \vcenter{\xymatrix@C-1pc{ \cat{A}\ar@/^0.7em/[rr]^-{K} & \top & \EM\rlap{$(T)$}\ar@/^0.6em/[ll]^-{M} \\ & \Kl(T)\ar[ul]^{\ensuremath{\mathrm{Pred}} = L}\ar[ur]_{KL = \ensuremath{\mathrm{Stat}}} & }} \end{equation} \noindent This triangle commutes, trivially from left to right, and up-to-isomorphism from right to left, since $MKL \cong L$. In this context we refer to the functor $L$ as the `predicate' functor $\ensuremath{\mathrm{Pred}}$, and to the functor $KL$ as the `states' functor $\ensuremath{\mathrm{Stat}}$.
\hspace*{\fill}$\QEDbox$ \end{corollary} The remainder of the paper is devoted to instances of this triangle corollary. In each of these examples the category $\cat{A}$ will be of the form $\op{\cat{P}}$, where $\cat{P}$ is a category of predicates (with equalisers). The full and faithfulness of the functors $\ensuremath{\mathrm{Pred}} \colon \Kl(T) \rightarrow \op{\cat{P}}$ and $\ensuremath{\mathrm{Stat}} \colon \Kl(T) \rightarrow \EM(T)$ means that there are bijective correspondences between: \begin{equation} \label{ComputationTransformers} \begin{prooftree} \xymatrix{X\ar[rr]^-{\text{computations}} & & T(Y)} \Justifies \xymatrix{\ensuremath{\mathrm{Pred}}(Y)\ar[rrr]_-{\text{predicate transformers}} & & & \ensuremath{\mathrm{Pred}}(X)} \end{prooftree} \qquad \begin{prooftree} \xymatrix{X\ar[rr]^-{\text{computations}} & & T(Y)} \Justifies \xymatrix{\ensuremath{\mathrm{Stat}}(X)\ar[rrr]_-{\text{state transformers}} & & & \ensuremath{\mathrm{Stat}}(Y)} \end{prooftree} \end{equation} \noindent Since $\ensuremath{\mathrm{Stat}}(X) = T(X)$, the correspondence on the right is given by Kleisli extension, sending a map $f\colon X \rightarrow T(Y)$ to $\mu \mathrel{\circ} T(f) \colon T(X) \rightarrow T(Y)$. This bijective correspondence on the right is a categorical formality. But the correspondence on the left is much more interesting, since it precisely describes to which kind of predicate transformers (preserving which structure) computations correspond. Such a correspondence is often referred to as `healthiness' of the semantics. It is built into our triangle recipe, as will be illustrated below. Before looking at triangle examples, we make the following points. \begin{itemize} \item As discussed in~\cite{Jacobs15d}, the predicate functor $\ensuremath{\mathrm{Pred}} \colon \Kl(T) \rightarrow \cat{A}$ is in some cases an \emph{enriched} functor, preserving additional structure that is of semantical/logical relevance.
For instance, operations on programs, like $\cup$ for non-deterministic sum, may be expressed as structure on Kleisli homsets. Preservation of this structure by the functor $\ensuremath{\mathrm{Pred}}$ gives the logical rules for dealing with such structure in weakest precondition computations. These enriched aspects will not be elaborated in the current context. \item The triangle picture that we use here is refined in~\cite{HinoKHJ16}. In all our examples, the adjunction $F \dashv G$ arises by homming into a dualising object $\Omega$. The induced monad $T$ is then of the `double dual' form $\Omega^{\Omega^{(-)}}$. The approach of~\cite{HinoKHJ16} uses monads $S$ having a map of monads $S \Rightarrow T = GF$; this monad map corresponds bijectively to an Eilenberg-Moore algebra $S(\Omega) \rightarrow \Omega$, which is understood as a logical modality. \end{itemize} \section{Dualising with 2}\label{TwoSec} We split our series of examples in three parts, determined by the dualising object: $2$, $3$, or $[0,1]$. The first series of Boolean examples is obtained via adjunctions that involve `homming into $2$', where $2 = \{0,1\}$ is the 2-element set of Booleans. \subsection{Sets and sets}\label{SetsSetsSubsec} We will present examples in the following manner, in three stages.
$$\vcenter{\xymatrix@R-2pc{ \op{\Cat{Sets}}\ar@/^2ex/[dd]^{\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,2)} \\ \dashv \\ \Cat{Sets}\ar@/^2ex/[uu]^{\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,2)}\ar@(dl,dr)_{\mathcal{N}=\powersetsymbol\powersetsymbol} }} \qquad \begin{prooftree} \begin{prooftree} \xymatrix{\powersetsymbol(X)\ar[r]^-{\op{\Cat{Sets}}} & Y} \Justifies \xymatrix{Y\ar[r]^-{\Cat{Sets}} & \powersetsymbol(X)} \end{prooftree} \Justifies \xymatrix{X\ar[r]_-{\Cat{Sets}} & \powersetsymbol(Y)} \end{prooftree} \qquad \vcenter{\xymatrix@C-2pc{ \op{\Cat{Sets}}\ar@/^0.7em/[rr] & \top & \EM\rlap{$(\mathcal{N}) = \Cat{CABA}$}\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{N})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{7em}$$ \noindent On the left we describe the adjunction that forms the basis for the example at hand, together with the induced monad. In this case we have the familiar fact that the contravariant powerset functor $\powersetsymbol \colon \Cat{Sets} \rightarrow \op{\Cat{Sets}}$ is adjoint to itself, as indicated. The induced double-powerset monad $\powersetsymbol\powersetsymbol$ on $\Cat{Sets}$ is known in the coalgebra/modal logic community as the neighbourhood monad $\mathcal{N}$, because its coalgebras are related to neighbourhood frames in modal logic. In the middle, the bijective correspondence is described that forms the basis of the adjunction. In this case there is the obvious correspondence between functions $Y \rightarrow \powersetsymbol(X)$ and functions $X \rightarrow \powersetsymbol(Y)$ --- which are all relations on $X\times Y$. On the right the result is shown of applying the triangle corollary~\ref{TriangleCor} to the adjunction on the left.
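The bijective correspondence in the middle can be run on finite sets. The sketch below (my own illustration, not code from the paper; the Kleisli map `c` is a hypothetical example) turns a Kleisli map $c \colon X \rightarrow \mathcal{N}(Y) = \powersetsymbol\powersetsymbol(Y)$ into a predicate transformer $\powersetsymbol(Y) \rightarrow \powersetsymbol(X)$ and back, witnessing on a small example that this passage is a bijection, with no preservation conditions on the transformer.

```python
from itertools import combinations

def subsets(S):
    # All subsets of a finite set S, as frozensets.
    S = sorted(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def pred_of(c, X):
    # A Kleisli map c : X -> N(Y) = PP(Y), given as a dict x -> set of
    # frozensets, yields the predicate transformer Q |-> { x | Q in c[x] }.
    return lambda Q: frozenset(x for x in X if Q in c[x])

def kleisli_of(h, X, Y):
    # Inverse direction: a predicate transformer h : P(Y) -> P(X)
    # yields the Kleisli map x |-> { Q | x in h(Q) }.
    return {x: {Q for Q in subsets(Y) if x in h(Q)} for x in X}

# Hypothetical example with X = Y = {0, 1}.
X = Y = {0, 1}
c = {0: {frozenset(), frozenset({1})}, 1: {frozenset({0, 1})}}
```

The round trip `kleisli_of(pred_of(c, X), X, Y)` returns `c` itself, which is the finite shadow of the full and faithfulness of $\ensuremath{\mathrm{Pred}} \colon \Kl(\mathcal{N}) \rightarrow \op{\Cat{Sets}}$.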
The full and faithfulness of the predicate functor $\ensuremath{\mathrm{Pred}} \colon \Kl(\mathcal{N}) \rightarrow \op{\Cat{Sets}}$ plays an important role in the approach to coalgebraic dynamic logic in~\cite{HansenKL14}, relating coalgebras $X \rightarrow \mathcal{N}(X)$ to predicate transformer functions $\powersetsymbol(X) \rightarrow \powersetsymbol(X)$, going in the opposite direction. The category $\EM(\mathcal{N})$ of Eilenberg-Moore algebras of the neighbourhood monad $\mathcal{N}$ is the category $\Cat{CABA}$ of complete atomic Boolean algebras (see \textit{e.g.}\xspace~\cite{Taylor02}). The adjunction $\op{\Cat{Sets}} \rightleftarrows \EM(\mathcal{N})$ is thus an equivalence. \subsection{Sets and posets}\label{SetsPosetsSubsec} We now restrict the adjunction in the previous subsection to posets. $$\vcenter{\xymatrix@R-2pc{ \op{\Cat{PoSets}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Up}} = \ensuremath{\mathrm{Hom}}(-,2)} \\ \dashv \\ \Cat{Sets}\ar@/^2ex/[uu]^{\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,2)}\ar@(dl,dr)_{\mathcal{M}=\ensuremath{\mathrm{Up}}\powersetsymbol} }} \qquad \begin{prooftree} \xymatrix@C+.5pc{Y\ar[r]^-{\Cat{PoSets}} & \powersetsymbol(X)} \Justifies \xymatrix{X\ar[r]_-{\Cat{Sets}} & \ensuremath{\mathrm{Up}}(Y)} \end{prooftree} \qquad \vcenter{\xymatrix@C-2pc{ \op{\Cat{PoSets}}\ar@/^0.7em/[rr] & \top & \EM(\mathcal{M})\rlap{$ = \Cat{CDL}$}\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{M})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \auxproof{ We check the bijective correspondence, for $Y\in\Cat{PoSets}$, and $X\in\Cat{Sets}$, $$\begin{prooftree} \xymatrix@C+.5pc{Y\ar[r]^-{f} & \powersetsymbol(X)} \Justifies \xymatrix{X\ar[r]_-{g} & \ensuremath{\mathrm{Up}}(Y)} \end{prooftree}$$ \begin{itemize} \item Given $f\colon Y \rightarrow \powersetsymbol(X)$ in $\Cat{PoSets}$, define $\overline{f} \colon X \rightarrow \ensuremath{\mathrm{Up}}(Y)$ as $\overline{f}(x) = \setin{y}{Y}{x\in f(y)}$.
Each $\overline{f}(x)$ is an upset, since if $z \geq y \in \overline{f}(x)$, then $f(z) \supseteq f(y)$, so $x\in f(y)$ implies $x\in f(z)$, and thus $z\in \overline{f}(x)$. \item For $g\colon X \rightarrow \ensuremath{\mathrm{Up}}(Y)$ define $\overline{g} \colon Y \rightarrow \powersetsymbol(X)$ as $\overline{g}(y) = \setin{x}{X}{y\in g(x)}$. This is a monotone function, since if $y \leq z$ in $Y$, and $x\in \overline{g}(y)$, then $y \in g(x)$, and thus $z\in g(x)$, since $g(x)$ is an upset. Hence $x\in \overline{g}(z)$. \end{itemize} } \noindent The functor $\ensuremath{\mathrm{Up}} \colon \op{\Cat{PoSets}} \rightarrow \Cat{Sets}$ sends a poset $Y$ to the collection of upsets $U\subseteq Y$, satisfying $y \geq x \in U$ implies $y\in U$. These upsets can be identified with monotone maps $p\colon Y \rightarrow 2$, namely as $p^{-1}(1)$. Notice that this time there is a bijective correspondence between computations $X \rightarrow \mathcal{M}(Y) = \ensuremath{\mathrm{Up}}\powersetsymbol(Y)$ and \emph{monotone} predicate transformers $\powersetsymbol(Y) \rightarrow \powersetsymbol(X)$. This fact is used in~\cite{HansenKL14}. The algebras of the monad $\mathcal{M}$ are completely distributive lattices, see~\cite{Markowsky79} and~\cite[I, Prop.~3.8]{Johnstone82}. \subsection{Sets and meet-semilattices}\label{SetsMSLSubsec} We now restrict the adjunction further to meet semilattices, that is, to posets with finite meets $\wedge, \top$. 
$$\vcenter{\xymatrix@R-2pc{ \op{\Cat{MSL}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,2)} \\ \dashv \\ \Cat{Sets}\ar@/^2ex/[uu]^{\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,2)}\ar@(dl,dr)_{\mathcal{F}=\Cat{MSL}(\powersetsymbol(-), 2)} }} \qquad \begin{prooftree} \xymatrix@C+.5pc{Y\ar[r]^-{\Cat{MSL}} & \powersetsymbol(X)} \Justifies \xymatrix{X\ar[r]_-{\Cat{Sets}} & \Cat{MSL}(Y, 2)} \end{prooftree} \qquad \vcenter{\xymatrix@C-2pc{ \op{\Cat{MSL}}\ar@/^0.7em/[rr] & \top & \EM(\mathcal{F})\rlap{$ = \Cat{CCL}$}\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{F})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \noindent Morphisms in the category $\Cat{MSL}$ of meet semilattices preserve the meet $\wedge$ and the top element $\top$ (and hence the order too). For $Y\in\Cat{MSL}$ one can identify a map $Y\rightarrow 2$ with a \emph{filter} of $Y$, that is, with an upset $U\subseteq Y$ closed under $\wedge, \top$. \auxproof{ \begin{itemize} \item A filter $U\subseteq Y$ yields a function $p_{U} \colon Y \rightarrow 2$ by $p_{U}(y) = 1$ iff $y\in U$. \begin{itemize} \item If $x \leq y$ and $p_{U}(x) = 1$, then $x\in U$, so $y\in U$, and thus $p_{U}(y) = 1$. Hence $p_{U}(x) \leq p_{U}(y)$. \item $p_{U}(\top) = 1$, since $\top \in U$. \item In order to get $p_{U}(x \wedge y) = p_{U}(x) \wedge p_{U}(y)$, the non-trivial part is $(\geq)$. So assume $p_{U}(x) \wedge p_{U}(y) = 1$. Then $x\in U$ and $y\in U$, so that $x\wedge y\in U$. Hence $p_{U}(x\wedge y) = 1$. \end{itemize} \item A map $p \colon Y \rightarrow 2$ yields a filter $U_{p} = p^{-1}(1) = \setin{y}{Y}{p(y) = 1}$. This is a filter: \begin{itemize} \item $y \geq x \in U_{p}$ means $p(x) = 1$, so that $p(y) \geq p(x) = 1$, and thus $y\in U_{p}$. \item $\top \in U_{p}$ since $p(\top) = 1$. \item If $x, y\in U_{p}$, then $p(x\wedge y) = p(x) \wedge p(y) = 1 \wedge 1 = 1$, so that $x\wedge y \in U_{p}$.
\end{itemize} \end{itemize} \noindent Moreover, $$\begin{array}{rcl} U_{p_U} & = & \set{y}{p_{U}(y) = 1} \\ & = & \set{y}{y\in U} \\ & = & U \\ p_{U_p}(y) = 1 & \Longleftrightarrow & y \in U_{p} \\ & \Longleftrightarrow & p(y) = 1. \end{array}$$ } The resulting monad $\mathcal{F}(X) = \Cat{MSL}(\powersetsymbol(X), 2)$ gives the filters in $\powersetsymbol(X)$. This monad is thus called the \emph{filter monad}. In~\cite{Wyler81} it is shown that its category of algebras $\EM(\mathcal{F})$ is the category $\Cat{CCL}$ of continuous complete lattices, that is, of complete lattices in which each element $x$ is the (directed) join $x = \bigvee\set{y}{y \ll x}$ of the elements way below it. \subsection{Sets and complete lattices}\label{SetsCLSubsec} A poset is called a complete lattice if each subset has a join, or equivalently, if each subset has a meet. Since these complete lattices will be used in several examples, we elaborate some basic properties first. We shall consider two categories with complete lattices as objects, namely: \begin{itemize} \item $\Cat{CL}J$ whose morphisms preserve all joins $\bigvee$; \item $\Cat{CL}M$ whose morphisms preserve all meets $\bigwedge$. \end{itemize} \noindent We write $\op{L}$ for the complete lattice obtained from $L$ by reversing the order. Thus, $f\colon L \rightarrow K$ in $\Cat{CL}J$ gives a map $f \colon \op{L} \rightarrow \op{K}$ in $\Cat{CL}M$. Hence we have an isomorphism $\Cat{CL}J \cong \Cat{CL}M$. Notice that we have: $$\begin{array}{rcl} \Cat{CL}M(L, K) & \cong & \Cat{CL}J(\op{L}, \op{K})\quad\mbox{as sets} \end{array}$$ \noindent But: $$\begin{array}{rcl} \op{\Cat{CL}M(L, K)} & \cong & \Cat{CL}J(\op{L}, \op{K})\quad\mbox{as posets} \end{array}$$ There is another isomorphism between these two categories of complete lattices. 
A basic fact in order theory is that each map $f\colon L \rightarrow K$ in $\Cat{CL}J$ has a right adjoint $f^{\#} \colon K \rightarrow L$ in $\Cat{CL}M$, given by: \begin{equation} \label{CLmapAdjointEqn} \begin{array}{rcl} f^{\#}(b) & = & \bigvee\setin{x}{L}{f(x) \leq b}. \end{array} \end{equation} \noindent Clearly, $f(a) \leq b$ implies $a \leq f^{\#}(b)$. For the reverse direction we apply $f$ to an inequality $a \leq f^{\#}(b)$ and obtain: $$\begin{array}{rcccccccl} f(a) & \,\leq\, & f\big(f^{\#}(b)\big) & \,=\, & f\big(\bigvee\set{x}{f(x) \leq b}\big) & \,=\, & \bigvee\set{f(x)}{f(x)\leq b} & \,\leq\, & b. \end{array}$$ \noindent This gives an isomorphism of categories $\Cat{CL}J \cong \op{\big(\Cat{CL}M\big)}$. Via a combination with the above isomorphism $\Cat{CL}J \cong \Cat{CL}M$ we see that the two categories $\Cat{CL}J$ and $\Cat{CL}M$ are self-dual. \begin{lemma} \label{CLTwoLem} For a complete lattice $L$ there are isomorphisms of posets: \begin{equation} \label{CLJTwoEqn} \xymatrix{ \Cat{CL}J\big(L, 2\big)\ar[r]^-{\cong} & \op{L} & \Cat{CL}J\big(L, \op{2}\big)\ar[l]_-{\cong} } \end{equation} \noindent Similarly there are isomorphisms: \begin{equation} \label{CLMTwoEqn} \xymatrix{ \Cat{CL}M\big(L, 2\big)\ar[r]^-{\cong} & \op{L} & \Cat{CL}M\big(L, \op{2}\big)\ar[l]_-{\cong} } \end{equation} \end{lemma} \begin{myproof} We restrict ourselves to describing the four isomorphisms. The isomorphism on the left in~\eqref{CLJTwoEqn} sends a join-preserving map $\varphi \colon L \rightarrow 2$ and an element $a\in L$ to: $$\begin{array}{rclcrcl} \widehat{\varphi} & = & \bigvee\setin{x}{L}{\varphi(x) = 0} & \qquad\mbox{and}\qquad & \widehat{a}(x) & = & \left\{\begin{array}{ll} 0 \quad & \mbox{if } x \leq a \\ 1 & \mbox{otherwise.} \end{array}\right.
\end{array}$$ \noindent The isomorphism on the right in~\eqref{CLJTwoEqn} maps a $\varphi \colon L \rightarrow \op{2}$ and $a\in L$ to: $$\begin{array}{rclcrcl} \widetilde{\varphi} & = & \bigvee\set{x}{\varphi(x)=1} & \qquad\mbox{and}\qquad & \qquad \widetilde{a}(x) = 1 & \Longleftrightarrow & x \leq a. \end{array}$$ \auxproof{ The map from left to right in~\eqref{CLJTwoEqn} sends a join-preserving map $\varphi \colon L \rightarrow 2$ to the element: $$\begin{array}{rcl} \widehat{\varphi} & = & \bigvee\setin{x}{L}{\varphi(x) = 0}. \end{array}$$ \noindent If $\varphi \leq \psi$ in $\Cat{CL}J\big(L, 2\big)$, then $\varphi(x) \leq \psi(x)$ for each $x\in L$, so that: $$\begin{array}{rcl} \set{x}{\varphi(x) = 0} & \supseteq & \set{x}{\psi(x) = 0}, \end{array}$$ \noindent and thus: $$\begin{array}{rcccccl} \widehat{\varphi} & = & \bigvee\set{x}{\varphi(x) = 0} & \geq & \bigvee\set{x}{\psi(x) = 0} & = & \widehat{\psi}. \end{array}$$ \noindent In the other direction one sends an element $a\in L$ to the function $\widehat{a} \colon L \rightarrow 2$ given by: $$\begin{array}{rcl} \widehat{a}(x) & = & \left\{\begin{array}{ll} 0 \quad & \mbox{if } x \leq a \\ 1 & \mbox{otherwise.} \end{array}\right. \end{array}$$ \noindent Clearly, $\widehat{a}$ preserves joins: $$\begin{array}{rcl} \widehat{a}(\bigvee_{i}x_{i}) = 0 & \Longleftrightarrow & \bigvee_{i}x_{i} \leq a \\ & \Longleftrightarrow & \all{i}{x_{i} \leq a} \\ & \Longleftrightarrow & \all{i}{\widehat{a}(x_{i}) = 0} \\ & \Longleftrightarrow & \bigvee_{i}\widehat{a}(x_{i}) = 0. \end{array}$$ \noindent Also, if $a \leq b$ then $\widehat{b} \leq \widehat{a}$ since: from $\widehat{a}(x) = 0$ we get $x\leq a$ and thus $x\leq b$ so that $\widehat{b}(x) = 0$. We have an isomorphism on the left in~\eqref{CLJTwoEqn} since: $$\begin{array}{rcccccl} \widehat{\widehat{a}} & = & \bigvee\set{x}{\widehat{a}(x) = 0} & = & \bigvee\set{x}{x \leq a} & = & a. 
\end{array}$$ \noindent And: $$\begin{array}{rcccl} \widehat{\widehat{\varphi}}(y) = 0 & \Longleftrightarrow & y \leq \widehat{\varphi} = \bigvee\set{x}{\varphi(x)=0} & \smash{\stackrel{(*)}{\Longleftrightarrow}} & \varphi(y) = 0. \end{array}$$ \noindent The direction $(\Leftarrow)$ of the marked equivalence is obvious. For $(\mathbb{R}ightarrow)$ assume $y \leq \widehat{\varphi}$. Then: $$\begin{array}{rcccccccl} \varphi(y) & \leq & \varphi(\widehat{\varphi}) & = & \varphi(\bigvee\set{x}{\varphi(x)=0}) & = & \bigvee\set{\varphi(x)}{\varphi(x)=0} & = & 0. \end{array}$$ We turn to the isomorphism on the right in~\eqref{CLJTwoEqn}. It is given by: $$\begin{array}{rclcrcl} \widetilde{\varphi} & = & \bigvee\set{x}{\varphi(x)=1} & \qquad\mbox{and}\qquad & \qquad \widetilde{a}(x) = 1 & \Longleftrightarrow & x \leq a. \end{array}$$ \noindent These mappings are monotone. If $\varphi \leq \psi$, then $\varphi(x) \geq \psi(x)$ for all $x\in L$, so that $\set{x}{\varphi(x)=1} \supseteq \set{x}{\psi(x)=1}$, and thus $\widetilde{\varphi} = \bigvee\set{x}{\varphi(x)=1} \geq \bigvee\set{x}{\psi(x)=1} = \widetilde{\psi}$. The map $\widetilde{a} \colon L \rightarrow \op{2}$ preserves joins, since it sends joins to meets: $$\begin{array}{rcl} \widetilde{a}(\bigvee_{i}x_{i})=1 & \Longleftrightarrow & \bigvee_{i}x_{i} \leq a \\ & \Longleftrightarrow & \all{i}{x_{i} \leq a} \\ & \Longleftrightarrow & \all{i}{\widetilde{a}(x_{i}) = 1} \\ & \Longleftrightarrow & \bigwedge_{i}\widetilde{a}(x_{i}) = 1. \end{array}$$ \noindent We have: $$\begin{array}{rcccccl} \widetilde{\widetilde{a}} & = & \bigvee\set{x}{\widetilde{a}(x)=1} & = & \bigvee\set{x}{x \leq a} & = & a. \end{array}$$ \noindent And: $$\begin{array}{rcccl} \widetilde{\widetilde{\varphi}}(x) = 1 & \Longleftrightarrow & x \leq \widetilde{\varphi} = \bigvee\set{y}{\varphi(y)=1} & \Longleftrightarrow & \varphi(x) = 1. 
\end{array}$$ \noindent For the direction $(\mathbb{R}ightarrow)$ of the last equivalence we use that $\varphi$ reverses the order and sends joins to meets: $$\begin{array}{rcccccccl} \varphi(x) & \geq & \varphi(\widetilde{\varphi}) & = & \varphi(\bigvee\set{y}{\varphi(y)=1}) & = & \bigwedge\set{\varphi(y)}{\varphi(y)=1} & = & 1. \end{array}$$ } We turn to the isomorphisms in~\eqref{CLMTwoEqn}. They are a consequence of~\eqref{CLJTwoEqn} since: $$\begin{array}{rcccccl} \Cat{CL}M(L, 2) & \cong & \op{\Cat{CL}J(\op{L}, \op{2})} & \cong & \op{(\op{(\op{L})})} & \cong & \op{L}. \end{array}$$ \noindent And similarly: $$\begin{array}{rcccccl} \Cat{CL}M(L, \op{2}) & \cong & \op{\Cat{CL}J(\op{L}, 2)} & \cong & \op{(\op{(\op{L})})} & \cong & \op{L}. \end{array}$$ \noindent The isomorphism on the left in~\eqref{CLMTwoEqn} is described explicitly by: $$\begin{array}{rclcrcl} \widehat{\varphi} & = & \bigwedge\setin{x}{L}{\varphi(x) = 1} & \qquad\mbox{and}\qquad & \qquad\widehat{a}(x) = 1 & \Longleftrightarrow & a \leq x. \end{array}$$ \auxproof{ \noindent These operations are monotone. If $\varphi \leq \psi$ in $\Cat{CL}M(L,2)$ then $\varphi(x) \leq \psi(x)$ for all $x\in L$. Hence $\set{x}{\varphi(x)=1} \subseteq \set{x}{\psi(x)=1}$, and so $\widehat{\varphi} = \bigwedge\set{x}{\varphi(x)=1} \geq \bigwedge\set{x}{\psi(x)=1} = \widehat{\psi}$. In the other direction, if $a \leq b$, then $\widehat{a} \geq \widehat{b}$, since if $\widehat{a}(x) \leq \widehat{b}(x)$: if $\widehat{b}(x)= 1$, then $b \leq x$, so $a\leq x$ and thus $\widehat{a}(x) = 1$. The map $\widehat{a}$ preserves meets: $$\begin{array}{rcl} \widehat{a}(\bigwedge_{i}x_{i}) = 1 & \Longleftrightarrow & a \leq \bigwedge_{i}x_{i} \\ & \Longleftrightarrow & \all{i}{a \leq x_{i}} \\ & \Longleftrightarrow & \all{i}{\widehat{a}(x_{i}) = 1} \\ & \Longleftrightarrow & \bigwedge_{i}\widehat{a}(x_{i}) = 1. 
\end{array}$$ \noindent We have $\widehat{\widehat{a}} = \bigwedge\set{x}{\widehat{a}(x) = 1} = \bigwedge\set{x}{a \leq x} = a$, and: $$\begin{array}{rcccl} \widehat{\widehat{\varphi}}(x)=1 & \Longleftrightarrow & \widehat{\varphi} = \bigwedge\set{y}{\varphi(y)=1} \leq x & \Longleftrightarrow & \varphi(x)=1. \end{array}$$ \noindent For the last $(\mathbb{R}ightarrow)$ we use: $$\begin{array}{rcccccccl} \varphi(x) & \geq & \varphi(\widehat{\varphi}) & = & \varphi(\bigwedge\set{y}{\varphi(y)=1}) & = & \bigwedge\set{\varphi(y)}{\varphi(y)=1} & = & 1. \end{array}$$ } \noindent The isomorphism on the right in~\eqref{CLMTwoEqn} is described explicitly by: $$\begin{array}{rclcrcl} \widetilde{\varphi} & = & \bigwedge\setin{x}{L}{\varphi(x) = 0} & \qquad\mbox{and}\qquad & \qquad\widetilde{a}(x) = 0 & \Longleftrightarrow & a \leq x. \end{array}\eqno{\square}$$ \auxproof{ \noindent We check that these operations are monotone. If $\varphi \leq \psi$ in $\Cat{CL}M(L, \op{2})$, then $\widehat{\varphi} \geq \beta(\psi)$, since $\varphi \leq \psi$ implies $\varphi(x) \geq \psi(x)$ for all $x$. Hence $\set{x}{\varphi(x) = 0} \subseteq \set{x}{\psi(x) = 0}$, and thus $\widehat{\varphi} = \bigwedge\set{x}{\varphi(x) = 0} \geq \bigwedge\set{x}{\psi(x) = 0} = \widehat{\psi}$. Next, if $a \leq b$, then $\widetilde{a} \geq \widetilde{b}$ since $\widetilde{a}(x) \leq \widetilde{b}(x)$ for all $x$: if $\widetilde{b}(x) = 0$, then $b \leq x$, so that $a\leq x$, and thus $\widetilde{a}(x) = 0$. The function $\widetilde{a}$ sends meets to joins: $$\begin{array}{rcl} \widetilde{a}(\bigwedge_{i}x_{i}) = 0 & \Longleftrightarrow & a \leq \bigwedge x_{i} \\ & \Longleftrightarrow & \all{i}{a \leq x_{i}} \\ & \Longleftrightarrow & \all{i}{\widetilde{a}(x_{i}) = 0} \\ & \Longleftrightarrow & \bigvee_{i} \widetilde{a}(x_{i}) = 0. 
\end{array}$$ \noindent We have a bijective correspondence since: $$\begin{array}{rcccccl} \widetilde{\widetilde{a}} & = & \bigwedge\set{x}{\widetilde{a}(x) = 0} & = & \bigwedge\set{x}{a \leq x} & = & a. \end{array}$$ \noindent And: $$\begin{array}{rcccl} \widetilde{\widetilde{\varphi}}(x) = 0 & \Longleftrightarrow & \bigwedge\set{y}{\varphi(y)=0} = \widetilde{\varphi} \leq x & \Longleftrightarrow & \varphi(x) = 0. \end{array}$$ \noindent For the last $(\mathbb{R}ightarrow)$ we use that $\varphi$ sends meets to joins and thus reverses the order: $$\begin{array}{rcccccccl} \varphi(x) & \leq & \varphi(\widetilde{\varphi}) & = & \varphi(\bigwedge\set{y}{\varphi(y)=0}) & = & \bigvee\set{\varphi(y)}{\varphi(y)=0} & = & 0. \end{array}\eqno{\square}$$ } \end{myproof} We note that the composite isomorphisms $\Cat{CL}J(L, 2) \cong \Cat{CL}J(L, \op{2})$ in~\eqref{CLJTwoEqn} and $\Cat{CL}M(L, 2) \cong \Cat{CL}M(L, \op{2})$ in~\eqref{CLMTwoEqn} are given by $\varphi\mapsto\neg\varphi$, where $\neg\varphi(x) = 1$ iff $\varphi(x)=0$. \auxproof{ If $\varphi$ sends joins to joins, then $\neg\varphi$ sends joins to meets: $$\begin{array}{rcl} \neg\varphi(\bigvee_{i}x_{i}) = 1 & \Longleftrightarrow & \bigvee_{i}\varphi(x_{i}) = \varphi(\bigvee_{i}x_{i}) = 0 \\ & \Longleftrightarrow & \all{i}{\varphi(x_{i}) = 0} \\ & \Longleftrightarrow & \all{i}{\neg\varphi(x_{i}) = 1} \\ & \Longleftrightarrow & \bigwedge_{i}\neg\varphi(x_{i}) = 1. \end{array}$$ \noindent In the opposite direction, if $\varphi$ sends joins to meets, then $\neg\varphi$ sends joins to joins: $$\begin{array}{rcl} \neg\varphi(\bigvee_{i}x_{i}) = 0 & \Longleftrightarrow & \bigwedge_{i}\varphi(x_{i}) = \varphi(\bigvee_{i}x_{i}) = 1 \\ & \Longleftrightarrow & \all{i}{\varphi(x_{i}) = 1} \\ & \Longleftrightarrow & \all{i}{\neg\varphi(x_{i}) = 0} \\ & \Longleftrightarrow & \bigvee_{i}\neg\varphi(x_{i}) = 0. 
\end{array}$$ \noindent For the isomorphism $\Cat{CL}M(L, 2) \cong \Cat{CL}M(L, \op{2})$ we use that if $\varphi$ sends meets to meets, then $\neg\varphi$ sends meets to joins: $$\begin{array}{rcl} \neg\varphi(\bigwedge_{i}x_{i}) = 0 & \Longleftrightarrow & \bigwedge_{i}\varphi(x_{i}) = \varphi(\bigwedge_{i}x_{i}) = 1 \\ & \Longleftrightarrow & \all{i}{\varphi(x_{i}) = 1} \\ & \Longleftrightarrow & \all{i}{\neg\varphi(x_{i}) = 0} \\ & \Longleftrightarrow & \bigvee_{i}\neg\varphi(x_{i}) = 0. \end{array}$$ \noindent In the other direction, if $\varphi$ sends meets to joins, then $\neg\varphi$ sends meets to meets: $$\begin{array}{rcl} \neg\varphi(\bigwedge_{i}x_{i}) = 1 & \Longleftrightarrow & \bigvee_{i}\varphi(x_{i}) = \varphi(\bigwedge_{i}x_{i}) = 0 \\ & \Longleftrightarrow & \all{i}{\varphi(x_{i}) = 0} \\ & \Longleftrightarrow & \all{i}{\neg\varphi(x_{i}) = 1} \\ & \Longleftrightarrow & \bigwedge_{i}\neg\varphi(x_{i}) = 1. \end{array}$$ } The state-and-effect triangle of this subsection is given by the following situation. $$\vcenter{\xymatrix@R-2pc{ \op{\big(\Cat{CL}M\big)}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,\op{2}) \cong \op{(-)}} \\ \dashv \\ \Cat{Sets}\ar@/^2ex/[uu]^{\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,\op{2})}\ar@(dl,dr)_{\powersetsymbol(-)} }} \quad \begin{prooftree} \xymatrix@C+.5pc{L\ar[r]^-{\Cat{CL}M} & \powersetsymbol(X)} \Justifies \xymatrix{X\ar[r]_-{\Cat{Sets}} & L} \end{prooftree} \quad \vcenter{\xymatrix@R-1.5pc@C-2pc{ \op{\big(\Cat{CL}M\big)}\ar@/^0.7em/[rr] & \top & \EM(\powersetsymbol)\rlap{$\; = \Cat{CL}J$}\ar@/^0.6em/[ll] \\ & \Kl(\powersetsymbol)\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \noindent The upgoing functor on the left $\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,\op{2})$ is the contravariant powerset functor.
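On finite sets this contravariant powerset functor is simply inverse image. A small illustrative sketch (in Python; all names are ours) checks that inverse image preserves meets and reverses composition:

```python
def inverse_image(f, X):
    """Send f : X -> Y to the map f* : P(Y) -> P(X), V |-> f^{-1}(V)."""
    return lambda V: frozenset(x for x in X if f(x) in V)

X = {0, 1, 2, 3}
f = lambda x: x % 2              # a function X -> Y with Y = {0, 1}
f_star = inverse_image(f, X)

# f* preserves intersections (meets), as required of a map in CL_M;
# here we check one binary instance.
V1, V2 = frozenset({0}), frozenset({0, 1})
assert f_star(V1) == frozenset({0, 2})
assert f_star(V1 & V2) == f_star(V1) & f_star(V2)

# Contravariance: (g o f)* = f* o g* for a composite X -> Y -> Z.
g = lambda y: 1 - y
gf_star = inverse_image(lambda x: g(f(x)), X)
g_star = inverse_image(g, {0, 1})
for W in [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]:
    assert gf_star(W) == f_star(g_star(W))
```

Inverse image in fact preserves all Boolean structure, which is why the same functor also underlies the restrictions to $\Cat{BA}$ and $\Cat{CBA}$ later on.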
In the other direction, the functor $L \mapsto \ensuremath{\mathrm{Hom}}(L,\op{2}) \cong \op{L}$, by~\eqref{CLMTwoEqn}, maps a complete lattice $L$ to its underlying set. It sends a $\bigwedge$-preserving map $L \rightarrow K$ to the associated ($\bigvee$-preserving) map $K \rightarrow L$. \auxproof{ The functor $\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,\op{2}) \colon \Cat{Sets} \rightarrow \op{\big(\Cat{CL}M\big)}$ is the contravariant powerset functor, since $\varphi,\psi \colon X \rightarrow \op{2}$ are ordered by $\varphi \sqsubseteq \psi$ iff $\varphi(x) \geq \psi(x)$ for all $x\in X$. In particular, for subsets $U,V \subseteq X$ the associated indicator functions satisfy: $$\begin{array}{rcccccl} \indic{U} \sqsubseteq \indic{V} & \Longleftrightarrow & \all{x}{\indic{U}(x) \geq \indic{V}(x)} & \Longleftrightarrow & \all{x}{x \in V \mathbb{R}ightarrow x \in U} & \Longleftrightarrow & V \subseteq U. \end{array}$$ } The adjoint correspondence in the middle sends a meet-preserving map $f\colon L \rightarrow \powersetsymbol(X)$ and a function $g\colon X \rightarrow L$ to the transposes: $$\begin{array}{rclcrcl} \overline{f}(x) & = & \bigwedge\setin{a}{L}{x\in f(a)} & \qquad\mbox{and} & \qquad \overline{g}(a) & = & \set{x}{g(x) \leq a}. \end{array}$$ \auxproof{ \noindent Clearly, $\overline{g}$ preserves meets, and $\overline{\overline{g}} = g$. We also have: $$\begin{array}{rcccl} x\in \overline{\overline{f}}(a) & \Longleftrightarrow & \overline{f}(x) = \bigwedge\set{b}{x\in f(b)} \leq a & \Longleftrightarrow & x\in f(a). \end{array}$$ \noindent The direction $(\Leftarrow)$ of the last equivalence is obvious, and the direction $(\mathbb{R}ightarrow)$ is obtained from: $$\begin{array}{rcccccccl} x & \in & \bigcap\set{f(b)}{x\in f(b)} & = & f(\bigvee\set{b}{x\in f(b)}) & = & f(\overline{f}(x)) & \subseteq & f(a). 
\end{array}$$ } \noindent By taking $L = \powersetsymbol(Y)$ we get the classical healthiness of the $\Box$-predicate transformer semantics for non-deterministic computation~\cite{DijkstraS90}, with a bijective correspondence between Kleisli maps $X \rightarrow \powersetsymbol(Y)$ and meet-preserving maps $\powersetsymbol(Y)\rightarrow \powersetsymbol(X)$. The adjunction $\rightleftarrows$ in the state-and-effect triangle on the right is an isomorphism of categories, as discussed before Lemma~\ref{CLTwoLem}. This triangle captures the essence of non-deterministic program semantics from~\cite{DijkstraS90}, involving computations, predicate transformation and state transformation. There is also an adjunction that gives rise to $\Diamond$-predicate transformer semantics, as join preserving maps. In order to describe it properly, with opposite orders, we need to use posets instead of sets, see Subsection~\ref{PosetCLSubsec} below. \auxproof{ There is a similar adjunction with join-preserving maps: $$\vcenter{\xymatrix@R-2pc{ \op{\big(\Cat{CL}J\big)}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,2) \cong (-)} \\ \dashv \\ \Cat{Sets}\ar@/^2ex/[uu]^{\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,2)}\ar@(dl,dr)_{\powersetsymbol(-)} }} \qquad \begin{prooftree} \xymatrix@C+.5pc{L\ar[r]^-{f} & \powersetsymbol(X)} \Justifies \xymatrix{X\ar[r]_-{g} & L} \end{prooftree} \qquad \vcenter{\[email protected]@C-2pc{ \op{\big(\Cat{CL}J\big)}\ar@/^0.7em/[rr] & \top & \EM\rlap{$(\powersetsymbol) = \Cat{CL}J$}\ar@/^0.6em/[ll] \\ & \Kl(\powersetsymbol)\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \noindent The bijective correspondence works as follows. \begin{itemize} \item Given a join preserving map $f\colon L \rightarrow \powersetsymbol(X)$ we define $\overline{f} \colon X \rightarrow L$ in $\Cat{Sets}$ as $\overline{f}(x) = \bigvee\setin{a}{L}{x \not\in f(a)}$. 
\item In the other direction, given a function $g\colon X \rightarrow L$ we take $\overline{g} \colon L \rightarrow \powersetsymbol(X)$ to be $\overline{g}(a) = \setin{x}{X}{a \not\leq g(x)}$. This map $\overline{g}$ preserves joins since: $$\begin{array}{rcccccccl} x \not\in \overline{g}(\bigvee_{i}a_{i}) & \Longleftrightarrow & \bigvee_{i}a_{i} \leq g(x) \\ & \Longleftrightarrow & \all{i}{a_{i} \leq g(x)} \\ & \Longleftrightarrow & \all{i}{x \not\in \overline{g}(a_{i})} \\ & \Longleftrightarrow & x \not\in \bigcup_{i}\overline{g}(a_{i}). \end{array}$$ \end{itemize} \noindent The operations are each other's inverse: $$\begin{array}{rcccccl} \overline{\overline{g}}(x) & = & \bigvee\set{a}{x\not\in \overline{g}(a)} & = & \bigvee\set{a}{a \leq g(x)} & = & g(x). \end{array}$$ \noindent And: $$\begin{array}{rcccl} x\not\in\overline{\overline{f}}(a) & \Longleftrightarrow & a \leq \overline{f}(x) = \bigvee\set{b}{x\not\in f(b)} & \smash{\stackrel{(*)}{\Longleftrightarrow}} & x \not\in f(a). \end{array}$$ \noindent The direction $(\Leftarrow)$ is obvious, and for $(\mathbb{R}ightarrow)$ we reason as follows. Let $a \leq \overline{f}(x) = \bigvee\set{b}{x\not\in f(b)}$. Then: $$\begin{array}{rcccl} f(a) & \subseteq & f\big(\bigvee\set{b}{x\not\in f(b)}\big) & = & \bigcup\set{f(b)}{x\not\in f(b)}. \end{array}$$ \noindent Hence if $x\in f(a)$, then $x\in f(b)$ for some $b\in L$ with $x\not\in f(b)$. Clearly, this is impossible. In the induced triangle we have the self-duality of $\Cat{CL}J$ as isomorphism between categories of predicates and of states. For a Kleisli map $g\colon X \rightarrow \powersetsymbol(Y)$ the induced substitution functor is written as $\ensuremath{\mathrm{Pred}}(g) \colon \powersetsymbol(Y) \rightarrow \powersetsymbol(X)$. It is not the familiar $\diamond$-formula, but: $$\begin{array}{rcccccl} \ensuremath{\mathrm{Pred}}(g)(V) & = & \overline{g}(V) & = & \setin{x}{X}{V \not\subseteq g(x)} & = & \setin{x}{X}{V \cap \neg g(x) \neq \emptyset}. 
\end{array}$$ \noindent This is curious! If we restrict ourselves to powerset complete lattices, then we do have the familiar $\diamond$-formula in the bijective correspondence: $$\begin{prooftree} \xymatrix{\powersetsymbol(Y)\ar[r]^-{f} & \powersetsymbol(X) \mbox{ $\bigvee$-preserving}} \Justifies \xymatrix{X\ar[r]_-{g} & \powersetsymbol(Y)} \end{prooftree}$$ \noindent via: $$\begin{array}{rclcrcl} \overline{f}(x) & = & \set{y}{x\in f(\{y\})} & \qquad\mbox{and} & \qquad \overline{g}(V) & = & \set{x}{g(x) \cap V \neq \emptyset}. \end{array}$$ \noindent Then indeed: $$\begin{array}{rcl} \overline{\overline{g}}(x) & = & \set{y}{x\in \overline{g}(\{y\})} \\ & = & \set{y}{g(x) \cap \{y\} \neq \emptyset} \\ & = & g(x) \\ \overline{\overline{f}}(V) & = & \set{x}{\overline{f}(x) \cap V \neq \emptyset} \\ & = & \set{x}{\exin{y}{V}{x\in f(\{y\})}} \\ & = & \set{x}{x \in \bigcup_{y\in V} f(\{y\})} \\ & = & \set{x}{x \in f(\bigcup_{y\in V}\{y\})} \\ & = & \set{x}{x \in f(V)} \\ & = & f(V). \end{array}$$ \noindent The problem is that this correspondence for powerset lattices cannot be formulated for arbitrary lattices, since it uses singleton sets. \auxproof{ The functor $\op{(\Cat{CL}M)} \rightarrow \Cat{Sets}$ can be described in this setting as $L \mapsto \Cat{CL}M(L, \op{2})$, see~\eqref{CLMTwoEqn}. Let's write the induced double dual monad on $\Cat{Sets}$ as $T(X) = \Cat{CL}M(2^{X}, \op{2})$. 
We have maps: $$\xymatrix@R-2pc{ \powersetsymbol(X)\ar[rr]^-{\sigma^{\Box}} & & T(X)=\Cat{CL}M(2^{X}, \op{2}) \\ U\ar@{|->}[rr] & & {\lamin{p}{2^{X}}{\left\{\begin{array}{ll} 0 \quad & \mbox{if } \allin{x}{U}{p(x)=1} \rlap{\quad (written as $\indic{U} \leq p$)} \\ 1 & \mbox{otherwise} \end{array}\right.}} }\hspace*{5em}$$ \noindent For each $U\subseteq X$ the map $\sigma^{\Box}(U)$ sends meets to joins since: $$\begin{array}{rcl} \sigma^{\Box}(U)(\bigwedge_{i}p_{i}) = 0 & \Longleftrightarrow & \indic{U} \leq \bigwedge_{i}p_{i} \\ & \Longleftrightarrow & \all{i}{\indic{U} \leq p_{i}} \\ & \Longleftrightarrow & \all{i}{\sigma^{\Box}(U)(p_{i}) = 0} \\ & \Longleftrightarrow & \bigvee_{i}\sigma^{\Box}(U)(p_{i}) = 0. \end{array}$$ \noindent Moreover, $\sigma^{\Box}$ is an isomorphism, with inverse: $$\begin{array}{rcl} (\sigma^{\Box})^{-1}(h) & = & \bigcap\setin{V}{\powersetsymbol(X)}{h(\indic{V}) = 0}. \end{array}$$ \noindent Indeed: $$\begin{array}{rcl} \big((\sigma^{\Box})^{-1} \mathrel{\circ} \sigma^{\Box}\big)(U) & = & \bigcap\set{V}{\sigma^{\Box}(U)(\indic{V}) = 0} \\ & = & \bigcap\set{V}{\indic{U} \leq \indic{V}} \\ & = & \bigcap\set{V}{U \subseteq V} \\ & = & U \\ \big(\sigma^{\Box} \mathrel{\circ} (\sigma^{\Box})^{-1}\big)(h)(p) = 0 & \Longleftrightarrow & \sigma^{\Box}\big((\sigma^{\Box})^{-1}(h)\big)(p) = 0 \\ & \Longleftrightarrow & \indic{(\sigma^{\Box})^{-1}(h)} = \bigwedge\set{\indic{V}}{h(\indic{V}) = 0} \leq p \\ & \Longleftrightarrow & h(p) = 0 \end{array}$$ \noindent For $(\Leftarrow)$ write $p = \indic{U}$, so that $h(\indic{U}) = 0$. Hence $\bigwedge\set{\indic{V}}{h(\indic{V}) = 0} \leq \indic{U} = p$. For $(\mathbb{R}ightarrow)$, we use that $h$ maps meets to joins and reverses the order. Hence: $$\begin{array}{rcccccl} h(p) & \leq & h(\bigwedge\set{\indic{V}}{h(\indic{V}) = 0}) & = & \bigvee\set{h(\indic{V})}{h(\indic{V}) = 0} & = & 0. 
\end{array}$$ The modality $\tau^{\Box} \colon \powersetsymbol(2) \rightarrow \op{2}$ corresponding to $\sigma^{\Box}$is obtained as: $$\begin{array}{rcccccl} \tau^{\Box}(U) & = & \sigma^{\Box}(U)(\idmap[2]) & = & \left\{\begin{array}{ll} 0 \quad & \mbox{if } \allin{x}{U}{x=1} \\ 1 & \mbox{otherwise} \end{array}\right\} & = & \neg(U\subseteq \{1\}). \end{array}$$ There is also a map: $$\xymatrix@R-2pc{ \powersetsymbol(X)\ar[rr]^-{\sigma^{\Diamond}} & & T(X)=\Cat{CL}M(2^{X}, \op{2}) \\ U\ar@{|->}[rr] & & {\lamin{p}{2^{X}}{\left\{\begin{array}{ll} 1 \quad & \mbox{if } \allin{x}{U}{p(x)=0} \\ 0 & \mbox{otherwise} \end{array}\right.}} }\hspace*{5em}$$ We prove that $\sigma^{\Diamond}(U)$ sends meets to join: $$\begin{array}{rcl} \sigma^{\Diamond}(U)(\bigwedge_{i}p_{i}) = 0 & \Longleftrightarrow & \exin{x}{U}{(\bigwedge_{i}p_{i})(x) = 1} \\ & \Longleftrightarrow & \exin{x}{U}{\all{i}{p_{i}(x) = 1}} \\ & \Longleftrightarrow & \end{array}$$ \noindent The induced modality $\tau^{\Diamond} \colon \powersetsymbol(2) \rightarrow \op{2}$ is given by: $$\begin{array}{rcccccl} \tau^{\Diamond}(U) & = & \sigma^{\Diamond}(U)(\idmap[2]) & = & \left\{\begin{array}{ll} 0 \quad & \mbox{if } \allin{x}{\neg U}{x=1} \\ 1 & \mbox{otherwise} \end{array}\right\} & = & \neg(1\in U). \end{array}$$ There is, by-the-way, a slightly different adjunction $\Cat{Sets} \leftrightarrows \op{\big(\Cat{CL}J\big)}$ but it also does not give the $\diamond$-formula. This adjunction sends a set $X$ to the complete lattice $\op{\powersetsymbol(X)}$, and involves: $$\begin{prooftree} \xymatrix{L\ar[r]^-{f} & \op{\powersetsymbol(X)} \mbox{ $\bigvee$-preserving}} \Justifies \xymatrix{X\ar[r]_-{g} & L} \end{prooftree}$$ \noindent via: $$\begin{array}{rclcrcl} \overline{f}(x) & = & \bigvee\setin{a}{L}{x\in f(a)} & \qquad\mbox{and} & \qquad \overline{g}(a) & = & \set{x}{a \leq g(x)}. 
\end{array}$$ \noindent This works since: $$\begin{array}{rcl} \overline{g}(\bigvee_{i}a_{i}) & = & \set{x}{\bigvee_{i}a_{i} \leq g(x)} \\ & = & \set{x}{\all{i}{a_{i} \leq g(x)}} \\ & = & \bigcap_{i}\set{x}{a_{i} \leq g(x)} \\ & = & \bigcap_{i} \overline{g}(a_{i}). \end{array}$$ \noindent And: $$\begin{array}{rcl} \overline{\overline{g}}(x) & = & \bigvee\set{a}{x\in\overline{g}(a)} \\ & = & \bigvee\set{a}{a \leq g(x)} \\ & = & g(x) \\ x \in \overline{\overline{f}}(a) & \Leftrightarrow & a \leq \overline{f}(x) = \bigvee\set{b}{x\in f(b)} \\ & \Leftrightarrow & x \in f(a). \end{array}$$ \noindent In the last step the direction $(\Leftarrow)$ is obvious. For $(\mathbb{R}ightarrow)$ let $a \leq \overline{f}(x)$, then we are done by: $$\begin{array}{rcccccl} f(a) & \supseteq & f(\overline{f}(x)) & = & \bigcap\set{f(b)}{x\in f(b)} & \ni & x. \end{array}$$ \noindent In this case we get for $g\colon X \rightarrow \powersetsymbol(Y)$, $$\begin{array}{rcccl} \ensuremath{\mathrm{Pred}}(g)(V) & = & \overline{g}(V) & = & \set{x}{V \subseteq g(x)}. \end{array}$$ } } \subsection{Sets and Boolean algebras}\label{SetsBASubsec} We further restrict the adjunction $\op{\Cat{MSL}} \leftrightarrows \Cat{Sets}$ from Subsection~\ref{SetsMSLSubsec} to the category $\Cat{BA}$ of Boolean algebras. 
$$\vcenter{\xymatrix@R-2pc{ \op{\Cat{BA}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,2)} \\ \dashv \\ \Cat{Sets}\ar@/^2ex/[uu]^{\powersetsymbol = \ensuremath{\mathrm{Hom}}(-,2)}\ar@(dl,dr)_{\mathcal{U}=\Cat{BA}(\powersetsymbol(-), 2)} }} \qquad \begin{prooftree} \xymatrix@C+.5pc{Y\ar[r]^-{\Cat{BA}} & \powersetsymbol(X)} \Justifies \xymatrix{X\ar[r]_-{\Cat{Sets}} & \Cat{BA}(Y, 2)} \end{prooftree} \qquad \vcenter{\xymatrix@R-1.5pc@C-2pc{ \op{\Cat{BA}}\ar@/^0.7em/[rr] & \top & \EM\rlap{$(\mathcal{U}) = \Cat{CH}$}\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{U})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \noindent The functor $\ensuremath{\mathrm{Hom}}(-, 2) \colon \op{\Cat{BA}} \rightarrow \Cat{Sets}$ sends a Boolean algebra $Y$ to the set $\Cat{BA}(Y,2)$ of Boolean algebra maps $Y\rightarrow 2$. They can be identified with \emph{ultrafilters} of $Y$. The resulting monad $\mathcal{U} = \Cat{BA}(\powersetsymbol(-), 2)$ is the \emph{ultrafilter monad}, sending a set $X$ to the BA-maps $\powersetsymbol(X) \rightarrow 2$, or equivalently, the ultrafilters of $\powersetsymbol(X)$. An important result of Manes (see~\cite{Manes69}, and also~\cite[III, 2.4]{Johnstone82}) says that the category of Eilenberg-Moore algebras of the ultrafilter monad $\mathcal{U}$ is the category $\Cat{CH}$ of compact Hausdorff spaces. This adjunction $\op{\Cat{BA}} \rightleftarrows \Cat{CH}$ restricts to an equivalence $\op{\Cat{BA}} \simeq \Cat{Stone}$ called Stone duality, where $\Cat{Stone} \hookrightarrow \Cat{CH}$ is the full subcategory of Stone spaces --- in which each open subset is the union of the clopens contained in it. \subsection{Sets and complete Boolean algebras}\label{SetsCBASubsec} We can restrict the adjunction $\op{\Cat{BA}} \rightleftarrows \Cat{Sets}$ from the previous subsection to an adjunction $\op{\Cat{CBA}} \rightleftarrows \Cat{Sets}$ between \emph{complete} Boolean algebras and sets.
The resulting monad on $\Cat{Sets}$ is of the form $X \mapsto \Cat{CBA}(\powersetsymbol(X), 2)$. But here we hit a wall, since this monad is the identity. \begin{lemma} \label{CBALem} For each set $X$ the unit map $\eta \colon X \rightarrow \Cat{CBA}(\powersetsymbol(X), 2)$, given by $\eta(x)(U) = 1$ iff $x\in U$, is an isomorphism. \end{lemma} \begin{myproof} Let $h\colon \powersetsymbol(X) \rightarrow 2$ be a map of complete Boolean algebras, preserving the BA-structure and all joins (unions). Since each $U\in\powersetsymbol(X)$ can be described as union of singletons, the function $h$ is determined by its values $h(\{x\})$ for $x\in X$. We have $1 = h(X) = \bigcup_{x\in X} h(\{x\})$. Hence $h(\{x\}) = 1$ for some $x\in X$. But then $h(X - \{x\}) = h(\neg \{x\}) = \neg h(\{x\}) = \neg 1 = 0$. This implies $h(\{x'\}) = 0$ for each $x'\neq x$. Hence $h = \eta(x)$. \hspace*{\fill}$\QEDbox$ \end{myproof} \subsection{Posets and complete lattices}\label{PosetCLSubsec} We return to complete lattices, from Subsection~\ref{SetsCLSubsec}, but now consider them with join-preserving maps: $$\vcenter{\xymatrix@R-2pc{ \op{\big(\Cat{CL}J\big)}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,2)\cong\op{(-)}} \\ \dashv \\ \Cat{PoSets}\ar@/^2ex/[uu]^{\ensuremath{\mathrm{Up}} = \ensuremath{\mathrm{Hom}}(-,2)}\ar@(dl,dr)_{\ensuremath{\mathrm{Dwn}}} }} \quad \begin{prooftree} \xymatrix@C+.5pc{L\ar[r]^-{\Cat{CL}J} & \ensuremath{\mathrm{Up}}(X)} \Justifies \xymatrix{X\ar[r]_-{\Cat{PoSets}} & \op{L}} \end{prooftree} \quad \vcenter{\xymatrix@R-1.5pc@C-2.5pc{ \op{\big(\Cat{CL}J\big)}\ar@/^0.7em/[rr] & \top & \EM(\ensuremath{\mathrm{Dwn}})\rlap{$ =\! \Cat{CL}J$}\ar@/^0.6em/[ll] \\ & \Kl(\ensuremath{\mathrm{Dwn}})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{10em}$$ \noindent Recall from Subsection~\ref{SetsPosetsSubsec} that we write $\ensuremath{\mathrm{Up}}(X)$ for the poset of upsets in a poset $X$, ordered by inclusion.
This poset is a complete lattice via unions. For a monotone function $f\colon X \rightarrow Y$ between posets, the inverse image map $f^{-1}$ restricts to $\ensuremath{\mathrm{Up}}(Y) \rightarrow \ensuremath{\mathrm{Up}}(X)$ and preserves unions. This gives the functor $\ensuremath{\mathrm{Up}} \colon \Cat{PoSets} \rightarrow \op{(\Cat{CL}J)}$, which is isomorphic to $\ensuremath{\mathrm{Hom}}(-,2)$, as already noted in Subsection~\ref{SetsPosetsSubsec}. \auxproof{ If $V \subseteq Y$ is a downset, and $x' \geq x \in f^{-1}(V)$, then $f(x') \geq f(x) \in V$, so that $f(x') \in V$, and $x'\in f^{-1}(V)$. For an upset $U\subseteq X$ define $\widehat{U}\colon X \rightarrow 2$ by $\widehat{U}(x) = 1$ iff $x\in U$. This map is monotone: if $x \leq x'$, and $\widehat{U}(x) = 1$, then $x\in U$, so $x'\in U$, and thus $\widehat{U}(x') = 1$. For a monotone map $\varphi \colon X \rightarrow 2$ define $\widehat{\varphi} = \set{x}{\varphi(x) = 1}$. This is an upset: if $x' \geq x\in \widehat{\varphi}$, then $\varphi(x') \geq \varphi(x) = 1$, so $x'\in \widehat{\varphi}$. We have: $$\begin{array}{rcccccl} \widehat{\widehat{U}} & = & \set{x}{\widehat{U}(x) = 1} & = & \set{x}{x\in U} & = & U. \end{array}$$ \noindent And: $$\begin{array}{rcccl} \widehat{\widehat{\varphi}}(x) = 1 & \Longleftrightarrow & x \in \widehat{\varphi} & \Longleftrightarrow & \varphi(x) = 1. \end{array}$$ } The downgoing functor $\ensuremath{\mathrm{Hom}}(-,2) \colon \op{(\Cat{CL}J)} \rightarrow \Cat{PoSets}$ is isomorphic to taking the opposite order $\op{(-)}$, see Lemma~\ref{CLTwoLem}. A map $f\colon L \rightarrow K$ in $\Cat{CL}J$ is mapped to the monotone adjoint function $f^{\#} \colon \op{K} \rightarrow \op{L}$, as in~\eqref{CLmapAdjointEqn}, given by $f^{\#}(a) = \bigvee\set{b}{f(b) \leq a}$. 
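For finite powerset lattices this right adjoint can be computed directly from the formula. The following sketch (in Python, with illustrative names) computes $f^{\#}$ by brute force and checks the defining Galois property $f(b) \leq a$ iff $b \leq f^{\#}(a)$:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of a finite set, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def right_adjoint(f, L):
    """f#(a) = join of { b in L | f(b) <= a }, with <= given by inclusion."""
    return lambda a: frozenset(chain.from_iterable(b for b in L if f(b) <= a))

L = powerset({1, 2, 3})
f = lambda U: frozenset(x % 2 for x in U)     # preserves unions (joins)
f_sharp = right_adjoint(f, L)

# The defining Galois property: f(b) <= a  iff  b <= f#(a).
for a in powerset({0, 1}):
    for b in L:
        assert (f(b) <= a) == (b <= f_sharp(a))
```

Note that the adjoint exists only because $f$ preserves all joins; for an arbitrary monotone map the set $\set{b}{f(b) \leq a}$ need not have a greatest element.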
\auxproof{ We use the isomorphism on the left in~\eqref{CLJTwoEqn} twice: $$\begin{array}{rcccccl} \op{f}(a) & = & \widehat{\widehat{a} \mathrel{\circ} f} & = & \bigvee\set{b}{\widehat{a}(f(b)) = 0} & = & \bigvee\set{b}{f(b) \leq a}. \end{array}$$ } We elaborate the bijective correspondence in the middle in detail. \begin{itemize} \item Given a join preserving map $f\colon L \rightarrow \ensuremath{\mathrm{Up}}(X)$ we define $\overline{f} \colon X \rightarrow \op{L}$ in $\Cat{PoSets}$ as $\overline{f}(x) = \bigvee\setin{a}{L}{x \not\in f(a)}$. It is easy to see that $\overline{f}$ is monotone. \auxproof{ Let $x \leq y$. Since each $f(a)$ is an upset, we have: $$\begin{array}{rcl} \set{a}{x\not\in f(a)} & \supseteq & \set{a}{y\not\in f(a)} \end{array}$$ \noindent and thus: $$\begin{array}{rcccccl} \overline{f}(x) & = & \bigvee\set{a}{x\not\in f(a)} & \geq & \bigvee\set{a}{y\not\in f(a)} & = & \overline{f}(y). \end{array}$$ } \item In the other direction, given a monotone function $g\colon X \rightarrow \op{L}$ we take $\overline{g} \colon L \rightarrow \ensuremath{\mathrm{Up}}(X)$ to be $\overline{g}(a) = \setin{x}{X}{a \not\leq g(x)}$. This yields an upset: if $x' \geq x \in \overline{g}(a)$, then $a \not\leq g(x')$. If $a \leq g(x')$ then $a \leq g(x)$ since $g(x') \leq g(x)$ because $g$ reverses the order. This map $\overline{g}$ preserves joins since: $$\begin{array}{rcl} x \not\in \overline{g}(\bigvee_{i}a_{i}) \hspace*{\arraycolsep}\Longleftrightarrow\hspace*{\arraycolsep} \bigvee_{i}a_{i} \leq g(x) & \Longleftrightarrow & \all{i}{a_{i} \leq g(x)} \\ & \Longleftrightarrow & \all{i}{x \not\in \overline{g}(a_{i})} \hspace*{\arraycolsep}\Longleftrightarrow\hspace*{\arraycolsep} x \not\in \bigcup_{i}\overline{g}(a_{i}). \end{array}$$ \end{itemize} \noindent The transformations are each other's inverse: $$\begin{array}{rcccccl} \overline{\overline{g}}(x) & = & \bigvee\set{a}{x\not\in \overline{g}(a)} & = & \bigvee\set{a}{a \leq g(x)} & = & g(x). 
\end{array}$$ \noindent And: $$\begin{array}{rcccl} x\not\in\overline{\overline{f}}(a) & \Longleftrightarrow & a \leq \overline{f}(x) = \bigvee\set{b}{x\not\in f(b)} & \smash{\stackrel{(*)}{\Longleftrightarrow}} & x \not\in f(a). \end{array}$$ \noindent The direction $(\Leftarrow)$ of the marked equivalence is obvious, and for $(\Rightarrow)$ we reason as follows. Let $a \leq \overline{f}(x) = \bigvee\set{b}{x\not\in f(b)}$. Then, using that $f$ preserves joins: $$\begin{array}{rcccl} f(a) & \subseteq & f\big(\bigvee\set{b}{x\not\in f(b)}\big) & = & \bigcup\set{f(b)}{x\not\in f(b)}. \end{array}$$ \noindent Hence if $x\in f(a)$, then $x\in f(b)$ for some $b\in L$ with $x\not\in f(b)$. Clearly, this is impossible. We notice that the induced monad on $\Cat{PoSets}$ is given by taking downsets $\ensuremath{\mathrm{Dwn}}(-)$, since the reversed poset $\op{\ensuremath{\mathrm{Up}}(X)}$ is the poset $\ensuremath{\mathrm{Dwn}}(X)$ of downsets of $X$, ordered by inclusion. The isomorphism $\op{\ensuremath{\mathrm{Up}}(X)} \cong \ensuremath{\mathrm{Dwn}}(X)$ is given by complements. For a monotone map $f\colon X \rightarrow Y$ the function $\ensuremath{\mathrm{Dwn}}(f) \colon \ensuremath{\mathrm{Dwn}}(X) \rightarrow \ensuremath{\mathrm{Dwn}}(Y)$ sends a downset $U\subseteq X$ to the downclosure of the image: $\mathop{\downarrow\!} f(U) = \setin{y}{Y}{\exin{x}{U}{y\leq f(x)}}$. This function $\ensuremath{\mathrm{Dwn}}(f)$ is clearly monotone. \auxproof{ For an upset $U \subseteq X$ the complement $\neg U$ is a downset: let $x' \leq x \in \neg U$; we need to prove $x'\in\neg U$. If not, then $x'\in U$, but then also $x\in U$ since $U$ is an upset. Similarly, if $V \subseteq X$ is a downset, then $\neg V$ is an upset: if $x' \geq x \in \neg V$, then $x'\in\neg V$, because if $x'\in V$, then $x\in V$ since $V$ is a downset.
} If we incorporate this isomorphism $\op{\ensuremath{\mathrm{Up}}(X)} \cong \ensuremath{\mathrm{Dwn}}(X)$, then the adjoint correspondence specialises to: \begin{equation} \label{PosetCLMuDiamondCorr} \begin{prooftree} \xymatrix@C+.5pc{\ensuremath{\mathrm{Up}}(Y)\ar[r]^-{f} & \ensuremath{\mathrm{Up}}(X)} \Justifies \xymatrix{X\ar[r]_-{g} & \ensuremath{\mathrm{Dwn}}(Y)} \end{prooftree} \qquad\mbox{given by}\qquad \left\{\begin{array}{rcl} \overline{f}(x) & = & \bigcap\setin{U}{\ensuremath{\mathrm{Dwn}}(X)}{x\not\in f(\neg U)} \\ \overline{g}(V) & = & \setin{x}{X}{g(x) \cap V \neq \emptyset} \end{array}\right. \end{equation} \noindent We see that in this adjunction $\op{(\Cat{CL}J)} \leftrightarrows \Cat{PoSets}$ gives rise to the $\Diamond$-predicate transformer. Again, healthiness is built into the construction. This correspondence gives a handle on the downsets monad $\ensuremath{\mathrm{Dwn}}$ on $\Cat{PoSets}$. The unit $\eta\colon X \rightarrow \ensuremath{\mathrm{Dwn}}(X)$ is obtained by transposing the identity on $\ensuremath{\mathrm{Up}}(X)$, so that: $$\begin{array}{rcccccccl} \eta(x) & = & \overline{\idmap}(x) & = & \bigcap\setin{U}{\ensuremath{\mathrm{Dwn}}(X)}{x\not\in \neg U} & = & \bigcap\setin{U}{\ensuremath{\mathrm{Dwn}}(X)}{x\in U} & = & \mathop{\downarrow\!} x. 
\end{array}$$ \auxproof{ With this unit we can also derive what $\ensuremath{\mathrm{Dwn}}(f)$, for $f\colon X \rightarrow Y$ should be, namely the $\op{(-)}$ of the transpose $\ensuremath{\mathrm{Up}}(Y) \rightarrow \ensuremath{\mathrm{Up}}(X)$ of $\eta \mathrel{\circ} f\colon X \rightarrow \ensuremath{\mathrm{Dwn}}(Y)$, pre- and post-composed with complement: $$\begin{array}{rcl} \ensuremath{\mathrm{Dwn}}(f)(U) & = & \neg\op{(\overline{\eta \mathrel{\circ} f})}(\neg U) \\ & = & \neg\bigcup\setin{V}{\ensuremath{\mathrm{Up}}(Y)} {\overline{\eta \mathrel{\circ} f}(V) \subseteq \neg U} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(Y)} {\overline{\eta \mathrel{\circ} f}(\neg V) \subseteq \neg U} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(Y)} {\set{x}{\mathop{\downarrow\!} f(x) \cap \neg V \neq \emptyset} \subseteq \neg U} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(Y)} {U \subseteq \set{x}{\mathop{\downarrow\!} f(x) \cap \neg V = \emptyset}} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(Y)}{\allin{x}{U}{\mathop{\downarrow\!} f(x) \subseteq V}} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(Y)}{\allin{x}{U}{f(x) \in V}} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(Y)}{f(U) \subseteq V} \\ & = & \overline{f(U)}. \end{array}$$ } \noindent The multiplication $\mu \colon \ensuremath{\mathrm{Dwn}}^{2}(X) \rightarrow \ensuremath{\mathrm{Dwn}}(X)$ is given by union. To see this, we first transpose the identity map on $\ensuremath{\mathrm{Dwn}}(X)$ upwards, giving a map $\varepsilon\colon \ensuremath{\mathrm{Up}}(X) \rightarrow \ensuremath{\mathrm{Up}}(\ensuremath{\mathrm{Dwn}}(X))$ described by: $$\begin{array}{rcl} \varepsilon(V) & = & \setin{U}{\ensuremath{\mathrm{Dwn}}(X)}{U \cap V \neq \emptyset}. 
\end{array}$$ \noindent We then obtain the multiplication map $\mu$ of the downset monad by applying the $\op{(-)}$ functor to $\varepsilon$, and using complement on both sides: \begin{equation} \label{PosetCLMuEqn} \begin{array}{rcl} \mu(B) \hspace*{\arraycolsep}=\hspace*{\arraycolsep} \neg\op{\varepsilon}(\neg B) & = & \neg\bigcup\setin{V}{\ensuremath{\mathrm{Up}}(X)}{\varepsilon(V) \subseteq \neg B} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(X)}{B \subseteq \neg\varepsilon(\neg V)} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(X)}{\allin{U}{B}{U\cap \neg V = \emptyset}} \\ & = & \bigcap\setin{V}{\ensuremath{\mathrm{Dwn}}(X)}{\bigcup B\subseteq V} \\ & = & \bigcup B. \end{array} \end{equation} \noindent This last equation holds because the union of downclosed sets is downclosed. \auxproof{ We check that $\eta$ and $\mu$ are natural transformations: $$\begin{array}{rcccccccl} \ensuremath{\mathrm{Dwn}}(f)\big(\eta(x)\big) & = & \set{y}{\exin{x'}{\mathop{\downarrow\!} x}{y \leq f(x')}} & = & \set{y}{y \leq f(x)} & = & \mathop{\downarrow\!} f(x) & = & \eta(f(x)). \end{array}$$ \noindent Similarly: $$\begin{array}{rcl} \big(\mu \mathrel{\circ} \ensuremath{\mathrm{Dwn}}^{2}(f)\big)(A) & = & \mu\big(\set{V}{\exin{U}{A}{V \subseteq \ensuremath{\mathrm{Dwn}}(f)(U)}}\big) \\ & = & \bigcup\set{V}{\exin{U}{A}{V \subseteq \ensuremath{\mathrm{Dwn}}(f)(U)}} \\ & = & \bigcup\set{\ensuremath{\mathrm{Dwn}}(f)(U)}{U\in A} \\ & = & \set{y}{\exin{U}{A}{\exin{x}{U}{y \leq f(x)}}} \\ & = & \set{y}{\ex{x\in\bigcup A}{y \leq f(x)}} \\ & = & \set{y}{\ex{x\in\mu(A)}{y \leq f(x)}} \\ & = & \big(\ensuremath{\mathrm{Dwn}}(f) \mathrel{\circ} \mu\big)(A) \end{array}$$ We also check the monad equations. 
First, for $U\in\ensuremath{\mathrm{Dwn}}(X)$, $$\begin{array}{rcl} \big(\mu \mathrel{\circ} \eta\big)(U) & = & \mu(\mathop{\downarrow\!} U) \\ & = & \bigcup \mathop{\downarrow\!} U \\ & = & U \\ \big(\mu \mathrel{\circ} \ensuremath{\mathrm{Dwn}}(\eta)\big)(U) & = & \mu\big(\mathop{\downarrow\!}\set{\mathop{\downarrow\!} x}{x\in U}\big) \\ & = & \bigcup \mathop{\downarrow\!}\set{\mathop{\downarrow\!} x}{x\in U} \\ & = & \bigcup \set{\mathop{\downarrow\!} x}{x\in U} \\ & = & \mathop{\downarrow\!} U \\ & = & U. \end{array}$$ \noindent Next, for $A\in\ensuremath{\mathrm{Dwn}}^{3}(X)$, $$\begin{array}{rcl} \big(\mu \mathrel{\circ} \ensuremath{\mathrm{Dwn}}(\mu)\big)(A) & = & \mu\big(\mathop{\downarrow\!}\set{\mu(B)}{B\in A}\big) \\ & = & \bigcup\mathop{\downarrow\!}\set{\mu(B)}{B\in A} \\ & = & \set{x}{\exin{U}{\mathop{\downarrow\!}\set{\mu(B)}{B\in A}}{x\in U}} \\ & = & \set{x}{\ex{U}{\exin{B}{A}{U\subseteq \mu(B) \mbox{ and } x\in U}}} \\ & = & \set{x}{\exin{B}{A}{x\in \mu(B)}} \\ & = & \set{x}{\exin{B}{A}{\exin{U}{B}{x\in U}}} \\ & = & \set{x}{\exin{U}{\bigcup A}{x\in U}} \\ & = & \set{x}{x \in \bigcup\bigcup A} \\ & = & \bigcup\bigcup A \\ & = & \mu\big(\bigcup A\big) \\ & = & \big(\mu \mathrel{\circ} \mu\big)(A) \end{array}$$ } The category $\EM(\ensuremath{\mathrm{Dwn}})$ of Eilenberg-Moore algebras of this downset monad $\ensuremath{\mathrm{Dwn}}$ is the category $\Cat{CL}J$ of complete lattices and join-preserving maps. Hence the adjunction $\rightleftarrows$ above on the right is an isomorphism of categories. \auxproof{ If $X$ is a complete lattice, then its join map forms a $\ensuremath{\mathrm{Dwn}}$-algebra $\bigvee \colon \ensuremath{\mathrm{Dwn}}(X) \rightarrow X$, since: \begin{itemize} \item $(\bigvee \mathrel{\circ} \eta)(x) = \bigvee \mathop{\downarrow\!} x = x$.
\item For a downset of downsets $A\subseteq \ensuremath{\mathrm{Dwn}}(X)$, $$\begin{array}{rcl} \big(\bigvee \mathrel{\circ} \ensuremath{\mathrm{Dwn}}(\bigvee)\big)(A) & = & \bigvee\mathop{\downarrow\!}\set{\bigvee U}{U\in A} \\ & = & \bigvee\set{\bigvee U}{U\in A} \\ & = & \bigvee\bigcup A \\ & = & \big(\bigvee \mathrel{\circ} \mu\big)(A). \end{array}$$ \end{itemize} \noindent A join-preserving map is obviously a map of algebras. In the other direction, if a poset $X$ carries an algebra $\alpha \colon \ensuremath{\mathrm{Dwn}}(X) \rightarrow X$ in $\Cat{PoSets}$, then each subset $U\subseteq X$ has a join $\bigvee U = \alpha(\mathop{\downarrow\!} U)$, since: \begin{itemize} \item if $x\in U$, then $\mathop{\downarrow\!} x \subseteq \mathop{\downarrow\!} U$, so that $x = \alpha(\mathop{\downarrow\!} x) \leq \alpha(\mathop{\downarrow\!} U) = \bigvee U$. \item If $x\leq y$ for each $x\in U$, then $U \subseteq \mathop{\downarrow\!} y$, so that $\mathop{\downarrow\!} U \subseteq \mathop{\downarrow\!} y$. Hence $\bigvee U = \alpha(\mathop{\downarrow\!} U) \leq \alpha(\mathop{\downarrow\!} y) = y$. \end{itemize} } \subsection{Dcpo's and complete lattices}\label{DcpoCLSubsec} We write $\Cat{Dcpo}$ for the category with directed complete partial orders (dcpos) as objects, and (Scott) continuous functions (preserving directed joins) as morphisms between them. A subset $U\subseteq X$ of a dcpo $X$ is (Scott) open if $U$ is an upset satisfying: for each directed collection $(x_{i})$ with $\bigvee_{i}x_{i}\in U$, we have $x_{i}\in U$ for some index $i$. The (Scott) closed sets are then the downsets that are closed under directed joins. We write $\ensuremath{\mathcal{O}}(X)$ and $\Closed(X)$ for the sets of open and closed subsets of $X$.
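These definitions can be checked concretely on a finite poset: every finite poset is a dcpo, and there every directed collection has a greatest element, so the Scott-open sets are exactly the upsets. The following Python sketch enumerates the open and closed subsets of a made-up four-element diamond poset and confirms that complementation matches them up, as one would expect from the order-reversing isomorphism between opens and closeds:

```python
# A small computational check of the Scott topology on a *finite* poset.
# In a finite poset every directed subset has a greatest element, so the
# Scott-open sets are exactly the up-closed sets, and the Scott-closed
# sets are exactly the down-closed sets.
from itertools import chain, combinations

# Hypothetical example poset: the "diamond" 0 <= a, b <= 1.
elems = ['0', 'a', 'b', '1']
leq = {('0','0'),('a','a'),('b','b'),('1','1'),
       ('0','a'),('0','b'),('0','1'),('a','1'),('b','1')}

def subsets(xs):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, n) for n in range(len(xs) + 1))]

def is_upset(u):
    # u contains every element above each of its members
    return all(y in u for x in u for y in elems if (x, y) in leq)

def is_downset(d):
    # d contains every element below each of its members
    return all(x in d for y in d for x in elems if (x, y) in leq)

opens   = [u for u in subsets(elems) if is_upset(u)]
closeds = [d for d in subsets(elems) if is_downset(d)]

# Complements give the order-reversing bijection between opens and closeds.
assert sorted(map(sorted, closeds)) == \
       sorted(sorted(set(elems) - u) for u in opens)
print(len(opens))
```

For the diamond there are six opens (the upsets) and six closeds (the downsets), matched pairwise by complement; on an infinite dcpo the directed-join condition genuinely cuts down the upsets, which the finite sketch cannot show.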
\begin{lemma} \label{DcpoTwoLem} For each dcpo $X$ there are isomorphisms: \begin{equation} \label{DcpoTwoEqn} \begin{array}{rclcrcl} \ensuremath{\mathcal{O}}(X) & \cong & \Cat{Dcpo}(X, 2) & \qquad\mbox{and}\qquad & \qquad \Closed(X) & \cong & \Cat{Dcpo}(X, \op{2}). \end{array} \end{equation} \noindent Moreover, via complements we have an isomorphism of complete lattices $\op{\ensuremath{\mathcal{O}}(X)} \cong \Closed(X)$. In combination with~\eqref{CLJTwoEqn} we get $\Closed(X) \cong \Cat{CL}J(\ensuremath{\mathcal{O}}(X), 2)$. \end{lemma} \begin{myproof} The first isomorphism in~\eqref{DcpoTwoEqn} sends an open subset $U\subseteq X$ to the function $\widehat{U} \colon X \rightarrow 2$ given by $\widehat{U}(x) = 1$ iff $x\in U$. In the other direction, for a continuous function $\varphi\colon X \rightarrow 2$ we take the open subset $\widehat{\varphi} = \setin{x}{X}{\varphi(x) = 1}$. Similarly, the second isomorphism sends a closed subset $V$ to the function $\widetilde{V} \colon X \rightarrow \op{2}$ with $\widetilde{V}(x) = 1$ iff $x\not\in V$, and conversely sends $\psi \colon X \rightarrow \op{2}$ to $\widetilde{\psi} = \setin{x}{X}{\psi(x) = 0}$. \hspace*{\fill}$\QEDbox$ \auxproof{ Then: $$\begin{array}{rcl} \widehat{U}(\bigvee_{i}x_{i}) = 1 & \Longleftrightarrow & \bigvee_{i}x_{i}\in U \\ & \Longleftrightarrow & \ex{i}{x_{i}\in U} \\ & \Longleftrightarrow & \ex{i}{\widehat{U}(x_{i}) = 1} \\ & \Longleftrightarrow & \bigvee_{i} \widehat{U}(x_{i}) = 1. \end{array}$$ \noindent These operations are monotone. If $U\subseteq V$, then $\widehat{U} \leq \widehat{V}$, since $\widehat{U}(x) \leq \widehat{V}(x)$ for each $x$: if $\widehat{U}(x) = 1$, then $x\in U$, so $x\in V$ and thus $\widehat{V}(x) = 1$. Similarly, if $\varphi \leq \psi$, then $\widehat{\varphi} = \set{x}{\varphi(x) = 1} \subseteq \set{x}{\psi(x) = 1} = \widehat{\psi}$.
These operations are each other's inverse: $$\begin{array}{rcccccl} \widehat{\widehat{U}} & = & \set{x}{\widehat{U}(x) = 1} & = & \set{x}{x\in U} & = & U. \end{array}$$ \noindent And: $$\begin{array}{rcccl} \widehat{\widehat{\varphi}}(x) = 1 & \Longleftrightarrow & x\in\widehat{\varphi} & \Longleftrightarrow & \varphi(x) = 1. \end{array}$$ \noindent For a closed subset $V\subseteq X$ we define $\widetilde{V} \colon X \rightarrow \op{2}$ as $\widetilde{V}(x) = 1$ iff $x\not\in V$. And for a continuous map $\varphi\colon X \rightarrow \op{2}$ we take $\widetilde{\varphi} = \setin{x}{X}{\varphi(x) = 0}$. We check: \begin{itemize} \item The function $\widetilde{V}$ is continuous: $$\begin{array}{rcl} \widetilde{V}(\bigvee_{i}x_{i}) = 1 & \Longleftrightarrow & \bigvee_{i}x_{i} \in \neg V \\ & \Longleftrightarrow & \ex{i}{x_{i}\in \neg V} \\ & \Longleftrightarrow & \ex{i}{\widetilde{V}(x_{i}) = 1} \\ & \Longleftrightarrow & \bigvee_{i}\widetilde{V}(x_{i}) = 1. \end{array}$$ \item The set $\widetilde{\varphi}$ is closed, since its complement $\neg\widetilde{\varphi} = \set{x}{\varphi(x) = 1}$ is open. \item If $U\subseteq V$, then $\widetilde{U} \leq \widetilde{V}$ since $\widetilde{U}(x) \geq \widetilde{V}(x)$ for each $x\in X$: if $\widetilde{V}(x) = 1$, then $x\not\in V$, so $x\not\in U$, and thus $\widetilde{U}(x) = 1$. \item If $\varphi \leq \psi$, then $\widetilde{\varphi} \subseteq \widetilde{\psi}$. Indeed, if $\varphi(x) \geq \psi(x)$, then $\widetilde{\varphi} = \set{x}{\varphi(x) = 0} \subseteq \set{x}{\psi(x) = 0} = \widetilde{\psi}$. \item We have: $$\begin{array}{rcccccl} \widetilde{\widetilde{V}} & = & \set{x}{\widetilde{V}(x) = 0} & = & \set{x}{x \in V} & = & V. \end{array}$$ \item And: $$\begin{array}{rcccccl} \widetilde{\widetilde{\varphi}}(x) = 1 & \Longleftrightarrow & x\not\in\widetilde{\varphi} & \Longleftrightarrow & \varphi(x) \neq 0 & \Longleftrightarrow & \varphi(x) = 1. 
\end{array}$$ \end{itemize} } \end{myproof} We shall be using a subcategory $\Cat{CL}JO \hookrightarrow \Cat{CL}J$ of complete lattices where maps are not only join-preserving but also preserve the top element $1$. The following is then an easy adaptation of Lemma~\ref{CLTwoLem} and Lemma~\ref{DcpoTwoLem}. \begin{lemma} \label{CLJOTwoLem} For a complete lattice $L$ and a dcpo $X$ there are isomorphisms: $$\begin{array}{rclcrcl} \Cat{CL}JO(L,2) & \cong & \op{\big(L\backslash 1\big)} & \qquad\mbox{and thus}\qquad & \Cat{CL}JO(\ensuremath{\mathcal{O}}(X), 2) & \cong & \Closed(X)\backslash\emptyset. \end{array}$$ \end{lemma} \begin{myproof} Following the proof of Lemma~\ref{CLTwoLem} one easily shows that $\varphi\colon L \rightarrow 2$ in $\Cat{CL}J$ preserves $1$ iff the corresponding element $\widehat{\varphi} = \bigvee\set{x}{\varphi(x)=0}\in L$ is not $1$. This gives the first isomorphism. The second one then easily follows, see Lemma~\ref{DcpoTwoLem}. \hspace*{\fill}$\QEDbox$ \auxproof{ \begin{itemize} \item Let $\varphi(1) = 1$. Then $\widehat{\varphi}=1$ leads to a contradiction: $$\begin{array}{rcccccccccl} 1 & = & \varphi(1) & = & \varphi(\widehat{\varphi}) & = & \varphi(\bigvee\set{x}{\varphi(x)=0}) & = & \bigvee\set{\varphi(x)}{\varphi(x)=0} & = & 0. \end{array}$$ \item Let now $\widehat{\varphi} \neq 1$. Assume, towards a contradiction, that $\varphi(1) \neq 1$, so $\varphi(1) = 0$. But then $1 \leq \bigvee\set{x}{\varphi(x)=0} = \widehat{\varphi}$. \end{itemize} The resulting isomorphism $\Cat{CL}JO(\ensuremath{\mathcal{O}}(X), 2) \cong \Closed(X)\backslash\emptyset$ is given as follows. \begin{itemize} \item For $\varphi \colon \ensuremath{\mathcal{O}}(X) \rightarrow 2$, take: $$\begin{array}{rcccl} \widehat{\varphi} & = & \neg \bigcup\setin{U}{\ensuremath{\mathcal{O}}(X)}{\varphi(U) = 0} & = & \bigcap\setin{V}{\Closed(X)}{\varphi(\neg V) = 0}. \end{array}$$ \noindent Suppose $\widehat{\varphi} = \emptyset$.
Then: $$\begin{array}{rcccccccl} 1 & = & \varphi(X) & = & \varphi(\neg\widehat{\varphi}) & = & \bigvee\set{\varphi(U)}{\varphi(U) = 0} & = & 0. \end{array}$$ \item Given $V\in\Closed(X)$ non-empty. Take: $$\begin{array}{rcl} \varphi(U) & = & \left\{\begin{array}{ll} 0 \quad & \mbox{if $U\subseteq \neg V$, i.e. $U\cap V = \emptyset$} \\ 1 & \mbox{otherwise} \end{array}\right. \end{array}$$ \noindent Then: $$\begin{array}{rcl} \varphi(\bigcup_{i}U_{i}) = 0 & \Longleftrightarrow & \bigcup_{i}U_{i} \subseteq \neg V \\ & \Longleftrightarrow & \all{i}{U_{i} \subseteq \neg V} \\ & \Longleftrightarrow & \all{i}{\varphi(U_{i}) = 0} \\ & \Longleftrightarrow & \bigvee_{i} \varphi(U_{i}) = 0 \end{array}$$ \noindent Since $V\neq \emptyset$, we have $\neg V \neq X$, so $X \not\subseteq \neg V$, and thus $\varphi(X) = 1$. \end{itemize} \noindent Finally, $\widehat{\widehat{V}} = \bigcap\set{W}{\widehat{V}(\neg W) = 0} = \bigcap\set{W}{\neg W \subseteq \neg V} = \bigcap\set{W}{V\subseteq W} = V$. Similarly, $\widehat{\widehat{\varphi}}(U) = 0$ iff $U\subseteq \neg\widehat{\varphi} = \bigcup\set{V}{\varphi(V)=0}$ iff $\varphi(U) = 0$. For the last equivalence, the (if)-part is easy. For (only if), let $U\subseteq \neg\widehat{\varphi}$. Then $\varphi(U) \leq \bigvee\set{\varphi(U)}{\varphi(U)=0} = 0$. } \end{myproof} We now restrict the adjunction $\op{(\Cat{CL}J)} \leftrightarrows \Cat{PoSets}$ from Subsection~\ref{PosetCLSubsec} to dcpos. 
$$\vcenter{\xymatrix@R-2pc{ \op{\big(\Cat{CL}JO\big)}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,2)} \\ \dashv \\ \Cat{Dcpo}\ar@/^2ex/[uu]^{\ensuremath{\mathcal{O}} = \ensuremath{\mathrm{Hom}}(-,2)}\ar@(dl,dr)_{\mathcal{H}} }} \qquad \begin{prooftree} \xymatrix@C+.5pc{L\ar[r]^-{\Cat{CL}JO} & \ensuremath{\mathcal{O}}(X)} \Justifies \xymatrix{X\ar[r]_-{\Cat{Dcpo}} & \op{(L\backslash 1)}} \end{prooftree} \qquad \vcenter{\xymatrix@R-2pc@C-2pc{ \op{\big(\Cat{CL}JO\big)}\ar@/^0.7em/[rr] & \top & \EM(\mathcal{H})\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{H})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{0em}$$ \noindent In this situation we encounter Smyth's~\cite{Smyth83} topological view on predicate transformers, as maps between complete lattices of open subsets $\ensuremath{\mathcal{O}}(X) \cong \ensuremath{\mathrm{Hom}}(X, 2)$, see Lemma~\ref{DcpoTwoLem}. Notice that the poset $\op{(L\backslash 1)}$ is a dcpo, with directed joins given by meets in $L$. The adjoint transposes for the above adjunction are defined precisely as in Subsection~\ref{PosetCLSubsec}. We only have to prove some additional properties. \begin{itemize} \item For $f\colon L \rightarrow \ensuremath{\mathcal{O}}(X)$ in $\Cat{CL}JO$ we have $\overline{f}(x) = \bigvee\set{a}{x\not\in f(a)}$. We check: \begin{itemize} \item $\overline{f}(x)\neq 1$ for each $x\in X$. Towards a contradiction, let $\overline{f}(x) = 1$. Then, using that $f$ preserves $1$ and $\bigvee$ we get: $$\begin{array}{rcccccccl} x & \in & X & = & f(1) & = & f(\overline{f}(x)) & = & \bigcup\set{f(a)}{x\not\in f(a)}. \end{array}$$ \noindent We get $x\in \bigcup\set{f(a)}{x\not\in f(a)}$, which is impossible. \item The function $\overline{f} \colon X \rightarrow \op{(L\backslash 1)}$ sends directed joins $\bigvee_{i}x_{i}$ to meets.
By monotonicity of $\overline{f} \colon X \rightarrow \op{L}$ we have $\overline{f}(\bigvee_{i}x_{i}) \leq \overline{f}(x_{j})$, for each $j$, and thus $\overline{f}(\bigvee_{i}x_{i}) \leq \bigwedge_{i}\overline{f}(x_{i})$. For the reverse inequality we reason as follows. \begin{itemize} \item We have $x_{j} \not\in f(\bigwedge_{i}\overline{f}(x_{i}))$, for each $j$; otherwise, because $f\colon L \rightarrow \ensuremath{\mathcal{O}}(X)$ is monotone and preserves joins, we get a contradiction: $$\qquad\begin{array}{rcl} x_{j} \hspace*{\arraycolsep}\in\hspace*{\arraycolsep} f\big(\bigwedge_{i}\overline{f}(x_{i})\big) \hspace*{\arraycolsep}\leq\hspace*{\arraycolsep} f\big(\overline{f}(x_{j})\big) & = & f\big(\bigvee\set{y}{x_{j}\not\in f(y)}\big) \\ & = & \bigcup\set{f(y)}{x_{j}\not\in f(y)}. \end{array}$$ \item Since $f(\bigwedge_{i}\overline{f}(x_{i}))$ is open, we get $\bigvee_{i}x_{i} \not\in f(\bigwedge_{i}\overline{f}(x_{i}))$. \item But then $\bigwedge_{i}\overline{f}(x_{i}) \leq \bigvee\set{y}{\bigvee_{i}x_{i} \not \in f(y)} = \overline{f}(\bigvee_{i}x_{i})$. \end{itemize} \end{itemize} \item We also check that $\overline{g}(a) = \set{x}{a \not\leq g(x)}$ is open. We already know from Subsection~\ref{PosetCLSubsec} that it is an upset. So let $\bigvee_{i}x_{i} \in \overline{g}(a)$. Then $a \not\leq g(\bigvee_{i}x_{i})$. Suppose $a \leq g(x_{i})$ for all $i$. Then $a \leq \bigwedge_{i}g(x_{i}) = g(\bigvee_{i}x_{i})$, which is impossible. Hence $a \not\leq g(x_{i})$ for some index $i$. But then $x_{i} \in \overline{g}(a)$. We need to add that $\overline{g}$ preserves the top element $1$, \textit{i.e.}\xspace that $\overline{g}(1) = X$. We thus have to show that $x\in\overline{g}(1)$ holds for each $x$. But this is clear, since $1 \not\leq g(x)$, \textit{i.e.}\xspace $g(x)\neq 1$. The latter holds because $g$ has type $X \rightarrow \op{(L\backslash 1)}$.
\end{itemize} The induced monad on $\Cat{Dcpo}$ is $X \mapsto \op{\big(\ensuremath{\mathcal{O}}(X)\backslash X\big)} \cong \Closed(X)\backslash\emptyset$. This is what is called the Hoare power monad~\cite{AbramskyJ94a}, written as $\mathcal{H}$, which sends a dcpo to its non-empty closed subsets. For a continuous map $f\colon X \rightarrow Y$ we have $\mathcal{H}(f) \colon \mathcal{H}(X) \rightarrow \mathcal{H}(Y)$ given by $\mathcal{H}(f)(U) = \overline{f(U)}$, that is, by the (topological) closure of the image. The unit $\eta\colon X \rightarrow \mathcal{H}(X)$ of the Hoare monad is determined as $\eta(x) = \mathop{\downarrow\!} x$, and the multiplication $\mu \colon \mathcal{H}^{2}(X) \rightarrow \mathcal{H}(X)$ as $\mu(A) = \overline{\bigcup A}$. This closure arises in the last step of~\eqref{PosetCLMuEqn}. The predicate transformer $\ensuremath{\mathcal{O}}(Y) \rightarrow \ensuremath{\mathcal{O}}(X)$ that is bijectively associated with a Kleisli map $g\colon X \rightarrow \mathcal{H}(Y)$ is the $\Diamond$-version, given by $g^{\Diamond}(V) = \set{x}{V \cap g(x) \neq \emptyset}$. Like in~\eqref{PosetCLMuDiamondCorr} the bijective correspondence has to take the isomorphism $\op{\ensuremath{\mathcal{O}}(X)} \cong \Closed(X)$ via complement $\neg$ into account. The Eilenberg-Moore algebras of the Hoare monad are the dcpos with a binary join operation. They are also called affine complete lattices, see \textit{e.g.}\xspace~\cite{Jacobs94a}. \auxproof{ An Eilenberg-Moore algebra $\alpha \colon \mathcal{H}(X) \rightarrow X$ provides a dcpo $X$ with finite joins, and makes it into a complete lattice. Hence $\EM(\mathcal{H}) = \Cat{CL}J$. If $X$ is a complete lattice we can define $\bigvee \colon \mathcal{H}(X) \rightarrow X$, which is an algebra, as usual. This follows since $\bigvee U = \bigvee \overline{U}$. The direction $(\leq)$ is obvious, and for $(\geq)$ we reason as follows.
We have $U \subseteq \mathop{\downarrow\!}\bigvee U$, and because $\mathop{\downarrow\!}\bigvee U$ is closed also $\overline{U} \subseteq \mathop{\downarrow\!}\bigvee U$. Hence $\bigvee\overline{U} \leq \bigvee U$. Conversely, if $X$ is a dcpo with algebra $\alpha\colon \mathcal{H}(X) \rightarrow X$, then we define finite joins in $X$ as: $$\begin{array}{rcl} x_{1} \vee \cdots \vee x_{n} & = & \alpha\big(\mathop{\downarrow\!} x_{1} \cup \ldots \cup \mathop{\downarrow\!} x_{n}\big). \end{array}$$ \noindent We show that this forms a join. \begin{itemize} \item Clearly $\mathop{\downarrow\!} x_{i} \subseteq \mathop{\downarrow\!} x_{1} \cup \ldots \cup \mathop{\downarrow\!} x_{n}$. Hence $x_{i} = \alpha(\mathop{\downarrow\!} x_{i}) \leq \alpha\big(\mathop{\downarrow\!} x_{1} \cup \ldots \cup \mathop{\downarrow\!} x_{n}\big) = x_{1} \vee \cdots \vee x_{n}$. \item If $x_{i} \leq y$ for each $i$, then $\mathop{\downarrow\!} x_{i} \subseteq \mathop{\downarrow\!} y$, and so $\mathop{\downarrow\!} x_{1} \cup \ldots \cup \mathop{\downarrow\!} x_{n} \subseteq \mathop{\downarrow\!} y$. Hence $x_{1} \vee \cdots \vee x_{n} = \alpha\big(\mathop{\downarrow\!} x_{1} \cup \ldots \cup \mathop{\downarrow\!} x_{n}\big) \leq \alpha(\mathop{\downarrow\!} y) = y$. \end{itemize} $\Cat{Dcpo}\leftrightarrows\op{(\Cat{CL}J)}$ given in: $$\vcenter{\xymatrix@R-2pc{ \op{(\Cat{CL}J)}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,2)\cong\op{(-)}} \\ \dashv \\ \Cat{Dcpo}\ar@/^2ex/[uu]^{\ensuremath{\mathcal{O}} = \Cat{Dcpo}(-,2)}\ar@(dl,dr)_{\mathcal{H}} }} \qquad \begin{prooftree} \xymatrix@C+.5pc{Y\ar[r]^-{f} & \ensuremath{\mathcal{O}}(X) \cong \Cat{Dcpo}(X, 2)} \Justifies \xymatrix{X\ar[r]_-{g} & \op{Y} \cong \Cat{CL}J(Y, 2)} \end{prooftree}$$ \noindent The bijective correspondence is given as follows. 
\begin{itemize} \item Given a join-preserving function $f\colon Y \rightarrow \ensuremath{\mathcal{O}}(X)$ we take $\overline{f} \colon X \rightarrow \op{Y}$ as: $$\begin{array}{rcl} \overline{f}(x) & = & \bigvee\setin{y}{Y}{x\not\in f(y)}. \end{array}$$ \noindent This definition arises through the above isomorphisms $\ensuremath{\mathcal{O}}(X) \cong \Cat{Dcpo}(X, 2)$ and $\Cat{CL}J(Y, 2) \cong \op{Y}$ given as in~\eqref{DcpoMapEqn} and~\eqref{CLJTwoEqn}. \auxproof{ $$\begin{array}{rcccccl} \overline{f}(x) & = & \widehat{\lam{y}{\widehat{f(y)}(x)}} & = & \bigvee\set{y}{\widehat{f(y)}(x) = 0} & = & \bigvee\set{y}{x \not\in f(y)}. \end{array}$$ } It is easy to see that $\overline{f}$ is a monotone map $X \rightarrow \op{Y}$. If $x \leq x'$, then $x\in f(y) \Rightarrow x'\in f(y)$ since $f(y)$ is open and thus an upset, so $\set{y}{x'\not\in f(y)} \subseteq \set{y}{x\not\in f(y)}$, and thus $\overline{f}(x') = \bigvee\set{y}{x'\not\in f(y)} \leq \bigvee\set{y}{x\not\in f(y)} = \overline{f}(x)$. Next, the map $\overline{f}$ is continuous: it sends directed joins to meets. This is the non-trivial part of the proof. By monotonicity we have $\overline{f}(\bigvee_{i}x_{i}) \leq \overline{f}(x_{j})$, for each $j$, and thus $\overline{f}(\bigvee_{i}x_{i}) \leq \bigwedge_{i}\overline{f}(x_{i})$. For the reverse inequality we reason as follows. \begin{itemize} \item We have $x_{j} \not\in f(\bigwedge_{i}\overline{f}(x_{i}))$. If not, then, because $f$ is monotone and preserves joins, we get: $$\begin{array}{rcccccccl} x_{j} & \in & f\big(\bigwedge_{i}\overline{f}(x_{i})\big) & \leq & f\big(\overline{f}(x_{j})\big) & = & f\big(\bigvee\set{y}{x_{j}\not\in f(y)}\big) & = & \bigcup\set{f(y)}{x_{j}\not\in f(y)}. \end{array}$$ \noindent This is impossible. \item Since $f(\bigwedge_{i}\overline{f}(x_{i}))$ is open, we get $\bigvee_{i}x_{i} \not\in f(\bigwedge_{i}\overline{f}(x_{i}))$.
\item But then $\bigwedge_{i}\overline{f}(x_{i}) \leq \bigvee\set{y}{\bigvee_{i}x_{i} \not \in f(y)} = \overline{f}(\bigvee_{i}x_{i})$. \end{itemize} \item In the other direction, given a continuous map $g\colon X \rightarrow \op{Y}$ we define $\overline{g} \colon Y \rightarrow \ensuremath{\mathcal{O}}(X)$ as: $$\begin{array}{rcl} \overline{g}(y) & = & \setin{x}{X}{y \not\leq g(x)}. \end{array}$$ \noindent We first have to check that $\overline{g}(y)$ is open. \begin{itemize} \item If $x'\geq x\in\overline{g}(y)$, then $y \not\leq g(x)$. Since $g(x') \leq g(x)$, having $y \leq g(x')$ is impossible. Hence $x'\in \overline{g}(y)$. \item Let $\bigvee_{i}x_{i} \in \overline{g}(y)$, and assume towards a contradiction that $x_{i}\not\in\overline{g}(y)$ for all $i$. Then $y \leq g(x_{i})$, so that $y \leq \bigwedge_{i} g(x_{i}) = g(\bigvee_{i}x_{i})$, \textit{quod non}. \end{itemize} \noindent Next we have to check that $\overline{g}$ is a join-preserving function $Y \rightarrow \ensuremath{\mathcal{O}}(X)$. Well, $$\begin{array}{rcl} x\not\in \overline{g}(\bigvee_{i}y_{i}) & \Longleftrightarrow & \bigvee_{i}y_{i} \leq g(x) \\ & \Longleftrightarrow & \all{i}{y_{i} \leq g(x)} \\ & \Longleftrightarrow & \all{i}{x \not\in \overline{g}(y_{i})} \\ & \Longleftrightarrow & x\not\in \bigcup_{i}\overline{g}(y_{i}). \end{array}$$ \end{itemize} \noindent Finally we have: $$\begin{array}{rcccccl} \overline{\overline{g}}(x) & = & \bigvee\set{y}{x \not\in \overline{g}(y)} & = & \bigvee\set{y}{y \leq g(x)} & = & g(x). \end{array}$$ \noindent And: $$\begin{array}{rcccl} x \not\in \overline{\overline{f}}(y) & \Longleftrightarrow & y \leq \overline{f}(x) = \bigvee\set{z}{x\not\in f(z)} & \Longleftrightarrow & x \not\in f(y) \end{array}$$ \noindent As usual, $(\Leftarrow)$ is easy, and for $(\Rightarrow)$ we use that $f$ preserves meets in: $$\begin{array}{rcccccl} x & \in & \bigcap\set{f(z)}{x\in f(z)} & = & f(\bigwedge\set{z}{x\in f(z)}) & \subseteq & f(y).
\end{array}$$ Keimel~\cite{Keimel15} describes for dcpo's $X,Y$ the following correspondence $$\begin{prooftree} \xymatrix{\ensuremath{\mathcal{O}}(Y)\ar[r]^-{f} & \ensuremath{\mathcal{O}}(X) \mbox{ in }\Cat{CL}J} \Justifies \xymatrix{X\ar[r]_-{g} & \mathcal{H}(Y)} \end{prooftree}$$ \noindent where $\mathcal{H}(Y) \subseteq \Closed(Y)$ is the Hoare power domain given by \emph{non-empty} closed subsets. The correspondence is given as follows. \begin{itemize} \item Given a join-preserving map $f\colon \ensuremath{\mathcal{O}}(Y) \rightarrow \ensuremath{\mathcal{O}}(X)$ define $\overline{f} \colon X \rightarrow \mathcal{H}(Y)$ as: $$\begin{array}{rcl} \overline{f}(x) & = & \bigcap\set{\neg V}{V\in\ensuremath{\mathcal{O}}(Y) \mbox{ with } x\not\in f(V)}. \end{array}$$ \noindent We first check that $\overline{f}(x)$ is non-empty. Towards a contradiction, let $\overline{f}(x) = \emptyset$. Since $\emptyset$ is open, we get $f(\overline{f}(x)) = f(\emptyset) = \emptyset$. Thus $x\not\in f(\overline{f}(x))$, so that $\overline{f}(x) \subseteq \neg\overline{f}(x)$. ?? Next, $\overline{f}$ preserves directed joins: $$\begin{array}{rcl} y\in\overline{f}(\bigvee_{i}x_{i}) & \Longleftrightarrow & \allin{V}{\ensuremath{\mathcal{O}}(Y)}{\bigvee_{i}x_{i} \not\in f(V) \Rightarrow y\in\neg V} \\ & \Longleftrightarrow & \allin{V}{\ensuremath{\mathcal{O}}(Y)}{y\in V \Rightarrow \bigvee_{i}x_{i} \in f(V)} \\ & \Longleftrightarrow & \allin{V}{\ensuremath{\mathcal{O}}(Y)}{y\in V \Rightarrow \ex{i}{x_{i} \in f(V)}} \\ & \Longleftrightarrow & \allin{V}{\ensuremath{\mathcal{O}}(Y)}{(\all{i}{x_{i} \not\in f(V)}) \Rightarrow y\in\neg V} \\ & \smash{\stackrel{(*)}{\Longleftrightarrow}} & \ex{i}{\allin{V}{\ensuremath{\mathcal{O}}(Y)}{x_{i} \not\in f(V) \Rightarrow y\in\neg V}} \\ & \Longleftrightarrow & y \in \bigcup_{i}\overline{f}(x_{i}).
\end{array}$$ \noindent For the marked implication $(\Leftarrow)$ let $i$ be given, and $V\in\ensuremath{\mathcal{O}}(Y)$ with $\all{i}{x_{i} \not\in f(V)}$. Then $x_{i} \not\in f(V)$ in particular, so that $y\in\neg V$ by assumption. For $(\Rightarrow)$, suppose towards a contradiction $\all{i}{\ex{V}{ x_{i} \not\in f(V) \mbox{ and } y\in V}}$. We pick an $i$, using that a directed collection is non-empty, and a $V\in\ensuremath{\mathcal{O}}(Y)$ with $x_{i}\not\in f(V)$ and $y\in V$. The antecedent then gives $\neg\all{i}{x_{i} \not\in f(V)}$, so that $x_{j} \in f(V)$ for some $j$. The assumption gives a $V'\in\ensuremath{\mathcal{O}}(Y)$ with $x_{j}\not\in f(V')$ and $y\in V'$. Hence $y\in V\cap V'$. \item Given a continuous map $g\colon X \rightarrow \mathcal{H}(Y)$, define $\overline{g} \colon \ensuremath{\mathcal{O}}(Y) \rightarrow \ensuremath{\mathcal{O}}(X)$ as: $$\begin{array}{rcl} \overline{g}(V) & = & \set{x}{g(x) \cap V \neq \emptyset}. \end{array}$$ \noindent We have to check that $\overline{g}(V)$ is open. \begin{itemize} \item If $x' \geq x \in \overline{g}(V)$, say via $y\in g(x) \cap V$, then $y\in g(x') \cap V$ since $g(x) \subseteq g(x')$. \item If $\bigvee_{i}x_{i} \in \overline{g}(V)$, say via $y\in g(\bigvee_{i}x_{i}) \cap V = (\bigcup_{i} g(x_{i}))\cap V = \bigcup_{i} g(x_{i}) \cap V$, then $y\in g(x_{i})\cap V$ for some $i$. But then $x_{i} \in \overline{g}(V)$. \end{itemize} \noindent Next, the function $\overline{g} \colon \ensuremath{\mathcal{O}}(Y) \rightarrow \ensuremath{\mathcal{O}}(X)$ preserves joins (unions): $$\begin{array}{rcl} x \in \overline{g}(\bigcup_{i}U_{i}) & \Longleftrightarrow & g(x) \cap \bigcup_{i}U_{i} = \bigcup_{i}(g(x)\cap U_{i}) \neq \emptyset \\ & \Longleftrightarrow & \ex{i}{g(x) \cap U_{i} \neq \emptyset} \\ & \Longleftrightarrow & \ex{i}{x \in \overline{g}(U_{i})} \\ & \Longleftrightarrow & x \in \bigcup_{i}\overline{g}(U_{i}).
\end{array}$$ \end{itemize} \noindent The correspondence is obtained via: $$\begin{array}{rcl} y\in\overline{\overline{g}}(x) & \Longleftrightarrow & \allin{V}{\ensuremath{\mathcal{O}}(Y)}{x\not\in \overline{g}(V) \Rightarrow y\in\neg V} \\ & \Longleftrightarrow & \allin{V}{\ensuremath{\mathcal{O}}(Y)}{g(x) \cap V = \emptyset \Rightarrow y\not\in V} \\ & \Longleftrightarrow & y \in g(x). \end{array}$$ \noindent The implication $(\Rightarrow)$ is obtained by taking $V = \neg g(x)\in\ensuremath{\mathcal{O}}(Y)$. For $(\Leftarrow)$, let $y\in g(x)$ and $g(x) \cap V = \emptyset$. Then evidently $y\not\in V$. In the other direction: $$\begin{array}{rcl} x \in \overline{\overline{f}}(V) & \Longleftrightarrow & \overline{f}(x) \cap V \neq \emptyset \\ & \Longleftrightarrow & \exin{y}{V}{\allin{U}{\ensuremath{\mathcal{O}}(Y)}{x\not\in f(U) \Rightarrow y\in\neg U}} \\ & \Longleftrightarrow & \exin{y}{V}{\allin{U}{\ensuremath{\mathcal{O}}(Y)}{y\in U \Rightarrow x\in f(U)}} \\ & \Longleftrightarrow & x \in f(V). \end{array}$$ \noindent For the implication $(\Rightarrow)$, let $y\in V$ and take $U = V$. Then $x\in f(V)$. For $(\Leftarrow)$ let $x\in f(V) = f(\bigcup_{y\in V}\{y\}) = \bigcup_{y\in V}f(\{y\})$. Hence $x \in f(\{y\})$ for some $y\in V$. Then, for an arbitrary open $U$, if $y\in U$, then $\{y\} \subseteq U$, so that $x \in f(\{y\}) \subseteq f(U)$. } \subsection{Dcpo's and Preframes}\label{DcpoPreFrmSubsec} A \emph{preframe} is a dcpo with finite meets, in which the binary meet operation $\wedge$ is continuous in both variables. We write $\Cat{PreFrm}$ for the category of preframes, where maps are both (Scott) continuous and preserve finite meets $(\wedge, \top)$. The two-element set $2 = \{0,1\} = \{\bot, \top\}$ is a preframe, with obvious joins and meets. Each set of opens of a topological space is also a preframe.
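The role of $2$ as a dualising preframe can be made concrete on a finite lattice: there every directed subset has a greatest element, so (Scott) continuity of a monotone map is automatic, and a map into $2$ preserving $\top$ and $\wedge$ is the characteristic function of a filter. The following Python sketch checks this on a made-up diamond lattice (the `meet` function below is hard-coded for this particular example):

```python
# A finite sanity check: preframe maps f: L -> 2 correspond to the filters
# f^{-1}(1) of L.  On a finite lattice continuity is automatic, so we only
# need preservation of the top element and of binary meets.
from itertools import combinations, product

# Hypothetical example: the diamond lattice 0 <= a, b <= 1.
elems = ['0', 'a', 'b', '1']
leq = {('0','0'),('a','a'),('b','b'),('1','1'),
       ('0','a'),('0','b'),('0','1'),('a','1'),('b','1')}
# Binary meet, valid for this lattice: incomparable elements meet in 0.
meet = lambda x, y: x if (x, y) in leq else (y if (y, x) in leq else '0')
top, bot = '1', '0'

def is_filter(u):
    # non-empty upset containing top, closed under binary meets
    return (top in u
            and all(meet(x, y) in u for x in u for y in u)
            and all(y in u for x in u for y in elems if (x, y) in leq))

filters = [frozenset(u) for n in range(len(elems) + 1)
           for u in combinations(elems, n) if is_filter(set(u))]

# All maps f: L -> 2 preserving top and binary meets (monotonicity follows).
homs = [f for f in (dict(zip(elems, bits))
                    for bits in product([0, 1], repeat=len(elems)))
        if f[top] == 1
        and all(f[meet(x, y)] == min(f[x], f[y])
                for x in elems for y in elems)]

assert len(homs) == len(filters)
assert {frozenset(x for x in elems if f[x] == 1) for f in homs} == set(filters)
# Maps that additionally preserve the bottom element pick out the
# *proper* filters (all filters except the whole lattice).
assert sum(1 for f in homs if f[bot] == 0) == len(filters) - 1
print(len(filters))
```

For the diamond there are four filters, hence four such maps into $2$; requiring preservation of $0$ as well leaves the three proper filters.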
In fact we shall use a subcategory $\Cat{PreFrm}Z \hookrightarrow \Cat{PreFrm}$ of preframes with a bottom element $0$, which is preserved by (preframe) homomorphisms. We shall use this category as codomain of the functor $\ensuremath{\mathcal{O}} = \ensuremath{\mathrm{Hom}}(-,2) \colon \Cat{Dcpo} \rightarrow \op{(\Cat{PreFrm}Z)}$. We obtain a functor in the opposite direction also by homming into $2$. We note that for a preframe $L$ the preframe-homomorphisms $f\colon L \rightarrow 2$ correspond to Scott open filters $f^{-1}(1) \subseteq L$, that is, to filters which are at the same time open subsets in the Scott topology. If we require that $f$ is a map in $\Cat{PreFrm}Z$, additionally preserving $0$, then the Scott open filter $f^{-1}(1)$ is proper, that is, not the whole of $L$. We shall write the resulting functor as $\OF = \ensuremath{\mathrm{Hom}}(-,2) \colon \op{(\Cat{PreFrm}Z)} \rightarrow \Cat{Dcpo}$. Here we use that these proper Scott open filters, ordered by inclusion, form a dcpo. \auxproof{ First we check the correspondence: $$\begin{prooftree} {\xymatrix{L\ar[r]^-{f} & 2 \mbox{ in $\Cat{PreFrm}$}}} \Justifies U\subseteq L \mbox{ Scott open filter} \end{prooftree}$$ \begin{itemize} \item Given $f$, then $\overline{f} = f^{-1}(1) = \setin{a}{L}{f(a) = 1} \subseteq L$. We check that this is a Scott open filter. \begin{itemize} \item Clearly $\top\in\overline{f}$ since $f(\top) = 1$. \item If $a,b\in \overline{f}$, then $a\wedge b$ too, since $f(a\wedge b) = f(a) \wedge f(b) = 1 \wedge 1 = 1$. \item If $b \geq a\in \overline{f}$, then $f(b) \geq f(a) = 1$, so $f(b) = 1$ too. Hence $\overline{f}$ is a filter. \item It is Scott open since if $\bigvee_{i}a_{i} \in \overline{f}$, then $\bigvee_{i}f(a_{i}) = f(\bigvee_{i}a_{i}) = 1$, so that $f(a_{i}) = 1$ for some $i$. \end{itemize} \item Conversely, given a Scott open filter $U\subseteq L$, we define a function $\overline{U} \colon L \rightarrow 2$ by $\overline{U}(a) = 1$ iff $a\in U$. 
We check that $\overline{U}$ is a map of preframes. \begin{itemize} \item $\overline{U}(\top) = 1$ since $\top\in U$. \item $\overline{U}$ is monotone since: if $a\leq b$, and $\overline{U}(a) = 1$, then $a\in U$, and thus $b\in U$, since $U$ is an upset, so that $\overline{U}(b) = 1$. \item $\overline{U}$ preserves $\wedge$. By monotonicity we have $\overline{U}(a\wedge b) \leq \overline{U}(a) \wedge \overline{U}(b)$. For the other direction, let $\overline{U}(a) \wedge \overline{U}(b) = 1$. Then $\overline{U}(a) = \overline{U}(b) = 1$, so that $a\in U$ and $b\in U$. But then $a\wedge b \in U$, so that $\overline{U}(a\wedge b) = 1$. \item Again by monotonicity we have $\bigvee_{i}\overline{U}(a_{i}) \leq \overline{U}(\bigvee_{i}a_{i})$. For the other direction, let $\overline{U}(\bigvee_{i}a_{i}) = 1$. Then $\bigvee_{i}a_{i} \in U$, so that $a_{i}\in U$ for some $i$. Hence $\overline{U}(a_{i}) = 1$, and thus $\bigvee_{i}\overline{U}(a_{i}) = 1$. \end{itemize} \end{itemize} \noindent These operations are each other's inverses: $$\begin{array}{rcccl} \overline{\overline{f}}(a) = 1 & \Longleftrightarrow & a\in \overline{f} & \Longleftrightarrow & f(a) = 1. \end{array}$$ \noindent And: $$\begin{array}{rcccccl} \overline{\overline{U}} & = & \set{a}{\overline{U}(a) = 1} & = & \set{a}{a\in U} & = & U. \end{array}$$ We check that we have a functor $\OF \colon \op{\Cat{PreFrm}} \rightarrow \Cat{Dcpo}$. Suppose we have a directed collection $(U_{i})$ of open filters. We claim that the union $U = \bigcup_{i}U_{i}$ is again an open filter. \begin{itemize} \item We have $\top\in U$ because a directed collection is non-empty. \item If $a_{1}, a_{2} \in U$, say via $a_{j} \in U_{i_j}$, then there is an index $k$ with $U_{i_1} \subseteq U_{k}$ and $U_{i_2} \subseteq U_{k}$, using directedness. Hence $a_{1},a_{2} \in U_{k}$, and thus also $a_{1} \wedge a_{2} \in U_{k} \subseteq U$. \item If $b \geq a \in U$, say via $a\in U_{i}$, then $b\in U_{i} \subseteq U$.
\item If $\bigvee_{j}a_{j}\in U$, say via $\bigvee_{j}a_{j}\in U_{i}$, then $a_{j}\in U_{i} \subseteq U$ for some $j$. \end{itemize} \noindent If $f\colon L \rightarrow K$ is a map of preframes, then $f^{-1}$ restricts to $\OF(K) \rightarrow \OF(L)$. The easiest way to see this is via the above correspondence $\OF(K) \cong \ensuremath{\mathrm{Hom}}(K,2)$. The inverse image map $f^{-1}$ obviously preserves (directed) unions, and is thus Scott continuous. } If we put things together we obtain: $$\vcenter{\xymatrix@R-2pc{ \op{(\Cat{PreFrm}Z)}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,2)\cong\OF} \\ \dashv \\ \Cat{Dcpo}\ar@/^2ex/[uu]^{\ensuremath{\mathcal{O}} = \ensuremath{\mathrm{Hom}}(-,2)}\ar@(dl,dr)_{\mathcal{S}} }} \qquad \begin{prooftree} \xymatrix@C+1pc{L\ar[r]^-{\Cat{PreFrm}Z} & \ensuremath{\mathcal{O}}(X)} \Justifies \xymatrix{X\ar[r]_-{\Cat{Dcpo}} & \OF(L)} \end{prooftree} \qquad \vcenter{\xymatrix@R-1pc@C-2pc{ \op{(\Cat{PreFrm}Z)}\ar@/^0.7em/[rr] & \top & \;\EM(\mathcal{S})\quad\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{S})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{0em}$$ \auxproof{ We check the correspondence: $$\begin{prooftree} \xymatrix@C+.5pc{L\ar[r]^-{f} & \ensuremath{\mathcal{O}}(X)} \Justifies \xymatrix{X\ar[r]_-{g} & \OF(L)} \end{prooftree}$$ \begin{itemize} \item Given $f\colon L \rightarrow \ensuremath{\mathcal{O}}(X)$, take $\overline{f}(x) = \setin{a}{L}{x\in f(a)}$. This is well-defined. \begin{itemize} \item $\top\in\overline{f}(x)$ since $x\in f(\top) = X$. \item If $b \geq a \in \overline{f}(x)$, then $x \in f(a) \subseteq f(b)$, so $b\in \overline{f}(x)$. \item If $a,b\in \overline{f}(x)$, then $x\in f(a)$ and $x\in f(b)$, so that $x\in f(a) \cap f(b) = f(a \wedge b)$, and thus $a\wedge b \in \overline{f}(x)$. \item If $\bigvee_{i}a_{i} \in \overline{f}(x)$, then $x \in f(\bigvee_{i}a_{i}) = \bigcup_{i} f(a_{i})$, so that $x\in f(a_{i})$ for some $i$. But then $a_{i} \in \overline{f}(x)$.
These points show that $\overline{f}(x)$ is a Scott open filter. \item $\overline{f}$ is continuous, since for a directed collection $(x_{i})$ we have: $$\begin{array}{rcl} a \in \overline{f}(\bigvee_{i}x_{i}) & \Longleftrightarrow & \bigvee_{i}x_{i} \in f(a) \\ & \Longleftrightarrow & \ex{i}{x_{i} \in f(a)} \\ & \Longleftrightarrow & \ex{i}{a \in \overline{f}(x_{i})} \\ & \Longleftrightarrow & a \in \bigcup_{i}\overline{f}(x_{i}). \end{array}$$ \end{itemize} \item Conversely, given $g\colon X \rightarrow \OF(L)$, define $\overline{g}(a) = \setin{x}{X}{a\in g(x)}$. This is also well-defined. \begin{itemize} \item $y \geq x \in \overline{g}(a)$ implies $a\in g(x) \subseteq g(y)$, so that $y\in \overline{g}(a)$. \item If $\bigvee_{i}x_{i} \in \overline{g}(a)$, then $a \in g(\bigvee_{i}x_{i}) = \bigcup_{i}g(x_{i})$, so that $a\in g(x_{i})$ for some $i$. But then $x_{i} \in \overline{g}(a)$. Hence $\overline{g}(a)$ is a Scott open subset. \item $\overline{g}(\top) = \set{x}{\top\in g(x)} = X$ since $g(x)$ is a filter. \item $\overline{g}(a\wedge b) = \set{x}{a\wedge b \in g(x)} = \set{x}{a \in g(x) \mbox{ and } b \in g(x)} = \set{x}{a \in g(x)} \cap \set{x}{b \in g(x)} = \overline{g}(a) \cap \overline{g}(b)$. \item $\overline{g}(\bigvee_{i}a_{i}) = \set{x}{\bigvee_{i}a_{i} \in g(x)} = \set{x}{\ex{i}{a_{i}\in g(x)}} = \bigcup_{i} \set{x}{a_{i}\in g(x)}$. \end{itemize} \end{itemize} \noindent Clearly we have: $$\begin{array}{rcl} \overline{\overline{f}}(a) & = & \set{x}{a \in \overline{f}(x)} \\ & = & \set{x}{x\in f(a)} \\ & = & f(a) \\ \overline{\overline{g}}(x) & = & \set{a}{x \in \overline{g}(a)} \\ & = & \set{a}{a \in g(x)} \\ & = & g(x). \end{array}$$ } \noindent The induced monad $\mathcal{S}(X) = \OF(\ensuremath{\mathcal{O}}(X))$ takes the proper Scott open filters in the preframe $\ensuremath{\mathcal{O}}(X)$ of Scott open subsets of a dcpo $X$. This is the Smyth power domain, see~\cite{Keimel12}. 
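For instance, for the two-element dcpo $2 = \{0 \leq 1\}$ we have $\ensuremath{\mathcal{O}}(2) = \{\emptyset, \{1\}, \{0,1\}\}$. The proper Scott open filters in this three-element chain are $\{\{0,1\}\}$ and $\{\{0,1\}, \{1\}\}$, ordered by inclusion, so that $\mathcal{S}(2) \cong 2$. These two filters consist precisely of the open neighbourhoods of the compact saturated subsets $\{0,1\}$ and $\{1\}$, respectively.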
We recall the Hofmann-Mislove theorem~\cite{HofmannM81,KeimelP94}: in a sober topological space $Y$, the Scott open filters in $\ensuremath{\mathcal{O}}(Y)$ correspond to compact saturated subsets of $Y$. This subset is non-empty if and only if the corresponding filter is proper. We also recall that if $X$ is a continuous dcpo, where each element is the directed join of elements way below it, then its Scott topology is sober, see \textit{e.g.}\xspace~\cite[VII, Lemma~2.6]{Johnstone82}. This explains why the Smyth power domain is often defined on continuous dcpos. We shall not follow this route here, and will continue to work with functions instead of with subsets. The induced functor $\ensuremath{\mathrm{Pred}} \colon \Kl(\mathcal{S}) \rightarrow \op{(\Cat{PreFrm}Z)}$ is full and faithful, corresponding to healthiness of the predicate transformer semantics. Specifically, for a Kleisli map $g\colon X \rightarrow \mathcal{S}(Y) = \OF(\ensuremath{\mathcal{O}}(Y))$ we have $\ensuremath{\mathrm{Pred}}(g) \colon \ensuremath{\mathcal{O}}(Y) \rightarrow \ensuremath{\mathcal{O}}(X)$ given by the preframe homomorphism: $$\begin{array}{rcl} \ensuremath{\mathrm{Pred}}(g)(V) & = & \setin{x}{X}{V \in g(x)}. \end{array}$$ \noindent The Eilenberg-Moore algebras of the Smyth power domain monad $\mathcal{S}$ are dcpos with an additional binary meet operation. \section{Dualising with 3}\label{ThreeSec} Using a three-element set $3$ as dualising object is unusual. We will elaborate one example only, leading to a description of the Plotkin power domain on the category of dcpos. We start from the following notion, which seems new, but is used implicitly in the theory of Plotkin power domains, notably in~\cite{Heckmann97}, see also~\cite{BattenfeldKS14}. 
\begin{definition} \label{PlotkinAlgDef} A \emph{Plotkin algebra} is a poset $X$ with least and greatest elements $0,1\in X$, and with a binary operation $\amalg$ and a special element $\mathord{\bowtie}\in X$ such that: \begin{itemize} \item $\amalg$ is idempotent, commutative, associative, and monotone; \item $\mathord{\bowtie}$ is an absorbing element for $\amalg$, so that $x \amalg \mathord{\bowtie} = \mathord{\bowtie} = \mathord{\bowtie} \amalg x$. \end{itemize} \noindent A Plotkin algebra is called directed complete if the poset $X$ is a dcpo and the operation $\amalg$ is continuous. We write $\Cat{DcPA}$ for the category of directed complete Plotkin algebras. A morphism in this category is a continuous function that preserves $\amalg$ and $\mathord{\bowtie}, 0, 1$. \end{definition} Each meet semilattice $(X, \wedge, 1)$ with a least element $0$ is a Plotkin algebra with $\mathord{\bowtie} = 0$. Similarly, each join semilattice $(X, \vee, 0)$ with a greatest element $1$ is a Plotkin algebra with $\mathord{\bowtie} = 1$. These observations can be extended to the directed complete case via functors: $$\xymatrix{ \Cat{CL}JO\ar[r] & \Cat{DcPA} & \Cat{PreFrm}Z\ar[l] }$$ \noindent They give a connection with the categories that we have seen in Subsections~\ref{DcpoCLSubsec} and~\ref{DcpoPreFrmSubsec} for the Hoare and Smyth power domain. A \emph{frame} is a complete lattice whose binary meet operation $\wedge$ preserves all joins on both sides. The morphisms in the category $\Cat{Frm}$ of frames preserve both joins $\bigvee$ and finite meets $(\wedge, 1)$. Hence there are forgetful functors: $$\xymatrix{ \Cat{CL}JO & \Cat{Frm}\ar[l]\ar[r] & \Cat{PreFrm}Z }$$ \noindent But there is also another construction to obtain a Plotkin algebra from a frame. \begin{definition} \label{FrmDcPADef} Each frame $X$ gives rise to a directed complete Plotkin algebra, written as $X\ltimes X$, via: $$\begin{array}{rcl} X\ltimes X & = & \setin{(x,y)}{X\times X}{x \geq y}.
\end{array}$$ \noindent It carries the product dcpo structure, and forms a Plotkin algebra with: $$\begin{array}{rclcrcl} (x,y) \amalg (x',y') & = & (x \vee x', y \wedge y') & \qquad\mbox{and}\qquad & \mathord{\bowtie} & = & (1, 0). \end{array}$$ \noindent This operation $\amalg$ is continuous since $X$ is a frame. \end{definition} Explicitly, the projections form maps of Plotkin algebras in: \begin{equation} \label{PiMapDiag} \vcenter{\xymatrix@C+1pc{ (X, 0, 1, \vee, 1) & (X\ltimes X, (0,0), (1,1), \amalg, \mathord{\bowtie}) \ar[l]_-{\pi_1}\ar[r]^-{\pi_2} & (X, 0, 1, \wedge, 0) }} \end{equation} \noindent We shall also use functions $\ensuremath{\mathrm{in}_1}, \ensuremath{\mathrm{in}_2} \colon X \rightarrow X\ltimes X$ defined by: $$\begin{array}{rclcrcl} \ensuremath{\mathrm{in}_1}(x) & = & (x, 0) & \qquad\mbox{and}\qquad & \ensuremath{\mathrm{in}_2}(y) & = & (1, y). \end{array}$$ \noindent These are \emph{not} maps of Plotkin algebras, since $\ensuremath{\mathrm{in}_1}(1) = \mathord{\bowtie} \neq 1$ and $\ensuremath{\mathrm{in}_2}(0) = \mathord{\bowtie} \neq 0$. But we do have $\ensuremath{\mathrm{in}_1}(0) = 0$ and $\ensuremath{\mathrm{in}_2}(1) = 1$, and also the following structure is preserved. 
\begin{equation} \label{InMapDiag} \vcenter{\xymatrix@C+1pc{ (X, \vee, 1)\ar[r]^-{\ensuremath{\mathrm{in}_1}} & (X\ltimes X, \amalg, \mathord{\bowtie}) & (X, \wedge, 0)\ar[l]_-{\ensuremath{\mathrm{in}_2}} }} \end{equation} \auxproof{ $$\begin{array}{rclcrcl} \ensuremath{\mathrm{in}_1}(0) & = & (0,0) & \hspace*{5em} & \ensuremath{\mathrm{in}_2}(0) & = & (1, 0) \\ & = & 0 & & & = & \mathord{\bowtie} \\ \ensuremath{\mathrm{in}_1}(1) & = & (1, 0) & & \ensuremath{\mathrm{in}_2}(1) & = & (1,1) \\ & = & \mathord{\bowtie} & & & = & 1 \\ \ensuremath{\mathrm{in}_1}(x\vee y) & = & (x\vee y, 0) & & \ensuremath{\mathrm{in}_2}(x \wedge y) & = & (1, x\wedge y) \\ & = & (x\vee y, 0 \wedge 0) & & & = & (1 \vee 1, x \wedge y) \\ & = & (x,0) \amalg (y,0) & & & = & (1, x) \amalg (1, y) \\ & = & \ensuremath{\mathrm{in}_1}(x) \amalg \ensuremath{\mathrm{in}_1}(y) & & & = & \ensuremath{\mathrm{in}_2}(x) \amalg \ensuremath{\mathrm{in}_2}(y). \end{array}$$ Similarly, $$\begin{array}{rclcrcl} \pi_{1}(0) & = & 0 & \hspace*{1em} & \pi_{2}(0) & = & 0 \\ \pi_{1}(1) & = & 1 & & \pi_{2}(1) & = & 1 \\ \pi_{1}(\mathord{\bowtie}) & = & 1 & & \pi_{2}(\mathord{\bowtie}) & = & 0 \\ \pi_{1}((x,y) \amalg (x',y')) & = & \pi_{1}(x \vee x', y\wedge y') & & \pi_{2}((x,y) \amalg (x',y')) & = & \pi_{2}(x \vee x', y\wedge y') \\ & = & x\vee x' & & & = & y\wedge y' \\ & = & \pi_{1}(x,y) \vee \pi_{1}(x',y') & & & = & \pi_{2}(x,y) \wedge \pi_{2}(x',y') \end{array}$$ } The $\ltimes$ construction yields a three-element algebra $2\ltimes 2$ that will be described more directly below, following~\cite{Heckmann97}. We use it as dualising object. \begin{example} \label{ThreeEx} For the two-element frame $2 = \{0,1\}$ the Plotkin algebra $2\ltimes 2$ is a three-element set, which we can also describe as: $$\begin{array}{rclcrcccl} 3 & = & \{0, \mathord{\bowtie}, 1\} & \qquad\mbox{where}\qquad & 0 & \leq & \mathord{\bowtie} & \leq & 1. \end{array}$$ \noindent This order is obviously both complete and cocomplete. 
It is determined for $a,b\in3$ by: \begin{equation} \label{ThreeOrderChar} \begin{array}{rcl} a\leq b & \mbox{iff} & \mbox{ both }\left\{\begin{array}{rcl} a=1 & \Rightarrow & b=1 \\ b=0 & \Rightarrow & a=0. \end{array}\right. \end{array} \end{equation} \noindent The isomorphism $j = (j_{1},j_{2})\colon 3 \conglongrightarrow 2\ltimes 2$ is given by: $$\begin{array}{rclcrccclcrcl} j(0) & = & (0,0) & \qquad & j(\mathord{\bowtie}) & = & (1,0) & = & \mathord{\bowtie} & \qquad & j(1) & = & (1,1). \end{array}$$ \noindent The two components $j_{i} = \pi_{i} \mathrel{\circ} j \colon 3 \rightarrow 2$ are monotone, and satisfy $j_{1} \geq j_{2}$. This isomorphism $3 \cong 2\ltimes 2$ makes $3$ into a (directed complete) Plotkin algebra, via $\mathord{\bowtie}\in3$ and $\amalg \colon 3\times3 \rightarrow 3$ determined by: $$\begin{array}{rcl} a \amalg b & = & \left\{\begin{array}{ll} 0 \quad & \mbox{if }a=b=0 \\ 1 & \mbox{if }a=b=1 \\ \mathord{\bowtie} & \mbox{otherwise.} \end{array}\right. \end{array}$$ \noindent Finally we notice that the two maps $j_{1}, j_{2} \colon 3 \rightarrow 2$ are maps of Plotkin algebras: \begin{equation} \label{ThreeTwoDiag} \vcenter{\xymatrix@C+1pc{ (2, 0, 1, \vee, 1) & (3, 0, 1, \amalg, \mathord{\bowtie})\ar[l]_-{j_1}\ar[r]^-{j_2} & (2, 0, 1, \wedge, 0) }} \end{equation} \auxproof{ The following diagram commutes. $$\xymatrix{ 3\times3\ar[rr]^-{j}_-{\cong}\ar[d]_{\amalg} & & (2\ltimes 2)\times (2\ltimes 2)\ar[d]^{\amalg} \\ 3\ar[rr]_-{j}^-{\cong} & & 2\ltimes 2 }$$ We check this in detail, where we write $j\colon 3 \rightarrow 2\times 2$ for the inclusion. \begin{itemize} \item If $a=b=0$ in $3$, then: $$\begin{array}{rcccccccccl} j(a)\amalg j(b) & = & (0,0) \amalg (0,0) & = & (0\vee 0, 0\wedge 0) & = & (0,0) & = & j(0) & = & j(a\amalg b).
\end{array}$$ \item Similarly, if $a=b=1$ in $3$, then: $$\begin{array}{rcccccccccl} j(a)\amalg j(b) & = & (1,1) \amalg (1,1) & = & (1\vee 1, 1\wedge 1) & = & (1,1) & = & j(1) & = & j(a\amalg b). \end{array}$$ \item Otherwise, let $a=\mathord{\bowtie}$, or $b=\mathord{\bowtie}$, or $a=0$ and $b=1$, or $a=1$ and $b=0$. For $a=\mathord{\bowtie}$ we have: $$\begin{array}{rcccccccccl} j(a)\amalg j(b) & = & (1,0) \amalg (b_{1},b_{2}) & = & (1\vee b_{1}, 0\wedge b_{2}) & = & (1,0) & = & j(\mathord{\bowtie}) & = & j(a\amalg b). \end{array}$$ \noindent And if $a=0$ and $b=1$, then: $$\begin{array}{rcccccccccl} j(a)\amalg j(b) & = & (0,0) \amalg (1,1) & = & (0\vee 1, 0\wedge 1) & = & (1,0) & = & j(\mathord{\bowtie}) & = & j(a\amalg b). \end{array}$$ \end{itemize} } \end{example} The following result is the analogue of Lemma~\ref{DcpoTwoLem}, but with the dcpo $3$ instead of $2$. The correspondence is mentioned in~\cite{BattenfeldKS14}, just before Lemma~4.11. \begin{lemma} \label{DcpoThreeLem} For a dcpo $X$ there is a bijective correspondence between: $$\begin{prooftree} \begin{prooftree} \xymatrix{X\ar[r]^-{f} & 3 \mbox{ in $\Cat{Dcpo}$}} \Justifies \xymatrix{X\ar@<-.5ex>[r]_-{g_2}\ar@<+.5ex>[r]^-{g_1} & 2 \mbox{ in $\Cat{Dcpo}$ with $g_{1} \geq g_{2}$}} \end{prooftree} \Justifies (U_{1}, U_{2}) \in \ensuremath{\mathcal{O}}(X) \ltimes \ensuremath{\mathcal{O}}(X) \end{prooftree}$$ \noindent As a result there is an isomorphism: $$\begin{array}{rclcrcl} \Cat{Dcpo}(X, 3) & \cong & \ensuremath{\mathcal{O}}(X)\ltimes\ensuremath{\mathcal{O}}(X) & \quad\mbox{given by}\quad & f & \longmapsto & (\, \set{x}{f(x)\neq 0}, \; \set{x}{f(x)=1} \,). \end{array}$$ \noindent This is an isomorphism of Plotkin algebras, where the left hand side carries the pointwise Plotkin algebra structure inherited from $3$. \end{lemma} The (equivalent) structures in this lemma form predicates on the dcpo $X$.
The last description tells us that such a predicate is a pair of opens $U_{1},U_{2}\in\ensuremath{\mathcal{O}}(X)$ with $U_{1} \supseteq U_{2}$. This predicate is true if $U_{1}=U_{2}=X$ and false if $U_{1}=U_{2}=\emptyset$. In this `logic', predicates come equipped with a binary operation $\amalg$; its logical interpretation is not immediately clear. \begin{myproof} The second, lower correspondence is given by Lemma~\ref{DcpoTwoLem}, so we concentrate on the first one. It works as follows. \begin{itemize} \item Given $f\colon X \rightarrow 3$ in $\Cat{Dcpo}$ we obtain continuous maps $\overline{f}_{i} = j_{i} \mathrel{\circ} f \colon X \rightarrow 2$ by composition, with $\overline{f}_{1} \geq \overline{f}_{2}$, since $j_{1} \geq j_{2}$, see Example~\ref{ThreeEx}. \item In the other direction, given $g = (g_{1}, g_{2})$ we define $\overline{g} \colon X \rightarrow 3$ as: $$\begin{array}{rcl} \overline{g}(x) & = & \left\{\begin{array}{ll} 0 \quad & \mbox{if } g_{1}(x) = 0 \\ \mathord{\bowtie} & \mbox{if } g_{1}(x) = 1 \mbox{ and } g_{2}(x) = 0 \\ 1 & \mbox{if } g_{2}(x) = 1. \end{array}\right. \end{array}$$ \noindent We first show that $\overline{g}$ is monotone. So let $x \leq y$ in $X$. We use the characterisation~\eqref{ThreeOrderChar}. \begin{itemize} \item Let $\overline{g}(x) = 1$, so that $g_{2}(x) = 1$. But then $g_{2}(y) \geq g_{2}(x) = 1$, so that $\overline{g}(y) = 1$. \item If $\overline{g}(y) = 0$, then $g_{1}(x) \leq g_{1}(y) = 0$, so that $\overline{g}(x) = 0$. \end{itemize} \noindent Next, let $(x_{i})$ be a directed collection in $X$. Since $\overline{g}$ is monotone we have $\bigvee_{i}\overline{g}(x_{i}) \leq \overline{g}(\bigvee_{i}x_{i})$. For the reverse inequality we use~\eqref{ThreeOrderChar} again. \begin{itemize} \item Let $\overline{g}(\bigvee_{i}x_{i}) = 1$, so that $g_{2}(\bigvee_{i}x_{i}) = \bigvee_{i}g_{2}(x_{i}) = 1$. Then $g_{2}(x_{i}) = 1$ for some index $i$, for which then $\overline{g}(x_{i}) = 1$. Hence $\bigvee_{i}\overline{g}(x_{i}) = 1$.
\item Let $\bigvee_{i}\overline{g}(x_{i}) = 0$, so that $\overline{g}(x_{i}) = 0$ for all $i$, and thus $g_{1}(x_{i}) = 0$. But then $g_{1}(\bigvee_{i}x_{i}) = \bigvee_{i} g_{1}(x_{i}) = 0$. Hence $\overline{g}(\bigvee_{i}x_{i}) = 0$. \end{itemize} \end{itemize} \noindent It is easy to see that $\overline{\overline{f}} = f$ and $\overline{\overline{g}} = g$. \hspace*{\fill}$\QEDbox$ \auxproof{ $$\begin{array}{rcl} \overline{\overline{f}}(x) = 0 & \Longleftrightarrow & \overline{f}_{1}(x) = j_{1}(f(x)) = 0 \\ & \Longleftrightarrow & f(x) = 0 \\ \overline{\overline{f}}(x) = 1 & \Longleftrightarrow & \overline{f}_{2}(x) = j_{2}(f(x)) = 1 \\ & \Longleftrightarrow & f(x) = 1. \end{array}$$ \noindent Also: $$\begin{array}{rcl} \overline{\overline{g}}_{1}(x) = 0 & \Longleftrightarrow & j_{1}(\overline{g}(x)) = 0 \\ & \Longleftrightarrow & \overline{g}(x) = 0 \\ & \Longleftrightarrow & g_{1}(x) = 0 \\ \overline{\overline{g}}_{2}(x) = 1 & \Longleftrightarrow & j_{2}(\overline{g}(x)) = 1 \\ & \Longleftrightarrow & \overline{g}(x) = 1 \\ & \Longleftrightarrow & g_{2}(x) = 1 \end{array}$$ } \auxproof{ We write $\varphi \colon \Cat{Dcpo}(X, 3) \conglongrightarrow \ensuremath{\mathcal{O}}(X) \ltimes \ensuremath{\mathcal{O}}(X)$, defined by: $$\begin{array}{rcl} \varphi(f) & = & (\, \set{x}{\overline{f}_{1}(x) = j_{1}(f(x))=1}, \; \set{x}{\overline{f}_{2}(x) = j_{2}(f(x))=1} \,) \\ & = & (\, \set{x}{f(x)\neq 0}, \; \set{x}{f(x)=1} \,). \end{array}$$ \noindent We claim that this is a map of Plotkin algebras.
$$\begin{array}{rcl} \varphi(\mathbf{0}) & = & (\, \set{x}{\mathbf{0}(x)\neq 0}, \; \set{x}{\mathbf{0}(x)=1} \,) \\ & = & (\, \set{x}{0\neq 0}, \; \set{x}{0=1} \,) \\ & = & (\, \emptyset, \; \emptyset \,) \\ & = & 0 \\ \varphi(\mathbf{1}) & = & (\, \set{x}{\mathbf{1}(x)\neq 0}, \; \set{x}{\mathbf{1}(x)=1} \,) \\ & = & (\, X, \; X \,) \\ & = & 1 \\ \varphi(\mathbf{\mathord{\bowtie}}) & = & (\, \set{x}{\mathbf{\mathord{\bowtie}}(x)\neq 0}, \; \set{x}{\mathbf{\mathord{\bowtie}}(x)=1} \,) \\ & = & (\, X, \; \emptyset \,) \\ & = & \mathord{\bowtie} \\ \varphi(f \amalg g) & = & (\, \set{x}{f(x)\amalg g(x)\neq 0}, \; \set{x}{f(x)\amalg g(x)=1} \,) \\ & = & (\, \set{x}{f(x)\neq 0 \mbox{ or } g(x)\neq 0}, \; \set{x}{f(x) = 1 \mbox{ and } g(x)=1} \,) \\ & = & (\, \set{x}{f(x)\neq 0} \cup \set{x}{g(x)\neq 0}, \; \set{x}{f(x) = 1} \cap \set{x}{g(x)=1} \,) \\ & = & (\, \set{x}{f(x)\neq 0}, \; \set{x}{f(x) = 1} \,) \amalg (\, \set{x}{g(x)\neq 0}, \; \set{x}{g(x)=1} \,) \\ & = & \varphi(f) \amalg \varphi(g). \end{array}$$ } \end{myproof} Here is another fundamental correspondence, see also~\cite[Obs.~4.10]{BattenfeldKS14}. \begin{lemma} \label{FrmThreeLem} For frames $X,Y$ there is a bijective correspondence: $$\begin{prooftree} \xymatrix{X\ltimes X\ar[r]^-{f} & Y\ltimes Y \quad \mbox{in $\Cat{DcPA}$}} \Justifies \xymatrix{X\ar[r]_-{g_1} & Y\mbox{ in $\Cat{CL}JO$}\quad\mbox{and}\quad X\ar[r]_-{g_2} & Y\mbox{ in $\Cat{PreFrm}Z$} \quad \mbox{with $g_{1} \geq g_{2}$}} \end{prooftree}$$ \end{lemma} \begin{myproof} The correspondence is given as follows. \begin{itemize} \item For $f\colon X\ltimes X \rightarrow Y\ltimes Y$ in $\Cat{DcPA}$ we take the following continuous functions.
$$\xymatrix@C-1pc{ \overline{f}_{1} = \Big(X\ar[r]^-{\ensuremath{\mathrm{in}_1}} & X\ltimes X\ar[r]^-{f} & Y\ltimes Y\ar[r]^-{\pi_1} & Y\Big) \quad\mbox{and}\quad \overline{f}_{2} = \Big(X\ar[r]^-{\ensuremath{\mathrm{in}_2}} & X\ltimes X\ar[r]^-{f} & Y\ltimes Y\ar[r]^-{\pi_2} & Y\Big) }$$ \noindent They preserve $0,1,\mathord{\bowtie}, \amalg$ by~\eqref{PiMapDiag} and~\eqref{InMapDiag}. For instance, $$\begin{array}{rcccccccl} \overline{f}_{1}(1) & = & \pi_{1}\big(f(\ensuremath{\mathrm{in}_1}(1))\big) & = & \pi_{1}\big(f(\mathord{\bowtie})\big) & = & \pi_{1}(\mathord{\bowtie}) & = & 1. \end{array}$$ \noindent And: $$\begin{array}{rcl} \overline{f}_{1}(x \vee y) \hspace*{\arraycolsep}=\hspace*{\arraycolsep} \pi_{1}\big(f(\ensuremath{\mathrm{in}_1}(x \vee y))\big) & = & \pi_{1}\big(f(\ensuremath{\mathrm{in}_1}(x) \amalg \ensuremath{\mathrm{in}_1}(y))\big) \\ & = & \pi_{1}\big(f(\ensuremath{\mathrm{in}_1}(x)) \amalg f(\ensuremath{\mathrm{in}_1}(y))\big) \\ & = & \pi_{1}\big(f(\ensuremath{\mathrm{in}_1}(x))\big) \vee \pi_{1}\big(f(\ensuremath{\mathrm{in}_1}(y))\big) \\ & = & \overline{f}_{1}(x) \vee \overline{f}_{1}(y). \end{array}$$ \auxproof{ We check the remaining equations. 
$$\begin{array}{rcccccccl} \overline{f}_{1}(0) & = & \pi_{1}\big(f(\ensuremath{\mathrm{in}_1}(0))\big) & = & \pi_{1}\big(f(0)\big) & = & \pi_{1}(0) & = & 0 \end{array}$$ \noindent And for the second map: $$\begin{array}{rcl} \overline{f}_{2}(0) & = & \pi_{2}\big(f(\ensuremath{\mathrm{in}_2}(0))\big) \\ & = & \pi_{2}\big(f(\mathord{\bowtie})\big) \\ & = & \pi_{2}(\mathord{\bowtie}) \\ & = & 0 \\ \overline{f}_{2}(1) & = & \pi_{2}\big(f(\ensuremath{\mathrm{in}_2}(1))\big) \\ & = & \pi_{2}\big(f(1)\big) \\ & = & \pi_{2}(1) \\ & = & 1 \\ \overline{f}_{2}(x \wedge y) & = & \pi_{2}\big(f(\ensuremath{\mathrm{in}_2}(x \wedge y))\big) \\ & = & \pi_{2}\big(f(\ensuremath{\mathrm{in}_2}(x) \amalg \ensuremath{\mathrm{in}_2}(y))\big) \\ & = & \pi_{2}\big(f(\ensuremath{\mathrm{in}_2}(x)) \amalg f(\ensuremath{\mathrm{in}_2}(y))\big) \\ & = & \pi_{2}\big(f(\ensuremath{\mathrm{in}_2}(x))\big) \wedge \pi_{2}\big(f(\ensuremath{\mathrm{in}_2}(y))\big) \\ & = & \overline{f}_{2}(x) \wedge \overline{f}_{2}(y). \end{array}$$ } We claim that for $(x,x')\in X\ltimes X$ the following two equations hold. $$\begin{array}{rclcrcl} \overline{f}_{1}(x) & = & \pi_{1}(f(x,x')) & \qquad\mbox{and}\qquad & \overline{f}_{2}(x') & = & \pi_{2}(f(x,x')). \end{array}\eqno{(*)}$$ \noindent We only prove the first one, since the second one works analogously. We have to prove $\overline{f}_{1}(x) = \pi_{1}(f(x,0)) = \pi_{1}(f(x,x'))$. The inequality $\leq$ holds by monotonicity, so it suffices to prove $\geq$. In $Y\ltimes Y$ we have: $$\begin{array}{rcccccl} f(x, 0) \amalg f(x, x') & = & f\big((x, 0) \amalg (x, x')\big) & = & f(x \vee x, 0 \wedge x') & = & f(x, 0) \end{array}$$ \noindent By applying the first projection we obtain: $$\begin{array}{rcccl} \pi_{1}(f(x,0)) \vee \pi_{1}(f(x,x')) & = & \pi_{1}\big(f(x, 0) \amalg f(x, x')\big) & = & \pi_{1}(f(x,0)). \end{array}$$ \noindent Hence $\pi_{1}(f(x,x')) \leq \pi_{1}(f(x,0))$. 
\auxproof{ Similarly, $$\begin{array}{rcccccl} f(1, x') \amalg f(x, x') & = & f\big((1, x') \amalg (x, x')\big) & = & f(1 \vee x, x' \wedge x') & = & f(1, x') \end{array}$$ \noindent By applying the second projection we obtain: $$\begin{array}{rcccl} \pi_{2}(f(1,x')) \wedge \pi_{2}(f(x,x')) & = & \pi_{2}\big(f(1, x') \amalg f(x, x')\big) & = & \pi_{2}(f(1,x')). \end{array}$$ \noindent Hence $\pi_{2}(f(1,x')) \leq \pi_{2}(f(x,x'))$. } We use these equations~$(*)$ to prove $\overline{f}_{1} \geq \overline{f}_{2}$. For an arbitrary $x\in X$ we have $(x,x)\in X\ltimes X$, and so: $$\begin{array}{rcccccl} \overline{f}_{1}(x) & \smash{\stackrel{(*)}{=}} & \pi_{1}(f(x,x)) & \geq & \pi_{2}(f(x,x)) & \smash{\stackrel{(*)}{=}} & \overline{f}_{2}(x). \end{array}$$ \item In the other direction, given $g_{1} \colon X \rightarrow Y$ in $\Cat{CL}JO$ and $g_{2} \colon X \rightarrow Y$ in $\Cat{PreFrm}Z$ we define $\overline{g} \colon X\ltimes X \rightarrow Y\ltimes Y$ by: $$\begin{array}{rcl} \overline{g}(x,x') & = & (\, g_{1}(x), \; g_{2}(x') \, ). \end{array}$$ \noindent This is well-defined: we have $x \geq x'$, so $g_{1}(x) \geq g_{1}(x') \geq g_{2}(x')$. It is easy to see that $\overline{g}$ is a continuous map of Plotkin algebras. \auxproof{ $$\begin{array}{rcl} \overline{g}(0,0) & = & (g_{1}(0), g_{2}(0)) \\ & = & (0, 0) \\ \overline{g}(1,1) & = & (g_{1}(1), g_{2}(1)) \\ & = & (1, 1) \\ \overline{g}(\mathord{\bowtie}) & = & (g_{1}(1), g_{2}(0)) \\ & = & (1, 0) \\ & = & \mathord{\bowtie} \\ \overline{g}\big((x,x') \amalg (y,y')\big) & = & \overline{g}\big(x \vee y, x' \wedge y'\big) \\ & = & (g_{1}(x \vee y), g_{2}(x' \wedge y')) \\ & = & (g_{1}(x) \vee g_{1}(y), g_{2}(x') \wedge g_{2}(y')) \\ & = & (g_{1}(x), g_{2}(x')) \amalg (g_{1}(y), g_{2}(y')) \\ & = & \overline{g}(x,x') \amalg \overline{g}(y,y'). \end{array}$$ } \end{itemize} \noindent We prove that these operations yield a bijective correspondence. 
First, $$\begin{array}{rcccccccl} \overline{\overline{g}}_{1}(x) & = & \pi_{1}\big(\overline{g}(\ensuremath{\mathrm{in}_1}(x))\big) & = & \pi_{1}\big(\overline{g}(x, 0)\big) & = & \pi_{1}(g_{1}(x), g_{2}(0)) & = & g_{1}(x). \end{array}$$ \auxproof{ \noindent Similarly: $$\begin{array}{rcccccccl} \overline{\overline{g}}_{2}(x) & = & \pi_{2}\big(\overline{g}(\ensuremath{\mathrm{in}_2}(x))\big) & = & \pi_{2}\big(\overline{g}(1, x)\big) & = & \pi_{2}(g_{1}(1), g_{2}(x)) & = & g_{2}(x) \end{array}$$ } \noindent Similarly we get $\overline{\overline{g}}_{2}(x) = g_{2}(x)$. Next, in the other direction, $$\begin{array}[b]{rcccccl} \overline{\overline{f}}(x,x') & = & (\, \overline{f}_{1}(x), \; \overline{f}_{2}(x')\,) & \smash{\stackrel{(*)}{=}} & (\, \pi_{1}\big(f(x,x')\big), \; \pi_{2}\big(f(x,x')\big) \,) & = & f(x,x'). \end{array}\eqno{\square}$$ \end{myproof} As announced, we will use the dcpo $3$ as dualising object, in: $$\vcenter{\xymatrix@R-2pc{ \op{\Cat{DcPA}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,3)} \\ \dashv \\ \Cat{Dcpo}\ar@/^2ex/[uu]^{\ensuremath{\mathrm{Hom}}(-,3)}\ar@(dl,dr)_{\wp} }} \qquad \begin{prooftree} \xymatrix@C+1pc{Y\ar[r]^-{\Cat{DcPA}} & \ensuremath{\mathrm{Hom}}(X, 3)} \Justifies \xymatrix{X\ar[r]_-{\Cat{Dcpo}} & \ensuremath{\mathrm{Hom}}(Y,3)} \end{prooftree} \qquad \vcenter{\xymatrix@R-1pc@C-2pc{ \op{\Cat{DcPA}}\ar@/^0.7em/[rr] & \top & \,\EM(\wp)\ar@/^0.6em/[ll] \\ & \Kl(\wp)\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{0em}$$ \noindent For a directed complete Plotkin algebra $Y\in\Cat{DcPA}$ the homset $\ensuremath{\mathrm{Hom}}(Y,3)$ of maps in $\Cat{DcPA}$ is a dcpo, via the pointwise ordering. The above adjunction is then obtained via the usual swapping of arguments. \auxproof{ We show that a directed join $f = \bigvee_{i}f_{i}$ is a map of (directed complete) Plotkin algebras, if all $f_i$'s are. This works because the sum $\amalg$ is continuous.
Thus: $$\begin{array}{rcccccccl} f(x \amalg y) & = & \bigvee_{i} f_{i}(x \amalg y) & = & \bigvee_{i} \big(f_{i}(x) \amalg f_{i}(y)\big) & = & (\bigvee_{i} f_{i}(x)) \amalg (\bigvee_{i} f_{i}(y)) & = & f(x) \amalg f(y). \end{array}$$ } We call the induced monad the Plotkin power domain on $\Cat{Dcpo}$. It can be described as: $$\begin{array}{rcl} \wp(X) & = & \Cat{DcPA}\big(\Cat{Dcpo}(X, 3), 3\big) \\ & \cong & \Cat{DcPA}\big(\ensuremath{\mathcal{O}}(X)\ltimes\ensuremath{\mathcal{O}}(X), 2\ltimes 2\big) \\ & \cong & \set{(f_{1}, f_{2})}{f_{1} \in \Cat{CL}JO(\ensuremath{\mathcal{O}}(X), 2), f_{2}\in \Cat{PreFrm}Z(\ensuremath{\mathcal{O}}(X), 2), \mbox{ with }f_{1} \geq f_{2}}. \end{array}$$ \noindent The first isomorphism is based on Lemma~\ref{DcpoThreeLem} and Example~\ref{ThreeEx}. The second one comes from Lemma~\ref{FrmThreeLem}. The map $f_{1} \colon \ensuremath{\mathcal{O}}(X) \rightarrow 2$ in $\Cat{CL}JO$ corresponds to a non-empty closed subset of $X$, see Lemma~\ref{CLJOTwoLem}. The function $f_{2} \colon \ensuremath{\mathcal{O}}(X) \rightarrow 2$ in $\Cat{PreFrm}Z$ corresponds to a proper Scott open filter, and in the sober case, to a non-empty compact saturated subset, as discussed already in Subsection~\ref{DcpoPreFrmSubsec}. In~\cite{Heckmann97} `valuations' of the form $\ensuremath{\mathcal{O}}(X) \rightarrow 2\ltimes 2$, for a topological space $X$, form the elements of a monad. In contrast, here we arrive at maps of the form $\ensuremath{\mathcal{O}}(X)\ltimes\ensuremath{\mathcal{O}}(X) \rightarrow 2\ltimes 2$. \auxproof{ \begin{itemize} \item Let $K$ be non-empty, but $\varphi(\emptyset) = 1$. The latter yields $\emptyset\in\varphi^{-1}(1)$, so that $K = \bigcap\varphi^{-1}(1) \subseteq \emptyset$. By assumption, this is not the case. \item If $\varphi(\emptyset) = 0$, then $\emptyset\not\in\varphi^{-1}(1)$. But then $K\not\subseteq\emptyset$, see~\cite[Lemma]{KeimelP94}, \textit{i.e.}\xspace, $K\neq\emptyset$.
\end{itemize} Finally, we have for the maps $f_{1} \in \Cat{CL}JO(\ensuremath{\mathcal{O}}(X), 2)$ and $f_{2}\in \Cat{PreFrm}Z(\ensuremath{\mathcal{O}}(X), 2)$, $$\begin{array}{rcl} f_{1} \geq f_{2} & \Longleftrightarrow & \bigcap\setin{V}{\Closed(X)}{f_{1}(\neg V) = 0} ?? \bigcap \setin{U}{\ensuremath{\mathcal{O}}(X)}{f_{2}(U) = 1}. \end{array}$$ } \section{Dualising with $[0,1]$}\label{UnitSec} The next series of examples starts from adjunctions that are obtained by homming into the unit interval $[0,1]$. The quantitative logic that belongs to these examples is given in terms of effect modules. These can be seen as ``probabilistic vector spaces'', involving scalar multiplication with scalars from the unit interval $[0,1]$, instead of from $\mathbb{R}$ or $\mathbb{C}$. We provide a crash course for these structures, and refer to~\cite{JacobsM16,Jacobs15a,ChoJWW15b} or~\cite{DvurecenskijP00} for more information. A systematic description of the `probability' monads below can be found in~\cite{Jacobs17a}. A partial commutative monoid (PCM) consists of a set $M$ with a partial binary operation $\ovee$ and a zero element $0\in M$. The operation $\ovee$ is commutative and associative, in an appropriate partial sense. One writes $x \mathrel{\bot} y$ if $x \ovee y$ is defined. An \emph{effect algebra} is a PCM with an orthosupplement $(-)^{\bot}$, so that $x \ovee x^{\bot} = 1$, where $1 = 0^{\bot}$, and $x \mathrel{\bot} 1$ implies $x=0$. An effect algebra is automatically a poset, via the definition $x \leq y$ iff $x\ovee z = y$ for some $z$. The main example is the unit interval $[0,1]$, with $x \mathrel{\bot} y$ iff $x+y \leq 1$, and in that case $x\ovee y = x+y$; the orthosupplement is $x^{\bot} = 1-x$. A map of effect algebras $f\colon E \rightarrow D$ is a function that preserves $1$ and $\ovee$, if defined. We write $\Cat{EA}$ for the resulting category. 
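The unit-interval example above is concrete enough to check mechanically. A small Python sketch (our own illustration, not part of the formal development; function names are ours, and exact rationals avoid rounding issues) verifies the effect-algebra laws for $[0,1]$ on a grid of scalars:

```python
from fractions import Fraction

# Effect-algebra structure on [0,1]: x _|_ y iff x + y <= 1, in which
# case the partial sum is x (+) y = x + y; orthosupplement x^bot = 1 - x.

def defined(x, y):          # x _|_ y
    return x + y <= 1

def ovee(x, y):             # partial sum, only defined when x _|_ y
    assert defined(x, y)
    return x + y

def orth(x):                # orthosupplement
    return 1 - x

grid = [Fraction(i, 10) for i in range(11)]
for x in grid:
    assert ovee(x, orth(x)) == 1          # x (+) x^bot = 1
    if defined(x, Fraction(1)):           # x _|_ 1 forces x = 0
        assert x == 0
    for y in grid:
        if defined(x, y):                 # commutativity, where defined
            assert ovee(x, y) == ovee(y, x)
```

The same dictionary-free recipe works for any interval effect algebra; only the test for definedness changes.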
Each Boolean algebra is an effect algebra, with $x \mathrel{\bot} y$ iff $x\wedge y = 0$, and in that case $x\ovee y = x\vee y$. This yields a functor $\Cat{BA} \rightarrow \Cat{EA}$, which is full and faithful. An \emph{effect module} is an effect algebra $E$ with an action $[0,1]\times E \rightarrow E$ that preserves $\ovee, 0$ in each argument separately. A map of effect modules $f$ is a map of effect algebras that preserves scalar multiplication: $f(r\cdot x) = r\cdot f(x)$. We thus get a subcategory $\Cat{EMod} \hookrightarrow \Cat{EA}$. For each set $X$, the set $[0,1]^{X}$ of fuzzy predicates on $X$ is an effect module, with $p \mathrel{\bot} q$ iff $p(x) + q(x) \leq 1$ for all $x\in X$, and in that case $(p\ovee q)(x) = p(x) + q(x)$. Orthosupplement is given by $p^{\bot}(x) = 1 - p(x)$ and scalar multiplication by $r\cdot p \in [0,1]^{X}$, for $r\in [0,1]$ and $p\in [0,1]^{X}$, by $(r\cdot p)(x) = r\cdot p(x)$. This assignment $X \mapsto [0,1]^{X}$ yields a functor $\Cat{Sets} \rightarrow \op{\Cat{EMod}}$ that will be used below. Important examples of effect modules arise in quantum logic. For instance, for each Hilbert space $\mathscr{H}$, the set $\Ef(\mathscr{H}) = \set{A\colon \mathscr{H} \rightarrow \mathscr{H}}{ 0 \leq A \leq \idmap}$ of effects is an effect module. More generally, for a (unital) $C^*$-algebra $A$, the set of effects $[0,1]_{A} = \setin{a}{A}{0 \leq a \leq 1}$ is an effect module. In~\cite{FurberJ13a} it is shown that taking effects yields a full and faithful functor: \begin{equation} \label{PUEModFunDiag} \xymatrix{ \CstarMap{\mathrm{P}}U\ar[rr]^-{[0,1]_{(-)}} & & \Cat{EMod} } \end{equation} \noindent Here we write $\CstarMap{\mathrm{P}}U$ for the category of $C^*$-algebras with positive unital maps. An \emph{MV-algebra}~\cite{CignoliDM00} can be understood as a `commutative' effect algebra. 
It is an effect algebra with a join $\vee$, and thus also a meet $\wedge$, via De Morgan, in which the equation $(x \vee y)^{\bot} \ovee x = y^{\bot} \ovee (x\wedge y)$ holds. There is a subcategory $\Cat{MVA} \hookrightarrow \Cat{EA}$ with maps additionally preserving joins $\vee$ (and hence also $\wedge$). Within an MV-algebra one can define (total) addition and subtraction operations as $x + y = x \ovee (x^{\bot} \wedge y)$ and $x - y = (x^{\bot} + y)^{\bot}$. The unit interval $[0,1]$ is an MV-algebra, in which $+$ and $-$ are truncated (to $1$ or $0$), if needed. There is a category $\Cat{MVMod}$ of \emph{MV-modules}, which are MV-algebras with $[0,1]$-scalar multiplication. Thus $\Cat{MVMod}$ is twice a subcategory in: $\Cat{MVA} \hookleftarrow \Cat{MVMod} \hookrightarrow \Cat{EMod}$. The effect module $[0,1]^{X}$ of fuzzy predicates is an MV-module. For a commutative $C^*$-algebra $A$ the set of effects $[0,1]_{A}$ is an MV-module. In fact there is a full and faithful functor: \begin{equation} \label{MIUMVModFunDiag} \xymatrix{ \CCstarMap{\mathrm{MIU}}\ar[rr]^-{[0,1]_{(-)}} & & \Cat{MVMod} } \end{equation} \noindent where $\CCstarMap{\mathrm{MIU}}$ is the category of commutative $C^*$-algebras, with MIU-maps, preserving multiplication, involution and unit (aka.\ $*$-homomorphisms). Having seen this background information we continue our series of examples. \subsection{Sets and effect modules}\label{SetsEModSubsec} As noted above, fuzzy predicates yield a functor $\Cat{Sets}\rightarrow\op{\Cat{EMod}}$. This functor involves homming into $[0,1]$, and has an adjoint that is used as starting point for several variations. 
$$\vcenter{\xymatrix@R-2pc{ \op{\Cat{EMod}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,[0,1])} \\ \dashv \\ \Cat{Sets}\ar@/^2ex/[uu]^{\ensuremath{\mathrm{Hom}}(-,[0,1])}\ar@(dl,dr)_{\mathcal{E}=\Cat{EMod}([0,1]^{(-)}, [0,1])} }} \quad \begin{prooftree} \xymatrix@C+.5pc{Y\ar[r]^-{\Cat{EMod}} & [0,1]^{X}} \Justifies \xymatrix{X\ar[r]_-{\Cat{Sets}} & \Cat{EMod}(Y, [0,1])} \end{prooftree} \quad \vcenter{\xymatrix@R-2pc@C-2.5pc{ \op{\Cat{EMod}}\ar@/^0.7em/[rr] & \top & \EM(\mathcal{E})\rlap{$ = \Cat{CCH}sep$}\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{E})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \noindent The induced monad $\mathcal{E}$ is the \emph{expectation} monad introduced in~\cite{JacobsM12b}. It can be understood as an extension of the (finite probability) distribution monad $\distributionsymbol$, since $\mathcal{E}(X) \cong \distributionsymbol(X)$ if $X$ is a finite set. The triangle on the right says in particular that Kleisli maps $X \rightarrow \mathcal{E}(Y)$ are in bijective correspondence with effect module maps $[0,1]^{Y} \rightarrow [0,1]^{X}$, acting as predicate transformers on fuzzy predicates. The category of algebras $\EM(\mathcal{E})$ of the expectation monad is the category $\Cat{CCH}sep$ of convex compact Hausdorff spaces, with a separation condition (see~\cite{JacobsM12b,JacobsMF16} for details). State spaces in quantum computing are typically such convex compact Hausdorff spaces.
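For a finite set $X$ the isomorphism $\mathcal{E}(X) \cong \distributionsymbol(X)$ is easy to make concrete: a distribution $\omega$ induces the effect module map $p \mapsto \sum_{x} p(x)\cdot\omega(x)$ on fuzzy predicates. A minimal Python sketch (our own illustration; the names are ours, not notation from the text):

```python
from fractions import Fraction

def state_of(omega):
    """The map [0,1]^X -> [0,1] induced by a distribution omega on a
    finite set X, namely p |-> sum_x p(x) * omega(x)."""
    assert sum(omega.values()) == 1
    return lambda p: sum(p[x] * omega[x] for x in omega)

omega = {'a': Fraction(1, 2), 'b': Fraction(1, 3), 'c': Fraction(1, 6)}
h = state_of(omega)

p = {'a': Fraction(1, 2), 'b': Fraction(1), 'c': Fraction(0)}
q = {x: 1 - p[x] for x in omega}                 # orthosupplement p^bot

assert h({x: Fraction(1) for x in omega}) == 1   # preserves truth
assert h(p) + h(q) == 1                          # preserves (+) and bot
```

The two asserted equations are exactly the effect-algebra-map requirements for `h`, so such expectation maps are in bijective correspondence with distributions on a finite set.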
Using the full and faithfulness of the functor $[0,1]_{(-)} \colon \CstarMap{\mathrm{P}}U \rightarrow \Cat{EMod}$ from~\eqref{PUEModFunDiag}, the expectation monad can alternatively be described in terms of the states of the commutative $C^*$-algebra $\ell^{\infty}(X)$ of bounded functions $X \rightarrow \mathbb{C}$, via: \begin{equation} \label{StatIsoEqn} \begin{array}{rcl} \ensuremath{\mathrm{Stat}}(\ell^{\infty}(X)) \hspace*{\arraycolsep}\smash{\stackrel{\text{def}}{=}}\hspace*{\arraycolsep} \CstarMap{\mathrm{P}}U\big(\ell^{\infty}(X), \mathbb{C}\big) & \smash{\stackrel{\eqref{PUEModFunDiag}}{\cong}} & \Cat{EMod}\big([0,1]_{\ell^{\infty}(X)}, [0,1]_{\mathbb{C}}\big) \\ & = & \Cat{EMod}\big([0,1]^{X}, [0,1]\big) \hspace*{\arraycolsep} = \hspace*{\arraycolsep} \mathcal{E}(X). \end{array} \end{equation} \noindent In this way one obtains the result from~\cite{FurberJ13a} that there is a full \& faithful functor: \begin{equation} \label{KlECstarFunDiag} \xymatrix{ \Kl(\mathcal{E})\ar[rr] & & \op{\big(\CCstarMap{\mathrm{PU}}\big)} } \end{equation} \noindent embedding the Kleisli category $\Kl(\mathcal{E})$ of the expectation monad into commutative $C^*$-algebras with positive unital maps. On objects this functor~\eqref{KlECstarFunDiag} is given by $X \mapsto \ell^{\infty}(X)$. 
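In the same predicate-transformer spirit, a Kleisli map $f\colon X \rightarrow \distributionsymbol_{\leq 1}(Y)$ for the subdistribution monad induces two transformers $f^{\Diamond}, f^{\Box} \colon [0,1]^{Y} \rightarrow [0,1]^{X}$, in partial and total correctness style: $f^{\Diamond}(q)(x) = \sum_{y} q(y)\cdot f(x)(y)$ and $f^{\Box}(q)(x) = f^{\Diamond}(q)(x) + (1 - \sum_{y}f(x)(y))$. A small exact Python sketch of these two formulas (illustration only; names are ours):

```python
from fractions import Fraction

# 'Diamond' (partial correctness) and 'box' (total correctness)
# predicate transformers for a Kleisli map f : X -> D_{<=1}(Y),
# represented as a dict mapping each x to a subdistribution dict.

def diamond(f, q):
    return {x: sum(q[y] * d[y] for y in d) for x, d in f.items()}

def box(f, q):
    return {x: sum(q[y] * d[y] for y in d) + (1 - sum(d.values()))
            for x, d in f.items()}

# A subdistribution that loses mass 1/4 at x = 0 (e.g. divergence).
f = {0: {'a': Fraction(1, 2), 'b': Fraction(1, 4)},
     1: {'a': Fraction(1)}}
truth = {'a': Fraction(1), 'b': Fraction(1)}

assert diamond(f, truth) == {0: Fraction(3, 4), 1: Fraction(1)}
assert box(f, truth) == {0: Fraction(1), 1: Fraction(1)}  # box preserves 1
```

The difference on `truth` makes the intuition visible: `diamond` reports the probability of terminating at all, while `box` counts the missing mass as success.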
\auxproof{ We consider two maps $\distributionsymbol_{\leq 1} \Rightarrow \mathcal{E}$, namely: $$\xymatrix@R-2pc@C-1pc{ \distributionsymbol_{\leq 1}(X)\ar[rr]^-{\sigma^\Diamond} & & \mathcal{E}(X) \\ \omega\ar@{|->}[rr] & & \lamin{p}{[0,1]^{X}}{\sum_{x}p(x)\cdot\omega(x)} }$$ \noindent and: $$\xymatrix@R-2pc@C-1pc{ \distributionsymbol_{\leq 1}(X)\ar[rr]^-{\sigma^\Box} & & \mathcal{E}(X) \\ \omega\ar@{|->}[rr] & & \lamin{p}{[0,1]^{X}}{\sum_{x}p(x)\cdot\omega(x) + (1-\sum_{x}\omega(x))} }$$ Naturality: for $f\colon X \rightarrow Y$ in $\Cat{Sets}$ we have: $$\begin{array}{rcl} \big(\mathcal{E}(f) \mathrel{\circ} \sigma^{\Diamond})(\omega)(q) & = & \mathcal{E}(f)\big(\sigma^{\Diamond}(\omega)\big)(q) \\ & = & \sigma^{\Diamond}(\omega)(q \mathrel{\circ} f) \\ & = & \sum_{x} q(f(x))\cdot \omega(x) \\ & = & \sum_{y}\sum_{x \in f^{-1}(y)} q(y) \cdot \omega(x) \\ & = & \sum_{y} q(y) \cdot \big(\sum_{x \in f^{-1}(y)}\omega(x)\big) \\ & = & \sum_{y} q(y) \cdot \distributionsymbol_{\leq 1}(f)(\omega)(y) \\ & = & \sigma^{\Diamond}\big(\distributionsymbol_{\leq 1}(f)(\omega)\big)(q) \\ & = & \big(\sigma^{\Diamond} \mathrel{\circ} \distributionsymbol_{\leq 1}(f)\big)(\omega)(q) \end{array}$$ \noindent And similarly: $$\begin{array}{rcl} \big(\mathcal{E}(f) \mathrel{\circ} \sigma^{\Box})(\omega)(q) & = & \mathcal{E}(f)\big(\sigma^{\Box}(\omega)\big)(q) \\ & = & \sigma^{\Box}(\omega)(q \mathrel{\circ} f) \\ & = & \sum_{x} q(f(x))\cdot \omega(x) + (1 - \sum_{x}\omega(x)) \\ & = & \sum_{y}\sum_{x \in f^{-1}(y)} q(y) \cdot \omega(x) + (1 - \sum_{y}\sum_{x \in f^{-1}(y)} \omega(x)) \\ & = & \sum_{y} q(y) \cdot \big(\sum_{x \in f^{-1}(y)}\omega(x)\big) + (1 - \sum_{y}\distributionsymbol_{\leq 1}(f)(\omega)(y)) \\ & = & \sum_{y} q(y) \cdot \distributionsymbol_{\leq 1}(f)(\omega)(y) + (1 - \sum_{y}\distributionsymbol_{\leq 1}(f)(\omega)(y)) \\ & = & \sigma^{\Box}\big(\distributionsymbol_{\leq 1}(f)(\omega)\big)(q) \\ & = & \big(\sigma^{\Box} \mathrel{\circ}
\distributionsymbol_{\leq 1}(f)\big)(\omega)(q) \end{array}$$ Preservation of units: $$\begin{array}{rcl} \big(\sigma^{\Diamond} \mathrel{\circ} \eta^{\distributionsymbol_{\leq 1}}\big)(x)(p) & = & \sigma^{\Diamond}\big(\eta^{\distributionsymbol_{\leq 1}}(x)\big)(p) \\ & = & \sum_{z} p(z) \cdot \eta^{\distributionsymbol_{\leq 1}}(x)(z) \\ & = & p(x) \\ & = & \eta^{\mathcal{E}}(x)(p) \\ \big(\sigma^{\Box} \mathrel{\circ} \eta^{\distributionsymbol_{\leq 1}}\big)(x)(p) & = & \sigma^{\Box}\big(\eta^{\distributionsymbol_{\leq 1}}(x)\big)(p) \\ & = & \sum_{z} p(z) \cdot \eta^{\distributionsymbol_{\leq 1}}(x)(z) + (1 - \sum_{z}\eta(x)(z)) \\ & = & p(x) + (1 - 1) \\ & = & \eta^{\mathcal{E}}(x)(p). \end{array}$$ \noindent Preservation of multiplication: $$\begin{array}{rcl} \big(\mu^{\mathcal{E}} \mathrel{\circ} \mathcal{E}(\sigma^{\Diamond}) \mathrel{\circ} \sigma^{\Diamond}\big)(\Omega)(p) & = & \mu^{\mathcal{E}}\big(\mathcal{E}(\sigma^{\Diamond})(\sigma^{\Diamond}(\Omega))\big)(p) \\ & = & \mathcal{E}(\sigma^{\Diamond})(\sigma^{\Diamond}(\Omega))(\lam{k}{k(p)}) \\ & = & \sigma^{\Diamond}(\Omega)\big((\lam{k}{k(p)}) \mathrel{\circ} \sigma^{\Diamond}\big) \\ & = & \sigma^{\Diamond}(\Omega)\big(\lam{\omega}{\sigma^{\Diamond}(\omega)(p)}\big) \\ & = & \sum_{\omega} \sigma^{\Diamond}(\omega)(p)\cdot \Omega(\omega) \\ & = & \sum_{\omega} \big(\sum_{x} \omega(x)\cdot p(x)\big)\cdot \Omega(\omega) \\ & = & \sum_{x} \sum_{\omega} p(x) \cdot \Omega(\omega) \cdot \omega(x) \\ & = & \sum_{x} p(x) \cdot \big(\sum_{\omega} \Omega(\omega) \cdot \omega(x)\big) \\ & = & \sum_{x} p(x) \cdot \mu^{\distributionsymbol_{\leq 1}}(\Omega)(x) \\ & = & \sigma^{\Diamond}\big(\mu^{\distributionsymbol_{\leq 1}}(\Omega)\big)(p) \\ & = & \big(\sigma^{\Diamond} \mathrel{\circ} \mu^{\distributionsymbol_{\leq 1}}\big)(\Omega)(p). 
\end{array}$$ \noindent Similarly, $$\begin{array}{rcl} \lefteqn{\big(\mu^{\mathcal{E}} \mathrel{\circ} \mathcal{E}(\sigma^{\Box}) \mathrel{\circ} \sigma^{\Box}\big)(\Omega)(p)} \\ & = & \mu^{\mathcal{E}}\big(\mathcal{E}(\sigma^{\Box})(\sigma^{\Box}(\Omega))\big)(p) \\ & = & \mathcal{E}(\sigma^{\Box})(\sigma^{\Box}(\Omega))(\lam{k}{k(p)}) \\ & = & \sigma^{\Box}(\Omega)\big((\lam{k}{k(p)}) \mathrel{\circ} \sigma^{\Box}\big) \\ & = & \sigma^{\Box}(\Omega)\big(\lam{\omega}{\sigma^{\Box}(\omega)(p)}\big) \\ & = & \sum_{\omega} \sigma^{\Box}(\omega)(p)\cdot \Omega(\omega) + (1 - \sum_{\omega}\Omega(\omega)) \\ & = & \sum_{\omega} \big(\sum_{x} \omega(x)\cdot p(x) + (1 - \sum_{x}\omega(x))\big)\cdot \Omega(\omega) + (1 - \sum_{\omega}\Omega(\omega)) \\ & = & \big(\sum_{x} \sum_{\omega} p(x) \cdot \Omega(\omega) \cdot \omega(x)\big) + \sum_{\omega} (\Omega(\omega) - \sum_{x}\Omega(\omega)\cdot\omega(x)) + (1 - \sum_{\omega}\Omega(\omega)) \\ & = & \sum_{x} p(x) \cdot \big(\sum_{\omega} \Omega(\omega) \cdot \omega(x)\big) + (1 - \sum_{\omega} \Omega(\omega) \cdot \omega(x)) \\ & = & \sum_{x} p(x) \cdot \mu^{\distributionsymbol_{\leq 1}}(\Omega)(x) + (1 - \sum_{x}\mu^{\distributionsymbol_{\leq 1}}(\Omega)(x)) \\ & = & \sigma^{\Box}\big(\mu^{\distributionsymbol_{\leq 1}}(\Omega)\big)(p) \\ & = & \big(\sigma^{\Box} \mathrel{\circ} \mu^{\distributionsymbol_{\leq 1}}\big)(\Omega)(p). \end{array}$$ For a Kleisli map $f\colon X \rightarrow \distributionsymbol_{\leq 1}(Y)$ we now get two predicate transformers $f^{\Diamond}, f^{\Box} \colon [0,1]^{Y} \rightarrow [0,1]^{X}$, namely: $$\begin{array}{rcccl} f^{\Diamond}(q)(x) & = & \sigma^{\Diamond}(f(x))(q) & = & \sum_{y} q(y)\cdot f(x)(y) \\ f^{\Box}(q)(x) & = & \sigma^{\Box}(f(x))(q) & = & \sum_{y} q(y)\cdot f(x)(y) + (1 - \sum_{y}f(x)(y)). \end{array}$$ \noindent They are as expected. 
The two associated algebras maps $\distributionsymbol_{\leq 1}([0,1]) \rightrightarrows [0,1]$ are obtained as: $$\begin{array}{rcccl} \tau^{\Diamond}(\omega) & = & \sigma^{\Diamond}(\omega)(\idmap) & = & \sum_{r\in[0,1]}r\cdot\omega(r) \\ \tau^{\Box}(\omega) & = & \sigma^{\Box}(\omega)(\idmap) & = & \sum_{r\in[0,1]}r\cdot\omega(r) + (1-\sum_{r\in[0,1]}\omega(r)) \end{array}$$ \noindent It is easy to see that $f^{\Diamond}$ preserves $0, \ovee$. Dually, $f^{\Box}$ preserves $1, \owedge$. For preservation of $\owedge$ let $p^{\bot} \mathrel{\bot} q^{\bot}$, that is $(1-p(x)) + (1-q(x)) \leq 1$, or equivalently, $1 \leq p(x)+q(x)$. Then: $$\begin{array}{rcccl} (p\owedge q)(x) & = & 1 - \big((1-p(x)) + (1-q(x))\big) & = & p(x) + q(x) - 1. \end{array}$$ \noindent And thus: $$\begin{array}{rcl} f^{\Box}(p \owedge q)(x) & = & \sum_{y} (p(y) + q(y) - 1)\cdot f(x)(y) + (1 - \sum_{y}f(x)(y)) \\ & = & \big(\sum_{y} p(y)\cdot f(x)(y)\big) - \big(\sum_{y} q(y)\cdot f(x)(y)\big) - 2\sum_{y}f(x)(y)) \\ & = & \sum_{y} p(y)\cdot f(x)(y) + (1 - \sum_{y}f(x)(y))\big) + \\ & & \qquad \sum_{y} q(y)\cdot f(x)(y) + (1 - \sum_{y}f(x)(y))\big) - 1 \\ & = & f^{\Box}(p)(x) \owedge f^{\Box}(q)(x) \end{array}$$ } \subsection{Compact Hausdorff spaces and effect modules}\label{CHEModSubsec} In the previous example we have used the \emph{set} $\Cat{EMod}(E, [0,1])$ of effect module maps $E \rightarrow [0,1]$, for an effect module $E$. It turns out that this homset has much more structure: it is a compact Hausdorff space. The reason is that the unit interval $[0,1]$ is compact Hausdorff, and so the function space $[0,1]^{E}$ too, by Tychonoff. The homset $\Cat{EMod}(E, [0,1]) \hookrightarrow [0,1]^{E}$ can be described via a closed subset of maps satisfying the effect module map requirements. Hence $\Cat{EMod}(E, [0,1])$ is compact Hausdorff itself. We thus obtain the following situation. 
$$\vcenter{\xymatrix@R-2pc{ \op{\Cat{EMod}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,[0,1])} \\ \dashv \\ \Cat{CH}\ar@/^2ex/[uu]^{\ensuremath{\mathrm{Hom}}(-,[0,1])}\ar@(dl,dr)_{\mathcal{R}=\Cat{EMod}(\ensuremath{\mathrm{C}}(-,[0,1]), [0,1])} }} \quad \begin{prooftree} \xymatrix@C+.5pc{Y\ar[r]^-{\Cat{EMod}} & \ensuremath{\mathrm{C}}(X,[0,1])} \Justifies \xymatrix{X\ar[r]_-{\Cat{CH}} & \Cat{EMod}(Y, [0,1])} \end{prooftree} \quad \vcenter{\xymatrix@R-2pc@C-2.5pc{ \op{\Cat{EMod}}\ar@/^0.7em/[rr] & \top & \EM(\mathcal{R})\rlap{$ = \Cat{CCH}sep$}\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{R})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \noindent For a compact Hausdorff space $X$, the subset $\ensuremath{\mathrm{C}}(X, [0,1]) \hookrightarrow [0,1]^{X}$ of continuous maps $X \rightarrow [0,1]$ is a (sub) effect module. The induced monad $\mathcal{R}(X) = \Cat{EMod}\big(\ensuremath{\mathrm{C}}(X, [0,1]), [0,1]\big)$ is the \emph{Radon} monad. Using the full \& faithful functor~\eqref{PUEModFunDiag} the monad can equivalently be described as $X \mapsto \ensuremath{\mathrm{Stat}}(\ensuremath{\mathrm{C}}(X))$, where $\ensuremath{\mathrm{C}}(X)$ is the commutative $C^*$-algebra of continuous functions $X \rightarrow \mathbb{C}$. The monad occurs in~\cite{Mislove12} as part of a topological and domain-theoretic approach to information theory. The main result of~\cite{FurberJ13a} is the equivalence of categories $$\begin{array}{rcl} \Kl(\mathcal{R}) & \simeq & \op{\big(\CCstarMap{\mathrm{PU}}\big)} \end{array}$$ \noindent between the Kleisli category of this Radon monad $\mathcal{R}$ and the category of commutative $C^*$-algebras and positive unital maps. This shows how (commutative) $C^*$-algebras appear in state-and-effect triangles (see also~\cite{Jacobs15a,ChoJWW15b}). The algebras of the Radon monad are convex compact Hausdorff spaces (with separation), like for the expectation monad $\mathcal{E}$, see~\cite{JacobsM12b} for details.
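Before turning to MV-modules, it is worth checking the derived MV operations on $[0,1]$ concretely: the truncated addition $x + y = x \ovee (x^{\bot} \wedge y)$ and subtraction $x - y = (x^{\bot} + y)^{\bot}$ introduced earlier. A quick exact Python check (purely illustrative; function names are ours):

```python
from fractions import Fraction

# Derived MV operations on [0,1], following x + y = x (+) (x^bot /\ y)
# and x - y = (x^bot + y)^bot, where (+) is the partial sum and /\ is min.

def orth(x):
    return 1 - x

def mv_add(x, y):
    return x + min(orth(x), y)      # always defined: x _|_ (x^bot /\ y)

def mv_sub(x, y):
    return orth(mv_add(orth(x), y))

grid = [Fraction(i, 8) for i in range(9)]
for x in grid:
    for y in grid:
        assert mv_add(x, y) == min(x + y, Fraction(1))   # truncated sum
        assert mv_sub(x, y) == max(x - y, Fraction(0))   # truncated difference
```

So both operations truncate at $1$ and $0$ respectively, as stated for the unit interval.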
\subsection{Compact Hausdorff spaces and MV-modules}\label{CHMVModSubsec} The adjunction $\op{\Cat{EMod}} \rightleftarrows \Cat{CH}$ can be restricted to an adjunction $\op{\Cat{MVMod}} \rightleftarrows \Cat{CH}$, involving MV-modules instead of effect modules. This can be done since continuous functions $X \rightarrow [0,1]$ are appropriately closed under joins $\vee$, and thus form an MV-module. Additionally, for an MV-module $E$, the MV-module maps $E \rightarrow [0,1]$ form a compact Hausdorff space (using the same argument as in the previous subsection). Via this restriction to an adjunction $\op{\Cat{MVMod}} \rightleftarrows \Cat{CH}$ we hit a wall again. \begin{lemma} For a compact Hausdorff space $X$, the unit $\eta \colon X \rightarrow \Cat{MVMod}\big(\ensuremath{\mathrm{C}}(X, [0,1]), [0,1]\big)$, given by $\eta(x)(p) = p(x)$, is an isomorphism in $\Cat{CH}$. \end{lemma} This result can be understood as part of the Yosida duality for Riesz spaces. It is well-known in the MV-algebra community, but possibly not precisely in this form. For convenience, we include a proof. \begin{myproof} We only show that the unit $\eta$ is a bijection, not that it is also a homeomorphism. Injectivity is immediate by Urysohn. For surjectivity, we first establish the following two auxiliary results. \begin{enumerate} \item For each $p\in\ensuremath{\mathrm{C}}(X, [0,1])$ and $\omega\in\Cat{MVMod}\big(\ensuremath{\mathrm{C}}(X, [0,1]), [0,1]\big)$, if $\omega(p) = 0$, then there is an $x\in X$ with $p(x) = 0$. If not, then $p(x) > 0$ for all $x\in X$. Hence there is an inclusion $X \subseteq \bigcup_{r > 0} p^{-1}\big((r,1]\big)$. By compactness there are finitely many $r_{i}$ with $X \subseteq \bigcup_{i} p^{-1}\big((r_{i}, 1]\big)$. Thus for $r = \bigwedge_{i} r_{i} > 0$ we have $p(x) > r$ for all $x\in X$. Find an $n\in\mathbb{N}$ with $n\cdot r \geq 1$.
The $n$-fold sum $n\cdot p$ in the MV-module $\ensuremath{\mathrm{C}}(X, [0,1])$ then satisfies $(n\cdot p)(x) = 1$ for all $x$, so that $n\cdot p = 1$ in $\ensuremath{\mathrm{C}}(X, [0,1])$. But now we get a contradiction: $1 = \omega(1) = \omega(n\cdot p) = n\cdot \omega(p) = 0$. \item For each finite collection of maps $p_{1}, \ldots, p_{n} \in \ensuremath{\mathrm{C}}(X, [0,1])$ and for each function $\omega\in\Cat{MVMod}\big(\ensuremath{\mathrm{C}}(X, [0,1]), [0,1]\big)$ there is an $x\in X$ with $\omega(p_{i}) = p_{i}(x)$ for all $1 \leq i \leq n$. For the proof, define $p\in\ensuremath{\mathrm{C}}(X, [0,1])$ using the MV-structure of $\ensuremath{\mathrm{C}}(X, [0,1])$ as: $$\begin{array}{rcl} p & = & {\displaystyle\bigvee}_{\!i} \big(p_{i} - \omega(p_{i})\cdot 1\big) \vee \big(\omega(p_{i})\cdot 1 - p_{i}\big). \end{array}$$ \noindent Since the state $\omega \colon \ensuremath{\mathrm{C}}(X,[0,1]) \rightarrow [0,1]$ preserves the MV-structure we get in $[0,1]$: $$\begin{array}{rcccl} \omega(p) & = & {\displaystyle\bigvee}_{\!i} \big(\omega(p_{i}) - \omega(p_{i})\cdot 1\big) \vee \big(\omega(p_{i})\cdot 1 - \omega(p_{i})\big) & = & 0. \end{array}$$ \noindent Hence by the previous point there is an $x\in X$ with $p(x) = 0$. But then $p_{i}(x) = \omega(p_{i})$, as required. \end{enumerate} \noindent Now we can prove surjectivity of the unit map $\eta \colon X \rightarrow \Cat{MVMod}\big(\ensuremath{\mathrm{C}}(X, [0,1]), [0,1]\big)$. Let $\omega\colon \ensuremath{\mathrm{C}}(X, [0,1]) \rightarrow [0,1]$ be an MV-module map. Define for each $p\in\ensuremath{\mathrm{C}}(X, [0,1])$ the subset $U_{p} = \setin{x}{X}{\omega(p) \neq p(x)}$. This subset $U_{p} \subseteq X$ is open since it can be written as $f^{-1}(\mathbb{R}-\{0\})$, for the continuous function $f(x) = p(x) - \omega(p)$. Suppose towards a contradiction that $\omega \neq \eta(x)$ for all $x\in X$. Thus, for each $x\in X$ there is a $p\in\ensuremath{\mathrm{C}}(X, [0,1])$ with $\omega(p) \neq \eta(x)(p) = p(x)$.
This means $X \subseteq \bigcup_{p}U_{p}$. By compactness of $X$ there are finitely many $p_{i}\in\ensuremath{\mathrm{C}}(X, [0,1])$ with $X \subseteq \bigcup_{i} U_{p_i}$. The above second point however gives an $x\in X$ with $\omega(p_{i}) = p_{i}(x)$ for all $i$. But then $x\not\in \bigcup_{i} U_{p_i}$. \hspace*{\fill}$\QEDbox$ \end{myproof} \subsection{Sets and directed complete effect modules}\label{SetsDcEModSubsec} In the remainder of this paper we shall consider effect modules with additional completeness properties (w.r.t.\ their standard order), as in~\cite{JacobsW15a}. Specifically, we consider $\omega$-complete and directed-complete effect modules. In the first case each ascending $\omega$-chain $x_{0} \leq x_{1} \leq \cdots$ has a least upper bound $\bigvee_{n}x_{n}$; and in the second case each directed subset $D$ has a join $\bigvee D$. We write the resulting subcategories as: $$\xymatrix{ \Cat{DcEMod}\;\ar@{^(->}[r] & \ensuremath{\omega\text{-}\EMod}\;\ar@{^(->}[r] & \Cat{EMod} }$$ \noindent where maps are required to preserve the relevant joins $\bigvee$. We start with the directed-complete case. The adjunction $\op{\Cat{EMod}} \rightleftarrows \Cat{Sets}$ from Subsection~\ref{SetsEModSubsec} can be restricted to an adjunction as on the left below.
$$\hspace*{-0.5em}\vcenter{\xymatrix@R-2pc{ \op{\Cat{DcEMod}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,[0,1])} \\ \dashv \\ \Cat{Sets}\ar@/^2ex/[uu]^{\ensuremath{\mathrm{Hom}}(-,[0,1])}\ar@(dl,dr)_{\mathcal{E}_{\infty}=\Cat{DcEMod}([0,1]^{(-)}, [0,1])} }} \begin{prooftree} \xymatrix@C+1pc{Y\ar[r]^-{\Cat{DcEMod}} & [0,1]^{X}} \Justifies \xymatrix@C+1pc{X\ar[r]_-{\Cat{Sets}} & \Cat{DcEMod}(Y, [0,1])} \end{prooftree} \vcenter{\xymatrix@R-2pc@C-2.5pc{ \op{\Cat{DcEMod}}\ar@/^0.7em/[rr] & \top & \EM(\mathcal{E}_{\infty})\rlap{$=\!\!\Cat{Conv}_{\infty}$}\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{E}_{\infty})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \auxproof{ We briefly check the adjunction. \begin{itemize} \item $\overline{f}(x) = \lam{y}{f(y)(x)}$ is in $\Cat{EMod}(Y, [0,1])$, since for each directed collection $y_{i}$ we have: $$\begin{array}{rcl} \overline{f}(x)(\bigvee_{i}y_{i}) & = & f(\bigvee_{i}y_{i})(x) \\ & = & \big(\bigvee_{i} f(y_{i})\big)(x) \\ & = & \bigvee_{i} f(y_{i})(x) \qquad \mbox{since joins are pointwise in } [0,1]^X \\ & = & \bigvee_{i} \overline{f}(x)(y_{i}) \end{array}$$ \item $\overline{g} = \lam{y}{\lam{x}{g(x)(y)}}$ preserves joins, since: $$\begin{array}{rcl} \overline{g}(\bigvee_{i}y_{i}) & = & \lam{x}{g(x)(\bigvee_{i} y_{i})} \\ & = & \lam{x}{\bigvee_{i} g(x)(y_{i})} \\ & = & \bigvee_{i} \lam{x}{\overline{g}(y_{i})(x)} \qquad \mbox{since joins are pointwise in } [0,1]^X \\ & = & \bigvee_{i} \overline{g}(y_{i}).
\end{array}$$ \end{itemize} } \noindent The resulting monad $\mathcal{E}_{\infty} = \Cat{DcEMod}\big([0,1]^{(-)}, [0,1]\big)$ on $\Cat{Sets}$ is in fact isomorphic\footnote{This isomorphism $\mathcal{E}_{\infty} \cong \distributionsymbol_{\infty}$ in Proposition~\ref{DcEModMonadProp} is inspired by work of Robert Furber (PhD Thesis, forthcoming): he noticed the isomorphism $\ensuremath{\mathrm{NStat}}(\ell^{\infty}(X)) \cong \distributionsymbol_{\infty}(X)$ in~\eqref{NStatIsoEqn}, which is obtained here as a corollary to Proposition~\ref{DcEModMonadProp}.} to the infinite (discrete probability) distribution monad $\distributionsymbol_{\infty}$, see~\cite{Jacobs16g}. We recall, for a set $X$, $$\begin{array}{rcl} \distributionsymbol_{\infty}(X) & = & \set{\omega \colon X \rightarrow [0,1]}{\mathrm{supp}(\omega)\mbox{ is countable, and }\sum_{x}\omega(x) = 1}. \end{array}$$ \noindent The subset $\mathrm{supp}(\omega) \subseteq X$ contains the elements $x\in X$ with $\omega(x) \neq 0$. The requirement in the definition of $\distributionsymbol_{\infty}(X)$ that $\mathrm{supp}(\omega)$ be countable is superfluous, since it follows from the requirement $\sum_{x} \omega(x) = 1$. Briefly, $\mathrm{supp}(\omega) \subseteq \bigcup_{n>0} X_{n}$, where $X_{n} = \setin{x}{X}{\omega(x) > \frac{1}{n}}$ contains at most $n-1$ elements (see \textit{e.g.}~\cite[Prop.~2.1.2]{Sokolova05}). \begin{proposition} \label{DcEModMonadProp} There is an isomorphism of monads $\distributionsymbol_{\infty} \cong \mathcal{E}_{\infty}$, where $\mathcal{E}_{\infty}$ is the monad induced by the above adjunction $\op{\Cat{DcEMod}} \rightleftarrows \Cat{Sets}$. \end{proposition} \begin{myproof} For a subset $U\subseteq X$ we write $\indic{U} \colon X \rightarrow [0,1]$ for the `indicator' function, defined by $\indic{U}(x) = 1$ if $x\in U$ and $\indic{U}(x) = 0$ if $x\not\in U$. We write $\indic{x}$ for $\indic{\{x\}}$. 
This function $\indic{(-)} \colon \powersetsymbol(X) \rightarrow [0,1]^{X}$ is a map of effect algebras that preserves all joins. Let $h\in\mathcal{E}_{\infty}(X)$, so $h$ is a Scott continuous map of effect modules $h \colon [0,1]^{X} \rightarrow [0,1]$. Define $\overline{h} \colon X \rightarrow [0,1]$ as $\overline{h}(x) = h(\indic{x})$. Notice that if $U\subseteq X$ is a finite subset, then: $$\begin{array}{rcccccccccccl} 1 & = & h(1) & = & h(\indic{X}) & \geq & h(\indic{U}) & = & h(\bigovee_{x\in U} \indic{x}) & = & \bigovee_{x\in U} h(\indic{x}) & = & \bigovee_{x\in U} \overline{h}(x). \end{array}$$ \noindent We can write $X$ as directed union of its finite subsets, and thus also $\indic{X} = \bigvee\set{\indic{U}}{U \subseteq X \mbox{ finite}}$. But then $\overline{h} \in \distributionsymbol_{\infty}(X)$, because $h$ preserves directed joins: $$\begin{array}{rcccccccl} 1 & = & h(\indic{X}) & = & \bigvee \set{h(\indic{U})}{U \subseteq X \mbox{ finite}} & = & \bigvee \set{\sum_{x\in U} \overline{h}(x)}{U \subseteq X \mbox{ finite}} & = & \sum_{x\in X} \overline{h}(x). \end{array}$$ Conversely, given $\omega \in \distributionsymbol_{\infty}(X)$ we define $\overline{\omega} \colon [0,1]^{X} \rightarrow [0,1]$ as $\overline{\omega}(p) = \sum_{x\in X} p(x) \cdot \omega(x)$. It is easy to see that $\overline{\omega}$ is a map of effect modules. It is a bit more challenging to see that it preserves directed joins $\bigvee_{i} p_{i}$, for $p_{i}\in [0,1]^{X}$. First we write the countable support of $\omega$ as $\mathrm{supp}(\omega) = \{x_{0}, x_{1}, x_{2}, \ldots\}\subseteq X$ in such a way that $\omega(x_{0}) \geq \omega(x_{1}) \geq \omega(x_{2}) \geq \cdots$. We have $1 = \sum_{x\in X}\omega(x) = \sum_{n\in\mathbb{N}} \omega(x_{n})$. Hence, for each $N\in\mathbb{N}$ we get: $$\begin{array}{rcl} \sum_{n > N}\omega(x_{n}) & = & 1 - \sum_{n \leq N} \omega(x_{n}). 
\end{array}$$ \noindent By taking the limit $N \rightarrow \infty$ on both sides we get: $$\begin{array}{rcccccccl} \lim\limits_{N\rightarrow\infty}\sum_{n > N}\omega(x_{n}) & = & 1 - \lim\limits_{N\rightarrow\infty}\sum_{n \leq N} \omega(x_{n}) & = & 1 - \sum_{n\in\mathbb{N}} \omega(x_{n}) & = & 1 - 1 & = & 0. \end{array}$$ \noindent We have to prove $\overline{\omega}(\bigvee_{i}p_{i}) = \bigvee_{i}\overline{\omega}(p_{i})$. The non-trivial part is $(\leq)$. For each $N\in\mathbb{N}$ we have: $$\begin{array}{rcl} \overline{\omega}(\bigvee_{i}p_{i}) & = & \sum_{n\in\mathbb{N}} (\bigvee_{i}p_{i})(x_{n})\cdot \omega(x_{n}) \\ & = & \sum_{n\in\mathbb{N}} (\bigvee_{i}p_{i}(x_{n}))\cdot \omega(x_{n}) \\ & = & \sum_{n\in\mathbb{N}} \bigvee_{i}p_{i}(x_{n})\cdot \omega(x_{n}) \\ & = & \Big(\sum_{n\leq N} \bigvee_{i}p_{i}(x_{n})\cdot \omega(x_{n})\Big) + \Big(\sum_{n > N} \bigvee_{i}p_{i}(x_{n})\cdot \omega(x_{n})\Big) \\ & = & \Big(\bigvee_{i}\sum_{n\leq N} p_{i}(x_{n})\cdot \omega(x_{n})\Big) + \Big(\sum_{n > N} \bigvee_{i}p_{i}(x_{n})\cdot \omega(x_{n})\Big) \\ & \leq & \Big(\bigvee_{i}\sum_{n\leq N} p_{i}(x_{n})\cdot \omega(x_{n})\Big) + \Big(\sum_{n > N} \omega(x_{n})\Big) \qquad \mbox{since } p_{i}(x) \in [0,1]. \end{array}$$ \noindent Hence we are done by taking the limit $N \rightarrow \infty$. Notice that we use that the join $\bigvee$ can be moved outside a finite sum. This works precisely because the join is taken over a directed set. What remains is to show that these mappings $h\mapsto \overline{h}$ and $\omega\mapsto\overline{\omega}$ yield an isomorphism $\distributionsymbol_{\infty}(X) \cong \mathcal{E}_{\infty}(X)$, which is natural in $X$, and forms an isomorphism of monads. This is left to the interested reader. 
\hspace*{\fill}$\QEDbox$ \auxproof{ $$\begin{array}{rcl} \overline{\overline{h}}(p) & = & \sum_{x} p(x) \cdot \overline{h}(x) \\ & = & \sum_{x} p(x) \cdot h(\indic{x}) \\ & = & \sum_{x} h(p(x) \cdot \indic{x}) \\ & = & \bigvee \set{\sum_{x\in U} h(p(x) \cdot \indic{x})} {U \subseteq X \mbox{ finite}} \\ & = & \bigvee \set{h(\bigovee_{x\in U} p(x) \cdot \indic{x})} {U \subseteq X \mbox{ finite}} \\ & = & h\big(\bigvee \set{\bigovee_{x\in U} p(x) \cdot \indic{x}} {U \subseteq X \mbox{ finite}}\big) \\ & = & h(p) \\ \overline{\overline{\omega}}(x) & = & \overline{\omega}(\indic{x}) \\ & = & \sum_{x'} \indic{x}(x') \cdot \omega(x') \\ & = & 1 \cdot \omega(x) \\ & = & \omega(x). \end{array}$$ Let's now write $\sigma_{X} \colon \distributionsymbol_{\infty}(X) \rightarrow \mathcal{E}_{\infty}(X)$ for this map, given by $\sigma_{X}(\omega)(p) = \sum_{x} p(x)\cdot \omega(x)$. For $f\colon X \rightarrow Y$ we have for $\omega\in\distributionsymbol_{\infty}(X)$ and $q \in [0,1]^{Y}$, $$\begin{array}{rcl} \big(\mathcal{E}_{\infty}(f) \mathrel{\circ} \sigma_{X}\big)(\omega)(q) & = & \mathcal{E}_{\infty}(f)\big(\sigma_{X}(\omega)\big)(q) \\ & = & \sigma_{X}(\omega)(q \mathrel{\circ} f) \\ & = & \sum_{x} q(f(x))\cdot \omega(x) \\ & = & \sum_{y, x \in f^{-1}(y)} q(y) \cdot \omega(x) \\ & = & \sum_{y} q(y) \cdot \big(\sum_{x \in f^{-1}(y)} \omega(x)\big) \\ & = & \sum_{y} q(y) \cdot \distributionsymbol_{\infty}(f)(\omega)(y) \\ & = & \sigma_{Y}\big(\distributionsymbol_{\infty}(f)(\omega)\big)(q) \\ & = & \big(\sigma_{Y} \mathrel{\circ} \distributionsymbol_{\infty}(f)\big)(\omega)(q). \end{array}$$ \noindent Also, $$\begin{array}{rcl} \big(\sigma_{X} \mathrel{\circ} \eta^{\distributionsymbol_{\infty}}_{X}\big)(x)(p) & = & \sigma_{X}(\eta(x))(p) \\ & = & \sum_{x'} p(x')\cdot \eta(x)(x') \\ & = & p(x) \cdot 1 \\ & = & \eta^{\mathcal{E}_{\infty}}_{X}(x)(p).
\end{array}$$ \noindent Further, $$\begin{array}{rcl} \big(\mu^{\mathcal{E}_{\infty}} \mathrel{\circ} \mathcal{E}_{\infty}(\sigma) \mathrel{\circ} \sigma\big)(\Omega)(p) & = & \mu^{\mathcal{E}_{\infty}}\big(\mathcal{E}_{\infty}(\sigma)(\sigma(\Omega))\big)(p) \\ & = & \mathcal{E}_{\infty}(\sigma)(\sigma(\Omega))(\lam{k}{k(p)}) \\ & = & \sigma(\Omega)\big((\lam{k}{k(p)}) \mathrel{\circ} \sigma\big) \\ & = & \sigma(\Omega)\big(\lam{\omega}{\sigma(\omega)(p)}\big) \\ & = & \sum_{\omega} \sigma(\omega)(p) \cdot \Omega(\omega) \\ & = & \sum_{\omega} (\sum_{x} p(x) \cdot \omega(x)) \cdot \Omega(\omega) \\ & = & \sum_{\omega, x} p(x) \cdot \omega(x) \cdot \Omega(\omega) \\ & = & \sum_{x} p(x) \cdot (\sum_{\omega} \omega(x) \cdot \Omega(\omega)) \\ & = & \sum_{x} p(x) \cdot \mu^{\distributionsymbol_{\infty}}(\Omega)(x) \\ & = & \sigma\big(\mu^{\distributionsymbol_{\infty}}(\Omega)\big)(p) \\ & = & \big(\sigma \mathrel{\circ} \mu^{\distributionsymbol_{\infty}}\big)(\Omega)(p). \end{array}$$ } \end{myproof} As a result, the Eilenberg-Moore category $\EM(\mathcal{E}_{\infty})$ is isomorphic to $\EM(\distributionsymbol_{\infty}) = \Cat{Conv}_{\infty}$, where $\Cat{Conv}_{\infty}$ is the category of countably-convex sets $X$, in which convex sums $\sum_{n\in\mathbb{N}}r_{n}x_{n}$ exist, where $x_{n}\in X$ and $r_{n}\in [0,1]$ with $\sum_{n}r_{n} = 1$. We briefly look at the relation with $C^*$-algebras (actually $W^*$-algebras), like in Subsection~\ref{SetsEModSubsec}. We write $\WstarMap{\mathrm{NPU}}$ for the category of $W^*$-algebras with normal positive unital maps. The term `normal' is used in the operator algebra community for what is called `Scott continuity' (preservation of directed joins) in the domain theory community. 
This means that taking effects yields a full and faithful functor: \begin{equation} \label{NPUDcEModFunDiag} \xymatrix{ \WstarMap{\mathrm{NPU}}\ar[rr]^-{[0,1]_{(-)}} & & \Cat{DcEMod} } \end{equation} \noindent This is similar to the situation in~\eqref{PUEModFunDiag} and~\eqref{MIUMVModFunDiag}. One could also use $AW^*$-algebras here. Next, there is a full and faithful functor to the category of commutative $W^*$-algebras: \begin{equation} \label{KlDWstarFunDiag} \xymatrix{ \Kl(\distributionsymbol_{\infty}) \cong \Kl(\mathcal{E}_{\infty})\ar[rr] & & \CWstarMap{\mathrm{NPU}} } \end{equation} \noindent On objects it is given by $X \mapsto \ell^{\infty}(X)$. This functor is full and faithful since there is a bijective correspondence: $$\begin{prooftree} \xymatrix{\ell^{\infty}(X)\ar[r] & \ell^{\infty}(Y) \rlap{\hspace*{12.6em}in $\CWstarMap{\mathrm{NPU}}$}} \Justifies \xymatrix{Y\ar[r] & \ensuremath{\mathrm{NStat}}(\ell^{\infty}(X)) \rlap{$\;\cong\mathcal{E}_{\infty}(X)\cong\distributionsymbol_{\infty}(X)$\hspace*{3em}in $\Cat{Sets}$}} \end{prooftree}\hspace*{12em}$$ \noindent where the isomorphism $\cong$ describing normal states is given, like in~\eqref{StatIsoEqn}, by: \begin{equation} \label{NStatIsoEqn} \begin{array}{rcl} \ensuremath{\mathrm{NStat}}(\ell^{\infty}(X)) \hspace*{\arraycolsep}\smash{\stackrel{\text{def}}{=}}\hspace*{\arraycolsep} \WstarMap{\mathrm{NPU}}\big(\ell^{\infty}(X), \mathbb{C}\big) & \smash{\stackrel{\eqref{NPUDcEModFunDiag}}{\cong}} & \Cat{DcEMod}\big([0,1]_{\ell^{\infty}(X)}, [0,1]_{\mathbb{C}}\big) \\ & = & \Cat{DcEMod}\big([0,1]^{X}, [0,1]\big) \\ & = & \mathcal{E}_{\infty}(X) \\ & \cong & \distributionsymbol_{\infty}(X). \end{array} \end{equation} \subsection{Measurable spaces and $\omega$-complete effect modules}\label{MeaswEModSubsec} In our next example we use an adjunction between effect modules and measurable spaces (instead of sets or compact Hausdorff spaces).
We write $\Cat{Meas}$ for the category of measurable spaces $(X,\Sigma_{X})$, where $\Sigma_{X} \subseteq \powersetsymbol(X)$ is the $\sigma$-algebra of measurable subsets, with measurable functions between them (whose inverse image maps measurable subsets to measurable subsets). We use the unit interval $[0,1]$ with its standard Borel $\sigma$-algebra (the least one that contains all the usual opens). A basic fact in this situation is that for a measurable space $X$, the set $\Cat{Meas}(X, [0,1])$ of measurable functions $X \rightarrow [0,1]$ is an $\omega$-effect module. The effect module structure is inherited via the inclusion $\Cat{Meas}(X, [0,1]) \hookrightarrow [0,1]^{X}$. Joins of ascending $\omega$-chains $p_{0} \leq p_{1} \leq \cdots$ exist, because the (pointwise) join $\bigvee_{n} p_{n}$ is a measurable function again. In this way we obtain a functor $\Cat{Meas}(-, [0,1]) \colon \Cat{Meas} \rightarrow \op{\ensuremath{\omega\text{-}\EMod}}$. In the other direction there is also a hom-functor $\ensuremath{\omega\text{-}\EMod}(-, [0,1]) \colon \op{\ensuremath{\omega\text{-}\EMod}} \rightarrow \Cat{Meas}$. For an $\omega$-effect module $E$ we can provide the set of maps $\ensuremath{\omega\text{-}\EMod}(E, [0,1])$ with a $\sigma$-algebra, namely the least one that makes all the evaluation maps $\mathrm{ev}_{x} \colon \ensuremath{\omega\text{-}\EMod}(E,[0,1]) \rightarrow [0,1]$ measurable, for $x\in E$. This function $\mathrm{ev}_{x}$ is given by $\mathrm{ev}_{x}(p) = p(x)$. This gives the following situation.
$$\vcenter{\xymatrix@R-2pc{ \op{\ensuremath{\omega\text{-}\EMod}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,[0,1])} \\ \dashv \\ \Cat{Meas}\ar@/^2ex/[uu]^{\ensuremath{\mathrm{Hom}}(-,[0,1])}\ar@(dl,dr)_{\mathcal{G} = \ensuremath{\omega\text{-}\EMod}(\Cat{Meas}(-,[0,1]), [0,1])} }} \quad \begin{prooftree} \xymatrix@C+1pc{Y\ar[r]^-{\ensuremath{\omega\text{-}\EMod}} & \Cat{Meas}(X, [0,1])} \Justifies \xymatrix{X\ar[r]_-{\Cat{Meas}} & \ensuremath{\omega\text{-}\EMod}(Y, [0,1])} \end{prooftree} \quad \vcenter{\xymatrix@C-2pc{ \op{\ensuremath{\omega\text{-}\EMod}}\ar@/^0.7em/[rr] & \top & \EM(\mathcal{G})\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{G})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}\hspace*{6em}$$ \noindent We use the symbol $\mathcal{G}$ for the induced monad because of the following result. \begin{proposition} \label{GiryProp} The monad $\mathcal{G} = \ensuremath{\omega\text{-}\EMod}\big(\Cat{Meas}(-,[0,1]), [0,1]\big)$ on $\Cat{Meas}$ in the above situation is (isomorphic to) the Giry monad~\cite{Giry82}, given by probability measures: $$\begin{array}{rcccl} \mathrm{Giry}(X) & \smash{\stackrel{\text{def}}{=}} & \set{\phi\colon \Sigma_{X} \rightarrow [0,1]} {\phi \mbox{ is a probability measure}} & = & \ensuremath{\omega\text{-}\EA}(\Sigma_{X}, [0,1]). \end{array}$$ \end{proposition} \begin{myproof} The isomorphism involves Lebesgue integration: $$\hspace*{5em}\xymatrix{ \llap{$\mathcal{G}(X) = \ensuremath{\omega\text{-}\EMod}\big(\Cat{Meas}(X,[0,1]),\;$} [0,1]\big) \ar@/^2ex/[rr]^-{I \mapsto (M\mapsto I(\indic{M}))} & \cong & \ensuremath{\omega\text{-}\EA}\rlap{$(\Sigma_{X}, [0,1]) = \mathrm{Giry}(X)$} \ar@/^2ex/[ll]^-{\phi \mapsto (p\mapsto \int p \intd\phi)} }$$ \noindent See~\cite{Jacobs13a} or~\cite{JacobsW15a} for more details. \hspace*{\fill}$\QEDbox$ \end{myproof} The above triangle is further investigated in~\cite{Jacobs13a}.
It resembles the situation described in~\cite{ChaputDPP14} for Markov kernels (the ordinary, not the abstract, ones). \subsection{Dcpo's and directed complete effect modules}\label{DcpoDcEModSubsec} In our final example we briefly consider another variation of the adjunction $\op{\Cat{DcEMod}} \leftrightarrows \Cat{Sets}$ in Subsection~\ref{SetsDcEModSubsec}, now with an adjunction $\op{\Cat{DcEMod}} \leftrightarrows \Cat{Dcpo}$ between the categories of directed complete effect modules and directed complete partial orders. This brings us into the realm of probabilistic power domains, which has its own thread of research, see \textit{e.g.}~\cite{Heckmann94,JonesP89,Keimel08,Keimel09,KeimelP09,Saheb80,TixKP05}. Our only aim at this stage is to show how the current approach connects to that line of work. The most significant difference is that we use the unit interval $[0,1]$, whereas it is customary for probabilistic power domains to use the extended non-negative real numbers $\setin{r}{\mathbb{R}}{r \geq 0} \cup \{\infty\}$. Consequently, we use effect modules instead of cones.
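To make the structure on $[0,1]$ that this comparison relies on concrete, here is a minimal Python sketch (ours, not part of the paper): the unit interval as a directed complete effect module, with partial sum $r \ovee s$ defined only when $r+s\leq 1$, orthocomplement $r^{\perp}=1-r$, and scalar multiplication; directed joins are suprema, and multiplication by a fixed scalar preserves them.

```python
# Minimal illustration (ours, not from the paper): the unit interval [0,1]
# as a directed-complete effect module.  The partial sum r (+) s is defined
# only when r + s <= 1; the orthocomplement is 1 - r; scalars act by
# ordinary multiplication; directed joins are suprema.
EPS = 1e-12

def osum(r, s):
    """Partial addition on [0,1], defined iff r + s <= 1."""
    if r + s > 1 + EPS:
        raise ValueError("r (+) s undefined: r + s > 1")
    return r + s

def ortho(r):
    """Orthocomplement 1 - r."""
    return 1.0 - r

def smul(t, r):
    """Scalar multiplication [0,1] x [0,1] -> [0,1]."""
    return t * r

# Scott continuity of multiplication in each variable: for an ascending
# chain (r_n), sup_n (t * r_n) = t * sup_n r_n.
chain = [1 - 2.0 ** (-n) for n in range(30)]
t = 0.5
assert abs(max(smul(t, r) for r in chain) - smul(t, max(chain))) < EPS
```

The pointwise versions of these operations give the effect modules $[0,1]^{X}$ used throughout this section.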
Using that the unit interval $[0,1]$, with its usual order, is a dcpo, and that its multiplication, and also its partial addition, are Scott continuous in each variable, we obtain: $$\vcenter{\xymatrix@R-2pc{ \op{\Cat{DcEMod}}\ar@/^2ex/[dd]^{\ensuremath{\mathrm{Hom}}(-,[0,1])} \\ \dashv \\ \Cat{Dcpo}\ar@/^2ex/[uu]^{\ensuremath{\mathrm{Hom}}(-,[0,1])}\ar@(dl,dr)_{\mathcal{V}=\Cat{DcEMod}([0,1]^{(-)}, [0,1])} }} \quad \begin{prooftree} \xymatrix@C+1.5pc{Y\ar[r]^-{\Cat{DcEMod}} & \Cat{Dcpo}(X, [0,1])} \Justifies \xymatrix{X\ar[r]_-{\Cat{Dcpo}} & \Cat{DcEMod}(Y, [0,1])} \end{prooftree} \quad \vcenter{\xymatrix@C-2.5pc{ \op{\Cat{DcEMod}}\ar@/^0.7em/[rr] & \top & \EM(\mathcal{V})\ar@/^0.6em/[ll] \\ & \Kl(\mathcal{V})\ar[ul]^{\ensuremath{\mathrm{Pred}}}\ar[ur]_{\ensuremath{\mathrm{Stat}}} & }}$$ \noindent The induced monad $\mathcal{V}$ is a restricted version of the monad of valuations, which uses the extended real numbers, as mentioned above. It is unclear what its category of Eilenberg-Moore algebras is. \auxproof{ We check the adjunction. First of all, for a dcpo $X$ the homset $\ensuremath{\mathrm{Hom}}(X, [0,1])$ of Scott continuous functions $X \rightarrow [0,1]$ is a dcpo (since $\Cat{Dcpo}$ is cartesian closed), and also an effect module via pointwise partial sum $\ovee$ and scalar multiplication. In the other direction, for $Y\in\Cat{DcEMod}$ the homset $\ensuremath{\mathrm{Hom}}(Y, [0,1])$ of Scott continuous effect module maps is a dcpo, via the pointwise join. This join is again a map of effect modules via continuity of the operations on $[0,1]$. For instance, for a directed collection $(h_{i})$ in $\ensuremath{\mathrm{Hom}}(Y,[0,1])$, define $h(y) = \bigvee_{i}h_{i}(y)$. This $h$ is continuous, and preserves $\ovee$ since: $$\begin{array}{rcl} h(y\ovee z) & = & \bigvee_{i} h_{i}(y\ovee z) \\ & = & \bigvee_{i} h_{i}(y)\ovee h_{i}(z) \\ & = & \bigvee_{i} h_{i}(y)\ovee \bigvee_{i}h_{i}(z) \qquad \mbox{by directedness} \\ & = & h(y) \ovee h(z).
\end{array}$$ \noindent The adjunction involves the usual argument swapping and verifications. \begin{itemize} \item For $f\colon Y \rightarrow \Cat{Dcpo}(X,[0,1])$ in $\Cat{DcEMod}$ we obtain $\overline{f} \colon X \rightarrow \Cat{DcEMod}(Y,[0,1])$ by $\overline{f}(x)(y) = f(y)(x)$. This is well-defined. Each $\overline{f}(x) \colon Y \rightarrow [0,1]$ is a map of effect modules, for instance because: $$\begin{array}{rcl} \overline{f}(x)(r\cdot y) & = & f(r\cdot y)(x) \\ & = & \big(r\cdot f(y)\big)(x) \\ & = & r \cdot f(y)(x) \\ & = & r \cdot \overline{f}(x)(y). \end{array}$$ \item In the other direction, for $g\colon X \rightarrow \ensuremath{\mathrm{Hom}}(Y, [0,1])$ in $\Cat{Dcpo}$ we have $\overline{g} \colon Y \rightarrow \ensuremath{\mathrm{Hom}}(X, [0,1])$ in $\Cat{DcEMod}$, given by $\overline{g}(y)(x) = g(x)(y)$. This $\overline{g}$ is a map of effect modules, for instance because: $$\begin{array}{rcl} \overline{g}(r\cdot y) & = & \lam{x}{\overline{g}(r\cdot y)(x)} \\ & = & \lam{x}{g(x)(r\cdot y)} \\ & = & \lam{x}{r\cdot g(x)(y)} \\ & = & r\cdot \big(\lam{x}{g(x)(y)}\big) \\ & = & r\cdot \big(\lam{x}{\overline{g}(y)(x)}\big) \\ & = & r \cdot \overline{g}(y). \end{array}$$ \end{itemize} } \subparagraph*{\textbf{Acknowledgements}} Several people have contributed to the ideas and examples presented here, including, in alphabetical order: Kenta Cho, Robert Furber, Helle Hansen, Klaus Keimel, Bas and Bram Westerbaan. Thanks to all of them! \end{document}
\begin{document} \title[The Dirichlet problem for the fractional Laplacian]{The Dirichlet problem for the fractional Laplacian: regularity up to the boundary} \author{Xavier Ros-Oton} \address{Universitat Polit\`ecnica de Catalunya, Departament de Matem\`{a}tica Aplicada I, Diagonal 647, 08028 Barcelona, Spain} \email{[email protected]} \thanks{The authors were supported by grants MTM2008-06349-C03-01, MTM2011-27739-C04-01 (Spain), and 2009SGR345 (Catalunya)} \author{Joaquim Serra} \address{Universitat Polit\`ecnica de Catalunya, Departament de Matem\`{a}tica Aplicada I, Diagonal 647, 08028 Barcelona, Spain} \email{[email protected]} \keywords{Fractional Laplacian, Dirichlet problem, regularity, boundary Harnack inequality} \maketitle \begin{abstract} We study the regularity up to the boundary of solutions to the Dirichlet problem for the fractional Laplacian. We prove that if $u$ is a solution of $(-\Delta)^s u =g$ in $\Omega$, $u\equiv0$ in $\mathbb R^n\backslash\Omega$, for some $s\in(0,1)$ and $g\in L^\infty(\Omega)$, then $u$ is $C^s(\mathbb{R}^n)$ and $u/\delta^s|_{\Omega}$ is $C^{\alpha}$ up to the boundary $\partial\Omega$ for some $\alpha\in(0,1)$, where $\delta(x)={\rm dist}(x,\partial\Omega)$. For this, we develop a fractional analog of the Krylov boundary Harnack method. Moreover, under further regularity assumptions on $g$ we obtain higher order H\"older estimates for $u$ and $u/\delta^s$. Namely, the $C^\beta$ norms of $u$ and $u/\delta^s$ in the sets $\{x\in\Omega:\delta(x)\geq\rho\}$ are controlled by $C\rho^{s-\beta}$ and $C\rho^{\alpha-\beta}$, respectively. These regularity results are crucial tools in our proof of the Pohozaev identity for the fractional Laplacian \cite{RS-CRAS,RS}. 
\end{abstract} \section{Introduction and results} Let $s\in(0,1)$ and $g\in L^\infty(\Omega)$, and consider the fractional elliptic problem \begin{equation}\label{eqlin} \left\{ \begin{array}{rcll} (-\Delta)^s u &=&g&\textrm{in }\Omega \\ u&=&0&\textrm{in }\mathbb R^n\backslash\Omega,\end{array}\right. \end{equation} in a bounded domain $\Omega\subset\mathbb R^n$, where \begin{equation} \label{laps}(-\Delta)^s u (x)= c_{n,s}{\rm PV}\int_{\mathbb{R}^n}\frac{u(x)-u(y)}{|x-y|^{n+2s}}dy \end{equation} and $c_{n,s}$ is a normalization constant. Problem \eqref{eqlin} is the Dirichlet problem for the fractional Laplacian. There are classical results in the literature dealing with the interior regularity of $s$-harmonic functions, or more generally for equations of the type \eqref{eqlin}. However, there are few results on regularity up to the boundary. This is the topic of study of the paper. Our main result establishes the H\"older regularity up to the boundary $\partial\Omega$ of the function $u/\delta^s|_\Omega$, where \[\delta(x)={\rm dist}(x,\partial\Omega).\] For this, we develop an analog of the Krylov \cite{Krylov} boundary Harnack method for problem \eqref{eqlin}. As in Krylov's work, our proof applies also to operators with ``bounded measurable coefficients'', more precisely those of the type \eqref{bmc}. This will be treated in a future work \cite{RS-K}. In this paper we only consider the constant coefficient operator $(-\Delta)^s$, since in this case we can establish more precise regularity results. Most of them will be needed in our subsequent work \cite{RS}, where we find and prove the Pohozaev identity for the fractional Laplacian, announced in \cite{RS-CRAS}. For \eqref{eqlin}, in addition to the H\"older regularity up to the boundary for $u/\delta^s$, we prove that any solution $u$ is $C^s(\mathbb{R}^n)$. Moreover, when $g$ is not only bounded but H\"older continuous, we obtain better interior H\"older estimates for $u$ and $u/\delta^s$. 
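The definition \eqref{laps} can be checked numerically. The following is a hedged sketch (ours, not part of the paper): in dimension $n=1$ with $s=1/2$ we use the standard symmetrized form $(-\Delta)^s u(x)=\tfrac{c_{n,s}}{2}\int_{\mathbb{R}}\frac{2u(x)-u(x+y)-u(x-y)}{|y|^{n+2s}}\,dy$, which is equivalent to the principal value formula, and the commonly used normalization $c_{1,1/2}=1/\pi$ (an assumption here, since the paper leaves $c_{n,s}$ unspecified). The function $u(x)=(1-x^2)^{1/2}_{+}$ is known to solve $(-\Delta)^{1/2}u=1$ in $(-1,1)$ (this explicit ball solution is recalled later in the Introduction), so the quadrature should return a value close to $1$ at interior points.

```python
import math

# Hedged numerical sketch (ours, not from the paper): evaluate the
# symmetrized singular integral defining (-Delta)^{1/2} in 1D, with the
# assumed normalization c_{1,1/2} = 1/pi, on u(x) = (1 - x^2)^{1/2}_+.

def u(x):
    return math.sqrt(max(1.0 - x * x, 0.0))

def frac_lap_half(x, N=100_000):
    """(-Delta)^{1/2} u at an interior point x of (-1,1): midpoint
    quadrature on 0 < y <= 1 + |x|, plus the exact tail contribution
    for y > 1 + |x|, where u(x + y) = u(x - y) = 0."""
    Y = 1.0 + abs(x)
    h = Y / N
    total = 0.0
    for k in range(N):
        y = (k + 0.5) * h
        total += (2.0 * u(x) - u(x + y) - u(x - y)) / (y * y) * h
    total += 2.0 * u(x) / Y      # exact tail: integral of 2u(x)/y^2 over y > Y
    return total / math.pi       # factor c_{1,1/2}, both half-lines combined

print(frac_lap_half(0.0))   # close to 1
print(frac_lap_half(0.5))   # close to 1
```

The same check can be run at other interior points; the accuracy is limited only by the square-root behavior of $u$ near $\pm1$, consistent with the $C^s$ boundary regularity discussed below.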
The Dirichlet problem for the fractional Laplacian \eqref{eqlin} has been studied from the point of view of probability, potential theory, and PDEs. The closest result to the one in our paper is that of Bogdan \cite{B}, establishing a boundary Harnack inequality for nonnegative $s$-harmonic functions. It will be described in more detail later on in the Introduction (in relation with Theorem \ref{thm:v-is-Calpha}). Related regularity results up to the boundary have been proved in \cite{KL} and \cite{CRS}. In \cite{KL} it is proved that $u/\delta^s$ has a limit at every boundary point when $u$ solves the homogeneous fractional heat equation. The same is proven in \cite{CRS} for a free boundary problem for the fractional Laplacian. Some other results dealing with various aspects concerning the Dirichlet problem are the following: estimates for the heat kernel (of the parabolic version of this problem) and for the Green function, e.g., \cite{BGR,CKS}; an explicit expression of the Poisson kernel for a ball \cite{L}; and the explicit solution to problem \eqref{eqlin} in a ball for $g\equiv1$ \cite{G}. In addition, the interior regularity theory for viscosity solutions to nonlocal equations with ``bounded measurable coefficients'' is developed in \cite{CS}. The first result of this paper gives the optimal H\"older regularity for a solution $u$ of \eqref{eqlin}. The proof, which is given in Section \ref{sec3}, is based on two ingredients: a suitable upper barrier, and the interior regularity results for the fractional Laplacian. Given $g\in L^\infty(\Omega)$, we say that $u$ is a solution of \eqref{eqlin} when $u\in H^s(\mathbb{R}^n)$ is a weak solution (see Definition \ref{weak}). When $g$ is continuous, the notions of weak solution and of viscosity solution agree; see Remark \ref{remviscosity}. 
We recall that a domain $\Omega$ satisfies the exterior ball condition if there exists a positive radius $\rho_0$ such that all the points on $\partial \Omega$ can be touched by some exterior ball of radius $\rho_0$. \begin{prop}\label{prop:u-is-Cs} Let $\Omega$ be a bounded Lipschitz domain satisfying the exterior ball condition, $g\in L^\infty(\Omega)$, and $u$ be a solution of \eqref{eqlin}. Then, $u\in C^s(\mathbb{R}^n)$ and \[\|u\|_{C^{s}(\mathbb{R}^n)}\le C \|g\|_{L^\infty (\Omega)},\] where $C$ is a constant depending only on $\Omega$ and $s$. \end{prop} This $C^s$ regularity is optimal, in the sense that a solution to problem \eqref{eqlin} is not in general $C^\alpha$ for any $\alpha>s$. This can be seen by looking at the problem \begin{equation}\label{explicitsolution1} \left\{ \begin{array}{rcll} (-\Delta)^s u &=&1&\textrm{in }B_r(x_0) \\ u&=&0&\textrm{in }\mathbb R^n\backslash B_r(x_0),\end{array}\right.\end{equation} whose solution is explicit. For any $r>0$ and $x_0\in\mathbb{R}^n$, it is given by \cite{G,BGR} \begin{equation}\label{explicitsolution2} u(x)=\frac{2^{-2s}\Gamma(n/2)}{\Gamma\left(\frac{n+2s}{2}\right)\Gamma(1+s)} \left(r^2-|x-x_0|^2\right)^s\qquad\textrm{in}\ \ B_r(x_0). \end{equation} It is clear that this solution is $C^s$ up to the boundary but it is not $C^\alpha$ for any $\alpha>s$. Since solutions $u$ of \eqref{eqlin} are $C^s$ up to the boundary, and not better, it is of importance to study the regularity of $u/\delta^s$ up to $\partial\Omega$. For instance, our recent proof \cite{RS,RS-CRAS} of the Pohozaev identity for the fractional Laplacian uses in a crucial way that $u/\delta^s$ is H\"older continuous up to $\partial\Omega$. This is the main result of the present paper and it is stated next. For local equations of second order with bounded measurable coefficients and in non-divergence form, the analogous result is given by a theorem of N.
Krylov \cite{Krylov}, which states that $u/\delta$ is $C^\alpha$ up to the boundary for some $\alpha\in (0,1)$. This result is the key ingredient in the proof of the $C^{2,\alpha}$ boundary regularity of solutions to fully nonlinear elliptic equations $F(D^2u)=0$ ---see \cite{Kazdan,CaffC}. For our nonlocal equation \eqref{eqlin}, the corresponding result is the following. \begin{thm}\label{thm:v-is-Calpha} Let $\Omega$ be a bounded $C^{1,1}$ domain, $g\in L^\infty(\Omega)$, $u$ be a solution of \eqref{eqlin}, and $\delta(x)={\rm dist}(x,\partial\Omega)$. Then, $u/\delta^s|_\Omega$ can be continuously extended to $\overline\Omega$. Moreover, we have $u/\delta^s\in C^{\alpha}(\overline{\Omega})$ and \[ \|u/\delta^s\|_{C^{\alpha}(\overline \Omega)}\le C \|g\|_{L^\infty(\Omega)}\] for some $\alpha>0$ satisfying $\alpha<\min\{s,1-s\}$. The constants $\alpha$ and $C$ depend only on $\Omega$ and $s$. \end{thm} To prove this result we use the method of Krylov (see \cite{Kazdan}). It consists of trapping the solution between two multiples of $\delta^s$ in order to control the oscillation of the quotient $u/\delta^s$ near the boundary. For this, we need to prove, among other things, that $(-\Delta)^s\delta_0^s$ is bounded in $\Omega$, where $\delta_0(x)={\rm dist}(x,\mathbb{R}^n\setminus\Omega)$ is the distance function in $\Omega$ extended by zero outside. This will be guaranteed by the assumption that $\Omega$ is $C^{1,1}$. To our knowledge, the only previous results dealing with the regularity up to the boundary for solutions to \eqref{eqlin} or its parabolic version were the ones by K. Bogdan \cite{B} and S. Kim and K. Lee \cite{KL}. 
The first one \cite{B} is the boundary Harnack principle for nonnegative $s$-harmonic functions, which reads as follows: assume that $u$ and $v$ are two nonnegative functions in a Lipschitz domain $\Omega$, which satisfy $(-\Delta)^su\equiv0$ and $(-\Delta)^sv\equiv0$ in $\Omega\cap B_r(x_0)$ for some ball $B_r(x_0)$ centered at $x_0\in \partial\Omega$. Assume also that $u\equiv v\equiv 0$ in $B_r(x_0)\setminus\Omega$. Then, the quotient $u/v$ is $C^\alpha(\overline{B_{r/2}(x_0)})$ for some $\alpha\in(0,1)$. In \cite{BKK} the same result is proven in open domains $\Omega$, without any regularity assumption. While the result in \cite{BKK} assumes no regularity on the domain, we need to assume $\Omega$ to be $C^{1,1}$. This assumption is needed to compare the solutions with the function $\delta^s$. As a counterpart, we allow nonzero right hand sides $g\in L^\infty(\Omega)$ and also changing-sign solutions. In $C^{1,1}$ domains, our results in Section \ref{sec4} (which are local near any boundary point) extend Bogdan's result. For instance, assume that $u$ and $v$ satisfy $(-\Delta)^s u = g$ and $(-\Delta)^s v = h$ in $\Omega$, $u\equiv v\equiv 0$ in $\mathbb{R}^n\setminus \Omega$, and that $h$ is positive in $\Omega$. Then, by Theorem \ref{thm:v-is-Calpha} we have that $u/\delta^s$ and $v/\delta^s$ are $C^\alpha(\overline{\Omega})$ functions. In addition, by the Hopf lemma for the fractional Laplacian we find that $v/\delta^s\ge c>0$ in $\Omega$. Hence, we obtain that the quotient $u/v$ is $C^\alpha$ up to the boundary, as in Bogdan's result for $s$-harmonic functions. As in Krylov's result, our method can be adapted to the case of nonlocal elliptic equations with ``bounded measurable coefficients''. 
Namely, in another paper \cite{RS-K} we will prove the boundary Harnack principle for solutions to $\mathcal{L}u=g$ in $\Omega$, $u\equiv0$ in $\mathbb{R}^n\setminus\Omega$, where $g\in L^\infty(\Omega)$, \begin{equation}\label{bmc} \mathcal{L}u(x)=\int_{\mathbb{R}^n}\frac{2u(x)-u(x+y)-u(x-y)}{\left|y^TA(x)y\right|^{\frac{n+2s}{2}}}dy, \end{equation} and $A(x)$ is a symmetric matrix, measurable in $x$, and with $0<\lambda{\rm Id}\leq A(x)\leq \Lambda{\rm Id}$. A second result (for the parabolic problem) related to ours is contained in \cite{KL}. The authors show that any solution of $\partial_t u + (-\Delta)^s u =0$ in $\Omega$, $u\equiv 0$ in $\mathbb{R}^n\setminus \Omega$, satisfies the following property: for any $t>0$ the function $u/\delta^s$ is continuous up to the boundary $\partial\Omega$. Our results were motivated by the study of nonlocal semilinear problems $(-\Delta)^su=f(u)$ in $\Omega$, $u\equiv0$ in $\mathbb{R}^n\setminus\Omega$, more specifically, by the Pohozaev identity that we establish in \cite{RS}. Its proof requires the precise regularity theory up to the boundary developed in the present paper (see Corollary \ref{krylov} below). Other works treating the fractional Dirichlet semilinear problem, which deal mainly with existence of solutions and symmetry properties, are \cite{SV,ServV,FW,BMW}. In the semilinear case, $g=f(u)$ and therefore $g$ automatically becomes more regular than just bounded. When $g$ has better regularity, the next two results improve the preceding ones. The proofs of these results require the use of the following weighted H\"older norms, a slight modification of the ones in Gilbarg-Trudinger \cite[Section 6.1]{GT}. Throughout the paper, and when no confusion is possible, we use the notation $C^\beta(U)$ with $\beta>0$ to refer to the space $C^{k,\beta'}(U)$, where $k$ is the greatest integer such that $k<\beta$ and where $\beta'=\beta-k$.
This notation is specially appropriate when we work with $(-\Delta)^s$ in order to avoid the splitting of different cases in the statements of regularity results. According to this, $[\,\cdot\,]_{C^{\beta}(U)}$ denotes the $C^{k,\beta'}(U)$ seminorm \[[u]_{C^\beta(U)}=[u]_{C^{k,\beta'}(U)}=\sup_{x,y\in U,\ x\neq y}\frac{|D^ku(x)-D^ku(y)|}{|x-y|^{\beta'}}.\] Moreover, given an open set $U\subset\mathbb{R}^n$ with $\partial U\neq\varnothing$, we will also denote \[d_x= \mathrm{dist}(x,\partial U) \qquad\mbox{and}\qquad d_{x,y}=\min\{d_x,d_y\}.\] \begin{defi}\label{definorm} Let $\beta>0$ and $\sigma\ge -\beta$. Let $\beta=k+\beta'$, with $k$ integer and $\beta'\in (0,1]$. For $w\in C^{\beta}(U)=C^{k,\beta'}(U)$, define the seminorm \[ [w]_{\beta;U}^{(\sigma)}= \sup_{x,y\in U} \biggl(d_{x,y}^{\beta+\sigma} \frac{|D^{k}w(x)-D^{k}w(y)|}{|x-y|^{\beta'}}\biggr).\] For $\sigma>-1$, we also define the norm $\|\,\cdot\,\|_{\beta;U}^{(\sigma)}$ as follows: in case that $\sigma\ge0$, \[ \|w\|_{\beta;U}^{(\sigma)} = \sum_{l=0}^k \sup_{x\in U} \biggl(d_x^{l+\sigma} |D^l w(x)|\biggr) + [w]_{\beta;U}^{(\sigma)}\,,\] while for $-1<\sigma<0$, \[\|w\|_{\beta;U}^{(\sigma)} = \|w\|_{C^{-\sigma}(\overline U)}+\sum_{l=1}^k \sup_{x\in U} \biggl(d_x^{l+\sigma} |D^l w(x)|\biggr) + [w]_{\beta;U}^{(\sigma)}.\] Note that $\sigma$ is the rescale order of the seminorm $[\,\cdot\,]_{\beta;U}^{(\sigma)}$, in the sense that $[w(\lambda\cdot)]_{\beta;U/\lambda}^{(\sigma)} = \lambda^\sigma[w]_{\beta;U}^{(\sigma)}$. \end{defi} When $g$ is H\"older continuous, the next result provides optimal estimates for higher order H\"older norms of $u$ up to the boundary. \begin{prop}\label{prop:int-est-u} Let $\Omega$ be a bounded domain, and $\beta>0$ be such that neither $\beta$ nor $\beta+2s$ is an integer. Let $g\in C^\beta(\Omega)$ be such that $\|g\|_{\beta;\Omega}^{(s)}<\infty$, and $u\in C^s(\mathbb{R}^n)$ be a solution of \eqref{eqlin}. 
Then, $u\in C^{\beta+2s}(\Omega)$ and \[\|u\|_{\beta+2s;\Omega}^{(-s)}\le C \bigl(\|u\|_{C^s(\mathbb{R}^n)}+\|g\|_{\beta;\Omega}^{(s)}\bigr),\] where $C$ is a constant depending only on $\Omega$, $s$, and $\beta$. \end{prop} Next, the H\"older regularity up to the boundary of $u/\delta^s$ in Theorem \ref{thm:v-is-Calpha} can be improved when $g$ is H\"older continuous. This is stated in the following theorem, whose proof uses a nonlocal equation satisfied by the quotient $u/\delta^s$ in $\Omega$ ---see \eqref{equaciov}--- and the fact that this quotient is $C^\alpha(\overline\Omega)$. \begin{thm}\label{thm:int-est-v} Let $\Omega$ be a bounded $C^{1,1}$ domain, and let $\alpha\in(0,1)$ be given by Theorem \ref{thm:v-is-Calpha}. Let $g\in L^\infty(\Omega)$ be such that $\|g\|_{\alpha;\Omega}^{(s-\alpha)}<\infty$, and $u$ be a solution of \eqref{eqlin}. Then, $u/\delta^s\in C^\alpha(\overline\Omega)\cap C^\gamma(\Omega)$ and \[ \|u/\delta^s\|_{\gamma;\Omega}^{(-\alpha)}\le C \bigl(\|g\|_{L^\infty(\Omega)}+\|g\|_{\alpha;\Omega}^{(s-\alpha)}\bigr),\] where $\gamma=\min\{1,\alpha+2s\}$ and $C$ is a constant depending only on $\Omega$ and $s$. \end{thm} Finally, we apply the previous results to the semilinear problem \begin{equation}\label{eqnonlin} \left\{ \begin{array}{rcll} (-\Delta)^s u &=&f(x,u)&\textrm{in }\Omega \\ u&=&0&\textrm{on }\mathbb R^n\backslash\Omega,\end{array}\right. \end{equation} where $\Omega$ is a bounded $C^{1,1}$ domain and $f$ is a Lipschitz nonlinearity. In the following result, the meaning of ``bounded solution'' is that of ``bounded weak solution'' (see definition \ref{weak}) or that of ``viscosity solution''. By Remark \ref{remviscosity}, these two notions coincide. Also, by $f\in C^{0,1}_{\rm loc}(\overline\Omega\times\mathbb{R})$ we mean that $f$ is Lipschitz in every compact subset of $\overline\Omega\times\mathbb{R}$. 
\begin{cor}\label{krylov} Let $\Omega$ be a bounded and $C^{1,1}$ domain, $f\in C^{0,1}_{\rm loc}(\overline\Omega\times\mathbb{R})$, $u$ be a bounded solution of \eqref{eqnonlin}, and $\delta(x)={\rm dist}(x,\partial\Omega)$. Then, \begin{itemize} \item[(a)] $u\in C^s(\mathbb{R}^n)$ and, for every $\beta\in[s,1+2s)$, $u$ is of class $C^{\beta}(\Omega)$ and \[[u]_{C^{\beta}(\{x\in\Omega\,:\,\delta(x)\ge\rho\})}\le C \rho^{s-\beta}\qquad \textrm{for all}\ \ \rho\in(0,1).\] \item[(b)] The function $u/\delta^s|_\Omega$ can be continuously extended to $\overline\Omega$. Moreover, there exists $\alpha\in(0,1)$ such that $u/\delta^s\in C^{\alpha}(\overline{\Omega})$. In addition, for all $\beta\in[\alpha,s+\alpha]$, the following estimate holds: \[ [u/\delta^s]_{C^{\beta}(\{x\in\Omega\,:\,\delta(x)\ge\rho\})}\le C \rho^{\alpha-\beta}\qquad \textrm{for all}\ \ \rho\in(0,1).\] \end{itemize} The constants $\alpha$ and $C$ depend only on $\Omega$, $s$, $f$, $\|u\|_{L^{\infty}(\mathbb{R}^n)}$, and $\beta$. \end{cor} The paper is organized as follows. In Section \ref{sec3} we prove Propositions \ref{prop:u-is-Cs} and \ref{prop:int-est-u}. In Section \ref{sec4} we prove Theorem \ref{thm:v-is-Calpha} using the Krylov method. In Section \ref{sec5} we prove Theorem \ref{thm:int-est-v} and Corollary \ref{krylov}. Finally, the Appendix deals with some basic tools and barriers which are used throughout the paper. \section{Optimal H\"older regularity for $u$} \label{sec3} In this section we prove that, assuming $\Omega$ to be a bounded Lipschitz domain satisfying the exterior ball condition, every solution $u$ of \eqref{eqlin} belongs to $C^s(\mathbb{R}^n)$. For this, we first establish that $u$ is $C^{\beta}$ in $\Omega$, for all $\beta\in(0,2s)$, and sharp bounds for the corresponding seminorms near $\partial\Omega$. These bounds yield $u\in C^s(\mathbb{R}^n)$ as a corollary. First, we make precise the notion of weak solution to problem \eqref{eqlin}.
\begin{defi}\label{weak} We say that $u$ is a weak solution of \eqref{eqlin} if $u\in H^s(\mathbb{R}^n)$, $u\equiv 0$ (a.e.) in $\mathbb{R}^n\setminus\Omega$, and \[\int_{\mathbb R^n}(-\Delta)^{s/2}u(-\Delta)^{s/2}v\,dx=\int_\Omega gv\,dx\] for all $v\in H^s(\mathbb{R}^n)$ such that $v\equiv0$ in $\mathbb{R}^n\setminus\Omega$. \end{defi} We recall first some well known interior regularity results for linear equations involving the operator $(-\Delta)^s$, defined by \eqref{laps}. The first one states that $w\in C^{\beta+2s}(\overline{B_{1/2}})$ whenever $w\in C^\beta(\mathbb{R}^n)$ and $(-\Delta)^sw\in C^\beta(\overline{B_1})$. Recall that, throughout this section and in all the paper, we denote by $C^\beta$, with $\beta>0$, the space $C^{k,\beta'}$, where $k$ is an integer, $\beta'\in(0,1]$, and $\beta=k+\beta'$. \begin{prop}\label{prop-reg-lin-2} Assume that $w\in C^\infty (\mathbb{R}^n)$ solves $(-\Delta)^s w = h$ in $B_1$ and that neither $\beta$ nor $\beta+2s$ is an integer. Then, \[ \|w\|_{C^{\beta+2s}(\overline{B_{1/2}})} \le C\bigl( \|w\|_{C^\beta(\mathbb{R}^n)}+ \|h\|_{C^\beta(\overline{B_1})}\bigr)\,,\] where $C$ is a constant depending only on $n$, $s$, and $\beta$. \end{prop} \begin{proof} Follow the proof of Proposition 2.1.8 in \cite{S}, where the same result is proved with $B_1$ and $B_{1/2}$ replaced by the whole $\mathbb{R}^n$. \end{proof} The second result states that $w\in C^{\beta}(\overline{B_{1/2}})$ for each $\beta\in(0,2s)$ whenever $w\in L^\infty(\mathbb{R}^n)$ and $(-\Delta)^sw\in L^\infty({B_1})$. \begin{prop} \label{prop-reg-lin-1} Assume that $w\in C^\infty (\mathbb{R}^n)$ solves $(-\Delta)^s w = h$ in $B_1$. Then, for every $\beta\in(0,2s)$, \[ \|w\|_{C^{\beta}(\overline{B_{1/2}})} \le C\bigl( \|w\|_{L^\infty(\mathbb{R}^n)}+ \|h\|_{L^\infty(B_1)}\bigr)\,,\] where $C$ is a constant depending only on $n$, $s$, and $\beta$. 
\end{prop} \begin{proof} Follow the proof of Proposition 2.1.9 in \cite{S}, where the same result is proved in the whole $\mathbb{R}^n$. \end{proof} The third result is the analog of the first, with the difference that it does not need to assume $w\in C^\beta(\mathbb{R}^n)$, but only $w\in C^\beta(\overline{B_2})$ and $(1+|x|)^{-n-2s}w(x)\in L^1(\mathbb{R}^n)$. \begin{cor}\label{int-est-brick} Assume that $w\in C^{\infty}(\mathbb{R}^n)$ is a solution of $(-\Delta)^s w = h$ in $B_2$, and that neither $\beta$ nor $\beta+2s$ is an integer. Then, \[ \|w\|_{C^{\beta+2s}(\overline{B_{1/2}})} \le C\biggl(\|(1+|x|)^{-n-2s}w(x)\|_{L^1(\mathbb{R}^n)} + \|w\|_{C^{\beta}(\overline {B_2})} + \|h\|_{C^{\beta}(\overline{B_2})} \biggr)\] where the constant $C$ depends only on $n$, $s$, and $\beta$. \end{cor} \begin{proof} Let $\eta\in C^\infty(\mathbb{R}^n)$ be such that $\eta\equiv 0$ outside $B_2$ and $\eta\equiv 1$ in $B_{3/2}$. Then $\tilde w:= w\eta\in C^{\infty}(\mathbb{R}^n)$ and $(-\Delta)^s \tilde w = \tilde h := h - (-\Delta)^{s} \bigl(w(1-\eta)\bigr)$. Note that for $x\in B_{3/2}$ we have \[(-\Delta)^s \left(w(1-\eta)\right)(x) = c_{n,s}\int_{\mathbb{R}^n\setminus{B_{3/2}}}\frac{ - \bigl(w(1-\eta)\bigr)(y)}{|x-y|^{n+2s}}dy.\] From this expression we obtain that \[ \|(-\Delta)^s \left(w(1-\eta)\right) \|_{L^\infty({B_1})}\le C \|(1+|y|)^{-n-2s} w(y)\|_{L^1(\mathbb{R}^n)}\] and for all $\gamma\in(0,\beta]$, \[ \begin{split} [(-\Delta)^s \left(w(1-\eta)\right) ]_{C^\gamma(\overline{B_1})}&\leq C \|(1+|y|)^{-n-2s-\gamma} w(y)\|_{L^1(\mathbb{R}^n)}\\&\le C \|(1+|y|)^{-n-2s} w(y)\|_{L^1(\mathbb{R}^n)} \end{split} \] for some constant $C$ that depends only on $n$, $s$, $\beta$, and $\eta$. Therefore \[ \|\tilde h\|_{C^{\beta}(\overline {B_{1}})}\le C \bigl(\|h\|_{C^{\beta}(\overline{B_2})} + \|(1+|x|)^{-n-2s} w(x)\|_{L^1(\mathbb{R}^n)}\bigr), \] while we also clearly have \[ \|\tilde w\|_{C^{\beta}(\mathbb{R}^n)}\le C\|w\|_{C^{\beta}(\overline{B_2})}\,.
\] The constants $C$ depend only on $n$, $s$, $\beta$ and $\eta$. Now, we finish the proof by applying Proposition \ref{prop-reg-lin-2} with $w$ replaced by $\tilde w$. \end{proof} Finally, the fourth result is the analog of the second one, but instead of assuming $w\in L^\infty(\mathbb{R}^n)$, it only assumes $w\in L^\infty(B_2)$ and $(1+|x|)^{-n-2s}w(x)\in L^1(\mathbb{R}^n)$. \begin{cor}\label{int-est-brick2} Assume that $w\in C^\infty(\mathbb{R}^n)$ is a solution of $(-\Delta)^s w = h$ in $B_2$. Then, for every $\beta\in(0,2s)$, \[ \|w\|_{C^{\beta}(\overline{B_{1/2}})} \le C\biggl(\|(1+|x|)^{-n-2s}w(x)\|_{L^1(\mathbb{R}^n)} + \|w\|_{L^\infty(B_2)} + \|h\|_{L^\infty(B_2)} \biggr)\] where the constant $C$ depends only on $n$, $s$, and $\beta$. \end{cor} \begin{proof} Analogous to the proof of Corollary \ref{int-est-brick}. \end{proof} As a consequence of the previous results we next prove that every solution $u$ of \eqref{eqlin} is $C^s(\mathbb{R}^n)$. First let us find an explicit upper barrier for $|u|$ to prove that $|u|\le C \delta^s$ in $\Omega$. This is the first step towards obtaining the $C^s$ regularity. To construct this barrier we will need the following result, which is proved in the Appendix. \begin{lem}[Supersolution]\label{prop:supersolution} There exist $C_1>0$ and a radial continuous function $\varphi_1\in H^s_{\rm loc}(\mathbb{R}^n)$ satisfying \begin{equation}\label{eq:propsupersol} \begin{cases} (-\Delta)^s \varphi_1 \ge 1 &\mbox{in }B_4\setminus B_1\\ \varphi_1 \equiv 0 \quad &\mbox{in }B_1 \\ 0\le\varphi_1 \le C_1(|x|-1)^s &\mbox{in }B_4\setminus B_1\\ 1\le \varphi_1 \le C_1 &\mbox{in }\mathbb{R}^n\setminus B_4\,. \end{cases} \end{equation} \end{lem} The upper barrier for $|u|$ will be constructed by scaling and translating the supersolution from Lemma \ref{prop:supersolution}. The conclusion of this barrier argument is the following.
\begin{lem}\label{lem-aux-Cs} Let $\Omega$ be a bounded domain satisfying the exterior ball condition and let $g\in L^\infty(\Omega)$. Let $u$ be the solution of \eqref{eqlin}. Then, \[ |u(x)| \le C\|g\|_{L^{\infty}(\Omega)} \delta^s(x)\quad \mbox{for all }x\in\Omega\,,\] where $C$ is a constant depending only on $\Omega$ and $s$. \end{lem} In the proof of Lemma \ref{lem-aux-Cs} we will use the following \begin{claim}\label{Linftybound} Let $\Omega$ be a bounded domain and let $g\in L^\infty(\Omega)$. Let $u$ be the solution of \eqref{eqlin}. Then, \[ \|u\|_{L^\infty(\mathbb{R}^n)} \le C ({\rm diam}\,\Omega)^{2s}\|g\|_{L^{\infty}(\Omega)} \] where $C$ is a constant depending only on $n$ and $s$. \end{claim} \begin{proof} The domain $\Omega$ is contained in a large ball of radius ${\rm diam}\,\Omega$. Then, by scaling the explicit (super)solution for the ball given by \eqref{explicitsolution2} we obtain the desired bound. \end{proof} We next give the \begin{proof}[Proof of Lemma \ref{lem-aux-Cs}] Since $\Omega$ satisfies the exterior ball condition, there exists $\rho_0>0$ such that every point of $\partial\Omega$ can be touched from outside by a ball of radius $\rho_0$. Then, by scaling and translating the supersolution $\varphi_1$ from Lemma \ref{prop:supersolution}, for each of these exterior tangent balls $B_{\rho_0}$ we find an upper barrier in $B_{2\rho_0}\setminus B_{\rho_0}$ vanishing in $\overline{B_{\rho_0}}$. This yields the bound $u\le C\|g\|_{L^{\infty}(\Omega)}\,\delta^s$ in a $\rho_0$-neighborhood of $\partial\Omega$. By using Claim \ref{Linftybound} we have the same bound in all of $\overline\Omega$. Repeating the same argument with $-u$ we find $|u|\le C\|g\|_{L^{\infty}(\Omega)}\,\delta^s$, as wanted. \end{proof} The following lemma gives interior estimates for $u$ and yields, as a corollary, that every bounded weak solution $u$ of \eqref{eqlin} in a $C^{1,1}$ domain is $C^s(\mathbb{R}^n)$.
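As a numerical sanity check of the explicit solution for the ball invoked in the proof of Claim \ref{Linftybound} (this sketch is not part of the paper's argument; it assumes that \eqref{explicitsolution2} is, up to normalization, Getoor's solution $(1-|x|^2)_+^s$, and the function names below are ours), one can verify in dimension $n=1$ with $s=1/2$ that $u(x)=(1-x^2)_+^{1/2}$ satisfies $(-\Delta)^{1/2}u=1$ in $(-1,1)$, approximating the principal-value integral defining $(-\Delta)^s$ with the constant $c_{1,1/2}=1/\pi$ (SciPy is assumed available):

```python
# Sketch of a numerical check (not from the paper): for n = 1, s = 1/2,
# Getoor's explicit solution u(x) = (1 - x^2)_+^{1/2} satisfies
# (-Delta)^{1/2} u = 1 in (-1, 1).  We use the symmetrized integrand
# (2u(x) - u(x+z) - u(x-z)) / z^2, which is bounded near z = 0, so the
# principal value disappears; c_{1,1/2} = 1/pi.
import numpy as np
from scipy.integrate import quad

def u(y):
    return np.sqrt(max(1.0 - y * y, 0.0))

def frac_lap_half(x, eps=1e-4):
    # Truncating at z = eps only loses O(eps), since the integrand is bounded.
    f = lambda z: (2.0 * u(x) - u(x + z) - u(x - z)) / (z * z)
    kinks = sorted({1.0 - x, 1.0 + x})   # sqrt-type kinks where x +/- z hits the boundary
    inner, _ = quad(f, eps, 4.0, points=kinks, limit=200)
    outer, _ = quad(f, 4.0, np.inf)      # here u(x + z) = u(x - z) = 0: smooth tail
    return (inner + outer) / np.pi
```

For any $x\in(-1,1)$ away from the endpoints, `frac_lap_half(x)` should return a value close to $1$, with an error of order `eps` coming from the truncation near $z=0$.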
\begin{lem}\label{lem-sarp-Cs-bounds-u} Let $\Omega$ be a bounded domain satisfying the exterior ball condition, $g\in L^\infty(\Omega)$, and $u$ be the solution of \eqref{eqlin}. Then, $u \in C^\beta(\Omega)$ for all $\beta\in (0,2s)$ and for all $x_0\in \Omega$ we have the following seminorm estimate in $B_R(x_0)=B_{\delta(x_0)/2}(x_0)$: \begin{equation}\label{first-seminorm-estimate-u} [u]_{C^\beta(\overline{B_{R}(x_0)})}\le C R^{s-\beta}\|g\|_{L^{\infty}(\Omega)}, \end{equation} where $C$ is a constant depending only on $\Omega$, $s$, and $\beta$. \end{lem} \begin{proof} Recall that if $u$ solves \eqref{eqlin} in the weak sense and $\eta_\epsilon$ is the standard mollifier then $(-\Delta)^s(u\ast \eta_\epsilon)=g\ast \eta_\epsilon$ in $B_R$ for $\epsilon$ small enough. Hence, we can regularize $u$, obtain the estimates, and then pass to the limit. In this way we may assume that $u$ is smooth. Note that $B_R(x_0)\subset B_{2R}(x_0)\subset \Omega$. Let $\tilde u(y)= u(x_0+Ry)$. We have that \begin{equation}\label{Rs1} (-\Delta)^s \tilde u(y) = R^{2s} g(x_0+Ry)\quad \mbox{in } B_1\,. \end{equation} Furthermore, using that $|u| \le C\bigl(\|u\|_{L^{\infty}(\mathbb{R}^n)}+ \|g\|_{L^{\infty}(\Omega)}\bigr) \delta^s$ in $\Omega$ ---by Lemma \ref{lem-aux-Cs}--- we obtain \begin{equation}\label{Rs2} \|\tilde u\|_{L^\infty(B_1)}\le C\bigl(\|u\|_{L^{\infty}(\mathbb{R}^n)}+ \|g\|_{L^{\infty}(\Omega)}\bigr) R^s \end{equation} and, observing that $|\tilde u(y)|\le C\bigl(\|u\|_{L^{\infty}(\mathbb{R}^n)}+ \|g\|_{L^{\infty}(\Omega)}\bigr) R^s(1+|y|^s)$ in all of $\mathbb{R}^n$, \begin{equation}\label{Rs3} \|(1+|y|)^{-n-2s}\tilde u(y)\|_{L^1(\mathbb{R}^n)}\le C \bigl(\|u\|_{L^{\infty}(\mathbb{R}^n)}+ \|g\|_{L^{\infty}(\Omega)}\bigr) R^s, \end{equation} with $C$ depending only on $\Omega$ and $s$. 
Next we use Corollary \ref{int-est-brick2}, which, taking into account \eqref{Rs1}, \eqref{Rs2}, and \eqref{Rs3}, yields \[ \|\tilde u\|_{C^{\beta}\left(\overline{B_{1/4}}\right)} \le C \bigl(\|u\|_{L^{\infty}(\mathbb{R}^n)}+ \|g\|_{L^{\infty}(\Omega)}\bigr)R^s \] for all $\beta\in(0,2s)$, where $C=C(\Omega,s,\beta)$. Finally, we observe that \[[u]_{C^\beta\left(\overline{B_{R/4}(x_0)}\right)}=R^{-\beta}[\tilde u]_{C^\beta\left(\overline{B_{1/4}}\right)}.\] Hence, by a standard covering argument, we find the estimate \eqref{first-seminorm-estimate-u} for the $C^\beta$ seminorm of $u$ in $\overline {B_{R}(x_0)}$. \end{proof} We now prove the $C^s$ regularity of $u$. \begin{proof}[Proof of Proposition \ref{prop:u-is-Cs}] By Lemma \ref{lem-sarp-Cs-bounds-u}, taking $\beta=s$ we obtain \begin{equation}\label{cotaCsenboles} \frac{|u(x)-u(y)|}{|x-y|^s}\leq C\bigl( \|u\|_{L^\infty(\mathbb{R}^n)}+ \|g\|_{L^\infty(\Omega)}\bigr) \end{equation} for all $x,y$ such that $y\in B_R(x)$ with $R=\delta(x)/2$. We want to show that \eqref{cotaCsenboles} holds, perhaps with a bigger constant $C= C(\Omega,s)$, for all $x,y\in \overline\Omega$, and hence for all $x,y\in \mathbb{R}^n$ (since $u\equiv 0$ outside $\Omega$). Indeed, observe that after a Lipschitz change of coordinates, the bound \eqref{cotaCsenboles} remains the same except for the value of the constant $C$. Hence, we can flatten the boundary near $x_0\in\partial\Omega$ to assume that $\Omega\cap B_{\rho_0}(x_0)=\{x_n>0\}\cap B_1(0)$. Now, \eqref{cotaCsenboles} holds for all $x,y$ satisfying $|x-y|\le \gamma x_n $ for some $\gamma =\gamma(\Omega)\in (0,1)$ depending on the Lipschitz map. Next, let $z=(z',z_n)$ and $w=(w',w_n)$ be two points in $\{x_n>0\}\cap B_{1/4}(0)$, and $r=|z-w|$. Let us define $\bar z=(z',z_n+r)$ and $\bar w=(w',w_n+r)$, and set $z_k = (1-\gamma^k) z + \gamma^k\bar z$ and $w_k = (1-\gamma^k) w + \gamma^k\bar w$, $k\ge0$.
Then, using that bound \eqref{cotaCsenboles} holds whenever $|x-y|\le \gamma x_n $, we have \[|u(z_{k+1})-u(z_{k})|\le C|z_{k+1}-z_k|^s = C\bigl(\gamma^{k}(1-\gamma)|z-\bar z|\bigr)^s\leq C\gamma^{ks}r^s. \] Moreover, since $x_n\ge r$ along the whole segment joining $\bar z$ and $\bar w$, splitting this segment into a bounded number of segments of length less than $\gamma r$, we obtain \[|u(\bar z)-u(\bar w)|\leq C|\bar z-\bar w|^s\leq Cr^s.\] Therefore, \[ \begin{split}|u(z)-u(w)|&\leq \sum_{k\ge 0} |u(z_{k+1})-u(z_{k})|+|u(\bar z)-u(\bar w)|+\sum_{k\geq0} |u(w_{k+1})-u(w_{k})|\\ &\leq \left( C \sum_{k\ge 0} \bigl(\gamma^{k} r\bigr)^s + C r^s \right) \bigl( \|u\|_{L^\infty(\mathbb{R}^n)}+ \|g\|_{L^\infty(\Omega)}\bigr)\\ &\le C\bigl( \|u\|_{L^\infty(\mathbb{R}^n)}+ \|g\|_{L^\infty(\Omega)}\bigr) |z-w|^s, \end{split} \] as wanted. \end{proof} The following lemma is similar to Proposition \ref{prop-reg-lin-2} but it involves the weighted norms introduced above. It will be used to prove Proposition \ref{prop:int-est-u} and Theorem \ref{thm:int-est-v}. \begin{lem} \label{refined-2s-gain} Let $s$ and $\alpha$ belong to $(0,1)$, and $\beta>0$. Let $U$ be an open set with nonempty boundary. Assume that neither $\beta$ nor $\beta+2s$ is an integer, and $\alpha<2s$. Then, \begin{equation}\label{eq:2s-derivatives-more} \|w\|_{\beta+2s;U}^{(-\alpha)}\le C\biggl( \|w\|_{C^{\alpha}(\mathbb{R}^n)}+ \|(-\Delta)^s w\|_{\beta;U}^{(2s-\alpha)}\biggr) \end{equation} for all $w$ with finite right hand side. The constant $C$ depends only on $n$, $s$, $\alpha$, and $\beta$. \end{lem} \begin{proof} \emph{Step 1.} We first control the $C^{\beta+2s}$ norm of $w$ in balls $B_R(x_0)$ with $R=d_{x_0}/2$. Let $x_0\in U$ and $R=d_{x_0}/2$.
Define $\tilde w(y)= w(x_0+Ry)-w(x_0)$ and note that \[ \|\tilde w\|_{C^\alpha(B_1)}\le R^\alpha [w]_{C^{\alpha}(\mathbb{R}^n)}\] and \[ \|(1+|y|)^{-n-2s} \tilde w(y)\|_{L^1(\mathbb{R}^n)} \le C(n,s) R^\alpha [w]_{C^{\alpha}(\mathbb{R}^n)}.\] This is because \[|\tilde w(y)| = |w(x_0+Ry)-w(x_0)|\le R^\alpha |y|^\alpha [w]_{C^{\alpha}(\mathbb{R}^n)}\] and $\alpha< 2s$. Note also that \[ \|(-\Delta)^s\tilde w\|_{C^\beta(\overline{B_1})}= R^{2s+\beta}\|(-\Delta)^s w\|_{C^\beta(\overline{B_R(x_0)})} \le R^\alpha\|(-\Delta)^s w\|_{\beta;U}^{(2s-\alpha)}\,.\] Therefore, using Corollary \ref{int-est-brick} we obtain that \[ \|\tilde w\|_{C^{\beta+2s}(\overline{B_{1/2}})} \le C R^\alpha\bigl([w]_{C^{\alpha}(\mathbb{R}^n)}+ \|(-\Delta)^s w\|_{\beta;U}^{(2s-\alpha)}\bigr),\] where the constant $C$ depends only on $n$, $s$, $\alpha$, and $\beta$. Scaling back we obtain \begin{equation}\label{dos} \begin{split} \sum_{l=1}^k R^{l-\alpha} \|D^l w\|_{L^\infty(B_{R/2}(x_0))} + R^{2s+\beta-\alpha} [w]_{C^{\beta+2s}(\overline{B_{R/2}(x_0)})}\le \\ \le C\bigl(\|w\|_{C^{\alpha}(\mathbb{R}^n)}+ \|(-\Delta)^s w\|_{\beta;U}^{(2s-\alpha)}\bigr), \end{split} \end{equation} where $k$ denotes the greatest integer less than $\beta+2s$ and $C=C(n,s,\alpha,\beta)$. This bound holds, with the same constant $C$, for each ball $B_R(x_0)$, $x_0\in U$, where $R=d_{x_0}/2$. \emph{Step 2.} Next we claim that if \eqref{dos} holds for each ball $B_{d_{x}/2}(x)$, $x\in U$, then \eqref{eq:2s-derivatives-more} holds. It is clear that this already yields \begin{equation}\label{cotaambk} \sum_{l=1}^k \sup_{x\in U}\, d_x^{\,l-\alpha} |D^l w(x)|\le C\biggl( \|w\|_{C^{\alpha}(\mathbb{R}^n)}+ \|(-\Delta)^s w\|_{\beta;U}^{(2s-\alpha)}\biggr) \end{equation} where $k$ is the greatest integer less than $\beta+2s$. To prove this claim we only have to control $[w]_{\beta+2s;U}^{(-\alpha)}$ ---see Definition \ref{definorm}. Let $\gamma\in(0,1)$ be such that $\beta+2s=k+\gamma$.
We next bound \[\frac{|D^k w(x)- D^k w(y)|}{|x-y|^{\gamma}}\] when $d_x\geq d_y$ and $|x-y|\ge d_x/2$. This will yield the bound for $[w]_{\beta+2s;U}^{(-\alpha)}$, because if $|x-y|<d_x/2$ then $y\in B_{d_{x}/2}(x)$, and that case is done in Step 1. We proceed differently in the cases $k=0$ and $k\ge 1$. If $k=0$, then \[d_x^{\beta+2s-\alpha}\frac{|w(x)-w(y)|}{|x-y|^{2s+\beta}} =\left(\frac{d_x}{|x-y|}\right)^{\beta+2s-\alpha}\frac{|w(x)-w(y)|}{|x-y|^{\alpha}}\leq C\|w\|_{C^\alpha(\mathbb{R}^n)}.\] If $k\ge 1$, then \[d_x^{\beta+2s-\alpha}\frac{|D^k w(x)- D^k w(y)|}{|x-y|^{\gamma}}\le \biggl(\frac{d_x}{|x-y|}\biggr)^{\gamma} d_x^{\beta+2s-\alpha-\gamma}|D^k w(x)- D^k w(y)| \le C \|w\|_{k;U}^{(-\alpha)}\,,\] where we have used that $\beta+2s-\alpha-\gamma= k-\alpha$. Finally, noting that for $x\in B_R(x_0)$ we have $R\leq d_{x}\leq 3R$, \eqref{eq:2s-derivatives-more} follows from \eqref{dos}, \eqref{cotaambk} and the definition of the norm $\|w\|_{\beta+2s;U}^{(-\alpha)}$ in Definition \ref{definorm}. \end{proof} Finally, to end this section, we prove Proposition \ref{prop:int-est-u}. \begin{proof}[Proof of Proposition \ref{prop:int-est-u}] Set $\alpha=s$ in Lemma \ref{refined-2s-gain}. \end{proof} \begin{rem}\label{remviscosity} When $g$ is continuous, the notions of bounded weak solution and viscosity solution of \eqref{eqlin} ---and hence of \eqref{eqnonlin}--- coincide. Indeed, let $u\in H^s(\mathbb{R}^n)$ be a weak solution of \eqref{eqlin}. Then, from Proposition \ref{prop:u-is-Cs} it follows that $u$ is continuous up to the boundary. Let $u_{\varepsilon}$ and $g_{\varepsilon}$ be the standard regularizations of $u$ and $g$ by convolution with a mollifier. It is immediate to verify that, for $\varepsilon$ small enough, we have $(-\Delta)^s u_{\varepsilon}= g_\varepsilon$ in every subdomain $U\subset\subset\Omega$ in the classical sense.
Then, noting that $u_\varepsilon \to u$ and $g_\varepsilon \to g$ locally uniformly in $\Omega$, and applying the stability property for viscosity solutions \cite[Lemma 4.5]{CS}, we find that $u$ is a viscosity solution of \eqref{eqlin}. Conversely, every viscosity solution of \eqref{eqlin} is a weak solution. This follows from three facts: the existence of a weak solution, that this weak solution is a viscosity solution as shown before, and the uniqueness of viscosity solutions \cite[Theorem 5.2]{CS}. As a consequence of this, if $g$ is continuous, any viscosity solution of \eqref{eqlin} belongs to $H^s(\mathbb{R}^n)$ ---since it is a weak solution. This fact, which is not obvious, can also be proved without using the result on uniqueness of viscosity solutions. Indeed, it follows from Proposition \ref{prop:int-est-u} and Lemma \ref{remlog}, which yield a stronger fact: that $(-\Delta)^{s/2}u\in L^p(\mathbb{R}^n)$ for all $p<\infty$. Note that although we have proved Proposition \ref{prop:int-est-u} for weak solutions, its proof is also valid ---with almost no changes--- for viscosity solutions. \end{rem} \section{Boundary regularity} \label{sec4} In this section we study the precise behavior near the boundary of the solution $u$ to problem \eqref{eqlin}, where $g\in L^\infty(\Omega)$. More precisely, we prove that the function $u/\delta^s|_\Omega$ has a $C^{\alpha}(\overline\Omega)$ extension. This is stated in Theorem \ref{thm:v-is-Calpha}. This result will be a consequence of the interior regularity results of Section \ref{sec3} and an oscillation lemma near the boundary, which can be seen as the nonlocal analog of Krylov's boundary Harnack principle; see Theorem 4.28 in \cite{Kazdan}. The following proposition and lemma will be used to establish Theorem \ref{thm:v-is-Calpha}. They are proved in the Appendix.
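To illustrate the boundary behavior just described, consider the explicit example $u(x)=(1-x^2)_+^{1/2}$ in $\Omega=(-1,1)$ with $s=1/2$ (a sketch of ours, not part of the paper's argument): here $\delta(x)=1-|x|$, so the quotient $u/\delta^s$ equals $\sqrt{1+|x|}$, which extends continuously up to $\partial\Omega$ with boundary value $\sqrt2$, in agreement with the $C^\alpha(\overline\Omega)$ extension asserted above.

```python
# Minimal sketch (names are ours): the quotient u/delta^s for the explicit
# solution in Omega = (-1, 1) with s = 1/2 extends continuously up to the
# boundary, where it equals sqrt(2).
import math

def u(x):
    return math.sqrt(max(1.0 - x * x, 0.0))   # explicit solution for n = 1, s = 1/2

def delta(x):
    return max(1.0 - abs(x), 0.0)             # distance to the boundary {-1, 1}

def quotient(x, s=0.5):
    # Equals sqrt(1 + |x|) for |x| < 1, hence tends to sqrt(2) at the boundary.
    return u(x) / delta(x) ** s
```

Evaluating `quotient` at points approaching $\pm1$ exhibits the continuous extension numerically.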
\begin{prop}[1-D solution in half space, \cite{CRS}] \label{prop:solution} The function $\varphi_0$, defined by \begin{equation} \varphi_0(x) = \begin{cases} 0 \quad & \mbox{if } x\le 0\\ x^s & \mbox{if } x\ge 0\,, \end{cases} \end{equation} satisfies $(-\Delta)^s\varphi_0 = 0$ in $\mathbb{R}_+$. \end{prop} The lemma below gives a subsolution in $B_1\setminus B_{1/4}$ whose support is $B_1\subset\mathbb{R}^n$ and such that it is comparable to $(1-|x|)^s$ in $B_1$. \begin{lem}[Subsolution]\label{prop:subsolution} There exist $C_2>0$ and a radial function $\varphi_2=\varphi_2(|x|)$ satisfying \begin{equation}\label{eq:propsubsol} \begin{cases} (-\Delta)^s\varphi_2 \le 0 &\mbox{in }B_1\setminus B_{1/4}\\ \varphi_2 = 1 &\mbox{in }B_{1/4}\\ \varphi_2(x)\ge C_2(1-|x|)^s & \mbox{in } B_1\\ \varphi_2 = 0 \quad &\mbox{in }\mathbb{R}^n\setminus B_1 \,. \end{cases} \end{equation} \end{lem} To prove H\"older regularity of $u/\delta^s|_\Omega$ up to the boundary, we will control the oscillation of this function in sets near $\partial\Omega$ whose diameter goes to zero. To do it, we will set up an iterative argument as it is done for second order equations. Let us define the sets in which we want to control the oscillation and also auxiliary sets that are involved in the iteration. \begin{defi}\label{defiDR} Let $\kappa>0$ be a fixed small constant and let $\kappa'= 1/2+2\kappa$. We may take, for instance $\kappa= 1/16$, $\kappa' = 5/8$. Given a point $x_0$ in $\partial \Omega$ and $R>0$ let us define \[ D_R = D_R(x_0) = B_R(x_0)\cap \Omega \] and \[D_{\kappa'R}^+ = D_{\kappa'R}^+(x_0)= B_{\kappa'R}(x_0)\cap\{x\in \Omega\,:\,-x\cdot \nu(x_0)\ge 2\kappa R\}\,,\] where $\nu(x_0)$ is the unit outward normal at $x_0$; see Figure \ref{figura}. 
By $C^{1,1}$ regularity of the domain, there exists $\rho_0>0$, depending on $\Omega$, such that the following inclusions hold for each $x_0\in \partial\Omega$ and $R\le \rho_0$: \begin{equation}\label{eq:BkR(DR+)subsetDR} B_{\kappa R}(y) \subset D_R(x_0) \quad \mbox{for all }y\in D_{\kappa'R}^+(x_0)\,, \end{equation} and \begin{equation}\label{eq:B4kR(y*+nu)subset} B_{4\kappa R}(y^*-4\kappa R\nu(y^*))\subset D_R(x_0) \quad \mbox{and}\quad B_{\kappa R}(y^*-4\kappa R\nu(y^*))\subset D_{\kappa'R}^+(x_0)\, \end{equation} for all $y \in D_{R/2}$, where $y^*\in\partial\Omega$ is the unique boundary point satisfying $|y-y^*|=\text{dist}(y,\partial\Omega)$. Note that, since $R\leq\rho_0$, $y\in D_{R/2}$ is close enough to $\partial \Omega$ and hence the point $y^*-4\kappa R\nu(y^*)$ lies on the line joining $y$ and $y^*$; see Remark \ref{remrho0} below. \end{defi} \begin{figure} \caption{The sets $D_R$ and $D_{\kappa'R}^+$.}\label{figura} \end{figure} \begin{rem}\label{remrho0} Throughout the paper, $\rho_0>0$ is a small constant depending only on $\Omega$, which we assume to be a bounded $C^{1,1}$ domain. Namely, we assume that \eqref{eq:BkR(DR+)subsetDR} and \eqref{eq:B4kR(y*+nu)subset} hold whenever $R\le\rho_0$, for each $x_0\in\partial\Omega$, and also that every point on $\partial\Omega$ can be touched from both inside and outside $\Omega$ by balls of radius $\rho_0$. In other words, given $x_0\in \partial\Omega$, there are balls of radius $\rho_0$, $B_{\rho_0}(x_1)\subset \Omega$ and $B_{\rho_0}(x_2)\subset\mathbb{R}^n\setminus \Omega$, such that $\overline{B_{\rho_0}(x_1)}\cap\overline{B_{\rho_0}(x_2)}=\{x_0\}$. A useful observation is that all points $y$ in the segment that joins $x_1$ and $x_2$ ---through $x_0$--- satisfy $\delta(y)= |y-x_0|$. Recall that $\delta={\rm dist}(\,\cdot\,,\partial\Omega)$.
\end{rem} In the rest of this section, by $|(-\Delta)^su|\le K$ we mean that either $(-\Delta)^s u = g$ in the weak sense for some $g\in L^\infty$ satisfying $\|g\|_{L^\infty}\le K$ or that $u$ satisfies $-K \le (-\Delta)^s u \le K$ in the viscosity sense. The first (and main) step towards Theorem \ref{thm:v-is-Calpha} is the following. \begin{prop}\label{lem_main} Let $\Omega$ be a bounded $C^{1,1}$ domain, and $u$ be such that $|(-\Delta)^s u|\le K$ in $\Omega$ and $u\equiv0$ in $\mathbb{R}^n \setminus \Omega$, for some constant $K$. Given any $x_0\in \partial\Omega$, let $D_R$ be as in Definition \ref{defiDR}. Then, there exist $\alpha\in(0,1)$ and $C$ depending only on $\Omega$ and $s$ ---but not on $x_0$--- such that \begin{equation}\label{eq:lemmain}\sup_{D_R} u/\delta^s - \inf_{D_R} u/\delta^s \le C K R^\alpha\end{equation} for all $R\leq\rho_0$, where $\rho_0>0$ is a constant depending only on $\Omega$. \end{prop} To prove Proposition \ref{lem_main} we need three preliminary lemmas. We start with the first one, which might be seen as the fractional version of Lemma 4.31 in \cite{Kazdan}. Recall that $\kappa'\in(1/2,1)$ is a fixed constant throughout the section. It may be useful to regard the following lemma as a lower bound for $\inf_{D_{R/2}} u/\delta^s$, rather than an upper bound for $\inf_{D_{\kappa'R}^+} u/\delta^s$. \begin{lem}\label{lemA} Let $\Omega$ be a bounded $C^{1,1}$ domain, and $u$ be such that $u\ge0$ in all of $\mathbb{R}^n$ and $|(-\Delta)^s u|\le K$ in $D_R$, for some constant $K$. Then, there exists a positive constant $C$, depending only on $\Omega$ and $s$, such that \begin{equation}\label{eq:lemA} \inf_{D_{\kappa'R}^+} u/\delta^s \le C \bigl(\,\inf_{D_{R/2}} u/\delta^s + K R^s\bigr) \end{equation} for all $R\le \rho_0$, where $\rho_0>0$ is a constant depending only on $\Omega$. \end{lem} \begin{proof} {\em Step 1.} We first consider the case $K=0$. Let $R\leq \rho_0$, and let us call $m = \inf_{D_{\kappa'R}^+} u/\delta^s \ge 0$.
We have $u\ge m \delta^s \ge m (\kappa R)^s$ on $D_{\kappa'R}^+$. The second inequality is a consequence of \eqref{eq:BkR(DR+)subsetDR}. We scale the subsolution $\varphi_2$ in Lemma \ref{prop:subsolution} as follows, to use it as lower barrier: \[\psi_R(x):= (\kappa R)^s \varphi_2\bigl(\textstyle \frac{x}{4\kappa R}\bigr)\,.\] By \eqref{eq:propsubsol} we have \[ \begin{cases} (-\Delta)^s\psi_R \leq 0 &\mbox{in }B_{4\kappa R}\setminus B_{\kappa R}\\ \psi_R = (\kappa R)^s &\mbox{in }B_{\kappa R}\\ \psi_R \ge 4^{-s} C_2(4\kappa R-|x|)^s & \mbox{in } B_{4\kappa R}\setminus B_{\kappa R}\\ \psi_R \equiv 0 \quad &\mbox{in }\mathbb{R}^n\setminus B_{4\kappa R} \,. \end{cases} \] Given $y \in D_{R/2}$, we have either $y\in D_{\kappa'R}^+$ or $\delta(y)<4\kappa R$, by \eqref{eq:B4kR(y*+nu)subset}. If $y\in D_{\kappa'R}^+$ it follows from the definition of $m$ that $m \le u(y)/\delta(y)^s$. If $\delta(y) < 4\kappa R$, let $y^*$ be the closest point to $y$ on $\partial \Omega$ and $\tilde y = y^*-4\kappa R\,\nu(y^*)$. Again by \eqref{eq:B4kR(y*+nu)subset}, we have $B_{4\kappa R}(\tilde y)\subset D_R$ and $B_{\kappa R}(\tilde y)\subset D_{\kappa'R}^+$. But recall that $u\ge m (\kappa R)^s$ in $D_{\kappa'R}^+$, $(-\Delta)^s u= 0$ in $D_R$, and $u\ge 0$ in $\mathbb{R}^n$. Hence, $u(x)\ge m \psi_R(x-\tilde y)$ in all of $\mathbb{R}^n$ and in particular $u/\delta^s \ge 4^{-s}C_2 m$ on the segment joining $y^*$ and $\tilde y$, which contains $y$. Therefore, \begin{equation}\label{eq:pflemA1} \inf_{D_{\kappa'R}^+} u/\delta^s \le C \,\inf_{D_{R/2}} u/\delta^s\,. \end{equation} {\em Step 2.} If $K>0$ we define $\tilde u$ as the solution of \[ \begin{cases} (-\Delta)^s \tilde u = 0 \quad&\mbox{ in }D_R\\ \tilde u= u & \mbox{in }\mathbb{R}^n\setminus D_R. \end{cases} \] By Step 1, \eqref{eq:pflemA1} holds with $u$ replaced by $\tilde u$. On the other hand, $w=\tilde u - u$ satisfies $|(-\Delta)^s w|\le K$ and $w\equiv 0$ outside $D_R$.
Recall that points of $\partial \Omega$ can be touched by exterior balls of radius less than $\rho_0$. Hence, using the rescaled supersolution $K R^{2s}\varphi_1(x/R)$ from Lemma \ref{prop:supersolution} as an upper barrier, we readily prove, as in the proof of Lemma \ref{lem-aux-Cs}, that \[|w| \le C_1 K R^s\delta^s \quad \mbox{in }D_R\,.\] Thus, \eqref{eq:lemA} follows. \end{proof} The second lemma towards Proposition \ref{lem_main}, which might be seen as the fractional version of Lemma 4.35 in \cite{Kazdan}, is the following. \begin{lem}\label{lemB} Let $\Omega$ be a bounded $C^{1,1}$ domain, and $u$ be such that $u\ge0$ in all of $\mathbb{R}^n$ and $|(-\Delta)^s u|\le K$ in $D_R$, for some constant $K$. Then, there exists a positive constant $C$, depending only on $\Omega$ and $s$, such that \begin{equation}\label{eq:lemB} \sup_{D_{\kappa'R}^+} u/\delta^s \le C \bigl(\,\inf_{D_{\kappa'R}^+} u/\delta^s + K R^s\bigr) \end{equation} for all $R\le\rho_0$, where $\rho_0>0$ is a constant depending only on $\Omega$. \end{lem} \begin{proof} {\em Step 1.} Consider first the case $K=0$. In this case \eqref{eq:lemB} follows from the Harnack inequality for the fractional Laplacian \cite{L} ---note that we assume $u\ge 0$ in all of $\mathbb{R}^n$. Indeed, by \eqref{eq:BkR(DR+)subsetDR}, for each $y\in D_{\kappa'R}^+$ we have $B_{\kappa R}(y)\subset D_R$ and hence $(-\Delta)^s u= 0$ in $B_{\kappa R}(y)$. Then we may cover $D_{\kappa'R}^+$ by a finite number of balls $B_{\kappa R/2}(y_i)$, using the same (scaled) covering for all $R\leq \rho_0$, to obtain \[\sup_{B_{\kappa R/2}(y_i)} u\leq C\inf_{B_{\kappa R/2}(y_i)} u.\] Then, \eqref{eq:lemB} follows since $(\kappa R/2)^s\leq \delta^s\leq (3\kappa R/2)^s$ in $B_{\kappa R/2}(y_i)$ by \eqref{eq:BkR(DR+)subsetDR}. {\em Step 2.} When $K>0$, we prove \eqref{eq:lemB} by using an argument similar to that in Step 2 of the proof of Lemma \ref{lemA}.
\end{proof} Before proving Lemma \ref{lapsdeltas} we give an extension lemma ---see \cite[Theorem 1, Section 3.1]{EG}, where the case $\alpha=1$ is proven in full detail. \begin{lem} \label{prop:extension-op-E} Let $\alpha\in(0,1]$ and let $V\subset\mathbb{R}^n$ be a bounded domain. There exists a (nonlinear) map $E:C^{0,\alpha}(\overline V)\rightarrow C^{0,\alpha}(\mathbb{R}^n)$ satisfying \[ E(w)\equiv w \quad \mbox{in }\overline V,\ \ \ [E(w)]_{C^{0,\alpha}(\mathbb{R}^n)}\le [w]_{C^{0,\alpha}(\overline V)},\ \ \ \mbox{and}\ \ \ \|E(w)\|_{L^\infty(\mathbb{R}^n)}\le \|w\|_{L^\infty(V)}\] for all $w\in C^{0,\alpha}(\overline V)$. \end{lem} \begin{proof} It is immediate to check that \[E(w)(x)=\min\left\{\min_{z\in \overline V}\left\{w(z)+ [w]_{C^{\alpha}(\overline V)}|z-x|^\alpha\right\},\|w\|_{L^\infty(V)}\right\}\] satisfies the conditions since, for all $x,y,z$ in $\mathbb{R}^n$, \[|z-x|^\alpha \le |z-y|^\alpha+|y-x|^\alpha\,.\] \end{proof} We can now give the third lemma towards Proposition \ref{lem_main}. This lemma, which is related to Proposition \ref{prop:solution}, is crucial. It states that $\delta^s|_\Omega$, extended by zero outside $\Omega$, is an approximate solution in a neighborhood of $\partial\Omega$ inside $\Omega$. \begin{lem}\label{lapsdeltas} Let $\Omega$ be a bounded $C^{1,1}$ domain, and $\delta_0=\delta\chi_\Omega$ be the distance function in $\Omega$ extended by zero outside $\Omega$. Let $\alpha=\min\{s,1-s\}$, and $\rho_0$ be given by Remark \ref{remrho0}. Then, \[(-\Delta)^s \delta_0^s\qquad \textrm{belongs to }\ C^{\alpha}(\overline{\Omega_{\rho_0}})\,,\] where $\Omega_{\rho_0}=\Omega\cap\{\delta<\rho_0\}$. In particular, \[|(-\Delta)^s \delta_0^s| \le C_\Omega\quad \mbox{in } \Omega_{\rho_0}\,,\] where $C_\Omega$ is a constant depending only on $\Omega$ and $s$. \end{lem} \begin{proof} Fix a point $x_0$ on $\partial\Omega$ and denote, for $\rho>0$, $B_\rho= B_{\rho}(x_0)$.
Instead of proving that \[(-\Delta)^s\delta_0^s=c_{n,s}{\rm PV} \int_{\mathbb{R}^n}\frac{\delta_0(x)^s-\delta_0(y)^s}{|x-y|^{n+2s}}dy\] is $C^{\alpha}(\overline{\Omega\cap B_{\rho_0}})$ ---as a function of $x$---, we may equivalently prove that \begin{equation}\label{prove1}{\rm PV} \int_{B_{2\rho_0}}\frac{\delta_0(x)^s-\delta_0(y)^s}{|x-y|^{n+2s}}dy\qquad \mbox{belongs to}\qquad C^{\alpha}(\overline{\Omega\cap B_{\rho_0}}). \end{equation} This is because the difference \[\frac{1}{c_{n,s}}(-\Delta)^s\delta_0^s-{\rm PV} \int_{B_{2\rho_0}}\frac{\delta_0(x)^s-\delta_0(y)^s}{|x-y|^{n+2s}}dy=\int_{\mathbb{R}^n \setminus B_{2\rho_0}}\frac{\delta_0(x)^s-\delta_0(y)^s}{|x-y|^{n+2s}}dy\,\] belongs to $C^{s}(\overline{B_{\rho_0}})$, since $\delta_0^s$ is $C^{s}(\mathbb{R}^n)$ and $|x|^{-n-2s}$ is integrable and smooth outside a neighborhood of $0$. To see \eqref{prove1}, we flatten the boundary. Namely, consider a $C^{1,1}$ change of variables $X=\Psi(x)$, where $\Psi: B_{3\rho_0} \rightarrow V\subset \mathbb{R}^n$ is a $C^{1,1}$ diffeomorphism, satisfying that $\partial\Omega$ is mapped onto $\{X_n=0\}$, $\Omega\cap B_{3\rho_0}$ is mapped into $\mathbb{R}^n_+$, and $\delta_0(x)= (X_n)_+$. Such diffeomorphism exists because we assume $\Omega$ to be $C^{1,1}$. Let us respectively call $V_1$ and $V_2$ the images of $B_{\rho_0}$ and $B_{2\rho_0}$ under $\Psi$. Let us denote the points of $V\times V$ by $(X,Y)$. We consider the functions $x$ and $y$, defined in $V$, by $x= \Psi^{-1}(X)$ and $y= \Psi^{-1}(Y)$. With these notations, we have \[x-y=D\Psi^{-1}(X)(X-Y)+\mathcal{O}\left(|X-Y|^2\right),\] and therefore \begin{equation}\label{x-y}|x-y|^2=(X-Y)^TA(X)(X-Y)+\mathcal{O}\left(|X-Y|^3\right),\end{equation} where \[A(X)=\left(D\Psi^{-1}(X)\right)^TD\Psi^{-1}(X)\] is a symmetric matrix, uniformly positive definite in $\overline{V_2}$.
Hence, \[{\rm PV} \int_{B_{2\rho_0}}\frac{\delta_0(x)^s-\delta_0(y)^s}{|x-y|^{n+2s}}dy= {\rm PV} \int_{V_2}\frac{(X_n)_+^s-(Y_n)_+^s}{\left|(X-Y)^TA(X)(X-Y)\right|^{\frac{n+2s}{2}}}g(X,Y)dY,\] where we have denoted \[g(X,Y)=\left(\frac{(X-Y)^TA(X)(X-Y)}{|x-y|^2}\right)^{\frac{n+2s}{2}}J(Y)\] and $J=|\det D\Psi^{-1}|$. Note that we have $g\in C^{0,1}(\overline{V_2\times V_2})$, since $\Psi$ is $C^{1,1}$ and we have \eqref{x-y}. Now we are reduced to proving that \begin{equation}\label{prove2} \psi_1(X):= {\rm PV}\int_{V_2}\frac{(X_n)_+^s-(Y_n)_+^s}{\left|(X-Y)^TA(X)(X-Y)\right|^{\frac{n+2s}{2}}}\,g(X,Y)dY, \end{equation} belongs to $C^{\alpha}(\overline{V_1^+})$ (as a function of $X$), where $V_1^+=V_1\cap \{X_n>0\}$. To prove this, we extend the Lipschitz function $g\in C^{0,1}(\overline{V_2\times V_2})$ to all $\mathbb{R}^n$. Namely, consider the function $g^*=E(g)\in C^{0,1}(\mathbb{R}^n\times\mathbb{R}^n)$ provided by Lemma \ref{prop:extension-op-E}, which satisfies \[ g^*\equiv g \mbox{ in }\overline{V_2\times V_2} \quad \mbox{and}\quad \| g^*\|_{C^{0,1}(\mathbb{R}^n\times\mathbb{R}^n)}\le\|g\|_{C^{0,1}(\overline{V_2\times V_2})}\,.\] By the same argument as above, using that $V_1\subset\subset V_2$, we have that $\psi_1\in C^{\alpha}(\overline{V_1^+})$ if and only if so is the function \[\psi(X)= {\rm PV}\int_{\mathbb{R}^n}\frac{(X_n)_+^s-(Y_n)_+^s}{\left|(X-Y)^TA(X)(X-Y)\right|^{\frac{n+2s}{2}}}\,g^*(X,Y)dY.\] Furthermore, from $g^*$ define $\tilde g \in C^{0,1}(\overline{V_2}\times\mathbb{R}^n)$ by $\tilde g (X,Z) = g^*( X,X+MZ) \det M$, where $M=M(X)= D\Psi(X)$. Then, using the change of variables $Y=X+MZ$ we deduce \[\psi(X)= {\rm PV}\int_{\mathbb{R}^n}\frac{(X_n)_+^s-\bigl(e_n\cdot (X+ MZ)\bigr)_+^s}{|Z|^{n+2s}}\,\tilde g(X,Z)dZ.\] Next, we prove that $\psi\in C^\alpha(\overline{V_1^+})$, which concludes the proof.
Indeed, taking into account that the function $(X_n)_+^s$ is $s$-harmonic in $\mathbb{R}^n_+$ ---by Proposition \ref{prop:solution}--- we obtain \[{\rm PV}\int_{\mathbb{R}^n}\frac{(e'\cdot X')_+^s-(e'\cdot(X'+Z))_+^s}{|Z|^{n+2s}}dZ=0 \] for every $e'\in \mathbb{R}^n$ and for every $X'$ such that $e'\cdot X'>0$. Thus, letting $e'= e_n^T M$ and $X'=M^{-1}X$ we deduce \[{\rm PV}\int_{\mathbb{R}^n}\frac{(X_n)_+^s-\bigl(e_n\cdot (X+ MZ)\bigr)_+^s}{|Z|^{n+2s}}dZ=0\] for every $X$ such that $(e_n^T M)\cdot(M^{-1}X)>0$, that is, for every $X\in\mathbb{R}^n_+$. Therefore, it holds \[\psi(X)=\int_{\mathbb{R}^n}\frac{\phi(X,0)-\phi(X,Z)}{|Z|^{n+2s}}\bigl(\tilde g(X,Z)- \tilde g(X,0)\bigr)dZ,\] where \[\phi(X,Z)=(e_n\cdot(X+MZ))_+^s\] satisfies $[\phi]_{C^s(\overline{V_2}\times \mathbb{R}^n)}\leq C$, and $\|\tilde g\|_{C^{0,1}(\overline{V_2}\times \mathbb{R}^n)}\leq C$. Let us finally prove that $\psi$ belongs to $C^\alpha(\overline{V_1^+})$. To do it, let $X$ and $\bar X$ be in $\overline{V_1^+}$. Then, we have \[\psi(X)-\psi(\bar X)=\int_{\mathbb{R}^n} \frac{\Theta(X,\bar X,Z)}{|Z|^{n+2s}}dZ,\] where \begin{equation} \begin{split} \Theta(X,&\bar X,Z)= \bigl(\phi(X,0)-\phi(X,Z)\bigr)\bigl(\tilde g(X,Z)- \tilde g(X,0)\bigr)\\ &\hspace{15mm} -\bigl(\phi(\bar X,0)-\phi(\bar X,Z)\bigr)\bigl(\tilde g(\bar X,Z)- \tilde g(\bar X,0)\bigr)\\ &= \bigl(\phi(X,0)-\phi(X,Z)- \phi(\bar X,0)+\phi(\bar X,Z)\bigr)\bigl(\tilde g(X,Z)- \tilde g(X,0)\bigr) \\& \quad -\bigl(\phi(\bar X,0)-\phi(\bar X,Z)\bigr)\bigl(\tilde g(X,Z)- \tilde g(X,0) - \tilde g(\bar X,Z)+ \tilde g(\bar X,0)\bigr). \end{split} \end{equation} Now, on the one hand, it holds \begin{equation}\label{theta1}|\Theta(X,\bar X,Z)|\leq C|Z|^{1+s},\end{equation} since $[\phi]_{C^s(\overline{V_2}\times \mathbb{R}^n)}\leq C$ and $\|\tilde g\|_{C^{0,1}(\overline{V_2}\times \mathbb{R}^n)}\leq C$. 
On the other hand, it also holds \begin{equation}\label{theta2}|\Theta(X,\bar X,Z)|\leq C|X-\bar X|^s \min\{|Z|,|Z|^s\}.\end{equation} Indeed, we only need to observe that \[\begin{split} \left|\tilde g(X,Z)- \tilde g(X,0) - \tilde g(\bar X,Z)+ \tilde g(\bar X,0)\right| &\leq C\min\bigl\{\min\{|Z|,1\}, |X-\bar X|\bigr\}\\ &\le C \min \{|Z|^{1-s},1\}|X-\bar X|^s. \end{split} \] Thus, letting $r=|X-\bar X|$ and using \eqref{theta1} and \eqref{theta2}, we obtain \[ \begin{split} |\psi(X)-\psi(\bar X)|&\le \int_{\mathbb{R}^n}\frac{|\Theta(X,\bar X,Z)|}{|Z|^{n+2s}}dZ \\ &\le \int_{B_r} \frac{C|Z|^{1+s}}{|Z|^{n+2s}}dZ + \int_{\mathbb{R}^n\setminus B_r} \frac{C r^s \min\{|Z|,|Z|^s\}}{|Z|^{n+2s}}dZ \\ &\le C r^{1-s}+C\max\{r^{1-s},r^s\}\le C r^{\alpha}\,, \end{split} \] as desired, since $\alpha=\min\{s,1-s\}$ and $r=|X-\bar X|$ is bounded. \end{proof} Next we prove Proposition \ref{lem_main}. \begin{proof}[Proof of Proposition \ref{lem_main}] By considering $u/K$ instead of $u$ we may assume that $K=1$, that is, that $|(-\Delta)^s u|\le 1$ in $\Omega$. Then, by Claim \ref{Linftybound} we have $\|u\|_{L^\infty(\mathbb{R}^n)}\le C$ for some constant $C$ depending only on $\Omega$ and $s$. Let $\rho_0>0$ be given by Remark \ref{remrho0}. Fix $x_0\in \partial \Omega$. We will prove that there exist constants $C_0>0$, $\rho_1\in(0,\rho_0)$, and $\alpha\in(0,1)$, depending only on $\Omega$ and $s$, and monotone sequences $(m_k)$ and $(M_k)$ such that, for all $k\geq0$, \begin{equation}\label{eq:prooflem1} M_k - m_k = 4^{-\alpha k}\,,\quad -1\le m_k\le m_{k+1}< M_{k+1}\le M_k\le 1\,, \end{equation} and \begin{equation}\label{eq:prooflem2} m_k \le C_0^{-1}u/\delta^s \le M_k \quad \mbox{in } D_{R_k}= D_{R_k}(x_0)\,, \quad \mbox{where } R_k = \rho_1 4^{-k}. \end{equation} Note that \eqref{eq:prooflem2} is equivalent to the following inequality in $B_{R_k}$ instead of $D_{R_k}$ --- recall that $D_{R_k}= B_{R_k}\cap\Omega$.
\begin{equation}\label{eq:prooflem3} m_k \delta_0^s \le C_0^{-1}u \le M_k \delta_0^s \quad \mbox{in } B_{R_k}= B_{R_k}(x_0)\,, \quad \mbox{where } R_k = \rho_1 4^{-k}\,. \end{equation} If there exist such sequences, then \eqref{eq:lemmain} holds for all $R\leq \rho_1$ with $C=4^\alpha C_0/\rho_1^\alpha$. Then, by increasing the constant $C$ if necessary, \eqref{eq:lemmain} holds also for every $R\leq \rho_0$. Next we construct $\{M_k\}$ and $\{m_k\}$ by induction. By Lemma \ref{lem-aux-Cs}, we find that there exist $m_0$ and $M_0$ such that \eqref{eq:prooflem1} and \eqref{eq:prooflem2} hold for $k=0$ provided we pick $C_0$ large enough depending on $\Omega$ and $s$. Assume that we have sequences up to $m_k$ and $M_k$. We want to prove that there exist $m_{k+1}$ and $M_{k+1}$ which fulfill the requirements. Let \begin{equation}\label{123} u_k = C_0^{-1}u - m_k \delta_0^s\,. \end{equation} We will consider the positive part $u_{k}^+$ of $u_k$ in order to have a nonnegative function in all of $\mathbb{R}^n$ to which we can apply Lemmas \ref{lemA} and \ref{lemB}. Let $u_{k}= u_k^+-u_k^-$. Observe that, by induction hypothesis, \begin{equation}\label{321} u_k^+ = u_k \quad\mbox{and}\quad u_k^-= 0 \quad \mbox{in }B_{R_k}\,. \end{equation} Moreover, $C_0^{-1}u \ge m_j\delta_0^s$ in $B_{R_j}$ for each $j\le k$. 
Therefore, by \eqref{123} we have \[u_k \ge (m_{j}-m_k) \delta_0^s \ge (m_{j}-M_{j}+M_k-m_k) \delta_0^s \ge (-4^{-\alpha j} +4^{-\alpha k}) \delta_0^s \quad \mbox{in } B_{R_j}.\] But clearly $0\le \delta_0^s\le R_{j}^s = \rho_1^s 4^{-js}$ in $B_{R_j}$, and therefore using $R_j=\rho_1 4^{-j}$ \[u_k \ge - \rho_1^{-\alpha} R_j^s(R_j^\alpha-R_k^\alpha) \quad \mbox{in }\ B_{R_j}\ \mbox{for each}\ \ j\le k\,.\] Thus, since for every $x\in B_{R_0}\setminus B_{R_k}$ there is $j<k$ such that \[ |x-x_0|< R_j = \rho_1 4^{-j} \le 4|x-x_0|,\] we find \begin{equation}\label{eq:pflem3} u_{k}(x)\ge - \rho_1^{-\alpha} R_k^{\alpha+s} \biggl|\frac{4(x-x_0)}{R_k}\biggr|^s \biggl(\biggl|\frac{4(x-x_0)}{R_k}\biggr|^\alpha - 1\biggr) \quad \mbox{outside } B_{R_k}\,. \end{equation} By \eqref{eq:pflem3} and \eqref{321}, at $x\in B_{R_k/2}(x_0)$ we have \[ \begin{split} 0\le -(-\Delta)^s u_{k}^- (x) &= c_{n,s}\int_{x+y\notin B_{R_k}}\frac{u_k^-(x+y)}{|y|^{n+2s}}\,dy\\ &\le c_{n,s}\,\rho_1^{-\alpha}\int_{|y|\ge R_k/2} R_k^{\alpha+s} \biggl|\frac{8y}{R_k}\biggr|^s \biggl(\biggl|\frac{8y}{R_k}\biggr|^\alpha - 1\biggr) |y|^{-n-2s}\,dy \\ &= C\rho_1^{-\alpha} R_k^{\alpha-s}\int_{|z|\ge 1/2} \frac{|8z|^{s}(|8z|^\alpha-1)}{|z|^{n+2s}}\,dz\\ &\le \varepsilon_0\rho_1^{-\alpha} R_k^{\alpha-s}, \end{split} \] where $\varepsilon_0= \varepsilon_0(\alpha)\downarrow 0$ as $\alpha\downarrow 0$ since $|8z|^\alpha\rightarrow 1$. Therefore, writing $u_k^+=C_0^{-1}u-m_k\delta_0^s+u_k^-$ and using Lemma \ref{lapsdeltas}, we have \[\begin{split} |(-\Delta)^s u_{k}^+| &\le C_0^{-1}|(-\Delta)^s u|+ |m_k|\,|(-\Delta)^s \delta_0^s| + |(-\Delta)^s (u_k^-)|\\ &\le (C_0^{-1} + C_\Omega) + \varepsilon_0\rho_1^{-\alpha} R_k^{\alpha -s}\\ &\le \textstyle \bigl(C_1\rho_1^{s-\alpha}+\varepsilon_0\rho_1^{-\alpha}\bigr) R_k^{\alpha-s}\qquad\textrm{ in }D_{R_k/2}. \end{split}\] In the last inequality we have just used $R_k\leq\rho_1$ and $\alpha\leq s$.
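Explicitly, since $\alpha\le s$ and $R_k\le \rho_1$ give $R_k^{\alpha-s}\ge \rho_1^{\alpha-s}$, the constant term can be absorbed as \[ C_0^{-1} + C_\Omega \le \bigl(C_0^{-1} + C_\Omega\bigr)\,\rho_1^{s-\alpha} R_k^{\alpha-s} =: C_1\,\rho_1^{s-\alpha} R_k^{\alpha-s}\,. \]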
Now we can apply Lemmas \ref{lemA} and \ref{lemB} with $u$ in its statements replaced by $u_{k}^+$, recalling that \[\textstyle u_{k}^+ = u_k = C_0^{-1}u- m_k \delta^s \quad \mbox{in }D_{R_k}\] to obtain \begin{eqnarray} \sup_{D_{\kappa'R_k/2}^+} (C_0^{-1}u/\delta^s-m_k) &\le C \nonumber \biggl(\inf_{D_{\kappa'R_k/2}^+} (C_0^{-1}u/\delta^s-m_k) + \bigl(C_1\rho_1^{s-\alpha} +\varepsilon_0\rho_1^{-\alpha}\bigr) R_k^\alpha \biggr)\\ &\le C \biggl(\inf_{D_{R_k/4}} (C_0^{-1}u/\delta^s-m_k)+ \bigl(C_1\rho_1^{s-\alpha} +\varepsilon_0\rho_1^{-\alpha}\bigr) R_k^\alpha \biggr)\,.\label{eq:pflema1} \end{eqnarray} Next we can repeat the whole argument ``upside down'', that is, with the functions $u^k = M_k \delta_0^s - C_0^{-1}u$ instead of $u_k$. In this way we obtain, instead of \eqref{eq:pflema1}, the following: \begin{equation}\label{eq:pflema2} \sup_{D_{\kappa'R_k/2}^+} (M_k - C_0^{-1}u/\delta^s) \le C \biggl(\inf_{D_{R_k/4}} (M_k-C_0^{-1}u/\delta^s)+ \bigl(C_1\rho_1^{s-\alpha} +\varepsilon_0\rho_1^{-\alpha}\bigr) R_k^\alpha \biggr).
\end{equation} Adding \eqref{eq:pflema1} and \eqref{eq:pflema2} we obtain \begin{equation}\label{eq:pflema3} \begin{split} M_k-m_k &\le C \biggl(\inf_{D_{R_k/4}} (C_0^{-1}u/\delta^s-m_k) + \inf_{D_{R_k/4}} (M_k-C_0^{-1}u/\delta^s) + \bigl(C_1\rho_1^{s-\alpha} +\varepsilon_0\rho_1^{-\alpha}\bigr) R_k^\alpha \biggr)\\ &= C\biggl(\inf_{D_{R_{k+1}}} C_0^{-1}u/\delta^s - \sup_{D_{R_{k+1}}} C_0^{-1}u/\delta^s +M_k-m_k+ \bigl(C_1\rho_1^{s-\alpha} +\varepsilon_0\rho_1^{-\alpha}\bigr) R_k^\alpha \biggr), \end{split} \end{equation} and thus, using that $M_k-m_k = 4^{-\alpha k}$ and $R_k = \rho_1 4^{-k}$, \[\sup_{D_{R_{k+1}}} C_0^{-1}u/\delta^s - \inf_{D_{R_{k+1}}} C_0^{-1}u/\delta^s \le \bigl(\textstyle \frac{C-1}{C} +C_1\rho_1^s +\varepsilon_0\bigr) 4^{-\alpha k}\,.\] Now we choose $\alpha$ and $\rho_1$ small enough so that \[\frac{C-1}{C} +C_1\rho_1^s +\varepsilon_0(\alpha) \le 4^{-\alpha}.\] This is possible since $\varepsilon_0(\alpha)\downarrow 0$ as $\alpha\downarrow 0$ and the constants $C$ and $C_1$ do not depend on $\alpha$ nor $\rho_1$ ---they depend only on $\Omega$ and $s$. Then, we find \[\sup_{D_{R_{k+1}}} C_0^{-1}u/\delta^s - \inf_{D_{R_{k+1}}} C_0^{-1}u/\delta^s \le 4^{-\alpha (k+1)},\] and thus we are able to choose $m_{k+1}$ and $M_{k+1}$ satisfying \eqref{eq:prooflem1} and \eqref{eq:prooflem2}. \end{proof} Finally, we give the: \begin{proof}[Proof of Theorem \ref{thm:v-is-Calpha}] Define $v=u/\delta^s|_\Omega$ and $K=\|g\|_{L^\infty(\Omega)}$. As in the proof of Proposition \ref{lem_main}, by considering $u/K$ instead of $u$ we may assume that $|(-\Delta)^s u|\leq 1$ in $\Omega$ and that $\|u\|_{L^{\infty}(\Omega)}\leq C$ for some constant $C$ depending only on $\Omega$ and $s$. First we claim that there exist constants $C$, $M>0$, $\widetilde\alpha\in (0,1)$ and $\beta\in(0,1)$, depending only on $\Omega$ and $s$, such that \begin{itemize} \item[(i)] $\|v\|_{L^\infty(\Omega)}\le C$. 
\item[(ii)] For all $x\in \Omega$, the following seminorm bound holds: \[ [v]_{C^{\beta}(\overline{B_{R/2}(x)})} \le C \left(1+R^{-M}\right),\] where $R={\rm dist}(x,\mathbb R^n\setminus\Omega)$. \item[(iii)] For each $x_0\in \partial \Omega$ and for all $\rho>0$ we have \[ \sup_{B_{\rho}(x_0)\cap \Omega} v-\inf_{B_{\rho}(x_0)\cap \Omega} v \leq C {\rho}^{\widetilde \alpha}.\] \end{itemize} Indeed, it follows from Lemma \ref{lem-aux-Cs} that $\|v\|_{L^\infty(\Omega)}\le C$ for some $C$ depending only on $\Omega$ and $s$. Hence, (i) is satisfied. Moreover, it follows from Lemma \ref{lem-sarp-Cs-bounds-u} that for every $x\in \Omega$, \[ [u]_{C^{\beta}(B_{R/2}(x))} \le CR^{-\beta},\qquad \beta\in(0,2s), \] where $R=\delta(x)$. Since $\Omega$ is $C^{1,1}$, provided $\delta(x)<\rho_0$ we have \[\|\delta^{-s}\|_{L^{\infty}(B_{R/2}(x))}\le CR^{-s}\quad \mbox{and}\quad [\delta^{-s}]_{C^{0,1}(B_{R/2}(x))}\le CR^{-s-1}\] and hence, by interpolation, \[[\delta^{-s}]_{C^{\beta}(B_{R/2}(x))}\le CR^{-s-\beta}\] for each $\beta\in(0,1)$. Thus, since $v= u\delta^{-s}$, we find \[[v]_{C^{\beta}(B_{R/2}(x))} \le C\left(1+R^{-s-\beta}\right)\] for all $x\in \Omega$ and $\beta<\min\{1,2s\}$. Therefore, (ii) is satisfied. The constants $C$ depend only on $\Omega$ and $s$. In addition, using Proposition \ref{lem_main} and that $\|v\|_{L^\infty(\Omega)}\leq C$, we deduce that (iii) is satisfied. Now, we claim that (i)-(ii)-(iii) lead to \[ [v]_{C^{\alpha}(\overline \Omega)} \le C,\] for some ${\alpha}\in(0,1)$ depending only on $\Omega$ and $s$. Indeed, let $x, y\in \Omega$, $R={\rm dist}(x,\mathbb{R}^n\setminus\Omega) \geq {\rm dist}(y,\mathbb{R}^n\setminus\Omega)$, and $r=|x-y|$. Let us see that $|v(x)-v(y)|\leq Cr^{\alpha}$ for some ${\alpha}>0$. If $r\geq1$, this follows from (i). Assume $r<1$, and let $p\geq1$ be a constant to be chosen later. Then, we have the following dichotomy: {\em Case 1.} Assume $r\ge R^p/2$.
Let $x_0, y_0 \in \partial \Omega$ be such that $|x-x_0|={\rm dist}(x,\mathbb{R}^n\setminus\Omega)$ and $|y-y_0|={\rm dist}(y,\mathbb{R}^n\setminus\Omega)$. Then, using (iii) and the definition of $R$ we deduce \[ |v(x)-v(y)| \le |v(x)-v(x_0)|+|v(x_0)-v(y_0)|+|v(y_0)-v(y)| \le C R^{\widetilde\alpha} \le Cr^{\widetilde\alpha/p}.\] {\em Case 2.} Assume $r\le R^p/2$. Hence, since $p\ge 1$, we have $y\in B_{R/2}(x)$. Then, using (ii) we obtain \[ |v(x)-v(y)| \le C (1+R^{-M}) r^\beta \le C\left( 1+ r^{-M/p}\right) r^{\beta} \le C r^{\beta- M/p}.\] To finish the proof we only need to choose $p>M/\beta$ and take ${\alpha}= \min\{\widetilde\alpha/p, \beta -M/p \}$. \end{proof} \section{Interior estimates for $u/\delta^s$} \label{sec5} The main goal of this section is to prove the $C^\gamma$ bounds in $\Omega$ for the function $u/\delta^s$ in Theorem \ref{thm:int-est-v}. To prove this result we find an equation for the function $v=u/\delta^s|_\Omega$, that is derived below. This equation is nonlocal, and thus, we need to give values to $v$ in $\mathbb{R}^n\setminus\Omega$, although we want an equation only in $\Omega$. It might seem natural to consider $u/\delta^s$, which vanishes outside $\Omega$ since $u\equiv0$ there, as an extension of $u/\delta^s|_\Omega$. However, such extension is discontinuous through $\partial\Omega$, and it would lead to some difficulties. Instead, we consider a $C^\alpha(\mathbb{R}^n)$ extension of the function $u/\delta^s|_\Omega$, which is $C^\alpha(\overline\Omega)$ by Theorem \ref{thm:v-is-Calpha}. Namely, throughout this section, let $v$ be the $C^\alpha(\mathbb{R}^n)$ extension of $u/\delta^s|_\Omega$ given by Lemma \ref{prop:extension-op-E}. Let $\delta_0=\delta\chi_\Omega$, and note that $u=v\delta_0^s$ in $\mathbb{R}^n$. 
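The expansion in the next paragraph uses a Leibniz-type product rule for the fractional Laplacian. It rests on the elementary algebraic identity \[ w_1(x)w_2(x)-w_1(y)w_2(y)= w_1(x)\bigl(w_2(x)-w_2(y)\bigr)+w_2(x)\bigl(w_1(x)-w_1(y)\bigr)-\bigl(w_1(x)-w_1(y)\bigr)\bigl(w_2(x)-w_2(y)\bigr)\,; \] dividing by $|x-y|^{n+2s}$ and integrating in $y$, in the principal value sense, yields \[(-\Delta)^s(w_1w_2)=w_1(-\Delta)^sw_2+w_2(-\Delta)^sw_1-I_s(w_1,w_2)\,,\] where $I_s$ is the bilinear form defined below.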
Then, using \eqref{eqlin} we have \[g(x) = (-\Delta)^s(v\delta_0^s) = v(-\Delta)^s \delta_0^s+ \delta_0^s(-\Delta)^s v -I_s(v,\delta_0^s)\,\] in $\Omega_{\rho_0}=\{x\in\Omega\,:\,\delta(x)<\rho_0\}$, where \begin{equation}\label{Is} I_s(w_1,w_2)(x)= c_{n,s}\int_{\mathbb{R}^n} \frac{\bigl(w_1(x)-w_1(y)\bigr)\bigl(w_2(x)-w_2(y)\bigr)}{|x-y|^{n+2s}}\,dy \end{equation} and $\rho_0$ is a small constant depending on the domain; see Remark \ref{remrho0}. Here, we have used that $(-\Delta)^s(w_1w_2)=w_1(-\Delta)^sw_2+w_2(-\Delta)^sw_1-I_s(w_1,w_2)$, which follows easily from \eqref{laps}. This equation is satisfied pointwise in $\Omega_{\rho_0}$, since $g$ is $C^\alpha$ in $\Omega$. We have to consider $\Omega_{\rho_0}$ instead of $\Omega$ because the distance function is $C^{1,1}$ there and thus we can compute $(-\Delta)^s\delta_0^s$. In all of $\Omega$ the distance function $\delta$ is only Lipschitz and hence $(-\Delta)^s\delta_0^s$ is singular for $s\ge\frac12$. Thus, the following is the equation for $v$: \begin{equation}\label{equaciov} (-\Delta)^sv= \frac{1}{\delta_0^s}\biggl( g(x) - v (-\Delta)^s \delta_0^s + I_s(v,\delta_0^s)\biggr) \quad\mbox{in } \Omega_{\rho_0}\,. \end{equation} From this equation we will obtain the interior estimates for $v$. More precisely, we will obtain a priori bounds for the interior H\"older norms of $v$, treating $\delta_0^{-s}I_s(v,\delta_0^s)$ as a lower order term. For this, we consider the weighted H\"older norms given by Definition \ref{definorm}. Recall that, throughout the paper, we denote by $C^\beta$ the space $C^{k,\beta'}$, where $\beta=k+\beta'$ with $k$ integer and $\beta'\in(0,1]$. In Theorem \ref{thm:v-is-Calpha} we have proved that $u/\delta^s|_{\Omega}$ is $C^{\alpha}(\overline\Omega)$ for some $\alpha\in(0,1)$, with an estimate. From this $C^\alpha$ estimate and from the equation \eqref{equaciov} for $v$, we will next find the estimate for $\|u/\delta^s\|_{\gamma;\Omega}^{(-\alpha)}$ stated in Theorem \ref{thm:int-est-v}.
The proof of this result relies on some preliminary results below. The next lemma is used to control the lower order term $\delta_0^{-s}I_s(v,\delta_0^s)$ in the equation \eqref{equaciov} for $v$. \begin{lem} \label{lem:bound-I} Let $\Omega$ be a bounded $C^{1,1}$ domain, and $U\subset \Omega_{\rho_0}$ be an open set. Let $s$ and $\alpha$ belong to $(0,1)$ and satisfy $\alpha+s\le 1$ and $\alpha< s$. Then, \begin{equation}\label{eq:bound-I} \|I_s(w,\delta_0^s)\|_{\alpha;U}^{(s-\alpha)}\le C\biggl( [w]_{C^{\alpha}(\mathbb{R}^n)}+ [w]_{\alpha+s;U}^{(-\alpha)}\biggr)\,, \end{equation} for all $w$ with finite right hand side. The constant $C$ depends only on $\Omega$, $s$, and $\alpha$. \end{lem} To prove Lemma \ref{lem:bound-I} we need the following lemma. \begin{lem}\label{claim:I} Let $U\subset \mathbb{R}^n$ be a bounded open set. Let $\alpha_1,\alpha_2\in(0,1)$ and $\beta\in(0,1]$ satisfy $\alpha_i < \beta$ for $i=1,2$, $\alpha_1+\alpha_2<2s$, and $s<\beta<2s$. Assume that $w_1,w_2\in C^\beta(U)$. Then, \begin{equation}\label{eq:bound-Iclaim} \|I_s(w_1,w_2)\|_{2\beta-2s;U}^{(2s-\alpha_1-\alpha_2)}\le C\left( [w_1]_{C^{\alpha_1}(\mathbb{R}^n)}+ [w_1]_{\beta;U}^{(-\alpha_1)}\right)\left( [w_2]_{C^{\alpha_2}(\mathbb{R}^n)}+ [w_2]_{\beta;U}^{(-\alpha_2)}\right), \end{equation} for all functions $w_1, w_2$ with finite right hand side. The constant $C$ depends only on $\alpha_1$, $\alpha_2$, $n$, $\beta$, and $s$. \end{lem} \begin{proof} Let $x_0\in U$ and $R=d_{x_0}/2$, and denote $B_{\rho}=B_\rho(x_0)$. Let \[K=\left( [w_1]_{C^{\alpha_1}(\mathbb{R}^n)}+ [w_1]_{\beta;U}^{(-\alpha_1)}\right)\left( [w_2]_{C^{\alpha_2}(\mathbb{R}^n)}+ [w_2]_{\beta;U}^{(-\alpha_2)}\right)\,.\] First we bound $|I_s(w_1,w_2)(x_0)|$.
\[ \begin{split} |I_s(w_1,w_2)(x_0)|&\le C\int_{\mathbb{R}^n}\frac{\bigl|w_1(x_0)-w_1(y)\bigr|\bigl|w_2(x_0)-w_2(y)\bigr|}{|x_0-y|^{n+2s}}\,dy \\ &\le C\int_{B_R(0)} \frac{R^{\alpha_1+\alpha_2-2\beta}[w_1]_{\beta;U}^{(-\alpha_1)}[w_2]_{\beta;U}^{(-\alpha_2)}|z|^{2\beta}}{|z|^{n+2s}}\,dz \ +\\ &\qquad\qquad\qquad\quad+ C\int_{\mathbb{R}^n\setminus B_R(0)} \frac{[w_1]_{C^{\alpha_1}(\mathbb{R}^n)}[w_2]_{C^{\alpha_2}(\mathbb{R}^n)} |z|^{\alpha_1+\alpha_2} }{|z|^{n+2s}}\,dz \\ &\le C R^{\alpha_1+\alpha_2-2s}K \,. \end{split} \] Let $x_1,x_2\in B_{R/2}(x_0)\subset B_{2R}(x_0)$. Next, we bound $|I_s(w_1,w_2)(x_1)-I_s(w_1,w_2)(x_2)|$. Let $\eta$ be a smooth cutoff function such that $\eta\equiv 1$ on $B_{1}(0)$ and $\eta\equiv 0$ outside $B_{3/2}(0)$. Define \[\eta^R(x)=\eta\left(\frac{x-x_0}{R}\right)\quad \mbox{ and } \quad \bar w_i= \bigl(w_i-w_i(x_0)\bigr)\eta^R\,,\quad i=1,2\,.\] Note that we have \[ \|\bar w_i\|_{L^\infty(\mathbb{R}^n)}= \|\bar w_i\|_{L^\infty({B_{3R/2}})} \le \left(\frac{3R}{2}\right)^{\alpha_i} [w_i]_{C^{\alpha_i}(\mathbb{R}^n)} \] and \[ \begin{split} [\bar w_i]_{C^{\beta}(\mathbb{R}^n)} &\le C \biggl([w_i]_{C^{\beta}(\overline{B_{3R/2}})}\|\eta^R\|_{L^\infty(\mathbb{R}^n)} + \|w_i-w_i(x_0)\|_{L^\infty(B_{3R/2})} [\eta^R]_{C^{\beta}(\mathbb{R}^n)}\biggr) \\ &\le C R^{\alpha_i-\beta} \biggl( [w_i]_{C^{\alpha_i}(\mathbb{R}^n)}+ [w_i]_{\beta;U}^{(-\alpha_i)}\biggr)\,. \end{split} \] Let \[\varphi_i = w_i- w_i(x_0)- \bar w_i\] and observe that $\varphi_i$ vanishes in $B_R$. Hence, $\varphi_i(x_1)= \varphi_i(x_2)=0$, $i=1,2$.
Next, let us write \[ I_s(w_1,w_2)(x_1)-I_s(w_1,w_2)(x_2) = c_{n,s}\left( J_{11} + J_{12}+J_{21} +J_{22}\right),\] where \[\begin{split} J_{11} = \int_{\mathbb{R}^n} \frac{\bigl(\bar w_1(x_1)-\bar w_1(y)\bigr)\bigl(\bar w_2(x_1)-\bar w_2(y)\bigr)}{|x_1-y|^{n+2s}}&\,dy\\ &\hspace{-20mm}-\int_{\mathbb{R}^n}\frac{\bigl(\bar w_1(x_2)-\bar w_1(y)\bigr)\bigl(\bar w_2(x_2)-\bar w_2(y)\bigr)}{|x_2-y|^{n+2s}}\,dy\,, \end{split}\] \[J_{12} = \int_{\mathbb{R}^n\setminus B_{R}} \frac{-\bigl(\bar w_1(x_1)-\bar w_1(y)\bigr)\varphi_2(y)}{|x_1-y|^{n+2s}}+ \frac{\bigl(\bar w_1(x_2)-\bar w_1(y)\bigr)\varphi_2(y)}{|x_2-y|^{n+2s}}\,dy\,,\] \[J_{21} = \int_{\mathbb{R}^n\setminus B_{R}} \frac{-\bigl(\bar w_2(x_1)-\bar w_2(y)\bigr)\varphi_1(y)}{|x_1-y|^{n+2s}}+ \frac{\bigl(\bar w_2(x_2)-\bar w_2(y)\bigr)\varphi_1(y)}{|x_2-y|^{n+2s}}\,dy\,,\] and \[J_{22} = \int_{\mathbb{R}^n\setminus B_{R}} \frac{\varphi_1(y) \varphi_2(y)}{|x_1-y|^{n+2s}}- \frac{\varphi_1(y)\varphi_2(y)}{|x_2-y|^{n+2s}}\,dy\,.\] We now bound separately each of these terms. {\em Bound of $J_{11}$.} We write $J_{11} = J_{11}^1+J_{11}^2$ where \[J_{11}^1= \int_{\mathbb{R}^n}\frac{\bigl(\bar w_1(x_1)-\bar w_1(x_1+z)-\bar w_1(x_2)+\bar w_1(x_2+z)\bigr) \bigl(\bar w_2(x_1)-\bar w_2(x_1+z)\bigr)}{|z|^{n+2s}}\,dz, \] \[J_{11}^2= \int_{\mathbb{R}^n}\frac{\bigl(\bar w_1(x_2)-\bar w_1(x_2+z)\bigr)\bigl(\bar w_2(x_1)-\bar w_2(x_1+z)-\bar w_2(x_2)+\bar w_2(x_2+z)\bigr) }{|z|^{n+2s}}\,dz\,.\] To bound $|J_{11}^1|$, set $r=|x_1-x_2|$ and proceed as follows \[ \begin{split} |J_{11}^1| &\le \int_{B_r(0)} \frac{R^{\alpha_1-\beta}[w_1]_{\beta;U}^{(-\alpha_1)}|z|^{\beta}R^{\alpha_2-\beta}[w_2]_{\beta;U}^{(-\alpha_2)}|z|^{\beta} }{|z|^{n+2s}}\,dz\,+\\ &\hspace{20mm}+ \int_{\mathbb{R}^n\setminus B_r(0)} \frac{R^{\alpha_1-\beta}[w_1]_{\beta;U}^{(-\alpha_1)} r^{\beta} R^{\alpha_2-\beta}[w_2]_{\beta;U}^{(-\alpha_2)}|z|^{\beta} }{|z|^{n+2s}}\,dz \\ &\le C R^{\alpha_1+\alpha_2-2\beta}r^{2\beta-2s} K\,.
\end{split} \] Similarly, $|J_{11}^2|\le C R^{\alpha_1+\alpha_2-2\beta}r^{2\beta-2s} K$. {\em Bound of $J_{12}$ and $J_{21}$.} We write $J_{12}= J_{12}^1+ J_{12}^2$ where \[ J_{12}^1 = \int_{\mathbb{R}^n \setminus B_R} -\varphi_2(y) \frac{\bar w_1(x_1)-\bar w_1(x_2)}{|x_1-y|^{n+2s}}\,dy \] and \[ J_{12}^2 = \int_{\mathbb{R}^n \setminus B_R} -\varphi_2(y) \bigl(\bar w_1(x_2)-\bar w_1(y)\bigr)\left\{\frac{1}{|x_1-y|^{n+2s}}-\frac{1}{|x_2-y|^{n+2s}}\right\}\,dy\,. \] To bound $|J_{12}^1|$ we recall that $\varphi_2(x_1)=0$ and proceed as follows \[ \begin{split} |J_{12}^1|&\le C\int_{\mathbb{R}^n \setminus B_R} |x_1-y|^{\alpha_2}[\varphi_2]_{C^{\alpha_2}(\mathbb{R}^n)} \frac{R^{\alpha_1-\beta} [w_1]_{\beta;U}^{(-\alpha_1)} r^\beta}{|x_1-y|^{n+2s}}\,dy\\ &\leq CR^{\alpha_1+\alpha_2-\beta-2s}r^{\beta} K \le CR^{\alpha_1+\alpha_2-2\beta}r^{2\beta-2s} K. \end{split} \] We have used that $[\varphi_2]_{C^{\alpha_2}(\mathbb{R}^n)}=[w_2-w_2(x_0)-\bar w_2]_{C^{\alpha_2}(\mathbb{R}^n)}\leq C[w_2]_{C^{\alpha_2}(\mathbb{R}^n)}$, $r\le R$, and $\beta<2s$. To bound $|J_{12}^2|$, let $\Phi(z)=|z|^{-n-2s}$. Note that, for each $\gamma\in(0,1]$, we have \begin{equation}\label{boundPhi}|\Phi(z_1-z)-\Phi(z_2-z)| \le C |z_1-z_2|^{\gamma} |z|^{-n-2s-\gamma}\end{equation} for all $z_1, z_2$ in $B_{R/2}(0)$ and $z\in\mathbb{R}^n\setminus B_R(0)$. Then, using that $\varphi_2(x_2)=0$, \[ \begin{split} |J_{12}^2|&\le C\int_{\mathbb{R}^n \setminus B_R} |x_2-y|^{\alpha_1+\alpha_2}[\varphi_2]_{C^{\alpha_2}(\mathbb{R}^n)}[\bar w_1]_{C^{\alpha_1}(\mathbb{R}^n)} \frac{|x_1-x_2|^{2\beta-2s}}{|x_2-y|^{n+2\beta}}\,dy\\ &\leq C R^{\alpha_1+\alpha_2-2\beta}r^{2\beta-2s}K\,. \end{split} \] This proves that $|J_{12}|\le C R^{\alpha_1+\alpha_2-2\beta}r^{2\beta-2s}K$. Changing the roles of $\alpha_1$ and $\alpha_2$ we obtain the same bound for $|J_{21}|$.
{\em Bound of $J_{22}$.} Using again $\varphi_i(x_1)=0$, $i=1,2$, we write \[J_{22} = \int_{\mathbb{R}^n\setminus B_{R}} \bigl(\varphi_1(x_1) -\varphi_1(y)\bigr)\bigl(\varphi_2(x_1) -\varphi_2(y)\bigr)\left(\frac{1}{|x_1-y|^{n+2s}}-\frac{1} {|x_2-y|^{n+2s}}\right)dy\,.\] Hence, using again \eqref{boundPhi}, \[ \begin{split} |J_{22}|&\le C\int_{\mathbb{R}^n \setminus B_R} |x_1-y|^{\alpha_1+\alpha_2}[\varphi_1]_{C^{\alpha_1}(\mathbb{R}^n)}[\varphi_2]_{C^{\alpha_2}(\mathbb{R}^n)} \frac{|x_1-x_2|^{2\beta-2s}}{|x_1-y|^{n+2\beta}}\,dy\\ &\leq C R^{\alpha_1+\alpha_2-2\beta}r^{2\beta-2s}K\,. \end{split} \] Summarizing, we have proven that for all $x_0$ such that $d_{x_0}=2R$ and for all $x_1,x_2\in B_{R/2}(x_0)$ it holds \[ |I_s(w_1,w_2)(x_0)|\le C R^{\alpha_1+\alpha_2-2s}K \] and \[\frac{|I_s(w_1,w_2)(x_1)-I_s(w_1,w_2)(x_2)|}{|x_1-x_2|^{2\beta-2s}}\le C R^{\alpha_1+\alpha_2-2\beta}K\,.\] This yields \eqref{eq:bound-Iclaim}, as shown in Step 2 in the proof of Lemma \ref{refined-2s-gain}. \end{proof} Next we prove Lemma \ref{lem:bound-I}. \begin{proof}[Proof of Lemma \ref{lem:bound-I}] The distance function $\delta_0$ is $C^{1,1}$ in $\overline{\Omega_{\rho_0}}$ and since $U\subset \Omega_{\rho_0}$ we have $d_x\le \delta_0(x)$ for all $x\in U$. Hence, it follows that \[[\delta_0^s]_{C^{s}(\mathbb{R}^n)}+ [\delta_0^s]_{\beta;U}^{(-s)} \le C(\Omega,\beta)\] for all $\beta\in[s,2]$. Then, applying Lemma \ref{claim:I} with $w_1=w$, $w_2=\delta_0^s$, $\alpha_1=\alpha$, $\alpha_2=s$, and $\beta=s+\alpha$, we obtain \[\|I_s(w,\delta_0^s)\|_{2\alpha;U}^{(s-\alpha)}\le C\biggl( [w]_{C^{\alpha}(\mathbb{R}^n)}+ [w]_{\alpha+s;U}^{(-\alpha)}\biggr)\,,\] and hence \eqref{eq:bound-I} follows. \end{proof} Using Lemma \ref{lem:bound-I} we can now prove Theorem \ref{thm:int-est-v} and Corollary \ref{krylov}. \begin{proof}[Proof of Theorem \ref{thm:int-est-v}] Let $U\subset\subset \Omega_{\rho_0}$.
We prove first that there exist $\alpha\in(0,1)$ and $C$, depending only on $s$ and $\Omega$ ---and not on $U$---, such that \[\|u/\delta^s\|_{\alpha+2s;U}^{(-\alpha)}\le C\left(\|g\|_{L^\infty(\Omega)}+\|g\|_{\alpha;\Omega}^{(s-\alpha)}\right).\] Then, letting $U\uparrow \Omega_{\rho_0}$ we will find that this estimate holds in $\Omega_{\rho_0}$ with the same constant. To prove this, note that by Theorem \ref{thm:v-is-Calpha} we have \[ \|u/\delta^s\|_{C^\alpha(\overline\Omega)} \le C\bigl(s,\Omega\bigr)\|g\|_{L^\infty(\Omega)}\,. \] Recall that $v$ denotes the $C^\alpha(\mathbb{R}^n)$ extension of $u/\delta^s|_\Omega$ given by Lemma \ref{prop:extension-op-E}, which satisfies $\|v\|_{C^\alpha(\mathbb{R}^n)}=\|u/\delta^s\|_{C^\alpha(\overline\Omega)}$. Since $u\in C^{\alpha+2s}(\Omega)$ and $\delta\in C^{1,1}(\Omega_{\rho_0})$, it is clear that $\|v\|_{\alpha+2s;U}^{(-\alpha)} <\infty$ ---it is here where we use that we are in a subdomain $U$ and not in $\Omega_{\rho_0}$. Next we obtain an a priori bound for this seminorm in $U$. To do it, we use the equation \eqref{equaciov} for $v$: \[(-\Delta)^sv= \frac{1}{\delta^s}\biggl( g(x) - v (-\Delta)^s \delta_0^s + I_s(v,\delta_0^s)\biggr) \quad\mbox{in } \Omega_{\rho_0}=\{x\in \Omega\,:\,\delta(x)<\rho_0\}\,.\] Now we will see that this equation and Lemma \ref{refined-2s-gain} lead to an a priori bound for $\|v\|_{\alpha+2s;U}^{(-\alpha)}$. To apply Lemma \ref{refined-2s-gain}, we need to bound $\|(-\Delta)^s v\|_{\alpha;U}^{(2s-\alpha)}$. Let us examine the three terms on the right hand side of the equation.
{\em First term.} Using that \[d_x= {\rm dist}(x,\partial U)< {\rm dist}(x,\partial \Omega)=\delta(x)\] for all $x\in U$ we obtain that, for all $\alpha\le s$, \[\|\delta^{-s}g\|_{\alpha;U}^{(2s-\alpha)}\le C\bigl(s,\Omega\bigr)\|g\|_{\alpha;\Omega}^{(s-\alpha)}\,.\] {\em Second term.} We know from Lemma \ref{lapsdeltas} that, for $\alpha\le \min\{s,1-s\}$, \[\|(-\Delta)^s \delta_0^s\|_{C^{\alpha}(\overline{\Omega_{\rho_0}})} \le C\bigl(s, \Omega\bigr)\,.\] Hence, \[ \begin{split} \|\delta^{-s}v(-\Delta)^s \delta_0^s\|_{\alpha;U}^{(2s-\alpha)} &\le {\rm diam} (\Omega)^s \|\delta^{-s}v(-\Delta)^s \delta_0^s\|_{\alpha;U}^{(s-\alpha)} \le C\bigl(s, \Omega\bigr) \|v\|_{C^\alpha(\mathbb{R}^n)}\\ &\le C\bigl(s,\Omega\bigr)\|g\|_{L^\infty(\Omega)}\,. \end{split} \] {\em Third term.} From Lemma \ref{lem:bound-I} we know that \[ \|I_s(v,\delta_0^s)\|_{\alpha;U}^{(s-\alpha)}\le C(n,s,\alpha)\biggl( \|v\|_{C^\alpha(\mathbb{R}^n)}+ [v]_{\alpha+s;U}^{(-\alpha)}\biggr)\,, \] and hence \[ \begin{split} \|\delta^{-s}I_s(v,\delta_0^s)\|_{\alpha;U}^{(2s-\alpha)}&\le C(n,s,\Omega,\alpha)\biggl( \|v\|_{C^\alpha(\mathbb{R}^n)}+ [v]_{\alpha+s;U}^{(-\alpha)}\biggr) \\ &\le C(n,s,\Omega,\alpha,\varepsilon_0)\|v\|_{C^\alpha(\mathbb{R}^n)}+ \varepsilon_0 \|v\|_{\alpha+2s;U}^{(-\alpha)} \end{split} \] for each $\varepsilon_0>0$. The last inequality is by standard interpolation.
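In the form used here, the standard interpolation inequality for the weighted norms of Definition \ref{definorm} reads: for every $\varepsilon_0>0$ there exists $C(\varepsilon_0)$, depending also on $n$, $s$, and $\alpha$, such that \[ [v]_{\alpha+s;U}^{(-\alpha)} \le \varepsilon_0\, \|v\|_{\alpha+2s;U}^{(-\alpha)} + C(\varepsilon_0)\,\|v\|_{L^\infty(U)}\,; \] combined with $\|v\|_{L^\infty(U)}\le \|v\|_{C^\alpha(\mathbb{R}^n)}$, this gives the last inequality.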
Now, using Lemma \ref{refined-2s-gain} we deduce \[\begin{split} \|v\|_{\alpha+2s;U}^{(-\alpha)}&\le C\left(\|v\|_{C^\alpha(\mathbb{R}^n)} + \|(-\Delta)^s v\|_{\alpha;U}^{(2s-\alpha)}\right)\\ &\hspace{-8mm}\leq C\left(\|v\|_{C^\alpha(\mathbb{R}^n)}+\|\delta^{-s}g\|_{\alpha;U}^{(2s-\alpha)}+\|\delta^{-s}v(-\Delta)^s \delta_0^s\|_{\alpha;U}^{(2s-\alpha)}+\|I_s(v,\delta_0^s)\|_{\alpha;U}^{(s-\alpha)}\right)\\ &\hspace{-8mm}\le C(s,\Omega,\alpha,\varepsilon_0)\left(\|g\|_{L^\infty(\Omega)}+\|g\|_{\alpha;\Omega}^{(s-\alpha)}\right)+ C\varepsilon_0 \|v\|_{\alpha+2s;U}^{(-\alpha)}, \end{split}\] and choosing $\varepsilon_0$ small enough we obtain \[\|v\|_{\alpha+2s;U}^{(-\alpha)}\le C\left(\|g\|_{L^\infty(\Omega)}+\|g\|_{\alpha;\Omega}^{(s-\alpha)}\right).\] Furthermore, letting $U\uparrow\Omega_{\rho_0}$ we obtain that the same estimate holds with $U$ replaced by $\Omega_{\rho_0}$. Finally, in $\Omega\setminus\Omega_{\rho_0}$ we have that $u$ is $C^{\alpha+2s}$ and $\delta^s$ is uniformly positive and $C^{0,1}$. Thus, we have $u/\delta^s\in C^\gamma(\Omega\setminus\Omega_{\rho_0})$, where $\gamma=\min\{1,\alpha+2s\}$, and the theorem follows. \end{proof} Next we give the: \begin{proof}[Proof of Corollary \ref{krylov}] (a) It follows from Proposition \ref{prop:u-is-Cs} that $u\in C^s(\mathbb{R}^n)$. The interior estimate follows by applying repeatedly Proposition \ref{prop:int-est-u}. (b) It follows from Theorem \ref{thm:v-is-Calpha} that $u/\delta^s|_\Omega\in C^\alpha(\overline\Omega)$. The interior estimate follows from Theorem \ref{thm:int-est-v}. \end{proof} The following two lemmas are closely related to Lemma \ref{claim:I} and are needed in \cite{RS} and in Remark \ref{remviscosity} of this paper. \begin{lem}\label{cosaiscalpha} Let $U$ be an open domain and $\alpha$ and $\beta$ be such that $\alpha\le s<\beta$ and $\beta-s$ is not an integer. Let $k$ be an integer such that $\beta= k+\beta'$ with $\beta'\in(0,1]$.
Then, \begin{equation}\label{eq:bound-lap-s/2} [(-\Delta)^{s/2}w]_{\beta-s;U}^{(s-\alpha)}\le C\bigl( \|w\|_{C^\alpha(\mathbb{R}^n)}+ \|w\|_{\beta;U}^{(-\alpha)}\bigr)\,, \end{equation} for all $w$ with finite right hand side. The constant $C$ depends only on $n$, $s$, $\alpha$, and $\beta$. \end{lem} \begin{proof} Let $x_0\in U$ and $R=d_{x_0}/2$, and denote $B_{\rho}=B_\rho(x_0)$. Let $\eta$ be a smooth cutoff function such that $\eta\equiv 1$ on $B_{1}(0)$ and $\eta\equiv 0$ outside $B_{3/2}(0)$. Define \[\eta^R(x)=\eta\left(\frac{x-x_0}{R}\right)\quad \mbox{ and } \quad \bar w= \bigl(w-w(x_0)\bigr)\eta^R\,.\] Note that we have \[ \|\bar w\|_{L^\infty(\mathbb{R}^n)}= \|\bar w\|_{L^\infty(\overline{B_{3R/2}})} \le \left(\frac{3R}{2}\right)^\alpha [w]_{C^\alpha(\mathbb{R}^n)}\,. \] In addition, for each $1\le l\le k$ \[ \begin{split} \|D^l \bar w\|_{L^\infty(\mathbb{R}^n)} &\le C\sum_{m=0}^l \|D^m (w-w(x_0)) D^{l-m} \eta^R\|_{L^\infty(\overline{B_{3R/2}})} \\ &\le C R^{-l+\alpha} \left( [w]_{C^\alpha(\mathbb{R}^n)}+\sum_{m=1}^{l} [w]_{m;U}^{(-\alpha)} \right). \end{split} \] Hence, by interpolation, for each $0\le l\le k-1$ \[ \|D^l \bar w\|_{C^{l+\beta'}(\mathbb{R}^n)}\le C R^{-l-\beta'+\alpha} \left( [w]_{C^\alpha(\mathbb{R}^n)}+\sum_{m=1}^{l} [w]_{m;U}^{(-\alpha)} \right)\,, \] and therefore \begin{equation}\label{Dk-bar-w} [D^k \bar w]_{C^{\beta'}(\mathbb{R}^n)} \le C R^{-\beta+\alpha}\|w\|_{\beta;U}^{(-\alpha)}\,. \end{equation} Let $\varphi = w- w(x_0)- \bar w$ and observe that $\varphi$ vanishes in $B_R$; in particular, $\varphi(x_1)= \varphi(x_2)=0$ for any $x_1,x_2\in B_{R/2}(x_0)$. Next we proceed differently if $\beta'>s$ or if $\beta'< s$. This is because $C^{\beta-s}$ equals either $C^{k,\beta'-s}$ or $C^{k-1,1+\beta'-s}$. {\em Case 1.} Assume $\beta'> s$. Let $x_1,x_2\in B_{R/2}(x_0)\subset B_{2R}(x_0)$. We want to bound $|D^k (-\Delta)^{s/2} w(x_1)- D^k (-\Delta)^{s/2} w(x_2)|$, where $D^k$ denotes any $k$-th derivative with respect to a fixed multiindex.
We have \[ (-\Delta)^{s/2} w = (-\Delta)^{s/2} \bar w + (-\Delta)^{s/2} \varphi \quad \mbox{in }B_{R/2}\,.\] Then, \[D^k(-\Delta)^{s/2}w(x_1) - D^k(-\Delta)^{s/2}w(x_2) = c_{n,\frac s2}(J_1 + J_2)\,,\] where \[J_1 = \int_{\mathbb{R}^n} \left\{\frac{ D^k \bar w(x_1)- D^k \bar w(y)}{|x_1-y|^{n+s}}- \frac{D^k \bar w(x_2)- D^k \bar w(y)}{|x_2-y|^{n+s}}\right\}dy\,\] and \[J_2 = D^k\int_{\mathbb{R}^n\setminus B_{R}} \frac{-\varphi(y)}{|x_1-y|^{n+s}}\,dy- D^k\int_{\mathbb{R}^n\setminus B_{R}} \frac{ -\varphi(y)}{|x_2-y|^{n+s}}\,dy\,.\] To bound $|J_{1}|$ we proceed as follows. Let $r=|x_1-x_2|$. Then, using \eqref{Dk-bar-w}, \[ \begin{split} |J_{1}|&= \biggl|\int_{\mathbb{R}^n}\frac{D^k\bar w(x_1)- D^k\bar w(x_1+z)-D^k\bar w(x_2)+D^k\bar w(x_2+z)}{|z|^{n+s}}\,dz \biggr| \\ &\le \int_{B_r} \frac{R^{\alpha-\beta}\|w\|_{\beta;U}^{(-\alpha)} |z|^{\beta'} }{ |z|^{n+s} }\,dz + \int_{\mathbb{R}^n\setminus B_r} \frac{R^{\alpha-\beta}\|w\|_{\beta;U}^{(-\alpha)} r^{\beta'}}{|z|^{n+s}}\,dz \\ &\le CR^{\alpha-\beta}r^{\beta'-s}\|w\|_{\beta;U}^{(-\alpha)}\,. \end{split} \] Let us bound now $|J_2|$. Writing $\Phi(z)= |z|^{-n-s}$ and using that $\varphi(x_0)=0$, \[ \begin{split} |J_{2}|&= \biggl|\int_{ \mathbb{R}^n\setminus B_{R} } \varphi(y) \bigl( D^k \Phi (x_1-y)- D^k\Phi(x_2-y) \bigr) \,dy \biggr| \\ &\le C\int_{\mathbb{R}^n \setminus B_R} |x_0-y|^\alpha [w]_{C^{\alpha}(\mathbb{R}^n)} \frac{|x_1-x_2|^{\beta'-s}}{|x_0-y|^{n+\beta}}\,dy \\ &\le C R^{\alpha-\beta} r^{\beta'-s} [w]_{C^{\alpha}(\mathbb{R}^n)}, \end{split} \] where we have used that \[|D^k\Phi(z_1-z)-D^k \Phi(z_2-z)| \le C |z_1-z_2|^{\beta'-s} |z|^{-n-\beta}\] for all $z_1, z_2$ in $B_{R/2}(0)$ and $z\in\mathbb{R}^n\setminus B_R$. Hence, we have proved that \[[(-\Delta)^{s/2}w]_{C^{\beta-s}(\overline{B_R(x_0)})}\leq CR^{\alpha-\beta}\|w\|_{\beta;U}^{(-\alpha)}.\] {\em Case 2.} Assume $\beta'< s$. Let $x_1,x_2\in B_{R/2}(x_0)\subset B_{2R}(x_0)$. 
We want to bound $|D^{k-1} (-\Delta)^{s/2} w(x_1)- D^{k-1} (-\Delta)^{s/2} w(x_2)|$. We proceed as above but we now use \[ \begin{split} |D^{k-1}\bar w(x_1)&- D^{k-1}\bar w(x_1+y)-D^{k-1}\bar w(x_2)+D^{k-1}\bar w(x_2+y)|\le \\ &\le \left|D^{k}\bar w(x_1)-D^{k}\bar w(x_2)\right||y| + |y|^{1+\beta'} \|\bar w\|_{C^{\beta}(\mathbb{R}^n)} \\ &\le \bigl(|x_1-x_2|^{\beta'} |y|+ |y|^{1+\beta'}\bigr) R^{\alpha-\beta}\|w\|_{\beta;U}^{(-\alpha)} \end{split} \] in $B_r$, and \[ \begin{split} |D^{k-1}\bar w(x_1)&- D^{k-1}\bar w(x_1+y)-D^{k-1}\bar w(x_2)+D^{k-1}\bar w(x_2+y)|\le \\ &\le \left|D^{k}\bar w(x_1)-D^{k}\bar w(x_1+y)\right| |x_1-x_2| + |x_1-x_2|^{1+\beta'} \|\bar w\|_{C^{\beta}(\mathbb{R}^n)} \\ &\le \bigl(|y|^{\beta'} |x_1-x_2|+ |x_1-x_2|^{1+\beta'}\bigr) R^{\alpha-\beta}\|w\|_{\beta;U}^{(-\alpha)} \end{split} \] in $\mathbb{R}^n\backslash B_r$. Then, as in Case 1 we obtain $[(-\Delta)^{s/2}w]_{C^{\beta-s}(\overline{B_R(x_0)})}\leq CR^{\alpha-\beta}\|w\|_{\beta;U}^{(-\alpha)}$. This yields \eqref{eq:bound-lap-s/2}, as in Step 2 of Lemma \ref{refined-2s-gain}. \end{proof} Next lemma is a variation of the previous one and gives a pointwise bound for $(-\Delta)^{s/2} w$. It is used in Remark \ref{remviscosity}. \begin{lem}\label{remlog} Let $U\subset\mathbb{R}^n$ be an open set, and let $\beta>s$. Then, for all $x\in U$ \[|(-\Delta)^{s/2}w(x)|\leq C(\|w\|_{C^s(\mathbb{R}^n)}+\|w\|_{\beta;U}^{(-s)})\biggl(1+|\log{\rm dist}(x,\partial U)|\biggr),\] whenever $w$ has finite right hand side. The constant $C$ depends only on $n$, $s$, and $\beta$. \end{lem} \begin{proof} We may assume $\beta<1$. Let $x_0\in U$ and $R=d_{x_0}/2$, and define $\bar w$ and $\varphi$ as in the proof of the previous lemma. 
Then, \[ (-\Delta)^{s/2} w(x_0) =(-\Delta)^{s/2} \bar w(x_0) +(-\Delta)^{s/2} \varphi(x_0) = c_{n,\frac s2} (J_1+J_2),\] where \[ J_1= \int_{\mathbb{R}^n} \frac{\bar w(x_0)-\bar w(x_0+z)}{|z|^{n+s}}\,dz \quad \mbox{and}\quad J_2= \int_{\mathbb{R}^n\setminus B_R} \frac{-\varphi(x_0+z)}{|z|^{n+s}}\,dz. \] With similar arguments as in the previous proof we readily obtain $|J_1|\le C (1+|\log R|) \|w\|_{\beta;U}^{(-s)}$ and $|J_2|\le C (1+|\log R|) \|w\|_{C^s(\mathbb{R}^n)}$. \end{proof} \appendix \section{Basic tools and barriers} In this appendix we prove Proposition \ref{prop:solution} and Lemmas \ref{prop:subsolution} and \ref{prop:supersolution}. Proposition \ref{prop:solution} is well-known (see \cite{CRS}), but for the sake of completeness we sketch here a proof that uses the Caffarelli-Silvestre extension problem \cite{CSext}. \begin{proof}[Proof of Proposition \ref{prop:solution}] Let $(x,y)$ and $(r,\theta)$ be Cartesian and polar coordinates of the plane. The coordinate $\theta\in(-\pi,\pi)$ is taken so that $\{\theta=0\}$ on $\{y=0,\ x>0\}$. Use that the function $r^s \bigl(\cos(\theta/2)\bigr)^{2s}$ is a solution in the half-plane $\{y>0\}$ to the extension problem \cite{CSext}, \[{\rm div}(y^{1-2s}\nabla u) = 0 \quad \mbox{ in} \ \{y>0\},\] and that its trace on $y=0$ is $\varphi_0$. \end{proof} The fractional Kelvin transform has been studied thoroughly in \cite{Bog}. \begin{prop}[Fractional Kelvin transform]\label{prop:frac-kelvin} Let $u$ be a smooth bounded function in $\mathbb{R}^n\setminus\{0\}$. Let $x\mapsto x^*= x/|x|^2$ be the inversion with respect to the unit sphere. Define $u^*(x)= |x|^{2s-n} u(x^*)$. Then, \begin{equation}\label{eq:frac-kelvin} (-\Delta)^s u^*(x) = |x|^{-2s-n} (-\Delta)^s u(x^*)\,, \end{equation} for all $x\neq 0$. \end{prop} \begin{proof} Let $x_0\in\mathbb{R}^n\setminus\{0\}$. By subtracting a constant from $u$ and using that $(-\Delta)^s |x|^{2s-n}=0$ for $x\neq 0$, we may assume $u^*(x_0)= u(x_0^*)=0$.
Recall that \[|x-y| = \frac{|x^*-y^*|}{|x^*||y^*|}\,.\] Thus, using the change of variables $z=y^*= y/|y|^2$, \[ \begin{split} (-\Delta)^s u^*(x_0) &= c_{n,s}\ \text{PV} \int_{\mathbb{R}^n} \frac{- u^*(y)}{|x_0-y|^{n+2s}}\,dy\\ &= c_{n,s}\ \text{PV} \int_{\mathbb{R}^n} \frac{- |y|^{2s-n} u(y^*)}{|x_0^*-y^*|^{n+2s}} |x_0^*|^{n+2s}|y^*|^{n+2s}\,dy\\ &= c_{n,s} |x_0|^{-n-2s}\ \text{PV} \int_{\mathbb{R}^n} \frac{- |z|^{n-2s} u(z)}{|x_0^*-z|^{n+2s}} |z|^{n+2s}\,|z|^{-2n} dz\\ &= c_{n,s} |x_0|^{-n-2s}\ \text{PV} \int_{\mathbb{R}^n} \frac{- u(z)}{|x_0^*-z|^{n+2s}} dz\\ &= |x_0|^{-n-2s} (-\Delta)^s u(x_0^*)\,. \end{split} \] \end{proof} Now, using Proposition \ref{prop:frac-kelvin} we prove Lemma \ref{prop:supersolution}. \begin{proof}[Proof of Lemma \ref{prop:supersolution}] Let us denote by $\psi$ (instead of $u$) the explicit solution \eqref{explicitsolution2} to problem \eqref{explicitsolution1} in $B_1$, which satisfies \begin{equation}\label{supersol-in-ball} \begin{cases} (-\Delta)^s\psi = 1\quad&\mbox{in }B_1\\ \psi\equiv 0 &\mbox{in }\mathbb{R}^n \setminus B_1\\ 0 <\psi< C(1-|x|)^s &\mbox{in }B_1\,.\\ \end{cases} \end{equation} From $\psi$, the supersolution $\varphi_1$ in the exterior of the ball is readily built using the fractional Kelvin transform. Indeed, let $\xi$ be a radial smooth function satisfying $\xi \equiv 1$ in $\mathbb{R}^n\setminus B_5$ and $\xi\equiv 0$ in $B_4$, and define $\varphi_1$ by \begin{equation}\label{varphi1} \varphi_1 (x) = C |x|^{2s-n}\psi(1-|x|^{-1}) + \xi(x)\,. \end{equation} Observe that $(-\Delta)^s \xi \ge - C_2$ in $B_4$, for some $C_2>0$. Hence, if we take $C\ge 4^{2s+n}(1+C_2)$, using \eqref{eq:frac-kelvin}, we have \[(-\Delta)^s\varphi_1 (x) \ge C |x|^{-2s-n} + (-\Delta)^s \xi(x) \ge 1 \quad \mbox{in } B_4 \,.\] Now it is immediate to verify that $\varphi_1$ satisfies \eqref{eq:propsupersol} for some $c_1>0$. 
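As an aside, the change of variables in the proof of Proposition \ref{prop:frac-kelvin} above rests on the classical inversion identity $|x-y| = |x^*-y^*|/(|x^*||y^*|)$, which is easy to confirm numerically; a minimal sketch (the helper names are ours):

```python
import math
import random

def inv(x):
    # inversion with respect to the unit sphere: x* = x / |x|^2
    n2 = sum(c * c for c in x)
    return tuple(c / n2 for c in x)

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def dist(x, y):
    return norm(tuple(a - b for a, b in zip(x, y)))

random.seed(1)
for _ in range(200):
    x = tuple(random.uniform(-2, 2) for _ in range(3))
    y = tuple(random.uniform(-2, 2) for _ in range(3))
    lhs = dist(x, y)
    rhs = dist(inv(x), inv(y)) / (norm(inv(x)) * norm(inv(y)))
    assert abs(lhs - rhs) < 1e-8 * (1 + lhs + rhs)
```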
To see that $\varphi_1\in H^s_{\rm loc}(\mathbb{R}^n)$ we observe that from \eqref{varphi1} it follows \[ |\nabla \varphi_1(x)|\le C(|x|-1)^{s-1} \quad \mbox{in }\mathbb{R}^n \setminus B_1\] and hence, using Lemma \ref{remlog}, we have $(-\Delta)^{s/2}\varphi_1\in L^p_{\rm loc}(\mathbb{R}^n)$ for all $p<\infty$. \end{proof} Next we prove Lemma \ref{prop:subsolution}. \begin{proof}[Proof of Lemma \ref{prop:subsolution}] We define \[\psi_1(x)= (1-|x|^2)^s\chi_{B_1}(x)\,.\] Since \eqref{explicitsolution2} is the solution of problem \eqref{explicitsolution1}, it follows that $(-\Delta)^s\psi_1$ is bounded in $B_1$. Hence, for $C>0$ large enough the function $\psi= \psi_1 + C\chi_{\overline{B_{1/4}}}$ satisfies $(-\Delta)^s \psi\le 0$ in $B_1\setminus \overline{B_{1/4}}$ and it can be used as a viscosity subsolution. Note that $\psi$ is upper semicontinuous, as required for viscosity subsolutions, and it satisfies pointwise (if $C$ is large enough) \[ \begin{cases} \psi \equiv 0 \quad &\mbox{in }\mathbb{R}^n\setminus B_1 \\ (-\Delta)^s \psi \le 0 &\mbox{in }B_1\setminus \overline{B_{1/4}}\\ \psi = 1 &\mbox{in }\overline{B_{1/4}}\\ \psi (x)\ge c(1-|x|)^s & \mbox{in } B_1. \end{cases} \] If we want a subsolution which is continuous and belongs to $H^s(\mathbb{R}^n)$, we may construct it as follows. We consider the viscosity solution (which is also a weak solution by Remark \ref{remviscosity}) of \[ \begin{cases} (-\Delta)^s\varphi_2 = 0 &\mbox{in }B_1\setminus B_{1/4}\\ \varphi_2 \equiv 0 \quad &\mbox{in }\mathbb{R}^n\setminus B_1 \\ \varphi_2 = 1 &\mbox{in }\overline{B_{1/4}}. \end{cases} \] Using $\psi$ as a lower barrier, it is now easy to prove that $\varphi_2$ satisfies \eqref{eq:propsubsol} for some constant $c_2>0$. \end{proof} \end{document}
\begin{document} \author[Luc Deleaval]{L. Deleaval} \address{Laboratoire d'Analyse et de Math\'ematiques appliqu\'ees \\ Universit\'e Paris-Est Marne-la-Vall\'ee \\ France} \email{[email protected]} \author[N. Demni]{N. Demni} \address{IRMAR, Universit\'e de Rennes 1\\ Campus de Beaulieu\\ 35042 Rennes cedex\\ France} \email{[email protected]} \subjclass[2010]{33C45; 33C52; 33C65; 44A20} \keywords{Generalized Bessel function; Dihedral groups; Confluent Horn functions; Laplace-type integral representation.} \title{Generalized Bessel functions of dihedral-type: expression as a series of confluent Horn functions and Laplace-type integral representation} \begin{abstract} In the first part of this paper, we express the generalized Bessel function associated with dihedral systems and a constant multiplicity function as an infinite series of confluent Horn functions. The key ingredient leading to this expression is an extension of an identity involving Gegenbauer polynomials proved in a previous paper by the authors, together with the use of the Poisson kernel for these polynomials. In particular, we derive an integral representation of this generalized Bessel function over the standard simplex. The second part of this paper is concerned with even dihedral systems and boundary values of one of the variables. Still assuming that the multiplicity function is constant, we obtain a Laplace-type integral representation of the corresponding generalized Bessel function, which extends to all even dihedral systems a special instance of the Laplace-type integral representation proved in \cite{Amr-Dem}. \end{abstract} \section{Introduction} Generalized Bessel functions associated with dihedral groups have received considerable attention in recent times. This is mainly due to their occurrence in several branches of mathematics, such as harmonic analysis or representation theory.
Indeed, beyond their most natural setting of being the symmetric counterpart of the so-called Dunkl kernel, they turn out to be surprisingly connected to the Laguerre semi-group constructed in \cite{BKO} and in the particular case of the square-preserving dihedral group, to certain representations of the indefinite orthogonal group of rank two \cite{Kob-Man}. Besides, special instances of them are Laplace transforms of Duistermaat-Heckman measures which were introduced in \cite{BBO} for finite Coxeter groups by means of generalized Pitman transforms and given there a very interesting probabilistic interpretation. One of the challenging problems concerning generalized Bessel functions associated with dihedral groups is to find relatively simple formulas in terms of well-known special functions, and to potentially obtain integral representations of Laplace-type for them. Results towards this goal have been recently obtained in a series of papers (\cite{Amr-Dem,CDBL,DDY,Del-Dem,Dem2,Xu}). In particular, the identity recalled below in \eqref{IdGeg} and proved in \cite{Del-Dem} shows that, if one of the two variables of the generalized Bessel function lies on the boundary of the dihedral wedge and if the multiplicity function is constant, then it may be expressed through the confluent Horn function $\Phi_2$. In this paper, we shall pursue this line of research and prove, only assuming that the multiplicity function is constant, that the generalized Bessel function is given by an infinite series of confluent Horn functions. Our main tool is an extension of the aforementioned identity to the case of two Gegenbauer polynomials, and its proof appeals to their Poisson kernel. As a by-product, we obtain an integral representation over the standard simplex in a Euclidean space whose dimension is half of the order of the underlying dihedral group.
Assuming further that the latter is even and that one of the variables lies on the boundary of the dihedral wedge, we shall derive a Laplace-type integral representation of the generalized Bessel function involving the standard simplex of a Euclidean space of smaller dimension (one quarter of the order of the underlying dihedral group). To the best of our knowledge, this kind of integral representation has only been derived for the root systems of type $A$ (\cite{Amr}) or $B_2$ (\cite{Amr-Dem}), the latter being a particular dihedral root system. In this respect, it is worth noting that Theorem 1 in \cite{Amr-Dem} motivates the extension of our Laplace-type integral representation to all even dihedral groups without any assumption on either the multiplicity function or the locations of the variables. The paper is organized as follows. In the next section, we recall some basic facts on dihedral groups and some definitions of special functions we will need later on. In section three, we express the generalized Bessel function associated with a given dihedral group and a constant multiplicity function as a series of confluent Horn functions, from which we deduce an integral representation for it over a simplex. In the last section, we prove a Laplace-type integral representation for the generalized Bessel function associated with even dihedral groups, assuming in addition that one of its variables lies on the boundary of the dihedral wedge. \section{Background and notations} In this section, we recall the expression of the generalized Bessel functions of dihedral-type and introduce some special functions occurring in the sequel. For the interested reader, we refer to \cite{Dun-Xu} for a good account on general root systems and Dunkl operators to which generalized Bessel functions are canonically associated and to \cite{AAR} for the definitions and the properties of one-variable special functions occurring below.
The dihedral group $\mathcal{D}_2(n), n \geq 3$, consists of orthogonal transformations leaving invariant a regular $n$-sided polygon centered at the origin. As a finite reflection group, $\mathcal{D}_2(n)$ corresponds to the dihedral root system \begin{equation*} \mathcal I_2(n) := \{\pm i \mathrm{e}^{i\pi l/n},\, 1 \leq l \leq n\}. \end{equation*} Choosing the positive root system \begin{equation*} \{- i \mathrm{e}^{i\pi l/n},\, 1 \leq l \leq n\}, \end{equation*} the simple system consists of the vectors $\{\alpha_1 :=i, \alpha_2 := -i\mathrm{e}^{i\pi/n}\}$ so that the positive Weyl chamber $C$ is the dihedral wedge \begin{equation*} C := \bigl\{(r, \theta), \, r > 0, \, 0 < \theta < \pi/n\bigr\}. \end{equation*} The dihedral group acts on its root system in a natural way and this action admits two orbits when $n := 2p, p\geq 2,$ (the roots forming the diagonals and those forming the lines joining the midpoints of the polygon) while there is only one orbit when $n$ is odd. Consequently, a multiplicity function $k$ (a function which is constant on each conjugacy class) on $\mathcal I_2(n)$ takes two values $(k_0,k_1)$ if $n$ is even and only one value, again denoted by $k$, otherwise. For our later purposes, we only consider nonnegative multiplicity values though the formulas below extend to larger domains in the complex plane.
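These structural facts are easy to check by direct computation. The following sketch (in Python, with the plane modeled by complex numbers; all function names are ours) builds $\mathcal{D}_2(n)$ from the two simple reflections, and verifies that the group has order $2n$ and that, for odd $n$, it acts transitively on the root system:

```python
import cmath
import itertools

n = 5  # an odd dihedral group, so the action on the roots has a single orbit

# roots of I_2(n): +/- i e^{i pi l / n}, l = 1, ..., n, as complex numbers
roots = [s * 1j * cmath.exp(1j * cmath.pi * l / n)
         for l in range(1, n + 1) for s in (1, -1)]

def reflect(alpha):
    """Reflection through the line orthogonal to the vector alpha."""
    a = alpha / abs(alpha)
    return lambda x: x - 2 * (x * a.conjugate()).real * a

def key(f):
    # identify a planar isometry by its rounded images of 1 and i
    return (round(f(1).real, 9), round(f(1).imag, 9),
            round(f(1j).real, 9), round(f(1j).imag, 9))

# close the two simple reflections under composition
simple = [reflect(1j), reflect(-1j * cmath.exp(1j * cmath.pi / n))]
group = {key(f): f for f in simple}
changed = True
while changed:
    changed = False
    for f, g in itertools.product(list(group.values()), repeat=2):
        h = (lambda f, g: (lambda x: f(g(x))))(f, g)
        if key(h) not in group:
            group[key(h)] = h
            changed = True

assert len(group) == 2 * n  # |D_2(n)| = 2n

# orbit of a root under the group action: all 2n roots when n is odd
orbit = {(round(g(roots[0]).real, 9), round(g(roots[0]).imag, 9))
         for g in group.values()}
assert len(orbit) == 2 * n
```

For even $n$ the same orbit computation returns only $n$ of the $2n$ roots, reflecting the two conjugacy classes of reflections described above.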
Let $n=2p$, $p \geq 2$. Then the generalized Bessel function associated with the even dihedral group $\mathcal{D}_2(n)$ and the multiplicity function $(k_0,k_1)$, which we shall denote by $D_k^{\mathcal{D}_2(n)}$, admits the following expansion in polar coordinates (\cite{Dem0}) \begin{equation*} D_k^{\mathcal{D}_2(n)}(x,y) = c_{n,k_0,k_1}\biggl(\frac{2}{r\rho}\biggr)^{\gamma} \sum_{j \geq 0}{\it I}_{2jp +\gamma}(\rho r)p_j^{(l_1, l_0)}\bigl(\cos(2p\phi)\bigr)p_j^{(l_1, l_0)}\bigl(\cos(2p\theta)\bigr), \end{equation*} where $x = \rho e^{i\phi}, y = re^{i\theta}$ belong to $\overline{C}$ and \begin{itemize} \item $\displaystyle{\gamma = p(k_0+k_1),}$ \item $\displaystyle{p_j^{(l_1, l_0)}}$ is the $j$-th orthonormal Jacobi polynomial with parameters \[ l_i = k_i - (1/2), i \in \{0,1\},\] \item ${\it I}_{\nu}$ is the modified Bessel function of the first kind \begin{equation}\label{Bessel} {\it I}_{\nu}(v) = \sum_{m \geq 0} \frac{1}{m!\Gamma(\nu+m+1)} \biggl(\frac{v}{2}\biggr)^{2m + \nu}, \end{equation} \item $c_{n,k_0,k_1}$ is a normalizing constant subject to $\displaystyle{D^{\mathcal{D}_2(n)}_k(0,y) = 1}$ for all $y \in \overline{C}$. \end{itemize} Note in passing that $D_k^{\mathcal{D}_2(n)}$ is $\mathcal{D}_2(n)$-invariant and this invariance manifests itself by the presence of $\cos(2p\theta)$ in the argument of Jacobi polynomials. In the particular case $k_0= k_1 = k$, we can rewrite $D_k^{\mathcal{D}_2(n)}$ by means of the non-orthonormal Gegenbauer polynomials $C_j^{(k)}$ as: \begin{equation}\label{Eq1} D_k^{\mathcal{D}_2(n)}(x,y) = \frac{c_{n,k}}{B(k+1/2,1/2)}\biggl(\frac{2}{r\rho}\biggr)^{\gamma} \sum_{j \geq 0}{\it I}_{2jp +\gamma}(\rho r)(j+k) \frac{C_j^{(k)}\bigl(\cos(2p\phi)\bigr)C_j^{(k)}\bigl(\cos(2p\theta)\bigr)}{C_j^{(k)}(1)}, \end{equation} where \begin{equation*} C_j^{(k)}(1) = \frac{(2k)_j}{j!}, \quad c_{n,k}:=c_{n,k,k}, \end{equation*} and \begin{equation*} B(u,v) = \frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)} \end{equation*} is the Beta function.
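For numerical experimentation with \eqref{Eq1}, both building blocks are straightforward to implement. The sketch below (our own helper names; the truncation orders are ad hoc) computes $C_j^{(k)}$ by the standard three-term recurrence and $I_\nu$ by the series \eqref{Bessel}, and checks the normalization $C_j^{(k)}(1)=(2k)_j/j!$ together with a classical closed form of $I_{1/2}$:

```python
import math

def gegenbauer(j, k, z):
    """C_j^{(k)}(z) via the three-term recurrence
    m C_m = 2z(m+k-1) C_{m-1} - (m+2k-2) C_{m-2}."""
    c_prev, c_curr = 1.0, 2.0 * k * z
    if j == 0:
        return c_prev
    for m in range(2, j + 1):
        c_prev, c_curr = c_curr, (2.0 * z * (m + k - 1) * c_curr
                                  - (m + 2.0 * k - 2) * c_prev) / m
    return c_curr

def bessel_i(nu, v, terms=60):
    """Modified Bessel function I_nu(v) by its power series (nu > -1)."""
    return sum((v / 2.0) ** (2 * m + nu)
               / (math.factorial(m) * math.gamma(nu + m + 1))
               for m in range(terms))

def pochhammer(x, j):
    return math.gamma(x + j) / math.gamma(x)

k = 0.7
for j in range(8):
    # normalization C_j^{(k)}(1) = (2k)_j / j!
    assert abs(gegenbauer(j, k, 1.0)
               - pochhammer(2 * k, j) / math.factorial(j)) < 1e-9

# I_{1/2}(v) = sqrt(2/(pi v)) sinh(v)
v = 1.3
assert abs(bessel_i(0.5, v)
           - math.sqrt(2 / (math.pi * v)) * math.sinh(v)) < 1e-12
```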
For odd dihedral groups, the corresponding generalized Bessel function is also given by \eqref{Eq1} where we should identify $2p$ with $n$ so that $\gamma = nk$ \footnote{The second formula displayed in \cite{Dem0}, Corollary 1, is erroneous.}. This coincidence allows us to unify both cases (dihedral groups with equal multiplicity values and odd dihedral groups), which are the main focus of the paper. In the next section, we shall derive an expansion of $D_k^{\mathcal{D}_2(n)}$ as a series of confluent Horn functions (see for instance \cite[chapter V]{Erd1}, \cite{Kar-Sri}): \begin{equation}\label{Horn} \Phi_{2}^{(n)}(\beta_1, \ldots, \beta_{n}; \gamma; z_1, \dots, z_{n}) := \sum_{j_1, \ldots, j_{n} \geq 0} \frac{(\beta_1)_{j_1}\ldots (\beta_{n})_{j_{n}}}{(\gamma)_{j_1+\cdots+j_{n}}} \frac{z_1^{j_1}}{j_1!}\cdots \frac{z_{n}^{j_{n}}}{j_{n}!}, \end{equation} where $(\cdot)_j$ is the so-called Pochhammer symbol. The occurrence of this function is motivated by the special boundary value $\phi = 0$ (a similar statement holds for $\theta = \pi/n$) for which the series displayed in the right-hand side of \eqref{Eq1} reduces to \begin{align*} \sum_{j \geq 0}{\it I}_{jn +\gamma}(\rho r)(j+k)C_j^{(k)}\bigl(\cos(n\theta)\bigr), \end{align*} which is equal, up to the factor $(\rho r/2)^{nk}/\bigl(n\Gamma(nk)\bigr)$, to (\cite{CDBL,Del-Dem}) \begin{equation}\label{Horn1} \Phi_{2}^{(n)}\Biggl(\underbrace{k, \dots, k}_{n \mathrm{\ times}}; nk; \rho r \cos(\theta), \rho r \cos\biggl(\theta + \frac{2\pi}{n}\biggr), \dots, \rho r \cos\biggl(\theta + \frac{2\pi(n-1)}{n}\biggr)\Biggr). \end{equation} Actually, we shall see that \eqref{Horn1} is the lowest-order term of the series displayed in Corollary \ref{coco}.
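Summing \eqref{IdGeg} against $(\rho r/2)^N$ over $N$ shows that the series above equals $(\rho r/2)^{nk}\,\Phi_2^{(n)}(k,\dots,k;nk;\rho r b^{\theta,n}_1,\dots,\rho r b^{\theta,n}_n)/\bigl(n\Gamma(nk)\bigr)$, and this can be checked numerically. A truncated sketch for $n=3$ (Python; helper names and truncation orders are ours):

```python
import math

def pochhammer(x, j):
    return math.gamma(x + j) / math.gamma(x)

def gegenbauer(j, k, z):
    # three-term recurrence for the Gegenbauer polynomial C_j^{(k)}(z)
    c_prev, c_curr = 1.0, 2.0 * k * z
    if j == 0:
        return c_prev
    for m in range(2, j + 1):
        c_prev, c_curr = c_curr, (2.0 * z * (m + k - 1) * c_curr
                                  - (m + 2.0 * k - 2) * c_prev) / m
    return c_curr

def bessel_i(nu, v, terms=60):
    # power series of the modified Bessel function I_nu(v)
    return sum((v / 2.0) ** (2 * m + nu)
               / (math.factorial(m) * math.gamma(nu + m + 1))
               for m in range(terms))

def horn_phi2_3(k, gamma, z, order=18):
    """Truncation of the confluent Horn function Phi_2^{(3)}(k,k,k; gamma; z1,z2,z3)."""
    total = 0.0
    for j1 in range(order):
        for j2 in range(order):
            for j3 in range(order):
                total += (pochhammer(k, j1) * pochhammer(k, j2) * pochhammer(k, j3)
                          / pochhammer(gamma, j1 + j2 + j3)
                          * z[0] ** j1 * z[1] ** j2 * z[2] ** j3
                          / (math.factorial(j1) * math.factorial(j2)
                             * math.factorial(j3)))
    return total

n, k, v, theta = 3, 0.6, 0.9, 0.4
lhs = sum((j + k) * gegenbauer(j, k, math.cos(n * theta))
          * bessel_i(n * j + n * k, v) for j in range(12))
z = [v * math.cos(theta + 2 * math.pi * s / n) for s in (1, 2, 3)]
rhs = (v / 2.0) ** (n * k) * horn_phi2_3(k, n * k, z) / (n * math.gamma(n * k))
assert abs(lhs - rhs) < 1e-8 * abs(rhs)
```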
\section{Generalized Bessel function as a series of confluent Horn functions} Using the expansion \eqref{Bessel} of the modified Bessel function, the right-hand side of \eqref{Eq1} may be written up to the factor $\displaystyle{\frac{c_{n,k}}{nB(k+1/2,1/2)}}$ as a double series \begin{equation*} \sum_{j,m \geq 0}\frac{n(j+k)}{m!\Gamma(nj+nk+m+1)} \frac{C_j^{(k)}\bigl(\cos(n\phi)\bigr)C_j^{(k)}\bigl(\cos(n\theta)\bigr)}{C_j^{(k)}(1)}\biggl(\frac{\rho r}{2}\biggr)^{2m + nj} \end{equation*} which converges absolutely since \begin{equation*} \bigl|C_j^{(k)}(z)\bigr| \leq C_j^{(k)}(1) = \frac{(2k)_j}{j!}, \quad |z| \leq 1. \end{equation*} Hence, we will focus on the following finite sums \begin{equation*} S_N(n,k, \phi, \theta) := \sum_{\substack{j,m\geq0\\ N = 2m+nj}} \frac{n(j+k)}{m!\Gamma\bigl(n(j+k) +m+ 1\bigr)} \frac{C_j^{(k)}\bigl(\cos(n\phi)\bigr)C_j^{(k)}\bigl(\cos(n\theta)\bigr)}{C_j^{(k)}(1)}, \qquad N \geq 0. \end{equation*} If $\phi = 0$, then Proposition 1 in \cite{Del-Dem} asserts that \begin{equation}\label{IdGeg} S_N(n,k, 0, \theta) = \frac{2^N}{\Gamma(nk+N)}\sum_{\substack{j_1, \ldots, j_n \geq 0 \\ j_1+\cdots +j_n = N}} (k)_{j_1}\ldots (k)_{j_{n}} \frac{\bigl({b^{\theta,n}_1}\bigr)^{j_1}}{j_1!}\cdots \frac{\bigl({b^{\theta,n}_n}\bigr)^{j_n}}{j_{n}!}, \end{equation} where we have set \begin{equation*} b^{\theta,n}_s : = \cos\biggl(\theta + \frac{2\pi s}{n}\biggr), \quad s = 1, \ldots, n. \end{equation*} More generally, we shall prove the following formula. \begin{teo} \label{teooo} Let $N \geq 0$. Then, we have the equality \begin{multline*} S_N(n,k, \phi, \theta) = \frac{2^N}{\Gamma(nk+N)}\times \\ \sum_{\substack{m_1, \dots, m_n \geq 0 \\ m_1+\dots +m_n = N}} \sum_{j=0}^{\inf(m_1,\dots, m_n)}\frac{(k)_j(k)_j}{(2k)_j j!} \Bigl(-2^{2-n}\sin(n\theta) \sin(n\phi)\Bigr)^j \prod_{s=1}^n \frac{(k+j)_{m_s-j}}{(m_s-j)!}\bigl(b^{\theta-\phi,n}_s\bigr)^{m_s-j}.
\end{multline*} \end{teo} \begin{proof} Multiply $S_N(n,k, \phi, \theta) $ by the factor $\Gamma(nk +N)z^N/2^N$ for small enough $|z|$. Summing the resulting expression over $N \geq 0$ and using the duplication formula \[ (x)_{2l}=2^{2l}\biggl(\frac{x}{2}\biggr)_{l}\biggl(\frac{1+x}{2}\biggr)_{l}, \] we get the series \begin{align*} \sum_{m, j \geq 0} \frac{n(j+k)\Gamma\bigl(n(j+k)\bigr)}{m!2^{nj} \Gamma\bigl(n(j+k) +m+ 1\bigr)} \bigl((nj+nk)/2\bigr)_m\bigl((nj+nk+1)/2\bigr)_m \frac{C_j^{(k)}\bigl(\cos(n\phi)\bigr)C_j^{(k)}\bigl(\cos(n\theta)\bigr)}{C_j^{(k)}(1)}z^{2m+nj}. \end{align*} Summing the latter over $m$, we further get \begin{multline*} \sum_{N \geq 0} S_N(n,k, \phi, \theta) \frac{\Gamma(nk +N)z^N}{2^N} = \\ \sum_{j \geq 0} \frac{z^{nj}}{2^{nj}} \frac{C_j^{(k)}\bigl(\cos(n\phi)\bigr)C_j^{(k)}\bigl(\cos(n\theta)\bigr)}{C_j^{(k)}(1)} {}_2F_1\left(\frac{nj+nk}{2}, \frac{nj+nk+1}{2}, nj + nk + 1; z^2\right), \end{multline*} where ${}_2F_1$ stands for the Gauss hypergeometric function. Thanks to the following formula (see for instance \cite{Erd1}, p.101) \begin{equation*} {}_2F_1\left(\frac{nj+nk}{2}, \frac{nj+nk+1}{2}, nj + nk + 1; z\right) = \frac{2^{nk+nj}}{\bigl(1+\sqrt{1-z}\bigr)^{nk+nj}}, \end{equation*} valid for $z \in \mathbb{C} \setminus [1,\infty[$, we are led to \begin{equation}\label{Poisson} \frac{2^{nk}}{\bigl(1+\sqrt{1-z^2}\bigr)^{nk}} \sum_{j \geq 0} \frac{z^{nj}}{\bigl(1+\sqrt{1-z^2}\bigr)^{nj}} \frac{C_j^{(k)}\bigl(\cos(n\phi)\bigr)C_j^{(k)}\bigl(\cos(n\theta)\bigr)}{C_j^{(k)}(1)}. 
\end{equation} But the Poisson kernel expression (formula $(19)$ in \cite{Mai}, see also \cite{Dun}) \begin{multline*} \sum_{j \geq 0} \frac{C_j^{(k)}\bigl(\cos(n\phi)\bigr)C_j^{(k)}\bigl(\cos(n\theta)\bigr)}{C_j^{(k)}(1)}t^j = \frac{1}{\Bigl(1-2t\cos\bigl(n(\theta-\phi)\bigr) + t^2\Bigr)^k} {}_2F_1\left(k, k, 2k; \frac{-4t\sin(n\theta) \sin(n\phi)}{1-2t\cos\bigl(n(\theta-\phi)\bigr) + t^2}\right) \end{multline*} which converges absolutely for $|t| < 1$, shows that \eqref{Poisson} may be written after some simplifications as \begin{multline}\label{Lastfor} \frac{2^{nk}}{\Bigl(\bigl(1+\sqrt{1-z^2}\bigr)^n + \bigl(1-\sqrt{1-z^2}\bigr)^n -2z^n \cos\bigl(n(\theta-\phi)\bigr)\Bigr)^k}\times \\ {}_2F_1\left(k, k, 2k; \frac{-4z^n \sin(n\theta) \sin(n\phi)}{\bigl(1+\sqrt{1-z^2}\bigr)^n + \bigl(1-\sqrt{1-z^2}\bigr)^n -2z^n \cos\bigl(n(\theta-\phi)\bigr)}\right). \end{multline} Expanding the hypergeometric function and using the identity (\cite{AAR}) \begin{align*} \bigl(1+\sqrt{1-z^2}\bigr)^n + \bigl(1-\sqrt{1-z^2}\bigr)^n = 2 \sum_{j=0}^{[n/2]} \binom{n}{2j}(1-z^2)^j = 2z^nT_n\biggl(\frac{1}{z}\biggr) , \end{align*} where $T_n$ is the well-known $n$-th Tchebycheff polynomial of the first kind, formula \eqref{Lastfor} reads \begin{equation*} 2^{nk} \sum_{j \geq 0} \frac{(k)_j(k)_j}{(2k)_j j!} \frac{\bigl(-4z^n \sin(n\theta) \sin(n\phi)\bigr)^j}{\Bigl(2z^n T_n\left(1/z\right) - 2z^n \cos\bigl(n(\theta-\phi)\bigr)\Bigr)^{j+k}}.
\end{equation*} Moreover, the following factorization holds (see \cite{Del-Dem}) \begin{equation*} 2z^n T_n\left(1/z\right) - 2z^n \cos\bigl(n(\theta-\phi)\bigr) = 2^n \prod_{s=1}^n \Bigl(1 - b^{\theta-\phi,n}_s z\Bigr) \end{equation*} whence we get the identity \begin{multline*} 2^{nk} \sum_{j \geq 0} \frac{(k)_j(k)_j}{(2k)_j j!} \frac{\bigl(-4z^n \sin(n\theta) \sin(n\phi)\bigr)^j}{\Bigl(2z^n T_n\left(1/z\right) - 2z^n \cos\bigl(n(\theta-\phi)\bigr)\Bigr)^{j+k}} = \\ \sum_{j \geq 0} \frac{(k)_j(k)_j}{(2k)_j j!} \frac{\bigl(-z^n \sin(n\theta) \sin(n\phi)\bigr)^j}{2^{(n-2)j}} \prod_{s=1}^n \Bigl(1 - b^{\theta-\phi,n}_s z\Bigr)^{-(k+j)}. \end{multline*} Now, the generalized binomial theorem entails \begin{align*} z^{nj} \prod_{s=1}^n \Bigl(1 - b^{\theta-\phi,n}_s z\Bigr)^{-(k+j)} = \sum_{m_1, \dots, m_n \geq 0} z^{nj+m_1+\dots+m_n}\prod_{s=1}^n \frac{(k+j)_{m_s}}{m_s!}\bigl(b^{\theta-\phi,n}_s\bigr)^{m_s} \end{align*} so that \begin{multline*} \sum_{j \geq 0} \frac{(k)_j(k)_j}{(2k)_j j!} \frac{\bigl(-z^n \sin(n\theta) \sin(n\phi)\bigr)^j}{2^{(n-2)j}} \prod_{s=1}^n\Bigl(1 - b^{\theta-\phi,n}_s z\Bigr)^{-(k+j)}= \\ \sum_{j, m_1, \dots, m_n \geq 0} \frac{(k)_j(k)_j}{(2k)_j j!}z^{(m_1+j) + \dots+ (m_n+j)} \frac{\bigl(-\sin(n\theta) \sin(n\phi)\bigr)^j}{2^{(n-2)j}}\prod_{s=1}^n \frac{(k+j)_{m_s}}{m_s!}\bigl(b^{\theta-\phi,n}_s\bigr)^{m_s}. 
\end{multline*} Finally, we perform in the multiple series above the index changes $m_s+j \rightarrow m_s$, for fixed $j \geq 0$ and each $1\leq s \leq n$, to obtain \begin{multline*} \sum_{j \geq 0} \frac{(k)_j(k)_j}{(2k)_j j!} \frac{\bigl(-z^n \sin(n\theta) \sin(n\phi)\bigr)^j}{2^{(n-2)j}} \prod_{s=1}^n \Bigl(1 - b^{\theta-\phi,n}_s z\Bigr)^{-(k+j)}= \\ \sum_{j \geq 0} \sum_{m_1, \dots, m_n \geq j } \frac{(k)_j(k)_j}{(2k)_j j!}z^{m_1 + \dots+ m_n} \frac{\bigl(-\sin(n\theta) \sin(n\phi)\bigr)^j}{2^{(n-2)j}}\prod_{s=1}^n \frac{(k+j)_{m_s-j}}{(m_s-j)!}\bigl(b^{\theta-\phi,n}_s\bigr)^{m_s-j}, \end{multline*} and we now change the summation order to end up with \begin{multline*} \sum_{N \geq 0} S_N(n,k, \phi, \theta) \frac{\Gamma(nk +N)z^N}{2^N} = \\ \sum_{m_1, \dots, m_n \geq 0} z^{m_1 + \dots+ m_n} \sum_{j=0}^{\inf(m_1,\dots, m_n)}\frac{(k)_j(k)_j}{(2k)_j j!} \Bigl(-2^{2-n}\sin(n\theta) \sin(n\phi)\Bigr)^j\prod_{s=1}^n \frac{(k+j)_{m_s-j}}{(m_s-j)!}\bigl(b^{\theta-\phi,n}_s\bigr)^{m_s-j}. \end{multline*} Comparing equal powers of $z$, we are done. \end{proof} As a first corollary, we get the following expansion of the generalized Bessel function. \begin{cor} \label{coco} Let $\mathcal{D}_2(n), n \geq 3,$ be a dihedral group with a constant multiplicity function $k$. Then, the associated generalized Bessel function is given by \begin{multline*} D_k^{\mathcal{D}_2(n)}(x,y) = \frac{c_{n,k}}{nB(k+1/2,1/2)\Gamma(nk)} \times \\ \sum_{j \geq 0}\frac{(k)_j(k)_j}{(2k)_j(nk)_{nj} j!} \Biggl(-4\biggl(\frac{\rho r}{2}\biggr)^n\sin(n\theta) \sin(n\phi)\Biggr)^j \Phi_2^{(n)}\Bigl(k+j, \dots, k+j; nk+nj; \rho r b^{\theta-\phi,n}_1, \dots, \rho r b^{\theta-\phi,n}_n\Bigr). 
\end{multline*} \end{cor} \begin{proof} By the very definition of the Pochhammer symbol, the result of the previous theorem can be rewritten as \begin{multline*} S_N(n,k, \phi, \theta) =\frac{2^N}{\Gamma(nk)} \times \\ \sum_{\substack{m_1, \dots, m_n \geq 0 \\ m_1+\dots +m_n = N}} \frac{1}{(nk)_N} \sum_{j=0}^{\inf(m_1,\dots, m_n)}\frac{(k)_j(k)_j}{(2k)_j j!}\Bigl(-2^{2-n}\sin(n\theta) \sin(n\phi)\Bigr)^j \prod_{s=1}^n \frac{(k+j)_{m_s-j}}{(m_s-j)!}\bigl(b^{\theta-\phi,n}_s\bigr)^{m_s-j}. \end{multline*} Multiplying both sides by $(\rho r/2)^N$ and then summing over $N \geq 0$, it follows that \begin{multline*} \sum_{N \geq 0} S_N(n,k, \phi, \theta) \biggl(\frac{\rho r}{2}\biggr)^N = \frac{1}{\Gamma(nk)} \sum_{m_1, \dots, m_n \geq 0}\frac{(\rho r)^{m_1+\dots+m_n}}{(nk)_{m_1+\dots+m_n}} \\ \sum_{j=0}^{\inf(m_1,\dots, m_n)}\frac{(k)_j(k)_j}{(2k)_j j!}\Bigl(-2^{2-n}\sin(n\theta) \sin(n\phi)\Bigr)^j \prod_{s=1}^n \frac{(k+j)_{m_s-j}}{(m_s-j)!} \bigl(b^{\theta-\phi,n}_s\bigr)^{m_s-j}, \end{multline*} which is equivalent to \begin{multline*} \sum_{N \geq 0} S_N(n,k, \phi, \theta) \biggl(\frac{\rho r}{2}\biggr)^N = \frac{1}{\Gamma(nk)} \sum_{j \geq 0}\frac{(k)_j(k)_j}{(2k)_j j!} \Bigl(-2^{2-n}\sin(n\theta) \sin(n\phi)\Bigr)^j\\ \sum_{m_1, \dots, m_n \geq j} \frac{(\rho r)^{m_1+\dots+m_n}}{(nk)_{m_1+\dots+m_n}}\prod_{s=1}^n \frac{(k+j)_{m_s-j}}{(m_s-j)!} \bigl(b^{\theta-\phi,n}_s\bigr)^{m_s-j}, \end{multline*} and then \begin{multline*} \sum_{N \geq 0} S_N(n,k, \phi, \theta) \biggl(\frac{\rho r}{2}\biggr)^N = \frac{1}{\Gamma(nk)} \sum_{j \geq 0}\frac{(k)_j(k)_j}{(2k)_j(nk)_{nj} j!} \Biggl(-4\biggl(\frac{\rho r}{2}\biggr)^n\sin(n\theta) \sin(n\phi)\Biggr)^j \\ \sum_{m_1, \dots, m_n \geq 0} \frac{1}{(nk+nj)_{m_1+\dots+m_n}}\prod_{s=1}^n \frac{(k+j)_{m_s}}{(m_s)!} (\rho r)^{m_s}\bigl(b^{\theta-\phi,n}_s\bigr)^{m_s}, \end{multline*} where we have used in particular the formula \[ \frac{1}{(nk)_{m_1+\dots+m_n+nj}}=\frac{1}{(nk)_{nj}}\frac{1}{(nk+nj)_{m_1+\dots+m_n}}. 
\] Keeping in mind \eqref{Horn}, it follows that \begin{multline*} \sum_{N \geq 0} S_N(n,k, \phi, \theta) \biggl(\frac{\rho r}{2}\biggr)^N = \frac{1}{\Gamma(nk)} \sum_{j \geq 0}\frac{(k)_j(k)_j}{(2k)_j(nk)_{nj} j!} \Biggl(-4\biggl(\frac{\rho r}{2}\biggr)^n\sin(n\theta) \sin(n\phi)\Biggr)^j \\ \Phi_2^{(n)}\Bigl(k+j, \dots, k+j; nk+nj; \rho r b^{\theta-\phi,n}_1, \dots, \rho r b^{\theta-\phi,n}_n\Bigr), \end{multline*} which coincides with $D_k^{\mathcal{D}_2(n)}(x,y)$ up to the normalizing factor $\displaystyle{\frac{c_{n,k}}{nB(k+1/2,1/2)}}$. \end{proof} We now give another corollary, which establishes an integral representation of the generalized Bessel function over the $(n-1)$-dimensional standard simplex, which will be denoted by \begin{equation*} \Sigma_n := \biggl\{\Bigl(u_1, \ldots, u_{n-1},\bigl(\underbrace{1-u_1-\cdots-u_{n-1}}_{:=u_0}\bigr)\Bigr) \in \mathbb R^n:\ \, u_0, u_1, \ldots, u_{n-1} \geq 0\biggr\}. \end{equation*} \begin{cor}Let $\mathcal{D}_2(n), n \geq 3,$ be a dihedral group with a constant multiplicity function $k$. Then, the generalized Bessel function admits the following integral representation \begin{multline*} D_k^{\mathcal{D}_2(n)}(x,y) = \frac{c_{n,k}}{nB(k+1/2,1/2)\bigl(\Gamma(k)\bigr)^n} \int_{\Sigma_n}\Biggl(\exp\biggl(\rho r \Bigl(u_0\cos(\theta-\phi)+\sum_{s=1}^{n-1} u_s b^{\theta-\phi,n}_s\Bigr)\biggr)\times \\ {}_0F_{n-1}\Biggl(2k, k, \dots, k; -4\biggl(\frac{\rho r}{2}\biggr)^nu_0u_1\dots u_{n-1} \sin(n\theta) \sin(n\phi)\Biggr)u_0^{k-1}\prod_{s=1}^{n-1} u_s^{k-1}\Biggr) du_1 \ldots du_{n-1}, \end{multline*} where ${}_0F_{n-1}$ is the following hypergeometric function \begin{equation*} {}_0F_{n-1}(a_1, \dots, a_{n-1}; z) := \sum_{j \geq 0}\frac{1}{(a_1)_j \dots (a_{n-1})_j} \frac{z^j}{j!}, \quad a_1, \dots, a_{n-1} \in \mathbb{R} \setminus \left(-\mathbb{N}\right), z \in \mathbb{C}.
\end{equation*} \end{cor} \begin{proof} Recall first the Dirichlet integral formula for the simplex: for any $\beta_1, \ldots, \beta_n > 0$ we have \begin{equation}\label{Dir} \int_{\Sigma_n}u_0^{\beta_n-1}\prod_{s=1}^{n-1} u_s^{\beta_s-1} du_1 \ldots du_{n-1} = \frac{\Gamma(\beta_1) \ldots \Gamma(\beta_n)}{\Gamma(\beta_1+\cdots+\beta_n)}. \end{equation} Substituting $\beta_s = k+ j+m_s$ in \eqref{Dir} for each $1 \leq s \leq n$, we get \begin{equation*} \frac{(k+j)_{m_1}\dots (k+j)_{m_n}}{(nk+nj)_{m_1+\dots+m_n}} = \frac{\Gamma(nk+nj)}{\bigl(\Gamma(k+j)\bigr)^n} \int_{\Sigma_n}u_0^{k+j+m_n-1} \prod_{s=1}^{n-1} u_s^{k+j+m_s-1}du_1 \ldots du_{n-1}, \end{equation*} whence \begin{multline*} \Phi_2^{(n)}\Bigl(k+j, \dots, k+j; nk+nj; \rho r b^{\theta-\phi,n}_1, \dots, \rho r b^{\theta-\phi,n}_n\Bigr) = \\ \frac{\Gamma(nk+nj)}{\bigl(\Gamma(k+j)\bigr)^n}\int_{\Sigma_n}\Biggl(\exp\biggl(\rho r \Bigl(u_0b^{\theta-\phi,n}_n+\sum_{s=1}^{n-1} u_s b^{\theta-\phi,n}_s\Bigr)\biggr)u_0^{k+j-1}\prod_{s=1}^{n-1} u_s^{k+j-1} \Biggr)du_1 \ldots du_{n-1} \end{multline*} which can be rewritten as \begin{multline*} \Phi_2^{(n)}\Bigl(k+j, \dots, k+j; nk+nj; \rho r b^{\theta-\phi,n}_1, \dots, \rho r b^{\theta-\phi,n}_n\Bigr)= \\ \frac{(nk)_{nj}\Gamma(nk)}{\bigl((k)_j\bigr)^n\bigl(\Gamma(k)\bigr)^n} \int_{\Sigma_n}\Biggl(\exp\biggl(\rho r \Bigl(u_0b^{\theta-\phi,n}_n+\sum_{s=1}^{n-1} u_s b^{\theta-\phi,n}_s\Bigr)\biggr)u_0^{k+j-1}\prod_{s=1}^{n-1} u_s^{k+j-1} \Biggr)du_1 \ldots du_{n-1}. \end{multline*} If we use this last equality in the formula of Corollary \ref{coco} and since \begin{multline*} \sum_{j \geq 0}\frac{1}{(2k)_j\underbrace{(k)_j\dots (k)_j}_{n-2 \mathrm{\ times}} j!} \Biggl(-4\biggl(\frac{\rho r}{2}\biggr)^nu_0u_1\dots u_{n-1} \sin(n\theta) \sin(n\phi)\Biggr)^j = \\ {}_0F_{n-1}\Biggl(2k, k, \dots, k; -4\biggl(\frac{\rho r}{2}\biggr)^nu_0u_1\dots u_{n-1} \sin(n\theta) \sin(n\phi)\Biggr) \end{multline*} by the very definition of ${}_0F_{n-1}$, the corollary is proved.
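The Dirichlet formula \eqref{Dir}, on which the computation above rests, is easy to confirm numerically. A sketch for $n=3$ (midpoint quadrature over the triangle; the exponents are kept above one so that the integrand vanishes on the slanted boundary; all names are ours):

```python
import math

def dirichlet_lhs(b1, b2, b3, grid=600):
    """Midpoint-rule approximation of the Dirichlet integral over the
    2-dimensional standard simplex (case n = 3 of the formula)."""
    h = 1.0 / grid
    total = 0.0
    for i in range(grid):
        u1 = (i + 0.5) * h
        for j in range(grid):
            u2 = (j + 0.5) * h
            u0 = 1.0 - u1 - u2
            if u0 > 0.0:
                total += u1 ** (b1 - 1) * u2 ** (b2 - 1) * u0 ** (b3 - 1)
    return total * h * h

b1, b2, b3 = 2.0, 2.5, 3.0
exact = math.gamma(b1) * math.gamma(b2) * math.gamma(b3) / math.gamma(b1 + b2 + b3)
approx = dirichlet_lhs(b1, b2, b3)
assert abs(approx - exact) < 1e-3 * exact
```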
\end{proof} \section{Laplace-type integral representation of the generalized Bessel function for boundary angles} In this section, we derive a Laplace-type integral representation of the generalized Bessel function associated with even dihedral groups $\mathcal{D}_2(2p)$ and a constant multiplicity function when one of the variables, say $x$, lies on the boundary of the dihedral wedge. Remarkably, we shall see that the range of integration is $\Sigma_p$ and not $\Sigma_{2p}$, as one might expect. Due to the symmetry relation $C_j^{(k)}(-z) = (-1)^j C_j^{(k)}(z)$, it suffices to consider the value $\phi = 0$ for which the generalized Bessel function reduces to \begin{align*} D_k^{\mathcal{D}_2(n)}(x,y) & = \frac{c_{n,k}}{B(k+1/2,1/2)}\sum_{j,m \geq 0}\frac{j+k}{m!\Gamma\bigl(2p(j+k) +m+ 1\bigr)} C_j^{(k)}\bigl(\cos(2p\theta)\bigr) \biggl(\frac{\rho r}{2}\biggr)^{2m+2jp} \\& = \frac{c_{n,k}}{nB(k+1/2,1/2)} \sum_{N \geq 0} \Biggl(\sum_{\substack{j,m \geq 0 \\N = jp+m}} \frac{n(j+k)}{m!\Gamma\bigl(2p(j+k) +m+ 1\bigr)} C_j^{(k)}\bigl(\cos(2p\theta)\bigr)\Biggr) \biggl(\frac{\rho r}{2}\biggr)^{2N}. \end{align*} The key ingredient is the following result, which partially extends a previous one due to the second author, valid for $p=2$ and without any assumption either on the multiplicity function or on the location of the variables (\cite{Dem2}, section 3). It involves the following normalized modified Bessel function of the first kind \begin{equation*} i_{\nu}(v) :=\Gamma(\nu+1)\biggl(\frac{2}{v}\biggr)^\nu I_\nu(v)= \sum_{m\geq 0} \frac{1}{m!(\nu+1)_m}\biggl(\frac{v}{2}\biggr)^{2m}, \quad \nu, v \in \mathbb{R}. \end{equation*} \begin{teo} Let $\mathcal{D}_2(2p), p \geq 1,$ be an even dihedral group with a constant multiplicity function. Assume $x= \rho \geq 0$.
Then, for any $y = re^{i\theta} \in \overline{C}$, we have \begin{multline*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}\Gamma(pk)}{2pB(k+1/2,1/2)\Gamma(2pk)\bigl(\Gamma(k)\bigr)^p}\times \\ \int_{\Sigma_p}\Biggl(i_{pk-1/2}\Biggl(\frac{\rho}{\sqrt{2}}\sqrt{\bigl(y_1^2+y_2^2\bigr)+\bigl(y_1^2-y_2^2\bigr)u_0+ \bigl(y_1^2-y_2^2\bigr)\sum_{s=1}^{p-1} u_sA_s + 2y_1y_2 \sum_{s=1}^{p-1} u_sB_s}\Biggr)\\ u_0^{k-1}\prod_{s=1}^{p-1} u_s^{k-1}\Biggr)du_1\dots du_{p-1}, \end{multline*} where we have set \begin{equation*} A_s :=b_s^{0,p}=\cos\biggl(\frac{2s\pi}{p}\biggr), \quad B_s :=-\sin\biggl(\frac{2s\pi}{p}\biggr), \quad s=1,\ldots,p-1, \end{equation*} and where $y_1 = r\cos\theta$ and $y_2 = r\sin\theta$ denote the Cartesian coordinates of $y$. \end{teo} \begin{proof} Recall formula \eqref{IdGeg} (this is Proposition 1 in \cite{Del-Dem}): for two integers $n,M$ such that $n\geq 1, M \geq 0$ and for $\xi \in [0,\pi]$, we have \begin{equation}\label{IdGegb} \sum_{\substack{m,j \geq 0 \\ M = 2m+nj}}\frac{n(j+k)}{m!\Gamma\bigl(n(j+k)+m+1\bigr)}C_j^{(k)}(\cos \xi) = \frac{2^{M}}{\Gamma(nk+M)}\sum_{\substack{j_1, \ldots, j_n \geq 0 \\ j_1+\cdots +j_n = M}} (k)_{j_1}\ldots (k)_{j_{n}} \frac{\bigl(b^{\xi/n,n}_1\bigr)^{j_1}}{j_1!}\cdots \frac{\bigl(b^{\xi/n,n}_{n}\bigr)^{j_{n}}}{j_{n}!}. \end{equation} In particular, if $M = 2N, n = 2p, \xi = 2p\theta$, then \eqref{IdGegb} specializes to \begin{align*} \sum_{\substack{m,j \geq 0 \\ N = m+pj}}\frac{2p(j+k)}{m!\Gamma\bigl(2p(j+k)+m+1\bigr)}C_j^{(k)}\bigl(\cos(2p\theta)\bigr) = \frac{2^{2N}}{\Gamma(2N+2pk)} \sum_{\substack{j_1, \ldots, j_{2p} \geq 0 \\ j_1+\cdots +j_{2p} = 2N}} \prod_{s=1}^{2p} \frac{(k)_{j_s}}{j_s!} \bigl(b^{\theta,2p}_{s}\bigr)^{j_s}.
\end{align*} Now, since $\cos(a+\pi) = -\cos(a)$, the finite sum in the right-hand side of the last equality may be written as \begin{equation}\label{Sum1} \sum_{\substack{j_1, \ldots, j_{2p} \geq 0 \\ j_1+\cdots+ j_{2p} = 2N}} (-1)^{j_1+\dots+j_p} \prod_{s=1}^{2p} \frac{(k)_{j_s}}{j_s!} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{j_s + j_{s+p}}, \end{equation} which may be further simplified as follows. Consider a $2p$-tuple of integers in the sum \eqref{Sum1} arranged in $p$ pairs \begin{equation*} (j_1,j_{p+1}), \ldots, (j_p, j_{2p}), \quad \sum_{s=1}^{2p} j_s = 2N, \end{equation*} and assume that one pair consists of an even and an odd integer. Then, the $2p$-tuple obtained from the previous one by permuting the even and the odd integers in the given pair (while keeping the other pairs unchanged) cancels the latter since the expression \begin{equation*} \prod_{s=1}^{2p} \frac{(k)_{j_s}}{j_s!} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{j_s + j_{s+p}} \end{equation*} is the same for both tuples. As a matter of fact, the only tuples that remain in \eqref{Sum1} after cancellations are those for which $j_s+j_{s+p}$ is even, therefore \begin{multline}\label{Sum2} \sum_{\substack{j_1, \ldots, j_{2p} \geq 0 \\ j_1+\cdots +j_{2p} = 2N}} (-1)^{j_1+\dots+j_p} \prod_{s=1}^{2p} \frac{(k)_{j_s}}{j_s!} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{j_s + j_{s+p}} \\ = \sum_{\substack{j_1, \ldots, j_{2p} \geq 0 \\ j_1+\cdots +j_{2p} = 2N,\ j_s+j_{s+p} \textrm{\ is even}}} (-1)^{j_1+\dots+j_p} \prod_{s=1}^{2p} \frac{(k)_{j_s}}{j_s!} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{j_s + j_{s+p}}.
\end{multline} Setting $2m_s = j_s+j_{s+p}$ and using the identity \begin{equation*} \sum_{j_s = 0}^{2m_s} (-1)^{j_s} \frac{(k)_{j_s}(k)_{2m_s-j_s}}{j_s!(2m_s - j_s)!} = \frac{(k)_{m_s}}{m_s!}, \end{equation*} which is readily checked by equating the generating functions of both sides, the right-hand side of \eqref{Sum2} may be written as \begin{align*} \sum_{\substack{m_1, \ldots, m_{p} \geq 0 \\ m_1+\cdots +m_{p} = N}} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{2m_s} \sum_{j_s = 0}^{2m_s} (-1)^{j_s} \frac{(k)_{j_s}(k)_{2m_s-j_s}}{j_s!(2m_s - j_s)!} = \sum_{\substack{m_1, \ldots, m_{p} \geq 0 \\ m_1+\cdots +m_{p} = N}} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{2m_s} \frac{(k)_{m_s}}{m_s!}. \end{align*} Consequently, \begin{equation*} \sum_{\substack{m,j \geq 0 \\ N = m+pj}} \frac{2p(j+k)}{m!\Gamma\bigl(2p(j+k) +m+ 1\bigr)} C_j^{(k)}\bigl(\cos(2p\theta)\bigr) = \frac{2^{2N}}{\Gamma(2N+2pk)} \sum_{\substack{m_1, \ldots, m_{p} \geq 0 \\ m_1+\cdots +m_{p} = N}} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{2m_s} \frac{(k)_{m_s}}{m_s!}, \end{equation*} whence \begin{align*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}}{2pB(k+1/2,1/2)} \sum_{N \geq 0} \frac{1}{\Gamma(2N+2pk)} \sum_{\substack{m_1, \ldots, m_{p} \geq 0 \\ m_1+\cdots +m_{p} = N}}(\rho r)^{2N} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{2m_s} \frac{(k)_{m_s}}{m_s!}.
\end{align*} Thanks to the duplication formula, we derive: \begin{multline*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}}{2pB(k+1/2,1/2)\Gamma(2pk)}\times \\ \sum_{m_1, \dots, m_p \geq 0} \frac{1}{(pk)_{m_1+\dots+m_p} (pk+1/2)_{m_1+\dots + m_p}} \prod_{s=1}^p \bigl(b^{\theta,2p}_{s}\bigr)^{2m_s} \frac{(k)_{m_s}}{m_s!}\biggl(\frac{\rho r}{2}\biggr)^{2m_s}, \end{multline*} or equivalently \begin{multline*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}\Gamma(pk)}{2pB(k+1/2,1/2)\Gamma(2pk)\bigl(\Gamma(k)\bigr)^p} \times \\ \sum_{m_1, \dots, m_p \geq 0} \frac{\Gamma(k+m_1) \ldots \Gamma(k+m_p)}{\Gamma\bigl((k+m_1)+\cdots+(k+m_p)\bigr)} \frac{1}{(pk+1/2)_{m_1+\dots + m_p}} \prod_{s=1}^p \frac{\bigl(b^{\theta,2p}_{s}\bigr)^{2m_s}}{m_s!}\biggl(\frac{\rho r}{2}\biggr)^{2m_s}. \end{multline*} Moreover, formula \eqref{Dir} entails \begin{equation*} \frac{\Gamma(k+m_1) \ldots \Gamma(k+m_p)}{\Gamma\bigl((k+m_1)+\cdots+(k+m_p)\bigr)}=\int_{\Sigma_p}u_0^{k+m_p-1}\prod_{s=1}^{p-1} u_s^{k+m_s-1} du_1 \ldots du_{p-1},\end{equation*} and as such, \begin{multline*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}\Gamma(pk)}{2pB(k+1/2,1/2)\Gamma(2pk)\bigl(\Gamma(k)\bigr)^p}\times\\ \int_{\Sigma_p}\Biggl( \sum_{m_1, \dots, m_p \geq 0} \frac{1}{(pk+1/2)_{m_1+\dots + m_p}} \Biggl(\prod_{s=1}^p \frac{\bigl(b^{\theta,2p}_{s}\bigr)^{2m_s}}{m_s!}\biggl(\frac{\rho r}{2}\biggr)^{2m_s} \Biggr)u_0^{k+m_p-1}\prod_{s=1}^{p-1}u_s^{k+m_s-1} \Biggr) du_1\ldots du_{p-1}. \end{multline*} By applying the multinomial theorem \begin{equation*} (a_1+\cdots +a_p)^N = \sum_{m_1+\cdots + m_p = N} \frac{N!}{m_1!\ldots m_p!} \prod_{s=1}^p a_s^{m_s}, \end{equation*} we get, writing out the notation $b^{\theta,2p}_{s}$ explicitly, \begin{multline*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}\Gamma(pk)}{2pB(k+1/2,1/2)\Gamma(2pk)\bigl(\Gamma(k)\bigr)^p} \times \\ \int_{\Sigma_p}\Biggl( \sum_{N \geq 0} \frac{1}{N!
(pk+1/2)_{N}} \biggl(\frac{\rho r}{2}\biggr)^{2N} \Biggl(u_0\cos^{2}(\theta)+\sum_{s=1}^{p-1} u_s \cos^{2}\biggl(\theta + \frac{s\pi}{p}\biggr)\Biggr)^N u_0^{k-1}\prod_{s=1}^{p-1}u_s^{k-1}\Biggr) du_1\dots du_{p-1}, \end{multline*} that is to say, thanks to the very definition of $i_\nu$, \begin{multline*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}\Gamma(pk)}{2pB(k+1/2,1/2)\Gamma(2pk)\bigl(\Gamma(k)\bigr)^p} \times \\ \int_{\Sigma_p}\Biggl(i_{pk-1/2}\Biggl(\rho r\sqrt{u_0\cos^{2}(\theta)+\sum_{s=1}^{p-1} u_s \cos^{2}\biggl(\theta + \frac{s\pi}{p}\biggr)}\Biggr)u_0^{k-1}\prod_{s=1}^{p-1} u_s^{k-1}\Biggr) du_1\dots du_{p-1}. \end{multline*} Finally, the trigonometric formulas \begin{equation*} \cos^2(\theta) = \frac{1+\cos(2\theta)}{2}, \quad \cos\biggl(2\theta + \frac{2s\pi}{p}\biggr) = A_s \cos(2\theta) + B_s \sin(2\theta), \end{equation*} together with the fact that $u_1+\dots + u_{p-1} = 1-u_0$ yield \begin{multline*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}\Gamma(pk)}{2pB(k+1/2,1/2)\Gamma(2pk)\bigl(\Gamma(k)\bigr)^p}\times \\ \int_{\Sigma_p}\Biggl( i_{pk-1/2}\Biggl(\frac{\rho r}{\sqrt{2}}\sqrt{1+u_0\cos(2\theta)+\cos(2\theta) \sum_{s=1}^{p-1} u_sA_s + \sin(2\theta) \sum_{s=1}^{p-1} u_sB_s}\Biggr)u_0^{k-1}\prod_{s=1}^{p-1} u_s^{k-1}\Biggr) du_1\dots du_{p-1}, \end{multline*} or, in Cartesian coordinates, \begin{multline*} D_k^{\mathcal{D}_2(2p)}(x,y) = \frac{c_{2p,k}\Gamma(pk)}{2pB(k+1/2,1/2)\Gamma(2pk)\bigl(\Gamma(k)\bigr)^p}\times \\ \int_{\Sigma_p}\Biggl(i_{pk-1/2}\Biggl(\frac{\rho}{\sqrt{2}}\sqrt{\bigl(y_1^2+y_2^2\bigr)+\bigl(y_1^2-y_2^2\bigr)u_0+ \bigl(y_1^2-y_2^2\bigr)\sum_{s=1}^{p-1} u_sA_s + 2y_1y_2 \sum_{s=1}^{p-1} u_sB_s}\Biggr)\\ u_0^{k-1}\prod_{s=1}^{p-1} u_s^{k-1}\Biggr)du_1\dots du_{p-1}. \end{multline*} \end{proof} We are now ready to prove a Laplace-type integral representation for the generalized Bessel function.
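(An illustrative aside, not part of the original argument: the generating-function verification of the Pochhammer identity invoked in the preceding proof can be spelled out in one line, using $\sum_{j\geq 0}(k)_j x^j/j! = (1-x)^{-k}$.)

```latex
% The alternating convolution is the coefficient of x^{2m} in the product
% (1+x)^{-k}(1-x)^{-k} = (1-x^2)^{-k}, whose even coefficients are (k)_m/m!:
\[
\sum_{j=0}^{2m} (-1)^{j}\,\frac{(k)_{j}\,(k)_{2m-j}}{j!\,(2m-j)!}
= \bigl[x^{2m}\bigr]\,(1+x)^{-k}(1-x)^{-k}
= \bigl[x^{2m}\bigr]\,(1-x^{2})^{-k}
= \frac{(k)_{m}}{m!}.
\]
```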
Before stating it, we introduce, for $u=(u_1,\ldots,u_{p-1},u_0) \in \Sigma_p$, \begin{equation*} a := a(u) = \sqrt{1+u_0+ \sum_{s=1}^{p-1} u_sA_s}, \qquad b := b(u) = \frac{1}{a} \sum_{s=1}^{p-1} u_sB_s, \end{equation*} \begin{equation*} c := c(u) = \frac{1}{a} \sqrt{1- \Biggl(u_0+\sum_{s=1}^{p-1} u_sA_s\Biggr)^2 - \Biggl(\sum_{s=1}^{p-1} u_sB_s\Biggr)^2}. \end{equation*} Note that \begin{equation*} \Biggl(u_0+\sum_{s=1}^{p-1} u_sA_s\Biggr)^2 + \Biggl(\sum_{s=1}^{p-1} u_sB_s\Biggr)^2 = \left|\sum_{s=0}^{p-1} u_se^{2i\pi s/p} \right|^2 \end{equation*} so that the square-root is well-defined. Note also that $a$ may vanish (for instance when $p$ is even) on a codimension-one subspace of $\Sigma_p$ and that the expressions involved in the integral representation below may be rewritten in a way that they are well-defined for all $u \in \Sigma_p$ (see the first remark after the proof). \begin{cor}\label{CorLap} Let $\mathcal{D}_2(2p), p \geq 1,$ be an even dihedral group with a constant multiplicity function which further satisfies $pk > 1/2$. Assume $x= \rho \geq 0$. Then, for any $y = re^{i\theta} \in \overline{C}$, we have \begin{equation*} D_k^{\mathcal{D}_2(2p)}(x, y) = \int_{\mathbb R^2} \mathrm{e}^{\langle y, z\rangle} H_p(\rho, z) dz, \end{equation*} where we have set \begin{multline*} H_p(\rho,z) := \frac{c_{2p,k}\Gamma(pk)}{2pB(k+1/2,1/2)\Gamma(2pk)\bigl(\Gamma(k)\bigr)^p}\times \\ \int_{E_{z,\rho, p}} (a(u)c(u)) \biggl(\frac{2}{\rho^2a^2(u)c^2(u)}\biggr)^{pk - (1/2)} \biggl(\frac{\rho^2a^2(u)c^2(u)}{2} - (c(u)z_1)^2 - (a(u)z_2-b(u)z_1)^2\biggr)^{pk-(3/2)} \\ u_0^{k-1} \prod_{s=1}^{p-1} u_s^{k-1} du_1\ldots du_{p-1}, \end{multline*} and, for any $z \in \mathbb{R}^2$, \begin{equation*} E_{z,\rho, p} := \biggl\{u \in \Sigma_p:\ \frac{\rho^2a^2(u)c^2(u)}{2} > (c(u)z_1)^2 + (a(u)z_2-b(u)z_1)^2\biggr\}. 
\end{equation*} \end{cor} \begin{rem} We conjecture that $H_p(\rho,\cdot)$ is supported in the convex hull of \begin{equation*} \mathcal{D}_2(2p) \rho = \{\rho \mathrm{e}^{is\pi/p}, \, 1 \leq s \leq 2p\}. \end{equation*} This conjecture was proved in \cite{Amr-Dem} for $p=2$. For general $p$, we can prove only the first of the two inequalities displayed below. More precisely, recall from Lemma 3.3 in \cite{Kos} that the convex hull of $\mathcal{D}_2(2p) \rho$ is the set of $z$ such that \begin{equation*} z_C - \rho \in \mathbb{R}_+\alpha_1 + \mathbb{R}_+\alpha_2, \end{equation*} where we recall the simple root vectors $\alpha_1 = i, \alpha_2 = -i\mathrm{e}^{i\pi/n}$ (here $n = 2p$), and $z_C$ is the unique representative of $z$ in the closed Weyl chamber $\overline{C}$. Equivalently, this set is characterized by the following inequalities: \begin{eqnarray*} z_{C,1} & \leq & \rho, \\ z_{C,1}\cos\biggl(\frac{\pi}{2p}\biggr) + z_{C,2}\sin\biggl(\frac{\pi}{2p}\biggr) & \leq & \rho \cos\biggl(\frac{\pi}{2p}\biggr). \end{eqnarray*} The first inequality can be proved as follows: if $z$ is such that $E_{z,\rho, p} \neq \emptyset$, then every $u \in E_{z,\rho, p}$ satisfies \begin{equation*} c^2(u) z_1^2 < \frac{\rho^2a^2(u)c^2(u)}{2} \quad \Leftrightarrow \quad z_1^2 < \frac{\rho^2}{2} \Biggl(1+u_0+ \sum_{s=1}^{p-1} u_sA_s\Biggr). \end{equation*} By writing \begin{equation*} (c(u)z_1)^2 + (a(u)z_2-b(u)z_1)^2 = (b^2(u)+c^2(u))\biggl(z_1 - \frac{a(u)b(u)}{b^2(u)+c^2(u)} z_2\biggr)^2 + \frac{a^2(u)c^2(u)}{b^2(u)+c^2(u)} z_2^2 \geq \frac{a^2(u)c^2(u)}{b^2(u)+c^2(u)} z_2^2, \end{equation*} we see that \begin{equation*} z_2^2 \leq \frac{\rho^2}{2} \biggl(1-u_0- \sum_{s=1}^{p-1} u_sA_s\biggr). \end{equation*} Consequently, $|z|^2 \leq \rho^2$ and, since reflection groups consist of isometries, $|z_C| = |z|$, so that $z_{C,1} \leq \rho$. \end{rem} We now prove Corollary \ref{CorLap}. \begin{proof} The derivation of the integral representation is similar to that of Theorem 1 in \cite{Amr-Dem}.
More precisely, recall first from \cite{Amr-Dem} the following integral representation: if $pk > 1/2$, then for any $w \in \mathbb{R}^2$, \begin{equation*} i_{pk-1/2}\bigl(|w|\bigr) = \int_{|z| < 1} \mathrm{e}^{\langle w, z\rangle}\bigl(1-|z|^2\bigr)^{pk - 3/2} dz, \end{equation*} where $|w| = \sqrt{w_1^2+w_2^2}$ is the Euclidean norm. Next, straightforward computations show that \begin{equation*} \sqrt{\bigl(y_1^2+y_2^2\bigr)+\bigl(y_1^2-y_2^2\bigr)u_0+ \bigl(y_1^2-y_2^2\bigr)\sum_{s=1}^{p-1} u_sA_s + 2y_1y_2 \sum_{s=1}^{p-1} u_sB_s} = \sqrt{(a(u)y_1+b(u)y_2)^2 + (c(u)y_2)^2}. \end{equation*} Consequently, \begin{multline*} i_{pk-1/2}\Biggl(\frac{\rho}{\sqrt{2}}\sqrt{\bigl(y_1^2+y_2^2\bigr)+\bigl(y_1^2-y_2^2\bigr)u_0+ \bigl(y_1^2-y_2^2\bigr)\sum_{s=1}^{p-1} u_sA_s + 2y_1y_2 \sum_{s=1}^{p-1} u_sB_s}\Biggr) = \\ \int_{|z| < 1} \mathrm{e}^{\rho\bigl(a(u)y_1z_1 +y_2(b(u)z_1 + c(u)z_2)\bigr)/\sqrt{2}}\bigl(1-|z|^2\bigr)^{pk - 3/2} dz, \end{multline*} which can be rewritten as \begin{multline*} i_{pk-1/2}\Biggl(\frac{\rho}{\sqrt{2}}\sqrt{\bigl(y_1^2+y_2^2\bigr)+\bigl(y_1^2-y_2^2\bigr)u_0+ \bigl(y_1^2-y_2^2\bigr)\sum_{s=1}^{p-1} u_sA_s + 2y_1y_2 \sum_{s=1}^{p-1} u_sB_s}\Biggr) = \\ (a(u)c(u)) \biggl(\frac{2}{\rho^2a^2(u)c^2(u)}\biggr)^{pk - (1/2)} \int_{V_{u,\rho, p}} \mathrm{e}^{\langle y, z \rangle} \biggl(\frac{\rho^2a^2(u)c^2(u)}{2} - (c(u)z_1)^2 - (a(u)z_2-b(u)z_1)^2\biggr)^{pk-(3/2)} dz, \end{multline*} where, for each $u \in \Sigma_p$, the set $V_{u,\rho, p}$ is defined by \begin{equation*} V_{u,\rho, p} := \biggl\{z \in \mathbb{R}^2:\ \frac{\rho^2a^2(u)c^2(u)}{2} > (c(u)z_1)^2 + (a(u)z_2-b(u)z_1)^2\biggr\}. \end{equation*} By Fubini's theorem, the sought integral representation follows. \end{proof} \begin{rem} Using the so-called shift principle, we can derive from Corollary \ref{CorLap} the Laplace-type integral representation of the dihedral Dunkl kernel along the same lines presented in \cite{Amr-Dem} for the dihedral group of order eight.
Besides, it would be interesting to exploit the integral representations proved in this paper in order to derive asymptotic results for the generalized Bessel function and its behaviour near the edges of the dihedral Weyl chamber. \end{rem} \end{document}
\begin{document} \title{\textsf{An introduction to SDE simulation}} \author{\textsf{Simon J.A. Malham} \and \textsf{Anke Wiese}} \authorrunning{\textsf{Malham and Wiese}} \institute{Simon J.A. Malham \and Anke Wiese \at Maxwell Institute for Mathematical Sciences \\ and School of Mathematical and Computer Sciences\\ Heriot-Watt University, Edinburgh EH14 4AS, UK \\ Tel.: +44-131-4513200\\ Fax: +44-131-4513249\\ \email{[email protected]}\\ \email{[email protected]}} \date{5th April 2010} \voffset=10ex \maketitle \begin{abstract} We outline the basic ideas and techniques underpinning the simulation of stochastic differential equations. In particular we focus on strong simulation and its context. We also provide illustrative examples and sample Matlab algorithms for the reader to use and follow. Our target audience is advanced undergraduate and graduate students interested in learning about simulating stochastic differential equations. We try to address the FAQs we have encountered. \keywords{stochastic simulation} \subclass{60H10 \and 60H35} \end{abstract} \lstset{language=Matlab,basicstyle=\ttfamily} \section{\textsf{Introduction}}\label{intro} \subsection{\textbf{\textsf{When is a model stochastic?}}} Often in modelling, we need to incorporate a phenomenon or influence that seems random, whose behaviour we can only recognize as statistically Gaussian. The prototypical example is behaviour of molecular origin---the Brownian motion of a pollen particle on a water surface. However, the influences can be large scale, for example a turbulent wind or atmospheric flow, or thousands of people buying and selling millions of shares. Imagine tracking and adjusting the flight of a rocket after lift-off. If the rocket is buffeted by a random turbulent wind, you might sensibly equip the rocket with stabilizers that kick in if a gust diverts it too far.
Computing the path of the rocket, and regulating it to ensure it threads pre-arranged target positions at critical junctures (e.g.\/ stage separation), is a stochastic simulation problem. Indeed it is a \emph{strong simulation problem}, as conditional on the path, the stabilizers will affect the outcome. The pricing of financial derivatives (futures/options) is another example. One tracks a security price (e.g.\/ a share price) that is randomly buffeted by market forces. Pricing the derivative you wish to sell, which might be exercised by the buyer at a future time, involves hedging/regulating the proportion of your investment in the security (the rest invested in risk-free bonds) so as to minimize your risk. Building in stabilizers/barriers that kick in if the security price skews too wildly is, again, a strong stochastic simulation problem. \subsection{\textbf{\textsf{What is a stochastic differential equation?}}} Consider a model that incorporates some random phenomena whose statistics are Gaussian. Suppose the state of the system is recorded through the vector $y_t\in\R^N$, with $N\geqslant 1$. Suppose there are several random sources, say $W^1,\ldots,W^d$; these are Wiener processes; think of them as independent, continuous, nowhere differentiable functions of time. Indeed the time derivatives of the Wiener processes represent pure white noise. Suppose the effect of the Wiener processes on the model, i.e.\/ which way they skew the solution, is recorded through the vector fields $V_1,\ldots,V_d$. Without the noise we would have a nice fuzz-free signal which is generated by the vector field $V_0$. A stochastic model for the system state $y_t\in\R^N$ might be that it evolves according to the stochastic differential equation: \begin{equation*} \frac{\rd y}{\rd t}=V_0(y_t) +V_1(y_t)\,\frac{\rd W_t^1}{\rd t}+\cdots+V_d(y_t)\,\frac{\rd W_t^d}{\rd t}.
\end{equation*} This representation of the model is somewhat formal; after all, the pure white noise terms $\rd W^i_t/\rd t$ need to be interpreted in an extremely weak sense; we prefer to represent the model in the form \begin{equation*} \mathrm{d} y_t=V_0(y_t)\,\mathrm{d}t +V_1(y_t)\,\mathrm{d}W_t^1+\cdots+V_d(y_t)\,\mathrm{d}W_t^d. \end{equation*} Indeed there is also a representation in a preferred integral form which we meet presently. With this in mind though, an important observation at this point is to recall that Wiener processes are continuous functions. Thus the solution function will also be continuous. \noindent\textsf{Example (Langevin equation/Brownian motion).} Consider the equation of motion of a pollen particle suspended in a fluid flow. The particle might obey the following equation of motion for its velocity $y_t$: \begin{equation*} \frac{\rd y_t}{\rd t}=-a\,y_t+\sqrt{b}\,\frac{\rd W_t}{\rd t}, \end{equation*} where $a$ and $b$ are constants. The right-hand side is the force exerted on the particle per unit mass. There is a deterministic force $-a\,y_t$ and a white noise force $\sqrt{b}\,\rd W_t/\rd t$ supposed to represent the random buffeting by the water molecules. \subsection{\textbf{\textsf{What information do we want to retrieve?}}} If the time interval of interest is $[0,T]$, and our initial deterministic state is $y_0$, each realization $\omega$ of an individual Wiener path $W(\omega)$ will produce a different outcome $y_t(\omega)$ for $t\in[0,T]$. Practical information of interest is often expected values of functions $f$ of the solution, $f(y_t)$, or more generally path-dependent functions of the solution $f(t,y_s;s\leqslant t)$. Hence we might want to compute \begin{equation*} \Eb\, f(y_t)\coloneqq\int f\bigl(y_t(\omega)\bigr)\,\rd\mathsf P(\omega), \end{equation*} where $\mathsf P$ is a probability measure. For example, we could pick $f$ to be of polynomial or exponential form, synonymous with statistical moments of $y_t$.
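As an illustration of computing such an expectation by sampling (the paper's algorithms are in Matlab; this standalone Python sketch is ours, with arbitrary parameter values), we can use the known fact that the Langevin solution $y_t$ is Gaussian with mean $y_0\mathrm{e}^{-at}$ and variance $b(1-\mathrm{e}^{-2at})/(2a)$:

```python
import math
import random

random.seed(0)

a, b = 1.5, 0.8      # Langevin parameters (arbitrary illustrative values)
y0, t = 2.0, 1.0
P = 200000           # number of samples

# Known Gaussian law of the Ornstein--Uhlenbeck solution y_t
mean_exact = y0 * math.exp(-a * t)
var_exact = b * (1 - math.exp(-2 * a * t)) / (2 * a)

# Sample y_t exactly and estimate E f(y_t) for f the identity and f(y) = y^2
samples = [mean_exact + math.sqrt(var_exact) * random.gauss(0.0, 1.0)
           for _ in range(P)]
est_mean = sum(samples) / P
est_second_moment = sum(s * s for s in samples) / P

print(est_mean, mean_exact)
print(est_second_moment, var_exact + mean_exact ** 2)
```

The sampling error of such estimates decays like $1/\sqrt{P}$, which is why $P$ is taken large.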
If $f$ is the identity map, we obtain the expectation of the solution. If we take $f$ to be $\|\cdot\|_p^p$, where $\|\cdot\|_p$ is the $p$-vector norm, then we define the $L^p$-norm for $p\geqslant 1$ by \begin{equation*} \|y_t\|_{L^p}^p\coloneqq\int \|y_t(\omega)\|_p^p\,\rd\mathsf P(\omega). \end{equation*} \subsection{\textbf{\textsf{How do we retrieve it?}}} There are two main simulation approaches to extract such information: we can either \begin{itemize} \item Solve a \emph{partial differential equation}; or \item Perform \emph{Monte--Carlo simulation}. \end{itemize} Associated with every stochastic differential equation, there is a parabolic \emph{partial differential equation} for $u(t,y)$ whose solution at time $t\in[0,T]$ is \begin{equation*} u(t,y)=\Eb\, f(y_t) \end{equation*} provided $u(0,y)=f(y)$ initially. Thus solving the associated partial differential equation on $[0,T]$ will generate lots of information about the solution to the stochastic differential equation at time $t$. By judiciously choosing $f$ to be a monomial function we can generate any individual moment of the solution $y_t$ we like, or if we choose $f=\exp$ we generate all the moments simultaneously (this is essentially the Laplace transform). If we choose $f$ to be a Dirac delta function we generate the transition probability distribution for $y_t$---the probability density function for $y_t$ conditioned on the initial data $y_0$. Choosing $f$ to be a Heaviside function generates the corresponding (cumulative) distribution function. Of course, often the partial differential equation will have to be solved approximately. Also note that if we fix a form for $f$ from the start, for example $f$ as the identity map, then we simply solve an ordinary differential equation for $u(t,y)$.
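To illustrate the last remark with the Langevin example (a Python sketch of ours, not from the paper): taking $f$ to be the identity, the mean $m(t)=\Eb\,y_t$ satisfies the ordinary differential equation $m'(t)=-a\,m(t)$ with $m(0)=y$, since the noise term has zero mean; a forward Euler integration reproduces $u(t,y)=y\,\mathrm{e}^{-at}$:

```python
import math

a = 1.5              # Langevin drift parameter (arbitrary illustrative value)
y = 2.0              # initial state, so u(0, y) = f(y) = y
T = 1.0
n = 100000
h = T / n

# For f the identity, u(t, y) = E y_t satisfies the ODE u' = -a u
u = y
for _ in range(n):
    u += -a * u * h  # forward Euler step

print(u, y * math.exp(-a * T))
```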
In \emph{Monte--Carlo simulation}, we generate a set of suitable multidimensional sample paths $\hat W(\omega)\coloneqq\bigl(\hat W^1(\omega),\ldots,\hat W^d(\omega)\bigr)$ on $[0,T]$; in practice, $\omega$ belongs to a large but finite set. For each sample path $\hat W(\omega)$, we generate a sample path solution $\hat y(\omega)$ to the stochastic differential equation on $[0,T]$. This is often achieved using a truncation of the `stochastic' Taylor series expansion for the solution $y$ of the stochastic differential equation, on successive small subintervals of $[0,T]$. Suppose, for example, we wanted to compute the expectation $\Eb\,f(\hat y_t)$. Having generated a set of approximate solutions $\hat y_t(\omega_i)$ at time $t\in[0,T]$, for $i=1,\ldots,P$ with $P$ large, we can estimate $\Eb\,f(\hat y_t)$ by computing the mean-sum over the large finite set of approximate sample solutions $\hat y_t(\omega_i)$. Hence in practice we approximate \begin{equation*} \int f\bigl(y_t(\omega)\bigr)\,\rd\mathsf P(\omega) \approx\tfrac{1}{P}\sum_{i=1}^Pf\bigl(\hat y_t(\omega_i)\bigr) \end{equation*} where $P$ is the total number of sample paths. A natural dichotomy now arises. To compute $\Eb\,f(\hat y_t)$, we can in fact choose any suitable multidimensional paths $\hat W(\omega)$ that leave $\Eb\,f(\hat y_t)$ approximately invariant, in the sense that $\|\Eb\,f(y_t)-\Eb\,f(\hat y_t)\|$ is sufficiently small. This is a \emph{weak approximation}. For example, increments $\Delta W^i$ in each computation interval can be chosen from a suitable binomial branching process, or using Lyons and Victoir's cubature method~\cite{LV}. Note that since the approximate paths are not close to Brownian paths we cannot compare $\hat y_t$ and $y_t$ directly. In a \emph{strong approximation}, discrete increments $\Delta W^i$ in each computation interval are directly sampled from the Gaussian distribution. This is more expensive.
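A minimal strong-simulation sketch (ours, in Python rather than the paper's Matlab; parameters arbitrary): the simplest truncation of the stochastic Taylor expansion is the Euler--Maruyama scheme, and for the scalar linear equation $\rd y_t=a\,y_t\,\rd t+b\,y_t\,\rd W_t$ the well-known exact solution $y_0\exp\bigl((a-\tfrac12b^2)t+b\,W_t\bigr)$ lets us estimate the strong error $\Eb\,|y_T-\hat y_T|$ pathwise, feeding the same Gaussian increments to both:

```python
import math
import random

random.seed(1)

a, b = 1.0, 0.5      # drift and diffusion coefficients (arbitrary)
y0, T = 1.0, 1.0
n = 1024             # Euler-Maruyama steps per path
h = T / n
P = 2000             # number of sample paths

err = 0.0
for _ in range(P):
    y, W = y0, 0.0
    for _ in range(n):
        dW = math.sqrt(h) * random.gauss(0.0, 1.0)  # Gaussian increment
        y += a * y * h + b * y * dW                 # Euler-Maruyama step
        W += dW
    exact = y0 * math.exp((a - 0.5 * b * b) * T + b * W)
    err += abs(y - exact)
err /= P  # Monte Carlo estimate of the strong error E|y_T - yhat_T|

print(err)
```

Halving $h$ and rerunning shows the familiar strong order $1/2$ of Euler--Maruyama: the error shrinks roughly by $1/\sqrt{2}$.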
However, the sample paths $\hat W(\omega)$ generated in this way allow us to compare $\hat y_t(\omega)$ and $y_t(\omega)$ directly in the sense that we can guarantee $\Eb\,\|y_t-\hat y_t\|$ will be sufficiently small. Naturally, using strong simulation we can also account for path-dependent features, such as conditional cut-offs or barriers, when we investigate individual solutions or the final expectation or higher moments of the approximate solution paths $\hat y$. For a comprehensive overview of Monte--Carlo methods see Boyle, Broadie and Glasserman~\cite{BBG}. \subsection{\textbf{\textsf{What is required?}}} In general to extract qualitative and quantitative information from a stochastic differential system requires the languages and techniques of several mathematical disciplines, notably: \begin{enumerate} \item \emph{Integration}: in Brownian motion new information is continuously generated on infinitesimally small time scales (imagine the pollen particle jiggles); solution, as with ordinary differential equations, is by integration, except that now the coefficients of the evolution equation---the Wiener processes---are no longer differentiable. \item \emph{Statistics}: we typically extract statistical information from the solution process; \item \emph{Geometry}: as with ordinary differential equations, preserving invariant geometric structure of the solution path evolution is important; for example the solution may evolve on a homogeneous manifold; \item \emph{Simulation}: stochastic differential equations, more often than not, are \emph{not} integrable in the classical sense, and require numerical computation. \end{enumerate} For general background reading, we recommend as follows. For a comprehensive introduction to the theory underlying stochastic differential equations download Evans' notes~\cite{Evans}. For an introduction to numerical simulation, see Higham's notes~\cite{Higham:intro}.
The answer to just about any other question that a beginner may have on numerical simulation, not covered above or here, can likely be found in the treatise by Kloeden and Platen~\cite{KP}. \section{\textsf{Stochastic differential equations}} \subsection{\textbf{\textsf{Integral representation}}} Consider the nonlinear stochastic differential equation of dimension $N\in\mathbb N$ given by \begin{equation*} y_t=y_0+\int_0^t \tilde V_0(y_\tau)\,\mathrm{d}\tau +\sum_{i=1}^d\int_0^t V_i(y_\tau)\,\mathrm{d}W^i_\tau. \end{equation*} Here $(W^1,\ldots,W^d)$ is a $d$-dimensional Wiener process, i.e.\/ there are $d$ independent driving noisy signals. We assume there exists a unique solution $y\colon[0,T]\rightarrow\mathbb R^N$ for some time interval $[0,T]\subseteq\mathbb R_{+}$. We suppose that $\tilde V_0$ and $V_i\colon\mathbb R^N\rightarrow \mathbb R^N$, $i=1,\ldots,d$, are smooth non-commuting autonomous vector fields. We are representing the stochastic differential equation above in It\^o form and indicate this by using $\tilde V_0$ to represent the \emph{It\^o drift vector field}. We call the vector fields $V_i$ for $i=1,\ldots,d$ associated with the driving noise terms the \emph{diffusion vector fields}. Presently we will distinguish, and explain, the It\^o representation as opposed to the Stratonovich representation for a stochastic differential equation. We also remark that a common convention is to set $W^0_t\equiv t$. Results on existence and uniqueness of solutions can be found in Kloeden and Platen~\cite{KP}. \subsection{\textbf{\textsf{Driving Wiener process}}} A scalar driving noisy signal or disturbing Brownian motion has a concise definition and set of properties formulated by Wiener.
\begin{definition}[\textbf{\textsf{Wiener process}}] A scalar \emph{standard Wiener process} or \emph{standard Brownian motion} $W$ is a continuous process that satisfies the three conditions: \begin{enumerate} \item $W_0=0$ with probability one; \item $W_t-W_s\sim \sqrt{t-s}\,\cdot\text{\textsf{N}}(0,1)$ for $0\leqslant s<t$, where $\text{\textsf{N}}(0,1)$ denotes a standard Normal random variable; \item Increments $W_t-W_s$ and $W_\xi-W_\eta$ on distinct time intervals are independent, i.e.\/ whenever $0\leqslant s<t\leqslant\eta<\xi$. \end{enumerate} Note that with probability one an individual Brownian path is nowhere differentiable. \end{definition} \begin{figure} \caption{Example scalar Wiener path on the interval $[0,T]$.} \end{figure} \noindent\textsf{Example (Langevin equation).} As we have seen, the Brownian motion of a pollen particle suspended in a fluid flow obeys the following equation of motion for its velocity $y_t$: \begin{equation*} \rd y_t=-a\,y_t\,\rd t+\sqrt{b}\,\rd W_t, \end{equation*} where $a$ and $b$ are constants, and $W$ is a scalar Wiener process. This type of stochastic differential equation is said to have \emph{additive noise} as the diffusion vector field is constant. It is also an example of an \emph{Ornstein--Uhlenbeck process}. \noindent\textsf{Example (scalar linear equation).} Consider the scalar linear stochastic differential equation \begin{equation*} \rd y_t=a\,y_t\,\rd t+b\,y_t\,\rd W_t \end{equation*} driven by a scalar Wiener process $W$, with $a$ and $b$ constants. This stochastic differential equation is said to have \emph{multiplicative noise} as the diffusion vector field depends multiplicatively on the solution $y_t$. We can in fact solve this equation analytically; the solution is \begin{equation*} y_t=y_0\,\exp\bigl(a\,t+b\,W_t-\tfrac12b^2t\bigr). \end{equation*} The additional term `$-\tfrac12b^2t$' is due to the It\^o correction, which we discuss shortly.
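The three defining properties suggest the standard way to sample a Wiener path on a grid (an illustrative Python sketch of ours): set $W_0=0$ and accumulate independent $\sqrt{h}\cdot\text{\textsf{N}}(0,1)$ increments over steps of size $h$; as a sanity check, the endpoint $W_T$ should then have mean $0$ and variance $T$:

```python
import math
import random

random.seed(2)

T = 1.0
n = 200              # grid steps per path
h = T / n

def wiener_endpoint():
    # W_0 = 0; each increment over a step of size h is sqrt(h) * N(0,1)
    W = 0.0
    for _ in range(n):
        W += math.sqrt(h) * random.gauss(0.0, 1.0)
    return W

P = 4000
ends = [wiener_endpoint() for _ in range(P)]
mean = sum(ends) / P
var = sum(w * w for w in ends) / P - mean ** 2

print(mean, var)  # close to 0 and T respectively
```

Keeping the intermediate values $W_{t_1},W_{t_2},\ldots$ instead of only the endpoint produces discrete paths like the one sketched in the figure above.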
\subsection{\textbf{\textsf{Vector fields and flow maps}}} Consider an ordinary differential equation governed by an autonomous vector field $V$ that evolves on a homogeneous manifold $\Mcc$ so that \begin{equation*} \frac{\mathrm{d}y_t}{\mathrm{d}t}=V(y_t), \end{equation*} with initial data $y_0\in\Mcc$. Here we suppose $\Mcc$ to be an $N$-dimensional manifold. The reader can for simplicity assume $\Mcc\equiv\R^N$ for the rest of this section if they choose. Let $\text{Diff}(\Mcc)$ denote the group of diffeomorphisms of $\Mcc$. The \emph{flow-map} $\varphi_{t,t_0}\in\text{Diff}(\Mcc)$ for the ordinary differential equation above is the map taking the solution configuration on $\Mcc$ at time $t_0$ to that at time $t$, i.e.\/ it is the map $\varphi_{t,t_0}\colon\Mcc\to\Mcc$ such that \begin{equation*} \varphi_{t,t_0}\colon y_{t_0}\mapsto y_t. \end{equation*} In other words, for any data $y_0\in\Mcc$ at time $t_0$ we can determine its value $y_t\in\Mcc$ at time $t$ later by applying the \emph{action} of the flow map $\varphi_{t,t_0}$ to $y_0$ so that $y_t=\varphi_{t,t_0}\circ y_0$. Note that the flow map satisfies the usual group properties \begin{equation*} \varphi_{t,s}\circ\varphi_{s,t_0}=\varphi_{t,t_0}, \end{equation*} with $\varphi_{t_0,t_0}=\id$, the identity diffeomorphism. If $f\in\text{Diff}(\Mcc)$, the chain rule reveals that, along any solution $y=y_t$, we have \begin{equation*} \frac{\mathrm{d}f(y)}{\mathrm{d}t}=V(y)\cdot\partial_y\,f(y), \end{equation*} where $\partial_y\equiv\nabla_y$ is the usual gradient operator with respect to each component of $y$. In other words, vector fields act on the group of diffeomorphisms $\text{Diff}(\Mcc)$ as first order partial differential operators, and for any $f\in\text{Diff}(\Mcc)$, we write \begin{equation*} V\circ f\circ y=V(y)\cdot\partial_yf(y).
\end{equation*} We now think of $V(y)\cdot\partial_y$ as a first order partial differential operator and an element of the tangent space $\mathfrak X(\Mcc)$ to $\text{Diff}(\Mcc)$. In particular, choose the diffeomorphic map $f$ to be the flow map $\varphi_t\equiv\varphi_{t,0}$. Then for all $y_0\in\Mcc$ and $y_t=\varphi_t\circ y_0$ we have \begin{equation*} \frac{\rd}{\rd t}(\varphi_t\circ y_0)=V\circ\varphi_t\circ y_0. \end{equation*} We pull back this ordinary differential equation for $y_t\in\Mcc$ to the \emph{linear} functional differential equation in $\text{Diff}(\Mcc)$: \begin{equation*} \frac{\rd\varphi_t}{\rd t}=V\circ\varphi_t. \end{equation*} Since $\varphi_0=\id$, the solution is $\varphi_t=\exp(t\,V)$, giving the representation of the flow-map as the \emph{exponentiation of the vector field}. Hence we see that \begin{equation*} y_t=\exp(t\,V)\circ y_0. \end{equation*} An important and illustrative concomitant derivation of this result is as follows. Integrating the functional differential equation for the flow-map we get \begin{equation*} \varphi_t=\id+\int_0^t V\circ\varphi_\tau\,\rd\tau. \end{equation*} To solve this integral equation, we set up the formal iterative procedure given by \begin{equation*} \varphi_t^{(n+1)}=\id+\int_0^t V\circ\varphi^{(n)}_\tau\,\rd\tau, \end{equation*} with $\varphi^{(0)}_t=\id$. For example, after two iterations: $\varphi_t^{(2)}=\id+t\,V\circ\id+\tfrac12t^2\,V^2\circ\id$, where $V^2\equiv V\circ V$. Hence in the limit we obtain the exponential form for $\varphi_t$ above. The composition of two vector fields $U$ and $V$ is a second order differential operator: \begin{align*} U\circ V=&\;\bigl(U(y)\cdot\partial_y\bigr)\bigl(V(y)\cdot\partial_y\bigr)\\ =&\;\Bigl(\bigl(U(y)\cdot\partial_y\bigr)\bigl(V(y)\bigr)\Bigr)\cdot\partial_y +U(y)\otimes V(y)\colon \partial_{yy}\\ =&\;\sum_{i,j=1}^N U^i\partial_{y_i}(V^j)\partial_{y_j}+ \sum_{i,j=1}^N U^iV^j\partial_{y_iy_j}.
\end{align*} Importantly, we now observe that since $\partial_{y_iy_j}=\partial_{y_jy_i}$ as operators on $\text{Diff}(\Mcc)$, the Lie bracket $[U,V]$ of two vector fields is a vector field: \begin{equation*} [U,V]=U\circ V-V\circ U =\Bigl(\bigl(U(y)\cdot\partial_y\bigr)V(y)- \bigl(V(y)\cdot\partial_y\bigr)U(y)\Bigr)\cdot\partial_y. \end{equation*} Note the sum of two vector fields is itself a vector field. Hence the set of vector fields is a \emph{Lie algebra} $\mathfrak X(\Mcc)$---closed under summation and Lie product $[\cdot,\cdot]$. \subsection{\textbf{\textsf{Stratonovich representation}}} There are two generic representations for stochastic differential equations. One can either express them in It\^o form, as we did at the beginning of Section~2, or in Stratonovich form, in which case we write \begin{equation*} y_t=y_0+\int_0^t V_0(y_\tau)\,\mathrm{d}\tau +\sum_{i=1}^d\int_0^t V_i(y_\tau)\,\strat\mathrm{d}W^i_\tau. \end{equation*} Two subtle notational changes can be spotted. First the `$\strat\mathrm{d}W^i_\tau$' indicates that the stochastic integrals on the right are supposed to be interpreted in the Stratonovich sense; discussed presently. Second we now use the \emph{Stratonovich drift vector field} $V_0$ instead of the It\^o drift vector field $\tilde V_0$. The relation between the two vector fields is \begin{equation*} V_0=\tilde V_0-\tfrac12\sum_{i=1}^d(V_i\cdot\partial_yV_i). \end{equation*} Importantly, when stochastic integrals are interpreted in the It\^o sense, then they are limits of left Riemann sums and, when repeated integrals are computed, an It\^o correction must be taken into account; a practical discussion of this point can be found in Higham~\cite[pp.~530--1]{Higham:intro}. For example, the correct evaluation of an It\^o repeated integral of a Wiener process with respect to itself is \begin{equation*} \int_0^T W^i_\tau\,\mathrm{d}W^i_\tau=\tfrac12\bigl(W^i_T\bigr)^2-\tfrac12 T.
\end{equation*} Stratonovich integrals are interpreted as limits of midpoint-rule sums, so for the corresponding Stratonovich integral the correct evaluation is \begin{equation*} \int_0^T W^i_\tau\,\strat\mathrm{d}W^i_\tau=\tfrac12\bigl(W^i_T\bigr)^2. \end{equation*} Thus the rules of Stratonovich integral calculus match those of standard integral calculus. For this reason it is often preferable to use the Stratonovich representation for a stochastic differential equation. The two representations are equivalent, but it is important to know in which form you have been quoted the stochastic differential equation. Often this depends on the modeller and their field of interest. In finance applications the It\^o representation predominates, and by simply replacing the given It\^o drift vector field $\tilde V_0$ by the \emph{corresponding} Stratonovich drift vector field $V_0$ above, one can proceed using standard integral calculus rules. In physical applications, the model is often directly expressed in Stratonovich form. Hereafter we will use the Stratonovich representation and omit the `$\strat$' symbol, unless we specify otherwise. \section{\textsf{Stochastic Taylor expansion}} We follow the procedure we performed above for the ordinary differential equation to try to find the solution for the flow-map. In the process we obtain a solution series expansion called the stochastic Taylor series. We define the \emph{flow-map} $\varphi_{t}\in\text{Diff}(\Mcc)$ for the stochastic differential equation above as the map taking the solution configuration on $\Mcc$ at time $0$ to that at time $t$; hence $y_t=\varphi_t\circ y_0$.
Using the Stratonovich representation for a stochastic differential equation and the convention $W^0_t\equiv t$, the chain rule for any function $f\in\text{Diff}(\Mcc)$ yields the stochastic differential equation governing the evolution of $f\circ y_t$ as follows \begin{equation*} f\circ y_t=f\circ y_0 +\sum_{i=0}^d\int_0^t (V_i\cdot\partial_y\,f)\circ y_\tau\,\mathrm{d}W^i_\tau. \end{equation*} As for the ordinary differential equation, setting $f=\varphi_t$, we can pull back the stochastic flow on $\Mcc$ to a functional stochastic differential equation on $\text{Diff}(\Mcc)$ given by \begin{equation*} \varphi_t=\id+\sum_{i=0}^d\int_0^t V_i\circ\varphi_\tau\,\mathrm{d}W^i_\tau. \end{equation*} To solve this equation, we set up the formal iterative procedure given by \begin{equation*} \varphi_t^{(n+1)}=\id+\sum_{i=0}^d\int_0^t V_i\circ\varphi^{(n)}_\tau\,\mathrm{d}W^i_\tau, \end{equation*} with $\varphi^{(0)}_t=\id$. By performing the iterations one can see formally, and prove rigorously, that the solution flow-map is given by the series expansion \begin{equation*} \varphi_t=\id+\sum_{i=0}^d\bigl(W_t^i\bigr)\,V_i +\sum_{i,j=0}^d\biggl(\int_0^t\int_0^{\tau_1}\,\rd W^i_{\tau_2}\,\rd W^j_{\tau_1}\biggr)\,V_{ij}+\cdots. \end{equation*} Here we use the notation $V_{ij}\equiv V_i\circ V_j$. We can apply this to the initial data $y_0\in\Mcc$ and obtain the \emph{stochastic Taylor expansion} for the solution \begin{equation*} y_t=y_0+\sum_{i=0}^d\bigl(W_t^i\bigr)\,V_i(y_0) +\sum_{i,j=0}^d\biggl(\int_0^t\int_0^{\tau_1}\,\rd W^i_{\tau_2}\,\rd W^j_{\tau_1}\biggr)\,V_{ij}(y_0)+\cdots. \end{equation*} We can express the solution series for the flow-map concisely as follows. Let $\Ab^\ast$ denote the free monoid of words over the alphabet $\mathbb A=\{0,1,\ldots,d\}$. 
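For a scalar linear equation the expansion above can be checked directly. With $V_0(y)=ay$ and $V_1(y)=by$, each composition $V_w(y)$ is simply a product of coefficients times $y$, and the second-order truncation reproduces the quadratic Taylor polynomial of the exact Stratonovich solution $y_t=y_0\exp(at+bW_t)$. A small Python sketch, with arbitrary illustrative values for $t$ and $W_t$:

```python
import math

# Scalar linear Stratonovich SDE dy = a*y dt + b*y o dW: V_0(y)=a*y, V_1(y)=b*y.
a, b, y0 = 0.7, 0.4, 1.0
t, Wt = 0.05, 0.11   # hypothetical small time and Wiener increment

# Multiple Stratonovich integrals up to order two (exact identities):
J0, J1 = t, Wt
J00, J11 = 0.5 * t**2, 0.5 * Wt**2
J01_plus_J10 = t * Wt            # integration by parts: J01 + J10 = J0*J1

trunc = y0 * (1 + a*J0 + b*J1 + a*a*J00 + b*b*J11 + a*b*J01_plus_J10)
exact = y0 * math.exp(a*t + b*Wt)
x = a*t + b*Wt
print(abs(trunc - exact))        # remainder is third order in a*t + b*W_t
```

The truncation equals $1+x+\tfrac12x^2$ with $x=at+bW_t$, so the discrepancy is bounded by the cubic term of the exponential series.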
We adopt the standard notation for \emph{Stratonovich integrals}: if $w=a_1\ldots a_n$ then we set \begin{equation*} J_w(t)\coloneqq\int_0^t\cdots\int_0^{\tau_{n-1}} \mathrm{d}W^{a_1}_{\tau_n}\,\cdots\,\mathrm{d}W^{a_n}_{\tau_1}. \end{equation*} We also write the composition of the vector fields as $V_w\equiv V_{a_1}\circ V_{a_2}\circ\cdots\circ V_{a_n}$. Then the flow-map is given by \begin{equation*} \varphi_t=\sum_{w\in\Ab^\ast}J_w(t)\,V_w. \end{equation*} \section{\textsf{PDE simulation}} There is an intimate link between any stochastic differential equation and a prescribed \emph{parabolic partial differential equation}. The link is given by the Feynman--Kac formula, which we give here in a very simple form. See for example Karlin and Taylor~\cite[pp.~222--4]{KT} for the full statement of the Feynman--Kac formula and its applications. \begin{theorem}[\textbf{\textsf{Feynman--Kac formula}}] Consider the parabolic partial differential equation for $t\in[0,T]$: \begin{equation*} \partial_t u=\Lc\, u, \end{equation*} with $u(0,y)=f(y)$. Here $\mathcal L\coloneqq V_0+\tfrac12(V_1^2+\cdots+V_d^2)$ is a second-order differential operator. Let $y_t$ denote the solution to the stochastic differential equation for $t\in[0,T]$: \begin{equation*} y_t=y_0+\int_0^t V_0(y_\tau)\,\mathrm{d}\tau +\sum_{i=1}^d\int_0^t V_i(y_\tau)\,\mathrm{d}W^i_\tau. \end{equation*} Then, when $y_0=y$ we have: $u(t,y)=\Eb\,f(y_t)$. \end{theorem} \noindent\textsf{Remark.} Note that using the relation between the It\^o and Stratonovich drift vector fields, an equivalent formulation is $\Lc\equiv \tilde V_0\cdot\partial_y+\tfrac12\sum_{i=1}^d(V_i\otimes V_i):\partial_{yy}$. We provide a purely combinatorial proof of the Feynman--Kac formula. Before we begin, we need the following results for the expectation of Stratonovich integrals and also a combinatorial expansion.
Let $\mathbb D^*\subset\mathbb A^*$ denote the free monoid of words constructed from the alphabet $\mathbb D=\{0,11,22,\ldots,dd\}$. The expectation of a Stratonovich integral $J_w$ is given by \begin{equation*} \Eb\,J_w=\begin{cases} \frac{t^{\mathrm{n}(w)}}{2^{\mathrm{d}(w)}\mathrm{n}(w)!},&\qquad w\in\mathbb D^*; \\ 0,&\qquad w\in\Ab^\ast\backslash\mathbb D^*. \end{cases} \end{equation*} In the formula, $\mathrm{d}(w)$ is the number of non-zero consecutive pairs from $\mathbb D$ in $w$ and $\mathrm{n}(w)=\mathrm{z}(w)+\mathrm{d}(w)$, where $\mathrm{z}(w)$ is the number of zeros in $w$. We also have the following combinatorial identity, where the sum is over all $w\in\mathbb D^\ast$ with $\mathrm{n}(w)=k$: \begin{equation*} \bigl(V_0+\tfrac12(V_1^2+\cdots+V_d^2)\bigr)^k \equiv\sum_{\mathrm{n}(w)=k}(\tfrac12)^{\mathrm{d}(w)}\,V_w, \end{equation*} where note that $V_i^2\equiv V_{ii}$. In other words, expanding $\bigl(V_0+\tfrac12(V_{1}^2+\cdots+V_{d}^2)\bigr)^k$ generates all the possible vector fields $V_w$ with $w\in\mathbb D^*$ and $\mathrm{n}(w)=k$, with the appropriate coefficients of powers of one-half. \begin{proof} In the series solution for the flow-map $\varphi_t$, all the stochastic information is encoded in the multiple integrals $J_w$ on the left, and all the geometric information in the vector fields $V_w$ on the right. Taking the expectation of the flow-map, noting that expectation is a linear operator, and using the two results above for $\Eb\,J_w$ and the combinatorial expansion, we get \begin{align*} \Eb\,\varphi_t=&\;\Eb\,\sum_{w\in\Ab^*} J_wV_w\\ =&\;\sum_{w\in\Ab^*}\bigl(\Eb\,J_w\bigr)\,V_w\\ =&\;\sum_{k\geq0} \sum_{\mathrm{n}(w)=k}(\tfrac12)^{\mathrm{d}(w)}\frac{t^k}{k!}\,V_w\\ =&\;\sum_{k\geq0} \frac{t^k}{k!}\bigl(V_0+\tfrac12(V_{1}^2+\cdots+V_{d}^2)\bigr)^k\\ =&\;\exp\bigl(t\,\mathcal L\bigr).
\end{align*} Now note that $\exp\bigl(t\,\mathcal L\bigr)$ generates the \emph{semi-group} for the solution to the parabolic differential equation in the theorem.\qed \end{proof} \noindent\textsf{Example (Heston model).} In the Heston model~\cite{Heston}, a stock price $S_t$ is modelled by a stochastic process $x_t=\log S_t$ with variance process $v_t$ which evolve according to: \begin{align*} \mathrm{d}x_t=&\;\mu\,\mathrm{d}t+\sqrt{v_t}\,\mathrm{d}W^1_t,\\ \mathrm{d}v_t=&\;\kappa(\theta-v_t)\,\mathrm{d}t +\varepsilon\,\sqrt{v_t}\,\bigl(\rho\,\mathrm{d}W^1_t +\sqrt{1-\rho^2}\,\mathrm{d}W^2_t\bigr), \end{align*} given in It\^o form. Note that the variance is a mean-reverting process; it tries to revert to the mean value $\theta$ at rate $\kappa$. Using the Feynman--Kac formula the corresponding partial differential equation for $u(x,v,t)=\Eb\,\bigl(f(x_t,v_t)~|~x_0=x,~v_0=v\bigr)$ is \begin{equation*} u_t=\mu\,u_x+\kappa(\theta-v)\,u_v +\tfrac12v\,u_{xx}+\rho\varepsilon v\,u_{xv}+\tfrac12\varepsilon^2v\,u_{vv}. \end{equation*} \noindent\textsf{Remark.} The Feynman--Kac formula shows how we can solve a partial differential equation to obtain information about the solution $y_t$ to the stochastic differential equation at time $t\in[0,T]$. In the reverse direction, to numerically solve high dimensional diffusion problems in the form of deterministic partial differential equations, we need only simulate an $N$-dimensional stochastic differential equation for $y_t$ and then compute the expectation $\Eb\,f(y_t)$ to find the solution. \section{\textsf{Monte--Carlo simulation}} In \emph{Monte--Carlo simulation}, we generate a set of suitable multidimensional sample paths say $\hat W(\omega)\coloneqq\bigl(\hat W^1(\omega),\ldots,\hat W^d(\omega)\bigr)$ on $[0,T]$. We generate a large finite set of paths, each labelled by $\omega$.
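Returning briefly to the Heston example, the partial differential equation just quoted can be checked symbolically from the generator formula $\Lc\equiv \tilde V_0\cdot\partial_y+\tfrac12\sum_i(V_i\otimes V_i):\partial_{yy}$ in the remark after the Feynman--Kac theorem. The sketch below uses SymPy (our choice of tool, not part of the original exposition) to assemble $\Lc u$ from the It\^o vector fields of the Heston model and compare it with the right-hand side of the displayed equation.

```python
import sympy as sp

x, v = sp.symbols('x v')
mu, kappa, theta, eps, rho = sp.symbols('mu kappa theta epsilon rho', positive=True)
u = sp.Function('u')(x, v)

# Ito vector fields read off the Heston SDE:
V0t = sp.Matrix([mu, kappa*(theta - v)])                  # Ito drift
V1 = sp.Matrix([sp.sqrt(v), eps*rho*sp.sqrt(v)])
V2 = sp.Matrix([0, eps*sp.sqrt(1 - rho**2)*sp.sqrt(v)])

# Generator L = V0~ . grad + (1/2) sum_i (V_i V_i^T) : grad grad
coords = [x, v]
Lu = sum(V0t[k] * sp.diff(u, coords[k]) for k in range(2))
for V in (V1, V2):
    Lu += sp.Rational(1, 2) * sum(
        V[i] * V[j] * sp.diff(u, coords[i], coords[j])
        for i in range(2) for j in range(2))

target = (mu*sp.diff(u, x) + kappa*(theta - v)*sp.diff(u, v)
          + sp.Rational(1, 2)*v*sp.diff(u, x, 2)
          + rho*eps*v*sp.diff(u, x, v)
          + sp.Rational(1, 2)*eps**2*v*sp.diff(u, v, 2))
print(sp.simplify(Lu - target))   # expect zero
```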
Here the number of paths, say $P$, must be large enough so that, for example, any statistical information for the solution $y_t$ that we want to extract is sufficiently robust. For each sample path $\hat W(\omega)$, we generate a sample path solution $\hat y(\omega)$ to the stochastic differential equation on $[0,T]$. This can often only be achieved approximately by using a truncation of the stochastic Taylor expansion for the solution $y$ on successive small subintervals of $[0,T]$. Having generated a set of approximate solutions $\hat y_t(\omega_i)$ at time $t\in[0,T]$ for every $\omega_i$ for $i=1,\ldots,P$, we estimate the expectation $\Eb\,f(\hat y_t)$ by computing \begin{equation*} \tfrac{1}{P}\sum_{i=1}^Pf\bigl(\hat y_t(\omega_i)\bigr) \end{equation*} regarded as a suitable approximation for \begin{equation*} \Eb\,f(y_t)\coloneqq\int f\bigl(y_t(\omega)\bigr)\,\rd\mathsf P(\omega) \end{equation*} over all possible paths. Now a natural question arises. Do the paths $\hat W(\omega)$ we generate have to be sample Brownian paths in order to compute the mean above, or can we choose different paths that still generate the expectation effectively? We discuss the latter case (weak simulation) briefly next. We then move on to a more comprehensive treatment of the former (strong simulation) case, which also allows us to include path-dependent features. \subsection{\textbf{\textsf{Weak simulation}}} There are several \emph{weak simulation} strategies to approximate $\Eb\,f(\hat y_t)$. The simplest and most common is to replace the driving Wiener paths $W^i$ by paths generated as follows. Construct paths by generating increments $\Delta W^i(t_n,t_{n+1})$ over the computation subinterval $[t_n,t_{n+1}]$ by the binomial branching process \begin{equation*} \mathsf{P}\bigl(\Delta W^i(t_n,t_{n+1})=\pm\sqrt{h}\bigr)=\tfrac12, \end{equation*} where $h=t_{n+1}-t_n$.
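As an illustration of this weak strategy, the following Python sketch applies the Euler scheme driven by binomial increments to a scalar linear test equation. For this scheme the mean can be computed exactly, $\Eb\,\hat y_N=y_0(1+ah)^N$, since the increments have mean zero and are independent of the current state. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, y0 = 0.5, 0.5, 1.0          # illustrative Ito coefficients
T, N, P = 1.0, 20, 100_000
h = T / N

# Binomial branching increments: +-sqrt(h), each with probability 1/2.
dW = np.sqrt(h) * rng.choice([-1.0, 1.0], size=(N, P))

y = np.full(P, y0)
for n in range(N):
    y = y + a*y*h + b*y*dW[n]      # weak Euler step (Ito drift a)

# For this linear test equation E y_N = y0*(1+a*h)^N exactly.
print(abs(y.mean() - y0*(1 + a*h)**N))   # Monte Carlo noise only
```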
Then, depending on the ordinary numerical integration scheme employed for each such path, one can show that for some order of convergence $p$, for $t\in[0,T]$, we have \begin{equation*} \|\Eb\,f(y_t)-\Eb\,f(\hat y_t)\|=\mathcal O(h^p). \end{equation*} See Kloeden and Platen~\cite{KP} for more details. Another promising method is the cubature method of Lyons and Victoir~\cite{LV}. \subsection{\textbf{\textsf{Strong simulation}}} In a \emph{strong simulation}, discrete increments $\Delta W^i$ in each computation interval are directly sampled from the Gaussian distribution. Indeed we generate each multidimensional path $\hat W(\omega)$ by choosing increments in each component as follows \begin{equation*} \Delta W^i(t_n,t_{n+1})\sim\sqrt{h}\,\cdot\textsf{N}(0,1). \end{equation*} This is more expensive, but sample paths $\hat W(\omega)$ generated in this way allow us to compare $\hat y_t(\omega)$ and $y_t(\omega)$ directly, in the sense that one can show \begin{equation*} \Eb\,\|y_t-\hat y_t\|=\mathcal O\bigl(h^{\frac{p}{2}}\bigr) \end{equation*} for some order of strong convergence $p/2$, which we will discuss in detail in Section~\ref{sec:strongerror}. Often in practice we take $\|\cdot\|$ to be the Euclidean norm so that the convergence shown is in the $L^2$-norm. Given a sample multidimensional path $\hat W(\omega)$ on $[0,T]$, how do we actually construct an approximate solution $\hat y_t$? Here we are guided by the stochastic Taylor expansion. Indeed, classical strong numerical methods are based on truncating the stochastic Taylor expansion \begin{equation*} y_t=\sum_{w\in\Ab^\ast}J_w(t)\,V_w(y_0), \end{equation*} and applying the approximation over successive subintervals of the global interval of integration $[0,T]$; see Kloeden and Platen~\cite{KP} or Milstein~\cite{Mil}. We present three simple example numerical approximation methods.
\subsection{\textbf{\textsf{Euler--Maruyama method}}} If we truncate the It\^o form of the stochastic Taylor series after the first order terms we generate the \emph{Euler--Maruyama numerical method} as follows: \begin{equation*} \hat y_{n+1}=\hat y_n+h\,\tilde V_0(\hat y_n) +\sum_{i=1}^d\bigl(\Delta W^i(t_n,t_{n+1})\bigr)\,V_i(\hat y_n). \end{equation*} This is a numerical scheme with global order of convergence $h^{\frac12}$. We explain in Section~\ref{sec:strongerror} why we have used the It\^o drift vector field here. \begin{figure} \caption{Weak and strong simulations for the scalar linear example given in Section~2, with $a=3$ and $b=2$. For this example we took $y_0=1$, $T=1$, $h=0.05$ and $P=10$ for both simulations, though only $5$ sample paths are shown above in all cases. Top left are $5$ sample binomial branching paths $w$, and top right are $5$ sample Brownian paths $W$. Lower left are $5$ sample solution paths $y$ using the binomial branching paths, while lower right are $5$ sample solution paths $Y$ using the Brownian paths; both computed using the Euler--Maruyama method. At each time-step the thick black line shows the average value over $P=10$ samples and the red line is the analytic solution for the expectation.} \label{fig:weakstrongsim} \end{figure} \subsection{\textbf{\textsf{Milstein method}}} We now truncate the stochastic Taylor series after the second order terms. This generates the \emph{Milstein numerical method} given by \begin{equation*} \hat y_{n+1}=\hat y_n+h\,V_0(\hat y_n)+\sum_{i=1}^d\bigl(\Delta W^i(t_n,t_{n+1})\bigr)\,V_i(\hat y_n) +\sum_{i,j=1}^dJ_{ij}(t_n,t_{n+1})V_{ij}(\hat y_n). \end{equation*} An important and expensive ingredient in this method is the simulation of the multiple integrals $J_{ij}(t_n,t_{n+1})$ for $i\neq j$ shown, on each integration step.
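To see the two schemes side by side, the following Python sketch integrates scalar geometric Brownian motion, for which the exact solution is known and, as discussed below, $J_{11}=\tfrac12(\Delta W^1)^2$, so no L\'evy area is needed. It compares root-mean-square errors at time $T$ along the same sampled paths; all parameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
at, b, y0 = 1.0, 1.0, 1.0          # Ito drift coefficient and diffusion
a = at - 0.5 * b**2                # corresponding Stratonovich drift
T, N, P = 1.0, 100, 2000
h = T / N

dW = np.sqrt(h) * rng.standard_normal((N, P))
W = dW.sum(axis=0)
exact = y0 * np.exp(a*T + b*W)     # exact geometric Brownian motion

em = np.full(P, y0)
mil = np.full(P, y0)
for n in range(N):
    em = em + at*em*h + b*em*dW[n]                       # Euler--Maruyama
    # Milstein adds the J_11 = (1/2)dW^2 term (Stratonovich drift a):
    mil = mil + a*mil*h + b*mil*dW[n] + 0.5*b*b*mil*dW[n]**2

rmse = lambda z: np.sqrt(np.mean((z - exact)**2))
print(rmse(em), rmse(mil))         # Milstein (order h) beats EM (order h^{1/2})
```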
When $i=j$, the multiple integrals $J_{ii}(t_n,t_{n+1})$ are cheaply evaluated by \begin{equation*} J_{ii}(t_n,t_{n+1})=\int_{t_n}^{t_{n+1}}\int_{t_n}^{\tau_1}\,\rd W^i_{\tau_2}\,\rd W^i_{\tau_1}= \tfrac12\bigl(\Delta W^i(t_n,t_{n+1})\bigr)^2. \end{equation*} When $i\neq j$ we have by integration by parts that \begin{equation*} J_{ji}=J_iJ_j-J_{ij}. \end{equation*} Hence we need only compute one double integral for each pair $i\neq j$. Equivalently we need only compute the L\'evy area given by \begin{equation*} A_{ij}\coloneqq\tfrac12(J_{ij}-J_{ji}), \end{equation*} since $J_{ij}=\tfrac12J_iJ_j+A_{ij}$. By Stokes' Theorem, the L\'evy area on the interval $[t_n,t_{n+1}]$ is the chordal area for the path $(W^i,W^j)$ on $[t_n,t_{n+1}]$. This can be seen directly from the definition since \begin{equation*} A_{ij}(t)=\tfrac12\int_{0}^{t}\bigl(W^i_{\tau}\,\rd W^j_{\tau}-W^j_{\tau}\,\rd W^i_{\tau}\bigr). \end{equation*} We consider the issue of simulating the L\'evy area in some detail in Section~\ref{sec:levy}. The Milstein scheme has global order of convergence $h$. \subsection{\textbf{\textsf{Castell--Gaines method}}} Consider the exponential Lie series $\psi_t=\log\varphi_t$ generated by taking the logarithm of the stochastic Taylor series for the flow-map, i.e.\/ \begin{align*} \psi_t&=(\varphi_t-\id)-\tfrac12(\varphi_t-\id)^2+\tfrac13(\varphi_t-\id)^3+\cdots\\ &=\sum_{i=0}^dJ_iV_i+\sum_{i>j}\tfrac12(J_{ij}-J_{ji})[V_i,V_j]+\cdots. \end{align*} This series is also known as the Chen--Strichartz, Chen--Fliess or Magnus series. The Castell--Gaines method is a strong numerical method based on truncating the exponential Lie series. As for the methods above, we generate a set of multidimensional paths $\hat W(\omega)$ on $[0,T]$ with Wiener increments $\Delta W^i(t_n,t_{n+1})$ sampled on the scale $h=t_{n+1}-t_n$. On each computation interval $[t_n,t_{n+1}]$, we replace the $J_i(t_n,t_{n+1})$ by the Normal samples $\Delta W^i(t_n,t_{n+1})$.
If required, the L\'evy area increments $A_{ij}(t_n,t_{n+1})$ shown are also replaced by suitable samples $\hat A_{ij}(t_n,t_{n+1})$ as we outline in Section~\ref{sec:levy}. Then across the computation interval $[t_n,t_{n+1}]$, we have \begin{equation*} \hat\psi_{t_n,t_{n+1}}=\sum_{i=0}^d\bigl(\Delta W^i(t_n,t_{n+1})\bigr)\,V_i +\sum_{i>j}\hat A_{ij}(t_n,t_{n+1})\,[V_i,V_j]. \end{equation*} The solution at time $t_{n+1}$ is then approximately given by \begin{equation*} \hat y_{t_{n+1}}\approx\exp(\hat\psi_{t_n,t_{n+1}})\circ\hat y_{t_n}. \end{equation*} Note that, for each path, $\Delta W^i(t_n,t_{n+1})$ and $\hat A_{ij}(t_n,t_{n+1})$ are fixed constants. Hence the truncated Lie series $\hat\psi_{t_n,t_{n+1}}$ is itself an autonomous vector field. Thus, for $\tau\in[0,1]$ and with $u(0)=\hat y_{t_n}$, we solve the ordinary differential equation \begin{equation*} u'(\tau)=\hat\psi_{t_n,t_{n+1}}\circ u(\tau). \end{equation*} Using a suitable high order ordinary differential integrator generates $u(1)\approx\hat y_{t_{n+1}}$. Without the L\'evy area the Castell--Gaines method has global order of convergence $h^{\frac12}$; while with the L\'evy area it has global order of convergence $h$. Castell and Gaines~\cite{CG1,CG2} prove that their strong order $h^{\frac12}$ method is always more accurate than the Euler--Maruyama method. Indeed they prove that this method is \emph{asymptotically efficient} in the sense of Newton~\cite{Newton}. Further, in the case of a single driving Wiener process ($d=1$), they prove the same is true for their strong order $h$ method. By asymptotically efficient we mean, quoting from Newton, that they ``minimize the leading coefficient in the expansion of mean-square errors as power series in the sample step size''.
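For a single Wiener process and a scalar linear equation the Castell--Gaines update can be written out explicitly: there are no Lie brackets, $\hat\psi=(ah+b\Delta W)\,y\partial_y$, and its time-one flow simply multiplies by $\exp(ah+b\Delta W)$, so the method reproduces the exact Stratonovich solution. A Python sketch with arbitrary coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, y0 = 0.3, 0.8, 1.0      # Stratonovich coefficients: dy = a*y dt + b*y o dW
T, N = 1.0, 50
h = T / N
dW = np.sqrt(h) * rng.standard_normal(N)

# d = 1: no Levy area needed; psi = (a*h + b*dW)*V with V(y) = y,
# and exp(psi) can be flowed exactly for this linear vector field.
y = y0
for n in range(N):
    y = np.exp(a*h + b*dW[n]) * y

exact = y0 * np.exp(a*T + b*dW.sum())
print(abs(y - exact))          # zero up to rounding: the scheme is exact here
```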
\section{\textsf{Simulating the L\'evy area}}\label{sec:levy} A fundamental and crucial aspect of the implementation of strong order one or higher integrators for stochastic differential equations is the need to successfully simulate the L\'evy chordal areas $A_{ij}(t_n,t_{n+1})$, when the diffusion vector fields do not commute. This aspect is more than just an additional concern once we step off the cliff edge of simple path increment approximations with frozen vector fields characterized by the Euler--Maruyama approximation. It also represents a substantial technical difficulty. Here we will outline several methods employed to simulate it sufficiently accurately; the important distinguishing criterion for the success of each method will be its asymptotic rate of convergence as $h\to0$. A survey of these methods can be found in Ryd\'en and Wiktorsson~\cite{RW}. Here we will focus on the case of two independent Wiener processes $W^1$ and $W^2$ and the requirement to simulate $A_{12}(h)\coloneqq A_{12}(t_n,t_{n+1})$, given the Normal increments $\Delta W^1(h)\coloneqq\Delta W^1(t_n,t_{n+1})$ and $\Delta W^2(h)\coloneqq\Delta W^2(t_n,t_{n+1})$ across $[t_n,t_{n+1}]$. \subsection{\textbf{\textsf{Simulating Normal random variables}}} We will start with the question: what is the most efficient method for generating $\Delta W^1(h)$ and $\Delta W^2(h)$? The simple and direct answer is to use the Matlab command \begin{quote} \begin{center} \texttt{sqrt(h)*randn} \end{center} \end{quote} This command invokes an algorithm that has been scrupulously refined and adapted over the years. One of the simplest efficient earlier incarnations of this algorithm is Marsaglia's polar method~\cite{Mar}. The Box--M\"uller method is also very simple but not quite as efficient; see Kloeden and Platen~\cite{KP} for a discussion of these issues. Also see Moro's inversion method~\cite{Moro}. We outline Marsaglia's method here because of its simplicity and effectiveness.
\begin{algorithm}[\textbf{\textsf{Marsaglia's method}}] To produce two standard Normal samples: \begin{enumerate} \item Generate two independent uniform random samples $U_1,U_2\in\text{\textsf{Unif}}([-1,1])$; \item If $S\coloneqq U_1^2+U_2^2\in(0,1)$ continue, otherwise repeat Step~1; \item Compute $X_i=U_i\sqrt{-2\,\ln(S)/S}$, for $i=1,2$; then $X_1$ and $X_2$ are independent standard Normal samples. \end{enumerate} \end{algorithm} \subsection{\textbf{\textsf{Conditional distribution of L\'evy area}}} The \emph{characteristic function} $\hat\phi$ of the probability density function for $A_{12}(h)$ given $\Delta W^1(h)$ and $\Delta W^2(h)$ is \begin{equation*} \hat\phi(\xi)=\frac{\tfrac12 h\xi}{\sinh(\tfrac12h\xi)} \exp\Bigl(-\tfrac12a^2\bigl(\tfrac12h\xi\coth(\tfrac12h\xi)-1\bigr)\Bigr) \end{equation*} where $a^2=\bigl(\bigl(\Delta W^1(h)\bigr)^2+\bigl(\Delta W^2(h)\bigr)^2\bigr)/h$. L\'evy derived this in a very succinct calculation in 1951; see L\'evy~\cite[pp.~171--3]{Levy}. Since $\hat\phi$ is the characteristic function, i.e.\ the Fourier transform of the corresponding probability density function, the actual probability density function $\phi$ is given by the inverse Fourier transform (see for example Gaines and Lyons~\cite{GL:Mar}): \begin{equation*} \phi(x)=\tfrac{1}{\pi}\int_0^\infty \hat\phi(\xi)\,\cos(x\,\xi)\,\rd\xi. \end{equation*} The ungainly form of this probability density function means that generating samples is not likely to be easy. For example, the simplest method for sampling from a continuous distribution $f$ is based on the inversion of its (cumulative) distribution function $F(x)\coloneqq\int_{-\infty}^xf(\eta)\,\rd\eta$. If we sample from the uniform distribution, say $U\sim\textsf{Unif}([0,1])$, then $F^{-1}(U)$ is a sample from the target distribution. For this to be a practical sampling method we must have an analytic form for $F$ or an extremely efficient quadrature approximation for the integral in $F$ at our disposal.
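Marsaglia's method itself is easily coded. The sketch below (plain Python, arbitrary seed) implements the three steps of the algorithm above and checks the first two sample moments of the output.

```python
import math, random

def marsaglia_polar(rand=random.random):
    """Two independent standard Normal samples via Marsaglia's polar method."""
    while True:
        # Step 1: two independent Unif([-1,1]) samples.
        u1, u2 = 2.0*rand() - 1.0, 2.0*rand() - 1.0
        s = u1*u1 + u2*u2
        # Step 2: accept only points strictly inside the unit disc.
        if 0.0 < s < 1.0:
            # Step 3: scale to obtain two independent N(0,1) samples.
            f = math.sqrt(-2.0*math.log(s)/s)
            return u1*f, u2*f

random.seed(6)
samples = [x for _ in range(100_000) for x in marsaglia_polar()]
n = len(samples)
mean = sum(samples)/n
var = sum(x*x for x in samples)/n - mean*mean
print(abs(mean), abs(var - 1.0))   # both small, of size O(n^{-1/2})
```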
No such analytic form is available for the probability density function $\phi$ of the L\'evy area. Several methods have been proposed for sampling from $\phi$. Gaines and Lyons~\cite{GL:Mar} proposed one of the most efficient, based on Marsaglia's rectangle--wedge--tail method. However it can be complicated to implement. Kloeden and Platen~\cite{KP} and Wiktorsson have proposed methods based on the Karhunen--Lo\`eve expansion, which are much easier to code. Ryd\'en and Wiktorsson~\cite{RW} proposed a method based on recognising the characteristic function $\hat\phi$ as a product of characteristic functions for a logistic random variable and an infinite sum of Poisson mixed Laplace random variables. Gaines and Lyons~\cite{GL97} also proposed a method based on the conditional expectation of the L\'evy area, conditioned on intermediate Wiener increments. We discuss these methods in the following four subsections. Stump and Hill~\cite{SH} have also proposed a very efficient method, whose potential in a practical implementation is yet to be explored. \begin{figure} \caption{Sample two-dimensional Wiener path and enclosed chordal L\'evy area.} \end{figure} \subsection{\textbf{\textsf{Karhunen--Lo\`eve expansion method}}} L\'evy~\cite{Levy} derived the form for the characteristic function $\hat\phi$ for the L\'evy area, using the Karhunen--Lo\`eve expansion for a Brownian bridge. This is an expansion in orthogonal basis functions. The details can be found in L\'evy~\cite{Levy} or Kloeden and Platen~\cite{KP}. If $U_{k},V_{k},X_{k},Y_{k}$ are independent $\textsf{N}(0,1)$ samples, also independent of $\Delta W^1(h)$ and $\Delta W^2(h)$, then the L\'evy area can be represented by \begin{equation*} A_{12}(h)=\frac{h}{2\pi}\sum_{k=1}^\infty \tfrac{1}{k}\Bigl(U_{k}\bigl(Y_{k}-\sqrt{\tfrac{2}{h}}\Delta W^2(h)\bigr) -V_{k}\bigl(X_{k}-\sqrt{\tfrac{2}{h}}\Delta W^1(h)\bigr)\Bigr).
\end{equation*} In practice, we truncate this expansion to only include $k\leqslant Q$ terms and use the truncation, $\hat A_{12}(h)$, as an approximation for the L\'evy area. The important question now, as far as strong simulation is concerned, is how many standard Normal random variables we need to simulate in order to have a sufficiently accurate L\'evy area sample $\hat A_{12}(h)$, i.e.\/ how large must $Q$ be? Note that the coefficients in the above expansion scale like $h/k$. The properties of the tail of the series, i.e.\/ all the terms for $k\geqslant Q+1$, mean that in the $L^2$ sense it scales as $h/\sqrt{Q}$. For a Milstein numerical approximation we require that the strong error is locally of order $h^{\frac{3}{2}}$; see Section~\ref{sec:strongerror}. Hence we must choose $Q\approx h^{-1}$ for a sufficiently accurate sample. \subsection{\textbf{\textsf{Ryd\'en and Wiktorsson's method}}} Ryd\'en and Wiktorsson~\cite{RW} proposed several methods; we detail here the most expedient. The characteristic function $\hat\phi$ is the product of two characteristic functions \begin{equation*} \hat\phi_{X(h)}(\xi)=\frac{\tfrac12 h\xi}{\sinh(\tfrac12h\xi)} \qquad\text{and}\qquad \hat\phi_{Y(h)}(\xi)=\exp\Bigl(-\tfrac12a^2\bigl(\tfrac12h\xi\coth(\tfrac12h\xi)-1\bigr)\Bigr) \end{equation*} corresponding to the random variables $X(h)$ and $Y(h)$, respectively. We observe that $\hat\phi_{X(h)}$ is the characteristic function of a logistic random variable which can be generated by the inverse method, i.e.\ pick $U\sim\textsf{Unif}([0,1])$ and let $X(h)=(h/2\pi)\,\log\bigl(U/(1-U)\bigr)$. Then using the identity \begin{equation*} z\coth z-1=2\sum_{k=1}^\infty \frac{z^2}{\pi^2k^2+z^2}, \end{equation*} we observe that \begin{equation*} \hat\phi_{Y(h)}(\xi)=\exp\biggl(-a^2\sum_{k=1}^\infty\frac{\xi^2}{(2\pi k/h)^2+\xi^2}\biggr). \end{equation*} This can be viewed as a sum of compound Poisson random variables.
Indeed if for each $k\in\mathbb N$, we generate $N_k\sim\textsf{Poisson}(a^2)$ and then, for $j=1,\ldots,N_k$ generate independent Laplace random variables $Y_{jk}\sim\textsf{Laplace}(1/k)$, then \begin{equation*} Y(h)=\frac{h}{2\pi}\sum_{k=1}^\infty\sum_{j=1}^{N_k} Y_{jk}, \end{equation*} has density $\phi_{Y(h)}$. In a practical implementation we truncate this expansion to include $k\leqslant Q$ terms, and use the truncation as an approximation for the L\'evy area. Further the tail sum, by the central limit theorem, is asymptotically Normally distributed and can be approximated by a Normal random variable. This provides quite a dramatic improvement as it is possible to show that this method only requires the number of standard Normal samples to be $Q\approx h^{-\frac{1}{2}}$. \subsection{\textbf{\textsf{Wiktorsson's method}}} Wiktorsson proposed an approach that uses the Karhunen--Lo\`eve expansion, but also simulates the tail sum as in the previous method. Again, by the central limit theorem, the tail sum can be approximated by a Normal random variable, and the corresponding improvement is that this method only requires the number of standard Normal samples to be $Q\approx h^{-\frac{1}{2}}$. Wiktorsson's method has been successfully implemented by Gilsing and Shardlow~\cite{GilSh} in their SDELab, to which the interested reader is referred. \subsection{\textbf{\textsf{Conditional expectation}}} One more approach to simulating the L\'evy area, or equivalently $J_{12}(t_n,t_{n+1})$, is based on replacing $J_{12}(t_n,t_{n+1})$ by its conditional expectation $\hat J_{12}(t_n,t_{n+1})$, as follows. Suppose we are about to perform the numerical update for the solution across the interval $[t_n,t_{n+1}]$. We generate $Q$ pairs of independent standard Normal random variables $X_q,Y_q\sim \textsf{N}(0,1)$ for $q=1,\ldots,Q$.
Set $\tau_q=t_n+q\Delta t$ for $q=0,\ldots,Q$, where $\Delta t$ is defined by $Q\Delta t=h$, and set $\Delta W^1(\tau_q)=\sqrt{\Delta t}\,X_{q+1}$ and $\Delta W^2(\tau_q)=\sqrt{\Delta t}\,Y_{q+1}$ for the increments across $[\tau_q,\tau_{q+1}]$, $q=0,\ldots,Q-1$. We thus generate a two-dimensional Brownian sample path on $[t_n,t_{n+1}]$. We can take $\Delta W^1(h)$ and $\Delta W^2(h)$ to be the increments across the interval $[t_n,t_{n+1}]$. More importantly, we can use the intervening path information we have generated on the scale $\Delta t$ to approximate $J_{12}(t_n,t_{n+1})$. Indeed, $J_{12}(t_n,t_{n+1})$ can be expressed as \begin{align*} J_{12}(t_n,t_{n+1})=&\; \int_{t_n}^{t_{n+1}}\int_{t_n}^\tau \,\mathrm{d}W_{\tau_1}^1 \,\mathrm{d}W_{\tau}^2\\ =&\; \sum_{q=0}^{Q-1}\int_{\tau_q}^{\tau_{q+1}} (W_{\tau}^1-W_{\tau_q}^1) +(W_{\tau_q}^1-W_{t_n}^1) \,\mathrm{d}W_{\tau}^2\\ =&\;\sum_{q=0}^{Q-1}J_{12}(\tau_q,\tau_{q+1}) +\sum_{q=0}^{Q-1}\bigl(W_{\tau_q}^1-W_{t_n}^1\bigr) \,\Delta W^2(\tau_q). \end{align*} The quantity \begin{equation*} \hat J_{12}(t_n,t_{n+1})\coloneqq\sum_{q=0}^{Q-1}\bigl(W_{\tau_q}^1-W_{t_n}^1\bigr) \,\Delta W^2(\tau_q) \end{equation*} represents the expectation of $J_{12}(t_n,t_{n+1})$ conditioned on the increments $\Delta W^1(\tau_q)$ and $\Delta W^2(\tau_q)$. From an algebraic and geometric perspective, $\hat J_{12}(t_n,t_{n+1})$ represents a suitable approximation to $J_{12}(t_n,t_{n+1})$. Computing its mean-square strong error, and using that $\bigl\|J_{12}(\tau_q,\tau_{q+1})\bigr\|_{L^2}^2=\tfrac12(\Delta t)^2$, we see that \begin{equation*} \left\|J_{12}(t_n,t_{n+1})-\hat J_{12}(t_n,t_{n+1})\right\|_{L^2}^2 =\sum_{q=0}^{Q-1}\bigl\|J_{12}(\tau_q,\tau_{q+1})\bigr\|_{L^2}^2 =\tfrac12Q(\Delta t)^2=h^2/(2Q). \end{equation*} Hence its root-mean-square strong error is $h/\sqrt{2Q}$. Thus, as for the Karhunen--Lo\`eve expansion approach, to achieve a suitable approximate sample for the stochastic area integral, this method requires $Q\approx h^{-1}$. One advantage of this method is that it is very convenient for generating log-log error plots.
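The decay of the mean-square error with $Q$ is simple to observe numerically. The Python sketch below builds $\hat J_{12}$ from coarsened increments of a single fine-resolution path, and compares it with the fine-grid left-sum approximation of $J_{12}$ used as a proxy for the truth; all resolutions and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
h, Qf, P = 1.0, 1024, 2000
dt = h / Qf

def J12_hat(W1, dW2, stride):
    """Conditional-expectation approximation using increments on a coarser grid."""
    W1c = W1[::stride][:-1]                       # W^1 at the coarse points tau_q
    dW2c = dW2.reshape(-1, stride).sum(axis=1)    # coarse increments of W^2
    return np.sum(W1c * dW2c)

errs = {4: [], 64: []}
for _ in range(P):
    dW1 = np.sqrt(dt) * rng.standard_normal(Qf)
    dW2 = np.sqrt(dt) * rng.standard_normal(Qf)
    W1 = np.concatenate(([0.0], np.cumsum(dW1)))
    ref = np.sum(W1[:-1] * dW2)                   # fine-grid proxy for J_12
    for Q in errs:
        errs[Q].append(ref - J12_hat(W1, dW2, Qf // Q))

mse = {Q: np.mean(np.square(e)) for Q, e in errs.items()}
print(mse[4], mse[64])   # the mean-square error decays like h^2/Q
```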
\section{\textsf{Strong error}}\label{sec:strongerror} We will focus here on the global, strong $L^2$ error. In practical terms, the global error is generated by the accumulation of contributions from the local error. The local error is itself the leading order terms in the remainder $R_t$, say, of our truncated stochastic Taylor series. Note that there is also a contribution to the global error from the approximate simulation of the L\'evy area; however we will assume here that the L\'evy area has been simulated sufficiently accurately, as discussed in detail in the last section, so that its contribution is \emph{small} in comparison to the truncation error. Suppose we base a strong numerical approximation on truncating the stochastic Taylor expansion (in Stratonovich form). Let $\hat y$ denote the truncated expansion and $R$ the corresponding remainder; hence the exact solution is $y=\hat y+R$. To guarantee our numerical scheme based on such a truncation is globally of order $h^{m}$, where $m\in\tfrac12\mathbb Z$, which terms must we keep in $\hat y$? We give the following rule. \noindent\textsf{Rule of Thumb:} Terms in the remainder $R$ of $L^2$ measure $h^m$ with: \begin{itemize} \item zero expectation, accumulate so as to contribute to the global error as $h^{m-\frac12}$ order terms; \item non-zero expectation, accumulate so as to contribute to the global error as $h^{m-1}$ order terms. \end{itemize} Hence to achieve an integrator with global error of order $h^{m}$, we must retain in $\hat y$: \begin{itemize} \item all terms with $L^2$ measure of order $h^{m'}$ for all $m'\leqslant m$; \item the expectation of all terms of order $h^{m+\frac12}$ which have non-zero expectation (the corresponding terms left in the remainder will then have zero expectation). \end{itemize} \noindent\textsf{Example (Euler--Maruyama).} Recall that we based the Euler--Maruyama approximation on the truncated It\^o Taylor series.
If we had truncated the stochastic Taylor series in Stratonovich form, then according to the rules above, to achieve global order $h^{\frac12}$ we should retain in our integrator the expectation of the terms \begin{equation*} \sum_{i=1}^d(V_i\cdot\partial_yV_i)\,J_{ii}. \end{equation*} Since $\Eb\, J_{ii}=\tfrac12 h$, we thus recover the corresponding truncated It\^o Taylor series. \section{\textsf{Further issues}} There are many important simulation issues we have not had space to discuss. Chief among these is the numerical stability of the strong methods we have explicitly outlined. This issue is discussed in Higham~\cite{Higham:intro} and more can be found for example in Buckwar, Horv\'ath--Bokor and Winkler~\cite{BHBW}. \appendix \section{\textsf{Stratonovich to It\^o relations}} We give here some Stratonovich to It\^o relations for the convenience of the reader---more details can be found in Kloeden and Platen~\cite{KP}. For the words $w$ shown, the Stratonovich integrals $J_w$ can be expressed in terms of It\^o integrals $I_w$ as follows: \begin{align*} w=a_1a_2\colon&\; J_w=I_w+\tfrac12I_0\,\delta_{a_1=a_2\neq0};\\ w=a_1a_2a_3\colon&\; J_w=I_w+\tfrac12(I_{0a_3}\,\delta_{a_1=a_2\neq0} +I_{a_10}\,\delta_{a_2=a_3\neq0});\\ w=a_1a_2a_3a_4\colon&\; J_w=I_w +\tfrac14 I_{00}\,\delta_{a_1=a_2\neq0}\delta_{a_3=a_4\neq0}\\ &\;\qquad\qquad+\tfrac12(I_{0a_3a_4}\,\delta_{a_1=a_2\neq0}+I_{a_10a_4}\,\delta_{a_2=a_3\neq0} +I_{a_1a_20}\,\delta_{a_3=a_4\neq0}). \end{align*} Note that the expectation of any It\^o integral $I_w$ is zero, i.e.\ $\Eb\, I_w=0$ for any word $w\in\Ab^\ast$ which has at least one non-zero letter. \section{\textsf{Sample program for weak and strong Euler--Maruyama}} We provide the listing for the weak vs strong Euler--Maruyama simulation shown in Figure~\ref{fig:weakstrongsim}.
\begin{lstlisting}[frame=topline,caption={Weak vs strong simulation},label=WVSS]
a=3.0; b=1.4; y0=1.0;
P=10; h=0.05; T=1.0; N=T/h;
% weak simulation: binomial branching paths
dw=zeros(N,P); w=zeros(N+1,P);
binom=binornd(1,1/2,[N,P]);
dw=sqrt(h)*(1-2*binom);
w(2:N+1,:)=cumsum(dw,1);
y=zeros(N+1,P);
for p=1:P
  y(1,p)=y0;
  for n=1:N
    y(n+1,p)=y(n,p)+a*y(n,p)*h+b*y(n,p)*dw(n,p);
  end
end
% strong simulation: Gaussian Wiener increments
dW=zeros(N,P); W=zeros(N+1,P);
dW=sqrt(h)*randn(N,P);
W(2:N+1,:)=cumsum(dW,1);
Y=zeros(N+1,P);
for p=1:P
  Y(1,p)=y0;
  for n=1:N
    Y(n+1,p)=Y(n,p)+a*Y(n,p)*h+b*Y(n,p)*dW(n,p);
  end
end
% sample averages at each time-step
expect_y=zeros(N+1,1); expect_Y=zeros(N+1,1);
for n=1:N+1
  expect_y(n)=mean(y(n,:));
  expect_Y(n)=mean(Y(n,:));
end
\end{lstlisting}
\section{\textsf{Example strong simulation program}} \subsection{\textbf{\textsf{Heston model strong simulation}}} We provide here a sample program that shows how to perform log-log error plots for a strong simulation. We used a real example, the Heston model, and applied the full truncation Euler--Maruyama type numerical scheme devised by Lord, Koekkoek and Van Dijk~\cite{LKVD}. The log-log error vs stepsize, and error vs CPU time, are shown in Fig.~\ref{fig:simsde0p5}. Note that to estimate the strong global error, we must compare solutions for different stepsizes along the \emph{same} path, before taking the expectation. \begin{figure} \caption{Error vs stepsize and error vs CPU time for the Heston model.
The parameter values can be seen in the program listing.} \label{fig:simsde0p5} \end{figure}
\begin{lstlisting}[frame=topline,caption={Heston model strong simulation},label=HMSS]
T0=0; T=1; M=10; hmin=(T-T0)/2^M;
Mstart=4; hmax=(T-T0)/2^Mstart;
Q=(1/(hmin))*(hmax/hmin); dt=hmax/Q;
R=M-Mstart+1; P=100;
alpha=2.0; theta=0.09; beta=0.1; rho=0.5; mu=0.05;
ic=[1.0; 0.09];
YFT=zeros(P,R,2); clockYFT=zeros(1,R);
for p=1:P
  for r=1:R
    YFT(p,r,:)=ic;
  end
end
for jj=1:2^Mstart
  YFTold=YFT;
  for p=1:P
    siv=hmax/hmin;
    dW1=sqrt(dt)*randn(1,Q);
    dW2=sqrt(dt)*randn(1,Q);
    dW0=(zeros(1,Q)+1)*dt;
    for r=1:R
      SF=2^(r-1); h=SF*hmin; L=hmax/h; QR=Q/(SF^2);
      dw0=zeros(1,QR); dw1=zeros(1,QR); dw2=zeros(1,QR);
      w0=zeros(1,QR); w1=zeros(1,QR); w2=zeros(1,QR);
      for j=1:QR
        dw0(j)=sum(dW0((j-1)*(SF^2)+1:j*(SF^2)));
        dw1(j)=sum(dW1((j-1)*(SF^2)+1:j*(SF^2)));
        dw2(j)=sum(dW2((j-1)*(SF^2)+1:j*(SF^2)));
      end
      QF=QR/L;
      for n=1:L
        w0((n-1)*QF+1:n*QF)=cumsum(dw0((n-1)*QF+1:n*QF));
        w1((n-1)*QF+1:n*QF)=cumsum(dw1((n-1)*QF+1:n*QF));
        w2((n-1)*QF+1:n*QF)=cumsum(dw2((n-1)*QF+1:n*QF));
      end
      oldclockYFT=clockYFT(r);
      ts=cputime;
      YFT(p,r,:)=YFTapprox(p,h,L,T0,QF,dw0,dw1,dw2,w0,w1,w2, ...
        alpha,theta,beta,rho,mu, ...
        YFTold(p,r,:));
      clockYFT(r)=cputime-ts+oldclockYFT;
    end
  end
end
stepsizes=log10((2.^([1:R-1]))*hmin);
save('stepsizes','stepsizes')
save('P','P')
save('YFT','YFT')
save('clockYFT','clockYFT')
\end{lstlisting}
\subsection{\textbf{\textsf{Program listing: integrator}}} We give the program listing for the Heston model full truncation Euler--Maruyama integrator. \begin{lstlisting}[frame=topline,caption={Full truncation Euler--Maruyama integrator},label=FTEM]
function trunc=YFTapprox(p,h,L,T0,QR,dW0,dW1,dW2,...
W0,W1,W2,alpha,theta,beta,rho,mu,ic)
trunc=zeros(2,1);
J1=zeros(1,L); J2=zeros(1,L);
J1(1)=W1(QR); J2(1)=W2(QR);
pts=2*QR:QR:L*QR;
J1(2:L)=W1(pts); J2(2:L)=W2(pts);
S=ic(1); v=ic(2);
for n=1:L
  Sold=S; vold=v;
  S=exp((mu-max(0,vold)/2)*h+sqrt(max(0,vold))*J1(n))*Sold;
  v=vold+alpha*(theta-max(0,vold))*h+beta*(rho*J1(n) ...
    +sqrt(1-rho^2)*J2(n))*sqrt(max(0,vold));
end
trunc=[S; v];
end
\end{lstlisting}
\subsection{\textbf{\textsf{Program listing: log-log strong error plots}}} The following program performs the log-log plots for the strong $L^2$ error measure. \begin{lstlisting}[frame=topline,caption={Log-log strong error plots},label=LLEP]
load stepsizes
load YFT
load clockYFT
load P
R=length(stepsizes);
errorYFT=zeros(1,R);
diffYFT=zeros(P,R);
for r=1:R
  for p=1:P
    diffYFT(p,r)=norm(YFT(p,r+1)-YFT(p,1));
  end
end
errorYFT=sqrt(mean(diffYFT.^2,1));
figure
subplot(1,2,1)
plot(stepsizes(1:end-1),log10(errorYFT(1:end-1)),...
  '-ks','LineWidth',2)
xlabel('log_{10}(stepsize)')
ylabel('log_{10}(global error)')
title(['Number of sampled paths=',int2str(P)])
subplot(1,2,2)
plot(log10(clockYFT(2:end-1)),log10(errorYFT(1:end-1)),...
  '-ks','LineWidth',2)
xlabel('log_{10}(CPU time)')
ylabel('log_{10}(global error)')
title(['Number of sampled paths=',int2str(P)])
\end{lstlisting}
\end{document}
\begin{document} \title{Counting processes in $p$-variation with applications to recurrent events} \begin{abstract} Convergence results for averages of independent replications of counting processes are established in a $p$-variation setting and under certain assumptions. Such convergence results can be combined with functional differentiability results in $p$-variation in order to study the asymptotic properties of estimators that can be considered functionals of such averages of counting processes. Examples of this are given in recurrent events settings, confirming known results while also establishing the appropriateness of the pseudo-observation method for regression analysis. In a recurrent events setting with a terminal event, it is also established that it is more efficient to discard complete information on a censoring time and instead consider the censoring times censored by the terminal event. \end{abstract} \section{Introduction} The concept of $p$-variation allows for an elegant way of studying asymptotic properties of estimators depending on the data through the empirical distribution function of one or more one-dimensional variables. This is because of two things. Firstly, the empirical distribution function, based on independent and identically distributed observations, converges to the true distribution function in $p$-variation for many $p$. Secondly, many such estimators may be described as differentiable functionals of the empirical distribution function in a $p$-variation setting, at least for some values of~$p$. This means that asymptotic properties can be derived by what is essentially a functional delta method. Unfortunately, many estimators do not depend only on the empirical distribution of one-dimensional variables, but will rather be more complex. This motivates looking for extensions to this approach, which involves proving convergence in $p$-variation of more general averages to the true mean. 
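As a purely numerical illustration of the central quantity (our own sketch, not part of the paper's formal development; the function name is ours), the partition supremum defining the $p$-variation, made precise in the next section, can be computed exactly over the points of a fixed grid by dynamic programming; the grid-restricted supremum is a lower bound for the true $p$-variation over the whole interval.

```python
# Illustration (not from the paper): the supremum of
# sum_j |f(t_j) - f(t_{j-1})|^p over partitions drawn from a fixed
# grid of sample points, computed exactly by dynamic programming.
# The result is a lower bound for the true p-variation v_p(f; J).
def grid_p_variation(values, p):
    """values: f evaluated at increasing grid points; p >= 1."""
    best = [0.0] * len(values)  # best[j]: optimal partition sum ending at point j
    for j in range(1, len(values)):
        best[j] = max(best[i] + abs(values[j] - values[i]) ** p for i in range(j))
    # Appending a further point never decreases the sum, so the last entry is optimal.
    return best[-1]
```

For a single unit jump, e.g. `grid_p_variation([0, 1, 1], p)`, the computation returns $1$ for every $p \geq 1$, matching the $p$-variation of an indicator process $\indic{T \leq \cdot}$.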
In this paper, we will see how the approach described above can be used for estimators based on averages of counting processes. The main result is a convergence result for counting processes in $p$-variation which builds on and extends results by Qian \cite{qian1998}. Applications to different estimators of the mean function in a recurrent events setting serve as demonstrations of how the described approach can be used to derive asymptotic properties of the different estimators. Expressions for the influence functions and asymptotic variances are given for the different estimators, along with suggestions for natural variance estimators, and the appropriateness of the pseudo-observation method based on these estimators is established under some conditions. \section{Main results} \label{sec:main} A counting process $N$ is characterized by taking only non-negative integer values, $N \in \mathbb{N}_0$, and being increasing, $N(t) \geq N(s)$ for $t \geq s$. In our setting, counting processes are also continuous on the right with limits on the left and are defined on the time interval $[0, \infty]$ with the requirements $N(0) = 0$ and $N(\infty) = \lim_{t \to \infty} N(t)$. A counting process which is no larger than 1 is called simple. For a counting process $N$, the time points $T_k := \inf\{s : N(s) \geq k\}$ for $k \in \mathbb{N}$ characterize the process. Specifically, we can define simple counting processes by $N^{(k)}(s) = \indic{T_k \leq s}$, $s \in [0, \infty)$, for $k \in \mathbb{N}$, in which case \begin{equation} \label{eq:counting_decomposition} N(s) = \sum_{k = 1}^\infty N^{(k)}(s) \end{equation} for all $s \in [0, \infty]$. If $\operatorname{E}(N(\infty)) < \infty$ for a counting process $N$ as defined above, then $\operatorname{E}(N(s)) < \infty$ for all $s \in [0,\infty]$. In this case, the notation $F$ for the mean function given by $F(s) = \operatorname{E}(N(s))$ will be used in the following. Consider $p \geq 1$.
For a real function $f$ defined on an interval $J$ the $p$-variation is defined by \begin{equation} v_p(f;J) = \sup_{m \in \mathbb{N},\, t_0, \dots, t_m \in J} \sum_{j=1}^m |f(t_j)-f(t_{j-1})|^p \end{equation} and $f$ is of bounded $p$-variation when $v_p(f;J) < \infty$. An equivalent definition and the following results can be found in the book by Dudley and Norvai\v{s}a \cite{Dudley2010}. The definition $\|f\|_{(p)} = v_p(f;J)^{\frac{1}{p}}$ leads to a seminorm and if $\|f\|_\infty = \sup_{t \in J} |f(t)|$, norms are defined by both $\| f \|_\infty$, the supremum norm, and $\| f\|_{[p]} = \|f\|_{(p)} + \|f\|_{\infty}$, the $p$-variation norm. The space $\mathcal{W}_p(J)$ of all functions $f \colon J \to \mathbb{R}$ of bounded $p$-variation, $v_p(f;J) < \infty$, is a Banach space, that is, a complete normed vector space, when equipped with the $p$-variation norm. This is similarly the case for the subset $\mathcal{W}_p^{\mathsf{r}}(J)$ of right-continuous functions. The interval $J$, which will be $[0,\infty)$ in the following, is generally dropped from the notation when it is clear from the context, as has already been done in the cases of $\|\cdot\|_{\infty}$, $\|\cdot\|_{(p)}$, and $\|\cdot\|_{[p]}$. An important convergence result by Qian, Theorem~3.2 of \cite{qian1998}, implies the following result. \begin{prop} \label{prop:QianE} Let $F$ be a cumulative distribution function with mass on $(0, \infty)$ and let, for each $n \in \mathbb{N}$, $F_n$ be the empirical distribution of $n$ independent observations $X_1, \dots, X_n \in [0,\infty)$ from the distribution given by $F$, that is, given by $F_n(t) = \frac{1}{n} \sum_{i=1}^n \indic{X_i \leq t}$. Then, for $p \in [1, 2)$, a constant $C_p$, only depending on~$p$, exists such that \begin{equation} \label{eq:QianE} \operatorname{E}(v_p(F_n - F)) \leq C_p n^{1-p} \end{equation} for any $n \in \mathbb{N}$.
\end{prop} The inequality of \eqref{eq:QianE} also implies \begin{equation} \label{eq:QianE_alt} \operatorname{E}(\|F_n - F\|_{[p]}) \leq 2C_p^{\frac{1}{p}} n^{\frac{1-p}{p}} \end{equation} according to Jensen's inequality with the convex function $x \mapsto x^p$ and the fact that $F_n(0)-F(0)=0$ under the same conditions. The processes $t \mapsto \indic{X_i \leq t}$ at play above are simple counting processes. Proposition~\ref{prop:QianE} is the main ingredient in obtaining the following convergence result for more general simple counting processes, which generalizes~\eqref{eq:QianE_alt}. \begin{theorem} \label{theorem:simple} Consider a simple counting process $N$ with mean function $F$. For each $n \in \mathbb{N}$, let $F_n = n^{-1} \sum_{i=1}^n N_{i}$ be the average of independent replications $N_1, \dots, N_n$ of $N$. Then, for any $p \in [1, 2)$, a constant $K_p$, only depending on $p$, exists such that \begin{equation} \operatorname{E}(\| F_n - F \|_{[p]}) \leq K_p F(\infty)^{\frac{1}{p}} n^{\frac{1-p}{p}} \end{equation} for any $n \in \mathbb{N}$. \end{theorem} \begin{proof} Let $p \in [1,2)$, $n \in \mathbb{N}$ and the processes $N_1, \dots, N_n$ be given and let $\tilde n = \# \{i : N_i(\infty) = 1 \}$ be the number of observed jumps. Interpreting division by 0 as 0 allows for the equations \begin{equation} \label{eq:FnFsplit} \begin{aligned} F_n - F &= \frac{\tilde n}{n} \frac{1}{\tilde n}\sum_{i=1}^n N_i - F(\infty) \frac{F}{F(\infty)} \\ &= \frac{\tilde n}{n} \big( \frac{1}{\tilde n} \sum_{i=1}^n N_i - \frac{F}{F(\infty)} \big) + \big( \frac{\tilde n}{n} - F(\infty)\big) \frac{F}{F(\infty)}. \end{aligned} \end{equation} Because $F$ is 0 at 0 and increasing, we have $\| F /F(\infty)\|_{[p]} \leq 2$. Note that $F(\infty) \in [0,1]$ because $N$ is simple.
Since $\tilde n$ follows a binomial distribution of $n$ trials with probability $F(\infty)$, we have $\operatorname{E}(| \tilde n - n F(\infty)|) \leq \operatorname{E}(\tilde n(1-F(\infty)) + (n-\tilde n) F(\infty)) = 2 n F(\infty)(1-F(\infty))$ and $\operatorname{E}(| \tilde n - n F(\infty)|^2) = \Var(\tilde n) = nF(\infty)(1-F(\infty))$, and so $\operatorname{E}(| \tilde n - n F(\infty)|^p) \leq \operatorname{E}(\max(| \tilde n - n F(\infty)|, | \tilde n - n F(\infty)|^2)) \leq \operatorname{E}(| \tilde n - n F(\infty)|) + \operatorname{E}(| \tilde n - n F(\infty)|^2) \leq 3 n F(\infty)(1-F(\infty))$. It is then an application of Jensen's inequality with the convex function $x \mapsto x^p$ which reveals \begin{equation} \begin{aligned} \operatorname{E} \big(\big| \frac{\tilde n}{n} - F(\infty) \big|\big) &\leq \frac{1}{n} \operatorname{E}(| \tilde n - n F(\infty)|^p)^{\frac{1}{p}} \leq \frac{1}{n} \big(3n F(\infty)(1-F(\infty))\big)^{\frac{1}{p}} \\ & \leq 3^{\frac{1}{p}} n^{\frac{1-p}{p}} F(\infty)^{\frac{1}{p}}. \end{aligned} \end{equation} The function $F/F(\infty)$ is a cumulative distribution function. In the conditional distribution given $N_1(\infty), \dots, N_n(\infty)$, the average $\tilde n^{-1} \sum_{i=1}^n N_i$ is the empirical distribution function of $\tilde n$ independent observations from the distribution given by $F/F(\infty)$. Let $\sigAlg{A}$ be the $\sigma$-algebra generated by $N_1(\infty), \dots, N_n(\infty)$.
An application of Proposition~\ref{prop:QianE} and Jensen's inequality as before in the conditional distribution reveals, almost surely, \begin{equation} \begin{aligned} \operatorname{E} \big(\big\|\frac{1}{\tilde n} \sum_{i=1}^n N_i - \frac{F}{F(\infty)} \big\|_{[p]} \mathbin{|} \sigAlg{A} \big) &\leq 2\operatorname{E} \big(v_p \big(\frac{1}{\tilde n} \sum_{i=1}^n N_i - \frac{F}{F(\infty)} \big) \mathbin{|} \sigAlg{A} \big)^{\frac{1}{p}} \\ &\leq 2C_p^{\frac{1}{p}} \tilde n^{\frac{1-p}{p}}, \end{aligned} \end{equation} where $C_p$ is the constant from Proposition~\ref{prop:QianE}. It is worth noting that $\sigma(\tilde n) \subseteq \sigAlg{A}$ and that the same constant $C_p$ can be used no matter the distribution according to Proposition~\ref{prop:QianE}. The considerations above mean that equation~\eqref{eq:FnFsplit} leads to \begin{equation} \begin{aligned} \operatorname{E}(\|F_n - F\|_{[p]}) &\leq 2C_p^{\frac{1}{p}} \frac{1}{n} \operatorname{E}(\tilde n^{\frac{1}{p}}) + 3^{\frac{1}{p}} 2 n^{\frac{1-p}{p}} F(\infty)^{\frac{1}{p}} \\ &\leq 2(C_p^{\frac{1}{p}} + 3^{\frac{1}{p}}) n^{\frac{1-p}{p}} F(\infty)^{\frac{1}{p}} \end{aligned} \end{equation} where Jensen's inequality is again used to establish that $\operatorname{E}((\tilde n / n)^{1/p}) \leq \operatorname{E}(\tilde n/ n)^{1/p} = F(\infty)^{1/p}$. This shows the desired upper bound with the constant $K_p = 2(C_p^{\frac{1}{p}} + 3^{\frac{1}{p}})$. \end{proof} Let us now turn our attention to more general counting processes as defined in the beginning of this section. The characterization of a counting process as a sum of simple counting processes allows us to establish the following convergence result by appealing to Theorem~\ref{theorem:simple}. \begin{theorem} \label{theorem:generalE} Let $p \in [1,2)$ be given and consider a counting process $N$ with mean function $F$ such that $N(\infty)$ has finite moment of order $p+\varepsilon$ for some $\varepsilon > 0$. 
For each $n \in \mathbb{N}$, let $F_n = n^{-1} \sum_{i=1}^n N_{i}$ be the average of independent replications $N_1, \dots, N_n$ of $N$. Then a constant $C$ exists, depending on $p$ as well as the distribution of $N(\infty)$, such that \begin{equation} \label{eq:multjumpE} \operatorname{E}(\| F_n - F \|_{[p]}) \leq C n^{\frac{1-p}{p}}, \end{equation} for any $n \in \mathbb{N}$. \end{theorem} \begin{proof} Let $n \in \mathbb{N}$ and the processes $N_1, \dots, N_n$ be given. For each $k \in \mathbb{N}$, let $N^{(k)}$ denote the simple counting process corresponding to the decomposition of $N$ as in \eqref{eq:counting_decomposition} and similarly with $N_i^{(k)}$ for $i = 1, \dots, n$. Also, for $k \in \mathbb{N}$, let $F^{(k)}$ denote the mean function of $N^{(k)}$ and let $F_n^{(k)} = n^{-1} \sum_{i=1}^n N_i^{(k)}$ be the empirical mean function. We then obtain the identities $F = \sum_{k=1}^\infty F^{(k)}$ and $F_n = \sum_{k=1}^\infty F_n^{(k)}$. By the triangle inequality, the monotone convergence theorem, and Theorem~\ref{theorem:simple}, we now have \begin{equation} \label{eq:FnF_E_bound_general} \operatorname{E}(\|F_n - F\|_{[p]}) \leq \sum_{k=1}^\infty \operatorname{E}(\|F_n^{(k)} - F^{(k)}\|_{[p]}) \leq K_p \sum_{k=1}^\infty F^{(k)}(\infty)^{\frac{1}{p}} n^{\frac{1-p}{p}}. \end{equation} Now, $F^{(k)}(\infty) = \operatorname{P}(N(\infty) \geq k)$ and, by Markov's inequality, we have, for the given $\varepsilon > 0$, $\operatorname{P}(N(\infty) \geq k) \leq \operatorname{E}(N(\infty)^{p+\varepsilon})/k^{p+\varepsilon}$. Since $\operatorname{E}(N(\infty)^{p+\varepsilon}) < \infty$ by the moment condition, we obtain \begin{equation} C:= K_p \sum_{k=1}^\infty F^{(k)}(\infty)^{\frac{1}{p}} \leq K_p \operatorname{E}(N(\infty)^{p+\varepsilon})^{\frac{1}{p}} \sum_{k=1}^{\infty} k^{-1-\frac{\varepsilon}{p}} < \infty \end{equation} and this proves the desired result.
\end{proof} The bound in expectation in \eqref{eq:multjumpE} immediately, by Markov's inequality, gives the useful bound in probability, \begin{equation} \| F_n - F \|_{[p]} = O_{\operatorname{P}}(n^{\frac{1-p}{p}}) \end{equation} under the same assumptions. The following result gives an almost sure bound in some cases and has its inspiration in Theorem~4.2 of \cite{qian1998}. The proof will rely heavily on a lemma from \cite{dudley1983invariance} in addition to the results of Theorem~\ref{theorem:generalE} above. \begin{theorem} \label{theorem:almost_sure} Consider a counting process $N$ such that $N(\infty) \leq B$ almost surely for some $B > 0$. For each $n \in \mathbb{N}$, let $F_n = n^{-1} \sum_{i=1}^n N_{i}$ be the average of independent replications $N_1, \dots, N_n$ of $N$. Then for any $p \in [1,2)$ a constant $\lambda$ exists, depending on $p$ and $B$, such that \begin{equation} \label{eq:as_bound} \limsup_{n \to \infty} n^{\frac{p-1}{p}} \| F_n - F \|_{[p]} \leq \lambda \end{equation} almost surely. In particular, $\|F_n - F\|_{[p]} = O(n^{(1-p)/p})$ almost surely in this case. \end{theorem} \begin{proof} The statement is trivial for $p=1$, where $\|F_n - F\|_{[p]} \leq \lambda$ for all $n$ with $\lambda = 4B$. Consider a given $p \in (1,2)$ and, for now, a given $n \in \mathbb{N}$ and the processes $N_1, \dots, N_n$. We let $X_j = N_j -F$, for $j=1, \dots, n$, as well as $S_n = \sum_{j=1}^n X_j$ be random elements of $\mathcal{W}_p$. The $\{X_j\}_{j=1}^n$ can be considered an independent sequence in the terminology of Section~2 of \cite{dudley1983invariance}. Owing to the upper bound of $N$ by $B$, we have $F(s) \leq B$ and so $\|X_j\|_{[p]} \leq 4B =: M$ and $\sum_{j=1}^n \operatorname{E}(\|X_j\|_{[p]}^2) \leq n (4B)^{2} =: \tau_n$.
Lemma~2.6 of \cite{dudley1983invariance} now states that \begin{equation} \label{eq:dudley_bound} \operatorname{P}(\|S_n\|_{[p]} \geq K) \leq \exp(3 \gamma^2 \tau_n - \gamma (K - \operatorname{E}(\|S_n\|_{[p]}))) \end{equation} for any $\gamma \in [0, (2M)^{-1}]$ and any $K > 0$. Now, supposing $n$ is sufficiently large that $n^{(1-p)/p} \leq (2M)^{-1}$, we want to use this with $\gamma = n^{(1-p)/p}$ and $K = t n^{1/p}$ for a given $t > 0$. Note that $S_n = n(F_n - F)$ and use that $\operatorname{E}(\|S_n\|_{[p]}) \leq C n^{1/p}$ for some $C > 0$ according to Theorem~\ref{theorem:generalE} to see that equation~\eqref{eq:dudley_bound} implies \begin{equation} \label{eq:tail_DP} \begin{aligned} \operatorname{P}(n^{\frac{p-1}{p}}\|F_n - F\|_{[p]} \geq t) &\leq \exp \big( 3 n^{2\frac{1-p}{p}} n(4B)^2 - n^{\frac{1-p}{p}}(t n^{\frac{1}{p}} - C n^{\frac{1}{p}})\big) \\ &= \exp\big(-n^{\frac{2-p}{p}}(t - (C+3(4B)^2))\big) \end{aligned} \end{equation} for any $t > 0$ for sufficiently large $n$. Let $\lambda = C+3(4B)^2$. Since $p < 2$, the tail probability from~\eqref{eq:tail_DP} vanishes rapidly as $n$ increases whenever $t > \lambda$. In particular, $\sum_{n=1}^\infty \operatorname{P}(n^{(p-1)/p} \|F_n - F\|_{[p]} \geq t)$ converges for $t > \lambda$. The Borel--Cantelli lemma then reveals that $\operatorname{P}( \limsup_{n \to \infty} \{n^{(p-1)/p} \|F_n - F\|_{[p]} \geq t\}) = 0$ for $t > \lambda$. This implies $\limsup_{n \to \infty} n^{(p-1)/p} \|F_n - F\|_{[p]} \leq \lambda$ almost surely, which is the desired result. Looking at the proof of Theorem~\ref{theorem:generalE}, and equation~\eqref{eq:FnF_E_bound_general} in particular, reveals that $C \leq K_p B$, for $K_p$ from Theorem~\ref{theorem:simple}, can be used in this case, such that $\lambda$ can be taken to only depend on $p$ and $B$ if desired.
The statement $\|F_n - F\|_{[p]} = O(n^{(1-p)/p})$ almost surely means exactly the existence of a $\lambda$ such that \eqref{eq:as_bound} holds almost surely. \end{proof} \section{Application in a recurrent events setting} \label{sec:count} An example of a counting process is a process counting the number of recurrent events a study participant has experienced. Targets such as the expected number of events by a certain time point may be estimated in a straightforward manner by an average over independent replications when the process is completely observed. When the counting of events of interest is sometimes prevented by censoring, estimation may be more complicated. When the censoring time is itself censored, perhaps by a terminal event, then estimation is further complicated. In this section, we will see how the convergence results of Section~\ref{sec:main} may be applied to study the asymptotic properties of estimators in this setting by appealing to differentiability properties of the involved estimating functionals. Here, differentiability means Fréchet differentiability. Appendix~\ref{appendix:differentiability} includes the most important definitions and properties for our purposes in a general Banach space-based setting, primarily based on Chapter~5 of \cite{Dudley2010}. As mentioned in Section~\ref{sec:main}, the function space $\mathcal{W}_p$ of functions of bounded $p$-variation is a Banach space when equipped with the $p$-variation norm, as is the case for the subspace $\mathcal{W}_p^{\mathsf{r}}$ of right-continuous functions of bounded $p$-variation. In particular, owing to the inequalities $\|f g\|_{[p]} \leq \|f\|_{[p]} \|g\|_{[p]}$ and $\|\int_0^{(\cdot)} g(s) f(\mathrm{d} s)\|_{[p]} \leq k_p \|f\|_{[p]} \|g\|_{[p]}$ for a constant $k_p > 0$ for $f,g \in \mathcal{W}_p$ for $p \in [1,2)$, many important functionals are differentiable as functionals between $p$-variation-based spaces.
In addition to the two implied bilinear functionals, these differentiable functionals include mapping to the inverse element and product integration. Above, $\int_0^{(\cdot)} g(s) f(\mathrm{d} s)$ for $f,g \in \mathcal{W}_p$ should be considered a Young integral, see for instance \cite{Dudley1992}. The Young integral does correspond to the Lebesgue--Stieltjes integral when $f$ is of bounded variation, $f \in \mathcal{W}_1$, here. More details on these topics can be found in \cite{Dudley2010} and \cite{Dudley1999}. The supplements to the papers \cite{overgaard2017asymptotic, Overgaard2019} include some important details in a more condensed form. Let $N$ be the counting process of interest, which is assumed square integrable. In this section, we let $\mu$ denote the mean function of $N$ such that $\mu(s) = \operatorname{E}(N(s))$, reserving the notation $F$ for a collection of such means. The mean function $\mu$ is the target of estimation in this section. \begin{example} \label{example:uncens} If information is available on $N_1, \dots, N_n$ which are $n$ independent replications of $N$, estimation may be performed by simply taking the average, $\hat \mu_n(t) = n^{-1} \sum_{i=1}^n N_i(t)$. In this case, $\sqrt{n}(\hat \mu_n(t) - \mu(t))$ has an asymptotic normal distribution with variance $\Var(N(t))$ according to the central limit theorem. The convergence result of Theorem~\ref{theorem:generalE} implies that $\| \hat \mu_n - \mu \|_{[p]} = O_{\operatorname{P}}(n^{(1-p)/p})$ for $p \in [1,2)$ and also opens up for the use of the functional delta method. \end{example} \begin{example} \label{example:cens_obs} If observation of events is prevented after a right-censoring time $C$, we do not generally have information on the entire $N$, but only on $\tilde N$ given by \begin{equation} \tilde N(s) = \int_0^s \indic{C \geq u} N(\mathrm{d} u).
\end{equation} Here, $\tilde N$ is simply another counting process. Assume that $N$ is independent of $C$ and let $K(s) = \operatorname{P}(C \geq s)$. If $\nu$ denotes the mean function of $\tilde N$, then \begin{equation} \nu(s) = \int_0^s K(u) \mu(\mathrm{d} u), \end{equation} and so, for $s$ such that $K(s) > 0$, \begin{equation} \label{eq:ipcw_motivation} \mu(s) = \int_0^s \frac{1}{K(u)} \nu(\mathrm{d} u). \end{equation} Let $X = (\tilde N, C)$ and suppose information is available on $X_1, \dots, X_n$ which are independent replications of $X$ with $X_i = (\tilde N_i, C_i)$. Then $\nu(s)$ can be estimated as in Example~\ref{example:uncens} by $\hat \nu_n(s) = n^{-1} \sum_{i=1}^n \tilde N_i(s)$, and $K(s)$ can be estimated by $\hat K_n(s) = n^{-1} \sum_{i=1}^n \indic{C_i \geq s}$. Equation~\eqref{eq:ipcw_motivation} now suggests the estimate \begin{equation} \label{eq:ipcw_estimate_emp} \hat \mu_n(s) = \int_0^s \frac{1}{\hat K_n(u)} \hat \nu_n(\mathrm{d} u) \end{equation} of $\mu(s)$. This corresponds to the estimator from~(2.2) of \cite{lawless1995some}. The estimate of \eqref{eq:ipcw_estimate_emp} relies on empirical means of two counting processes, namely $\tilde N$ and $N_C$ given by $N_C(s) = \indic{C \leq s}$. The estimate is in fact obtained from the empirical means of those counting processes by a functional which is differentiable of any order in a $p$-variation setting. Consider a given $t > 0$ such that $K(t) > 0$. Interest will now be in properties of $\hat \mu_n(s)$ from \eqref{eq:ipcw_estimate_emp} for $s \in [0,t]$. This will be studied through a functional approach based on $p$-variation. To be specific, consider the Banach space $\banach{F} = \mathcal{W}_p^{\mathsf{r}}([0, \infty))^2$ for a $p \in [1,2)$, with a general element $f = (f_1, f_2) \in \banach{F}$ and norm given by $\|f\|_{\banach{F}} = \max(\|f_1\|_{[p]}, \|f_2\|_{[p]})$.
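As a concrete numerical illustration of the estimator in \eqref{eq:ipcw_estimate_emp} (a minimal Python sketch of our own, not from the paper; the function and variable names are ours), note that each observed event at time $u \leq s$ contributes $1/(n \hat K_n(u))$ to $\hat \mu_n(s)$:

```python
import numpy as np

def ipcw_mean(event_times, cens_times, s):
    """Sketch of mu_hat(s) = int_0^s (1 / K_hat(u)) d nu_hat(u), where every
    censoring time C_i is observed, K_hat(u) is the empirical P(C >= u), and
    nu_hat is the average of the censored counting processes.
    event_times[i] lists the event times observed for subject i
    (all assumed <= cens_times[i]); cens_times holds the C_i."""
    cens = np.asarray(cens_times, dtype=float)
    n = len(cens)
    total = 0.0
    for times in event_times:
        for u in np.asarray(times, dtype=float):
            if u <= s:
                k_hat = np.mean(cens >= u)  # subject i itself is at risk, so k_hat > 0
                total += 1.0 / (n * k_hat)
    return total
```

With no censoring before time $s$, $\hat K_n \equiv 1$ on $[0,s]$ and the estimate reduces to the plain average of Example~\ref{example:uncens}.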
The functional given by $K(f) = (s \mapsto K(f;s))$ with $K(f;s) = f_2(\infty) - f_2(s-)$ is linear and continuous as a functional from $\banach{F}$ to $\mathcal{W}_p([0,t])$. It is therefore differentiable of any order with a first order derivative at $f \in \banach{F}$ in direction $g \in \banach{F}$ which is $K_f'(g) = K(g)$. As can be seen from Theorem~4.16 of \cite{Dudley2010}, since $\mathcal{W}_p([0,t])$ is a unital Banach algebra, if $U \subseteq \mathcal{W}_p([0,t])$ is the open subset of $\mathcal{W}_p([0,t])$ of functions $f$ for which $1/f \in \mathcal{W}_p([0,t])$, then $f \mapsto 1/f$ is differentiable of any order with a first order derivative at $f \in U$ in direction $g \in \mathcal{W}_p([0,t])$ which is $-g/f^2$. Also, $(f_1,f_2) \mapsto \int_0^{(\cdot)} f_2(s) f_1(\mathrm{d} s)$ is, as a functional from $\mathcal{W}_p([0,t])^2$ to $\mathcal{W}_p([0,t])$, bilinear and continuous and thus differentiable of any order with first order derivative given by $\int_0^{(\cdot)} f_2(s) g_1(\mathrm{d} s) + \int_0^{(\cdot)} g_2(s) f_1(\mathrm{d} s)$. An $f \in U$ is characterized by being bounded uniformly away from 0. It is now the chain rule, see Appendix~\ref{appendix:differentiability}, which reveals that \begin{equation} \mu \colon f \mapsto \int_0^{(\cdot)} \frac{1}{K(f;s)} f_1(\mathrm{d} s) \end{equation} is differentiable of any order, at least in a neighborhood of an $f$ such that $K(f)$ is bounded uniformly away from 0, as a functional from $\banach{F}$ into $\mathcal{W}_p^{\mathsf{r}}([0,t])$.
For $f$ where the functional $\mu$ is differentiable, the derivative at $f$ in direction $g$ is, by the chain rule, see~\eqref{eq:chain_rule} of Appendix~\ref{appendix:differentiability}, \begin{equation} \mu_f'(g) = \int_0^{(\cdot)} \frac{1}{K(f;s)} g_1(\mathrm{d} s) - \int_0^{(\cdot)} \frac{K(g;s)}{K(f;s)^2} f_1(\mathrm{d} s). \end{equation} If we let $G(s) = \operatorname{P}(C \leq s)$ and $F = (\nu, G)$, then the functional $\mu$ is, in particular, differentiable of any order in a neighborhood of $F \in \banach{F}$. With the notation $x = (\tilde n, c)$ for $\tilde n \in \mathcal{W}_p^{\mathsf{r}}([0, \infty))$ and $c > 0$, $N_c(s) = \indic{c \leq s}$, and $\delta_x \in \banach{F}$ given by $\delta_x(s) = (\tilde n(s), N_c(s))$, the empirical mean of the counting processes is $F_n = n^{-1} \sum_{i=1}^n \delta_{X_i}$ and we see that $\hat \mu_n$ from \eqref{eq:ipcw_estimate_emp} is obtained by $\hat \mu_n = \mu(F_n)$. The influence function, defined by $\dot \mu(x) = \mu_F'(\delta_x - F)$, of this estimator can be expressed as \begin{equation} \dot \mu(x) = \int_0^{(\cdot)} \frac{1}{K(s)} \tilde n(\mathrm{d} s) - \int_0^{(\cdot)} \frac{\indic{c \geq s}}{K(s)} \mu(\mathrm{d} s) \end{equation} by using that $K(F;s) = K(s)$ and $\mu(s) = \int_0^s K(u)^{-1} \nu(\mathrm{d} u)$. The differentiability of any order of $\mu$ in a neighborhood of $F$ is enough to establish \begin{equation} \label{eq:taylor1} \mu(F_n) = \mu(F) + \mu_F'(F_n - F) + O(\|F_n - F\|_{\banach{F}}^2) \end{equation} as in \eqref{eq:Taylor1+Lip} of Appendix~\ref{appendix:differentiability}. We have already assumed square integrability of $N$ and thus of $\tilde N \leq N$, while this is trivially the case for the bounded counting process $N_C$.
This means that Theorem~\ref{theorem:generalE} ensures $\|F_n - F\|_{\banach{F}} = O_{\operatorname{P}}(n^{(1-p)/p})$. Take $p \in (4/3, 2)$; then we have, in particular, $\|F_n - F\|_{\banach{F}} = o_{\operatorname{P}}(n^{-1/4})$. From \eqref{eq:taylor1} and linearity of $\mu_F'$ we obtain \begin{equation} \sqrt{n}(\hat \mu_n - \mu) = \sqrt{n}\frac{1}{n} \sum_{i=1}^n \dot \mu(X_i) + o_{\operatorname{P}}(1) \end{equation} in $p$-variation. Evaluating at $s \in [0,t]$, this ensures that $\sqrt{n}(\hat \mu_n(s) - \mu(s))$ has the same asymptotic distribution as $n^{-1/2} \sum_{i=1}^n \dot \mu(X_i;s)$, which is a normal distribution with mean $\operatorname{E}(\dot \mu(X;s)) = 0$ and variance $\Var(\dot \mu(X;s))$. From the alternative expression of the influence function as \begin{equation} \label{eq:C_obs_infl} \dot \mu(X;s) = N(s) - \mu(s) - \int_0^{s-} \frac{(N(s) - \mu(s) - (N(u) - \mu(u)))}{K(u+)} M_C(\mathrm{d} u), \end{equation} where $M_C(s) = N_C(s) - \int_0^s \indic{C \geq u} \Lambda(\mathrm{d} u)$ for $\Lambda$ the cumulative hazard function of the censoring time $C$, the variance can be seen to be \begin{equation} \label{eq:var_expr_C_obs} \Var(\dot \mu(X;s)) = \Var(N(s)) + \int_0^{s-} \Var(N(s) - N(u)) \frac{1}{K(u+)} \Lambda(\mathrm{d} u) \end{equation} under the independence assumption $N \independent C$. This variance can be estimated by $n^{-1} \sum_{i=1}^n \mu_{F_n}'(\delta_{X_i} - F_n;s)^2$, which turns out to be the variance estimate of (2.3) from \cite{lawless1995some}. Some more details on the derivations of~\eqref{eq:C_obs_infl} and~\eqref{eq:var_expr_C_obs} can be found in Appendix~\ref{appendix:infl_var}.
In comparison to Example~\ref{example:uncens} with no censoring, the last term of~\eqref{eq:var_expr_C_obs} can be seen as the added variance due to censoring when using the estimator $\hat \mu_n$ from \eqref{eq:ipcw_estimate_emp}. \end{example} \begin{example} \label{example:cens_unobs} Consider the setting of Example~\ref{example:cens_obs} with $N$ censored by $C$, leaving $\tilde N(s) = \int_0^s \indic{C \geq u} N(\mathrm{d} u)$ observed. As mentioned in the beginning of the section, $C$ may itself be censored such that the empirical estimate of $K(s) = \operatorname{P}(C \geq s)$ is not generally available. This is the setting considered in this example. Suppose a terminal event at time $T$ right-censors observation of $C$. Here, $T$ is terminal in the sense that $N(s) = N(T \wedge s)$ for all $s$. We let $\tilde C = C \wedge T$ and $\tilde D = \indic{C < T}$. The function $K$ is the left-continuous version of a survival function for $C$ and takes the form $K(s) = \prodi_0^{s-}(1-\Lambda(\mathrm{d} u))$ for a right-continuous cumulative censoring hazard function $\Lambda$. If we let $G(s) = \operatorname{P}(C \leq s)$, we have $\Lambda(s) = \int_0^s K(u)^{-1} G(\mathrm{d} u)$. We will assume independence of $C$ and $(N,T)$. Then we also have \begin{equation} \label{eq:Lambda_identified} \Lambda(s) = \int_0^s \frac{1}{K^{\mathsf{c}}(u)} G_1^{\mathsf{c}}(\mathrm{d} u), \end{equation} for $s$ such that $K^{\mathsf{c}}(s) > 0$, where $K^{\mathsf{c}}(s) = \operatorname{P}(\tilde C > s) + \operatorname{P}(\tilde C = s, \tilde D = 1)$ and $G_1^{\mathsf{c}}(s) = \operatorname{P}(\tilde C \leq s, \tilde D = 1)$ since, owing to the independence of $C$ and $T$, $K^{\mathsf{c}}(s) = K(s) \operatorname{P}(T > s)$ and $G_1^{\mathsf{c}}(s) = \int_0^s \operatorname{P}(T > u) G(\mathrm{d} u)$. In this example, the basic observation is $X = (\tilde N, \tilde C, \tilde D)$.
If information is available on $X_1, \dots, X_n$ which are independent replications of $X = (\tilde N, \tilde C, \tilde D)$ with $X_i =(\tilde N_i, \tilde C_i, \tilde D_i)$, we may estimate $K^{\mathsf{c}}(s)$ by $\hat K_n^{\mathsf{c}}(s) = n^{-1} \sum_{i=1}^n (\indic{\tilde C_i > s} + \indic{\tilde C_i=s, \tilde D_i = 1})$ and $G_1^{\mathsf{c}}(s)$ by $\hat G_{n,1}^{\mathsf{c}}(s) = n^{-1} \sum_{i=1}^n \indic{\tilde C_i \leq s, \tilde D_i = 1}$. Equation~\eqref{eq:Lambda_identified} suggests the estimate \begin{equation} \hat \Lambda_n(s) = \int_0^s \frac{1}{\hat K_n^{\mathsf{c}}(u)} \hat G_{n,1}^{\mathsf{c}}(\mathrm{d} u) \end{equation} of $\Lambda(s)$. This estimate then leads to the estimate of $K$ given by $\hat K_n(s) = \prodi_0^{s-}(1-\hat \Lambda_n(\mathrm{d} u))$. This is basically the Kaplan--Meier estimator, but for the censoring distribution and in a left-continuous version. Finally, with $\hat \nu_n(s) = n^{-1} \sum_{i=1}^n \tilde N_i(s)$ as before, \begin{equation} \label{eq:ipcw_estimate} \hat \mu_n(s) = \int_0^s \frac{1}{\hat K_n(u)} \hat \nu_n(\mathrm{d} u) \end{equation} yields an estimate of $\mu(s)$. This estimate corresponds to the estimate of~\eqref{eq:ipcw_estimate_emp} from Example~\ref{example:cens_obs} if no censoring of the censoring times occurs before time $s$, and is in this sense a generalization. The estimate of~\eqref{eq:ipcw_estimate} also corresponds to the estimate studied by \cite{ghosh2000}, which is also considered in \cite{Andersen2019}. The estimate in \eqref{eq:ipcw_estimate} relies on empirical means of three counting processes, namely $\tilde N$, $N_{X,0}$, and $N_{X,1}$, where $N_{X,j}(s) = \indic{\tilde C \leq s, \tilde D = j}$ for $j=0, 1$, through a functional which is differentiable of any order in a $p$-variation setting.
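As a concrete numerical illustration of the pipeline $\hat K_n^{\mathsf{c}} \to \hat \Lambda_n \to \hat K_n \to \hat \mu_n$, the following sketch evaluates \eqref{eq:ipcw_estimate} on purely discrete, hypothetical data; the function name and data layout are our own, not part of the original development.

```python
import numpy as np

def ipcw_estimate(jump_times, C_tilde, D_tilde, s):
    """Sketch of the estimate in eq. (ipcw_estimate) for purely discrete data.

    jump_times[i] : list of jump times of the observed process N~_i
    C_tilde[i]    : censored censoring time C~_i
    D_tilde[i]    : indicator D~_i = 1{C_i < T_i}
    s             : evaluation time
    """
    n = len(C_tilde)
    C_tilde = np.asarray(C_tilde, dtype=float)
    D_tilde = np.asarray(D_tilde)
    # times at which hat{nu}_n or hat{Lambda}_n can jump
    grid = np.unique(np.concatenate(
        [np.asarray(t, dtype=float) for t in jump_times] + [C_tilde]))
    grid = grid[grid <= s]
    K_hat, mu_hat = 1.0, 0.0  # hat{K}_n is left-continuous: product over v < u
    for u in grid:
        d_nu = sum(np.sum(np.asarray(t) == u) for t in jump_times) / n
        mu_hat += d_nu / K_hat            # weight the increment by 1/hat{K}_n(u)
        # update the censoring product integral *after* using hat{K}_n(u)
        Kc = np.mean((C_tilde > u) | ((C_tilde == u) & (D_tilde == 1)))
        dG1 = np.mean((C_tilde == u) & (D_tilde == 1))
        if Kc > 0:
            K_hat *= 1.0 - dG1 / Kc       # hat{Lambda}_n(du) = dG1 / hat{K}_n^c(u)
    return mu_hat
```

For instance, with one subject whose censoring is observed at time 2 and another jumping at times 1 and 3, the jump at 3 receives weight $1/\hat K_n(3) = 2$, so $\hat \mu_n(3) = 2$.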
This allows us to take a similar approach as in Example~\ref{example:cens_obs} when studying the asymptotic properties of the estimator. Specifically, we let $\banach{F} = \mathcal{W}_p^{\mathsf{r}}([0, \infty))^3$, for a $p \in [1,2)$, with a general element of the form $f= (f_1, f_2, f_3) \in \banach{F}$ and a norm given by $\|f\|_{\banach{F}} = \max(\|f_1\|_{[p]}, \|f_2\|_{[p]}, \|f_3\|_{[p]})$. In particular, $F := (\nu, G_1^{\mathsf{c}}, G_0^{\mathsf{c}}) \in \banach{F}$, where $G_0^{\mathsf{c}}(s) = \operatorname{P}(\tilde C \leq s, \tilde D = 0)$. With $x = (\tilde n, \tilde c, \tilde d)$ for $\tilde n \in \mathcal{W}_p^{\mathsf{r}}$, $\tilde c > 0$, and $\tilde d \in \{0,1\}$, define $\delta_x$ by $\delta_x(s) = (\tilde n(s), N_{x,1}(s), N_{x,0}(s))$, where $N_{x,1}(s) = \indic{\tilde c \leq s, \tilde d = 1}$ and $N_{x,0}(s) = \indic{\tilde c \leq s, \tilde d = 0}$. Based on $n$ independent replications of $X = (\tilde N, \tilde C, \tilde D)$, the empirical version of $F$ is $F_n = n^{-1} \sum_{i=1}^n \delta_{X_i}$. In the following, the estimate $\hat \mu_n(s)$ from \eqref{eq:ipcw_estimate} is studied as a functional of $F_n$. This is done for $s \in [0,t]$ for a given $t > 0$ that satisfies $K^{\mathsf{c}}(t) > 0$. Define a $K^{\mathsf{c}}$ functional by $K^{\mathsf{c}}(f;s) = f_2(\infty) + f_3(\infty) - f_2(s-) - f_3(s)$. This functional is continuous and linear and so differentiable of any order as a functional from $\banach{F}$ to $\mathcal{W}_p([0,t])$ with first order derivative given by ${K^{\mathsf{c}}}_f'(g;s) = K^{\mathsf{c}}(g;s)$. Since $K^{\mathsf{c}}(F;s) = K^{\mathsf{c}}(s)$ and $K^{\mathsf{c}}(t) > 0$, a $\Lambda$ functional can, at least in a neighborhood of $F$, be defined by \begin{equation} \Lambda(f;s) = \int_0^s \frac{1}{K^{\mathsf{c}}(f;u)} f_2(\mathrm{d} u), \end{equation} such that $f \mapsto \Lambda(f)$ maps into $\mathcal{W}_p^{\mathsf{r}}([0,t])$.
We see that $\Lambda(F;s) = \int_0^s K^{\mathsf{c}}(u)^{-1} G_1^{\mathsf{c}}(\mathrm{d} u) = \Lambda(s)$ as well as $\Lambda(F_n;s) = \hat \Lambda_n(s)$. By the arguments in Example~\ref{example:cens_obs}, the $\Lambda$ functional is differentiable of any order in a neighborhood of $F$ with first order derivative \begin{equation} \Lambda_f'(g;s) = \int_0^s \frac{1}{K^{\mathsf{c}}(f;u)} g_2(\mathrm{d} u) - \int_0^s \frac{K^{\mathsf{c}}(g;u)}{K^{\mathsf{c}}(f;u)^2} f_2(\mathrm{d} u). \end{equation} Note how $\Lambda_F'(\delta_x - F;s) = \Lambda_F'(\delta_x;s) = \int_0^s K^{\mathsf{c}}(u)^{-1} M_{x,1}(\mathrm{d} u)$ where $M_{x,1}(s) = N_{x,1}(s) - \int_0^s \indic{\tilde c \geq u} \Lambda(\mathrm{d} u)$. Next, a $K$ functional can be defined by $K(f;s) = \prodi_0^{s-}(1- \Lambda(f; \mathrm{d} u))$ as a functional from $\banach{F}$ to $\mathcal{W}_p([0,t])$. This will then satisfy $K(F;s) = K(s)$ and $K(F_n;s) = \hat K_n(s)$. The product integral $f \mapsto \prodi_0^{(\cdot)}(1+f(\mathrm{d} u))$, as a functional from $\mathcal{W}_p^{\mathsf{r}}([0,t])$ to $\mathcal{W}_p^{\mathsf{r}}([0,t])$ for a $p \in [1,2)$, is differentiable of any order with a first order derivative at $f$ in direction $g$ which is $\int_0^{(\cdot)} \prodi_0^{s-}(1+f(\mathrm{d} u)) g(\mathrm{d} s) \prodi_s^{(\cdot)}(1+f(\mathrm{d} u))$, which when $\Delta f(s):=f(s)-f(s-) \neq -1$ for all $s \in [0,t]$ can also be given as $\prodi_0^{(\cdot)}(1+f(\mathrm{d} u)) \int_0^{(\cdot)} (1+\Delta f(s))^{-1} g(\mathrm{d} s)$.
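For a purely discrete cumulative hazard the product integral reduces to a finite product, which makes the $K$ functional easy to evaluate. The following minimal sketch (our own helper, with hypothetical jump data) computes $\prodi_0^{s-}(1-\Lambda(\mathrm{d} u))$.

```python
def product_integral(jumps, s):
    """prodi_0^{s-}(1 - Lambda(du)) for a purely discrete cumulative hazard.

    jumps : dict mapping jump times u to jump sizes dLambda(u)
    s     : evaluation time; the product runs over u < s (left-continuity)
    """
    K = 1.0
    for u, dL in jumps.items():
        if u < s:
            K *= 1.0 - dL
    return K
```

For two jumps of size $1/2$ at times 1 and 2, the value at $s = 2.5$ is $1/4$, while the value at $s = 2$ is $1/2$, reflecting the strict inequality $u < s$ in the left-continuous version.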
Since $\Delta \Lambda(s) < 1$ for all $s \in [0,t]$, the chain rule now reveals that the $K$ functional is differentiable of any order in a neighborhood of $F \in \banach{F}$ with first order derivative given by \begin{equation} K_f'(g;s) = -K(f;s) \int_0^{s-} \frac{1}{1-\Delta \Lambda(f;u)} \Lambda_f'(g;\mathrm{d} u). \end{equation} Using the expression of $\Lambda_F'(\delta_x - F;s)$ given above, the expression \begin{equation} \label{eq:K_infl} K_F'(\delta_x - F;s) = -K(s) \int_0^{s-} \frac{1}{1-\Delta \Lambda(u)} \frac{1}{K^{\mathsf{c}}(u)} M_{x,1}(\mathrm{d} u) \end{equation} can be obtained. Lastly, the $\mu$ functional is defined by \begin{equation} \mu(f;s) = \int_0^s \frac{1}{K(f;u)} f_1(\mathrm{d} u) \end{equation} as a functional from $\banach{F}$ to $\mathcal{W}_p^{\mathsf{r}}([0,t])$. The functional satisfies $\mu(F;s) = \mu(s)$ and $\mu(F_n;s) = \hat \mu_n(s)$ for $\hat \mu_n$ from~\eqref{eq:ipcw_estimate}. As in Example~\ref{example:cens_obs}, the $\mu$ functional is differentiable of any order in a neighborhood of $F \in \banach{F}$. The first order derivative is given by \begin{equation} \label{eq:cens_unobs_mu_deriv} \mu_f'(g;s) = \int_0^s \frac{1}{K(f; u)} g_1(\mathrm{d} u) - \int_0^s \frac{K_f'(g;u)}{K(f;u)^2} f_1(\mathrm{d} u). \end{equation} Using the expression of $K_F'(\delta_x - F;s)$ from \eqref{eq:K_infl}, the influence function $\dot \mu$ can be expressed as \begin{equation} \begin{aligned} \dot \mu(x;s) &= \int_0^s \frac{1}{K(u)} \tilde n(\mathrm{d} u) - \mu(s)\\ &\phantom{{}=} + \int_0^s \int_0^{u-} \frac{1}{1-\Delta \Lambda(v)} \frac{1}{K^{\mathsf{c}}(v)} M_{x,1}(\mathrm{d} v) \mu(\mathrm{d} u) \end{aligned} \end{equation} for $x = (\tilde n, \tilde c, \tilde d)$.
As was the case in Example~\ref{example:cens_obs}, using $p \in (4/3,2)$ allows for the conclusion that, for $s \in [0,t]$, $\sqrt{n}(\hat \mu_n(s) - \mu(s))$ has an asymptotic normal distribution with mean $\operatorname{E}(\dot \mu(X;s)) = 0$ and a variance of $\Var(\dot \mu(X;s))$. In terms of the potentially unobserved $N$, $T$, and $C$, the influence function at $X$ can also be expressed as \begin{equation} \label{eq:C_unobs_infl} \begin{aligned} \dot \mu(X;s) &= N(s) - \mu(s) \\ & \hspace{-12pt}- \int_0^{s-} \big(N(s) - N(u) - \frac{\indic{T > u}}{\operatorname{P}(T > u)} (\mu(s) - \mu(u))\big)\frac{1}{K(u+)} M_C(\mathrm{d} u). \end{aligned} \end{equation} This expression leads to the variance expression \begin{equation} \label{eq:var_expr_C_unobs} \begin{aligned} \Var(\dot \mu(X;s)) &= \Var(N(s)) + \int_0^{s-} \Var(N(s) - N(u)) \frac{1}{K(u+)} \Lambda(\mathrm{d} u) \\ &\phantom{{}=} - \int_0^{s-} (\mu(s) - \mu(u))^2 \frac{\operatorname{P}(T \leq u)}{\operatorname{P}(T > u)} \frac{1}{K(u+)} \Lambda(\mathrm{d} u) \end{aligned} \end{equation} under the independence assumption $(N,T) \independent C$. This variance can be estimated by $n^{-1} \sum_{i=1}^n \mu_{F_n}'(\delta_{X_i} - F_n;s)^2$ where the expression of $\mu_{F_n}'(\delta_x - F_n; s)$ can be obtained by insertion in \eqref{eq:cens_unobs_mu_deriv}. This variance estimate will be very similar to the one suggested by \cite{ghosh2000} and seemingly identical in the absence of ties. Some more details on the derivations of~\eqref{eq:C_unobs_infl} and~\eqref{eq:var_expr_C_unobs} can be found in Appendix~\ref{appendix:infl_var}.
In comparison to Example~\ref{example:cens_obs} where the actual censoring times are available, the last term of~\eqref{eq:var_expr_C_unobs} reveals that this asymptotic variance is smaller than for the estimator of Example~\ref{example:cens_obs}. This means that even when information is available on the potential censoring times $C_1, \dots, C_n$ in a setting with a terminal event, from an asymptotic point of view the analyst is better off disregarding this complete information and relying only on the censored censoring times. \end{example} \begin{example} \label{example:pseudo} The pseudo-observation method is a method for regression analysis of an outcome such as $N(t)$ when the outcomes are incompletely observed, as in examples~\ref{example:cens_obs} and~\ref{example:cens_unobs}. Given $n$ independent replications $X_1, \dots, X_n$ of $X$, the method works by substituting jack-knife pseudo-values $\hat \mu_{n,1}(t), \dots, \hat \mu_{n,n}(t)$, with $\hat \mu_{n,i}(t) = n \hat \mu_n(t) - (n-1) \hat \mu_n^{(i)}(t)$, for the potentially unobserved outcomes $N_1(t), \dots, N_n(t)$, and proceeding by performing whatever regression analysis was intended for $N_1(t), \dots, N_n(t)$. Here, $\hat \mu_n(t)$ is an estimator of the expectation $\mu(t) = \operatorname{E}(N(t))$ based on the sample $X_1, \dots, X_n$ and $\hat \mu_n^{(i)}(t)$ is the same estimator applied to the sample where the $i$th observation has been left out. Suppose $Z$ denotes covariates and the regression analysis concerns a model of $\operatorname{E}(N(t) \mathbin{|} Z)$.
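The jack-knife construction $\hat \mu_{n,i}(t) = n \hat \mu_n(t) - (n-1) \hat \mu_n^{(i)}(t)$ is generic in the estimator; a minimal sketch (the helper name is ours):

```python
def pseudo_values(estimator, sample):
    """Jack-knife pseudo-values n*mu_n - (n-1)*mu_n^(i) for a sample-based estimator.

    estimator : callable mapping a list of observations to a real number
    sample    : observations X_1, ..., X_n (a list)
    """
    n = len(sample)
    full = estimator(sample)                      # hat{mu}_n(t)
    return [n * full - (n - 1) * estimator(sample[:i] + sample[i + 1:])
            for i in range(n)]                    # leave-one-out estimates
```

A quick sanity check: for the sample mean, the pseudo-values reduce to the observations themselves, matching the idea that $\hat \mu_{n,i}(t)$ acts as a stand-in for the (possibly unobserved) outcome of subject $i$.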
According to \cite{overgaard2017asymptotic}, this pseudo-observation approach will work, under some regularity conditions, in a setting where the estimator can be seen as a functional applied to a sample average, $\hat \mu_n(t) = \mu(F_n;t)$ for a functional $\mu(\cdot; t) \colon \banach{F} \to \mathbb{R}$ defined on a Banach space $\banach{F}$ where $F_n = n^{-1} \sum_{i=1}^n \delta_{X_i}$ for some function $x \mapsto \delta_x$ applied to the observed $X_1, \dots, X_n$, if \begin{enumerate}[label=(\alph*)] \item \label{it:conv} an $F \in \banach{F}$ and an $\varepsilon \in (0, 1/4]$ exist such that $\|F_n - F\|_{\banach{F}} = o_{\operatorname{P}}(n^{-1/4 - \varepsilon/2})$ and $\lim_{y \to \infty} y^{1/\varepsilon}\operatorname{P}(\|\delta_X\| > y) = 0$, \item \label{it:diff} the functional $f \mapsto \mu(f;t)$ is continuously differentiable of order 2 with a Lipschitz continuous second order derivative in a neighborhood of $F$, \item \label{it:cond} the influence function $\dot \mu(x;t) = \mu_F'(\delta_x - F;t)$ satisfies \begin{equation} \operatorname{E}(\dot \mu(X;t) \mathbin{|} Z) = \operatorname{E}(N(t) \mathbin{|} Z) - \mu(F;t). \end{equation} \end{enumerate} Conditions~\ref{it:conv} and~\ref{it:diff} agree well with the estimators of examples~\ref{example:cens_obs} and~\ref{example:cens_unobs} above. The condition that $\lim_{y \to \infty} y^{1/\varepsilon}\operatorname{P}(\|\delta_X\| > y) = 0$ is fulfilled in either example if $N(t)$ has a finite moment of order a little higher than $1/\varepsilon$, that is, at least a little more than fourth order with the choice $\varepsilon = 1/4$.
The convergence order $\|F_n - F\|_{\banach{F}} = o_{\operatorname{P}}(n^{-3/8})$ with this choice of $\varepsilon$ is achieved for the $p$-variation-based norm for $p \in (8/5,2)$. The range is $p \in (4/(3-2 \varepsilon), 2)$ more generally for the relevant $\varepsilon \in (0,1/4]$. The functionals involved in the estimators of the examples are differentiable of any order for such choices of $p$, and so condition~\ref{it:diff} is met in this setting since the Lipschitz continuity of the second order derivative in a neighborhood of $F$ follows from third order differentiability in a neighborhood of $F$. This leaves us with condition~\ref{it:cond}. If we assume $(N,Z) \independent C$ or $(N,T,Z) \independent C$ in the examples, respectively, this can easily be seen to be fulfilled by appealing to the expressions of~\eqref{eq:C_obs_infl} and~\eqref{eq:C_unobs_infl}, respectively, since $\operatorname{E}(M_C(s) \mathbin{|} N, Z) = 0$ or similarly $\operatorname{E}(M_C(s) \mathbin{|} N, T, Z) = 0$ for $s \in [0,t]$ in these settings. The pseudo-observation method with pseudo-observations based on the estimator of Example~\ref{example:cens_unobs} has been suggested and applied by Andersen, Angst, and Ravn in \cite{Andersen2019}, where their equation~(4) corresponds to~\eqref{eq:ipcw_estimate} of this paper. With conditions \ref{it:conv}, \ref{it:diff}, and \ref{it:cond} fulfilled, the results of \cite{overgaard2017asymptotic} now bring a theoretical justification to this approach. \end{example} \section{Concluding remarks} In many cases counting processes may go to infinity as time passes. Such a setting does not fit well with an assumption of finite $p$-variation or finite moment conditions, and the approach described in this paper is not directly applicable.
It may be useful to consider stopped or localized versions of such counting processes, and such stopped or localized counting processes may perhaps be studied using the $p$-variation approach described here. Concretely, the settings of Examples~\ref{example:uncens}--\ref{example:pseudo} restricted attention to the interval $[0,t]$, or simply to $t$, for some time point $t > 0$, and the stopped processes $N(\cdot \wedge t)$ can replace $N$ without issues in such cases. The convergence results in $p$-variation of Section~\ref{sec:main} are likely not the best possible and further studies of this subject are called for. It is, for instance, not clear whether the moment condition of Theorem~\ref{theorem:generalE} or the boundedness condition of Theorem~\ref{theorem:almost_sure} is necessary or if such convergence results apply more generally to averages of independent replications of random elements in $\mathcal{W}_p$ spaces. In Banach spaces, measurability is not a straightforward matter. Measurability has not been touched upon in any detail here. When a counting process $N$ is considered a random element, it is in the sense of measurable coordinate projections: $N(s)$ is a random variable for all relevant $s$. In the examples of Section~\ref{sec:count}, the various $\mu$ functionals have not been formalized as measurable maps from $\banach{F}$ to $\mathcal{W}_p^{\mathsf{r}}$. It is however clear by inspection that at $s \in [0,t]$, the various $\hat \mu_n(s)$ and $\dot \mu(X;s)$ are random variables. Owing to right-continuity, the $\|F_n - F\|_{[p]}$ of Section~\ref{sec:main}, and so similarly the various $\|F_n - F\|_{\banach{F}}$ of Section~\ref{sec:count}, are random variables since the involved suprema can be taken over the rationals. Various convergence results exist in $p$-variation for $p \geq 2$, see for instance Theorems~3.1 and~4.1 of \cite{qian1998} and Theorem~1 of \cite{huang2001}.
Since somewhat fewer functionals are differentiable in a $p$-variation setting for such $p$, this has not been considered here. \appendix \section{Fréchet differentiability} \label{appendix:differentiability} A functional $\phi$ defined on an open subset $U$ of a Banach space $\banach{D}$ and with values in a Banach space $\banach{E}$ is said to be differentiable at $f \in U$ if a linear continuous operator $\phi_f' \in L(\banach{D}, \banach{E})$ exists such that \begin{equation} \|\phi(f + g) - \phi(f) - \phi_f'(g)\|_{\banach{E}} = o(\|g\|_{\banach{D}}) \end{equation} as $\|g\|_{\banach{D}} \to 0$. In that case, $\phi_f' \in L(\banach{D}, \banach{E})$ is the first order derivative of $\phi$ at $f$ and, for any $g \in \banach{D}$, $\phi_f'(g) \in \banach{E}$ is called the first order derivative of $\phi$ at $f$ in direction $g$. The space of linear, continuous operators $L(\banach{D}, \banach{E})$ is itself a Banach space when equipped with the operator norm given by \begin{equation} \|\lambda\|_{L(\banach{D}, \banach{E})} = \inf \{c \geq 0 : \|\lambda(f)\|_{\banach{E}} \leq c \|f \|_{\banach{D}} \textup{ for all } f \in \banach{D}\} \end{equation} and the first order derivative $\phi' \colon U \to L(\banach{D}, \banach{E})$, given by $f \mapsto \phi_f'$, is simply another functional. Higher order differentiability of the functional $\phi$ can then iteratively be defined in terms of differentiability properties of the functional $\phi'$. If $\phi$ is differentiable of order $k$, the $k$th order derivative can be identified with a functional $\phi^{(k)} \colon U \to L^k(\banach{D}, \banach{E})$ where $L^k(\banach{D}, \banach{E})$ is the space of $k$-linear, continuous operators.
For $\lambda \in L^k(\banach{D}, \banach{E})$ a $c > 0$ exists such that \begin{equation} \|\lambda(f_1, \dots, f_k)\|_{\banach{E}} \leq c \|f_1\|_{\banach{D}} \cdots \|f_k\|_{\banach{D}} \end{equation} and, as in the $k=1$ case, the norm of $\lambda$ in the Banach space $L^k(\banach{D}, \banach{E})$ is given by the infimum over such constants $c$. The $k$th order derivative is not only continuous and $k$-linear, but also symmetric in its arguments. If $\phi \colon U \to \banach{E}$, in the setting from before, is continuously differentiable of order $k$, a $k$th order Taylor approximation in line with \begin{equation} \phi(f+g) = \phi(f) + \sum_{j=1}^k \frac{1}{j!} \phi_f^{(j)}(g, \dots, g) + o(\|g\|_{\banach{D}}^k) \end{equation} applies as $\|g\|_{\banach{D}} \to 0$. In particular, if $\phi$ is continuously differentiable of order 2 in a neighborhood of $f \in \banach{D}$, or weaker still if $\phi'$ is Lipschitz continuous in a neighborhood of $f \in \banach{D}$, then \begin{equation} \label{eq:Taylor1+Lip} \phi(f+g) = \phi(f) + \phi_f'(g) + O(\|g\|_{\banach{D}}^2) \end{equation} as $\|g\|_{\banach{D}} \to 0$. If $\banach{D}$, $\banach{E}$, and $\banach{F}$ are Banach spaces and $\phi$ is a functional defined and differentiable on a neighborhood of $f \in \banach{D}$ as a functional into $\banach{E}$, whereas $\psi$ is a functional defined and differentiable on a neighborhood of $\phi(f) \in \banach{E}$ as a functional into $\banach{F}$, then $\psi \circ \phi$ is differentiable in a neighborhood of $f \in \banach{D}$ as a functional into $\banach{F}$. The derivative is given by \begin{equation} \label{eq:chain_rule} (\psi \circ \phi)_f'(g) = \psi_{\phi(f)}'(\phi_f'(g)), \end{equation} the derivative of $\psi$ at $\phi(f)$ in direction $\phi_f'(g)$. This is the chain rule.
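As a simple illustration of these notions (our example, not taken from the text), consider the squaring functional $\phi(f) = f^2$ on a Banach space $\banach{D}$ in which pointwise multiplication is bounded, $\|fg\|_{\banach{D}} \leq C\|f\|_{\banach{D}}\|g\|_{\banach{D}}$:

```latex
\begin{equation*}
  \phi(f+g) - \phi(f) = 2fg + g^2, \qquad \phi_f'(g) = 2fg,
\end{equation*}
since $\|g^2\|_{\banach{D}} \leq C\|g\|_{\banach{D}}^2 = o(\|g\|_{\banach{D}})$.
The map $f \mapsto \phi_f'$ is itself linear, so $\phi_f''(g,h) = 2gh$ does not
depend on $f$, all derivatives of order three and higher vanish, and $\phi$ is
differentiable of any order, with the second order Taylor expansion
$\phi(f+g) = \phi(f) + \phi_f'(g) + \tfrac{1}{2}\phi_f''(g,g)$ being exact.
```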
\section{Influence functions and the variance expressions} \label{appendix:infl_var} In obtaining the desired expressions of the influence functions and variance expressions in examples~\ref{example:cens_obs} and~\ref{example:cens_unobs}, an important identity is \begin{equation} \frac{\indic{C \geq s}}{K(s)} - 1 = -\int_0^{s-} \frac{1}{K(u+)} M_C(\mathrm{d} u), \end{equation} which can be seen as a consequence of the Duhamel equation, see for instance \cite{Gill1990}, here in the form \begin{equation} \indic{C \geq s} - K(s) = \int_0^{s-} \indic{C \geq u}(\Lambda - N_C)(\mathrm{d} u) \frac{K(s)}{K(u+)}. \end{equation} Since $\tilde N(s) = \int_0^s \indic{C \geq u} N(\mathrm{d} u)$, we have in Example~\ref{example:cens_obs} \begin{equation} \begin{aligned} &\int_0^s \frac{1}{K(u)} \tilde N(\mathrm{d} u) - \int_0^s \frac{\indic{C \geq u}}{K(u)} \mu(\mathrm{d} u) \\ =& \int_0^s \frac{\indic{C \geq u}}{K(u)}(N - \mu)(\mathrm{d} u) \\ =& N(s) - \mu(s) + \int_0^s \big(\frac{\indic{C \geq u}}{K(u)} - 1\big)(N - \mu)(\mathrm{d} u) \\ =& N(s) - \mu(s) - \int_0^s \int_0^{u-} \frac{1}{K(v+)} M_C(\mathrm{d} v)(N - \mu)(\mathrm{d} u) \\ =& N(s) - \mu(s) - \int_0^{s-} \frac{N(s) - \mu(s) - N(v) + \mu(v)}{K(v+)} M_C(\mathrm{d} v), \end{aligned} \end{equation} which shows the alternative expression of the influence function in Example~\ref{example:cens_obs}. For Example~\ref{example:cens_unobs}, it can be noted that we have $M_{X,1}(s) = \int_0^s \indic{T > u} M_C(\mathrm{d} u)$ and also $(1- \Delta \Lambda(s))K^{\mathsf{c}}(s) = \operatorname{P}(T > s) K(s+)$.
A similar argument to the one above now yields \begin{equation} \begin{aligned} &\int_0^s \frac{1}{K(u)} \tilde N(\mathrm{d} u) - \mu(s) + \int_0^s \int_0^{u-} \frac{1}{1-\Delta \Lambda(v)} \frac{1}{K^{\mathsf{c}}(v)} M_{X,1}(\mathrm{d} v) \mu(\mathrm{d} u) \\ =& N(s) - \mu(s) + \int_0^s \big(\frac{\indic{C \geq u}}{K(u)} - 1\big) N(\mathrm{d} u) \\ &+ \int_0^s \int_0^{u-} \frac{\indic{T > v}}{\operatorname{P}(T > v)} \frac{1}{K(v+)} M_{C}(\mathrm{d} v) \mu(\mathrm{d} u) \\ =& N(s) - \mu(s) \\ &- \int_0^{s-} \big(N(s) - N(v) - \frac{\indic{T > v}}{\operatorname{P}(T > v)}(\mu(s) - \mu(v))\big) \frac{1}{K(v+)} M_{C}(\mathrm{d} v). \end{aligned} \end{equation} The process given by $M_C(s) = N_C(s) - \int_0^s \indic{C \geq u} \Lambda(\mathrm{d} u)$ is a martingale with respect to the natural filtration of $N_C$. If we assume $N \independent C$ or $(N,T) \independent C$, this is the case even in the conditional distribution given $N$ or given $(N,T)$. So, for certain stochastic processes $A(s)$ and $B(s)$ that are measurable with respect to $\sigAlg{A} = \sigma(N)$ or $\sigAlg{A} = \sigma(N, T)$, we have $\Var(A(s) + \int_0^s B(u) K(u+)^{-1}M_{C}(\mathrm{d} u) \mathbin{|} \sigAlg{A}) = \int_0^s B(u)^2 K(u+)^{-1} \Lambda(\mathrm{d} u)$ by martingale properties since, for instance, the optional variation process of $M_C$ is given by $[M_C](s) = \int_0^s (1-\Delta \Lambda(u)) N_C(\mathrm{d} u) - \int_0^s \Delta \Lambda(u) M_C(\mathrm{d} u)$ with conditional expectation $\operatorname{E}([M_C](s) \mathbin{|} \sigAlg{A} ) = \int_0^s (1- \Delta \Lambda(u)) G(\mathrm{d} u) = \int_0^s K(u+) \Lambda(\mathrm{d} u)$. The law of total variation then reveals $\Var(A(s) + \int_0^s B(u) K(u+)^{-1} M_C(\mathrm{d} u)) = \Var(A(s)) + \int_0^s \operatorname{E}(B(u)^2) K(u+)^{-1} \Lambda(\mathrm{d} u)$.
Both~\eqref{eq:var_expr_C_obs} and~\eqref{eq:var_expr_C_unobs} follow this structure, although establishing~\eqref{eq:var_expr_C_unobs} requires an additional direct calculation as follows. In this case, $N(s) - N(u) - \indic{T > u} \operatorname{P}(T > u)^{-1}(\mu(s) - \mu(u))$ plays the role of $B(u)$. It is the fact that $T$ is terminal for $N$ which implies $(N(s) - N(u)) \indic{T > u} = N(s) - N(u)$ and so \begin{equation} \begin{aligned} &\phantom{{}=}\operatorname{E}\big(\big(N(s) - N(u) - \frac{\indic{T > u}}{\operatorname{P}(T > u)}(\mu(s) - \mu(u))\big)^2\big) \\ &=\operatorname{E}((N(s)-N(u))^2) - 2 \frac{\operatorname{E}(N(s) - N(u))}{\operatorname{P}(T > u)} (\mu(s) - \mu(u)) \\ &\phantom{{}=}+ \frac{\operatorname{E}(\indic{T > u})}{\operatorname{P}(T > u)^2}(\mu(s) - \mu(u))^2 \\ &= \operatorname{E}((N(s)-N(u))^2) - \frac{1}{\operatorname{P}(T > u)}(\mu(s) - \mu(u))^2 \\ &= \Var(N(s) - N(u)) - \frac{\operatorname{P}(T \leq u)}{\operatorname{P}(T > u)}(\mu(s) - \mu(u))^2 \end{aligned} \end{equation} as desired. \end{document}
\begin{document} \baselineskip=18pt \title{ A blowup criterion along maximum points of the 3D-Navier-Stokes flow in terms of function spaces with variable growth condition } \author{Eiichi Nakai} \address{ Department of Mathematics, Ibaraki University, Mito, Ibaraki 310-8512, Japan} \email{[email protected]} \author{Tsuyoshi Yoneda} \address{ Department of Mathematics, Tokyo Institute of Technology, Meguro-ku, Tokyo 152-8551, Japan} \email{[email protected]} \begin{abstract} \noindent A blowup criterion along maximum points of the 3D-Navier-Stokes flow in terms of function spaces with variable growth condition is constructed. This criterion is different from the Beale-Kato-Majda type and Constantin-Fefferman type criteria. If the geometric behavior of the velocity vector field near the maximum point has a kind of symmetry up to a possible blowup time, then the solution can be extended to a strong solution beyond the possible blowup time. \end{abstract} \maketitle \noindent Key words: blowup criterion, 3D Navier-Stokes equation, Campanato spaces with variable growth condition. \noindent {\it AMS Subject Classification (2010):} 35Q30, 76D03, 76D05, 46E35 \section{Introduction}\label{s:intro} \noindent In this paper we construct a blowup criterion along maximum points of the 3D-Navier-Stokes flow in terms of function spaces with variable growth condition. The Navier-Stokes equation is expressed as \begin{equation}\label{NS} \begin{cases} \partial_tv+(v\cdot \nabla)v-\Delta v+\nabla p=0 & \text{in}\ \mathbb{R}^3\times[0,T), \\ \nabla\cdot v=0 & \text{in}\ \mathbb{R}^3\times[0,T), \\ v_0=v|_{t=0} & \text{in}\ \mathbb{R}^3, \end{cases} \end{equation} where $v$ is a vector field representing the velocity of the fluid, and $p$ is the pressure. Perhaps the most significant blowup criterion is the Beale-Kato-Majda criterion \cite{BKM}.
The Beale-Kato-Majda criterion is as follows: \begin{thm}\label{thm:BKM} Let $s>1/2$, and let $v_0\in H^s$ with $\text{div}\ v_0=0$ in the distribution sense. Suppose that $v$ is a strong solution of \eqref{NS}. If \begin{equation}\label{BKM criterion} \int_0^T\|\text{curl}\ v(t)\,\|_{\infty}dt<\infty, \end{equation} then $v$ can be extended to the strong solution up to some $T'$ with $T'>T$. \end{thm} \noindent This blowup criterion was further improved by Giga~\cite{G}, Kozono and Taniuchi~\cite{KT}, the authors~\cite{NY}, etc. On the other hand, Constantin and Fefferman \cite{CF} (see also \cite{CFM}) took into account the geometric structure of the vortex stretching term in the vorticity equations to get another kind of blowup condition. They imposed a vortex direction condition on the high vorticity part. This criterion was also further improved by, for example, Deng, Hou and Yu~\cite{DHY}. These two separate forms of criteria, controlling the blowup by the magnitude and the direction of the vorticity respectively, are interpolated by Chae~\cite{Chae}. For details on the blowup problem of the Navier-Stokes equation, see Fefferman~\cite{F} for example. In this paper, we give a different type of blowup criterion from these. We focus on the geometric behavior of the velocity vector field near each maximum point. In order to state our blowup criterion, we need to give several definitions. Let us denote a maximum point of $|v|$ at a time $t$ by $x_M=x_{M(t)}\in\mathbb{R}^3$ (if there are several maximum points at a time $t$, then we choose one of them; we sometimes suppress the time $t$). We use a rotation and a translation to bring a maximum point to the origin and make its direction parallel to the $x_3$-axis. Then we decompose $v$ into two parts: the symmetric flow part and its remainder. In this paper we prove that, if the remainder part is small, then the solution never blows up. Let us explain this precisely.
We denote the unit tangent vector by \begin{equation*} \tau(x_M)=\tau(x_{M(t)})=(v/|v|)(x_{M(t)},t), \end{equation*} and we choose unit normal vectors $n_1(x_M)$ and $n_2(x_M)$ so that \begin{equation*} \tau(x_M)\cdot n_1(x_M)=\tau(x_M)\cdot n_2(x_M)=n_1(x_M)\cdot n_2(x_M)=0. \end{equation*} Note that $n_1$ and $n_2$ are not uniquely determined. We now construct a Cartesian coordinate system with the new $y_1$-axis taken to be the straight line which passes through the maximum point and is parallel to $n_1$, and the new $y_2$-axis taken to be the straight line which passes through the maximum point and is parallel to $n_2$. We define the $y_3$-axis from $\tau$ in the same way. Here we fix the maximum point $x_M=x_{M(t_*)}$ at $t=t_*$ for the moment. Then $v$ can be expressed as \begin{equation}\label{v to u} v(x,t)=\tilde u_1(x,t)n_1(x_{M(t_*)})+\tilde u_2(x,t)n_2(x_{M(t_*)})+\tilde u_3(x,t)\tau(x_{M(t_*)}), \end{equation} with $\tilde u=(\tilde u_1,\tilde u_2,\tilde u_3)$, where \begin{align*} \tilde u_1(x,t) &=v(x,t)\cdot n_1(x_{M(t_*)}), \\ \tilde u_2(x,t) &=v(x,t)\cdot n_2(x_{M(t_*)}), \\ \tilde u_3(x,t) &=v(x,t)\cdot \tau(x_{M(t_*)}). \end{align*} Let $y=(y_1,y_2,y_3)$ be the coordinate representation of the point $x$ in the coordinate system based at the maximum point which is specified by the orthogonal frame $\{n_1,n_2,\tau\}$. That is, the point $x\in\mathbb{R}^3$ can be realized as $x=x_{M}+n_1(x_M)y_1+n_2(x_M)y_2+\tau(x_M)y_3$ with $x_M=x_{M(t_*)}$. Then we can rewrite $\tilde u(x)=\tilde u(x,t)$ as $u(y)=u(y,t)=u_{M(t_*)}(y,t)$ with \begin{align*} u_1(y)&=u_1(y,t)=\tilde u_1(x_M+n_1(x_M)y_1+n_2(x_M)y_2+\tau(x_M)y_3,t),\\ u_2(y)&=u_2(y,t)=\tilde u_2(x_M+n_1(x_M)y_1+n_2(x_M)y_2+\tau(x_M)y_3,t),\\ u_3(y)&=u_3(y,t)=\tilde u_3(x_M+n_1(x_M)y_1+n_2(x_M)y_2+\tau(x_M)y_3,t). \end{align*} In this case $u_1(0,t_*)=u_2(0,t_*)=0$ and $u_3(0,t_*)=|v(x_{M(t_*)},t_*)|$.
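The construction of the frame $\{n_1, n_2, \tau\}$ at a maximum point can be sketched numerically; since $n_1$ and $n_2$ are not unique, the code below (a helper of our own) simply picks one admissible choice.

```python
import numpy as np

def frame_at_max(v_at_xM):
    """Orthonormal frame {n1, n2, tau} at a maximum point, with tau = v/|v|.

    v_at_xM : the velocity vector v(x_M(t), t), assumed nonzero.
    """
    v = np.asarray(v_at_xM, dtype=float)
    tau = v / np.linalg.norm(v)
    # take a coordinate vector far from parallel to tau, project off tau
    e = np.zeros(3)
    e[np.argmin(np.abs(tau))] = 1.0
    n1 = e - np.dot(e, tau) * tau
    n1 /= np.linalg.norm(n1)
    n2 = np.cross(tau, n1)  # completes a right-handed orthonormal frame
    return n1, n2, tau
```

Any rotation of $n_1, n_2$ about $\tau$ gives another admissible frame, reflecting the non-uniqueness noted above.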
Since the Navier-Stokes equation is rotation and translation invariant, $u$ also satisfies the Navier-Stokes equation \eqref{NS} in the $y$-variable. Then $\nabla p$, in the $y$-variable, can be expressed as \begin{equation*} \nabla p = \sum_{i,j=1}^3R_iR_j\nabla(u_iu_j), \end{equation*} where $R_j$ ($j=1,2,3$) are the Riesz transforms. We decompose $u$ into two parts: the symmetric flow part $U$ and its remainder part $r$: $$ u=U+r. $$ The symmetric flow part $U$ is defined as follows: \begin{defn}\label{symmetric flow} We say $U$ is a symmetric flow if $U$ satisfies \begin{equation*} \begin{cases} U_1(y_1,y_2,y_3)=-U_1(y_1,y_2,-y_3),\\ U_2(y_1,y_2,y_3)=-U_2(y_1,y_2,-y_3),\\ U_3(y_1,y_2,y_3)=U_3(y_1,y_2,-y_3). \end{cases} \end{equation*} \end{defn} We see that the symmetric flow cannot create a large gradient of the pressure. Indeed, a basic calculation shows that \begin{equation}\label{pressure zero} \sum_{i,j=1}^3R_iR_j\partial_{3}(U_iU_j)|_{y=0}=0, \end{equation} since, if $f$ is even (odd) with respect to $y_3$, then $R_1f$ and $R_2f$ are also even (odd) with respect to $y_3$, but $R_3f$ is odd (even) with respect to $y_3$. Thus we only need to control the remainder part~$r$; namely, we have the following pressure formula: \begin{equation}\label{remainder estimate} \partial_3 p|_{y=0} = \sum_{i,j=1}^3R_iR_j\partial_3\left(r_iU_j+U_ir_j+r_ir_j\right)|_{y=0}. \end{equation} In this paper, using the above formula, we construct a blowup criterion of a different type from the Beale-Kato-Majda type and the Constantin-Fefferman type. We measure the symmetricity of the flow near each maximum point by controlling the remainder part~$r$.
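Definition~\ref{symmetric flow} does not single out a decomposition $u = U + r$; one natural candidate, used here purely as an illustration, is the parity (anti)symmetrization in $y_3$, which always produces a symmetric flow in the sense of the definition.

```python
import numpy as np

def symmetric_part(u):
    """Parity projection onto symmetric flows (Definition `symmetric flow`).

    u : array of shape (3, ..., m) sampling (u1, u2, u3) on a grid that is
        symmetric about y3 = 0, with y3 along the last axis, so that
        flipping the last axis realizes y3 -> -y3.
    Returns U; the remainder is r = u - U.
    """
    u = np.asarray(u, dtype=float)
    u_flip = np.flip(u, axis=-1)          # u(y1, y2, -y3)
    U = np.empty_like(u)
    U[0] = 0.5 * (u[0] - u_flip[0])       # U1 odd in y3
    U[1] = 0.5 * (u[1] - u_flip[1])       # U2 odd in y3
    U[2] = 0.5 * (u[2] + u_flip[2])       # U3 even in y3
    return U
```

If $u$ already has the stated parities, then $U = u$ and $r = 0$; in general $r$ carries exactly the opposite parities, and its size is what the blowup criterion controls.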
In order to obtain a reasonable blowup condition from \eqref{remainder estimate}, we need two function spaces $V=(V,\|\cdot\|_V)$ and $W=(W,\|\cdot\|_W)$ on $\mathbb{R}^3$ such that \begin{eqnarray}\label{ineq1} |f(0)|&\leq& \|f\|_{W}, \\ \label{ineq2} \|R_iR_jf\|_{W}&\leq&C \|f\|_{W}, \\ \label{ineq3} \|fg\|_{W}&\leq& C\|f\|_{V}\|g\|_{V}. \end{eqnarray} That is, we need some smoothness condition at the origin for functions in $W$, the boundedness of the Riesz transforms on $W$, and the boundedness of the pointwise multiplication operator from $V\times V$ to $W$. Moreover, it is known that there exist positive constants $R$ and $C$ such that \begin{equation}\label{C/r} |v(x,t)|\leq C/|x|\quad\text{for}\quad |x|>R, \end{equation} where $R$ and $C$ are independent of $t\in[0,T)$. This is due to Corollary 1 in \cite{CKN} (we use the partial regularity result to obtain the decay). See also Section 1 in \cite{CY}. We need to take the decay condition \eqref{C/r} into account to construct $V$. From this point of view, we use Campanato spaces with variable growth condition. We discuss these function spaces in Sections \ref{s:space}--\ref{s:exmp}. The following definition is the key in this paper. \begin{defn}\label{no local collapsing} We say that ``$v$ has no local collapsing (of its symmetry near the maximum points)" with respect to the function space $V$, if there exist constants $C>0$ and $\alpha<2$ such that, for each fixed $x_{M(t_*)}$ at $t_*\in[0,T)$, $u=u_{M(t_*)}$ has the following property: \begin{equation*} \inf_{u=U+r} \left\{\sum_{i,j} \left( \|\partial_3r_i\|_{V} \|U_j\|_{V} + \|r_i\|_{V} \|\partial_3 U_j\|_{V} + \|r_i\|_{V} \|\partial_3 r_j\|_{V} \right) \bigg|_{t=t_*} \right\} \leq C\frac{(T-t_*)^{-\alpha}}{u_3(0,t_*)}, \end{equation*} where the infimum is taken over all decompositions $u=U+r$ with symmetric flow $U$.
\end{defn} Roughly saying, if $\|\partial_3 r_j\|_V$ and $\|r_j\|_V$ are sufficiently small compare to $\|\partial_3 U_j\|_V$ and $\|U_j\|_V$ (which means symmetric part is dominant), then $v$ is no local collapsing. The following is the main theorem. \begin{equation}gin{thm}[Blowup criteria along maximum points]\label{theorem 1} Let function spaces $V$ and $W$ satisfy \eqref{ineq1}, \eqref{ineq2} and \eqref{ineq3}. Let $v_0$ be any non zero, smooth, divergence-free vector field in Schwartz class, that is, \begin{equation}gin{equation*} |\partial_x^\alpha v_0(x)| \leq C_{\alpha,K} (1+|x|)^{-K} \quad\text{in}\quad \mathbb{R}^3 \end{equation*} for any $\alpha\in\mathbb{Z}_+^3$ and any $K>0$. Suppose that $v\in C^\infty([0,T)\times\mathbb{R}^3)$ is a unique smooth solution of \eqref{NS} up to $T$. If $v$ is no local collapsing with respect to $V$, then $v$ can be extended to the strong solution up to some $T'$ with $T'>T$. \end{thm} In the next section we prove Theorem~\ref{theorem 1} by using the regularity criterion by \cite{G}. We also give an example of function with no local collapsing which doesn't satisfy the Beale-Kato-Majda criterion. In Section~\ref{s:space} we define Campanato spaces with variable growth condition which give concrete function spaces $V$ and $W$ satisfying \eqref{ineq1}, \eqref{ineq2} and \eqref{ineq3}. Campanato spaces with variable growth condition were introduced by \cite{NakaiYabuta1985JMSJ} to characterize the pointwise multipliers on $\mathrm{BMO}$, and then they were investigated by \cite{Nakai1997Studia,Nakai2006Studia,Nakai2010RMC,NakaiSawano2012JFA}, etc. Roughly saying, the function spaces $V$ and $W$ are required to express $C^\alpha$ ($0<\alpha<1$) continuity near the origin and the decay condition \eqref{C/r} far from the origin. For these requirement, we can use Campanato spaces with variable growth condition. 
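For later use we record how \eqref{ineq1}, \eqref{ineq2} and \eqref{ineq3} combine with the pressure formula \eqref{remainder estimate}; the computation below is only a sketch of how the quantity in Definition~\ref{no local collapsing} arises. For any decomposition $u=U+r$ with symmetric flow $U$, by \eqref{ineq1} and \eqref{ineq2},
\begin{equation*}
|\partial_3 p(0,t_*)|
\le \sum_{i,j=1}^3\big\|R_iR_j\partial_3(r_iU_j+U_ir_j+r_ir_j)\big\|_{W}
\le C\sum_{i,j=1}^3\big\|\partial_3(r_iU_j+U_ir_j+r_ir_j)\big\|_{W},
\end{equation*}
and, by the Leibniz rule, \eqref{ineq3} and the symmetry in $(i,j)$, the right-hand side is bounded by
\begin{equation*}
C\sum_{i,j=1}^3\left( \|\partial_3r_i\|_{V} \|U_j\|_{V} + \|r_i\|_{V} \|\partial_3 U_j\|_{V} + \|r_i\|_{V} \|\partial_3 r_j\|_{V} \right).
\end{equation*}
Taking the infimum over all such decompositions, Definition~\ref{no local collapsing} yields $|\partial_3 p(0,t_*)|\le C(T-t_*)^{-\alpha}/u_3(0,t_*)$.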
We state the boundedness of the Riesz transforms and of the pointwise multiplication operator on these function spaces in Section~\ref{s:SIO} and Section~\ref{s:PWM}, respectively. Finally, we show that Campanato spaces satisfy the conditions \eqref{ineq1}, \eqref{ineq2} and \eqref{ineq3} for some variable growth condition in Section~\ref{s:exmp}. \section{Proof of the main theorem} In this section we give a proof of the main theorem. First we show a lemma. \begin{lem}\label{lem} Under the assumptions of Theorem~\ref{theorem 1}, for each fixed $x_{M(t_*)}$, the following inequalities hold: \begin{align}\label{universal constant} -(v\cdot\nabla p)(x_{M(t_*)},t_*) &\leq C (T-t_*)^{-\alpha}, \\ (v\cdot \Delta v)(x_{M(t_*)},t_*) &\leq 0. \label{Delta} \end{align} \end{lem} \begin{proof} Using the derivative $\partial_3$ along the $\tau$ direction, we have \begin{equation*} -(v\cdot\nabla p)(x_{M(t_*)},t_*) = -(u_3\partial_{3}p)(0,t_*), \end{equation*} since $u_1(0,t_*)=u_2(0,t_*)=0$. Then, by \eqref{remainder estimate}, \eqref{ineq1} and Definition~\ref{no local collapsing}, we get \eqref{universal constant}. Next we show \eqref{Delta}. To do this we prove \begin{equation*} (u_3\Delta u_3)(0,t_*)\le 0, \end{equation*} where $\Delta$ is the Laplacian with respect to $y=(y_1,y_2,y_3)$. Since $y=0$ is a maximum point, we see \begin{equation*} \partial_{j}|u(y)|\bigg|_{y=0}=0\quad\text{for}\quad j=1,2,3, \end{equation*} and \begin{equation*} \partial_{j}^2|u(y)|\bigg|_{y=0}\leq 0\quad\text{for}\quad j=1,2,3. \end{equation*} There are smooth functions $\theta_1$, $\theta_2$ and $\theta_3$ such that \begin{eqnarray*} u_1(y)&=&|u(y)|\sin\theta_1(y), \\ u_2(y)&=&|u(y)|\sin\theta_2(y), \\ u_3(y)&=&|u(y)|\cos\theta_3(y) \end{eqnarray*} with $\theta_1(0)=\theta_2(0)=\theta_3(0)=0$.
A direct calculation yields \begin{eqnarray*} \partial_{1}u_3(y)&=&\partial_{1}|u(y)|\cos \theta_3(y)-|u(y)|\sin\theta_3(y)\partial_{1}\theta_3(y),\\ \partial_{1}^2u_3(y)&=&\partial_{1}^2|u(y)|\cos \theta_3(y)-2\partial_{1}|u(y)|\sin\theta_3(y)\partial_{1}\theta_3(y)\\ & &\phantom{*} -|u(y)|\cos\theta_3(y)(\partial_{1}\theta_3(y))^2-|u(y)|\sin\theta_3(y)\partial_{1}^2\theta_3(y). \end{eqnarray*} Thus we have \begin{equation*} \partial_{1}^2u_3(y)\bigg|_{y=0}=\partial_{1}^2|u(y)|-|u(y)|(\partial_{1}\theta_3(y))^2\bigg|_{y=0}\leq 0. \end{equation*} By similar calculations in the $y_2$ and $y_3$ directions, we obtain $(u_3\Delta u_3)(0,t_*)\le 0$. \end{proof} Next we define the ``trajectory" $\gamma:[\tilde t,T)\to\mathbb{R}^3$ starting at a point $\tilde x$: \begin{equation*} \partial_t\gamma(\tilde x,\tilde t;t)=v( \gamma(\tilde x,\tilde t; t), t) \quad\text{with}\quad \gamma(\tilde x,\tilde t;\tilde t)=\tilde x. \end{equation*} Then $\gamma$ provides a diffeomorphism and the equation \eqref{NS} can be rewritten as follows: \begin{equation*} \partial_t\bigg(v(\gamma(\tilde x,\tilde t;t),t)\bigg) =(\Delta v-\nabla p)(\gamma(\tilde x,\tilde t;t),t) \quad (\tilde t<t<T) \end{equation*} with $\gamma(\tilde x,\tilde t; \tilde t)=\tilde x\in \mathbb{R}^3$. Since $v$ is bounded for each fixed $t\in [0,T)$, we can define $X(t)\subset\mathbb{R}^3$ as the set of all maximum points of $|v(\cdot,t)|$ at a time $t\in[0,T)$, namely, \begin{equation*} |v(x,t)| = \sup_{\xi\in\mathbb{R}^3}|v(\xi,t)|\ \text{for}\ x\in X(t) \quad\text{and}\quad |v(x,t)| < \sup_{\xi\in\mathbb{R}^3}|v(\xi,t)|\ \text{for}\ x\not\in X(t). \end{equation*} By \eqref{C/r}, $X(t)$ is a bounded set uniformly in $t$ in a possible blowup scenario. Let $B(x,r)$ be the ball of radius $r$ centered at $x$.
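The rewriting of \eqref{NS} along the trajectory $\gamma$ above is nothing but the chain rule: since $\partial_t\gamma=v(\gamma,t)$,
\begin{equation*}
\partial_t\Big(v(\gamma(\tilde x,\tilde t;t),t)\Big)
=\big(\partial_t v+(v\cdot\nabla)v\big)(\gamma(\tilde x,\tilde t;t),t)
=(\Delta v-\nabla p)(\gamma(\tilde x,\tilde t;t),t),
\end{equation*}
that is, the material derivative of $v$ along $\gamma$ equals the right-hand side of \eqref{NS}.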
For any $r>0$, we see that there is a barrier function $\beta(t)>0$ such that \begin{equation*} |v(x,t)|+\beta(t) < \sup_{\xi\in\mathbb{R}^3}|v(\xi,t)| \quad \text{for}\quad x\not\in \cup_{\xi\in X(t)}B(\xi,r). \end{equation*} Then, using Lemma~\ref{lem} and the smoothness of the solution, we get the following: \begin{prop}\label{prop} Under the assumptions of Theorem~\ref{theorem 1}, for any $\delta>0$ and $t_*\in[0,T)$, there exist a time interval $[t_*,t_*')\subset[0,T)$ and a radius $r_*$ such that the following two properties hold for all $t'\in[t_*,t_*')$: \begin{itemize} \item $ \cup_{\xi\in X(t_*)}B(\xi,r_*) \Subset \Omega(t'), $ where \begin{multline}\label{Omega} \Omega(t'):=\bigg\{x\in\mathbb{R}^3: (\Delta v\cdot v)(\gamma(x,t_*;t'),t')\leq \delta,\\ (-\nabla p\cdot v)(\gamma(x,t_*;t'),t')\leq \delta+C(T-t')^{-\alpha}\bigg\}, \end{multline} \item $ |v(\gamma(x,t_*;t'), t')|^2< \sup_{\xi\in\mathbb{R}^3}|v(\xi,t_*)|^2 \quad\text{for}\quad x\in \left(\cup_{\xi\in X(t_*)}B(\xi,r_*)\right)^c$. \end{itemize} \end{prop} \begin{proof}[Proof of Theorem~\ref{theorem 1}] Note that the open interval $(0,T)$ is covered by the collection $\{(t_*,t_*')\}_{t_*\in[0,T)}$ of open intervals such that the interval $[t_*,t_*')$ is as in Proposition~\ref{prop} for $t_*\in[0,T)$. Since $(0,T)$ is a Lindel\"of space, we can choose a sequence of time intervals $[t_j,t_j')$, $j=0,1,2,\cdots$ (finite or infinite), such that $(0,T)=\cup_{j}(t_j,t_j')$, and such that $[t_j,t_j')$ and $r_j$ satisfy the properties of Proposition~\ref{prop} for $t_j\in[0,T)$.
We may assume that \begin{equation*} 0=t_0<t_1<t_2<\cdots, \quad t_{j+1}<t_j',\ j=0,1,\cdots. \end{equation*} For $t\in[t_0,t_0')$ and $x\in\cup_{\xi\in X(t_0)}B(\xi,r_0)$, from the first property in Proposition~\ref{prop} it follows that \begin{eqnarray*} & &|v(\gamma(x,t_0;t),t)|^2\\ &=&\int_{t_0}^t\partial_{t'}|v(\gamma(x,t_0;t'),t')|^2dt'+|v(x,t_0)|^2 \\ &=& 2\int_{t_0}^t\partial_{t'} v\cdot v\, dt'+|v(x,t_0)|^2\\ &=& 2\int_{t_0}^t\left(\Delta v\cdot v-\nabla p\cdot v\right) dt'+|v(x,t_0)|^2\\ &\leq & 2\left(2\delta(t-t_0)+C\int_{t_0}^t(T-t')^{-\alpha}dt'\right) +\sup_{\xi\in\mathbb{R}^3}|v(\xi,t_0)|^2. \end{eqnarray*} The case $x\in(\cup_{\xi\in X(t_0)}B(\xi,r_0))^c$ is straightforward by the second property in Proposition~\ref{prop}. Then we have \begin{equation*} |v({z},t)|^2 \leq 2\left(2\delta(t-t_0)+C\int_{t_0}^t(T-t')^{-\alpha}dt'\right) +\sup_{\xi\in\mathbb{R}^3}|v(\xi,t_0)|^2 \end{equation*} for all $t\in[t_0,t_0')$ and all $z\in\mathbb{R}^3$ with $z=\gamma(x,t_0;t)$, since $\gamma$ gives a diffeomorphism. Repeating the above argument over the successive intervals, we finally have \begin{equation*} |v(x,t)|^2 \leq 2\left(2\delta t+C\int_0^t(T-t')^{-\alpha}dt'\right)+\sup_{\xi\in\mathbb{R}^3}|v(\xi,0)|^2 \end{equation*} for all $t\in [0,T)$ and all $x\in\mathbb{R}^3$. Since $\alpha<2$, Fubini's theorem gives $\int_0^T\int_0^t(T-t')^{-\alpha}\,dt'\,dt=\int_0^T(T-t')^{1-\alpha}\,dt'<\infty$, and hence \begin{equation*} \|v\|_{L^2(0,T;L^\infty(\mathbb{R}^3))}<\infty. \end{equation*} Due to the classical regularity criterion (see \cite{G} for example), we see that the solution does not blow up at $T$. \end{proof} \begin{rem}\label{counterexample} We can construct a function $u$ which satisfies both Definition~\ref{no local collapsing} and \begin{equation*} \int_0^T\|\operatorname{curl} u(t)\|_\infty\,dt=\infty \quad\text{(the Beale-Kato-Majda criterion)} \end{equation*} (in this remark, $u$ has nothing to do with the Navier-Stokes solution; we just regard $u$ as a time-dependent vector field).
If $\theta_j(y)=\theta_j(-y)$ ($j=1,2,3$; even angular), we see that $\partial_3u_1(y)-\partial_1u_3(y)|_{y=0}$ can be made arbitrarily large. In fact, \begin{align*} \partial_1u_3(y) &= (\partial_1|u(y)|)\cos\theta_3(y) -|u(y)|\sin\theta_3(y)\,\partial_1\theta_3(y), \\ \partial_3u_1(y) &= (\partial_3|u(y)|)\sin\theta_1(y) +|u(y)|\cos\theta_1(y)\,\partial_3\theta_1(y) \end{align*} and then \begin{equation*} \partial_3u_1(y)-\partial_1u_3(y)\bigg|_{y=0} = |u(y)|\partial_3\theta_1(y)\bigg|_{y=0}. \end{equation*} Since $\partial_3\theta_1(0)$ can be taken arbitrarily large for each $t>0$, we can construct the desired function $u$. Note that, since $\theta_j(y)$ ($j=1,2,3$) are even angular, $u$ is a symmetric flow (see Definition \ref{symmetric flow}). \end{rem} \section{Campanato spaces with variable growth condition}\label{s:space} In this section we define Campanato spaces $\mathcal{L}N_{p,\phi}$ with variable growth condition and state basic properties of these function spaces. To do this we also define Morrey spaces and H\"older spaces with variable growth condition. Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space. We denote by $B(x,r)$ the open ball centered at $x\in\mathbb{R}^n$ and of radius $r$, that is, \begin{equation*} B(x,r) =\{y\in\mathbb{R}^n:|y-x|<r\}. \end{equation*} For a measurable set $G \subset \mathbb{R}^n$, we denote by $|G|$ and $\chi_{G}$ the Lebesgue measure of $G$ and the characteristic function of $G$, respectively. We consider variable growth functions $\phi:\mathbb{R}^n\times(0,\infty)\to(0,\infty)$. For a ball $B=B(x,r)$, we write $\phi(B)$ in place of $\phi(x,r)$. For a function $f\in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ and for a ball $B$, let $$ f_B = |B|^{-1} \int_B f(x) \,dx.
$$ Then we define Campanato spaces $\mathcal{L}_{p,\phi}(\mathbb{R}^n)$ and $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$, Morrey spaces $L_{p,\phi}(\mathbb{R}^n)$, and H\"older spaces $\Lambda_{\phi}(\mathbb{R}^n)$ and $\Lambda^{\natural}_{\phi}(\mathbb{R}^n)$ with variable growth function $\phi$ as follows: \begin{defn}\label{defn:CamMorHol} For $1\le p <\infty$ and $\phi:\mathbb{R}^n\times(0,\infty)\to(0,\infty)$, the function spaces $\mathcal{L}_{p,\phi}(\mathbb{R}^n)$, $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$, $L_{p,\phi}(\mathbb{R}^n)$, $\Lambda_{\phi}(\mathbb{R}^n)$ and $\Lambda^{\natural}_{\phi}(\mathbb{R}^n)$ are the sets of all functions $f$ such that \begin{align*} \|f\|_{\mathcal{L}_{p,\phi}} & =\sup_B \frac 1{\phi(B)}\left( \frac 1{|B|} \int_{B} |f(x)-f_{B}|^p \,dx \right)^{1/p}<\infty, \\ \|f\|_{\mathcal{L}N_{p,\phi}} &= \|f\|_{\mathcal{L}_{p,\phi}} + |f_{B(0,1)}|<\infty, \\ \|f\|_{L_{p,\phi}} & =\sup_B \frac 1{\phi(B)}\left( \frac 1{|B|} \int_{B} |f(x)|^p \,dx \right)^{1/p}<\infty, \\ \|f\|_{\Lambda_{\phi}} & =\sup_{x,y\in \mathbb{R}^n, \; x\ne y} \frac {2|f(x)-f(y)|}{\phi(x,|x-y|)+\phi(y,|y-x|)}<\infty, \\ \|f\|_{\Lambda^{\natural}_{\phi}} &= \|f\|_{\Lambda_{\phi}} + |f(0)|<\infty, \end{align*} respectively. \end{defn} We regard $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ and $L_{p,\phi}(\mathbb{R}^n)$ as spaces of functions modulo null-functions, $\mathcal{L}_{p,\phi}(\mathbb{R}^n)$ as a space of functions modulo null-functions and constant functions, $\Lambda^{\natural}_{\phi}(\mathbb{R}^n)$ as a space of functions defined at all $x\in \mathbb{R}^n$, and $\Lambda_{\phi}(\mathbb{R}^n)$ as a space of functions defined at all $x\in \mathbb{R}^n$ modulo constant functions. Then these five functionals are norms and thereby these spaces are all Banach spaces.
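As a simple consistency check of these definitions, for $\phi(x,r)=r^{\alpha}$ the Campanato seminorm is controlled by the H\"older seminorm $\|f\|_{\mathrm{Lip}_{\alpha}}=\sup_{x\ne y}|f(x)-f(y)|/|x-y|^{\alpha}$: for any ball $B=B(x_0,r)$,
\begin{equation*}
\frac1{r^{\alpha}}\cdot\frac1{|B|}\int_{B}|f(x)-f_{B}|\,dx
\le \frac1{r^{\alpha}}\cdot\frac1{|B|^2}\int_{B}\int_{B}|f(x)-f(y)|\,dy\,dx
\le \frac{(2r)^{\alpha}}{r^{\alpha}}\,\|f\|_{\mathrm{Lip}_{\alpha}}
= 2^{\alpha}\|f\|_{\mathrm{Lip}_{\alpha}},
\end{equation*}
since $|x-y|\le 2r$ for $x,y\in B$. The reverse control is the nontrivial direction and is part of the results quoted below.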
In order to apply $\mathcal{L}N_{p,\phi}$ to the blowup criterion (more precisely, in order to find specific function spaces $V$ and $W$ satisfying \eqref{ineq1}, \eqref{ineq2} and \eqref{ineq3}), we state several properties of these function spaces and the relation between $\phi$ and the function spaces. For two variable growth functions $\phi_1$ and $\phi_2$, we write $\phi_1\sim\phi_2$ if there exists a positive constant $C$ such that $$ C^{-1}\phi_1(B)\le\phi_2(B)\le C\phi_1(B) \quad\text{for all balls $B$}. $$ In this case, the two spaces defined by $\phi_1$ and by $\phi_2$ coincide with equivalent norms. If $p=1$ and $\phi\equiv 1$, then $\mathcal{L}_{p,\phi}(\mathbb{R}^n)$ is the usual $\mathrm{BMO}(\mathbb{R}^n)$. For $\phi(x,r)=r^{\alpha}$, $0<\alpha\le1$, we denote $\Lambda_{r^{\alpha}}(\mathbb{R}^n)$ and $\Lambda^{\natural}_{r^{\alpha}}(\mathbb{R}^n)$ by $\mathrm{Lip}_{\alpha}(\mathbb{R}^n)$ and $\mathrm{Lip}N_{\alpha}(\mathbb{R}^n)$, respectively. In this case, \begin{equation*} \|f\|_{\mathrm{Lip}_{\alpha}} = \sup_{x,y\in \mathbb{R}^n, \; x\ne y} \frac {|f(x)-f(y)|}{|x-y|^{\alpha}} \quad\text{and}\quad \|f\|_{\mathrm{Lip}N_{\alpha}} = \|f\|_{\mathrm{Lip}_{\alpha}} + |f(0)|. \end{equation*} If $\phi(x,r)=\min(r^{\alpha},1)$, $0<\alpha\le1$, then \begin{equation*} \|f\|_{\Lambda^{\natural}_{\phi}} \sim \|f\|_{\mathrm{Lip}_{\alpha}}+\|f\|_{L^{\infty}}. \end{equation*} From the definition it follows that \begin{equation*} \|f\|_{\mathcal{L}_{p,\phi}}\le 2\|f\|_{{L}_{p,\phi}}, \quad \|f\|_{\mathcal{L}N_{p,\phi}}\le (2+\phi(0,1))\|f\|_{{L}_{p,\phi}}. \end{equation*} If $\phi(B)=|B|^{-1/p}$ for all balls $B$, then \begin{equation*} \|f\|_{L_{p,\phi}} = \|f\|_{L^p}.
\end{equation*} We consider the following conditions on the variable growth function $\phi$: \begin{alignat}{2} &\frac1{A_1} \le \frac {\phi(x,s)}{\phi(x,r)} \le A_1, & \quad \frac12\le \frac sr\le 2, \label{phi-double}\\ &\frac1{A_2} \le \frac {\phi(x,r)}{\phi(y,r)} \le A_2, & \quad |x-y|\le r, \label{phi-near} \\ &\phi(x,r) \le A_3\phi(x,s), & \quad 0<r<s<\infty, \label{phi-incr} \end{alignat} where $A_i$, $i=1,2,3$, are positive constants independent of $x,y\in \mathbb{R}^n$ and $r,s>0$. Note that \eqref{phi-near} and \eqref{phi-incr} imply that there exists a positive constant $C$ such that $$ \phi(x,r)\le C \phi(y,s) \quad \text{for}\quad B(x,r) \subset B(y,s), $$ where the constant $C$ is independent of the balls $B(x,r)$ and $B(y,s)$. The following three theorems are known: \begin{thm}[{\cite{Nakai2008ActaMathSinica}}]\label{thm:Cam-p-1} If $\phi$ satisfies \eqref{phi-double}, \eqref{phi-near} and \eqref{phi-incr}, then, for every $1\le p<\infty$, $\mathcal{L}_{p,\phi}(\mathbb{R}^n)=\mathcal{L}_{1,\phi}(\mathbb{R}^n)$ and $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)=\mathcal{L}N_{1,\phi}(\mathbb{R}^n)$ with equivalent norms, respectively. \end{thm} \begin{thm}[{\cite{Nakai2006Studia}}]\label{thm:Cam-Hol} If $\phi$ satisfies \eqref{phi-double}, \eqref{phi-near}, \eqref{phi-incr}, and there exists a positive constant $C$ such that \begin{equation}\label{int_0 phi} \int_0^r \frac{\phi(x,t)}{t}\,dt \le C\phi(x,r), \quad x\in \mathbb{R}^n,\ r>0, \end{equation} then, for every $1\le p<\infty$, each element in $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ can be regarded as a continuous function (that is, each element is equivalent to a continuous function modulo null-functions), and $\mathcal{L}_{p,\phi}(\mathbb{R}^n)=\Lambda_{\phi}(\mathbb{R}^n)$ and $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)=\Lambda^{\natural}_{\phi}(\mathbb{R}^n)$ with equivalent norms, respectively.
In particular, if $\phi(x,r)=r^{\alpha}$, $0<\alpha\le1$, then, for every $1\le p<\infty$, $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)=\mathrm{Lip}N_{\alpha}(\mathbb{R}^n)$ and $\mathcal{L}_{p,\phi}(\mathbb{R}^n)=\mathrm{Lip}_{\alpha}(\mathbb{R}^n)$ with equivalent norms, respectively. \end{thm} \begin{thm}[{\cite{Nakai2006Studia}}]\label{thm:Cam-Mor} Let $1\le p<\infty$. If $\phi$ satisfies \eqref{phi-double}, \eqref{phi-near}, and there exists a positive constant $C$ such that \begin{equation}\label{int^infty phi} \int_r^{\infty}\frac{\phi(x,t)}t\,dt \le C\phi(x,r), \quad x\in \mathbb{R}^n,\ r>0, \end{equation} then, for $f\in\mathcal{L}_{p,\phi}(\mathbb{R}^n)$, the limit $ \sigma(f) = \lim_{r\to\infty}f_{B(0,r)} $ exists and $$ \|f\|_{\mathcal{L}_{p,\phi}} \sim \|f-\sigma(f)\|_{L_{p,\phi}}. $$ That is, the mapping $f\mapsto f-\sigma(f)$ is bijective and bicontinuous from $\mathcal{L}_{p,\phi}(\mathbb{R}^n)$ (modulo constants) to ${L}_{p,\phi}(\mathbb{R}^n)$. \end{thm} \begin{rem}\label{rem:Mor} If $\int_1^{\infty}\phi(0,t)/t\,dt<\infty$, then $\phi(0,r)\to0$ as $r\to\infty$. Hence, for $f\in L_{p,\phi}(\mathbb{R}^n)$, we have $$ |\sigma(f)| =\lim_{r\to\infty}|f_{B(0,r)}| \le\lim_{r\to\infty}\phi(0,r)\|f\|_{L_{p,\phi}} =0. $$ That is, $\sigma(f)=0$. \end{rem} For a ball $B_*\subset\mathbb{R}^n$ and $0<\alpha\le1$, let \begin{equation*} \|f\|_{\mathrm{Lip}_{\alpha}(B_*)} = \sup_{x,y\in B_*, \; x\ne y} \frac {|f(x)-f(y)|}{|x-y|^{\alpha}}. \end{equation*} We also conclude the following: \begin{prop}\label{prop:Cam-Lip-local} Let $1\le p<\infty$ and $0<\alpha\le1$. Assume that, for a ball $B_*$, \begin{equation}\label{r^a in B} \phi(x,r)=r^{\alpha} \quad\text{for all balls $B(x,r)\subset B_*$}.
\end{equation} Then each element $f$ in $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ can be regarded as a continuous function on the ball $B_*$, and there exists a positive constant $C$ such that \begin{equation*} \|f\|_{\mathrm{Lip}_{\alpha}(B_*)} \le C\|f\|_{\mathcal{L}_{p,\phi}}, \end{equation*} where $C$ depends only on $n$ and $\alpha$. In particular, if \eqref{r^a in B} holds for $B_*=B(0,1)$, then each $f\in\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ is $\alpha$-Lipschitz continuous near the origin and \begin{equation*} \|f\|_{\mathcal{L}N_{p,\phi}} \sim \|f\|_{\mathcal{L}_{p,\phi}}+|f(0)|. \end{equation*} \end{prop} \begin{proof} It is known that, if $\phi$ satisfies \eqref{phi-double}, then \begin{equation}\label{fB1-fB2} |f_{B(x,r_1)}-f_{B(x,r_2)}| \le C\int_{r_1}^{2r_2}\frac{\phi(x,t)}t\,dt\ \|f\|_{\mathcal{L}_{p,\phi}} \quad\text{for}\ x\in\mathbb{R}^n, \ r_1<r_2, \end{equation} where $C$ depends only on $n$, see \cite[Lemma~2.4]{Nakai1993Studia}. Hence we have that, if $B(x,r)$, $B(y,r)\subset B_*$, then \begin{equation*} |f_{B(x,r)}-f_{B(y,r)}| \le C\int_{r}^{2r+|x-y|}\frac{t^{\alpha}}t\,dt\ \|f\|_{\mathcal{L}_{p,\phi}} \le C_* (2r+|x-y|)^{\alpha}\ \|f\|_{\mathcal{L}_{p,\phi}}, \end{equation*} since $B(x,r)$, $B(y,r)\subset B((x+y)/2,r+|x-y|/2)$, where $C_*$ depends only on $n$ and $\alpha$. Letting $r\to0$, we have \begin{equation*} |f(x)-f(y)| \le C_* |x-y|^{\alpha}\ \|f\|_{\mathcal{L}_{p,\phi}} \end{equation*} for almost every $x,y\in B_*$. Hence we can regard $f$ as a continuous function modulo null-functions, and we have \begin{equation*} \|f\|_{\mathrm{Lip}_{\alpha}(B_*)} \le C_*\|f\|_{\mathcal{L}_{p,\phi}}. \end{equation*} If $B_*=B(0,1)$, then \begin{equation*} |f_{B(0,r)}-f_{B(0,1)}| \le C\int_{r}^{2}\frac{t^{\alpha}}t\,dt\ \|f\|_{\mathcal{L}_{p,\phi}} \le C \|f\|_{\mathcal{L}_{p,\phi}}.
\end{equation*} Letting $r\to0$, we have \begin{equation*} |f(0)-f_{B(0,1)}| \le C\|f\|_{\mathcal{L}_{p,\phi}}. \end{equation*} This shows that $\|f\|_{\mathcal{L}_{p,\phi}}+|f_{B(0,1)}|\sim\|f\|_{\mathcal{L}_{p,\phi}}+|f(0)|$. \end{proof} \begin{prop}\label{prop:Cam-Lp-local} Let $1\le p<\infty$ and let $B_*$ be a ball such that $B(0,1)\subset B_*$. Assume that there exists a positive constant $A$ such that \begin{equation*} \phi(B)\le A|B|^{-1/p} \quad\text{for all balls $B\subset B_*$}. \end{equation*} Then there exists a positive constant $C$ such that \begin{equation*} \left(\int_{B_*}|f(x)|^p\,dx\right)^{1/p} \le C\|f\|_{\mathcal{L}N_{p,\phi}} \end{equation*} for all $f\in\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$, where $C$ depends only on $A$, $n$ and $p$. \end{prop} \begin{proof} Let $B_*=B(x_*,r_*)$. Using \eqref{fB1-fB2}, we have \begin{equation*} |f_{B(0,1)}-f_{B(x_*,r_*)}| \le C\int_{1}^{2r_*}\frac{At^{-n/p}}t\,dt\ \|f\|_{\mathcal{L}_{p,\phi}} \le C_*\|f\|_{\mathcal{L}_{p,\phi}}, \end{equation*} where $C_*$ depends only on $A$, $n$ and $p$. Then \begin{align*} \left(\int_{B_*}|f(x)|^p\,dx\right)^{1/p} &\le \left(\int_{B_*}|f(x)-f_{B_*}|^p\,dx\right)^{1/p} + |f_{B(0,1)}-f_{B(x_*,r_*)}| + |f_{B(0,1)}| \\ &\le (A +C_*)\|f\|_{\mathcal{L}_{p,\phi}}+|f_{B(0,1)}| \\ &\le (A + C_*+1)\|f\|_{\mathcal{L}N_{p,\phi}}. \end{align*} This shows the conclusion. \end{proof} \section{Singular integral operators}\label{s:SIO} In this section we use singular integral theory to show the boundedness of the Riesz transforms on Campanato spaces with variable growth condition. We denote by $L^p_c(\mathbb{R}^n)$ the set of all $f\in L^p(\mathbb{R}^n)$ with compact support. Let $0<\kappa\le1$.
We shall consider a singular integral operator $T$ with measurable kernel $K$ on $\mathbb{R}^n\times \mathbb{R}^n$ satisfying the following properties: \begin{gather} |K(x,y)|\le \frac{C}{|x-y|^n} \quad\text{for}\quad x\not=y, \label{SK1} \\ \begin{split} |K(x,y)-K(z,y)|+|K(y,x)-K(y,z)| &\le \frac{C}{|x-y|^n} \left(\frac{|x-z|}{|x-y|}\right)^{\kappa} \\ &\text{for}\quad |x-y|\ge2|x-z|, \label{SK2} \end{split} \\ \begin{split} \int_{r\le|x-y|<R} K(x,y) \,dy =\int_{r\le|x-y|<R} K(y,x) \,dy = & \ 0 \\ \text{for $0<r<R<\infty$ } & \text{and $x\in \mathbb{R}^n$}, \label{SK3} \end{split} \end{gather} where $C$ is a positive constant independent of $x,y,z\in \mathbb{R}^n$. For $\eta>0$, let \begin{equation*} T_{\eta}f(x)=\int_{|x-y|\ge\eta} K(x,y)f(y)\,dy. \end{equation*} Then $T_{\eta}f(x)$ is well defined for $f\in L^p_c(\mathbb{R}^n)$, $1<p<\infty$. We assume that, for all $1<p<\infty$, there exists a positive constant $C_p$, independent of $\eta>0$, such that \begin{equation*} \|T_{\eta}f\|_{L^p} \le C_p\|f\|_{L^p} \quad\text{for}\quad f\in L^p_c(\mathbb{R}^n), \end{equation*} and that $T_{\eta}f$ converges to $Tf$ in $L^p(\mathbb{R}^n)$ as $\eta\to0$. Under these assumptions, the operator $T$ can be extended to a continuous linear operator on $L^p(\mathbb{R}^n)$. We shall say that an operator $T$ satisfying the above conditions is a singular integral operator of type $\kappa$. For example, the Riesz transforms are singular integral operators of type $1$. Now, to define $T$ for functions $f\in\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$, we first define the modified version of $T_{\eta}$ by \begin{equation}\label{tildeT} {\tilde T}_{\eta}f(x) =\int_{|x-y|\ge\eta} f(y) \big[K(x,y)-K(0,y)(1-\chi_{B(0,1)}(y))\big] \,dy. \end{equation} Then we can show that the integral in the definition above converges absolutely for each $x$ and that ${\tilde T}_{\eta}f$ converges in $L^{p}(B)$ as $\eta\to0$ for each ball $B$.
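To see, for instance, that the Riesz transforms are of type $1$, note that their kernels are $K_j(x,y)=c_n(x_j-y_j)/|x-y|^{n+1}$, so that
\begin{equation*}
|K_j(x,y)|\le \frac{c_n}{|x-y|^{n}}
\qquad\text{and}\qquad
\int_{r\le|x-y|<R} K_j(x,y)\,dy=0.
\end{equation*}
The size bound \eqref{SK1} follows from $|x_j-y_j|\le|x-y|$, the cancellation \eqref{SK3} from the oddness of the kernel on the annulus (and from $K_j(y,x)=-K_j(x,y)$), and the smoothness condition \eqref{SK2} with $\kappa=1$ from the mean value theorem, since $|\nabla_x K_j(x,y)|\le C|x-y|^{-n-1}$.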
We denote the limit by ${\tilde T}f$. If both ${\tilde T}f$ and $Tf$ are well defined, then their difference is a constant. We can show the following results. Theorem~\ref{thm:SI} is an extension of \cite[Theorem~4.1]{Nakai2010RMC} and Theorem~\ref{thm:SI-Mor} is an extension of \cite[Theorem~2]{Nakai1994MathNachr}. The proofs are almost the same. \begin{thm}\label{thm:SI} Let $0<\kappa\le1$ and $1< p<\infty$. Assume that $\phi$ and $\psi$ satisfy \eqref{phi-double} and that there exists a positive constant $A$ such that, for all $x\in \mathbb{R}^n$ and $r>0$, \begin{equation}\label{C1-A} r^{\kappa} \int_r^{\infty}\frac{\phi(x,t)}{t^{1+\kappa}}\,dt \le A \psi(x,r). \end{equation} If $T$ is a singular integral operator of type $\kappa$, then ${\tilde T}$ is bounded from $\mathcal{L}_{p,\phi}(\mathbb{R}^n)$ to $\mathcal{L}_{p,\psi}(\mathbb{R}^n)$ and from $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ to $\mathcal{L}N_{p,\psi}(\mathbb{R}^n)$, that is, there exists a positive constant $C$ such that $$ \|\tilde{T}f\|_{\mathcal{L}_{p,\psi}} \le C\|f\|_{\mathcal{L}_{p,\phi}}, \quad \|\tilde{T}f\|_{\mathcal{L}N_{p,\psi}} \le C\|f\|_{\mathcal{L}N_{p,\phi}}. $$ Moreover, if $\phi$ and $\psi$ also satisfy \eqref{phi-near} and \eqref{phi-incr}, then ${\tilde T}$ is bounded from $\mathcal{L}N_{1,\phi}(\mathbb{R}^n)$ to $\mathcal{L}N_{1,\psi}(\mathbb{R}^n)$. \end{thm} \begin{cor}\label{cor:SI Lam} Under the assumptions of Theorem~\ref{thm:SI}, if $\phi$ and $\psi$ satisfy \eqref{phi-near}, \eqref{phi-incr} and \eqref{int_0 phi}, then ${\tilde T}$ is bounded from $\Lambda_{\phi}(\mathbb{R}^n)$ to $\Lambda_{\psi}(\mathbb{R}^n)$ and from $\Lambda^{\natural}_{\phi}(\mathbb{R}^n)$ to $\Lambda^{\natural}_{\psi}(\mathbb{R}^n)$. \end{cor} For Morrey spaces $L_{p,\phi}(\mathbb{R}^n)$, we have the following. \begin{thm}\label{thm:SI-Mor} Let $0<\kappa\le1$ and $1< p<\infty$.
Assume that $\phi$ and $\psi$ satisfy \eqref{phi-double} and that there exists a positive constant $A$ such that, for all $x\in \mathbb{R}^n$ and $r>0$, \begin{equation*} \int_r^{\infty}\frac{\phi(x,t)}{t}\,dt \le A \psi(x,r). \end{equation*} If $T$ is a singular integral operator of type $\kappa$, then $T$ is bounded from $L_{p,\phi}(\mathbb{R}^n)$ to $L_{p,\psi}(\mathbb{R}^n)$. \end{thm} Now we state the boundedness of the Riesz transforms. For $f$ in the Schwartz class, the Riesz transforms of $f$ are defined by \begin{equation*} R_jf(x) = c_n\lim_{\varepsilon\to0}R_{j,\varepsilon}f(x), \quad j=1,\cdots, n, \end{equation*} where \begin{equation*} R_{j,\varepsilon}f(x) = \int_{\mathbb{R}^n\setminus B(x,\varepsilon)} \frac{x_j-y_j}{|x-y|^{n+1}}f(y)\,dy, \quad c_n=\Gamma\left(\frac{n+1}{2}\right) \pi^{-\frac{n+1}{2}}. \end{equation*} Then it is known that there exists a positive constant $C_p$, independent of $\varepsilon>0$, such that \begin{equation*} \|R_{j,\varepsilon}f\|_{L^p} \le C_p\|f\|_{L^p} \quad\text{for}\quad f\in L^p_c(\mathbb{R}^n), \end{equation*} and $R_{j,\varepsilon}f$ converges to $R_jf$ in $L^p(\mathbb{R}^n)$ as $\varepsilon\to0$. That is, the operator $R_j$ can be extended to a continuous linear operator on $L^p(\mathbb{R}^n)$. Hence we can define the modified Riesz transforms of $f$ as \begin{equation*} \tilde{R}_jf(x) = c_n\lim_{\varepsilon\to0} \tilde{R}_{j,\varepsilon}f(x), \quad j=1,\cdots, n, \end{equation*} where \begin{equation*} \tilde{R}_{j,\varepsilon}f(x) = \int_{\mathbb{R}^n\setminus B(x,\varepsilon)} \left(\frac{x_j-y_j}{|x-y|^{n+1}} -\frac{(-y_j)(1-\chi_{B(0,1)}(y))}{|y|^{n+1}}\right)f(y)\,dy. \end{equation*} We note that, if both $R_jf$ and $\tilde{R}_jf$ are well defined on $\mathbb{R}^n$, then $R_jf-\tilde{R}_jf$ is a constant function. More precisely, $$ R_jf(x)-\tilde{R}_jf(x) = c_n\int_{\mathbb{R}^n} \frac{(-y_j)(1-\chi_{B(0,1)}(y))}{|y|^{n+1}}f(y)\,dy.
$$ \begin{rem}\label{rem:RT1} If $f$ is a constant function, then $\tilde{R}_jf=0$. Actually, for $f\equiv1$, \begin{align*} \tilde{R}_{j,\varepsilon}1(x) &= \int_{\mathbb{R}^n\setminus B(x,\varepsilon)} \frac{(x_j-y_j)\chi_{B(x,1)}}{|x-y|^{n+1}} \,dy \\ &\phantom{**} + \int_{\mathbb{R}^n\setminus B(x,\varepsilon)} \left(\frac{(x_j-y_j)(1-\chi_{B(x,1)})}{|x-y|^{n+1}} -\frac{(-y_j)(1-\chi_{B(0,1)}(y))}{|y|^{n+1}}\right)\,dy \\ &= \int_{B(0,1)\setminus B(0,\varepsilon)} \frac{y_j}{|y|^{n+1}} \,dy + \int_{B(x,\varepsilon)} \frac{(-y_j)(1-\chi_{B(0,1)}(y))}{|y|^{n+1}}\,dy \\ &= \int_{B(x,\varepsilon)} \frac{(-y_j)(1-\chi_{B(0,1)}(y))}{|y|^{n+1}}\,dy \to0 \quad\text{as $\varepsilon\to0$}, \end{align*} since $$ \int_{B(0,1)\setminus B(0,\varepsilon)} \frac{y_j}{|y|^{n+1}} \,dy =0 $$ and $$ \int_{\mathbb{R}^n} \left(\frac{(x_j-y_j)(1-\chi_{B(x,1)})}{|x-y|^{n+1}} -\frac{(-y_j)(1-\chi_{B(0,1)}(y))}{|y|^{n+1}}\right)\,dy =0. $$ Hence $\tilde{R}_j1(x)=0$ for all $x\in\mathbb{R}^n$. \end{rem} \begin{thm}\label{thm:Cam-RT} Let $1\le p<\infty$ and let $\phi$ satisfy \eqref{phi-double} and $$ r\int_r^{\infty}\frac{\phi(x,t)}{t^2}\,dt \le A\phi(x,r) $$ for all $x\in\mathbb{R}^n$ and $r>0$. Assume that there exists a growth function $\tilde{\phi}$ such that $\phi\le\tilde{\phi}$ and that $\tilde{\phi}$ satisfies \eqref{phi-double}, \eqref{phi-near} and \eqref{int^infty phi}. If $f\in\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ and $\sigma(f)=\lim_{r\to\infty}f_{B(0,r)}=0$, then $R_jf$, $j=1,2,\cdots,n$, are well defined, $\sigma(R_jf)=\lim_{r\to\infty}(R_jf)_{B(0,r)}=0$, and $$ \|R_jf\|_{\mathcal{L}N_{p,\phi}} \le C\|f\|_{\mathcal{L}N_{p,\phi}}, \quad j=1,2,\cdots,n, $$ where $C$ is a positive constant independent of $f$. \end{thm} \begin{proof} Let $f\in\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ and $\sigma(f)=0$.
Then, by Theorem \ref{thm:Cam-Mor}, $$ \|f\|_{L_{p,\tilde{\phi}}} =\|f-\sigma(f)\|_{L_{p,\tilde{\phi}}} \sim\|f\|_{\mathcal{L}_{p,\tilde{\phi}}} \le\|f\|_{\mathcal{L}_{p,\phi}} \le\|f\|_{\mathcal{L}N_{p,\phi}}. $$ By Theorem~\ref{thm:SI-Mor}, $R_jf$ is well defined and $$ \|R_jf\|_{L_{p,\tilde{\phi}}} \le C\|f\|_{L_{p,\tilde{\phi}}} \le C\|f\|_{\mathcal{L}N_{p,\phi}}. $$ This shows that $\sigma(R_jf)=0$ by Remark~\ref{rem:Mor}, and $$ |(R_jf)_{B(0,1)}| \le \left(\frac1{|B(0,1)|}\int_{B(0,1)}|R_jf(x)|^p\,dx\right)^{1/p} \le \tilde{\phi}(0,1)\|R_jf\|_{L_{p,\tilde{\phi}}} \le C\|f\|_{\mathcal{L}N_{p,\phi}}. $$ Since $R_jf-\tilde{R}_jf$ is a constant, by Theorem \ref{thm:SI}, we have $$ \|R_jf\|_{\mathcal{L}_{p,\phi}} =\|\tilde{R}_jf\|_{\mathcal{L}_{p,\phi}} \le C\|f\|_{\mathcal{L}_{p,\phi}} \le C\|f\|_{\mathcal{L}N_{p,\phi}}. $$ Therefore, we have $\|R_jf\|_{\mathcal{L}N_{p,\phi}}\le C\|f\|_{\mathcal{L}N_{p,\phi}}$. \end{proof} \section{Pointwise multiplication}\label{s:PWM} Let $L^0(\mathbb{R}^n)$ be the set of all measurable functions on $\mathbb{R}^n$. Let $X_1$ and $X_2$ be subspaces of $L^0(\mathbb{R}^n)$ and $g\in L^0(\mathbb{R}^n)$. We say that $g$ is a pointwise multiplier from $X_1$ to $X_2$ if $fg\in X_2$ for all $f\in X_1$. We denote by $\mathrm{PWM}(X_1,X_2)$ the set of all pointwise multipliers from $X_1$ to $X_2$. For $\phi:\mathbb{R}^n\times(0,\infty)\to(0,\infty)$, we define \begin{align}\label{Phi*} \Phi^{*}(x,r)=\int_1^{\max(2,|x|,r)}\frac{\phi(0,t)}{t}\,dt, \\ \Phi^{**}(x,r)=\int_r^{\max(2,|x|,r)}\frac{\phi(x,t)}{t}\,dt. \label{Phi**} \end{align} \begin{prop}[{\cite[Proposition~4.4]{Nakai1997Studia}}]\label{prop:PWM} Suppose that $\phi_1$ and $\phi_2$ satisfy the doubling condition \eqref{phi-double}. For $\phi_1$, define $\Phi_1^{*}$ and $\Phi_1^{**}$ by \eqref{Phi*} and \eqref{Phi**}, respectively. Let $\phi_3=\phi_2/(\Phi_1^{*}+\Phi_1^{**})$.
If $1\le p_2<p_1<\infty$ and $p_4\ge p_1p_2/(p_1-p_2)$, then \begin{gather} \mathrm{PWM}(\mathcal{L}N_{p_1,\phi_1}(\mathbb{R}^n),\mathcal{L}N_{p_2,\phi_2}(\mathbb{R}^n)) \supset \mathcal{L}N_{p_2,\phi_3}(\mathbb{R}^n)\cap L_{p_4,\phi_2/\phi_1}(\mathbb{R}^n), \\ \|g\|_{\mathrm{Op}}\le C(\|g\|_{\mathcal{L}_{p_2,\phi_3}}+\|g\|_{L_{p_4,\phi_2/\phi_1}}), \end{gather} where $\|g\|_{\mathrm{Op}}$ is the operator norm of $g\in\mathrm{PWM}(\mathcal{L}N_{p_1,\phi_1}(\mathbb{R}^n),\mathcal{L}N_{p_2,\phi_2}(\mathbb{R}^n))$. \end{prop} \begin{lem}[{\cite[Lemma~3.5]{Nakai1997Studia}}]\label{lem:Cam-Mor} Let $1\le p<\infty$. Suppose that $\phi$ satisfies the doubling condition \eqref{phi-double}. Then \begin{equation} \mathcal{L}N_{p,\phi}(\mathbb{R}^n)\subset L_{p,\Phi^{*}+\Phi^{**}}(\mathbb{R}^n) \quad\text{and}\quad \|f\|_{L_{p,\Phi^{*}+\Phi^{**}}}\le C\|f\|_{\mathcal{L}N_{p,\phi}}. \end{equation} \end{lem} \begin{cor}\label{cor:PWM} Suppose that $\phi$ satisfies the doubling condition \eqref{phi-double}. Let $\psi=\phi(\Phi^{*}+\Phi^{**})$. If $1\le p_2<p_1<\infty$ and $p_4\ge p_1p_2/(p_1-p_2)$, then \begin{gather} \mathrm{PWM}(\mathcal{L}N_{p_1,\phi}(\mathbb{R}^n),\mathcal{L}N_{p_2,\psi}(\mathbb{R}^n)) \supset \mathcal{L}N_{p_4,\phi}(\mathbb{R}^n), \\ \|g\|_{\mathrm{Op}}\le C\|g\|_{\mathcal{L}N_{p_4,\phi}}, \end{gather} where $\|g\|_{\mathrm{Op}}$ is the operator norm of $g\in\mathrm{PWM}(\mathcal{L}N_{p_1,\phi}(\mathbb{R}^n),\mathcal{L}N_{p_2,\psi}(\mathbb{R}^n))$. This implies that \begin{equation} \|fg\|_{\mathcal{L}N_{p_2,\psi}}\le C\|f\|_{\mathcal{L}N_{p_1,\phi}}\|g\|_{\mathcal{L}N_{p_4,\phi}}. \end{equation} \end{cor} For example, we can take $p_1=p_4=4$ and $p_2=2$.
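When $p_4=p_1p_2/(p_1-p_2)$, the exponents satisfy the H\"older relation $1/p_2=1/p_1+1/p_4$, so at the level of norms the product estimate rests on a H\"older inequality; for the sample choice $p_1=p_4=4$, $p_2=2$ it is exactly the Cauchy-Schwarz inequality. The sketch below is a purely numerical sanity check of this discrete H\"older inequality on random data (an illustration only, not part of the proof):

```python
import random

# Discrete check of Hoelder's inequality ||fg||_{p2} <= ||f||_{p1} ||g||_{p4}
# with 1/p2 = 1/p1 + 1/p4; here p1 = p4 = 4 and p2 = 2 (Cauchy-Schwarz).
def norm(v, p):
    # l^p norm with respect to the normalized counting measure
    return (sum(abs(x) ** p for x in v) / len(v)) ** (1.0 / p)

random.seed(0)
f = [random.uniform(-1, 1) for _ in range(1000)]
g = [random.uniform(-1, 1) for _ in range(1000)]
fg = [a * b for a, b in zip(f, g)]

# The inequality holds for every sample, up to floating-point tolerance.
assert norm(fg, 2) <= norm(f, 4) * norm(g, 4) + 1e-12
```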
\begin{proof} By Lemma~\ref{lem:Cam-Mor} we have the inclusion \begin{gather} \mathcal{L}N_{p_2,\phi}(\mathbb{R}^n)\cap L_{p_4,\Phi^{*}+\Phi^{**}}(\mathbb{R}^n) \supset \mathcal{L}N_{p_4,\phi}(\mathbb{R}^n), \\ \|g\|_{\mathcal{L}N_{p_2,\phi}}+\|g\|_{L_{p_4,\Phi^{*}+\Phi^{**}}} \le C \|g\|_{\mathcal{L}N_{p_4,\phi}}. \end{gather} Then, using Proposition~\ref{prop:PWM}, we have the conclusion. \end{proof} \section{Specific function spaces}\label{s:exmp} We now give specific function spaces $V$ and $W$ satisfying \eqref{ineq1}, \eqref{ineq2} and \eqref{ineq3}. For example, let $p>2$, $-n/p\le\alpha_{*}<0<\alpha<1$, $-n/p\le\beta<0$, and \begin{equation}\label{exmp1} \phi(x,r)= \begin{cases} r^{\alpha}, & |x|\le2,\ 0<r\le2, \\ r^{\beta}, & |x|\le2,\ r>2, \\ r^{\alpha_{*}}, & |x|>2,\ 0<r\le2, \\ r^{\beta}, & |x|>2,\ r>2, \end{cases} \quad \psi(x,r)= \begin{cases} r^{\alpha}, & |x|\le2,\ 0<r\le2, \\ r^{\beta}, & |x|\le2,\ r>2, \\ r^{2\alpha_{*}}, & |x|>2,\ 0<r\le2, \\ r^{\beta}, & |x|>2,\ r>2, \end{cases} \end{equation} and take $$ W=\mathcal{L}N_{p/2,\psi}(\mathbb{R}^n) \quad\text{and}\quad V=\mathcal{L}N_{p,\phi}(\mathbb{R}^n); $$ then $V$ and $W$ satisfy \eqref{ineq1}, \eqref{ineq2} and \eqref{ineq3} when $n=3$. We will check these properties in this section. First, we see that $\phi$ and $\psi$ satisfy \eqref{phi-double} and \begin{equation*} \psi(x,r)=r^{\alpha} \quad\text{for all $B(x,r)\subset B(0,2)$}. \end{equation*} Then, by Proposition~\ref{prop:Cam-Lip-local}, we have \begin{equation*} \|f\|_{\mathrm{Lip}_{\alpha}(B(0,2))} \le C\|f\|_{\mathcal{L}_{p/2,\psi}}, \end{equation*} and $$ \|f\|_{\mathcal{L}N_{p/2,\psi}} \sim \|f\|_{\mathcal{L}_{p/2,\psi}}+|f(0)|. $$ This shows the property \eqref{ineq1}.
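The doubling condition \eqref{phi-double} for the piecewise power weight $\phi$ of \eqref{exmp1} can also be checked numerically. The sketch below uses sample exponents chosen only for illustration ($\alpha=1/2$, $\alpha_{*}=\beta=-1/2$, admissible for $n=3$, $p=4$) and verifies that $\phi(x,2r)/\phi(x,r)$ stays between fixed positive constants, uniformly in $x$ and $r$:

```python
# Numerical check of the doubling condition for phi in (exmp1):
# phi(x, 2r)/phi(x, r) is bounded above and below uniformly in x and r.
# Sample exponents (assumed for illustration): alpha = 0.5, alpha_* = beta = -0.5.
ALPHA, ALPHA_STAR, BETA = 0.5, -0.5, -0.5

def phi(abs_x, r):
    # Piecewise power weight of (exmp1), depending only on |x| and r.
    if r <= 2:
        return r ** ALPHA if abs_x <= 2 else r ** ALPHA_STAR
    return r ** BETA

ratios = [
    phi(ax, 2 * r) / phi(ax, r)
    for ax in [0.0, 1.0, 2.0, 3.0, 10.0]
    for r in [2 ** k / 10 for k in range(-20, 21)]
]
# Doubling: the ratio is pinned between fixed positive constants.
assert 0.2 <= min(ratios) and max(ratios) <= 5.0
```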
Next, the properties \eqref{ineq2} and \eqref{ineq3} follow from Propositions~\ref{prop:product} and \ref{prop:RT} below, respectively. Therefore, if $f,g\in\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ and $\sigma(fg)=\lim_{r\to\infty}(fg)_{B(0,r)}=0$, then \begin{equation*} |(R_jR_k(fg))(0)| \le \|R_jR_k(fg)\|_{\mathcal{L}N_{p/2,\psi}} \le C\|fg\|_{\mathcal{L}N_{p/2,\psi}} \le C\|f\|_{\mathcal{L}N_{p,\phi}}\|g\|_{\mathcal{L}N_{p,\phi}}. \end{equation*} Further, let $f$ be $\alpha$-Lipschitz continuous on $B(0,2)$ and $|f(x)|\le C/|x|$ for $|x|\ge2$. Then $\sigma(f)=0$ and $f$ is in $\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$, if $p$ and $\beta$ satisfy one of the following conditions: \begin{equation*} \begin{cases} 2<p<n&\text{and}\ -1\le\beta<0, \\ p=n&\text{and}\ -1<\beta<0, \\ n<p&\text{and}\ -n/p\le\beta<0. \end{cases} \end{equation*} Moreover, if in addition $\alpha_{*}=\beta/2=-n/p$, then $-n/(p/2)=2\alpha_{*}=\beta<0$ and \begin{equation*} \|R_jR_k(fg)\|_{\mathrm{Lip}_{\alpha}(B(0,2))} +\|R_jR_k(fg)\|_{L^{p/2}} \le C\|R_jR_k(fg)\|_{\mathcal{L}N_{p/2,\psi}} \le C\|f\|_{\mathcal{L}N_{p,\phi}}\|g\|_{\mathcal{L}N_{p,\phi}}, \end{equation*} for all $f,g\in\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$ satisfying $\sigma(fg)=0$; see Proposition~\ref{prop:Cam-Lp-local}. Note that, in the decomposition $u=U+r$ in Definition~\ref{no local collapsing}, we may assume that $U$ has compact support in $\mathbb{R}^3$ at each fixed $t$. Then $|r(t,x)|\le C/|x|$ for large $x\in\mathbb{R}^3$. It is also known that $\nabla u\in L^{\infty}(\mathbb{R}^3)$ at $t$, see \cite{GIM}; that is, $\nabla r$ is bounded. Hence $\sigma(\partial_3r_i U_j)=\sigma(r_i \partial_3U_j) =\sigma(r_i \partial_3r_j)=0$ for all $i,j$. \begin{prop}\label{prop:product} Let $p\ge2$, $-n/p\le\alpha_{*}<0<\alpha\le1$, $-n/p\le\beta<0$, and let $\phi$ and $\psi$ be as in \eqref{exmp1}.
Then there exists a positive constant $C$ such that, for all $f,g\in\mathcal{L}N_{p,\phi}(\mathbb{R}^n)$, \begin{equation} \|fg\|_{\mathcal{L}N_{p/2,\psi}}\le C\|f\|_{\mathcal{L}N_{p,\phi}}\|g\|_{\mathcal{L}N_{p,\phi}}. \end{equation} \end{prop} \begin{proof} For $\phi$ in \eqref{exmp1}, we have \begin{equation*} \Phi^*(x,r) = \int_1^{\max(2,|x|,r)}\frac{\phi(0,t)}{t}\,dt = \int_1^2 t^{\alpha-1}\,dt+\int_2^{\max(2,|x|,r)} t^{\beta-1}\,dt \sim 1, \end{equation*} and \begin{align*} 1+\Phi^{**}(x,r) &= 1+\int_r^{\max(2,|x|,r)}\frac{\phi(x,t)}{t}\,dt \\ &=1+ \begin{cases} \int_r^2 t^{\alpha-1}\,dt, & |x|\le2,\ 0<r\le2, \\ 0, & |x|\le2,\ r>2, \\ \int_r^2 t^{\alpha_{*}-1}\,dt+\int_2^{|x|} t^{\beta-1}\,dt, & |x|>2,\ 0<r\le2, \\ \int_r^{\max(|x|,r)} t^{\beta-1}\,dt, & |x|>2,\ r>2, \end{cases} \\ &\sim \begin{cases} 1, & |x|\le2,\ 0<r\le2, \\ r^{\alpha_{*}}, & |x|>2,\ 0<r\le2, \\ 1, & r>2. \end{cases} \end{align*} Hence \begin{equation*} \phi(x,r)(\Phi^*(x,r)+\Phi^{**}(x,r)) \sim \psi(x,r)= \begin{cases} r^{\alpha}, & |x|\le2,\ 0<r\le2, \\ r^{2\alpha_{*}}, & |x|>2,\ 0<r\le2, \\ r^{\beta}, & r>2. \end{cases} \end{equation*} Then, using Corollary~\ref{cor:PWM}, we have the conclusion. \end{proof} \begin{prop}\label{prop:RT} Let $q>1$, $-n/q\le\delta<0<\alpha<1$, $-n/q\le\beta<0$, and \begin{equation*} \psi(x,r)= \begin{cases} r^{\alpha}, & |x|\le2,\ 0<r\le2, \\ r^{\beta}, & |x|\le2,\ r>2, \\ r^{\delta}, & |x|>2,\ 0<r\le2, \\ r^{\beta}, & |x|>2,\ r>2. \end{cases} \end{equation*} Then the Riesz transforms $\tilde{R}_j$, $j=1,2,\cdots,n$, are bounded on $\mathcal{L}_{q,\psi}(\mathbb{R}^n)$ and on $\mathcal{L}N_{q,\psi}(\mathbb{R}^n)$.
That is, there exists a positive constant $C$ such that, for all $f\in\mathcal{L}_{q,\psi}(\mathbb{R}^n)$, $$ \|\tilde{R}_jf\|_{\mathcal{L}_{q,\psi}}\le C\|f\|_{\mathcal{L}_{q,\psi}}, \quad \|\tilde{R}_jf\|_{\mathcal{L}N_{q,\psi}}\le C\|f\|_{\mathcal{L}N_{q,\psi}}, \quad j=1,2,\cdots,n. $$ Moreover, if $f\in\mathcal{L}N_{q,\psi}(\mathbb{R}^n)$ and $\sigma(f)=\lim_{r\to\infty}f_{B(0,r)}=0$, then the Riesz transforms $R_jf$, $j=1,2,\cdots,n$, are well defined, $\sigma(R_jf)=\lim_{r\to\infty}(R_jf)_{B(0,r)}=0$, and $$ \|R_jf\|_{\mathcal{L}N_{q,\psi}} \le C\|f\|_{\mathcal{L}N_{q,\psi}}, \quad j=1,2,\cdots,n. $$ \end{prop} \begin{proof} We see that $\psi$ satisfies \eqref{phi-double} and $$ r\int_r^{\infty}\frac{\psi(x,t)}{t^2}\,dt \le A\psi(x,r), $$ for all $x\in\mathbb{R}^n$ and $r>0$. Then we have the boundedness of $\tilde{R}_j$ on $\mathcal{L}_{q,\psi}(\mathbb{R}^n)$ and on $\mathcal{L}N_{q,\psi}(\mathbb{R}^n)$. Let \begin{equation*} \tilde{\psi}(x,r)=\tilde{\psi}(r)= \begin{cases} r^{\delta}, & 0<r\le2, \\ r^{\beta}, & r>2. \end{cases} \end{equation*} Then $\tilde{\psi}$ satisfies \eqref{phi-double}, \eqref{phi-near}, \eqref{int^infty phi} and $\psi\le\tilde{\psi}$. Therefore, by Theorem~\ref{thm:Cam-RT}, we have the conclusion. \end{proof} \section*{Acknowledgments} The first author was partially supported by Grant-in-Aid for Scientific Research (C), No.~24540159, Japan Society for the Promotion of Science. The second author was partially supported by Grant-in-Aid for Young Scientists (B), No.~25870004, Japan Society for the Promotion of Science. \begin{thebibliography}{99}\label{references} \bibitem{BKM} J. T. Beale, T. Kato, and A. Majda, \emph{Remarks on the breakdown of smooth solutions for the 3-D Euler equations}, Comm. Math. Phys., 94 (1984), 61--66. \bibitem{CKN} L. Caffarelli, R. Kohn and L. Nirenberg, \emph{Partial regularity of suitable weak solutions of the Navier-Stokes equations}, Comm.
Pure Appl. Math., 35 (1982), 771--831. \bibitem{Chae} D. Chae, \emph{Local existence and blow-up criterion for the Euler equations in the Besov spaces}, Asymptot. Anal., 38 (2004), 339--358. \bibitem{CY} C-H. Chan and T. Yoneda, \emph{On possible isolated blow-up phenomena and regularity criterion of the 3D Navier-Stokes equation along the streamlines}, Methods Appl. Anal., 19 (2012), 211--242. \bibitem{CF} P. Constantin and C. Fefferman, \emph{Direction of vorticity and the problem of global regularity for the Navier-Stokes equations}, Indiana Univ. Math. J., 42 (1993), 775--789. \bibitem{CFM} P. Constantin, C. Fefferman and A. Majda, \emph{Geometric constraints on potential singularity formulation in the 3D Euler equations}, Comm. Partial Differential Equations, 21 (1996), 559--571. \bibitem{DHY} J. Deng, T-Y. Hou and X. Yu, \emph{Improved geometric conditions for non-blowup of the 3D incompressible Euler equation}, Comm. Partial Differential Equations, 31 (2006), 293--306. \bibitem{F} C. Fefferman, \emph{Existence and smoothness of the Navier-Stokes equation}, The millennium prize problems, 57--67, Clay Math. Inst., Cambridge, MA, 2006. \bibitem{G} Y. Giga, \emph{Solutions for semilinear parabolic equations in $L^p$ and regularity of weak solutions of the Navier-Stokes system}, J. Differential Equations, 62 (1986), 186--212. \bibitem{GIM} Y. Giga, K. Inui and S. Matsui, \emph{On the Cauchy problem for the Navier-Stokes equations with nondecaying initial data}, Advances in fluid dynamics, 27--68, Quad. Mat., 4, Dept. Math., Seconda Univ. Napoli, Caserta, (1999). \bibitem{KT} H. Kozono and Y. Taniuchi, \emph{Bilinear estimates in BMO and the Navier-Stokes equations}, Math. Z., 235 (2000), 173--194. \bibitem{Nakai1993Studia} E.~Nakai, \emph{Pointwise multipliers for functions of weighted bounded mean oscillation}, Studia Math., 105 (1993), 105--119.
\bibitem{Nakai1994MathNachr} E.~Nakai, \emph{Hardy-Littlewood maximal operator, singular integral operators and the Riesz potentials on generalized Morrey spaces}, Math. Nachr., 166 (1994), 95--103. \bibitem{Nakai1997Studia} E.~Nakai, \emph{Pointwise multipliers on weighted BMO spaces}, Studia Math., 125 (1997), 35--56. \bibitem{Nakai2006Studia} E.~Nakai, \emph{The Campanato, Morrey and H\"older spaces on spaces of homogeneous type}, Studia Math., 176 (2006), 1--19. \bibitem{Nakai2008ActaMathSinica} E.~Nakai, \emph{A generalization of Hardy spaces $H^p$ by using atoms}, Acta Math. Sinica, 24 (2008), 1243--1268. \bibitem{Nakai2010RMC} E.~Nakai, \emph{Singular and fractional integral operators on Campanato spaces with variable growth conditions}, Rev. Mat. Complut., 23 (2010), 355--381. \bibitem{NakaiSawano2012JFA} E.~Nakai and Y.~Sawano, \emph{Hardy spaces with variable exponents and generalized Campanato spaces}, J. Funct. Anal., 262 (2012), 3665--3748. \bibitem{NakaiYabuta1985JMSJ} E.~Nakai and K.~Yabuta, \emph{Pointwise multipliers for functions of bounded mean oscillation}, J. Math. Soc. Japan, 37 (1985), 207--218. \bibitem{NY} E.~Nakai and T.~Yoneda, \emph{Bilinear estimates in dyadic BMO and the Navier-Stokes equations}, J. Math. Soc. Japan, 64 (2012), 399--422. \end{thebibliography} \end{document}
\begin{document} \title{Canard Phenomenon in a modified Slow-Fast Leslie-Gower and Holling type scheme model} \author{B. Ambrosio \and M.A. Aziz-Alaoui \and R. Yafia } \institute{B. Ambrosio, M.A. Aziz-Alaoui\at Normandie Univ, UNIHAVRE, LMAH, FR-CNRS-3335, ISCN, 76600 Le Havre, France \email{[email protected]} \and R. Yafia \at Ibn Zohr University, Agadir, Le Havre } \date{Received: date / Accepted: date} \maketitle \begin{abstract} Geometric Singular Perturbation Theory has been successful in investigating a broad range of biological problems with different time scales. The aim of this paper is to apply this theory to a predator-prey model of modified Leslie-Gower type for which we consider that prey reproduces much faster than predators. This naturally leads us to introduce a small parameter $\epsilon$ which gives rise to a slow-fast system. This system has a special folded singularity which has not been analyzed in the classical work \cite{KS01}. We use the blow-up technique to visualize the behavior near this fold point $P$. Outside of this region the dynamics are given by classical singular perturbation theory. This allows us to quantify geometrically the attractive limit-cycle with an error of $O(\epsilon)$ and to show that it exhibits the \textit{canard} phenomenon while crossing $P$. \end{abstract} \section{Introduction} \label{sec:intro} In \cite{az03}, the authors introduced the following model: \begin{equation}\label{DLsystemaz03} \left\{ \begin{array}{rcl} \dot{x}=\left(r_{1}-b_{1}x-\frac{a_{1}y}{x+k_{1}}\right)x,\\ \\ \dot{y}=\left(r_{2}-\frac{a_{2}y}{x+k_{2}}\right)y \hspace{1cm} \end{array} \right. \end{equation} where $x$ represents the prey and $y$ the predator. This two-species food chain model describes a prey population $x$ which serves as food for a predator $y$. The model parameters $r_1, r_2, a_1, a_2, b_1, k_1$ and $k_2$ are assumed to be positive. They are defined as follows: $r_1$ (resp. $r_2$) is the growth rate of prey $x$ (resp.
predator $y$), $b_1$ measures the strength of competition among individuals of species $x$, $a_1$ (resp. $a_2$) is the maximum value of the per capita reduction rate of $x$ (resp. $y$) due to $y$, and $k_1$ (resp. $k_2$) measures the extent to which the environment provides protection to prey $x$ (resp. to the predator $y$). There is a wide variety of natural systems which may be modelled by system \eqref{DLsystemaz03}, see \cite{Ha91,Up97}. It may, for example, be considered as a representation of an insect pest–spider food chain. Let us mention that the first equation of system \eqref{DLsystemaz03} is standard. The second equation, however, is not standard at all. Recall that the Leslie-Gower formulation is based on the assumption that reduction in a predator population has a reciprocal relationship with per capita availability of its preferred food. This leads to replacing the classical growth term ($+xy$) in the Lotka-Volterra predator equation by a decreasing term ($-y^2$). Indeed, Leslie introduced a predator-prey model where the carrying capacity of the predator environment is proportional to the number of prey. These considerations lead to the following equation for the predator: $\dot{y}=r_2y(1-\frac{y}{\alpha x}).$ The term $\frac{y}{\alpha x}$ of this equation is called the Leslie–Gower term. In case of severe scarcity, adding a positive constant to the denominator introduces a maximum decrease rate, which stands for environment protection. Classical references include \cite{Le48,Le60,Ma73,RM63}. In order to simplify \eqref{DLsystemaz03}, we proceed to the following change of variables:\\ $u(r_{1}t)=\frac{b_{1}}{r_{1}}x(t)$, $v(r_{1}t)=\frac{a_{2}b_{1}}{r_{1}r_{2}}y(t)$, $a=\frac{a_{1}r_{2}}{a_{2}r_{1}}$, $\epsilon=\frac{r_{2}}{r_{1}}$, $e_{1}=\frac{b_{1}k_{1}}{r_{1}}$, $e_{2}=\frac{b_{1}k_{2}}{r_{1}}$, $t'=r_{1}t$.\\ For convenience, we drop the primes on $t$.
We obtain the following system: \begin{equation}\label{DLsystem} \left\{ \begin{array}{rcl} u_t&=&u\left(1-u\right)-\frac{auv}{u+e_{1}},\\ v_t&=&\epsilon v\left(1-\frac{v}{u+e_{2}}\right).\hspace{2cm} \end{array} \right. \end{equation} We assume here that the prey reproduces much faster than the predator, i.e. $r_1\gg r_2$, which implies that $\epsilon$ is small. Note that there are special solutions: $u=0$, $v_t=\epsilon v(1-\frac{v}{e_2})$, and $v=0$, $u_t=u(1-u)$. Hence, the quadrant $(0\leq u \leq 1, v\geq 0)$ is positively invariant for \eqref{DLsystem}. We restrict our analysis to this quadrant. We also assume the following conditions, which ensure the existence of a unique attractive limit-cycle for \eqref{DLsystem}: \[ae_2<e_1, \quad ae_2 \mbox{ not too close to } e_1,\] and \[u^*<\frac{1-e_1}{2}, \quad u^* \mbox{ not too close to } \frac{1-e_1}{2},\] where $u^*$ is the solution of \[u+e_2=\frac{1}{a}(1-u)(u+e_1).\] Under these assumptions there are four fixed points in the positive quadrant: \[P_1=(0,0), P_2=(0,e_2), P_3=(1,0), P_4=(u^*,g(u^*)),\] where \[g(u)=\frac{1}{a}(1-u)(u+e_1).\] \begin{figure} \caption{Limit cycle and nullclines of system \eqref{DLsystem}} \label{fig:limcyclpdf} \end{figure} These assumptions also prevent additional singularities at the fold points. Figure \ref{fig:limcyclpdf} illustrates the nullclines and the attractive limit-cycle of \eqref{DLsystem}. Our aim is now to characterize the limit-cycle. In the following section we proceed to the classical slow-fast analysis, which allows us to describe the trajectories outside of a neighborhood of a special fold point, induced by the nullcline $u=0$, which we will call $P$. In the third section, we use the blow-up technique to analyze the trajectories near this special fold point $P$. Now, let us fix a small value $\alpha>0$ and define a cross section $V=\{(u,v)\in {\mathbb{R}}^2; u>0, \, v=\frac{e_1}{a}+\alpha\}$.
Then, by the regularity of the flow with respect to $\epsilon$, the limit cycle crosses $V$ at a point $(k(\alpha)\epsilon+o(\epsilon),\frac{e_1}{a}+\alpha)$ (below, for convenience, we do not write the dependence on $\alpha$). We have the following theorem. Let \[\bar{u}=\frac{1-e_1}{2},\] and \[A=(0,g(\bar{u})), B=(0,\frac{e_1}{a}+\alpha+\frac{c_2}{c_1k}), C=(u_*,\frac{e_1}{a}+\alpha+\frac{c_2}{c_1k}), D=(\bar{u},g(\bar{u})),\] where $u_*$ is such that $g(u_*)=\frac{e_1}{a}+\alpha+\frac{c_2}{c_1k}$ and \[c_1=\frac{1-e_1}{e_1}, \, c_2=\frac{e_1}{a}(1-\frac{e_1}{ae_2}).\] Let $\gamma'$ be the closed curve defined by: \[\gamma'=[A,B]\cup [B,C] \cup \zeta \cup [D,A]\] where \[\zeta= \{(u,g(u)); \bar{u}\leq u \leq u_*\}.\] \begin{theorem} \label{th:maintheorem} All the trajectories not contained in $u=0$ or $v=0$, and different from the fixed point $P_4$, evolve asymptotically towards a unique limit-cycle $\gamma$ which is $O(\epsilon)$-close to $\gamma'$. \end{theorem} \begin{proof} The existence of the cycle results from the Poincar\'e-Bendixson theorem. For uniqueness, we refer to \cite{Da04}. The approximation by $\gamma'$ results from the slow-fast analysis and the blow-up technique which will be carried out in sections 2 and 3. \end{proof} \begin{remark} According to \cite{BC81,KS01,SW01}, the canard phenomenon occurs when a trajectory crosses a folded point from the attractive manifold and follows the repulsive manifold during a certain amount of time before going away. We will see that, according to this definition, the canard phenomenon occurs here. This explains why we have introduced $\alpha$ and $k$. \end{remark} \section{Slow-Fast Analysis} In this section, we proceed to a classical slow-fast analysis, see for example \cite{He10,Jo95,Ka99,KS01}. We study the layer system and the reduced system. The layer system is obtained by setting $\epsilon=0$ in system \eqref{DLsystem}.
It reads \begin{equation}\label{LayerDLsystem} \left\{ \begin{array}{rcl} u_t&=&u\left(1-u\right)-\frac{auv}{u+e_{1}}=F(u,v),\\ v_t&=&0. \hspace{2cm} \end{array} \right. \end{equation} The stationary points of this system are given by \begin{equation}\label{CriticalManifold} M_0=\{u=0 \mbox{ or } v=\frac{1}{a}(1-u)(u+e_1)=g(u)\}. \end{equation} The set $M_0$ is called the critical manifold. Outside a neighborhood of this manifold, for $\epsilon$ small, regular perturbation theory ensures that trajectories of system \eqref{DLsystem} are $O(\epsilon)$-close to those of system \eqref{LayerDLsystem}. The trajectories of system \eqref{LayerDLsystem} are tangent to the $u$-axis, which justifies the name ``layer system''. These trajectories are the fast trajectories. Furthermore, the Fenichel theory, see \cite{Fe79} or the references cited above, provides the existence of a locally invariant manifold $O(\epsilon)$-close to the critical manifold $M_0$ for compact subsets of $M_0$ where $F'_u(u,v)\neq 0$. Thus, we have to evaluate $F'_u(u,v)$ on the critical manifold. The part of $M_0$ where $F'_u(u,v)<0$ is called the attractive part of the critical manifold. Analogously, the part of $M_0$ where $F'_u(u,v)>0$ is called the repulsive part of the critical manifold. Now, we compute these subsets of $M_0$. We start our computations with the case $u=0$. We have \begin{equation} F'_u(0,v)=1-\frac{av}{e_1}. \end{equation} Therefore, \begin{equation} F'_u(0,v)>0 \Leftrightarrow v<\frac{e_1}{a}. \end{equation} Now, we deal with the case $v=\frac{1}{a}(1-u)(u+e_1)$. We have \begin{equation} F'_u(u,v)=1-2u-av\frac{e_1}{(u+e_1)^2}. \end{equation} For $v=\frac{1}{a}(1-u)(u+e_1)$, we obtain \begin{equation} F'_u(u,g(u))=\frac{u}{u+e_1}(-2u+(1-e_1)). \end{equation} Therefore, \begin{equation} F'_u(u,g(u))>0 \Leftrightarrow u<\frac{1-e_1}{2}=\bar{u}.
\end{equation} Finally, the attractive critical manifold $M_{0,a}$ is given by $u=0$ and $v>\frac{e_1}{a}$, or $v=g(u)$ and $\frac{1-e_1}{2}<u\leq 1$: \[M_{0,a}=\{(0,v);v>\frac{e_1}{a}\}\cup \{(u,g(u));\bar{u}<u\leq 1\}.\] Analogously, the repulsive critical manifold $M_{0,r}$ is given by: \[M_{0,r}=\{(0,v);0\leq v<\frac{e_1}{a}\}\cup \{(u,g(u));0\leq u <\bar{u}\}.\] The non-hyperbolic points of the critical manifold, or fold points, where $F'_u(u,v)=0$, are $B=(0,\frac{e_1}{a})$ and $D=(\bar{u},g(\bar{u}))$. Now, we look at the reduced system. The reduced system gives the slow trajectories, i.e., the trajectories within the critical manifold which persist for $\epsilon$ small within the locally invariant manifold. It is obtained by setting $\epsilon=0$ after the change of time $\tau=\epsilon t$ in \eqref{DLsystem}. It reads (to avoid complications, we keep the notation with $t$, but it should be with $\tau$) \begin{equation}\label{ReducedDLsystem} \left\{ \begin{array}{rcl} 0&=&u\left(1-u\right)-\frac{auv}{u+e_{1}},\\ v_t&=& v\left(1-\frac{v}{u+e_{2}}\right).\hspace{2cm} \end{array} \right. \end{equation} For $u=0$, we obtain \begin{equation}\label{ReducedDLsystemII} v_t=v(1-\frac{v}{e_{2}}). \end{equation} This implies that \begin{equation*} v_t>0 \Leftrightarrow v<e_2. \end{equation*} Note that $(0,e_2)$ is the fixed point $P_2$ of the original system. For $v=g(u)$, we have \begin{equation*} \begin{array}{rcl} v_t&>&0\\ \Leftrightarrow v(1-\frac{v}{u+e_2})&>&0\\ \Leftrightarrow v&<&u+e_2. \end{array} \end{equation*} Moreover, on this branch the slow dynamics read \[v_t=g'(u)u_t=g(u)(1-\frac{g(u)}{u+e_2}).\] Therefore, \[u_t=\frac{g(u)}{g'(u)}(1-\frac{g(u)}{u+e_2}).\] The points where $g'=0$ correspond to jump points if $g(u) \neq u+e_2$, since in this case we have $u_t=-\infty$ at such a point. The analysis of the layer and reduced systems gives the qualitative behavior of the system outside a neighborhood of the fold points.
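The classification of the attractive and repulsive parts of $M_0$ above can be checked numerically. The sketch below uses sample parameter values assumed only for illustration ($a=1$, $e_1=0.2$; with $e_2=0.1$ these satisfy the standing assumption $ae_2<e_1$) and verifies the sign of $F'_u$ on both branches of the critical manifold, on either side of the fold points $B=(0,e_1/a)$ and $D=(\bar{u},g(\bar{u}))$:

```python
# Sign of F'_u on the critical manifold M_0 for F(u,v) = u(1-u) - a*u*v/(u+e1).
# Sample parameters (illustration only): a = 1, e1 = 0.2.
A, E1 = 1.0, 0.2

def dF_du(u, v):
    # Partial derivative of F with respect to u.
    return 1 - 2 * u - A * v * E1 / (u + E1) ** 2

def g(u):
    # Nontrivial branch of the critical manifold: v = g(u).
    return (1 - u) * (u + E1) / A

u_bar = (1 - E1) / 2  # fold point on the branch v = g(u)

# Branch u = 0: repulsive below v = e1/a, attractive above.
assert dF_du(0.0, E1 / A - 0.1) > 0 and dF_du(0.0, E1 / A + 0.1) < 0
# Branch v = g(u): repulsive for u < u_bar, attractive for u > u_bar.
assert dF_du(u_bar - 0.1, g(u_bar - 0.1)) > 0
assert dF_du(u_bar + 0.1, g(u_bar + 0.1)) < 0
```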
Trajectories reach the slow attractive manifold and follow it according to the dynamics, or are repelled by the repulsive slow manifold. Furthermore, the behavior near the jump point $(\bar{u},g(\bar{u}))$ has been rigorously described in \cite{KS01}. Trajectories reaching a neighborhood of the fold point from the right exit the neighborhood on the left along fast fibers, and there is a contraction of rate $e^{-\frac{c}{\epsilon}}$, for some constant $c$, between arriving and exiting trajectories. Figure \ref{fig:limcyclpdf} illustrates this behavior. Therefore, it remains only to analyze the behavior of trajectories near the fold point $P=(0,\frac{e_1}{a})$. This is what we do in the following section by using the blow-up technique. Note that this has not been done in \cite{KS01}, since it is assumed there that the critical manifold can be written $v=\varphi(u)$ with $\varphi'(0)=0$ and $\varphi''(0)\neq0$, which is not the case here since $M_0$ is given by $u=0$ in a neighborhood of the fold point $P=(0,\frac{e_1}{a})$. \begin{remark} Canards may appear near the fold point $D=(\bar{u},g(\bar{u}))$ when \begin{equation} \label{eq:sing} g(u)\simeq u+e_2. \end{equation} As we have already mentioned, canards are solutions that follow the repulsive manifold during a certain amount of time after crossing the fold before being repelled. They were discovered by French mathematicians using nonstandard analysis and later studied with geometric singular perturbation theory, see \cite{BC81,KS01,SW01}. Our assumptions prevent the appearance of canards near $D$. Near $P=(0,\frac{e_1}{a})$, we do have canards, as stated in Theorem \ref{th:maintheorem}. The condition $e_2\simeq \frac{e_1}{a}$, which is the analog of \eqref{eq:sing} for $P$, would lead to a higher-order singularity. We do not consider this case here and leave it for a forthcoming work.
\end{remark} \section{Blow-up technique near the fold point $P=(0,\frac{e_1}{a})$} The following proposition gives the formulation of \eqref{DLsystem} when written around $(0,\frac{e_1}{a})$: \begin{proposition} Near the fold point $(0,\frac{e_1}{a})$, system \eqref{DLsystem} can be rewritten as \begin{equation} \label{eq:arroundfoldpointwithc1} \begin{array}{rcl} \dot{x}&=&c_1x^2-\frac{a}{e_1}xy+O(||(x,y)||^3),\\ \dot{y}&=&\epsilon\left(c_2+ \frac{e_1^2}{a^2e_2^2}x+\left(1-\frac{2e_1}{ae_2}\right)y+O(||(x,y)||^2)\right),\\ \dot{\epsilon}&=&0, \end{array} \end{equation} where \[c_1=\frac{1-e_1}{e_1}, \, c_2=\frac{e_1}{a}(1-\frac{e_1}{ae_2}).\] \end{proposition} \begin{proof} We start with the change of variables \[u=x,\, v=\frac{e_1}{a}+y.\] Plugging into \eqref{DLsystem} gives: \begin{equation*} \begin{array}{rcl} \dot{x}&=&x(1-x)-a\frac{x}{e_1+x}(\frac{e_1}{a}+y),\\ \dot{y}&=&\epsilon(\frac{e_1}{a}+y) (1-\frac{e_1}{a(x+e_2)}-\frac{y}{x+e_2}),\\ \dot{\epsilon}&=&0. \end{array} \end{equation*} Then, we use the following Taylor expansion: \begin{equation*} \frac{1}{e_1+x}=\frac{1}{e_1}-\frac{1}{e_1^2}x+\frac{1}{e_1^3}x^2+o(x^2). \end{equation*} We find \begin{equation} \label{eq:arroundfoldpoint} \begin{array}{rcl} \dot{x}&=&(\frac{1}{e_1}-1)x^2-\frac{a}{e_1}xy+O(x^3)+O(x^2y),\\ \dot{y}&=&\epsilon\left(\frac{e_1}{a}(1-\frac{e_1}{ae_2})+ \frac{e_1^2}{a^2e_2^2}x+(1-\frac{2e_1}{ae_2})y+O(||(x,y)||^2)\right),\\ \dot{\epsilon}&=&0, \end{array} \end{equation} which gives the result. Note that $c_1>0$ whereas $c_2<0$. \end{proof} We will now apply the blow-up technique. The blow-up technique is a change of variables which allows one to desingularize the fold point and to visualize the trajectories in different charts.
We use the following change of variables: \[x=\bar{r}\bar{x}, \quad y=\bar{r}^2\bar{y}, \quad \epsilon=\bar{r}^3\bar{\epsilon}. \] We obtain (we drop the bars): \begin{equation} \label{eq:afterBlowUp} \begin{array}{rcl} \dot{r}x+r\dot{x}&=&c_1r^2x^2-\frac{a}{e_1}r^3xy+O(r^4x^2y)+O(r^3x^3),\\ 2ry\dot{r}+r^2\dot{y}&=&r^3\epsilon(c_2+ \frac{e_1^2}{a^2e_2^2}rx+(1-\frac{2e_1}{ae_2})r^2y+O(||(rx,r^2y)||^2)),\\ 3r^2\epsilon\dot{r}+r^3\dot{\epsilon}&=&0. \end{array} \end{equation} The chart $K_1$ is obtained by setting $\bar{y}=1$. The chart $K_2$ is obtained by setting $\bar{\epsilon}=1$. The chart $K_3$ is obtained by setting $\bar{x}=1$.\\ In order to prove the theorem, we only need to consider the chart $K_2$, which will be fundamental in our analysis. When working in chart $K_2$, we use the subscript $2$.\\ \textbf{Dynamics in chart $K_2$.}\\ \begin{proposition} The dynamics in chart $K_2$ are given by the system: \begin{equation} \label{eq:K2} \begin{array}{rcl} \dot{x}_2&=&c_1x_2^2+O(r_2),\\ \dot{y}_2&=&c_2+O(r_2),\\ \dot{r}_2&=&0. \end{array} \end{equation} \end{proposition} \begin{proof} Setting $\bar{\epsilon}=1$ in \eqref{eq:afterBlowUp} gives: \begin{equation*} \begin{array}{rcl} \dot{x}_2&=&r_2(c_1x_2^2+O(r_2)),\\ \dot{y}_2&=&r_2(c_2+O(r_2)),\\ \dot{r}_2&=&0. \end{array} \end{equation*} Then, we desingularize the system by the change of time $\tau=r_2 t$, which gives the result. \end{proof} For $r_2=0$, we obtain: \begin{equation} \label{eq:K2r=0} \begin{array}{rcl} \dot{x}_2&=&c_1x_2^2,\\ \dot{y}_2&=&c_2,\\ \dot{r}_2&=&0. \end{array} \end{equation} Equation \eqref{eq:K2r=0} is very important in our analysis since it shows how the trajectories cross the fold point. \begin{proposition} The solution of system \eqref{eq:K2r=0} is: \begin{equation} \label{eq:sol:K2r=0} \begin{array}{rcl} x_2(t)&=&\frac{1}{x_2(0)^{-1}-c_1t},\\ y_2(t)&=&y_2(0)+c_2t, \end{array} \end{equation} i.e.
\begin{equation*} \begin{array}{rcl} x_2(t)&=&\frac{1}{x_2(0)^{-1}-c_1\frac{y_2(t)-y_2(0)}{c_2}}, \end{array} \end{equation*} or \begin{equation*} \begin{array}{rcl} y_2(t)&=&y_2(0)+\frac{c_2}{c_1}\left(\frac{1}{x_2(0)}-\frac{1}{x_2(t)}\right). \end{array} \end{equation*} It follows that orbits have the following properties: \begin{enumerate} \item Every orbit has a horizontal asymptote $y_2 = y_r$, where $y_r$ depends on the orbit, such that $x_2\rightarrow +\infty$ as $y_2$ approaches $y_r$ from above. \item Every orbit has a vertical asymptote $x_2= 0^{+}$. \item The point $(x_2(0),\alpha,0)$ is mapped to the point $(\delta, \alpha+\frac{c_2}{c_1}(\frac{1}{x_2(0)}-\frac{1}{\delta}))$. \end{enumerate} \end{proposition} \begin{proof} It follows easily from the explicit solution. \end{proof} \begin{proposition} Solutions of \eqref{eq:K2} are $O(r_2)$-close to those of \eqref{eq:K2r=0}. \end{proposition} \begin{proof} This follows from regular perturbation theory. \end{proof} \begin{remark} Let us make a remark on the first statement of Proposition 3. For $t^*=\frac{1}{c_1x_2(0)}$, $x_2$ blows up. Since $x_2=\frac{x}{r_2}$ and $r_2=\epsilon^{\frac{1}{3}}$, the value $x_2=+\infty$ corresponds, when $\epsilon=0$, to a point $x>0$ where we can consider that the trajectory has left the neighborhood of the fold and where the previous slow-fast analysis applies. This gives for $y_2$: \begin{equation} \label{eq:valueconnect} y_2(t^*)=y_2(0)+\frac{c_2}{c_1x_2(0)}. \end{equation} This means that, fixing $x_2(0)$ and $y_2(0)$, the value at which the trajectory leaves the slow manifold and connects to the fast fiber is determined by \eqref{eq:valueconnect}. Therefore, if we choose $(x_2(0),y_2(0))$ on the limit-cycle, this determines the fast fiber followed by the limit-cycle. We will now detail this argument, which gives the proof of Theorem \ref{th:maintheorem}. \end{remark} \begin{proof}[Proof of Theorem \ref{th:maintheorem}] Fix a value $x$ far from $0$, let us say $x=\frac{1}{2}$.
We want to determine $t^*$ such that $x(t^*)=\frac{1}{2}$, which corresponds to $x_2(t^*)=\frac{1}{2\epsilon^{\frac{1}{3}}}$. Taking $x(0)=k\epsilon+o(\epsilon)$, and according to equation \eqref{eq:sol:K2r=0}, this gives: \[t^*=\frac{\epsilon^\frac{1}{3}}{c_1}\left(\frac{1}{k\epsilon+o(\epsilon)}-2\right),\] and for equation \eqref{eq:K2}, \[y_2(t^*)=y_2(0)+\frac{c_2\epsilon^\frac{1}{3}}{c_1}\left(\frac{1}{k\epsilon+o(\epsilon)}-2\right) +O(\epsilon),\] which in the original coordinates gives: \[y(t^*)=y(0)+\frac{c_2}{kc_1}+O(\epsilon).\] This proves the theorem. \end{proof} \begin{figure} \caption{Solutions of system \eqref{eq:K2r=0}} \label{fig:ChartK2} \end{figure} \begin{figure} \caption{Solutions of system \eqref{DLsystem}} \label{figlimcyclvareps} \end{figure} \begin{remark} Note that the folded node $P$ is at the intersection of the two branches of the manifold $M_0$, $v=g(u)$ and $u=0$. Note also that these two branches actually exchange their stability at $P$. This case has been treated in a general form in \cite{KS01-2} under the appropriate name of transcritical bifurcation. However, here we are precisely in the special case $\lambda=1$ excluded from Theorem 2.1 of \cite{KS01-2}. The authors have announced the existence of the canard in this case without giving a detailed proof. Here, we have proved the canard phenomenon using the blow-up technique in the case of the limit-cycle of this classical predator-prey model. \end{remark} \section{Conclusion} In this article, we have characterised the limit-cycle of system \eqref{DLsystem}. The system was originally introduced in \cite{az03} as a modification of the Leslie-Gower model. We have proved that the limit-cycle of the model exhibits the canard phenomenon when crossing a special folded node, and we have computed the value at which it reaches the fast fiber. In a forthcoming work, we hope to investigate the diffusive model obtained by adding a Laplacian term in the first equation. \end{document}
\begin{document} \title{Non-Markovian coherent feedback control of quantum dot systems} \author{Shibei Xue$^{1,2,3}$}\email[]{[email protected]} \author{Re-Bing Wu$^{1,2}$}\email[]{[email protected]} \author{Michael R. Hush$^{3}$} \author{Tzyh-Jong Tarn$^{1,2,4}$} \affiliation{$^1$Department of Automation, Tsinghua University, Beijing 100084, P. R. China\\ $^2$Center for Quantum Information Science and Technology, TNList, Beijing 100084, P. R. China\\ $^3$School of Information Technology and Electrical Engineering, University of New South Wales Canberra at the Australian Defence Force Academy, Canberra, ACT 2600, Australia\\ $^4$Department of Electrical and Systems Engineering, Washington University, St. Louis, Missouri 63130, USA} \date{\today} \begin{abstract} This paper presents a non-Markovian coherent feedback scheme for controlling single quantum dot systems. The feedback loop is closed via a quantum tunneling junction between the natural source and drain baths of the quantum dot. An exact feedback-controlled non-Markovian Langevin equation is derived to describe the dynamics of the quantum dot. To deal with the nonlinear memory function in the Langevin equation, we analyze the Green's function based root locus, from which we show that the decoherence of the quantum dot can be suppressed by increasing the feedback coupling strength. The effectiveness of decoherence suppression induced by non-Markovian coherent feedback is verified with an example of a single quantum dot system. \end{abstract} \pacs{} \maketitle \section{Introduction} As a solid-state information carrier for quantum computation, quantum dot systems have attracted much attention in recent years~\cite{LossPRA1998,BurkardPRB1999,ElzerNAT2004,KrouNAT2004}.
As with other quantum registers, coherent manipulation of the quantum dot is vital for processing quantum information~\cite{Mansci2015,franco}, and it is always degraded by the decoherence induced by interaction with the environment~\cite{LeeJCP2008,PhysRevA.89.042320,PhysRevA.77.032117}. In quantum dot systems, decoherence arises from the interaction between the quantum dot and the source and drain electrodes, the hyperfine interaction between electron spins of quantum dots and nuclear spins, and the noise generated by defects in the substrate materials~\cite{Chirolli2008,Tu2008,Xue2011}. When the memory time of the environment is negligible, the Markovian approximation can be taken to simplify the analysis and design of open quantum control systems, e.g., for stabilizing the current through nanostructures or purifying the state of a quantum dot qubit via feedback control~\cite{TobiasPRL2010,PoltlPRB2011,BluPRL2010}. In general, however, the feedback control performance may be degraded because the Markovian approximation is violated in solid-state systems. The system of interest is then disturbed by colored noise, whose spectrum is given by the product of the density of states and the squared norm of the coupling strength between the system and the environment~\cite{leggett1987}. The resulting non-Markovian effect can be harnessed by a class of direct coherent feedback approaches~\cite{XuePRA2012} in which the structure of the environment is altered by couplings between its modes, and the characteristics of correlated environments can modify the non-Markovianity of a quantum system~\cite{PhysRevA.92.012315,zhuEPJD}. In particular, no measurement of the non-Markovian dynamics is required with this method.
This paper studies coherent feedback control of the non-Markovian dynamics of single quantum dot systems, with application to the suppression of decoherence, where the noise baths, i.e., the source and drain, are coupled together by a tunneling junction to form a closed loop. The quantum transport of electrons between them is modified by adjusting the structure of the junction so that the effective noise spectrum of the closed-loop system is reshaped. Our scheme is equivalent to coupling the controlled system directly to a spectrally tunable environment; in this regard, it is a direct coherent feedback scheme~\cite{LLoyd2000,XuePRA2012}. A similar loop topology has been employed in photonic crystal systems, by which the noise is driven out of resonance with the working frequency of the system so as to suppress non-Markovian decoherence. However, the circumstances for the quantum dot system are quite different due to the bias voltage applied to the source and drain baths. The resulting detuning between the central frequencies of the source and the drain leads to a memory kernel function that depends nonlinearly on the feedback coupling strength, which makes it difficult to design the coherent feedback. In this paper, we utilize a Green's function based root locus method~\cite{xueqip2015,xueian} to analyze the decoherence of the closed-loop system, by which we show that coherent feedback can suppress the decoherence in quantum dot systems. The rest of this paper is organized as follows. In section \ref{2nd}, the Hamiltonian of the coherent feedback loop is introduced. Starting from this Hamiltonian, we obtain an exact non-Markovian Langevin equation describing the dynamics of the quantum dot in section \ref{3rd}. In section \ref{4th}, the dynamics of the controlled system is analyzed in the frequency domain via the Green's function based root locus approach. An example of the quantum dot system is given in section \ref{5th}.
Finally, conclusions are drawn in section \ref{6th}. \section{Coherent feedback loop Hamiltonian}\label{2nd} Consider a single quantum dot~\cite{ReimRMP2002} located between two leads named source (left) and drain (right), respectively, where a bias voltage is applied across the two leads. The coupling strengths between the quantum dot and the modes of the two electrodes differ from mode to mode, resulting in non-Markovian decoherence dynamics of the quantum dot~\cite{Tu2008}. To effectively reject the non-Markovian noises, a non-Markovian coherent feedback scheme is introduced. To build a feedback loop, the source and drain are joined by a tunneling junction whose tunneling strength is tunable. This scheme is sketched in Fig.~\ref{Fig1}. The design leads to a closed interaction relationship, where the interconnection of the parts induces bidirectional causal effects (i.e., two interconnected systems always affect each other). Thus, the information flow in this closed loop is in both the clockwise and anticlockwise directions. \begin{figure} \includegraphics[width=8cm]{Fig12.eps}\\ \caption{The schematic diagram of a direct coherent feedback loop for the quantum dot system.}\label{Fig1} \end{figure} The Hamiltonian of the open-loop system (i.e., without the tunneling junction) can be written as \begin{equation}\label{1} H_O=H_{S}+H_E+H_{SE}, \end{equation} where $H_{S}/\hbar=\omega_{S} \hat{d}^\dagger \hat{d}$ is the quantum dot Hamiltonian with a working frequency $\omega_S$ and a fermion annihilation operator $\hat{d}$. The environment Hamiltonian $H_E$ describes the two clusters of the electron bath (source and drain), i.e., $$H_E/\hbar=\sum_k\omega_{Bk}\hat{b}^\dagger_{k} \hat{b}_{k}+\sum_k\omega_{Ck}\hat{c}^\dagger_{k} \hat{c}_{k}$$ with mode frequencies $\omega_{Bk}$ ($\omega_{Ck}$), where $\hat{b}_{k}$ ($\hat{c}_{k}$) is the fermion annihilation operator of the source (drain).
Their couplings to the system are determined by the interaction Hamiltonian $$H_{SE}/\hbar=\sum_k (V^*_{Bk}\hat{d}^\dagger \hat{b}_{k}+V_{Bk}\hat{b}^\dagger_{k}\hat{d})+\sum_k (V^*_{Ck}\hat{d}^\dagger \hat{c}_{k}+V_{Ck}\hat{c}^\dagger_{k}\hat{d}).$$ The coupling strengths between the system and the modes of the source~(drain) are denoted as $V_{Bk}$~($V_{Ck}$); they differ between modes, resulting in the non-Markovian decoherence dynamics. For simplicity, nonlinear system-bath interactions are not considered here. For the above system, the interaction between the system and the bath disturbs the system dynamics, and the noise structure induced by the interaction determines how serious the non-Markovian decoherence is. In this paper, a tunneling junction is introduced between the source and the drain to efficiently modify the noise structure, which induces a coupling Hamiltonian between the source and drain as \begin{eqnarray}\label{2} H_{F}&=&\sum_k\sum_{k'}(F_{kk'}\hat{c}^\dagger_{k'}\hat{b}_{k}+F_{kk'}^*\hat{b}_{k}^\dagger\hat{c}_{k'}) \end{eqnarray} where $F_{kk'}$ describes the tunneling strength between the $k$-th source mode and the $k'$-th drain mode~\cite{Mahan}. Here, the source together with the drain constitutes a structured bath for the system (as shown in Fig.\ref{Fig1}), whose internal properties are expected to be modified via the tunable coupling strength $F_{kk'}$. The $F_{kk'}$ will depend on the physical properties of the junction. In what follows we describe how to calculate $F_{kk'}$, assuming the electrons act as if they are free and move in one dimension. Consider an electron starting at the source with wave vector $k_B$ and ending at the drain with wave vector $k_C$; its initial and final energies can be expressed as $E_B = \hbar^2 k_B^2 / 2m + eU_B$ and $E_C = \hbar^2 k_C^2/ 2m + eU_C$, respectively, where $U_B$ and $U_C$ are the voltages on each side of the junction, $e$ is the charge of the electron and $m$ is its mass.
By energy conservation we can relate the initial and final wave vectors of the electron after crossing the junction as $ \hbar^2(k_C^2 - k_B^2)/ 2m = e(U_B - U_C)$. On the other hand, we assume that the initial and final kinetic energies of the electron are much larger than the potential difference across the junction. In this case the probability for the electron to be reflected is very low. Hence, the characteristic central wave vector $k_0$ of both baths satisfies $\hbar^2k_0^2/2m \gg e(U_C - U_B)$. Furthermore, we only consider perturbations whose frequency components are near this central value, as the off-resonant components have minor effects on the coherence of the quantum dot. Thus, it is easy to see that $ k_C - k_B \approx m e(U_B - U_C)/\hbar^2 k_0$. Hence, the difference between the wave vectors is approximately a constant related to the potential difference across the junction, and the tunneling strength between the source and drain can be expressed as \begin{equation}\label{2-0} F_{kk'}=\left\{\begin{array}{cc} f_{k}, & k-k'=l\neq0 \\ 0, & {\rm otherwise} \end{array}\right. \end{equation} where $l = m e(U_B - U_C)/\hbar^2k_0$. The coupling strengths $f_{k}$ can be engineered by treating the junction as a waveguide and changing its geometry. Thus, the total Hamiltonian of our coherent feedback control system reads \begin{equation}\label{14} H_{T}=H_O+H_{F}. \end{equation} The details of the rejection of the non-Markovian noises via coherent feedback will be shown in the next sections.
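The linearized wave-vector shift $k_C - k_B \approx m e(U_B - U_C)/\hbar^2 k_0$ can be checked directly against exact energy conservation. The sketch below uses SI constants; the bias and central wave vector are illustrative values chosen to satisfy $\hbar^2k_0^2/2m \gg e|U_B - U_C|$, not values taken from the text.

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34
m = 9.1093837015e-31     # free-electron mass
e = 1.602176634e-19

U_B, U_C = 1.0e-3, 0.0   # 1 mV bias across the junction (illustrative)
k0 = 1.0e9               # central wave vector ~ 1/nm (illustrative)

# Exact final wave vector from hbar^2 (k_C^2 - k_B^2)/2m = e(U_B - U_C)
k_B = k0
k_C = np.sqrt(k_B**2 + 2 * m * e * (U_B - U_C) / hbar**2)

# Linearized shift l = m e (U_B - U_C)/(hbar^2 k0)
l = m * e * (U_B - U_C) / (hbar**2 * k0)

print(k_C - k_B, l)   # agree when the kinetic energy dominates the bias
```

With these numbers the kinetic energy at $k_0$ is tens of meV against a 1 meV bias, and the exact and linearized shifts agree to about one percent.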
\section{Exact Non-Markovian Quantum Langevin Equation}\label{3rd} \subsection{Exact Langevin equation} The evolution of the fermion annihilation operator $\hat d(t)$ of the quantum dot is described by the following exact integro-differential non-Markovian quantum Langevin equation (for the details of the derivation, see Appendix~\ref{ASA}), \begin{equation}\label{6} \dot{\hat d}(t)=-i\omega_S\hat d(t)-\int_0^t d\tau M(t-\tau)\hat d(\tau)-i\hat{\epsilon}_n(t), \end{equation} where the memory kernel function $M(t)$, in which the noise spectrum is embedded, determines the dissipation process, and the noise term $\hat{\epsilon}_n(t)$ corresponds to the equivalent noise injected from the two leads. Due to the linearity of the integro-differential Eq.~(\ref{6}), the solution for $\hat{d}(t)$ is expressed as \begin{equation}\label{10} \hat{d}(t)=g(t)\hat{d}(0)+\int_0^td\tau g(t-\tau)\hat{\epsilon}_n(\tau), \end{equation} where the first term characterizes the dissipative evolution from the initial state $\hat{d}(0)$ and the second term describes the dynamics excited by the noise $\hat{\epsilon}_n(t)$. The complex coefficient $g(t)$ satisfies the following integro-differential equation: \begin{equation}\label{11} \dot{g}(t) = -i\omega_S g(t)- \int_0^t d\tau M(t-\tau)g(\tau),\quad g(0)=1, \end{equation} where the absolute value of the Green's function $g(t)$ is the scaled amplitude of the system~\cite{Tan2011}. It can be used to evaluate the dissipation process of the system, since it is governed by the same memory kernel function $M(t)$ as the system operator $\hat{d}(t)$. \subsection{The coherent feedback case} When the feedback couplings (\ref{2}) are introduced, i.e., the total system is described by (\ref{14}), both $M(t)$ and $\hat{\epsilon}_{\rm n}(t)$ are affected.
Assume that the source and drain can be effectively coupled as expressed in Eq.~(\ref{A1}), and denote $F_{kk'}$ in the continuous limit and in polar form as $f(\omega)=r(\omega)e^{ i\theta(\omega)}$. The memory kernel function is split by the feedback as $M(t)\equiv M_f(t)=M^+(t)+M^-(t)$ with \begin{equation}\label{12} M^\pm(t) =\int_{-\infty}^{+\infty}d\omega \frac{J^\pm(\omega)}{2\pi} e^{-i(\omega-\delta\pm\sqrt{\delta^2+r(\omega)^2})t}, \end{equation} where the noise spectral functions \begin{eqnarray} \frac{J^+(\omega)}{2\pi} &=&\varrho(\omega) |V_{B}(\omega)e^{- i\theta(\omega)}\cos\frac{\alpha(\omega)}{2}+V_{C}(\omega)\sin\frac{\alpha(\omega)}{2}|^2,\nonumber \\ \frac{J^-(\omega)}{2\pi} &=&\varrho(\omega)|V_{B}(\omega)e^{- i\theta(\omega)}\sin\frac{\alpha(\omega)}{2}-V_{C}(\omega)\cos\frac{\alpha(\omega)}{2}|^2\nonumber \end{eqnarray} are modulated by the feedback parameters $r(\omega)$ and $\theta(\omega)$, with $\alpha(\omega)=\arctan\frac{r(\omega)}{\delta}$. The splitting of the memory kernel function shows that the noises can be modified by the tunneling strength. The equivalent noise $\hat{\epsilon}_n(t)\equiv\hat{\epsilon}_{nf}(t)$ in Eq.~(\ref{6}) is \begin{equation} \hat{\epsilon}_{nf}(t)=\int_{-\infty}^{+\infty} d\omega \varrho(\omega){v}^\dagger(\omega)\Phi(\omega,t){\hat{\epsilon}}(\omega,0), \end{equation} where $\varrho(\omega)$ is the density of states and the definitions of the coupling strength vector $v(\omega)$ and the feedback-induced modulation matrix $\Phi(\omega,t)$ are given in Eqs.~(\ref{13-55}) and (\ref{51}). Note that we have assumed the source and drain share the same density of states $\varrho(\omega)$ and that the effect of the feedback Hamiltonian $H_F$ is embedded in both the Green's function $g(t)$ and the equivalent input $\hat{\epsilon}_n(t)$.
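Equation (\ref{11}) is a linear Volterra integro-differential equation and can be solved by direct time stepping with a quadrature of the history term. The sketch below assumes, purely for illustration, the exponential kernel $M(t)=\frac{\eta h}{2}e^{-(h+i\omega_S)t}$ (the kernel a Lorentzian noise spectrum produces), and it works in the rotating frame $G(t)=e^{i\omega_S t}g(t)$, where this kernel becomes real and the fast oscillation at $\omega_S$ drops out of the numerics.

```python
import numpy as np

# Illustrative parameters and an assumed exponential memory kernel
# M(t) = (eta*h/2) * exp(-(h + i*omega_S) t)
omega_S, eta, h = 10.0, 0.4, 0.3

# In the rotating frame G(t) = exp(i*omega_S*t) g(t), Eq. (11) becomes
#   G'(t) = -(eta*h/2) * int_0^t exp(-h*(t-tau)) G(tau) dtau,  G(0) = 1,
# marched here with explicit Euler and a trapezoidal history quadrature.
dt, n = 0.005, 4000
t = np.arange(n + 1) * dt
G = np.empty(n + 1, dtype=complex)
G[0] = 1.0
for k in range(n):
    w = np.exp(-h * (t[k] - t[:k + 1])) * G[:k + 1]
    hist = dt * (w.sum() - 0.5 * (w[0] + w[-1]))   # trapezoidal rule
    G[k + 1] = G[k] - dt * 0.5 * eta * h * hist

# |g(t)| = |G(t)| is the scaled amplitude; it decays below one
print(abs(G[-1]))
```

For this kernel the rotating-frame equation is equivalent to $G''+hG'+\frac{\eta h}{2}G=0$ with $G(0)=1$, $G'(0)=0$, which gives an exact solution to validate the discretization against.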
\subsection{The open-loop case}\label{opl} For comparison with an open-loop method of suppressing non-Markovian decoherence~\cite{Lei2011}, the memory kernel function and the noise term in the open-loop case are also considered. In the absence of the feedback couplings (\ref{2}), i.e., when the system is described by the open-loop Hamiltonian $H_O$ (\ref{1}), the memory kernel function $M(t)\equiv M_0(t)=M_B(t)+M_C(t)$ with \begin{eqnarray}\label{27} M_B(t)&=& \int_{-\infty}^{+\infty} d\omega_B \varrho(\omega_B)|V_B(\omega_B)|^2 e^{- i\omega_{B}t},\\ M_C(t)&=&\int_{-\infty}^{+\infty} d\omega_C \varrho(\omega_C)|V_C(\omega_C)|^2 e^{- i\omega_{C}t}, \end{eqnarray} depends only on the coupling strengths of the system with the source and drain, where $V_{B}(\omega_{B})$ and $V_{C}(\omega_{C})$ are the coupling strengths of the system with the source and drain in continuous-frequency form, respectively, and $\varrho (\omega_{B})$ and $\varrho (\omega_{C})$ are the density-of-states functions of the source and drain, respectively. The noise $\hat{\epsilon}_{n}(t)\equiv\hat{\epsilon}_{n0}(t)=\hat{\epsilon}_{\rm B0}(t)+\hat{\epsilon}_{\rm C0}(t)$ is a sum of the noises arising from the source and drain, \begin{eqnarray} \hat{\epsilon}_{\rm B0}(t)&=&\int_{-\infty}^{+\infty} d\omega_{B}\varrho (\omega_{B}) V^*_{B}(\omega_{B})e^{-i\omega_{B}t}\hat{b}(\omega_{B},0)\\ \hat{\epsilon}_{\rm C0}(t)&=&\int_{-\infty}^{+\infty}d\omega_{C}\varrho (\omega_{C}) V^*_{C}(\omega_{C}) e^{- i\omega_{C}t}\hat{c}(\omega_{C},0). \end{eqnarray} In the above expressions, $\hat{b}(\omega_{B},0)$ and $\hat{c}(\omega_{C},0)$ are the values of $\hat b(\omega_{B},t)$ and $\hat c(\omega_{C},t)$ at $t=0$, respectively. The exact non-Markovian Langevin equation above affords the basis for analyzing the system dynamics with or without coherent feedback control.
The feedback control parameters $r(\omega)$ and $\theta(\omega)$ are embodied in the memory kernel function $M(t)$ in Eq.~(\ref{6}). How to effectively manipulate the memory kernel function $M(t)$ is considered in the next section. \section{Green's function based Root locus Analysis for decoherence suppression}\label{4th} In our previous work~\cite{XuePRA2012}, it was shown that the spectral modulation induced by coherent feedback can be used to suppress decoherence. However, this method is not directly extendable to the system discussed here because, as shown in Eq.~(\ref{12}), the memory kernel depends nonlinearly on the control amplitude $r(\omega)$ as a result of the bias-voltage-induced central frequency difference between the source and drain. Hence, whether or not decoherence can be suppressed is not as obvious as in Ref.~\onlinecite{XuePRA2012}. In this section, we analyze it through a Green's function based root locus method. \subsection{Green's function based root locus} Root locus is a graphical method for describing the dependence of the modes of a controlled system on a changeable parameter (e.g., the gain) and thus determining the parameter regime that ensures system stability~\cite{Ogata}. Here, we analyze the root locus of the Green's function to understand the mechanism of decoherence suppression induced by coherent feedback. Transforming the dynamical equation~(\ref{11}) of the Green's function for the non-Markovian quantum system into the complex frequency domain, the Laplace transform $G(s)$ of the Green's function $g(t)$ is \begin{equation}\label{15} G(s)=\frac{1}{s+ i\omega_S+M(s)}~, \end{equation} where $M(s)$ is the Laplace transform of the memory kernel function $M(t)$. The poles of the Green's function $G(s)$ are defined as the points $s$ at which $G(s)$ is singular. The trajectories of the poles versus a varying parameter are called the root locus of the Green's function $G(s)$~\cite{xueqip2015}.
As shown in Eq.~(\ref{15}), the poles of the Green's function depend on the memory kernel function $M(s)$. For the simplest case $M(s)=0$, i.e., a closed system, the pole lies on the imaginary axis of the complex plane, which implies that the coherence of the system is not destroyed. For a Markovian quantum system, i.e., $M(s)=\frac{\gamma}{2}$ with a constant damping rate $\gamma$, the pole is shifted into the left half of the complex plane with a negative real part corresponding to damping. For a non-Markovian quantum system with a complicated noise spectrum, the distribution of the poles of the Green's function becomes complicated; we assume that $M(s)$ can be expressed in a rational form. In the following, we investigate the influence of the memory kernel function on the Green's function in the cases with and without our coherent feedback scheme, so as to observe the coherence of the system. \subsection{The coherent feedback case} To explore the root locus of the Green's function induced by the coherent feedback, we assume that the single quantum dot is equally strongly coupled to the source and drain, i.e., $V_B(\omega)=V_C(\omega+2\delta)=V(\omega)$, where $2\delta$ is the frequency difference between the two baths, and the Lorentzian spectral density~\cite{Tu2008,WeiMinPRL2012} is adopted here for fermion systems as \begin{equation}\label{18} J(\omega)=2\pi\varrho(\omega)|V(\omega)|^2=\frac{\eta h^2}{(\omega-\omega_S)^2+h^2}~, \end{equation} where the parameters $\eta$ and $h$ are the strength and width of the noise spectrum, respectively. Assume that the feedback coupling strength is independent of frequency and express it in polar form as $f=re^{ i\theta}$; the corresponding parameter $\alpha$ is then also independent of frequency.
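By contour integration (a single pole at $\omega_S-ih$, closing the contour in the lower half plane), the Lorentzian density (\ref{18}) yields the exponential kernel $M(t)=\frac{\eta h}{2}e^{-(h+i\omega_S)t}$ for $t>0$. A short numerical quadrature confirms this; the parameter values are those of the example section.

```python
import numpy as np

# Parameters of the example section
omega_S, eta, h = 10.0, 0.4, 0.3
t = 1.0

# Trapezoidal quadrature of M(t) = int dw J(w)/(2*pi) * exp(-i*w*t)
# over a wide, truncated frequency window centered at omega_S
omega = np.linspace(omega_S - 400 * h, omega_S + 400 * h, 400001)
J = eta * h**2 / ((omega - omega_S)**2 + h**2)
vals = J / (2 * np.pi) * np.exp(-1j * omega * t)
d_omega = omega[1] - omega[0]
M_num = d_omega * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# Residue-theorem result: exponential decay at rate h, oscillation at omega_S
M_exact = 0.5 * eta * h * np.exp(-(h + 1j * omega_S) * t)
print(abs(M_num - M_exact))   # small truncation/discretization error
```

The truncation error of the window is of order $\eta h/(400\pi)$, well below the kernel magnitude $\eta h/2$.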
When the feedback coupling is applied, the memory kernel function $M(t)$ is split into the two branches in Eq.~(\ref{12}), which can be expressed as $M(s)=M^+(s)+M^-(s)$ in the frequency domain (see Appendix~\ref{ASC}) with \begin{equation}\label{19} M^\pm(s)=\frac{\frac{1}{2}\eta h(1\pm\cos\theta\sin\alpha)}{s+z_0\pm i\gamma}, \end{equation} where $z_0=h+i(\omega_S-\delta)$ and $\gamma=\sqrt{\delta^2+r^2}$. Physically, this means that our coherent feedback can modify the noise spectrum, i.e., the structure of the environment can be engineered by the coherent feedback. To see how the memory kernel $M(s)$ affects the Green's function, we substitute Eq.~(\ref{19}) into Eq.~(\ref{15}) and obtain \begin{equation}\label{21} G(s)=\frac{s^2+\alpha_1s+\alpha_2}{s^3+\beta_1s^2+\beta_2s+\beta_3}, \end{equation} where $\alpha_1=2z_0$, $\alpha_2=z_0^2+\gamma^2$, $\beta_1=2z_0+i\omega_S$, $\beta_2=z_0^2+\gamma^2+\eta h+i2\omega_S z_0$, and $\beta_3=\eta hz_0+i\omega_S(z_0^2+\gamma^2)-i\eta h\gamma\cos\theta\sin\alpha$. To obtain an explicit solution for $g(t)$ in the time domain via the inverse Laplace transform, we express Eq.~(\ref{21}) as a partial fraction decomposition, \begin{equation}\label{22} G(s)=\frac{q_1}{s-p_1}+\frac{q_2}{s-p_2}+\frac{q_3}{s-p_3}~, \end{equation} where the three poles \begin{eqnarray} p_1&=&-\frac{\beta_1}{3}+\frac{l}{3\sqrt[3]{2}}e^{i\phi}-\frac{\sqrt[3]{2}A}{3l}e^{-i\phi},\\ p_2&=&-\frac{\beta_1}{3}-\frac{l}{3\sqrt[3]{2}}e^{i(\phi-\frac{\pi}{3})}+\frac{\sqrt[3]{2}A}{3l}e^{-i(\phi-\frac{\pi}{3})},\\ p_3&=&-\frac{\beta_1}{3}-\frac{l}{3\sqrt[3]{2}}e^{i(\phi+\frac{\pi}{3})}+\frac{\sqrt[3]{2}A}{3l} e^{-i(\phi+\frac{\pi}{3})}, \end{eqnarray} with $ A=3\beta_2-\beta_1^2, B=9(\beta_2\beta_1-3\beta_3)-2\beta_1^3 $, and $le^{i\phi}\equiv\sqrt[3]{B+\sqrt{4A^3+B^2}}$, are the quantities of interest.
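The pole and residue algebra admits a quick numerical cross-check: since the numerator of $G(s)$ is monic of degree two and the denominator monic of degree three, $G(s)\sim 1/s$ for large $|s|$ and the three residues must sum to one. The sketch below computes the poles with a generic cubic root finder instead of the closed-form radicals; the parameter values are illustrative ones matching the example section.

```python
import numpy as np

# Poles and residues of G(s) = (s^2 + a1 s + a2)/(s^3 + b1 s^2 + b2 s + b3)
omega_S, eta, h, delta = 10.0, 0.4, 0.3, 0.05
r, theta = 0.1 * omega_S, 0.0           # illustrative feedback setting

z0 = h + 1j * (omega_S - delta)
gamma = np.sqrt(delta**2 + r**2)
alpha = np.arctan2(r, delta)

a1 = 2 * z0
a2 = z0**2 + gamma**2
b1 = 2 * z0 + 1j * omega_S
b2 = z0**2 + gamma**2 + eta * h + 2j * omega_S * z0
b3 = (eta * h * z0 + 1j * omega_S * (z0**2 + gamma**2)
      - 1j * eta * h * gamma * np.cos(theta) * np.sin(alpha))

p1, p2, p3 = np.roots([1, b1, b2, b3])
q1 = (a2 + a1 * p1 + p1**2) / ((p1 - p2) * (p1 - p3))
q2 = -(a2 + a1 * p2 + p2**2) / ((p1 - p2) * (p2 - p3))
q3 = (a2 + a1 * p3 + p3**2) / ((p1 - p3) * (p2 - p3))

print(abs(q1 + q2 + q3 - 1))            # ~0: residues sum to one
print([p.real for p in (p1, p2, p3)])   # all negative: damped modes
```

The same residue formulas reproduce the coefficients $q_{1,2,3}$ quoted in the text.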
Their distribution determines the root locus of the Green's function and thus the non-Markovian dynamics of the system. The complex coefficients $q_1,q_2,q_3$ can be calculated as \begin{eqnarray}\label{23} q_1&=& \frac{\alpha_2 + \alpha_1 p_1 + p_1^2}{(p_1 - p_2) (p_1 - p_3)},\nonumber\\ q_2&=& \frac{-\alpha_2 - \alpha_1 p_2 - p_2^2}{(p_1 - p_2) (p_2 - p_3)},\nonumber\\ q_3&=&\frac{\alpha_2 + \alpha_1 p_3 + p_3^2}{(p_1 - p_3) (p_2 - p_3)}.\nonumber \end{eqnarray} With the help of Eq.~(\ref{22}), the solution of Eq.~(\ref{11}) can be obtained as \begin{eqnarray}\label{20} g(t)=q_1 e^{p_1 t}+q_2 e^{p_2 t}+q_3 e^{p_3 t}, \end{eqnarray} which will be used to observe the dynamics of $g(t)$ under coherent feedback in the example of the next section. The number of poles of $G(s)$ is increased to three, and their distribution directly affects the dynamics of $g(t)$. To qualitatively observe the effect of our coherent feedback on the distribution of the poles of the Green's function $G(s)$, we consider the limiting case in which the feedback coupling strength $r$ approaches infinity. Since the three poles $p_{1,2,3}$ are functions of the feedback coupling strength $r$, their limits as $r$ goes to infinity are calculated as \begin{eqnarray} \lim_{r\rightarrow+\infty}p_1&=&0+ i(-\omega_S),\\ \lim_{r\rightarrow+\infty}p_2&=&-h+ i(-\infty),\\ \lim_{r\rightarrow+\infty}p_3&=&-h+ i(+\infty). \end{eqnarray} This shows that the pole $p_1$ is pushed close to $-i\omega_S$ by choosing a sufficiently large $r$, while the real parts of the other two poles are driven to $-h$ and their imaginary parts go to $-\infty$ and $+\infty$, respectively. Compared with $p_2$ and $p_3$, whose negative real parts lead to quick damping, the pole $p_1$ for sufficiently large $r$ is very close to the imaginary axis, which keeps its mode oscillating for a long time. This means that $|g(t)|$ can be kept at a high value close to $1$.
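These limits can be probed numerically by sweeping $r$ and tracking the pole with the largest real part, i.e., the slowest mode. This is a hedged sketch with the example section's parameters; $r=10\omega_S$ stands in for the $r\rightarrow\infty$ limit.

```python
import numpy as np

# Root locus sweep in r for the closed-loop Green's function G(s)
omega_S, eta, h, delta, theta = 10.0, 0.4, 0.3, 0.05, 0.0

def poles(r):
    """Roots of the cubic denominator of G(s) for feedback strength r."""
    z0 = h + 1j * (omega_S - delta)
    gamma = np.sqrt(delta**2 + r**2)
    sin_a = r / gamma                    # sin(arctan(r/delta))
    b1 = 2 * z0 + 1j * omega_S
    b2 = z0**2 + gamma**2 + eta * h + 2j * omega_S * z0
    b3 = (eta * h * z0 + 1j * omega_S * (z0**2 + gamma**2)
          - 1j * eta * h * gamma * np.cos(theta) * sin_a)
    return np.roots([1, b1, b2, b3])

for r in [0.0, 0.1 * omega_S, 1.0 * omega_S, 10.0 * omega_S]:
    print(r, np.real(poles(r)).max())
# The largest real part rises toward zero as r grows: the slowest mode of
# g(t) becomes nearly undamped, in line with the limits above.
```

The remaining two poles stay well inside the left half plane, so only the nearly undamped mode survives at long times.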
This indicates that our coherent feedback scheme can suppress the decoherence. In practice, the feedback coupling strength cannot be arbitrarily strong, and hence the dissipation process can only be slowed down by the feedback. \subsection{The open-loop case} If the Hamiltonian $H_{F}$ is ignored, our system reduces to a common single quantum dot setting governed by the open-loop Hamiltonian $H_O$. The system dynamics obeys a Langevin equation of the same form as~(\ref{6}), with the different memory kernel $M_0(t)$ and noise term $\hat{\epsilon}_{n0}(t)$ given in section~\ref{opl}. Transformed to the frequency domain (see Appendix~\ref{ASC}), $M_0(s)$ in Eq.~(\ref{15}) is expressed as \begin{eqnarray}\label{31} M_0(s)=\frac{\frac{1}{2}\eta h}{s+h+ i\omega_S}+\frac{\frac{1}{2}\eta h}{s+h+i(\omega_S-2\delta)}. \end{eqnarray} Substituting Eq.~(\ref{31}) into Eq.~(\ref{15}), a partial fraction decomposition of $G_0(s)$ can be obtained as \begin{equation}\label{25} G_0(s)=\frac{q_{01}}{s-p_{01}}+\frac{q_{02}}{s-p_{02}}+\frac{q_{03}}{s-p_{03}}, \end{equation} where the three poles $p_{01}$, $p_{02}$, $p_{03}$ of $G_0(s)$ are equal to the values of $p_{1}$, $p_{2}$, $p_{3}$ when $r$ and $\theta$ are zero, and $q_{01},q_{02},q_{03}$ can be obtained in the same way. Hence, the behavior of the Green's function $g_0(t)$ can be evaluated by \begin{eqnarray}\label{24} g_0(t)=q_{01} e^{p_{01} t}+q_{02} e^{p_{02} t}+q_{03} e^{p_{03} t}, \end{eqnarray} which can be obtained from Eq.~(\ref{25}) via the inverse Laplace transform. Ref.~\onlinecite{Lei2011} proposed a scheme of realizing strong couplings between the system and its environment to suppress non-Markovian decoherence for bosonic systems. With respect to our system, it is equivalent to increasing the noise strength $\eta$ in Eq.~(\ref{18}) to suppress the decoherence.
In the next section, we will numerically compare the method in Ref.~\onlinecite{Lei2011} with our coherent feedback scheme. \section{Example of single quantum dot}\label{5th} In the numerical simulations, we choose parameters that can be engineered as follows: the system working frequency $\hbar\omega_S=10~\mu eV$, the frequency difference between the source and drain $\hbar\delta=0.05~\mu eV$, and the noise width $\hbar h=0.3~\mu eV$. Other varying parameters will be given below. The coherence of the system is measured by the absolute value of the Green's function $|g(t)|$, which can be calculated analytically as in Eq.~(\ref{20}) or Eq.~(\ref{24}). \begin{figure} \includegraphics[width=8.5cm]{openDyna2.eps}\\ \caption{(Color online) The dynamics of the absolute value of the Green's function $|g_0(t)|$ with increasing noise strength $\eta$. The decay of $|g_0(t)|$ is not appreciably improved.}\label{Fig5} \end{figure} Figure~\ref{Fig5} shows the variation of the absolute value of the open-loop Green's function $g_0(t)$ as the noise strength is increased to realize strong coupling between the system and the bath. When the noise strength $\eta$ is set to $0.4$, $|g_0(t)|$ exhibits damped oscillations, as plotted by the green dot-dashed line, which indicates that the dynamics of the system is in the non-Markovian regime. When $\eta$ is further increased, e.g., $\eta=0.8$ or $\eta=1.2$, the oscillation of $|g_0(t)|$ is enhanced. However, the damping of $|g_0(t)|$ cannot be stopped. Even when $\eta$ reaches $1.6$, the damping of $|g_0(t)|$ is still unchanged. As pointed out in Ref.~\onlinecite{Lei2011}, the damping process can be slowed down by increasing the coupling strength between the system and the baths (equivalent to increasing the noise strength $\eta$) for boson systems. Here, we observe that their strategy does not work for fermion systems.
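The open-loop pole behavior can be reproduced with a short sweep over $\eta$ (same parameters as above): the largest real part among the poles of $G_0(s)$ stays of order $-h/2$, so no mode becomes slowly damped no matter how strong the coupling is.

```python
import numpy as np

# Open-loop root locus: poles of G_0(s) = 1/(s + i*omega_S + M_0(s))
omega_S, h, delta = 10.0, 0.3, 0.05

def open_loop_poles(eta):
    """Zeros of (s + i w_S)(s+z1)(s+z2) + (eta h/2)[(s+z1) + (s+z2)],
    obtained by clearing the two simple poles of M_0(s)."""
    z1 = h + 1j * omega_S
    z2 = h + 1j * (omega_S - 2 * delta)
    poly = np.polymul([1, 1j * omega_S], np.polymul([1, z1], [1, z2]))
    poly = np.polyadd(poly, 0.5 * eta * h * np.polyadd([1, z1], [1, z2]))
    return np.roots(poly)

for eta in [0.4, 0.8, 1.2, 1.6]:
    print(eta, np.real(open_loop_poles(eta)).max())
# The largest real part stays well below zero for every eta: increasing
# the noise strength alone cannot halt the damping of |g_0(t)|.
```

This matches the qualitative content of the root locus figure: two poles hybridize around ${\rm Re}\,s\approx-h/2$ while the third moves deeper into the left half plane.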
The reason can be understood from the root locus plot in Figure~\ref{Fig2}, which shows the dependence of the three poles of the open-loop Green's function $G_0(s)$ on the noise strength $\eta$ from $0.4$ to $1.6$. It is clearly shown that $p_{01}$ and $p_{02}$ move toward the line ${\rm Re}\,s=-0.15$ from opposite directions, while $p_{03}$ moves away from the imaginary axis. The three poles have negative real parts no matter how large $\eta$ is, so all modes of $g_0(t)$ damp. The strategy in Ref.~\onlinecite{Lei2011} cannot drive the real parts of the poles toward the imaginary axis, which indicates that the damping process cannot be suppressed by increasing the couplings between the system and the bath. \begin{figure} \includegraphics[width=8.5cm]{openPoles2.eps}\\ \caption{(Color online) The variation of the poles of the open-loop Green's function $G_0(s)$ versus increasing noise strength $\eta$ from $0$ to $1.6$. The variations of the poles $p_{01}$, $p_{02}$, $p_{03}$ in the non-Markovian regime corresponding to Figure~\ref{Fig5} are plotted in red, green, and blue lines, respectively. The starting points $\eta=0.4$ are labeled by the rectangle marks. This figure shows that the scheme of increasing the noise strength as done in Ref.~\onlinecite{Lei2011} cannot effectively drive the poles close to the imaginary axis so as to slow down the damping.}\label{Fig2} \end{figure} Compared with the open-loop case, the decoherence can be significantly suppressed via coherent feedback, as shown in Figure~\ref{Fig4}, where the noise strength is set sufficiently large, e.g., $\eta=0.4$, causing non-Markovian dynamics of the system (see the blue dashed line for $r=0$ in Figure~\ref{Fig4}). When the feedback loop is closed, e.g., $r=0.1\omega_S$, the damping of the absolute value of $g(t)$ is slowed down. The value of $|g(t)|$ can be kept high when the feedback strength is further enhanced, for example, $r=0.2\omega_S$ or $r=0.3\omega_S$.
\begin{figure} \includegraphics[width=8.5cm]{feedbackDyna.eps}\\ \caption{(Color online) The dynamics of the absolute value of the Green's function $|g(t)|$ versus increasing feedback coupling strength $r(\omega)=r$. With increasing $r$, $|g(t)|$ is kept at a high value for a long time.}\label{Fig4} \end{figure} The above phenomena can be analyzed from the variation of the poles of $G(s)$ as the feedback coupling strength $r$ is continuously increased from $0$ to $0.3\omega_S$ (see the root locus plot in Figure~\ref{Fig3}). The three poles initially lie in the left half of the complex plane with negative real parts (shown as the starting points of the three lines), causing the damping of $|g(t)|$. When the feedback coupling strength is enhanced, the pole $p_1$ is driven, with oscillations, close to the imaginary axis, while the other poles $p_2,p_3$ are pushed toward ${\rm Re}\,s=-0.3$. We can see that the real part of the pole $p_1$ is driven nearly to zero when the feedback coupling strength is sufficiently strong, which indicates that such a weakly damped mode helps $g(t)$ resist the decoherence. Compared with the open-loop strategy above, our coherent feedback scheme can effectively suppress the decoherence in quantum dot systems. \begin{figure} \includegraphics[width=8.5cm]{feedbackPoles.eps}\\ \caption{(Color online) The root locus of the Green's function $G(s)$ with respect to the feedback coupling strength $r(\omega)=r$ from $0$ to $0.3\omega_S$ with the noise strength $\eta=0.4$.
The pole $p_1$ is pushed, with oscillations, close to the imaginary axis so as to afford a very slowly damped mode of $g(t)$, which indicates that the decoherence is effectively suppressed by our coherent feedback scheme.}\label{Fig3} \end{figure} \section{Conclusion}\label{6th} This paper presents a non-Markovian coherent feedback scheme to stabilize a single quantum dot whose natural noise baths (source and drain) are connected to form a tunable quantum tunneling process. The mechanism of the decoherence suppression is analyzed in the frequency domain via the root locus of the Green's function, extended from classical control theory. Compared with the open-loop strong coupling strategy, our coherent feedback scheme can suppress the damping of the system dynamics more efficiently. For future work, it is worthwhile to explore how to apply our direct coherent feedback scheme to more complicated quantum dot systems, e.g., a two-quantum-dot system with Coulomb interaction between the dots. In addition, when a quantum dot is weakly coupled to a resonator, its information can be indirectly extracted through the output of a probing field of the resonator~\cite{PhysRevLett.108.046807}; this makes it possible to design a field-mediated coherent feedback controller for the quantum dot. \appendix \section{Derivation of non-Markovian Langevin Equation under feedback}\label{ASA} To facilitate the following derivation, we assume \begin{equation}\label{A1} F_{kk'}=\left\{\begin{array}{cc} f_{k} & k-k'=l\neq0 \\ 0 & {\rm otherwise} \end{array}\right. \end{equation} which implies that only modes with mode difference $l$ in the two baths can be effectively coupled. According to the Heisenberg equation of quantum mechanics, \begin{equation}\label{3} \dot{\hat{o}}(t)=-\frac{i}{\hbar}[\hat{o}(t),H(t)] \end{equation} for an arbitrary operator $\hat{o}(t)$.
The equations of motion for the system and bath modes are \begin{eqnarray}\label{4} \dot{\hat{d}}(t)&=&- i\omega_S \hat{d}(t)- i\sum_kV^*_{Bk}\hat{b}_{k}(t)- i\sum_{k-l}V^*_{Ck-l}\hat{c}_{k-l}(t),\nonumber\\ &&\label{4-1}\\ \dot{\hat{b}}_{k}(t)&=&-i\omega_{Bk}\hat{b}_{k}(t)- if_k^*\hat{c}_{k-l}(t) - i V_{Bk}\hat{d}(t),\label{4-2}\\ \dot{\hat{c}}_{k-l}(t)&=&-i\omega_{Ck-l}\hat{c}_{k-l}(t)- if_k \hat{b}_{k}(t)- i V_{Ck-l}\hat{d}(t).\label{4-3} \end{eqnarray} First, the coupled equations of motion of the two baths, (\ref{4-2}) and (\ref{4-3}), can be jointly solved as \begin{equation}\label{5} \hat{e}_{k}(t)=\Phi_{k}(t)\hat{e}_{k}(0)- i\int_0^t \Phi_{k}(t-\tau)v_{k}\hat{d}(\tau) d\tau, \end{equation} where $$\hat{e}_{k}(t)=\left[ \begin{array}{c} \hat{b}_{k}(t) \\ \hat{c}_{k-l}(t) \\ \end{array} \right] , \quad v_{k}=\left[ \begin{array}{c} V_{Bk} \\ V_{Ck-l} \\ \end{array} \right]. $$ Expressing the feedback coupling strength in polar form as $f_{k}=r_{k}e^{i\theta_{k}}$, the transition matrix is \begin{eqnarray}\label{51} & &\Phi_{k}(t)= \exp\left[- it\left( \begin{array}{cc} \omega_{Bk} & f_{k}^* \\ f_{k} & \omega_{Ck-l} \\ \end{array} \right)\right]\nonumber\\ &&=\left[ \begin{array}{cc} \chi_+e^{-i\lambda_+t}-\chi_- e^{- i\lambda_-t} &\kappa^*(e^{- i\lambda_+t}-e^{-i\lambda_-t}) \\ \kappa(e^{-i\lambda_+t}-e^{- i\lambda_-t}) & - \chi_- e^{-i\lambda_+t}+\chi_+ e^{- i\lambda_-t} \\ \end{array} \right],\nonumber\\ \end{eqnarray} where $\chi_\pm=\frac{1}{2}(\cos\alpha_k\pm 1)$ and $\kappa=\frac{1}{2}\sin\alpha_k e^{i\theta_k}$, with $\alpha_k=\arctan\frac{r_k}{\delta}$ and the frequency difference $2\delta=\omega_{Bk}-\omega_{Ck-l}$.
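As a quick sanity check, the closed form of the transition matrix above can be compared against a direct numerical matrix exponential. The following sketch uses illustrative parameter values (not taken from the paper) with $\delta>0$:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters (not from the paper); requires delta > 0.
omega_B, omega_C, r, theta, t = 1.0, 0.4, 0.2, 0.7, 1.3
delta = (omega_B - omega_C) / 2.0          # half the frequency difference
f = r * np.exp(1j * theta)                 # feedback coupling f_k = r e^{i theta}

# Hamiltonian of the two coupled bath modes: [[omega_B, f*], [f, omega_C]].
H = np.array([[omega_B, np.conj(f)], [f, omega_C]])
Phi_exact = expm(-1j * t * H)

# Closed form quoted in the text.
alpha = np.arctan(r / delta)
Delta = np.sqrt(delta**2 + r**2)
lam_p = (omega_B + omega_C) / 2 + Delta
lam_m = (omega_B + omega_C) / 2 - Delta
chi_p, chi_m = (np.cos(alpha) + 1) / 2, (np.cos(alpha) - 1) / 2
kappa = 0.5 * np.sin(alpha) * np.exp(1j * theta)
ep, em = np.exp(-1j * lam_p * t), np.exp(-1j * lam_m * t)
Phi_closed = np.array([[chi_p * ep - chi_m * em, np.conj(kappa) * (ep - em)],
                       [kappa * (ep - em), -chi_m * ep + chi_p * em]])

dev = np.max(np.abs(Phi_closed - Phi_exact))
print(dev)  # machine-precision agreement
```

The two expressions agree to machine precision, since the closed form is just the spectral decomposition of $e^{-itH}$ for the Hermitian $2\times 2$ coupling matrix.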
The eigenvalues $\lambda_\pm$ of the matrix $\left[ \begin{array}{cc} \omega_{Bk} & f_{k}^* \\ f_{k} & \omega_{Ck-l} \\ \end{array} \right]$ are \begin{equation} \lambda_\pm=\frac{\omega_{Bk}+\omega_{Ck-l}\pm\sqrt{(\omega_{Bk}-\omega_{Ck-l})^2+4r_{k}^2}}{2}. \end{equation} Then, substituting (\ref{5}) into (\ref{4-1}), we obtain the Langevin equation of the system, \begin{equation}\label{7} \dot{\hat d}(t)=-i\omega_{S} \hat{d}(t)-\int_0^t d\tau M(t-\tau) \hat d(\tau)-i\hat e_{\rm n}(t), \end{equation} where the memory kernel function and the equivalent noise are defined as \begin{equation}\label{8} M(t) = \sum_k v_{k}^\dagger \Phi_{k}( t) v_{k},\quad \hat e_{\rm n}(t)=\sum_k v_{k}^\dagger\Phi_{k}(t)\hat{e}_{k}(0), \end{equation} respectively. The memory kernel function $M(t)$ can be further expressed as \begin{eqnarray}\label{9} M(t)& = &\sum_k |V_{Bk}e^{-i\theta_k}\cos\frac{\alpha_k}{2}+V_{Ck-l}\sin\frac{\alpha_k}{2}|^2e^{-i\lambda_+t}\nonumber\\ &&+\sum_k |V_{Bk}e^{-i\theta_k}\sin\frac{\alpha_k}{2}-V_{Ck-l}\cos\frac{\alpha_k}{2}|^2e^{-i\lambda_-t},\nonumber\\ \end{eqnarray} which is modulated by $r_k$ and $\theta_k$.
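Equation (\ref{9}) states that, mode by mode, $v_{k}^\dagger \Phi_{k}(t) v_{k}$ collapses to two complex exponentials at $\lambda_\pm$ with the modulus-squared weights above. A minimal single-mode check, with illustrative real couplings $V_{B}$, $V_{C}$ (values are not from the paper):

```python
import numpy as np

# Illustrative single-mode parameters; couplings are taken real, consistent
# with the modulus-squared weights of Eq. (9). Values are not from the paper.
omega_B, omega_C, r, theta = 1.0, 0.4, 0.2, 0.7
V_B, V_C = 0.5, 0.3
delta = (omega_B - omega_C) / 2.0
alpha = np.arctan(r / delta)
Delta = np.sqrt(delta**2 + r**2)
lam_p = (omega_B + omega_C) / 2 + Delta
lam_m = (omega_B + omega_C) / 2 - Delta

H = np.array([[omega_B, r * np.exp(-1j * theta)],
              [r * np.exp(1j * theta), omega_C]])
v = np.array([V_B, V_C], dtype=complex)

# Weights of the two exponentials, as in Eq. (9) (single mode k).
w_p = abs(V_B * np.exp(-1j * theta) * np.cos(alpha / 2) + V_C * np.sin(alpha / 2)) ** 2
w_m = abs(V_B * np.exp(-1j * theta) * np.sin(alpha / 2) - V_C * np.cos(alpha / 2)) ** 2

devs = []
for t in (0.0, 0.5, 2.0):
    w, U = np.linalg.eigh(H)                      # exact diagonalization of H
    Phi = U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T
    lhs = v.conj() @ Phi @ v                      # v_k^dagger Phi_k(t) v_k
    rhs = w_p * np.exp(-1j * lam_p * t) + w_m * np.exp(-1j * lam_m * t)
    devs.append(abs(lhs - rhs))
print(max(devs))  # agrees to machine precision
```

At $t=0$ the two weights sum to $V_B^2+V_C^2$, as they must, since the cross terms cancel.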
In the continuum limit, where the bath modes become dense, the memory kernel function $M(t)$ takes a frequency-continuous form and can be further decomposed as $M(t) =M^+(t)+M^-(t)$ with \begin{equation}\label{A12} M^\pm(t) =\frac{1}{2\pi}\int_{-\infty}^{+\infty} d\omega J^\pm(\omega)e^{-i\lambda_\pm(\omega)t}, \end{equation} where the noise spectral functions are \begin{equation} \frac{J^+(\omega)}{2\pi}=\varrho(\omega) |V_{B}(\omega)e^{-i\theta(\omega)}\cos\frac{\alpha(\omega)}{2}+V_{C}(\omega)\sin\frac{\alpha(\omega)}{2}|^2, \end{equation} \begin{equation} \frac{J^-(\omega)}{2\pi}=\varrho(\omega)|V_{B}(\omega)e^{-i\theta(\omega)}\sin\frac{\alpha(\omega)}{2}-V_{C}(\omega)\cos\frac{\alpha(\omega)}{2}|^2, \end{equation} and \begin{equation}\label{13} \lambda_\pm(\omega)=\omega-\delta\pm\sqrt{\delta^2+r(\omega)^2}, \end{equation} with $\varrho(\omega)$ the density of states. The noise term $\hat{e}_{\rm n}(t)$ in Eq.~(\ref{6}) is \begin{equation} \hat{e}_{\rm n}(t)=\int_{-\infty}^{+\infty}d\omega\varrho(\omega)v^\dagger(\omega)\Phi(\omega,t)\hat{e}(\omega,0), \end{equation} where \begin{equation}\label{13-55} \hat{e}(\omega,t)=\left[ \begin{array}{c} \hat{b}(\omega,t) \\ \hat{c}(\omega-2\delta,t) \\ \end{array} \right] , \quad v(\omega)=\left[ \begin{array}{c} V_{B}(\omega) \\ V_{C} (\omega-2\delta)\\ \end{array} \right].
\end{equation} and \begin{eqnarray}\label{51-2} &&\Phi(\omega,t)= \\ &&{\tiny \left[ \begin{array}{cc} \chi_+e^{-i\lambda_+(\omega)t}-\chi_- e^{- i\lambda_-(\omega)t} &\kappa^*(e^{- i\lambda_+(\omega)t}-e^{-i\lambda_-(\omega)t}) \\ \kappa(e^{-i\lambda_+(\omega)t}-e^{- i\lambda_-(\omega)t}) & - \chi_- e^{-i\lambda_+(\omega)t}+\chi_+ e^{- i\lambda_-(\omega)t} \\ \end{array} \right]},\nonumber \end{eqnarray} where $\chi_\pm=\frac{1}{2}(\cos\alpha(\omega)\pm 1)$ and $\kappa=\frac{1}{2}e^{i\theta(\omega)}\sin\alpha(\omega)$, with $\alpha(\omega)=\arctan\frac{r(\omega)}{\delta}$ and the frequency difference $2\delta=\omega_{B}(\omega)-\omega_{C}(\omega-2\delta)$. \section{Expression of the Memory Kernel Function $M(t)$ in the Frequency Domain}\label{ASC} Inserting the Lorentzian spectral density (\ref{18}) into Eq.~(\ref{27}), we obtain \begin{eqnarray}\label{52} M_0(t)&=&\frac{1}{2\pi}\int_{-\infty}^{+\infty} \frac{\eta h^2}{h^2+(\omega-\omega_S)^2} e^{-i\omega t} d\omega+\nonumber\\ &&\frac{1}{2\pi}\int_{-\infty}^{+\infty} \frac{\eta h^2}{h^2+(\omega-\omega_S+2\delta)^2} e^{- i\omega t} d\omega. \end{eqnarray} The integrals in Eq.~(\ref{52}) evaluate to \begin{equation} M_0(t)=\frac{1}{2}\eta h e^{-h|t|-i\omega_S t}+\frac{1}{2}\eta h e^{-h|t|- i(\omega_S-2\delta) t}. \end{equation} Further, via the Laplace transform, we obtain $M_0(s)$ as \begin{equation} M_0(s)=\frac{\frac{1}{2}\eta h}{s+h+ i\omega_S}+\frac{\frac{1}{2}\eta h}{s+h+ i(\omega_S-2\delta)}. \end{equation} Following the same idea, we also assume that $\omega_S\pm\sqrt{\delta^2+r^2}$ are much larger than the noise width $h$.
Therefore, we directly write the Laplace-transformed $M^\pm(t)$ as \begin{equation} M^\pm(s)=\frac{\frac{1}{2}\eta h(1\pm\cos\theta\sin\alpha)}{s+h+ i(\omega_S-\delta\pm\sqrt{\delta^2+r^2})}. \end{equation} \section{Definition of the Laplace Transform}\label{LP} The Laplace transform and its inverse for an arbitrary operator $\hat{o}(t)$ are defined as \begin{eqnarray} \hat{O}(s) &=& \int_0^\infty \hat{o}(t) e^{-st} dt, \\ \hat{o}(t) &=& \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\hat{O}(s)e^{st}ds, \end{eqnarray} respectively, where $s=\sigma+i\omega$.
\begin{thebibliography}{31}
\bibitem{LossPRA1998} D. Loss and D. P. DiVincenzo, Phys. Rev. A \textbf{57}, 120 (1998).
\bibitem{BurkardPRB1999} G. Burkard, D. Loss, and D. P. DiVincenzo, Phys. Rev. B \textbf{59}, 2070 (1999).
\bibitem{ElzerNAT2004} J. M. Elzerman, R. Hanson, L. H. W. Van Beveren, B. Witkamp, L. M. K. Vandersypen, and L. P. Kouwenhoven, Nature \textbf{430}, 431 (2004).
\bibitem{KrouNAT2004} M. Kroutvar, Y. Ducommun, D. Heiss, M. Bichler, D. Schuh, G. Abstreiter, and J. J. Finley, Nature \textbf{432}, 81 (2004).
\bibitem{Mansci2015} Z.-X. Man, Y.-J. Xia, and R. L. Franco, Sci. Rep. \textbf{5}, 13843 (2015).
\bibitem{franco} R. L. Franco, New J. Phys. \textbf{17}, 081004 (2015).
\bibitem{LeeJCP2008} M.-T. Lee and W.-M. Zhang, J. Chem. Phys. \textbf{129}, 224106 (2008).
\bibitem{PhysRevA.89.042320} K. Wang, X. Zhao, and T. Yu, Phys. Rev. A \textbf{89}, 042320 (2014).
\bibitem{PhysRevA.77.032117} W. Cui, Z. R. Xi, and Y. Pan, Phys. Rev. A \textbf{77}, 032117 (2008).
\bibitem{Chirolli2008} L. Chirolli and G. Burkard, Adv. Phys. \textbf{57}, 225 (2008).
\bibitem{Tu2008} M. W. Y. Tu and W. M. Zhang, Phys. Rev. B \textbf{78}, 235311 (2008).
\bibitem{Xue2011} S. Xue, J. Zhang, R.-B. Wu, C.-W. Li, and T.-J. Tarn, J. Phys. B: At. Mol. Phys. \textbf{44}, 154016 (2011).
\bibitem{leggett1987} A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, Rev. Mod. Phys. \textbf{59}, 1 (1987).
\bibitem{TobiasPRL2010} T. Brandes, Phys. Rev. Lett. \textbf{105}, 060602 (2010).
\bibitem{PoltlPRB2011} C. P\"oltl, C. Emary, and T. Brandes, Phys. Rev. B \textbf{84}, 085302 (2011).
\bibitem{BluPRL2010} H. Bluhm, S. Foletti, D. Mahalu, V. Umansky, and A. Yacoby, Phys. Rev. Lett. \textbf{105}, 216803 (2010).
\bibitem{XuePRA2012} S. Xue, R.-B. Wu, W.-M. Zhang, J. Zhang, C.-W. Li, and T.-J. Tarn, Phys. Rev. A \textbf{86}, 052304 (2012).
\bibitem{PhysRevA.92.012315} Z.-X. Man, Y.-J. Xia, and R. Lo Franco, Phys. Rev. A \textbf{92}, 012315 (2015).
\bibitem{zhuEPJD} Q.-S. Zhu, C.-C. Ding, S.-Y. Wu, and W. Lai, Eur. Phys. J. D \textbf{69}, 231 (2015).
\bibitem{strunz2012} S. Kr\"onke and W. T. Strunz, J. Phys. A: Math. Theor. \textbf{45}, 055305 (2012).
\bibitem{gambetta} J. Gambetta and H. M. Wiseman, Phys. Rev. A \textbf{66}, 012108 (2002).
\bibitem{Barch2012PRA} A. Barchielli, C. Pellegrini, and F. Petruccione, Phys. Rev. A \textbf{86}, 063814 (2012).
\bibitem{LLoyd2000} S. Lloyd, Phys. Rev. A \textbf{62}, 022108 (2000).
\bibitem{xueqip2015} S. Xue, R. Wu, T.-J. Tarn, and I. Petersen, Quantum Inf. Process. \textbf{14}, 2657 (2015).
\bibitem{xueian} S. Xue and I. Petersen, Quantum Inf. Process. (2015), doi:10.1007/s11128-015-1196-5.
\bibitem{ReimRMP2002} S. M. Reimann and M. Manninen, Rev. Mod. Phys. \textbf{74}, 1283 (2002).
\bibitem{Mahan} G. D. Mahan, \emph{Many-Particle Physics}, 3rd ed. (Kluwer Academic/Plenum Publishers, New York, 2000).
\bibitem{Tan2011} H. T. Tan and W. M. Zhang, Phys. Rev. A \textbf{83}, 032102 (2011).
\bibitem{Lei2011} C. U. Lei and W.-M. Zhang, Phys. Rev. A \textbf{84}, 052116 (2011).
\bibitem{Ogata} K. Ogata, \emph{Modern Control Engineering} (Prentice-Hall, Englewood Cliffs, 1996).
\bibitem{WeiMinPRL2012} W.-M. Zhang, P.-Y. Lo, H.-N. Xiong, M. W.-Y. Tu, and F. Nori, Phys. Rev. Lett. \textbf{109}, 170402 (2012).
\bibitem{PhysRevLett.108.046807} T. Frey, P. J. Leek, M. Beck, A. Blais, T. Ihn, K. Ensslin, and A. Wallraff, Phys. Rev. Lett. \textbf{108}, 046807 (2012).
\end{thebibliography}
\end{document}
\begin{document} \title{EPR correlations without the EPR dilemma: a local scheme} \author{A.\ Matzkin } \address{Laboratoire de Spectrom\'{e}trie physique (CNRS Unit\'{e} 5588), Universit\'{e} Joseph-Fourier Grenoble-1, BP 87, 38402 Saint-Martin d'H\`{e}res, France} \begin{abstract} A model for two entangled systems in an EPR setting is shown to reproduce the quantum-mechanical outcomes and expectation values. Each system is represented by a small sphere containing a point-like particle embedded in a field.\ A quantum state appears as an equivalence class of several possible particle-field configurations. Contrary to Bell-type hidden-variable models, the fields account for the non-commutative aspects of the measurements and deny the simultaneous reality of incompatible physical quantities, thereby making it possible to escape EPR's ``completeness or locality'' dilemma. \end{abstract} \maketitle \section{Introduction} In their celebrated paper \cite{EPR}, Einstein, Podolsky and Rosen (EPR) argued that quantum mechanics was incomplete on the grounds that for entangled states the formalism predicts with certainty the measurement outcomes of noncommuting observables although they cannot have simultaneous reality. They argued that the alternative to incompleteness was to make the reality of one particle's properties depend on the measurement made on the other particle, irrespective of their spatial separation.\ EPR concluded: \textquotedblleft\emph{no reasonable definition of reality could permit this} \textquotedblright\ nonlocal action at a distance. In a seminal work \cite{bell1964}, Bell showed that local models based on a distribution of hidden variables (HV) intended to complete quantum mechanics must satisfy an inequality involving averages taken over the hidden-variable distributions. He also showed that in certain circumstances the average values of two-particle quantum observables violate these inequalities.
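The violation Bell exhibited is easy to reproduce numerically: for the singlet state, the quantum correlation of polarization outcomes along coplanar axes is $E(a,b)=-\cos(\theta_a-\theta_b)$, and at the standard CHSH angles the combination $S$ reaches $2\sqrt{2}$, beyond the bound of $2$ obeyed by Bell-type local models. A short sketch (the angle choice is the textbook one, not specific to this paper):

```python
import numpy as np

# Singlet-state correlation for spin-1/2 polarization measurements along
# coplanar axes: E(a, b) = -cos(theta_a - theta_b).
def E(theta_a, theta_b):
    return -np.cos(theta_a - theta_b)

# Standard CHSH angle choice (textbook values, not specific to this paper).
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the local bound of 2
```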
However, it is seldom mentioned that Bell-type models are only a subset of the local models that can be envisaged. Indeed, Bell's theorem \cite{CHSH,bell2} rests on two important assumptions: (a) the HV ascribe a sub-quantum elementary probability to any one- or two-particle outcome; (b) this probability factorizes into two single-particle probabilities. These assumptions lead \cite{fine82} to the existence of a \emph{joint} probability function for all the observables entering the inequality (though there is no such probability according to quantum mechanics), thereby accounting for the 'simultaneous reality' appearing in the EPR dilemma.\ General arguments seem to indicate that these assumptions are needed to comply with the EPR requirements but are by no means necessary ingredients for enforcing locality \cite{jaynes,orlov,rec2}. In this work we put forward a scheme that is compatible with the quantum-mechanical correlations but does not abide by the EPR dilemma. The model, developed for the prototypical spin-1/2 pair, describes each system by postulating a particle and a classical field. It is shown that different particle-field configurations yield the same probabilities for outcome detection, even when the outcomes can be predicted with certainty. We will first put forward the model for a \emph{single} particle. We will then naturally extend the model to the two-particle case and show how, by introducing a correlation \emph{at the source}, the model reproduces the quantum predictions that violate the Bell inequalities without involving action at a distance. \section{Single field-particle system} Let a single spin-1/2 be represented by a field-particle system composed of a small sphere, the position of whose center in the laboratory frame is denoted by $\mathbf{x}$, with the internal spherical variables relative to the center of the sphere denoted by $\mathbf{r}\equiv (r,\theta ,\phi )$ (see Fig. 1a).
A classical scalar field $F(\mathbf{r})$ is defined on the spherical surface.\ The point-like particle sits at a fixed (but unknown) position on the sphere. We are interested in measuring the polarization of the particle, i.e., its internal angular momentum projection along a given axis\footnote{ From a physical standpoint, what we have called the position of the particle should more properly be called the position of the particle's angular momentum $\mathbf{r}_{0}\times \mathbf{p}$ relative to the center of the sphere. We will not make this distinction explicit in this paper.}. Let $\varepsilon _{b}$ denote the polarization along an axis $b$ making an angle $\theta _{b}$ with the $z$ axis. We assume that the possible outcomes $ \varepsilon _{b}=\pm 1$ can be obtained, the result depending, in a manner described below, (i) on the region on which the field is defined and (ii) on the position of the particle. The elementary support of the field $F$ is a hemispherical surface, and the value of the field at any point depends on the projection of that point on the axis. Let $\Sigma _{+a}$ denote the positive half-sphere centered on the axis $a$ making an angle $\theta _{a}$ with the $z$ axis, and let $F_{\Sigma _{+a}}$ denote the field distributed on that hemisphere. $F_{\Sigma _{+a}}( \mathbf{r})$ is defined by \begin{equation} F_{\Sigma _{+a}}(\mathbf{r})=\left\{ \begin{tabular}{l} $\mathbf{r}\cdot \mathbf{a}e^{i\phi _{+a}}/\pi R^{2}$ if $\mathbf{r}\in \Sigma _{+a}$ \\ $0$ otherwise \end{tabular} \ \ \ \ \ \right. , \label{2} \end{equation} where $R$ is the radius of the sphere and $\phi _{+a}$ the phase of the field; for simplicity we take all the axes to be coplanar with $z$, and the phase is assumed constant over an entire hemisphere (the phase thus appears as a global additional degree of freedom of the field).
The mean value of $\mathbf{r}\cdot \mathbf{b}/\pi R^{2}$ taken over $\Sigma _{+a}$ is given by \begin{equation} \left\langle F_{\Sigma _{+b}}+F_{\Sigma _{-b}}\right\rangle _{\Sigma _{+a}}\equiv \int_{\Sigma _{+a}}\frac{\mathbf{r}\cdot \mathbf{b}}{\pi R^{2}}d \mathbf{\hat{r}}=\cos \left( \theta _{b}-\theta _{a}\right) , \label{3} \end{equation} where $d\mathbf{\hat{r}}$ denotes the spherical surface element for a sphere of radius $R$ and we have set $\phi _{\pm b}=0$. The only requirement we make on the particle's position is that it must be embedded within the field: the particle cannot be in a field-free region of the sphere. When the polarization $\varepsilon _{b}$ is measured we postulate that the measuring apparatus along $b$ interacts with the field $F_{\Sigma _{+a}}.$ Let $[a+b]$ and $[a-b]$ denote the directions lying halfway between the axes $a$ (of the distribution) and $b$ or $-b$ (of the measuring direction), with respective angles $(\theta _{b}+\theta _{a})/2$ and $(\theta _{b}+\pi +\theta _{a})/2$. We will assume that the field-apparatus interaction results in a \emph{rotation} of the original pre-measurement field $F_{\Sigma _{+a}}$ toward both of the apparatus axes, $F_{\Sigma _{+a}}\rightarrow \left( F_{\Sigma _{+b}}+F_{\Sigma _{-b}}\right) e^{i\phi _{+a}}$ (Fig. 1b); $\phi _{+a}$ is the phase of the original field and we will suppose the measurement does not introduce additional phases. A definite outcome $\varepsilon _{b}=\pm 1$ depends on which of the hemispheres $\Sigma _{\pm b}$ the particle is in after the interaction. In terms of the field, the probability of each outcome is given by the relative value of the average of the rotated field $F_{\Sigma _{+b}}+F_{\Sigma _{-b}}$ over the intermediate 'half-rotated' hemisphere $\Sigma _{\lbrack a\pm b]}$, yielding in accordance with Eq.
(\ref{3}) \begin{eqnarray} P_{F_{\Sigma _{+a}}}(\varepsilon _{b}& =+1)=\left\vert \left\langle F_{\Sigma _{+b}}+F_{\Sigma _{-b}}\right\rangle _{\Sigma _{\lbrack a+b]}}\right\vert ^{2}/N=\cos ^{2}\frac{\theta _{b}-\theta _{a}}{2} \label{10} \\ P_{F_{\Sigma _{+a}}}(\varepsilon _{b}& =-1)=\left\vert \left\langle F_{\Sigma _{+b}}+F_{\Sigma _{-b}}\right\rangle _{\Sigma _{\lbrack a-b]}}\right\vert ^{2}/N=\sin ^{2}\frac{\theta _{a}-\theta _{b}}{2} \label{12} \end{eqnarray} with $N$ being the sum of both terms, thereby recovering the probabilities of measurements made on a single spin-1/2, which read in the standard notation $\left\vert \left\langle \pm b\right. \left\vert +a\right\rangle \right\vert ^{2}$ (normalization will be implicitly understood in the rest of the paper). If $b$ and $a$ are taken to be the same, then one has $\Sigma _{\lbrack a+a]}\equiv \Sigma _{+a}$ and $P_{F_{\Sigma _{+a}}}(\varepsilon _{a}=\pm 1)=1$ and $0$ respectively. Hence a field $F_{\Sigma _{+a}}$ corresponds to a well-defined positive polarization along the $a$ axis. In this case the symmetry axis of the field distribution coincides with the measurement axis and the system-apparatus field interaction has no effect: the particle's pre- and post-measurement position remains within the same hemisphere $\Sigma _{+a}$. The particle's hemispherical position can be said to determine the outcome, i.e. $P_{F_{\Sigma _{+a}}}(\varepsilon _{a}=\pm 1)=P_{F_{\Sigma _{+a}}}(\mathbf{r\in }\Sigma _{\pm a})=1$ or $0$. On the other hand, when $b$ and $a$ lie along different directions, the particle position cannot ascribe probabilities: the probabilities depend on the system and apparatus fields, and $\varepsilon _{b}$ only acquires a value $\pm 1$ \emph{after} the system field has interacted with the apparatus and rotated toward the measurement axis. A straightforward consequence is that the measurements do not commute, and thus joint polarization measurements along different axes are undefined.
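As an illustrative numerical check (not part of the paper), the hemisphere averages above can be evaluated by direct quadrature. The sketch below, written under the paper's conventions ($R=1$, phases set to zero, all axes coplanar with $z$), verifies that the average over $\Sigma _{+a}$ gives $\cos (\theta _{b}-\theta _{a})$ as in Eq. (\ref{3}), and that the averages over the half-rotated hemispheres reproduce the single spin-1/2 probabilities of Eqs. (\ref{10})-(\ref{12}); the function name and grid sizes are ours.

```python
import numpy as np

def hemisphere_avg(axis_angle, field_angle, n=600):
    """Average of r.b/(pi R^2) (R = 1, phase zero) over the hemisphere
    centred on the coplanar direction at `axis_angle`, i.e. the quantity
    of Eq. (3). Quadrature is done in the frame where the hemisphere
    axis is the pole, so the integration region is exact."""
    rel = field_angle - axis_angle                 # field axis, rotated frame
    th = (np.arange(n) + 0.5) * (np.pi / 2) / n    # midpoint grids
    ph = (np.arange(n) + 0.5) * (2 * np.pi) / n
    theta, phi = np.meshgrid(th, ph, indexing="ij")
    r_dot_b = (np.sin(theta) * np.cos(phi) * np.sin(rel)
               + np.cos(theta) * np.cos(rel))
    # surface element sin(theta) dtheta dphi, divided by pi R^2 = pi
    return np.sum(r_dot_b * np.sin(theta)) * (np.pi / 2 / n) * (2 * np.pi / n) / np.pi

theta_a, theta_b = 0.3, 1.1
# Eq. (3): the hemisphere average equals cos(theta_b - theta_a)
assert abs(hemisphere_avg(theta_a, theta_b) - np.cos(theta_b - theta_a)) < 1e-3

# Eqs. (10)-(12): averages over the half-rotated hemispheres [a+b], [a-b]
p_plus = hemisphere_avg((theta_a + theta_b) / 2, theta_b) ** 2
p_minus = hemisphere_avg((theta_a + np.pi + theta_b) / 2, theta_b) ** 2
assert abs(p_plus - np.cos((theta_b - theta_a) / 2) ** 2) < 1e-3
assert abs(p_minus - np.sin((theta_b - theta_a) / 2) ** 2) < 1e-3
assert abs(p_plus + p_minus - 1.0) < 1e-3   # the two terms already sum to N = 1
```

The closed form $\cos (\theta _{b}-\theta _{a})$ is used only as the reference value; the integral itself is computed from the field definition of Eq. (\ref{2}).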
Since fields obey the principle of superposition, we can envisage superpositions of fields defined on different hemispheres. But fields defined on different hemispheres turn out to be \emph{equivalent} to a field defined on a single hemisphere. Indeed it is easy\footnote{As the reader will have noted, the fields are defined through a mapping of the Hilbert space rays onto the relevant hemispherical surface.} to see that one can write for any axis $u$ \begin{equation} F_{\Sigma _{+a}}\sim \cos (\frac{\theta _{u}-\theta _{a}}{2})F_{\Sigma _{+u}}+\sin (\frac{\theta _{u}-\theta _{a}}{2})F_{\Sigma _{-u}}, \label{z1} \end{equation} meaning that although the two fields on the right- and left-hand sides (hs) of Eq. (\ref{z1}) are different -- they are not defined on the same hemispherical surfaces -- they lead to exactly the same predictions. Indeed, when measurements are made along \emph{any} axis $b$ the averages of the left and right hs of Eq. (\ref{z1}) give the same result $\cos (\frac{\theta _{a}-\theta _{b}}{2})$. These fields thus define an \emph{equivalence class}. From the particle standpoint a definite field configuration implies a different behaviour: for the field on the rhs of Eq. (\ref{z1}), denoted $F_{rhs}$, the no-perturbation axis is $u$, not $a$, and the particle distribution cannot be uniform. Hence there is a probability function $p_{F_{rhs}}(\varepsilon _{u}=\pm 1,\mathbf{r})=1$ or $0$ depending on whether $\mathbf{r\in }\Sigma _{\pm u}$ and such that $P_{F_{rhs}}(\varepsilon _{u}=\pm 1)$ is recovered by integration over the particle distribution. For $b\neq u$ however there is no probability function $p_{F_{rhs}}(\varepsilon _{b}=\pm 1,\mathbf{r})$, hence $P_{F_{rhs}}(\varepsilon _{b}=\pm 1)$ cannot depend on $\mathbf{r}$: there is no sub-field mechanism that determines the outcome. This is consistent with Eqs.
(\ref{10})-(\ref{12}) in which the field rotation does not allow one to define joint probabilities of the type $P_{F_{rhs}}(\varepsilon _{u}=\pm 1\cap \varepsilon _{b}=\pm 1)$; it can be shown instead that such joint probabilities would follow by allowing the particle position to determine probabilities for measurements along arbitrary axes \cite{matzkin jpa08}. In the specific case of measuring $\varepsilon _{a}$ in the field $F_{rhs}$, the system and apparatus fields must interfere in such a way as to obtain $P_{F_{rhs}}(\varepsilon _{a}=-1)=0$, irrespective of the particle's initial position. Finally let us introduce the fields $F_{\alpha (u)\pm }$ defined by \begin{equation} F_{\alpha (u)\pm }(\mathbf{r})=e^{\pm i\frac{\pi }{2}}F_{\Sigma _{+u}}(\mathbf{r})+F_{\Sigma _{-u}}(\mathbf{r}), \label{57} \end{equation} which obey the equivalence $F_{\alpha (u)\pm }\sim F_{\alpha (b)\pm }$ for any axes $u$ and $b$. We have \begin{equation} P_{\alpha (u)\pm }(\varepsilon _{b}=\pm 1)=\left\vert \left\langle F_{\alpha (u)\pm }\right\rangle \right\vert ^{2}=\frac{1}{2} \label{41} \end{equation} for \emph{any} $b$, the average being that of Eq. (\ref{3}) taken on the rotated hemispheres $\Sigma _{\lbrack u\pm b]}$ (for $F_{\Sigma _{+u}}$) and $\Sigma _{\lbrack -u\pm b]}$ (for $F_{\Sigma _{-u}}$).
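A quick arithmetic check (illustrative, not from the paper) shows why Eq. (\ref{41}) holds for every axis: taking the half-angle hemisphere averages that Eq. (\ref{3}) assigns to the two contributions as $\cos (D/2)$ and $\pm \sin (D/2)$ with $D=\theta _{u}-\theta _{b}$ (the sign convention drops out of the modulus), the relative phase $e^{\pm i\pi /2}$ in Eq. (\ref{57}) makes the contributions add in quadrature.

```python
import numpy as np

# Half-angle averages per Eq. (3): c for the F_{Sigma_+u} piece, s for the
# F_{Sigma_-u} piece (any overall signs drop out of the modulus below).
for D in np.linspace(0.0, 2 * np.pi, 25):      # D = theta_u - theta_b
    c, s = np.cos(D / 2), np.sin(D / 2)
    amp_plus = np.exp(1j * np.pi / 2) * c + s   # outcome eps_b = +1
    amp_minus = np.exp(1j * np.pi / 2) * s + c  # outcome eps_b = -1
    N = abs(amp_plus) ** 2 + abs(amp_minus) ** 2
    # Eq. (41): both outcomes are equally likely for every measurement axis
    assert abs(abs(amp_plus) ** 2 / N - 0.5) < 1e-12
    assert abs(abs(amp_minus) ** 2 / N - 0.5) < 1e-12
```

The point of the check is that $|ic\pm s|^{2}=c^{2}+s^{2}=1$ for all $D$, so the normalized probability is $1/2$ independently of $b$.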
An interpretation in terms of the particle position can only be given for $b=u$, with elementary probabilities $p_{F_{\alpha (u)\pm }}(\varepsilon _{u}=\pm 1,\mathbf{r})=1$ or $0$ depending on whether $\mathbf{r\in }\Sigma _{\pm u}$. It is nevertheless possible to postulate additional sub-quantum probabilities provided they are consistent with the field averages. For example we will suppose for either of the fields $F_{\alpha (u)\pm }$ that \begin{equation} P_{\alpha (u)}(\varepsilon _{b}=1|\mathbf{r}\in \Sigma _{\pm u})=\cos ^{2}\left( \frac{\theta _{u}-\theta _{b}}{2}+\frac{\pi }{2}(1\mp 1)\right) \label{42} \end{equation} which, assuming $\mathbf{r}$ is uniformly distributed, is consistent with (\ref{41}) given that \begin{equation} \sum_{\pm }P_{\alpha (u)}(\varepsilon _{b}=1|\mathbf{r} \in \Sigma _{\pm u})P_{\alpha (u)}(\mathbf{r}\in \Sigma _{\pm u})=P_{\alpha (u)}(\varepsilon _{b}=1). \end{equation} Note that Eq. (\ref{42}) supplements Eq. (\ref{41}) with a condition on the hemispherical position of the particle, but the latter does not determine the outcome (it is not an elementary probability). \begin{figure} \caption{(a) A system is represented by a point-like particle lying on the surface of a small sphere, on which a field is defined. The particle position is given by $\mathbf{r}$. (b) A measurement along an axis $b$ rotates the field toward the hemispheres $\Sigma _{\pm b}$ (see text).} \label{f1} \end{figure} \section{Two-particle system} Assume now that an initial two-particle system is fragmented into two subsystems flying apart in opposite directions. Each of the two particles, labeled $1$ and $2$, is embedded in a field defined on the surface of a small sphere. $\mathbf{x}_{1}$ (resp. $\mathbf{x}_{2}$) denotes the position of the subsystem 1 (resp. 2) sphere in the laboratory frame. The internal variables within each sphere are labeled by $\mathbf{r}_{1}$ and $\mathbf{r}_{2}$.
As soon as the fragmentation process is completed, the positions of each point-like particle as well as the fields are fixed, the polarization of each system depending on the field distribution and the particle position on its spherical surface. We will choose the initial correlation to correspond to the compound having zero polarization at least along an axis $u $ (but see below), in view of reproducing the statistics for the two spin-1/2 in the singlet state problem. Assuming the total polarization is conserved, the fields and particle positions must be initially correlated such that $\varepsilon _{u}^{1}=-\varepsilon _{u}^{2}$. Let us start by examining the \emph{no-perturbation} measurements. In this case the particle positions determine the outcomes, from which it follows that we must set \begin{equation} \mathbf{r}_{1}=-\mathbf{r}_{2} \label{28} \end{equation} at the source. Assume subsystem 1 and 2 fields to be given by $F_{\alpha (u)+}$ and $F_{\alpha (u)-}$ defined above. The total field for the system is thus \begin{equation} F_{T(u)}(\mathbf{r}_{1},\mathbf{r}_{2})=F_{\alpha (u)+}^{1}(\mathbf{r} _{1})F_{\alpha (u)-}^{2}(\mathbf{r}_{2}). \label{31} \end{equation} Single outcome probabilities $P(\varepsilon _{u}^{1,2})=1/2$ are straightforwardly computed from the single subsystem field $F^{1}$ or $F^{2}$ .\ On the other hand two outcome probabilities must take into account the particle correlation (\ref{28}). It is thus impossible to obtain $ \varepsilon _{u}^{1}=\varepsilon _{u}^{2}$: since there are no measurement perturbations, $\varepsilon _{u}^{1}=\pm 1$ is associated with $\mathbf{r} _{1}\in \Sigma _{\pm u}^{1},$ implying $\mathbf{r}_{2}\in \Sigma _{\mp u}^{2} $ so only $\varepsilon _{u}^{2}=\mp 1$ can be obtained. 
The probabilities in this case read \begin{equation} P\left( \varepsilon _{u}^{1}=\pm 1,\varepsilon _{u}^{2}=\mp 1\right) =P(\varepsilon _{u}^{1}=\pm 1)P(\varepsilon _{u}^{2}=\mp 1|\varepsilon _{u}^{1}=\pm 1)=\frac{1}{2} \label{100} \end{equation} where the conditional probability is computed by way of the particle dependence as $P(\mathbf{r}_{1}\in \Sigma _{\pm u}^{1})P(\mathbf{r}_{2}\in \Sigma _{\mp u}^{2}|\mathbf{r}_{1}\in \Sigma _{\pm u}^{1})$ and setting $b=u$ in Eqs. (\ref{41})-(\ref{42}). Note that these probabilities \emph{are not} equal to those obtained by taking the relevant averages of $F_{T(u)}$ (e.g., $\left\langle F_{T(u)}\right\rangle _{\Sigma _{+u}^{1}\Sigma _{+u}^{2}}$ does not vanish). The reason is that $F_{T(u)}$ does not take into account the particle correlation. It is nevertheless possible to identify the term correlating the fields consistently with Eq. (\ref{28}) by rewriting Eq. (\ref{31}) as \begin{equation} F_{T(u)}(\mathbf{r}_{1},\mathbf{r}_{2})=F_{0(u)}(\mathbf{r}_{1},\mathbf{r}_{2})+e^{i\pi /2}F_{\aleph (u)}(\mathbf{r}_{1},\mathbf{r}_{2}) \label{56} \end{equation} where $F_{0}$ and $F_{\aleph }$ are given by \begin{eqnarray} F_{0(u)}(\mathbf{r}_{1},\mathbf{r}_{2})& =F_{\Sigma _{+u}}^{1}(\mathbf{r}_{1})F_{\Sigma _{+u}}^{2}(\mathbf{r}_{2})+F_{\Sigma _{-u}}^{1}(\mathbf{r}_{1})F_{\Sigma _{-u}}^{2}(\mathbf{r}_{2}) \label{58b} \\ F_{\aleph (u)}(\mathbf{r}_{1},\mathbf{r}_{2})& =F_{\Sigma _{+u}}^{1}(\mathbf{r}_{1})F_{\Sigma _{-u}}^{2}(\mathbf{r}_{2})-F_{\Sigma _{-u}}^{1}(\mathbf{r}_{1})F_{\Sigma _{+u}}^{2}(\mathbf{r}_{2}). \label{58} \end{eqnarray} It is easy to show that $F_{0(u)}$ cannot contribute to the probabilities by repeating the reasoning involving no-perturbation measurements.
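The no-perturbation branch can be spelled out numerically. In the sketch below (illustrative; the helper `p_plus_given` is our hypothetical encoding of Eq. (\ref{42})), measuring along $u$ itself gives outcomes fixed by the hemispherical position, so with $\mathbf{r}_{2}=-\mathbf{r}_{1}$ only anti-correlated pairs survive, each branch carrying probability $1/2$ as in Eq. (\ref{100}).

```python
import numpy as np

def p_plus_given(hemisphere, theta_u, theta_b):
    """Eq. (42): P(eps_b = +1 | r in Sigma_{+u}) when hemisphere = +1,
    and P(eps_b = +1 | r in Sigma_{-u}) when hemisphere = -1."""
    extra = 0.0 if hemisphere == +1 else np.pi / 2
    return np.cos((theta_u - theta_b) / 2 + extra) ** 2

theta_u = 0.8
# No-perturbation measurement along u itself: the outcome is fixed by
# the hemispherical position of the particle.
assert p_plus_given(+1, theta_u, theta_u) == 1.0   # r in Sigma_{+u} -> +1
assert p_plus_given(-1, theta_u, theta_u) < 1e-30  # r in Sigma_{-u}: never +1

# Eq. (100): with r_2 = -r_1 only anti-correlated outcomes survive;
# P(r_1 in Sigma_{+u}) = 1/2 for a uniform particle distribution.
p_anti = 0.5 * p_plus_given(+1, theta_u, theta_u)
assert p_anti == 0.5
```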
On the other hand $F_{\aleph (u)}$ respects by construction the particle correlation (\ref {28}) and the probabilities can be computed from the fields averages $ \left\langle F_{\aleph }\right\rangle _{\Sigma _{\pm u}^{1}\Sigma _{\mp u}^{2}}$\ (which are equal) and $\left\langle F_{\aleph }\right\rangle _{\Sigma _{\pm u}^{1}\Sigma _{\pm u}^{2}}=0$. Note that the particle labels as well as the field indices can be interchanged in the definition (\ref{31} ) of $F_{T}$. Let us now investigate measurements along arbitrary directions $a$ for particle 1 and $b$ for particle 2.\ Probabilities for a single subsystem are immediately obtained from the subsystem's field $F_{\alpha (u)\pm }^{1,2}$ yielding $1/2$ for any arbitrary direction. To compute correlations for two outcomes, say $\varepsilon _{a}^{1}=1,\varepsilon _{b}^{2}=1$, the averages involving $F_{T}$ must again be supplemented with the correlation (\ref{28} ). This can be done by employing the equivalence relations $F_{\alpha (u)\pm }\sim F_{\alpha (a)\pm }$ in Eq. (\ref{31}), yielding $F_{T(u)}\sim F_{T(a)}$ . The probability is then computed as $P(\varepsilon _{a}^{1}=1)P(\varepsilon _{b}^{2}=1|\varepsilon _{a}^{1}=1)$ by writing as in Eq. (\ref{100}) single subsystem probabilities in terms of the particle positions: \begin{eqnarray} P_{T(a)}(\varepsilon _{a}^{1}=1,\varepsilon _{b}^{2}=1)& =P_{\alpha (a)+}( \mathbf{r}_{1}\in \Sigma _{+a}^{1})P_{\alpha (a)-}(\varepsilon _{b}^{2}=1| \mathbf{r}_{2}\in \Sigma _{-a}^{2}) \label{66} \\ & =\frac{1}{2}\sin ^{2}\left( \frac{\theta _{b}-\theta _{a}}{2}\right) \label{66c} \end{eqnarray} where we have used Eqs. (\ref{41}) and (\ref{42}). We can obviously reach the same result by employing $F_{\alpha (u)\pm }\sim F_{\alpha (b)\pm }$ in Eq. (\ref{31}) yielding \begin{equation} P_{T(b)}(\varepsilon _{a}^{1}=1,\varepsilon _{b}^{2}=1)=P_{\alpha (b)-}( \mathbf{r}_{2}\in \Sigma _{+b}^{2})P_{\alpha (b)+}(\varepsilon _{a}^{1}=1| \mathbf{r}_{1}\in \Sigma _{-b}^{1}). 
\label{68} \end{equation} Both computations hinge on employing the form of the field that does not perturb one of the measurements: this is necessary in order to be able to compute conditional statements. As in the single particle system case [see below Eq. (\ref{z1})] each particular realization of an equivalence class gives rise to different, incompatible, accounts: Eq. (\ref{66}) specifies that $\mathbf{r}_{1}\in \Sigma _{+a}^{1}$ while assuming the field configuration is $F_{T(a)}$, whereas Eq. (\ref{68}) indicates that $\mathbf{r}_{1}\in \Sigma _{-b}^{1}$ when the field is $F_{T(b)}$. The direct computation of $P_{T(u)}(\varepsilon _{a}^{1}=1,\varepsilon _{b}^{2}=1),$ without resorting to an equivalent configuration, cannot rely on conditional statements since both measurements involve perturbations\footnote{Employing $P_{\alpha (u)+}(\mathbf{r}_{1}\in \Sigma _{+a}^{1})P_{\alpha (u)-}(\varepsilon _{b}^{2}=1|\mathbf{r}_{2}\in \Sigma _{-a}^{2})$ along with Eq. (\ref{42}) does not ensure the correlation is taken into account, since one may have $\mathbf{r}_{1}\in \Sigma _{+a}^{1}$ and $\mathbf{r}_{2}\in \Sigma _{-a}^{2}$ without $\mathbf{r}_{2}=-\mathbf{r}_{1}$. It is only when at least one of the measurements is not perturbed that such an inference can be made.}. The probability can be computed by obtaining the correlated averages of the fields rotated by the interaction for each measurement. As in the no-perturbation case $F_{\aleph (u)}$ is the field encapsulating the correlation (\ref{28}) while $F_{0(u)}$ does not contribute to the probabilities. This can be seen by noting that for the outcomes $\varepsilon _{a,b}^{1,2}=1$ the averages $\left\langle F_{0(u),\aleph (u)}\right\rangle$, giving $\cos (\frac{\theta _{b}-\theta _{a}}{2})$ and $\sin (\frac{\theta _{b}-\theta _{a}}{2})$ for $F_{0(u)}$ and $F_{\aleph (u)}$ respectively, do not depend on $u$. This implies that $F_{0}$ and $F_{\aleph }$ form separately \emph{equivalence classes}, i.e.
we have \begin{equation} F_{0(u)}\sim F_{0(a)}\mathrm{\hspace{.3cm}and\hspace{.3cm}}F_{\aleph (u)}\sim F_{\aleph (a)} \label{62} \end{equation} for any axes $u$ and $a$. Using Eq.\ (\ref{62}) it can be established that $ F_{0}$ does not contribute to the probabilities\footnote{ This can be seen by computing first $\left\langle F_{0}\right\rangle $ for the outcomes $\varepsilon _{a}^{1,2}=1$ and $\varepsilon _{b}^{1,2}=1$ which we know to vanish from the no-perturbation case (use $F_{0(a)}$ and $F_{0(b)} $ respectively). The same averages can be computed instead from $F_{0(b)}$ and $F_{0(a)}$ implying that terms such as $\left\langle F_{\Sigma _{\lbrack \pm b+a]}}^{1}\right\rangle _{+a}\left\langle F_{\Sigma _{\lbrack \pm b+a]}}^{2}\right\rangle _{+a}$ must be put to zero by hand to take the particle correlation into account.\ But these same terms also appear when computing $P(\varepsilon _{a}^{1}=1,\varepsilon _{b}^{2}=1)$ from $ \left\langle F_{0}\right\rangle $ with $F_{0(a)}$ and $F_{0(b)}$.}. Hence $ P_{T(u)}$ is given by $P_{\aleph (u)}$ \begin{equation} P_{\aleph (u)}(\varepsilon _{a}^{1}=1,\varepsilon _{b}^{2}=1)=\left\vert \left\langle F_{\Sigma _{\lbrack u+a]}}^{1}\right\rangle _{+a}\left\langle F_{\Sigma _{\lbrack -u+b]}}^{2}\right\rangle _{+b}-\left\langle F_{\Sigma _{\lbrack -u+a]}}^{1}\right\rangle _{+a}\left\langle F_{\Sigma _{\lbrack u+b]}}^{2}\right\rangle _{+b}\right\vert ^{2}, \end{equation} where the term between the $\left\vert ...\right\vert $ is the explicit expression of $\left\langle F_{\aleph (u)}\right\rangle $. Of course, it is possible (and simpler) to use Eq. (\ref{62}) and compute $P_{\aleph (a)}$ or $P_{\aleph (b)}$ instead of $P_{\aleph (u)}$ (expressions similar to Eqs. ( \ref{66}) and (\ref{68}) are obtained -- only the field indices need to be changed despite $F_{\aleph }$ being defined jointly over the two subsystems). 
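As a final consistency check (illustrative, not from the paper), the joint probabilities of Eq. (\ref{66c}) together with their sign-flipped companions reproduce the singlet correlation $E(a,b)=-\cos (\theta _{b}-\theta _{a})$, and therefore violate the CHSH bound of $2$ at the standard angles; the helper names below are ours.

```python
import numpy as np

def joint(eps1, eps2, ta, tb):
    """Joint outcome probability along axes a, b, from Eq. (66c):
    (1/2) sin^2 for equal outcomes, (1/2) cos^2 for opposite ones."""
    d = (tb - ta) / 2
    return 0.5 * np.sin(d) ** 2 if eps1 == eps2 else 0.5 * np.cos(d) ** 2

def E(ta, tb):
    """Correlation coefficient E(a, b) = sum of eps1*eps2*P."""
    return sum(e1 * e2 * joint(e1, e2, ta, tb)
               for e1 in (1, -1) for e2 in (1, -1))

# The model reproduces the singlet correlation -cos(theta_b - theta_a)
for ta, tb in [(0.0, 0.3), (0.2, 1.5), (1.0, 2.7)]:
    assert abs(E(ta, tb) + np.cos(tb - ta)) < 1e-12

# CHSH combination at the standard angles: |S| = 2*sqrt(2) > 2
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
assert abs(abs(S) - 2 * np.sqrt(2)) < 1e-12
```

This makes explicit the claim of the Introduction: the statistics produced by the field-particle model coincide with the quantum predictions that Bell-type models cannot reach.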
\section{Discussion and conclusion} The present dual field-particle model reproduces the EPR correlations without the need to invoke non-locality (i.e. action at a distance): a measurement carried out on one subsystem does not modify the field or the particle position of the other system. A striking feature is that although the total field $F_{T}$ is separable, the effective field $F_{\aleph }$ is a non-separable function. Non-separability neither involves nor implies non-locality (recall that non-separable functions are not exceptional in classical physics\footnote{For example the classical action for a multi-particle system is a non-separable function in configuration space.}) but is necessary in order to account for field correlations between hemispheres encapsulating the particle correlation (\ref{28}). The field configuration, as well as the particle positions, are set at the source, in the intersection of the past light cones of each subsystem's space-time location, and are modified \emph{locally} by the measurement process\footnote{The non-separable part of the field $F_{T}$ that takes into account the correlations between both subsystems becomes irrelevant to describe the system once a measurement is made (since the correlations are broken at that point).}. Several differences between our model and Bell-type LHV models deserve to be pointed out. First, note that the probabilities are obtained from average field intensities, not from elementary probabilities averaged over HV distributions $\rho (\lambda )$. It is known that in general classical fields do not have to obey Bell-type inequalities \cite{morgan}. Here, the fields (i) are not necessarily positive valued, (ii) can interfere, and (iii) define equivalence classes.
Field measurements are \emph{non-commutative}, whereas LHV models assume factorizable elementary probabilities $p(\varepsilon _{a}^{1},\varepsilon _{b}^{2},\lambda )=p(\varepsilon _{a}^{1},\lambda )p(\varepsilon _{b}^{2},\lambda )$, leading to the existence of global joint probabilities (e.g. $P_{\aleph }(\varepsilon _{a}^{1},\varepsilon _{a^{\prime }}^{1},\varepsilon _{b}^{2},\varepsilon _{b^{\prime }}^{2})$ ) that in quantum mechanics can only be defined for \emph{commuting} operators \cite{fine82}. If the hemispherical fields were replaced by probability distributions for the particles, then the equivalence relations would not hold and the conditional probabilities appearing in Eqs. (\ref{66}) or (\ref{68}) would imply outcome dependence \cite{matzkin jpa08,matzkin pra08}. The particles' positions thus appear as pre-determined, and can play the role of hidden-variables, but they do not ascribe probabilities except in the absence of measurement perturbations. The field configurations can also be taken as hidden variables and they do ascribe probabilities but only as members of an equivalence class that does not give a more complete specification than afforded by the quantum-mechanical state. These last remarks lead us back to the original EPR dilemma recalled in the Introduction. In a single particle system the field dynamics ensure that there is no pre-existing outcome as an element of reality, even when it is possible to make a prediction with unit probability (in this case too there is an infinity of possible field-particle configurations, the outcome arising from the interference between the system and the apparatus fields). For an arbitrary measurement axis, a definite field-particle configuration, even if known, would not give an elementary sub-quantum description of a measurement outcome; such a description is only possible by resorting to an equivalent, albeit fictitious, field particle configuration in which there is no perturbation. 
In the two particle system, the additional constraint is that the particle positions as well as the effective fields on each sphere are correlated, allowing one to infer one subsystem's outcome once the other subsystem's outcome is known. This inference, in terms of a sub-quantum description, also relies on the existence of an equivalence class providing an equivalent configuration characterized by a no-perturbation measurement along at least one axis. As a consequence the model denies the attribution of simultaneous reality to $\varepsilon _{a}^{2}$ and $\varepsilon _{b}^{2}$ on the ground that an observer has the choice of measuring $\varepsilon _{a}^{1}$ or $\varepsilon _{b}^{1}$ on particle 1 (this would imply that $F_{T(a)}$ and $F_{T(b)}$ be both realized as the system's field, which is impossible as noted above), although both conditional probabilities are unity. Thereby the \textquotedblleft simultaneous reality\textquotedblright\ branch of EPR's dilemma -- which is fulfilled by Bell-type models -- is decoupled here from the issue of locality. To sum up, we have given an explicit model in which a quantum state appears as an equivalence class comprising an infinity of possible field-particle configurations. The model can be said to 'complete' quantum mechanics (in the sense that it assumes an underlying reality relative to the quantum state) though it does not generally allow one to give more complete and deterministic sub-quantum predictions. The model, despite being local, does not abide by Bell's causality condition \cite{bell-loc} but nevertheless defuses the EPR dilemma while avoiding the type of probability ascription leading to Bell's theorem. \end{document}
\begin{document} \begin{abstract} We present a logic for Proximity-based Understanding of Conditionals (PUC-Logic) that unifies the Counterfactual and Deontic logics proposed by David Lewis. We also propose a natural deduction system (PUC-ND) associated to this new logic. This inference system is proven to be sound, complete, normalizing and decidable. The relative completeness for the $\boldsymbol{V}$ and $\boldsymbol{CO}$ logics is shown to emphasize the unified approach over the work of Lewis. \end{abstract} \begin{keyword} Conditionals, Logic, Natural Deduction, Counterfactual Logic, Deontic Logic \end{keyword} \maketitle \section{Counterfactuals} \begin{itemize} \item If Oswald did not kill Kennedy, then someone else did. \item If Oswald had not killed Kennedy, then someone else would have.\cite{Lewis} \end{itemize} The phrases above are respectively instances of the indicative and the subjunctive conditionals. The indicative conditional is associated to the material implication, whereas the subjunctive construction of the language is traditionally studied in philosophy as the counterfactual conditional \cite{Lewis,Goodman}, or the counterfactual for short. Conditional propositions involve two components, the antecedent and the consequent. Counterfactual conditionals differ from material implication in a subtle way. The truth of a material implication is based on the actual state-of-affairs. From the knowledge that Kennedy was killed, we can accept the truth of the first phrase. On the other hand, a counterfactual conditional should take into account the truth of the antecedent, even if it is not the case. The truth of the antecedent is mandatory in this analysis. Some approaches to counterfactuals entail belief revision, particularly those based on {\em Ramsey} test evaluation \cite{Ramsey}.
In this analysis, the truth value of a counterfactual is considered within a minimal change generated by admitting the antecedent true \cite{Goodman}. A possible way to circumvent belief revision mechanisms is to consider alternative (possible) states-of-affairs, considered here as worlds, and, based on some accessibility notion, choose the closest one among the worlds that satisfy the antecedent. If the consequent is true at this considered world, then the counterfactual is also true \cite{Lewis}. Both conditionals have false antecedents and false consequents in the current state-of-affairs. However, the second conditional is clearly false, since we find no reason to accept that, in the closest worlds in which Kennedy is not killed by Oswald, Kennedy is killed by someone else. We choose the approach of Lewis \cite{Lewis} in our attempt to formalize an inference system for counterfactuals because his accessibility relation leaves out the discussion of a general definition of similarity among worlds, which is considered as given in his analysis. It also opens the possibility for a contribution in the other direction: if we find some general properties of his accessibility relation, considering the evaluation of the formulas in counterfactual reasoning, we could sketch some details of the concept of similarity. \section{Lewis analysis} Lewis, on the very first page of his book \cite{Lewis}, writes: \begin{quote} "\textit{If kangaroos had no tails, they would topple over}, seems to me to mean something like this: in any possible state of affairs in which kangaroos have no tails, and which resembles our actual state of affairs as much as kangaroos having no tails permits it to, the kangaroos topple over." \end{quote} We can observe that the word ``resemble'' may be seen as a reference to the concept of similarity between some possible state of affairs in relation to the actual state of affairs.
The expression ``as much as'' here may be understood as a relative comparison of similarities among the possible states of affairs in relation to the actual state of affairs. But Lewis gave no formal definition of similarity in his book \cite{Lewis}. He defined two basic counterfactual conditional operators: \begin{itemize} \item $A \boxright B$: If it were the case that A, then it would be the case that B; \item $A \diamondright B$: If it were the case that A, then it might be the case that B. \end{itemize} \noindent He also provided definitions of other counterfactual operators. But, since they are interdefinable, he took $\boxright$ as the primitive for the construction of formulas. In the middle of his book, he introduced the comparative possibility operators and showed that they can serve as the primitive notion for counterfactuals. \begin{itemize} \item $A \preccurlyeq B$: It is as possible that A as it is that B. \end{itemize} \noindent This operator gave us simpler proofs during this work. He used possible-world semantics for intensional logic; for that reason the states of affairs are treated as worlds. To express similarity, he used proximity notions: a world is closer to the actual world in comparison to other worlds if it is more similar to the actual world than the other considered worlds. Lewis called the set of worlds to be considered for an evaluation the strictness of the conditional. He pointed out that the strictness of the counterfactual conditional is based on the similarity of worlds. He showed that the counterfactual could not be treated by strict conditionals, necessity operators or possibility operators given by modal logics. To do so, he argued that the strictness of the conditional cannot be given before all evaluations.
He constructed sequences of connected counterfactuals in a single English sentence for which the strictness cannot be given for the evaluation: \begin{quote} "If Otto had come, it would have been a lively party; but if both Otto and Anna had come it would have been a dreary party; but if Waldo had come as well, it would have been lively; but..." \end{quote} \noindent to show that the strictness of the counterfactuals cannot be defined by the context, because the sentence provides a single context for the evaluation of all counterfactuals. If we try to fix a strictness that makes one counterfactual true, then the next counterfactual is made false. Lewis proposed a variably strict conditional, in which a different degree of strictness is given for every world before the evaluation of any counterfactual. To express this concept, the accessibility relation is defined by a system of spheres, which is given for every world by a nesting function $\$$ that applies over a set of worlds $\mathcal{W}$. The nesting function assigns to each world a set of non-empty sets of worlds, and this set of sets is totally ordered by the inclusion relation. \begin{figure}[htb] \begin{center} \includegraphics[height=4cm,width=4cm]{SphereSystem.jpg} \caption{A system of spheres around some world $i$} \end{center} \end{figure} A system of spheres, of any kind, is central to the most traditional analyses of counterfactuals. But the idea behind it is also applicable to many different logics, so, if we manage to handle systems of spheres in a satisfactory manner, we will be able to use them in a broader class of logics. The system of neighbourhoods facilitates the development of the model by leaving open the choice of a proper definition of similarity, and that concept can be used for a broader class of logics, not only the counterfactuals.
From Lewis's definitions, the nesting function is a primitive notion: \begin{quote} $\phi \boxright \psi$ is true at a world $i$ (according to a system of spheres $\$$) if and only if either: no $\phi$-world belongs to any sphere $S$ in $\$_{i}$\footnote{$\$_{i}$ gives the neighbourhoods around the world $i$. They are the available degrees of strictness for evaluating counterfactuals at $i$.}, or some sphere $S$ in $\$_{i}$ does contain at least one $\phi$-world, and $\phi \rightarrow \psi$ holds at every world in $S$. \end{quote} \begin{quote} $\phi \preccurlyeq \psi$ is true at a world $i$ (according to a system of spheres $\$$) if and only if, for every sphere $S$ in $\$_{i}$, if $S$ contains any $\psi$-world then $S$ contains a $\phi$-world. \end{quote} Lewis \cite{Lewis} also provided conditions that may be applied to the nesting function $\$$. To every condition corresponds a different counterfactual logic: \begin{itemize} \item Normality (N): $\$$ is normal iff $\forall w \in \mathcal{W} : \$(w) \neq \emptyset$; \item Total reflexivity (T): $\$$ is totally reflexive iff $\forall w \in \mathcal{W} : w \in \bigcup\$(w)$; \item Weak centering (W): $\$$ is weakly centered iff $\forall w \in \mathcal{W} : \$(w) \neq \emptyset \mbox{ and } \forall N \in \$(w) : w \in N$; \item Centering (C): $\$$ is centered iff $\forall w \in \mathcal{W} : \{w\} \in \$(w)$; \item Limit Assumption (L): $\$$ satisfies the Limit Assumption iff, for any world $w$ and any formula $\phi$, if there is some $\phi$-world\footnote{A $\phi$-world is a world in which $\phi$ holds.} in $\bigcup\$(w)$, then there is some smallest sphere of $\$(w)$ that contains a $\phi$-world; \item Stalnaker's Assumption (A): $\$$ satisfies Stalnaker's Assumption iff, for any world $w$ and any formula $\phi$, if there is some $\phi$-world in $\bigcup\$(w)$, then there is some sphere of $\$(w)$ that contains exactly one $\phi$-world; \item Local
Uniformity (U-): $\$$ is locally uniform iff for any world $w$ and any $v \in \bigcup\$(w)$, $\bigcup\$(w)$ and $\bigcup\$(v)$ are the same; \item Uniformity (U): $\$$ is uniform iff for any worlds $w$ and $v$, $\bigcup\$(w)$ and $\bigcup\$(v)$ are the same; \item Local absoluteness (A-): $\$$ is locally absolute iff for any world $w$ and any $v \in \bigcup\$(w)$, $\$(w)$ and $\$(v)$ are the same; \item Absoluteness (A): $\$$ is absolute iff for any worlds $w$ and $v$, $\$(w)$ and $\$(v)$ are the same. \end{itemize} The $\boldsymbol{V}$-logic is the most basic counterfactual logic presented by Lewis \cite{Lewis}, where ``V'' stands for variably strict conditional. If, for example, we accept the centering condition (C), then we have the $\boldsymbol{VC}$-logic. Lewis showed in his book a chart of 26 non-equivalent $\boldsymbol{V}$-logics that arise from the combinations of these conditions. We prefer to call the spheres neighbourhoods, because they better represent the concept of proximity, which Lewis used to express similarity. The neighbourhoods provide a relative way to compare distance: a world contained in a neighbourhood is closer to the actual world than another world not contained in that same neighbourhood. As far as we know, there is only one natural deduction system for counterfactuals, which is given by Bonevac \cite{Bonevac}. But his system is designed to deal with the $\boldsymbol{VW}$-logic, since it contains the rule of counterfactual exploitation ($\boxright$E), which encapsulates the weak centering condition. His approach of defining rules for the counterfactual operators provides a better intuition of counterfactual logic. His system is expressive enough to deal with modalities and strict conditionals. The labelling of world shifts using formulas makes it easier to capture the counterfactual mechanics.
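Lewis's truth conditions above are directly executable on a finite system of spheres. The following sketch, in Python with names of our own choosing (none of this is Lewis's notation), checks nestedness and evaluates $\boxright$ and $\preccurlyeq$ over the spheres around a single world:

```python
# A finite-model sketch of Lewis's truth conditions. Worlds are plain
# values, a system of spheres around a world is a list of sets, and a
# formula is a predicate on worlds. All names here are our own.

def is_nested(spheres):
    """Inclusion must totally order the spheres around a world."""
    ss = [frozenset(s) for s in spheres]
    return all(a <= b or b <= a for a in ss for b in ss)

def boxright(phi, psi, spheres):
    """phi boxright psi: vacuously true if no sphere contains a
    phi-world; otherwise some sphere contains a phi-world and
    phi -> psi holds at every world of that sphere."""
    if not any(phi(w) for s in spheres for w in s):
        return True
    return any(any(phi(w) for w in s) and all(psi(w) for w in s if phi(w))
               for s in spheres)

def at_least_as_possible(phi, psi, spheres):
    """phi <= psi: every sphere with a psi-world has a phi-world."""
    return all(any(phi(w) for w in s)
               for s in spheres if any(psi(w) for w in s))

# Spheres around a toy world: strictness grows outward.
spheres = [{1}, {1, 2}, {1, 2, 3}]
assert is_nested(spheres)
assert boxright(lambda w: w >= 2, lambda w: w == 2, spheres)
assert not at_least_as_possible(lambda w: w == 3, lambda w: w == 2, spheres)
```

Note how the variable strictness shows up: $\boxright$ only needs *some* sphere on which the material conditional holds, which is exactly what blocks the fixed-strictness reading of Lewis's party example.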
We also found the work of Sano \cite{Sano}, which pointed out the advantages of using the hybrid formalism for counterfactual logic. He presented some axioms and rules for the $\boldsymbol{V_{\mathcal{HC}(@)}}$-logic that extends the $\boldsymbol{V}$-logic of Lewis. Another interesting reference is the article of Gent \cite{Gent}, which presents a new sequent- or tableaux-style proof system for $\boldsymbol{VC}$. His work depends on the operator $\llbracket \rrbracket$ and the definition of signed formulas. We recently found a sequent calculus, provided by Lellmann \cite{Jelia2012}, that treats the $\boldsymbol{V}$-logic of Lewis and its extensions. Its language depends on modal operators, especially the counterfactual operators $\boxright$ and $\boxRight$ and the comparative possibility operator $\preccurlyeq$. As far as we know, our deduction system is the only one dealing with the Lewis systems in a general form, that is, without using modalities in the syntax and treating the most basic counterfactual $\boldsymbol{V}$-logic. \section{Proximity-based Understanding of Conditionals} In \cite{LSFA09}, we presented a sequent calculus for counterfactual logic based on a Local Set Theory \cite{Bell}. In that article, we defined the satisfaction relation for worlds, for sets of worlds and for neighbourhoods, encapsulating some quantifications to express the operators with fewer quantifiers. But the encapsulation left the inference system with no control over the quantifications. Here we propose a logic for the Proximity-based Understanding of Conditionals, PUC-Logic for short, that takes control of the quantifications with labels. \begin{definition} Given a non-empty set $\mathcal{W}$ (considered the set of worlds), we define a nesting function $\$$ that assigns to each world of $\mathcal{W}$ a set of nested sets of $\mathcal{W}$.
A set of nested sets is a set of sets in which the inclusion relation among sets is a total order. \end{definition} \begin{definition} A \textit{frame} is a tuple $\mathcal{F} = \langle \mathcal{W} , \$ , \mathcal{V} \rangle$, in which $\mathcal{V}$ is a truth assignment function that maps each atomic formula to a subset of $\mathcal{W}$. A \textit{model} is a pair $\mathcal{M} = \langle \mathcal{F} , \chi \rangle$, where $\mathcal{F}$ is a frame and $\chi$ is a world of $\mathcal{W}$, called the \textit{reference world} of the model. A \textit{template} is a pair $\mathcal{T} = \langle \mathcal{M} , N \rangle$, where $N \in \$(\chi)$; $N$ is called the \textit{reference neighbourhood} of the template. \end{definition} We use the term \textit{structure} to refer to a model or a template. \begin{definition} A structure is \textit{finite} if its set of worlds is finite. \end{definition} We now define a relation between structures to represent the membership of neighbourhoods in the neighbourhood system of a world and the membership of worlds in a given neighbourhood. \begin{definition} Given a model $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle$, then, for any $N \in \$(\chi)$, the template $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle$ is in \textit{perspective} relation to $\mathcal{M}$. We represent this by $\mathcal{M} \multimap \mathcal{T}$. Given a template $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle$, then, for any $w \in N$, the model $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle$ is in perspective relation to $\mathcal{T}$. We represent this by $\mathcal{T} \multimap \mathcal{M}$.
\end{definition} \begin{definition} The concatenation of $n$ tuples of the perspective relation is called a path of size $n$ and is represented by the symbol $\multimap_{n}$. \end{definition} One remark: if the size of a path is even, then a model is related to another model or a template is related to another template. \begin{definition} The transitive closure of the perspective relation is called the \textit{projective} relation, which is represented by the symbol $\leadsto$. \end{definition} \begin{definition} Given a world $\chi$ and the nesting function $\$$, we can build a sequence of sets of worlds: \begin{enumerate} \item $\bigtriangleup^{\$}_{0}(\chi) = \{\chi\}$; \item $\bigtriangleup^{\$}_{k+1}(\chi) = \bigcup_{w \in \bigtriangleup^{\$}_{k}(\chi)} (\bigcup \$(w))$, $k \geq 0$. \end{enumerate} Let $\bigtriangleup^{\$}(\chi) = \bigcup_{n \in \mathbb{N}} \bigtriangleup^{\$}_{n}(\chi)$ and $\bigtriangleup^{\$}_{\vec{n}}(\chi) = \bigcup_{0 \leq m \leq n} \bigtriangleup^{\$}_{m}(\chi)$. \end{definition} We introduce labels in our language in order to syntactically represent quantifications over two specific domains: neighbourhoods and worlds. For that reason, a label may be a neighbourhood label or a world label: \begin{itemize} \item Neighbourhood labels: \begin{itemize} \item[($\circledast$)] Universal quantifier over neighbourhoods of some neighbourhood system; \item[($\circledcirc$)] Existential quantifier over neighbourhoods of some neighbourhood system; \item[($N$)] Variables (capital letters) that may denote some neighbourhood of some neighbourhood system.
\end{itemize} \item World labels: \begin{itemize} \item[($\ast$)] Universal quantifier over worlds of some neighbourhood; \item[($\bullet$)] Existential quantifier over worlds of some neighbourhood; \item[($u$)] Variables (lower case letters) that denote some world of some neighbourhood. \end{itemize} \end{itemize} We denote the set of neighbourhood labels by $\boldsymbol{L}_{n}$ and the set of world labels by $\boldsymbol{L}_{w}$. \begin{definition} The language of PUC-Logic consists of: \begin{itemize} \item countably many neighbourhood variables: $N,M,L,\ldots$; \item countably many world variables: $w,z,\ldots$; \item countably many proposition symbols: $p_{0},p_{1},\ldots$; \item proposition constants: $\top_{n},\bot_{n},\top_{w},\bot_{w},\shneg N, \shpos N, \shneg M, \shpos M,\ldots$; \item connectives: $\wedge,\vee,\rightarrow,\neg$; \item neighbourhood labels: $\circledast, \circledcirc$; \item world labels: $\ast,\bullet$; \item auxiliary symbols: $(,)$. \end{itemize} \end{definition} As in the case of labels, we want to separate the set of well-formed formulas into two disjoint sets, according to the sort of label that labels the formula. We denote the set of neighbourhood formulas by $\boldsymbol{F}_{n}$ and the set of world formulas by $\boldsymbol{F}_{w}$.
\begin{definition} \label{wffDefinition} The sets $\boldsymbol{F}_{n}$ and $\boldsymbol{F}_{w}$ of well-formed formulas\footnote{We use the term wff to denote both the singular and the plural form of the expression well-formed formula.} are constructed by the following rules: \begin{enumerate} \item $\top_{n},\bot_{n} \in \boldsymbol{F}_{n}$; \item $\top_{w},\bot_{w} \in \boldsymbol{F}_{w}$; \item $\shneg N,\shpos N \in \boldsymbol{F}_{w}$, for every neighbourhood variable $N$; \item $\alpha \in \boldsymbol{F}_{n}$, for every atomic formula $\alpha$, except $\top$ and $\bot$; \item if $\alpha \in \boldsymbol{F}_{n}$, then $\neg \alpha \in \boldsymbol{F}_{n}$; \item if $\alpha \in \boldsymbol{F}_{w}$, then $\neg \alpha \in \boldsymbol{F}_{w}$; \item if $\alpha,\beta \in \boldsymbol{F}_{n}$, then $\alpha \wedge \beta,\alpha \vee \beta,\alpha \rightarrow \beta \in \boldsymbol{F}_{n}$; \item if $\alpha,\beta \in \boldsymbol{F}_{w}$, then $\alpha \wedge \beta,\alpha \vee \beta,\alpha \rightarrow \beta \in \boldsymbol{F}_{w}$; \item if $\alpha \in \boldsymbol{F}_{n}$ and $\phi \in \boldsymbol{L}_{w}$, then $\alpha^{\phi} \in \boldsymbol{F}_{w}$; \item if $\alpha \in \boldsymbol{F}_{w}$ and $\phi \in \boldsymbol{L}_{n}$, then $\alpha^{\phi} \in \boldsymbol{F}_{n}$.\end{enumerate}\end{definition} We introduced two versions of the formulas for true and false in order to make the sets of formulas disjoint.
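The sort alternation imposed by the last two rules above can be checked mechanically. The sketch below, in Python with label names that are our own stand-ins for the circled and plain quantifier symbols, infers the sort of a labelled formula and rejects ill-formed labellings:

```python
# A sketch of the two-sorted grammar: world labels take neighbourhood
# formulas to F_w and neighbourhood labels take world formulas to F_n.
# The label names below are our own stand-ins for the symbols.

NBHD_LABELS = {"all_nbhd", "some_nbhd", "N"}     # L_n
WORLD_LABELS = {"all_world", "some_world", "u"}  # L_w

def sort_of(base_sort, attr):
    """Sort ('n' or 'w') of a formula of the given base sort after
    applying the attribute, read bottom-up; None if ill-formed."""
    s = base_sort
    for label in attr:
        if s == "n" and label in WORLD_LABELS:
            s = "w"
        elif s == "w" and label in NBHD_LABELS:
            s = "n"
        else:
            return None
    return s

# An n-formula, world-labelled then neighbourhood-labelled, is again
# a neighbourhood formula:
assert sort_of("n", ["all_world", "all_nbhd"]) == "n"
# A world label cannot be applied to a world formula:
assert sort_of("w", ["all_world"]) is None
```

The strict alternation is what makes the stacks of labels below behave like alternating quantifier prefixes over neighbourhoods and worlds.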
The formula $\shneg N$ is introduced to represent that a neighbourhood contains the neighbourhood $N$, and the formula $\shpos N$ represents that a neighbourhood is contained in $N$. The last two rules of Definition \ref{wffDefinition} introduce the labelling of formulas. Moreover, since we can label a labelled formula, every formula has a stack of labels that represents its nested labels. We call it the \textit{attribute} of the formula. The top label of the stack is the \textit{index} of the formula. We represent the attribute of a formula as a letter that appears to the right of the formula. If the attribute is empty, we may omit it, and the formula has no index. The attribute of a formula is always empty if the last rule used to build the formula is not one of the labelling rules, as in the case of $((\alpha \rightarrow \alpha)^{\circledast,\bullet})\vee(\gamma^{\circledcirc,\ast})$. To read a labelled formula, it is necessary to read its index first and then the rest of the formula. For example, $(\alpha \rightarrow \alpha)^{\circledast,\bullet}$ should be read as: there is some world, in all neighbourhoods of the considered neighbourhood system, in which it is the case that $\alpha \rightarrow \alpha$. We may concatenate stacks of labels and labels, using commas, to produce a stack of labels that respects the order of the labels in the stacks and the order of the concatenation, as in $\alpha^{\Sigma,\Delta}$, where $\alpha$ is a formula and $\Sigma$ and $\Delta$ are stacks of labels. But we admit no nesting of attributes, which means that $(\alpha^{\Sigma})^{\Delta}$ is the same as $\alpha^{\Sigma,\Delta}$.
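The identification $(\alpha^{\Sigma})^{\Delta} = \alpha^{\Sigma,\Delta}$ amounts to flattening stacks of labels. A minimal sketch, with a representation of our own choosing (a formula is a pair of a head and a list of labels, top of stack last):

```python
# Attributes as stacks (lists, top of stack last): concatenation keeps
# label order, and (alpha^Sigma)^Delta flattens to alpha^{Sigma,Delta}.
# The representation is ours, purely for illustration.

def label(formula, *labels):
    head, attr = formula
    return (head, attr + list(labels))

alpha = ("alpha", [])
nested = label(label(alpha, "s1", "s2"), "d1")  # (alpha^{s1,s2})^{d1}
flat = label(alpha, "s1", "s2", "d1")           # alpha^{s1,s2,d1}
assert nested == flat
assert nested[1][-1] == "d1"  # the top of the stack is the index
```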
\begin{definition} Given a stack of labels $\Sigma$, we define $\overline{\Sigma}$ as the stack of labels that is obtained from $\Sigma$ by reversing the order of the labels in the stack.\end{definition}\begin{definition} Given a stack of labels $\Sigma$, its size $s(\Sigma)$ is its number of labels.\end{definition}\begin{definition} Given a set of worlds $\mathcal{W}$, a set of world variables and a set of neighbourhood variables, we define a variable assignment function $\sigma$ that assigns a world of $\mathcal{W}$ to each world variable and a non-empty subset of $\mathcal{W}$ to each neighbourhood variable.\end{definition}\begin{definition} \label{satisfactionDefinition} Given a variable assignment function $\sigma$, the relation $\models$ of \textit{satisfaction} between formulas, models and templates is given by: \begin{enumerate} \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha$, $\alpha$ atomic, iff: $\chi \in \mathcal{V}(\alpha)$.
For every world $w \in \mathcal{W}$, $w \in \mathcal{V}(\top_{n})$ and $w \not\in \mathcal{V}(\bot_{n})$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \neg \; (\alpha^{\Sigma})$ iff: $\neg \; (\alpha^{\Sigma}) \in \boldsymbol{F}_{n}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \not\models \alpha^{\Sigma}$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma} \wedge \beta^{\Omega}$ iff: $\alpha^{\Sigma} \wedge \beta^{\Omega} \in \boldsymbol{F}_{n}$ and\\$(\;\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma} \mbox{ and } \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \beta^{\Omega}\;)$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$ iff: $\alpha^{\Sigma} \vee \beta^{\Omega} \in \boldsymbol{F}_{n}$ and\\$(\; \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma} \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \beta^{\Omega} \;)$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$ iff: $\alpha^{\Sigma} \rightarrow \beta^{\Omega} \in \boldsymbol{F}_{n}$ and\\$(\; \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \neg (\alpha^{\Sigma}) \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \beta^{\Omega} \;)$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle
\models \alpha^{\Sigma,\circledast}$ iff: $\forall N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma}$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,\circledcirc}$ iff: $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma}$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,N}$ iff: $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \alpha^{\Sigma}$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \shneg M$ iff: $\sigma(M) \in \$(\chi)$ and $\sigma(M) \subset N$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \shpos M$ iff: $\sigma(M) \in \$(\chi)$ and $N \subset \sigma(M)$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma,\ast}$ iff: $\forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma,\bullet}$ iff: $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma,u}$ iff: $\sigma(u) \in N$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \sigma(u) \rangle \models \alpha^{\Sigma}$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \neg \; (\alpha^{\Sigma})$ iff: $\neg \; (\alpha^{\Sigma}) \in
\boldsymbol{F}_{w}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \not\models \alpha^{\Sigma}$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \wedge \beta^{\Omega}$ iff: $\alpha^{\Sigma} \wedge \beta^{\Omega} \in \boldsymbol{F}_{w}$ and\\$(\;\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \mbox{ and }\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \beta^{\Omega}\;)$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$ iff: $\alpha^{\Sigma} \vee \beta^{\Omega} \in \boldsymbol{F}_{w}$ and\\$(\; \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \beta^{\Omega} \;)$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$ iff: $\alpha^{\Sigma} \rightarrow \beta^{\Omega} \in \boldsymbol{F}_{w}$ and\\$(\; \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \neg (\alpha^{\Sigma}) \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \beta^{\Omega} \;)$; \item $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \top_{w}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \not\models \bot_{w}$, for every template.\end{enumerate}\end{definition} \begin{definition} \label{logicalConsequenceDf}
The relation $\alpha^{\Sigma} \models \beta^{\Omega}$ of \textit{logical consequence} is defined iff $\alpha^{\Sigma},\beta^{\Omega} \in \boldsymbol{F}_{n}$ and, for every model $\mathcal{M} \models \alpha^{\Sigma}$, we have $\mathcal{M} \models \beta^{\Omega}$. The relation is also defined iff $\alpha^{\Sigma},\beta^{\Omega} \in \boldsymbol{F}_{w}$ and, for every template $\mathcal{T} \models \alpha^{\Sigma}$, we have $\mathcal{T} \models \beta^{\Omega}$. Given $\Gamma \cup \{\alpha^{\Sigma}\} \subset \boldsymbol{F}_{n}$, the relation $\Gamma \models \alpha^{\Sigma}$ of logical consequence is defined iff, for every model $\mathcal{M}$ that satisfies every formula of $\Gamma$, $\mathcal{M} \models \alpha^{\Sigma}$. Given $\Gamma \cup \{\alpha^{\Sigma}\} \subset \boldsymbol{F}_{w}$, the relation $\Gamma \models \alpha^{\Sigma}$ is defined iff, for every template $\mathcal{T}$ that satisfies every formula of $\Gamma$, $\mathcal{T} \models \alpha^{\Sigma}$.\end{definition} \begin{definition} $\alpha^{\Sigma} \in \boldsymbol{F}_{n}$ ($\in \boldsymbol{F}_{w}$) is a n-tautology (w-tautology) iff, for every model (template), $\mathcal{M} \models \alpha^{\Sigma}$ ($\mathcal{T} \models \alpha^{\Sigma}$).\end{definition} \begin{lemma} \label{taut} $\alpha^{\Sigma}$ is a n-tautology iff $\alpha^{\Sigma,\ast,\circledast}$ is a n-tautology. \end{lemma} \begin{proof} If $\alpha^{\Sigma}$ is a n-tautology, then $\forall z \in \mathcal{W}$, $\langle \mathcal{W} , \$ , \mathcal{V} , z \rangle \models \alpha^{\Sigma}$.
In particular, given a world $\chi \in \mathcal{W}$, $\forall N \in \$(\chi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$ and, by definition, $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,\ast,\circledast}$. Since this holds for every world of $\mathcal{W}$, $\alpha^{\Sigma,\ast,\circledast}$ is also a n-tautology. Conversely, if $\alpha^{\Sigma,\ast,\circledast}$ is a n-tautology, then $\forall N \in \$(\chi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$ for every choice of $\mathcal{W}$, $\$$, $\mathcal{V}$ and $\chi$. So, given $\mathcal{W}$ and $\mathcal{V}$, we can choose $\$$ to be the constant function $\{\mathcal{W}\}$. Then $\forall z \in \mathcal{W}$, $\langle \mathcal{W} , \$ , \mathcal{V} , z \rangle \models \alpha^{\Sigma}$, and $\alpha^{\Sigma}$ must also be a n-tautology.\end{proof} The relation defined below is motivated by the fact that, if a model $\mathcal{M}$ satisfies a formula like $\alpha^{\ast,\circledast}$, then every template $\mathcal{T}$ such that $\mathcal{M} \multimap \mathcal{T}$ satisfies $\alpha^{\ast}$ by definition, and every model $\mathcal{H}$ such that $\mathcal{M} \multimap_{2} \mathcal{H}$ satisfies $\alpha$ by definition.
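The satisfaction clauses for the four quantifier labels can be read as a recursive evaluator that peels the index (the top, i.e. rightmost, label) at each step, alternating between models and templates along the perspective relation. The following sketch uses our own names for the labels and finite toy structures, nothing more:

```python
# A recursive evaluator for the quantifier labels of the satisfaction
# definition. A model is a world; a template additionally carries a
# reference neighbourhood. dollar maps a world to its neighbourhoods,
# and phi is a predicate on worlds. Label names are ours.

def sat(world, dollar, phi, attr, nbhd=None):
    """Does phi^attr hold at the model (nbhd is None) or template?"""
    if not attr:
        return phi(world)
    rest, top = attr[:-1], attr[-1]
    if top == "all_nbhd":    # alpha^{Sigma, circled-ast} at a model
        return all(sat(world, dollar, phi, rest, n) for n in dollar(world))
    if top == "some_nbhd":   # alpha^{Sigma, circled-circ} at a model
        return any(sat(world, dollar, phi, rest, n) for n in dollar(world))
    if top == "all_world":   # alpha^{Sigma, ast} at a template
        return all(sat(w, dollar, phi, rest) for w in nbhd)
    if top == "some_world":  # alpha^{Sigma, bullet} at a template
        return any(sat(w, dollar, phi, rest) for w in nbhd)
    raise ValueError("unknown label: " + top)

dollar = lambda w: [{w}, {w, w + 1}]  # a toy nesting function
# alpha^{ast, circled-ast}: all worlds of all neighbourhoods satisfy alpha.
assert sat(0, dollar, lambda w: w <= 1, ["all_world", "all_nbhd"])
assert not sat(0, dollar, lambda w: w == 0, ["all_world", "all_nbhd"])
```

Each recursive call that consumes a label crosses one tuple of the perspective relation, which is exactly the motivation for the referential consequence defined next.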
\begin{definition} \label{referentialConsequence} Given a model $\mathcal{M}$, called \textit{the reference model}, the relation $\alpha^{\Sigma} \models_{\mathcal{M}:n} \beta^{\Omega}$ of \textit{referential consequence} is defined iff: \begin{itemize} \item $n > 0$ and ($\mathcal{M} \models \alpha^{\Sigma}$ implies $\mathcal{H} \models \beta^{\Omega}$, for any structure $\mathcal{M} \multimap_{n} \mathcal{H}$); \item $n = 0$ and ($\mathcal{M} \models \alpha^{\Sigma}$ implies $\mathcal{M} \models \beta^{\Omega}$). \end{itemize} Given $\Gamma \cup \{\alpha^{\Sigma}\} \subset \boldsymbol{F}_{n}$, $\Gamma \models_{\mathcal{M}:n} \alpha^{\Sigma}$ iff: \begin{itemize} \item $n > 0$ and ($\mathcal{H} \models \alpha^{\Sigma}$, for any structure $\mathcal{M} \multimap_{n} \mathcal{H}$ that satisfies every formula of $\Gamma$); \item $n = 0$ and ($\mathcal{M} \models \alpha^{\Sigma}$ if $\mathcal{M}$ satisfies every formula of $\Gamma$).
\end{itemize}\end{definition} \begin{figure}[htbp] \label{theSystem} \begin{dedsystem} \hline \AxiomC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma} \wedge \beta^{\Omega}$} \LeftLabel{1:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof & \AxiomC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma} \wedge \beta^{\Omega}$} \LeftLabel{2:} \RightLabel{$\Delta$} \UnaryInfC{$\beta^{\Omega}$} \DisplayProof & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{3:} \RightLabel{$\Delta$} \BinaryInfC{$\alpha^{\Sigma} \wedge \beta^{\Omega}$} \DisplayProof \\ \AxiomC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{4:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma} \vee \beta^{\Omega}$} \DisplayProof & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma} \vee \beta^{\Omega}$} \AxiomC{$\phantom{-}$} \alwaysNoLine \UnaryInfC{[$\alpha^{\Sigma}$]} \RightLabel{$\Delta$} \alwaysSingleLine \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\gamma^{\Lambda}$} \AxiomC{$\phantom{-}$} \alwaysNoLine \UnaryInfC{[$\beta^{\Omega}$]} \RightLabel{$\Delta$} \alwaysSingleLine \UnaryInfC{$\Pi_{3}$} \RightLabel{$\Theta$} \UnaryInfC{$\gamma^{\Lambda}$} \LeftLabel{5:} \RightLabel{$\Theta$} \alwaysSingleLine \TrinaryInfC{$\gamma^{\Lambda}$} \DisplayProof & \AxiomC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{6:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma} \vee \beta^{\Omega}$} \DisplayProof \\ \AxiomC{$\phantom{-}$} \alwaysNoLine \UnaryInfC{[$\neg (\alpha^{\Sigma})$]} \RightLabel{$\Delta$} \alwaysSingleLine
\UnaryInfC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{7:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof & \AxiomC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{8:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof & \AxiomC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{9:} \UnaryInfC{$\bot_{n}$} \DisplayProof \\ \AxiomC{$\alpha^{\Sigma}$} \LeftLabel{10:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof & \AxiomC{$\phantom{-}$} \alwaysNoLine \UnaryInfC{[$\alpha^{\Sigma}$]} \RightLabel{$\Delta$} \alwaysSingleLine \UnaryInfC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\beta^{\Omega}$} \alwaysSingleLine \LeftLabel{11:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma} \rightarrow \beta^{\Omega}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma} \rightarrow \beta^{\Omega}$} \LeftLabel{12:} \RightLabel{$\Delta$} \BinaryInfC{$\beta^{\Omega}$} \DisplayProof \\ \AxiomC{$\Pi$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma,\phi}$} \LeftLabel{13:} \RightLabel{$\Delta,\phi$} \UnaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{$\Pi$} \RightLabel{$\Delta,\phi$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{14:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma,\phi}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{$\Pi$} \RightLabel{$\Delta,u$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{15:} \RightLabel{$\Delta,\ast$} \UnaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof \\
\AxiomC{$\Pi$} \RightLabel{$\Delta,\ast$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{16:} \RightLabel{$\Delta,u$} \UnaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{$\Pi$} \RightLabel{$\Delta,u$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{17:} \RightLabel{$\Delta,\bullet$} \UnaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,\bullet$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{[$\alpha^{\Sigma}$]} \RightLabel{$\Delta,u$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{18:} \RightLabel{$\Theta$} \BinaryInfC{$\beta^{\Omega}$} \DisplayProof \\ \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta,\circledcirc$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{19:} \RightLabel{$\Delta,\circledcirc$} \BinaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,\circledcirc$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{[$\alpha^{\Sigma}$]} \RightLabel{$\Delta,N$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{20:} \RightLabel{$\Theta$} \BinaryInfC{$\beta^{\Omega}$} \DisplayProof & \AxiomC{$\Pi$} \RightLabel{$\Delta,N$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{21:} \RightLabel{$\Delta,\circledast$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof \\ \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,\circledast$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{22:} \RightLabel{$\Delta,N$} \BinaryInfC{$\alpha^{\Sigma}$} \DisplayProof & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$}
\UnaryInfC{$\alpha^{\Sigma,\bullet}$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta,M$} \UnaryInfC{$\shneg N$} \LeftLabel{23:} \RightLabel{$\Delta,M$} \BinaryInfC{$\alpha^{\Sigma,\bullet}$} \DisplayProof & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\alpha^{\Sigma,\ast}$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta,M$} \UnaryInfC{$\shpos N$} \LeftLabel{24:} \RightLabel{$\Delta,M$} \BinaryInfC{$\alpha^{\Sigma,\ast}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof \\ \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\shneg M$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta,M$} \UnaryInfC{$\shneg P$} \LeftLabel{25:} \RightLabel{$\Delta,N$} \BinaryInfC{$\shneg P$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\shpos M$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta,M$} \UnaryInfC{$\shpos P$} \LeftLabel{26:} \RightLabel{$\Delta,N$} \BinaryInfC{$\shpos P$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{[$\shneg M$]} \RightLabel{$\Delta,N$} \UnaryInfC{$\Pi_{1}$} \RightLabel{$\Theta$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{[$\shneg N$]} \RightLabel{$\Delta,M$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{27:} \RightLabel{$\Theta$} \BinaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof \\ \AxiomC{[$\shpos M$]} \RightLabel{$\Delta,N$} \UnaryInfC{$\Pi_{1}$} \RightLabel{$\Theta$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{[$\shpos N$]} \RightLabel{$\Delta,M$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{28:} \RightLabel{$\Theta$} \BinaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{[$\shneg N$]} \RightLabel{$\Delta,M$} \UnaryInfC{$\Pi_{1}$} \RightLabel{$\Theta$}
\UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{[$\shpos N$]} \RightLabel{$\Delta,M$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{29:} \RightLabel{$\Theta$} \BinaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & \AxiomC{$\phantom{-}$} \LeftLabel{30:} \RightLabel{$\Delta$} \UnaryInfC{$\top$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof \\ \hline \end{dedsystem} \caption{Natural Deduction System for PUC-Logic (PUC-ND)} \end{figure}

Every rule of PUC-ND has a stack of labels, called its \textit{context}. The context is represented by a capital Greek letter at the right of each rule. The \textit{scope} of a rule is the top label of its context. Given a context $\Delta$, we denote its scope by $\oc \Delta$. If the context is empty, then there is no scope. As in the case of labels and formulas, we want to separate the contexts into two disjoint sets: $\Delta \in \boldsymbol{C}_{n}$ if $\oc \Delta \in \boldsymbol{L}_{n}$; $\Delta \in \boldsymbol{C}_{w}$ if $\Delta$ is empty or $\oc \Delta \in \boldsymbol{L}_{w}$.

\begin{definition} \label{fitDf} We say that a wff $\alpha^{\Sigma}$ \textit{fits} into a context $\Delta$ iff $\alpha^{\Sigma,\overline{\Delta}} \in \boldsymbol{F}_{n}$. \end{definition}

The wffs $\alpha^{\bullet} \rightarrow \beta^{\bullet}$ and $\gamma^{u,\circledast,\ast}$ fit into the context $\{\circledcirc\}$, because $(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledcirc} \in \boldsymbol{F}_{n}$ and $\gamma^{u,\circledast,\ast,\circledcirc} \in \boldsymbol{F}_{n}$.
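The fitting check of Definition \ref{fitDf} can be sketched computationally. The following Python fragment is an illustrative sketch, not part of the formal apparatus: it assumes, consistently with the worked examples and with Lemma \ref{countingLemma} below, that labels carry a sort (\texttt{'n'} for neighbourhood labels such as $\circledast$, $\circledcirc$ and neighbourhood variables, \texttt{'w'} for world labels such as $\bullet$, $\ast$ and world variables), that applying a label is only well formed when its sort differs from the sort of the formula it labels, and that an unlabelled formula has sort \texttt{'n'}.

```python
def sort_after(labels, base='n'):
    """Sort of a formula after applying a stack of labels, read inside-out.

    Returns None when some label repeats the current sort, i.e. the
    resulting string is not a wff (assumed alternation of sorts)."""
    s = base
    for lab in labels:
        if lab == s:
            return None  # two labels of the same sort in a row: ill-formed
        s = lab
    return s

def fits(attribute, context):
    """alpha^Sigma fits into Delta iff alpha^{Sigma, reversed(Delta)} is in F_n,
    i.e. the full label stack is well formed and ends on an 'n'-sorted label."""
    return sort_after(attribute + list(reversed(context))) == 'n'

# gamma^{u, circledast, ast} fits into {circledcirc}: sorts w, n, w + n
assert fits(['w', 'n', 'w'], ['n'])
# gamma^{ast, N, u} does not fit into {circledcirc, ast}: sorts w, n, w + w, n
assert not fits(['w', 'n', 'w'], ['n', 'w'])
# no wff fits into {ast}: the full stack would end on a world label
assert not fits([], ['w']) and not fits(['w', 'n'], ['w'])
```

Under these assumptions the parity statement of Lemma \ref{countingLemma} is immediate: a well-formed context alternates sorts starting from a neighbourhood label at the bottom, so its size is odd exactly when its scope is in $\boldsymbol{L}_{n}$.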
The wffs $\alpha^{\bullet} \vee \beta^{\ast}$ and $\gamma^{\ast,N,u}$ do not fit into the context $\{\circledcirc,\ast\}$, because $(\alpha^{\bullet} \vee \beta^{\ast})^{\ast,\circledcirc}$ and $\gamma^{\ast,N,u,\ast,\circledcirc}$ are not wffs and, therefore, cannot be in $\boldsymbol{F}_{n}$. There is no wff that fits into the context $\{\ast\}$, because the label $\ast \in \boldsymbol{L}_{w}$ and the rule of labelling can only include the resulting formula in $\boldsymbol{F}_{w}$. We can conclude that if a wff is in $\boldsymbol{F}_{n}$, then the context must be in $\boldsymbol{C}_{w}$, and likewise for $\boldsymbol{F}_{w}$ and $\boldsymbol{C}_{n}$. The fitting restriction ensures that the conclusion of a rule is always a wff. Moreover, the definition of fitting resembles the attribute grammar approach for context-free languages \cite{Knuth}. This is the main reason for calling the stack of labels of a formula the attribute of the formula.

\newpage \noindent The names and restrictions of the rules of PUC-ND are as follows:
\begin{enumerate}
\item $\boldsymbol\wedge$\textbf{-elimination}: (a) $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into the context; (b) $\Delta$ has no existential quantifier;\\ The existential quantifier is excluded to make it possible to distribute the context over the $\wedge$ operator, as shown in Lemma \ref{resolution}.
\item $\boldsymbol\wedge$\textbf{-elimination}: (a) $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into the context; (b) $\Delta$ has no existential quantifier;\\ The existential quantifier is excluded to make it possible to distribute the context over the $\wedge$ operator, as shown in Lemma \ref{resolution}.
\item $\boldsymbol\wedge$\textbf{-introduction}: (a) $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into the context; (b) $\Delta$ has no existential quantifier;\\ The existential quantifier is excluded because the existence of some world (or neighbourhood) in which some wff $A$ holds and the existence of some world in which $B$ holds do not imply that there is some world in which both $A$ and $B$ hold.
\item $\boldsymbol\vee$\textbf{-introduction}: (a) $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into the context; (b) $\Delta$ has no universal quantifier;\\ The universal quantifier is excluded to make it possible to distribute the context over the $\vee$ operator, as shown in Lemma \ref{resolution}.
\item $\boldsymbol\vee$\textbf{-elimination}: (a) $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into the context $\Delta$; (b) $\Delta$ has no universal quantifier;\\ The universal quantifier is excluded because the fact that $A \vee B$ holds for all worlds (or neighbourhoods) does not imply that $A$ holds for all worlds or that $B$ holds for all worlds.
\item $\boldsymbol\vee$\textbf{-introduction}: (a) $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into the context; (b) $\Delta$ has no universal quantifier;\\ The universal quantifier is excluded to make it possible to distribute the context over the $\vee$ operator, as shown in Lemma \ref{resolution}.
\item $\boldsymbol\bot$\textbf{-classical}: (a) $\alpha^{\Sigma}$ and $\bot$ must fit into the context;
\item $\boldsymbol\bot$\textbf{-intuitionistic}: (a) $\alpha^{\Sigma}$ and $\bot$ must fit into the context;
\item \textbf{absurd expansion}: (a) $\Delta$ must have no occurrence of $\circledast$; (b) $\bot$ must fit into the context; (c) $\Delta$ must be non-empty.\\ The symbol $\bot$ is used to denote a formula that may only be $\bot_{n}$ or $\bot_{w}$. In the occurrence of $\circledast$, we admit the possibility of an empty system of neighbourhoods; in that context, the absurd does not mean that we actually reach an absurd in our world. $\Delta$ must be non-empty to avoid unnecessary detours, like the conclusion of $\bot_{n}$ from $\bot_{n}$ in the empty context;
\item \textbf{hypothesis-injection}: (a) $\alpha^{\Sigma}$ must fit into the context.\\ This rule permits a scope change before any formula change. It also avoids combinatorial definitions of rules with hypotheses and formulas inside a given context;
\item $\boldsymbol\rightarrow$\textbf{-introduction}: (a) $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into the context;
\item $\boldsymbol\rightarrow$\textbf{-elimination (modus ponens)}: (a) $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into the context; (b) $\Delta$ has no existential quantifier; (c) the premises may be in reverse order;\\ The existential quantifier is excluded because the existence of some world (or neighbourhood) in which some wff $A$ holds and the existence of some world in which $A \rightarrow B$ holds do not imply that there is some world in which $B$ holds.
\item \textbf{context-introduction}: (a) $\alpha^{\Sigma,\phi}$ and $\alpha^{\Sigma}$ must fit into their contexts;
\item \textbf{context-elimination}: (a) $\alpha^{\Sigma,\phi}$ and $\alpha^{\Sigma}$ must fit into their contexts;
\item \textbf{world universal introduction}: (a) $\alpha^{\Sigma}$ must fit into the context; (b) $u$ must not occur in any hypothesis on which $\alpha^{\Sigma}$ depends; (c) $u$ must not occur in the context of any hypothesis on which $\alpha^{\Sigma}$ depends;
\item \textbf{world universal elimination}: (a) $\alpha^{\Sigma}$ must fit into the context; (b) $u$ must not occur in $\alpha^{\Sigma}$ or $\Delta$;
\item \textbf{world existential introduction}: (a) $\alpha^{\Sigma}$ must fit into the context;
\item \textbf{world existential elimination}: (a) the formula $\alpha^{\Sigma}$ must fit into the context; (b) $u$ must not occur in $\alpha^{\Sigma}$, $\Delta$, $\Theta$ or any open hypothesis on which $\beta^{\Omega}$ depends; (c) $u$ must not occur in the context of any open hypothesis on which $\beta^{\Omega}$ depends; (d) the premises may be in reverse order;
\item \textbf{neighbourhood existential introduction}: (a) $\alpha^{\Sigma}$ must fit into the context; (b) the premises may be in reverse order;
\item \textbf{neighbourhood existential elimination}: (a) the formula $\alpha^{\Sigma}$ must fit into the context; (b) $N$ must not occur in $\alpha^{\Sigma}$, $\Delta$, $\Theta$ or any open hypothesis on which $\beta^{\Omega}$ depends; (c) $N$ must not occur in the context of any open hypothesis on which $\beta^{\Omega}$ depends; (d) the premises may be in reverse order;
\item \textbf{neighbourhood universal introduction}: (a) the formula $\alpha^{\Sigma}$ must fit into the contexts; (b) $N$ must not occur in any open
hypothesis on which $\alpha^{\Sigma}$ depends; (c) $N$ must not occur in the context of any open hypothesis on which $\alpha^{\Sigma}$ depends;
\item \textbf{neighbourhood universal wild-card}: (a) the formulas $\alpha^{\Sigma}$ and $\beta^{\Omega}$ must fit into their contexts; (b) the premises may be in reverse order;\\ This rule is necessary because a system of neighbourhoods may be empty, while every variable must denote some neighbourhood because of the variable assignment function $\sigma$. The wild-card rule may be seen as a permission to use some available variable as an instantiation, by making the choice of the variable explicit.
\item \textbf{world existential propagation}: (a) $\alpha^{\Sigma,\bullet}$ and $\shneg N$ fit into their contexts; (b) the premises may be in reverse order;
\item \textbf{world universal propagation}: (a) $\alpha^{\Sigma,\ast}$ and $\shpos N$ fit into their contexts; (b) the premises may be in reverse order;
\item \textbf{transitive neighbourhood inclusion}: (a) $\shneg M$ and $\shneg P$ fit into their contexts; (b) the premises may be in reverse order;
\item \textbf{transitive neighbourhood inclusion}: (a) $\shpos M$ and $\shpos P$ fit into their contexts; (b) the premises may be in reverse order;
\item \textbf{neighbourhood total order}: (a) $\shneg M$, $\shneg N$ and $\alpha^{\Sigma}$ fit into their contexts; (b) the premises may be in reverse order;
\item \textbf{neighbourhood total order}: (a) $\shpos M$, $\shpos N$ and $\alpha^{\Sigma}$ fit into their contexts; (b) the premises may be in reverse order;
\item \textbf{neighbourhood total order}: (a) $\shneg N$, $\shpos N$ and $\alpha^{\Sigma}$ fit into their contexts.
(b) the premises may be in reverse order;
\item \textbf{truth acceptance}: (a) $\Delta$ must have no occurrence of $\circledcirc$; (b) $\top$ must fit into the context.\\ The symbol $\top$ is used to denote a formula that may only be $\top_{n}$ or $\top_{w}$. If we accepted the occurrence of $\circledcirc$, the existence of some neighbourhood in every system of neighbourhoods would be necessary, and the logic of PUC-ND would be normal according to Lewis's classification \cite{Lewis}. $\Delta$ must be non-empty to avoid unnecessary detours, like the conclusion of $\top_{n}$ from $\top_{n}$ in the empty context.
\end{enumerate}

We present here, as an example of the PUC-ND inference calculus, a proof of a tautology. Considering Lewis's definitions, we understand that if there is some neighbourhood that has some $\beta^{\Omega}$-world but no $\alpha^{\Sigma}$-world, then, for all neighbourhoods, having some $\alpha^{\Sigma}$-world implies having some $\beta^{\Omega}$-world.
The reason is the total order for the inclusion relation among neighbourhoods.\begin{center}\AxiomC{$^{4}$[$((\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet})^{\circledcirc}$]} \UnaryInfC{$((\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet})^{\circledcirc}$} \RightLabel{$\circledcirc$} \UnaryInfC{$(\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet}$} \AxiomC{$^{3}$[$(\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet}$]} \RightLabel{$N$} \UnaryInfC{$\Pi$} \UnaryInfC{$(\alpha^{\Sigma,\bullet} \rightarrow \beta^{\Omega,\bullet})^{\circledast}$} \LeftLabel{3} \BinaryInfC{$(\alpha^{\Sigma,\bullet} \rightarrow \beta^{\Omega,\bullet})^{\circledast}$} \LeftLabel{4} \UnaryInfC{$((\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet})^{\circledcirc} \rightarrow (\alpha^{\Sigma,\bullet} \rightarrow \beta^{\Omega,\bullet})^{\circledast}$} \alwaysNoLine \UnaryInfC{$\phantom{.}$} \UnaryInfC{$\phantom{.}$} \DisplayProof \begin{footnotesize}\AxiomC{$(\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet}$} \RightLabel{$N$} \UnaryInfC{$(\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet}$} \RightLabel{$N$} \UnaryInfC{$\beta^{\Omega,\bullet}$} \AxiomC{$^{2}$[$\shneg N$]} \RightLabel{$M$} \UnaryInfC{$\shneg N$} \RightLabel{$M$} \BinaryInfC{$\beta^{\Omega,\bullet}$} \RightLabel{$M$} \UnaryInfC{$\alpha^{\Sigma,\bullet} \rightarrow \beta^{\Omega,\bullet}$} \AxiomC{$(\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet}$} \RightLabel{$N$}
\UnaryInfC{$(\neg(\alpha^{\Sigma}))^{\ast} \wedge \beta^{\Omega,\bullet}$} \RightLabel{$N$} \UnaryInfC{$(\neg(\alpha^{\Sigma}))^{\ast}$} \AxiomC{$^{2}$[$\shpos N$]} \RightLabel{$M$} \UnaryInfC{$\shpos N$} \RightLabel{$M$} \BinaryInfC{$(\neg(\alpha^{\Sigma}))^{\ast}$} \RightLabel{$M,\ast$} \UnaryInfC{$\neg(\alpha^{\Sigma})$} \RightLabel{$M,u$} \UnaryInfC{$\neg(\alpha^{\Sigma})$} \AxiomC{$^{1}$[$\alpha^{\Sigma,\bullet}$]} \RightLabel{$M$} \UnaryInfC{$\alpha^{\Sigma,\bullet}$} \RightLabel{$M,\bullet$} \UnaryInfC{$\alpha^{\Sigma}$} \RightLabel{$M,u$} \UnaryInfC{$\alpha^{\Sigma}$} \RightLabel{$M,u$} \BinaryInfC{$\bot$} \RightLabel{$M,u$} \UnaryInfC{$\beta^{\Omega}$} \RightLabel{$M,\bullet$} \UnaryInfC{$\beta^{\Omega}$} \RightLabel{$M$} \UnaryInfC{$\beta^{\Omega,\bullet}$} \RightLabel{$M$} \LeftLabel{1} \UnaryInfC{$\alpha^{\Sigma,\bullet} \rightarrow \beta^{\Omega,\bullet}$} \LeftLabel{2} \RightLabel{$M$} \BinaryInfC{$\alpha^{\Sigma,\bullet} \rightarrow \beta^{\Omega,\bullet}$} \RightLabel{$\circledast$} \UnaryInfC{$\alpha^{\Sigma,\bullet} \rightarrow \beta^{\Omega,\bullet}$} \LeftLabel{$\boldsymbol{\Pi} \hspace*{1cm}$} \UnaryInfC{$(\alpha^{\Sigma,\bullet} \rightarrow \beta^{\Omega,\bullet})^{\circledast}$} \DisplayProof\end{footnotesize}\end{center}

\begin{lemma} \label{countingLemma} If $\Delta \in \boldsymbol{C}_{n}$, then $s(\Delta)$ is odd. If $\Delta \in \boldsymbol{C}_{w}$, then $s(\Delta)$ is even. \end{lemma}

\begin{proof} By definition, if $\Delta$ is empty, then $\Delta \in \boldsymbol{C}_{w}$ and $s(\Delta)$ is even.
According to the rules of PUC-ND, an empty context can only accept an additional label $\phi \in \boldsymbol{L}_{n}$; then $\{\Delta,\phi\} \in \boldsymbol{C}_{n}$ and $s(\{\Delta,\phi\})$ is odd. We conclude that changing the context from $\boldsymbol{C}_{w}$ to $\boldsymbol{C}_{n}$ and vice-versa always adds one to the size of the context, so the even sizes are exactly those of contexts in $\boldsymbol{C}_{w}$.\end{proof}

\section{PUC Soundness and Completeness}

For the proof of soundness of PUC-Logic, we prove that PUC-ND derivations preserve the relation of resolution, which generalizes the satisfiability relation. To do so, we need to prove some lemmas. In many cases we use Definition \ref{referentialConsequence} of the referential consequence relation.

\begin{definition} \label{resolutionDf} Given a model $\mathcal{M}$, a context $\Delta$ and a wff $\alpha^{\Sigma}$, the relation $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$ of \textit{resolution} holds iff $\alpha^{\Sigma}$ fits into the context $\Delta$ and $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}}$. If $\Gamma \subset \boldsymbol{F}_{n}$ or $\Gamma \subset \boldsymbol{F}_{w}$, then $\mathcal{M} \models^{\Delta} \Gamma$ iff the resolution relation holds for every formula of $\Gamma$. \end{definition}

\begin{lemma} \label{lemmaConsequence} Given a model $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle$, if $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$ and $\alpha^{\Sigma} \models_{\mathcal{M}:s(\Delta)} \beta^{\Omega}$, then $\mathcal{M} \models^{\Delta} \beta^{\Omega}$.
\end{lemma}

\begin{proof} If $\Delta$ is empty ($s(\Delta)=0$), the resolution gives us $\mathcal{M} \models \alpha^{\Sigma}$. From $\alpha^{\Sigma} \models_{\mathcal{M}:0} \beta^{\Omega}$ we know that $\mathcal{M} \models \beta^{\Omega}$ if $\mathcal{M} \models \alpha^{\Sigma}$ and, by the definition of resolution, $\mathcal{M} \models^{\Delta} \beta^{\Omega}$;\\
\noindent If $\Delta = \{\circledast\}$ ($s(\Delta)=1$), then, by definition, $\mathcal{M} \models^{\{\circledast\}} \alpha^{\Sigma}$ means $\mathcal{M} \models \alpha^{\Sigma,\circledast}$ and, for every template $\mathcal{T}$ such that $\mathcal{M} \multimap \mathcal{T}$, $\mathcal{T} \models \alpha^{\Sigma}$.
$$ \begin{xy} \xymatrix{ &\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,\circledast} \ar@{-o}[dl] \ar@{-o}[d] \ar@{-o}[dr] &\\ \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} & \ldots & \langle \mathcal{W} , \$ , \mathcal{V} , \chi , S \rangle \models \alpha^{\Sigma} } \end{xy} $$
$N, \ldots , S$ represent all neighbourhoods of $\$(\chi)$. From $s(\{\circledast\})=1$, we know that $\alpha^{\Sigma} \models_{\mathcal{M}:1} \beta^{\Omega}$ and, by definition, we can replace $\alpha^{\Sigma}$ with $\beta^{\Omega}$ in all endpoints of the directed graph and conclude $\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \beta^{\Omega,\circledast}$ and $\mathcal{M} \models^{\Delta} \beta^{\Omega}$;\\
\noindent If $\Delta = \{\circledcirc\}$ ($s(\Delta)=1$), then $\mathcal{M} \models \alpha^{\Sigma,\circledcirc}$.
$$ \begin{xy} \xymatrix{ &\langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,\circledcirc} \ar@{-o}[dl] \ar@{-o}[d] \ar@{-o}[dr] &\\ \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} & \ldots & \langle \mathcal{W} , \$ , \mathcal{V} , \chi , S \rangle \models \alpha^{\Sigma} } \end{xy} $$
$N, \ldots , S$ represent all neighbourhoods of $\$(\chi)$ in which $\alpha^{\Sigma}$ holds. We know that there is at least one such neighbourhood. From $\alpha^{\Sigma} \models_{\mathcal{M}:1} \beta^{\Omega}$, we can replace $\alpha^{\Sigma}$ with $\beta^{\Omega}$ in all endpoints and conclude $\mathcal{M} \models \beta^{\Omega,\circledcirc}$, because we know that there is at least one such downward path. By definition, $\mathcal{M} \models^{\Delta} \beta^{\Omega}$;\\
\noindent If $\Delta = \{N\}$ ($s(\Delta)=1$), then $\mathcal{M} \models \alpha^{\Sigma,N}$.
$$ \begin{xy} \xymatrix{ \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,N} \ar@{-o}[d]\\ \langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \alpha^{\Sigma} } \end{xy} $$
From $\alpha^{\Sigma} \models_{\mathcal{M}:1} \beta^{\Omega}$, we replace $\alpha^{\Sigma}$ with $\beta^{\Omega}$ in the endpoint and conclude $\mathcal{M} \models \beta^{\Omega,N}$. By definition, $\mathcal{M} \models^{\Delta} \beta^{\Omega}$;\\
\noindent If $\Delta = \{\circledast,\ast\}$ ($s(\Delta)=2$), then $\mathcal{M} \models \alpha^{\Sigma,\ast,\circledast}$.
$$ \begin{scriptsize} \begin{xy} \xymatrix{ \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,\ast,\circledast} \ar@{-o}[d] \ar@{-o}[drrrr] &&&\\ \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma,\ast} \ar@{-o}[d] \ar@{-o}[drr] && \ldots && \langle \mathcal{W} , \$ , \mathcal{V} , \chi , S \rangle \models \alpha^{\Sigma,\ast} \ar@{-o}[d]\\ \langle \mathcal{W} , \$ , \mathcal{V} , \lambda_{1} \rangle \models \alpha^{\Sigma} & \ldots & \langle \mathcal{W} , \$ , \mathcal{V} , \lambda_{t} \rangle \models \alpha^{\Sigma} && \ldots } \end{xy} \end{scriptsize} $$
$N, \ldots , S$ represent all neighbourhoods of $\$(\chi)$. $\lambda_{1}, \ldots , \lambda_{t}$ represent all worlds of $N$. From $\alpha^{\Sigma} \models_{\mathcal{M}:2} \beta^{\Omega}$, we can replace $\alpha^{\Sigma}$ with $\beta^{\Omega}$ in all endpoints and conclude $\mathcal{M} \models \beta^{\Omega,\ast,\circledast}$. By definition, $\mathcal{M} \models^{\Delta} \beta^{\Omega}$;\\
\noindent If $\Delta = \{\circledast,\bullet\}$ ($s(\Delta)=2$), then $\mathcal{M} \models \alpha^{\Sigma,\bullet,\circledast}$.
$$ \begin{scriptsize} \begin{xy} \xymatrix{ \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,\bullet,\circledast} \ar@{-o}[d] \ar@{-o}[drrrr] &&&\\ \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma,\bullet} \ar@{-o}[d] \ar@{-o}[drr] && \ldots && \langle \mathcal{W} , \$ , \mathcal{V} , \chi , S \rangle \models \alpha^{\Sigma,\bullet} \ar@{-o}[d]\\ \langle \mathcal{W} , \$ , \mathcal{V} , \lambda_{1} \rangle \models \alpha^{\Sigma} & \ldots & \langle \mathcal{W} , \$ , \mathcal{V} , \lambda_{t} \rangle \models \alpha^{\Sigma} && \ldots } \end{xy} \end{scriptsize} $$
$N, \ldots , S$ represent all neighbourhoods of $\$(\chi)$. $\lambda_{1}, \ldots , \lambda_{t}$ represent all worlds of $N$ in which $\alpha^{\Sigma}$ holds. We know that there is at least one such world. From $\alpha^{\Sigma} \models_{\mathcal{M}:2} \beta^{\Omega}$, we can replace $\alpha^{\Sigma}$ with $\beta^{\Omega}$ in all endpoints and conclude $\mathcal{M} \models \beta^{\Omega,\bullet,\circledast}$ and $\mathcal{M} \models^{\Delta} \beta^{\Omega}$;\\
\noindent If $\Delta = \{\circledast,u\}$ ($s(\Delta)=2$), then $\mathcal{M} \models \alpha^{\Sigma,u,\circledast}$.
$$ \begin{xy} \xymatrix{ \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma,u,\circledast} \ar@{-o}[d] \ar@{-o}[drrr] &&&\\ \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N_{1} \rangle \models \alpha^{\Sigma,u} \ar@{-o}[d] & \ldots && \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N_{s} \rangle \models \alpha^{\Sigma,u} \ar@{-o}[dlll]\\ \langle \mathcal{W} , \$ , \mathcal{V} , \sigma(u) \rangle \models \alpha^{\Sigma} &&&& } \end{xy} $$
$N_{1}, \ldots , N_{s}$ represent all neighbourhoods of $\$(\chi)$. From $\alpha^{\Sigma} \models_{\mathcal{M}:2} \beta^{\Omega}$, we can replace $\alpha^{\Sigma}$ with $\beta^{\Omega}$ in the endpoint and conclude $\mathcal{M} \models \beta^{\Omega,u,\circledast}$. So, by definition, $\mathcal{M} \models^{\Delta} \beta^{\Omega}$;\\
\noindent Any other combination of labels follows, by analogy, from the same arguments presented above for each label.\end{proof}

\begin{lemma} \label{lemmaDirectConsequence} Given a model $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle$, if $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$ and $\alpha^{\Sigma} \models \beta^{\Omega}$, then $\mathcal{M} \models^{\Delta} \beta^{\Omega}$. \end{lemma}

\begin{proof} We follow the argument of Lemma \ref{lemmaConsequence}, replacing $\alpha^{\Sigma}$ with $\beta^{\Omega}$ in all endpoints, which is possible by the definition of logical consequence.
\end{proof}

\begin{lemma} \label{lemmaVee} Given $\Delta$ without universal quantifiers, if $\alpha^{\Sigma,\overline{\Delta}} \vee \beta^{\Omega,\overline{\Delta}}$ is a wff, then $\alpha^{\Sigma,\overline{\Delta}} \vee \beta^{\Omega,\overline{\Delta}} \equiv (\alpha^{\Sigma} \vee \beta^{\Omega})^{\overline{\Delta}}$. \end{lemma}

\begin{proof} We proceed by induction on the size of $\Delta$:\\
\noindent If $\Delta$ is empty, then the equivalence is trivially true;\\
\noindent (base) If $\Delta$ contains only one label, it must be a neighbourhood label:
\begin{itemize}
\item[-] $\alpha^{\Sigma,\circledcirc} \vee \beta^{\Omega,\circledcirc}$ may be read as $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma}$ or $\exists M \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , M \rangle \models \beta^{\Omega}$. But $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma}$ implies, by definition, $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$. Then we have $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$ or $\exists M \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , M \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$.
Since the neighbourhood variables are bound, we have $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$, which is represented with labels as $(\alpha^{\Sigma} \vee \beta^{\Omega})^{\circledcirc}$. Then $\alpha^{\Sigma,\circledcirc} \vee \beta^{\Omega,\circledcirc}$ implies $(\alpha^{\Sigma} \vee \beta^{\Omega})^{\circledcirc}$. On the other hand, $(\alpha^{\Sigma} \vee \beta^{\Omega})^{\circledcirc}$ may be read as $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$, which means, by definition, $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma} \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \beta^{\Omega}$. In the first case, $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma}$, which may be read as $\alpha^{\Sigma,\circledcirc}$. In the second case, $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \beta^{\Omega}$, which may be read as $\beta^{\Omega,\circledcirc}$. Since we have one case or the other, we have $\alpha^{\Sigma,\circledcirc} \vee \beta^{\Omega,\circledcirc}$.
So, $(\alpha^{\Sigma} \vee \beta^{\Omega})^{\circledcirc} \equiv \alpha^{\Sigma,\circledcirc} \vee \beta^{\Omega,\circledcirc}$;
\item[-] $\alpha^{\Sigma,N} \vee \beta^{\Omega,N}$ may be read as $\sigma(N) \in \$(\chi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \alpha^{\Sigma}$, or $\sigma(N) \in \$(\chi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \beta^{\Omega}$. Then we have $\sigma(N) \in \$(\chi)$ and ($\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \alpha^{\Sigma}$ or $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \beta^{\Omega}$), which is, by definition, $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$. Then $\alpha^{\Sigma,N} \vee \beta^{\Omega,N}$ implies $(\alpha^{\Sigma} \vee \beta^{\Omega})^{N}$. On the other hand, $(\alpha^{\Sigma} \vee \beta^{\Omega})^{N}$ may be read as $\sigma(N) \in \$(\chi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$, which means, by definition, $\sigma(N) \in \$(\chi)$ and $(\; \langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \alpha^{\Sigma} \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \beta^{\Omega} \;)$.
So, we have ($\sigma(N) \in \$(\chi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \alpha^{\Sigma}$) or ($\sigma(N) \in \$(\chi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , \sigma(N) \rangle \models \beta^{\Omega}$), which may be read as $\alpha^{\Sigma,N} \vee \beta^{\Omega,N}$. So, $(\alpha^{\Sigma} \vee \beta^{\Omega})^{N} \equiv \alpha^{\Sigma,N} \vee \beta^{\Omega,N}$;
\end{itemize}
\noindent (base) If $\Delta$ contains two labels, it may be $\{\circledcirc,\bullet\}$, $\{N,\bullet\}$, $\{\circledcirc,u\}$ or $\{N,u\}$. But we only need to look at the distributivity for the $\bullet$ label and for world variables, because we have already seen the distributivity of the $\vee$ connective for the label $\circledcirc$ and for any neighbourhood variable.
\begin{itemize}
\item[-] $\alpha^{\Sigma,\bullet,\circledcirc} \vee \beta^{\Omega,\bullet,\circledcirc}$ may be read as $\exists N \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma,\bullet}$ or $\exists M \in \$(\chi) : \langle \mathcal{W} , \$ , \mathcal{V} , \chi , M \rangle \models \beta^{\Omega,\bullet}$. But $\langle \mathcal{W} , \$ , \mathcal{V} , \chi , N \rangle \models \alpha^{\Sigma,\bullet}$ implies, by definition, $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$, which implies $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$.
So, we have $\exists N \in \$(\xi) : \exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$ or $\exists M \in \$(\xi) : \exists z \in M : \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$. Since every variable is bound, we have $\exists N \in \$(\xi) : \exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$, which is, by definition, equivalent to $\exists N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models (\alpha^{\Sigma} \vee \beta^{\Omega})^{\bullet}$, which is equivalent, by definition, to $(\alpha^{\Sigma} \vee \beta^{\Omega})^{\bullet,\circledcirc}$. On the other hand, $(\alpha^{\Sigma} \vee \beta^{\Omega})^{\bullet,\circledcirc}$ may be read as $\exists N \in \$(\xi) : \exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \vee \beta^{\Omega}$, which is, by definition, $\exists N \in \$(\xi) : \exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \beta^{\Omega}$, which implies $\exists N \in \$(\xi) : \exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \mbox{ or } \exists z \in N : \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle \models \beta^{\Omega}$, which implies $\exists N \in \$(\xi) : \exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \mbox{ or } \exists M \in \$(\xi) : \exists z \in M : \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle \models \beta^{\Omega}$, which may be represented with labels as $\alpha^{\Sigma,\bullet,\circledcirc} \vee \beta^{\Omega,\bullet,\circledcirc}$. So, $\alpha^{\Sigma,\bullet,\circledcirc} \vee \beta^{\Omega,\bullet,\circledcirc} \equiv (\alpha^{\Sigma} \vee \beta^{\Omega})^{\bullet,\circledcirc}$; \item[-] The proofs of $\alpha^{\Sigma,\bullet,N} \vee \beta^{\Omega,\bullet,N} \equiv (\alpha^{\Sigma} \vee \beta^{\Omega})^{\bullet,N}$, $\alpha^{\Sigma,u,\circledcirc} \vee \beta^{\Omega,u,\circledcirc} \equiv (\alpha^{\Sigma} \vee \beta^{\Omega})^{u,\circledcirc}$ and $\alpha^{\Sigma,u,N} \vee \beta^{\Omega,u,N} \equiv (\alpha^{\Sigma} \vee \beta^{\Omega})^{u,N}$ are analogous. \end{itemize} \noindent (induction) If $\alpha^{\Sigma} \vee \beta^{\Omega} \in \boldsymbol{F}_{w}$, $\Delta = \{\Delta',\phi\}$ and $s(\Delta) = n + 1$, then $\oc\Delta \in \boldsymbol{L}_{n}$ and $\alpha^{\Sigma,\overline{\Delta}} \vee \beta^{\Omega,\overline{\Delta}}$ may be written as $\alpha^{\Sigma,\phi,\overline{\Delta'}} \vee \beta^{\Omega,\phi,\overline{\Delta'}}$, where $s(\Delta') = n$. Then, by the induction hypothesis, $\alpha^{\Sigma,\phi,\overline{\Delta'}} \vee \beta^{\Omega,\phi,\overline{\Delta'}} = (\alpha^{\Sigma,\phi} \vee \beta^{\Omega,\phi})^{\overline{\Delta'}}$.
From the base assertions, $(\alpha^{\Sigma,\phi} \vee \beta^{\Omega,\phi})^{\overline{\Delta'}} = ((\alpha^{\Sigma} \vee \beta^{\Omega})^{\phi})^{\overline{\Delta'}} = (\alpha^{\Sigma} \vee \beta^{\Omega})^{\phi,\overline{\Delta'}} = (\alpha^{\Sigma} \vee \beta^{\Omega})^{\overline{\Delta}}$;\\ \noindent (induction) If $\alpha^{\Sigma} \vee \beta^{\Omega} \in \boldsymbol{F}_{n}$ and $s(\Delta) = n + 2$, then $\oc\Delta \in \boldsymbol{L}_{w}$ and $\alpha^{\Sigma,\overline{\Delta}} \vee \beta^{\Omega,\overline{\Delta}}$ may be written as $\alpha^{\Sigma,\phi,\Theta,\overline{\Delta'}} \vee \beta^{\Omega,\phi,\Theta,\overline{\Delta'}}$, where $s(\Delta') = n$. Then, by the induction hypothesis, $\alpha^{\Sigma,\phi,\Theta,\overline{\Delta'}} \vee \beta^{\Omega,\phi,\Theta,\overline{\Delta'}} = (\alpha^{\Sigma,\phi,\Theta} \vee \beta^{\Omega,\phi,\Theta})^{\overline{\Delta'}}$. From the base assertions, $(\alpha^{\Sigma,\phi,\Theta} \vee \beta^{\Omega,\phi,\Theta})^{\overline{\Delta'}} = ((\alpha^{\Sigma} \vee \beta^{\Omega})^{\phi,\Theta})^{\overline{\Delta'}} = (\alpha^{\Sigma} \vee \beta^{\Omega})^{\phi,\Theta,\overline{\Delta'}} = (\alpha^{\Sigma} \vee \beta^{\Omega})^{\overline{\Delta}}$.\end{proof} \begin{lemma} \label{lemmaWedge} Given $\Delta$ without existential quantifiers, if $\alpha^{\Sigma,\overline{\Delta}} \wedge \beta^{\Omega,\overline{\Delta}}$ is wff, then $\alpha^{\Sigma,\overline{\Delta}} \wedge \beta^{\Omega,\overline{\Delta}} \equiv (\alpha^{\Sigma} \wedge \beta^{\Omega})^{\overline{\Delta}}$.
\end{lemma} \begin{proof} We proceed by induction on the size of $\Delta$:\\ \noindent If $\Delta$ is empty, then the equivalence is true;\\ \noindent (base) If $\Delta$ contains only one label, it must be a neighbourhood label: \begin{itemize} \item[-] $\alpha^{\Sigma,\circledast} \wedge \beta^{\Omega,\circledast}$ may be read as $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \alpha^{\Sigma}$ and $\forall M \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , M \rangle \models \beta^{\Omega}$. But then, we may conclude that, for every neighbourhood $L \in \$(\xi)$, $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , L \rangle \models \alpha^{\Sigma}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , L \rangle \models \beta^{\Omega}$, which can be represented with labels, since $L$ is arbitrary, as $(\alpha^{\Sigma} \wedge \beta^{\Omega})^{\circledast}$. On the other hand, $(\alpha^{\Sigma} \wedge \beta^{\Omega})^{\circledast}$ can be read as $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \alpha^{\Sigma} \wedge \beta^{\Omega}$, which is equivalent, by definition, to $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \alpha^{\Sigma} \mbox{ and } \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \beta^{\Omega}$.
So we have $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \alpha^{\Sigma}$ and $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \beta^{\Omega}$, which is equivalent to $\alpha^{\Sigma,\circledast} \wedge \beta^{\Omega,\circledast}$; \item[-] $\alpha^{\Sigma,N} \wedge \beta^{\Omega,N}$ may be read as ($\sigma(N) \in \$(\xi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \alpha^{\Sigma}$) and ($\sigma(N) \in \$(\xi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \beta^{\Omega}$). But then, we may conclude, by definition, that $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \alpha^{\Sigma}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \beta^{\Omega}$, which can be represented with labels as $(\alpha^{\Sigma} \wedge \beta^{\Omega})^{N}$. On the other hand, $(\alpha^{\Sigma} \wedge \beta^{\Omega})^{N}$ can be read as $\sigma(N) \in \$(\xi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \alpha^{\Sigma} \wedge \beta^{\Omega}$, which is equivalent, by definition, to $\sigma(N) \in \$(\xi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \alpha^{\Sigma} \mbox{ and } \langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \beta^{\Omega}$.
So we have ($\sigma(N) \in \$(\xi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \alpha^{\Sigma}$) and ($\sigma(N) \in \$(\xi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \beta^{\Omega}$), which is equivalent to $\alpha^{\Sigma,N} \wedge \beta^{\Omega,N}$; \end{itemize} \noindent (base) If $\Delta$ contains two labels, it may be $\{\circledast,\ast\}$, $\{N,\ast\}$, $\{\circledast,u\}$ or $\{N,u\}$. But we just need to look at the distributivity for the $\ast$ label and for world variables, because we have already seen the distributivity of the $\wedge$ connective for the label $\circledast$ and for any neighbourhood variable. \begin{itemize} \item[-] $\alpha^{\Sigma,\ast,\circledast} \wedge \beta^{\Omega,\ast,\circledast}$ may be read as $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \alpha^{\Sigma,\ast}$ and $\forall M \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , M \rangle \models \beta^{\Omega,\ast}$. Then we have, by definition, $\forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$ and $\forall z \in M : \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle \models \beta^{\Omega}$. So, for every world $x$ of every neighbourhood $L$, $\langle \mathcal{W} , \$ , \mathcal{V} , x \rangle \models \alpha^{\Sigma}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , x \rangle \models \beta^{\Omega}$.
Then we may conclude, by definition, that $\langle \mathcal{W} , \$ , \mathcal{V} , x \rangle \models \alpha^{\Sigma} \wedge \beta^{\Omega}$ and represent it with labels as $(\alpha^{\Sigma} \wedge \beta^{\Omega})^{\ast,\circledast}$ because $x$ and $L$ are arbitrary. On the other hand, $(\alpha^{\Sigma} \wedge \beta^{\Omega})^{\ast,\circledast}$ may be read as $\forall N \in \$(\xi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \wedge \beta^{\Omega}$, which implies, by definition, $\forall N \in \$(\xi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$ and also $\forall N \in \$(\xi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \beta^{\Omega}$. So, we have $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \alpha^{\Sigma,\ast,\circledast}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \beta^{\Omega,\ast,\circledast}$. So, we may conclude, by definition, that $\alpha^{\Sigma,\ast,\circledast} \wedge \beta^{\Omega,\ast,\circledast}$ holds; \item[-] The proofs of $\alpha^{\Sigma,\ast,N} \wedge \beta^{\Omega,\ast,N} \equiv (\alpha^{\Sigma} \wedge \beta^{\Omega})^{\ast,N}$, $\alpha^{\Sigma,u,\circledast} \wedge \beta^{\Omega,u,\circledast} \equiv (\alpha^{\Sigma} \wedge \beta^{\Omega})^{u,\circledast}$ and $\alpha^{\Sigma,u,N} \wedge \beta^{\Omega,u,N} \equiv (\alpha^{\Sigma} \wedge \beta^{\Omega})^{u,N}$ are analogous.
\end{itemize} \noindent (induction) If $\alpha^{\Sigma} \wedge \beta^{\Omega} \in \boldsymbol{F}_{w}$ and $s(\Delta) = n + 1$, then $\oc\Delta \in \boldsymbol{L}_{n}$ and $\alpha^{\Sigma,\overline{\Delta}} \wedge \beta^{\Omega,\overline{\Delta}}$ may be written as $\alpha^{\Sigma,\phi,\overline{\Delta'}} \wedge \beta^{\Omega,\phi,\overline{\Delta'}}$, where $s(\Delta') = n$. Then, by the induction hypothesis, $\alpha^{\Sigma,\phi,\overline{\Delta'}} \wedge \beta^{\Omega,\phi,\overline{\Delta'}} = (\alpha^{\Sigma,\phi} \wedge \beta^{\Omega,\phi})^{\overline{\Delta'}}$. From the base assertions, $(\alpha^{\Sigma,\phi} \wedge \beta^{\Omega,\phi})^{\overline{\Delta'}} = ((\alpha^{\Sigma} \wedge \beta^{\Omega})^{\phi})^{\overline{\Delta'}} = (\alpha^{\Sigma} \wedge \beta^{\Omega})^{\phi,\overline{\Delta'}} = (\alpha^{\Sigma} \wedge \beta^{\Omega})^{\overline{\Delta}}$;\\ \noindent (induction) If $\alpha^{\Sigma} \wedge \beta^{\Omega} \in \boldsymbol{F}_{n}$ and $s(\Delta) = n + 2$, then $\oc\Delta \in \boldsymbol{L}_{w}$ and $\alpha^{\Sigma,\overline{\Delta}} \wedge \beta^{\Omega,\overline{\Delta}}$ may be written as $\alpha^{\Sigma,\phi,\Theta,\overline{\Delta'}} \wedge \beta^{\Omega,\phi,\Theta,\overline{\Delta'}}$, where $s(\Delta') = n$. Then, by the induction hypothesis, $\alpha^{\Sigma,\phi,\Theta,\overline{\Delta'}} \wedge \beta^{\Omega,\phi,\Theta,\overline{\Delta'}} = (\alpha^{\Sigma,\phi,\Theta} \wedge \beta^{\Omega,\phi,\Theta})^{\overline{\Delta'}}$.
From the base assertions, $(\alpha^{\Sigma,\phi,\Theta} \wedge \beta^{\Omega,\phi,\Theta})^{\overline{\Delta'}} = ((\alpha^{\Sigma} \wedge \beta^{\Omega})^{\phi,\Theta})^{\overline{\Delta'}} = (\alpha^{\Sigma} \wedge \beta^{\Omega})^{\phi,\Theta,\overline{\Delta'}} = (\alpha^{\Sigma} \wedge \beta^{\Omega})^{\overline{\Delta}}$.\end{proof} \begin{lemma} \label{lemmaRa} Given $\Delta$ without existential quantifiers, if $(\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\overline{\Delta}}$ is wff, then it implies $\alpha^{\Sigma,\overline{\Delta}} \rightarrow \beta^{\Omega,\overline{\Delta}}$. \end{lemma} \begin{proof} We proceed by induction on the size of $\Delta$:\\ \noindent If $\Delta$ is empty, then the implication is true;\\ \noindent (base) If $\Delta$ contains only one label, it must be a neighbourhood label: \begin{itemize} \item[-] $(\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\circledast}$ means, by definition, that $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$. Then we know that $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \not\models \alpha^{\Sigma} \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \beta^{\Omega}$. So, if we have $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \alpha^{\Sigma}$, we must have $\forall N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \beta^{\Omega}$.
In other words, $\alpha^{\Sigma,\circledast} \rightarrow \beta^{\Omega,\circledast}$; \item[-] $(\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{N}$ means, by definition, that $\sigma(N) \in \$(\xi)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$. Then we know that $\sigma(N) \in \$(\xi)$ and ($\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \not\models \alpha^{\Sigma} \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \beta^{\Omega}$). So, if we have $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \alpha^{\Sigma}$, we must have $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , \sigma(N) \rangle \models \beta^{\Omega}$. In other words, $\alpha^{\Sigma,N} \rightarrow \beta^{\Omega,N}$. \end{itemize} \noindent (base) If $\Delta$ contains two labels, it may be $\{\circledast,\ast\}$, $\{N,\ast\}$, $\{\circledast,u\}$ or $\{N,u\}$. But we just need to look at the distributivity for the $\ast$ label and for world variables, because we have already seen the distributivity of the $\rightarrow$ connective for the label $\circledast$ and for any neighbourhood variable. \begin{itemize} \item[-] $(\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\ast,\circledast}$ means, by definition, that $\forall N \in \$(\xi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$.
Then we know that $\forall N \in \$(\xi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \not\models \alpha^{\Sigma} \mbox{ or } \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \beta^{\Omega}$. So, if we have $\forall N \in \$(\xi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$, we must have $\forall N \in \$(\xi) : \forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \beta^{\Omega}$. In other words, $\alpha^{\Sigma,\ast,\circledast} \rightarrow \beta^{\Omega,\ast,\circledast}$; \item[-] The proofs of $(\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\ast,N}$, $(\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{u,\circledast}$ and $(\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{u,N}$ are analogous. \end{itemize} \noindent (induction) If $\alpha^{\Sigma} \rightarrow \beta^{\Omega} \in \boldsymbol{F}_{w}$ and $s(\Delta) = n + 1$, then $\oc\Delta \in \boldsymbol{L}_{n}$ and $\alpha^{\Sigma,\overline{\Delta}} \rightarrow \beta^{\Omega,\overline{\Delta}}$ may be written as $\alpha^{\Sigma,\phi,\overline{\Delta'}} \rightarrow \beta^{\Omega,\phi,\overline{\Delta'}}$, where $s(\Delta') = n$. Then, by the induction hypothesis, $\alpha^{\Sigma,\phi,\overline{\Delta'}} \rightarrow \beta^{\Omega,\phi,\overline{\Delta'}} = (\alpha^{\Sigma,\phi} \rightarrow \beta^{\Omega,\phi})^{\overline{\Delta'}}$.
From the base assertions, $(\alpha^{\Sigma,\phi} \rightarrow \beta^{\Omega,\phi})^{\overline{\Delta'}} = ((\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\phi})^{\overline{\Delta'}} = (\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\phi,\overline{\Delta'}} = (\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\overline{\Delta}}$;\\ \noindent (induction) If $\alpha^{\Sigma} \rightarrow \beta^{\Omega} \in \boldsymbol{F}_{n}$ and $s(\Delta) = n + 2$, then $\oc\Delta \in \boldsymbol{L}_{w}$ and $\alpha^{\Sigma,\overline{\Delta}} \rightarrow \beta^{\Omega,\overline{\Delta}}$ may be written as $\alpha^{\Sigma,\phi,\Theta,\overline{\Delta'}} \rightarrow \beta^{\Omega,\phi,\Theta,\overline{\Delta'}}$, where $s(\Delta') = n$. Then, by the induction hypothesis, $\alpha^{\Sigma,\phi,\Theta,\overline{\Delta'}} \rightarrow \beta^{\Omega,\phi,\Theta,\overline{\Delta'}} = (\alpha^{\Sigma,\phi,\Theta} \rightarrow \beta^{\Omega,\phi,\Theta})^{\overline{\Delta'}}$. From the base assertions, $(\alpha^{\Sigma,\phi,\Theta} \rightarrow \beta^{\Omega,\phi,\Theta})^{\overline{\Delta'}} = ((\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\phi,\Theta})^{\overline{\Delta'}} = (\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\phi,\Theta,\overline{\Delta'}} = (\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\overline{\Delta}}$.\end{proof} Now we prove one of the main lemmas, which shows that the resolution of the conclusion follows from the resolution of the hypothesis. We express this property by saying that PUC-ND preserves resolution. \begin{lemma} \label{resolution} PUC-ND without the rules $5, 7, 11, 18, 20, 27, 28$ and $29$ preserves resolution.
\end{lemma} \begin{proof} Consider $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle$. \begin{enumerate} \item If $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \wedge \beta^{\Omega}$, then $\mathcal{M} \models (\alpha^{\Sigma} \wedge \beta^{\Omega})^{\overline{\Delta}}$, and, by lemma \ref{lemmaWedge}, $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}} \wedge \beta^{\Omega,\overline{\Delta}}$, which means, by definition, $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}}$ and $\mathcal{M} \models \beta^{\Omega,\overline{\Delta}}$. So, we have $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$; \item Follow the same argument as for rule 1; \item If $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$ and $\mathcal{M} \models^{\Delta} \beta^{\Omega}$, then $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}}$ and $\mathcal{M} \models \beta^{\Omega,\overline{\Delta}}$, then, by definition, $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}} \wedge \beta^{\Omega,\overline{\Delta}}$, then, by lemma \ref{lemmaWedge}, $\mathcal{M} \models (\alpha^{\Sigma} \wedge \beta^{\Omega})^{\overline{\Delta}}$, then, by definition, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \wedge \beta^{\Omega}$; \item If $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$, then $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}}$, and, by definition, $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}} \vee \beta^{\Omega,\overline{\Delta}}$, then, by lemma \ref{lemmaVee}, $\mathcal{M} \models (\alpha^{\Sigma} \vee \beta^{\Omega})^{\overline{\Delta}}$, and, by definition, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \vee \beta^{\Omega}$; \setcounter{enumi}{5} \item Follow the same argument as for rule 4; \setcounter{enumi}{7} \item By definition, there is no template $\mathcal{T}$, such that $\mathcal{T} \models \bot_{w}$. So, by definition, for every $\alpha^{\Sigma} \in \boldsymbol{F}_{w}$, $\bot_{w} \models \alpha^{\Sigma}$ and, by lemma \ref{lemmaDirectConsequence}, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$. The same argument holds for $\bot_{n}$ considering formulas in $\boldsymbol{F}_{n}$; \item If $\Delta = \{\circledcirc\}$, then $\mathcal{M} \models^{\Delta} \bot_{w}$ means $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{w}^{\circledcirc}$. This means that $\exists N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{w}$, but, by definition, $\nexists N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{w}$, so $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \neg (\bot_{w}^{\circledcirc})$. Then, by rule 3, $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{n}$ and, by definition, $\mathcal{M} \models \bot_{n}$. The case $\Delta = \{N\}$ is similar. If $\Delta = \{\circledcirc,\bullet\}$, then $\mathcal{M} \models^{\Delta} \bot_{n}$ means $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{n}^{\bullet,\circledcirc}$.
But this means that $\exists N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{n}^{\bullet}$ and $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \bot_{n}$. But, by definition, $\nexists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \bot_{n}$, so $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \neg (\bot_{n}^{\bullet})$. Using rule 3, we conclude that $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{w}$ and, by a previous case, $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{n}$. The other cases where $s(\Delta) = 2$ are similar. If $\Delta = \{\circledcirc,\bullet,\circledcirc\}$, then $\mathcal{M} \models^{\Delta} \bot_{w}$ means $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{w}^{\circledcirc,\bullet,\circledcirc}$. But this means that $\exists N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{w}^{\circledcirc,\bullet}$ and $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \bot_{w}^{\circledcirc}$. But, by a previous case, it means that $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \bot_{n}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{n}^{\bullet}$.
But, by definition, $\nexists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \bot_{n}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \neg(\bot_{n}^{\bullet})$. So, using rule 3, $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{w}$. Then $\exists N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{w}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{w}^{\circledcirc}$. By a previous case, we conclude that $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{n}$. The other cases where $s(\Delta) = 3$ are similar. If $\Delta = \{\circledcirc,\bullet,\circledcirc,\bullet\}$, then $\mathcal{M} \models^{\Delta} \bot_{n}$ means $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{n}^{\bullet,\circledcirc,\bullet,\circledcirc}$. But this means that $\exists N \in \$(\xi) : \langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{n}^{\bullet,\circledcirc,\bullet}$ and $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \bot_{n}^{\bullet,\circledcirc}$ and, by the above arguments, $\langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \bot_{n}$.
But, by definition, $\nexists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \bot_{n}$, so $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \neg (\bot_{n}^{\bullet,\circledcirc,\bullet})$, because $\bot_{n}^{\bullet,\circledcirc}$ implies $\bot_{n}$. Using rule 3, we conclude that $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle \models \bot_{w}$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle \models \bot_{n}$ by a previous argument. The other cases are similar and the general case is treated by induction on the size of $\Delta$ following the previous arguments; \item If $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$, then $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$; \setcounter{enumi}{11} \item If $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$, then $\mathcal{M} \models (\alpha^{\Sigma} \rightarrow \beta^{\Omega})^{\overline{\Delta}}$, then, by lemma \ref{lemmaRa}, $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}} \rightarrow \beta^{\Omega,\overline{\Delta}}$. Then, by definition, $\mathcal{M} \models \neg (\alpha^{\Sigma,\overline{\Delta}})$ or $\mathcal{M} \models \beta^{\Omega,\overline{\Delta}}$. But we know from $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$ that $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}}$. So, we can conclude $\mathcal{M} \models^{\Delta} \beta^{\Omega}$; \item If $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\phi}$, then $\mathcal{M} \models \alpha^{\Sigma,\phi,\overline{\Delta}}$.
But, $\{\phi,\overline{\Delta}\} \equiv \overline{\{\Delta,\phi\}}$, then, by definition, $\mathcal{M} \models^{\Delta,\phi} \alpha^{\Sigma}$; \item If $\mathcal{M} \models^{\Delta,\phi} \alpha^{\Sigma}$, then $\mathcal{M} \models \alpha^{\Sigma,\overline{\{\Delta,\phi\}}}$. But, $\overline{\{\Delta,\phi\}} \equiv \{\phi,\overline{\Delta}\}$, and, by definition, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\phi}$; \item If $\mathcal{M} \models^{\Delta,u} \alpha^{\Sigma}$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,u}$. By the fact that $\alpha^{\Sigma,u} \in \boldsymbol{F}_{w}$, the fitting relation and lemma \ref{countingLemma}, we know that $s(\Delta)$ is odd. If we take some template $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , z , N \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{T}$ and $\mathcal{T} \models \alpha^{\Sigma,u}$, we can conclude that $N \in \$(z)$, $\sigma(u) \in N$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \sigma(u) \rangle \models \alpha^{\Sigma}$. The restrictions of the rule assure us that the variable $u$ is arbitrary and we may conclude that $\forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$.
So, $\mathcal{T} \models \alpha^{\Sigma,\ast}$ and, by definition, $\alpha^{\Sigma,u} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,\ast}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\ast}$ and, by rule 13, $\mathcal{M} \models^{\Delta,\ast} \alpha^{\Sigma}$; \item If $\mathcal{M} \models^{\Delta,\ast} \alpha^{\Sigma}$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\ast}$. By the fact that $\alpha^{\Sigma,\ast} \in \boldsymbol{F}_{w}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is odd. If we take some template $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , z , N \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{T}$ and $\mathcal{T} \models \alpha^{\Sigma,\ast}$, then $N \in \$(z)$ and $\forall w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$. If we take a variable $u$ to denote a world of $N$ obeying the restrictions of the rule, then we may conclude that $\sigma(u) \in N$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \sigma(u) \rangle \models \alpha^{\Sigma}$. So, $\mathcal{T} \models \alpha^{\Sigma,u}$ and, by definition, $\alpha^{\Sigma,\ast} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,u}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,u}$ and, by rule 13, $\mathcal{M} \models^{\Delta,u} \alpha^{\Sigma}$; \item If $\mathcal{M} \models^{\Delta,u} \alpha^{\Sigma}$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,u}$.
By the fact that $\alpha^{\Sigma,u} \in \boldsymbol{F}_{w}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is odd. If we take some template $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , z , N \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{T}$ and $\mathcal{T} \models \alpha^{\Sigma,u}$, then $N \in \$(z)$, $\sigma(u) \in N$ and $\langle \mathcal{W} , \$ , \mathcal{V} , \sigma(u) \rangle \models \alpha^{\Sigma}$. Since we denote some world with the variable $u$, we know that there is some world in $N$ at which the formula $\alpha^{\Sigma}$ holds. Then we may conclude that $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$. So, $\mathcal{T} \models \alpha^{\Sigma,\bullet}$ and, by definition, $\alpha^{\Sigma,u} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,\bullet}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\bullet}$ and, by rule 13, $\mathcal{M} \models^{\Delta,\bullet} \alpha^{\Sigma}$; \setcounter{enumi}{18} \item If $\mathcal{M} \models^{\Delta,N} \alpha^{\Sigma}$ and $\mathcal{M} \models^{\Delta,\circledcirc} \beta^{\Omega}$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,N}$ and $\mathcal{M} \models^{\Delta} \beta^{\Omega,\circledcirc}$ and, by rule 3, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,N} \wedge \beta^{\Omega,\circledcirc}$. By the fact that $\alpha^{\Sigma,N} \wedge \beta^{\Omega,\circledcirc} \in \boldsymbol{F}_{n}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even.
If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models \alpha^{\Sigma,N} \wedge \beta^{\Omega,\circledcirc}$, then from $\beta^{\Omega,\circledcirc}$ we know that $\$(z) \neq \emptyset$, $\sigma(N) \in \$(z)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , z , \sigma(N) \rangle \models \alpha^{\Sigma}$. Since we denote some neighbourhood with the variable $N$, we know that there is some neighbourhood in $\$(z)$ at which the formula $\alpha^{\Sigma}$ holds. Then $\exists M \in \$(z) : \langle \mathcal{W} , \$ , \mathcal{V} , z , M \rangle \models \alpha^{\Sigma}$ and $\mathcal{H} \models \alpha^{\Sigma,\circledcirc}$. So, by definition, $\alpha^{\Sigma,N} \wedge \beta^{\Omega,\circledcirc} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,\circledcirc}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\circledcirc}$ and, by rule 13, $\mathcal{M} \models^{\Delta,\circledcirc} \alpha^{\Sigma}$; \setcounter{enumi}{20} \item If $\mathcal{M} \models^{\Delta,N} \alpha^{\Sigma}$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,N}$. By the fact that $\alpha^{\Sigma,N} \in \boldsymbol{F}_{n}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models \alpha^{\Sigma,N}$, then $\sigma(N) \in \$(z)$ and $\langle \mathcal{W} , \$ , \mathcal{V} , z , \sigma(N) \rangle \models \alpha^{\Sigma}$.
From the restrictions of the rule, we know that $N$ is arbitrary, so $\forall M \in \$(z) : \langle \mathcal{W} , \$ , \mathcal{V} , z , M \rangle \models \alpha^{\Sigma}$, which means that $\mathcal{H} \models \alpha^{\Sigma,\circledast}$. So, by definition, $\alpha^{\Sigma,N} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,\circledast}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\circledast}$ and, by rule 13, $\mathcal{M} \models^{\Delta,\circledast} \alpha^{\Sigma}$; \item If $\mathcal{M} \models^{\Delta,\circledast} \alpha^{\Sigma}$ and $\mathcal{M} \models^{\Delta,N} \beta^{\Omega}$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\circledast}$ and $\mathcal{M} \models^{\Delta} \beta^{\Omega,N}$. So, by rule 3, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\circledast} \wedge \beta^{\Omega,N}$. By the fact that $\alpha^{\Sigma,\circledast} \wedge \beta^{\Omega,N} \in \boldsymbol{F}_{n}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models \alpha^{\Sigma,\circledast} \wedge \beta^{\Omega,N}$, then $\mathcal{H} \models \alpha^{\Sigma,\circledast}$ and $\mathcal{H} \models \beta^{\Omega,N}$. By definition, $\sigma(N) \in \$(z)$, $\langle \mathcal{W} , \$ , \mathcal{V} , z , \sigma(N) \rangle \models \beta^{\Omega}$ and $\forall M \in \$(z) : \langle \mathcal{W} , \$ , \mathcal{V} , z , M \rangle \models \alpha^{\Sigma}$.
So, $\sigma(N) \in \$(z)$ and, by the universal quantification, $\langle \mathcal{W} , \$ , \mathcal{V} , z , \sigma(N) \rangle \models \alpha^{\Sigma}$. This means that $\mathcal{H} \models \alpha^{\Sigma,N}$ and, by definition, $\alpha^{\Sigma,\circledast} \wedge \beta^{\Omega,N} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,N}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,N}$ and, by rule 13, $\mathcal{M} \models^{\Delta,N} \alpha^{\Sigma}$; \item If $\mathcal{M} \models^{\Delta,N} \alpha^{\Sigma,\bullet}$ and $\mathcal{M} \models^{\Delta,M} \shneg N$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\bullet,N}$ and $\mathcal{M} \models^{\Delta} (\shneg N)^{M}$. By rule 3, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\bullet,N} \wedge (\shneg N)^{M}$. By the fact that $\alpha^{\Sigma,\bullet,N} \wedge (\shneg N)^{M} \in \boldsymbol{F}_{n}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models \alpha^{\Sigma,\bullet,N} \wedge (\shneg N)^{M}$, then $\sigma(N) \in \$(z)$ and $\exists w \in \sigma(N) : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$. From $(\shneg N)^{M}$, we know that $\sigma(M) \in \$(z)$ and $\sigma(N) \subseteq \sigma(M)$, so $\exists w \in \sigma(M) : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$.
We conclude that $\mathcal{H} \models \alpha^{\Sigma,\bullet,M}$ and, by definition, $\alpha^{\Sigma,\bullet,N} \wedge (\shneg N)^{M} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,\bullet,M}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\bullet,M}$ and, by rule 13, $\mathcal{M} \models^{\Delta,M} \alpha^{\Sigma,\bullet}$; \item If $\mathcal{M} \models^{\Delta,N} \alpha^{\Sigma,\ast}$ and $\mathcal{M} \models^{\Delta,M} \shpos N$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\ast,N}$ and $\mathcal{M} \models^{\Delta} (\shpos N)^{M}$. By rule 3, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\ast,N} \wedge (\shpos N)^{M}$. By the fact that $\alpha^{\Sigma,\ast,N} \wedge (\shpos N)^{M} \in \boldsymbol{F}_{n}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models \alpha^{\Sigma,\ast,N} \wedge (\shpos N)^{M}$, then $\sigma(N) \in \$(z)$ and $\forall w \in \sigma(N) : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$. From $(\shpos N)^{M}$, we know that $\sigma(M) \in \$(z)$ and $\sigma(M) \subseteq \sigma(N)$, so $\forall w \in \sigma(M) : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$.
We conclude that $\mathcal{H} \models \alpha^{\Sigma,\ast,M}$ and, by definition, $\alpha^{\Sigma,\ast,N} \wedge (\shpos N)^{M} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,\ast,M}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\ast,M}$ and, by rule 13, $\mathcal{M} \models^{\Delta,M} \alpha^{\Sigma,\ast}$; \item If $\mathcal{M} \models^{\Delta,N} \shneg M$ and $\mathcal{M} \models^{\Delta,M} \shneg P$, then, by rule 14, $\mathcal{M} \models^{\Delta} (\shneg M)^{N}$ and $\mathcal{M} \models^{\Delta} (\shneg P)^{M}$. By rule 3, $\mathcal{M} \models^{\Delta} (\shneg M)^{N} \wedge (\shneg P)^{M}$. By the fact that $(\shneg M)^{N} \wedge (\shneg P)^{M} \in \boldsymbol{F}_{n}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models (\shneg M)^{N} \wedge (\shneg P)^{M}$, then $\sigma(N) \in \$(z)$ and $\sigma(M) \subseteq \sigma(N)$. From $(\shneg P)^{M}$, we know that $\sigma(M) \in \$(z)$ and $\sigma(P) \subseteq \sigma(M)$, so $\sigma(P) \subseteq \sigma(N)$. We conclude that $\mathcal{H} \models (\shneg P)^{N}$ and, by definition, $(\shneg M)^{N} \wedge (\shneg P)^{M} \models_{\mathcal{M}:s(\Delta)} (\shneg P)^{N}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} (\shneg P)^{N}$ and, by rule 13, $\mathcal{M} \models^{\Delta,N} \shneg P$; \item It follows by the same argument as rule 25; \setcounter{enumi}{29} \item According to the satisfaction relation, every model satisfies $\top_{n}$ and every template satisfies $\top_{w}$.
So, given a model $\mathcal{M}$, if $s(\Delta)$ is even, then, for every model $\mathcal{H}$ such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$, $\mathcal{H} \models \top_{n}$ and, by lemma \ref{lemmaConsequence}, $\mathcal{M} \models^{\Delta} \top_{n}$. The argument for odd $s(\Delta)$ is analogous.\end{enumerate}\end{proof} \begin{lemma} \label{withOrWithoutYou} Given a context $\Delta$ with no existential label and a wff $\alpha^{\Sigma}$ that fits into $\Delta$, for any model $\mathcal{M}$, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$. \end{lemma} \begin{proof} We proceed by induction on the size of $\Delta$.\\ \noindent If $\Delta$ is empty, then $\alpha^{\Sigma} \in \boldsymbol{F}_{n}$. $\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$ is a tautology because of the definition of the satisfaction relation: given any model $\mathcal{M}$, if $\mathcal{M} \models \alpha^{\Sigma}$, then $\mathcal{M} \models \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$. If $\mathcal{M} \not\models \alpha^{\Sigma}$, then $\mathcal{M} \models \neg ( \alpha^{\Sigma} )$ and $\mathcal{M} \models \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$.\\ \noindent (base) If $\Delta = \{\circledast\}$, then $\alpha^{\Sigma} \in \boldsymbol{F}_{w}$. $\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$ is a tautology because of the definition of the satisfaction relation: given any template $\mathcal{T}$, if $\mathcal{T} \models \alpha^{\Sigma}$, then $\mathcal{T} \models \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$.
If $\mathcal{T} \not\models \alpha^{\Sigma}$, then $\mathcal{T} \models \neg ( \alpha^{\Sigma} )$ and $\mathcal{T} \models \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$. Given any model $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \xi \rangle$, every template $\langle \mathcal{W} , \$ , \mathcal{V} , \xi , N \rangle$ satisfies $\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$ and, by definition, $\mathcal{M} \models (\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{\circledast}$. So, $\mathcal{M} \models (\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{\overline{\Delta}}$ and, by definition, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$.\\ \noindent (base) If $\Delta = \{N\}$: by the previous case, $\mathcal{M} \models (\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{\circledast}$ and, in particular, $\mathcal{M} \models (\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{N}$, for any neighbourhood variable $N$.\\ \noindent (base) If $\Delta = \{\circledast,\ast\}$, then $\alpha^{\Sigma} \in \boldsymbol{F}_{n}$. $\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$ is a tautology because of the definition of the satisfaction relation: given any model $\mathcal{H}$, if $\mathcal{H} \models \alpha^{\Sigma}$, then $\mathcal{H} \models \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$. If $\mathcal{H} \not\models \alpha^{\Sigma}$, then $\mathcal{H} \models \neg ( \alpha^{\Sigma} )$ and $\mathcal{H} \models \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$.
We apply lemma \ref{taut} to conclude that $(\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{\ast,\circledast}$ is also a tautology. So, for any model $\mathcal{M}$, $\mathcal{M} \models (\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{\ast,\circledast}$ and, by definition, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$.\\ \noindent (base) If $\Delta = \{\circledast,u\}$: by the previous case, $(\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{\ast,\circledast}$ is a tautology. So, in particular, $\mathcal{M} \models (\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{u,\circledast}$ for any world variable $u$ and, by definition, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$.\\ \noindent (base) The cases $\Delta = \{N,\ast\}$ and $\Delta = \{N,u\}$ are analogous to the previous one.\\ \noindent (induction) If $\Delta = \{\phi,\Delta'\}$: by lemma \ref{lemmaVee}, $(\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{\phi,\Delta'} \equiv (\alpha^{\Sigma,\phi} \vee (\neg ( \alpha^{\Sigma} ))^{\phi})^{\Delta'}$. By the induction hypothesis, $\mathcal{M} \models^{\Delta'} \alpha^{\Sigma,\phi} \vee (\neg ( \alpha^{\Sigma} ))^{\phi}$. By lemma \ref{lemmaVee} again, $\mathcal{M} \models^{\Delta'} (\alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} ))^{\phi}$ and, by definition, $\mathcal{M} \models^{\Delta',\phi} \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$.\end{proof} \begin{lemma} \label{completeResolutionPUC-ND} PUC-ND preserves resolution. \end{lemma} \begin{proof} We present the proof for each remaining rule of PUC-ND inside an induction.
Base argument: \begin{enumerate} \setcounter{enumi}{4} \item If $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \vee \beta^{\Omega}$, then $\mathcal{M} \models (\alpha^{\Sigma} \vee \beta^{\Omega})^{\overline{\Delta}}$ and, by lemma \ref{lemmaVee}, $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}} \vee \beta^{\Omega,\overline{\Delta}}$, so, by definition, $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}}$ or $\mathcal{M} \models \beta^{\Omega,\overline{\Delta}}$. This means, by definition, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$ or $\mathcal{M} \models^{\Delta} \beta^{\Omega}$. So, if $\Pi_{1}$ and $\Pi_{2}$ only contain the rules from lemma \ref{resolution}, then $\mathcal{M} \models^{\Theta} \gamma^{\Lambda}$ in both cases, because of the preservation of the resolution relation. For that conclusion, the hypotheses are no longer necessary and may be discharged; \setcounter{enumi}{6} \item We know from classical logic that $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}} \vee \neg (\alpha^{\Sigma,\overline{\Delta}})$, which means that $\mathcal{M} \models \alpha^{\Sigma,\overline{\Delta}}$ or $\mathcal{M} \models \neg (\alpha^{\Sigma,\overline{\Delta}})$. In the first case, we know that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$. In the second case, we know that $\mathcal{M} \models^{\Delta} \neg (\alpha^{\Sigma})$. If the subderivation $\Pi$ only contains the rules from lemma \ref{resolution}, we can conclude that $\mathcal{M} \models^{\Delta} \bot$. But, by rule 7, this means that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$.
So, in either case, we can conclude $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$ and we are able to discharge the hypothesis; \setcounter{enumi}{10} \item From lemma \ref{withOrWithoutYou}, we know that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \vee \neg ( \alpha^{\Sigma} )$, so $\mathcal{M} \models^{\Delta} \alpha^{\Sigma}$ or $\mathcal{M} \models^{\Delta} \neg ( \alpha^{\Sigma} )$. In the first case, if $\Pi$ only contains the rules of lemma \ref{resolution}, then the derivation gives us $\mathcal{M} \models^{\Delta} \beta^{\Omega}$. If $\beta^{\Omega} \in \boldsymbol{F}_{n}$, then, by the fitting relation and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models \beta^{\Omega}$, then, by definition, $\mathcal{H} \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$. So, by definition, $\beta^{\Omega} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$. If $\beta^{\Omega} \in \boldsymbol{F}_{w}$, then, by the fitting relation and lemma \ref{countingLemma}, we know that $s(\Delta)$ is odd. If we take some template $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , z , L \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{T}$ and $\mathcal{T} \models \beta^{\Omega}$, then, by definition, $\mathcal{T} \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$.
So, by definition, $\beta^{\Omega} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$. In the case where $\mathcal{M} \models^{\Delta} \neg (\alpha^{\Sigma})$, if $\neg (\alpha^{\Sigma}) \in \boldsymbol{F}_{n}$, then, by the fitting relation and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models \neg (\alpha^{\Sigma})$, then, by definition, $\mathcal{H} \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$. So, by definition, $\neg (\alpha^{\Sigma}) \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$. If $\neg (\alpha^{\Sigma}) \in \boldsymbol{F}_{w}$, then, by the fitting relation and lemma \ref{countingLemma}, we know that $s(\Delta)$ is odd. If we take some template $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , z , L \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{T}$ and $\mathcal{T} \models \neg (\alpha^{\Sigma})$, then, by definition, $\mathcal{T} \models \alpha^{\Sigma} \rightarrow \beta^{\Omega}$.
So, by definition, $\neg (\alpha^{\Sigma}) \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma} \rightarrow \beta^{\Omega}$. So the hypothesis is unnecessary and may be discharged; \setcounter{enumi}{17} \item If $\mathcal{M} \models^{\Delta,\bullet} \alpha^{\Sigma}$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\bullet}$. By the fact that $\alpha^{\Sigma,\bullet} \in \boldsymbol{F}_{w}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is odd. If we take some template $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , z , N \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{T}$ and $\mathcal{T} \models \alpha^{\Sigma,\bullet}$, then $N \in \$(z)$ and $\exists w \in N : \langle \mathcal{W} , \$ , \mathcal{V} , w \rangle \models \alpha^{\Sigma}$. Since the variable $u$ occurs nowhere else in the derivation, $u$ can be taken as a denotation of the given existential and we conclude that $\langle \mathcal{W} , \$ , \mathcal{V} , \sigma(u) \rangle \models \alpha^{\Sigma}$, which means that $\mathcal{T} \models \alpha^{\Sigma,u}$. So, by definition, $\alpha^{\Sigma,\bullet} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,u}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,u}$. We conclude, using rule 13, that $\mathcal{M} \models^{\Delta,u} \alpha^{\Sigma}$. If $\Pi$ only contains the rules of lemma \ref{resolution}, then we can conclude $\mathcal{M} \models^{\Theta} \beta^{\Omega}$.
Then we can discharge the hypothesis because we know that any denotation of the existential provides the same conclusion; \setcounter{enumi}{19} \item If $\mathcal{M} \models^{\Delta,\circledcirc} \alpha^{\Sigma}$, then, by rule 14, $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,\circledcirc}$. By the fact that $\alpha^{\Sigma,\circledcirc} \in \boldsymbol{F}_{n}$, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$ and $\mathcal{H} \models \alpha^{\Sigma,\circledcirc}$, then $\exists M \in \$(z) : \langle \mathcal{W} , \$ , \mathcal{V} , z , M \rangle \models \alpha^{\Sigma}$. Since the variable $N$ occurs nowhere else in the derivation, $N$ can be taken as a denotation of the given existential and we conclude that $\langle \mathcal{W} , \$ , \mathcal{V} , z , \sigma(N) \rangle \models \alpha^{\Sigma}$, which means that $\mathcal{H} \models \alpha^{\Sigma,N}$. So, by definition, $\alpha^{\Sigma,\circledcirc} \models_{\mathcal{M}:s(\Delta)} \alpha^{\Sigma,N}$, which means, by lemma \ref{lemmaConsequence}, that $\mathcal{M} \models^{\Delta} \alpha^{\Sigma,N}$. We conclude, using rule 13, that $\mathcal{M} \models^{\Delta,N} \alpha^{\Sigma}$. If $\Pi$ only contains the rules of lemma \ref{resolution}, then we can conclude $\mathcal{M} \models^{\Theta} \beta^{\Omega}$. Then we can discharge the hypothesis because we know that any denotation of the existential provides the same conclusion; \setcounter{enumi}{26} \item From rule 14, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even.
If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$, we know that the neighbourhoods of $\$(z)$ are totally ordered by the inclusion relation. Given any two neighbourhood variables $M$ and $N$, we know that $\sigma(M) \in \$(z)$, $\sigma(N) \in \$(z)$ and either $\sigma(M) \subseteq \sigma(N)$ or $\sigma(N) \subseteq \sigma(M)$. This can be expressed by $\mathcal{H} \models (\shneg N)^{M} \vee (\shneg M)^{N}$. By definition, $\mathcal{H} \models (\shneg N)^{M}$ or $\mathcal{H} \models (\shneg M)^{N}$; then, by definition, $\mathcal{M} \models^{\Delta} (\shneg N)^{M}$ or $\mathcal{M} \models^{\Delta} (\shneg M)^{N}$ and, using rule 13, $\mathcal{M} \models^{\Delta,M} \shneg N$ or $\mathcal{M} \models^{\Delta,N} \shneg M$. If the subderivations $\Pi_{1}$ and $\Pi_{2}$ only contain the rules of lemma \ref{resolution}, then $\mathcal{M} \models^{\Theta} \alpha^{\Sigma}$ and the hypotheses may be discharged. \item This follows by the same argument as rule 28. \item From rule 14, the fitting relation, and lemma \ref{countingLemma}, we know that $s(\Delta)$ is even. If we take some model $\mathcal{H} = \langle \mathcal{W} , \$ , \mathcal{V} , z \rangle$, such that $\mathcal{M} \multimap_{s(\Delta)} \mathcal{H}$, we know that the neighbourhoods of $\$(z)$ are totally ordered by the inclusion relation. Given a neighbourhood variable $M$, we know that, for every neighbourhood variable $N$, either $\sigma(M) \subseteq \sigma(N)$ or $\sigma(N) \subseteq \sigma(M)$. This can be expressed by $\mathcal{H} \models (\shneg N)^{M} \vee (\shpos N)^{M}$.
By definition, $\mathcal{H} \models (\shneg N)^{M}$ or $\mathcal{H} \models (\shpos N)^{M}$; then, by definition, $\mathcal{M} \models^{\Delta} (\shneg N)^{M}$ or $\mathcal{M} \models^{\Delta} (\shpos N)^{M}$ and, using rule 13, $\mathcal{M} \models^{\Delta,M} \shneg N$ or $\mathcal{M} \models^{\Delta,M} \shpos N$. If the subderivations $\Pi_{1}$ and $\Pi_{2}$ only contain the rules of lemma \ref{resolution}, then $\mathcal{M} \models^{\Theta} \alpha^{\Sigma}$ and the hypotheses may be discharged. \end{enumerate} \noindent Inductive case: for every rule, we supposed that the subderivations ($\Pi$) were composed only of rules from lemma \ref{resolution}. If a derivation may contain all the rules of PUC-ND, then there must be an application of the rules of the present lemma whose subderivations contain only the rules of lemma \ref{resolution}, because the derivation is finite and the subderivations have a positive number of rule applications. Those cases are covered by the base argument and, for that reason, they preserve the resolution relation. The next step is to consider all applications of the rules of the present lemma that may have one application of the rules $5, 7, 11, 19, 20, 28, 29$ or $30$. Then, step by step, we cover all possible nested applications of the rules of the present lemma.\end{proof} \begin{definition} \label{derivability} Given the formulas $\alpha^{\Sigma}$ and $\beta^{\Omega}$, the relation $\alpha^{\Sigma} \vdash^{\Delta}_{\Theta} \beta^{\Omega}$ of \textit{derivability} holds iff there is a derivation that concludes $\beta^{\Omega}$ in the context $\Theta$ and that may only have $\alpha^{\Sigma}$ in the context $\Delta$ as an open hypothesis.
If $\Gamma \subseteq \boldsymbol{F}_{n}$ or $\Gamma \subseteq \boldsymbol{F}_{w}$, the relation $\Gamma \vdash^{\Delta}_{\Theta} \alpha^{\Sigma}$ of derivability holds iff there is a derivation that concludes $\alpha^{\Sigma}$ in the context $\Theta$ and that only has as open hypotheses formulas of $\Gamma$ in the context $\Delta$. \end{definition} \begin{definition} $\alpha^{\Sigma}$ is a \textit{theorem} iff $\vdash \alpha^{\Sigma}$. \end{definition} \begin{theorem} \label{soundness} $\Gamma \vdash \alpha^{\Sigma}$ implies $\Gamma \models \alpha^{\Sigma}$ (soundness). \end{theorem} \begin{proof} The fitting restriction of the rules of PUC-ND ensures that $\alpha^{\Sigma} \in \boldsymbol{F}_{n}$, because it appears in the empty context. The same conclusion follows for every formula of $\Gamma$. Derivability ensures that there is a derivation that concludes $\alpha^{\Sigma}$ and takes as open hypotheses a subset of $\Gamma$, which we call $\Gamma'$. If we take a model $\mathcal{M}$ that satisfies every formula of $\Gamma$, then it also satisfies every formula of $\Gamma'$. So, $\mathcal{M} \models \gamma^{\Theta}$, for every $\gamma^{\Theta} \in \Gamma'$. But this means, by definition, that, for every wff of $\Gamma'$, the resolution relation holds with the empty context. Then, from lemma \ref{completeResolutionPUC-ND}, we know that $\mathcal{M} \models \alpha^{\Sigma}$. So, every model that satisfies every formula of $\Gamma$ also satisfies $\alpha^{\Sigma}$ and, by definition, $\Gamma \models \alpha^{\Sigma}$.\end{proof} In order to prove the converse implication, we use maximal consistent sets to prove completeness for the fragment $\{\wedge, \rightarrow, \bullet, \circledcirc, \circledast\}$ of the language.
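As a simple illustration of Definition \ref{derivability} (a sketch, assuming the usual natural deduction convention that a hypothesis by itself counts as a one-node derivation), the relation is reflexive in the following sense: for any wff $\alpha^{\Sigma}$ that fits a context $\Delta$,
\[
\alpha^{\Sigma} \vdash^{\Delta}_{\Delta} \alpha^{\Sigma},
\]
since the derivation consisting solely of the hypothesis $\alpha^{\Sigma}$ in the context $\Delta$ concludes $\alpha^{\Sigma}$ in that same context and leaves no other hypothesis open. In particular, with $\Gamma = \emptyset$, theorem \ref{soundness} specialises to the statement that every theorem is valid: $\vdash \alpha^{\Sigma}$ implies $\models \alpha^{\Sigma}$.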
The label $\circledcirc$ is not definable from $\circledast$ and vice versa, because the chosen logic for neighbourhoods is a free logic \cite{Lambert}. The reader can see the classical propositional case of this way of proving completeness in [17]. But for the completeness proof we must restrict the formulas to \textit{sentences}, due to occurrences of variables. \begin{definition} Given $\alpha^{\Sigma} \in \boldsymbol{F}_{n}$, if $\alpha^{\Sigma}$ has no variables in the attributes of its subformulas nor any subformula of the shape $\shneg N$ or $\shpos N$, then $\alpha^{\Sigma} \in \boldsymbol{S}_{n}$. By analogy, we can construct $\boldsymbol{S}_{w}$ from $\boldsymbol{F}_{w}$. \end{definition} \begin{definition} Given $\Gamma \subset \boldsymbol{S}_{n}$ ($\Gamma \subset \boldsymbol{S}_{w}$), we say that $\Gamma$ is \textit{n-inconsistent} (\textit{w-inconsistent}) if $\Gamma \vdash \bot_{n}$ ($\Gamma \vdash^{N}_{N} \bot_{w}$, where $N$ is a neighbourhood variable that does not occur in $\Gamma$) and \textit{n-consistent} (\textit{w-consistent}) if $\Gamma \not\vdash \bot_{n}$ ($\Gamma \not\vdash^{N}_{N} \bot_{w}$). \end{definition} \begin{lemma} Given $\Gamma \subset \boldsymbol{S}_{n}$ ($\Gamma \subset \boldsymbol{S}_{w}$), the following three conditions are equivalent: \begin{enumerate}[1.]
\item $\Gamma$ is n-inconsistent; \item $\Gamma \vdash \phi^{\Theta}$, for any formula $\phi^{\Theta}$ that fits into the empty context; \item There is at least one formula $\phi^{\Theta}$ such that $\Gamma \vdash \phi^{\Theta}$ and $\Gamma \vdash \phi^{\Theta} \rightarrow \bot_{n}$. \end{enumerate} \end{lemma} \begin{proof} $1 \Rightarrow 2)$ If $\Gamma \vdash \bot_{n}$, then there is a derivation $\mathcal{D}$ with conclusion $\bot_{n}$ and hypotheses in $\Gamma$. To $\mathcal{D}$ we can add one inference using rule 8 of PUC-ND to conclude any formula that fits into the empty context. $2 \Rightarrow 3)$ Trivial; $3 \Rightarrow 1)$ If $\Gamma \vdash \phi^{\Theta}$ and $\Gamma \vdash \phi^{\Theta} \rightarrow \bot_{n}$, then there is a derivation for each formula with hypotheses in $\Gamma$. Combining the derivations, we conclude $\bot_{n}$ using rule 12 of PUC-ND. There is no problem with existential quantifiers in the context, because we conclude the formulas in the empty context. So $\Gamma \vdash \bot_{n}$. The same holds for $\Gamma \subset \boldsymbol{S}_{w}$.\end{proof} \begin{lemma} \label{modelForSet} Given $\Gamma \subset \boldsymbol{S}_{n}$ ($\Gamma \subset \boldsymbol{S}_{w}$), if there is a model (template) that satisfies every formula of $\Gamma$, then $\Gamma$ is n-consistent (w-consistent). \end{lemma} \begin{proof} If $\Gamma \vdash \bot_{n}$, then, by theorem \ref{soundness}, $\Gamma \models \bot_{n}$. If there is a model that satisfies every formula of $\Gamma$, then it also satisfies $\bot_{n}$, by the definition of logical consequence. But there is no model that satisfies $\bot_{n}$, because of the definition of the truth evaluation function.
The same holds for $\Gamma \subset \boldsymbol{S}_{w}$.\end{proof} \begin{lemma} \label{consistency} Given $\Gamma \subset \boldsymbol{S}_{n}$: 1. If $\Gamma \cup \{\phi^{\Theta} \rightarrow \bot_{n}\} \vdash \bot_{n}$, then $\Gamma \vdash \phi^{\Theta}$; 2. If $\Gamma \cup \{\phi^{\Theta}\} \vdash \bot_{n}$, then $\Gamma \vdash \phi^{\Theta} \rightarrow \bot_{n}$. Likewise for $\Gamma \subset \boldsymbol{S}_{w}$. \end{lemma} \begin{proof} The first (second) assumption implies that there is a derivation $\mathcal{D}$ ($\mathcal{D}'$) with hypotheses in $\Gamma \cup \{\phi^{\Theta} \rightarrow \bot_{n}\}$ ($\Gamma \cup \{\phi^{\Theta}\}$) and conclusion $\bot_{n}$. Since $\neg (\phi^{\Theta}) \equiv \phi^{\Theta} \rightarrow \bot_{n}$, we can apply the rule $\bot$-classical ($\rightarrow$-introduction) and eliminate all occurrences of $\phi^{\Theta} \rightarrow \bot_{n}$ ($\phi^{\Theta}$) as hypotheses; we then obtain a derivation with hypotheses in $\Gamma$ and conclusion $\phi^{\Theta}$ ($\phi^{\Theta} \rightarrow \bot_{n}$). The same argument holds for $\Gamma \subset \boldsymbol{S}_{w}$.\end{proof} \begin{lemma} \label{counting} $\boldsymbol{S}_{n}$ and $\boldsymbol{S}_{w}$ are denumerable. \end{lemma} \begin{proof} Every $\alpha^{\Sigma} \in \boldsymbol{S}_{n}$ contains a finite number of proposition symbols and logical operators. So any lexical order provides a bijection from $\boldsymbol{S}_{n}$ to the natural numbers.
The same argument works for $\boldsymbol{S}_{w}$.\end{proof} \begin{definition} $\Gamma \subset \boldsymbol{S}_{n}$ ($\Gamma \subset \boldsymbol{S}_{w}$) is \textit{maximally n-consistent} (\textit{maximally w-consistent}) iff $\Gamma$ is n-consistent (w-consistent) and is not a proper subset of any other n-consistent (w-consistent) set. \end{definition} \begin{lemma} \label{subMax} Every n-consistent (w-consistent) set is a subset of a maximally n-consistent (w-consistent) set. \end{lemma} \begin{proof} According to lemma \ref{counting}, we may have a list $\varphi_{0}, \varphi_{1}, \ldots$ of all wffs of $\boldsymbol{S}_{n}$. We build a non-decreasing sequence of sets $\Gamma_{i}$ such that the union is maximally n-consistent.\\ \noindent $\Gamma_{0} = \Gamma$;\\ \noindent $\Gamma_{k+1} = \Gamma_{k} \cup \{\varphi_{k}\}$ if n-consistent, $\Gamma_{k}$ otherwise;\\ \noindent $\hat{\Gamma} = \bigcup\{\Gamma_{k} \; | \; k \geq 0 \}$.\\ (a) $\Gamma_{k}$ is n-consistent for all $k$: by induction; (b) $\hat{\Gamma}$ is n-consistent: suppose that $\hat{\Gamma} \vdash \bot_{n}$; then every derivation $\mathcal{D}$ of $\bot_{n}$ with hypotheses in $\hat{\Gamma}$ has a finite set of hypotheses. By definition, every wff is included in $\hat{\Gamma}$ via a set $\Gamma_{k}$. Then, because the sequence of construction of $\hat{\Gamma}$ is non-decreasing, there is a number $m$ such that $\Gamma_{m}$ contains all hypotheses of $\mathcal{D}$. But $\Gamma_{m}$ is n-consistent and, therefore, cannot derive $\bot_{n}$. The same holds for w-consistent sets.\end{proof} \begin{lemma} \label{closedDeriv} If $\Gamma$ is a maximally n-consistent (w-consistent) set, then $\Gamma$ is closed under derivability.
\end{lemma} \begin{proof} Suppose that $\Gamma \vdash \varphi^{\Theta}$ and $\varphi^{\Theta} \not\in \Gamma$. Then $\Gamma \cup \{\varphi^{\Theta}\}$ must be n-inconsistent, by the definition of a maximally n-consistent set. By lemma \ref{consistency}, $\Gamma \vdash \varphi^{\Theta} \rightarrow \bot_{n}$, so $\Gamma$ is n-inconsistent. The same argument holds for w-consistent sets.\end{proof} \begin{lemma} \label{dual} If $\Gamma$ is maximally n-consistent (w-consistent), then: \begin{itemize} \item[(a)] For all $\varphi^{\Theta} \in \boldsymbol{S}_{n}$ ($\in \boldsymbol{S}_{w}$), either $\varphi^{\Theta} \in \Gamma$ or $\varphi^{\Theta} \rightarrow \bot_{n} \in \Gamma$ ($\varphi^{\Theta} \rightarrow \bot_{w} \in \Gamma$); \item[(b)] For all $\varphi^{\Theta},\psi^{\Upsilon} \in \boldsymbol{S}_{n}$ ($\in \boldsymbol{S}_{w}$), $\varphi^{\Theta} \rightarrow \psi^{\Upsilon} \in \Gamma$ iff $\varphi^{\Theta} \in \Gamma$ implies $\psi^{\Upsilon} \in \Gamma$. \end{itemize} \end{lemma} \begin{proof} (a) $\varphi^{\Theta}$ and $\varphi^{\Theta} \rightarrow \bot_{n}$ cannot both belong to $\Gamma$. If $\Gamma \cup \{\varphi^{\Theta}\}$ is n-consistent, then, by the definition of a maximally n-consistent set, $\varphi^{\Theta} \in \Gamma$. If it is n-inconsistent, then, by lemmas \ref{consistency} and \ref{closedDeriv}, $\varphi^{\Theta} \rightarrow \bot_{n} \in \Gamma$. (b) If $\varphi^{\Theta} \rightarrow \psi^{\Upsilon} \in \Gamma$ and $\varphi^{\Theta} \in \Gamma$, then $\Gamma \vdash \psi^{\Upsilon}$ by $\rightarrow$-elimination and, by lemma \ref{closedDeriv}, $\psi^{\Upsilon} \in \Gamma$. Conversely, suppose that $\varphi^{\Theta} \in \Gamma$ implies $\psi^{\Upsilon} \in \Gamma$. If $\varphi^{\Theta} \in \Gamma$, then obviously $\Gamma \vdash \psi^{\Upsilon}$ and $\Gamma \vdash \varphi^{\Theta} \rightarrow \psi^{\Upsilon}$ by $\rightarrow$-introduction.
If $\varphi^{\Theta} \not\in \Gamma$, then, by part (a), $\varphi^{\Theta} \rightarrow \bot_{n} \in \Gamma$. The conclusion $\varphi^{\Theta} \rightarrow \psi^{\Upsilon} \in \Gamma$ comes from a simple derivation with $\varphi^{\Theta}$ as a discharged hypothesis of a $\rightarrow$-introduction that follows an application of the intuitionistic absurd. The same argument holds for w-consistent sets.\end{proof} \begin{corollary} \label{corIff} If $\Gamma$ is maximally n-consistent (w-consistent), then $\varphi^{\Theta} \in \Gamma$ iff $\varphi^{\Theta} \rightarrow \bot_{n} \not\in \Gamma$. \end{corollary} \begin{definition} Given the maximally n-consistent set $\Gamma \subset \boldsymbol{S}_{n}$ and the maximally w-consistent set $\Lambda \subset \boldsymbol{S}_{w}$, we say that $\Gamma$ \textit{accepts} $\Lambda$ ($\Gamma \propto \Lambda$) if $\alpha^{\Sigma} \in \Lambda$ implies $\alpha^{\Sigma,\circledcirc} \in \Gamma$. If $\alpha^{\Sigma} \in \Gamma$ implies $\alpha^{\Sigma,\bullet} \in \Lambda$, then $\Lambda \propto \Gamma$. \end{definition} \begin{definition} Given maximally w-consistent sets $\Gamma$ and $\Lambda$, we say that $\Gamma$ \textit{subordinates} $\Lambda$ ($\Lambda \sqsubset \Gamma$) iff $\alpha^{\Sigma,\bullet} \in \Lambda$ implies $\alpha^{\Sigma,\bullet} \in \Gamma$ and $\alpha^{\Sigma,\ast} \in \Gamma$ implies $\alpha^{\Sigma,\ast} \in \Lambda$. \end{definition} \begin{lemma} \label{consistentModel} If $\Gamma$ is n-consistent, then there is a model $\mathcal{M}$ such that $\mathcal{M} \models \alpha^{\Sigma}$, for every $\alpha^{\Sigma} \in \Gamma$. \end{lemma} \begin{proof} By lemma \ref{subMax}, $\Gamma$ is contained in a maximally n-consistent set $\hat{\Gamma}$.
We consider every maximally n-consistent set $\Psi$ as a representation of one world, denoted by $\chi_{\Psi}$. Every maximally w-consistent set will be seen as a set of worlds that may be a neighbourhood. We take the set of maximally n-consistent sets as $\mathcal{W}$. We take $\propto$ as the nested neighbourhood function $\$$ and $\sqsubset$ as the total order among neighbourhoods. To build the truth evaluation function $\mathcal{V}$, we require, for every maximally n-consistent set $\Psi$ and for every atomic $\alpha$: (a) $\chi_{\Psi} \in \mathcal{V}(\alpha)$ if $\alpha \in \Psi$; (b) $\chi_{\Psi} \not\in \mathcal{V}(\alpha)$ if $\alpha \not\in \Psi$. If we take $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi_{\hat{\Gamma}} \rangle$, then, for every $\alpha^{\Sigma} \in \hat{\Gamma}$, $\mathcal{M} \models \alpha^{\Sigma}$. We proceed by induction on the structure of $\alpha^{\Sigma}$:\\ \noindent (Base) If $\alpha^{\Sigma}$ is atomic, $\mathcal{M} \models \alpha^{\Sigma}$ iff $\alpha^{\Sigma} \in \hat{\Gamma}$, by the definition of $\mathcal{V}$; \begin{itemize} \item $\alpha^{\Sigma} = \beta^{\Omega} \wedge \gamma^{\Theta}$. $\mathcal{M} \models \alpha^{\Sigma}$ iff $\mathcal{M} \models \beta^{\Omega}$ and $\mathcal{M} \models \gamma^{\Theta}$ iff (induction hypothesis) $\beta^{\Omega} \in \hat{\Gamma}$ and $\gamma^{\Theta} \in \hat{\Gamma}$. We conclude that $\alpha^{\Sigma} \in \hat{\Gamma}$ by lemma \ref{closedDeriv}. Conversely, $\alpha^{\Sigma} \in \hat{\Gamma}$ iff $\beta^{\Omega} \in \hat{\Gamma}$ and $\gamma^{\Theta} \in \hat{\Gamma}$ by lemma \ref{closedDeriv}, and the rest follows by the induction hypothesis; \item $\alpha^{\Sigma} = \beta^{\Omega} \rightarrow \gamma^{\Theta}$.
$\mathcal{M} \not\models \alpha^{\Sigma}$ iff $\mathcal{M} \models \beta^{\Omega}$ and $\mathcal{M} \not\models \gamma^{\Theta}$ iff (induction hypothesis) $\beta^{\Omega} \in \hat{\Gamma}$ and $\gamma^{\Theta} \not\in \hat{\Gamma}$ iff $\beta^{\Omega} \rightarrow \gamma^{\Theta} \not\in \hat{\Gamma}$ by lemma \ref{dual}; \item $\alpha^{\Sigma} = \beta^{\Omega,\circledast}$. If there is no maximally w-consistent set $\Upsilon$ such that $\hat{\Gamma} \propto \Upsilon$, then $\$(\chi_{\hat{\Gamma}})$ is empty and, for every $\beta^{\Omega} \in \boldsymbol{F}_{w}$, $\mathcal{M} \models \beta^{\Omega,\circledast}$. This case occurs iff there is no wff of the form $\sigma^{\Phi,\circledcirc}$ in $\hat{\Gamma}$. If there is some maximally w-consistent set accepted by $\hat{\Gamma}$, then $\mathcal{M} \models \beta^{\Omega,\circledast}$ iff, for every maximally w-consistent set $\Upsilon$ such that $\hat{\Gamma} \propto \Upsilon$, $\beta^{\Omega} \in \Upsilon$ iff $(\beta^{\Omega} \rightarrow \bot_{w})^{\circledcirc} \rightarrow \bot_{n} \in \hat{\Gamma}$, which is verified by the other cases; \item $\alpha^{\Sigma} = \beta^{\Omega,\circledcirc}$. We build a set $\Upsilon \subset \boldsymbol{F}_{w}$, starting with $\beta^{\Omega} \in \Upsilon$. We take a sequence $\varphi_{i}$ of all wffs of the shape $(\beta^{\Omega} \wedge \gamma^{\Theta})^\circledcirc$ in $\hat{\Gamma}$. If, for $\varphi_{i} = (\beta^{\Omega} \wedge \gamma^{\Theta})^\circledcirc$, $\Upsilon \cup \{ \gamma^{\Theta} \}$ is w-consistent, then $\gamma^{\Theta} \in \Upsilon$.
To demonstrate that $\Upsilon$ is maximally w-consistent, we suppose that there is a wff $\sigma^{\Phi} \in \boldsymbol{F}_{w}$ such that $\sigma^{\Phi} \not\in \Upsilon$ and $\Upsilon \cup \{ \sigma^{\Phi} \}$ is w-consistent. Then $(\beta^{\Omega} \wedge \sigma^{\Phi})^\circledcirc \not\in \hat{\Gamma}$ by the definition of $\Upsilon$ and, by lemma \ref{dual}, $(\beta^{\Omega} \wedge \sigma^{\Phi})^\circledcirc \rightarrow \bot_{n} \in \hat{\Gamma}$. But from $\beta^{\Omega,\circledcirc} \in \hat{\Gamma}$ and $(\beta^{\Omega} \wedge \sigma^{\Phi})^\circledcirc \rightarrow \bot_{n} \in \hat{\Gamma}$ we know that $(\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{w} ) )^\circledcirc \in \hat{\Gamma}$, using lemma \ref{closedDeriv} and the following derivation: \noindent\begin{center} \AxiomC{$\beta^{\Omega,\circledcirc}$} \UnaryInfC{$\beta^{\Omega,\circledcirc}$} \RightLabel{$\circledcirc$} \UnaryInfC{$\beta^{\Omega}$} \AxiomC{$\beta^{\Omega,\circledcirc}$} \UnaryInfC{$\beta^{\Omega,\circledcirc}$} \RightLabel{$\circledcirc$} \UnaryInfC{$\beta^{\Omega}$} \AxiomC{$^{1}$[$\beta^{\Omega}$]} \RightLabel{$N$} \UnaryInfC{$\beta^{\Omega}$} \AxiomC{$\Pi$} \RightLabel{$N$} \UnaryInfC{$\sigma^{\Phi} \rightarrow \bot_{w}$} \RightLabel{$N$} \BinaryInfC{$\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{w} )$} \RightLabel{$\circledcirc$} \BinaryInfC{$\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{w} )$} \UnaryInfC{$(\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{w} ) )^\circledcirc$} \LeftLabel{$1$} \BinaryInfC{$(\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{w} ) )^\circledcirc$} \alwaysNoLine \UnaryInfC{$\phantom{.}$} \DisplayProof
\AxiomC{$\beta^{\Omega,\circledcirc}$} \UnaryInfC{$\beta^{\Omega,\circledcirc}$} \AxiomC{$\beta^{\Omega}$} \RightLabel{$N$} \UnaryInfC{$\beta^{\Omega}$} \AxiomC{$^{2}$[$\sigma^{\Phi}$]} \RightLabel{$N$} \UnaryInfC{$\sigma^{\Phi}$} \RightLabel{$N$} \BinaryInfC{$\beta^{\Omega} \wedge \sigma^{\Phi}$} \RightLabel{$N$} \UnaryInfC{$\beta^{\Omega} \wedge \sigma^{\Phi}$} \RightLabel{$\circledcirc$} \BinaryInfC{$\beta^{\Omega} \wedge \sigma^{\Phi}$} \UnaryInfC{$(\beta^{\Omega} \wedge \sigma^{\Phi})^\circledcirc$} \AxiomC{$(\beta^{\Omega} \wedge \sigma^{\Phi})^\circledcirc \rightarrow \bot_{n}$} \UnaryInfC{$(\beta^{\Omega} \wedge \sigma^{\Phi})^\circledcirc \rightarrow \bot_{n}$} \BinaryInfC{$\bot_{n}$} \UnaryInfC{$\bot_{w}^{N}$} \RightLabel{$N$} \UnaryInfC{$\bot_{w}$} \LeftLabel{$\boldsymbol{\Pi}$ \hspace*{2cm} $2$} \RightLabel{$N$} \UnaryInfC{$\sigma^{\Phi} \rightarrow \bot_{w}$} \DisplayProof\end{center} \noindent So, by definition, $\sigma^{\Phi} \rightarrow \bot_{w} \in \Upsilon$ and $\Upsilon \cup \{\sigma^{\Phi}\}$ cannot be w-consistent. We conclude that $\Upsilon$ is maximally w-consistent and $\hat{\Gamma} \propto \Upsilon$. $\Upsilon$ represents a neighbourhood $N_{\Upsilon} \in \$(\chi_{\hat{\Gamma}})$. To prove that $\mathcal{M} \models \beta^{\Omega,\circledcirc}$, we need to prove that $\mathcal{T} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi_{\hat{\Gamma}} , N_{\Upsilon} \rangle \models \beta^{\Omega}$. We proceed by induction on the structure of $\beta^{\Omega}$: \begin{itemize} \item $\beta^{\Omega} = \varphi^{\Lambda} \wedge \gamma^{\Theta}$.
$\mathcal{T} \models \beta^{\Omega}$ iff $\mathcal{T} \models \varphi^{\Lambda}$ and $\mathcal{T} \models \gamma^{\Theta}$ iff (induction hypothesis) $\varphi^{\Lambda} \in \Upsilon$ and $\gamma^{\Theta} \in \Upsilon$. We conclude that $\beta^{\Omega} \in \Upsilon$ by lemma \ref{closedDeriv}. Conversely, $\beta^{\Omega} \in \Upsilon$ iff $\varphi^{\Lambda} \in \Upsilon$ and $\gamma^{\Theta} \in \Upsilon$ by lemma \ref{closedDeriv}, and the rest follows by the induction hypothesis; \item $\beta^{\Omega} = \varphi^{\Lambda} \rightarrow \gamma^{\Theta}$. $\mathcal{T} \not\models \beta^{\Omega}$ iff $\mathcal{T} \models \varphi^{\Lambda}$ and $\mathcal{T} \not\models \gamma^{\Theta}$ iff (induction hypothesis) $\varphi^{\Lambda} \in \Upsilon$ and $\gamma^{\Theta} \not\in \Upsilon$ iff $\varphi^{\Lambda} \rightarrow \gamma^{\Theta} \not\in \Upsilon$ by lemma \ref{dual}; \item $\beta^{\Omega} = \varphi^{\Lambda,\bullet}$. We build a set $\Psi$, starting with $\varphi^{\Lambda} \in \Psi$. We take a sequence $\varphi_{i}$ of the wffs in $\Upsilon$ that have the form $(\varphi^{\Lambda} \wedge \gamma^{\Theta})^\bullet$. If, for $\varphi_{i} = (\varphi^{\Lambda} \wedge \gamma^{\Theta})^\bullet$, $\Psi \cup \{ \gamma^{\Theta} \}$ is n-consistent, then $\gamma^{\Theta} \in \Psi$. To demonstrate that $\Psi$ is maximally n-consistent, we suppose that there is a wff $\sigma^{\Phi}$ such that $\sigma^{\Phi} \not\in \Psi$ and $\Psi \cup \{ \sigma^{\Phi} \}$ is n-consistent. Then $(\varphi^{\Lambda} \wedge \sigma^{\Phi})^\bullet \not\in \Upsilon$ by the definition of $\Psi$ and, by lemma \ref{dual}, $(\varphi^{\Lambda} \wedge \sigma^{\Phi})^\bullet \rightarrow \bot_{w} \in \Upsilon$.
But from $\varphi^{\Lambda,\bullet} \in \Upsilon$ and $(\varphi^{\Lambda} \wedge \sigma^{\Phi})^\bullet \rightarrow \bot_{w} \in \Upsilon$ we know that $(\varphi^{\Lambda} \wedge ( \sigma^{\Phi} \rightarrow \bot_{n} ) )^\bullet \in \Upsilon$, using lemma \ref{closedDeriv} and the following derivation: \noindent\begin{center} \AxiomC{$\beta^{\Omega,\bullet}$} \UnaryInfC{$\beta^{\Omega,\bullet}$} \RightLabel{$\bullet$} \UnaryInfC{$\beta^{\Omega}$} \AxiomC{$^{1}$[$\beta^{\Omega}$]} \RightLabel{$u$} \UnaryInfC{$\beta^{\Omega}$} \AxiomC{$\Pi$} \RightLabel{$u$} \UnaryInfC{$\sigma^{\Phi} \rightarrow \bot_{n}$} \RightLabel{$u$} \BinaryInfC{$\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{n} )$} \RightLabel{$\bullet$} \UnaryInfC{$\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{n} )$} \UnaryInfC{$(\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{n} ) )^\bullet$} \LeftLabel{$1$} \BinaryInfC{$(\beta^{\Omega} \wedge ( \sigma^{\Phi} \rightarrow \bot_{n} ) )^\bullet$} \alwaysNoLine \UnaryInfC{$\phantom{.}$} \DisplayProof \AxiomC{$\beta^{\Omega,\bullet}$} \UnaryInfC{$\beta^{\Omega,\bullet}$} \AxiomC{$^{1}$[$\beta^{\Omega}$]} \RightLabel{$u$} \UnaryInfC{$\beta^{\Omega}$} \AxiomC{$^{2}$[$\sigma^{\Phi}$]} \RightLabel{$u$} \UnaryInfC{$\sigma^{\Phi}$} \RightLabel{$u$} \BinaryInfC{$\beta^{\Omega} \wedge \sigma^{\Phi}$} \RightLabel{$u$} \UnaryInfC{$\beta^{\Omega} \wedge \sigma^{\Phi}$} \RightLabel{$\bullet$} \BinaryInfC{$\beta^{\Omega} \wedge \sigma^{\Phi}$} \UnaryInfC{$(\beta^{\Omega} \wedge \sigma^{\Phi})^\bullet$} \AxiomC{$(\beta^{\Omega} \wedge \sigma^{\Phi})^\bullet \rightarrow \bot_{w}$} \UnaryInfC{$(\beta^{\Omega} \wedge \sigma^{\Phi})^\bullet \rightarrow \bot_{w}$} \BinaryInfC{$\bot_{w}$} \UnaryInfC{$\bot_{n}^{u}$} \RightLabel{$u$} \UnaryInfC{$\bot_{n}$} \LeftLabel{$\boldsymbol{\Pi}$ \hspace*{2cm} $2$} \RightLabel{$u$} \UnaryInfC{$\sigma^{\Phi} \rightarrow \bot_{n}$} \DisplayProof\end{center} \noindent So, by definition, $\sigma^{\Phi} \rightarrow \bot_{n} \in \Psi$ and $\Psi \cup \{\sigma^{\Phi}\}$ cannot be n-consistent. We conclude that $\Psi$ is maximally n-consistent and $\Upsilon \propto \Psi$. $\Psi$ represents a world $\chi_{\Psi} \in N_{\Upsilon}$. To prove that $\mathcal{T} \models \varphi^{\Lambda,\bullet}$, we need to prove that $\langle \mathcal{W} , \$ , \mathcal{V} , \chi_{\Psi} \rangle \models \varphi^{\Lambda}$ using the previous cases. \end{itemize}\end{itemize}\end{proof} \begin{corollary} \label{trick} $\Gamma \not\vdash \alpha^{\Sigma}$ iff there is a model $\mathcal{M}$ such that $\mathcal{M} \models \phi^{\Theta}$, for every $\phi^{\Theta} \in \Gamma$, and $\mathcal{M} \not\models \alpha^{\Sigma}$. \end{corollary} \begin{proof} $\Gamma \not\vdash \alpha^{\Sigma}$ iff $\Gamma \cup \{\alpha^{\Sigma} \rightarrow \bot_{n}\}$ is n-consistent, by lemma \ref{consistency} and the definition of an n-consistent set. By lemmas \ref{modelForSet} and \ref{consistentModel}, $\Gamma \cup \{\alpha^{\Sigma} \rightarrow \bot_{n}\}$ is n-consistent iff there is a model $\mathcal{M}$ such that $\mathcal{M} \models \phi^{\Theta}$, for every $\phi^{\Theta} \in \Gamma \cup \{\alpha^{\Sigma} \rightarrow \bot_{n}\}$.
It means that $\mathcal{M}$ satisfies every formula of $\Gamma$ and $\mathcal{M} \not\models \alpha^{\Sigma}$.\end{proof} \begin{theorem} $\Gamma \models \alpha^{\Sigma}$ implies $\Gamma \vdash \alpha^{\Sigma}$ (Completeness). \end{theorem} \begin{proof} $\Gamma \not\vdash \alpha^{\Sigma}$ implies $\Gamma \not\models \alpha^{\Sigma}$, by corollary \ref{trick} and the definition of logical consequence.\end{proof} \section{Normalization, Decidability, Complexity} We investigate here the normalization of PUC-ND. For the normalization proof, we first present an approach similar to classical propositional normalization. This case covers maximum formulas in derivations with fixed contexts, since contexts are not defined for propositional logic. To do so, we investigate a fragment of the presented language, in order to use the strategy of Prawitz \cite{Prawitz} for propositional normalization, in which the applications of the classical absurd are restricted to atomic formulas. In the chosen fragment $\mathcal{L}_{-}$ we only omit the operator $\vee$, which may be recovered by the definition $\alpha \vee \beta \equiv \neg \alpha \rightarrow \beta$. After that result, we present the reductions for the remaining rules. In every case we follow the van Dalen algorithm for normalizing a derivation, starting from a subderivation that concludes a maximum formula of maximal rank, that is, a maximum formula that has no maximum formula with more connectives above it in the subderivation. \begin{lemma} \label{lemmaNormProp0} Every derivation that is composed only of the rules 1 to 8 and 10 to 12 is normalizable. \end{lemma} \begin{proof} These rules may be seen as a natural deduction system for classical propositional logic, since the context is fixed and the formulas with labels are treated like atomic formulas.
We follow the strategy of Prawitz \cite{Prawitz}. We give here the reductions for the propositional logical operators, in the case of fixed context and labels: \begin{itemize} \item $\wedge$-reductions:\\[5pt] \noindent\begin{tabular}{ccccccc} \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\beta$} \RightLabel{$\Delta$} \BinaryInfC{$\alpha \wedge \beta$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof & & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\beta$} \RightLabel{$\Delta$} \BinaryInfC{$\alpha \wedge \beta$} \RightLabel{$\Delta$} \UnaryInfC{$\beta$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\beta$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof \\ \end{tabular} \item $\rightarrow$-reduction:\\[5pt] \noindent\begin{tabular}{ccc} \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \AxiomC{[$\alpha$]} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\beta$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha \rightarrow \beta$} \RightLabel{$\Delta$} \BinaryInfC{$\beta$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \RightLabel{$\Delta$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{2}$}
\RightLabel{$\Delta$} \UnaryInfC{$\beta$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \RightLabel{$\Delta$} \DisplayProof \end{tabular} \end{itemize} The application of the classical absurd may be restricted to atomic formulas only. We change the following derivation according to the principal logical operator of $\gamma$. We only present the change procedure for $\wedge$; see \cite{Prawitz} for further details.\\ \\ \begin{tabular}{ccc} \AxiomC{[$\neg \gamma$]} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \RightLabel{$\Delta$} \UnaryInfC{$\gamma$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Delta$} \DisplayProof & \quad \quad & \AxiomC{$^{1}$[$\alpha \wedge \beta$]} \RightLabel{$\Delta$} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \AxiomC{$^{2}$[$\neg \alpha$]} \RightLabel{$\Delta$} \BinaryInfC{$\bot$} \RightLabel{$\Delta$} \LeftLabel{\small{1}} \UnaryInfC{[$\neg (\alpha \wedge \beta)$]} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \RightLabel{$\Delta$} \LeftLabel{\small{2}} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \AxiomC{$^{3}$[$\alpha \wedge \beta$]} \RightLabel{$\Delta$} \UnaryInfC{$\alpha$} \RightLabel{$\Delta$} \AxiomC{$^{4}$[$\neg \beta$]} \RightLabel{$\Delta$} \BinaryInfC{$\bot$} \RightLabel{$\Delta$} \LeftLabel{\small{3}} \UnaryInfC{[$\neg (\alpha \wedge \beta)$]} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{1}$}
\RightLabel{$\Delta$} \UnaryInfC{$\bot$} \RightLabel{$\Delta$} \LeftLabel{\small{4}} \UnaryInfC{$\beta$} \RightLabel{$\Delta$} \BinaryInfC{$\alpha \wedge \beta$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Delta$} \DisplayProof \end{tabular}\end{proof} \begin{lemma} Given a derivation $\Pi$, if we exchange every occurrence of a world variable $u$ in $\Pi$ by a world variable $w$ that does not occur in $\Pi$, then the resulting derivation, which we represent by $\Pi(u\mid w)$, is also a derivation. \end{lemma} \begin{proof} By induction. \end{proof} \begin{theorem} \label{normalization} Every derivation is normalizable. \end{theorem} \begin{proof} We present the argument for the remaining rules. An application of rule 9 cannot produce maximum formulas, but it may produce detours, considering rules 7 and 8, if the considered subderivation ($\Pi_{2}$ below) does not discharge any hypothesis of the upper subderivation ($\Pi_{1}$ below).
But such detours may be substituted by one application of rule 8 as shown below:\begin{center}\begin{tabular}{lcr} \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{rule 9:} \UnaryInfC{$\bot_{n}$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{rule 8:} \RightLabel{$\Delta$} \UnaryInfC{$\beta^{\Omega}$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{rule 8:} \RightLabel{$\Delta$} \UnaryInfC{$\beta^{\Omega}$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof\\&&\\ \AxiomC{[$\neg (\beta^{\Omega})$]} \RightLabel{$\Delta$} \UnaryInfC{$\neg (\beta^{\Omega})$} \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{rule 9:} \UnaryInfC{$\bot_{n}$} \BinaryInfC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{rule 7:} \RightLabel{$\Delta$} \UnaryInfC{$\beta^{\Omega}$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\bot$} \LeftLabel{rule 8:} \RightLabel{$\Delta$} \UnaryInfC{$\beta^{\Omega}$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof \end{tabular}\end{center} The rules 13 and 14 produce a detour only if the conclusion of one is taken as a hypothesis of the other rule for the same context and, as above, the considered subderivation does not discharge any hypothesis of the upper subderivation. In this case, if we eliminate such a detour, as below, we may produce a new maximum formula of the case of lemma \ref{lemmaNormProp0}.
We cannot produce new detours by doing that elimination because, if there is any detour surrounding the formula $\alpha^{\Sigma}$, it must exist before the elimination. If we start from the uppermost and leftmost detour, we eliminate the detours until we produce a derivation that contains only maximum formulae of the case of lemma \ref{lemmaNormProp0}. The same argument works for the rules 15 and 16 and for the rules 21 and 22.\begin{center}\begin{tabular}{ccccccc} \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma,\phi}$} \LeftLabel{rule 13:} \RightLabel{$\Delta,\phi$} \UnaryInfC{$\alpha^{\Sigma}$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Delta,\phi$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{rule 14:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma,\phi}$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma,\phi}$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{3}$} \DisplayProof & \quad \quad & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,\phi$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{rule 14:} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma,\phi}$} \RightLabel{$\Delta$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma,\phi}$} \LeftLabel{rule 13:} \RightLabel{$\Delta,\phi$} \UnaryInfC{$\alpha^{\Sigma}$} \UnaryInfC{$\Pi_{3}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,\phi$} \UnaryInfC{$\alpha^{\Sigma}$} \RightLabel{$\Delta,\phi$} \UnaryInfC{$\Pi_{3}$} \DisplayProof \end{tabular}\end{center}\begin{center}\begin{tabular}{ccc} \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{rule 21:} \RightLabel{$\Delta,\circledast$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{$\phantom{-}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\beta^{\Omega}$}
\LeftLabel{rule 22:} \RightLabel{$\Delta,N$} \BinaryInfC{$\alpha^{\Sigma}$} \alwaysNoLine \UnaryInfC{$\phantom{.}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof\\&&\\ \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,\circledast$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{$\phantom{-}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{rule 22:} \RightLabel{$\Delta,N$} \BinaryInfC{$\alpha^{\Sigma}$} \LeftLabel{rule 21:} \RightLabel{$\Delta,\circledast$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,\circledast$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof \end{tabular}\end{center} The introduction of the rules 17 and 19 preserves normalization. These rules produce a detour only if the conclusion of one is taken as a hypothesis of the other rule for the same context. In this case, if we eliminate such a detour, as below, we may produce a new maximum formula of the case of lemma \ref{lemmaNormProp0}. We cannot produce new detours by doing that elimination because, if there is any detour surrounding the formula $\alpha^{\Sigma}$, it must exist before the elimination. If we start from the uppermost and leftmost detour, we eliminate the detours until we produce a derivation that contains only maximum formulae of the case of lemma \ref{lemmaNormProp0}. We used the representation $(u , v \mid w,u)$ for the substitution of all occurrences of the variable $u$ by the variable $w$, which does not occur in $\Pi_{2}$, $\Theta$ or $\beta^{\Omega}$, and the subsequent substitution of all occurrences of the variable $v$ by the variable $u$.
The same argument works for the rules 18 and 20.\begin{small}\begin{center}\begin{tabular}{lcl} \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N,u$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{rule 17:} \RightLabel{$\Delta,N,\bullet$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{[$\alpha^{\Sigma}$]} \RightLabel{$\Delta,N,v$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{rule 19:} \RightLabel{$\Theta$} \BinaryInfC{$\beta^{\Omega}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N,u$} \UnaryInfC{$\alpha^{\Sigma}$} \RightLabel{$\Delta,N,u$} \UnaryInfC{$\Pi_{2} (u , v \mid w,u)$} \RightLabel{$\Theta (u , v \mid w,u)$} \UnaryInfC{$\beta^{\Omega} (u , v \mid w,u)$} \DisplayProof\\&&\\ \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N,\bullet$} \UnaryInfC{$\alpha^{\Sigma}$} \AxiomC{[$\alpha^{\Sigma}$]} \RightLabel{$\Delta,N,u$} \UnaryInfC{$\alpha^{\Sigma}$} \LeftLabel{rule 17:} \RightLabel{$\Delta,N,\bullet$} \UnaryInfC{$\alpha^{\Sigma}$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\beta^{\Omega}$} \LeftLabel{rule 19:} \RightLabel{$\Theta$} \BinaryInfC{$\beta^{\Omega}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N,\bullet$} \UnaryInfC{$\alpha^{\Sigma}$} \UnaryInfC{$\Pi_{2}$} \RightLabel{$\Theta$} \UnaryInfC{$\beta^{\Omega}$} \DisplayProof \end{tabular}\end{center}\end{small} The introduction of the rules 23 to 26 produces no maximum formulae, but it may produce unnecessary detours. We repeat the above arguments to eliminate them. The reduction for rule 24 is similar to the reduction for rule 23, and the reductions for rule 26 are similar to the reductions for rule 25.
For rules 25 and 26 the reductions depend on the size of the cycles built to recover the same formula in the same context. We present only the case for a cycle of size 3. The rules 27 to 30 produce neither maximum formulae nor unnecessary detours.\\[5pt] \begin{tabular}{lcr} \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\alpha^{\Sigma,\bullet}$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta,M$} \UnaryInfC{$\shneg N$} \LeftLabel{rule 23:} \RightLabel{$\Delta,M$} \BinaryInfC{$\alpha^{\Sigma,\bullet}$} \RightLabel{$\Delta,M$} \AxiomC{$\Pi_{3}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\shneg M$} \LeftLabel{rule 23:} \RightLabel{$\Delta,N$} \BinaryInfC{$\alpha^{\Sigma,\bullet}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\Pi_{4}$} \alwaysNoLine \UnaryInfC{$\phantom{-}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\alpha^{\Sigma,\bullet}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\Pi_{4}$} \DisplayProof\\&&\\ \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\shneg M$} \AxiomC{$\Pi_{2}$} \RightLabel{$\Delta,M$} \UnaryInfC{$\shneg P$} \LeftLabel{rule 25:} \RightLabel{$\Delta,N$} \BinaryInfC{$\shneg P$} \RightLabel{$\Delta,N$} \AxiomC{$\Pi_{3}$} \RightLabel{$\Delta,P$} \UnaryInfC{$\shneg Q$} \LeftLabel{rule 25:} \RightLabel{$\Delta,N$} \BinaryInfC{$\shneg Q$} \RightLabel{$\Delta,N$} \AxiomC{$\Pi_{4}$} \RightLabel{$\Delta,Q$} \UnaryInfC{$\shneg M$} \LeftLabel{rule 25:} \RightLabel{$\Delta,N$} \BinaryInfC{$\shneg M$} \RightLabel{$\Delta,N$} \UnaryInfC{$\Pi_{5}$} \DisplayProof & $\rhd$ & \AxiomC{$\Pi_{1}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\shneg M$} \RightLabel{$\Delta,N$} \UnaryInfC{$\Pi_{5}$} \DisplayProof \end{tabular}\end{proof} \begin{definition} Given a wff $\alpha^{\Sigma}$, the label rank $\aleph(\alpha^{\Sigma})$ is the depth of label nesting:
\begin{enumerate} \item $\aleph(\alpha^{\Sigma}) = \aleph(\alpha) + s(\Sigma)/2$; \item If $\alpha^{\Sigma} = \beta^{\Omega} \vee \gamma^{\Theta}$, then $\aleph(\alpha^{\Sigma}) = \max(\aleph(\beta^{\Omega}),\aleph(\gamma^{\Theta}))$; \item If $\alpha^{\Sigma} = \beta^{\Omega} \wedge \gamma^{\Theta}$, then $\aleph(\alpha^{\Sigma}) = \max(\aleph(\beta^{\Omega}),\aleph(\gamma^{\Theta}))$; \item If $\alpha^{\Sigma} = \beta^{\Omega} \rightarrow \gamma^{\Theta}$, then $\aleph(\alpha^{\Sigma}) = \max(\aleph(\beta^{\Omega}),\aleph(\gamma^{\Theta}))$; \item If $\alpha^{\Sigma} = \neg \beta^{\Omega}$, then $\aleph(\alpha^{\Sigma}) = \aleph(\beta^{\Omega})$; \end{enumerate} \end{definition} \noindent Remark: by definition, the rank of a wff in $\boldsymbol{F}_{n}$ must be a natural number. \begin{lemma} \label{depthModel} Given a model $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle$ and a wff $\alpha^{\Sigma} \in \boldsymbol{F}_{n}$, if $\aleph(\alpha^{\Sigma}) = k$, then we only need to verify the worlds of $\bigtriangleup^{\$}_{\vec{k}}(\chi)$ to know whether $\mathcal{M} \models \alpha^{\Sigma}$ holds. \end{lemma} \begin{proof} If $\aleph(\alpha^{\Sigma}) = 0$, then $\alpha^{\Sigma}$ is a propositional formula. In this case, we only need to verify that the formula holds at $\bigtriangleup^{\$}_{\vec{0}}(\chi) = \{\chi\}$. If $\aleph(\alpha^{\Sigma}) = k + 1$, then it must have a subformula of the form $(\beta^{\Omega})^{\phi}$, where $\phi$ is a neighbourhood label.
In the worst case, we need to verify all neighbourhoods of $\$(\chi)$ to ensure that the property described by $\beta^{\Omega}$ holds in all of them. $\beta^{\Omega}$ must have a subformula of the form $(\gamma^{\Theta})^{\psi}$, where $\psi$ is a world label. In the worst case, we need to verify all worlds of $\$(\chi)$ to ensure that the property described by $\gamma^{\Theta}$ holds in all of them. But $\aleph(\gamma^{\Theta}) = k$ and, by the induction hypothesis, we only need to verify the worlds of $\bigtriangleup^{\$}_{\vec{k}}(w)$, for every $w \in \bigtriangleup^{\$}_{1}(\chi)$. So, in the worst case, we need to verify the worlds of $\bigtriangleup^{\$}_{\vec{k+1}}(w)$. \end{proof} \begin{lemma} \label{finiteVerification} If $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle \models \alpha^{\Sigma}$, then there is a finite model $\mathcal{M}' = \langle \mathcal{W}' , \$' , \mathcal{V}' , \chi' \rangle$ such that $\mathcal{M}' \models \alpha^{\Sigma}$. \end{lemma} \begin{proof} In the proof of lemma \ref{consistentModel}, we verified the membership of the formulas in maximally n-consistent sets and maximally w-consistent sets based on the structure of the given formula to establish the satisfaction relation. Each existential label required the existence of one neighbourhood or world for the verification of the validity of a given subformula. The universal label for neighbourhoods required no neighbourhood at all. It only added properties to the neighbourhoods that exist in a given system of neighbourhoods. The procedure is a demonstration that, for any wff in $\boldsymbol{F}_{n}$, we only need to gather a finite set of neighbourhoods and worlds. \end{proof} \begin{theorem} \label{decidability} PUC-Logic is decidable.
\end{theorem} \begin{proof} If $\not\vdash \alpha^{\Sigma}$, then it must be possible to find a template that satisfies the negation of the formula. By the lemma above, there is a finite template that satisfies this negation.\end{proof} \begin{definition} Every label occurrence $\phi$ inside a formula $\alpha^{\Sigma}$ is an index of a subformula $\beta^{\Omega,\phi}$. Every label occurrence $\phi$ has a relative label depth defined by $\flat(\phi) = \aleph(\alpha^{\Sigma}) - \aleph(\beta^{\Omega,\phi})$. \end{definition} \begin{lemma} \label{complexity} Given $\alpha^{\Sigma} \in \boldsymbol{F}_{n}$, there is a finite model $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle$ such that $\mathcal{M} \models \alpha^{\Sigma}$ with the following properties: (a) $\mathcal{W} = \bigtriangleup^{\$}_{\vec{k}}(\chi)$, where $k = \aleph(\alpha^{\Sigma})$; (b) For every world $w \in \bigtriangleup^{\$}_{n}(\chi)$, $\$(w)$ has at most as many neighbourhoods as there are labels $\phi$ such that $\flat(\phi)=n$; (c) Every neighbourhood $N \in \$(w)$ has at most as many worlds as there are labels $\phi$ such that $\flat(\phi)=n+1/2$, plus the number of labels $\varphi$ such that $\flat(\varphi)=n$. \end{lemma} \begin{proof} (a) From lemmas \ref{finiteVerification} and \ref{depthModel}; (b) Every existential neighbourhood label $\phi$ such that $\flat(\phi) = 0$ contributes, by the procedure of lemma \ref{consistentModel}, one neighbourhood to $\$(\chi)$ for the model $\mathcal{M} = \langle \mathcal{W} , \$ , \mathcal{V} , \chi \rangle$. The universal neighbourhood label requires no additional neighbourhood in $\$(\chi)$, according to the explanation of lemma \ref{finiteVerification}.
In the worst case, all neighbourhood labels $\phi$ such that $\flat(\phi) = 0$ are existential. The labels $\phi$ such that $\flat(\phi) = n$, $n \geq 0$, $n \in \mathbb{N}$, contribute to the systems of neighbourhoods of the worlds of $\bigtriangleup^{\$}_{n}(\chi)$. In the worst case, all of these labels contribute to the system of neighbourhoods of a single world; (c) The same argument works for the number of worlds in a neighbourhood, except that the number of worlds in a neighbourhood is larger than the number of worlds in every neighbourhood it contains. In the worst case, the smallest neighbourhood contains as many worlds as there are labels $\phi$ such that $\flat(\phi)=n+1/2$. In this case, we must add at least one world to each neighbourhood that contains the smallest neighbourhood in the considered system of neighbourhoods. But the number of neighbourhoods is limited by the number of labels $\phi$ with $\flat(\phi) = n$, $n \in \mathbb{N}$. So the biggest neighbourhood reaches the asserted limit and the number of worlds of the model is linear in the number of labels. \end{proof} \begin{theorem} \label{satisfabilityPUC} The problem of satisfiability is $\boldsymbol{NP}$-complete for PUC-Logic. \end{theorem} \begin{proof} A wff without labels is a propositional formula, so, by \cite{Cook}, the complexity of the satisfiability problem for PUC-Logic must be at least that of an $\boldsymbol{NP}$-complete problem. Given a wff with labels, by lemma \ref{complexity}, we know that there is a directed graph, in the manner of lemma \ref{lemmaConsequence}, that depends on the satisfiability of the endpoints. Those endpoints are always propositional formulas. So the complexity of the problem of satisfiability is the sum of the complexities of the problems for each endpoint.
It means that the biggest subformula dictates the complexity, because the model of lemma \ref{complexity} has at most a linear number of worlds and the propositional satisfiability problem is $\boldsymbol{NP}$-complete. So the worst case is the wff without labels. \end{proof} \section{Counterfactual logics} In \cite{Lewis}, Lewis presents many logics for counterfactual reasoning, organized according to some given conditions imposed on the nested neighbourhood function. The most basic logic is $\boldsymbol{V}$, which has no condition imposed on $\$$. Lewis presented the axioms and inference rules of $\boldsymbol{V}$ using his comparative possibility operator ($\preccurlyeq$). \begin{definition} $\alpha^{\Sigma} \preccurlyeq \beta^{\Omega} \equiv (\beta^{\Omega,\bullet} \rightarrow \alpha^{\Sigma,\bullet})^{\circledast}$ \end{definition} Here we prove that the axioms of the $\boldsymbol{V}$-logic are theorems and that its inference rules are derived rules in PUC-Logic. This proves that PUC-Logic is complete for the $\boldsymbol{V}$-logic, based on the completeness proof given by Lewis \cite{Lewis}. \begin{itemize} \item TRANS axiom: $((\alpha \preccurlyeq \beta) \wedge (\beta \preccurlyeq \gamma)) \rightarrow (\alpha \preccurlyeq \gamma)$; \item CONNEX axiom: $(\alpha \preccurlyeq \beta) \vee (\beta \preccurlyeq \alpha)$; \item Comparative Possibility Rule (CPR): If $\vdash \alpha \rightarrow (\beta_{1} \vee \ldots \vee \beta_{n})$, then $\vdash (\beta_{1} \preccurlyeq \alpha) \vee \ldots \vee (\beta_{n} \preccurlyeq \alpha)$, for any $n \geq 1$. \end{itemize} We present a proof of the CPR rule for $n = 2$.
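Unfolding the definition of $\preccurlyeq$ above, the $n = 2$ instance of CPR to be derived can be stated explicitly: from a derivation of $\alpha \rightarrow (\beta \vee \gamma)$ we must conclude

```latex
% CPR for n = 2, with the comparative possibility operator
% unfolded by the definition
%   \alpha^{\Sigma} \preccurlyeq \beta^{\Omega}
%     \equiv (\beta^{\Omega,\bullet} \to \alpha^{\Sigma,\bullet})^{\circledast},
% so that (\beta \preccurlyeq \alpha) \vee (\gamma \preccurlyeq \alpha) becomes:
\[
  (\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}
  \vee
  (\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast}
\]
```

This unfolded disjunction is exactly the conclusion of the CPR derivation given in the landscape figure below.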
We omit the attribute representation of the wff denoted by $\alpha$, $\beta$ and $\gamma$ to simplify the reading of the derivations. We use lemma \ref{transfer} below for the theorem $\alpha \rightarrow (\beta \vee \gamma)$ and a derivation $\Xi$ of it.\begin{center}\AxiomC{$^{2}$[$\gamma^{\bullet}$]} \RightLabel{$\circledast$} \UnaryInfC{$\gamma^{\bullet}$} \AxiomC{$\phantom{.}$}\alwaysNoLine \UnaryInfC{$^{1}$[$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\wedge(\gamma^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$]} \alwaysSingleLine \UnaryInfC{$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\wedge(\gamma^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$} \UnaryInfC{$(\gamma^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$} \RightLabel{$\circledast$} \UnaryInfC{$\gamma^{\bullet} \rightarrow \beta^{\bullet}$} \RightLabel{$\circledast$} \BinaryInfC{$\beta^{\bullet}$} \AxiomC{$^{1}$[$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\wedge(\gamma^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$]} \UnaryInfC{$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\wedge(\gamma^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$} \UnaryInfC{$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}$} \RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet} \rightarrow \alpha^{\bullet}$} \RightLabel{$\circledast$} \BinaryInfC{$\alpha^{\bullet}$} \LeftLabel{\textbf{TRANS} \hspace*{4cm} \scriptsize{2}} \RightLabel{$\circledast$}
\UnaryInfC{$\gamma^{\bullet} \rightarrow \alpha^{\bullet}$} \RightLabel{$\circledast$} \UnaryInfC{$(\gamma^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}$} \LeftLabel{\scriptsize{1}} \UnaryInfC{$((\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\wedge(\gamma^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}) \rightarrow (\gamma^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}$} \DisplayProof\end{center} \begin{landscape}\begin{center}\AxiomC{$^{1}$[$\neg ((\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\vee(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast})$]} \UnaryInfC{$\neg ((\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\vee(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast})$} \AxiomC{$^{1}$[$\neg ((\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\vee(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast})$]} \UnaryInfC{$\neg ((\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\vee(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast})$} \AxiomC{$^{2}$[$\beta^{\bullet}$]} \RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet}$} \RightLabel{$\circledast$} \UnaryInfC{$\alpha^{\bullet} \rightarrow \beta^{\bullet}$} \UnaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$} \UnaryInfC{$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\vee(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$} \BinaryInfC{$\bot_{n}$} \UnaryInfC{$\alpha^{\bullet,\circledast}$} \RightLabel{$\circledast$} \UnaryInfC{$\alpha^{\bullet}$} \LeftLabel{\scriptsize{2}} \RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet} \rightarrow \alpha^{\bullet}$} \UnaryInfC{$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}$} \UnaryInfC{$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\vee(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$} \BinaryInfC{$\bot_{n}$} \LeftLabel{\textbf{CONNEX} \hspace*{2cm} \scriptsize{1}} \UnaryInfC{$(\beta^{\bullet} \rightarrow \alpha^{\bullet})^{\circledast}\vee(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$} \alwaysNoLine \UnaryInfC{$\phantom{.}$} \UnaryInfC{$\phantom{.}$}\DisplayProof \AxiomC{$^{1}$[$\neg ((\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast})$]} \UnaryInfC{$\neg ((\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast})$} \AxiomC{$^{1}$[$\neg ((\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast})$]} \UnaryInfC{$\neg ((\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast})$} \AxiomC{$^{2}$[$\alpha^{\bullet}$]} \RightLabel{$N$}
\UnaryInfC{$\alpha^{\bullet}$} \UnaryInfC{$\Sigma$} \UnaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast}$} \BinaryInfC{$\bot_{n}$} \UnaryInfC{$\beta^{\bullet,N}$} \RightLabel{$N$} \UnaryInfC{$\beta^{\bullet}$} \LeftLabel{2} \RightLabel{$N$} \UnaryInfC{$\alpha^{\bullet} \rightarrow \beta^{\bullet}$} \RightLabel{$\circledast$} \UnaryInfC{$\alpha^{\bullet} \rightarrow \beta^{\bullet}$} \UnaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$} \UnaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast}$} \BinaryInfC{$\bot_{n}$} \LeftLabel{\textbf{CPR} \hspace*{2cm}1} \UnaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast}$}\DisplayProof \AxiomC{$\alpha^{\bullet}$} \RightLabel{$N$} \UnaryInfC{$\alpha^{\bullet}$} \RightLabel{$N,\bullet$} \AxiomC{$\Xi$} \RightLabel{$N,u$} \UnaryInfC{$\alpha \rightarrow (\beta \vee \gamma)$} \BinaryInfC{$\Pi$} \RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet} \vee \gamma^{\bullet} $} \AxiomC{$^{3}$[$\beta^{\bullet}$]} \RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet}$} \RightLabel{$\circledast$} \UnaryInfC{$\alpha^{\bullet} \rightarrow \beta^{\bullet}$} \UnaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast}$}
\UnaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast}$} \AxiomC{$^{3}$[$\gamma^{\bullet}$]} \RightLabel{$\circledast$} \UnaryInfC{$\gamma^{\bullet}$} \RightLabel{$\circledast$} \UnaryInfC{$\alpha^{\bullet} \rightarrow \gamma^{\bullet} $} \UnaryInfC{$(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast}$} \UnaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast}$} \LeftLabel{$\boldsymbol{\Sigma}$ \hspace*{0.5cm} 3} \TrinaryInfC{$(\alpha^{\bullet} \rightarrow \beta^{\bullet})^{\circledast} \vee(\alpha^{\bullet} \rightarrow \gamma^{\bullet})^{\circledast}$} \alwaysNoLine \UnaryInfC{$\phantom{.}$} \UnaryInfC{$\phantom{.}$} \UnaryInfC{$\phantom{.}$}\DisplayProof \AxiomC{$\alpha^{\bullet}$} \RightLabel{$N$} \UnaryInfC{$\alpha^{\bullet}$} \RightLabel{$N,\bullet$} \UnaryInfC{$\alpha$} \AxiomC{$^{4}$[$\alpha$]} \RightLabel{$N,u$} \UnaryInfC{$\alpha$} \AxiomC{$\Xi$} \RightLabel{$N,u$} \UnaryInfC{$\alpha \rightarrow (\beta \vee \gamma)$} \BinaryInfC{$\beta \vee \gamma$} \RightLabel{$N,\bullet$} \UnaryInfC{$\beta \vee \gamma$} \RightLabel{$\circledast,\bullet$} \UnaryInfC{$\beta \vee \gamma$} \LeftLabel{4} \RightLabel{$\circledast,\bullet$} \BinaryInfC{$\beta \vee \gamma$} \RightLabel{$\circledast,\bullet$} \UnaryInfC{$\beta \vee \gamma$} \AxiomC{[$\beta$]} \RightLabel{$\circledast,\bullet$} \UnaryInfC{$\beta$} \RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet}$}
\RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet} \vee \gamma^{\bullet}$} \AxiomC{[$\gamma$]} \RightLabel{$\circledast,\bullet$} \UnaryInfC{$\gamma$} \RightLabel{$\circledast$} \UnaryInfC{$\gamma^{\bullet}$} \RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet} \vee \gamma^{\bullet}$} \RightLabel{$\circledast$} \TrinaryInfC{$\beta^{\bullet} \vee \gamma^{\bullet}$} \LeftLabel{$\boldsymbol{\Pi}$ \hspace*{0.5cm}} \RightLabel{$\circledast$} \UnaryInfC{$\beta^{\bullet} \vee \gamma^{\bullet} $} \DisplayProof\end{center}\end{landscape} \begin{lemma} \label{transfer} Given a theorem $\alpha^{\Sigma}$, there is a proof of $\alpha^{\Sigma}$ in the context $\{N,u\}$ in which the variables $N$ and $u$ do not occur. \end{lemma} \begin{proof} Since $\alpha^{\Sigma}$ is a theorem, by definition there is a proof $\Pi$ without open hypotheses that concludes the theorem in the empty context. During the proof $\Pi$, the smallest context is the empty context. So we can choose variables $N$ and $u$ that do not occur in $\Pi$ and add the stack of labels $\{N,u\}$ at the rightmost position of each context of each rule. We end up with a proof of the theorem in the context $\{N,u\}$.
This is possible because there is no restriction that could be applied over the new variables.\end{proof} We now present some ideas related to the different counterfactual logics Lewis defined, based on conditions imposed on the function $\$$: \begin{itemize} \item Normality (N): $\$$ is normal iff $\forall w \in \mathcal{W} : \$(w) \neq \emptyset$; \item Total reflexivity (T): $\$$ is totally reflexive iff $\forall w \in \mathcal{W} : w \in \bigcup\$(w)$; \item Weak centering (W): $\$$ is weakly centered iff $\forall w \in \mathcal{W} : \$(w) \neq \emptyset \mbox{ and } \forall N \in \bigcup\$(w) : w \in N$; \item Centering (C): $\$$ is centered iff $\forall w \in \mathcal{W} : \{w\} \in \$(w)$. \end{itemize} To each condition corresponds a logic, respectively the $\boldsymbol{VN}$, $\boldsymbol{VT}$, $\boldsymbol{VW}$ and $\boldsymbol{VC}$-logics. For each logic, PUC-ND may change its set of rules to acquire the corresponding expressivity provided by the conditions. We present some ideas to make those changes: \begin{itemize} \item[$\boldsymbol{VN}$] Rule 9 loses restriction (a).
Rules 19 and 22 lose the second premiss.\\Introduction of the rule: \AxiomC{$\phantom{-}$} \RightLabel{$\Delta,\circledast$} \UnaryInfC{$\alpha^{\Sigma}$} \RightLabel{$\Delta,N$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof\\Restriction: (a) $\alpha^{\Sigma}$ must fit into the contexts; \item[$\boldsymbol{VT}$] We repeat the system for VN.\\Introduction of the rule: \AxiomC{$\phantom{-}$} \RightLabel{$\Delta,\circledast,\ast$} \UnaryInfC{$\alpha^{\Sigma}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof\\Restriction: (a) $\alpha^{\Sigma}$ must fit into the contexts; \item[$\boldsymbol{VW}$] We repeat the system for VT.\\Introduction of the rule: \AxiomC{$\phantom{-}$} \RightLabel{$\Delta,\circledcirc,\ast$} \UnaryInfC{$\alpha^{\Sigma}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof\\Restriction: (a) $\alpha^{\Sigma}$ must fit into the contexts; \item[$\boldsymbol{VC}$] We repeat the system for VW.\\Introduction of the rule: \AxiomC{$\phantom{-}$} \RightLabel{$\Delta,\circledast,\bullet$} \UnaryInfC{$\alpha^{\Sigma}$} \RightLabel{$\Delta$} \UnaryInfC{$\alpha^{\Sigma}$} \DisplayProof\\Restriction: (a) $\alpha^{\Sigma}$ must fit into the contexts. \end{itemize} \section{Related Works} As far as we know, there is only one natural deduction system for counterfactuals, which is given by Bonevac \cite{Bonevac}. But his system is designed to deal with the $\boldsymbol{VW}$-logic, since it contains the rule of counterfactual exploitation ($\boxright$E), which encapsulates the weak centering condition.
His approach to define rules for the counterfactual operators provides a better intuition of the counterfactual logic. His systems is expressive enough to deal with modalities and strict conditionals. The labelling of world shifts using formulas makes it easier to capture the counterfactual mechanics.\\ We also found the work of Sano \cite{Sano} which pointed out the advantages of using the hybrid formalism for the counterfactual logic. He presented some axioms and rules for the $\BinaryInfColdsymbol{V_{\mathcal{HC}(@)}}$-logic that extends the $\BinaryInfColdsymbol{V}$-logic of Lewis.\\ We also found a sequent calculus for the $\BinaryInfColdsymbol{V}$-logic that is given by \cite{Jelia2012}. But this system also demands modalities in the syntax. As far as we know, our deduction system is the only one dealing with Lewis systems in a general form, that is, without using modalities in the syntax. \section*{Conclusions} From the definitions of Lewis \cite{Lewis} for the counterfactual logic, we define our natural deduction system, which is proven to be sound and complete for the $\BinaryInfColdsymbol{V}$-logic.\\ The use of two types of labels (neighbourhood and world labels) gave us the ability to manage different types of quantifications. The quantifications are largely used by the counterfactual operators definitions according to Lewis. That approach makes it possible to build the rules for the counterfactual operators as derived rules of the system.\\ Another advantage of that approach is that our natural deduction system is built without the use of modalities or strict conditionals, making it easier to take benefits from the well known propositional results such as normalization. \section*{References} \BinaryInfCegin{thebibliography}{1} \noLineewcommand{\enquote}[1]{``#1''} \BinaryInfCibitem{Lewis} Lewis, D. K., \enquote{Counterfactuals}, Blackwell Publishing, 2008. \BinaryInfCibitem{LewisPapers} Lewis, D. 
K., \enquote{Papers in ethics and social philosophy}, Cambridge University Press, 2000. \bibitem{Goodman} Goodman, N., \enquote{Fact, Fiction, and Forecast}, 4th Edition, Harvard University Press, 1983. \bibitem{Bell} Bell, J.~L., \enquote{Toposes and Local Set Theories}, Dover Publications, 2008. \bibitem{Knuth} Knuth, D. E., \enquote{Semantics of context-free languages}, Mathematical Systems Theory 2 (1968). \bibitem{Goldblatt} Goldblatt, R., \enquote{Topoi: The categorical analysis of logic}, Dover, 2006. \bibitem{Goldblatt2} Goldblatt, R., \enquote{Logics of time and computation}, CSLI lecture notes, 1992. \bibitem{Prawitz} Prawitz, D., \enquote{Natural Deduction: a proof-theoretical study}, Dover, 2006. \bibitem{Naufel} do~Amaral, F.~N. and E.~H. Haeusler, \enquote{Using the internal logic of a topos to model search spaces for problems}, Logic Journal of IGPL (2007). \bibitem{Hermann} Menezes, P.~B. and E.~H. Haeusler, \enquote{Teoria das Categorias para Ci\^{e}ncia da Computa\c{c}\~{a}o}, Editora Sagra Luzatto, 2006. \bibitem{Ramsey} Ramsey, F.~P., \enquote{Philosophical papers}, Cambridge University Press, 1990. \bibitem{Gent} Gent, I. P., \enquote{A Sequent- or Tableau-style System for Lewis's Counterfactual Logic VC}, Notre Dame Journal of Formal Logic, vol. 33, no. 3, pp. 369-382, 1992. \bibitem{Bonevac} Bonevac, D., \enquote{Deduction: Introductory Symbolic Logic}, Blackwell, 2003. \bibitem{Sano} Sano, K., \enquote{Hybrid counterfactual logics}, Journal of Logic, Language and Information, vol. 18, no. 4, pp. 515-539, 2009. \bibitem{Escobar} L\'{o}pez-Escobar, E.G.K., \enquote{Implicational Logics in Natural Deduction Systems}, Journal of Symbolic Logic, vol. 47, no. 1, pp.
184-186, 1982. \bibitem{Fernandes} Fernandes, R.Q.A., Haeusler, E.H., Pereira, L.C.P.D., \enquote{A Natural Deduction System for Counterfactual Logic}, in XVI Encontro Brasileiro de L\'{o}gica, Petr\'{o}polis, 2011. \bibitem{LSFA09} Fernandes, R.Q.A., Haeusler, E.H., \enquote{A Topos-Theoretic Approach to Counterfactual Logic}, in Fourth Workshop on Logical and Semantic Frameworks, Bras\'{i}lia, 2009. Pre-proceedings, 2009. \bibitem{Hansson} Hansson, B., \enquote{An Analysis of some Deontic Logics}, No\^{u}s, vol. 3, no. 4, pp. 373-398, 1969. \bibitem{vanDalen} van Dalen, D., \enquote{Logic and Structure}, Springer, 2008. \bibitem{Libkin} Libkin, L., \enquote{Elements of Finite Model Theory}, Springer, 2010. \bibitem{Troelstra} Troelstra, A. S., Schwichtenberg, H., \enquote{Basic Proof Theory}, Cambridge University Press, 2000. \bibitem{Lambert} Lambert, K., \enquote{Free Logic: selected essays}, Cambridge University Press, 2004. \bibitem{Cook} Cook, S. A., \enquote{The complexity of theorem proving procedures}, in 3rd Annual ACM Symposium on Theory of Computing, pp. 151-158, 1971. \bibitem{Statman} Statman, R., \enquote{Intuitionistic propositional logic is polynomial-space complete}, Theoretical Computer Science, vol. 9, no. 1, pp. 67-72, 1979. \bibitem{Jelia2012} Lellmann, B., Pattinson, D., \enquote{Sequent Systems for Lewis' Conditional Logics}, in 13th European Conference on Logics in Artificial Intelligence, 2012. \end{thebibliography} \end{document}
\begin{document} \title{Emission of photon pairs by mechanical stimulation of the squeezed vacuum} \author{Wei Qin} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \author{Vincenzo Macr\`{i}} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \author{Adam Miranowicz} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \affiliation{Faculty of Physics, Adam Mickiewicz University, 61-614 Pozna\'n, Poland} \author{Salvatore Savasta} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \affiliation{Dipartimento di Scienze Matematiche e Informatiche, Scienze Fisiche e Scienze della Terra, \\ Universit\`{a} di Messina, I-98166 Messina, Italy} \author{Franco Nori} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \affiliation{Department of Physics, The University of Michigan, Ann Arbor, Michigan 48109-1040, USA} \begin{abstract} To observe the dynamical Casimir effect (DCE) induced by a moving mirror is a long-standing challenge because the mirror velocity needs to approach the speed of light. Here, we present an experimentally feasible method for observing this mechanical DCE in an optomechanical system. It employs a detuned, parametric driving to squeeze a cavity mode, so that the mechanical mode, with a typical resonance frequency, can parametrically and resonantly couple to the squeezed cavity mode, thus leading to a resonantly amplified DCE in the squeezed frame. The DCE process can be interpreted as {\it mechanically-induced two-photon hyper-Raman scattering} in the laboratory frame. 
Specifically, {\it a photon pair} of the parametric driving absorbs a single phonon and then is scattered into an anti-Stokes sideband. We also find that the squeezing, which additionally induces and amplifies the DCE, can be extremely small. Our method requires neither an ultra-high mechanical-oscillation frequency (i.e., a mirror moving at nearly the speed of light) nor an ultrastrong single-photon optomechanical coupling and, thus, could be implemented in a wide range of physical systems. \end{abstract} \maketitle \section{Introduction} One of the most astonishing phenomena of nature, predicted by quantum field theory, is that the quantum vacuum is not empty but teems with virtual particles. Under certain conditions, these vacuum fluctuations could be converted into real particles by dynamical amplification mechanisms such as the Schwinger process~\cite{schwinger1951gauge}, Hawking radiation~\cite{hawking1974black}, and the Unruh effect~\cite{unruh1976notes}. The dynamical Casimir effect (DCE) describes the creation of photons out of the quantum vacuum due to a moving mirror~\cite{moore1970quantum, fulling1976radiation}. The physics underlying the DCE is that the electromagnetic field cannot adiabatically adapt to the time-dependent boundary condition imposed by the mechanical motion of the mirror, such that a mismatch of vacuum modes occurs in time. This gives rise to the emission of photon pairs from the vacuum and, at the same time, to the equal-energy dissipation of the mechanical phonons. Thus, according to energy conservation, the DCE can also be understood as the energy conversion of the mechanical motion to the electromagnetic field. In order to detect the DCE, the mirror velocity is, however, required to be close to the speed of light~\cite{dodonov2010current,nation2012colloquium}. This requirement is the main obstacle in observing the DCE.
This problem led to many alternative proposals, which replaced the mechanical motion with an effective motion provided by, e.g., modulating dielectric properties of semiconductors or superconductors~\cite{yablonovitch1989accelerating,lozovik1995parametric,crocce2004model,braggio2005novel,segev2007prospects}, modulating the ultrastrong light-matter coupling in cavity quantum electrodynamics (QED)~\cite{ciuti2005quantum,de2007quantum,de2009extracavity,garziano2013switching,hagenmuller2016all,de2017virtual,cirio2017amplified,e2018microscopic,kockum2019ultrastrong,forn2019ultrastrong}, or driving an optical parametric oscillator~\cite{dezael2010analogue}. In particular, two remarkable experimental verifications have recently been implemented utilizing a superconducting quantum interference device~\cite{nation2012colloquium,johansson2009dynamical,johansson2010dynamical,wilson2011observation,dalvit2011quantum,johansson2013nonclassical} and a Josephson metamaterial~\cite{lahteenmaki2013dynamical}, respectively, to produce the effective motion. Despite such achievements, implementing the DCE with a massive mechanical mirror is still highly desirable for a more fundamental understanding of the DCE physics. This is because the parametric conversion of mechanical energy to photons, which is a key feature of the DCE predicted in its original proposals~\cite{moore1970quantum, fulling1976radiation,dodonov2010current,nation2012colloquium}, can be demonstrated in this case, contrary to proposals based on the effective motion. However, owing to the serious problem mentioned above (i.e., a very fast oscillating mirror), such radiation has not yet been observed experimentally, although the DCE has been predicted for almost fifty years. Here, we propose a novel approach to this outstanding problem, and we show that in a squeezed optomechanical system, a mirror oscillating at a common frequency can induce an observable DCE.
The DCE can, in principle, also be directly implemented in cavity-optomechanical systems~\cite{lambrecht1996motion,dodonov1996generation,plunien2000dynamical,schaller2002dynamical,kim2006detectability, de2013influence,macri2018nonperturbative,Sanz2018electromechanical,wang2018mechanically,settineri2019conversion}. But it requires a mechanical frequency $\omega_{m}$ to be very close to the cavity frequency $\omega_{c}$, or even a single-photon optomechanical coupling $g_{0}$ to reach the ultrastrong-coupling regime $g_{0}/\omega_{m}\!\gtrsim\!0.1$~\cite{macri2018nonperturbative,settineri2019conversion}. For typical parameters, $\omega_{m}\!\!\sim\!\!$~MHz is much smaller than $\omega_{c}\!\!\sim\!\!$~THz ($\sim$~\!\!GHz) for optical (microwave) cavities, and at the same time, achieving the ultrastrong coupling is, currently, also a very challenging task in optomechanical experiments. However, as we describe in this manuscript, when squeezing the cavity~\cite{scully1997book}, the squeezed-cavity-mode (SCM) frequency is tunable, such that the SCM can parametrically and resonantly couple to a mechanical mode with a typically available $\omega_{m}$. This enables an observable DCE in the squeezed frame. Such a {\it mechanical DCE corresponds to two-photon hyper-Raman scattering} in the laboratory frame. Compared to one-photon Raman scattering typically demonstrated in cavity optomechanics, this hyper-Raman scattering process describes {\it a photon pair scattered into a higher energy mode by absorbing a mechanical phonon}. As opposed to previous mechanical-DCE proposals, our approach requires {\it neither} an ultra-high mechanical frequency {\it nor} an ultrastrong coupling. In addition, the model discussed here is a generic optomechanical setup. Hence, with current technologies our proposal could be realized in various physical architectures, e.g., superconducting resonators~\cite{xiang2013hybrid,gu2017microwave} and optical cavities~\cite{reiserer2015cavity}. 
Furthermore, our proposal also shows mechanically-induced two-photon hyper-Raman scattering, which, to our knowledge, has not been considered before in cavity optomechanics. \begin{figure} \caption{(a) Setup for observing the mechanical dynamical Casimir effect. In this optomechanical system, a $\chi^{\left(2\right)}$} \label{fig-schematic} \end{figure} \section{Model} We consider an optomechanical system, as schematically depicted in Fig.~\ref{fig-schematic}(a). The basic idea underlying our proposal is to use a detuned two-photon driving, e.g., of frequency $\omega_{L}$ and amplitude $\Omega$, to squeeze the cavity mode. The driving results in parametric down conversion of mechanical phonons to correlated cavity-photon pairs, which corresponds to the DCE. Furthermore, the SCM frequency completely depends on the detuning $\Delta=\omega_{c}-\omega_{L}/2$ and the amplitude $\Omega$. This can be exploited to tune the parametric phonon-photon coupling into resonance, determining a strong amplification of the DCE. When the mechanical mode is driven, e.g., at frequency $\omega_{d}$ and amplitude $F$, a strong steady-state output-photon flux that is induced by the DCE can be achieved. To be specific, we consider the Hamiltonian \begin{equation} H=H_{\rm OM}+H_{\rm CD}+H_{\rm MD}. \end{equation} Here, \begin{equation} H_{\rm OM}=\omega_{m}b^{\dag}b-g_{0}a^{\dag}a\left(b+b^{\dag}\right) \end{equation} describes a standard optomechanical coupling, \begin{equation} H_{\rm CD}=\Delta a^{\dag}a+\frac{1}{2}\Omega\left(a^{2}+a^{\dag2}\right) \end{equation} a detuned two-photon cavity driving, and \begin{equation} H_{\rm MD}=\frac{1}{2}F\left[\exp\left(i\omega_{d}t\right)b+\exp\left(-i\omega_{d}t\right)b^{\dag}\right] \end{equation} a single-phonon mechanical driving.
The bare cavity mode $a$, when parametrically driven, is squeezed with a squeezing parameter \begin{equation} r=\frac{1}{4}\ln\left(\frac{\Delta+\Omega}{\Delta-\Omega}\right) \end{equation} and accordingly, is transformed to a squeezed mode $a_{s}$, via the Bogoliubov transformation~\cite{scully1997book} \begin{equation} a_{s}=\cosh\left(r\right)a+\sinh\left(r\right)a^{\dag}. \end{equation} Similar methods have been used for enhancing light-matter interactions in cavity optomechanics~\cite{lu2015squeezed,lemonde2016enhanced} and cavity QED~\cite{qin2018exponentially, leroux2018enhancing}, but involving markedly different physical processes. As a result, $H_{\rm CD}$ is diagonalized to $H_{\rm CD}=\omega_{s}a_{s}^{\dag}a_{s}$, where $\omega_{s}=\sqrt{\Delta^{2}-\Omega^{2}}$ is a controllable SCM frequency. The optomechanical-coupling Hamiltonian is transformed, in terms of $a_{s}$, to \begin{equation} H_{\rm OM}=\left[-g_{\rm OM}a_{s}^{\dag}a_{s}+g_{\rm DCE}\left(a_{s}^{2}+a^{\dag2}_{s}\right)\right]\left(b+b^{\dag}\right), \end{equation} where $g_{\rm OM}=g_{0}\cosh\left(2r\right)$ is an effective single-photon optomechanical coupling, and $g_{\rm DCE}=g_{0}\sinh\left(2r\right)/2$ is a coupling associated with the DCE. The dynamics under $H_{\rm OM}$ describes a mechanical modulation of the boundary condition of the squeezed field~\cite{law1995interaction,macri2018nonperturbative,di2017interaction}. Under the rotating-wave approximation, the coherent dynamics of the system is governed by an effective Hamiltonian, \begin{align}\label{sq:effectiveH} H_{\rm eff}=\;&\Delta_{s}a_{s}^{\dag}a_{s}+\Delta_{m}b^{\dag}b\nonumber\\ &+g_{\rm DCE}\left(a_{s}^{2}b^{\dag}+{\rm H.c.}\right)+\frac{1}{2}F\left(b+b^{\dag}\right), \end{align} where $\Delta_{s}=\omega_{s}-\omega_{d}/2$ and $\Delta_{m}=\omega_{m}-\omega_{d}$. 
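The algebra behind the squeezed frame can be verified directly. The following Python snippet is a minimal numerical sanity check (the parameter values are purely illustrative, not taken from the paper): it computes $r$, $\omega_{s}$, $g_{\rm OM}$, and $g_{\rm DCE}$ from $\Delta$, $\Omega$, and $g_{0}$, and checks the identities $\tanh(2r)=\Omega/\Delta$ and $\omega_{s}=\Delta/\cosh(2r)$, which follow from the definitions above.

```python
import math

# Illustrative (hypothetical) parameters; Omega < Delta is required for stability.
Delta = 1.0   # detuning Delta = omega_c - omega_L/2
Omega = 0.6   # two-photon driving amplitude
g0 = 1e-3     # bare single-photon optomechanical coupling

# Squeezing parameter and squeezed-cavity-mode (SCM) frequency.
r = 0.25 * math.log((Delta + Omega) / (Delta - Omega))
omega_s = math.sqrt(Delta**2 - Omega**2)

# Effective couplings in the squeezed frame.
g_OM = g0 * math.cosh(2 * r)          # dispersive optomechanical coupling
g_DCE = 0.5 * g0 * math.sinh(2 * r)   # coupling associated with the DCE

# Identities implied by the definition of r:
assert abs(math.tanh(2 * r) - Omega / Delta) < 1e-12
assert abs(omega_s - Delta / math.cosh(2 * r)) < 1e-12
# Diagonalization consistency: Delta*cosh(2r) - Omega*sinh(2r) = omega_s.
assert abs(Delta * math.cosh(2 * r) - Omega * math.sinh(2 * r) - omega_s) < 1e-12
```

Note that $g_{\rm DCE}$ vanishes for $\Omega\to0$ (no squeezing, $r\to0$), consistent with the claim that the two-photon driving is what enables the resonant DCE coupling.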
We find that when $\omega_{m}=2\omega_{s}$, the resonant DCE can be demonstrated, and that the parametric energy conversion of the mechanical motion to the electromagnetic field, which was predicted in the original DCE proposals, can therefore be observed. We also find that the energy of emitted photons in the squeezed frame completely originates from the mechanical motion. Thus, parametrically driving the cavity without a moving mirror~\cite{dezael2010analogue}, corresponding to $F=0$, {\it cannot} excite the $a_{s}$ mode and {\it cannot} result in such a parametric energy conversion from mechanics to light. \section{Mechanically-induced two-photon hyper-Raman scattering} More interestingly, the DCE in the squeezed frame can be interpreted, in the laboratory frame, as mechanically-induced two-photon hyper-Raman scattering. This hyper-Raman scattering is an anti-Stokes process, as illustrated in Fig.~\ref{fig-schematic}(b). According to the Bogoliubov transformation, the squeezing gives rise to an anti-Stokes sideband at frequency $\omega_{s}+\omega_{L}/2$ [right arrow in Fig.~\ref{fig-schematic}(b)]. The two-photon driving at frequency $\omega_{L}$ produces photon pairs at frequency $\omega_{L}/2$ [left arrow in Fig.~\ref{fig-schematic}(b)]. When mechanical phonons at frequency $\omega_{m}=2\omega_{s}$ are present, a driving photon pair is scattered into the anti-Stokes sideband, while simultaneously absorbing a phonon in the mechanical resonator. Because of their different frequency from the driving photon pairs, the anti-Stokes scattered photon pairs, which are referred to as the DCE photons, can be spectrally filtered from the driving photons, which are referred to as the noise photons. In cavity optomechanics, most of the experimental and theoretical studies are carried out under detuned {\it one-photon} driving of a cavity, so that the cavity field can be split into an average coherent amplitude and a fluctuating term. 
For a red-detuned driving, a driving photon can be scattered into the cavity resonance by absorbing a phonon. This process is viewed as {\it mechanically-induced one-photon Raman scattering} [dashed arrows in Fig.~\ref{fig-schematic}(c)]. As described above, our proposal instead exploits a red-detuned {\it two-photon} driving, and {\it the mechanical motion can induce two-photon hyper-Raman scattering}. In order to compare the two scattering processes more explicitly, we consider the limit $\Omega\ll\Delta$. In this limit, the $a_{s}$ mode can be approximated by the $a$ mode, i.e., $a_{s}\approx a$, and as a result, the anti-Stokes sideband becomes the cavity resonance. Correspondingly, the effective Hamiltonian $H_{\rm eff}$ becomes \begin{align}\label{eq:widetilde_Heff} \widetilde{H}_{\rm eff}=\;&\Delta_s a^{\dag}a+\omega_{m}b^{\dag}b\nonumber\\ &+g_{\rm DCE}\left(a^{2}b^{\dag}+{\rm H.c.}\right)+\frac{1}{2}F\left(b+b^{\dag}\right). \end{align} Under the resonant condition $\omega_{m}=2\omega_{s}$ (i.e., $2\omega_{c}\approx\omega_{L}+\omega_{m}$), the dynamics described by $\widetilde{H}_{\rm eff}$ shows that a driving photon pair, rather than a single photon, is scattered into the cavity resonance by absorbing a phonon [solid arrows in Fig.~\ref{fig-schematic}(c)]. \begin{figure} \caption{(a)-(b) Photon number $\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}$ and (c)-(d) output-photon flux $\Phi_{\rm out}$ versus the detuning $\Delta_{s}$.} \label{fig-excitation-spectrum} \end{figure} \section{How to observe the dynamical Casimir effect} In our approach, we squeeze the $a$ mode to make the effective cavity frequency very close to the mechanical frequency. However, this squeezing also inputs thermal noise and two-photon correlation noise into the cavity.
Although these undesired effects are negligible in the weak-squeezing case (see below), they can be completely eliminated by coupling a squeezed-vacuum bath, e.g., with a squeezing parameter $r_{e}$ and a reference phase $\theta_{e}$, to the $a$ mode~\cite{murch2013reduction,bartkowiak2014quantum,clark2017sideband,zeytinouglu2017engineering,vahlbruch2018laser}. We assume that $r_{e}=r$ and $\theta_{e}=\pm n\pi$ ($n=1,3,5,\cdots$), so that the $a_{s}$ mode is equivalently coupled to a vacuum bath (see Appendix~\ref{sec:Optomechanical master equation}). The full dynamics is therefore determined by the standard master equation \begin{equation}\label{eq:master-equation} \dot{\rho}\left(t\right)=i\left[\rho\left(t\right),H_{\rm eff}\right]-\frac{\kappa}{2}\mathcal{L}\left(a_{s}\right)\rho\left(t\right)-\frac{\gamma_{m}}{2}\mathcal{L}\left(b\right)\rho\left(t\right), \end{equation} where $\kappa$ and $\gamma_{m}$ are the cavity and mechanical loss rates, respectively, and we have defined \begin{equation} \mathcal{L}\left(o\right)\rho\left(t\right)=o^{\dag}o\rho\left(t\right)-2o\rho\left(t\right)o^{\dag}+\rho\left(t\right)o^{\dag}o. \end{equation} We have also assumed that the mechanical resonator is coupled to a zero-temperature bath (see Appendix~\ref{sec:dynamical Casimir effect in the weak mechanical driving regime} for an analytical discussion at finite temperatures). The SCM excitation spectrum $\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}\left(\Delta_{s}\right)$, where $\average{o}_{\rm ss}$ represents a steady-state average value, is plotted in Figs.~\ref{fig-excitation-spectrum}(a) and \ref{fig-excitation-spectrum}(b). Eliminating the squeezing-induced noise ensures a zero background noise for the excitation spectrum. If the mechanical resonator is driven, then photons are excited from the vacuum, and according to energy conservation, are emitted from the mechanical resonator, together with a resonance peak in the excitation spectrum. 
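That the choice $r_{e}=r$ and $\theta_{e}=\pm n\pi$ really cancels the squeezing-induced noise can be checked against the explicit expressions for the thermal-noise and two-photon-correlation coefficients $N$ and $M$ derived in the Appendix. A minimal numerical sketch in Python (the value of $r$ is arbitrary):

```python
import cmath
import math

r = 0.7            # illustrative cavity squeezing parameter
r_e = r            # matched squeezed-vacuum-bath squeezing parameter
theta_e = math.pi  # reference phase theta_e = pi (n = 1)

# Thermal-noise coefficient N (Appendix expression).
N = (math.cosh(r)**2 * math.sinh(r_e)**2
     + math.sinh(r)**2 * math.cosh(r_e)**2
     + 0.5 * math.sinh(2 * r) * math.sinh(2 * r_e) * math.cos(theta_e))

# Two-photon-correlation coefficient M (Appendix expression).
M = ((math.sinh(r) * math.cosh(r_e)
      + cmath.exp(-1j * theta_e) * math.cosh(r) * math.sinh(r_e))
     * (math.cosh(r) * math.cosh(r_e)
        + cmath.exp(1j * theta_e) * math.sinh(r) * math.sinh(r_e)))

# Both coefficients vanish, so a_s effectively sees a vacuum bath.
assert abs(N) < 1e-12
assert abs(M) < 1e-12
```

With $N=M=0$, the dissipators for $a_{s}$ reduce to the standard vacuum Lindblad term, which is exactly the form used in the master equation above.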
We now return to the original laboratory frame and consider the steady-state output-photon flux. Because of the squeezing, the steady-state intracavity photon number, $\average{a^{\dag}a}_{\rm ss}$, in the laboratory frame includes two physical contributions, i.e., \begin{equation}\label{eq:cavity_photon_number} \average{a^{\dag}a}_{\rm ss}=\Phi_{\rm BGN}+\Phi_{\rm DCE}, \end{equation} where $\Phi_{\rm BGN}=\sinh^{2}\left(r\right)$ is the number of background-noise photons contained in the squeezed vacuum, and \begin{equation}\label{eq:DCE_signal} \Phi_{\rm DCE}=\average{a_{s}^{\dag}a_{s}}_{\rm ss}\cosh\left(2r\right)-{\rm Re}\left[\average{a_{s}^{2}}_{\rm ss}\right]\sinh\left(2r\right) \end{equation} is the number of DCE-induced photons. The output-photon flux is then given by \begin{equation}\label{eq:output_flux} \Phi_{\rm out}=\kappa\left(\Phi_{\rm BGN}+\Phi_{\rm DCE}\right), \end{equation} according to the input-output relation. We plot the flux spectrum $\Phi_{\rm out}\left(\Delta_{s}\right)$ in Figs.~\ref{fig-excitation-spectrum}(c) and \ref{fig-excitation-spectrum}(d). There exists a nonzero background noise in the photon flux spectrum, as discussed previously. Nevertheless, when driving the mechanical resonator, the DCE-induced photons are emitted from the cavity, and a resolved resonance peak can be observed. We find that the behavior of the flux spectrum directly reflects that of the excitation spectrum. Hence, the emergence of the resonance peak in the flux spectrum can be considered as an experimentally observable signature of the DCE. \begin{figure} \caption{(a) Signal-to-noise ratio $\mathcal{R}$ and (b) equal-time second-order correlation function $g_{s}^{\left(2\right)}\!\left(0\right)$ versus the mechanical driving $F$.} \label{fig-signal-to-noise-ratio} \end{figure} Owing to the existence of the background noise in the flux $\Phi_{\rm out}$, we now discuss the ability to resolve the DCE signal $\Phi_{\rm DCE}$ from the background noise $\Phi_{\rm BGN}$ at resonance $\omega_{m}=\omega_{d}=2\omega_{s}$.
In order to quantify this, we typically employ the signal-to-noise ratio, defined as \begin{equation}\label{eq:signal_to_noise_ratio} \mathcal{R}=\frac{\Phi_{\rm DCE}}{\Phi_{\rm BGN}}. \end{equation} The signal-resolved regime often requires $\mathcal{R}>1$, allowing for a resolved DCE-signal detection. We find that, by increasing the mechanical driving $F$, the signal $\Phi_{\rm DCE}$ becomes stronger, but at the same time, the noise $\Phi_{\rm BGN}$ remains unchanged. This enables an improvement in the signal-to-noise ratio with the mechanical force. Consequently, the desired signal can be directly driven from the unresolved to resolved regime, as shown in Fig.~\ref{fig-signal-to-noise-ratio}(a). Assuming a realistic parameter $g_{0}=10\gamma_{m}$, we find that a mechanical driving of $F=15\gamma_{m}$ is able to keep the ratio $\mathcal{R}$ above $1$ for $\kappa\leq1000\gamma_{m}$. With these parameters, we can obtain $\average{a_{s}^{\dag}a_{s}}_{\rm ss}\approx0.2$, as given in Fig.~\ref{fig-excitation-spectrum}. Therefore, in the laboratory frame, a cavity having a typical linewidth of $\kappa/2\pi=2.0$~MHz could emit $\approx1.4\times10^{7}$ photons per second, which is larger than the background photon emission $\approx6.3\times10^{6}$ per second. The ratio $\mathcal{R}$ can be made $\gg1$ as long as the driving $F$ is further increased, so that the background noise can be even neglected compared to the DCE signal. This is demonstrated in Appendix~\ref{sec:Semi-classical treatment for the strong mechanical driving}, where we make a semi-classical approximation for investigating the DCE under a strong-$F$ drive. For \begin{equation} F\gg\left(g_{\rm DCE}+\kappa\gamma_{m}/4g_{\rm DCE}\right), \end{equation} the system behaves classically~\cite{wilson2010photon, butera2019mechanical}, and quantum effects are negligible. Thus in order to observe the DCE, such a regime needs to be avoided. 
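The flux decomposition $\Phi_{\rm out}=\kappa(\Phi_{\rm BGN}+\Phi_{\rm DCE})$ and the ratio $\mathcal{R}=\Phi_{\rm DCE}/\Phi_{\rm BGN}$ can be evaluated with a few lines of Python. In the sketch below, the steady-state moments $\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}$, ${\rm Re}[\langle a_{s}^{2}\rangle_{\rm ss}]$, and the squeezing parameter $r$ are hypothetical placeholder values (in the paper they follow from solving the master equation numerically), so only the structure of the estimate, not the specific numbers, should be read off.

```python
import math

# Placeholder steady-state moments (hypothetical; in the paper they come
# from the master-equation solution).
n_s = 0.2       # <a_s^dag a_s>_ss, of the order quoted in the text
re_as2 = -0.3   # Re[<a_s^2>_ss] (hypothetical value)
r = 0.66        # squeezing parameter (hypothetical value)
kappa = 2 * math.pi * 2.0e6   # cavity linewidth kappa/2pi = 2 MHz, in 1/s

Phi_BGN = math.sinh(r)**2                                   # background noise
Phi_DCE = n_s * math.cosh(2 * r) - re_as2 * math.sinh(2 * r)  # DCE signal
Phi_out = kappa * (Phi_BGN + Phi_DCE)   # output flux, photons per second
R = Phi_DCE / Phi_BGN                   # signal-to-noise ratio

# With these placeholder moments the flux is of order 10^7 photons/s,
# comparable in magnitude to the estimate quoted in the text.
assert Phi_out > 0
assert R > 1   # signal-resolved regime for these placeholder values
```

Increasing $F$ raises $\Phi_{\rm DCE}$ while leaving $\Phi_{\rm BGN}=\sinh^{2}(r)$ untouched, which is why $\mathcal{R}$ grows with the mechanical driving.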
Note, however, that the signal can still be resolved even for $\mathcal{R}<1$, if standard techniques of Raman spectroscopy are used. This is because the background noise is due to driving photons at frequency $\omega_{L}/2$, while the DCE photons have a frequency $\omega_{s}+\omega_{L}/2$. The monotonic increase of the flux $\Phi_{\rm out}$ at resonance with the driving $F$ can, therefore, be considered as another signature of the mechanical DCE in experiments. The DCE photons are emitted in pairs, and could exhibit photon bunching~\cite{johansson2010dynamical,macri2018nonperturbative,stassi2013spontaneous}. The essential parameter characterizing this property is the equal-time second-order correlation function, \begin{equation}\label{eq:g2_correlation} g^{\left(2\right)}_{s}\!\left(0\right)=\frac{\average{a_{s}^{\dag 2}a_{s}^{2}}_{\rm ss}}{\average{a_{s}^{\dag}a_{s}}_{\rm ss}^{2}}. \end{equation} We plot it as a function of the mechanical driving in Fig.~\ref{fig-signal-to-noise-ratio}(b). We find that \begin{equation} g_{s}^{\left(2\right)}\!\left(0\right)\approx\frac{1}{2\average{a_{s}^{\dag}a_{s}}_{\rm ss}} \end{equation} in the $F\rightarrow0$ limit, and $\approx1$ in the $F\rightarrow\infty$ limit (see Appendix~\ref{sec:dynamical Casimir effect in the weak mechanical driving regime} and~\ref{sec:Semi-classical treatment for the strong mechanical driving}). Hence, for a weak-$F$ drive, the very small $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ leads to $g_{s}^{\left(2\right)}\!\left(0\right)\gg1$. This corresponds to strong photon bunching. In the special case of $F=0$, the $a_{s}$ mode cannot be excited although the two-photon driving still exists, and as a consequence, the $g_{s}^{\left(2\right)}\left(0\right)$ correlation cannot be observed. We also find that with increasing the driving $F$, the $g_{s}^{\left(2\right)}\!\left(0\right)$ correlation decreases and then, as suggested above, approaches its lower bound equal to $1$. 
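The weak-drive estimate $g_{s}^{(2)}(0)\approx1/(2\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss})$ makes the strong bunching quantitative: a small steady-state photon number directly implies a large correlation. A one-line illustration in Python (the photon number is a hypothetical example value):

```python
# Weak-mechanical-drive (F -> 0) estimate of g_s^(2)(0).
n_s = 0.005                 # hypothetical small steady-state photon number
g2 = 1.0 / (2.0 * n_s)      # g_s^(2)(0) ~ 1/(2 <n>) in the F -> 0 limit

# g2 >> 1 signals strong photon bunching of the pair-emitted DCE photons.
assert abs(g2 - 100.0) < 1e-9
```

In the opposite limit $F\to\infty$, the same correlation approaches its classical lower bound $g_{s}^{(2)}(0)\approx1$, as stated above.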
These features confirm that the photons are bunched, as required. So far, we have assumed a model with a squeezed-vacuum bath. To avoid using such a bath and simplify the model, we now consider the limit of $\Omega\ll\Delta$. In this limit, the effective Hamiltonian is $\widetilde{H}_{\rm eff}$, as given above. In the absence of the squeezed-vacuum bath, the $a$ mode is coupled to a vacuum bath, and the master equation is the same as given in Eq.~(\ref{eq:master-equation}), but with $a_{s}\mapsto a$. We find that the noise induced by squeezing the cavity, which includes thermal noise $\propto\sinh^{2}\left(r\right)$ and two-photon correlation noise $\propto\sinh\left(2r\right)$, becomes strongly suppressed, even when there is no squeezed-vacuum bath. The DCE dynamics of the simplified model is therefore similar to what we have already demonstrated for the model that includes a squeezed-vacuum bath. Such a similarity can be made closer by decreasing the ratio $\Omega/\Delta$, but at the expense of the DCE radiation strength. In the limit of $\Omega\ll\Delta$, the background noise is $\approx0$, so that all the photons radiated from the cavity can be thought of as the DCE photons. For realistic parameters $g_{0}=10\gamma_{m}$, $F=15\gamma_{m}$ and $\Omega/\Delta=0.1$, we could obtain $\average{a^{\dag}a}_{\rm ss}\approx1.8\times10^{-3}$ at resonance ($\omega_{m}=\omega_{d}=2\omega_{s}$). This results in an output flux $\approx2.0\times10^{4}$ photons per second for $\kappa/2\pi=2$~MHz. This radiation can be measured using single-photon detectors. \section{Possible implementations} As an example, we consider an {\it LC} superconducting circuit with a micromechanical membrane (see Appendix~\ref{sec:Possible-implementation-with-superconducting-circuits} for details). In this device, the {\it LC} circuit is used to form a single-mode microwave cavity. The mechanical motion of the membrane modulates the capacitance of the {\it LC} circuit, and thus the cavity frequency. 
In order to squeeze the cavity mode, an additional tunable capacitor is embedded into the device. Its cosine-wave modulation serves as a two-photon driving for the cavity mode. The squeezed-vacuum reservoir can be generated through an {\it LC} circuit with a tunable capacitor, or through a Josephson parametric amplifier~\cite{murch2013reduction,toyli2016resonance}. Alternatively, our proposal can be implemented in an optical system such as a whispering-gallery-mode (WGM) microresonator coupled to a mechanical breathing mode~\cite{kippenberg2005analysis,schliesser2006radiation,fiore2011storing,dong2012optomechanical,verhagen2012quantum,shen2016experimental,monifi2016optomechanically}. The WGM microresonator made from nonlinear crystals exhibits strong optical nonlinearities~\cite{furst2011quantum,sedlmeir2017polarization,trainor2018selective}, which is the essential requirement for squeezing. The squeezed-vacuum reservoir for the optical cavity can be prepared by pumping a nonlinear medium, e.g., periodically-poled ${\rm KTiOPO}_{4}$ (PPKTP) crystal, in a cavity~\cite{ast2013high,serikawa2016creation,vahlbruch2016detection,schnabel2017squeezed}. \section{Conclusions} We have introduced a method for how to observe the mechanical DCE in an optomechanical system. The method eliminates the problematic need for an extremely high mechanical-oscillation frequency and an ultrastrong single-photon optomechanical coupling. Thus, it paves an experimentally feasible path to observing quantum radiation from a moving mirror. Our method can be interpreted in the laboratory frame as mechanically-induced two-photon hyper-Raman scattering, an anti-Stokes process of scattering a driving photon pair into a higher energy mode by absorbing a phonon. For the absorbed phonon, its annihilation indicates the creation of a real photon pair out of the quantum vacuum in the squeezed frame. 
We have also shown a surprising result: that the squeezing, which additionally induces and amplifies the DCE, can be extremely weak. Note that in this case, the unconventional DCE can be considered somewhat similar to unconventional photon blockade (UPB)~\cite{flayac2017unconventional}. Indeed, UPB is induced by a nonlinearity, which can be extremely small. Finally, we expect that the approach presented here could find diverse applications in theoretical and experimental studies of quantum vacuum radiation. \begin{acknowledgments} S.S. acknowledges the Army Research Office (ARO) (Grant No. W911NF1910065). F.N. is supported in part by the MURI Center for Dynamic Magneto-Optics via the Air Force Office of Scientific Research (AFOSR) (FA9550-14-1-0040), Army Research Office (ARO) (Grant No. W911NF-18-1-0358), Asian Office of Aerospace Research and Development (AOARD) (Grant No. FA2386-18-1-4045), Japan Science and Technology Agency (JST) (via the Q-LEAP program, and the CREST Grant No. JPMJCR1676), Japan Society for the Promotion of Science (JSPS) (JSPS-RFBR Grant No. 17-52-50023, and JSPS-FWO Grant No. VS.059.18N), the RIKEN-AIST Challenge Research Fund, the Foundational Questions Institute (FQXi), and the NTT PHI Labs. \end{acknowledgments} \section*{APPENDICES} \appendix \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \makeatletter \renewcommand{\thefigure}{A\arabic{figure}} \section{Optomechanical master equation, effective Hamiltonian, and off-resonant signal-to-noise ratio} \label{sec:Optomechanical master equation} \subsection{Optomechanical master equation} In order to evaluate the steady-state behavior of the system, its interaction with the environment needs to be described carefully. In our proposal for observing the DCE, we parametrically squeeze the cavity mode.
Related methods have been used to enhance the light-matter interaction in optomechanical systems~\cite{lu2015squeezed,lemonde2016enhanced} and in cavity electrodynamics systems~\cite{qin2018exponentially, leroux2018enhancing}. This can make the squeezed-cavity-mode (SCM) frequency comparable to the mechanical frequency, so that the mechanically induced DCE can be observed in a common optomechanical setup without the need for an ultra-high mechanical frequency and an ultrastrong single-photon optomechanical coupling. However, the squeezing can also introduce undesired noise, including thermal noise and two-photon correlation, into the cavity. We can remove them by coupling a squeezed-vacuum bath to the bare-cavity mode. In this section, we give a detailed derivation of the master equation when the bare-cavity mode is coupled to a squeezed-vacuum bath and the mechanical mode is coupled to a thermal bath. We show that the noise induced by squeezing the cavity can be completely eliminated. To begin with, we consider the Hamiltonian for the interaction between the system and the baths, which is given by \begin{equation} H_{\rm bath}=H_{\rm bath}^{0}+H_{\rm bath}^{c}+H_{\rm bath}^{m}, \end{equation} where \begin{align} H_{\rm bath}^{0}&=\sum_{l}\nu_{l}\left[t_{c}^{\dag}\left(\nu_{l}\right)t_{c}\left(\nu_{l}\right) +t_{m}^{\dag}\left(\nu_{l}\right)t_{m}\left(\nu_{l}\right)\right],\\ H_{\rm bath}^{c}&=\sum_{l}\lambda_{c}\left(\nu_{l}\right)\left[a^{\dag}t_{c}\left(\nu_{l}\right) +t_{c}^{\dag}\left(\nu_{l}\right)a\right],\\ H_{\rm bath}^{m}&=\sum_{l}\lambda_{m}\left(\nu_{l}\right)\left[b^{\dag}t_{m}\left(\nu_{l}\right) +t_{m}^{\dag}\left(\nu_{l}\right)b\right]. 
\end{align} Here, $H_{\rm bath}^{0}$ is the free Hamiltonian of the baths, with $t_{c/m}\left(\nu_{l}\right)$ the annihilation operators for the cavity and mechanical bath modes of frequency $\nu_{l}$, and $H_{\rm bath}^{c/m}$ represent the couplings of the cavity and the mechanical resonator to their baths, with the coupling strengths $\lambda_{c/m}\left(\nu_{l}\right)$ depending on the frequency $\nu_{l}$. To derive the master equation, we first switch into the frame rotating at \begin{equation} H_{0}=\omega_{L}a^{\dag}a/2+H_{\rm bath}^{0}, \end{equation} to introduce the SCM using the Bogoliubov transformation $a_{s}=\cosh\left(r\right)a+\sinh\left(r\right)a^{\dag}$. Then, we again switch into the frame rotating at $H_{\rm CD}=\omega_{s}a_{s}^{\dag}a_{s}$, with $\omega_{s}=\sqrt{\Delta^2-\Omega^2}$ being the SCM frequency, where $\Delta=\omega_{c}-\omega_{L}/2$ is the detuning between the bare-cavity frequency $\omega_{c}$ and the half-frequency, $\omega_{L}/2$, of the two-photon driving, and $\Omega$ is the two-photon driving amplitude. The couplings between the system and the baths are, accordingly, transformed to \begin{align} H_{\rm bath}^{c}\left(t\right)&=a\left(t\right)T_{c}^{\dag}\left(t\right)+a^{\dag}\left(t\right)T_{c}\left(t\right),\\ H_{\rm bath}^{m}\left(t\right)&=b\left(t\right)T_{m}^{\dag}\left(t\right)+b^{\dag}\left(t\right)T_{m}\left(t\right). \end{align} Here, we have defined \begin{align} \label{seq:time-dependent-a} a\left(t\right)&=\exp\left(-i\omega_{L}t/2\right)\exp\left(iH_{\rm CD}t\right)a\exp\left(-iH_{\rm CD}t\right),\\ b\left(t\right)&=\exp\left(-i\omega_{m}t\right)b,\\ T_{c}\left(t\right)&=\sum_{\nu_{l}}\lambda_{c}\left(\nu_{l}\right)t_{c}\left(\nu_{l}\right)\exp\left(-i\nu_{l}t\right),\\ T_{m}\left(t\right)&=\sum_{\nu_{l}}\lambda_{m}\left(\nu_{l}\right)t_{m}\left(\nu_{l}\right)\exp\left(-i\nu_{l}t\right). 
\end{align} Following the standard procedure in Ref.~\cite{scully1997book} and, then, returning to the frame rotating at $H_{0}$, we can obtain the following master equation expressed, in terms of the $a_{s}$ mode, as \begin{align}\label{seq:full-master-equation} \frac{d}{dt}\rho\left(t\right)=\;&i\left[\rho\left(t\right),H\right]\nonumber\\ &-\frac{\kappa}{2}\left(N+1\right)\mathcal{L}\left(a_{s}\right)\rho\left(t\right) -\frac{\kappa}{2}N\mathcal{L}\left(a_{s}^{\dag}\right)\rho\left(t\right)\nonumber\\ &+\frac{\kappa}{2}M\mathcal{L}^{\prime}\left(a_{s}\right)\rho\left(t\right) +\frac{\kappa}{2}M^{*}\mathcal{L}^{\prime}\left(a_{s}^{\dag}\right)\rho\left(t\right)\nonumber\\ &-\frac{\gamma_{m}}{2}\left(n_{\rm th}+1\right)\mathcal{L}\left(b\right)\rho\left(t\right) -\frac{\gamma_{m}}{2}n_{\rm th}\mathcal{L}\left(b^{\dag}\right)\rho\left(t\right), \end{align} where the Lindblad superoperators are defined by \begin{align} \mathcal{L}\left(o\right)\rho\left(t\right) &=o^{\dag}o\rho\left(t\right)-2o\rho\left(t\right)o^{\dag}+\rho\left(t\right)o^{\dag}o,\\ \mathcal{L}^{\prime}\left(o\right)\rho\left(t\right) &=oo\rho\left(t\right)-2o\rho\left(t\right)o+\rho\left(t\right)oo, \end{align} and $N$, $M$ are given, respectively, by \begin{align} \label{seq:thermal-nosie-N} N=&\cosh^{2}\left(r\right)\sinh^{2}\left(r_{e}\right)+\sinh^{2}\left(r\right)\cosh^{2}\left(r_{e}\right)\nonumber\\ &+\frac{1}{2}\sinh\left(2r\right)\sinh\left(2r_{e}\right)\cos\left(\theta_{e}\right),\\ \label{seq:two-photon-correlation-M} M=&\left[\sinh\left(r\right)\cosh\left(r_{e}\right) +\exp\left(-i\theta_{e}\right)\cosh\left(r\right)\sinh\left(r_{e}\right)\right]\nonumber\\ &\times\left[\cosh\left(r\right)\cosh\left(r_{e}\right)+\exp\left(i\theta_{e}\right) \sinh\left(r\right)\sinh\left(r_{e}\right)\right], \end{align} corresponding to the thermal noise and two-photon correlation, and where \begin{align} \kappa&=2\pi d_{c}\left(\omega_{L}/2\right)\lambda_{c}^{2}\left(\omega_{L}/2\right),\\ 
\gamma_{m}&=2\pi d_{m}\left(\omega_{m}\right)\lambda_{m}^{2}\left(\omega_{m}\right), \end{align} represent, respectively, the cavity and mechanical decay rates, with $d_{c}\left(\omega_{L}/2\right)$ being the density of states for the cavity bath at frequency $\omega_{L}/2$, and $d_{m}\left(\omega_{m}\right)$ being the density of states for the mechanical bath at frequency $\omega_{m}$. Moreover, $n_{\rm th}=\left[\exp\left(\omega_{m}/k_{B}T\right)-1\right]^{-1}$ is the equilibrium phonon occupation at temperature $T$. Note that, to derive the master equation in Eq.~(\ref{seq:full-master-equation}), we have assumed that the central frequency of the squeezed-vacuum bath is equal to half the two-photon driving frequency. In addition, we have made the following approximations, \begin{align} d_{c}\left(\omega_{L}/2\pm\omega_{s}\right)&\approx d_{c}\left(\omega_{L}/2\right),\\ \lambda_{c}\left(\omega_{L}/2\pm\omega_{s}\right)&\approx \lambda_{c}\left(\omega_{L}/2\right). \end{align} This is because, in our case, the SCM frequency $\omega_{s}$ is tuned to be comparable to the mechanical frequency $\omega_{m}$ ($\sim$~MHz). Thus, it is much smaller than the two-photon driving frequency $\omega_{L}$ (of the order of GHz for microwave light or even THz for optical light). According to Eqs.~(\ref{seq:thermal-nosie-N}) and (\ref{seq:two-photon-correlation-M}), we can have $N=M=0$ for $r_{e}=r$ and $\theta_{e}=\pm n\pi$ ($n=1,3,5,\cdots$), and thus, we have, \begin{align}\label{seq:master-equation-in-terms-of-the-squeezed-mode} \frac{d}{dt}\rho\left(t\right)=\;&i\left[\rho\left(t\right),H\right]-\frac{\kappa}{2}\mathcal{L}\left(a_{s}\right)\rho\left(t\right)\nonumber\\ &-\frac{\gamma_{m}}{2}\left(n_{\rm th}+1\right)\mathcal{L}\left(b\right)\rho\left(t\right) -\frac{\gamma_{m}}{2}n_{\rm th}\mathcal{L}\left(b^{\dag}\right)\rho\left(t\right). 
\end{align}
We find from Eq.~(\ref{seq:master-equation-in-terms-of-the-squeezed-mode}) that the squeezing-induced noise is completely eliminated, so that the $a_{s}$ mode is equivalently coupled to the thermal vacuum bath. As we demonstrate below, eliminating this noise ensures that the background noise is zero for the SCM excitation spectrum in the squeezed frame; as a result, the background noise of the output-photon flux spectrum in the original laboratory frame originates only from photons contained in the squeezed vacuum. This minimizes the background noise for the observation of the DCE, and thus enables the DCE to be observed more clearly in experiments.
\begin{figure}
\caption{Signal-to-noise ratio $\mathcal{R}_{s}$ in the presence of the imperfections $\delta r_{e}$ and $\delta\theta_{e}$.}
\label{sfig_SNR_squeezed_frame}
\end{figure}
When the conditions $r=r_{e}$ and $\theta_{e}=\pm n\pi$ ($n=1,3,5,\cdots$) are not perfectly satisfied, the squeezing-induced noise cannot be eliminated completely (i.e., $N\neq0$ and $M\neq0$). However, according to the master equation in Eq.~(\ref{seq:full-master-equation}), such imperfections do not affect the occurrence of the DCE; they only introduce additional noise. To quantify this undesired effect, we use the signal-to-noise ratio defined as
\begin{equation}
\mathcal{R}_{s}=\frac{\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}^{F\neq0}-\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}^{F=0}}{\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}^{F=0}},
\end{equation}
where $\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}^{F=0}$ ($\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}^{F\neq0}$) is the steady-state $\langle a_{s}^{\dag}a_{s}\rangle$ when $F=0$ ($F\neq0$), and the subscript ``ss" stands for ``steady state". We plot $\mathcal{R}_{s}$ in Fig.~\ref{sfig_SNR_squeezed_frame}, according to the master equation given in Eq.~(\ref{seq:full-master-equation}) but replacing $H\mapsto H_{\rm eff}$. In this figure, we assume that $r_{e}=r+\delta r_{e}$ and $\theta_{e}=\pi+\delta\theta_{e}$.
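The cancellation $N=M=0$ for a matched bath ($r_{e}=r$, $\theta_{e}=\pi$) can be verified directly from Eqs.~(\ref{seq:thermal-nosie-N}) and (\ref{seq:two-photon-correlation-M}). The following minimal NumPy sketch (the function names are ours) evaluates both expressions:

```python
import numpy as np

def thermal_noise_N(r, r_e, theta_e):
    # Thermal-noise occupation N induced by the squeezing
    return (np.cosh(r)**2 * np.sinh(r_e)**2
            + np.sinh(r)**2 * np.cosh(r_e)**2
            + 0.5 * np.sinh(2 * r) * np.sinh(2 * r_e) * np.cos(theta_e))

def two_photon_M(r, r_e, theta_e):
    # Two-photon correlation M induced by the squeezing
    return ((np.sinh(r) * np.cosh(r_e)
             + np.exp(-1j * theta_e) * np.cosh(r) * np.sinh(r_e))
            * (np.cosh(r) * np.cosh(r_e)
               + np.exp(1j * theta_e) * np.sinh(r) * np.sinh(r_e)))

# Matched squeezed-vacuum bath: r_e = r, theta_e = pi
r = 1.2
N = thermal_noise_N(r, r, np.pi)
M = two_photon_M(r, r, np.pi)
```

Both $N$ and $|M|$ vanish to machine precision, whereas a phase mismatch (e.g., $\theta_{e}=0$) leaves a large residual occupation, consistent with the imperfection analysis below.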
In the perfect case of $N=M=0$, $\mathcal{R}_{s}\rightarrow\infty$ because $\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}^{F=0}=0$. Thus, we find in Fig.~\ref{sfig_SNR_squeezed_frame} that the noise induced by imperfect parameters reduces the ratio $\mathcal{R}_{s}$. However, we also find that with increasing driving $F$, the noise becomes smaller compared to the DCE signal, such that it can even be neglected for sufficiently strong $F$.
\subsection{Effective Hamiltonian}
\begin{figure}
\caption{(a) Squeezed-cavity-mode (SCM) frequency $\omega_{s}$ as a function of $\Delta$ and $\Omega$. (b) State-conversion fidelity $\mathcal{F}$.}
\label{sfig:onephonontotwophoton}
\end{figure}
\begin{figure}
\caption{Time evolution of $\langle a_{s}^{\dag}a_{s}\rangle$ in the presence of the driving $F$.}
\label{sfig_dynamics_evolution}
\end{figure}
The Hamiltonian in Eqs.~(\ref{seq:full-master-equation}) and (\ref{seq:master-equation-in-terms-of-the-squeezed-mode}) is expressed, in terms of the $a_{s}$ mode, as
\begin{align}\label{seq:full-Hamiltonian-in-squeezed-frame}
H=\;&\omega_{s}a_{s}^{\dag}a_{s}+\omega_{m}b^{\dag}b-g_{\rm OM}a_{s}^{\dag}a_{s}\left(b+b^{\dag}\right)\nonumber\\
&+g_{\rm DCE}\left(a_{s}^{2}+a_{s}^{\dag2}\right)\left(b+b^{\dag}\right)\nonumber\\
&+\frac{F}{2}\left[\exp\left(i\omega_{d}t\right)b+\exp\left(-i\omega_{d}t\right)b^{\dag}\right],
\end{align}
where $g_{\rm OM}=g_{0}\cosh\left(2r\right)$ and $g_{\rm DCE}=g_{0}\sinh\left(2r\right)/2$, with $r=\left(1/4\right)\ln\left[\left(\Delta+\Omega\right)/\left(\Delta-\Omega\right)\right]$ being the squeezing parameter of the cavity. In Fig.~\ref{sfig:onephonontotwophoton}(a), we plot $\omega_{s}$ as a function of $\Delta$ and $\Omega$, and find that the resonance condition $\omega_{m}=2\omega_{s}$, for a parametric coupling between the SCM and the mechanical mode, can be achieved with experimentally modest parameters. The Hamiltonian $H$ essentially describes an optomechanical system in which the boundary condition of a squeezed field is modulated by the mechanical motion of a driven mirror.
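The relations between $(\Delta,\Omega)$ and $(r,\omega_{s},g_{\rm OM},g_{\rm DCE})$ can be sketched numerically; a useful consistency check is the identity $\cosh(2r)=\Delta/\omega_{s}$ and $\sinh(2r)=\Omega/\omega_{s}$, which follows from the definitions above. The helper name and the example numbers below are purely illustrative:

```python
import numpy as np

def squeezed_mode_params(Delta, Omega, g0):
    """SCM frequency, squeezing parameter, and enhanced couplings.

    Assumes below-threshold two-photon driving, Delta > Omega > 0."""
    r = 0.25 * np.log((Delta + Omega) / (Delta - Omega))  # squeezing parameter
    omega_s = np.sqrt(Delta**2 - Omega**2)                # SCM frequency
    g_OM = g0 * np.cosh(2 * r)                            # enhanced OM coupling
    g_DCE = g0 * np.sinh(2 * r) / 2                       # DCE coupling
    return r, omega_s, g_OM, g_DCE

# Illustrative numbers: Delta ~ GHz, Omega tuned so that omega_s ~ MHz
Delta, Omega, g0 = 1.0e9, 1.0e9 - 500.0, 100.0
r, omega_s, g_OM, g_DCE = squeezed_mode_params(Delta, Omega, g0)
```

With these numbers $\omega_{s}=\sqrt{\Delta^{2}-\Omega^{2}}\approx1$~MHz, so the resonance $\omega_{m}=2\omega_{s}$ becomes reachable for MHz mechanical modes while $g_{\rm DCE}$ is enhanced by roughly $\Omega/(2\omega_{s})$ relative to $g_{0}$.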
In the limit $\left\{\omega_{s}, \omega_{m},\omega_{d}\right\}\gg\left\{g_{\rm OM}, g_{\rm DCE}, F\right\}$, we can apply the rotating-wave approximation, such that the coherent dynamics of the system is governed by the following effective Hamiltonian, \begin{align}\label{seq:JC-like-Hamiltonian-in-terms-of-as} H_{\rm eff}=\;&\Delta_{s}a_{s}^{\dag}a_{s}+\Delta_{m}b^{\dag}b\nonumber\\ &+g_{\rm DCE}\left(a_{s}^{2}b^{\dag}+a_{s}^{\dag2}b\right)+\frac{F}{2}\left(b+b^{\dag}\right), \end{align} where $\Delta_{s}=\omega_{s}-\omega_{d}/2$ and $\Delta_{m}=\omega_{m}-\omega_{d}$. The master equation in Eq.~(\ref{seq:master-equation-in-terms-of-the-squeezed-mode}) is then reduced to \begin{align}\label{seq:effective-master-equation-in-terms-of-the-squeezed-mode} \frac{d}{dt}\rho\left(t\right)=\;&i\left[\rho\left(t\right),H_{\rm eff}\right]-\frac{\kappa}{2}\mathcal{L}\left(a_{s}\right)\rho\left(t\right)\nonumber\\ &-\frac{\gamma_{m}}{2}\left(n_{\rm th}+1\right)\mathcal{L}\left(b\right)\rho\left(t\right) -\frac{\gamma_{m}}{2}n_{\rm th}\mathcal{L}\left(b^{\dag}\right)\rho\left(t\right). \end{align} We find, according to Eq.~(\ref{seq:JC-like-Hamiltonian-in-terms-of-as}), that the coupling of the states $|0_{\rm s},1\rangle$ and $|2_{\rm s},0\rangle$, where the first number in the ket refers to the SCM photon number and the second one to the mechanical phonon number, is given by \begin{equation} g_{|0_{\rm s},1\rangle\leftrightarrow|2_{\rm s},0\rangle}=\sqrt{2}g_{\rm DCE}. \end{equation} In the squeezed frame, this means that under the time evolution, one phonon can be converted into two photons, and vice versa, at resonance $\omega_{m}=2\omega_{s}$. To confirm such a state conversion, we perform numerics, as shown in Fig.~\ref{sfig:onephonontotwophoton}(b). 
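As a consistency check before turning to the numerics, the $\sqrt{2}g_{\rm DCE}$ coupling element can be evaluated directly in a truncated Fock basis; the sketch below (plain NumPy, with ad hoc `destroy` and `fock` helpers of our own) computes $\langle 2_{\rm s},0|H_{\rm int}|0_{\rm s},1\rangle$ for the interaction term of Eq.~(\ref{seq:JC-like-Hamiltonian-in-terms-of-as}):

```python
import numpy as np

def destroy(n):
    # truncated bosonic annihilation operator on an n-dimensional Fock space
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

Nc, Nm = 4, 2                           # photon and phonon cutoffs
a_s = np.kron(destroy(Nc), np.eye(Nm))  # SCM mode (first tensor factor)
b = np.kron(np.eye(Nc), destroy(Nm))    # mechanical mode (second factor)

def fock(n_photon, n_phonon):
    v = np.zeros(Nc * Nm)
    v[n_photon * Nm + n_phonon] = 1.0
    return v

g_DCE = 1.0
# interaction term g_DCE (a_s^2 b^dag + a_s^dag^2 b); real matrices, so .T = dagger
H_int = g_DCE * (a_s @ a_s @ b.T + a_s.T @ a_s.T @ b)
elem = fock(2, 0) @ H_int @ fock(0, 1)  # <2_s,0| H_int |0_s,1>
```

This confirms the $\sqrt{2}g_{\rm DCE}$ coupling at the operator level; the master-equation numerics then probe the full dissipative state-conversion dynamics.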
Specifically, we use the master equation in Eq.~(\ref{seq:master-equation-in-terms-of-the-squeezed-mode}) to calculate the fidelity, $\mathcal{F}=\langle2_{\rm s},0|\rho_{\rm actual}\left(t\right)|2_{\rm s},0\rangle$, where $\rho_{\rm actual}\left(t\right)$ is the actual state. It is seen in Fig.~\ref{sfig:onephonontotwophoton}(b) that we obtain the expected state conversion between light and mechanics, with a maximum conversion at resonance. Note that, owing to the presence of the cavity and mechanical losses, the maximum conversion fidelity decreases with time. To describe the dynamics of the DCE further, we plot the time evolution of $\langle a_{s}^{\dag}a_{s}\rangle$ in the presence of the driving $F$ in Fig.~\ref{sfig_dynamics_evolution}. We find that $\langle a_{s}^{\dag}a_{s}\rangle$ increases with time and then gradually approaches its stationary value. For the experimental value $\gamma_{m}\approx200$~Hz reported in Ref.~\cite{teufel2011sideband}, the stationary state is reached within a time $\approx5/\gamma_{m}\approx25$~ms. In Eq.~(\ref{seq:JC-like-Hamiltonian-in-terms-of-as}), we made the rotating-wave approximation and neglected the high-frequency component
\begin{align}
H_{\rm high}=&-g_{\rm OM}a_{s}^{\dag}a_{s}\left[\exp\left(-i\omega_{d}t\right)b+\exp\left(i\omega_{d}t\right)b^{\dag}\right]\nonumber\\
&+g_{\rm DCE}\left[\exp\left(-i2\omega_{d}t\right)a_{s}^{2}b+\exp\left(i2\omega_{d}t\right)a_{s}^{\dag2}b^{\dag}\right].
\end{align}
In typical situations, $\left\{g_{\rm OM}, g_{\rm DCE}\right\}\ll\omega_{d}$, which allows a time-averaging treatment of $H_{\rm high}$ using the formalism of Ref.~\cite{gamel2010time}.
After a straightforward calculation, the behavior of $H_{\rm high}$ can be approximated, at resonance $\omega_{m}=\omega_{d}=2\omega_{s}$ (i.e., $\Delta_{s}=\Delta_{m}=0$), as
\begin{align}\label{seq:time-average-Hamiltonian}
H_{\rm high}\approx H_{\rm TA}=&-\frac{g_{\rm OM}^{2}}{\omega_{m}}\left(a_{s}^{\dag}a_{s}\right)^{2}-\frac{g_{\rm DCE}^{2}}{2\omega_{m}}\big[a_{s}^{\dag2}a_{s}^{2}\nonumber\\
&+2\left(2a_{s}^{\dag}a_{s}+1\right)b^{\dag}b+2\left(2a_{s}^{\dag}a_{s}+1\right)\big].
\end{align}
The Hamiltonian $H$ is, accordingly, transformed to
\begin{equation}\label{seq:averaged_full_hamiltonian}
H\approx H_{\rm eff}+H_{\rm TA}.
\end{equation}
For realistic parameters, the couplings $g_{\rm OM}$ and $g_{\rm DCE}$ are three orders of magnitude smaller than $\omega_{m}$. We find from Eq.~(\ref{seq:time-average-Hamiltonian}) that the high-frequency term $H_{\rm high}$ can be neglected, compared to the low-frequency term $H_{\rm eff}$. To confirm this, in Fig.~\ref{sfig_no_RWA_steady_state} we numerically calculate $\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}$ using the low-frequency term $H_{\rm eff}$ and the full Hamiltonian $H$ given in Eq.~(\ref{seq:averaged_full_hamiltonian}), respectively. Comparing these results, we find excellent agreement, so the high-frequency term $H_{\rm high}$ can be safely neglected, as expected.
\begin{figure}
\caption{Effects of the high-frequency component $H_{\rm high}$ on the steady state.}
\label{sfig_no_RWA_steady_state}
\end{figure}
\subsection{Off-resonant signal-to-noise ratio}
\begin{figure}
\caption{Signal-to-noise ratio $\mathcal{R}$ as a function of the detunings $\Delta_{s}$ and $\Delta_{m}$.}
\label{sfig_SNR_detuning}
\end{figure}
In the main article, the signal-to-noise ratio $\mathcal{R}$ is discussed at resonance $\omega_{m}=\omega_{d}=2\omega_{s}$ (i.e., $\Delta_{s}=\Delta_{m}=0$). We now discuss the ratio $\mathcal{R}$ in the off-resonance case, where $\Delta_{s}\neq0$ and $\Delta_{m}\neq0$.
We plot the ratio $\mathcal{R}$ as a function of the detunings $\Delta_{s}$ and $\Delta_{m}$ in Fig.~\ref{sfig_SNR_detuning}. There, the results are obtained by numerically integrating the master equation in Eq.~(\ref{seq:effective-master-equation-in-terms-of-the-squeezed-mode}). We find that the ratio $\mathcal{R}$ decreases with the detuning $\Delta_{s}$ or $\Delta_{m}$, but increases with the force $F$. Note that the DCE photons are photon pairs scattered via two-photon hyper-Raman scattering. As a result, their frequency $\omega_{s}+\omega_{L}/2$ is different from the noise-photon frequency $\omega_{L}/2$. This means that if standard techniques of Raman spectroscopy are used, the noise can be filtered out. Therefore, the signal can still be resolved even if $\mathcal{R}<1$.
\section{Dynamical Casimir effect in the mechanical weak-driving regime}
\label{sec:dynamical Casimir effect in the weak mechanical driving regime}
In our main article, we have studied the steady-state behavior associated with the DCE by numerically integrating the master equation in Eq.~(\ref{seq:effective-master-equation-in-terms-of-the-squeezed-mode})~\cite{johansson2012qutip, johansson2013qutip2}. To study the DCE further, an analytical treatment in the weak-mechanical-driving regime is given in this section. Here, we focus only on the resonant situation, where $\omega_{m}=\omega_{d}=2\omega_{s}$.
\begin{figure*}
\caption{Photon number $\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}$ versus the mechanical driving $F$.}
\label{sfig-intracavity-squeezed-photon-number}
\end{figure*}
\begin{figure*}
\caption{Real part of the correlation function $\average{a_{s}^{2}}_{\rm ss}$ versus the mechanical driving $F$.}
\label{sfig-intracavity-asas}
\end{figure*}
Let us now derive the steady-state SCM photon number $\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss}$. To begin, we consider the master equation in Eq.~(\ref{seq:effective-master-equation-in-terms-of-the-squeezed-mode}).
The involved equations of motion are given, respectively, by \begin{align} \label{seq:differential-equation01} \frac{d}{dt}\langle a_{s}^{\dag}a_{s}\rangle=-&4g_{\rm DCE}\;{\rm Im}\left[\average{a_{s}^{2}b^{\dag}}\right]-\kappa\langle a_{s}^{\dag}a_{s}\rangle,\\ \label{seq:differential-equation02} \frac{d}{dt}\average{a_{s}^{2}}=&-i2g_{\rm DCE}\left(2\average{a_{s}^{\dag}a_{s}b}+\average{b}\right)-\kappa\average{a_{s}^{2}},\\ \label{seq:differential-equation03} \frac{d}{dt}\average{b}=&-i\left(g_{\rm DCE}\average{a_{s}^{2}}+\frac{F}{2}\right)-\frac{\gamma_{m}}{2}\average{b},\\ \label{seq:differential-equation04} \frac{d}{dt}\average{b^{\dag}b}=\;&2g_{\rm DCE}\;{\rm Im}\left[\average{a_{s}^{2}b^{\dag}}\right]-F{\rm Im}\left[\average{b}\right]\nonumber\\ &-\gamma_{m}\average{b^{\dag}b} +\gamma_{m}n_{\rm th},\\ \label{seq:differential-equation05} \frac{d}{dt}\average{a_{s}^{2}b^{\dag}}=\;&i\bigg(g_{\rm DCE}\average{a_{s}^{\dag 2}a_{s}^{2}}-4g_{\rm DCE}\average{a_{s}^{\dag}a_{s}b^{\dag}b}\nonumber\\ &+\frac{F}{2}\average{a_{s}^{2}}-2g_{\rm DCE}\average{b^{\dag}b}\bigg)\nonumber\\ &-\left(\kappa+\frac{\gamma_{m}}{2}\right)\average{a_{s}^{2}b^{\dag}},\\ \label{seq:differential-equation06} \frac{d}{dt}\average{a_{s}^{\dag}a_{s}b}=\;&ig_{\rm DCE}\left(2\average{a_{s}^{2}b^{\dag}b}-\average{a_{s}^{\dag}a_{s}^{3}}-2\average{a_{s}^{\dag 2}b^{2}}\right)\nonumber\\ &-i\frac{F}{2}\average{a_{s}^{\dag}a_{s}}-\left(\kappa+\frac{\gamma_{m}}{2}\right)\average{a_{s}^{\dag}a_{s}b}. \end{align} Here, ${\rm Im}\left[z\right]$ represents the imaginary part of $z$. In fact, owing to the parametric coupling, the Hamiltonian in Eq.~(\ref{seq:JC-like-Hamiltonian-in-terms-of-as}) leads to an infinite set of differential equations, which may not be analytically solved. 
Thus, in order to obtain an analytical result, we neglect the higher-order correlation terms, that is: $\average{a_{s}^{\dag 2}a_{s}^{2}}$, $\average{a_{s}^{\dag}a_{s}b^{\dag}b}$, $\average{a_{s}^{2}b^{\dag}b}$, $\average{a_{s}^{\dag}a_{s}^{3}}$, and $\average{a_{s}^{\dag 2}b^{2}}$. This approximation is valid for a weak driving $F$, as shown below. Under this approximation, the coupled differential equations~(\ref{seq:differential-equation01})--(\ref{seq:differential-equation06}) form a closed set, so in the steady state we have
\begin{align}
\label{seq:coupledequation1}
0\approx&-4g_{\rm DCE}\;{\rm Im}\left[\average{a_{s}^{2}b^{\dag}}_{\rm ss}\right]-\kappa\langle a_{s}^{\dag}a_{s}\rangle_{\rm ss},\\
0\approx&-i2g_{\rm DCE}\left(2\average{a_{s}^{\dag}a_{s}b}_{\rm ss}+\average{b}_{\rm ss}\right)-\kappa\average{a_{s}^{2}}_{\rm ss},\\
0\approx&-i\left(g_{\rm DCE}\average{a_{s}^{2}}_{\rm ss}+\frac{F}{2}\right)-\frac{\gamma_{m}}{2}\average{b}_{\rm ss},\\
0\approx\;&2g_{\rm DCE}\;{\rm Im}\left[\average{a_{s}^{2}b^{\dag}}_{\rm ss}\right]-F\;{\rm Im}\left[\average{b}_{\rm ss}\right]-\gamma_{m}\average{b^{\dag}b}_{\rm ss}\nonumber\\
&+\gamma_{m}n_{\rm th},\\
0\approx\;&i\left(\frac{F}{2}\average{a_{s}^{2}}_{\rm ss}-2g_{\rm DCE}\average{b^{\dag}b}_{\rm ss}\right)-\left(\kappa+\frac{\gamma_{m}}{2}\right)\average{a_{s}^{2}b^{\dag}}_{\rm ss},\\
\label{seq:coupledequation6}
0\approx&-i\frac{F}{2}\average{a_{s}^{\dag}a_{s}}_{\rm ss}-\left(\kappa+\frac{\gamma_{m}}{2}\right)\average{a_{s}^{\dag}a_{s}b}_{\rm ss}.
\end{align}
By solving this closed set of equations, the steady-state SCM photon number is found to be
\begin{equation}\label{seq:steady-state-intracavity-photon-number}
\average{a_{s}^{\dag}a_{s}}_{\rm ss}\approx\frac{4\gamma_{m}g_{\rm DCE}^{2}}{\kappa\left(2g_{\rm DCE}^{2}+\gamma_{0}^{2}\right)}\left[\frac{\kappa F^{2}}{2\gamma_{m}\left(2g_{\rm DCE}^{2}+\gamma_{0}^{2}\right)}+n_{\rm th}\right],
\end{equation}
where $\gamma_{0}=\sqrt{\kappa\gamma_{m}/2}$.
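Equation~(\ref{seq:steady-state-intracavity-photon-number}) is straightforward to evaluate numerically; the sketch below (function name and parameter values are illustrative, with $\kappa$ and $g_{\rm DCE}$ in units of $\gamma_{m}$) makes the driving and thermal contributions explicit:

```python
import numpy as np

def n_s_weak(g_dce, kappa, gamma_m, F, n_th):
    # Weak-driving steady-state SCM photon number:
    # prefactor * (driving term ~ F^2 + thermal term ~ n_th)
    d = 2 * g_dce**2 + kappa * gamma_m / 2.0   # 2 g_DCE^2 + gamma_0^2
    return (4 * gamma_m * g_dce**2 / (kappa * d)) * (
        kappa * F**2 / (2 * gamma_m * d) + n_th)

# Units of gamma_m: kappa = 500, g_DCE = 10, zero temperature
n1 = n_s_weak(10.0, 500.0, 1.0, 1.0, 0.0)
n2 = n_s_weak(10.0, 500.0, 1.0, 2.0, 0.0)
```

At zero temperature, doubling $F$ quadruples $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$, consistent with the quadratic dependence on $F$; at $F=0$ only the thermal term $\propto n_{\rm th}$ survives.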
Equation~(\ref{seq:steady-state-intracavity-photon-number}) shows that $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ includes two physical contributions: one from the mechanical driving and the other from the thermal noise. Furthermore, we also find a quadratic increase in $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ with the driving $F$. To confirm this analytical expression, in Fig.~\ref{sfig-intracavity-squeezed-photon-number} we compare it with exact numerical simulations of the master equation in Eq.~(\ref{seq:effective-master-equation-in-terms-of-the-squeezed-mode}). The analytical predictions are in good agreement with the exact numerical results, especially for weak $F$. According to the Bogoliubov transformation, the steady-state intracavity photon number $\average{a^{\dag}a}_{\rm ss}$ in the original laboratory frame is given in Eq.~(\ref{eq:cavity_photon_number}), and the steady-state output-photon flux is then given in Eq.~(\ref{eq:output_flux}).
\begin{figure*}
\caption{Correlation function $g_{s}^{\left(2\right)}\!\left(0\right)$ versus the mechanical driving $F$.}
\label{sfig-g2}
\end{figure*}
To obtain $\Phi_{\rm out}$ analytically, the physical quantities $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ and ${\rm Re}\left[\average{a_{s}^{2}}_{\rm ss}\right]$ are involved, as shown in Eq.~(\ref{eq:DCE_signal}). The steady-state SCM photon number, $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$, is given in Eq.~(\ref{seq:steady-state-intracavity-photon-number}) and is confirmed numerically in Fig.~\ref{sfig-intracavity-squeezed-photon-number}. From the closed set of steady-state equations given in Eqs.~(\ref{seq:coupledequation1})--(\ref{seq:coupledequation6}), we can straightforwardly find
\begin{equation}\label{seq:DCE-induced-asas}
\average{a_{s}^{2}}_{\rm ss}=-\frac{g_{\rm DCE}}{2g_{\rm DCE}^{2}+\gamma_{0}^{2}}F.
\end{equation}
This shows that $|{\rm Re}\left[\average{a_{s}^{2}}_{\rm ss}\right]|$ increases linearly with $F$ but is independent of the thermal mechanical noise.
This behavior is also numerically confirmed in Fig.~\ref{sfig-intracavity-asas}, which shows good agreement, especially for weak driving $F$. Note that the analytical results rely on neglecting the higher-order correlation terms; in order to describe $\average{a_{s}^{2}}$ exactly, such higher-order correlations should be included. By combining Eqs.~(\ref{seq:steady-state-intracavity-photon-number})--(\ref{seq:DCE-induced-asas}), the steady-state output-photon flux can be analytically expressed as
\begin{align}\label{seq:steady-state-output}
\Phi_{\rm out}=&\frac{4\gamma_{m}g_{\rm DCE}^{2}}{\kappa\left(2g_{\rm DCE}^{2}+\gamma_{0}^{2}\right)}\left[\frac{\kappa }{2\gamma_{m}\left(2g_{\rm DCE}^{2}+\gamma_{0}^{2}\right)}F^{2}+n_{\rm th}\right]\nonumber\\
&\times\cosh\left(2r\right)\nonumber\\
&+\frac{g_{\rm DCE}}{2g_{\rm DCE}^{2}+\gamma_{0}^{2}}\sinh\left(2r\right)F+\kappa\sinh^{2}\left(r\right).
\end{align}
We find from Eq.~(\ref{seq:steady-state-output}) that, by increasing the mechanical driving $F$, the DCE-induced photon flux $\Phi_{\rm DCE}$ grows quadratically, while the background-noise photon flux $\Phi_{\rm BGN}$ remains unchanged. Therefore, the increase in the total photon flux $\Phi_{\rm out}$ with $F$ can be considered as a signature of the mechanical-motion-induced DCE. In the DCE process, the photons are emitted in pairs, and therefore they can exhibit photon bunching~\cite{johansson2010dynamical,stassi2013spontaneous,macri2018nonperturbative}. The essential parameter quantifying this property is the equal-time second-order correlation function, defined in Eq.~(\ref{eq:g2_correlation}). We now derive this second-order correlation function.
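Before doing so, we note that Eq.~(\ref{seq:steady-state-output}) (written with the factor $2g_{\rm DCE}^{2}+\gamma_{0}^{2}$ in both denominators) is easy to illustrate numerically; the sketch below separates the $F$-dependent DCE signal from the constant background $\kappa\sinh^{2}(r)$:

```python
import numpy as np

def phi_out(g_dce, kappa, gamma_m, F, n_th, r):
    # Weak-driving steady-state output-photon flux: DCE signal + background
    d = 2 * g_dce**2 + kappa * gamma_m / 2.0       # 2 g_DCE^2 + gamma_0^2
    n_s = (4 * gamma_m * g_dce**2 / (kappa * d)) * (
        kappa * F**2 / (2 * gamma_m * d) + n_th)   # <a_s^dag a_s>_ss
    phi_dce = n_s * np.cosh(2 * r) + (g_dce / d) * np.sinh(2 * r) * F
    phi_bgn = kappa * np.sinh(r)**2                # F-independent background
    return phi_dce + phi_bgn

# Units of gamma_m: the background stays fixed while the signal grows with F
fluxes = [phi_out(10.0, 500.0, 1.0, F, 0.0, 1.0) for F in (0.0, 10.0, 20.0)]
```

At $F=0$ and zero temperature only the squeezed-vacuum background survives, and the total flux then increases monotonically with the mechanical driving, which is the experimental signature discussed above.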
The equation of motion for $\average{a_{s}^{\dag 2}a_{s}^{2}}$ is given by
\begin{align}
\frac{d}{dt}\average{a_{s}^{\dag 2}a_{s}^{2}}=&-4g_{\rm DCE}\left\{2{\rm Im}\left[\average{a_{s}^{\dag}a_{s}^{3}b^{\dag}}\right]+{\rm Im}\left[\average{a_{s}^{2}b^{\dag}}\right]\right\}\nonumber\\
&-2\kappa\average{a_{s}^{\dag 2}a_{s}^{2}}.
\end{align}
We can neglect the term ${\rm Im}\left[\average{a_{s}^{\dag}a_{s}^{3}b^{\dag}}\right]$ for weak driving $F$. Then, combining this with Eq.~(\ref{seq:coupledequation1}) yields
\begin{equation}\label{seq:second-order-correlation-weak}
g_{s}^{\left(2\right)}\!\left(0\right)\approx\frac{1}{2\average{a_{s}^{\dag}a_{s}}_{\rm ss}}.
\end{equation}
In Fig.~\ref{sfig-g2}, we plot the $g_{s}^{\left(2\right)}\!\left(0\right)$ correlation as a function of the driving $F$, comparing the analytical and numerical results, which agree very well. Owing to the very small value of $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ for weak mechanical driving, $g_{s}^{\left(2\right)}\!\left(0\right)$ is very large, as shown in Fig.~\ref{sfig-g2}, which corresponds to strong photon bunching. With increasing driving $F$, we also find that the $g_{s}^{\left(2\right)}\!\left(0\right)$ correlation decreases and, as demonstrated more explicitly in Appendix~\ref{sec:Semi-classical treatment for the strong mechanical driving}, approaches a lower bound equal to $1$, implying that the DCE radiation field becomes a coherent state in the strong-mechanical-driving limit, $F\rightarrow\infty$.
\section{Semi-classical treatment for the dynamical Casimir effect}
\label{sec:Semi-classical treatment for the strong mechanical driving}
In Appendix~\ref{sec:dynamical Casimir effect in the weak mechanical driving regime} we have analytically discussed the DCE process when the mechanical driving $F$ is weak.
There, the higher-order correlations that arise from the parametric coupling are neglected, and the resulting expressions can predict the system behavior well. For strong-$F$ driving, all high-order correlations should be included to exactly describe the system; but in this case, finding solutions analytically or even numerically becomes much more difficult. In order to investigate the DCE in the strong-$F$ regime, in this section we employ a semi-classical treatment~\cite{butera2019mechanical}. For simplicity, but without loss of generality, here we assume that the mechanical resonator is coupled to a zero-temperature bath. For finite temperatures, the discussion below is still valid, as long as the total number of phonons is much larger than the number of thermal phonons. \subsection{Excitation spectrum and output-photon flux spectrum in the steady state} We again begin with the master equation in Eq.~(\ref{seq:effective-master-equation-in-terms-of-the-squeezed-mode}) and, accordingly, obtain \begin{align} \label{seq:semiclassical-equation01} \frac{d}{dt}\langle a_{s}^{\dag}a_{s}\rangle=&-4g_{\rm DCE}\;{\rm Im}\left[\average{a_{s}^{2}}\average{b}^{*}\right]-\kappa\langle a_{s}^{\dag}a_{s}\rangle,\\ \label{seq:semiclassical-equation02} \frac{d}{dt}\average{a_{s}^{2}}=&-i2\Delta_{s}\average{a_{s}^{2}}\nonumber\\ &-i2g_{\rm DCE}\left(2\average{a_{s}^{\dag}a_{s}}+1\right)\average{b}-\kappa\average{a_{s}^{2}},\\ \label{seq:semiclassical-equation03} \frac{d}{dt}\average{b}=&-i\left(\Delta_{m}\average{b}+g_{\rm DCE}\average{a_{s}^{2}}+\frac{F}{2}\right)-\frac{\gamma_{m}}{2}\average{b}. \end{align} Here, we have made the semiclassical approximation, such that $\average{a_{s}^{2}b^{\dag}}\approx\average{a_{s}^{2}}\average{b}^{*}$ and $\average{a_{s}^{\dag}a_{s}b}\approx\average{a_{s}^{\dag}a_{s}}\average{b}$. Under this approximation, the fluctuation correlation between the cavity and the mechanical resonator is neglected. 
It is found that Eqs.~(\ref{seq:semiclassical-equation01})--(\ref{seq:semiclassical-equation03}) form a closed set.
\begin{figure*}
\caption{(a) Photon number $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ versus the detuning $\Delta_{s}$ for resonant mechanical driving, $\omega_{m}=\omega_{d}$.}
\label{sfig:omega_m=omega_d}
\end{figure*}
\subsubsection{Excitation spectrum for resonant mechanical driving: $\omega_{m}=\omega_{d}$}
We first consider the case of a resonant mechanical driving (i.e., $\omega_{m}=\omega_{d}$). In this case, we have $\Delta_{m}=0$, and the steady-state SCM photon number $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ satisfies a cubic equation,
\begin{align}\label{seq:cubic-equation-01}
0=\;&g_{\rm DCE}^{4}\,x^{3}-g_{\rm DCE}^{2}\left(g_{\rm DCE}^{2}-\gamma_{0}^{2}\right)x^{2}\nonumber\\
&+\left[\frac{1}{4}\left(\Delta_{s}^{2}\gamma_{m}^{2}+\gamma_{0}^{4}\right)-g_{\rm DCE}^{2}\left(F^{2}+\gamma_{0}^{2}\right)\right]x\nonumber\\
&-\frac{1}{4}\left(\Delta_{s}^{2}\gamma_{m}^{2}+\gamma_{0}^{4}\right),
\end{align}
where $\gamma_{0}=\sqrt{\kappa\gamma_{m}/2}$ and $x=2\average{a_{s}^{\dag}a_{s}}_{\rm ss}+1$. The solutions of such an equation can be obtained exactly using the Cardano formula. Then, the steady-state $\average{a_{s}^{2}}$ and $\average{b}$ are given, respectively, by
\begin{align}
\label{seq:strong-steady-asas}
\average{a_{s}^{2}}_{\rm ss}&=-\frac{g_{\rm DCE}}{2g_{\rm DCE}^{2}\,x +\gamma_{0}^{2}+i\Delta_{s}\gamma_{m}}Fx,\\
\label{seq:strong-steady-b}
\average{b}_{\rm ss}&=-\frac{\left(-2\Delta_{s}+i\kappa\right)}{2g_{\rm DCE}^{2}\,x+\gamma_{0}^{2}+i\Delta_{s}\gamma_{m}}\frac{F}{2}.
\end{align}
For simplicity, we numerically solve the cubic equation~(\ref{seq:cubic-equation-01}), and in Fig.~\ref{sfig:omega_m=omega_d} we plot $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$, ${\rm Re}\left[\average{a_{s}^{2}}_{\rm ss}\right]$, and $|\average{b}_{\rm ss}|^{2}$ versus the detuning $\Delta_{s}$ for $\kappa=500\gamma_{m}$ and $1000\gamma_{m}$. At large detunings, the resonantly driven mechanical resonator is effectively decoupled from the cavity mode.
As a consequence, there is almost no conversion of mechanical energy into photons. Thus, at large detunings, the mechanical phonon number $|\average{b}_{\rm ss}|^{2}$ quickly approaches $\left(F/\gamma_m\right)^{2}$, i.e., the steady-state phonon number when the mechanical resonator is completely uncoupled. Meanwhile, both the photon number $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ and the correlation function $\average{a_{s}^{2}}_{\rm ss}$ are very close to zero. As the detuning decreases, the effective parametric coupling between the mechanical motion and the cavity mode increases, and the parametric conversion from mechanical energy into photons is accordingly enhanced. Such an energy conversion is maximized at resonance $\omega_{m}=\omega_{d}=2\omega_{s}$. Thus, when decreasing the detuning, both $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ and $|{\rm Re}\left[\average{a_{s}^{2}}_{\rm ss}\right]|$ increase but $|\average{b}_{\rm ss}|^{2}$ decreases, as shown in Fig.~\ref{sfig:omega_m=omega_d}. In particular, $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ and $|{\rm Re}\left[\average{a_{s}^{2}}_{\rm ss}\right]|$ reach their maximum values at resonance, and at the same time, $|\average{b}_{\rm ss}|^{2}$ reaches its minimum value. This behavior implies that the photons are emitted by the mechanical resonator.
\begin{figure*}
\caption{(a) Photon number $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ versus the detuning $\Delta_{s}$ for resonant parametric coupling, $\omega_{m}=2\omega_{s}$.}
\label{sfig:omega_m=2omega_s}
\end{figure*}
\subsubsection{Excitation spectrum for resonant parametric coupling: $\omega_{m}=2\omega_{s}$}
We next consider the case of a resonant parametric coupling (i.e., $\omega_{m}=2\omega_{s}$).
In this case, we have $\Delta_{m}=2\Delta_{s}=\Delta$, and the steady-state $\average{a_{s}^{\dag}a_{s}}$ also satisfies a cubic equation,
\begin{align}\label{seq:cubic-equation-02}
0=\;&g_{\rm DCE}^{4}\,x^{3}-g_{\rm DCE}^{2}\left(g_{\rm DCE}^{2}+\Delta^{2}-\gamma_{0}^{2}\right)x^{2}\nonumber\\
&+\bigg\{\frac{1}{4}\left[\left(\Delta^2-\gamma_{0}^{2}\right)^{2} +\Delta^{2}\gamma_{1}^{2}\right]+g_{\rm DCE}^{2}\big(\Delta^2\nonumber\\
&-\gamma_{0}^{2}-F^{2}\big)\bigg\}x-\frac{1}{4}\left[\left(\Delta^{2}-\gamma_{0}^{2}\right)^{2}+\Delta^{2}\gamma_{1}^{2}\right],
\end{align}
where $\gamma_{1}=\kappa+\gamma_{m}/2$. This cubic equation can also be solved exactly using the Cardano formula, and then the steady-state $\average{a_{s}^{2}}$ and $\average{b}$ are given, respectively, by
\begin{align}
\average{a_{s}^{2}}_{\rm ss}&=-\frac{g_{\rm DCE}}{2g_{\rm DCE}^{2}\,x-\Delta^{2}+\gamma_{0}^{2}+i\Delta\gamma_{1}}Fx,\\
\average{b}_{\rm ss}&=-\frac{\left(-\Delta+i\kappa\right)}{2g_{\rm DCE}^{2}\,x-\Delta^{2}+\gamma_{0}^{2}+i\Delta\gamma_{1}}\frac{F}{2}.
\end{align}
We numerically solve the cubic equation~(\ref{seq:cubic-equation-02}), and in Fig.~\ref{sfig:omega_m=2omega_s}, we plot $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$, ${\rm Re}\left[\average{a_{s}^{2}}_{\rm ss}\right]$, and $|\average{b}_{\rm ss}|^{2}$ versus the detuning $\Delta_{s}$ for $\kappa=500\gamma_{m}$ and $1000\gamma_{m}$. At large detunings, the mechanical driving is effectively decoupled from the mechanical resonator, so that almost no phonons are excited and almost no photons are emitted. As the detuning decreases, the mechanical phonon number increases, which strengthens the parametric conversion from mechanical energy into photons and, in turn, leads to an increase in the excited photon number. This process is maximized at resonance $\omega_{m}=\omega_{d}=2\omega_{s}$.
Thus, we find, as shown in Fig.~\ref{sfig:omega_m=2omega_s}, that with decreasing detuning, not only $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ and $|{\rm Re}\left[\average{a_{s}^{2}}_{\rm ss}\right]|$ but also $|\average{b}_{\rm ss}|^{2}$ increases, and that they simultaneously reach their maximum values at resonance. This behavior also implies that the photons are emitted by the mechanical resonator. \subsubsection{Output-photon flux spectrum for resonant mechanical driving and parametric coupling} Having obtained $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ and $\average{a_{s}^{2}}_{\rm ss}$ in the squeezed frame, we can, according to the Bogoliubov transformation, calculate the steady-state intracavity photon number $\average{a^{\dag}a}_{\rm ss}$ in the original laboratory frame, as given in Eq.~(\ref{eq:cavity_photon_number}). Then we can calculate the steady-state output-photon flux $\Phi_{\rm out}$ according to the input-output relation, given in Eq.~(\ref{eq:output_flux}). We plot the photon flux $\Phi_{\rm out}$ as a function of the detuning $\Delta_{s}$ in Fig.~\ref{sfig:output-photon-flux-strong}. As expected, for a given mechanical driving, we can observe a resonance peak, corresponding to the maximum value of the photon flux. This behavior in the laboratory frame directly reflects the behavior of the excitation spectrum $\average{a_{s}^{\dag}a_{s}}_{\rm ss}\left(\Delta_{s}\right)$ in the squeezed frame in Figs.~\ref{sfig:omega_m=omega_d}(a) and~\ref{sfig:omega_m=2omega_s}(a). This is because the background noise $\Phi_{\rm BGN}$ remains unchanged when the detuning is changed, and the peak arises entirely from the DCE in the squeezed frame. Thus, the appearance of the peak of the output flux spectrum $\Phi_{\rm out}\left(\Delta_{s}\right)$ can be considered as an experimentally observable signature of the DCE. 
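In practice, the numerical solution of the cubic equation~(\ref{seq:cubic-equation-02}) described above is straightforward. The following sketch (with illustrative parameter values in arbitrary units, and \texttt{numpy.roots} in place of the explicit Cardano formula; the function name is ours) extracts the steady-state photon number from the physical root $x=2\average{a_{s}^{\dag}a_{s}}_{\rm ss}+1\geq1$:

```python
import numpy as np

def steady_state_photon_number(g, Delta, gamma0, gamma1, F):
    """Solve the cubic for x = 2<a_s^dag a_s>_ss + 1 (illustrative sketch).

    The coefficients follow the cubic equation in the text, with detuning
    Delta and gamma1 = kappa + gamma_m/2; np.roots stands in for the
    explicit Cardano formula.  Returns the steady-state photon number."""
    c3 = g**4
    c2 = -g**2 * (g**2 + Delta**2 - gamma0**2)
    c1 = 0.25 * ((Delta**2 - gamma0**2)**2 + Delta**2 * gamma1**2) \
         + g**2 * (Delta**2 - gamma0**2 - F**2)
    c0 = -0.25 * ((Delta**2 - gamma0**2)**2 + Delta**2 * gamma1**2)
    roots = np.roots([c3, c2, c1, c0])
    # Physical roots are real with x >= 1; pick the lowest branch.
    x = min(r.real for r in roots if abs(r.imag) < 1e-8 and r.real >= 1 - 1e-6)
    return (x - 1.0) / 2.0
```

When several real roots with $x\geq1$ coexist (the multistable regime discussed below), this routine simply returns the lowest branch.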
\begin{figure} \caption{Steady-state output-photon flux $\Phi_{\rm out}$ versus the detuning $\Delta_{s}$.} \label{sfig:output-photon-flux-strong} \end{figure} \subsection{Signal-to-noise ratio and second-order correlation function at resonance} As mentioned before, there exists a background noise $\Phi_{\rm BGN}$ in the flux $\Phi_{\rm out}$. Thus, we need to analyze the ability of our proposal to resolve the DCE-induced signal from the background noise. To quantitatively describe this ability, we employ the signal-to-noise ratio defined in Eq.~(\ref{eq:signal_to_noise_ratio}). Without loss of generality, we focus on the ratio $\mathcal{R}$ at resonance $\omega_{m}=\omega_{d}=2\omega_{s}$. Under this resonance condition, the cubic equation satisfied by $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$ becomes \begin{align}\label{seq:cubic-eq-at-resonance} 0=\;&g_{\rm DCE}^{4}\,x^3-g_{\rm DCE}^{2}\left(g_{\rm DCE}^{2}-\gamma_{0}^{2}\right)x^{2}\nonumber\\ &+\left[\frac{\gamma_{0}^{4}}{4}-g_{\rm DCE}^{2}\left(\gamma_{0}^{2}+F^{2}\right)\right]x-\frac{\gamma_{0}^{4}}{4}, \end{align} where $x=2\average{a_{s}^{\dag}a_{s}}+1$. Then, $\average{a_{s}^{2}}_{\rm ss}$ and $\average{b}_{\rm ss}$ are given by \begin{align} \average{a_{s}^{2}}_{\rm ss}&=-\frac{g_{\rm DCE}}{2g_{\rm DCE}^{2}\,x+\gamma_{0}^{2}}Fx,\\ \average{b}_{\rm ss}&=-\frac{i\kappa}{2g_{\rm DCE}^{2}\,x+\gamma_{0}^{2}}\frac{F}{2}. \end{align} We plot the ratio $\mathcal{R}$ versus the driving $F$ in Fig.~\ref{sfig:SNR_plus_g2}(a). We find that the signal-to-noise ratio monotonically increases with the mechanical driving. This is because an increase in the mechanical driving leads to an increase in the number of DCE-induced photons, but at the same time leaves the number of background-noise photons unchanged. \begin{figure} \caption{(a) Signal-to-noise ratio $\mathcal{R}$ and (b) second-order correlation function $g_{s}^{\left(2\right)}\!\left(0\right)$ versus the mechanical driving $F$ at resonance.} \label{sfig:SNR_plus_g2} \end{figure} The equal-time second-order correlation function is defined in Eq.~(\ref{eq:g2_correlation}). 
As for the signal-to-noise ratio $\mathcal{R}$, we focus only on the $g_{s}^{\left(2\right)}\!\left(0\right)$ correlation at resonance $\omega_{m}=\omega_{d}=2\omega_{s}$. In the semi-classical treatment presented in this section, $\average{a_{s}^{\dag 2}a_{s}^{2}}_{\rm ss}$ can be approximated as $\average{a_{s}^{\dag 2}a_{s}^{2}}_{\rm ss}\approx|\average{a_{s}^{2}}_{\rm ss}|^{2}$, and as a result, the $g_{s}^{\left(2\right)}\!\left(0\right)$ correlation is reduced to \begin{equation} g_{s}^{\left(2\right)}\!\left(0\right)\approx\frac{|\average{a_{s}^{2}}_{\rm ss}|^{2}}{\average{a_{s}^{\dag}a_{s}}^{2}_{\rm ss}}, \end{equation} which is plotted as a function of the mechanical driving in Fig.~\ref{sfig:SNR_plus_g2}(b). We find that $g_{s}^{\left(2\right)}\!\left(0\right)$ starts at very large values and then, as the mechanical driving increases, decreases toward 1. This behavior, as expected, indicates photon bunching, thus confirming the DCE. \subsection{Analytical solutions in the limits $F\rightarrow0$ and $F\rightarrow\infty$} In order to have a better analytical understanding, let us now consider the limit of $F\rightarrow0$, and also the opposite limit of $F\rightarrow\infty$, at resonance $\omega_{m}=\omega_{d}=2\omega_{s}$. For the $F\rightarrow0$ limit, we have $\average{a_{s}^{\dag}a_{s}}_{\rm ss}\rightarrow0$, and thus, $x^{n}\approx1+2n\average{a_{s}^{\dag}a_{s}}$ for $n=0,1,2,\cdots$. Based on this, an approximate solution of the cubic equation in Eq.~(\ref{seq:cubic-eq-at-resonance}) is found to be \begin{equation} \average{a_{s}^{\dag}a_{s}}_{\rm ss}\approx\frac{2g_{\rm DCE}^{2}}{\left(2g_{\rm DCE}^{2}+\gamma_{0}^{2}\right)^{2}}F^{2}, \quad {\rm when} \quad F\rightarrow0, \end{equation} which corresponds to Eq.~(\ref{seq:steady-state-intracavity-photon-number}) for $n_{\rm th}=0$. 
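This approximate root follows from a single Newton step applied to the resonance cubic, Eq.~(\ref{seq:cubic-eq-at-resonance}), written as $P\left(x\right)=0$ and expanded around $x=1$ (i.e., $\average{a_{s}^{\dag}a_{s}}\rightarrow0$):
\begin{equation}
P\left(1\right)=-g_{\rm DCE}^{2}F^{2},
\qquad
P'\left(1\right)=\left(g_{\rm DCE}^{2}+\frac{\gamma_{0}^{2}}{2}\right)^{2}-g_{\rm DCE}^{2}F^{2}
\approx\left(g_{\rm DCE}^{2}+\frac{\gamma_{0}^{2}}{2}\right)^{2},
\end{equation}
so that $x-1\approx-P\left(1\right)/P'\left(1\right)=4g_{\rm DCE}^{2}F^{2}/\left(2g_{\rm DCE}^{2}+\gamma_{0}^{2}\right)^{2}$, and $\average{a_{s}^{\dag}a_{s}}_{\rm ss}=\left(x-1\right)/2$ reproduces the expression above.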
Analogously, we obtain \begin{align} \label{seq:DCE-induced-asas_02} \average{a_{s}^{2}}_{\rm ss}&\approx-\frac{g_{\rm DCE}}{2g_{\rm DCE}^{2}+\gamma_{0}^{2}}F,\quad {\rm when} \quad F\rightarrow0,\\ \average{b}_{\rm ss}&\approx-\frac{i\kappa}{2g_{\rm DCE}^{2}+\gamma_{0}^{2}}\frac{F}{2}, \quad {\rm when} \quad F\rightarrow0. \end{align} Note that Eq.~(\ref{seq:DCE-induced-asas_02}) corresponds to Eq.~(\ref{seq:DCE-induced-asas}). Therefore, according to Eq.~(\ref{seq:steady-state-output}), we obtain a quadratic increase in the ratio \begin{equation} \mathcal{R}=\frac{\Phi_{\rm DCE}}{\Phi_{\rm BGN}}\propto F^{2}, \end{equation} for weak driving $F$, as shown in Fig.~\ref{sfig:SNR_plus_g2}(a). In the opposite limit of $F\rightarrow\infty$, we have $x\rightarrow2\average{a_{s}^{\dag}a_{s}}_{\rm ss}$, and then obtain \begin{align} \average{a_{s}^{\dag}a_{s}}_{\rm ss}&\approx \frac{F}{2g_{\rm DCE}}, \quad {\rm when} \quad F\rightarrow\infty,\\ \average{a_{s}^{2}}_{\rm ss}&\approx-\frac{F}{2g_{\rm DCE}}, \quad {\rm when} \quad F\rightarrow\infty,\\ \average{b}_{\rm ss}&\approx-\frac{i\kappa}{4g_{\rm DCE}}, \quad {\rm when} \quad F\rightarrow\infty. \end{align} Consequently, the DCE-induced photon flux is given by \begin{equation} \Phi_{\rm DCE}=\frac{F}{2g_{\rm DCE}}\exp\left(2r\right), \quad {\rm when} \quad F\rightarrow\infty. \end{equation} This indicates a linear increase in the ratio $\mathcal{R}$ with the driving $F$, as shown in Fig.~\ref{sfig:SNR_plus_g2}(a). For the $g_{s}^{\left(2\right)}\!\left(0\right)$ correlation in the limit of $F\rightarrow0$, we find \begin{equation} g_{s}^{\left(2\right)}\!\left(0\right)\approx\frac{1}{2\average{a_{s}^{\dag}a_{s}}_{\rm ss}}, \quad {\rm when} \quad F\rightarrow0, \end{equation} which is the same as Eq.~(\ref{seq:second-order-correlation-weak}). This corresponds to a large $g_{s}^{\left(2\right)}\!\left(0\right)$ as in Fig.~\ref{sfig:SNR_plus_g2}(b) and, thus, to large photon bunching. 
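These limiting expressions can be checked against a direct numerical solution of the resonance cubic, Eq.~(\ref{seq:cubic-eq-at-resonance}). The sketch below (illustrative rates in arbitrary units; function names are ours) verifies the weak-driving bunching $g_{s}^{(2)}(0)\approx1/(2\average{a_{s}^{\dag}a_{s}}_{\rm ss})$ and the strong-driving scaling $\average{a_{s}^{\dag}a_{s}}_{\rm ss}\approx F/(2g_{\rm DCE})$:

```python
import numpy as np

def resonance_x(g, gamma0, F):
    """Physical root x = 2<n>_ss + 1 of the resonance cubic (numerical
    substitute for the Cardano formula)."""
    coeffs = [g**4,
              -g**2 * (g**2 - gamma0**2),
              0.25 * gamma0**4 - g**2 * (gamma0**2 + F**2),
              -0.25 * gamma0**4]
    roots = np.roots(coeffs)
    return min(r.real for r in roots if abs(r.imag) < 1e-8 and r.real >= 1 - 1e-6)

g, gamma0 = 0.1, 0.05          # illustrative rates (arbitrary units)

# Weak driving: g2 ~ 1/(2<n>), i.e., strong photon bunching.
F = 1e-3
x = resonance_x(g, gamma0, F)
n = (x - 1) / 2
a2 = -g * F * x / (2 * g**2 * x + gamma0**2)   # <a_s^2>_ss at resonance
g2 = abs(a2)**2 / n**2

# Strong driving: <n>_ss approaches F/(2g).
F_big = 1e3
n_big = (resonance_x(g, gamma0, F_big) - 1) / 2
```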
Furthermore, in the opposite limit of $F\rightarrow\infty$, the correlation function $g_{s}^{\left(2\right)}\!\left(0\right)$ is approximately equal to $1$, i.e., \begin{equation} g_{s}^{\left(2\right)}\!\left(0\right)\approx1, \quad {\rm when} \quad F\rightarrow\infty, \end{equation} as shown in Fig.~\ref{sfig:SNR_plus_g2}(b). This means that the DCE radiation field is approximately in a coherent state. \subsection{Stability analysis} \begin{figure} \caption{Steady-state squeezed-cavity-mode photon number $\average{a_{s}^{\dag}a_{s}}_{\rm ss}$, illustrating the multistability of the system.} \label{sfig:multistability} \end{figure} We now turn to multistability effects of our system. As discussed previously, in the semi-classical approximation, the steady state is governed by a cubic equation. A cubic equation can have up to three real solutions, and thus the system may exhibit multistability. To analyze this, we perform a linear stability analysis~\cite{sarchi2008coherent}. Thus, we express the quantities $\average{a_{s}^{\dag}a_{s}}$, $\average{a_{s}^{2}}$, and $\average{b}$ as the sum of their steady-state values ($\average{a_{s}^{\dag}a_{s}}_{\rm ss}$, $\average{a_{s}^{2}}_{\rm ss}$, $\average{b}_{\rm ss}$) and time-dependent small perturbations [$\delta_{1}\left(t\right)$, $\delta_{2}\left(t\right)$, $\delta_{3}\left(t\right)$], that is, \begin{align} \average{a_{s}^{\dag}a_{s}}&=\average{a_{s}^{\dag}a_{s}}_{\rm ss}+\delta_{1}\left(t\right),\\ \average{a_{s}^{2}}&=\average{a_{s}^{2}}_{\rm ss}+\delta_{2}\left(t\right),\\ \average{b}&=\average{b}_{\rm ss}+\delta_{3}\left(t\right). 
\end{align} Then, substituting these equations into Eqs.~(\ref{seq:semiclassical-equation01}), (\ref{seq:semiclassical-equation02}), and (\ref{seq:semiclassical-equation03}) yields \begin{align} \label{seq:small-fluctuation01} \frac{d}{dt}\delta_{1}\left(t\right)=\;&i2g_{\rm DCE}\big(\average{b}_{\rm ss}^{*}\delta_{2}+\average{a_{s}^{2}}_{\rm ss}\delta_{3}^{*}\nonumber\\ &-\average{b}_{\rm ss}\delta_{2}^{*}-\average{a_{s}^{2}}_{\rm ss}^{*}\delta_{3}\big)-\kappa\delta_{1},\\ \label{seq:small-fluctuation02} \frac{d}{dt}\delta_{2}\left(t\right)=&-i2\Delta_{s}\delta_{2}-i2g_{\rm DCE}\left(2\average{a_{s}^{\dag}a_{s}}_{\rm ss}+1\right)\delta_{3}\nonumber\\ &-i4g_{\rm DCE}\delta_{1}\average{b}_{\rm ss}-\kappa\delta_{2},\\ \label{seq:small-fluctuation03} \frac{d}{dt}\delta_{3}\left(t\right)=&-i\Delta_{m}\delta_{3}-ig_{\rm DCE}\delta_{2}-\frac{\gamma_{m}}{2}\delta_{3}. \end{align} We further make the following replacements, \begin{align} \delta_{1}\left(t\right)&\mapsto \exp\left(-i\omega t\right)x_{1}+\exp\left(i\omega^{*}t\right)y_{1}^{*},\\ \delta_{2}\left(t\right)&\mapsto \exp\left(-i\omega t\right)x_{2}+\exp\left(i\omega^{*}t\right)y_{2}^{*},\\ \delta_{3}\left(t\right)&\mapsto \exp\left(-i\omega t\right)x_{3}+\exp\left(i\omega^{*}t\right)y_{3}^{*}, \end{align} where $x_{k}$ and $y_{k}$ ($k=1,2,3$) are time-independent complex numbers, and $\omega$ denotes a complex frequency. 
Then, the coupled equations~(\ref{seq:small-fluctuation01}), (\ref{seq:small-fluctuation02}), and (\ref{seq:small-fluctuation03}) can be rewritten as \begin{equation} M\Psi=\omega\Psi, \end{equation} where \begin{equation} \Psi=\left(x_{1},y_{1},x_{2},y_{2},x_{3},y_{3}\right)^{T}, \end{equation} \begin{widetext} \begin{align} M=i\left( \begin{array}{cccccc} -\kappa & 0 & A^{*} & A & B^{*} & B\\ 0 & -\kappa & A^{*} & A & B^{*} & B\\ 2A & 0 & -i2\Delta_{s}-\kappa & 0 & C & 0\\ 0 & 2A^{*} & 0 & i2\Delta_{s}-\kappa & 0 & C^{*}\\ 0 & 0 & -ig_{\rm DCE} & 0 & -i\Delta_{m}-\gamma_{m}/2 & 0\\ 0 & 0 & 0 & ig_{\rm DCE} & 0 & i\Delta_{m}-\gamma_{m}/2 \end{array} \right), \end{align} \end{widetext} with \begin{align} A&=-i2g_{\rm DCE}\average{b}_{\rm ss},\\ B&=i2g_{\rm DCE}\average{a_{s}^{2}}_{\rm ss},\\ C&=-i2g_{\rm DCE}\left(2\average{a_{s}^{\dag}a_{s}}_{\rm ss}+1\right). \end{align} If all imaginary parts of the eigenvalues of the matrix $M$ are negative, then the system is stable; otherwise the system is unstable~\cite{kyriienko2014optomechanics}. According to this criterion, we assess the stability of our system. We find that for the parameters used in the above discussion about the DCE, the system does not exhibit multistability. Furthermore, when the mechanical loss is close to the cavity loss, we find for $\omega_{m}=2\omega_{s}$ that the system becomes multistable, as shown in Fig.~\ref{sfig:multistability}. However, the requirement that the mechanical loss is close to the cavity loss makes the threshold \begin{equation} F_{\rm th}=g_{\rm DCE}+\frac{\kappa\gamma_{m}}{4g_{\rm DCE}} \end{equation} very low. For $F\gg F_{\rm th}$, the system behaves classically, and quantum effects are negligible~\cite{wilson2010photon, butera2019mechanical}. For the parameters in Fig.~\ref{sfig:multistability}, the value of $F_{\rm th}$ is $\approx9\gamma_{m}$ (here, we set $\kappa=20\gamma_{m}$), which is less than one fifth of the driving $F=50\gamma_{m}$. 
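The eigenvalue criterion can be evaluated numerically. The sketch below (function names are ours; the steady-state inputs are placeholders that would be obtained, e.g., from the cubic equation) assembles $M$ exactly as displayed and tests the sign of the imaginary parts of its eigenvalues:

```python
import numpy as np

def stability_matrix(g, Delta_s, Delta_m, kappa, gamma_m, n_ss, a2_ss, b_ss):
    """Fluctuation matrix M, with the same layout as in the text."""
    A = -2j * g * b_ss
    B = 2j * g * a2_ss
    C = -2j * g * (2 * n_ss + 1)
    return 1j * np.array([
        [-kappa, 0, np.conj(A), A, np.conj(B), B],
        [0, -kappa, np.conj(A), A, np.conj(B), B],
        [2 * A, 0, -2j * Delta_s - kappa, 0, C, 0],
        [0, 2 * np.conj(A), 0, 2j * Delta_s - kappa, 0, np.conj(C)],
        [0, 0, -1j * g, 0, -1j * Delta_m - gamma_m / 2, 0],
        [0, 0, 0, 1j * g, 0, 1j * Delta_m - gamma_m / 2],
    ])

def is_stable(M):
    # Stable iff every eigenvalue of M has a negative imaginary part.
    return bool(np.all(np.linalg.eigvals(M).imag < 0))
```

For instance, in the decoupled case $g_{\rm DCE}=0$ the matrix is diagonal in the fluctuation blocks and the criterion trivially yields stability, as it should.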
As a consequence, the system, when demonstrating such multistable behaviors, has probably reached the classical regime, where the DCE induced by the quantum fluctuations is negligible. Therefore, in order to observe the DCE, it is better to avoid the multistable regime of the system. \section{Possible implementations with superconducting quantum circuits} \label{sec:Possible-implementation-with-superconducting-circuits} Our scheme to implement the DCE is based on a generic optomechanical system, and at the same time, does not require an ultra-high-frequency mechanical resonator or an ultrastrong single-photon coupling between light and mechanical motion. Therefore, we expect that it can be implemented in various physical systems. In this section, as an example, we discuss in detail a possible implementation with superconducting circuits and, in particular, we refer to the experimental superconducting quantum circuit of Ref.~\cite{teufel2011sideband}, described by the standard optomechanical coupling of the form $a^{\dag}a\left(b+b^{\dag}\right)$. \begin{figure*} \caption{(a) A standard {\it LC} circuit, consisting of a capacitor of capacitance $C_{0}$ and an inductor of inductance $L_{0}$. (b) The modified {\it LC} circuit, in which the capacitance is modulated by the mechanical motion of a micromechanical membrane and an additional, electrically tunable capacitor is included.} \label{sfig-LC-circuits} \end{figure*} A standard {\it LC} circuit consists of a capacitor (e.g., with capacitance $C_{0}$) and an inductor (e.g., with inductance $L_{0}$), as shown in Fig.~\ref{sfig-LC-circuits}(a). Its Hamiltonian is expressed in terms of the capacitor charge $Q$ and the inductor current $I$ as \begin{equation}\label{seq:simple-LC-Hamiltonian} H_{0}=\frac{\Phi^{2}}{2L_{0}}+\frac{1}{2}L_{0}\omega_{0}^{2}Q^{2}, \end{equation} where $\Phi=L_{0}I$ is the magnetic flux through the inductor, and $\omega_{0}=1/\sqrt{L_{0}C_{0}}$ is the fundamental frequency of the circuit. After quantization, the charge $Q$ and the flux $\Phi$ represent a pair of canonically conjugate variables, which obey the commutation relation $\left[Q, \Phi\right]=i\hbar$. 
Upon introducing a canonical transformation, \begin{align}\label{seq:canonical-transformation} Q=&\frac{1}{2}\sqrt{2\hbar\omega_{0}C_{0}}\left(a+a^{\dag}\right), \nonumber\\ \Phi=&\frac{1}{2i}\sqrt{2\hbar\omega_{0}L_{0}}\left(a-a^{\dag}\right), \end{align} the Hamiltonian $H_{0}$ becomes \begin{equation} H_{0}=\hbar\omega_{0}a^{\dag}a. \end{equation} Here, we have subtracted the constant zero-point energy $\hbar\omega_{0}/2$. Such an {\it LC} circuit thus behaves as a single-mode microwave cavity, with $\omega_{0}$ being the cavity frequency, and with $a$ ($a^{\dag}$) being the annihilation (creation) operator of the cavity mode. As demonstrated in Ref.~\cite{teufel2011sideband}, when the capacitance $C_{0}$ in Fig.~\ref{sfig-LC-circuits}(a) is modulated by the mechanical motion of a micromechanical membrane, this motion couples to the cavity mode. The capacitance $C_{0}$ then becomes \begin{equation} C_{0}\mapsto C_{x}=\frac{C_{0}}{1+x/d}, \end{equation} where $x$ is the displacement of the membrane, and $d$ is the distance between the conductive plates of the capacitor. To parametrically squeeze the cavity mode, we further add an additional, electrically tunable capacitor to this experimental setup. The {\it LC} circuit is shown in Fig.~\ref{sfig-LC-circuits}(b). Here, we assume the capacitance of the additional capacitor to be \begin{equation} C_{t}=C_{0}+\Delta C\cos\left(\omega_{L}t\right), \end{equation} where $\omega_{L}$ is the modulation frequency, and $\Delta C\ll C_{0}$. The total capacitance is thus given by $C_{\rm total}=C_{t}+C_{x}$. Note that, in the absence of both mechanical motion and cosine modulation, the total capacitance is equal to $2C_{0}$, and as a result, the resonance frequency of the bare {\it LC} cavity, shown in Fig.~\ref{sfig-LC-circuits}(b), is $\omega_{c}=\omega_{0}/\sqrt{2}$, rather than $\omega_{0}$. 
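As a quick consistency check of the quantization step above, substituting the canonical transformation in Eq.~(\ref{seq:canonical-transformation}) into Eq.~(\ref{seq:simple-LC-Hamiltonian}) gives
\begin{equation}
\frac{\Phi^{2}}{2L_{0}}=-\frac{\hbar\omega_{0}}{4}\left(a-a^{\dag}\right)^{2},
\qquad
\frac{1}{2}L_{0}\omega_{0}^{2}Q^{2}=\frac{\hbar\omega_{0}}{4}\left(a+a^{\dag}\right)^{2},
\end{equation}
and using $\left(a+a^{\dag}\right)^{2}-\left(a-a^{\dag}\right)^{2}=2\left(aa^{\dag}+a^{\dag}a\right)=4a^{\dag}a+2$ yields $H_{0}=\hbar\omega_{0}\left(a^{\dag}a+1/2\right)$, i.e., $\hbar\omega_{0}a^{\dag}a$ after dropping the zero-point term.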
When both mechanical motion and cosine modulation are present, the cavity frequency $\omega_{c}$ is modulated as \begin{align} \omega_{c}\mapsto\omega_{c}^{\prime}=&\frac{1}{\sqrt{L_{0}C_{\rm total}}}\nonumber\\ =&\frac{\omega_{0}}{\sqrt{1+\frac{\Delta C}{C_{0}}\cos\left(\omega_{L}t\right)+\frac{1}{1+x/d}}}. \end{align} In the limit $\left\{\Delta C/C_{0}, x/d\right\}\ll1$, we can expand $\omega_{c}^{\prime}$, up to first order, to have \begin{equation} \omega_{c}^{\prime}\approx\omega_{c}\left[1-\frac{\Delta C}{4C_{0}}\cos\left(\omega_{L}t\right)+\frac{x}{4d}\right]. \end{equation} The Hamiltonian describing the cavity mode of the {\it LC} circuit in Fig.~\ref{sfig-LC-circuits}(b) is then given by \begin{equation} H_{c}=\frac{\Phi^{2}}{2L_{0}}+\frac{1}{2}L_{0}\omega_{c}^{2}\left[1-\frac{\Delta C}{2C_{0}}\cos\left(\omega_{L}t\right)+\frac{x}{2d}\right]Q^{2}. \end{equation} Using the canonical transformation in Eq.~(\ref{seq:canonical-transformation}), but with $\omega_{0}$ replaced by $\omega_{c}$, the Hamiltonian $H_{c}$ is reduced to \begin{align}\label{seq:LC-cavity-mode-Hamiltonian} H_{c}=\;&\hbar\omega_{c}a^{\dag}a-\hbar g_{0}a^{\dag}a\left(b+b^{\dag}\right)\nonumber\\ &+\frac{1}{2}\hbar\Omega\left[\exp\left(i\omega_{L}t \right)a^{2}+\exp\left(-i\omega_{L}t\right)a^{\dag2}\right], \end{align} where $b$ ($b^{\dag}$) is the annihilation (creation) operator of the mechanical mode, $g_{0}=-\omega_{c}x_{\rm zpf}/4d$ is the single-photon optomechanical coupling, $x_{\rm zpf}$ is the zero-point fluctuation of the mechanical resonator, $\Omega=-\omega_{c}\Delta C/8C_{0}$ is the amplitude of the two-photon driving, and $\omega_{L}$ is its frequency. Here, we have made the rotating-wave approximation, and we have also replaced \begin{equation} x\mapsto x_{\rm zpf}\left(b+b^{\dag}\right). 
\end{equation} After including the free Hamiltonian of the mechanical resonator, the full Hamiltonian, in a rotating frame at $\omega_{L}/2$, becomes ($\hbar=1$) \begin{align}\label{seq:LC-Hamiltonian} H=\;&\omega_{m}b^{\dag}b+\Delta a^{\dag}a\nonumber\\ &-g_{0}a^{\dag}a\left(b+b^{\dag}\right)+\frac{1}{2}\Omega\left(a^{2}+a^{\dag2}\right), \end{align} where $\omega_{m}$ is the frequency of the mechanical mode, and $\Delta=\omega_{c}-\omega_{L}/2$. The Hamiltonian in Eq.~(\ref{seq:LC-Hamiltonian}) is exactly the one used in this work. A squeezed-vacuum reservoir coupled to the cavity mode can be realized directly using the {\it LC} circuit in Fig.~\ref{sfig-LC-circuits}(a), with the constant capacitance $C_{0}$ replaced by a tunable capacitance $C_{t}$. By following the same recipe as above, the corresponding Hamiltonian is then given by \begin{equation} H_{r}=\Delta_{0} a^{\dag}a+\frac{1}{2}\Omega_{0}\left(a^{2}+a^{\dag2}\right), \end{equation} where $\Delta_{0}=\omega_{0}-\omega_{L}/2$, and $\Omega_{0}=-\omega_{0}\Delta C/8C_{0}$. The canonical transformation used here is the same as given in Eq.~(\ref{seq:canonical-transformation}). When the input field of the cavity is in the vacuum, we can obtain a squeezed-vacuum field at the output port, according to the input-output relation. In addition to the {\it LC} circuit, the squeezed-vacuum reservoir can also be generated by a Josephson parametric amplifier, as experimentally demonstrated in Refs.~\cite{murch2013reduction,toyli2016resonance}. In particular, a squeezing bandwidth of up to $\sim 10$~MHz was reported in Ref.~\cite{murch2013reduction}. This is sufficient to fulfill the large-bandwidth requirement of the reservoir. 
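Finally, the first-order expansion of $\omega_{c}^{\prime}$ used above can be sanity-checked numerically. The circuit values below are arbitrary illustrations, not taken from the experiment of Ref.~\cite{teufel2011sideband}:

```python
import math

# Illustrative circuit values (assumed): L0 = 1 nH, C0 = 1 pF
L0, C0 = 1e-9, 1e-12
omega0 = 1.0 / math.sqrt(L0 * C0)
omega_c = omega0 / math.sqrt(2.0)   # bare frequency with total capacitance 2*C0

def omega_exact(dC_rel, x_rel, phase):
    """Exact modulated frequency omega0 / sqrt(1 + (dC/C0)cos + 1/(1 + x/d))."""
    return omega0 / math.sqrt(1.0 + dC_rel * math.cos(phase) + 1.0 / (1.0 + x_rel))

def omega_linear(dC_rel, x_rel, phase):
    """First-order expansion omega_c [1 - (dC/4C0)cos(w_L t) + x/(4d)]."""
    return omega_c * (1.0 - 0.25 * dC_rel * math.cos(phase) + 0.25 * x_rel)
```

For modulations at the percent level the two expressions agree to well within a part in $10^{3}$, confirming the expansion.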
\begin{thebibliography}{79} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Schwinger}(1951)}]{schwinger1951gauge} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Schwinger}},\ }\bibfield {title} {\enquote {\bibinfo {title} {On {G}auge {I}nvariance and {V}acuum {P}olarization},}\ }\href {https://link.aps.org/doi/10.1103/PhysRev.82.664} {\bibfield {journal} {\bibinfo {journal} {Phys. 
Rev.}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {664} (\bibinfo {year} {1951})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hawking}(1974)}]{hawking1974black} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~W.}\ \bibnamefont {Hawking}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Black hole explosions?}}\ }\href {http://dx.doi.org/10.1038/248030a0} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {248}},\ \bibinfo {pages} {30} (\bibinfo {year} {1974})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Unruh}(1976)}]{unruh1976notes} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~G.}\ \bibnamefont {Unruh}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Notes on black-hole evaporation},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevD.14.870} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {870} (\bibinfo {year} {1976})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Moore}(1970)}]{moore1970quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~T.}\ \bibnamefont {Moore}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum {T}heory of the {E}lectromagnetic {F}ield in a {V}ariable-{L}ength {O}ne-{D}imensional {C}avity},}\ }\href {https://doi.org/10.1063/1.1665432} {\bibfield {journal} {\bibinfo {journal} {J. Math. 
Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {2679--2691} (\bibinfo {year} {1970})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fulling}\ and\ \citenamefont {Davies}(1976)}]{fulling1976radiation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Fulling}}\ and\ \bibinfo {author} {\bibfnamefont {P.~C.~W.}\ \bibnamefont {Davies}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Radiation from a moving mirror in two dimensional space-time: conformal anomaly},}\ }\href {http://rspa.royalsocietypublishing.org/content/348/1654/393} {\bibfield {journal} {\bibinfo {journal} {Proc. R. Soc. Lond. A}\ }\textbf {\bibinfo {volume} {348}},\ \bibinfo {pages} {393--414} (\bibinfo {year} {1976})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dodonov}(2010)}]{dodonov2010current} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Dodonov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Current status of the dynamical {C}asimir effect},}\ }\href {http://stacks.iop.org/1402-4896/82/i=3/a=038105} {\bibfield {journal} {\bibinfo {journal} {Phys. Scr.}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {038105} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nation}\ \emph {et~al.}(2012)\citenamefont {Nation}, \citenamefont {Johansson}, \citenamefont {Blencowe},\ and\ \citenamefont {Nori}}]{nation2012colloquium} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~D.}\ \bibnamefont {Nation}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Blencowe}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Colloquium: {S}timulating uncertainty: {A}mplifying the quantum vacuum with superconducting circuits},}\ }\href {https://link.aps.org/doi/10.1103/RevModPhys.84.1} {\bibfield {journal} {\bibinfo {journal} {Rev. 
Mod. Phys.}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {1} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yablonovitch}(1989)}]{yablonovitch1989accelerating} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Yablonovitch}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Accelerating reference frame for electromagnetic waves in a rapidly growing plasma: {U}nruh-{D}avies-{F}ulling-{D}ewitt radiation and the nonadiabatic {C}asimir effect},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.62.1742} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {62}},\ \bibinfo {pages} {1742} (\bibinfo {year} {1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lozovik}\ \emph {et~al.}(1995)\citenamefont {Lozovik}, \citenamefont {Tsvetus},\ and\ \citenamefont {Vinogradov}}]{lozovik1995parametric} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.~E.}\ \bibnamefont {Lozovik}}, \bibinfo {author} {\bibfnamefont {V.~G.}\ \bibnamefont {Tsvetus}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Vinogradov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Parametric excitation of vacuum by use of femtosecond laser pulses},}\ }\href {http://stacks.iop.org/1402-4896/52/i=2/a=008} {\bibfield {journal} {\bibinfo {journal} {Phys. 
Scr.}\ }\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {184} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Crocce}\ \emph {et~al.}(2004)\citenamefont {Crocce}, \citenamefont {Dalvit}, \citenamefont {Lombardo},\ and\ \citenamefont {Mazzitelli}}]{crocce2004model} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Crocce}}, \bibinfo {author} {\bibfnamefont {D.~A.~R.}\ \bibnamefont {Dalvit}}, \bibinfo {author} {\bibfnamefont {F.~C.}\ \bibnamefont {Lombardo}}, \ and\ \bibinfo {author} {\bibfnamefont {F.~D.}\ \bibnamefont {Mazzitelli}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Model for resonant photon creation in a cavity with time-dependent conductivity},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.70.033811} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {70}},\ \bibinfo {pages} {033811} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Braggio}\ \emph {et~al.}(2005)\citenamefont {Braggio}, \citenamefont {Bressi}, \citenamefont {Carugno}, \citenamefont {Del~Noce}, \citenamefont {Galeazzi}, \citenamefont {Lombardi}, \citenamefont {Palmieri}, \citenamefont {Ruoso},\ and\ \citenamefont {Zanello}}]{braggio2005novel} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Braggio}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Bressi}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Carugno}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Del~Noce}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Galeazzi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lombardi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Palmieri}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ruoso}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Zanello}},\ }\bibfield {title} {\enquote {\bibinfo {title} {A novel experimental approach for the detection of the dynamical 
{C}asimir effect},}\ }\href {http://stacks.iop.org/0295-5075/70/i=6/a=754} {\bibfield {journal} {\bibinfo {journal} {Europhys. Lett.}\ }\textbf {\bibinfo {volume} {70}},\ \bibinfo {pages} {754} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Segev}\ \emph {et~al.}(2007)\citenamefont {Segev}, \citenamefont {Abdo}, \citenamefont {Shtempluck}, \citenamefont {Buks},\ and\ \citenamefont {Yurke}}]{segev2007prospects} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Segev}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Abdo}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Shtempluck}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Buks}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yurke}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Prospects of employing superconducting stripline resonators for studying the dynamical {C}asimir effect experimentally},}\ }\href {https://doi.org/10.1016/j.physleta.2007.05.066} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. A}\ }\textbf {\bibinfo {volume} {370}},\ \bibinfo {pages} {202--206} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ciuti}\ \emph {et~al.}(2005)\citenamefont {Ciuti}, \citenamefont {Bastard},\ and\ \citenamefont {Carusotto}}]{ciuti2005quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ciuti}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Bastard}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Carusotto}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum vacuum properties of the intersubband cavity polariton field},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevB.72.115303} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
B}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {115303} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {De~Liberato}\ \emph {et~al.}(2007)\citenamefont {De~Liberato}, \citenamefont {Ciuti},\ and\ \citenamefont {Carusotto}}]{de2007quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {De~Liberato}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ciuti}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Carusotto}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum {V}acuum {R}adiation {S}pectra from a {S}emiconductor {M}icrocavity with a {T}ime-{M}odulated {V}acuum {R}abi {F}requency},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.98.103602} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {103602} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {De~Liberato}\ \emph {et~al.}(2009)\citenamefont {De~Liberato}, \citenamefont {Gerace}, \citenamefont {Carusotto},\ and\ \citenamefont {Ciuti}}]{de2009extracavity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {De~Liberato}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gerace}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Carusotto}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ciuti}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Extracavity quantum vacuum radiation from a single qubit},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.80.053810} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {053810} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Garziano}\ \emph {et~al.}(2013)\citenamefont {Garziano}, \citenamefont {Ridolfo}, \citenamefont {Stassi}, \citenamefont {Di~Stefano},\ and\ \citenamefont {Savasta}}]{garziano2013switching} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Garziano}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ridolfo}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Stassi}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Di~Stefano}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Savasta}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Switching on and off of ultrastrong light-matter interaction: {P}hoton statistics of quantum vacuum radiation},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.88.063829} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {063829} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hagenm{\"u}ller}(2016)}]{hagenmuller2016all} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Hagenm{\"u}ller}},\ }\bibfield {title} {\enquote {\bibinfo {title} {All-optical dynamical {C}asimir effect in a three-dimensional terahertz photonic band gap},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevB.93.235309} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {235309} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {De~Liberato}(2017)}]{de2017virtual} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {De~Liberato}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Virtual photons in the ground state of a dissipative system},}\ }\href {https://doi.org/10.1038/s41467-017-01504-5} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {1465} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cirio}\ \emph {et~al.}(2017)\citenamefont {Cirio}, \citenamefont {Debnath}, \citenamefont {Lambert},\ and\ \citenamefont {Nori}}]{cirio2017amplified} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cirio}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Debnath}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lambert}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Amplified {O}ptomechanical {T}ransduction of {V}irtual {R}adiation {P}ressure},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.119.053601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {119}},\ \bibinfo {pages} {053601} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {de~Melo~e Souza}\ \emph {et~al.}(2018)\citenamefont {de~Melo~e Souza}, \citenamefont {Impens},\ and\ \citenamefont {Maia~Neto}}]{e2018microscopic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {de~Melo~e Souza}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Impens}}, \ and\ \bibinfo {author} {\bibfnamefont {P.~A.}\ \bibnamefont {Maia~Neto}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Microscopic dynamical {C}asimir effect},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.97.032514} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {032514} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kockum}\ \emph {et~al.}(2019)\citenamefont {Kockum}, \citenamefont {Miranowicz}, \citenamefont {De~Liberato}, \citenamefont {Savasta},\ and\ \citenamefont {Nori}}]{kockum2019ultrastrong} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Kockum}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Miranowicz}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {De~Liberato}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Savasta}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Ultrastrong coupling between light and matter},}\ }\href {https://doi.org/10.1038/s42254-018-0006-2} {\bibfield {journal} {\bibinfo {journal} {Nat. Rev. Phys.}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {19} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Forn-D{\'\i}az}\ \emph {et~al.}(2019)\citenamefont {Forn-D{\'\i}az}, \citenamefont {Lamata}, \citenamefont {Rico}, \citenamefont {Kono},\ and\ \citenamefont {Solano}}]{forn2019ultrastrong} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Forn-D{\'\i}az}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lamata}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Rico}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kono}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Ultrastrong coupling regimes of light-matter interaction},}\ }\href {https://link.aps.org/doi/10.1103/RevModPhys.91.025005} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {025005} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dezael}\ and\ \citenamefont {Lambrecht}(2010)}]{dezael2010analogue} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~X.}\ \bibnamefont {Dezael}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lambrecht}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Analogue {C}asimir radiation using an optical parametric oscillator},}\ }\href {https://doi.org/10.1209/0295-5075/89/14001} {\bibfield {journal} {\bibinfo {journal} {Europhys. Lett.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {14001} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2009)\citenamefont {Johansson}, \citenamefont {Johansson}, \citenamefont {Wilson},\ and\ \citenamefont {Nori}}]{johansson2009dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Dynamical {C}asimir {E}ffect in a {S}uperconducting {C}oplanar {W}aveguide},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.103.147003} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {147003} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2010)\citenamefont {Johansson}, \citenamefont {Johansson}, \citenamefont {Wilson},\ and\ \citenamefont {Nori}}]{johansson2010dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Dynamical {C}asimir effect in superconducting microwave circuits},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.82.052509} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {052509} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wilson}\ \emph {et~al.}(2011)\citenamefont {Wilson}, \citenamefont {Johansson}, \citenamefont {Pourkabirian}, \citenamefont {Simoen}, \citenamefont {Johansson}, \citenamefont {Duty}, \citenamefont {Nori},\ and\ \citenamefont {Delsing}}]{wilson2011observation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Pourkabirian}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Simoen}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Duty}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Delsing}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Observation of the dynamical {C}asimir effect in a superconducting circuit},}\ }\href {http://dx.doi.org/10.1038/nature10561} {\bibfield {journal} {\bibinfo 
{journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {479}},\ \bibinfo {pages} {376} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dalvit}(2011)}]{dalvit2011quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.~R.}\ \bibnamefont {Dalvit}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum physics: Shaking photons out of the vacuum},}\ }\href {http://dx.doi.org/10.1038/479303a} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {479}},\ \bibinfo {pages} {303} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2013{\natexlab{a}})\citenamefont {Johansson}, \citenamefont {Johansson}, \citenamefont {Wilson}, \citenamefont {Delsing},\ and\ \citenamefont {Nori}}]{johansson2013nonclassical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Delsing}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Nonclassical microwave radiation from the dynamical {C}asimir effect},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.87.043804} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {043804} (\bibinfo {year} {2013}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {L{\"a}hteenm{\"a}ki}\ \emph {et~al.}(2013)\citenamefont {L{\"a}hteenm{\"a}ki}, \citenamefont {Paraoanu}, \citenamefont {Hassel},\ and\ \citenamefont {Hakonen}}]{lahteenmaki2013dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {L{\"a}hteenm{\"a}ki}}, \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Paraoanu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hassel}}, \ and\ \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Hakonen}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Dynamical {C}asimir effect in a {J}osephson metamaterial},}\ }\href {http://www.pnas.org/content/110/11/4234} {\bibfield {journal} {\bibinfo {journal} {Proc. Natl. Acad. Sci. U. S. A.}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {4234--4238} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lambrecht}\ \emph {et~al.}(1996)\citenamefont {Lambrecht}, \citenamefont {Jaekel},\ and\ \citenamefont {Reynaud}}]{lambrecht1996motion} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lambrecht}}, \bibinfo {author} {\bibfnamefont {M.-T.}\ \bibnamefont {Jaekel}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Reynaud}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Motion {I}nduced {R}adiation from a {V}ibrating {C}avity},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.77.615} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {615} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dodonov}\ and\ \citenamefont {Klimov}(1996)}]{dodonov1996generation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Dodonov}}\ and\ \bibinfo {author} {\bibfnamefont {A.~B.}\ \bibnamefont {Klimov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Generation and detection of photons in a cavity with a resonantly oscillating boundary},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.53.2664} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {2664} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Plunien}\ \emph {et~al.}(2000)\citenamefont {Plunien}, \citenamefont {Sch{\"u}tzhold},\ and\ \citenamefont {Soff}}]{plunien2000dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Plunien}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sch{\"u}tzhold}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Soff}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Dynamical {C}asimir {E}ffect at {F}inite {T}emperature},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.84.1882} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {1882} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schaller}\ \emph {et~al.}(2002)\citenamefont {Schaller}, \citenamefont {Sch{\"u}tzhold}, \citenamefont {Plunien},\ and\ \citenamefont {Soff}}]{schaller2002dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Schaller}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sch{\"u}tzhold}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Plunien}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Soff}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Dynamical {C}asimir effect in a leaky cavity at finite temperature},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.66.023812} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {66}},\ \bibinfo {pages} {023812} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kim}\ \emph {et~al.}(2006)\citenamefont {Kim}, \citenamefont {Brownell},\ and\ \citenamefont {Onofrio}}]{kim2006detectability} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.-J.}\ \bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont {Brownell}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Onofrio}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Detectability of {D}issipative {M}otion in {Q}uantum {V}acuum via {S}uperradiance},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.96.200402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {200402} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {De~Castro}\ \emph {et~al.}(2013)\citenamefont {De~Castro}, \citenamefont {Cacheffo},\ and\ \citenamefont {Dodonov}}]{de2013influence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~S.~M.}\ \bibnamefont {De~Castro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cacheffo}}, \ and\ \bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Dodonov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Influence of the field-detector coupling strength on the dynamical {C}asimir effect},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.87.033809} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {033809} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Macr{\`\i}}\ \emph {et~al.}(2018)\citenamefont {Macr{\`\i}}, \citenamefont {Ridolfo}, \citenamefont {Di~Stefano}, \citenamefont {Kockum}, \citenamefont {Nori},\ and\ \citenamefont {Savasta}}]{macri2018nonperturbative} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Macr{\`\i}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ridolfo}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Di~Stefano}}, \bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Kockum}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Savasta}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Nonperturbative {D}ynamical {C}asimir {E}ffect in {O}ptomechanical {S}ystems: {V}acuum {C}asimir-{R}abi {S}plittings},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevX.8.011031} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {011031} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sanz}\ \emph {et~al.}(2018)\citenamefont {Sanz}, \citenamefont {Wieczorek}, \citenamefont {Gr{\"{o}}blacher},\ and\ \citenamefont {Solano}}]{Sanz2018electromechanical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sanz}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Wieczorek}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gr{\"{o}}blacher}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Electro-mechanical {C}asimir effect},}\ }\href {\doibase 10.22331/q-2018-09-03-91} {\bibfield {journal} {\bibinfo {journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {91} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2019)\citenamefont {Wang}, \citenamefont {Blencowe}, \citenamefont {Wilson},\ and\ \citenamefont {Rimberg}}]{wang2018mechanically} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Blencowe}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Rimberg}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Mechanically generating entangled photons from the vacuum: A microwave circuit-acoustic resonator analog of the oscillatory {U}nruh effect},}\ }\href {\doibase 10.1103/PhysRevA.99.053833} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {053833} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Settineri}\ \emph {et~al.}(2019)\citenamefont {Settineri}, \citenamefont {Macr\`{\i}}, \citenamefont {Garziano}, \citenamefont {Di~Stefano}, \citenamefont {Nori},\ and\ \citenamefont {Savasta}}]{settineri2019conversion} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Settineri}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Macr\`{\i}}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Garziano}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Di~Stefano}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Savasta}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Conversion of mechanical noise into correlated photon pairs: Dynamical {C}asimir effect from an incoherent mechanical drive},}\ }\href {\doibase 10.1103/PhysRevA.100.022501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} {022501} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scully}\ and\ \citenamefont {Zubairy}(1997)}]{scully1997book} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~O.}\ \bibnamefont {Scully}}\ and\ \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Zubairy}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum {O}ptics}}}\ (\bibinfo {publisher} {Cambridge University Press, Cambridge},\ \bibinfo {year} {1997})\BibitemShut {NoStop} \bibitem [{\citenamefont {Xiang}\ \emph {et~al.}(2013)\citenamefont {Xiang}, \citenamefont {Ashhab}, \citenamefont {You},\ and\ \citenamefont {Nori}}]{xiang2013hybrid} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.~L.}\ \bibnamefont {Xiang}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ashhab}}, \bibinfo {author} {\bibfnamefont {J.~Q.}\ \bibnamefont {You}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Hybrid quantum circuits: Superconducting circuits interacting with other quantum systems},}\ }\href {\doibase 10.1103/RevModPhys.85.623} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {623} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gu}\ \emph {et~al.}(2017)\citenamefont {Gu}, \citenamefont {Kockum}, \citenamefont {Miranowicz}, \citenamefont {Liu},\ and\ \citenamefont {Nori}}]{gu2017microwave} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Gu}}, \bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Kockum}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Miranowicz}}, \bibinfo {author} {\bibfnamefont {Y.-x.}\ \bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Microwave photonics with superconducting quantum circuits},}\ }\href {http://www.sciencedirect.com/science/article/pii/S0370157317303290} {\bibfield {journal} {\bibinfo {journal} {Phys. Rep.}\ }\textbf {\bibinfo {volume} {718-719}},\ \bibinfo {pages} {1--102} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reiserer}\ and\ \citenamefont {Rempe}(2015)}]{reiserer2015cavity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Reiserer}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Rempe}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Cavity-based quantum networks with single atoms and optical photons},}\ }\href {https://link.aps.org/doi/10.1103/RevModPhys.87.1379} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {1379} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {L{\"u}}\ \emph {et~al.}(2015)\citenamefont {L{\"u}}, \citenamefont {Wu}, \citenamefont {Johansson}, \citenamefont {Jing}, \citenamefont {Zhang},\ and\ \citenamefont {Nori}}]{lu2015squeezed} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-Y.}\ \bibnamefont {L{\"u}}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Jing}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Squeezed {O}ptomechanics with {P}hase-{M}atched {A}mplification and {D}issipation},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.114.093602} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {114}},\ \bibinfo {pages} {093602} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lemonde}\ \emph {et~al.}(2016)\citenamefont {Lemonde}, \citenamefont {Didier},\ and\ \citenamefont {Clerk}}]{lemonde2016enhanced} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.-A.}\ \bibnamefont {Lemonde}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Didier}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Clerk}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Enhanced nonlinear interactions in quantum optomechanics via mechanical amplification},}\ }\href {http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4848487/} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {11338} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Qin}\ \emph {et~al.}(2018)\citenamefont {Qin}, \citenamefont {Miranowicz}, \citenamefont {Li}, \citenamefont {L{\"u}}, \citenamefont {You},\ and\ \citenamefont {Nori}}]{qin2018exponentially} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Qin}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Miranowicz}}, \bibinfo {author} {\bibfnamefont {P.-B.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {X.-Y.}\ \bibnamefont {L{\"u}}}, \bibinfo {author} {\bibfnamefont {J.~Q.}\ \bibnamefont {You}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Exponentially {E}nhanced {L}ight-{M}atter {I}nteraction, {C}ooperativities, and {S}teady-{S}tate {E}ntanglement {U}sing {P}arametric {A}mplification},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.120.093601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {093601} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Leroux}\ \emph {et~al.}(2018)\citenamefont {Leroux}, \citenamefont {Govia},\ and\ \citenamefont {Clerk}}]{leroux2018enhancing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Leroux}}, \bibinfo {author} {\bibfnamefont {L.~C.~G.}\ \bibnamefont {Govia}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Clerk}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Enhancing {C}avity {Q}uantum {E}lectrodynamics via {A}ntisqueezing: {S}ynthetic {U}ltrastrong {C}oupling},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.120.093602} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {093602} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Law}(1995)}]{law1995interaction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~K.}\ \bibnamefont {Law}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Interaction between a moving mirror and radiation pressure: A {H}amiltonian formulation},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.51.2537} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {2537} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Di~Stefano}\ \emph {et~al.}(2019)\citenamefont {Di~Stefano}, \citenamefont {Settineri}, \citenamefont {Macr{\`\i}}, \citenamefont {Ridolfo}, \citenamefont {Stassi}, \citenamefont {Kockum}, \citenamefont {Savasta},\ and\ \citenamefont {Nori}}]{di2017interaction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Di~Stefano}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Settineri}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Macr{\`\i}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ridolfo}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Stassi}}, \bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Kockum}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Savasta}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Interaction of {M}echanical {O}scillators {M}ediated by the {E}xchange of {V}irtual {P}hoton {P}airs},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.122.030402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {122}},\ \bibinfo {pages} {030402} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Murch}\ \emph {et~al.}(2013)\citenamefont {Murch}, \citenamefont {Weber}, \citenamefont {Beck}, \citenamefont {Ginossar},\ and\ \citenamefont {Siddiqi}}]{murch2013reduction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~W.}\ \bibnamefont {Murch}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Weber}}, \bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont {Beck}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Ginossar}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Siddiqi}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Reduction of the radiative decay of atomic coherence in squeezed vacuum},}\ }\href {http://dx.doi.org/10.1038/nature12264} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {499}},\ \bibinfo {pages} {62--65} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bartkowiak}\ \emph {et~al.}(2014)\citenamefont {Bartkowiak}, \citenamefont {Wu},\ and\ \citenamefont {Miranowicz}}]{bartkowiak2014quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Bartkowiak}}, \bibinfo {author} {\bibfnamefont {L.-A.}\ \bibnamefont {Wu}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Miranowicz}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum circuits for amplification of {K}err nonlinearity via quadrature squeezing},}\ }\href {http://stacks.iop.org/0953-4075/47/i=14/a=145501} {\bibfield {journal} {\bibinfo {journal} {J. Phys. 
B}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {145501} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Clark}\ \emph {et~al.}(2017)\citenamefont {Clark}, \citenamefont {Lecocq}, \citenamefont {Simmonds}, \citenamefont {Aumentado},\ and\ \citenamefont {Teufel}}]{clark2017sideband} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~B.}\ \bibnamefont {Clark}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Lecocq}}, \bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont {Simmonds}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Aumentado}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Teufel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Sideband cooling beyond the quantum backaction limit with squeezed light},}\ }\href {http://dx.doi.org/10.1038/nature20604} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {541}},\ \bibinfo {pages} {191--195} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zeytino{\u{g}}lu}\ \emph {et~al.}(2017)\citenamefont {Zeytino{\u{g}}lu}, \citenamefont {{\.I}mamo{\u{g}}lu},\ and\ \citenamefont {Huber}}]{zeytinouglu2017engineering} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zeytino{\u{g}}lu}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {{\.I}mamo{\u{g}}lu}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Huber}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Engineering {M}atter {I}nteractions {U}sing {S}queezed {V}acuum},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevX.7.021041} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {021041} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vahlbruch}\ \emph {et~al.}(2018)\citenamefont {Vahlbruch}, \citenamefont {Wilken}, \citenamefont {Mehmet},\ and\ \citenamefont {Willke}}]{vahlbruch2018laser} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Vahlbruch}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wilken}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mehmet}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Willke}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Laser {P}ower {S}tabilization beyond the {S}hot {N}oise {L}imit {U}sing {S}queezed {L}ight},}\ }\href {\doibase 10.1103/PhysRevLett.121.173601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {121}},\ \bibinfo {pages} {173601} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wilson}\ \emph {et~al.}(2010)\citenamefont {Wilson}, \citenamefont {Duty}, \citenamefont {Sandberg}, \citenamefont {Persson}, \citenamefont {Shumeiko},\ and\ \citenamefont {Delsing}}]{wilson2010photon} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Duty}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sandberg}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Persson}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Shumeiko}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Delsing}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Photon {G}eneration in an {E}lectromagnetic {C}avity with a {T}ime-{D}ependent {B}oundary},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.105.233907} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages} {233907} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Butera}\ and\ \citenamefont {Carusotto}(2019)}]{butera2019mechanical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Butera}}\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Carusotto}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Mechanical backreaction effect of the dynamical casimir emission},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.99.053815} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {053815} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stassi}\ \emph {et~al.}(2013)\citenamefont {Stassi}, \citenamefont {Ridolfo}, \citenamefont {Di~Stefano}, \citenamefont {Hartmann},\ and\ \citenamefont {Savasta}}]{stassi2013spontaneous} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Stassi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ridolfo}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Di~Stefano}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Hartmann}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Savasta}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Spontaneous {C}onversion from {V}irtual to {R}eal {P}hotons in the {U}ltrastrong-{C}oupling {R}egime},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.110.243601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {243601} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Toyli}\ \emph {et~al.}(2016)\citenamefont {Toyli}, \citenamefont {Eddins}, \citenamefont {Boutin}, \citenamefont {Puri}, \citenamefont {Hover}, \citenamefont {Bolkhovsky}, \citenamefont {Oliver}, \citenamefont {Blais},\ and\ \citenamefont {Siddiqi}}]{toyli2016resonance} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Toyli}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Eddins}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boutin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Puri}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Hover}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Bolkhovsky}}, \bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Blais}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Siddiqi}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Resonance {F}luorescence from an {A}rtificial {A}tom in {S}queezed {V}acuum},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevX.6.031004} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {031004} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kippenberg}\ \emph {et~al.}(2005)\citenamefont {Kippenberg}, \citenamefont {Rokhsari}, \citenamefont {Carmon}, \citenamefont {Scherer},\ and\ \citenamefont {Vahala}}]{kippenberg2005analysis} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Kippenberg}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Rokhsari}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Carmon}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Scherer}}, \ and\ \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Vahala}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Analysis of {R}adiation-{P}ressure {I}nduced {M}echanical {O}scillation of an {O}ptical {M}icrocavity},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.95.033901} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {033901} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schliesser}\ \emph {et~al.}(2006)\citenamefont {Schliesser}, \citenamefont {Del'Haye}, \citenamefont {Nooshi}, \citenamefont {Vahala},\ and\ \citenamefont {Kippenberg}}]{schliesser2006radiation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Schliesser}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Del'Haye}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Nooshi}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Vahala}}, \ and\ \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Kippenberg}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Radiation {P}ressure {C}ooling of a {M}icromechanical {O}scillator {U}sing {D}ynamical {B}ackaction},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.97.243905} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {243905} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fiore}\ \emph {et~al.}(2011)\citenamefont {Fiore}, \citenamefont {Yang}, \citenamefont {Kuzyk}, \citenamefont {Barbour}, \citenamefont {Tian},\ and\ \citenamefont {Wang}}]{fiore2011storing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Fiore}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {M.~C.}\ \bibnamefont {Kuzyk}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barbour}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Tian}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wang}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Storing {O}ptical {I}nformation as a {M}echanical {E}xcitation in a {S}ilica {O}ptomechanical {R}esonator},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.107.133601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {133601} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dong}\ \emph {et~al.}(2012)\citenamefont {Dong}, \citenamefont {Fiore}, \citenamefont {Kuzyk},\ and\ \citenamefont {Wang}}]{dong2012optomechanical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Dong}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Fiore}}, \bibinfo {author} {\bibfnamefont {M.~C.}\ \bibnamefont {Kuzyk}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wang}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Optomechanical dark mode},}\ }\href {http://science.sciencemag.org/content/338/6114/1609} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {338}},\ \bibinfo {pages} {1609--1613} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Verhagen}\ \emph {et~al.}(2012)\citenamefont {Verhagen}, \citenamefont {Del{\'e}glise}, \citenamefont {Weis}, \citenamefont {Schliesser},\ and\ \citenamefont {Kippenberg}}]{verhagen2012quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Verhagen}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Del{\'e}glise}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Weis}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Schliesser}}, \ and\ \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Kippenberg}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum-coherent coupling of a mechanical oscillator to an optical cavity mode},}\ }\href {https://doi.org/10.1038/nature10787} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {482}},\ \bibinfo {pages} {63} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shen}\ \emph {et~al.}(2016)\citenamefont {Shen}, \citenamefont {Zhang}, \citenamefont {Chen}, \citenamefont {Zou}, \citenamefont {Xiao}, \citenamefont {Zou}, 
\citenamefont {Sun}, \citenamefont {Guo},\ and\ \citenamefont {Dong}}]{shen2016experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Shen}}, \bibinfo {author} {\bibfnamefont {Y.-L.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {C.-L.}\ \bibnamefont {Zou}}, \bibinfo {author} {\bibfnamefont {Y.-F.}\ \bibnamefont {Xiao}}, \bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Zou}}, \bibinfo {author} {\bibfnamefont {F.-W.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}}, \ and\ \bibinfo {author} {\bibfnamefont {C.-H.}\ \bibnamefont {Dong}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Experimental realization of optomechanically induced non-reciprocity},}\ }\href {https://doi.org/10.1038/nphoton.2016.161} {\bibfield {journal} {\bibinfo {journal} {Nat. Photon.}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {657} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Monifi}\ \emph {et~al.}(2016)\citenamefont {Monifi}, \citenamefont {Zhang}, \citenamefont {{\"O}zdemir}, \citenamefont {Peng}, \citenamefont {Liu}, \citenamefont {Bo}, \citenamefont {Nori},\ and\ \citenamefont {Yang}}]{monifi2016optomechanically} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Monifi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {{\c{S}}.~K.}\ \bibnamefont {{\"O}zdemir}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {Y.-x.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Bo}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Yang}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Optomechanically induced stochastic resonance and chaos transfer between optical fields},}\ }\href 
{https://doi.org/10.1038/nphoton.2016.73} {\bibfield {journal} {\bibinfo {journal} {Nat. Photon.}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {399} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {F{\"u}rst}\ \emph {et~al.}(2011)\citenamefont {F{\"u}rst}, \citenamefont {Strekalov}, \citenamefont {Elser}, \citenamefont {Aiello}, \citenamefont {Andersen}, \citenamefont {Marquardt},\ and\ \citenamefont {Leuchs}}]{furst2011quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~U.}\ \bibnamefont {F{\"u}rst}}, \bibinfo {author} {\bibfnamefont {D.~V.}\ \bibnamefont {Strekalov}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Elser}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aiello}}, \bibinfo {author} {\bibfnamefont {U.~L.}\ \bibnamefont {Andersen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Marquardt}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum {L}ight from a {W}hispering-{G}allery-{M}ode {D}isk {R}esonator},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.106.113901} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {113901} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sedlmeir}\ \emph {et~al.}(2017)\citenamefont {Sedlmeir}, \citenamefont {Foreman}, \citenamefont {Vogl}, \citenamefont {Zeltner}, \citenamefont {Schunk}, \citenamefont {Strekalov}, \citenamefont {Marquardt}, \citenamefont {Leuchs},\ and\ \citenamefont {Schwefel}}]{sedlmeir2017polarization} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Sedlmeir}}, \bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont {Foreman}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Vogl}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Zeltner}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Schunk}}, \bibinfo {author} {\bibfnamefont {D.~V.}\ \bibnamefont {Strekalov}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Marquardt}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}}, \ and\ \bibinfo {author} {\bibfnamefont {H.~G.~L.}\ \bibnamefont {Schwefel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Polarization-{S}elective {O}ut-{C}oupling of {W}hispering-{G}allery {M}odes},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevApplied.7.024029} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Applied}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {024029} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Trainor}\ \emph {et~al.}(2018)\citenamefont {Trainor}, \citenamefont {Sedlmeir}, \citenamefont {Peuntinger},\ and\ \citenamefont {Schwefel}}]{trainor2018selective} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~S.}\ \bibnamefont {Trainor}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Sedlmeir}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Peuntinger}}, \ and\ \bibinfo {author} {\bibfnamefont {H.~G.~L.}\ \bibnamefont {Schwefel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Selective {C}oupling {E}nhances {H}armonic {G}eneration of {W}hispering-{G}allery {M}odes},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevApplied.9.024007} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Applied}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {024007} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ast}\ \emph {et~al.}(2013)\citenamefont {Ast}, \citenamefont {Mehmet},\ and\ \citenamefont {Schnabel}}]{ast2013high} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ast}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mehmet}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Schnabel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {High-bandwidth squeezed light at 1550 nm from a compact monolithic {PPKTP} cavity},}\ }\href {http://www.opticsexpress.org/abstract.cfm?URI=oe-21-11-13572} {\bibfield {journal} {\bibinfo {journal} {Opt. 
Express}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {13572--13579} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Serikawa}\ \emph {et~al.}(2016)\citenamefont {Serikawa}, \citenamefont {Yoshikawa}, \citenamefont {Makino},\ and\ \citenamefont {Furusawa}}]{serikawa2016creation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Serikawa}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Yoshikawa}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Makino}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Furusawa}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Creation and measurement of broadband squeezed vacuum from a ring optical parametric oscillator},}\ }\href {http://www.opticsexpress.org/abstract.cfm?URI=oe-24-25-28383} {\bibfield {journal} {\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo {volume} {24}},\ \bibinfo {pages} {28383--28391} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vahlbruch}\ \emph {et~al.}(2016)\citenamefont {Vahlbruch}, \citenamefont {Mehmet}, \citenamefont {Danzmann},\ and\ \citenamefont {Schnabel}}]{vahlbruch2016detection} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Vahlbruch}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mehmet}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Danzmann}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Schnabel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Detection of 15 d{B} {S}queezed {S}tates of {L}ight and {T}heir {A}pplication for the {A}bsolute {C}alibration of {P}hotoelectric {Q}uantum {E}fficiency},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.117.110801} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {110801} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schnabel}(2017)}]{schnabel2017squeezed} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Schnabel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Squeezed states of light and their applications in laser interferometers},}\ }\href {http://www.sciencedirect.com/science/article/pii/S0370157317300595} {\bibfield {journal} {\bibinfo {journal} {Phys. Rep.}\ }\textbf {\bibinfo {volume} {684}},\ \bibinfo {pages} {1--51} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Flayac}\ and\ \citenamefont {Savona}(2017)}]{flayac2017unconventional} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Flayac}}\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Savona}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Unconventional photon blockade},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.96.053810} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {053810} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Teufel}\ \emph {et~al.}(2011)\citenamefont {Teufel}, \citenamefont {Donner}, \citenamefont {Li}, \citenamefont {Harlow}, \citenamefont {Allman}, \citenamefont {Cicak}, \citenamefont {Sirois}, \citenamefont {Whittaker}, \citenamefont {Lehnert},\ and\ \citenamefont {Simmonds}}]{teufel2011sideband} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Teufel}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Donner}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.~W.}\ \bibnamefont {Harlow}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Allman}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Cicak}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Sirois}}, \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Whittaker}}, \bibinfo {author} {\bibfnamefont {K.~W.}\ \bibnamefont {Lehnert}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont {Simmonds}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Sideband cooling of micromechanical motion to the quantum ground state},}\ }\href {https://doi.org/10.1038/nature10261} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {475}},\ \bibinfo {pages} {359} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gamel}\ and\ \citenamefont {James}(2010)}]{gamel2010time} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Gamel}}\ and\ \bibinfo {author} {\bibfnamefont {D.~F.~V.}\ \bibnamefont {James}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Time-averaged quantum dynamics and the validity of the effective {H}amiltonian model},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.82.052106} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {052106} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2012)\citenamefont {Johansson}, \citenamefont {Nation},\ and\ \citenamefont {Nori}}]{johansson2012qutip} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {P.~D.}\ \bibnamefont {Nation}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Qutip: An open-source {P}ython framework for the dynamics of open quantum systems},}\ }\href {http://www.sciencedirect.com/science/article/pii/S0010465512000835} {\bibfield {journal} {\bibinfo {journal} {Comput. Phys. Commun.}\ }\textbf {\bibinfo {volume} {183}},\ \bibinfo {pages} {1760--1772} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2013{\natexlab{b}})\citenamefont {Johansson}, \citenamefont {Nation},\ and\ \citenamefont {Nori}}]{johansson2013qutip2} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {P.~D.}\ \bibnamefont {Nation}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Qutip 2: A {P}ython framework for the dynamics of open quantum systems},}\ }\href {http://www.sciencedirect.com/science/article/pii/S0010465512003955} {\bibfield {journal} {\bibinfo {journal} {Comput. Phys. 
Commun.}\ }\textbf {\bibinfo {volume} {184}},\ \bibinfo {pages} {1234--1240} (\bibinfo {year} {2013}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sarchi}\ \emph {et~al.}(2008)\citenamefont {Sarchi}, \citenamefont {Carusotto}, \citenamefont {Wouters},\ and\ \citenamefont {Savona}}]{sarchi2008coherent} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Sarchi}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Carusotto}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Wouters}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Savona}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Coherent dynamics and parametric instabilities of microcavity polaritons in double-well systems},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevB.77.125324} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {125324} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kyriienko}\ \emph {et~al.}(2014)\citenamefont {Kyriienko}, \citenamefont {Liew},\ and\ \citenamefont {Shelykh}}]{kyriienko2014optomechanics} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Kyriienko}}, \bibinfo {author} {\bibfnamefont {T.~C.~H.}\ \bibnamefont {Liew}}, \ and\ \bibinfo {author} {\bibfnamefont {I.~A.}\ \bibnamefont {Shelykh}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Optomechanics with {C}avity {P}olaritons: {D}issipative {C}oupling and {U}nconventional {B}istability},}\ }\href {https://link.aps.org/doi/10.1103/PhysRevLett.112.076402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {076402} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \title[Semigroup graded algebras]{Semigroup graded algebras and codimension growth of graded polynomial identities} \author{A.\,S.~Gordienko} \address{Vrije Universiteit Brussel, Belgium} \email{[email protected]} \keywords{Associative algebra, Jacobson radical, polynomial identity, grading, semigroup, zero band, $H$-(co)module algebra, bialgebra, codimension, Amitsur's conjecture.} \begin{abstract} We show that if $T$ is any of four semigroups of two elements that are not groups, there exists a finite dimensional associative $T$-graded algebra over a field of characteristic $0$ such that the codimensions of its graded polynomial identities have a non-integer exponent of growth. In particular, we provide an example of a finite dimensional graded-simple semigroup graded algebra over an algebraically closed field of characteristic $0$ with a non-integer graded PI-exponent, which is strictly less than the dimension of the algebra. However, if $T$ is a left or right zero band and the $T$-graded algebra is unital, or $T$ is a cancellative semigroup, then the $T$-graded algebra satisfies the graded analog of Amitsur's conjecture, i.e. there exists an integer graded PI-exponent. Moreover, in the first case it turns out that the ordinary and the graded PI-exponents coincide. In addition, we consider related problems on the structure of semigroup graded algebras. \end{abstract} \subjclass[2010]{Primary 16W50; Secondary 16R10, 16R50, 16T05, 16T15.} \thanks{Supported by Fonds voor Wetenschappelijk Onderzoek~--- Vlaanderen Pegasus Marie Curie postdoctoral fellowship (Belgium) and RFBR grant 13-01-00234a (Russia).} \maketitle The notion of a semigroup graded algebra is a natural generalization of the notion of a group graded algebra; however, the former notion is much less restrictive: e.g.
if an algebra is the direct sum of its left ideals or if an algebra is the direct sum of a subalgebra and an ideal, this can be expressed in the language of semigroup gradings. In 2010--2011 E.~Aljadeff, A.~Giambruno, and D.~La~Mattina~\cite{AljaGia, AljaGiaLa, GiaLa} proved that if an associative PI-algebra is graded by a finite group, then there exists an integer exponent of codimensions of its graded polynomial identities, i.e. the graded analog of Amitsur's conjecture holds. In~\cite[Theorem~1]{ASGordienko5} and~\cite[Theorem~3]{ASGordienko9} the author proved the same for finite dimensional associative and Lie algebras graded by arbitrary groups. In~\cite{KelarevPI} A.\,V.~Kelarev studied semigroup graded PI-algebras. The next question that naturally arises in this investigation is whether the results on codimension growth of graded polynomial identities hold for semigroup graded associative algebras. In the associative case the main properties that we use in order to prove the graded analog of Amitsur's conjecture (Theorem~\ref{TheoremMainTGrAssoc}) are the gradedness (or homogeneity) of the Jacobson radical and the graded version of the Wedderburn~--- Artin theorem. We consider these properties in Sections~\ref{SectionSemigroupJacobson} and~\ref{SectionSemigroupWedderburn} and obtain the graded analog of Amitsur's conjecture for algebras graded by cancellative semigroups (Theorem~\ref{TheoremTCancelAmitsur}) and unital algebras graded by left or right zero bands (Theorem~\ref{TheoremTIdemAmitsur}). In the first case we use Kelarev and Plant's result on the gradedness of Jacobson radicals in algebras graded by cancellative groupoids~\cite[Corollary 4.1]{KelarevBook}. Until now, no examples were known of an associative algebra with a non-integer PI-exponent of any kind (graded, Hopf, etc.).
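The semigroups of two elements that enter these constructions are classified in Section~1 below; as an independent cross-check (an illustrative Python sketch, not part of the original paper), one can enumerate all associative multiplication tables on a two-element set and count isomorphism classes:

```python
# Brute-force verification of the classification of two-element semigroups:
# enumerate all 16 binary operations on {0, 1}, keep the associative ones,
# and identify tables that differ only by swapping the two elements.
from itertools import product

def associative(op):
    # op is a 2x2 multiplication table: op[a][b] = a * b.
    return all(op[op[a][b]][c] == op[a][op[b][c]]
               for a, b, c in product(range(2), repeat=3))

tables = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
semigroups = [t for t in tables if associative(t)]

def swapped(op):
    # Conjugate the table by the transposition 0 <-> 1 (the only relabelling).
    s = (1, 0)
    return tuple(tuple(s[op[s[a]][s[b]]] for b in range(2)) for a in range(2))

classes = {min(t, swapped(t)) for t in semigroups}
print(len(semigroups), len(classes))  # 8 associative tables, 5 up to isomorphism
```

The five isomorphism classes found here match the list $\lbrace T_1, T_2, T_3, T_3^{\,\op}, (\mathbb Z_2,+)\rbrace$ established in the proposition of Section~1, exactly four of which are not groups.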
In 1999 S.\,P.~Mishchenko and M.\,V.~Zaicev gave an example of an infinite dimensional Lie algebra with a non-integer PI-exponent~\cite{ZaiMishchFracPI} (see the proof in~\cite{VerZaiMishch}). Here we use their ideas to present a finite dimensional semigroup graded associative algebra with a non-integer exponent of codimension growth of graded polynomial identities (Theorems~\ref{TheoremT1GradFractPI}--\ref{TheoremT3GradFractPI}) for each of the four semigroups of two elements that are not groups. The PI-exponent of a finite dimensional graded-simple group graded Lie or associative algebra over an algebraically closed field of characteristic $0$ equals the dimension of the algebra~\cite[Example~12]{ASGordienko5} and~\cite[Theorem~4]{ASGordienko9}. In Theorem~\ref{TheoremT3GradFractPI} we provide an example of a finite dimensional graded-simple semigroup graded algebra over an algebraically closed field of characteristic $0$ with a non-integer graded PI-exponent, which is strictly less than the dimension of the algebra. \section{Semigroups of two elements} First we describe all five non-isomorphic semigroups of two elements. Let $T_1 =\lbrace 0,1\rbrace$ be the multiplicative semigroup of the field $\mathbb Z_2$. Let $T_2 =\lbrace 0,v\rbrace$ be the semigroup defined by the relations $v^2 =0^2= 0\cdot v=v\cdot 0=0$. Recall that a semigroup $T$ is a \textit{left zero band} if $t_1 t_2 = t_1$ for every $t_1, t_2 \in T$ and a \textit{right zero band} if $t_1 t_2 = t_2$ for every $t_1, t_2 \in T$. Let $T_3 = \lbrace e_1, e_2 \rbrace$ be the right zero band of two elements. \begin{proposition} Let $T$ be a semigroup that consists of two elements. Then $T$ is isomorphic to one of the semigroups from the list $\lbrace T_1, T_2, T_3, T_3^{\,\op}, (\mathbb Z_2,+)\rbrace$, and no two semigroups from this list are isomorphic. (Here $T_3^{\,\op}$ is anti-isomorphic to $T_3$.) \end{proposition} \begin{proof} First, consider the case when $T = \lbrace a, a^2 \rbrace$ for some $a\in T$.
If $a^3=a$, then $a^4=a^2$, $a^2$ is the identity element of $T$, and $T \cong (\mathbb Z_2,+)$. If $a^3=a^2$, then $a^4=a^3=a^2$, $a^2$ is the zero element of $T$, and $T \cong T_2$. Now consider the case when $T \ne \lbrace a, a^2 \rbrace$ for all $a\in T$. Then $T = \lbrace a, b \rbrace$, $a^2=a$, $b^2=b$. If $ab=ba$, then $T \cong T_1$. If $ab\ne ba$, then $T\cong T_3$ for $ab = b$, $ba=a$, and $T\cong T_3^{\,\mathrm{op}}$ for $ab = a$, $ba=b$. \end{proof} \section{Gradedness of the Jacobson radical}\label{SectionSemigroupJacobson} Let $T$ be a semigroup. An algebra $A$ is \textit{$T$-graded} if $A=\bigoplus_{t\in T} A^{(t)}$ (direct sum of subspaces) and $A^{(h)}A^{(t)} \subseteq A^{(ht)}$ for all $h,t\in T$. A subspace $V$ of $A$ is \textit{graded} (or \textit{homogeneous}) if $V=\bigoplus_{t\in T} V \cap A^{(t)}$. In particular, $T_1$-graded algebras are exactly the algebras with a fixed decomposition into the direct sum of a two-sided ideal and a subalgebra. If $T$ is a right zero band, then $T$-graded algebras are algebras with a fixed decomposition into the direct sum of left ideals indexed by the elements of $T$. It is known~\cite[Example 4.2]{KelarevBook} that the Jacobson radical is not necessarily graded. (See also the survey of positive results in~\cite[Section 4.4]{KelarevBook}.) Here we provide examples and results related to semigroups of two elements and left and right zero bands. Denote by $M_k(F)$ the full $k\times k$ matrix algebra over a field $F$ and by $\UT_k(F)$ the algebra of upper triangular $k\times k$ matrices. In $M_k(F)$ we fix the basis of matrix units $e_{i\ell}$, $1\leqslant i,\ell \leqslant k$. \begin{example}\label{ExampleT1} Let $A = M_k(F)\oplus \UT_k(F)$ (direct sum of ideals) where $F$ is a field, $k\geqslant 2$. Define a $T_1$-grading on $A$ by $A^{(0)}=(M_k(F),0)$, $A^{(1)}=\lbrace (\varphi(a), a) \mid a \in \UT_k(F) \rbrace$ where $\varphi \colon \UT_k(F) \hookrightarrow M_k(F)$ is the natural embedding.
Then $$J(A) = \operatorname{span}_F\lbrace (0,e_{ij}) \mid 1 \leqslant i < j \leqslant k\rbrace \subset (0,\UT_k(F)),$$ $J(A) \cap A^{(0)} = J(A) \cap A^{(1)}=0$, and $J(A)$ is not a graded ideal. \end{example} \begin{example}\label{ExampleT2} Let $A = M_k(F)\oplus V$ (direct sum of ideals) where $V\cong M_k(F)$ as a vector space, $k\in \mathbb N$, $V^2 = 0$, and $F$ is a field. Denote by $\varphi \colon V \mathrel{\widetilde{\rightarrow}} M_k(F)$ the corresponding linear isomorphism. Define a $T_2$-grading on $A$ by $A^{(0)}=(M_k(F),0)$, $A^{(v)}=\lbrace (\varphi(a), a) \mid a \in V \rbrace$. Then $$J(A) = (0,V),\ J(A) \cap A^{(0)} = J(A) \cap A^{(v)}=0,$$ and $J(A)$ is not a graded ideal. \end{example} \begin{example} \label{ExampleT3} Let $A = M_k(F)\oplus V$ (direct sum of left ideals) where $V\cong M_k(F)$ as a left $M_k(F)$-module, $k\in \mathbb N$, $V^2=V M_k(F) = 0$, and $F$ is a field. Denote by $\varphi \colon V \mathrel{\widetilde{\rightarrow}} M_k(F)$ the corresponding isomorphism. Define a $T_3$-grading on $A$ by $A^{(e_1)}=(M_k(F),0)$, $$A^{(e_2)}=\lbrace (\varphi(a), a) \mid a \in V \rbrace.$$ Then $$J(A) = (0,V),\ J(A) \cap A^{(e_1)} = J(A) \cap A^{(e_2)}=0,$$ and $J(A)$ is not a graded ideal. \end{example} \begin{remark} One can use the opposite algebra of Example~\ref{ExampleT3} to show that the Jacobson radical is not necessarily $T_3^{\,\mathrm{op}}$-graded. \end{remark} However, if an algebra is unital and $T_3$- or $T_3^{\,\mathrm{op}}$-graded, then the Jacobson radical is graded. In fact, a more general result holds. \begin{proposition}\label{PropositionTIdemGradedIdeals} Let $A$ be a $T$-graded associative algebra with $1$ over a field $F$ for some left or right zero band $T$. Then every ideal of $A$ is graded. \end{proposition} \begin{proof} Consider the case when $t_1 t_2 = t_2$ for every $t_1, t_2 \in T$. (The other case is considered analogously.) Then all $A^{(t)}$, $t\in T$, are left ideals of $A$ and $1 = \sum_{t\in T}e_t$ for some $e_t \in A^{(t)}$. Let $I$ be an ideal.
Then for every $a\in I$ we have $a=\sum_{t\in T}a e_t$ where $ae_t\in I \cap A^{(t)}$ for every $t\in T$. Hence $I=\bigoplus_{t\in T} I \cap A^{(t)}$ is a graded ideal. \end{proof} \section{Graded analogs of the Wedderburn theorems and $T$-graded simplicity}\label{SectionSemigroupWedderburn} Now we study whether the graded analogs of the Wedderburn theorems hold for $T$-graded algebras, where $T$ is a semigroup. Recall that a $T$-graded algebra $A$ is a \textit{graded-simple algebra} if $A^2\ne 0$ and $A$ has no graded ideals other than $A$ and $0$. \begin{example}\label{ExampleT1Wedderburn} Let $B=M_k(F)\oplus M_k(F)$ (direct sum of ideals), $k\in \mathbb N$, where $F$ is a field. Define a $T_1$-grading on $B$ by $B^{(0)}=(M_k(F),0)$, $B^{(1)}=\lbrace (a, a) \mid a \in M_k(F) \rbrace$. Then $B$ cannot be presented as the direct sum of $T_1$-graded ideals that are $T_1$-graded-simple algebras, i.e. the $T_1$-graded analog of the Wedderburn~--- Artin theorem does not hold. \end{example} \begin{proof} Note that the semisimple algebra $B$ has only four ideals: $0$, $B$, $(0,M_k(F))$, and $(M_k(F),0)$. Three of them are $T_1$-graded, namely, $0$, $B$, and $(M_k(F),0)$, and only $(M_k(F),0)$ is a $T_1$-graded-simple algebra. \end{proof} \begin{remark} Since $B^{(0)}$ is always a graded ideal, every $T_1$-graded-simple algebra $B$ has the trivial grading, i.e. $B=B^{(0)}$. Therefore, every $T_1$-graded-simple algebra is simple as an ordinary algebra. \end{remark} \begin{proposition}\label{PropositionT2Wedderburn} Let $B$ be a finite dimensional associative $T_2$-graded semisimple algebra over a field $F$. Then $B=B^{(0)}$, and by the ordinary Wedderburn~--- Artin theorem, $B$ is the direct sum of $T_2$-graded ideals that are simple algebras (with the trivial grading). Moreover, every $T_2$-graded-simple algebra is simple as an ordinary algebra. In particular, the $T_2$-graded analog of the Wedderburn~--- Artin theorem holds.
\end{proposition} \begin{proof} Suppose $B \ne B^{(0)}$. Note that $B^{(0)}$ is an ideal and, by the ordinary Wedderburn~--- Artin theorem, $B=B^{(0)} \oplus I$ for some semisimple ideal $I$ of $B$. However, $(B/B^{(0)})^2=0$ since $(B^{(1)})^2 \subseteq B^{(0)}$, and $I \cong B/B^{(0)}$ cannot be semisimple. Hence $B=B^{(0)}$, the algebra $B$ has the trivial $T_2$-grading, and we can apply to $B$ the ordinary Wedderburn~--- Artin theorem. \end{proof} \begin{proposition}\label{PropositionTIdemWedderburn} Let $B$ be a finite dimensional associative $T$-graded semisimple algebra over a field $F$ for some left or right zero band $T$. Then $B$ is the direct sum of $T$-graded ideals that are simple algebras. In particular, the $T$-graded analog of the Wedderburn~--- Artin theorem holds and every finite dimensional semisimple $T$-graded-simple algebra is simple as an ordinary algebra. \end{proposition} \begin{proof} Consider the case when $T$ is a right zero band. The other case is considered analogously. By the ordinary Wedderburn~--- Artin theorem, $$B=B_1\oplus B_2 \oplus \ldots \oplus B_s \text{ (direct sum of ideals)}$$ for some simple algebras $B_i$. By Proposition~\ref{PropositionTIdemGradedIdeals}, each $B_i$ is a $T$-graded ideal. Now the proposition follows. \end{proof} However there exist non-semisimple $T_3$-graded-simple algebras. (See Proposition~\ref{PropositionAT3GrSimple} below.) By Proposition~\ref{PropositionTIdemGradedIdeals}, if a $T$-graded algebra, where $T$ is a left or right zero band, contains unity, then its Jacobson radical is graded (as well as all the other ideals). Therefore, one may ask whether the $T$-graded analog of the Wedderburn~--- Mal'cev theorem holds for such algebras. In fact, the answer is affirmative. \begin{theorem}\label{TheoremTIdemGradedWeddMalcev} Let $A$ be a finite dimensional associative $T$-graded algebra with unity over a field $F$ where $T$ is a left or right zero band and $A/J(A)$ is a separable algebra.
(E.g., $F$ is a perfect field.) Then there exists a graded maximal semisimple subalgebra $B$ such that $A = B \oplus J$ (direct sum of graded spaces) where $J := J(A)$. \end{theorem} \begin{proof} Without loss of generality, we may assume that $T$ is a right zero band. First we consider the case $J^2=0$. Note that $1_A = \sum_{t\in T} e_t$ for some $e_t\in A^{(t)}$. Moreover, since $1_A e_t= \sum_{r\in T} e_r e_t$ and $e_r e_t \in A^{(t)}$ for every $t\in T$, we have $e_t^2 = e_t$ and $e_r e_t = 0$ for all $r\ne t$. Using the ordinary Wedderburn~--- Mal'cev theorem we choose a maximal semisimple subalgebra $B$ such that $A = B \oplus J$ (direct sum of subspaces). Let $\pi \colon A \twoheadrightarrow A/J$ be the natural projection which is a graded map since $J$ is graded. Let $\varphi \colon A/J \hookrightarrow A$ be a homomorphic embedding such that $\varphi(A/J)=B$ and $\pi\varphi = \id_{A/J}$. Note that $\pi(1_A)=\sum_{t\in T} \pi(e_t)$ is the unity of $A/J$. In addition, $1_A = 1_B = \varphi\pi(1_A)$. Let $T=\lbrace t_1, \ldots, t_s\rbrace$. If $\varphi\pi(e_{t_i})=e_{t_i}$ for all $1\leqslant i\leqslant s$, then $B{e_{t_i}} \subseteq B$ for all $1\leqslant i\leqslant s$ and $B=\bigoplus_{t\in T} B{e_t}$ is a graded subalgebra and the theorem is proved. Suppose $\varphi\pi(e_{t_i}) \ne e_{t_i}$ for at least one $1\leqslant i\leqslant s$. Choose $0\leqslant k \leqslant s-1$ such that $\varphi\pi(e_{t_i})=e_{t_i}$ for all $1\leqslant i \leqslant k$ and $\varphi\pi(e_{t_{k+1}}) \ne e_{t_{k+1}}$. Note that $\pi(\varphi\pi(e_{t_{k+1}})-e_{t_{k+1}})=0$ and $\varphi\pi(e_{t_{k+1}}) = e_{t_{k+1}}+j$ for some $j\in J$. In addition, $j e_{t_i} = (\varphi\pi(e_{t_{k+1}})-e_{t_{k+1}})e_{t_i} = \varphi\pi(e_{t_{k+1}} e_{t_i})-e_{t_{k+1}} e_{t_i} = 0$ for all $1\leqslant i\leqslant k$. Analogously, $e_{t_i}j=0$ for all $1\leqslant i\leqslant k$. Moreover, since $(e_{t_{k+1}} +j)^2=e_{t_{k+1}} +j$, we have $j=e_{t_{k+1}}j + j e_{t_{k+1}}$ and $e_{t_{k+1}} j e_{t_{k+1}} = 0$. 
Let $\tilde \varphi \colon A/J \hookrightarrow A$ be the homomorphic embedding defined by \begin{equation*}\begin{split}\tilde\varphi(a) = (1_A + e_{t_{k+1}}j - je_{t_{k+1}})\varphi(a)(1_A + e_{t_{k+1}}j - je_{t_{k+1}})^{-1} =\\(1_A + e_{t_{k+1}}j - je_{t_{k+1}})\varphi(a)(1_A - e_{t_{k+1}}j + je_{t_{k+1}}).\end{split}\end{equation*} Note that $\pi\tilde\varphi=\id_{A/J}$ and $$\tilde \varphi \pi(e_{t_i})=(1_A + e_{t_{k+1}}j - je_{t_{k+1}})e_{t_i}(1_A - e_{t_{k+1}}j + je_{t_{k+1}})=e_{t_i} \text{ for all } 1\leqslant i \leqslant k.$$ Moreover \begin{equation*}\begin{split}\tilde \varphi \pi(e_{t_{k+1}})= (1_A + e_{t_{k+1}}j - je_{t_{k+1}})\varphi \pi(e_{t_{k+1}})(1_A - e_{t_{k+1}}j + je_{t_{k+1}}) =\\ (1_A + e_{t_{k+1}}j - je_{t_{k+1}})(e_{t_{k+1}}+j)(1_A - e_{t_{k+1}}j + je_{t_{k+1}})=\\ (e_{t_{k+1}}+j - je_{t_{k+1}})(1_A - e_{t_{k+1}}j + je_{t_{k+1}})=\\ e_{t_{k+1}}+j - je_{t_{k+1}} - e_{t_{k+1}}j = e_{t_{k+1}}.\end{split}\end{equation*} Therefore $\tilde B = \tilde\varphi(A/J)$ is a maximal semisimple subalgebra such that $A = \tilde B \oplus J$ (direct sum of subspaces) and $\tilde \varphi \pi(e_{t_i}) = e_{t_i}$ for all $1\leqslant i \leqslant k+1$. Thus, using the induction argument, we may assume that $e_t=\tilde \varphi \pi(e_t) \in \tilde B$ for all $t\in T$. Hence $\tilde B=\bigoplus_{t\in T} \tilde B{e_t}$ is a graded subalgebra of $A$. We have proved the theorem for the case $J^2=0$. The general case is proved by induction on $\dim A$. Suppose $J^2\ne 0$. Then $A/J^2 = B_0 \oplus J/J^2$ (direct sum of graded subspaces) for some graded maximal semisimple subalgebra $B_0$ of $A/J^2$. Note that $1_{A/J^2} \in B_0$. Consider the preimage $B_1$ of $B_0$ in $A$ under the natural map $\pi_1 \colon A \twoheadrightarrow A/J^2$. Then $1_A \in B_1$. Since $B_0 \cong A/J$ is semisimple, $J(B_1)=J^2$. 
Moreover $\dim B_1 < \dim A$ and, by the induction assumption, we have $B_1 = B \oplus J^2$ (direct sum of graded subspaces) for some graded maximal semisimple subalgebra $B$ in $A$. Hence $A = B \oplus J$ (direct sum of graded subspaces) and the theorem is proved. \end{proof} Recall that a semigroup $T$ is \textit{cancellative} if for every $a,b,c \in T$ each of the conditions $ac = bc$ and $ca=cb$ implies $a=b$. \begin{proposition}\label{PropositionTCancelWedderburn} Let $B$ be a finite dimensional associative $T$-graded semisimple algebra over a field $F$ for some cancellative semigroup $T$. Then $B$ is the direct sum of $T$-graded ideals that are $T$-graded-simple algebras. In particular, the $T$-graded analog of the Wedderburn~--- Artin theorem holds.\end{proposition} \begin{proof} By the ordinary Wedderburn~--- Artin theorem, $B = B_1 \oplus \ldots \oplus B_s$ (direct sum of ideals) for some simple algebras $B_i$. Let $I$ be a minimal $T$-graded ideal in $B$. Then $I=B_{i_1}\oplus\ldots \oplus B_{i_k}$ for some $i_1, \ldots, i_k$. Define $N:=\bigoplus_{i \in \lbrace 1,\ldots, s\rbrace \backslash \lbrace i_1, \ldots, i_k\rbrace} B_i$. Since all $B_i$ are semisimple, we have $N=\lbrace b\in B \mid ba =0 \text{ for all } a\in I\rbrace$. Since $T$ is cancellative and $I$ is graded, the ideal $N$ is graded too. Hence $B = I \oplus N$ (direct sum of graded ideals) where $I$ is a $T$-graded-simple algebra. Applying to $N$ the inductive argument, we get the proposition. \end{proof} \section{Graded polynomial identities, their codimensions and cocharacters} Let $T$ be a semigroup and let $F$ be a field. Denote by $F\langle X^{T\text{-}\mathrm{gr}} \rangle $ the free $T$-graded associative algebra over $F$ on the countable set $$X^{T\text{-}\mathrm{gr}}:=\bigcup_{t \in T}X^{(t)},$$ $X^{(t)} = \{ x^{(t)}_1, x^{(t)}_2, \ldots \}$, i.e. the algebra of polynomials in non-commuting variables from $X^{T\text{-}\mathrm{gr}}$. 
The indeterminates from $X^{(t)}$ are said to be homogeneous of degree $t$. The $T$-degree of a monomial $x^{(t_1)}_{i_1} x^{(t_2)}_{i_2} \dots x^{(t_s)}_{i_s} \in F\langle X^{T\text{-}\mathrm{gr}} \rangle $ is defined to be $t_1 t_2 \dots t_s$, as opposed to its total degree, which is defined to be $s$. Denote by $F\langle X^{T\text{-}\mathrm{gr}} \rangle^{(t)}$ the subspace of the algebra $F\langle X^{T\text{-}\mathrm{gr}} \rangle$ spanned by all the monomials having $T$-degree $t$. Notice that $$F\langle X^{T\text{-}\mathrm{gr}} \rangle^{(t)} F\langle X^{T\text{-}\mathrm{gr}} \rangle^{(h)} \subseteq F\langle X^{T\text{-}\mathrm{gr}} \rangle^{(th)}$$ for every $t, h \in T$. It follows that $$F\langle X^{T\text{-}\mathrm{gr}} \rangle =\bigoplus_{t\in T} F\langle X^{T\text{-}\mathrm{gr}} \rangle^{(t)}$$ is a $T$-grading. Let $f=f(x^{(t_1)}_{i_1}, \dots, x^{(t_s)}_{i_s}) \in F\langle X^{T\text{-}\mathrm{gr}} \rangle$. We say that $f$ is a \textit{graded polynomial identity} of a $T$-graded algebra $A=\bigoplus_{t\in T} A^{(t)}$ and write $f\equiv 0$ if $f(a^{(t_1)}_{i_1}, \dots, a^{(t_s)}_{i_s})=0$ for all $a^{(t_j)}_{i_j} \in A^{(t_j)}$, $1 \leqslant j \leqslant s$. The set $\Id^{T\text{-}\mathrm{gr}}(A)$ of graded polynomial identities of $A$ is a graded ideal of $F\langle X^{T\text{-}\mathrm{gr}} \rangle$. \begin{example}\label{ExampleIdGr} Let $T=(\mathbb Z_2,+) = \lbrace \bar 0, \bar 1 \rbrace$, $M_2(F)=M_2(F)^{(\bar 0)}\oplus M_2(F)^{(\bar 1)}$ where $M_2(F)^{(\bar 0)}=\left( \begin{array}{cc} F & 0 \\ 0 & F \end{array} \right)$ and $M_2(F)^{(\bar 1)}=\left( \begin{array}{cc} 0 & F \\ F & 0 \end{array} \right)$. Then $x^{(\bar 0)} y^{(\bar 0)} - y^{(\bar 0)} x^{(\bar 0)} \in \Id^{T\text{-}\mathrm{gr}}(M_2(F))$. \end{example} Let $P^{T\text{-}\mathrm{gr}}_n := \langle x^{(t_1)}_{\sigma(1)} x^{(t_2)}_{\sigma(2)}\ldots x^{(t_n)}_{\sigma(n)} \mid t_i \in T, \sigma\in S_n \rangle_F \subset F \langle X^{T\text{-}\mathrm{gr}} \rangle$, $n \in \mathbb N$.
Then the number $$c^{T\text{-}\mathrm{gr}}_n(A):=\dim\left(\frac{P^{T\text{-}\mathrm{gr}}_n}{P^{T\text{-}\mathrm{gr}}_n \cap \Id^{T\text{-}\mathrm{gr}}(A)}\right)$$ is called the $n$th \textit{codimension of graded polynomial identities} or the $n$th \textit{graded codimension} of $A$. The analog of Amitsur's conjecture for graded codimensions can be formulated as follows. \begin{conjecture} There exists $\PIexp^{T\text{-}\mathrm{gr}}(A):=\lim\limits_{n\to\infty} \sqrt[n]{c^{T\text{-}\mathrm{gr}}_n(A)} \in \mathbb Z_+$. \end{conjecture} If $T$ is the trivial (semi)group of one element, we get the notion of ordinary polynomial identities, ordinary codimensions $c_n(A)$, and the ordinary PI-exponent $\PIexp(A)$. As we shall see in Theorems~\ref{TheoremT1GradFractPI}--\ref{TheoremT3GradFractPI} below, the analog of Amitsur's conjecture fails for all semigroups $T$ of two elements that are not groups. However, in Theorem~\ref{TheoremMainTGrAssoc} below we provide sufficient conditions for a graded algebra to satisfy the analog of Amitsur's conjecture. As a consequence, we prove that if $T$ is a cancellative semigroup or $T$ is a left or right zero band, and a finite dimensional $T$-graded algebra $A$ contains $1$, then $A$ satisfies the graded analog of Amitsur's conjecture (Theorems~\ref{TheoremTCancelAmitsur} and~\ref{TheoremTIdemAmitsur}). \section{Polynomial $H$-identities and their codimensions} In our case, instead of working with graded codimensions directly, it is more convenient to replace the grading with the corresponding dual structure and study the asymptotic behaviour of polynomial $H$-identities. Let $H$ be an arbitrary associative algebra with $1$ over a field $F$. 
We say that an associative algebra $A$ is an algebra with a \textit{generalized $H$-action} if $A$ is endowed with a homomorphism $H \to \End_F(A)$ and for every $h \in H$ there exist $k\in \mathbb N$ and $h'_i, h''_i, h'''_i, h''''_i \in H$, $1\leqslant i \leqslant k$, such that \begin{equation}\label{EqGenHAction} h(ab)=\sum_{i=1}^k\bigl((h'_i a)(h''_i b) + (h'''_i b)(h''''_i a)\bigr) \text{ for all } a,b \in A. \end{equation} \begin{remark} We use the term ``generalized $H$-action'' in order to distinguish from the case when an algebra is an $H$-module algebra for some Hopf algebra $H$ which is a particular case of the generalized $H$-action. \end{remark} Let $F \langle X \rangle$ be the free associative algebra without $1$ on the set $X := \lbrace x_1, x_2, x_3, \ldots \rbrace$. Then $F \langle X \rangle = \bigoplus_{n=1}^\infty F \langle X \rangle^{(n)}$ where $F \langle X \rangle^{(n)}$ is the linear span of all monomials of total degree $n$. Consider the algebra $$F \langle X | H\rangle := \bigoplus_{n=1}^\infty H^{{}\otimes n} \otimes F \langle X \rangle^{(n)}$$ with the multiplication $(u_1 \otimes w_1)(u_2 \otimes w_2):=(u_1 \otimes u_2) \otimes w_1w_2$ for all $u_1 \in H^{{}\otimes j}$, $u_2 \in H^{{}\otimes k}$, $w_1 \in F \langle X \rangle^{(j)}$, $w_2 \in F \langle X \rangle^{(k)}$. We use the notation $$x^{h_1}_{i_1} x^{h_2}_{i_2}\ldots x^{h_n}_{i_n} := (h_1 \otimes h_2 \otimes \ldots \otimes h_n) \otimes x_{i_1} x_{i_2}\ldots x_{i_n}.$$ Here $h_1 \otimes h_2 \otimes \ldots \otimes h_n \in H^{{}\otimes n}$, $x_{i_1} x_{i_2}\ldots x_{i_n} \in F \langle X \rangle^{(n)}$. Note that if $(\gamma_\beta)_{\beta \in \Lambda}$ is a basis in $H$, then $F\langle X | H \rangle$ is isomorphic to the free associative algebra over $F$ with free formal generators $x_i^{\gamma_\beta}$, $\beta \in \Lambda$, $i \in \mathbb N$. We refer to the elements of $F\langle X | H \rangle$ as \textit{associative $H$-polynomials}. 
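Concretely, a monomial $x^{h_1}_{i_1} x^{h_2}_{i_2}\ldots x^{h_n}_{i_n}$ of $F\langle X | H \rangle$ is determined by the tuple of tensor factors from $H$ together with the underlying word in the $x_i$, and multiplication concatenates both components. The following minimal sketch (illustrative only, not from the text; basis elements of $H$ are recorded as string labels) mirrors this description:

```python
# A monomial of F<X|H> is modelled as a pair: the tensor h_1 (x) ... (x) h_n,
# stored as a tuple of labels of basis elements of H, and the word
# x_{i_1} ... x_{i_n}, stored as a tuple of variable indices.
from typing import Tuple

Monomial = Tuple[Tuple[str, ...], Tuple[int, ...]]

def mul(m1: Monomial, m2: Monomial) -> Monomial:
    """(u1 (x) w1)(u2 (x) w2) = (u1 (x) u2) (x) w1 w2."""
    (h1, w1), (h2, w2) = m1, m2
    return (h1 + h2, w1 + w2)

# x_1^{a} x_2^{b} multiplied by x_1^{c} gives x_1^{a} x_2^{b} x_1^{c}:
product = mul((("a", "b"), (1, 2)), (("c",), (1,)))
```

In particular, the total degree of a product is the sum of the total degrees, in agreement with the grading of $F \langle X | H\rangle$ by the spaces $H^{{}\otimes n} \otimes F \langle X \rangle^{(n)}$.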
Note that here we do not consider any $H$-action on $F \langle X | H \rangle$. Let $A$ be an associative algebra with a generalized $H$-action. Any map $\psi \colon X \to A$ has the unique homomorphic extension $\bar\psi \colon F \langle X | H \rangle \to A$ such that $\bar\psi(x_i^h)=h\psi(x_i)$ for all $i \in \mathbb N$ and $h \in H$. An $H$-polynomial $f \in F\langle X | H \rangle$ is an \textit{$H$-identity} of $A$ if $\bar\psi(f)=0$ for all maps $\psi \colon X \to A$. In other words, $f(x_1, x_2, \ldots, x_n)$ is an $H$-identity of $A$ if and only if $f(a_1, a_2, \ldots, a_n)=0$ for any $a_i \in A$. In this case we write $f \equiv 0$. The set $\Id^{H}(A)$ of all $H$-identities of $A$ is an ideal of $F\langle X | H \rangle$. We denote by $P^H_n$ the space of all multilinear $H$-polynomials in $x_1, \ldots, x_n$, $n\in\mathbb N$, i.e. $$P^{H}_n = \langle x^{h_1}_{\sigma(1)} x^{h_2}_{\sigma(2)}\ldots x^{h_n}_{\sigma(n)} \mid h_i \in H, \sigma\in S_n \rangle_F \subset F \langle X | H \rangle.$$ Then the number $c^H_n(A):=\dim\left(\frac{P^H_n}{P^H_n \cap \Id^H(A)}\right)$ is called the $n$th \textit{codimension of polynomial $H$-identities} or the $n$th \textit{$H$-codimension} of $A$. One of the main tools in the investigation of polynomial identities is provided by the representation theory of symmetric groups. The symmetric group $S_n$ acts on the space $\frac {P^H_n}{P^H_{n} \cap \Id^H(A)}$ by permuting the variables. Irreducible $FS_n$-modules are described by partitions $\lambda=(\lambda_1, \ldots, \lambda_s)\vdash n$ and their Young diagrams $D_\lambda$. The character $\chi^H_n(A)$ of the $FS_n$-module $\frac {P^H_n}{P^H_n \cap \Id^H(A)}$ is called the $n$th \textit{cocharacter} of polynomial $H$-identities of $A$. We can rewrite it as a sum $$\chi^H_n(A)=\sum_{\lambda \vdash n} m(A, H, \lambda)\chi(\lambda)$$ of irreducible characters $\chi(\lambda)$. 
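The dimensions $\dim M(\lambda)$ appearing in the cocharacter can be computed by the hook length formula used in the estimates below. A short sketch (the helper name is ours, not from the text):

```python
from math import factorial

def dim_irreducible(la):
    """dim M(lambda) for a partition la = (la_1 >= la_2 >= ...) of n,
    computed by the hook length formula: n! / (product of hook lengths)."""
    n = sum(la)
    # cols[j] = length of the j-th column of the Young diagram D_lambda
    cols = [sum(1 for l in la if l > j) for j in range(la[0])]
    hooks = 1
    for i, li in enumerate(la):
        for j in range(li):
            # hook length at box (i, j): arm + leg + 1
            hooks *= (li - j - 1) + (cols[j] - i - 1) + 1
    return factorial(n) // hooks

# For n = 3: dim M((3)) = 1, dim M((2,1)) = 2, dim M((1,1,1)) = 1,
# and the squares of these dimensions sum to 3! = 6, as they must.
```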
Let $e_{T_{\lambda}}=a_{T_{\lambda}} b_{T_{\lambda}}$ and $e^{*}_{T_{\lambda}}=b_{T_{\lambda}} a_{T_{\lambda}}$ where $a_{T_{\lambda}} = \sum_{\pi \in R_{T_\lambda}} \pi$ and $b_{T_{\lambda}} = \sum_{\sigma \in C_{T_\lambda}} (\sign \sigma) \sigma$, be Young symmetrizers corresponding to a Young tableau~$T_\lambda$. Then $M(\lambda) = FS_n e_{T_\lambda} \cong FS_n e^{*}_{T_\lambda}$ is an irreducible $FS_n$-module corresponding to a partition~$\lambda \vdash n$. We refer the reader to~\cite{Bahturin, DrenKurs, ZaiGia} for an account of $S_n$-representations and their applications to polynomial identities. \section{Generalized $(FT)^*$-action on $T$-graded algebras} In this section we show that every finite dimensional semigroup graded algebra is an algebra with a generalized $H$-action for a suitable associative algebra $H$. For an arbitrary semigroup $T$ one can consider the \textit{semigroup algebra} $FT$ over a field $F$ which is the vector space with the formal basis $(t)_{t\in T}$ and the multiplication induced by the one in $T$. Consider the vector space $(FT)^*$ dual to $FT$. Then $(FT)^*$ is an algebra with the multiplication defined by $(hw)(t)=h(t)w(t)$ for $h,w \in (FT)^*$ and $t\in T$. The identity element is defined by $1_{(FT)^*}(t)=1$ for all $t\in T$. In other words, $(FT)^*$ is the algebra dual to the coalgebra $FT$. Let $\Gamma \colon A=\bigoplus_{t\in T} A^{(t)}$ be a grading on an algebra $A$. We have the following natural $(FT)^*$-action on $A$: $h a^{(t)}:=h(t)a^{(t)}$ for all $h \in (FT)^*$, $a^{(t)}\in A^{(t)}$ and $t\in T$. \begin{remark} If $T$ is a finite group, then $A$ is an $FT$-comodule algebra for the Hopf algebra $FT$ and an $(FT)^*$-module algebra for the Hopf algebra $(FT)^*$. \end{remark} For every $t\in T$ define $h_t \in (FT)^*$ by $h_t(g)=\left\lbrace \begin{array}{lll} 0 & \text{if} & g\ne t, \\ 1 & \text{if} & g = t \end{array}\right.$ for $g\in T$. 
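For a finite semigroup $T$ the functions $h_t$ are pairwise orthogonal idempotents of $(FT)^*$ that sum to $1_{(FT)^*}$. This is immediate from the pointwise multiplication and can be checked by a toy computation (elements of $(FT)^*$ represented as dictionaries; the two-element set of labels is just for illustration):

```python
# Elements of (FT)^* for a finite semigroup T are functions T -> F,
# stored as dicts; multiplication is pointwise, as in the algebra dual
# to the coalgebra FT.
T = ["t1", "t2"]  # the underlying set of a two-element semigroup

def h(t):
    """The delta function h_t in (FT)^*."""
    return {g: (1 if g == t else 0) for g in T}

def mul(f, w):
    return {g: f[g] * w[g] for g in T}

one = {g: 1 for g in T}  # the identity element 1_{(FT)^*}
# h_t h_t = h_t, h_t h_g = 0 for t != g, and the h_t sum to 1_{(FT)^*}.
```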
If $A$ is finite dimensional, the set $\supp \Gamma := \lbrace t\in T \mid A^{(t)}\ne 0 \rbrace$ is finite and $$h_t(ab)=\sum\limits_{\substack{g, w \in \supp \Gamma,\\ gw=t}} h_g(a)h_w(b)\text{ for all }a,b\in A.$$ Note that $ha = \sum_{t\in \supp \Gamma}h(t)h_t a$ for all $a\in A$ and \begin{equation}\label{EqIdentityHFiniteSupp}x^h - \sum_{t\in \supp \Gamma}h(t) x^{h_t} \in \Id^{(FT)^*}(A) \end{equation} for all $h\in (FT)^*$. By linearity, we get~(\ref{EqGenHAction}). Therefore, $A$ is an algebra with a generalized $(FT)^*$-action. \begin{lemma}\label{LemmaCnGrCnGenH} Let $A$ be a finite dimensional algebra over a field $F$ graded by a semigroup $T$. Then $c_n^{T\text{-}\mathrm{gr}}(A)=c_n^{(FT)^*}(A)$ for all $n\in \mathbb N$. \end{lemma} \begin{proof} Denote the grading $ A=\bigoplus_{t\in T} A^{(t)}$ by $\Gamma$. Let $$\xi \colon F\langle X \mid (FT)^* \rangle \to F\langle X^{T\text{-}\mathrm{gr}} \rangle$$ be the homomorphism of algebras defined by $\xi(x_i^h) = \sum\limits_{t\in\supp \Gamma} h(t)x^{(t)}_i$, $i\in\mathbb N$, $h\in (FT)^*$. Suppose $f\in \Id^{(FT)^*}(A)$. Consider an arbitrary graded homomorphism $\psi \colon F\langle X^{T\text{-}\mathrm{gr}} \rangle \to A$. Then the homomorphism of algebras $\psi\xi \colon F\langle X \mid (FT)^* \rangle \to A$ satisfies the condition $$\psi\xi(x_i^h)=\sum\limits_{t\in\supp \Gamma} h(t)\psi\left(x^{(t)}_i\right)= h\left(\sum\limits_{t\in\supp \Gamma} \psi\left(x^{(t)}_i\right)\right)=h\,\psi\xi(x_i).$$ Thus $\psi\xi(f) =0$ and $\xi(f)\in \Id^{T\text{-}\mathrm{gr}}(A)$. Hence $\xi\left(\Id^{(FT)^*}(A)\right)\subseteq \Id^{T\text{-}\mathrm{gr}}(A)$. Denote by $$\tilde \xi \colon F\langle X \mid (FT)^* \rangle/\Id^{(FT)^*}(A) \to F\langle X^{T\text{-}\mathrm{gr}} \rangle/\Id^{T\text{-}\mathrm{gr}}(A)$$ the homomorphism induced by $\xi$. 
Let $$\eta \colon F\langle X^{T\text{-}\mathrm{gr}} \rangle \to F\langle X \mid (FT)^* \rangle$$ be the homomorphism defined by $\eta\left(x^{(t)}_i\right) = x^{h_t}_i$ for all $i\in \mathbb N$ and $t\in T$. Consider an arbitrary graded polynomial identity $f\in F\langle X^{T\text{-}\mathrm{gr}} \rangle$. Let $\psi \colon F\langle X \mid (FT)^* \rangle \to A$ be a homomorphism satisfying the condition $\psi(x_i^h)=h\psi(x_i)$ for every $i\in\mathbb N$ and $h\in (FT)^*$. Then for any $i\in\mathbb N$ and $g, t \in T$ we have $$h_g \psi\eta\left(x^{(t)}_i\right) = h_g\psi(x^{h_t}_i)=h_g h_t \psi(x_i) =\left\lbrace \begin{array}{lll} 0 & \text{ if } & g\ne t,\\ \psi\eta\left(x^{(t)}_i\right) & \text{ if } & g=t. \end{array}\right.$$ Thus $\psi\eta\left(x^{(t)}_i\right) \in A^{(t)}$ and $\psi\eta$ is a graded homomorphism. Therefore, $\psi\eta(f)=0$ and $\eta(\Id^{T\text{-}\mathrm{gr}}(A)) \subseteq \Id^{(FT)^*}(A)$. Denote by $\tilde\eta \colon F\langle X^{T\text{-}\mathrm{gr}} \rangle/\Id^{T\text{-}\mathrm{gr}}(A) \to F\langle X \mid (FT)^* \rangle/\Id^{(FT)^*}(A)$ the induced homomorphism. Now we use the notation $\bar f = f + \Id^{(FT)^*}(A) \in F\langle X \mid (FT)^* \rangle/\Id^{(FT)^*}(A)$ for $f\in F\langle X \mid (FT)^* \rangle$ and $\bar f = f + \Id^{T\text{-}\mathrm{gr}}(A) \in F\langle X^{T\text{-}\mathrm{gr}} \rangle/\Id^{T\text{-}\mathrm{gr}}(A)$ for $f\in F\langle X^{T\text{-}\mathrm{gr}} \rangle$. We have $$\tilde\eta\tilde\xi\left(\bar x^h_i\right)=\tilde\eta\left( \sum\limits_{t\in\supp \Gamma} h(t) \bar x^{(t)}_i\right) =\sum\limits_{t\in\supp \Gamma} h(t) \bar x^{h_t}_i = \bar x^h_i$$ for every $h\in (FT)^*$ and $i\in\mathbb N$. (Here we use~(\ref{EqIdentityHFiniteSupp}).) Thus $\tilde\eta\tilde\xi=\id_{F\langle X \mid (FT)^* \rangle/\Id^{(FT)^*}(A)}$. Moreover $\tilde\xi\tilde\eta\left(\bar x^{(t)}_i\right)= \tilde\xi\left(\bar x^{h_t}_i\right)=\bar x^{(t)}_i$ for every $t\in T$ and $i\in \mathbb N$. 
Therefore, $\tilde\xi\tilde\eta=\id_{F\langle X^{T\text{-}\mathrm{gr}} \rangle/\Id^{T\text{-}\mathrm{gr}}(A)}$ and $F\langle X^{T\text{-}\mathrm{gr}} \rangle/\Id^{T\text{-}\mathrm{gr}}(A) \cong F\langle X \mid (FT)^* \rangle/\Id^{(FT)^*}(A)$ as algebras. The restriction of $\tilde\xi$ provides the isomorphism of $\frac{P^{(FT)^*}_n}{P^{(FT)^*}_n \cap \Id^{(FT)^*}(A)}$ and $\frac{P^{T\text{-}\mathrm{gr}}_n}{P^{T\text{-}\mathrm{gr}}_n\cap \Id^{T\text{-}\mathrm{gr}}(A)}$. Hence $$c^{(FT)^*}_n(A)=\dim \frac{P^{(FT)^*}_n}{P^{(FT)^*}_n \cap \Id^{(FT)^*}(A)} = \dim\frac{P^{T\text{-}\mathrm{gr}}_n}{P^{T\text{-}\mathrm{gr}}_n\cap \Id^{T\text{-}\mathrm{gr}}(A)}=c^{T\text{-}\mathrm{gr}}_n(A).$$ \end{proof} Now we can provide a sufficient condition for a graded algebra to satisfy the graded analog of Amitsur's conjecture. \begin{theorem}\label{TheoremMainTGrAssoc} Let $A$ be a finite dimensional non-nilpotent $T$-graded associative algebra over an algebraically closed field $F$ of characteristic $0$ for some semigroup $T$. Suppose that the Jacobson radical $J:=J(A)$ is a graded ideal. Let $$A/J = B_1 \oplus \ldots \oplus B_q \text{ (direct sum of graded ideals)}$$ where $B_i$ are graded-simple algebras and let $\varkappa \colon A/J \to A$ be any homomorphism of algebras (not necessarily graded) such that $\pi\varkappa = \id_{A/J}$ where $\pi \colon A \to A/J$ is the natural projection. Then there exist constants $C_1, C_2 > 0$, $r_1, r_2 \in \mathbb R$ such that $C_1 n^{r_1} d^n \leqslant c^{T\text{-}\mathrm{gr}}_n(A) \leqslant C_2 n^{r_2} d^n$ for all $n \in \mathbb N$ where $$d= \max\dim\left( B_{i_1}\oplus B_{i_2} \oplus \ldots \oplus B_{i_r} \mathbin{\Bigl|} r \geqslant 1,\right.$$ \begin{equation*}\left. ((FT)^*\varkappa(B_{i_1}))A^+ \,((FT)^*\varkappa(B_{i_2})) A^+ \ldots ((FT)^*\varkappa(B_{i_{r-1}})) A^+\,((FT)^*\varkappa(B_{i_r}))\ne 0\right)\end{equation*} and $A^+:=A+F\cdot 1$. 
\end{theorem} \begin{proof} The theorem is an immediate consequence of Lemma~\ref{LemmaCnGrCnGenH} and~\cite[Theorem~1]{ASGordienko8}. \end{proof} \begin{remark} The existence of the map $\varkappa$ follows from the ordinary Wedderburn~--- Mal'cev theorem. \end{remark} \begin{remark} If $A$ is nilpotent, i.e. $x_1 \ldots x_p \equiv 0$ for some $p\in\mathbb N$, then $P^{T\text{-}\mathrm{gr}}_n \subseteq \Id^{T\text{-}\mathrm{gr}}(A)$ and $c^{T\text{-}\mathrm{gr}}_n(A)=0$ for all $n \geqslant p$. \end{remark} \begin{corollary} The graded analog of Amitsur's conjecture holds for every algebra satisfying the hypotheses of Theorem~\ref{TheoremMainTGrAssoc}. \end{corollary} \section{Partitions restricted to convex polytopes} Here we apply ideas from~\cite{VerZaiMishch} and prove auxiliary results that we use in the construction of algebras with non-integer graded PI-exponents. In this section we show that if all the partitions $\lambda\vdash n$ that correspond to irreducible $FS_n$-modules with nonzero multiplicities $m(A,H,\lambda)$ belong to a convex polyhedron, then $\mathop{\overline\lim}_{n\to\infty}\sqrt[n]{c_n^{H}(A)}$ is bounded by the maximum of a particular function $\Phi$ on the ``continuous'' version of the polyhedron. Fix $q\in\mathbb N$. Let $\Phi(\alpha_1, \ldots, \alpha_q)=\frac{1}{\alpha_1^{\alpha_1} \ldots \alpha_q^{\alpha_q}}$. Define $0^0 := 1$. Then $\Phi$ is continuous on the set $\lbrace (\alpha_1, \ldots, \alpha_q) \mid \alpha_i \geqslant 0\rbrace$. Suppose we have some numbers $\gamma_{ij} \in \mathbb R$ for $1\leqslant i \leqslant m$, $0\leqslant j \leqslant q$ and $\theta_k \in \mathbb Z_+$ for $q< k \leqslant r$ where $m,r\in \mathbb Z_+$, $r\geqslant q$.
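Two particular values of $\Phi$ play a role below: $\Phi\left(\frac{1}{q},\ldots,\frac{1}{q}\right)=q$ at the uniform point, and the value $(q-3)+2\sqrt 2$ at the constrained critical point found in the proof of Lemma~\ref{LemmaMaxTExample}. Both can be checked numerically (a pure-Python sketch; the sample value $q=5$ is arbitrary):

```python
from math import exp, log, sqrt

def Phi(alpha):
    """Phi(alpha_1,...,alpha_q) = 1 / (alpha_1**alpha_1 * ... * alpha_q**alpha_q),
    with the convention 0**0 = 1 (zero entries contribute a factor 1)."""
    return exp(-sum(a * log(a) for a in alpha if a > 0))

q = 5
uniform_value = Phi([1.0 / q] * q)  # equals q

# The critical point from the proof of Lemma MaxTExample:
# alpha_1 = 2/c, alpha_2 = ... = alpha_{q-2} = sqrt(2)/c,
# alpha_{q-1} = alpha_q = 1/c, where c = 4 + (q - 3)*sqrt(2).
c = 4 + (q - 3) * sqrt(2)
point = [2 / c] + [sqrt(2) / c] * (q - 3) + [1 / c] * 2
critical_value = Phi(point)         # equals (q - 3) + 2*sqrt(2)
```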
Define $$\Omega = \left\lbrace (\alpha_1, \ldots, \alpha_q)\in \mathbb R^q \mathrel{\Bigl|}\sum_{i=1}^q \alpha_i=1,\ \alpha_1\geqslant\alpha_2\geqslant \ldots \geqslant \alpha_q\geqslant 0, \ \sum_{j=1}^q \gamma_{ij}\alpha_j \geqslant 0\text{ for } 1\leqslant i \leqslant m\right\rbrace.$$ For every $n\in \mathbb N$ we define $$\Omega_n = \left\lbrace \lambda \vdash n \mathrel{\Bigl|} \sum_{j=1}^q \gamma_{ij}\lambda_j+\gamma_{i0} \geqslant 0 \text{ for } 1\leqslant i \leqslant m,\ \lambda_i \leqslant \theta_i \text{ for } q < i \leqslant r,\ \lambda_{r+1}=0 \right\rbrace.$$ We treat $\Omega$ and $\Omega_n$ as the ``continuous'' and the ``discrete'' version of the same polyhedron. Denote by $d$ the maximum of $\Phi$ on the compact set $\Omega$. (We assume $\Omega$ to be non-empty.) \begin{lemma}\label{LemmaTExampleUpperFd} Let $A$ be an algebra with a generalized $H$-action where $H$ is an associative algebra with unity over a field $F$ of characteristic $0$. Suppose $m(A, H, \lambda)=0$ for all $\lambda\vdash n$, $\lambda \notin \Omega_n$, $n\in\mathbb N$. Then $\mathop{\overline\lim}_{n\to\infty}\sqrt[n]{c_n^{H}(A)} \leqslant d$. \end{lemma} \begin{proof} Let $\lambda \vdash n$ such that $m(A,H,\lambda)\ne 0$. By the hook formula, $\dim M(\lambda)=\frac{n!}{\prod_{i,j} h_{ij}}$ where $h_{ij}$ is the length of the hook with the edge in $(i,j)$ in the Young diagram $D_\lambda$. Hence $\dim M(\lambda) \leqslant \frac{n!}{\lambda_1! \ldots \lambda_r!}$. Note that $\left(x^x\right)'=\left(e^{x\ln x}\right)'= (\ln x+1)e^{x\ln x}$ and $x^x$ is decreasing for $x\leqslant \frac{1}{e}$. 
By the Stirling formula, for all sufficiently large $n$ we have \begin{equation}\begin{split}\label{EqMlambdaUpperFd}\dim M(\lambda) \leqslant \frac{C_1 n^{r_1} \left(\frac{n}{e}\right)^n}{\left(\frac{\lambda_1}{e}\right)^{\lambda_1}\ldots \left(\frac{\lambda_r}{e}\right)^{\lambda_r}}=C_1 n^{r_1}\left(\frac{1} {\left(\frac{\lambda_1}{n}\right)^{\frac{\lambda_1}{n}}\ldots \left(\frac{\lambda_r}{n}\right)^{\frac{\lambda_r}{n}}}\right)^n \leqslant \\ C_1 n^{r_1} \left(\Phi\left(\frac{\lambda_1}{n}, \ldots, \frac{\lambda_q}{n}\right)\right)^n \frac{n^{\theta_{q+1}+\ldots +\theta_r}}{\theta_{q+1}^{\theta_{q+1}} \ldots \theta_r^{\theta_r}}=C_2 n^{r_2} \left(\Phi\left(\frac{\lambda_1}{n}, \ldots, \frac{\lambda_q}{n}\right)\right)^n\end{split}\end{equation} for some $C_1, C_2 > 0$ and $r_1, r_2 \in\mathbb R$ that do not depend on $\lambda_i$. Let $\varepsilon > 0$. Since $\Phi$ is continuous, there exists $\delta > 0$ such that for every $x$ from the domain of $\Phi$ such that the distance between $x$ and $\Omega$ is less than $\delta$, we have $\Phi(x) < d + \varepsilon$. Therefore, by~(\ref{EqMlambdaUpperFd}), there exists $n_0\in\mathbb N$ such that for all $n \geqslant n_0$ and $\lambda\vdash n$ such that $m(A,H,\lambda)\ne 0$ we have $ \dim M(\lambda) \leqslant C_2 n^{r_2} (d+\varepsilon)^n$. By~\cite[Theorem~5]{ASGordienko8}, there exist $C_3 > 0$, $r_3\in\mathbb Z_+$ such that $$\sum_{\lambda \vdash n} m(A,H,\lambda) \leqslant C_3 n^{r_3}\text{ for all }n \in \mathbb N.$$ Hence $$ c^{H}_n(A) = \sum_{\lambda \vdash n} m(A,H,\lambda) \dim M(\lambda) \leqslant C_2 C_3 n^{r_2+r_3} (d+\varepsilon)^n$$ and $\mathop{\overline\lim}_{n\to\infty}\sqrt[n]{c_n^H(A)} \leqslant d+\varepsilon$. Since $\varepsilon > 0$ is arbitrary, we get the lemma. 
\end{proof} \begin{lemma}\label{LemmaMaxTExample} Let $q \in \mathbb N$, $q \geqslant 4$, $$\Omega = \left\lbrace (\alpha_1, \ldots, \alpha_q)\in \mathbb R^q \mathrel{\biggl|} \sum_{i=1}^q \alpha_i = 1,\ \alpha_1 \geqslant \alpha_2 \geqslant \ldots \geqslant \alpha_q\geqslant 0,\ \alpha_q +\alpha_{q-1} \leqslant \alpha_1\right\rbrace.$$ Then $ d:=\max_{x\in \Omega} \Phi(x) = (q-3)+2\sqrt 2= q-0.1716\ldots$ \end{lemma} \begin{proof} We express $\alpha_1$ in terms of $\alpha_2, \ldots, \alpha_q$ and consider $$\Phi_0(\alpha_2, \ldots, \alpha_q):=\Phi\left(1-\sum_{i=2}^q \alpha_i, \alpha_2, \ldots, \alpha_q\right) = \frac{1}{\left(1-\sum_{i=2}^q \alpha_i\right)^{\left(1-\sum_{i=2}^q \alpha_i\right)} \alpha_2^{\alpha_2} \ldots \alpha_q^{\alpha_q}}$$ on the polytope $$\Omega_0 = \left\lbrace (\alpha_2, \ldots, \alpha_q) \mathrel{\Bigl|} \alpha_2\geqslant 0, \ldots,\ \alpha_q\geqslant 0,\ \alpha_2+\ldots+\alpha_{q-2}+ 2\alpha_{q-1} +2\alpha_q \leqslant 1 \right\rbrace.$$ Note that we have weakened the restrictions on $\alpha_i$. However, we will see that $\max_{x\in \Omega} \Phi(x)=\max_{x\in \Omega_0} \Phi_0(x)$. We have $\Phi_0(\alpha_2, \ldots, \alpha_q)=e^{-\left(1-\sum_{i=2}^q \alpha_i\right)\ln\left(1-\sum_{i=2}^q \alpha_i\right)-\sum_{i=2}^q(\alpha_i \ln \alpha_i) }$ and $$\frac{\partial \Phi_0}{\partial \alpha_k}(\alpha_2, \alpha_3,\ldots,\alpha_q) = \left(\ln\left(1-\sum_{i=2}^q \alpha_i\right)-\ln\alpha_k \right)e^{-\left(1-\sum_{i=2}^q \alpha_i\right)\ln\left(1-\sum_{i=2}^q \alpha_i\right)-\sum_{i=2}^q(\alpha_i \ln \alpha_i) }.$$ Hence the only critical point is $(\alpha_2,\alpha_3,\ldots,\alpha_q)=\left(\frac{1}{q},\ldots, \frac{1}{q}\right)\notin \Omega_0$. Therefore, $\Phi_0$ attains its maximum on the boundary $\partial\Omega_0$ of $\Omega_0$.
Note that $\partial \Omega_0 = \Upsilon \cup \bigcup_{i=2}^q \Omega_i$ where $\Omega_i=\lbrace (\alpha_2,\ldots,\alpha_q)\in \Omega_0 \mid \alpha_i=0 \rbrace$ and $\Upsilon = \lbrace (\alpha_2,\ldots,\alpha_q)\in \Omega_0 \mid \alpha_2+\ldots+\alpha_{q-2}+ 2\alpha_{q-1} +2\alpha_q = 1 \rbrace$. Determining the critical points once again, we get $\Phi_0(x) \leqslant q -1$ for all $x\in \bigcup_{i=2}^q \Omega_i$. Consider $\Phi_0$ on $\Upsilon$. We express $\alpha_2$ in terms of $\alpha_3,\ldots,\alpha_q$ and define \begin{equation*}\begin{split}\Phi_1(\alpha_3,\ldots,\alpha_q) = \Phi_0(1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q, \alpha_3, \ldots, \alpha_q)=\\ \frac{1}{(\alpha_{q-1}+\alpha_q)^{\alpha_{q-1}+\alpha_q} (1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q)^{1-\alpha_3-\ldots- \alpha_{q-2}-2\alpha_{q-1}-2\alpha_q} \alpha_3^{\alpha_3}\ldots \alpha_{q}^{\alpha_q}}=\\ e^{-(\alpha_{q-1}+\alpha_q)\ln(\alpha_{q-1}+\alpha_q) -(1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q) \ln(1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q) -\alpha_3\ln\alpha_3-\ldots-\alpha_q\ln \alpha_q} \end{split}\end{equation*} on $$\Upsilon_1=\left\lbrace (\alpha_3,\ldots,\alpha_q) \mid 1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q \geqslant 0,\ \alpha_3 \geqslant 0, \ldots,\ \alpha_q \geqslant 0\right\rbrace.$$ Then \begin{equation*}\begin{split}\frac{\partial \Phi_1}{\partial \alpha_i}(\alpha_3,\ldots,\alpha_q)= (\ln(1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q)-\ln\alpha_i)\cdot \\ e^{-(\alpha_{q-1}+\alpha_{q})\ln(\alpha_{q-1}+\alpha_{q}) -(1-\alpha_3-\ldots- \alpha_{q-2}-2\alpha_{q-1}-2\alpha_q) \ln(1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q)- \alpha_3\ln\alpha_3-\ldots-\alpha_q\ln \alpha_q} \end{split}\end{equation*} for $i=3,\ldots,q-2$ and \begin{equation*}\begin{split}\frac{\partial \Phi_1}{\partial \alpha_i}(\alpha_3,\ldots,\alpha_q)= (2\ln(1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q) 
-\ln(\alpha_{q-1}+\alpha_q)-\ln\alpha_i)\cdot \\ e^{-(\alpha_{q-1}+\alpha_q)\ln(\alpha_{q-1}+\alpha_q) -(1-\alpha_3-\ldots-\alpha_{q-2} -2\alpha_{q-1}-2\alpha_q) \ln(1-\alpha_3-\ldots-\alpha_{q-2}-2\alpha_{q-1}-2\alpha_q)- \alpha_3\ln\alpha_3-\ldots-\alpha_q\ln \alpha_q}, \end{split}\end{equation*} for $i=q-1,\,q$. Therefore, $(\tilde\alpha_3, \ldots, \tilde\alpha_q)$, where $\tilde\alpha_3=\ldots=\tilde\alpha_{q-2}=\frac{\sqrt 2}{4+(q-3)\sqrt 2}$ and $\tilde\alpha_{q-1}=\tilde\alpha_q=\frac{1}{4+(q-3)\sqrt 2}$, is the only critical point of $\Phi_1$. Hence $\Phi_1(\tilde\alpha_3, \ldots, \tilde\alpha_q)=(q-3)+2\sqrt 2= q-0.1716\ldots$ is the maximum of $\Phi_1$ on $\Upsilon_1$. Now we return to the original variables. Since $\tilde \alpha_1 = \tilde \alpha_{q-1}+\tilde \alpha_q = \frac{2}{4+(q-3)\sqrt 2}$ and $$\tilde \alpha_2 = 1-\tilde\alpha_3-\ldots-\tilde\alpha_{q-2}-2\tilde\alpha_{q-1}-2\tilde\alpha_q = \frac{\sqrt 2}{4+(q-3)\sqrt 2},$$ we have $\tilde\alpha_1 \geqslant \tilde\alpha_2 \geqslant \ldots \geqslant \tilde\alpha_q$. Therefore $d=\Phi(\tilde\alpha_1, \ldots, \tilde\alpha_q)= (q-3)+2\sqrt 2= q-0.1716\ldots$ is the maximum of $\Phi$ on $\Omega$. \end{proof} \begin{lemma}\label{LemmaTExampleIneqLambda} Let $A = M_2(F)\oplus W$ (direct sum of left ideals) be an algebra over a field $F$ of characteristic $0$ and let $T=\lbrace t_1, t_2\rbrace$ be a semigroup of two elements. Suppose there exists a linear isomorphism $\varphi \colon W \mathrel{\widetilde{\rightarrow}} \langle \mathcal B_0\rangle_F \subset M_2(F)$ where ${\mathcal B}_0 \subseteq \lbrace e_{11}, e_{12}, e_{22} \rbrace$ is a subset such that $e_{12} \in \mathcal B_0$. Suppose the $T$-grading on $A$ is defined by $A^{(t_1)}=(M_2(F),0)$ and $A^{(t_2)}=\lbrace (\varphi(a), a) \mid a\in W\rbrace$.
Suppose $W M_2(F)=0$ and one of the following three conditions holds: \begin{enumerate} \item $M_2(F) W=0$ and $\varphi$ is a homomorphism of algebras; \item $M_2(F) W=0$ and $W^2=0$; \item $W^2=0$ and $\varphi$ is a homomorphism of left $M_2(F)$-modules. \end{enumerate} Then if $m(A, (FT)^*, \lambda)\ne 0$ for some $\lambda\vdash n$, $n\in\mathbb N$, we have $\lambda_{q+1} = 0$ and $\lambda_{q-1}+\lambda_q \leqslant \lambda_1 + 1$ where $q :=\dim A$. \end{lemma} \begin{proof} In order to prove that $m(A, (FT)^*, \lambda)= 0$ for a given $\lambda \vdash n$, it is sufficient to show that $e^*_{T_\lambda}f \equiv 0$ for every $f\in P^{(FT)^*}_n$ and a Young tableau~$T_\lambda$. Note that $e^*_{T_\lambda}f$ is alternating in the variables of each column of $T_\lambda$. Since $f$ is multilinear, it is sufficient to substitute only basis elements. However, $\dim A = q$, and if $\lambda_{q+1} > 0$, then at least two of the basis elements corresponding to the variables of the first column coincide and $e^*_{T_\lambda}f$ vanishes. Hence if $\lambda_{q+1} > 0$, we have $e^*_{T_\lambda}f \equiv 0$. Consider in $A$ the homogeneous basis $$\mathcal B=\lbrace (e_{11},0), (e_{12},0), (e_{21},0), (e_{22},0)\rbrace \cup \lbrace (a, \varphi^{-1}(a)) \mid a \in \mathcal B_0 \rbrace.$$ Note that the product of any two elements of $\mathcal B$ is either $0$ or again an element of $\mathcal B$. Define the function $\theta \colon \mathcal B \to \mathbb Z$ by $\theta(e_{ij}, \varphi^{-1}(e_{ij}))=\theta(e_{ij},0)=j-i$. Let $a_1, \ldots, a_k \in\mathcal B$. If $a_1 \ldots a_k \ne 0$, then \begin{equation}\label{EqTExampleThetaAk}-1 \leqslant \sum_{i=1}^k\theta(a_i)=\theta(a_1 \ldots a_k) \leqslant 1.\end{equation} Note that $\sum_{b\in \mathcal B} \theta(b)=1$ and $\sum_{i=1}^{q-1} \theta(a_i) \geqslant 0$ for any different $a_i \in \mathcal B$.
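The additivity of $\theta$ on nonzero products, which underlies~(\ref{EqTExampleThetaAk}), can be checked mechanically on the matrix units of $M_2(F)$. A minimal numeric sketch (illustrative only, not part of the proof):

```python
import numpy as np
from itertools import product

# Matrix units e_{ij} of M_2(F); theta(e_{ij}) = j - i as in the proof above.
def e(i, j):
    m = np.zeros((2, 2))
    m[i - 1, j - 1] = 1.0
    return m

pairs = list(product((1, 2), repeat=2))
theta = {(i, j): j - i for (i, j) in pairs}

# e_{ij} e_{kl} = delta_{jk} e_{il}, so on every nonzero product theta is
# additive, and its value stays within [-1, 1].
for (i, j), (k, l) in product(pairs, repeat=2):
    if np.any(e(i, j) @ e(k, l)):      # the product is nonzero iff j == k
        s = theta[(i, j)] + theta[(k, l)]
        assert s == theta[(i, l)] and -1 <= s <= 1
```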
If $m(A, (FT)^*, \lambda)\ne 0$, then $e^*_{T_\lambda}f \not\equiv 0$ for some $f\in P^{(FT)^*}_n$ and $e^*_{T_\lambda}f$ does not vanish under substitution of some basis elements $a_1, \ldots, a_n$. Again, $e^*_{T_\lambda}f$ is alternating in the variables of each column. In each of the first $\lambda_q$ columns we have $q$ boxes and in each of the next $(\lambda_{q-1}-\lambda_q)$ columns we have $(q-1)$ boxes. Therefore, the impact on $\sum_{i=1}^n \theta(a_i)$ of the basis elements corresponding to the first $\lambda_{q-1}$ columns is at least $\lambda_q$. Since $\theta(a)=-1$ only for $a=(e_{21},0)$ and we cannot substitute more than one such element for the variables of the same column, by~(\ref{EqTExampleThetaAk}) there must be at least $(\lambda_q-1)$ other columns. Hence $\lambda_1-\lambda_{q-1} \geqslant \lambda_q-1$ and we get the lemma. \end{proof} \begin{lemma}\label{LemmaTExampleLower} Let $A$ be an algebra from Lemma~\ref{LemmaTExampleIneqLambda}. Suppose that for every $\lambda \vdash n$, $n\in\mathbb N$, such that $\lambda_{q-1}+\lambda_q \leqslant \lambda_1$, we have $m(A, (FT)^*, \lambda)\ne 0$. Then there exists $$\lim\limits_{n\to \infty} \sqrt[n]{c_n^{T\text{-}\mathrm{gr}}(A)} =(q-3)+2\sqrt 2= q - 0.1716\ldots$$ \end{lemma} \begin{proof} Let $$\Omega = \left\lbrace (\alpha_1, \ldots, \alpha_q)\in \mathbb R^q \mathrel{\biggl|} \sum_{i=1}^q \alpha_i = 1,\ \alpha_1 \geqslant \alpha_2 \geqslant \ldots \geqslant \alpha_q\geqslant 0,\ \alpha_q +\alpha_{q-1} \leqslant \alpha_1\right\rbrace.$$ By Lemma~\ref{LemmaMaxTExample}, $ d:=\max_{x\in \Omega} \Phi(x) = (q-3)+2\sqrt 2= q-0.1716\ldots$ Let $(\alpha_1, \ldots, \alpha_q) \in \Omega$ be a point such that $\Phi(\alpha_1, \ldots, \alpha_q)=d$. For every $n\in\mathbb N$ define $\mu\vdash n$ by $\mu_i = [\alpha_i n]$ for $2\leqslant i \leqslant q$ and $\mu_1 = n-\sum_{i=2}^q \mu_i$.
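The closed-form maximum $d$ can also be confirmed numerically: with $\Phi(\alpha)=\prod_i \alpha_i^{-\alpha_i}$ and the maximizing point from Lemma~\ref{LemmaMaxTExample}, one gets $\Phi(\tilde\alpha)=(q-3)+2\sqrt 2$ exactly, while random points of $\Omega$ never exceed this value. A short sketch (illustrative only, not part of the proof):

```python
import numpy as np

def Phi(a):
    a = np.asarray(a, dtype=float)
    a = a[a > 0]                       # use the convention 0^0 = 1
    return float(np.exp(-np.sum(a * np.log(a))))

def maximizer(q):
    # the point from the maximization lemma: alpha_1 = 2/s,
    # alpha_2 = ... = alpha_{q-2} = sqrt(2)/s, alpha_{q-1} = alpha_q = 1/s
    s = 4 + (q - 3) * np.sqrt(2)
    return np.array([2 / s] + [np.sqrt(2) / s] * (q - 3) + [1 / s] * 2)

rng = np.random.default_rng(0)
for q in (5, 8, 12):
    d = (q - 3) + 2 * np.sqrt(2)
    assert abs(Phi(maximizer(q)) - d) < 1e-9
    # random points of Omega: decreasing, summing to 1, alpha_{q-1}+alpha_q <= alpha_1
    for _ in range(2000):
        x = np.sort(rng.dirichlet(np.ones(q)))[::-1]
        if x[-1] + x[-2] <= x[0]:
            assert Phi(x) <= d + 1e-9
```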
For every $\varepsilon > 0$ there exists $n_0\in\mathbb N$ such that for every $n\geqslant n_0$ we have $\Phi\left(\frac{\mu_1}{n},\ldots,\frac{\mu_q}{n}\right) > d-\varepsilon$. By the assumptions of the lemma, $m(A,(FT)^*,\mu) \ne 0$ and by the hook and the Stirling formulas, there exist $C_1 > 0$ and $r_1\in\mathbb R$ such that we have \begin{equation}\begin{split} c^{(FT)^*}_n(A) \geqslant \dim M(\mu) = \frac{n!}{\prod_{i,j} h_{ij}} \geqslant \frac{n!}{(\mu_1+q-1)! \ldots (\mu_q+q-1)!} \geqslant \\ \frac{n!}{n^{q(q-1)}\mu_1! \ldots \mu_q!} \geqslant \frac{C_1 n^{r_1} \left(\frac{n}{e}\right)^n}{\left(\frac{\mu_1}{e}\right)^{\mu_1}\ldots \left(\frac{\mu_q}{e}\right)^{\mu_q}}\geqslant \\ C_1 n^{r_1}\left(\frac{1} {\left(\frac{\mu_1}{n}\right)^{\frac{\mu_1}{n}}\ldots \left(\frac{\mu_q}{n}\right)^{\frac{\mu_q}{n}}}\right)^n \geqslant C_1 n^{r_1} (d-\varepsilon)^n.\end{split}\end{equation} Hence $\mathop{\underline\lim}_{n\to\infty}\sqrt[n]{c_n^{(FT)^*}(A)} \geqslant d-\varepsilon$. Since $\varepsilon > 0$ is arbitrary, $\mathop{\underline\lim}_{n\to\infty}\sqrt[n]{c_n^{(FT)^*}(A)} \geqslant d$. Now Lemmas~\ref{LemmaCnGrCnGenH}, \ref{LemmaTExampleUpperFd}, \ref{LemmaMaxTExample}, and \ref{LemmaTExampleIneqLambda} finish the proof. \end{proof} \section{A $T_1$-graded algebra with a non-integer graded PI-exponent} \begin{theorem}\label{TheoremT1GradFractPI} Let $A = M_2(F)\oplus \UT_2(F)$ (direct sum of ideals) where $F$ is a field of characteristic $0$. Define a $T_1$-grading on $A$ by $A^{(0)}=(M_2(F),0)$, $A^{(1)}=\lbrace (\varphi(a), a) \mid a \in \UT_2(F) \rbrace$ where $\varphi \colon \UT_2(F) \hookrightarrow M_2(F)$ is the natural embedding. In other words, $A$ is an algebra from Example~\ref{ExampleT1} for $k=2$. Then there exists $\lim\limits_{n\to \infty} \sqrt[n]{c_n^{T_1\text{-}\mathrm{gr}}(A)} =4+2\sqrt 2= 6.8284\ldots$ \end{theorem} To prove Theorem~\ref{TheoremT1GradFractPI}, we need the following lemma. 
We omit $\varphi$ for shortness and write $(e_{ij}, e_{ij})$ instead of $(e_{ij}, \varphi^{-1}(e_{ij}))$. \begin{lemma}\label{LemmaAltT1} Let $\lambda \vdash n$, $n\in\mathbb N$, $\lambda_8 = 0$, and $\lambda_6+\lambda_7 \leqslant \lambda_1$. Then $m(A,(FT_1)^*,\lambda) \ne 0$. \end{lemma} \begin{proof} It is sufficient to show that for some $f\in P_n$ and some $T_\lambda$ we have $e_{T_\lambda}f \not\equiv 0$ on $A$. Note that each $e_{T_\lambda}f$ is alternating in $\lambda_7$ disjoint sets of variables, each of $7$ variables. By the proof of Lemma~\ref{LemmaTExampleIneqLambda}, each column of height $7$ will have an impact of at least $1$ on the sum of the values of $\theta$ on the elements substituted for the variables of $e_{T_\lambda}f$. Therefore, we need a compensation. Let $\beta_2=\lambda_6-\lambda_7$. Fix numbers $\beta_3,\ldots,\beta_{12} \geqslant 0$ such that $\beta_3+\beta_5+\beta_7+\beta_9+\beta_{11} = \lambda_7$,\quad $\beta_3+\beta_4=\lambda_5-\lambda_6$,\quad $\beta_5+\beta_6=\lambda_4-\lambda_5$, $\beta_7+\beta_8=\lambda_3-\lambda_4$,\quad $\beta_9+\beta_{10}=\lambda_2-\lambda_3$, and $\beta_{11}+\beta_{12}=\lambda_1-\lambda_2$.
In other words, we have $$D_\lambda=\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|} \multicolumn{1}{c}{\lambda_7} & \multicolumn{1}{c}{\beta_2} & \multicolumn{1}{c}{\beta_3} & \multicolumn{1}{c}{\beta_4} & \multicolumn{1}{c}{\beta_5} & \multicolumn{1}{c}{\beta_6} & \multicolumn{1}{c}{\beta_7} & \multicolumn{1}{c}{\beta_8} & \multicolumn{1}{c}{\beta_9} & \multicolumn{1}{c}{\beta_{10}} & \multicolumn{1}{c}{\beta_{11}} & \multicolumn{1}{c}{\beta_{12}} \\ \hline \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \cline{1-12} \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \cline{1-10} \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \cline{1-8} \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \cline{1-6} \ldots & \ldots & \ldots & \ldots \\ \cline{1-4} \ldots & \ldots \\ \cline{1-2} \ldots \\ \cline{1-1} \end{array}.$$ (Here $\beta_i$ denotes the number of columns in each block of columns.) Each of the first $\lambda_7$ columns will give an impact $1$ to $\theta$, which will be compensated by the columns that are marked by $\beta_3$, $\beta_5$, $\beta_7$, $\beta_9$, and $\beta_{11}$. The columns that are marked by $\beta_2$, $\beta_4$, $\beta_6$, $\beta_8$, $\beta_{10}$, and $\beta_{12}$ will give zero impact to $\theta$. We fix some Young tableau $T_\lambda$ of the shape $\lambda$ filled in with the numbers from $1$ to $n$. For each column of $T_\lambda$ we define a multilinear alternating polynomial depending on the variables with the indexes from the column. For shortness, we denote the polynomials corresponding to different columns in the $i$th block by the same letter $f_i$. By $(i_1, \ldots, i_\ell)$ we denote the $\ell$-tuple of numbers from a column (from top to bottom). By $S\lbrace i_1, \ldots, i_\ell\rbrace$ we denote the symmetric group on $i_1, \ldots, i_\ell$.
We define $$f_1 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_7\rbrace} (\sign \sigma) x^{h_0}_{\sigma(i_3)} x^{h_1}_{\sigma(i_2)} x^{h_0}_{\sigma(i_6)} x^{h_1}_{\sigma(i_4)} x^{h_0}_{\sigma(i_5)} x^{h_0}_{\sigma(i_1)} x^{h_1}_{\sigma(i_7)}, $$ $$f_2 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_6\rbrace} (\sign \sigma) x^{h_0}_{\sigma(i_3)} x^{h_1}_{\sigma(i_2)} x^{h_0}_{\sigma(i_6)} x^{h_1}_{\sigma(i_4)} x^{h_0}_{\sigma(i_5)} x^{h_0}_{\sigma(i_1)} ,$$ $$f_3 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_5\rbrace} (\sign \sigma) x^{h_0}_{\sigma(i_5)} x^{h_1}_{\sigma(i_4)} x^{h_0}_{\sigma(i_1)} x^{h_1}_{\sigma(i_2)} x^{h_0}_{\sigma(i_3)} ,$$ $$f_4 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_5\rbrace} (\sign \sigma) x^{h_1}_{\sigma(i_2)} x^{h_0}_{\sigma(i_3)} x^{h_1}_{\sigma(i_5)} x^{h_1}_{\sigma(i_4)} x^{h_0}_{\sigma(i_1)} ,$$ $$f_5 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_4\rbrace} (\sign \sigma) x^{h_1}_{\sigma(i_4)} x^{h_0}_{\sigma(i_1)} x^{h_1}_{\sigma(i_2)} x^{h_0}_{\sigma(i_3)} ,$$ $$f_6 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_4\rbrace} (\sign \sigma) x^{h_1}_{\sigma(i_2)} x^{h_1}_{\sigma(i_3)} x^{h_1}_{\sigma(i_4)} x^{h_0}_{\sigma(i_1)} ,$$ $$f_7 := \sum_{\sigma\in S\lbrace i_1, i_2, i_3\rbrace} (\sign \sigma) x^{h_0}_{\sigma(i_1)} x^{h_1}_{\sigma(i_2)} x^{h_0}_{\sigma(i_3)} ,\qquad f_8 := \sum_{\sigma\in S\lbrace i_1, i_2, i_3\rbrace} (\sign \sigma) x^{h_1}_{\sigma(i_2)} x^{h_1}_{\sigma(i_3)} x^{h_0}_{\sigma(i_1)} ,$$ $$f_9 := \sum_{\sigma\in S\lbrace i_1, i_2\rbrace} (\sign \sigma) x^{h_0}_{\sigma(i_1)} x^{h_1}_{\sigma(i_2)} ,\qquad f_{10} := \sum_{\sigma\in S\lbrace i_1, i_2\rbrace} (\sign \sigma) x^{h_0}_{\sigma(i_2)} x^{h_0}_{\sigma(i_1)} ,$$ $$f_{11} := x^{h_0}_{i_1},\qquad f_{12} := x^{h_1}_{i_1}.$$ Define the polynomial $$f=(f_1 f_3)^{\beta_3}(f_1 f_5)^{\beta_5}(f_1 f_7)^{\beta_7}(f_1 f_9)^{\beta_9}(f_1 f_{11})^{\beta_{11}} f_2^{\beta_2} f_4^{\beta_4}f_6^{\beta_6}f_8^{\beta_8}f_{10}^{\beta_{10}}f_{12}^{\beta_{12}} \in P_n.$$ As we have already mentioned, 
here different copies of $f_i$ depend on different variables. The copies of $f_1$ are alternating polynomials of degree $7$ corresponding to the first $\lambda_7$ columns of height $7$. The copies of $f_2$ are alternating polynomials of degree $6$ corresponding to the next $\beta_2$ columns of height $6$. \ldots The copies of $f_{12}$ are polynomials of degree $1$ with the indexes from the last $\beta_{12}$ columns of height $1$. We claim that $e_{T_\lambda}f \not\equiv 0$. In order to verify this, we fill $D_\lambda$ with specific homogeneous elements and denote the tableau obtained by $\tau$. (See Figure~\ref{FigureTauT1}.) \begin{landscape} \begin{figure} \caption{Substitution for the variables of $e_{T_\lambda}f$.} \label{FigureTauT1} \end{figure} (Here in the $i$th block we have $\beta_i$ columns with the same values in all cells of a row. For shortness, we depict each value for each block only once. The tableau $\tau$ is still of the shape $\lambda$.) \end{landscape} Now for each variable we substitute the element from the corresponding box in $\tau$. Note that $f$ does not vanish under this substitution. Recall that $e_{T_\lambda} = a_{T_\lambda}b_{T_\lambda}$ where $a_{T_\lambda}$ is the symmetrization in the variables of each row and $b_{T_\lambda}$ is the alternation in the variables of each column. Since all $f_i$ are alternating polynomials, $b_{T_\lambda} f$ is a nonzero multiple of $f$. Two sets of variables correspond to the second row of $T_\lambda$. For the variables of the first group we substitute $(e_{11},e_{11}) \in A^{(1)}$, for the second one, we substitute $(e_{12},0) \in A^{(0)}$. Thus if an item in $a_{T_\lambda}$ mixes variables from these two groups, at least one variable from the second group, i.e. in $f_{10}$, is replaced with a variable from the first one. However, $f_{10}$ vanishes if at least one variable of it is replaced with an element of $A^{(1)}$ since $h_0$ is applied for both variables of $f_{10}$.
Thus all items in $a_{T_\lambda}b_{T_\lambda}f$ where variables from these two groups are mixed, vanish. Therefore, if an item in $a_{T_\lambda}$ replaces a variable from the first two columns with a variable with a different value from the tableau $\tau$, we will have too many elements from $A^{(1)}$ substituted for the variables of $f_1$ and $f_2$ and the result is zero by virtue of the action of $h_0$. Therefore all items in $a_{T_\lambda}b_{T_\lambda}f$ where variables from the first two columns having different values are mixed, vanish. We continue this procedure and finally show that if an item in $a_{T_\lambda}$ does not stabilize the sets of variables with the same values from the tableau $\tau$, the corresponding item in $a_{T_\lambda}b_{T_\lambda}f$ vanishes. Hence the value of $a_{T_\lambda}b_{T_\lambda}f$ is a nonzero multiple of the value of $b_{T_\lambda}f$, i.e. is nonzero. The lemma is proved. \end{proof} \begin{proof}[Proof of Theorem~\ref{TheoremT1GradFractPI}.] We use Lemmas~\ref{LemmaTExampleLower} and~\ref{LemmaAltT1}. \end{proof} \section{A $T_2$-graded algebra with a non-integer graded PI-exponent} \begin{theorem}\label{TheoremT2GradFractPI} Let $A_2 = M_2(F)\oplus F j_{11} \oplus F j_{12} \oplus F j_{22}$ (direct sum of ideals) where $F$ is a field of characteristic $0$ and $j_{11}^2=j_{12}^2=j_{22}^2=0$. Define a $T_2$-grading on $A_2$ by $A_2^{(0)}=(M_2(F),0)$, $A_2^{(v)}=\langle (e_{11}, j_{11}), (e_{12}, j_{12}), (e_{22}, j_{22})\rangle_F$. Then there exists $\lim\limits_{n\to \infty} \sqrt[n]{c_n^{T_2\text{-}\mathrm{gr}}(A_2)} = 4+2\sqrt 2= 6.8284\ldots$. \end{theorem} \begin{proof} Let $A$ be the algebra from Theorem~\ref{TheoremT1GradFractPI}. Define a linear isomorphism $\psi \colon A \mathrel{\widetilde{\rightarrow}} A_2$ by $\psi(e_{ij}, e_{k\ell})=(e_{ij}, j_{k\ell})$ and $\psi(e_{ij}, 0)=(e_{ij}, 0)$. Then $\psi(A^{(0)})=A_2^{(0)}$ and $\psi(A^{(1)})=A_2^{(v)}$.
Define an isomorphism $\Theta \colon F\langle X^{T_1\text{-}\mathrm{gr}} \rangle \to F\langle X^{T_2\text{-}\mathrm{gr}} \rangle$ of algebras by $\Theta(x^{(0)}_i)=x^{(0)}_i$ and $\Theta(x^{(1)}_i)=x^{(v)}_i$. We claim that \begin{equation}\label{EqIdT1IdT2}\Theta(\Id^{T_1\text{-}\mathrm{gr}}(A))=\Id^{T_2\text{-}\mathrm{gr}}(A_2).\end{equation} Since $F$ is of characteristic $0$, both $\Id^{T_1\text{-}\mathrm{gr}}(A)$ and $\Id^{T_2\text{-}\mathrm{gr}}(A_2)$ are generated by multilinear polynomials. (The proof is completely analogous to~\cite[Theorem~1.3.8]{ZaiGia}.) In other words, in order to prove~(\ref{EqIdT1IdT2}), it is sufficient to show that if $f\in F\langle X^{T_1\text{-}\mathrm{gr}} \rangle$ is multilinear as an ordinary polynomial in variables $x_1^{(t_1)}, x_2^{(t_2)},\ldots, x_n^{(t_n)}$ where $t_i\in T_1$, then $f \in \Id^{T_1\text{-}\mathrm{gr}}(A)$ if and only if $\Theta(f)\in \Id^{T_2\text{-}\mathrm{gr}}(A_2)$. We substitute only homogeneous elements. Note that $\pi \psi(a) = \pi(a)$ where $\pi$ is the projection on the first component: $\pi(a,b)=a$ for all $(a,b)\in A$ and $(a,b)\in A_2$. Moreover, if $a \in A^{(0)} \cup A^{(1)}$, then $\pi(a)=0$ if and only if $a=0$. Since $T_1$ and $T_2$ are commutative, the value of $f$ under the substitution of homogeneous elements is again a homogeneous element. Applying $\pi$, we show that $f\in P_n^{T_1\text{-}\mathrm{gr}}$ vanishes under the substitution of homogeneous elements $a^{(t_i)}_i\in A^{(t_i)}$, $t_i\in T_1$, if and only if $\Theta(f)$ vanishes under the substitution of $\psi\left(a^{(\tilde t_i)}_i\right)\in A_2^{(\tilde t_i)}$. (Here $\tilde 0 = 0$ and $\tilde 1 = v$.) Hence~(\ref{EqIdT1IdT2}) holds and $$c_n^{T_1\text{-}\mathrm{gr}}(A)=c_n^{T_2\text{-}\mathrm{gr}}(A_2)\text{ for all }n\in\mathbb N.$$ Now we apply Theorem~\ref{TheoremT1GradFractPI}. 
\end{proof} \section{A $T_3$-graded algebra with a non-integer graded PI-exponent} \begin{theorem}\label{TheoremT3GradFractPI} Let $F$ be a field of characteristic $0$. Denote by $I$ the irreducible left $M_2(F)$-module isomorphic to the minimal left ideal $\langle e_{12}, e_{22}\rangle_F \subset M_2(F)$. Let $A = M_2(F)\oplus I$ (direct sum of left ideals) where $IM_2(F) := 0$ and $I^2:=0$. Define a $T_3$-grading on $A$ by $A^{(e_1)}=(M_2(F),0)$, $A^{(e_2)}=\lbrace (\varphi(a), a) \mid a \in I \rbrace$ where $\varphi \colon I \hookrightarrow M_2(F)$ is the natural embedding which is a homomorphism of $M_2(F)$-modules. Then there exists $\lim\limits_{n\to \infty} \sqrt[n]{c_n^{T_3\text{-}\mathrm{gr}}(A)} =3+2\sqrt 2= 5.8284\ldots$ \end{theorem} Theorem~\ref{TheoremT3GradFractPI} is proved at the end of the section. \begin{remark} Similarly, $\lim\limits_{n\to \infty} \sqrt[n]{c_n^{T_3^{\,\mathrm{op}}\text{-}\mathrm{gr}}(A^{\,\mathrm{op}})} =3+2\sqrt 2= 5.8284\ldots$ \end{remark} \begin{remark} Note that $A$ does not contain unity. If a $T_3$-graded algebra is unital, its graded PI-exponent is an integer. (See Theorem~\ref{TheoremTIdemAmitsur} below.) \end{remark} \begin{remark} The algebra $A$ is $T_3$-graded-simple (see Proposition~\ref{PropositionAT3GrSimple} below), however $3+2\sqrt 2=\PIexp^{T_3\text{-}\mathrm{gr}}(A)<\dim A=6$ even if $F$ is algebraically closed. \end{remark} \begin{proposition}\label{PropositionAT3GrSimple} The algebra $A$ is $T_3$-graded-simple. \end{proposition} \begin{proof} First, we notice that $J(A)=(0,I)$. Suppose $W\ne 0$ is a graded ideal of $A$. Then there exists a nonzero homogeneous element $(a_1, b_1) \in W$ where $a_1 \in M_2(F)$ and $b_1\in I$. Since $(0,I)$ contains no nonzero homogeneous elements, we have $a_1 \ne 0$. However $(a_1,b_1)(E,0) = (a_1,0) \in (M_2(F),0) \cap W$. Since $W$ is a two-sided ideal and $M_2(F)$ is simple, we get $(M_2(F),0) \subseteq W$. Furthermore $(0,I) = (M_2(F),0) (0,I) \subseteq W$.
Thus $W = A$, and $A$ is $T_3$-graded-simple. \end{proof} To prove Theorem~\ref{TheoremT3GradFractPI}, we need Lemma~\ref{LemmaAltT3} which is an analog of Lemma~\ref{LemmaAltT1}. We omit $\varphi$ for shortness and write $(e_{ij}, e_{ij})$ instead of $(e_{ij}, \varphi^{-1}(e_{ij}))$. \begin{lemma}\label{LemmaAltT3} Let $\lambda \vdash n$, $n\in\mathbb N$, $\lambda_7 = 0$, and $\lambda_5+\lambda_6 \leqslant \lambda_1$. Then $m(A,(FT_3)^*,\lambda) \ne 0$. \end{lemma} \begin{proof} It is sufficient to show that for some $f\in P_n$ and some $T_\lambda$ we have $e_{T_\lambda}f \not\equiv 0$ on $A$. Let $\beta_2=\lambda_5-\lambda_6$. Fix numbers $\beta_3,\ldots,\beta_{10} \geqslant 0$ such that $\beta_3+\beta_5+\beta_7+\beta_9 = \lambda_6$,\quad $\beta_3+\beta_4=\lambda_4-\lambda_5$,\quad $\beta_5+\beta_6=\lambda_3-\lambda_4$,\quad $\beta_7+\beta_8=\lambda_2-\lambda_3$, and $\beta_9+\beta_{10}=\lambda_1-\lambda_2$. In other words, we have $$D_\lambda=\begin{array}{|c|c|c|c|c|c|c|c|c|c|} \multicolumn{1}{c}{\lambda_6} & \multicolumn{1}{c}{\beta_2} & \multicolumn{1}{c}{\beta_3} & \multicolumn{1}{c}{\beta_4} & \multicolumn{1}{c}{\beta_5} & \multicolumn{1}{c}{\beta_6} & \multicolumn{1}{c}{\beta_7} & \multicolumn{1}{c}{\beta_8} & \multicolumn{1}{c}{\beta_9} & \multicolumn{1}{c}{\beta_{10}} \\ \hline \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \cline{1-10} \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \cline{1-8} \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \cline{1-6} \ldots & \ldots & \ldots & \ldots \\ \cline{1-4} \ldots & \ldots \\ \cline{1-2} \ldots \\ \cline{1-1} \end{array}.$$ (Here $\beta_i$ denotes the number of columns in each block of columns.) We fix some Young tableau $T_\lambda$ of the shape $\lambda$ filled in with the numbers from $1$ to $n$. 
Like in Lemma~\ref{LemmaAltT1}, for each column of $T_\lambda$ we define a multilinear alternating polynomial depending on the variables with the indexes from the column. For shortness, we denote the polynomials corresponding to different columns in the $i$th block by the same letter $f_i$. By $(i_1, \ldots, i_\ell)$ we denote the $\ell$-tuple of numbers from a column (from up to down). By $S\lbrace i_1, \ldots, i_\ell\rbrace$ we denote the symmetric group on $i_1, \ldots, i_\ell$. We define $$f_1 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_6\rbrace} (\sign \sigma) x^{h_{e_1}}_{\sigma(i_3)} x^{h_{e_2}}_{\sigma(i_5)} x^{h_{e_1}}_{\sigma(i_4)} x^{h_{e_2}}_{\sigma(i_2)} x^{h_{e_1}}_{\sigma(i_1)} x^{h_{e_1}}_{\sigma(i_6)} ,$$ $$f_2 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_5\rbrace} (\sign \sigma) x^{h_{e_1}}_{\sigma(i_1)} x^{h_{e_1}}_{\sigma(i_3)} x^{h_{e_2}}_{\sigma(i_5)} x^{h_{e_2}}_{\sigma(i_2)} x^{h_{e_1}}_{\sigma(i_4)} ,$$ $$f_3 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_4\rbrace} (\sign \sigma) x^{h_{e_1}}_{\sigma(i_4)} x^{h_{e_2}}_{\sigma(i_2)} x^{h_{e_1}}_{\sigma(i_1)} x^{h_{e_1}}_{\sigma(i_3)} ,$$ $$f_4 := \sum_{\sigma\in S\lbrace i_1, \ldots, i_4\rbrace} (\sign \sigma) x^{h_{e_1}}_{\sigma(i_1)} x^{h_{e_1}}_{\sigma(i_3)} x^{h_{e_2}}_{\sigma(i_4)} x^{h_{e_2}}_{\sigma(i_2)} ,$$ $$f_5 := \sum_{\sigma\in S\lbrace i_1, i_2, i_3\rbrace} (\sign \sigma) x^{h_{e_2}}_{\sigma(i_2)} x^{h_{e_1}}_{\sigma(i_1)} x^{h_{e_1}}_{\sigma(i_3)} ,\qquad f_6 := \sum_{\sigma\in S\lbrace i_1, i_2, i_3\rbrace} (\sign \sigma) x^{h_{e_2}}_{\sigma(i_2)} x^{h_{e_1}}_{\sigma(i_1)} x^{h_{e_2}}_{\sigma(i_3)} ,$$ $$f_7 := \sum_{\sigma\in S\lbrace i_1, i_2\rbrace} (\sign \sigma) x^{h_{e_2}}_{\sigma(i_2)} x^{h_{e_1}}_{\sigma(i_1)} ,\qquad f_8 := \sum_{\sigma\in S\lbrace i_1, i_2\rbrace} (\sign \sigma) x^{h_{e_1}}_{\sigma(i_1)} x^{h_{e_1}}_{\sigma(i_2)} ,$$ $$f_{9} := x^{h_{e_1}}_{i_1},\qquad f_{10} := x^{h_{e_2}}_{i_1}.$$ Define the polynomial $$f=(f_3 f_1)^{\beta_3}(f_5 f_1)^{\beta_5}(f_7 
f_1)^{\beta_7}(f_9 f_1)^{\beta_9} f_2^{\beta_2} f_4^{\beta_4}f_6^{\beta_6}f_8^{\beta_8}f_{10}^{\beta_{10}} \in P_n.$$ As we have already mentioned, here different copies of $f_i$ depend on different variables. The copies of $f_1$ are alternating polynomials of degree $6$ corresponding to the first $\lambda_6$ columns of height $6$. The copies of $f_2$ are alternating polynomials of degree $5$ corresponding to the next $\beta_2$ columns of height $5$. \ldots The copies of $f_{10}$ are polynomials of degree $1$ with the indexes from the last $\beta_{10}$ columns of height $1$. We claim that $e_{T_\lambda}f \not\equiv 0$. In order to verify this, we fill $D_\lambda$ with specific homogeneous elements and denote the tableau obtained by $\tau$. (See Figure~\ref{FigureTauT3}.) \begin{landscape} \begin{figure} \caption{Substitution for the variables of $e_{T_\lambda}f$.} \label{FigureTauT3} \end{figure} (Here in the $i$th block we have $\beta_i$ columns with the same values in all cells of a row. For shortness, we depict each value for each block only once. The tableau $\tau$ is still of the shape $\lambda$.) \end{landscape} Now for each variable we substitute the element from the corresponding box in $\tau$. Note that $f$ does not vanish under this substitution. Recall that $e_{T_\lambda} = a_{T_\lambda}b_{T_\lambda}$ where $a_{T_\lambda}$ is the symmetrization in the variables of each row and $b_{T_\lambda}$ is the alternation in the variables of each column. Since all $f_i$ are alternating polynomials, $b_{T_\lambda} f$ is a nonzero multiple of $f$. Two sets of variables correspond to the second row of $T_\lambda$. For the variables of the first group we substitute $(e_{22},e_{22}) \in A^{(e_2)}$, for the second one, we substitute $(e_{12},0) \in A^{(e_1)}$. Thus if an item in $a_{T_\lambda}$ mixes variables from these two groups, at least one variable from the second group, i.e. in $f_8$, is replaced with a variable from the first one.
However, $f_8$ vanishes if at least one variable of it is replaced with an element of $A^{(e_2)}$ since $h_{e_1}$ is applied for both variables of $f_8$. Thus all items in $a_{T_\lambda}b_{T_\lambda}f$ where variables from these two groups are mixed, vanish. Therefore, if an item in $a_{T_\lambda}$ replaces a variable from the first three columns with a variable with a different value from the tableau $\tau$, we have too many elements from $A^{(e_2)}$ substituted for the variables of $f_1$, $f_2$, and $f_3$ and the result is zero by virtue of the action of $h_{e_1}$. Therefore all items in $a_{T_\lambda}b_{T_\lambda}f$ where variables from the first three columns having different values are mixed, vanish. We continue this procedure and finally show that if an item in $a_{T_\lambda}$ does not stabilize the sets of variables with the same values from the tableau $\tau$, the corresponding item in $a_{T_\lambda}b_{T_\lambda}f$ vanishes. Hence the value of $a_{T_\lambda}b_{T_\lambda}f$ is a nonzero multiple of the value of $b_{T_\lambda}f$, i.e. is nonzero. The lemma is proved. \end{proof} \begin{proof}[Proof of Theorem~\ref{TheoremT3GradFractPI}.] We use Lemmas~\ref{LemmaTExampleLower} and~\ref{LemmaAltT3}. \end{proof} \section{Positive results on the analog of Amitsur's conjecture for polynomial $T$-graded identities} \begin{theorem}\label{TheoremTCancelAmitsur} Let $A$ be a finite dimensional non-nilpotent $T$-graded associative algebra with $1$ over a field $F$ of characteristic $0$ for some cancellative semigroup $T$. Then there exist constants $C_1, C_2 > 0$, $r_1, r_2 \in \mathbb R$, $d\in\mathbb N$, such that $$C_1 n^{r_1} d^n \leqslant c^{T\text{-}\mathrm{gr}}_n(A) \leqslant C_2 n^{r_2} d^n\text{ for all }n \in \mathbb N.$$ \end{theorem} \begin{corollary} The graded analog of Amitsur's conjecture holds for such codimensions. \end{corollary} \begin{proof}[Proof of Theorem~\ref{TheoremTCancelAmitsur}.]
Note that graded codimensions do not change upon an extension of the base field; the proof of this fact is analogous to the case of ordinary codimensions~\cite[Theorem~4.1.9]{ZaiGia}. Hence we may assume $F$ to be algebraically closed. By~\cite[Corollary 4.1]{KelarevBook}, $J(A)$ is a graded ideal. By Proposition~\ref{PropositionTCancelWedderburn}, $A/J(A)$ is the sum of graded ideals that are $T$-graded-simple algebras. Now we apply Theorem~\ref{TheoremMainTGrAssoc}. \end{proof} \begin{theorem}\label{TheoremTIdemAmitsur} Let $A$ be a finite dimensional non-nilpotent $T$-graded associative algebra with $1$ over a field $F$ of characteristic $0$ for some left or right zero band $T$. Then there exist constants $C_1, C_2 > 0$, $r_1, r_2 \in \mathbb R$, such that $$C_1 n^{r_1} d^n \leqslant c^{T\text{-}\mathrm{gr}}_n(A) \leqslant C_2 n^{r_2} d^n\text{ for all }n \in \mathbb N$$ where $d=\PIexp(A)$ is the ordinary PI-exponent of $A$. \end{theorem} \begin{corollary} The graded analog of Amitsur's conjecture holds for such codimensions. \end{corollary} \begin{proof}[Proof of Theorem~\ref{TheoremTIdemAmitsur}.] Since $A$ is finite dimensional, we may assume that $T$ is finite. Again, without loss of generality, we may assume $F$ to be algebraically closed. By Propositions~\ref{PropositionTIdemGradedIdeals} and~\ref{PropositionTIdemWedderburn}, the Jacobson radical of $A$ is a graded ideal and $A/J(A)$ is the sum of graded ideals that are simple algebras. Therefore, by Theorem~\ref{TheoremMainTGrAssoc} there exists an integer $\PIexp^{T\text{-}\mathrm{gr}}(A)$. By Theorem~\ref{TheoremTIdemGradedWeddMalcev}, we can choose a graded maximal semisimple subalgebra $B$ such that $A=B\oplus J(A)$ (direct sum of graded subspaces). Then we may define an embedding $\varkappa \colon A/J \mathrel{\widetilde{\rightarrow}} B$ (see Theorem~\ref{TheoremMainTGrAssoc}) to be graded. Let $B=B_1 \oplus \ldots \oplus B_s$ (direct sum of ideals) for some simple algebras $B_i$.
Then by Proposition~\ref{PropositionTIdemGradedIdeals} the ideals $B_i$ are graded, $A/J=\varkappa^{-1}(B_1) \oplus \ldots \oplus\varkappa^{-1}(B_s)$ (direct sum of graded ideals), and \begin{equation*}\begin{split}\PIexp^{T\text{-}\mathrm{gr}}(A) = \max\dim\left( B_{i_1}\oplus B_{i_2} \oplus \ldots \oplus B_{i_r} \mathbin{\Bigl|} r \geqslant 1,\right. \\ \left. ((FT)^*B_{i_1})A^+ \,((FT)^*B_{i_2}) A^+ \ldots ((FT)^*B_{i_{r-1}}) A^+\,((FT)^*B_{i_r})\ne 0\right)=\\ \max\dim\left( B_{i_1}\oplus B_{i_2} \oplus \ldots \oplus B_{i_r} \mathbin{\Bigl|} r \geqslant 1,\ B_{i_1} A^+ \,B_{i_2} A^+ \ldots B_{i_{r-1}} A^+\,B_{i_r}\ne 0\right)=\\ \max\dim\left( B_{i_1}\oplus B_{i_2} \oplus \ldots \oplus B_{i_r} \mathbin{\Bigl|} r \geqslant 1,\ B_{i_1} J(A) \,B_{i_2} J(A) \ldots B_{i_{r-1}} J(A)\,B_{i_r}\ne 0\right)=\PIexp(A)\end{split}\end{equation*} since $B_i$ are simple as ordinary algebras. \end{proof} \begin{remark} The equality $\PIexp^{T\text{-}\mathrm{gr}}(A)=\PIexp(A)$ does not imply the equality of codimensions. Indeed, let $k\in\mathbb N$, $k\geqslant 2$, $A=M_k(F)$, $T=\lbrace t_1, \ldots, t_k\rbrace$ where $t_i t_\ell = t_\ell$ for all $1\leqslant i,\ell \leqslant k$, $A^{(t_i)}=\langle e_{1 i},\ldots, e_{k i}\rangle_F$. Then $x_1^{(t_i)}$ are linearly independent modulo $\Id^{T\text{-}\mathrm{gr}}(A)$. In order to check this, it is sufficient to substitute $x_1^{(t_i)} = e_{ii}$. Thus $c_1(A)=1 < c_1^{T\text{-}\mathrm{gr}}(A)=k$. \end{remark} \end{document}
\begin{document} \title{\textbf{Angle-Based Models for Ranking Data} } \author{Hang Xu$^{1}$, Mayer Alvo$^{2}$ and Philip L.H. Yu$^{1}$} \maketitle \lyxaddress{$^{1}$Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong } \begin{singlespace} \lyxaddress{$^{2}$Department of Mathematics and Statistics, University of Ottawa, Canada } \end{singlespace} \begin{abstract} A new class of general exponential ranking models is introduced which we label angle-based models for ranking data. A consensus score vector is assumed, which assigns scores to a set of items, where the scores reflect a consensus view of the relative preference of the items. The probability of observing a ranking is modeled to be proportional to the cosine of its angle from the consensus vector. Bayesian variational inference is employed to determine the corresponding predictive density. It can be seen from simulation experiments that the Bayesian variational inference approach not only has a great computational advantage over traditional MCMC, but also avoids the problem of overfitting inherent in maximum likelihood methods. The model also works when a large number of items are ranked, a setting in which parameter estimation is usually an NP-hard problem for other classes of ranking models. Model extensions to incomplete rankings and mixture models are also developed. Real data applications demonstrate that the model and extensions can handle different tasks for the analysis of ranking data. Keywords: Ranking data; Bayesian variational inference; Incomplete ranking. \end{abstract} \section{Introduction} \noindent Ranking data are often encountered in practice when judges (or individuals) are asked to rank a set of $t$ items, which may be political goals, candidates in an election, types of food, etc. We see examples in voting and elections, market research and food preference, to name a few.
\citet{alvo1991balanced} considered tests of hypotheses related to problems of trend and independence using only the ranks of the data. In another direction, the interest may be in modeling the ranking data. Some of these models are: (i) order-statistics models \citep{Thurstone1927a,yu2000bayesian}, (ii) distance-based models \citep{critchlow1991probability,lee2012mixtures}, (iii) paired-comparison models \citep{Mallows1957}, and (iv) multistage models \citep{Fligner1988}. A more comprehensive discussion of these probability ranking models can be found in the book by \citet{alvo2014statistical}. However, some of these models cannot handle the situation in which the number of items being ranked is large, nor the situation in which incomplete rankings exist in the data. For distance-based models: (i) there is no closed-form expression for the normalizing constant under the Spearman distance, and (ii) the modal ranking is a discrete parameter ranging over a finite space of $t!$ rankings, and searching for it becomes time-consuming when the number of items, $t$, is large. In this article, we first propose a new class of general exponential ranking models called angle-based models for the distribution of rankings. We assume a consensus score vector $\boldsymbol{\theta}$ which assigns scores to the items, where the scores reflect a consensus view of the relative preference of the items. The probability of observing a ranking is proportional to the cosine of its angle from the consensus score vector. The distance-based model with Spearman distance can be seen as a special case of our model. Unlike the Spearman distance-based model, we obtain a very good approximation of the normalizing constant of our angle-based model. Note that this approximation yields explicit forms for the first and second derivatives of the normalizing constant, which facilitates the computation of the ranking probabilities under the model.
For the parameter estimation of the model, we first place a joint Gamma-von Mises-Fisher prior distribution on the parameters. We describe several mathematical difficulties incurred in determining the resulting posterior distribution and propose to make use of the variational inference method. Simulation experiments show that the Bayesian variational inference approach not only has a great computational advantage over traditional Markov chain Monte Carlo (MCMC), but also avoids the over-fitting problem of maximum likelihood estimation (MLE). Our model also works when the number of items being ranked is large, whereas obtaining parameter estimates for other classes of ranking models is usually an NP-hard problem. Model extensions to incomplete rankings and mixture models are also discussed. The simulations and applications show that our extensions handle incomplete rankings well, along with clustering and classification tasks for ranking data. The article is organized as follows. Section 2 introduces the angle-based model as well as the Bayesian MCMC approach. In Section 3, we describe the method of variational inference for our model and derive the predictive density of a new ranking. In Section 4, we consider model extensions to incomplete rankings and mixture models for clustering and classification. In Section 5, we describe several simulation experiments, whereas in Section 6 the methodology is applied to real data sets including a sushi data set, ranking data from the American Psychological Association (APA) presidential election of 1980 and a breast cancer gene expression dataset. We conclude with a discussion in Section 7. \section{Angle-Based Models} \subsection{Model setup} \noindent A ranking $\boldsymbol{R}$ represents the order of preference with respect to a set of items.
In ranking $t$ items, labeled $1,\ldots,t$, a ranking $\boldsymbol{R}=(R(1),\ldots,R(t))^{T}$ is a mapping from the items $1,\ldots,t$ to the ranks $1,\ldots,t$, where $R(2)=3$ means that item 2 is ranked third and $R^{-1}(3)=2$ means that the item ranked third is item 2. It will be more convenient to standardize the rankings as: \[ \boldsymbol{y}=\frac{\boldsymbol{R}-\frac{t+1}{2}}{\sqrt{\frac{t(t^{2}-1)}{12}}}, \] where $\boldsymbol{y}$ is the $t\times1$ vector with $\left\Vert \boldsymbol{y}\right\Vert =1$. We consider the following ranking model: \[ p(\boldsymbol{y}|\kappa,\boldsymbol{\theta})=C(\kappa,\boldsymbol{\theta})\exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{y}\right\} , \] where the parameter $\boldsymbol{\theta}$ is a $t\times1$ vector with $\left\Vert \boldsymbol{\theta}\right\Vert =1$, the parameter $\kappa\geq0$, and $C(\kappa,\boldsymbol{\theta})$ is the normalizing constant. In the case of the distance-based models \citep{alvo2014statistical}, the parameter $\boldsymbol{\theta}$ can be viewed as a modal ranking vector. In fact, if $\boldsymbol{R}$ and $\boldsymbol{\pi}_{0}$ represent an observed ranking and the modal ranking of $t$ items respectively, then the probability of observing $\boldsymbol{R}$ under the Spearman distance-based model is proportional to \begin{eqnarray*} \exp\left\{ -\lambda\left(\frac{1}{2}\sum_{i=1}^{t}\left(R\left(i\right)-\boldsymbol{\pi}_{0}\left(i\right)\right)^{2}\right)\right\} & = & \exp\left\{ -\lambda\left(\frac{t\left(t+1\right)\left(2t+1\right)}{6}-\boldsymbol{\pi}_{0}^{T}\boldsymbol{R}\right)\right\} \\ & \propto & \exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{y}\right\} , \end{eqnarray*} where $\kappa=\lambda\frac{t(t^{2}-1)}{12}$, and $\boldsymbol{y}$ and $\boldsymbol{\theta}$ are the standardized rankings of $\boldsymbol{R}$ and $\boldsymbol{\pi}_{0}$ respectively.
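The standardization and the (unnormalized) model density can be sketched as follows; this is a minimal illustration, and the function names are ours. Note that reversing a ranking maps $\boldsymbol{y}$ to $-\boldsymbol{y}$.

```python
import numpy as np

def standardize(R):
    """Map a ranking R (a permutation of 1..t) to the unit vector y
    of the angle-based model: y = (R - (t+1)/2) / sqrt(t(t^2-1)/12)."""
    R = np.asarray(R, dtype=float)
    t = R.size
    return (R - (t + 1) / 2) / np.sqrt(t * (t**2 - 1) / 12)

def log_density_kernel(y, theta, kappa):
    """Unnormalized log-probability kappa * theta'y = kappa * cos(angle)."""
    return kappa * np.dot(theta, y)
```

Since $\|\boldsymbol{y}\|=1$, taking $\boldsymbol{\theta}=\boldsymbol{y}$ gives the maximal kernel value $\kappa$.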
However, the $\boldsymbol{\pi}_{0}$ in the distance-based model is a discrete permutation vector of the integers $\{1,2,\ldots,t\}$, whereas the $\boldsymbol{\theta}$ in our model is a real-valued vector representing a consensus view of the relative preference of the items from the individuals. Since both $\left\Vert \boldsymbol{\theta}\right\Vert =1$ and $\left\Vert \boldsymbol{y}\right\Vert =1$, the term $\boldsymbol{\theta}^{T}\boldsymbol{y}$ can be seen as $\cos\phi$ where $\phi$ is the angle between the consensus score vector $\boldsymbol{\theta}$ and the observation $\boldsymbol{y}$. Figure \ref{fig:Illustration-for-the-angle} illustrates an example of the angle between the consensus score vector $\boldsymbol{\theta}=(0,1,0)^{T}$ and the standardized observation of $\boldsymbol{R}=\left(1,2,3\right)^{T}$ on the sphere for $t=3$. The probability of observing a ranking is proportional to the cosine of the angle from the consensus score vector. The parameter $\kappa$ can be viewed as a concentration parameter. For small $\kappa$, the distribution of rankings will appear close to uniform, whereas for larger values of $\kappa$, the distribution of rankings will be more concentrated around the consensus score vector. \begin{figure} \caption{\label{fig:Illustration-for-the-angle}Illustration of the angle $\phi$ between the consensus score vector $\boldsymbol{\theta}=(0,1,0)^{T}$ and the standardized observation of $\boldsymbol{R}=\left(1,2,3\right)^{T}$ on the sphere for $t=3$.} \end{figure} To compute the normalizing constant $C(\kappa,\boldsymbol{\theta})$, let $\mathcal{P}_{t}$ be the set of standardized rankings of all $t!$ permutations of the integers $1,\ldots,t$. Then \begin{equation} \left(C(\kappa,\boldsymbol{\theta})\right)^{-1}=\sum_{\boldsymbol{y}\in\mathcal{P}_{t}}\exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{y}\right\} .\label{eq:normalizing constant} \end{equation} Notice that the summation is over the $t!$ elements of $\mathcal{P}_{t}$. When $t$ is large, say greater than 15, the exact calculation of the normalizing constant is prohibitive.
Using the fact that the set of $t!$ standardized rankings lies on a sphere in $(t-1)$-space, our model resembles the continuous von Mises-Fisher distribution, abbreviated as $vMF(\boldsymbol{x}|\boldsymbol{m},\kappa)$, which is defined on the $\left(p-1\right)$-dimensional unit sphere with mean direction $\boldsymbol{m}$ and concentration parameter $\kappa$: \[ p(\boldsymbol{x}|\kappa,\boldsymbol{m})=V_{p}(\kappa)\exp(\kappa\boldsymbol{m}^{T}\boldsymbol{x}), \] where \[ V_{p}(\kappa)=\frac{\kappa^{\frac{p}{2}-1}}{\left(2\pi\right)^{\frac{p}{2}}I_{\frac{p}{2}-1}(\kappa)}, \] and $I_{\frac{p}{2}-1}(\kappa)$ is the modified Bessel function of the first kind with order $\frac{p}{2}-1.$ Consequently, we may approximate the sum in (\ref{eq:normalizing constant}) by an integral over the sphere. It is shown in Appendix A that \[ C(\kappa,\boldsymbol{\theta})\simeq C_{t}(\kappa)=\frac{\kappa^{\frac{t-3}{2}}}{2^{\frac{t-3}{2}}t!I_{\frac{t-3}{2}}(\kappa)\Gamma(\frac{t-1}{2})}, \] where $\Gamma(\cdot)$ is the gamma function. Table \ref{tab:The-error-rate-of-NC} shows the error rate of the approximate log-normalizing constant as compared to the exact one computed by direct summation. Here, $\kappa$ is chosen to be 0.01 to 2 and $t$ ranges from 3 to 11. Note that the exact calculation of the normalizing constant for $t=11$ requires the summation of $11!\approx3.9\times10^{7}$ permutations. The computer ran out of memory (16GB) beyond $t=11$. This approximation appears to be very accurate even when $t=3$, and the error drops rapidly as $t$ increases. Note that this approximation allows us to approximate the first and second derivatives of $\log C$, which facilitates our computations in what follows. Notice that $\kappa$ may grow with $t$ as $\boldsymbol{\theta}^{T}\boldsymbol{y}$ is a sum of $t$ terms.
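For small $t$ the approximation $C_{t}(\kappa)$ can be checked directly against the brute-force sum over all $t!$ permutations; a sketch using SciPy (the function names are ours, and the consensus vector is taken to be a standardized ranking):

```python
import itertools
import numpy as np
from scipy.special import iv, gammaln

def standardize(R):
    """Standardize a ranking to a unit vector."""
    R = np.asarray(R, dtype=float)
    t = R.size
    return (R - (t + 1) / 2) / np.sqrt(t * (t**2 - 1) / 12)

def log_C_exact(kappa, theta, t):
    """-log of the sum of exp(kappa * theta'y) over all t! permutations."""
    ys = (standardize(p) for p in itertools.permutations(range(1, t + 1)))
    return -np.log(sum(np.exp(kappa * theta @ y) for y in ys))

def log_C_approx(kappa, t):
    """log C_t(kappa) = log[kappa^((t-3)/2) / (2^((t-3)/2) t!
    I_{(t-3)/2}(kappa) Gamma((t-1)/2))]."""
    nu = (t - 3) / 2
    return (nu * np.log(kappa / 2) - np.log(iv(nu, kappa))
            - gammaln(t + 1) - gammaln((t - 1) / 2))
```

For $t=5$ and $\kappa=1$ the two log-constants agree closely, in line with the error rates reported in Table \ref{tab:The-error-rate-of-NC}.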
It can be seen from the applications in Section 6 that in one of the clusters for the APA data ($t=5$), $\kappa$ is $7.44$($\approx1.5t$) (see Table 4) while in the gene data ($t=96$), $\kappa$ is $194.34(\approx2.0t)$ (see Table 5). We thus compute the error rate for $\kappa=t$ and $\kappa=2t$ as shown in Figure \ref{fig:The-error-rate-of-NC-1}. It is found that the approximation is still accurate with error rate of less than 0.5\% for $\kappa=t$ and is acceptable for large $t$ when $\kappa=2t$ as the error rate decreases in $t$. The von Mises-Fisher distribution was used to model compositional data by \citet{hornik2014movmf} who also provide different approaches for estimating $\kappa$ efficiently. \begin{table}[h] \begin{centering} {\scriptsize{}\hspace*{-1cm}} \begin{tabular}{cccccccccc} \hline & \multicolumn{9}{c}{{\footnotesize{}$t$}}\tabularnewline \cline{2-10} {\footnotesize{}$\kappa$} & {\footnotesize{}3} & {\footnotesize{}4} & {\footnotesize{}5} & {\footnotesize{}6} & {\footnotesize{}7} & {\footnotesize{}8} & {\footnotesize{}9} & {\footnotesize{}10} & {\footnotesize{}11}\tabularnewline \hline {\footnotesize{}$0.01$} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%}\tabularnewline {\footnotesize{}$0.1$} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%} & {\footnotesize{}<0.00001\%}\tabularnewline {\footnotesize{}$0.5$} & {\footnotesize{}0.00003\%} & {\footnotesize{}0.00042\%} & {\footnotesize{}0.00024\%} & {\footnotesize{}0.00013\%} & {\footnotesize{}0.00007\%} & {\footnotesize{}0.00004\%} & {\footnotesize{}0.00003\%} & {\footnotesize{}0.00002\%} & 
{\footnotesize{}0.00001\%}\tabularnewline {\footnotesize{}$0.8$} & {\footnotesize{}0.00051\%} & {\footnotesize{}0.00261\%} & {\footnotesize{}0.00150\%} & {\footnotesize{}0.00081\%} & {\footnotesize{}0.00046\%} & {\footnotesize{}0.00027\%} & {\footnotesize{}0.00017\%} & {\footnotesize{}0.00011\%} & {\footnotesize{}0.00008\%}\tabularnewline {\footnotesize{}$1$} & {\footnotesize{}0.00175\%} & {\footnotesize{}0.00607\%} & {\footnotesize{}0.00354\%} & {\footnotesize{}0.00194\%} & {\footnotesize{}0.00110\%} & {\footnotesize{}0.00066\%} & {\footnotesize{}0.00041\%} & {\footnotesize{}0.00027\%} & {\footnotesize{}0.00018\%}\tabularnewline {\footnotesize{}$2$} & {\footnotesize{}0.05361\%} & {\footnotesize{}0.06803\%} & {\footnotesize{}0.04307\%} & {\footnotesize{}0.02528\%} & {\footnotesize{}0.01508\%} & {\footnotesize{}0.00932\%} & {\footnotesize{}0.00598\%} & {\footnotesize{}0.00398\%} & {\footnotesize{}0.00273\%}\tabularnewline \hline \end{tabular} \par\end{centering}{\scriptsize \par} \caption{\label{tab:The-error-rate-of-NC}The error rate of the approximate log-normalizing constant as compared to the exact one computed by direct summation. } \end{table} \begin{figure} \caption{\label{fig:The-error-rate-of-NC-1}The error rate of the approximate log-normalizing constant for $\kappa=t$ and $\kappa=2t$.} \end{figure} \subsection{Maximum likelihood estimation (MLE) of our model\label{subsec:MLE-of-osimple-model}} \noindent Let $\boldsymbol{Y}=\left\{ \boldsymbol{y}_{1},\ldots,\boldsymbol{y}_{N}\right\} $ be a random sample of $N$ standardized rankings drawn from $p(\boldsymbol{y}|\kappa,\boldsymbol{\theta})$.
The log-likelihood of $\left(\kappa,\boldsymbol{\theta}\right)$ is then given by \begin{equation} L(\boldsymbol{Y}|\kappa,\boldsymbol{\theta})=N\ln C_{t}(\kappa)+\sum_{i=1}^{N}\kappa\boldsymbol{\theta}^{T}\boldsymbol{y}_{i}.\label{eq:log-likelihood-data} \end{equation} Maximizing (\ref{eq:log-likelihood-data}) subject to $\left\Vert \boldsymbol{\theta}\right\Vert =1$ and $\kappa\geq0$, we find that the maximum likelihood estimator of $\boldsymbol{\theta}$ is given by $\hat{\boldsymbol{\theta}}_{MLE}=\frac{\sum_{i=1}^{N}\boldsymbol{y}_{i}}{\left\Vert \sum_{i=1}^{N}\boldsymbol{y}_{i}\right\Vert },$ and $\hat{\kappa}$ is the solution of \begin{equation} A_{t}(\kappa)\equiv\frac{-C_{t}^{'}(\kappa)}{C_{t}(\kappa)}=\frac{I_{\frac{t-1}{2}}\left(\kappa\right)}{I_{\frac{t-3}{2}}\left(\kappa\right)}=\frac{\left\Vert \sum_{i=1}^{N}\boldsymbol{y}_{i}\right\Vert }{N}\equiv r.\label{eq:MLE_alpha} \end{equation} A simple approximation to the solution of (\ref{eq:MLE_alpha}) following \citet{banerjee2005clustering} is given by \[ \hat{\kappa}_{MLE}=\frac{r(t-1-r^{2})}{1-r^{2}}. \] A more precise approximation can be obtained from a few iterations of Newton's method. Using the method suggested by \citet{sra2012short}, starting from an initial value $\kappa_{0}$, we can recursively update $\kappa$ by iteration: \[ \kappa_{i+1}=\kappa_{i}-\frac{A_{t}(\kappa_{i})-r}{1-A_{t}(\kappa_{i})^{2}-\frac{t-2}{\kappa_{i}}A_{t}(\kappa_{i})},\;i=0,1,2,\ldots. \] \subsection{Bayesian method with conjugate prior and posterior\label{subsec:Bayesian-method-SIR}} \noindent Taking a Bayesian approach, we consider the following conjugate prior for $(\kappa,\boldsymbol{\theta})$ as \begin{equation} p(\kappa,\boldsymbol{\theta})\propto\left[C_{t}(\kappa)\right]^{\nu_{0}}\exp\left\{ \beta_{0}\kappa\boldsymbol{m}_{0}^{T}\boldsymbol{\theta}\right\} ,\label{eq:conjugate_prior} \end{equation} where $\left\Vert \boldsymbol{m}_{0}\right\Vert =1$, $\nu_{0},\beta_{0}\geq0$. 
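Before moving on with the Bayesian treatment, the MLE computation of Section \ref{subsec:MLE-of-osimple-model} can be sketched numerically: the Banerjee-style initial value followed by the Newton refinement displayed above. The function names are ours; SciPy's `iv` is the modified Bessel function $I_{\nu}$.

```python
import numpy as np
from scipy.special import iv

def A_t(kappa, t):
    """A_t(kappa) = I_{(t-1)/2}(kappa) / I_{(t-3)/2}(kappa)."""
    return iv((t - 1) / 2, kappa) / iv((t - 3) / 2, kappa)

def kappa_mle(r, t, newton_steps=10):
    """Solve A_t(kappa) = r for kappa: start from the closed-form
    approximation r(t-1-r^2)/(1-r^2), then apply Newton's method."""
    k = r * (t - 1 - r**2) / (1 - r**2)
    for _ in range(newton_steps):
        a = A_t(k, t)
        k -= (a - r) / (1 - a**2 - (t - 2) / k * a)
    return k
```

Here $r=\Vert\sum_{i}\boldsymbol{y}_{i}\Vert/N\in(0,1)$, and a handful of Newton steps typically suffices.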
Given $\boldsymbol{Y}$, the posterior density of $(\kappa,\boldsymbol{\theta})$ can be expressed by \[ p(\kappa,\boldsymbol{\theta}|\boldsymbol{Y})\propto\exp\left\{ \beta\kappa\boldsymbol{m}^{T}\boldsymbol{\theta}\right\} V_{t}(\beta\kappa)\frac{\left[C_{t}(\kappa)\right]^{N+\nu_{0}}}{V_{t}(\beta\kappa)}, \] where $\boldsymbol{m}=\left(\beta_{0}\boldsymbol{m}_{\boldsymbol{0}}+\sum_{i=1}^{N}\boldsymbol{y}_{i}\right)\beta^{-1},$ $\beta=\left\Vert \beta_{0}\boldsymbol{m}_{0}+\sum_{i=1}^{N}\boldsymbol{y}_{i}\right\Vert $. The posterior density can be factorized as \begin{equation} p(\kappa,\boldsymbol{\theta}|\boldsymbol{Y})=p(\boldsymbol{\theta}|\kappa,\boldsymbol{Y})p(\kappa|\boldsymbol{Y}),\label{eq:conjugate_posterior} \end{equation} where $p(\boldsymbol{\theta}|\kappa,\boldsymbol{Y})\sim vMF(\boldsymbol{\theta}|\boldsymbol{m},\beta\kappa)$ and \[ p(\kappa|\boldsymbol{Y})\propto\frac{\left[C_{t}(\kappa)\right]^{N+\nu_{0}}}{V_{t}(\beta\kappa)}=\frac{\kappa^{\frac{t-3}{2}(\upsilon_{0}+N)}I_{\frac{t-2}{2}}(\beta\kappa)}{\left[I_{\frac{t-3}{2}}(\kappa)\right]^{\nu_{0}+N}\left(\beta\kappa\right)^{\frac{t-2}{2}}}. \] The normalizing constant for $p(\kappa|\boldsymbol{Y})$ is not available in closed form. \citet{nunez2005bayesian} suggested using a sampling-importance-resampling (SIR) procedure with a proposal density chosen to be the gamma density with mean $\hat{\kappa}_{MLE}$ and variance equal to some pre-specified number such as 50 or 100. However, in a simulation study, we found that the choice of this variance is crucial to the performance of SIR: an improper choice may lead to slow or unsuccessful convergence. Moreover, the MCMC method is computationally intensive.
Furthermore, when the sample size $N$ is large, $\beta\kappa$ can be very large, which complicates the computation of the term $I_{\frac{t-2}{2}}\left(\beta\kappa\right)$ in $V_{t}(\beta\kappa).$ Thus the calculation of the weights in the SIR method will fail when $N$ is large. We conclude that, in view of the difficulties of sampling directly from $p(\kappa|\boldsymbol{Y})$, it may be preferable to approximate the posterior distribution with an alternative method known as variational inference (abbreviated VI from here on). \section{Variational Inference} \noindent VI provides a deterministic approximation to an intractable posterior density through optimization. It has been used in many applications; it tends to be faster than classical methods such as Markov chain Monte Carlo (MCMC) sampling, and scales more easily to large data. The basic idea behind VI is to first posit a candidate family of densities and then to select the member of that family which is closest to the target posterior density as measured by the Kullback-Leibler divergence. If $q\left(\boldsymbol{Z}\right)$ represents the candidate family and $p\left(\boldsymbol{Z}|\boldsymbol{Y}\right)$ represents the target posterior density, the Kullback-Leibler divergence is given by \[ KL\left(q\,\|\,p\right)=E_{q}\left[\ln\frac{q\left(\boldsymbol{Z}\right)}{p\left(\boldsymbol{Z}|\boldsymbol{Y}\right)}\right]. \] See \citet{blei2017variational} for a more comprehensive discussion of VI.
We first adopt a joint vMF-Gamma distribution as the prior for $(\kappa,\boldsymbol{\theta})$: \begin{align*} p(\kappa,\boldsymbol{\theta}) & =p(\boldsymbol{\theta}|\kappa)p(\kappa)\\ & =vMF(\boldsymbol{\theta}|\boldsymbol{m}_{0},\beta_{0}\kappa)Gamma(\kappa|a_{0},b_{0}), \end{align*} where $Gamma(\kappa|a_{0},b_{0})$ is the Gamma density function with shape parameter $a_{0}$ and rate parameter $b_{0}$ (i.e., mean equal to $\frac{a_{0}}{b_{0}}$), and $p(\boldsymbol{\theta}|\kappa)=vMF(\boldsymbol{\theta}|\boldsymbol{m}_{0},\beta_{0}\kappa)$. The choice of $Gamma(\kappa|a_{0},b_{0})$ for $p(\kappa)$ is motivated by the fact that for large values of $\kappa$, $p(\kappa)$ based on (\ref{eq:conjugate_posterior}) tends to take the shape of a Gamma density. In fact, for large values of $\kappa$, $I_{\frac{t-3}{2}}(\kappa)\simeq\frac{e^{\kappa}}{\sqrt{2\pi\kappa}},$ and hence $p(\kappa)$ becomes the Gamma density with shape $(\nu_{0}-1)\frac{t-2}{2}+1$ and rate $\nu_{0}-\beta_{0}$: \[ p(\kappa)\propto\frac{\left[C_{t}(\kappa)\right]^{\nu_{0}}}{V_{t}(\kappa\beta_{0})}\simeq\kappa^{(\nu_{0}-1)\frac{t-2}{2}}\exp(-(\nu_{0}-\beta_{0})\kappa). \] In a similar vein, \citet{forbes2015fast} used a Gamma-based approximation to develop an algorithm for sampling from the Bessel exponential posterior distribution for $\kappa.$ Under the usual variational Bayesian methods, all variables are assumed to be mutually independent. This is known as the mean-field approximation. However, inspired by the conjugate posterior distribution (\ref{eq:conjugate_posterior}), we adopt a structural factorization of the variational posterior as $q(\boldsymbol{\theta},\kappa)=q(\boldsymbol{\theta}|\kappa)q(\kappa)$ which retains the dependency between $\boldsymbol{\theta}$ and $\kappa$.
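The large-$\kappa$ behaviour invoked above, $I_{\nu}(\kappa)\simeq e^{\kappa}/\sqrt{2\pi\kappa}$, is easy to check numerically; a quick sketch with SciPy (here $\nu=1$, which corresponds to $t=5$):

```python
import numpy as np
from scipy.special import iv

kappa = 100.0
nu = 1.0  # (t - 3) / 2 for t = 5
asymptotic = np.exp(kappa) / np.sqrt(2 * np.pi * kappa)
exact = iv(nu, kappa)
rel_err = abs(asymptotic - exact) / exact  # small already at kappa = 100
```

The leading correction is of order $1/\kappa$, so the relative error shrinks as $\kappa$ grows.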
\subsection{Optimization of the variational distribution\label{subsec:Optimization-of-the-VI}} \noindent In the variational inference framework, we aim to determine $q$ so as to minimize the Kullback-Leibler (KL) divergence between $p(\boldsymbol{\theta},\kappa|\boldsymbol{Y})$ and $q(\boldsymbol{\theta},\kappa)$. This can be shown to be equivalent to maximizing the evidence lower bound (ELBO) \citep{blei2017variational}. So the optimization of the variational factors $q(\boldsymbol{\theta}|\kappa)$ and $q(\kappa)$ is performed by maximizing the evidence lower bound $\mathcal{L}(q)$ with respect to $q$ on the log-marginal likelihood, which in our model is given by \begin{align} \mathcal{L}(q) & =E_{q(\boldsymbol{\theta},\kappa)}\left[\ln\frac{p(\boldsymbol{y}|\kappa,\boldsymbol{\theta})p(\boldsymbol{\theta}|\kappa)p(\kappa)}{q(\boldsymbol{\theta}|\kappa)q(\kappa)}\right]\label{eq:evidence_lower_bound}\\ & =E_{q(\boldsymbol{\theta},\kappa)}\left[f(\boldsymbol{\theta},\kappa)\right]-E_{q(\boldsymbol{\theta},\kappa)}\left[\ln q(\boldsymbol{\theta}|\kappa)\right]-E_{q(\kappa)}\left[\ln q(\kappa)\right]+constant,\nonumber \end{align} where all the expectations are taken with respect to $q(\boldsymbol{\theta},\kappa)$ and \begin{align*} f(\boldsymbol{\theta},\kappa) & =\sum_{i=1}^{N}\kappa\boldsymbol{\theta}^{T}\boldsymbol{y}_{i}+N\left(\frac{t-3}{2}\right)\ln\kappa-N\ln I_{\frac{t-3}{2}}(\kappa)+\kappa\beta_{0}\boldsymbol{m}_{0}^{T}\boldsymbol{\theta}\\ & +\left(\frac{t-2}{2}\right)\ln\kappa-\ln I_{\frac{t-2}{2}}(\kappa\beta_{0})+(a_{0}-1)\ln\kappa-b_{0}\kappa. 
\end{align*} For fixed $\kappa$, the optimal variational factor satisfies $\ln q^{*}(\boldsymbol{\theta}|\kappa)=\kappa\beta_{0}\boldsymbol{m}_{0}^{T}\boldsymbol{\theta}+\sum_{i=1}^{N}\kappa\boldsymbol{\theta}^{T}\boldsymbol{y}_{i}+constant.$ We recognize $q^{*}(\boldsymbol{\theta}|\kappa)$ as a von Mises-Fisher distribution $vMF(\boldsymbol{\theta}|\boldsymbol{m},\kappa\beta)$ where \[ \beta=\left\Vert \beta_{0}\boldsymbol{m}_{0}+\sum_{i=1}^{N}\boldsymbol{y}_{i}\right\Vert \;\text{ and }\;\boldsymbol{m}=\left(\beta_{0}\boldsymbol{m}_{0}+\sum_{i=1}^{N}\boldsymbol{y}_{i}\right)\beta^{-1}. \] Let $g(\kappa)$ denote the remaining terms in $f(\boldsymbol{\theta},\kappa)$ which only involve $\kappa$: \begin{align*} g(\kappa) & =\left[N\left(\frac{t-3}{2}\right)+a_{0}-1\right]\ln\kappa-b_{0}\kappa-N\ln I_{\frac{t-3}{2}}(\kappa)-\ln I_{\frac{t-2}{2}}(\kappa\beta_{0})+\ln I_{\frac{t-2}{2}}(\kappa\beta). \end{align*} It is still difficult to maximize $E_{q(\kappa)}\left[g(\kappa)\right]-E_{q(\kappa)}\left[\ln q(\kappa)\right]$ since it involves the evaluation of the expected modified Bessel function. Following a similar idea to that of \citet{taghia2014bayesian}, we first find a tight lower bound $\underline{g(\kappa)}$ for $g(\kappa)$ so that \[ \mathcal{L}(q)\geq\underline{\mathcal{L}(q)}=E_{q(\kappa)}\left[\underline{g(\kappa)}\right]-E_{q(\kappa)}\left[\ln q(\kappa)\right]+constant. \] From the properties of the modified Bessel function of the first kind, it is known that the function $\ln I_{\nu}(x)$ is strictly concave in $x$ and strictly convex in $\ln x$ for all $\nu>0$.
We then have the following two inequalities: \begin{equation} \ln I_{\nu}(x)\leq\ln I_{\nu}(\bar{x})+\left(\frac{\partial}{\partial x}\ln I_{\nu}(\bar{x})\right)(x-\bar{x}),\label{eq:Ineq-I} \end{equation} \begin{equation} \ln I_{\nu}(x)\geq\ln I_{\nu}(\bar{x})+\left(\frac{\partial}{\partial x}\ln I_{\nu}(\bar{x})\right)\bar{x}(\ln x-\ln\bar{x}),\label{eq:Ineq-II} \end{equation} where $\frac{\partial}{\partial x}\ln I_{\nu}(\bar{x})$ is the first derivative of $\ln I_{\nu}(x)$ evaluated at $x=\bar{x}$. Applying inequality (\ref{eq:Ineq-I}) to $\ln I_{\frac{t-3}{2}}(\kappa)$ and $\ln I_{\frac{t-2}{2}}(\kappa\beta_{0})$, and inequality (\ref{eq:Ineq-II}) to $\ln I_{\frac{t-2}{2}}(\kappa\beta)$, we have \begin{align*} g(\kappa) & \geq\underline{g(\kappa)}=\left[N\left(\frac{t-3}{2}\right)+a_{0}-1\right]\ln\kappa-b_{0}\kappa+\ln I_{\frac{t-2}{2}}(\beta\bar{\kappa})\\ & +\frac{\partial}{\partial\beta\kappa}\ln I_{\frac{t-2}{2}}(\beta\bar{\kappa})\beta\bar{\kappa}\left(\ln\beta\kappa-\ln\beta\bar{\kappa}\right)-N\ln I_{\frac{t-3}{2}}(\bar{\kappa})\\ & -N\frac{\partial}{\partial\kappa}\ln I_{\frac{t-3}{2}}(\bar{\kappa})\left(\kappa-\bar{\kappa}\right)-\ln I_{\frac{t-2}{2}}(\beta_{0}\bar{\kappa})-\frac{\partial}{\partial\beta_{0}\kappa}\ln I_{\frac{t-2}{2}}(\beta_{0}\bar{\kappa})\beta_{0}\left(\kappa-\bar{\kappa}\right). \end{align*} Since the equality holds when $\kappa=\bar{\kappa}$, we see that the lower bound of $\mathcal{L}(q)$ is tight.
Rearranging the terms, we have the approximate optimal solution as $\ln q^{*}(\kappa)=(a-1)\ln\kappa-b\kappa+constant,$ where \begin{equation} a=a_{0}+N\left(\frac{t-3}{2}\right)+\beta\bar{\kappa}\left[\frac{\partial}{\partial\beta\kappa}\ln I_{\frac{t-2}{2}}(\beta\bar{\kappa})\right],\label{eq:update_postrerior_a} \end{equation} \begin{equation} b=b_{0}+N\frac{\partial}{\partial\kappa}\ln I_{\frac{t-3}{2}}(\bar{\kappa})+\beta_{0}\left[\frac{\partial}{\partial\beta_{0}\kappa}\ln I_{\frac{t-2}{2}}(\beta_{0}\bar{\kappa})\right].\label{eq:Update_posterior_b} \end{equation} We recognize $q^{*}(\kappa)$ to be a $Gamma(\kappa|a,b)$ density with shape $a$ and rate $b$. The quantity $\bar{\kappa}$ is updated at each iteration by the posterior mode (or the posterior mean when $a\leq1$): \begin{equation} \bar{\kappa}=\begin{cases} \frac{a-1}{b} & \mbox{if }a>1,\\ \frac{a}{b} & \mbox{otherwise.} \end{cases}\label{eq:Update_alpha_ba} \end{equation} A summary of the algorithm for our estimation is shown in Algorithm \ref{alg:Bayesian-Estimation-our_model}.
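The iterative updates (\ref{eq:update_postrerior_a})--(\ref{eq:Update_alpha_ba}) can be sketched compactly as follows. This is our own sketch of the scheme (convergence checks omitted, a fixed number of iterations instead); SciPy's `ivp` gives the derivative of $I_{\nu}$.

```python
import numpy as np
from scipy.special import iv, ivp

def dlnI(nu, x):
    """d/dx log I_nu(x) = I_nu'(x) / I_nu(x)."""
    return ivp(nu, x) / iv(nu, x)

def vi_fit(Y, m0, beta0=1.0, a0=1.0, b0=1.0, n_iter=200):
    """Variational updates for the angle-based model.
    Y is an (N, t) array of standardized rankings; m0 is a unit prior mean."""
    Y = np.asarray(Y, dtype=float)
    N, t = Y.shape
    s = beta0 * np.asarray(m0, dtype=float) + Y.sum(axis=0)
    beta = np.linalg.norm(s)
    m = s / beta                    # posterior mean direction (fixed)
    kbar = a0 / b0                  # initial value of kappa-bar
    for _ in range(n_iter):
        a = a0 + N * (t - 3) / 2 + beta * kbar * dlnI((t - 2) / 2, beta * kbar)
        b = (b0 + N * dlnI((t - 3) / 2, kbar)
             + beta0 * dlnI((t - 2) / 2, beta0 * kbar))
        kbar = (a - 1) / b if a > 1 else a / b
    return m, a, b, kbar
```

The returned $(\boldsymbol{m},\beta\bar{\kappa})$ and $(a,b)$ parameterize $q^{*}(\boldsymbol{\theta}|\kappa)$ and $q^{*}(\kappa)$ respectively.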
\begin{algorithm}[h] \textbf{\small{}Input: }{\small{}Scaled $\boldsymbol{Y}=\left\{ \boldsymbol{y}_{1},...,\boldsymbol{y}_{N}\right\} $}{\small \par} \textbf{\small{}Step 1: Initialization}{\small \par} \begin{enumerate} \item {\small{}Set the prior parameters: $\beta_{0}$, $\boldsymbol{m}_{0}$, $a_{0}$ and $b_{0}$.}{\small \par} \item {\small{}Calculate the posterior parameters for $q^{*}(\boldsymbol{\theta}|\kappa)$: $\boldsymbol{m}$, $\beta$.}{\small \par} \item {\small{}Calculate the initial value of $\bar{\kappa}=\frac{a_{0}}{b_{0}}$.}{\small \par} \end{enumerate} \textbf{\small{}Step 2: Optimization of the posterior distribution}{\small \par} \textbf{\small{}repeat}{\small \par} \begin{enumerate} \item {\small{}Update posterior parameter $a$ and $b$ by (\ref{eq:update_postrerior_a}) and (\ref{eq:Update_posterior_b}) respectively.}{\small \par} \item {\small{}Update $\bar{\kappa}$ by (\ref{eq:Update_alpha_ba}).}{\small \par} \end{enumerate} \textbf{\small{}until convergence}{\small \par} \caption{\label{alg:Bayesian-Estimation-our_model}Bayesian Estimation using variational inference of our model} \end{algorithm} \subsection{\label{subsec:Predictive-Density-of_our_model}Predictive density of our model} \noindent We may derive the predictive density for a new standardized ranking $\tilde{\boldsymbol{y}}$ given the observed data $\boldsymbol{Y}$. 
The exact predictive density is given by \begin{equation} p(\tilde{\boldsymbol{y}}|\boldsymbol{Y})=\int\int p(\tilde{\boldsymbol{y}}|\kappa,\boldsymbol{\theta})p(\kappa,\boldsymbol{\theta}|\boldsymbol{Y})\,d\kappa d\boldsymbol{\theta}.\label{eq:exact_predictive_density-1} \end{equation} We can approximate this density by first replacing the true posterior distribution with its variational approximation as: \begin{align} p(\tilde{\boldsymbol{y}}|\boldsymbol{Y}) & \approx q(\tilde{\boldsymbol{y}}|\boldsymbol{Y})=\int\int p(\tilde{\boldsymbol{y}}|\kappa,\boldsymbol{\theta})q(\boldsymbol{\theta}|\kappa,\boldsymbol{Y})q(\kappa|\boldsymbol{Y})\,d\kappa d\boldsymbol{\theta}\nonumber \\ & =\int\int p(\tilde{\boldsymbol{y}}|\kappa,\boldsymbol{\theta})vMF(\boldsymbol{\theta}|\boldsymbol{m},\beta\kappa)d\boldsymbol{\theta}Gamma(\kappa|a,b)\,d\kappa\label{eq:posterior_predictiver_density} \end{align} where $\boldsymbol{m}$, $\beta$, $a$ and $b$ are the posterior parameters calculated from our algorithm. After using a second-order approximation of the Bessel function, the approximate predictive density of $\tilde{\boldsymbol{y}}$ can be obtained by: \[ q(\tilde{\boldsymbol{y}}|\boldsymbol{Y})\approx h(\tilde{\boldsymbol{y}})l(\bar{\kappa})e^{r(\tilde{\boldsymbol{y}})\bar{\kappa}}\bar{\kappa}^{-s(\tilde{\boldsymbol{y}})}\frac{b^{a+\frac{t-1}{2}-1}\Gamma(a+s(\tilde{\boldsymbol{y}})+\frac{t-1}{2}-1)}{\left(b+r(\tilde{\boldsymbol{y}})\right)^{a+s(\tilde{\boldsymbol{y}})+\frac{t-1}{2}-1}\Gamma(a+\frac{t-1}{2}-1)}, \] where $\eta(\tilde{\boldsymbol{y}})=\left\Vert \tilde{\boldsymbol{y}}+\beta\boldsymbol{m}\right\Vert $ and \[
s(\tilde{\boldsymbol{y}})=-\eta^{2}(\tilde{\boldsymbol{y}})\bar{\kappa}^{2}\left(\frac{I'_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}{I_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}\right)'+\beta^{2}\bar{\kappa}^{2}\left(\frac{I'_{\frac{t-2}{2}}(\beta\bar{\kappa})}{I_{\frac{t-2}{2}}(\beta\bar{\kappa})}\right)'+\bar{\kappa}^{2}\left(\frac{I'_{\frac{t-3}{2}}(\bar{\kappa})}{I_{\frac{t-3}{2}}(\bar{\kappa})}\right)', \] \[ r(\tilde{\boldsymbol{y}})=\frac{s(\tilde{\boldsymbol{y}})}{\bar{\kappa}}-\eta(\tilde{\boldsymbol{y}})\frac{I'_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}{I_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}+\beta\frac{I'_{\frac{t-2}{2}}(\beta\bar{\kappa})}{I_{\frac{t-2}{2}}(\beta\bar{\kappa})}+\frac{I'_{\frac{t-3}{2}}(\bar{\kappa})}{I_{\frac{t-3}{2}}(\bar{\kappa})}, \] \[ h(\tilde{\boldsymbol{y}})=\frac{1}{\Gamma\left(\frac{t-1}{2}\right)t!2^{\frac{t-3}{2}}}\left(\frac{\beta}{\eta(\tilde{\boldsymbol{y}})}\right)^{\frac{t-2}{2}}, \] \[ l(\bar{\kappa})=\frac{I_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}{I_{\frac{t-3}{2}}(\bar{\kappa})I_{\frac{t-2}{2}}(\beta\bar{\kappa})}. \] The detailed derivation of the predictive density of our model can be found in Appendix B. \section{Model Extensions} \subsection{Incomplete rankings} \noindent A judge may rank a set of items in accordance with some criteria. However, in real life, some of the ranking data may be missing either at random or by design. For example, in the former case, some of the items may not be ranked due to the limited knowledge of the judges. In this kind of incomplete ranking data, a missing item could have any rank and this is called subset rankings. In another instance called top-$k$ rankings, the judges may only rank the top 10 best movies among several recommended. The unranked movies would in principle receive ranks larger than $10$. 
In those cases, the notation $\boldsymbol{R}^{I}=(2,-,3,4,1)^{T}$ refers to a subset ranking with item 2 unranked, while $\boldsymbol{R}^{I}=(2,*,*,*,1)^{T}$ represents a top-two ranking with item 5 ranked first and item 1 ranked second. In the usual Bayesian framework, missing data problems can be resolved by appealing to Gibbs sampling and data augmentation methods. Let $\left\{ \boldsymbol{R}_{1}^{I},...,\boldsymbol{R}_{N}^{I}\right\} $ be a set of $N$ observed incomplete rankings, and let $\left\{ \boldsymbol{R}_{1}^{*},...,\boldsymbol{R}_{N}^{*}\right\} $ be their unobserved complete rankings. We wish to obtain the posterior distribution \[ p(\boldsymbol{\theta},\kappa|\boldsymbol{R}_{1}^{I},...,\boldsymbol{R}_{N}^{I})\propto p(\boldsymbol{\theta},\kappa)p(\boldsymbol{R}_{1}^{I},...,\boldsymbol{R}_{N}^{I}|\boldsymbol{\theta},\kappa), \] which can be achieved by Gibbs sampling based on the following two full conditional distributions: \[ p(\boldsymbol{R}_{1}^{*},...,\boldsymbol{R}_{N}^{*}|\boldsymbol{R}_{1}^{I},...,\boldsymbol{R}_{N}^{I},\boldsymbol{\theta},\kappa)=\prod_{i=1}^{N}p(\boldsymbol{R}_{i}^{*}|\boldsymbol{R}_{i}^{I},\boldsymbol{\theta},\kappa), \] \[ p(\boldsymbol{\theta},\kappa|\boldsymbol{R}_{1}^{*},...,\boldsymbol{R}_{N}^{*})\propto p(\boldsymbol{\theta},\kappa)\prod_{i=1}^{N}p(\boldsymbol{R}_{i}^{*}|\boldsymbol{\theta},\kappa). \] Sampling from $p(\boldsymbol{\theta},\kappa|\boldsymbol{R}_{1}^{*},...,\boldsymbol{R}_{N}^{*})$ can be carried out using the Bayesian SIR method or the Bayesian VI method discussed in the previous sections. To sample from $p(\boldsymbol{R}_{i}^{*}|\boldsymbol{R}_{i}^{I},\boldsymbol{\theta},\kappa)$, we need to fill in the missing ranks for each observation, and for that we appeal to the concept of compatibility described in \citet{alvo2014statistical}, which considers, for an incomplete ranking, the class of complete order-preserving rankings.
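For small $t$, the compatibility set $\Omega(\boldsymbol{R}^{I})$ of a subset ranking can be enumerated by brute force, keeping the complete rankings that preserve the relative order of the ranked items. The following is a sketch (our own function; enumeration over $t!$ permutations is of course infeasible for large $t$):

```python
import itertools

def compatible_rankings(incomplete):
    """All complete rankings compatible with a subset ranking, where
    unranked items are marked None. A complete ranking is compatible
    if it preserves the relative order of every pair of ranked items."""
    t = len(incomplete)
    ranked = [i for i, r in enumerate(incomplete) if r is not None]
    return [perm for perm in itertools.permutations(range(1, t + 1))
            if all((incomplete[i] < incomplete[j]) == (perm[i] < perm[j])
                   for i, j in itertools.combinations(ranked, 2))]
```

With $k$ of $t$ items ranked, the enumeration returns $t!/k!$ compatible complete rankings.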
For example, suppose we observe one incomplete subset ranking $\boldsymbol{R}^{I}=(2,-,3,4,1)^{T}$. The set of corresponding compatible rankings is $\left\{ \left(2,5,3,4,1\right)^{T},\left(2,4,3,5,1\right)^{T},\left(2,3,4,5,1\right)^{T},\left(3,2,4,5,1\right)^{T},\left(3,1,4,5,2\right)^{T}\right\} $. Generally speaking, let $\Omega(\boldsymbol{R}_{i}^{I})$ be the set of complete rankings compatible with $\boldsymbol{R}_{i}^{I}$. For an incomplete subset ranking with $k$ out of $t$ items being ranked, we will have a total of $t!/k!$ complete rankings in its compatible set. Note that $p(\boldsymbol{R}_{i}^{*}|\boldsymbol{R}_{i}^{I},\boldsymbol{\theta},\kappa)\propto p(\boldsymbol{R}_{i}^{*}|\boldsymbol{\theta},\kappa),\:\boldsymbol{R}_{i}^{*}\in\Omega(\boldsymbol{R}_{i}^{I}).$ Obviously, direct sampling from this distribution will be tedious for large $t$. Instead, in this paper, we use the Metropolis-Hastings algorithm to draw samples from this distribution with the proposed candidates generated uniformly from $\Omega(\boldsymbol{R}_{i}^{I})$. The idea of introducing compatible rankings allows us to treat different kinds of incomplete rankings easily. It is easy to sample uniformly from the compatible rankings since we just need to fill in the missing ranks under the different situations. In the case of top-$k$ rankings, the compatibility set will be defined to ensure that the unranked items receive rankings larger than $k$. Note that it is also possible to use a Monte Carlo EM approach to handle incomplete rankings under a maximum likelihood setting, where Gibbs sampling is used in the E-step (see \citet{YuLamLo2005}). \subsection{Mixture ranking model\label{subsec:Mixture-ranking-model}} \noindent It is quite natural to extend our simple model to that of a mixture model in order to take into account several clusters that may exist among heterogeneous data \citep{lee2012mixtures,kidwell2008visualizing}.
If a population contains $G$ sub-populations (clusters), the probability of observing a standardized ranking $\boldsymbol{y}$ under our mixture model is given by \[ p(\boldsymbol{y}|\boldsymbol{\kappa},\boldsymbol{\Theta},\boldsymbol{\tau})=\sum_{g=1}^{G}\tau_{g}C_{t}(\kappa_{g})\exp\left\{ \kappa_{g}\boldsymbol{\theta}_{g}^{T}\boldsymbol{y}\right\} , \] where $\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{G})$, with $\tau_{g}$ representing the proportion or mixture weight of the $g$th sub-population, and $\boldsymbol{\Theta}=\left(\boldsymbol{\theta}_{1},\ldots,\boldsymbol{\theta}_{G}\right)$ and $\boldsymbol{\kappa}=\left(\kappa_{1},\ldots,\kappa_{G}\right)$, with $\boldsymbol{\theta}_{g}$ and $\kappa_{g}$ being the directional and concentration parameters of the $g$th sub-population, respectively. To obtain the MLE of this mixture model, we may extend the approach described in Section \ref{subsec:MLE-of-osimple-model} using the traditional EM algorithm. The variational inference approach for this mixture model follows the method of \citet{taghia2014bayesian}. Given a random sample of $N$ complete standardized rankings $\boldsymbol{Y}=\left\{ \boldsymbol{y}_{1},\ldots,\boldsymbol{y}_{N}\right\} $ drawn from $p(\boldsymbol{y}|\boldsymbol{\kappa},\boldsymbol{\Theta},\boldsymbol{\tau})$, we first introduce a set of binary latent variables $\boldsymbol{Z}=\left\{ z_{ig}\right\} $, $i=1,\ldots,N$, $g=1,\ldots,G$, where $z_{ig}=1$ indicates that the observed ranking $\boldsymbol{y}_{i}$ belongs to the $g$th sub-population.
Thus the generative model may be written as \[ p(\boldsymbol{Y},\boldsymbol{Z},\boldsymbol{\tau},\boldsymbol{\Theta},\boldsymbol{\kappa})=p(\boldsymbol{Y}|\boldsymbol{Z},\boldsymbol{\Theta},\boldsymbol{\kappa})p(\boldsymbol{\Theta},\boldsymbol{\kappa})p(\boldsymbol{Z}|\boldsymbol{\tau})p(\boldsymbol{\tau}), \] where \begin{eqnarray*} p(\boldsymbol{Y}|\boldsymbol{Z},\boldsymbol{\Theta},\boldsymbol{\kappa}) & = & \prod_{i=1}^{N}\prod_{g=1}^{G}\left(C_{t}(\kappa_{g})\exp\left\{ \kappa_{g}\boldsymbol{\theta}_{g}^{T}\boldsymbol{y}_{i}\right\} \right)^{z_{ig}},\\ p(\boldsymbol{Z}|\boldsymbol{\tau}) & = & \prod_{i=1}^{N}\prod_{g=1}^{G}\tau_{g}^{z_{ig}}. \end{eqnarray*} A Dirichlet distribution with prior parameters $d_{0,g}$ is adopted as the prior distribution of $\boldsymbol{\tau}$: \[ p(\boldsymbol{\tau})=\frac{\Gamma(\sum_{g=1}^{G}d_{0,g})}{\prod_{g=1}^{G}\Gamma\left(d_{0,g}\right)}\prod_{g=1}^{G}\tau_{g}^{d_{0,g}-1}. \] The prior distribution for $\left(\boldsymbol{\Theta},\boldsymbol{\kappa}\right)$ is the conditional von Mises-Fisher distribution for $\boldsymbol{\Theta}|\boldsymbol{\kappa}$ together with the marginal Gamma distribution for $\boldsymbol{\kappa}$: \[ p(\boldsymbol{\Theta},\boldsymbol{\kappa})=\prod_{g=1}^{G}vMF(\boldsymbol{\theta}_{g}|\boldsymbol{m}_{0,g},\beta_{0,g}\kappa_{g})Gamma(\kappa_{g}|a_{0,g},b_{0,g}), \] where $\boldsymbol{m}_{0,g},\beta_{0,g},a_{0,g},b_{0,g}$ are the prior parameters of the $g$th sub-population.
Using a technique similar to that in Section \ref{subsec:Optimization-of-the-VI}, we optimize the evidence lower bound given by \begin{equation} \mathcal{L_{M}}(q)=E_{q(\boldsymbol{Z},\boldsymbol{\Theta},\boldsymbol{\kappa},\boldsymbol{\tau})}\left[\ln\frac{p(\boldsymbol{Y}|\boldsymbol{Z},\boldsymbol{\Theta},\boldsymbol{\kappa})p(\boldsymbol{\Theta},\boldsymbol{\kappa})p(\boldsymbol{Z}|\boldsymbol{\tau})p(\boldsymbol{\tau})}{q(\boldsymbol{Z})q(\boldsymbol{\Theta}|\boldsymbol{\kappa})q(\boldsymbol{\kappa})q(\boldsymbol{\tau})}\right],\label{eq:evidence_lower_bound-mixture-1} \end{equation} and derive the optimal posterior distribution of each parameter. It is not difficult to see that the optimal posterior distribution $q(\boldsymbol{\tau})$ is recognized to be a Dirichlet distribution with parameters \begin{equation} d_{g}=d_{0,g}+\sum_{i=1}^{N}p_{ig},\label{eq:update_d_g} \end{equation} where \begin{equation} p_{ig}=\frac{\exp(\rho_{ig})}{\sum_{j=1}^{G}\exp(\rho_{ij})},\label{eq:update_p_ig} \end{equation} \begin{align} \rho_{ig}=\frac{t-3}{2}E_{q(\kappa)}(\ln\kappa_{g})+E_{q(\boldsymbol{\tau})}\left(\ln\tau_{g}\right)+E_{q(\boldsymbol{\Theta},\boldsymbol{\kappa})}\left(\kappa_{g}\boldsymbol{\theta}_{g}^{T}\boldsymbol{y}_{i}\right)-\ln\left[2^{\frac{t-3}{2}}t!\Gamma\left(\frac{t-1}{2}\right)\right]\label{eq:update_rho_ig}\\ -\ln I_{\frac{t-3}{2}}\left(\bar{\kappa}_{g}\right)-\left(\frac{\partial}{\partial\kappa_{g}}\ln I_{\frac{t-3}{2}}\left(\bar{\kappa}_{g}\right)\right)\left[E_{q(\boldsymbol{\kappa})}\kappa_{g}-\bar{\kappa}_{g}\right],\nonumber \end{align} \begin{equation} \bar{\kappa}_{g}=\begin{cases} \frac{a_{g}-1}{b_{g}} & \mbox{if }a_{g}>1\\ \frac{a_{g}}{b_{g}} & \mbox{otherwise} \end{cases},\label{eq:kappa bar_g} \end{equation} and the optimal posterior distribution $q(\boldsymbol{\Theta}|\boldsymbol{\kappa})$ can be written as a von Mises-Fisher distribution: \[ 
q^{*}(\boldsymbol{\Theta}|\boldsymbol{\kappa})=\prod_{g=1}^{G}vMF(\boldsymbol{\theta}_{g}|\boldsymbol{m}_{g},\kappa_{g}\beta_{g}), \] where \begin{equation} \beta_{g}=\left\Vert \beta_{0,g}\boldsymbol{m}_{0,g}+\sum_{i=1}^{N}p_{ig}\boldsymbol{y}_{i}\right\Vert ,\label{eq:Update_beta_g} \end{equation} \begin{equation} \boldsymbol{m}_{g}=\left(\beta_{0,g}\boldsymbol{m}_{0,g}+\sum_{i=1}^{N}p_{ig}\boldsymbol{y}_{i}\right)\beta_{g}^{-1}.\label{eq:Update_m_g} \end{equation} Also, the optimal distribution $q^{*}(\boldsymbol{\kappa})=\prod_{g=1}^{G}q^{*}(\kappa_{g})$ can be recognized as a product of independent Gamma distributions: \[ q^{*}(\kappa_{g})=Gamma(\kappa_{g}|a_{g},b_{g}), \] where \begin{equation} a_{g}=a_{0,g}+\left(\frac{t-3}{2}\right)\sum_{i=1}^{N}p_{ig}+\beta_{g}\bar{\kappa}_{g}\left[\frac{\partial}{\partial\beta_{g}\kappa_{g}}\ln I_{\frac{t-2}{2}}(\beta_{g}\bar{\kappa}_{g})\right],\label{eq:update_postrerior_a-Mixture} \end{equation} \begin{equation} b_{g}=b_{0,g}+\left(\sum_{i=1}^{N}p_{ig}\right)\frac{\partial}{\partial\kappa_{g}}\ln I_{\frac{t-3}{2}}(\bar{\kappa}_{g})+\beta_{0,g}\left[\frac{\partial}{\partial\beta_{0,g}\kappa_{g}}\ln I_{\frac{t-2}{2}}(\beta_{0,g}\bar{\kappa}_{g})\right],\label{eq:Update_posterior_b-mixture} \end{equation} and finally the optimal variational posterior distribution for $\boldsymbol{Z}$ is recognized as a multinomial distribution: \[ q^{*}(\boldsymbol{Z})=\prod_{i=1}^{N}\prod_{g=1}^{G}p_{ig}^{z_{ig}}. \] The detailed derivation of the optimization of the mixture model can be found in Appendix C. A summary of the estimation algorithm for this mixture model is shown in Algorithm \ref{alg:Bayesian-Estimation-our-mixture-model}.
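To make the coordinate updates concrete, the following is a minimal NumPy sketch of the closed-form updates (\ref{eq:update_d_g}), (\ref{eq:Update_beta_g}) and (\ref{eq:Update_m_g}), assuming the responsibilities $p_{ig}$ have already been computed from (\ref{eq:update_p_ig}); the array names are illustrative, not from the paper.

```python
import numpy as np

def update_mixture_posteriors(Y, P, m0, beta0, d0):
    """One pass of the closed-form variational updates.

    Y     : (N, t-1) array of standardized rankings y_i
    P     : (N, G) responsibilities p_ig (rows sum to 1)
    m0    : (G, t-1) prior mean directions m_{0,g} (unit vectors)
    beta0 : (G,) prior concentration multipliers beta_{0,g}
    d0    : (G,) Dirichlet prior parameters d_{0,g}
    """
    Ng = P.sum(axis=0)                   # effective cluster sizes sum_i p_ig
    d = d0 + Ng                          # Dirichlet update
    S = beta0[:, None] * m0 + P.T @ Y    # beta_{0,g} m_{0,g} + sum_i p_ig y_i
    beta = np.linalg.norm(S, axis=1)     # beta_g = ||S_g||
    m = S / beta[:, None]                # m_g = S_g / beta_g (unit vector)
    return d, beta, m
```

The Gamma updates for $(a_{g},b_{g})$ additionally require the log-derivative of the modified Bessel function $I_{\nu}$ (available as `scipy.special.iv`) and are omitted here for brevity.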
\begin{algorithm}[h] \textbf{\small{}Input:}{\small{} Scaled $\boldsymbol{Y}=\left\{ \boldsymbol{y}_{1},...,\boldsymbol{y}_{N}\right\} $}{\small \par} \textbf{\small{}Step 1: Initialization}{\small \par} \begin{enumerate} \item {\small{}Set the prior parameters $d_{0,g}$, $\beta_{0,g}$, $\boldsymbol{m}_{0,g}$, $a_{0,g}$, $b_{0,g}$ and the number of clusters $G$.}{\small \par} \item {\small{}Initialize $p_{ig}=\frac{1}{G}$ and the initial value of $\bar{\kappa}_{g}=\frac{a_{0,g}}{b_{0,g}}$.}{\small \par} \end{enumerate} \textbf{\small{}Step 2: Optimization of the posterior distribution}{\small \par} \textbf{\small{}repeat}{\small \par} \begin{enumerate} \item {\small{}Update the posterior parameters $d_{g}$, $\beta_{g}$, $\boldsymbol{m}_{g}$, $a_{g}$, $b_{g}$ by (\ref{eq:update_d_g}), (\ref{eq:Update_beta_g}), (\ref{eq:Update_m_g}), (\ref{eq:update_postrerior_a-Mixture}) and (\ref{eq:Update_posterior_b-mixture}).}{\small \par} \item {\small{}Update $p_{ig}$ by (\ref{eq:update_p_ig}) and (\ref{eq:update_rho_ig}).}{\small \par} \item {\small{}Update $\bar{\kappa}_{g}$ by (\ref{eq:kappa bar_g}).}{\small \par} \end{enumerate} \textbf{\small{}until convergence}{\small \par} \caption{\label{alg:Bayesian-Estimation-our-mixture-model}Bayesian estimation of our mixture ranking model using variational inference.} \end{algorithm} \section{Simulation Studies} \subsection{Comparison of the posterior distributions obtained by the Bayesian SIR method and the variational inference approach} \noindent Since we use a factorized approximation for the posterior distribution in the variational inference approach, it is of interest to compare the true posterior distribution with the approximation obtained using the variational inference approach.
We simulated two data sets with $\kappa=1$, $\boldsymbol{\theta}=\left(-0.71,0,0.71\right)^{T},$ $t=3$ and data sizes of $N=20$ and $100.$ We generated samples from the posterior distribution by the SIR method in Section \ref{subsec:Bayesian-method-SIR}, using a gamma density with mean $\hat{\kappa}_{MLE}$ and variance equal to $0.2$ as the proposal density. We then applied the variational approach in Algorithm \ref{alg:Bayesian-Estimation-our_model} and generated samples from the corresponding posterior distribution. Figure \ref{fig:Comparison-posterior_distribution_SIR_VI} exhibits the histograms and box-plots for the posterior distributions of $\kappa$ and $\boldsymbol{\theta}$. From Figure \ref{fig:Comparison-posterior_distribution_SIR_VI}, we see that the posterior distribution obtained by Bayesian-VI is very close to that obtained by the Bayesian-SIR method. When the sample size is small ($N=20$), there are more outliers for the Bayesian-SIR method, while the posterior of $\kappa$ for the Bayesian-VI method seems to be more concentrated. When the sample size is large, the posterior estimates of $\boldsymbol{\theta}$ and $\kappa$ become more accurate and the Bayesian-VI posterior is closer to that obtained by the Bayesian-SIR method. We also calculated the symmetric variant of the Kullback-Leibler divergence (KLD) between the posterior distributions of $\kappa$ obtained by the two methods: the symmetric KLD is 0.45 for the $N=20$ case and 0.44 for the $N=100$ case. More simulations for different settings of the parameters can be found in Appendix D. \begin{figure} \caption{Comparison of the posterior distributions of $\kappa$ and $\boldsymbol{\theta}$ obtained by the Bayesian-SIR and Bayesian-VI methods.}\label{fig:Comparison-posterior_distribution_SIR_VI} \end{figure} \subsection{Experiments with different sample sizes\label{subsec:Experiments-different_N}} \noindent We also evaluated the performance of the three estimating algorithms for our model as the sample size $N$ varies from 25 to 500.
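As an illustration, the symmetric KLD between two posterior approximations can be estimated by discretizing both sets of samples on a common grid; this is a sketch under that assumption (the paper does not specify its discretization), and the variable names are ours.

```python
import numpy as np

def sym_kld(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence KL(p||q) + KL(q||p)
    between two discrete densities given on the same grid."""
    p = np.asarray(p, float) + eps   # small eps guards against log(0)
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# e.g. compare histogram estimates of the posterior of kappa from the
# Bayesian-SIR and Bayesian-VI samples on a common bin grid:
# bins = np.linspace(0, 5, 51)
# p, _ = np.histogram(kappa_sir, bins=bins)
# q, _ = np.histogram(kappa_vi,  bins=bins)
# sym_kld(p, q)
```

The symmetrized form is used because the plain KLD is not symmetric in its arguments.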
We simulated three different data sets with the number of items being ranked $t=10,20,50.$ The true $\boldsymbol{\theta}$ is a random unit vector. Since our model is not a standard distribution, we used the random-walk Metropolis algorithm to draw samples from it \citep{liu2008monte}. We compared the performance of the MLE method, the Bayesian method with SIR for posterior sampling (Bayesian-SIR) and the Bayesian VI method (Bayesian-VI). We chose non-informative priors for both Bayesian-SIR and Bayesian-VI. Specifically, the prior parameter $\boldsymbol{m}_{0}$ is chosen uniformly, whereas $\beta_{0}$, $a_{0}$ and $b_{0}$ are chosen to be small numbers close to zero. For the MLE method, we performed Newton-Raphson iterations to obtain a more accurate $\kappa$. For the posterior distribution of $\kappa$ in the Bayesian-SIR method, we used a Gamma density with mean $\hat{\kappa}_{MLE}$ and variance 1 as the proposal density to sample $1,000$ observations of $\kappa$ from $10,000$ candidates. We calculated the Kullback-Leibler divergence (KLD) of the true model from the estimated model, in which the model parameters are the point estimates given by either the MLE or the posterior mean of the Bayesian methods. A smaller value of KLD implies higher accuracy of the estimation method. Each experiment was repeated 10 times to smooth out the effect of random initialization, and the average results are shown in Figure \ref{fig:fig:KLD-Simple_model_diff_N}. \begin{figure} \caption{Average KLD of the true model from the estimated model for different sample sizes $N$.}\label{fig:fig:KLD-Simple_model_diff_N} \end{figure} It can be seen from Figure \ref{fig:fig:KLD-Simple_model_diff_N} that large sample sizes lead to lower KLD values for the three algorithms, as expected. From the comparison, when $t$ is small $(t=10)$, the Bayesian method with the variational approach (Bayesian-VI) performs similarly to the MLE method and works better than Bayesian-SIR.
The failure of Bayesian-SIR may be the result of the variance of the proposal gamma density being too large compared to the variance of the true posterior distribution of $\kappa$; this improper choice of variance slows down the convergence of the sampler. When $t$ is large ($t=20$ and $50$), Bayesian-VI and Bayesian-SIR are very close, while the MLE method does not work well when the sample size is small. When $N$ is large, the three approaches tend to converge to fairly similar results. Even for $t=50$ and $N=500$, the MLE method still performs slightly worse than the Bayesian counterparts. As a whole, the Bayesian-VI method generally performs the best across the different settings of $t$ and $N$. We also computed the average computation time for each set of experiments. From Figure \ref{fig:Average-Computation-times_diff_N}, we see that the computation time for Bayesian-SIR is the slowest, as expected, since it is an MCMC sampling method. The speeds of Bayesian-VI and MLE are quite similar, and they are about 50 to 100 times faster than Bayesian-SIR. All the simulations were conducted on a PC with a 4.0 GHz quad-core CPU. \begin{figure} \caption{Average computation times for different sample sizes $N$.}\label{fig:Average-Computation-times_diff_N} \end{figure} \subsection{Experiments with different data dimensions \label{subsec:Effect_t_simple_model}} \noindent In the following experiments we compared the performance of the different approaches as the number of items ranked $t$ varies from 3 to 100. We set $\kappa=1$ and chose the true $\boldsymbol{\theta}$ to be a random unit vector. We chose $N=100,200,500$. The detailed simulation settings are the same as in Section \ref{subsec:Experiments-different_N}. For evaluation, we again calculated the Kullback-Leibler divergence (KLD) of the true model from the estimated model, shown in Figure \ref{fig:KLD-Simple_model_diff_t}. Each experiment was repeated 10 times and the average results are shown to smooth out the effect of random initialization.
It is seen that large values of $t$ lead to the failure of the MLE method since there are then more parameters than data points. The Bayesian-SIR method also encounters problems when $t$ is large compared to $N$; this may be because the selected proposal density (variance $=1$) in the SIR method is inappropriate when $t$ is large. From the comparison, the Bayesian-VI method has lower KLD values for large $t$. When $t$ is small, the MLE and Bayesian-VI methods give similar results, while the Bayesian-SIR method has higher KLD. We also computed the average computation time for each set of experiments. From Figure \ref{fig:Average-Computation-times_diff_t}, the speeds of convergence for Bayesian-VI and MLE are quite similar, and they are about 50 to 100 times faster than Bayesian-SIR. All the simulations were conducted on a PC with a 4.0 GHz quad-core CPU. \begin{figure} \caption{Average KLD of the true model from the estimated model for different numbers of items $t$.}\label{fig:KLD-Simple_model_diff_t} \end{figure} \begin{figure} \caption{Average computation times for different numbers of items $t$.}\label{fig:Average-Computation-times_diff_t} \end{figure} \subsection{Simulation for the estimation of the predictive density} \noindent In this experiment, we compared the accuracy of the approximated predictive density between the Bayesian-VI and MLE methods. We simulated data from our model with $\kappa=1$ and $t=5$. The true $\boldsymbol{\theta}$ is a random unit vector. For each set of simulations, we considered sample sizes ranging from 10 to 100. We calculated the Kullback-Leibler divergence (KLD) from the true posterior predictive distribution to the approximate predictive density. The true posterior predictive distribution is calculated numerically by Monte Carlo integration of (\ref{eq:exact_predictive_density-1}) from the true posterior distribution obtained by the SIR method with size $10^{5}$. We also calculated the approximate predictive density using the MLE method.
Each experiment was repeated 10 times and the average results are shown in Figure \ref{fig:The-KLD-predictive_density}. From Figure \ref{fig:The-KLD-predictive_density}, we see that the KLD for both methods decreases with increasing sample size, as expected. However, for small training data, Bayesian-VI performs much better than the MLE method. \begin{figure} \caption{The KLD of the approximate predictive density from the true posterior predictive distribution for different sample sizes.}\label{fig:The-KLD-predictive_density} \end{figure} \subsection{Simulation for incomplete rankings} \noindent The following experiments aim to compare performance with respect to the number of missing items $k$ when incomplete rankings are observed. We first simulated data from our model with $\kappa=1$ and $\left\Vert \boldsymbol{\theta}\right\Vert =1$. Then we randomly dropped the rankings for $k$ items and re-ranked the remaining items to obtain the incomplete rankings. We chose three different settings for the simulations: $\left(t=10,N=500\right)$, $\left(t=10,N=1000\right)$ and $\left(t=20,N=500\right)$. The number of missing items $k$ varies up to $\left(t-2\right)$. We again calculated the Kullback-Leibler divergence (KLD) of the true model from the estimated model to assess the impact. The estimated model is obtained by Gibbs sampling, with either the Bayesian-SIR method or the Bayesian-VI method used for the second full conditional distribution. For each iteration of the Bayesian-SIR method, we simulated $10$ samples from $100$ candidates and selected one for the next step. The results of this comparison are shown in Figure \ref{fig:KLD-Incomplete_diff_N_miss}. Each experiment was repeated 10 times and the average results are shown to smooth out the effect of random initialization. From Figure \ref{fig:KLD-Incomplete_diff_N_miss}, we see that the KLD increases with an increasing number of missing items, as expected. From the comparison, the Bayesian-VI method has lower KLD values for a small number of missing items $k$ than the Bayesian-SIR method.
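The data-generation step above (randomly drop $k$ items and re-rank the rest) can be sketched as follows; the function and variable names are ours, not the paper's.

```python
import random

def make_incomplete(ranking, k, rng=random):
    """Randomly drop k items from a complete ranking and re-rank the
    remaining items.

    `ranking[i]` is the rank of item i (1 = most preferred).  Dropped
    items get None; the kept items preserve their relative order and
    are re-ranked to 1..(t-k).
    """
    t = len(ranking)
    dropped = set(rng.sample(range(t), k))
    # kept items, sorted by their original rank
    kept = sorted((r, i) for i, r in enumerate(ranking) if i not in dropped)
    out = [None] * t
    for new_rank, (_, i) in enumerate(kept, start=1):
        out[i] = new_rank
    return out

# e.g. dropping 2 random items from a complete ranking of t = 5 items:
# make_incomplete([2, 5, 3, 4, 1], k=2)
```

Dropping item 2 from the complete ranking $(2,5,3,4,1)^{T}$ recovers the subset ranking $(2,-,3,4,1)^{T}$ used as the example earlier.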
This is consistent with the previous simulation results. When $N$ is large ($N=1,000$), Bayesian-VI performs better than Bayesian-SIR. When the number of missing items is large, Bayesian-SIR seems to be a better choice than Bayesian-VI. However, when comparing computation time, Bayesian-VI is much faster than Bayesian-SIR. \begin{figure} \caption{Average KLD of the true model from the estimated model for different numbers of missing items $k$.}\label{fig:KLD-Incomplete_diff_N_miss} \end{figure} \section{Applications} \subsection{Sushi data sets} \noindent We investigate the two data sets of \citet{kamishima2003nantonac} to find differences in food preference patterns between eastern and western Japan. Historically, western Japan has been mainly affected by the culture of the Mikado emperor and nobles, while eastern Japan has been the home of the Shogun and Samurai warriors. Therefore, the preference patterns in food are different between these two regions \citep{kamishima2003nantonac}. The first data set consists of complete rankings of $t=10$ different kinds of sushi given by 5000 respondents according to their preference. The region of each respondent is also recorded ($N=3285$ for eastern Japan and $N=1715$ for western Japan). We applied the MLE, Bayesian-SIR and Bayesian-VI methods to both the eastern and western Japan data. The settings for the priors are similar to those used in the simulations in Section \ref{subsec:Experiments-different_N}. Since the sample size $N$ is quite large compared to $t$, the estimated models for all three methods are almost the same. Figure \ref{fig:SUSHI_small_theta} compares the posterior means of $\boldsymbol{\theta}$ between eastern Japan (blue bars) and western Japan (red bars) obtained by the Bayesian-VI method. Note that the more negative the value of $\theta_{i}$, the more preferable sushi $i$ is. From Figure \ref{fig:SUSHI_small_theta}, we see that the main difference in sushi preference between eastern and western Japan occurs for Salmon roe, Squid, Sea eel, Shrimp and Tuna.
People in eastern Japan have a greater preference for Salmon roe and Tuna than the western Japanese. On the other hand, the latter have a greater preference for Squid, Shrimp and Sea eel. Table \ref{tab:Comparison-posterior-Sushi-small} shows the posterior parameters obtained by Bayesian-VI. It can be seen that the eastern Japanese are slightly more cohesive than the western Japanese since the posterior mean of $\kappa$ is larger. \begin{figure} \caption{Posterior means of $\boldsymbol{\theta}$ for the sushi complete ranking data ($t=10$) in eastern Japan and western Japan obtained by Bayesian-VI.}\label{fig:SUSHI_small_theta} \end{figure} \begin{table} \begin{centering} \begin{tabular}{c|c|c} \hline {\footnotesize{}Posterior Parameter} & {\footnotesize{}Eastern Japan} & {\footnotesize{}Western Japan}\tabularnewline \hline {\footnotesize{}$\beta$} & {\footnotesize{}1458.85} & {\footnotesize{}741.61}\tabularnewline {\footnotesize{}$a$} & {\footnotesize{}18509.84} & {\footnotesize{}9462.70}\tabularnewline {\footnotesize{}$b$} & {\footnotesize{}3801.57} & {\footnotesize{}2087.37}\tabularnewline \hline {\footnotesize{}Posterior Mean of $\kappa$} & {\footnotesize{}4.87} & {\footnotesize{}4.53}\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tab:Comparison-posterior-Sushi-small}Posterior parameters for the sushi complete ranking data ($t=10$) in Eastern Japan and Western Japan obtained by Bayesian-VI.} \end{table} The second data set contains incomplete rankings given by 5000 respondents who were asked to pick and rank some of the $t=100$ different kinds of sushi according to their preference; most of them selected and ranked only their top 10 out of the 100 kinds. Figure \ref{fig:Comparison-boxplot-sushi-big} compares the box-plots of the posterior means of $\boldsymbol{\theta}$ between eastern Japan (blue boxes) and western Japan (red boxes) obtained by Bayesian-VI. The posterior distribution of $\boldsymbol{\theta}$ is based on the Gibbs samples obtained after dropping the first 200 samples as the burn-in period.
Since there are so many kinds of sushi, the graph does not show the name of each kind. However, we can see that about one third of the 100 kinds of sushi have fairly large posterior means of $\theta_{i}$ and their values are pretty close to each other. This is mainly because these sushi are less commonly preferred by the Japanese, and the respondents hardly chose them in their lists. As these sushi are usually not ranked in the top 10, it is natural that the posterior distributions of their $\theta_{i}$'s tend to have a larger variance. From Figure \ref{fig:Comparison-boxplot-sushi-big}, we see that there exists a greater difference between eastern and western Japan for the small $\theta_{i}$'s. Figure \ref{fig:Comparison-boxplot-sushi-big-part} compares the box-plots of the 10 smallest posterior means of $\boldsymbol{\theta}$ between eastern Japan (blue boxes) and western Japan (red boxes). The main difference in sushi preference between eastern and western Japan appears to be in Sea eel, Salmon roe, Tuna, Sea urchin and Sea bream. The eastern Japanese prefer Salmon roe, Tuna and Sea urchin sushi more than the western Japanese, while the latter like Sea eel and Sea bream more than the former. Generally speaking, Tuna and Sea urchin are more oily foods, while Salmon roe and Tuna are more seasonal foods. So from the analysis of both data sets, we can conclude that the eastern Japanese usually prefer more oily and seasonal food than the western Japanese \citep{kamishima2003nantonac}.
\begin{figure} \caption{Box-plots of the posterior means of $\boldsymbol{\theta}$ for the sushi incomplete ranking data ($t=100$) in eastern Japan and western Japan obtained by Bayesian-VI.}\label{fig:Comparison-boxplot-sushi-big} \end{figure} \begin{figure} \caption{Box-plots of the 10 smallest posterior means of $\boldsymbol{\theta}$ for the sushi incomplete ranking data in eastern Japan and western Japan.}\label{fig:Comparison-boxplot-sushi-big-part} \end{figure} \subsection{APA data} \noindent We revisit the well-known APA data set of \citet{diaconis1988group}, which contains $5738$ full rankings of 5 candidates for the presidential election of the American Psychological Association (APA) in 1980. For this election, members of the APA had to rank five candidates \{A,B,C,D,E\} in order of their preference. Candidates A and C are research psychologists, candidates D and E are clinical psychologists and candidate B is a community psychologist. This data set has been studied by \citet{diaconis1988group} and \citet{kidwell2008visualizing}, who found that the voting population was divided into 3 clusters. We fit the data using the mixture model described in Section \ref{subsec:Mixture-ranking-model}. We chose a non-informative prior for the Bayesian-VI method for different numbers of clusters $G=1$ to 5. Specifically, the prior parameter $\boldsymbol{m}_{0g}$ is a randomly chosen unit vector, whereas $\beta_{0g}$, $d_{0g}$, $a_{0g}$ and $b_{0g}$ are chosen as random numbers close to zero. The $p_{ig}$ are initialized as $\frac{1}{G}$. Table \ref{tab:DIC-(Deviance-information} shows the Deviance information criterion (DIC) for $G=1$ to $5$. It can be seen that the mixture model with $G=3$ clusters attains the smallest DIC.
\begin{table}[h] \begin{centering} \begin{tabular}{c|ccccc} \hline $G$ & 1 & 2 & 3 & 4 & 5\tabularnewline \hline DIC & 54827 & 53497 & 53281 & 53367 & 53375\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tab:DIC-(Deviance-information}Deviance information criterion (DIC) for the APA ranking data.} \end{table} Table \ref{tab:The-posterior-parameter-APA} indicates the posterior parameters for the three-cluster solution and Figure \ref{fig:Comparison-APA-Data} exhibits the posterior means of $\boldsymbol{\theta}$ for the three clusters obtained by Bayesian-VI. It is very interesting to see that Cluster 1 votes the clinical psychologists D and E as their first and second choices and especially dislikes the research psychologist C. Cluster 2 prefers the research psychologists A and C but dislikes the others. Cluster 3 prefers research psychologist C. From Table \ref{tab:The-posterior-parameter-APA}, Cluster 1 represents the majority (posterior mean of $\tau_{1}=56.31\%$). Cluster 2 is small but more cohesive since the posterior mean of $\kappa_{2}$ is larger. Cluster 3 has a posterior mean of $\tau_{3}=20.73\%$, and the posterior mean of $\kappa_{3}$ is $1.52$. The preferences for the five candidates of the voters in the three clusters are heterogeneous, and the mixture model enables us to draw further inference from the data.
\begin{table} \begin{centering} \begin{tabular}{c|ccc} \hline {\footnotesize{}Posterior Parameter} & {\footnotesize{}Cluster 1} & {\footnotesize{}Cluster 2} & {\footnotesize{}Cluster 3}\tabularnewline \hline {\footnotesize{}$\boldsymbol{m}$} & {\footnotesize{}0.06} & {\footnotesize{}-0.44} & {\footnotesize{}0.26}\tabularnewline & {\footnotesize{}0.02} & {\footnotesize{}0.19} & {\footnotesize{}0.14}\tabularnewline & {\footnotesize{}0.78} & {\footnotesize{}-0.64} & {\footnotesize{}-0.75}\tabularnewline & {\footnotesize{}-0.54} & {\footnotesize{}0.49} & {\footnotesize{}0.55}\tabularnewline & {\footnotesize{}-0.33} & {\footnotesize{}0.39} & {\footnotesize{}-0.19}\tabularnewline {\footnotesize{}$\beta$} & {\footnotesize{}1067.10} & {\footnotesize{}1062.34} & {\footnotesize{}414.74}\tabularnewline {\footnotesize{}$d$} & {\footnotesize{}3231.09} & {\footnotesize{}1317.21} & {\footnotesize{}1189.72}\tabularnewline {\footnotesize{}$a$} & {\footnotesize{}4756.33} & {\footnotesize{}9224.97} & {\footnotesize{}1821.73}\tabularnewline {\footnotesize{}$b$} & {\footnotesize{}3330.45} & {\footnotesize{}1239.41} & {\footnotesize{}1197.80}\tabularnewline \hline {\footnotesize{}Posterior mean of $\boldsymbol{\kappa}$} & {\footnotesize{}1.43} & {\footnotesize{}7.44} & {\footnotesize{}1.52}\tabularnewline {\footnotesize{}Posterior mean of $\boldsymbol{\tau}$} & {\footnotesize{}56.31\%} & {\footnotesize{}22.96\%} & {\footnotesize{}20.73\%}\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tab:The-posterior-parameter-APA}Posterior parameters for the APA ranking data ($t=5$) for three clusters obtained by Bayesian-VI.} \end{table} \begin{figure} \caption{Posterior means of $\boldsymbol{\theta}$ for the three clusters of the APA data obtained by Bayesian-VI.}\label{fig:Comparison-APA-Data} \end{figure} \subsection{Breast cancer gene expression data} \noindent We apply our mixture model to a ranked mRNA expression data set to classify patients into sub-types of breast cancer.
Similar topics have also been studied by \citet{naume2007presence}. All the raw data can be obtained from the Stanford Microarray Database (SMD) (http://genome-www5.stanford.edu/). We downloaded the mRNA expression data of 121 breast cancer patients who have two disease sub-types based on their ER/PgR status: Estrogen Receptor negative (ER-, 41 patients) or positive (ER+, 80 patients). Our aim is to classify the breast cancer patients into two sub-groups based on their ranked gene expression data for 96 genes ($t=96$). These 96 genes are selected from the KEGG Estrogen signaling pathway (Kyoto Encyclopedia of Genes and Genomes: hsa04915) (http://www.genome.jp/kegg/). We use the rankings of the 96 normalized log2-transformed gene expression ratios for the 121 patients as our training data. In this experiment, we first use the patients' gene ranking data (without knowing the true disease sub-type of each patient) to fit our mixture model ($G=2$). The prior parameter $\boldsymbol{m}_{0g}$ is a randomly chosen unit vector, while the other prior parameters $\beta_{0g}$, $d_{0g}$, $a_{0g}$ and $b_{0g}$ are chosen as small random numbers close to zero. The $p_{ig}$ are initialized as $\frac{1}{G}$. Table \ref{tab:The-posterior-parameter-Gene-data} shows the posterior parameters for the gene ranking data for the two clusters obtained by Bayesian-VI. As the ER+ patients are more frequent in this data set, we label Cluster 1 as the ER+ group since the posterior mean of $\tau_{1}$ is higher (66.79\%); Cluster 2 is then labeled as the ER- group. Using our clustering solution and the true disease sub-types of the patients, Figure \ref{fig:ROC-curve-for-Gene-data} shows the ROC (receiver operating characteristic) curves based on the fitted two-cluster mixture model (left panel) and the classification implied by K-means clustering with squared Euclidean distance (right panel) \citep{hartigan1979algorithm,arthur2007k}.
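The paper does not spell out how the AUC was computed; as one illustration, the AUC can be obtained directly from the posterior membership probabilities via the rank-sum (Mann-Whitney) identity, sketched below with variable names of our choosing.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()    # positive outscores negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# e.g. score each patient by the posterior probability of belonging to
# the cluster labeled ER+, with labels[i] = True for a true ER+ patient:
# auc(P[:, 0], er_positive)
```

The pairwise-comparison form is quadratic in the number of patients, which is negligible here ($N=121$); a rank-based formulation would scale better for large cohorts.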
From Figure \ref{fig:ROC-curve-for-Gene-data}, it is seen that our mixture model has greater discrimination power: the AUC (area under the curve) for our method is 0.9183, which is higher than that for the K-means method (0.8235). \begin{table} \begin{centering} \begin{tabular}{c|cc} \hline {\footnotesize{}Posterior Parameter} & {\footnotesize{}Cluster 1 (ER+)} & {\footnotesize{}Cluster 2 (ER-)}\tabularnewline \hline {\footnotesize{}$\beta$} & {\footnotesize{}63.68} & {\footnotesize{}29.75}\tabularnewline {\footnotesize{}$d$} & {\footnotesize{}80.84} & {\footnotesize{}40.18}\tabularnewline {\footnotesize{}$a$} & {\footnotesize{}16181.18} & {\footnotesize{}6462.97}\tabularnewline {\footnotesize{}$b$} & {\footnotesize{}83.26} & {\footnotesize{}42.28}\tabularnewline \hline {\footnotesize{}Posterior mean of $\kappa$} & {\footnotesize{}194.34} & {\footnotesize{}152.85}\tabularnewline {\footnotesize{}Posterior mean of $\tau$} & {\footnotesize{}0.6679} & {\footnotesize{}0.3320}\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tab:The-posterior-parameter-Gene-data}Posterior parameters for the gene ranking data ($t=96$) for two clusters obtained by Bayesian-VI.} \end{table} \begin{figure} \caption{ROC curves for the breast cancer gene expression data based on the fitted two-cluster mixture model (left) and K-means clustering (right).}\label{fig:ROC-curve-for-Gene-data} \end{figure} \section{Conclusions and Discussion} \noindent We proposed a new class of general exponential ranking models called angle-based ranking models. The model assumes a consensus score vector $\boldsymbol{\theta}$ which reflects the rank-order preference of the items. The probability of observing a ranking is proportional to the cosine of the angle from the consensus score vector. We then proposed a very accurate approximation to the normalizing constant using the von Mises-Fisher distribution, which facilitates the computation involved in fitting the model.
For many other classes of ranking models, finding parameter estimates is an NP-hard problem when $t$ is large. Our model avoids this problem, and its estimates can be calculated easily. We made use of Bayesian variational inference to approximate the posterior density as well as the predictive density. This approach exhibited a great computational advantage compared to traditional MCMC methods. One can also consider using regularization methods such as LASSO, Ridge and Elastic Net to overcome the potential over-fitting problem, especially for large $t$. In fact, regularization methods can be implemented via a Bayesian approach with suitably chosen priors. For instance, LASSO in a regression problem can be viewed as maximum a posteriori estimation in a Bayesian framework using a Laplace prior centered at zero. Studying regularization for angle-based models is an interesting problem which we leave for future work. Unlike distance-based models, which only provide an equally-spaced modal ranking, the proposed consensus score vector $\boldsymbol{\theta}$ exhibits detailed information on item preferences. We applied the method to the sushi data, and concluded that certain types of sushi are seldom eaten by the Japanese. Model extensions to incomplete rankings and mixture models were also developed. Incomplete rankings often arise when the number of items ranked is large. The use of compatible rankings makes it possible to handle incomplete rankings such as top-$k$ rankings and subset rankings. The mixture models can be used as a model-based clustering tool for ranking data. Our consensus score vector $\boldsymbol{\theta}$, defined on a unit sphere, can easily be reparameterized to incorporate additional arguments or covariates in the model.
For example, the judge-specific covariates could be age, gender and income; the item-specific covariates could be prices, weights and brands; and the judge-item-specific covariates could be personal experience of using each phone or brand. Adding those covariates to the model will greatly improve its predictive power. We can also develop Bayesian inference methods to facilitate the computation. This interesting problem will be deferred to later papers. \section*{Acknowledgments} \noindent The authors are grateful to the referees for making useful suggestions which improved the presentation of several aspects of the manuscript. The research of Philip L.H. Yu and Mayer Alvo was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No.17303515). Mayer Alvo was also supported by the Natural Sciences and Engineering Research Council of Canada OGP0009068. \section*{Appendix A. Derivation of the approximation for the normalizing constant of our model} \noindent Since the $t!$ permutations lie on a sphere in $(t-1)$-space, our model is very close to another exponential-family distribution, the von Mises-Fisher distribution, which is defined on a unit sphere.
Consider a von Mises-Fisher distribution defined in $(t-1)$-space; its normalizing constant can be written as an integral over the unit $(t-2)$-sphere: \begin{equation} V_{t-1}(\kappa)^{-1}=\frac{(2\pi)^{\frac{t-1}{2}}I_{\frac{t-3}{2}}(\kappa)}{\kappa^{\frac{t-3}{2}}}=\int_{\left\Vert \boldsymbol{x}\right\Vert =1}\exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{x}\right\} d\boldsymbol{x}.\label{eq:NC_vMF} \end{equation} Using a naive Monte Carlo integration, we have \[ \int_{\left\Vert \boldsymbol{x}\right\Vert =1}\exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{x}\right\} d\boldsymbol{x}\simeq S\frac{1}{n}\sum_{i=1}^{n}\exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{x}_{i}\right\} , \] where the $\boldsymbol{x}_{i}$ are uniformly distributed on the unit $(t-2)$-sphere, and $S=\int_{\left\Vert \boldsymbol{x}\right\Vert =1}d\boldsymbol{x}$. Replacing the uniform sample by the $t!$ permutation points $\boldsymbol{y}_{i}$, we can further write: \begin{equation} \int_{\left\Vert \boldsymbol{x}\right\Vert =1}\exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{x}\right\} d\boldsymbol{x}\simeq S\frac{1}{t!}\sum_{i=1}^{t!}\exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{y}_{i}\right\} .\label{eq:Approxi_NC_VMF} \end{equation} Note that \[ S=\int_{\left\Vert \boldsymbol{x}\right\Vert =1}d\boldsymbol{x}=\frac{2\pi^{\frac{t-1}{2}}}{\Gamma(\frac{t-1}{2})} \] is the surface area of the unit $(t-2)$-sphere. Combining (\ref{eq:NC_vMF}) and (\ref{eq:Approxi_NC_VMF}), we obtain an approximation for the inverse of the normalizing constant of our model, where the sum runs over the set $\mathcal{P}_{t}$ of all $t!$ rankings: \begin{align*} C_{t}(\kappa)^{-1}=\sum_{\boldsymbol{y}\in\mathcal{P}_{t}}\exp\left\{ \kappa\boldsymbol{\theta}^{T}\boldsymbol{y}\right\} & \simeq\frac{t!}{S}V_{t-1}(\kappa)^{-1}\\ & =\frac{t!}{S}\cdot\frac{(2\pi)^{\frac{t-1}{2}}I_{\frac{t-3}{2}}(\kappa)}{\kappa^{\frac{t-3}{2}}}\\ & =\frac{2^{\frac{t-3}{2}}t!I_{\frac{t-3}{2}}(\kappa)\Gamma(\frac{t-1}{2})}{\kappa^{\frac{t-3}{2}}}.
\end{align*} Note that when $\kappa=0$, $V_{t-1}(\kappa)^{-1}$ reduces to the surface area of the unit $(t-2)$-sphere: $V_{t-1}(\kappa)^{-1}=S$. The approximation for the normalizing constant of our model then becomes $C_{t}(\kappa)\simeq\frac{S}{t!}S^{-1}=\frac{1}{t!}$, which is equal to the exact normalizing constant of our model for $\kappa=0$. \section*{Appendix B. Detailed Derivation of the predictive density of our model} \noindent To obtain the predictive density of our model, we first integrate (\ref{eq:posterior_predictiver_density}) over $\boldsymbol{\theta}$: \begin{align*} \int p(\tilde{\boldsymbol{y}}|\kappa,\boldsymbol{\theta})vMF(\boldsymbol{\theta}|\boldsymbol{m},\beta\kappa)d\boldsymbol{\theta} & =C_{t}(\kappa)V_{t}(\beta\kappa)\int\exp\left[\kappa\boldsymbol{\theta}^{T}\tilde{\boldsymbol{y}}+\beta\kappa\boldsymbol{m}^{T}\boldsymbol{\theta}\right]d\boldsymbol{\theta}\\ & =C_{t}(\kappa)V_{t}(\beta\kappa)V_{t}(\kappa\eta(\tilde{\boldsymbol{y}}))^{-1}\int V_{t}(\kappa\eta(\tilde{\boldsymbol{y}}))\exp\left[\kappa\eta(\tilde{\boldsymbol{y}})\frac{\tilde{\boldsymbol{y}}^{T}+\beta\boldsymbol{m}^{T}}{\eta(\tilde{\boldsymbol{y}})}\boldsymbol{\theta}\right]d\boldsymbol{\theta}, \end{align*} where $\eta(\tilde{\boldsymbol{y}})=\left\Vert \tilde{\boldsymbol{y}}+\beta\boldsymbol{m}\right\Vert $. The last integral is that of a vMF density with mean direction $(\tilde{\boldsymbol{y}}+\beta\boldsymbol{m})/\eta(\tilde{\boldsymbol{y}})$ and concentration parameter $\kappa\eta(\tilde{\boldsymbol{y}})$, and hence equals one.
It follows that \begin{align} \int p(\tilde{\boldsymbol{y}}|\kappa,\boldsymbol{\theta})vMF(\boldsymbol{\theta}|\boldsymbol{m},\beta\kappa)d\boldsymbol{\theta} & =C_{t}(\kappa)V_{t}(\beta\kappa)V_{t}(\kappa\eta(\tilde{\boldsymbol{y}}))^{-1}\nonumber \\ & =h(\tilde{\boldsymbol{y}})l(\kappa)\kappa^{\frac{t-3}{2}},\label{eq:Intergration_result_theta} \end{align} where \[ h(\tilde{\boldsymbol{y}})=\frac{1}{\Gamma\left(\frac{t-1}{2}\right)t!2^{\frac{t-3}{2}}}\left(\frac{\beta}{\eta(\tilde{\boldsymbol{y}})}\right)^{\frac{t-2}{2}}, \] \[ l(\kappa)=\frac{I_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\kappa)}{I_{\frac{t-3}{2}}(\kappa)I_{\frac{t-2}{2}}(\beta\kappa)}. \] Substituting (\ref{eq:Intergration_result_theta}) into $q(\tilde{\boldsymbol{y}}|\boldsymbol{Y})$ in (\ref{eq:posterior_predictiver_density}), we have \begin{equation} q\left(\tilde{\boldsymbol{y}}|\boldsymbol{Y}\right)=h(\tilde{\boldsymbol{y}})\frac{b^{a+\frac{t-1}{2}-1}}{\Gamma(a+\frac{t-1}{2}-1)}\int l(\kappa)e^{-b\kappa}\kappa^{a+\frac{t-1}{2}-2}d\kappa.\label{eq:posterior_preditive_density_one_intergration} \end{equation} Since the term $l(\kappa)$ involves three Bessel functions, we use a second-order approximation of $\ln l(\kappa)$ in terms of $\kappa$ and $\ln\kappa$: \begin{equation} \ln l(\kappa)\approx\ln l(\bar{\kappa})-r(\tilde{\boldsymbol{y}})\left(\kappa-\bar{\kappa}\right)+s(\tilde{\boldsymbol{y}})(\ln\kappa-\ln\bar{\kappa}),\label{eq:approxi_lgl_alpha} \end{equation} where $r(\tilde{\boldsymbol{y}})$ and $s(\tilde{\boldsymbol{y}})$ are obtained from the first and second order derivatives of $\ln l(\kappa)$ at $\bar{\kappa}$.
This yields: \[ s(\tilde{\boldsymbol{y}})=-\eta^{2}(\tilde{\boldsymbol{y}})\bar{\kappa}^{2}\left(\frac{I'_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}{I_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}\right)'+\beta^{2}\bar{\kappa}^{2}\left(\frac{I'_{\frac{t-2}{2}}(\beta\bar{\kappa})}{I_{\frac{t-2}{2}}(\beta\bar{\kappa})}\right)'+\bar{\kappa}^{2}\left(\frac{I'_{\frac{t-3}{2}}(\bar{\kappa})}{I_{\frac{t-3}{2}}(\bar{\kappa})}\right)', \] \[ r(\tilde{\boldsymbol{y}})=\frac{s(\tilde{\boldsymbol{y}})}{\bar{\kappa}}-\eta(\tilde{\boldsymbol{y}})\frac{I'_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}{I_{\frac{t-2}{2}}(\eta(\tilde{\boldsymbol{y}})\bar{\kappa})}+\beta\frac{I'_{\frac{t-2}{2}}(\beta\bar{\kappa})}{I_{\frac{t-2}{2}}(\beta\bar{\kappa})}+\frac{I'_{\frac{t-3}{2}}(\bar{\kappa})}{I_{\frac{t-3}{2}}(\bar{\kappa})}. \] The quantities $\frac{I'_{v}(x)}{I_{v}(x)}$ and $\left(\frac{I'_{v}(x)}{I_{v}(x)}\right)'$ can be computed using the recurrence relation of the derivative of the modified Bessel function of the first kind: \[ \frac{I'_{v}(x)}{I_{v}(x)}=\frac{I_{v+1}(x)}{I_{v}(x)}+\frac{v}{x} \] \[ \left(\frac{I'_{v}(x)}{I_{v}(x)}\right)'=-\frac{v}{x^{2}}+1-\frac{2v+1}{x}\left(\frac{I_{v+1}(x)}{I_{v}(x)}\right)-\left(\frac{I_{v+1}(x)}{I_{v}(x)}\right)^{2}. 
\] Using (\ref{eq:approxi_lgl_alpha}), the integration over $\kappa$ can be approximated by \begin{align} \int l(\kappa)e^{-b\kappa}\kappa^{a+\frac{t-1}{2}-2}d\kappa & \approx l(\bar{\kappa})e^{r(\tilde{\boldsymbol{y}})\bar{\kappa}}\bar{\kappa}^{-s(\tilde{\boldsymbol{y}})}\int e^{-\kappa\left(b+r(\tilde{\boldsymbol{y}})\right)}\kappa^{a+s(\tilde{\boldsymbol{y}})+\frac{t-1}{2}-2}d\kappa\nonumber \\ & =l(\bar{\kappa})e^{r(\tilde{\boldsymbol{y}})\bar{\kappa}}\bar{\kappa}^{-s(\tilde{\boldsymbol{y}})}\Gamma\left(a+s(\tilde{\boldsymbol{y}})+\frac{t-1}{2}-1\right)\left(b+r(\tilde{\boldsymbol{y}})\right)^{-(a+s(\tilde{\boldsymbol{y}})+\frac{t-1}{2}-1)}\label{eq:result_intergration_over_alpha} \end{align} where the integral is, up to its normalizing constant, that of a Gamma distribution with shape parameter $a+s(\tilde{\boldsymbol{y}})+\frac{t-1}{2}-1$ and rate parameter $b+r(\tilde{\boldsymbol{y}})$. Hence, plugging in the known normalizing constant of the Gamma distribution, we see that the approximate predictive density of $\tilde{\boldsymbol{y}}$ can be obtained by substituting (\ref{eq:result_intergration_over_alpha}) in (\ref{eq:posterior_preditive_density_one_intergration}): \[ q(\tilde{\boldsymbol{y}}|\boldsymbol{Y})\approx h(\tilde{\boldsymbol{y}})l(\bar{\kappa})e^{r(\tilde{\boldsymbol{y}})\bar{\kappa}}\bar{\kappa}^{-s(\tilde{\boldsymbol{y}})}\frac{b^{a+\frac{t-1}{2}-1}\Gamma(a+s(\tilde{\boldsymbol{y}})+\frac{t-1}{2}-1)}{\left(b+r(\tilde{\boldsymbol{y}})\right)^{a+s(\tilde{\boldsymbol{y}})+\frac{t-1}{2}-1}\Gamma(a+\frac{t-1}{2}-1)}. \] \section*{Appendix C.
Derivation of the variational inference of the mixture ranking model} \noindent For the mixture model, the evidence lower bound is given by \begin{equation} \mathcal{L_{M}}(q)=E_{q(\boldsymbol{Z},\boldsymbol{\Theta},\boldsymbol{\kappa},\boldsymbol{\tau})}\left[\ln\frac{p(\boldsymbol{R}|\boldsymbol{Z},\boldsymbol{\Theta},\boldsymbol{\kappa})p(\boldsymbol{\Theta},\boldsymbol{\kappa})p(\boldsymbol{Z}|\boldsymbol{\tau})p(\boldsymbol{\tau})}{q(\boldsymbol{Z})q(\boldsymbol{\Theta}|\boldsymbol{\kappa})q(\boldsymbol{\kappa})q(\boldsymbol{\tau})}\right].\label{eq:evidence_lower_bound-mixture} \end{equation} Focusing first on terms involving $\boldsymbol{Z}$, we have from (\ref{eq:evidence_lower_bound-mixture}) \begin{align*} \mathcal{L_{M}}(q) & =E_{q(\boldsymbol{Z},\boldsymbol{\Theta},\boldsymbol{\kappa},\boldsymbol{\tau})}\left[\ln\left(p(\boldsymbol{R}|\boldsymbol{Z},\boldsymbol{\Theta},\boldsymbol{\kappa})p(\boldsymbol{Z}|\boldsymbol{\tau})\right)\right]-E_{q(\boldsymbol{Z})}\left[\ln q(\boldsymbol{Z})\right]+constant\\ & =\sum_{i=1}^{N}\sum_{g=1}^{G}E_{q(\boldsymbol{Z})}\left[z_{ig}\rho_{ig}\right]-E_{q(\boldsymbol{Z})}\left[\ln q(\boldsymbol{Z})\right]+constant, \end{align*} where \[ \rho_{ig}=\frac{t-3}{2}E_{q(\boldsymbol{\kappa})}(\ln\kappa_{g})+E_{q(\boldsymbol{\tau})}\left(\ln\tau_{g}\right)+E_{q(\boldsymbol{\Theta},\boldsymbol{\kappa})}\left(\kappa_{g}\boldsymbol{\theta}_{g}^{T}\boldsymbol{y}_{i}\right)-E_{q(\boldsymbol{\kappa})}\left(\ln I_{\frac{t-3}{2}}\left(\kappa_{g}\right)\right)-\ln\left[2^{\frac{t-3}{2}}t!\Gamma\left(\frac{t-1}{2}\right)\right]. \] Since the term $E_{q(\boldsymbol{\kappa})}\left(\ln I_{\frac{t-3}{2}}\left(\kappa_{g}\right)\right)$ is not tractable, we use the method in Section \ref{subsec:Optimization-of-the-VI}, which leads to the lower bound \[ \mathcal{L}_{M}(q)\geq\underline{\mathcal{L}_{M}(q)}=\sum_{i=1}^{N}\sum_{g=1}^{G}E_{q(\boldsymbol{Z})}\left[z_{ig}\underline{\rho_{ig}}\right]-E_{q(\boldsymbol{Z})}\left[\ln q(\boldsymbol{Z})\right]+constant.
\] Using (\ref{eq:Ineq-I}), we have \begin{align} \rho_{ig} & \geq\underline{\rho_{ig}}=\frac{t-3}{2}E_{q(\boldsymbol{\kappa})}(\ln\kappa_{g})+E_{q(\boldsymbol{\tau})}\left(\ln\tau_{g}\right)+E_{q(\boldsymbol{\Theta},\boldsymbol{\kappa})}\left(\kappa_{g}\boldsymbol{\theta}_{g}^{T}\boldsymbol{y}_{i}\right)-\ln\left[2^{\frac{t-3}{2}}t!\Gamma\left(\frac{t-1}{2}\right)\right]\label{eq:lower_bound_rho}\\ & -\ln I_{\frac{t-3}{2}}\left(\bar{\kappa}_{g}\right)-\left(\frac{\partial}{\partial\kappa_{g}}\ln I_{\frac{t-3}{2}}\left(\bar{\kappa}_{g}\right)\right)\left[E_{q(\boldsymbol{\kappa})}\kappa_{g}-\bar{\kappa}_{g}\right].\nonumber \end{align} Hence the optimal variational posterior distribution for $\boldsymbol{Z}$ is \[ \ln q^{*}(\boldsymbol{Z})=\sum_{i=1}^{N}\sum_{g=1}^{G}z_{ig}\underline{\rho_{ig}}+constant \] which is recognized as a multinomial distribution: \[ q^{*}(\boldsymbol{Z})=\prod_{i=1}^{N}\prod_{g=1}^{G}p_{ig}^{z_{ig}}, \] where \[ p_{ig}=\frac{\exp(\underline{\rho_{ig}})}{\sum_{j=1}^{G}\exp(\underline{\rho_{ij}})}. \] Next, consider the optimization of $q(\boldsymbol{\tau})$.
Since $E_{q(\boldsymbol{Z})}(z_{ig})=p_{ig}$, the optimal posterior distribution for $\boldsymbol{\tau}$ can be written as \[ \ln q^{*}(\boldsymbol{\tau})=\sum_{g=1}^{G}\left(d_{0,g}-1+\sum_{i=1}^{N}p_{ig}\right)\ln\tau_{g}+constant, \] which is recognized to be a Dirichlet distribution with parameter $d_{g}$: \[ q^{*}(\boldsymbol{\tau})=Dirichlet(\boldsymbol{\tau}|\boldsymbol{d}), \] where $\boldsymbol{d}=\left[d_{1},...,d_{G}\right]^{T}$ and \begin{equation} d_{g}=d_{0,g}+\sum_{i=1}^{N}p_{ig}.\label{eq:update_d_g-1} \end{equation} The remaining optimization of $q(\boldsymbol{\theta}|\kappa)$ and $q(\kappa)$ is similar to Section \ref{subsec:Optimization-of-the-VI} and we have \[ q^{*}(\boldsymbol{\theta}|\kappa)=\prod_{g=1}^{G}q^{*}(\boldsymbol{\theta}_{g}|\kappa_{g}) \] and \[ q^{*}(\boldsymbol{\theta}_{g}|\kappa_{g})=vMF(\boldsymbol{\theta}_{g}|\boldsymbol{m}_{g},\kappa_{g}\beta_{g}), \] where \begin{equation} \beta_{g}=\left\Vert \beta_{0,g}\boldsymbol{m}_{0,g}+\sum_{i=1}^{N}p_{ig}\boldsymbol{y}_{i}\right\Vert ,\label{eq:Update_beta_g-1} \end{equation} \begin{equation} \boldsymbol{m}_{g}=\left(\beta_{0,g}\boldsymbol{m}_{0,g}+\sum_{i=1}^{N}p_{ig}\boldsymbol{y}_{i}\right)\beta_{g}^{-1}.\label{eq:Update_m_g-1} \end{equation} We can write $q^{*}(\boldsymbol{\kappa})=\prod_{g=1}^{G}q^{*}(\kappa_{g})$ where \[ q^{*}(\kappa_{g})=Gamma(\kappa_{g}|a_{g},b_{g}), \] and \begin{equation} a_{g}=a_{0,g}+\left(\frac{t-3}{2}\right)\sum_{i=1}^{N}p_{ig}+\beta_{g}\bar{\kappa}_{g}\left[\frac{\partial}{\partial\beta_{g}\kappa_{g}}\ln I_{\frac{t-2}{2}}(\beta_{g}\bar{\kappa}_{g})\right],\label{eq:update_postrerior_a-Mixture-1} \end{equation} \begin{equation} b_{g}=b_{0,g}+\left(\sum_{i=1}^{N}p_{ig}\right)\frac{\partial}{\partial\kappa_{g}}\ln I_{\frac{t-3}{2}}(\bar{\kappa}_{g})+\beta_{0,g}\left[\frac{\partial}{\partial\beta_{0,g}\kappa_{g}}\ln I_{\frac{t-2}{2}}(\beta_{0,g}\bar{\kappa}_{g})\right].\label{eq:Update_posterior_b-mixture-1} \end{equation} Since all the optimal variational posterior distributions are
determined, the expectations in (\ref{eq:lower_bound_rho}) can be easily evaluated using the properties of $q^{*}$: \[ E_{q(\boldsymbol{\kappa})}(\ln\kappa_{g})=\psi(a_{g})-\ln(b_{g}),\qquad E_{q(\boldsymbol{\tau})}\left(\ln\tau_{g}\right)=\psi(d_{g})-\psi\left(\sum_{g=1}^{G}d_{g}\right), \] where $\psi(\cdot)$ is the digamma function, \[ E_{q(\boldsymbol{\Theta},\boldsymbol{\kappa})}\left(\kappa_{g}\boldsymbol{\theta}_{g}^{T}\boldsymbol{y}_{i}\right)=\frac{a_{g}}{b_{g}}\boldsymbol{m}_{g}^{T}\boldsymbol{y}_{i}, \] and \[ E_{q(\boldsymbol{\kappa})}\kappa_{g}=\frac{a_{g}}{b_{g}}. \] \section*{Appendix D. Additional simulations for Section 5.1} \noindent We carried out further simulations to compare the true posterior distribution with the approximation obtained using the variational inference approach. We simulated another four data sets with $t=3,5$ and different data sizes of $N=20,100,200.$ We generated samples from the posterior distribution by the SIR method in Section \ref{subsec:Bayesian-method-SIR} using the proposal gamma density. We then applied the variational approach in Algorithm \ref{alg:Bayesian-Estimation-our_model} and generated samples from the corresponding posterior distribution. Figure \ref{fig:Comparison-posterior_distribution_SIR_VI-1} exhibits the histograms and box-plots for the posterior distributions of $\kappa$ and $\boldsymbol{\theta}$. From Figure \ref{fig:Comparison-posterior_distribution_SIR_VI-1}, we see that the posterior distribution obtained by Bayesian-VI is very close to that obtained by the Bayesian-SIR method for the different cases of $t$ and $N$. \begin{figure} \caption{\label{fig:Comparison-posterior_distribution_SIR_VI-1}Histograms and box-plots of the posterior distributions of $\kappa$ and $\boldsymbol{\theta}$ obtained by Bayesian-SIR and Bayesian-VI.} \end{figure} \section*{References} \end{document}
\begin{document} \title[Almost split sequences]{\sc Almost split sequences in tri-exact categories} \author[Shiping Liu]{Shiping Liu } \address{Shiping Liu\\ D\'epartement de math\'ematiques, Universit\'e de Sherbrooke, Sherbrooke, Qu\'ebec, Canada} \email{[email protected]} \author[Hongwei Niu]{\hspace{2pt} Hongwei Niu} \address{Hongwei Niu\\D\'epartement de math\'ematiques, Universit\'e de Sherbrooke, Sherbrooke, Qu\'ebec, Canada} \email{[email protected]} \subjclass[2010]{16D90, 16G20, 16G70, 16E35} \keywords{Modules; algebras; almost split sequences; almost split triangles; abelian categories; derived categories; triangulated categories.} \thanks{The first named author is supported in part by the Natu\-ral Science and Engineering Research Council of Canada.} \maketitle \begin{abstract} We shall study the existence of almost split sequences in tri-exact categories, that is, extension-closed subcategories of triangulated cate\-gories. Our results unify and extend the existence theorems for almost split sequences in abelian categories and exact categories (that is, extension-closed subcategories of abelian categories), and those for almost split triangles in triangulated cate\-gories in \cite{AUS,Hap1,Kra2,LeZ,LNP,RvdB}. As applications, we shall obtain some new results on the existence of almost split sequences in the derived categories of all modules over an algebra with a unity or a locally finite dimensional algebra given by a quiver with relations. \end{abstract} \section*{Introduction} Since its introduction in the late seventies; see \cite{AuR1, AuR2}, the Auslander-Reiten theory of almost split sequences has been playing a fundamental role in the modern representation theory of algebras; see, for example, \cite{ASS,ARS}.
Later, Happel introduced the analogous theory of almost split triangles in triangulated categories; see \cite{Hap2, H}, making the Auslander-Reiten theory applicable in other areas of mathematics such as algebraic topology and algebraic geometry; see \cite{Rei,JOR,JOR2}. Since then, this theory has been further developed separately for exact categories and triangulated categories; see \cite{AUS,Hap1,Kra2, KLe, LeZ,LNP,RvdB}. Our purpose is to unify and extend these results by working with tri-exact categories. Observe that the existence of almost split sequences in a Krull-Schmidt category will help us to classify the indecomposable objects and describe certain morphisms in terms of the Auslander-Reiten quiver; see \cite{Liu1}. We shall outline the content of the paper section by section. In Section 1, in addition to laying down the foundation, we shall also study modules over an $R$-algebra, which are reflexive with respect to the minimal injective co-generator for ${\rm Mod}\hspace{.4pt}R$, where $R$ is a commutative ring. These modules will play the same role as those of finite length over an artin algebra. In case the algebra is reflexive and noetherian, we shall establish a duality between the $R$-noetherian modules and the $R$-artinian modules; see (\ref{Noe-Ref}), which generalizes the well-known Matlis duality; see \cite{AUS, BH}. In Section 2, we shall study mainly the stable categories of a tri-exact category. The stable categories were first considered by Auslander and Reiten for modules over an artin algebra in order to establish the existence of almost split sequences; see \cite{AuR2}. Later, Lenzing and Zuazua defined the stable categories of an abelian category without projective or injective objects; see \cite{LeZ}, which carry over easily to an exact category; see \cite{LNP}. We shall extend them to tri-exact categories and show that every exact category is equivalent to a tri-exact category with equivalent stable categories.
This ensures that the study of almost split sequences in exact categories and abelian categories is covered under our tri-exact setting. We should point out that a triangulated category coincides with its stable categories. In Section 3, we shall study the existence of an individual almost split sequence in a tri-exact category. Historically, one derives an almost split sequence from an Auslander-Reiten formula in an abelian category; see \cite{AUS} and \cite[(1.1)]{LeZ}, and an almost split triangle from a Serre formula in a triangulated category; see \cite[(2.2)]{Kra2} and \cite[(I.2.3)]{RvdB}, but the converses do not hold in general. These formulae involve taking the ``dual" of some stable Hom-spaces against injective modules over various rings; see \cite{AUS,ARS,Kra2,LeZ}. Recently, some necessary and sufficient conditions were found for the existence of an almost split sequence in an exact $R$-category, where the ``dual" is taken against an injective co-generator for ${\rm Mod}\hspace{.4pt}R$; see \cite[(2.2)]{LNP}. By taking the ``dual" against injective modules over rings mapping to the stable endomorphisms of two prescribed objects, we shall obtain some necessary and sufficient conditions for the existence of an almost split sequence in a tri-exact category; see (\ref{existence}), which essentially cover all the previously mentioned results. In Section 4, we shall be concerned with the global existence of almost split sequences in a tri-exact category. It is known that an Ext-finite abelian $R$-category with $R$ being artinian has almost split sequences if and only if it admits an Auslander-Reiten duality; see \cite{ARS,GR,LeZ}, and a Hom-finite triangulated category over a field has almost split triangles on the right (or left) if and only if it admits a right (or left) Serre functor; see \cite[(I.2.3)]{RvdB}. We shall deal with this problem for Hom-reflexive Krull-Schmidt tri-exact categories.
This class of categories includes the category of noetherian modules and that of artinian modules over a noetherian $R$-algebra with $R$ being noetherian complete local, which are not Hom-finite if the algebra is not artinian; see \cite{AUS}. We shall show that such a tri-exact $R$-category has almost split sequences on the right (or left) if and only if it admits a full right (or left) Auslander-Reiten functor; see (\ref{ARS-ARF}). In the right (or left) triangulated case, the existence of almost split sequences on the right (or left) is equivalent to the existence of a right (or left) Auslander-Reiten functor, or equivalently, a right (or left) Serre functor with a proper image; see (\ref{AR-Serre}). In Section 5, we shall study the existence of almost split triangles in the derived categories of an abelian category with enough projective objects and enough injective objects. This has been done for the bounded derived category of finite dimensional modules over a finite dimensional algebra; see \cite{H, Hap2}. In the most general case, we shall show that an almost split triangle in the bounded derived cate\-gory starts with a bounded complex of injective objects and ends with a bounded complex of projective objects; see (\ref{ART-nec}) and (\ref{ART-bd}). In case the abelian category admits a Nakayama functor with respect to a subcategory of projective objects; see (\ref{Naka-Func}), we shall establish an existence theorem of an almost split triangle in the bounded derived category; see (\ref{ART-general}). In case the subcategory of projective objects is Hom-reflexive, we shall describe all possible almost split triangles in the bounded derived category; see (\ref{ART-refl}). These results will be applicable to the derived categories of modules over a general algebra, a reflexive noetherian algebra, or a locally finite dimensional algebra given by a quiver with relations.
\section{Preliminaries} \noindent The main objective of this section is to fix the notation and the terminology, which will be used throughout this paper, and collect some prelimi\-nary results. However, we shall also obtain some new results on modules over an algebra. Throughout this paper, morphisms in any category are composed from the right to the left. \noindent{\sc 1) Modules.} All rings and algebras except for those given by a quiver with relations have an identity. Let $\hbox{${\it\Sigma}$}$ be a ring or an algebra. We shall denote by $\hbox{{\rm Mod}\hspace{0.5pt}} \hbox{${\it\Sigma}$}$ the category of all left $\hbox{${\it\Sigma}$}$-modules, and by ${\rm mod}\hbox{${\it\Sigma}$}$ the full subcategory of $\hbox{{\rm Mod}\hspace{0.5pt}} \hbox{${\it\Sigma}$}$ of modules of finite length. For convenience, we shall identify the category of all right $\hbox{${\it\Sigma}$}$-modules with $\hbox{{\rm Mod}\hspace{0.5pt}} \hbox{${\it\Sigma}$}^{\rm op}$, where $\hbox{${\it\Sigma}$}^{\rm op}$ is the opposite ring or the opposite algebra of $\hbox{${\it\Sigma}$}$. A map $f: M\to N$ in $\hbox{{\rm Mod}\hspace{0.5pt}} \hbox{${\it\Sigma}$}$ is called {\it socle essential} provided that ${\rm Im}(f) \cap {\rm Soc}(N)$ is non-zero whenever ${\rm Soc}(N)$ is non-zero. Let $M$ be a left or right $\hbox{${\it\Sigma}$}$-module. Then $M^*={\rm Hom}_{\it\Sigma}(M, \hbox{${\it\Sigma}$})$ is a right or left $\hbox{${\it\Sigma}$}$-module, respectively. Given $u\in M$, we have $\hat{u}\in M^{**}={\rm Hom}_{\it\Sigma}(M^*, \hbox{${\it\Sigma}$})$, sending $f\in M^*$ to $f(u)$. The map $\rho_{_M}: M\to M^{**}$, sending $u$ to $\hat{u},$ is clearly a natural $\hbox{${\it\Sigma}$}$-linear map. 
It is well known; see, for example, \cite[(3.15)]{ROT} that $M$ is finitely generated projective if and only if it has a finite projective basis $\{u_i; f_i\}_{1\le i\le n}$, where $u_i\in M$ and $f_i\in M^*$, such that $u=\sum_{i=1}^n\, f_i(u) u_i$ (or $u=\sum_{i=1}^n\, u_i f_i(u))$ for all $u\in M$. We shall denote by ${\rm proj}\hspace{.4pt}\hbox{${\it\Sigma}$}$ the full subcategory of $\hbox{{\rm Mod}\hspace{0.5pt}} \hbox{${\it\Sigma}$}$ of finitely generated projective modules. The following statement is probably well-known. \begin{Lemma}\label{fgp-equiv} Let $\hbox{${\it\Sigma}$}$ be a ring or an algebra. Then ${\rm Hom}_{\it\Sigma}(-, \hbox{${\it\Sigma}$}): {\rm proj}\hspace{.5pt}\hbox{${\it\Sigma}$} \to {\rm proj} \hspace{.5pt}\hbox{${\it\Sigma}$}^{\rm op}$ is a duality. \end{Lemma} \noindent{\it Proof.} Let $P\in {\rm proj}\hspace{.5pt}\hbox{${\it\Sigma}$}$ with a finite projective basis $\{u_i; f_i\}_{1\le i\le n}$. Given $f \in P^*$ and $u\in P,$ we obtain $$ \textstyle(\sum_{i=1}^n f_i \, \hat{u}_i(f))(u) = \sum_{i=1}^n \left(f_i f(u_i)\right)(u) = \sum_{i=1}^n f_i(u) f (u_i) = f (\sum_{i=1}^n f _i(u)u_i).$$ Since $u=\sum_{i=1}^n f _i(u)u_i$, we conclude that $f=\sum_{i=1}^n f_i \, \hat{u}_i(f)$. That is, $\{f_i; \hat{u}_i\}_{1\le i\le n}$ is a projective basis of $P^*$. In particular, $P^*\in {\rm proj}\hspace{.5pt}\hbox{${\it\Sigma}$}^{\rm op}$. If $u\in P$ is non-zero, since $u=\sum_{i=1}^n f_i(u) u_i,$ we see that $\hat{u}$ is non-zero. That is, $\rho_{_P}$ is a monomorphism. As shown above, $P^{* *}$ has a projective basis $\{\hat{u}_i; \hat{f}_i\}_{1\le i\le n}$. Given $\varphi\in P^{**}$, we obtain $$\textstyle \varphi=\sum_{i=1}^n \hat{f}_i(\varphi) \hat{u}_i=\sum_{i=1}^n \varphi(f_i) \hat{u}_i=\rho_{_P}(\sum_{i=1}^n \varphi(f_i) u_i).$$ Thus, $\rho_{_P}$ is an isomorphism. The proof of the lemma is completed.
Throughout this paper, $R$ will stand for a commutative ring and $I_{\hspace{-1pt}R}$ for a minimal injective co-generator for $\hbox{{\rm Mod}\hspace{0.5pt}} R$; see \cite[(18.19)]{AnF}. We shall use frequently the functor $D={\rm Hom}_R(-, I_{\hspace{-1pt}R}): \hbox{{\rm Mod}\hspace{0.5pt}} R \to \hbox{{\rm Mod}\hspace{0.5pt}} R$. The following statement is probably known. \begin{Prop}\label{Mod-Commu} Let $U$ be a module over a commutative ring $R$. \begin{enumerate}[$(1)$] \item If $U$ is of finite length $n$, then $DU$ is also of length $n$. \item If $U$ is finitely co-generated, then $DU$ is finitely generated. \item If $DU$ is artinian or noetherian, then $U$ is noetherian or artinian respectively. \end{enumerate} \end{Prop} \noindent{\it Proof.} Assume first that $U$ is simple. In particular, $U=Ru$ for some $u\in U$. Consider some non-zero linear functions $f, g\in DU$. Since $I_{\hspace{-1pt}R}$ is a minimal injective co-generator, ${\rm soc}(I_{\hspace{-1pt}R})$ contains exactly one copy of $U$; see \cite[(18.19)]{AnF}. Therefore, $g(U)=f(U)$, and hence, $g(u)= r f(u)$ for some $r\in R$. This yields $g=rf$. Thus, $DU$ is also simple. By induction, we can establish Statement (1). Assume next that $U$ is finitely co-generated, that is, $U$ has an essential socle $S=S_1\oplus \cdots\oplus S_t$, where the $S_i$ are simple; see \cite[(10.4)]{AnF}. Consider the canonical projections $p_i: S\to S_i$ and the canonical injections $q_i: S_i\to S$, and fix some monomorphisms $f_i: S_i \to I_{\hspace{-1pt}R}$, for $i=1, \ldots, t$. Letting $q: S\to U$ be the inclusion, we obtain $R$-linear maps $g_i: U\to I_{\hspace{-1pt}R}$ such that $g_i q =f_i p_i$, for $i=1, \ldots, t$. Given any $R$-linear map $g: U\to I_{\hspace{-1pt}R}$, as seen above, $gqq_i=r_i f_i$ for some $r_i\in R$. 
This yields $gq=\textstyle\sum_{i=1}^t gqq_i p_i=\sum_{i=1}^t r_if_ip_i=( \sum_{i=1}^t r_i g_i)q.$ Since $q$ is an essential monomorphism, $g=\sum_{i=1}^t r_i g_i.$ Statement (2) is established. Finally, given a submodule $V$ of $U$, we denote by $V^\perp$ the submodule of $DU$ of $R$-linear maps vanishing on $V$ and by $^\perp(V^\perp)$ the submodule of $U$ of elements annihilated by the $R$-linear maps in $V^\perp$. Then, $V\subseteq {}^\perp(V^\perp)$. We claim that $V= {}^\perp(V^\perp)$. Otherwise, we can find an $R$-linear map $h: U\to I_{\hspace{-1pt}R}$ such that $h({}^\perp(V^\perp))\ne 0$ but $h(V)=0$, contrary to the definition. Using this claim, we may easily establish Statement (3). The proof of the proposition is completed. Let $A$ be an $R$-algebra. A left or right $A$-module $M$ is called {\it $R$-noetherian} or {\it $R$-artinian} if $_RM$ is noetherian or artinian; and $A$ is called a {\it noetherian} or {\it reflexive $R$-algebra} if $_AA$ is $R$-noetherian or $R$-reflexive, respectively. Note that our definition of a noetherian $R$-algebra is different from the classical one, where $R$ is assumed to be noetherian. Consider the exact functors $D={\rm Hom}_R(-, I\hspace{-1.5pt}_R)\hspace{-1pt}:\hspace{-1pt} \hbox{{\rm Mod}\hspace{0.5pt}} A \hspace{-1pt}\to\hspace{-1pt} \hbox{{\rm Mod}\hspace{0.5pt}} A^{\rm op}$ and $D={\rm Hom}_R(-, I\hspace{-1.5pt}_R)\hspace{-1pt}:\hspace{-1pt} \hbox{{\rm Mod}\hspace{0.5pt}} A^{\rm op} \hspace{-1pt}\to\hspace{-1pt} \hbox{{\rm Mod}\hspace{0.5pt}} A$. Given a left or right $A$-module $M$, we obtain a canonical $A$-linear monomorphism $\sigma_{\hspace{-1.5pt}_M}: M\to D^2M$ so that $\sigma_{\hspace{-1pt}_M}(x)(f)=f(x)$, for $x\in M$ and $f\in DM$. We shall say that $M$ is {\it $R$-reflexive} if $\sigma_{\hspace{-1pt}_M}$ is bijective. \begin{Lemma}\label{alg-ref-mod} Let $A$ be an $R$-algebra. 
The full subcategory ${\rm RMod}\hspace{.4pt}A$ of $\hbox{{\rm Mod}\hspace{0.5pt}} A$ of $R$-reflexive modules is abelian, contains all modules of finite $R$-length, and admits a duality $D: {\rm RMod}\hspace{.5pt}A \to {\rm RMod}\hspace{.5pt}A^{\rm op}$. \end{Lemma} \noindent{\it Proof.} Considering the canonical monomorphisms and applying the Snake Lemma, we see that ${\rm RMod}\hspace{.5pt}A$ is closed under taking submodules and quotient modules, and hence, it is an abelian subcategory of $\hbox{{\rm Mod}\hspace{0.5pt}} A$. If $M\in \hbox{{\rm Mod}\hspace{0.5pt}} A$ is of $R$-length $n$, by Proposition \ref{Mod-Commu}(1), so is $D^2M$, and consequently, the monomorphism $\sigma_{\hspace{-1.5pt}_M}: M\to D^2M$ is an isomorphism. Finally, by the definition of reflexive modules, $D: {\rm RMod}\hspace{.5pt}A \to {\rm RMod}\hspace{.5pt}A^{\rm op}$ and $D: {\rm RMod}\hspace{.5pt}A^{\rm op} \to {\rm RMod}\hspace{.5pt}A$ are mutual quasi-inverses. The proof of the lemma is completed. Consider now the endofunctors $\nu_{\hspace{-1.5pt}_A}=D{\rm Hom}_A(-, A)$ and $\nu_{\hspace{-1.5pt}_A}^{\mbox{\hspace{1pt}-\hspace{-3pt}}}={\rm Hom}_A(D(-), A)$ of $\hbox{{\rm Mod}\hspace{0.5pt}} A$. Put ${\rm inj}\hspace{.4pt} A=\nu_{\hspace{-1.5pt}_A}({\rm proj}\hspace{.5pt}A)$ which, by Lemma \ref{fgp-equiv}, contains only injective mo\-dules. Let ${\rm mod}^{+\hspace{-3pt}}A$ stand for the full subcategory of $\hbox{{\rm Mod}\hspace{0.5pt}} A$ of finitely generated modules, and ${\rm mod}^{-\hspace{-3.5pt}}A$ for that of modules finitely co-generated by ${\rm inj}\hspace{.4pt} A$. \begin{Theo}\label{Noe-Ref} Let $A$ be a reflexive noetherian $R$-algebra.
\begin{enumerate}[$(1)$] \item The functors $\nu_{\hspace{-1.5pt}_A}: {\rm proj}\hspace{.5pt}A \to {\rm inj}\hspace{.5pt}A$ and $\nu_{\hspace{-1.5pt}_A}^{\mbox{\hspace{.5pt}-\hspace{-3pt}}}: {\rm inj}\hspace{.5pt}A\to {\rm proj}\hspace{.5pt}A$ are mutual quasi-inverses, where ${\rm proj}\hspace{.5pt}A$ and ${\rm inj}\hspace{.5pt}A$ have as objects all $R$-noetherian projective modules and all $R$-artinian injective modules, respectively. \item There exists a duality $D={\rm Hom}_R(-, I_{\hspace{-1pt}R}): {\rm mod}^{+\hspace{-3pt}}A^{\rm op}\to {\rm mod}^{-\hspace{-3pt}}A$, where ${\rm mod}^{+\hspace{-3pt}}A$ and ${\rm mod}^{-\hspace{-3.5pt}}A$ are abelian subcategories of ${\rm RMod}\hspace{.4pt}A$, whose objects are all $R$-noetherian modules and all $R$-artinian modules, respectively. \end{enumerate} \end{Theo} \noindent{\it Proof.} Since $_AA$ is $R$-reflexive and $R$-noetherian, we deduce from Lemma \ref{alg-ref-mod} that ${\rm mod}^{+\hspace{-3pt}}A$ is an abelian subcategory of ${\rm RMod}\hspace{.5pt}A$, whose objects are clearly the $R$-noetherian $A$-modules. In particular, the objects of ${\rm proj}\hspace{.5pt}A$ are the $R$-noetherian projective $A$-modules. On the other hand, since $A^{\rm op}$ is also a reflexive noetherian $R$-algebra, ${\rm mod}^{+\hspace{-3pt}}A^{\rm op}$ is an abelian subcate\-gory of ${\rm RMod}\hspace{.5pt}A^{\rm op}$. Considering the equiva\-lence ${\rm Hom}_A(-, A): {\rm proj}\hspace{.4pt}A \to {\rm proj}\hspace{.4pt}A^{\rm op}$ and the duality $D: {\rm RMod}\hspace{.5pt}A^{\rm op} \to {\rm RMod}\hspace{.5pt}A$ in Lemmas \ref{fgp-equiv} and \ref{alg-ref-mod}, we see that ${\rm inj}\hspace{.5pt}A$ is a subcategory of ${\rm RMod}\hspace{.5pt}A$, whereas the functors $\nu_{\hspace{-2pt}_A}: {\rm proj}\hspace{.5pt}A\to {\rm inj}\hspace{.5pt}A$ and $\nu_{\hspace{-2pt}_A}^{\hspace{.5pt}\mbox{-\hspace{-3.5pt}}}: {\rm inj}\hspace{.5pt}A\to {\rm proj}\hspace{.5pt}A$ are mutual quasi-inverses.
Since its objects are finitely co-generated by ${\rm inj}\hspace{.5pt}A$, we deduce from Lemma \ref{alg-ref-mod} that ${\rm mod}^{-\hspace{-3pt}}A$ is a subcategory of ${\rm RMod}\hspace{.5pt}A$. Given $M\in {\rm RMod}\hspace{.4pt}A$, in view of the duality $D: {\rm RMod}\hspace{.5pt}A^{\rm op}\to {\rm RMod}\hspace{.5pt}A$, we see that $M\in {\rm mod}^{-\hspace{-3pt}}A$ if and only if $DM\in {\rm mod}^{+\hspace{-3pt}}A^{\rm op}$. Thus, we obtain a duality $D: {\rm mod}^{+\hspace{-3pt}}A^{\rm op}\to {\rm mod}^{-\hspace{-3pt}}A$. In particular, ${\rm mod}^{-\hspace{-3pt}}A$ is abelian. Since ${\rm mod}^{+\hspace{-3pt}}A^{\rm op}$ contains only $R$-noetherian modules, by Proposition \ref{Mod-Commu}(3), ${\rm mod}^{-\hspace{-3pt}}A$ contains only $R$-artinian modules. On the other hand, if $M\in \hbox{{\rm Mod}\hspace{0.5pt}} A$ is $R$-artinian, then $_RM$ is finitely co-generated, and by Proposition \ref{Mod-Commu}(2), $DM$ is finitely generated over $R$. In particular, $DM\in {\rm mod}^{+\hspace{-3pt}}A^{\rm op}$, and hence, $D^2M\in {\rm mod}^{-\hspace{-3pt}}A$. Since ${\rm mod}^{-\hspace{-3pt}}A$ is abelian and $\sigma_{\hspace{-1pt}_M}: M\to D^2M$ is a monomorphism, $M\in {\rm mod}^{-\hspace{-3pt}}A$. Finally, let $I$ be an $R$-artinian injective $A$-module. Then, $I$ is an injective object in ${\rm mod}^{-\hspace{-3pt}}A$. Hence, $I\cong DP$, where $P$ is a projective object in ${\rm mod}^{+\hspace{-3pt}}A^{\rm op}$. It is then easy to see that $P\in {\rm proj}\hspace{.4pt}A^{\rm op},$ that is, $I\in {\rm inj}\hspace{.4pt}A.$ The proof of the theorem is completed. \noindent{\sc Remark.} A noetherian algebra over a commutative noetherian complete local ring is reflexive; see \cite[Section 5]{AUS}. Thus, Theorem \ref{Noe-Ref} generalizes the well-known Matlis duality; see \cite[Section 5]{AUS} and \cite[(3.2.13)]{BH}. We conclude this subsection with algebras given by a quiver with relations. Let $Q$ be a locally finite quiver with vertex set $Q_0$.
An infinite path in $Q$ is called {\it left infinite} if it has no starting point and {\it right infinite} if it has no ending point. Given a field $k$, an ideal $J$ in the path algebra $kQ$ is called {\it weakly admissible} if it lies in the ideal generated by the paths of length two; and {\it locally admissible} if, for any $x\in Q_0$, there exists $n_x\in \mathbb{Z}$ for which $J$ contains all paths of length $\ge n_x$, starting or ending with $x$. Consider $\hbox{$\it\Lambda$}=kQ/J$, where $J$ is weakly admissible, with a complete set of pairwise orthogonal primitive idempotents $\{e_x \mid x\in Q_0\}$. One calls $\hbox{$\it\Lambda$}$ {\it locally finite dimensional} if $e_x\hbox{$\it\Lambda$} e_y$ is finite dimensional for all $x, y\in Q_0$; and {\it strongly locally finite dimensional} if $J$ is locally admissible; see \cite[Section 1(4)]{BHL}. Assume that $\hbox{$\it\Lambda$}$ is locally finite dimensional. We shall denote by $\hbox{{\rm Mod}\hspace{0.5pt}}\hbox{$\it\Lambda$}$ the category of all left $\hbox{$\it\Lambda$}$-modules $M$ such that $M=\oplus_{x\in Q_0}\, e_xM$, and by ${\rm mod}^{\hspace{.5pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$}$ the full subcategory of $\hbox{{\rm Mod}\hspace{0.5pt}}\hbox{$\it\Lambda$}$ of finite dimensional modules. Given $x\in Q_0$, one obtains a projective module $P_x=\hbox{$\it\Lambda$} e_x$ and an injective module $I_x=D(\hbox{$\it\Lambda$}^{\rm op}e_x)$; see \cite[Section 3]{BHL}. Let ${\rm proj}\,\hbox{$\it\Lambda$}$ and ${\rm inj}\hspace{.4pt}\hbox{$\it\Lambda$}$ be the strictly additive subcategories of $\hbox{{\rm Mod}\hspace{0.5pt}}\hbox{$\it\Lambda$}$ generated by the $P_x$ with $x\in Q_0$ and by the $I_x$ with $x\in Q_0$, respectively. Observe that $\hbox{$\it\Lambda$}$ is strongly locally finite dimensional if and only if all $P_x$ and $I_x$ with $x\in Q_0$ are finite dimensional. \noindent{\sc 2) Additive categories.} Throughout this paper, all functors between additive cate\-gories are additive.
Let $\mathcal{A}$ be an additive category. A {\it strictly additive} subcategory of $\mathcal{A}$ is a full subcategory which is closed under finite direct sums, direct summands and isomorphisms. An object in $\mathcal{A}$ is called {\it strongly indecomposable} if it has a local endomorphism ring. One says that $\mathcal{A}$ is {\it Krull-Schmidt} if every non-zero object is a finite direct sum of strongly indecomposable objects; and in this case, $\mathcal{A}/\mathcal{I}$ is Krull-Schmidt, for every ideal $\mathcal{I}$ in $\mathcal{A}$. A morphism $f: X\to Y$ in $\mathcal{A}$ is called {\it left minimal} if every morphism $h: Y\to Y$ such that $f=hf$ is an automorphism; {\it left almost split} if $f$ is not a section and every non-section morphism $g: X \to M$ factors through it; and {\it minimal left almost split} if it is left minimal and left almost split. In the dual situations, one says that $f$ is {\it right minimal}, {\it right almost split}, and {\it minimal right almost split}, respectively. \noindent{\sc 3) Stable categories of exact categories.} Let $\mathscr{C}$ be an exact category, that is, an extension-closed subcategory of an abelian category $\mathfrak{A}$; see \cite[Section 2]{LNP}. Given $X, Y\in \mathscr{C}$, one writes ${\rm Ext}_\mathscr{\hspace{.5pt}C}^1(X, Y)={\rm Ext}_\mathfrak{\hspace{.5pt}A}^1(X, Y)$. A morphism $f: X\to Y$ in $\mathscr{C}$ is called {\it injectively trivial} in $\mathscr{C}$ if the push-out map $${\rm Ext}_\mathscr{\hspace{.5pt}C}^1(Z, f): \, {\rm Ext}_\mathscr{\hspace{.5pt}C}^1(Z, X)\to {\rm Ext}_\mathscr{\hspace{.5pt}C}^1(Z, Y): \delta\mapsto f \cdot \delta$$ vanishes for all $Z\in \mathscr{C}$; and {\it projectively trivial} in $\mathscr{C}$ if the pull-back map $${\rm Ext}_\mathscr{\hspace{.5pt}C}^1(f, Z): \, {\rm Ext}_\mathscr{\hspace{.5pt}C}^1(Y, Z)\to {\rm Ext}_\mathscr{\hspace{.5pt}C}^1(X, Z): \zeta\mapsto \zeta \cdot f$$ vanishes for all $Z\in \mathscr{C}$.
Denoting by $\mathcal{I}_{\hspace{.5pt}\mathscr{C}}$ the ideal of injectively trivial morphisms and by $\mathcal{P}_{\mathscr{\hspace{.5pt}C}}$ that of projectively trivial morphisms, one obtains the {\it injectively stable category} $\hspace{2pt}\overline{\hspace{-2pt}\mathscr{C}} =\mathscr{C}/\mathcal{I}_{\hspace{.5pt}\mathscr{C}}$ and the {\it projectively stable category} $\underline{\mathscr{C}\hspace{-2pt}}\hspace{1pt}=\mathscr{C}/\mathcal{P}_{\mathscr{\hspace{.5pt}C}}$ of $\mathscr{C}$; see \cite{LNP, LeZ}. In the sequel, we shall write as usual ${\rm Hom}_{\hspace{2.5pt}\overline{\hspace{-2pt}\mathscr{C}}\hspace{.5pt}}(X, Y)=\overline{\rm Hom}\hspace{.5pt}_\mathscr{C}(X, Y)$ and ${\rm Hom}_{\hspace{2.5pt}\underline{\hspace{-1pt}\mathscr{C}\hspace{-1pt}}\hspace{1pt}}(X, Y)=\underline{\hspace{-.5pt}\rm Hom\hspace{-.5pt}}_{\hspace{1pt}\mathscr{C}}(X, Y)$, for all $X, Y\in \mathscr{C}$. \noindent{\sc 4) Triangulated categories.} Let $\mathcal{A}$ be a triangulated category, whose translation functor will always be written as $[1].$ An exact triangle $$\xymatrix{X \ar[r]^f &Y \ar[r]^g & Z \ar[r]^-\delta &X[1]} $$ in $\mathcal{A}$ is called {\it almost split} if $f$ is minimal left almost split and $g$ is minimal right almost split; see \cite[(4.1)]{H}. In this case, $X$ and $Z$ will be called the {\it starting term} and the {\it ending term}, respectively. \begin{Defn} A full subcategory $\mathcal{C}$ of a triangulated category $\mathcal{A}$ is called {\it exten\-sion-closed} provided, for any exact triangle $\xymatrixcolsep{18pt}\xymatrix{X \ar[r] &Y \ar[r] &Z \ar[r] &X[1]}$ in $\mathcal{A}$, that $Y \in \mathcal{C}$ whenever $X, Z\in \mathcal{C}.$ In this case, we shall simply call $\mathcal{C}$ a {\it tri-exact category.} \end{Defn} Observe that an extension-closed subcategory of a triangulated category is strictly additive, but it is not necessarily closed under the translation functor $[1]$. 
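\noindent{\sc Example.} To illustrate the last observation, we record a standard special case; compare Lemma \ref{ext-closed} below. Let $\mathfrak{A}$ be an abelian category, and let $\mathcal{C}$ be the full subcategory of $D(\mathfrak{A})$ of complexes $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ with ${\rm H}^i(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})=0$ for all $i\ne 0$. Given an exact triangle $\xymatrixcolsep{18pt}\xymatrix{X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] &Y^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] &Z^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] &X^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]}$ in $D(\mathfrak{A})$ with $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}, Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in \mathcal{C}$, the long exact sequence of cohomology yields ${\rm H}^i(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}})=0$ for all $i\ne 0$, so that $Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in \mathcal{C}$; that is, $\mathcal{C}$ is a tri-exact category. On the other hand, for a non-zero object $X$ of $\mathfrak{A}$, viewed as a complex $X[0]$ concentrated in degree zero, we have ${\rm H}^{-1}(X[0][1])={\rm H}^0(X[0])=X\ne 0$, whence $X[0][1]\notin \mathcal{C}$; that is, $\mathcal{C}$ is not closed under the translation functor $[1]$.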
\begin{Defn}\label{tri-sub} An extension-closed subcate\-gory $\mathcal{C}$ of a triangulated category $\mathcal{A}$ will be called \begin{enumerate}[$(1)$] \item {\it left triangulated} provided, for any exact triangle $\xymatrixcolsep{18pt}\xymatrix{\hspace{-3pt} X \ar[r] &Y \ar[r] &Z \ar[r] &X[1]\hspace{-3pt}}$ in $\mathcal{A}$, that $X\in \mathcal{C}$ whenever $Y, Z\in \mathcal{C}$. \item {\it right triangulated} provided, for any exact triangle $\xymatrixcolsep{18pt}\xymatrix{\hspace{-3pt}X \ar[r] &Y \ar[r] &Z \ar[r] &X[1]\hspace{-3pt}}$ in $\mathcal{A}$, that $Z\in \mathcal{C}$ whenever $X, Y\in \mathcal{C}$. \end{enumerate} \end{Defn} Let $\mathcal{C}$ be a full subcategory of $\mathcal{A}$. For $n\in \mathbb{Z}$, we denote by $\mathcal{C}[n]$ the full subcategory of $\mathcal{A}$ generated by the objects $X[n]$ with $X\in \mathcal{C}$. Clearly, $X\in \mathcal{C}[n]$ if and only if $X[-n]\in \mathcal{C}$. Rotating exact triangles in $\mathcal{A}$, we obtain the following observation. \begin{Lemma}\label{left-tri} An extension-closed subcategory $\mathcal{C}$ of a triangulated category is left triangulated if and only if $\mathcal{C}[-1]\subseteq \mathcal{C};$ and right triangulated if and only if $\mathcal{C}[1]\subseteq \mathcal{C}.$ \end{Lemma} Observe that a left or right triangulated subcategory of a triangulated category is a left or right triangulated category as defined in \cite[(1.1)]{ABM} and will be simply called a {\it left triangulated category} with a left translation $[-1]$ or a {\it right triangulated category} with a right translation $[1]$, respectively. \noindent{\sc 5) Derived categories.} Let $\mathcal{A}$ be an additive category. We shall denote by $C(\mathcal{A})$ the {\it complex category} of $\mathcal{A}$. The full subcate\-gories of $C(\mathcal{A})$ of bounded-above complexes, of bounded-below complexes, and of bounded complexes will be written as $C^-(\mathcal{A})$, $C^+(\mathcal{A})$ and $C^b(\mathcal{A})$, respectively.
Given $*\in \{+, -, b, \emptyset\}$, let $K^*(\mathcal{A})$ stand for the {\it homotopy category}, that is the quotient of $C^*(\mathcal{A})$ modulo the null-homotopic morphisms, which is triangulated with a canonical projection functor $\mathbb{P}^*: C^*(\mathcal{A})\to K^*(\mathcal{A})$; and $D^*(\mathcal{A})$ for the {\it derived category}, that is the localization of $K^*(\mathcal{A})$ with respect to the quasi-isomorphisms, which is also triangulated with a canonical localization functor $\mathbb{L}^*: K^*(\mathcal{A}) \to D^*(\mathcal{A})$. Given a morphism $f^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ in $C^*(\mathcal{A}),$ we shall write $\bar{f}^{\hspace{0.5pt}\dt\hspace{1.5pt}}= \mathbb{P}^*(f^{\hspace{0.5pt}\dt\hspace{1.5pt}})\in K^*(\mathcal{A})$ and $\tilde{f}^{\hspace{0.5pt}\dt\hspace{1.5pt}}=\mathbb{L}^*(\bar{f}^{\hspace{0.5pt}\dt\hspace{1.5pt}})\in D^*(\mathcal{A}).$ Given a complex $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in C^b(\mathcal{A}),$ its {\it width} $w(M^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ is an integer defined by $w(M^{\hspace{0.5pt}\dt\hspace{1.5pt}})=0$ if $M^i=0$ for all $i\in \mathbb{Z}$; and otherwise, $w(M^{\hspace{0.5pt}\dt\hspace{1.5pt}})=t-s+1$, where $s\le t$ such that $M^s$ and $M^t$ are non-zero, but $M^i=0$ for all $i\notin [s, t]$. 
Furthermore, given a complex $(X^{\hspace{0.5pt}\dt\hspace{1.5pt}}, d^\wdt\hspace{2pt})$ over $\mathcal{A}$ and some integer $n$, one defines two {\it brutal truncations} $$\kappa_{\ge n}(X^\dt\hspace{2pt}): \; \textstyle \xymatrix{\cdots \ar[r] &0 \ar[r] &X^{n} \ar[r]^{d^n} &X^{n+1} \ar[r]^{d^{n+1}} &X^{n+2} \ar[r] &\cdots} $$ and $$\kappa_{\le n}(X^\dt\hspace{2pt}): \; \xymatrix{\cdots \ar[r] &X^{n-2}\ar[r]^{d^{n-2}} &X^{n-1} \ar[r]^{d^{n-1}} &X^{n} \ar[r] &0 \ar[r] &\cdots,}$$ where $X^n$ is the component of degree $n$ in both complexes, with two canonical morphisms $\mu_n^{\hspace{0.5pt}\dt\hspace{1.5pt}}: \kappa_{\ge n}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \to X^{\hspace{0.5pt}\dt\hspace{1.5pt}} $ such that $\mu_n^p=1_{_{X^p}}$ for $p\ge n$ and $\mu_n^p=0$ for $p<n;$ and $\pi_n^{\hspace{0.5pt}\dt\hspace{1.5pt}}: X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \to \kappa_{\le n}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ such that $\pi_n^p=1_{_{X^p}}$ for $p\le n$ and $\pi_n^p=0$ for $p > n.$ The following statement is well-known; see \cite[(III.4.4.2)]{Mil} and \cite[(1.3)]{Hap1}. \begin{Lemma}\label{truncation-exact} Let $\mathcal{A}$ be an additive category. If $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in C(\mathcal{A})$ and $n\in \mathbb{Z},$ then $K(\mathcal{A})$ has an exact triangle $\xymatrixcolsep{18pt}\xymatrix{\hspace{-3pt}\kappa_{\ge n}(X^\dt\hspace{2pt}) \ar[r] & X^\dt\hspace{2pt} \ar[r] & \kappa_{\le n-1}(X^\dt\hspace{2pt}) \ar[r] &\kappa_{\ge n}(X^\dt\hspace{2pt})[1].}$ \end{Lemma} Consider now the derived category of an abelian category $\mathfrak{A}$. Fix an integer $n$. 
We shall denote by $D^{\le n}(\mathfrak A)$ and $D^{\ge n}(\mathfrak A)$ the full subcategories of $D(\mathfrak A)$ of complexes $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ with ${\rm H}^i(M^{\hspace{0.5pt}\dt\hspace{1.5pt}})=0$ for all $i> n$ and of complexes $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ with ${\rm H}^i(M^{\hspace{0.5pt}\dt\hspace{1.5pt}})=0$ for all $i< n$, respectively, where ${\rm H}^i(M^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ is the $i$-th cohomology of $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}$. In view of the long exact sequence of cohomology and Lemma \ref{left-tri}, $D^{\le n}(\mathfrak A)$ is right triangulated and $D^{\ge n}(\mathfrak A)$ is left triangulated in $D(\mathfrak A)$. Let $(X^{\hspace{1.3pt}\dt\hspace{1pt}}, d^{\hspace{1.3pt}\dt\hspace{1pt}})$ be a complex over $\mathfrak{A}$. Writing $d^n=q^n p^n$, where $p^{n}: X^{n} \to C^{n}$ is the cokernel of $d^{n-1}$, and $d^{n-1}=i^nj^{n-1}$, where $i^n: K^n\to X^n$ is the kernel of $d^n$, we obtain two {\it smart truncations} $$\tau_{\ge n}(X^\dt\hspace{2pt}): \quad \xymatrix{\cdots \ar[r] &0\ar[r]& C^n \ar[r]^{q^n} &X^{n+1} \ar[r]^{d^{n+1}} &X^{n+2} \ar[r] &\cdots,} $$ where $C^n$ is of degree $n$, with a canonical projection $p_n^\wdt\hspace{2pt}: X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\to \tau_{\ge n}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ so that $p_n^n= p^n$ and $p^s_n=\id_{X^s}$ for all $s>n$; and $$\tau_{\le n}(X^\dt\hspace{2pt}): \quad \xymatrix{\cdots \ar[r] & X^{n-2} \ar[r]^{d^{n-2}} & X^{n-1} \ar[r]^-{j^{n-1}} & K^n \ar[r] & 0\ar[r] & \cdots }$$ where $K^n$ is of degree $n$, with a canonical injection $i_n^\wdt\hspace{2pt}: \tau_{\le n}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})\to X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ so that $i_n^n=i^n$ and $i_n^t=\id_{X^t}$ for all $t<n.$ \begin{Lemma}\label{bhomology} Let $\mathfrak B$ be an abelian subcategory of an abelian category $\mathfrak{A}$. Consider $X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \in C^*(\mathfrak A)$ with $*\in \{\emptyset, -, +, b\}$ and $Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in C(\mathfrak B)$.
If $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\cong Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ in $D(\mathfrak A)$, then there exists $Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in C^*(\mathfrak B)$ such that $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\cong Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ in $D^*(\mathfrak A)$. \end{Lemma} \noindent{\it Proof.} We shall only consider the case where $X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \in C^b(\mathfrak A)$ and $Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in C(\mathfrak B)$ such that $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\cong Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ in $D(\mathfrak A)$. Let $s, t$ with $s\le t$ be such that $X^p=0$ for $p\not\in [s, t]$. Then, ${\rm H}^p(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}})=0$ for all $p\not\in [s, t]$, and hence, the canonical injection $i_t^\wdt\hspace{2pt}: \tau_{\le t}(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}})\to Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ and the canonical projection $p_s^\wdt\hspace{2pt}: \tau_{\le t}(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \to \tau_{\ge s}(\tau_{\le t}(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}))$ are quasi-isomorphisms; see \cite[(III.3.4.1), (III.3.4.2)]{Mil}. As a consequence, $\tau_{\ge s}(\tau_{\le t}(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}))\in C^b(\mathfrak B)$ such that $X^\dt\hspace{2pt}\cong \tau_{\ge s}(\tau_{\le t}(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}))$ in $D^b(\mathfrak A)$. The proof of the lemma is completed. In the same fashion, one can show that $D^*(\mathfrak A)$ with $*\in \{-, +, b\}$ fully embeds in $D(\mathfrak A)$; see \cite[(III.3.4.3), (III.3.4.4), (III.3.4.5)]{Mil}. In the sequel, we shall always regard $D^*(\mathfrak A)$ as a full triangulated subcategory of $D(\mathfrak A)$. \section{Tri-exact structure and stable categories} \noindent The objective of this section is to study the tri-exact structure in a non-axiomatic fashion and introduce the stable categories of a tri-exact category. 
They are analo\-gous to the exact structure and the stable categories of an exact category as described in \cite{LeZ, LNP}. More importantly, we shall show that every exact category is equivalent to a tri-exact category with equivalent stable categories. Throughout this section $\mathcal{C}$ will denote a tri-exact category, say an extension-closed subcategory of a triangulated category $\mathcal{A}$. Given $X, Y\in \mathcal{C}$, we shall write ${\rm Ext}^1_\mathcal{C}(X, Y)={\rm Hom}_\mathcal{A}(X, Y[1])$, whose elements will be called {\it extensions} of $Y$ by $X$. Given an extension $\delta\in{\rm Ext}^1_\mathcal{C}(X, Y)$ and two morphisms $f\in {\rm Hom}_{\mathcal{C}}(M, X)$ and $g\in {\rm Hom}_{\mathcal{C}}(Y, N)$, we shall define $$\delta \cdot f=\delta \circ f \in {\rm Hom}_\mathcal{A}(M, Y[1])={\rm Ext}_\mathcal{C}^1(M, Y) $$ and $$g\cdot \delta=g[1]\circ \delta\in {\rm Hom}_\mathcal{A}(X, N[1])={\rm Ext}_\mathcal{C}^1(X, N). $$ This yields the following trivial observation. \begin{Lemma}\label{Ext-bimod} Let $\mathcal{C}$ be a tri-exact category. Given an extension $\delta$ and two morphisms $f, g$ in $\mathcal{C}$, the following equations hold whenever the composites make sense$\,:$ $$(g\cdot \delta )\cdot f =g\cdot (\delta \cdot f); \quad (\delta\cdot f) \cdot g =\delta \cdot (fg); \quad g \cdot (f \cdot \delta)= (gf ) \cdot \delta.$$ \end{Lemma} The tri-exact structure of $\mathcal{C}$ consists of the tri-exact sequences as defined below. \begin{Defn} Let $\mathcal{C}$ be an extension-closed subcategory of a triangulated cate\-gory $\mathcal{A}$. A sequence of morphisms $\hspace{-3pt}\xymatrixcolsep{18pt}\xymatrix{X \ar[r] & Y \ar[r] & Z}\hspace{-2pt}$ in $\mathcal{C}$ is called a {\it tri-exact sequence} if it embeds in an exact triangle $$\xymatrix{X \ar[r] &Y \ar[r] &Z \ar[r]^-\delta & X[1]} $$ in $\mathcal{A}$. 
In this case, we say that the tri-exact sequence is {\it defined} by $\delta\in {\rm Ext}_\mathcal{C}^1(Z, X)$, and call $X$ the {\it starting term} and $Z$ the {\it ending term}. \end{Defn} \noindent{\sc Remark.} It is well known (see \cite[(1.2)]{H}) that a tri-exact sequence is a pseudo-exact sequence as defined in \cite[Section 1]{Liu1}. The converse, however, is not true. The following statement describes some basic properties of the tri-exact structure of a tri-exact category. \begin{Lemma}\label{section-retraction} Let $\mathcal{C}$ be a tri-exact category, and let $\hspace{-2pt}\xymatrixcolsep{18pt}\xymatrix{X\ar[r]^f &Y\ar[r]^g &Z}\hspace{-3pt}$ be a tri-exact sequence defined by an extension $\delta\in {\rm Ext}_\mathcal{C}^1(Z, X)$. \begin{enumerate}[$(1)$] \item A morphism $u: X\to M$ factors through $f$ if and only if $u\cdot \delta =0$. \item A morphism $v: N\to Z$ factors through $g$ if and only if $\delta \cdot v=0.$ \item The morphism $f$ is a section if and only if $g$ is a retraction if and only if $\delta=0$. \item If $X$ or $Z$ is strongly indecomposable, then $g$ is right minimal or $f$ is left minimal, respectively. \end{enumerate} \end{Lemma} \noindent{\it Proof.} Assume that $\mathcal{C}$ is an extension-closed subcategory of a triangulated category $\mathcal{A}$. Statements (3) and (4) follow from some well-known properties of $\mathcal{A}$; see \cite[(1.4)]{H} and \cite[(2.4),(2.5)]{Kra2}. Given a morphism $u: X\to M$ in $\mathcal{C}$, we obtain a commutative diagram with rows being exact triangles $$\xymatrixrowsep{16pt}\xymatrix{X\ar[r]^f \ar[d]_-u & Y\ar[r]^g \ar@{.>}[d] & Z\ar@{=}[d] \ar[r]^\delta & X[1] \ar[d]^{u[1]}\\ M\ar[r]^{f'} & L \ar[r]^{g'} & Z \ar[r]^-{u\cdot \delta} & M[1]}$$ in $\mathcal{A}$, where $L\in \mathcal{C}$. If $u\cdot \delta=0,$ then $f'$ is a section, and hence, $u$ factors through $f$.
If $u$ factors through $f$ then, by rotating the top exact triangle to the left, we see that $u\circ (-\delta[-1])=0$, and hence, $u\cdot \delta=u[1]\circ \delta=0$. This establishes Statement (1). Dually, we can prove Statement (2). The proof of the lemma is completed. We are ready to introduce the stable categories of $\mathcal{C}$. Fix an object $M$ in $\mathcal{C}$. Given a morphism $f: X\to Y$ in $\mathcal{C}$, we obtain two $\mathbb{Z}$-linear maps $${\rm Ext}^1_\mathcal{C}(M, f): {\rm Ext}^1_\mathcal{C}(M, X)\to {\rm Ext}^1_\mathcal{C}(M, Y): \delta\mapsto f \cdot \delta $$ and $${\rm Ext}^1_\mathcal{C}(f, M): {\rm Ext}^1_\mathcal{C}(Y, M)\to {\rm Ext}^1_\mathcal{C}(X, M): \zeta \mapsto \zeta\cdot f. $$ This consideration yields a covariant functor ${\rm Ext}^1_\mathcal{C}(M, -): \mathcal{C}\to \hbox{{\rm Mod}\hspace{0.5pt}} \mathbb{Z}$ and a contravariant functor ${\rm Ext}^1_\mathcal{C}(-, M): \mathcal{C}\to \hbox{{\rm Mod}\hspace{0.5pt}} \mathbb{Z}.$ A morphism $f: X\to Y$ in $\mathcal{C}$ is called {\it injectively trivial} if ${\rm Ext}_\mathcal{C}^1(M, f)=0$ for all $M\in \mathcal{C};$ and {\it projectively trivial} if ${\rm Ext}_\mathcal{C}^1(f, M)=0$ for all $M\in \mathcal{C}$; compare \cite[Section 2]{LeZ}. Moreover, an object $X\in \mathcal{C}$ is called {\it Ext-injective} if $\id_X$ is injectively trivial, or equivalently, ${\rm Ext}_\mathcal{C}^1(M, X)= 0$ for all $M\in \mathcal{C}$; and {\it Ext-projective} if $\id_X$ is projectively trivial, or equivalently, ${\rm Ext}_\mathcal{C}^1(X, N)= 0$ for all $N\in \mathcal{C}$. Clearly, the injectively trivial morphisms and the projectively trivial morphisms form two ideals written as $\mathcal{I}\hspace{.3pt}_\mathcal{C}$ and $\mathcal{P}_\mathcal{C}$ in $\mathcal{C}$, respectively. The following observation is important. \begin{Lemma}\label{left-tri-ideal} Let $\mathcal{C}$ be an extension-closed subcategory of a triangulated category.
\begin{enumerate}[$(1)$] \item If $X\in \mathcal{C} \cap \mathcal{C}[1]$, then $\mathcal{P}_\mathcal{C}(M, X)=0$ for all $M\in \mathcal{C}$. \item If $X\in \mathcal{C} \cap \mathcal{C}[-1]$, then $\mathcal{I}_{\hspace{.5pt}\mathcal{C}}(X, N)=0$ for all $N\in \mathcal{C}$. \end{enumerate}\end{Lemma} \noindent{\it Proof.} We shall only prove Statement (1). Let $f: M\to X$ be projectively trivial, where $X\in \mathcal{C} \cap \mathcal{C}[1]$. Since $X[-1]\in \mathcal{C}$, we see that $\id_X \in {\rm Ext}^1_{\mathcal{C}}(X, X[-1])$ is such that ${\rm Ext}_\mathcal{C}^1(f, X[-1])(\id_X)=0,$ that is, $f= 0$. The proof of the lemma is completed. We are ready to define the stable categories of a tri-exact category. \begin{Defn} Let $\mathcal{C}$ be a tri-exact category. We shall call $\overline{\hspace{-1pt}\mathcal{C}}=\mathcal{C}/\mathcal{I}_\mathcal{C}$ the {\it injectively stable category}, and $\underline{\mathcal{C}\hspace{-1pt}}=\mathcal{C}/\mathcal{P}_\mathcal{C}$ the {\it projectively stable category}, of $\mathcal{C}.$ \end{Defn} \noindent{\sc Remark.} In view of Lemmas \ref{left-tri} and \ref{left-tri-ideal}, we see that $\underline{\mathcal{C}\hspace{-1pt}}=\mathcal{C}$ in case $\mathcal{C}$ is a left triangulated category, and $\hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}=\mathcal{C}$ in case $\mathcal{C}$ is a right triangulated category. We shall put $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.8pt}C}(X, Y)= {\rm Hom}_{\hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}}(X, Y)$ and $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{1pt}C}(X, Y)={\rm Hom}_{\hspace{1pt}\underline{\mathcal C\hspace{-1pt}}\hspace{1pt}}(X, Y)$, for all $X, Y\in \mathcal{C}$. Consider a morphism $f: X\to Y$ in $\mathcal{C}$. 
We shall write $\bar{\hspace{.5pt}f}$ and $\underline{f\hspace{-2pt}}\hspace{2pt}$ for its images in $\overline{\rm Hom}_\mathcal{\hspace{.8pt}C}(X, Y)$ and $\underline{\hspace{-.5pt}\rm Hom\hspace{-.5pt}}_\mathcal{\hspace{2pt}C}(X, Y)$, respectively. In this way, we may define $\bar{\hspace{.5pt}f} \cdot \zeta=f \cdot \zeta$ and $\delta \cdot \underline{f\hspace{-2pt}}\hspace{2pt}=\delta \cdot f$, for all $\zeta\in {\rm Ext}^1_\mathcal{C}(M, X)$ and $\delta\in {\rm Ext}^1_\mathcal{C}(Y, M)$. \begin{Lemma}\label{Lift-stab-obj} Let $\mathcal{C}$ be a Krull-Schmidt tri-exact category. \begin{enumerate}[$(1)$] \item If $X\in \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\hspace{.5pt}$ is indecomposable, then there exists an indecomposable object $M$ in $\mathcal{C}$ such that $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(X, -)\cong \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(M, -)$ and ${\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(X, -)\cong {\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(M, -)$. \item If $X\in \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}$ is indecomposable, then there exists an indecomposable object $N$ in $\mathcal{C}$ such that $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(-, X)\cong \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(-, N)$ and ${\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(-, X)\cong {\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(-, N)$. \end{enumerate} \end{Lemma} \noindent{\it Proof.} We shall only prove the first statement. Let $X\in \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}$ be indecomposable. Since $\mathcal{C}$ is Krull-Schmidt, ${\rm End}(X)$ is semiperfect; see \cite[(1.1)]{LNP}.
Thus, ${\rm End}(X)$ has a complete orthogonal set $\{e_1, \ldots, e_n\}$ of primitive idempotents such that $e_i {\rm End}(X) e_i$ is local, for $i=1, \ldots, n$; see \cite[(27.6)]{AnF}. Since $\underline{{\rm End}\hspace{-1pt}}\hspace{1.5pt}(X)$ is local, we may assume that $\underline{\hspace{-.5pt}e\hspace{-.8pt}}\hspace{.5pt}_1=\underline{\id}\hspace{1pt}_X.$ Let $q: M\to X$ and $p: X\to M$ be morphisms such that $p \hspace{.5pt}q=\id_M$ and $q p=e_1$. Observing that ${\rm End}(M)\cong e_1 {\rm End}(X) e_1$, we see that $M$ is indecomposable in $\mathcal{C}$. Given $Y\in \mathcal{C}$, since $\id_X-e_1$ is projectively trivial, we obtain two $\mathbb{Z}$-linear isomorphisms ${\rm Ext}^1_\mathcal{C}(p\hspace{.5pt}, Y): {\rm Ext}_\mathcal{C}^1(M, Y)\to {\rm Ext}_\mathcal{C}^1(X, Y)$ and $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(\hspace{1pt}\underline{p\hspace{-1pt}}\hspace{1.5pt}, Y): \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(M, Y)\to \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(X, Y)$, which are evidently natural in $Y$. The proof of the lemma is completed. Next, we shall relate exact categories to tri-exact categories. Fix an abelian category $\mathfrak A$ and consider its derived category $D(\mathfrak A)$. Given an object $X$ in $\mathfrak{A}$, we obtain a stalk complex $X[n]$ whose component of degree $-n$ is $X$. Given a morphism $f: X\to Y$ in $\mathfrak{A}$, we obtain a morphism $f[n]: X[n]\to Y[n]$ whose component of degree $-n$ is $f$. Consider the canonical embedding functor $$\mathbb{D}: \mathfrak{A}\to D(\mathfrak{A}): X\mapsto X[0]; f\mapsto \tilde{f}[0], $$ where $\tilde{f}[0]$ is the image of $f[0]$ under $\mathbb{L}\circ \mathbb{P}: C(\mathfrak{A})\to D(\mathfrak{A})$; see \cite[(III.3.4.7)]{Mil}. Let $\mathscr{C}$ be an extension-closed subcategory of $\mathfrak A$. 
We shall denote by $\hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$ the full subcategory of $D(\mathfrak{A})$ of complexes $X^\dt\hspace{2pt}$ with ${\rm H}^0(X^\dt\hspace{2pt})\in \mathscr{C}$ and ${\rm H}^i(X^\dt\hspace{2pt})=0$ for $i\ne 0$. \begin{Lemma}\label{ext-closed} Let $\mathscr{C}$ be an extension-closed subcategory of an abelian category $\mathfrak{A}$. \begin{enumerate}[$(1)$] \item The category $\hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$ is an extension-closed subcategory of $D(\mathfrak{A})$. \item Given $X^\dt\hspace{2pt}\in \hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$, there exists a natural isomorphism $\theta_{\hspace{-.5pt}X^\dt\hspace{2pt}}: X^\dt\hspace{2pt}\to {\rm H}^0(X^\dt\hspace{2pt})[0]$ in $\hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}.$ \end{enumerate} \end{Lemma} \noindent{\it Proof.} Given $X^\dt\hspace{2pt}\in \hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$, consider the smart truncations $\tau_{\le 0}(X^\dt\hspace{2pt})$ and $\tau_{\ge 0}(\tau_{\le 0}(X^\dt\hspace{2pt}))$. Since ${\rm H}^i(X^\dt\hspace{2pt})=0$ for all $i\ne 0$, the canonical injection $i_0^\wdt\hspace{2pt}: \tau_{\le 0}(X^\dt\hspace{2pt}) \to X^\dt\hspace{2pt}$ and the canonical projection $p_0^\wdt\hspace{2pt}: \tau_{\le 0}(X^\dt\hspace{2pt}) \to \tau_{\ge 0}(\tau_{\le 0}(X^\dt\hspace{2pt})) $ are quasi-isomorphisms; see \cite[(III.3.4)]{Mil}. Observing that $\tau_{\ge 0}(\tau_{\le 0}(X^\dt\hspace{2pt}))={\rm H}^0(X^\dt\hspace{2pt})[0]$, we obtain an isomorphism $\theta_{\hspace{-.5pt}X^{\hspace{1.3pt}\dt\hspace{1pt}}}: X^\dt\hspace{2pt}\to {\rm H}^0(X^\dt\hspace{2pt})[0]$ in $\hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$, which is evidently natural in $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$.
Let $\hspace{-3pt}\xymatrixcolsep{18pt}\xymatrix{ X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & Y^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & Z^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & X^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]\hspace{-3pt}},$ where $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}, Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in \hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$, be an exact triangle in $D(\mathfrak{A})$. By the long exact sequence of cohomology, ${\rm H}^i(Y^\dt\hspace{2pt})=0$ for all $i\ne 0$ and $\hspace{-2pt}\xymatrixcolsep{18pt}\xymatrix{0\ar[r] & {\rm H}^0(X^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[r] & {\rm H}^0(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[r] & {\rm H}^0(Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[r] & 0}\hspace{-2pt} $ is a short exact sequence in $\mathfrak{A}$. Since ${\rm H}^0(X^{\hspace{0.5pt}\dt\hspace{1.5pt}}), {\rm H}^0(Z^{\hspace{0.5pt}\dt\hspace{1.5pt}})\in \mathscr{C} $, we see that ${\rm H}^0(Y^{\hspace{1.3pt}\dt\hspace{1pt}}) \in \mathscr{C}$. The proof of the lemma is completed. By Lemma \ref{ext-closed}, restricting the canonical embedding $\mathbb{D}: \mathfrak{A}\to D(\mathfrak{A})$ yields an equivalence $\mathbb{D}_\mathscr{\hspace{.4pt}C}: \mathscr{C} \to \hat{\hspace{1.5pt}\mathscr{C}}.$ We shall say that a functor $F: \mathscr{C}\to \hbox{{\rm Mod}\hspace{0.5pt}} \mathbb{Z}$ is {\it essentially equivalent} to a functor $\hat{F}: \hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}} \to \hbox{{\rm Mod}\hspace{0.5pt}} \mathbb{Z}$ provided that $F\cong \hat{F}\circ \mathbb{D}\hspace{.4pt}_\mathscr{C}.$ \begin{Lemma}\label{Fun-equiv} Let $\mathscr{C}$ be an extension-closed subcategory of an abelian category $\mathfrak{A}$, and consider an object $X$ in $\mathscr{C}$. 
\begin{enumerate}[$(1)$] \item The functors ${\rm Ext}_\mathscr{C}^1(X, -)$ and ${\rm Ext}_\mathscr{C}^1(-, X)$ are essentially equivalent to the functors ${\rm Ext}_{\hspace{-1pt}\hat{\hspace{2pt}\mathscr{C}}}^1(X[0], -) $ and ${\rm Ext}_{\hspace{-1pt}\hat{\hspace{2pt}\mathscr{C}}}^1(-, X[0]),$ respectively. \item The functors $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathscr{\hspace{.5pt}C}(X, -) $ and $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathscr{\hspace{.5pt}C}(\hspace{-1pt}-, X)$ are essentially equivalent to the functors $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{-1pt}\hat{\hspace{2pt}\mathscr{C}}}(X[0], \hspace{-1pt}-\hspace{-1pt})$ and $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}}(-, \hspace{-1pt}X[0]),$ respectively. \end{enumerate}\end{Lemma} \noindent{\it Proof.} Given an extension $\delta\in {\rm Ext}_{\hspace{.4pt}\mathscr{C}}^1(M, N)$ represented by a short exact sequence $$\xymatrixcolsep{18pt}\xymatrix{0\ar[r] & N \ar[r]^f& L \ar[r]^g & M \ar[r] &0}$$ in $\mathscr{C}$, it is well-known that $D(\mathfrak A)$ has an induced exact triangle $$\xymatrix{N[0] \ar[r]^{\tilde{f}[0]} & L[0] \ar[r]^{\tilde{g}[0]} & M[0] \ar[r]^{\tilde\delta} & N[1].} $$ This yields an isomorphism $$\mathbb{E}_{M, N}: {\rm Ext}_{\hspace{.4pt}\mathscr{C}}^1(M, N) \to {\rm Ext}_{\hspace{-1pt}\hat{\hspace{2pt}\mathscr{C}}}^1(M[0], N[0]): \delta\mapsto \tilde{\delta},$$ such that $\mathbb{E}_{N,M}(f\cdot \delta \cdot g)=\tilde{f}[0]\cdot \tilde{\delta}\cdot \tilde{g}[0]$ for all $f\in {\rm Hom}_\mathscr{\hspace{.5pt}C}(X, M)$, $\delta\in {\rm Ext}_\mathscr{\hspace{.5pt}C}^1(Z,X)$ and $g\in {\rm Hom}_\mathscr{\hspace{.5pt}C}(N, Z)$; see \cite[(IV.2.1.1)]{Mil}. Thus, Statement (1) holds.
To show Statement (2), we claim that a morphism $f: X\to Y$ in $\mathscr{C}$ is injectively trivial if and only if $\tilde{f}[0]$ is injectively trivial in $\hspace{-2pt}\hat{\mathscr{\hspace{2pt}C}}$. Assume first that $\tilde{f}[0]$ is injectively trivial in $\hspace{-2pt}\hat{\mathscr{\hspace{2pt}C}}$. Given any $\delta\in {\rm Ext}^1_\mathscr{\,C}(Z, X)$, we obtain $\mathbb{E}_{Z,Y}(f\cdot \delta)=\tilde{f}[0] \cdot \tilde{\delta}=0$, and hence, $f\cdot \delta=0$. That is, $f$ is injectively trivial in $\mathscr{C}$. Conversely, assume that $f$ is injectively trivial in $\mathscr{C}.$ Let $\zeta^\wdt\hspace{2pt}\in {\rm Ext}_{\hspace{-2pt}\hat{\mathscr{\hspace{2pt}C}}}^1(Z^{\hspace{1.3pt}\dt\hspace{1pt}}, X[0])$. Setting $Z={\rm H}^0(Z^{\hspace{1.3pt}\dt\hspace{1pt}})$, by Lemma \ref{ext-closed}(2), we have an isomorphism $\theta_{Z^{\hspace{1.3pt}\dt\hspace{1pt}}}: Z^{\hspace{1.3pt}\dt\hspace{1pt}}\to Z[0]$ in $\hspace{-2pt}\hat{\mathscr{\hspace{2pt}C}}$. Thus, $\zeta^\wdt\hspace{2pt} \cdot \theta_{\hspace{-1pt}Z^{\hspace{1.3pt}\dt\hspace{1pt}}}^{-1}=\tilde{\delta}$ for some $\delta\in {\rm Ext}_\mathscr{C}^1(Z, X)$. Observing that $\tilde{f}[0]\cdot \tilde{\delta}=\mathbb{E}_{Z,Y}(f \cdot \delta)=\mathbb{E}_{Z,Y}(0)=0,$ we obtain $\tilde{f}[0]\cdot \zeta^\wdt\hspace{2pt}=0$. That is, $\tilde{f}[0]$ is injectively trivial in $\hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$. This establishes our claim. As a consequence, $\mathbb{D}_\mathscr{C}: \mathscr{C}\to \hat{\hspace{2pt}\mathscr{C}}$ induces an equivalence between the injectively stable categories. In particular, $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathscr{\hspace{.5pt}C}(X, -) $ is essentially equivalent to $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{-1pt}\hat{\hspace{2pt}\mathscr{C}}}(X[0], \hspace{-1pt}-\hspace{-1pt})$. In a dual fashion, we may establish the second part of Statement (2). The proof of the lemma is completed. 
The following statement is needed in the next section. \begin{Lemma} \label{Fun-mono} Let $\mathscr{C}$ be an extension-closed subcategory of an abelian category $\mathfrak{A}$, and let $F, G: \mathscr{C}\to \hbox{{\rm Mod}\hspace{0.5pt}} \mathbb{Z}$ be functors essentially equivalent to $\hat{F}, \hat{G}: \hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}} \to \hbox{{\rm Mod}\hspace{0.5pt}} \hspace{.3pt} \mathbb{Z}$, respectively. Then there exists a $(\hspace{-1.5pt}$mono, iso$)$morphism $\eta: F\to G$ if and only if there exists a $(\hspace{-1.5pt}$mono, iso$)$morphism $\hat{\eta}: \hat{F}\to \hat{G}$. \end{Lemma} \noindent{\it Proof.} Let $\zeta: F\to \hat{F} \circ \mathbb{D}_\mathscr{C}$ and $\xi: G\to \hat{G} \circ \mathbb{D}_\mathscr{C}$ be isomorphisms. Firstly, assume that $\hat{\eta}: \hat{F}\to \hat{G}$ is a morphism. Given $X\in \mathscr{C}$, we set $\eta_X=\xi_X^{-1}\circ \hat{\eta}_{X[0]}\circ \zeta_X$, which is a monomorphism or isomorphism in case $\hat{\eta}_{X[0]}$ is a monomorphism or isomorphism, respectively. This yields a desired (mono, iso)morphism $\eta: F\to G$. Conversely, assume that $\eta: F \to G$ is a morphism. Given $X^\dt\hspace{2pt}\in \hspace{-2pt}\hat{\hspace{2.5pt}\mathscr{C}}$, by Lemma \ref{ext-closed}(2), there exists a natural isomorphism $\theta_{\hspace{-.5pt}X^{\hspace{1.3pt}\dt\hspace{1pt}}}: X^\dt\hspace{2pt}\to X[0]$, where $X={\rm H}^0(X^\dt\hspace{2pt}).$ We define $\hat{\eta}_{X^\dt\hspace{2pt}}$ to be the composite of the following morphisms $$\xymatrix{\hat{F}(X^\dt\hspace{2pt})\ar[r]^{\hat{F}(\theta_{X^{\hspace{1.3pt}\dt\hspace{1pt}}})} & \hat{F}(X[0])\ar[r]^{\zeta_X^{-1}} & F(X)\ar[r]^{\eta_X}& G(X) \ar[r]^{\xi_X}& \hat{G}(X[0]) \ar[r]^{\hat{G}(\theta_{X^\dt\hspace{2pt}}^{-1})} & \hat{G}(X^\dt\hspace{2pt}),}$$ which is a monomorphism or isomorphism if $\eta_X$ is a monomorphism or isomorphism, respectively. This yields a desired (mono, iso)morphism $\hat{\eta}: \hat{F}\to \hat{G}$. The proof of the lemma is completed.
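\noindent{\sc Example.} Before proceeding, we illustrate the above constructions with a standard example. Let $\mathfrak{A}=\hbox{{\rm Mod}\hspace{0.5pt}}\mathbb{Z}$ and let $\mathscr{C}$ be the full subcategory of torsion-free abelian groups. Given a short exact sequence $$\xymatrixcolsep{18pt}\xymatrix{0\ar[r] & X \ar[r] & Y \ar[r] & Z \ar[r] &0}$$ in $\hbox{{\rm Mod}\hspace{0.5pt}}\mathbb{Z}$ with $X, Z\in \mathscr{C}$, any torsion element of $Y$ maps to a torsion element of $Z$, hence lies in $X$ and vanishes; that is, $\mathscr{C}$ is extension-closed. Note that $\mathscr{C}$ is not an abelian subcategory of $\mathfrak{A}$: the cokernel of the multiplication $\xymatrixcolsep{18pt}\xymatrix{\mathbb{Z} \ar[r]^2 & \mathbb{Z}}$ is $\mathbb{Z}/2\mathbb{Z}$, which is not torsion-free. By Lemma \ref{ext-closed}(1), the complexes $X^\dt\hspace{2pt}\in D(\hbox{{\rm Mod}\hspace{0.5pt}}\mathbb{Z})$ with ${\rm H}^0(X^\dt\hspace{2pt})$ torsion-free and ${\rm H}^i(X^\dt\hspace{2pt})=0$ for $i\ne 0$ form an extension-closed subcategory of $D(\hbox{{\rm Mod}\hspace{0.5pt}}\mathbb{Z})$.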
\section{Almost split sequences} \noindent The objective of this section is to study the existence of an individual almost split sequence in a tri-exact category. Using similar but more general techniques, we shall unify and extend the results under various classical settings; see \cite{ABM, Kra2, LNP, LeZ, RvdB}. In particular, Auslander's existence theorem for an almost split sequence in the category of all modules over a ring (see \cite{AUS}) and Krause's existence theorem for an almost split triangle in a triangulated category (see \cite{Kra2}) fit well into our setting. Throughout this section, $\mathcal{C}$ stands for a tri-exact category, say an extension-closed subcategory of a triangulated category $\mathcal{A}$. The following notion plays a fundamental role in our investigation. \begin{Defn}\label{art} Let $\mathcal{C}$ be a tri-exact category. A tri-exact sequence $$\xymatrix{X \ar[r]^f & Y \ar[r]^g &Z}$$ defined by an extension $\delta\in {\rm Ext}_\mathcal{C}^1(Z, X)$ is called {\it almost split} if $f$ is minimal left almost split and $g$ is minimal right almost split. In this case, $\delta$ is called {\it almost-zero}. \end{Defn} \noindent{\sc Remark.} An almost split sequence with a non-zero middle term is an Auslander-Reiten sequence as defined in \cite[(1.3)]{Liu1}. The converse is probably not true. Modifying slightly the proof of the proposition stated in \cite[(3.5)]{Hap2}, we obtain the uniqueness of an almost split sequence in a tri-exact category as follows. \begin{Prop}\label{ARS-uni} Let $\mathcal{C}$ be a tri-exact category. If $\hspace{-3pt}\xymatrixcolsep{18pt}\xymatrix{X\ar[r]&Y\ar[r] &Z}\hspace{-3pt}$ is an almost split sequence in $\mathcal{C}$, then it is unique up to isomorphism for $X$ and for $Z$. \end{Prop} The following statement says in particular that the study of almost split sequences under various classical settings can be unified under our tri-exact setting.
\begin{Prop}\label{ART-ARS} Let $\mathscr{C}$ be an extension-closed subcategory of an abelian category $\mathfrak{A}$. Then every almost split sequence $\xymatrixcolsep{18pt}\xymatrix{0\ar[r] & X \ar[r] & Y\ar[r] & Z \ar[r] & 0} $ in $\mathscr{C}$ induces an almost split sequence $\xymatrixcolsep{18pt}\xymatrix{\hspace{-3pt} X[0] \ar[r] & Y[0] \ar[r] & Z[0]\hspace{-3pt}} $ in $\hspace{-2pt}\hat{\mathscr{\hspace{2pt}C}};$ and every almost split sequence in $\hspace{-2pt}\hat{\mathscr{\hspace{2pt}C}}$ can be obtained in this way. \end{Prop} \noindent{\it Proof.} Since $\mathbb{D}_\mathscr{C}: \mathscr{C}\to \hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$ is an equivalence, the first part of the proposition follows immediately. Assume that $\hspace{-2pt}\hat{\mathscr{\hspace{2pt}C}}$ has an almost split sequence $$\xymatrix{\hspace{-3pt} X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\ar[r]^{f^\wdt\hspace{2pt}} & Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}\ar[r]^{g^{\hspace{0.5pt}\dt\hspace{1.5pt}}} & Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}\hspace{-3pt}}$$ defined by an extension $\eta^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in {\rm Ext}_{\hspace{-2pt}\hat{\mathscr{\hspace{2pt}C}}}^1(Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}})$. Applying the long exact sequence of cohomology, we obtain a short exact sequence $$\delta: \quad \xymatrix{0\ar[r] & {\rm H}^0(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})\ar[r]^{f} & {\rm H}^0(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}})\ar[r]^{g} & {\rm H}^0(Z^{\hspace{0.5pt}\dt\hspace{1.5pt}})\ar[r] & 0} $$ in $\mathscr{C},$ where $f={\rm H}^0(f^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ and $g={\rm H}^0(g^{\hspace{0.5pt}\dt\hspace{1.5pt}})$.
In view of Lemma \ref{ext-closed}(2), there exists a commutative diagram with vertical isomorphisms $$\xymatrixrowsep{16pt}\xymatrix{X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\ar[r]^{f^\wdt\hspace{2pt}} \ar[d]_-{\theta_{\hspace{-.5pt}X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}} & Y^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r]^{g^{\hspace{0.5pt}\dt\hspace{1.5pt}}} \ar[d]^-{\theta_{\hspace{-.5pt} Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}}} & Z^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar@{.>}[d]^-{\zeta^\wdt\hspace{2pt}} \ar[r]^{\eta^{\hspace{0.5pt}\dt\hspace{1.5pt}}} & X^\dt\hspace{2pt}[1] \ar[d]^-{\theta_{\hspace{-1pt}X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}[1]}\\ {\rm H}^0(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})[0] \ar[r]^{\tilde{f}[0]} & {\rm H}^0(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}})[0]\ar[r]^{\tilde{g}[0]} & {\rm H}^0(Z^{\hspace{0.5pt}\dt\hspace{1.5pt}})[0] \ar[r]^{\tilde{\delta}} & {\rm H}^0(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})[1] }$$ in $D(\mathfrak{A})$, where the rows are exact triangles. In particular, $\tilde{f}[0]$ is minimal left almost split and $\tilde{g}[0]$ is minimal right almost split in $\hat{\hspace{2pt}\mathscr{C}}$. Since $\mathbb{D}_\mathscr{C}: \mathscr{C}\to \hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$ is an equivalence, $\delta$ is almost split in $\mathscr{C}$. The proof of the proposition is completed. The following characterization of an almost split sequence in a tri-exact category is adapted from those in the classical settings; see \cite{AuR2, Kra2}. \begin{Theo}\label{ART-equivalence} Let $\mathcal{C}$ be a tri-exact category. If $\xymatrix{X\ar[r]^f &Y\ar[r]^g &Z}$ is a tri-exact sequence in $\mathcal{C}$, then the following statements are equivalent. \begin{enumerate}[$(1)$] \item The sequence is an almost split sequence in $\mathcal{C}$. \item The morphism $f$ is left almost split and $g$ is right almost split. \item The morphism $f$ is left almost split and $Z$ is strongly indecomposable. \item The morphism $g$ is right almost split and $X$ is strongly indecomposable. 
\item The morphism $f$ is minimal left almost split or $g$ is minimal right almost split. \end{enumerate} \end{Theo} \noindent{\it Proof.} Let $\eta\in {\rm Ext}_\mathcal{C}^1(Z, X)$ be the extension defining the tri-exact sequence stated in the theorem. If Statement (2) holds, then $X$ and $Z$ are strongly indecomposable (see \cite[(2.3)]{AuR2}), and by Lemma \ref{section-retraction}(4), $g$ is right minimal and $f$ is left minimal; that is, Statement (1) holds. Moreover, by Lemma \ref{section-retraction}(4), either of Statements (3) and (4) implies Statement (5). Assume now that Statement (5) holds. To prove Statement (2), we assume that $\mathcal{C}$ is an extension-closed subcategory of a triangulated category $\mathcal{A}$. If $g$ is minimal right almost split, using the same argument given in \cite[(2.6)]{Kra2}, we may show that $f$ is left almost split. If $f$ is minimal left almost split, one can dually show that $g$ is right almost split. The proof of the theorem is completed. The rest of this section is devoted to the study of the existence of an almost split sequence in a tri-exact category. We start with some properties of almost-zero extensions; compare \cite[(2.2)]{JOR}, \cite[(3.1)]{LeZ} and \cite[Page 306]{RvdB}. \begin{Lemma}\label{art-1} Let $\mathcal{C}$ be a tri-exact category with $\delta\in {\rm Ext}^1_\mathcal{C}(Z, X)$ being almost-zero. \begin{enumerate}[$(1)$] \item A factorization $\delta = \eta \cdot \underline{f\hspace{-2pt}}\hspace{2pt}$ exists whenever a non-zero extension $\eta\in {\rm Ext}^1_\mathcal{C}(L, X)$ or a non-zero morphism $\underline{f\hspace{-2pt}}\hspace{2pt} \in \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.6pt}\mathcal{C}}(Z, L)$ is given.
\item A factorization $\delta=\bar{\hspace{-.6pt}g} \cdot \zeta$ exists whenever a non-zero extension $\zeta \in {\rm Ext}^1_{\mathcal{C}}(Z, L)$ or a non-zero morphism $\bar{\hspace{-.6pt}g}\in \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(L, X)$ is given. \end{enumerate} \end{Lemma} \noindent{\it Proof.} Given a non-zero extension $\eta\in {\rm Ext}^1_\mathcal{C}(L, X)$, using the proof of the first statement of the sublemma stated in \cite[Page 306]{RvdB}, we obtain $\delta = \eta \cdot \underline{f\hspace{-2pt}}\hspace{2pt}$ for some morphism $\underline{f\hspace{-2pt}}\hspace{2pt} \in \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.6pt}\mathcal{C}}(Z, L)$. Given a non-zero extension $\zeta \in {\rm Ext}^1_{\mathcal{C}}(Z, L)$, by a dual argument, we can show that $\delta=\bar{\hspace{-.6pt}g} \cdot \zeta$ for some $\bar{\hspace{-.6pt}g}\in \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(L, X)$. Now, let $f: Z\to L$ be a morphism in $\mathcal{C}$ which is not projectively trivial. Then, there exists some $\zeta\in {\rm Ext}^1_\mathcal{C}(L, M)$ such that $0\ne \zeta\cdot f\in {\rm Ext}^1_\mathcal{C}(Z, M)$. By the first part of Statement (2), there exists some $g: M\to X$ in $\mathcal{C}$ such that $\delta= g\cdot (\zeta \cdot f)=(g \cdot \zeta) \cdot f$. This establishes the second part of Statement (1). Dually, one can verify the second part of Statement (2). The proof of the lemma is completed.
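\noindent{\sc Example.} The following well-known example illustrates Definition \ref{art} in the classical exact setting. Let ${\it\Lambda}$ be the path algebra of the quiver $A_2$ over a field $k$. Up to isomorphism, ${\it\Lambda}$ has exactly three indecomposable modules: the two simple modules and a unique indecomposable module $P$ of length two, which is both projective and injective. The canonical non-split short exact sequence $$\xymatrixcolsep{18pt}\xymatrix{0\ar[r] & {\rm rad}(P) \ar[r] & P \ar[r] & P/{\rm rad}(P) \ar[r] &0}$$ is the unique almost split sequence in $\hbox{{\rm mod}\hspace{0.5pt}}{\it\Lambda}$. Viewing $\mathscr{C}=\hbox{{\rm mod}\hspace{0.5pt}}{\it\Lambda}$ as an extension-closed subcategory of $\hbox{{\rm Mod}\hspace{0.5pt}}{\it\Lambda}$, we obtain from Proposition \ref{ART-ARS} an almost split sequence $\xymatrixcolsep{18pt}\xymatrix{{\rm rad}(P)[0] \ar[r] & P[0] \ar[r] & (P/{\rm rad}(P))[0]}$ in the tri-exact category $\hspace{-2pt}\hat{\hspace{2pt}\mathscr{C}}$.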
Given $X, Z\in \mathcal{C}$, by Lemma \ref{Ext-bimod}, ${\rm Ext}_\mathcal{C}^1(Z, X)$ is an $\overline{{\rm End}}(X)$-$\underline{{\rm End}\hspace{-1pt}}\hspace{1pt}(Z)$-bimodule so that $\overline{\hspace{-2pt}f} \cdot \delta \cdot \underline{g\hspace{-1pt}} = f\cdot \delta \cdot g$, for $f\in {\rm End}(X), \delta \in {\rm Ext}_\mathcal{C}^1(Z, X)$ and $g\in {\rm End}(Z).$ Given $M\in \mathcal{C}$ strongly indecomposable, we always write $S_M={\rm End}(M)/{\rm rad}({\rm End}(M))$, which is a simple left $\overline{{\rm End}}(M)$-module if $M$ is not Ext-injective; and a simple right $\underline{{\rm End}\hspace{-.8pt}}\,(M)$-module if $M$ is not Ext-projective. \begin{Theo}\label{AZE} Let $\mathcal{C}$ be a tri-exact category with $X, Z\in \mathcal{C}$ strongly indecomposable. Consider non-zero ring homomorphisms ${\it\Gamma}\to \overline{{\rm End}}(X)$ and ${\it\Sigma}\to \underline{{\rm End}}(Z)$. Let $_{\it\Gamma}\hspace{-.4pt}I$ be an injective cogenerator of the left ${\it\Gamma}$-module $S_X$ and $I_{\hspace{-.5pt}\it\Sigma}$ an injective cogenerator of the right ${\it\Sigma}$-module $S_Z$. If $\delta\in {\rm Ext}^1_\mathcal{C}(Z, X)$ is non-zero, then $\delta$ being almost-zero is equivalent to each of the following statements. \begin{enumerate}[$(1)$] \item There exists a monomorphism $\Psi: {\rm Ext}_\mathcal{C}^1(Z,-)\to {\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{1pt}C}(-, X), {}_{\it\Gamma}\hspace{-.4pt}I) $ such that $\Psi_X(\delta)$ lies in the socle of the left $\overline{{\rm End}}(X)$-module ${\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{\rm End}(X), {}_{\it\Gamma}\hspace{-.4pt}I)$.
\item There exists a monomorphism $\Phi: {\rm Ext}^1_\mathcal{C}(-, X)\to {\rm Hom}_{\it\Sigma}(\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{1.5pt}C}(Z,-), I_{\hspace{-.6pt}\it\Sigma}) $ such that $\Phi_Z(\delta)$ lies in the socle of the right $\underline{{\rm End}\hspace{-.5pt}}\hspace{.4pt}(\hspace{-.5pt}Z)$-module ${\rm Hom}_{\it\Sigma}(\hspace{.5pt}\underline{\rm End\hspace{-1pt}}(Z), I_{\hspace{-.6pt}\it\Sigma})$. \end{enumerate} \end{Theo} \noindent{\it Proof.} Let $\delta\in {\rm Ext}^1_{\mathcal{C}}(Z, X)$ be non-zero. We shall only consider Statement (1). Let $I$ be an injective cogenerator of the left ${\it\Gamma}$-module $S_X$. Assume first that $\delta$ is almost-zero. Consider the $\overline{{\rm End}}(X)$-submodule $S$ of ${\rm Ext}^1_{\mathcal{C}}(Z, X)$ generated by $\delta$, which is simple by Lemma \ref{art-1}(2). Since $\overline{{\rm End}}(X)$ is local, $S\cong S_X$ as left $\overline{{\rm End}}(X)$-modules, and consequently, $S\cong S_X$ as ${\it\Gamma}$-modules. Thus, we may find a ${\it\Gamma}$-linear map $\psi: {\rm Ext}^1_\mathcal{C}(Z, X) \to I$ with $\psi(\delta)\neq 0.$ Given $L\in \mathcal{C}$, by Lemma \ref{art-1}(2), we obtain a non-degenerate $\mathbb{Z}$-bilinear form $$<\hspace{-3pt}-\,,-\hspace{-3pt}>_L \hspace{1pt}: \hspace{2pt} \overline{\rm Hom}_\mathcal{\hspace{.5pt}C}(L,X) \times {\rm Ext}^1_\mathcal{C}(Z, L)\to I:(\bar{g}, \zeta)\mapsto \psi(\bar{g}\cdot \zeta).$$ Since $\psi$ is left ${\it\Gamma}$-linear, by Lemma \ref{Ext-bimod}, we obtain a $\mathbb{Z}$-linear monomorphism $$\Psi_L: {\rm Ext}^1_\mathcal{C}(Z,L) \to {\rm Hom}_{\it\Gamma}(\hspace{.7pt}\overline{\rm Hom}_\mathcal{\hspace{.5pt}C}(L, X), I): \zeta\mapsto <\hspace{-3pt}-\,,\zeta\hspace{-1pt}>_L,$$ which is natural in $L$. This yields a monomorphism $\Psi$ as stated in Statement (1).
Since $\Psi_X$ is a left $\overline{{\rm End}}(X)$-linear monomorphism, $\Psi_X(S)$ is a simple submodule of the left $\overline{{\rm End}}(X)$-module ${\rm Hom}_{\it\Gamma}(\hspace{.5pt}\overline{\rm End}(X), I).$ This establishes Statement (1). Conversely, assume that $\Psi: {\rm Ext}^1_\mathcal{C}(Z,-) \to {\rm Hom}_{\it\Gamma}(\,\overline{\rm Hom}_\mathcal{\hspace{.5pt}C}(-, X), I) $ is a monomorphism such that $\Psi_X(\delta)$ is in the left $\overline{\rm End}(X)$-socle of ${\rm Hom}_{\it\Gamma}(\hspace{.5pt}\overline{\rm End}(X), I).$ Then, $\Psi_X(\delta)$ vanishes on ${\rm rad}(\overline{{\rm End}}(X)).$ Consider the non-splitting tri-exact sequence $$(*) \qquad \xymatrix{X\ar[r]^f &Y\ar[r]^g &Z} $$ in $\mathcal{C}$ defined by $\delta$. Let $u: X \to L$ be a non-section morphism in $\mathcal{C}$. In view of the commutative diagram $$\xymatrixcolsep{30pt}\xymatrixrowsep{18pt}\xymatrix{{\rm Ext}^1_\mathcal{C}(Z, X) \ar[r]^-{\Psi_X} \ar[d]_{{\rm Ext}^1_\mathcal{C}(Z, \bar{u})} & {\rm Hom}_{\it\Gamma}(\,\overline{{\rm End}}(X), I)\ar[d]^-{{\rm Hom}_{\it\Gamma}(\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(\hspace{-1pt}\bar{\hspace{1pt}u}, X), I)} \\ {\rm Ext}^1_\mathcal{C}(Z, L) \ar[r]^-{\Psi_L} & {\rm Hom}_{\it\Gamma}(\hspace{.5pt}\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.8pt}C}(L, X), I),}$$ we obtain $\Psi_L(\bar{u}\cdot \delta)=\Psi_X(\delta) \circ \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.6pt}C}(\bar{u}, X).$ For any morphism $v: L\to X$ in $\mathcal{C}$, since $v u \in {\rm rad}({\rm End}(X)),$ we see that $\Psi_L(\bar{u}\cdot \delta)(\bar{v})=\Psi_X(\delta)(\bar{v}\hspace{.4pt}\bar{u})= 0.$ Thus, $\Psi_L(\bar{u} \cdot \delta)=0$, and hence, $\bar{u}\cdot \delta=0$. By Lemma \ref{section-retraction}(1), $u$ factors through $f$. That is, $f$ is left almost split.
Since $Z$ is strongly indecomposable, by Theorem \ref{ART-equivalence}(3), the tri-exact sequence $(*)$ is almost split. The proof of the theorem is completed. \noindent{\sc Remark.} In view of Lemmas \ref{Fun-equiv} and \ref{Fun-mono} and Proposition \ref{ART-ARS}, it is easy to see that Theorem \ref{AZE} covers the result stated in \cite[(2.2)]{LNP}. We are now ready to obtain our main existence theorem for almost split sequences. \begin{Theo}\label{existence} Let $\mathcal{C}$ be a tri-exact category with $X, Z\in \mathcal{C}$ strongly indecomposable. Consider non-zero ring homomorphisms ${\it\Gamma}\to \overline{{\rm End}}(X)$ and ${\it\Sigma}\to \underline{{\rm End}}(Z)$. Let $_{\it\Gamma}\hspace{-.0pt}I$ be an injective cogenerator of the left ${\it\Gamma}$-module $S_X$ and $I_{\hspace{-.5pt}\it\Sigma}$ an injective cogenerator of the right ${\it\Sigma}$-module $S_Z$. The following statements are equivalent. \begin{enumerate}[$(1)$] \item There exists an almost split sequence $\hspace{-2pt}\xymatrixcolsep{18pt}\xymatrix{X\ar[r]&Y\ar[r] &Z}\hspace{-2pt}$ in $\mathcal{C}$. \item There exists a monomorphism $\Psi: {\rm Ext}_\mathcal{C}^1(Z,-)\to {\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{1pt}C}(-, X), {}_{\it\Gamma}\hspace{-.4pt}I)$ such that the left $\overline{{\rm End}}(X)$-linear map $\Psi_X$ is socle essential. \item There exists a monomorphism $\Phi: {\rm Ext}^1_\mathcal{C}(-, X)\to {\rm Hom}_{\it\Sigma}(\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{1.5pt}C}(Z,-), I_{\hspace{-.6pt}\it\Sigma})$ such that the right $\underline{{\rm End}\hspace{-.5pt}}\hspace{.4pt}(\hspace{-.5pt}Z)$-linear map $\Phi_Z$ is socle essential. \end{enumerate} \end{Theo} \noindent{\it Proof.} By Theorem \ref{AZE}, it suffices to prove that Statement (2) implies Statement (1).
Let $I$ be an injective cogenerator of the left ${\it\Gamma}$-module $S_X$ with a monomorphism $\Psi: {\rm Ext}_\mathcal{C}^1(Z,-)\to {\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{1pt}C}(-, X), I)$ such that the left $\overline{{\rm End}}(X)$-linear map $\Psi_X: {\rm Ext}_\mathcal{C}^1(Z,X)\to {\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{{\rm End}\hspace{-.5pt}}(X), I)$ is socle essential. Consider the canonical projection $p: \overline{\rm End}(X) \to S_X$ and fix a non-zero ${\it\Gamma}$-linear map $q: S_X\to I$. Then, $qp$ is a non-zero element in ${\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{\rm End}(X), I)$ annihilated by ${\rm rad}(\overline{\rm End}(X))$. Since $\overline{\rm End}(X)$ is local, $qp$ belongs to the left $\overline{\rm End}(X)$-socle of ${\rm Hom}_{\it\Gamma}(\hspace{.5pt}\overline{\rm End}(X), I)$. Since $\Psi_X$ is socle essential, there exists a non-zero $\delta\in {\rm Ext}_\mathcal{C}^1(Z, X)$ such that $\Psi_X(\delta)$ lies in the left $\overline{{\rm End}}(X)$-socle of ${\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{\rm End}(X), I)$. By Theorem \ref{AZE}(1), $\delta$ defines an almost split sequence as stated in Statement (1). The proof of the theorem is completed. \noindent{\sc Remark.} Observe, for any ring ${\it\Sigma}$, that a ${\it\Sigma}$-linear monomorphism $f: M\to N$ is socle essential if $M$ has a nonzero socle or $N$ has an essential socle. Thus, we see from Lemmas \ref{Fun-equiv} and \ref{Fun-mono} and Proposition \ref{ART-ARS} that Theorem \ref{existence} includes the results stated in \cite[(4.1)]{LeZ} and \cite[(2.3)]{LNP}. By abuse of terminology, a functor $F$ is called a {\it subfunctor} of another functor $G$ if there exists a monomorphism $F\to G$.
In some special cases as below, we shall drop the additional hypotheses on $\Psi_X$ and $\Phi_Z$ stated in Theorem \ref{existence}. \begin{Theo}\label{existence-1} Let $\mathcal{C} $ be a tri-exact category with $X, Z\in \mathcal{C}$ strongly indecomposable. Consider non-zero surjective ring homomorphisms ${\it\Gamma}\to \overline{{\rm End}}(X)$ and ${\it\Sigma}\to \underline{{\rm End}}(Z)$. Let $_{\it\Gamma}I$ be an injective envelope of the left ${\it\Gamma}$-module $S_X$ and $I_{\hspace{-.5pt}\it\Sigma}$ an injective envelope of the right ${\it\Sigma}$-module $S_Z$. The following statements are equivalent. \begin{enumerate}[$(1)$] \item There exists an almost split sequence $\hspace{-2pt}\xymatrixcolsep{20pt}\xymatrix{X\ar[r]&Y\ar[r] &Z}\hspace{-2pt}$ in $\mathcal{C}$. \item ${\rm Ext}_\mathcal{C}^1(Z,-)$ is a non-zero subfunctor of ${\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{1pt}C}(-, X), {}_{\it\Gamma}\hspace{-.4pt}I).$ \item ${\rm Ext}^1_\mathcal{C}(-, X)$ is a non-zero subfunctor of ${\rm Hom}_{\it\Sigma}(\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{1.5pt}C}(Z,-), I_{\hspace{-.6pt}\it\Sigma}).$ \end{enumerate} \end{Theo} \noindent{\it Proof.} By Theorem \ref{existence}, it suffices to prove that Statement (2) implies Statement (1). Let $\Psi: {\rm Ext}_\mathcal{C}^1(Z,-)\to {\rm Hom}_{\it\Gamma}(\hspace{.8pt}\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{1pt}C}(-, X), I) $ be a non-zero monomorphism, where $I={}_{\it\Gamma}\hspace{-.4pt}I$. In particular, there exists some $L\in \mathcal{C}$ such that ${\rm Ext}^1_\mathcal{C}(Z,L)\ne 0$. Thus, $\Psi_L(\delta)(\bar{f})\ne 0$ for some $\delta\in {\rm Ext}^1_\mathcal{C}(Z, L)$ and $f\in {\rm Hom}_\mathcal{\hspace{.5pt}C}(L, X)$.
Since $$\xymatrixcolsep{30pt}\xymatrixrowsep{16pt}\xymatrix{ {\rm Ext}^1_\mathcal{C}(Z, L) \ar[r]^-{\Psi_L} \ar[d]_{{\rm Ext}^1_\mathcal{C}(Z, \bar{f})} & {\rm Hom}_{\it\Gamma}(\,\overline{\rm Hom}_\mathcal{\hspace{.8pt}C}(L, X), I) \ar[d]^{{\rm Hom}_{\it\Gamma}(\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\mathcal{C}}(\bar{f}, X), I)} \\ {\rm Ext}^1_\mathcal{C}(Z, X) \ar[r]^-{\Psi_{\hspace{-.6pt}X}} & {\rm Hom}_{\it\Gamma}(\hspace{.5pt}\overline{{\rm End}}(X), I) }$$ is a commutative diagram, we obtain $\Psi_{\hspace{-.6pt}X}(\bar{f}\cdot \delta)(\overline{\id}_X) =\Psi_L(\delta)(\bar{f})\ne 0.$ Thus, $\Psi_{\hspace{-.6pt}X}$ is a non-zero left $\overline{{\rm End}}(X)$-linear monomorphism, which is also left ${\it\Gamma}$-linear. Since the ring homomorphism $\rho: {\it\Gamma}\to \overline{{\rm End}}(X)$ is surjective, $S_X$ is a simple left ${\it\Gamma}$-module and $\rho^*={\rm Hom}_{\it\Gamma}(\rho, I): {\rm Hom}_{\it\Gamma}(\,\overline{{\rm End}}(X), I)\to {\rm Hom}_{\it\Gamma}({\it\Gamma}, I)$ is a left ${\it\Gamma}$-linear monomorphism. Let $\nu: {\rm Hom}_{\it\Gamma}({\it\Gamma}, I)\to I$ be the canonical ${\it\Gamma}$-linear isomorphism. Then, $\psi=\nu\circ \rho^*\circ \Psi_{\hspace{-.6pt}X}: {\rm Ext}^1_\mathcal{C}(Z, X) \to I$ is a non-zero ${\it\Gamma}$-linear monomorphism. Since $S_X$ is the essential ${\it\Gamma}$-socle of $I$ and ${\rm Im}(\psi)$ is non-zero, we see that $S_X\cap {\rm Im}(\psi)\ne 0$; since $S_X$ is simple, it follows that $S_X$ is contained in ${\rm Im}(\psi)$. In particular, ${\rm Ext}^1_\mathcal{C}(Z, X)$ has a simple left ${\it\Gamma}$-submodule $S$ such that $\psi(S)=S_X$. Since $\rho$ is surjective, $S$ is a simple left $\overline{{\rm End}}(X)$-submodule of ${\rm Ext}^1_\mathcal{C}(Z, X),$ and hence, the left $\overline{{\rm End}}(X)$-linear monomorphism $\Psi_X$ is socle essential. By Theorem \ref{existence}(2), $\mathcal{C}$ has a desired almost split sequence.
The proof of the theorem is completed. \noindent{\sc Remark.} (1) Let $Z\in \hbox{{\rm Mod}\hspace{0.5pt}} {\it\Lambda}$ be finitely presented, strongly indecomposable and not projective, where ${\it\Lambda}$ is a ring. Since $\underline{{\rm End}\hspace{-.6pt}}_{\hspace{.6pt}\it\Lambda^{\rm op}}({\rm Tr}Z)^{\rm op} \cong \underline{{\rm End}\hspace{-.6pt}}_{\hspace{.6pt}\it\Lambda}(Z)$ (see \cite[Section I.3]{AUS}), we have a surjective ring homomorphism from ${\it\Sigma}={\rm End}_{\it\Lambda^{\rm op}}({\rm Tr}Z)^{\rm op}$ onto $\underline{{\rm End}\hspace{-.6pt}}_{\hspace{.4pt}\it\Lambda \hspace{-.5pt}}(Z)$. Let $I$ be the injective envelope of the right ${\it\Sigma}$-module $S_Z$. Then, $X={\rm Hom}_{\it\Sigma}({\rm Tr}Z, \, I)$ is a strongly indecomposable module in $\hbox{{\rm Mod}\hspace{0.5pt}}{\it\Lambda}$ such that ${\rm Ext}^1_{\it\Lambda}(-, X) \cong {\rm Hom}_{\it\Sigma}(\underline{{\rm Hom}\hspace{-.6pt}}_{\hspace{.6pt}\it\Lambda}(Z, -), I)$; see \cite[(I.11.3), (I.3.4)]{AUS}. By Theorem \ref{existence-1}, there exists an almost split sequence $\xymatrixcolsep{16pt}\xymatrix{\hspace{-4pt} 0\ar[r] & X \ar[r] & Y \ar[r] & Z\ar[r] & 0}$ in $\hbox{{\rm Mod}\hspace{0.5pt}}{\it\Lambda}$. This is Auslander's theorem stated in \cite[(II.5.1)]{AUS}. (2) Let $\mathfrak{A}$ be a locally finitely presented Grothendieck abelian category with a finitely presented strongly indecomposable object $Z$.
Consider the canonical projection ${\it\Sigma}={\rm End}(Z) \to \underline{{\rm End}}(Z).$ Given any injective module $I\in \hbox{{\rm Mod}\hspace{0.5pt}} {\it\Sigma}^{\rm op}$, Krause obtained a monomorphism ${\rm Ext}_\mathfrak{\hspace{.4pt}A}^1(-, \tau_{_I}(Z)) \to {\rm Hom}_{\it\Sigma}(\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathfrak{\hspace{.6pt}A}(Z, -), I),$ for some $\tau_{_I}(Z)\in \mathfrak{A};$ see \cite[(1.2)]{Kra3}. However, it is not known whether or not $\tau_{_I}(Z)$ is strongly indecomposable even if $I$ is the injective envelope of $S_Z$. Thus, we cannot apply Theorem \ref{existence-1} to obtain an almost split sequence in $\mathfrak{A}.$ In some special cases as below, we can weaken the assumption in Theorem \ref{existence-1} that both $X$ and $Z$ are strongly indecomposable. \begin{Theo}\label{existence-3} Let $\mathcal{C}$ be an extension-closed subcategory of a triangulated category, and let $Z\in \mathcal{C}$ be strongly indecomposable with a non-zero monomorphism $$\Phi: {\rm Hom}_\mathcal{\hspace{.4pt}C}(-, X) \to {\rm Hom}_{\hspace{.4pt}\underline{{\rm End}}(Z)}(\hspace{.4pt}\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{.4pt}C}(Z,-), I), $$ where $X\in \mathcal{C}\cap \mathcal{C}[1]$ and $I$ is an injective envelope of the right $\underline{{\rm End}}(Z)$-module $S_Z$. If $\hspace{.8pt}\Phi_{\hspace{-1.1pt}X}$ is bijective, then $\mathcal{C}$ has an almost split sequence $\hspace{-2pt}\xymatrixcolsep{18pt}\xymatrix{X[-1] \ar[r] & Y \ar[r] & Z.}$ \end{Theo} \noindent{\it Proof.} Assume that $\hspace{.8pt}\Phi_{\hspace{-1.1pt}X}$ is bijective. Observe that $X[-1]\in \mathcal{C}$ and ${\rm Ext}^1_\mathcal{C}(-, X[-1])={\rm Hom}_\mathcal{\hspace{.4pt}C}(-, X)$. By Theorem \ref{existence-1}(3), it suffices to show that ${\rm End}(X)$ is local. Put ${\it\Sigma}=\underline{\rm End\hspace{-.6pt}}(Z)$.
By Lemma \ref{left-tri-ideal}(1), ${\rm Hom}_\mathcal{\hspace{.5pt}C}(Z, X)=\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{.5pt}C}(Z, X)$, which is a right $\hbox{${\it\Sigma}$}$-module. Now, $\Phi_{\hspace{-.6pt}X}: {\rm End}(X)\to {\rm Hom}_{\hspace{.4pt}\it\Sigma}(\hspace{.4pt}{\rm Hom}_\mathcal{\hspace{.5pt}C}(Z, X), I)$ is a right ${\rm End}(X)$-linear isomorphism, while $\Phi_{\hspace{-.6pt}Z}: {\rm Hom}_\mathcal{\hspace{.5pt}C}(Z, X)\to {\rm Hom}_{\it\Sigma}(\hbox{${\it\Sigma}$}, I)$ is a right $\hbox{${\it\Sigma}$}$-linear monomorphism. Considering the canonical $\hbox{${\it\Sigma}$}$-linear isomorphism $\rho: {\rm Hom}_{\it\Sigma}(\hbox{${\it\Sigma}$}, I)\to I,$ we obtain a commutative diagram of surjective $\mathbb{Z}$-linear maps $$\xymatrixcolsep{40pt}\xymatrixrowsep{16pt}\xymatrix{ {\rm End}_{\it\Sigma}(I) \ar[r]^-{{\rm Hom}_{\it\Sigma}(\rho, \,I)} \ar[d]_\theta & {\rm Hom}_{\it\Sigma}({\rm Hom}_{\it\Sigma}(\hbox{${\it\Sigma}$}, I), I)\ar[d]^{{\rm Hom}_{\it\Sigma}(\Phi\hspace{-.6pt}_Z, \,I)}\\ {\rm End}(X) \ar[r]^-{\Phi\hspace{-.8pt}_X} & {\rm Hom}_{\it\Sigma}({\rm Hom}_\mathcal{\hspace{.4pt}C}(Z, X),I).}$$ Since ${\rm End}_{\it\Sigma}(I)$ is local; see \cite[(25.4)]{AnF}, it suffices to show that $\theta$ is a ring homomorphism. Fix arbitrarily a morphism $u: Z\to X$ in $\mathcal{C}$. Given $f\in {\rm End}_{\it\Sigma}(I)$, in view of the above commutative diagram, we see that $\Phi_{\hspace{-1pt}X}(\theta(f))=f\circ \rho\circ \Phi_{\hspace{-1pt}Z},$ and consequently, we obtain an equation $$(1) \hspace{50pt} \Phi_{\hspace{-1pt}X}(\theta(f))(u)=f(\Phi_{\hspace{-1pt}Z}(u)(\id_{\it\Sigma})).
\hspace{141pt} $$ On the other hand, considering the commutative diagram $$\xymatrixrowsep{16pt}\xymatrix{ {\rm End}(X) \ar[d]_-{{\rm Hom}_\mathcal{C}(u, \,X)} \ar[r]^-{\Phi\hspace{-.8pt}_X} & {\rm Hom}_{\it\Sigma}({\rm Hom}_\mathcal{C}(Z, X), I) \ar[d]^{{\rm Hom}_{\hspace{-1pt}\it\Sigma}({\rm Hom}_{\mathcal{C}}(Z, u), I)}\\ {\rm Hom}_{\mathcal{C}}(Z, X) \ar[r]^-{\Phi\hspace{-.6pt}_Z} & {\rm Hom}_{\it\Sigma}(\hbox{${\it\Sigma}$}, I), }$$ we obtain an equation $$(2) \hspace{45pt} \Phi_{\hspace{-1pt}Z}(u)(\id_{\it\Sigma})=\Phi_X(\id_X)(u). \hspace{165pt}$$ Let $f_i\in {\rm End}_{\it\Sigma}(I)$, and write $g_i=\theta(f_i)\in {\rm End}(X)$, for $i=1, 2$. We deduce from the equation (1) that $$\Phi_{\hspace{-1pt}X} (\theta(f_1f_2)) (u)= (f_1f_2) (\Phi_{\hspace{-1pt}Z}(u)(\id_{\it\Sigma}))= f_1 \left[ f_2(\Phi_{\hspace{-1pt}Z}(u)(\id_{\it\Sigma})) \right] = f_1 \left(\Phi_{\hspace{-1pt}X}(g_2)(u) \right).$$ Since $\Phi_{\hspace{-.8pt}X}$ is right ${\rm End}(X)$-linear, combining the equations (1) and (2) yields $$\Phi_{\hspace{-1pt}X}(g_1g_2)(u) \hspace{-1pt} = \hspace{-1pt} \Phi_{\hspace{-1pt}X}(g_1)(g_2u) \hspace{-1pt} = \hspace{-2pt} f_1 (\Phi_{\hspace{-1pt}Z}(g_2u)(\id_{\it\Sigma})) \hspace{-1pt} = \hspace{-2pt} f_1(\Phi_{\hspace{-1pt}X}(\id_X)(g_2u)) \hspace{-1pt} = \hspace{-2pt} f_1(\Phi_{\hspace{-1pt}X}(g_2)(u)).$$ Thus, $\Phi_X(\theta(f_1f_2))=\Phi_{\hspace{-1.1pt}X}(g_1g_2), $ and consequently, $\theta(f_1 f_2)=g_1 g_2=\theta(f_1)\hspace{.4pt}\theta(f_2).$ Since $\theta$ is surjective and multiplicative, $\theta(\id\hspace{.4pt}_I)=\id_X.$ The proof of the theorem is completed. \noindent{\sc Remark.} If $\mathcal{C}$ is a left triangulated subcategory of a triangulated category, then $\mathcal{C}\subseteq \mathcal{C}[1]$. In particular, Theorem \ref{existence-3} covers the essential part of Krause's result stated in \cite[(2.2)]{Kra2}, where the isomorphism ${\rm End}(X)\cong {\rm End}_{{\rm End}(Z)}(I)$ is only verified to be an abelian group isomorphism.
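\noindent{\sc Example.} The following instance, recorded only as an illustration, shows that the condition $X\in \mathcal{C}\cap \mathcal{C}[1]$ in Theorem \ref{existence-3} can be automatic. Let $\hbox{$\it\Lambda$}$ be a quasi-Frobenius ring, so that the projective modules and the injective modules in $\hbox{{\rm Mod}\hspace{0.5pt}}\hbox{$\it\Lambda$}$ coincide. Then $\hbox{{\rm Mod}\hspace{0.5pt}}\hbox{$\it\Lambda$}$ is a Frobenius exact category, whose stable category modulo the projective modules is triangulated, with the shift functor given by taking cosyzygies. Taking $\mathcal{C}$ to be this stable category, viewed as an extension-closed subcategory of itself, we obtain $\mathcal{C}=\mathcal{C}[1]$, and hence, $\mathcal{C}\cap \mathcal{C}[1]=\mathcal{C}$.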
In a dual fashion, we may establish the following statement. \begin{Theo}\label{existence-2} Let $\mathcal{C}$ be an extension-closed subcategory of a triangulated category, and let $X\in \mathcal{C}$ be strongly indecomposable with a non-zero monomorphism $$\Psi : {\rm Hom}_\mathcal{\hspace{.4pt}C}(Z, -) \hspace{-2pt}\to \hspace{-2pt} {\rm Hom}_{\hspace{1pt}\overline{{\rm End}\hspace{-.5pt}}\hspace{.5pt}(\hspace{-1pt}X\hspace{-1pt})} \hspace{-1pt} (\hspace{.4pt}\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.6pt}C }(-, X), I) $$ where $Z\in \mathcal{C}\cap \mathcal{C}[-1]$ and $I$ is an injective envelope of the left $\overline{{\rm End}\hspace{-.5pt}}(X)$-module $S_X$. If $\hspace{.8pt}\Psi_{\hspace{-1.1pt}Z}$ is bijective, then $\mathcal{C}$ has an almost split sequence $\hspace{-2pt}\xymatrixcolsep{18pt}\xymatrix{X \ar[r] & Y \ar[r] & Z[1].}$ \end{Theo} \section{Auslander-Reiten functors} \noindent The objective of this section is to study the existence of almost split sequences in a Hom-reflexive tri-exact $R$-category, where $R$ is a commutative ring. Our main results will relate the global existence of almost split sequences in the category to the existence of an Auslander-Reiten functor, which is a generalization of an Auslander-Reiten duality considered in \cite{LeZ}; and in the left or right triangulated case, to the existence of a Serre functor with a proper image, which differs slightly from the classical notion of a Serre functor defined in \cite[(I.1)]{RvdB}; see also \cite{BoK}. Throughout this section, $\mathcal{C}$ will stand for a tri-exact $R$-category, say an extension-closed subcategory of a triangulated $R$-category $\mathcal{A}$. 
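\noindent{\sc Example.} The following standard instance is recorded only for orientation; it is not needed in the sequel. For an artin $R$-algebra $\hbox{$\it\Lambda$}$, the category ${\rm mod}\hspace{.5pt}\hbox{$\it\Lambda$}$ of finitely generated modules, viewed as the complexes concentrated in degree zero, is an extension-closed subcategory of the bounded derived category $D^b({\rm mod}\hspace{.5pt}\hbox{$\it\Lambda$})$. Indeed, given an exact triangle $\xymatrixcolsep{16pt}\xymatrix{X \ar[r] & Y \ar[r] & Z \ar[r] & X[1]}$ with $X, Z\in {\rm mod}\hspace{.5pt}\hbox{$\it\Lambda$}$, the cohomology sequence $$0={\rm H}^{-1}(Z)\to {\rm H}^{0}(X)\to {\rm H}^{0}(Y)\to {\rm H}^{0}(Z)\to {\rm H}^{1}(X)=0,$$ together with the vanishing of ${\rm H}^{n}(Y)$ for $n\ne 0$, shows that $Y$ is isomorphic in $D^b({\rm mod}\hspace{.5pt}\hbox{$\it\Lambda$})$ to a module in ${\rm mod}\hspace{.5pt}\hbox{$\it\Lambda$}$. In this tri-exact $R$-category, all the morphism modules are of finite length over $R$.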
We shall say that $\mathcal{C}$ has {\it almost split sequences on the right} (respectively, {\it left}) if every strongly indecomposable not Ext-projective (respectively, not Ext-injective) object is the ending (respectively, starting) term of an almost split sequence; and that $\mathcal{C}$ {\it has almost split sequences} if it has almost split sequences on the right and on the left. Recall that the exact functor $D={\rm Hom}_R(-, I\hspace{-1pt}_R): \hbox{{\rm Mod}\hspace{0.5pt}} R\to \hbox{{\rm Mod}\hspace{0.5pt}} R$ restricts to a duality $D: {\rm RMod}\hspace{.5pt}R \to {\rm RMod}\hspace{.5pt}R$, where $I\hspace{-1pt}_R$ is the minimal injective cogenerator for $\hbox{{\rm Mod}\hspace{0.5pt}} R$, whereas ${\rm RMod}\hspace{.5pt}R$ is the category of reflexive $R$-modules. We shall say that $\mathcal{C}$ is {\it Hom-reflexive} (respectively, {\it Hom-finite}) if ${\rm Hom}\hspace{.5pt}_\mathcal{C}\hspace{-1pt}(X, Y)$ is reflexive (respectively, of finite length) over $R$, for all $X, Y\in \mathcal{C}$; and {\it Ext-reflexive} if ${\rm Ext}^1_\mathcal{C}\hspace{-1pt}(X, Y)$ is reflexive over $R$, for all $X, Y\in \mathcal{C}$. Hom-finite $R$-categories are Hom-reflexive; see (\ref{alg-ref-mod}); the converse is not true. The following statement is important for our purpose; compare \cite[(2.4)]{LNP}. \begin{Lemma}\label{lin-alg} Let $R$ be a commutative ring, and let $M, N\in \hbox{{\rm Mod}\hspace{0.5pt}} R$ admit a non-degenerate $R$-bilinear form $<\hspace{-3pt}-,-\hspace{-3pt}>: M\times N \to I_{\hspace{-1pt}R}.$ If $M$ or $N$ is reflexive, then both $M$ and $N$ are reflexive with $R$-linear isomorphisms $\phi_M: M\to DN: u\mapsto <\hspace{-2.5pt}u, -\hspace{-4pt}>$ and $\psi\hspace{-.5pt}_N: N\to DM: v\mapsto <\hspace{-2.5pt}-, v\hspace{-2.5pt}>$. \end{Lemma} \noindent{\it Proof.} By the hypothesis, $\phi_M$ and $\psi\hspace{-.5pt}_N$ are monomorphisms.
Consider the canonical monomorphisms $\sigma_{\hspace{-1pt}_M}: M\to D^2M$ and $\sigma_{\hspace{-1pt}_N}: N\to D^2N$. It is easy to verify that $\phi_M=D(\psi\hspace{-.5pt}_N)\circ \sigma_{\hspace{-1pt}_M}$ and $\psi\hspace{-.5pt}_N=D(\phi_M)\circ \sigma_{\hspace{-1pt}_N}$, where $D(\phi_M)$ and $D(\psi\hspace{-.5pt}_N)$ are surjective. If $M$ is reflexive, so are $DM$ and $N$; see (\ref{alg-ref-mod}). In particular, $\sigma_{\hspace{-1pt}_M}$ and $\sigma_{\hspace{-1pt}_N}$ are surjective, and hence, so are $\phi_M$ and $\psi\hspace{-.5pt}_N$. The proof of the lemma is completed. We shall first strengthen the results on the existence of an individual almost split sequence under the Hom-reflexive setting. The following preparatory result is well-known in several classical settings; see, for example, \cite{GR, LeZ, RvdB}. \begin{Lemma}\label{bilin-forms} Let $\mathcal{C}$ be a Hom-reflexive tri-exact $R$-category. Consider an almost-zero extension $\delta\in {\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Z, X)$ and a linear form $\theta\in D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Z, X)$ such that $\theta(\delta)\ne 0$. Given any object $L\in \mathcal{C}$, there exist natural $R$-linear isomorphisms $$\hbox{${\it\Omega}$}_{L, X}: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(L, X) \to D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Z, L): \bar{\hspace{-1.5pt}g\hspace{-.5pt}} \mapsto \theta \circ {\rm Ext}_{{\hspace{.5pt}\mathcal{C}}}^1(Z, \, \bar{\hspace{-1.5pt}g\hspace{-.5pt}}) $$ and $$\hbox{${\it\Theta}$}_{Z,L}: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(Z, L) \to D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(L, X): \underline{f\hspace{-2pt}}\, \mapsto \theta \circ {\rm Ext}_\mathcal{C}^1(\hspace{.4pt}\underline{f\hspace{-2pt}}\hspace{2pt}, X).
$$ \end{Lemma} \noindent{\it Proof.} Given $L\in \mathcal{C}$, by Lemma \ref{art-1}, we obtain two non-degenerate $R$-bilinear forms $$<\hspace{-3pt}-, -\hspace{-3.2pt}>\hspace{-2pt}_L: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.5pt}C}(L, X) \times {\rm Ext}_{\mathcal{C}}^1(Z, L)\to I_R: (\hspace{1.5pt}\bar{\hspace{-1.5pt}g}, \zeta) \mapsto \theta (\hspace{1.5pt}\bar{\hspace{-1.5pt}g}\cdot \zeta) $$ and $${_L}\hspace{-3pt}<\hspace{-3pt}-, -\hspace{-3.2pt}>: {\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(L, X) \times \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(Z, L) \to I_R: (\zeta, \,\underline{f\hspace{-2pt}}\hspace{2pt}) \mapsto \theta (\zeta \cdot \underline{f\hspace{-2pt}}\hspace{2pt}).$$ Since $\mathcal{C}$ is Hom-reflexive, by Lemma \ref{alg-ref-mod}, $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.5pt}C}(L, X)$ and $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(Z, L)$ are reflexive $R$-modules. By Lemma \ref{lin-alg}, we obtain two isomorphisms $\hbox{${\it\Omega}$}_{L, X}$ and $\hbox{${\it\Theta}$}_{Z,L}$ as stated in the lemma, which are clearly natural in $L$; see (\ref{Ext-bimod}). The proof of the lemma is completed. The following result improves Theorem \ref{existence} under the Hom-reflexive setting. \begin{Theo}\label{art-6} Let $\mathcal{C}$ be a Hom-reflexive tri-exact $R$-category with $X, Z\in \mathcal{C}$ strongly indecomposable. The following statements are equivalent. \begin{enumerate}[$(1)$] \item There exists an almost split sequence $\hspace{-3pt}\xymatrixcolsep{20pt}\xymatrix{X\ar[r]&Y\ar[r] &Z}\hspace{-3.5pt}$ in $\mathcal{C}.$ \item There exists a non-zero isomorphism $\hbox{${\it\Omega}$}_X: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.7pt}C}(-, X)\to D{\rm Ext}^1_\mathcal{C}(Z, -)$.
\item There exists a non-zero isomorphism $\hbox{${\it\Theta}$}_Z: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{.6pt}C}(Z,-)\to D{\rm Ext}^1_\mathcal{C}(-,X).$ \end{enumerate} \end{Theo} \noindent{\it Proof.} Given an almost-zero extension $\delta\in{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Z, X)$, we choose $\theta\in D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Z, X)$ such that $\theta(\delta)\ne 0$. By Lemma \ref{bilin-forms}, we see that $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{.6pt}C}(Z,-)\cong D{\rm Ext}^1_\mathcal{C}(-,X)$ and $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.7pt}C}(-, X)\cong D{\rm Ext}^1_\mathcal{C}(Z, -)$. Thus, Statement (1) implies Statements (2) and (3). Let now $\hbox{${\it\Omega}$}_X: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.7pt}C}(-, X)\to D{\rm Ext}^1_\mathcal{C}(Z, -)$ be a nonzero isomorphism. In particular, $Z$ is not Ext-projective, and hence, we have a nonzero canonical algebra homomorphism $R\to \underline{{\rm End}}(Z)$. Moreover, since $\mathcal{C}$ is Hom-reflexive, we obtain an isomorphism $\Psi_X: {\rm Ext}^1_\mathcal{C}(Z, -)\to D\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.7pt}C}(-, X).$ By Theorem \ref{existence}(2), we obtain an almost split sequence as stated in Statement (1). Similarly, we may show that Statement (3) implies Statement (1). The proof of the theorem is completed. \noindent{\sc Remark.} In case $R$ is artinian, Theorem \ref{art-6} is known for an Ext-finite abelian $R$-category; see \cite{GR, LeZ}; and for a Hom-finite exact $R$-category; see \cite{LNP}. In some special cases, we shall weaken the condition stated in Theorem \ref{art-6} that both $X$ and $Z$ be strongly indecomposable, as follows. \begin{Theo} \label{existence-7} Let $\mathcal{C}$ be a Hom-reflexive extension-closed subcategory of a triangulated $R$-category.
\begin{enumerate}[$(1)$] \item If $X\in \mathcal{C}$ is strongly indecomposable and $Z\in \mathcal{C}\cap \mathcal{C}[1]$, then $\mathcal{C}$ has an almost split sequence $\xymatrixcolsep{18pt}\xymatrix{\hspace{-3pt}X \ar[r] & Y \ar[r] & Z}\hspace{-3pt}$ if and only if $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.6pt}C\hspace{-.8pt}}(-, X)\cong D{\rm Ext}^1_\mathcal{\hspace{.6pt}C\hspace{-.5pt}}(Z, -)\ne 0$. \item If $Z\in \mathcal{C}$ is strongly indecomposable and $X\in \mathcal{C}\cap \mathcal{C}[-1]$, then $\mathcal{C}$ has an almost split sequence $\xymatrixcolsep{18pt}\xymatrix{\hspace{-3pt}X \ar[r] & Y \ar[r] & Z}\hspace{-3pt}$ if and only if $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_\mathcal{\hspace{.8pt}C\hspace{-.8pt}}(Z, -)\cong D {\rm Ext}^1_\mathcal{\hspace{.6pt}C\hspace{-.5pt}}(-,X)\ne 0.$ \end{enumerate} \end{Theo} \noindent{\it Proof.} We shall only prove the sufficiency of Statement (1). Let $X\in \mathcal{C}$ be strongly indecomposable and $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.6pt}C\hspace{-.8pt}}(-, X)\cong D{\rm Ext}^1_\mathcal{\hspace{.6pt}C\hspace{-.5pt}}(Z, -)\ne 0$, where $Z=M[1]$ for some $M\in \mathcal{C}$. It suffices to show that ${\rm End}(M)$ is local. Since ${\rm Ext}^1_\mathcal{C}(Z, -)\cong {\rm Hom}_\mathcal{\hspace{.4pt}C}(M, -),$ we obtain an isomorphism $\Psi_X: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.5pt}C}(-, X) \to D{\rm Hom}_\mathcal{\hspace{.4pt}C}(M, -)$.\hspace{-4pt} In particular, $\Psi_{X,X}: \overline{{\rm End}}(X)\to D{\rm Hom}_{\hspace{.6pt}\mathcal{C}}(M, X)$ is a right ${\rm End}(X)$-linear isomorphism and $\Psi_{M,X}: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(M,X)\to D{\rm End}(M)$ is a right ${\rm End}(M)$-linear isomorphism. 
Since $M\in \mathcal{C}[-1]$, by Lemma \ref{left-tri-ideal}(2), we have $\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{.6pt}C}(M, X)={\rm Hom}_\mathcal{\hspace{.5pt}C}(M, X)$, which is a left $\overline{{\rm End}}(X)$-module. Since $\overline{{\rm End}\hspace{-1.5pt}}\hspace{1.5pt}(X)$ is reflexive, we obtain a commutative diagram $$\xymatrixcolsep{38pt}\xymatrixrowsep{16pt}\xymatrix{ {\rm End}(M) \ar[d]_\theta \ar[r]^-{\sigma} & D^2{\rm End}(M) \ar[d]^-{D(\Psi\hspace{-1pt}_{M,X})}\\ \overline{{\rm End}\hspace{-.5pt}}\hspace{.5pt}(X) \ar[r]^-{\Psi_{X,X}} & D{\rm Hom}_{\hspace{.6pt}\mathcal{C}}(M, X) }$$ of $R$-linear isomorphisms, where $\sigma$ is the canonical isomorphism. It remains to show that $\theta$ is an algebra homomorphism. Indeed, we fix arbitrarily a morphism $u\in {\rm Hom}_{\hspace{.4pt}\mathcal{C}}(M, X)$. Given any $f\in {\rm End}(M)$, in view of the above commutative diagram, we obtain an equation $$ (1) \hspace{50pt} \Psi\hspace{-1pt}_{X,X}(\theta(f))(u)=\Psi\hspace{-1pt}_{M,X}(u)(f). \hspace{125pt} $$ On the other hand, consider the commutative diagram $$\xymatrixrowsep{18pt}\xymatrix{\overline{{\rm End}}(X) \ar[r]^-{\Psi_{\hspace{-.5pt}X,X}} \ar[d]_{{\rm Hom}_\mathcal{C}(\bar{u},X)} & D{\rm Hom}_\mathcal{\hspace{.8pt}C}(M, X) \ar[d]^{D{\rm Hom}_{\hspace{.5pt}\mathcal{C}}(M, u)}\\ \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_\mathcal{\hspace{1pt}C}(M, X) \ar[r]^{\Psi_{\hspace{-.5pt}M,X}} & D{\rm End}(M).}$$ Given any $v\in {\rm End}(X)$, we obtain an equation $$(2) \hspace{50pt} \Psi_{\hspace{-1.1pt}M, X}(\bar{v} \bar{u})(f) = \Psi_{X,\hspace{-1.5pt}X}(\bar{v})(u f). \hspace{130pt}$$ Let now $f, g\in {\rm End}(M)$. Since $\Psi_{X,X}$ is also right $\overline{{\rm End}}(X)$-linear, we deduce from the equations (1) and (2) that $$ \Psi_{X, X}(\theta(f)\theta (g))(u)= \Psi_{X, X}(\theta(f))(\theta (g)u) = \Psi\hspace{-1pt}_{M,X}(\theta (g) \bar{u} )(f) = \Psi_{X,\hspace{-1.5pt}X}(\theta (g))(u f). 
$$ Since $\Psi\hspace{-1pt}_{M,X}$ is right ${\rm End}(M)$-linear, we deduce from the equation (1) that $$ \Psi_{X,\hspace{-1.5pt}X}(\theta (g))(u f) =\Psi\hspace{-1pt}_{M,X}(u f)(g)=\Psi\hspace{-1pt}_{M,X}(u)(fg) =\Psi\hspace{-1pt}_{X,X}(\theta(fg))(u).$$ This yields $\Psi_{X, X}(\theta(f)\theta (g)) =\Psi\hspace{-1pt}_{X,X}(\theta(fg))$, and hence, $\theta(f g )=\theta(f) \hspace{.4pt} \theta(g)$. The proof of the theorem is completed. \begin{Cor}\label{ARS-LRT} Let $\mathcal{C}$ be a Hom-reflexive extension-closed subcategory of a triangulated $R$-category. \begin{enumerate}[$(1)$] \item If $\mathcal{C}$ is left triangulated with $X\in \mathcal{C}$ strongly indecomposable, then $X$ is the starting term of an almost split sequence if and only if the functor $D\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{.8pt}\mathcal{C}}(-, X)$ is representable by a nonzero object in $\mathcal{C}[-1]$. \item If $\mathcal{C}$ is right triangulated with $Z\in \mathcal{C}$ strongly indecomposable, then $Z$ is the ending term of an almost split sequence if and only if the functor $D\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.6pt}\mathcal{C}}(Z, -)$ is representable by a nonzero object in $\mathcal{C}[1]$. \end{enumerate} \end{Cor} In order to study the global existence of almost split sequences in $\mathcal{C}$, we need to generalize the classical notion of an Auslander-Reiten duality; see \cite{ARS, LeZ}. \begin{Defn}\label{ARF} Let $\mathcal{C}$ be a tri-exact $R$-category. 
\begin{enumerate}[$(1)$] \item A {\it right Auslander-Reiten functor} for $\mathcal{C}$ is a functor $\tau: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt} \to \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}} \hspace{2pt}$ with binatural $R$-linear isomorphisms $\hbox{${\it\Theta}$}_{X, Y} : \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(X, Y) \to D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \tau X),$ where $X, Y\in \mathcal{C}.$ \item A {\it left Auslander-Reiten functor} for $\mathcal{C}$ is a functor $\tau^-: \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}\to \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\hspace{.5pt}$ with binatural $R$-linear isomorphisms $\hbox{${\it\Omega}$}_{X, Y}: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(X, Y) \to D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(\tau^-Y, X),$ where $X, Y\in \mathcal{C}.$ \end{enumerate} \end{Defn} The following statement collects some properties of an Auslander-Reiten functor. \begin{Prop}\label{ARF-prop} Let $\mathcal{C}$ be a Hom-reflexive tri-exact $R$-category. Then, a right $($or left$)$ Auslander-Reiten functor for $\mathcal{C}$ is faithful. If $\mathcal{C}$ is, in addition, right $($or left$)$ triangulated, then a right $($or left$)$ Auslander-Reiten functor is fully faithful. \end{Prop} \noindent{\it Proof.} We shall only consider a right Auslander-Reiten functor $\tau: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\to \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}$ with binatural $R$-linear isomorphisms $\hbox{${\it\Theta}$}_{X, Y}: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{1pt}\mathcal{C}}(X, Y)\to D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \tau X)$.
Given a morphism $u: X\to Y$, considering the commutative diagram $$\xymatrixcolsep{30pt}\xymatrixrowsep{16pt}\xymatrix{ \underline{{\rm End}\hspace{-1.5pt}}\hspace{1.5pt}(Y) \ar[d]_{\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(\underline{u\hspace{-1.5pt}}\hspace{2pt}, Y)} \ar[r]^-{\hbox{${\it\Theta}$}_{Y, Y}} & D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Y, \tau Y) \ar[d]^{D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Y, \tau (\underline{u\hspace{-1.5pt}}\hspace{2pt}))} \\ \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{1pt}\mathcal{C}}(X, Y) \ar[r]^-{\hbox{${\it\Theta}$}_{X, Y}} & D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Y, \tau X),}$$ we obtain $\hbox{${\it\Theta}$}_{X, Y}(\hspace{.5pt}\underline{u\hspace{-1.5pt}}\hspace{2pt})=\hbox{${\it\Theta}$}_{Y, Y}(\hspace{.5pt}\underline{\id}\hspace{.8pt}_Y) \circ {\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Y, \tau(\underline{u\hspace{-1.5pt}}\hspace{2pt})).$ If $\tau(\hspace{.5pt}\underline{u\hspace{-1.5pt}}\hspace{2pt})=0$, then $\hbox{${\it\Theta}$}_{X, Y}(\hspace{.5pt}\underline{u\hspace{-1.5pt}}\hspace{2pt})=0$, and hence, $\underline{u\hspace{-1.5pt}}\hspace{2pt}=0$. That is, $\tau$ is faithful. Next, assume that $\mathcal{C}$ is a Hom-reflexive right triangulated subcategory of a triangulated $R$-category. By Lemmas \ref{left-tri} and \ref{left-tri-ideal}(2), $\mathcal{C}[1]\subseteq \mathcal{C}=\hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}$. This yields a functor $F=[1]\circ \tau: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\to \mathcal{C}$ with isomorphisms $\Phi_{X,Y}: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{1pt}\mathcal{C}}(X, Y)\to D{\rm Hom}_{\hspace{.5pt}\mathcal{C}}(Y, FX),$ which are binatural in $X$ and $Y$. 
Given $f\in {\rm Hom}_{\hspace{.5pt}\mathcal{C}}(X, Y)$ and $g\in {\rm Hom}_{\hspace{.5pt}\mathcal{C}}(Y, FX)$, considering the commutative diagram $$\xymatrixcolsep{50pt}\xymatrixrowsep{16pt}\xymatrix{ \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{1pt}\mathcal{C}}(Y, FX) \ar[d]_{\Phi_{Y, FX}} & \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{1pt}\mathcal{C}}(Y, Y) \ar[l]_-{\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}(Y, \hspace{1pt} \underline{g\hspace{-2pt}}\hspace{2pt})} \ar[d]^-{\Phi_{Y,Y}} \ar[r]^{\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}(\hspace{.5pt}\underline{f\hspace{-2pt}}\hspace{2pt}, Y)} & \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{1pt}\mathcal{C}}(X, Y) \ar[d]^{\Phi_{X,Y}} \\ D{\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(FX, FY) & D{\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(Y, FY) \ar[l]_-{D{\rm Hom}(g, FY)} \ar[r]^-{D{\rm Hom}(Y, F(\underline{f\hspace{-2pt}}\hspace{2pt}))} & D{\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(Y, FX),}$$ we obtain the following equations $$(*)\hspace{50pt} \Phi_{Y,FX}(\hspace{.4pt}\underline{g\hspace{-2pt}}\hspace{2pt})(F(\hspace{.5pt}\underline{f\hspace{-2pt}}\hspace{2pt}))=\Phi_{Y,Y}(\underline{\id}\hspace{1pt}_Y)(F(\underline{f\hspace{-2pt}}\hspace{2pt}) g)=\Phi_{X,Y}(\underline{f\hspace{-2pt}}\hspace{2pt})(g).\hspace{40pt}$$ Since $FX\in \mathcal{C}[1]$, by Lemma \ref{left-tri-ideal}(1), ${\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(Y, FX)=\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}\hspace{.6pt}_{\mathcal{C}}(Y, FX)$. 
Thus, we obtain an isomorphism $\Phi_{Y, FX}: {\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(Y, FX)\to D{\rm Hom}_{\hspace{.5pt}\mathcal{C}}(FX,FY),$ which makes $$\xymatrixrowsep{16pt}\xymatrix{ \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(X, Y) \ar[r]^-{\Phi_{X,Y}} \ar[d]_-{F} & D{\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(Y, FX) \\ {\rm Hom}_{\hspace{.5pt}\mathcal{C}}(FX, FY) \ar[r]^-{\sigma} & D^2{\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(FX, FY) \ar[u]_-{D(\Phi_{Y, FX})} }$$ commute, where $\sigma$ is the canonical isomorphism. Indeed, for any $f\in {\rm Hom}_{\hspace{1pt}\mathcal{C}}(X, Y)$ and $g\in {\rm Hom}_{\hspace{.5pt}\mathcal{C}}(Y, FX)$, in view of the equations in $(*)$, we see that $$D(\Phi_{Y,FX}) (\sigma(F(\underline{f\hspace{-2pt}}\hspace{2pt})))(g) = \sigma(F(\underline{f\hspace{-2pt}}\hspace{2pt})) (\Phi_{Y,FX}(\hspace{.4pt}\underline{g\hspace{-2pt}}\hspace{2pt})) = \Phi_{Y,FX}(\hspace{.5pt}\underline{g\hspace{-2pt}}\hspace{2pt})(F(\hspace{.4pt}\underline{f\hspace{-2pt}}\hspace{2pt})) =\Phi_{X,Y}(\hspace{.4pt}\underline{f\hspace{-2pt}}\hspace{2pt})(g). $$ As a consequence, $F: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(X, Y)\to {\rm Hom}_{\hspace{.5pt}\mathcal{C}}(FX, FY)$ is an isomorphism. That is, $F$ is fully faithful, and so is $\tau $. The proof of the proposition is completed. We are now able to relate the existence of almost split sequences to the existence of an Auslander-Reiten functor. \begin{Theo}\label{ARS-ARF} Let $\mathcal{C}$ be a Hom-reflexive Krull-Schmidt tri-exact $R$-category. \begin{enumerate}[$(1)$] \item There exist almost split sequences on the right $($respectively, left$\hspace{.5pt})$ in $\mathcal{C}$ if and only if it admits a full right $($respectively, left$\hspace{.5pt})$ Auslander\--Reiten functor. 
\item There exist almost split sequences in $\mathcal{C}$ if and only if it admits a right Auslander-Reiten equivalence, or equivalently, a left Auslander-Reiten equiva\-lence; and in this case, $\mathcal{C}$ is Ext-reflexive. \end{enumerate} \end{Theo} \noindent{\it Proof.} We shall prove the theorem only for right Auslander-Reiten functors. Consider a full right Auslander-Reiten functor $\tau: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\to \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}$. Let $Z\in \mathcal{C}$ be indecomposable but not Ext-projective. By defi\-nition, there exist $R$-linear isomorphisms $\hbox{${\it\Theta}$}_{Z,Y}: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(Z,Y)\to D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \tau Z)$, which are natural in $Y$. Therefore, $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(Z, - )\cong D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(-, \tau Z).$ By Proposition \ref{ARF-prop}, $\overline{{\rm End}\hspace{-1.5pt}}\hspace{1.5pt}(\tau Z)\cong \underline{{\rm End}\hspace{-1pt}}\hspace{1.5pt}(Z)$, which is local. By Lemma \ref{Lift-stab-obj}(1), ${\rm Ext}_\mathcal{C}^1(-, \tau Z) \cong {\rm Ext}_\mathcal{C}^1(-, X)$ for some indecomposable $X\in \mathcal{C}$. Then, $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(Z, -)\cong D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(-, X).$ By Theorem \ref{art-6}, there exists an almost split sequence $\hspace{-3pt}\xymatrixcolsep{20pt}\xymatrix{X\ar[r]& L \ar[r] & Z}$ in $\mathcal{C}$. Assume that $\mathcal{C}$ has almost split sequences on the right.
For each indecomposable and not Ext-projective object $M\in \mathcal{C}$, we fix an indecomposable object $\tau M\in \mathcal{C}$, an almost-zero extension $\delta_M \in {\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(M, \tau M)$, and a linear form $\theta_M\in D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(M, \tau M)$ such that $\theta_M(\delta_M)\ne 0$. Let $X, Y, Z\in \mathcal{C}$ be indecomposable and not Ext-projective. Considering $\theta_X$ and $\theta_Y$, by Lemma \ref{bilin-forms}, we obtain two $R$-linear isomorphisms $$\hbox{${\it\Theta}$}_{X,Y}: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(X, Y) \to D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \tau X): \underline{f\hspace{-2pt}}\, \mapsto \theta_X \circ {\rm Ext}_\mathcal{C}^1(\,\underline{f\hspace{-2pt}}\hspace{2pt}, \tau X) \hspace{15pt}$$ and $$\hbox{${\it\Omega}$}_{\tau X, \tau Y}: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(\tau X, \tau Y) \to D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \tau X): \hspace{1.5pt}\bar{\hspace{-1.5pt}g} \mapsto \theta_Y \circ {\rm Ext}_{{\hspace{.5pt}\mathcal{C}}}^1(Y, \, \bar{g}).$$ This yields an $R$-linear isomorphism $$\tau_{\hspace{-.5pt}_{X, Y}}=\hbox{${\it\Omega}$}_{\tau X, \tau Y}^{-1} \hbox{${\it\Theta}$}_{X,Y} \hspace{-2pt}: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{1pt}\mathcal{C}}(X, Y) \to \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{\hspace{1pt}\mathcal{C}}(\tau X, \tau Y).$$ Let $f\in {\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(X, Y)$ and $\zeta\in {\rm Ext}^1_\mathcal{C}(Y, \tau X)$.
Since $\hbox{${\it\Theta}$}_{X, Y}(\underline{f\hspace{-2pt}}\hspace{2pt})=\hbox{${\it\Omega}$}_{\tau X, \tau Y}(\tau_{\hspace{-.5pt}_{X, Y}}\hspace{-1.5pt}(\hspace{.5pt}\underline{f\hspace{-2pt}}\hspace{2pt}))$ by definition, we obtain an equation $$(*) \hspace{20pt} \theta_X(\zeta \cdot \underline{f\hspace{-2pt}}\hspace{2pt})=\theta_Y(\tau_{\hspace{-.5pt}_{X, Y}}\hspace{-1.5pt}(\hspace{1pt}\underline{f\hspace{-2pt}}\hspace{2pt}) \cdot \zeta ).$$ Let $g\in {\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(Y, Z)$ and $\delta\in {\rm Ext}_\mathcal{C}^1(Z, \tau X)$. By the above equation, we obtain $$\theta_X(\delta\cdot (\underline{g\hspace{-2pt}}\,\underline{f\hspace{-2pt}}\hspace{2.5pt})) =\theta_X((\delta\cdot \underline{g\hspace{-2pt}}\hspace{2.5pt})\cdot \underline{f\hspace{-2pt}}\hspace{2.5pt}) =\theta_Y(\tau_{\hspace{-.5pt}_{X, Y}}\hspace{-1.5pt}(\hspace{1pt}\underline{f\hspace{-2pt}}\hspace{2pt}) \cdot \delta\cdot \underline{g\hspace{-2pt}}\hspace{2pt}) =\theta_Z((\tau_{\hspace{-.5pt}_{Y, Z}}\hspace{-1.5pt}(\hspace{1pt}\underline{g\hspace{-2pt}}\hspace{2pt}) \, \tau_{\hspace{-.5pt}_{X, Y}}\hspace{-1.5pt}(\hspace{1pt}\underline{f\hspace{-2pt}}\hspace{2pt})) \cdot \delta).$$ That is, $\hbox{${\it\Theta}$}_{X,Z}(\hspace{.5pt}\underline{g\hspace{-2pt}}\,\underline{f\hspace{-2pt}}\hspace{2.5pt}) (\delta)=\hbox{${\it\Omega}$}_{\tau X, \tau Z}(\tau_{\hspace{-.5pt}_{Y, Z}}\hspace{-1.5pt}(\hspace{1pt}\underline{g\hspace{-2pt}}\hspace{2pt}) \, \tau_{\hspace{-.5pt}_{X, Y}}\hspace{-1.5pt}(\hspace{1pt}\underline{f\hspace{-2pt}}\hspace{2pt}))(\delta),$ from which we conclude that $\tau_{\hspace{-.5pt}_{X,Z}}\hspace{-1.5pt}(\hspace{.5pt}\underline{g\hspace{-2pt}}\,\underline{f\hspace{-2pt}}\hspace{2.5pt}) =\tau_{\hspace{-.5pt}_{Y, Z}}\hspace{-1.5pt}(\hspace{1pt}\underline{g\hspace{-2pt}}\hspace{2.5pt}) \, \tau_{\hspace{-.5pt}_{X, Y}}\hspace{-1.5pt}(\hspace{1pt}\underline{f\hspace{-2pt}}\hspace{2pt}).$ In particular, since $\tau_{\hspace{-.5pt}_{X, X}}$ is bijective, $\tau_{\hspace{-.5pt}_{X, 
X}}\hspace{-1.5pt}(\hspace{.8pt}\underline{\id}\hspace{1pt}_X)=\bar{\hspace{-.4pt}\id}_{\hspace{1.2pt} \tau \hspace{-1pt} X}$. Since $\underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}$ is Krull-Schmidt, by Lemma \ref{Lift-stab-obj}(1), we may extend $\tau$ to a fully faithful functor $\tau: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\to \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}\hspace{.5pt}.$ It remains to show that the isomorphism $\hbox{${\it\Theta}$}_{X,Y}$ is binatural. It is natural in $Y$ by Lemma \ref{bilin-forms}. We claim, for $h\in {\rm Hom}_{{\hspace{.5pt}\mathcal{C}}}(X, Z)$, that the diagram $$\xymatrixcolsep{28pt}\xymatrixrowsep{16pt} \xymatrix{\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(Z, Y) \ar[r]^-{\it\Theta_{Z, Y}} \ar[d]_{\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.4pt}\mathcal{C}}(\underline{h\hspace{-1pt}}\hspace{2pt}, Y)} & D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \tau Z) \ar[d]^{D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \hspace{1pt} \tau_{\hspace{-1pt}_{X, Z}}\hspace{-.8pt}(\hspace{-.5pt}\underline{h\hspace{-1pt}}\hspace{1pt}))} \\ \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(X, Y) \ar[r]^-{\it\Theta_{X, Y}} & D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \tau X) } $$ is commutative. 
Indeed, for any $u\in {\rm Hom}_{\hspace{.5pt}\mathcal{C}}(Z, Y)$ and $\delta\in {\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \tau Z)$, we obtain $$D{\rm Ext}^1_{{\hspace{.5pt}\mathcal{C}}}(Y, \hspace{1pt} \tau_{\hspace{-1pt}_{X, Z}}\hspace{-1pt}(\hspace{-.5pt}\underline{h\hspace{-1pt}}\hspace{1pt})) (\hbox{${\it\Theta}$}_{Z, Y}(\underline{u\hspace{-1pt}}\hspace{1pt}))(\delta)=\hbox{${\it\Theta}$}_{Z, Y}(\underline{u\hspace{-1pt}}\hspace{1pt}) (\tau_{\hspace{-1pt}_{X, Z}}\hspace{-1pt}(\hspace{-.5pt}\underline{h\hspace{-1pt}}\hspace{1pt})\cdot \delta)= \theta_Z(\tau_{\hspace{-1pt}_{X, Z}}\hspace{-1pt}(\hspace{-.5pt}\underline{h\hspace{-1pt}}\hspace{1pt})\cdot \delta\cdot \underline{u\hspace{-1pt}}\hspace{1pt}).\hspace{12pt}$$ On the other hand, applying the definition of $\hbox{${\it\Theta}$}_{X\hspace{-1pt}, Y}$ and the equation $(*)$, we obtain $$ \hbox{${\it\Theta}$}_{X\hspace{-1pt}, Y}(\underline{u\hspace{-1pt}} \, \underline{h\hspace{-1pt}}\hspace{1pt})(\delta)= \theta_X( (\delta \cdot \underline{u\hspace{-1pt}} ) \cdot \underline{h\hspace{-1pt}}\hspace{1pt}) =\theta_Z(\tau_{\hspace{-1pt}_{X, Z}}\hspace{-1pt}(\hspace{-.5pt}\underline{h\hspace{-1pt}}\hspace{1pt})\cdot \delta\cdot \underline{u\hspace{-1pt}}\hspace{1pt}).$$ This shows that $\hbox{${\it\Theta}$}_{X,Y}$ is natural in $X$. This establishes Statement (1). Next, assume that the right Auslander-Reiten functor $\tau: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\to \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}$ is an equivalence. Since $\tau$ is dense, $\mathcal{C}$ has almost split sequences on the left. Let $X, Y\in \mathcal{C}$ be indecomposable. If $Y$ is Ext-injective, then ${\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(X, Y)=0$. Otherwise, we obtain $Y\cong \tau Z$, where $Z\in \mathcal{C}$ is indecomposable and not Ext-projective.
In this case, $D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(X, Y)\cong D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(X, \tau Z)\cong \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(Z, X)$. Since ${\rm Hom}_{\hspace{.5pt}\mathcal{C}}(Z, X)$ is reflexive, by Lemma \ref{alg-ref-mod}, so are $\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{\hspace{.5pt}\mathcal{C}}(Z, X)$ and ${\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(X, Y)$. Being Krull-Schmidt, $\mathcal{C}$ is Ext-reflexive. The proof of the theorem is completed. \noindent{\sc Remark.} Theorem \ref{ARS-ARF}(2) generalizes Lenzing and Zuazua's result stated in \cite[(1.1)]{LeZ} for Ext-finite abelian categories over a commutative artinian ring. \noindent{\sc Example.} Let $Q$ be a quiver of type $\mathbb{A}_\infty$ with a unique source vertex. The category of finitely presented representations of $Q$ over a field $k$ is a Hom-finite abelian $k$-category, which admits a right Auslander-Reiten functor, but no left Auslander-Reiten functor; see \cite[(1.15), (3.7)]{BLP}. Finally, we shall specialize to left or right triangulated $R$-categories. For this purpose, we shall modify the classical notion of a Serre functor; see \cite[(I.1)]{RvdB}. \begin{Defn}\label{SF} Let $\mathcal{C}$ be a tri-exact $R$-category. 
\begin{enumerate}[$(1)$] \item A {\it left Serre functor} for $\mathcal{C}$ is a functor $\mathbb{S}: \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}\to \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\hspace{.5pt}$ with binatural $R$-linear isomorphisms $\Phi_{X, Y}: \overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(X, Y) \to D\hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(\mathbb{S}Y, X),$ with $X, Y\in \mathcal{C}.$ \item A {\it right Serre functor} for $\mathcal{C}$ is a functor $\mathbb{S}: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt} \to \hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}} \hspace{2pt}$ with binatural $R$-linear isomorphisms $\Psi_{X, Y} : \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(X, Y) \to D\overline{\rm Hom\hspace{-.8pt}}\hspace{.6pt}_{{\hspace{1pt}\mathcal{C}}}(Y, \mathbb{S}X)$, with $X, Y\in \mathcal{C}.$ \end{enumerate} \end{Defn} \noindent{\sc Remark.} In case $\mathcal{C}$ is a triangulated $R$-category, by Lemma \ref{left-tri-ideal}, our left or right Serre functors coincide with those given by Reiten and van den Bergh in \cite[(I.1)]{RvdB}. \begin{Theo}\label{AR-Serre} Let $\mathcal{C}$ be a Hom-reflexive Krull-Schmidt tri-exact $R$-category. 
\begin{enumerate}[$(1)$] \item If $\mathcal{C}$ is right triangulated, then it has almost split sequences on the right if and only if it admits a right Auslander-Reiten functor, or equivalently, a right Serre functor whose image lies in $\mathcal{C}[1].$ \item If $\mathcal{C}$ is left triangulated, then it has almost split sequences on the left if and only if it admits a left Auslander-Reiten functor, or equivalently, a left Serre functor whose image lies in $\mathcal{C}[-1].$ \item If $\mathcal{C}$ is triangulated, then it has almost split sequences if and only if it admits a right or left Auslander-Reiten equivalence, or equivalently, a right or left Serre equivalence. \end{enumerate} \end{Theo} \noindent{\it Proof.} Since Statement (3) is an immediate consequence of Statements (1) and (2), we shall only prove Statement (1). Assume that $\mathcal{C}$ is a right triangulated subcategory of a triangulated $R$-category. The first equivalence stated in Statement (1) follows from Proposition \ref{ARF-prop} and Theorem \ref{ARS-ARF}(2). For the second equivalence, observe that $\mathcal{C}[1]\subseteq \mathcal{C}=\hspace{1.5pt}\overline{\hspace{-1pt}\mathcal{C}\hspace{-.2pt}}$; see (\ref{left-tri}) and (\ref{left-tri-ideal}). Let $\tau: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt} \to \mathcal{C}$ be a right Auslander-Reiten functor with binatural isomorphisms $\hbox{${\it\Theta}$}_{X,Y}\hspace{-1.5pt}:\hspace{-.5pt} \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(X,Y)\hspace{-1.5pt}\to \hspace{-2pt} D{\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Y, \tau X)$. Since ${\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Y, \tau X)={\rm Hom}_{\hspace{.5pt}\mathcal{C}}(Y, (\tau X)[1])$, we see that $\mathbb{S}=[1]\circ \tau: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\to \mathcal{C}$ is a right Serre functor, whose image lies in $\mathcal{C}[1]$.
Conversely, let $\mathbb{S}: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt} \to \mathcal{C}$ be a right Serre functor with binatural isomorphisms $\Psi_{X,Y}: \hspace{.8pt}\underline{\hspace{-.8pt}\rm Hom\hspace{-.6pt}}\hspace{.6pt}_{{\hspace{.5pt}\mathcal{C}}}(X,Y)\to D{\rm Hom}_{\hspace{.5pt}\mathcal{C}}(Y, \mathbb{S} X)$. Suppose that $\mathbb{S}(\underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt})\subseteq \mathcal{C}[1]$. Then, ${\rm Hom}_{\hspace{.5pt}\mathcal{C}}(Y, \mathbb{S} X)= {\rm Ext}^1_{\hspace{.5pt}\mathcal{C}}(Y, (\mathbb{S} X)[-1])$, for $X, Y\in \mathcal{C}$. Thus, $\tau=[-1]\circ \mathbb{S}: \underline{\hspace{.6pt}\mathcal{C}\hspace{-1.5pt}}\hspace{1.5pt}\to \mathcal{C}$ is a right Auslander-Reiten functor. The proof of the theorem is completed. \noindent{\sc Remark.} Theorem \ref{AR-Serre} generalizes Reiten and van den Bergh's result stated in \cite[(I.2.4)]{RvdB} for a Hom-finite triangulated category over a field. \noindent{\sc Example.} Let $\hbox{$\it\Lambda$}=kQ/J$ be a strongly locally finite dimensional algebra over a field $k$, where $Q$ is a locally finite quiver without infinite paths and $J$ is locally admissible. Since all modules in ${\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$}$ have finite projective and finite injective dimension, we will see from Theorem \ref{ART-refl} that $D^b({\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$})$ has almost split sequences. Further, for each $n\in \mathbb{Z}$, the right triangulated category $D^{\le n}({\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$})$ has almost split sequences on the left, and the left triangulated category $D^{\ge n}({\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$})$ has almost split sequences on the right. More examples can be found at the end of this paper.
\section{Almost split triangles in derived categories}

\noindent The main objective of this section is to study almost split triangles in the derived categories of an abelian category with enough projective objects and enough injective objects. Our results are applicable to the derived categories of module categories over an algebra with unity, or over a locally finite dimensional algebra given by a quiver with relations. In particular, they include Happel's result obtained in \cite{Hap1} for the bounded derived category of finite dimensional modules over a finite dimensional algebra. We shall start with an arbitrary abelian category $\mathfrak A$ and quote the following well-known statement; see, for example, \cite[(10.4.7)]{WEI}. \begin{Lemma}\label{Hom-iso} Let $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}, Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ be complexes over an abelian category $\mathfrak{A}$. If $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is a bounded-above complex of projective objects or $Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is a bounded-below complex of injective objects, then there exists an isomorphism $\mathbb{L}_{X^{\hspace{0.5pt}\dt\hspace{1.5pt}},Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}}:{\rm Hom}_{K(\mathfrak{A})}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}}, Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \to {\rm Hom}_{D(\mathfrak{A})}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}}, Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}),$ which is induced from the localization functor $\mathbb{L}: K(\mathfrak{A})\to D(\mathfrak{A})$. \end{Lemma} Let $\mathcal P$ and $\mathcal I$ be strictly additive subcategories of $\mathfrak A$ of projective objects and of injective objects, respectively. By Lemma \ref{Hom-iso}, we can view $K^b(\mathcal P)$ and $K^b(\mathcal I)$ as full subcategories of $D^b(\mathfrak A)$.
A {\it projective resolution} over $\mathcal P$ of a complex $Z^{\hspace{1.3pt}\dt\hspace{1pt}}\in C^-(\mathfrak A)$ is a quasi-isomorphism $s^\wdt\hspace{2pt}: P^{\hspace{1.3pt}\dt\hspace{1pt}}\to Z^{\hspace{1.3pt}\dt\hspace{1pt}}$ with $P^{\hspace{1.3pt}\dt\hspace{1pt}}\in C^-(\mathcal P),$ which is {\it finite} if $P^{\hspace{1.3pt}\dt\hspace{1pt}}\in C^b(\mathcal P).$ Dually, an injective {\it co-resolution} over $\mathcal I$ of a complex $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in C^+(\mathfrak A)$ is a quasi-isomorphism $t^\wdt\hspace{2pt}: X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \to I^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ with $I^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in C^+(\mathcal I),$ which is {\it finite} if $I^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in C^b(\mathcal I).$ \begin{Theo}\label{ART-nec} Let $\mathfrak{A}$ be an abelian category such that $D^*(\mathfrak{A})$ with $*\in \{\emptyset, +, -, b\}$ has an almost split triangle $\hspace{-2pt}\xymatrixcolsep{18pt}\xymatrix{X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\ar[r] & Y^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & Z^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & X^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1],}\hspace{-2pt} $ where $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is a bounded-below complex and $Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is a bounded-above complex. \begin{enumerate}[$(1)$] \item If $Z^{\hspace{1.3pt}\dt\hspace{1pt}}$ admits a projective resolution over a strictly additive subcategory $\mathcal P$ of projective objects of $\mathfrak A$, then it admits a finite projective resolution over $\mathcal P$. \item If $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ admits an injective co-resolution over a strictly additive subcategory $\mathcal I$ of injective objects of $\mathfrak A$, then it admits a finite injective co-resolution over $\mathcal I$. \end{enumerate} \end{Theo} \noindent{\it Proof.} We shall view $D^*(\mathfrak{A})$ as a full triangulated subcategory of $D(\mathfrak{A})$; see \cite[Chapter III]{Mil}. 
Let $\mathcal P$ be a strictly additive subcategory of projective objects of $\mathfrak A$, and let $s^\wdt\hspace{2pt}: P^{\hspace{1.3pt}\dt\hspace{1pt}} \to Z^{\hspace{1.3pt}\dt\hspace{1pt}}$ be a quasi-isomorphism with $P^{\hspace{1.3pt}\dt\hspace{1pt}}\in C^-({\mathcal P})$. Write $W^{\hspace{1.3pt}\dt\hspace{1pt}}=X^\dt\hspace{2pt}[1]$, a complex in $D^+(\mathfrak A)\cap D^*(\mathfrak A)$. Let $n$ be an integer such that $W^i=0$ for all $i < n$. Write $\delta^{\hspace{1.3pt}\dt\hspace{1pt}}: Z^{\hspace{1.3pt}\dt\hspace{1pt}}\to X^{\hspace{1.3pt}\dt\hspace{1pt}}[1]$ for the third morphism in the almost split triangle stated in the theorem. By Lemma \ref{Hom-iso}, $\delta^{\hspace{1.3pt}\dt\hspace{1pt}} \hspace{.5pt} \tilde{s}^\wdt\hspace{2pt}=\tilde{t}^\wdt\hspace{2pt}$ for some complex morphism $t^{\hspace{1.3pt}\dt\hspace{1pt}}: P^{\hspace{1.3pt}\dt\hspace{1pt}}\to W^{\hspace{1.3pt}\dt\hspace{1pt}}$. Consider the brutal truncation $\kappa_{\ge n}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$ and the associated canonical morphism $\mu^{\hspace{0.5pt}\dt\hspace{1.5pt}}: \kappa_{\ge n}(P^{\hspace{1.3pt}\dt\hspace{1pt}}) \to P^{\hspace{1.3pt}\dt\hspace{1pt}}$. Being bounded, $\kappa_{\ge n}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$ lies in $D^*(\mathfrak A)$. We claim that $\bar \mu^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is a retraction in $K(\mathfrak{A})$. Otherwise, by Lemma \ref{Hom-iso}, $\tilde{s}^\wdt\hspace{2pt} \hspace{.5pt} \tilde{\mu}^\wdt\hspace{2pt}$ is not a retraction in $D^*(\mathfrak{A})$, and hence, $\tilde t^\wdt\hspace{2pt} \hspace{.5pt} \tilde \mu^{\hspace{1.3pt}\dt\hspace{1pt}} = \delta^\wdt\hspace{2pt} (\tilde{s}^\wdt\hspace{2pt} \hspace{.5pt} \tilde\mu^\wdt\hspace{2pt})=0$. By Lemma \ref{Hom-iso}, $\bar t^\wdt\hspace{2pt} \hspace{.5pt} \bar\mu^\wdt\hspace{2pt}=0$.
In particular, there exist morphisms $h^i: P^i\to W^{i-1}$ with $i\ge n$ such that $t^i=t^i\mu^i=h^{i+1} d_P^i + d_W^{i-1}h^i,$ for all $i\ge n.$ Setting $h^i=0: P^i\to W^{i-1}$ for $i<n$, we obtain $t^i=h^{i+1}d_P^i + d_W^{i-1}h^i,$ for all $i\in \mathbb{Z}$. That is, $\bar t^\wdt\hspace{2pt}=0,$ and hence, $\delta^{\hspace{1.3pt}\dt\hspace{1pt}}=0$, a contradiction. This establishes our claim. In particular, ${\rm H}^i(P^{\hspace{1.3pt}\dt\hspace{1pt}})=0$ for all $i<n$. Let $u^\dt\hspace{2pt}: P^\wdt\hspace{2pt}\to \kappa_{\ge n}(P^\wdt\hspace{2pt})$ be a complex morphism such that $\bar\mu^\dt\hspace{2pt} \hspace{.5pt} \bar u^\dt\hspace{2pt}=\bar{\hspace{-.5pt}\id}_{P^\wdt\hspace{2pt}}$. Then, there exist $f^{i+1}: P^{i+1}\to P^i$ such that $1_{\hspace{-1pt}P^i}-\mu^iu^i=f^{i+1} d_P^i+d_P^{i-1} f^i$, for $i\in \mathbb{Z}$. In particular, $1_{P^{n-1}}=f^n d_P^{n-1}+d_P^{n-2}f^{n-1}$ and $d_P^{n-1}=d^{n-1}_P f^n d_P^{n-1}.$ Write $d^{n-1}_P= j v,$ where $v: P^{n-1}\to C$ is the cokernel of $d_P^{n-2}$. Since ${\rm Im}(d_P^{n-2})={\rm Ker}(d_P^{n-1})$, the induced morphism $j: C\to P^n$ is a monomorphism. Since $jv=jvf^njv$, we obtain $1_C=(vf^n)j$, and hence, $C\in \mathcal{P}$. Thus, the smart truncation $\tau_{\hspace{.5pt}\ge n}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$ lies in $C^b(\mathcal{P})$. Since ${\rm H}^i(P^{\hspace{1.3pt}\dt\hspace{1pt}})=0$ for all $i<n$, the canonical projection $p^\wdt\hspace{2pt}: P^{\hspace{1.3pt}\dt\hspace{1pt}} \to \tau_{\hspace{.5pt}\ge n}(P^\wdt\hspace{2pt})$ is a quasi-isomorphism; see \cite[(III.3.4.2)]{Mil}. Therefore, $Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}\cong \tau_{\hspace{.5pt}\ge n}(P^\wdt\hspace{2pt})$ in $D^*(\mathfrak{A})$. By Lemma \ref{Hom-iso}, $\tau_{\hspace{.5pt}\ge n}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$ is a finite projective resolution of $Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ over $\mathcal P$. Dually, we may establish Statement (2). The proof of the theorem is completed.
If $\mathfrak A$ has enough projective (respectively, injective) objects, then every bounded-above (respectively, bounded-below) complex over $\mathfrak A$ admits a projective resolution (respectively, injective co-resolution); see \cite[(7.5)]{BGKHME}. \begin{Cor}\label{ART-bd} Let $\mathfrak A$ be an abelian category such that $D^b(\mathfrak{A})$ has an almost split triangle $\hspace{-2pt}\xymatrixcolsep{18pt}\xymatrix{X^{\hspace{1.3pt}\dt\hspace{1pt}}\ar[r] & Y^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] & Z^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] & X^{\hspace{1.3pt}\dt\hspace{1pt}}\hspace{.4pt}[1].}$ \begin{enumerate}[$(1)$] \item If $\mathfrak A$ has enough projective objects, then $Z^{\hspace{1.3pt}\dt\hspace{1pt}}$ has a finite projective resolution. \item If $\mathfrak A$ has enough injective objects, then $X^{\hspace{1.3pt}\dt\hspace{1pt}}$ has a finite injective co-resolution. \end{enumerate} \end{Cor} \noindent{\sc Example.} Given any ring $\hbox{${\it\Sigma}$}$, Corollary \ref{ART-bd} applies in $D^b(\hbox{{\rm Mod}\hspace{0.5pt}} \hbox{${\it\Sigma}$})$; and if $\hbox{${\it\Sigma}$}$ is noetherian, then Corollary \ref{ART-bd}(1) applies in $D^b({\rm mod}^+\hspace{-2pt} \hbox{${\it\Sigma}$})$. Next, we shall obtain some sufficient conditions for the existence of an almost split triangle in the derived categories of $\mathfrak A$. For this purpose, we need to assume that $\mathfrak A$ is an abelian $R$-category and consider $D={\rm Hom}_R(-, I\hspace{-1.8pt}_R): \hbox{{\rm Mod}\hspace{0.5pt}} R \to \hbox{{\rm Mod}\hspace{0.5pt}} R,$ where $I\hspace{-1.8pt}_R$ is a minimal injective co-generator for $\hbox{{\rm Mod}\hspace{0.5pt}} R$. \begin{Defn}\label{Naka-Func} Let $\mathfrak A$ be an abelian $R$-category. 
Given a strictly additive subcategory $\mathcal P$ of projective objects of $\mathfrak A$, a functor $\nu: \mathcal{P}\to {\mathfrak A}$ is called a {\it Nakayama functor} if there exist binatural isomorphisms $\beta_{\hspace{-1pt}_{P, X}}:{\rm Hom}_{\mathfrak A}(X, \nu P)\to D{\rm Hom}_{\mathfrak A}(P, X)$, for all $P\in {\mathcal P}$ and $X\in {\mathfrak A}$. \end{Defn} \noindent{\sc Remark.} Given a Nakayama functor $\nu: {\mathcal P}\to \mathfrak A$, we see easily that $\nu P$ is an injective object of $\mathfrak A$, for every $P\in \mathcal P$. Hence $\nu \mathcal P$, the image of $\mathcal P$ under $\nu$, is a strictly additive subcategory of injective objects of $\mathfrak A$. As an example, we have the following probably known statement. \begin{Lemma}\label{Alg-nf} Let $A$ be an $R$-algebra. Then $\nu_{\hspace{-1.5pt}_A}=D{\rm Hom}_A(-, A): {\rm proj}\hspace{.5pt}A \to \hbox{{\rm Mod}\hspace{0.5pt}} A$ is a Nakayama functor for $\hbox{{\rm Mod}\hspace{0.5pt}} A$. \end{Lemma} \noindent{\it Proof.} Given $P\in {\rm proj}\hspace{.5pt} A$ and $X\in \hbox{{\rm Mod}\hspace{0.5pt}} A$, it is well known (see \cite[(20.10)]{AnF}) that there exists a binatural $R$-linear isomorphism $$\eta_{_{P,X}}: {\rm Hom}_A(P, A)\otimes_A X\to {\rm Hom}_A(P, X): f\otimes x \mapsto [\hspace{.5pt}u\mapsto f(u) x \hspace{.5pt}].$$ Considering the $R$-$A$-bimodule ${\rm Hom}_A(P, A)$ and the adjoint isomorphism, we obtain the following binatural isomorphisms $$\begin{array}{rcl} {\rm Hom}_R({\rm Hom}_A(P, X), I\hspace{-1.5pt}_R) & \stackrel{\sim}{\longrightarrow} & {\rm Hom}_R({\rm Hom}_A(P, A)\otimes_AX, I\hspace{-1.5pt}_R) \\ & \stackrel{\sim}{\longrightarrow} &{\rm Hom}_A(X, {\rm Hom}_R({\rm Hom}_A(P, A), I\hspace{-1.5pt}_R)). \end{array}$$ The proof of the lemma is completed. The following statement collects some properties of a Nakayama functor.
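\noindent{\sc Remark.} Before stating these properties, we record for orientation the classical special case of Lemma \ref{Alg-nf}; this is a standard fact and is not needed in the sequel. If $A$ is a finite dimensional algebra over a field $k$, then $I\hspace{-1.8pt}_R=k$ and $D={\rm Hom}_k(-, k)$, and there is a natural isomorphism $$\nu_{\hspace{-1.5pt}_A}=D{\rm Hom}_A(-, A)\cong DA\otimes_A -$$ on ${\rm proj}\hspace{.5pt}A$, so that $\nu_{\hspace{-1.5pt}_A}$ is the classical Nakayama functor. In this case, ${\rm proj}\hspace{.5pt}A$ is Hom-finite, and hence Hom-reflexive, over $k$, and $\nu_{\hspace{-1.5pt}_A}$ restricts to an equivalence ${\rm proj}\hspace{.5pt}A \to {\rm inj}\hspace{.5pt}A$ sending the projective cover of a simple module to its injective envelope.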
\begin{Lemma}\label{NFPro} Let $\mathfrak A$ be an abelian $R$-category with $\mathcal P$ a strictly additive subcategory of projective objects of $\mathfrak A$. Then every Nakayama functor $\nu: {\mathcal P}\to {\mathfrak A}$ is faithful, and it is fully faithful in case $\mathcal P$ is Hom-reflexive over $R$. \end{Lemma} \noindent{\it Proof.} Let $\nu: {\mathcal P}\to {\mathfrak A}$ be a Nakayama functor with binatural $R$-linear isomorphisms $\beta_{\hspace{-.8pt}_{P, X}}: {\rm Hom}_{\mathfrak A}(X, \nu P)\to D{\rm Hom}_{\mathfrak A}(P, X),$ where $P\in {\mathcal P}$ and $X\in {\mathfrak A}$. Fix two objects $L, P \in \mathcal P$. Given $f: L\to P$ and $g: P\to \nu L$, considering the commutative diagram $$\xymatrixcolsep{26pt}\xymatrixrowsep{16pt}\xymatrix{ {\rm Hom}_{\hspace{.5pt}\mathfrak A}(\nu L, \nu P) \ar[d]^{\beta_{P, \nu L}} & {\rm Hom}_{\mathfrak A}(\nu L, \nu L) \ar[r]^-{(gf, \nu L)} \ar[d]^-{\beta_{L,\nu L}} \ar[l]_-{(\nu L, \nu f)} & {\rm Hom}_{\mathfrak A}(L, \nu L) \ar[d]^{\beta_{L, L}} & {\rm Hom}_{\mathfrak A}(P, \nu L) \ar[d]^{\beta_{L, P}} \ar[l]_-{(f, \nu L)} \\ D {\rm Hom}_{\mathfrak A}(P, \nu L) & D{\rm Hom}_{\mathfrak A}(L, \nu L) \ar[r]^-{D(L, gf)} \ar[l]_-{D(f, \nu L)} & D {\rm Hom}_{\mathfrak A}(L, L) & D {\rm Hom}_{\mathfrak A}(L, P) \ar[l]_-{D(L, f)},}$$ we obtain the following equations $$\hspace{30pt} (*) \hspace{30pt} \beta_{P, \nu L}(\nu f)(g)=\beta_{L,\nu L}(\id_{\hspace{.5pt}\nu L})(gf)=\beta_{L, L}(gf)(\id_L)=\beta_{L, P}(g)(f).\hspace{50pt}$$ We claim that these equations imply the commutativity of the diagram $$\xymatrixcolsep{32pt}\xymatrixrowsep{16pt}\xymatrix{ {\rm Hom}_{\mathfrak A}(L, P) \ar[r]^-{\sigma} \ar[d]_\nu& D^2 {\rm Hom}_{\mathfrak A}(L, P)\ar[d]^{D(\beta_{L,P})}\\ {\rm Hom}_{\mathfrak A}(\nu L, \nu P) \ar[r]^-{\beta_{P, \nu L}} & D {\rm Hom}_{\mathfrak A}(P, \nu L),}$$ where $\sigma$ is the canonical injection.
Indeed, using the equations in $(*)$, we see that $$D(\beta_{L,P})(\sigma(f))(g)=\sigma(f)(\beta_{L,P}(g))=\beta_{L,P}(g)(f)=\beta_{P, \nu L}(\nu f)(g).$$ As a consequence, $\nu: {\rm Hom}_{\mathfrak A}(L, P) \to {\rm Hom}_{\mathfrak A}(\nu L, \nu P)$ is a monomorphism, and it is an isomorphism if ${\rm Hom}_{\mathfrak A}(L, P)$ is reflexive. The proof of the lemma is completed. \noindent{\sc Remark.} If $\mathcal{P}$ is Hom-reflexive over $R$, then every Nakayama functor $\nu: \mathcal{P}\to \mathfrak A$ co-restricts to an equivalence $\nu: \mathcal{P} \to \nu \mathcal{P}$. In this case, we shall always denote by $\nu^{\hspace{.4pt}\mbox{-}}: \nu\mathcal{P} \to \mathcal{P}$ a quasi-inverse of $\nu: \mathcal{P} \to \nu \mathcal{P}$. \noindent{\sc Example.} Let $\hbox{$\it\Lambda$}=kQ/J$ be a locally finite dimensional algebra over a field $k$, where $Q$ is locally finite and $J$ is weakly admissible. Then ${\rm proj}\hspace{.5pt}\hbox{$\it\Lambda$}$ is Hom-finite over $k$; see \cite[(3.2)]{BHL}, and we have a Nakayama functor $\nu_{\hspace{-1.5pt}_{\hbox{$\it\Lambda$}}}: {\rm proj}\hspace{.5pt}\hbox{$\it\Lambda$} \to \hbox{{\rm Mod}\hspace{0.5pt}} \hbox{$\it\Lambda$}$, sending $P_x$ to $I_x$; see \cite[(3.2), (3.6)]{BHL}. This yields an equivalence $\nu_{\hspace{-1.5pt}_{\hbox{$\it\Lambda$}}}: {\rm proj}\hspace{.5pt}\hbox{$\it\Lambda$} \to {\rm inj}\hspace{.5pt}\hbox{$\it\Lambda$}$ with a quasi-inverse $\nu^{\mbox{-}}_{\hspace{-1.5pt}_{\hbox{$\it\Lambda$}}}: {\rm inj}\hspace{.5pt}\hbox{$\it\Lambda$} \to {\rm proj}\hspace{.5pt}\hbox{$\it\Lambda$}$, sending $I_x$ to $P_x$. Let $F: \mathcal{A}\to \mathcal{B}$ be a functor between additive categories. Applying $F$ component-wise, one may extend $F$ to a functor $C(\mathcal{A})\to C(\mathcal{B})$, sending null-homotopic morphisms to null-homotopic ones and cones to cones. The latter functor induces a triangle-exact functor $K(\mathcal{A})\to K(\mathcal{B})$; see \cite[(V.1.1.1)]{Mil}.
For simplicity of notation, these functors will be written as $F: C(\mathcal{A})\to C(\mathcal{B})$ and $F: K(\mathcal{A})\to K(\mathcal{B})$. \begin{Prop}\label{Hotopy-fun} Let $\mathfrak A$ be an abelian $R$-category admitting a Nakayama functor $\nu: {\mathcal P}\to \mathfrak A$, where $\mathcal P$ is a strictly additive subcategory of projective objects of $\mathfrak A$. \begin{enumerate}[$(1)$] \item The triangle-exact functor $\nu:\hspace{-1pt} K(\mathcal P) \hspace{-1pt}\to\hspace{-1pt} K(\mathfrak A)$ restricts to a triangle-exact functor $\nu : K^b(\mathcal P) \to K^b(\nu \mathcal P),$ which is an equivalence if $\mathcal P$ is Hom-reflexive over $R$. \item Given a complex $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ over $\mathfrak A$ and a bounded complex $P^{\hspace{1.3pt}\dt\hspace{1pt}}$ over $\mathcal P$, we obtain a bina\-tural $R$-linear isomorphism $\tilde{\beta}_{_{\hspace{-1pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}: {\rm Hom}_{D(\mathfrak{A})}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}})\rightarrow D{\rm Hom}_{D(\mathfrak A)}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}).$ \end{enumerate} \end{Prop} \noindent{\it Proof.} If $\mathcal P$ is Hom-reflexive over $R$, then $\nu: \mathcal P \to \nu \mathcal P$ is an equivalence; see (\ref{NFPro}), which clearly induces an equivalence $\nu\hspace{-1pt}:\hspace{-1pt} K^b(\mathcal P) \hspace{-1pt}\to\hspace{-1pt} K^b(\nu \mathcal P)$. It remains to prove Statement (2). By definition, we obtain binatural $R$-linear isomorphisms $\beta_{_{\hspace{-1pt}P,X}}: {\rm Hom}_{\mathfrak{A}}(X,\nu P)\rightarrow D {\rm Hom}_{\mathfrak{A}}(P,X)$, for all $P\in \mathcal P$ and $X\in \mathfrak A$. Fix a complex $X^\dt\hspace{2pt}$ over $\mathfrak A$ and a bounded complex $P^{\hspace{1.3pt}\dt\hspace{1pt}}$ over $\mathcal P$.
We may define an $R$-linear map $\beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}: {\rm Hom}_{\hspace{.5pt}C(\mathfrak{A})}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \to D{\rm Hom}_{\hspace{.5pt}C(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ by setting $$\beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}(\xi^\wdt\hspace{2pt})(\zeta^\wdt\hspace{2pt})= {\textstyle\sum}_{i\in \mathbb{Z}}\,(-1)^i\beta_{_{\hspace{-1.3pt}P^i, X^i}}(\xi^i)(\zeta^i), $$ for $\xi^\wdt\hspace{2pt}: X^\dt\hspace{2pt}\to \nu P^{\hspace{1.3pt}\dt\hspace{1pt}}$ and $\zeta^\wdt\hspace{2pt}: P^{\hspace{1.3pt}\dt\hspace{1pt}}\to X^\dt\hspace{2pt}$ in $C(\mathfrak A).$ Using the binaturality of $\beta_{_{\hspace{-1pt}P,X}}$, we see that $\beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^\dt\hspace{2pt}}}$ is binatural and $\beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},X^\dt\hspace{2pt}}}(\xi^\wdt\hspace{2pt})(\zeta^\wdt\hspace{2pt})=0$ if $\xi^\wdt\hspace{2pt}$ or $\zeta^\wdt\hspace{2pt}$ is null-homotopic. This induces binatural $R$-linear maps $\bar \beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}: {\rm Hom}_{K(\mathfrak{A})}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \to D {\rm Hom}_{K(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ such that $\bar\beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}(\bar{\xi}^\wdt\hspace{2pt})(\bar{\zeta}^\wdt\hspace{2pt})=\beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},X^\dt\hspace{2pt}}}(\xi^\wdt\hspace{2pt})(\zeta^\wdt\hspace{2pt}),$ which we claim are isomorphisms. 
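\noindent{\sc Remark.} The vanishing of $\beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}(\xi^\wdt\hspace{2pt})(\zeta^\wdt\hspace{2pt})$ on null-homotopic morphisms can be checked by a telescoping computation; we sketch the case where $\zeta^\wdt\hspace{2pt}$ is null-homotopic, say $\zeta^i=h^{i+1}d_P^i+d_X^{i-1}h^i$ with morphisms $h^i: P^i\to X^{i-1}$. Using the naturality of $\beta$ in both variables together with the relation $\nu(d_P^i)\, \xi^i=\xi^{i+1} d_X^i$, we obtain $$\beta_{_{\hspace{-1.3pt}P^i, X^i}}(\xi^i)(\zeta^i)=a_{i+1}+a_i, \hspace{20pt} {\rm where} \hspace{10pt} a_i=\beta_{_{\hspace{-1.3pt}P^i, X^{i-1}}}(\xi^i d_X^{i-1})(h^i),$$ and hence, $\beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}(\xi^\wdt\hspace{2pt})(\zeta^\wdt\hspace{2pt})={\textstyle\sum}_{i\in \mathbb{Z}}\,(-1)^i(a_{i+1}+a_i)=0,$ since the sum has only finitely many nonzero terms and telescopes. The case where $\xi^\wdt\hspace{2pt}$ is null-homotopic is dual.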
{\sc Sublemma.} {\it If $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is a bounded complex, then $\bar\beta_{P^{\hspace{1.3pt}\dt\hspace{1pt}},X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}$ is an isomorphism.} Indeed, we start with the case where $w(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})= 1$, say $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ concentrates at degree $0$. Suppose first that $w(P^{\hspace{1.3pt}\dt\hspace{1pt}})=1.$ If $P^{\hspace{1.3pt}\dt\hspace{1pt}}$ concentrates at degree $0$, then $\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},X^{\hspace{1.3pt}\dt\hspace{1pt}}}}$ can be identified with $\beta_{_{\hspace{-1.5pt}P\hspace{.4pt}^0, X^0}}$, which is an isomorphism. Otherwise, both ${\rm Hom}$ groups vanish, and hence, $\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}$ is trivially an isomorphism. Suppose now that $w(P^{\hspace{1.3pt}\dt\hspace{1pt}}) = s >1.$ By Lemma \ref{truncation-exact}, $K(\mathcal{P})$ has an exact triangle $\xymatrixcolsep{18pt}\xymatrix{\hspace{-3pt} Q^{\hspace{1.3pt}\dt\hspace{1pt}}\ar[r] &P^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] &L^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] &Q^{\hspace{1.3pt}\dt\hspace{1pt}}[1]} $ with $w(Q^{\hspace{1.3pt}\dt\hspace{1pt}})< s$ and $w(L^{\hspace{0.5pt}\dt\hspace{1.5pt}})=1.$ This yields an exact triangle $\xymatrixcolsep{18pt}\xymatrix{\nu Q^{\hspace{0.5pt}\dt\hspace{1.5pt}}\ar[r] &\nu P^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] &\nu L^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] &\nu Q^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]} $ in $K(\mathfrak A)$.
Since $I_{\hspace{-1pt}R}$ is injective, we obtain a commutative diagram with exact rows $$\xymatrixcolsep{20pt}\xymatrixrowsep{16pt}\xymatrix{ (X^{\hspace{1.3pt}\dt\hspace{1pt}},\nu L^{\hspace{0.5pt}\dt\hspace{1.5pt}}[-1])\ar[d]^{\bar\beta_{_{\hspace{-1.5pt}L^{\hspace{0.5pt}\dt\hspace{1.5pt}}[-1], X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}} \ar[r] & (X^{\hspace{0.5pt}\dt\hspace{1.5pt}}, \nu Q^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[d]^{\bar\beta_{_{\hspace{-1.5pt}Q^{\hspace{0.5pt}\dt\hspace{1.5pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}} \ar[r] & (X^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}})\ar[d]^{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}} \ar[r] & (X^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu L^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[d]^{\bar\beta_{_{\hspace{-1.5pt}L^{\hspace{0.5pt}\dt\hspace{1.5pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}}\ar[r] & (X^{\hspace{0.5pt}\dt\hspace{1.5pt}}, \nu Q^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1])\ar[d]^{\bar\beta_{_{\hspace{-1.5pt}Q^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1], X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}}\\ D(L^{\hspace{0.5pt}\dt\hspace{1.5pt}}[-1],X^{\hspace{0.5pt}\dt\hspace{1.5pt}})\ar[r] & D(Q^{\hspace{0.5pt}\dt\hspace{1.5pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[r]&D(P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[r] & D(L^{\hspace{0.5pt}\dt\hspace{1.5pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[r] & D (Q^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1], X^{\hspace{0.5pt}\dt\hspace{1.5pt}}),} $$ where $(M^{\hspace{0.5pt}\dt\hspace{1.5pt}}, N^{\hspace{0.5pt}\dt\hspace{1.5pt}})={\rm Hom}_{K(\mathfrak{A})}(M^{\hspace{0.5pt}\dt\hspace{1.5pt}}, N^{\hspace{0.5pt}\dt\hspace{1.5pt}}).$ By the induction hypothesis on $w(P^{\hspace{1.3pt}\dt\hspace{1pt}})$, we see that $\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}$ is an $R$-linear isomorphism. 
Consider next the case where $w(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})=t>1.$ By Lemma \ref{truncation-exact}, $K(\mathfrak{A})$ has an exact triangle $\xymatrixcolsep{20pt}\xymatrix{\hspace{-3pt}Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}\ar[r] &X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] &Y^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] &Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1],} $ where $w(Z^{\hspace{0.5pt}\dt\hspace{1.5pt}})<t$ and $w(Y^{\hspace{0.5pt}\dt\hspace{1.5pt}})=1.$ This yields a commutative diagram with exact rows $$\xymatrixcolsep{20pt}\xymatrixrowsep{16pt}\xymatrix{ (Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}[-1],\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \ar[d]^{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}[-1]}}}\ar[r] & (Y^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \ar[d]^{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}} \ar[r] & (X^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \ar[d]^{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}} \ar[r] & (Z^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \ar[d]^{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}} \ar[r] & (Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1],\nu P^{\hspace{1.3pt}\dt\hspace{1pt}})\ar[d]^{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]}}} \\ D(P^{\hspace{1.3pt}\dt\hspace{1pt}}, Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}[-1])\ar[r] & D(P^{\hspace{1.3pt}\dt\hspace{1pt}}, Y^{\hspace{0.5pt}\dt\hspace{1.5pt}})\ar[r] & D(P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[r] & D(P^{\hspace{1.3pt}\dt\hspace{1pt}}, Z^{\hspace{0.5pt}\dt\hspace{1.5pt}})\ar[r] & D(P^{\hspace{1.3pt}\dt\hspace{1pt}}, Y^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]). 
}$$ By the induction hypothesis, $\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}},X^{\hspace{1.3pt}\dt\hspace{1pt}}}}$ is an isomorphism. This proves the sublemma. In general, assume that $P^i=0$ for $i \not\in [m, n]$, where $m<n$. Considering the brutal truncations $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}=\kappa_{\ge m}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ and $N^{\hspace{0.5pt}\dt\hspace{1.5pt}}=\kappa_{\le n}(M^{\hspace{0.5pt}\dt\hspace{1.5pt}})$ with canonical morphisms $\mu^\wdt\hspace{2pt}: M^{\hspace{0.5pt}\dt\hspace{1.5pt}} \to X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ and $\pi^\wdt\hspace{2pt}: M^\dt\hspace{2pt}\to N^{\hspace{0.5pt}\dt\hspace{1.5pt}},$ we obtain a commutative diagram $$\xymatrixcolsep{45pt}\xymatrixrowsep{16pt}\xymatrix{ {\rm Hom}_{K(\mathfrak{A})}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \ar[d]_{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}} \ar[r]^{{\rm Hom}(\bar\mu^{\hspace{0.5pt}\dt\hspace{1.5pt}}\hspace{-2pt}, \,\nu P^{\hspace{1.3pt}\dt\hspace{1pt}})} & {\rm Hom}_{K(\mathfrak{A})}(M^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \ar[d]^{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}\hspace{-2pt}, \, M^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}} & {\rm Hom}_{K(\mathfrak{A})}(N^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \ar[d]^{\bar\beta_{_{\hspace{-1.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, N^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}}\ar[l]_{{\rm Hom}(\bar\pi^{\hspace{0.5pt}\dt\hspace{1.5pt}}\hspace{-2pt},\, \nu P^{\hspace{1.3pt}\dt\hspace{1pt}})}\\ D{\rm Hom}_{K(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}) \ar[r]^{D{\rm Hom}(P^{\hspace{1.3pt}\dt\hspace{1pt}}\hspace{-2pt}, \,\bar\mu^{\hspace{0.5pt}\dt\hspace{1.5pt}})} & D{\rm Hom}_{K(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, M^{\hspace{0.5pt}\dt\hspace{1.5pt}}) & D{\rm 
Hom}_{K(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, N^{\hspace{0.5pt}\dt\hspace{1.5pt}}).\ar[l]_{D{\rm Hom}(P^{\hspace{1.3pt}\dt\hspace{1pt}}\hspace{-2pt}, \,\bar\pi^{\hspace{0.5pt}\dt\hspace{1.5pt}})} } $$ Since $P^i=0$ for $i\not\in [m, n]$, it is not difficult to see that the horizontal maps are $R$-linear isomorphisms. Since $N^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is bounded, by the sublemma, $\bar\beta_{_{\hspace{-1.2pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, N^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}$ is an isomorphism, and so are $\bar\beta_{_{\hspace{-1.2pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}\hspace{-2pt}, \, M^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}$ and $\bar\beta_{_{\hspace{-1.2pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}$. This establishes our claim. Then, by Lemma \ref{Hom-iso}, we obtain a binatural $R$-linear isomorphism $$\tilde \beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}=D(\mathbb{L}_{P^{\hspace{1.3pt}\dt\hspace{1pt}}\hspace{-.6pt}, \hspace{.4pt} X^{\hspace{0.5pt}\dt\hspace{1.5pt}} }^{-1})\circ \bar \beta_{_{\hspace{-1.3pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}}}\circ \mathbb{L}_{X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\hspace{-.6pt}, \hspace{.4pt}\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}}^{-1}: {\rm Hom}_{D(\mathfrak{A})}(X^{\hspace{0.5pt}\dt\hspace{1.5pt}},\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}) \to D {\rm Hom}_{D(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, X^{\hspace{0.5pt}\dt\hspace{1.5pt}}). $$ The proof of the proposition is completed. \noindent{\sc Remark.} The isomorphism stated in Proposition \ref{Hotopy-fun}(2) is known for the bounded derived category of a finite dimensional algebra; see \cite[Page 350]{H}. We are ready to obtain a sufficient condition for the existence of an almost split triangle in the derived categories of an abelian category with a Nakayama functor.
\begin{Theo}\label{ART-general} Let $\mathfrak A$ be an abelian $R$-category with $\mathcal P$ a strictly additive subcategory of projective objects of $\mathfrak A$ and $\nu\hspace{-1pt}: \hspace{-1.5pt} \mathcal P \hspace{-1pt}\to \hspace{-1pt} \mathfrak A$ a Nakayama functor. If $P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b(\mathcal P)$ and $\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b(\nu \mathcal P)$ are strongly indecomposable, then $D^b(\mathfrak A)$ has an almost split triangle $\hspace{-3pt}\xymatrixcolsep{18pt}\xymatrix{\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1] \ar[r] & M^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & P^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] & \nu P^{\hspace{1.3pt}\dt\hspace{1pt}},}\hspace{-3pt} $ which is also almost split in $D(\mathfrak A)$. \end{Theo} \noindent{\it Proof.} Assume that $P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b(\mathcal P)$ and $\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b(\nu \mathcal P)$ are strongly indecomposable. By Lemma \ref{Hom-iso}, $P^{\hspace{1.3pt}\dt\hspace{1pt}}$ and $\nu P^{\hspace{1.3pt}\dt\hspace{1pt}}$ are strongly indecomposable in $D^b(\mathfrak A)$. In view of Proposition \ref{Hotopy-fun}(2), we obtain an isomorphism $$\Phi: {\rm Ext}_{D(\mathfrak{A})}^1(-, \nu P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1])={\rm Hom}_{D(\mathfrak{A})}(-, \nu P^{\hspace{1.3pt}\dt\hspace{1pt}})\to D{\rm Hom}_{D(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, -), $$ which restricts to an isomorphism $$\Psi: {\rm Ext}_{D^b(\mathfrak{A})}^1(-, \nu P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1])={\rm Hom}_{D^b(\mathfrak{A})}(-, \nu P^{\hspace{1.3pt}\dt\hspace{1pt}})\to D{\rm Hom}_{D^b(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}}, -). $$ Choose a non-zero $R$-linear form $\theta: {\rm End}_{D^b(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}})\to I{\hspace{-1pt}_R}$, which vanishes on the radical of ${\rm End}_{D^b(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$. 
Then, $\theta=\Psi_{P^{\hspace{1.3pt}\dt\hspace{1pt}}}(\delta^\wdt\hspace{2pt})$ for some $\delta^\wdt\hspace{2pt}\in {\rm Ext}_{D^b(\mathfrak{A})}^1(P^{\hspace{1.3pt}\dt\hspace{1pt}}, \nu P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1])$. Since $\theta$ is in the right ${\rm End}_{D^b(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$-socle of $D{\rm End}_{D^b(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$, by Theorem \ref{AZE}, $\delta^\wdt\hspace{2pt}$ is an almost-zero extension in $D^b(\mathfrak{A})$. On the other hand, since $D^b(\mathfrak{A})$ is a full triangulated subcategory of $D(\mathfrak{A})$, we see that $\delta^\wdt\hspace{2pt}\in {\rm Ext}_{D(\mathfrak{A})}^1(P^{\hspace{1.3pt}\dt\hspace{1pt}}, \nu P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1])$ with $\Phi_{P^{\hspace{1.3pt}\dt\hspace{1pt}}}(\delta^\wdt\hspace{2pt})=\theta$, which lies in the right ${\rm End}_{D(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$-socle of $D{\rm End}_{D(\mathfrak{A})}(P^{\hspace{1.3pt}\dt\hspace{1pt}})$. Hence, $\delta^\wdt\hspace{2pt}$ is an almost-zero extension in $D(\mathfrak{A})$. Therefore, $\delta^\wdt\hspace{2pt}$ defines an almost split triangle $\xymatrixcolsep{18pt}\xymatrix{\hspace{-3pt} \nu P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1] \ar[r] & M^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] &P^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] & \nu P^{\hspace{1.3pt}\dt\hspace{1pt}}} \hspace{-2pt}$ in $D^b(\mathfrak A)$, which is also an almost split triangle in $D(\mathfrak A)$. The proof of the theorem is completed. \noindent{\sc Example.} Let $A$ be an $R$-algebra. By Lemma \ref{Alg-nf}, there exists a Nakayama functor $\nu_{\hspace{-2pt}_A}: {\rm proj}\hspace{.5pt}A \to \hbox{{\rm Mod}\hspace{0.5pt}}\hspace{.5pt}A$, and hence, Theorem \ref{ART-general} applies in $D(\hbox{{\rm Mod}\hspace{0.5pt}} A)$. As an application of Theorem \ref{ART-general}, we shall describe some almost split triangles in the derived categories of all modules over a locally finite dimensional algebra.
\begin{Cor} Let $\hbox{$\it\Lambda$}=kQ/J$ be a locally finite dimensional algebra over a field $k$, where $Q$ is locally finite and $J$ is weakly admissible. \begin{enumerate}[$(1)$] \item If $P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b({\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$})$ is indecomposable, then $D^b({\rm Mod}\hspace{.4pt}\hbox{$\it\Lambda$})$ has an almost split triangle $\xymatrixcolsep{18pt}\xymatrix{\hspace{-2pt}\nu_{\hspace{-1.5pt}_{\it\Lambda}}\hspace{-.5pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1] \ar[r] & M^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] &P^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] & \nu_{\hspace{-1.5pt}_{\it\Lambda}} \hspace{-.5pt} P^{\hspace{1.3pt}\dt\hspace{1pt}}\hspace{-1pt},} \hspace{-2pt}$ which is almost split in $D(\hbox{{\rm Mod}\hspace{0.5pt}}\hbox{$\it\Lambda$})$. \item If $I^\dt\hspace{2pt}\in K^b({\rm inj}\hspace{.4pt}\hbox{$\it\Lambda$})$ is indecomposable, then $D^b({\rm Mod}\hspace{.4pt}\hbox{$\it\Lambda$})$ has an almost split triangle $\xymatrixcolsep{18pt}\xymatrix{\hspace{-2pt}I^\dt\hspace{2pt}\ar[r] & M^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] & \nu^{\mbox{-}}_{\hspace{-1.5pt}_{\it\Lambda}}I^\dt\hspace{2pt}[1] \ar[r] & I^{\hspace{1.3pt}\dt\hspace{1pt}}[1], \hspace{-2pt}} $ which is almost split in $D(\hbox{{\rm Mod}\hspace{0.5pt}}\hbox{$\it\Lambda$})$. \end{enumerate} \end{Cor} \noindent{\it Proof.} Since $\hbox{$\it\Lambda$}$ is locally finite dimensional, ${\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$}$ is Hom-finite; see \cite[(3.2)]{BHL}, and so is $K^b({\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$})$. Since the idempotents in $D^b(\hbox{{\rm Mod}\hspace{0.5pt}} \hbox{$\it\Lambda$})$ split; see \cite[Corollary A]{LeC}, so do the idempotents in $K^b({\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$})$. Thus, $K^b({\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$})$ is Krull-Schmidt; see \cite[(1.1)]{LNP}.
Moreover, the Nakayama functor $\nu_{\hspace{-1.5pt}_\mathit\Lambda}: {\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$}\to \hbox{{\rm Mod}\hspace{0.5pt}}\hbox{$\it\Lambda$}$ induces an equivalence $\nu_{\hspace{-1.5pt}_\mathit\Lambda}: K^b({\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$})\to K^b({\rm inj}\hspace{.4pt}\hbox{$\it\Lambda$})$ with a quasi-inverse $\nu^{\hspace{.4pt}\mbox{-}}_{\hspace{-1.5pt}_{\it\Lambda}}: K^b({\rm inj}\hspace{.4pt}\hbox{$\it\Lambda$})\to K^b({\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$});$ see (\ref{Hotopy-fun}). If $P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b({\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$})$ is indecomposable, then so is $\nu_{\hspace{-1.5pt}_{\it\Lambda}} P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b({\rm inj}\hspace{.4pt}\hbox{$\it\Lambda$})$. By Theorem \ref{ART-general}, there exists an almost split triangle as stated in Statement (1). On the other hand, if $I^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b({\rm inj}\hspace{.4pt}\hbox{$\it\Lambda$})$ is indecomposable, then so is $\nu^{\mbox{-}}_{\hspace{-1.5pt}_{\it\Lambda}}I^{\hspace{1.3pt}\dt\hspace{1pt}}[1]\in K^b({\rm proj}\hspace{.4pt}\hbox{$\it\Lambda$})$. By Theorem \ref{ART-general}, there exists an almost split triangle as stated in Statement (2). The proof of the corollary is completed. Similarly, we can describe some almost split triangles in the derived categories of all modules over a reflexive noetherian algebra. \begin{Theo}\label{ART-NA} Let $A$ be a reflexive noetherian $R$-algebra. Consider a strongly indecomposable complex $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in D^b(\hbox{{\rm Mod}\hspace{0.5pt}} A)$.
\begin{enumerate}[$(1)$] \item If $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is a complex over ${\rm mod}^+\hspace{-3pt}A$, then $D^b(\hbox{{\rm Mod}\hspace{0.5pt}} A)$ has an almost split triangle $\xymatrixcolsep{16pt}\xymatrix{\hspace{-2pt}N^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & L^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & M^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & N^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]}\hspace{-2pt} $ if and only if $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ has a finite projective resolution $P^{\hspace{1.3pt}\dt\hspace{1pt}}$ over ${\rm proj}\hspace{.5pt} A;$ and in this case, $N^{\hspace{0.5pt}\dt\hspace{1.5pt}}\cong \nu_{\hspace{-2pt}_A}\hspace{-1pt}P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1]$, a complex over ${\rm mod}^{\hspace{.3pt}-\hspace{-3.2pt}}A.$ \item If $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ is a complex over ${\rm mod}^{\hspace{.3pt}-\hspace{-3.2pt}}A$, then $D^b(\hbox{{\rm Mod}\hspace{0.5pt}} A)$ has an almost split triangle $\hspace{-4pt} \xymatrixcolsep{16pt}\xymatrix{M^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & L^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & N^{\hspace{1.3pt}\dt\hspace{1pt}} \ar[r] & M^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]\hspace{-4pt}}$ if and only if $M^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ has a finite injective co-resolution $I^{\hspace{1.3pt}\dt\hspace{1pt}}$ over ${\rm inj}\hspace{.5pt} A;$ and in this case, $N^{\hspace{0.5pt}\dt\hspace{1.5pt}}\cong \nu_{\hspace{-2pt}_A}^{\mbox{\hspace{.5pt}-\hspace{-2pt}}}I^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]$, a complex over ${\rm mod}^+\hspace{-3pt}A$. \end{enumerate} \end{Theo} \noindent{\it Proof.} Since $_AA$ is $R$-reflexive, by Lemma \ref{alg-ref-mod}, we see that ${\rm proj}\hspace{.5pt}A$ is Hom-reflexive over $R$.
Thus, by Proposition \ref{Hotopy-fun}(1), the Nakayama functor $\nu_{\hspace{-2pt}_A}: {\rm proj}\hspace{.5pt}A\to \hbox{{\rm Mod}\hspace{0.5pt}}A$ induces an equivalence $\nu_{\hspace{-2pt}_A}: K^b({\rm proj}\hspace{.5pt}A)\to K^b({\rm inj}\hspace{.5pt}A)$, which has a quasi-inverse $\nu_{\hspace{-2pt}_A}^{\hspace{.8pt}\mbox{-\hspace{-3pt}}}: K^b({\rm inj}\hspace{.5pt}A)\to K^b({\rm proj}\hspace{.5pt}A)$. In particular, a complex $P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b({\rm proj}\hspace{.5pt}A)$ is strongly indecomposable if and only if $\nu_{\hspace{-2pt}_A}P^{\hspace{1.3pt}\dt\hspace{1pt}}\in K^b({\rm inj}\hspace{.5pt}A)$ is strongly indecomposable. Moreover, by Theorem \ref{Noe-Ref}, ${\rm mod}^+\hspace{-3pt}A$ is an abelian category with enough projective modules in ${\rm proj}\hspace{.5pt}A$ and ${\rm mod}^-\hspace{-3pt}A$ is an abelian category with enough injective modules in ${\rm inj}\hspace{.5pt}A$, and consequently, every bounded complex over ${\rm mod}^+\hspace{-3pt}A$ has a projective resolution over ${\rm proj}\hspace{.5pt} A$ and every bounded complex over ${\rm mod}^-\hspace{-3pt}A$ has an injective co-resolution over ${\rm inj}\hspace{.5pt}A$; see \cite[(7.5)]{BGKHME}. Now, Statements (1) and (2) follow immediately from Theorems \ref{ART-nec} and \ref{ART-general}. The proof of the theorem is completed. {\sc Example.} If $R$ is a product of noetherian complete local commutative rings, then every noetherian $R$-algebra is reflexive; see \cite[Section 5]{AUS}. In case $R$ is noetherian complete local, the finiteness of the global dimension of a noetherian $R$-algebra is related to the existence of almost split triangles in its derived category. \begin{Cor}\label{ART-fgd} Let $A$ be a noetherian $R$-algebra, where $R$ is a product of commutative noetherian complete local rings. The following statements are equivalent. \begin{enumerate}[$(1)$] \item The global dimension of $A$ is finite.
\item Every indecomposable complex in $D^b({\rm mod}^{+\hspace{-3pt}}A)$ is the ending term of an almost split triangle in $D^b(\hbox{{\rm Mod}\hspace{0.5pt}} A).$ \item Every indecomposable object in $D^b({\rm mod}^{-\hspace{-3pt}}A)$ is the starting term of an almost split triangle in $D^b(\hbox{{\rm Mod}\hspace{0.5pt}} A).$ \end{enumerate} \end{Cor} \noindent{\it Proof.} First of all, $A$ is a reflexive noetherian $R$-algebra. Moreover, ${\rm mod}^{+\hspace{-3pt}}A$ is a Krull-Schmidt abelian subcategory of ${\rm RMod}\hspace{.5pt}A$; see \cite[Section 5]{AUS}, and by Theorem \ref{Noe-Ref}(2), so is ${\rm mod}^{-\hspace{-3pt}}A$. Thus, $D^b({\rm mod}^{+\hspace{-3pt}}A)$ and $D^b({\rm mod}^{\hspace{.5pt}\mbox{-}\hspace{-3.5pt}}A)$ are Krull-Schmidt; see \cite[Corollary B]{LeC}. Since ${\rm mod}^{+\hspace{-3pt}}A$ has enough projective modules in ${\rm proj}\hspace{.4pt}A$ and ${\rm mod}^{-\hspace{-3pt}}A$ has enough injective modules in ${\rm inj}\hspace{.4pt}A$, we see that $D^b({\rm mod}^{+\hspace{-3pt}}A)$ and $D^b({\rm mod}^{\hspace{.5pt}\mbox{-}\hspace{-3.5pt}}A)$ are full triangulated subcategories of $D(\hbox{{\rm Mod}\hspace{0.5pt}} A)$; see \cite[(1.11)]{BaL}. Let $A$ be of finite global dimension. Then, every bounded complex over ${\rm mod}^{+\hspace{-3pt}}A$ has a finite projective resolution over ${\rm proj}\hspace{.5pt}A$ and every bounded complex over ${\rm mod}^{\hspace{.5pt}\mbox{-}\hspace{-3.5pt}}A$ has a finite injective co-resolution over ${\rm inj}\hspace{.5pt}A$; see \cite[(7.5)]{BGKHME}. Thus, Statements (2) and (3) follow from Theorem \ref{ART-NA}. Conversely, assume that Statement (3) holds. Since ${\rm mod}^{-\hspace{-3pt}}A$ is Krull-Schmidt, we deduce from Theorem \ref{ART-NA}(2) that every module in ${\rm mod}^{-\hspace{-3pt}}A$ is of finite injective dimension.
By Theorem \ref{Noe-Ref}(2), every module in ${\rm mod}^{+\hspace{-3pt}}A^{\rm op}$ is of finite projective dimension, and hence, $A^{\rm op}$ is of finite global dimension; see \cite[(9.12)]{ROT}. Being left and right noetherian as a ring, $A$ is of finite global dimension; see \cite[(9.23)]{ROT}. The proof of the corollary is completed. To conclude, we shall describe all possible almost split triangles in the bounded derived category of an abelian category with a Nakayama functor, enough projective objects and enough injective objects. \begin{Theo}\label{ART-refl} \hspace{-4pt} Let $\mathfrak A$ be an abelian $R\hspace{-1pt}$-category with a Nakaya\-ma functor $\nu\hspace{-1pt}:\hspace{-1pt} \mathcal P \hspace{-2pt}\to \hspace{-1pt}\mathfrak A$, where $\mathcal P$ is a Hom-reflexive strictly additive subcategory of projective objects of $\mathfrak A$ such that $\mathfrak A$ has enough projective objects in $\mathcal P$ and enough injective objects in $\hspace{-.5pt} \nu \mathcal P$. \begin{enumerate}[$(1)$] \item If $Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}\in D^b(\mathfrak A)$ is strongly indecomposable, then there exists an almost split triangle $\hspace{-2pt}\xymatrixcolsep{16pt}\xymatrix{X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & Y^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & Z^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & X^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]}\hspace{-2pt}$ in $D^b(\mathfrak A)$ if and only if $Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ has a finite projective resolution $P^{\hspace{1.3pt}\dt\hspace{1pt}}$ over $\mathcal P;$ and in this case, $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}\cong \nu P^{\hspace{1.3pt}\dt\hspace{1pt}}[-1]$.
\item If $X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \hspace{-1.5pt} \in \hspace{-1.5pt} D^b(\mathfrak A)$ is strongly indecomposable, then there exists an almost split triangle $\hspace{-3pt}\xymatrixcolsep{16pt}\xymatrix{X^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & Y^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & Z^{\hspace{0.5pt}\dt\hspace{1.5pt}} \ar[r] & X^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]}\hspace{-3pt}$ in $D^b(\mathfrak A)$ if and only if $X^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ has a finite injective co-resolution $I^{\hspace{0.5pt}\dt\hspace{1.5pt}}$ over $\nu \mathcal P;$ and in this case, $Z^{\hspace{0.5pt}\dt\hspace{1.5pt}}\cong \nu^{\mbox{-}\hspace{-0.5pt}} I^{\hspace{0.5pt}\dt\hspace{1.5pt}}[1]$. \item If every object in $\mathfrak A$ has a finite projective resolution over $\mathcal P ($respectively, injective co-resolution over $\nu\mathcal P)$, then $D^b(\mathfrak A)$ has almost split triangles on the right $($res\-pectively, left$);$ and the converse holds in case $\mathfrak A$ is Krull-Schmidt. \end{enumerate} \end{Theo} \noindent{\it Proof.} By Proposition \ref{Hotopy-fun}(1), the Nakayama functor $\nu: \mathcal P \to \mathfrak A$ induces an equivalence $\nu: K^b(\mathcal P) \to K^b(\nu \mathcal P)$ with a quasi-inverse $\nu^{\hspace{.4pt}\mbox{-}}: K^b(\nu \mathcal{P})\to K^b(\mathcal{P}).$ Since $\mathfrak A$ has enough projective objects in $\mathcal P$ and enough injective objects in $\nu \mathcal P$, every bounded complex over $\mathfrak A$ has a projective resolution over $\mathcal P$ and an injective co-resolution over $\nu \mathcal P$; see \cite[(7.5)]{BGKHME}. In view of Theorems \ref{ART-nec} and \ref{ART-general}, we see easily that the first two statements hold true. Next, assume that every object in $\mathfrak A$ has a finite projective resolution over $\mathcal P$. Then, every bounded complex over $\mathfrak A$ has a finite projective resolution over $\mathcal P$; see \cite[(7.5)]{BGKHME}.
By Statement (1), $D^b(\mathfrak A)$ has almost split triangles on the right. Conversely, suppose that $D^b(\mathfrak A)$ has almost split triangles on the right. In particular, every strongly indecomposable object in $\mathfrak A$ is the ending term of an almost split triangle in $D^b(\mathfrak A)$, and by Statement (1), it has a finite projective resolution over $\mathcal P$. If $\mathfrak A$ in addition is Krull-Schmidt, then every object in $\mathfrak A$ has a finite projective resolution over $\mathcal P$. This proves the first part of Statement (3), and the second part follows dually. The proof of the theorem is completed. \noindent{\sc Example.} (1) Let $A$ be an artin algebra over a commutative artinian ring $R$. Then ${\rm mod}\hspace{.5pt}A$ is a Hom-finite abelian $R$-category with enough projective modules and enough injective modules. Considering the Nakayama functor $\nu_{\hspace{-2pt}_A}: {\rm proj}\hspace{.5pt}A \to {\rm mod}\hspace{.5pt}A$, we see that Theorem \ref{ART-refl} applies in $D^b({\rm mod}\hspace{.5pt}A)$, and in particular, it includes Happel's results stated in \cite{Hap1}. (2) Let $\hbox{$\it\Lambda$}=kQ/J$ be a strongly locally finite dimensional algebra over a field $k$, where $Q$ is locally finite and $J$ is locally admissible. Then ${\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$}$ is a Hom-finite abelian $k$-category with enough projective modules in ${\rm proj}\,\hbox{$\it\Lambda$}$ and enough injective modules in ${\rm inj}\,\hbox{$\it\Lambda$}$. Considering the Nakayama functor $\nu_{\hspace{-2pt}_{\it\Lambda}}: {\rm proj}\hspace{.5pt}\hbox{$\it\Lambda$} \to {\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$}$, we see that Theorem \ref{ART-refl} applies in $D^b({\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$})$.
In case $Q$ has no infinite path, every module in ${\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$}$ has a finite projective dimension and a finite injective dimension, and by Theorem \ref{ART-refl}(3), $D^b({\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$})$ has almost split triangles. (3) Let $\hbox{$\it\Lambda$}=kQ/J$, where $Q$ is a locally finite quiver and $J$ is the ideal in $kQ$ generated by the paths of length two. Then, every module in ${\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$}$ is of finite projective dimension over ${\rm proj}\,\hbox{$\it\Lambda$}$ if and only if $Q$ has no right infinite path, and every module in ${\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$}$ is of finite injective dimension over ${\rm inj}\,\hbox{$\it\Lambda$}$ if and only if $Q$ has no left infinite path. By Theorem \ref{ART-refl}, $D^b({\rm mod}^{\hspace{.3pt}b\hspace{-2.5pt}}\hbox{$\it\Lambda$})$ has almost split triangles $($on the left, on the right$)$ if and only if $Q$ has no $($left, right$)$ infinite path. \end{document}
\begin{document} \title[Multi-dimensional $q$-summations]{Multi-dimensional $q$-summations and multi-colored partitions} \author[S. Chern]{Shane Chern} \address[Shane Chern]{Department of Mathematics, The Pennsylvania State University, University Park, PA 16802, USA} \email{[email protected]} \author[S. Fu]{Shishuo Fu} \address[Shishuo Fu]{College of Mathematics and Statistics, Chongqing University, Huxi campus LD506, Chongqing 401331, P.R. China} \email{[email protected]} \author[D. Tang]{Dazhao Tang} \address[Dazhao Tang]{College of Mathematics and Statistics, Chongqing University, Huxi campus LD208, Chongqing 401331, P.R. China} \email{[email protected]} \date{\today} \begin{abstract} Motivated by Alladi's recent multi-dimensional generalization of Sylvester's classical identity, we provide a simple combinatorial proof of an overpartition analogue, which contains extra parameters tracking the numbers of overlined parts of different colors. This new identity encompasses a handful of classical results as special cases, such as Cauchy's identity, and the product expressions of three classical theta functions studied by Gauss, Jacobi and Ramanujan. \end{abstract} \subjclass[2010]{05A17, 11P84} \keywords{Sylvester's identity, Cauchy's identity, multiple summations, multi-colored partitions, combinatorial proof.} \maketitle \section{Introduction}\label{sec1} In 1882, Sylvester \cite{Syl} discovered the following identity: \begin{align} \label{eq1.1} (-aq;q)_{\infty}=1+\sum_{k=1}^{\infty}\frac{a^{k}q^{(3k^{2}-k)/2}(-aq;q)_{k-1}(1+aq^{2k})}{(q;q)_{k}}. \end{align} Here and in the sequel, we use the standard $q$-series notation \cite{Andr}: \begin{align*} (a;q)_n:=&\prod_{k= 0}^{n-1} (1-aq^k),\\ (a;q)_\infty:=&\prod_{k= 0}^\infty (1-aq^k). 
\end{align*} The case $a=-1$ in \eqref{eq1.1} yields Euler's celebrated pentagonal number theorem: \begin{align}\label{theta:3} (q;q)_{\infty} &=1+\sum_{k=1}^{\infty}(-1)^{k}q^{(3k^{2}-k)/2}(1+q^{k})\notag\\ &=\sum_{k=-\infty}^{\infty}(-1)^{k}q^{(3k^{2}-k)/2}. \end{align} The right-hand side of \eqref{theta:3} is one of the three classical theta functions studied by Gauss, Jacobi and Ramanujan. The other two admit similar product representations as follows. \begin{align} \label{theta:2} \frac{(q;q)_{\infty}}{(-q;q)_{\infty}} &=\sum_{k=-\infty}^{\infty}(-1)^{k}q^{k^2},\\ \label{theta:4} \frac{(q^2;q^2)_{\infty}}{(-q;q^2)_{\infty}} &=\sum_{k=-\infty}^{\infty}(-1)^{k}q^{2k^{2}-k}. \end{align} Empirically, properties enjoyed by one of these theta functions are usually shared by the other two, as witnessed by a recent work of the second and third authors \cite{FT2}. Our current investigation is no exception (see Remark~\ref{rmk}). A \emph{partition} of a nonnegative integer $n$ is a weakly decreasing sequence of positive integers whose sum equals $n$. Based on the observation that the left-hand side of \eqref{eq1.1} is the generating function of \emph{strict partitions} (i.e.~partitions into distinct parts), Sylvester proved his identity combinatorially by analyzing the Ferrers graphs of strict partitions in terms of their Durfee squares. The interested reader may refer to \cite{Andr3} for details. In a recent paper \cite{All}, Alladi further considered $r$-colored strict partitions (i.e.~$r$ copies of strict partitions attached with colors $a_{1}$, $a_{2}$, $\ldots$, $a_{r}$). He then naturally generalized Sylvester's identity to a multi-dimensional summation, which can be stated as follows.
\begin{theorem}[Alladi]\label{thm:alladi} We have \begin{align} &(-a_{1}q;q)_{\infty}(-a_{2}q;q)_{\infty}\cdots(-a_{r}q;q)_{\infty}\nonumber\\ &\quad=1+\sum_{N=1}^{\infty}q^{N^{2}}\prod_{j=1}^{r}(-a_{j}q;q)_{N-1} \sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}q^{\binom{i_1}{2}+\binom{i_2}{2}+\cdots+\binom{i_r}{2}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}\nonumber\\ &\quad\quad\quad\quad\quad\times \left(1+\sum_{s=1}^{r}q^{i_1+i_2+\cdots+i_s}a_s q^N \prod_{k=1}^{s-1}\left(1+a_k q^N\right)\right).\label{id:alladi} \end{align} \end{theorem} We remark that Alladi's original identity (cf.~\cite[Eq.~(4.8)]{All}) involves some combinatorial statistics. However, he then showed in his equation (4.9) that the combinatorial statistics can be replaced and hence the multiple summation can be stated as above. In fact, he provided both an analytic and a combinatorial proof of \eqref{id:alladi}. Nonetheless, his combinatorial proof is complicated to some extent. This motivated us to give a simplified combinatorial proof. 
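Identities of this kind can be checked numerically to any fixed order in $q$, which is a useful safeguard when transcribing them. The following sketch (in Python, with exact integer coefficient arithmetic; the specialization $a=2$ and the truncation order $M=36$ are our choices, not part of the identity) compares the two sides of \eqref{eq1.1} modulo $q^{M+1}$.

```python
# Sanity check of Sylvester's identity (1.1) with a = 2, modulo q^(M+1).
# A power series is stored as its list of coefficients [c0, c1, ..., cM].
M = 36
A = 2  # specialization of the parameter a (our choice)

def mul(f, g):
    h = [0] * (M + 1)
    for i, x in enumerate(f):
        if x:
            for j, y in enumerate(g):
                if i + j > M:
                    break
                h[i + j] += x * y
    return h

def inv(f):
    # power-series inverse of f, assuming f[0] == 1
    g = [0] * (M + 1)
    g[0] = 1
    for n in range(1, M + 1):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

def poch(c, n):
    # prod_{j=1}^{n} (1 + c q^j); e.g. poch(-1, n) is (q; q)_n
    f = [1] + [0] * M
    for j in range(1, min(n, M) + 1):
        g = [0] * (M + 1)
        g[0] = 1
        g[j] = c
        f = mul(f, g)
    return f

lhs = poch(A, M)  # (-aq; q)_infty, truncated

rhs = [1] + [0] * M
k = 1
while (3 * k * k - k) // 2 <= M:
    shift = (3 * k * k - k) // 2
    # a^k q^shift (1 + a q^{2k}) ...
    head = [0] * (M + 1)
    head[shift] = A ** k
    if shift + 2 * k <= M:
        head[shift + 2 * k] = A ** (k + 1)
    # ... times (-aq; q)_{k-1} / (q; q)_k
    term = mul(head, mul(poch(A, k - 1), inv(poch(-1, k))))
    rhs = [x + y for x, y in zip(rhs, term)]
    k += 1

assert lhs == rhs
```

Replacing $A$ by $-1$ turns the same script into a check of the series form of the pentagonal number theorem \eqref{theta:3}.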
In the course of doing so, we are naturally led to the following {\em$r$-colored overpartition} (see Section 3 for the definition) analogue: \begin{theorem}\label{thm:CFT} We have \begin{align} &\frac{(-a_{1} z_1 q;q)_{\infty}(-a_{2} z_2 q;q)_{\infty}\cdots(-a_{r} z_r q;q)_{\infty}}{(a_{1}q;q)_{\infty}(a_{2}q;q)_{\infty}\cdots(a_{r}q;q)_{\infty}}\nonumber\\ &\quad=1+\sum_{N=1}^{\infty}q^{N^{2}}\prod_{j=1}^{r}\frac{(-a_{j} z_j q;q)_{N-1}}{(a_{j}q;q)_{N-1}} \sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}(-z_1;q)_{i_{1}}(-z_2;q)_{i_{2}}\cdots(-z_r;q)_{i_{r}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}\nonumber\\ &\quad\quad\quad\quad\quad\times \left(1+\sum_{s=1}^{r}q^{i_1+i_2+\cdots+i_{s-1}}\left(1+z_s q^{i_s}\right) \frac{a_s q^N}{1-a_s q^N} \prod_{k=1}^{s-1} \frac{1+a_k z_k q^N}{1-a_k q^N}\right).\label{eq:over-gen} \end{align} \end{theorem} The rest of this paper is organized as follows. In Section~\ref{sec:simple proof}, we provide a simplified combinatorial proof of \eqref{id:alladi}. In Section~\ref{sec:generalization}, we apply our approach to multi-colored overpartitions and prove \eqref{eq:over-gen}. We close with some remarks to motivate further investigations. \section{A simple combinatorial proof of Theorem \ref{thm:alladi}}\label{sec:simple proof} We could have proven Theorem~\ref{thm:CFT} directly and shown how to make appropriate substitutions for the variables to imply Theorem~\ref{thm:alladi}. However, we decide to warm the reader up by beginning with the proof of Theorem~\ref{thm:alladi}, since the combinatorial analysis in this case is simpler. We first assume the following generalized order of parts in an $r$-colored (strict) partition: $$1_{a_1}<1_{a_2}<\cdots<1_{a_r}<2_{a_1}<2_{a_2}<\cdots<2_{a_r}<3_{a_1}<\cdots.$$ When we plot the Ferrers graphs of these $r$-colored partitions, we color only the last node on the right of each row; the remaining nodes are uncolored.
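Before carrying out the block-by-block analysis, it is reassuring to confirm \eqref{id:alladi} numerically in a genuinely multi-dimensional case. The sketch below (Python, exact integer coefficient arithmetic; the specializations $a_1=2$, $a_2=3$ and the truncation order $M=30$ are our choices) compares the two sides of \eqref{id:alladi} for $r=2$ modulo $q^{M+1}$.

```python
# Numerical check of the case r = 2 of Alladi's identity, with the
# color variables specialized to a1 = 2, a2 = 3, modulo q^(M+1).
from math import comb

M = 30
A1, A2 = 2, 3

def mul(f, g):
    h = [0] * (M + 1)
    for i, x in enumerate(f):
        if x:
            for j, y in enumerate(g):
                if i + j > M:
                    break
                h[i + j] += x * y
    return h

def inv(f):
    # power-series inverse of f, assuming f[0] == 1
    g = [0] * (M + 1)
    g[0] = 1
    for n in range(1, M + 1):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

def poch(c, n):
    # prod_{j=1}^{n} (1 + c q^j); e.g. poch(-1, n) is (q; q)_n
    f = [1] + [0] * M
    for j in range(1, min(n, M) + 1):
        g = [0] * (M + 1)
        g[0] = 1
        g[j] = c
        f = mul(f, g)
    return f

def add_shifted(acc, f, s, c):
    # acc += c * q^s * f, truncated at q^M
    for i in range(M + 1 - s):
        acc[i + s] += c * f[i]

lhs = mul(poch(A1, M), poch(A2, M))  # (-a1 q;q)_inf (-a2 q;q)_inf, truncated

rhs = [1] + [0] * M
N = 1
while N * N <= M:
    outer = mul(poch(A1, N - 1), poch(A2, N - 1))  # prod_j (-a_j q;q)_{N-1}
    inner = [0] * (M + 1)
    for i1 in range(N + 1):
        i2 = N - i1
        base = inv(mul(poch(-1, i1), poch(-1, i2)))  # 1/((q;q)_{i1}(q;q)_{i2})
        # bracket: 1 + q^{i1} a1 q^N + q^{i1+i2} a2 q^N (1 + a1 q^N)
        br = [0] * (M + 1)
        br[0] = 1
        for s, c in ((i1 + N, A1), (2 * N, A2), (3 * N, A1 * A2)):
            if s <= M:
                br[s] += c
        shift = comb(i1, 2) + comb(i2, 2)
        if shift <= M:
            add_shifted(inner, mul(base, br), shift, A1 ** i1 * A2 ** i2)
    add_shifted(rhs, mul(outer, inner), N * N, 1)
    N += 1

assert lhs == rhs
```

Specializing instead to a single color ($i_2=0$ throughout) reduces the same computation to the check of \eqref{eq1.1}.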
\begin{figure} \caption{Four blocks in a partition} \label{fig:block} \end{figure} For an $r$-colored partition $\lambda$, its \emph{Durfee square} $D$ is defined to be the largest square of nodes contained within the Ferrers graph. We denote it as Block \uppercase\expandafter{\romannumeral1} in Fig.~\ref{fig:block}. We then denote by Block \uppercase\expandafter{\romannumeral2} the portion to the right of the Durfee square. Furthermore, the parts below the Durfee square whose size equals the side length of the Durfee square form Block \uppercase\expandafter{\romannumeral3}. Finally, the portion below Block \uppercase\expandafter{\romannumeral3} is called Block \uppercase\expandafter{\romannumeral4}. We remark that in Block \uppercase\expandafter{\romannumeral2} we also allow $0$ as a part. In this sense, we do not color any nodes in Block \uppercase\expandafter{\romannumeral1}; instead, we color the $0$ parts in Block \uppercase\expandafter{\romannumeral2}. Now we are ready to write the generating function of each block combinatorially. Let $N\ge 1$ be the size of the Durfee square $D$. \emph{Block \uppercase\expandafter{\romannumeral1}}: Note that all nodes in $D$ are uncolored. Hence the generating function of $D$ is simply \begin{align} \label{cd1} q^{N^{2}}. \end{align} \emph{Block \uppercase\expandafter{\romannumeral4}}: Note that Block \uppercase\expandafter{\romannumeral4} can be regarded as an $r$-colored strict partition with largest part $\le N-1$. Hence its generating function is \begin{align} \label{cd2} \prod_{j=1}^{r}(-a_{j}q;q)_{N-1}. \end{align} \emph{Blocks \uppercase\expandafter{\romannumeral2} \& \uppercase\expandafter{\romannumeral3}}: We discuss the following two cases: \begin{enumerate}[\indent 1).]
\item If Block \bl{3} is empty, then the generating function of Block \bl{2} is \begin{align*} \sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}q^{\binom{i_1}{2}+\binom{i_2}{2}+\cdots+\binom{i_r}{2}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}. \end{align*} \item If Block \bl{3} is not empty, then we assume that the part on the top of Block \bl{3} is colored by $a_s$ with $1\le s\le r$. Then the generating function of Block \bl{3} is given by \begin{align*} a_s q^N \prod_{k=1}^{s-1} \left(1+a_k q^N\right). \end{align*} Furthermore, in this case, we only allow $0$ colored by $a_{s+1}$, $\ldots$, $a_r$ as a part in Block \bl{2} to ensure that the whole is an $r$-colored strict partition. We assume that there are $i_t$ parts colored by $a_t$ in Block \bl{2} for each $1\le t\le r$. Then $i_{1}+i_{2}+\cdots+i_{r}=N$. For $1\le t_1\le s$, all distinct parts colored by $a_{t_1}$ can be regarded as a strict partition with exactly $i_{t_1}$ parts in the conventional sense (i.e.~$0$ is not allowed as a part), and hence have generating function \begin{align*} \frac{a_{t_1}^{i_{t_1}}q^{\binom{i_{t_1}}{2}+i_{t_1}}}{(q;q)_{i_{t_1}}}. \end{align*} For $s+1\le t_2\le r$, these parts colored by $a_{t_2}$ form a strict partition with either $i_{t_2}$ or $i_{t_2}-1$ parts in the conventional sense, which has generating function \begin{align*} \frac{a_{t_2}^{i_{t_2}}q^{\binom{i_{t_2}}{2}}}{(q;q)_{i_{t_2}}}. \end{align*} \end{enumerate} We conclude that the generating function of Blocks \bl{2} \& \bl{3} is \begin{align} &\sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}q^{\binom{i_1}{2}+\binom{i_2}{2}+\cdots+\binom{i_r}{2}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}\nonumber\\ &\quad\quad\times \left(1+\sum_{s=1}^{r}q^{i_1+i_2+\cdots+i_s}a_s q^N \prod_{k=1}^{s-1}\left(1+a_k q^N\right)\right). 
\end{align} Finally, we notice that the generating function of $r$-colored strict partitions is \begin{align} (-a_{1}q;q)_{\infty}(-a_{2}q;q)_{\infty}\cdots(-a_{r}q;q)_{\infty}. \end{align} Hence \begin{align*} &(-a_{1}q;q)_{\infty}(-a_{2}q;q)_{\infty}\cdots(-a_{r}q;q)_{\infty}\\ &\quad=1+\sum_{N=1}^{\infty}q^{N^{2}}\prod_{j=1}^{r}(-a_{j}q;q)_{N-1} \sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}q^{\binom{i_1}{2}+\binom{i_2}{2}+\cdots+\binom{i_r}{2}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}\\ &\quad\quad\quad\quad\quad\times \left(1+\sum_{s=1}^{r}q^{i_1+i_2+\cdots+i_s}a_s q^N \prod_{k=1}^{s-1}\left(1+a_k q^N\right)\right). \end{align*} \section{Multi-colored overpartitions}\label{sec:generalization} In the previous section, our main objects of study were $r$-colored strict partitions. We notice that our approach can be naturally adapted to other types of partitions. In particular, if we study multi-colored overpartitions, a more general identity can be deduced. An \emph{$r$-colored overpartition} means $r$ copies of overpartitions attached with colors $a_{1}$, $a_{2}$, $\ldots$, $a_{r}$. We always assume that only the last occurrence of each part in each color may be overlined. For instance, $$\overline{2}_{a_2}+2_{a_1}+\overline{1}_{a_2}+1_{a_1}+\overline{1}_{a_1}$$ is a $2$-colored overpartition of $7$. Here we still assume the following generalized order of parts: $$1_{a_1}<1_{a_2}<\cdots<1_{a_r}<2_{a_1}<2_{a_2}<\cdots<2_{a_r}<3_{a_1}<\cdots.$$ We will still use the block decomposition shown in Fig.~\ref{fig:block} as well as the same coloring strategy. To identify the overlined parts, we also shadow the last node of each overlined part in the Ferrers graph (see Fig.~\ref{fig:block2}). Again, we allow $0$ (and hence $\overline{0}$) as a part in Block \bl{2}. In this sense, nodes in Block \bl{1} are neither colored nor shadowed.
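As a quick machine sanity check (ours, not part of the original argument), the strict-partition identity of the previous section can be verified by truncating every $q$-series; the sketch below uses SymPy with the arbitrary choices $r=2$ and truncation order $T=9$.

```python
import sympy as sp

# Truncated-series check of the r-colored strict partition identity, r = 2.
q, a1, a2 = sp.symbols('q a1 a2')
aa = [a1, a2]
r, T = 2, 9  # arbitrary small choices for illustration

def poch(x, n):
    """q-Pochhammer symbol (x; q)_n."""
    return sp.prod([1 - x * q**m for m in range(n)])

def trunc(expr):
    """Discard all terms of q-degree greater than T."""
    p = sp.Poly(sp.expand(expr), q)
    return sp.Add(*[c * q**m for (m,), c in p.terms() if m <= T])

# LHS: (-a1 q; q)_inf (-a2 q; q)_inf, multiplied out factor by factor;
# factors with exponent > T cannot affect the truncation.
lhs = sp.Integer(1)
for a in aa:
    for n in range(1, T + 1):
        lhs = trunc(lhs * (1 + a * q**n))

# RHS: sum over Durfee square sizes N with N^2 <= T.
rhs = sp.Integer(1)
N = 1
while N * N <= T:
    blocks_1_4 = q**(N * N) * sp.prod([poch(-a * q, N - 1) for a in aa])
    inner = sp.Integer(0)
    for i1 in range(N + 1):                       # compositions i1 + i2 = N
        ii = [i1, N - i1]
        core = sp.prod([aa[t]**ii[t] * q**sp.binomial(ii[t], 2)
                        * sp.series(1 / poch(q, ii[t]), q, 0, T + 1).removeO()
                        for t in range(r)])
        tail = 1 + sum(q**sum(ii[:s + 1]) * aa[s] * q**N
                       * sp.prod([1 + aa[k] * q**N for k in range(s)])
                       for s in range(r))
        inner += core * tail
    rhs += blocks_1_4 * inner
    N += 1

assert trunc(lhs - rhs) == 0
```

Increasing $T$ (and hence the range of Durfee sizes $N$) tests more coefficients at the cost of a longer run.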
\begin{figure} \caption{Four blocks in an overpartition} \label{fig:block2} \end{figure} Let $N\ge 1$ be the size of the Durfee square $D$, which is also the side length of Block \uppercase\expandafter{\romannumeral1}. In the following generating functions, for $1\le i\le r$, the exponent of $z_i$ counts the number of overlined parts colored by $a_i$. \emph{Block \uppercase\expandafter{\romannumeral1}}: From the above argument, we know that the generating function of $D$ is \begin{align} q^{N^{2}}. \end{align} \emph{Block \uppercase\expandafter{\romannumeral4}}: It is easy to see that Block \uppercase\expandafter{\romannumeral4} is an $r$-colored overpartition with largest part $\le N-1$. Hence its generating function is \begin{align} \prod_{j=1}^{r}\frac{(-a_{j} z_j q;q)_{N-1}}{(a_{j}q;q)_{N-1}}. \end{align} \emph{Blocks \uppercase\expandafter{\romannumeral2} \& \uppercase\expandafter{\romannumeral3}}: We start by noticing that the generating function of overpartitions ($0$ not allowed) with at most $i$ parts is (since its conjugate is an overpartition with largest part $\le i$) \begin{equation}\label{eq:over-1} \frac{(-zq;q)_i}{(q;q)_i}=\frac{1+z q^i}{1+z}\frac{(-z;q)_i}{(q;q)_i}, \end{equation} the generating function of overpartitions ($0$ not allowed) with exactly $i$ parts is \begin{equation}\label{eq:over-2} \frac{(-zq;q)_i}{(q;q)_i}-\frac{(-zq;q)_{i-1}}{(q;q)_{i-1}}=\frac{q^i (-z;q)_i}{(q;q)_i}, \end{equation} and the generating function of overpartitions ($0$ allowed) with exactly $i$ parts is \begin{equation}\label{eq:over-3} \frac{(1+z)(-zq;q)_{i-1}}{(q;q)_{i-1}}+\frac{q^i (-z;q)_i}{(q;q)_i}=\frac{(-z;q)_i}{(q;q)_i}. \end{equation} We have the following two cases: \begin{enumerate}[\indent 1).]
\item If Block \bl{3} is empty, then thanks to \eqref{eq:over-3}, we know that the generating function of Block \bl{2} is \begin{align*} \sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}(-z_1;q)_{i_{1}}(-z_2;q)_{i_{2}}\cdots(-z_r;q)_{i_{r}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}. \end{align*} \item If Block \bl{3} is not empty, then we assume that the part on the top of Block \bl{3} is colored by $a_s$ with $1\le s\le r$. Then the generating function of Block \bl{3} is given by \begin{align*} \frac{(1+z_s) a_s q^N}{1-a_s q^N} \prod_{k=1}^{s-1} \frac{1+a_k z_k q^N}{1-a_k q^N}. \end{align*} Furthermore, in this case, we only allow $0$ colored by $a_{s}$, $\ldots$, $a_r$ as a part in Block \bl{2}, and those $0$'s colored by $a_s$, if any, must be non-overlined to ensure that the whole is an $r$-colored overpartition. Suppose there are $i_t$ parts colored by $a_t$ in Block \bl{2} for each $1\le t\le r$; then $i_{1}+i_{2}+\cdots+i_{r}=N$. For $1\le t_1\le s-1$, the overpartition colored by $a_{t_1}$ has exactly $i_{t_1}$ parts and no parts of size $0$, and hence has generating function by \eqref{eq:over-2} \begin{align*} \frac{a_{t_1}^{i_{t_1}}q^{i_{t_1}}(-z_{t_1};q)_{i_{t_1}}}{(q;q)_{i_{t_1}}}. \end{align*} Next, the overpartition colored by $a_s$ can be treated as an overpartition ($0$ not allowed) with at most $i_s$ parts, and hence has generating function by \eqref{eq:over-1} \begin{align*} a_s^{i_s}\frac{1+z_s q^{i_s}}{1+z_s}\frac{(-z_s;q)_{i_s}}{(q;q)_{i_s}}. \end{align*} At last, for $s+1\le t_2\le r$, the overpartition colored by $a_{t_2}$ is an overpartition in which we allow $0$ as a part with exactly $i_{t_2}$ parts, and hence has generating function by \eqref{eq:over-3} \begin{align*} \frac{a_{t_2}^{i_{t_2}}(-z_{t_2};q)_{i_{t_2}}}{(q;q)_{i_{t_2}}}.
\end{align*} \end{enumerate} We conclude that the generating function of Blocks \bl{2} \& \bl{3} is \begin{align} &\sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}(-z_1;q)_{i_{1}}(-z_2;q)_{i_{2}}\cdots(-z_r;q)_{i_{r}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}\nonumber\\ &\quad\quad\times \left(1+\sum_{s=1}^{r}q^{i_1+i_2+\cdots+i_{s-1}}\left(1+z_s q^{i_s}\right) \frac{a_s q^N}{1-a_s q^N} \prod_{k=1}^{s-1} \frac{1+a_k z_k q^N}{1-a_k q^N}\right). \end{align} Since the generating function of $r$-colored overpartitions is \begin{align} \frac{(-a_{1} z_1 q;q)_{\infty}(-a_{2} z_2 q;q)_{\infty}\cdots(-a_{r} z_r q;q)_{\infty}}{(a_{1}q;q)_{\infty}(a_{2}q;q)_{\infty}\cdots(a_{r}q;q)_{\infty}}, \end{align} it follows that \eqref{eq:over-gen} is true and we have completed the proof of Theorem~\ref{thm:CFT}. \begin{remark}\label{rmk} The following are special cases of \eqref{eq:over-gen}: \begin{enumerate}[1).] \item If we take $z_i=0$ ($1\le i\le r$), then \begin{align} &\frac{1}{(a_{1}q;q)_{\infty}(a_{2}q;q)_{\infty}\cdots(a_{r}q;q)_{\infty}}\nonumber\\ &\quad=1+\sum_{N=1}^{\infty}q^{N^{2}}\prod_{j=1}^{r}\frac{1}{(a_{j}q;q)_{N-1}} \sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}\nonumber\\ &\quad\quad\quad\quad\quad\times \left(1+\sum_{s=1}^{r}q^{i_1+i_2+\cdots+i_{s-1}}\frac{a_s q^N}{1-a_s q^N} \prod_{k=1}^{s-1}\frac{1}{1-a_k q^N}\right),\label{eq:mul-ptn} \end{align} which is a multi-dimensional generalization of Cauchy's identity (cf.~\cite[Eq.~(2.2.8)]{Andr} with $z$ replaced by $aq$): $$\frac{1}{(aq;q)_\infty}=1+\sum_{N=1}^\infty \frac{a^N q^{N^2}}{(q;q)_N (aq;q)_N}.$$ This multiple summation indeed corresponds to $r$-colored ordinary partitions in our approach. 
\item The case $z_i=1$ ($1\le i\le r$) generalizes an identity due to Dousse and Kim (cf.~\cite[Corollary 3.5]{DK}): \begin{align*} \dfrac{(-aq;q)_{\infty}}{(aq;q)_{\infty}}=1+\sum_{N=1}^{\infty}\left(\frac{(-q;q)_{N-1}(-aq;q)_{N-1}}{(q;q)_{N-1}(aq;q)_{N-1}}a^{N}q^{N^{2}}+\dfrac{(-q;q)_{N}(-aq;q)_{N}} {(q;q)_{N}(aq;q)_{N}}a^{N}q^{N^{2}}\right). \end{align*} Their proof is based on an overpartition analogue of $q$-binomial coefficients. A further specialization by taking $a=-1$ then recovers \eqref{theta:2}. \item If we replace $q$ by $q^2$ and then take $a_i\to a_i\slash q$, $z_i\to z_iq$ ($1\le i\le r$), we get the following multi-summation, which can be viewed as the version for {\em ped}, i.e., partitions with even parts distinct: \begin{align} &\frac{(-a_{1} z_1 q^2;q^2)_{\infty}(-a_{2} z_2 q^2;q^2)_{\infty}\cdots(-a_{r} z_r q^2;q^2)_{\infty}}{(a_{1}q;q^2)_{\infty}(a_{2}q;q^2)_{\infty}\cdots(a_{r}q;q^2)_{\infty}}\nonumber\\ &\quad=1+\sum_{N=1}^{\infty}q^{2N^{2}-N}\prod_{j=1}^{r}\frac{(-a_{j} z_j q^2;q^2)_{N-1}}{(a_{j}q^2;q^2)_{N-1}} \sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}\cdots a_{r}^{i_{r}}(-z_1q;q^2)_{i_{1}}\cdots(-z_rq;q^2)_{i_{r}}}{(q^2;q^2)_{i_{1}}\cdots(q^2;q^2)_{i_{r}}}\nonumber\\ &\quad\quad\quad\quad\quad\times \left(1+\sum_{s=1}^{r}q^{2(i_1+i_2+\cdots+i_{s-1})}\left(1+z_s q^{2i_s+1}\right) \frac{a_s q^{2N-1}}{1-a_s q^{2N-1}} \prod_{k=1}^{s-1} \frac{1+a_k z_k q^{2N}}{1-a_k q^{2N-1}}\right).\label{eq:ped-gen} \end{align} For the uncolored case $r=1$, setting $a_1=-1$, $z_1=1$ recovers \eqref{theta:4}. \item \eqref{id:alladi} can be deduced from \eqref{eq:over-gen} by taking $a_i \to a_i/z_i$ and then letting $z_i\to \infty$ for $1\le i\le r$. \end{enumerate} \end{remark} \section{Final remarks} Quite recently, the second and third authors \cite[Eq.~(3.7)]{FT} considered another generalization of Euler's pentagonal number theorem, which involves the number of parts and the largest part.
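As an aside, the single-variable specializations in Remark~\ref{rmk} lend themselves to quick numerical verification; for instance, the Dousse--Kim identity in 2) can be checked with SymPy (the truncation order $T$ below is an arbitrary choice of ours):

```python
import sympy as sp

a, q = sp.symbols('a q')
T = 10  # arbitrary truncation order in q

def poch(x, n):
    """q-Pochhammer symbol (x; q)_n."""
    return sp.prod([1 - x * q**m for m in range(n)])

def ser(expr):
    """q-series expansion truncated past q^T."""
    return sp.expand(sp.series(expr, q, 0, T + 1).removeO())

# LHS: (-aq; q)_inf / (aq; q)_inf; factors beyond q^T cannot affect the truncation.
lhs = ser(poch(-a * q, T) / poch(a * q, T))

# RHS of the Dousse-Kim identity, summed over all N with N^2 <= T.
rhs = sp.Integer(1)
N = 1
while N * N <= T:
    rhs += ser(a**N * q**(N * N) * poch(-q, N - 1) * poch(-a * q, N - 1)
               / (poch(q, N - 1) * poch(a * q, N - 1)))
    rhs += ser(a**N * q**(N * N) * poch(-q, N) * poch(-a * q, N)
               / (poch(q, N) * poch(a * q, N)))
    N += 1

assert sp.expand(lhs - rhs) == 0
```

Setting $a=-1$ in the same script reproduces the theta-series specialization mentioned above.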
On the other hand, we see from \eqref{id:alladi} \begin{align} &(-a_{1}yq;q)_{\infty}(-a_{2}yq;q)_{\infty}\cdots(-a_{r}yq;q)_{\infty}\nonumber\\ &\quad=1+\sum_{N=1}^{\infty}\left(y q^N\right)^N\prod_{j=1}^{r}(-a_{j}yq;q)_{N-1} \sum_{i_{1}+i_{2}+\cdots+i_{r}=N}\frac{a_{1}^{i_{1}}a_{2}^{i_{2}}\cdots a_{r}^{i_{r}}q^{\binom{i_1}{2}+\binom{i_2}{2}+\cdots+\binom{i_r}{2}}}{(q;q)_{i_{1}}(q;q)_{i_{2}}\cdots(q;q)_{i_{r}}}\nonumber\\ &\quad\quad\quad\quad\quad\times \left(1+\sum_{s=1}^{r}q^{i_1+i_2+\cdots+i_s}a_s y q^N \prod_{k=1}^{s-1}\left(1+a_k y q^N\right)\right).\label{eq3.1} \end{align} Note that \eqref{eq3.1} generalizes \eqref{id:alladi} in the sense that it adds a parameter counting the number of parts of a partition; taking $y=1$ recovers \eqref{id:alladi}. However, it does not seem easy to track the number of parts and the largest part simultaneously, so as to obtain their joint distribution. Finally, it is worth mentioning that in \cite{Nat}, Nataraj established two multivariate generalizations of Euler's pentagonal number theorem related to Rogers--Ramanujan identities. \end{document}
\begin{document} \title{Fundamental Limitation on the Detectability of Entanglement} \author{Pengyu Liu} \affiliation{Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China} \author{Zhenhuan Liu} \affiliation{Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China} \author{Shu Chen} \affiliation{Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China} \author{Xiongfeng Ma} \email{[email protected]} \affiliation{Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China} \begin{abstract} Entanglement detection is essential in quantum information science and quantum many-body physics. It has been proved that entanglement exists almost surely for a random quantum state, while the realizations of effective entanglement criteria usually consume exponentially many resources with regard to system size or qubit number, and efficient criteria often perform poorly without prior knowledge. This fact implies a fundamental limitation might exist in the detectability of entanglement. In this work, we formalize this limitation as a fundamental trade-off between the efficiency and effectiveness of entanglement criteria via a systematic method to evaluate the detection capability of entanglement criteria theoretically. For a system coupled to an environment, we prove that any entanglement criterion needs exponentially many observables to detect the entanglement effectively when restricted to single-copy operations. Otherwise, the detection capability of the criterion will decay double exponentially. 
Furthermore, if multicopy joint measurements are allowed, the effectiveness of entanglement detection can be exponentially improved, which implies a quantum advantage in entanglement detection problems. Our results may shed light on why quantum phenomena are difficult to observe in large noisy systems. \end{abstract} \date{\today} \maketitle Quantum information technology promises advancement in various information processing tasks. Currently, we are in a stage where noisy intermediate-scale quantum devices~\cite{Preskill2018quantumcomputingin} with 50 to 200 qubits can be well manipulated to demonstrate quantum advantages~\cite{arute2019quantum, Gong2021zuchongzhi,zhong2021jiuzhang,madsen2022quantum}. For these devices, entanglement generation is regarded as an important benchmark, while verifying entanglement in a system of only 18 qubits is already challenging~\cite{wang2018eighteen}. This is rather counterintuitive, as entangled states have been proved to constitute a large proportion of state space~\cite{zyczkowski1998volume,szarek2005volume,gurvits2002largest}, even for highly mixed states~\cite{aubrun2012phase}. Among the various detection methods, entanglement witness (EW) criteria are rather straightforward and the most commonly used ones in experiments~\cite{lu2018structure,wang2018eighteen}. However, much evidence shows that EW criteria are only effective with precise prior knowledge of the target state~\cite{nidari2007witness}. Unpredictable noise in the state preparation could significantly reduce the success probability of EW protocols. To solve this problem, researchers have developed nonlinear entanglement criteria, such as positive map criteria, including the well-known positive partial transposition (PPT) criterion~\cite{peres1996separability}, computable cross norm or realignment (CCNR) criterion~\cite{chen2002matrix}, and symmetric extension criterion~\cite{GUHNE2009detection}.
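For concreteness, the PPT criterion amounts to checking the spectrum of a partially transposed density matrix; the following minimal NumPy sketch (our illustration, not a protocol from the references) flags the two-qubit singlet as entangled via a negative partial-transpose eigenvalue, while a product state stays positive:

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Partial transpose over subsystem A of a state on C^dA (x) C^dB."""
    return (rho.reshape(dA, dB, dA, dB)
               .transpose(2, 1, 0, 3)
               .reshape(dA * dB, dA * dB))

# Singlet (|01> - |10>)/sqrt(2): the PPT criterion certifies its entanglement.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
singlet = np.outer(psi, psi)
print(np.linalg.eigvalsh(partial_transpose(singlet, 2, 2)).min())  # approx -0.5

# A product state remains positive under partial transposition.
rho_prod = np.kron(np.diag([0.7, 0.3]), np.diag([0.6, 0.4]))
print(np.linalg.eigvalsh(partial_transpose(rho_prod, 2, 2)).min())  # nonnegative
```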
Although more effective than EW criteria, checking these nonlinear criteria relies heavily on state tomography, which is experimentally unaffordable. In the last few decades, many efforts have been devoted to modifying these powerful entanglement criteria, such as the positive map criteria, to avoid full state tomography~\cite{horodecki2002method,horodecki2003measuring}. With the intermediate-scale quantum devices available, entanglement criteria have been applied to various physical systems. For these experiments, the experimental feasibility --- low sample complexity and single-copy compatibility --- becomes a growing concern for criterion design. Protocols like the moment-based PPT and CCNR criteria~\cite{elben2020mixed,yu2021optimal,neven2021symmetry,liu2022detecting} have been proposed that can even be realized by single-copy and qubit-wise measurements when combined with randomized measurement techniques~\cite{van2012measuring,huang2020predicting,brydges2019probing}. Although much more efficient than state tomography, these methods still require a number of measurements that scales exponentially with the system size. In addition to EW and moment-based criteria, many other case studies investigating the detection capability of some specific entanglement criteria~\cite{lu2016universal,collins2016random,bhosale2012entanglement,shapourian2021diagram,jivulescu2014reduction,jivulescu2015thresholds,aubrun2012realigning} also suggest that a trade-off may exist between the effectiveness and the efficiency of entanglement detection. However, a general and quantitative study is still missing. In this work, we develop a systematic method to upper bound the detection capability of various entanglement criteria, including EW, positive map, and faithful entanglement criteria.
We further generalize it to any entanglement criteria with single-copy implementations and theoretically formulate the fundamental trade-off between efficiency and effectiveness, see Theorem \ref{theorem:singlecopy}. Here we give an informal version. \begin{theorem}[Trade-off between Efficiency and Effectiveness, Informal] To detect the entanglement of a random state coupled to a $k$-dimensional environment, any entanglement criterion that can be verified experimentally with $M$ observables is either \begin{enumerate} \item Inefficient: The criterion requires $M=\Omega(k/\ln k)$ observables to verify, or \item Ineffective: The criterion can detect the entanglement successfully with a probability $P=e^{-\Omega(k)}$ even if the state is entangled. \end{enumerate} \label{theorem:informal} \end{theorem} Explicitly speaking, we investigate the entanglement within a bipartite system $AB$, and system $R$ is their purification with dimension $k$. The composite system $ABR$ as a whole is in a random pure state. System $R$ can be regarded as the environment of $AB$, representing either the uncontrollable noise or some system that is not of concern. Such a composite system $ABR$ often appears in many-body physics as it can be generated by a generic Hamiltonian. Note that $k$ usually scales exponentially with the environment size. So, according to Theorem \ref{theorem:informal}, the number of observables increases exponentially, and the detection capability decreases double exponentially with the environment size. To formalize our study quantitatively, here we give a formal definition of density state distribution~\cite{collins2016random, Nechita2007}. \begin{definition}[$k$-induced Distribution of Density Matrix] $\pi_{d,k}$ is the distribution in $\mathcal{D}(\mathcal{H})$ induced by the uniform distribution of pure states in $\mathcal{H}\otimes\mathcal{H}_R$, where the dimensions of $\mathcal{H}$ and $\mathcal{H}_R$ are $d$ and $k$ respectively. 
A state $\rho$ following the distribution $\pi_{d,k}$ can be generated by $\rho=\tr_R(\ketbra{\phi})$, where $\ket\phi$ is a Haar-random pure state in $\mathcal{H}\otimes\mathcal{H}_R$. \end{definition} Let us start with EW criteria. An EW is an observable, $W$, satisfying $\tr(W\rho)\geq 0,\forall \rho\in \mathrm{SEP}$, where $\mathrm{SEP}$ is the set of all separable states. Define the detection capability of an EW criterion with $W$ as \begin{equation} \mathcal{C}_k(W)=\Pr_{\rho\sim\pi_{d,k}}\bqty{\tr(W\rho)<0}, \end{equation} which represents the portion of states that $W$ can detect. Without loss of generality, hereafter, we assume the two subsystems $A$ and $B$ are equal in dimension, $d_A=d_B=\sqrt{d}$. It has been proved that when $k<cd^{\frac{3}{2}}$, where $c$ is some constant, a state following the $\pi_{d,k}$ distribution is entangled with probability $1$ asymptotically~\cite{aubrun2012phase}. Throughout the Letter, we will always assume $k<cd^{\frac{3}{2}}$ so that the definition of $\mathcal{C}_k(W)$ can also be viewed as the ratio of detected states to all entangled states. Using Laurent-Massart's lemma~\cite{laurent2000adaptive}, we can give an upper bound of the detection capability of EW criteria. \begin{theorem}[Detection Capability of EW Criteria]\label{theorem:maintheorem} The detection capability of an EW criterion with $W$ decays at least exponentially with the dimension of the environment \begin{equation} \mathcal{C}_k(W)< 2e^{-(\sqrt{1+\alpha}-1)^2 k}\leq 2e^{-(3-2\sqrt{2}) k}, \end{equation} where $\alpha=\frac{\tr(W)}{\sqrt{\tr(W^2)}}\geq 1$~\cite{JOHNSTON20181} is a witness-dependent factor. \end{theorem} We show the proof of Theorem~\ref{theorem:maintheorem} intuitively in Fig.~\ref{fig:geometry_new}. When $k$ is large, the state distribution $\pi_{d,k}$ converges near the surface of the set of separable states.
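The decay predicted by Theorem~\ref{theorem:maintheorem} is easy to reproduce in simulation. A minimal NumPy sketch (our illustration; the dimensions, witness, and sample counts are arbitrary choices) samples $\pi_{d,k}$ through a Haar-random purification and estimates $\mathcal{C}_k(W)$ for the PPT-type witness $W=(\ketbra{\phi})^{T_A}$ discussed next:

```python
import numpy as np

rng = np.random.default_rng(0)

def induced_state(d, k):
    """Sample rho ~ pi_{d,k}: trace out a k-dim environment of a Haar-random pure state."""
    psi = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T

dA = dB = 2
d = dA * dB

# PPT-type witness W = (|Phi><Phi|)^{T_A}, with |Phi> maximally entangled.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
W = np.outer(phi, phi)
W = W.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(d, d)

def capability(k, trials=5000):
    """Monte Carlo estimate of C_k(W) = Pr[tr(W rho) < 0]."""
    return np.mean([np.trace(W @ induced_state(d, k)).real < 0
                    for _ in range(trials)])

for k in (2, 4, 6, 8):
    print(k, capability(k))  # the estimates shrink rapidly as k grows
```

Increasing `trials` sharpens the estimates; the qualitative exponential decay in $k$ is visible already at this scale.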
An entanglement witness can only detect states in a high-dimensional spherical cap due to the constraint of $\tr(W\rho)\geq 0,\forall \rho\in \mathrm{SEP}$. Since a spherical cap in high-dimensional space is exponentially small compared to the ball, $\mathcal{C}_k(W)$ also suffers from an exponential decay. Detailed proofs of this theorem and the rest can be found in the Appendix. \begin{figure} \caption{An intuitive illustration of Theorem~\ref{theorem:maintheorem}.} \label{fig:geometry_new} \end{figure} This theorem explains why the effectiveness of EW criteria highly depends on the prior knowledge of the studied states, as the detection capability decreases double-exponentially fast with the environment size. It is also worth mentioning that this result holds for multipartite EWs and the leftmost inequality holds for any observable $O$ with a positive trace. We use two typical examples to support our results. The first example is the PPT-type EW, $W=\ketbra{\phi}^{T_A}$, where $T_A$ is the partial transposition operator acting on $\mathcal{H}_A$ and $\ket\phi$ is an arbitrary pure state. In the sense of detection capability, they are optimal EWs as $\alpha$ achieves its minimum value, $\alpha=\frac{\tr(W)}{\sqrt{\tr(W^2)}}=1$, which is independent of the system dimension $d$. Hence, we have \begin{equation} \mathcal{C}_k(\ketbra{\phi}^{T_A})= e^{-\Omega(k)}. \end{equation} In fact, this inequality is rather tight as there exists a constant $c$ such that $\mathcal{C}_k(\ketbra{\phi}^{T_A})\geq e^{-ck}$ according to Ref.~\cite{nidari2007witness}. The second example is the faithful EW, defined as $W=\frac{\mathbb{I}}{\sqrt{d}}-\ketbra{\Phi}{\Phi}$, where $\mathbb{I}$ is the identity operator and $\ket{\Phi}$ is a maximally entangled state in $\mathcal{H}_A\otimes\mathcal{H}_B$.
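The two witness families can be compared through the factor $\alpha=\tr(W)/\sqrt{\tr(W^2)}$ from Theorem~\ref{theorem:maintheorem}; a short NumPy check (ours, with $d_A=d_B=3$ as an arbitrary choice) confirms $\alpha=1$ for the PPT-type witness and $\alpha=\sqrt{(d-\sqrt{d})/2}$ for the faithful one:

```python
import numpy as np

def alpha(W):
    """Witness-dependent factor alpha = tr(W) / sqrt(tr(W^2))."""
    return np.trace(W).real / np.sqrt(np.trace(W @ W).real)

dA = dB = 3
d = dA * dB
phi = np.eye(dA).reshape(-1) / np.sqrt(dA)   # maximally entangled |Phi>
P = np.outer(phi, phi)

# PPT-type witness (|Phi><Phi|)^{T_A}: partial transposition preserves both
# the trace and the Frobenius norm, so alpha = 1 regardless of dimension.
W_ppt = P.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(d, d)
print(alpha(W_ppt))                                    # approx 1

# Faithful witness I/sqrt(d) - |Phi><Phi|: alpha grows with the dimension.
W_faith = np.eye(d) / np.sqrt(d) - P
print(alpha(W_faith), np.sqrt((d - np.sqrt(d)) / 2))   # the two values agree
```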
Such kinds of fidelity-based EWs are commonly used in practical entanglement detection tasks~\cite{wang2018eighteen} as many efficient fidelity estimation protocols exist~\cite{huang2020predicting,flammia2011direct}. However, Theorem \ref{theorem:maintheorem} tells us that such an entanglement witness performs extremely weakly in the sense that its detection capability also decreases with system size, since $\alpha=\frac{\tr(W)}{\sqrt{\tr(W^2)}}=\sqrt{\frac{d-\sqrt{d}}{2}}$. As a result, \begin{equation} \mathcal{C}_k\left(\frac{\mathbb{I}}{\sqrt{d}}-\ketbra{\Phi}\right)=e^{-\Omega(\sqrt{d}k)}. \end{equation} To make our results more convincing, we conduct several numerical experiments, as shown in Fig.~\ref{fig:detectioncap}. We generate random states according to distribution $\pi_{d,k}$ with different values of $d$ and $k$ and use the two kinds of EWs discussed above to detect them. From Fig.~\ref{fig:detectioncap}(a), one could find that the detection capabilities of all types of EWs decay exponentially with $k$. Besides, the slopes of the faithful EW with $d=4$ and two PPT EWs are almost the same, which matches the prediction of Theorem \ref{theorem:maintheorem}, as $\alpha=1$ for these three EWs. The slope of the faithful EW with $d=9$ is smaller than those of the other three EWs, reflecting that the value of $\alpha$ for faithful EWs increases with system dimension. In Fig.~\ref{fig:detectioncap}(b), we investigate the relation between detection capability and system dimension. One could find that the detection capabilities of PPT-type EWs show no apparent change as the system dimension increases. In comparison, the detection capability of faithful EWs shows exponentially decaying behavior, and the slopes decrease as $k$ increases. All of these observations are consistent with our predictions. \begin{figure} \caption{Scaling of detection capability of EW criteria with regard to (a) the environment dimension $k$ and (b) the system dimension $d_A=d_B$.
To numerically calculate the detection capability, we generate $10^8$ density matrices following $\pi_{d,k}$.} \label{fig:detectioncap} \end{figure} Since EW criteria highly depend on prior knowledge to succeed, a direct improvement is to combine a large number of EWs. Naturally, we define an EW set $\mathcal{W}=\Bqty{W_i,i=1\cdots N}$ and the corresponding detection capability as \begin{equation} \mathcal{C}_k(\mathcal{W})=\Pr_{\rho\sim\pi_{d,k}}\bqty{\exists W\in \mathcal{W}: \tr(W\rho)<0}. \end{equation} By using the union bound, we can show that the detection capability of a finite EW set still decreases exponentially when $k$ is large: \begin{equation}\label{eq:manyEWs} \mathcal{C}_k\pqty{\mathcal{W}}< 2N{e}^{-(\sqrt{1+\alpha_{\min}}-1)^2 k}\le 2{e}^{\ln(N)-(3-2\sqrt{2}) k}, \end{equation} where $\alpha_{\min}=\min_{W\in \mathcal{W}}\frac{\tr(W)}{\sqrt{\tr(W^2)}}\geq 1$. Therefore, to effectively detect entanglement, a total number of ${e}^{\Omega(k)}$ EWs is required, which is extremely impractical. There are many other theoretically attractive entanglement criteria and concepts based on EWs. Examples like the positive map criteria~\cite{GUHNE2009detection} and faithful entanglement~\cite{weilenmann2020faithful,gunhe2021geometry,riccardi2021exploring} are equivalent to infinitely many EWs. As a result, Eq.~\eqref{eq:manyEWs} does not apply directly. To adapt the previous theorem to the infinite case, here we define parameterized EW criteria.
\begin{definition}[Parameterized EW Criteria]\label{def:paraEW} A parameterized EW criterion is a set of an infinite number of EWs, which can be represented by a map $\mathcal{M}$ from $M$ real parameters to EWs in $\mathcal{D}(\mathcal{H})$, satisfying \begin{equation} \forall \vb* \theta\in\Theta\subset[-1,1]^M,\forall\rho\in \mathrm{SEP}:\tr\left[\rho \mathcal{M}(\vb* \theta)\right]\geq 0, \end{equation} where $\mathcal{M}(\vb*\theta)$ is a normalized EW satisfying $\norm{\mathcal{M}(\theta)}_F=1$ with $\norm{A}_F=\sqrt{\sum_{i,j}|A_{i,j}|^2}$ being the Frobenius norm and $\Theta$ is the feasible parameter space ensuring $\mathcal{M}(\vb* \theta)$ a valid EW. A state $\rho$ can be detected by this criterion if and only if \begin{equation} \exists \vb*\theta\in\Theta: \tr\pqty{\rho \mathcal{M}(\vb* \theta)}<0. \end{equation} \end{definition} Similarly, we can define the detection capability of a parameterized EW as \begin{equation} \mathcal{C}^p_k(\mathcal{M})=\Pr_{\rho\sim\pi_{d,k}}\bqty{\exists \vb*\theta\in\Theta: \tr(\rho \mathcal{M}(\vb* \theta))<0}. \end{equation} By using a coarse-graining method and adopting Theorem \ref{theorem:maintheorem}, we can derive an upper bound for $\mathcal{C}^p_k(\mathcal{M})$. 
\begin{theorem}[Detection Capability of Parameterized EW Criteria] For any parameterized EW represented by a normalized $l$-Lipschitz map $\mathcal{M}$ satisfying \begin{equation} \forall \vb*\theta,\vb*\theta'\in \Theta:\norm{\mathcal{M}(\vb*\theta)-\mathcal{M}(\vb*\theta')}_F\leq l\norm{\vb*\theta-\vb*\theta'}_2, \end{equation} the detection capability decays at least exponentially with $k$ after $k$ exceeds a certain threshold, \begin{equation} \mathcal{C}^p_k(\mathcal{M})< 2e^{C_1-C_2k}, \end{equation} where $C_1=M\ln(4\sqrt{M}ld)$, $M$ is the number of real parameters in $\mathcal{M}$, and $C_2=(\sqrt{0.5+\alpha_{\min}}-1)^2$ with $\alpha_{\min}=\min_{\vb*\theta} \frac{\tr[\mathcal{M}(\vb*\theta)]}{\sqrt{\tr[\mathcal{M}(\vb*\theta)^2]}}=\min_{\vb*\theta} \tr[\mathcal{M}(\vb*\theta)]\geq 1$. \label{theorem:paraEWs} \end{theorem} The definition of a parameterized EW criterion naturally covers positive map criteria. If a state $\rho$ does not satisfy $\mathcal{N}_A\otimes\mathbb{I}_B(\rho)\geq 0$ for a positive map $\mathcal{N}$, then $\exists \ket\phi: \tr\left[\rho\mathcal{N}_A\otimes\mathbb{I}_B(\ketbra{\phi})\right]<0$. Regarding $\ket{\phi}$ as the parameters $\vb* \theta$ in Theorem~\ref{theorem:paraEWs}, this theorem can be applied directly. We leave the detailed discussion to the Appendix. Another example of parameterized EW is the faithful entanglement, proposed in~\cite{weilenmann2020faithful}, which refers to those entangled states detected by faithful EWs as defined before. We define a parameterized EW that is equivalent to all the faithful EWs as $\mathcal{M}_{\mathrm{faithful}}(\vb*\theta)=\pqty{\sqrt{2-\frac{2}{\sqrt{d}}}}^{-1}\pqty{\frac{\mathbb{I}}{\sqrt{d}}-\ketbra{\phi(\vb*\theta)}}$, where $\ket{\phi(\vb*\theta)}$ is a maximally entangled state~\cite{gunhe2021geometry}.
One could prove that $\mathcal{M}_{\mathrm{faithful}}(\vb*\theta)$ is $\sqrt{2}$-Lipschitz and $\alpha_{\mathrm{min}}=\sqrt{\frac{d-\sqrt{d}}{2}}\approx\sqrt{\frac{d}{2}}$ when $d$ is large. Using Theorem \ref{theorem:paraEWs}, we thus obtain the following upper bound on the ratio of faithful entangled states. \begin{corollary}[Ratio of Faithful Entanglement States] The set of faithful entangled states has an exponentially small ratio in the state space: \begin{equation} \Pr_{\rho\sim \pi_{d,k}}[\rho\in \mathrm{FE}]=\mathcal{C}^p_k\pqty{\mathcal{M}_{\mathrm{faithful}}}< 2e^{C_1-C_2k} \end{equation} where $\mathrm{FE}$ is the set of all faithful entangled states and $C_1=3d\ln 4d$, $C_2=\pqty{\sqrt{0.5+\sqrt{\frac{d-\sqrt{d}}{2}}}-1}^2\approx \sqrt{\frac{d}{2}}$. \end{corollary} This result shows that when $k=\Omega(\sqrt{d}\ln d)$, faithful EWs can hardly detect entanglement, which is compatible with the numerical results shown in Ref.~\cite{gunhe2021geometry}. Besides positive map and faithful criteria, there are many other entanglement criteria designed for different scenarios, like the ones based on state moments~\cite{imai2021bound,elben2020mixed,liu2022detecting}, uncertainty relations~\cite{duan2000inseparability,gunhe2004uncertainty}, and machine learning~\cite{gray2018machine,yin2022efficient}. They may use complex mathematical relations and complicated postprocessing to detect the entanglement. While limited by the basic principles of quantum mechanics and current technology, only values like $\tr(O\rho)$ can be measured directly. Hence, we propose a general definition of entanglement criteria with single-copy realizations. \begin{definition}[Single-Copy Criteria]\label{def:singlecopy} An entanglement criterion is said to have a single-copy realization if it can be checked by the expectation of a set of observables $\mathcal{O}=\Bqty{O_i|i=1,\cdots, M}$.
After the measurement, one gets the results, $r_{\rho,i}=\tr(O_i\rho), i=1,\cdots,M$, and can decide the feasible region $F_\mathcal{O}(\rho)$ of the state \begin{equation} F_\mathcal{O}(\rho)=\Bqty{\sigma\in\mathcal{D}(\mathcal{H}_d)|\tr(O_i\sigma)=r_{\rho,i},i=1,\cdots,M}. \end{equation} If \begin{equation} F_{\mathcal{O}}(\rho)\cap \mathrm{SEP}=\varnothing\label{eq:singlecopycriterion}, \end{equation}then $\rho$ is entangled. \end{definition} According to this definition, we can define the detection capability of the single-copy criterion $\mathcal{O}$ as \begin{equation}\label{eq:singlecopycap} \mathcal{C}_k^s(\mathcal{O})=\Pr_{\rho\sim\pi_{d,k}}\bqty{F_{\mathcal{O}}(\rho)\cap \mathrm{SEP}=\varnothing}. \end{equation} Since the verification of Eq.~\eqref{eq:singlecopycriterion} might require exponentially many classical resources, many practical entanglement criteria are essentially designed by finding supersets of $\mathrm{SEP}$ and $F_{\mathcal{O}}(\rho)$ and deciding whether these two supersets are disjoint or not. Therefore, the previous definition is the strongest criterion using the measurement results of $\mathcal{O}$, and Eq.~\eqref{eq:singlecopycap} gives an upper bound for all criteria using the same data. Without loss of generality, we could assume that all the observables are mutually orthogonal and normalized, i.e., $\tr(O_iO_j)=\delta_{i,j}$. In the Appendix, we prove that if a state $\rho$ can be detected by a single-copy criterion $\mathcal{O}$ which contains $M-1$ observables, then it can be detected by a $1$-Lipschitz parameterized EW with $M$ parameters. \begin{equation} \mathcal{M}(\theta_0,\theta_1,\cdots,\theta_{M-1})=\theta_0\mathbb{I}+\sum_{i=1}^{M-1}\theta_iO_i \end{equation} Hence, directly adopting Theorem \ref{theorem:paraEWs}, one can give an upper bound for $\mathcal{C}_k^s(\mathcal{O})$. 
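The Lipschitz constant in this embedding can be made concrete: if the identity is normalized to $\mathbb{I}/\sqrt{d}$ and included among the orthonormal observables (a harmless rescaling of $\theta_0$), the Frobenius distance between two parameter settings equals the Euclidean distance between the parameters, so the map is exactly $1$-Lipschitz. A small NumPy sketch of this fact (our illustration, with arbitrary $d$ and $M$):

```python
import numpy as np

rng = np.random.default_rng(3)
d, M = 4, 6   # arbitrary small dimension and number of observables

# Orthonormal Hermitian observables in the Hilbert-Schmidt inner product
# <X, Y> = tr(XY), starting from the normalized identity I/sqrt(d).
obs = [np.eye(d) / np.sqrt(d)]
while len(obs) < M:
    H = rng.normal(size=(d, d))
    H = (H + H.T) / 2                       # random real symmetric observable
    for O in obs:                           # Gram-Schmidt against earlier observables
        H = H - np.trace(O @ H) * O
    obs.append(H / np.sqrt(np.trace(H @ H)))

def param_map(theta):
    """The linear map theta -> sum_i theta_i O_i."""
    return sum(t * O for t, O in zip(theta, obs))

t1, t2 = rng.normal(size=M), rng.normal(size=M)
frob = np.linalg.norm(param_map(t1) - param_map(t2))   # Frobenius norm
eucl = np.linalg.norm(t1 - t2)
print(frob, eucl)   # equal up to rounding: the map is exactly 1-Lipschitz
```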
\begin{theorem}[Detection Capability of Single-Copy Criteria]\label{theorem:singlecopy} Any single-copy entanglement criterion $\mathcal{O}$ with $M-1$ observables has detection capability \begin{equation} \mathcal{C}^s_k(\mathcal{O})< 2e^{C_1-C_2k}, \end{equation} where $C_1=M\ln(4\sqrt{M}d)$, $C_2=(\sqrt{1.5}-1)^2\approx 0.05$. \end{theorem} Theorem \ref{theorem:singlecopy} theoretically formulates the trade-off between the effectiveness and sample complexity of entanglement criteria. According to this theorem, at least $\Omega(\frac{k}{\ln k})$ observables are needed to effectively detect the entanglement of a random state, even assuming the measurement results are infinitely accurate. Besides, compared with Eq.~\eqref{eq:manyEWs}, one could conclude that a general single-copy detection can be exponentially better than simply using a set of EWs. Here, we numerically examine the detection capabilities of several nonlinear criteria, like purity~\cite{GUHNE2009detection}, Fisher information~\cite{Zhang_2020}, moments of partially transposed~\cite{yu2021optimal,neven2021symmetry} (labeled by $D_{3,\mathrm{opt}}$) and realigned density matrices~\cite{liu2022detecting} (labeled by $M_4$). We leave the description of these four criteria to the Appendix. These criteria all have single-copy realizations with resources independent of $k$. Hence, from Fig.~\ref{fig:single-copy}, one finds that the detection capabilities of these four criteria decay exponentially with $k$ when $k$ is large, which is compatible with Theorem \ref{theorem:singlecopy}. \begin{figure} \caption{Detection capability of four nonlinear criteria. Here for each point, we generate $10^8$ states $\rho\in\mathcal{D}(\mathcal{H})$.} \label{fig:single-copy} \end{figure} Before the exponentially decaying regime, we also observe that the detection capabilities remain constant.
In the Appendix, we analyze these thresholds in detail and numerically find that they all scale polynomially with the system dimension $d$; for instance, the threshold of the $D_{3,\mathrm{opt}}$ criterion depends linearly on $d$. These observations, together with Theorem \ref{theorem:singlecopy}, explain why verifying these four criteria requires exponentially many resources~\cite{brydges2019probing,rath2021fisher,zhou2020single,elben2020mixed,liu2022detecting}. From another point of view, if one is not restricted to single-copy operations, some of these criteria can be realized by only a few multicopy observables, implying a quantum advantage in entanglement detection tasks through joint operations~\cite{Huang2022}. We can prove this advantage in some special cases. Let $d_A=d_B=k=\sqrt{d}$; then the distributions of $\tr(\rho_A^2)$ and $\tr(\rho_{AB}^2)=\tr(\rho_R^2)$ are identical, since the systems $A$ and $R$ are symmetric. Hence, using the purity criterion, i.e., $\tr(\rho_{AB}^2)\le\tr(\rho_A^2)$ for all $\rho\in \mathrm{SEP}$, the detection capability is $0.5$, and the criterion can be verified using just one two-copy observable, $\tr(\rho_{AB}^2)-\tr(\rho_A^2)=\tr[(\mathbb{S}_{AB}-\mathbb{S}_A)\rho_{AB}^{\otimes 2}]$, where $\mathbb{S}$ denotes the SWAP operator. We summarize this result below. \begin{corollary}[Quantum Advantage in Entanglement Detection] Consider a state following the $\pi_{d,\sqrt{d}}$ distribution with $d_A=d_B=\sqrt{d}$. With only single-copy measurements, $M=\Omega(\frac{\sqrt{d}}{\ln d})$ observables are required for any criterion with detection capability greater than $0.5$. However, if multicopy joint measurements are allowed, a detection capability of $0.5$ is achieved with only one two-copy observable. \label{proposition:advantage} \end{corollary} Beyond Definition \ref{def:singlecopy}, adaptive methods could also be used to increase the efficiency of entanglement detection.
In the Appendix, we give results analogous to Theorem \ref{theorem:singlecopy} and Corollary \ref{proposition:advantage} for adaptive methods. It should be noted that the quantum advantage in Corollary \ref{proposition:advantage} only holds in terms of the number of observables. In real-world experiments, where multicopy measurements may require far more resources than single-copy ones, will the advantage still hold? Moreover, will Theorem \ref{theorem:singlecopy} still hold when a small false-positive error rate is allowed? We leave these questions to future work. Meanwhile, our result also holds for some other typical state distributions. For example, we can show that Theorem~\ref{theorem:singlecopy} applies to random thermal states, which are widely used in quantum thermodynamics~\cite{vinjanampathy2016quantum}. In the Appendix, we present numerical results demonstrating the exponential decay of the detection capabilities for random thermal states. \begin{acknowledgments} We thank Zhaohui Wei for valuable discussions. This work was supported by the National Natural Science Foundation of China Grants No.~11875173 and No.~12174216 and the National Key Research and Development Program of China Grants No.~2019QY0702 and No.~2017YFA0303903.
\end{acknowledgments} \begin{thebibliography}{50} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Preskill}(2018)}]{Preskill2018quantumcomputingin} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\href {\doibase 10.22331/q-2018-08-06-79} {\bibfield {journal} {\bibinfo {journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {79} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont 
{Arute}\ \emph {et~al.}(2019)\citenamefont {Arute}, \citenamefont {Arya}, \citenamefont {Babbush}, \citenamefont {Bacon}, \citenamefont {Bardin}, \citenamefont {Barends}, \citenamefont {Biswas}, \citenamefont {Boixo}, \citenamefont {Brandao}, \citenamefont {Buell} \emph {et~al.}}]{arute2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Arute}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Arya}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bacon}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Bardin}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Biswas}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {F.~G.}\ \bibnamefont {Brandao}}, \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Buell}}, \emph {et~al.},\ }\href {https://doi.org/10.1038/s41586-019-1666-5} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {574}},\ \bibinfo {pages} {505} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gong}\ \emph {et~al.}(2021)\citenamefont {Gong}, \citenamefont {Wang}, \citenamefont {Zha}, \citenamefont {Chen}, \citenamefont {Huang}, \citenamefont {Wu}, \citenamefont {Zhu}, \citenamefont {Zhao}, \citenamefont {Li}, \citenamefont {Guo}, \citenamefont {Qian}, \citenamefont {Ye}, \citenamefont {Chen}, \citenamefont {Ying}, \citenamefont {Yu}, \citenamefont {Fan}, \citenamefont {Wu}, \citenamefont {Su}, \citenamefont {Deng}, \citenamefont {Rong}, \citenamefont {Zhang}, \citenamefont {Cao}, \citenamefont {Lin}, \citenamefont {Xu}, \citenamefont {Sun}, \citenamefont {Guo}, \citenamefont {Li}, \citenamefont {Liang}, \citenamefont {Bastidas}, \citenamefont {Nemoto}, \citenamefont {Munro}, \citenamefont {Huo}, \citenamefont {Lu}, \citenamefont {Peng}, \citenamefont {Zhu},\ and\ 
\citenamefont {Pan}}]{Gong2021zuchongzhi} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gong}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Zha}}, \bibinfo {author} {\bibfnamefont {M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Qian}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ye}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ying}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Fan}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Su}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Rong}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Liang}}, \bibinfo {author} {\bibfnamefont {V.~M.}\ \bibnamefont {Bastidas}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Nemoto}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Munro}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Huo}}, 
\bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Zhu}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 10.1126/science.abg7812} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {372}},\ \bibinfo {pages} {948} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhong}\ \emph {et~al.}(2021)\citenamefont {Zhong}, \citenamefont {Deng}, \citenamefont {Qin}, \citenamefont {Wang}, \citenamefont {Chen}, \citenamefont {Peng}, \citenamefont {Luo}, \citenamefont {Wu}, \citenamefont {Gong}, \citenamefont {Su}, \citenamefont {Hu}, \citenamefont {Hu}, \citenamefont {Yang}, \citenamefont {Zhang}, \citenamefont {Li}, \citenamefont {Li}, \citenamefont {Jiang}, \citenamefont {Gan}, \citenamefont {Yang}, \citenamefont {You}, \citenamefont {Wang}, \citenamefont {Li}, \citenamefont {Liu}, \citenamefont {Renema}, \citenamefont {Lu},\ and\ \citenamefont {Pan}}]{zhong2021jiuzhang} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-S.}\ \bibnamefont {Zhong}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Qin}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {L.-C.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Luo}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {S.-Q.}\ \bibnamefont {Gong}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Su}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {X.-Y.}\ \bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {W.-J.}\ \bibnamefont {Zhang}}, \bibinfo 
{author} {\bibfnamefont {H.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Gan}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {You}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {N.-L.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Renema}}, \bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 10.1103/PhysRevLett.127.180502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {180502} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Madsen}\ \emph {et~al.}(2022)\citenamefont {Madsen}, \citenamefont {Laudenbach}, \citenamefont {Askarani}, \citenamefont {Rortais}, \citenamefont {Vincent}, \citenamefont {Bulmer}, \citenamefont {Miatto}, \citenamefont {Neuhaus}, \citenamefont {Helt}, \citenamefont {Collins} \emph {et~al.}}]{madsen2022quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~S.}\ \bibnamefont {Madsen}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Laudenbach}}, \bibinfo {author} {\bibfnamefont {M.~F.}\ \bibnamefont {Askarani}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Rortais}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Vincent}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Bulmer}}, \bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont {Miatto}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Neuhaus}}, \bibinfo {author} {\bibfnamefont {L.~G.}\ \bibnamefont {Helt}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Collins}}, \emph {et~al.},\ }\href 
{https://doi.org/10.1038/s41586-022-04725-x} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {606}},\ \bibinfo {pages} {75} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2018)\citenamefont {Wang}, \citenamefont {Luo}, \citenamefont {Huang}, \citenamefont {Chen}, \citenamefont {Su}, \citenamefont {Liu}, \citenamefont {Chen}, \citenamefont {Li}, \citenamefont {Fang}, \citenamefont {Jiang}, \citenamefont {Zhang}, \citenamefont {Li}, \citenamefont {Liu}, \citenamefont {Lu},\ and\ \citenamefont {Pan}}]{wang2018eighteen} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Luo}}, \bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.-E.}\ \bibnamefont {Su}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Y.-Q.}\ \bibnamefont {Fang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {N.-L.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 10.1103/PhysRevLett.120.260502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {260502} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {\ifmmode~\dot{Z}\else \.{Z}\fi{}yczkowski}\ \emph {et~al.}(1998)\citenamefont {\ifmmode~\dot{Z}\else \.{Z}\fi{}yczkowski}, \citenamefont {Horodecki}, \citenamefont {Sanpera},\ and\ \citenamefont {Lewenstein}}]{zyczkowski1998volume} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {\ifmmode~\dot{Z}\else \.{Z}\fi{}yczkowski}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sanpera}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lewenstein}},\ }\href {\doibase 10.1103/PhysRevA.58.883} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo {pages} {883} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Szarek}(2005)}]{szarek2005volume} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Szarek}},\ }\href {\doibase 10.1103/PhysRevA.72.032304} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {032304} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gurvits}\ and\ \citenamefont {Barnum}(2002)}]{gurvits2002largest} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Gurvits}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Barnum}},\ }\href {\doibase 10.1103/PhysRevA.66.062311} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {66}},\ \bibinfo {pages} {062311} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aubrun}\ \emph {et~al.}(2012)\citenamefont {Aubrun}, \citenamefont {Szarek},\ and\ \citenamefont {Ye}}]{aubrun2012phase} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Aubrun}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Szarek}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Ye}},\ }\href {\doibase 10.1103/PhysRevA.85.030302} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {030302} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lu}\ \emph {et~al.}(2018)\citenamefont {Lu}, \citenamefont {Zhao}, \citenamefont {Li}, \citenamefont {Yin}, \citenamefont {Yuan}, \citenamefont {Hung}, \citenamefont {Chen}, \citenamefont {Li}, \citenamefont {Liu}, \citenamefont {Peng}, \citenamefont {Liang}, \citenamefont {Ma}, \citenamefont {Chen},\ and\ \citenamefont {Pan}}]{lu2018structure} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {Z.-D.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {X.-F.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {J.-C.}\ \bibnamefont {Hung}}, \bibinfo {author} {\bibfnamefont {L.-K.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {N.-L.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {Y.-C.}\ \bibnamefont {Liang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {Y.-A.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 
10.1103/PhysRevX.8.021072} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {021072} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{\v{Z}}nidari{\v{c}}}\ \emph {et~al.}(2007)\citenamefont {{\v{Z}}nidari{\v{c}}}, \citenamefont {Prosen}, \citenamefont {Benenti},\ and\ \citenamefont {Casati}}]{nidari2007witness} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{\v{Z}}nidari{\v{c}}}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Prosen}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Benenti}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Casati}},\ }\href {\doibase 10.1088/1751-8113/40/45/017} {\bibfield {journal} {\bibinfo {journal} {Journal of Physics A: Mathematical and Theoretical}\ }\textbf {\bibinfo {volume} {40}},\ \bibinfo {pages} {13787} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peres}(1996)}]{peres1996separability} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Peres}},\ }\href {\doibase 10.1103/PhysRevLett.77.1413} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {1413} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ and\ \citenamefont {Wu}(2002)}]{chen2002matrix} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Chen}}\ and\ \bibinfo {author} {\bibfnamefont {L.-A.}\ \bibnamefont {Wu}},\ }\href {https://arxiv.org/pdf/quant-ph/0205017.pdf} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint quant-ph/0205017}\ } (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {G\"uhne}\ and\ \citenamefont {T\'oth}(2009)}]{GUHNE2009detection} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {G\"uhne}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {T\'oth}},\ }\href {\doibase https://doi.org/10.1016/j.physrep.2009.02.004} {\bibfield {journal} {\bibinfo {journal} {Physics Reports}\ }\textbf {\bibinfo {volume} {474}},\ \bibinfo {pages} {1} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horodecki}\ and\ \citenamefont {Ekert}(2002)}]{horodecki2002method} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Horodecki}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ekert}},\ }\href {\doibase 10.1103/PhysRevLett.89.127902} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {127902} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horodecki}(2003)}]{horodecki2003measuring} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Horodecki}},\ }\href {\doibase 10.1103/PhysRevLett.90.167901} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
Lett.}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {167901} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Elben}\ \emph {et~al.}(2020)\citenamefont {Elben}, \citenamefont {Kueng}, \citenamefont {Huang}, \citenamefont {van Bijnen}, \citenamefont {Kokail}, \citenamefont {Dalmonte}, \citenamefont {Calabrese}, \citenamefont {Kraus}, \citenamefont {Preskill}, \citenamefont {Zoller},\ and\ \citenamefont {Vermersch}}]{elben2020mixed} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Elben}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kueng}}, \bibinfo {author} {\bibfnamefont {H.-Y.~R.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {van Bijnen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kokail}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Dalmonte}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Calabrese}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Kraus}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Vermersch}},\ }\href {\doibase 10.1103/PhysRevLett.125.200501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {125}},\ \bibinfo {pages} {200501} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yu}\ \emph {et~al.}(2021)\citenamefont {Yu}, \citenamefont {Imai},\ and\ \citenamefont {G\"uhne}}]{yu2021optimal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-D.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Imai}}, \ and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {G\"uhne}},\ }\href {\doibase 10.1103/PhysRevLett.127.060504} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {060504} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Neven}\ \emph {et~al.}(2021)\citenamefont {Neven}, \citenamefont {Carrasco}, \citenamefont {Vitale}, \citenamefont {Kokail}, \citenamefont {Elben}, \citenamefont {Dalmonte}, \citenamefont {Calabrese}, \citenamefont {Zoller}, \citenamefont {Vermersch}, \citenamefont {Kueng},\ and\ \citenamefont {Kraus}}]{neven2021symmetry} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Neven}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Carrasco}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vitale}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kokail}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Elben}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Dalmonte}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Calabrese}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Vermersch}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kueng}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Kraus}},\ }\href {\doibase 10.1038/s41534-021-00487-y} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Information}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {152} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2022)\citenamefont {Liu}, \citenamefont {Tang}, \citenamefont {Dai}, \citenamefont {Liu}, \citenamefont {Chen},\ and\ \citenamefont {Ma}}]{liu2022detecting} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Tang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Dai}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont 
{X.}~\bibnamefont {Ma}},\ }\href {https://arxiv.org/abs/2203.08391} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2203.08391}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {van Enk}\ and\ \citenamefont {Beenakker}(2012)}]{van2012measuring} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {van Enk}}\ and\ \bibinfo {author} {\bibfnamefont {C.~W.~J.}\ \bibnamefont {Beenakker}},\ }\href {\doibase 10.1103/PhysRevLett.108.110503} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {110503} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2020)\citenamefont {Huang}, \citenamefont {Kueng},\ and\ \citenamefont {Preskill}}]{huang2020predicting} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-Y.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kueng}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\href {https://doi.org/10.1038/s41567-020-0932-7} {\bibfield {journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {1050} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brydges}\ \emph {et~al.}(2019)\citenamefont {Brydges}, \citenamefont {Elben}, \citenamefont {Jurcevic}, \citenamefont {Vermersch}, \citenamefont {Maier}, \citenamefont {Lanyon}, \citenamefont {Zoller}, \citenamefont {Blatt},\ and\ \citenamefont {Roos}}]{brydges2019probing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Brydges}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Elben}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Jurcevic}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Vermersch}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Maier}}, \bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont {Lanyon}}, 
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}}, \ and\ \bibinfo {author} {\bibfnamefont {C.~F.}\ \bibnamefont {Roos}},\ }\href {https://www.science.org/doi/abs/10.1126/science.aau4963#pill-citations} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {364}},\ \bibinfo {pages} {260} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lu}\ \emph {et~al.}(2016)\citenamefont {Lu}, \citenamefont {Xin}, \citenamefont {Yu}, \citenamefont {Ji}, \citenamefont {Chen}, \citenamefont {Long}, \citenamefont {Baugh}, \citenamefont {Peng}, \citenamefont {Zeng},\ and\ \citenamefont {Laflamme}}]{lu2016universal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Xin}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Ji}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Long}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Baugh}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Zeng}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Laflamme}},\ }\href {\doibase 10.1103/PhysRevLett.116.230501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {230501} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Collins}\ and\ \citenamefont {Nechita}(2016)}]{collins2016random} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Collins}}\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Nechita}},\ }\href {https://aip.scitation.org/doi/full/10.1063/1.4936880} {\bibfield {journal} {\bibinfo {journal} {Journal of Mathematical Physics}\ }\textbf {\bibinfo {volume} {57}},\ \bibinfo {pages} {015215} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bhosale}\ \emph {et~al.}(2012)\citenamefont {Bhosale}, \citenamefont {Tomsovic},\ and\ \citenamefont {Lakshminarayan}}]{bhosale2012entanglement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {U.~T.}\ \bibnamefont {Bhosale}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Tomsovic}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lakshminarayan}},\ }\href {\doibase 10.1103/PhysRevA.85.062331} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {062331} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shapourian}\ \emph {et~al.}(2021)\citenamefont {Shapourian}, \citenamefont {Liu}, \citenamefont {Kudler-Flam},\ and\ \citenamefont {Vishwanath}}]{shapourian2021diagram} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Shapourian}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kudler-Flam}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vishwanath}},\ }\href {\doibase 10.1103/PRXQuantum.2.030347} {\bibfield {journal} {\bibinfo {journal} {PRX Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {030347} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jivulescu}\ \emph {et~al.}(2014)\citenamefont {Jivulescu}, \citenamefont {Lupa},\ and\ \citenamefont {Nechita}}]{jivulescu2014reduction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Jivulescu}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lupa}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Nechita}},\ }\href {https://aip.scitation.org/doi/full/10.1063/1.4901548?casa_token=Z9c6iHu4oOMAAAAA} {\bibfield {journal} {\bibinfo {journal} {Journal of Mathematical Physics}\ }\textbf {\bibinfo {volume} {55}},\ \bibinfo {pages} {112203} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jivulescu}\ \emph {et~al.}(2015)\citenamefont {Jivulescu}, \citenamefont {Lupa},\ and\ \citenamefont {Nechita}}]{jivulescu2015thresholds} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Jivulescu}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lupa}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Nechita}},\ }\href {https://arxiv.org/abs/1503.08008} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint
arXiv:1503.08008}\ } (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aubrun}\ and\ \citenamefont {Nechita}(2012)}]{aubrun2012realigning} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Aubrun}}\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Nechita}},\ }\href {https://aip.scitation.org/doi/full/10.1063/1.4759115?casa_token=dQr3RlOhZwwAAAAA} {\bibfield {journal} {\bibinfo {journal} {Journal of Mathematical Physics}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {102210} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nechita}(2007)}]{Nechita2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Nechita}},\ }\href {\doibase 10.1007/s00023-007-0345-5} {\bibfield {journal} {\bibinfo {journal} {Annales Henri Poincar{\'e}}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {1521} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Laurent}\ and\ \citenamefont {Massart}(2000)}]{laurent2000adaptive} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Laurent}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Massart}},\ }\href {http://www.jstor.org/stable/2674095} {\bibfield {journal} {\bibinfo {journal} {The Annals of Statistics}\ }\textbf {\bibinfo {volume} {28}},\ \bibinfo {pages} {1302} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Johnston}\ and\ \citenamefont {Patterson}(2018)}]{JOHNSTON20181} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Johnston}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Patterson}},\ }\href {\doibase https://doi.org/10.1016/j.laa.2018.03.043} {\bibfield {journal} {\bibinfo {journal} {Linear Algebra and its Applications}\ }\textbf {\bibinfo {volume} {550}},\ \bibinfo {pages} {1} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Holgersson}\
and\ \citenamefont {Singull}(2020)}]{holgersson2020recent} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Holgersson}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Singull}},\ }\href {https://link.springer.com/book/10.1007/978-3-030-56773-6} {\emph {\bibinfo {title} {Recent Developments in Multivariate and Random Matrix Analysis: Festschrift in Honour of Dietrich Von Rosen}}}\ (\bibinfo {publisher} {Springer Nature},\ \bibinfo {year} {2020})\BibitemShut {NoStop} \bibitem [{\citenamefont {Blum}\ \emph {et~al.}(2020)\citenamefont {Blum}, \citenamefont {Hopcroft},\ and\ \citenamefont {Kannan}}]{blum2020foundations} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Blum}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hopcroft}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kannan}},\ }\href {\doibase 10.1017/9781108755528} {\emph {\bibinfo {title} {Foundations of data science}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2020})\BibitemShut {NoStop} \bibitem [{\citenamefont {Flammia}\ and\ \citenamefont {Liu}(2011)}]{flammia2011direct} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~T.}\ \bibnamefont {Flammia}}\ and\ \bibinfo {author} {\bibfnamefont {Y.-K.}\ \bibnamefont {Liu}},\ }\href {\doibase 10.1103/PhysRevLett.106.230501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {230501} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Weilenmann}\ \emph {et~al.}(2020)\citenamefont {Weilenmann}, \citenamefont {Dive}, \citenamefont {Trillo}, \citenamefont {Aguilar},\ and\ \citenamefont {Navascu\'es}}]{weilenmann2020faithful} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Weilenmann}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Dive}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Trillo}}, \bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Aguilar}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Navascu\'es}},\ }\href {\doibase 10.1103/PhysRevLett.124.200502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo {pages} {200502} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {G\"uhne}\ \emph {et~al.}(2021)\citenamefont {G\"uhne}, \citenamefont {Mao},\ and\ \citenamefont {Yu}}]{gunhe2021geometry} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {G\"uhne}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Mao}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-D.}\ \bibnamefont {Yu}},\ }\href {\doibase 10.1103/PhysRevLett.126.140503} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {126}},\ \bibinfo {pages} {140503} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Riccardi}\ \emph {et~al.}(2021)\citenamefont {Riccardi}, \citenamefont {Jones}, \citenamefont {Yu}, \citenamefont {G\"uhne},\ and\ \citenamefont {Kirby}}]{riccardi2021exploring} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Riccardi}}, \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Jones}}, \bibinfo {author} {\bibfnamefont {X.-D.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {G\"uhne}}, \ and\ \bibinfo {author} {\bibfnamefont {B.~T.}\ \bibnamefont {Kirby}},\ }\href {\doibase 10.1103/PhysRevA.103.042417} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages} {042417} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Imai}\ \emph {et~al.}(2021)\citenamefont {Imai}, \citenamefont {Wyderka}, \citenamefont {Ketterer},\ and\ \citenamefont {G\"uhne}}]{imai2021bound} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Imai}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Wyderka}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ketterer}}, \ and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {G\"uhne}},\ }\href {\doibase 10.1103/PhysRevLett.126.150501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {126}},\ \bibinfo {pages} {150501} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Duan}\ \emph {et~al.}(2000)\citenamefont {Duan}, \citenamefont {Giedke}, \citenamefont {Cirac},\ and\ \citenamefont {Zoller}}]{duan2000inseparability} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Duan}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Giedke}}, \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ }\href {\doibase 10.1103/PhysRevLett.84.2722} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {2722} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {G\"uhne}(2004)}]{gunhe2004uncertainty} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {G\"uhne}},\ }\href {\doibase 10.1103/PhysRevLett.92.117903} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {117903} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gray}\ \emph {et~al.}(2018)\citenamefont {Gray}, \citenamefont {Banchi}, \citenamefont {Bayat},\ and\ \citenamefont {Bose}}]{gray2018machine} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gray}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Banchi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bayat}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bose}},\ }\href {\doibase 10.1103/PhysRevLett.121.150503} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {121}},\ \bibinfo {pages} {150503} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2022)\citenamefont {Yin}, \citenamefont {Du}, \citenamefont {Fei}, \citenamefont {Zhang}, \citenamefont {Liu}, \citenamefont {Mao}, \citenamefont {Liu}, \citenamefont {Hsieh}, \citenamefont {Li}, \citenamefont {Liu}, \citenamefont {Tao}, \citenamefont {Chen},\ and\ \citenamefont {Pan}}]{yin2022efficient} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-F.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Du}}, \bibinfo {author} {\bibfnamefont {Y.-Y.}\ \bibnamefont {Fei}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {L.-Z.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Mao}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {M.-H.}\ \bibnamefont {Hsieh}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {N.-L.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Tao}}, \bibinfo {author} {\bibfnamefont {Y.-A.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 10.1103/PhysRevLett.128.110501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {128}},\ \bibinfo {pages} {110501} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhang}\ and\ \citenamefont {Fei}(2020)}]{Zhang_2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.-H.}\ \bibnamefont {Zhang}}\ and\ \bibinfo {author} {\bibfnamefont {S.-M.}\ \bibnamefont {Fei}},\ }\href {\doibase 10.1088/1612-202x/ab8793} {\bibfield {journal} {\bibinfo {journal} {Laser Physics Letters}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {065202} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rath}\ \emph {et~al.}(2021)\citenamefont {Rath}, \citenamefont {Branciard}, \citenamefont {Minguzzi},\ and\ \citenamefont {Vermersch}}]{rath2021fisher} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Rath}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Branciard}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Minguzzi}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Vermersch}},\ }\href {\doibase 10.1103/PhysRevLett.127.260501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {260501} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2020)\citenamefont {Zhou}, \citenamefont {Zeng},\ and\ \citenamefont {Liu}}]{zhou2020single} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zeng}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Liu}},\ }\href {\doibase 10.1103/PhysRevLett.125.200502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {125}},\ \bibinfo {pages} {200502} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2022)\citenamefont {Huang}, \citenamefont {Broughton}, \citenamefont {Cotler}, \citenamefont {Chen}, \citenamefont {Li}, \citenamefont {Mohseni}, \citenamefont {Neven}, \citenamefont {Babbush}, \citenamefont {Kueng}, \citenamefont {Preskill},\ and\ \citenamefont {McClean}}]{Huang2022} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-Y.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Broughton}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Cotler}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mohseni}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Neven}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kueng}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}},\ }\href {\doibase 10.1126/science.abn7293} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {376}},\ \bibinfo {pages} {1182} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vinjanampathy}\ and\ \citenamefont {Anders}(2016)}]{vinjanampathy2016quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Vinjanampathy}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Anders}},\ }\href {https://doi.org/10.1080/00107514.2016.1201896} {\bibfield {journal} {\bibinfo {journal} {Contemporary Physics}\ }\textbf {\bibinfo {volume} {57}},\ \bibinfo {pages} {545} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \end{thebibliography} \appendix \onecolumngrid \setcounter{theorem}{1} \section{Detection Capability Upper Bound 
of EW Criteria}\label{section: EW} \subsection{Restriction of Valid EWs} \begin{lemma}[Restriction of Valid EWs] For any valid EW $W$ satisfying \begin{equation} \forall \rho\in \mathrm{SEP}: \tr(W\rho)\geq 0, \end{equation} the following inequality always holds: \begin{equation} \alpha^2=\frac{\tr(W)^2}{\tr(W^2)}\geq 1. \end{equation} \end{lemma} \begin{proof} Given an EW $W$, without loss of generality, we assume $\tr(W)=1$. Write $W$ in the form \begin{equation} W=\frac{\mathbb{I}}{d}+\frac{c\sigma}{d}, \end{equation} where $\sigma$ is a Hermitian operator satisfying $\tr(\sigma^2)=d$ and $\tr(\sigma)=0$, and $c$ is a real constant. We have \begin{equation} \tr(W^2)=\tr\pqty{\frac{\mathbb{I}}{d^2}}+\tr\pqty{\frac{c^2\sigma^2}{d^2}}+2\tr\pqty{\frac{c\sigma}{d^2}}=\frac{1+c^2}{d}. \end{equation} To show $\alpha^2=\frac{\tr(W)^2}{\tr(W^2)}\geq 1$, we prove that $c^2\leq d-1$ by constructing the state \begin{equation} \rho_0=\frac{\mathbb{I}}{d}-\frac{\sigma}{\sqrt{d-1}d}. \end{equation} According to \cite{gurvits2002largest}, the set of separable states has a non-zero inner radius: \begin{equation} \forall \rho\in\mathcal{D}(\mathcal{H}_d), \tr(\rho^2)\leq \frac{1}{d-1}\to \rho \in \mathrm{SEP}. \end{equation} We can directly verify that \begin{equation} \tr(\rho^2_0)=\frac{1}{d}+\frac{1}{d(d-1)}=\frac{1}{d-1}, \end{equation} which means $\rho_0$ is not only a valid state (a unit-trace Hermitian operator with purity at most $\frac{1}{d-1}$ is automatically positive semidefinite) but also separable. Since $W$ is an EW, $\tr(W\rho_0)\geq 0$: \begin{equation} \begin{split} \tr(W\rho_0)&\geq 0\\ \frac{1}{d}-\frac{c}{d\sqrt{d-1}}&\geq 0\\ c&\leq \sqrt{d-1}. \end{split} \end{equation} Repeating the same argument with $\sigma$ replaced by $-\sigma$ gives $c\geq -\sqrt{d-1}$, and hence $c^2\leq d-1$. We conclude that for any valid entanglement witness $W$, \begin{equation} \alpha^2=\frac{\tr(W)^2}{\tr(W^2)}=\frac{d}{1+c^2}\geq 1. \end{equation} \end{proof} \begin{figure} \caption{Graphical illustration of EW criteria.} \label{fig:geometry} \end{figure} We can also give a graphical illustration of this lemma, which helps to build intuition for the EW criteria.
In Fig.~\ref{fig:geometry}, we use the Pauli-Liouville representation to represent density matrices and EWs as vectors in operator space. The normalized identity $\frac{\mathbb{I}}{\sqrt{d}}$ is the $x$-axis, and the $y$-axis represents one of the other Pauli basis elements. The expectation value of an observable is the inner product of the state vector and the observable vector. Because of the trace condition $\tr(\rho)=1$, the density matrices lie in a hyperplane orthogonal to the $x$-axis. We use the solid and meshed areas to represent the entangled and separable states, respectively. An EW detects the states, labeled by horizontal lines, whose vectors form an obtuse angle with the EW vector. This observation, together with the fact that any state within a certain distance of the maximally mixed state is separable, ensures that the angle between a valid EW and the $y$-axis is larger than some constant. Quantitatively, this tells us that $\alpha=\frac{\tr(W)}{\sqrt{\tr(W^2)}}$ has a minimum value. Without loss of generality, we assume all the EWs satisfy the normalization condition $\tr(W^2)=\frac{1}{d}$. Thus all the EWs lie on a sphere centered at the origin, represented by the dashed circle. Due to the constraint $\alpha \geq 1$ ($\tr(W)\geq \frac{1}{\sqrt{d}}$), valid EWs lie within the dashed circular-sector area. Given an EW, the states that it can detect lie in a fixed region of the space. When $k$ increases, the distribution of states, represented by the darkness of the color, concentrates towards the maximally mixed state, making the ratio of detectable states decrease accordingly. The solid-line circles on the right represent the boundary of the typical set.
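As a quick numerical sanity check of the lemma (our own illustration, not part of the argument), consider the standard two-qubit witness $W=\mathbb{I}/2-\ketbra{\Phi^+}$:

```python
import numpy as np

# Sanity check of the lemma: for the standard two-qubit witness
# W = I/2 - |Phi+><Phi+|, which satisfies tr(W rho) >= 0 on separable states,
# the ratio alpha^2 = tr(W)^2 / tr(W^2) must be at least 1.
d = 4
phi = np.zeros(d)
phi[0] = phi[3] = 1 / np.sqrt(2)          # |Phi+> = (|00> + |11>)/sqrt(2)
P = np.outer(phi, phi)                    # projector onto |Phi+>
W = 0.5 * np.eye(d) - P

alpha_sq = np.trace(W) ** 2 / np.trace(W @ W)
assert alpha_sq >= 1 - 1e-12              # lemma holds (here it is saturated)
assert np.trace(W @ P).real < 0           # W detects the maximally entangled state
```

This witness saturates the bound ($\alpha^2=1$), matching the extremal case $c^2=d-1$ in the proof.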
\subsection{Proof of Theorem 2} \begin{theorem}[Detection Capability of EW Criteria]\label{theorem:singleEW} The detection capability of an EW criterion with $W$ decays at least exponentially with the dimension of the environment: \begin{equation} \mathcal{C}_k(W)< 2e^{-(\sqrt{1+\alpha}-1)^2 k}\leq 2e^{-(3-2\sqrt{2}) k}, \end{equation} where $\alpha=\frac{\tr(W)}{\sqrt{\tr(W^2)}}\geq 1$ is a witness-dependent factor. \end{theorem} \begin{proof} We first generalize the definition of detection capability from EWs to any observable $O$ with a positive trace, and prove the bound in this generalized setting. Similarly, define \begin{equation} \mathcal{C}_k(O)=\Pr_{\rho\sim\pi_{d,k}}\bqty{\tr(O\rho)<0} \end{equation} as the detection capability of an observable $O$. Since $O$ can be decomposed as $O=U_O\Lambda_OU_O^\dagger$, where $U_O$ is unitary and $\Lambda_O$ is the diagonal matrix of eigenvalues of $O$, we can equivalently rewrite \begin{equation} \mathcal{C}_k(O)=\Pr_{\rho\sim\pi_{d,k}}\bqty{\tr(U_O\Lambda_OU_O^\dagger \rho)<0}=\Pr_{\rho\sim\pi_{d,k}}\bqty{\tr(\Lambda_OU_O^\dagger \rho U_O)<0}. \end{equation} By the unitary invariance of the induced measure, if $\rho$ follows the distribution $\pi_{d,k}$, then $U_O^\dagger\rho U_O$ also follows $\pi_{d,k}$, as $U_O$ is a fixed unitary \cite{Nechita2007}. Therefore, \begin{equation} \mathcal{C}_k(O)=\mathcal{C}_k(\Lambda_O) \end{equation} depends only on the eigenvalues of $O$. To analyze $\mathcal{C}_k(O)$, we need to write down the distribution of $\rho$ explicitly. According to the definition of $\pi_{d,k}$, $\rho\in\mathcal{D}(\mathcal{H})$ can be written as the reduced density matrix of a random pure state in a larger Hilbert space, $\rho=\tr_R(\ketbra{\Psi})$, where $\ket\Psi$ is a random state in $\mathcal{H}\otimes \mathcal{H}_R$.
The distribution of $\ket\Psi$ can be generated by random Gaussian variables: \begin{equation} \ket\Psi=\sum_{i=1}^d\sum_{j=1}^k \frac{z_{i,j}}{\sqrt{\tr(ZZ^\dagger)}}\ket{\phi_i}\ket{\psi_j}, \end{equation} where $z_{i,j}$ is the $(i,j)$ entry of a random complex Gaussian matrix $Z$ \cite{Nechita2007}, and $\{\ket{\phi_i}\}$ and $\{\ket{\psi_j}\}$ form orthonormal bases for $\mathcal{H}$ and $\mathcal{H}_R$, respectively. Precisely speaking, \begin{equation}\label{eq:distribution} x_{i,j}=\Re(z_{i,j})\sim N(0, 1), \quad y_{i,j}=\Im(z_{i,j})\sim N(0, 1), \end{equation} are all standard Gaussian variables. Hence, \begin{equation} \rho=\tr_R(\ketbra{\Psi})=\sum_{i,j=1}^d \frac{\sum_{l=1}^k z_{i,l}z^*_{j,l}}{\tr(ZZ^\dagger)}\ket{\phi_i}\bra{\phi_j}. \end{equation} Therefore, the detection capability can be written as \begin{equation}\label{eq:probGaussian} \begin{split} \Pr_{\rho\sim\pi_{d,k}}\bqty{\tr(\Lambda_O\rho)<0}=&\Pr_{x_{i,j},y_{i,j}\sim N(0, 1)}\bqty{\pqty{\frac{1}{\tr(ZZ^\dagger)}}\sum_{i=1}^{d} \sum_{j=1}^k\lambda_i z_{i,j}z_{i,j}^*<0}\\ =&\Pr_{x_{i,j},y_{i,j}\sim N(0, 1)}\bqty{\sum_{i=1}^{d} \sum_{j=1}^k\lambda_i(x_{i,j}^2+y_{i,j}^2)<0}. \end{split} \end{equation} We label the positive and negative eigenvalues of $O$ as $a_1, \cdots, a_p$ and $-b_1, \cdots, -b_q$, with $a_i, b_j>0$ and $p+q\le d$. We can then rewrite Eq.~\eqref{eq:probGaussian} as \begin{equation} \Pr_{x_{i,j},y_{i,j}\sim N(0, 1)}\bqty{\sum_{i=1}^{d} \sum_{j=1}^k\lambda_i(x_{i,j}^2+y_{i,j}^2)<0}=\Pr_{x_{i,j},y_{i,j}\sim N(0, 1)}\bqty{\sum_{j=1}^{2k}\pqty{\sum_{i=1}^p a_ix_{i,j}^2-\sum_{i=1}^q b_i y_{i,j}^2}<0},\label{Proof of theorem 1:step3} \end{equation} where the $x$ and $y$ on the two sides are not the same variables; we relabel them for clarity while keeping them independent.
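The Gaussian representation above doubles as a direct sampler for $\pi_{d,k}$; a minimal sketch (our own illustration, with arbitrary choices of $d$ and $k$):

```python
import numpy as np

# Sample rho ~ pi_{d,k} exactly as in the construction above:
# rho = Z Z^dagger / tr(Z Z^dagger), with Z a d x k complex Gaussian matrix.
rng = np.random.default_rng(0)

def induced_state(d, k):
    Z = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))
    M = Z @ Z.conj().T
    return M / np.trace(M).real

rho = induced_state(d=4, k=20)
assert np.isclose(np.trace(rho).real, 1.0)        # unit trace
assert np.min(np.linalg.eigvalsh(rho)) > -1e-12   # positive semidefinite
# Larger k pushes rho towards the maximally mixed state I/d.
```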
For simplicity, we define $\vb* u=(a_1,\dots,a_p,\dots,a_1,\dots,a_p)$ and $\vb* v=(b_1,\dots,b_q,\dots,b_1,\dots,b_q)$, the $2k$-fold replicas of $\vb* a=(a_1,\dots,a_p)$ and $\vb* b=(b_1,\dots,b_q)$, respectively. Accordingly, \begin{equation} \Pr_{x_{i,j},y_{i,j}\sim N(0, 1)}\bqty{\sum_{j=1}^{2k}\pqty{\sum_{i=1}^p a_ix_{i,j}^2-\sum_{i=1}^q b_i y_{i,j}^2}<0}=\Pr_{x_i,y_i\sim N(0,1)}\left(\sum_{i=1}^{2kp}u_ix_i^2-\sum_{i=1}^{2kq}v_iy_i^2<0\right). \end{equation} Using the union bound, we can show that for any real number $c$, \begin{equation} \begin{split} \Pr_{x_i,y_i\sim N(0,1)}\left(\sum_{i=1}^{2kp}u_ix_i^2-\sum_{i=1}^{2kq}v_iy_i^2<0\right)&\le\Pr_{x_i,y_i\sim N(0,1)}\left(\left(\sum_{i=1}^{2kp}u_ix_i^2\le c\right)\cup\left(\sum_{i=1}^{2kq}v_iy_i^2\ge c\right)\right)\\ &\le\Pr_{x_i\sim N(0,1)}\left(\sum_{i=1}^{2kp}u_ix_i^2\le c\right)+\Pr_{y_i\sim N(0,1)}\left(\sum_{i=1}^{2kq}v_iy_i^2\ge c\right). \end{split} \end{equation} To bound this probability, we adopt the Laurent--Massart lemma \cite{laurent2000adaptive}, which states that for non-negative vectors $\vb* u$ and $\vb* v$ and i.i.d. variables $\{x_i\sim N(0,1)\}$, the following two inequalities hold for all positive numbers $t_1$ and $t_2$: \begin{equation} \begin{split} &\Pr_{x_i\sim N(0,1)}\left(\sum_i u_i x_i^2\leq \norm{\vb* u}_1-2\norm{\vb* u}_2\sqrt {t_2}\right)\leq e^{-t_2}\\ &\Pr_{y_i\sim N(0,1)}\left(\sum_i v_i y_i^2\geq \norm{\vb* v}_1+2\norm{\vb* v}_2\sqrt {t_1}+2\norm{\vb* v}_\infty t_1\right)\leq e^{-t_1} \end{split} \end{equation} where $\norm{\vb* v}_1=\sum_i\abs{v_i}$, $\norm{\vb* v}_2=\sqrt{\sum_iv_i^2}$, and $\norm{\vb* v}_\infty=\max_i\abs{v_i}$.
Hence, if \begin{equation}\label{eq:t1t2relation} \begin{split} &\norm{\vb* u}_1-2\norm{\vb* u}_2\sqrt{t_2}=2k\norm{\vb* a}_1-2\sqrt{2k}\norm{\vb* a}_2\sqrt{t_2}=c\\ &\norm{\vb* v}_1+2\norm{\vb* v}_2\sqrt{t_1}+2\norm{\vb* v}_\infty t_1=2k\norm{\vb* b}_1+2\sqrt{2k}\norm{\vb* b}_2\sqrt{t_1}+2\norm{\vb* b}_\infty t_1=c \end{split} \end{equation} hold, then the probability can be upper bounded by \begin{equation}\label{eq:upperbound12} \Pr_{x_i,y_i\sim N(0,1)}\left(\sum_{i=1}^{2kp}u_ix_i^2-\sum_{i=1}^{2kq}v_iy_i^2<0\right)\le e^{-t_1}+e^{-t_2}. \end{equation} To find a $c$ that gives the tightest bound, note that, according to Eq.~\eqref{eq:upperbound12}, the upper bound is determined by the smaller of $t_1$ and $t_2$, while Eq.~\eqref{eq:t1t2relation} tells us that the values of $t_1$ and $t_2$ are inversely related. Therefore, the tightest upper bound is reached when $t_1=t_2=t$, which gives the exact value of $t$: \begin{equation} \sqrt {\frac{t}{2k}}=\frac{-(\norm{\vb* a}_2+\norm{\vb* b}_2)+\sqrt{{(\norm{\vb* a}_2+\norm{\vb* b}_2)}^2+2\norm{\vb* b}_\infty \tr(O)}}{2\norm{\vb* b}_\infty}. \end{equation} To further simplify this expression, let $\alpha=\frac{\tr(O)}{\sqrt{\tr(O^2)}}$; then we have \begin{equation} \begin{split} \sqrt {\frac{t}{2k}}&=\frac{-(\norm{\vb* a}_2+\norm{\vb* b}_2)+\sqrt{{(\norm{\vb* a}_2+\norm{\vb* b}_2)}^2+2\norm{\vb* b}_\infty \tr(O)}}{2\norm{\vb* b}_\infty}\\&\geq \frac{-\sqrt{2\tr(O^2)}+\sqrt{2\tr(O^2)+2\norm{\vb* b}_\infty \tr(O)}}{2\norm{\vb* b}_{\infty}}\\&= \frac{-\alpha^{-1}\tr(O)+\sqrt{\alpha^{-2}{\tr(O)}^2+\norm{\vb* b}_\infty \tr(O)}}{\sqrt{2}\norm{\vb* b}_\infty}. \end{split} \end{equation} The first inequality uses the fact that $f(x)=-x+\sqrt{1+x^2}$ is monotonically decreasing and that $\tr(O^2)=\norm{\vb* a}_2^2+\norm{\vb* b}_2^2\geq \frac{{(\norm{\vb* a}_2+\norm{\vb* b}_2)}^2}{2}$. Define \begin{equation} x=\frac{\alpha\norm{\vb* b}_\infty}{\tr(O)}=\frac{\norm{\vb* b}_\infty}{\sqrt{\tr(O^2)}}\geq0.
\end{equation} Since $\norm{\vb* b}_\infty=\max_i\abs{b_i}<\sqrt{\tr(O^2)}$ by definition, it is easy to show that $0\le x\le \sqrt{1-\frac{1}{d}}< 1$. Therefore, \begin{equation} \sqrt{\frac{t}{k}}\geq \frac{-1+\sqrt{1+\alpha x}}{x}>\sqrt{1+\alpha}-1, \end{equation} where we use the fact that the function $\frac{-1+\sqrt{1+\alpha x}}{x}$ is monotonically decreasing in $x$ and equals $\sqrt{1+\alpha}-1$ at $x=1$, while $x<1$. Combined with Eq.~\eqref{eq:upperbound12}, we have \begin{equation} \mathcal{C}_k(O) < 2e^{-{(\sqrt{1+\alpha}-1)}^2k}. \end{equation} \end{proof} \section{Detection Capability Upper Bound of Parameterized EW Criteria}\label{sec:Para EW} \subsection{Proof of Theorem 3} \begin{theorem}[Detection Capability of Parameterized EW Criteria] For any parameterized EW represented by a normalized $l$-Lipschitz map $\mathcal{M}$ satisfying \begin{equation} \forall \vb*\theta,\vb*\theta'\in \Theta:\norm{\mathcal{M}(\vb*\theta)-\mathcal{M}(\vb*\theta')}_F\leq l\norm{\vb*\theta-\vb*\theta'}_2, \end{equation} the detection capability decays at least exponentially with $k$ once $k$ exceeds a certain threshold: \begin{equation} \mathcal{C}^p_k(\mathcal{M})\leq 2e^{C_1-C_2k}, \end{equation} where $C_1=M\ln \frac{2\sqrt{M}ld}{\epsilon}$ with $M$ the number of real parameters in $\mathcal{M}$, $C_2=(\sqrt{1+\alpha_{\min}-\epsilon}-1)^2$ with $\alpha_{\min}=\min_{\vb*\theta} \frac{\tr[\mathcal{M}(\vb*\theta)]}{\sqrt{\tr[\mathcal{M}(\vb*\theta)^2]}}=\min_{\vb*\theta} \tr[\mathcal{M}(\vb*\theta)]\geq 1$, and $0< \epsilon<1$ is an arbitrary number. By choosing $\epsilon=0.5$, we recover the theorem in the main text. \label{theorem:paraEWsinapp} \end{theorem} We prove this theorem using a coarse-graining method. The proof sketch is shown in Fig.~\ref{fig:proofoftheo2}.
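The bound of Theorem \ref{theorem:singleEW} can also be illustrated numerically; the following Monte-Carlo sketch (our own illustration, not part of the proof) uses the two-qubit witness $W=\mathbb{I}/2-\ketbra{\Phi^+}$, for which $\alpha=1$:

```python
import numpy as np

# Monte-Carlo check of Theorem 2: for W = I/2 - |Phi+><Phi+| (alpha = 1),
# the empirical probability that tr(W rho) < 0 under pi_{4,k} decays with k
# and stays below the bound 2 exp(-(sqrt(2)-1)^2 k).
rng = np.random.default_rng(1)
d = 4
phi = np.zeros(d)
phi[0] = phi[3] = 1 / np.sqrt(2)
W = 0.5 * np.eye(d) - np.outer(phi, phi)
alpha = (np.trace(W) / np.sqrt(np.trace(W @ W))).real

def detect_prob(k, trials=2000):
    hits = 0
    for _ in range(trials):
        Z = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))
        rho = Z @ Z.conj().T
        rho /= np.trace(rho).real
        hits += np.trace(W @ rho).real < 0
    return hits / trials

rates = {k: detect_prob(k) for k in (1, 2, 4, 8)}
for k, rate in rates.items():
    assert rate < 2 * np.exp(-(np.sqrt(1 + alpha) - 1) ** 2 * k)
assert rates[8] < rates[1]     # detection becomes harder as k grows
```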
\begin{figure} \caption{Proof sketch of Theorem \ref{theorem:paraEWsinapp}.} \label{fig:proofoftheo2} \end{figure} \begin{proof} A parameterized EW $\mathcal{M}$ is a map that maps $M$ real parameters to a continuous set of EWs: \begin{equation} \forall \vb* \theta\in\Theta\subset[-1,1]^M,\rho\in \mathrm{SEP}:\tr(\rho \mathcal{M}(\vb* \theta))\geq 0. \end{equation} We are going to bound the detection capability of a parameterized EW \begin{equation} \mathcal{C}^p_k(\mathcal{M})=\Pr_{\rho\sim\pi_{d,k}}\bqty{\exists \vb*\theta\in\Theta: \tr(\rho \mathcal{M}(\vb* \theta))<0} \end{equation} by constructing a finite set of observables $\mathcal{O}=\Bqty{O_i|i=1,\cdots, N}$ (not necessarily EWs), such that all the entangled states $\rho$ that can be detected by $\mathcal{M}$ can also be detected by $\mathcal{O}$, \begin{equation} \forall \rho : \exists \vb*\theta\in\Theta, \tr\left(\rho \mathcal{M}(\vb* \theta)\right)<0\to \exists O_i\in \mathcal{O},\tr\pqty{\rho O_i}<0. \end{equation} Once we find the observable set $\mathcal{O}$, the detection capability of $\mathcal{M}$ is bounded by the detection capability of $\mathcal{O}$, \begin{equation} \mathcal{C}^p_k(\mathcal{M})\leq \mathcal{C}_k(\mathcal{O}) = \Pr_{\rho\sim\pi_{d,k}}\left[\exists O\in\mathcal{O},\tr(O\rho)<0\right]. \end{equation} First, we coarse-grain the parameter space: define $\Theta^*=\Bqty{\vb* \theta_i\in\Theta,i=1,\cdots, N}$ such that \begin{equation}\label{eq:coarsegraincondition} \forall \vb*\theta \in \Theta, \exists \vb* \theta^*\in \Theta^*:\norm{\vb* \theta-\vb* \theta^*}_2\leq \delta. \end{equation} Since $\mathcal{M}$ is $l$-Lipschitz, we have \begin{equation} \forall\vb*\theta \in \Theta, \exists \vb* \theta^*\in\Theta^*:\norm{\mathcal{M}(\vb* \theta)-\mathcal{M}(\vb* \theta^*)}_F\leq l\delta, \end{equation} which means that \begin{equation} \forall \vb*\theta \in \Theta, \exists \vb* \theta^*\in\Theta^*: \mathcal{M}(\vb* \theta)-(\mathcal{M} (\vb* \theta^*)-l\delta \mathbb{I}) \geq 0.
\end{equation} Hence, for any state $\rho$ satisfying $\tr\left(\rho\mathcal{M}(\vb* \theta)\right)<0$, it also holds that \begin{equation} \exists\vb*\theta^*\in\Theta^*:\tr\left[\left(\mathcal{M}(\vb* \theta^*)-l\delta \mathbb{I}\right)\rho\right]<0. \end{equation} Therefore, we can choose $\mathcal{O}$ to be $\mathcal{O}=\Bqty{\mathcal{M}(\vb* \theta_i)-l\delta\mathbb{I},\vb* \theta_i\in\Theta^*}$, whose detection capability can be bounded using Theorem \ref{theorem:singleEW}. To bound $\mathcal{C}_k(\mathcal{O})$, we need to answer two questions: what is the detection capability of a single $O_i\in\mathcal{O}$, and how many elements does $\mathcal{O}$ contain. According to Theorem \ref{theorem:singleEW}, the key quantity for bounding $\mathcal{C}_k\left(\mathcal{M}(\vb* \theta^*)-l\delta\mathbb{I}\right)$ is \begin{equation} \begin{split} \alpha^2=\frac{\bqty{\tr\pqty{\mathcal{M}(\vb* \theta^*)-l\delta \mathbb{I}}}^2}{\tr((\mathcal{M}(\vb* \theta^*)-l\delta \mathbb{I})^2)}=\frac{(\alpha^*-l\delta d)^2}{1-2l\delta\alpha^*+(l\delta)^2d},\label{proof of theorem 2:step1} \end{split} \end{equation} where $\alpha^*=\frac{\tr(\mathcal{M}(\vb* \theta^*))}{\sqrt{\tr(\mathcal{M}(\vb* \theta^*)^2)}}=\tr(\mathcal{M}(\vb* \theta^*)) \geq 1$. By the norm inequality, it can also be directly verified that $\alpha^*\leq \sqrt{d}$. Setting $\epsilon=l\delta d$ with $0<\epsilon<1$, we have \begin{equation} 1-2l\delta\alpha^*+(l\delta)^2d=1-2\frac{\epsilon\alpha^*}{d}+\frac{\epsilon^2}{d}>0 \end{equation} and \begin{equation} 1-2\frac{\epsilon\alpha^*}{d}+\frac{\epsilon^2}{d}\leq 1. \end{equation} Combining these inequalities with Eq.~\eqref{proof of theorem 2:step1}, we get $\alpha\geq \alpha^*-\epsilon$.
Thus the detection capability of a single observable in $\mathcal{O}$ can be bounded by \begin{equation}\label{proof of theorem 2:step2} \mathcal{C}_k\left(\mathcal{M}(\vb* \theta^*)-l\delta\mathbb{I}\right)< 2e^{-(\sqrt{1+\alpha_{\mathrm{min}}-\epsilon}-1)^2k}, \end{equation} where $\alpha_{\min}=\min_{\vb*\theta\in \Theta} \frac{\tr(\mathcal{M}(\vb* \theta))}{\sqrt{\tr(\mathcal{M}(\vb* \theta)^2)}}\geq 1$. To count the number of elements in $\mathcal{O}$, we divide the parameter space into small cubes with side length $\frac{\delta}{\sqrt{M}}$. In each cube, there exists a $\vb*\theta_i$ such that for all the $\vb*\theta$ contained in this cube, $\norm{\vb*\theta-\vb*\theta_i}_2\le\sqrt{M\left(\frac{\delta}{\sqrt{M}}\right)^2}=\delta$, which fulfills the condition of Eq.~\eqref{eq:coarsegraincondition}. As the volume of the parameter space is upper bounded by $2^M$, the number of cubes, which is also an upper bound on the number of elements in $\mathcal{O}$, is \begin{equation}\label{eq:numberofobs} \abs{\mathcal{O}}=\left(\frac{2\sqrt{M}}{\delta}\right)^M=\left(\frac{2\sqrt{M}ld}{\epsilon}\right)^M. \end{equation} Combining Eq.~\eqref{proof of theorem 2:step2} and Eq.~\eqref{eq:numberofobs}, we can finish the proof by \begin{equation} \mathcal{C}^p_k(\mathcal{M})\le\mathcal{C}_k(\mathcal{O})<2 e^{M\ln \frac{2\sqrt{M}ld}{\epsilon}-(\sqrt{1+\alpha_{\min}-\epsilon}-1)^2k}. \end{equation} \end{proof} \subsection{Examples: Positive Map and Faithful Entanglement Criteria} A bipartite state $\rho\in\mathcal{D}(\mathcal{H}_A\otimes\mathcal{H}_B)$ is detected by a positive map $\mathcal{N}$ if and only if it is detected by the parameterized EW $\mathcal{M}_{\mathcal{N}}\pqty{\vb*\theta}=\frac{\mathcal{N}_A\otimes\mathbb{I}_B(\ketbra{\phi(\vb*\theta)})}{\norm{\mathcal{N}_A\otimes\mathbb{I}_B(\ketbra{\phi(\vb*\theta)})}_F}$ for some $\vb*\theta$.
This is equivalent to \begin{equation} \exists\vb*\theta\in S^{2d-1}: \tr\bqty{\mathcal{N}_A\otimes\mathbb{I}_B\left(\ketbra{\phi(\vb*\theta)}\right)\rho}<0, \end{equation} where $S^{2d-1}$ is the unit sphere in the $2d$-dimensional parameter space, and $\braket{j}{\phi(\vb* \theta)}=\theta_{2j}+i\theta_{2j+1}$. Hence, substituting $M$ with $2d$, we have: \begin{corollary}[Detection Capability of Positive Maps]\label{coro:PNCP} A normalized $l$-Lipschitz positive map $\mathcal{N}$ has detection capability: \begin{equation} \mathcal{C}^p_k(\mathcal{M}_{\mathcal{N}})< 2e^{C_1-C_2k}, \end{equation} where $C_1=2d\ln \bqty{2^{2.5}d^{1.5}l}$, $C_2=(\sqrt{0.5+\alpha_{\min}}-1)^2$, and $\alpha_{\mathrm{min}}=\min_{\vb*\theta}\frac{\tr[\mathcal{N}_A\otimes \mathbb{I}_B(\ketbra{\phi(\vb*\theta)})]}{\sqrt{\tr[\mathcal{N}_A\otimes \mathbb{I}_B(\ketbra{\phi(\vb*\theta)})^2]}}$. \end{corollary} \begin{proof} It follows directly from Theorem~\ref{theorem:paraEWsinapp} by choosing $\epsilon=0.5$ and $M=2d$. \end{proof} Take the PPT criterion as an example, where $\mathcal{M}_{\mathrm{PPT}}(\vb*\theta)=\ketbra{\phi(\vb*\theta)}^{{T_B}}$. It can be easily shown that the partial transposition map is $\sqrt{2}$-Lipschitz and $\alpha_{\mathrm{min}}=1$. First, we give the relationship between the $F$-norm of the density matrix representation and the $2$-norm of the real-valued vector representation.
\begin{equation} \begin{split} \norm{{\vb*\theta}-{\vb*\theta'}}_2^2&=\norm{\ket{\phi(\vb*\theta)}-\ket{\phi(\vb*\theta')}}_2^2\\&=2-2\Re(\braket{\phi(\vb*\theta)}{\phi(\vb*\theta')})\\&\geq 2-2\abs{\braket{\phi(\vb*\theta)}{\phi(\vb*\theta')}}, \end{split} \end{equation} \begin{equation} \begin{split} \norm{\ketbra{\phi(\vb*\theta)}-\ketbra{\phi(\vb*\theta')}}_F^2&=2-2\abs{\braket{\phi(\vb*\theta)}{\phi(\vb*\theta')}}^2\\&=(2-2\abs{\braket{\phi(\vb*\theta)}{\phi(\vb*\theta')}})(1+\abs{\braket{\phi(\vb*\theta)}{\phi(\vb*\theta')}})\\&\leq 2\norm{{\vb*\theta}-{\vb*\theta'}}_2^2. \end{split} \end{equation} So the map $\mathcal{M}(\vb*\theta)=\ketbra{\phi(\vb*\theta)}$ is $\sqrt{2}$-Lipschitz. The partial transposition map $\mathcal{M}(\vb*\theta)=\ketbra{\phi(\vb*\theta)}^{{T_B}}$ is automatically normalized, \begin{equation} \norm{\mathcal{M}(\vb*\theta)}_F=\norm{\ketbra{\phi(\vb*\theta)}^{{T_B}}}_F=\norm{\ketbra{\phi(\vb*\theta)}}_F=1, \end{equation} and \begin{equation} \begin{split} \norm{\mathcal{M}(\vb*\theta)-\mathcal{M}(\vb*\theta')}_F&=\norm{\ketbra{\phi(\vb*\theta)}^{T_B}-\ketbra{\phi(\vb*\theta')}^{T_B}}_F\\&=\norm{\ketbra{\phi(\vb*\theta)}-\ketbra{\phi(\vb*\theta')}}_F\\&\leq \sqrt{2}\norm{\vb*\theta-\vb*\theta'}_2. \end{split} \end{equation} Therefore the partial transposition map is $\sqrt{2}$-Lipschitz. Corollary \ref{coro:PNCP} shows that for $k=\Omega(d\ln d)$, the PPT criterion can hardly detect any entanglement, which is consistent with previous results \cite{nidari2007witness,bhosale2012entanglement,shapourian2021diagram}.
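The $\sqrt{2}$-Lipschitz property of the partial transposition map can be spot-checked numerically; a minimal sketch for two qubits (our own illustration; the function names are ours), with parameter vectors restricted to the unit sphere:

```python
import numpy as np

# Spot-check of the sqrt(2)-Lipschitz bound for the partial transposition map
# M(theta) = |phi(theta)><phi(theta)|^{T_B} on two qubits, with theta on the
# unit sphere so that |phi(theta)> is normalized.
rng = np.random.default_rng(2)
dA = dB = 2
d = dA * dB

def state(theta):                 # theta in R^{2d}, |theta|_2 = 1
    return theta[0::2] + 1j * theta[1::2]

def pt_witness(theta):            # |phi><phi| followed by partial transpose on B
    v = state(theta)
    P = np.outer(v, v.conj()).reshape(dA, dB, dA, dB)
    return P.transpose(0, 3, 2, 1).reshape(d, d)

for _ in range(500):
    t1 = rng.normal(size=2 * d); t1 /= np.linalg.norm(t1)
    t2 = rng.normal(size=2 * d); t2 /= np.linalg.norm(t2)
    lhs = np.linalg.norm(pt_witness(t1) - pt_witness(t2))   # Frobenius norm
    assert lhs <= np.sqrt(2) * np.linalg.norm(t1 - t2) + 1e-12
```

The Frobenius norm is invariant under partial transposition, which is why the same $\sqrt{2}$ constant carries over from the projector map.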
For faithful EW, the parameterized EW can be defined as \begin{equation} \mathcal{M}(\vb*\theta)=\pqty{\sqrt{2-\frac{2}{\sqrt{d}}}}^{-1}\pqty{\frac{\mathbb{I}}{\sqrt{d}}-\ketbra{\phi(\vb*\theta)}}, \end{equation} where $\pqty{\sqrt{2-\frac{2}{\sqrt{d}}}}^{-1}\leq 1$ is a factor ensuring that $\mathcal{M}(\vb*\theta)$ is normalized. Then \begin{equation} \begin{split} \norm{\mathcal{M}(\vb*\theta)-\mathcal{M}(\vb*\theta')}_F&=\pqty{\sqrt{2-\frac{2}{\sqrt{d}}}}^{-1}\norm{\ketbra{\phi(\vb*\theta)}-\ketbra{\phi(\vb*\theta')}}_F\\&\leq \sqrt{2}\norm{\vb*\theta-\vb*\theta'}_2. \end{split} \end{equation} So the faithful map is also $\sqrt{2}$-Lipschitz with $2d$ real parameters. Combined with the fact that $\alpha_{\min}=\sqrt{\frac{d-\sqrt{d}}{2}}$, we have \begin{corollary}[Ratio of Faithful Entangled States] The set of faithful entangled states has an exponentially small ratio in the state space: \begin{equation} \Pr_{\rho\sim \pi_{d,k}}[\rho\in \mathrm{FE}]=\mathcal{C}^p_k\pqty{\mathcal{M}_{\mathrm{faithful}}}< 2e^{C_1-C_2k}, \end{equation} where $\mathrm{FE}$ is the set of all faithful entangled states, $C_1=3d\ln 4d$, and $C_2=\pqty{\sqrt{0.5+\sqrt{\frac{d-\sqrt{d}}{2}}}-1}^2\approx \sqrt{\frac{d}{2}}$. \end{corollary} \section{Detection Capability Upper Bound of Single-copy Criteria}\label{sec:singlecopy} \subsection{Proof of Theorem 4} \begin{theorem}[Detection Capability of Single-Copy Criteria] \label{theorem:singlecopy_inapp} Any single-copy entanglement criterion $\mathcal{O}$ with $M-1$ observables has detection capability \begin{equation} \mathcal{C}^s_k(\mathcal{O})\leq 2e^{C_1-C_2k}, \end{equation} where $C_1=M\ln \frac{2\sqrt{M}d}{\epsilon}$, $C_2=(\sqrt{2-\epsilon}-1)^2$, and $0<\epsilon<1$ is an arbitrary number. Choosing $\epsilon=0.5$ recovers the theorem in the main text. \end{theorem} \begin{proof} Without loss of generality, we add $O_M=\frac{\mathbb{I}}{\sqrt{d}}$ to the set, so that $\mathcal{O}$ now has $M$ observables.
We further assume that the observables in $\mathcal{O}$ are mutually orthonormal in the operator space, $\tr(O_iO_j)=\delta_{ij}$. If this condition is not satisfied, we can orthonormalize the operator set without changing the feasible region. Given the observable set $\mathcal{O}$ with $M$ observables and the measurement results \begin{equation} r_{\rho,i}=\tr(O_i\rho), \quad i=1,\cdots, M, \end{equation} the quantum state is restricted to the feasible region defined as \begin{equation} F_\mathcal{O}(\rho)=\Bqty{\sigma\in\mathcal{D}(\mathcal{H}_d)|\tr(O_i\sigma)=r_{\rho,i},i=1,\cdots, M}. \end{equation} If the feasible region is disjoint from SEP, then the entanglement is successfully detected by $\mathcal{O}$. Therefore, the detection capability of $\mathcal{O}$ is defined as \begin{equation} \mathcal{C}^s_k(\mathcal{O})=\Pr_{\rho\sim\pi_{d,k}}\bqty{F_{\mathcal{O}}(\rho)\cap \mathrm{SEP}=\varnothing}. \end{equation} To facilitate the proof, we extend the definition of $F_{\mathcal{O}}(\rho)$ from density matrices to Hermitian matrices: \begin{equation} F_{\mathcal{O}}'(\rho)=\Bqty{\sigma|\tr(O_i\sigma)=r_{\rho,i},\sigma^\dagger=\sigma}. \end{equation} It is easy to prove that $F_{\mathcal{O}}(\rho)\cap \mathrm{SEP}=\varnothing$ if and only if $F_{\mathcal{O}}'(\rho)\cap \mathrm{SEP}=\varnothing$, since SEP is contained in the set of density matrices. By definition, SEP and $F_{\mathcal{O}}'(\rho)$ are both convex sets. Hence, by the hyperplane separation theorem, we can find a Hermitian operator $W$ that separates SEP and $F_{\mathcal{O}}'(\rho)$, \begin{equation} \exists W: \tr(W\sigma)< 0, \forall \sigma\in F_{\mathcal{O}}'(\rho)\text{ and }\tr(W\sigma')\geq 0, \forall \sigma'\in \mathrm{SEP}\label{proof of theorem3:step1}, \end{equation} which is also an EW separating SEP and $F_{\mathcal{O}}(\rho)$. It can be proved that $W$ must have the form \begin{equation} W=\sum_{i=1}^{M} \theta_i O_i.
\end{equation} If not, suppose $W=\sum_i \theta_i O_i+\tilde{O}$, where $\tilde{O}\neq 0$ is orthogonal to each $O_i\in\mathcal{O}$. Then for any $\sigma\in F_{\mathcal{O}}'(\rho)$, we have $\sigma+C\tilde{O}\in F_{\mathcal{O}}'(\rho)$, where $C$ is an arbitrary real number. In this scenario, $\tr\left((\sigma+C\tilde{O})W\right)=\tr(\sigma W)+C\tr(\tilde{O}^2)$ can be made arbitrarily large, which contradicts the requirement \eqref{proof of theorem3:step1}. Accordingly, the entangled states that can be detected by $\mathcal{O}$ can also be detected by the following parameterized EW \begin{equation} \mathcal{M}(\vb*\theta)=\sum_{i=1}^{M} \theta_i O_i, \end{equation} where $\vb*\theta\in\Theta$ and $\Theta$ consists of all $\vb*\theta$ such that $\sum_i\theta_i^2=1$ and $\mathcal{M}(\vb*\theta)$ is a valid EW. Therefore, the detection capability of the single-copy criterion $\mathcal{O}$ is bounded by the detection capability of the parameterized EW $\mathcal{M}$, \begin{equation} \mathcal{C}^s_k(\mathcal{O})\leq \mathcal{C}^p_k(\mathcal{M}). \end{equation} Such a parameterized EW with $M$ parameters is normalized and $1$-Lipschitz, \begin{equation} \norm{\mathcal{M}(\vb*\theta)-\mathcal{M}(\vb*\theta')}^2_F=\sum_i(\theta_i-\theta_i')^2\tr(O_i^2)=\norm{\vb*\theta-\vb*\theta'}^2_2. \end{equation} By directly applying Theorem \ref{theorem:paraEWsinapp}, we have \begin{equation} \mathcal{C}^s_k(\mathcal{O})< 2e^{M\ln\frac{2\sqrt{M}d}{\epsilon}-(\sqrt{2-\epsilon}-1)^2k}, \end{equation} where $0<\epsilon<1$ is an arbitrary number. \end{proof} \subsection{Adaptive Single-Copy Measurement} The most general method to detect entanglement may take advantage of adaptive measurements. After the previous $j-1$ measurements, one can determine $O_j$ as a function of the previous measurement results. Here we consider a case where each measurement or query gives $1$ bit of information.
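To make the one-bit query model concrete, here is a small illustrative sketch (the function names are ours, not part of the protocol) of how a sign oracle combined with binary search estimates an expectation value $\tr(O\rho)$ to precision $\epsilon$ using $O(\ln\frac{1}{\epsilon})$ queries:

```python
import numpy as np

def sign_oracle(O, rho, c):
    """One-bit query: +1 if tr((O - c*I) rho) >= 0, else -1."""
    return 1 if np.trace((O - c * np.eye(len(O))) @ rho).real >= 0 else -1

def estimate_expectation(O, rho, eps, lo=-1.0, hi=1.0):
    """Binary-search tr(O rho), assumed to lie in [lo, hi], down to precision eps.
    Uses ceil(log2((hi - lo) / eps)) one-bit queries."""
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if sign_oracle(O, rho, mid) == 1:
            lo = mid   # tr(O rho) >= mid
        else:
            hi = mid
    return (lo + hi) / 2, queries

# Example: tr(Z rho) = 0.5 for rho = diag(0.75, 0.25)
Z = np.diag([1.0, -1.0])
rho = np.diag([0.75, 0.25])
est, q = estimate_expectation(Z, rho, eps=1e-3)  # 11 = ceil(log2(2/1e-3)) queries
```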
\begin{definition}[Measurement with $1$ Bit of Information] The measurement can be viewed as a quantum oracle: given an observable $O$, the oracle outputs $\mathrm{sign}(\tr(O\rho))$, namely $+1$ if $\tr(O\rho)\geq 0$ and $-1$ if $\tr(O\rho)< 0$. \end{definition} To determine whether $\tr(O\rho)\geq c$, one may simply replace $O$ by $O-c\mathbb{I}$. To determine any observable up to $\epsilon$ precision, one may use a binary search with $O(\ln \frac{1}{\epsilon})$ queries. Next, we define the most general adaptive single-copy measurement, where observables may depend on previous results. Formally, we define: \begin{definition}[Adaptive Single-Copy Protocols] An adaptive single-copy entanglement detection protocol with finite precision contains a program $\mathcal{P}$ that can generate an observable based on the previous results. More specifically, after the previous $j-1$ measurements, one gets the measurement results $(k_1,\ldots,k_{j-1})\in\{-1,+1\}^{j-1}$. Based on these results, the program generates $O_j=f_j(k_1,\ldots,k_{j-1})$.
After $M$ iterations, one gets the following equations: \begin{equation} \mathrm{sign}(\tr(O_i\rho))=k_i\in\{-1,+1\},\quad\forall i=1,\cdots, M. \end{equation} We can still define the feasible set \begin{equation} F_\mathcal{P}(\rho)=F_\mathcal{K}(k_1,\cdots, k_M)=\Bqty{\sigma\in\mathcal{D}(\mathcal{H}_d)|\mathrm{sign}(\tr(O_i\sigma))=k_i,i=1,\cdots,M}, \end{equation} and the detection capability is similarly defined as \begin{equation} \mathcal{C}_k^s(\mathcal{P})=\Pr_{\rho\sim\pi_{d,k}}\bqty{F_{\mathcal{P}}(\rho)\cap \mathrm{SEP}=\varnothing}. \end{equation} \end{definition} Using the measurement outcomes $\vb* k=(k_1,\cdots, k_M)\in\{-1,+1\}^{M}$, we can rewrite the previous definition: \begin{equation} F_{\mathcal{P}}(\rho)\cap \mathrm{SEP}=\varnothing\Longleftrightarrow \exists \vb* k: \rho\in F_\mathcal{K}(\vb* k) \text{ and } F_{\mathcal{K}}(\vb* k)\cap \mathrm{SEP}=\varnothing, \end{equation} so that \begin{equation} \begin{split} \mathcal{C}_k^s(\mathcal{P})&=\Pr_{\rho\sim\pi_{d,k}}\bqty{\exists \vb* k: \rho\in F_\mathcal{K}(\vb* k) \text{ and } F_{\mathcal{K}}(\vb* k)\cap \mathrm{SEP}=\varnothing}\\&=\sum_{\vb* k} \Pr_{\rho\sim\pi_{d,k}}\bqty{\rho\in F_\mathcal{K}(\vb* k) \text{ and } F_{\mathcal{K}}(\vb* k)\cap \mathrm{SEP}=\varnothing}. \end{split} \end{equation} Notice that for any $\vb* k$, $F_\mathcal{K}(\vb* k)$ is a convex set. So if $F_{\mathcal{K}}(\vb* k)\cap \mathrm{SEP}=\varnothing$, then by the hyperplane separation theorem, there exists an EW $W$ such that \begin{equation} \tr(W\sigma)< 0, \forall \sigma\in F_\mathcal{K}(\vb* k)\text{ and }\tr(W\sigma')\geq 0, \forall \sigma'\in \mathrm{SEP}. \end{equation} According to Theorem \ref{theorem:singleEW}, each term in the summation is bounded by $2e^{-(3-2\sqrt{2})k}$. Since there are a total of $2^M$ different terms in the summation, \begin{equation} \mathcal{C}_k^s(\mathcal{P})\leq 2^{M+1}e^{-(3-2\sqrt{2})k}=2e^{M\ln2-(3-2\sqrt{2})k}.
\end{equation} So the detection capability of any adaptive single-copy method also suffers from exponential decay. \subsection{Details of Figure 3} In Fig.~3 of the main text, we use four entanglement criteria to demonstrate our conclusion for single-copy criteria. We explicitly list them here. Suppose the state $\rho_{AB}$ we consider is bipartite with subsystems $A$ and $B$. \begin{enumerate} \item Purity \cite{GUHNE2009detection}: \begin{equation} \forall \rho_{AB}\in \mathrm{SEP}:\tr(\rho_{AB}^2)\le \tr(\rho_A^2). \end{equation} \item Fisher Information \cite{Zhang_2020}: \begin{equation} \forall \rho_{AB}\in \mathrm{SEP}: F(\rho, A \otimes I+I \otimes B) \leq \Delta(A \otimes I-I \otimes B)_{\rho}^{2}, \end{equation} where \begin{equation} F(\rho, A)=\sum_{k, l} \frac{(\lambda_k-\lambda_l)^2}{2\left(\lambda_{k}+\lambda_{l}\right)}\abs{\mel{k}{A}{l}}^2, \end{equation} \begin{equation} \rho=\sum_k \lambda_k\ketbra{k}, \end{equation} and \begin{equation} \Delta(A)_{\rho}^{2}=\expval{A^2}_\rho-\expval{A}^2_\rho. \end{equation} Since this criterion holds for any observables $A$ and $B$, we randomly choose $10$ different $A$s and $B$s to build a series of criteria. If any of them is violated, the state is classified as entangled. \item $M_4$ \cite{liu2022detecting}: \begin{equation} \forall \rho_{AB}\in \mathrm{SEP}: E_{4}(\rho_{AB}-\rho_A\otimes\rho_B)\le \sqrt{(1-\tr(\rho_A^2))(1-\tr(\rho_B^2))}, \end{equation} where $E_{4}(\rho)=\sqrt{\frac{q(q M_2+U)}{q+1}}+\sqrt{\frac{M_2-U}{q+1}}$, $q=\lfloor\frac{M_2^2}{M_4}\rfloor$, $U=\sqrt{q(q+1)M_4-qM_2^2}$, $M_2=\tr(\rho_{AB}^2)$, $M_4=\tr[(\mathbb{S}_A^{(1,2)}\otimes\mathbb{S}_A^{(3,4)}\otimes\mathbb{S}_B^{(2,3)}\otimes\mathbb{S}_B^{(4,1)})\rho_{AB}^{\otimes 4}]$, and $\mathbb{S}_A^{(i,j)}$ is the SWAP operator acting on the $i$-th and $j$-th copies of subsystem $A$.
\item $D_{3,\mathrm{opt}}$ \cite{yu2021optimal,neven2021symmetry}: \begin{equation} \forall \rho_{AB}\in \mathrm{SEP}: \beta x^3+(1-\beta x)^3\le\tr\left((\rho_{AB}^{T_B})^3\right), \end{equation} where $\beta=\lfloor\frac{1}{\tr(\rho_{AB}^2)}\rfloor$ and $x=\frac{\beta+\sqrt{\beta\left((\beta+1)\tr(\rho_{AB}^2)-1\right)}}{\beta(\beta+1)}$. \end{enumerate} \subsection{More Numerical Experiments} \subsubsection{Relationship between the threshold $k_{th}$ and $d$} In Fig.~\ref{smfig:3criteria}, we show the detection capability of the purity, $M_4$, and $D_{3,\mathrm{opt}}$ criteria. All the curves have two regimes: constant, and exponentially decaying with $k$. Denote by $k_{th}$ the turning point between these two regimes, beyond which the criterion becomes ineffective. For different dimensions $d$, it is interesting to study the threshold $k_{th}$ for different criteria. From the figure, the thresholds $k_{th}$ for purity, $M_4$, and $D_{3,\mathrm{opt}}$ are approximately $\sqrt{d}$, $0.6d$, and $d$, respectively. These polynomial relations show that an exponential number of observables are needed to verify these criteria with only single-copy observables. In fact, all three criteria require at least $\Omega(k_{th}/\ln k_{th})$ observables with the best-known randomized measurement schemes. This is consistent with Theorem \ref{theorem:singlecopy_inapp}. \begin{figure} \caption{The detection capability of the purity, $M_4$, and $D_{3,\mathrm{opt}}$ criteria.} \label{smfig:3criteria} \end{figure} \subsubsection{Numerical experiments on random thermal states} In the proofs of the theorems, we assume the distribution $\pi_{d,k}$. Obviously, the results cannot hold for all distributions. For example, if the states are distributed only around a particular maximally entangled state, we can easily design an effective EW to witness these states. In this case, we already assume substantial prior information about the states.
Without such strong prior information, the states are more evenly distributed over the state space. Then, if the state distribution is approximately symmetric around the maximally mixed state, the theorems should also hold. Here, we present another typical state distribution as an example and leave detailed studies for future work. We numerically examine the detection capability of the three criteria on random thermal states in Fig.~\ref{smfig:thermal}. The detection capability also suffers from exponential decay after a constant period. As $T$ increases, the purity of the states decreases, just as in the case when $k$ increases in the $\pi_{d,k}$ distribution. This is compatible with the theorems. \begin{figure} \caption{The detection capability of the purity, $M_4$, and $D_{3,\mathrm{opt}}$ criteria on random thermal states.} \label{smfig:thermal} \end{figure} \end{document}
\begin{document} \title{\sf{Symplectic Geometry}\\ \vspace*{2ex} {\normalsize overview written for the {\em Handbook of Differential Geometry}, vol.~2}\\ {\normalsize (F.J.E.\ Dillen and L.C.A.\ Verstraelen, eds.)}\\ \vspace*{5ex}} \author{\sf{Ana Cannas da Silva}\thanks{E-mail: {\tt [email protected]} or {\tt [email protected]}}} \date{\sf{September 2004}} \maketitle \def\sf{\textbf{Contents}}{\sf{\textbf{Contents}}} \addtocontents{toc}{\protect } \tableofcontents \pagestyle{headings} \sffamily{ \section*{\sf{\textbf{Introduction}}} \label{introduction} \markboth{\sf{INTRODUCTION}}{\sf{INTRODUCTION}} \addcontentsline{toc}{section}{\sf{\textbf{Introduction}}} \thispagestyle{empty} This is an overview of symplectic geometry\footnote{The word {\em symplectic} in mathematics was coined in the late 1930's by Weyl~\cite[p.165]{we:classical} who substituted the Latin root in {\em complex} by the corresponding Greek root in order to label the symplectic group (first studied by Abel). An English dictionary is likely to list {\em symplectic} as the name for a bone in a fish's head.} -- the geometry of {\em symplectic manifolds}. From a language for classical mechanics in the XVIII century, symplectic geometry has matured since the 1960's to a rich and central branch of differential geometry and topology. A current survey can thus only aspire to give a partial flavor of this exciting field. The following six topics have been chosen for this handbook: \vspace*{2ex} \textbf{1. Symplectic manifolds} are manifolds equipped with {\em symplectic forms}. A symplectic form is a closed nondegenerate 2-form. The algebraic condition (nondegeneracy) says that the top exterior power of a symplectic form is a volume form, therefore symplectic manifolds are necessarily even-dimensional and orientable.
The analytical condition (closedness) is a natural differential equation that forces all symplectic manifolds to be locally indistinguishable: they all locally look like an even-dimensional euclidean space equipped with the $\sum dx_i \wedge dy_i$ symplectic form. All cotangent bundles admit canonical symplectic forms, a fact relevant for analysis of differential operators, dynamical systems, classical mechanics, etc. Basic properties, major classical examples, equivalence notions, local normal forms of symplectic manifolds and symplectic submanifolds are discussed in Chapter~\ref{section1}. \vspace*{2ex} \textbf{2. Lagrangian submanifolds}\footnote{The name {\em lagrangian manifold} was introduced by Maslov~\cite{ma:perturbation} in the 1960's, followed by {\em lagrangian plane}, etc., introduced by Arnold~\cite{ar:mathematical}.} are submanifolds of symplectic manifolds of half dimension where the restriction of the symplectic form vanishes identically. By the {\em lagrangian creed}~\cite{we:lectures}, everything is a lagrangian submanifold, starting with closed 1-forms, real functions modulo constants and symplectomorphisms (diffeomorphisms that respect the symplectic forms). Chapter~\ref{section2} also describes normal neighborhoods of lagrangian submanifolds with applications. \vspace*{2ex} \textbf{3. Complex structures} or almost complex structures abound in symplectic geometry: any symplectic manifold possesses almost complex structures, and even so in a {\em compatible} sense. This is the point of departure for the modern technique of studying pseudoholomorphic curves, as first proposed by Gromov~\cite{gr:pseudo}\index{Gromov ! pseudo-holomorphic curve}\index{pseudo-holomorphic curve}. K\"ahler geometry lies at the intersection of complex, riemannian and symplectic geometries, and plays a central role in these three fields.
Chapter~\ref{section3} includes the local normal form for K\"ahler manifolds and a summary of Hodge theory for K\"ahler manifolds. \vspace*{2ex} \textbf{4. Symplectic geography} is concerned with existence and uniqueness of symplectic forms on a given manifold. Important results from K\"ahler geometry remain true in the more general symplectic category, as shown using pseudoholomorphic methods. This viewpoint was more recently continued with work on the existence of certain symplectic submanifolds, in the context of Seiberg-Witten invariants, and with topological descriptions in terms of Lefschetz pencils. Both of these directions are particularly relevant to 4-dimensional topology and to mathematical physics, where symplectic manifolds occur as building blocks or as key examples. Chapter~\ref{section4} treats constructions of symplectic manifolds and invariants to distinguish them. \vspace*{2ex} \textbf{5. Hamiltonian geometry} is the geometry of symplectic manifolds equipped with a {\em moment map}, that is, with a collection of quantities conserved by symmetries. With roots in hamiltonian mechanics, moment maps became a consequential tool in geometry and topology. The notion of a moment map arises from the fact that, to any real function on a symplectic manifold, there is associated a vector field whose flow preserves the symplectic form and the given function; this is called the {\em hamiltonian vector field} of that (hamiltonian) function. The Arnold conjecture in the 60's regarding hamiltonian dynamics was a major driving force up to the establishment of Floer homology in the 80's. Chapter~\ref{section5} deals mostly with the geometry of moment maps, including the classical Legendre transform, integrable systems and convexity. \vspace*{2ex} \textbf{6. Symplectic reduction} is at the heart of many symplectic arguments.
There are infinite-dimensional analogues with amazing consequences for differential geometry, as illustrated in a symplectic approach to Yang-Mills theory. Symplectic toric manifolds provide examples of extremely symmetric symplectic manifolds that arise from symplectic reduction using just the data of a polytope. All properties of a symplectic toric manifold may be read from the corresponding polytope. There are interesting interactions with algebraic geometry, representation theory and geometric combinatorics. The variation of reduced spaces is also addressed in Chapter~\ref{section6}. \ssection{Symplectic Manifolds} \label{section1} \ssubsection{Symplectic Linear Algebra} \label{symplectic_linear_algebra} Let $V$ be a vector space over ${\mathbb R}$, and let $\Omega: V \times V \to {\mathbb R}$ be a skew-symmetric bilinear map. By a skew-symmetric version of the Gram-Schmidt process,\footnote{Let $u_1,\dots,u_k$ be a basis of $U := \{u \in V \mid \Omega(u,v) = 0, \mbox{ for all } v \in V \}$, and $W$ a complementary subspace such that $V = U \oplus W$. Take any nonzero $e_1 \in W$. There is $f_1 \in W$ with $\Omega(e_1,f_1) = 1$. Let $W_1$ be the span of $e_1,f_1$ and $W_1^\Omega := \{ v \in V \, | \, \Omega (v,u) = 0 \; \forall u \in W_1 \}$. Then $W = W_1 \oplus W_1^\Omega$. Take any nonzero $e_2 \in W_1^\Omega$. There is $f_2 \in W_1^\Omega$ for which $\Omega(e_2,f_2) = 1$. Let $W_2$ be the span of $e_2,f_2$, and so on.} there is a basis $u_1,\dots,u_k$, $e_1,\dots,e_n$, $f_1,\dots,f_n$ of $V$ for which $\Omega(u_i,v) = \Omega(e_i,e_j) = \Omega(f_i,f_j) = 0$ and $\Omega(e_i,f_j) = \delta_{ij}$ for all $i,j$ and all $v \in V$. Although such a basis is not unique, it is commonly referred to as a \textbf{canonical basis}. The dimension $k$ of the subspace $U = \{u \in V \mid \Omega(u,v) = 0, \mbox{ for all } v \in V \}$ is an invariant of the pair $(V,\Omega)$.
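As a simple illustration of this construction (our example, added for concreteness): on $V = {\mathbb R}^3$ with the skew-symmetric form $\Omega(u,v) = u_1 v_2 - u_2 v_1$ in the standard basis, the subspace $U$ is spanned by $u_1 = (0,0,1)$, so $k = 1$. Taking $W$ to be the span of the first two standard vectors, the process picks $e_1 = (1,0,0)$ and then $f_1 = (0,1,0)$, since $\Omega(e_1,f_1) = 1$. The resulting canonical basis is $u_1, e_1, f_1$, with $\dim V = k + 2n$ for $k = 1$ and $2n = 2$.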
Since $k + 2n = \dim V$, the even number $2n$ is also an invariant of $(V,\Omega)$, called the \textbf{rank} of $\Omega$.\index{rank}\index{skew-symmetric bilinear map ! rank} We denote by ${\widetilde \Omega}: V \to V^*$ the linear map defined by ${\widetilde \Omega}(v)(u) := \Omega(v,u)$. We say that $\Omega$ is \textbf{symplectic}\index{skew-symmetric bilinear map ! symplectic}\index{symplectic ! bilinear map}\index{symplectic ! linear symplectic structure} (or \textbf{nondegenerate}\index{skew-symmetric bilinear map ! nondegenerate}\index{nondegenerate ! bilinear map}) if the associated ${\widetilde \Omega}$ is bijective (i.e., the kernel $U$ of ${\widetilde \Omega}$ is the trivial space $\{0\}$). In that case, the map $\Omega$ is called a \textbf{linear symplectic structure} on $V$, and the pair $(V,\Omega)$ is called a \textbf{symplectic vector space}.\index{symplectic ! vector space}\index{vector space ! symplectic} A linear symplectic structure $\Omega$\index{symplectic ! properties of linear symplectic structures} expresses a {\em duality} by the bijection ${\widetilde \Omega}: V \stackrel{\simeq}{\longrightarrow} V^*$,\index{symplectic ! duality} similar to the (symmetric) case of an inner product. By considering a canonical basis, we see that the dimension of a symplectic vector space $(V,\Omega)$ {\em must be even}, $\dim V = 2n$, and that $V$ admits a basis $e_1,\dots,e_n,f_1,\dots,f_n$ satisfying $\Omega(e_i,f_j) = \delta_{ij}$ and $\Omega(e_i,e_j) = 0 = \Omega(f_i,f_j)$. Such a basis is then called a \textbf{symplectic basis} of $(V,\Omega)$,\index{symplectic ! basis} and, in terms of exterior algebra, $\Omega = e_1^* \wedge f_1^* + \ldots + e_n^* \wedge f_n^*$, where $e_1^*, \ldots, e_n^*, f_1^*, \ldots, f_n^*$ is the dual basis. With respect to a symplectic basis, the map $\Omega$ is represented by the matrix \[ \left[ \begin{array}{cc} 0 & \mbox{Id} \\ -\mbox{Id} & 0 \end{array} \right] \ .
\] \begin{examples} \begin{enumerate} \item The \textbf{prototype of a symplectic vector space} is $({\mathbb R}^{2n},\Omega_0)$ with $\Omega_0$ such that the canonical basis $e_1=(1,0,\ldots,0), \ldots, e_n, f_1, \ldots, f_n=(0,\ldots,0,1)$ is a symplectic basis. Bilinearity then determines $\Omega_0$ on other vectors. \item For any real vector space $E$, the direct sum $V = E \oplus E^*$ has a \textbf{canonical symplectic structure} determined by the formula $\Omega_0 (u \oplus \alpha, v \oplus \beta) = \beta (u) - \alpha (v)$. If $e_1, \ldots ,e_n$ is a basis of $E$, and $f_1, \ldots ,f_n$ is the dual basis, then $e_1 \oplus 0, \ldots ,e_n \oplus 0, 0 \oplus f_1, \ldots ,0 \oplus f_n$ is a symplectic basis for $V$. \end{enumerate} \end{examples} Given a linear subspace $W$ of a symplectic vector space $(V, \Omega)$, its \textbf{symplectic orthogonal}\index{symplectic ! orthogonal} is the subspace $W^\Omega := \{ v \in V \, | \, \Omega (v,u) = 0 \; \mbox{for all } u \in W \}$. By nondegeneracy, we have $\dim W + \dim W^\Omega = \dim V$ and $(W^\Omega)^\Omega = W$. For subspaces $W$ and $Y$, we have $(W \cap Y)^\Omega = W^\Omega + Y^\Omega$, and if $W \subseteq Y$ then $Y^\Omega \subseteq W^\Omega$. There are special types of linear subspaces of a symplectic vector space $(V,\Omega)$. A subspace $W$ is a \textbf{symplectic subspace} if the restriction $\Omega|_W$ is nondegenerate, that is, $W \cap W^\Omega = \{ 0 \}$, or equivalently $V = W \oplus W^\Omega$.\index{subspace ! symplectic} A subspace $W$ is an \textbf{isotropic subspace} if $\Omega|_W \equiv 0$, that is, $W \subseteq W^\Omega$.\index{subspace ! isotropic} A subspace $W$ is a \textbf{coisotropic subspace} if $W^\Omega \subseteq W$.\index{coisotropic ! subspace}\index{subspace ! 
coisotropic} A subspace $W$ is a \textbf{lagrangian subspace} if it is both isotropic and coisotropic, or equivalently, if it is an isotropic subspace with $\dim W = {1 \over 2} \dim V$.\index{lagrangian subspace}\index{subspace ! lagrangian} A basis $e_1, \ldots ,e_n$ of a lagrangian subspace can be extended to a symplectic basis: choose $f_1$ in the symplectic orthogonal to the linear span of $\{e_2, \ldots ,e_n \}$, etc. \begin{examples} \begin{enumerate} \item For a symplectic basis as above, the span of $e_1,f_1$ is symplectic, that of $e_1,e_2$ isotropic, that of $e_1,\dots,e_n,f_1$ coisotropic, and that of $e_1,\dots,e_n$ lagrangian. \item The graph of a linear map $A : E \to E^*$ is a lagrangian subspace of $E \oplus E^*$ with the canonical symplectic structure if and only if $A$ is symmetric (i.e., $(Au)v = (Av)u$). Therefore, the grassmannian of all lagrangian subspaces in a $2n$-dimensional symplectic vector space has dimension $\frac{n(n+1)}{2}$. \end{enumerate} \end{examples} A \textbf{symplectomorphism}\index{symplectomorphism ! linear} $\varphi$ between symplectic vector spaces $(V,\Omega)$ and $(V',\Omega')$ is a linear isomorphism $\varphi: V \stackrel{\simeq}{\longrightarrow} V'$ such that $\varphi^*\Omega' = \Omega$.\footnote{By definition, $(\varphi^*\Omega')(u,v) = \Omega'(\varphi(u),\varphi(v))$.} If a symplectomorphism exists, $(V,\Omega)$ and $(V',\Omega')$ are said to be \textbf{symplectomorphic}\index{symplectomorphic}. Being symplectomorphic is clearly an equivalence relation in the set of all even-dimensional vector spaces. The existence of canonical bases shows that every $2n$-dimensional symplectic vector space $(V,\Omega)$ is symplectomorphic to the prototype $({\mathbb R}^{2n},\Omega_0)$; a choice of a symplectic basis for $(V,\Omega)$ yields a symplectomorphism to $({\mathbb R}^{2n},\Omega_0)$. Hence, nonnegative even integers classify equivalence classes for the relation of being symplectomorphic.
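To see this in the smallest case (a standard computation, added here for illustration): with respect to a symplectic basis, $\Omega$ is represented by the matrix $J = \left[\begin{smallmatrix} 0 & \mathrm{Id} \\ -\mathrm{Id} & 0 \end{smallmatrix}\right]$, and a linear isomorphism with matrix $M$ is a symplectomorphism exactly when $M^T J M = J$. For $n = 1$, writing $M = \left[\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right]$, one computes $M^T J M = (ad - bc)\, J$, so the linear symplectomorphisms of $({\mathbb R}^2,\Omega_0)$ are precisely the matrices of determinant $1$; that is, $\mathrm{Sp}(2,{\mathbb R}) = \mathrm{SL}(2,{\mathbb R})$.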
Let $\Omega(V)$ be the space of all linear symplectic structures on the vector space $V$. Take $\Omega \in \Omega(V)$, and let $\mathrm{Sp} (V,\Omega)$ be the \textbf{group of symplectomorphisms}\index{group of symplectomorphisms} of $(V,\Omega)$. The group $\mathrm{GL}(V)$ of all isomorphisms of $V$ acts {\em transitively} on $\Omega(V)$ by pullback (i.e., all symplectic structures are related by a linear isomorphism), and $\mathrm{Sp} (V,\Omega)$ is the stabilizer of the given $\Omega$. Hence, $\Omega(V) \simeq \mathrm{GL} (V) / \mathrm{Sp} (V,\Omega)$. \ssubsection{Symplectic Forms} \label{symplectic_forms} \index{symplectic ! form} Let $\omega$ be a de Rham 2-form on a manifold\footnote{Unless otherwise indicated, all vector spaces are real and finite-dimensional, all maps are smooth (i.e., $C^\infty$) and all manifolds are smooth, Hausdorff and second countable.} $M$. For each point $p \in M$, the map $\omega_p:T_pM \times T_pM \rightarrow {\mathbb R}$ is skew-symmetric and bilinear on the tangent space to $M$ at $p$, and $\omega_p$ varies smoothly in $p$. \begin{definition} The 2-form $\omega$ is \textbf{symplectic}\index{symplectic ! form}\index{form ! symplectic} if $\omega$ is closed (i.e., its exterior derivative $d \omega$ is zero) and $\omega_p$ is symplectic for all $p \in M$. A \textbf{symplectic manifold}\index{symplectic ! manifold}\index{manifold ! symplectic} is a pair $(M, \omega)$ where $M$ is a manifold and $\omega$ is a symplectic form. \end{definition} Symplectic manifolds must be {\em even-dimensional}. Moreover, the $n$th exterior power $\omega^n$ of a symplectic form $\omega$ on a $2n$-dimensional manifold is a {\em volume form}\index{volume}.\footnote{A \textbf{volume form} is a nonvanishing form of top degree. If $\Omega$ is a symplectic structure on a vector space $V$ of dimension $2n$, its $n$th exterior power $\Omega^n = \Omega \wedge \ldots \wedge \Omega$ does not vanish.
Actually, a skew-symmetric bilinear map $\Omega$ is symplectic if and only if $\Omega^n \neq 0$.} Hence, any symplectic manifold $(M,\omega)$ is {\em canonically oriented}. The form $\frac{\omega^n}{n!}$ is called the \textbf{symplectic volume}\index{symplectic ! volume}\index{volume} or \textbf{Liouville volume}\index{Liouville ! volume}\index{volume ! Liouville} of $(M,\omega)$. When $(M,\omega)$ is a {\em compact} $2n$-dimensional symplectic manifold, the de Rham cohomology\index{de Rham cohomology}\index{cohomology ! de Rham} class $[\omega ^n] \in H^{2n} (M;{\mathbb R})$ must be non-zero by Stokes theorem\index{Stokes theorem}\index{theorem ! Stokes}. Therefore, the class $[\omega]$ must be non-zero, as well as its powers $[\omega]^k = [\omega^k] \neq 0$. {\em Exact symplectic forms} can only exist on noncompact manifolds. Compact manifolds with a trivial even cohomology group $H^{2k} (M;{\mathbb R})$, $k = 1,\ldots,n$, such as spheres $S^{2n}$ with $n > 1$, can thus never be symplectic. On a manifold of dimension greater than 2, a function multiple $f \omega$ of a symplectic form $\omega$ is symplectic if and only if $f$ is a nonzero locally constant function (this follows from the existence of a symplectic basis). \begin{examples} \begin{enumerate} \index{example ! of symplectic manifold} \item Let $M = {\mathbb R}^{2n}$ with linear coordinates $x_1,\dots,x_n,y_1,\dots,y_n$. The form \[ \omega_0 = \sum \limits_{i=1}^n dx_i \wedge dy_i \] is symplectic, and the vectors $\left( \frac {\partial}{\partial x_1} \right)_p,\dots,\left( \frac {\partial}{\partial x_n} \right)_p, \left( \frac {\partial}{\partial y_1} \right)_p,\dots,\left( \frac {\partial}{\partial y_n} \right)_p$ constitute a symplectic basis of $T_pM$. \vspace*{-1ex} \item Let $M = {\mathbb C}^{n}$ with coordinates $z_1,\dots,z_n$. The form $\omega_0 = \frac i2 \sum dz_k \wedge d\bar z_k$ is symplectic.
In fact, this form coincides with that of the previous example under the identification ${\mathbb C}^{n} \simeq {\mathbb R}^{2n}$, $z_k = x_k + iy_k$. \vspace*{-1ex} \item The 2-sphere $S^2$, regarded as the set of unit vectors in ${\mathbb R}^3$, has tangent vectors at $p$ identified with vectors orthogonal to $p$. The standard symplectic form on $S^2$ is induced by the standard inner (dot) and exterior (vector) products: $\omega_p (u,v) := \langle p, u \times v \rangle$, for $u,v \in T_p S^2 = \{ p \} ^\perp$. This is the standard area form on $S^2$ with total area $4\pi$. In terms of cylindrical polar coordinates $0 \leq \theta < 2\pi$ and $-1 \leq z \leq 1$ away from the poles, it is written $\omega = d \theta \wedge dz$. \vspace*{-1ex} \item On any Riemann surface, regarded as a 2-dimensional oriented manifold, any area form, that is, any never vanishing 2-form, is a symplectic form. \vspace*{-1ex} \item Products of symplectic manifolds are naturally symplectic by taking the sum of the pullbacks of the symplectic forms from the factors. \vspace*{-1ex} \item If a $(2n+1)$-dimensional manifold $X$ admits a \textbf{contact form}\index{contact form}, that is, a 1-form $\alpha$ such that $\alpha \wedge (d \alpha)^n$ is never vanishing, then the 2-form $d(e^t \alpha)$ is symplectic on $X \times {\mathbb R}$, and the symplectic manifold $(X \times {\mathbb R}, d(e^t \alpha))$ is called the \textbf{symplectization}\index{symplectization} of the {\em contact manifold} $(X , \alpha)$. For more on {\em contact geometry}\index{contact geometry}, see for instance the corresponding contribution in this volume. \end{enumerate} \end{examples} \vspace*{-2ex} \begin{definition} Let $(M_1,\omega_1)$ and $(M_2,\omega_2)$ be symplectic manifolds.
A (smooth) map $\psi:M_1\to M_2$ is \textbf{symplectic}\index{symplectic map} if $\psi^{*}\omega_2=\omega_1$.\footnote{By definition of \textbf{pullback}\index{pullback}, we have $(\psi^{*}\omega_2)_p (u,v) = (\omega_2)_{\psi(p)} (d\psi_p (u),d\psi_p (v))$, at tangent vectors $u,v \in T_p M_1$.} A symplectic diffeomorphism $\varphi:M_1\to M_2$ is a \textbf{symplectomorphism}.\index{symplectomorphism ! definition} $(M_1,\omega_1)$ and $(M_2,\omega_2)$ are said to be \textbf{symplectomorphic}\index{symplectomorphic} when there exists a symplectomorphism between them.
\end{definition}
The classification of symplectic manifolds up to symplectomorphism is an open problem in symplectic geometry. However, the local classification is taken care of by the {\em Darboux theorem} (Theorem~\ref{thm:darboux})\index{theorem ! Darboux}\index{Darboux ! theorem}: the dimension is the only local invariant of symplectic manifolds up to symplectomorphisms. That is, just as any $n$-dimensional manifold is locally diffeomorphic to ${\mathbb R}^n$, any symplectic manifold $(M^{2n},\omega)$ is locally symplectomorphic to $({{\mathbb R}}^{2n},\omega_{0})$. As a consequence, if we prove for $({\mathbb R}^{2n},\omega_{0})$ a local assertion that is invariant under symplectomorphisms, then that assertion holds for any symplectic manifold. We will hence refer to ${{\mathbb R}}^{2n}$, with linear coordinates $(x_1,\ldots,x_n,y_1,\ldots, y_n)$, and with symplectic form $\omega_0=\sum_{i=1}^n dx_i\wedge dy_i$, as the \textbf{prototype of a local piece of a $2n$-dimensional symplectic manifold}.
\ssubsection{Cotangent Bundles}
\label{cotangent_bundles}
Cotangent bundles are major examples of symplectic manifolds.
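The properties of the prototype form $\omega_0$ can be verified symbolically. The following sketch (not from the original text; written in Python with sympy, all names ours) checks that the matrix of $\omega_0$ in the basis $(\partial/\partial x_1,\ldots,\partial/\partial x_n,\partial/\partial y_1,\ldots,\partial/\partial y_n)$ is skew-symmetric with determinant $1$, hence nondegenerate; closedness is automatic since the coefficients of $\omega_0$ are constant.

```python
import sympy as sp

n = 3  # dimension parameter for the check; any n would do
I = sp.eye(n)
Z = sp.zeros(n, n)

# Matrix of omega_0 in the basis (d/dx_1..d/dx_n, d/dy_1..d/dy_n):
# omega_0(d/dx_i, d/dy_j) = delta_ij, and all other pairings vanish.
Omega0 = sp.BlockMatrix([[Z, I], [-I, Z]]).as_explicit()

assert Omega0.T == -Omega0   # skew-symmetric
assert Omega0.det() == 1     # nondegenerate: omega_0^n is a volume form
# d(omega_0) = 0 holds trivially: the coefficients are constants.
```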
Let $({\mathcal U},x_1,\ldots,x_n)$ be a coordinate chart for a manifold $X$, with associated cotangent coordinates $(T^* {\mathcal U},x_1,\ldots, x_n,\xi_1,\ldots,\xi_n)$.\footnote{If an $n$-dimensional manifold $X$ is described by coordinate charts $({\mathcal U},x_1,\ldots,x_n)$ with $x_i: {\mathcal U} \to {\mathbb R}$, then, at any $x \in {\mathcal U}$, the differentials $(dx_i)_x$ form a basis of $T_x^*X$, inducing a map
\[
\begin{array}{rcl}
T^* {\mathcal U} & \longrightarrow & {\mathbb R}^{2n} \\
(x, \xi) & \longmapsto & (x_1, \ldots , x_n, \xi_1, \ldots, \xi_n)\ ,
\end{array}
\]
where $\xi_1, \ldots, \xi_n \in {\mathbb R}$ are the corresponding coordinates of $\xi \in T_x^*X$: $\xi = \sum_{i=1}^n \xi_i (dx_i)_x$. Then $(T^* {\mathcal U},x_1,\ldots,x_n, \xi_1, \ldots, \xi_n)$ is a coordinate chart for the cotangent bundle $T^*X$; the coordinates $x_1,\ldots,x_n, \xi_1, \ldots, \xi_n$ are called the \textbf{cotangent coordinates}\index{cotangent bundle ! coordinates} associated to the coordinates $x_1,\ldots,x_n$ on ${\mathcal U}$. One verifies that the transition functions on the overlaps are smooth, so $T^* X$ is a $2n$-dimensional manifold.} Define a symplectic form on $T^* {\mathcal U}$ by \index{form ! canonical} \index{canonical form on $T^*X$ ! coordinate definition}
\[
\omega = \sum \limits_{i=1}^n dx_i \wedge d\xi_i \ .
\]
One can check that this $\omega$ is intrinsically defined by considering the 1-form on $T^* {\mathcal U}$ \index{form ! tautological}\index{form ! canonical}\index{canonical form on $T^*X$ ! intrinsic definition} \index{tautological form on $T^*X$ ! coordinate definition}
\[
\alpha=\sum\limits_{i=1}^n \xi_i \ dx_i
\]
which satisfies $\omega=-d\alpha$ and is coordinate-independent: in terms of the natural projection $\pi: M \to X$, $p=(x,\xi) \mapsto x$, the form $\alpha$\index{form ! tautological} \index{tautological form on $T^*X$ !
intrinsic definition} may be equivalently defined pointwise without coordinates by
\[
\alpha_{p}=(d\pi_{p})^* \xi \quad \in T_p ^* M\ ,
\]
where $(d\pi_{p})^*: T_x^*X \to T_p ^*M$ is the transpose of $d\pi_{p}$, that is, $\alpha_{p}(v) = \xi ( (d\pi_{p})v )$ for $v\in T_{p}M$. Alternatively, the form $\alpha$ is uniquely characterized by the property that $\mu^* \alpha = \mu$ for every 1-form $\mu: X \to T^* X$\index{tautological form on $T^*X$ ! property} (see Proposition~\ref{prop:tautological_property}). The 1-form $\alpha$ is the \textbf{tautological form} (or the \textbf{Liouville 1-form}\index{Liouville 1-form}) and the 2-form $\omega$ is the \textbf{canonical symplectic form}\index{form ! tautological}\index{tautological form on $T^*X$ ! coordinate definition}\index{form ! canonical}\index{canonical form on $T^*X$ ! coordinate definition} on $T^*X$. When referring to a cotangent bundle as a symplectic manifold, the symplectic structure is meant to be given by this canonical $\omega$.\index{example ! of symplectic manifold}
Let $X_1$ and $X_2$ be $n$-dimensional manifolds with cotangent bundles $M_1=T^*X_1$ and $M_2=T^*X_2$, and tautological 1-forms $\alpha_1$ and $\alpha_2$. Suppose that $f:X_1\to X_2$ is a diffeomorphism. Then there is a natural diffeomorphism $f_{\sharp}:M_1\to M_2$ which \textbf{lifts}\index{lift ! of a diffeomorphism} $f$; namely, for $p_1=(x_1,\xi_1) \in M_1$ we define
\[
f_{\sharp}(p_1) =p_2 =(x_2,\xi_2) \ , \quad \mbox{ with } \left\{
\begin{array}{l}
x_2=f(x_1) \in X_2 \quad \mbox{ and } \\
\xi_1=(df_{x_1})^* \xi_2 \in T_{x_1}^* X_1 \ ,
\end{array}
\right.
\]
where $(df_{x_1})^* : T_{x_2}^* X_2 \stackrel{\simeq}{\longrightarrow} T_{x_1}^* X_1$, so $f_{\sharp}|_{T_{x_1}^* }$ is the inverse map of $(df_{x_1})^*$.
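In the one-dimensional case the defining equations of the lift can be checked by hand, and symbolically. The sketch below (not part of the text; Python with sympy, names ours) takes $X_1 = X_2 = {\mathbb R}$ with an arbitrary diffeomorphism $f$, forms the lift $(x_2,\xi_2) = (f(x_1), \xi_1/f'(x_1))$, and verifies that the pullback of $\alpha_2 = \xi_2\,dx_2$ has $dx_1$-coefficient $\xi_1$, i.e., equals $\alpha_1$:

```python
import sympy as sp

x1, xi1 = sp.symbols('x1 xi1')
f = sp.Function('f')  # an arbitrary diffeomorphism of the base (f' != 0)

# The lift: x2 = f(x1), and xi1 = f'(x1) * xi2, so xi2 = xi1 / f'(x1).
x2 = f(x1)
xi2 = xi1 / sp.diff(f(x1), x1)

# Pull back alpha_2 = xi2 dx2 along the lift: its coefficient in dx1 is
# xi2 * dx2/dx1, which should equal the coefficient xi1 of alpha_1.
pullback_coeff = sp.simplify(xi2 * sp.diff(x2, x1))

assert pullback_coeff == xi1  # (f_sharp)^* alpha_2 = alpha_1
```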
\begin{proposition}
\label{prop:diffeo_lift}
The lift $f_{\sharp}$ of a diffeomorphism $f: X_1 \rightarrow X_2$ pulls the tautological form on $T^* X_2$ back to the tautological form on $T^* X_1$, i.e., $(f_{\sharp})^* \alpha_2=\alpha_1$.
\end{proposition}
\vspace*{-2ex}
\begin{proof}
At $p_1=(x_1,\xi_1)\in M_1$, the claimed identity says $\left( d f_{\sharp} \right)^*_{p_1} (\alpha_2)_{p_2} = (\alpha_1)_{p_1}$, where $p_2=f_{\sharp}(p_1)$, that is, $p_2 = (x_2,\xi_2)$ where $x_2 = f(x_1)$ and $(df_{x_1})^* \xi_2 = \xi_1$. This can be proved as follows:
\[
\begin{array}{rclcl}
(df_{\sharp})^* _{p_1}(\alpha_2)_{p_2} & = & (df_{\sharp})^* _{p_1}(d\pi_2)^* _{p_2}\xi_2 & & \mbox{ by definition of $\alpha_2$} \\
& = & \left( d(\pi_2 \circ f_{\sharp}) \right) ^* _{p_1}\xi_2 & & \mbox{ by the chain rule} \\
& = & \left( d(f \circ \pi_1) \right) ^* _{p_1}\xi_2 & & \mbox{ because $\pi_2 \circ f_{\sharp} = f \circ \pi_1$} \\
& = & (d\pi_1)^* _{p_1}(df)^* _{x_1} \xi_2 & & \mbox{ by the chain rule} \\
& = & (d\pi_1)^* _{p_1} \xi_1 & & \mbox{ by definition of $f_{\sharp}$} \\
& = & (\alpha_1)_{p_1} & & \mbox{ by definition of $\alpha_1$} \ .
\end{array}
\]
\end{proof}
As a consequence of this naturality for the tautological form,\index{canonical form on $T^*X$ ! naturality}\index{tautological form on $T^*X$ ! naturality}\index{cotangent bundle ! canonical symplectomorphism} a diffeomorphism of manifolds induces a canonical symplectomorphism\index{canonical ! symplectomorphism}\index{symplectomorphism ! canonical}\index{cotangent bundle ! canonical symplectomorphism} of cotangent bundles:
\begin{corollary}
The lift $f_{\sharp}: T^*X_1 \to T^*X_2$ of a diffeomorphism $f: X_1 \rightarrow X_2$ is a symplectomorphism for the canonical symplectic forms, i.e., $(f_{\sharp})^* \omega_2=\omega_1$.
\end{corollary}
In terms of the group (under composition) of diffeomorphisms $\mathrm{Diff}(X)$ of a manifold $X$, and the \textbf{group of symplectomorphisms} $\mathrm{Sympl} (T^*X,\omega)$\index{symplectomorphism ! group of symplectomorphisms}\index{group ! of symplectomorphisms} of its cotangent bundle, we see that the injection $\mathrm{Diff}(X) \to \mathrm{Sympl} (T^*X,\omega)$, $f \mapsto f_{\sharp}$ is a group homomorphism. Clearly this is not surjective: for instance, consider the symplectomorphism $T^*X \to T^*X$ given by translation along cotangent fibers.
\begin{example}
Let $X_1=X_2=S^1$. Then $T^* S^1$ is a cylinder $S^1 \times {\mathbb R}$. The canonical form is the area form $\omega=d\theta\wedge d\xi$. If $f:S^1\rightarrow S^1$ is any diffeomorphism, then $f_{\sharp}: S^1 \times {\mathbb R} \rightarrow S^1 \times {\mathbb R}$ is a symplectomorphism, i.e., is an area-preserving diffeomorphism of the cylinder. Translation along the ${\mathbb R}$ direction is area-preserving but is not induced by a diffeomorphism of the base manifold $S^1$.
\end{example}
There is a criterion for which cotangent symplectomorphisms arise as lifts of diffeomorphisms in terms of the tautological form. First note the following feature of symplectic manifolds with \textbf{exact symplectic forms}. Let $\alpha$ be a 1-form on a manifold $M$ such that $\omega = - d\alpha$ is symplectic. There exists a unique vector field $v$ whose interior product with $\omega$ is $\alpha$, i.e., $\imath_v \omega = - \alpha$. If $g : M \to M$ is a symplectomorphism that preserves $\alpha$ (that is, $g^* \alpha = \alpha$), then $g$ commutes with the flow\footnote{For $p \in M$, $(\exp tv) (p) $ is the unique curve in $M$ solving the initial value problem
\[
\left\{
\begin{array}{l}
{d \over dt} (\exp tv (p)) = v (\exp tv (p)) \\
(\exp tv) (p) |_{t=0} = p
\end{array}
\right.
\]
for $t$ in some neighborhood of $0$.
The one-parameter group of diffeomorphisms $\exp tv$ is called the \textbf{flow} of the vector field $v$.} of $v$, i.e., $(\exp tv) \circ g = g \circ (\exp tv)$. When $M = T^* X$ is the cotangent bundle of an arbitrary $n$-dimensional manifold $X$, and $\alpha$ is the tautological 1-form on $M$, the vector field $v$ is just $\sum \xi_i \, {\partial \over \partial \xi_i}$ with respect to a cotangent coordinate chart $(T^* {\mathcal U},x_1,\ldots,x_n, \xi_1, \ldots, \xi_n)$. The flow $\exp tv$, $-\infty < t < \infty$, satisfies $(\exp tv) (x, \xi) = (x, e^t \xi)$, for every $(x, \xi)$ in $M$.
\begin{theorem}
A symplectomorphism $g: T^*X \to T^*X$ is a lift of a diffeomorphism $f: X \rightarrow X$ if and only if it preserves the tautological form: $g^* \alpha = \alpha$.
\end{theorem}
\vspace*{-2ex}
\begin{proof}
By Proposition~\ref{prop:diffeo_lift}, a lift $f_{\sharp} : T^*X \to T^*X$ of a diffeomorphism $f: X \rightarrow X$ preserves the tautological form. Conversely, if $g$ is a symplectomorphism of $M$ that preserves $\alpha$, then $g$ preserves the cotangent fibration: by the observation above, $g (x, \xi) = (y,\eta) \Rightarrow g (x, \lambda \xi) = (y, \lambda \eta)$ for all $(x, \xi) \in M$ and $\lambda > 0$, and this must hold also for $\lambda \leq 0$ by the differentiability of $g$ at $(x,0)$. Therefore, there exists a diffeomorphism $f: X \to X$ such that $\pi \circ g = f \circ \pi$, where $\pi : M \to X$ is the projection map $\pi (x, \xi) = x$, and $g = f_{\sharp}$.
\end{proof}
The canonical form is natural also in the following way. Given a smooth function $h: X \to {\mathbb R}$, the diffeomorphism $\tau_h$ of $M = T^*X$ defined by $\tau_h (x, \xi) = (x, \xi + dh_x)$ turns out to be always a symplectomorphism. Indeed, if $\pi : M \to X$, $\pi (x, \xi) = x$, is the projection, we have $\tau_h^* \alpha = \alpha + \pi^* dh$, so that $\tau_h^* \omega = \omega$.
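For $X = {\mathbb R}$ this fiberwise translation can be checked directly: $\tau_h(x,\xi) = (x, \xi + h'(x))$ has Jacobian determinant $1$, so it preserves the area form $\omega = dx \wedge d\xi$. A minimal symbolic sketch (not from the text; Python with sympy, names ours):

```python
import sympy as sp

x, xi = sp.symbols('x xi')
h = sp.Function('h')  # an arbitrary smooth function on the base

# Fiberwise translation tau_h(x, xi) = (x, xi + h'(x)).
X = x
Xi = xi + sp.diff(h(x), x)

# Jacobian of tau_h in the coordinates (x, xi); its determinant is the
# factor by which the area form omega = dx ^ dxi is rescaled.
J = sp.Matrix([[sp.diff(X, x),  sp.diff(X, xi)],
               [sp.diff(Xi, x), sp.diff(Xi, xi)]])

assert sp.simplify(J.det()) == 1  # tau_h preserves omega
```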
\ssubsection{Moser's Trick}
\label{sec:trick}
\index{symplectic ! equivalence}\index{Moser ! trick}
There are other relevant notions of equivalence for symplectic manifolds\footnote{Understanding these notions and the normal forms requires tools, such as isotopies (by \textbf{isotopy}\index{isotopy} we mean a smooth one-parameter family of diffeomorphisms starting at the identity, like the flow of a vector field), Lie derivative, tubular neighborhoods and the homotopy formula in de Rham theory, covered in differential geometry or differential topology texts.} besides being symplectomorphic.\index{symplectomorphic} Let $M$ be a manifold with two symplectic forms $\omega_0 , \omega_1$.
\begin{definition}
The symplectic manifolds $(M,\omega_0)$ and $(M,\omega_1)$ are \textbf{strongly isotopic}\index{strong isotopy}\index{symplectic ! strong isotopy} if there is an isotopy $\rho_t:M\to M$ such that $\rho^*_1 \omega_1=\omega_0$. $(M,\omega_0)$ and $(M,\omega_1)$ are \textbf{deformation-equivalent}\index{deformation equivalence}\index{symplectic ! deformation equivalence} if there is a smooth family $\omega_t$ of symplectic forms joining $\omega_0$ to $\omega_1$. $(M,\omega_0)$ and $(M,\omega_1)$ are \textbf{isotopic}\index{isotopy ! symplectic}\index{symplectic ! isotopy} if they are deformation-equivalent and the de Rham cohomology class\index{de Rham cohomology class} $[\omega_t]$ is independent of $t$.
\end{definition}
Hence, being strongly isotopic implies being symplectomorphic, and being isotopic implies being deformation-equivalent. We also have that being strongly isotopic implies being isotopic, because, if $\rho_t : M \to M$ is an isotopy such that $\rho_1 ^* \omega_1 = \omega_0$, then $\omega_t := \rho_t ^* \omega_1$ is a smooth family of symplectic forms joining $\omega_1$ to $\omega_0$ and $[\omega_t]=[\omega_1]$, $\forall t$, by the homotopy invariance of de Rham cohomology.
Moser~\cite{mo:volume} proved that, on a compact manifold, being isotopic implies being strongly isotopic (Theorem~\ref{thm:moser}). McDuff\index{McDuff counterexample}\index{example ! McDuff} showed that deformation-equivalence is indeed a necessary hypothesis: even if $[\omega_0]=[\omega_1]\in H^2(M;{\mathbb R})$, there are compact examples where $(M,\omega_0)$ and $(M,\omega_1)$ are not strongly isotopic; see Example~7.23 in~\cite{mc-sa:introduction}. In other words, fix $c\in H^2(M)$ and define $S_c$ as the set of symplectic forms $\omega$ in $M$ with $[\omega]=c$. On a compact manifold, all symplectic forms in the same path-connected component of $S_c$ are symplectomorphic according to the Moser theorem, though there might be symplectic forms in different components of $S_c$ that are not symplectomorphic.
\begin{theorem}
\label{thm:moser}\index{Moser ! theorem}\index{theorem ! Moser}
\textbf{(Moser)} $\;$ Let $M$ be a compact manifold with symplectic forms $\omega_0$ and $\omega_1$. Suppose that $\omega_t$, $0\leq t\leq 1$, is a smooth family of symplectic forms joining $\omega_0$ to $\omega_1$ with cohomology class $[\omega_t]$ independent of $t$. Then there exists an isotopy $\rho:M \times {\mathbb R} \to M$ such that $\rho^*_t\omega_t=\omega_0$, $0\leq t\leq 1$.
\end{theorem}
Moser applied an extremely useful argument, known as \textbf{Moser's trick}\index{Moser ! trick}, starting with the following observation. If there existed an isotopy $\rho:M \times {\mathbb R} \to M$ such that $\rho^*_t\omega_t=\omega_0$, $0\leq t\leq 1$, in terms of the associated time-dependent vector field
\[
v_t := \frac{d\rho_t}{dt}\circ\rho^{-1}_t \ , \qquad t\in{\mathbb R} \ ,
\]
we would then have for all $0\leq t\leq 1$ that
\[
\displaystyle{0=\frac{d}{dt} (\rho^*_t\omega_t) =\rho^*_t \big({\mathcal L}_{v_t}\omega_t+\frac{d\omega_t}{dt}\big)} \iff {\mathcal L}_{v_t}\omega_t + \displaystyle{\frac{d\omega_t}{dt}} =0\ .
\]
Conversely, the existence of a smooth time-dependent vector field $v_t$, $t \in {\mathbb R}$, satisfying the last equation is enough to produce by integration (since $M$ is compact) the desired isotopy $\rho:M \times {\mathbb R} \to M$ satisfying $\rho^*_t\omega_t= \rho^*_0 \omega_0=\omega_0$, for all $t$. So everything boils down to solving the equation ${\mathcal L}_{v_t}\omega_t + \frac{d\omega_t}{dt} =0$ for $v_t$.
\begin{proof}
By the cohomology assumption that $\big[ \frac{d}{dt} \omega_t\big] = 0$, there exists a {\em smooth} family of 1-forms $\mu_t$ such that
\[
\displaystyle{\frac{d\omega_t}{dt} = d\mu_t}\ , \quad 0 \leq t \leq 1\ .
\]
The argument involves the Poincar\'e lemma for compactly-supported forms, together with the Mayer-Vietoris sequence in order to use induction on the number of charts in a good cover of $M$; for a sketch, see page~95 in~\cite{mc-sa:introduction}. In the simplest case where $\omega_t=(1-t)\omega_0 + t\omega_1$ with $[\omega_0]=[\omega_1]$, we have that $\frac{d\omega_t}{dt} = \omega_1-\omega_0 =d\mu$ is exact. The nondegeneracy assumption on $\omega_t$ guarantees that we can pointwise solve the equation, known as \textbf{Moser's equation},\index{Moser ! equation}
\[
\imath_{v_t} \omega_t + \mu_t=0
\]
to obtain a unique smooth family of vector fields $v_t$, $0 \leq t \leq 1$. Extend $v_t$ to all $t \in {\mathbb R}$. Thanks to the compactness of $M$, the vector fields $v_t$ generate an isotopy $\rho$ satisfying $\frac{d\rho_t}{dt} = v_t \circ \rho_t$. Then we indeed have
\[
\displaystyle{\frac{d}{dt} (\rho_t^*\omega_t) = \rho^*_t({\mathcal L}_{v_t}\omega_t + \frac{d\omega_t}{dt}) = \rho^*_t(d \imath_{v_t} \omega_t + d\mu_t) = \rho^*_t d (\imath_{v_t} \omega_t + \mu_t) = 0}\ ,
\]
where we used Cartan's magic formula\index{Cartan ! magic formula} in ${\mathcal L}_{v_t}\omega_t = d \imath_{v_t}\omega_t + \imath_{v_t} d\omega_t$.
\end{proof}
\vspace*{-1ex}
\begin{example}
On a compact oriented 2-dimensional manifold $M$, a symplectic form is just an area form.\index{form ! area} Let $\omega_0$ and $\omega_1$ be two area forms on $M$. If $[\omega_0] = [\omega_1]$, i.e., $\omega_0$ and $\omega_1$ give the same total area, then any convex combination of them is symplectic (because they induce the same orientation), and there is an isotopy $\varphi_t : M \to M$, $t \in [0,1]$, such that $\varphi_1^* \omega_0 = \omega_1$. Therefore, up to strong isotopy, there is a unique symplectic representative in each non-zero 2-cohomology class of $M$.
\end{example}
On a {\em noncompact} manifold, given $v_t$, we would need to check the existence for $0\leq t\leq 1$ of an isotopy $\rho_t$ solving the differential equation $\frac{d\rho_t}{dt} = v_t \circ \rho_t$.
\ssubsection{Darboux and Moser Theorems}
\label{moser_relative_theorem}
By a \textbf{submanifold} of a manifold $M$ we mean either a manifold $X$ with a {\em closed embedding}\footnote{A \textbf{closed embedding} is a {\em proper} injective immersion. A map is \textbf{proper}\index{proper map} when its preimage of a compact set is always compact.} $i: X \hookrightarrow M$, or an \textbf{open submanifold} (i.e., an open subset of $M$). Given a $2n$-dimensional manifold $M$, a $k$-dimensional submanifold $X$, neighborhoods ${\mathcal U}_0,{\mathcal U}_1$ of $X$, and symplectic forms $\omega_0,\omega_1$ on ${\mathcal U}_0,{\mathcal U}_1$, we would like to know whether there exists a \textbf{local symplectomorphism preserving $X$}, i.e., a diffeomorphism $\varphi:{\mathcal U}_0\to {\mathcal U}_1$ with $\varphi^*\omega_1=\omega_0$ and $\varphi(X)=X$. Moser's Theorem~\ref{thm:moser} addresses the case where $X=M$. At the other extreme, when $X$ is just one point, there is the classical Darboux theorem (Theorem~\ref{thm:darboux}). In general, we have:
\begin{theorem}
\label{thm:moser_relative}\index{Moser !
theorem -- relative version}\index{theorem ! Moser -- relative version}
\textbf{(Moser Theorem -- Relative Version)} $\;$ Let $\omega_0$ and $\omega_1$ be symplectic forms on a manifold $M$, and $X$ a compact submanifold of $M$. Suppose that the forms coincide, $\omega_0|_p = \omega_1|_p$, at all points $p\in X$. Then there exist neighborhoods ${\mathcal U}_0$ and ${\mathcal U}_1$ of $X$ in $M$, and a diffeomorphism $\varphi : {\mathcal U}_0 \to {\mathcal U}_1$ such that $\varphi^*\omega_1=\omega_0$ and $\varphi$ restricted to $X$ is the identity map.
\end{theorem}
\vspace*{-2ex}
\begin{proof}
Pick a tubular neighborhood ${\mathcal U}_0$ of $X$. The 2-form $\omega_1-\omega_0$ is closed on ${\mathcal U}_0$, and satisfies $(\omega_1-\omega_0)_p=0$ at all $p\in X$. By the homotopy formula on the tubular neighborhood, there exists a 1-form $\mu$ on ${\mathcal U}_0$ such that $\omega_1-\omega_0=d\mu$ and $\mu_p=0$ at all $p\in X$. Consider the family $\omega_t=(1-t)\omega_0+t\omega_1=\omega_0+ td\mu$ of closed 2-forms on ${\mathcal U}_0$. Shrinking ${\mathcal U}_0$ if necessary, we can assume that $\omega_t$ is symplectic for $t \in [0,1]$, as nondegeneracy is an open property. Solve Moser's equation, $\imath_{v_t} \omega_t = - \mu$, for $v_t$. By integration, shrinking ${\mathcal U}_0$ again if necessary, there exists a local isotopy $\rho: {\mathcal U}_0 \times [0,1] \to M$ with $\rho^*_t\omega_t=\omega_0$, for all $t \in [0,1]$. Since $v_t |_X =0$, we have $\rho_t |_X =\id_X$. Set $\varphi=\rho_1$, ${\mathcal U}_1=\rho_1({\mathcal U}_0)$.
\end{proof}
\vspace*{-1ex}
\begin{theorem}
\label{thm:darboux}\index{theorem ! Darboux}\index{Darboux ! theorem}
\textbf{(Darboux)} $\;$ Let $(M,\omega)$ be a symplectic manifold, and let $p$ be any point in $M$. Then we can find a chart $({\mathcal U},x_1, \ldots ,x_n,y_1,\ldots, y_n)$ centered at $p$ where
\[
\omega = \displaystyle{\sum_{i=1}^n dx_i\wedge dy_i}\ .
\]
\end{theorem}
Such a coordinate chart $({\mathcal U},x_1,\dots,x_n,y_1,\dots,y_n)$ is called a \textbf{Darboux chart},\index{theorem ! Darboux}\index{Darboux ! chart}\index{chart ! Darboux} and the corresponding coordinates are called \textbf{Darboux coordinates}.\index{Darboux ! coordinates}\index{coordinates ! Darboux}
The classical proof of Darboux's theorem is by induction on the dimension of the manifold~\cite{ar:mathematical}, in the spirit of the argument for a symplectic basis (Section~\ref{symplectic_linear_algebra}). The proof below, using Moser's theorem, was first provided by Weinstein~\cite{we:lagrangian}.
\begin{proof}
Apply Moser's relative theorem to $X=\{p\}$. More precisely, use any symplectic basis for $(T_pM, \omega_p)$ to construct coordinates $(x'_1,\ldots ,x'_n,$ $y'_1,\ldots, y'_n)$ centered at $p$ and valid on some neighborhood ${\mathcal U}'$, so that $\omega_p= \left. \sum dx'_i\wedge dy'_i \right|_p$. There are two symplectic forms on ${\mathcal U}'$: the given $\omega_0=\omega$ and $\omega_1=\sum dx'_i\wedge dy'_i$. By Theorem~\ref{thm:moser_relative}, there are neighborhoods ${\mathcal U}_0$ and ${\mathcal U}_1$ of $p$, and a diffeomorphism $\varphi : {\mathcal U}_0 \to {\mathcal U}_1$ such that $\varphi (p)=p$ and $\varphi ^*(\sum dx'_i\wedge dy'_i)=\omega$. Since $\varphi ^*(\sum dx'_i \wedge dy'_i) = \sum d(x'_i\circ \varphi) \wedge d(y'_i\circ \varphi)$, we simply set new coordinates $x_i=x'_i\circ \varphi$, $y_i = y'_i\circ \varphi$.
\end{proof}
Darboux's theorem is easy in the 2-dimensional case.\index{Darboux ! theorem in dimension two}\index{theorem ! Darboux} Being closed, $\omega$ is locally exact, $\omega = d \alpha$. Every nonvanishing 1-form on a surface can be written locally as $\alpha = g \, dh$ for suitable functions $g,h$, where $h$ is a coordinate on the local leaf space of the kernel foliation of $\alpha$.
The form $\omega = dg \wedge dh$ is nondegenerate if and only if $(g,h)$ is a local diffeomorphism. By the way, transversality shows that the normal form for a {\em generic}\footnote{\textbf{Generic} here means that the subset of those 2-forms having this behavior is open, dense and invariant under diffeomorphisms of the manifold.} 2-form is $x\, dx \wedge dy$ near a point where it is degenerate.
\ssubsection{Symplectic Submanifolds}
\label{symplectic_submanifolds}
Moser's argument permeates many other proofs, including those of the next two results regarding {\em symplectic submanifolds}. Let $(M,\omega)$ be a symplectic manifold.
\begin{definition}
A \textbf{symplectic submanifold}\index{symplectic submanifold ! definition} of $(M,\omega)$ is a submanifold $X$ of $M$ where, at each $p \in X$, the space $T_pX$ is a symplectic subspace of $(T_pM,\omega_p)$.
\end{definition}
If $i: X \hookrightarrow M$ is the inclusion of a symplectic submanifold $X$, then the restriction of $\omega$ to $X$ is a symplectic form, so that $(X,i^*\omega)$ is itself a symplectic manifold.
Let $X$ be a symplectic submanifold of $(M,\omega)$. At each $p \in X$, we have $T_pM = T_p X \oplus (T_p X)^{\omega_p}$ (Section~\ref{symplectic_linear_algebra}), so the map $(T_p X)^{\omega_p} \to T_p M/T_p X$ is an isomorphism. This canonical identification of the \textbf{normal space}\index{normal ! space}\index{space ! normal} of $X$ at $p$, $N_p X := T_p M/T_p X$, with the symplectic orthogonal $(T_p X)^{\omega_p}$, yields a canonical identification of the \textbf{normal bundle} $NX$ with the {\em symplectic vector bundle} $(TX)^\omega$. A \textbf{symplectic vector bundle}\index{symplectic !
vector bundle} is a vector bundle $E \to X$ equipped with a smooth\footnote{Smoothness means that, for any pair of (smooth) sections $u$ and $v$ of $E$, the real-valued function $\Omega (u,v) : X \to {\mathbb R}$ given by evaluation at each point is smooth.} field $\Omega$ of fiberwise nondegenerate skew-symmetric bilinear maps $\Omega_p : E_p \times E_p \to {\mathbb R}$. The \textbf{symplectic normal bundle} is the normal bundle of a symplectic submanifold, with the symplectic structure induced by orthogonals. The next theorem, due to Weinstein~\cite{we:lagrangian}, states that a neighborhood of a symplectic submanifold $X$ is determined by $X$ and (the isomorphism class of) its symplectic normal bundle.
\begin{theorem}
\label{thm:weinstein_symplectic}\index{Weinstein ! symplectic neighborhood theorem}\index{theorem ! Weinstein symplectic neighborhood}\index{neighborhood ! Weinstein symplectic neighborhood}
\textbf{(Symplectic Neighborhood Theorem)} $\;$ Let $(M_0,\omega_0)$, $(M_1,\omega_1)$ be symplectic manifolds with diffeomorphic compact symplectic submanifolds $X_0$, $X_1$. Let $i_0 : X_0 \hookrightarrow M_0$, $i_1 : X_1 \hookrightarrow M_1$ be their inclusions. Suppose there is an isomorphism $\widetilde \phi : NX_0 \to NX_1$ of the corresponding symplectic normal bundles covering a symplectomorphism $\phi : (X_0, i_0^* \omega_0) \to (X_1, i_1^* \omega_1)$. Then there exist neighborhoods ${\mathcal U}_0 \subset M_0$, ${\mathcal U}_1 \subset M_1$ of $X_0$, $X_1$ and a symplectomorphism $\varphi: {\mathcal U}_0 \to {\mathcal U}_1$ extending $\phi$ such that the restriction of $d \varphi$ to the normal bundle $NX_0$ is $\widetilde \phi$.
\end{theorem}
As first noted by Thurston~\cite{th:examples}, the form $\Omega + \pi^* \omega_X$ is symplectic in some neighborhood of the zero section in $NX$, where $\pi : NX \to X$ is the bundle projection and $\omega_X$ is the restriction of $\omega$ to $X$.
Therefore, {\em a compact symplectic submanifold $X$ always admits a tubular neighborhood in the ambient $(M,\omega)$ symplectomorphic to a tubular neighborhood of the zero section in the symplectic normal bundle $NX$}.
\begin{proof}
By the Whitney extension theorem\footnote{\textbf{Whitney Extension Theorem: }\index{Whitney extension theorem}\index{theorem ! Whitney extension} {\em Let $M$ be a manifold and $X$ a submanifold of $M$. Suppose that at each $p\in X$ we are given a linear isomorphism $L_p:T_pM\stackrel{\simeq}{\longrightarrow} T_pM$ such that $L_p|_{T_pX}= \Id_{T_pX}$ and $L_p$ depends smoothly on $p$. Then there exists an embedding $h : {\mathcal N} \to M$ of some neighborhood ${\mathcal N}$ of $X$ in $M$ such that $h|_X= \id_X$ and $dh_p=L_p$ for all $p\in X$.} A proof relies on a tubular neighborhood model.}\index{theorem ! Whitney extension}\index{Whitney extension theorem} there exist neighborhoods ${\mathcal U}_0 \subset M_0$ and ${\mathcal U}_1 \subset M_1$ of $X_0$ and $X_1$, and a diffeomorphism $h: {\mathcal U}_0 \to {\mathcal U}_1$ such that $h \circ i_0 = i_1 \circ \phi$ and the restriction of $d h$ to the normal bundle $NX_0$ is the given $\widetilde \phi$. Hence $\omega_0$ and $h^* \omega_1$ are two symplectic forms on ${\mathcal U}_0$ which coincide at all points $p\in X_0$. The result now follows from Moser's relative theorem (Theorem~\ref{thm:moser_relative}).
\end{proof}
Carefully combining Moser's argument with the existence of an ambient isotopy that produces a given deformation of a compact submanifold, we can show:
\begin{theorem}
Let $X_t$, $t \in [0,1]$, be a (smooth) family of compact symplectic submanifolds of a compact symplectic manifold $(M,\omega)$. Then there exists an isotopy $\rho:M \times {\mathbb R} \to M$ such that for all $t \in [0,1]$ we have $\rho^*_t\omega=\omega$ and $\rho_t (X_0) = X_t$.
\end{theorem}
Inspired by complex geometry, Donaldson~\cite{do:almost} proved the following theorem on the existence of symplectic submanifolds. A major consequence is the characterization of symplectic manifolds in terms of {\em Lefschetz pencils}; see Section~\ref{sec:pencils}.
\begin{theorem}
\label{thm:donaldson_submanifolds}
\textbf{(Donaldson)} $\;$ Let $(M,\omega)$ be a compact symplectic manifold. Assume that the cohomology class $[\omega]$ is integral, i.e., lies in $H^2 (M;{\mathbb Z})$. Then, for every sufficiently large integer $k$, there exists a connected codimension-2 symplectic submanifold $X$ representing the Poincar\'e dual of the integral cohomology class $k[\omega]$.
\end{theorem}
Under the same hypotheses, Auroux extended this result to show that given $\alpha \in H_{2m} (M;{\mathbb Z})$ there exist positive $k,\ell \in {\mathbb Z}$ such that $k \mathrm{PD} [\omega^{n-m}] + \ell \alpha$ is realized by a $2m$-dimensional symplectic submanifold.
\ssection{Lagrangian Submanifolds}
\index{lagrangian submanifold ! definition}
\label{section2}
\ssubsection{First Lagrangian Submanifolds}
\label{lagrangian_submanifolds}
Let $(M,\omega)$ be a symplectic manifold.
\begin{definition}
A submanifold $X$ of $(M,\omega)$ is \textbf{lagrangian}\index{lagrangian submanifold ! definition} (respectively, \textbf{isotropic}\index{isotropic submanifold} and \textbf{coisotropic}\index{coisotropic submanifold}) if, at each $p \in X$, the space $T_pX$ is a lagrangian (respectively, isotropic and coisotropic) subspace of $(T_pM,\omega_p)$.
\end{definition}
If $i: X \hookrightarrow M$ is the inclusion map, then $X$ is a \textbf{lagrangian submanifold} if and only if $i^*\omega = 0$ and $\dim X = \frac {1}{2} \dim M$.
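The condition $i^*\omega = 0$ can be tested symbolically on a concrete example. The sketch below (not part of the text; Python with sympy, parametrization ours) checks that the standard torus $S^1 \times S^1 \subset ({\mathbb R}^4, \omega_0)$, with $\omega_0 = dx_1 \wedge dy_1 + dx_2 \wedge dy_2$, pulls $\omega_0$ back to zero; since the torus is 2-dimensional in a 4-dimensional manifold, it is lagrangian:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')

# Parametrize S^1 x S^1 inside (R^4, omega_0), coordinates (x1, y1, x2, y2).
x = [sp.cos(t1), sp.cos(t2)]
y = [sp.sin(t1), sp.sin(t2)]

# Coefficient of dt1 ^ dt2 in the pullback of omega_0:
#   sum_i (dx_i/dt1 * dy_i/dt2 - dx_i/dt2 * dy_i/dt1).
coeff = sum(sp.diff(x[i], t1) * sp.diff(y[i], t2)
            - sp.diff(x[i], t2) * sp.diff(y[i], t1)
            for i in range(2))

assert sp.simplify(coeff) == 0  # i^* omega_0 = 0: the torus is lagrangian
```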
The problem of embedding\footnote{An \textbf{embedding}\index{embedding} is an immersion that is a homeomorphism onto its image.} a compact manifold as a lagrangian submanifold of a given symplectic manifold is often global. For instance, Gromov~\cite{gr:pseudo} proved that there can be no lagrangian spheres in $({\mathbb C}^n,\omega_0)$, except for the circle in ${\mathbb C}^1$, and more generally no compact \textbf{exact lagrangian} submanifolds, in the sense that $\alpha_0 =\sum y_j \ dx_j$ restricts to an exact 1-form. The argument uses {\em pseudoholomorphic curves} (Section~\ref{sec:pseudoholomorphic}). Yet there are {\em immersed} lagrangian spheres (Section~\ref{sec:special_lagrangians}). More recently, topological and geometrical constraints were found on manifolds that admit lagrangian embeddings into {\em compact} symplectic manifolds; see for instance~\cite{bi:intersections,bi-ci:subcritical,se:graded}.
\begin{examples}
\begin{enumerate}
\item Any 1-dimensional submanifold of a symplectic surface is lagrangian (because a 1-dimensional subspace of a symplectic vector space is always isotropic). Therefore, any product of $n$ embedded curves arises as a lagrangian submanifold of (a neighborhood of zero in) the prototype $( {\mathbb R} ^{2n},\omega_0)$. In particular, a \textbf{torus}\index{torus} ${\mathbb T}^n = S^1 \times \ldots \times S^1$ can be embedded as a lagrangian submanifold of any $2n$-dimensional symplectic manifold, by Darboux's theorem (Theorem~\ref{thm:darboux}).
\item Let $M = T^*X$ be the cotangent bundle of a manifold $X$. With respect to a cotangent coordinate chart $(T^*U, x_1,\dots,x_n,\xi_1,\dots,\xi_n)$, the tautological form is $\alpha = \sum \xi_i dx_i$ and the canonical form is $\omega = -d\alpha = \sum dx_i \wedge d\xi_i$. The \textbf{zero section}\index{lagrangian submanifold ! zero section}\index{cotangent bundle ! zero section}\index{example !
of lagrangian submanifold} $X_0 := \{(x,\xi) \in T^*X \mid \xi = 0 \mbox{ in } T_x^*X\}$ is an $n$-dimensional submanifold of $T^*X$ whose intersection with $T^*U$ is given by the equations $\xi_1 = \dots = \xi_n = 0$. Clearly $\alpha$ vanishes on $X_0 \cap T^*U$. Hence, if $i_0: X_0 \hookrightarrow T^*X$ is the inclusion map, we have $i_0^*\omega = i_0^*d\alpha = 0$, and so $X_0$ is lagrangian. A \textbf{cotangent fiber}\index{lagrangian submanifold ! zero section}\index{cotangent bundle ! zero section}\index{example ! of lagrangian submanifold} $T_{x_0}^*X$ is an $n$-dimensional submanifold of $T^*X$ given by the equations $x_i = (x_0)_i$, $i = 1, \dots , n$, on $T^*U$. Since the $x_i$'s are constant, the form $\alpha$ vanishes identically, and $T_{x_0}^*X$ is a lagrangian submanifold. \end{enumerate} \end{examples} Let $X_\mu $ be (the image of) an arbitrary section, that is, an $n$-dimensional submanifold of $T^*X$ of the form $X_\mu = \{(x,\mu_x) \mid x \in X,\ \mu_x \in T_x^*X\}$, where the covector $\mu_x$ depends smoothly on $x$, so $\mu: X \rightarrow T^*X$ is a de Rham 1-form. We will investigate when such an $X_\mu$ is lagrangian. Relative to the inclusion $i: X_\mu \hookrightarrow T^*X$ and the cotangent projection $\pi: T^*X \rightarrow X$, these $X_\mu$'s are exactly the submanifolds for which $\pi \circ i: X_\mu \rightarrow X$ is a diffeomorphism. \begin{proposition} \label{prop:tautological_property} The tautological 1-form $\alpha$ on $T^*X$ satisfies $\mu^* \alpha = \mu$, for any 1-form $\mu: X \to T^*X$. \end{proposition} \vspace*{-2ex} \begin{proof} Denote by $s_{\mu}: X \rightarrow T^*X$, $x \mapsto (x,\mu_x)$, the 1-form $\mu$ regarded exclusively as a map. From the definition, $\alpha_p = (d\pi_p)^*\xi$ at $p = (x,\xi) \in M$. For $p = s_{\mu}(x) = (x,\mu_x)$, we have $\alpha_p = (d\pi_p)^*\mu_x$.
Then, since $\pi \circ s_{\mu} = \mathrm{id}_X$, we have \[ (s_{\mu}^*\alpha)_x = (ds_{\mu})_x^* \alpha_p = (ds_{\mu})_x^*(d\pi_p)^*\mu_x = (d (\pi \circ s_{\mu}) )_x^* \mu_x = \mu_x \ . \] \end{proof} The map $s_{\mu}: X \rightarrow T^*X$, $s_{\mu} (x) = (x,\mu_x)$ is an embedding with image the section $X_\mu$. The diffeomorphism $\tau : X \to X_\mu$, $\tau (x) := (x, \mu_x)$, satisfies $i \circ \tau = s_\mu$. \begin{proposition} \label{prop:closed_1_forms} The sections of $T^*X$ that are lagrangian are those corresponding to closed 1-forms on $X$.\index{lagrangian submanifold ! closed 1-form}\index{cotangent bundle ! lagrangian submanifold}\index{lagrangian submanifold ! of TX@of $T^*X$} \end{proposition} \vspace*{-2ex} \begin{proof} Using the previous notation, the condition of $X_\mu$ being lagrangian becomes: $i^*d\alpha = 0 \Leftrightarrow \tau^* i^* d\alpha = 0 \Leftrightarrow s_{\mu}^* d\alpha = 0 \Leftrightarrow d(s_{\mu}^*\alpha) = 0 \Leftrightarrow d\mu = 0$. \end{proof} When $\mu = dh$ for some $h \in C^{\infty}(X)$, such a primitive $h$ is called a \textbf{generating function}\index{generating function} for the lagrangian submanifold\index{lagrangian submanifold ! generating function} $X_\mu $. Two functions generate the same lagrangian submanifold if and only if they differ by a locally constant function. When $X$ is simply connected, or at least $H_{\mathrm{de Rham}}^1(X) = 0$, every lagrangian $X_\mu $ admits a generating function. Besides the cotangent fibers, there are lots of lagrangian submanifolds of $T^*X$\index{cotangent bundle ! lagrangian submanifold} not covered by the description in terms of closed 1-forms. Let $S$ be any submanifold of an $n$-dimensional manifold $X$. The \textbf{conormal space}\index{conormal ! space} of $S$ at $x \in S$ is \[ N_x^*S = \{\xi \in T_x^*X \mid \xi(v) = 0 \ , \mbox{ for all } v \in T_xS\} \ . \] The \textbf{conormal bundle}\index{conormal !
bundle} of $S$ is $N^*S = \{(x,\xi) \in T^*X \mid x \in S,\ \xi \in N_x^*S\}$. This is an $n$-dimensional submanifold of $T^*X$. In particular, taking $S = \{x\}$ to be one point, the conormal bundle is the corresponding cotangent fiber $T_x^*X$. Taking $S = X$, the conormal bundle is the zero section $X_0$ of $T^*X$. \begin{proposition} If $i: N^*S \hookrightarrow T^*X$ is the inclusion of the conormal bundle of a submanifold $S \subset X$, and $\alpha$ is the tautological 1-form on $T^*X$, then $i^*\alpha = 0$. \end{proposition} \vspace*{-2ex} \begin{proof} Let $({\mathcal U},x_1,\dots,x_n)$ be a coordinate chart on $X$ adapted to $S$, so that ${\mathcal U} \cap S$ is described by $x_{k+1} = \dots = x_n = 0$. Let $(T^*{\mathcal U},x_1,\dots,x_n,\xi_1,\dots,\xi_n)$ be the associated cotangent coordinate chart. The submanifold $N^*S \cap T^*{\mathcal U}$ is described by $x_{k+1} = \dots = x_n = 0$ and $\xi_1 = \dots = \xi_k = 0$. Since $\alpha = \sum \xi_i dx_i$ on $T^*{\mathcal U}$, we conclude that, at $p \in N^*S$, \[ (i^*\alpha)_p = \alpha_p|_{T_p(N^*S)} = \left. \sum \limits_{i>k} \xi_i dx_i \right|_{\mathrm{span} \{ \frac{\partial}{\partial x_i}, i \leq k \}} = 0 \ . \] \end{proof} \vspace*{-2ex} \begin{corollary} \index{lagrangian submanifold ! conormal bundle}\index{cotangent bundle ! lagrangian submanifold}\index{cotangent bundle ! conormal bundle} For any submanifold $S$ of $X$, the conormal bundle $N^*S$ is a lagrangian submanifold of $T^*X$. \end{corollary} \ssubsection{Lagrangian Neighborhood Theorem} \label{weinstein_lagrangian_theorem} Weinstein~\cite{we:lagrangian} proved that, if a compact submanifold $X$ is lagrangian with respect to two symplectic forms $\omega_0$ and $\omega_1$, then the conclusion of the Moser relative theorem (Theorem~\ref{thm:moser_relative}) still holds. We need some algebra for the Weinstein theorem\index{Weinstein ! lagrangian neighborhood theorem}\index{theorem !
Weinstein lagrangian neighborhood}\index{neighborhood ! Weinstein lagrangian neighborhood}. Suppose that $U,W$ are $n$-dimensional vector spaces, and $\Omega: U \times W \to {\mathbb R}$ is a bilinear pairing; the map $\Omega$ gives rise to a linear map $\widetilde\Omega:U\to W^*$, $\widetilde\Omega(u) = \Omega(u,\cdot)$. Then $\Omega$ is nondegenerate if and only if $\widetilde\Omega$ is bijective. \begin{proposition} \label{prop:complement} Let $(V,\Omega)$ be a symplectic vector space, $U$ a lagrangian subspace\index{lagrangian subspace}\index{subspace ! lagrangian} of $(V,\Omega)$, and $W$ any vector space complement to $U$, not necessarily lagrangian. Then from $W$ we can canonically build a lagrangian complement\index{lagrangian complement} to $U$. \end{proposition} \vspace*{-2ex} \begin{proof} From $\Omega$ we get a nondegenerate pairing $\Omega' : U \times W \to {\mathbb R}$, so $\widetilde\Omega':U \to W^*$ is bijective. We look for a lagrangian complement to $U$ of the form $W'=\{w+Aw\mid w\in W\}$ for some linear map $A:W \to U$. For $W'$ to be lagrangian we need that $\Omega( w_1,w_2) = \widetilde\Omega'(Aw_2)(w_1) -\widetilde\Omega'(Aw_1)(w_2)$. Let $A'=\widetilde\Omega'\circ A$, and look for $A'$ such that $\Omega( w_1,w_2)=A'(w_2)(w_1)-A'(w_1)(w_2)$ for all $w_1,w_2\in W$. The canonical choice is $A'(w)=-\frac 12\Omega(w,\cdot)$. Set $A=(\widetilde\Omega')^{-1}\circ A'$. \end{proof} \vspace*{-2ex} \begin{proposition} \label{prop:canonical_iso} Let $V$ be a vector space, let $\Omega_0$ and $\Omega_1$ be symplectic forms on $V$, let $U$ be a subspace of $V$ lagrangian for $\Omega_0$ and $\Omega_1$, and let $W$ be any complement to $U$ in $V$. Then from $W$ we can canonically construct a linear isomorphism $L:V\stackrel{\simeq} \longrightarrow V$ such that $L|_U= \Id_U$ and $L^*\Omega_1=\Omega_0$.
\end{proposition} \vspace*{-2ex} \begin{proof} By Proposition~\ref{prop:complement}, from $W$ we canonically obtain complements $W_0$ and $W_1$ to $U$ in $V$ such that $W_0$ is lagrangian for $\Omega_0$ and $W_1$ is lagrangian for $\Omega_1$. The nondegenerate bilinear pairings $\Omega_i : W_i \times U \to {\mathbb R}$, $i=0,1$, give isomorphisms $\widetilde\Omega_i:W_i \stackrel{\simeq}\longrightarrow U^*$, $i=0,1$, respectively. Let $B : W_0 \to W_1$ be the linear map satisfying $\widetilde\Omega_1 \circ B = \widetilde\Omega_0$, i.e., $\Omega_0(w_0,u)=\Omega_1(Bw_0,u)$, $\forall w_0 \in W_0$, $\forall u \in U$. Let $L := \Id_U\oplus B: U\oplus W_0 \to U\oplus W_1$ be the extension of $B$ to the rest of $V$ by setting it to be the identity on $U$. It satisfies: \[ \begin{array}{rcl} (L^*\Omega_1)(u\oplus w_0,u'\oplus w'_0) & = & \Omega_1(u\oplus B w_0,u'\oplus B w'_0) \\ & = & \Omega_1(u,B w'_0) + \Omega_1(B w_0,u') \\ & = & \Omega_0(u, w'_0) + \Omega_0(w_0,u') \\ & = & \Omega_0(u\oplus w_0, u'\oplus w'_0)\ . \end{array} \] \end{proof} \vspace*{-2ex} \begin{theorem} \label{thm:weinstein_lagrangian}\index{Weinstein ! lagrangian neighborhood theorem}\index{theorem ! Weinstein lagrangian neighborhood}\index{neighborhood ! Weinstein lagrangian neighborhood} \textbf{(Weinstein Lagrangian Neighborhood Theorem)} $\;$ Let $M$ be a $2n$-dimensional manifold, $X$ a compact $n$-dimensional submanifold, $i: X \hookrightarrow M$ the inclusion map, and $\omega_0$ and $\omega_1$ symplectic forms on $M$ such that $i^*\omega_0=i^*\omega_1=0$, i.e., $X$ is a lagrangian submanifold of both $(M,\omega_0)$ and $(M,\omega_1)$. Then there exist neighborhoods ${\mathcal U}_0$ and ${\mathcal U}_1$ of $X$ in $M$ and a diffeomorphism $\varphi: {\mathcal U}_0 \to {\mathcal U}_1$ such that $\varphi^*\omega_1=\omega_0$ and $\varphi$ is the identity on $X$, i.e., $\varphi (p) = p$, $\forall p\in X$.
\end{theorem} \vspace*{-2ex} \begin{proof} Put a riemannian metric $g$ on $M$. Fix $p\in X$, and let $V=T_pM$, $U=T_pX$ and $W=U^{\perp}$, the orthocomplement of $U$ in $V$ relative to the inner product $g_p(\cdot,\cdot)$. Since $i^*\omega_0=i^*\omega_1=0$, the subspace $U$ is lagrangian for both $(V,\omega_0|_p)$ and $(V,\omega_1|_p)$. By Proposition~\ref{prop:canonical_iso}, we canonically get from $U^{\perp}$ a linear isomorphism $L_p:T_pM\to T_pM$ depending smoothly on $p$, such that $L_p|_{T_pX}= \Id_{T_pX}$ and $L^*_p\omega_1|_p=\omega_0|_p$. By the Whitney extension theorem (Section~\ref{moser_relative_theorem}), there exist a neighborhood ${\mathcal N}$ of $X$ and an embedding $h: {\mathcal N} \hookrightarrow M$ with $h|_X= \id_X$ and $dh_p=L_p$ for $p\in X$. Hence, at any $p\in X$, we have $(h^*\omega_1)_p = (dh_p)^* \omega_1|_p = L^*_p \omega_1|_p = \omega_0|_p$. Applying the Moser relative theorem (Theorem~\ref{thm:moser_relative}) to $\omega_0$ and $h^*\omega_1$, we find a neighborhood ${\mathcal U}_0$ of $X$ and an embedding $f : {\mathcal U}_0 \to {\mathcal N}$ such that $f|_X= \id _X$ and $f^*(h^*\omega_1)=\omega_0$ on ${\mathcal U}_0$. Set $\varphi = h \circ f$ and ${\mathcal U}_1 = \varphi ({\mathcal U}_0)$. \end{proof} Theorem~\ref{thm:weinstein_lagrangian} has the following generalization. For a proof see, for instance, either of~\cite{go:coisotropic,gu-st:techniques,we:isotropic}. \begin{theorem} \label{thm:coisotropic} \index{theorem ! coisotropic embedding}\index{coisotropic ! embedding}\index{embedding ! coisotropic} \textbf{(Coisotropic Embedding Theorem)} $\;$ Let $M$ be a manifold of dimension $2n$, $X$ a submanifold of dimension $k \geq n$, $i: X \hookrightarrow M$ the inclusion, and $\omega_0$ and $\omega_1$ symplectic forms on $M$, such that $i^*\omega_0=i^*\omega_1$ and $X$ is coisotropic for both $(M,\omega_0)$ and $(M,\omega_1)$.
Then there exist neighborhoods ${\mathcal U}_0$ and ${\mathcal U}_1$ of $X$ in $M$ and a diffeomorphism $\varphi: {\mathcal U}_0 \to {\mathcal U}_1$ such that $\varphi^*\omega_1=\omega_0$ and $\varphi |_X = \id_X$. \end{theorem} \ssubsection{Weinstein Tubular Neighborhood Theorem} \label{weinstein_tubular_theorem} \index{Weinstein ! tubular neighborhood theorem}\index{theorem ! Weinstein tubular neighborhood}\index{neighborhood ! Weinstein tubular neighborhood} Let $(V,\Omega)$ be a symplectic linear space, and let $U$ be a lagrangian subspace.\index{symplectic ! linear algebra} Then there is a canonical nondegenerate bilinear pairing $\Omega': V/U \times U \to {\mathbb R}$ defined by $\Omega'([v],u) = \Omega(v,u)$ where $[v]$ is the equivalence class of $v$ in $V/U$. Consequently, we get a canonical isomorphism ${\widetilde \Omega}': V/U \to U^*$, ${\widetilde \Omega}'([v]) = \Omega'([v],\cdot)$. In particular, if $(M,\omega)$ is a symplectic manifold, and $X$ is a lagrangian submanifold, then $T_p X$ is a lagrangian subspace of $(T_p M,\omega_p )$ for each $p \in X$ and there is a canonical identification of the \textbf{normal space}\index{normal ! space}\index{space ! normal} of $X$ at $p$, $N_p X := T_p M/T_p X$, with the cotangent fiber $T_p ^*X$. Consequently the normal bundle $NX$ and the cotangent bundle $T^*X$ are canonically identified. \begin{theorem} \index{Weinstein ! tubular neighborhood theorem}\index{theorem ! Weinstein tubular neighborhood}\index{neighborhood ! Weinstein tubular neighborhood}\index{tubular neighborhood ! Weinstein theorem}\index{Weinstein ! lagrangian embedding}\index{embedding !
lagrangian}\label{thm:weinstein_tubular} \textbf{(Weinstein Tubular Neighborhood Theorem)} $\;$ Let $(M,\omega)$ be a symplectic manifold, $X$ a compact lagrangian submanifold, $\omega_0$ the canonical symplectic form on $T^*X$, $i_0: X \hookrightarrow T^*X$ the lagrangian embedding as the zero section, and $i: X \hookrightarrow M$ the lagrangian embedding given by inclusion. Then there are neighborhoods ${\mathcal U}_0$ of $X$ in $T^*X$, ${\mathcal U}$ of $X$ in $M$, and a diffeomorphism $\varphi: {\mathcal U}_0 \to {\mathcal U}$ such that $\varphi^*\omega=\omega_0$ and $\varphi \circ i_0 = i$. \end{theorem} \vspace*{-2ex} \begin{proof} By the standard tubular neighborhood\index{tubular neighborhood ! theorem}\index{theorem ! tubular neighborhood}\index{neighborhood ! tubular neighborhood} theorem\footnote{\textbf{Tubular Neighborhood Theorem:} {\em Let $M$ be a manifold, $X$ a submanifold, $NX$ the normal bundle of $X$ in $M$, $i_0: X \hookrightarrow NX$ the zero section, and $i: X \hookrightarrow M$ the inclusion. Then there are neighborhoods ${\mathcal U}_0$ of $X$ in $NX$, ${\mathcal U}$ of $X$ in $M$ and a diffeomorphism $\psi: {\mathcal U}_0 \to {\mathcal U}$ such that $\psi \circ i_0 = i$.} This theorem can be proved with the exponential map using a riemannian metric; see for instance~\cite{spivak:comprehensive}.} and since $NX$ and $T^*X$ are canonically identified, we can find a neighborhood ${\mathcal N}_0$ of $X$ in $T^*X$, a neighborhood ${\mathcal N}$ of $X$ in $M$, and a diffeomorphism $\psi: {\mathcal N}_0 \to {\mathcal N}$ such that $\psi \circ i_0 = i$. Let $\omega_0$ be the canonical form on $T^*X$ and $\omega_1 = \psi^*\omega$. The submanifold $X$ is lagrangian for both of these symplectic forms on ${\mathcal N}_0$.
By the Weinstein lagrangian neighborhood theorem (Theorem~\ref{thm:weinstein_lagrangian}), there exist neighborhoods ${\mathcal U}_0$ and ${\mathcal U}_1$ of $X$ in ${\mathcal N}_0$ and a diffeomorphism $\theta: {\mathcal U}_0 \to {\mathcal U}_1$ such that $\theta^*\omega_1 = \omega_0$ and $\theta \circ i_0 = i_0$. Take $\varphi = \psi \circ \theta$ and ${\mathcal U} = \varphi({\mathcal U}_0)$. Then $\varphi^*\omega = \theta^* \psi^*\omega = \theta^* \omega_1 = \omega_0$. \end{proof} Theorem~\ref{thm:weinstein_tubular} classifies compact lagrangian embeddings: up to local symplectomorphism, the set of lagrangian embeddings is the set of embeddings of manifolds into their cotangent bundles as zero sections. The classification of compact {\em isotropic} embeddings is also due to Weinstein in~\cite{we:lectures,we:isotropic}\index{Weinstein ! isotropic embedding}\index{embedding ! isotropic}\index{isotropic ! embedding}. An \textbf{isotropic embedding} of a manifold $X$ into a symplectic manifold $(M,\omega)$ is a closed embedding $i: X \hookrightarrow M$ such that $i^*\omega = 0$. Weinstein showed that neighborhood equivalence of isotropic embeddings is in one-to-one correspondence with isomorphism classes of symplectic vector bundles. The classification of compact {\em coisotropic} embeddings is due to Gotay~\cite{go:coisotropic}.\index{embedding ! coisotropic}\index{coisotropic ! embedding}\index{embedding ! coisotropic}\index{Gotay ! coisotropic embedding} A \textbf{coiso\-tro\-pic embedding} of a manifold $X$ carrying a closed 2-form $\alpha$ of constant rank into a symplectic manifold $(M,\omega)$ is an embedding $i: X \hookrightarrow M$ such that $i^*\omega = \alpha$ and $i(X)$ is coisotropic as a submanifold of $M$. Let $E$ be the \textbf{characteristic distribution}\index{characteristic distribution} of a closed form $\alpha$ of constant rank on $X$, i.e., $E_p$ is the kernel of $\alpha_p$ at $p \in X$.
Gotay showed that the total space $E^*$ then carries a symplectic structure in a neighborhood of the zero section, such that $X$ embeds coisotropically onto this zero section and, moreover, every coisotropic embedding is equivalent to this in some neighborhood of the zero section. \ssubsection{Application to Symplectomorphisms} \label{application_symplectomorphisms} \index{symplectomorphism ! group of symplectomorphisms}\index{group ! of symplectomorphisms}\index{symplectomorphism ! vs.\ lagrangian submanifold} Let $(M_1,\omega_1)$ and $(M_2,\omega_2)$ be two $2n$-dimensional symplectic manifolds. Given a diffeomorphism $f: M_1 \stackrel{\simeq}{\longrightarrow} M_2$, there is a way to express the condition of $f$ being a symplectomorphism in terms of a certain submanifold being lagrangian. Consider the two projection maps ${\mathrm{pr}}_i : M_1 \times M_2 \to M_i$, $(p_1,p_2) \mapsto p_i$, $i=1,2$. The \textbf{twisted product form}\index{twisted product form} on $M_1 \times M_2$ is the symplectic\footnote{More generally, $\lambda_1({\mathrm{pr}}_1)^*\omega_1 + \lambda_2({\mathrm{pr}}_2)^*\omega_2$ is symplectic for all $\lambda_1,\lambda_2 \in {\mathbb R}{\backslash}\{0\}$.} form \[ {\widetilde \omega} = ({\mathrm{pr}}_1)^*\omega_1 - ({\mathrm{pr}}_2)^*\omega_2 \ . \] \begin{proposition} \index{symplectomorphism ! vs.\ lagrangian submanifold}\index{lagrangian submanifold ! vs.\ symplectomorphism}\index{theorem ! symplectomorphism vs.\ lagrangian submanifold} A diffeomorphism $f: M_1 \stackrel{\simeq}{\longrightarrow} M_2$ is a symplectomorphism if and only if the graph of $f$ is a lagrangian submanifold of $(M_1 \times M_2,{\widetilde \omega})$. \end{proposition} \vspace*{-2ex} \begin{proof} The graph of $f$ is the $2n$-dimensional submanifold ${\mathrm{Graph}} \, f = \{(p,f(p)) \mid p \in M_1\} \subseteq M_1 \times M_2$, which is the image of the embedding $\gamma: M_1 \to M_1 \times M_2$, $p \mapsto (p,f(p))$.
We have $\gamma^*{\widetilde \omega} = \gamma^* {\mathrm{pr}}_1^* \ \omega_1 - \gamma^* {\mathrm{pr}}_2^* \ \omega_2 = ({\mathrm{pr}}_1 \circ \gamma)^*\omega_1 - ({\mathrm{pr}}_2 \circ \gamma)^*\omega_2$, and ${\mathrm{pr}}_1 \circ \gamma$ is the identity map on $M_1$ whereas ${\mathrm{pr}}_2 \circ \gamma = f$. So ${\mathrm{Graph}} \, f$ is lagrangian, i.e., $\gamma^*{\widetilde \omega} = 0$, if and only if $f^*\omega_2 = \omega_1$, i.e., $f$ is a symplectomorphism. \end{proof} Lagrangian submanifolds of $(M_1 \times M_2,{\widetilde \omega})$ are called \textbf{canonical relations}, when viewed as morphisms between $(M_1,\omega_1)$ and $(M_2,\omega_2)$, even if $\dim M_1 \neq \dim M_2$. Under a reasonable assumption, there is a notion of composition~\cite{we:lectures}. Take $M_1 = M_2 = M$ and suppose that $(M,\omega)$ is a {\em compact} symplectic manifold and $f \in \mathrm{Sympl}(M,\omega)$. The graphs $\mathrm{Graph} \, f$ and $\Delta$, of $f$ and of the identity map $\id : M \to M$, are lagrangian submanifolds of $M \times M$ with ${\widetilde \omega} = \mathrm{pr}_1^*\omega - \mathrm{pr}_2^*\omega$. By the Weinstein tubular neighborhood theorem, there exist a neighborhood ${\mathcal U}$ of $\Delta$ in $(M \times M,{\widetilde \omega})$ and a neighborhood ${\mathcal U}_0$ of $M$ in $(T^*M,\omega_0)$ with a symplectomorphism $\varphi: {\mathcal U} \to {\mathcal U}_0$ satisfying $\varphi (p,p) = (p,0)$, $\forall p \in M$. Suppose that $f$ is sufficiently \textbf{\boldmath{$C^1$}-close}\index{C-topology@$C^1$-topology}\footnote{Let $X$ and $Y$ be manifolds. A sequence of maps $f_i: X \to Y$ \textbf{converges in the \boldmath{$C^0$}-topology} (a.k.a.\ the \textbf{compact-open topology}) to $f: X \to Y$ if and only if $f_i$ converges uniformly on compact sets.
A sequence of $C^1$ maps $f_i: X \to Y$ \textbf{converges in the \boldmath{$C^1$}-topology}\index{C-topology@$C^1$-topology} to $f: X \to Y$ if and only if it and the sequence of derivatives $df_i: TX \to TY$ converge uniformly on compact sets.} to $\id$, i.e., $f$ is in some sufficiently small neighborhood of the identity $\id$ in the $C^1$-topology\index{C-topology@$C^1$-topology}. Hence we can assume that $\mathrm{Graph} \, f \subseteq {\mathcal U}$. Let $j: M \hookrightarrow {\mathcal U}$, $j(p)=(p,f(p))$, be the embedding as $\mathrm{Graph} \, f$, and $i: M \hookrightarrow {\mathcal U}$, $i(p)=(p,p)$, be the embedding as $\Delta = \mathrm{Graph} \, \id$. The map $j$ is sufficiently $C^1$-close to $i$. These maps induce embeddings $\varphi \circ j = j_0: M \hookrightarrow {\mathcal U}_0$ and $\varphi \circ i = i_0: M \hookrightarrow {\mathcal U}_0$ as 0-section, respectively. Since the map $j_0$ is sufficiently $C^1$-close to $i_0$, the image set $j_0 (M)$ intersects each fiber $T_p^*M$ at one point $\mu_p$ depending smoothly on $p$. Therefore, the image of $j_0$ is the image of a smooth section $\mu: M \to T^*M$, that is, a 1-form $\mu = j_0 \circ (\pi \circ j_0)^{-1}$. We conclude that $\mathrm{Graph} \, f \simeq \{(p,\mu_p) \ | \ p \in M ,\ \mu_p \in T_p^*M \}$. Conversely, if $\mu$ is a 1-form sufficiently $C^1$-close to the zero 1-form, then $\{(p,\mu_p) \ | \ p \in M,\ \mu_p \in T^*_pM \} \simeq \mathrm{Graph} \, f$, for some diffeomorphism $f: M \to M$. By Proposition~\ref{prop:closed_1_forms}, $\mathrm{Graph} \, f$ is lagrangian if and only if $\mu$ is closed. A small $C^1$-neighborhood of $\id$ in $\mathrm{Sympl}(M,\omega)$ is thus homeomorphic to a $C^1$-neighborhood of zero in the vector space of closed 1-forms on $M$. So we obtain the model: \[ T_\id (\mathrm{Sympl}(M,\omega)) \simeq \{\mu \in \Omega^1(M) \ | \ d\mu = 0\}\ . 
\] In particular, $T_\id (\mathrm{Sympl}(M,\omega))$ contains the space of exact 1-forms that correspond to generating functions, $C^\infty (M) / \{\mbox{locally constant functions}\}$. \begin{theorem} \index{symplectomorphism ! fixed point}\index{fixed point} Let $(M,\omega)$ be a compact symplectic manifold (and not just one point) with $H^1_\mathrm{deRham}(M) = 0$. Then any symplectomorphism of $M$ that is sufficiently $C^1$-close to the identity has at least two fixed points. \end{theorem} \vspace*{-2ex} \begin{proof} If $f \in \mathrm{Sympl}(M,\omega)$ is sufficiently $C^1$-close to $\id$, then its graph corresponds to a closed 1-form $\mu$ on $M$. As $H^1_{\mathrm{de Rham}}(M) = 0$, we have that $\mu = dh$ for some $h \in C^{\infty}(M)$. But $h$ must have at least two critical points because $M$ is compact. A point $p$ where $\mu_p = dh_p = 0$ corresponds to a point in the intersection of the graph of $f$ with the diagonal, that is, a fixed point of $f$. \end{proof} This result has the following analogue in terms of \textbf{lagrangian intersections}\index{lagrangian submanifold ! intersection problem}\index{intersection of lagrangian submanifolds}: {\em if $X$ is a compact lagrangian submanifold of a symplectic manifold $(M,\omega)$ with $H^1_\mathrm{deRham}(X) = 0$, then every lagrangian submanifold of $M$ that is $C^1$-close\footnote{We say that a submanifold $Y$ of $M$ is \textbf{\boldmath{$C^1$}-close} to another submanifold $X$ when there is a diffeomorphism $X \to Y$ that is, as a map into $M$, $C^1$-close to the inclusion $X \hookrightarrow M$.} to $X$ intersects $X$ in at least two points.} \ssubsection{Generating Functions} \label{generating_function} \index{generating function} We focus on symplectomorphisms between the cotangent bundles $M_1 = T^*X_1$, $M_2 = T^*X_2$\index{symplectomorphism ! recipe}\index{recipe ! for symplectomorphisms}\index{example ! of symplectomorphism} of two $n$-dimensional manifolds $X_1$, $X_2$.
Let $\alpha_1,\alpha_2$ and $\omega_1,\omega_2$ be the corresponding tautological and canonical forms. Under the natural identification \[ M_1 \times M_2 = T^*X_1 \times T^*X_2 \simeq T^*(X_1 \times X_2) \ , \] the tautological 1-form on $T^*(X_1 \times X_2)$ is $\alpha = {\mathrm{pr}}_1^* \alpha_1 + {\mathrm{pr}}_2^*\alpha_2$, the canonical 2-form on $T^*(X_1 \times X_2)$ is $\omega = -d\alpha = {\mathrm{pr}}_1^*\omega_1 + {\mathrm{pr}}_2^*\omega_2$, and the twisted product form\index{twisted product form} is ${\widetilde \omega} = {\mathrm{pr}}_1^*\omega_1 - {\mathrm{pr}}_2^*\omega_2$. We define the involution $\sigma_2: M_2 \to M_2$, $(x_2,\xi_2) \mapsto (x_2,-\xi_2)$, which yields $\sigma_2^*\alpha_2 = -\alpha_2$. Let $\sigma = {\mathrm{id}}_{M_1} \times \sigma_2: M_1 \times M_2 \rightarrow M_1 \times M_2$. Then $\sigma^*{\widetilde \omega} = {\mathrm{pr}}_1^*\omega_1 + {\mathrm{pr}}_2^*\omega_2 = \omega$. If $L$ is a lagrangian submanifold of $(M_1 \times M_2, \omega)$, then its \textbf{twist} $L^{\sigma} := \sigma(L)$ is a lagrangian submanifold of $(M_1 \times M_2,{\widetilde \omega})$. For producing a symplectomorphism $M_1 = T^*X_1 \rightarrow M_2 = T^*X_2$\index{symplectomorphism ! recipe} we can start with a lagrangian submanifold $L$ of $(M_1 \times M_2,\omega)$, twist it to obtain a lagrangian submanifold $L^{\sigma}$ of $(M_1 \times M_2,{\widetilde \omega})$, and, if $L^{\sigma}$ happens to be the graph of some diffeomorphism $\varphi: M_1 \rightarrow M_2$, then $\varphi$ is a symplectomorphism. A method to obtain lagrangian submanifolds of $M_1 \times M_2 \simeq T^*(X_1 \times X_2)$ relies on generating functions.\index{generating function} For any $f \in C^{\infty}(X_1 \times X_2)$, $df$ is a closed 1-form on $X_1 \times X_2$. The \textbf{lagrangian submanifold generated by $f$}\index{lagrangian submanifold ! generating function} is $L_{f} := \{((x,y),(df)_{(x,y)}) \mid (x,y) \in X_1 \times X_2\}$ (cf.\ Section~\ref{lagrangian_submanifolds}). 
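In local coordinates the twist calculation above can be verified symbolically. The following sketch (our own illustration, not from the text; the helper `coeff_dx_dy` is hypothetical) takes the simplest case $X_1 = X_2 = {\mathbb R}$ and checks that $L_f$ is lagrangian for $\omega = dx\wedge d\xi + dy\wedge d\eta$, while its twist $L_f^{\sigma}$ is lagrangian for ${\widetilde\omega} = dx\wedge d\xi - dy\wedge d\eta$; both reduce to equality of mixed partials of $f$.

```python
import sympy as sp

# Symbolic check that L_f = {(x, y, d_x f, d_y f)} is lagrangian for
# omega, and its twist L_f^sigma (with eta = -d_y f) for omega-tilde.
x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

def coeff_dx_dy(xi, eta, sign):
    # Pull back  dx^dxi + sign * dy^deta  along (x, y) |-> (x, xi, y, eta):
    # d(xi) = xi_x dx + xi_y dy gives dx^dxi = xi_y dx^dy, and similarly
    # dy^deta = -eta_x dx^dy; so the coefficient of dx^dy is:
    return sp.simplify(sp.diff(xi, y) - sign * sp.diff(eta, x))

# L_f and omega (sign = +1): coefficient is f_xy - f_yx = 0.
assert coeff_dx_dy(sp.diff(f, x), sp.diff(f, y), +1) == 0
# L_f^sigma and omega-tilde (sign = -1): again f_xy - f_yx = 0.
assert coeff_dx_dy(sp.diff(f, x), -sp.diff(f, y), -1) == 0
```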
We adopt the loose notation \[ \begin{array}{rcccl} d_x f & := & d_x f (x,y) & := & (df)_{(x,y)} \mbox{ projected to } T_x^*X_1 \times \{0\}, \\ d_y f & := & d_y f (x,y) & := & (df)_{(x,y)} \mbox{ projected to } \{0\} \times T_y^*X_2 \ , \end{array} \] which enables us to write $L_{f} = \{(x,y,d_x f,d_y f) \mid (x,y) \in X_1 \times X_2\}$ and \[ L_{f}^{\sigma} = \{(x,y,d_x f,-d_y f) \mid (x,y) \in X_1 \times X_2\} \ . \] When $L_{f}^{\sigma}$ is in fact the graph of a diffeomorphism $\varphi: M_1=T^*X_1 \rightarrow M_2=T^*X_2$, we call $\varphi$ the \textbf{symplectomorphism generated by $f$},\index{symplectomorphism ! generating function} and call $f$ the \textbf{generating function}\index{generating function} of $\varphi$. The issue now is to determine whether a given $L_{f}^{\sigma}$ is the graph of a diffeomorphism $\varphi: M_1 \rightarrow M_2$. Let $({\mathcal U}_1,x_1,\dots,x_n),({\mathcal U}_2,y_1,\dots,y_n)$ be coordinate charts for $X_1,X_2$, with associated charts $(T^*{\mathcal U}_1,x_1,\dots,x_n,\xi_1,\dots,\xi_n)$, $(T^*{\mathcal U}_2,y_1,\dots,y_n,\eta_1,\dots,\eta_n)$ for $M_1,M_2$. The set $L_{f}^{\sigma}$ is the graph of $\varphi: M_1 \rightarrow M_2$ exactly when, for any $(x,\xi) \in M_1$ and $(y,\eta) \in M_2$, we have $\varphi(x,\xi) = (y,\eta) \Leftrightarrow \xi = d_x f \mbox{ and } \eta = -d_y f$. Therefore, given a point $(x,\xi) \in M_1$, to find its image $(y,\eta) = \varphi(x,\xi)$ we must solve the {\em Hamilton look-alike equations}\index{Hamilton equations} \[ \left\{ \begin{array}{rll} \xi_i & = & \displaystyle{\phantom{-} \frac {\partial f}{\partial x_i} (x,y)} \\ \eta_i &= &\displaystyle{-\frac {\partial f}{\partial y_i} (x,y)} \ . \end{array} \right.
\] If there is a solution $y = \varphi_1(x,\xi)$ of the first equation, we may feed it to the second, thus obtaining $\eta = \varphi_2(x,\xi)$, so that $\varphi(x,\xi) = (\varphi_1(x,\xi),\varphi_2(x,\xi))$. By the implicit function theorem\index{theorem ! implicit function}, in order to solve the first equation locally and smoothly for $y$ in terms of $x$ and $\xi$, we need the condition \[ \det\left[ \frac {\partial}{\partial y_j} \left( \frac {\partial f}{\partial x_i} \right)\right]^n_{i,j=1} \ne 0 \ . \] This is a necessary condition for $f$ to generate a symplectomorphism $\varphi$. Locally this is also sufficient, but globally there is the usual bijectivity issue. \begin{example} Let $X_1 = X_2 = {\mathbb R}^n$, and $f (x,y) = -\frac{|x-y|^2}{2}$, the square of euclidean distance\index{euclidean ! distance} up to a constant factor. In this case, the Hamilton equations\index{Hamilton equations} are \[ \left\{ \begin{array}{rllll} \xi_i & = & \displaystyle{\phantom{-} \frac {\partial f}{\partial x_i}} & = & y_i - x_i \\ \eta_i & = &\displaystyle{-\frac {\partial f}{\partial y_i}} & = & y_i - x_i \end{array} \right. \qquad \Longleftrightarrow \qquad \left\{ \begin{array}{rll} y_i & = & x_i + \xi_i \\ \\ \eta_i & = & \xi_i \ . \end{array} \right. \] The symplectomorphism generated by $f$ is $\varphi (x,\xi) = (x+\xi , \xi)$. If we use the euclidean inner product\index{euclidean ! inner product} to identify $T^* {\mathbb R}^n$ with $T{\mathbb R}^n$, and hence regard $\varphi$ as $\widetilde \varphi : T{\mathbb R}^n \to T{\mathbb R}^n$ and interpret $\xi$ as the velocity vector, then the symplectomorphism $\varphi$ corresponds to free translational motion in euclidean space\index{euclidean ! space}. \end{example} The previous example can be generalized to the {\em geodesic flow on a riemannian manifold}.\footnote{A \textbf{riemannian metric}\index{riemannian !
metric}\index{metric} on a manifold $X$ is a smooth function $g$ that assigns to each point $x \in X$ an {\em inner product}\index{positive ! inner product} $g_x$ on $T_x X$, that is, a symmetric positive-definite bilinear map $g_x: T_x X \times T_x X \to {\mathbb R}$. Smoothness means that for every (smooth) vector field $v: X \to TX$ the real-valued function $x \mapsto g_x (v_x, v_x)$ is smooth. A \textbf{riemannian manifold}\index{riemannian ! manifold} is a pair $(X,g)$ where $g$ is a riemannian metric on the manifold $X$. The \textbf{arc-length}\index{arc-length} of a piecewise smooth curve $\gamma: [a,b] \to X$ on a riemannian manifold $(X,g)$ is $\int_a^b \left| \frac{d \gamma}{dt} \right| \, dt$, where $\frac{d \gamma}{dt} (t) = d \gamma_t (1) \in T_{\gamma (t)} X$ and $\left| \frac{d \gamma}{dt} \right| = \sqrt{g_{\gamma(t)} ( \frac{d \gamma}{dt}, \frac{d \gamma}{dt} )}$ is the \textbf{velocity} of $\gamma$. A \textbf{reparametrization} of a curve $\gamma : [a,b] \to X$ is a curve of the form $\gamma \circ \tau : [c,d] \to X$ for some $\tau : [c,d] \to [a,b]$. By the change of variable formula for the integral, we see that the arc-length of $\gamma$ is invariant by reparametrization. The \textbf{riemannian distance}\index{riemannian ! distance} between two points $x$ and $y$ of a connected riemannian manifold $(X,g)$ is the infimum $d(x,y)$ of the set of all arc-lengths for piecewise smooth curves joining $x$ to $y$. A \textbf{geodesic}\index{geodesic ! curve} is a curve that locally minimizes distance and whose velocity is constant. Given any curve $\gamma : [a,b] \to X$ with ${d \gamma \over dt}$ never vanishing, there is a reparametrization $\gamma \circ \tau : [a,b] \to X$ of constant velocity. A \textbf{minimizing geodesic}\index{geodesic ! minimizing} from $x$ to $y$ is a geodesic joining $x$ to $y$ whose arc-length is the riemannian distance\index{riemannian ! distance} $d(x,y)$.
A riemannian manifold $(X,g)$ is \textbf{geodesically convex}\index{geodesic ! geodesically convex} if every point $x$ is joined to every other point $y$ by a unique (up to reparametrization) minimizing geodesic. For instance, $({\mathbb R}^n , \langle \cdot , \cdot \rangle)$ is a geodesically convex riemannian manifold (where $g_x (v,w) = \langle v,w \rangle$ is the euclidean inner product on $T{\mathbb R}^n \simeq {\mathbb R}^n \times {\mathbb R}^n$), for which the riemannian distance is the usual euclidean distance\index{euclidean ! distance} $d(x,y) = |x-y|$.} Let $(X,g)$ be a geodesically convex riemannian manifold, where $d(x,y)$ is the riemannian distance between points $x$ and $y$. Consider the function \[ f: X \times X \longrightarrow {\mathbb R}\ , \qquad f (x,y) = - \frac{d(x,y)^2}{2} \ . \] We want to investigate if $f$ generates a symplectomorphism $\varphi: T^*X \to T^*X$. Using the identification $\widetilde g _x : T_xX \stackrel{\simeq}{\longrightarrow} T_x^*X$, $v \mapsto g_x(v,\cdot)$, induced by the metric, we translate $\varphi$ into a map $\widetilde \varphi : TX \to TX$. We need to solve \begin{eqnarray} \label{eqn:geodesic} \left\{ \begin{array}{lllll} \widetilde g_x(v) & = & \xi & = & \phantom{-} d_x f (x,y) \\ \widetilde g_y(w) & = & \eta & = & -d_y f (x,y) \end{array} \right. \end{eqnarray} for $(y,\eta)$ in terms of $(x,\xi)$ in order to find $\varphi$, or, equivalently, for $(y,w)$ in terms of $(x,v)$ in order to find $\widetilde \varphi$. Assume that $(X,g)$ is \textbf{geodesically complete}, that is, every geodesic can be extended indefinitely.
\begin{proposition} \label{prop:geodesic} Under the identification $T_xX \simeq T_x^*X$ given by the metric, the symplectomorphism generated by $f$ corresponds to the map \[ \begin{array}{rrcl} \widetilde \varphi: & TX & \longrightarrow & TX \\ & (x,v) & \longmapsto & (\gamma(1), \frac{d\gamma}{dt} (1)) \ , \end{array} \] where $\gamma$ is the geodesic with initial conditions $\gamma(0) = x$ and $\frac{d\gamma}{dt} (0) = v$. \end{proposition} This map $\widetilde \varphi$ is called the \textbf{geodesic flow}\index{geodesic ! flow} on $(X,g)$. \begin{proof} Given $(x,v) \in TX$, let $\exp (x,v): {\mathbb R} \to X$ be the unique geodesic with initial conditions $\exp (x,v) (0) = x$ and ${d \exp (x,v) \over dt} (0) = v$. In this notation, we need to show that the unique solution of the system of equations~(\ref{eqn:geodesic}) is $\widetilde \varphi (x,v) = (\exp (x,v) (1) , {d \exp (x,v) \over dt} (1))$. The Gauss lemma\index{Gauss lemma} in riemannian geometry (see, for instance,~\cite{spivak:comprehensive}) asserts that geodesics are orthogonal to the level sets of the distance function. To solve the first equation for $y = \exp (x,u) (1)$ for some $u \in T_xX$, evaluate both sides at $v$ and at vectors $v' \in T_x X$ orthogonal to $v$ \[ |v|^2 = {d \over dt} \left[ {-d(\exp (x,v)(t),y)^2 \over 2} \right]_{t=0} \quad \mbox{ and } \quad 0 = {d \over dt} \left[ {-d(\exp (x,v')(t),y)^2 \over 2} \right]_{t=0} \] to conclude that $u=v$, and thus $y = \exp (x,v) (1)$. We have $-d_y f (x,y) (w') =0$ at vectors $w' \in T_y X$ orthogonal to $W:={d \exp (x,v) \over dt} (1)$, because $f(x,y)$ is essentially the arc-length of a minimizing geodesic. Hence $w=kW$ must be proportional to $W$, and $k=1$ since \[ k |v|^2 = g_y (kW, W) = - {d \over dt} \left[ {- d(x,\exp (x,v)(1-t))^2 \over 2} \right]_{t=0} = |v|^2 \ .
\] \end{proof} \ssubsection{Fixed Points} \index{fixed_points} Let $X$ be an $n$-dimensional manifold, and $M = T^*X$ its cotangent bundle equipped with the canonical symplectic form $\omega$. Let $f: X \times X \to {\mathbb R}$ be a smooth function generating a symplectomorphism $\varphi: M \to M$, $\varphi(x,d_x f) = (y, -d_y f)$, with the notation of Section~\ref{generating_function}. To describe the fixed points of $\varphi$\index{fixed point}, we introduce the function $\psi : X \to {\mathbb R}$, $\psi(x) = f(x,x)$. \begin{proposition} \label{prop:fixed_vs_critical} There is a one-to-one correspondence between the fixed points of the symplectomorphism $\varphi$ and the critical points of $\psi$. \end{proposition} \vspace*{-2ex} \begin{proof} At $x_0 \in X$, $d_{x_0} \psi = ( d_x f + d_y f)|_{(x,y)=(x_0,x_0)}$. Let $\xi = d_x f |_{(x,y)=(x_0,x_0)}$. Recalling that $L_f^\sigma$ is the graph of $\varphi$, we have that $x_0$ is a critical point of $\psi$, i.e., $d_{x_0} \psi =0$, if and only if $d_y f |_{(x,y)=(x_0,x_0)} = -\xi$, which happens if and only if the point in $L_f^\sigma$ corresponding to $(x,y) = (x_0,x_0)$ is $(x_0,x_0,\xi,\xi)$, i.e., $\varphi(x_0,\xi) = (x_0,\xi)$ is a fixed point. \end{proof} Consider the iterates $\varphi^N = \varphi \circ \varphi \circ \ldots \circ \varphi$, $N=1,2,\ldots$, given by $N$ successive applications of $\varphi$. According to the previous proposition, if the symplectomorphism $\varphi^N: M \to M$ is generated by some function $f^{(N)}$, then there is a one-to-one correspondence between the set of fixed points of $\varphi^N$ and the set of critical points of $\psi^{(N)} : X \to {\mathbb R}\ , \ \psi^{(N)} (x) = f^{(N)} (x,x)$. It remains to know whether $\varphi^N$ admits a generating function.\index{function ! generating}\index{generating function} We will see that to a certain extent it does. For each pair $x,y \in X$, define a map $X \to {\mathbb R}$, $z \mapsto f(x,z) + f (z,y)$.
Suppose that this map has a unique critical point $z_0$ and that $z_0$ is nondegenerate. As $z_0$ is determined for each $(x,y)$ implicitly by the equation $d_y f (x,z_0) + d_x f (z_0,y) =0$, by nondegeneracy, the implicit function theorem assures that $z_0 = z_0 (x,y)$ is a smooth function. Hence, the function \[ f^{(2)}: X \times X \longrightarrow {\mathbb R}\ , \quad f^{(2)} (x,y) := f(x,z_0) + f (z_0,y) \] is smooth. Since $\varphi$ is generated by $f$, and $z_0$ is critical, we have \[ \begin{array}{crclcl} & \varphi^2 (x,d_x f^{(2)} (x,y) ) & = & \varphi ( \varphi (x, d_x f (x,z_0)) ) & = & \varphi ( z_0, -d_y f (x,z_0)) \\ = & \varphi ( z_0, d_x f (z_0,y) ) & = & (y, -d_y f (z_0,y) ) & = & (y, -d_y f^{(2)} (x,y) )\ . \end{array} \] We conclude that the function $f^{(2)}$ is a generating function for $\varphi^2$, as long as, for each $\xi \in T^*_x X$, there is a unique $y \in X$ for which $d_x f^{(2)} (x,y)$ equals $\xi$. There are similar partial recipes for generating functions of higher iterates. In the case of $\varphi^3$, suppose that the function $X \times X \to {\mathbb R}$, $(z,u) \mapsto f(x,z) + f(z,u) + f (u,y)$, has a unique critical point $(z_0,u_0)$ and that it is a nondegenerate critical point. A generating function would be $f^{(3)} (x,y) = f(x,z_0) + f(z_0,u_0) + f (u_0,y)$. When the generating functions $f$, $f^{(2)}$, $f^{(3)}$, \ldots , $f^{(N)}$ exist given by these formulas, the \textbf{$N$-periodic points} of $\varphi$, i.e., the fixed points of $\varphi^N$, are in one-to-one correspondence with the critical points of \[ (x_1, \ldots , x_N) \longmapsto f(x_1,x_2) + f(x_2,x_3) + \ldots + f (x_{N-1},x_N) + f (x_N,x_1) \ . \] \begin{example} Let $\chi : {\mathbb R} \to {\mathbb R}^2$ be a smooth plane curve that is 1-periodic, i.e., $\chi (s+1) = \chi (s)$, and parametrized by arc-length, i.e., $\left| \frac{d \chi}{ds} \right| = 1$.
Assume that the region $Y$ enclosed by the image of $\chi$ is \textbf{convex}, i.e., for any $s \in {\mathbb R}$, the tangent line $\{ \chi(s) + t \frac{d \chi}{ds} \mid t \in {\mathbb R} \}$ intersects the image $X := \partial Y$ of $\chi$ only at the point $\chi (s)$. Suppose that a ball\index{billiards} is thrown into a billiard table of shape $Y$ rolling with constant velocity and bouncing off the boundary subject to the usual law of reflection. The map describing successive points on the orbit of the ball is \[ \begin{array}{rrcl} \varphi: & {\mathbb R} / {\mathbb Z} \times (-1,1) & \longrightarrow & {\mathbb R} / {\mathbb Z} \times (-1,1) \\ & (x,v) & \longmapsto & (y,w) \ , \end{array} \] saying that when the ball bounces off $\chi(x)$ with angle $\theta = \arccos v$, it will next collide with $\chi(y)$ and bounce off with angle $\nu = \arccos w$. Then the function $f : {\mathbb R} / {\mathbb Z} \times {\mathbb R} / {\mathbb Z} \to {\mathbb R}$ defined by $f (x,y) = -|\chi(x)-\chi(y)|$ is smooth off the diagonal, and for $\varphi (x,v) = (y,w)$ satisfies \[ \left\{ \begin{array}{lclcccc} \displaystyle{\frac{\partial f}{\partial x} (x,y)} & = & \displaystyle{\left. \frac{\chi(y)-\chi(x)}{|\chi(x)-\chi(y)|} \cdot \frac{d\chi}{ds} \right|_{s=x}} & = & \cos \theta & = & v \\ \\ \displaystyle{\frac{\partial f}{\partial y} (x,y)} & = & \displaystyle{\left. \frac{\chi(x)-\chi(y)}{|\chi(x)-\chi(y)|} \cdot \frac{d\chi}{ds} \right|_{s=y}} & = & - \cos \nu & = & -w \ . \end{array} \right. \] We conclude that $f$ is a generating function for $\varphi$. Similar approaches work for higher-dimensional billiard problems. Periodic points are obtained by finding critical points of real functions of $N$ variables in $X$, \[ (x_1, \ldots , x_N) \longmapsto |\chi(x_1)-\chi(x_2)| + \ldots + |\chi(x_{N-1})-\chi(x_N)| + |\chi(x_N)-\chi(x_1)| \ , \] that is, by finding the $N$-sided (generalized) polygons inscribed in $X$ of critical perimeter.
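For instance, when $\chi$ traces a circle of perimeter $1$ (hence of radius $\frac{1}{2\pi}$), the chord length between bounce points is $|\chi(x)-\chi(y)| = \frac{1}{\pi} \left| \sin \pi (x-y) \right|$, so $f$ depends only on the difference $x-y$ and $\frac{\partial f}{\partial x} = - \frac{\partial f}{\partial y}$, i.e., $v = w$: the reflection angle is preserved along each orbit. Since a chord making angle $\theta$ with the tangent subtends a central angle $2\theta$, we get $\varphi (x,v) = (x + \frac{\theta}{\pi} , v)$ with $\theta = \arccos v$. An orbit is then $N$-periodic exactly when $\frac{\theta}{\pi} = \frac{k}{N}$ is rational, and the corresponding polygons of critical perimeter are the regular $N$-gons and star polygons inscribed in the circle.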
Notice that ${\mathbb R} / {\mathbb Z} \times (-1,1) \simeq \{ (x,v) \mid x \in X, v \in T_xX, |v|<1 \}$ is the open unit tangent ball bundle of a circle $X$, which is an open annulus $A$, and the map $\varphi: A \to A$ is area-preserving, as in the next two theorems. \end{example} While studying {\em Poincar\'e return maps} in dynamical systems, Poincar\'e arrived at the following results. \begin{theorem}\label{thm:poincare_recurrence}\index{theorem ! Poincar\'e recurrence}\index{recurrence}\index{Poincar\'e ! recurrence theorem} \textbf{(Poincar\'e Recurrence Theorem)} $\;$ Let $\varphi:A \to A$ be a volume-preserving diffeomorphism of a finite-volume manifold $A$, and ${\mathcal U}$ a nonempty open set in $A$. Then there is $q \in {\mathcal U}$ and a positive integer $N$ such that $\varphi^N (q) \in {\mathcal U}$. \end{theorem} Hence, under iteration, a mechanical system governed by $\varphi$ will eventually return arbitrarily close to the initial state. \begin{proof} Let ${\mathcal U}_0 = {\mathcal U}, {\mathcal U}_1=\varphi({\mathcal U}), {\mathcal U}_2 = \varphi^2 ({\mathcal U}), \ldots$. If all of these sets were disjoint, then, since $\mbox{Volume } ({\mathcal U}_i) = \mbox{ Volume } ({\mathcal U}) > 0$ for all $i$, the volume of $A$ would be greater than or equal to $\sum_i \mbox{ Volume } ({\mathcal U}_i) = \infty$. To avoid this contradiction we must have $\varphi^k ({\mathcal U}) \cap \varphi^\ell ({\mathcal U}) \ne \emptyset$ for some $k > \ell$, which implies $\varphi^{k-\ell} ({\mathcal U}) \cap {\mathcal U} \ne \emptyset$. \end{proof} \vspace*{-2ex} \begin{theorem}\index{Poincar\'e ! last geometric theorem}\index{theorem !
Poincar\'e's last geometric theorem}\label{thm:poincare_birkhoff} \textbf{(Poincar\'e's Last Geometric Theorem)} $\;$ Suppose that $\varphi:A \to A$ is an area-preserving diffeomorphism of the closed annulus $A ={\mathbb R} / {\mathbb Z} \times [-1,1]$ that preserves the two components of the boundary and twists them in opposite directions. Then $\varphi$ has at least two fixed points. \end{theorem} This theorem was proved in 1913 by Birkhoff\index{Birkhoff ! Poincar\'e-Birkhoff theorem}~\cite{bi:dynamical}, and hence is also called the \textbf{Poincar\'e-Birkhoff theorem}\index{Poincar\'e ! Poincar\'e-Birkhoff theorem}\index{theorem ! Poincar\'e-Birkhoff}. It has important applications to dynamical systems\index{dynamical system} and celestial mechanics\index{mechanics ! celestial}. The {\em Arnold conjecture}\index{Arnold ! conjecture}\index{conjecture ! Arnold} on the existence of fixed points for symplectomorphisms\index{fixed point}\index{symplectomorphism ! fixed point}\index{symplectomorphism ! Arnold conjecture} of compact manifolds (see Section~\ref{sec:arnold_floer}) may be regarded as a generalization of the Poincar\'e-Birkhoff theorem. This conjecture has motivated a significant amount of research involving a more general notion of generating function; see, for instance,~\cite{el-gr:lagrangian,gi:periodic}.
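\begin{example} As a simple check of Proposition~\ref{prop:fixed_vs_critical}, consider again the generating function $f (x,y) = -\frac{|x-y|^2}{2}$ on ${\mathbb R}^n \times {\mathbb R}^n$, which generates the translation $\varphi (x,\xi) = (x + \xi , \xi)$ of $T^* {\mathbb R}^n$. Here $\psi (x) = f(x,x) = 0$, so every $x_0 \in {\mathbb R}^n$ is a critical point of $\psi$, with corresponding covector $\xi = d_x f |_{(x,y)=(x_0,x_0)} = 0$. The proposition thus predicts that the fixed points of $\varphi$ are exactly the points $(x_0 , 0)$, which is clear from the formula for $\varphi$. \end{example}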
\ssubsection{Lagrangians and Special Lagrangians in ${\mathbb C}^n$} \label{sec:special_lagrangians} The standard \textbf{hermitian inner product} $h(\cdot,\cdot)$ on ${\mathbb C}^n$ has real and imaginary parts given by the euclidean inner product $\langle \cdot , \cdot \rangle$ and (minus) the symplectic form $\omega_0$, respectively: for $v=(x_1 + i y_1, \ldots , x_n + i y_n), u=(a_1 + i b_1, \ldots , a_n + i b_n) \in {\mathbb C}^n$, \[ \begin{array}{rcl} h (v,u) & = & \textstyle{\sum \limits_{k=1}^n (x_k + i y_k) (a_k - i b_k)} \\ & = & \textstyle{\sum \limits_{k=1}^n (x_k a_k + y_k b_k) - i \sum \limits_{k=1}^n (x_k b_k - y_k a_k)} \\ & = & \langle v, u \rangle - i \omega_0 (v,u) \ . \phantom{\sum \limits_{k=1}^n} \end{array} \] \begin{lemma} \label{lem:lagrangian} Let $W$ be a subspace of $({\mathbb C}^n, \omega_0)$ and $e_1, \ldots , e_n$ vectors in ${\mathbb C}^n$. Then: \begin{itemize} \item[(a)] $W$ is lagrangian if and only if $W^\perp = i W$; \item[(b)] $(e_1, \ldots , e_n)$ is an orthonormal basis of a lagrangian subspace if and only if $(e_1, \ldots , e_n)$ is a unitary basis of ${\mathbb C}^n$. \end{itemize} \end{lemma} \vspace*{-2ex} \begin{proof} \begin{itemize} \item[(a)] We always have $\omega_0 (v,u) = - \mathrm{Im}\, h (v,u) = \mathrm{Re}\, h (iv,u) = \langle iv, u \rangle$. It follows that, if $W$ is lagrangian, so that $\omega_0 (v,u)=0$ for all $v,u \in W$, then $i W \subseteq W^\perp$. These spaces must be equal because they have the same dimension. Reciprocally, when $\langle iv, u \rangle = 0$ for all $v,u \in W$, the equality above shows that $W$ must be isotropic. Since $\dim W = \dim iW = \dim W^\perp = 2n - \dim W$, the dimension of $W$ must be $n$. \item[(b)] If $(e_1, \ldots , e_n)$ is an orthonormal basis of a lagrangian subspace $W$, then, by the previous part, $(e_1, \ldots , e_n, ie_1, \ldots , ie_n)$ is an orthonormal basis of ${\mathbb C}^n$ as a real vector space.
Hence $(e_1, \ldots , e_n)$ must be a complex basis of ${\mathbb C}^n$ and it is unitary because $h (e_j,e_k) = \langle e_j, e_k \rangle - i \omega_0 (e_j,e_k) = \delta_{jk}$. Conversely, if $(e_1, \ldots , e_n)$ is a unitary basis of ${\mathbb C}^n$, then the real span of these vectors is lagrangian ($\omega_0 (e_j,e_k) = - \mathrm{Im}\, h (e_j,e_k) = 0$) and they are orthonormal ($\langle e_j, e_k \rangle = \mathrm{Re}\, h (e_j,e_k) = \delta_{jk}$). \end{itemize} \end{proof} The \textbf{lagrangian grassmannian} $\Lambda_n$ is the set of all lagrangian subspaces of ${\mathbb C}^n$. It follows from part~(b) of Lemma~\ref{lem:lagrangian} that $\Lambda_n$ is the set of all subspaces of ${\mathbb C}^n$ admitting an orthonormal basis that is a unitary basis of ${\mathbb C}^n$. Therefore, we have \[ \Lambda_n \simeq \mathrm{U}(n) / \mathrm{O} (n) \ . \] Indeed $\mathrm{U} (n)$ acts transitively on $\Lambda_n$: given $W,W' \in \Lambda_n$ with orthonormal bases $(e_1, \ldots , e_n)$, $(e_1', \ldots , e_n')$ respectively, there is a unitary transformation of ${\mathbb C}^n$ that maps $(e_1, \ldots , e_n)$ to $(e_1', \ldots , e_n')$ as unitary bases of ${\mathbb C}^n$. And the stabilizer of ${\mathbb R}^n \in \Lambda_n$ is the subgroup of those unitary transformations that preserve this lagrangian subspace, namely $\mathrm{O}(n)$. It follows that $\Lambda_n$ is a compact connected manifold of dimension $\frac{n(n+1)}{2}$; cf.\ the last example of Section~\ref{symplectic_linear_algebra}. The lagrangian grassmannian comes with a \textbf{tautological vector bundle} \[ \tau_n := \{ (W,v) \in \Lambda_n \times {\mathbb C}^n \mid v \in W \} \ , \] whose fiber over $W \in \Lambda_n$ is the $n$-dimensional real space $W$.
It is a consequence of part~(a) of Lemma~\ref{lem:lagrangian} that the following map gives a well-defined global isomorphism of the complexification $\tau_n \otimes_{\mathbb R} {\mathbb C}$ with the trivial bundle $\underline{{\mathbb C}^n}$ over $\Lambda_n$ (i.e., {\em a global trivialization}): $(W,v \otimes c) \mapsto (W, cv)$, for $W \in \Lambda_n, v \in W, c \in {\mathbb C}$. \begin{definition} A \textbf{lagrangian immersion} of a manifold $X$ is an immersion $f : X \to {\mathbb C}^n$ such that $df_p (T_pX)$ is a lagrangian subspace of $({\mathbb C}^n, \omega_0)$, for every $p \in X$. \end{definition} \vspace*{-2ex} \begin{example} The graph of a map $h: {\mathbb R}^n \to i {\mathbb R}^n$ is an embedded $n$-dimensional submanifold $X$ of ${\mathbb C}^n$. Its tangent space at $(p,h(p))$ is $\{ v + dh_p (v) \mid v \in {\mathbb R}^n \}$. Let $e_1, \ldots , e_n$ be the standard basis of ${\mathbb R}^n$. Since $\omega_0 (e_k + dh_p (e_k) , e_j + dh_p (e_j)) = \langle e_k , -i \, dh_p (e_j) \rangle + \langle e_j , i \, dh_p (e_k) \rangle$, we see that $X$ is lagrangian if and only if $\frac{\partial h_k}{\partial x_j} = \frac{\partial h_j}{\partial x_k}$, $\forall j,k$, which, since ${\mathbb R}^n$ is simply connected, holds if and only if $h$ is the gradient of some $H : {\mathbb R}^n \to i {\mathbb R}$. \end{example} If $f : X \to {\mathbb C}^n$ is a lagrangian immersion, we can define a \textbf{Gauss map} \[ \begin{array}{rrcl} \lambda_f : & X & \longrightarrow & \Lambda_n \\ & p & \longmapsto & df_p (T_p X) \ . \end{array} \] Since $\lambda_f^* \tau_n = TX$ and $\tau_n \otimes {\mathbb C} \simeq \underline{{\mathbb C}^n}$, we see that a necessary condition for a lagrangian immersion $X \to {\mathbb C}^n$ to exist is that the complexification of $TX$ be trivializable.
Using the h-principle (Section~\ref{sec:compatible_almost}), Gromov~\cite{gr:partial} showed that this is also sufficient: {\em an $n$-dimensional manifold $X$ admits a lagrangian immersion into ${\mathbb C}^n$ if and only if the complexification of its tangent bundle is trivializable.} \begin{example} For the unit sphere $S^n = \{ (t,x) \in {\mathbb R} \times {\mathbb R}^n \, : \, t^2 + |x|^2 = 1 \}$, the \textbf{Whitney sphere immersion} is the map \[ \begin{array}{rrcl} f : & S^n & \longrightarrow & {\mathbb C}^n \\ & (t,x) & \longmapsto & x + itx \ . \end{array} \] The only self-intersection is at the origin where $f (-1,0,\ldots,0) = f (1,0,\ldots,0)$. Since $T_{(t,x)} S^n = (t,x)^\perp$, the differential $df_{(t,x)} : (u,v) \mapsto v + i (tv + ux)$ is always injective: $v + i (tv + ux)= 0 \Leftrightarrow v=0 \mbox{ and } ux=0$, but when $x = 0$ we have $t=\pm 1$ and $T_{(\pm 1,0)} S^n = \{0\} \times {\mathbb R}^n$, so we must have $u=0$. We conclude that $f$ is an immersion. By computing $\omega_0$ at two vectors of the form $v + i (tv + ux)$, we find that the image $df_p (T_p S^n)$ is an $n$-dimensional isotropic subspace of ${\mathbb C}^n$. Therefore, $f$ is a lagrangian immersion of $S^n$, and the complexification $TS^n \otimes {\mathbb C}$ must always be trivializable, though the tangent bundle $TS^n$ is only trivializable in dimensions $n=0,1,3,7$. \end{example} The \textbf{special lagrangian grassmannian} $S\Lambda_n$ is the set of all {\em oriented} subspaces of ${\mathbb C}^n$ admitting a {\em positive} orthonormal basis $(e_1, \ldots , e_n)$ that is a {\em special} unitary basis of ${\mathbb C}^n$. By the characterization of lagrangian subspaces in part~(b) of Lemma~\ref{lem:lagrangian}, it follows that the elements of $S\Lambda_n$ are indeed lagrangian subspaces.
Similarly to the case of the lagrangian grassmannian, we have that \[ S \Lambda_n \simeq \mathrm{SU}(n) / \mathrm{SO} (n) \] is a compact connected manifold of dimension $\frac{n(n+1)}{2} - 1$. We can single out the {\em special} lagrangian subspaces by expressing the condition on the determinant in terms of the real $n$-form in ${\mathbb C}^n$ \[ \beta : = \mathrm{Im}\, \Omega \ , \quad \mbox{ where } \quad \Omega : = dz_1 \wedge \ldots \wedge dz_n \ . \] Since for $A \in \mathrm{SO}(n)$, we have $\det A = 1$ and $\Omega (e_1, \ldots , e_n) = \Omega (Ae_1, \ldots , Ae_n)$, we see that, for an oriented real $n$-dimensional subspace $W \subset {\mathbb C}^n$, the number $\Omega (e_1, \ldots , e_n)$ does not depend on the choice of a positive orthonormal basis $(e_1, \ldots , e_n)$ of $W$, thus can be denoted $\Omega (W)$ and its imaginary part $\beta (W)$. \begin{proposition} \label{prop:special_lagrangian} A subspace $W$ of $({\mathbb C}^n, \omega_0)$ has an orientation for which it is a special lagrangian if and only if $W$ is lagrangian and $\beta (W) = 0$. \end{proposition} \vspace*{-2ex} \begin{proof} Any orthonormal basis $(e_1, \ldots , e_n)$ of a lagrangian subspace $W \subset {\mathbb C}^n$ is the image of the canonical basis of ${\mathbb C}^n$ by some $A \in \mathrm{U} (n)$, and $\Omega (W) = \det A \in S^1$. Therefore, $W$ admits an orientation for which such a {\em positive} $(e_1, \ldots , e_n)$ is a {\em special} unitary basis of ${\mathbb C}^n$ if and only if $\det A = \pm 1$, i.e., $\beta (W) = 0$. \end{proof} \vspace*{-1ex} \begin{definition} A \textbf{special lagrangian immersion} of an oriented manifold $X$ is a lagrangian immersion $f : X \to {\mathbb C}^n$ such that, at each $p \in X$, the space $df_p (T_pX)$ is a special lagrangian subspace of $({\mathbb C}^n, \omega_0)$. \end{definition} For a special lagrangian immersion $f$, the Gauss map $\lambda_f$ takes values in $S\Lambda_n$.
By Proposition~\ref{prop:special_lagrangian}, the immersion $f$ of an $n$-dimensional manifold $X$ in $({\mathbb C}^n,\omega_0)$ is {\em special lagrangian} if and only if $f^* \omega_0 = 0$ and $f^* \beta = 0$. \begin{example} In ${\mathbb C}^2$, writing $z_k = x_k + i y_k$, we have $\beta = dx_1 \wedge dy_2 + dy_1 \wedge dx_2$. We have seen that the graph of the gradient $i \nabla H$ is lagrangian, for any function $H : {\mathbb R}^2 \to {\mathbb R}$. So $f(x_1,x_2) = (x_1,x_2,i \frac{\partial H}{\partial x_1}, i \frac{\partial H}{\partial x_2})$ is a lagrangian immersion. For $f$ to be a {\em special} lagrangian immersion, we need the vanishing of \[ f^* \beta = dx_1 \wedge d \left( \frac{\partial H}{\partial x_2} \right) + d \left( \frac{\partial H}{\partial x_1} \right) \wedge dx_2 = \left( \frac{\partial^2 H}{\partial x_1^2} + \frac{\partial^2 H}{\partial x_2^2} \right) dx_1 \wedge dx_2 \ . \] Hence the graph of $i \nabla H$ is special lagrangian if and only if $H$ is {\em harmonic}. \end{example} If $f : X \to {\mathbb C}^n$ is a special lagrangian immersion, then $f^* \Omega$ is an exact (real) volume form: $f^* \Omega = d \mathrm{Re}\, (z_1 dz_2 \wedge \ldots \wedge dz_n)$. We conclude, by Stokes theorem, that there can be no special lagrangian immersion of a compact manifold in ${\mathbb C}^n$. {\em Calabi-Yau manifolds}\footnote{\textbf{Calabi-Yau manifolds} are compact {\em K\"ahler manifolds} (Section~\ref{sec:kahler}) with vanishing first Chern class.} are more general manifolds where a definition of special lagrangian submanifold makes sense and where the space of special lagrangian embeddings of a compact manifold is interesting. Special lagrangian geometry was introduced by Harvey and Lawson~\cite{ha-la:calibrated}. For a treatment of lagrangian and special lagrangian submanifolds with many examples, see for instance~\cite{au:barcelona}.
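For instance, ${\mathbb R}^n \subset {\mathbb C}^n$ with its standard orientation is special lagrangian, since $\Omega ({\mathbb R}^n) = dz_1 \wedge \ldots \wedge dz_n (e_1, \ldots , e_n) = 1$. Rotating by a unit complex number, the lagrangian subspace $e^{i\theta} {\mathbb R}^n$ has positive orthonormal basis $(e^{i\theta} e_1, \ldots , e^{i\theta} e_n)$, so $\Omega (e^{i\theta} {\mathbb R}^n) = \det (e^{i\theta} \, \mathrm{Id}) = e^{in\theta}$ and $\beta (e^{i\theta} {\mathbb R}^n) = \sin n\theta$. By Proposition~\ref{prop:special_lagrangian}, the subspace $e^{i\theta} {\mathbb R}^n$ admits an orientation for which it is special lagrangian exactly when $\theta$ is an integer multiple of $\frac{\pi}{n}$.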
\ssection{Complex Structures} \index{complex structure} \label{section3} \ssubsection{Compatible Linear Structures} \label{compatible_linear_structures} A \textbf{complex structure}\index{complex structure ! on a vector space} on a vector space $V$ is a linear map $J: V \to V$ such that $J^2 = -\Id$. The pair $(V,J)$ is then called a \textbf{complex vector space}\index{complex vector space}\index{vector space ! complex}. A complex structure $J$ on $V$ is equivalent to a structure of vector space over ${\mathbb C}$, the map $J$ corresponding to multiplication by $i$. If $(V, \Omega)$ is a symplectic vector space, a complex structure $J$ on $V$ is said to be \textbf{compatible}\index{compatible ! complex structure}\index{complex structure ! compatibility} (with $\Omega$, or $\Omega$-compatible) if the bilinear map $G_{_J} : V \times V \to {\mathbb R}$ defined by $G_{_J}(u,v) = \Omega(u,Jv)$ is an inner product on $V$. This condition comprises $J$ being a symplectomorphism (i.e., $\Omega(Ju,Jv) = \Omega(u,v)$ $\forall u,v$) and the so-called \textbf{taming}\index{taming}: $\Omega(u, Ju) > 0$, $\forall u \neq 0$. \begin{example} For the symplectic vector space $({\mathbb R}^{2n}, \Omega_0)$ with symplectic basis $e_1=(1,0,\ldots,0), \ldots, e_n, f_1, \ldots, f_n=(0,\ldots,0,1)$, there is a standard compatible complex structure $J_0$ determined by $J_0(e_j) = f_j$ and $J_0(f_j) = -e_j$ for all $j=1,\ldots,n$. This corresponds to a standard identification of ${\mathbb R}^{2n}$ with ${\mathbb C}^n$, and $\Omega_0 (u,J_0v) = \langle u,v \rangle$ is the standard euclidean inner product. With respect to the symplectic basis $e_1, \ldots, e_n, f_1, \ldots, f_n$, the map $J_0$ is represented by the matrix \[ \left[ \begin{array}{cc} 0 & -\mbox{Id} \\ \mbox{Id} & 0 \end{array} \right] \ . \] The \textbf{symplectic linear group}\index{symplectic !
linear group}, $\mathrm{Sp} (2n) := \{ A \in \mathrm{GL} (2n;{\mathbb R}) \, | \, \Omega_0 (Au , Av) = \Omega_0 (u , v)$ $\mbox{for all } u,v \in {\mathbb R}^{2n} \}$, is the group of all linear transformations of ${\mathbb R}^{2n}$ that preserve the standard symplectic structure. The \textbf{orthogonal group} $\mathrm{O} (2n)$ is the group formed by the linear transformations $A$ that preserve the euclidean inner product, $\langle Au , Av \rangle = \langle u , v \rangle$, for all $u,v \in {\mathbb R}^{2n}$. The \textbf{general linear group} $\mathrm{GL} (n;{\mathbb C})$ is the group of linear transformations $A: {\mathbb R}^{2n} \to {\mathbb R}^{2n}$ commuting with $J_0$, $A(J_0v) = J_0 (Av)$, for all $v \in {\mathbb R}^{2n}$.\footnote{Identify the complex $n \times n$ matrix $X + i Y$ with the real $2n \times 2n$ matrix $\left[ \begin{array}{cc} X & -Y \\ Y & X \end{array} \right]$.} The compatibility between the structures $\Omega_0$, $\langle \cdot , \cdot \rangle$ and $J_0$ implies that the intersection of {\em any two} of these subgroups of $\mathrm{GL} (2n;{\mathbb R})$ is the same group, namely the \textbf{unitary group}\index{unitary group} $\mathrm{U} (n)$. \end{example} As $({\mathbb R}^{2n}, \Omega_0)$ is the prototype of a $2n$-dimensional symplectic vector space, the preceding example shows that compatible complex structures always exist on symplectic vector spaces.\footnote{Conversely, given $(V,J)$, there is a symplectic $\Omega$ with which $J$ is compatible: take $\Omega (u,v) = G(Ju,v)$ for an inner product $G$ such that $J^t = -J$.} There is yet a way to produce a {\em canonical} compatible complex structure $J$ after the choice of an inner product $G$ on $(V, \Omega)$, though the starting $G(u,v)$ is usually different from $G_{_J}(u,v) := \Omega(u,Jv)$. \begin{proposition} \label{polar_decomposition} Let $(V, \Omega)$ be a symplectic vector space, with an inner product $G$.
Then there is a canonical compatible complex structure $J$ on $V$. \end{proposition} \vspace*{-2ex} \begin{proof} By nondegeneracy of $\Omega$ and $G$, the maps $u \mapsto \Omega(u, \cdot)$ and $w \mapsto G(w, \cdot)$ are both isomorphisms between $V$ and $V^*$. Hence, $\Omega(u,v) = G(Au,v)$ for some linear $A: V \to V$. The map $A$ is skew-symmetric, and the product $AA^t$ is symmetric\footnote{A map $B:V \to V$ is \textbf{symmetric}, respectively \textbf{skew-symmetric}, when $B^t = B$, resp.\ $B^t = -B$, where the transpose $B^t : V \to V$ is determined by $G(B^tu, v) = G(u,Bv)$.} and positive: $G(AA^tu,u) = G(A^tu, A^tu) > 0$, for $u \neq 0$. By the spectral theorem, these properties imply that $AA^t$ diagonalizes with positive eigenvalues $\lambda_i$, say $AA^t = B \ \mbox{diag} \, ( \lambda_1, \ldots, \lambda_{2n} ) \ B^{-1}$. We may hence define an arbitrary real power of $AA^t$ by rescaling the eigenspaces, in particular, \[ \sqrt{AA^t} := B \, \mbox{diag} \, ( \sqrt{\lambda_1}, \ldots, \sqrt{\lambda_{2n}} ) \ B^{-1} \ . \] The linear transformation $\sqrt{AA^t}$ is symmetric, positive-definite and does not depend on the choice of $B$ nor of the ordering of the eigenvalues. It is completely determined by its effect on each eigenspace of $AA^t$: on the eigenspace corresponding to the eigenvalue $\lambda_k$, the map $\sqrt{AA^t}$ is defined to be multiplication by $\sqrt{\lambda_k}$. Let $J := (\sqrt{AA^t})^{-1}A$. Since $A$ and $\sqrt{AA^t}$ commute, $J$ is orthogonal ($JJ^t = \Id$), as well as skew-symmetric ($J^t = -J$). It follows that $J$ is a complex structure on $V$. Compatibility is easily checked: \[ \begin{array}{c} \Omega(Ju, Jv) = G(AJu,Jv) = G(JAu,Jv) = G(Au,v) = \Omega(u,v) \mbox{ and }\\ \Omega(u,Ju) = G(Au,Ju) = G(-JAu, u) = G(\sqrt{AA^t} \, u,u) > 0\ , \mbox{ for }u \neq 0 \ .
\end{array} \] \end{proof} The factorization $A = \sqrt{AA^t} \, J$ is called the \textbf{polar decomposition}\index{polar decomposition}\index{complex structure ! polar decomposition} of $A$. \begin{remark} Being {\em canonical}, this construction may be {\em smoothly} performed: when $(V_t, \Omega_t)$ is a family of symplectic vector spaces with a family $G_t$ of inner products, all depending smoothly on a parameter $t$, an adaptation of the previous proof shows that there is a smooth family $J_t$ of compatible complex structures on $(V_t, \Omega_t)$. \end{remark} Let $(V, \Omega)$ be a symplectic vector space of dimension $2n$, and let $J$ be a complex structure on $V$. If $J$ is $\Omega$-compatible and $L$ is a lagrangian subspace of $(V, \Omega)$, then $JL$ is also lagrangian and $JL = L^\perp$, where $\perp$ indicates orthogonality with respect to the inner product $G_{_J} (u,v) = \Omega (u, Jv)$. Therefore, a complex structure $J$ is $\Omega$-compatible {\em if and only if} there exists a symplectic basis for $V$ of the form \[ e_1, e_2, \ldots , e_n, f_1=Je_1, f_2=Je_2, \ldots, f_n=Je_n \ . \] Let ${\cal J} (V,\Omega)$ be the \textbf{set of all compatible complex structures in a symplectic vector space} $(V,\Omega)$. \begin{proposition} \label{prop:linear_contractible} The set ${\cal J} (V,\Omega)$ is contractible.\footnote{\textbf{Contractibility} of ${\cal J} (V,\Omega)$ means that there exists a homotopy $h_t: {\mathcal J}(V,\Omega) \to {\mathcal J}(V, \Omega)$, $0 \leq t \leq 1$, starting at the identity $h_0 = \Id$, finishing at a trivial map $h_1: {\mathcal J}(V,\Omega) \to \{J_0\}$, and fixing $J_0$ (i.e., $h_t(J_0) = J_0$, $\forall t$) for some $J_0 \in {\mathcal J}(V,\Omega)$.} \end{proposition} \vspace*{-2ex} \begin{proof} Pick a lagrangian subspace $L_0$ of $(V,\Omega)$.
Let ${\cal L} (V,\Omega,L_0)$ be the space of all lagrangian subspaces of $(V,\Omega)$ that intersect $L_0$ transversally. Let ${\cal G} (L_0)$ be the space of all inner products on $L_0$. The map \[ \begin{array}{rrcl} \Psi : & {\cal J} (V,\Omega) & \longrightarrow & {\cal L} (V,\Omega,L_0)\times{\cal G}(L_0) \\ & J & \longmapsto & (J L_0, G_{_J}|_{L_0}) \end{array} \] is a homeomorphism, with inverse as follows. Take $(L,G) \in {\cal L} (V,\Omega,L_0)\times{\cal G}(L_0)$. For $v\in L_0$, $v^\perp = \{ u \in L_0 \, | \, G(u,v) = 0 \}$ is an $(n-1)$-dimensional subspace of $L_0$; its symplectic orthogonal $(v^\perp)^\Omega$ is $(n+1)$-dimensional. Then $(v^\perp)^\Omega \cap L$ is $1$-dimensional. Let $Jv$ be the unique vector in this line such that $\Omega (v,Jv) =1$. If we take $v$'s in some $G$-orthonormal basis of $L_0$, this defines an element $J \in {\cal J} (V,\Omega)$. The set ${\cal L} (V,\Omega,L_0)$ can be identified with the vector space of all symmetric $n \times n$ matrices. In fact, any $n$-dimensional subspace $L$ of $V$ that is transverse to $L_0$ is the graph of a linear map $JL_0 \to L_0$, and the lagrangian ones correspond to symmetric maps (cf.\ Section~\ref{symplectic_linear_algebra}). Hence, ${\cal L} (V,\Omega,L_0)$ is contractible. Since ${\cal G} (L_0)$ is contractible (it is even convex), we conclude that ${\cal J} (V,\Omega)$ is contractible. \end{proof} \ssubsection{Compatible Almost Complex Structures} \label{sec:compatible_almost} \index{almost complex structure ! definition}\index{almost complex structure ! compatible} An \textbf{almost complex structure}\index{almost complex structure ! definition} on a manifold $M$ is a smooth\footnote{\textbf{Smoothness} means that for any vector field $v$, the image $Jv$ is a (smooth) vector field.} field of complex structures on the tangent spaces, $J_p: T_pM \to T_pM$, $p \in M$. The pair $(M,J)$ is then called an \textbf{almost complex manifold}\index{almost complex manifold}.
\begin{definition} An almost complex structure $J$ on a symplectic manifold $(M,\omega)$ is \textbf{compatible}\index{almost complex structure ! compatibility}\index{compatible ! almost complex structure} (with $\omega$ or $\omega$-compatible) if the map that assigns to each point $p \in M$ the bilinear pairing $g_p: T_pM \times T_pM \to {\mathbb R}$, $g_p(u,v) := \omega_p(u, J_pv)$ is a riemannian metric on $M$.\index{riemannian ! metric}\index{metric} A triple $(\omega, g, J)$ of a symplectic form, a riemannian metric and an almost complex structure on a manifold $M$ is a \textbf{compatible triple}\index{compatible ! triple} when $g(\cdot, \cdot) = \omega(\cdot, J\cdot)$. \end{definition} If $(\omega, g, J)$ is a compatible triple,\index{compatible ! triple} each of $\omega$, $J$ or $g$ can be written in terms of the other two. \begin{examples} \begin{enumerate} \item If we identify ${\mathbb R}^{2n}$ with ${\mathbb C}^n$ using coordinates $z_j = x_j + i y_j$, multiplication by $i$ induces a constant linear map $J_0$ on the tangent spaces such that $J_0^2 = - \Id$, known as the \textbf{standard almost complex structure} on ${\mathbb R}^{2n}$: \[ J_0 \left (\frac{\partial}{\partial x_j} \right) = \frac{\partial}{\partial y_j}\ , \qquad J_0 \left( \frac{\partial}{\partial y_j} \right) = -\frac{\partial}{\partial x_j}\ . \] For the standard symplectic form $\omega_0 = \sum dx_j \wedge dy_j$ and the euclidean inner product $g_0 = \langle \cdot, \cdot \rangle$, the compatibility relation holds: $\omega_0(u,v) = g_0(J_0(u),v)$. \item Any oriented hypersurface $\Sigma \subset {\mathbb R}^3$ carries a natural symplectic form and a natural compatible almost complex structure induced by the standard inner (or dot) and exterior (or vector) products.
They are given by the formulas $\omega_p (u,v) := \langle \nu_p , u \times v \rangle$ and $J_p (v) = \nu_p \times v$ for $v \in T_p\Sigma$, where $\nu_p$ is the outward-pointing unit normal vector at $p \in \Sigma$ (in other words, $\nu : \Sigma \to S^2$ is the {\em Gauss map}). Cf.\ Example~3 of Section~\ref{symplectic_forms}. The corresponding riemannian metric is the restriction to $\Sigma$ of the standard euclidean metric $\langle \cdot , \cdot \rangle$. \item The previous example generalizes to oriented hypersurfaces $M \subset {\mathbb R}^7$. Regarding $u,v \in {\mathbb R}^7$ as imaginary {\em octonions} (or {\em Cayley numbers}), the natural vector product $u \times v$ is the imaginary part of the product of $u$ and $v$ as octonions. This induces a natural almost complex structure on $M$ given by $J_p (v) = \nu_p \times v$, where $\nu_p$ is the outward-pointing unit normal vector at $p \in M$. In the case of $S^6$, at least, this $J$ is not compatible with any symplectic form, as $S^6$ cannot be a symplectic manifold. \end{enumerate} \end{examples} As a consequence of the remark in Section~\ref{compatible_linear_structures}, we have: \begin{proposition} On any symplectic manifold $(M,\omega)$ with a riemannian metric $g$, there is a canonical compatible almost complex structure $J$. \end{proposition} Since riemannian metrics always exist, we conclude that {\em any symplectic manifold has compatible almost complex structures}. The metric $g_{_J}(\cdot,\cdot) := \omega(\cdot, J\cdot)$ tends to be different from the given $g(\cdot, \cdot)$.
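The canonical construction in this proposition is the pointwise polar decomposition described earlier. A minimal numerical sketch (illustrative only, not part of the text; numpy and scipy are assumed, and the convention $\Omega(u,v) = G(Au,v)$ follows the linear case):

```python
import numpy as np
from scipy.linalg import sqrtm

n = 2
# Standard symplectic form on R^4 in a symplectic basis e_1, e_2, f_1, f_2.
Om = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.eye(n),       np.zeros((n, n))]])

# An arbitrary positive-definite inner product G.
rng = np.random.default_rng(0)
B = rng.normal(size=(2 * n, 2 * n))
G = B @ B.T + 2 * n * np.eye(2 * n)

# Define A by Omega(u, v) = G(Au, v); in matrices, A^T G = Om, so A = -G^{-1} Om.
A = -np.linalg.solve(G, Om)

# A is skew-adjoint with respect to G, so A A^t = -A^2 (t = G-adjoint) is
# positive; J is the "orthogonal part" of the polar decomposition A = sqrt(AA^t) J.
J = np.real(np.linalg.solve(sqrtm(-A @ A), A))

assert np.allclose(J @ J, -np.eye(2 * n))     # J is a complex structure
assert np.allclose(J.T @ Om @ J, Om)          # J is a symplectomorphism
g = Om @ J                                    # g(u,v) = Omega(u, Jv)
assert np.allclose(g, g.T)                    # symmetric ...
assert np.all(np.linalg.eigvalsh(g) > 0)      # ... and positive-definite
```

As the remark above notes, the resulting metric $g_{_J} = \Omega(\cdot, J\cdot)$ generally differs from the input $G$, even though $J$ was built from $G$.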
\begin{proposition} Let $(M,J)$ be an almost complex manifold where $J$ is compatible with two symplectic forms $\omega_0, \omega_1$. Then $\omega_0$ and $\omega_1$ are deformation-equivalent.\index{deformation equivalent} \end{proposition} \vspace*{-2ex} \begin{proof} Simply take the convex combinations $\omega_t = (1-t)\omega_0 + t\omega_1$, $0 \leq t \leq 1$. \end{proof} A counterexample to the converse of this proposition is provided by the family $\omega_t = \cos \pi t~ dx_1 \wedge dy_1 + \sin \pi t ~dx_1 \wedge dy_2 + \sin \pi t ~dy_1 \wedge dx_2 + \cos \pi t ~dx_2 \wedge dy_2$ for $0 \leq t \leq 1$. There is no $J$ in ${\mathbb R}^4$ compatible with both $\omega_0$ and $\omega_1 = - \omega_0$. A submanifold $X$ of an almost complex manifold $(M,J)$ is an \textbf{almost complex submanifold}\index{submanifold ! almost complex}\index{almost complex submanifold} when $J(TX) \subseteq TX$, i.e., we have $J_p v \in T_p X$, $\forall p \in X, v \in T_p X$. \begin{proposition} Let $(M,\omega)$ be a symplectic manifold equipped with a compatible almost complex structure $J$. Then any almost complex submanifold $X$ of $(M,J)$ is a symplectic submanifold of $(M, \omega)$. \end{proposition} \vspace*{-2ex} \begin{proof} Let $i: X \hookrightarrow M$ be the inclusion. Then $i^*\omega$ is a closed 2-form on $X$. Since $\omega_p(u,v) = g_p(J_pu,v)$, $\forall p \in X$, $\forall u, v \in T_pX$, and since $g_p|_{T_pX}$ is nondegenerate, so is $\omega_p|_{T_pX}$, and $i^*\omega$ is nondegenerate. \end{proof} It is easy to see that the \textbf{set ${\mathcal J}(M, \omega)$ of all compatible almost complex structures on a symplectic manifold} $(M,\omega)$ is path-connected. From two almost complex structures $J_0, J_1$ compatible with $\omega$, we get two riemannian metrics $g_0 (\cdot, \cdot) = \omega (\cdot, J_0 \cdot)$, $g_1 (\cdot, \cdot) = \omega (\cdot, J_1 \cdot)$.
Their convex combinations \[ g_t(\cdot, \cdot) = (1-t)g_0(\cdot, \cdot) + tg_1(\cdot, \cdot)\ , \qquad 0 \leq t \leq 1\ , \] form a smooth family of riemannian metrics. Applying the polar decomposition to the family $(\omega, g_t)$, we obtain a smooth path of compatible almost complex structures $J_t$ joining $J_0$ to $J_1$. The set ${\mathcal J}(M, \omega)$ is even {\em contractible} (this is important for defining invariants). The first ingredient is the contractibility of the set of compatible complex structures on a vector space (Proposition~\ref{prop:linear_contractible}). Consider the fiber bundle ${\mathcal J} \to M$ with fiber over $p \in M$ being the space ${\mathcal J}_p := {\mathcal J}(T_pM, \omega_p)$ of compatible complex structures on the tangent space at $p$. A compatible almost complex structure on $(M,\omega)$ is a section of ${\mathcal J}$. The space of sections of ${\mathcal J}$ is contractible because the fibers are contractible.\footnote{The base being a (second countable and Hausdorff) manifold, a contraction can be produced using a countable cover by trivializing neighborhoods whose closures are compact subsets of larger trivializing neighborhoods, and such that each $p \in M$ belongs to only a finite number of such neighborhoods.} The \textbf{first Chern class} $c_1(M,\omega)$ of a symplectic manifold $(M,\omega)$ is the first Chern class of $(TM,J)$ for any compatible $J$. The class $c_1(M,\omega) \in H^2 (M;{\mathbb Z})$ is invariant under deformations of $\omega$. We never used the closedness of $\omega$ to obtain compatible almost complex structures. The construction holds for an \textbf{almost symplectic manifold}\index{manifold ! almost symplectic}\index{almost symplectic manifold}\index{symplectic ! almost symplectic manifold} $(M,\omega)$, that is, a pair of a manifold $M$ and a nondegenerate 2-form $\omega$, not necessarily closed. We could further work with a \textbf{symplectic vector bundle},\index{symplectic !
vector bundle} that is, a vector bundle $E \to M$ equipped with a smooth field $\omega$ of fiberwise nondegenerate skew-symmetric bilinear maps (Section~\ref{symplectic_submanifolds}). The existence of such a field $\omega$ is equivalent to being able to reduce the structure group of the bundle from the general linear group to the linear symplectic group. As both $\mathrm{Sp} (2n)$ and $\mathrm{GL} (n;{\mathbb C})$ retract to their common maximal compact subgroup $\mathrm{U} (n)$, a symplectic vector bundle can always be endowed with a structure of complex vector bundle, and vice-versa. Gromov showed in his thesis~\cite{gr:stable} that any {\em open}\footnote{A manifold is \textbf{open} if it has no closed connected components, where \textbf{closed} means compact and without boundary.} almost complex manifold admits a symplectic form. The books~\cite[\S 10.2]{el-mi:principle} and~\cite[\S 7.3]{mc-sa:introduction} contain proofs of this statement using different techniques. \begin{theorem} \textbf{(Gromov)} $\;$ For an open manifold the existence of an almost complex structure $J$ implies that of a symplectic form $\omega$ in any given 2-cohomology class and such that $J$ is homotopic to an almost complex structure compatible with $\omega$. \end{theorem} From an almost complex structure $J$ and a metric $g$, one builds a nondegenerate 2-form $\omega (u,v) = g (Ju,v)$, which will not be closed in general. Closedness is a {\em differential relation}, i.e., a condition imposed on the partial derivatives, encoded as a subset of {\em jet space}.
One says that a differential relation satisfies the \textbf{h-principle}\footnote{There are in fact different h-principles depending on the different possible coincidences of homotopy groups for the spaces of formal solutions and of holonomic solutions.} if any \textbf{formal solution} (i.e., a solution for the associated algebraic problem, in the present case a nondegenerate 2-form) is homotopic to a \textbf{holonomic solution} (i.e., a genuine solution, in the present case a closed nondegenerate 2-form). Therefore, when the h-principle holds, one may concentrate on a purely topological question (such as the existence of an almost complex structure) in order to prove the existence of a differential solution. Gromov showed that, for an open differential relation on an open manifold, when the relation is invariant under the group of diffeomorphisms of the underlying manifold, the inclusion of the space of holonomic solutions into the space of formal solutions is a weak homotopy equivalence, i.e., induces isomorphisms of all homotopy groups. The previous theorem fits here as an application. For {\em closed} manifolds there is no such theorem: as discussed in Section~\ref{symplectic_forms}, the existence of a 2-cohomology class whose top power is nonzero is also necessary for the existence of a symplectic form and there are further restrictions coming from {\em Gromov-Witten theory} (see Section~\ref{sec:invariants}). \ssubsection{Integrability} \label{sec:integrability} \index{Dolbeault decompositions}\index{integrability} Any {\em complex manifold}\index{complex manifold}\footnote{A \textbf{complex manifold}\index{complex manifolds}\index{manifold ! complex}\index{complex !
manifold} of (complex) dimension $n$ is a set $M$ with a complete complex atlas $\left\{({\mathcal U}_\alpha, {\mathcal V}_\alpha, \varphi_\alpha)\ , \alpha \in \mbox{ index set } I \right\}$ where $M = \cup_\alpha {\mathcal U}_\alpha$, the ${\mathcal V}_\alpha$'s are open subsets of ${\mathbb C}^n$, and the maps $\varphi_\alpha: {\mathcal U}_\alpha \to {\mathcal V}_\alpha$ are bijections such that the transition maps $\psi_{\alpha\beta} = \varphi_\beta \circ \varphi_\alpha^{-1}: {\mathcal V}_{\alpha \beta} \to {\mathcal V}_{\beta \alpha}$ are {\em biholomorphic}\index{biholomorphic map} (i.e., bijective, holomorphic and with holomorphic inverse) as maps on open subsets of ${\mathbb C}^n$, ${\mathcal V}_{\alpha \beta} = \varphi_\alpha ({\mathcal U}_\alpha \cap {\mathcal U}_\beta)$.} has a canonical almost complex structure $J$. It is defined locally over the domain ${\mathcal U}$ of a complex chart $\varphi: {\mathcal U} \rightarrow {\mathcal V} \subseteq {\mathbb C}^n$, by $J_p \left( \left. \frac {\partial}{\partial x_j} \right|_p \right) = \left. \frac {\partial}{\partial y_j} \right|_p$ and $J_p\left( \left. \frac {\partial}{\partial y_j} \right|_p \right) = \left. -\frac {\partial}{\partial x_j} \right|_p$, where these are the tangent vectors induced by the real and imaginary parts of the coordinates of $\varphi = (z_1,\ldots,z_n)$, $z_j = x_j + iy_j$. This yields a globally well-defined $J$, thanks to the {\em Cauchy-Riemann equations}\index{Cauchy-Riemann equations}\index{Riemann ! Cauchy-Riemann equations} satisfied by the components of the transition maps. An almost complex structure $J$ on a manifold $M$ is called \textbf{integrable}\index{integrable ! almost complex structure}\index{almost complex structure ! integrability} when $J$ is induced by some underlying structure of complex manifold on $M$ as above.
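The patching argument can be made concrete: for a holomorphic transition map, the Cauchy-Riemann equations say exactly that its real Jacobian commutes with the standard $J_0$, so the chart-wise definitions of $J$ agree on overlaps. A numerical sketch (illustrative only, not part of the text; numpy is assumed, with $\psi(z) = 1/z$ as a sample transition map):

```python
import numpy as np

def real_jacobian(psi, x, y, h=1e-6):
    """Real 2x2 Jacobian of psi: C -> C at x + iy, by central differences."""
    F = lambda x, y: np.array([psi(complex(x, y)).real, psi(complex(x, y)).imag])
    col_x = (F(x + h, y) - F(x - h, y)) / (2 * h)
    col_y = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return np.column_stack([col_x, col_y])

J0 = np.array([[0.0, -1.0], [1.0, 0.0]])       # multiplication by i on R^2 ~ C
D = real_jacobian(lambda z: 1 / z, 1.2, -0.8)  # Jacobian of a holomorphic map

# Cauchy-Riemann equations <=> D commutes with J0, so J transported through
# one chart agrees with J defined in the other chart.
assert np.allclose(D @ J0, J0 @ D, atol=1e-5)
```

The Jacobian of a holomorphic map has the form $\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ with $a + ib = \psi'(z)$, and matrices of this form are exactly those commuting with $J_0$.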
The question arises whether some compatible almost complex structure $J$ on a symplectic manifold $(M,\omega)$ is integrable.\index{integrability} To understand what is involved, we review Dolbeault theory and the Newlander-Nirenberg theorem. Let $(M,J)$ be a $2n$-dimensional almost complex manifold. The fibers of the complexified tangent bundle, $TM \otimes {\mathbb C}$, are $2n$-dimensional vector spaces over ${\mathbb C}$. We may extend $J$ linearly to $TM \otimes {\mathbb C}$ by $J(v\otimes c) = Jv\otimes c$, $v \in TM$, $c \in {\mathbb C}$. Since $J^2 = -\Id$, on the complex vector space $(TM \otimes {\mathbb C})_p$ the linear map $J_p$ has eigenvalues $\pm i$. The $(\pm i)$-eigenspaces of $J$ are denoted $T_{1,0}$ and $T_{0,1}$, respectively, and called the spaces of \textbf{$J$-holomorphic}\index{J-holomorphic tangent@($J$-)holomorphic tangent vectors}\index{holomorphic tangent@($J$-)holomorphic tangent vectors} and of \textbf{$J$-anti-holomorphic tangent vectors}\index{J-anti-holomorphic tangent@($J$-)anti-holomorphic tangent vectors}\index{anti-holomorphic tangent@($J$-)anti-holomorphic tangent vectors}. We have an isomorphism \[ \begin{array}{rrcl} (\pi_{1,0}, \pi_{0,1}) : & TM \otimes {\mathbb C} & \stackrel{\simeq}\longrightarrow & T_{1,0}\oplus T_{0,1} \\ & v & \longmapsto & \frac{1}{2} (v - iJv , v + iJv) \end{array} \] where the maps to each summand satisfy $\pi_{1,0} \circ J = i\pi_{1,0}$ and $\pi_{0,1} \circ J = - i\pi_{0,1}$. Restricting $\pi_{1,0}$ to $TM$, we see that $(TM, J) \simeq T_{1,0} \simeq \overline{T_{0,1}}$, as complex vector bundles, where the multiplication by $i$ is given by $J$ in $(TM, J)$ and where $\overline{T_{0,1}}$ denotes the complex conjugate bundle of $T_{0,1}$.
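The splitting and the intertwining relations $\pi_{1,0} \circ J = i\,\pi_{1,0}$ and $\pi_{0,1} \circ J = -i\,\pi_{0,1}$ can be verified directly in the simplest case, ${\mathbb R}^2$ with the standard $J_0$; a sketch (illustrative only, not part of the text; numpy is assumed):

```python
import numpy as np

J0 = np.array([[0.0, -1.0], [1.0, 0.0]])    # standard J on R^2 ~ C
v = np.array([3.0, 4.0])

pi10 = lambda w: 0.5 * (w - 1j * (J0 @ w))  # component in T_{1,0}
pi01 = lambda w: 0.5 * (w + 1j * (J0 @ w))  # component in T_{0,1}

assert np.allclose(pi10(v) + pi01(v), v)         # the two components recover v
assert np.allclose(pi10(J0 @ v), 1j * pi10(v))   # pi_{1,0} o J =  i pi_{1,0}
assert np.allclose(pi01(J0 @ v), -1j * pi01(v))  # pi_{0,1} o J = -i pi_{0,1}
```

The relations follow from $J_0^2 = -\Id$ alone, so the same check works for any complex structure on any even-dimensional vector space.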
Similarly, $J^*$ defined on $T^*M \otimes {\mathbb C}$ by $J^* \xi = \xi \circ J$ has ($\pm i$)-eigenspaces $T^{1,0} = (T_{1,0})^*$ and $T^{0,1} = (T_{0,1})^*$, respectively, called the spaces of \textbf{complex-linear}\index{complex-linear cotangent vectors} and of \textbf{complex-antilinear cotangent vectors}\index{complex-antilinear cotangent vectors}. Under the two natural projections $\pi^{1,0}, \pi^{0,1}$, the complexified cotangent bundle splits as \[ \begin{array}{rrcl} (\pi^{1,0}, \pi^{0,1}) : & T^*M \otimes {\mathbb C} & \stackrel{\simeq}\longrightarrow & T^{1,0} \oplus T^{0,1} \\ & \xi & \longmapsto & \frac{1}{2} (\xi - iJ^* \xi, \xi + i J^* \xi) \ . \end{array} \] Let \[ \Lambda^k (T^*M\otimes{\mathbb C}) := \Lambda^k(T^{1,0}\oplus T^{0,1}) = \oplus_{\ell+m=k} \Lambda^{\ell,m} \ , \] where $\Lambda^{\ell,m} := (\Lambda^\ell T^{1,0}) \wedge (\Lambda^m T^{0,1})$, and let $\Omega^k(M;{\mathbb C})$ be the space of sections of $\Lambda^k (T^*M \otimes {\mathbb C})$, called \textbf{complex-valued $k$-forms on $M$}\index{form ! complex-valued}\index{complex-valued form}. The \textbf{differential forms of type $(\ell,m)$}\index{form ! type}\index{form ! complex-valued} on $(M,J)$ are the sections of $\Lambda^{\ell,m}$, and the space of these differential forms is denoted $\Omega^{\ell,m}$. The decomposition of forms by Dolbeault type is $\Omega^k(M;{\mathbb C}) = \oplus_{\ell+m = k} \Omega^{\ell,m}$. Let $\pi^{\ell,m}: \Lambda^k(T^*M \otimes {\mathbb C}) \to \Lambda^{\ell,m}$ be the projection map, where $\ell + m = k$.
The usual exterior derivative $d$ (extended linearly to smooth complex-valued forms) composed with two of these projections induces the \textbf{del} and \textbf{del-bar} differential operators, $\partial$ and ${\bar \partial}$, on forms of type $(\ell, m)$: \[ \begin{array}{rcl} \partial & := & \pi^{\ell+1,m}\circ d : \Omega^{\ell,m} \longrightarrow \Omega^{\ell+1,m} \; \quad \mbox{ and } \\ {\bar \partial} & := & \pi^{\ell,m+1}\circ d : \Omega^{\ell,m} \longrightarrow \Omega^{\ell,m+1} \ . \end{array} \] If $\beta \in \Omega^{\ell,m} (M)$, with $k = \ell+m$, then $d\beta \in \Omega^{k+1}(M ; {\mathbb C})$: \[ d\beta = \displaystyle{\sum_{r+s = k+1}} \pi^{r,s}d\beta = \pi^{k+1,0}d\beta + \cdots + \partial\beta + {\bar \partial}\beta + \cdots + \pi^{0,k+1}d\beta\ . \] In particular, on complex-valued functions we have $df = d (\mathrm{Re} f) +i \, d(\mathrm{Im} f)$ and $d = \partial + {\bar \partial}$, where $\partial = \pi^{1,0}\circ d$ and ${\bar \partial} = \pi^{0,1}\circ d$. A function $f: M \to {\mathbb C}$ is \textbf{$J$-holomorphic at} $p \in M$\index{J-holomorphic function@$J$-holomorphic function} if $df_p$ is complex linear, i.e., $df_p\circ J_p = i \, df_p$ (or $df_p \in T_p^{1,0}$). A function $f$ is \textbf{$J$-holomorphic} if it is holomorphic at all $p \in M$. A function $f: M \to {\mathbb C}$ is \textbf{$J$-anti-holomorphic at} $p \in M$\index{J-anti-holomorphic function@$J$-anti-holomorphic function} if $df_p$ is complex antilinear, i.e., $df_p\circ J_p = -i \, df_p$ (or $df_p \in T_p^{0,1}$), that is, when the conjugate function $\bar{f}$ is holomorphic at $p \in M$. In terms of $\partial$ and ${\bar \partial}$, a function $f$ is $J$-holomorphic if and only if ${\bar \partial} f = 0$, and $f$ is $J$-anti-holomorphic if and only if $\partial f = 0$.
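These characterizations can be tested numerically on ${\mathbb C} \simeq {\mathbb R}^2$, where the Wirtinger derivatives are $\frac{\partial}{\partial z} = \frac12(\partial_x - i\partial_y)$ and $\frac{\partial}{\partial \bar z} = \frac12(\partial_x + i\partial_y)$. A sketch using finite differences (illustrative only, not part of the text; numpy is assumed, with $z^2$ and $\bar z^2$ as sample functions):

```python
import numpy as np

f_hol  = lambda x, y: (x + 1j * y) ** 2     # f(z) = z^2, J-holomorphic
f_anti = lambda x, y: (x - 1j * y) ** 2     # f(z) = zbar^2, J-anti-holomorphic

def wirtinger(fun, x, y, h=1e-6):
    """Return (df/dz, df/dzbar) at (x, y) by central finite differences."""
    fx = (fun(x + h, y) - fun(x - h, y)) / (2 * h)
    fy = (fun(x, y + h) - fun(x, y - h)) / (2 * h)
    return 0.5 * (fx - 1j * fy), 0.5 * (fx + 1j * fy)

dz, dzbar = wirtinger(f_hol, 1.3, -0.7)
assert abs(dzbar) < 1e-6                    # dbar f = 0: f is J-holomorphic
assert abs(dz - 2 * (1.3 - 0.7j)) < 1e-6    # and df/dz = 2z

dz2, dzbar2 = wirtinger(f_anti, 1.3, -0.7)
assert abs(dz2) < 1e-6                      # del f = 0: f is J-anti-holomorphic
```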
When $M$ is a {\em complex manifold} and $J$ is its canonical almost complex structure, the splitting $\Omega^k(M;{\mathbb C}) = \oplus_{\ell+m=k} \Omega^{\ell,m}$\index{form ! on a complex manifold} is particularly interesting. Let ${\mathcal U} \subseteq M$ be the domain of a complex coordinate chart $\varphi = (z_1,\ldots,z_n)$, where the corresponding real coordinates $x_1,y_1,\ldots,x_n,y_n$ satisfy $z_j = x_j + iy_j$. In terms of \[ \displaystyle{ \frac {\partial}{\partial z_j} := \frac {1}{2} \left( \frac {\partial}{\partial x_j} - i\frac {\partial}{\partial y_j} \right) \quad \mbox{ and } \quad \frac {\partial}{\partial {\bar z}_j} := \frac {1}{2} \left( \frac {\partial}{\partial x_j} + i\frac {\partial}{\partial y_j} \right)\ ,} \] the $(\pm i)$-eigenspaces of $J_p$ ($p \in {\mathcal U}$) can be written \[ (T_{1,0})_p = {\mathbb C} \mbox{-span}\left\{ \left. \frac {\partial}{\partial z_j} \right|_p: j = 1,\ldots,n\right\} \quad \mbox{ and } \quad (T_{0,1})_p = {\mathbb C} \mbox{-span}\left\{ \left. \frac {\partial}{\partial {\bar z}_j} \right|_p \right\}\ . \] Similarly, putting $dz_j = dx_j + idy_j$ and $d{\bar z}_j = dx_j - idy_j$, we obtain simple formulas for the differentials of a $b \in C^{\infty}({{\mathcal U}};{\mathbb C})$, $\partial b = \sum \frac {\partial b}{\partial z_j} dz_j$ and ${\bar \partial}b = \sum \frac {\partial b}{\partial {\bar z}_j} d{\bar z}_j$, and we have $T^{1,0} = {\mathbb C} \mbox{-span}\{dz_j: j = 1,\ldots,n\}$ and $T^{0,1} = {\mathbb C} \mbox{-span}\{d{\bar z}_j: j = 1,\ldots,n\}$.
If we use multi-index notation $J = (j_1,\ldots,j_\ell)$ where $1 \leq j_1 < \ldots < j_\ell \leq n$, $|J| = \ell$ and $dz_{_J} = dz_{j_1} \wedge dz_{j_2} \wedge \ldots \wedge dz_{j_\ell}$, then the set of $(\ell,m)$-forms on ${\mathcal U}$ is \[ \Omega^{\ell,m} = \left\{ \displaystyle{\sum_{|J| = \ell,|K| = m}} b_{_{J,K}} dz_{_J} \wedge d{\bar z}_{_K} \mid b_{_{J,K}} \in C^{\infty}({{\mathcal U}};{\mathbb C}) \right\}\ . \] A form $\beta \in \Omega^k(M;{\mathbb C})$ may be written over ${\mathcal U}$ as \[ \beta = \displaystyle{\sum_{\ell+m=k}} \left( \displaystyle{\sum_{|J| = \ell, |K| = m}} b_{_{J,K}}dz_{_J} \wedge d{\bar z}_{_K}\right) \ . \] Since $d = \partial + {\bar \partial}$ on functions, we get \[ \begin{array}{rl} d\beta = & \displaystyle{\sum_{\ell+m=k}} \left( \displaystyle{\sum_{|J| = \ell, |K| = m}} db_{_{J,K}} \wedge dz_{_J} \wedge d{\bar z}_{_K}\right) \\ \\ = & \displaystyle{\sum_{\ell+m=k}} \underbrace{\left( \displaystyle{\sum_{|J| = \ell, |K| = m}} \partial b_{_{J,K}} \wedge dz_{_J} \wedge d{\bar z}_{_K} \right.}_{\in \Omega^{\ell+1,m}} + \underbrace{\left. \displaystyle{\sum_{|J| = \ell, |K| = m}} {\bar \partial}b_{_{J,K}} \wedge dz_{_J} \wedge d{\bar z}_{_K} \right)}_{\in \Omega^{\ell,m+1}} \\ \\ = & \partial \beta + {\bar \partial}\beta\ , \end{array} \] and conclude that, {\em on a complex manifold, $d = \partial + {\bar \partial}$ on forms of any degree}.\index{complex ! differentials} This cannot be proved for an almost complex manifold, because there are no coordinate functions $z_j$ to give a suitable basis of 1-forms.
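On functions, the identity $d = \partial + {\bar \partial}$ says $db(v) = \frac{\partial b}{\partial z}\, dz(v) + \frac{\partial b}{\partial \bar z}\, d\bar z(v)$ for every tangent vector $v$, with $dz(v) = v_x + iv_y$ and $d\bar z(v) = v_x - iv_y$. A numerical check on ${\mathbb C}$ (illustrative only, not part of the text; numpy is assumed, and the test function and point are arbitrary choices):

```python
import numpy as np

# Any smooth complex-valued function b and a test point.
b = lambda x, y: np.exp(x) * np.cos(y) + 1j * x * y
x0, y0, h = 0.4, 1.1, 1e-6

# Partial derivatives of b by central differences.
bx = (b(x0 + h, y0) - b(x0 - h, y0)) / (2 * h)
by = (b(x0, y0 + h) - b(x0, y0 - h)) / (2 * h)
b_z, b_zbar = 0.5 * (bx - 1j * by), 0.5 * (bx + 1j * by)  # del b, delbar b

vx, vy = 0.7, -0.3                          # a tangent vector v
dz_v, dzbar_v = vx + 1j * vy, vx - 1j * vy  # dz(v) and dzbar(v)

# db(v) = (del b) dz(v) + (delbar b) dzbar(v):
assert np.isclose(b_z * dz_v + b_zbar * dzbar_v, bx * vx + by * vy)
```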
When $d = \partial + {\bar \partial}$, for any form $\beta \in \Omega^{\ell,m}$, we have \[ 0 = d^2\beta = \underbrace{\partial^2 \beta}_{\in \Omega^{\ell+2,m}} + \underbrace{\partial {\bar \partial} \beta + {\bar \partial}\partial \beta}_ {\in \Omega^{\ell+1,m+1}} + \underbrace{{\bar \partial}^2 \beta}_{\in \Omega^{\ell,m+2}} \quad \Longrightarrow \quad \left\{ \begin{array}{l} {\bar \partial}^2 = 0 \\ \partial {\bar \partial} + {\bar \partial}\partial = 0 \\ \partial^2 = 0 \end{array} \right. \] Since ${\bar \partial}^2 = 0$, the chain $0 \longrightarrow \Omega^{\ell,0} \stackrel{{\bar \partial}}{\longrightarrow} \Omega^{\ell,1} \stackrel{{\bar \partial}}{\longrightarrow} \Omega^{\ell,2} \stackrel{{\bar \partial}}{\longrightarrow} \cdots$ is a differential complex. Its cohomology groups \[ H^{\ell,m}_{\mathrm{Dolbeault}}(M) := \frac{\ker ~ {\bar \partial}: \Omega^{\ell,m} \to \Omega^{\ell,m+1}} {\mathrm{im} ~{\bar \partial}:\Omega^{\ell,m-1} \to \Omega^{\ell,m}} \] are called the \textbf{Dolbeault cohomology} groups.\index{Dolbeault ! cohomology}\index{cohomology ! Dolbeault} The Dolbeault theorem\index{theorem ! Dolbeault}\index{Dolbeault ! theorem} states that for complex manifolds $H_{\mathrm{Dolbeault}}^{\ell,m} (M) \simeq H^m(M; {\mathcal O}(\Omega^{(\ell,0)}))$, where ${\mathcal O}(\Omega^{(\ell,0)})$ is the sheaf of forms of type $(\ell,0)$ over $M$. It is natural to ask whether the identity $d = \partial + {\bar \partial}$ could hold for manifolds other than complex manifolds.
Newlander and Nirenberg\index{Newlander-Nirenberg theorem}~\cite{ne-ni:complex} showed that the answer is no: for an almost complex manifold $(M,J)$, the following are equivalent \[ \mbox{$M$ is a complex manifold } \iff {\mathcal N} \equiv 0 \iff d = \partial + {\bar \partial} \iff {\bar \partial}^2 = 0 \ , \] where ${\mathcal N}$ is the \textbf{Nijenhuis tensor}\index{Nijenhuis tensor}: \[ {\cal N} (X,Y) := [JX, JY] - J[JX,Y] - J[X,JY] - [X,Y]\ , \] for vector fields $X$ and $Y$ on $M$, $[ \cdot , \cdot ]$ being the usual bracket.\footnote{The \textbf{bracket} of vector fields $X$ and $Y$ is the vector field $[X,Y]$ characterized by the property that ${\mathcal L}_{[X,Y]} f := {\mathcal L}_X ({\mathcal L}_Y f) - {\mathcal L}_Y ({\mathcal L}_X f)$, for $f \in C^\infty (M)$, where ${\mathcal L}_X f = df (X)$.} The Nijenhuis tensor can be thought of as a measure of the existence of $J$-holomorphic functions: if there exist $n$ $J$-holomorphic functions, $f_1, \ldots ,f_n$, on ${\mathbb R}^{2n}$, that are independent at some point $p$, i.e., the real and imaginary parts of $(df_1)_p, \ldots ,(df_n)_p$ form a basis of $T^*_p {\mathbb R}^{2n}$, then ${\cal N}$ vanishes identically at $p$. More material related to Dolbeault theory or to the Newlander-Nirenberg theorem can be found in~\cite{ch:potential,du:heat,gr-ha:principles,ho:several,we:complex}. \begin{example}\index{example ! of almost complex manifold}\index{example ! of non-almost-complex manifold} Out of all spheres, only $S^2$ and $S^6$ admit almost complex structures~\cite[\S41.20]{st:fibre_bundles}. As a complex manifold, $S^2$ is referred to as the {\em Riemann sphere} ${\mathbb C} {\mathbb P}^1$. The almost complex structure on $S^6$ from Example~3 of Section~\ref{sec:compatible_almost} is not integrable, but it is not yet known whether $S^6$ admits a structure of complex manifold.
\end{example} In the (real) 2-dimensional case ${\cal N}$ always vanishes simply because ${\cal N}$ is a tensor, i.e., ${\cal N} (fX,gY) = fg {\cal N} (X,Y)$ for any $f,g \in C^\infty (M)$, and ${\cal N} (X,JX) =0$ for any vector field $X$. Combining this with the fact that any orientable surface is symplectic, we conclude that any orientable surface is a complex manifold, a result already known to Gauss. However, most almost complex structures on higher dimensional manifolds are not integrable. In Section~\ref{sec:hodge} we see that the existence of a complex structure compatible with a symplectic structure on a compact manifold imposes significant topological constraints. \ssubsection{K\"ahler Manifolds} \label{sec:kahler} \index{K\"ahler ! manifold}\index{manifold ! K\"ahler} \begin{definition} A \textbf{K\"ahler manifold}\index{K\"ahler ! manifold}\index{manifold ! K\"ahler} is a symplectic manifold $(M,\omega)$ equipped with an integrable compatible almost complex structure $J$. The symplectic form $\omega$ is then called a \textbf{K\"ahler form}.\index{K\"ahler ! form}\index{form ! K\"ahler} \end{definition} As a complex manifold, a K\"ahler manifold $(M,\omega,J)$ has Dolbeault cohomology. As it is also a symplectic manifold, it is interesting to understand where the symplectic form $\omega$ sits with respect to the Dolbeault type decomposition. \begin{proposition} A K\"ahler form $\omega$ is a $\partial$- and ${\bar \partial}$-closed $(1,1)$-form that is given on a local complex chart $({\mathcal U},z_1,\ldots,z_n)$ by \[ \omega = \frac i2 \sum_{j,k=1}^n h_{jk}\ dz_j \wedge d{\bar z}_k \] where, at every point $p \in {\mathcal U}$, $(h_{jk}(p))$ is a positive-definite hermitian matrix. \end{proposition} In particular, $\omega$ defines a Dolbeault $(1,1)$-cohomology class, $[\omega] \in H_{\mathrm{Dolbeault}}^{1,1}(M)$.
\begin{proof} Being a form in $\Omega^2(M;{\mathbb C}) = \Omega^{2,0} \oplus \Omega^{1,1} \oplus \Omega^{0,2}$, with respect to a local complex chart, $\omega$ can be written \[ \omega = \sum a_{jk} \ dz_j \wedge dz_k + \sum b_{jk} \ dz_j \wedge d{\bar z}_k + \sum c_{jk} \ d{\bar z}_j \wedge d{\bar z}_k \] for some $a_{jk},b_{jk},c_{jk} \in C^{\infty}({\mathcal U};{\mathbb C})$. By the compatibility of $\omega$ with the complex structure, $J$ is a symplectomorphism, that is, $J^*\omega = \omega$ where $(J^* \omega ) (u,v) := \omega (Ju,Jv)$. Since $J^*dz_j = dz_j \circ J = idz_j$ and $J^*d{\bar z}_j = d{\bar z}_j \circ J = -id{\bar z}_j$, we have $J^*\omega = \omega$ if and only if the coefficients $a_{jk}$ and $c_{jk}$ all vanish identically, that is, if and only if $\omega \in \Omega^{1,1}$. Since $\omega$ is closed, of type $(1,1)$ and $d\omega = \partial \omega + {\bar \partial}\omega$, we must have $\partial\omega = 0$ and ${\bar \partial}\omega = 0$. Set $b_{jk} = \frac i2 h_{jk}$. As $\omega$ is real-valued, i.e., $\omega = \frac i2 \sum h_{jk}\ dz_j \wedge d{\bar z}_k$ and ${\overline \omega} = -\frac {i}{2} \sum \overline{h_{jk}} \ d{\bar z}_j \wedge dz_k$ coincide, we must have $h_{jk} = \overline{h_{kj}}$ for all $j$ and $k$. In other words, at every point $p \in {\mathcal U}$, the $n \times n$ matrix $(h_{jk}(p))$ is hermitian. The nondegeneracy amounts to the nonvanishing of \[ \omega^n = n!\left( \frac {i}{2} \right)^n \textstyle{\det} (h_{jk}) \, dz_1 \wedge d{\bar z}_1 \wedge \ldots \wedge dz_n \wedge d{\bar z}_n \ . \] Therefore, at every $p \in M$, the matrix $(h_{jk}(p))$ must be nonsingular. Finally, the positivity condition $\omega(v,Jv) > 0$, $\forall v \neq 0$, from compatibility, implies that, at each $p \in {\mathcal U}$, the matrix $(h_{jk}(p))$ is positive-definite. 
\end{proof} Consequently, if $\omega_0$ and $\omega_1$ are both K\"ahler forms on a compact manifold $M$ with $[\omega_0] = [\omega_1] \in H_{\mathrm{deRham}}^2(M)$, then $(M,\omega_0)$ and $(M,\omega_1)$ are strongly isotopic by Moser's Theorem~\ref{thm:moser}. Indeed $\omega_t = (1-t)\omega_0 + t\omega_1$ is symplectic for $t \in [0,1]$, as convex combinations of positive-definite matrices are still positive-definite. Another consequence is the following recipe for K\"ahler forms.\index{recipe ! for K\"ahler forms}\index{K\"ahler ! recipe} A smooth real function $\rho$ on a complex manifold $M$ is \textbf{strictly plurisubharmonic}\index{strictly plurisubharmonic}\index{potential ! strictly plurisubharmonic} (\textbf{s.p.s.h.})\index{s.p.s.h.} if, on each local complex chart $({\mathcal U},z_1,\ldots,z_n)$, the matrix $\left( \frac {\partial^2\rho}{\partial z_j \partial {\bar z}_k} (p)\right)$ is positive-definite at all $p \in {\mathcal U}$. If $\rho \in C^{\infty}(M;{\mathbb R})$ is s.p.s.h., then the form \[ \omega = \frac i2 \partial {\bar \partial} \rho \] is K\"ahler. The function $\rho$ is then called a (global) \textbf{K\"ahler potential}.\index{K\"ahler ! potential}\index{potential ! K\"ahler} \begin{example} Let $M = {\mathbb C}^n \simeq {\mathbb R}^{2n}$, with complex coordinates $(z_1,\ldots,z_n)$ and corresponding real coordinates $(x_1,y_1,\ldots,x_n,y_n)$ via $z_j = x_j + iy_j$. The function \[ \rho(x_1,y_1,\ldots,x_n,y_n) = \sum_{j=1}^n (x_j^2 + y_j^2) = \sum |z_j|^2 = \sum z_j{\bar z}_j \] is s.p.s.h.\ and is a K\"ahler potential for the standard K\"ahler form: \[ \frac i2 \partial{\bar \partial}\rho = \frac i2 \sum \limits_{j,k} \delta_{jk} \ dz_j \wedge d{\bar z}_k = \frac i2 \sum \limits_j dz_j \wedge d{\bar z}_j = \sum \limits_j dx_j \wedge dy_j = \omega_0 \ . \] \end{example} There is a local converse to the previous construction of K\"ahler forms. \begin{proposition} \index{local form}\index{K\"ahler !
local form} Let $\omega$ be a closed real-valued $(1,1)$-form on a complex manifold $M$ and let $p \in M$. Then on a neighborhood ${\mathcal U}$ of $p$ we have $\omega = \frac i2 \partial{\bar \partial}\rho$ for some $\rho \in C^{\infty}({\mathcal U};{\mathbb R})$. \end{proposition} The proof of this theorem requires holomorphic versions of Poincar\'e's lemma, namely, the local triviality of Dolbeault groups (the fact that any point in a complex manifold admits a neighborhood ${\mathcal U}$ such that $H^{\ell,m}_{\mathrm{Dolbeault}} ({\mathcal U}) = 0$ for all $m >0$) and the local triviality of the holomorphic de Rham groups; see~\cite{gr-ha:principles}. For a K\"ahler $\omega$, such a local function $\rho$ is called a \textbf{local K\"ahler potential}.\index{K\"ahler ! potential}\index{potential ! K\"ahler} \begin{proposition} Let $M$ be a complex manifold, $\rho \in C^{\infty}(M;{\mathbb R})$ s.p.s.h., $X$ a complex submanifold, and $i: X \hookrightarrow M$ the inclusion map. Then $i^*\rho$ is s.p.s.h. \end{proposition} \begin{proof} It suffices to verify this locally by considering a complex chart $(z_1,\ldots,z_n)$ for $M$ adapted to $X$ so that $X$ is given there by the equations $z_1 = \ldots = z_m =0$. Being a principal minor of the positive-definite matrix $\left( \frac {\partial^2\rho}{\partial z_j\partial {\bar z}_k} (0,\ldots,0,z_{m+1},\ldots,z_n)\right)$, the matrix $\left( \frac {\partial^2\rho}{\partial z_{m+j}\partial {\bar z}_{m+k}} (0,\ldots,0,z_{m+1},\ldots,z_n)\right)$ is also positive-definite. \end{proof} \vspace*{-1ex} \begin{corollary}\label{thm:subkahler} Any complex submanifold of a K\"ahler manifold is also K\"ahler.\index{example ! complex submanifold of a K\"ahler manifold} \end{corollary} \vspace*{-1ex} \begin{definition} Let $(M,\omega)$ be a K\"ahler manifold, $X$ a complex submanifold, and $i: X \hookrightarrow M$ the inclusion.
Then $(X, i^* \omega)$ is called a \textbf{K\"ahler submanifold}.\index{K\"ahler ! submanifold}\index{submanifold ! K\"ahler} \end{definition} \vspace*{-1ex} \begin{examples}\index{example ! of K\"ahler submanifold} \begin{enumerate} \item The complex vector space $({\mathbb C}^n,\omega_0)$, where $\omega_0 = \frac i2 \sum dz_j \wedge d{\bar z}_j$, is K\"ahler. According to Corollary~\ref{thm:subkahler}, every complex submanifold of ${\mathbb C}^n$ is K\"ahler. \item In particular, {\em Stein manifolds}\index{example ! Stein manifold}\index{Stein manifold} are K\"ahler. \textbf{Stein manifolds} are the properly embedded complex submanifolds of ${\mathbb C}^n$. They can be alternatively characterized as being the K\"ahler manifolds $(M,\omega)$ that admit a global proper K\"ahler potential, i.e., $\omega = \frac {i}{2} \partial{\bar \partial} \rho$ for some proper function $\rho: M \to {\mathbb R}$. \item The function $z \mapsto \log (|z|^2 + 1)$ on ${\mathbb C}^n$ is strictly plurisubharmonic. Therefore the 2-form \[ \omega_{_{\mathrm{FS}}} = \textstyle{\frac{i}{2}} \partial \bar{\partial} \log (|z|^2 + 1) \] is another K\"ahler form on ${\mathbb C}^n$. This is called the \textbf{Fubini-Study form} on ${\mathbb C}^n$. \item Let $\{ ({\cal U}_j , {\mathbb C}^n , \varphi_j), j=0, \ldots ,n\}$ be the usual complex atlas for \textbf{complex projective space}.\index{complex ! projective space}\index{example ! complex projective space}\footnote{The \textbf{complex projective space} ${\mathbb C} {\mathbb P} ^n$ is the complex $n$-dimensional manifold\index{example ! of complex manifold} given by the space of complex lines in ${\mathbb C}^{n+1}$. It can be obtained from ${\mathbb C}^{n+1} \setminus \{ 0 \}$ by making the identifications $(z_0, \ldots, z_n) \sim (\lambda z_0, \ldots, \lambda z_n)$ for all $\lambda \in {\mathbb C} \setminus \{ 0 \}$.
One denotes by $[z_0, \ldots, z_n]$ the equivalence class of $(z_0, \ldots, z_n)$, and calls $z_0, \ldots, z_n$ the \textbf{homogeneous coordinates} of the point $p = [z_0, \ldots, z_n]$. (Homogeneous coordinates are, of course, only determined up to multiplication by a non-zero complex number $\lambda$.) Let ${\cal U}_j$ be the subset of ${\mathbb C} {\mathbb P} ^n$ consisting of all points $p = [z_0, \ldots, z_n]$ for which $z_j \neq 0$. Let $\varphi_j : {\cal U}_j \to {\mathbb C}^n$ be the map defined by \[ \varphi_j ([z_0, \ldots, z_n]) = \displaystyle{ \left( \textstyle{\frac{z_0}{z_j}} , \ldots , \textstyle{\frac{z_{j-1}}{z_j}} , \textstyle{\frac{z_{j+1}}{z_j}} , \ldots , \textstyle{\frac{z_n}{z_j}} \right)} \ . \] The collection $\{ ({\cal U}_j , {\mathbb C}^n , \varphi_j), j=0, \ldots ,n\}$ is the \textbf{usual complex atlas}\index{complex ! atlas} for ${\mathbb C} {\mathbb P} ^n$. For instance, the transition map from $({\cal U}_0 , {\mathbb C}^n , \varphi_0)$ to $({\cal U}_1 , {\mathbb C}^n , \varphi_1)$ is $\varphi_{0,1} (z_1 , \ldots, z_n) = (\textstyle{\frac{1}{z_1}} , \textstyle{\frac{z_2}{z_1}}, \ldots, \textstyle{\frac{z_n}{z_1}})$ defined from the set $\{ (z_1 , \ldots, z_n) \in {\mathbb C}^n \, | \, z_1 \neq 0\}$ to itself.} The form $\omega_{_{\mathrm{FS}}}$ is preserved by the transition maps, hence $\varphi_j ^* \omega_{_{\mathrm{FS}}}$ and $\varphi_k ^* \omega_{_{\mathrm{FS}}}$ agree on the overlap ${\cal U}_j \cap {\cal U}_k$. The \textbf{Fubini-Study form} on ${\mathbb C} {\mathbb P} ^n$ is the K\"ahler form obtained by gluing together the $\varphi_j ^* \omega_{_{\mathrm{FS}}}$, $j=0, \ldots, n$. \item Consequently, all \textbf{non-singular projective varieties}\index{non-singular projective variety}\index{example ! non-singular projective variety} are K\"ahler submanifolds. Here by non-singular we mean smooth, and by projective variety we mean the zero locus of some collection of homogeneous polynomials. 
\item All \textbf{Riemann surfaces}\index{example ! Riemann surface}\index{Riemann ! surface} are K\"ahler, since any compatible almost complex structure is integrable for dimension reasons (Section~\ref{sec:integrability}). \item The Fubini-Study form on the chart ${\cal U}_0 = \{ [z_0, z_1] \in {\mathbb C} {\mathbb P} ^1 \, | \, z_0 \neq 0 \}$ of the \textbf{Riemann sphere}\index{Riemann sphere} ${\mathbb C} {\mathbb P} ^1$ is given by the formula \[ \omega_{_{\mathrm{FS}}} = \frac{ dx \wedge dy }{(x^2+y^2+1)^2} \] where $\frac{z_1}{z_0} = z = x+iy$ is the usual coordinate on ${\mathbb C}$. The standard area form $\omega_{_{\mathrm{std}}} = d \theta \wedge dh$ is induced by regarding ${\mathbb C} {\mathbb P} ^1$ as the unit sphere $S^2$ in ${\mathbb R}^3$ (Example~3 of Section~\ref{symplectic_forms}). Stereographic projection\index{stereographic projection} shows that $\omega_{_{\mathrm{FS}}} = \frac{1}{4} \omega_{_{\mathrm{std}}}$. \item \textbf{Complex tori}\index{example ! complex torus}\index{complex torus} are K\"ahler. Complex tori look like quotients ${\mathbb C}^n / {\mathbb Z}^{2n}$ where ${\mathbb Z}^{2n}$ is a lattice in ${\mathbb C}^n$. The form $\omega = \frac i2 \sum dz_j \wedge d \bar z_j$ induced by the euclidean structure is K\"ahler. \item Just like products of symplectic manifolds are symplectic, products of K\"ahler manifolds\index{example ! product of K\"ahler manifolds} are also K\"ahler. \end{enumerate} \end{examples} \ssubsection{Hodge Theory} \label{sec:hodge} Hodge~\cite{ho:theory} identified the spaces of cohomology classes of forms with spaces of actual forms, by picking {\em the} representative from each class that solves a certain differential equation, namely the {\em harmonic} representative.\index{harmonic form}\index{form ! harmonic} We give a sketch of Hodge's idea. The first part makes up ordinary Hodge theory, which works for any compact oriented riemannian manifold $(M,g)$, not necessarily K\"ahler.
At a point $p \in M$, let $e_1,\ldots,e_n$ be a positively oriented orthonormal basis of the cotangent space $T^*_pM$, with respect to the induced inner product and orientation. The \textbf{Hodge star operator}\index{Hodge ! star operator@$\ast$-operator} is the linear operator on the exterior algebra of $T^*_pM$ defined by \[ \begin{array}{rcl} \ast (1) & = & e_1\wedge \ldots \wedge e_n \\ \ast (e_1\wedge \ldots \wedge e_n) & = & 1 \\ \ast (e_1 \wedge \ldots \wedge e_k) & = & e_{k+1} \wedge \ldots \wedge e_n \ . \end{array} \] We see that $\ast: \Lambda^k(T^*_pM) \to \Lambda^{n-k}(T^*_pM)$ and that $\ast\ast =(-1)^{k(n-k)}$. The \textbf{codifferential}\index{codifferential} and the \textbf{laplacian}\index{laplacian} are the operators defined by \[ \begin{array}{cclcl} \delta & = & (-1)^{n(k+1)+1} \ast d \ast & : & \Omega^k(M) \to \Omega^{k-1}(M) \ , \\ \Delta & = & d\delta + \delta d & : & \Omega^k(M) \to \Omega^k(M)\ . \end{array} \] The operator $\Delta$ is also called the \textbf{Laplace-Beltrami operator} and satisfies $\Delta \ast = \ast\Delta$.\index{Laplace-Beltrami operator}\index{Beltrami ! Laplace-Beltrami operator}\index{operator ! Laplace-Beltrami} On $\Omega^0({\mathbb R}^n) = C^{\infty}({\mathbb R}^n)$, it is simply the usual laplacian $\Delta = -\sum_{i=1}^n \frac {\partial^2}{\partial x_i^2}$.
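The sign identity $\ast\ast = (-1)^{k(n-k)}$ can be checked purely combinatorially on an orthonormal basis: $\ast(e_I) = \mathrm{sign}(I,I^c)\, e_{I^c}$, where $I^c$ is the complementary index set. The following Python sketch (the helper names \texttt{perm\_sign} and \texttt{hodge\_star} are ours, not from the text) verifies the identity on every basis $k$-form for small $n$.

```python
from itertools import combinations

def perm_sign(perm):
    """Sign of a permutation of distinct integers, by counting inversions."""
    sign = 1
    perm = list(perm)
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def hodge_star(indices, n):
    """Apply * to the basis k-form e_{i_1} ^ ... ^ e_{i_k} (indices sorted).
    Returns (sign, complement) so that *(e_I) = sign * e_{I^c},
    since e_I ^ e_{I^c} = sign(I, I^c) vol."""
    comp = tuple(i for i in range(1, n + 1) if i not in indices)
    return perm_sign(indices + comp), comp

# check ** = (-1)^{k(n-k)} on every basis k-form, for several n
for n in range(1, 7):
    for k in range(n + 1):
        for I in combinations(range(1, n + 1), k):
            s1, J = hodge_star(I, n)
            s2, I2 = hodge_star(J, n)
            assert I2 == I and s1 * s2 == (-1) ** (k * (n - k))
```

Moving the block $I$ past the block $I^c$ costs $k(n-k)$ transpositions, which is exactly what the loop confirms.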
The \textbf{inner product on forms} of any degree, \[ \langle \cdot , \cdot \rangle: \Omega^k (M) \times \Omega^k (M) \longrightarrow {\mathbb R} \ , \qquad \langle \alpha,\beta \rangle := \int_M \alpha \wedge \ast\beta \ , \] satisfies $\langle d\alpha,\beta \rangle = \langle \alpha,\delta\beta\rangle$, so the codifferential $\delta$ is often denoted by $d^*$ and called the {\em adjoint\footnote{When $M$ is not compact, we still have a {\em formal adjoint} of $d$ with respect to the nondegenerate bilinear pairing $\langle \cdot , \cdot \rangle: \Omega^k (M) \times \Omega^k_c (M) \to {\mathbb R}$ defined by a similar formula, where $\Omega^k_c (M)$ is the space of compactly supported $k$-forms.} of $d$}. Also, $\Delta$ is self-adjoint (i.e., $\langle \Delta\alpha,\beta\rangle = \langle \alpha,\Delta\beta\rangle$), and $\langle \Delta\alpha,\alpha\rangle = |d\alpha|^2 +|\delta\alpha|^2 \geq 0$, where $| \cdot |$ is the norm with respect to this inner product. The \textbf{harmonic $k$-forms}\index{form ! harmonic}\index{harmonic form} are the elements of ${{\mathcal H}}^k := \{\alpha \in \Omega^k \mid \Delta\alpha = 0\}$. Note that $\Delta\alpha = 0$ if and only if $d\alpha = \delta\alpha = 0$. Since a harmonic form is $d$-closed, it defines a de Rham cohomology class. \begin{theorem}\index{Hodge ! theorem}\index{theorem ! Hodge} \textbf{(Hodge)} $\;$ Every de Rham cohomology class on a compact oriented riemannian manifold $(M,g)$ possesses a unique harmonic representative, i.e., there is an isomorphism ${\mathcal H}^k \simeq H_{\mathrm{deRham}}^k(M;{\mathbb R})$. In particular, the spaces ${\mathcal H}^k$ are finite-dimensional. We also have the following orthogonal decomposition with respect to the inner product on forms: $\Omega^k \simeq {\mathcal H}^k \oplus \Delta(\Omega^k(M)) \simeq {\mathcal H}^k \oplus d\Omega^{k-1} \oplus \delta\Omega^{k+1}$.
\end{theorem} This decomposition is called the \textbf{Hodge decomposition on forms}.\index{Hodge ! decomposition} The proof of this and the next theorem involves functional analysis, elliptic differential operators, pseudodifferential operators and Fourier analysis; see for instance~\cite{gr-ha:principles,ko:harmonic,we:complex}. Here is where \textbf{complex Hodge theory}\index{Hodge ! complex Hodge theory}\index{complex ! Hodge theory} begins. When $M$ is K\"ahler, the laplacian satisfies $\Delta = 2( \bar \partial \bar \partial ^* + \bar \partial ^* \bar \partial)$ (see, for example,~\cite{gr-ha:principles}) and preserves the decomposition according to type, $\Delta: \Omega^{\ell,m} \to \Omega^{\ell,m}$. Hence, harmonic forms are also bigraded, \[ {\mathcal H}^k = \bigoplus_{\ell+m=k} {\mathcal H}^{\ell,m}\ , \] and satisfy a K\"unneth formula ${\mathcal H}^{\ell,m} (M \times N) \simeq \bigoplus_{p+r=\ell,q+s=m} {\mathcal H}^{p,q} (M) \otimes {\mathcal H}^{r,s} (N)$. \begin{theorem}\index{Hodge ! theorem}\index{theorem ! Hodge} \textbf{(Hodge)} $\;$ Every Dolbeault cohomology class on a compact K\"ahler manifold $(M,\omega)$ possesses a unique harmonic representative, i.e., there is an isomorphism ${\mathcal H}^{\ell,m} \simeq H_{\mathrm{Dolbeault}}^{\ell,m}(M)$. \end{theorem} Combining the two theorems of Hodge, we find the decomposition of cohomology groups for a compact K\"ahler manifold \[ H_{\mathrm{deRham}}^k (M;{\mathbb C}) \simeq \bigoplus_{\ell+m=k} H_{\mathrm{Dolbeault}}^{\ell,m}(M) \ , \] known as the \textbf{Hodge decomposition}\index{Hodge ! decomposition}. In particular, the Dolbeault cohomology groups $H_{\mathrm{Dolbeault}}^{\ell,m}$ are finite-dimensional and $H^{\ell,m} \simeq \overline{H^{m,\ell}}$.
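The Hodge decomposition can be tested on a concrete example: for a complex torus ${\mathbb C}^n/{\mathbb Z}^{2n}$ with a flat metric, the harmonic forms are the constant-coefficient forms $dz_I \wedge d\bar z_J$, so $h^{\ell,m} = \binom{n}{\ell}\binom{n}{m}$, while $b^k = \binom{2n}{k}$; the Vandermonde identity makes the two sides agree. A short Python check (our own sketch, not from the text):

```python
from math import comb

n = 3  # complex dimension of the torus C^n / Z^{2n}
# Hodge numbers of a complex torus: h^{l,m} = C(n,l) * C(n,m)
h = {(l, m): comb(n, l) * comb(n, m)
     for l in range(n + 1) for m in range(n + 1)}

for k in range(2 * n + 1):
    # Hodge decomposition: b^k is the sum of h^{l,m} over l + m = k
    b_k = sum(h[(l, k - l)] for l in range(max(0, k - n), min(k, n) + 1))
    assert b_k == comb(2 * n, k)   # Betti numbers of a 2n-torus
    if k % 2 == 1:
        assert b_k % 2 == 0        # odd Betti numbers are even

# conjugation symmetry h^{l,m} = h^{m,l}
assert all(h[(l, m)] == h[(m, l)] for l in range(n + 1) for m in range(n + 1))
```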
Let $b^k(M) := \dim H_{\mathrm{deRham}}^k(M)$ be the usual \textbf{Betti numbers}\index{Betti number}\index{number ! Betti} of $M$, and let $h^{\ell,m}(M) := \dim H_{\mathrm{Dolbeault}}^{\ell,m}(M)$ be the \textbf{Hodge numbers}\index{Hodge ! number}\index{number ! Hodge} of $M$. For an arbitrary compact symplectic manifold $(M,\omega)$, the even Betti numbers must be positive, because $\omega^k$ is closed but not exact $(k = 0,1,\ldots,n)$. Indeed, if we had $\omega ^k = d\alpha$, by Stokes' theorem we would have $\int_M \omega^n = \int_M d (\alpha \wedge \omega^{n-k}) = 0$, which contradicts $\omega^n$ being a volume form. For a compact K\"ahler manifold $(M,\omega)$, there are finer topological consequences coming from the Hodge theorems,\index{Hodge ! theorem}\index{theorem ! Hodge} as we must have $b^k = \sum_{\ell+m=k} h^{\ell,m}$ and $h^{\ell,m} = h^{m,\ell}$. The odd Betti numbers must be even because $b^{2k+1} = \sum_{\ell+m=2k+1} h^{\ell,m} = 2 \sum_{\ell=0}^k h^{\ell,(2k+1-\ell)}$. The number $h^{1,0} = \frac {1}{2} b^1$ is then a topological invariant. The numbers $h^{\ell,\ell}$ are positive, because $0 \neq [\omega^\ell] \in H^{\ell,\ell}_{\mathrm{Dolbeault}} (M)$. First of all, $[\omega^\ell]$ defines an element of $H_{\mathrm{Dolbeault}}^{\ell,\ell}$ as $\omega \in \Omega^{1,1}$ implies that $\omega^\ell \in \Omega^{\ell,\ell}$, and the closedness of $\omega^\ell$ implies that ${\bar \partial}\omega^\ell = 0$. If we had $\omega^\ell = {\bar \partial}\beta$ for some $\beta \in \Omega^{\ell-1,\ell}$, then $\omega^n = \omega^\ell \wedge \omega^{n-\ell} = {\bar \partial}(\beta \wedge \omega^{n-\ell})$ would be ${\bar \partial}$-exact.
But $[\omega^n] \ne 0$ in $H^{2n}_{\mathrm{deRham}}(M ; {\mathbb C}) \simeq H^{n,n}_{\mathrm{Dolbeault}}(M)$ since it is a volume form. A popular diagram to describe relations among Hodge numbers is the \textbf{Hodge diamond}\index{Hodge ! diamond}: \[ \begin{array}{ccccccc} & & & h^{n,n} \\ & & h^{n,n-1} & & h^{n-1,n} \\ & h^{n,n-2} & & h^{n-1,n-1} & & h^{n-2,n} \\ \ldots & & & \vdots & & & \ldots \\ \\ & h^{2,0} & & h^{1,1} & & h^{0,2} \\ & & h^{1,0} & & h^{0,1} \\ & & & h^{0,0} \\ \end{array} \] Complex conjugation gives symmetry with respect to the middle vertical, whereas the Hodge star operator induces symmetry about the center of the diamond. The middle vertical axis is all non-zero. There are further symmetries and ongoing research on how to compute $H_{\mathrm{Dolbeault}}^{\ell,m}$ for a compact K\"ahler manifold $(M,\omega)$. In particular, the \textbf{hard Lefschetz theorem} states isomorphisms $L^k : H^{n-k}_{\mathrm{deRham}}(M) \stackrel{\simeq}{\longrightarrow} H^{n+k}_{\mathrm{deRham}}(M)$ given by wedging with $\omega^k$ at the level of forms and the \textbf{Lefschetz decompositions} $H^m_{\mathrm{deRham}}(M) \simeq \oplus_k L^k (\ker L^{n-m+2k+1} |_{H^{m-2k}})$. The \textbf{Hodge conjecture}\index{Hodge ! conjecture}\index{conjecture ! Hodge} claims, for projective manifolds $M$ (i.e., complex submanifolds of complex projective space), that the Poincar\'e duals of elements in $H_{\mathrm{Dolbeault}}^{\ell,\ell}(M) \cap H^{2\ell}(M;{\mathbb Q})$ are rational linear combinations of classes of complex codimension $\ell$ subvarieties of $M$. This has been proved only for the $\ell = 1$ case (it is the Lefschetz theorem on $(1,1)$-classes; see for instance~\cite{gr-ha:principles}).
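For a toy instance of the diamond symmetries, recall the (standard) Hodge numbers of ${\mathbb C}{\mathbb P}^2$: $h^{\ell,\ell} = 1$ for $\ell = 0,1,2$ and $h^{\ell,m} = 0$ otherwise. The checks below (our own sketch) verify the conjugation symmetry, the Hodge star symmetry, the positivity of the middle column, and the resulting Betti numbers.

```python
# Hodge diamond of CP^2 (a recalled fact, not derived here):
h = {(l, m): 1 if l == m else 0 for l in range(3) for m in range(3)}

# Betti numbers b^k = sum of h^{l, k-l}
betti = [sum(h[(l, k - l)] for l in range(3) if 0 <= k - l <= 2)
         for k in range(5)]
assert betti == [1, 0, 1, 0, 1]

# symmetry about the middle vertical (complex conjugation)
assert all(h[(l, m)] == h[(m, l)] for l in range(3) for m in range(3))
# symmetry about the center of the diamond (Hodge star)
assert all(h[(l, m)] == h[(2 - l, 2 - m)] for l in range(3) for m in range(3))
# middle vertical axis all non-zero: [omega^l] != 0
assert all(h[(l, l)] > 0 for l in range(3))
```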
\ssubsection{Pseudoholomorphic Curves} \label{sec:pseudoholomorphic} \index{pseudoholomorphic curves} Whereas an almost complex manifold $(M,J)$ tends to have no $J$-holomorphic functions $M \to {\mathbb C}$ at all,\footnote{However, the study of {\em asymptotically $J$-holomorphic functions} has recently been developed to obtain important results~\cite{do:almost,do:pencils,au:branched}; see Section~\ref{sec:pencils}.} it has plenty of {\em $J$-holomorphic curves}\index{J-holomorphic curve@$J$-holomorphic curve} ${\mathbb C} \to M$. Gromov\index{Gromov ! pseudoholomorphic curve}\index{pseudoholomorphic curve} first realized that {\em pseudoholomorphic curves} provide a powerful tool in symplectic topology in an extremely influential paper~\cite{gr:pseudo}. Fix a closed Riemann surface $(\Sigma, j)$, that is, a compact complex 1-dimensional manifold $\Sigma$ without boundary and equipped with the canonical almost complex structure $j$. \begin{definition} A parametrized \textbf{pseudoholomorphic curve} (or \textbf{$J$-holomorphic curve}) in $(M,J)$ is a (smooth) map $u : \Sigma \to M$ whose differential intertwines $j$ and $J$, that is, $du_p \circ j_p = J_p \circ du_p$, $\forall p \in \Sigma$. \end{definition} In other words, the \textbf{Cauchy-Riemann equation} $du + J \circ du \circ j = 0$ holds. Pseudoholomorphic curves are related to parametrized 2-dimensional symplectic submanifolds. If a pseudoholomorphic curve $u : \Sigma \to M$ is an embedding, then its image $S:=u(\Sigma)$ is a 2-dimensional almost complex submanifold, hence a symplectic submanifold. Conversely, the inclusion $i : S \hookrightarrow M$ of a 2-dimensional symplectic submanifold can be seen as a pseudoholomorphic curve. An appropriate compatible almost complex structure $J$ on $(M, \omega)$ can be constructed starting from $S$, such that $TS$ is $J$-invariant. The restriction $j$ of $J$ to $TS$ is necessarily integrable because $S$ is 2-dimensional.
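For a concrete instance of the intertwining condition, take $\Sigma = {\mathbb C}$ and the holomorphic curve $u(z) = (z, z^2)$ in ${\mathbb C}^2$ with its standard complex structure. In real coordinates the condition $du \circ j = J \circ du$ becomes a matrix identity, which can be verified symbolically (a sketch of ours; the coordinates and matrices below are the standard ones, not notation from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# u(z) = (z, z^2) written in real coordinates: C -> C^2 ~ R^2 -> R^4
u = sp.Matrix([x, y, x**2 - y**2, 2*x*y])
Jac = u.jacobian([x, y])

j = sp.Matrix([[0, -1], [1, 0]])   # multiplication by i on R^2
J0 = sp.diag(j, j)                 # standard J on R^4 = C^2

# the intertwining condition du o j = J o du holds identically
assert sp.simplify(Jac * j - J0 * Jac) == sp.zeros(4, 2)
```

Composing both sides with $j$ recovers the form $du + J \circ du \circ j = 0$ stated above.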
The group $G$ of complex diffeomorphisms of $(\Sigma, j)$ acts on (parametrized) pseudoholomorphic curves by reparametrization: $u \mapsto u \circ \gamma$, for $\gamma \in G$. This normally means that each curve $u$ has a noncompact orbit under $G$. The orbit space ${\mathcal M}_g (A,J)$ is the set of unparametrized pseudoholomorphic curves in $(M,J)$ whose domain $\Sigma$ has genus $g$ and whose image $u(\Sigma)$ has homology class $A \in H_2 (M;{\mathbb Z})$. The space ${\mathcal M}_g (A,J)$ is called the \textbf{moduli space of unparametrized pseudoholomorphic curves} of genus $g$ representing the class $A$. For generic $J$, Fredholm theory shows that pseudoholomorphic curves occur in finite-dimensional smooth families, so that the moduli spaces ${\mathcal M}_g (A,J)$ can be manifolds, after avoiding singularities given by {\em multiple coverings}.\footnote{A curve $u : \Sigma \to M$ is a \textbf{multiple covering} if $u$ factors as $u = u' \circ \sigma$ where $\sigma : \Sigma \to \Sigma'$ is a holomorphic map of degree greater than 1.} \begin{example} Usually $\Sigma$ is the Riemann sphere ${\mathbb C} {\mathbb P}^1$, whose complex diffeomorphisms are those given by {\em fractional linear transformations} (or {\em M\"obius transformations}). So the 6-dimensional noncompact group of projective linear transformations $\mathrm{PSL} (2;{\mathbb C} )$ acts on \textbf{pseudoholomorphic spheres} by reparametrization $u \mapsto u \circ \gamma_A$, where $A = \textstyle{\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]} \in \mathrm{PSL} (2;{\mathbb C} )$ acts by $\gamma_A : {\mathbb C} {\mathbb P}^1 \to {\mathbb C} {\mathbb P}^1$, $\gamma_A [z,1] = [ \textstyle{\frac{az+b}{cz+d}},1 ]$.
\end{example} When $J$ is an almost complex structure {\em compatible} with a symplectic form $\omega$, the area of the image of a pseudoholomorphic curve $u$ (with respect to the metric $g_{_J} ( \cdot , \cdot) = \omega (\cdot , J \cdot)$) is determined by the class $A$ that it represents. The number \[ E(u) := \omega (A) = \int_\Sigma u^* \omega = \mbox{ area of the image of } u \mbox{ with respect to } g_{_J} \] is called the \textbf{energy}\index{energy} of the curve $u$ and is a topological invariant: it only depends on $[\omega]$ and on the homotopy class of $u$. Gromov proved that, since all pseudoholomorphic curves representing a given homology class $A$ have the same energy, the space ${\mathcal M}_g (A,J)$, though not necessarily compact, has natural \textbf{compactifications} $\overline {\mathcal M}_g (A,J)$ obtained by including what he called {\em cusp-curves}. \begin{theorem} \textbf{(Gromov's compactness theorem)} $\;$ If $(M,\omega)$ is a compact manifold equipped with a generic compatible almost complex structure $J$, and if $u_j$ is a sequence of pseudoholomorphic curves in ${\mathcal M}_g (A,J)$, then there is a subsequence that weakly converges to a cusp-curve in $\overline {\mathcal M}_g (A,J)$. \end{theorem} Hence the cobordism class of the compactified moduli space $\overline {\mathcal M}_g (A,J)$ might be a nice symplectic invariant of $(M,\omega)$, as long as it is not empty or null-cobordant. Actually a nontrivial regularity criterion for $J$ ensures the existence of pseudoholomorphic curves. And even when $\overline {\mathcal M}_g (A,J)$ is null-cobordant, we can define an invariant to be the (signed) number of pseudoholomorphic curves of genus $g$ in class $A$ that intersect a specified set of representatives of homology classes in $M$~\cite{ru:sigma_models,ta:sw=gr,wi:sigma_models}.
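The reparametrization of pseudoholomorphic spheres in the example above is a genuine group action because fractional linear transformations compose via matrix multiplication, $\gamma_A \circ \gamma_B = \gamma_{AB}$. A quick numerical check of this composition law (the helper \texttt{mobius} is our own naming):

```python
import numpy as np

def mobius(A, z):
    """Action of A in PSL(2, C) on the affine chart z of CP^1."""
    (a, b), (c, d) = A
    return (a * z + b) / (c * z + d)

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

for z in [0.3 + 0.7j, -2.0 + 1.0j, 5.0 - 0.1j]:
    # gamma_A o gamma_B = gamma_{AB}: reparametrization is a group action
    assert np.isclose(mobius(A, mobius(B, z)), mobius(A @ B, z))
```

Since $\gamma_{\lambda A} = \gamma_A$ for $\lambda \neq 0$, the action indeed descends to $\mathrm{PSL}(2;{\mathbb C})$.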
For more on pseudoholomorphic curves, see for instance~\cite{mc-sa:curves} (for a comprehensive discussion of the genus 0 case) or~\cite{au-la:holomorphic} (for higher genus). Here is a selection of applications of (developments from) pseudoholomorphic curves: \begin{itemize} \item Proof of the \textbf{nonsqueezing theorem}~\cite{gr:pseudo}: for $R > r$ there is no symplectic embedding of a ball $B^{2n}_R$ of radius $R$ into a cylinder $B^{2}_r \times {\mathbb R}^{2n-2}$ of radius $r$, both in $({\mathbb R}^{2n},\omega_0)$. \item Proof that there are {\em no lagrangian spheres} in $({\mathbb C}^n,\omega_0)$, except for the circle in ${\mathbb C}^2$, and more generally {\em no compact exact lagrangian submanifolds}, in the sense that the tautological 1-form $\alpha$ restricts to an exact form~\cite{gr:pseudo}. \item Proof that if $(M,\omega)$ is a connected symplectic 4-manifold symplectomorphic to $({\mathbb R}^4,\omega_0)$ outside a compact set and containing no symplectic $S^2$'s, then $(M,\omega)$ is symplectomorphic to $({\mathbb R}^4,\omega_0)$~\cite{gr:pseudo}. \item Study of questions of \textbf{symplectic packing}~\cite{bi:packing,mc-po:packing,tr:packing} such as: for a given $2n$-dimensional symplectic manifold $(M,\omega)$, what is the maximal radius $R$ for which there is a symplectic embedding of $N$ disjoint balls $B^{2n}_R$ into $(M,\omega)$? \item Study of \textbf{groups of symplectomorphisms} of 4-manifolds (for a review see~\cite{mc:groups}). Gromov~\cite{gr:pseudo} showed that $\mathrm{Sympl} ({\mathbb C} {\mathbb P}^2, \omega_{_{\mathrm{FS}}})$ and $\mathrm{Sympl} (S^2 \times S^2, {\mathrm{pr}}_1^* \sigma \oplus {\mathrm{pr}}_2^* \sigma)$ deformation retract onto the corresponding groups of standard isometries.
\item Development of \textbf{Gromov-Witten invariants}, which allow one to prove, for instance, the nonexistence of symplectic forms on ${\mathbb C}{\mathbb P}^2 \# {\mathbb C}{\mathbb P}^2 \# {\mathbb C}{\mathbb P}^2$ or the classification of symplectic structures on {\em ruled surfaces} (Section~\ref{sec:blow_up}). \item Development of \textbf{Floer homology} to prove the Arnold conjecture on the fixed points of symplectomorphisms of compact symplectic manifolds, or on the intersection of lagrangian submanifolds (Section~\ref{sec:arnold_floer}). \item Development of \textbf{symplectic field theory}, introduced by Eliashberg, Givental and Hofer~\cite{el-gi-ho:field}, extending Gromov-Witten theory, exhibiting a rich algebraic structure and also with applications to {\em contact geometry}. \end{itemize} \ssection{Symplectic Geography} \index{symplectic geography} \label{section4} \ssubsection{Existence of Symplectic Forms} \label{sec:existence} The utopian goal of symplectic classification addresses the standard questions: \begin{itemize} \item {\em (Existence)} Which manifolds carry symplectic forms? \item {\em (Uniqueness)} What are the distinct symplectic structures on a given manifold? \end{itemize} Existence is tackled through central examples in this section and symplectic constructions in the next two sections. Uniqueness is treated in the remainder of this chapter, which deals with invariants that allow one to distinguish symplectic manifolds. A K\"ahler structure naturally yields both a symplectic form and a complex structure (compatible ones). Either a symplectic or a complex structure on a manifold implies the existence of an almost complex structure. The following diagram represents the relations among these structures. In dimension 2, orientability trivially guarantees the existence of all other structures, so the picture collapses.
In dimension 4, the first interesting dimension, the picture above is faithful -- we will see that there are {\em closed} 4-dimensional examples in each region. \textbf{Closed} here means compact and without boundary. \begin{picture}(200,200)(-20,-5) \put(10,10){\line(1,0){240}} \put(20,20){\line(1,0){220}} \put(30,30){\line(1,0){120}} \put(80,40){\line(1,0){150}} \put(90,50){\line(1,0){50}} \put(90,90){\line(1,0){50}} \put(80,100){\line(1,0){150}} \put(30,120){\line(1,0){120}} \put(20,150){\line(1,0){220}} \put(10,180){\line(1,0){240}} \put(10,10){\line(0,1){170}} \put(20,20){\line(0,1){130}} \put(30,30){\line(0,1){90}} \put(80,40){\line(0,1){60}} \put(90,50){\line(0,1){40}} \put(140,50){\line(0,1){40}} \put(230,40){\line(0,1){60}} \put(150,30){\line(0,1){90}} \put(240,20){\line(0,1){130}} \put(250,10){\line(0,1){170}} \put(40,105){symplectic} \put(70,165){even-dimensional orientable} \put(101,65){K\"ahler} \put(160,130){almost complex} \put(180,80){complex} \thicklines \end{picture} Not all 4-dimensional manifolds are almost complex. A result of Wu~\cite{wu:classes} gives a necessary and sufficient condition in terms of the signature $\sigma$ and the Euler characteristic $\chi$ of a 4-dimensional closed manifold $M$ for the existence of an almost complex structure: {\em $3 \sigma + 2 \chi = h^2$ for some $h \in H^2 (M;{\mathbb Z})$ congruent with the second Stiefel-Whitney class $w_2 (M)$ modulo 2}. For example, $S^4$ and $(S^2 \times S^2) \# (S^2 \times S^2)$ are not almost complex. When an almost complex structure exists, the first Chern class of the tangent bundle (regarded as a complex vector bundle) satisfies the condition for $h$. The sufficiency of Wu's condition is the remarkable part.\footnote{Moreover, such solutions $h$ are in one-to-one correspondence with {\em isomorphism} classes of almost complex structures.} According to Kodaira's classification of closed complex surfaces~\cite{ko:surfacesI}\index{Kodaira !
complex surfaces}, such a surface admits a K\"ahler structure if and only if its first Betti number $b_1$ is even. The necessity of this condition is a Hodge relation on the Betti numbers (Section~\ref{sec:hodge}). The complex projective plane ${\mathbb C} {\mathbb P}^2$ with the Fubini-Study form (Section~\ref{sec:kahler}) might be called the simplest example of a closed K\"ahler 4-manifold. The \textbf{Kodaira-Thurston example}~\cite{th:examples}\index{example ! Kodaira-Thurston}\index{Kodaira ! Kodaira-Thurston example}\index{Thurston ! Kodaira-Thurston example} first demonstrated that a manifold that admits both a symplectic and a complex structure does not have to admit any K\"ahler structure. Take ${\mathbb R}^4$ with $dx_1 \wedge dy_1 + dx_2 \wedge dy_2$, and $\Gamma$ the discrete group generated by the four symplectomorphisms: \[ \begin{array}{rcl} (x_1,x_2,y_1,y_2) & \longmapsto & (x_1+1,x_2,y_1,y_2) \\ (x_1,x_2,y_1,y_2) & \longmapsto & (x_1,x_2+1,y_1,y_2) \\ (x_1,x_2,y_1,y_2) & \longmapsto & (x_1,x_2+y_2,y_1+1,y_2) \\ (x_1,x_2,y_1,y_2) & \longmapsto & (x_1,x_2,y_1,y_2+1) \end{array} \] Then $M = {\mathbb R}^4/\Gamma$ is a symplectic manifold that is a 2-torus bundle over a 2-torus. Kodaira's classification~\cite{ko:surfacesI}\index{Kodaira ! complex surfaces} shows that $M$ has a complex structure. However, $\pi_1 (M) = \Gamma$, hence $H_1({\mathbb R}^4/\Gamma;{\mathbb Z}) = \Gamma / [ \Gamma,\Gamma]$ has rank 3, so $b_1 = 3$ is {\em odd}. Fern\'andez-Gotay-Gray~\cite{fe-go-gr:symplectic}\index{Fern\'andez-Gotay-Gray example}\index{example ! Fern\'andez-Gotay-Gray}\index{Gotay ! Fern\'andez-Gotay-Gray}\index{Gray ! Fern\'andez-Gotay-Gray (A.\ Gray)} first exhibited symplectic manifolds that do not admit any complex structure at all. Their examples are circle bundles over circle bundles (i.e., a {\em tower} of circle bundles) over a 2-torus. The \textbf{Hopf surface}\index{example ! Hopf surface}\index{Hopf !
surface} is the complex surface diffeomorphic to $S^1 \times S^3$ obtained as the quotient ${\mathbb C}^2 \backslash \{0\}/\Gamma$ where $\Gamma = \{2^n \Id \mid n \in {\mathbb Z}\}$ is a group of {\em complex} transformations, i.e., we factor ${\mathbb C}^2 \backslash \{0\}$ by the equivalence relation $(z_1,z_2) \sim (2z_1,2z_2)$. The Hopf surface is not symplectic because $H^2(S^1 \times S^3) = 0$. The manifold ${\mathbb C}{\mathbb P}^2 \# {\mathbb C}{\mathbb P}^2 \# {\mathbb C}{\mathbb P}^2$ is almost complex but is neither complex (since it does not fit Kodaira's classification~\cite{ko:surfacesI})\index{Kodaira ! complex surface}\index{complex surface}, nor symplectic as shown by Taubes~\cite{ta:invariants}\index{example ! Taubes}\index{Taubes ! CP@${\mathbb C}{\mathbb P}^2 \# {\mathbb C}{\mathbb P}^2 \# {\mathbb C}{\mathbb P}^2$ is not complex} using Seiberg-Witten invariants\index{Seiberg-Witten invariants}\index{Witten ! Seiberg-Witten invariants} (Section~\ref{sec:invariants}). We could go through the previous discussion restricting to closed 4-dimensional examples {\em with a specific fundamental group}. We will do this restricting to simply connected examples, where the following picture holds.
\begin{picture}(200,180)(-20,10) \put(10,20){\line(1,0){240}} \put(20,30){\line(1,0){220}} \put(30,40){\line(1,0){200}} \put(40,50){\line(1,0){180}} \put(40,90){\line(1,0){180}} \put(30,120){\line(1,0){200}} \put(20,150){\line(1,0){220}} \put(10,180){\line(1,0){240}} \put(10,20){\line(0,1){160}} \put(20,30){\line(0,1){120}} \put(30,40){\line(0,1){80}} \put(40,50){\line(0,1){40}} \put(220,50){\line(0,1){40}} \put(230,40){\line(0,1){80}} \put(240,30){\line(0,1){120}} \put(250,20){\line(0,1){160}} \put(50,160){even-dimensional and simply connected} \put(50,130){almost complex (and simply connected)} \put(57,100){symplectic (and simply connected)} \put(64,70){complex (and simply connected)} \thicklines \end{picture} It is a consequence of Wu's result~\cite{wu:classes} that a simply connected manifold admits an almost complex structure if and only if $b_2^+$ is odd.\footnote{The \textbf{intersection form} of an oriented {\em topological} closed 4-manifold $M$ is the bilinear pairing $Q_M : H^2 (M;{\mathbb Z}) \times H^2 (M;{\mathbb Z}) \to {\mathbb Z}$, $Q_M (\alpha,\beta) := \langle \alpha \cup \beta , [M] \rangle$, where $\alpha \cup \beta$ is the {\em cup product} and $[M]$ is the {\em fundamental class}. Since $Q_M$ always vanishes on torsion elements, descending to $H^2 (M;{\mathbb Z}) / \mathrm{torsion}$ it can be represented by a matrix. When $M$ is smooth and simply connected, this pairing is $Q_M (\alpha,\beta) := \int_M \alpha \wedge \beta$ since non-torsion elements are representable by 2-forms. As $Q_M$ is symmetric (in the smooth case, the wedge product of 2-forms is symmetric) and unimodular (the determinant of a matrix representing $Q_M$ is $\pm 1$ by Poincar\'e duality), it is diagonalizable over ${\mathbb R}$ with eigenvalues $\pm 1$.
We denote by $b_2^+$ (respectively, $b_2^-$) the number of positive (resp.\ negative) eigenvalues of $Q_M$ counted with multiplicities, i.e., the dimension of a maximal subspace where $Q_M$ is positive-definite (resp.\ negative-definite). The \textbf{signature} of $M$ is the difference $\sigma := b_2^+ - b_2^-$, whereas the second Betti number is the sum $b_2 = b_2^+ + b_2^-$, i.e., the \textbf{rank} of $Q_M$. The \textbf{type} of an intersection form is \textbf{definite} if it is positive or negative definite (i.e., $|\sigma| = b_2$) and \textbf{indefinite} otherwise.} In particular, the connected sum $\#_m {\mathbb C} {\mathbb P}^2 \#_n \overline{{\mathbb C} {\mathbb P}^2}$ (of $m$ copies of ${\mathbb C} {\mathbb P}^2$ with $n$ copies of $\overline{{\mathbb C} {\mathbb P}^2}$) has an almost complex structure if and only if $m$ is odd.\footnote{The intersection form of a connected sum $M_0 \# M_1$ is (isomorphic to) $Q_{M_0} \oplus Q_{M_1}$.} By Kodaira's classification~\cite{ko:surfacesI}, a simply connected complex surface always admits a compatible symplectic form (since $b^1 = 0$ is even), i.e., it is always K\"ahler. Since they are simply connected, $S^4$, ${\mathbb C}{\mathbb P}^2 \# {\mathbb C}{\mathbb P}^2 \# {\mathbb C}{\mathbb P}^2$ and ${\mathbb C} {\mathbb P}^2$ live in three of the four regions in the picture for simply connected examples. All of the ${\mathbb C} {\mathbb P}^2 \#_m \overline{{\mathbb C} {\mathbb P}^2}$ are also simply connected K\"ahler manifolds because they are {\em pointwise blow-ups} of ${\mathbb C} {\mathbb P}^2$ and the {\em blow-down map} is holomorphic; see Section~\ref{sec:blow_up}. There is a family of manifolds obtained from ${\mathbb C} {\mathbb P}^2 \#_9 \overline{{\mathbb C} {\mathbb P}^2} =: E(1)$ by a {\em knot surgery}~\cite{fi-st:knots} that were shown by Fintushel and Stern to be symplectic and confirmed not to admit a complex structure~\cite{pa:non-complex}.
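The parity criterion for $\#_m {\mathbb C}{\mathbb P}^2 \#_n \overline{{\mathbb C}{\mathbb P}^2}$ can be tested in small cases directly from Wu's condition: the intersection form is $\mathrm{diag}(\underbrace{1,\ldots,1}_m,\underbrace{-1,\ldots,-1}_n)$, the classes congruent to $w_2$ modulo 2 are exactly the all-odd vectors, and one searches for such an $h$ with $Q(h,h) = 3\sigma + 2\chi = 5m - n + 4$. A brute-force sketch of ours (the search bound is a heuristic for these small cases, not part of the theorem):

```python
from itertools import product

def almost_complex_exists(m, n, bound=9):
    """Wu's criterion for #m CP^2 #n conj(CP^2): look for an all-odd
    vector h with Q(h,h) = 3*sigma + 2*chi, where Q is the diagonal
    intersection form diag(+1 x m, -1 x n)."""
    sigma, chi = m - n, 2 + m + n
    target = 3 * sigma + 2 * chi   # = 5m - n + 4
    odds = range(-bound, bound + 1, 2)  # -9, -7, ..., 9: odd entries only
    for h in product(odds, repeat=m + n):
        if sum(x * x for x in h[:m]) - sum(x * x for x in h[m:]) == target:
            return True
    return False

# almost complex if and only if m is odd (in this small range)
for m in range(1, 4):
    for n in range(0, 3):
        assert almost_complex_exists(m, n) == (m % 2 == 1)
```

For $m$ even the search fails for an arithmetic reason: odd squares are $1 \bmod 8$, so $Q(h,h) \equiv 2 - n \pmod 8$ while the target is $\equiv 6 - n \pmod 8$.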
The first example of a closed simply connected symplectic manifold that cannot be K\"ahler was a 10-dimensional manifold obtained by McDuff~\cite{mc:examples} as follows. The Kodaira-Thurston example\index{example ! Kodaira-Thurston}\index{Kodaira ! Kodaira-Thurston example}\index{Thurston ! Kodaira-Thurston example} ${\mathbb R}^4 / \Gamma$ (not simply connected) embeds symplectically in $({\mathbb C} {\mathbb P}^5,\omega_{FS})$~\cite{gr:partial,ti:embedding}. McDuff's example is a {\em blow-up} of $({\mathbb C} {\mathbb P}^5,\omega_{FS})$ along the image of ${\mathbb R}^4 / \Gamma$. \textbf{Geography problems} are problems on the existence of simply connected closed oriented 4-dimensional manifolds with some additional structure (such as a symplectic form or a complex structure) for each pair of {\em topological coordinates}. As a consequence of the work of Freedman~\cite{fr:topology} and Donaldson~\cite{do:gauge} in the 80's, it became known that the homeomorphism class of a connected simply connected closed oriented {\em smooth} 4-manifold is determined by two integers -- the second Betti number and the signature $(b_2, \sigma)$ -- and the {\em parity}\footnote{We say that the \textbf{parity} of an intersection form $Q_M$ is \textbf{even} when $Q_M (\alpha,\alpha)$ is even for all $\alpha \in H^2 (M;{\mathbb Z})$, and \textbf{odd} otherwise.} of the intersection form. Forgetting about the parity, the numbers $(b_2, \sigma)$ can be treated as \textbf{topological coordinates}. For each pair $(b_2, \sigma)$ there could well be infinitely many different (i.e., nondiffeomorphic) smooth manifolds. Using riemannian geometry, Cheeger~\cite{ch:finiteness} showed that there are at most {\em countably many} different smooth types for closed 4-manifolds. There are no known finiteness results for the smooth types of a given topological 4-manifold, in contrast to other dimensions. 
Traditionally, the numbers used are $(c_1^2, c_2):= (3 \sigma + 2 \chi, \chi) = (3 \sigma + 4 + 2b_2 , 2 + b_2)$, and frequently just the {\em slope} $c_1^2/c_2$ is considered. If $M$ admits an almost complex structure $J$, then $(TM,J)$ is a complex vector bundle, hence has Chern classes $c_1 = c_1 (M,J)$ and $c_2 = c_2 (M,J)$. Both $c_1^2 := c_1 \cup c_1$ and $c_2$ may be regarded as numbers since $H^4 (M;{\mathbb Z}) \simeq {\mathbb Z}$. They satisfy $c_1^2 = 3 \sigma + 2 \chi$ (by Hirzebruch's signature formula) and $c_2 = \chi$ (because the top Chern class is always the Euler class), justifying the notation for the topological coordinates in this case. \begin{examples} The manifold ${\mathbb C} {\mathbb P}^2$ has $(b_2, \sigma)=(1,1)$, i.e., $(c_1^2, c_2) = (9,3)$. Reversing the orientation, $\overline{{\mathbb C} {\mathbb P}^2}$ has $(b_2, \sigma)=(1,-1)$, i.e., $(c_1^2, c_2) = (3,3)$. Their connected sum ${\mathbb C} {\mathbb P}^2 \# \overline{{\mathbb C} {\mathbb P}^2}$ has $(b_2, \sigma)=(2,0)$, i.e., $(c_1^2, c_2) = (8,4)$. The product $S^2 \times S^2$ also has $(b_2, \sigma)=(2,0)$, i.e., $(c_1^2, c_2) = (8,4)$. But ${\mathbb C} {\mathbb P}^2 \# \overline{{\mathbb C} {\mathbb P}^2}$ has an {\em odd} intersection form whereas $S^2 \times S^2$ has an {\em even} intersection form: $\left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right]$ vs.\ $\left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right]$. \end{examples} \textbf{Symplectic geography}~\cite{go-st:calculus,st:geography} addresses the following question: What is the set of pairs of integers $(m,n) \in {\mathbb Z} \times {\mathbb Z}$ for which there exists a connected simply connected closed {\em symplectic} 4-manifold $M$ having second Betti number $b_2 (M) = m$ and signature $\sigma (M) = n$? 
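The change of coordinates $(b_2, \sigma) \mapsto (c_1^2, c_2)$ is pure arithmetic; a small sketch (the helper name \texttt{chern\_coordinates} is ours), using $c_2 = \chi = 2 + b_2$ and $c_1^2 = 3\sigma + 2\chi$ from the text:

```python
def chern_coordinates(b2, sigma):
    """Topological coordinates (c_1^2, c_2) of a simply connected closed
    4-manifold from (b_2, sigma): c_2 is the Euler number chi = 2 + b_2,
    and c_1^2 = 3*sigma + 2*chi by Hirzebruch's signature formula."""
    chi = 2 + b2
    return 3 * sigma + 2 * chi, chi
```

This reproduces the values in the examples: $(1,1) \mapsto (9,3)$ for ${\mathbb C}{\mathbb P}^2$ and $(2,0) \mapsto (8,4)$ for both $S^2 \times S^2$ and ${\mathbb C}{\mathbb P}^2 \# \overline{{\mathbb C}{\mathbb P}^2}$.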
This problem includes the usual geography of simply connected complex surfaces, since all such surfaces are K\"ahler according to Kodaira's classification~\cite{ko:surfacesI}\index{Kodaira ! complex surfaces}. Often, instead of the numbers $(b_2,\sigma)$, the question is equivalently phrased in terms of the Chern numbers $(c_1^2,c_2)$ for a compatible almost complex structure, which satisfy $c_1^2 = 3 \sigma + 2 \chi$~\cite{wu:classes} and $c_2 = \chi$, where $\chi = b_2 + 2$ is the {\em Euler number}. Usually only {\em minimal} (Section~\ref{sec:blow_up}) or {\em irreducible} manifolds are considered to avoid trivial examples. A manifold is \textbf{irreducible} when it is not a connected sum of other manifolds, except when one of the summands is a homotopy sphere. It was speculated that perhaps any simply connected closed smooth 4-manifold other than $S^4$ is diffeomorphic to a connected sum of symplectic manifolds, where any orientation is allowed on each summand (the so-called {\em minimal conjecture} for smooth 4-manifolds). Szab\'o~\cite{sz:exotic,sz:irreducible} provided counterexamples in a family of irreducible simply connected closed non-symplectic smooth 4-manifolds. All these problems could be posed for other fundamental groups. Gompf~\cite{go:new} used {\em symplectic sums} (Section~\ref{sec:constructions}) to prove the following theorem. He also proved that his surgery construction can be adapted to produce {\em non}K\"ahler examples. Since finitely-presented groups are not classifiable, this shows that compact symplectic 4-manifolds are not classifiable. \begin{theorem} \label{thm:gompf}\index{Gompf ! theorem}\index{theorem ! Gompf}\index{example ! Gompf}\index{Gompf construction} \textbf{(Gompf)} $\;$ Every finitely-presented group occurs as the fundamental group $\pi_1 (M)$ of a compact symplectic 4-manifold $(M,\omega)$. 
\end{theorem} \ssubsection{Fibrations and Sums} \label{sec:constructions} Products of symplectic manifolds are naturally symplectic. As we will see, special kinds of {\em twisted products}, i.e., fibrations,\footnote{A \textbf{fibration} (or {\em fiber bundle}) is a manifold $M$ (called the \textbf{total space}\index{space ! total}\index{total space}) with a submersion $\pi : M \to X$ to a manifold $X$ (the \textbf{base}\index{base}) that is locally trivial in the sense that there is an open covering of $X$, such that, to each set ${\mathcal U}$ in that covering corresponds a diffeomorphism of the form $\varphi_{\mathcal U} = (\pi,s_{\mathcal U}) : \pi ^{-1} ({\mathcal U}) \to {\mathcal U} \times F$ (a \textbf{local trivialization}\index{trivialization}) where $F$ is a fixed manifold (the \textbf{model fiber}\index{fiber}). A collection of local trivializations such that the sets ${\mathcal U}$ cover $X$ is called a \textbf{trivializing cover} for $\pi$. Given two local trivializations, the second entry of the composition $\varphi_{\mathcal V} \circ \varphi_{\mathcal U}^{-1} = (\id , \psi_{{\mathcal U}{\mathcal V}})$ on $({\mathcal U} \cap {\mathcal V}) \times F$ gives the corresponding \textbf{transition function} $\psi_{{\mathcal U}{\mathcal V}} (x): F \to F$ at each $x \in {\mathcal U} \cap {\mathcal V}$.} are also symplectic. \begin{definition} A \textbf{symplectic fibration} is a fibration $\pi : M \to X$ where the model fiber is a symplectic manifold $(F,\sigma)$ and with a trivializing cover for which all the transition functions are symplectomorphisms $F \to F$. \end{definition} In a symplectic fibration each \textbf{fiber} $\pi^{-1} (x)$ carries a \textbf{canonical symplectic form} $\sigma_x$ defined by the restriction of $s_{\mathcal U}^* \sigma$, for any domain ${\mathcal U}$ of a trivialization covering $x$ (i.e., $x \in {\mathcal U}$). 
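The canonical form $\sigma_x$ is indeed independent of the chosen trivialization; here is a one-line check under the definitions above (our computation, not spelled out in the text). On the fiber over $x$ one has $s_{\mathcal V} = \psi_{{\mathcal U}{\mathcal V}}(x) \circ s_{\mathcal U}$, so

```latex
\[
s_{\mathcal V}^* \sigma \, \big|_{\pi^{-1}(x)}
= s_{\mathcal U}^* \bigl( \psi_{{\mathcal U}{\mathcal V}} (x) \bigr)^* \sigma \, \big|_{\pi^{-1}(x)}
= s_{\mathcal U}^* \sigma \, \big|_{\pi^{-1}(x)} \ ,
\]
```

since the transition function $\psi_{{\mathcal U}{\mathcal V}}(x)$ is a symplectomorphism of $(F,\sigma)$.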
A symplectic form $\omega$ on the total space $M$ of a symplectic fibration is called \textbf{compatible} with the fibration if each fiber $(\pi^{-1} (x), \sigma_x)$ is a symplectic submanifold of $(M,\omega)$, i.e., $\sigma_x$ is the restriction of $\omega$ to $\pi^{-1} (x)$. \begin{examples} \begin{enumerate} \item {\em Every compact oriented\footnote{An \textbf{oriented fibration} is a fibration whose model fiber is oriented and there is a trivializing cover for which all transition functions preserve orientation.} fibration whose model fiber $F$ is an oriented surface admits a structure of symplectic fibration} for the following reason. Let $\sigma_0$ be an area form on $F$. Each transition function $\psi_{{\mathcal U}{\mathcal V}} (x): F \to F$ pulls $\sigma_0$ back to a cohomologous area form $\sigma_1$ (depending on $\psi_{{\mathcal U}{\mathcal V}} (x)$). Convex combinations $\sigma_t = (1-t) \sigma_0 + t \sigma_1$ give a path of area forms from $\sigma_0$ to $\sigma_1$ with constant class $[\sigma_t]$. By Moser's argument (Section~\ref{sec:trick}), there exists a diffeomorphism $\rho (x):F \to F$ isotopic to the identity, depending smoothly on $x \in {\mathcal U} \cap {\mathcal V}$, such that $\psi_{{\mathcal U}{\mathcal V}} (x) \circ \rho (x)$ is a symplectomorphism of $(F,\sigma_0)$. By successively adjusting local trivializations for a finite covering of the base, we can make all transition functions into symplectomorphisms. \item {\em Every fibration with connected base and compact fibers having a symplectic form $\omega$ for which all fibers are symplectic submanifolds admits a structure of symplectic fibration compatible with $\omega$.} Indeed, under trivializations, the restrictions of $\omega$ to the fibers give cohomologous symplectic forms in the model fiber $F$. So by Moser's Theorem~\ref{thm:moser}, all fibers are strongly isotopic to $(F,\sigma)$ where $\sigma$ is the restriction of $\omega$ to a chosen fiber. 
These isotopies can be used to produce a trivializing cover where each $s_{\mathcal U} (x)$ is a symplectomorphism. \end{enumerate} \end{examples} In the remainder of this section, assume that for a fibration $\pi : M \to X$ the total space is compact and the base is connected. For the existence of a compatible symplectic form on a symplectic fibration, a necessary condition is the existence of a cohomology class in $M$ that restricts to the classes of the fiber symplectic forms. Thurston~\cite{th:examples} showed that, when the base also admits a symplectic form, this condition is sufficient. Yet not all symplectic fibrations with a compatible symplectic form have a symplectic base~\cite{we:fat}. \begin{theorem} \label{thm:thurston}\index{Thurston ! theorem}\index{theorem ! Thurston} \textbf{(Thurston)} $\;$ Let $\pi : M \to X$ be a compact symplectic fibration with connected symplectic base $(X,\alpha)$ and model fiber $(F,\sigma)$. If there is a class $[\nu] \in H^2 (M)$ pulling back to $[\sigma]$, then, for sufficiently large $k > 0$, there exists a symplectic form $\omega_k$ on $M$ that is compatible with the fibration and is in $[\nu + k \pi^* \alpha]$. \end{theorem} \vspace*{-2ex} \begin{proof} We first find a form $\tau$ on $M$ in the class $[\nu]$ that restricts to the canonical symplectic form on each fiber. Pick a trivializing cover $\{\varphi_i = (\pi,s_i) \mid i \in I\}$ with contractible domains ${\mathcal U}_i$. Let $\rho_i$, $i \in I$, be a partition of unity subordinate to this covering and let $\widetilde \rho_i := \rho_i \circ \pi : M \to {\mathbb R}$. Since $[\nu]$ always restricts to the class of the canonical symplectic form $[\sigma_x]$, and the ${\mathcal U}_i$'s are contractible, on each $\pi^{-1} ({\mathcal U}_i)$ the forms $s_i^* \sigma - \nu$ are exact. Choose 1-forms $\lambda_i$ such that $s_i^* \sigma = \nu + d \lambda_i$, and set \[ \tau := \nu + \sum_{i \in I} d (\widetilde \rho_i \lambda_i) \ . 
\] Since $\tau$ is nondegenerate on the (vertical) subbundle given by the kernel of $d \pi$, for $k > 0$ large enough the form $\tau + k \pi^* \alpha$ is nondegenerate on $M$. \end{proof} \vspace*{-1ex} \begin{corollary} Let $\pi : M \to X$ be a compact oriented fibration with connected symplectic base $(X,\alpha)$ and model fiber an oriented surface $F$ of genus $g(F) \neq 1$. Then $\pi$ admits a compatible symplectic form. \end{corollary} \vspace*{-2ex} \begin{proof} By Example~1 above, $\pi : M \to X$ admits a structure of symplectic fibration with model fiber $(F,\sigma)$. Since the fiber is not a torus ($g(F) \neq 1$), the Euler class of the tangent bundle $TF$ (which coincides with $c_1 (F,\sigma)$) is $\lambda [\sigma]$ for some $\lambda \neq 0$. Hence, the first Chern class $[c]$ of the {\em vertical} subbundle given by the kernel of $d\pi$ (assembling the tangent bundles to the fibers) restricts to $\lambda [\sigma_x]$ on the fiber over $x \in X$. We can apply Theorem~\ref{thm:thurston} using the class $[\nu] = \lambda^{-1} [c]$. \end{proof} A pointwise connected sum $M_0 \# M_1$ of symplectic manifolds $(M_0,\omega_0)$ and $(M_1,\omega_1)$ tends not to admit a symplectic form, even if we only require the eventual symplectic form to be isotopic to $\omega_i$ on each $M_i$ minus a ball. The reason~\cite{au:exemples} is that such a symplectic form on $M_0 \# M_1$ would allow one to construct an almost complex structure on the sphere formed by the union of the two removed balls, which is known not to exist except on $S^2$ and $S^6$. Therefore: \begin{proposition} Let $(M_0,\omega_0)$ and $(M_1,\omega_1)$ be two compact symplectic manifolds of dimension neither 2 nor 6. Then the connected sum $M_0 \# M_1$ does not admit any symplectic structure isotopic to $\omega_i$ on $M_i$ minus a ball, $i=0,1$. 
\end{proposition} For connected sums to work in the symplectic category, they should be done along codimension 2 symplectic submanifolds. The following construction, already mentioned in~\cite{gr:partial}, was dramatically explored and popularized by Gompf~\cite{go:new} (he used it to prove Theorem~\ref{thm:gompf}). Let $(M_0,\omega_0)$ and $(M_1,\omega_1)$ be two $2n$-dimensional symplectic manifolds. Suppose that a compact symplectic manifold $(X,\alpha)$ of dimension $2n-2$ admits symplectic embeddings to both $i_0 : X \hookrightarrow M_0$, $i_1 : X \hookrightarrow M_1$. For simplicity, assume that the corresponding normal bundles are trivial (in general, they need to have symmetric Euler classes). By the symplectic neighborhood theorem (Theorem~\ref{thm:weinstein_symplectic}), there exist symplectic embeddings $j_0 : X \times B_\varepsilon \to M_0$ and $j_1 : X \times B_\varepsilon \to M_1$ (called \textbf{framings}) where $B_\varepsilon$ is a ball of radius $\varepsilon$ centered at the origin in ${\mathbb R}^2$, such that $j_k^* \omega_k = \alpha + dx \wedge dy$ and $j_k (p,0) = i_k (p) $ $\forall p \in X$, $k=0,1$. Choose an area- and orientation-preserving diffeomorphism $\phi$ of the annulus $B_\varepsilon \setminus B_\delta$ for $0 < \delta < \varepsilon$ that interchanges the two boundary components. Let ${\mathcal U}_k = j_k (X \times B_\delta) \subset M_k$, $k=0,1$. A \textbf{symplectic sum} of $M_0$ and $M_1$ along $X$ is defined to be \[ M_0 \#_X M_1 := \left( M_0 \setminus {\mathcal U}_0 \right) \cup_\phi \left( M_1 \setminus {\mathcal U}_1 \right) \] where the symbol $\cup_\phi$ means that we identify $j_1(p,q)$ with $j_0 (p,\phi(q))$ for all $p \in X$ and $\delta < |q| < \varepsilon$. As $\omega_0$ and $\omega_1$ agree on the regions under identification, they induce a symplectic form on $M_0 \#_X M_1$. The result depends on $j_0$, $j_1$, $\delta$ and $\phi$. 
\textbf{Rational blowdown} is a surgery on 4-manifolds that replaces a neighborhood of a chain of embedded $S^2$'s with boundary a {\em lens space} $L(n^2,n-1)$ by a manifold with the same rational homology as a ball. This simplifies the homology possibly at the expense of complicating the fundamental group. Symington~\cite{sy:blowdown} showed that rational blowdown preserves a symplectic structure if the original spheres are symplectic surfaces in a symplectic 4-manifold. \ssubsection{Symplectic Blow-Up} \label{sec:blow_up} {\em Symplectic blow-up} is the extension to the symplectic category of the blow-up operation in algebraic geometry. It is due to Gromov according to the first printed exposition of this operation in~\cite{mc:examples}. Let $L$ be the \textbf{tautological line bundle} over ${\mathbb C} {\mathbb P}^{n-1}$, that is, \[ L = \{ ([p], z) \mid p \in {\mathbb C}^n \setminus \{ 0 \} \ , \ z = \lambda p \mbox{ for some } \lambda \in {\mathbb C} \} \] with projection to ${\mathbb C} {\mathbb P}^{n-1}$ given by $\pi: ([p], z) \mapsto [p]$. The fiber of $L$ over the point $[p] \in {\mathbb C} {\mathbb P}^{n-1}$ is the complex line in ${\mathbb C}^n$ represented by that point. The \textbf{blow-up of ${\mathbb C}^n$ at the origin}\index{blow-up ! definition} is the total space of the bundle $L$, sometimes denoted $\widetilde {\mathbb C}^n$. The corresponding \textbf{blow-down map}\index{blow-down map} is the map $\beta : L \to {\mathbb C}^n$ defined by $\beta ([p], z) = z$. The total space of $L$ may be decomposed as the disjoint union of two sets: the zero section \[ E:= \{ ([p],0) \mid p \in {\mathbb C}^n \setminus \{ 0 \} \} \] and \[ S:= \{ ([p],z) \mid p \in {\mathbb C}^n \setminus \{ 0 \} \ , \ z = \lambda p \mbox{ for some } \lambda \in {\mathbb C}^* \} \ . 
\] The set $E$ is called the \textbf{exceptional divisor}\index{exceptional divisor}; it is diffeomorphic to ${\mathbb C} {\mathbb P}^{n-1}$ and gets mapped to the origin by $\beta$. On the other hand, the restriction of $\beta$ to the complementary set $S$ is a diffeomorphism onto ${\mathbb C}^n \setminus \{ 0 \}$. Hence, we may regard $L$ as being obtained from ${\mathbb C}^n$ by smoothly replacing the origin by a copy of ${\mathbb C} {\mathbb P}^{n-1}$. Every biholomorphic map $f: {\mathbb C}^n \to {\mathbb C}^n$ with $f(0)=0$ lifts uniquely to a biholomorphic map $\widetilde f : L \to L$ with $\widetilde f (E) = E$. The lift is given by the formula \[ {\widetilde f} ([p],z) = \left\{ \begin{array}{ll} ([f(z)],f(z)) & \mbox{ if } z \neq 0 \\ ([Df(0)\,p],0) & \mbox{ if } z = 0 \ . \end{array} \right. \] There are actions of the unitary group $\mathrm{U} (n)$ on $L$, $E$ and $S$ induced by the standard linear action on ${\mathbb C}^n$, and the map $\beta$ is $\mathrm{U} (n)$-equivariant. For instance, $\beta^* \omega_0 + \pi^* \omega_{FS}$ is a $\mathrm{U} (n)$-invariant K\"ahler form on $L$. \begin{definition} A \textbf{blow-up symplectic form}\index{blow-up ! symplectic form}\index{symplectic ! blow-up} on the tautological line bundle $L$ is a $\mathrm{U} (n)$-invariant symplectic form $\omega$ such that the difference $\omega - \beta^* \omega_0$ is compactly supported, where $\omega _0 = \frac i2 \sum_{k=1}^n dz_k \wedge d\bar z_k$ is the standard symplectic form on ${\mathbb C}^n$. \end{definition} Two blow-up symplectic forms are \textbf{equivalent}\index{blow-up ! equivalence} if one is the pullback of the other by a $\mathrm{U} (n)$-equivariant diffeomorphism of $L$. Guillemin and Sternberg~\cite{gu-st:birational} showed that two blow-up symplectic forms are equivalent if and only if they have equal restrictions to the exceptional divisor $E \subset L$. 
Let $\Omega^\varepsilon$ ($\varepsilon > 0$) be the set of all blow-up symplectic forms on $L$ whose restriction to the exceptional divisor $E \simeq {\mathbb C} {\mathbb P}^{n-1}$ is $\varepsilon \omega_{_{\mathrm{FS}}}$, where $\omega_{_{\mathrm{FS}}}$ is the Fubini-Study form (Section~\ref{sec:kahler}). An \textbf{$\varepsilon$-blow-up} of ${\mathbb C}^n$ at the origin is a pair $(L,\omega)$ with $\omega \in \Omega ^\varepsilon$. Let $(M, \omega)$ be a $2n$-dimensional symplectic manifold. It is a consequence of Darboux's Theorem (Theorem~\ref{thm:darboux}) that, for each point $p \in M$, there exists a complex chart $({\mathcal U} , z_1 , \ldots , z_n)$ centered at $p$ and with image in ${\mathbb C}^n$ where $\left. \omega \right|_{\mathcal U} = \frac i2 \sum_{k=1}^n dz_k \wedge d\bar z_k$. It is shown in~\cite{gu-st:birational} that, for $\varepsilon$ small enough, we can perform an $\varepsilon$-blow-up of $M$ at $p$ modeled on the $\varepsilon$-blow-up of ${\mathbb C}^n$ at the origin, without changing the symplectic structure outside of a small neighborhood of $p$. The resulting manifold is called an \textbf{$\varepsilon$-blow-up of $M$ at $p$}.\index{blow-up ! definition} As a manifold, the blow-up of $M$ at a point is diffeomorphic to the {\em connected sum}\footnote{The \textbf{connected sum} of two oriented $m$-dimensional manifolds $M_0$ and $M_1$ is the manifold, denoted $M_0 \# M_1$, obtained from the union of those manifolds each with a small ball removed $M_i \setminus B_i$ by identifying the boundaries via a (smooth) map $\phi : \partial B_0 \to \partial B_1$ that extends to an orientation-preserving diffeomorphism of neighborhoods of $\partial B_0$ and $\partial B_1$ (interchanging the inner and outer boundaries of the annuli).} $M \# \overline{{\mathbb C} {\mathbb P}^n}$, where $\overline{{\mathbb C} {\mathbb P}^n}$ is the manifold ${\mathbb C} {\mathbb P}^n$ equipped with the orientation opposite to the natural complex one. 
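Since blowing up a 4-manifold at a point is topologically a connected sum with $\overline{{\mathbb C}{\mathbb P}^2}$, whose intersection form contributes one $(-1)$ summand, the effect on the topological coordinates is immediate; a small sketch (the function name \texttt{blow\_up} is ours):

```python
def blow_up(b2, sigma, times=1):
    """(b_2, sigma) after blowing up a 4-manifold at `times` points:
    each blow-up is a connected sum with CP2-bar, so Q_M gains one
    (-1) summand; b_2 grows by 1 and sigma drops by 1 per blow-up
    (equivalently, c_1^2 drops by 1 while c_2 = chi grows by 1)."""
    return b2 + times, sigma - times
```

For example, blowing up ${\mathbb C}{\mathbb P}^2$ (with $(b_2,\sigma) = (1,1)$) nine times yields $(10,-8)$, the coordinates of $E(1) = {\mathbb C}{\mathbb P}^2 \#_9 \overline{{\mathbb C}{\mathbb P}^2}$.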
\begin{example} Let ${\mathbb C} {\mathbb P} (L \oplus {\mathbb C})$ be the ${\mathbb C} {\mathbb P}^1$-bundle over ${\mathbb C} {\mathbb P}^{n-1}$ obtained by projectivizing the direct sum of the tautological line bundle $L$ with a trivial complex line bundle. Consider the map \[ \begin{array}{rrcl} \beta: & {\mathbb C} {\mathbb P} (L \oplus {\mathbb C}) & \longrightarrow & {\mathbb C} {\mathbb P}^n \\ & ([p],[\lambda p : w]) & \longmapsto & [\lambda p : w] \ , \end{array} \] where $[\lambda p : w]$ on the right represents a line in ${\mathbb C}^{n+1}$, forgetting that, for each $[p] \in {\mathbb C} {\mathbb P}^{n-1}$, that line sits in the 2-complex-dimensional subspace $L_{[p]} \oplus {\mathbb C} \subset {\mathbb C}^n \oplus {\mathbb C}$. Notice that $\beta$ maps the {\em exceptional divisor} \[ E:= \{ ([p],[0:\ldots :0:1]) \mid [p] \in {\mathbb C} {\mathbb P}^{n-1} \} \simeq {\mathbb C} {\mathbb P}^{n-1} \] to the point $[0:\ldots :0:1] \in {\mathbb C} {\mathbb P}^n$, and $\beta$ is a diffeomorphism on the complement \[ S:= \{ ([p],[\lambda p : w]) \mid [p] \in {\mathbb C} {\mathbb P}^{n-1} \ , \ \lambda \in {\mathbb C}^* \ , \ w \in {\mathbb C} \} \simeq {\mathbb C} {\mathbb P}^n \setminus \{ [0:\ldots : 0:1] \} \ . \] Therefore, we may regard ${\mathbb C} {\mathbb P} (L \oplus {\mathbb C})$ as being obtained from ${\mathbb C} {\mathbb P}^n$ by smoothly replacing the point $[0:\ldots :0:1]$ by a copy of ${\mathbb C} {\mathbb P}^{n-1}$. The space ${\mathbb C} {\mathbb P} (L \oplus {\mathbb C})$ is the blow-up of ${\mathbb C} {\mathbb P}^n$ at the point $[0:\ldots :0:1]$, and $\beta$ is the corresponding blow-down map. The manifold ${\mathbb C} {\mathbb P} (L \oplus {\mathbb C})$ for $n=2$ is a \textbf{Hirzebruch surface}.\index{example ! 
Hirzebruch surface}\index{Hirzebruch surface} \end{example} When $({\mathbb C} {\mathbb P}^{n-1} , \omega_{FS})$ is symplectically embedded in a symplectic manifold $(M,\omega)$ with image $X$ and normal bundle isomorphic to the tautological bundle $L$, it can be subject to a {\em blow-down} operation. By the symplectic neighborhood theorem (Theorem~\ref{thm:weinstein_symplectic}), some neighborhood ${\mathcal U} \subset M$ of the image $X$ is symplectomorphic to a neighborhood ${\mathcal U}_0 \subset L$ of the zero section. It turns out that some neighborhood of $\partial {\mathcal U}_0$ in $L$ is symplectomorphic to a spherical shell in $({\mathbb C}^n,\omega_0)$. The \textbf{blow-down of $M$ along $X$} is a manifold obtained from the union of $M \setminus {\mathcal U}$ with a ball in ${\mathbb C}^n$. For more details, see~\cite[\S 7.1]{mc-sa:introduction}. Following algebraic geometry, we call \textbf{minimal} a $2n$-dimensional symplectic manifold $(M,\omega)$ without any symplectically embedded $({\mathbb C} {\mathbb P}^{n-1} , \omega_{FS})$, so that $(M,\omega)$ is not the blow-up at a point of another symplectic manifold. In dimension 4, a manifold is minimal if it does not contain any embedded sphere $S^2$ with self-intersection $-1$. Indeed, by the work of Taubes~\cite{ta:invariants,ta:sw=>gr}, if such a sphere $S$ exists, then either the homology class $[S]$ or $-[S]$ can be represented by a {\em symplectically} embedded sphere with self-intersection $-1$. For a symplectic manifold $(M , \omega)$, let $i : X \hookrightarrow M$ be the inclusion of a symplectic submanifold. The normal bundle $NX$ to $X$ in $M$ admits a structure of complex vector bundle (as it is a symplectic vector bundle). 
Let ${\mathbb P} (NX) \to X$ be the projectivization of the bundle $NX \to X$, let $Z$ be the zero section of $NX$, let $L (NX)$ be the corresponding \textbf{tautological line bundle} (given by assembling the tautological line bundles over each fiber) and let $\beta : L (NX) \to NX$ be the blow-down map. On the {\em exceptional divisor} \[ E:= \{ ([p],0) \in L(NX) \mid p \in NX \setminus Z \} \simeq {\mathbb P} (NX) \] the map $\beta$ is just projection to the zero section $Z$. The restriction of $\beta$ to the complement $L (NX) \setminus E$ is a diffeomorphism to $NX \setminus Z$. Hence, $L (NX)$ may be viewed as being obtained from $NX$ by smoothly replacing each point of the zero section by the projectivization of its normal space. We symplectically identify some tubular neighborhood ${\mathcal U}$ of $X$ in $M$ with a tubular neighborhood ${\mathcal U}_0$ of the zero section $Z$ in $NX$. A \textbf{blow-up of the symplectic manifold $(M , \omega)$ along the symplectic submanifold $X$}\index{blow-up ! along a submanifold} is the manifold obtained from the union of $M \setminus {\mathcal U}$ and $\beta^{-1} ({\mathcal U}_0)$ by identifying neighborhoods of $\partial {\mathcal U}$, and equipped with a symplectic form that restricts to $\omega$ on $M \setminus {\mathcal U}$~\cite{mc:examples}. When $X$ is one point, this construction reduces to the previous symplectic blow-up at a point. Often symplectic geography concentrates on minimal examples. McDuff~\cite{mc:rational} showed that a minimal symplectic 4-manifold with a symplectically embedded $S^2$ with nonnegative self-intersection is symplectomorphic either to ${\mathbb C} {\mathbb P}^2$ or to an $S^2$-bundle over a surface. Using Seiberg-Witten theory it was proved: \begin{theorem} Let $(M,\omega)$ be a minimal closed symplectic 4-manifold. \begin{itemize} \item[(a)] \textbf{(Taubes~\cite{ta:sw=>gr})} If $b_2^+ > 1$, then $c_1^2 \geq 0$. 
\item[(b)] \textbf{(Liu~\cite{li:wall})} If $b_2^+ = 1$ and $c_1^2 < 0$, then $M$ is the total space of an $S^2$-fibration over a surface of genus $g$ where $\omega$ is nondegenerate on the fibers, and $(c_1^2,c_2) = (8-8g, 4-4g)$, i.e., $(M,\omega)$ is a {\em symplectic ruled surface}. \end{itemize} \end{theorem} A \textbf{symplectic ruled surface}\footnote{A (rational) \textbf{ruled surface} is a complex (K\"ahler) surface that is the total space of a holomorphic fibration over a Riemann surface with fiber ${\mathbb C} {\mathbb P}^1$. When the base is also a sphere, these are the \textbf{Hirzebruch surfaces} ${\mathbb P} (L \oplus {\mathbb C})$ where $L$ is a holomorphic line bundle over ${\mathbb C} {\mathbb P}^1$.} is a symplectic 4-manifold $(M,\omega)$ that is the total space of an $S^2$-fibration where $\omega$ is nondegenerate on the fibers. A \textbf{symplectic rational surface} is a symplectic 4-manifold $(M,\omega)$ that can be obtained from the standard $({\mathbb C} {\mathbb P} ^2,\omega_{FS})$ by blowing up and blowing down. With $b_2^+ = 1$ and $c_1^2 = 0$, we have symplectic manifolds ${\mathbb C} {\mathbb P}^2 \#_9 \overline{{\mathbb C} {\mathbb P}^2} =: E(1)$, the {\em Dolgachev surfaces} $E(1,p,q)$, the results $E(1)_K$ of surgery on a fibered knot $K \subset S^3$, etc. With $b_2^+ = 1$ and $c_1^2 > 0$, we have symplectic manifolds ${\mathbb C} {\mathbb P}^2$, $S^2 \times S^2$, ${\mathbb C} {\mathbb P}^2 \#_n \overline{{\mathbb C} {\mathbb P}^2}$ for $n \leq 8$ and the {\em Barlow surface}. For $b_2^+ = 1$ and $c_1^2 \geq 0$, Park~\cite{pa:non-complex} gave a criterion for a symplectic 4-manifold to be rational or ruled in terms of Seiberg-Witten theory. \ssubsection{Uniqueness of Symplectic Forms} \label{sec:classification} Besides the notions listed in Section~\ref{sec:trick}, the following equivalence relation for symplectic manifolds is considered. 
As it allows the cleanest statements about uniqueness, this relation is simply called {\em equivalence}. \begin{definition} Symplectic manifolds $(M,\omega_0)$ and $(M,\omega_1)$ are \textbf{equivalent} if they are related by a combination of deformation-equivalences and symplectomorphisms. \end{definition} Recall that $(M,\omega_0)$ and $(M,\omega_1)$ are {\em deformation-equivalent} when there is a smooth family $\omega_t$ of symplectic forms joining $\omega_0$ to $\omega_1$ (Section~\ref{sec:trick}), and they are {\em symplectomorphic} when there is a diffeomorphism $\varphi : M \to M$ such that $\varphi^* \omega_1 = \omega_0$ (Section~\ref{symplectic_forms}). Hence, equivalence is the relation generated by deformations and diffeomorphisms. The corresponding equivalence classes can be viewed as the connected components of the moduli space of symplectic forms up to diffeomorphism. This is a useful notion when focusing on topological properties. \begin{examples} \begin{enumerate} \item The complex projective plane ${\mathbb C} {\mathbb P} ^2$ has a unique symplectic structure up to symplectomorphism and scaling. This was shown by Taubes~\cite{ta:sw=>gr}\index{Taubes ! unique symplectic structure on ${\mathbb C} {\mathbb P} ^2$}\index{unique symplectic structure on ${\mathbb C} {\mathbb P} ^2$} relating Seiberg-Witten invariants (Section~\ref{sec:invariants}) to pseudoholomorphic curves to prove the existence of a pseudoholomorphic sphere. Previous work of Gromov~\cite{gr:pseudo} and McDuff~\cite{mc:structure} showed that the existence of a pseudoholomorphic sphere implies that the symplectic form is standard. Lalonde and McDuff~\cite{la-mc:classification} concluded similar classifications for symplectic ruled surfaces and for symplectic rational surfaces (Section~\ref{sec:blow_up}). 
The symplectic form on a symplectic ruled surface is unique up to symplectomorphism in its cohomology class, and is isotopic to a standard K\"ahler form. In particular, any symplectic form on $S^2 \times S^2$ is symplectomorphic to $a \pi_1^* \sigma + b \pi_2^* \sigma$ for some $a,b >0$ where $\sigma$ is the standard area form on $S^2$. Li-Liu~\cite{li-liu:symplectic} showed that the symplectic structure on ${\mathbb C} {\mathbb P}^2 \#_n \overline{{\mathbb C} {\mathbb P}^2}$ for $2 \leq n \leq 9$ is unique up to equivalence. \item McMullen and Taubes~\cite{mc-ta:inequivalent} first exhibited simply connected closed 4-manifolds admitting inequivalent symplectic structures. Their examples were constructed using 3-dimensional topology, and distinguished by analyzing the structure of Seiberg-Witten invariants to show that the first Chern classes (Section~\ref{sec:compatible_almost}) of the two symplectic structures lie in disjoint orbits of the diffeomorphism group. In higher dimensions there were previously examples of manifolds with inequivalent symplectic forms; see for instance~\cite{ru:algebraic}. With symplectic techniques and avoiding gauge theory, Smith~\cite{sm:moduli} showed that, for each $n \geq 2$, there is a simply connected closed 4-manifold that admits at least $n$ inequivalent symplectic forms, also distinguished via the first Chern classes. It is not yet known whether there exist inequivalent symplectic forms on a 4-manifold with the same first Chern class. 
\end{enumerate} \end{examples} \ssubsection{Invariants for 4-Manifolds} \label{sec:invariants} Very little was known about 4-dimensional manifolds until 1981, when Freedman~\cite{fr:topology} provided a complete classification of closed simply connected {\em topological} 4-manifolds, and shortly thereafter Donaldson~\cite{do:gauge} showed that the panorama for {\em smooth} 4-manifolds was much wilder.\footnote{It had been proved by Rokhlin in 1952 that if such a smooth manifold $M$ has even intersection form $Q_M$ (i.e., $w_2 =0$), then the signature of $Q_M$ must be a multiple of 16. It had been proved by Whitehead and Milnor that two such topological manifolds are homotopy equivalent if and only if they have the same intersection form.} Freedman showed that, modulo homeomorphism, such topological manifolds are essentially classified by their intersection forms (for an {\em even} intersection form there is exactly one class, whereas for an {\em odd} intersection form there are exactly two classes distinguished by the {\em Kirby-Siebenmann invariant} $KS$, at most one of which admits smooth representatives -- smoothness requires $KS = 0$). Donaldson showed that, whereas the existence of a smooth structure imposes strong constraints on the topological type of a manifold, for the same topological manifold there can be infinitely many different smooth structures.\footnote{It is known that in dimensions $\leq 3$, each topological manifold has exactly one smooth structure, and in dimensions $\geq 5$ each topological manifold has at most finitely many smooth structures. For instance, whereas each topological ${\mathbb R}^n$, $n \neq 4$, admits a unique smooth structure, the topological ${\mathbb R}^4$ admits uncountably many smooth structures.} In other words, by far not all intersection forms can occur for smooth 4-manifolds and the same intersection form may correspond to nondiffeomorphic manifolds. 
Donaldson's key tool was a set of gauge-theoretic invariants, defined by counting with signs the equivalence classes (modulo gauge equivalence) of connections on $\mathrm{SU} (2)$- (or $\mathrm{SO} (3)$-) bundles over $M$ whose curvature has vanishing self-dual part. For a dozen years there was hard work on the invariants discovered by Donaldson but limited advancement on the understanding of smooth 4-manifolds. \begin{examples} Finding {\em exotic}\footnote{A manifold homeomorphic but not diffeomorphic to a smooth manifold $M$ is called an \textbf{exotic} $M$.} smooth structures on closed simply connected manifolds with small $b_2$ has long been an interesting problem, especially in view of the smooth Poincar\'e conjecture for 4-manifolds. The first exotic smooth structures on a rational surface ${\mathbb C}{\mathbb P}^2 \#_n \overline{{\mathbb C} {\mathbb P}^2}$ were found in the late 80's for $n=9$ by Donaldson~\cite{do:irrationality} and for $n=8$ by Kotschick~\cite{ko:homeomorphic}. There was no progress until the recent work of Park~\cite{pa:symplectic} constructing a symplectic exotic ${\mathbb C} {\mathbb P}^2 \#_7 \overline{{\mathbb C} {\mathbb P}^2}$ and using this to exhibit a third distinct smooth structure on ${\mathbb C} {\mathbb P}^2 \#_8 \overline{{\mathbb C} {\mathbb P}^2}$, thus illustrating how the existence of symplectic forms is tied to the existence of different smooth structures. This stimulated research by Fintushel, Ozsv\'ath, Park, Stern, Stipsicz and Szab\'o, which together shows that there are infinitely many exotic smooth structures on ${\mathbb C}{\mathbb P}^2 \#_n \overline{{\mathbb C} {\mathbb P}^2}$ for $n=5,6,7,8$ (the case $n=9$ had been shown in the late 80's by Friedman-Morgan and by Okonek-Van de Ven). \end{examples} In 1994 Witten brought about a revolution in Donaldson theory by introducing a new set of invariants -- the \textbf{Seiberg-Witten invariants} -- which are much simpler to calculate and to apply.
This new viewpoint was inspired by developments due to Seiberg and Witten in the understanding of {\em $N=2$ supersymmetric Yang-Mills theory}. Let $M$ be a smooth oriented closed 4-dimensional manifold with $b_2^+ (M) > 1$ (there is a version for $b_2^+ (M) = 1$). All such 4-manifolds $M$ (with any $b_2^+ (M)$) admit a spin-c structure, i.e., a $\mathrm{Spin}^c (4)$-bundle over $M$ with an isomorphism of the associated $\mathrm{SO} (4)$-bundle to the bundle of oriented frames on the tangent bundle for some chosen riemannian metric. Let $\mathcal C_M = \{ a \in H^2 (M;{\mathbb Z}) \mid a \equiv w_2 (TM) \pmod 2 \}$ be the set of characteristic elements, and let $\mathrm{Spin}^c (M)$ be the set of spin-c structures on $M$. For simplicity, assume that $M$ is simply connected (or at least that $H_1 (M;{\mathbb Z})$ has no 2-torsion), so that $\mathrm{Spin}^c (M)$ is isomorphic to $\mathcal C_M$ with isomorphism given by the first Chern class of the {\em determinant line bundle} (the \textbf{determinant line bundle} is the line bundle associated by a natural group homomorphism $\mathrm{Spin}^c (4) \to \mathrm{U}(1)$). Fix an orientation of a maximal-dimensional positive-definite subspace $H_+^2 (M ; {\mathbb R}) \subset H^2 (M ; {\mathbb R})$. The Seiberg-Witten invariant is the function \[ \mathrm{SW}_M : \mathcal C_M \longrightarrow {\mathbb Z} \] defined as follows. Given a spin-c structure $\alpha \in \mathrm{Spin}^c (M) \simeq \mathcal C_M$, the image $\mathrm{SW}_M (\alpha) = [{\mathcal M}] \in H_d ({\mathcal B}^*;{\mathbb Z})$ is the homology class of the moduli space ${\mathcal M}$ of solutions (called \textbf{monopoles}) of the Seiberg-Witten (SW) equations modulo gauge equivalence.
The SW equations are non-linear differential equations on a pair of a connection $A$ on the determinant line bundle of $\alpha$ and a section $\varphi$ of an associated $\mathrm{U}(2)$-bundle, called the positive (half) spinor bundle: \[ F_A^+ = i q (\varphi) \qquad \mbox{ and } \qquad D_A \varphi = 0 \ , \] where $F_A^+$ is the self-dual part of the (imaginary) curvature of $A$, $q$ is a squaring operation taking sections of the positive spinor bundle to self-dual 2-forms, and $D_A$ is the corresponding Dirac operator. For a generic perturbation of the equations (replacing the first equation by $F_A^+ = i q (\varphi) +i\nu$, where $\nu$ is a self-dual 2-form) and of the riemannian metric, a transversality argument shows that the moduli space ${\mathcal M}$ is well-behaved and actually inside the space ${\mathcal B}^*$ of gauge-equivalence classes of irreducible pairs (those $(A,\varphi)$ for which $\varphi \neq 0$), which is homotopy-equivalent to ${\mathbb C} {\mathbb P}^\infty$ and hence has even-degree homology groups $H_d ({\mathcal B}^*;{\mathbb Z}) \simeq {\mathbb Z}$. When the dimension $d$ of ${\mathcal M}$ is odd or when ${\mathcal M}$ is empty, the invariant $\mathrm{SW}_M (\alpha)$ is set to be zero. The \textbf{basic classes} are the classes $\alpha \in \mathcal C_M$ for which $\mathrm{SW}_M (\alpha) \neq 0$. The set of basic classes is always finite, and if $\alpha$ is a basic class then so is $-\alpha$. The main results are that the Seiberg-Witten invariants are invariants of the diffeomorphism type of the 4-manifold $M$ and satisfy vanishing and nonvanishing theorems, which made it possible to answer an array of questions about specific manifolds. Taubes~\cite{ta:sw=gr} discovered an equivalence between Seiberg-Witten and Gromov invariants (using pseudoholomorphic curves) for symplectic 4-manifolds, by proving the existence of pseudoholomorphic curves from solutions of the Seiberg-Witten equations and vice-versa.
As a consequence, he proved: \begin{theorem} \label{thm:taubes}\index{Taubes ! theorem}\index{theorem ! Taubes} \textbf{(Taubes)} $\;$ Let $(M,\omega)$ be a compact symplectic 4-manifold. If $b_2^+ > 1$, then $c_1 (M,\omega)$ admits a smooth pseudoholomorphic representative. If $M = M_1 \# M_2$, then one of the $M_i$'s has negative definite intersection form. \end{theorem} There are also results for $b_2^+ = 1$, and follow-ups describe the set of basic classes of a connected sum $M \# N$ in terms of the set of basic classes of $M$ when $N$ is a manifold with negative definite intersection form (starting with $\overline{{\mathbb C} {\mathbb P}^2}$). In an attempt to understand other 4-manifolds via Seiberg-Witten and Gromov invariants, some analysis of pseudoholomorphic curves has been extended to nonsymplectic 4-manifolds by equipping these with a {\em nearly nondegenerate closed 2-form}. In particular, Taubes~\cite{ta:harmonic} has related Seiberg-Witten invariants to pseudoholomorphic curves for compact oriented 4-manifolds with $b_2^+ > 0$. Any compact oriented 4-manifold $M$ with $b_2^+ > 0$ admits a closed 2-form that vanishes along a union of circles and is symplectic elsewhere~\cite{ga-ki:circles,ho:harmonic}. In fact, for a generic metric on $M$, there is a self-dual harmonic form $\omega$ which is transverse to zero as a section of $\Lambda^2 T^* M$. The vanishing locus of $\omega$ is the union of a finite number of embedded circles, and $\omega$ is symplectic elsewhere. The generic behavior of closed 2-forms on orientable 4-manifolds is partially understood~\cite[pp.23-24]{ar-gi:symplectic}. Here is a summary. Let $\omega$ be a generic closed 2-form on a 4-manifold $M$. At a generic point of $M$, $\omega$ is nondegenerate; in particular, has the Darboux normal form $dx_1 \wedge dy_1 + dx_2 \wedge dy_2$.
There is a codimension-1 submanifold $Z$ where $\omega$ has rank 2, and there are no points where $\omega$ vanishes. At a generic point of $Z$, the kernel of $\widetilde \omega$ is transverse to $Z$; the normal form near such a point is $x_1 dx_1 \wedge dy_1 + dx_2 \wedge dy_2$. There is a curve $C$ where the kernel of $\widetilde \omega$ is not transverse to $Z$, hence sits in $TZ$. At a generic point of $C$, the kernel of $\widetilde \omega$ is transverse to $C$; there are two possible normal forms near such points, called {\em elliptic} and {\em hyperbolic}, $d(x-\frac{z^2}{2}) \wedge dy + d(xz \pm ty-\frac{z^3}{3}) \wedge dt$. The hyperbolic and elliptic sections of $C$ are separated by {\em parabolic} points, where the kernel is tangent to $C$. It is known that there exists at least one continuous family of inequivalent degeneracies in a parabolic neighborhood~\cite{go-ti:moduli}. \ssubsection{Lefschetz Pencils} \label{sec:pencils} {\em Lefschetz pencils} in symplectic geometry imitate linear systems in complex geometry. Whereas holomorphic functions on a projective surface must be constant, there are interesting functions on the complement of a finite set, and generic such functions have only quadratic singularities. A Lefschetz pencil can be viewed as a complex Morse function or as a very singular fibration, in the sense that not only are some fibers singular (they have ordinary double points), but all fibers go through some common points.
\begin{definition} A \textbf{Lefschetz pencil} on an oriented 4-manifold $M$ is a map $f : M \setminus \{ b_1 , \ldots , b_n \} \to {\mathbb C} {\mathbb P} ^1$ defined on the complement of a finite set in $M$, called the \textbf{base locus}, that is a submersion away from a finite set $\{ p_1 , \ldots , p_{n+1} \}$, and obeying local models $(z_1,z_2) \mapsto z_1 / z_2$ near the $b_j$'s and $(z_1,z_2) \mapsto z_1 z_2$ near the $p_j$'s, where $(z_1,z_2)$ are oriented local complex coordinates. \end{definition} Usually it is also required that each fiber contains at most one singular point. By blowing-up $M$ at the $b_j$'s, we obtain a map to ${\mathbb C} {\mathbb P}^1$ on the whole manifold, called a \textbf{Lefschetz fibration}. Lefschetz pencils and Lefschetz fibrations can be defined on higher dimensional manifolds where the $b_j$'s are replaced by codimension 4 submanifolds. By working on the Lefschetz fibration, Gompf~\cite{go:characterization,go:Lefschetz} proved that a structure of Lefschetz pencil (with a nontrivial base locus) gives rise to a symplectic form, canonical up to isotopy, such that the fibers are symplectic. Using asymptotically holomorphic techniques~\cite{au:asymptotically,do:almost}, Donaldson~\cite{do:pencils} proved that symplectic 4-manifolds admit Lefschetz pencils. More precisely: \begin{theorem} \label{thm:donaldson_lefschetz} \textbf{(Donaldson)} $\;$ Let $J$ be a compatible almost complex structure on a compact symplectic 4-manifold $(M,\omega)$ where the class $[\omega]/2\pi$ is integral. Then $J$ can be deformed through almost complex structures to an almost complex structure $J'$ such that $M$ admits a Lefschetz pencil with $J'$-holomorphic fibers. \end{theorem} The closure of a smooth fiber of the Lefschetz pencil is a symplectic submanifold Poincar\'e dual to $k [\omega]/2\pi$; cf.\ Theorem~\ref{thm:donaldson_submanifolds}.
Other perspectives on Lefschetz pencils have been explored, including in terms of representations of the free group $\pi_1 ({\mathbb C} {\mathbb P} ^1 \setminus \{ p_1 , \ldots , p_{n+1} \})$ in the mapping class group $\Gamma_g$ of the generic fiber surface~\cite{sm:monodromy}. Similar techniques were used by Auroux~\cite{au:branched} to realize symplectic 4-manifolds as {\em branched covers} of ${\mathbb C} {\mathbb P} ^2$, and thus reduce the classification of symplectic 4-manifolds to a (hard) algebraic question about factorization in the braid group. Let $M$ and $N$ be compact oriented 4-manifolds, and let $\nu$ be a symplectic form on $N$. \begin{definition} A map $f:M \to N$ is a \textbf{symplectic branched cover} if for any $p \in M$ there are complex charts centered at $p$ and $f(p)$ such that $\nu$ is positive on each complex line and where $f$ is given by: a local diffeomorphism $(x,y) \to (x,y)$, or a simple branching $(x,y) \to (x^2,y)$, or an ordinary cusp $(x,y) \to (x^3 -xy,y)$. \end{definition} \vspace*{-2ex} \begin{theorem} \label{thm:auroux_branched} \textbf{(Auroux)} $\;$ Let $(M,\omega)$ be a compact symplectic 4-manifold where the class $[\omega]$ is integral, and let $k$ be a sufficiently large integer. Then there is a symplectic branched cover $f_k :(M,k\omega) \to {\mathbb C}{\mathbb P}^2$, that is canonical up to isotopy for $k$ large enough. Conversely, given a symplectic branched cover $f:M \to N$, the domain $M$ inherits a symplectic form canonical up to isotopy in the class $f^*[\nu]$. \end{theorem} \ssection{Hamiltonian Geometry} \index{moment map} \label{section5} \ssubsection{Symplectic and Hamiltonian Vector Fields} \label{sec:symplectic_hamiltonian_fields} \index{hamiltonian ! vector field}\index{symplectic ! vector field} \index{vector field ! hamiltonian}\index{vector field ! symplectic} Let $(M,\omega)$ be a symplectic manifold and let $H : M \to {\mathbb R}$ be a smooth function.
By nondegeneracy, there is a unique vector field $X_{_H}$ on $M$ such that $\imath_{X_{H}}\omega=dH$. Supposing that $X_{_H}$ is complete (this is always the case when $M$ is compact), let $\rho_{t} : M \to M$, $t \in {\mathbb R}$, be its flow (cf.\ Section~\ref{cotangent_bundles}). Each diffeomorphism $\rho_{t}$ preserves $\omega$, i.e., $\rho_{t}^* \omega = \omega$, because $\frac{d}{dt}\rho_{t}^{*}\omega = \rho_{t}^{*}{\mathcal L}_{X_{_H}}\omega = \rho_{t}^{*}(d \imath_{X_{_H}}\omega + \imath_{X_{_H}} d\omega ) = 0$. Therefore, every function on $(M,\omega)$ produces a family of symplectomorphisms. Notice how this feature involves both the {\em nondegeneracy} and the {\em closedness} of $\omega$. \begin{definition} A vector field $X_{_H}$ such that $\imath_{X_{H}}\omega=dH$ for some $H \in C^\infty (M)$ is a \textbf{hamiltonian vector field}\index{hamiltonian ! vector field}\index{vector field ! hamiltonian} with \textbf{hamiltonian function}\index{hamiltonian ! function} $H$. \end{definition} Hamiltonian vector fields preserve their hamiltonian functions (${\mathcal L}_{X_{H}}H=\imath_{X_{H}}dH$ $=\imath_{X_{H}}\,\imath_{X_{H}}\omega=0$), so each integral curve\index{integral ! curve} $\{ \rho_t (x) \mid t \in {\mathbb R} \}$ of a hamiltonian vector field $X_{_H}$ must be contained in a level set of the hamiltonian function $H$. In $({\mathbb R}^{2n}, \omega_0 = \sum dx_j \wedge dy_j)$, the {\em symplectic gradient} $X_{_H} = \sum \left( \frac{\partial H}{\partial y_j} \frac{\partial} {\partial x_j}-\frac{\partial H}{\partial x_j} \frac{\partial} {\partial y_j} \right)$ and the usual (euclidean) gradient $\nabla H = \sum_j \left (\frac{\partial H}{\partial x_j} \frac{\partial}{\partial x_j}+\frac{\partial H}{\partial y_j} \frac{\partial}{\partial y_j}\right)$ of a function $H$ are\index{gradient vector field}\index{vector field ! gradient} related by $JX_{_H}=\nabla H$, where $J$ is the standard almost complex structure.
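As a concrete numerical illustration of the relation $JX_{_H}=\nabla H$ on $({\mathbb R}^{2}, dx \wedge dy)$, the following Python sketch computes the symplectic gradient by central finite differences and checks the identity at a point; the hamiltonian $H$ and the sample point are arbitrary choices made for illustration, not taken from the text.

```python
# Numerical check of J X_H = grad H on (R^2, omega_0 = dx /\ dy).
# The hamiltonian H below is an arbitrary sample function.

def H(x, y):
    return x**2 + x*y + 2.0*y**2

def grad(f, x, y, h=1e-6):
    # central finite differences for (f_x, f_y)
    fx = (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2.0 * h)
    return fx, fy

def X_H(x, y):
    # symplectic gradient: X_H = (dH/dy, -dH/dx), so that i_{X_H} omega_0 = dH
    fx, fy = grad(H, x, y)
    return fy, -fx

def J(a, b):
    # standard almost complex structure on R^2 = C: J(a, b) = (-b, a)
    return -b, a

x0, y0 = 0.7, -1.3
gx, gy = grad(H, x0, y0)   # euclidean gradient at the sample point
a, b = X_H(x0, y0)         # symplectic gradient at the sample point
ja, jb = J(a, b)           # should agree with (gx, gy)
```

Since $H$ is quadratic here, the central differences are exact up to rounding, so the two sides of $JX_{_H}=\nabla H$ agree to high precision.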
\begin{examples} \begin{enumerate} \item For the height function $H(\theta,h)=h$ on the sphere $(M,\omega) = (S^{2},d\theta \wedge dh)$, from $\imath_{X_H}(d\theta \wedge dh)=dh$ we get $X_{_H}=\frac{\partial}{\partial\theta}$. Thus, $\rho_{t}(\theta,h)=(\theta+t,h)$, which is rotation about the vertical axis, preserving the height $H$. \item Let $X$ be any vector field on a manifold $W$. There is a unique vector field $X_{\sharp}$ on the cotangent bundle $T^*W$ whose flow is the lift\index{lift ! of a vector field} of the flow of $X$. Let $\alpha$ be the tautological form and $\omega = -d \alpha$ the canonical symplectic form on $T^*W$. The vector field $X_{\sharp}$ is hamiltonian with hamiltonian function $H := \imath_{X_{\sharp}} \alpha$. \item Consider euclidean space ${\mathbb R}^{2n}$ with coordinates $(q_{1},\ldots,q_{n},p_{1},\ldots,p_{n})$ and $\omega_{0}=\sum dq_j\wedge dp_j$. The curve $\rho_{t}=(q(t),p(t))$ is an integral curve for a hamiltonian vector field $X_{_H}$ exactly when it satisfies the \textbf{Hamilton equations}:\index{Hamilton equations}\index{classical mechanics}\index{mechanics ! classical} \[ \left \{ \begin{array}{l} \frac{dq_i}{dt}(t) = \phantom{-}\frac{\partial H}{\partial p_i} \\ \frac{dp_i}{dt}(t) = -\frac{\partial H}{\partial q_i} \end{array} \right. \] \item \textbf{Newton's second law}\index{Newton ! second law} states that a particle of mass $m$ moving in \textbf{configuration space}\index{space ! configuration}\index{configuration space} ${\mathbb R}^3$ with coordinates $q=(q_1,q_2,q_3)$ under a potential $V(q)$ moves along a curve $q(t)$ such that \[ m\frac{d^2 q}{d t^2} = -\nabla V(q)\ . \] Introduce the \textbf{momenta}\index{momentum} $p_i=m\frac{dq_i}{dt}$ for $i=1,2,3$, and \textbf{energy}\index{energy !
classical mechanics} function $H(q,p)=\frac{1}{2m}|p|^2+V(q)$ on the \textbf{phase space}\footnote{The \textbf{phase space} of a system of $n$ particles is the space parametrizing the position and momenta of the particles. The mathematical model for a phase space is a symplectic manifold.}\index{space ! phase}\index{phase space} ${\mathbb R}^6 = T^* {\mathbb R}^3$ with coordinates $(q_{1},q_{2},q_{3},p_{1},p_{2},p_{3})$. The energy $H$ is conserved by the motion and Newton's second law\index{Newton ! second law} in ${\mathbb R}^3$ is then equivalent to the Hamilton equations\index{Hamilton equations} in ${\mathbb R}^6$: \[ \left\{ \begin{array}{l} \frac{dq_i}{dt}=\frac{1}{m}p_i=\frac{\partial H}{\partial p_i} \\ \frac{dp_i}{dt}=m \frac{d^2q_i}{dt^2}= -\frac{\partial V}{\partial q_i}=-\frac{\partial H}{\partial q_i} \end{array} \right. \] \end{enumerate} \end{examples} \vspace*{-2ex} \begin{definition} A vector field $X$ on $M$ preserving $\omega$ (i.e., such that ${\mathcal L}_{X}\omega=0$) is a \textbf{symplectic vector field}.\index{symplectic ! vector field}\index{vector field ! symplectic} \end{definition} Hence, a vector field $X$ on $(M,\omega)$ is called \textbf{symplectic} when $\imath_X\omega$ is closed, and \textbf{hamiltonian}\index{vector field ! hamiltonian}\index{hamiltonian ! vector field} when $\imath_X \omega$ is exact. In the latter case, a {\em primitive} $H$ of $\imath_X\omega$ is called a \textbf{hamiltonian function}\index{hamiltonian ! function}\index{function ! hamiltonian} of $X$. On a contractible open set every symplectic vector field is hamiltonian. Globally, the group $H_{\rm de Rham}^{1}(M)$ measures the obstruction for symplectic vector fields to be hamiltonian. For instance, the vector field $X_{1}=\frac{\partial}{\partial\theta_{1}}$ on the 2-torus $(M,\omega) = ({\mathbb T}^{2},d\theta_{1}\wedge d\theta_{2})$ is symplectic but not hamiltonian.
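The equivalence between Newton's second law and the Hamilton equations in the examples above can be tested numerically. The sketch below (the mass, spring constant, initial data, and the Runge-Kutta integrator are illustrative assumptions) integrates the Hamilton equations for the harmonic oscillator $H(q,p)=\frac{1}{2m}p^{2}+\frac{k}{2}q^{2}$ and checks that the energy is conserved along the flow and that the known solution of $m\ddot q=-kq$ is reproduced.

```python
import math

# Illustrative sketch: integrate the Hamilton equations for the harmonic
# oscillator H(q, p) = p^2/(2m) + (k/2) q^2, equivalent to Newton's law
# m q'' = -k q.  The values of m, k and the initial data are arbitrary.

m, k = 1.0, 4.0
omega = math.sqrt(k / m)   # angular frequency of the oscillator

def H(q, p):
    return p * p / (2.0 * m) + 0.5 * k * q * q

def rhs(q, p):
    # dq/dt = dH/dp, dp/dt = -dH/dq
    return p / m, -k * q

def rk4_step(q, p, dt):
    # one classical 4th-order Runge-Kutta step
    k1q, k1p = rhs(q, p)
    k2q, k2p = rhs(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
    k3q, k3p = rhs(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
    k4q, k4p = rhs(q + dt * k3q, p + dt * k3p)
    q += dt / 6.0 * (k1q + 2 * k2q + 2 * k3q + k4q)
    p += dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return q, p

q0, p0 = 1.0, 0.0
q, p = q0, p0
dt, steps = 1e-3, 2000
for _ in range(steps):
    q, p = rk4_step(q, p, dt)

t = dt * steps
q_exact = q0 * math.cos(omega * t)   # exact solution with q(0)=1, p(0)=0
```

Along the numerical flow the energy $H$ stays constant to integrator accuracy, mirroring the fact that integral curves of $X_{_H}$ lie in level sets of $H$.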
A vector field $X$ is a differential operator on functions: $X \cdot f := {\mathcal L}_{_X} f = df (X)$ for $f \in C^\infty (M)$. As such, the bracket $W = [X,Y]$ is the commutator: ${\mathcal L}_W = [ {\mathcal L}_X , {\mathcal L}_Y ] = {\mathcal L}_X {\mathcal L}_Y - {\mathcal L}_Y {\mathcal L}_X$ (cf.\ Section~\ref{sec:integrability}). This endows the set $\chi(M)$ of vector fields on a manifold $M$ with a structure of {\em Lie algebra}.\footnote{A (real) \textbf{Lie algebra}\index{Lie ! algebra} is a (real) vector space ${\mathfrak g}$ together with a \textbf{Lie bracket}\index{Lie ! bracket} $[ \cdot, \cdot ]$, i.e., a bilinear map $[ \cdot, \cdot ] : {\mathfrak g} \times {\mathfrak g} \to {\mathfrak g}$ satisfying \textbf{antisymmetry}, $[x,y] = -[y,x]$, $\forall x,y \in {\mathfrak g}$,\index{antisymmetry} and the \textbf{Jacobi identity}, $[x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0$, $\forall x,y,z \in {\mathfrak g}$.\index{Jacobi ! identity}} For a symplectic manifold $(M,\omega)$, using $\imath_{[X,Y]} = [ {\mathcal L}_X , \imath_Y ]$ and Cartan's magic formula, we find that $\imath_{[X,Y]}\omega = d\imath_{X}\imath_{Y}\omega + \imath_{X} d\imath_{Y}\omega - \imath_{Y} d\imath_{X}\omega - \imath_{Y}\imath_{X} d\omega = d(\omega(Y,X))$. Therefore: \begin{proposition} \label{prop:brackets} If $X$ and $Y$ are symplectic vector fields on a symplectic manifold $(M,\omega)$, then $[X,Y]$ is hamiltonian with hamiltonian function $\omega(Y,X)$. \end{proposition} Hence, hamiltonian vector fields and symplectic vector fields form Lie subalgebras for the Lie bracket $[\cdot,\cdot]$. \begin{definition} The \textbf{Poisson bracket}\index{bracket ! Poisson}\index{Poisson ! bracket} of two functions $f,g\in C^{\infty}(M)$ is the function $\{f,g\} := \omega(X_f,X_g) = {\mathcal L}_{X_g} f$. \end{definition} By Proposition~\ref{prop:brackets} we have $X_{\{f,g\}}=-[X_{f},X_{g}]$.
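On $({\mathbb R}^{2}, dx \wedge dy)$ the definition gives $\{f,g\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial y}-\frac{\partial f}{\partial y}\frac{\partial g}{\partial x}$, and the standard properties of the bracket (antisymmetry, the Leibniz rule, the Jacobi identity) can be spot-checked numerically. The sketch below is only illustrative: the test polynomials, the sample point, and the finite-difference step are arbitrary choices.

```python
# Spot-check, at a sample point, of properties of the Poisson bracket
# {f, g} = f_x g_y - f_y g_x on (R^2, dx /\ dy): antisymmetry, the Leibniz
# rule and the Jacobi identity.  f, g, h are arbitrary polynomial test
# functions; derivatives are taken by central finite differences.

STEP = 1e-4

def pb(f, g):
    def bracket(x, y):
        fx = (f(x + STEP, y) - f(x - STEP, y)) / (2 * STEP)
        fy = (f(x, y + STEP) - f(x, y - STEP)) / (2 * STEP)
        gx = (g(x + STEP, y) - g(x - STEP, y)) / (2 * STEP)
        gy = (g(x, y + STEP) - g(x, y - STEP)) / (2 * STEP)
        return fx * gy - fy * gx
    return bracket

f = lambda x, y: x * x
g = lambda x, y: x * y
h = lambda x, y: y * y

x0, y0 = 0.3, -0.7

# {f, g} + {g, f} should vanish (antisymmetry)
antisym = pb(f, g)(x0, y0) + pb(g, f)(x0, y0)

# {f, gh} - ({f, g} h + g {f, h}) should vanish (Leibniz rule)
gh = lambda x, y: g(x, y) * h(x, y)
leibniz = pb(f, gh)(x0, y0) - (pb(f, g)(x0, y0) * h(x0, y0)
                               + g(x0, y0) * pb(f, h)(x0, y0))

# cyclic sum of nested brackets should vanish (Jacobi identity)
jacobi = (pb(f, pb(g, h))(x0, y0) + pb(g, pb(h, f))(x0, y0)
          + pb(h, pb(f, g))(x0, y0))
```

Since `pb` returns an ordinary function of $(x,y)$, the nested brackets needed for the Jacobi identity compose directly.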
Moreover, the bracket $\{\cdot,\cdot\}$ satisfies the \textbf{Jacobi identity}\index{Jacobi identity}, $\{f,\{g,h\}\}+\{g,\{h,f\}\}+\{h,\{f,g\}\}=0$, and the \textbf{Leibniz rule}\index{Leibniz rule}, $\{ f, gh \} = \{ f,g \} h + g \{ f,h \}$. \begin{definition} A \textbf{Poisson algebra}\index{Poisson ! algebra} $({\mathcal P} , \{\cdot,\cdot\})$ is a commutative associative algebra ${\mathcal P}$ with a Lie bracket $\{\cdot,\cdot\}$ satisfying the Leibniz rule\index{Leibniz rule}. \end{definition} When $(M,\omega)$ is a symplectic manifold, $(C^{\infty}(M), \{\cdot,\cdot\})$ is a Poisson algebra, and the map $C^{\infty}(M) \to \chi(M)$, $H \mapsto X_{_H}$ is a Lie algebra anti-homomorphism. \begin{examples} \begin{enumerate} \item For the prototype $({\mathbb R}^{2n}, \sum dx_i \wedge dy_i)$, we have $X_{x_i} = - \frac{\partial}{\partial y_i}$ and $X_{y_i} = \frac{\partial}{\partial x_i}$, so that $\{ x_i , x_j \} = \{ y_i , y_j \} = 0$ and $\{ x_i , y_j \} = \delta_{ij}$ for all $i,j$. Arbitrary functions $f,g \in C^\infty ({\mathbb R}^{2n})$ have the \textbf{classical Poisson bracket} \[ \{ f, g \} = \sum \limits_{i=1}^{n} \left( \frac{\partial f}{\partial x_i}\frac{\partial g}{\partial y_i} - \frac{\partial f}{\partial y_i}\frac{\partial g}{\partial x_i} \right) \ . \] \item Let $G$ be a Lie group,\footnote{A \textbf{Lie group} is a manifold $G$ equipped with a group structure where the group operation $G \times G \to G$ and inversion $G \to G$ are smooth maps. An \textbf{action} of a Lie group $G$ on a manifold $M$ is a group homomorphism $G \to \mathrm{Diff}(M)$, $g \mapsto \psi_g$, where the \textbf{evaluation map} $M \times G \to M$, $(p,g) \mapsto \psi_g (p)$ is a smooth map. The \textbf{orbit}\index{orbit ! definition} of $G$ through $p \in M$ is $\{\psi_g(p) \mid g \in G\}$.
The \textbf{stabilizer}\index{stabilizer} (or {\em isotropy}\index{isotropy}) of $p \in M$ is $G_p := \{g \in G \mid \psi_g(p) = p\}$.} ${{\mathfrak g}}$ its Lie algebra and ${{\mathfrak g}}^*$ the dual vector space of ${{\mathfrak g}}$. The vector field $X^\#$ generated by $X \in {\mathfrak g}$ for the adjoint action\footnote{Any Lie group $G$ acts on itself by \textbf{conjugation}:\index{conjugation} $g \in G \mapsto \psi_g \in \mathrm{Diff}(G)$, $\psi_g(a) = g \cdot a \cdot g^{-1}$. Let $\mathrm{Ad}_g: {\mathfrak g} \to {\mathfrak g}$ be the derivative at the identity of $\psi_g: G \to G$. We identify the Lie algebra ${\mathfrak g}$ with the tangent space $T_eG$. For matrix groups, $\mathrm{Ad}_g X = g X g^{-1}$. Letting $g$ vary, we obtain the \textbf{adjoint action}\index{adjoint ! action}\index{action ! adjoint} of $G$ on its Lie algebra $\mathrm{Ad}: G \to \mathrm{GL}({\mathfrak g})$. Let $\langle \cdot,\cdot \rangle : {\mathfrak g}^* \times {\mathfrak g} \to {\mathbb R}$ be the natural pairing $\langle \xi,X\rangle = \xi(X)$. Given $\xi \in {\mathfrak g}^*$, we define $\mathrm{Ad}_g^*\xi$ by $\langle\mathrm{Ad}_g^*\xi,X\rangle = \langle \xi,\mathrm{Ad}_{g^{-1}}X\rangle$, for any $X \in {\mathfrak g}$. The collection of maps $\mathrm{Ad}_g^*$ forms the \textbf{coadjoint action}\index{coadjoint ! action}\index{action ! coadjoint} of $G$ on the dual of its Lie algebra $\mathrm{Ad}^*: G \to \mathrm{GL}({\mathfrak g}^*)$. These satisfy $\mathrm{Ad}_g \circ \mathrm{Ad}_h = \mathrm{Ad}_{gh}$ and $\mathrm{Ad}_g^* \circ \mathrm{Ad}_h^* = \mathrm{Ad}_{gh}^*$.} of $G$ on ${\mathfrak g}$ has value $[X,Y]$ at $Y \in {\mathfrak g}$. The vector field $X^\#$ generated by $X \in {\mathfrak g}$ for the coadjoint action of $G$ on ${\mathfrak g}^*$ is $\langle X^\#_{_\xi} , Y \rangle = \langle \xi , [Y,X] \rangle$, $\forall \ \xi \in {\mathfrak g}^* , Y \in {\mathfrak g}$.
The skew-symmetric pairing $\omega$ on ${\mathfrak g}$ defined at $\xi \in {\mathfrak g}^*$ by \[ \omega_{_\xi} (X,Y) := \langle \xi , [X,Y] \rangle \] has kernel at $\xi$ the Lie algebra ${\mathfrak g}_{_\xi}$ of the stabilizer of $\xi$ for the coadjoint action. Therefore, $\omega$ restricts to a nondegenerate 2-form on the tangent spaces to the orbits of the coadjoint action. As the tangent spaces to an orbit are generated by the vector fields $X^\#$, the Jacobi identity in ${\mathfrak g}$ implies that this form is closed. It is called the \textbf{canonical symplectic form}\index{canonical ! symplectic form on a coadjoint orbit}\index{symplectic ! canonical symplectic form on a coadjoint orbit} (or the \textbf{Lie-Poisson}\index{Lie-Poisson symplectic form}\index{Poisson ! Lie-Poisson symplectic form} or \textbf{Kirillov-Kostant-Souriau symplectic structure}\index{Kostant-Kirillov symplectic form}\index{Kirillov|see{Kostant-Kirillov}}) on the \textbf{coadjoint orbits}. The corresponding Poisson structure on ${\mathfrak g}^*$\index{Poisson ! structure on ${\mathfrak g}^*$} is the canonical one induced by the Lie bracket: \[ \{ f , g \} (\xi) = \langle \xi, [df_{_\xi}, dg_{_\xi}] \rangle \] for $f,g \in C^\infty ({\mathfrak g}^*)$ and $\xi \in {{\mathfrak g}}^*$. The differential $df_{_\xi} : T_{_\xi} {{\mathfrak g}}^* \simeq {{\mathfrak g}}^* \to {\mathbb R}$ is identified with an element of ${\mathfrak g} \simeq {\mathfrak g}^{**}$. \end{enumerate} \end{examples} \ssubsection{Arnold Conjecture and Floer Homology} \label{sec:arnold_floer} There is an important generalization of Poincar\'e's last geometric theorem (Theorem~\ref{thm:poincare_birkhoff}) conjectured by Arnold starting around~1966.\index{symplectomorphism ! Arnold conjecture}\index{Arnold ! conjecture}\index{conjecture ! Arnold} Let $(M,\omega)$ be a compact symplectic manifold, and $h_t: M \to {\mathbb R}$ a 1-periodic (i.e., $h_t = h_{t+1}$) smooth family of functions.
Let $\rho: M \times {\mathbb R} \to M$ be the isotopy generated by the time-dependent hamiltonian vector field $v_t$ defined by the equation $\omega(v_t,\cdot) = dh_t$. The symplectomorphism $\varphi = \rho_1$ is then said to be \textbf{exactly homotopic to the identity}.\index{symplectomorphism ! Arnold conjecture}\index{Arnold ! conjecture}\index{conjecture ! Arnold}\index{symplectomorphism ! exactly homotopic to the identity}\index{exactly homotopic to the identity} In other words, a symplectomorphism exactly homotopic to the identity is the time-1 map of the isotopy generated by some time-dependent 1-periodic hamiltonian function. There is a one-to-one correspondence between the fixed points of $\varphi$ and the period-1 orbits of $\rho$. When all the fixed points of such $\varphi$ are nondegenerate (generic case), we call $\varphi$ \textbf{nondegenerate}. The \textbf{Arnold conjecture}~\cite[Appendix~9]{ar:mathematical} predicted that \[ \# \{ \mbox{fixed points of a nondegenerate } \varphi\} \geq \displaystyle{\sum_{i=0}^{2n}} \dim H^i(M;{\mathbb R}) \] (or even that the number of fixed points of a nondegenerate $\varphi$ is at least the minimal number of critical points of a Morse function\footnote{A \textbf{Morse function} is a smooth function $f:M \to {\mathbb R}$ all of whose critical points are nondegenerate, i.e., at any critical point the hessian matrix is nondegenerate.}). When the hamiltonian $h: M \to {\mathbb R}$ is independent of $t$, this relation is trivial: a point $p$ is critical for $h$ if and only if $dh_p = 0$, if and only if $v_p = 0$, if and only if $\rho (t,p) = p$, $\forall t \in {\mathbb R}$, which implies that $p$ is a fixed point of $\rho_1=\varphi$, so the Arnold conjecture reduces to a Morse inequality. Notice that, according to the Lefschetz fixed point theorem,\index{theorem ! 
Lefschetz fixed point}\index{Lefschetz fixed point theorem} the Euler characteristic of $M$, i.e., the {\em alternating} sum of the Betti numbers, $\sum (-1)^ i \dim H^i(M;{\mathbb R})$, is a (weaker) lower bound for the number of fixed points of $\varphi$. The Arnold conjecture\index{Arnold ! conjecture}\index{conjecture ! Arnold} was gradually proved from the late~70's to the late~90's by Eliashberg~\cite{el:fixed_points}, Conley-Zehnder~\cite{co-ze:arnold}, Floer~\cite{fl:holomorphic}, Sikorav~\cite{si:points}, Weinstein~\cite{we:conley_zehnder}, Hofer-Salamon~\cite{ho-sa:floer}, Ono~\cite{on:arnold}, culminating with independent proofs by Fukaya-Ono~\cite{fu-on:arnold} and Liu-Tian~\cite{li-ti:arnold}. There are open conjectures for sharper bounds on the number of fixed points. The breakthrough tool for establishing the Arnold conjecture was \textbf{Floer homology} -- an $\infty$-dimensional analogue of Morse theory. Floer homology was defined by Floer~\cite{fl:relative,fl:gradient,fl:lagrangian,fl:holomorphic,fl:witten} and developed through the work of numerous people after Floer's death. It combines the variational approach of Conley and Zehnder~\cite{co-ze:morse}, with Witten's Morse-Smale complex~\cite{wi:morse_theory}, and with Gromov's compactness theorem for pseudo-holomorphic curves~\cite{gr:pseudo}. Floer theory starts from a symplectic action functional $F$ on the space of loops ${\mathcal L} M$ of a symplectic manifold $(M,\omega)$, the zeros of whose differential $dF : T ({\mathcal L} M) \to {\mathbb R}$ are the period-1 orbits of the isotopy $\rho$ above. The tangent bundle $T ({\mathcal L} M)$ is the space of loops with vector fields over them: pairs $(\ell,v)$, where $\ell : S^1 \to M$ and $v : S^1 \to \ell^* (TM)$ is a section. Then $dF (\ell,v) = \int_0^1 \omega \big( \dot \ell (t) - X_{h_t} (\ell (t)) , v(t) \big) \, dt$.
The {\em Floer complex}\footnote{The \textbf{Morse complex} for a Morse function on a compact manifold, $f:M \to {\mathbb R}$, is the chain complex freely generated by the critical points of $f$, graded by the {\em Morse index} $\imath$ and with differential given by counting the number $n(x,y)$ of flow lines of the negative gradient $- \nabla f$ (for a metric on $M$) from the point $x$ to the point $y$ whose indices differ by 1: \[ C_* = \oplus_{x \in \mathrm{Crit} (f)} {\mathbb Z} \langle x \rangle \quad \mbox{ and } \quad \partial \langle x \rangle = \sum \limits_{y \in \mathrm{Crit} (f), \imath (y) = \imath (x) -1} n(x,y) \langle y \rangle \ . \] The coefficient $n(x,y)$ is thus the number of solutions (modulo ${\mathbb R}$-reparametrization) $u : {\mathbb R} \to M$ of the ordinary differential equation $\frac{d}{dt} u(t) = - \nabla f (u(t))$ with conditions $\lim_{t \to -\infty} u(t) = x$, $\lim_{t \to +\infty} u(t) = y$. The \textbf{Morse index} of a critical point of $f$ is the dimension of its unstable manifold, i.e., the number of negative eigenvalues of the hessian of $f$ at that point. For a generic metric, the unstable manifold of a critical point $W^u (x)$ intersects transversally with the stable manifold of another critical point $W^s (y)$. When $\imath (x) - \imath (y) = 1$, the intersection $W^u (x) \cap W^s (y)$ has dimension 1, so when we quotient out by the ${\mathbb R}$-reparametrization (to count actual image curves) we get a discrete set, which is finite by compactness. That $(C_*,\partial)$ is indeed a complex, i.e., $\partial ^2 =0$, follows from counting broken flow lines between points whose indices differ by 2. Morse's theorem states that the homology of the Morse complex coincides with the ordinary homology of $M$.
In particular, the sum of all the Betti numbers $\sum \dim H^i(M;{\mathbb R})$ is a lower bound for the number of critical points of a Morse function.} is the chain complex freely generated by the critical points of $F$ (corresponding to the fixed points of $\varphi$), with {\em relative grading} $\mathrm{index} (x,y)$ given by the difference in the number of positive eigenvalues from the spectral flow. The Floer differential is given by counting the number $n(x,y)$ of pseudo-holomorphic surfaces (the {\em gradient flow lines} joining two fixed points): \[ C_* = \oplus_{x \in \mathrm{Crit} (F)} {\mathbb Z} \langle x \rangle \quad \mbox{ and } \quad \partial \langle x \rangle = \sum \limits_{\footnotesize{\begin{array}{l} y \in \mathrm{Crit} (F) \\ \mathrm{index} (x,y) = 1 \end{array}}} n(x,y) \langle y \rangle \ . \] Pondering transversality, compactness and orientation, Floer's theorem states that the homology of $(C_* , \partial)$ is isomorphic to the ordinary homology of $M$. In particular, the sum of the Betti numbers is a lower bound for the number of fixed points of $\varphi$. From the above {\em symplectic Floer homology}, Floer theory has branched out to tackle other differential geometric problems in symplectic geometry and 3- and 4-dimensional topology. It provides a rigorous definition of invariants viewed as homology groups of infinite-dimensional Morse-type theories, with relations to gauge theory and quantum field theory. There is {\em lagrangian Floer homology} (for the case of lagrangian intersections, i.e., intersection of a lagrangian submanifold with a hamiltonian deformation of itself), {\em instanton Floer homology} (for invariants of 3-manifolds), {\em Seiberg-Witten Floer homology}, {\em Heegaard Floer homology} and {\em knot Floer homology}. For more on Floer homology, see for instance~\cite{do:floer,sa:pcmi}.
\ssubsection{Euler-Lagrange Equations} \label{sec:euler_lagrange} The equations of motion in classical mechanics arise from \textbf{variational principles}.\index{principle ! of least action}\index{equations ! of motion}\index{motion ! equations}\index{variational ! problem} The physical path of a general mechanical system\index{mechanical system}\index{system ! mechanical} of $n$ particles is the path that {\em minimizes} a quantity called the {\em action}. When dealing with systems with constraints, such as the simple pendulum,\index{pendulum ! simple} or two point masses attached by a rigid rod, or a rigid body, the language of variational principles becomes more appropriate than the explicit analogues of Newton's second law.\index{Newton ! second law} Variational principles\index{variational ! principle} are due mostly to D'Alembert,\index{D'Alembert ! variational principle}\index{principle ! variational} Maupertuis\index{Maupertuis ! variational principle}, Euler\index{Euler ! variational principle} and Lagrange\index{Lagrange ! variational principle}. Let $M$ be an $n$-dimensional manifold, and let $F: TM \to {\mathbb R}$ be a function on its tangent bundle. If $\gamma: [a,b] \to M$ is a curve on $M$, the \textbf{lift of $\gamma$ to $TM$}\index{lift ! of a path} is the curve on $TM$ given by ${\widetilde \gamma}: [a,b] \to TM$, $t \mapsto \left( \gamma(t), \frac{d\gamma}{dt} (t) \right)$. The \textbf{action}\index{action ! of a path} of $\gamma$ is \[ {\mathcal A}_{\gamma} := \displaystyle{\int_a^b ({\widetilde \gamma}^*F)(t)dt} = \displaystyle{\int_a^b F\left( \gamma(t), \frac{d\gamma}{dt} (t) \right) dt}\ . \] For fixed $p,q$, let ${\mathcal P}(a,b,p,q) = \{\gamma: [a,b] \to M \mbox{ smooth} \mid \gamma(a) = p, \gamma(b) = q\}$. The goal is to find, among all $\gamma \in {\mathcal P}(a,b,p,q)$, the curve that {\em locally minimizes} ${\mathcal A}_{\gamma}$. (Minimizing curves are always locally minimizing.)\index{action !
minimizing}\index{minimizing ! action}\index{minimizing ! locally} Assume that $p$, $q$ and the image of $\gamma$ lie in a coordinate neighborhood $({\mathcal U},x_1,\dots,x_n)$. On $T{\mathcal U}$ we have coordinates $(x_1,\dots,x_n,v_1,\dots,v_n)$ associated with a trivialization of $T{\mathcal U}$ by $\frac {\partial}{\partial x_1},\dots,\frac {\partial}{\partial x_n}$. Using this trivialization, a curve $\gamma: [a,b] \to {\mathcal U}$, $\gamma(t) = (\gamma_1(t),\dots,\gamma_n(t))$ lifts to \[ {\widetilde \gamma}: [a,b] \longrightarrow T{\mathcal U}\ , \qquad {\widetilde \gamma}(t) = \left(\gamma_1(t),\dots,\gamma_n(t), \frac {d\gamma_1}{dt} (t),\dots,\frac {d\gamma_n}{dt} (t)\right)\ . \] Consider infinitesimal variations of $\gamma$. Let $c_1,\dots,c_n \in C^{\infty}([a,b])$ be such that $c_k(a) = c_k(b) = 0$. For $\varepsilon$ small, let $\gamma_{\varepsilon}: [a,b] \to {\mathcal U}$ be the curve $\gamma_{\varepsilon}(t) = (\gamma_1(t) + \varepsilon c_1(t),\dots,\gamma_n(t) + \varepsilon c_n(t))$. Let ${\mathcal A}_{\varepsilon} := {\mathcal A}_{\gamma_{\varepsilon}}$. A necessary condition for $\gamma = \gamma_0 \in {\mathcal P}(a,b,p,q)$ to minimize the action is that $\varepsilon = 0$ be a critical point of ${\mathcal A}_{\varepsilon}$. 
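The necessary condition just stated can be checked numerically in the simplest case. The sketch below (our own illustration, not from the text; all names are ours) discretizes the action for $F(x,v) = |v|^2$ on $M = {\mathbb R}$, where the straight line is the minimizer, and verifies that $\varepsilon = 0$ is a critical point of $\varepsilon \mapsto {\mathcal A}_\varepsilon$ for a variation vanishing at the endpoints.

```python
import numpy as np

# Numerical sketch (illustrative only): F(x, v) = |v|^2 on M = R.  The
# straight line gamma_0(t) = t joins p = 0 to q = 1.  We perturb it by
# eps * c(t) with c(a) = c(b) = 0 and check that eps = 0 is a critical
# point of eps -> A_eps, and that nearby perturbed paths have larger action.

a, b = 0.0, 1.0
t = np.linspace(a, b, 2001)

def integrate(y):
    # composite trapezoid rule on the grid t
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

def action(path):
    v = np.gradient(path, t)          # d(gamma)/dt by finite differences
    return integrate(v**2)

gamma0 = t.copy()                     # the straight line from 0 to 1
c = np.sin(np.pi * t)                 # smooth variation, c(a) = c(b) = 0
A = lambda eps: action(gamma0 + eps * c)

dA0 = (A(1e-4) - A(-1e-4)) / 2e-4     # central difference at eps = 0
print(abs(dA0) < 1e-6)                # first variation vanishes
print(A(0.0) < A(0.1) and A(0.0) < A(-0.1))
```

Here ${\mathcal A}_\varepsilon = 1 + \varepsilon^2 \pi^2/2$ exactly, so the first variation vanishes and the straight line beats every perturbed path, in line with the derivation that follows.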
By the Leibniz rule and integration by parts, we have that \[ \begin{array}{rl} \displaystyle{\frac {d{\mathcal A}_{\varepsilon}}{d\varepsilon} (0)} & = \displaystyle{\int_a^b \sum_k \left[ \frac {\partial F}{\partial x_k} \left( \gamma_0(t), \frac {d\gamma_0}{dt} (t)\right) c_k(t) + \frac {\partial F}{\partial v_k} \left( \gamma_0, \frac {d\gamma_0}{dt} \right) \frac {dc_k}{dt} (t)\right] dt} \\ & = \displaystyle{\int_a^b \sum_k \left[ \frac {\partial F}{\partial x_k} (\dots) - \frac {d}{dt} \frac {\partial F}{\partial v_k} (\dots)\right] c_k(t)\, dt \ .} \end{array} \] For $\frac {d{\mathcal A}_{\varepsilon}}{d\varepsilon} (0)$ to vanish for all $c_k$'s satisfying boundary conditions $c_k(a) = c_k(b) = 0$, the path $\gamma_0$ must satisfy the \textbf{Euler-Lagrange equations}\index{Euler ! Euler-Lagrange equations}: \[ \frac {\partial F}{\partial x_k} \left( \gamma_0(t), \frac {d\gamma_0}{dt} (t)\right) = \frac {d}{dt} \frac {\partial F}{\partial v_k} \left( \gamma_0(t), \frac{d\gamma_0}{dt} (t)\right) \ , \quad k=1, \ldots , n \ . \] \begin{examples} \begin{enumerate} \item Let $(M,g)$ be a riemannian manifold\index{riemannian ! manifold}\index{manifold ! riemannian}. Let $F: TM \to {\mathbb R}$ be the function whose restriction to each tangent space is the quadratic form defined by the riemannian metric.\index{riemannian ! metric}\index{metric} On a coordinate chart $F(x,v) = |v|^2 = \sum g_{ij}(x) v^i v^j$. Let $p,q \in M$ and $\gamma : [a,b] \to M$ a curve joining $p$ to $q$. The {\em action}\index{action ! of a path} of $\gamma$ is \[ {\mathcal A}_\gamma = \displaystyle{\int_a^b \left| {d \gamma \over dt} \right|^2 dt} \ . \] The Euler-Lagrange equations\index{Euler ! Euler-Lagrange equations}\index{equations ! Euler-Lagrange}\index{Lagrange ! Euler-Lagrange equations} become the \textbf{Christoffel equations}\index{Christoffel ! equations}\index{equations !
Christoffel} for a geodesic \[ {d^2 \gamma^k \over dt^2} + \sum (\Gamma_{ij}^k \circ \gamma) {d \gamma^i \over dt} {d \gamma^j \over dt} = 0\ , \] where the \textbf{Christoffel symbols}\index{Christoffel ! symbols} $\Gamma_{ij}^k$'s are defined in terms of the coefficients of the riemannian metric ($g^{ij}$ is the matrix inverse to $g_{ij}$) by \[ \Gamma_{ij}^k = \frac{1}{2} \sum \limits_\ell g^{\ell k} \left( \frac{\partial g_{\ell i}}{\partial x_j} + \frac{\partial g_{\ell j}}{\partial x_i} - \frac{\partial g_{ij}}{\partial x_\ell } \right) \ . \] \item Consider\index{example ! of mechanical system} a point-particle of mass $m$ moving in ${\mathbb R}^3$ under a \textbf{force field} $G$. The \textbf{work}\index{work} of $G$ on a path $\gamma: [a,b] \to {\mathbb R}^3$ is $W_{\gamma} := \int_a^b G(\gamma(t)) \cdot \frac {d\gamma}{dt} (t) \, dt$. Suppose that $G$ is \textbf{conservative},\index{system ! conservative}\index{conservative system} i.e., $W_{\gamma}$ depends only on the initial and final points, $p= \gamma(a)$ and $q = \gamma(b)$. We can define the \textbf{potential energy}\index{energy ! potential}\index{potential ! energy} as $V: {\mathbb R}^3 \to {\mathbb R}$, $V(q) := W_{\gamma}$, where $\gamma$ is a path joining a fixed base point $p_0 \in {\mathbb R}^3$ to $q$. Let ${\mathcal P}$ be the set of all paths going from $p$ to $q$ over time $t \in [a,b]$. By the \textbf{principle of least action},\index{action ! principle of least action}\index{principle ! of least action} the physical path is the path $\gamma \in {\mathcal P}$ that minimizes a kind of mean value of kinetic minus potential energy\index{energy ! kinetic}\index{energy ! potential}\index{potential ! energy}\index{kinetic energy}, known as the \textbf{action}\index{action ! of a path}: \[ {\mathcal A}_{\gamma} := \int_a^b \left( \frac {m}{2} \left| \frac{d\gamma}{dt} (t) \right|^2 - V(\gamma(t))\right) dt\ .
\] The Euler-Lagrange equations are then equivalent to \textbf{Newton's second law}:\index{Newton ! second law} \[ m \frac {d^2x}{dt^2} (t) -\frac {\partial V}{\partial x} (x(t)) = 0 \quad \iff \quad m \frac {d^2x}{dt^2} (t) = G(x(t)) \ . \] In the case of the earth moving about the sun, with both regarded as point-masses and the sun assumed to be stationary at the origin, the \textbf{gravitational potential}\index{gravitational potential}\index{potential ! gravitational} $V(x) = \frac {\mbox{const.}}{|x|}$ yields the \textbf{inverse square law} for the motion.\index{inverse square law} \item Consider now $n$ point-particles of masses $m_1,\dots,m_n$ moving in ${\mathbb R}^3$ under a conservative force corresponding to a potential energy $V \in C^{\infty}({\mathbb R}^{3n})$. At any instant $t$, the configuration of this system is described by a vector $x = (x_1,\dots,x_n)$ in configuration space ${\mathbb R}^{3n}$, where $x_k \in {\mathbb R}^3$ is the position of the $k$th particle. For fixed $p,q \in {\mathbb R}^{3n}$, let ${\mathcal P}$ be the set of all paths $\gamma = (\gamma_1 , \ldots , \gamma_n) : [a,b] \to {\mathbb R}^{3n}$ from $p$ to $q$. The \textbf{action}\index{action ! of a path} of a path $\gamma \in {\mathcal P}$ is \[ {\mathcal A}_{\gamma} := \int_a^b \left( \sum \limits_{k=1}^n \frac {m_k}{2} \left| \frac{d\gamma_k}{dt} (t) \right|^2 - V(\gamma(t))\right) dt\ . \] The Euler-Lagrange equations reduce to Newton's law for each particle. Suppose that the particles are restricted to move on a submanifold $M$ of ${\mathbb R} ^{3n}$ called the \textbf{constraint set}.\index{constraint set}\index{Newton ! second law}\index{system ! constrained}\index{constrained system} By the \textbf{principle of least action for a constrained system}, the physical path has minimal action among all paths satisfying the rigid constraints.
I.e., we single out the actual physical path as the one that minimizes ${\mathcal A}_{\gamma}$ among all $\gamma: [a,b] \to M$ with $\gamma(a) = p$ and $\gamma(b) = q$. \end{enumerate} \end{examples} In the case where $F(x,v)$ does not depend on $v$, the Euler-Lagrange equations\index{Euler ! Euler-Lagrange equations} are simply $\frac {\partial F}{\partial x_i} \left( \gamma_0(t), \frac {d\gamma_0}{dt} (t)\right) = 0$. These are satisfied if and only if the curve $\gamma_0$ sits on the critical set of $F$. For generic $F$, the critical points are isolated, hence $\gamma_0(t)$ must be a constant curve. In the case where $F(x,v)$ depends affinely on $v$, $F(x,v) = F_0(x) + \sum_{j=1}^n F_j(x)v_j$, the Euler-Lagrange equations become \[ \frac {\partial F_0}{\partial x_i} (\gamma(t)) = \sum_{j=1}^n \left( \frac {\partial F_i}{\partial x_j} - \frac {\partial F_j}{\partial x_i} \right) (\gamma(t)) \frac {d\gamma_j}{dt} (t)\ . \] If the $n \times n$ matrix $\left( \frac {\partial F_i}{\partial x_j} - \frac {\partial F_j}{\partial x_i}\right)$ has an inverse $G_{ij}(x)$, we obtain the system of first order ordinary differential equations $\frac {d\gamma_j}{dt} (t) = \sum G_{ji}(\gamma(t)) \frac {\partial F_0}{\partial x_i} (\gamma(t))$. Locally it has a unique solution through each point $p$. If $q$ is not on this curve, there is no solution at all to the Euler-Lagrange equations belonging to ${\mathcal P}(a,b,p,q)$. Therefore, we need non-linear dependence of $F$ on the $v$ variables in order to have appropriate solutions. From now on, assume the \textbf{Legendre condition}:\index{Legendre ! condition} \[ \displaystyle{ \det \left( \frac {\partial^2F}{\partial v_i \partial v_j} \right)} \ne 0\ .
\] Letting $G_{ij}(x,v) = \left( \frac {\partial^2F}{\partial v_i \partial v_j} (x,v) \right)^{-1}$, the Euler-Lagrange equations become \[ \frac {d^2\gamma_j}{dt^2} = \sum_i G_{ji} \frac {\partial F}{\partial x_i} \left( \gamma,\frac {d\gamma}{dt} \right) - \sum_{i,k} G_{ji} \frac {\partial^2F}{\partial v_i \partial x_k} \left( \gamma, \frac {d\gamma}{dt} \right) \frac {d\gamma_k}{dt}\ . \] This second order ordinary differential equation has a unique solution given initial conditions $\gamma(a) = p$ and $\frac {d\gamma}{dt} (a) = v$. Assume that $\left( \frac {\partial^2F}{\partial v_i \partial v_j} (x,v) \right) \gg 0$, $\forall (x,v)$, i.e., with the $x$ variable frozen, the function $v \mapsto F(x,v)$ is \textbf{strictly convex}\index{strictly convex function}. Then the path $\gamma_0 \in {\mathcal P}(a,b,p,q)$ satisfying the above Euler-Lagrange equations does indeed locally minimize\index{minimizing ! locally} ${\mathcal A}_{\gamma}$ (globally it is only critical): \begin{proposition} For every sufficiently small subinterval $[a_1,b_1]$ of $[a,b]$, $\gamma_0|_{[a_1,b_1]}$ is locally minimizing in ${\mathcal P}(a_1,b_1,p_1,q_1)$ where $p_1 = \gamma_0(a_1)$, $q_1 = \gamma_0(b_1)$. \end{proposition} \vspace*{-2ex} \begin{proof} Take $c = (c_1,\dots,c_n)$ with $c_i \in C^{\infty}([a,b])$, $c_i(a) = c_i(b) = 0$. Let $\gamma_{\varepsilon} = \gamma_0 + \varepsilon c \in {\mathcal P}(a,b,p,q)$, and let ${\mathcal A}_{\varepsilon} = {\mathcal A}_{\gamma_{\varepsilon}}$. Suppose that $\gamma_0: [a,b] \to {\mathcal U}$ satisfies the Euler-Lagrange equations, i.e., $\frac {d{\mathcal A}_{\varepsilon}}{d\varepsilon} (0) = 0$.
Then \[ \begin{array}{llll} \displaystyle{\frac {d^2{\mathcal A}_{\varepsilon}}{d\varepsilon^2} (0)} & = & \displaystyle{\int_a^b \sum_{i,j} \frac {\partial^2F}{\partial x_i\partial x_j} \left( \gamma_0, \frac {d\gamma_0}{dt} \right) \ c_i \ c_j \ dt} & \quad \mbox{(A)} \\ & + & \displaystyle{2 \int_a^b \sum_{i,j} \frac {\partial^2F}{\partial x_i\partial v_j} \left( \gamma_0, \frac {d\gamma_0}{dt} \right) \ c_i \ \frac {dc_j}{dt} \ dt} & \quad \mbox{(B)} \\ & + & \displaystyle{\int_a^b \sum_{i,j} \frac {\partial^2F}{\partial v_i\partial v_j} \left( \gamma_0, \frac {d\gamma_0}{dt} \right) \ \frac {dc_i}{dt} \ \frac {dc_j}{dt} \ dt} & \quad \mbox{(C)} \ . \end{array} \] Since $\left( \frac {\partial^2F}{\partial v_i\partial v_j} (x,v)\right) \gg 0$ at all $x,v$, we have \[ |\mbox{(A)}| \le \displaystyle{K_{_{\mathrm{A}}} |c|_{L^2[a,b]}^2} \ , \quad |\mbox{(B)}| \le \displaystyle{K_{_{\mathrm{B}}} |c|_{L^2[a,b]} \left| \frac {dc}{dt} \right|_{L^2[a,b]}} \; \mbox{ and } \; \mbox{(C)} \ge \displaystyle{K_{_{\mathrm{C}}} \left| \frac {dc}{dt} \right|^2_{L^2[a,b]}} \ , \] where $K_{_{\mathrm{A}}}, K_{_{\mathrm{B}}}, K_{_{\mathrm{C}}}$ are positive constants. By the Wirtinger inequality\footnote{The \textbf{Wirtinger inequality}\index{Wirtinger inequality} states that, for $f \in C^1([a,b])$ with $f(a) = f(b) = 0$, we have \[ \int_a^b \left| \frac {df}{dt} \right|^2 dt \ge \frac {\pi^2}{(b-a)^2} \int_a^b |f|^2dt\ . \] This can be proved with Fourier series.}\index{Wirtinger inequality}, if $b-a$ is very small, then $\mbox{(C)} > |\mbox{(A)}| + |\mbox{(B)}|$ when $c \not\equiv 0$. Hence, $\gamma_0$ is a local minimum. \end{proof} In Section~\ref{sec:symplectic_hamiltonian_fields} we saw that solving Newton's second law\index{Newton ! second law} in {\em configuration space}\index{configuration space}\index{space ! configuration} ${\mathbb R}^3$ is equivalent to solving in {\em phase space}\index{phase space}\index{space !
phase} for the integral curve\index{integral ! curve} in $T^*{\mathbb R}^3 = {\mathbb R}^6$ of the hamiltonian vector field with hamiltonian function $H$. In the next section we will see how this correspondence extends to more general Euler-Lagrange equations. \ssubsection{Legendre Transform} \label{sec:legendre} \index{Legendre ! transform} The Legendre transform gives the relation between the variational (Euler-Lagrange)\index{equations ! Euler-Lagrange}\index{Euler ! Euler-Lagrange equations} and the symplectic (Hamilton-Jacobi)\index{equations ! Hamilton-Jacobi}\index{Hamilton-Jacobi equations}\index{Jacobi ! Hamilton-Jacobi equations} formulations of the equations of motion. Let $V$ be an $n$-dimensional vector space, with $e_1,\dots,e_n$ a basis of $V$ and $v_1,\dots,v_n$ the associated coordinates. Let $F: V \to {\mathbb R}$, $F = F(v_1,\dots,v_n)$, be a smooth function. The function $F$ is \textbf{strictly convex}\index{function ! strictly convex}\index{strictly convex function} if and only if for every pair of elements $p,v \in V$, $v \neq 0$, the restriction of $F$ to the line $\{ p + xv \, | \, x \in {\mathbb R} \}$ is strictly convex.\footnote{A function $F:V \to {\mathbb R}$ is \textbf{strictly convex} if at every $p \in V$ the {\em hessian} $d^2 F_p$ is positive definite. Let $u = \sum_{i=1}^n u_ie_i \in V$. The \textbf{hessian}\index{hessian} of $F$ at $p$ is the quadratic function on $V$ \[ (d^2F)_p(u) := \sum_{i,j} \frac {\partial^2F}{\partial v_i\partial v_j} (p) u_iu_j = \left. \frac {d^2}{dt^2} F(p+tu) \right|_{t=0} \ . \]} It follows from the case of real functions on ${\mathbb R}$ that, for a strictly convex function $F$ on $V$, the following are equivalent: \footnote{A smooth function $f: {\mathbb R} \to {\mathbb R}$ is \textbf{strictly convex}\index{function ! strictly convex}\index{strictly convex function} if $f''(x) >0$ for all $x \in {\mathbb R}$.
Assuming that $f$ is strictly convex, the following four conditions are equivalent: $f' (x) = 0$ at some point, $f$ has a local minimum, $f$ has a unique (global) minimum, and $f(x) \to +\infty$ as $x \to \pm \infty$. The function $f$ is \textbf{stable}\index{function ! stable}\index{stable ! function} if it satisfies one (and hence all) of these conditions. For instance, $e^x + ax$ is strictly convex for any $a \in {\mathbb R}$, but it is stable only for $a < 0$. The function $x^2 + ax$ is strictly convex and stable for any $a \in {\mathbb R}$.} \begin{itemize} \item[(a)] $F$ has a critical point, i.e., a point where $dF_p = 0$; \item[(b)] $F$ has a local minimum at some point; \item[(c)] $F$ has a unique critical point (global minimum); and \item[(d)] $F$ is proper\index{proper function}, that is, $F(p) \to +\infty$ as $p \to \infty$ in $V$. \end{itemize} A strictly convex function $F$ is \textbf{stable}\index{function ! stable}\index{stability ! definition} when it satisfies conditions (a)-(d) above. \begin{definition}\index{Legendre ! transform} The \textbf{Legendre transform} associated to $F \in C^{\infty}(V)$ is the map \[ \begin{array}{rrcl} L_F: & V & \longrightarrow & V^* \\ & p & \longmapsto & dF_p \in T_p^*V \simeq V^*\ , \end{array} \] where $T_p^*V \simeq V^*$ is the canonical identification for a vector space $V$. \end{definition} From now on, assume that $F$ is a strictly convex function on $V$. Then, for every point $p \in V$, $L_{_F}$ maps a neighborhood of $p$ diffeomorphically onto a neighborhood of $L_{_F}(p)$. Given $\ell \in V^*$, let \[ F_\ell: V \longrightarrow {\mathbb R}\ , \qquad F_\ell(v) = F(v) - \ell(v)\ . \] Since $(d^2F)_p = (d^2F_\ell)_p$, $F$ is strictly convex if and only if $F_\ell$ is strictly convex. The \textbf{stability set}\index{stability ! set} of $F$ is \[ S_F = \{\ell \in V^* \mid F_\ell \mbox{ is stable}\}\ .
\] The set $S_{_F}$ is open and convex, and $L_{_F}$ maps $V$ diffeomorphically onto $S_{_F}$. (A way to ensure that $S_{_F} = V^*$ and hence that $L_{_F}$ maps $V$ diffeomorphically onto $V^*$, is to assume that a strictly convex function $F$ has \textbf{quadratic growth at infinity}\index{quadratic growth at infinity}, i.e., there exists a positive-definite quadratic form $Q$ on $V$ and a constant $K$ such that $F(p) \geq Q(p) - K$, for all $p$.) The inverse to $L_F$ is the map $L_F^{-1}: S_F \to V$ described as follows: for $\ell \in S_F$, the value $L_F^{-1}(\ell)$ is the unique minimum point $p_\ell \in V$ of $F_\ell$. Indeed $p$ is the minimum of $F(v) - dF_p(v)$. \begin{definition} The \textbf{dual function}\index{function ! dual}\index{dual function} $F^*$ to $F$ is \[ F^*: S_F \longrightarrow {\mathbb R} \ , \quad F^*(\ell) = -\min_{p \in V} F_\ell(p)\ . \] \end{definition} The dual function $F^*$ is smooth and, for all $p \in V$ and all $\ell \in S_{_F}$, satisfies the \textbf{Young inequality}\index{Young inequality} $F(p) + F^*(\ell) \geq \ell(p)$. On one hand we have $V \times V^* \simeq T^*V$, and on the other hand, since $V = V^{**}$, we have $V \times V^* \simeq V^* \times V \simeq T^*V^*$. Let $\alpha_1$ be the tautological 1-form on $T^*V$ and $\alpha_2$ be the tautological 1-form on $T^*V^*$. Via the identifications above, we can think of both of these forms as living on $V \times V^*$. Since $\alpha_1 = d\beta - \alpha_2$, where $\beta: V \times V^* \to {\mathbb R}$ is the function $\beta (p,\ell) = \ell(p)$, we conclude that the forms $\omega_1 = - d \alpha_1$ and $\omega_2 = - d \alpha_2$ satisfy $\omega_1 = - \omega_2$. \begin{theorem} For a strictly convex function $F$ we have that $L_F^{-1} = L_{F^*}$.
\end{theorem} \vspace*{-2ex} \begin{proof} The graph $\Lambda_{_F}$ of the Legendre transform $L_{_F}$ is a lagrangian submanifold of $V \times V^*$ with respect to the symplectic form $\omega_1$. Hence, $\Lambda_{_F}$ is also lagrangian for $\omega_2$. Let ${\mathrm{pr}}_1 : \Lambda_{_F} \to V$ and ${\mathrm{pr}}_2 : \Lambda_{_F} \to V^*$ be the restrictions of the projection maps $V \times V^* \to V$ and $V \times V^* \to V^*$, and let $i : \Lambda_{_F} \hookrightarrow V \times V^*$ be the inclusion map. Then $i^* \alpha_1 = d ({\mathrm{pr}}_1)^* F$ as both sides have value $dF_p$ at $(p,dF_p) \in \Lambda_{_F}$. It follows that $i^* \alpha_2 = d (i^* \beta - ({\mathrm{pr}}_1)^* F) = d ({\mathrm{pr}}_2)^* F^*$, which shows that $\Lambda_{_F}$ is the graph of the inverse of $L_{F^*}$. From this we conclude that the inverse of the Legendre transform associated with $F$ is the Legendre transform\index{Legendre ! transform} associated with $F^*$. \end{proof} Let $M$ be a manifold and $F: TM \to {\mathbb R}$. We return to the Euler-Lagrange equations for minimizing the action ${\mathcal A}_{\gamma} = \int {\widetilde \gamma}^*F$.\index{variational ! problem} At $p \in M$, let $F_p := F|_{T_pM}: T_p M \to {\mathbb R}$. Assume that $F_p$ is strictly convex for all $p \in M$. To simplify notation, assume also that $S_{F_p} = T_p^*M$. The Legendre transform on each tangent space $L_{F_p}: T_pM \stackrel{\simeq}{\longrightarrow} T_p^*M$ is essentially given by the first derivatives of $F$ in the $v$ directions. Collect these and the dual functions $F_p^*: T_p^*M \to {\mathbb R}$ into maps \[ {\mathcal L}: TM \longrightarrow T^*M \ , \ {\mathcal L}|_{T_pM} = L_{F_p} \quad \mbox{ and } \quad H: T ^*M \longrightarrow {\mathbb R} \ , \ H|_{T_p^*M} = F_p^*\ . \] The maps $H$ and ${\mathcal L}$ are smooth, and ${\mathcal L}$ is a diffeomorphism. \begin{theorem} \index{theorem ! Euler-Lagrange equations}\index{equations ! Euler-Lagrange}\index{Euler !
Euler-Lagrange equations} Let $\gamma: [a,b] \to M$ be a curve, and ${\widetilde \gamma}: [a,b] \to TM$ its lift. Then $\gamma$ satisfies the Euler-Lagrange equations on every coordinate chart if and only if ${\mathcal L} \circ {\widetilde \gamma}: [a,b] \to T^*M$ is an integral curve of the hamiltonian vector field $X_H$. \end{theorem} \vspace*{-2ex} \begin{proof} Let $({\mathcal U},x_1,\dots,x_n)$ be a coordinate chart in $M$, with associated tangent $(T{\mathcal U},x_1,\dots,x_n,v_1,\dots,v_n)$ and cotangent $(T^*{\mathcal U},x_1,\dots,x_n,\xi_1,\dots,\xi_n)$ coordinates. On $T{\mathcal U}$ we have $F = F(x,v)$, on $T^*{\mathcal U}$ we have $H = H(x,\xi)$, and \[ \begin{array}{rrclcrrcl} {\mathcal L}: & T{\mathcal U} & \longrightarrow & T^*{\mathcal U} & \qquad & H: & T^*{\mathcal U} & \longrightarrow & {\mathbb R} \\ & (x,v) & \longmapsto & (x,\xi) & & & (x,\xi) & \longmapsto & F_x^*(\xi) = \xi \cdot v - F(x,v) \end{array} \] where $\xi := L_{F_x}(v) = \frac {\partial F}{\partial v} (x,v)$ is called the \textbf{momentum}\index{momentum}. Integral curves $(x(t),\xi(t))$ of $X_H$ satisfy the Hamilton equations\index{Hamilton equations}\index{equations ! Hamilton}: \[ \mbox{(H)} \qquad \qquad \qquad \left\{ \begin{array}{rll} \frac {dx}{dt} & = & \phantom{-} \frac {\partial H}{\partial \xi} (x,\xi) \\ \frac {d\xi}{dt} & = & - \frac {\partial H}{\partial x} (x,\xi) \end{array} \right. \] whereas the physical path $x(t)$ satisfies the Euler-Lagrange equations: \[ \mbox{(E-L)} \qquad \qquad \frac {\partial F}{\partial x} \left( x,\frac {dx}{dt} \right) = \frac {d}{dt} \frac {\partial F}{\partial v} \left( x,\frac {dx}{dt} \right)\ . \] Let $(x(t),\xi(t)) = {\mathcal L}\left(x(t),\frac {dx}{dt} (t)\right)$. For an arbitrary curve $x(t)$, we want to prove that $t \mapsto (x(t),\xi(t))$ satisfies~(H) if and only if $t \mapsto \left(x(t), \frac {dx}{dt} (t)\right)$ satisfies~(E-L).
The first line of~(H) comes automatically from the definition of $\xi$: \[ \xi = L_{F_x}\left( \frac {dx}{dt} \right) \qquad \iff \qquad \frac {dx}{dt} = L_{F_x}^{-1}(\xi) = L_{F_x^*}(\xi) = \frac {\partial H}{\partial \xi} (x,\xi) \ . \] If $(x,\xi) = {\mathcal L}(x,v)$, by differentiating both sides of $H(x,\xi) = \xi \cdot v - F(x,v)$ with respect to $x$, where $\xi = L_{F_x}(v) = \xi(x,v)$ and $v = \frac {\partial H}{\partial \xi}$, we obtain \[ \frac {\partial H}{\partial x} + \frac {\partial H}{\partial \xi} \frac {\partial \xi}{\partial x} = \frac {\partial \xi}{\partial x} \cdot v - \frac {\partial F}{\partial x} \qquad \iff \qquad \frac {\partial F}{\partial x} (x,v) = -\frac {\partial H}{\partial x} (x,\xi) \ . \] Using the last equation and the definition of $\xi$, the second line of~(H) becomes~(E-L): \[ \frac {d\xi}{dt} = -\frac {\partial H}{\partial x} (x,\xi) \qquad \iff \qquad \frac {d}{dt} \frac {\partial F}{\partial v} (x,v) = \frac {\partial F}{\partial x} (x,v) \ . \] \end{proof} \ssubsection{Integrable Systems} \label{sec:integrable} \index{integrable ! system} \begin{definition} A \textbf{hamiltonian system}\index{hamiltonian ! system} is a triple $(M,\omega,H)$, where $(M,\omega)$ is a symplectic manifold and $H \in C^\infty (M)$ is the \textbf{hamiltonian function}.\index{hamiltonian ! function} \end{definition} \vspace*{-2ex} \begin{proposition}\label{prop:integral} For a function $f$ on a symplectic manifold $(M,\omega)$ we have that $\{f,H\}=0$ if and only if $f$ is constant along integral curves of $X_{_H}$. \end{proposition} \vspace*{-2ex} \begin{proof} Let $\rho_{t}$ be the flow of $X_{_H}$. Then \[ \frac{d}{dt}(f\circ \rho_{t}) = \rho_{t}^{*}{\mathcal L}_{X_{H}}f = \rho_{t}^{*}\imath_{X_{H}}df = \rho_{t}^{*}\imath_{X_{H}}\imath_{X_{f}}\omega = \rho_{t}^{*}\omega(X_{f},X_{_H}) = \rho_{t}^{*} \{f,H\} \ .
\] \end{proof} A function $f$ as in Proposition~\ref{prop:integral} is called an \textbf{integral of motion}\index{integral ! of motion}\index{motion ! integral of motion} (or a \textbf{first integral}\index{first integral}\index{integral ! first} or a \textbf{constant of motion}).\index{motion ! constant of motion} In general, hamiltonian systems do not admit integrals of motion that are {\em independent} of the hamiltonian function. Functions $f_1, \ldots, f_n$ are said to be \textbf{independent} if their differentials $(df_1)_p, \ldots, (df_n)_p$ are linearly independent at all points $p$ in some dense subset of $M$. Loosely speaking, a hamiltonian system is {\em (completely) integrable}\index{integrable ! system} if it has as many {\em commuting} integrals of motion as possible. \textbf{Commutativity} is with respect to the Poisson bracket.\index{Poisson ! bracket}\index{bracket ! Poisson} If $f_1, \ldots, f_n$ are commuting integrals of motion for a hamiltonian system $(M,\omega,H)$, then $\omega (X_{f_i}, X_{f_j}) = \{ f_i, f_j \} = 0$, so at each $p \in M$ the hamiltonian vector fields generate an isotropic subspace of $T_pM$. When $f_1, \ldots, f_n$ are independent, by symplectic linear algebra $n$ can be at most half the dimension of $M$. \begin{definition} A hamiltonian system $(M,\omega,H)$ where $M$ is a $2n$-dimensional manifold is \textbf{(completely) integrable}\index{completely integrable system}\index{integrable ! system} if it possesses $n$ independent commuting integrals of motion, $f_1=H, f_2,\ldots, f_n$. \end{definition} Any 2-dimensional hamiltonian system (where the set of non-fixed points is dense) is trivially integrable. Basic examples are the simple pendulum and the harmonic oscillator. A hamiltonian system $(M,\omega,H)$ where $M$ is 4-dimensional is integrable if there is an integral of motion independent of $H$ (the commutativity condition is automatically satisfied).
A basic example is the spherical pendulum. Sophisticated examples of integrable systems can be found in~\cite{au:tops,hi-se-wa:integrable}. \begin{examples} \begin{enumerate} \item The \textbf{simple pendulum}\index{example ! simple pendulum}\index{pendulum ! simple}\index{simple pendulum} is a mechanical system consisting of a massless rigid rod of length $\ell$, fixed at one end, whereas the other end has a bob of mass $m$, which may oscillate in the vertical plane. We assume that the force of gravity\index{gravity} is constant, pointing vertically downwards, and is the only external force acting on this system. Let $\theta$ be the oriented angle between the rod and the vertical direction. Let $\xi$ be the coordinate along the fibers of $T^* S^1$ induced by the standard angle coordinate on $S^1$. The energy function $H: T^* S^1 \to {\mathbb R}$, $H(\theta, \xi) = \frac{\xi^2}{2m\ell^2} + m\ell (1-\cos \theta)$, is an appropriate hamiltonian function to describe the simple pendulum. Gravity is responsible for the potential energy\index{energy ! potential}\index{potential ! energy} $V(\theta) = m\ell (1-\cos \theta)$, and the kinetic energy\index{energy ! kinetic}\index{kinetic energy} is given by $K(\theta,\xi) = \frac{1}{2m\ell^2} \xi^2$. \item The \textbf{spherical pendulum}\index{example ! spherical pendulum}\index{pendulum ! spherical}\index{spherical pendulum} consists of a massless rigid rod of length $\ell$, fixed at one end, whereas the other end has a bob of mass $m$, which may oscillate {\em freely in all directions}. For simplicity let $m=\ell=1$. Again assume that gravity\index{gravity} is the only external force. Let $\varphi, \theta$ ($0 < \varphi < \pi$, $0 < \theta < 2\pi$) be spherical coordinates for the bob, inducing coordinates $\eta, \xi$ along the fibers of $T^* S^2$.
An appropriate hamiltonian function for this system is the energy function $H: T^* S^2 \to {\mathbb R}$, $H(\varphi, \theta, \eta, \xi) = \frac{1}{2} \left( \eta^2 + \frac{\xi^2}{(\sin \varphi)^2} \right) + \cos \varphi$. The function $J(\varphi, \theta, \eta, \xi) = \xi$ is an independent integral of motion corresponding to the group of symmetries given by rotations about the vertical axis (Section~\ref{sec:actions}). The points $p \in T^*S^2$ where $dH_p$ and $dJ_p$ are linearly dependent are: \begin{itemize} \item the two critical points of $H$ (where both $dH$ and $dJ$ vanish); \item if $x \in S^2$ is in the southern hemisphere ($x_3 < 0$), then there exist exactly two points, $p_+ = (x,\eta,\xi)$ and $p_- = (x,-\eta,-\xi)$, in the cotangent fiber above $x$ where $dH_p$ and $dJ_p$ are linearly dependent; \item since $dH_p$ and $dJ_p$ are linearly dependent along the trajectory of the hamiltonian vector field of $H$ through $p_+$, this trajectory is also a trajectory of the hamiltonian vector field of $J$ and hence its projection onto $S^2$ is a latitudinal (or horizontal) circle. The projection of the trajectory through $p_-$ is the same latitudinal circle traced in the opposite direction. \end{itemize} \end{enumerate} \end{examples} Let $(M,\omega,H)$ be an integrable system of dimension $2n$ with integrals of motion $f_1=H, f_2,\ldots, f_n$. Let $c \in {\mathbb R}^n$ be a regular value of $f:= (f_1, \ldots, f_n)$. The corresponding level set $f^{-1} (c)$ is a lagrangian submanifold, as it is $n$-dimensional and its tangent bundle is isotropic. If the flows are complete on $f^{-1} (c)$, by following them we obtain global coordinates. Any compact component of $f^{-1} (c)$ must hence be a torus. These components, when they exist, are called \textbf{Liouville tori}\index{Liouville ! torus}. A way to ensure that compact components exist is to have one of the $f_i$'s proper.
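Proposition~\ref{prop:integral} (with $f = H$) says the hamiltonian is constant along its own flow. This is easy to watch numerically for the simple pendulum of the example above. The sketch below (our own illustration, not from the text; the RK4 integrator and all names are ours) integrates the Hamilton equations $\frac{d\theta}{dt} = \frac{\partial H}{\partial \xi}$, $\frac{d\xi}{dt} = -\frac{\partial H}{\partial \theta}$ with $m = \ell = 1$ and checks that $H$ stays constant along the trajectory.

```python
import numpy as np

# Simple pendulum (m = l = 1): H(theta, xi) = xi^2/(2 m l^2) + m l (1 - cos theta).
# We integrate Hamilton's equations with a classical RK4 step and check that
# the hamiltonian, being an integral of motion, is numerically conserved.

m = l = 1.0

def H(theta, xi):
    return xi**2 / (2 * m * l**2) + m * l * (1 - np.cos(theta))

def vector_field(state):
    theta, xi = state
    return np.array([xi / (m * l**2),         #  dtheta/dt =  dH/dxi
                     -m * l * np.sin(theta)]) #  dxi/dt    = -dH/dtheta

def rk4_step(state, dt):
    k1 = vector_field(state)
    k2 = vector_field(state + dt / 2 * k1)
    k3 = vector_field(state + dt / 2 * k2)
    k4 = vector_field(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 0.0])      # released at rest from angle 1 radian
E0 = H(*state)
for _ in range(10000):            # integrate up to t = 10
    state = rk4_step(state, 1e-3)
print(abs(H(*state) - E0) < 1e-8)  # energy conserved along the flow
```

The trajectory stays on the level set $\{H = E_0\}$, which for this energy is one of the closed orbits (a Liouville "torus" is here just a circle, since $n = 1$).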
\begin{theorem} \label{thm:arnold}\index{theorem ! Arnold-Liouville}\index{Liouville ! Arnold-Liouville theorem}\index{Arnold ! Arnold-Liouville theorem} \textbf{(Arnold-Liouville~\cite{ar:mathematical})} $\;$ Let $(M,\omega,H)$ be an integrable system of dimension $2n$ with integrals of motion $f_1=H, f_2,\ldots, f_n$. Let $c \in {\mathbb R}^n$ be a regular value of $f:= (f_1, \ldots, f_n)$. The level $f^{-1} (c)$ is a lagrangian submanifold of $M$. \begin{itemize} \item[(a)] If the flows of the hamiltonian vector fields $X_{f_1}, \ldots, X_{f_n}$ starting at a point $p \in f^{-1} (c)$ are complete, then the connected component of $f^{-1} (c)$ containing $p$ is a homogeneous space for ${\mathbb R}^n$, i.e., is of the form ${\mathbb R}^{n-k} \times {\mathbb T}^k$ for some $k$, $0 \leq k \leq n$, where ${\mathbb T}^k$ is a $k$-dimensional torus. With respect to this affine structure, that component has coordinates $\varphi_1, \ldots, \varphi_n$, known as \textbf{angle coordinates}\index{angle coordinates}, in which the flows of $X_{f_1}, \ldots, X_{f_n}$ are linear. \item[(b)] There are coordinates $\psi_1, \ldots, \psi_n$, known as \textbf{action coordinates}\index{action ! coordinates}, complementary to the angle coordinates, such that the $\psi_i$'s are integrals of motion and $\varphi_1, \ldots, \varphi_n,\psi_1, \ldots, \psi_n$ form a Darboux chart. \end{itemize} \end{theorem} Therefore, the dynamics of an integrable system has a simple explicit solution in action-angle coordinates\index{action-angle coordinates}. The proof of part~(a) -- the easy part of the theorem -- is sketched above. For the proof of part~(b), see for instance~\cite{ar:mathematical,du:global}.
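For the harmonic oscillator $H = \frac{1}{2}(q^2 + p^2)$ on ${\mathbb R}^2 \setminus \{0\}$, the conclusions of the Arnold-Liouville theorem can be seen directly: the action $I = \frac{1}{2}(q^2 + p^2)$ is an integral of motion and the angle evolves linearly along the flow. The following is a small numerical sketch of this (our own illustration, with names ours, not from the text).

```python
import numpy as np

# Harmonic oscillator H = (q^2 + p^2)/2 with Hamilton's equations
# dq/dt = p, dp/dt = -q.  The exact flow is a rotation of the (q, p) plane;
# in action-angle coordinates the action is constant and the angle is linear.

def flow(q, p, t):
    # exact hamiltonian flow: q(t) = q cos t + p sin t, p(t) = -q sin t + p cos t
    return q * np.cos(t) + p * np.sin(t), -q * np.sin(t) + p * np.cos(t)

action = lambda q, p: (q**2 + p**2) / 2      # the action coordinate I
angle = lambda q, p: np.arctan2(p, q)        # the angle coordinate phi

q0, p0 = 1.0, 0.0
ts = np.linspace(0.0, 1.0, 5)
states = [flow(q0, p0, t) for t in ts]

actions = [action(q, p) for q, p in states]
angles = np.unwrap([angle(q, p) for q, p in states])

print(np.allclose(actions, actions[0]))              # action is constant
print(np.allclose(np.diff(angles), -(ts[1] - ts[0])))  # angle flows linearly
```

Each level circle $\{I = c\}$, $c > 0$, is a Liouville torus (${\mathbb T}^1$), and $(\varphi, I)$ is a Darboux chart away from the origin, exactly as in part (b) of the theorem.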
Geometrically, regular levels being lagrangian submanifolds implies that, in a neighborhood of a regular value, the map $f: M \to {\mathbb R}^n$ collecting the given integrals of motion is a \textbf{lagrangian fibration}\index{lagrangian fibration}, i.e., it is locally trivial and its fibers are lagrangian submanifolds. Part~(a) states that there are coordinates along the fibers, the angle coordinates,\footnote{The name {\em angle coordinates} is used even if the fibers are not tori.} in which the flows of $X_{f_1}, \ldots, X_{f_n}$ are linear. Part (b) guarantees the existence of coordinates on ${\mathbb R}^n$, the action coordinates, $\psi_1, \ldots, \psi_n$, complementary to the angle coordinates, that (Poisson) commute among themselves and satisfy $\{ \varphi _i , \psi _j \} = \delta_{ij}$. The action coordinates are generally not the given integrals of motion because $\varphi_1, \ldots, \varphi_n,f_1, \ldots, f_n$ do not form a Darboux chart. \ssubsection{Symplectic and Hamiltonian Actions} \index{action ! symplectic}\index{action ! hamiltonian}\index{hamiltonian ! action}\index{symplectic ! action} \label{sec:actions} Let $(M,\omega)$ be a symplectic manifold, and $G$ a Lie group. \begin{definition}\index{action ! symplectic}\index{symplectic ! action} An action\footnote{A (smooth) \textbf{action} of $G$ on $M$ is a group homomorphism $G \to \mathrm{Diff}(M)$, $g \mapsto \psi_g$, whose evaluation map $M \times G \to M$, $(p,g) \mapsto \psi_g (p)$, is smooth.} $\psi: G \to \mathrm{Diff}(M)$, $g \mapsto \psi_g$, is a \textbf{symplectic action} if each $\psi_g$ is a symplectomorphism, i.e., $\psi: G \to \mathrm{Sympl}(M,\omega) \subset \mathrm{Diff}(M)$. \end{definition} In particular, symplectic actions of ${\mathbb R}$ on $(M,\omega)$\index{symplectic ! action}\index{action ! symplectic} are in one-to-one correspondence with complete symplectic vector fields on $M$:\index{complete vector field}\index{vector field ! 
symplectic}\index{symplectic ! vector field} \[ \psi = \exp tX \quad \longleftrightarrow \quad X_p = \left. \frac {d\psi_t(p)}{dt} \right|_{t=0}\ , \ p \in M \ . \] We may define a symplectic action $\psi$ of $S^1$ or ${\mathbb R}$ on $(M,\omega)$ to be \textbf{hamiltonian}\index{action ! hamiltonian}\index{hamiltonian ! action} if the vector field $X$ generated by $\psi$ is hamiltonian, that is, when there is $H:M \to {\mathbb R}$ with $d H = \imath_X \omega$. An action of $S^1$ may be viewed as a periodic action of ${\mathbb R}$. \begin{examples} \begin{enumerate} \item On $({\mathbb R}^{2n}, \omega_0)$, the orbits of the action generated by $X = -\frac {\partial}{\partial y_1}$ are lines parallel to the $y_1$-axis, $\{(x_1,y_1-t,x_2,y_2,\dots,x_n,y_n) \mid t \in {\mathbb R}\}$. Since $X$ is hamiltonian with hamiltonian function $x_1$, this is a {\em hamiltonian action}\index{action ! hamiltonian}\index{hamiltonian ! action}\index{example ! of hamiltonian actions} of ${\mathbb R}$. \item On the 2-sphere $(S^{2},d\theta \wedge dh)$ in cylindrical coordinates, the one-parameter group of diffeomorphisms given by rotation around the vertical axis, $\psi_{t}(\theta,h)=(\theta+t,h)$ ($t \in {\mathbb R}$) is a symplectic action of the group $S^1 \simeq {\mathbb R} / \langle 2\pi \rangle$, as it preserves the area form $d\theta \wedge dh$. Since the vector field corresponding to $\psi$ is hamiltonian with hamiltonian function $h$, this is a {\em hamiltonian action}\index{action ! hamiltonian}\index{hamiltonian ! action}\index{example ! of hamiltonian actions} of $S^1$. \end{enumerate} \end{examples} When $G$ is a product of $S^1$'s or ${\mathbb R}$'s, an action $\psi: G \to \mathrm{Sympl}(M,\omega)$ is called {\em hamiltonian}\index{hamiltonian ! action}\index{action ! 
hamiltonian} when the restriction to each 1-dimensional factor is hamiltonian in the previous sense {\em with hamiltonian function preserved by the action of the rest of $G$}. For an arbitrary Lie group $G$, we use an upgraded hamiltonian function $\mu$, known as a {\em moment map}\index{moment map ! upgraded hamiltonian function}, determined up to an additive local constant by coordinate functions $\mu_i$ indexed by a basis of the Lie algebra of $G$. We require that the constant be such that $\mu$ is {\em equivariant}, i.e., $\mu$ intertwines the action of $G$ on $M$ and the coadjoint action of $G$ on the dual of its Lie algebra. (If $M$ is compact, equivariance can be achieved by adjusting the constant so that $\int_M \mu \omega^n = 0$. Similarly when there is a fixed point $p$ (on each component of $M$) by imposing $\mu (p) = 0$.) Let $G$ be a Lie group, ${\mathfrak g}$ the Lie algebra of $G$, and ${\mathfrak g}^*$ the dual vector space of ${\mathfrak g}$. \begin{definition} An action $\psi: G \to \mathrm{Diff}(M)$ on a symplectic manifold $(M,\omega)$ is a \textbf{hamiltonian action}\index{hamiltonian ! action}\index{action ! hamiltonian} if there exists a map $\mu: M \to {\mathfrak g}^*$ satisfying: \begin{itemize} \item For each $X \in {\mathfrak g}$, we have $d\mu^X = \imath_{X^{\#}}\omega$, i.e., $\mu^X$ is a hamiltonian function\index{hamiltonian ! function}\index{function ! hamiltonian}\index{hamiltonian ! moment map} for the vector field $X^{\#}$, where \begin{itemize} \item $\mu^X: M \to {\mathbb R}$, $\mu^X(p) := \langle \mu(p),X\rangle$, is the component of $\mu$ along $X$, \item $X^{\#}$ is the vector field on $M$ generated by the one-parameter subgroup $\{\exp tX \mid t \in {\mathbb R}\} \subseteq G$. \end{itemize} \item The map $\mu$ is {\em equivariant}\index{equivariant ! moment map}\index{moment map ! 
equivariance} with respect to the given action $\psi$ on $M$ and the coadjoint action: $\mu \circ \psi_g = \mathrm{Ad}_g^* \circ \mu$, for all $g \in G$. \end{itemize} Then $(M,\omega,G,\mu)$ is a \textbf{hamiltonian $G$-space}\index{hamiltonian ! G-space@$G$-space}\index{G-space@$G$-space} and $\mu$ is a \textbf{moment map}\index{moment map ! definition}\index{moment map ! hamiltonian G-space@hamiltonian $G$-space}. \end{definition} This definition matches the previous one when $G$ is an abelian group ${\mathbb R}$, $S^1$ or ${\mathbb T}^n$, for which equivariance becomes invariance since the coadjoint action is trivial. \begin{examples}\index{moment map ! example} \begin{enumerate} \item Let ${\mathbb T} ^n = \{ (t_1, \ldots, t_n) \in {\mathbb C}^n\, : \, |t_j| =1, \mbox{ for all }j \, \}$ be a torus acting on ${\mathbb C}^n$ by $(t_1, \ldots, t_n) \cdot (z_1, \ldots, z_n) = (t_1^{k_1} z_1, \ldots, t_n^{k_n} z_n)$, where $k_1, \ldots, k_n \in {\mathbb Z}$ are fixed. This action is hamiltonian with a moment map $\mu : {\mathbb C}^n \to ({\mathfrak t} ^n)^* \simeq {\mathbb R}^n$, $\mu (z_1, \ldots, z_n) = - \textstyle{\frac12} (k_1 |z_1|^2, \ldots, k_n |z_n|^2)$. \item When a Lie group $G$ acts on two symplectic manifolds $(M_j, \omega_j)$, $j = 1,2$, with moment maps $\mu_j : M_j \to {\mathfrak g}^*$, the diagonal action of $G$ on $M_1 \times M_2$ has moment map $\mu : M_1 \times M_2 \to {\mathfrak g}^*$, $\mu (p_1,p_2) = \mu_1 (p_1) + \mu_2 (p_2)$. \item Equip the coadjoint orbits\index{coadjoint ! orbit} of a Lie group $G$ with the canonical symplectic form\index{canonical ! symplectic form on a coadjoint orbit}\index{symplectic ! canonical symplectic form on a coadjoint orbit} (Section~\ref{sec:symplectic_hamiltonian_fields}). Then, for each $\xi \in {\mathfrak g}^*$, the coadjoint action\index{coadjoint ! 
action} on the orbit $G \cdot \xi$ is hamiltonian with moment map simply the inclusion map $\mu : G \cdot \xi \hookrightarrow {\mathfrak g}^*$. \item Identify the Lie algebra of the unitary group $\mathrm{U} (n)$ with its dual via the inner product $\langle A, B \rangle = \mathrm{trace} (A ^* B)$. The natural action of $\mathrm{U} (n)$ on $({\mathbb C}^n, \omega_0)$ is hamiltonian\index{moment map ! example} with moment map $\mu : {\mathbb C}^n \to {\mathfrak u} (n)$ given by $\mu (z) = \textstyle{i \over {2}} z z^*$. Similarly, a moment map for the natural action of $\mathrm{U} (k)$ on the space $({\mathbb C}^{k\times n}, \omega_0)$ of complex $(k\times n)$-matrices is given by $\mu (A) = \textstyle{{i} \over {2}} A A^*$ for $A \in {\mathbb C}^{k\times n}$. Thus the $\mathrm{U} (n)$-action by conjugation on the space $({\mathbb C}^{n^2},\omega_0)$ of complex $(n\times n)$-matrices is hamiltonian, with moment map given by $\mu (A) = \textstyle{i \over 2} [A, A^*]$. \item For the spherical pendulum (Section~\ref{sec:integrable}), the {\em energy-momentum map}\index{energy ! energy-momentum map} $(H,J) : T^* S^2 \to {\mathbb R}^2$ is a moment map for the ${\mathbb R} \times S^1$ action given by time flow and rotation about the vertical axis. \item Suppose that a compact Lie group acts on a symplectic manifold $(M,\omega)$ in a hamiltonian way, and that $q \in M$ is a fixed point for the $G$-action. Then, by an equivariant version of Darboux's theorem,\footnote{\textbf{Equivariant Darboux Theorem~\cite{we:lagrangian}:}\index{theorem ! equivariant Darboux}\index{Darboux ! equivariant Darboux theorem}\index{equivariant ! Darboux theorem} {\em Let $(M, \omega)$ be a $2n$-dimensional symplectic manifold equipped with a symplectic action of a compact Lie group $G$, and let $q$ be a fixed point. 
Then there exists a $G$-invariant chart $({\mathcal U},x_1,\dots,x_n,y_1,\dots,y_n)$ centered at $q$ and $G$-equivariant with respect to a linear action of $G$ on ${\mathbb R}^{2n}$ such that \[ \left. \omega \right|_{\mathcal U} = \sum\limits_{k=1}^n dx_k \wedge dy_k \ . \]}A suitable linear action on ${\mathbb R}^{2n}$ is equivalent to the induced action of $G$ on $T_qM$. The proof relies on an equivariant version of the Moser trick and may be found in~\cite{gu-st:techniques}.} there exists a Darboux chart $({\mathcal U} , z_1 , \ldots , z_n)$ centered at $q$ that is $G$-equivariant with respect to a linear action of $G$ on ${\mathbb C}^n$. Consider an $\varepsilon$-blow-up of $M$ relative to this chart, for $\varepsilon$ sufficiently small. Then $G$ acts on the blow-up in a hamiltonian way. \end{enumerate} \end{examples} The concept of a moment map\index{moment map ! origin} was introduced by Souriau~\cite{so:structure} under the French name {\em application moment}; besides the more standard English translation to {\em moment map}, the alternative {\em momentum map} is also used, and recently James Stasheff has proposed the short unifying new word \textbf{momap}. The name comes from being the generalization of {\em linear and angular momenta}\index{momentum}\index{angular momentum}\index{linear momentum} in classical mechanics. Let ${\mathbb R}^3$ act on $({\mathbb R}^6 \simeq T^*{\mathbb R}^3, \omega_0 = \sum dx_i \wedge dy_i)$ by \textbf{translations}: \[ a \in {\mathbb R}^3 \; \longmapsto \; \psi_a \in \mathrm{Sympl}({\mathbb R}^6,\omega_0)\ , \; \psi_a (x,y) = (x + a,y)\ . 
\] The vector field generated by $X = a = (a_1,a_2,a_3)$ is $X^{\#} = a_1 \frac {\partial}{\partial x_1} + a_2 \frac {\partial}{\partial x_2} + a_3 \frac {\partial}{\partial x_3}$, and the \textbf{linear momentum}\index{linear momentum} map \[ \mu: {\mathbb R}^6 \longrightarrow {\mathbb R}^3\ , \quad \mu(x,y) = y \] is a moment map, with $\mu^{a}(x,y) = \langle \mu(x,y),a \rangle = y \cdot a$. Classically, $y$ is called the \textbf{momentum vector}\index{momentum vector} corresponding to the \textbf{position vector}\index{position vector} $x$. The $\mathrm{SO} (3)$-action on ${\mathbb R}^3$ by \textbf{rotations} lifts to a symplectic action $\psi$ on the cotangent bundle ${\mathbb R}^6$. The infinitesimal version of this action is\footnote{The Lie group $\mathrm{SO} (3) = \{A \in \mathrm{GL} (3;{\mathbb R}) \mid A^t A = \mathrm{Id} \mbox{ and } \mathrm{det} A = 1\}$,\index{example ! coadjoint orbits} has Lie algebra, ${\mathfrak g} = \{A \in \mathfrak{gl}(3;{\mathbb R}) \mid A + A^t = 0\}$, the space of $3 \times 3$ skew-symmetric matrices. The standard identification of ${\mathfrak g}$ with ${\mathbb R}^3$ carries the Lie bracket to the exterior product: \[ \begin{array}{rcl} A = \left[ \begin{array}{ccc} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{array} \right] & \longmapsto & a = (a_1,a_2,a_3) \\ \\ {[ A,B ]} = AB - BA & \longmapsto & a \times b \ . \end{array} \]} \[ a \in {\mathbb R}^3 \; \longmapsto \; d \psi (a) \in \chi^{\mathrm{sympl}}({\mathbb R}^6) \ , \; d\psi (a) (x,y) = (a \times x, a \times y)\ . \] Then the \textbf{angular momentum}\index{angular momentum} map \[ \mu: {\mathbb R}^6 \longrightarrow {\mathbb R}^3 \ , \quad \mu(x,y) = x \times y \] is a moment map, with $\mu^{a}(x,y) = \langle \mu(x,y),a \rangle = (x \times y) \cdot a$. The notion of a moment map associated to a group action on a symplectic manifold formalizes the \textbf{Noether principle}\index{Noether ! principle}\index{principle ! 
Noether}, which asserts that there is a one-to-one correspondence between {\em symmetries} (or one-parameter group actions) and {\em integrals of motion} (or conserved quantities) for a mechanical system. \begin{definition} An \textbf{integral of motion}\index{motion ! integral of motion}\index{integral ! of motion} of a hamiltonian $G$-space $(M,\omega,G,\mu)$ is a $G$-invariant function $f: M \to {\mathbb R}$. When $\mu$ is constant on the trajectories of a hamiltonian vector field $X_f$, the corresponding flow $\{\exp tX_f \mid t \in {\mathbb R}\}$ (regarded as an ${\mathbb R}$-action) is a \textbf{symmetry} of the hamiltonian $G$-space $(M,\omega,G,\mu)$. \end{definition} \vspace*{-2ex} \begin{theorem}\index{theorem ! Noether}\index{Noether ! theorem} \textbf{(Noether)} $\;$ Let $(M,\omega,G,\mu)$ be a hamiltonian $G$-space where $G$ is connected. If $f$ is an integral of motion, the flow of its hamiltonian vector field $X_f$ is a symmetry. If the flow of some hamiltonian vector field $X_f$ is a symmetry, then a corresponding hamiltonian function $f$ is an integral of motion. \end{theorem} \vspace*{-2ex} \begin{proof} Let $\mu^X = \langle \mu,X\rangle: M \to {\mathbb R}$ for $X \in {\mathfrak g}$. We have ${\mathcal L}_{X_f} \mu^X = \imath_{X_f} d\mu^X = \imath_{X_f}\imath_{X^{\#}}\omega = -\imath_{X^{\#}} \imath_{X_f}\omega = -\imath_{X^{\#}} df = -{\mathcal L}_{X^{\#}} f$. So $\mu$ is invariant along the flow of $X_f$ if and only if $f$ is invariant under the infinitesimal $G$-action. \end{proof} We now turn to the questions of existence and uniqueness of moment maps. Let ${\mathfrak g}$ be a Lie algebra, and let $C^k := \Lambda^k{\mathfrak g}^*$ be the set of \textbf{$k$-cochains} on ${\mathfrak g}$, that is, of alternating $k$-linear maps ${\mathfrak g} \times \dots \times {\mathfrak g} \to {\mathbb R}$. 
The linear operator $\delta: C^k \to C^{k+1}$ defined by $\delta c(X_0,\dots,X_k) = \sum_{i < j} (-1)^{i+j} c([X_i,X_j],X_0,\dots, {\widehat X}_i,\dots,{\widehat X}_j,\dots,X_k)$ satisfies $\delta^2 = 0$. The \textbf{Lie algebra cohomology groups}\index{cohomology ! Lie algebra}\index{Lie ! algebra cohomology} (or \textbf{Chevalley cohomology groups}\index{cohomology ! Chevalley}\index{Chevalley cohomology}) of ${\mathfrak g}$ are the cohomology groups of the complex $0 \stackrel{\delta}{\longrightarrow} C^0 \stackrel{\delta}{\longrightarrow} C^1 \stackrel{\delta}{\longrightarrow} \dots$: \[ H^k({\mathfrak g};{\mathbb R}) := \frac {\ker \delta: C^k \to C^{k+1}}{\mathrm{im}\ \delta: C^{k-1} \to C^k}\ . \] We always have $H^0({\mathfrak g};{\mathbb R}) = {\mathbb R}$. If $c \in C^1 = {\mathfrak g}^*$, then $\delta c(X,Y) = -c([X,Y])$. The \textbf{commutator ideal}\index{commutator ideal} $[{\mathfrak g},{\mathfrak g}]$ is the subspace of ${\mathfrak g}$ spanned by $\{ [X,Y] \mid X,Y \in {\mathfrak g} \}$. Since $\delta c = 0$ if and only if $c$ vanishes on $[{\mathfrak g},{\mathfrak g}]$, we conclude that $H^1({\mathfrak g};{\mathbb R}) = [{\mathfrak g},{\mathfrak g}]^0$, where $[{\mathfrak g},{\mathfrak g}]^0 \subseteq {\mathfrak g}^*$ is the \textbf{annihilator} of $[{\mathfrak g},{\mathfrak g}]$. An element of $C^2$ is an alternating bilinear map $c: {\mathfrak g} \times {\mathfrak g} \to {\mathbb R}$, and $\delta c(X,Y,Z) = -c([X,Y],Z) + c([X,Z],Y) - c([Y,Z],X)$. If $c = \delta b$ for some $b \in C^1$, then $c(X,Y) = (\delta b) (X,Y) = -b([X,Y])$. If ${\mathfrak g}$ is the Lie algebra of a compact connected Lie group $G$, then by averaging one can show that the de Rham cohomology may be computed from the subcomplex of $G$-invariant forms, and hence $H^k({\mathfrak g};{\mathbb R}) = H_{\mathrm{de Rham}}^k(G)$. 
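The coboundary operator $\delta$ is concrete enough to evaluate directly. As a small numerical sketch (our construction, not from the text), take ${\mathfrak g} = \mathfrak{so}(3) \simeq ({\mathbb R}^3, \times)$ and check that $\delta^2 = 0$ on 1-cochains, and that $\delta: C^1 \to C^2$ is injective, reflecting $H^1(\mathfrak{so}(3);{\mathbb R}) = [{\mathfrak g},{\mathfrak g}]^0 = 0$ since $[{\mathfrak g},{\mathfrak g}] = {\mathfrak g}$.

```python
import numpy as np

# Chevalley coboundary on g = so(3) identified with (R^3, cross product):
# (delta c)(X0,...,Xk) = sum_{i<j} (-1)^{i+j} c([Xi,Xj], X0,..,^Xi,..,^Xj,..,Xk).

rng = np.random.default_rng(0)
bracket = np.cross  # the so(3) Lie bracket under the standard identification

def delta1(b):
    # delta of a 1-cochain b (a linear functional, stored as a vector):
    # (delta b)(X, Y) = -b([X, Y])
    return lambda X, Y: -b @ bracket(X, Y)

def delta2(c):
    # delta of a 2-cochain c (an alternating bilinear map):
    # (delta c)(X, Y, Z) = -c([X,Y], Z) + c([X,Z], Y) - c([Y,Z], X)
    return lambda X, Y, Z: (-c(bracket(X, Y), Z)
                            + c(bracket(X, Z), Y)
                            - c(bracket(Y, Z), X))

b = rng.standard_normal(3)
X, Y, Z = rng.standard_normal((3, 3))
# delta^2 = 0, by the Jacobi identity:
assert abs(delta2(delta1(b))(X, Y, Z)) < 1e-12
# On the basis e1, e2, e3, the values (delta b)(e_i, e_j) recover -b,
# so delta: C^1 -> C^2 is injective and H^1(so(3); R) = 0.
e = np.eye(3)
vals = [delta1(b)(e[1], e[2]), delta1(b)(e[2], e[0]), delta1(b)(e[0], e[1])]
assert np.allclose(vals, -b)
print("delta^2 = 0 on so(3)")
```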
\begin{proposition} If $H^1({\mathfrak g};{\mathbb R}) = H^2({\mathfrak g};{\mathbb R}) = 0$, then any symplectic $G$-action is hamiltonian. \end{proposition} \vspace*{-2ex} \begin{proof} Let $\psi: G \to \mathrm{Sympl}(M,\omega)$ be a symplectic action of $G$ on a symplectic manifold $(M,\omega)$. Since $H^1({\mathfrak g};{\mathbb R}) = 0$ means that $[{\mathfrak g},{\mathfrak g}] = {\mathfrak g}$, and since commutators of symplectic vector fields are hamiltonian, we have $d\psi: {\mathfrak g} = [{\mathfrak g},{\mathfrak g}] \to \chi^{\mathrm{ham}}(M)$. The action $\psi$ is hamiltonian if and only if there is a Lie algebra homomorphism $\mu^*: {\mathfrak g} \to C^{\infty}(M)$ such that the hamiltonian vector field of $\mu^* (\xi)$ is $d\psi (\xi)$. We first take an arbitrary vector space lift $\tau: {\mathfrak g} \to C^{\infty}(M)$ with this property, i.e., for each basis vector $X \in {\mathfrak g}$, we choose $\tau(X) = \tau^X \in C^{\infty}(M)$ such that $X_{\tau^X} = d\psi(X)$. The map $X \mapsto \tau^X$ may not be a Lie algebra homomorphism. By construction, $\tau^{[X,Y]}$ is a hamiltonian function for $[X,Y]^{\#}$, and (as computed in Section~\ref{sec:integrable}) $\{\tau^X,\tau^Y\}$ is a hamiltonian function for $-[X^{\#},Y^{\#}]$. Since $[X,Y]^{\#} = -[X^{\#},Y^{\#}]$, the corresponding hamiltonian functions must differ by a constant: \[ \tau^{[X,Y]} - \{\tau^X,\tau^Y\} = c(X,Y) \in {\mathbb R} \ . \] By the Jacobi identity, $\delta c = 0$. Since $H^2({\mathfrak g};{\mathbb R}) = 0$, there is $b \in {\mathfrak g}^*$ satisfying $c = \delta b$, $c(X,Y) = -b([X,Y])$. We define \[ \begin{array}{rrcl} \mu^*: & {\mathfrak g} & \longrightarrow & C^{\infty}(M) \\ & X & \longmapsto & \mu^*(X) = \tau^X + b(X) = \mu^X \ . \end{array} \] Now $\mu^*$ is a Lie algebra homomorphism: $\mu^*([X,Y]) = \{\tau^X,\tau^Y\} = \{\mu^X,\mu^Y\}$. 
\end{proof} By the Whitehead lemmas\index{Whitehead lemmas} (see for instance~\cite[pages 93-95]{ja:lie}) a semisimple Lie group $G$ has $H^1({\mathfrak g};{\mathbb R}) = H^2({\mathfrak g};{\mathbb R}) = 0$. As a corollary, {\em when $G$ is semisimple, any symplectic $G$-action is hamiltonian.}\footnote{A compact Lie group $G$ has $H^1({\mathfrak g};{\mathbb R}) = H^2({\mathfrak g};{\mathbb R}) = 0$ if and only if it is semisimple. In fact, a compact Lie group $G$ is semisimple\index{semisimple} when ${\mathfrak g} = [{\mathfrak g},{\mathfrak g}]$. The unitary group $\mathrm{U}(n)$ is not semisimple because the multiples of the identity, $S^1 \cdot \mathrm{Id}$, form a nontrivial center; at the level of the Lie algebra, this corresponds to the subspace ${\mathbb R} \cdot \mathrm{Id}$ of scalar matrices, which are not commutators since they are not traceless. Any abelian Lie group is {\em not} semisimple. Any direct product of the other compact classical groups $\mathrm{SU} (n)$, $\mathrm{SO} (n)$ and $\mathrm{Sp} (n)$ is semisimple. An arbitrary compact Lie group admits a finite cover by a direct product of tori and semisimple Lie groups.} \begin{proposition} For a connected Lie group $G$,\index{moment map ! uniqueness} if $H^1({\mathfrak g};{\mathbb R}) = 0$, then moment maps for hamiltonian $G$-actions are unique. \end{proposition} \vspace*{-2ex} \begin{proof} Suppose that $\mu_1$ and $\mu_2$ are two moment maps for an action $\psi$. For each $X \in {\mathfrak g}$, $\mu_1^X$ and $\mu_2^X$ are both hamiltonian functions for $X^{\#}$, thus $\mu_1^X - \mu_2^X = c(X)$ is locally constant. This defines $c \in {\mathfrak g}^*$, $X \mapsto c(X)$. Since the corresponding $\mu_i^* : {\mathfrak g} \to C^\infty (M)$ are Lie algebra homomorphisms, we have $c([X,Y]) = 0$, $\forall X,Y \in {\mathfrak g}$, i.e., $c \in [{\mathfrak g},{\mathfrak g}]^0 = \{0\}$. Hence, $\mu_1 = \mu_2$. 
\end{proof} In general, if $\mu: M \to {\mathfrak g}^*$ is a moment map, then given any $c \in [{\mathfrak g},{\mathfrak g}]^0$, $\mu_1 = \mu + c$ is another moment map. In other words, moment maps are unique up to elements of the dual of the Lie algebra that annihilate the commutator ideal. The two extreme cases are when \[ \begin{array}{ll} \mbox{$\bullet$ $G$ is semisimple:} & \mbox{any symplectic action is hamiltonian} \ ,\\ & \mbox{moment maps are unique} \ ; \\ \mbox{$\bullet$ $G$ is abelian:} & \mbox{symplectic actions may not be hamiltonian} \ ,\\ & \mbox{moment maps are unique up to a constant $c \in {\mathfrak g}^*$}\ . \end{array} \] \ssubsection{Convexity} \index{moment map ! properties}\index{moment map ! convexity} Atiyah, Guillemin and Sternberg~\cite{at:convexity, gu-st:convexity} showed that the image of the moment map for a hamiltonian torus action on a compact connected symplectic manifold is always a polytope.\footnote{A \textbf{polytope} in ${\mathbb R}^n$ is the convex hull of a finite number of points in ${\mathbb R}^n$. A \textbf{convex polyhedron} is a subset of ${\mathbb R}^n$ that is the intersection of a finite number of affine half-spaces. Hence, polytopes coincide with bounded convex polyhedra.} A proof of this theorem can also be found in~\cite{mc-sa:introduction}. \begin{theorem}\label{thm:convexity}\index{theorem ! Atiyah-Guillemin-Sternberg}\index{Atiyah-Guillemin-Sternberg theorem}\index{Guillemin|see{Atiyah-Guillemin-Sternberg}}\index{theorem ! convexity}\index{Sternberg|see{Atiyah-Guillemin-Sternberg}}\index{convexity} \textbf{(Atiyah, Guillemin-Sternberg)} $\;$ Let $(M,\omega)$ be a compact connected symplectic manifold. Suppose that $\psi: {\mathbb T}^m \to \mathrm{Sympl}(M,\omega)$ is a hamiltonian action of an $m$-torus with moment map $\mu: M \to {\mathbb R}^m$. 
Then: \begin{itemize} \item[(a)] the levels $\mu^{-1} (c)$ are connected ($c \in {\mathbb R}^m$);\index{connectedness} \item[(b)] the image $\mu (M)$ is convex; \item[(c)] $\mu (M)$ is the convex hull of the images of the fixed points of the action. \end{itemize} \end{theorem} The image $\mu(M)$ of the moment map is called the \textbf{moment polytope}\index{moment polytope}\index{polytope ! moment}. \begin{examples} \begin{enumerate} \item Suppose that ${\mathbb T}^m$ acts linearly on $({\mathbb C}^n, \omega_0)$. Let $\lambda^{(1)}, \ldots, \lambda^{(n)} \in {\mathbb Z}^m$ be the {\em weights} appearing in the corresponding weight space decomposition, that is, \[ {\mathbb C}^n \simeq \displaystyle{ \bigoplus _{k=1}^n V_{\lambda^{(k)}}} \ , \] where, for $\lambda^{(k)} = (\lambda^{(k)}_1,\ldots,\lambda^{(k)}_m)$, the torus ${\mathbb T}^m$ acts on the complex line $V_{\lambda^{(k)}}$ by $( e^{i t_1} , \ldots, e^{i t_m} ) \cdot v = e ^{i \sum_j \lambda^{(k)}_j t_j} v$. If the action is effective\footnote{An action of a group $G$ on a manifold $M$ is called \textbf{effective}\index{action ! effective}\index{effective ! action} if each group element $g \ne e$ moves at least one point $p \in M$, that is, $\cap_{p \in M} G_p = \{e\}$, where $G_p = \{g \in G \mid g \cdot p = p\}$ is the stabilizer of $p$.}, then $m \leq n$ and the weights $\lambda^{(1)} , \ldots, \lambda^{(n)}$ are part of a ${\mathbb Z}$-basis of ${\mathbb Z}^m$. If the action is symplectic (hence hamiltonian in this case), then the weight spaces $V_{\lambda^{(k)}}$ are symplectic subspaces. 
In this case, a moment map is given by \[ \mu (v) = - \textstyle{\frac12} \sum \limits_{k=1}^{n} \lambda^{(k)} | v_{\lambda^{(k)}} |^2 \ , \] where $| \cdot |$ is the standard norm\footnote{The standard inner product satisfies $\langle v,w \rangle = \omega_0 (v, Jw)$ where $J \frac{\partial}{\partial z} = i \frac{\partial}{\partial z}$ and $J \frac{\partial}{\partial \bar z} = -i \frac{\partial}{\partial \bar z}$. In particular, the standard norm is invariant for a symplectic complex-linear action.} and $v = v_{\lambda^{(1)}} + \ldots + v_{\lambda^{(n)}}$ is the weight space decomposition of $v$. We conclude that, if ${\mathbb T}^n$ acts on ${\mathbb C}^n$ in a linear, effective and hamiltonian way, then any moment map $\mu$ is a submersion, i.e., each differential $d \mu_v : {\mathbb C}^n \to {\mathbb R}^n$ ($v \in {\mathbb C}^n$) is surjective. \item Consider a coadjoint orbit ${\mathcal O}_{\lambda}$ for the unitary group $\mathrm{U} (n)$. Multiplying by $i$, the orbit ${\mathcal O}_{\lambda}$ can be viewed as the set of hermitian matrices with a given eigenvalue spectrum $\lambda = (\lambda_1 \geq \ldots \geq \lambda_n)$. The restriction of the coadjoint action to the maximal torus ${\mathbb T}^n$ of diagonal unitary matrices is hamiltonian with moment map $\mu : {\mathcal O}_\lambda \to {\mathbb R}^n$ taking a matrix to the vector of its diagonal entries. Then the moment polytope $\mu ({\mathcal O}_\lambda)$ is the convex hull $C$ of the points given by all the permutations of $(\lambda_1, \ldots , \lambda_n)$. This is a rephrasing of the classical theorem of Schur ($\mu ({\mathcal O}_\lambda) \subseteq C$) and Horn ($C \subseteq \mu ({\mathcal O}_\lambda)$). 
\end{enumerate} \end{examples} Example~1 is related to the universal local picture for a moment map near a fixed point of a hamiltonian torus action: \begin{theorem} \label{thm:moment_darboux} Let $(M^{2n}, \omega, {\mathbb T}^m , \mu)$ be a hamiltonian ${\mathbb T}^m$-space, and let $q \in M$ be a fixed point. Then there exists a chart $({\mathcal U},x_1,\dots,x_n,y_1,\dots,y_n)$ centered at $q$ and weights $\lambda^{(1)}, \ldots, \lambda^{(n)} \in {\mathbb Z}^m$ such that \[ \left. \omega \right|_{\mathcal U} = \sum\limits_{k=1}^n dx_k \wedge dy_k \quad \mbox{ and } \quad \left. \mu \right|_{\mathcal U} = \mu (q) - \frac 12 \sum\limits_{k=1}^n \lambda^{(k)} (x_k^2 + y_k^2) \ . \] \end{theorem} The following two results use the crucial fact that any effective action of an $m$-torus on a manifold has orbits of dimension $m$; a proof may be found in~\cite{br:groups}. \begin{corollary} Under the conditions of the convexity theorem, if the ${\mathbb T}^m$-action is effective, then there must be at least $m+1$ fixed points. \end{corollary} \vspace*{-2ex} \begin{proof} At a point $p$ of an $m$-dimensional orbit the moment map is a submersion, i.e., $(d\mu_1)_p,\dots,(d\mu_m)_p$ are linearly independent. Hence, $\mu(p)$ is an interior point of $\mu(M)$, and $\mu(M)$ is a nondegenerate polytope. A nondegenerate polytope in ${\mathbb R}^m$ has at least $m+1$ vertices. The vertices of $\mu(M)$ are images of fixed points. \end{proof} \vspace*{-1ex} \begin{proposition} \label{prop:dimension} Let $(M,\omega,{\mathbb T}^m,\mu)$ be a hamiltonian ${\mathbb T}^m$-space. If the ${\mathbb T}^m$-action is effective, then $\dim M \ge 2m$. \end{proposition} \vspace*{-2ex} \begin{proof} Since the moment map is constant on an orbit ${\mathcal O}$, for $p \in {\mathcal O}$ the differential $d\mu_p: T_pM \to {\mathfrak g}^*$ maps $T_p{\mathcal O}$ to $0$. 
Thus $T_p{\mathcal O} \subseteq \ker d\mu_p = (T_p{\mathcal O})^{\omega}$, where $(T_p{\mathcal O})^{\omega}$ is the symplectic orthogonal of $T_p{\mathcal O}$. This shows that orbits ${\mathcal O}$ of a hamiltonian torus action are isotropic submanifolds of $M$. In particular, by symplectic linear algebra we have that $\dim {\mathcal O} \le \frac {1}{2} \dim M$. Now consider an $m$-dimensional orbit. \end{proof} For a hamiltonian action of an arbitrary compact Lie group $G$ on a compact symplectic manifold $(M,\omega)$, the following {\em nonabelian} convexity theorem was proved by Kirwan~\cite{ki:convexity_III}: if $\mu: M \to {\mathfrak g}^*$ is a moment map, then the intersection $\mu(M) \cap {\mathfrak t}_+^*$ of the image of $\mu$ with a Weyl chamber for a Cartan subalgebra ${\mathfrak t} \subseteq {\mathfrak g}$ is a convex polytope. This had been conjectured by Guillemin and Sternberg and proved by them in particular cases. \ssection{Symplectic Reduction} \index{symplectic reduction} \label{section6} \ssubsection{Marsden-Weinstein-Meyer Theorem} \index{theorem ! Marsden-Weinstein-Meyer}\index{Weinstein ! Marsden-Weinstein-Meyer theorem}\index{Marsden-Weinstein-Meyer ! theorem}\index{Meyer|see{Marsden-Weinstein-Meyer}}\index{symplectic ! reduction|see{reduction}} \label{sec:m-w-m-thm} Classical physicists realized that, whenever there is a symmetry group of dimension $k$ acting on a mechanical system, the number of degrees of freedom for the position and momenta of the particles may be reduced by $2k$. Symplectic reduction formulates this process mathematically. \begin{theorem}\index{theorem ! Marsden-Weinstein-Meyer}\index{Weinstein ! Marsden-Weinstein-Meyer theorem}\index{Marsden-Weinstein-Meyer ! theorem}\label{thm:reduction} \textbf{(Marsden-Weinstein, Meyer~\cite{ma-we:reduction,me:symmetries})} $\;$ Let $(M,\omega,G,\mu)$ be a hamiltonian $G$-space (Section~\ref{sec:actions}) for a compact Lie group $G$. 
Let $i: \mu^{-1}(0) \hookrightarrow M$ be the inclusion map. Assume that $G$ acts freely on $\mu^{-1}(0)$. Then \begin{itemize} \item[(a)] the orbit space $M_{\mathrm{red}} = \mu^{-1}(0)/G$ is a manifold, \item[(b)] $\pi: \mu^{-1}(0) \rightarrow M_{\mathrm{red}}$ is a principal $G$-bundle, and \item[(c)] there is a symplectic form $\omega_{\mathrm{red}}$ on $M_{\mathrm{red}}$ satisfying $i^*\omega = \pi^*\omega_{\mathrm{red}}$. \end{itemize} \end{theorem} \vspace*{-1ex} \begin{definition}\index{quotient ! Marsden-Weinstein-Meyer}\index{Weinstein ! Marsden-Weinstein-Meyer quotient}\index{Marsden-Weinstein-Meyer ! quotient}\index{quotient ! symplectic}\index{symplectic ! quotient}\index{reduced ! space} The symplectic manifold $(M_{\mathrm{red}},\omega_{\mathrm{red}})$ is the \textbf{reduction} (or \textbf{reduced space}, or \textbf{symplectic quotient}) of $(M,\omega)$ with respect to $G,\mu$. \end{definition} When $M$ is K\"ahler\index{K\"ahler structure} and the action of $G$ preserves the complex structure, we can show that the symplectic reduction has a natural K\"ahler structure. Let $(M,\omega,G,\mu)$ be a hamiltonian $G$-space for a compact Lie group $G$. To reduce at a level $\xi \in {\mathfrak g}^*$ of $\mu$,\index{reduction ! other levels} we need $\mu^{-1}(\xi)$ to be preserved by $G$, or else take the $G$-orbit of $\mu^{-1}(\xi)$, or else take the quotient by the maximal subgroup of $G$ that preserves $\mu^{-1}(\xi)$. Since $\mu$ is equivariant, $G$ preserves $\mu^{-1}(\xi)$ if and only if $\mathrm{Ad}_g^*\xi = \xi$, $\forall g \in G$. Of course, the level $0$ is always preserved. Also, when $G$ is a torus, any level is preserved and \textbf{reduction at $\xi$} for the moment map $\mu$ is equivalent to reduction at $0$ for a shifted moment map $\phi: M \rightarrow {\mathfrak g}^*$, $\phi(p) := \mu(p) - \xi$. 
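The defining identity $d\mu^X = \imath_{X^{\#}}\omega$ and the invariance of every level set under a torus action can both be checked numerically. The following sketch (our construction; the moment map $\mu(z) = -|z|^2/2$ for the diagonal $S^1$-action on ${\mathbb C}^n$ is standard, but all code names are illustrative) views ${\mathbb C}^n$ as ${\mathbb R}^{2n}$ with interleaved coordinates $(x_1, y_1, \ldots, x_n, y_n)$.

```python
import numpy as np

# For the S^1-action e^{it}·z on (C^n, omega_0), the map mu(z) = -|z|^2/2
# satisfies d mu = i_{X#} omega_0, where X#(z) = iz is the generator.

rng = np.random.default_rng(1)
n = 3

def omega0(u, v):
    # omega_0 = sum dx_i ∧ dy_i on interleaved coordinates (x1,y1,...,xn,yn)
    return sum(u[2*i]*v[2*i+1] - u[2*i+1]*v[2*i] for i in range(n))

def X_sharp(p):
    # generator of the rotation action: multiplication by i, (x,y) -> (-y,x)
    q = np.empty_like(p)
    q[0::2], q[1::2] = -p[1::2], p[0::2]
    return q

mu = lambda p: -0.5 * p @ p

p = rng.standard_normal(2*n)
v = rng.standard_normal(2*n)
eps = 1e-6
dmu_v = (mu(p + eps*v) - mu(p - eps*v)) / (2*eps)   # directional derivative
assert abs(dmu_v - omega0(X_sharp(p), v)) < 1e-8    # d mu = i_{X#} omega_0

# Every level mu^{-1}(xi) is S^1-invariant, so one may reduce at any level xi
# by shifting: phi = mu - xi and reducing at 0.
t = 0.7
R = np.kron(np.eye(n), [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
assert np.isclose(mu(R @ p), mu(p))
print("moment map identity verified")
```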
In general, let ${\mathcal O}$ be a coadjoint orbit in ${\mathfrak g}^*$ equipped with the \textbf{canonical symplectic form}\index{canonical ! symplectic form on a coadjoint orbit}\index{symplectic ! canonical symplectic form on a coadjoint orbit} $\omega_{{\mathcal O}}$ (defined in Section~\ref{sec:symplectic_hamiltonian_fields}). Let ${\mathcal O}^-$ be the orbit ${\mathcal O}$ equipped with $-\omega_{{\mathcal O}}$. The natural product action of $G$ on $M \times {\mathcal O}^-$ is hamiltonian with moment map $\mu_{{\mathcal O}}(p,\xi) = \mu(p) - \xi$. If the hypothesis of Theorem~\ref{thm:reduction} is satisfied for $M \times {\mathcal O}^-$, then one obtains a \textbf{reduced space with respect to the coadjoint orbit ${\mathcal O}$}.\index{reduced ! space} \begin{examples} \index{reduction ! examples}\index{example ! reduction} \begin{enumerate} \item The standard symplectic form on ${\mathbb C}^n$ is $\omega_0 = \frac {i}{2} \sum dz_i \wedge d{\bar z}_i = \sum dx_i \wedge dy_i = \sum r_idr_i \wedge d\theta_i$ in polar coordinates. The $S^1$-action on $({\mathbb C}^n,\omega_0)$ where $e^{it} \in S^1$ acts as multiplication by $e^{it}$ has vector field $X^{\#} = \frac {\partial}{\partial\theta_1} + \frac {\partial}{\partial\theta_2} + \dots + \frac {\partial}{\partial\theta_n}$. This action is hamiltonian with moment map $\mu: {\mathbb C}^n \to {\mathbb R}$, $\mu(z) = -\frac {|z|^2}{2}$, since $\imath_{X^\#}\omega_0 = -\sum r_idr_i = -\frac {1}{2} \sum dr_i^2 = d\mu$. The level $\mu^{-1}(- \frac 12)$ is the unit sphere $S^{2n-1}$, whose orbit space is the projective space,\index{complex ! projective space}\index{reduction ! reduced space}\index{reduced ! space} \[ \mu^{-1}(\textstyle{- \frac 12})/S^1 = S^{2n-1}/S^1 = {\mathbb C}{\mathbb P}^{n-1} \ . \] The reduced symplectic form at level $- \frac 12$ is $\omega_{_{\mathrm{red}}} = \omega_{_{\mathrm{FS}}}$, the Fubini-Study symplectic form.\index{Fubini-Study form}\index{Study|see{Fubini-Study}}\index{symplectic ! 
Fubini-Study form}\index{form ! Fubini-Study} Indeed, if $\mathrm{pr} : {\mathbb C}^{n+1} \setminus \{0\} \to {\mathbb C} {\mathbb P} ^n$ is the standard projection, the forms $\mathrm{pr}^* \omega_{_{\mathrm{FS}}} = \textstyle{\frac{i}{2}} \partial \bar{\partial} \log (|z|^2)$ and $\omega_0$ have the same restriction to $S^{2n+1}$. \item Consider the natural action of $\mathrm{U} (k)$ on ${\mathbb C}^{k\times n}$ with moment map $\mu (A) = \frac{i}{2} A A^* + \frac{\mathrm{Id}}{2i}$ for $A \in {\mathbb C}^{k\times n}$ (Section~\ref{sec:actions}). Since $\mu^{-1} (0) = \{ A \in {\mathbb C}^{k\times n} \, | \, A A^* = \mathrm{Id} \}$, the reduced manifold is the grassmannian of $k$-planes in ${\mathbb C}^n$: \[ \mu^{-1} (0) / \mathrm{U} (k) = {\mathbb G} (k,n) \ . \] \end{enumerate} \end{examples} For the case where $G = S^1$ and $\dim M = 4$, here is a glimpse of reduction.\index{reduction ! low-brow proof} Let $\mu: M \rightarrow {\mathbb R}$ be the moment map and $p \in \mu^{-1}(0)$. Choose local coordinates near $p$: $\theta$ along the orbit through $p$, $\mu$ given by the moment map, and $\eta_1,\eta_2$ the pullback of coordinates on $M_{\mathrm{red}} = \mu^{-1}(0)/S^1$. Then the symplectic form can be written \[ \omega = A \ d\theta \wedge d\mu + \sum B_j \ d\theta \wedge d\eta_j + \sum C_j \ d\mu \wedge d\eta_j + D \ d\eta_1 \wedge d\eta_2 \ . \] As $d\mu = \imath \left( \frac {\partial}{\partial\theta} \right)\omega$, we must have $A = 1$, $B_j = 0$. Since $\omega$ is symplectic, we must have $D \ne 0$. Hence, $i^*\omega = D \ d\eta_1 \wedge d\eta_2$ is the pullback of a symplectic form on $M_{\mathrm{red}}$. The actual proof of Theorem~\ref{thm:reduction} requires some preliminary ingredients. Let $\mu : M \to {\mathfrak g}^*$ be the moment map for a (hamiltonian) action of a Lie group $G$ on a symplectic manifold $(M,\omega)$.
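The identity $\imath_{X^{\#}}\omega = d\mu$ from the first example above can be checked symbolically in the simplest case $n = 1$; a minimal sympy sketch (the real coordinates $x,y$ and the matrix encoding of $\omega_0$ are illustration choices, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# generator of the S^1-action on C ~ R^2: X# = -y d/dx + x d/dy = d/dtheta
X = sp.Matrix([-y, x])

# standard symplectic form omega_0 = dx ^ dy encoded as omega[i, j] = omega(e_i, e_j)
omega = sp.Matrix([[0, 1], [-1, 0]])

# interior product: (i_X omega)_j = sum_i X_i omega[i, j], a row of (dx, dy)-components
iXomega = X.T * omega

# moment map mu(z) = -|z|^2 / 2 and its differential
mu = -(x**2 + y**2) / 2
dmu = sp.Matrix([[sp.diff(mu, x), sp.diff(mu, y)]])

# i_X omega = -x dx - y dy = d mu, matching the computation in the example
assert sp.simplify(iXomega - dmu) == sp.zeros(1, 2)
```

The same check goes through for any $n$ by stacking the blocks above coordinatewise.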
Let ${\mathfrak g}_p$ be the Lie algebra of the stabilizer of a point $p \in M$, let ${\mathfrak g}_p^0 = \{\xi \in {\mathfrak g}^* \mid \langle \xi,X \rangle = 0,\ \forall X \in {\mathfrak g}_p\}$ be the annihilator of ${\mathfrak g}_p$, and let ${\mathcal O} _p$ be the $G$-orbit through $p$. Since $\omega_p(X_p^{\#},v) = \langle d\mu_p(v),X \rangle$, for all $v \in T_pM$ and all $X \in {\mathfrak g}$, the differential $d\mu_p: T_pM \rightarrow {\mathfrak g}^*$ has \[ {\mathrm{ker}} \ d\mu_p = (T_p {\mathcal O} _p)^{\omega_p} \quad \mbox{ and } \quad {\mathrm{im}} \ d\mu_p = {\mathfrak g}_p^0 \ . \] Consequently, the action is locally free\footnote{The action is \textbf{locally free} at $p$ when ${\mathfrak g}_p = \{ 0 \}$, i.e., the stabilizer of $p$ is a discrete group. The action is \textbf{free} at $p$ when the stabilizer of $p$ is trivial, i.e., $G_p = \{ e \}$.} at $p$ if and only if $p$ is a regular point of $\mu$ (i.e., $d\mu_p$ is surjective), and we obtain: \begin{lemma} \label{lem:free} If $G$ acts freely on $\mu^{-1}(0)$, then $0$ is a regular value of $\mu$, the level $\mu^{-1}(0)$ is a submanifold of $M$ of codimension $\dim G$, and, for $p \in \mu^{-1}(0)$, the tangent space $T_p\mu^{-1}(0) = \ker d\mu_p$ is the symplectic orthogonal to $T_p{\mathcal O}_p$ in $T_pM$. \end{lemma} In particular, {\em orbits in $\mu^{-1}(0)$ are isotropic}. Since any tangent vector to the orbit is the value of a vector field generated by the group, we can show this directly by computing, for any $X,Y \in {\mathfrak g}$ and $p \in \mu^{-1}(0)$, the hamiltonian function for $[Y^\#,X^\#] = [Y,X]^\#$ at that point: $\omega_p(X_p^\#,Y_p^\#) = \mu^{[Y,X]}(p) = 0$. \begin{lemma} \label{lem:isotropic} Let $(V,\Omega)$ be a symplectic vector space, and $I$ an isotropic subspace. Then $\Omega$ induces a canonical symplectic structure $\Omega_{\mathrm{red}}$ on $I^{\Omega}/I$.
\end{lemma} \vspace*{-2ex} \begin{proof} Let $[u],[v]$ be the classes in $I^{\Omega}/I$ of $u,v \in I^{\Omega}$. We have $\Omega(u+i,v+j) = \Omega(u,v)$, $\forall i,j \in I$, because $\Omega(u,j) = \Omega(i,v) = \Omega(i,j) = 0$. Hence, we can define $\Omega_{\mathrm{red}} ([u],[v]) := \Omega(u,v)$. This is nondegenerate: if $u \in I^{\Omega}$ has $\Omega(u,v) = 0$, for all $v \in I^{\Omega}$, then $u \in (I^{\Omega})^{\Omega} = I$, i.e., $[u] = 0$. \end{proof} \vspace*{-1ex} \begin{proposition} \label{prop:quotient} If a compact Lie group $G$ acts freely on a manifold $M$, then $M/G$ is a manifold and the map $\pi: M \rightarrow M/G$ is a principal $G$-bundle. \end{proposition} \vspace*{-2ex} \begin{proof} We first show that, for any $p \in M$, the $G$-orbit through $p$ is a compact submanifold of $M$ diffeomorphic to $G$.\footnote{Even if the action is not free, the orbit through $p$ is a compact submanifold of $M$. In that case, the orbit of a point $p$ is diffeomorphic to the quotient $G / G_p$ of $G$ by the stabilizer of $p$.} The $G$-orbit through $p$ is the image of the smooth injective map $\mathrm{ev}_p: G \to M$, $\mathrm{ev}_p(g) = g \cdot p$. The map $\mathrm{ev}_p$ is proper because, if $A$ is a compact, hence closed, subset of $M$, then its inverse image $(\mathrm{ev}_p)^{-1} (A)$, being a closed subset of the compact Lie group $G$, is also compact. The differential $d (\mathrm{ev}_p )_e$ is injective because $d (\mathrm{ev}_p )_e (X) = 0 \Leftrightarrow X_p^{\#} = 0 \Leftrightarrow X = 0$, $\forall X \in T_e G$, as the action is free. At any other point $g \in G$, for $X \in T_g G$ we have $d (\mathrm{ev}_p )_g (X) = 0 \Leftrightarrow d (\mathrm{ev}_p \circ R_g )_e \circ (d R_{g^{-1}} )_g (X) = 0$, where $R_g : G \to G$, $h \mapsto hg$, is right multiplication by $g$. But $\mathrm{ev}_p \circ R_g = \mathrm{ev}_{g \cdot p}$ has an injective differential at $e$, and $(d R_{g^{-1}} )_g$ is an isomorphism.
It follows that $d (\mathrm{ev}_p )_g$ is always injective, so $\mathrm{ev}_p$ is an immersion. We conclude that $\mathrm{ev}_p$ is a closed embedding. We now apply the slice theorem\footnote{\textbf{Slice Theorem: }\index{theorem ! slice}\index{slice theorem}\label{thm:slice} {\em Let $G$ be a compact Lie group acting on a manifold $M$ such that $G$ acts freely at $p \in M$. Let $S$ be a transverse section to ${\mathcal O}_p$ at $p$ (this is called a \textbf{slice}). Choose a coordinate chart $x_1,\dots,x_n$ centered at $p$ such that ${\mathcal O}_p \simeq G$ is given by $x_1 = \dots = x_k = 0$ and $S$ by $x_{k+1} = \dots = x_n = 0$. Let $S_{\varepsilon} = S \cap B_{\varepsilon}$ where $B_{\varepsilon}$ is the ball of radius $\varepsilon$ centered at $0$ with respect to these coordinates. Let $\eta: G \times S \rightarrow M$, $\eta(g,s) = g\cdot s$. Then, for sufficiently small $\varepsilon$, the map $\eta: G \times S_{\varepsilon} \rightarrow M$ takes $G \times S_{\varepsilon}$ diffeomorphically onto a $G$-invariant neighborhood ${\mathcal U}$ of the $G$-orbit through $p$.} In particular, if the action of $G$ is free at $p$, then the action is free on ${\mathcal U}$, so the set of points where $G$ acts freely is open.} which is an equivariant tubular neighborhood theorem.\index{equivariant ! tubular neighborhood theorem}\index{tubular neighborhood ! equivariant} For $p \in M$, let $q = \pi(p) \in M/G$. Choose a $G$-invariant neighborhood ${\mathcal U}$ of $p$ as in the slice theorem, so that ${\mathcal U} \simeq G \times S$ where $S$ is an appropriate slice. Then $\pi({\mathcal U}) = {\mathcal U}/G =: {\mathcal V}$ is a neighborhood of $q$ in $M/G$ homeomorphic\footnote{We equip the orbit space $M/G$ with the \textbf{quotient topology}\index{quotient ! topology}, i.e., ${\mathcal V} \subseteq M/G$ is open if and only if $\pi^{-1}({\mathcal V})$ is open in $M$.} to $S$.
Such neighborhoods ${\mathcal V}$ are used as charts on $M/G$. To show that the associated transition maps are smooth, consider two $G$-invariant open sets ${\mathcal U}_1,{\mathcal U}_2$ in $M$ and corresponding slices $S_1,S_2$. Then $S_{12} = S_1 \cap {\mathcal U}_2$, $S_{21} = S_2 \cap {\mathcal U}_1$ are both slices for the $G$-action on ${\mathcal U}_1 \cap {\mathcal U}_2$. To compute the transition map $S_{12} \rightarrow S_{21}$, consider the sequence $S_{12} \stackrel{\simeq}{\longrightarrow} \{ e \} \times S_{12} \hookrightarrow G \times S_{12} \stackrel{\simeq}{\longrightarrow} {\mathcal U}_1 \cap {\mathcal U}_2$ and similarly for $S_{21}$. The composition $S_{12} \hookrightarrow {\mathcal U}_1 \cap {\mathcal U}_2 \stackrel{\simeq}{\longrightarrow} G \times S_{21} \stackrel{\mathrm{pr}}{\longrightarrow} S_{21}$ is smooth. Finally, we show that $\pi: M \rightarrow M/G$ is a principal $G$-bundle. For $p \in M$, $q = \pi(p)$, choose a $G$-invariant neighborhood ${\mathcal U}$ of $p$ of the form $\eta: G \times S \stackrel{\simeq}{\longrightarrow} {\mathcal U}$. Then ${\mathcal V} = {\mathcal U}/G \simeq S$ is the corresponding neighborhood of $q$ in $M/G$: \[ \begin{array}{rccc} M \supseteq & {\mathcal U} & \stackrel{\eta}{\simeq} \; \; G \times S \; \; \simeq & G \times {\mathcal V} \\ & \phantom{\pi} \downarrow \pi & & \downarrow \\ M/G \supseteq & {\mathcal V} & = & {\mathcal V} \end{array} \] Since the projection on the right is smooth, $\pi$ is smooth. By considering the overlap of two trivializations $\phi_1: {\mathcal U}_1 \to G \times {\mathcal V}_1$ and $\phi_2: {\mathcal U}_2 \to G \times {\mathcal V}_2$, we check that the transition map $\phi_2 \circ \phi_1^{-1} = (\sigma_{12}, \mathrm{id}) : G \times ({\mathcal V}_1 \cap {\mathcal V}_2 ) \to G \times ({\mathcal V}_1 \cap {\mathcal V}_2 )$ is smooth.
\end{proof} \noindent \textbf{Proof of Theorem~\ref{thm:reduction}.} Since $G$ acts freely on $\mu^{-1}(0)$, by Lemma~\ref{lem:free} the level $\mu^{-1}(0)$ is a submanifold. Applying Proposition~\ref{prop:quotient} to the free action of $G$ on the manifold $\mu^{-1}(0)$, we conclude the assertions~(a) and~(b). At $p \in \mu^{-1}(0)$ the tangent space to the orbit $T_p{\mathcal O}_p$ is an isotropic subspace of the symplectic vector space $(T_pM,\omega_p)$. By Lemma~\ref{lem:isotropic} there is a canonical symplectic structure on the quotient $T_p\mu^{-1}(0)/T_p{\mathcal O}_p$. The point $[p] \in M_{\mathrm{red}} = \mu^{-1}(0)/G$ has tangent space $T_{[p]} M_{\mathrm{red}} \simeq T_p\mu^{-1}(0)/T_p{\mathcal O}_p$. This gives a well-defined nondegenerate 2-form $\omega_{\mathrm{red}}$ on $M_{\mathrm{red}}$ because $\omega$ is $G$-invariant. By construction $i^*\omega = \pi^*\omega_{\mathrm{red}}$ where \[ \begin{array}{cll} \mu^{-1}(0) & \stackrel{i}{\hookrightarrow} & M \\ \downarrow \pi \\ M_{\mathrm{red}} \end{array} \] The injectivity of $\pi^*$ yields closedness: $\pi^* d \omega_{\mathrm{red}} = d \pi^* \omega_{\mathrm{red}} = d \imath ^* \omega = \imath ^* d \omega = 0$. $\Box$ \subsection{Applications and Generalizations} \label{sec:generalizations} Let $(M,\omega,G,\mu)$ be a hamiltonian $G$-space for a compact Lie group $G$. Suppose that another Lie group $H$ acts on $(M,\omega)$ in a hamiltonian way with moment map $\phi: M \rightarrow {\mathfrak h}^*$. Suppose that the $H$-action commutes with the $G$-action, that $\phi$ is $G$-invariant and that $\mu$ is $H$-invariant. Assuming that $G$ acts freely on $\mu^{-1}(0)$, let $(M_{\mathrm{red}}, \omega_{\mathrm{red}})$ be the corresponding reduced space. Since the action of $H$ preserves $\mu^{-1}(0)$ and $\omega$ and commutes with the $G$-action, the reduced space $(M_{\mathrm{red}}, \omega_{\mathrm{red}})$ inherits a symplectic action of $H$.
Since $\phi$ is preserved by the $G$-action, the restriction of this moment map to $\mu^{-1}(0)$ descends to a moment map $\phi_{\mathrm{red}}: M_{\mathrm{red}} \rightarrow {\mathfrak h}^*$ satisfying $\phi_{\mathrm{red}} \circ \pi = \phi \circ i$, where $\pi : \mu^{-1}(0) \to M_{\mathrm{red}}$ and $i : \mu^{-1}(0) \hookrightarrow M$. Therefore, $(M_{\mathrm{red}}, \omega_{\mathrm{red}}, H, \phi_{\mathrm{red}})$ is a hamiltonian $H$-space. Consider now the action of a \textbf{product group} $G = G_1 \times G_2$, where $G_1$ and $G_2$ are compact connected Lie groups.\index{reduction ! for product groups}\index{product group}\index{group ! product} We have ${\mathfrak g} = {\mathfrak g}_1 \oplus {\mathfrak g}_2$ and ${\mathfrak g}^* = {\mathfrak g}_1^* \oplus {\mathfrak g}_2^*$. Suppose that $(M,\omega,G,\psi)$ is a hamiltonian $G$-space with moment map \[ \psi = (\psi_1,\psi_2) : M \longrightarrow {\mathfrak g}_1^* \oplus {\mathfrak g}_2^*\ , \] where $\psi_i : M \rightarrow {\mathfrak g}_i^*$ for $i=1,2$. The fact that $\psi$ is equivariant implies that $\psi_1$ is invariant under $G_2$ and $\psi_2$ is invariant under $G_1$. Assume that $G_1$ acts freely on $Z_1 := \psi_1^{-1}(0)$. Let $(M_1 = Z_1/G_1,\omega_1)$ be the reduction of $(M,\omega)$ with respect to $G_1,\psi_1$. From the observation above, $(M_1,\omega_1)$ inherits a hamiltonian $G_2$-action with moment map $\mu_2 : M_1 \rightarrow {\mathfrak g}_2^*$ such that $\mu_2 \circ \pi = \psi_2 \circ i$, where $\pi : Z_1 \to M_1$ and $i : Z_1 \hookrightarrow M$. If $G$ acts freely on $\psi ^{-1} (0,0)$, then $G_2$ acts freely on $\mu_2 ^{-1} (0)$, and there is a natural symplectomorphism \[ \mu_2 ^{-1} (0) / G_2 \; \simeq \; \psi ^{-1} (0,0) / G \ . \] This technique of performing reduction with respect to one factor of a product group at a time is called \textbf{reduction in stages}\index{reduction ! in stages}. 
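As a quick illustration of reduction in stages (a hedged example, not taken from the text above): let $T^2 = S^1 \times S^1$ act coordinatewise on ${\mathbb C}^2$, reducing at level $-\tfrac12$ in each factor, as in the first example of this section.

```latex
% Illustration (assumptions: coordinatewise T^2-action on C^2, levels -1/2
% rather than 0): psi(z_1, z_2) = ( -|z_1|^2/2 , -|z_2|^2/2 ).
\begin{align*}
  Z_1 = \psi_1^{-1}(-\tfrac12) &= \{\, |z_1| = 1 \,\} \ , \qquad
  M_1 = Z_1/G_1 \simeq {\mathbb C} \ , \\
  \mu_2([z_1,z_2]) &= -\tfrac12 |z_2|^2 \ , \\
  \mu_2^{-1}(-\tfrac12)/G_2 &= \{\, |z_2| = 1 \,\}/S^1 = \{\mathrm{pt}\}
  \;\simeq\; \psi^{-1}(-\tfrac12,-\tfrac12)/T^2 \ .
\end{align*}
```

Reducing by $G_1$ first and then by $G_2$ lands on the same point as reducing by $T^2$ all at once.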
It may be extended to reduction by a normal subgroup $H \subset G$ and by the corresponding quotient group $G / H$. \begin{example}\index{reduction ! symmetry} Finding symmetries for a mechanical problem may reduce degrees of freedom by two at a time: an integral of motion $f$ for a $2n$-dimensional hamiltonian system $(M,\omega,H)$ may allow us to understand the trajectories of this system in terms of the trajectories of a $(2n-2)$-dimensional hamiltonian system $(M_{\mathrm{red}},\omega_{\mathrm{red}},H_{\mathrm{red}})$. Locally this process goes as follows. Let $({\mathcal U},x_1,\dots,x_n,\xi_1,\dots,\xi_n)$ be a Darboux chart for $M$ such that $f = \xi_n$.\footnote{To obtain such a chart, in the proof of Darboux's Theorem~\ref{thm:darboux} start with coordinates $(x'_1,\ldots ,x'_n,$ $y'_1,\ldots ,y'_n)$ such that $y'_n = f$ and $\frac{\partial}{\partial x'_n} = X_f$.} Since $\xi_n$ is an integral of motion, $0 = \{\xi_n,H\} = -\frac {\partial H}{\partial x_n}$, the trajectories of the hamiltonian vector field $X_H$ lie on a constant level $\xi_n = c$ (Proposition~\ref{prop:integral}), and $H$ does not depend on $x_n$. The \textbf{reduced space}\index{reduced ! phase space}\index{phase space} is ${\mathcal U}_{\mathrm{red}} = \{(x_1,\dots,x_{n-1},\xi_1,\dots,\xi_{n-1}) \mid \exists\, a: (x_1,\dots,x_{n-1},a,\xi_1,\dots,\xi_{n-1},c) \in {\mathcal U} \}$ and the \textbf{reduced hamiltonian}\index{reduced ! hamiltonian}\index{hamiltonian ! reduced} is $H_{\mathrm{red}}: {\mathcal U}_{\mathrm{red}} \to {\mathbb R}$, $H_{\mathrm{red}}(x_1,\dots,x_{n-1},\xi_1,\dots,\xi_{n-1}) =$ $H(x_1,\dots,x_{n-1},a,\xi_1,\dots,\xi_{n-1},c)$ for some $a$.
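A standard instance of this local picture (an illustration under the assumption of a planar central force, not part of the construction above): in a Darboux chart $(r,\theta,p_r,p_\theta)$ take $f = p_\theta$, the angular momentum.

```latex
% Hedged illustration: planar central force, with the integral of motion
% f = p_theta playing the role of xi_n above.
\[
  H = \tfrac12 \Big( p_r^2 + \frac{p_\theta^2}{r^2} \Big) + V(r) \ ,
  \qquad \frac{\partial H}{\partial \theta} = 0 \ ,
\]
% so on the level p_theta = c the reduced hamiltonian in (r, p_r) is
\[
  H_{\mathrm{red}}(r,p_r) = \tfrac12 p_r^2 + \frac{c^2}{2r^2} + V(r) \ .
\]
```

Here $\theta$ is the ignored coordinate $x_n$ and the term $\frac{c^2}{2r^2}$ is the familiar centrifugal contribution to the effective potential.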
In order to find the trajectories of the original system on the hypersurface $\xi_n = c$, we look for the trajectories $(x_1(t),\dots,x_{n-1}(t),\xi_1(t),\dots,\xi_{n-1}(t))$ of the reduced system on ${\mathcal U}_{\mathrm{red}}$, and integrate the equation $\frac {dx_n}{dt} (t) = \frac {\partial H}{\partial \xi_n}$ to obtain the original trajectories where \[ \left\{ \begin{array}{rcl} x_n(t) & = & x_n(0) + \displaystyle{\int_0^t \frac {\partial H}{\partial \xi_n} (x_1(t),\dots,x_{n-1}(t),\xi_1(t),\dots,\xi_{n-1}(t),c)\ dt} \\ \xi_n(t) & = & c \ . \end{array} \right. \] \end{example} By Sard's theorem, the singular values of a moment map $\mu : M \to {\mathfrak g}^*$ form a set of measure zero. So, perturbing if necessary, we may assume that a level of $\mu$ is regular and hence, when $G$ is compact, that any point $p$ of that level has finite stabilizer $G_p$. Let ${\mathcal O}_p$ be the orbit of $p$. By the slice theorem for the case of orbifolds, near ${\mathcal O}_p$ the orbit space of the level is modeled by $S/G_p$, where $S$ is a $G_p$-invariant disk in the level and transverse to ${\mathcal O}_p$ (a {\em slice}). Thus, the orbit space is an {\em orbifold}.\index{orbifold ! reduced space}\footnote{Let $|M|$ be a Hausdorff topological space satisfying the second axiom of countability. An \textbf{orbifold chart}\index{orbifold ! chart} on $|M|$ is a triple $({\mathcal V}, \Gamma, \varphi)$, where ${\mathcal V}$ is a connected open subset of some euclidean space ${\mathbb R}^m$, $\Gamma$ is a finite group that acts linearly on ${\mathcal V}$ so that the set of points where the action is not free has codimension at least two, and $\varphi: {\mathcal V} \to |M|$ is a $\Gamma$-invariant map inducing a homeomorphism from ${\mathcal V} / \Gamma$ onto its image ${\mathcal U} \subset |M|$. An \textbf{orbifold atlas}\index{orbifold !
atlas} ${\mathcal A}$ for $|M|$ is a collection of orbifold charts on $|M|$ such that: the collection of images ${\mathcal U}$ forms a basis of open sets in $|M|$, and the charts are compatible in the sense that, whenever two charts $({\mathcal V}_1, \Gamma_1, \varphi_1)$ and $({\mathcal V}_2, \Gamma_2, \varphi_2)$ satisfy ${\mathcal U}_1 \subseteq {\mathcal U}_2$, there exists an injective homomorphism $\lambda :\Gamma_1 \to \Gamma_2$ and a $\lambda$-equivariant open embedding $\psi : {\mathcal V}_1 \to {\mathcal V}_2$ such that $\varphi_2 \circ \psi = \varphi_1$. Two orbifold atlases are \textbf{equivalent}\index{orbifold ! equivalence} if their union is still an atlas. An $m$-dimensional \textbf{orbifold}\index{orbifold ! definition} $M$ is a Hausdorff topological space $|M|$ satisfying the second axiom of countability, plus an equivalence class of orbifold atlases on $|M|$. We do not require the action of each group $\Gamma$ to be effective. Given a point $p$ on an orbifold $M$, let $({\mathcal V}, \Gamma, \varphi)$ be an orbifold chart for a neighborhood ${\mathcal U}$ of $p$. The \textbf{orbifold structure group}\index{orbifold ! structure group} of $p$, $\Gamma_p$, is (the isomorphism class of) the stabilizer of a pre-image of $p$ under $\varphi$. Orbifolds were introduced by Satake in~\cite{sa:orbifolds}.} This implies that, when $G = {\mathbb T}^n$ is an $n$-torus, for most levels reduction goes through; however, the quotient space is not necessarily a manifold but an orbifold. Roughly speaking, orbifolds are singular manifolds where each singularity is locally modeled on ${\mathbb R}^m / \Gamma$, for some finite group $\Gamma \subset \mathrm{GL} (m;{\mathbb R})$. The differential-geometric notions of vector fields, differential forms, exterior differentiation, group actions, etc., extend naturally to orbifolds by gluing corresponding local $\Gamma$-invariant or $\Gamma$-equivariant objects. In particular, a \textbf{symplectic orbifold}\index{orbifold !
symplectic} is a pair $(M,\omega)$ where $M$ is an orbifold and $\omega$ is a closed 2-form on $M$ that is nondegenerate at every point. \begin{examples}\index{orbifold ! examples} The $S^1$-action on ${\mathbb C}^2$ given by $e^{i\theta} \cdot (z_1,z_2) = (e^{ik\theta}z_1,e^{i\ell\theta}z_2)$, for some integers $k$ and $\ell$, has moment map $\mu: {\mathbb C}^2 \to {\mathbb R}$, $(z_1,z_2) \mapsto -\frac {1}{2} (k|z_1|^2 + \ell |z_2|^2)$. Any $\xi < 0$ is a regular value and $\mu^{-1}(\xi)$ is a 3-dimensional ellipsoid. When $\ell = 1$ and $k \ge 2$, the stabilizer of $(z_1,z_2)$ is $\{1\}$ if $z_2 \ne 0$ and is ${\mathbb Z}_k = \left\{ e^{i \frac {2\pi m}{k}} \mid m = 0,1,\dots,k-1 \right\}$ if $z_2 = 0$. The reduced space $\mu^{-1}(\xi)/S^1$ is then called a \textbf{teardrop}\index{orbifold ! teardrop}\index{teardrop orbifold} orbifold or {\em conehead}\index{orbifold ! conehead}\index{conehead orbifold}; it has one \textbf{cone} (or {\em dunce cap}\index{orbifold ! dunce cap}\index{dunce cap orbifold}) singularity with cone angle $\frac {2\pi}{k}$, that is, a point with orbifold structure group ${\mathbb Z}_k$. When $k, \ell \ge 2$ are relatively prime, the stabilizer of $(z_1,0)$ with $z_1 \ne 0$ is ${\mathbb Z}_k$, that of $(0,z_2)$ with $z_2 \ne 0$ is ${\mathbb Z}_{\ell}$, and that of $(z_1,z_2)$ with $z_1, z_2 \ne 0$ is $\{1\}$. The quotient $\mu^{-1}(\xi)/S^1$ is called a \textbf{football} orbifold: it has two cone singularities, with angles $\frac {2\pi}{k}$ and $\frac {2\pi}{\ell}$. For $S^1$ acting on ${\mathbb C}^n$ by $e^{i\theta} \cdot (z_1,\dots,z_n) = (e^{ik_1\theta}z_1,\dots,e^{ik_n\theta}z_n)$ the reduced spaces are orbifolds called \textbf{weighted} (or \textbf{twisted}) \textbf{projective spaces}.\index{example !
weighted projective space}\index{weighted projective space}\index{twisted projective space} \end{examples} Let $(M,\omega)$ be a symplectic manifold where $S^1$ acts in a hamiltonian way, $\rho: S^1 \to {\rm Diff} (M)$, with moment map $\mu : M \to {\mathbb R}$. Suppose that: \begin{itemize} \item $M$ has a unique nondegenerate minimum at $q$ where $\mu (q) = 0$, and \item for $\varepsilon$ sufficiently small, $S^1$ acts freely on the level set $\mu^{-1} (\varepsilon)$. \end{itemize} Let ${\mathbb C}$ be equipped with the symplectic form $-i dz \wedge d \bar z$. Then the action of $S^1$ on the product $\psi: S^1 \to {\rm Diff} (M \times {\mathbb C})$, $\psi_t (p,z) = (\rho_t (p) , t \cdot z)$, is hamiltonian with moment map \[ \phi : M \times {\mathbb C} \longrightarrow {\mathbb R} \ , \qquad \phi (p,z) = \mu (p) - |z|^2 \ . \] Observe that $S^1$ acts freely on the $\varepsilon$-level of $\phi$ for $\varepsilon$ small enough: \[ \begin{array}{rcl} \phi^{-1} (\varepsilon) & = & \{ (p,z) \in M \times {\mathbb C} \mid \mu (p) - |z|^2 = \varepsilon \} \\ & = & \{ (p,0) \in M \times {\mathbb C} \mid \mu (p) = \varepsilon \} \\ & & \quad \cup \quad \{ (p,z) \in M \times {\mathbb C} \mid |z|^2 = \mu (p) -\varepsilon >0\} \ . \end{array} \] The reduced space is hence \[ \phi^{-1} (\varepsilon) / S^1 \simeq \mu^{-1} (\varepsilon) / S^1 \cup \{ p \in M \mid \mu (p) > \varepsilon \} \ . \] The open submanifold of $M$ given by $\{ p \in M \mid \mu (p) > \varepsilon \}$ embeds as an open dense submanifold into $\phi^{-1} (\varepsilon) / S^1$. The reduced space $\phi^{-1} (\varepsilon) / S^1$ is the $\varepsilon$-blow-up of $M$ at $q$ (Section~\ref{sec:actions}). This global description of blow-up for hamiltonian $S^1$-spaces is due to Lerman~\cite{le:cuts}, as a particular instance of his {\em cutting} technique. \textbf{Symplectic cutting}\index{symplectic !
cutting} is the application of symplectic reduction to the product of a hamiltonian $S^1$-space with the standard ${\mathbb C}$ as above, in a way that the reduced space for the original hamiltonian $S^1$-space embeds symplectically as a codimension 2 submanifold in a symplectic manifold. As it is a local construction, the cutting operation may be more generally performed at a local minimum (or maximum) of the moment map $\mu$. There is a remaining $S^1$-action on the cut space $M_{\mathrm{cut}}^{\geq \varepsilon} := \phi^{-1} (\varepsilon) / S^1$ induced by \[ \tau: S^1 \longrightarrow {\rm Diff} (M \times {\mathbb C}) \ , \qquad \tau_t (p,z) = (\rho_t (p) , z) \ . \] In fact, $\tau$ is a hamiltonian $S^1$-action on $M \times {\mathbb C}$ that commutes with $\psi$, thus descends to an action $\widetilde \tau : S^1 \to {\rm Diff} (M_{\mathrm{cut}}^{\geq \varepsilon})$, which is also hamiltonian. Loosely speaking, the cutting technique provides a hamiltonian way to close the open manifold $\{ p \in M \mid \mu (p) > \varepsilon \}$, by using the reduced space at level $\varepsilon$, $\mu^{-1} (\varepsilon) / S^1$. We may similarly close $\{ p \in M \mid \mu (p) < \varepsilon \}$. The resulting hamiltonian $S^1$-spaces are called \textbf{cut spaces}\index{cut spaces}, and denoted $M_{\mathrm{cut}}^{\geq \varepsilon}$ and $M_{\mathrm{cut}}^{\leq \varepsilon}$. If another group $G$ acts on $M$ in a hamiltonian way that commutes with the $S^1$-action, then the cut spaces are also hamiltonian $G$-spaces. \subsection{Moment Map in Gauge Theory} \index{moment map ! in gauge theory}\index{gauge ! theory} Let $G$ be a Lie group and $P$ a principal $G$-bundle over $B$.\footnote{Let $G$ be a Lie group and $B$ a manifold. A \textbf{principal $G$-bundle over $B$} is a fibration $\pi : P \to B$ (Section~\ref{sec:constructions}) with a free action of $G$ (the \textbf{structure group}\index{group !
structure}) on the total space $P$, such that the base $B$ is the orbit space, the map $\pi$ is the point-orbit projection and the local trivializations are of the form $\varphi_{\mathcal U} = (\pi,s_{\mathcal U}) : \pi ^{-1} ({\mathcal U}) \to {\mathcal U} \times G$ with $s_{\mathcal U} (g \cdot p) = g \cdot s_{\mathcal U} (p)$ for all $g \in G$ and all $p \in \pi ^{-1} ({\mathcal U})$. A principal $G$-bundle is represented by a diagram \[ \begin{array}{cll} G & \hookrightarrow & P \\ & & \downarrow \pi \\ & & B \end{array} \] For instance, the \textbf{Hopf fibration}\index{Hopf ! fibration} is a principal $S^1$-bundle over $S^2$ ($= {\mathbb C} {\mathbb P}^1$) with total space $S^3$ regarded as unit vectors in ${\mathbb C} ^2$ where circle elements act by complex multiplication.} If $A$ is a connection (form)\footnote{An action $\psi: G \to \mathrm{Diff}(P)$ induces an infinitesimal action\index{action ! infinitesimal}\index{infinitesimal action} $d\psi: {\mathfrak g} \to \chi(P)$ mapping $X \in {\mathfrak g}$ to the vector field $X^{\#}$ generated by the one-parameter group $\{\exp tX \mid t \in {\mathbb R}\} \subseteq G$. Fix a basis $X_1, \ldots, X_k$ of ${\mathfrak g}$. Let $P$ be a principal $G$-bundle over $B$. Since the $G$-action is free, the vector fields $X_1^{\#}, \ldots, X_k^{\#}$ are linearly independent at each $p \in P$. The \textbf{vertical bundle} $V$ is the rank $k$ subbundle of $TP$ generated by $X_1^{\#}, \ldots, X_k^{\#}$. Alternatively, $V$ is the set of vectors tangent to $P$ that lie in the kernel of the derivative of the bundle projection $\pi$, so $V$ is indeed independent of the choice of basis for ${\mathfrak g}$. An \textbf{(Ehresmann) connection}\index{Ehresmann ! connection}\index{connection ! on a principal bundle}\index{principal bundle !
connection} on $P$ is a choice of a splitting $TP = V \oplus H$, where $H$ (called the \textbf{horizontal bundle}) is a $G$-invariant subbundle of $TP$ complementary to the vertical bundle $V$. A \textbf{connection form}\index{connection ! form}\index{form ! connection} on $P$ is a Lie-algebra-valued 1-form $A = \sum_{i=1}^k A_i \otimes X_i \in \Omega^1 (P) \otimes {\mathfrak g}$ such that $A$ is $G$-invariant, with respect to the product action of $G$ on $\Omega^1 (P)$ (induced by the action on $P$) and on ${\mathfrak g}$ (the adjoint action), and $A$ is vertical, in the sense that $\imath_{X^{\#}} A = X$ for any $X \in {\mathfrak g}$. A connection $TP = V \oplus H$ determines a connection (form) $A$ and vice-versa by the formula $H = \ker A = \{ v \in TP \mid \imath_v A = 0 \}$. Given a connection on $P$, the splitting $TP = V \oplus H$ induces splittings for bundles $T^*P = V^* \oplus H^*$, $\wedge ^2 T^*P = (\wedge ^2 V^*) \oplus (V^* \wedge H^*) \oplus (\wedge ^2 H^*)$, etc., and for their sections: $\Omega^1 (P) = \Omega^1_{\mathrm{vert}} \oplus \Omega^1_{\mathrm{horiz}}$, $\Omega^2 (P) = \Omega^2_{\mathrm{vert}} \oplus \Omega^2_{\mathrm{mix}} \oplus \Omega^2_{\mathrm{horiz}}$, etc. The corresponding connection form $A$ is in $\Omega^1_{\mathrm{vert}} \otimes {\mathfrak g}$.} on $P$, and if $a \in \Omega^1_{\mathrm{horiz}} \otimes {\mathfrak g}$ is $G$-invariant for the product action, then $A+a$ is also a connection on $P$. Reciprocally, any two connections on $P$ differ by an $a \in (\Omega^1_{\mathrm{horiz}} \otimes {\mathfrak g})^G$. We conclude that the \textbf{set ${\mathcal A}$ of all connections} on the principal $G$-bundle $P$ is an affine space\index{space ! affine} modeled on the linear space ${\mathfrak a} = (\Omega^1_{\mathrm{horiz}} \otimes {\mathfrak g})^G$. \index{connection ! space}\index{symplectic ! structure on the space of connections}\index{example ! of infinite-dimensional symplectic manifold}\index{space ! 
of connections} Now let $P$ be a principal $G$-bundle over a compact Riemann surface.\index{Riemann ! surface} Suppose that the group $G$ is compact or semisimple\index{group ! semisimple}\index{semisimple}. Atiyah and Bott~\cite{at-bo:surfaces}\index{Atiyah ! moduli space}\index{Bott ! moduli space} noticed that the corresponding space ${\mathcal A}$ of all connections may be treated as an {\varepsilonm infinite-dimensional symplectic manifold}.\index{manifold ! infinite-dimensional} This requires choosing a $G$-invariant inner product $\langle \cdot,\cdot \rangle$ on ${\mathfrak g}$, which always exists, either by averaging any inner product when $G$ is compact, or by using the {\varepsilonm Killing form}\index{form ! Killing}\index{Killing form} on semisimple groups. Since ${\mathcal A}$ is an affine space, its tangent space at any point $A$ is identified with the model linear space ${\mathfrak a}$. With respect to a basis $X_1, \ldots, X_k$ for the Lie algebra ${\mathfrak g}$, elements $a,b \in {\mathfrak a}$ are written \[ a = \sum a_i \otimes X_i \quad \mbox{ and } \quad b = \sum b_i \otimes X_i \ . \] If we wedge $a$ and $b$, and then integrate over $B$, we obtain a real number: \[ \begin{array}{rrclcl} \omega : & {\mathfrak a} \times {\mathfrak a} & \longrightarrow & \left( \Omega ^2_{\mathrm{horiz}} (P) \right) ^G \simeq \Omega ^2 (B) & \longrightarrow & {\mathbb R} \\ & (a,b) & \longmapsto & \sum \limits_{i,j} a_i \wedge b_j \langle X_i , X_j \rangle & \longmapsto & \int \limits_B \sum \limits_{i,j} a_i \wedge b_j \langle X_i , X_j \rangle \ . \varepsilonnd{array} \] We used that the pullback $\pi^* : \Omega ^2 (B) \to \Omega ^2 (P)$ is an isomorphism onto its image $\left( \Omega ^2_{\mathrm{horiz}} (P) \right) ^G$. When $\omega(a,b) =0$ for all $b \in {\mathfrak a}$, then $a$ must be zero. The map $\omega$ is nondegenerate, skew-symmetric, bilinear and constant in the sense that it does not depend on the base point $A$. 
Therefore, it has the right to be called a symplectic form on ${\mathcal A}$, so the pair $({\mathcal A}, \omega)$ is an \textbf{infinite-dimensional symplectic manifold}. A diffeomorphism $f:P \to P$ commuting with the $G$-action determines a diffeomorphism $f_{\mathrm{basic}} : B \to B$ by projection. Such a diffeomorphism $f$ is called a \textbf{gauge transformation}\index{gauge ! transformation} if the induced $f_{\mathrm{basic}}$ is the identity. The \textbf{gauge group}\index{group ! gauge}\index{gauge ! group} of $P$ is the group ${\mathcal G}$ of all gauge transformations of $P$. The derivative of an $f \in {\mathcal G}$ takes an {\varepsilonm Ehresmann connection} $TP = V \oplus H$ to another connection $TP = V \oplus H_f$, and thus induces an action of ${\mathcal G}$ in the space ${\mathcal A}$ of all connections. Atiyah and Bott~\cite{at-bo:surfaces} noticed that the action of ${\mathcal G}$\index{action ! gauge group}\index{principal bundle ! gauge group} on $({\mathcal A} , \omega)$ is hamiltonian, where the moment map (appropriately interpreted) is \[ \begin{array}{rrcl} \mu: & {\mathcal A} & \longrightarrow & \left( \Omega ^2 (P) \otimes {\mathfrak g} \right) ^G \\ & A & \longmapsto & \curv \ A \ , \varepsilonnd{array} \] i.e., the moment map {\varepsilonm is} the curvature.\footnote{The exterior derivative of a connection $A$ decomposes into three components, \[ dA = (dA)_{\mathrm{vert}} + (dA)_{\mathrm{mix}} + (dA)_{\mathrm{horiz}} \in \left( \Omega^2_{\mathrm{vert}} \oplus \Omega^2_{\mathrm{mix}} \oplus \Omega^2_{\mathrm{horiz}} \right) \otimes {\mathfrak g} \] satisfying $(dA)_{\mathrm{mix}} = 0$ and $(dA)_{\mathrm{vert}} (X,Y) = [X,Y]$, i.e., $(dA)_{\mathrm{vert}} = \frac12 \sum_{i,\varepsilonll,m} c_{\varepsilonll m}^i A_\varepsilonll \wedge A_m \otimes X_i$, where the $c_{\varepsilonll m}^i$'s are the \textbf{structure constants} of the Lie algebra with respect to the chosen basis, and defined by $[X_\varepsilonll, X_m] = 
\sum_{i} c_{\ell m}^i X_i$. So the relevance of $dA$ may come only from its horizontal component, called the \textbf{curvature form}\index{curvature form}\index{form ! curvature} of the connection $A$, and denoted $\curv \ A = (dA)_{\mathrm{horiz}} \in \Omega^2_{\mathrm{horiz}} \otimes {\mathfrak g}$. A connection is called \textbf{flat} if its curvature is zero.} The reduced space ${\mathcal M} = \mu ^{-1} (0) / {\mathcal G}$ is the space of {\em flat connections} modulo gauge equivalence, known as the \textbf{moduli space of flat connections},\index{connection ! flat}\index{flat connection}\index{connection ! moduli space}\index{moduli space}\index{space ! moduli} which is a finite-dimensional symplectic orbifold. \begin{example} We describe the Atiyah-Bott construction for the case of a circle bundle\index{circle bundle} \[ \begin{array}{cll} S^1 & \hookrightarrow & P \\ & & \downarrow \pi \\ & & B \end{array} \] Let $v$ be the generator of the $S^1$-action on $P$, corresponding to the basis $1$ of ${\mathfrak g} \simeq {\mathbb R}$. A connection form on $P$ is an ordinary 1-form $A \in \Omega ^1 (P)$ such that ${\mathcal L} _v A = 0$ and $\imath_v A = 1$. If we fix one particular connection $A_0$, then any other connection is of the form $A = A_0 + a$ for some $a \in {\mathfrak a} = \left( \Omega^1_{\mathrm{horiz}} (P) \right)^G = \Omega^1 (B)$. The symplectic form on ${\mathfrak a} = \Omega^1 (B)$ is simply \[ \begin{array}{rrcccl} \omega : & {\mathfrak a} \times {\mathfrak a} & \longrightarrow & \Omega ^2 (B) & \longrightarrow & {\mathbb R} \\ & (a,b) & \longmapsto & a \wedge b & \longmapsto & \int_B a \wedge b \ . \end{array} \] The gauge group is ${\mathcal G} = \mathrm{Maps}(B,S^1)$, because a gauge transformation is multiplication by some element of $S^1$ over each point in $B$, encoded in a map $h : B \to S^1$.
The action $\phi : {\mathcal G} \to \mathrm{Diff} (P)$ takes $h \in {\mathcal G}$ to the diffeomorphism \[ \begin{array}{rrcl} \phi_h : & p & \longmapsto & h(\pi(p)) \cdot p \ . \end{array} \] The Lie algebra of ${\mathcal G}$ is $\mathrm{Lie} \ {\mathcal G} = \mathrm{Maps}(B,{\mathbb R}) = C^\infty (B)$ with dual $\left( \mathrm{Lie} \ {\mathcal G} \right) ^* = \Omega ^2 (B)$, where the (smooth) duality is provided by integration $C^\infty (B) \times \Omega ^2 (B) \to {\mathbb R}$, $(h,\beta) \mapsto \int_B h \beta$. The gauge group acts on the space of all connections by \[ \begin{array}{rrcl} \psi : & {\mathcal G} & \longrightarrow & \mathrm{Diff} ({\mathcal A}) \\ & (h: x \mapsto e^{i\theta(x)}) & \longmapsto & ( \psi_h: A \mapsto A - \pi^* d\theta ) \end{array} \] (In the case where $P = S^1 \times B$ is a trivial bundle, every connection can be written $A = dt + \beta$, with $\beta \in \Omega ^1 (B)$. A gauge transformation $h \in {\mathcal G}$ acts on $P$ by $\phi_h : (t,x) \mapsto (t + \theta (x) , x)$ and on ${\mathcal A}$ by $A \mapsto \phi^*_{h^{-1}} (A)$.) The infinitesimal action is \[ \begin{array}{rrcl} d\psi: & \mathrm{Lie} \ {\mathcal G} & \longrightarrow & \chi({\mathcal A}) \\ & X & \longmapsto & X^{\#} = \mbox{ vector field described by } ( A \mapsto A -dX ) \end{array} \] so that $X^{\#} = -dX$. It remains to check that \[ \begin{array}{rrcl} \mu: & {\mathcal A} & \longrightarrow & \left( \mathrm{Lie} \ {\mathcal G} \right) ^* = \Omega ^2 (B) \\ & A & \longmapsto & \curv \ A \end{array} \] is indeed a moment map for the action of the gauge group on ${\mathcal A}$. Since in this case $\curv \ A = dA \in \left( \Omega_{\mathrm{horiz}} ^2 (P) \right) ^G = \Omega ^2 (B)$, the action of ${\mathcal G}$ on $\Omega ^2 (B)$ is trivial and $\mu$ is ${\mathcal G}$-invariant, the equivariance condition is satisfied. Take any $X \in \mathrm{Lie} \ {\mathcal G} = C^\infty (B)$.
Since the map $\mu ^X: A \mapsto \langle X , dA \rangle = \int_B X \cdot dA$ is linear in $A$, its differential is \[ \begin{array}{rrcl} d\mu ^X: & {\mathfrak a} & \longrightarrow & {\mathbb R} \\ & a & \longmapsto & \int_B X da \ . \end{array} \] By definition of $\omega$ and the Stokes theorem,\index{Stokes theorem}\index{theorem ! Stokes} we have \[ \omega (X^{\#} , a) = \displaystyle{\int_B} X^{\#} \wedge a = - \displaystyle{\int_B} dX \wedge a = \displaystyle{\int_B} X \cdot da = d \mu ^X (a) \ , \qquad \forall a \in \Omega ^1 (B) \ , \] which proves that $\mu$ is indeed a moment map. \end{example} The function $||\mu||^2 : {\mathcal A} \to {\mathbb R}$ giving the square of the $L^2$ norm of the curvature is the \textbf{Yang-Mills functional}, whose Euler-Lagrange equations are the {\em Yang-Mills equations}. Atiyah and Bott~\cite{at-bo:surfaces} studied the topology of ${\mathcal A}$ by regarding $||\mu||^2$ as an equivariant Morse function. In general, it is a good idea to apply Morse theory to the norm square of a moment map~\cite{ki:quotients}. \ssubsection{Symplectic Toric Manifolds} \label{sec:stm} \index{symplectic ! toric manifolds}\index{toric manifolds} Toric manifolds are smooth {\em toric varieties}.\footnote{Toric varieties were introduced by Demazure in~\cite{de:subgroups}. There are many nice surveys of the theory of toric varieties in algebraic geometry; see, for instance,~\cite{da:toric,fu:toric,ke-kn-mu-sa:toroidal,od:toric}. Toric geometry has recently become an important tool in physics in connection with mirror symmetry~\cite{co:recent}.} When studying the symplectic features of these spaces, we refer to them as {\em symplectic toric manifolds}. Relations between the algebraic and symplectic viewpoints on toric manifolds are discussed in~\cite{ca:toric}.
\begin{definition} A \textbf{symplectic toric manifold} is a compact connected symplectic manifold $(M,\omega)$ equipped with an effective hamiltonian action of a torus ${\mathbb T}$ of dimension equal to half the dimension of the manifold, $\dim {\mathbb T} = \frac {1}{2} \dim M$, and with a choice of a corresponding moment map $\mu$. Two symplectic toric manifolds, $(M_i,\omega_i,{\mathbb T}_i,\mu_i)$, $i=1,2$, are \textbf{equivalent} if there exists an isomorphism $\lambda : {\mathbb T}_1 \to {\mathbb T}_2$ and a $\lambda$-equivariant symplectomorphism $\varphi : M_1 \to M_2$ such that $\mu_1 = \mu_2 \circ \varphi$. \end{definition} \vspace*{-1ex} \begin{examples} \begin{enumerate} \item The circle $S^1$ acts on the 2-sphere $(S^2,\omega_{\mathrm{standard}} = d\theta \wedge dh)$ by rotations, $e^{i \nu} \cdot (\theta, h) = (\theta + \nu ,h)$, with moment map $\mu = h$ equal to the height function and moment polytope $[-1,1]$. \begin{picture}(200,100)(-40,0) \qbezier[50](180,10)(180,50)(180,90) \put(95,50){\vector(1,0){50}} \put(105,57){$\mu=h$} \put(185,28){$-1$} \put(185,68){$\phantom{-}1$} \thicklines \put(180,30){\line(0,1){40}} \put(50,50){\circle{50}} \qbezier(30,50)(45,45)(70,50) \qbezier[25](30,50)(55,55)(70,50) \put(180,30){\circle*{5}} \put(180,70){\circle*{5}} \put(50,30){\circle*{5}} \put(50,70){\circle*{5}} \end{picture} Analogously, $S^1$ acts on the Riemann sphere ${\mathbb C} {\mathbb P}^1$ with the Fubini-Study form $\omega_{\mathrm{FS}} = \frac {1}{4} \omega_{\mathrm{standard}}$, by $e ^{i\theta} \cdot [z_0,z_1] = [z_0,e ^{i\theta}z_1]$. This action is hamiltonian with moment map $\mu[z_0,z_1] = -\frac {1}{2} \cdot \frac {|z_1|^2}{|z_0|^2 + |z_1|^2}$, and moment polytope $\left[ -\frac {1}{2} ,0\right]$.
\item For the ${\mathbb T}^n$-action on the product of $n$ Riemann spheres ${\mathbb C} {\mathbb P}^1 \times \ldots \times {\mathbb C} {\mathbb P}^1$ by \[ (e ^{i\theta_1}, \ldots ,e ^{i\theta_n}) \cdot ([z_1,w_1],\ldots , [z_n,w_n]) = ([z_1,e ^{i\theta_1}w_1],\ldots , [z_n,e ^{i\theta_n}w_n]) \ , \] the moment polytope is an $n$-dimensional cube. \item Let $({\mathbb C} {\mathbb P}^2, \omega_{\mathrm{FS}})$ be 2-(complex-)dimensional complex projective space equipped with the Fubini-Study form defined in Section~\ref{sec:kahler}. The ${\mathbb T}^2$-action on ${\mathbb C} {\mathbb P}^2$ by $(e ^{i\theta_1},e ^{i\theta_2}) \cdot [z_0,z_1,z_2] = [z_0,e ^{i\theta_1}z_1,e ^{i\theta_2}z_2]$ has moment map \[ \mu[z_0,z_1,z_2] = -\frac {1}{2} \left( \frac {|z_1|^2}{|z_0|^2+|z_1|^2+|z_2|^2} , \frac {|z_2|^2}{|z_0|^2+|z_1|^2+|z_2|^2} \right) \ . \] The image is the isosceles triangle with vertices $(0,0)$, $( -\frac {1}{2} ,0)$ and $(0, -\frac {1}{2})$. \item For the ${\mathbb T}^n$-action on $({\mathbb C} {\mathbb P}^n,\omega_{\mathrm{FS}})$ by \[ (e ^{i\theta_1}, \ldots ,e ^{i\theta_n}) \cdot [z_0,z_1,\ldots,z_n] = [z_0,e ^{i\theta_1}z_1, \ldots , e ^{i\theta_n}z_n] \] the moment polytope is an $n$-dimensional simplex. \end{enumerate} \end{examples} Since the coordinates of the moment map are commuting integrals of motion, a symplectic toric manifold gives rise to a completely integrable system\index{integrable ! system}. By Proposition~\ref{prop:dimension}, symplectic toric manifolds are optimal hamiltonian torus-spaces. By Theorem~\ref{thm:convexity}, they have an associated polytope. It turns out that the moment polytope contains enough information to classify all symplectic toric manifolds. We now define the class of polytopes that arise in the classification.
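Before turning to the classification, the ${\mathbb C}{\mathbb P}^2$ moment map from the examples above is easy to sanity-check numerically. The sketch below (the helper name \texttt{moment\_cp2} is ours, not from the text) verifies that the three fixed points go to the vertices $(0,0)$, $(-\frac12,0)$, $(0,-\frac12)$ and that every other point lands in the triangle they span.

```python
import random

def moment_cp2(z):
    # mu[z0:z1:z2] = -(1/2) (|z1|^2, |z2|^2) / (|z0|^2 + |z1|^2 + |z2|^2)
    n = sum(abs(w) ** 2 for w in z)
    return (-abs(z[1]) ** 2 / (2 * n), -abs(z[2]) ** 2 / (2 * n))

# the three fixed points map to the vertices of the moment triangle
assert moment_cp2((1, 0, 0)) == (0.0, 0.0)
assert moment_cp2((0, 1, 0)) == (-0.5, 0.0)
assert moment_cp2((0, 0, 1)) == (0.0, -0.5)

# generic points land in the triangle: x <= 0, y <= 0, x + y >= -1/2
random.seed(0)
for _ in range(1000):
    z = tuple(complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3))
    x, y = moment_cp2(z)
    assert x <= 0 and y <= 0 and x + y >= -0.5
```

Note that the map is invariant under rescaling of the homogeneous coordinates, as it must be to descend to ${\mathbb C}{\mathbb P}^2$.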
For a symplectic toric manifold the weights $\lambda^{(1)}, \ldots, \lambda^{(n)}$ in Theorem~\ref{thm:moment_darboux} form a ${\mathbb Z}$-basis of ${\mathbb Z}^n$, hence the moment polytope is a {\em Delzant polytope}: \begin{definition} A \textbf{Delzant polytope}\index{polytope ! Delzant}\index{Delzant ! polytope} in ${\mathbb R}^n$ is a polytope satisfying: \begin{itemize} \item \textbf{simplicity}\index{simple polytope}\index{polytope ! simple}, i.e., there are $n$ edges meeting at each vertex; \item \textbf{rationality}\index{rational polytope}\index{polytope ! rational}, i.e., the edges meeting at the vertex $p$ are rational in the sense that each edge is of the form $p + tu_i$, $t \ge 0$, where $u_i \in {\mathbb Z}^n$; \item \textbf{smoothness}\index{smooth polytope}\index{polytope ! smooth}, i.e., for each vertex, the corresponding $u_1,\dots,u_n$ can be chosen to be a ${\mathbb Z}$-basis of ${\mathbb Z}^n$. \end{itemize} \end{definition} In ${\mathbb R}^2$ the simplicity condition is always satisfied (by nondegenerate polytopes). In ${\mathbb R}^3$, for instance, a square pyramid fails the simplicity condition. \begin{examples} The pictures below represent Delzant polytopes in ${\mathbb R}^2$.\index{example ! of Delzant polytope}\index{Delzant ! example of Delzant polytope}\index{polytope !
example of Delzant polytope} \begin{picture}(250,60)(5,-20) \thicklines \put(0,0){\line(1,0){30}} \put(0,0){\line(0,1){30}} \put(0,30){\line(1,-1){30}} \put(50,0){\line(1,0){40}} \put(50,0){\line(0,1){30}} \put(90,30){\line(-1,0){40}} \put(90,30){\line(0,-1){30}} \put(110,0){\line(1,0){70}} \put(110,0){\line(0,1){30}} \put(150,30){\line(1,-1){30}} \put(110,30){\line(1,0){40}} \put(200,0){\line(1,0){80}} \put(200,0){\line(0,1){30}} \put(200,30){\line(1,0){20}} \put(220,30){\line(2,-1){60}} \put(310,0){\line(1,0){20}} \put(310,0){\line(-1,1){10}} \put(300,30){\line(1,0){10}} \put(300,10){\line(0,1){20}} \put(330,10){\line(-1,1){20}} \put(330,0){\line(0,1){10}} \end{picture} \end{examples} The following theorem classifies (equivalence classes of) symplectic toric manifolds in terms of the combinatorial data encoded by a Delzant polytope. \begin{theorem}\index{Delzant ! theorem}\index{theorem ! Delzant} \textbf{(Delzant~\cite{de:hamiltoniens})} $\;$ Toric manifolds are classified by Delzant polytopes, and the bijective correspondence is given by the moment map: \[ \begin{array}{rcl} \{\mbox{toric manifolds}\} & \longleftrightarrow &\{\mbox{Delzant polytopes}\} \\ (M^{2n},\omega,{\mathbb T}^n,\mu) &\longmapsto &\mu(M) \ . \end{array} \] \end{theorem} Delzant's construction (Section~\ref{sec:construction}) shows that for a toric manifold the moment map takes the fixed points bijectively to the vertices of the moment polytope and takes points with a $k$-dimensional stabilizer to the codimension $k$ faces of the polytope. The moment polytope is exactly the orbit space, i.e., the preimage under $\mu$ of each point in the polytope is exactly one orbit. For instance, consider $(S^2,\omega=d\theta \wedge dh, S^1, \mu = h)$, where $S^1$ acts by rotation. The image of $\mu$ is the line segment $I = [-1,1]$. The product $S^1 \times I$ is an open-ended cylinder. We can recover the 2-sphere by collapsing each end of the cylinder to a point.
Similarly, we can build ${\mathbb C} {\mathbb P}^2$ from ${\mathbb T}^2 \times \Delta$ where $\Delta$ is a right isosceles triangle, and so on. \begin{examples} \begin{enumerate} \item By a linear transformation in $\mathrm{SL} (2;{\mathbb Z})$, we can make one of the angles in a Delzant triangle into a right angle. Out of the rectangular triangles, only the isosceles one satisfies the smoothness condition. Therefore, up to translation, change of scale and the action of $\mathrm{SL} (2;{\mathbb Z})$, there is just one 2-dimensional Delzant polytope\index{polytope ! Delzant}\index{Delzant ! polytope} with three vertices, namely an {\em isosceles triangle}. We conclude that the projective space ${\mathbb C} {\mathbb P}^2$ is the only 4-dimensional toric manifold with three fixed points, up to choices of a constant in the moment map, of a multiple of $\omega_{_{\mathrm{FS}}}$ and of a lattice basis in the Lie algebra of ${\mathbb T}^2$. \item Up to translation, change of scale and the action of $\mathrm{SL} (n;{\mathbb Z})$, the {\em standard $n$-simplex} $\Delta$ in ${\mathbb R}^n$ (spanned by the origin and the standard basis vectors $(1,0,\ldots,0),\ldots,(0,\ldots,0,1)$) is the only $n$-dimensional Delzant polytope\index{polytope ! Delzant}\index{Delzant ! polytope} with $n+1$ vertices. Hence, $M_\Delta = {\mathbb C} {\mathbb P}^n$ is the only $2n$-dimensional toric manifold with $n+1$ fixed points, up to choices of a constant in the moment map, of a multiple of $\omega_{_{\mathrm{FS}}}$ and of a lattice basis in the Lie algebra of ${\mathbb T}^n$. \item A transformation in $\mathrm{SL} (2;{\mathbb Z})$ makes one of the angles in a Delzant quadrilateral into a right angle. Automatically an adjacent angle also becomes $90^{\circ}$. Smoothness imposes that the slope of the skew side be integral. Thus, up to translation, change of scale and $\mathrm{SL} (2;{\mathbb Z})$-action, the 2-dimensional Delzant polytopes\index{polytope !
Delzant}\index{Delzant ! polytope} with four vertices are trapezoids with vertices $(0,0)$, $(0,1)$, $(\ell,1)$ and $(\ell +n,0)$, for $n$ a nonnegative integer and $\ell > 0$. Under Delzant's construction (that is, under symplectic reduction of ${\mathbb C}^4$ with respect to an action of $(S^1)^2$), these correspond to the so-called \textbf{Hirzebruch surfaces}\index{example ! Hirzebruch surfaces}\index{Hirzebruch surface} -- the only 4-dimensional symplectic toric manifolds\index{toric manifold ! four-dimensional@4-dimensional} that have four fixed points up to equivalence as before. Topologically, they are $S^2$-bundles over $S^2$, either the trivial bundle $S^2 \times S^2$ when $n$ is even or the nontrivial bundle (given by the blow-up of ${\mathbb C} {\mathbb P}^2$ at a point; see Section~\ref{sec:blow_up}) when $n$ is odd. \end{enumerate} \end{examples} Let $\Delta$ be an $n$-dimensional Delzant polytope, and let $(M_\Delta,\omega_\Delta, {\mathbb T}^n, \mu_\Delta)$ be the associated symplectic toric manifold. The $\varepsilon$-blow-up of $(M_\Delta,\omega_\Delta)$ at a fixed point of the ${\mathbb T}^n$-action is a new symplectic toric manifold (Sections~\ref{sec:blow_up} and~\ref{sec:actions}). Let $q$ be a fixed point of the ${\mathbb T}^n$-action on $(M_\Delta,\omega_\Delta)$, and let $p = \mu_\Delta (q)$ be the corresponding vertex of $\Delta$. Let $u_1, \ldots ,u_n$ be the primitive (inward-pointing) edge vectors at $p$, so that the rays $p + t u_i$, $t \geq 0$, form the edges of $\Delta$ at $p$. \begin{proposition} \index{blow-up ! of toric manifold} The $\varepsilon$-blow-up of $(M_\Delta,\omega_\Delta)$ at a fixed point $q$ is the symplectic toric manifold associated to the polytope $\Delta_\varepsilon$ obtained from $\Delta$ by replacing the vertex $p$ by the $n$ vertices $p + \varepsilon u_i$, $i=1,\ldots , n$.
\end{proposition} In other words, the moment polytope for the blow-up of $(M_\Delta,\omega_\Delta)$ at $q$ is obtained from $\Delta$ by chopping off the corner corresponding to $q$, thus replacing the vertex corresponding to $q$ by exactly $n$ new vertices. The truncated polytope is Delzant. We may view the $\varepsilon$-blow-up of $(M_\Delta, \omega_\Delta)$ as being obtained from $M_\Delta$ by smoothly replacing $q$ by $({\mathbb C} {\mathbb P}^{n-1}, \varepsilon \omega_{_{\mathrm{FS}}})$ (whose moment polytope is an $(n-1)$-dimensional simplex). \begin{picture}(200,70)(-95,-10) \put(43,43){$p$} \qbezier[10](50,20)(50,30)(50,40) \qbezier[15](70,20)(60,30)(50,40) \qbezier[15](75,25)(65,31)(50,40) \thicklines \put(50,20){\line(5,1){25}} \put(50,20){\line(1,0){20}} \qbezier[15](70,20)(72,22)(75,25) \put(50,0){\line(0,1){20}} \put(90,0){\line(-1,1){20}} \put(75,25){\line(5,-3){20}} \end{picture} \begin{example} The moment polytope for the standard ${\mathbb T}^2$-action on $({\mathbb C} {\mathbb P}^2, \omega_{_{\mathrm{FS}}})$ is a right isosceles triangle $\Delta$. If we blow up ${\mathbb C} {\mathbb P}^2$ at $[0:0:1]$ we obtain a symplectic toric manifold associated to the trapezoid below: a {\em Hirzebruch surface}. \begin{picture}(200,60)(-30,-5) \qbezier[7](165,30)(165,35)(165,40) \qbezier[10](175,30)(170,35)(165,40) \thicklines \put(50,0){\line(1,0){40}} \put(50,0){\line(0,1){40}} \put(90,0){\line(-1,1){40}} \put(145,25){\vector(-1,0){45}} \put(125,32){$\beta$} \put(165,0){\line(1,0){40}} \put(165,30){\line(1,0){10}} \put(165,0){\line(0,1){30}} \put(205,0){\line(-1,1){30}} \end{picture} \end{example} Let $(M, \omega, {\mathbb T}^n , \mu)$ be a $2n$-dimensional symplectic toric manifold. Choose a suitably generic direction in ${\mathbb R}^n$ by picking a vector $X$ whose components are independent over ${\mathbb Q}$.
This condition ensures that: \begin{itemize} \item the one-dimensional subgroup ${\mathbb T}^X$ generated by the vector $X$ is dense in ${\mathbb T}^n$, \item $X$ is not parallel to the facets of the moment polytope $\Delta := \mu (M)$, and \item the vertices of $\Delta$ have different projections along $X$. \end{itemize} Then the fixed points for the ${\mathbb T}^n$-action are exactly the fixed points of the action restricted to ${\mathbb T}^X$, that is, the zeros of the vector field $X^\#$ on $M$ generated by $X$. The projection of $\mu$ along $X$, $\mu ^X := \langle \mu , X \rangle : M \to {\mathbb R}$, is a hamiltonian function for the vector field $X^\#$. We conclude that the critical points of $\mu ^X$ are precisely the fixed points of the ${\mathbb T}^n$-action. \begin{picture}(200,90)(-5,10) \put(25,50){$M$} \put(55,50){\vector(1,0){30}} \put(65,57){$\mu$} \put(155,50){\vector(1,0){70}} \put(165,57){projection} \qbezier[20](100,40)(115,45)(130,50) \qbezier[50](250,20)(250,55)(250,90) \put(255,85){${\mathbb R}$} \put(100,40){\circle*{5}} \put(110,30){\circle*{5}} \put(120,80){\circle*{5}} \put(130,50){\circle*{5}} \put(250,30){\circle*{5}} \put(250,40){\circle*{5}} \put(250,50){\circle*{5}} \put(250,80){\circle*{5}} \thicklines \put(100,40){\line(1,-1){10}} \put(100,40){\line(1,2){20}} \put(110,30){\line(1,5){10}} \put(110,30){\line(1,1){20}} \put(120,80){\line(1,-3){10}} \put(250,30){\line(0,1){50}} \end{picture} By Theorem~\ref{thm:moment_darboux}, if $q$ is a fixed point for the ${\mathbb T}^n$-action, then there exists a chart $({\mathcal U},x_1,\dots,x_n,y_1,\dots,y_n)$ centered at $q$ and weights $\lambda^{(1)}, \ldots, \lambda^{(n)} \in {\mathbb Z}^n$ such that \[ \left. \mu ^X \right|_{\mathcal U} = \left. \langle \mu , X \rangle \right|_{\mathcal U} = \mu ^X (q) - \frac 12 \sum\limits_{k=1}^n \langle \lambda^{(k)} , X \rangle (x_k^2 + y_k^2) \ .
\] Since the components of $X$ are independent over ${\mathbb Q}$, all coefficients $\langle \lambda^{(k)} , X \rangle$ are nonzero, so $q$ is a {\em nondegenerate}\index{nondegenerate critical point} critical point of $\mu^X$. Moreover, the {\em index}\footnote{A \textbf{Morse function}\index{Morse ! function} on an $m$-dimensional manifold $M$ is a smooth function $f: M \to {\mathbb R}$ all of whose critical points (where $df$ vanishes) are nondegenerate\index{nondegenerate critical point} (i.e., the {\em hessian matrix}\index{hessian} is nonsingular). Let $q$ be a nondegenerate critical point for $f : M \to {\mathbb R}$. The \textbf{index of $f$ at $q$} is the index of the hessian $H_q : {\mathbb R}^m \times {\mathbb R}^m \to {\mathbb R}$ regarded as a symmetric bilinear function, that is, the maximal dimension of a subspace of ${\mathbb R}^m$ on which $H_q$ is negative definite.} of $q$ is twice the number of labels $k$ such that $- \langle \lambda^{(k)} , X \rangle < 0$. But the $-\lambda^{(k)}$'s are precisely the edge vectors $u_i$ which satisfy Delzant's conditions. Therefore, geometrically, the index of $q$ can be read from the moment polytope $\Delta$, by taking twice the number of edges whose inward-pointing edge vectors at $\mu (q)$ {\em point down relative to $X$}, that is, whose inner product with $X$ is negative. In particular, $\mu ^X$ is a {\em perfect Morse function}\footnote{A \textbf{perfect Morse function}\index{perfect Morse function} is a Morse function $f$ for which the {\em Morse inequalities}~\cite{mi:morse,mo:calculus} are equalities, i.e., $b_\lambda (M) = C_\lambda$ and $b_\lambda (M) - b_{\lambda -1} (M) + \ldots \pm b_0 (M) = C_\lambda - C_{\lambda -1} + \ldots \pm C_0$, where $b_\lambda (M) = \dim H_\lambda (M)$ and $C_\lambda$ is the number of critical points of $f$ with index $\lambda$.
If all critical points of a Morse function $f$ have even index, then $f$ is a perfect Morse function.} and we have: \begin{proposition} Let $X\in {\mathbb R}^n$ have components independent over ${\mathbb Q}$. The degree-$2k$ homology group of the symplectic toric manifold $(M, \omega, {\mathbb T} , \mu)$ has dimension equal to the number of vertices of the moment polytope where there are exactly $k$ (primitive inward-pointing) edge vectors that point down relative to the projection along $X$. All odd-degree homology groups of $M$ are zero. \end{proposition} By Poincar\'e duality (or by taking $-X$ instead of $X$), the words {\em point down} may be replaced by {\em point up}. The Euler characteristic of a symplectic toric manifold is simply the number of vertices of the corresponding polytope. There is a combinatorial way of understanding the cohomology ring~\cite{fu:toric}. A \textbf{symplectic toric orbifold}\index{symplectic ! toric orbifold}\index{orbifold ! toric} is a compact connected symplectic orbifold $(M,\omega)$ equipped with an effective hamiltonian action of a torus of dimension equal to half the dimension of the orbifold, and with a choice of a corresponding moment map. Symplectic toric orbifolds were classified by Lerman and Tolman~\cite{le-to} in a theorem that generalizes Delzant's: a symplectic toric orbifold is determined by its moment polytope plus a positive integer label attached to each of the polytope facets. The polytopes that occur are more general than the Delzant polytopes in the sense that only simplicity and rationality are required; at each vertex the edge vectors $u_1,\dots,u_n$ need only form a basis of ${\mathbb Q}^n$ rather than a ${\mathbb Z}$-basis of ${\mathbb Z}^n$. When the integer labels are all equal to 1, the failure of polytope smoothness accounts for all orbifold singularities. \ssubsection{Delzant's Construction} \index{Delzant !
construction} \label{sec:construction} Following~\cite{de:hamiltoniens,gu:moment}, we prove the existence part (or surjectivity) in Delzant's theorem,\index{Delzant ! construction} by using symplectic reduction to associate to an $n$-dimensional Delzant polytope $\Delta$ a symplectic toric manifold $(M_\Delta,\omega_\Delta,{\mathbb T}^n,\mu_\Delta)$. Let $\Delta$ be a Delzant polytope in $({\mathbb R}^n)^*$\footnote{Although we identify ${\mathbb R}^n$ with its dual via the euclidean inner product, it may be more clear to see $\Delta$ in $({\mathbb R}^n)^*$ for Delzant's construction.} with $d$ facets.\footnote{A \textbf{face} of a polytope $\Delta$ is a set of the form $F = \Delta \cap \{ x \in {\mathbb R}^n \mid f(x) = c \}$ where $c \in {\mathbb R}$ and $f \in ({\mathbb R}^n)^*$ satisfies $f(x) \geq c$, $\forall x \in \Delta$. A \textbf{facet}\index{facet}\index{polytope ! facet} of an $n$-dimensional polytope is an $(n-1)$-dimensional face.} We can algebraically describe $\Delta$ as an intersection of $d$ halfspaces. Let $v_i \in {\mathbb Z}^n$, $i = 1,\dots,d$, be the primitive\footnote{A lattice vector $v \in {\mathbb Z}^n$ is \textbf{primitive}\index{primitive vector} if it cannot be written as $v = ku$ with $u \in {\mathbb Z}^n$, $k \in {\mathbb Z}$ and $|k| > 1$; for instance, $(1,1)$, $(4,3)$, $(1,0)$ are primitive, but $(2,2)$, $(3,6)$ are not.} outward-pointing normal vectors to the facets of $\Delta$. Then, for some $\lambda_i \in {\mathbb R}$, we can write $\Delta = \{x \in ({\mathbb R}^n)^* \mid \langle x,v_i\rangle \le \lambda_i,\ i = 1,\dots,d\}$. \begin{example} When $\Delta$ is the triangle below, we have \[ \Delta = \{x \in ({\mathbb R}^2)^* \mid \langle x,(-1,0)\rangle \le 0 \ , \ \langle x,(0,-1) \rangle \le 0 \ , \ \langle x,(1,1)\rangle \le 1\} \ .
\] \begin{picture}(100,80)(-140,-25) \put(0,0){\line(1,0){40}} \put(0,0){\line(0,1){40}} \put(0,40){\line(1,-1){40}} \put(-14,-12){$(0,0)$} \put(33,-12){$(1,0)$} \put(-12,47){$(0,1)$} \put(43,33){$v_3$} \put(6,-24){$v_1$} \put(-33,27){$v_2$} \put(0,0){\circle*{3}} \put(0,40){\circle*{3}} \put(40,0){\circle*{3}} \put(0,20){\vector(-1,0){40}} \put(20,0){\vector(0,-1){40}} \put(20,20){\vector(1,1){40}} \end{picture} \end{example} For the standard basis $e_1 = (1,0,\dots,0),\dots,e_d = (0,\dots,0,1)$ of ${\mathbb R}^d$, consider \[ \begin{array}{rrcl} \pi: &{\mathbb R}^d &\longrightarrow &{\mathbb R}^n \\ &e_i &\longmapsto &v_i \ . \end{array} \] \begin{lemma} \label{le:onto} The map $\pi$ is onto and maps ${\mathbb Z}^d$ onto ${\mathbb Z}^n$. \end{lemma} \vspace*{-2ex} \begin{proof} We need to show that the set $\{v_1,\dots,v_d\}$ spans ${\mathbb Z}^n$. At a vertex $p$, the edge vectors $u_1,\dots,u_n \in ({\mathbb R}^n)^*$ form a basis for $({\mathbb Z}^n)^*$ which, by a change of basis if necessary, we may assume is the standard basis. Then the corresponding primitive normal vectors to the facets meeting at $p$ are $-u_1,\dots,-u_n$. \end{proof} We still denote by $\pi$ the induced surjective map ${\mathbb T}^d = {\mathbb R}^d/(2\pi {\mathbb Z}^d) \stackrel{\pi}{\rightarrow} {\mathbb T}^n = {\mathbb R}^n/(2\pi {\mathbb Z}^n)$. The kernel $N$ of $\pi$ is a $(d-n)$-dimensional Lie subgroup of ${\mathbb T}^d$ with inclusion $i : N \hookrightarrow {\mathbb T}^d$. Let ${\mathfrak n}$ be the Lie algebra of $N$.
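For the triangle of the example above, Lemma~\ref{le:onto} can be checked by hand: the normals $v_1=(-1,0)$, $v_2=(0,-1)$, $v_3=(1,1)$ pairwise form ${\mathbb Z}$-bases of ${\mathbb Z}^2$, and $\ker \pi$ is the diagonal line, so $d-n = 1$ here. A small sketch of this check (ours, not from the text):

```python
# primitive outward normals of the Delzant triangle in the example above
v = [(-1, 0), (0, -1), (1, 1)]

def det2(a, b):
    return a[0] * b[1] - a[1] * b[0]

# at each vertex, the normals of the two facets meeting there form a
# Z-basis (determinant +-1), so pi : Z^3 -> Z^2, e_i -> v_i, is onto
assert all(abs(det2(v[i], v[j])) == 1
           for i in range(3) for j in range(i + 1, 3))

# v1 + v2 + v3 = 0, so ker(pi) is spanned by (1,1,1) and
# N = ker(pi : T^3 -> T^2) is the (d - n) = 1-dimensional diagonal circle
assert tuple(sum(w[c] for w in v) for c in range(2)) == (0, 0)
```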
The exact sequence of tori \[ 1 \longrightarrow N \stackrel{i}{\longrightarrow} {\mathbb T}^d \stackrel{\pi}{\longrightarrow} {\mathbb T}^n \longrightarrow 1 \] induces an exact sequence of Lie algebras \[ 0 \longrightarrow {\mathfrak n} \stackrel{i}{\longrightarrow} {\mathbb R}^d \stackrel{\pi}{\longrightarrow} {\mathbb R}^n \longrightarrow 0 \] with dual exact sequence \[ 0 \longrightarrow ({\mathbb R}^n)^* \stackrel{\pi^*}{\longrightarrow} ({\mathbb R}^d)^* \stackrel{i^*}{\longrightarrow} {\mathfrak n}^* \longrightarrow 0 \ . \] Consider ${\mathbb C}^d$ with symplectic form $\omega_0 = \frac {i}{2} \sum dz_k \wedge d{\bar z}_k$, and standard hamiltonian action of ${\mathbb T}^d$ given by $(e^{i t_1},\dots,e^{i t_d}) \cdot (z_1,\dots,z_d) = (e^{i t_1}z_1,\dots,e^{i t_d}z_d)$. A moment map is $\phi: {\mathbb C}^d \to ({\mathbb R}^d)^*$ defined by \[ \phi(z_1,\dots,z_d) = - \frac 12 (|z_1|^2,\dots,|z_d|^2) + (\lambda_1,\dots,\lambda_d) \ , \] where the constant is chosen for later convenience. The subtorus $N$ acts on ${\mathbb C}^d$ in a hamiltonian way with moment map $i^* \circ \phi: {\mathbb C}^d \to {\mathfrak n}^*$. Let $Z = (i^* \circ \phi)^{-1}(0)$. In order to show that $Z$ (a closed set) is compact it suffices (by the Heine-Borel theorem) to show that $Z$ is bounded. Let $\Delta'$ be the image of $\Delta$ by $\pi^*$. First we show that $\phi(Z) = \Delta'$. A value $y\in({\mathbb R}^d)^*$ is in the image of $Z$ by $\phi$ if and only if \[ \mbox{(a) $y$ is in the image of $\phi$} \qquad \mbox{ and } \qquad \mbox{(b) $i^* y = 0$} \] if and only if (using the expression for $\phi$ and the third exact sequence) \[ \mbox{(a) $\ip{y}{e_i} \le \lambda_i$ for $i=1,\ldots,d$} \quad \mbox{ and } \quad \mbox{(b) $y = \pi^*(x)$ for some $x\in({\mathbb R}^n)^*$} \ . \] Suppose that $y = \pi^*(x)$. Then \[ \ip{y}{e_i} \le \lambda_i, \forall i \iff \ip{x}{\pi(e_i)} \le \lambda_i, \forall i \iff \ip{x}{v_i} \le \lambda_i, \forall i \iff x \in \Delta \ . 
\] Thus, $y \in \phi(Z) \Leftrightarrow y\in\pi^*(\Delta) = \Delta'$. Since $\Delta '$ is compact, $\phi$ is proper and $\phi (Z) = \Delta'$, we conclude that $Z$ must be bounded, and hence compact. In order to show that $N$ acts freely on $Z$, pick a vertex $p$ of $\Delta$, and let $I= \{ i_1 , \ldots , i_n \}$ be the set of indices for the $n$ facets meeting at $p$. Pick $z \in Z$ such that $\phi (z) = \pi ^* (p)$. Then $p$ is characterized by $n$ equations $\langle p , v_i \rangle = \lambda_i$ where $i \in I$: \[ \begin{array}{rcl} \langle p , v_i \rangle = \lambda_i & \iff & \langle p , \pi (e_i) \rangle = \lambda_i \\ & \iff & \langle \pi ^* (p) ,e_i \rangle = \lambda_i \\ & \iff & \langle \phi (z) ,e_i \rangle = \lambda_i \\ & \iff & \mbox{$i$-th coordinate of $\phi (z)$ is equal to $\lambda_i$} \\ & \iff & -\frac 12 |z_i|^2 + \lambda_i = \lambda_i \\ & \iff & z_i = 0 \ . \end{array} \] Hence, those $z$'s are points whose coordinates in the set $I$ are zero, and whose other coordinates are nonzero. Without loss of generality, we may assume that $I = \{ 1,\ldots ,n\}$. The stabilizer of $z$ is \[ ({\mathbb T}^d)_z = \{ (t_1,\ldots , t_n,1,\ldots ,1) \in {\mathbb T}^d \} \ . \] As the restriction $\pi : ({\mathbb R}^d)_z \to {\mathbb R}^n$ maps the vectors $e_1, \ldots , e_n$ to a ${\mathbb Z}$-basis $v_1, \ldots , v_n$ of ${\mathbb Z}^n$ (respectively), at the level of groups $\pi : ({\mathbb T}^d)_z \to {\mathbb T}^n$ must be bijective. Since $N = \ker (\pi : {\mathbb T}^d \to {\mathbb T}^n)$, we conclude that $N \cap ({\mathbb T}^d)_z = \{ e \}$, i.e., $N_z = \{ e \}$. Hence all $N$-stabilizers at points mapping to vertices are trivial. But this was the worst case, since other stabilizers $N_{z'}$ ($z' \in Z$) are contained in stabilizers for points $z$ that map to vertices. We conclude that $N$ acts freely on $Z$. We now apply reduction. Since $i^*$ is surjective, $0 \in {\mathfrak n}^*$ is a regular value of $i^* \circ \phi$.
Hence, $Z$ is a compact submanifold of ${\mathbb C}^d$ of (real) dimension $2d - (d-n) = d+n$. The orbit space $M_{\Delta} = Z/N$ is a compact manifold of (real) dimension $\dim Z - \dim N = (d+n) - (d-n) = 2n$. The point-orbit map $p: Z \to M_{\Delta}$ is a principal $N$-bundle over $M_{\Delta}$. Consider the diagram \[ \begin{array}{ccc} Z &\stackrel{j}{\hookrightarrow} &{\mathbb C}^d \\ {\scriptstyle{p}} \downarrow \phantom{\scriptstyle{p}} \\ M_{\Delta} \end{array} \] where $j: Z \hookrightarrow {\mathbb C}^d$ is inclusion. The Marsden-Weinstein-Meyer theorem (Theorem~\ref{thm:reduction}) guarantees the existence of a symplectic form $\omega_{\Delta}$ on $M_{\Delta}$ satisfying \[ p^*\omega_{\Delta} = j^*\omega_0 \ . \] Since $Z$ is connected, the symplectic manifold $(M_{\Delta},\omega_{\Delta})$ is also connected. It remains to show that $(M_\Delta,\omega_\Delta)$ is a hamiltonian ${\mathbb T}^n$-space with a moment map $\mu_\Delta$ having image $\mu_\Delta(M_\Delta) = \Delta$. Let $z$ be such that $\phi (z) = \pi ^* (p)$ where $p$ is a vertex of $\Delta$. Let $\sigma : {\mathbb T}^n \to ({\mathbb T}^d)_z$ be the inverse for the earlier bijection $\pi : ({\mathbb T}^d)_z \to {\mathbb T}^n$. This is a {\em section}, i.e., a right inverse for $\pi$, in the sequence \[ \begin{array}{ccccccccc} 1 & \longrightarrow & N & \stackrel{i}{\longrightarrow} & {\mathbb T}^d & \stackrel{\pi}{\longrightarrow} & {\mathbb T}^n & \longrightarrow & 1 \ , \\ & & & & & \stackrel{\sigma}{\longleftarrow} \end{array} \] so it {\em splits}, i.e., becomes like a sequence for a product, and we obtain an isomorphism $(i , \sigma) : N \times {\mathbb T}^n \stackrel{\simeq}{\longrightarrow} {\mathbb T}^d$. The action of the ${\mathbb T}^n$ factor (or, more rigorously, $\sigma ({\mathbb T}^n) \subset {\mathbb T}^d$) descends to the quotient $M_\Delta = Z / N$.
Consider the diagram \[ \begin{array}{rl} Z & \stackrel{j}{\hookrightarrow} {\mathbb C}^d \stackrel{\phi}{\longrightarrow} ({\mathbb R}^d)^* \simeq {\mathfrak n}^* \oplus ({\mathbb R}^n)^* \stackrel{\sigma^*}{\longrightarrow} ({\mathbb R}^n)^* \\ p \downarrow \\ M_\Delta \end{array} \] where the last horizontal map is projection onto the second factor. Since the composition of the horizontal maps is constant along $N$-orbits, it descends to a map \[ \mu_\Delta : M_\Delta \longrightarrow ({\mathbb R}^n)^* \] which satisfies $\mu_\Delta \circ p = \sigma^* \circ \phi \circ j$. By reduction for product groups (Section~\ref{sec:generalizations}), this is a moment map for the action of ${\mathbb T}^n$ on $(M_\Delta, \omega_\Delta)$. The image of $\mu_\Delta$ is \[ \mu_\Delta (M_\Delta) = (\mu_\Delta \circ p) (Z) = (\sigma ^* \circ \phi \circ j ) (Z) = (\sigma ^* \circ \pi ^* ) (\Delta) = \Delta \ , \] because $\phi (Z) = \pi^* (\Delta)$ and $\pi \circ \sigma = \mbox{id}$. We conclude that $(M_\Delta,\omega_\Delta, {\mathbb T}^n , \mu_\Delta)$ is the required toric manifold corresponding to $\Delta$. This construction via reduction also shows that symplectic toric manifolds are in fact K\"ahler. \begin{example}\index{example ! complex projective space}\index{complex ! projective space}\index{example ! Delzant construction} Here are the details of Delzant's construction for the case of a segment $\Delta = [0,a] \subset {\mathbb R}^*\ (n = 1,d = 2)$. Let $v(=1)$ be the standard basis vector in ${\mathbb R}$. Then $\Delta$ is described by $\langle x,-v \rangle \le 0$ and $\langle x, v \rangle \le a$, where $v_1 = -v$, $v_2 = v$, $\lambda_1 =0$ and $\lambda_2 =a$. The projection ${\mathbb R}^2 \stackrel{\pi}{\longrightarrow} {\mathbb R}$, $e_1 \mapsto -v$, $e_2 \mapsto v$, has kernel equal to the span of $(e_1 + e_2)$, so that $N$ is the diagonal subgroup of ${\mathbb T}^2 = S^1 \times S^1$.
The exact sequences become \[ \begin{array}{ccccccccc} 1 & \longrightarrow & N & \stackrel{i}{\longrightarrow} & {\mathbb T}^2 & \stackrel{\pi}{\longrightarrow} & S^1 & \longrightarrow & 1 \\ & & t & \longmapsto & (t,t) \\ & & & & (t_1,t_2) & \longmapsto & t_1^{-1}t_2 \\ \\ 0 & \longrightarrow & {\mathfrak n} & \stackrel{i}{\longrightarrow} & {\mathbb R}^2 & \stackrel{\pi}{\longrightarrow} & {\mathbb R} & \longrightarrow & 0 \\ & & x & \longmapsto & (x,x) \\ & & & & (x_1,x_2) & \longmapsto & x_2 - x_1 \\ \\ 0 & \longrightarrow & {\mathbb R}^* & \stackrel{\pi^*}{\longrightarrow} & ({\mathbb R}^2)^* & \stackrel{i^*}{\longrightarrow} & {\mathfrak n}^* & \longrightarrow & 0 \\ & & x & \longmapsto & (-x,x) \\ & & & & (x_1,x_2) & \longmapsto & x_1 + x_2 \ . \end{array} \] The action of the diagonal subgroup $N = \{(e^{i t},e^{i t}) \in S^1 \times S^1\}$ on ${\mathbb C}^2$ by \[ (e^{i t},e^{i t}) \cdot (z_1,z_2) = (e^{i t}z_1,e^{i t}z_2) \] has moment map $(i^* \circ\phi)(z_1,z_2) = \textstyle{-\frac 12} (|z_1|^2 + |z_2|^2) + a$, with zero-level set \[ (i^* \circ \phi)^{-1}(0) = \{(z_1,z_2) \in {\mathbb C}^2 :|z_1|^2 + |z_2|^2 = 2a \} \ . \] Hence, the reduced space is a projective space, $(i^* \circ \phi)^{-1}(0)/N = {\mathbb C} {\mathbb P}^1$. \end{example} \ssubsection{Duistermaat-Heckman Theorems} \index{Duistermaat-Heckman ! polynomial}\index{Heckman|see{Duistermaat-Heckman}} Throughout this section, let $(M, \omega, G, \mu)$ be a hamiltonian $G$-space, where $G$ is an $n$-torus\footnote{The discussion in this section may be extended to hamiltonian actions of other compact Lie groups, not necessarily tori; see~\cite[Exercises 2.1-2.10]{gu:moment}.} and the moment map $\mu$ is proper. If $G$ acts freely on $\mu^{-1} (0)$, it also acts freely on nearby levels $\mu^{-1} (t)$, $t \in {\mathfrak g}^*$ and $t \approx 0$. (Otherwise, assume only that $0$ is a regular value of $\mu$ and work with orbifolds.)
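Before studying the variation of the reduced spaces, here is a numerical sanity check of the preceding Delzant example. For the section at the vertex $p = 0$ one finds $\sigma^*(x_1,x_2) = -x_1$, and $\phi(z) = (-\frac12|z_1|^2,\, -\frac12|z_2|^2 + a)$, so on $Z$ the moment map $\mu_\Delta$ lifts to $z \mapsto \frac12 |z_1|^2$; its image should be exactly $\Delta = [0,a]$. A minimal sketch (the value of $a$ and the sample size are arbitrary test choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 2.0                      # arbitrary test length for Delta = [0, a]

# Random points of the zero-level set Z = {|z_1|^2 + |z_2|^2 = 2a} in C^2.
z = rng.normal(size=(10_000, 4)).view(np.complex128)     # columns: z_1, z_2
z *= np.sqrt(2 * a) / np.linalg.norm(z, axis=1, keepdims=True)

# Lift of mu_Delta to Z: sigma^* composed with phi reduces to |z_1|^2 / 2.
mu = np.abs(z[:, 0]) ** 2 / 2

# The image of the moment map is the original polytope [0, a] ...
assert mu.min() >= 0 and mu.max() <= a + 1e-9

# ... and the two vertices are attained at z_1 = 0 resp. z_2 = 0.
z_vertex0 = np.array([0.0, np.sqrt(2 * a)])   # maps to p = 0
z_vertex1 = np.array([np.sqrt(2 * a), 0.0])   # maps to p = a
assert np.isclose(abs(z_vertex0[0]) ** 2 / 2, 0.0)
assert np.isclose(abs(z_vertex1[0]) ** 2 / 2, a)
```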
We study the variation of the reduced spaces by relating\index{reduction ! local form}\index{local form} \[ (M_{\red} = \mu^{-1} (0) / G, \omega_{\red}) \qquad \mbox{ and } \qquad (M_t = \mu^{-1} (t) / G , \omega_t) \ . \] For simplicity, assume $G$ to be the circle $S^1$. Let $Z = \mu^{-1} (0)$ and let $i : Z \hookrightarrow M$ be the inclusion map. Fix a connection form $\alpha \in \Omega^1 (Z)$ for the principal bundle \[ \begin{array}{cll} S^1 & \hookrightarrow & Z \\ & & \downarrow \pi \\ & & M_{\red} \end{array} \] that is, ${\mathcal L} _{X^\#} \alpha = 0$ and $\imath _{X^\#} \alpha = 1$, where $X^\#$ is the infinitesimal generator for the $S^1$-action. Construct a 2-form on the product manifold $Z \times (-\varepsilon, \varepsilon)$ by the recipe \[ \sigma = \pi^* \omega_{\red} - d(x \alpha) \ , \] where $x$ is a linear coordinate on the interval $(-\varepsilon, \varepsilon) \subset {\mathbb R} \simeq {\mathfrak g}^*$. (By abuse of notation, we shorten the symbols for forms on $Z \times (-\varepsilon, \varepsilon)$ that arise by pullback via projection onto each factor.) \begin{lemma} The 2-form $\sigma$ is symplectic for $\varepsilon$ small enough. \end{lemma} \vspace*{-2ex} \begin{proof} At points where $x=0$, the form $\sigma |_{x=0} = \pi^* \omega_{\red} + \alpha \wedge dx$ satisfies $\sigma |_{x=0} \left( X^{\#}, \frac{\partial}{\partial x} \right) = 1$, so $\sigma$ is nondegenerate along $Z \times \{ 0 \}$. Since nondegeneracy is an open condition, we conclude that $\sigma$ is nondegenerate for $x$ in a sufficiently small neighborhood of $0$. Closedness is clear. \end{proof} Notice that $\sigma$ is invariant with respect to the $S^1$-action on the first factor of $Z \times (-\varepsilon, \varepsilon)$.
This action is hamiltonian with moment map $x : Z \times (-\varepsilon, \varepsilon) \to (-\varepsilon, \varepsilon)$ given by projection onto the second factor (since ${\mathcal L} _{X^\#} \alpha =0$ and $\imath _{X^\#} \alpha =1$): \[ \imath _{X^\#} \sigma = - \imath _{X^\#} d (x \alpha) = - {\mathcal L} _{X^\#} (x \alpha) + d \, \imath _{X^\#} (x \alpha) = dx \ . \] \begin{lemma} \label{lem:model} There exists an equivariant symplectomorphism between a neighborhood of $Z$ in $M$ and a neighborhood of $Z \times \{ 0 \}$ in $Z \times (-\varepsilon, \varepsilon)$, intertwining the two moment maps, for $\varepsilon$ small enough. \end{lemma} \vspace*{-2ex} \begin{proof} The inclusion $i_0 : Z \hookrightarrow Z \times (-\varepsilon, \varepsilon)$ as $Z \times \{ 0 \}$ and the natural inclusion $i : Z \hookrightarrow M$ are $S^1$-equivariant coisotropic embeddings. Moreover, they satisfy $i_0^* \sigma = i^* \omega$ since both sides are equal to $\pi^* \omega_{\red}$, and the moment maps coincide on $Z$ because $i_0^* x = 0 = i^* \mu$. Replacing $\varepsilon$ by a smaller positive number if necessary, the result follows from the equivariant version of the coisotropic embedding theorem (Theorem~\ref{thm:coisotropic}).\footnote{\textbf{Equivariant Coisotropic Embedding Theorem:}\index{theorem ! equivariant coisotropic embedding}\index{equivariant ! coisotropic embedding} {\em Let $(M_0, \omega_0)$, $(M_1, \omega_1)$ be symplectic manifolds of dimension $2n$, $G$ a compact Lie group acting on $(M_i, \omega_i)$, $i=0,1$, in a hamiltonian way with moment maps $\mu_0$ and $\mu_1$, respectively, $Z$ a manifold of dimension $k \geq n$ with a $G$-action, and $\iota_i : Z \hookrightarrow M_i$, $i=0,1$, $G$-equivariant coisotropic embeddings. Suppose that $\iota_0^* \omega_0 = \iota_1^* \omega_1$ and $\iota_0^* \mu_0 = \iota_1^* \mu_1$.
Then there exist $G$-invariant neighborhoods ${\mathcal U}_0$ and ${\mathcal U}_1$ of $\iota_0 (Z)$ and $\iota_1 (Z)$ in $M_0$ and $M_1$, respectively, and a $G$-equivariant symplectomorphism $\varphi : {\mathcal U}_0 \rightarrow {\mathcal U}_1$ such that $\varphi \circ \iota_0 = \iota_1$ and $\mu_0 = \varphi ^* \mu_1$.}} \end{proof} Therefore, in order to compare the reduced spaces $M_t = \mu ^{-1} (t) / S^1$ for $t \approx 0$, we can work in $Z \times (-\varepsilon, \varepsilon)$ and compare instead the reduced spaces $x ^{-1} (t) / S^1$. \begin{proposition} \label{prop:curvature} The space $(M_t,\omega_t)$ is symplectomorphic to $(M_{\red}, \omega_{\red} -t \beta)$ where $\beta$ is the curvature form of the connection $\alpha$. \end{proposition} \vspace*{-2ex} \begin{proof} By Lemma~\ref{lem:model}, $(M_t,\omega_t)$ is symplectomorphic to the reduced space at level $t$ for the hamiltonian space $(Z \times (-\varepsilon, \varepsilon) , \sigma , S^1, x)$. Since $x ^{-1} (t) = Z \times \{ t \}$, where $S^1$ acts on the first factor, all the manifolds $x ^{-1} (t) / S^1$ are diffeomorphic to $Z / S^1 = M_{\red}$. As for the symplectic forms, let $\iota_t : Z \times \{ t \} \hookrightarrow Z \times (-\varepsilon, \varepsilon)$ be the inclusion map. The restriction of $\sigma$ to $Z \times \{ t \}$ is \[ \iota_t ^* \sigma = \pi^* \omega_{\red} -t d \alpha \ . \] By definition of curvature, $d \alpha = \pi ^* \beta$. Hence, the reduced symplectic form on $x ^{-1} (t) / S^1$ is $\omega_{\red} - t \beta$. \end{proof} In loose terms, Proposition~\ref{prop:curvature} says that the reduced forms $\omega_t$ vary linearly in $t$, for $t$ close enough to $0$. However, the identification of $M_t$ with $M_{\red}$ as abstract manifolds is not natural. Nonetheless, any two such identifications are isotopic. By the homotopy invariance of de Rham classes, we obtain: \begin{theorem}\index{theorem ! Duistermaat-Heckman}\index{Duistermaat-Heckman ! 
theorem}\label{thm:dh2} \textbf{(Duistermaat-Heckman~\cite{du-he:variation})} $\;$ Under the hypotheses and notation before, the cohomology class of the reduced symplectic form $[ \omega_t ]$ varies linearly in $t$. More specifically, if $c = [- \beta] \in H_{\mathrm{deRham}}^2 (M_{\red})$ is the first Chern class\footnote{Often the Lie algebra of $S^1$ is identified with $2\pi i {\mathbb R}$ under the exponential map $\exp : {\mathfrak g} \simeq 2\pi i {\mathbb R} \rightarrow S^1$, $\xi \mapsto e^{\xi}$. Given a principal $S^1$-bundle, by this identification the infinitesimal action maps the generator $2\pi i$ of $2\pi i {\mathbb R}$ to the generating vector field $X^\#$. A connection form $A$ is then an imaginary-valued 1-form on the total space satisfying ${\mathcal L} _{X^\#} A =0$ and $\imath_ {X^\#} A = 2\pi i$. Its curvature form $B$ is an imaginary-valued 2-form on the base satisfying $\pi^* B = dA$. By the Chern-Weil isomorphism, the \textbf{first Chern class}\index{Chern ! first Chern class}\index{first Chern class} of the principal $S^1$-bundle is $c= [\frac{i}{2\pi} B]$. Here we identify the Lie algebra of $S^1$ with ${\mathbb R}$ and implicitly use the exponential map $\exp : {\mathfrak g} \simeq {\mathbb R} \rightarrow S^1$, $t \mapsto e^{2\pi i t}$. Hence, given a principal $S^1$-bundle, the infinitesimal action maps the generator 1 of ${\mathbb R}$ to $X^\#$, and here a connection form $\alpha$ is an ordinary 1-form on the total space satisfying ${\mathcal L} _{X^\#} \alpha =0$ and $\imath_ {X^\#} \alpha = 1$. The curvature form $\beta$ is an ordinary 2-form on the base satisfying $\pi^* \beta = d\alpha$. Consequently, we have $A=2\pi i \alpha$, $B=2\pi i \beta$ and the first Chern class is given by $c= [-\beta]$.}\index{Chern ! first Chern class}\index{first Chern class} of the $S^1$-bundle $Z \rightarrow M_{\red}$, we have \[ [ \omega_t ] = [ \omega_{\red} ] + t c \ . 
\] \end{theorem} \vspace*{-1ex} \begin{definition} The \textbf{Duistermaat-Heckman measure}\index{measure ! Duistermaat-Heckman}\index{Duistermaat-Heckman ! measure}, $m_{DH}$, on ${\mathfrak g}^*$ is the push-forward of the Liouville measure\footnote{On an arbitrary symplectic manifold $(M^{2n},\omega)$, with symplectic volume\index{symplectic ! volume}\index{volume} $\frac{\omega^n}{n!}$, the \textbf{Liouville measure}\index{Liouville ! measure}\index{measure ! Liouville} (or \textbf{symplectic measure})\index{symplectic ! measure}\index{measure ! symplectic} of a Borel subset\index{Borel subset} ${\mathcal U}$ of $M$ is \[ m_\omega({\mathcal U}) = \int_{\mathcal U} \frac{\omega^n}{n!} \ . \] The set ${\mathcal B}$ of \textbf{Borel subsets}\index{Borel subset} is the {\em $\sigma$-ring} generated by the set of compact subsets, i.e., if $A,B \in {\mathcal B}$, then $A \setminus B \in {\mathcal B}$, and if $A_i \in {\mathcal B}$, $i=1,2,\ldots$, then $\cup_{i=1}^{\infty} A_i \in {\mathcal B}$.} by $\mu:M \rightarrow {\mathfrak g}^*$, that is, for any Borel subset ${\mathcal U}$ of ${\mathfrak g}^*$, we have \[ m_{DH}({\mathcal U}) = \int_{\mu^{-1}({\mathcal U})} \frac{\omega^n}{n!} \ . \] \end{definition} The integral with respect to the Duistermaat-Heckman measure of a compactly-supported function $h \in C^\infty ({\mathfrak g} ^*)$ is \[ \int_{{\mathfrak g} ^*} h \ dm_{DH} := \int_M (h \circ \mu) \frac{\omega^n}{n!} \ . \] On ${\mathfrak g} ^*$ regarded as a vector space, say ${\mathbb R} ^n$, there is also the Lebesgue (or euclidean) measure\index{euclidean ! measure}\index{Lebesgue ! measure}, $m_0$. The relation between $m_{DH}$ and $m_0$ is governed by the {\em Radon-Nikodym derivative}\index{Radon-Nikodym derivative}, denoted by $\frac{dm_{DH}}{dm_0}$, which is a {\em generalized function} satisfying \[ \int_{{\mathfrak g} ^*} h \ dm_{DH} = \int_{{\mathfrak g} ^*} h \ \frac{dm_{DH}}{dm_0} \ dm_0 \ . 
\] \begin{theorem}\index{theorem ! Duistermaat-Heckman}\index{Duistermaat-Heckman ! theorem}\label{thm:dh} \textbf{(Duistermaat-Heckman~\cite{du-he:variation})} $\;$ Under the hypotheses and notation before, the Duistermaat-Heckman measure is a piecewise polynomial multiple of Lebesgue measure\index{measure ! Lebesgue}\index{Lebesgue ! measure} on ${\mathfrak g}^* \simeq {\mathbb R}^n$, that is, the Radon-Nikodym derivative\index{Nikodym ! Radon-Nikodym derivative}\index{Radon-Nikodym derivative} $f = \frac{dm_{DH}}{dm_0}$ is piecewise polynomial. More specifically, for any Borel subset ${\mathcal U}$ of ${\mathfrak g}^*$, we have $m_{DH}({\mathcal U}) = \int_{\mathcal U} f(x)\, dx$, where $dx = dm_0$ is the Lebesgue volume\index{Lebesgue ! volume} form on ${\mathcal U}$ and $f: {\mathfrak g}^* \simeq {\mathbb R}^n \to {\mathbb R}$ is polynomial on any region consisting of regular values of $\mu$. \end{theorem} This Radon-Nikodym derivative $f$ is called the \textbf{Duistermaat-Heckman polynomial}.\index{Duistermaat-Heckman ! polynomial} In the case of a toric manifold, the Duistermaat-Heckman polynomial is a universal constant equal to $(2\pi)^n$ when $\Delta$ is $n$-dimensional. Thus the symplectic volume of $(M_\Delta,\omega_\Delta)$ is $(2\pi)^n$ times the euclidean volume of $\Delta$. \begin{example} For the standard spinning of a sphere, $(S^2,\omega=d\theta\wedge dh, S^1, \mu=h)$, the image of $\mu$ is the interval $[-1,1]$. The Lebesgue measure of $[a,b] \subseteq [-1,1]$ is $m_0([a,b]) = b-a$. The Duistermaat-Heckman measure of $[a,b]$ is \[ m_{DH}([a,b]) = \int_{\{(\theta,h) \in S^2 \mid a\le h \le b \}} d\theta \ dh = 2\pi(b-a) \ , \] i.e., $m_{DH} = 2\pi \ m_0$. Consequently, {\em the area of the spherical region between two parallel planes depends only on the distance between the planes}, a result that was known to Archimedes\index{theorem ! Archimedes}\index{Archimedes} around 230 BC. 
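This classical fact is easy to confirm numerically. A minimal sketch (the slab endpoints $a$, $b$ are arbitrary test values): parametrize the unit sphere so that the area element is $\sin\varphi\, d\varphi\, d\theta$ with height $h = \cos\varphi$, and integrate over the zone $a \le h \le b$:

```python
import math

def zone_area(a, b, n=100_000):
    # Midpoint-rule Riemann sum for the area of {a <= h <= b} on the unit
    # sphere: dA = sin(phi) dphi dtheta, with h = cos(phi) decreasing in phi.
    phi_lo, phi_hi = math.acos(b), math.acos(a)
    dphi = (phi_hi - phi_lo) / n
    s = sum(math.sin(phi_lo + (k + 0.5) * dphi) for k in range(n))
    return 2 * math.pi * s * dphi

# m_DH([a,b]) = 2*pi*(b - a): it depends only on the slab width b - a,
# not on where the slab sits.
for a, b in [(-1.0, 1.0), (0.0, 0.5), (-0.75, -0.25)]:
    assert abs(zone_area(a, b) - 2 * math.pi * (b - a)) < 1e-6
```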
\end{example} \vspace*{-1ex} \begin{proof} We sketch the proof of Theorem~\ref{thm:dh} for the case $G=S^1$. The proof for the general case, which follows along similar lines, can be found in, for instance, \cite{gu:moment}, besides the original articles. Let $(M, \omega, S^1, \mu)$ be a hamiltonian $S^1$-space of dimension $2n$ and let $(M_x, \omega_x)$ be its reduced space at level $x$. Proposition~\ref{prop:curvature} or Theorem~\ref{thm:dh2} implies that, for $x$ in a sufficiently narrow neighborhood of $0$, the symplectic volume\index{symplectic ! volume}\index{volume} of $M_x$, \[ \mathrm{vol} (M_x) = \int_{M_x} \frac{\omega _x ^{n-1}}{(n-1)!} = \int_{M_{\red}} \frac{(\omega _{\red} - x \beta) ^{n-1}}{(n-1)!} \ , \] is a polynomial in $x$ of degree $n-1$. This volume can also be expressed as \[ \mathrm{vol} (M_x) = \int_{Z} \frac{\pi^* (\omega _{\red} - x \beta) ^{n-1}}{(n-1)!} \wedge \alpha \ , \] where $\alpha$ is a connection form for the $S^1$-bundle $Z \rightarrow M_{\red}$ and $\beta$ is its curvature form. Now we go back to the computation of the Duistermaat-Heckman measure. For a Borel subset ${\mathcal U}$ of $(-\varepsilon, \varepsilon)$, the Duistermaat-Heckman measure is, by definition, \[ m_{DH}({\mathcal U}) = \int_{\mu^{-1}({\mathcal U})} \frac{\omega^n}{n!} \ . \] Using the fact that $(\mu^{-1}(-\varepsilon, \varepsilon), \omega)$ is symplectomorphic to $(Z \times (-\varepsilon, \varepsilon), \sigma)$ and, moreover, they are isomorphic as hamiltonian $S^1$-spaces, we obtain \[ m_{DH}({\mathcal U}) = \int_{Z \times {\mathcal U}} \frac{\sigma^n}{n!} \ . \] Since $\sigma = \pi^* \omega_{\red} - d(x \alpha)$, its power is $\sigma^n = n (\pi^* \omega_{\red} - x d \alpha)^{n-1} \wedge \alpha \wedge dx$. By the Fubini theorem\index{Fubini theorem}\index{theorem ! Fubini}, we then have \[ m_{DH}({\mathcal U}) = \int_{{\mathcal U}} \left[ \int_{Z} \frac{\pi^* (\omega _{\red} - x \beta) ^{n-1}}{(n-1)!} \wedge \alpha \right] dx \ . 
\] Therefore, the Radon-Nikodym derivative of $m_{DH}$ with respect to the Lebesgue measure, $dx$, is \[ f(x) = \int_{Z} \frac{\pi^* (\omega _{\red} - x \beta) ^{n-1}}{(n-1)!} \wedge \alpha = \mathrm{vol} (M_x) \ . \] The previous discussion proves that, for $x \approx 0$, $f(x)$ is a polynomial in $x$. The same holds for a neighborhood of any other regular value of $\mu$, because we may change the moment map $\mu$ by an arbitrary additive constant. \end{proof} Duistermaat and Heckman~\cite{du-he:variation} also applied these results when $M$ is compact to provide a formula for the oscillatory integral $\int_M e^{i\mu^X} \frac{\omega^n}{n!}$ for $X \in {\mathfrak g}$ as a sum of contributions of the fixed points of the action of the one-parameter subgroup generated by $X$. They hence showed that the {\em stationary phase approximation}\footnote{The \textbf{stationary phase lemma} gives the asymptotic behavior (for large $N$) of integrals $\left( \frac{N}{2\pi} \right)^n \int_M f e^{ig} vol$, where $f$ and $g$ are real functions and $vol$ is a volume form on a $2n$-dimensional manifold $M$.} is exact in the case of the moment map. When $G$ is a maximal torus of a compact connected simple Lie group acting on a coadjoint orbit, the Duistermaat-Heckman formula reduces to the Harish-Chandra formula. It was observed by Berline and Vergne~\cite{be-ve:zeros} and by Atiyah and Bott~\cite{at-bo:moment} that the Duistermaat-Heckman formula can be derived by {\em localization in equivariant cohomology}. This is an instance of \textbf{abelian localization}, i.e., a formula for an integral (in equivariant cohomology) in terms of data at the fixed points of the action, and typically is used for the case of abelian groups (or of maximal tori). 
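As a concrete instance of Theorem~\ref{thm:dh} with a nonconstant density, consider the diagonal $S^1$-action on ${\mathbb C}^2$ with moment map $\mu(z) = \frac12(|z_1|^2 + |z_2|^2)$ (sign convention chosen here so that $\mu \ge 0$). Pushing the Liouville measure $\omega_0^2/2!$, which is Lebesgue measure on ${\mathbb R}^4$, forward by $\mu$ gives the degree-one polynomial density $f(t) = (2\pi)^2 t$: the ball $\{\mu \le T\}$ has radius $\sqrt{2T}$ and Lebesgue volume $2\pi^2 T^2$, whose derivative in $T$ is $(2\pi)^2 T$. The Monte Carlo sketch below (sample size, seed, and interval are arbitrary test choices) checks this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Diagonal S^1-action on C^2 with moment map mu(z) = (|z1|^2 + |z2|^2)/2.
# Claimed Duistermaat-Heckman density: f(t) = (2*pi)^2 * t for t >= 0.
a, b = 0.5, 1.0
R = np.sqrt(2 * b)                # the slab {a <= mu <= b} lies in |z| <= R

# Liouville measure omega_0^2/2! on C^2 is Lebesgue measure on R^4:
# estimate m_DH([a,b]) by uniform sampling in the box [-R, R]^4.
pts = rng.uniform(-R, R, size=(400_000, 4))
mu = (pts ** 2).sum(axis=1) / 2
est = np.mean((a <= mu) & (mu <= b)) * (2 * R) ** 4

# Integral of (2*pi)^2 * t over [a, b]:
exact = 2 * np.pi ** 2 * (b ** 2 - a ** 2)
assert abs(est - exact) / exact < 0.02
```

The density is linear rather than constant because this $S^1$ is not part of a full toric ${\mathbb T}^2$-symmetry reduction at a single level: the fibers $\mu^{-1}(t)$ grow with $t$.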
Later \textbf{non-abelian localization} formulas were found, where integrals (in equivariant cohomology) are expressed in terms of data at the zeros of the moment map, normally used for the case of non-abelian groups. Both localizations gave rise to computations of the cohomology ring structure of reduced spaces~\cite{ki:quotients}. \begin{thebibliography}{999} \addcontentsline{toc}{section}{\sf{\textbf{References}}} \markboth{\sf{REFERENCES}}{\sf{REFERENCES}} \thispagestyle{empty} \bibitem{ar:characteristic} Arnold, V., On a characteristic class entering into conditions of quantization, {\em Funkcional Anal.\ i Prilo\v zen} \textbf{1} (1967), 1-14. \bibitem{ar:mathematical} Arnold, V., {\em Mathematical Methods of Classical Mechanics}, Graduate Texts in Math.\ \textbf{60}, Springer-Verlag, New York, 1978. \bibitem{ar-gi:symplectic} Arnold, V., Givental, A., Symplectic geometry, {\em Dynamical Systems IV, Symplectic Geometry and its Applications} (Arnold, V., Novikov, S., eds.), Encyclopaedia of Math.\ Sciences \textbf{4}, Springer-Verlag, Berlin-New York, 1990. \bibitem{at:convexity} Atiyah, M., Convexity and commuting Hamiltonians, {\em Bull.\ London Math.\ Soc.} \textbf{14} (1982), 1-15. \bibitem{at-bo:moment} Atiyah, M., Bott, R., The moment map and equivariant cohomology, {\em Topology} \textbf{23} (1984), 1-28. \bibitem{at-bo:surfaces} Atiyah, M., Bott, R., The Yang-Mills equations over Riemann surfaces, {\em Philos.\ Trans.\ Roy.\ Soc.\ London Ser.\ A} \textbf{308} (1983), 523-615. \bibitem{au:exemples} Audin, M., Exemples de vari\'et\'es presque complexes, {\em Enseign.\ Math.}\ \textbf{37} (1991), 175-190. \bibitem{au:tops} Audin, M., {\em Spinning Tops, A Course on Integrable Systems}, Cambridge Studies in Advanced Mathematics \textbf{51}, Cambridge University Press, Cambridge, 1996. 
\bibitem{au:barcelona} Audin, M., Lagrangian submanifolds, {\em Symplectic Geometry of Integrable Hamiltonian Systems} (Barcelona, 2001), 1-83, {\em Adv.\ Courses Math.\ CRM Barcelona}, Birkh\"auser Verlag, Basel, 2003. \bibitem{au:topology} Audin, M., {\em Torus Actions on Symplectic Manifolds}, Progress in Mathematics \textbf{93}, Birkh\"auser Verlag, Basel, 2004. \bibitem{au-la:holomorphic} Audin, M., Lafontaine, J., Eds., {\em Holomorphic Curves in Symplectic Geometry}, Progress in Mathematics \textbf{117}, Birkh\"auser Verlag, Basel, 1994. \bibitem{au:asymptotically} Auroux, D., Asymptotically holomorphic families of symplectic submanifolds, {\em Geom.\ Funct.\ Anal.}\ \textbf{7} (1997), 971-995. \bibitem{au:branched} Auroux, D., Symplectic 4-manifolds as branched coverings of ${\mathbb C} {\mathbb P}^2$, {\em Invent.\ Math.}\ \textbf{139} (2000), 551-602. \bibitem{be-ve:zeros} Berline, N., Vergne, M., Z\'eros d'un champ de vecteurs et classes caract\'eristiques \'equivariantes, {\em Duke Math.\ J.}\ \textbf{50} (1983), 539-549. \bibitem{bi:packing} Biran, P., A stability property of symplectic packing, {\em Invent.\ Math.}\ \textbf{136} (1999), 123-155. \bibitem{bi:intersections} Biran, P., Geometry of symplectic intersections, {\em Proceedings of the I.C.M.}, vol.\ II (Beijing, 2002), 241-255, Higher Ed.\ Press, Beijing, 2002. \bibitem{bi-ci:subcritical} Biran, P., Cieliebak, K., Symplectic topology on subcritical manifolds, {\em Comment.\ Math.\ Helv.}\ \textbf{76} (2001), 712-753. \bibitem{bi:dynamical} Birkhoff, G., {\em Dynamical Systems}, reprinting of the original 1927 edition, with an addendum by J.\ Moser, Amer.\ Math.\ Soc.\ Colloquium Publications vol.\ IX, Amer.\ Math.\ Soc., Providence, 1966. 
\bibitem{br:groups} Bredon, G., {\em Introduction to Compact Transformation Groups}, Pure and Applied Mathematics \textbf{46}, Academic Press, New York-London, 1972. \bibitem{br:introduction} Bryant, R., An introduction to Lie groups and symplectic geometry, {\em Geometry and Quantum Field Theory} (Park City, UT, 1991), 5-181, {\em IAS/Park City Math.\ Ser.}\ \textbf{1}, Amer.\ Math.\ Soc., Providence, 1995. \bibitem{ca:toric} Cannas da Silva, A., Symplectic toric manifolds, {\em Symplectic Geometry of Integrable Hamiltonian Systems} (Barcelona, 2001), 85-173, {\em Adv.\ Courses Math.\ CRM Barcelona}, Birkh\"auser Verlag, Basel, 2003. \bibitem{ch:finiteness} Cheeger, J., Finiteness theorems for Riemannian manifolds, {\em Amer.\ J.\ Math.}\ \textbf{92} (1970), 61-74. \bibitem{ch:potential} Chern, S.S., {\em Complex Manifolds Without Potential Theory}, with an appendix on the geometry of characteristic classes, second edition, Universitext, Springer-Verlag, New York-Heidelberg, 1979. \bibitem{co-ze:arnold} Conley, C., Zehnder, E., The Birkhoff-Lewis fixed point theorem and a conjecture of V. I. Arnold, {\em Invent.\ Math.}\ \textbf{73} (1983), 33-49. \bibitem{co-ze:morse} Conley, C., Zehnder, E., Morse-type index theory for flows and periodic solutions for Hamiltonian equations, {\em Comm.\ Pure Appl.\ Math.}\ \textbf{37} (1984), 207-253. \bibitem{co:recent} Cox, D., Recent developments in toric geometry, {\em Algebraic Geometry -- Santa Cruz 1995}, 389-436, Proc.\ Sympos.\ Pure Math.\ \textbf{62}, part 2, Amer.\ Math.\ Soc., Providence, 1997. \bibitem{da:toric} Danilov, V., The geometry of toric varieties, {\em Uspekhi Mat.\ Nauk} \textbf{33} (1978), no. 2 (200), 85-134, 247, English translation: {\em Russian Math.\ Surveys} \textbf{33} (1978), no. 2, 97-154. 
\bibitem{de:hamiltoniens} Delzant, T., Hamiltoniens p\'eriodiques et images convexes de l'application moment, {\em Bull.\ Soc.\ Math.\ France} \textbf{116} (1988), 315-339. \bibitem{de:subgroups} Demazure, M., Sous-groupes alg\'ebriques de rang maximum du groupe de Cremona, {\em Ann.\ Sci.\ \'Ecole Norm.\ Sup.}\ (4) \textbf{3} (1970), 507-588. \bibitem{do:gauge} Donaldson, S., An application of gauge theory to four-dimensional topology, {\em J.\ Differential Geom.}\ \textbf{18} (1983), 279-315. \bibitem{do:irrationality} Donaldson, S., Irrationality and the h-cobordism conjecture, {\em J.\ Differential Geom.}\ \textbf{26} (1987), 141-168. \bibitem{do:almost} Donaldson, S., Symplectic submanifolds and almost-complex geometry, {\em J.\ Differential Geom.}\ \textbf{44} (1996), 666-705. \bibitem{do:congress} Donaldson, S., Lefschetz fibrations in symplectic geometry, {\em Proceedings of the I.C.M.}, vol.\ II (Berlin, 1998), {\em Doc.\ Math.}\ \textbf{1998}, extra vol.\ II, 309-314. \bibitem{do:pencils} Donaldson, S., Lefschetz pencils on symplectic manifolds, {\em J.\ Differential Geom.}\ \textbf{53} (1999), 205-236. \bibitem{do:floer} Donaldson, S., {\em Floer Homology Groups in Yang-Mills Theory}, with the assistance of M.\ Furuta and D.\ Kotschick, Cambridge Tracts in Mathematics \textbf{147}, Cambridge University Press, Cambridge, 2002. \bibitem{du:global} Duistermaat, J.J., On global action-angle coordinates, {\em Comm.\ Pure Appl.\ Math.} \textbf{33} (1980), 687-706. \bibitem{du:heat} Duistermaat, J.J., {\em The Heat Kernel Lefschetz Fixed Point Formula for the Spin-c Dirac Operator}, Progress in Nonlinear Differential Equations and their Applications \textbf{18}, Birkh\"auser Boston, Inc., Boston, 1996. 
\bibitem{du-he:variation} Duistermaat, J.J., Heckman, G., On the variation in the cohomology of the symplectic form of the reduced phase space, {\em Invent.\ Math.} \textbf{69} (1982), 259-268; Addendum, {\em Invent.\ Math.} \textbf{72} (1983), 153-158. \bibitem{el:fixed_points} Eliashberg, Y., Estimates on the number of fixed points of area-preserving transformations, Syktyvkar University, preprint, 1979. \bibitem{el-gi-ho:field} Eliashberg, Y., Givental, A., Hofer, H., Introduction to symplectic field theory, GAFA 2000 (Tel Aviv, 1999), {\em Geom.\ Funct.\ Anal.}\ 2000, special volume, part II, 560-673. \bibitem{el-gr:lagrangian} Eliashberg, Y., Gromov, M., Lagrangian intersection theory: finite-dimensional approach, {\em Geometry of Differential Equations}, 27-118, Amer.\ Math.\ Soc.\ Transl.\ Ser.\ 2, \textbf{186}, Amer.\ Math.\ Soc., Providence, 1998. \bibitem{el-mi:principle} Eliashberg, Y., Mishachev, N., {\em Introduction to the h-Principle}, Graduate Studies in Mathematics \textbf{48}, Amer.\ Math.\ Soc., Providence, 2002. \bibitem{el-tr:park} Eliashberg, Y., Traynor, L., Eds., {\em Symplectic Geometry and Topology}, lectures from the Graduate Summer School held in Park City, June 29-July 19, 1997, IAS/Park City Mathematics Series \textbf{7}, Amer.\ Math.\ Soc., Providence, 1999. \bibitem{fe-go-gr:symplectic} Fern\'andez, M., Gotay, M., Gray, A., Compact parallelizable four-dimensional symplectic and complex manifolds, {\em Proc.\ Amer.\ Math.\ Soc.} \textbf{103} (1988), 1209-1212. \bibitem{fi-st:knots} Fintushel, R., Stern, R., Knots, links, and 4-manifolds, {\em Invent.\ Math.}\ \textbf{134} (1998), 363-400. \bibitem{fl:relative} Floer, A., A relative Morse index for the symplectic action, {\em Comm.\ Pure Appl.\ Math.}\ \textbf{41} (1988), 393-407. 
\bibitem{fl:gradient} Floer, A., The unregularized gradient flow of the symplectic action, {\em Comm.\ Pure Appl.\ Math.}\ \textbf{41} (1988), 775-813. \bibitem{fl:lagrangian} Floer, A., Morse theory for Lagrangian intersections, {\em J.\ Differential Geom.}\ \textbf{28} (1988), 513-547. \bibitem{fl:holomorphic} Floer, A., Symplectic fixed points and holomorphic spheres, {\em Comm.\ Math.\ Phys.}\ \textbf{120} (1989), 575-611. \bibitem{fl:witten} Floer, A., Witten's complex and infinite-dimensional Morse theory, {\em J.\ Differential Geom.}\ \textbf{30} (1989), 207-221. \bibitem{fr:topology} Freedman, M., The topology of four-dimensional manifolds, {\em J.\ Differential Geom.}\ \textbf{17} (1982), 357-453. \bibitem{fu-on:arnold} Fukaya, K., Ono, K., Arnold conjecture and Gromov-Witten invariant, {\em Topology} \textbf{38} (1999), 933-1048. \bibitem{fu:toric} Fulton, W., {\em Introduction to Toric Varieties}, Annals of Mathematics Studies \textbf{131}, Princeton University Press, Princeton, 1993. \bibitem{ga-ki:circles} Gay, D., Kirby, R., Constructing symplectic forms on 4-manifolds which vanish on circles, {\em Geom.\ Topol.}\ \textbf{8} (2004), 743-777. \bibitem{gi:periodic} Givental, A., Periodic mappings in symplectic topology (Russian), {\em Funktsional.\ Anal.\ i Prilozhen} \textbf{23} (1989), 37-52, translation in {\em Funct.\ Anal.\ Appl.}\ \textbf{23} (1989), 287-300. \bibitem{go-ti:moduli} Golubitsky, M., Tischler, D., An example of moduli for singular symplectic forms, {\em Invent.\ Math.}\ \textbf{38} (1976/77), 219-225. \bibitem{go:new} Gompf, R., A new construction of symplectic manifolds, {\em Ann.\ of Math.} \textbf{142} (1995), 527-595. 
\bibitem{go:characterization} Gompf, R., Toward a topological characterization of symplectic manifolds, to appear in {\em J.\ Symp.\ Geom.} \bibitem{go:Lefschetz} Gompf, R., Symplectic structures from Lefschetz pencils in high dimensions, {\em Geometry and Topology Monographs} \textbf{7} (2004), 267-290. \bibitem{go-st:calculus} Gompf, R., Stipsicz, A., {\em 4-Manifolds and Kirby Calculus}, {\em Graduate Studies in Mathematics} \textbf{20}, Amer.\ Math.\ Soc., Providence, 1999. \bibitem{go:coisotropic} Gotay, M., On coisotropic imbeddings of presymplectic manifolds, {\em Proc.\ Amer.\ Math.\ Soc.}\ \textbf{84} (1982), 111-114. \bibitem{gr-ha:principles} Griffiths, P., Harris, J., {\em Principles of Algebraic Geometry}, reprint of the 1978 original, Wiley Classics Library, John Wiley \& Sons, Inc., New York, 1994. \bibitem{gr:stable} Gromov, M., Stable mappings of foliations into manifolds, {\em Izv.\ Akad.\ Nauk SSSR Ser.\ Mat.}\ \textbf{33} (1969), 707-734. \bibitem{gr:pseudo} Gromov, M., Pseudoholomorphic curves in symplectic manifolds, {\em Invent.\ Math.}\ \textbf{82} (1985), 307-347. \bibitem{gr:partial} Gromov, M., {\em Partial Differential Relations}, Ergebnisse der Mathematik und ihrer Grenzgebiete \textbf{9}, Springer-Verlag, Berlin-New York, 1986. \bibitem{gu:moment} Guillemin, V., {\em Moment Maps and Combinatorial Invariants of Hamiltonian $T^n$-spaces}, Progress in Mathematics \textbf{122}, Birkh\"auser, Boston, 1994. \bibitem{gu-gi-ka:moment} Guillemin, V., Ginzburg, V., Karshon, Y., {\em Moment Maps, Cobordisms, and Hamiltonian Group Actions}, with appendix J by M.\ Braverman, {\em Mathematical Surveys and Monographs} \textbf{98}, Amer.\ Math.\ Soc., Providence, 2002. \bibitem{gu-st:convexity} Guillemin, V., Sternberg, S., Convexity properties of the moment mapping, {\em Invent.\ Math.} \textbf{67} (1982), 491-513. 
\bibitem{gu-st:birational} Guillemin, V., Sternberg, S., Birational equivalence in the symplectic category, {\em Invent.\ Math.}\ \textbf{97} (1989), 485-522. \bibitem{gu-st:techniques} Guillemin, V., Sternberg, S., {\em Symplectic Techniques in Physics}, second edition, Cambridge University Press, Cambridge, 1990. \bibitem{ha-la:calibrated} Harvey, R., Lawson, H. B., Calibrated geometries, {\em Acta Math.}\ \textbf{148} (1982), 47-157. \bibitem{hi-se-wa:integrable} Hitchin, N., Segal, G., Ward, R., {\em Integrable Systems. Twistors, Loop Groups, and Riemann Surfaces}, Oxford Graduate Texts in Mathematics \textbf{4}, The Clarendon Press, Oxford University Press, New York, 1999. \bibitem{ho:theory} Hodge, W., {\em The Theory and Applications of Harmonic Integrals}, 2nd edition, Cambridge University Press, Cambridge, 1952. \bibitem{ho-sa:floer} Hofer, H., Salamon, D., Floer homology and Novikov rings, {\em The Floer Memorial Volume}, 483-524, Progress in Mathematics \textbf{133}, Birkh\"auser, Basel, 1995. \bibitem{ho:harmonic} Honda, K., Transversality theorems for harmonic forms, {\em Rocky Mountain J.\ Math.}\ \textbf{34} (2004), 629-664. \bibitem{ho:several} H\"ormander, L., {\em An Introduction to Complex Analysis in Several Variables}, third edition, North-Holland Mathematical Library \textbf{7}, North-Holland Publishing Co., Amsterdam-New York, 1990. \bibitem{ja:lie} Jacobson, N., {\em Lie Algebras}, republication of the 1962 original, Dover Publications, Inc., New York, 1979. \bibitem{je-ki:localization} Jeffrey, L., Kirwan, F., Localization for nonabelian group actions, {\em Topology} \textbf{34} (1995), 291-327. \bibitem{ke-kn-mu-sa:toroidal} Kempf, G., Knudsen, F., Mumford, D., Saint-Donat, B., {\em Toroidal Embeddings, I}, Lecture Notes in Mathematics \textbf{339}, Springer-Verlag, Berlin-New York, 1973.
\bibitem{ki:quotients} Kirwan, F., {\em Cohomology of Quotients in Symplectic and Algebraic Geometry}, Mathematical Notes \textbf{31}, Princeton University Press, Princeton, 1984. \bibitem{ki:convexity_III} Kirwan, F., Convexity properties of the moment mapping, III, {\em Invent.\ Math.} \textbf{77} (1984), 547-552. \bibitem{ko:surfacesI} Kodaira, K., On the structure of compact complex analytic surfaces, I, {\em Amer.\ J.\ Math.} \textbf{86} (1964), 751-798. \bibitem{ko:harmonic} Kohn, J., Harmonic integrals on strongly pseudo-convex manifolds I, {\em Ann.\ of Math.}\ \textbf{78} (1963), 112-148. \bibitem{ko:homeomorphic} Kotschick, D., On manifolds homeomorphic to ${\mathbb C} {\mathbb P} ^2 \# 8 \overline{{\mathbb C} {\mathbb P} ^2}$, {\em Invent.\ Math.}\ \textbf{95} (1989), 591-600. \bibitem{la-mc:classification} Lalonde, F., McDuff, D., J-curves and the classification of rational and ruled symplectic 4-manifolds, {\em Contact and Symplectic Geometry (Cambridge, 1994)}, 3-42, {\em Publ.\ Newton Inst.}\ \textbf{8}, Cambridge Univ.\ Press, Cambridge, 1996. \bibitem{le:cuts} Lerman, E., Symplectic cuts, {\em Math.\ Res.\ Lett.}\ \textbf{2} (1995), 247-258. \bibitem{le-to} Lerman, E., Tolman, S., Hamiltonian torus actions on symplectic orbifolds and toric varieties, {\em Trans.\ Amer.\ Math.\ Soc.}\ \textbf{349} (1997), 4201-4230. \bibitem{li-liu:symplectic} Li, T., Liu, A., Symplectic structure on ruled surfaces and a generalized adjunction formula, {\em Math.\ Res.\ Lett.}\ \textbf{2} (1995), 453-471. \bibitem{li:wall} Liu, A.-K., Some new applications of general wall crossing formula, Gompf's conjecture and its applications, {\em Math.\ Res.\ Lett.}\ \textbf{3} (1996), 569-585. \bibitem{li-ti:arnold} Liu, G., Tian, G., Floer homology and Arnold conjecture, {\em J.\ Differential Geom.}\ \textbf{49} (1998), 1-74.
\bibitem{ma-ra:introduction} Marsden, J., Ratiu, T., {\em Introduction to Mechanics and Symmetry, A Basic Exposition of Classical Mechanical Systems}, Texts in Applied Mathematics \textbf{17}, Springer-Verlag, New York, 1994. \bibitem{ma-we:reduction} Marsden, J., Weinstein, A., Reduction of symplectic manifolds with symmetry, {\em Rep.\ Mathematical Phys.} \textbf{5} (1974), 121-130. \bibitem{ma:perturbation} Maslov, V., {\em Perturbation Theory and Asymptotic Methods} (in Russian), {\em Izdat.\ Moskov Univ.}, Moscow, 1965. \bibitem{mc:examples} McDuff, D., Examples of simply-connected symplectic non-K\"ahlerian manifolds, {\em J.\ Differential Geom.}\ \textbf{20} (1984), 267-277. \bibitem{mc:rational} McDuff, D., Rational and ruled symplectic $4$-manifolds, {\em Geometry of low-dimensional manifolds} 2 (Durham, 1989), 7-14, {\em London Math.\ Soc.\ Lecture Note Ser.}\ \textbf{151}, Cambridge Univ.\ Press, Cambridge, 1990. \bibitem{mc:structure} McDuff, D., The structure of rational and ruled symplectic 4-manifolds, {\em J.\ Amer.\ Math.\ Soc.}\ \textbf{3} (1990), 679-712. \bibitem{mc:groups} McDuff, D., Lectures on groups of symplectomorphisms, {\em Rend.\ Circ.\ Mat.\ Palermo} (2) \textbf{72} (2004), 43-78. \bibitem{mc-po:packing} McDuff, D., Polterovich, L., Symplectic packings and algebraic geometry, with an appendix by Yael Karshon, {\em Invent.\ Math.}\ \textbf{115} (1994), 405-434. \bibitem{mc-sa:curves} McDuff, D., Salamon, D., {\em J-holomorphic Curves and Symplectic Topology}, Amer.\ Math.\ Soc.\ Colloquium Publications \textbf{52}, Amer.\ Math.\ Soc., Providence, 2004. \bibitem{mc-sa:introduction} McDuff, D., Salamon, D., {\em Introduction to Symplectic Topology}, Oxford Mathematical Monographs, Oxford University Press, New York, 1995.
\bibitem{mc-ta:inequivalent} McMullen, C., Taubes, C., 4-manifolds with inequivalent symplectic forms and 3-manifolds with inequivalent fibrations, {\em Math.\ Res.\ Lett.}\ \textbf{6} (1999), 681-696. \bibitem{me:symmetries} Meyer, K., Symmetries and integrals in mechanics, {\em Dynamical Systems} (Proc.\ Sympos., Univ.\ Bahia, Salvador, 1971), 259-272, Academic Press, New York, 1973. \bibitem{mi:morse} Milnor, J., {\em Morse Theory}, based on lecture notes by M.\ Spivak and R.\ Wells, Annals of Mathematics Studies \textbf{51}, Princeton University Press, Princeton, 1963. \bibitem{mo:calculus} Morse, M., The foundations of a theory in the calculus of variations in the large, {\em Trans.\ Amer.\ Math.\ Soc.}\ \textbf{30} (1928), 213-274. \bibitem{mo:volume} Moser, J., On the volume elements on a manifold, {\em Trans.\ Amer.\ Math.\ Soc.} \textbf{120} (1965), 286-294. \bibitem{ne-ni:complex} Newlander, A., Nirenberg, L., Complex analytic coordinates in almost complex manifolds, {\em Ann.\ of Math.} \textbf{65} (1957), 391-404. \bibitem{od:toric} Oda, T., {\em Convex Bodies and Algebraic Geometry -- An Introduction to the Theory of Toric Varieties}, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) \textbf{15}, Springer-Verlag, Berlin, 1988. \bibitem{on:arnold} Ono, K., On the Arnold conjecture for weakly monotone symplectic manifolds, {\em Invent.\ Math.}\ \textbf{119} (1995), 519-537. \bibitem{pa:non-complex} Park, J., Non-complex symplectic 4-manifolds with $b_2^+=1$, {\em Bull.\ London Math.\ Soc.}\ \textbf{36} (2004), 231-240. \bibitem{pa:symplectic} Park, J., Simply connected symplectic 4-manifolds with $b_2^+ =1$ and $c_1^2 =2$, to appear in {\em Invent.\ Math.} \bibitem{ru:algebraic} Ruan, Y., Symplectic topology on algebraic 3-folds, {\em J.\ Differential Geom.}\ \textbf{39} (1994), 215-227.
\bibitem{ru:sigma_models} Ruan, Y., Topological sigma model and Donaldson-type invariants in Gromov theory, {\em Duke Math.\ J.}\ \textbf{83} (1996), 461-500. \bibitem{sa:pcmi} Salamon, D., Lectures on Floer homology, {\em Symplectic Geometry and Topology} (Eliashberg, Y., Traynor, L., eds.), 143-229, IAS/Park City Math.\ Ser.\ \textbf{7}, Amer.\ Math.\ Soc., Providence, 1999. \bibitem{sa:orbifolds} Satake, I., On a generalization of the notion of manifold, {\em Proc.\ Nat.\ Acad.\ Sci.\ U.S.A.} \textbf{42} (1956), 359-363. \bibitem{se:graded} Seidel, P., Graded Lagrangian submanifolds, {\em Bull.\ Soc.\ Math.\ France} \textbf{128} (2000), 103-149. \bibitem{si:points} Sikorav, J.-C., Points fixes d'un symplectomorphisme homologue \`a l'identit\'e, {\em C.\ R.\ Acad.\ Sci.\ Paris} S\'er.\ I Math.\ \textbf{299} (1984), 343-346. \bibitem{sm:moduli} Smith, I., On moduli spaces of symplectic forms, {\em Math.\ Res.\ Lett.}\ \textbf{7} (2000), 779-788. \bibitem{sm:monodromy} Smith, I., Geometric monodromy and the hyperbolic disc, {\em Q.\ J.\ Math.}\ \textbf{52} (2001), 217-228. \bibitem{so:structure} Souriau, J.-M., {\em Structure des Syst\`emes Dynamiques}, Ma\^\i trises de Math\'ematiques, Dunod, Paris, 1970. \bibitem{spivak:comprehensive} Spivak, M., {\em A Comprehensive Introduction to Differential Geometry}, vol.\ I, second edition, Publish or Perish, Inc., Wilmington, 1979. \bibitem{st:fibre_bundles} Steenrod, N., {\em The Topology of Fibre Bundles}, Princeton Mathematical Series \textbf{14}, Princeton University Press, Princeton, 1951. \bibitem{st:geography} Stipsicz, A., The geography problem of 4-manifolds with various structures, {\em Acta Math.\ Hungar.}\ \textbf{7} (2000), 267-278. \bibitem{sy:blowdown} Symington, M., Symplectic rational blowdowns, {\em J.\ Differential Geom.}\ \textbf{50} (1998), 505-518.
\bibitem{sz:exotic} Szab\'o, Z., Exotic 4-manifolds with $b^+_2 =1$, {\em Math.\ Res.\ Lett.}\ \textbf{3} (1996), 731-741. \bibitem{sz:irreducible} Szab\'o, Z., Simply-connected irreducible 4-manifolds with no symplectic structures, {\em Invent.\ Math.}\ \textbf{132} (1998), 457-466. \bibitem{ta:invariants} Taubes, C., The Seiberg-Witten invariants and symplectic forms, {\em Math.\ Res.\ Lett.} \textbf{1} (1994), 809-822. \bibitem{ta:more} Taubes, C., More constraints on symplectic forms from Seiberg-Witten invariants, {\em Math.\ Res.\ Lett.}\ \textbf{2} (1995), 9-13. \bibitem{ta:sw=gr} Taubes, C., The Seiberg-Witten and Gromov invariants, {\em Math.\ Res.\ Lett.}\ \textbf{2} (1995), 221-238. \bibitem{ta:sw=>gr} Taubes, C., ${\rm SW}\Rightarrow{\rm Gr}$: from the Seiberg-Witten equations to pseudo-holomorphic curves, {\em J.\ Amer.\ Math.\ Soc.}\ \textbf{9} (1996), 845-918. \bibitem{ta:harmonic} Taubes, C., Seiberg-Witten invariants and pseudo-holomorphic subvarieties for self-dual, harmonic 2-forms, {\em Geom.\ Topol.}\ \textbf{3} (1999), 167-210. \bibitem{th:examples} Thurston, W., Some simple examples of symplectic manifolds, {\em Proc.\ Amer.\ Math.\ Soc.} \textbf{55} (1976), 467-468. \bibitem{ti:embedding} Tischler, D., Closed 2-forms and an embedding theorem for symplectic manifolds, {\em J.\ Differential Geometry} \textbf{12} (1977), 229-235. \bibitem{to-we:cohomology} Tolman, S., Weitsman, J., The cohomology rings of symplectic quotients, {\em Comm.\ Anal.\ Geom.}\ \textbf{11} (2003), 751-773. \bibitem{tr:packing} Traynor, L., Symplectic packing constructions, {\em J.\ Differential Geom.}\ \textbf{42} (1995), 411-429. \bibitem{vi:generating} Viterbo, C., Symplectic topology as the geometry of generating functions, {\em Math.\ Ann.}\ \textbf{292} (1992), 685-710.
\bibitem{we:lagrangian} Weinstein, A., Symplectic manifolds and their Lagrangian submanifolds, {\em Advances in Math.} \textbf{6} (1971), 329-346. \bibitem{we:lectures} Weinstein, A., {\em Lectures on Symplectic Manifolds}, Regional Conference Series in Mathematics \textbf{29}, Amer.\ Math.\ Soc., Providence, 1977. \bibitem{we:fat} Weinstein, A., Fat bundles and symplectic manifolds, {\em Adv.\ in Math.}\ \textbf{37} (1980), 239-250. \bibitem{we:isotropic} Weinstein, A., Neighborhood classification of isotropic embeddings, {\em J.\ Differential Geom.} \textbf{16} (1981), 125-128. \bibitem{we:conley_zehnder} Weinstein, A., On extending the Conley-Zehnder fixed point theorem to other manifolds, {\em Nonlinear Functional Analysis and its Applications}, Part 2 (Berkeley, 1983), 541-544, {\em Proc.\ Sympos.\ Pure Math.}\ \textbf{45}, Part 2, Amer.\ Math.\ Soc., Providence, 1986. \bibitem{we:complex} Wells, R.O., {\em Differential Analysis on Complex Manifolds}, second edition, Graduate Texts in Mathematics \textbf{65}, Springer-Verlag, New York-Berlin, 1980. \bibitem{we:classical} Weyl, H., {\em The Classical Groups. Their Invariants and Representations}, Princeton Landmarks in Mathematics, Princeton University Press, Princeton, 1997. \bibitem{wh:extension} Whitney, H., Analytic extensions of differentiable functions defined in closed sets, {\em Trans.\ Amer.\ Math.\ Soc.}\ \textbf{36} (1934), 63-89. \bibitem{wi:morse_theory} Witten, E., Supersymmetry and Morse theory, {\em J.\ Differential Geom.}\ \textbf{17} (1982), 661-692. \bibitem{wi:sigma_models} Witten, E., Topological sigma models, {\em Comm.\ Math.\ Phys.}\ \textbf{118} (1988), 411-449. \bibitem{wu:classes} Wu, W.-T., Sur les classes caract\'eristiques des structures fibr\'ees sph\'eriques, {\em Actualit\'es Sci.\ Ind.}\ \textbf{1183} (1952). \end{thebibliography} } \end{document}
\begin{document} \title[Global sections of twisted normal bundles]{Global sections of twisted normal bundles of $K3$ surfaces and their hyperplane sections} \author{Andreas Leopold Knutsen} \address{Andreas Leopold Knutsen, Department of Mathematics, University of Bergen, Postboks 7800, 5020 Bergen, Norway} \email{[email protected]} \begin{abstract} Let $S \subset {\mathbb P}^g$ be a smooth $K3$ surface of degree $2g-2$, $g \geq 3$. We classify all the cases for which $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) \neq 0$ and the cases for which $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) < h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))$ for $C \subset {\mathbb P}^{g-1}$ a general canonical curve section of $S$. \end{abstract} \maketitle \section{Introduction} The spaces of global sections of twists of normal bundles of an embedded variety $X \subset {\mathbb P}^n$ in projective space occur naturally in many ways, for instance in the deformation theory of the cone over $X$, cf. \cite{Pi}. More specifically, the spaces $H^0({\mathcal N}_{X/{\mathbb P}^n}(-k))$ for $k=1,2$ are related to extendability properties of $X$, as we now briefly recall. An {\it $r$-step extension} of a smooth variety $X \subset {\mathbb P}^n$ is a projective variety $W \subset {\mathbb P}^{n+r}$ such that $X$ is the transversal intersection of $W$ with a ${\mathbb P}^n \subset {\mathbb P}^{n+r}$. If $W$ is not a cone, then the extension is called {\it nontrivial}, and $X$ is called {\it $r$-extendable}. A famous theorem of Zak-Lvovski \cite{Zak,Lvov} states that if $X$ is not a quadric, and $h^0({\mathcal N}_{X/{\mathbb P}^n}(-1)) < \min\{n+1+r,2n+1\}$, then $X$ is not $r$-extendable. Quite remarkably, a converse of the theorem of Zak-Lvovski was recently obtained in \cite[Thms.
2.1 and 2.19]{cds} in the case of $X$ a canonical curve or a $K3$ surface, to the effect that $h^0({\mathcal N}_{X/{\mathbb P}^n}(-1)) \geq n+1+r$ is a {\it sufficient} condition for $r$-extendability, provided that the curve (respectively, any smooth hyperplane section of the surface) has genus at least $11$ and Clifford index at least $3$. Whereas $H^0({\mathcal N}_{X/{\mathbb P}^n}(-1))$ is connected with the {\it existence} of nontrivial $r$-extensions $X \subset W$, the space $H^0({\mathcal N}_{X/{\mathbb P}^n}(-2))$ is connected with {\it uniqueness}, as was proved by Wahl \cite[Thm. 1.9 and Thm. 2.8]{W-CY}: If $H^0({\mathcal N}_{X/{\mathbb P}^n}(-2))=0$, then $W$ is uniquely determined by its Kodaira-Spencer map (see \cite[(1.4)]{W-CY} for its definition), and if in addition the Kodaira-Spencer map is an isomorphism, then $W$ is {\it universal}, meaning that every extension of $X$ is equivalent to a (possibly trivial) cone over a unique subextension (see also \cite[\S 4]{cds}). Much attention has been devoted to canonical curves and $K3$ surfaces. We refer to the recent works \cite{abs,cds}, and recall, as another instance, that considerations as above led to the proof that a curve of genus $g \geq 11$, $g \neq 12$, lying on a $K3$ surface is generically contained in at most one such surface \cite{clm}. In this paper we will concentrate on the computation of the spaces of global sections $H^0({\mathcal N}_{S/{\mathbb P}^g}(-k))$ for $k \geq 2$ in the case of $K3$ surfaces $S \subset {\mathbb P}^g$ of degree $2g-2$. (The case of {\it general} $K3$ surfaces was treated in \cite{clm-class}.) Their hyperplane sections are canonical curves $C \subset {\mathbb P}^{g-1}$ of genus $g$ and we start by recalling the known results in this case, before stating our main result (Proposition \ref{prop:main2} below).
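For orientation, we spell out the numerology of the Zak-Lvovski criterion in the two cases of interest here; this is a straightforward substitution. For a canonical curve $C \subset {\mathbb P}^{g-1}$ one has $n=g-1$, so $C$ is not $r$-extendable as soon as
\[
h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-1)) < \min\{g+r,\, 2g-1\},
\]
which, in view of \eqref{eq:gauss1} below, is for $r \leq g-1$ equivalent to $\operatorname{cork} \Phi_{\omega_C,\omega_C} < r$. Likewise, a $K3$ surface $S \subset {\mathbb P}^g$ of degree $2g-2$ is not $r$-extendable as soon as $h^0({\mathcal N}_{S/{\mathbb P}^g}(-1)) < \min\{g+1+r,\, 2g+1\}$.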
The cohomology groups in question are related to the well-known gaussian maps \[ \Phi_{\omega_C,\omega_C^{\otimes l}}: \operatorname{ker} \mu_{\omega_C,\omega_C^{\otimes l}} \longrightarrow H^0(\omega_C^{\otimes(l+1)}), \] where $\operatorname{ker} \mu_{\omega_C,\omega_C^{\otimes l}}$ is the kernel of the multiplication map \[ \mu_{\omega_C, \omega_C^{\otimes l}}: H^0(\omega_C) \otimes H^0(\omega_C^{\otimes l}) \to H^0(\omega_C^{\otimes(l+1)}),\] and the map $\Phi_{\omega_C,\omega_C^{\otimes l}}$ is (essentially) defined by sending $\sigma \otimes \tau$ to $d\sigma \otimes \tau-\sigma \otimes d\tau$. We have (cf. \cite{W1} or \cite[Prop. 1.2]{CM1}): \begin{eqnarray} \label{eq:gauss1} h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-1)) & = & g + \operatorname{cork}\Phi_{\omega_C,\omega_C}, \\ \label{eq:gauss2} h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-j)) & = & \operatorname{cork}\Phi_{\omega_C,\omega_C^{\otimes j}} \; \; \mbox{for} \; \; j \geq 2. \end{eqnarray} The study of $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-1))$, or equivalently, of the corank of the gaussian map $\Phi_{\omega_C,\omega_C}$, is a tricky question with a history of its own. We refer for instance to the works \cite{W2,W,W3,chm,CM1,CM2} and the very recent works on $K3$ surfaces \cite{abs,cds}. It is still an open question to determine the possible values of this corank for all curves, although the value is known for general curves and for a general curve of any fixed gonality. We will in the following concentrate on the dimensions $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-j))$ for $j \geq 2$ and restrict our attention to the cases $g \geq 5$, as otherwise the canonical model is a complete intersection and the cohomology groups can be easily calculated. Recall the following well-known fact, cf.
\cite[\S 2]{W2}, \cite[Lemma 2.7(ii)]{KL-gauss} or \cite[Lemma 3.5]{cds}: \begin{lemma} \label{lemma:well-known} Let $X \subset {\mathbb P}^n$ be a locally complete intersection variety such that the homogeneous ideal of $X$ is generated by quadrics and the first syzygy module is generated by linear syzygies. Then $h^0({\mathcal N}_{X/{\mathbb P}^n}(-k))=0$ for all $k \geq 2$. \end{lemma} An immediate consequence of this, together with Petri's theorem and results on syzygies of tetragonal curves by Schreyer \cite{sc} and Voisin \cite{Vo}, is the following well-known fact that can also be deduced from \cite[Thm. 2]{BEL} and \eqref{eq:gauss2}: \begin{corollary} \label{cor:well-knowncancurve} Let $C \subset {\mathbb P}^{g-1}$ be a canonically embedded (nonhyperelliptic) curve of genus $g \geq 3$. If $\operatorname{Cliff} C \geq 3$, then $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-k))=0$ for all $k \geq 2$. \end{corollary} Secondly, as the gaussian maps $\Phi_{\omega_C,\omega_C^{\otimes l}}$ are well-known to be surjective for $l \geq 3$ and $g \geq 5$, cf., e.g., \cite[Cor. 2.10 and Prop. 2.11]{KL-gauss} for a proof, we have by \eqref{eq:gauss2} that \begin{equation} \label{eq:Ckgeq3} h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-k))=0 \; \; \mbox{for all} \; \;k \geq 3 \; \; \mbox{(when $g \geq 5$)}. \end{equation} The cases left are the cases $k=2$ for curves of Clifford indices one and two, that is, trigonal and tetragonal curves, as well as curves isomorphic to smooth plane quintics or sextics. The possible values have been computed in various works, cf. \cite{DS,Te,BEL,CM2}, although a complete statement seems to be missing in the literature.
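To illustrate that the hypotheses of Lemma \ref{lemma:well-known} cannot be dropped, consider the excluded low-genus case $g=4$: the canonical model is a complete intersection of a quadric and a cubic in ${\mathbb P}^3$, so its homogeneous ideal is not generated by quadrics alone, and indeed the vanishing fails. One has
\[
{\mathcal N}_{C/{\mathbb P}^3} \cong {\mathcal O}_C(2) \oplus {\mathcal O}_C(3), \qquad {\mathcal O}_C(1) \cong \omega_C
\]
(the latter by adjunction), whence
\[
h^0({\mathcal N}_{C/{\mathbb P}^3}(-2)) = h^0({\mathcal O}_C) + h^0(\omega_C) = 1+4 = 5 \neq 0.
\]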
We give all possible values of $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))$ for such curves in Propositions \ref{prop:c=1}, \ref{prop:c=1SPQ}, \ref{prop:c=2} and \ref{prop:c=2SPS}, and remark that all the possible values actually do occur (more precisely, for curves on $K3$ surfaces, cf. Remark \ref{rem:SMP}, Example \ref{ex:genere7} and Proposition \ref{prop:main2}). In particular, we see that \begin{equation} \label{eq:Ck2} h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=0 \; \; \mbox{if $g \geq 11$ and $C$ is not bielliptic}. \end{equation} Since hyperplane sections of $K3$ surfaces are canonical curves, an immediate consequence of Lemma \ref{lemma:well-known}, together with Green's hyperplane section theorem on syzygies \cite[Thm. 3.b.7]{gr}, is the following fact, also well-known. Recall that the Clifford index is constant among all smooth curves in a complete linear system on a $K3$ surface, cf. \cite{gl}. \begin{corollary} \label{cor:well-knownK3} Let $S \subset {\mathbb P}^g$ be a (possibly singular) projective model of a $K3$ surface of degree $2g-2$. Let $c$ be the Clifford index of the smooth hyperplane sections of $S$. If $c \geq 3$, then $h^0({\mathcal N}_{S/{\mathbb P}^g}(-k))=0$ for all $k \geq 2$. \end{corollary} Here, by a projective model of a $K3$ surface of degree $2g-2$ in ${\mathbb P}^g$, we mean the image under a birational morphism $\varphi_H$ defined by a complete linear system $|H|$ on a smooth $K3$ surface, where $H \in \operatorname{Pic} S$. As is well-known, $H^2=2(g-1)$, the general members of such a linear system are smooth, nonhyperelliptic curves of genus $g$, and the morphism $\varphi_H$ is an isomorphism except for the possible contraction of (chains of) smooth rational curves. The image surface is normal, with at most isolated, rational double points as singularities, cf. \cite{SD}. It is very easy to see, cf.
Lemma \ref{lemma:primo}, that \[ h^0({\mathcal N}_{S/{\mathbb P}^g}(-k)) \leq \sum_{j=0}^{\infty} h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-k-j)).\] Hence immediate consequences of \eqref{eq:Ckgeq3} are \begin{equation} \label{eq:Skgeq3} h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-k))=0 \; \; \mbox{for all} \; \;k \geq 3 \; \; \mbox{(when $g \geq 5$)} \end{equation} and \begin{equation} \label{eq:SCk2} h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-2)) \leq h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2)) \; \; \mbox{(when $g \geq 5$)} \end{equation} (again for a possibly singular projective model $S \subset {\mathbb P}^g$ of a $K3$ surface). Moreover, as there are no bielliptic curves of genus $g \geq 11$ on a $K3$ surface by a result of Reid's \cite[Cor. 2]{Reid}, an immediate consequence of \eqref{eq:Ck2} is that \begin{equation} \label{eq:Sk2} h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-2))=0 \; \; \mbox{if $g \geq 11$}. \end{equation} (This was already implicitly contained in \cite[Pf. of Thm. 3.2]{clm-class}.) The main result of this paper gives an explicit classification of all {\it smooth} projective models of $K3$ surfaces such that $h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-2)) \neq 0$: \begin{proposition} \label{prop:main2} Let $S \subset {\mathbb P}^g$ be a smooth $K3$ surface of degree $2g-2$, with $g \geq 5$. If $g=5$, then $h^0({\mathcal N}_{S/{\mathbb P}^5}(-2))=3$, and if $g=6$, then $h^0({\mathcal N}_{S/{\mathbb P}^6}(-2))=1$.
If $g \geq 7$, then $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))=0$ except for the following cases, where $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))=1$: \begin{itemize} \item[(I)] $g=7$ and ${\mathcal O}_S(1) \sim 3E+\Gamma_1+\Gamma_2+\Gamma_3$, where $|E|$ is an elliptic pencil of degree three on $S$ and $\Gamma_1,\Gamma_2,\Gamma_3$ are disjoint lines (with $\Gamma_i \cdot E=1$, $i=1,2,3$). \item[(II)] $g=7$ and there are three elliptic pencils $|E_i|$ on $S$, $i=1,2,3$, such that $E_i \cdot E_j=2$ for $i \neq j$ and ${\mathcal O}_S(1) \sim E_1+E_2+E_3$. \item[(III)] $g=7$ and there is a globally generated line bundle $D$ on $S$ satisfying $D^2=2$ and $D \cdot H=6$. \item[(IV)] $g=8$ and there is a globally generated line bundle $D$ on $S$ satisfying $D^2=2$ and $D \cdot H=6$. \item[(V)] $g=9$ and $H \sim 2D$ with $D^2=4$. \item[(VI)] $g=9$ and $H \sim 3E+2\Delta$, where $|E|$ is an elliptic pencil and $\Delta$ is an effective divisor such that $\Delta^2=-2$ and $\Delta \cdot E=2$. \item[(VII)] $g=10$ and $H \sim 3D$ with $D^2=2$. \end{itemize} \end{proposition} We remark that the statement for $g=5$ and $6$ is of course well-known for {\it general} $K3$ surfaces, more precisely, those with the Clifford index of all smooth hyperplane sections $c=2$, as they are complete intersections of three quadrics for $g=5$ and quadratic sections of a (possibly singular) quintic Del Pezzo threefold in ${\mathbb P}^6$ for $g=6$. It is however new for $K3$ surfaces with $c=1$, at least as far as we know. We also remark that $c=1$ in (I) and $c=2$ in (II)-(VII).
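As a quick consistency check on the list, the stated genera agree with $H^2=2g-2$ in each case, using $E^2=0$ for an elliptic pencil and $\Gamma^2=-2$ for a line on a $K3$ surface. For instance,
\[
\mbox{(I):}\;\; H^2=(3E+\Gamma_1+\Gamma_2+\Gamma_3)^2 = 6\sum_{i} E \cdot \Gamma_i + \sum_{i} \Gamma_i^2 = 18-6=12,
\]
so $g=7$; similarly, in (V) one has $H^2=4D^2=16$, so $g=9$, and in (VII) one has $H^2=9D^2=18$, so $g=10$.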
A general surface $S$ in each of the cases is of the following form, and conversely all surfaces below belong to the cases listed in the proposition (in particular, all cases do occur): \begin{itemize} \item[(I)] $S$ lies in a three-dimensional rational normal scroll $T$ of type $(3,1,1)$ in ${\mathbb P}^7$ as a divisor in $|\left({\mathcal O}_T(1)(-{\mathcal F})\right)^{\otimes 3}|$, where ${\mathcal F}$ is the class of the ruling of $T$. \item[(II)] $S$ is a quadratic section of the sextic Del Pezzo threefold $T \cong {\mathbb P}^1 \times {\mathbb P}^1 \times {\mathbb P}^1$ in its Segre embedding in ${\mathbb P}^7$. \item[(III)] $S$ is a quadratic section of the sextic Del Pezzo threefold $W$ in ${\mathbb P}^7$ that is a divisor of bidegree $(1,1)$ in ${\mathbb P}^2 \times {\mathbb P}^2$. \item[(IV)] $S$ is a quadratic section of a blow up of ${\mathbb P}^3$ at a point embedded in ${\mathbb P}^8$ by the linear system of quadrics through the point (a septic Del Pezzo threefold). \item[(V)] $S$ is the $2$-Veronese embedding of a quartic in ${\mathbb P}^3$, and thus a quadratic section of the $2$-Veronese embedding of ${\mathbb P}^3$ in ${\mathbb P}^{9}$. \item[(VI)] $S$ is a quadratic section of the cone over the anticanonical embedding of the Hirzebruch surface ${\mathbb F}_1$ in ${\mathbb P}^8$. \item[(VII)] $S$ is a quadratic section of the cone over the Veronese surface in ${\mathbb P}^9$. \end{itemize} Finally, in Proposition \ref{prop:main3}, we classify the cases for which the strict inequality $h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-2)) < h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))$ holds, again in the case of {\it smooth} projective models. \vspace{0.4cm} {\it Acknowledgements.} This paper grew out of my interest in the recent paper \cite{cds}.
I thank C.~Ciliberto, T.~Dedieu and E.~Sernesi for the many conversations on this topic, and in particular, C.~Ciliberto for encouraging me to write down these results. I also thank A.~F.~Lopez for useful conversations and for indicating several references, as well as the referee for detecting various misprints. I have been partially supported by grant n.\ 261756 of the Research Council of Norway and by the Bergen Research Foundation. \section{Some useful results} The following result was already mentioned in the introduction: \begin{lemma} \label{lemma:primo} Let $X \subset {\mathbb P}^n$ be a local complete intersection surface with isolated singularities. Then, for any smooth hyperplane section $C \subset X$, we have \[ h^0({\mathcal N}_{X/{\mathbb P}^n}(-k)) \leq \sum_{j=0}^{\infty} h^0({\mathcal N}_{C/{\mathbb P}^{n-1}}(-k-j)). \] \end{lemma} \begin{proof} By assumption, $X$ has smooth hyperplane sections. For any such $C$, we have ${\mathcal N}_{C/{\mathbb P}^{n-1}} \cong {\mathcal N}_{X/{\mathbb P}^n} \otimes {\mathcal O}_C$. The exact sequences \[ 0 \longrightarrow {\mathcal N}_{X/{\mathbb P}^n} (-k-1-j) \longrightarrow {\mathcal N}_{X/{\mathbb P}^n} (-k-j) \longrightarrow {\mathcal N}_{X/{\mathbb P}^n}(-k-j) \otimes {\mathcal O}_C \longrightarrow 0 \] yield $h^0({\mathcal N}_{X/{\mathbb P}^n}(-k-j)) \leq h^0({\mathcal N}_{X/{\mathbb P}^n}(-k-j-1)) + h^0({\mathcal N}_{C/{\mathbb P}^{n-1}}(-k-j))$ for every $j \geq 0$; since $h^0({\mathcal N}_{X/{\mathbb P}^n}(-m))=0$ for $m \gg 0$, iterating gives the desired result. \end{proof} We will need the following strengthening of Lemma \ref{lemma:well-known}, proved in \cite[Lemma 2.7]{KL-gauss}: \begin{lemma} \label{lemma:well-known2} Let $Y \subset {\mathbb P}^n$ be an integral subvariety such that the homogeneous ideal of $Y$ is generated by quadrics and the first syzygy module is generated by linear syzygies (e.g., $Y$ satisfies property $N_2$). Let $X \subset Y$ be a smooth irreducible nondegenerate subvariety.
Then $h^0(\operatorname{{\mathcal H}om}_{{\mathcal O}_{{\mathbb P}^n}}({\mathcal J}_{Y/{\mathbb P}^n},{\mathcal O}_X)(-2))=0$. \end{lemma} We will also make use of the following simple observation: \begin{lemma} \label{lemma:trick} Let $S \subset {\mathbb P}^g$ be a smooth $K3$ surface of degree $2g-2$. Then $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))=h^1({\mathcal T}_S(-2))$. \end{lemma} \begin{proof} The Euler sequence twisted by ${\mathcal O}_S(-2)$ is \[ \xymatrix{ 0 \ar[r] & {\mathcal O}_S(-2) \ar[r] & H^0({\mathcal O}_S(1))^{\vee} \otimes {\mathcal O}_S(-1) \ar[r] & {\mathcal T}_{{\mathbb P}^g}|_S(-2) \ar[r] & 0.}\] The map on cohomology $H^2({\mathcal O}_S(-2)) \to H^0({\mathcal O}_S(1))^{\vee} \otimes H^2({\mathcal O}_S(-1))$ is the dual of the multiplication map $H^0({\mathcal O}_S(1)) \otimes H^0({\mathcal O}_S(1)) \to H^0({\mathcal O}_S(2))$, which is surjective by \cite[Thm. 6.1(ii)]{SD}. Thus, $h^i({\mathcal T}_{{\mathbb P}^g}|_S(-2))=0$ for $i=0,1$. The desired conclusion now follows from the exact sequence \[ \xymatrix{ 0 \ar[r] & {\mathcal T}_S(-2) \ar[r] & {\mathcal T}_{{\mathbb P}^g}|_S(-2) \ar[r] & {\mathcal N}_{S/{\mathbb P}^g}(-2) \ar[r] & 0.} \] \end{proof} \section{The case of Clifford index one} \label{sec:c=1} It is well-known that smooth curves of Clifford index one are either trigonal or isomorphic to smooth plane quintics. The next two results give all possible values of $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))$, or equivalently, of $\operatorname{cork} \Phi_{\omega_C,\omega_C^{\otimes 2}}$, by \eqref{eq:gauss2}, for such curves. The following result was proved in \cite[\S 3.8]{DS} in terms of {\it Maroni invariants} of the rational normal scroll defined by the $g^1_3$, cf. \cite{sc}.
The result is apparently also contained in an unpublished preprint of Tendian \cite{Te}. We formulate the result in a slightly different way and prove it using \cite{KL-gauss}, which in principle adopts the same idea of proof as \cite{DS}. Note that the case of {\it general} trigonal curves was proved in \cite[Thm. 2.8]{CM2}.

\begin{prop} \label{prop:c=1}
Let $C$ be a smooth trigonal curve of genus $g \geq 5$ and denote by $A$ its unique line bundle of type $g^1_3$.
\begin{itemize}
\item[(i)] If $g \geq 11$, then $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=0$.
\item[(ii)] If $g=10$, then $h^0({\mathcal N}_{C/{\mathbb P}^9}(-2))=0$, unless $\omega_C \cong 6A$, in which case one has $h^0({\mathcal N}_{C/{\mathbb P}^{9}}(-2))=1$.
\item[(iii)] If $g=8$ or $9$, then $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=h^0(\omega_C-(g-4)A) \leq 1$.
\item[(iv)] If $g=7$, then $h^0({\mathcal N}_{C/{\mathbb P}^{6}}(-2))=1$, unless $\omega_C \cong 4A$, in which case one has $h^0({\mathcal N}_{C/{\mathbb P}^{6}}(-2))=2$.
\item[(v)] If $g=6$, then $h^0({\mathcal N}_{C/{\mathbb P}^{5}}(-2))=2$.
\item[(vi)] If $g=5$, then $h^0({\mathcal N}_{C/{\mathbb P}^{4}}(-2))=3$.
\end{itemize}
\end{prop}

\begin{proof}
In the canonical embedding $C \subset {\mathbb P}^{g-1}$, the members of the $g^1_3$ on $C$ are collinear, and the lines sweep out a rational normal surface $Y \subset {\mathbb P}^{g-1}$ containing $C$, cf. \cite[\S 4 and 6.1]{sc}. Since the $g^1_3$ is base point free, the curve $C$ does not intersect the possibly empty singular locus of $Y$ and $C \in |{\mathcal O}_Y(3)(-(g-4)\mathcal{R})|$, where $\mathcal{R}$ is the class of the ruling of $Y$, cf. \cite[\S~6.1]{sc}.
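Since $\deg A=3$ and $\deg \omega_C=2g-2$, note for later use that
\[ \deg\left(\omega_C-(g-4)A\right)=(2g-2)-3(g-4)=10-g. \]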
We have the twisted normal bundle sequence (recalling that $Y$ is smooth along $C$):
\[ \xymatrix{ 0 \ar[r] & {\mathcal N}_{C/Y}(-2) \ar@{=}[d]^{\wr} \ar[r] & {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2) \ar[r] & {\mathcal N}_{Y/{\mathbb P}^{g-1}}|_C(-2) \ar[r] & 0. \\ & {\mathcal O}_C(1)(-(g-4)\mathcal{R}) & & & }\]
Since $Y$ satisfies property $N_2$ (as any of its smooth hyperplane sections does), Lemma \ref{lemma:well-known2} yields
\[ h^0( {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=h^0({\mathcal O}_C(1)(-(g-4)\mathcal{R}))= h^0(\omega_C-(g-4)A).\]
Because $\deg (\omega_C-(g-4)A)=10-g$, items (i)-(iv) easily follow. (In item (iv) one uses that $h^0(3A) \geq 4$, to conclude by Riemann-Roch that $h^0(\omega_C-3A) \geq 1$.) If $g=6$, we get $h^0(\omega_C-2A) \leq 2$, since otherwise $C$ would carry a $g^2_4$ and thus be hyperelliptic, a contradiction. At the same time, $h^0(2A) \geq 3$, so that $h^0(\omega_C-2A) \geq 2$ by Riemann-Roch. Hence (v) follows. If $g=5$, then $h^0(\omega_C-A)=h^1(A)=3$, and (vi) follows.
\end{proof}

\begin{prop} \label{prop:c=1SPQ}
If $C$ is isomorphic to a smooth plane quintic (whence of genus $6$), then $h^0( {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2)) =3$.
\end{prop}

\begin{proof}
This is \cite[Thm. 2.3]{CM2}. Alternatively, it follows from \cite[Prop. 2.9(d)]{KL-gauss}.
\end{proof}

\begin{remark} \label{rem:SMP}
It is well known, cf. \cite{SD}, that a curve $C$ on a $K3$ surface is isomorphic to a smooth plane quintic if and only if $C \sim 2B+\Gamma$, where $B$ is a smooth genus $2$ curve, $\Gamma$ is a smooth rational curve and $B \cdot \Gamma=1$. In particular, as $\Gamma \cdot C=0$, the line bundle ${\mathcal O}_S(C)$ is not ample.
\end{remark}

\begin{example} \label{ex:genere7}
We give an example of a genus $7$ curve $C$ on a $K3$ surface such that $h^0({\mathcal N}_{C/{\mathbb P}^{6}}(-2))=2$.
By standard techniques using lattice theory and the surjectivity of the period map, one can prove the existence of a $K3$ surface $S$ carrying a smooth irreducible elliptic curve $E$ and three smooth irreducible rational curves $\Gamma$, $\Gamma_1$ and $\Gamma_2$ such that $E \cdot \Gamma=\Gamma \cdot \Gamma_1=\Gamma_1 \cdot \Gamma_2=1$ and $E \cdot \Gamma_1=E \cdot \Gamma_2=\Gamma \cdot \Gamma_2=0$, cf. \cite[Fourth row of table p.~145]{JK}. Then ${\mathcal O}_C(E)$ induces a linear system of type $g^1_3$ on any smooth $C \in |4E+3\Gamma+2\Gamma_1+\Gamma_2|$ and $\omega_C \cong {\mathcal O}_C(4E)$, as $C \cdot \Gamma= C \cdot \Gamma_1= C \cdot \Gamma_2=0$. Thus, $h^0(\omega_C-3{\mathcal O}_C(E)) =h^0({\mathcal O}_C(E))=2$, so that $h^0({\mathcal N}_{C/{\mathbb P}^{6}}(-2))=2$, by Proposition \ref{prop:c=1}(iv). One can check that $|C|$ defines a birational morphism contracting $\Gamma$, $\Gamma_1$ and $\Gamma_2$, thus the projective model of $S$ has an $A_3$-singularity.

In a similar way, one can construct examples of curves of genera $8$, $9$ and $10$ on $K3$ surfaces with $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=1$. We list the cases, which occur in \cite{JK}. In all cases, $E$ is a smooth elliptic curve and $\Gamma$ and $\Gamma'$ are smooth rational curves. The projective models have an $A_1$-singularity coming from the contraction of $\Gamma$. Propositions \ref{prop:main2} and \ref{prop:main3} imply that all cases with $C$ trigonal, $8 \leq g \leq 10$ and $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))>0$ occur on {\it singular} projective models of $K3$ surfaces.
\vspace{0.2cm}

\begin{tabular}{ |l | l |l|l|}
\hline
$g$ & $C\sim$ & intersections & appearance in \cite{JK} \\\hline\hline
$8$ & $4E+2\Gamma+\Gamma'$ & $E \cdot \Gamma=E \cdot \Gamma'=1$, $\Gamma \cdot \Gamma'=0$ & third row table p.~146 \\
$9$ & $5E+3\Gamma+\Gamma'$ & $E \cdot \Gamma=\Gamma \cdot \Gamma'=1$, $E \cdot \Gamma'=0$ & fourth row table p.~148 \\
$10$ & $6E+3\Gamma$ & $E \cdot \Gamma=1$ & fifth row table p.~151 \\
\hline
\end{tabular}

\vspace{0.2cm}

Hence all the maximal values of $h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))$ in Proposition \ref{prop:c=1} actually occur on curves on $K3$ surfaces. At the same time, also the remaining values occur for curves on $K3$ surfaces, as a consequence of Proposition \ref{prop:main2}.
\end{example}

Assume now that $S \subset {\mathbb P}^g$ is a smooth $K3$ surface of degree $2g-2$, with $g \geq 5$, all of whose hyperplane sections have Clifford index one, and set $H:={\mathcal O}_S(1)$. By the classical results of Saint-Donat \cite{SD} and the fact that $H$ is ample, all smooth hyperplane sections are trigonal and the $g^1_3$ is induced by an elliptic pencil $|E|$ on the surface satisfying $E \cdot H=3$ (see, e.g., \cite[Thm. 1.3]{JK} for the precise statement). It is proved in \cite[\S 5]{JK} that one can find a pencil such that the three-dimensional rational normal scroll $T \subset {\mathbb P}^g$ swept out by the span of the members of $|E|$ in ${\mathbb P}^g$ (which are plane cubics) is smooth (of degree $g-2$) and furthermore such that
\begin{equation} \label{eq:h1R'} h^1(H-E)=h^1(H-2E)=0 \end{equation}
(the first vanishing by \cite[(2.6)]{Ma} or \cite[Prop.
2.6]{JK} and the latter by \cite[Prop. 5.5]{JK}, noting that the exceptional cases labeled (E0)-(E4) in \cite[Prop. 5.5]{JK} do not occur for ample $H$). Moreover, by \cite[Prop. 7.2]{JK}, the surface $S \subset {\mathbb P}^g$ is cut out in $T$ by a section of ${\mathcal O}_T(3)(-(g-4){\mathcal F})$, where ${\mathcal F}$ is the class of the ruling of $T$, and the scroll type of $T$ is $(e_1,e_2,e_3)$ with $e_1+e_2+e_3=g-2$. (For the notion of scroll type and how to calculate it, cf., e.g., \cite{sc,Br,ste,JK}.) The possible scroll types occurring have been studied in \cite[2.11]{Reid0}, \cite[(1.7)]{ste} and \cite[\S 9.1]{JK}.

\begin{lemma} \label{lemma:c=11}
Let $C \subset S$ be a general hyperplane section. We have
\[ h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=0, \]
except precisely in the following cases:
\begin{itemize}
\item[(i)] If $g=5$, then $h^0({\mathcal N}_{S/{\mathbb P}^5}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{4}}(-2))=3$;
\item[(ii)] If $g=6$, then $h^0({\mathcal N}_{S/{\mathbb P}^6}(-2))=1$ and $h^0({\mathcal N}_{C/{\mathbb P}^{5}}(-2))=2$;
\item[(iii)] If $g=7$, then $h^0({\mathcal N}_{C/{\mathbb P}^{6}}(-2))=1$ and
\[ h^0({\mathcal N}_{S/{\mathbb P}^7}(-2))=\begin{cases} 1, & \mbox{if} \; H \sim 3E+\Gamma_1+\Gamma_2+\Gamma_3, \\ & \mbox{where} \; \Gamma_1,\Gamma_2,\Gamma_3 \; \mbox{are disjoint lines;} \\ 0, & \mbox{otherwise}.\end{cases} \]
\end{itemize}
\end{lemma}

\begin{remark} \label{c=1,g=7}
As will be seen in the proof below, the two cases for $g=7$ occur, respectively, when $h^0(H-3E)=1$ and $0$. Moreover, the scroll type of $T$ is, respectively, $(3,1,1)$ and $(2,2,1)$ (cf. also \cite[\S 9.1 and table on p.~144-145]{JK}).
\end{remark}

\renewcommand{\proofname}{Proof of Lemma \ref{lemma:c=11}}
\begin{proof}
The normal bundle sequence twisted by $-2$ yields
\[ \xymatrix{ 0 \ar[r] & {\mathcal N}_{S/T}(-2) \ar@{=}[d]^{\wr} \ar[r] & {\mathcal N}_{S/{\mathbb P}^{g}}(-2) \ar[r] & {\mathcal N}_{T/{\mathbb P}^{g}}|_S(-2) \ar[r] & 0 \\ & {\mathcal O}_S(H-(g-4)E) & & & }\]
Since $T$ satisfies property $N_2$ (this can for instance be seen using Green's hyperplane section theorem on syzygies \cite[Thm. 3.b.7]{gr}, since its general curve section is a rational normal curve, which, as is well-known, satisfies property $N_2$), we have $h^0({\mathcal N}_{T/{\mathbb P}^{g}}|_S(-2))=0$ by Lemma \ref{lemma:well-known2}. Hence
\begin{equation} \label{eq:trig} h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-2))=h^0({\mathcal O}_S(H-(g-4)E)). \end{equation}
Since $H \cdot (H-(g-4)E)=10-g$, we have $h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-2))=0$ for $g \geq 10$. The possible values of $h^0({\mathcal O}_S(H-(g-4)E))$, together with the scroll types, have been found in \cite[\S 9.1]{JK}, but we repeat the arguments for the sake of the reader.

We consider first the case $g=9$. By \eqref{eq:h1R'} and Riemann-Roch, one finds
\begin{equation} \label{eq:19-1} h^0(H)=10, \; \; h^0(H-E)=7, \; \; h^0(H-2E)=4. \end{equation}
We have $(H-3E)^2=-2$ and $(H-3E) \cdot H =7$. Hence $h^0(H-3E) \geq 1$ by Riemann-Roch and Serre duality. We claim that
\begin{equation} \label{eq:19-2} h^0(H-3E) =1 \; \mbox{or} \; 2. \end{equation}
Indeed, if by contradiction $h^0(H-3E) \geq 3$, write $|H-3E|=|M|+\Delta$, with $\Delta$ the fixed part and $h^0(M) \geq 3$.
We have
\begin{eqnarray*}
0 < \Delta \cdot H & = & 3E \cdot \Delta+M \cdot \Delta + \Delta^2 = 3E \cdot \Delta+(M+\Delta)^2-M^2-M \cdot \Delta \\
& = & 3E \cdot \Delta-2-M^2-M \cdot \Delta.
\end{eqnarray*}
Hence $E \cdot \Delta \geq 2$ if $M^2 >0$, and from $3=E \cdot (H-3E)=E \cdot M + E \cdot \Delta$, we obtain $E \cdot M \leq 1$, which is impossible. Therefore, $M^2=0$, which means that $M \sim lF$ for an elliptic pencil $|F|$ and $l \geq 2$. Since $F \cdot H \geq 3$, we must have $M \sim 2F$, $F \cdot H=3$ and $\Delta \cdot H=1$. Hence $\Delta$ is a line, so that $\Delta^2=-2$. As $3-3E \cdot F =F \cdot (H-3E)=F \cdot \Delta \geq 0$, we must have $E \cdot F \leq 1$, whence $E \sim F$. It follows that $H \sim 5E + \Delta$, which implies $\Delta^2=-14$, a contradiction. This proves \eqref{eq:19-2}.

We next claim that
\begin{equation} \label{eq:19-3} h^0(H-4E) =0. \end{equation}
Indeed, assume the contrary and write $\Delta=H-4E$. Then $\Delta^2=-8$, $H \cdot \Delta=4$ and $E \cdot \Delta=3$. The first and last of these equations imply that $\Delta$ must contain at least three irreducible curves in its support; moreover, at least one of them, call it $\Gamma$, must satisfy $\Gamma \cdot E \geq \Gamma \cdot H >0$.
Then $\Gamma \cdot \Delta =\Gamma \cdot (H-4E) \leq -3$, whence $\Delta -2\Gamma \geq 0$, which implies $\Gamma \cdot E=\Gamma \cdot H=1$. Since $(H-4E-2\Gamma)^2=-4$ and $H \cdot (H-4E-2\Gamma)=2$, we must have $H-4E-2\Gamma \sim \Gamma_1+\Gamma_2$, with $\Gamma_1$ and $\Gamma_2$ disjoint lines. Since $E \cdot (\Gamma_1+\Gamma_2)=1$, we may assume $E \cdot \Gamma_1=1$. But then
\[ -3=\Gamma_1 \cdot (H-4E) = \Gamma_1 \cdot (2\Gamma+\Gamma_1+\Gamma_2)=2\Gamma \cdot \Gamma_1-2,\]
which is impossible. This proves \eqref{eq:19-3}.

From \eqref{eq:19-3} we obtain that $h^0(H-5E) =0$, whence $h^0({\mathcal N}_{S/{\mathbb P}^{9}}(-2))=0$ by \eqref{eq:trig}. From \eqref{eq:19-1}-\eqref{eq:19-3} we obtain the two possible scroll types $(3,2,2)$ (occurring if $h^0(H-3E)=1$) and $(3,3,1)$ (occurring if $h^0(H-3E)=2$) for $T$. By \cite[Thm. 2.4]{Br} a general hyperplane section $T'$ of $T$ is a rational normal scroll of type $(4,3)$ in both cases. This is the scroll swept out by the members of the $g^1_3$ on a general hyperplane section $C$ of $S$. Hence $h^0(\omega_C-5{\mathcal O}_C(E))=0$. It follows, by Proposition \ref{prop:c=1}(iii), that $h^0({\mathcal N}_{C/{\mathbb P}^{8}}(-2))=0$.

We next consider the case $g=8$. Similar considerations as in the previous case show that
\begin{eqnarray} \label{eq:18-1}
h^0(H)=9, \; \; h^0(H-E)=6, \; \; h^0(H-2E)=3, \\ \nonumber h^0(H-3E)=0 \; \mbox{or} \; 1, \; \; h^0(H-4E)=0.
\end{eqnarray}
We prove only the last vanishing here. We have $(H-4E)^2=-10$ and $H \cdot (H-4E)=2$.
The latter implies that, if effective, $H-4E$ is linearly equivalent to a sum of at most two rational curves, counted with multiplicity. Hence $(H-4E)^2 \geq -8$, again a contradiction. In particular, \eqref{eq:18-1} implies that $h^0({\mathcal N}_{S/{\mathbb P}^{8}}(-2))=0$ by \eqref{eq:trig} and that the two possible scroll types of $T$ are $(2,2,2)$ (occurring if $h^0(H-3E)=0$) and $(3,2,1)$ (occurring if $h^0(H-3E)=1$). By \cite[Thm. 2.4]{Br} a general hyperplane section $T'$ of $T$ is a rational normal scroll of type $(3,3)$ in both cases. This is, as before, the scroll swept out by the members of the $g^1_3$ on a general hyperplane section $C$ of $S$. Hence $h^0(\omega_C-4{\mathcal O}_C(E))=0$. It follows, by Proposition \ref{prop:c=1}(iii), that $h^0({\mathcal N}_{C/{\mathbb P}^{7}}(-2))=0$.

Assume now that $g=7$. Similar considerations as above show that
\begin{equation} \label{eq:17-1} h^0(H)=8, \; \; h^0(H-E)=5, \; \; h^0(H-2E)=2, \; \; h^0(H-3E)=0 \; \mbox{or} \; 1. \end{equation}
More precisely, as $(H-3E)^2=-6$ and $H \cdot (H-3E)=3$, we have
\[ h^0(H-3E)=1 \; \; \mbox{if and only if $H-3E$ is the sum of three disjoint lines.} \]
In particular, \eqref{eq:17-1} yields the two possible scroll types $(2,2,1)$ (occurring if $h^0(H-3E)=0$) and $(3,1,1)$ (occurring if $h^0(H-3E)=1$), with $h^0({\mathcal N}_{S/{\mathbb P}^{7}}(-2))=0$ and $1$, respectively, by \eqref{eq:trig}. Moreover, by \cite[Thm. 2.4]{Br} a general hyperplane section $T'$ of $T$ is a rational normal scroll of type $(3,2)$ in both cases, which yields $h^0(\omega_C-3{\mathcal O}_C(E))=1$. It follows, by Proposition \ref{prop:c=1}(iv), that $h^0({\mathcal N}_{C/{\mathbb P}^{6}}(-2))=1$ in both cases.

Assume that $g=6$. Then $h^0({\mathcal N}_{C/{\mathbb P}^{5}}(-2))=2$ by Proposition \ref{prop:c=1}(v).
Since $(H-2E)^2=-2$ and $H \cdot (H-2E) >0$, we have $h^0(H-2E)=1$ by \eqref{eq:h1R'} and Riemann-Roch, whence $h^0({\mathcal N}_{S/{\mathbb P}^{6}}(-2))=1$ by \eqref{eq:trig}.

Assume that $g=5$. Then $h^0({\mathcal N}_{C/{\mathbb P}^{4}}(-2))=3$ by Proposition \ref{prop:c=1}(vi). We have $(H-E)^2=2$, whence $h^0(H-E) =3$ by \eqref{eq:h1R'} and Riemann-Roch, so that $h^0({\mathcal N}_{S/{\mathbb P}^{5}}(-2))=3$ by \eqref{eq:trig}.
\end{proof}
\renewcommand{\proofname}{Proof}

\section{The case of Clifford index two}

It is well-known that a smooth curve of Clifford index two is either tetragonal (not isomorphic to a smooth plane quintic) or isomorphic to a smooth plane sextic, cf., e.g., \cite{elms}. If $g=5$, then $C$ is a complete intersection of three quadrics in ${\mathbb P}^4$, whence $h^0({\mathcal N}_{C/{\mathbb P}^4}(-2))=3$. The following result is in principle proved in the unpublished preprint \cite{Te}, albeit with a small gap, cf. \cite[Rem. 2.17]{KL-gauss}. It is also present in \cite[Table 2, p.~161]{Te-duke}, referring to another unpublished preprint \cite{Te2} for a proof. The result can also be deduced from \cite[Thm. 5.6]{W2-coho} or from \cite[Thm. 2.16 and Prop. 2.19]{ste}. We give a proof following the arguments in the proof of \cite[Prop. 2.18]{KL-gauss}.

\begin{prop} \label{prop:c=2}
Let $C$ be a smooth tetragonal curve of genus $g \geq 6$, not isomorphic to a smooth plane quintic.
Then $h^0( {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2)) \leq 1$, with equality if and only if
\begin{itemize}
\item[(i)] $C$ is bielliptic; or
\item[(ii)] $7 \leq g \leq 9$ and $C$ in its canonical embedding is a quadratic section of either the anticanonical image of ${\mathbb P}^2$ blown up in $10-g$ possibly infinitely near points or the $2$-Veronese embedding in ${\mathbb P}^8$ of an irreducible quadric in ${\mathbb P}^3$; or
\item[(iii)] $g=6$.
\end{itemize}
\end{prop}

\begin{proof}
Let $|A|$ be any $g^1_4$ on $C$. By \cite[\S 6.2]{sc}, the canonically embedded curve $C \subset {\mathbb P}^{g-1}$ lies in a rational normal scroll spanned by the divisors in $|A|$, not intersecting its possibly empty singular locus. In the desingularization of the scroll, denote by $H$ and $\mathcal{R}$ the pullbacks of the hyperplane bundle and ruling of the scroll, respectively. Then there are two surfaces $\widetilde{Y}_A \sim 2H-b_{1,A}\mathcal{R}$ and $\widetilde{Z}_A \sim 2H-b_{2,A}\mathcal{R}$, for integers $b_{1,A} \geq b_{2,A} \geq 0$ such that $b_{1,A} + b_{2,A}=g-5$, with images $Y_A$ and $Z_A$ in ${\mathbb P}^{g-1}$ so that $C = Y_A \cap Z_A$. Applying ${\mathcal H}om_{{\mathcal O}_{{\mathbb P}^{g-1}}}(-,{\mathcal O}_C)$ to the exact sequence
\[ \xymatrix{ 0 \ar[r] & {\mathcal J}_{Y_A/{\mathbb P}^{g-1}} \ar[r] & {\mathcal J}_{C/{\mathbb P}^{g-1}} \ar[r] & {\mathcal J}_{C/Y_A} \ar[r] & 0} \]
and twisting, we obtain
\[ \xymatrix{ 0 \ar[r] & {\mathcal N}_{C/Y_A}(-2) \ar@{=}[d]^{\wr} \ar[r] & {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2) \ar[r] & {\mathcal H}om_{{\mathcal O}_{{\mathbb P}^{g-1}}}({\mathcal J}_{Y_A/{\mathbb P}^{g-1}},{\mathcal O}_C)(-2) \\ & {\mathcal O}_C(-b_{2,A}\mathcal{R}) & & } \]
By \cite[Lemma 2.16]{KL-gauss}, the surface $Y_A \subset {\mathbb P}^{g-1}$ satisfies property $N_2$.
Thus, by Lemma \ref{lemma:well-known2}, we obtain $h^0( {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=h^0({\mathcal O}_C(-b_{2,A}\mathcal{R}))$, that is,
\begin{itemize}
\item $h^0( {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=0$ if $b_{2,A}>0$;
\item $h^0( {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=1$ if $b_{2,A}=0$.
\end{itemize}
In particular, the last case always happens when $g=6$. Moreover, although this is not a priori obvious, the invariant $b_{2,A}$ is either zero for all $A$ or nonzero for all $A$.

Assume now that $g \geq 7$ and $b_{2,A}=0$. Then $Z_A$ is a quadric and, by \cite[Lemma 2.16]{KL-gauss}, $Y_A \subset {\mathbb P}^{g-1}$ has degree $g-1$ and is linearly normal. Such surfaces are classified by \cite[Thm. 8]{Na}:
\begin{itemize}
\item[(a)] $Y_A$ is the anticanonical image of ${\mathbb P}^2$ blown up in $10-g$ possibly infinitely near points;
\item[(b)] $Y_A$ is the $2$-Veronese embedding in ${\mathbb P}^8$ of an irreducible quadric in ${\mathbb P}^3$;
\item[(c)] $Y_A$ is the cone over a smooth elliptic curve in ${\mathbb P}^{g-2}$;
\item[(d)] $Y_A$ is the $3$-Veronese embedding in ${\mathbb P}^9$ of ${\mathbb P}^2$.
\end{itemize}
In case (d) the curve $C$ is isomorphic to a smooth plane sextic, whence pentagonal, a contradiction. This leaves us with cases (a)-(c). In case (c) the curve $C$ is bielliptic, and in cases (a)-(b) we have $g \leq 9$. Conversely, if $C$ is bielliptic, or a quadratic section of a surface as in (a)-(b), then $b_{2,A}=0$ by \cite[Prop. 3.2]{Br} (see also \cite{sc}).
\end{proof}

\begin{remark} \label{rem:g=9}
When $g=9$, the cases where $C$ is a quadratic section of the anticanonical model of the Hirzebruch surface ${\mathbb F}_1$ and of the $2$-Veronese embedding of a quadric in ${\mathbb P}^3$ occur, respectively, when $C$ possesses a $g^2_6$ and a $g^3_8$, see \cite[(6.2)]{sc} or \cite[(3.2)]{sagra}.
The two cases are mutually exclusive, since in the second case the two rulings on ${\mathbb P}^1 \times {\mathbb P}^1$ induce two $g^1_4$s on $C$, whereas in the first case $C$ possesses a unique $g^1_4$ (the $g^2_6$ maps the curve to a plane sextic with one singular point for reasons of genus and the unique $g^1_4$ is known to be cut out by lines through the singular point; alternatively, the curve can be embedded in ${\mathbb F}_1$ and one may use \cite[Cor. 1]{mar-hir}).
\end{remark}

The next result must be well-known to the experts. We give a proof for lack of a reference, following the proof of \cite[Prop. 2.9(d)]{KL-gauss} for smooth plane quintics.

\begin{prop} \label{prop:c=2SPS}
If $C$ is isomorphic to a smooth plane sextic (whence of genus $10$), then $h^0( {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2)) =1$.
\end{prop}

\begin{proof}
Let $A$ be the very ample line bundle giving the embedding in ${\mathbb P}^2$. As $\omega_C \cong {\mathcal O}_C(3A)$, in the canonical embedding $C$ is contained in the $3$-Veronese surface $Y \subset {\mathbb P}^9$. We have a short exact sequence
\[ \xymatrix{ 0 \ar[r] & {\mathcal N}_{C/Y}(-2) \ar[r] & {\mathcal N}_{C/{\mathbb P}^9}(-2) \ar[r] & {\mathcal N}_{Y/{\mathbb P}^9}|_C (-2) \ar[r] & 0} \]
Since $Y$ satisfies property $N_2$, we have $h^0( {\mathcal N}_{Y/{\mathbb P}^9}|_C (-2))=0$ by Lemma \ref{lemma:well-known2}. Then the result follows since ${\mathcal N}_{C/Y}(-2) \cong {\mathcal O}_C(6A-2 \cdot 3A)\cong {\mathcal O}_C$.
\end{proof}

We will now turn to curves on $K3$ surfaces and start with the following famous example, the only occurrence of variation of gonality among smooth curves in a complete linear system on a $K3$ surface, by \cite{cp,Kn2}, and the only occurrence of smooth plane sextics, by \cite[Thm. 1.2]{Kn2}:

\begin{example}{\rm (Donagi-Morrison \cite[(2.2)]{DM})}.
\label{ex:DM} Let $\pi: S \to {\mathbb P}^2$ be a $K3$ surface of genus $2$, i.e., a double cover of ${\mathbb P}^2$ branched along a smooth sextic, and let $L := \pi^*{\mathcal O}_{{\mathbb P}^2}(3)$. The arithmetic genus of the curves in $|L|$ is $10$. The smooth curves in the codimension one linear subspace $\pi^*|H^0({\mathcal O}_{{\mathbb P}^2} (3))| \subset |L|$ are bielliptic, whence of gonality $4$, whereas the general curve in $|L|$ is isomorphic to a smooth plane sextic and therefore has gonality $5$.

The embedded surface $S \subset {\mathbb P}^{10}$ is a complete intersection of the cone $V$ over the $3$-Veronese surface in ${\mathbb P}^9$ and a quadric $Q$. Since $V$ is smooth along $S$, the normal bundle sequence twisted by $-2$ yields
\[ \xymatrix{ 0 \ar[r] & {\mathcal N}_{S/V}(-2) \cong {\mathcal O}_S \ar[r] & {\mathcal N}_{S/{\mathbb P}^{10}}(-2) \ar[r] & {\mathcal N}_{V/{\mathbb P}^{10}}|_S(-2) \ar[r] & 0}.\]
Since $V$ satisfies property $N_2$ (as its general hyperplane section does), we have $h^0({\mathcal N}_{V/{\mathbb P}^{10}}|_S(-2))=0$ by Lemma \ref{lemma:well-known2}. Therefore, $h^0({\mathcal N}_{S/{\mathbb P}^{10}}(-2))=1$. Similarly, $h^0({\mathcal N}_{C/{\mathbb P}^{9}}(-2))=1$.
\end{example}

We recall the following well-known fact:

\begin{lemma} \label{lemma:desc2}
Let $|H|$ be a complete linear system of curves of genus $g \geq 7$ on a $K3$ surface such that all smooth curves in $|H|$ have Clifford index two.
Then, except for the Donagi-Morrison example \ref{ex:DM}, all smooth curves in $|H|$ are tetragonal, and, for any line bundle $A$ of type $g^1_4$ on any smooth $C \in |H|$, there is a globally generated line bundle ${\mathcal O}_S(D)$ on $S$ satisfying ${\mathcal O}_C(D) \geq A$, $h^i(D)=h^i(H-D)=0$, $i=1,2$, and one of the three conditions
\begin{itemize}
\item $D^2=0$, $D \cdot H=4$;
\item $D^2=2$, $D \cdot H=6$, $7 \leq g \leq 9$;
\item $D^2=4$, $H \sim 2D$, $g=9$.
\end{itemize}
Moreover, ${\mathcal O}_C(D)$ computes the Clifford index of any smooth $C \in |H|$.
\end{lemma}

\begin{proof}
The fact that all smooth curves in $|H|$ are tetragonal except for the Donagi-Morrison example follows from \cite[Thm. 1.2]{Kn2}. Moreover, by nowadays well-known Reider-like results as in \cite{DM,cp,Kn}, for any $C$ and $A$ as in the statement, there is a line bundle ${\mathcal O}_S(D)$ on $S$ such that ${\mathcal O}_C(D) \geq A$ and satisfying $0 \leq D^2\leq 4$ and $D \cdot H=D^2+4$ (see, e.g., \cite[Lemma 8.3]{Kn}). The Hodge index theorem yields the cases stated in the lemma, in addition to the possibility $D^2=2$ and $H \sim 3D$, which is the Donagi-Morrison example \ref{ex:DM}. The fact that ${\mathcal O}_S(D)$ can be chosen globally generated and such that $h^i(D)=h^i(H-D)=0$, $i=1,2$, follows from \cite[(2.3)]{Ma} (see also \cite[Proof of Lemma 8.3]{Kn} and \cite[Props. 2.6 and 2.7]{JK}). The fact that ${\mathcal O}_C(D)$ computes the Clifford index of any smooth $C \in |H|$ is standard and easily checked.
\end{proof}

\begin{remark} \label{rem:ellpenc}
When $D^2=0$, then $|D|$ is an elliptic pencil and ${\mathcal O}_C(D)=A$.
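Indeed, by Lemma \ref{lemma:desc2} we have in this case
\[ \deg {\mathcal O}_C(D) = D \cdot C = D \cdot H = 4 = \deg A, \]
and the inequality ${\mathcal O}_C(D) \geq A$ then forces equality.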
\end{remark}

\begin{corollary} \label{cor:desc2}
Let $C$ be a smooth curve of genus $g \geq 10$ lying on a $K3$ surface $S$, such that ${\mathcal O}_S(C)$ is not as in the Donagi-Morrison example \ref{ex:DM}. Then $C$ is neither bielliptic nor isomorphic to a smooth plane sextic.
\end{corollary}

\begin{proof}
If $C$ is bielliptic, then $C$ must carry infinitely many $g^1_4$s, which is impossible when $g \geq 10$ by Lemma \ref{lemma:desc2} and Remark \ref{rem:ellpenc} if we are not in the Donagi-Morrison example \ref{ex:DM}, recalling that there are finitely many elliptic pencils of degree $4$ with respect to $C$ on $S$. The fact that smooth plane sextics only occur in the Donagi-Morrison example follows from \cite[Thm. 1.2]{Kn2}.
\end{proof}

\begin{remark} \label{rem:biell}
(a) The fact that there are no bielliptic curves of genus $g \geq 11$ on a $K3$ surface is a well-known result of Reid's \cite[Cor. 2]{Reid}, already mentioned in the introduction.

(b) By \cite[Thm. 3.12]{AF} and \cite[Thm. 1.2]{Kn2}, there exists no smooth bielliptic curve of genus $g$ with $6 \leq g \leq 9$ on a $K3$ surface that is general in its complete linear system. (Indeed, as $\rho(g,1,4) \leq 0$, by \cite[Thm. 3.12]{AF} any curve of Clifford dimension one and general in its linear system on a $K3$ has a finite number of pencils computing its gonality. Then the result follows as curves of genus $\leq 9$ and Clifford dimension $>1$ lying on $K3$ surfaces are only smooth plane quintics by \cite[Thm. 1.2]{Kn2}, which cannot be bielliptic, cf., e.g., \cite[\S 2.2]{CM}.) Thus, case (i) in Proposition \ref{prop:c=2} never occurs if $C$ is general in its linear system on a $K3$ surface.
\end{remark}

Assume now (and for the rest of the section) that $S \subset {\mathbb P}^g$ is a smooth $K3$ surface, all of whose hyperplane sections have Clifford index two.
By \eqref{eq:SCk2}, \eqref{eq:Sk2}, Proposition \ref{prop:c=2} and Corollary \ref{cor:desc2}, the cases for which $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) \neq 0$ apart from the Donagi-Morrison example \ref{ex:DM} must satisfy $g \leq 9$. The next example is well-known:

\begin{example} \label{ex:old}
If $g=5$, then $S \subset {\mathbb P}^5$ is a complete intersection of three quadrics, so that ${\mathcal N}_{S/{\mathbb P}^5}\cong {\mathcal O}_S(2)^{\oplus3}$ and $h^0({\mathcal N}_{S/{\mathbb P}^5}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^4}(-2))=3$. If $g=6$, then $S$ is $BN$ general in the sense of Mukai (cf., e.g., \cite[Prop. 10.5]{JK}), so that by \cite{Mu}, $S$ is a quadratic section of a (possibly singular) quintic Del Pezzo threefold $V$ in ${\mathbb P}^6$ (in turn a hyperplane section of a quintic Del Pezzo fourfold in ${\mathbb P}^7$). As in Example \ref{ex:DM}, one proves that $h^0({\mathcal N}_{S/{\mathbb P}^6}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^5}(-2))=1$.
\end{example}

The next four lemmas will be the necessary ingredients to finish the proof of Proposition \ref{prop:main2} in the next section.

\begin{lemma} \label{lemma:doppio}
We have $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))= h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=1$ in the following cases:
\begin{itemize}
\item[(i)] $g=9$ and $H \sim 2D$ with $D^2=4$. A general such $S$ is the $2$-Veronese embedding of a quartic in ${\mathbb P}^3$, and thus a quadratic section of the $2$-Veronese embedding of ${\mathbb P}^3$ in ${\mathbb P}^{9}$. Conversely, any such smooth quadratic section is a $K3$ surface carrying such a divisor $D$.
\item[(ii)] $g=7$ (resp., $8$) and there is a globally generated line bundle $D$ on $S$ satisfying $D^2=2$ and $D \cdot H=6$.
A general such $S$ is a quadratic section of the sextic Del Pezzo threefold $W$ in ${\mathbb P}^7$ that is a divisor of bidegree $(1,1)$ in ${\mathbb P}^2 \times {\mathbb P}^2$ (resp., a quadratic section of a blow up of ${\mathbb P}^3$ at a point, embedded in ${\mathbb P}^8$ by the linear system of quadrics through the point). Conversely, any such smooth quadratic section is a $K3$ surface carrying such a divisor $D$.
\end{itemize}
\end{lemma}

\begin{proof}
(i) Since $h^0({\mathcal N}_{S/{\mathbb P}^9}(-2)) \leq h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2)) \leq 1$ by Proposition \ref{prop:c=2}, we may assume that the pair $(S,D)$ is general in moduli, in particular that $D$ is very ample. (By the classical results of \cite{SD}, $D$ is not very ample if and only if there is a smooth rational curve $\Gamma$ such that $\Gamma \cdot D=0$ or a smooth elliptic curve $F$ such that $F \cdot D \leq 2$.) Then $S$ is the $2$-Veronese embedding of a quartic in ${\mathbb P}^3$, and thus a quadratic section of the $2$-Veronese embedding of ${\mathbb P}^3$ in ${\mathbb P}^{9}$. As in Example \ref{ex:DM}, one computes $h^0({\mathcal N}_{S/{\mathbb P}^9}(-2))=1$.

(ii) As above, we may assume that $(S,H,D)$ is general in moduli, in particular that $F:=H-D$ is base point free and defines a birational map. If $g=8$, set $\Delta:=H-2D$. Then $\Delta^2=-2$ and $H \cdot \Delta=2$, whence $\Delta$ is effective (an irreducible conic by generality) by Riemann-Roch. The complete linear system $|F|$ is base point free and maps $S$ birationally onto a quartic surface in ${\mathbb P}^3$, contracting $\Delta$ to a point. Let $\pi: \widetilde{{\mathbb P}^3} \to {\mathbb P}^3$ be the blow up at this point, and let $\mathfrak{E}$ be the exceptional divisor.
Then $S \in |2(\pi^*{\mathcal O}_{{\mathbb P}^3}(2)-E)|$ and is thus a quadratic section of $\widetilde{{\mathbb P}^3}$ embedded into ${\mathbb P}^8$ by $|\pi^*{\mathcal O}_{{\mathbb P}^3}(2)-E|$, which restricted to $S$ becomes $|2(H-D)-\Delta|=|H|$, as claimed. As in Example \ref{ex:DM}, one computes $h^0({\mathcal N}_{S/{\mathbb P}^8}(-2))=1$. Conversely, it is easily checked that any such smooth quadratic section has the desired properties. If $g=7$, the linear systems $|D|$ and $|F|$ define an embedding \[ S \subset {\mathbb P}^2 \times {\mathbb P}^2 \subset {\mathbb P}^8,\] where the right hand embedding is the Segre embedding, which factors through the embedding $S \subset {\mathbb P}^7$ defined by $|H|$. Thus $S \subset \left({\mathbb P}^2 \times {\mathbb P}^2\right) \cap {\mathbb P}^7 \subset {\mathbb P}^8$. A priori, the intersection $T:= \left({\mathbb P}^2 \times {\mathbb P}^2\right) \cap {\mathbb P}^7$ need not be transversal. However, assuming first that it is, $T$ is a sextic Del Pezzo threefold, with $\omega_T \cong {\mathcal O}_T(-2)$, so that a smooth quadratic section of $T$ will be a $K3$ surface with the desired properties. As we assume that $(S,H,D)$ is general, we can thus assume that $S$ is a quadratic section of the sextic Del Pezzo threefold $T$ (see also \cite[Lemma 4.1]{KLV}). As in Example \ref{ex:DM}, one computes $h^0({\mathcal N}_{S/{\mathbb P}^7}(-2))=1$. \end{proof} \begin{lemma} \label{lemma:92} Assume that $g=9$ and there is a globally generated line bundle $D$ on $S$ satisfying $D^2=2$ and $D \cdot H=6$.
Assume furthermore that $H$ is not $2$-divisible and that it is not of the form $H \sim 3E+2\Delta$, where $|E|$ is an elliptic pencil and $\Delta$ is an effective divisor such that $\Delta^2=-2$ and $\Delta \cdot E=2$. Then $h^0({\mathcal N}_{S/{\mathbb P}^9}(-2))=0$ and $h^0({\mathcal N}_{C/{\mathbb P}^8}(-2))=1$ for any smooth $C \in |H|$. \end{lemma} \begin{proof} We first prove that $h^0({\mathcal N}_{C/{\mathbb P}^8}(-2))=1$. If $C$ is bielliptic, there is nothing to prove by Proposition \ref{prop:c=2}. Otherwise, the base point free complete linear system $|D|$ maps $C$ birationally to a plane curve of degree $6$, whence with one ordinary singular point, for reasons of genus. Blowing up the point, we get an embedding of $C$ into $\mathbb{F}_1$ linearly equivalent to twice the anticanonical bundle. Thus, $h^0({\mathcal N}_{C/{\mathbb P}^8}(-2))=1$ again by Proposition \ref{prop:c=2}. We next prove that $h^0({\mathcal N}_{S/{\mathbb P}^9}(-2))=0$. We will use Lemma \ref{lemma:trick} and prove that $h^1({\mathcal T}_S(-2H))=0$. Contrary to the previous proof, we cannot assume that $(S,H,D)$ is general in this case. Set $F:=H-D$. Then $F^2=6$ and $F \cdot D=4$. We first claim that $F$ is ample. To prove this, assume, to get a contradiction, that there exists an irreducible curve $\Gamma$ such that $\Gamma \cdot F \leq 0$. Then $\Gamma^2=-2$.
It is easy to check that ${\mathcal O}_C(F-\Gamma)$ contributes to the Clifford index of any smooth curve $C \in |H|$ and that \begin{eqnarray*} \operatorname{Cliff} {\mathcal O}_C(F-\Gamma) & = & (F-\Gamma)\cdot H - 2h^0({\mathcal O}_C(F-\Gamma))+2 \\ & \leq & (F-\Gamma)\cdot H - 2h^0(F-\Gamma)+2 \\ & \leq & (F-\Gamma)\cdot H -(F-\Gamma)^2-2 = 4-\Gamma \cdot H+2F \cdot \Gamma. \end{eqnarray*} Thus, by the assumption that $\operatorname{Cliff} C=2$, we must have $F \cdot \Gamma=0$ and $\Gamma \cdot H=\Gamma \cdot D=1$ or $2$. If $\Gamma \cdot H=2$, then $(D+\Gamma)^2=4$ and $(D+\Gamma) \cdot H=8$, whence the Hodge index theorem yields $H \sim 2(D+\Gamma)$, contrary to our assumptions. If $\Gamma \cdot H=1$, then $G:=F-\Gamma-D$ satisfies $G^2=0$ and $G \cdot H=3$, so that $|G|$ cuts out a $g^1_3$ on all $C \in |H|$, again a contradiction. Hence $F$ is ample. We next claim that $|F|$ is base point free. Indeed, if it is not, then by \cite[(2.7)]{SD}, we would have $F \sim 4E+\Gamma$, for an elliptic pencil $|E|$ and a smooth rational curve $\Gamma$ such that $\Gamma \cdot E=1$. But then, as $F \cdot H=10$, we would have $E \cdot H \leq 2$, so that all smooth curves in $|H|$ would be hyperelliptic, a contradiction. We finally claim that $F$ is very ample. Indeed, if it is not, then by \cite[Thm. 5.2]{SD}, there would exist an elliptic pencil $|E|$ such that $E \cdot F=2$. Set $\Delta:=F-2E$. Then $\Delta^2=-2$ and $\Delta \cdot F=2$, whence $\Delta$ is effective by Riemann-Roch.
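The Riemann--Roch computation just invoked recurs throughout this section, so it may help to record the standard bookkeeping once (a reminder of well-known facts, not part of the argument itself): on a K3 surface one has $K_S=0$ and $\chi({\mathcal O}_S)=2$, so that

```latex
\chi({\mathcal O}_S(\Delta)) \;=\; \chi({\mathcal O}_S) + \tfrac12\,\Delta\cdot(\Delta-K_S)
\;=\; 2 + \tfrac12\,\Delta^2 \;=\; 1
\qquad \text{when } \Delta^2=-2 .
```

Since $h^2(\Delta)=h^0(-\Delta)$ by Serre duality, we get $h^0(\Delta)+h^0(-\Delta)\geq \chi({\mathcal O}_S(\Delta))=1$, so either $\Delta$ or $-\Delta$ is effective; intersecting with the ample class $F$ (here $\Delta\cdot F=2>0$) rules out $-\Delta$.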
Since the Clifford index of any smooth $C \in |H|$ is $2$, we must have $E \cdot H \geq 4$. From $10=F \cdot H= (2E + \Delta) \cdot H$, we thus find that $E \cdot H=4$ and $\Delta \cdot H=2$. The Hodge index theorem implies that $H \sim 3E+2\Delta$, contrary to our assumptions. Therefore, $|F|$ defines an embedding of $S$ into ${\mathbb P}^4$ and its image is well-known to be a complete intersection of a quadric and a cubic. The Euler sequence of the embedding $S \subset {\mathbb P}^4$ twisted by ${\mathcal O}_S(-2H)$ is \[ 0 \longrightarrow {\mathcal O}_S(-2H) \longrightarrow H^0(F)^{\vee} \otimes {\mathcal O}_S(-F-2D) \longrightarrow {\mathcal T}_{{\mathbb P}^4}|_S(-2H) \longrightarrow 0.\] The map on cohomology $H^2({\mathcal O}_S(-2H)) \to H^0(F)^{\vee} \otimes H^2({\mathcal O}_S(-F-2D))$ is the dual of the multiplication map of sections \[ \mu_{F,F+2D}: H^0(F) \otimes H^0({\mathcal O}_S(F+2D)) \to H^0(2H).\] Since $h^1(-2D)=0$ and $h^0(F-2D)=0$ (using $H \cdot (F-2D)=-2$), this map is surjective by Mumford's generalization of a theorem of Castelnuovo \cite[Thm. 2, p.~41]{Mum}. Thus $h^i({\mathcal T}_{{\mathbb P}^4}|_S(-2H))=0$ for $i=0,1$. From the exact sequence \[ \xymatrix{ 0 \ar[r] & {\mathcal T}_S(-2H) \ar[r] & {\mathcal T}_{{\mathbb P}^4}|_S(-2H) \ar[r] & {\mathcal N}_{S/{\mathbb P}^4}(-2H) \ar[r] & 0,} \] we therefore obtain that \begin{eqnarray*} h^1({\mathcal T}_S(-2H))& =& h^0({\mathcal N}_{S/{\mathbb P}^4}(-2H)) =h^0(2F-2H)+h^0(3F-2H) \\ & = & h^0( -2D) +h^0(F-2D)=0. \end{eqnarray*} It follows that $h^0({\mathcal N}_{S/{\mathbb P}^9}(-2))=0$ by Lemma \ref{lemma:trick}. \end{proof} \begin{remark} \label{rem:92} As seen in the proof, the condition on $H$ in Lemma \ref{lemma:92} can be rephrased as $H$ not being $2$-divisible and $H-D$ being very ample.
\end{remark} \begin{lemma} \label{lemma:rest2i} Assume that $7 \leq g \leq 9$ and that all $D \in \operatorname{Pic} S$ satisfying the conditions in Lemma \ref{lemma:desc2} satisfy $D^2=0$. Let $C \subset S$ be a general hyperplane section. Then $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=0$ except in the following case, where $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=1$: $g=7$ and $H \sim E_1+E_2+E_3$, where $|E_i|$ is an elliptic pencil, $i=1,2,3$, and $E_i \cdot E_j=2$ for $i \neq j$. A general such $S$ is a quadratic section of the sextic Del Pezzo threefold $T \cong {\mathbb P}^1 \times {\mathbb P}^1 \times {\mathbb P}^1$ in its Segre embedding in ${\mathbb P}^7$; conversely, any such smooth quadratic section is a $K3$ surface satisfying the given properties. \end{lemma} \begin{proof} Pick any $D$ satisfying the conditions in Lemma \ref{lemma:desc2} and call it $E$. Then $E^2=0$ and $E \cdot H=4$ by assumption, and $|E|$ is an elliptic pencil, cf. Remark \ref{rem:ellpenc}. As in the case of Clifford index one, it is proved in \cite[\S 5]{JK} that one can find an $E$ such that the four-dimensional rational normal scroll $T \subset {\mathbb P}^g$ swept out by the spans of the members of $|E|$ in ${\mathbb P}^g$ is smooth (of degree $g-3$), and furthermore such that \begin{equation} \label{eq:h1R} h^1(H-2E)=0, \end{equation} the latter by \cite[Prop. 5.5]{JK}, noting that the exceptional cases labeled (E0)-(E4) in \cite[Prop. 5.5]{JK} do not occur for ample $H$.
(Here the assumption about nonexistence of divisors $D$ as in Lemma \ref{lemma:desc2} with $D^2>0$ plays a central role, as we now briefly recall for the sake of the reader: if by contradiction $h^1(H-2E)>0$, then, as $(H-2E)^2 =2g-18 \geq -4$, we have $h^0(H-2E) =\chi(H-2E)+h^1(H-2E) \geq 1$, whence $H-2E$ is effective and not numerically $1$-connected. Therefore, we have a nontrivial effective decomposition $H-2E \sim A_1+A_2$ with $A_1 \cdot A_2 \leq 0$. One may check that $E+A_i$ for $i=1$ or $2$ satisfies the conditions in Lemma \ref{lemma:desc2} and $(E+A_i)^2>0$, a contradiction.) Moreover, by \cite[Prop. 7.2 and \S~9.2]{JK}, or \cite[\S 4]{Br} or \cite[\S 1.7]{ste}, the surface $S \subset {\mathbb P}^g$ is a complete intersection of two threefolds \begin{equation} \label{eq:ciinT} S = Y_1 \cap Y_2, \; \; \mbox{with} \; \; Y_i \in |{\mathcal O}_T(2)(-b_i{\mathcal F})|, \; \; b_1 \geq b_2 \; \mbox{and} \; b_1+b_2=g-5, \end{equation} where, as before, ${\mathcal F}$ is the class of the ruling of $T$. The normal bundle sequence of $S \subset Y_1 \subset {\mathbb P}^g$ twisted by $-2$ yields \[ \xymatrix{ 0 \ar[r] & {\mathcal N}_{S/Y_1}(-2) \ar@{=}[d]^{\wr} \ar[r] & {\mathcal N}_{S/{\mathbb P}^{g}}(-2) \ar[r] & {\mathcal N}_{Y_1/{\mathbb P}^{g}}|_S(-2) \ar[r] & 0 \\ & {\mathcal O}_S(-b_2E) & & & }\] (using that $Y_1$ is smooth along $S$). Restricting to $C$ we obtain \[ \xymatrix{ 0 \ar[r] & {\mathcal O}_C(-b_2E) \ar[r] & {\mathcal N}_{C/{\mathbb P}^{g-1}}(-2) \ar[r] & {\mathcal N}_{Y_1/{\mathbb P}^{g}}|_C(-2) \ar[r] & 0}.\] The threefold $Y_1$ satisfies property $N_2$ by Green's hyperplane section theorem \cite[Thm. 3.b.7]{gr}, since its general hyperplane section does by \cite[Lemma 2.16]{KL-gauss}.
Thus, we have $h^0({\mathcal N}_{Y_1/{\mathbb P}^{g}}|_S(-2))=h^0({\mathcal N}_{Y_1/{\mathbb P}^{g}}|_C(-2))=0$ by Lemma \ref{lemma:well-known2}. Hence \[ h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-2))=h^0({\mathcal O}_S(-b_2 E)) \; \; \mbox{and} \; \; h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=h^0({\mathcal O}_C(-b_2 E)).\] It follows that \begin{equation} \label{eq:tetra} h^0({\mathcal N}_{S/{\mathbb P}^{g}}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=\begin{cases} 0 & \mbox{if} \; b_2>0, \\ 1 & \mbox{if} \; b_2=0. \end{cases} \end{equation} The possible values of $b_2$ (and $b_1$), and the possible scroll types $(e_1,e_2,e_3,e_4)$, with $e_1 \geq e_2 \geq e_3 \geq e_4 >0$ (as $T$ is smooth), have been investigated in \cite{Br,ste,JK}, with some minor mistakes in the first of these. Recall that $e_1+e_2+e_3+e_4=g-3$. We repeat the study of the case $g=9$ for the sake of the reader. If $g=9$, we have $b_1+b_2=4$ and $e_1+e_2+e_3+e_4=6$. We may use Riemann-Roch to compute $h^0(H-E)=6$, as $h^1(H-E)=0$ by Lemma \ref{lemma:desc2}, and $h^0(H-2E)=2$, using \eqref{eq:h1R}. As $H \cdot (H-4E)=0$, we get $h^0(H-4E)=0$. We claim that $h^0(H-3E) \leq 1$. Indeed, if $h^0(H-3E) \geq 2$, write $|H-3E|=|M|+\Delta$, with $|M|$ the moving part and $\Delta$ the fixed part. Since $(H-3E)^2=-8$ and $H \cdot (H-3E)=4$, we have $\Delta >0$ and $M \cdot H \leq 3$, so that $|M|$ would induce a $g^1_3$ on all curves in $|H|$, a contradiction. This yields the two possible scroll types $(2,2,1,1)$ and $(3,1,1,1)$ (cf. \cite[\S 9.2.2 and table on p.~148]{JK}). In the latter case, \cite[Lemma 8.33]{JK}, \cite[Lemma 1.9]{ste} or \cite[Prop. 5.4]{Br} yields $b_1=b_2=2$ (the reason being that any section of ${\mathcal O}_T(2)(-b{\mathcal F})$ with $b \geq 3$ is a product of a section of ${\mathcal O}_T(1)(-b{\mathcal F})$ and a section of ${\mathcal O}_T(1)$).
In the former case, \cite[Lemma 1.9]{ste} or \cite[Prop. 5.4]{Br} (or the discussion in \cite[\S 9.2.2]{JK}) yields $b_1=2$ or $3$, whence $b_2 >0$ (the reason being that the zero scheme of any section of ${\mathcal O}_T(2)(-4{\mathcal F})$ restricts to two lines in each fiber of $T$). In all cases we therefore have $h^0({\mathcal N}_{S/{\mathbb P}^{9}}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{8}}(-2))=0$ by \eqref{eq:tetra}. If $g=8$, we have $b_1+b_2=3$ and similar considerations as in the previous case yield that the scroll type must be $(2,1,1,1)$ (cf. \cite[\S 9.2.2 and table on p.~146]{JK}). By \cite[Lemma 1.9]{ste} or \cite[Prop. 5.4]{Br} (or the discussion in \cite[\S 9.2.2]{JK}), we have $b_1=2$ and $b_2=1$. Hence $h^0({\mathcal N}_{S/{\mathbb P}^{8}}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{7}}(-2))=0$ by \eqref{eq:tetra}. If $g=7$, we have $b_1+b_2=2$ and the scroll type must be $(1,1,1,1)$ (cf. \cite[\S 9.2.2 and table on pp.~144--145]{JK}). By \cite[Lemma 1.9]{ste}, \cite[Lemma 8.33]{JK} or \cite[Prop. 5.4]{Br} we must have $b_1 \leq 2$, whence the two possibilities $(b_1,b_2)=(1,1)$ or $(2,0)$. Both cases occur by \cite[Thm. 5.3]{Br}, with $h^0({\mathcal N}_{S/{\mathbb P}^{7}}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{6}}(-2))=0$ and $1$, respectively, by \eqref{eq:tetra}. Let us now consider the second case more thoroughly. The general curves $C \in |H|$ are contained in hyperplane sections of the threefold $Y_2 \in |{\mathcal O}_T(2)|$, which are the surfaces $Y_A \subset {\mathbb P}^6$ appearing in the proof of Proposition \ref{prop:c=2}. The only possibility is that $Y_A$ is the blow up of ${\mathbb P}^2$ in three (possibly infinitely near) points, cf. Remark \ref{rem:biell}(b). Hence $C$ has precisely three linear systems of type $g^1_4$ by \cite[Prop. 3.4(d)]{KL-gauss}.
As we are assuming that the only line bundles $D$ satisfying the conditions in Lemma \ref{lemma:desc2} are the ones with square zero, there must exist three elliptic pencils $|E_i|$, $i=1,2,3$, with $E_1=E$, say, on $S$ inducing these three linear systems on $C$; in particular, $E_i \cdot H=4$. An easy application of the Hodge index theorem yields $E_i \cdot E_j \leq 2$ for $i \neq j$, and clearly equality must hold, as otherwise we would have moving linear systems of degree one on an elliptic curve. It is an easy exercise to check that $H \sim E_1+E_2+E_3$ and to check the remaining assertions. \end{proof} \begin{lemma} \label{lemma:rest2ii} Assume that $g=9$ and $H \sim 3E+2\Delta$, where $|E|$ is an elliptic pencil and $\Delta$ is an effective divisor such that $\Delta^2=-2$ and $\Delta \cdot E=2$. Then $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))=1$. Moreover, a general such $S$ is a quadratic section of the cone over the anticanonical embedding of $\mathbb{F}_1$ into ${\mathbb P}^8$; conversely, any such smooth quadratic section is a $K3$ surface satisfying the given properties. \end{lemma} \begin{proof} Both $E$ and $E+\Delta$ satisfy the conditions for the divisor $D$ in Lemma \ref{lemma:desc2}. Contrary to the previous proof, the four-dimensional rational normal scroll $T_0 \subset {\mathbb P}^g$ defined by the spans of the members of $|E|$ in ${\mathbb P}^g$ is singular. Indeed, one easily computes \[ h^0(H-E)=6, \; \; h^0(H-2E)=3, \; \; h^0(H-3E)=1, \; \; h^0(H-4E)=0,\] so the resulting scroll type is $(3,2,1,0)$. A general hyperplane section of it is a rational normal scroll $X$ of type $(3,2,1)$, by \cite[Thm. 2.4]{Br}, whence $T_0$ is a cone over $X$.
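As a consistency check on the numerology here (pure bookkeeping, not part of the original argument): with $E^2=0$, $\Delta^2=-2$, $E\cdot\Delta=2$ and $H\sim 3E+2\Delta$, one has

```latex
H^2 = 9E^2 + 12\,E\cdot\Delta + 4\Delta^2 = 24 - 8 = 16 = 2g-2 \quad (g=9),
\qquad
H\cdot E = (3E+2\Delta)\cdot E = 2\,E\cdot\Delta = 4 .
```

Riemann--Roch on the K3 surface then gives $\chi(H-E)=2+\tfrac12(H-E)^2=2+\tfrac12(16-8)=6$, which matches the value $h^0(H-E)=6$ exactly when $h^1(H-E)=0$ (note $h^2(H-E)=h^0(E-H)=0$ since $(E-H)\cdot H<0$); the remaining values carry nonzero $h^1$-corrections.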
The scroll $X$ is defined by the spans of the members of the induced $g^1_4$ on a general hyperplane section $C$ of $S$. Denote by $T:={\mathbb P}({\mathcal E}) \to T_0$, with ${\mathcal E}:={\mathcal O}_{{\mathbb P}^1}(3) \oplus{\mathcal O}_{{\mathbb P}^1}(2)\oplus{\mathcal O}_{{\mathbb P}^1}(1)\oplus{\mathcal O}_{{\mathbb P}^1}$, the desingularization of the scroll $T_0$. Note that $S$ does not intersect the vertex of $T_0$, as $|E|$ is base point free, and that \eqref{eq:ciinT} and \eqref{eq:tetra} still hold (with $g=9$). Restricting to $X$, we get the two surfaces in $|{\mathcal O}_X(2)(-b_1{\mathcal F})|$ and $|{\mathcal O}_X(2)(-b_2{\mathcal F})|$ of which $C$ is a complete intersection, as in the proof of Proposition \ref{prop:c=2}. We now observe that the curve $C$ lies on $\mathbb{F}_1$ linearly equivalent to twice the anticanonical section; indeed, $|E+\Delta|$ defines a $g^2_6$ on $C$, mapping it to a plane sextic curve with one ordinary singular point, for reasons of genus; blowing up the plane in the singular point yields the desired embedding. Hence, by \cite[(6.2)]{sc}, we must have $b_2=0$, whence $b_1=4$. Thus, $S$ is a quadratic section of a threefold $Y_1 \in |{\mathcal O}_T(2)(-4{\mathcal F})|$, and $C$ is a quadratic section of a hyperplane section of $Y_1$, which is the anticanonical embedding of $\mathbb{F}_1$, as $C$ carries a $g^2_6$, cf. Remark \ref{rem:g=9}. As the latter is well-known to be nonextendable, $Y_1$ must be the cone over it. (Alternatively, one may check as in \cite{ste} that the base locus of $|{\mathcal O}_T(2)(-4{\mathcal F})|$ is the inverse image of the vertex of $T_0$.) The rest of the statement is easily verified. \end{proof} \section{Proof of Proposition \ref{prop:main2} and final remarks} \label{sec:K3} The results in the two previous sections are enough to deduce Proposition \ref{prop:main2}. We summarize for the sake of the reader.
\renewcommand{\proofname}{Proof of Proposition \ref{prop:main2}} \begin{proof} Let $S \subset {\mathbb P}^g$ be a smooth $K3$ surface of degree $2g-2$, with $g \geq 5$. Let $c$ be the Clifford index of all smooth hyperplane sections of $S$, recalling that it is constant by \cite{gl}, and that $c >0$ by \cite{SD}. If $g=5$ or $6$, then $c=1$ or $2$, and the result follows from Lemma \ref{lemma:c=11}(i-ii) if $c=1$ and Example \ref{ex:old} if $c=2$. Assume now that $g \geq 7$ and that $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) \neq 0$. By \eqref{eq:Sk2} and Corollary \ref{cor:well-knownK3}, we have $g \leq 10$ and $c=1$ or $2$. If $c=1$, then Lemma \ref{lemma:c=11} yields case (I) as the only case where one has $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) \neq 0$. We are therefore left with the cases $c=2$ and $7 \leq g \leq 10$. The Donagi-Morrison example \ref{ex:DM} is case (VII), so we may henceforth assume that we are not in this case. Let $D$ be a divisor satisfying the conditions in Lemma \ref{lemma:desc2}. If ${\mathcal O}_S(1) \sim 2D$ (whence $g=9$ and $D^2=4$), Lemma \ref{lemma:doppio}(i) yields case (V). We may henceforth assume that ${\mathcal O}_S(1)$ is not $2$-divisible and that $D^2=0$ or $2$, with the latter implying $g \leq 9$. If $D^2=2$ and $g=7$ or $8$, then Lemma \ref{lemma:doppio}(ii) yields cases (III) and (IV). If $D^2=2$ and $g=9$, then Lemmas \ref{lemma:92} and \ref{lemma:rest2ii} yield case (VI) as the only case where $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) \neq 0$. Finally, assume that the only divisors $D$ as in Lemma \ref{lemma:desc2} satisfy $D^2=0$. Then Lemma \ref{lemma:rest2i} yields case (II) as the only case where one has $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) \neq 0$.
\end{proof} \renewcommand{\proofname}{Proof} A thorough look at the cases above shows that we have also classified the cases where $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) \neq h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))$: \begin{proposition} \label{prop:main3} Let $S \subset {\mathbb P}^g$ be a smooth $K3$ surface of degree $2g-2$, with $g \geq 5$, all of whose hyperplane sections have Clifford index $c$. Let $C \subset S$ be a general hyperplane section. Then $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2))=h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))$ except precisely in the following cases: \begin{itemize} \item[(a)] $(g,c)=(6,1)$. Then $h^0({\mathcal N}_{S/{\mathbb P}^6}(-2))=1$ and $h^0({\mathcal N}_{C/{\mathbb P}^{5}}(-2))=2$; \item[(b)] $(g,c)=(7,1)$, and $H$ is not linearly equivalent to $3E+\Gamma_1+\Gamma_2+\Gamma_3$, where $|E|$ is an elliptic pencil and $\Gamma_1,\Gamma_2,\Gamma_3$ are disjoint lines. Then $h^0({\mathcal N}_{S/{\mathbb P}^7}(-2))=0$ and $h^0({\mathcal N}_{C/{\mathbb P}^{6}}(-2))=1$; \item[(c)] $(g,c)=(9,2)$, and $H$ is as in Lemma \ref{lemma:92}. \end{itemize} \end{proposition} \begin{proof} Among all the cases we have considered, the only ones where we have $h^0({\mathcal N}_{S/{\mathbb P}^g}(-2)) \neq h^0({\mathcal N}_{C/{\mathbb P}^{g-1}}(-2))$ are the ones in Lemma \ref{lemma:c=11}(ii-iii), yielding cases (a) and (b), and the one in Lemma \ref{lemma:92}. \end{proof} \begin{thebibliography}{[FKP3]} \bibitem{AF} M.~Aprodu, G.~Farkas, {\it The Green Conjecture for smooth curves lying on arbitrary $K3$ surfaces}, Compos. Math. {\bf 147} (2011), 839--851. \bibitem{abs} E.~Arbarello, A.~Bruno, E.~Sernesi, {\it On hyperplane sections of $K3$ surfaces}, Alg. Geom. {\bf 4} (2017), 562--596. \bibitem{BEL} A.~Bertram, L.~Ein, R.~Lazarsfeld, {\it Surjectivity of Gaussian maps for line bundles of large degree on curves}.
In: Algebraic geometry (Chicago, IL, 1989), Lecture Notes in Math. {\bf 1479}, 15--25, Springer, Berlin 1991. \bibitem{BM} A.~Beauville, J.-Y.~M{\'e}rindol, {\it Sections hyperplanes des surfaces $K3$}, Duke Math. J. {\bf 55} (1987), 873--878. \bibitem{Br} J.~Brawner, {\it Tetragonal curves, scrolls and $K3$ surfaces}, Trans. Amer. Math. Soc. {\bf 349} (1997), 3075--3091. \bibitem{cds} C.~Ciliberto, T.~Dedieu, E.~Sernesi, {\it Wahl maps and extensions of canonical curves and K3 surfaces}, to appear in J. Reine Angew. Math., doi:10.1515/crelle-2018-0016. \bibitem{chm} C.~Ciliberto, J.~Harris, R.~Miranda, {\it On the surjectivity of the Wahl map}, Duke Math. J. {\bf 57} (1988), 829--858. \bibitem{clm-class} C.~Ciliberto, A.~F.~Lopez, R.~Miranda, {\it Classification of varieties with canonical curve section via Gaussian maps on canonical curves}, Amer. J. Math. {\bf 120} (1998), 1--21. \bibitem{clm} C.~Ciliberto, A.~F.~Lopez, R.~Miranda, {\it Projective degenerations of $K3$ surfaces, Gaussian maps and Fano threefolds}, Invent. Math. {\bf 114} (1993), 641--667. \bibitem{CM1} C.~Ciliberto, R.~Miranda, {\it On the Gaussian map for canonical curves of low genus}, Duke Math. J. {\bf 61} (1990), 417--443. \bibitem{CM2} C.~Ciliberto, R.~Miranda, {\it Gaussian maps for certain families of canonical curves}. In: {\it Complex Projective Geometry}, London Math. Soc. Lecture Notes Ser. {\bf 179}, Cambridge University Press, Cambridge (1992), 106--127. \bibitem{cp} C.~Ciliberto, G.~Pareschi, {\it Pencils of minimal degree on curves on a $K3$ surface}, J. Reine Angew. Math. {\bf 460} (1995), 15--36. \bibitem{CM} M.~Coppens, G.~Martens, {\it Linear series on $4$-gonal curves}, Math. Nachr. {\bf 213} (2000), 35--55. \bibitem{DM} R.~Donagi, D.~R.~Morrison, {\it Linear systems on $K3$ sections}, J. Differential Geom. {\bf 29} (1989), 49--64. \bibitem{DS} R.~Drewes, J.~Stevens, {\it Deformations of cones over canonical trigonal curves}, Abh. Math. Sem.
Univ. Hamburg {\bf 66} (1996), 289--315. \bibitem{elms} D.~Eisenbud, H.~Lange, G.~Martens, F.-O.~Schreyer, {\it The Clifford dimension of a projective curve}, Compos. Math. {\bf 72} (1989), 173--204. \bibitem{gr} M.~Green, {\it Koszul cohomology and the geometry of projective varieties}, J. Differential Geom. {\bf 19} (1984), 125--171. \bibitem{gl} M.~Green, R.~Lazarsfeld, {\it Special divisors on curves on a $K3$ surface}, Invent. Math. {\bf 89} (1987), 357--370. \bibitem{JK} T.~Johnsen, A.~L.~Knutsen, {\it $K3$ projective models in scrolls}, Springer Lecture Notes in Mathematics {\bf 1842}, 2004. \bibitem{Kn} A.~L.~Knutsen, {\it On $k$th-order embeddings of $K3$ surfaces and Enriques surfaces}, Manuscr. Math. {\bf 104} (2001), 211--237. \bibitem{Kn2} A.~L.~Knutsen, {\it On two conjectures for curves on K3 surfaces}, Internat. J. Math. {\bf 20} (2009), 1547--1560. \bibitem{KL-gauss} A.~L.~Knutsen, A.~F.~Lopez, {\it Surjectivity of Gaussian maps for curves on Enriques surfaces}, Adv. Geom. {\bf 7} (2007), 215--247. \bibitem{KLV} A.~L.~Knutsen, M.~Lelli-Chiesa, A.~Verra, {\it Moduli of non-standard Nikulin surfaces in low genus}, arXiv:1802.01201. \bibitem{Lvov} S.~Lvovsky, {\it Extensions of projective varieties and deformations. I}, Michigan Math. J. {\bf 39} (1992), 41--51. \bibitem{Ma} G.~Martens, {\it On curves on $K3$ surfaces}, Springer Lecture Notes in Mathematics {\bf 1398} (1989), 174--182. \bibitem{mar-hir} G.~Martens, {\it The gonality of curves on a Hirzebruch surface}, Arch. Math. {\bf 67} (1996), 349--352. \bibitem{Mu} S.~Mukai, {\it New development of theory of Fano $3$-folds: vector bundle method and moduli problem}, Sugaku {\bf 47} (1995), 125--144. \bibitem{Mum} D.~Mumford, {\it Varieties defined by quadratic equations}. In: Questions on Algebraic Varieties, C.I.M.E. Summer Schools, vol. {\bf 51}, Springer, Berlin, Heidelberg. \bibitem{Na} M.~Nagata, {\it On rational surfaces. I.
Irreducible curves of arithmetic genus $0$ or $1$}, Mem. Coll. Sci. Univ. Kyoto Ser. A Math. {\bf 32} (1960), 351--370. \bibitem{Pi} H.~Pinkham, {\it Deformations of algebraic varieties with $\mathbb{G}_m$-action}, Ast{\'e}risque {\bf 20}, Soc. Math. France, 1974. \bibitem{Reid0} M.~Reid, {\it Chapters on algebraic surfaces}. In: Complex algebraic geometry (Park City 1993), IAS/Park City Math. Ser. {\bf 3}, Amer. Math. Soc., 1997, 3--159. \bibitem{Reid} M.~Reid, {\it Special linear systems on curves lying on a $K3$ surface}, J. London Math. Soc. {\bf 13} (1976), 454--458. \bibitem{sagra} M.~Sagraloff, {\it Special linear series and syzygies of canonical curves of genus $9$}, PhD thesis, Universit{\"a}t des Saarlandes, 2006, arXiv:math/0605758. \bibitem{SD} B.~Saint-Donat, {\it Projective models of $K-3$ surfaces}, Amer. J. Math. {\bf 96} (1974), 602--639. \bibitem{sc} F.-O.~Schreyer, {\it Syzygies of canonical curves and special linear series}, Math. Ann. {\bf 275} (1986), 105--137. \bibitem{ste} J.~Stevens, {\it Rolling factors deformations and extensions of canonical curves}, Doc. Math. {\bf 6} (2001), 185--226. \bibitem{Te} S.~Tendian, {\it Gaussian maps on trigonal and tetragonal curves}. Unpublished. \bibitem{Te-duke} S.~Tendian, {\it Surfaces of degree $d$ with sectional genus $g$ in ${\mathbb P}^{d+1-g}$ and deformations of cones}, Duke Math. J. {\bf 65} (1992), 157--185. \bibitem{Te2} S.~Tendian, {\it The Gaussian map and deformations of cones}. Preprint. \bibitem{Vo} C.~Voisin, {\it Courbes t{\'e}tragonales et cohomologie de Koszul}, J. Reine Angew. Math. {\bf 387} (1988), 111--121. \bibitem{W1} J.~Wahl, {\it Deformations of quasi-homogeneous surface singularities}, Math. Ann. {\bf 280} (1988), 105--128. \bibitem{W3} J.~Wahl, {\it Gaussian maps on algebraic curves}, J. Differential Geom. {\bf 32} (1990), 77--98. \bibitem{W-CY} J.~Wahl, {\it Hyperplane sections of Calabi-Yau varieties}, J. Reine Angew. Math.
{\bf 544} (2002), 39--59. \bibitem{W} J.~Wahl, {\it Introduction to Gaussian maps on an algebraic curve}. In: Complex Projective Geometry, Trieste-Bergen 1989, London Math. Soc. Lecture Notes Ser. {\bf 179}, Cambridge Univ. Press, Cambridge 1992, 304--323. \bibitem{W2-coho} J.~Wahl, {\it On cohomology of the square of an ideal sheaf}, J. Alg. Geom. {\bf 8} (1997), 481--512. \bibitem{W2} J.~Wahl, {\it The Jacobian algebra of a graded Gorenstein singularity}, Duke Math. J. {\bf 55} (1987), 843--871. \bibitem{Zak} F.~L.~Zak, {\it Some properties of dual varieties and their applications in projective geometry}. In: Algebraic geometry (Chicago, IL, 1989), Lecture Notes in Math. {\bf 1479}, 273--280, Springer, Berlin, 1991. \end{thebibliography} \end{document}
\begin{document} \title[Topologically trivial automorphisms of cKM]{On topologically trivial automorphisms of compact K\"ahler manifolds and algebraic surfaces} \author{Fabrizio Catanese} \author{Wenfei Liu} \address{Lehrstuhl Mathematik VIII, Mathematisches Institut der Universit\"{a}t Bayreuth, NW II, Universit\"{a}tsstr. 30, 95447 Bayreuth, Germany, and Korea Institute for Advanced Study, Hoegiro 87, Seoul, 133-722, Korea} \email{[email protected]} \address{Xiamen University \\ School of Mathematical Sciences \\ Siming South Road 422 \\ Xiamen, Fujian 361005 (China)} \email{[email protected]} \thanks{The first author acknowledges support of the ERC 2013 Advanced Research Grant-340258-TADMICAMT; part of this work was performed at KIAS Seoul. The second author acknowledges support of the NSFC (No.~11971399 and No.~11771294). } \keywords{Compact K\"ahler manifolds, algebraic surfaces, Lie groups of automorphisms, cohomologically trivial, topologically trivial automorphisms, (cohomologically) rigidified, Enriques--Kodaira classification, surfaces isogenous to a product, rational and ruled surfaces, minimal and not minimal surfaces. } \subjclass[2010]{14J50, 32Q15, 32Q05, 32Q55, 32M05, 32G15, 32J17, 14L30, 14J25, 14J26, 14J27.} \begin{abstract} In this paper, we investigate automorphisms of compact K\"ahler manifolds with different levels of topological triviality. In particular, we provide several examples of smooth complex projective surfaces $X$ whose groups of $C^\infty$-isotopically trivial automorphisms, resp.~ cohomologically trivial automorphisms, have a number of connected components which can be arbitrarily large. \end{abstract} \maketitle \begin{dedication} Dedicated to the memory of the `red' Bishop of Italian Mathematics, Edoardo Vesentini (1928--2020). \end{dedication} \tableofcontents \section{Introduction} Let $X$ be a compact connected complex manifold. 
Bochner and Montgomery \cite{bm1, bm2} showed that the automorphism group $\mathrm{Aut}(X)$ (the group of biholomorphic maps $ g\colon X \ensuremath{\rightarrow} X$, i.e., the group of diffeomorphisms $ g \in \mathrm{Diff} (X)$ which preserve the complex structure of $X$) is a finite dimensional complex Lie group, possibly with infinitely many connected components, whose Lie algebra is the space $H^0(X, \Theta_X)$ of holomorphic vector fields on $X$. Denote by $\mathrm{Aut}_0(X)$ the identity component of $\mathrm{Aut}(X)$ and define the group of $C^\infty$-isotopically trivial automorphisms as: $$\mathrm{Aut}_*(X) : = \{\sigma\in\mathrm{Aut}(X) \mid \sigma \in \mathrm{Diff}_0(X)\},$$ where $\mathrm{Diff}_0(X)$ denotes the identity component of the group of diffeomorphisms. In other words, $\mathrm{Aut}_*(X)$ consists of the automorphisms that are $C^\infty$-isotopic to the identity. This group plays an important role (\cite{handbook}) in the construction of the Teichm\"uller space of $X$, and Meersseman recently constructed the Teichm\"uller stack $\mathcal{T}(X)$ of complex structures on the underlying differentiable manifold of $X$ (\cite{Me19}). The holonomy of $\mathcal{T}(X)$ turns out to be the quotient group $\Gamma_{*}(X) : = \mathrm{Aut}_{*}(X)/\mathrm{Aut}_0(X)$, a subgroup of the group of (connected) components $\Gamma(X) : = \mathrm{Aut}(X)/\mathrm{Aut}_0(X)$. The group $\Gamma(X)$ is at most countable, and here is an easy example where it is infinite: \begin{example} Let $E$ be an elliptic curve, and let $X = E^n$ with $n\geq 2$.
Then $\mathrm{Aut}_0(X) = E^n$, while the group $\Gamma(X)$ contains $ \mathrm{GL}(n, \mathbb{Z})$, acting in the obvious way: $$ g \in \mathrm{GL}(n, \mathbb{Z}) , x = (x_1, \dots, x_n) \mapsto gx= (\sum_j g_{1j} x_j, \dots , \sum_j g_{nj} x_j).$$ \end{example} Since the condition for two automorphisms to be isotopic is not directly tractable by algebro-geometric methods, the strategy is to first consider the action of $\mathrm{Aut}(X)$ on the cohomology groups $H^*(X;R)$, where $R$ is a coefficient ring. We define \[ \mathrm{Aut}_R(X):=\{\sigma\in\mathrm{Aut}(X) \mid \sigma \text{ induces the trivial action on }H^*(X;R)\}. \] In practice, we choose $R=\mathbb{Z}, \mathbb{Q}, \mathbb{R}$ or $\mathbb{C}$. One more equivalence relation among automorphisms is homotopy equivalence, so we define the group of homotopically trivial automorphisms as: \[ \mathrm{Aut}_\sharp(X) = \{\sigma\in\mathrm{Aut}(X) \mid \sigma \text{ is homotopic to }\mathrm{id}_X\}. \] It is clear that \begin{multline*} \mathrm{Aut}_0(X) \vartriangleleft \mathrm{Aut}_*(X) \vartriangleleft \mathrm{Aut}_\sharp(X)\vartriangleleft \mathrm{Aut}_\mathbb{Z}(X)\\ \vartriangleleft \mathrm{Aut}_\mathbb{Q}(X) = \mathrm{Aut}_\mathbb{R}(X) =\mathrm{Aut}_\mathbb{C}(X) \vartriangleleft \mathrm{Aut}(X), \end{multline*} so that it suffices to consider the smaller ladder $$ \mathrm{Aut}_0(X) \vartriangleleft \mathrm{Aut}_*(X) \vartriangleleft \mathrm{Aut}_\sharp(X)\vartriangleleft \mathrm{Aut}_\mathbb{Z}(X) \vartriangleleft \mathrm{Aut}_\mathbb{Q}(X) \vartriangleleft \mathrm{Aut}(X).
$$ The case where $X$ is a cKM = compact K\"ahler Manifold, with a K\"ahler metric $\omega$, was considered around 1978 by Lieberman \cite{Li78} and Fujiki \cite{fujiki}; in particular Lieberman \cite{Li78} proved: \begin{theo}[Lieberman]\label{thm: Lieberman} $\mathrm{Aut}_0(X)$ is a finite index subgroup of the group of automorphisms preserving the cohomology class of the K\"ahler form, $$\mathrm{Aut}_{\omega}(X) = \{\sigma\in\mathrm{Aut}(X) \mid \sigma^* [\omega] = [\omega]\}.$$ In particular, the quotient group $$ \Gamma_{\mathbb{Q}}(X) : = \mathrm{Aut}_\mathbb{Q}(X) / \mathrm{Aut}_0(X) $$ is a finite group. \end{theo} For complex dimension $n=1$, it is well known that everything simplifies; in fact for $n=1$ we have $\mathrm{Aut}_0(X) = \mathrm{Aut}_\mathbb{Q}(X)$. But already for $ n = 2$ the situation is extremely delicate, hence this paper is dedicated to the case of complex dimension $n=2$, which provides examples where the different groups of components can have arbitrarily high cardinality. For surfaces of general type, essentially by the Bogomolov--Miyaoka--Yau inequality (final result in \cite{miyaoka}), there is a universal constant $C$ such that $|\mathrm{Aut}_\mathbb{Q}(X)|<C$ for any surface of general type (see \cite{Cai04}). For surfaces of general type the important open question is whether they are rigidified in the sense of \cite{handbook}, that is, whether $\mathrm{Aut}_*(X)$ is a trivial group (see the work of Cai--Liu--Zhang \cite{clz} and of the second author \cite{CL18}, \cite{Liu18} for recent results in the study of the group $\mathrm{Aut}_\mathbb{Q}(X)$). For surfaces not of general type the aim is to describe the group $ \Gamma_{\mathbb{Q}}(X) : = \mathrm{Aut}_\mathbb{Q}(X) / \mathrm{Aut}_0(X) $. In 1975 Burns and Rapoport \cite{br} proved that, for a K3 surface $X$, $\mathrm{Aut}_\mathbb{Q}(X) $ is a trivial group. Peters \cite{Pe79}, \cite{Pe80} began the study of $\mathrm{Aut}_\mathbb{Q}(X) $ for compact K\"ahler surfaces.
Automorphisms of surfaces were also investigated by Ueno \cite{ueno} and Maruyama \cite{Ma71} in the 70's, then by Mukai and Namikawa \cite{MN84}. The main results of this paper can be summarized in the following main theorem, which is obtained from several more precise theorems. \medskip \noindent {\bf Main Theorem.} {\em The indices $[\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_\mathbb{Z}(X)],\, [\mathrm{Aut}_\mathbb{Z}(X):\mathrm{Aut}_*(X)]$, and $[\mathrm{Aut}_*(X):\mathrm{Aut}_0(X)]$ can be arbitrarily large for smooth projective surfaces not of general type.} \medskip The first result which we prove (contradicting earlier assertions of other authors) answers questions raised by Meersseman \cite{Me17} and Oguiso \cite{O20}: \medskip \noindent{\bf Theorem \ref{rational}.} {\em For each positive integer $m$ there exists a rational surface $X$, a blow-up of $\mathbb{P}^2$, such that $\mathrm{Aut}_{\mathbb{Q}}(X) = \mathrm{Aut}_*(X) \cong \mathbb{Z}/ m\mathbb{Z}$.} A similar example can be constructed for other non-minimal uniruled surfaces: just blow up appropriately any smooth projective surface with an (effective) $\mathbb{C}^*$-action. Surprisingly, the same unboundedness phenomenon for $$\Gamma_*(X): = \mathrm{Aut}_*(X) / \mathrm{Aut}_0(X)$$ can occur also for minimal ruled surfaces: \begin{thm} Let $E$ be an elliptic curve and let $X : = \mathbb{P} (\mathcal{O}_E \oplus \mathcal{O}_E(D))$, where $D$ is a divisor of even positive degree $d=2m$. Then $\Gamma_*(X)$ surjects onto $(\mathbb{Z}/m\mathbb{Z})^2 $. \end{thm} In the rest of the paper we consider more general results concerning the various subgroups of the ladder in terms of the Enriques--Kodaira classification of compact K\"ahler surfaces.
We first consider rational surfaces which are blow-ups of $\mathbb{P}^2$, using Principle~\ref{prin: descend} of Section~\ref{sec: prelim}, and then we pass to minimal surfaces, starting from $\mathbb{P}^1$-bundles over curves. We then describe the situation for non-ruled surfaces: the case of Kodaira dimension $\kappa(X)=0$ is quite well understood; here is a summary of the results. \begin{itemize}[leftmargin=*] \item $\mathrm{Aut}_\sharp(X) = \mathrm{Aut}_0(X)$ holds for each surface $X$ with $\kappa (X)=0.$ \item For complex tori and their blow-ups $X$, $ \mathrm{Aut}_0(X) = \mathrm{Aut}_\mathbb{Q}(X)$. \item For K3 surfaces (hence for their blow-ups) $\mathrm{Aut}_\mathbb{Q}(X) = \{ \mathrm{id}_X \}. $ \item For Enriques surfaces Mukai and Namikawa \cite{MN84} proved that $|\mathrm{Aut}_\mathbb{Q}(X)| \leq 4$, and that there are examples with $\mathrm{Aut}_\mathbb{Z}(X)= \mathbb{Z}/2\mathbb{Z}$. \item For hyperelliptic surfaces $X$, we show that $\mathrm{Aut}_\mathbb{Z}(X)= \mathrm{Aut}_0(X) $ is isogenous to $ \mathrm{Alb}(X)$, while the group $ \Gamma_{\mathbb{Q}} = \mathrm{Aut}_\mathbb{Q}(X) / \mathrm{Aut}_\mathbb{Z}(X)$ can be described in each case: $ \Gamma_{\mathbb{Q}}$ is a group of order $\leq 12$, and the case of order $12$ occurs precisely with the alternating group $\mathfrak A_4$. \end{itemize} We postpone to the sequel to this paper the full treatment of the more delicate case where the Kodaira dimension $\kappa(X)=1$. In this paper we use examples of surfaces in this class in order to show unboundedness also for other quotients of the ladder $$ \mathrm{Aut}_0(X) \vartriangleleft \mathrm{Aut}_*(X) \vartriangleleft \mathrm{Aut}_\sharp(X)\vartriangleleft \mathrm{Aut}_\mathbb{Z}(X) \vartriangleleft \mathrm{Aut}_\mathbb{Q}(X).
$$ Combining Theorems \ref{Kod1min} and \ref{Kod1nonmin} we get: \begin{theo} i) For each positive integer $n$ there exists a minimal surface $X$ of Kodaira dimension $1$ such that $ [\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_\mathbb{Z}(X)] \geq n.$ ii) For each positive integer $n$ there exists a (non minimal) surface $X$ of Kodaira dimension $1$ such that $\mathrm{Aut}_*(X)=\{\mathrm{id}_{X}\}$, and $$ \mathrm{Aut}_\mathbb{Z}(X)\cong \mathbb{Z}/n\mathbb{Z}. $$ \end{theo} The difference between $\mathrm{Aut}_\sharp(X)$ and $\mathrm{Aut}_*(X)$ is still mysterious, even though we show that in many cases the two groups coincide. We pose the following \medskip \noindent {\bf Question 1.} Is $[\mathrm{Aut}_\sharp(X) : \mathrm{Aut}_*(X)]$ uniformly bounded? \section{Elementary observations and basic principles}\label{sec: prelim} \begin{principle}\label{prin: negative} Let $ \sigma \in \mathrm{Aut}_\mathbb{Q}(X )$, where $X$ is a (compact complex connected) surface, and let $C$ be an irreducible curve with $C^2 < 0$. Then $\sigma(C) = C$. \end{principle} \begin{proof} Assume that the irreducible curve $ \sigma (C)$ is different from $C$: then $ C \cdot \sigma(C) \geq 0$. But since $\sigma(C)$ has the same rational cohomology class as $C$, we have $C \cdot \sigma(C) = C^2 < 0$, a contradiction. \end{proof} \begin{principle}\label{prin: red fibre} Let $ f : X \rightarrow B$ be a fibration of the surface $X$ onto a curve, and let $ \sigma \in \mathrm{Aut}_\mathbb{Q}(X )$: then $\sigma$ preserves the fibration, that is, there is an action $\bar{\sigma}$ on $B$ such that $\bar{\sigma} \circ f = f \circ \sigma$. If the genus of $B$ is at least $1$, then the action of $\bar{\sigma}$ on $B$ is trivial, unless $B$ has genus $1$ and $\bar{\sigma}$ has no fixed points on $B$. Moreover, if $F''$ is a reducible fibre, then $\sigma(F'') = F''$.
\end{principle} For simplicity, and because of equivariance ($\bar{\sigma} \circ f = f \circ \sigma$), by a small abuse of notation we shall use the notation $\sigma$ also for $\bar{\sigma}$. \begin{proof} Let $F$ be an irreducible fibre of $f$. Then $\sigma(F) \cdot F = 0$, hence $\sigma(F)$ is contained in a fibre of $f$; since $\sigma(F)$ is irreducible, it is another fibre of $f$. Since $ \sigma \in \mathrm{Aut}_\mathbb{Q}(X )$, and $H^1(B, \mathbb{Q}) \subset H^1 (X, \mathbb{Q})$, by Lefschetz's principle $\sigma$ acts trivially on $B$ if the genus of $B$ is at least $2$, and is a translation if the genus of $B$ is $1$. The last assertion follows from Principle~\ref{prin: negative} and Zariski's lemma (the components of $F''$ have negative self-intersection). \end{proof} \begin{principle}\label{prin: multiplefibres} Let $ f : X \rightarrow B$ be a fibration of the surface $X$ onto a curve, $ \sigma \in \mathrm{Aut}_\mathbb{Z}(X )$, and let $ F'' = m F'$ be a multiple fibre of $f$ with $F'$ irreducible. Then $\sigma(F'') = F''$, unless possibly if the genus $g$ of $B$ is $\leq 1$, $m=2$, there are only two multiple fibres with multiplicity $2$, they are isomorphic to each other, and all the other multiple fibres have odd multiplicity. \end{principle} \begin{proof} By Principle \ref{prin: red fibre}, $\sigma$ acts on the fibration, in particular fixing the reducible fibres, and permuting the multiple fibres having the same multiplicity. Let $g$ be the genus of $B$. We saw in Principle \ref{prin: red fibre} that $\sigma$ acts as the identity on $B$ if $g \geq 2$.
We have \cite{barlotti}, \cite{cime} the exact sequence of the orbifold fundamental group of the fibration $f$: $$ \pi_1 (F ) \rightarrow \pi_1(X) \rightarrow \pi_1^{orb} (f) \rightarrow 1,$$ where $F$ is a smooth fibre, and, letting $\{P_1, \dots, P_r\}$ be the set of points whose inverse images are the multiple fibres of $f$, $f^{-1}(P_j) = m_j F'_j$, then $$\pi_1^{orb} (f) : = \langle \alpha_1, \dots, \alpha_g, \beta_1, \dots, \beta_g, c_1, \dots, c_r \mid \prod_1^g [\alpha_i, \beta_i] c_1\cdot \dots \cdot c_r=1,\ c_j^{m_j} =1\rangle.$$ $\sigma$ acts on the fibration, hence, choosing a path from the base point $x_0$ to $\sigma(x_0)$, we get an action of $\sigma$ on $\pi_1(X)$, which is defined not uniquely, but only up to an inner automorphism, and which leaves invariant the normal subgroup which is the image of $ \pi_1 (F )$, since $\sigma$ sends a smooth fibre to another smooth fibre. Hence, with this choice of a path, we get an action of $\sigma$ on $ \pi_1^{orb} (f)$. If we pass to the respective abelianizations, we get a uniquely defined action. Hence we have a surjection $$ H_1(X, \mathbb{Z}) \rightarrow ( \oplus_1^g \mathbb{Z} \alpha_j \oplus_1^g \mathbb{Z} \beta_j \oplus_1^r (\mathbb{Z} / m_i \mathbb{Z}) c_i) / \langle (\sum_i c_i)\rangle$$ on which $\sigma$ acts equivariantly. Since $\sigma$ is now assumed to act trivially on homology, it acts trivially on the quotient group. Assume that $F''$ corresponds to the point $P_1$. Clearly $\sigma$ can only send $F''$ to a multiple fibre of the same multiplicity, and isomorphic to $F''$. Assume that there is such a fibre, and that it corresponds to the point $P_2$. There remains to see whether $c_1$ and $c_2$ can have the same image in the quotient group.
This means that the vector $e_1 - e_2\in \oplus_1^r \mathbb{Z} e_i $ is an integral linear combination of the relation vectors $m_1 e_1,\, \dots ,\, m_r e_r,\, e : = \sum_1^r e_i$: \[ e_1-e_2=a_1m_1e_1+\dots + a_rm_r e_r + ae, \] where $a_1, \dots, a_r$ and $a$ are integers. Since $m_1 = m_2$, this implies that $1-a$ and $ -1 -a$ are divisible by $m_1$, hence $ 2 $ is divisible by $m_1$, hence $m_1=2$. Since $a$ is then odd, and $m_j$ divides $a$ for all $ j \geq 3$, this is possible if and only if all the other $m_j$ for $ j \geq 3$ are odd, and then one can take $a$ to be the least common multiple of $m_3, \dots, m_r$. \end{proof} The next Principle~\ref{SIPU} is a special case of a more general result, and is based on the technique of surfaces isogenous to a product and of unmixed type (\cite{isogenous}). \begin{defin} A surface $X$ is said to be a SIP = Surface Isogenous to a Product if $X$ is the quotient of a product of curves of genus $\geq 1$ by the free action of a finite group $G$: $$ X = (C_1 \times C_2)/G.$$ We speak of a higher product if both curves $C_1, C_2$ have genus at least $2$. The action of $G$ is said to be UNMIXED if $G$ acts on each factor $C_i$, and diagonally on the product, $ \sigma(x,y) : = (\sigma x, \sigma y),$ so that more precisely $ X = (C_1 \times C_2)/\Delta_G,$ where $\Delta_G \subset G \times G$ is the diagonal subgroup and $G \times G$ acts on $C_1 \times C_2$. We can assume that we have a minimal realization, that is, that $G$ acts faithfully on each factor $C_i$. Observe that $\mathrm{Aut}(X)$ contains the quotient $N_{\Delta_G}/ \Delta_G$, where $N_{\Delta_G}$ is the normalizer of $\Delta_G$ inside $ \mathrm{Aut}(C_1) \times \mathrm{Aut}(C_2)$ (and the two groups are equal if we have a higher product with $C_1, C_2$ not isomorphic).
\end{defin} \begin{principle}\label{SIPU} Let $X$ be a SIP of unmixed type, that is, a surface isogenous to a product of unmixed type, with a minimal realization $ X = (C_1 \times C_2)/\Delta_G.$ Then $\mathrm{Aut}_\mathbb{Q}(X )$ is the subgroup of $N_{\Delta_G}/ \Delta_G$ corresponding to automorphisms $h(x,y) = (h_1(x), h_2(y))$ acting trivially on $$ H^* (C_1 \times C_2, \mathbb{Q})^{\Delta_G}.$$ In particular $h_i$ acts trivially on $ H^1 (C_i, \mathbb{Q})^{G} = H^1(C_i /G, \mathbb{Q})$, and (I) if $C_2 =:E$ has genus $1$ and $G$ acts freely on it, then $h_2$ is a translation; (II) if $C_1$ has genus $\geq 2$ and $G$ acts freely on it, then we may represent an element in $\mathrm{Aut}_\mathbb{Q}(X)$ by such an automorphism $h$ with $h_1 = \mathrm{id}_{C_1}$ and $h_2 \in Z_G$, where $Z_G$ is the centralizer of $G$ inside $\mathrm{Aut}(C_2)$. (III) Assume that $C_2 =: E$ has genus $1$, and $G$ acts freely on $E$. Assume moreover that if $C_1$ has genus $g_1 \geq 2$, then the orders of the stabilizers of points of $C_1$ are not of the form $(2,\, 2,\, m_3,\, \dots)$, where the $m_i$'s are odd numbers. Then all automorphisms in $ \mathrm{Aut}_\mathbb{Z}(X)$ are represented by pairs $(h_1, h_2)\in \mathrm{Aut}(C_1) \times \mathrm{Aut}(E)$ with $h_1 = \mathrm{id}_{C_1}$ and $h_2$ a translation. In this case $ \mathrm{Aut}_\mathbb{Z}(X) = \mathrm{Aut}_0(X) \cong E$. \end{principle} \begin{proof} For each $i=1,2$ we have a fibration $f_i : X \rightarrow C_i /G$, and, by Principle~\ref{prin: red fibre}, $H : = \mathrm{Aut}_\mathbb{Q}(X)$ acts equivariantly on $X$ and $C_i /G$. We have \cite{barlotti}, \cite{cime} the exact sequence of the orbifold fundamental group of the fibration $f_i$: $$ 1 \rightarrow \pi_1 (C_j) \rightarrow \pi_1(X) \rightarrow \pi_1^{orb} (f_i) \rightarrow 1,$$ on which $H$ acts (here $\{i,j\} = \{1,2\}$).
Hence the elements of $H$ preserve the characteristic subgroup $\pi_1 (C_1) \times \pi_1 (C_2) $ of $\pi_1(X)$ and lift to $C_1 \times C_2$, preserving the horizontal and vertical leaves. Therefore these are represented by automorphisms in $ \mathrm{Aut}(C_1) \times \mathrm{Aut}(C_2)$. Since such a lift $h =(h_1, h_2)$ induces an action on $(C_1 \times C_2)/\Delta_G$, we see that $h \in N_{\Delta_G}$, and $H$ is then a subgroup of $N_{\Delta_G}/ \Delta_G$. Observing that $$ H^* (X, \mathbb{Q}) \cong H^* (C_1 \times C_2, \mathbb{Q})^{\Delta_G},$$ we obtain the first assertion. The second follows since $$ H^1 (X, \mathbb{Q}) \cong H^1 (C_1, \mathbb{Q})^G \oplus H^1 (C_2, \mathbb{Q})^G = H^1 (C_1/G, \mathbb{Q}) \oplus H^1 (C_2/G, \mathbb{Q}).$$ (I): then $H^1 (C_2, \mathbb{Q}) = H^1 (C_2/G, \mathbb{Q})$ and since $h_2$ acts trivially on it, it is a translation. (II): then $C_1 / G$ has genus $\geq 2$, hence $h_1$ acts trivially on $C_1 / G$, hence $h_1 \in G$. Multiplying by an element in $\Delta_G$, we may assume that $h_1 = \mathrm{id}_{C_1}$. Then the condition that $(\mathrm{id}_{C_1}, h_2) \in N_{\Delta_G}$ is equivalent to $h_2 \in Z_G$. (III): since $ h \in \mathrm{Aut}_\mathbb{Z}(X)$, by Principle \ref{prin: multiplefibres}, $h$ acts on the fibration $f_1$ preserving its multiple fibres, unless possibly if the genus $g'_1$ of $ C_1 / G$ is at most $1$ and the multiplicities of the multiple fibres are of the form $(2,\, 2,\, m_3,\, \dots)$, where the $m_i$'s are odd numbers. We observe that in our situation these multiplicities are equal to the orders of the stabilizers of points of $C_1$: if $g_1 \geq 2$, by assumption these are not of the form $(2,\, 2,\, m_3,\, \dots)$, where the $m_i$'s are odd numbers.
If instead $g_1=1$, and these orders have this form, then the quotient $ C_1 / G$ has genus $g'_1 = 0$, and Hurwitz' formula yields $$ 2 = \frac{1}{2} + \frac{1}{2} + \sum_i \left( 1 - \frac{1}{m_i}\right) ,$$ which is manifestly impossible since, for $m_i$ odd, $$ \frac{2}{3} \leq \left( 1 - \frac{1}{m_i}\right) < 1,$$ so the remaining sum can never equal $1$. Therefore $h_1$ acts on $C_1 \rightarrow C_1 /G$ fixing the branch points in $ C_1 /G$. Since $h_1$ acts as the identity on the cohomology of $ C_1 /G$, $h_1$ acts as the identity on $ C_1 /G$ if the quotient has genus $g'_1 \geq 2$, or if $g'_1 =1$ and there is a branch point, or if $g'_1 =0$ and there are at least $3$ branch points. Since $C_1$ has genus $\geq 2$, one of the three possibilities must occur, and $h_1 \in G$. Multiplying by an element in $\Delta_G$, we may assume that $h_1 = \mathrm{id}_{C_1}$: (I) shows that $h_2$ is a translation. The group $\{(\mathrm{id}_{C_1}, h_2) \mid h_2 \in E \} \cong E$ acts on $C_1 \times E$, and this action clearly descends to $X$. Therefore we have shown that $ \mathrm{Aut}_\mathbb{Z}(X) = \mathrm{Aut}_0(X) \cong E$. \end{proof} \medskip Directly from Principle~\ref{prin: negative} follows the next Principle~\ref{prin: descend}: \begin{principle}\label{prin: descend} Let $X$ be a compact complex surface, and let $X=X_{n}\xrightarrow{f_n} X_{n-1}\xrightarrow{f_{n-1}} \cdots \xrightarrow{f_2}X_{1} \xrightarrow{f_1} X_{0}$ be a sequence of blow-downs of $(-1)$-curves. For $0\leq k\leq n-1$, let $P_k\in X_{k}$ be the point blown up by $f_{k+1}$.
Then, denoting by $\mathrm{Bir}(X)$ the group of bimeromorphic self-maps of $X$, we have: \begin{multline}\label{eq: descend Q} \mathrm{Aut}_\mathbb{Q}(X) = \{\sigma\in \mathrm{Aut}_\mathbb{Q}(X_0) \subset \mathrm{Bir}(X)\mid \text{ for any } 0\leq k\leq n-1, \\ \sigma_k:=\sigma|_{X_{k}} \in \mathrm{Aut}_\mathbb{Q}(X_{k}) \text{ is such that } \sigma_k(P_{k})=P_{k} \} \end{multline} and \begin{multline}\label{eq: descend Z} \mathrm{Aut}_\mathbb{Z}(X) = \{\sigma\in \mathrm{Aut}_\mathbb{Z}(X_0) \subset \mathrm{Bir}(X)\mid \text{ for any } 0\leq k\leq n-1, \\ \sigma_k:=\sigma|_{X_{k}} \in\mathrm{Aut}_\mathbb{Z}(X_{k}) \text{ is such that }\sigma_k(P_{k})=P_{k} \} \end{multline} \end{principle} \begin{proof} If $\sigma\in\mathrm{Aut}_\mathbb{Q}(X)$, then $\sigma$ acts trivially on \[ H^2(X,\mathbb{Q})= f_n^*H^2(X_{n-1}, \mathbb{Q})\oplus \mathbb{Q}[E_n], \] where $E_n$ is the exceptional divisor of $f_n$, so it preserves $E_n$ by Principle~\ref{prin: negative}. It follows that $\sigma$ descends to a cohomologically trivial automorphism of $X_{n-1}$ fixing $P_{n-1}$. Conversely, any $\sigma\in \mathrm{Aut}_{\mathbb{Q}}(X_{n-1})$ fixing $P_{n-1}$ lifts to a cohomologically trivial automorphism of $X_{n}$. By induction on $k$, we obtain the equality \eqref{eq: descend Q}. Exactly the same argument yields \eqref{eq: descend Z}. \end{proof} We need the following rigidity of harmonic maps into Riemannian manifolds with nonpositive curvature. Note that holomorphic maps between K\"ahler manifolds are harmonic with respect to the K\"ahler metrics. \begin{thm}\label{thm: Hartman} Let $M$ and $N$ be compact Riemannian manifolds, such that $N$ has nonpositive sectional curvature.
If $\phi_0, \phi_1\colon M \rightarrow N $ are homotopic harmonic maps, then there is a $C^\infty$ homotopy $\Phi\colon M\times [0,1]\rightarrow N$ from $\phi_0$ to $\phi_1$ with the following properties: \begin{enumerate}[leftmargin=*] \item denoting $\phi_t(x):=\Phi(x,t)$ for $(x, t)\in M\times [0,1]$, the maps $\phi_t\colon M\rightarrow N$ are harmonic; \item for fixed $x$, the arc $[0,1]\rightarrow N,\, t\mapsto \phi_t(x)$ is a geodesic arc with length independent of $x$, and $t$ proportional to the arc-length; \item if for each $0\leq t\leq 1$, there is a point $x_t\in M$ such that $\phi_0$ and $\phi_t$ coincide at $x_t$, then $\Phi$ is the constant homotopy, that is, $\phi_t =\phi_0$ for any $0\leq t\leq 1$. \end{enumerate} \end{thm} \begin{proof} In view of \cite[(G)]{Har67}, only the last statement needs a proof. Let $d\colon N\times N \rightarrow \mathbb{R}_{\geq 0}$ denote the distance function of the Riemannian manifold $N$. Since $N$ is compact, there is a constant $\delta>0$ such that for any $y_1, y_2\in N$ with $d(y_1,y_2)<\delta$ there is a unique minimizing geodesic joining $y_1$ and $y_2$. By the continuity of the homotopy $\Phi$, one can find a partition of the unit interval $0=t_0<t_1<\dots<t_k=1$, such that for each $0\leq i\leq k-1$ \[ |\phi_{t_i}, \phi_{t_{i+1}}|_\infty:=\max_{x\in M} d(\phi_{t_i}(x), \phi_{t_{i+1}}(x)) <\delta. \] We will show by induction on $i$ that $\phi_t=\phi_0$ for any $t\in [t_{i}, t_{i+1}]$. We can assume that the equality $\phi_{t_{i}}=\phi_0$ has been proven. By assumption, for each $0\leq i\leq k$, we have a point $x_{t_i}\in M$ such that $\phi_{t_i}(x_{t_i}) = \phi_0(x_{t_i})$. Since $|\phi_{t_i}, \phi_{t_{i+1}}|_\infty<\delta$, the arc $[t_i, t_{i+1}]\rightarrow N,\, t\mapsto \phi_t(x_{t_{i+1}})$ is the unique minimizing geodesic arc with the same starting and ending points, namely $\phi_0(x_{t_{i+1}}) = \phi_{t_{i+1}}(x_{t_{i+1}}) $; this means that it is the constant arc at $\phi_0(x_{t_{i+1}})$. 
By (2), we infer that $\phi_t=\phi_{t_i} = \phi_0$ for any $t\in [t_{i}, t_{i+1}]$; see also \cite[(F)]{Har67}. This finishes the induction step and hence the proof. \end{proof} \begin{principle}\label{prin: rigidity} Let $X$ be a compact K\"ahler manifold with topological Euler number $\chi_\mathrm{top} (X)\neq 0$. Suppose that there is a generically finite proper holomorphic map $\rho\colon X\rightarrow Y$ onto a compact K\"ahler manifold $Y$ with nonpositive sectional curvature. Then \[ \mathrm{Aut}_\sharp(X)=\{\mathrm{id}_X\}. \] \end{principle} \begin{proof} Let $\sigma\in\mathrm{Aut}_\sharp(X)$ be a homotopically trivial automorphism, with homotopy $\Sigma\colon X\times [0,1]\rightarrow X$ from $\mathrm{id}_X$ to $\sigma$. By Theorem~\ref{thm: Hartman}, we can assume that $\sigma_t(x):=\Sigma(x,t)$ is harmonic in $x$ for each fixed $t\in[0,1]$. For each $t\in[0,1]$, we have $[\Gamma_{\sigma_t}]= [\Delta_X]\in H^{2n}(X\times X)$, where $n=\dim X$, $\Gamma_{\sigma_t}$ denotes the graph of $\sigma_t$ and $\Delta_X\subset X\times X$ is the diagonal. Then \[ [\Delta_X]\cdot [\Gamma_{\sigma_t}] = [\Delta_X]^2 = \chi_\mathrm{top}(X) \neq 0, \] and it follows that $\Delta_X\cap \Gamma_{\sigma_t}\neq \emptyset$. In other words, there exists a point $x_t\in X$ fixed by $\sigma_t$. The map $\Sigma$ gives a homotopy from $\rho\circ\sigma_t$ to $\rho$ for each $t\in [0,1]$. Since $\rho\circ\sigma_t(x_t)=\rho(x_t)$, we have $\rho\circ\sigma_t = \rho$ by Theorem~\ref{thm: Hartman}. Since $\rho$ is generically finite, for a general point $y\in Y$, the inverse image $\rho^{-1}(y)$ is a finite set, which each $\sigma_t$ maps to itself; since $\sigma_0=\mathrm{id}_X$ and $t\mapsto\sigma_t(x)$ is continuous, it follows that $\sigma_t(x) =x$ for each $t\in[0,1]$ and each $x\in \rho^{-1}(y)$. As this holds for all general $y\in Y$, we conclude that $\sigma_t=\mathrm{id}_X$ for $t\in[0,1]$. In particular, $\sigma = \sigma_1 = \mathrm{id}_X$.
\end{proof} \section{Unbounded $[\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_\mathbb{Z}(X)]$} In this section, we construct a series of examples where $[\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_\mathbb{Z}(X)]$ is unbounded. Recall that surfaces with Kodaira dimension $\kappa(X)=1$ are canonically elliptic: there is a fibration $f : X \rightarrow B$ over a curve $B$, with general fibre a smooth elliptic curve, such that $\mathrm{Aut}(X)$ acts equivariantly on $X$ and $B$. \begin{theo}\label{Kod1min} For each positive integer $n$ there exists a minimal surface $X$ of Kodaira dimension $1$ such that $ [\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_\mathbb{Z}(X)] \geq n.$ \end{theo} \begin{proof} Let $B$ be the hyperelliptic curve which is the compactification of the affine curve of equation \[ y^2 = x^n-1, \] where $n = 2g(B) +2 \geq 6$ is an even integer. Let $F$ be an elliptic curve. The group $G=\langle\tau\rangle\cong\mathbb{Z}/2\mathbb{Z}$ acts on $B$ by \[ \tau (x,y) = (x,-y) \] and we make it act on $F$ by translations \[ \tau(z) = z + \epsilon,\,\,\,\forall\, z \in F, \] where $\epsilon$ is a torsion point of order precisely $2$. Consider the surface isogenous to a product $X = (B\times F)/\Delta_G$, where $\Delta_G$ is the diagonal of $G\times G$ acting naturally on $B\times F$. Since the action is free, the invariants of $X$ are as follows: \[ \kappa(X)=1,\,\chi_\mathrm{top}(X)=\chi(\mathcal{O}_X) = p_g(X)=0,\, q(X)=1,\, b_2(X) = \rho(X) =2. \] The rational cohomology groups of $X$ are as follows: \begin{equation}\label{eq: cohom} \small H^1(X,\mathbb{Q}) = q^* H^1(F/G,\mathbb{Q}), \, H^2(X,\mathbb{Q}) = p^*H^2(B/G,\mathbb{Q})\oplus q^*H^2(F/G, \mathbb{Q}), \end{equation} where $p\colon X\rightarrow B/G$ and $q\colon X\rightarrow F/G$ are the induced fibrations.
Consider the following action of $\langle \tilde \sigma \rangle\cong \mathbb{Z}/n\mathbb{Z}$ on $B$: \[ (x,y)\mapsto (\xi x, y), \] where $\xi$ is a primitive $n$-th root of unity. Then $\tilde\sigma\times \mathrm{id}_F\in \mathrm{Aut}(B\times F)$ commutes with $\tau$, and hence it descends to an automorphism $\sigma\in \mathrm{Aut}(X)$ of order $n$. One sees immediately from Principle~\ref{SIPU}, or directly from \eqref{eq: cohom}, that $\sigma$ acts trivially on $H^*(X,\mathbb{Q})$, that is, $\sigma\in\mathrm{Aut}_\mathbb{Q}(X)$. On the other hand, $\sigma$ permutes the $n$ double fibres of $p\colon X\rightarrow B/G$; see Figure~\ref{fig: perm}. Since $n\geq 3$, $\langle\sigma\rangle$ acts faithfully on $H^*(X,\mathbb{Z})$ in view of Principle~\ref{prin: multiplefibres}. \begin{figure} \caption{Cyclic permutation of double fibres} \label{fig: perm} \end{figure} It follows that \[ [\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_\mathbb{Z}(X)] \geq |\sigma| =n. \] Thus $[\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_\mathbb{Z}(X)]$ is unbounded as $n$ goes to infinity. \end{proof} \section{Unbounded $[\mathrm{Aut}_\mathbb{Z}(X):\mathrm{Aut}_*(X)]$} In this section, we construct a series of examples where $[\mathrm{Aut}_\mathbb{Z}(X):\mathrm{Aut}_*(X)]$ is unbounded. \begin{theo}\label{Kod1nonmin} For each positive integer $n$ there exists a (non minimal) surface $X$ of Kodaira dimension $1$ such that $\mathrm{Aut}_*(X)=\{\mathrm{id}_{X}\}$, and $$ \mathrm{Aut}_\mathbb{Z}(X)\cong \mathbb{Z}/n\mathbb{Z}.$$ \end{theo} \begin{proof} Let $C$ and $E$ be two smooth projective curves with $g(C)\geq 2$ and $g(E)= 1$.
Suppose that $G=\langle\sigma\rangle\cong\mathbb{Z}/n\mathbb{Z}$ acts faithfully on $C$ and $E$ in such a way that \begin{itemize}[leftmargin=*] \item $C^\sigma \neq \emptyset$ and $g(C/G)\geq 1$; \item $\sigma$ acts on $E$ by translations, that is, $\sigma(y)=y+a$ for some torsion element $a\in E$ of order exactly $n$. \end{itemize} The diagonal $\Delta_G<G\times G$ acts freely on $C\times E$, so we take the SIP of unmixed type $ Y :=(C\times E)/\Delta_G$. We have shown in (III) of Principle~\ref{SIPU} that $\mathrm{Aut}_\mathbb{Z}(Y) = \mathrm{Aut}_0(Y)$, and that it consists of automorphisms that lift to an automorphism $\tilde \gamma$ of $C\times E$ of the form $\tilde\gamma(x,y)=(x, y+a)$ for some $a\in E$. Now let $t\in C/G$ and let $X_t=\mathrm{Bl}_P(Y)$ be the blow-up of $Y$ at a point $P\in F_t$, where $F_t$ denotes the fibre of the induced fibration $Y\rightarrow C/G$ over $t$. Then by Principle~\ref{prin: descend} we have \[ \mathrm{Aut}_\mathbb{Z}(X_t) = \{\gamma\in \mathrm{Aut}_\mathbb{Z}(Y) \mid \gamma(P)=P\} \cong \mathrm{Stab}_G(t). \] Note that there exists a point $t_0$ with $ \mathrm{Stab}_G(t_0) =G=\langle\sigma\rangle$, by the assumption that $C^\sigma\neq \emptyset$. The situation is illustrated in Figure~\ref{fig: blow}, where $E$ denotes the exceptional divisor of the blow-up at a point of the fibre $F_{t_0}$: \begin{figure} \caption{Blow-up of a point on a multiple fibre} \label{fig: blow} \end{figure} Letting $n=|G|$ go to infinity, we see that $|\mathrm{Aut}_\mathbb{Z}(X_{t_0})|$ is unbounded. Now the proof is completed by applying Principle~\ref{prin: rigidity}, which shows that the group $\mathrm{Aut}_*(X_{t})$ is trivial for any $t$. \end{proof} \begin{cor} Let $X$ be as in Theorem~\ref{Kod1nonmin}. Then $\mathrm{Aut}_*(X)=\{\mathrm{id}_X\}$. As a consequence \[ [\mathrm{Aut}_\mathbb{Z}(X):\mathrm{Aut}_*(X)] = |\mathrm{Aut}_\mathbb{Z}(X)| \] is unbounded.
\end{cor} \section{Unbounded $[\mathrm{Aut}_*(X):\mathrm{Aut}_0(X)]$} In this section, we give examples of smooth projective surfaces whose groups of $C^\infty$-isotopically trivial automorphisms have an unbounded number of connected components. A classification of these would be desirable. \begin{theo}\label{rational} For each positive integer $m$ there exists a rational surface $X$, a blow-up of $\mathbb{P}^2$, such that $\mathrm{Aut}_{\mathbb{Q}}(X) = \mathrm{Aut}_*(X) \cong \mathbb{Z}/ m\mathbb{Z}$. \end{theo} A simple idea lies behind the construction: if we blow up a point $P$ in a complex manifold $Y$, the differentiable manifold we obtain does not depend on the choice of the given point, as two simple arguments show: the first is that the diffeomorphism group $\mathrm{Diff}(Y)$ acts transitively on the manifold $Y$; the second is that, varying the point $P$, we get a family with base $Y$ of blow-ups of $Y$, and they are all diffeomorphic by Ehresmann's theorem. \begin{proof} Let $P_1=(1:0:0),\, P_2=(0:1:0), \, P_3=(0:0:1)$ be the coordinate points of $\mathbb{P}^2$, and let $P_4=(1:1:0)$. Denote the lines connecting these points by $L_1=\overline{P_2P_3}$, $L_2=\overline{P_1P_3}$, $L_3=\overline{P_1P_2}$ and $L_4=\overline{P_3P_4}$. Let $\pi\colon X_4 \rightarrow \mathbb{P}^2$ be the blow-up of the four points $P_i, 1\leq i\leq 4$; see Figure~\ref{fig: blow4}. \begin{figure} \caption{Blow-up $X_4$ of the plane: for each $1\leq i \leq 4$, $L_{i,4}$ denotes the strict transform of $L_i$} \label{fig: blow4} \end{figure} \begin{lem} Let $G_4=\{\sigma\in \mathbb{P} \mathrm{GL}(3) \mid \sigma(P_i)=P_i \text{ for } 1\leq i\leq 4\}.$ Then \[ \mathrm{Aut}_\mathbb{Q}(X_4) \cong G_4 = \left\{ \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & a \end{bmatrix} \biggm|a\in \mathbb{C}^* = \mathbb{C}-\{0\} \right\}. \] \end{lem} \begin{proof} The isomorphism is by Principle~\ref{prin: descend}.
Since $G_4$ fixes three distinct points $P_1, P_2, P_4$ on the line $L_3=(x_2=0)$, it acts as the identity on $L_3$. It follows that any element $\sigma\in G_4$ takes the form \[ \begin{bmatrix} 1 & 0 & *\\ 0 & 1 & * \\ 0 & 0 & * \end{bmatrix} \] where $*$ denotes entries to be determined. Since $\sigma$ fixes $P_3=(0:0:1)$, one sees then easily that \[ \sigma = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & a \end{bmatrix} \] for some $a\in \mathbb{C}^* $. \end{proof} Blow up now $P_5$, infinitely near $P_4$, in the direction of the line $ L_{4} = \overline{P_3P_4}=(x_0-x_1=0)$. Since $G_4$ fixes the point $P_5$, the action of $G_4$ lifts to $X_5=\mathrm{Bl}_{P_5}(X_4)$. Continue to blow up inductively $P_{n+1}\in E_n\cap L_{4,n} \subset X_{n}$ for $n\geq 4$, where $L_{4,n}$ is the strict transform of $L_{4}$ on $X_{n}$, so that we get a chain of surfaces \[ X_{n}\rightarrow\cdots \rightarrow X_6 \rightarrow X_{5} \rightarrow X_4 \] such that $G_4=G_5=G_6=\cdots=G_n$ acts on them equivariantly. The situation is illustrated by Figure~\ref{fig: blown}, \begin{figure} \caption{Iterated blow-up of the plane} \label{fig: blown} \end{figure} where $L_{i,n}$ denotes the strict transform of $L_{i}$ on $X_{n}$ for $1\leq i \leq 4$, and $E_{k,n}$, $1\leq k\leq n-1$, is the strict transform of the exceptional curve $E_k$ of $X_{k}\rightarrow X_{k-1}$ on $X_{n}$. Now, let us look at the blown-up point $ P_{n+1}\in X_{n}$, $n\geq 4$. An element of $G_n=G_4$ can be written as \[ \sigma_a = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & a \end{bmatrix},\quad a\in \mathbb{C}^*. \] It leaves invariant the curve $L_{4,n}$ and the exceptional curve $E_n$ of the blow-up $X_n\rightarrow X_{n-1}$ for $n\geq 4$. At $P_4\in \mathbb{P}^2$ there are local coordinates $(x,y)$ such that \[ \sigma_a(x,y)=(x,ay).
\] At $P_5\in X_4$ there are local coordinates $(x/y, y)$, with $L_{4,4}=(x/y=0)$ and $ E_4=(y=0)$, such that \[ \sigma_a(x/y, y) = ((1/a)(x/y),ay). \] In general, at the point $P_{n+5}\in X_{n+4}$ with $n\geq 1$ there are local coordinates $(x/y^{n+1}, y)$ such that $L_{4, n+4} =(x/y^{n+1}=0)$ and $E_{n+4} =(y=0)$, and \[ \sigma_a(x/y^{n+1}, y) = ((1/a^{n+1})(x/y^{n+1}),ay). \] Observe that $x/y^{n+1}$ restricts to a coordinate on $E_{n+4}$, so $\sigma_a$ acts as the identity on $E_{n+4}$ if and only if $a^{n+1}=1$; the two intersection points $P_{n+5}, P_{n+5}'$ of $E_{n+4}$ with $L_{4, n+4}$ and the strict transform of $E_{n+3}$ are fixed by $\sigma_a$ for any $a\in \mathbb{C}^*$. For any $P\in E_{n+4}$, we have by Principle~\ref{prin: descend} \[ \mathrm{Aut}_{\mathbb{Q}}( \mathrm{Bl}_P(X_{n+4}))= \{\sigma\in G_{n+4}\mid \sigma(P) =P\}= \begin{cases} G_4 & \text{ if } P=P_{n+5} \text{ or } P_{n+5}' \\ \mu_{n+1} & \text{ otherwise} \end{cases} \] where $\mu_{n+1}$ is the cyclic group of order $n+1$. As $P$ varies, we obtain a family of surfaces $\Phi\colon \mathcal{X}\rightarrow E_{n+4}\cong\mathbb{P}^1$ such that the fibre $\Phi^{-1}(P) = \mathrm{Bl}_P(X_{n+4})$. These are all diffeomorphic, and under suitable diffeomorphisms $\mathrm{Aut}_{\mathbb{Q}}(\mathrm{Bl}_P(X_{n+4}))$ is always identified with a subgroup of the same group $G_4$. Hence \[ \mathrm{Aut}_{\mathbb{Q}}(\mathrm{Bl}_P(X_{n+4})) = \mathrm{Aut}_{*}(\mathrm{Bl}_P(X_{n+4})). \] For $P\in E_{n+4}-\{P_{n+5}, P_{n+5}'\}$ we have $\mathrm{Aut}_0(\mathrm{Bl}_P(X_{n+4}))=\{\mathrm{id}\}$, and \[ [\mathrm{Aut}_{*}(\mathrm{Bl}_P(X_{n+4})): \mathrm{Aut}_0(\mathrm{Bl}_P(X_{n+4}))] = |\mathrm{Aut}_{*}(\mathrm{Bl}_P(X_{n+4}))| = n+1, \] which can be arbitrarily large. \end{proof} \begin{rmk} (1) The above construction, exploring the difference between $\mathrm{Aut}_0(\mathrm{Bl}_P(X))$ and $\mathrm{Aut}_0(X)$, shows that the statement in the fourth paragraph of \cite[page 251]{Pe80} is wrong.
\end{rmk} Completely similar examples can be constructed for other non-minimal ruled surfaces, blowing up appropriately any decomposable $\mathbb{P}^1$-bundle over any curve $C$ of arbitrary genus. Indeed, generalizing Theorem~\ref{rational}, we obtain a recipe for constructing surfaces with $[\mathrm{Aut}_*(X):\mathrm{Aut}_0(X)]$ arbitrarily large: \begin{enumerate}[leftmargin=*] \item Choose a $\mathbb{C}^*$-surface $Z$, that is, a smooth projective surface with $\mathbb{C}^*\cong G< \mathrm{Aut}_0(Z)$. For example, we can take $Z$ to be any smooth toric surface or any decomposable $\mathbb{P}^1$-bundle over a curve. \item Let $Y\rightarrow Z$ be a composition of blow-ups of $G$-fixed points such that $\mathrm{Aut}_0(Y)=G$. These blow-ups kill the automorphisms in $\mathrm{Aut}_0(Z)\setminus G$ while preserving the action of $G$. \item Blow up a point $P\in Y$ that is fixed by the whole of $G$, together with suitable infinitely near points, in the same way as in the proof of Theorem~\ref{rational}: \[ X_{n}\rightarrow\cdots \rightarrow X_{2} \rightarrow X_{1} \rightarrow Y. \] The exceptional curve $E_n$ of the $n$-th blow-up $X_{n}\rightarrow X_{n-1}$ is then invariant under $G$ and is fixed pointwise by, and only by, a finite cyclic subgroup $\mathbb{Z}/n\mathbb{Z}\cong H<G$. \item Let $X\rightarrow X_{n} $ be the blow-up of a general point $ P\in E_n$. Then \[ \mathrm{Aut}_0(X) =\{\mathrm{id}_X\}, \text{ and } \mathrm{Aut}_*(X) \cong \mathbb{Z}/n\mathbb{Z}. \] \end{enumerate} The method can also be used to construct $X$ such that $\dim \mathrm{Aut}_0(X)>0$ and $[\mathrm{Aut}_*(X):\mathrm{Aut}_0(X)]$ is arbitrarily large.
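The local computation behind step (3), identifying the cyclic subgroup that fixes $E_n$ pointwise, can be sketched as in the proof of Theorem~\ref{rational}. Writing $u_k:=x/y^{k}$ for the chart coordinate at the $k$-th step (our notation) and letting $\sigma_a(x,y)=(x,ay)$ generate the $\mathbb{C}^*$-action, induction gives
\[
\sigma_a(u_k, y) = \bigl(a^{-k}u_k,\, ay\bigr), \qquad u_{k+1}=u_k/y,
\]
so that $\sigma_a$ acts on the exceptional curve $E_{k+1}$, with affine coordinate $u_{k+1}$, by $u_{k+1}\mapsto a^{-(k+1)}u_{k+1}$. Hence $\sigma_a$ fixes $E_{k+1}$ pointwise if and only if $a^{k+1}=1$, and the subgroup of $\mathbb{C}^*$ acting trivially on $E_n$ is $\mu_n\cong\mathbb{Z}/n\mathbb{Z}$.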
Surprisingly, however, the same unboundedness phenomenon for $\Gamma_*(X): = \mathrm{Aut}_*(X) / \mathrm{Aut}_0(X)$ can happen also for minimal ruled surfaces: \begin{theo}\label{thm: ell rule unbounded} Let $E$ be an elliptic curve and let $X : = \mathbb{P} (\ensuremath{\mathcal{O}}_E \oplus \ensuremath{\mathcal{O}}_E(D))$ where $D$ is a divisor of even positive degree $d = 2m > 0$. Then $\Gamma_*(X) $ surjects onto a subgroup of index at most $4$ inside $(\mathbb{Z}/d\mathbb{Z})^2 $, in particular $$|\Gamma_*(X) | \geq m^2.$$ \end{theo} \begin{proof} Consider the vector bundle $V := \ensuremath{\mathcal{O}}_E \oplus \ensuremath{\mathcal{O}}_E(D)$, so that $X = \mathbb{P}(V)$. Any $\ga \in \mathrm{Aut}(X)$ preserves the fibration $ f \colon X \ensuremath{\rightarrow} E$, in particular $\ga$ acts on $E$. If $\ga \in \mathrm{Aut}_{\mathbb{Q}}(X)$, then necessarily $\ga$ acts on $E$ by a translation $\tau_a, \ a \in E$. Since $ \tau_a^* (X) \cong X$, it must be that $ \tau_a^* (V) \cong V \otimes L$, for a suitable line bundle $L$, say, of degree $l$, on $E$. \begin{figure*} \caption{ An automorphism of an elliptic ruled surface inducing translation on the base.} \end{figure*} Comparing degrees (listed in increasing order), $(0,d) = (l, d + l ) \Rightarrow l = 0$, so $L$ is trivial, and $ \tau_a^* ( \ensuremath{\mathcal{O}}_E(D)) \cong \ensuremath{\mathcal{O}}_E(D)$, hence $$ a \in {\mathcal K} : = \mathrm{Ker}(\Phi_D : E \ensuremath{\rightarrow} E) \cong (\mathbb{Z}/d\mathbb{Z})^2,$$ where $\Phi_D (a) = \tau_a^* (D) - D \in E$.
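The isomorphism $\mathcal{K}\cong(\mathbb{Z}/d\mathbb{Z})^2$ follows from standard facts on elliptic curves (a short verification, not needed elsewhere): by the theorem of the square, $\Phi_D$ is a group homomorphism, and if $D$ is linearly equivalent to $d\cdot[0]$, then
\[
\Phi_D(a) = \tau_a^*(D)-D \sim d\bigl([-a]-[0]\bigr),
\]
which corresponds to $-d\,a$ under the Abel--Jacobi identification $\Pic^0(E)\cong E$. Hence
\[
\mathcal{K}=\mathrm{Ker}(\Phi_D)=E[d]\cong(\mathbb{Z}/d\mathbb{Z})^2,
\]
the group of $d$-torsion points of $E$.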
We have the exact sequence $$ 1 \ensuremath{\rightarrow} \mathbb{P} \mathrm{GL}(V) \ensuremath{\rightarrow} \mathrm{Aut}_{\mathbb{Q}}(X) \ensuremath{\rightarrow} {\mathcal K} \ensuremath{\rightarrow} 1$$ where the group $\mathbb{P} \mathrm{GL}(V)$ is connected, since it consists of the linear maps $$ (v_1, v_2) \mapsto (v_1, \be v_1 + \la v_2), \quad \be \in H^0 (E, \ensuremath{\mathcal{O}}_E(D)), \ \la \in \mathbb{C}^*.$$ It remains to show that a subgroup of index at most $4$ of $\mathrm{Aut}_{\mathbb{Q}}(X)$ is contained in $ \mathrm{Aut}_*(X)$, so that this subgroup maps via $\mathrm{Aut}_{\mathbb{Q}}(X) \ensuremath{\rightarrow} {\mathcal K}$ to a subgroup of index at most $4$ in ${\mathcal K}$. To prove this, observe that, differentiably, $V$ is classified by $c_1(V)$ (it is the pull-back of a classifying map to a Grassmannian of $2$-planes). Hence $V$ is topologically equivalent to $ \ensuremath{\mathcal{O}}_E(D')\oplus \ensuremath{\mathcal{O}}_E(D')$, where $D'$ is a divisor of degree $m$. Hence $X$ is differentiably equivalent to the product $ \mathbb{P}^1 \times E$, and $\mathrm{Aut}_{\mathbb{Q}}(X)$ maps to the group of vector bundle diffeomorphisms, so that $\ga (v,t) = (A(t) v, t + a)$, with $ a \in {\mathcal K}$, and $A : E \ensuremath{\rightarrow} \mathbb{P} \mathrm{GL}(2, \mathbb{C})$. Hence $\ga$ is isotopic to $\ga_0 (v,t) : = (A(t) v, t )$. Now, $$A : E \ensuremath{\rightarrow} \mathbb{P} \mathrm{GL}(2, \mathbb{C}) \simeq \mathbb{P} SU(2) = SU(2)/\{\pm 1\} \cong SO(3)$$ lifts to $SU(2) \cong S^3$ if and only if $\pi_1(A) : \pi_1(E) \ensuremath{\rightarrow} \pi_1 (\mathbb{P} SU(2) ) = \mathbb{Z}/2\mathbb{Z}$ is trivial. The group ${\mathcal G} : = \Hom ( \pi_1(E), \mathbb{Z}/2\mathbb{Z} ) \cong (\mathbb{Z}/2\mathbb{Z})^2$ has exactly $4$ elements, and we have therefore a homomorphism $\pi: \mathrm{Aut}_{\mathbb{Q}}(X) \ensuremath{\rightarrow} {\mathcal G}$.
For elements $\ga \in \mathrm{Ker} (\pi)$, we get diffeomorphisms such that the corresponding map $A$ has now $\pi_1(A)=0$. Therefore $\ga \in \mathrm{Ker} (\pi)$ yields a map $A$ which lifts to a differentiable map $A'$ to the three-sphere $S^3$. By Sard's lemma, the image of $A'$ omits a point $P_0$. Since $S^3 \setminus \{ P_0\} \cong \mathbb{R}^3$, which is contractible, we obtain that $A$ and $\ga$ are isotopic to the identity. \end{proof} The case of more general $\mathbb{P}^1$-bundles over curves shall be treated in the next sections. \section{Cohomologically trivial automorphisms of rational surfaces} In this section we first explain, in a sense, the philosophy behind the construction of Theorem \ref{rational}. Moreover, we sketch how a similar procedure leads to examples where $\mathrm{Aut}_0(X)$ is nontrivial and the group of components $\Gamma_*(X)$ can be arbitrarily large. Let $X$ be a blow-up of the projective plane $\mathbb{P}^2$, obtained by first blowing up $P_1, \dots, P_r \in \mathbb{P}^2$, and then infinitely near points $P_{r+1}, \dots, P_k$. We assume that $k \geq 2$, else we have the plane or the $\mathbb{P}^1$-bundle $ \mathbb{F}_1 : = \mathbb{P} (\ensuremath{\mathcal{O}}_{\mathbb{P}^1} \oplus \ensuremath{\mathcal{O}}_{\mathbb{P}^1}(1))$, and for these $\mathrm{Aut}(X) = \mathrm{Aut}_0 (X)$. Then we define inductively: $$ G_0 : = \mathbb{P} \mathrm{GL} (3, \mathbb{C}), \ G_1 : = \{ g \in G_0 \mid g (P_1 ) = P_1\}, \ G_{i+1} : = \{ g \in G_i \mid g (P_{i+1} ) = P_{i+1}\},$$ and $ X_0 : = \mathbb{P}^2, X_{1}, \dots , X_{k} = X$. Here $G_i$ acts on $X_i$ and $ G_{i+1}$ is the fibre over $ P_{i+1}$ of the orbit map $G_i \ensuremath{\rightarrow} G_i P_{i+1}$; notice that stabilizers of points lying in a fixed orbit are conjugate, hence all fibres are smooth and isomorphic; in particular they are connected if the orbit is simply connected, a fortiori if $G_i$ is simply connected.
Clearly $ G_{i+1} = G_i$ if the orbit has dimension $0$, and $G_{i+1}$ is connected if the orbit is isomorphic to $\mathbb{C}$ or $\mathbb{P}^1$. But $G_{i+1}$ may not be connected if the orbit is isomorphic to $\mathbb{C}^*$. Notice that $G_1$ is isomorphic to $ \mathrm{Aff} (2, \mathbb{C})$, while if $r \geq 2$, then $G_2$ is isomorphic to the subgroup of the affine group $ \mathrm{Aff} (2, \mathbb{C})$ of transformations $ v \mapsto A v + w$ such that $e_1$ and $e_2$ are eigenvectors of the matrix $A$ (put the two points at infinity). If $ r \geq 3$, then $G_3$ is isomorphic to $\mathbb{C}^* \times \mathbb{C}^*$ if the three points are not collinear, else we get the group $\mathbb{C}^2 \rtimes \mathbb{C}^*$ of dilations $ v \mapsto \la v + w$ ($\la \in \mathbb{C}^*, w \in \mathbb{C}^2$). We can briefly summarize the situation as follows. If $X$ is a blow-up of $\mathbb{P}^2$, then $G_k = \mathrm{Aut}_{\mathbb{Q}}(X) $ is: \begin{enumerate} \item trivial if $r \geq 4 $ and $\{P_1, \dots, P_r\}$ contains a projective basis, \item a subgroup of $\mathbb{C}^* \times \mathbb{C}^*$ if $ r \geq 3$ and $\{P_1, \dots, P_r\}$ contains three non-collinear points; here $G_k$ need not be connected if $k \geq 5$; \item a subgroup of $\mathbb{C}^2 \rtimes \mathbb{C}^*$ if all the points $P_1, \dots, P_r $ are collinear and $r \geq 3$; again here $G_k$ need not be connected if $k \geq 5$; \item connected if $k \in \{1,2\}$, and also for $k=3$ (in both cases $r=2$ and $r=1$); \item possibly disconnected for $k=4$, $r=1$. \end{enumerate} To conclude these general observations, we note that if $X$ is a blow-up of $\mathbb{P}^2$, then $\mathrm{Aut}_{\mathbb{Q}}(X) = \mathrm{Aut}_{*}(X)$.
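As a concrete check of the first alternative for $G_3$, take the three non-collinear points to be the coordinate points $(1:0:0)$, $(0:1:0)$, $(0:0:1)$. A matrix in $\mathrm{GL}(3,\mathbb{C})$ fixing the three coordinate directions is diagonal, so
\[
G_3 = \left\{ \mathrm{diag}(\la_1,\la_2,\la_3) \bmod \mathbb{C}^* \mid \la_i\in\mathbb{C}^* \right\} \cong \mathbb{C}^*\times\mathbb{C}^*,
\]
the maximal torus of $\mathbb{P}\mathrm{GL}(3,\mathbb{C})$.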
Indeed the same statement holds for all rational surfaces, and a proof could be done by induction on $k$ for $\mathrm{Aut}_{\mathbb{Q}}(X)= G_k$ provided one could show the following general assertion: \begin{question} Let $X$ be a compact complex manifold and $\s \in \mathrm{Aut}(X)$ an automorphism which admits a fixed point $P$ and which is differentiably isotopic to the identity. Let $Z $ be the blow-up of $X$ at $P$, and $\s'$ the induced automorphism of $Z$. Is $\s'$ differentiably isotopic to the identity? \end{question} Following the ideas we have introduced, we shall give a proof of the next theorem, which can be viewed as a cohomological characterization of $C^\infty$-isotopically trivial automorphisms for smooth projective rational surfaces. \begin{thm}\label{thm: rational} Let $X$ be a smooth projective rational surface. Then \[ \mathrm{Aut}_*(X)=\mathrm{Aut}_\mathbb{Z}(X) = \mathrm{Aut}_\mathbb{Q}(X). \] \end{thm} We need the following lemma for the proof of Theorem~\ref{thm: rational}. \begin{lem}\label{lem: Gm} Let $G$ be a connected linear algebraic group, defined over $\mathbb{C}$. Let $H$ be an algebraic subgroup of $G$ and $H_0$ its identity component. Then, for any $\sigma\in H\setminus H_0$, there is an element $\sigma'\in \sigma H_0$ such that $\sigma'$ has finite order, and $\sigma'$ is contained in a $1$-dimensional multiplicative subgroup $T\cong\mathbb{C}^*$ of $G$. \end{lem} \begin{proof} Let $\sigma=\sigma_s\sigma_u$ be the Jordan decomposition in $H$, with $\sigma_s$ semisimple and $\sigma_u$ unipotent (\cite[Theorem~2.4.8]{Spr98}, \cite{humphreys} 15.3, page 99). Necessarily $\sigma_u\in H_0$, because the Zariski closure of the subgroup generated by $\sigma_u$ is an additive group $\mathbb{G}_a \cong \mathbb{C}$ (see \cite{humphreys}, 15.5 exercise 9, page 101). Hence $\sigma_s\in \sigma H_0$. We can replace $\sigma$ by $\sigma_s$, and assume that $\sigma$ is semi-simple.
Since $H$ is algebraic, there exists an $n\in\mathbb{Z}_{>0}$ such that $\gamma:=\sigma^n \in H_0$. Then (see \cite{humphreys}, 22.2, Cor. A, page 140, and Prop. 19.2, pages 122-123) there is a maximal torus $T_H$ of $H_0$ containing $\gamma$. Note that $T_H$ is divisible and commutative. Therefore, there exists a $\tau\in T_H$ such that $\sigma^n= \tau^n$, and $\sigma\tau=\tau\sigma$ follows since $\sigma$ commutes with the whole 1-parameter subgroup generated by $\gamma$. Then $\sigma':=\sigma\tau^{-1}\in \sigma T_H\subset \sigma H_0$ is of finite order. Since $G$ is connected, there is a maximal torus $T_G$ of $G$ containing $\sigma'$. Since $\sigma'$ has finite order, one sees easily that there is a $1$-dimensional multiplicative subgroup $T\cong\mathbb{C}^*$ of $T_G$ containing $\sigma'$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: rational}] It suffices to show that $\mathrm{Aut}_*(X) = \mathrm{Aut}_\mathbb{Q}(X)$. Let $\sigma\in \mathrm{Aut}_\mathbb{Q}(X)$ be a numerically trivial automorphism. Without loss of generality, we can assume that $\sigma\notin \mathrm{Aut}_0(X)$. Let $f\colon X\rightarrow Y$ be a birational morphism to a smooth minimal model. Since $f$ is a composition of blow-downs of $(-1)$-curves, which are preserved by numerically trivial automorphisms (Principle~\ref{prin: negative}), the automorphism $\sigma$ descends all along to an automorphism $\sigma_Y\in \mathrm{Aut}_\mathbb{Q}(Y)$. In fact, we have $\mathrm{Aut}_\mathbb{Q}(X)\subset \mathrm{Aut}_\mathbb{Q}(Y)$, viewed as subgroups of the birational automorphism group $\mathrm{Bir}(X)=\mathrm{Bir}(Y)$. Note that $Y$ is either $\mathbb{P}^2$ or one of the Segre--Hirzebruch surfaces, and $\mathrm{Aut}_\mathbb{Q}(Y)=\mathrm{Aut}_0(Y)$ holds. By Lemma~\ref{lem: Gm}, after replacing $\sigma$ by an automorphism $\sigma'$ in the same component of $\mathrm{Aut}_\mathbb{Q}(X)$, there is a subgroup $T\cong\mathbb{C}^*$ of $\mathrm{Aut}_\mathbb{Q}(Y)$ containing $\sigma$.
Factor $f\colon X\rightarrow Y$ into \[ f\colon X= X_{n}\xrightarrow{f_{n}} X_{n-1} \xrightarrow{f_{n-1}}\cdots \rightarrow X_{1} \xrightarrow{f_{1}} X_{0} =Y \] so that $f_{i+1}\colon X_{i+1}\rightarrow X_{i}$ blows up an $\mathrm{Aut}_\mathbb{Q}(X)$-fixed point $P_{i}\in X_{i}$. There is some $0\leq i\leq n-1$ such that \begin{itemize}[leftmargin=*] \item $T<\mathrm{Aut}(X_{i})$, but \item $P_{i}\in X_{i}$ is not fixed by the whole $T$, \end{itemize} so the action of $T$ does not lift to $X_{i+1}$. The orbit of $P_{i}$ under the action of $T$ is $TP_{i}\cong \mathbb{C}^*.$ The closure $\overline{TP_{i}}$ is an irreducible rational curve with $T$ acting on it, and the normalization $\nu\colon\mathbb{P}^1\rightarrow \overline{TP_{i}}$ is $T$-equivariant. Let $\Gamma_\nu\subset \mathbb{P}^1\times X_{i}$ be the graph of $\nu\colon \mathbb{P}^1\rightarrow \overline{TP_{i}}\hookrightarrow X_{i}$. Note that $T$ acts diagonally on $\mathbb{P}^1\times X_{i}$ and $\Gamma_\nu$ is $T$-invariant. Blowing up $\mathbb{P}^1\times X_{i}$ along $\Gamma_\nu$, we obtain a $T$-equivariant family \[ \Phi_{i+1}\colon {\mathcal X}_{i+1} \xrightarrow{F_{i+1}} \mathbb{P}^1\times X_{i} \rightarrow \mathbb{P}^1. \] The fibre of $\Phi_{i+1}$ over $P_{i}\in TP_{i}\subset \mathbb{P}^1$ is just $X_{i+1}$, and the action of $T$ lifts to the fibres over $0$ and $\infty$. Now let us look at the blow-up $f_{i+2}\colon X_{i+2}\rightarrow X_{i+1}$. Recall that the blown-up point is $P_{i+1}\in X_{i+1} \subset {\mathcal X}_{i+1}$. The orbit $TP_{i+1} \subset {\mathcal X}_{i+1}$ is a section of $\Phi_{i+1}$ over $\mathbb{P}^1\setminus \{0,\infty\}$, and it readily extends to a section ${\mathcal S}_{i+1}$ over the whole $\mathbb{P}^1$. Let $F_{i+2}\colon {\mathcal X}_{i+2}\rightarrow {\mathcal X}_{i+1}$ be the blow-up along ${\mathcal S}_{i+1}\cong \mathbb{P}^1$, which is $T$-equivariant, extending $f_{i+2}\colon X_{i+2}\rightarrow X_{i+1}$ to a family of blow-ups.
We can continue this way, extending each blow-up $f_{j+1}\colon X_{j+1}\rightarrow X_j$, $j\geq i$, to a $T$-equivariant family of blow-ups $F_{j+1}\colon {\mathcal X}_{j+1}\rightarrow {\mathcal X}_j$. In the end, we get a commutative diagram of $T$-equivariant morphisms: \[ \begin{tikzcd} {\mathcal X}_{n} \arrow[r, "F_{n}"] \arrow[rrd,"\Phi_{n}"']&{\mathcal X}_{n-1} \arrow[r, "F_{n-1}"] \arrow[rd] &\cdots \arrow[r] \arrow[d]& {\mathcal X}_{i+1} \arrow[r, "F_{i+1}"] \arrow[dl]& {\mathcal X}_{i} \arrow[dll, "\Phi_{i}"]\\ && \mathbb{P}^1 && \end{tikzcd} \] The $T$-equivariant family $\Phi_n\colon {\mathcal X}_{n}\rightarrow \mathbb{P}^1$ satisfies the following properties: \begin{itemize}[leftmargin=*] \item $X$ is the fibre of $\Phi_{n}$ over the point $P_{i}\in \mathbb{P}^1$; \item the subgroup $H=\langle \mathrm{Aut}_0(X), \sigma \rangle$ of $\mathrm{Aut}_\mathbb{Q}(X)$ acts fibrewise on ${\mathcal X}_{n}$ and extends to an action of $T$ on the two fibres over $0$ and $\infty$. \end{itemize} Since $T\cong\mathbb{C}^*$ is connected, we infer that $\sigma$ is $C^\infty$-isotopic to $\mathrm{id}_X$. \end{proof} \section{Cohomologically trivial automorphisms of surfaces according to Kodaira dimension} In this section, we begin to investigate more systematically the boundedness of $[\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_0(X)]$, looking through the Enriques--Kodaira classification of surfaces. \subsection{Case $\kappa(X)=-\infty$} In this subsection we study the case $\kappa(X)=-\infty$ and $X \neq \mathbb{P}^2, \mathbb{P}^1 \times \mathbb{P}^1$. In fact, for $\mathbb{P}^2$, $\mathrm{Aut}(X) = \mathrm{Aut}_0(X)$, while, for $X= \mathbb{P}^1 \times \mathbb{P}^1$, $\mathrm{Aut}_0(X) < \mathrm{Aut}(X)$ has index $2$ and $\mathrm{Aut}_0(X) = \mathrm{Aut}_\mathbb{Q}(X)$. Let $B$ be a smooth projective curve and $f\colon X \rightarrow B$ be a $\mathbb{P}^1$-bundle.
Then $X\cong \mathbb{P}({\mathbb E})$ is the projectivization of some rank two vector bundle ${\mathbb E}\rightarrow B$. We denote by $\mathcal{E}$ the sheaf of holomorphic sections of ${\mathbb E}$ and often do not distinguish between $\mathcal{E}$ and ${\mathbb E}$. We shall say that $X$ is decomposable as a ruled surface over $B$ if ${\mathbb E}$ is so. We have $\pi_1(X)\cong\pi_1(B)$ and $H_1(X,\mathbb{Z}) \cong H_1(B,\mathbb{Z})$, which is torsion free, while $H^2(X,\mathbb{Z}) = \mathbb{Z} F \oplus \mathbb{Z} \Sigma$, where $F$ is a fibre and $\Sigma$ is a section, hence $\mathrm{Aut}_\mathbb{Q}(X) = \mathrm{Aut}_\mathbb{Z}(X)$. Moreover, topologically and differentiably the bundle ${\mathbb E}$ is determined by its first Chern class (which determines the class of the map to the classifying space, the infinite Grassmannian $\mathrm{Gr}_{\mathbb{C}}(2, \infty)$). Hence from the differentiable viewpoint there are only two cases: \begin{enumerate} \item $\deg ({\mathcal E})$ even: $X$ is diffeomorphic to $ \mathbb{P}^1 \times B$; \item $\deg ({\mathcal E})$ odd: $X$ is diffeomorphic to $\mathbb{P} (\ensuremath{\mathcal{O}}_B \oplus \ensuremath{\mathcal{O}}_B(P))$, where $P$ is a point of $B$, and it is obtained from $ \mathbb{P}^1 \times B$ via an elementary transformation, blowing up a point and then blowing down the strict transform of the fibre through the point. \end{enumerate} Moreover, if $\deg ({\mathcal E})$ is odd and $B$ has genus $\geq 1$, then there is an \'etale double covering $B' \ensuremath{\rightarrow} B$ such that the pull back $X'$ is diffeomorphic to $\mathbb{P}^1 \times B'$. The following facts about $\mathrm{Aut}(X)$ can be found for instance in \cite{Ma71}: \begin{enumerate}[leftmargin=*] \item If $X\neq \mathbb{P}^1\times\mathbb{P}^1$ then there is exactly one $\mathbb{P}^1$-bundle structure on $X$, so we have an exact sequence \[ 1\rightarrow \mathrm{Aut}_B(X) \rightarrow \mathrm{Aut}(X)\ensuremath{\rightarrow} \mathrm{Aut}(B).
\] where $$\mathrm{Aut}_B(X) = \{\sigma\in\mathrm{Aut}(X)\mid \sigma \text{ preserves every fibre of } f\colon X\rightarrow B\}.$$ The image of $\mathrm{Aut}(X)\ensuremath{\rightarrow} \mathrm{Aut}(B)$ is the group of automorphisms $\ga$ of $B$ such that $\ga^* ({\mathcal E}) \cong {\mathcal E} \otimes L$ for a suitable line bundle $ L \in \Pic^0(B)$. \item The exact sequence of sheaves of Lie groups on $B$ $$ 1 \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_B^* \ensuremath{\rightarrow} \mathrm{GL}(2, \ensuremath{\mathcal{O}}_B) \ensuremath{\rightarrow} \mathbb{P} \mathrm{GL}(2, \ensuremath{\mathcal{O}}_B) \ensuremath{\rightarrow} 1$$ induces a short exact sequence \[ 1\rightarrow \mathrm{Aut}_B({\mathbb E})/\mathbb{C}^* \rightarrow \mathrm{Aut}_B(X) \rightarrow \Delta\rightarrow 1 \] where $\mathrm{Aut}_B({\mathbb E})$ denotes the automorphism group of the vector bundle ${\mathbb E}$ over $B$, $\mathbb{C}^*$ acts on the fibres of ${\mathbb E}\rightarrow B$ by scalar multiplication, and $\Delta:=\{\mathcal{L}\in\Pic^0(B)\mid \mathcal{E}\otimes\mathcal{L}\cong\mathcal{E}\}$. Note that the cokernel $\Delta$ is contained in $\Pic^0(B)[2]$, the $2$-torsion part of $\Pic^0(B)$: this follows since every automorphism of $X$ preserves the class of the relative canonical divisor. \item Let $e:= \max \{2\deg\mathcal{L} -\deg \mathcal{E} \mid \mathcal{L}\subset \mathcal{E} \text{ invertible subsheaf}\}$. Observe that $- e$ is the minimal self-intersection of a section of $ X \ensuremath{\rightarrow} B$, and that ${\mathcal E}$ is stable if $ e < 0$. Then there are the following possibilities for $\mathrm{Aut}_B({\mathbb E})$: \begin{enumerate}[leftmargin=*] \item If $e<0$, that is, $\mathcal{E}$ is stable, then $\mathrm{Aut}_B({\mathbb E}) = \mathbb{C}^*$.
\item If $\mathcal{E}$ is indecomposable, $\mathcal{L}\subset\mathcal{E}$ is the (unique) invertible subsheaf of maximal degree, and $r=h^0(B, \mathcal{L}^{\otimes2}\otimes(\det\mathcal{E})^{-1})$, then $\mathrm{Aut}_B({\mathbb E}) \cong H_r$, where \[ H_r :=\left\{\left( \begin{pmatrix} \alpha & 0\\ 0 & \alpha \end{pmatrix},\begin{pmatrix} \alpha & t_1\\ 0 & \alpha \end{pmatrix},\cdots, \begin{pmatrix} \alpha & t_r\\ 0 & \alpha \end{pmatrix} \right)\in \mathrm{GL}(2,\mathbb{C})^{r+1} \mid\alpha\in\mathbb{C}^*, t_i\in \mathbb{C} \right\}. \] \item If $\mathcal{E}=\mathcal{L}_1\oplus\mathcal{L}_2$ with $\mathcal{L}_1\not\cong \mathcal{L}_2$ and $\deg\mathcal{L}_1\geq\deg\mathcal{L}_2$, then $\mathrm{Aut}_B({\mathbb E}) \cong H_r '$, where \[ H_r ':=\left\{\left( \begin{pmatrix} \alpha & 0\\ 0 & \beta \end{pmatrix},\begin{pmatrix} \alpha & t_1\\ 0 & \beta \end{pmatrix},\cdots, \begin{pmatrix} \alpha & t_r\\ 0 & \beta \end{pmatrix} \right)\in \mathrm{GL}(2,\mathbb{C})^{r+1} \mid\alpha,\beta\in\mathbb{C}^*, t_i\in \mathbb{C} \right\} \] and $r=h^0(B, \mathcal{L}_1^{\otimes 2}\otimes(\det \mathcal{E})^{-1})$. \item If $\mathcal{E}=\mathcal{L}\oplus \mathcal{L}$, then $\mathrm{Aut}_B({\mathbb E}) \cong \mathrm{GL}(2,\mathbb{C})$. \end{enumerate} \end{enumerate} \begin{cor}\label{cor: q geq 2} Let $f\colon X=\mathbb{P}({\mathbb E})\rightarrow B$ be a $\mathbb{P}^1$-bundle over a smooth curve $B$ with $g(B)\geq 2$. Then $\mathrm{Aut}_\mathbb{Q}(X) = \mathrm{Aut}_\mathbb{Z}(X) = \mathrm{Aut}_B(X)$, $\mathrm{Aut}_0(X) \cong \mathrm{Aut}_B({\mathbb E})/\mathbb{C}^*$ and $\mathrm{Aut}_\mathbb{Z}(X)/\mathrm{Aut}_0(X)\cong \Delta$, where $\Delta$ is as in (2) above. \end{cor} \begin{proof} The automorphism group $\mathrm{Aut}_\mathbb{Q}(X) = \mathrm{Aut}_\mathbb{Z}(X) $ induces a trivial action on $H^1(X, \mathbb{Z}) = H^1(B,\mathbb{Z})$. Since $g(B)\geq 2$, we infer that $\mathrm{Aut}_\mathbb{Z}(X)$ induces a trivial action on $B$.
Thus $\mathrm{Aut}_\mathbb{Z}(X)\subset \mathrm{Aut}_B(X)$. The inclusion in the other direction is clear. The isomorphisms $\mathrm{Aut}_0(X) \cong \mathrm{Aut}_B({\mathbb E})/\mathbb{C}^*$ and $\mathrm{Aut}_\mathbb{Z}(X)/\mathrm{Aut}_0(X)\cong \Delta$ come from (2) above. \end{proof} \begin{theo}\label{thm: q geq 2} Let $f\colon X\cong \mathbb{P}({\mathbb E})\rightarrow B$ be a $\mathbb{P}^1$-bundle over a curve of genus $ g(B)\geq 2$. Then the equalities $\mathrm{Aut}_\mathbb{Q}(X) = \mathrm{Aut}_{\mathbb{Z}}(X) = \mathrm{Aut}_B(X)$ and $\mathrm{Aut}_\sharp(X)=\mathrm{Aut}_*(X)=\mathrm{Aut}_0(X)$ hold. Thus $\Gamma_\mathbb{Q}(X) = \Gamma_\mathbb{Z}(X) \cong \Delta$, while $\Gamma_\sharp(X)$ and $\Gamma_* (X)$ are trivial, where $\Gamma_\mathbb{Q}(X)$, $\Gamma_\mathbb{Z}(X)$, $\Gamma_\sharp(X)$ and $\Gamma_*(X)$ are the groups of connected components of $\mathrm{Aut}_\mathbb{Q}(X)$, $\mathrm{Aut}_\mathbb{Z}(X)$, $\mathrm{Aut}_\sharp(X)$ and $\mathrm{Aut}_*(X)$ respectively. \end{theo} \begin{proof} The first assertion is from Corollary~\ref{cor: q geq 2}, and we have a short exact sequence of groups \[ 1\rightarrow \mathrm{Aut}_0(X) \rightarrow \mathrm{Aut}_B (X) \rightarrow \Delta \rightarrow 1. \] We first consider the case where $\deg {\mathcal E} $ is even, so that we can assume that ${\mathcal E}$ has degree zero, hence ${\mathcal E}$ is a differentiably trivial vector bundle, and a fortiori $X=\mathbb{P}({\mathcal E})$ is a differentiably trivial $S^2$-bundle over $B$. That is, there is a diffeomorphism $\varphi\colon X\rightarrow B\times S^2$ such that the following diagram is commutative: \[ \begin{tikzcd} X \arrow[rr, "{\varphi}"]\arrow[rd, "f"'] & & B\times S^2 \arrow[ld, "\mathrm{pr}_1"] \\ & B& \end{tikzcd} \] where $\mathrm{pr}_1$ denotes the projection to the first factor. We need to show that $\mathrm{Aut}_\sharp(X) = \mathrm{Aut}_0(X)$.
We have that $\mathrm{Aut}_{\mathbb{Q}} (X) = \mathrm{Aut}_B(X)$ maps onto $\Delta$ with kernel $\mathrm{Aut}_0(X) $. $\mathrm{Aut}_B(X)$ consists of sections $\s$ of the sheaf $\mathbb{P} \mathrm{GL}(2, \ensuremath{\mathcal{O}}_B) = \mathbb{P} \mathrm{SL}(2, \ensuremath{\mathcal{O}}_B)$, and $\Delta$ measures the obstruction to lifting to a section of $ \mathrm{SL}(2, \ensuremath{\mathcal{O}}_B)$. This obstruction is topological. $\Delta$ is a group of line bundles of $2$-torsion, hence it is a group of maps $ \delta(\sigma)\colon \pi_1(B) \ensuremath{\rightarrow} \mathbb{Z}/2\mathbb{Z}$. Since $ X$ is differentiably trivial, to $\s$ corresponds a diffeomorphism $$H \colon B \times \mathbb{P}^1 \ensuremath{\rightarrow} B \times \mathbb{P}^1,$$ linear on the fibres $\mathbb{P}^1$, hence a differentiable map $s\colon B \ensuremath{\rightarrow} \mathbb{P} \mathrm{SL}(2, \mathbb{C})$, and this map is liftable to $s'\colon B \ensuremath{\rightarrow} \mathrm{SL}(2, \mathbb{C})$ if and only if $$\pi_1(s)\colon \pi_1(B) \ensuremath{\rightarrow} \pi_1( \mathbb{P} \mathrm{SL}(2, \mathbb{C})) = \mathbb{Z}/2\mathbb{Z}$$ is trivial. This homomorphism is exactly $\de(\s)$, as it is easy to verify. The diffeomorphism $H \colon B \times \mathbb{P}^1 \ensuremath{\rightarrow} B \times \mathbb{P}^1$, via the second projection, gives a continuous map $ h\colon B \ensuremath{\rightarrow} \mathrm{ContMaps} (\mathbb{P}^1, \mathbb{P}^1)_1$ of $B$ into the space of continuous self-maps of degree $1$ on $\mathbb{P}^1$. By a result of Graeme Segal, \cite{segal}, this space is homotopically equivalent up to dimension $1$ to $\mathbb{P} \mathrm{SL}(2, \mathbb{C})$. Hence, if a map $H$ is homotopic to the identity, then also $h$ is homotopic to the identity, hence $h$ induces a trivial homomorphism of fundamental groups $ \pi_1(B) \ensuremath{\rightarrow} \mathbb{Z}/2\mathbb{Z}$. The conclusion is that $\s \in \mathrm{Aut}_\sharp(X)$ maps trivially to $\Delta$, hence $\s \in \mathrm{Aut}_0(X)$; see Corollary~\ref{cor: q geq 2}.
In the case where ${\mathcal E}$ is of odd degree, take an \'etale double cover $B' \ensuremath{\rightarrow} B$, so that the pull back $X'$ of $X$ is diffeomorphic to $\mathbb{P}^1 \times B'$. Any $\s \in \mathrm{Aut}_{\mathbb{Z}} (X)$ lifts to $X'$, since $\s$ acts trivially on $H^1 (B, \mathbb{Z}/2\mathbb{Z})$, yielding a lift $\s' \in \mathrm{Aut}_{B'}(X')$. Observe that the pull-back map $$H^1 (B, \mathbb{Z}/2\mathbb{Z}) \ensuremath{\rightarrow} H^1 (B', \mathbb{Z}/2\mathbb{Z})$$ has kernel $\cong \mathbb{Z}/2\mathbb{Z}$ generated by the class of the \'etale covering $B' \ensuremath{\rightarrow} B$. We have that $\de(\s') $ is the pull-back of $\de(\s)$, and if $\de(\s) \neq 0 $ we can then choose $B' \ensuremath{\rightarrow} B$ appropriately so that $\de(\s') \neq 0$. By the same token, if $\s$ is homotopic to the identity, then also $h' $ is homotopic to the identity, hence to $\s$ corresponds the trivial element in $\Delta$, and we are done also in this case, arguing by contradiction. \end{proof} For completeness we summarize here some results by Maruyama. \begin{thm}[{\cite[Theorem 3 and Remark 6]{Ma71}}] Let $f\colon X=\mathbb{P}({\mathbb E})\rightarrow B$ be a $\mathbb{P}^1$-bundle over a smooth curve $B$ of genus $g(B)=q (X) \leq 1$. Then the following holds. \begin{enumerate}[leftmargin=*] \item Suppose that $q(X)=0$. If $X={\mathbb F}_e$ with $e>0$, then $\mathrm{Aut}(X) =\mathrm{Aut}_0(X) $, and, more precisely, we have an exact sequence \[ 1\rightarrow \bar H_{e+1} \rightarrow\mathrm{Aut}(X) \rightarrow \mathbb{P}\mathrm{GL}(2,\mathbb{C}) \rightarrow 1, \] where $\bar H_{e+1}:=H_{e+1}/\mathbb{C}^*$; see the paragraph above Corollary~\ref{cor: q geq 2} for the definition of $H_{e+1}$. \item Suppose that $B$ is elliptic and $X$ is decomposable.
Then we have an exact sequence: \[ 1\rightarrow \mathrm{Aut}_B(X)\rightarrow \mathrm{Aut}(X) \rightarrow H \rightarrow 1, \] where $H=\{\sigma\in\mathrm{Aut}(B)\mid \sigma^*(\mathcal{L}^{\otimes 2}\otimes \det(\mathcal{E})^{-1})\cong \mathcal{L}^{\otimes 2}\otimes \det(\mathcal{E})^{-1}\}$. Moreover, the following holds. \begin{enumerate}[leftmargin=*] \item If $e>0$ then $\mathrm{Aut}_B(X) =\mathrm{Aut}_0(X)\cong \bar H_r'$, where $r=h^0(\mathcal{L}^{\otimes 2}\otimes \det(\mathcal{E})^{-1})$ and $\bar H_r'=H_r'/\mathbb{C}^*$ (see the paragraph above Corollary~\ref{cor: q geq 2} for the definition of $H_r'$), and we have an exact sequence \[ 1\rightarrow \mathrm{Aut}_B(X)\rightarrow \mathrm{Aut}_\mathbb{Z}(X) \rightarrow \mathrm{Aut}_0(B)[e] \rightarrow 1, \] where $\mathrm{Aut}_0(B)[e]\cong(\mathbb{Z}/e\mathbb{Z})^2$ denotes the $e$-torsion part of $\mathrm{Aut}_0(B)$. \item If $e=0$ and $X$ has only one minimal section, then $\mathrm{Aut}_0(X) = \mathrm{Aut}_\mathbb{Z}(X)$, $\mathrm{Aut}_B(X) \cong \bar H_r'$, and we have an exact sequence \[ 1\rightarrow \mathrm{Aut}_B(X)\rightarrow \mathrm{Aut}_0(X) \rightarrow \mathrm{Aut}_0(B) \rightarrow 1. \] \item If $e=0$ and $X$ has exactly two minimal sections $C_1$ and $C_2$, then $\mathrm{Aut}_0(X)\cong X\setminus(C_1\cup C_2)$, where the latter is endowed with its natural algebraic group structure, and we have $\mathrm{Aut}_B(X)=\mathrm{Aut}_0(X)\rtimes \mathbb{Z}/2\mathbb{Z}$, where $\mathbb{Z}/2\mathbb{Z}$ interchanges the two sections $C_1$ and $C_2$. \item If $X=\mathbb{P}^1\times B$ then $\mathrm{Aut}(X) = \mathrm{Aut}(\mathbb{P}^1)\times \mathrm{Aut}(B) = \mathbb{P}\mathrm{GL}(2, \mathbb{C})\times \mathrm{Aut}(B)$, and $\mathrm{Aut}_\mathbb{Z}(X)=\mathrm{Aut}_0(X) = \mathbb{P}\mathrm{GL}(2, \mathbb{C})\times \mathrm{Aut}_0(B)$. \end{enumerate} \item Suppose that $B$ is elliptic and $X$ is indecomposable.
\begin{enumerate} \item If $e=0$ then we have an exact sequence \[ 1\rightarrow \mathbb{C}^* \rightarrow \mathrm{Aut}(X) \rightarrow \mathrm{Aut}(B) \rightarrow 1. \] In this case, $\mathrm{Aut}_\mathbb{Z}(X) = \mathrm{Aut}_0(X)\cong X\setminus C$, where $C\subset X$ is the unique minimal section; it is a nontrivial extension of $\mathrm{Aut}_0(B)$ by $\mathbb{C}^*$. \item If $e=1$ then we have an exact sequence \[ 1\rightarrow \Delta \rightarrow \mathrm{Aut}(X) \rightarrow \mathrm{Aut}(B) \rightarrow 1, \] where $\Delta=\Pic^0(B)[2]\cong(\mathbb{Z}/2\mathbb{Z})^2$. Furthermore, $\mathrm{Aut}_\mathbb{Z}(X)=\mathrm{Aut}_0(X)$ and $\Delta$ is contained in $\mathrm{Aut}_0(X)$, so there is an exact sequence \[ 1\rightarrow \Delta \rightarrow \mathrm{Aut}_0(X) \rightarrow \mathrm{Aut}_0(B) \rightarrow 1. \] \end{enumerate} \end{enumerate} \end{thm} \subsection{Case $\kappa(X)=0$} In this section, we treat the surfaces $X$ with $\kappa(X)=0$. The following is a list of known facts: \begin{enumerate}[leftmargin=*] \item If $X$ is a K3 surface, then $\mathrm{Aut}_\mathbb{Q}(X)=\{\mathrm{id}_X\}$ by \cite{br}. \item If $X$ is an Enriques surface, then $|\mathrm{Aut}_\mathbb{Q}(X)|\leq 4$ and $|\mathrm{Aut}_\mathbb{Z}(X)|\leq 2$, and both bounds are sharp \cite{MN84}. The fact that $\mathrm{Aut}_\mathbb{Z}(X)$ can be nontrivial for an Enriques surface contradicts the last statement of \cite[Theorem~2.2]{Pe80}. \item If $X$ is an abelian surface, then $\mathrm{Aut}_{\mathbb{Q}}(X) = \mathrm{Aut}_0(X)\cong X$. \end{enumerate} Suppose now that $X$ is a hyperelliptic surface (bielliptic surface in the notation of \cite{Be96}). These are the prototype examples of an SIP of unmixed type, and were classified by Bagnera and de Franchis, resp.\ by Enriques and Severi (\cite{bdf}, \cite{es9}, \cite{es10}). These are $X=(F \times E)/\Delta_G$ with $E$ and $F$ elliptic curves and $G$ acting freely on $E$, while $g(F/G)=0$.
We can apply Principle \ref{SIPU} and obtain the following theorem. \begin{theo}\label{thm: hyperelliptic} Let $X=(F \times E)/\Delta_G$ be a hyperelliptic surface in the above notation. Then $\mathrm{Aut}_\mathbb{Z}(X) \cong E = \mathrm{Aut}_0(X)$ and $\mathrm{Aut}_\mathbb{Q}(X)/\mathrm{Aut}_\mathbb{Z}(X)$ is isomorphic to one of the following groups: \[ 1,\, \mathbb{Z}/2\mathbb{Z},\, (\mathbb{Z}/2\mathbb{Z})^2,\, \mathfrak S_3,\, D_4, \, \mathfrak A_4, \] where $\mathfrak S_3$ denotes the symmetric group on three elements, $D_4$ is the dihedral group of order $8$, and $\mathfrak A_4$ is the alternating group on $4$ elements. In particular, $|\mathrm{Aut}_\mathbb{Q}(X)/\mathrm{Aut}_\mathbb{Z}(X)| \leq 12$, and equality holds, i.e.\ $\mathrm{Aut}_\mathbb{Q}(X)/\mathrm{Aut}_\mathbb{Z}(X)\cong \mathfrak A_4$, if and only if $G\cong\mathbb{Z}/2\mathbb{Z}$ and $F=F_\omega$, the equianharmonic (Fermat) elliptic curve $F_\omega = \mathbb{C}/(\mathbb{Z}\oplus \mathbb{Z}\omega)$ with $\omega$ a primitive 3rd root of unity. \end{theo} \begin{proof} By Principle \ref{SIPU}, (I) and (III), $\mathrm{Aut}_{\mathbb{Q}}(X) \subset N_{\Delta_G} / \Delta_G$ corresponds to the automorphisms $h = (h_1, h_2)$ such that $h_2$ is a translation and $h_1$ acts trivially on $H^1 (F, \mathbb{Z})^G$, while $\mathrm{Aut}_{\mathbb{Z}}(X) \subset N_{\Delta_G} / G$ corresponds to the subgroup $$ G \times E < \mathrm{Aut}(F) \times \mathrm{Aut}(E).$$ Since $H^1(F/G, \mathbb{Z}) = 0$, $\mathrm{Aut}_{\mathbb{Q}}(X)/\mathrm{Aut}_\mathbb{Z}(X) \cong N_G / G$, where $N_G$ is the normalizer of $G$ in $\mathrm{Aut}(F)$; more precisely, we have the identification \[ \mathrm{Aut}_{\mathbb{Q}}(X)/\mathrm{Aut}_\mathbb{Z}(X) \cong \{\gamma\in \mathrm{Aut}(F)\mid \gamma G=G\gamma\}/G = N_G/G.
\] We proceed as follows to determine $N_G/G$: consider the split short exact sequence \[ 0\rightarrow \mathrm{Aut}_0(F) \cong F \rightarrow \mathrm{Aut}(F) \xrightarrow{\varphi} A \rightarrow 0 \] where $A\subset \mathrm{Aut}(F)$ is the subgroup preserving the group structure of $F$. For convenience of notation we write $F$ instead of $ \mathrm{Aut}_0(F) $. Restricting to $G$ and $N_G$ we obtain short exact sequences \[ 0\rightarrow G\cap F \rightarrow G \rightarrow \varphi(G)\rightarrow 1 \] and \[ 0\rightarrow N_G\cap F \rightarrow N_G \rightarrow \varphi(N_G) \rightarrow 1. \] Therefore, we have a short exact sequence \[ 0\ensuremath{\rightarrow} \left(N_G\cap F \right)/\left(G\cap F\right) \rightarrow N_G/G \rightarrow \varphi(N_G)/\varphi(G) \rightarrow 1. \] Next we divide the discussion into cases according to \cite[List VI.20]{Be96}. \begin{enumerate}[leftmargin=*] \item[(1)] $G\cong\mathbb{Z}/2\mathbb{Z}$ acting on $F$ by $x\mapsto -x$. In this case, $N_G\cap F \cong(\mathbb{Z}/2\mathbb{Z})^2$ consists of translations by $2$-torsion points, and \[ \varphi(N_G)= \begin{cases} \langle x\mapsto -x \rangle \cong \mathbb{Z}/2\mathbb{Z} & \text{ if } F\neq F_i, F_\omega\\ \langle x\mapsto ix \rangle \cong\mathbb{Z}/4\mathbb{Z} &\text{ if } F= F_i \\ \langle x\mapsto -\omega x \rangle \cong \mathbb{Z}/6\mathbb{Z} & \text{ if } F=F_\omega \end{cases} \] For $G$ we have \[ G\cap F =\{ 0\}\text{ and }\varphi(G) =G=\langle x\mapsto -x\rangle \cong\mathbb{Z}/2\mathbb{Z}. \] We thus have a split short exact sequence \[ 0\rightarrow N_G\cap F \rightarrow N_G/G \rightarrow \varphi(N_G)/G \rightarrow 1, \] and it is now easy to determine the corresponding semidirect product \[ N_G/G \cong \begin{cases} (\mathbb{Z}/2\mathbb{Z})^2 & \text{ if } F\neq F_i, F_\omega\\ D_4 &\text{ if } F= F_i \\ \mathfrak A_4 & \text{ if } F=F_\omega \end{cases} \] where $D_4$ denotes the dihedral group of order
$8$ and $\mathfrak A_4$ is the alternating group on $4$ elements. \item[(2)] $G\cong \mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ acting on $F$ by $x\mapsto -x, x\mapsto x+\epsilon$ with $\epsilon$ a nontrivial 2-torsion point of $F$. In this case, $N_G\cap F \cong(\mathbb{Z}/2\mathbb{Z})^2$ consists of the $2$-torsion points, and $\varphi(N_G) = \varphi(G) = \langle x\mapsto -x\rangle$ if $ F \neq F_i$, while $\varphi(N_G) = A$ if $F = F_i$ and $ \epsilon = \frac{1}{2} ( 1 + i)$. Thus $N_G/G\cong\mathbb{Z}/2\mathbb{Z}$, or possibly $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ for $F = F_i$. \item[(3)] $G\cong \mathbb{Z}/4\mathbb{Z}$ acting on $F=F_i = \mathbb{C}/(\mathbb{Z}\oplus i\mathbb{Z})$ by $x\mapsto i x$. In this case, $N_G\cap F = \langle x\mapsto x+\frac{1+i}{2}\rangle\cong\mathbb{Z}/2\mathbb{Z}$ and $\varphi(N_G) = \varphi(G)=\langle x\mapsto ix\rangle$. Thus $N_G/G\cong\mathbb{Z}/2\mathbb{Z}$. \item[(4)] $G\cong\mathbb{Z}/4\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$, acting by $x\mapsto ix, x\mapsto x+\left(\frac{1+i}{2}\right)$. In this case, $N_G\cap F\cong(\mathbb{Z}/2\mathbb{Z})^2$ is the group of $2$-torsion points of $F$, and $\varphi(N_G) = \varphi(G)=\langle x\mapsto ix\rangle$. Thus $N_G/G\cong\mathbb{Z}/2\mathbb{Z}$. \item[(5)] $G\cong\mathbb{Z}/3\mathbb{Z}$ acting on $F=F_\omega$ by $x\mapsto \omega x$. In this case, $N_G\cap F=\langle x\mapsto x+\frac{1-\omega}{3}\rangle\cong\mathbb{Z}/3\mathbb{Z}$, and $\varphi(N_G) = \langle x\mapsto -\omega x\rangle\cong \mathbb{Z}/6\mathbb{Z}$. It follows that $N_G/G\cong \mathfrak S_3$, the symmetric group on three elements. \item[(6)] $G\cong \mathbb{Z}/3\mathbb{Z}\oplus \mathbb{Z}/3\mathbb{Z}$ acting by $x\mapsto \omega x, x\mapsto x+\left(\frac{1-\omega}{3}\right)$.
In this case, $N_G\cap F \cong(\mathbb{Z}/3\mathbb{Z})^2$ is the group of $3$-torsion points of $F$, and $\varphi(N_G)=\langle x\mapsto -\omega x\rangle\cong \mathbb{Z}/6\mathbb{Z}$. It follows that $N_G/G\cong \mathfrak S_3$. \item[(7)] $F=F_\omega$ and $G\cong \mathbb{Z}/6\mathbb{Z}$ acting by $x\mapsto -\omega x$. In this case, $N_G\cap F=\{ 0 \}$ and $\varphi(N_G)=\varphi(G)=G=\langle x\mapsto -\omega x\rangle$. It follows that $N_G/G$ is trivial. \end{enumerate} \end{proof} \begin{thm}\label{thm: k=0a} Let $X$ be a smooth projective surface with $\kappa(X)=0$. Then $[\mathrm{Aut}_\mathbb{Q}(X):\mathrm{Aut}_0(X)] \leq 12$. \end{thm} \begin{proof} If $X$ is minimal, the assertion follows from the facts listed above and Theorem~\ref{thm: hyperelliptic}. If $X$ is not minimal, then $\mathrm{Aut}_\mathbb{Q}(X)\subset \mathrm{Aut}_\mathbb{Q}(X_{\min})$ by Principle~\ref{prin: descend}. So, if $\dim \mathrm{Aut}_0(X_{\min})=0$, then \[ |\mathrm{Aut}_\mathbb{Q}(X)|\leq |\mathrm{Aut}_\mathbb{Q}(X_{\min})| \leq 12. \] In case $\dim \mathrm{Aut}_0(X_{\min})>0$, $X_{\min}$ is either an abelian surface or a hyperelliptic surface, and $\mathrm{Aut}_0(X_{\min})$ is an abelian variety of dimension $2$ or $1$. The subgroup $\mathrm{Aut}_\mathbb{Q}(X)\subset \mathrm{Aut}_\mathbb{Q}(X_{\min})$ fixes the points $p\in X_{\min}$ over which $X\rightarrow X_{\min}$ is not an isomorphism. But an element of $\mathrm{Aut}_0(X_{\min})$ fixes a point if and only if it is the identity. It follows that \[ \mathrm{Aut}_\mathbb{Q}(X) \cap \mathrm{Aut}_0(X_{\min})=\{\mathrm{id}_{X_{\min}}\}, \] and hence the natural homomorphism $\mathrm{Aut}_\mathbb{Q}(X)\rightarrow \mathrm{Aut}_\mathbb{Q}(X_{\min})/\mathrm{Aut}_0(X_{\min})$ is injective. Therefore, \[ |\mathrm{Aut}_\mathbb{Q}(X)|\leq |\mathrm{Aut}_\mathbb{Q}(X_{\min})/\mathrm{Aut}_0(X_{\min})|\leq 12.
\] \end{proof} For $\mathrm{Aut}_\sharp(X)$ we have a sharper result: \begin{thm}\label{thm: k=0b} Let $X$ be a smooth projective surface with $\kappa(X)=0$. Then $\mathrm{Aut}_\sharp(X) = \mathrm{Aut}_0(X)$. \end{thm} \begin{proof} Let $X_{\min}$ be the minimal model of $X$. Then by Principle~\ref{prin: descend}, $\mathrm{Aut}_\mathbb{Z}(X)\subset \mathrm{Aut}_\mathbb{Z}(X_{\min})$. If $X_{\min}$ is a K3 surface, then $\mathrm{Aut}_\mathbb{Z}(X_{\min})$ is trivial. It follows that $\mathrm{Aut}_\mathbb{Z}(X)$, and hence $\mathrm{Aut}_\sharp(X)$, is trivial. If $X_{\min}$ is an Enriques surface, then its universal cover $\tilde X_{\min}$ is a K3 surface and is the minimal model of the universal cover $\pi\colon \tilde X\rightarrow X$. Suppose that $\sigma\in\mathrm{Aut}_\sharp(X)$ and that $\Sigma\colon X\times [0,1]\rightarrow X$ is a homotopy from $\mathrm{id}_X$ to $\sigma$. Then, by the homotopy lifting property, there is a homotopy $\tilde\Sigma\colon \tilde X\times [0,1] \rightarrow \tilde X$ from $\mathrm{id}_{\tilde X}$ to $\tilde \sigma$, where $\tilde \sigma\in\mathrm{Aut}(\tilde X)$ is a lifting of $\sigma$: \[ \begin{tikzcd} \tilde X\times [0,1]\arrow[d, "\pi\times\mathrm{id}"] \arrow[rr, "\exists\,\, \tilde \Sigma"]&& \tilde X \arrow[d,"\pi"]\\ X\times [0,1]\arrow[rr, "\Sigma"] && X \end{tikzcd} \] Since $\mathrm{Aut}_\sharp(\tilde X)$ is trivial, $\tilde \sigma =\mathrm{id}_{\tilde X}$. It follows that $\sigma=\mathrm{id}_X$. If $X=X_{\min}$ is an abelian surface or a hyperelliptic surface, then by Principle~\ref{SIPU} and the discussion above we know that $\mathrm{Aut}_\mathbb{Z}(X) = \mathrm{Aut}_0(X)$. It follows a fortiori that $\mathrm{Aut}_\sharp(X) = \mathrm{Aut}_0(X)$. Finally, suppose that $X_{\min}$ is an abelian surface or a hyperelliptic surface but $\rho\colon X\rightarrow X_{\min}$ is not an isomorphism. Then the topological Euler characteristics satisfy $\chi_\mathrm{top}(X)>\chi_\mathrm{top}(X_{\min})=0$.
Now observe that there is a flat metric on $X_{\min}$, and by Principle~\ref{prin: rigidity}, we have $\mathrm{Aut}_\sharp(X) =\{ \mathrm{id}_X\}$. \end{proof} \subsection{Case $\kappa(X)=2$} For a surface $X$ of general type, $|\mathrm{Aut}_\mathbb{Q}(X)|$ is bounded and, in fact, $|\mathrm{Aut}_\mathbb{Q}(X)|\leq 4$ if $\chi(\mathcal{O}_X)\geq 189$ (\cite{Cai04}). Subsequently, examples have been found with $|\mathrm{Aut}_\mathbb{Q}(X)|= 4$ and $\chi(\mathcal{O}_X)$ arbitrarily large (\cite{CL18}). \end{document}
\begin{document} \title{Protecting fiber-optic quantum key distribution sources against light-injection attacks} \author{Anastasiya~Ponosova} \email{[email protected]} \affiliation{Russian Quantum Center, Skolkovo, Moscow 121205, Russia} \affiliation{NTI Center for Quantum Communications, National University of Science and Technology MISiS, Moscow 119049, Russia} \author{Daria~Ruzhitskaya} \affiliation{Russian Quantum Center, Skolkovo, Moscow 121205, Russia} \affiliation{NTI Center for Quantum Communications, National University of Science and Technology MISiS, Moscow 119049, Russia} \author{Poompong~Chaiwongkhot} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON, N2L~3G1 Canada} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L~3G1 Canada} \affiliation{Department of Physics, Faculty of Science, Mahidol University, Bangkok, 10400 Thailand} \affiliation{Quantum technology foundation (Thailand), Bangkok, 10110 Thailand} \author{Vladimir~Egorov} \affiliation{\mbox{Leading research center for Quantum internet, ITMO University, Birzhevaya line 14, 199034 St.~Petersburg, Russia}} \affiliation{SMARTS-Quanttelecom LLC, 6 liniya V.O. 
59, 199178 St.~Petersburg, Russia} \author{Vadim~Makarov} \affiliation{Russian Quantum Center, Skolkovo, Moscow 121205, Russia} \affiliation{\mbox{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and CAS Center for Excellence in} \mbox{Quantum Information, University of Science and Technology of China, Shanghai 201315, People's Republic of China}} \affiliation{NTI Center for Quantum Communications, National University of Science and Technology MISiS, Moscow 119049, Russia} \author{Anqi~Huang} \email{[email protected]} \affiliation{Institute for Quantum Information \& State Key Laboratory of High Performance Computing, College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China} \date{\today} \begin{abstract} A well-protected and characterised source is needed for the security of a quantum key distribution system. Unfortunately, the source is vulnerable to light-injection attacks, such as Trojan-horse, laser-seeding, and laser-damage attacks, in which an eavesdropper actively injects bright light to hack the source unit. The hacking laser may be a high-power one that modifies the properties of components via the laser-damage attack, further assisting the Trojan-horse and other light-injection attacks. Here we propose a countermeasure against the light-injection attacks, consisting of an additional sacrificial component placed at the exit of the source. This component should either withstand high-power incoming light while attenuating it to a safe level that cannot modify the rest of the source, or be destroyed into a permanent high-attenuation state that breaks the line. We demonstrate experimentally that off-the-shelf fiber-optic isolators and circulators have these desired properties, at least under attack by a continuous-wave high-power laser.
\end{abstract} \maketitle \section{Introduction} \label{sec:intro} Quantum key distribution (QKD) allows two remote parties, usually called Alice and Bob, to establish a secret key securely~\cite{bennett1984,ekert1991}. Its information-theoretic security is based on quantum physics, instead of any computational complexity~\cite{gisin2002,scarani2009,lo2014,xu2020}. This makes QKD, in principle, unhackable even by a super-powerful quantum computer. Thus, QKD is a promising candidate for quantum-safe cryptography in the era of quantum computing, which is approaching with the current feasibility of quantum supremacy~\cite{arute2019}. However, in practice, it is a long journey to achieve an unhackable QKD system, owing to the imperfect devices of real life~\cite{makarov2006,qi2007,lamas-linares2007,lydersen2010a,lydersen2010b,xu2010,li2011a,wiechers2011,lydersen2011c,lydersen2011b,gerhardt2011,sun2011,jain2011,bugge2014,sajeed2015a,huang2016,makarov2016,huang2018,qian2018,huang2019,huang2020,sun2022,chaiwongkhot2022,huang2022,gao2022}. The imperfections in realistic QKD systems can be exploited by an adversary equipped with current technology to learn the secret information~\cite{lydersen2010a,gerhardt2011}. Quantum hacking discloses the practical security performance of QKD systems, which in turn stimulates the community to enhance the security hardness of QKD implementations. For example, a decade ago, various loopholes were discovered at the receiver side, which detects quantum states received from the quantum channel~\cite{makarov2006,lydersen2010a,lydersen2010b,wiechers2011,lydersen2011c}. To defeat the attacks on the quantum-state detection, measurement-device-independent QKD (MDI QKD)~\cite{lo2012} and twin-field QKD (TF QKD)~\cite{lucamarini2018,wang2022} were proposed, which make no security assumptions about the quantum-state measurement. Therefore, these protocols can defeat all attacks on the measurement unit.
In addition, MDI QKD and TF QKD schemes with well-protected senders that prepare characterised quantum states are believed to be practically secure, eliminating the threat of quantum hacking~\cite{bennett2017}. Unfortunately, quantum hackers are ingenious---it has been shown that they can learn or even manipulate the characteristics of components in the source unit by light-injection attacks, like the Trojan-horse attack~\cite{gisin2006,jain2014}, laser-seeding attack~\cite{sun2015,huang2019,pang2020}, laser-damage attack~\cite{makarov2016,huang2020}, and power-meter attack~\cite{sajeed2015}. Since the modified characteristics are often unpredictable, it is difficult to build a security model that counters these active attacks. Consequently, these attacks may be effective tools in Eve's suitcase to crack the security of MDI QKD and TF QKD systems. A fiber-optic isolator or circulator, which is often placed as the last component in the source unit~\cite{mo2005,huang2016a,wang2016,dixon2017,xia2019,wei2019,liu2019}, is believed to protect a fiber-based QKD system from the adversary's light injected through the quantum channel. For example, Ref.~\onlinecite{lucamarini2015} thoroughly analyses the necessary amount of isolation as a countermeasure against the Trojan-horse attack and upper-bounds the remaining information leakage. The security can then be restored by privacy amplification. This countermeasure is also being standardised by the European Telecommunications Standards Institute (ETSI)~\cite{ETSIQKD0010}. From this point of view, protecting the source unit by isolation components seems to be a promising solution for achieving a practically secure source, especially for MDI QKD and TF QKD. Nevertheless, the actual amount of isolation may be affected by unknown attacks on the isolating component~\cite{huang2020}. Guaranteeing the practical security of the QKD system in such a realistic situation is still challenging.
Here we show that an additional sacrificial isolation component, placed at the exit of the source and not counted in the security model, can be an effective countermeasure against the light-injection attacks. We experimentally demonstrate that when the adversary illuminates isolators and circulators with a high-power continuous-wave (c.w.)\ laser, $6.4$--$42.4~\deci\bel$ of residual isolation remains, although the high-power laser temporarily or permanently decreases their isolation values by $15.2$--$34.5~\deci\bel$. Since the isolation components under the high-power attack are still able to provide a significant amount of isolation, they protect the other optical components behind them in the QKD source unit from modification by the laser-damage attack. However, since this additional isolation component, the last in the QKD source, might be affected by the eavesdropper, it should not be counted toward the effective isolation needed to prevent the light-injection attacks. That is, the isolation required as a countermeasure against the light-injection attacks should be calculated starting from the component after our sacrificial isolation component. The article is structured as follows. In~\cref{sec:exp_meth} we describe the experimental setup and methodology used to test the fiber-optic isolators and circulators. Measurement results are presented in \cref{sec:results}. We discuss the effects of this attack and the application of this countermeasure in \cref{sec:countermeasure} and conclude in \cref{sec:conclusion}. \section{Experimental methodology} \label{sec:exp_meth} \subsection{Experimental setup for testing isolators} \label{sub:setup-isolator} Our experimental setup simulates a hacking scenario in which Eve attacks the source unit via the quantum channel. \Cref{fig:setup} illustrates the measurement configuration used for testing fiber-optic isolators.
The samples under test are illuminated by a high-power laser~(HPL), consisting of a c.w.\ $1550~\nano\meter$ seed laser diode (QPhotonics QFBGLD-1550-100) followed by an erbium-ytterbium-doped fiber amplifier (QGLex custom-made unit)~\cite{huang2020}. The laser light is transmitted through a single-mode fiber to mimic the attack via the quantum channel. As we focus on the effect of optical power on the tested sample, the polarization of the laser is not characterized. The laser output power can be varied from $0.16~\watt$ to $6.7~\watt$ at the isolator under test. During the experiment, the illumination power is set via the control software according to a calibration curve measured before the experiment. Optical power meter 1 (OPM1; Grandway FHP2B04), connected through the 1\% arm of a beam splitter~(BS), monitors the power emitted by the high-power laser in real time. The laser light transmitted through the samples in the backward direction is continuously monitored by OPM2 (Thorlabs PM200 with S154C sensor). The isolation is determined by comparing the power measured by OPM2 with the laser power launched into the sample, taking into account the 95:5 coupling ratio of the BS. \begin{figure} \caption{Experimental setup for testing isolators. LD, laser diode; OPM, optical power meter; LT, light trap; HPL, high-power laser. The coupling ratio of the beam splitter~(BS) denoted as 95$\Relbar$:5$\bigtimes$ means $95\%$ of light passes to the port horizontally opposite in the graphical symbol of the BS, while $5\%$ of light is coupled across to the other port.} \label{fig:setup} \end{figure} A fiber-pigtailed $1550~\nano\meter$ laser diode (LD1, Gooch and Housego AA1406) with $10.5$-$\milli\watt$ optical power is used to measure the insertion loss of the isolator under test. The transmitted power is measured after a 99:1 BS using OPM3 (Thorlabs PM200 with S155C sensor).
The insertion loss is then determined by comparing the power measured by OPM3 with the input power, taking into account the additional $20~\deci\bel$ attenuation of the 99:1 BS. Our setup is equipped with a fiber-fuse monitor, which shuts down the high-power laser automatically if a fiber fuse is detected, preventing extensive damage to the equipment~\cite{huang2020}. Fortunately, no fiber fuse occurred during the tests reported in this article. Moreover, a temperature map of the samples is measured by a thermal imaging camera (Fluke TiS45), which is placed over the samples and saves thermal images every $3~\second$ during each experiment. Note that no thermal images were recorded during the testing of ISO~PM~1, which served as an initial trial. \subsection{Experimental setup for testing circulators} \label{sub:setup-circulator} To design the testing setup for fiber-optic circulators, we shall first discuss two configurations that a three-port circulator can have in a QKD system. In the first scenario, the circulator is employed to direct Alice's optical pulses~\cite{lucio-martinez2009,tang2014,wang2016,xia2019,liu2019}. That is, the optical pulses first pass from port 1 to port 2. Then the pulses are reflected back to port 2 and transmitted to port 3 as the output of the QKD sender. Thus, the isolation values between each pair of ports matter to the security of the QKD system. In the second scenario, the circulator is used to monitor the injected light~\cite{mo2005, wang2014}. If the injected light is detected by a monitor connected at port 3, Alice and Bob may interrupt their QKD session without secret key leakage. However, it has been shown that the laser-damage attack might decrease the sensitivity of the monitor~\cite{makarov2016} and that high-speed optical pulses might bypass the alarming mechanism of the monitor~\cite{sajeed2015}.
In this case, the success of the light-injection attack relies mainly on the monitor's properties and signal processing, rather than on the isolation provided by the circulator, which is out of the scope of the present study. As discussed above, in this study we focus on characterising the isolation of a circulator configured in the first scenario; testing the complete configuration is left for future work. The experimental setup for testing the circulator is shown in~\cref{fig:CIRC_setup}. The measurement settings at ports~1 and 3 are the same as described above for isolator testing in \cref{sub:setup-isolator}. In addition, a laser diode (LD2, Gooch and Housego AA1406) and an optical power meter (OPM4, Thorlabs PM200 with S154C sensor) are placed at port~2 via a 50:50 BS. LD1, LD2, and the HPL are used one at a time to prevent measurement errors caused by reflected light. Isolation and insertion loss are estimated for each pair of the circulator's ports via a procedure similar to that described in \cref{sub:setup-isolator}. \begin{figure} \caption{Experimental setup for testing circulators. LD, laser diode; OPM, optical power meter; LT, light trap; HPL, high-power laser.} \label{fig:CIRC_setup} \end{figure} \subsection{Test procedure} \label{sub:procedure} Before starting the tests on the optical isolators and circulators, we experimentally verified that, up to $6.7~\watt$, none of the components in the setup, excluding the optical isolators and circulators, change their characteristics during the test. In particular, the splitting ratios of the beam splitters do not change noticeably. Thus, the only changes observed in the following tests are in the isolators and circulators under test. We define a successfully ``hacked'' isolation component as one exhibiting a temporary or permanent isolation decrease, without losing light transmission capability in the forward direction, within our measurement accuracy of about $1~\deci\bel$.
We also note when the insertion loss increases permanently. Such an increase would lead (with a threshold that depends on the particular QKD system) to the secret key failing to be generated, which means the eavesdropper would not be able to learn any secret information. The test procedure is the following for each component under test. First, the initial isolation and insertion loss are measured in the experimental setup before illumination by the HPL. Then each sample is exposed to a constant power level, starting from $0.16~\watt$, for at least $60~\second$ (except for the sample ISO~PM~1, which is exposed for at least $10~\second$ as the initial test). The exposure period may be increased up to $900~\second$ during the testing if necessary. During the illumination, the isolation of the isolator under test is monitored. For circulators, the isolation values from port 3 to port 1 and from port 3 to port 2 are measured during the illumination. If an isolation reduction is detected, the laser power is kept constant until the isolation value becomes stable. After each round of illumination, the HPL is turned off, and we measure the insertion loss of the sample under test again. For isolators, LD1 is turned on, and the insertion loss is measured by OPM3. For circulators, LD1 and LD2 are turned on alternately, and the insertion loss from port~1 to port~2 and from port~2 to port~3 is measured by OPM4 and OPM3. In addition, LD2 is also used to measure the isolation from port~2 to port~1 with the assistance of OPM2. The temporary changes in isolation and insertion loss are recorded during the measurement. We repeat the testing procedure above with the HPL power incremented by~$100$--$500~\milli\watt$ each round. The testing stops if irreversible damage to the sample is incurred. For some samples, the testing stops before the sample is fully damaged, because we would like to measure the permanent decrease in isolation while the sample is still operational.
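The stepwise procedure above can be summarized in a short sketch. The snippet below is purely illustrative: the instrument-control step and the isolation model are hypothetical stand-ins (a toy degradation curve), not measured behaviour of any tested sample.

```python
# Illustrative sketch of the stepwise test procedure described above.
# simulated_isolation() is a hypothetical toy model, not measured data.

def simulated_isolation(power_w):
    """Toy model: isolation starts at 58 dB and degrades above 0.5 W."""
    if power_w <= 0.5:
        return 58.0
    return max(58.0 - 8.0 * (power_w - 0.5), 0.0)

def run_test(start_power_w=0.16, power_step_w=0.5,
             stop_isolation_db=25.0, max_power_w=7.0):
    """Ramp the illumination power stepwise, recording isolation,
    and stop before the sample is fully damaged."""
    history = []
    power = start_power_w
    while power <= max_power_w:
        isolation = simulated_isolation(power)  # expose >= 60 s, monitor
        history.append((power, isolation))
        if isolation < stop_isolation_db:       # stop while still operational
            break
        power += power_step_w                   # increment by 100--500 mW
    return history

history = run_test()
power, isolation = history[-1]
print(f"stopped at {power:.2f} W with {isolation:.1f} dB isolation")
```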
\section{Results} \label{sec:results} \subsection{Test results for fiber-optic isolators} \label{sub:results-isolator} \begin{table*} \caption{Testing results of isolators. All measurements are at $1550~\nano\meter$.} \label{tab:ISO_result} \begin{tabular}[t]{@{\extracolsep{1.8ex}}l@{}c@{}c@{}c@{}c@{}c@{}c@{}} \hline\hline \multirow{3}{*}{Sample} & \multirow{3}{*}{\makecell{Specified minimum\\ isolation~($\deci\bel$)}} & \multicolumn{2}{c}{Initial} & \multirow{3}{*}{\makecell{Minimum \\ isolation ($\deci\bel$)}} & \multirow{3}{*}{\makecell{Maximum decrease\\ of isolation ($\deci\bel$)}} & \multirow{3}{*}{\makecell{Irreversible\\ damage at}} \\ \cline{3-4} & & \makecell{Insertion\\ loss ($\deci\bel$)} & \makecell{Isolation ($\deci\bel$)} \\ \hline ISO PM 1 & 46 & 0.66 & 53.7 & 21.8 @ $6.7~\watt$, $360~\second$ & 31.9 & $6.7~\watt$, $900~\second$ \\ ISO PM 2 & 28 & 0.50 & 37.0 & 17.2 @ $3.37~\watt$, $820~\second$ & 19.8 & was not tested \\ ISO 3-1 & 46 & 0.45 & 58.1 & 37.1 @ $3.3~\watt$, $260~\second$ & 21.0 & was not tested \\ ISO 3-2 & 46 & 0.55 & 62.1 & 27.6 @ $3.4~\watt$, $800~\second$ & 34.5 & $3.8~\watt$, $90~\second$ \\ ISO 4 & 55 & 0.52 & 57.6 & 42.4 @ $2.2~\watt$, $200~\second$ & 15.2 & was not tested \\ \hline\hline \end{tabular} \label{tab:all} \end{table*} We have tested four models of fiber-optic isolators used in real QKD systems: one sample each of models~1, 2, and 4 (ISO~PM~1, ISO~PM~2, and ISO~4) and two samples of isolator model~3 (ISO~3-1 and ISO~3-2). All the isolators have a similar design and operation principle, except that ISO~PM~1 and ISO~PM~2 are polarization-dependent, while the other two models are polarization-insensitive. According to their specifications, all the tested isolators should operate correctly at a maximum c.w.\ power of $500~\milli\watt$, except for ISO~PM~2, whose maximum operating power is $300~\milli\watt$. The operating temperature range of all the samples is $-5$ to $+70~\celsius$.
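As a quick consistency check on the table above, the ``Maximum decrease of isolation'' column equals the difference between the initial and the minimum isolation of each sample. A few lines of code (using only the values printed in the table) confirm this:

```python
# Rows from the isolator results table:
# (sample, initial isolation, minimum isolation, maximum decrease), all in dB.
rows = [
    ("ISO PM 1", 53.7, 21.8, 31.9),
    ("ISO PM 2", 37.0, 17.2, 19.8),
    ("ISO 3-1",  58.1, 37.1, 21.0),
    ("ISO 3-2",  62.1, 27.6, 34.5),
    ("ISO 4",    57.6, 42.4, 15.2),
]
for name, initial, minimum, decrease in rows:
    # maximum decrease of isolation = initial isolation - minimum isolation
    assert abs((initial - minimum) - decrease) < 0.05, name
print("all rows consistent")
```

The extreme values of these columns are exactly the ranges quoted in the text: decreases of $15.2$--$34.5~\deci\bel$ and remaining isolation of $17.2$--$42.4~\deci\bel$.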
Owing to our confidentiality agreements with the QKD system manufacturers, we cannot publicly disclose the part numbers of the components tested in this study. They are ordinary commercial off-the-shelf products. A summary of the laser-damage results is presented in \cref{tab:ISO_result}. The tested samples are vulnerable to the high-power injection laser, exhibiting a temporary reduction of isolation by $15.2$--$34.5$~$\deci\bel$ at a certain illumination power (see ``Maximum decrease of isolation'' in \cref{tab:ISO_result}). As a result, $17.2$--$42.4$~$\deci\bel$ of isolation remains before the samples become inoperable~(see ``Minimum isolation''), which is less than every sample's specified minimum isolation value. In addition, ISO~PM~1 and ISO~3-2 are destroyed at $6.7$ and $3.8~\watt$ injected laser power applied for $900$ and $90~\second$, respectively. Detailed results of the testing are given in~\cref{fig:ISO_data}. The characteristics of ISO~PM~1 differ significantly from those of the other samples because of the shorter laser exposure time at the beginning of its test, which is not enough to observe significant changes in isolation. However, when the exposure period lasts longer at optical powers above $5~\watt$, the decrease in isolation becomes evident. \begin{figure} \caption{Isolators' parameters under testing. The points represent the minimum isolation value, maximum insertion loss value, and highest surface temperature reached at each applied power of the HPL. The temperature was only measured for three samples. The leftmost vertical dotted line is the maximum specified operating power of $300~\milli\watt$ for ISO~PM~2. The rightmost vertical dotted line is the maximum specified operating power of $500~\milli\watt$ for the other samples.} \label{fig:ISO_data} \end{figure} As can be seen from the topmost plot in \cref{fig:ISO_data}, the isolation reduction under the high-power laser is observed for all the samples.
It does not happen until the applied laser power exceeds the maximum operating power specified by the manufacturer, except for ISO~PM~2, for which a $3.4~\deci\bel$ reduction of isolation from its maximum value was observed in the operating power range. However, even for this sample, the measured isolation conforms to the specification when the illumination laser power is in the operating range (the specified minimum isolation of ISO~PM~2 is $28~\deci\bel$, see \cref{tab:all}). The ``breakdown'' points in \cref{fig:ISO_data} indicate that ISO~PM~1 and ISO~3-2 are fully damaged at the laser power of $6.7~\watt$ and $3.8~\watt$---they exhibit extremely large insertion loss and isolation. For the other samples, we stopped the laser exposure before completely destroying them, observing a permanent decrease in isolation by $3.9~\deci\bel$ for ISO~PM~2 and a temporary decrease in isolation for ISO~3-1 and ISO~4. Interestingly, before being destroyed, the isolators keep operating in the forward direction (see their insertion loss values in the middle plot in \cref{fig:ISO_data}) while their isolation values are reduced. The insertion loss varies slightly, by $0.5$--$1.1~\deci\bel$, which corresponds to a loss of at most 22\% of the forward transmitted power. Once irreversible damage occurs in ISO~PM~1 and ISO~3-2, their insertion loss exceeds $80~\deci\bel$. \begin{figure*} \caption{Analysis of isolator response to high-power laser exposure. (a)~Isolation and temperature profile of ISO~PM~2 under stepwise-increasing laser power. The temperature is measured at the hottest surface point, marked $\times$ on thermal camera images.
(b)~Photograph, top to bottom: ISO~3-1 before testing, decapsulated ISO~3-1 showing its internal design (partial damage after illumination by $3.3~\watt$ is visible), decapsulated ISO~3-2 showing damage after illumination by $3.8~\watt$ laser power.} \label{fig:ISO_trend} \end{figure*} The sample's surface temperature (see the bottommost plot in \cref{fig:ISO_data}) rises with the illumination power and appears to be correlated with the isolation value. The isolation of the polarization-insensitive samples ISO~3-1 and ISO~4 begins dropping when their measured temperature exceeds the maximum specified operating temperature of $+70~\celsius$. In order to understand the mechanism of isolation decrease and isolators' damage, we analyze the thermal images and disassemble the tested samples as shown in~\cref{fig:ISO_trend}. \Cref{fig:ISO_trend}(a) illustrates the surface temperature maps, the temperature curve, and the isolation curve of ISO~PM~2 in one experiment. The thermal profile of the isolator under a high-power laser shows that the sample is heated inhomogeneously across its surface. Specifically, the tested sample is heated at the side opposite to the input port where the high-power laser is applied, which is also observed in all the other tested samples. This is because the injected high-power laser emission is rejected rather than coupled from the isolator into the optical fiber~\cite{berent2013}; the rejected light is then absorbed inside the package, causing this local heating. Moreover, after applying laser power higher than the sample's specified maximum operating value ($300~\milli\watt$), the amount of isolation drops rapidly with the power. After cooling, the isolation returns close to its initial value. \Cref{fig:ISO_trend}(b) shows the external and internal design of the tested samples, ISO 3-1 and ISO 3-2.
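The conversion between a change in insertion loss (in $\deci\bel$) and the fraction of forward power lost, used above for the $0.5$--$1.1~\deci\bel$ figures, follows directly from the definition of the decibel. A minimal sketch (the helper name is ours, not part of the paper's analysis):

```python
def fraction_lost(delta_db):
    """Fraction of transmitted power lost when insertion loss grows by delta_db."""
    return 1.0 - 10.0 ** (-delta_db / 10.0)

# The 0.5--1.1 dB extra insertion loss observed before breakdown
# corresponds to losing roughly 11--22% of the forward power.
print(f"{fraction_lost(0.5):.1%}, {fraction_lost(1.1):.1%}")
```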
After the disassembly of the sample, we found a destroyed, blackened side of the optical assembly, which matches the point of highest surface temperature marked in the thermal images. Thus, we infer that high temperature causes this destruction. To further verify the cause of the isolation change, we theoretically model the operation of an isolator, with details given in~\cref{sec:theory-isolation-vs-temp}. There, we calculate the temperature dependence of the Verdet constant and the resulting isolation changes for a single-stage polarization-dependent isolator. The analysis shows that the polarization rotation angle depends on temperature. As a result, at high temperature, the light injected in the backward direction is not fully blocked by the isolator's polarizer but is partially transmitted. Thus, the isolation is reduced at high temperature. These modeling results correlate well with the experimental data of ISO~PM~2, which may provide a reasonable explanation of the decrease in isolation observed in our experiment. \begin{table*} \caption{Testing results of circulators.
All measurements are at $1550~\nano\meter$.} \label{tab:CIRC_res} \begin{tabular}[t]{@{\extracolsep{1ex}}l@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}} \hline\hline & & \multicolumn{4}{c}{Initial} \\ \cline{3-6} \multirow{2}{*}{Sample} & \makecell{Specified minimum \\ isolation for\\ all ports ($\deci\bel$)} & \multicolumn{2}{c}{\makecell{Insertion\\ loss ($\deci\bel$)}} & \multicolumn{2}{c}{\makecell{Isolation ($\deci\bel$)}} & \multicolumn{2}{c}{\makecell{Minimum\\ isolation ($\deci\bel$)}} & \multicolumn{2}{c}{\makecell{Maximum\\ decrease of\\ isolation ($\deci\bel$)}} & \multirow{2}{*}{\makecell{Irreversible\\ damage at}} \\ \cline{3-4} \cline{5-6} \cline{7-8} \cline{9-10} & & 1 to 2 & 2 to 3 & 2 to 1 & 3 to 2 & 2 to 1 & 3 to 2 & 2 to 1 & 3 to 2 \\ \hline CIR 1 & 45 & 1.03 & 1.07 & 61.4 & 60.6 & 34.7 @ $3.6~\watt$ & 32.2 @ $3.6~\watt$ & 26.7 & 28.4 & was not tested \\ CIR 2 & 40 & 0.72 & 0.83 & 67.0 & 65.7 & 38.3 @ $4.6~\watt$ & 32.3 @ $4.6~\watt$ & 28.7 & 33.4 & $4.6~\watt$, $910~\second$ \\ CIR PM 3 & 25 & 1.00 & 0.80 & 37.0 & 27.0 & was not tested & 6.4 @ $0.7~\watt$ & was not tested & 20.6 & $0.9~\watt$, $90~\second$ \\ \hline\hline \end{tabular} \end{table*} \subsection{Test results for fiber-optic circulators} \label{sec:results-circulator} We have tested three fiber-optic circulators. Samples of CIR~1 and CIR~2 are polarization-insensitive, while CIR~PM~3 is polarization-dependent. Similar to the isolators, the specified operating power is $500~\milli\watt$ for CIR~1 and CIR~2 ($300~\milli\watt$ for CIR~PM~3), and the operating temperature range is from 0 to $+70~\celsius$. A summary of our testing results is given in~\cref{tab:CIRC_res}. The isolation is temporarily reduced not only between the ports illuminated by the laser (from port~3 to port~2) but also between the unilluminated ports~(from port~2 to port~1). 
Specifically, the isolation from port~3 to port~2~(port~2 to port~1) decreases by $20.6$--$33.4~\deci\bel$~($26.7$--$28.7~\deci\bel$) at maximum. The residual isolation is $6.4$--$32.3~\deci\bel$ from port~3 to port~2 and $34.7$--$38.3~\deci\bel$ from port~2 to port~1, which is lower than the minimum isolation specified by the component manufacturer for all the samples. Thus, the transmission paths from port~1 to port~2 and from port~2 to port~3 are vulnerable to Eve's high-power injection attack. \begin{figure} \caption{Circulators' values of isolation and the maximum surface temperature under testing. Each point represents the minimum isolation achieved under each applied power.} \label{fig:CIRC_data} \end{figure} \begin{figure} \caption{Isolation of CIR~1 from port~2 to port~1 recovers gradually after illumination by $3.6~\watt$ laser power. The vertical dashed line is the HPL's switch-off time.} \label{fig:CIRC_trend} \end{figure} The detailed measurement data are presented in \cref{fig:CIRC_data}, showing isolation from port~3 to port~2, from port~3 to port~1, and from port~2 to port~1, as well as the maximum surface temperature under different illumination powers. The values of isolation from port~3 to port~2 and from port~2 to port~1 decrease markedly with increasing laser power, as the coupling ratio mainly depends on the polarization rotation provided by a Faraday mirror inside the circulator. However, the isolation from port~3 to port~1 remains essentially unchanged under all experimental conditions for all the tested samples, because there is no coupling between these two ports in the internal scheme. Similar to the isolators under test, the temperature of the sample's surface also rises with the laser power. For both polarization-insensitive samples, the minimum remaining isolation from port~3 to port~2 is $32.2~\deci\bel$ (CIR~1) and $32.3~\deci\bel$ (CIR~2) at the laser power of $3.6$ and $4.6~\watt$, respectively.
After that, the isolation value rises for CIR~1, and we thus stop our testing of it at the laser power of $4.8~\watt$ without observing irreversible damage. Meanwhile, irreversible damage occurs in CIR~2, with its insertion loss from port~2 to port~3 increasing to $2.5~\deci\bel$ at $4.6~\watt$. Moreover, for each of these two samples, we have measured the isolation from port~2 to port~1 immediately after the laser exposure and found that the sample's heating also temporarily reduces it. Take CIR~1 as an example. \Cref{fig:CIRC_trend} illustrates its recovery after the laser exposure, in which the isolation drops to about $35~\deci\bel$ immediately after illumination by the HPL. After the HPL is switched off, the isolation then recovers to $55~\deci\bel$ in $400~\second$, during which the sample's surface temperature decreases from $272$ to $44~\celsius$. Surprisingly, for the polarization-sensitive sample, CIR~PM~3, the isolation from port~3 to port~2 falls rapidly with increasing laser power, dropping to only $6.4~\deci\bel$ at the input laser power of $700~\milli\watt$. At $900~\milli\watt$, the insertion loss from port~2 to port~3 increases irreversibly to $15.5~\deci\bel$. Since port~2 and port~3 of this sample are supposed to be used in the QKD system purely as an isolator, we have not measured the change in the insertion loss from port~1 to port~2, isolation from port~2 to port~1, and isolation from port~3 to port~1. \section{Discussion and countermeasures} \label{sec:countermeasure} The experimental results shown above provide two opposing perspectives on the security of a QKD system. We first discuss, from the attacker's point of view, the vulnerabilities in a QKD system caused by the isolation reduction of the tested isolators and circulators. Then, from the defence point of view, we propose a possible countermeasure to protect the QKD source from these vulnerabilities.
The isolation reduction introduced by a high-power laser opens loopholes for at least two possible attacks on QKD, the Trojan-horse attack~\cite{gisin2006,jain2014} and the laser-seeding attack~\cite{huang2019,sun2015,pang2020}. Regarding the Trojan-horse attack, the isolation of the source strongly impacts the secure key rate and transmission distance. The reduced isolation of the source allows Eve to inject more Trojan-horse light into Alice, which is assumed to increase the reflected light linearly. Given the $15.2$--$34.5~\deci\bel$ decrease in isolation obtained in our testing, the photon number of the reflected pulse increases by about $2$--$3$ orders of magnitude above the safe value. This increase in the leaked photon number shortens the maximum transmission distance by $20$--$100~\kilo\meter$ according to various theoretical security analyses~\cite{lucamarini2015,tamaki2016,wang2018,navarrete2022}. Regarding the laser-seeding attack, an injection power on the order of $100~\nano\watt$ reaching the laser cavity after passing the built-in isolator of Alice's laser is sufficient for a successful attack~\cite{huang2019}. According to our experimental results, the maximum power transmitted through the isolation component is $190~\milli\watt$, assuming the injected power is $10~\watt$~\cite{huang2020} and the isolation is reduced to $17.2~\deci\bel$ as in ISO~PM~2. (Although the minimum value of isolation obtained in our experiment is $6.4~\deci\bel$ for CIR~PM~3, we exclude this type of circulator from the analysis owing to its poor performance, and we do not recommend it for use in QKD systems.) To prevent the laser-seeding attack, the other components in the QKD source should provide about $62.7~\deci\bel$ of isolation. Assuming the built-in isolation of the laser is typically $30~\deci\bel$, the success of the laser-seeding attack relies on the attenuation value of the optical attenuator in Alice.
If the attenuation value is less than $32.7~\deci\bel$, the security of the QKD system might be compromised under the laser-seeding attack. It is notable that in the above analysis we assume that the attenuator in Alice works as designed, which does not affect the effectiveness of the above-mentioned attacks. Regarding the possible vulnerability of attenuators, the decreased attenuation under the laser-damage attack has been investigated in~Ref.~\onlinecite{huang2020}. Most importantly, our study also provides a possible countermeasure against the light-injection attacks---adding an extra isolation component into the source unit to be the first one illuminated by the injected light. Its minimum residual isolation upper-bounds the maximum power that can be transmitted through it to reach the other optical components. Specifically, the minimum observed isolation is $6.4~\deci\bel$ and $17.2~\deci\bel$ for the polarization-dependent circulator CIR~PM~3 and isolator ISO~PM~2, respectively. The typical minimum residual isolation is more than $20~\deci\bel$ for all the polarization-insensitive components. Therefore, the injected power is limited to less than $190~\milli\watt$, which is insufficient for a successful laser-damage attack on any optical component according to the previous testing~\cite{bugge2014,makarov2016,huang2020}. If the attacker attempts to further increase the illumination power, the first component fails permanently with a very high insertion loss, which results in a denial of service and thus protects the QKD system from the leakage of secret information~\cite{makarov2016}. Moreover, the isolation required for protection against the Trojan-horse attack and the laser-seeding attack should be calculated starting from the component behind this sacrificial isolator or circulator. Therefore, an extra isolator or circulator placed at Alice's output would protect the rest of the QKD source against the light-injection attacks.
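The isolation budget above can be reproduced with elementary decibel arithmetic. The script below (variable names ours) uses the values quoted in the text: a $10~\watt$ injection, the $17.2~\deci\bel$ residual isolation of ISO~PM~2, a $100~\nano\watt$ laser-seeding threshold, and a $30~\deci\bel$ built-in laser isolator:

```python
import math

injected_w = 10.0               # assumed attack power, W
residual_isolation_db = 17.2    # worst residual isolation of ISO PM 2, dB

# Power passing the weakened isolator: ~190 mW.
transmitted_w = injected_w * 10 ** (-residual_isolation_db / 10)

# Extra isolation needed to bring ~190 mW down to the 100 nW seeding threshold:
# ~62.8 dB (quoted as about 62.7 dB in the text, up to rounding).
seed_threshold_w = 100e-9
extra_db = 10 * math.log10(transmitted_w / seed_threshold_w)

# With a typical 30 dB built-in isolator, the attenuator must supply the rest.
builtin_db = 30.0
attenuator_db = extra_db - builtin_db

print(f"{transmitted_w * 1e3:.0f} mW, {extra_db:.1f} dB, {attenuator_db:.1f} dB")
```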
\section{Conclusion} \label{sec:conclusion} In this paper, we study the effect of high-power laser illumination on fiber-optic isolators and circulators and propose an effective countermeasure against light-injection attacks on a QKD system. This study is the first to raise awareness of insecure isolation components---isolators and circulators---in QKD systems. Specifically, the testing shows that the values of isolation provided by the optical isolators and circulators under test are reduced to $17.2$ and $6.4~\deci\bel$ at minimum when high-power laser light is injected into them in the reverse direction. This decrease of isolation opens loopholes, which may allow Eve to conduct the Trojan-horse attack, the laser-seeding attack, and possibly other attacks that inject light into the source. The testing methodology proposed in this study is general and applicable to other commercial fiber-optic isolators and circulators. To enhance the protection of the QKD source unit, an extra isolation component, an optical isolator or circulator, is needed to defeat the light-injection attacks. The residual isolation of this extra component is sufficient to protect the other components behind it. Any isolation budget calculated as a countermeasure against the Trojan-horse attack and the laser-seeding attack should start from the components behind this sacrificial isolation component. Our study shows that the source unit in the QKD system needs this additional layer of protection to be secure. \acknowledgments We thank K.~Wei, F.~Xu, and our industry partners for providing us with device samples.
This work was funded by the Ministry of Science and Education of Russia (programs 5-in-100, NTI center for quantum communications, and grant 075-11-2021-078), Russian Science Foundation (grant 21-42-00040), Canada Foundation for Innovation, MRIS of Ontario, the National Natural Science Foundation of China (grants 61901483 and 62061136011), the National Key Research and Development Program of China (grant 2019QY0702), and the Research Fund Program of State Key Laboratory of High Performance Computing (grant 202001-02). P.C.\ acknowledges support from the DPST scholarship and NSRF via the Program Management Unit for Human Resources \& Institutional Development, Research and Innovation (grant B05F640051). {\em Author contributions:} A.P.,\ D.R.,\ V.E.,\ A.H.,\ and P.C.\ conducted the experiment. A.P.,\ D.R.,\ V.E.,\ A.H.,\ and V.M.\ analysed the data. A.P.,\ D.R.,\ and A.H.\ wrote the article with help from all authors. A.H.\ and V.M.\ supervised the project. \appendix \section{Theoretical temperature dependence of the Verdet constant and isolation in fiber-optic isolators} \label{sec:theory-isolation-vs-temp} \setcounter{figure}{0} \numberwithin{figure}{section} \numberwithin{equation}{section} \subsubsection{Faraday effect in a polarization-dependent isolator} \label{sec:Faraday} An optical isolator is a component that only allows unidirectional transmission of the optical signal. The principal scheme of a polarization-dependent isolator is shown in~\cref{fig:Faraday-effect}. It consists of an input polarizer, a Faraday rotator, and an output polarizer called an analyzer. The optical axis of the second polarizer is oriented at an angle $\beta = 45\degree$ with respect to the first polarizer. In this configuration, the optical signal coming from the left side passes through the first polarizer, whose optical axis is in the vertical direction and matches the polarization of the input optical signal.
Then a Faraday rotator rotates the polarization of the optical signal by $45\degree$ in a clockwise direction. If a laser beam is introduced from the optical circuit on the right side, this optical signal has to pass through the Faraday rotator from right to left. Since the Faraday rotator is a non-reciprocal device, the polarization state of the reflected optical signal will rotate by an additional $45\degree$ in the same direction as the input signal, thus becoming perpendicular to the optical axis of the first polarizer. \begin{figure} \caption{Optical configuration of a polarization-sensitive optical isolator.} \label{fig:Faraday-effect} \end{figure} As shown above, an optical isolator is based on the Faraday effect~\cite{zvezdin1997}. The polarization plane of a linearly polarized light beam is rotated by an angle $\theta$ during propagation in a magneto-optical crystal. The direction of rotation depends on the direction of the magnetic field and not on the direction of light propagation. The relation between the angle of polarization rotation and the magnetic field in a crystal is \begin{equation}\label{eq:polarization rotation} \theta = V(\lambda, T)BL, \end{equation} where $B$ is the longitudinal magnetic field component in$~\tesla$, $L$ is the length of the path where the light and magnetic field interact in$~\meter$, and $V(\lambda, T)$ is the Verdet constant in $\rad$/$(\tesla \cdot \meter)$, which depends on the wavelength of the propagating light $\lambda$ and the temperature of the magneto-optic crystal $T$. Here we shall consider only the temperature dependence. The temperature dependence of the Verdet constant, and hence of the Faraday rotation angle, leads to variation of the isolation coefficient with temperature. Modern single-mode isolators have a high stability of isolation in the temperature range from~$5$ to~$70~\celsius$.
Thermal effects can be neglected for typical optical circuits, such as QKD systems, with laser power less than $300$--$500~\milli\watt$. However, when a high-power laser is applied in the reverse direction, its emission is partially absorbed inside the isolator and induces heating of the magneto-optic crystal~\cite{kiriyma2015}. The temperature dependence of the Verdet constant causes changes in the polarization-plane rotation angle [see~\cref{eq:polarization rotation}]~\cite{snetkov2014,khazanov2016}. For optical isolators, this means a reduced isolation coefficient in the reverse direction and power loss with degraded beam quality in the forward direction. Thermal effects can be mitigated by a careful choice of the magneto-optical material in the component~\cite{snetkov2014}. The most widespread materials for a single-stage fiber isolator in the near-infrared band are rare-earth garnets~\cite{booth1984,mukimovb1990,khazanov2016}. Here we consider the following types of garnets: yttrium iron garnet (YIG), terbium gallium garnet (TGG), and bismuth-substituted yttrium iron garnet (Bi:YIG). \subsubsection{Verdet constant model} \label{sec:Verdet model} In the general case, the Verdet constant of a rare-earth garnet comprises several different contributions~\cite{slezak2016,serber1932,buckingham1966}. In our case, only the temperature-dependent contributions are considered: the paramagnetic contribution $V_{pm}$~(for more detail see Ref.~\onlinecite{vojna2018}) and the frequency-independent gyromagnetic term $V_{gm}$ (for detail see Ref.~\onlinecite{zvezdin1997}). The Verdet constant as a function of temperature takes the form \begin{equation}\label{eq:Verdet_f} V(T)= V_{pm}+V_{gm} = -\dfrac{A\lambda_{0}^2}{(T-T_{w})}+\dfrac{B}{T-T_{w}}+ C, \end{equation} where $\lambda_{0}$ is the wavelength of the dominant electronic transition, $T_{w}$ is the Curie temperature, and $A$, $B$, $C$ are constants depending on the properties of the chosen material.
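\Cref{eq:Verdet_f} is straightforward to evaluate numerically. The sketch below uses placeholder constants ($A$, $B$, $C$, $\lambda_0$, $T_w$ are ours, chosen only to illustrate the shape of the curve, not fitted to any garnet), together with the rotation relation $\theta = VBL$ from \cref{eq:polarization rotation}:

```python
def verdet(T, A=1.0, B=500.0, C=10.0, lam0=1.55, Tw=50.0):
    """V(T) = -A*lam0^2/(T - Tw) + B/(T - Tw) + C, with placeholder constants."""
    return -A * lam0**2 / (T - Tw) + B / (T - Tw) + C

def rotation_angle(V, B_field, L):
    """Faraday rotation theta = V * B * L."""
    return V * B_field * L

# With these placeholder constants, the Verdet constant -- and hence the
# rotation angle for fixed B and L -- decreases monotonically as the
# crystal heats up, which is the mechanism behind the isolation change.
for T in (300.0, 350.0, 400.0):
    print(T, rotation_angle(verdet(T), B_field=1.0, L=0.01))
```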
\begin{figure} \caption{The temperature dependence of the Verdet constant for TGG, YIG, and Bi:YIG, at $1550~\nano\meter$ wavelength.} \label{fig:Temperature dependence of the Verdet} \end{figure} Using data from Refs.~\onlinecite{vojna2018,zao2019,cooper1968,crossley1969,stevens2016,matsumoto1986,vertruyen2008,vojna2019,kumari2018}, the dependence of the Verdet constant is obtained within a temperature range from $-20$ to $175~\celsius$, as presented in~\cref{fig:Temperature dependence of the Verdet}. These dependencies have been calculated at a fixed operating wavelength $\lambda = 1550~\nano\meter$. As reflected in~\cref{fig:Temperature dependence of the Verdet}, the TGG crystal exhibits the least temperature stability. This means that isolators based on TGG are the most susceptible to thermal effects at $\lambda = 1550~\nano\meter$, whereas isolators based on YIG or Bi:YIG should be more temperature-stable. \subsubsection{Isolation model} \label{sec:Imodel} Next, we analyze the change in isolation with varying crystal temperature in the proposed model with an ideal polarizer and analyzer. The polarization planes of the polarizer and the analyzer are oriented relative to each other at the angle~$\beta$, and the Faraday rotator provides a $45\degree$ rotation of the polarization plane of the propagating light at a central wavelength of $1550~\nano\meter$. In our model, the magnetic field is constant and independent of temperature (although in real systems the magnetic field might introduce changes in isolation). According to Malus's law, after passing through the Faraday rotator and the polarizer, the intensity of a beam of plane-polarized light varies as $I = I_{0}\cos^2(\theta + \beta)$, where $I_{0}$ is the initial intensity~\cite{booth1984,vojna2018}. The isolation coefficient is then defined as $\alpha = -10 \log \cos^2 (\beta + \theta)$. (The insertion loss may be found from a similar formula using the rotation angle $(\beta-\theta)$.)
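The relation $\alpha = -10 \log \cos^2 (\beta + \theta)$ can be checked numerically; a short sketch (function name ours) confirms that an isolation of about $40~\deci\bel$ corresponds to rotation angles of $44.43\degree$ or $45.57\degree$ at $\beta = 45\degree$:

```python
import math

def isolation_db(theta_deg, beta_deg=45.0):
    """Isolation alpha = -10 log10 cos^2(beta + theta), angles in degrees."""
    c = math.cos(math.radians(beta_deg + theta_deg))
    return -10.0 * math.log10(c * c)

# beta + theta = 90 deg would give infinite isolation; a 0.57 deg deviation
# on either side of 90 deg brings it down to about 40 dB.
print(isolation_db(44.43), isolation_db(45.57))  # both ~40 dB
```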
After substituting the value of $\theta$ from~\cref{eq:polarization rotation}, the temperature dependence of the isolation coefficient takes the form \begin{equation}\label{eq:Isolation} \alpha(T)=-10 \log \cos^2 \left( \beta + \dfrac{V(T)}{V(25~\celsius)} \cdot k \right), \end{equation} where $k$ is the rotation angle at the temperature of~$25~\celsius$, which depends on the initial isolation value~\cite{konno1993}. Let us use an initial isolation value of $40~\deci\bel$, since the typical value for single-stage isolators at room temperature ranges from $32$ to $40~\deci\bel$ according to their specifications~\cite{konno1993}. The isolation of~$40~\deci\bel$ corresponds to a rotation angle of the polarization plane in the Faraday rotator of either $\theta = 44.43\degree$ or $\theta = 45.57\degree$, depending on the direction of rotation. The calculation results for isolation and insertion loss are presented in~\cref{fig:Dependence of isolation}. The model predicts sharp peaks in the isolation value. It should be noted that generally there are no pronounced peaks in our experimental results for the isolation coefficient when the components are heated by the laser. This may be explained by internal scattering in the crystal, which leads to a partial change in the plane of polarization; however, this factor is not considered in the model. \begin{figure} \caption{Dependence of the isolation coefficient (a) and (b) and insertion loss (c) and (d) on temperature for TGG, YIG, and Bi:YIG.
(a) and (c) correspond to $\theta = 44.43\degree$; (b) and (d) correspond to $\theta = 45.57\degree$.} \label{fig:Dependence of isolation} \end{figure} \subsubsection{Outcome} \label{sec:Out} \begin{figure} \caption{Comparison of experimental results for ISO~PM~2 and model for YIG with $\theta = 44.43\degree$, for (a) isolation coefficient and (b) insertion loss.} \label{fig:Compare results} \end{figure} Our model shows that the change of the isolation coefficient with temperature depends heavily on the material of the magneto-optical crystal, even though each garnet may provide the same isolation value at room temperature. The TGG crystal demonstrates the sharpest decrease in the isolation coefficient in the operating temperature range of isolators; this is because the operating wavelength range for this garnet is from~$700$ to~$1100~\nano\meter$~\cite{vojna2018}. The Bi:YIG crystal is specially designed for applications demanding high values of the isolation coefficient over a wide temperature range~\cite{kiriyma2015,vojna2019}. According to the calculation, its isolation coefficient is more than~$40~\deci\bel$ in the temperature range from $-20$ to $180~\celsius$. Such high isolator stability is achieved owing to the optimal crystal composition~\cite{zao2019}: additional doping provides several sublattices in the crystal structure, which compensate each other's temperature dependence of the Verdet constant (and hence of the isolation). For YIG, our model predicts that the isolation decreases by about~$10~\deci\bel$ at $70~\celsius$; when the temperature increases significantly (up to $175~\celsius$), the isolation drops to about $15~\deci\bel$. The obtained result fits well with the experimental data for ISO~PM~2. A comparison of the experiment with the model is shown in~\cref{fig:Compare results}.
In summary, our model shows that Bi:YIG has the weakest dependence of the isolation coefficient on temperature and is therefore the best-suited garnet for isolators resilient to the laser-damage attack. In addition, we may assume that the magneto-optic crystal in the isolator ISO~PM~2 is YIG. \begin{thebibliography}{76} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Bennett}\ and\ \citenamefont {Brassard}(1984)}]{bennett1984} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Bennett}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brassard}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum cryptography: {P}ublic key distribution and coin tossing},}\ }in\ \href {\doibase 10.1016/j.tcs.2014.05.025} {\emph {\bibinfo {booktitle} {Proc. International Conference on Computers, Systems, and Signal Processing (Bangalore, India)}}}\ (\bibinfo {publisher} {IEEE Press},\ \bibinfo {address} {New York},\ \bibinfo {year} {1984})\ pp.\ \bibinfo {pages} {175--179}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ekert}(1991)}]{ekert1991} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont {Ekert}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum cryptography based on {B}ell's theorem},}\ }\href {\doibase 10.1103/PhysRevLett.67.661} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {67}},\ \bibinfo {pages} {661--663} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2002)\citenamefont {Gisin}, \citenamefont {Ribordy}, \citenamefont {Tittel},\ and\ \citenamefont {Zbinden}}]{gisin2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ribordy}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tittel}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum cryptography},}\ }\href {\doibase 10.1103/RevModPhys.74.145} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {145--195} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scarani}\ \emph {et~al.}(2009)\citenamefont {Scarani}, \citenamefont {Bechmann-Pasquinucci}, \citenamefont {Cerf}, \citenamefont {Du\v{s}ek}, \citenamefont {L\"{u}tkenhaus},\ and\ \citenamefont {Peev}}]{scarani2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Scarani}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Bechmann-Pasquinucci}}, \bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Cerf}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Du\v{s}ek}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {L\"{u}tkenhaus}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Peev}},\ }\bibfield {title} {\enquote {\bibinfo {title} {The security of practical quantum key distribution},}\ }\href {\doibase 10.1103/RevModPhys.81.1301} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {eid} {1301} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lo}\ \emph {et~al.}(2014)\citenamefont {Lo}, \citenamefont {Curty},\ and\ \citenamefont {Tamaki}}]{lo2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Curty}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Tamaki}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Secure quantum key distribution},}\ }\href {\doibase 10.1038/nphoton.2014.149} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Photonics}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {595--604} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2020)\citenamefont {Xu}, \citenamefont {Ma}, \citenamefont {Zhang}, \citenamefont {Lo},\ and\ \citenamefont {Pan}}]{xu2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Secure quantum key distribution with realistic devices},}\ }\href {\doibase 10.1103/RevModPhys.92.025002} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {025002} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Arute}\ \emph {et~al.}(2019)\citenamefont {Arute}, \citenamefont {Arya}, \citenamefont {Babbush}, \citenamefont {Bacon}, \citenamefont {Bardin}, \citenamefont {Barends}, \citenamefont {Biswas}, \citenamefont {Boixo}, \citenamefont {Brandao}, \citenamefont {Buell}, \citenamefont {Burkett}, \citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Chiaro}, \citenamefont {Collins}, \citenamefont {Courtney}, \citenamefont {Dunsworth}, \citenamefont {Farhi}, \citenamefont {Foxen}, \citenamefont {Fowler}, \citenamefont {Gidney}, \citenamefont {Giustina}, \citenamefont {Graff}, \citenamefont {Guerin}, \citenamefont {Habegger}, \citenamefont {Harrigan}, \citenamefont {Hartmann}, \citenamefont {Ho}, \citenamefont {Hoffmann}, \citenamefont {Huang}, \citenamefont {Humble}, \citenamefont {Isakov}, \citenamefont {Jeffrey}, \citenamefont {Jiang}, \citenamefont {Kafri}, \citenamefont {Kechedzhi}, \citenamefont {Kelly}, \citenamefont {Klimov}, \citenamefont {Knysh}, \citenamefont {Korotkov},
\citenamefont {Kostritsa}, \citenamefont {Landhuis}, \citenamefont {Lindmark}, \citenamefont {Lucero}, \citenamefont {Lyakh}, \citenamefont {Mandr{\`a}}, \citenamefont {McClean}, \citenamefont {McEwen}, \citenamefont {Megrant}, \citenamefont {Mi}, \citenamefont {Michielsen}, \citenamefont {Mohseni}, \citenamefont {Mutus}, \citenamefont {Naaman}, \citenamefont {Neeley}, \citenamefont {Neill}, \citenamefont {Niu}, \citenamefont {Ostby}, \citenamefont {Petukhov}, \citenamefont {Platt}, \citenamefont {Quintana}, \citenamefont {Rieffel}, \citenamefont {Roushan}, \citenamefont {Rubin}, \citenamefont {Sank}, \citenamefont {Satzinger}, \citenamefont {Smelyanskiy}, \citenamefont {Sung}, \citenamefont {Trevithick}, \citenamefont {Vainsencher}, \citenamefont {Villalonga}, \citenamefont {White}, \citenamefont {Yao}, \citenamefont {Neven},\ and\ \citenamefont {Martinis}}]{arute2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Arute}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Arya}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bacon}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Bardin}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Biswas}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {F.~G.~S.~L.}\ \bibnamefont {Brandao}}, \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Buell}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Burkett}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chiaro}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Collins}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Courtney}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dunsworth}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Foxen}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fowler}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gidney}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Giustina}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Graff}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Guerin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Habegger}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Harrigan}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Hartmann}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ho}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hoffmann}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {T.~S.}\ \bibnamefont {Humble}}, \bibinfo {author} {\bibfnamefont {S.~V.}\ \bibnamefont {Isakov}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kafri}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kechedzhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}}, \bibinfo {author} {\bibfnamefont {P.~V.}\ \bibnamefont {Klimov}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Knysh}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Korotkov}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Kostritsa}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Landhuis}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lindmark}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lucero}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Lyakh}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Mandr{\`a}}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {McEwen}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Megrant}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Mi}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Michielsen}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mohseni}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Mutus}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Naaman}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Neeley}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Neill}}, \bibinfo {author} {\bibfnamefont {M.~Y.}\ \bibnamefont {Niu}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Ostby}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Petukhov}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Platt}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Quintana}}, \bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Rieffel}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Rubin}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Satzinger}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Smelyanskiy}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Sung}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Trevithick}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vainsencher}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Villalonga}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Yao}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Neven}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum supremacy using a programmable superconducting processor},}\ }\href {\doibase 10.1038/s41586-019-1666-5} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {574}},\ \bibinfo {pages} {505--510} (\bibinfo {year}
{2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Makarov}\ \emph {et~al.}(2006)\citenamefont {Makarov}, \citenamefont {Anisimov},\ and\ \citenamefont {Skaar}}]{makarov2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Anisimov}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Effects of detector efficiency mismatch on security of quantum cryptosystems},}\ }\href {\doibase 10.1103/PhysRevA.74.022313} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {022313} (\bibinfo {year} {2006})},\ \bibinfo {note} {erratum ibid. \textbf{78}, 019905 (2008)}\BibitemShut {NoStop} \bibitem [{\citenamefont {Qi}\ \emph {et~al.}(2007)\citenamefont {Qi}, \citenamefont {Fung}, \citenamefont {Lo},\ and\ \citenamefont {Ma}}]{qi2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Qi}}, \bibinfo {author} {\bibfnamefont {C.-H.~F.}\ \bibnamefont {Fung}}, \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ma}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Time-shift attack in practical quantum cryptosystems},}\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. 
Comput.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {73--82} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lamas-Linares}\ and\ \citenamefont {Kurtsiefer}(2007)}]{lamas-linares2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lamas-Linares}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kurtsiefer}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Breaking a quantum key distribution system through a timing side channel},}\ }\href {\doibase 10.1364/oe.15.009388} {\bibfield {journal} {\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {9388--9393} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lydersen}\ \emph {et~al.}(2010{\natexlab{a}})\citenamefont {Lydersen}, \citenamefont {Wiechers}, \citenamefont {Wittmann}, \citenamefont {Elser}, \citenamefont {Skaar},\ and\ \citenamefont {Makarov}}]{lydersen2010a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wiechers}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Elser}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Hacking commercial quantum cryptography systems by tailored bright illumination},}\ }\href {\doibase 10.1038/nphoton.2010.214} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Photonics}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {686--689} (\bibinfo {year} {2010}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lydersen}\ \emph {et~al.}(2010{\natexlab{b}})\citenamefont {Lydersen}, \citenamefont {Wiechers}, \citenamefont {Wittmann}, \citenamefont {Elser}, \citenamefont {Skaar},\ and\ \citenamefont {Makarov}}]{lydersen2010b} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wiechers}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Elser}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Thermal blinding of gated detectors in quantum cryptography},}\ }\href {\doibase 10.1364/oe.18.027938} {\bibfield {journal} {\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {27938--27954} (\bibinfo {year} {2010}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2010)\citenamefont {Xu}, \citenamefont {Qi},\ and\ \citenamefont {Lo}}]{xu2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Qi}}, \ and\ \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Experimental demonstration of phase-remapping attack in a practical quantum key distribution system},}\ }\href {\doibase 10.1088/1367-2630/12/11/113026} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {113026} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Li}\ \emph {et~al.}(2011)\citenamefont {Li}, \citenamefont {Wang}, \citenamefont {Huang}, \citenamefont {Chen}, \citenamefont {Yin}, \citenamefont {Li}, \citenamefont {Zhou}, \citenamefont {Liu}, \citenamefont {Zhang}, \citenamefont {Guo}, \citenamefont {Bao},\ and\ \citenamefont {Han}}]{li2011a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Hong-Wei}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Shuang}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Jing-Zheng}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {Wei}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Zhen-Qiang}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Fang-Yi}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Zheng}\ \bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {Dong}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Yang}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Guang-Can}\ \bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {Wan-Su}\ \bibnamefont {Bao}}, \ and\ \bibinfo {author} {\bibfnamefont {Zheng-Fu}\ \bibnamefont {Han}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Attacking a practical quantum-key-distribution system with wavelength-dependent beam-splitter and multiwavelength sources},}\ }\href {\doibase 10.1103/PhysRevA.84.062308} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {062308} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wiechers}\ \emph {et~al.}(2011)\citenamefont {Wiechers}, \citenamefont {Lydersen}, \citenamefont {Wittmann}, \citenamefont {Elser}, \citenamefont {Skaar}, \citenamefont {Marquardt}, \citenamefont {Makarov},\ and\ \citenamefont {Leuchs}}]{wiechers2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wiechers}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Elser}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Marquardt}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},\ }\bibfield {title} {\enquote {\bibinfo {title} {After-gate attack on a quantum cryptosystem},}\ }\href {\doibase 10.1088/1367-2630/13/1/013043} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {013043} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lydersen}\ \emph {et~al.}(2011{\natexlab{a}})\citenamefont {Lydersen}, \citenamefont {Akhlaghi}, \citenamefont {Majedi}, \citenamefont {Skaar},\ and\ \citenamefont {Makarov}}]{lydersen2011c} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {M.~K.}\ \bibnamefont {Akhlaghi}}, \bibinfo {author} {\bibfnamefont {A.~H.}\ \bibnamefont {Majedi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Controlling a superconducting nanowire single-photon detector using tailored bright illumination},}\ }\href {\doibase 10.1088/1367-2630/13/11/113042} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {113042} (\bibinfo {year} {2011}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lydersen}\ \emph {et~al.}(2011{\natexlab{b}})\citenamefont {Lydersen}, \citenamefont {Jain}, \citenamefont {Wittmann}, \citenamefont {Mar{\o}y}, \citenamefont {Skaar}, \citenamefont {Marquardt}, \citenamefont {Makarov},\ and\ \citenamefont {Leuchs}}]{lydersen2011b} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Jain}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont {{\O}.}~\bibnamefont {Mar{\o}y}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Marquardt}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Superlinear threshold detectors in 
quantum cryptography},}\ }\href {\doibase 10.1103/PhysRevA.84.032320} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {032320} (\bibinfo {year} {2011}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gerhardt}\ \emph {et~al.}(2011)\citenamefont {Gerhardt}, \citenamefont {Liu}, \citenamefont {Lamas-Linares}, \citenamefont {Skaar}, \citenamefont {Kurtsiefer},\ and\ \citenamefont {Makarov}}]{gerhardt2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Gerhardt}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lamas-Linares}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kurtsiefer}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Full-field implementation of a perfect eavesdropper on a quantum cryptography system},}\ }\href {\doibase 10.1038/ncomms1348} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {349} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sun}\ \emph {et~al.}(2011)\citenamefont {Sun}, \citenamefont {Jiang},\ and\ \citenamefont {Liang}}]{sun2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-H.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {M.-S.}\ \bibnamefont {Jiang}}, \ and\ \bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Liang}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Passive {F}araday-mirror attack in a practical two-way quantum-key-distribution system},}\ }\href {\doibase 10.1103/PhysRevA.83.062331} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {062331} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jain}\ \emph {et~al.}(2011)\citenamefont {Jain}, \citenamefont {Wittmann}, \citenamefont {Lydersen}, \citenamefont {Wiechers}, \citenamefont {Elser}, \citenamefont {Marquardt}, \citenamefont {Makarov},\ and\ \citenamefont {Leuchs}}]{jain2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Jain}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wiechers}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Elser}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Marquardt}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Device calibration impacts security of quantum key distribution},}\ }\href {\doibase 10.1103/PhysRevLett.107.110501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {110501} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bugge}\ \emph {et~al.}(2014)\citenamefont {Bugge}, \citenamefont {Sauge}, \citenamefont {Ghazali}, \citenamefont {Skaar}, \citenamefont {Lydersen},\ and\ \citenamefont {Makarov}}]{bugge2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Bugge}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sauge}}, \bibinfo {author} {\bibfnamefont {A.~M.~M.}\ \bibnamefont {Ghazali}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Laser damage helps the eavesdropper in quantum cryptography},}\ }\href {\doibase 10.1103/PhysRevLett.112.070503} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {070503} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sajeed}\ \emph {et~al.}(2015{\natexlab{a}})\citenamefont {Sajeed}, \citenamefont {Chaiwongkhot}, \citenamefont {Bourgoin}, \citenamefont {Jennewein}, \citenamefont {L{\" u}tkenhaus},\ and\ \citenamefont {Makarov}}]{sajeed2015a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sajeed}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Chaiwongkhot}}, \bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont {Bourgoin}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {L{\" u}tkenhaus}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Security loophole in free-space quantum key distribution due to spatial-mode detector-efficiency mismatch},}\ }\href {\doibase 10.1103/PhysRevA.91.062301} {\bibfield 
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {062301} (\bibinfo {year} {2015}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2016{\natexlab{a}})\citenamefont {Huang}, \citenamefont {Sajeed}, \citenamefont {Chaiwongkhot}, \citenamefont {Soucarros}, \citenamefont {Legr{\' e}},\ and\ \citenamefont {Makarov}}]{huang2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sajeed}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Chaiwongkhot}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Soucarros}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Legr{\' e}}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Testing random-detector-efficiency countermeasure in a commercial system reveals a breakable unrealistic assumption},}\ }\href {\doibase 10.1109/JQE.2016.2611443} {\bibfield {journal} {\bibinfo {journal} {IEEE J. 
Quantum Electron.}\ }\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {8000211} (\bibinfo {year} {2016}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Makarov}\ \emph {et~al.}(2016)\citenamefont {Makarov}, \citenamefont {Bourgoin}, \citenamefont {Chaiwongkhot}, \citenamefont {Gagn{\'e}}, \citenamefont {Jennewein}, \citenamefont {Kaiser}, \citenamefont {Kashyap}, \citenamefont {Legr{\'e}}, \citenamefont {Minshull},\ and\ \citenamefont {Sajeed}}]{makarov2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont {Bourgoin}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Chaiwongkhot}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gagn{\'e}}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kaiser}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kashyap}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Legr{\'e}}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Minshull}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sajeed}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Creation of backdoors in quantum communications via laser damage},}\ }\href {\doibase 10.1103/PhysRevA.94.030302} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
A}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {030302} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2018)\citenamefont {Huang}, \citenamefont {Sun}, \citenamefont {Liu},\ and\ \citenamefont {Makarov}}]{huang2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {S.-H.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum key distribution with distinguishable decoy states},}\ }\href {\doibase 10.1103/PhysRevA.98.012330} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {012330} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Qian}\ \emph {et~al.}(2018)\citenamefont {Qian}, \citenamefont {He}, \citenamefont {Wang}, \citenamefont {Chen}, \citenamefont {Yin}, \citenamefont {Guo},\ and\ \citenamefont {Han}}]{qian2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Yong-Jun}\ \bibnamefont {Qian}}, \bibinfo {author} {\bibfnamefont {De-Yong}\ \bibnamefont {He}}, \bibinfo {author} {\bibfnamefont {Shuang}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Wei}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Zhen-Qiang}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Guang-Can}\ \bibnamefont {Guo}}, \ and\ \bibinfo {author} {\bibfnamefont {Zheng-Fu}\ \bibnamefont {Han}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Hacking the quantum key distribution system by exploiting the avalanche-transition region of single-photon detectors},}\ }\href {\doibase 10.1103/PhysRevApplied.10.064062} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Appl.}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {064062} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2019)\citenamefont {Huang}, \citenamefont {Navarrete}, \citenamefont {Sun}, \citenamefont {Chaiwongkhot}, \citenamefont {Curty},\ and\ \citenamefont {Makarov}}]{huang2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {{\'A}.}~\bibnamefont {Navarrete}}, \bibinfo {author} {\bibfnamefont {S.-H.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Chaiwongkhot}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Curty}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Laser-seeding attack in quantum key distribution},}\ }\href {\doibase 10.1103/PhysRevApplied.12.064043} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Appl.}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {064043} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2020)\citenamefont {Huang}, \citenamefont {Li}, \citenamefont {Egorov}, \citenamefont {Tchouragoulov}, \citenamefont {Kumar},\ and\ \citenamefont {Makarov}}]{huang2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Egorov}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Tchouragoulov}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kumar}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Laser damage attack against optical attenuators in quantum key distribution},}\ }\href {\doibase 10.1103/PhysRevApplied.13.034017} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Appl.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {034017} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sun}\ and\ \citenamefont {Huang}(2022)}]{sun2022} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Shihai}\ \bibnamefont {Sun}}\ and\ \bibinfo {author} {\bibfnamefont {Anqi}\ \bibnamefont {Huang}},\ }\bibfield {title} {\enquote {\bibinfo {title} {A review of security evaluation of practical quantum key distribution system},}\ }\href {\doibase 10.3390/e24020260} {\bibfield {journal} {\bibinfo {journal} {Entropy}\ }\textbf {\bibinfo {volume} {24}},\ \bibinfo {pages} {260} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chaiwongkhot}\ \emph {et~al.}(2022)\citenamefont {Chaiwongkhot}, \citenamefont {Zhong}, \citenamefont {Huang}, \citenamefont {Qin}, \citenamefont {Shi},\ and\ \citenamefont {Makarov}}]{chaiwongkhot2022} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Poompong}\ \bibnamefont {Chaiwongkhot}}, \bibinfo {author} {\bibfnamefont {Jiaqiang}\ \bibnamefont {Zhong}}, \bibinfo {author} {\bibfnamefont {Anqi}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {Hao}\ \bibnamefont {Qin}}, \bibinfo {author} {\bibfnamefont {Sheng-cai}\ \bibnamefont {Shi}}, \ and\ \bibinfo {author} {\bibfnamefont {Vadim}\ \bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Faking photon number on a transition-edge sensor},}\ }\href {\doibase 10.1140/epjqt/s40507-022-00141-2} {\bibfield {journal} {\bibinfo {journal} {EPJ Quantum Technol.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {23} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}()\citenamefont {Huang}, \citenamefont {Mizutani}, \citenamefont {Lo}, \citenamefont {Makarov},\ and\ \citenamefont {Tamaki}}]{huang2022} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Anqi}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont 
{Akihiro}\ \bibnamefont {Mizutani}}, \bibinfo {author} {\bibfnamefont {Hoi-Kwong}\ \bibnamefont {Lo}}, \bibinfo {author} {\bibfnamefont {Vadim}\ \bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {Kiyoshi}\ \bibnamefont {Tamaki}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Characterisation of state preparation uncertainty in quantum key distribution},}\ }\href@noop {} {\ }\Eprint {http://arxiv.org/abs/2205.11870} {arXiv:2205.11870 [quant-ph]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Gao}\ \emph {et~al.}()\citenamefont {Gao}, \citenamefont {Wu}, \citenamefont {Shi}, \citenamefont {Liu}, \citenamefont {Wang}, \citenamefont {Yu}, \citenamefont {Huang},\ and\ \citenamefont {Wu}}]{gao2022} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Binwu}\ \bibnamefont {Gao}}, \bibinfo {author} {\bibfnamefont {Zhihai}\ \bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {Weixu}\ \bibnamefont {Shi}}, \bibinfo {author} {\bibfnamefont {Yingwen}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Dongyang}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Chunlin}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {Anqi}\ \bibnamefont {Huang}}, \ and\ \bibinfo {author} {\bibfnamefont {Junjie}\ \bibnamefont {Wu}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Strong pulse illumination hacks self-differencing avalanche photodiode detectors in a high-speed quantum key distribution system},}\ }\href@noop {} {\ }\Eprint {http://arxiv.org/abs/2205.04177} {arXiv:2205.04177 [quant-ph]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Lo}\ \emph {et~al.}(2012)\citenamefont {Lo}, \citenamefont {Curty},\ and\ \citenamefont {Qi}}]{lo2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Curty}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Qi}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Measurement-device-independent 
quantum key distribution},}\ }\href {\doibase 10.1103/PhysRevLett.108.130503} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {130503} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lucamarini}\ \emph {et~al.}(2018)\citenamefont {Lucamarini}, \citenamefont {Yuan}, \citenamefont {Dynes},\ and\ \citenamefont {Shields}}]{lucamarini2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lucamarini}}, \bibinfo {author} {\bibfnamefont {Z.~L.}\ \bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Dynes}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Shields}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Overcoming the rate--distance limit of quantum key distribution without quantum repeaters},}\ }\href {\doibase 10.1038/s41586-018-0066-6} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {557}},\ \bibinfo {pages} {400} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2022)\citenamefont {Wang}, \citenamefont {Yin}, \citenamefont {He}, \citenamefont {Chen}, \citenamefont {Wang}, \citenamefont {Ye}, \citenamefont {Zhou}, \citenamefont {Fan-Yuan}, \citenamefont {Wang}, \citenamefont {Chen}, \citenamefont {Zhu}, \citenamefont {Morozov}, \citenamefont {Divochiy}, \citenamefont {Zhou}, \citenamefont {Guo},\ and\ \citenamefont {Han}}]{wang2022} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Shuang}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Zhen-Qiang}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {De-Yong}\ \bibnamefont {He}}, \bibinfo {author} {\bibfnamefont {Wei}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Rui-Qiang}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Peng}\ \bibnamefont {Ye}}, \bibinfo {author} {\bibfnamefont {Yao}\ \bibnamefont {Zhou}}, \bibinfo 
{author} {\bibfnamefont {Guan-Jie}\ \bibnamefont {Fan-Yuan}}, \bibinfo {author} {\bibfnamefont {Fang-Xiang}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Wei}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Yong-Gang}\ \bibnamefont {Zhu}}, \bibinfo {author} {\bibfnamefont {Pavel~V.}\ \bibnamefont {Morozov}}, \bibinfo {author} {\bibfnamefont {Alexander~V.}\ \bibnamefont {Divochiy}}, \bibinfo {author} {\bibfnamefont {Zheng}\ \bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {Guang-Can}\ \bibnamefont {Guo}}, \ and\ \bibinfo {author} {\bibfnamefont {Zheng-Fu}\ \bibnamefont {Han}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Twin-field quantum key distribution over 830-km fibre},}\ }\href {\doibase 10.1038/s41566-021-00928-2} {\bibfield {journal} {\bibinfo {journal} {Nat. Photonics}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {154--161} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}(2017)}]{bennett2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Bennett}},\ }\bibfield {title} {\enquote {\bibinfo {title} {The slippery slope between quantum information and public information, and why cheap {DIY} randomness is better than expensive {DI} randomness},}\ \ }(\bibinfo {year} {2017})\ \bibinfo {note} {{QCrypt} 2017 rump session, \url{http://2017.qcrypt.net/events-in-qcrypt-2017/rump-session/}}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2006)\citenamefont {Gisin}, \citenamefont {Fasel}, \citenamefont {Kraus}, \citenamefont {Zbinden},\ and\ \citenamefont {Ribordy}}]{gisin2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Fasel}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Kraus}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ribordy}},\ }\bibfield {title} 
{\enquote {\bibinfo {title} {Trojan-horse attacks on quantum-key-distribution systems},}\ }\href {\doibase 10.1103/PhysRevA.73.022320} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {022320} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jain}\ \emph {et~al.}(2014)\citenamefont {Jain}, \citenamefont {Anisimova}, \citenamefont {Khan}, \citenamefont {Makarov}, \citenamefont {Marquardt},\ and\ \citenamefont {Leuchs}}]{jain2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Jain}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Anisimova}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Khan}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \bibinfo {author} {\bibfnamefont {Ch.}\ \bibnamefont {Marquardt}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Trojan-horse attacks threaten the security of practical quantum cryptography},}\ }\href {\doibase 10.1088/1367-2630/16/12/123030} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {123030} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sun}\ \emph {et~al.}(2015)\citenamefont {Sun}, \citenamefont {Xu}, \citenamefont {Jiang}, \citenamefont {Ma}, \citenamefont {Lo},\ and\ \citenamefont {Liang}}]{sun2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-H.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {M.-S.}\ \bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {X.-C.}\ \bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \ and\ \bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Liang}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Effect of source tampering in the security of quantum cryptography},}\ }\href {\doibase 10.1103/PhysRevA.92.022304} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {022304} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pang}\ \emph {et~al.}(2020)\citenamefont {Pang}, \citenamefont {Yang}, \citenamefont {Zhang}, \citenamefont {Dou}, \citenamefont {Li}, \citenamefont {Gao},\ and\ \citenamefont {Jin}}]{pang2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Pang}}, \bibinfo {author} {\bibfnamefont {A.-L.}\ \bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {C.-N.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont {Dou}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gao}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-M.}\ \bibnamefont {Jin}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Hacking quantum key distribution via injection locking},}\ }\href {\doibase 10.1103/PhysRevApplied.13.034008} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Appl.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {034008} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sajeed}\ \emph {et~al.}(2015{\natexlab{b}})\citenamefont {Sajeed}, \citenamefont {Radchenko}, \citenamefont {Kaiser}, \citenamefont {Bourgoin}, \citenamefont {Pappa}, \citenamefont {Monat}, \citenamefont {Legr\'e},\ and\ \citenamefont {Makarov}}]{sajeed2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Shihan}\ \bibnamefont {Sajeed}}, \bibinfo {author} {\bibfnamefont {Igor}\ \bibnamefont {Radchenko}}, \bibinfo {author} {\bibfnamefont {Sarah}\ \bibnamefont {Kaiser}}, \bibinfo {author} {\bibfnamefont {Jean-Philippe}\ \bibnamefont {Bourgoin}}, \bibinfo {author} {\bibfnamefont {Anna}\ \bibnamefont {Pappa}}, \bibinfo {author} {\bibfnamefont {Laurent}\ \bibnamefont {Monat}}, \bibinfo {author} {\bibfnamefont {Matthieu}\ \bibnamefont {Legr\'e}}, \ and\ \bibinfo {author} {\bibfnamefont {Vadim}\ \bibnamefont {Makarov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Attacks exploiting deviation of mean photon number in quantum key distribution and coin tossing},}\ }\href {\doibase 10.1103/PhysRevA.91.032326} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {032326} (\bibinfo {year} {2015}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mo}\ \emph {et~al.}(2005)\citenamefont {Mo}, \citenamefont {Zhu}, \citenamefont {Han}, \citenamefont {Gui},\ and\ \citenamefont {Guo}}]{mo2005} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-F.}\ \bibnamefont {Mo}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Zhu}}, \bibinfo {author} {\bibfnamefont {Z.-F.}\ \bibnamefont {Han}}, \bibinfo {author} {\bibfnamefont {Y.-Z.}\ \bibnamefont {Gui}}, \ and\ \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Faraday-{M}ichelson system for quantum cryptography},}\ }\href {\doibase 10.1364/ol.30.002632} {\bibfield {journal} {\bibinfo {journal} {Opt. Lett.}\ }\textbf {\bibinfo {volume} {30}},\ \bibinfo {pages} {2632--2634} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2016{\natexlab{b}})\citenamefont {Huang}, \citenamefont {Huang}, \citenamefont {Lin},\ and\ \citenamefont {Zeng}}]{huang2016a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Lin}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Zeng}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Long-distance continuous-variable quantum key distribution by controlling excess noise},}\ }\href {\doibase 10.1038/srep19201} {\bibfield {journal} {\bibinfo {journal} {Sci. 
Rep.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {19201} (\bibinfo {year} {2016}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2016)\citenamefont {Wang}, \citenamefont {Qin}, \citenamefont {Jiang}, \citenamefont {Wang}, \citenamefont {Chen}, \citenamefont {Zhao}, \citenamefont {Wei},\ and\ \citenamefont {Zhang}}]{wang2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Qin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Wei}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhang}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Experimental demonstration of polarization encoding quantum key distribution system based on intrinsically stable polarization-modulated units},}\ }\href {\doibase 10.1364/OE.24.008302} {\bibfield {journal} {\bibinfo {journal} {Opt. 
Express}\ }\textbf {\bibinfo {volume} {24}},\ \bibinfo {pages} {8302--8309} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dixon}\ \emph {et~al.}(2017)\citenamefont {Dixon}, \citenamefont {Dynes}, \citenamefont {Lucamarini}, \citenamefont {Fr{\"o}hlich}, \citenamefont {Sharpe}, \citenamefont {Plews}, \citenamefont {Tam}, \citenamefont {Yuan}, \citenamefont {Tanizawa}, \citenamefont {Sato}, \citenamefont {Kawamura}, \citenamefont {Fujiwara}, \citenamefont {Sasaki},\ and\ \citenamefont {Shields}}]{dixon2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~R.}\ \bibnamefont {Dixon}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Dynes}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lucamarini}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Fr{\"o}hlich}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Sharpe}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Plews}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tam}}, \bibinfo {author} {\bibfnamefont {Z.-L.}\ \bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Tanizawa}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Sato}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kawamura}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fujiwara}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sasaki}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Shields}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum key distribution with hacking countermeasures and long term field trial},}\ }\href {\doibase 10.1038/s41598-017-01884-0} {\bibfield {journal} {\bibinfo {journal} {Sci. 
Rep.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {1978} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xia}\ \emph {et~al.}(2019)\citenamefont {Xia}, \citenamefont {Zhang}, \citenamefont {Xie}, \citenamefont {Yuan}, \citenamefont {Lin}, \citenamefont {Liao}, \citenamefont {Liu}, \citenamefont {Peng}, \citenamefont {Zhang},\ and\ \citenamefont {Pan}}]{xia2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-X.}\ \bibnamefont {Xia}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {H.-B.}\ \bibnamefont {Xie}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lin}}, \bibinfo {author} {\bibfnamefont {S.-K.}\ \bibnamefont {Liao}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{LED}-based fiber quantum key distribution: toward low-cost applications},}\ }\href {\doibase 10.1364/PRJ.7.001169} {\bibfield {journal} {\bibinfo {journal} {Photonics Res.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {1169--1174} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wei}\ \emph {et~al.}(2020)\citenamefont {Wei}, \citenamefont {Li}, \citenamefont {Tan}, \citenamefont {Li}, \citenamefont {Min}, \citenamefont {Zhang}, \citenamefont {Li}, \citenamefont {You}, \citenamefont {Wang}, \citenamefont {Jiang}, \citenamefont {Chen}, \citenamefont {Liao}, \citenamefont {Peng}, \citenamefont {Xu},\ and\ \citenamefont {Pan}}]{wei2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Wei}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Tan}}, \bibinfo {author} 
{\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Min}}, \bibinfo {author} {\bibfnamefont {W.-J.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {You}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {T.-Y.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {S.-K.}\ \bibnamefont {Liao}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Xu}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\bibfield {title} {\enquote {\bibinfo {title} {High-speed measurement-device-independent quantum key distribution with integrated silicon photonics},}\ }\href {\doibase 10.1103/PhysRevX.10.031030} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {031030} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2019)\citenamefont {Liu}, \citenamefont {Yu}, \citenamefont {Zou}, \citenamefont {Tang}, \citenamefont {Zhao}, \citenamefont {Zhang}, \citenamefont {Wang}, \citenamefont {Chen},\ and\ \citenamefont {Pan}}]{liu2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Z.-W.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zou}}, \bibinfo {author} {\bibfnamefont {Y.-L.}\ \bibnamefont {Tang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {X.-B.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {T.-Y.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\bibfield {title} {\enquote {\bibinfo {title} 
{Experimental 4-intensity decoy-state quantum key distribution with asymmetric basis-detector efficiency},}\ }\href {\doibase 10.1103/PhysRevA.100.042313} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} {042313} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lucamarini}\ \emph {et~al.}(2015)\citenamefont {Lucamarini}, \citenamefont {Choi}, \citenamefont {Ward}, \citenamefont {Dynes}, \citenamefont {Yuan},\ and\ \citenamefont {Shields}}]{lucamarini2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lucamarini}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Choi}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Ward}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Dynes}}, \bibinfo {author} {\bibfnamefont {Z.~L.}\ \bibnamefont {Yuan}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Shields}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Practical security bounds against the {T}rojan-horse attack in quantum key distribution},}\ }\href {\doibase 10.1103/PhysRevX.5.031030} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {031030} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{ETS()}]{ETSIQKD0010} \BibitemOpen \href@noop {} {}\bibinfo {note} {Work Programme of GS QKD 010 Quantum Key Distribution (QKD), Implementation security protection against Trojan horse attacks in one-way QKD systems, \url{https://portal.etsi.org/webapp/workProgram/Report_Schedule.asp?WKI_ID=43375}, visited 5 May 2014}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lucio-Martinez}\ \emph {et~al.}(2009)\citenamefont {Lucio-Martinez}, \citenamefont {Chan}, \citenamefont {Mo}, \citenamefont {Hosier},\ and\ \citenamefont {Tittel}}]{lucio-martinez2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Lucio-Martinez}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Chan}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Mo}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Hosier}}, \ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tittel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Proof-of-concept of real-world quantum key distribution with quantum frames},}\ }\href {\doibase 10.1088/1367-2630/11/9/095001} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {095001} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tang}\ \emph {et~al.}(2014)\citenamefont {Tang}, \citenamefont {Liao}, \citenamefont {Xu}, \citenamefont {Qi}, \citenamefont {Qian},\ and\ \citenamefont {Lo}}]{tang2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Zhiyuan}\ \bibnamefont {Tang}}, \bibinfo {author} {\bibfnamefont {Zhongfa}\ \bibnamefont {Liao}}, \bibinfo {author} {\bibfnamefont {Feihu}\ \bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {Bing}\ \bibnamefont {Qi}}, \bibinfo {author} {\bibfnamefont {Li}~\bibnamefont {Qian}}, \ and\ \bibinfo {author} {\bibfnamefont {Hoi-Kwong}\ \bibnamefont {Lo}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Experimental demonstration of polarization encoding measurement-device-independent quantum key distribution},}\ }\href {\doibase 10.1103/PhysRevLett.112.190503} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {190503} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2014)\citenamefont {Wang}, \citenamefont {Chen}, \citenamefont {Yin}, \citenamefont {Li}, \citenamefont {He}, \citenamefont {Li}, \citenamefont {Zhou}, \citenamefont {Song}, \citenamefont {Li}, \citenamefont {Wang}, \citenamefont {Chen}, \citenamefont {Han}, \citenamefont {Huang}, \citenamefont {Guo}, \citenamefont {Hao}, \citenamefont {Li}, \citenamefont {Zhang}, \citenamefont {Liu}, \citenamefont {Liang}, \citenamefont {Miao}, \citenamefont {Wu}, \citenamefont {Guo},\ and\ \citenamefont {Han}}]{wang2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Shuang}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Wei}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Zhen-Qiang}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Hong-Wei}\ \bibnamefont {Li}}, \bibinfo {author} 
{\bibfnamefont {De-Yong}\ \bibnamefont {He}}, \bibinfo {author} {\bibfnamefont {Yu-Hu}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Zheng}\ \bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {Xiao-Tian}\ \bibnamefont {Song}}, \bibinfo {author} {\bibfnamefont {Fang-Yi}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Dong}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Hua}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Yun-Guang}\ \bibnamefont {Han}}, \bibinfo {author} {\bibfnamefont {Jing-Zheng}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {Jun-Fu}\ \bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {Peng-Lei}\ \bibnamefont {Hao}}, \bibinfo {author} {\bibfnamefont {Mo}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Chun-Mei}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Dong}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Wen-Ye}\ \bibnamefont {Liang}}, \bibinfo {author} {\bibfnamefont {Chun-Hua}\ \bibnamefont {Miao}}, \bibinfo {author} {\bibfnamefont {Ping}\ \bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {Guang-Can}\ \bibnamefont {Guo}}, \ and\ \bibinfo {author} {\bibfnamefont {Zheng-Fu}\ \bibnamefont {Han}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Field and long-term demonstration of a wide area quantum key distribution network},}\ }\href {\doibase 10.1364/OE.22.021739} {\bibfield {journal} {\bibinfo {journal} {Opt. 
Express}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {21739--21756} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Berent}\ \emph {et~al.}(2013)\citenamefont {Berent}, \citenamefont {Rangelov},\ and\ \citenamefont {Vitanov}}]{berent2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Michał}\ \bibnamefont {Berent}}, \bibinfo {author} {\bibfnamefont {Andon~A.}\ \bibnamefont {Rangelov}}, \ and\ \bibinfo {author} {\bibfnamefont {Nikolay~V.}\ \bibnamefont {Vitanov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Broadband {F}araday isolator},}\ }\href {\doibase 10.1364/JOSAA.30.000149} {\bibfield {journal} {\bibinfo {journal} {J. Opt. Soc. Am. A}\ }\textbf {\bibinfo {volume} {30}},\ \bibinfo {pages} {149--153} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tamaki}\ \emph {et~al.}(2016)\citenamefont {Tamaki}, \citenamefont {Curty},\ and\ \citenamefont {Lucamarini}}]{tamaki2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Kiyoshi}\ \bibnamefont {Tamaki}}, \bibinfo {author} {\bibfnamefont {Marcos}\ \bibnamefont {Curty}}, \ and\ \bibinfo {author} {\bibfnamefont {Marco}\ \bibnamefont {Lucamarini}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Decoy-state quantum key distribution with a leaky source},}\ }\href {\doibase 10.1088/1367-2630/18/6/065008} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {065008} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2018)\citenamefont {Wang}, \citenamefont {Tamaki},\ and\ \citenamefont {Curty}}]{wang2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Weilong}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Kiyoshi}\ \bibnamefont {Tamaki}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcos}\ \bibnamefont {Curty}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Finite-key security analysis for quantum key distribution with leaky sources},}\ }\href {\doibase 10.1088/1367-2630/aad839} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo {pages} {083027} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Navarrete}\ and\ \citenamefont {Curty}()}]{navarrete2022} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Álvaro}\ \bibnamefont {Navarrete}}\ and\ \bibinfo {author} {\bibfnamefont {Marcos}\ \bibnamefont {Curty}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Improved finite-key security analysis of quantum key distribution against {T}rojan-horse attacks},}\ }\href@noop {} {\ }\Eprint {http://arxiv.org/abs/2202.06630v1} {arXiv:2202.06630v1 [quant-ph]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Zvezdin}\ and\ \citenamefont {Kotov}(1997)}]{zvezdin1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont {Zvezdin}}\ and\ \bibinfo {author} {\bibfnamefont {V.~A.}\ \bibnamefont {Kotov}},\ }\href {https://books.google.ru/books?id=hQ7Xk7MToRoC} {\emph {\bibinfo {title} {Modern Magnetooptics and Magnetooptical Materials}}},\ Condensed Matter Physics\ (\bibinfo {publisher} {CRC Press},\ \bibinfo {year} {1997})\BibitemShut {NoStop} \bibitem [{\citenamefont {Kiriyama}\ \emph {et~al.}(2015)\citenamefont {Kiriyama}, \citenamefont {Mori}, \citenamefont {Pirozhkov}, \citenamefont 
{Ogura}, \citenamefont {Sagisaka}, \citenamefont {Kon}, \citenamefont {Esirkepov}, \citenamefont {Hayashi}, \citenamefont {Kotaki}, \citenamefont {Kanasaki}, \citenamefont {Sakaki}, \citenamefont {Fukuda}, \citenamefont {Koga}, \citenamefont {Nishiuchi}, \citenamefont {Kando}, \citenamefont {Bulanov}, \citenamefont {Kondo}, \citenamefont {Bolton}, \citenamefont {Slezak}, \citenamefont {Vojna}, \citenamefont {Sawicka-Chyla}, \citenamefont {Jambunathan}, \citenamefont {Lucianetti},\ and\ \citenamefont {Mocek}}]{kiriyma2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Kiriyama}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mori}}, \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Pirozhkov}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Ogura}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sagisaka}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kon}}, \bibinfo {author} {\bibfnamefont {T.~Z.}\ \bibnamefont {Esirkepov}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Hayashi}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Kotaki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kanasaki}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Sakaki}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Fukuda}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Koga}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Nishiuchi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kando}}, \bibinfo {author} {\bibfnamefont {S.~V.}\ \bibnamefont {Bulanov}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kondo}}, \bibinfo {author} {\bibfnamefont {P.~R.}\ \bibnamefont {Bolton}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Slezak}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Vojna}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sawicka-Chyla}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Jambunathan}}, \bibinfo {author} {\bibfnamefont 
{A.}~\bibnamefont {Lucianetti}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mocek}},\ }\bibfield {title} {\enquote {\bibinfo {title} {High-contrast, high-intensity petawatt-class laser and applications},}\ }\href {\doibase 10.1109/JSTQE.2014.2336774} {\bibfield {journal} {\bibinfo {journal} {IEEE J. Sel. Top. Quantum Electron.}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {232--249} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Snetkov}\ \emph {et~al.}(2014)\citenamefont {Snetkov}, \citenamefont {Voitovich}, \citenamefont {Palashov},\ and\ \citenamefont {Khazanov}}]{snetkov2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Snetkov}}, \bibinfo {author} {\bibfnamefont {A.~V.}\ \bibnamefont {Voitovich}}, \bibinfo {author} {\bibfnamefont {O.~V.}\ \bibnamefont {Palashov}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Khazanov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Review of {F}araday isolators for kilowatt average power lasers},}\ }\href {\doibase 10.1109/JQE.2014.2317231} {\bibfield {journal} {\bibinfo {journal} {IEEE J. Quantum Electron.}\ }\textbf {\bibinfo {volume} {50}},\ \bibinfo {pages} {434--443} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Khazanov}(2016)}]{khazanov2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Khazanov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Thermooptics of magnetoactive media: {F}araday isolators for high average power lasers},}\ }\href {\doibase 10.3367/UFNe.2016.03.037829} {\bibfield {journal} {\bibinfo {journal} {Phys. 
Uspekhi}\ }\textbf {\bibinfo {volume} {59}},\ \bibinfo {pages} {886--909} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Booth}\ and\ \citenamefont {White}(1984)}]{booth1984} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~C.}\ \bibnamefont {Booth}}\ and\ \bibinfo {author} {\bibfnamefont {E.~A.~D.}\ \bibnamefont {White}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Magneto-optic properties of rare earth iron garnet crystals in the wavelength range 1.1--1.7~$\mu$m and their use in device fabrication},}\ }\href {\doibase 10.1088/0022-3727/17/3/015} {\bibfield {journal} {\bibinfo {journal} {J. Phys. D: Appl. Phys.}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {579--587} (\bibinfo {year} {1984})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mukimov}\ \emph {et~al.}(1990)\citenamefont {Mukimov}, \citenamefont {Sokolov},\ and\ \citenamefont {Valiev}}]{mukimovb1990} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont {Mukimov}}, \bibinfo {author} {\bibfnamefont {B.~Yu.}\ \bibnamefont {Sokolov}}, \ and\ \bibinfo {author} {\bibfnamefont {U.~V.}\ \bibnamefont {Valiev}},\ }\bibfield {title} {\enquote {\bibinfo {title} {The {F}araday effect of rare-earth ions in garnets},}\ }\href {\doibase 10.1002/pssa.2211190136} {\bibfield {journal} {\bibinfo {journal} {Phys. 
Status Solidi A}\ }\textbf {\bibinfo {volume} {119}},\ \bibinfo {pages} {307--315} (\bibinfo {year} {1990})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Slez\'{a}k}\ \emph {et~al.}(2016)\citenamefont {Slez\'{a}k}, \citenamefont {Yasuhara}, \citenamefont {Lucianetti},\ and\ \citenamefont {Mocek}}]{slezak2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Slez\'{a}k}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Yasuhara}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lucianetti}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mocek}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Temperature-wavelength dependence of terbium gallium garnet ceramics {V}erdet constant},}\ }\href {\doibase 10.1364/OME.6.003683} {\bibfield {journal} {\bibinfo {journal} {Opt. Mater. Express}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {3683--3691} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Serber}(1932)}]{serber1932} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Serber}},\ }\bibfield {title} {\enquote {\bibinfo {title} {The theory of the {F}araday effect in molecules},}\ }\href {\doibase 10.1103/PhysRev.41.489} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.}\ }\textbf {\bibinfo {volume} {41}},\ \bibinfo {pages} {489--506} (\bibinfo {year} {1932})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Buckingham}\ and\ \citenamefont {Stephens}(1966)}]{buckingham1966} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {Buckingham}}\ and\ \bibinfo {author} {\bibfnamefont {P.~J}\ \bibnamefont {Stephens}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Magnetic optical activity},}\ }\href {\doibase 10.1146/annurev.pc.17.100166.002151} {\bibfield {journal} {\bibinfo {journal} {Annu. Rev. Phys. 
Chem.}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {399--432} (\bibinfo {year} {1966})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vojna}\ \emph {et~al.}(2018)\citenamefont {Vojna}, \citenamefont {Yasuhara}, \citenamefont {Furuse}, \citenamefont {Slez\'{a}k}, \citenamefont {Hutchinson}, \citenamefont {Lucianetti}, \citenamefont {Mocek},\ and\ \citenamefont {Cech}}]{vojna2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Vojna}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Yasuhara}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Furuse}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Slez\'{a}k}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Hutchinson}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lucianetti}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mocek}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cech}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Faraday effect measurements of holmium oxide {(Ho\textsubscript{2}O\textsubscript{3})} ceramics-based magneto-optical materials},}\ }\href {\doibase 10.1017/hpl.2017.37} {\bibfield {journal} {\bibinfo {journal} {High Power Laser Sci. Eng.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {e2} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhao}(2001)}]{zao2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zhao}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Magneto-optic properties and sensing performance of garnet {YbBi:YIG}},}\ }\href {\doibase 10.1016/S0924-4247(00)00560-4} {\bibfield {journal} {\bibinfo {journal} {Sens. 
Actuators A: Phys.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {250--254} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cooper}\ \emph {et~al.}(1968)\citenamefont {Cooper}, \citenamefont {Crossley}, \citenamefont {Page},\ and\ \citenamefont {Pearson}}]{cooper1968} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont {Cooper}}, \bibinfo {author} {\bibfnamefont {W.~A.}\ \bibnamefont {Crossley}}, \bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont {Page}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~F.}\ \bibnamefont {Pearson}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Faraday rotation in {YIG} and {TbIG}},}\ }\href {\doibase 10.1063/1.2163521} {\bibfield {journal} {\bibinfo {journal} {J. Appl. Phys.}\ }\textbf {\bibinfo {volume} {39}},\ \bibinfo {pages} {565--567} (\bibinfo {year} {1968})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Crossley}\ \emph {et~al.}(1969)\citenamefont {Crossley}, \citenamefont {Cooper}, \citenamefont {Page},\ and\ \citenamefont {van Stapele}}]{crossley1969} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~A.}\ \bibnamefont {Crossley}}, \bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont {Cooper}}, \bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont {Page}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~P.}\ \bibnamefont {van Stapele}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Faraday rotation in rare-earth iron garnets},}\ }\href {\doibase 10.1103/PhysRev.181.896} {\bibfield {journal} {\bibinfo {journal} {Phys. 
Rev.}\ }\textbf {\bibinfo {volume} {181}},\ \bibinfo {pages} {896--904} (\bibinfo {year} {1969})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stevens}\ \emph {et~al.}(2016)\citenamefont {Stevens}, \citenamefont {Legg},\ and\ \citenamefont {Shardlow}}]{stevens2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Stevens}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Legg}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Shardlow}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Integrated disruptive components for 2~$\mu$m fibre lasers {(ISLA):} project overview and passive component development}},}\ }in\ \href {\doibase 10.1117/12.2207613} {\emph {\bibinfo {booktitle} {Components and Packaging for Laser Systems II}}},\ Vol.\ \bibinfo {volume} {9730},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {Alexei~L.}\ \bibnamefont {Glebov}}\ and\ \bibinfo {editor} {\bibfnamefont {Paul~O.}\ \bibnamefont {Leisher}}},\ \bibinfo {organization} {International Society for Optics and Photonics}\ (\bibinfo {publisher} {SPIE},\ \bibinfo {year} {2016})\ pp.\ \bibinfo {pages} {1--6}\BibitemShut {NoStop} \bibitem [{\citenamefont {Matsumoto}\ and\ \citenamefont {Suzuki}(1986)}]{matsumoto1986} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Matsumoto}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Suzuki}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Temperature-stable {F}araday rotator material and its use in high-performance optical isolators},}\ }\href {\doibase 10.1364/AO.25.001940} {\bibfield {journal} {\bibinfo {journal} {Appl. 
Opt.}\ }\textbf {\bibinfo {volume} {25}},\ \bibinfo {pages} {1940--1945} (\bibinfo {year} {1986})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vertruyen}\ \emph {et~al.}(2008)\citenamefont {Vertruyen}, \citenamefont {Cloots}, \citenamefont {Abell}, \citenamefont {Jackson}, \citenamefont {da~Silva}, \citenamefont {Popova},\ and\ \citenamefont {Keller}}]{vertruyen2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Vertruyen}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Cloots}}, \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Abell}}, \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Jackson}}, \bibinfo {author} {\bibfnamefont {R.~C.}\ \bibnamefont {da~Silva}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Popova}}, \ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Keller}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Curie temperature, exchange integrals, and magneto-optical properties in off-stoichiometric bismuth iron garnet epitaxial films},}\ }\href {\doibase 10.1103/PhysRevB.78.094429} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {094429} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vojna}\ \emph {et~al.}(2019)\citenamefont {Vojna}, \citenamefont {Slez\'{a}k}, \citenamefont {Lucianetti},\ and\ \citenamefont {Mocek}}]{vojna2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Vojna}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Slez\'{a}k}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lucianetti}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mocek}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Verdet constant of magneto-active materials developed for high-power {F}araday devices},}\ }\href {\doibase 10.3390/app9153160} {\bibfield {journal} {\bibinfo {journal} {Appl. 
Sci.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {3160} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kumari}\ and\ \citenamefont {Chakraborty}(2018)}]{kumari2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kumari}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Chakraborty}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Study of different magneto-optic materials for current sensing applications},}\ }\href {\doibase 10.5194/jsss-7-421-2018} {\bibfield {journal} {\bibinfo {journal} {J. Sens. Sens. Syst.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {421--431} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{kon()}]{konno1993} \BibitemOpen \href@noop {} {}\bibinfo {note} {Y. Konno and H. Kume, ``Optical isolator and method for assembling same,'' US patent US5204868A, granted 1993-04-20}\BibitemShut {NoStop} \end{thebibliography} \end{document}